[21:12] <thumper> morning
[21:41] <thumper> hi fwereade__
[21:41] <thumper> fwereade__: seeing you do reviews...
[22:27] <fwereade__> thumper, heyhey
[22:28] <thumper> fwereade__: I cc'ed you on a container email
[22:28] <thumper> I'm approaching a higher level of concern...
[22:28] <thumper> however
[22:28] <thumper> I did find this:
[22:28] <thumper> http://lxc.sourceforge.net/index.php/about/kernel-namespaces/network/configuration/
[22:28] <thumper> method 1 seems to closely match what we want
[22:28] <thumper> but I don't entirely understand it
[22:29] <thumper> I seem to be missing some understanding of it
[22:29] <thumper> I'm lacking the key bit where the veth0 on host A is connected to container 1
[22:30] <thumper> also there seems to be overlap in the vethN numbering in the host
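For reference, the config stanza for method 1 would look roughly like this (a sketch; the bridge and interface names are assumed, not taken from the page). The host-side vethN is just the peer of the container's eth0, and lxc picks a unique host-side name per container unless lxc.network.veth.pair pins one, which is probably why the numbering looks overlapping:

```
# /var/lib/lxc/<container>/config -- network section only (sketch; names assumed)
lxc.network.type = veth          # veth pair: one end appears inside the container...
lxc.network.link = br0           # ...the host-side end is enslaved to this bridge
lxc.network.name = eth0          # interface name as seen inside the container
lxc.network.veth.pair = veth-c1  # optional: pin the host-side name
lxc.network.flags = up
```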
[22:30] <thumper> so I want to find someone who knows more
[22:33] <fwereade__> thumper, yeah, I'm no wiser really -- was it serge who told us about namespaces in the first place?
[22:33] <thumper> fwereade__: I think so
[22:33] <thumper> it seems like it shouldn't be too hard
[22:33] <thumper> just finding the right incantations
[22:33] <fwereade__> thumper, that's my best guess there then -- yeah, indeed
[22:33] <thumper> I'm putting on my review hat right now
[22:34] <thumper> reviewing some of your pending work and ians
[22:34] <fwereade__> thumper, re getting ip addresses assigned... mm, yeah
[22:34] <thumper> particularly around the container constraint
[22:34] <thumper> we need a way to ask for public/private ip addresses
[22:34] <thumper> not sure how to handle container addressability without at least private
[22:34] <fwereade__> thumper, yeah, indeed
[22:35] <fwereade__> thumper, ec2 first sounds eminently sensible to me
[22:35] <thumper> otherwise it seems the only way is to do port forwarding and fake it, which I loathe as an idea
[22:35] <thumper> I'd rather just say "sorry, containers aren't supported on this provider because we can't get ip addresses dynamically"
[22:36] <thumper> fwereade__: on the plus side, it appears the default lxc bridge will work fine for the local provider with no mods
[22:37] <fwereade__> thumper, yeah, I don't think there's likely to be much mileage in faking it up
[22:38]  * thumper nods
[22:38] <thumper> I really don't want to put effort into a solution that doesn't take us towards a successful outcome
[22:38] <fwereade__> thumper, more and more work for less and less gain
[22:38] <thumper> I agree
[22:38] <fwereade__> thumper, vg news re lxc though
[22:39] <thumper> for the local provider?
[22:39] <fwereade__> yeah
[22:39] <thumper> yeah
[22:39] <thumper> I'm going to go back to that after the reviews
[22:39] <fwereade__> cool
[22:39] <thumper> I had an interesting thought though...
[22:39] <fwereade__> thumper, oh yes?
[22:39] <thumper> we can have the containers auto restart when you reboot
[22:39] <thumper> I suppose we need the same type of startup file that the machine agents have
[22:40] <thumper> for the local machine as part of bootstrap?
[22:40] <thumper> and clean it up with destroy-environment
[22:40] <thumper> I'm not going to do it initially
[22:40] <fwereade__> that's another one for serge I think -- some versions of lxc have autostart
[22:40] <thumper> but will make a card for it
[22:40] <thumper> fwereade__: reading the precise docs
[22:40] <thumper> it seems that they do
[22:40] <thumper> and it is easy
[22:40] <fwereade__> sweet
[22:40] <thumper> just symlink the config into /etc/lxc/autostart or something similar
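If memory serves, on precise the directory the Ubuntu lxc package actually scans is /etc/lxc/auto (an assumption worth checking against the docs), so the incantation would be something like this, with juju-machine-1 as a hypothetical container name:

```
# Assumed layout: the lxc init script autostarts any container whose
# config is symlinked as /etc/lxc/auto/<name>.conf
ln -s /var/lib/lxc/juju-machine-1/config /etc/lxc/auto/juju-machine-1.conf
```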
[22:41] <fwereade__> yeah, I just wasn't sure it was there or easy on precise
[22:41] <thumper> but might be nice in the future for local provider to stay alive with a reboot
[22:41] <fwereade__> thumper, tbh I think it's an important feature anyway for whatever containers
[22:42] <thumper> I did notice though that the golxc impl uses lxc-stop and not lxc-shutdown
[22:42] <thumper> shutdown is nice, stop is flicking the power switch
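Side by side (container name hypothetical; only the -n flag is assumed here, check the man pages for timeout options):

```
lxc-shutdown -n mycontainer   # graceful: signals the container's init to halt
lxc-stop -n mycontainer       # abrupt: like flicking the power switch
```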
[22:42] <fwereade__> thumper, we can't really expect that our cloud instances will never reboot
[22:42]  * thumper nods
[22:42] <thumper> easy to do by default I think
[22:42] <thumper> I have a card on the kanban board already for it
[22:43] <fwereade__> thumper, stop/shutdown is interesting... not sure when we'd stop one whose state we cared about
[22:43] <fwereade__> but regardless
[22:43] <thumper> it seems that we always go stop/destroy
[22:43] <thumper> so we don't need to be nice
[22:43] <thumper> but it does have me wondering what lxc does on system shutdown
[22:44] <thumper> I'm going to trust the lxc devs here
[22:44] <fwereade__> this is interesting but I'm flying horribly early tomorrow and hoping to be back at work by lunchtime, so I have to sleep now
[22:44] <thumper> and guess they do a shutdown by default and kill if it takes too long
[22:44] <thumper> :-)
[22:44] <thumper> ok
[22:44] <thumper> ciao
[22:44] <fwereade__> enjoy your day, regards to wallyworld and davecheney; see you soon :)
[22:44] <wallyworld> see ya
[22:53] <thumper> ffs
[22:53] <thumper>     c.Assert(provider, Equals, &local.Provider)
[22:53] <thumper> ... obtained *local.environProvider = &local.environProvider{}
[22:53] <thumper> ... expected *local.environProvider = &local.environProvider{}
[22:53] <thumper> wallyworld: that is me trying to test the actual provider for you
[22:54] <thumper> wallyworld: interestingly DeepEquals works fine
[22:54] <wallyworld> thumper: because they are pointers
[22:54] <thumper> so
[22:54] <wallyworld> the actual mem addresses are different
[22:54] <wallyworld> but contents are the same
[22:54] <thumper> no they aren't
[22:54] <thumper> as in, they are the same object
[22:54]  * wallyworld was just guessing
[22:54] <wallyworld> in the past, i've seen that
[22:55] <thumper> what I was referring to was the obtained/expected results above
[22:55] <thumper> like, oh, you gave me X when I expected Y
[22:55] <thumper> but here X and Y are the same
[22:55] <wallyworld> deep equals says contents are the same
[22:55] <wallyworld> but are you sure mem addresses are the same
[22:56] <thumper> wallyworld: deepequals also checks for the same type
[22:56] <thumper> wallyworld: well, one is an interface, the other is a struct
[22:56] <wallyworld> sure, type is the same
[22:56] <thumper> so I'm thinking deep equals should be ok...
[22:56] <wallyworld> yes
[22:56] <thumper> or perhaps I should create an interface for the struct
[22:56]  * thumper tests that
[22:56] <wallyworld> i'd stick with deep equals perhaps
[22:57]  * wallyworld goes away for 10 minutes to buy tickets to the First Test in November
[22:57] <thumper> nope
[22:57]  * thumper nods
[22:57] <thumper> deepequals it is
[23:16] <wallyworld> thumper:  i have a failing test in lxcProvisionerSuite. i've tracked it down to expectStopped() being called and checking that the container is still provisioned. but advancing a machine's lifecycle to dead causes it to be removed and the instance metadata is removed also. i'm not sure why the test was written the way it was
[23:17] <thumper> wallyworld: hmm... what changes have you made locally?
[23:17] <thumper> what object is expectStopped on?
[23:18] <wallyworld> this is the branch which introduces the instance metadata doc
[23:18] <wallyworld> expect stopped is on lxcProvisionerSuite
[23:18] <wallyworld> looking at a container
[23:18] <wallyworld> i'm not sure how the test would have passed the first time
[23:19] <wallyworld> i need to dig into it a bit
[23:19] <thumper> which file?
[23:19] <wallyworld> lxc-broker_test.go
[23:19] <wallyworld> making a machine dead and expecting it to still have a valid instance id doesn't make sense to me?
[23:20] <thumper> ah...
[23:20] <thumper> here I was calling instance id from the object as it was just returning a cached local value
[23:20] <thumper> so it was ok
[23:21] <thumper> now you have turned this into a function doing work right?
[23:21] <wallyworld> yes, but i check the local cached value first
[23:21] <thumper> ok, so just change the expectStopped to take a machineId instead of state.Machine
[23:21] <thumper> and get the instance id out before stopping
[23:22] <thumper> make sense?
[23:22] <wallyworld> but why would we want to have a instance id != "" after making a machine dead?
[23:22] <thumper> what we are testing is that the right container was stopped
[23:22] <thumper> the container uses the container name as the instance id
[23:22] <thumper> we are only testing we stopped the right one
[23:23] <thumper> not that there is something in state
[23:23] <wallyworld> by checking instance id?
[23:23] <thumper> we don't care about state here
[23:23] <thumper> state happened to conveniently cache the id we care about
[23:24] <thumper> how about...
[23:24] <wallyworld> hmmm. seems like a fragile way to do it
[23:24] <thumper> have expectStarted return the event.id (the container name) as a string
[23:24] <thumper> then pass that into the expect stopped
[23:24] <thumper> make more sense?
[23:25] <wallyworld> i'll look into it. a mechanism that doesn't rely on instance id being valid after life->dead is what we want
[23:25] <thumper> wallyworld: right, what I just said fits that
[23:25] <wallyworld> sure, just repeating it back
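Thumper's suggestion could look something like this stripped-down sketch (all names here are hypothetical, not the real lxcProvisionerSuite API): expectStarted returns the container name from the start event, and expectStopped compares against that name rather than reading instance id from state after the machine is dead.

```go
package main

import "fmt"

// Hypothetical stand-ins for the broker test helpers discussed above.
type event struct{ id string }

type suite struct{ events chan event }

// expectStarted consumes a start event and returns the container name.
func (s *suite) expectStarted() string {
	e := <-s.events
	return e.id
}

// expectStopped checks the right container was stopped, without
// consulting state (which forgets the instance id once the machine
// goes dead).
func (s *suite) expectStopped(name string) error {
	e := <-s.events
	if e.id != name {
		return fmt.Errorf("stopped %q, expected %q", e.id, name)
	}
	return nil
}

func main() {
	s := &suite{events: make(chan event, 2)}
	s.events <- event{id: "juju-machine-1"} // start event
	s.events <- event{id: "juju-machine-1"} // stop event
	name := s.expectStarted()
	if err := s.expectStopped(name); err != nil {
		panic(err)
	}
	fmt.Println("stopped the right container:", name)
}
```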
[23:34]  * wallyworld is pleasantly surprised the test suite seems to run so much quicker as of late
[23:52] <thumper> hmm...
[23:52] <thumper> I suppose I should start writing my juju talk at some stage