[21:12] morning
[21:41] hi fwereade__
[21:41] fwereade__: seeing you do reviews...
[22:27] thumper, heyhey
[22:28] fwereade__: I cc'ed you on a container email
[22:28] I'm approaching a higher level of concern...
[22:28] however
[22:28] I did find this:
[22:28] http://lxc.sourceforge.net/index.php/about/kernel-namespaces/network/configuration/
[22:28] method 1 seems to closely match what we want
[22:28] but I don't entirely understand it
[22:29] I seem to be missing some understanding in it
[22:29] I seem to be lacking the key bit where the veth0 on host A is connected to container 1
[22:30] also there seems to be overlap in the vethN numbering on the host
[22:30] so I want to find someone who knows more
[22:33] thumper, yeah, I'm no wiser really -- was it serge who told us about namespaces in the first place?
[22:33] fwereade__: I think so
[22:33] it seems like it shouldn't be too hard
[22:33] just finding the right incantations
[22:33] thumper, that's my best guess there then -- yeah, indeed
[22:33] I'm putting on my review hat right now
[22:34] reviewing some of your pending work and ian's
[22:34] thumper, re getting ip addresses assigned... mm, yeah
[22:34] particularly around the container constraint
[22:34] we need a way to ask for public/private ip addresses
[22:34] not sure how to handle container addressability without at least private
[22:34] thumper, yeah, indeed
[22:35] thumper, ec2 first sounds eminently sensible to me
[22:35] otherwise it seems the only way is to do port forwarding and fake it, which I loathe as an idea
[22:35] I'd rather just say "sorry containers aren't supported on this provider because they suck"
[22:36] s/they suck/we can't get ip addresses dynamically/
[22:36] fwereade__: on the plus side, it appears the default lxc bridge will work fine for the local provider with no mods
[22:37] thumper, yeah, I don't think there's likely to be much mileage in faking it up
[22:38] * thumper nods
[22:38] I really don't want to put effort into a solution that doesn't take us towards a successful outcome
[22:38] thumper, more and more work for less and less gain
[22:38] I agree
[22:38] thumper, vg news re lxc though
[22:39] for the local provider?
[22:39] yeah
[22:39] yeah
[22:39] I'm going to go back to that after the reviews
[22:39] cool
[22:39] I had an interesting thought though...
[22:39] thumper, oh yes?
[22:39] we can have the containers auto restart when you reboot
[22:39] I suppose we need the same type of startup file that the machine agents have
[22:40] for the local machine as part of bootstrap?
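(Aside on the veth question above: with "method 1", lxc itself creates a veth pair per container, moves one end into the container's namespace as eth0, and attaches the host end to the bridge named in lxc.network.link; the confusing vethN overlap on the host goes away if the host-side name is set explicitly. Below is a minimal sketch of writing such a network section from Go -- the lxc.network.* keys are the standard lxc config options of that era, but the function, file path, and naming scheme are invented for illustration, not golxc's actual API.)

```go
// Sketch only: expressing "method 1" (per-container veth pairs bridged on
// the host) as an lxc config fragment written from Go. Not golxc.
package main

import (
	"fmt"
	"os"
)

// writeNetworkConfig appends a veth network section to a container's config.
// lxc creates the veth pair itself: one end becomes eth0 inside the
// container's namespace; the host end is attached to the bridge named in
// lxc.network.link.
func writeNetworkConfig(configPath, bridge, containerName string) error {
	f, err := os.OpenFile(configPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, `
lxc.network.type = veth
lxc.network.link = %s
lxc.network.flags = up
# naming the host-side device explicitly avoids the auto-numbered vethN
# devices that make the host-side picture confusing
lxc.network.veth.pair = veth-%s
`, bridge, containerName)
	return err
}

func main() {
	if err := writeNetworkConfig("/var/lib/lxc/demo/config", "lxcbr0", "demo"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

(The same section could just as well be written into the container config by hand; the point is only that lxc.network.link is where the bridge hookup happens.)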
[22:40] and clean it up with destroy-environment
[22:40] I'm not going to do it initially
[22:40] that's another one for serge I think -- some versions of lxc have autostart
[22:40] but will make a card for it
[22:40] fwereade__: reading the precise docs
[22:40] it seems that they do
[22:40] and it is easy
[22:40] sweet
[22:40] just symlink the config into /etc/lxc/autostart or something similar
[22:41] yeah, I just wasn't sure it was there or easy on precise
[22:41] but might be nice in the future for the local provider to stay alive through a reboot
[22:41] thumper, tbh I think it's an important feature anyway for whatever containers
[22:42] I did notice though that the golxc impl uses lxc-stop and not lxc-shutdown
[22:42] shutdown is nice, stop is flicking the power switch
[22:42] thumper, we can't really expect that our cloud instances will never reboot
[22:42] * thumper nods
[22:42] easy to do by default I think
[22:42] I have a card on the kanban board already for it
[22:43] thumper, stop/shutdown is interesting... not sure when we'd stop one whose state we cared about
[22:43] but regardless
[22:43] it seems that we always go stop/destroy
[22:43] so we don't need to be nice
[22:43] but it does have me wondering what lxc does on system shutdown
[22:44] I'm going to trust the lxc devs here
[22:44] this is interesting but I'm flying horribly early tomorrow and hoping to be back at work by lunchtime, so I have to sleep now
[22:44] and guess they do a shutdown by default and kill if it takes too long
[22:44] :-)
[22:44] ok
[22:44] ciao
[22:44] enjoy your day, regards to wallyworld and davecheney; see you soon :)
[22:44] see ya
[22:53] ffs
[22:53] c.Assert(provider, Equals, &local.Provider)
[22:53] ... obtained *local.environProvider = &local.environProvider{}
[22:53] ... expected *local.environProvider = &local.environProvider{}
[22:53] wallyworld: that is me trying to test the actual provider for you
[22:54] wallyworld: interestingly DeepEquals works fine
[22:54] thumper: because they are pointers
[22:54] so
[22:54] the actual mem addresses are different
[22:54] but contents are the same
[22:54] no they aren't
[22:54] as in, they are the same object
[22:54] * wallyworld was just guessing
[22:54] in the past, i've seen that
[22:55] what I was referring to was the obtained/expected results above
[22:55] like, oh, you gave me X when I expected Y
[22:55] but here X and Y are the same
[22:55] deep equals says contents are the same
[22:55] but are you sure mem addresses are the same
[22:56] wallyworld: deepequals also checks for the same type
[22:56] wallyworld: well, one is an interface, the other is a struct
[22:56] sure, type is the same
[22:56] so I'm thinking deep equals should be ok...
[22:56] yes
[22:56] or perhaps I should create an interface for the struct
[22:56] * thumper tests that
[22:56] i'd stick with deep equals perhaps
[22:57] * wallyworld goes away for 10 minutes to buy tickets to the First Test in November
[22:57] nope
[22:57] * thumper nods
[22:57] deepequals it is
[23:16] thumper: i have a failing test in lxcProvisionerSuite. i've tracked it down to expectStopped() being called and checking that the container is still provisioned. but advancing a machine's lifecycle to dead causes it to be removed and the instance metadata is removed also. i'm not sure why the test was written the way it was
[23:17] wallyworld: hmm... what changes have you made locally?
[23:17] what object is expectStopped on?
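(Aside on the Equals/DeepEquals exchange above: gocheck's Equals checker compares with Go's ==, which for pointers is identity, so two separately allocated providers with identical contents fail Equals even though the obtained/expected lines print the same; DeepEquals follows the pointers and compares the pointed-to values. A minimal sketch follows, using a made-up provider type -- Equals, DeepEquals, Suite and TestingT are the real gocheck names, imported here via its gopkg.in path.)

```go
// Sketch: why Equals can fail while DeepEquals passes for two pointers to
// structurally identical structs. The provider type is invented for
// illustration; the checkers are gocheck's own.
package example_test

import (
	"testing"

	gc "gopkg.in/check.v1"
)

type environProvider struct{ name string }

type providerSuite struct{}

var _ = gc.Suite(&providerSuite{})

func Test(t *testing.T) { gc.TestingT(t) }

func (*providerSuite) TestProviderIdentity(c *gc.C) {
	a := &environProvider{name: "local"}
	b := &environProvider{name: "local"}

	// gc.Equals uses ==; for pointers that is identity, so a and b (two
	// distinct allocations with identical contents) are not Equals, even
	// though the failure message would print them identically.
	// c.Assert(a, gc.Equals, b) // would fail

	// gc.DeepEquals uses reflect.DeepEqual, which follows the pointers and
	// compares contents, so this passes.
	c.Assert(a, gc.DeepEquals, b)

	// Comparing a pointer with itself does satisfy Equals.
	c.Assert(a, gc.Equals, a)
}
```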
[23:18] this is the branch which introduces the instance metadata doc
[23:18] expectStopped is on lxcProvisionerSuite
[23:18] looking at a container
[23:18] i'm not sure how the test would have passed the first time
[23:19] i need to dig into it a bit
[23:19] which file?
[23:19] lxc-broker_test.go
[23:19] making a machine dead and expecting it to still have a valid instance id doesn't make sense to me?
[23:20] ah...
[23:20] here I was calling instance id from the object as it was just returning a cached local value
[23:20] so it was ok
[23:21] now you have turned this into a function doing work right?
[23:21] yes, but i check the local cached value first
[23:21] ok, so just change the expectStopped to take a machineId instead of state.Machine
[23:21] and get the instance id out before stopping
[23:22] make sense?
[23:22] but why would we want to have an instance id != "" after making a machine dead?
[23:22] what we are testing is that the right container was stopped
[23:22] the container uses the container name as the instance id
[23:22] we are only testing we stopped the right one
[23:23] not that there is something in state
[23:23] by checking instance id?
[23:23] we don't care about state here
[23:23] state happened to conveniently cache the id we care about
[23:24] how about...
[23:24] hmmm. seems like a fragile way to do it
[23:24] have expectStarted return the event.id (the container name) as a string
[23:24] then pass that into expectStopped
[23:24] make more sense?
[23:25] i'll look into it. a mechanism that doesn't rely on instance id being valid after life->dead is what we want
[23:25] wallyworld: right, what I just said fits that
[23:25] sure, just repeating it back
[23:34] * wallyworld is pleasantly surprised the test suite seems to run so much quicker as of late
[23:52] hmm...
[23:52] I suppose I should start writing my juju talk at some stage
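(Aside sketching the refactor agreed above: expectStarted hands back the container name, which the lxc broker uses as the instance id, and expectStopped takes that string, so nothing has to read the instance id out of state after the machine has gone to dead. The event struct and helper signatures below are guesses at the shape of the real test code in lxc-broker_test.go, not the actual juju source.)

```go
// Sketch of the expectStarted/expectStopped refactor discussed above.
package provisioner_test

import gc "gopkg.in/check.v1"

// event is a stand-in for whatever the fake container manager records.
type event struct {
	kind string // "started" or "stopped"
	id   string // container name, used as the instance id
}

type lxcProvisionerSuite struct {
	events chan event
}

// expectStarted waits for a start event and hands back the container name.
func (s *lxcProvisionerSuite) expectStarted(c *gc.C) string {
	ev := <-s.events
	c.Assert(ev.kind, gc.Equals, "started")
	return ev.id
}

// expectStopped checks that the stopped container matches the name captured
// at start time, instead of asking state for the dead machine's instance id.
func (s *lxcProvisionerSuite) expectStopped(c *gc.C, instId string) {
	ev := <-s.events
	c.Assert(ev.kind, gc.Equals, "stopped")
	c.Assert(ev.id, gc.Equals, instId)
}
```

(A test would then capture instId := s.expectStarted(c) up front and call s.expectStopped(c, instId) after advancing the machine to dead.)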