[00:15] I think I've cracked it [00:18] ah ffa [00:18] ffs [00:18] even [00:19] I've got bootstrap fixed... [00:19] but status fails [00:19] * thumper stabs ec2 [00:20] * thumper forcibly terminates instance [02:40] * thumper waves at mramm in passing [03:21] thumper: have you run the live ec2 tests on raring? ie in the environs/ec2 directory "go test -gocheck.v --amazon" [05:48] night all [06:03] jam: meeting? [06:04] wallyworld: yep [06:04] are you on mumble? [06:04] aye [06:46] mornin' all [07:59] hi danilos, I see you on, but you're still muted on mumble [08:00] jam: hi, joining (it's only :59 for me :) === ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: jam | Bugs: 2 Critical, 64 High - https://bugs.launchpad.net/juju-core/ [08:05] rogpeppe: did you land the patch to work around the HTTP breakage with the ppa versions of juju-core? [08:05] jam: the one that retries the PUT requests? [08:05] rogpeppe: something about keepalive that dimitern and danilos worked out at the end of last week. [08:05] the "closed connection" stuff. [08:06] jam: that's a problem with the debian go distribution [08:06] jam: the solution is to switch to using go1.0.3 [08:06] rogpeppe: well, juju-core being broken is our problem. [08:06] jam: i know, but there's no good fix AFAIK [08:06] jam: danilos' suggested fix will break other things [08:07] rogpeppe: I don't think we can get 1.0.3 into the archive at this point. [08:07] we're quite a bit past that point. [08:08] jam: the real fix is not to use a buggy version of go (that bug was never part of a released version BTW) [08:08] so some sort of fix is necessary [08:08] or we won't have juju-core that actually works on Raring. [08:08] jam: actually that's not the case - it does currently work on Raring i believe [08:09] rogpeppe: I'm pretty sure the ppa version is borked. I've had 3 people say so. 
[08:09] jam: because the problem was seen because we had many versions in the public bucket [08:09] If the ppa version becomes the archive version. [08:09] jam: which put the s3 LIST response over 8K [08:09] jam: the objects have now been deleted, so the issue is papered over for now [08:10] jam: it seemed that it *might* be possible to get 1.0.3 into the archive, even at this stage [08:11] jam: fwereade might know whether it's actually going to happen or not [08:12] rogpeppe, jam: I kicked it across to jamespage/Daviey [08:12] rogpeppe, fwereade: just getting to that [08:12] rogpeppe, jam: IMO it's still profoundly irresponsible from our POV to switch the language version at this stage [08:13] fwereade: I would agree with you [08:13] rogpeppe, jam, jamespage: but I concede that from the POV of raring it's a good thing and we have to take our lumps [08:13] fwereade, I actually agree; I won't support pushing 1.0.3 into raring this late [08:13] fwereade: i still think... well, you know what i think - 1.0.3 is a bug-fix version, and we're being bitten by one of the bugs [08:13] fwereade: I also wanted to check about Ian's constraint patches. Can they get reviewed and landed before final cut? [08:13] jamespage, yay! [08:13] fwereade: so how do we fix our code then? 
[08:13] rogpeppe, we don't, because we left it too late [08:14] rogpeppe, we paper over it because that's the only thing we can do that doesn't risk far worse instability [08:14] fwereade, if there is a critical bug then I'm happy to pull in a single fix but I don't think upgrading even to a point release is a great idea right now [08:14] fwereade: go1.0.3 is not unstable; and neither is our code when using it [08:15] rogpeppe, it's nothing to do with 1.0.3 itself [08:15] jamespage: it's a patch release really [08:15] rogpeppe, and our code has not had anywhere near the use and testing on 1.0.3 that it has on 1.0.2 [08:15] * jamespage shrugs [08:15] lots of patch releases won't make raring now [08:16] jam, honestly I am -1 on wallyworld's constraints changes now [08:17] fwereade: wfm. It would be nice to have, but it would be a source of breakage. [08:17] jam, I don't know "wfm" but I think we're in agreement [08:17] jam, we have cowboyed it up far enough ;) [08:18] jam: i don't believe it would be a source of any breakage at all. the two releases are fully compatible - 1.0.3 just fixes some 1.0.2 bugs. [08:18] rogpeppe: I'm talking about the constraints stuff atm. [08:19] jam: oh, sorry [08:19] fwereade: we weren't seriously thinking of putting that constraints stuff in *now*, were we? [08:19] rogpeppe, jam suggested it, I said no [08:19] fwereade: phew. [08:19] rogpeppe, I don't think he was seriously expecting a yes ;) [08:20] rogpeppe, but it's good to be clear ;) [08:22] jam, rogpeppe: there are lots of things that I want to get onto ASAP *after* today once we have the release in place [08:22] fwereade: in a "we'd like to have feature parity with pyjuju" it is definitely in the really-nice-to-have. But the risk is high enough it didn't necessarily balance it out. 
[08:22] jam, rogpeppe: but as far as I am concerned we fixed or worked around the *really* critical stuff on friday [08:22] fwereade: perhaps we should move to using go1.1beta as standard from then [08:23] rogpeppe, I'm fine switching to 1.0.3 after today but not developing officially on a beta language version [08:23] fwereade, just for the record please can someone explain how tooling is prepared, i.e. why 1.0.3 is important in raring but 12.04 only has a much older version [08:23] jamespage, nobody cared until it was an opportunity to snark about me "not wanting to fix bugs" [08:24] i did care, but i presumed that we'd have 1.0.3 in raring [08:24] jamespage: i think we just use the unmaintained debian upstream go package [08:24] but why does 12.04 not matter? [08:25] fwereade, jam: do we need to identify exact patch that needs to go on top of already present debian/patches/15-net-http-connection-close.patch and see if it's minor enough for jamespage to include in 1.0.2 package? [08:26] jamespage: it does [08:26] for reference: [08:26] golang | 2:1-5 | precise/universe | source, all [08:26] golang | 2:1.0.2-2 | quantal/universe | source, all [08:26] golang | 2:1.0.2-2 | raring/universe | source, all [08:27] danilos, if we do anything it will be that [08:28] jamespage, I don't think anyone disagrees that it would be good to have, and I'd be +1 on updating it when we are no longer in release frenzy mode [08:28] jamespage, (I think that applies across the board from P onwards actually) [08:28] jamespage: i think go 1 was only just out when precise was released. 
[08:29] looking at https://code.google.com/p/go/issues/detail?id=4914 [08:30] it appears a revised patch was produced - but it's never been picked up in the packaging [08:35] jamespage, right, I was wondering if people want me to test that patch on raring and see if it indeed solves our problems so we can ask for it to be included [08:35] "that patch" == "patch from https://codereview.appspot.com/6211069" [08:39] danilos, yes that would be a good idea [08:39] danilos: that patch isn't sufficient - it doesn't fix https://code.google.com/p/go/issues/detail?id=1967 [08:40] danilos: which is needed to make goamz/s3 work correctly, i believe [08:41] rogpeppe, that patch is already included in the package I believe (one I referenced above, 15-net-http-connection-close.patch) [08:41] rogpeppe, my understanding is the fix for that issue (which is already cherry picked into the packaging) is the one that caused the refression [08:41] regression rather [08:42] danilos, rogpeppe, fwereade: I still really need a LP bug raising against ubuntu explaining what the current impact is [08:43] jamespage: the current impact of that go distribution on juju? or on anything? [08:43] rogpeppe, well juju would be a good start [08:43] right now I just have "there is a bug" [08:44] jamespage, rogpeppe: that was also my understanding -- if it's practical to switch to an unfucked *1.0.2*, and that change is genuinely small and precisely targeted to the point where we can analyse its impact by inspection, then great [08:45] rogpeppe, would this be a sane characterisation of the core of the issue: "ubuntu doesn't have the real 1.0.2 because the upstream contains a bad patch"? [08:45] fwereade: yes [08:46] fwereade: but 1.0.2 does have the problem that it ignores the Close field. [08:46] fwereade, exactly - I can also unbreak it for quantal as well if it's targetted enough [08:46] rogpeppe, ok -- but fixing that in isolation will also cause us problems with the s3 package, right? or not? 
[08:47] rogpeppe, (due to the ignored close field) [08:47] fwereade: i don't know - applying a random patch and ending up with a version that is again not *quite* one of the official released versions seems to me to be a bad idea [08:48] rogpeppe, sorry imprecise: s/that in isolation/the diff against real 1.0.2 in isolation/ [08:48] rogpeppe, that's what debian packages usually look like [08:49] rogpeppe, and fwiw, I'd test with only this patch (http://pastebin.ubuntu.com/5591992/) added on top of the existing package, since we have been using debian-patched 1.0.2 for a while now already [08:51] danilos: that's from here, right? https://code.google.com/p/go/source/detail?r=c3cbd6798cc7 [08:52] rogpeppe, yeah, it's verbatim "hg diff -c c3cbd6798cc7" [08:54] rogpeppe, note that https://code.google.com/p/go/source/detail?r=820ffde8c396# is already included in the deb package [08:54] danilos: yeah [08:55] danilos: looking at the list of issues fixed between 1.0.2 and 1.0.3, i see none that might impact us, other than one which might actually fix potential bugs in our code. [08:57] rogpeppe, it's still a risk imo (even fixes break stuff :)), but I am not the release manager for either juju-core or ubuntu go package, so I won't comment further [08:58] danilos: yes, fixes can break stuff, but we have done quite a bit of testing against 1.0.3 actually. [08:59] danilos: i suppose if go1.0.3 is considered too risky, then the above patch would be better than nothing. [09:01] rogpeppe, yeah, it will also have the benefit that it's easier to include in raring so we don't have to keep a PPA with go 1.0.3 for people to use to compile juju-core [09:02] danilos: but what about this one? https://code.google.com/p/go/source/detail?r=4c333000f50b [09:02] danilos, won't we need one of those for precise at least regardless? 
[09:02] danilos: hmm, actually, that's server only [09:03] danilos: no, i lie, it's not [09:03] rogpeppe, there's transport stuff in there as well [09:03] danilos: yeah [09:04] rogpeppe, I'll test only with the first patch, if that doesn't help, then it'd be better to compile for 1.0.3 [09:04] danilos: this feels a bit like grasping at straws [09:05] fwereade, I don't know, since I have no idea how version numbers compare and whether we can SRU this in precise (https://launchpad.net/ubuntu/precise/+source/golang/2:1-5) [09:05] danilos: we have a well known version that fixes the issue and is the most well tested version of go overall [09:05] rogpeppe, that's how packaging works, yes :) [09:06] rogpeppe, I remember niemeyer was saying about problems with 1.0.3 and was-it-juju-gui? [09:06] "saying something" [09:06] danilos: there is one known problem with trying to hack http redirects in 1.0.3, yes [09:07] rogpeppe, btw, for this particular problem, do you know if there's any reason we are using connection:close (since keep-alive worked much faster for me with the Asian Amazon zone) [09:07] danilos: the problem is that in general you're not allowed to keep on reusing s3 connections. [09:07] danilos: it can break after 3 or 4 tries [09:07] rogpeppe, ah, I see, more of an amazon policy rather than technical? [09:08] danilos: i think so, yes [09:08] rogpeppe, right, understood [09:08] rogpeppe, anyway, if people don't see the value in me testing this patch and we don't want to ask for it to be included with raring, I won't waste my time doing it [09:09] danilos: if that's the way things are usually done, and we won't get a fix any other way, then i think it's worth doing [09:09] danilos: i would much prefer to be using a well known and tested standard version though. 
[09:09] danilos, I don't believe we can adequately test it in time -- I don't see how you *can* without hacking other parts of juju to re-expose the issue [09:10] danilos, and while that's what we'll have to do tomorrow or the day after or whenever, I don't think it's a viable strategy in which the hours we have before we can land anything continue to tick down through the single digits [09:11] fwereade, it was failing consistently for me with ap-southeast-1 region, and so was http://pastebin.ubuntu.com/5721759/ [09:11] danilos, is it still doing so today? [09:12] danilos, sorry, is that the right link? says it doesn't exist [09:12] fwereade, sure, but my point is that we can get this fixed in *ubuntu* in the next couple of days so our _users_ would get the benefit of being able to build a working juju-core package out of nothing but standard ubuntu packages soon [09:12] fwereade, it was, but it seems ubuntu pastebin removes stuff (I had it open since Friday): new one on http://pastebin.ubuntu.com/5592035/ [09:13] fwereade, I'll try that to confirm that it was not the 8k limit [09:14] fwereade, no, it doesn't fail anymore [09:14] fwereade, so I suppose it is a moot point [09:14] danilos, it's still an issue but it's not one biting us *today* [09:15] fwereade, right, and I guess we can keep it under control by not allowing our bucket to grow too long? and then we can take the time to resolve the issue properly [09:15] danilos, I am not proud of the workaround but I think it's the only one with predictably bounded second-order effects [09:15] danilos, exactly [09:16] danilos, releasing i386 versions tightens the window, but we can relax it a little by deleting the more recent 1.9.*s [09:16] danilos, I am more confident that we can keep it out of users' sight until we fix it for real than I am that we can fix it for real without unintended consequences given the timescale and associated pressure [09:17] fwereade, sure, agreed [09:23] so... 
rogpeppe, TheMue, mgz, jam, danilos, wallyworld: aside from the ap-southeast security group weirdness, is anyone aware of any trivially-fixable outstanding issues that would directly impact users if we were to release right now? [09:23] (the security groups are IMO not trivially fixable) [09:24] fwereade: the security groups could be fixed quite easily, but not in the way proposed [09:25] fwereade: i could prepare a patch quickly, but i fear it would need more testing time than we've got to make sure it works properly against all regions [09:25] rogpeppe, yeah, that's the heart of it [09:26] rogpeppe, I don't want to fix one region at the cost of another [09:26] * rogpeppe finds it very strange that amazon apparently implemented the same software many times independently [09:26] rogpeppe, I'd rather release with "ap-southeast-1 and ap-southeast-2 cannot currently be used" [09:26] fwereade: well, the fix would use mechanisms that are already used in other regions. [09:27] * fwereade hasn't looked under the covers but is terrified we're doing something crazy like always using "/current" api versions [09:27] fwereade: no, istr changing it to use a fixed version [09:27] rogpeppe, <3 [09:28] rogpeppe, it's a shame it only hit us just now but I think we're over the line [09:29] fwereade: hmm, the fixed version was just for the metadata [09:29] rogpeppe, aw feck [09:29] fwereade: ah, but all is not lost [09:30] fwereade: goamz/ec2 hardcodes 2011-12-15 [09:30] d [09:30] mgz: /wave [09:30] rogpeppe, ok, great :) [09:30] right [09:31] ok [09:31] I assert that we should bump the version, and release what we have right now, right now [09:32] fwereade: +1 [09:33] mgz, ping [09:34] * fwereade slopes off for a quick ciggie to see whether anything else plops into his mind, brb [09:34] hey jam [09:36] fwereade: we should probably just do a release, though I'm still not sure what exactly we should do with it [09:46] mgz, I think we should release 1.10.0 and put that into raring, then 
switch to 1.11.0 and GTW on the various things we haven't managed -- I feel like your statement alludes to things I have not thought of? [09:48] mgz, I suspect there are maybe things to do with the "series" of juju-core that should be done, but this is completely outside my ken at the moment [09:48] so, we can't bump the go version, though I think the issue that helped with we landed another fix for? and we still need to get stuff past the archive admins. [09:49] mgz, we papered over the go version one by trashing old releases in the juju-dist bucket [09:49] mgz, I don't think there's anything else we can fix today that will affect users [09:49] ah, that was the swift bucket listing one [09:50] mgz, s3 but yeah [09:50] ho ho ho, that's the reverse getting the names backwards from normal [09:50] openstack is winning [09:50] haha [09:51] mgz, can I leave the release in your hands then, and inform Daviey and jamespage that you will have 1.10.0 for them imminently? [09:54] yeah, though I also need to argue with the release guys for the other packaging as well. [09:54] so, nothing landed after the tagging on 1.10.0 that we want in? [09:55] the various bugs were all otherwise worked around? [09:56] mgz, we reverted that actually [09:57] mgz, but we didn't release anything from the briefly-1.10.0 source [09:57] mgz, so I think we're good to just bump and go [09:57] mgz, there were a couple on friday that I can't even remember today :/ [09:59] mgz, but neither I nor anyone else can AFAICT think of any way to make the release *definitely* better in the next couple of hours [10:00] mgz, so I see no further reason to delay [10:02] mgz, sorry, I missed that: the "other packaging"? [10:05] the python juju changes to make go juju installable under the same names [10:09] mgz, ah, hell, I had understood that that was already in hand [10:09] mgz, is it not? [10:10] mgz, if it isn't it kinda feels like we're irreversibly boned regardless... 
[10:12] the upload was rejected, and I need an archive admin's attention to get any new packages in [10:13] mgz, well, crap [10:14] mgz, that seems to justify a certain amount of frothing and screaming -- do you know to whom we should be directing it? [10:17] mgz, or is it in fact just a straight-up can't-be-done sort of issue? [10:17] the faceless bureaucracy of ubuntu, but don't froth, I'm on it. [10:20] mgz, ok, cool [10:21] mgz, so, the things we need are (1) bump to 1.10.0 (2) release 1.10.0 (3) bump again to 1.11.0 (4) get juju 0.7 into raring (5) get juju 1.10 into raring [10:21] yup. [10:21] I'm currently on 4, and will move onto 5 after [10:22] mgz, ok, I will propose a bump to 1.10.0 [10:24] mgz, Daviey is looking at the rejected package [10:26] mgz: Should i be looking at the rejected one.. or wait out? [10:29] I'll forward the email I got from stephane [10:29] I have a trivial diff that I think is all he wants [10:30] jamespage: what should I use to test install of a deb I've just built in canonistack? schroot? [10:30] schroot is good [10:30] (generally what I do) [10:32] can you give me a quick example of your procedure? I wasn't watching closely enough last time you did it [10:33] anyone want to give me an LGTM on https://codereview.appspot.com/8759045 for form's sake? :) [10:34] done. [10:37] fwereade: done [10:42] mgz, ok, that is submitted [10:42] rogpeppe, can you spot anything odd we're doing here? https://pastebin.canonical.com/89643/ [10:43] * rogpeppe gets his 2-factor key [10:44] mattyw, is there another package-level Test* function that doesn't use MgoTestPackage by any chance? 
[10:44] +1 [10:44] that was my question too [10:44] fwereade, there definitely is yes [10:45] mattyw, that'd be the problem then, drop that and use this, I think [10:45] mattyw: you should only have one top level Test function [10:45] fwereade, rogpeppe I'll move it, thanks guys [10:45] mattyw: np [10:45] mattyw, cheers [10:45] ok [10:46] fwereade: just got back from soccer and saw your question - i'd love to get the constraints stuff for openstack in the release, but maybe it's too late? [10:46] wallyworld, I'd love to too but I thought it was too late on friday really [10:47] ok, no problem [10:47] wallyworld, we can get that into the first server-side update and get a lot of value [10:47] i think we are doing a 10.1 release soonish anyway? [10:47] wallyworld, I think [10:47] wallyworld, that is the plan [10:47] wallyworld, as far as I am concerned we are now frozen [10:48] fwereade: also, i really see a lot of value in the image-id constraint. can we discuss sometime? [10:49] wallyworld, definitely -- it's a use case we should have in mind, but I really don't think it's a constraint [10:50] * TheMue is at lunch, bbiab [10:50] fwereade: did you want to pop in on Blue's standup soon? [10:51] wallyworld, honestly I don't think I can do it usefully -- I ran out of energy on thursday, friday was past my limit, and my first weekend off in a month was not enough to reset me [10:51] bah, this is still borked [10:51] ok no problem :-) [10:51] fwereade: we can discuss later this week perhaps [10:52] wallyworld, I'd love to [10:52] sure. i am off thursday so maybe wednesday or tomorrow [10:52] wallyworld, weds would be perfect [10:53] ah, no, probably just a bash quirk [10:53] ok, wed it is [10:54] wallyworld, cheers [10:54] mgz, will you be doing the 1.10.0 release or should someone else pick that up for you? [10:59] mramm2, heyhey [11:01] jam, i'm very sorry i missed the 1-1 today [11:04] fwereade: i'm sad you rejected my mp. 
i agree with rogpeppe about binary compatibility. i'd like to think that the landing bot would do the tests on precise, but we really need to run "bleeding edge" tests locally on the latest series to minimise risk for future deployments [11:04] fwereade: what does the release consist of in this case? build, tools to public bucket, and announcement on list? [11:05] fwereade: i guess we need to add that to our wednesday agenda :-) [11:06] note to self for next time... `sudo schroot -c juju -u root` as I don't know how to easily configure fancy permissions... [11:09] mgz, yes please, I think that's it -- maybe a note that 1.9.* will be removed imminently, and actual removal of 1.9.* in a day or 2 [11:10] mgz: yes those things seem to be the sum of what doing a release for go juju requires ;) [11:10] wallyworld, I consider the failure entirely mine, it's fundamentally about communication [11:12] wallyworld, I'm not sure what's meant by the binary compatibility question but I'm afraid I'm done thinking for today [11:12] fwereade: i don't think you failed. but that aside, in past projects and fundamentally, i think local devs should test against the latest system release while the landing bot tests against the stable release [11:12] gents, you're all great, but I'm wiped out for now [11:12] fwereade: ok, talk later [11:12] wallyworld, cheers :) [11:30] jam, mgz: mumble poke [11:31] ta [11:33] jam: stand-up time [12:25] * dimitern lunch [12:51] anyone familiar with this error, "2013/04/22 08:50:19 ERROR command failed: no CA certificate in environment configuration" [13:04] we really need to upload the 1.10 tools [13:05] currently we can't use juju bootstrap with the tip version without using --upload-tools [13:05] llog [13:12] rogpeppe: building from recipe in the ppa now [13:13] mgz: thanks. 
[13:13] mgz: i wonder if every time we upload a new version, we should delete the oldest version currently in the bucket [13:13] mgz: to avoid running afoul of the bug [13:14] right now, probably [13:14] is it always safe to remove old tools, even if there are environments currently running them? [13:14] I guess not if we're good about major/minor compat discipline [13:16] mgz: there's probably a window during which a started instance can fail because the provisioner finds some tools and puts the url in the cloud-init, only for the tools to be removed before they get a chance to run [13:17] mgz: and there's the compatibility issue too. but i hope we're going to be much better about that from now on. [13:35] so, I don't have creds to the juju-dist thing as far as I can find in my email archive, but it's pretty trivial for anyone else to download the debs and run the release-public-tools script [13:55] yay, look at these ping times http://paste.ubuntu.com/5592594/ === wedgwood_away is now known as wedgwood [13:57] mgz: no, i haven't brought that cloudinit workaround for raring to smoser's attention [13:58] mgz: maybe we should though, you're right - afaics it works, tested live and all, but some subtleties might have escaped me [13:59] let's bug him and see [13:59] can you do this please - since you're handling the release anyway? [14:01] done, though, as stated^ I'll need someone else to upload the binaries to the public bucket(s) [14:05] rogpeppe: kanban? [14:05] dimitern: i'll give it a go. network connect still v dodgy. [14:12] hey niemeyer. _mup_ seems to be sleeping, at least on #juju-gui. Could you wake him up? [14:13] gary_poster: Will check it out [14:13] thank you [14:32] gary_poster: _mup_ was on crack here as well - not answering to bug 0123456 as well for a few days now [14:32] dimitern, yeah, I thought I saw that. 
thanks for the confirmation :-) [14:32] (it seems it still is) [14:39] gary_poster: Launchpad seems to have changed the bug URL that mup looks for [14:39] niemeyer, ah :-/ [14:40] thank you for investigating niemeyer. Is that something you plan to address soon, or no time? [14:40] gary_poster: I'd like to see it fixed for sure.. I'm having a deeper look just now [14:40] cool [14:40] thanks again [14:42] gary_poster: np [14:50] hey jam. I think you had talked about adding tarmac support to lbox. Do you know if that has gone anywhere beyond that initial statement of idea? We would like that for the GUI. [15:34] * jamespage high 5's mgz [15:34] py juju accepted.... [15:37] woho! [15:37] right, now for go... [15:43] jamespage: mgz: AWESOME! [17:18] gary_poster: Found the issue with wgrant's and andreas' help [17:18] gary_poster: We're taking the chance to update the machine as well [17:18] niemeyer, fantastic [17:18] gary_poster: mup will be awakened soon :) [17:18] heh cool :-) === deryck is now known as deryck[lunch] [17:41] i'm off now [17:41] see y'all tomorrow === fss_ is now known as fss === deryck[lunch] is now known as deryck [19:32] guys, with go-juju, how are the open-port, close-port, etc commands inserted into $PATH? [19:32] I just deployed a service and got [19:33] 2013/04/22 19:14:24 INFO worker/uniter: HOOK /var/lib/juju/agents/unit-lds-quickstart-0/charm/hooks/install: line 329: open-port: command not found [19:33] I logged in, and these tools are in /var/lib/juju/tools/machine-1/ [19:33] is that in the hook's shell env? [19:39] ok, it's the charm that changes PATH [20:18] please can we have https://codereview.appspot.com/8648047/ reverted? [20:18] this problem should go away "forever" after released images on thursday. [20:18] see my comment in that bug for more information. [20:18] mgz, ^ [20:29] smoser: I'll propose that. 
[20:37] hm, the r1111 change to make .lbox.check actually verify the build works is going to screw me [20:37] because the build has *never* worked for me [20:37] it just doesn't fail in an important manner... [21:11] morning [21:11] man, perhaps I should have a coffee before tackling the emails... [21:42] * thumper goes to make that coffee