[00:15] is 1.25 and xenial supposed to work? I filed LP#1557345 because I'm having issues with it deploying to containers [00:23] bradm, it works if lxc is installed [00:24] but you will not be able to deploy a lxc container on a vanilla xenial image as lxc is not installed by default [00:24] alexisb: huh, its installed for me [00:25] alexisb: and it doesn't work deploying a container to it [00:26] alexisb: the tl;dr is that I took a working juju environment deploying to canonistack, just changed the default series, did a juju bootstrap and then a juju deploy local:xenial/ubuntu --to lxc:0, and got an error as per my bug [00:31] davecheney: looking now. was afk for a bit. [00:34] davecheney: ship it [00:35] ta [00:43] gah! nasty horrible import loop [00:51] bradm: could u please add this infor to the bug too? [00:53] anastasiamac_: that they're just bootstrapped? sure. I'm testing it out again, will confirm if lxc is installed both before and after I attempt the deploy [00:54] bradm: :D that u "... took a working juju environment deploying to canonistack, just changed the default series, did a juju bootstrap and then a juju deploy local:xenial/ubuntu --to lxc:0, and got an error as per my bug" [00:54] bradm: tyvm \o/ [00:55] anastasiamac_: most of that's already in the bug, but not as concise. [01:07] wallyworld: give me 15 mins please, just finishing cooking my lunch [01:07] axw: talking to anastasiamac_ , will ping when ready [01:16] alexisb: I can also confirm a freshly booted environment has lxc installed already when I can log into it, before I try to deploy [01:18] bradm, can you deploy without the --lxc? [01:18] bradm, I am about to eod, but I can raise visibility on the bug tomorrow [01:19] alexisb: yes I can, it spins up a new instance. which is fine, but doesn't work too well with HA openstack [01:21] alexisb: I think jillr is going to be having a seperate conversation about upgrading juju with cherylj tomorrow, but figuring out what I can do to unblock this would be great - I don't particular care what juju version it is, just that I can deploy to LXCs - this is ultimately to get a deployable xenial with mitaka openstack [01:23] bradm, ok, I see you added our convo to the bug [01:23] alexisb: yup, just to be clear about what's happening. [01:24] I will get the right eyes on the bug tomorrow [01:24] excellent, thanks very much. [01:29] menn0: so i fixed the lxd reboot tests, and it turns out they don't work [01:29] github.com/juju/juju/container/lxd/lxd_go12.go:24: LXD containers not supported in go 1.2 [01:29] github.com/juju/juju/cmd/jujud/reboot/reboot.go:88: failed to get manager for container type lxd [01:29] github.com/juju/juju/cmd/jujud/reboot/reboot.go:134: [01:29] github.com/juju/juju/cmd/jujud/reboot/reboot.go:66: [01:36] axw: ping whenever you are free, after lunch [01:37] wallyworld: just cooking, it's only 9:30 :) I'm free now [01:37] wallyworld: standup? [01:37] sure [01:40] davecheney: all the lxd stuff should be hidden behind +build !go1.3 === natefinch-afk is now known as natefnich === natefnich is now known as natefinch [01:41] davecheney: which maybe is what you found === bruno is now known as Guest32213 === ses is now known as Guest21273 [02:21] anastasiamac_: looking at your virttype PR now [02:22] menn0: thnx? :D [02:34] wallyworld: I have a type that supports gnuflag, for --resource foo=bar --resource baz=bat. I need to use it in juju/juju and also in the charmstore-client. I was thinking of putting that type in github.com/juju/cmd ... 
since it's a pretty useful type to have around in general. Do you think that's an ok place, and if not, do you have a suggestion for a better place? [02:35] natefinch: what does the type do that's not already covered by our existing key-value flags type? [02:35] wallyworld: AFAIK we don't actually have a key-value flags type... there [02:35] let me try and find it [02:36] wallyworld: there's the constraints-style keyvalue type, but that is severely restricted as to what keys and values it can support [02:36] we have another general one that frank wrote [02:36] wallyworld: there's a storage one [02:38] wallyworld: there's something for bindings [02:41] anastasiamac_: review done [02:41] menn0: \o/ thank u - looking [02:43] natefinch: yeah, i can't find anything, i may have misremembered what we had [02:43] juju/cmd seems a good spot [02:43] wallyworld: there's a few similar things, but nothing quite so straightforward [02:44] wallyworld: cool [02:44] i could have sworn we have a generic key=value one [02:44] we do have one but it also accepts filenames [02:44] not just key values [02:45] the filename if specified contains key values [02:49] anastasiamac_: menn0: providers have a constraints validation interface which they implement - that's where the virt type value needs to be checked [02:49] yes [02:50] wallyworld: my question here was more along the lines of whether we have a set of virt types that we'd accept (like we do with arches) [02:50] provider dependent, hence the validation done in the provider [02:50] wallyworld: menn0: however, m happy to not do any validation and only do it on a provider side :D [02:50] even with arches, validation done on the provider also [02:51] apart from initial check [02:53] wallyworld: agreed... since i saw the initial arch check in constraints/validation, I've added the to-do to confirm that I did not need to do something similar for virttype. [02:53] wallyworld: i'll remove todo [02:53] ty :-) [02:54] wallyworld, anastasiamac_ : sounds good. I think we were all pretty much on the same page :) [02:54] in violent agreement :-) [02:54] :P [02:58] anastasiamac_: did you happen to test with a provider that doesn't support virt-type? I think we need to register unsupported constraints for them all (which is kinda dumb; should be a whitelist I think) [02:59] anastasiamac_: sorry, I think I just asked the same thing wallyworld did [02:59] axw: sounds good. I've hit merge but will add it now as a separate PR [03:00] although if virt-type is not specified, all will be good [03:00] if it's specified, and virt-type is not supported on clouds, we'd just say that nothing matches specified constraints... [03:01] anyone up for a quick and pretty painless review? http://reviews.vapour.ws/r/4190/ [03:06] re: ^ note this is a straight up copy of already-reviewed code in juju-core, just moving it somewhere accessible to other projects. [03:14] natefinch: reviewed [03:14] with juju 2.0 I'm seeing some very weird deltas. somehow a unit went from a config-changed hook error, to maintenance, to error [03:14] anastasiamac_: yep, I think we could just be a bit more helpful and say immediately that virt-type isn't handled by the provider [03:14] rather than filtering all the things out and saying nothing matches [03:15] hatch: hook retries maybe? [03:15] axw: sure. if my current PR lands, I'll follow it up (if it fails, i'll amend current) [03:15] axw: would a hook automatically retry?
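A minimal sketch of the kind of repeatable key=value flag natefinch describes above (--resource foo=bar --resource baz=bat). It is written against the standard library's flag.Value interface, which gnuflag mirrors; the type name and error messages are illustrative, not the actual juju/cmd code.

    package main

    import (
    	"flag"
    	"fmt"
    	"strings"
    )

    // resourceMap collects repeated "--resource name=value" flags into a map.
    // It satisfies flag.Value (and gnuflag's equivalent, which mirrors it).
    type resourceMap map[string]string

    func (r resourceMap) String() string {
    	pairs := make([]string, 0, len(r))
    	for k, v := range r {
    		pairs = append(pairs, k+"="+v)
    	}
    	return strings.Join(pairs, ";")
    }

    func (r resourceMap) Set(s string) error {
    	parts := strings.SplitN(s, "=", 2)
    	if len(parts) != 2 || parts[0] == "" {
    		return fmt.Errorf("expected name=value, got %q", s)
    	}
    	if _, ok := r[parts[0]]; ok {
    		return fmt.Errorf("duplicate resource %q", parts[0])
    	}
    	r[parts[0]] = parts[1]
    	return nil
    }

    func main() {
    	resources := resourceMap{}
    	flag.Var(resources, "resource", "resource as name=value; may be repeated")
    	flag.Parse()
    	fmt.Println(resources)
    }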
[03:15] anastasiamac_: thanks [03:15] hatch: yes, support was added not too long ago to automatically retry failed hooks [03:16] axw: interesting point about resources for services in bundles..... it's been on our mind, but we're basically out of time to implement it at this point. [03:16] axw ohh ok then, this is news to me - that would explain why I was seeing such weird results. [03:16] natefinch: fair enough. just keep that code in mind when you do get there [03:17] axw I'm also seeing that the 'juju status' updates quite a bit sooner than the delta stream...is this also possible? [03:17] axw: definitely, thanks for the pointer. That's really probably the most difficult part, is just the annoying contortions on the command line [03:18] hatch: possible, yeah. deltas are based off a polling mechanism [03:18] I forget the period, but it's in the seconds [03:18] ... I think [03:18] in this case, it was probably 10s [03:18] hatch: sounds about right [03:19] Bug #1555355 changed: MachineSerializationSuite.TestAnnotations unit test failure (Go 1.6) [03:19] Bug #1557345 opened: xenial juju 1.25.3 unable to deploy to lxc containers [03:19] ok thanks for confirming - I'm just qa'ing my changes to support the new agent_state er...JujuStatus and WorkloadStatus [03:19] thanks axw [03:19] hatch: no worries [03:19] lol, landing stuff outside of juju/juju is so much easier. CI runs in like 30 seconds rather than 30 minutes [03:25] axw: I'm a dip: http://reviews.vapour.ws/r/4191/ [03:26] natefinch: heh, oops :) [03:33] davecheney: this is much better: http://reviews.vapour.ws/r/4185/ [03:33] davecheney: PTAL [04:01] Bug #1557345 changed: xenial juju 1.25.3 unable to deploy to lxc containers [04:05] axw: i've also added support for setting allowed values to accommodate the "algorithm" attribute in joyent config http://reviews.vapour.ws/r/4171/ [04:07] wallyworld: cool, looks good. I was wondering whether we can determine the algorithm from the key... [04:07] wallyworld: is it ready for re-review now then? [04:07] axw: yeah, why not [04:09] axw: i realised i no longer need to pass authtype to finalise, i'll remove that [04:09] wallyworld: thanks, was about to comment [04:10] menn0: looking [04:11] and an import fix [04:19] menn0: https://github.com/juju/juju/pull/4749 [04:19] could you check again [04:19] i had to skip the test if built with go 1.2 [04:20] wallyworld: reviewed [04:20] because lxd compiles on go 1.2, but doesn't actually work [04:20] ta [04:20] which i'm not sure is helping [04:21] axw: i didn't know about strictfieldmap, i'll use that [04:21] file attr may still be interesting though [04:21] davecheney: looking [04:22] wallyworld: file attr? [04:23] axw: the schema declares it has an attribute "foo". we then use a map with key "foo-file" which proves the value for "foo". "foo-file" would be declared invalid for a strict schema map, no?
[04:24] s/proves/provides [04:24] wallyworld: there will be two fields in the checker [04:24] wallyworld: foo and foo-file [04:25] wallyworld: both marked non-mandatory [04:25] davecheney: still ship it [04:26] axw: i'll look at it - tests as written won't pass at the moment i think [04:26] wallyworld: okey dokey [04:26] as they don't construct a schema containing foo-file [04:27] axw: and joyent schema won't pass either - so i'll need to inject any file attributes into the schema [04:28] wallyworld: I'm saying they already are added to the environschema [04:28] wallyworld: look at schemaChecker(), search for "(file)" [04:28] as yes [04:28] ah yes [04:28] so they are [04:29] davecheney: the reason for the blank lines was to separate the test setup, from the call being tested, and then the test asserts [04:29] davecheney: but whatever :) [04:33] meh, your call [04:34] blank lines matter? [05:04] menn0: axw: black-listing virt-type as constraint for all providers http://reviews.vapour.ws/r/4192/ [05:06] anastasiamac_: I'm a bit confused about the comment in ec2. are we using the virt-type constraint in ec2? [05:06] doesn't look like it [05:07] axw: no we are not [05:07] it's not in the code. [05:07] anastasiamac_: ok, then it should be in unsupported constraints [05:07] axw: i'll remove the comment but wanted 2nd pair of eyes to confirm that m not imagining things [05:08] axw: yep. i'll remove the comment now that u agree. [05:08] anastasiamac_: yeah, I guess we'll want to expose it sooner or later (choose pv/hvm), but we should reject if we're not using it [05:08] anastasiamac_: thanks [05:31] Bug #1557874 opened: juju behaviour in multi-hypervisor ec2 clouds [05:39] wallyworld: with all the CLI changes, is there a way to upload tools for multiple versions anymore? [05:39] series [05:39] specifically, I want to test the LXD provider on Trusty hosting an extra unit on Xenial [05:39] but I need to have tools in state for Trusty and Xenail [05:39] Xenial [05:40] jam: i'd have to check - at one point we uploaded tools for the specified series and any lts series automatically, give me a minute to look [05:41] wallyworld: thanks. I found something weird where "juju bootstrap test-lxd --upload-tools --bootstrap-series xenial" didn't work but somehow "--upload-tools --bootstrap-series xenial --config default-series=xenial" did, IIRC [05:41] but I want both Trusty and Xenial, not just one or the other. [05:42] jam: bootstrap-series is new - it could just be that uploadtools doesn't account for it [05:42] in fact, i bet that is the case [05:42] wallyworld: sure, but I still want both :) [05:42] so should be a simple fix [05:42] yes, both need to be accounted for [05:43] wallyworld: right, I'm looking for the old "--upload-series trusty,xenial" that we used to have [05:43] wallyworld: or even just some other command that lets me push a binary as the right tools for the series [05:43] it doesn't have to be all munged in one thing, just a way to have compiled tools for 2 series [05:45] i think so long as it honours bootstrap-series, default-series that will be a start. i can't recall what happened to the "use these series explicitly" bootstrap option, i seem to recall that was deprecated by someone so we removed it for 2.0 [05:50] jam: when you upload for one series, the server explodes that into all series for the same OS [05:52] jam, wallyworld: I'm doing the code to create local-login macaroons now. thoughts on a sensible expiry time? it's 1h for external, but that's too frequent for local I think.
maybe 24h? [05:52] i think that sounds ok [05:52] axw: what happens when the macaroon expires? It just does an extra login step? [05:52] requires you to enter your password again? [05:52] yes [05:52] will prompt [05:52] jam: yup [05:52] online having to open a web browser every hour sounds very bad [05:53] 24h is ok though right? [05:53] wallyworld: how often do you like to 2-factor auth? [05:53] even 1/day is pretty hard [05:53] heh :) [05:53] agreed [05:53] fair point [05:55] wallyworld: axw: I set up SSH keys so I don't have to enter my passwords for things, and run an agent so I can enter it on login and forget about it for quite a while. [05:55] Bug #1557470 changed: juju reads from wrong streams.canonical.com location [05:55] Ubuntu SSO does some things about remembering your login for a while so it can recognize you [05:55] if we have something like that [05:56] jam: isn't that just the same? a time based token? [05:57] axw: so the SSO thing means that it is shared across users. If we're integrating with that such that we don't have to prompt the user as long as their SSO is still valid then that macaroon can be any timeout [05:57] we are just checking that they really are still valid, the *real* timeout is SSO [05:57] For Local, we'd like something akin to that. [05:57] Where we can issue a challenge+reauth at any time, but the *user* is in control of how often the real reauth happens. [05:57] I may not be clear [05:58] ssh-agent is the thing that says how often I need to login [05:58] not "ssh $MYHOST" [05:58] jam: ok, understand [05:58] I don't know if we have something tasteful here. [05:58] axw: it's the sort of thing we may want a knob on the server for [05:59] so my cruddy sites I just set to never expire [05:59] jam: this is for a local controller without sso or an external identity manager [05:59] and the Production servers expire daily [05:59] without sso or an external identity manager, we just use username/password as set on the controller for that user [06:00] and we use a macaroon (time based) to avoid re-authenticating each time [06:00] wallyworld: sure I understand that bit. but how often do I need to auth to it. *Today* we never have to reauth, and it's all local, and it's pretty nice. [06:00] authentication is done by controller [06:00] jam: that sounds sane. I'll implement without that first, then add config for timeout [06:00] right, but we need a balance between what we have today and being secure [06:01] wallyworld: maybe. passwords that people can remember are rarely actually secure, which means you actually use a password manager [06:01] which means yet again the thing that actually decides how often you "auth" is something else. [06:02] (LastPass, your custom gpg encrypted secrets file, etc) [06:02] indeed, i use one for github and when that times out on the cli, it's trivial to paste in the pw again [06:02] 2 clicks in my browser plugin and paste [06:04] wallyworld: anyway, I would highly suspect people generating solid passwords for a local controller, which means we're just aggressively pissing them off to pass it in all the time. [06:04] likely it means them using weaker passwords that are easier to enter [06:04] ultimately being weaker security [06:05] sure, we just need to decide what "all the time" is in order to piss people off. 1 day, 1 week, 1 month? [06:05] wallyworld: with your browser, if someone can login as you, then they're in. If I can login to my Laptop as Jameinel, I may (may not) need to login to Juju on that machine as well.
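The trade-off jam, wallyworld and axw are weighing comes down to a timestamp comparison: whatever form the cached credential takes (axw's time-limited macaroon in practice), the client only re-prompts once the issue time plus the configured window has passed. A rough sketch of just that decision, with a plain struct standing in for the macaroon and every name and the 24h default being illustrative, not the bakery API:

    package main

    import (
    	"fmt"
    	"time"
    )

    // localLogin stands in for whatever credential the client caches after a
    // successful password prompt; the fields are hypothetical.
    type localLogin struct {
    	User     string
    	IssuedAt time.Time
    }

    const localLoginExpiry = 24 * time.Hour

    // needsReauth reports whether the cached credential has aged out and the
    // user should be prompted for their password again.
    func needsReauth(l *localLogin, now time.Time) bool {
    	if l == nil {
    		return true // never logged in from this client
    	}
    	return now.Sub(l.IssuedAt) >= localLoginExpiry
    }

    func main() {
    	login := &localLogin{User: "admin@local", IssuedAt: time.Now().Add(-25 * time.Hour)}
    	fmt.Println(needsReauth(login, time.Now())) // true: older than 24h, prompt again
    }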
[06:05] I'm just thinking through the space [06:06] i agree a short time is bad. i just think it should be finite [06:06] wallyworld: flip side, what happens if you forget your password? [06:06] What is our password recovery mechanism [06:06] nothing (yet). that is a current limitation [06:06] I'm still one "sudo foo" away from being root [06:07] I just hesitate to say "you must remember a password you set" and then not give a way to recover. [06:07] but if the recovery mechanism is weaker than the password, we haven't added security. [06:07] maybe 1/day is reasonable. [06:07] recovery is definitely on the todo list [06:07] as it at least makes you think about it. [06:07] wallyworld: I have a strong feeling that local passwords actually don't make sense. [06:07] maybe 1 week or 1 month even. i have no firm view on how long, happy to let others decide [06:08] wallyworld: well 1 month is just saying "forget about this until 1 month later when you won't remember it" [06:08] I'm worried that it's the same problem as you may not do "juju foo" for a month [06:08] regardless of the password timeout being shorter [06:08] well, i can't remember my gh password :-) [06:09] wallyworld: yeah, I have several 16 char random passwords I can't remember at all. [06:09] but those integrate with Firefox [06:09] not the gh cli [06:09] but really easy still [06:09] it comes down to i guess, how do we stop unauthorised people from logging in to your controller [06:10] *today* we have a token on disk [06:10] you mean the ca cert? [06:10] I mean the password in ~/.juju/environments/ENV.jenv [06:11] that's not there anymore, nor is most of bootstrap config, i'd need to double check what we do now [06:11] wallyworld: the password for admin@local in accounts.yaml is what was admin-secret [06:12] i didn't think client login needed anything more than the ca cert [06:12] ah right [06:12] wallyworld: ~/.local/share/juju/accounts.yaml [06:12] wallyworld: it needs a username and password. ca-cert is just for verifying the server's identity [06:12] i forgot [06:12] brain too full [06:12] heh [06:14] I think the statement from tim was that "If you have the cloud credentials you can get in as ADMIN" [06:14] which is. ok, fine. I can spend the money on the cloud, I can get into the env. Maybe not perfect, but something. [06:14] I have $root$ on my local machine, is that enough? [06:14] jam: ideally, although you could just as easily throw away your SSH keys [06:15] well maybe not *just* as easily :) [06:15] jam: I haven't thought a lot about how to do recovery yet, but was thinking of having a localhost-only interface on the controller machines to do that. if you can ssh into machine-0, then you can fix up your own password [06:15] and any admin can change anyone else's password [06:16] axw: so a unix socket is how LXD does it [06:16] so certainly there is precedent. [06:16] might even be how mongo does as well? [06:17] jam: yep, you have to start mongo in a special way tho IIRC (excluding the first startup, where there's an exception if you have no password set yet) [06:19] axw: and certainly we have to consider that we can't be more secure in Juju than someone who can access our DB [06:19] they can just set the password there.
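axw's recovery idea above (a localhost-only interface on the controller, in the spirit of LXD's unix socket) could look roughly like this: an HTTP handler that is only reachable by someone who can already reach the socket file on machine-0. Purely a sketch; the socket path and route are made up and a real handler would update the user record in state.

    package main

    import (
    	"fmt"
    	"net"
    	"net/http"
    )

    // Serve a password-reset endpoint on a unix socket so that only users who
    // can ssh into the controller machine (and read the socket file) can reach
    // it. The path and route here are hypothetical.
    func main() {
    	const socketPath = "/var/lib/juju/recovery.socket"

    	l, err := net.Listen("unix", socketPath)
    	if err != nil {
    		panic(err)
    	}
    	defer l.Close()

    	mux := http.NewServeMux()
    	mux.HandleFunc("/reset-password", func(w http.ResponseWriter, r *http.Request) {
    		// A real implementation would change the stored password here.
    		fmt.Fprintln(w, "password reset accepted")
    	})

    	// http.Serve works over any net.Listener, including a unix socket.
    	if err := http.Serve(l, mux); err != nil {
    		panic(err)
    	}
    }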
[06:19] * axw nods [06:20] jam: speaking of, we should probably disallow sharing admin models with non-admins :) [06:20] otherwise I'll just "juju deploy backdoor --to 0" [06:27] axw: wallyworld: so if it is "you don't need a password if you're on the machine" and you need to refresh your login 1/day from another machine, we can live with it. I don't think it is quite there overall, but it is probably acceptable. [06:28] jam: ok, thanks [06:29] sgtm [07:06] cherylj: perrito666: if you see this later. With my branch you now see messages if you do "juju status --format=yaml" but machine-status messages aren't shown by default in "juju status" output. [07:06] so we have some visibility, but not a huge amount. [07:06] wallyworld: on the downside, "juju status-history" is filled with 100 "downloading image 98%" messages. [07:06] oh joy [07:07] wallyworld: so it's super nice to see the progress in status [07:07] but it is yet-another status-history message [07:07] why hasn't my machine started yet? Because the image copy is only 70% done. great [07:07] we should make that more usable [07:07] wanna file a bug? [07:08] wallyworld: expose status messages in default "juju status" or be able to have a message that gets updated instead of adding yet-another message? [07:08] both :-) [07:14] wallyworld: what is the map in SetInstanceStatus for? I haven't seen anywhere that it ever gets set. [07:14] Is it set from "status-set" in charm hooks? [07:14] jam: it accounts for the fact that we may want to pass some arbitrary data, like for other status [07:15] eg [07:15] we could pass in the download percentage or something [07:15] or time remaining [07:15] wallyworld: we can, but if nothing is touching it, exposing it, does it actually do anything? [07:15] I guess the API would expose it? [07:16] yeah eg for juju status [07:16] the yaml output has omitempty [07:16] that's the way it works for normal status, i assume it's the same here [07:16] so it is shown just not in the default format. [07:16] it's been a while since i saw the code [07:17] yes, not sown in tabular [07:17] shown [07:17] tabular is more of a summary [07:20] wallyworld: bugs filed [07:20] ty [07:21] i'll try and get them done this week [07:21] there's a similar bug about status-history spam [07:21] for update-status calls [07:22] Bug #1557914 opened: "juju status" doesn't show machine-status messages by default [07:22] Bug #1557918 opened: "juju status-history" doesn't include the concept of progress messages [07:28] wallyworld: yeah, it certainly feels similar to the update-status issue [07:28] yup [07:29] wallyworld: I wonder if a flag like "current-only" would be relevant. [07:29] This is a message that should be displayed, but doesn't need to be logged. [07:29] yeah, i was wondering if we needed to do that [07:29] there's an argument though that we should store everything and filter on display [07:30] wallyworld: that's ok, but not showing by default is the important bit. [07:30] so that you can get the interesting bits [07:30] wallyworld: I thought the status-history collection only stored a limited set of history, though. [07:30] does it store everything always? [07:30] it's capped [07:31] right, so that's a reason to elide them [07:31] cause otherwise 100 "I'm almost there" messages end up pushing out the real content.
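One way to read jam's two bugs: keep the current progress message so "juju status" can show it, but collapse the per-percentage updates before they crowd out real transitions in the capped history collection. A sketch of the filtering idea only; the entry type and the helper are invented for illustration and are not juju's state API.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // historyEntry is a stand-in for a status-history record.
    type historyEntry struct {
    	Status  string
    	Message string
    }

    // elideProgress collapses runs of entries that are successive progress
    // updates of the same operation (e.g. "copying image: 1%" ... "100%"),
    // keeping only the last one, so real transitions aren't pushed out of the
    // capped collection.
    func elideProgress(in []historyEntry) []historyEntry {
    	var out []historyEntry
    	for _, e := range in {
    		if len(out) > 0 && sameOperation(out[len(out)-1], e) {
    			out[len(out)-1] = e // overwrite the previous progress tick
    			continue
    		}
    		out = append(out, e)
    	}
    	return out
    }

    // sameOperation is a deliberately crude heuristic: same status and same
    // message prefix up to the first colon.
    func sameOperation(a, b historyEntry) bool {
    	prefix := func(m string) string {
    		if i := strings.Index(m, ":"); i >= 0 {
    			return m[:i]
    		}
    		return m
    	}
    	return a.Status == b.Status && prefix(a.Message) == prefix(b.Message)
    }

    func main() {
    	h := []historyEntry{
    		{"allocating", "copying image: 10%"},
    		{"allocating", "copying image: 70%"},
    		{"allocating", "copying image: 100%"},
    		{"running", "container started"},
    	}
    	fmt.Println(elideProgress(h))
    }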
[07:31] depends on the size but yeah [07:31] Bug #1557914 changed: "juju status" doesn't show machine-status messages by default [07:31] Bug #1557918 changed: "juju status-history" doesn't include the concept of progress messages [07:32] jam: i do like the idea of a "don't log this flag". but william is afaik against throwing away data [07:32] eg who cares if we ran the update status hook 100 times [07:33] wallyworld: so I can see people wanting to know "when was the last time update-status was run" because something was going wrong there. [07:33] I can hypothesize it, at least. [07:33] But *nobody* cares about something that happens more than 10-ish time [07:34] times [07:34] yes [07:34] other than gathering stats about it [07:34] you just can't think about it [07:34] 100 messages saying "copying" [07:34] did you notice it was 99 and 73% wasn't there? :) [07:40] Bug #1557914 opened: "juju status" doesn't show machine-status messages by default [07:40] Bug #1557918 opened: "juju status-history" doesn't include the concept of progress messages [08:14] axw: interactive add credentials done but i need to rework the provider credentials schema because we want the attributes to be ordered, and atm it is a map [08:15] wallyworld: ok === Guest32213 is now known as BrunoR [10:01] dimitern: frobware: standup? [10:09] wallyworld: jam reading your comment last night [10:10] why don't we grow a loglevel-ish attr to status? [10:10] morning all btw [10:20] Bug #1557993 opened: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial [10:24] wallyworld: it's all very hacky atm, but I've got a "juju login" which will request a macaroon, write it to accounts.yaml, and then use that for future logins [10:32] axw: congrats btw [10:37] perrito666: you mean for status-history ? yeah something like that seems valid. [10:39] jam: well status history it's just something that gets created by setstatus so we should add it to status as a whole but that would not hurt since it can just be ignored where it has no value [10:39] default loglevel should be the one that gets stored in history [10:39] perrito666: the caveat here is that we want it shown in the "juju status" content, because it is currently active data [10:40] however, once it has expired, it isn't really worth hanging onto it/showing it by default [10:40] which is a bit different interpretation of log level, where log level is not-shown-at-all [10:40] jam: you mean you want to set the status and then have it disappear? [10:40] perrito666: I mean that when you do "juju status 0/lxd/0" you want to see "copying image: 25%" [10:41] but when you do "juju status-history --type machine 0/lxd/0" you don't really want to see 100 lines of "copying image: 1-100%" [10:42] yeah, I think we are on the same page then :) [10:43] currently you call setStatus and that sets the current status and tries to push it to the history bucket too [10:44] morning [10:44] loglevel (or a better name for it) would determine if it gets pushed to history [10:50] dimitern: I think I found it [10:51] dimitern: the test server doesn't add interfaces to the node when you call start (a post) only on get [10:51] voidspace, ah, there it is then :) good catch [10:52] dimitern: well, we'll see... [10:57] axw: sounds awesome [11:01] dimitern: yes, seems to be it [11:02] dimitern: that brings me down to 23 failures, but looks like many of those have the same cause but need fixing separately [11:03] voidspace, nice - what sort of failures remain?
[11:03] in fact 12 of them [11:03] a couple of "access to address maas.testing.invalid not allowed" [11:03] because creating a NewEnviron actually hits the api now (to check version) which it didn't used to [11:03] ah [11:03] so those I can fix by patching out GetCapabilities (done in other places already) [11:04] a couple of bad requests which are odd but shouldn't be too hard [11:04] and a couple of 404s (also odd) [11:04] and a few "failed to allocate address" [11:04] so about four different failure cases across 23 tests [11:05] ah, some of the 400s are for missing subnets [11:05] all to do with test setup I expect [11:05] even better then! we'll fix the test server [11:05] yep, I'll have a PR for this fix shortly [11:05] sweet! [11:26] dimitern: https://github.com/juju/gomaasapi/pull/9 [11:27] voidspace, LGTM [11:27] dimitern: thanks [11:27] that was quick! [11:28] voidspace, I know that code all too well - it was a source of frustration :) [11:28] :-) [11:29] voidspace, dimitern: of course let's not update any dependencies.tsv in maas-spaces2... please... :) [11:30] frobware: this is needed only for my branch [11:30] frobware: I'll update dependencies there, there may be more fixes first anyway [11:30] (this is the drop-maas-1.8 branch) [11:31] voidspace: yep, just wanted to ensure we don't perturb what we have in m-spaces2. Really would like to see that branch merged today/tomorrow... [11:31] well, my branch may be ready to land in that timeframe... [11:31] ;-) [11:32] we'll do a separate CI run on this branch first though [11:34] dimitern: is it correct that MAAS 1.9 supports storage, so we don't need the checks for storage support in the provider? [11:35] voidspace, I believe so - axw / wallyworld can confirm? [11:35] dimitern: well, the error string is returned if the volumes aren't returned - so we still need to check that [11:36] so I think I'll leave the check and the test in place [11:36] maas 1.9 does support storage [11:36] wallyworld: thanks [11:36] maybe I should change the error message [11:37] voidspace, perhaps the test server is overly assumptive there [11:37] wallyworld: ever tried backup/restore recently on maas? [11:37] no [11:37] dimitern this is juju code [11:37] dimitern: it checks the number of returned volumes and if it doesn't match expected it reports that the version of MAAS doesn't support storage [11:37] voidspace, ah, is this around select/startNode ? [11:37] dimitern: I don't think we should drop that check [11:37] was trying backup/restore http://pastebin.ubuntu.com/15400937/ [11:38] dimitern: yeah, in startNode [11:38] I could be driving this the wrong way though [11:38] voidspace, because volumes are only returned in the result of selectNode (or selectNode / acquireNode), not in the result of startNode ? [11:39] dimitern: this is existing code, I'm only looking at the text of the error message :-) [11:41] voidspace, which test is that? [11:42] dimitern: TestStartInstanceUnsupportedStorage [11:42] dimitern: I've changed the error text to report that the incorrect number of storage volumes were returned. [11:42] dimitern: instead of complaining about MAAS version [11:42] dimitern: I think getting back the wrong number of volumes should still be an error [11:42] voidspace, yeah it looks like the error message is wrong [11:43] voidspace, but resultVolumes is returned by maasInstance.volumes [11:43] right [11:43] voidspace, and if you look at the comment around line 960 in environ.go...
[11:44] yeah, I see it [11:44] voidspace, it matters which maasObject we're embedding in a maasInstance, and subsequently reading the volumes from that embedded maasObject [11:44] I'm not touching any of that [11:45] voidspace, so error message could be better indeed [11:45] voidspace, it's not about supporting spaces [11:45] voidspace, s/spaces/storage/ :) [11:45] down to 21 failures [11:45] yep, error message changed [11:46] storage code was done earlier than the spaces support, maybe even before the capability for storage was in place [11:49] I think it was being worked on when I joined [11:49] dimitern: re: backup/restore, there's something similar and already reported. bug #1554807 [11:49] Bug #1554807: juju backups restore makes no sense [11:49] down to 14 failures [11:49] a nice title indeed :D [11:50] dimitern: not sure it helps me with triaging "functional-backup-restore" [11:51] frobware, after rebasing and enabling multi-bridge creation my branch still seems to work \o/ [11:52] frobware, will push and then go on with the mediawiki demo [11:52] frobware, or should I wait? [11:55] dimitern: the mediawiki demo doesn't take too long to run, perhaps try that first. [11:55] frobware, +1 [11:56] mgz: you about? [12:07] OMFG, I have been hammering on this test for *hours*, and it turns out the reason the agent isn't running the model workers it "should"? I didn't give it JobManageModel >_< [12:27] fwereade: ouch [12:27] why aren't these things starting, its right here... [12:27] jam, at least the universe makes sense again :) [12:50] Bug #1557993 changed: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial [12:50] Bug #1558061 opened: LXD machine-status stays in "allocating". [12:56] Bug #1558061 changed: LXD machine-status stays in "allocating". [12:56] Bug #1557993 opened: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial [13:08] Bug #1557993 changed: Can't create lxc containers with 1.25.4 and MAAS 1.9 and xenial [13:08] Bug #1558061 opened: LXD machine-status stays in "allocating". [13:15] sense [13:50] Bug # changed: 1466100, 1498086, 1498094, 1499501, 1506869, 1506881, 1521217, 1528971, 1540447 [13:50] Bug #1558078 opened: help text for juju remove ssh key needs improving [14:05] Bug # changed: 1459298, 1463904, 1464665, 1482502 [14:05] Bug #1558087 opened: TestInvalidFileFormat fails on windows because of / [14:35] rogpeppe2: So, charmrepo depends on charmstore (for tests), and charmstore depends on charm repo. This makes updating dependencies.tsv.... complicated. [14:35] natefinch: yes, it is awkward [14:35] natefinch: but it is possible [14:36] natefinch: i haven't thought of a better approach yet, unfortunately [14:36] rogpeppe2: I think in order to update charmrepo to use a new version of charmstore, I'm going to need to update charmstore to use a new version of (at least) charm.v6 [14:36] natefinch: i've already fixed charmrepo to use a new version of charmstore [14:37] natefinch: what's the actual problem you're having? [14:38] rogpeppe2: sorry, I missed your comment at the end of the PR saying you already updated the deps. I had tried updating the deps, but I think I just got them into a bad state, which is why I was having problems. 
[14:38] rogpeppe2: there's actually no problem, using the deps in charmrepo, the tests run fine with changes you suggested [14:38] natefinch: try fetch origin and rebase and see how things go [14:38] rogpeppe2: yeah, just did [14:39] natefinch: the problem was just that the tests assumed the old charmstore semantics [14:39] rogpeppe2: right [14:39] natefinch: BTW my most recent thinking is that the server side should include tests against the client package, with only some minimal unit tests in the client package itself. [14:46] rogpeppe2: I have an interesting video I watched recently, which you'll probably totally disagree with. It's a talk titled Integrated Tests Are A Scam - https://vimeo.com/80533536 [14:46] natefinch: i saw your tweet and started watching, but then thought i should probably do it outside work time :) [14:47] natefinch: i'm interested to see what he has to say [14:47] rogpeppe2: yes, probably wise :) === rogpeppe2 is now known as rogpeppe [14:54] frobware: so, run today on maas-spaces2 looks good [14:54] has only failures from ci changes to ha test [14:56] Bug #1459300 opened: Failed to query drive template: unexpected EOF [14:56] cherylj: ^^ [14:57] mgz: what CI changes have been made? [14:58] well, I assume it's that, could also be master that maas-spaces2 merged in being bad [14:58] either way, it's not a maas-spaces problem [15:01] mgz: could the reports include the commit IDs of any CI repos so that we could determine when things are change? [15:03] ericsnow: hey [15:03] katco: hi [15:04] ericsnow: i'm working on performance reviews right now, but i read through your comments on my merge [15:04] frobware: some jobs to include that, but not all I think [15:04] katco: k [15:04] ericsnow: i agree merges shouldn't refactor code. a lot of that came over from main, and i had to change some things to get it to compile [15:04] katco: ah, I was wondering if that was the case [15:05] ericsnow: maybe TAL at master and see if you would have done things differently? specifically for store.go [15:05] katco: probably https://github.com/juju/juju/pull/4623 [15:05] ericsnow: yes, exactly that [15:05] katco: I'll take another look [15:06] ericsnow: so anyway, with that context, lmk if that changes your review. it was not my intent to change things. i tried to keep the changes minimal [15:06] katco: you bet [15:06] ericsnow: ta [15:32] ericsnow, rick_h__: for charm publish --resource .... are we supporting bundles? seems like if we are, we need the change how we specify resource names, to something like --resource service:resourcename-rev [15:33] natefinch, rick_h__: I recall that we weren't going to support deploying bundles with --resource args, but I don't know that we discussed publish [15:35] ericsnow, rick_h__, katco: yeah, I was worried we'd forgotten about that. Seems like we kind of need to support it, otherwise bundles can't be published with charms that use resources [15:37] natefinch: the bundle metadata is what defines which resources go with each service; bundles themselves do not have resources [15:37] natefinch: so publishing a bundle with --resource doesn't have the same meaning [15:38] ericsnow: oh, hmm... right, so we're actually putting the specific resource revision right in the bundle [15:38] natefinch: correct [15:38] ericsnow: ok, that's what I was forgetting. 
so, I think we don't need to support --resource on publish for bundles, since by definition, the resource revisions are already defined [15:38] ericsnow: good :) [15:38] natefinch: bundles don't have resources, charms do, so don't think we've got anything that does publishing [15:39] natefinch: in effect we're using the bundle to publish the revision set for each service rather than publishing that revision set directly [15:39] ericsnow: exactly [15:40] rick_h__: yeah, I just confused myself, since I'm writing the CLI to publish with --resource flag, but the same command does bundles and charms [15:40] rick_h__: I'd actually already written the "you can't do that" path... but then second guessed myself [15:41] perrito666, sorry was running late [15:41] on the hangoutnow [15:50] sinzui: hey, is the curse here from bugs brought in from master? http://reports.vapour.ws/releases/3755 [15:51] katco: we are discussing the nature of restore failing. We suspect the issue really is in master, in which case, we have a blocker [15:51] sinzui: ah ok =/ [15:51] sinzui: we're running out of time to land this branch... is there anything at all i can do to help? [15:57] dimitern: ping [15:58] OMG, my editor is converting tabs to spaces in a .tsv file :/ [16:01] voidspace, pong [16:01] dimitern: do you know about the maas provider and device hostnames [16:01] dimitern: there is a comment in newDevice about working round the testservice requiring a hostname [16:02] dimitern: and then it calls NewDeviceParams that *does not* fill in a hostname [16:02] dimitern: (and so tests fail) [16:02] I can patch out NewDeviceParams to provide a hostname in the tests - but I'm going to delete that comment [16:02] or I can fix the test server to not require a hostname and to generate a random one [16:03] [16:03] voidspace, yeah, that workaround was needed because testserver required hostname to be set [16:03] I know, however the code that has that comment doesn't do it [16:03] it doesn't workaround it [16:03] so the comment is "wrong", not because the workaround isn't needed but because that code doesn't do it! [16:04] voidspace, newdeviceparams is there only to make testing the device creation a bit less awkward [16:04] dimitern: ah, ok [16:04] dimitern: I'll make the comment a bit more obvious [16:04] voidspace, production code does not need it otherwise [16:05] the comment just confused me [16:06] voidspace, sorry about that :/ [16:06] np [16:36] natefinch: "internal" tests have yet again bitten me :( [16:38] ericsnow: doh, sorry to hear that. How so? [16:38] natefinch: fixing a test causes an import cycle [16:39] ericsnow: that indicates a problem with package boundaries or the tests, generally... 
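For the import cycle natefinch hits above: Go lets a package's tests live in an "external" test package (package foo_test in the same directory), which may import anything, including packages that themselves import foo, without creating a cycle, at the cost of only seeing foo's exported API. A generic illustration; the package paths and helpers are made up and it is not runnable without them.

    // store/store_test.go
    //
    // Declaring the tests as package store_test instead of package store means
    // this file is compiled as a separate package. It may import
    // example.com/proj/testhelpers even though testhelpers imports
    // example.com/proj/store; the cycle only exists for in-package
    // ("internal") tests.
    package store_test

    import (
    	"testing"

    	"example.com/proj/store"
    	"example.com/proj/testhelpers" // imports store; fine from an external test package
    )

    func TestPutGet(t *testing.T) {
    	s := store.New()
    	testhelpers.FillWithFixtures(s)
    	if _, ok := s.Get("fixture-0"); !ok {
    		t.Fatal("expected fixture-0 to be present")
    	}
    }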
[16:39] ericsnow: though I know sometimes with our infrastructure it is unavoidable [16:39] natefinch: exacto [16:40] ericsnow: (though it still indicates a problem, it's often a problem that cannot be easily fixed) [16:40] natefinch: right [17:03] Bug #1558158 opened: Restore fails with no instances found === alexisb is now known as alexisb-afk [17:14] dimitern: sooo, we now allocate addresses using claim_sticky_ip on the device and release through the standard ip address release api [17:15] dimitern: which I assume works fine in production - but doesn't work at all on the test server, they're not connected [17:15] dimitern: so that requires a test server change to look on devices when releasing addresses [17:15] at the moment it 404s [17:21] natefinch, katco: PTAL: https://github.com/juju/charm/pull/200 and https://github.com/juju/bundlechanges/pull/19 [17:22] natefinch, katco: ...and https://github.com/juju/juju/pull/4758 [17:23] heh, I was just PR 200 in the charmstore client repo [17:23] natefinch: solidarity! [17:23] voidspace, that's ok though, as we'll be using a different approach to release addresses and the AC code will go away as implemented atm [17:23] you know, sometimes I think Google has it right with a monorepo :/ [17:52] dimitern: yeah, but for the current tests to pass I still need to fix the test server, not difficult though [17:54] right, EOD - off to visit my Mum in hospital [17:54] see you tomorrow all [17:57] Bug #1558185 opened: juju 2 status shows machine as pending but charm is installing [17:57] Bug #1558191 opened: TestConstraintsValidatorUnsupported fails on go 1.5+ === natefinch is now known as natefinch-lunch === alexisb-afk is now known as alexisb [18:09] Bug #1558185 changed: juju 2 status shows machine as pending but charm is installing [18:09] Bug #1558191 changed: TestConstraintsValidatorUnsupported fails on go 1.5+ [18:12] Bug #1558185 opened: juju 2 status shows machine as pending but charm is installing [18:12] Bug #1558191 opened: TestConstraintsValidatorUnsupported fails on go 1.5+ [18:19] ashipika: you where looking for me a few days ago? [18:22] anybody doing `add machine lxd:0' at the moment with xenial? [18:22] I see: "Creating container: Error adding alias ubuntu-xenial: not found" [18:23] jam, tych0 ?? ^^ [18:23] cherylj, [18:27] alexisb, jam, tych0, cherylj: status output, http://pastebin.ubuntu.com/15403438/ [18:46] alexisb, jam, tych0, cherylj: current workaround is to 'lxd-images import ubuntu --alias ubuntu-xenial xenial' [18:46] on the host [18:46] frobware, great, can you please capture info in a bug [18:50] natefinch-lunch: still at lunch? :) how's your card going? [18:52] alexisb: https://bugs.launchpad.net/juju-core/+bug/1558223 [18:52] Bug #1558223: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" [18:53] thank you [19:00] Bug #1558223 opened: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" === natefinch-lunch is now known as natefinch [19:09] katco: it was a late lunch :) Was spending time mid-day working on code review comments from roger [19:09] Bug #1558223 changed: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" [19:10] katco: the card is going fairly smoothly... although I wanted to talk to people about the user-experience for charm push --resource [19:11] natefinch: we're running out of time for that. 
i'd say push forward with what's already defined and we can circle back iff we have the time [19:11] natefinch: keep in mind... day after tomorrow is our deadline [19:12] katco: that's fine. mostly just wanted people to be aware that charm push --resource is just going to be syntactic sugar around charm push + charm push-resource (i.e. they're separate calls to the charmstore) [19:13] Bug #1558223 opened: add-machine lxd:0 provisioning error: "Error adding alias ubuntu-xenial: not found" [19:13] natefinch: what's your eta for the remainder of the work? [19:14] katco: I can get the current review comments finished and charm push --resource proposed today, but there's no one from the UI team to review the latter until tomorrow morning [19:15] natefinch: i think rick_h__ believes in some kind of "code fairy" ;) [19:15] who might merge for us [19:15] katco: didn't happen last time I needed the code fairy :) [19:15] katco: as long as the code fairly also keeps roger from strangling us ;) [19:15] ericsnow: you must not really believe [19:15] ericsnow: clap harder! [19:15] s/fairly/fairy [19:15] heh [19:16] katco: :p can always asj for a merge command if review is ok [19:16] natefinch: yeah, i'd just advise you to pick your battles here this close to a deadline [19:16] katco: definitely trying to go with the flow to just get'er'dun [19:16] rick_h__: i think the issue is more that we don't have final sign-off from someone in the ui team [19:17] also, we're tagging CS with v4.5 today, release tomorrow, so, em, have that in mind :) [19:17] rick_h__: if only someone were around who used to work closely with the ui team and had the authority to sign off. and loved campers and photography. [19:17] because they like to "sleep" at "night" [19:17] katco: understand. [19:17] urulama: grats on impending release! :D [19:18] cs-client is part of that for the moment [19:18] so, em, all PRs that are not needed for that release will not be landed anyway [19:18] sorry [19:18] (as i suspect they don't have the equivalent functionality covered in the uitests) [19:19] urulama: as long as we can get the changes in by friday, we're fine [19:19] and i apologise, the fairies were kidnapped by en evil witch from the south [19:19] :) [19:19] haha [19:20] katco: np, it's unlocked as soon as it get's tested and tagged [19:20] urulama: cool. well gl to your team [19:25] Bug #1558232 opened: ERROR cannot obtain bootstrap information: Get https://10.0.3.1:8443/1.0/profiles: Unable to connect to: 10.0.3.1:8443 [19:25] natefinch, katco: wait ... charm push --resource? what's with charm attach-resource? [19:25] urulama: change in direction from mark [19:25] urulama: he wants push-resource to be attach-resource [19:26] urulama: and charm push --resource is just a way to skip a step and do it all at once [19:26] urulama: so push up the charm and the resources for it [19:26] was aware of attach-resource, not about charm push --resource ;) [19:26] ok, sgtm [19:28] natefinch: is this your PR? https://github.com/CanonicalLtd/charmstore-client/pull/200 [19:28] natefinch: i mean, the one you want to land? [19:29] urulama: yes [19:30] urulama: though it needs a few suggestions from code reviews implemented [19:31] kk [19:32] which is what I'm doing now :) [19:32] ok, if you don't get review tonight, i'll point them to it in the morning [19:32] katco: just attach, no -resource [19:33] urulama: thanks [19:33] rick_h__: what happened to the whole 2.0 - edict? 
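natefinch's description above of charm push --resource as "syntactic sugar around charm push + charm push-resource" amounts to one charm upload followed by one attach call per name=file pair. A schematic sketch only; the client type and method names are placeholders, not the real charmstore-client API.

    package main

    import "fmt"

    // csClient is a placeholder for the charm store client; its methods are
    // invented for illustration.
    type csClient struct{}

    func (c *csClient) UploadCharm(path string) (string, error) {
    	fmt.Println("uploading charm from", path)
    	return "cs:~user/mycharm-1", nil
    }

    func (c *csClient) AttachResource(charmURL, name, file string) error {
    	fmt.Printf("attaching resource %s=%s to %s\n", name, file, charmURL)
    	return nil
    }

    // pushWithResources is the "sugar": push the charm, then attach each
    // --resource name=file pair to the uploaded revision.
    func pushWithResources(c *csClient, charmPath string, resources map[string]string) error {
    	curl, err := c.UploadCharm(charmPath)
    	if err != nil {
    		return err
    	}
    	for name, file := range resources {
    		if err := c.AttachResource(curl, name, file); err != nil {
    			return fmt.Errorf("charm pushed as %s, but attaching %q failed: %v", curl, name, err)
    		}
    	}
    	return nil
    }

    func main() {
    	c := &csClient{}
    	_ = pushWithResources(c, "./mycharm", map[string]string{"website": "./site.zip"})
    }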
[19:33] perrito666, http://paste.ubuntu.com/15403845/ , I can't bootstrap on lxd using master/head. Any known issues? [19:33] katco: it went with deploy, bootstrap, and such. [19:34] katco: i'm not 100% on it but i pushed attach-resource and was shot down. [19:34] rick_h__: hm. ok. natefinch ^^^ [19:34] rick_h__: so, juju attach django website=./site.zip ? [19:34] niedbalski: I honestly don't know [19:35] natefinch: yes [19:35] rogpeppe: okie dokie [19:35] whoops [19:35] rick_h__: ok :) [19:35] * urulama thinks you've awakened the dragon [19:36] * natefinch ducks [19:36] * rogpeppe swoops in [19:36] * rogpeppe ignores the puny humans [19:37] *grin* [19:38] rick_h__, are there other places we would use the word "attach" [19:39] I see danger in leaving it just attach without the -resource [19:40] alexisb: rick_h__: what we've been doing is having <verb>-<noun> with an alias of <noun> if there's no conflict [19:40] well [19:40] katco: or in the case of resources :) [19:40] aka list-resources [19:40] so for example list-storage is the same as storage [19:40] yep [19:41] alexisb: natefinch: ah, yes. [19:41] katco: at least we're consistently inconsistent ;) [19:41] =| [19:41] but rick_h__ is right that there are cases where we use just the verb [19:41] like bootstrap [19:41] alexisb: i think the case there is that they are fundamental juju concepts, not substrates [19:41] but bootstrap is bootstrap - one case [19:42] alexisb: right, I tried to argue against but we had some verb patterns and mark was taken with the idea of "email attachments" [19:43] alexisb: the only other thing I could think of for 'attach' was attach a storage device, or some sort of device [19:43] alexisb: but we don't use it currently and then if attach is established in this fashion the others will have to be something else [19:47] tych0: ping [19:47] rick_h__: i think i understand where mark is coming from, but i'm struggling with this: what delineates commands that should be <verb>-<noun> from those that should just be <verb>? [19:47] katco: well there's the ambiguous vs non-cases [19:47] katco: any verb that can apply to more than one thing (list-<noun>) rightly needs to be more to it [19:48] attach-storage [19:49] attach-space [19:49] attach-model [19:49] we may not need them now but we could later [19:49] rick_h__: does that make it true that we can't have any other attach commands until a 3.0 to not break backwards compatibility? [19:49] attach-storage was the only one I was worried about. I don't think of attach in the network space or in models [19:49] but those are not for charm management, right [19:49] rick_h__: alexisb: yeah, this ^^^ [19:49] urulama: there is a juju attach as well [19:49] katco: no, it just means they'd need to be attach-XX worst case, and it'll be confusing so we'd look for a different work [19:50] word [19:52] rick_h__: i think that's a different way of saying "can't have any other attach commands" [19:52] katco: :) [19:53] rick_h__: when the team heard attach, the first thing we thought of was storage [19:53] rick_h__: aren't we going to be doing unit-shared filesystems or something like that? [19:53] ericsnow: yes, shared filesystems [19:53] ericsnow: but what's to say that's not 'mount' or the like? [19:53] rick_h__: "attach" would make much more sense there [19:53] ericsnow: it's just hard to not use something because you *might* use it later. There's some cases but there's options.
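Mechanically, the alias question natefinch raises later in this exchange (attach-resource as the canonical, help-visible name, with attach accepted as a shorthand) is just two names resolving to one command. A generic sketch, not the juju cmd package:

    package main

    import "fmt"

    type command struct {
    	Name    string   // canonical name, shown in help
    	Aliases []string // accepted but typically hidden from help
    	Help    string
    }

    type registry map[string]*command

    // register maps the canonical name and every alias to the same command.
    func (r registry) register(c *command) {
    	r[c.Name] = c
    	for _, a := range c.Aliases {
    		r[a] = c
    	}
    }

    func main() {
    	r := registry{}
    	r.register(&command{
    		Name:    "attach-resource",
    		Aliases: []string{"attach"},
    		Help:    "upload a resource for a charm to the controller",
    	})
    	// Both spellings resolve to the same command; only the canonical name
    	// would be listed by "juju help commands".
    	fmt.Println(r["attach"] == r["attach-resource"]) // true
    }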
[19:54] * rick_h__ is trying to look at the commands/think through [19:54] rick_h__: if I see "juju attach" I am not going to have enough context to know what that is [19:54] ericsnow: it's because resources are new [19:54] ericsnow: as natefinch's command shows, it's more obvious with the full command [19:54] rick_h__: that is chicken and egg though ;) [19:55] rick_h__: as long as "attach-resource" is a valid command [19:55] rick_h__: you would only have the full command if you knew what attach was [19:55] Bug #1558239 opened: [LXD provider] Cannot bootstrap using Xenial container [19:55] can we still have attach-resource as an alias? [19:55] or vice versa [19:55] natefinch: that's backwards imo [19:55] I want attach-resource in juju help commands [19:55] I guess that's attach-resource with attach as an alias [19:56] that way if you want to upload a resource, you do juju help commands, and it's obvious [19:56] the issue is the only aliases now tend to be the list-XXXX ones. [19:56] we've killed off most other aliases, there's a couple that need cleanup still. /me tries to find the list from before [19:56] rick_h__: FWIW, the obvious command is "juju upload-resource" [19:56] ^^^^ +100 [19:56] :) [19:56] but I'm not Mark :) [19:57] ericsnow: heh, yea that didn't work either. [19:57] ericsnow: natefinch katco so an alias is a no go. It'll be a precedent setter in that sense. The other ones are either due to list- or plurals. [19:58] * rick_h__ has to hop on a call, but I think we have to go with attach for now and we'll suffer the consequences later. attach-resources is too long, and attach hasn't been used in storage/networking to date and I think we'll have other options when/if we get there. [19:59] and when I'm wrong, everyone gets free sprint beverages and gets to say "I told you!" :) [19:59] rick_h__: I'm just sad that juju help commands | grep resource is not gonna show the very first command you'd want to use with a resource [19:59] rick_h__: I guess that's not true... the description will probably use the word resource [19:59] natefinch: +1 [19:59] natefinch: sure it will, juju help commands has a text string on it that'll say resource in the string [20:00] natefinch: I thought of that and went and checked it in another terminal here :) [20:00] attach uploads a resource to the controller/charm store :/ [20:00] rick_h__: me too ;) [20:01] heh... actually juju help commands | grep resource shows some ambiguous usage of the word resources [20:02] destroy-controller terminate all machines and other associated resources for the juju controller [20:02] natefinch: yea, should clean that up [20:03] * natefinch adds a line to an export_test.go and feels shame [20:06] frobware: pong [20:10] tych0: I was looking at juju/worker/provisioner/lxd-broker.go and noticed this: [20:10] func (broker *lxdBroker) StartInstance(args environs.StartInstanceParams) (*environs.StartInstanceResult, error) { [20:10] if args.InstanceConfig.HasNetworks() { [20:10] return nil, errors.New("starting lxd containers with networks is not supported yet") [20:10] } [20:11] tych0: how significant is that? [20:11] frobware: i don't know :) [20:11] frobware: what is a "network" in this context? [20:12] tych0: I think this is a red-herring as I know you can have multiple networks in a profile - this looks juju-only to me right now [20:13] tych0: where 'networks' means multiple interfaces [20:14] tych0: don't worry about this ... since my original ping I don't think it is a genuine problem on the lxd side of things.
[20:35] frobware: oh, i see [20:36] you mean like juju isn't rendering things to LXD? [20:36] tych0: yes, but a known issue. that's up next. I was just walking through the details and saw that comment. [20:37] ok, cool [20:37] if you have any questions about how to do the actual translation, let me know === natefinch is now known as natefinch-afk [21:28] Bug #1542206 opened: space discovery still in progress [21:34] Bug #1542206 changed: space discovery still in progress [21:37] Bug #1542206 opened: space discovery still in progress [22:07] Bug #1548813 changed: maas-spaces2 bootstrap failure unrecognised signature [22:07] Bug #1554584 changed: TestAddLinkLayerDevicesInvalidParentName in maas-spaces2 fails on windows [22:07] Bug #1556116 changed: TestDeployBundleMachinesUnitsPlacement mismatch [23:03] cmars: master is blocked waiting for a fix to 1558087. I saw ur comment on the bug saying that u r waiting for master to b unblocked.. [23:03] cmars: could u plz try Fixes-bug blah merge message to land ur fix? [23:04] cmars: great that rog has seen the same bakery issue [23:21] cmars: actually - never mind \o/ [23:49] ping, https://github.com/juju/juju/pull/4749 needs a second review [23:49] thnaks [23:51] davecheney: ship it
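PR 4749 is the change davecheney mentions earlier in the day, where the lxd reboot tests have to be skipped when built with go 1.2 (lxd compiles there but doesn't actually work). The usual pattern for that kind of version gate is a pair of build-tagged files providing a constant, with the test skipping on it; a minimal standalone sketch, with file, package and constant names being illustrative rather than the actual juju layout:

    // supported_go13.go
    // +build go1.3

    package lxdtest

    const lxdSupported = true

    // supported_go12.go
    // +build !go1.3

    package lxdtest

    const lxdSupported = false

    // reboot_test.go
    package lxdtest

    import "testing"

    func TestRebootWithLXD(t *testing.T) {
    	if !lxdSupported {
    		t.Skip("LXD containers are not supported when built with go < 1.3")
    	}
    	// ... the actual reboot-with-lxd assertions would go here ...
    }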