[00:33] <mup> Bug #1549545 opened: Bundle deploys fail at lxc-start when bridge br-eth1 is created <ci> <deploy> <maas-provider> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1549545>
[00:34] <natefinch> katco, ericsnow:  the PR for the missing API field: http://reviews.vapour.ws/r/3961/
[00:36] <ericsnow> natefinch: LGTM
[00:36] <natefinch> ericsnow: nice, thanks!
[00:36] <mup> Bug #1549545 changed: Bundle deploys fail at lxc-start when bridge br-eth1 is created <ci> <deploy> <maas-provider> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1549545>
[00:59] <natefinch> ericsnow: you still around?
[01:01] <wallyworld> axw: adding the restriction that controller name must exist in SetController() causes a lot of test failures \o/ i'm going through and fixing those
[01:01] <axw> wallyworld: hm :/  do you agree that it should fail though?
[01:02] <wallyworld> axw: nope :-)
[01:02] <wallyworld> yes
[01:02] <wallyworld> sorry, i meant yes
[01:02] <axw> lol
[01:02] <axw> the other nope
[01:02] <wallyworld> nope that we shouldn't fail
[01:02] <axw> ok
[01:02] <wallyworld> the fact that we never checked needs to be fixed
[02:21] <tych0> jam: https://github.com/juju/juju/pull/4526 is what i've got for now
[02:21] <tych0> it seems to bootstrap and deploy stuff
[02:21] <tych0> jam: I think (?) it should fix most of your concerns
[03:09] <menn0> wallyworld: is there any documentation on how to actually bootstrap at the moment
[03:09] <menn0> wallyworld: with lxd not working i'm trying aws
[03:09] <wallyworld> menn0: the release notes are very comprehensive :-)
[03:09] <menn0> wallyworld: but it keeps trying to use the EC2 environment variables instead of what's in credentials.yaml
[03:10] <wallyworld> where is credentials.yaml  located?
[03:10] <wallyworld> can you pastebin it?
[03:11] <menn0> wallyworld: I checked the release notes first. They only say that bootstrap and credentials has changed with no details
[03:11] <menn0> juju help bootstrap is also no help
[03:11]  * menn0 gets pastes 
[03:11]  * menn0 pastes even
[03:11] <wallyworld> menn0: the release notes have an example credentials.yaml file, what ones are you looking at?
[03:11] <menn0> beta1
[03:12] <wallyworld> let me check them
[03:12] <menn0> wallyworld: ok I see
[03:12] <menn0> it's further down
[03:13] <wallyworld> yeah, there's a whole essay on the new bootstrap stuff
[03:13] <wallyworld> i have had it confirmed that if you follow the directions it does all work :-)
[03:13] <wallyworld> but maybe some more concise getting started docs would have helped
[03:14] <menn0> wallyworld: http://paste.ubuntu.com/15193666/
[03:14] <menn0> that's what I get
[03:14] <menn0> the credentials.yaml looks right to me
[03:15] <menn0> (I've redacted the secret bits obviously)
[03:15] <wallyworld> looks ok at first glance, i'll dig in a bit
[03:15]  * menn0 wonders if it works for wallyworld because he has $AWE_SECRET_ACCESS_KEY set
[03:16] <menn0> $AWS... even
[03:16] <wallyworld> maybe, but i am almost sure it was tested without, though not 100%
[03:16] <wallyworld> menn0: you are missing the credentials:
[03:16] <wallyworld> at the top of the yaml
[03:17] <menn0> wallyworld: that would do it :)
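For reference, the shape wallyworld is describing: the whole file must be nested under a top-level `credentials:` key. A minimal sketch (the cloud name, credential name, and key values below are illustrative placeholders, not the exact beta1 example):

```yaml
credentials:
  aws:
    default:
      auth-type: access-key
      access-key: <redacted>
      secret-key: <redacted>
```

Omitting the leading `credentials:` line, as in menn0's paste, leaves the file syntactically valid YAML but not what the client expects.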
[03:17] <wallyworld> let me know if it still is broken
[03:19] <menn0> wallyworld: looking better
[03:19] <wallyworld> \o/
[03:20] <menn0> wallyworld: thanks for your help
[03:20] <wallyworld> np
[03:20] <menn0> wallyworld: I think the client should emit a useful error when credentials.yaml doesn't start with a credentials section
[03:21] <menn0> wallyworld: shall I file a bug about it?
[03:21] <menn0> I suspect I won't be the only one making this mistake
[03:21] <wallyworld> menn0: agree, but we also have several other yaml files we emit crappy errors for. i'd like to tackle that improvement holistically
[03:21] <wallyworld> and if it's not that error, there could be 100s of other syntactical issues
[03:22] <wallyworld> on the whole, juju's error reporting has a lot to answer for
[03:22] <menn0> wallyworld: in this case the YAML was syntactically correct. The first section just wasn't what the client expects.
[03:23] <wallyworld> fair point. but the same could happen in any yaml file juju likes to read. the user should be editing such files. beta2/3/4 will gain CLI to do that work. go ahead and file a bug if you want though
[03:23] <wallyworld> shouldn't
[03:24] <menn0> ok, if the user won't be touching the file soon, then I won't bother with the bug
[03:24] <wallyworld> "soon" :-)
[03:33] <wallyworld> axw: could you ptal at http://reviews.vapour.ws/r/3954/
[03:34] <axw> wallyworld: looking
[03:34] <wallyworld> ta
[03:44] <axw> wallyworld: done
[03:44] <wallyworld> ta
[03:46] <wallyworld> axw: damn, that plural is invisible to me :-/ will fix. need glasses
[03:46] <wallyworld> part 2 will get into the workflow proper, multiple clouds etc
[03:46] <wallyworld> just wanted to get the skeleton for part 1
[03:46] <axw> wallyworld: thanks. ok, sounds good
[03:48] <menn0> axw: would you mind taking a look at this one please? http://reviews.vapour.ws/r/3964/
[03:48] <axw> menn0: sure
[03:48] <menn0> axw: thanks
[04:00] <axw> menn0: done
[04:08] <menn0> axw: thanks
[04:10] <menn0> axw: good catch with the missing Done call
[04:19] <wallyworld> menn0: where's thumper? anyways, why does juju create-model use creds/secrets for ec2 and openstack? i thought all hosted models were restricted to using the same creds as the original controller? is that not the case?
[04:19] <wallyworld> are ec2 and openstack special cased?
[04:22] <menn0> wallyworld: thumper is on his way to christchurch. he's off tomorrow for a wedding up here.
[04:23] <wallyworld> and i thought he was slacking off
[04:23] <wallyworld> we're supposed to have a meeting, i wonder when he was going to tell me
[04:23] <menn0> wallyworld: I don't know about the create-model creds situation. thumper did that work.
[04:23] <wallyworld> damn, ok
[04:23] <menn0> such a slacker
[04:24] <wallyworld> menn0: because if we do need to supply creds, we'll need to migrate that to credentials.yaml etc
[04:24] <menn0> wallyworld: from memory, I thought that it was possible to use different creds for different models on the same controller
[04:24] <menn0> but i'm fairly vague about that
[04:25] <wallyworld> ok, np, we'll need to look into it
[04:25] <axw> we should surely default to the controller's creds though
[04:25] <wallyworld> yes
[04:25] <wallyworld> you'd hope so
[04:25] <wallyworld> but the cli has a comment that we look in supplied config vars for new creds
[04:25] <wallyworld> i didn't know we did that
[04:26] <axw> wallyworld: yep. and some providers will fail if you don't supply them, some don't
[04:26] <menn0> wallyworld, axw: think about the possibility of hosted envs being "owned" by completely unrelated parties
[04:26] <menn0> you'd want different creds then
[04:26] <wallyworld> ah yes. true
[04:26] <wallyworld> same provider, different creds
[04:26] <axw> menn0: sure, I think we should support it
[04:27] <axw> menn0: just not require it
[04:27] <menn0> it might even be "unsafe" to default to the controller's creds
[04:27] <wallyworld> axw: i think we'd want this for next beta, i'll try and get to it tomorrow, should be small change
[04:27] <menn0> not sure
[04:28] <menn0> wallyworld, axw: could one of you look at this one please? http://reviews.vapour.ws/r/3965/
[04:28] <axw> I'll look
[04:28] <menn0> fixes to apiserver panics that several people have been observing during test runs
[04:28] <wallyworld> sure
[04:28] <menn0> (including me)
[04:28] <menn0> axw, wallyworld: just one of you is fine :)
[04:29] <axw> menn0: yes, me too - thank you!
[04:29] <wallyworld> and me!
[04:29] <menn0> axw, wallyworld: the novella at the top explains it, but the problem has been there forever I think
[04:30] <menn0> axw, wallyworld: it's just that timings for worker startup and shutdown have changed due to the dep engine
[04:30] <wallyworld> menn0: i think it has, i don't recall those /charm or /tools or /backups etc endpoints being tracked
[04:31] <menn0> axw, wallyworld: we're still not in good shape yet for the other streaming style APIs but at least /logsink and /log are fixed
[04:31] <menn0>  /logsink is in use all the time
[04:31] <wallyworld> got to start somewhere
[04:31] <menn0> and now the pieces are there to do the others
[04:31]  * menn0 spent most of yesterday and today on this one when he should have been doing other things
[04:32] <menn0> thought it was due to my changes
[04:32] <menn0> but it wasn't
[04:32]  * menn0 creates a ticket about the others
[04:35] <wallyworld> menn0: pr looks ok but my brain has not delved into any possible subtle breakages
[04:38] <menn0> wallyworld: i've done quite a bit of manual testing and i'm pretty sure it's right, but I understand your concern
[04:39] <menn0> axw: given wallyworld is a bit uncertain could you take a look too please?
[04:39] <wallyworld> menn0: +1 from me fwiw if you've tested manually
[04:39] <axw> menn0: was already looking
[04:39] <axw> still looking
[04:39] <menn0> axw, wallyworld: thanks both of you
[04:39] <wallyworld> awesome that you fixed it
[04:45] <axw> menn0: LGTM
[04:45] <menn0> axw: cheers
[04:49] <menn0> axw: your idea makes sense and is preferable to what i've done if I can make it work
[04:49] <menn0> trying now
[04:49] <axw> menn0: ok, thanks
[04:56] <menn0> axw: yeah, that's heaps better
[04:56] <menn0> axw: just making sure it works now
[04:56] <axw> menn0: cool :)
[04:58] <menn0> wallyworld: is this error expected?
[04:58] <menn0> $ juju bootstrap menn0 aws --config ~/canonical/juju-dev.yaml --upload-tools
[04:58] <menn0> Creating Juju controller "local.menn0" on aws/us-west-2
[04:58] <menn0> ERROR streams/v1/index2.sjson not accessed, actual error: invalid URL "https://streams.canonical.com/juju/images/releases/streams/v1/index2.sjson" not found
[04:58] <wallyworld> menn0: yeah
[04:58] <menn0> there's a few other similar ones
[04:58] <wallyworld> it needs to be suppressed
[04:58] <menn0> due to upload-tools?
[04:59] <wallyworld> it's because simplestreams searches everywhere and logs what it can't find
[04:59] <wallyworld> should be debug or trace
[05:16] <wallyworld> axw: so far, aws and google both support different named credentials having their own default region rather than a single global default :-(
[05:17] <wallyworld> might need to revisit the model
[05:17] <axw> wallyworld: docs?
[05:18] <wallyworld> axw: for aws, i'd have to look up docs again, i'm just looking at what's in my ~/.aws directory. there's a config and a credentials file. config has region per user, credentials has credentials per user
[05:19] <axw> wallyworld: isn't that just the default for each though?
[05:19] <wallyworld> yes, but we don't model it that way
[05:19] <wallyworld> we have map[string]Credential
[05:19] <wallyworld> and region is outside that map
[05:20] <wallyworld> i'll leave it for now
[05:21] <axw> wallyworld: so you're suggesting DefaultRegion should be part of Credential, rather than CloudCredential?
[05:22] <wallyworld> axw: yeah i think so
[05:22] <wallyworld> or at least an override
[05:22] <wallyworld> like we do with endpoints in cloud.yaml
[05:24] <axw> wallyworld: I'd wait until someone asks for it. I think for now we should take whatever's in [default] and use that where we have DefaultRegion today
[05:24] <wallyworld> sgtm, but we can easily model it if needed
[05:24] <axw> should be easy to have an override later
[05:24] <axw> yep
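A sketch of the data model under discussion (type and field names are illustrative, not the exact juju types): the default region sits beside the per-name credential map rather than inside each credential, which is why per-credential default regions can't currently be expressed.

```go
package main

import "fmt"

// Credential is one named credential (illustrative shape only).
type Credential struct {
	AuthType   string
	Attributes map[string]string
}

// CloudCredential holds all named credentials for one cloud. Note the
// single DefaultRegion outside the map: individual named credentials
// cannot carry their own default region in this model.
type CloudCredential struct {
	DefaultRegion   string
	AuthCredentials map[string]Credential
}

func main() {
	cc := CloudCredential{
		DefaultRegion: "us-west-2",
		AuthCredentials: map[string]Credential{
			"default": {AuthType: "access-key"},
		},
	}
	fmt.Println(cc.DefaultRegion)
}
```

Moving `DefaultRegion` into `Credential` (or adding it there as an override, like cloud endpoint overrides) would be the change wallyworld floats above.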
[05:47] <wallyworld> menn0: can you recall the help function to fix windows user names?
[06:58] <axw> wallyworld: no rush since it can't land yet, but admin bootstrap model changes: https://github.com/juju/juju/pull/4530
[06:58] <axw> wallyworld: I'll look at the next bit, creating the secondary model
[06:59] <wallyworld> axw: awesome, ty, will look after soccer, finishing up part2 of detect credentials
[07:10] <anastasiamac> dimitern: o/
[07:14] <dimitern> anastasiamac, hey there
[09:58] <fwereade_> dimitern, axw: do either of you have recent insight into the `if insideLXC` handling in MachineAgent.uninstallAgent?
[09:59] <fwereade_> dimitern, axw: naively, ISTM that it's happening too late -- that we should do it before setting the machine to dead, not after
[09:59] <fwereade_> dimitern, axw: thoughts?
[10:00] <fwereade_> dimitern, axw: (because, once we're dead, it's a matter of luck whether or not that code gets to run before the responsible provisioner nukes the whole container)
[10:00] <dimitern> fwereade_, I have to look at the code - after standup though
[10:01] <dimitern> ah no standup today
[10:02] <dooferlad> dimitern, anastasiamac, frobware, voidspace: (and others) meeting?
[10:02] <dimitern> dooferlad, thanks for the review btw; voidspace, frobware - I'd appreciate if you have a look as well: http://reviews.vapour.ws/r/3969/
[11:08] <axw> fwereade_: indeed, we should do it before setting Dead. pretty sure I wrote that - sorry
[11:27]  * dimitern whew... refcounts finally working
[11:28] <dimitern> note to self: $ge != $gte and bson.D{{"$gte", bson.D{{"field", 0}}}} != bson.D{{"field", bson.D{{"$gte", 0}}}}
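dimitern's note-to-self is easy to trip over: in a mongo query the comparison operator must be nested under the field, not the other way around. A runnable illustration using local stand-ins for mgo/bson's ordered-document types (so it runs without the mgo dependency):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DocElem and D mimic mgo/bson.D's shape (local stand-ins, not the real types).
type DocElem struct {
	Name  string
	Value interface{}
}
type D []DocElem

// toMap flattens a D into nested maps so the structure is visible as JSON.
func toMap(d D) map[string]interface{} {
	m := make(map[string]interface{})
	for _, e := range d {
		if nested, ok := e.Value.(D); ok {
			m[e.Name] = toMap(nested)
		} else {
			m[e.Name] = e.Value
		}
	}
	return m
}

// query renders a D as the JSON mongo would see.
func query(d D) string {
	b, _ := json.Marshal(toMap(d))
	return string(b)
}

func main() {
	// Wrong: the operator wraps the field.
	fmt.Println(query(D{{"$gte", D{{"field", 0}}}})) // {"$gte":{"field":0}}
	// Right: the field wraps the operator.
	fmt.Println(query(D{{"field", D{{"$gte", 0}}}})) // {"field":{"$gte":0}}
}
```

The two documents look superficially similar in Go source but produce entirely different queries, which is exactly the refcounting bug described above.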
[11:34] <dimitern> dooferlad, ping
[11:34] <dooferlad> dimitern: pong
[11:35] <dimitern> dooferlad, I'm not quite sure what you mean by this comment: I think if you are calling the above function addLink...AndEnsureAllAdded then you shouldn't need a check after it.
[11:35] <dimitern> dooferlad, there's no check there
[11:36] <dooferlad> dimitern: the line has c.Check(children, gc.HasLen, len(childrenNames))
[11:36] <dimitern> dooferlad, ah
[11:37] <dooferlad> dimitern: that one seems unnecessary if you have already ensured that all are added
[11:37] <dimitern> dooferlad, I got you - yeah, it's not needed
[11:37] <dooferlad> np
[11:37] <dimitern> will drop it
[11:37] <fwereade_> axw, no worries, just checking
[11:37] <fwereade_> axw, I don't see how to address that in the near term but I'll leave a TODO
[11:39] <axw> fwereade_: nobody has implemented storage support for the LXD provider yet, so it's not actually doing anything at the moment. hopefully someone will fix that soon though...
[11:39] <axw> fix/implement
[11:39] <axw> we had it for local, so there's a small gap there now
[11:40] <fwereade_> axw, also, correct me if I'm wrong: ISTM that that code is the only reason we actually "need" to run the uninstall logic in non-manual-provider cases
[11:40] <axw> fwereade_: that is correct
[11:41] <fwereade_> axw, and hence the only reason for various workers to write the uninstall file when they're "sure" that ErrTerminateAgent is "really meant"
[11:42] <fwereade_> axw, ...so, if the lxd bits are currently meaningless... can I maybe drop the uninstall-file-writing entirely, and leave it to the discretion of the manual provider?
[11:42] <axw> fwereade_: it's almost certainly going to be needed again for lxd, so I'd prefer not to.. unless it's actively getting in your way?
[11:53] <fwereade_> axw, I think I've come to understand what's going on well enough that it's not any more
[11:53] <fwereade_> axw, so I'm quite happy not to take on ripping it out :)
[11:54] <fwereade_> axw, but I must say it has sharp edges ;p
[12:08] <dimitern> dooferlad, voidspace, here's the next PR in line: http://reviews.vapour.ws/r/3972/ please have a look when you can
[12:08]  * dimitern lunches
[12:25] <anastasiamac> dooferlad: sorry for missing team meeting. it falls at 8pm my time and with 3 kids, some days, it's the busiest time of the day \o/ i'll try next time but trust that u've had plenty gerat meeting :D
[12:26] <anastasiamac> s/plenty gerat/pretty great
[12:30] <mgz> gerat is less offensive than "u've" :P
[12:30] <dimitern> the only similar word that comes to mind is german - Geraet
[12:32] <mgz> anastasiamac: meeting was good, just us euros hangin' out
[12:33] <anastasiamac> mgz: it's great to hear that it's such a tight group \o/
[12:34] <anastasiamac> mgz: is there an awesome way to "predict" (=deduce?) what branch will run in CI next... ?..
[12:34] <mgz> anastasiamac: generally not because we're always bumping things around on request
[12:35] <mgz> atm, master will run next
[12:37] <anastasiamac> k... so what incantation did u use to figure that out?
[12:38] <mgz> in this case, `crontab -e` as jenkins on the master
[12:38] <anastasiamac> :D
[12:38] <dimitern> dooferlad, voidspace, rebased http://reviews.vapour.ws/r/3972/ onto maas-spaces2 after its prereq has landed to clean up the diff
[12:39] <mgz> anastasiamac: when overrides are not in place, it picks the branch in the github.com/juju/juju repo which has oldest untested changes
[12:52] <perrito666> bbl
[12:54] <bogdanteleaga> do we have something that translates back and forth between unit-<x>-0 and <x>/0?
[12:54] <dimitern> bogdanteleaga, check names.ParseUnitTag
[12:57] <bogdanteleaga> dimitern, thanks, looks promising
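The tag/name translation bogdanteleaga asks about is mechanical: a unit tag is `unit-` plus the unit name with its `/` replaced by `-`. A simplified sketch of the round trip (the real logic lives in the juju `names` package, e.g. `names.ParseUnitTag`, which also validates the name; these helper names are hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// unitTagToName converts "unit-mysql-0" to "mysql/0". The last dash
// separates the application name from the unit number, which is what
// makes names with embedded dashes (e.g. "rabbitmq-server") work.
func unitTagToName(tag string) (string, error) {
	const prefix = "unit-"
	if !strings.HasPrefix(tag, prefix) {
		return "", fmt.Errorf("%q is not a unit tag", tag)
	}
	s := strings.TrimPrefix(tag, prefix)
	i := strings.LastIndex(s, "-")
	if i < 0 {
		return "", fmt.Errorf("%q is not a valid unit tag", tag)
	}
	return s[:i] + "/" + s[i+1:], nil
}

// unitNameToTag is the inverse: "mysql/0" -> "unit-mysql-0".
func unitNameToTag(name string) string {
	return "unit-" + strings.Replace(name, "/", "-", 1)
}

func main() {
	name, _ := unitTagToName("unit-rabbitmq-server-3")
	fmt.Println(name)                     // rabbitmq-server/3
	fmt.Println(unitNameToTag("mysql/0")) // unit-mysql-0
}
```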
[13:08] <mup> Bug #1548813 opened: maas-spaces2 bootstrap failure unrecognised signature  <ci> <maas-provider> <juju-core:In Progress by dimitern> <juju-core maas-spaces:Fix Committed by mfoord> <https://launchpad.net/bugs/1548813>
[13:09] <frobware> huzzah! I have internet access again \o/
[13:09] <dimitern> frobware, hey! :) welcome back then
[13:10] <frobware> dimitern: it's weird to have a mobile either. I went to a local coffee shop - it worked there for about 20 mins
[13:11] <frobware> s/to have/to not have
[13:11] <dimitern> ah :)
[13:11] <dimitern> frobware, I was wondering where did you go
[13:11] <mup> Bug #1548813 changed: maas-spaces2 bootstrap failure unrecognised signature  <ci> <maas-provider> <juju-core:In Progress by dimitern> <juju-core maas-spaces:Fix Committed by mfoord> <https://launchpad.net/bugs/1548813>
[13:11] <frobware> dimitern: generally it's considered a bad sign if you can't get to BBC Radio 4 - implies the world has ended...
[13:12] <dimitern> :D
[13:12] <mgz> isn't that one of the triggers for us launching nukes?
[13:12] <frobware> mgz: that's the one. I was close this morning, just couldn't get a +1.
[13:12] <mgz> no woman's hour? moscow is going down!
[13:13] <frobware> dimitern: did I miss anything from standup?
[13:13] <frobware> mgz: perhaps it was coincidental with ofcom ruling that BT did not have a monopoly.
[13:13] <dimitern> frobware, we swapped standup for the weekly team call, which was almost the same thing as standup today
[13:14] <dimitern> frobware, you missed nothing important
[13:14] <dimitern> well, apart from the next few PRs of mine :)
[13:15] <frobware> dimitern: I was surprised you cherry-picked voidspace's change into master if it's not actually broken there.
[13:16] <dimitern> frobware, it's not, but since master uses older gomaasapi, I can't reliably test the fix for the br-eth1 issue there, so I picked up godeps
[13:16] <dimitern> ..while things got rather quiet
[13:16] <frobware> dimitern: this still need review? lldb?
[13:16] <frobware> dimitern: this still need review? http://reviews.vapour.ws/r/3969/
[13:17] <dimitern> frobware, this does - http://reviews.vapour.ws/r/3972/
[13:17] <frobware> nope, looks like I missed that by 48 mins too
[13:18] <dimitern> frobware, ah, sorry about that :) I'm happy to do a follow-up after that last one, if you have concerns
[13:46] <voidspace> frobware: dimitern: yeah, I didn't understand that - surely if the fix is on maas-spaces2 then the fix will land on master when maas-spaces2 is merged
[13:47] <dimitern> voidspace, yeah, assuming the fix for br-eth1 lands in maas-spaces2 first
[13:47] <dimitern> frobware, voidspace, but it affects master already, and it's critical to fix this to unblock multi-nic containers
[13:47] <voidspace> ah, if it affects master already then fair enough
[13:48] <voidspace> your comment on the bug implied it wouldn't affect master until maas-spaces2 was merged (at which point the fix would land as well)
[13:50] <dimitern> initially it wasn't obvious it affects master, but I can confirm that now
[13:53] <frobware> dimitern, voidspace: so could somebody explain to me how it affects master.
[13:53] <frobware> dimitern: I'm missing some context around br-eth1
[13:53] <dimitern> frobware, if you try what that CI job does, the effect is the same
[13:54] <dimitern> frobware, i.e. deploying an lxc container on a machine with 2 nics, eth0 disabled, eth1 enabled
[13:57] <frobware> dimitern: I see. thx.
[13:58] <dimitern> frobware, so the issue is not with the bridge script this time :) - br-eth1 gets created OK, but then we don't render the correct lxc.conf and /e/n/i
[14:04] <frobware> dimitern: yeah, I eventually got that when I read bug #1549545
[14:04] <mup> Bug #1549545: Bundle deploys fail at lxc-start when bridge br-eth1 is created <ci> <deploy> <maas-provider> <test-failure> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1549545>
[14:05] <frobware> dimitern: reviewed http://reviews.vapour.ws/r/3972/
[14:05] <dimitern> frobware, thanks! will have a look shortly
[14:07] <dimitern> removing any IPv6 settings from my vmaas-19 made everything SOOO much faster!
[14:13] <frobware> dimitern: be careful, when IPv4 has run out you'll have to switch back. :)
[14:14] <dimitern> frobware, I kept the config commented out just for such a case :)
[14:15] <frobware> dimitern: any decade now...
[14:16] <dimitern> frobware, indeed :D
[14:35] <perrito666> jam: can you try to reproduce that issue with master? if volumes are being deleted it means we might have broken something with tags
[14:35] <jam> perrito666: I can't try to reproduce right now as I'm focused on the LXD stuff.
[14:36] <jam> it's an old environment that I forgot to teardown. Though now I can't tear it down at all.
[14:36] <perrito666> jam: did you just deploy to ec2 and then dropped the machine?
[14:37] <jam> perrito666: I would have bootstrapped a couple weeks ago. The machine was still up and running and happy returning "juju status" results.
[14:40] <jam> cherylj: tych0: mgz: I took tycho0's update to mine and I have something that is passing the test suite and seems able to "juju bootstrap lxd lxd"
[14:40] <jam> http://reviews.vapour.ws/r/3973/
[14:40] <mgz> jam: thanks!
[14:40] <jam> tych0: we still need to fix containers/lxd/lxd_test.go to not require you to have an "ubuntu-xenial" image.
[14:41] <jam> mgz: so there are still some known limitations. Like LXD containers on OS other than Xenial need you to install "ppa:ubuntu-lxc/lxd-stable"
[14:41] <jam> I thought it was supposed to be in trusty-backports
[14:41] <jam> but suddenly 0.26 is the version in trusty-backports
[14:42] <jam> which Juju happily installs and then immediately fails to work because of API incompatibility.
[14:42] <mgz> hm, yeah, this needs some sanity in which versions we promote to ubuntu archive when
[14:42] <mgz> which the juju team hasn't had much influence over so far
[14:43] <jam> mgz: also this is *not* compatible with 2.0.0~beta3
[14:43] <jam> they broke their api one more time for 2.0.0~beta4
[14:43] <jam> I'm told they aren't likely to break it again.
[14:49] <natefinch> ericsnow: you around?
[14:51] <jam> mgz: can you give it a bit of a run around to make sure it isn't just my local stuff that is working?
[14:51] <jam> I'm not sure who else is going to review that code.
[14:51] <jam> maybe ericsnow or katco or natefinch would like to review the LXD updates?
[14:51] <mgz> jam: if you like we can just send that branch through CI
[14:51] <jam> http://reviews.vapour.ws/r/3973/
[14:51] <jam> mgz: well, we could just land it on lxd-container-type couldn't we?
[14:52] <jam> that is the target of that branch
[14:52] <jam> I don't imagine it is going to be any more broken than the existing lxd-container-type. :)
[14:52] <natefinch> katco`: I need to bring my daughter to the doctor at 11:30 today - she's had a fever since Sunday night, and spiked to 103.5 last night.  It'll probably take 90 minutes all told, but most of that can come out of my lunch time.
[14:52] <mgz> yeah, that would be fine
[14:52] <jam> mgz: can you confirm the version of Go that the landing bot uses?
[14:52] <jam> mgz: I was pretty sure it was 1.2 but that means all the LXD tests get skipped.
[14:52] <katco`> natefinch: ok, hope she feels better :(
[14:53] <mgz> the gating uses 1.2, revision testing is go 1.2/gccgo/go 1.5/go 1.6
[14:53] <jam> mgz: right. so there is no gating on LXD :(
[14:53] <jam> but at least it will notice in CI
[14:53] <jam> which is how I broke it last time, probably.
[14:54] <jam> anyway I need to go walk the dog. I might try to stop by later to catch up to how things are going.
[14:54] <natefinch> katco`: yeah, it's been a scary few days, and the fact it just keeps on going is cause for concern.  And this is our 2.5 year old, so she's still pretty little.
[14:54] <katco`> natefinch: yeah, it's scary at that age
[14:55] <jam> natefinch: I hope if its something it gets resolved quickly. Been through that as well. always hard.
[14:55] <jam> mgz: so... I'm worried that we can't release because this is going to be "known broken" on Trusty because they no longer have 2.0.0~beta4 in backports
[14:55] <jam> mgz: thoughts?
[14:55] <natefinch> jam: thanks
[14:55] <jam> bbiab
[14:56] <mgz> jam: I agree, though what functionality are we promising on trusty with the beta?
[14:57] <natefinch> mgz: it won't format your hard drive ;)
[14:57]  * natefinch sets a low bar.
[15:06] <natefinch> ericsnow, katco`: in other news, juju upgrade-charm --resource basically works right now.  There's some probable undesired behavior, but hooking into the code that deploy used for --resource was really easy.
[15:06] <frobware> jam: there was one printf-esque mismatch you might want to address now, my other comments were more benign.
[15:07] <katco`> natefinch: \o/
[15:07] <tych0> jam: do i count as a guy who can land that into lxd-container-type? i'd be happy to land it there so we can at least get more CI on it and start getting the +1s
[15:07] <tych0> your changes on top of mine on top of yours look good :)
[15:08] <natefinch> katco`, ericsnow: and by probable bad behavior, I mean that any resource you *don't* specify with --resource gets removed ;)
[15:13] <natefinch> rick_h_: I presume that if you upgrade-charm --resource foo, any other resources you have uploaded should remain the same?
[15:13] <frobware> tych0: just a heads-up, see my comment ^^ to jam about a printf-esque mismatch in the review and the one about limits.cpus vs limits.cpu.
[15:14] <natefinch> jam: saw the ping on lxd review, maybe katco` can talk about scheduling... we're pretty slammed, but I know the lxd stuff is critical
[15:15] <tych0> frobware: will take a look, thanks
[15:17] <tych0> frobware: yeah, it is subtle but also correct. see the commit message for the commit in question
[15:17] <tych0> (not sure how to link it on the review tool)
[15:17] <frobware> tych0: ok, that's fine. it was more out of completeness
[15:18] <katco`> natefinch: jam: we should assist with lxd support where possible
[15:18] <katco`> natefinch: jam: in a meeting, where is this request?
[15:18] <natefinch> katco`: http://reviews.vapour.ws/r/3973/
[15:19] <tych0> frobware: sure, np. thanks for looking!
[15:19] <katco`> ericsnow: when you're off this call, can you give this review a look?
[15:20] <ericsnow> katco`, natefinch: sure
[15:21] <katco`> jam: tych0: ^^^ we'll tal. ty for pr
[15:27] <frobware> dimitern: regarding, http://reviews.vapour.ws/r/3967/, in general I think we should only merge blessed master builds unless we need specific fixes. thoughts?
[15:33] <dimitern> frobware, wouldn't it be better to track master closely until 2.0 branch is cut?
[15:35] <frobware> dimitern: I would rather have blessed builds as we don't have to chase ghosts.
[15:35] <frobware> dimitern: as soon as anything on master is blessed, and we're behind, we merge. not opting out, just being choosy.
[15:36] <dimitern> frobware, sounds reasonable
[15:36] <dimitern> frobware, but there were a few useful fixes we picked up due to that merge
[15:37] <frobware> dimitern: again, not a hard-and-fast-rule, just trying to ensure we don't chase CI bugs on maas-spaces2.
[15:38] <mgz> master is in a rather unstable state atm unfortunately
[15:39] <frobware> mgz: dimitern: no further questions your honour! :)
[15:41] <dimitern> :)
[15:53] <voidspace> dimitern: frobware: dooferlad: another gomaasapi fix https://github.com/juju/gomaasapi/pull/8
[15:55] <frobware> voidspace: looking
[15:56] <dimitern> voidspace, why should it matter whether the reserved ranges are nil vs empty slices ?
[15:56] <voidspace> dimitern: because GetArray fails on nil
[15:57] <voidspace> dimitern: and it isn't what the real server does, so I'd rather the test server behaved like maas
[15:57] <voidspace> dimitern: instead of writing defensive code in production just for the gomaasapi test server
[15:57] <frobware> voidspace: how did you discover this?
[15:58] <voidspace> frobware: on my current branch a whole bunch of tests fail
[15:58] <voidspace> frobware: I've changed the Spaces implementation and the existing tests fail
[15:58] <frobware> voidspace: k
[15:58] <katco`> ericsnow: getting some more coffee and then we can do our 1:1
[15:58] <ericsnow> katco`: k
[15:58] <voidspace> and fetching a subnet calls Spaces to get the space provider id, so all the existing tests fail
[15:59] <voidspace> because the test server is now wrongly configured for the test
[16:00] <dimitern> voidspace, I see
[16:00] <dimitern> voidspace, yeah, that's a problem when using MAASObjects instead of structs with json serialization tags
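The nil-vs-empty distinction voidspace hit is a classic Go/JSON gotcha: a nil slice marshals as JSON `null` while an explicitly empty slice marshals as `[]`, so a consumer like `GetArray` that expects an array chokes on the nil case. A minimal illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalStr returns the JSON encoding of v as a string.
func marshalStr(v interface{}) string {
	b, _ := json.Marshal(v)
	return string(b)
}

func main() {
	var nilRanges []string     // never assigned: a nil slice
	emptyRanges := []string{}  // explicitly empty slice

	fmt.Println(marshalStr(nilRanges))   // null
	fmt.Println(marshalStr(emptyRanges)) // []
}
```

Initializing the field to an empty slice in the test server, as the gomaasapi fix does, makes its output match the real MAAS server instead of forcing defensive nil checks into production code.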
[16:01] <perrito666> meh, switch lost --list
[16:03] <perrito666> aaand, I can no longer do a switch to my provider
[16:03] <perrito666> :(
[16:04] <katco`> ericsnow: k i'm in the hangout when you're ready
[16:04] <ericsnow> k
[16:10] <voidspace> frobware: what do you mean by 'can "1" ever return nil'?
[16:12] <frobware> voidspace: heh, sorry it was ambiguous. The call to GetSubObject("1").
[16:13] <voidspace> frobware: the answer is no
[16:13] <voidspace> frobware: it's a gomaasapi object for making a call to the server
[16:13] <voidspace> frobware: the subsequent call can fail, but GetSubObject can't return nil
[16:14] <frobware> voidspace: ok. it was the only thing that stuck out for me. LGTM.
[16:14] <voidspace> frobware: dimitern: thanks
[16:35] <perrito666> bbl, bye
[17:04] <kilty> natefinch, mgz, cherylj: Been trying to play around some more to get this working. I'm still having problems with the metadata generation - I'm not sure what I'm missing
[17:04] <kilty> http://pastebin.com/7JR198Az
[17:05] <kilty> this is everything I'm working with - I'm not sure what information it's looking for with the -m. Is it just the name of the cloud? is it a file with config options?
[17:08] <katco`> kilty: have you popped into #juju to see if anyone there has experienced similar issues?
[17:08] <cherylj> kilty: the -m specifies the model (new term for environment) to operate in.      the command help looks like it hasn't been updated for the 2.0 conventions
[17:09] <katco`> kilty: we may be able to help you, but this channel is more focused on development
[17:09] <cherylj> I have a hard time wrapping my head around having to have a bootstrapped env / model in order to generate metadata
[17:09] <cherylj> let me poke at that command
[17:10] <kilty> katco`: I'll pop in there and ask as well
[17:11] <kilty> cherylj: Yeah I'm not sure why I wouldn't be able to generate metadata regardless of a model - when I was working with 1.25 I could generate metadata regardless of what env I was going to deploy it to
[17:11] <cherylj> kilty: the image metadata you generate with 1.25 should be usable with 2.0
[17:12] <kilty> so we should generate it on a 1.25 instance and scp it over?
[17:13] <cherylj> kilty: yeah, I'd give that a try
[17:13] <kilty> cherylj: ok. I will give that a try and see what happens. Thanks
[17:15] <frobware> jam: you about? I looked at the updated diffs for your review and it is now spread across 9 pages. Seems a lot. :)
[17:22] <alexisb> frobware, jam is probably out but tych0 is around and may be able to answer qs
[17:23] <tych0> yep, happy to answer any questions
[17:24] <frobware> alexisb: yep - asking because I saw some follow up from him about 15 mins ago
[17:25] <frobware> tych0, jam: so looks like upstream/master was merged onto that review, hence the 9 pages. Just surprised me as I was looking at the diffs based on jam's comments.
[17:26] <tych0> frobware: yes, i didn't do the merge unfortunately
[17:32] <kilty> cherylj: So it looks like the metadata generation works, it's just throwing errors - it creates the same metadata that 1.25 does
[17:32] <kilty> same thing with the tool generation
[17:45] <natefinch> katco, ericsnow: can you guys TAL at this PR?  http://reviews.vapour.ws/r/3949/ It's the one from yesterday where we now return a unitresources value for every unit.
[17:45] <ericsnow> natefinch: yeah, sorry, will do soon
[17:46] <natefinch> katco, ericsnow: also, having push-resource call upgrade-charm is kind of amazing.  It makes everything just work the way you'd expect
[17:46] <katco> natefinch: will do so when i get a chance. lots o' meetings today. speaking of... we still on for 10m from now?
[17:46] <natefinch> er fire the upgrade-charm hook that is
[17:46] <katco> natefinch: that's good to hear :)
[17:47] <natefinch> katco: yep, back from the doctor. They tested and it's not strep or the flu... just some nasty virus.. said if she's not better in a couple days they'll have to do chest X-Rays, but we're hoping she'll just fight it off.
[17:47] <katco> natefinch: :( well hope she feels better real soon
[17:48] <natefinch> katco: yeah... she's a little better today... but she always seems a bit better morning/midday, so we'll see what happens tonight.
[18:29] <perrito666> I know this is not a popular opinion but, I really dislike format tabular as a default
[18:31] <voidspace> g'night all
[18:43] <natefinch> perrito666: blasphemy
[18:53] <natefinch> katco: btw, I think you can discard your api pass through review (http://reviews.vapour.ws/r/3948/) now, since my PR with that code landed last night
[18:55] <katco> natefinch: yeah been meaning to
[19:14] <perrito666> is master unlocked?
[19:16] <natefinch> perrito666: http://juju.fail
[19:16] <alexisb> katco, sorry, thats me
[19:16] <alexisb> logging back on
[19:16] <katco> alexisb: ah ok. lately i've been the one freezing :)
[19:17] <perrito666> natefinch: sweet I completely forgot that thing
[19:18] <natefinch> perrito666: best use of a non-standard TLD I've seen
[19:34] <kilty> niedbalski: working through the same issues you were in this bug, and was hoping that I could pick your brain
[19:34] <kilty> https://bugs.launchpad.net/juju-core/+bug/1452422
[19:34] <mup> Bug #1452422: Cannot boostrap from custom image-metadata-url or by specifying metadata-source <sts> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1452422>
[19:46] <niedbalski> kilty, AFAIK that should be fixed, which juju-core version are you using?
[20:00] <kilty> niedbalski: 1.25
[20:01] <kilty> niedbalski: 1.25.3, to be exact
[20:02] <kilty> installing from ppa:juju/stable
[20:03] <niedbalski> kilty, can you pastebin the error you are seeing?
[20:03] <kilty> If I generate the metadata locally and use the --metadata-source option on bootstrap, it creates the ost image, but the ost image defaults to looking back at cloud-ubuntu images
[20:03] <kilty> niedbalski: sure thing
[20:05] <mgz> kilty: you need to use image-metadata-url like I said previously
[20:05] <kilty> mgz: when I use the image-metadata-url, it seems to completely ignore it
[20:06] <mgz> kilty: if you bootstrap with --debug you can see what simplestreams are being selected
[20:07] <kilty> mgz: I am running a bootstrap with --debug now, will pastebin the results
[20:08] <kilty> niedbalski: http://pastebin.com/raw/VcYGhsFF
[20:08] <kilty> that's with using the --metadata-source option
[20:10] <mgz> kilty: that won't work. the local file never gets uploaded anywhere the state server can see it. I can see that the image-metadata-url is being looked at.
[20:11] <kilty> mgz: http://pastebin.com/NnFH6jQj
[20:13] <kilty> mgz: that's what happens when I just run with the agent-metadata-url and image-metadata-url set in the env file
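[Editor's note] A hypothetical 1.25-era `environments.yaml` stanza with both URLs set, as kilty describes; the host `192.168.111.20` appears in the bootstrap log later in this conversation, but the stanza name and paths are assumptions:

```shell
# Write an illustrative environments.yaml fragment and show it.
# (Sketch only -- real deployments need the full openstack stanza.)
tee /tmp/environments-demo.yaml <<'EOF'
my-openstack:
  type: openstack
  image-metadata-url: http://192.168.111.20/images
  agent-metadata-url: http://192.168.111.20/tools
EOF
```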
[20:16] <mgz> kilty: export JUJU_LOGGING_CONFIG="<root>=DEBUG; juju.environs.simplstreams=TRACE" and try that again?
[20:16] <mgz> gah,tyop
[20:16] <mgz> *juju.environs.simplestreams
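[Editor's note] mgz's corrected logging override, put together: TRACE on the simplestreams module makes the bootstrap log show which streams data is being selected:

```shell
# Corrected module name (juju.environs.simplestreams, not simplstreams):
export JUJU_LOGGING_CONFIG="<root>=DEBUG;juju.environs.simplestreams=TRACE"
echo "$JUJU_LOGGING_CONFIG"
# A subsequent `juju bootstrap --debug` picks this up from the environment.
```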
[20:17] <kilty> mgz: running now
[20:18] <kilty> mgz: http://pastebin.com/zRnqWxZR
[20:19] <mgz> kilty: no route to host
[20:19] <mgz> line 11
[20:19] <kilty> mgz: yeah sorry about that hold on - I need to re-run it
[20:19] <mgz> you need to host the streams somewhere both your client and the machine you are deploying can see
[20:21] <mgz> that needs to be higher than trace...
[20:23] <kilty> mgz: http://pastebin.com/HfdYzspP
[20:23] <kilty> I'm running this from an instance that I created in the same tenant to which I will deploy juju
[20:23] <kilty> so the ost machine and the box I am on now are in the same network
[20:24] <mgz> kilty: l17, 403
[20:24] <mgz> can you actually resolve the path where you uploaded the json files, just in a browser?
[20:26] <kilty> I can resolve the parent directories - images/streams
[20:26] <kilty> let me chmod the entire structure and try again
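[Editor's note] What "chmod the entire structure" amounts to for a web-served simplestreams tree: every file world-readable and every directory traversable, so the HTTP server stops returning 403. Demo paths below, not kilty's real ones:

```shell
# Build a stand-in metadata tree and open it up for the web server:
mkdir -p /tmp/demo-streams/images/streams/v1
touch /tmp/demo-streams/images/streams/v1/index.json
# a+rX: read for all on everything, execute (traverse) only on directories
chmod -R a+rX /tmp/demo-streams
test -r /tmp/demo-streams/images/streams/v1/index.json && echo readable
```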
[20:28] <kilty> ok - it's created the instance and is starting to provision - I'll let you know if it fails
[20:29] <kilty> mgz: thank you so much for all your help..this has been bugging me for about a week now. I'm trying to set this up so that I can deploy about 400 juju environments on a private openstack cloud for internal developers
[20:29] <mgz> no probs
[20:30] <kilty> mgz: tools from http://192.168.111.20/tools/releases/juju-1.25.3-trusty-amd64.tgz downloaded: HTTP 404; time 0.002s; size 0 bytes; speed 0.000 bytes/s sha256sum: /var/lib/juju/tools/1.25.3-trusty-amd64/tools.tar.gz: No such file or directory
[20:30] <kilty> 2016-02-25 20:29:30 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: subprocess encountered error code 1
[20:31] <mgz> kilty: if you have outbound net access, I'd just delete the agent-metadata-url config and let the setup get the tools from our source
[20:31] <mgz> otherwise, you'll need to get that tgz mirrored
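[Editor's note] Mirroring the tgz, per mgz's alternative: the exact filename comes from the 404 in the log above, but the directory layout is an assumption. In a real mirror the tarball would be fetched rather than touched, and `juju metadata generate-tools` would regenerate the streams index over it:

```shell
# Place the requested agent tarball under the tree served at agent-metadata-url:
mkdir -p /tmp/tools-mirror/tools/releases
# Stand-in for the real download of juju-1.25.3-trusty-amd64.tgz:
touch /tmp/tools-mirror/tools/releases/juju-1.25.3-trusty-amd64.tgz
ls /tmp/tools-mirror/tools/releases
```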
[20:33] <kilty> mgz: ok - that makes sense. Re-running w/o agent-metadata-url
[20:37] <kilty> mgz: success!
[20:37] <mgz> kilty: ace!
[20:38] <kilty> mgz: now for the noob question of the hour - how do I deploy the gui frontend?
[20:38] <kilty> is it not part of the bootstrap?
[20:39] <mgz> `juju deploy cs:juju-gui --to 0` should do
[20:39] <kilty> oh duh - it's a charm
[20:40] <kilty> mgz: thanks
[20:40] <mgz> they have plans for things in 2.0 which... I'm not completely up to speed with
[20:40] <mgz> just to integrate the gui a little more closely I think
[20:41] <perrito666> great, I cannot reproduce a bug even though I know its there
[20:42] <kilty> mgz: well now that we have 1.25 up I think that solves the issues we were having with 2.0 as well
[20:42] <mgz> yeah, should just be the same stuff
[20:43] <perrito666> has anyone here been bitten by the "status history full of update-status calls" ?
[22:55] <axw> fwereade_: yep, it certainly has sharp edges. its raison d'etre is edge cases ;p   if we could find a nice way to confine it to manual and lxd that would be good. or maybe just manual, and do the loop-device detachment elsewhere...
[22:58] <mup> Bug #1550033 opened: Streaming API handlers may panic when apiserver shuts down <juju-core:New> <https://launchpad.net/bugs/1550033>