[00:33] Bug #1549545 opened: Bundle deploys fail at lxc-start when bridge br-eth1 is created [00:34] katco, ericsnow: the PR for the missing API field: http://reviews.vapour.ws/r/3961/ [00:36] natefinch: LGTM [00:36] ericsnow: nice, thanks! [00:36] Bug #1549545 changed: Bundle deploys fail at lxc-start when bridge br-eth1 is created [00:45] Bug #1549545 opened: Bundle deploys fail at lxc-start when bridge br-eth1 is created [00:59] ericsnow: you still around? [01:01] axw: adding the restriction that controller name must exist in SetController() causes a lot of test failures \o/ i'm going through and fixing those [01:01] wallyworld: hm :/ do you agree that it should fail though? [01:02] axw: nope :-) [01:02] yes [01:02] sorry, i meant yes [01:02] lol [01:02] the other nope [01:02] nope that we shouldn't fail [01:02] ok [01:02] the fact that we never checked needs to be fixed === ses is now known as Guest86319 [02:21] jam: https://github.com/juju/juju/pull/4526 is what i've got for now [02:21] it seems to bootstrap and deploy stuff [02:21] jam: I think (?) it should fix most of your concerns [03:09] wallyworld: is there any documentation on how to actually bootstrap at the moment [03:09] wallyworld: with lxd not working i'm trying aws [03:09] menn0: the release notes are very comprehensive :-) [03:09] wallyworld: but it keeps trying to use the EC2 environment variables instead of what's in credentials.yaml [03:10] where is credentials.yaml located? [03:10] can you pastebin it? [03:11] wallyworld: I checked the release notes first. They only say that bootstrap and credentials have changed with no details [03:11] juju help bootstrap is also no help [03:11] * menn0 gets pastes [03:11] * menn0 pastes even [03:11] menn0: the release notes have an example credentials.yaml file, which ones are you looking at?
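[For reference, a minimal credentials.yaml of the shape the beta release notes describe — note the top-level `credentials:` key, which the file below in this log turns out to be missing. Cloud name, credential name, and key values here are placeholders, not taken from the log:]

```yaml
credentials:
  aws:
    my-creds:
      auth-type: access-key
      access-key: PLACEHOLDER-ACCESS-KEY
      secret-key: PLACEHOLDER-SECRET-KEY
```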
[03:11] beta1 [03:12] let me check them [03:12] wallyworld: ok I see [03:12] it's further down [03:13] yeah, there's a whole essay on the new bootstrap stuff [03:13] i have had it confirmed that if you follow the directions it does all work :-) [03:13] but maybe some more concise getting started docs would have helped [03:14] wallyworld: http://paste.ubuntu.com/15193666/ [03:14] that's what I get [03:14] the credentials.yaml looks right to me [03:15] (I've redacted the secret bits obviously) [03:15] looks ok at first glance, i'll dig in a bit [03:15] * menn0 wonders if it works for wallyworld because he has $AWE_SECRET_ACCESS_KEY set [03:16] $AWS... even [03:16] maybe, but it was tested without i am almost sure, but not 100% [03:16] menn0: you are missing the credentials: [03:16] at the top of the yaml [03:17] wallyworld: that would do it :) [03:17] let me know if it still is broken [03:19] wallyworld: looking better [03:19] \o/ [03:20] wallyworld: thanks for your help [03:20] np [03:20] wallyworld: I think the client should emit a useful error when credentials.yaml doesn't start with a credentials section [03:21] wallyworld: shall I file a bug about it? [03:21] I suspect I won't be the only one making this mistake [03:21] menn0: agree, but we also have several other yaml files we emit crappy errors for. i'd like to tackle that improvement holistically [03:21] and if it's not that error, there could be 100s of other syntactical issues [03:22] on the whole, juju's error reporting has a lot to answer for [03:22] wallyworld: in this case the YAML was syntactically correct. The first section just wasn't what the client expects. [03:23] fair point. but the same could happen in any yaml file juju likes to read. the user should be editing such files. beta2/3/4 will gain CLI to do that work.
go ahead and file a bug if you want though [03:23] shouldn't [03:24] ok, if the user won't be touching the file soon, then I won't bother with the bug [03:24] "soon" :-) [03:33] axw: could you ptal at http://reviews.vapour.ws/r/3954/ [03:34] wallyworld: looking [03:34] ta [03:44] wallyworld: done [03:44] ta [03:46] axw: damn, that plural is invisible to me :-/ will fix. need glasses [03:46] part 2 will get into the workflow proper, multiple clouds etc [03:46] just wanted to get the skeleton for part 1 [03:46] wallyworld: thanks. ok, sounds good [03:48] axw: would you mind taking a look at this one please? http://reviews.vapour.ws/r/3964/ [03:48] menn0: sure [03:48] axw: thanks [04:00] menn0: done [04:08] axw: thanks [04:10] axw: good catch with the missing Done call [04:19] menn0: where's thumper? anyways, why does juju create-model use creds/secrets for ec2 and openstack? i thought all hosted models were restricted to using the same creds as the original controller? is that not the case? [04:19] is ec2 and openstack special cased? [04:22] wallyworld: thumper is on his way to christchurch. he's off tomorrow for a wedding up here. [04:23] and i thought he was slacking off [04:23] we're supposed to have a meeting, i wonder when he was going to tell me [04:23] wallyworld: I don't know about the create-model creds situation. thumper did that work. [04:23] damn, ok [04:23] such a slacker [04:24] menn0: because if we do need to supply creds, we'll need to migrate that to credentials.yaml etc [04:24] wallyworld: off memory, I thought that it was possible to use different creds for different models on the same controller [04:24] but i'm fairly vague about that [04:25] ok, np, we'll need to look into it [04:25] we should surely default to the controller's creds though [04:25] yes [04:25] you'd hope so [04:25] but the cli has a comment that we look in supplied config vars for new creds [04:25] i didn't know we did that [04:26] wallyworld: yep.
and some providers will fail if you don't supply them, some don't [04:26] wallyworld, axw: think about the possibility of hosted envs being "owned" by completely unrelated parties [04:26] you'd want different creds then [04:26] ah yes. true [04:26] same provider, different creds [04:26] menn0: sure, I think we should support it [04:27] menn0: just not require it [04:27] it might even be "unsafe" to default to the controller's creds [04:27] axw: i think we'd want this for next beta, i'll try and get to it tomorrow, should be small change [04:27] not sure [04:28] wallyworld, axw: could one of you look at this one please? http://reviews.vapour.ws/r/3965/ [04:28] I'll look [04:28] fixes to apiserver panics that several people have been observing during test runs [04:28] sure [04:28] (including me) [04:28] axw, wallyworld: just one of you is fine :) [04:29] menn0: yes, me too - thank you! [04:29] and me! [04:29] axw, wallyworld: the novella at the top explains it, but the problem has been there forever I think [04:30] axw, wallyworld: it's just that timings for worker startup and shutdown have changed due to the dep engine [04:30] menn0: i think it has, i don't recall those /charm or /tools or /backups etc endpoints being tracked [04:31] axw, wallyworld: we're still not in good shape yet for the other streaming style APIs but at least /logsink and /log are fixed [04:31] /logsink is in use all the time [04:31] got to start somewhere [04:31] and now the pieces are there to do the others [04:31] * menn0 spent most of yesterday and today on this one when he should have been doing other things [04:32] thought it was due to my changes [04:32] but it wasn't [04:32] * menn0 creates a ticket about the others [04:35] menn0: pr looks ok but my brain has not delved into any possible subtle breakages [04:38] wallyworld: i've done quite a bit of manual testing and i'm pretty sure it's right, but I understand your concern [04:39] axw: given wallyworld is a bit uncertain could you take
a look too please? [04:39] menn0: +1 from me fwiw if you've tested manually [04:39] menn0: was already looking [04:39] still looking [04:39] axw, wallyworld: thanks both of you [04:39] awesome that you fixed it [04:45] menn0: LGTM [04:45] axw: cheers [04:49] axw: your idea makes sense and is preferable to what i've done if I can make it work [04:49] trying now [04:49] menn0: ok, thanks [04:56] axw: yeah, that's heaps better [04:56] axw: just making sure it works now [04:56] menn0: cool :) [04:57] are [04:58] wallyworld: is this error expected? [04:58] $ juju bootstrap menn0 aws --config ~/canonical/juju-dev.yaml --upload-tools [04:58] Creating Juju controller "local.menn0" on aws/us-west-2 [04:58] ERROR streams/v1/index2.sjson not accessed, actual error: invalid URL "https://streams.canonical.com/juju/images/releases/streams/v1/index2.sjson" not found [04:58] menn0: yeah [04:58] there's a few other similar ones [04:58] it needs to be suppressed [04:58] due to upload-tools? [04:59] it's because simplestreams searches everywhere and logs what it can't find [04:59] should be debug or trace [05:16] axw: so far, aws and google both support different named credentials having their own default region rather than a single global default :-( [05:17] might need to revisit the model [05:17] wallyworld: docs? [05:18] axw: for aws, i'd have to look up docs again, i'm just looking at what's in my ~/.aws directory. there's a config and a credentials file. config has region per user, credentials has credentials per user [05:19] wallyworld: isn't that just the default for each though? [05:19] yes, but we don't model it that way [05:19] we have map[string]Credential [05:19] and region is outside that map [05:20] i'll leave it for now [05:21] wallyworld: so you're suggesting DefaultRegion should be part of Credential, rather than CloudCredential?
[05:22] axw: yeah i think so [05:22] or at least an override [05:22] like we do with endpoints in cloud.yaml [05:24] wallyworld: I'd wait until someone asks for it. I think for now we should take whatever's in [default] and use that where we have DefaultRegion today [05:24] sgtm, but we can easily model it if needed [05:24] should be easy to have an override later [05:24] yep [05:47] menn0: can you recall the help function to fix windows user names? [06:58] wallyworld: no rush since it can't land yet, but admin bootstrap model changes: https://github.com/juju/juju/pull/4530 [06:58] wallyworld: I'll look at the next bit, creating the secondary model [06:59] axw: awesome, ty, will look after soccer, finishing up part2 of detect credentials [07:10] dimitern: o/ [07:14] anastasiamac, hey there [09:58] dimitern, axw: do either of you have recent insight into the `if insideLXC` handling in MachineAgent.uninstallAgent? [09:59] dimitern, axw: naively, ISTM that it's happening too late -- that we should do it before setting the machine to dead, not after [09:59] dimitern, axw: thoughts? [10:00] dimitern, axw: (because, once we're dead, it's a matter of luck whether or not that code gets to run before the responsible provisioner nukes the whole container) [10:00] fwereade_, I have to look at the code - after standup though [10:01] ah no standup today [10:02] dimitern, anastasiamac, frobware, voidspace: (and others) meeting? [10:02] dooferlad, thanks for the review btw; voidspace, frobware - I'd appreciate if you have a look as well: http://reviews.vapour.ws/r/3969/ [11:08] fwereade_: indeed, we should do it before setting Dead. pretty sure I wrote that - sorry [11:27] * dimitern whew... 
refcounts finally working [11:28] note to self: $ge != $gte and bson.D{{"$gte", bson.D{{"field", 0}}}} != bson.D{{"field", bson.D{{"$gte", 0}}}} [11:34] dooferlad, ping [11:34] dimitern: pong [11:35] dooferlad, I'm not quite sure what you mean by this comment: I think if you are calling the above function addLink...AndEnsureAllAdded then you shouldn't need a check after it. [11:35] dooferlad, there's no check there [11:36] dimitern: the line has c.Check(children, gc.HasLen, len(childrenNames)) [11:36] dooferlad, ah [11:37] dimitern: that one seems unnecessary if you have already ensured that all are added [11:37] dooferlad, I got you - yeah, it's not needed [11:37] np [11:37] will drop it [11:37] axw, no worries, just checking [11:37] axw, I don't see how to address that in the near term but I'll leave a TODO [11:39] fwereade_: nobody has implemented storage support for the LXD provider yet, so it's not actually doing anything at the moment. hopefully someone will fix that soon though... [11:39] fix/implement [11:39] we had it for local, so there's a small gap there now [11:40] axw, also, correct me if I'm wrong: ISTM that that code is the only reason we actually "need" to run the uninstall logic in non-manual-provider cases [11:40] fwereade_: that is correct [11:41] axw, and hence the only reason for various workers to write the uninstall file when they're "sure" that ErrTerminateAgent is "really meant" [11:42] axw, ...so, if the lxd bits are currently meaningless... can I maybe drop the uninstall-file-writing entirely, and leave it to the discretion of the manual provider? [11:42] fwereade_: it's almost certainly going to be needed again for lxd, so I'd prefer not to.. unless it's actively getting in your way?
[11:53] axw, I think I've come to understand what's going on well enough that it's not any more [11:53] axw, so I'm quite happy not to take on ripping it out :) [11:54] axw, but I must say it has sharp edges ;p [12:08] dooferlad, voidspace, here's the next PR in line: http://reviews.vapour.ws/r/3972/ please have a look when you can [12:08] * dimitern lunches [12:25] dooferlad: sorry for missing team meeting. it falls at 8pm my time and with 3 kids, some days, it's the busiest time of the day \o/ i'll try next time but trust that u've had plenty gerat meeting :D [12:26] s/plenty gerat/pretty great [12:30] gerat is less offensive than "u've" :P [12:30] the only similar word that comes to mind is german - Geraet [12:32] anastasiamac: meeting was good, just us euros hangin' out [12:33] mgz: it's great to hear that it's such a tight group \o/ [12:34] mgz: is there an awesome way to "predict" (=deduce?) what branch will run in CI next... ?.. [12:34] anastasiamac: generally not because we're always bumping things around on request [12:35] atm, master will run next [12:37] k... so what incantation did u use to figure that out? [12:38] in this case, `crontab -e` as jenkins on the master [12:38] :D [12:38] dooferlad, voidspace, rebased http://reviews.vapour.ws/r/3972/ onto maas-spaces2 after its prereq has landed to clean up the diff [12:39] anastasiamac: when overrides are not in place, it picks the branch in the github.com/juju/juju repo which has oldest untested changes [12:52] bbl [12:54] do we have something that translates back and forth between unit--0 and /0? [12:54] bogdanteleaga, check names.ParseUnitTag [12:57] dimitern, thanks, looks promising [13:08] Bug #1548813 opened: maas-spaces2 bootstrap failure unrecognised signature [13:09] huzzah! I have an internet access again \?/ [13:09] frobware, hey! :) welcome back then [13:10] dimitern: it's weird to have a mobile either. 
I went to a local coffee shop - it worked there for about 20 mins [13:11] s/to have/to not have [13:11] ah :) [13:11] frobware, I was wondering where did you go [13:11] Bug #1548813 changed: maas-spaces2 bootstrap failure unrecognised signature [13:11] dimitern: generally it's considered a bad sign if you can't get to BBC Radio 4 - implies the world had ended... [13:12] :D [13:12] isn't that one of the triggers for us launching nukes? [13:12] mgz: that's the one. I was close this morning, just couldn't get a +1. [13:12] no woman's hour? moscow is going down! [13:13] dimitern: did I miss anything from standup? [13:13] mgz: perhaps it was coincidental with ofcom ruling that BT did not have a monopoly. [13:13] frobware, we swapped standup for the weekly team call, which was almost the same thing as standup today [13:14] frobware, you missed nothing important [13:14] well, apart from the next few PRs of mine :) [13:15] dimitern: I was surprised you cherry-picked voidspace's change into master if it's not actually broken there. [13:16] frobware, it's not but since master uses older gomaasapi, I can't reliably test the fix for the br-eth1 issue I picked up [13:16] godeps ..while things got rather quiet [13:16] dimitern: this still need review? lldb? [13:16] dimitern: this still need review?
http://reviews.vapour.ws/r/3969/ [13:17] frobware, this does - http://reviews.vapour.ws/r/3972/ [13:17] nope, looks like I missed that by 48 mins too [13:17] Bug #1548813 opened: maas-spaces2 bootstrap failure unrecognised signature [13:18] frobware, ah, sorry about that :) I'm happy to do a follow-up after that last one, if you have concerns === akhavr1 is now known as akhavr [13:46] frobware: dimitern: yeah, I didn't understand that - surely if the fix is on maas-spaces2 then the fix will land on master when maas-spaces2 is merged [13:47] voidspace, yeah, assuming the fix for br-eth1 lands in maas-spaces2 first [13:47] frobware, voidspace, but it affects master already, and it's critical to fix this to unblock multi-nic containers [13:47] ah, if it affects master already then fair enough [13:48] your comment on the bug implied it wouldn't affect master until maas-spaces2 was merged (at which point the fix would land as well) [13:50] initially it wasn't obvious it affects master, but I can confirm that now [13:53] dimitern, voidspace: so could somebody explain to me how it affects master. [13:53] dimitern: I'm missing some context around br-eth1 [13:53] frobware, if you try what that CI job does, the effect is the same [13:54] frobware, i.e. deploying an lxc container on a machine with 2 nics, eth0 disabled, eth1 enabled [13:57] dimitern: I see. thx. [13:58] frobware, so the issue is not with the bridge script this time :) - br-eth1 gets created OK, but then we don't render the correct lxc.conf and /e/n/i [14:04] dimitern: yeah, I eventually got that when I read bug #1549545 [14:04] Bug #1549545: Bundle deploys fail at lxc-start when bridge br-eth1 is created [14:05] dimitern: reviewed http://reviews.vapour.ws/r/3972/ [14:05] frobware, thanks! will have a look shortly [14:07] removing any IPv6 settings from my vmaas-19 made everything SOOO much faster! [14:13] dimitern: be careful, when IPv4 has run out you'll have to switch back.
:) [14:14] frobware, I kept the config commented out just for such a case :) [14:15] dimitern: any decade now... [14:16] frobware, indeed :D [14:35] jam: can you try to reproduce that issue with master? if volumes are being deleted it means we might have broken something with tags [14:35] perrito666: I can't try to reproduce right now as I'm focused on the LXD stuff. [14:36] it's an old environment that I forgot to teardown. Though now I can't tear it down at all. [14:36] jam: did you just deploy to ec2 and then dropped the machine? [14:37] perrito666: I would have bootstrapped a couple weeks ago. The machine was still up and running and happy returning "juju status" results. [14:40] cherylj: tych0: mgz: I took tych0's update to mine and I have something that is passing the test suite and seems able to "juju bootstrap lxd lxd" [14:40] http://reviews.vapour.ws/r/3973/ [14:40] jam: thanks! [14:40] tych0: we still need to fix containers/lxd/lxd_test.go to not require you to have an "ubuntu-xenial" image. === akhavr1 is now known as akhavr [14:41] mgz: so there are still some known limitations. Like LXD containers on OS other than Xenial need you to install "ppa:ubuntu-lxc/lxd-stable" [14:41] I thought it was supposed to be in trusty-backports [14:41] but suddenly 0.26 is the version in trusty-backports [14:42] which Juju happily installs and then immediately fails to work because of API incompatibility. [14:42] hm, yeah, this needs some sanity in which versions we promote to ubuntu archive when [14:42] which the juju team hasn't had much influence over so far [14:43] mgz: also this is *not* compatible with 2.0.0~beta3 [14:43] they broke their api one more time for 2.0.0~beta4 [14:43] I'm told they aren't likely to break it again. [14:49] ericsnow: you around? [14:51] mgz: can you give it a bit of a run around to make sure it isn't just my local stuff that is working? [14:51] I'm not sure who else is going to review that code.
[14:51] maybe ericsnow or katco or natefinch would like to review the LXD updates? [14:51] jam: if you like we can just send that branch through CI [14:51] http://reviews.vapour.ws/r/3973/ [14:51] mgz: well, we could just land it on lxd-container-type couldn't we? [14:52] that is the target of that branch [14:52] I don't imagine it is going to be any more broken than the existing lxd-container-type. :) [14:52] katco`: I need to bring my daughter to the doctor at 11:30 today - she's had a fever since Sunday night, and spiked to 103.5 last night. It'll probably take 90 minutes all told, but most of that can come out of my lunch time. [14:52] yeah, that would be fine [14:52] mgz: can you confirm the version of Go that the landing bot uses? [14:52] mgz: I was pretty sure it was 1.2 but that means all the LXD tests get skipped. [14:52] natefinch: ok, hope she feels better :( [14:53] the gating uses 1.2, revision testing is go 1.2/gccgo/go 1.5/go 1.6 [14:53] mgz: right. so there is no gating on LXD :( [14:53] but at least it will notice in CI [14:53] which is how I broke it last time, probably. [14:54] anyway I need to go walk the dog. I might try to stop by later to catch up to how things are going. [14:54] katco`: yeah, it's been a scary few days, and the fact it just keeps on going is cause for concern. And this is our 2.5 year old, so she's still pretty little. [14:54] natefinch: yeah, it's scary at that age [14:55] natefinch: I hope if its something it gets resolved quickly. Been through that as well. always hard. [14:55] mgz: so... I'm worried that we can't release because this is going to be "known broken" on Trusty because they no longer have 2.0.0~beta4 in backports [14:55] mgz: thoughts? [14:55] jam: thanks [14:55] bbiab [14:56] jam: I agree, though what functionality are we promising on trusty with the beta? [14:57] mgz: it won't format your hard drive ;) [14:57] * natefinch sets a low bar. 
[15:06] ericsnow, katco`: in other news, juju upgrade-charm --resource basically works right now. There's some probable undesired behavior, but hooking into the code that deploy used for --resource was really easy. [15:06] jam: there was one printf-esque mismatch you might want to address now, my other comments were more benign. [15:07] natefinch: \o/ [15:07] jam: do i count as a guy who can land that into lxd-container-type? i'd be happy to land it there so we can at least get more CI on it and start getting the +1s [15:07] your changes on top of mine on top of yours look good :) [15:08] katco`, ericsnow: and by probable bad behavior, I mean that any resource you *don't* specify with --resource gets removed ;) [15:13] rick_h_: I presume that if you upgrade-charm --resource foo, any other resources you have uploaded should remain the same? [15:13] tych0: just a heads-up, see my comment ^^ to jam about a printf-esque mismatch in the review and the one about limits.cpus vs limits.cpu. [15:14] jam: saw the ping on lxd review, maybe katco` can talk about scheduling... we're pretty slammed, but I know the lxd stuff is critical [15:15] frobware: will take a look, thanks [15:17] frobware: yeah, it is subtle but also correct. see the commit message for the commit in question [15:17] (not sure how to link it on the review tool) [15:17] tych0: ok, that's fine. it was more out of completeness [15:18] natefinch: jam: we should assist with lxd support where possible [15:18] natefinch: jam: in a meeting, where is this request? [15:18] katco`: http://reviews.vapour.ws/r/3973/ [15:19] frobware: sure, np. thanks for looking! [15:19] ericsnow: when you're off this call, can you give this review a look? [15:20] katco`, natefinch: sure [15:21] jam: tych0: ^^^ we'll tal. ty for pr [15:27] dimitern: regarding http://reviews.vapour.ws/r/3967/, in general I think we should only merge blessed master builds unless we need specific fixes. thoughts?
[15:33] frobware, wouldn't it be better to track master closely until 2.0 branch is cut? [15:35] dimitern: I would rather have blessed builds as we don't have to chase ghosts. [15:35] dimitern: as soon as anything on master is blessed, and we're behind, we merge. not opting out, just being choosy. [15:36] frobware, sounds reasonable [15:36] frobware, but there were a few useful fixes we picked up due to that merge [15:37] dimitern: again, not a hard-and-fast rule, just trying to ensure we don't chase CI bugs on maas-spaces2. [15:38] master is in a rather unstable state atm unfortunately [15:39] mgz: dimitern: no further questions your honour! :) [15:41] :) [15:53] dimitern: frobware: dooferlad: another gomaasapi fix https://github.com/juju/gomaasapi/pull/8 [15:55] voidspace: looking [15:56] voidspace, why should it matter whether the reserved ranges are nil vs empty slices? [15:56] dimitern: because GetArray fails on nil [15:57] dimitern: and it isn't what the real server does, so I'd rather the test server behaved like maas [15:57] dimitern: instead of writing defensive code in production just for the gomaasapi test server [15:57] voidspace: how did you discover this?
[15:58] frobware: on my current branch a whole bunch of tests fail [15:58] frobware: I've changed the Spaces implementation and the existing tests fail [15:58] voidspace: k [15:58] ericsnow: getting some more coffee and then we can do our 1:1 [15:58] katco`: k [15:58] and fetching a subnet calls Spaces to get the space provider id, so all the existing tests fail [15:59] because the test server is now wrongly configured for the test [16:00] voidspace, I see === natefinch is now known as natefinch-afk [16:00] voidspace, yeah, that's a problem when using MAASObjects instead of structs with json serialization tags [16:01] meh, switch lost --list [16:03] aaand, I can no longer do a switch to my provider [16:03] :( [16:04] ericsnow: k i'm in the hangout when you're ready [16:04] k [16:10] frobware: what do you mean by 'can "1" ever return nil'? [16:12] voidspace: heh, sorry it was ambiguous. The call to GetSubObject("1"). [16:13] frobware: the answer is no [16:13] frobware: it's a gomaasapi object for making a call to the server [16:13] frobware: the subsequent call can fail, but GetSubObject can't return nil [16:14] voidspace: ok. it was the only thing that stuck out for me. LGTM. [16:14] frobware: dimitern: thanks [16:35] bbl, bye [17:04] natefinch, mgz, cherylj: Been trying to play around some more to get this working. I'm still having problems with the metadata generation - I'm not sure what I'm missing [17:04] http://pastebin.com/7JR198Az [17:05] this is everything I'm working with - I'm not sure what information it's looking for with the -m. Is it just the name of the cloud? is it a file with config options? [17:08] kilty: have you popped into #juju to see if anyone there has experienced similar issues? [17:08] kilty: the -m specifies the model (new term for environment) to operate in.
the command help looks like it hasn't been updated for the 2.0 conventions [17:09] kilty: we may be able to help you, but this channel is more focused on development [17:09] I have a hard time wrapping my head around having to have a bootstrapped env / model in order to generate metadata [17:09] let me poke at that command [17:10] katco`: I'll pop in there and ask as well [17:11] cherylj: Yeah I'm not sure why I wouldn't be able to generate metadata regardless of a model - when I was working with 1.25 I could generate metadata regardless of what env I was going to deploy it to [17:11] kilty: the image metadata you generate with 1.25 should be usable with 2.0 [17:12] so we should generate it on a 1.25 instance and scp it over? [17:13] kilty: yeah, I'd give that a try [17:13] cherylj: ok. I will give that a try and see what happens. Thanks [17:15] jam: you about? I looked at the updated diffs for your review and it is now spread across 9 pages. Seems a lot. :) [17:22] frobware, jam is probably out but tych0 is around and may be able to answer qs [17:23] yep, happy to answer any questions [17:24] alexisb: yep - asking because I saw some follow up from him about 15 mins.. [17:25] tych0, jam: so looks like upstream/master was merged onto that review, hence the 9 pages. Just surprised me as I was looking at the diffs based on jam's comments. [17:26] frobware: yes, i didn't do the merge unfortunately [17:32] cherylj: So it looks like the metadata generation works, it's just throwing errors - it creates the same metadata that 1.25 does [17:32] same thing with the tool generation === natefinch-afk is now known as natefinch [17:45] katco, ericsnow: can you guys TAL at this PR? http://reviews.vapour.ws/r/3949/ It's the one from yesterday where we now return a unitresources value for every unit. [17:45] natefinch: yeah, sorry, will do soon [17:46] katco, ericsnow: also, having push-resource call upgrade-charm is kind of amazing.
It makes everything just work the way you'd expect [17:46] natefinch: will do so when i get a chance. lots o' meetings today. speaking of... we still on for 10m from now? [17:46] er fire the upgrade-charm hook that is [17:46] natefinch: that's good to hear :) [17:47] katco: yep, back from the doctor. They tested and it's not strep or the flu... just some nasty virus.. said if she's not better in a couple days they'll have to do chest X-Rays, but we're hoping she'll just fight it off. [17:47] natefinch: :( well hope she feels better real soon [17:48] katco: yeah... she's a little better today... but she always seems a bit better morning/midday, so we'll see what happens tonight. [18:29] I know this is not a popular opinion but, I really dislike format tabular as a default [18:31] g'night all [18:43] perrito666: blasphemy [18:53] katco: btw, I think you can discard your api pass through review (http://reviews.vapour.ws/r/3948/) now, since my PR with that code landed last night [18:55] natefinch: yeah been meaning to [19:14] is master unlocked? [19:16] perrito666: http://juju.fail [19:16] katco, sorry, thats me [19:16] logging back on [19:16] alexisb: ah ok. lately i've been the one freezing :) [19:17] natefinch: sweet I completely forgot that thing [19:18] perrito666: best use of a non-standard TLD I've seen [19:34] niedbalski: working through the same issues you were in this bug, and was hoping that I could pick your brain [19:34] https://bugs.launchpad.net/juju-core/+bug/1452422 [19:34] Bug #1452422: Cannot boostrap from custom image-metadata-url or by specifying metadata-source [19:46] kilty, AFAIK that should be fixed, which juju-core version are you using? [20:00] niedbalski: 1.25 [20:01] niedbalski: 1.25.3, to be exact [20:02] installing from ppa:juju/stable [20:03] kilty, can you pastebin the error you are seeing? 
[20:03] If I generate the metadata locally and use the --metadata-source option on bootstrap, it creates the ost image, but the ost image defaults to looking back at cloud-ubuntu images [20:03] niedbalski: sure thing [20:05] kilty: you need to use image-metadata-url like I said previously [20:05] mgz: when I use the image-metadata-url, it seems to completely ignore it [20:06] kilty: if you bootstrap with --debug you can see what simplstreams are being selected [20:07] mgz: I am running a bootstrap with --debug now, will pastebin the results [20:08] niedbalski: http://pastebin.com/raw/VcYGhsFF [20:08] that's with using the --metadata-source option [20:10] kilty: that won't work. the local file never gets uploaded anywhere the state server can see it. I can see that the image-metadata-url is being looked at. [20:11] mgz: http://pastebin.com/NnFH6jQj [20:13] mgz: that's what happens when I just run with the agent-metadata-url and image-metadata-url set in the env file [20:16] kilty: export JUJU_LOGGING_CONFIG="=DEBUG; juju.environs.simplstreams=TRACE" and try that again? [20:16] gah,tyop [20:16] *juju.environs.simplestreams [20:17] mgz: running now [20:18] mgz: http://pastebin.com/zRnqWxZR [20:19] kilty: no route to host [20:19] line 11 [20:19] mgz: yeah sorry about that hold on - I need to re-run it [20:19] you need to host the streams somewhere both your client and the machine you are deploying can see [20:21] that needs to be higher than trace... [20:23] mgz: http://pastebin.com/HfdYzspP [20:23] I'm running this from an instance that I created in the same tenant to which I will deploy juju [20:23] so the ost machine and the box I am on now are in the same network [20:24] kilty: l17, 403 [20:24] can you actually resolve the path where you uploaded the json files, just in a browser?
I can resolve the parent directories - images/streams [20:26] let me chmod the entire structure and try again [20:28] ok - it's created the instance and is starting to provision - I'll let you know if it fails [20:29] mgz: thank you so much for all your help..this has been bugging me for about a week now. I'm trying to set this up so that I can deploy about 400 juju environments on a private openstack cloud for internal developers [20:29] no probs [20:30] mgz: tools from http://192.168.111.20/tools/releases/juju-1.25.3-trusty-amd64.tgz downloaded: HTTP 404; time 0.002s; size 0 bytes; speed 0.000 bytes/s sha256sum: /var/lib/juju/tools/1.25.3-trusty-amd64/tools.tar.gz: No such file or directory [20:30] 2016-02-25 20:29:30 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: subprocess encountered error code 1 [20:31] kilty: if you have outbound net access, I'd just delete the agent-metadata-url config and let the setup get the tools from our source [20:31] otherwise, you'll need to get that tgz mirrored === lazyPower is now known as lazyPower|lunch [20:33] mgz: ok - that makes sense. Re-running w/o agent-metadata-url [20:37] mgz: success! [20:37] kilty: ace! [20:38] mgz: now for the noob question of the hour - how do I deploy the gui frontend? [20:38] is it not part of the bootstrap? [20:39] `juju deploy cs:juju-gui --to 0` should do [20:39] oh duh - it's a charm [20:40] mgz: thanks [20:40] they have plans for things in 2.0 which... I'm not completely up to speed with [20:40] just to integrate the gui a little more closely I think [20:41] great, I cannot reproduce a bug even though I know its there [20:42] mgz: well now that we have 1.25 up I think that solves the issues we were having with 2.0 as well [20:42] yeah, should just be the same stuff [20:43] anyone here has been bit by the "status history full of update-status calls" ?
=== alexisb is now known as alexisb-brb === lazyPower|lunch is now known as lazyPower === alexisb-brb is now known as alexisb === blahdeblah_ is now known as blahdeblah [22:55] fwereade_: yep, it certainly has sharp edges. its raison d'etre is edge cases ;p if we could find a nice way to confine it to manual and lxd that would be good. or maybe just manual, and do the loop-device detachment elsewhere... [22:58] Bug #1550033 opened: Streaming API handlers may panic when apiserver shuts down [23:07] Bug #1550033 changed: Streaming API handlers may panic when apiserver shuts down [23:16] Bug #1550033 opened: Streaming API handlers may panic when apiserver shuts down