[01:21] thanks for the review waigani very good questions
[01:22] perrito666: np :)
[01:25] waigani: here's the fix for the frequent panics in the machine-dep-engine under Go 1.5: https://github.com/juju/juju/pull/4090
[01:25] waigani: review please
[01:25] menn0: looking
[01:29] menn0: ship it. I didn't know about sync.Once - handy
[01:31] waigani: I knew about it but had never used it
[01:34] menn0: It does feel like a slight kluge. Shouldn't the flow be such that close can only be called once?
[01:34] waigani: well it's perfectly ok for Kill to be called lots of times
[01:34] that's true
[01:34] waigani: and in this case you want the channel to be closed on Kill
[01:35] okay, fair enough :)
=== JoseeAntonioR is now known as jose
=== natefinch-afk is now known as natefinch
[02:11] Bug #1533431 opened: Bootstrap fails inexplicably with LXD local provider
[03:09] mwhudson: is someone actively working on getting go 1.5 into trusty?
[04:08] Bug #1533469 opened: github.com/juju/juju/api/reboot Build failed
[04:20] Bug #1533469 changed: github.com/juju/juju/api/reboot Build failed
[04:23] Bug #1379930 changed: relation config values set to the empty string are lost
[04:23] Bug #1514570 changed: 'JUJU_DEV_FEATURE_FLAGS=address-allocation' blocks after first 3 ips are allocated
[04:23] Bug #1533469 opened: github.com/juju/juju/api/reboot Build failed
[04:26] Bug #1379930 opened: relation config values set to the empty string are lost
[04:26] Bug #1514570 opened: 'JUJU_DEV_FEATURE_FLAGS=address-allocation' blocks after first 3 ips are allocated
[04:29] Bug #1379930 changed: relation config values set to the empty string are lost
[04:29] Bug #1514570 changed: 'JUJU_DEV_FEATURE_FLAGS=address-allocation' blocks after first 3 ips are allocated
[05:11] Bug #1532130 changed: Config item 'version' vanishes with 1.26
[05:15] Bug #1532130 opened: Config item 'version' vanishes with 1.26
[05:18] Bug #1532130 changed: Config item 'version' vanishes with 1.26
[05:21] Bug #1532130 opened: Config item 'version' vanishes with 1.26
[05:24] Bug #1532130 changed: Config item 'version' vanishes with 1.26
=== ashipika1 is now known as ashipika
[07:30] Bug #1532130 opened: Config item 'version' vanishes with 1.26
[08:41] anastasiamac, morning, still around?
[08:42] mattyw: m on and off - kids r home, dinner...
[08:42] anastasiamac, no problem
[09:04] frobware, dooferlad, morning :) can I bother you with a review on http://reviews.vapour.ws/r/3482/ please?
[09:31] dimitern, welcome back & done.
[09:31] frobware, thanks!
[09:32] dimitern, I plan to make everything a bridge on the maas-spaces branch when we re-render /e/n/i. Any thoughts or concerns?
[09:34] frobware, like we discussed last week? i.e. is_active always True?
[09:35] dimitern, yes. I think, initially, I would like to take the simplest approach and if we find that bridging the world is too much we take another look.
[09:41] dimitern, do you want to drop this (http://reviews.vapour.ws/r/3472/) as my 1.25-redux branch landed?
[09:43] frobware, sounds good
[09:43] frobware, will do
[10:01] voidspace, jam, frobware, standup?
[10:01] dimitern, I am keeping jam busy
[10:02] alexisb, sure, np just checking
[10:03] dimitern: omw
[10:27] dimitern: voidspace: have any time to chat briefly about what's going on here?
[10:29] jam, sure - when will be a good time for you?
[10:29] I'm writing up a summary email of the discussions from the morning, I have lunch pretty soon
[10:30] I seem to have a gap in about 4hrs, does that work for both of you?
[10:31] jam, works for me
[10:39] jam: should be fine
[10:43] bridge
[14:46] Bug #1533694 opened: inconsistent juju-gui and juju status
[14:52] Bug #1533694 changed: inconsistent juju-gui and juju status
[14:55] Bug #1533694 opened: inconsistent juju-gui and juju status
[15:01] yikes. what happened with that controller-rename test, sinzui ?
[15:01] http://reports.vapour.ws/releases/3503
[15:01] cherylj: the previous job left xenial-slave dirty. I am not having much success cleaning either
[15:01] :(
[15:02] cherylj: I am going to remove the failed 1.25 revision from testing to make controller-rename complete testing, then let 1.25 be rediscovered.
[15:02] sinzui: sounds good
[15:03] dimitern: ping
[15:03] dimitern: unping
[15:05] voidspace: *lol* didn't know this "command"
[15:05] TheMue: heh :-)
[15:05] I use it all the time...
[15:05] cool
[15:17] well, writing tests has just found two more bugs in my code
[15:18] voidspace: writing tests always does that
[15:19] it's very bad for the pride :p
[15:20] :-)
[15:22] voidspace: just writing tests for my latest feature too
[15:23] TheMue: heh, cool
[15:33] natefinch: katco: I'm trying to use the LXD provider, but it is failing to set up ssh keys on the started container
[15:33] I can see authorized-keys that make sense in the instanceconfig
[15:33] jam: ericsnow as well ^^^
[15:33] jam: that's a new one i think
[15:33] but /home/ubuntu/.ssh/ is empty
[15:34] that's very strange...
[15:34] I can see in cloud-init-output.log it is complaining that there was nothing in /home/ubuntu/.ssh as well
[15:34] (i think)
[15:35] jam: try adding that to your keyring
[15:35] I recall some of the providers not working for that
[15:35] sinzui: it was the key ring right? during the sprint with HP
[15:36] jam: talking this over with the team, but on the surface this doesn't sound like something the lxd provider would be responsible for. what build of juju are you using?
[15:37] jam: also, standard questions: what ubuntu+lxd are you running?
[15:37] jam: perrito666 : yes, add the $HOME/.ssh/ to your key ring.
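The sync.Once fix reviewed at [01:29]-[01:35] above can be sketched as follows. This is a minimal illustration, not the actual code from PR 4090: the `worker` type, field names, and `Kill` signature are invented here, but the pattern is the one discussed - `Kill` may legitimately be called many times, and `sync.Once` guarantees the channel is closed exactly once, avoiding a "close of closed channel" panic.

```go
package main

import (
	"fmt"
	"sync"
)

// worker is a hypothetical stand-in for the engine being fixed:
// Kill may be called any number of times, but the dead channel
// must only ever be closed once or the runtime panics.
type worker struct {
	once sync.Once
	dead chan struct{}
}

func newWorker() *worker {
	return &worker{dead: make(chan struct{})}
}

// Kill is safe to call repeatedly; sync.Once ensures the close
// inside the function literal runs exactly once.
func (w *worker) Kill() {
	w.once.Do(func() { close(w.dead) })
}

func main() {
	w := newWorker()
	w.Kill()
	w.Kill() // second call is a no-op: no double-close panic
	_, open := <-w.dead
	fmt.Println(open) // prints "false": the channel reads as closed
}
```

Without the `sync.Once` guard, the second `Kill` would panic, which matches the "flow such that close can only be called once" concern raised at [01:34].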
[15:37] katco: trusty with the stable ppa
[15:38] * perrito666 is working at a bar and just asked for a coffee in english
[15:38] * perrito666 hides under the table
[15:38] perrito666: if I attach to the instance, /home/ubuntu/.ssh is empty
[15:38] that doesn't sound like a keyring thing
[15:39] And if I add a debug print before the instance is spawned, I see the ssh public key in the instance config
[15:39] :|
[15:39] katco: I'm using nate's branch
[15:39] jam: ah... i'm surprised it's even getting that far? the trusty version doesn't have lxd built in because it's not >= go1.3
[15:39] katco: stable ppa
[15:39] ppa:ubuntu-lxc/lxd-stable IIRC
[15:39] katco: I'm using the resources branch
[15:39] ohhh for lxd
[15:40] ok ok gotcha
[15:40] I can "lxc attach" to the instance to debug it
[15:40] but I get the same result reliably right now
[15:40] (lxc exec ... bash)
[15:41] jam: ok, that makes a lot more sense. natefinch is telling me the pastebin i sent you was local provider, not lxd. we're investigating lxd now
[15:41] jam: we're probably just sitting on an unstable commit of master
[15:41] ci-info: no authorized ssh key fingerprints found for user ubuntu
[15:42] we know we're kind of out of date wrt to master... I'd just use local, if it's all the same.
[15:44] g
[15:45] natefinch: with local provider I can replicate the demo script
[15:46] jam: we're working on getting lxd to work (as well as resource-get)
[15:50] dimitern, frobware, voidspace: I can work around the issue of not having a default gateway set in MAAS by setting a guest's default gateway to its host's IP address. Since we have forwarding enabled on all our hosts this works.
[15:51] It is a bit rubbish, but it doesn't involve parsing any more files - the data is in state.
[15:52] dooferlad, does that mean the "real" gateway is not in state?
[15:53] frobware: yes
[15:53] dooferlad, we only have fwding on with the A-C feature flag
[15:53] dimitern: I have set no flags
[15:53] frobware: the host is getting its address by DHCP
[15:54] and the guest from the MAAS API
[15:54] it feels really messed up
[15:54] katco: so talking with the guys here about lxd, you *shouldn't* use the linuxcontainers.org images, because those don't have cloud-init installed
[15:54] you just need the lxd-images import
[15:54] as that is from cloud-images.ubuntu.com simplestreams info
[15:54] dooferlad, that's exactly what we do for A-C - host acts like a gateway, but that has its drawbacks
[15:55] jam: the instructions do say to use the cloud images, don't they? (i only used lxd provider once)
[15:55] like the need for fwding on, as well as packets coming from a container will have a different TTL and source as they arrive at the destination
[15:55] perrito666: the first thing it says is to "lxc add linuxcontainers.org" but then not use it
[15:56] katco: fwiw, it fails with master as well
[15:56] same "(publickey)" issue
[15:56] jam: bootstrapping lxd?
[15:56] jam: ah, I just ignored that one
[15:56] katco: yeah
[15:56] I'm using ubuntu-trusty and not ubuntu-wily, don't know if that matters
[15:56] IIRC they use a different cloud-init version
[15:57] jam: you need ubuntu wily for the bootstrap server at least
[15:57] jam: iirc
[15:57] jam: that should work, i'm using ubuntu-trusty
[15:57] anyway, I can reproduce this if you want a test case
[15:57] but I'm off to other things now
[15:57] dimitern: I don't know why we have the host using DHCP but not for the container. Is that how the MAAS devices API works?
[15:57] jam: absolutely, ty for the input
[15:57] I'm pretty sure this is just a result of using the wrong images
[15:58] jam: you're doing --upload-tools?
[15:58] katco: it fails right away if you don't
[15:58] (it fails with "I don't know what lxd provider is" cause it uses some other version)
[15:58] jam: i dunno, i think natefinch is correct. there's something unique with what you're doing. sounds like the image
[15:59] jam: i've got a bootstrapped env. right now on ubuntu-trusty
[15:59] jam: but we'll figure it out, open a bug if you don't mind
[15:59] katco: I just did "lxd-images import ubuntu --alias ubuntu"
[16:01] dooferlad, you have mail
[16:01] it's release-2015218
[16:01] jam: fingerprint?
[16:02] jam: actually what's lxc image list
[16:02] getting it, just a sec
[16:02] frobware: a victory for sanity!
[16:02] jam: np. understand if you have to run too
[16:02] ffs. just ran into a bug with lxd-images. I had deleted the image to double check the download, and now lxd-images import is giving me a "NoneType is not iterable" after doing the download
[16:03] jam: =|
=== benji__ is now known as benji
[16:07] katco: 0afd3f6ac was in my terminal backtrace
[16:08] that's different than mine... are they supposed to be the same fingerprint for different people and/or different times? I'm not sure I'd trust they don't get updated with security patches etc
[16:08] jam: i think that's the one i'm on: 0afd3f6ac0d7
[16:12] katco: https://bugs.launchpad.net/juju-core/+bug/1533742
[16:12] Bug #1533742: lxd provider fails to setup ssh keys during bootstrap
[16:12] natefinch: katco: 20151218 is dec-18 when they last updated it
[16:12] (according to the URL from lxd-images import)
[16:16] jam: i'm trying to repro from master as well (but my host is wily)
[16:17] katco: thanks.
[16:17] I would think the issue would be the image version vs the host, which I'm guessing you're thinking as well
[16:18] jam: what version of go are you compiling master with?
[16:18] katco: 1.6
[16:18] 1.5
[16:18] the one that you get after adding the lxd ppa
[16:18] 1.5.1
[16:19] jam: should be sufficient
[16:19] anyway, now need to get ready for dinner. but I do have your demo working. I think I'm scheduled for Fri morning
[16:20] so if you can add resource-get to show the resources working, that'd be great
[16:20] jam: k, we're also pushing for resource-get
[16:20] bbl, changing locations
[16:20] jam: thx john
[16:22] Bug #1533742 opened: lxd provider fails to setup ssh keys during bootstrap
[16:25] Bug #1533742 changed: lxd provider fails to setup ssh keys during bootstrap
[16:28] Bug #1533742 opened: lxd provider fails to setup ssh keys during bootstrap
[16:37] Bug #1533750 opened: 2.0-alpha1 stabilization
[16:37] Bug #1533751 opened: Increment minimum juju version for 2.0 upgrade to 1.25.3
[16:40] Bug #1533750 changed: 2.0-alpha1 stabilization
[16:40] Bug #1533751 changed: Increment minimum juju version for 2.0 upgrade to 1.25.3
[16:46] Bug #1533750 opened: 2.0-alpha1 stabilization
[16:46] Bug #1533751 opened: Increment minimum juju version for 2.0 upgrade to 1.25.3
[17:02] huh, I think that was an earthquake
[17:02] neat
[17:02] * natefinch says from a place that pretty much never has earthquakes.
[17:03] yeah, it's a bit disconcerting, I never remember feeling one in the uk
[17:18] katco: ping?
[17:18] cherylj: hey
[17:18] katco: got a minute?
[17:18] cherylj: yeah sure
[17:19] katco: https://plus.google.com/hangouts/_/canonical.com/cheryl-katco?authuser=0
[17:28] natefinch: don't forget to drag the card for the bug you're working on to in progress
[17:28] ericsnow: ping?
[17:28] katco: hey
[17:28] katco: sorry, I'm ready to go
[17:28] ericsnow: "no more than half an hour" - ericsnow 2.5h ago
[17:28] ;)
[17:29] ericsnow: at this point, let's wait until after lunch.
[17:29] Can I get a review? http://reviews.vapour.ws/r/3519/
[17:29] Small review
[17:29] katco: k
[17:30] cherylj: lgtm
[17:30] thanks, katco
[18:38] jam: ping
[18:41] good licl., ot
[18:41] it's 10:41pm where he is
[18:41] good luck, that is...
[18:41] unless he's in cape town, in which case it's 8:41
[18:42] still not good odds :)
[18:44] I assumed he was in capetown
[18:50] katco: sorry, was at lunch and then trying to figure out which of the networking bugs is least terrible to work on
[19:01] Bug #1533790 opened: GCE provider is very restrictive when using a config-file (json)
[19:01] Bug #1533792 opened: Panic: TestProvisioningMachinesWithSpacesSuccess
[19:27] is anyone aware of the current status of syslog vs mongodb?
[19:59] ericsnow: ok i'm ready to pair if you are
[19:59] katco: gimme half an hour
[19:59] katco: BRT :)
[19:59] rofl
[19:59] morning folks
[20:00] * thumper grunts
[20:00] almost 2500 emails
[20:00] heyo thumper
[20:31] thumper: returning from holidays?
[20:46] perrito666: aye
[20:46] first day back after three weeks
[20:46] thumper: must be rough
[20:46] uff, heavy
[20:46] lots of email
[20:46] it would be nice to have a "valid until" feature on emails
[20:46] :p
[20:46] "sorry this email went sour"
[20:47] :)
[20:47] jog__: are you around?
[20:48] cherylj, yes
=== jog__ is now known as jog
[20:48] jog__: are the MAASes currently occupied?
[20:49] cherylj, yes, there are 3 CI jobs running. What do you need?
[20:49] jog: if I could look at a running env while it's doing the deployer test, I might be able to get more data on the rabbitmq-server failures
[20:50] jog: or if I could do a simple depoy
[20:50] deploy even
[20:52] cherylj, If you want to bootstrap and just deploy a single rabbitmq-server charm, the MAAS should have capacity for that... deploying the entire OS bundle when other things are running is when we run into resource issues... I can deploy a rabbitmq-server for you.
[20:52] jog: it would need to be in a container
[20:53] ok
[20:53] so just deploy --to lxc:0
[20:53] I'm wondering if we've run into bug 1329930
[20:53] Bug #1329930: rabbitmq-server fails to deploy into lxc container on maas provider
[20:53] I see someone else has run into it recently
[21:02] cherylj: git blame says I should talk to you about https://bugs.launchpad.net/juju-core/+bug/1515289
[21:02] Bug #1515289: bootstrap node does not use the proxy to fetch tools from streams.c.c
[21:02] natefinch: I didn't do it!
[21:03] cherylj: totally possible :)
[21:03] cherylj: https://github.com/juju/juju/blob/master/cloudconfig/userdatacfg_unix.go#L260
[21:04] cherylj: so, I think the title of that bug is wrong
[21:04] cherylj: he can bootstrap just fine, which means he gets the tools from streams ok. It looks like the problem is actually that new nodes can't get the tools from the state server
[21:05] cherylj: sorry, I may be confused, however
[21:06] natefinch: he includes something else later that shows the get command the bootstrap node is trying to use to get the tools from streams
[21:07] I then inspected the bootstrap node, and found this in all-machines.log: <...>
[21:08] cherylj: ok, I think I understand what I'm seeing... I thought the --noproxy on the curl command from the other node was the problem, but that's probably just failing because the state server can't get the tools either
[21:09] natefinch: yeah, I believe that's the case
[21:09] cherylj: ok, nevermind then. Thanks for helping me understand the bug report :)
[21:15] cherylj, it's bootstrapping
[21:25] ericsnow: gah, you broke GenerateFingerprint(nil) :/
[21:32] Bug #1533849 opened: github.com/juju/juju/version [build failed] on windows
[21:35] Bug #1533849 changed: github.com/juju/juju/version [build failed] on windows
[21:41] Bug #1533849 opened: github.com/juju/juju/version [build failed] on windows
[21:42] stop it, mup
[21:42] review, pretty please: http://reviews.vapour.ws/r/3520/
[21:43] cherylj: shipit
[21:43] cherylj: are you not using goimports?
[21:44] Bug #1533849 changed: github.com/juju/juju/version [build failed] on windows
[21:44] cherylj: or hmm, is it because it's a _windows file, so it gets missed
[21:45] cherylj: was going to ask how you even landed code that wouldn't compile, but of course our landing bot isn't compiling on windows (yet)
[21:45] goimports rocks
[21:45] perrito666: yeah, I wasn't convinced until I used it, now I love it.
[21:46] I have it hooked on save
[21:47] yep, me too. The only problem I have is when I manually add an import (for example, one that needs to be named) and then accidentally hit save before I have used it.
[21:47] Bug #1533849 opened: github.com/juju/juju/version [build failed] on windows
[21:47] yup, happens sometimes
[21:48] but i usually write the code the other way
[21:49] perrito666: I have autocomplete, to help remind me what the names of things are, but it won't work if the import doesn't exist yet, so it's kind of a chicken and egg thing.
[21:50] aaand of course, defaulting mongo log breaks tests
[21:52] lol
[21:52] because of course it does
=== _thumper_ is now known as thumper
[21:59] perrito666: what, logging to mongo and having tests that inspect log output don't somehow turn two bad ideas into a good idea? :)
[22:00] yeah, logging one into one unreliable destination beats just logging all into one destination
[22:01] whelp, dinner time. Back later. Good luck storming the castle!
=== natefinch is now known as natefinch-afk
[22:08] jog: any luck getting rabbitmq-server to deploy?
[22:09] cherylj, not yet, I had to clean up left-behind nodes on the MAAS, it's redeploying now
[22:10] jog: can I ssh into it from finfolk and watch the deploy?
[22:11] cherylj, yes... export JUJU_HOME then run juju status -e min-lxc
[22:11] jog: thanks!
[22:16] cherylj, kill-controller ran despite --keep-env
[22:17] blargh
[22:26] cherylj: do you need me in the release standup?
[22:28] katco: no, I don't think so. Thanks!
[22:28] cherylj: cool ty, pairing atm
[22:46] perrito666: still here?
[22:47] perrito666: state environ.go, LatestAvailableTools, never written to, added back in Sep, is it going to be used?
[22:47] hmm... found something
[22:47] nm
[23:13] hey thumper, I figured out the cause of some of the test failures. I have the change merging for master now. Would you be able to rebase controller-rename once it merges?
[23:17] axw: anastasiamac going
[23:17] thumper: here,
[23:19] the controller-rename juju binary complains if destroy-environment is used with: ERROR "parallel-maas18" is a system; use 'juju system destroy' to destroy it
[23:19] thumper: sorry I was trying to sport away the effects of working on a desk
[23:19] it also complains with: ERROR unrecognized command: juju system
[23:20] thumper: that was added and later changed by menn0 iirc
[23:20] I was able to destroy with 'destroy-controller' - are we just in transition here, or is a bug needed?
[23:41] Bug #1533896 opened: Metadata worker repeatedly errors on new azure
[23:44] Bug #1533896 changed: Metadata worker repeatedly errors on new azure
[23:47] Bug #1533896 opened: Metadata worker repeatedly errors on new azure