[09:03] <frobware> jam: you around today?
[09:03]  * frobware tries to find a 2FA device to check mail/cal
[10:50] <rogpeppe> jam: ping?
[10:53] <frobware> rogpeppe: curious as to why we are merging into juju:master for zaputil
[10:54] <rogpeppe> frobware: it's not juju:master - it's a new project
[10:54] <rogpeppe> frobware: ... unless i've borked things up horribly :)
[10:55] <rogpeppe> frobware: ah, it's juju:master because github only prints the username not the project in that place
[10:55] <frobware> rogpeppe: well, possibly not. the GH page says juju:master, but I notice the popup says juju/zaputil:master... horrible UI.
[10:55] <frobware> rogpeppe: apologies for the noise.
[10:56]  * frobware sticks to the truth (aka the CLI)
[10:56] <rogpeppe> frobware: np
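(editor's note: a way to verify a PR's true target from the CLI rather than the GitHub UI, as frobware alludes to above; the PR number here is hypothetical:
    git ls-remote --heads https://github.com/juju/zaputil.git          # list the branches that actually exist on the new project
    git fetch https://github.com/juju/zaputil.git pull/1/head:pr-1     # fetch a PR's head locally to inspect it
)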
[10:58] <frobware> access to reviews.vapour.ws is broken. anybody around to kick it back into life?
[11:51] <rick_h> frobware: wfm?
[11:51] <frobware> rick_h: hmm. I always get a 500
[11:52] <rick_h> frobware: oh what page? I could login at least
[11:52] <rick_h> frobware: and click on a release test result summary
[11:52] <frobware> rick_h: http://reviews.vapour.ws/
[11:52] <frobware> rick_h: ah.....
[11:52] <rick_h> frobware: oh, that's the old reviewboard reviews site. Is it even up any more?
[11:53] <rick_h> frobware: I think they took it down, or maybe they left it up for old data but no one's noticed it went boom
[11:53] <frobware> rick_h: hehe. too much (little) tab completion.
[11:53] <frobware> rick_h: nevermind...
[11:53]  * frobware blushes
[11:53] <frobware> rick_h: indeed, http://reports.vapour.ws/ wfm too. :)
[11:54] <rick_h> frobware: :)
[11:54] <frobware> rick_h: what doesn't work atm is dynamic bridges. :(
[11:54] <frobware> rick_h: just digging through some of the changes jam made since I left
[11:55] <rick_h> frobware: k
[11:55] <rick_h> frobware: we talked a bit last week so I know about some of it, but not sure where it's at today
[11:56] <frobware> rick_h: would be great to catch up; also trying to figure out what also needs to be in develop vis-à-vis 2.1-dynamic-bridges
[11:57] <frobware> rick_h: this looks like a long list - http://reports.vapour.ws/releases/4680
[11:58] <rick_h> frobware: I'm going to guess if it's a long list that there's some dirty substrate stuff with folks not around to help keep an eye on things
[11:58] <rick_h> frobware: but not 100% sure
[11:59] <frobware> rick_h: I'm pretty sure https://bugs.launchpad.net/juju/+bug/1652161 will break the functional-container-networking tests
[11:59] <mup> Bug #1652161: juju-2.1-beta3 cannot add LXD container after host machine has rebooted <lxd> <network> <juju:New> <https://launchpad.net/bugs/1652161>
[12:00] <frobware> rick_h: with or without dynamic bridging
[12:02] <rick_h> frobware: k, will have to look at that. So we're not detecting an already configured lxd and trying to reconfigure it?
[12:06] <perrito666> wow the channel is back alive
[12:08] <rick_h> holiday is over, back to work
[12:13] <voidspace> rick_h: yep :-/
[12:13] <voidspace> :-)
[12:14] <voidspace> rick_h: ping
[12:14] <voidspace> rick_h: I'm working on bug #1631254 - lxd containers do not autostart
[12:14] <mup> Bug #1631254: [2.0rc3] lxd containers do not autostart <rteam> <juju:In Progress by mfoord> <https://launchpad.net/bugs/1631254>
[12:15] <voidspace> rick_h: which is marked as critical now
[12:15] <voidspace> rick_h: the *specific bug*, as described, has been fixed for us by lxd
[12:15] <voidspace> rick_h: https://github.com/lxc/lxd/issues/2469
[12:15] <voidspace> rick_h: there is still a bug in juju that needs fixing, and I have a fix - just need tests
[12:16] <voidspace> rick_h: however I can't manually test because I can't repro (which is what I've spent this morning trying to do) because LXD have fixed the issue for us...
[12:17] <voidspace> rick_h: I can manually verify that my fix "does the right thing" with regard to the generated config however
[12:17] <voidspace> rick_h: but the bug can be downgraded from critical if it's holding anything up
[12:17] <rick_h> voidspace: so can't we reproduce by setting the config value to autostart, manually shutting down the lxd container, then rebooting? It doesn't come up before, vs. it should with the config update?
[12:18] <rick_h> voidspace: looking at the lxd issue, that was just that "if the container is running, restart it"
[12:18] <rick_h> voidspace: so setting the config that it should autostart, manually turning it off, and rebooting the machine should still demonstrate the change you're making?
[12:18] <voidspace> rick_h: nope, it now restarts fine even if you hard shutdown the host
[12:19] <voidspace> rick_h: without my fix
[12:19] <voidspace> rick_h: wait
[12:19] <voidspace> rick_h: manually shutdown the lxd before hard reset
[12:19] <rick_h> voidspace: I mean lxc stop the container first
[12:19] <voidspace> rick_h: and with the fix we would expect an auto-restart and without we wouldn't
[12:19] <rick_h> voidspace: then restart the host
[12:19] <voidspace> rick_h: yep, good point
[12:19] <voidspace> rick_h: thanks
[12:19] <rick_h> voidspace: all good
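(editor's note: a sketch of the repro rick_h describes, assuming a container named juju-0-lxd-0 — the name is hypothetical:
    lxc config get juju-0-lxd-0 boot.autostart   # the fix should set this to true
    lxc stop juju-0-lxd-0                        # manually stop the container first
    sudo reboot                                  # then restart the host
    lxc list                                     # with the fix the container should come back RUNNING; without it, it stays STOPPED
)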
[12:20] <voidspace> I have my develop env bootstrapped, so I can try that *now*
[12:20] <voidspace> rick_h: appreciated
[12:22] <voidspace> rick_h: yep, on develop stopping then restarting the host does not restart the lxd container
[12:22] <voidspace> rick_h: I actually wonder if my fix *does* change that....
[12:22] <voidspace> I'll find out...
[12:24] <rick_h> voidspace: so my understanding is that your branch fixes the configuration in lxd that controls that, so I cross my fingers
[12:24] <voidspace> rick_h: yep
[13:08]  * voidspace lunch break
[13:09] <voidspace> no school today, so no problem with second standup, beyond my normal philosophical objections to having two standups a day... :-)
[13:41] <frankban> wallyworld: hey
[13:41] <wallyworld> hi there
[13:43] <frankban> wallyworld: I was going to ask a question but, as it happens, I just found the answer. sorry, happy new year :-)
[13:43] <wallyworld> no worries. and same to you
[14:47] <natefinch> sinzui: my latest merge build failed with /usr/lib/go-1.6/pkg/tool/linux_amd64/link: running gcc failed: fork/exec /usr/bin/gcc: cannot allocate memory
[14:48] <natefinch> http://juju-ci.vapour.ws:8080/job/github-merge-juju/9935/console
[14:49] <natefinch> abentley: ^
[14:49] <perrito666> gcc?
[14:49] <natefinch> hmm yeah weird
[14:53] <sinzui> natefinch: perrito666: I don't know why gcc is being used, but I believe it has something to do with a linker that will not actually have work to do.
[14:53] <sinzui> natefinch: We can retry the merge, maybe we need more memory for the test machine
[14:54] <natefinch> yeah, building Juju takes a ton of memory... I'm sure gcc is just something used in the background... we can retry, but if there's nothing generally wrong with the machine (errant processes running, etc.) then more memory will definitely be in order
[14:54] <perrito666> sinzui: seems to be using gccgo?
[14:56] <sinzui> perrito666: /usr/lib/go-1.6/pkg is not the location of gccgo.
[15:04] <natefinch> sinzui: multiple merge jobs have failed with OOM... if there's nothing actually wrong with the machine, then we need a bigger machine
[15:04] <perrito666> natefinch: something might have leaked
[15:04] <sinzui> perrito666: natefinch: yep. I just deleted 10 containers to reclaim memory
[15:04] <perrito666> natefinch: sounds like something fixable with a restart
[16:07] <frobware> macgreagoir, voidspace: PTAL @ https://github.com/juju/juju/pull/6758 - explains some of why dynamic bridging isn't currently working.
[16:20] <balloons> sinzui, natefinch, yeah that job is still running on the old slave. It still does the actual merging
[16:21] <natefinch> balloons: how much RAM does it have, out of curiosity?  obviously 10 errant containers are going to skew things pretty badly
[16:24] <balloons> natefinch, 14 gig.
[16:25] <natefinch> balloons: so is the merge job the only thing we run there? Because 14 gig should definitely be enough (again, w/o containers)
[16:25] <balloons> natefinch, it is 14 gig. But I lied, I was thinking the merges were done by the old slave. But they aren't. All PR jenkins jobs are running on it
[16:26] <natefinch> balloons: but serially, right?
[16:26] <balloons> not entirely. The merges are, but the pre-commit jobs are open season
[16:28] <balloons> natefinch, however your job was the only thing running on the box at the time. Looking back over the last month there's hardly ever more than 1 job running, so
[16:29] <natefinch> so probably just the leaking containers.  maybe we need a job that runs occasionally when the machine is idle and cleans them up?
[16:30] <natefinch> (obviously the best answer is not to leak containers, but that can be hard)
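(editor's note: a minimal sketch of such a cleanup job, assuming leaked containers follow a juju-* naming pattern and the slave runs LXD 2.x — both assumptions:
    lxc list --format csv -c n | grep '^juju-' | xargs -r -n1 lxc delete --force   # remove leftover juju containers when the slave is idle
)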
[16:40] <alexisb> happy 2017 all!
[16:43] <perrito666> alexisb: <grumpycat>NO</grumpycat>
[16:43] <perrito666> :p
[16:50] <redir> Good time-zone appropriate period, and happy new year juju-dev
[17:44] <redir> frobware: yt?
[17:44] <frobware> redir: yep
[17:44] <frobware> redir: qemu and spaces?
[17:45] <frobware> redir: HNY :)
[17:45] <redir> HNY!
[17:45] <redir> frobware: curious what you thought of this: https://github.com/juju/juju/pull/6758
[17:45] <frobware> redir: that's broken for sure. Was just adding tests for that case as we speak
[17:47] <redir> might be related to https://github.com/juju/juju/pull/6748#issuecomment-268935865 frobware
[17:49] <frobware> redir: I confess to not trying this with KVM. I ran out of time towards the end of the year.
[17:52] <redir> when that change lands I'll cherry pick and cross my fingers it puts the rabbit back in the hat
[17:52] <frobware> redir: want to cherry-pick with no-commit to try out now?
[17:52] <frobware> redir: not sure I'll end up with my tests completed before my EOD (soon-ish)
[17:53] <redir> I s'pose that can't hurt
[17:53] <frobware> redir: given this is so broken I would actually land the change as-is, with the promise to add unit tests.
[17:53] <frobware> redir: this would also give the 2.1-dynamic-bridges branch a chance to get a CI run overnight.
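(editor's note: the no-commit cherry-pick frobware suggests, fetching the PR head directly; this assumes the PR is a single commit:
    git fetch https://github.com/juju/juju.git pull/6758/head
    git cherry-pick -n FETCH_HEAD   # apply the change to the working tree without committing
)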
[17:55] <voidspace> alexisb: happy new year
[17:55] <alexisb> same to you voidspace!  hope you had a great holiday
[17:56] <voidspace> alexisb: yeah, really nice - lovely combination of busy-ness with people and relaxation with family
[17:56] <voidspace> alexisb: I return to work with a renewed and invigorated hatred for my job
[17:56] <voidspace> oops, didn't mean to say that
[17:56] <voidspace> ;-)
[17:56] <voidspace> alexisb: I hope you had a good break too, and managed to get away from work
[17:57] <alexisb> voidspace, I did, disconnected for 2+ weeks - it was awesome
[18:37] <natefinch> rick_h: you around?
[18:38] <rick_h> natefinch: otp what's up?
[18:38] <natefinch> rick_h: nothing super important, just wanted to talk about that deployer-to-manual email
[18:39] <rick_h> natefinch: k, meet you in core
[19:19] <TheMue> Happy New Year, my Juju fellows
[19:22] <natefinch> TheMue: happy new year!
[19:27] <TheMue> natefinch: back to work yesterday? I did; a bit tired from vacation at first, but better today. Got a new colleague on my team and I'm teaching him Go. :)
[19:28] <natefinch> Ahh cool, must be fun teaching someone Go.
[19:29] <TheMue> natefinch: yes, and he has a good feel for it. So now we are three on my project.
[19:29] <natefinch> TheMue: That's great
[19:30] <TheMue> natefinch: absolutely, I'm happy I'm able to place Go in a new project.
[20:44] <natefinch> sinzui, balloons, abentley: is the vsphere listed in cloud-city's clouds.yaml running?  Is there anything other than being on VPN that I need to do to access it?  It's listed at 10.245.0.134
[20:44]  * redir lunches
[20:51] <babbageclunk> Happy new year everyone!
[20:53] <redir> HNY babbageclunk
[20:54] <babbageclunk> perrito666: I added Cloud.Name in https://github.com/juju/juju/pull/6735 - you were right, it looks better.
[20:55] <perrito666> I saw your change pass by but was in EOY mode :p
[20:55] <babbageclunk> perrito666: yeah, I figured! :)
[20:56] <babbageclunk> perrito666: I'll get jam to take another look today.
[20:56] <perrito666> babbageclunk: looks much better indeed
[20:57] <abentley> natefinch: It is running.  You probably need to be on a specific machine to access it.  I am looking that up.
[20:57] <perrito666> natefinch: you need to ask IS to grant you permission to see it on the vpn
[20:57] <perrito666> our vpn is quite fine grained
[20:58] <perrito666> natefinch: actually someone from OIL needs to open an RT for IS to grant you permissions <-- abentley sinzui babbageclunk
[20:58] <perrito666> sorry that was for balloons
[20:59] <perrito666> meh, I can't bootstrap with lxd because it dies waiting for an address (using develop). Is this still a problem for anyone else?
[21:00] <sinzui> perrito666: I sent natefinch the .ssh/config to get to the host we test with.
[21:00] <perrito666> nice "hacky vpn/vpn" :p
[21:02] <natefinch> sinzui: thanks
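(editor's note: the gist of such a "hacky vpn" setup — tunneling to the test host through an intermediate machine; the bastion name and user are hypothetical:
    ssh -o ProxyCommand="ssh -W %h:%p <bastion>" ubuntu@10.245.0.134   # reach the host via a machine that can see 10.245.0.134
)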
[21:02] <perrito666> ok EOD until standup, see you all later
[21:04] <abentley> sinzui: ping for standup
[21:35] <redir> ping alexisb
[21:35] <alexisb> heya redir omw
[21:35] <redir> k
[21:35] <alexisb> need about 5 more minutes
[21:38] <natefinch> so, I apt installed golang-1.6 and .... I still have no "go" command?  WTF?
[21:38] <redir> alexisb: np
[22:17] <babbageclunk> wallyworld: reviewed https://github.com/juju/juju/pull/6757
[22:18] <wallyworld> babbageclunk: great ty, will look
[22:18] <babbageclunk> wallyworld: around for a hangout?
[22:18] <wallyworld> sure
[22:18] <redir> wallyworld: you're back?
[22:18] <wallyworld> supposedly
[22:18] <wallyworld> since yesterday
[22:19] <redir> HNY wallyworld :)
[22:19] <babbageclunk> wallyworld: https://hangouts.google.com/hangouts/_/canonical.com/babbageclunk
[22:19] <wallyworld> same to you
[23:02] <perrito666> natefinch: golang-go is the package iirc
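(editor's note: on Ubuntu the versioned golang-1.6 package installs its tools under /usr/lib/go-1.6/bin, which is not on PATH; either of these gets a working "go" command:
    sudo apt install golang-go                # meta-package that puts go on PATH
    export PATH=/usr/lib/go-1.6/bin:$PATH     # or use the versioned toolchain directly
)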
[23:47] <perrito666> meh, my DNS is really acting weird today
[23:47] <perrito666> at least I believe it's the DNS
[23:48] <redir> maybe it's just acting like DNS
[23:57] <perrito666> has anyone experienced juju develop not being able to bootstrap lxd because it never gets an address?
[23:57] <perrito666> ERROR failed to bootstrap model: waited for 20m0s without getting any addresses
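(editor's note: a quick check, independent of juju, that the LXD bridge hands out addresses; assumes the default lxdbr0 bridge and LXD >= 2.3 for the network subcommand:
    lxc network show lxdbr0                                 # confirm ipv4.address and DHCP are configured
    lxc launch ubuntu:16.04 addrtest && lxc list addrtest   # a plain container should get an IPv4 address
)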
[23:57] <blahdeblah> Anyone know when we will get 1.25.9 in the stable PPA for xenial?