[00:16] <anastasiamac> is anyone working with tip of master?
[00:16] <anastasiamac> i cannot compile..
[00:16] <anastasiamac> mongo/open.go:120: undefined: tls.TLS_RSA_WITH_AES_128_GCM_SHA256
[00:16] <anastasiamac> mongo/open.go:121: undefined: tls.TLS_RSA_WITH_AES_256_GCM_SHA384
[00:16] <anastasiamac> (and yes i've run godeps) :-D
[00:17] <anastasiamac> any suggestions?
[00:41] <natefinch> anastasiamac: using go 1.6?
[00:42] <natefinch> anastasiamac: my guess is you're using an older version that didn't have that defined
[00:49] <anastasiamac> natefinch: no.. not go1.6 :( i guess i need to move \o/
[00:52] <natefinch> anastasiamac: we're officially a go 1.6 shop now. Which is nice except for the build times.  If you're feeling brave, use master of the go repo. They keep it really stable and nice, and compile times have dropped by about 25%
[00:53] <anastasiamac> natefinch: my reservation was that CI is not fully 1.6?...
[00:54] <anastasiamac> natefinch: u know feeling brave and being brave are a bit different ... m certainly not feeling too brave today :D
[00:54] <natefinch> anastasiamac: the call was made a day or two ago that we're abandoning go 1.2 entirely in CI.  The only reason we were keeping it around was for windows and I think centos tests that failed more on 1.6... but it was decided that we should just fix the 1.6 tests, not rely on 1.2 to make those pass.
[00:55] <natefinch> anastasiamac: which I entirely agree with... given that we were shipping 1.6, but only testing on 1.2
[00:55] <natefinch> (for windows and centos)
[00:56] <anastasiamac> natefinch: sure.. we still have PRs that require 1.6 and we are holding off landing them coz .. well we are not brave :D
[00:57] <natefinch> anastasiamac: well, time to be brave, because we're 100% 1.6 now, AFAIK.
[00:58] <cmars> wallyworld, CQRS? http://martinfowler.com/bliki/CQRS.html
[00:58] <wallyworld> cmars: yeah, that's it
[00:58] <cmars> cool
[00:59] <anastasiamac> axw: r we brave enough ^^^ to land azure PR once master is unblocked?
[01:00] <axw> anastasiamac: should be fine, I'll probably bump up to the latest SDK before doing so
[01:00] <axw> anastasiamac: intending to test it again today, and then base the retry changes on top
[01:00] <anastasiamac> axw: \o/
[01:34] <redir> go 2016:)
[01:34] <redir> g'nite juju-dev
[02:18] <natefinch> hmm... good to know.. for some reason ssh doesn't like my id_rsa key pair
[03:10] <mup> Bug #1578898 opened: cmd/juju/commands: bootstrap tests are fetching GUI metadata from streams.canonical.com <juju-core:Triaged> <https://launchpad.net/bugs/1578898>
[04:10] <mup> Bug #1578906 opened: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1578906>
[09:06] <dooferlad> voidspace: guess you are the only sapphire person online today so no standup?
[09:06] <voidspace> dooferlad: I was there briefly
[09:07] <voidspace> dooferlad: but on my own so I left :-)
[09:07] <voidspace> dooferlad: where's dimiter?
[09:07] <dooferlad> voidspace: no idea
[09:07] <voidspace> ah, holiday
[09:07] <voidspace> public holidays on two fridays in a row - nice :-)
[09:08] <dooferlad> voidspace: heh. I could do with a holiday. Or a day off sick.
[09:08] <dooferlad> voidspace: or just passing out for 24 hours
[09:08] <voidspace> dooferlad: TGIF
[09:08] <dooferlad> voidspace: something like that
[09:09] <voidspace> :-)
[09:09] <voidspace> nice and sunny here - and hopefully lasting into the weekend
[09:09] <voidspace> we're planning a trip to bluebell woods
[09:09] <voidspace> dooferlad: you got any plans?
[09:09] <dooferlad> probably sleep
[09:09] <dooferlad> Liz and I both have stinking colds
[09:10] <voidspace> :-(
[09:10] <voidspace> hope you get well
[09:10] <voidspace> *quickly
[09:10] <dooferlad> thanks :-|
[09:10] <voidspace> too fast on the old enter key
[09:15] <rogpeppe> anyone know how to remove a model in state (using the state package API) ?
[09:18] <rogpeppe> fwereade: I seem to have forgotten how to remove things from the state... ^ :)
[09:18] <fwereade> rogpeppe, model.Destroy()?
[09:18] <rogpeppe> fwereade: that sets life to dead but doesn't seem to remove it
[09:20] <rogpeppe> fwereade: i'm wondering if Cleanup is the thing to use
[09:21] <fwereade> rogpeppe, the last thing to do with a dead model is RemoveAllModelDocs
[09:22] <rogpeppe> fwereade: ah, thanks. I think I might've expected that to be named Model.Remove
[09:24] <fwereade> rogpeppe, yeah, that would indeed be the sane thing to call it. not sure why the implementation details got leaked into the name there
[09:27] <rogpeppe> fwereade: that works BTW, thanks!
[09:27] <fwereade> rogpeppe, cool :)
[10:25] <rogpeppe> here's a fix for juju using excessive numbers of mgo sockets in some cases: http://reviews.vapour.ws/r/4783/
[10:32] <mup> Bug #1579002 opened: state: uses too many mgo sockets in loops <juju-core:New> <https://launchpad.net/bugs/1579002>
[10:35] <mup> Bug #1579002 changed: state: uses too many mgo sockets in loops <juju-core:New> <https://launchpad.net/bugs/1579002>
[10:53] <mup> Bug #1579002 opened: state: uses too many mgo sockets in loops <juju-core:New> <https://launchpad.net/bugs/1579002>
[10:53] <mup> Bug #1579010 opened: state: removing model can generate huge transactions <juju-core:New> <https://launchpad.net/bugs/1579010>
[10:55] <bogdanteleaga> katco, might be too late, but pong
[10:56] <mup> Bug #1579010 changed: state: removing model can generate huge transactions <juju-core:New> <https://launchpad.net/bugs/1579010>
[11:02] <mup> Bug #1579010 opened: state: removing model can generate huge transactions <juju-core:New> <https://launchpad.net/bugs/1579010>
[12:30] <mup> Bug #1579051 opened: Race in juju/controller/destroy and TestDestroyCommandConfirmation <ci> <race-condition> <regression> <juju-core:New> <https://launchpad.net/bugs/1579051>
[12:51] <mup> Bug #1462966 changed: worker/provisioner: multiple data races <race-condition> <juju-core:Fix Released> <https://launchpad.net/bugs/1462966>
[12:51] <mup> Bug #1470297 changed: worker/uniter/storage: data race in test <race-condition> <unit-tests> <juju-core:Fix Released> <https://launchpad.net/bugs/1470297>
[12:51] <mup> Bug #1519183 changed: featuretests: tests fail under -race because of crappy timing issues <2.0-count> <race-condition> <juju-core:Fix Released> <https://launchpad.net/bugs/1519183>
[12:51] <mup> Bug #1579057 opened: Race in github.com/juju/juju/worker/catacomb/catacomb <blocker> <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1579057>
[12:51] <mup> Bug #1579059 opened: MainSuite.TestFirstRun2xFrom1x fails on windows <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1579059>
[13:21] <mup> Bug #1579062 opened: localHTTPSServerSuite no trusty arm64/ppc64el images <arm64> <blocker> <ci> <ppc64el> <regression> <test-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1579062>
[13:30] <mup> Bug #1571914 changed: github.com/juju/juju/cmd/jujud unit tests fail if xenial is the LTS <blocker> <test-failure> <juju-core:Fix Released by reedobrien> <juju-core 1.25:Fix Released by reedobrien> <https://launchpad.net/bugs/1571914>
[13:30] <mup> Bug #1576021 changed: 1.25.6 cannot deploy on CI maas 1.9 or 1.8 <blocker> <ci> <maas-provider> <regression> <juju-ci-tools:Triaged> <juju-core:Invalid by dooferlad> <juju-core 1.25:Fix Released by dooferlad> <https://launchpad.net/bugs/1576021>
[13:30] <mup> Bug #1576368 changed: blockdevice 2.0 schema check failed: model: expected string, got nothing <blocker> <ci> <deploy> <maas-provider> <juju-core:Fix Released by 2-xtian> <https://launchpad.net/bugs/1576368>
[14:47] <katco> fwereade: hey, having trouble figuring out how to implement a timeout with a worker (i.e. worker.Wait(), but also continue after a certain amount of time). is there any prior art?
[14:51] <alexisb> katco, happy friday!
[14:51] <katco> alexisb: it is friday
[14:51] <alexisb> katco, not urgent but when you have a chance we need to add this bug on the bug squad board w/ blocker tag: https://bugs.launchpad.net/juju-core/+bug/1578906
[14:51] <mup> Bug #1578906: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1578906>
[14:52] <natefinch> katco: at least no one will ask you to get up at 5:30 tomorrow (probably) :D
[14:52] <katco> alexisb: i'll do that now
[14:53] <fwereade> katco, sorry, restate please?
[14:55] <katco> fwereade: if i write a worker, and i only want to continue contingent on waiting on the worker or a timeout, can you point me to any prior art?
[14:56] <katco> fwereade: i want to do something like this: w := NewFooWorker; select { case <-w.Wait: case <- time.After(5*time.Minute)}
[14:58] <fwereade> katco, ah ok -- I think I'd just do something like http://paste.ubuntu.com/16260449/ -- and then Wait for the worker which I'd trust to notify me if it hadn't really finished its job
[14:59] <katco> fwereade: (headed to meeting) where does the abort channel come from? that's what i can't figure out how to safely get out of a worker
[14:59] <fwereade> katco, the abort channel is the abort channel for the agent, or whatever it is, that's running this local logic
[15:00] <fwereade> katco, (leaving a goroutine leaked to kill an already-dead worker in the future, if the process survives that long, is not really a big deal but it's untidy ;p)
[15:01] <fwereade> katco, (and it makes it hard to move the logic around safely too, I think)
[15:02] <fwereade> katco, sane?
[15:03] <katco> ericsnow: standup time
[15:03] <katco> fwereade: will digest in a bit, sorry
[15:04] <fwereade> katco, historical interlude: Dead() and Err() are the methods you might have to use on, say, an old-style watcher that mixes lifetime and notification concerns
[15:04] <fwereade> katco, np :)
[15:07] <fwereade> katco, if you *need* a Dead chan, you could build one like this: http://paste.ubuntu.com/16260672/
[15:21] <alexisb> fwereade, have you seen this: https://bugs.launchpad.net/juju-core/+bug/1579057
[15:21] <mup> Bug #1579057: Race in github.com/juju/juju/worker/catacomb/catacomb <blocker> <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1579057>
[15:21] <fwereade> katco, I think there's something we haven't quite figured out re: workers that should run indefinitely, vs workers that exist for a specific task: *most* of our workers are indefinite, and Kill->no-error makes sense there, narrowly; but for workers that are *expected* to complete of their own accord, ErrAborted makes sense
[15:21] <fwereade> alexisb, looking
[15:25] <fwereade> alexisb, looks like it's in the uniter remotestate stuff? can take a look shortly though
[15:25] <alexisb> fwereade, that would be awesome, thank you
[15:31] <alexisb> katco, I added the remaining critical blockers for beta7 on the bug squad board
[15:31] <alexisb> katco, if people are looking for something to do they are there :)
[15:32] <katco> alexisb: ta... ericsnow picked up the rackspace one
[15:33] <alexisb> I saw that :)
[15:33] <alexisb> and fwereade picked up one and dimitern picked up another
[15:33] <katco> natefinch: redir: more blocking bugs are on the board for when you're ready
[15:33] <alexisb> so just 2 left
[15:33] <katco> alexisb: ... for now (dun dun dun!)
[15:34] <alexisb> natefinch, katco: if nate could finish up the manual provider bug (including making sure 2.0 works) that would be awesome, and priority
[15:34] <katco> alexisb: he has a PR up against the upstream project
[15:34] <katco> alexisb: but i think it's contingent on them accepting it, or us vendoring that project and carrying the patch
[15:35] <natefinch> yep.. I'm writing a test for it now
[15:35] <alexisb> ah I see
[15:35] <alexisb> ok, sorry missed that piece
[15:35] <alexisb> natefinch, please put test results in the bug
[15:38] <katco> fwereade: ok, digested your comments. so manage the timeout outside the worker and kill when it's timed-out?
[15:39] <fwereade> katco, I think so, yeah
[15:40] <fwereade> katco, and I think I've realised something about abort chans -- *workers* don't actually need them, because you can always just Kill externally
[15:40] <fwereade> katco, it's only long-running *funcs* that need abort chans
[15:40] <fwereade> katco, I think :)
[15:42] <katco> fwereade: hm. i'd have to use your pastebin to expose a channel that signals when the worker is dead?
[15:42] <fwereade> katco, you can just Wait, can't you?
[15:43] <katco> fwereade: for reboot? no, as written, the reboot will go ahead after a timeout, even if we're still seeing containers up
[15:43] <fwereade> katco, start the worker going, start the timeout-killer, wait for the worker to stop one way or the other
[15:44] <katco> fwereade: oh, so pass an abort into the worker that comes from the timeout-killer
[15:44] <fwereade> katco, it doesn't even need that, does it? the interaction via Kill() is all we need
[15:45] <katco> fwereade: so something like: go func() { time.Sleep(5*time.Minute); worker.Kill() }?
[15:46] <fwereade> katco, (and if you want it to be stopped when the enclosing context stops, manage that via catacomb and you get stop-when-parent-stopped for free)
[15:46] <fwereade> katco, yeah
[15:47] <fwereade> katco, that's the leaked goroutine I don't *really* care about -- the abort chan in the first pastebin would be to clean that up when the worker stopped
[15:51] <mup> Bug #1579127 opened: Cannot deploy windows nano <blocker> <deploy> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1579127>
[16:26] <fwereade> https://github.com/juju/juju/pull/5356 hasn't been picked up by RB for some reason, but fixes lp:1579057
[16:26] <fwereade> cmars, you free to review^^?
[16:26] <fwereade> alexisb, sinzui: ^^
[16:27] <cmars> fwereade, sure, looking
[16:29] <cmars> fwereade, LGTM
[16:30] <fwereade> cmars, cheers
[16:30] <fwereade> alexisb, https://bugs.launchpad.net/juju-core/+bug/1579057 $$merge$$ing
[16:30] <mup> Bug #1579057: Race in github.com/juju/juju/worker/catacomb/catacomb <blocker> <ci> <race-condition> <regression> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1579057>
[16:32] <alexisb> thanks!
[16:41] <mgz> ericsnow: I believe I have fixed rackspace
[16:41] <ericsnow> mgz: sweet
[16:42] <perrito666> well lxd hates me today
[16:42] <mgz> just today?
[16:42] <rick_h_> mgz: yay, does it need the next beta to work?
[16:43] <rick_h_> mgz: or trunk I should say?
[16:43] <mgz> rick_h_: no, it seems like part of our image streams got deleted from the account
[16:43] <rick_h_> mgz: ok cool ty much!
[16:43] <mgz> I regenerated using the script from last time, and it passed the deploy test
[16:44] <mgz> rick_h_: I have a couple of (somewhat related) things to bug you about if you have a sec
[16:46] <mgz> rick_h_: we have a couple of failures in CI due to out of date MAAS images, which are nominally cpc-managed (but I think smoser knows most about)
[16:46] <rick_h_> mgz: ok
[16:47] <mgz> rick_h_: bug 1568895 and bug 1576873
[16:47] <mup> Bug #1568895: Cannot add MAAS-based LXD containers in 2.0beta4 on trusty <ci> <jujuqa> <lxd> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1568895>
[16:47] <mup> Bug #1576873: Juju2 cannot deploy centos workloads on maas 1.9 <blocker> <centos> <ci> <maas-provider> <regression> <cloud-images:New> <juju-core:Won't Fix by natefinch> <https://launchpad.net/bugs/1576873>
[16:47] <mgz> so, if you have pat or someone on hand to kick that would be ace, sinzui is also going to email and beg
[16:48]  * rick_h_ is looking
[16:49] <mgz> rick_h_: subject #2, just making sure you've mentioned to john grimm this week that we really want someone on the server team who'll respond to packaging review requests etc from us
[16:51] <mup> Bug #1579148 opened: dhclient needs reconfiguring after bridge set up <network> <juju-core:Triaged> <https://launchpad.net/bugs/1579148>
[17:06] <mup> Bug #1578906 changed: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-ci-tools:In Progress by gz> <juju-core:Invalid by ericsnowcurrently> <https://launchpad.net/bugs/1578906>
[17:18] <mup> Bug #1578906 opened: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-ci-tools:In Progress by gz> <juju-core:Invalid by ericsnowcurrently> <https://launchpad.net/bugs/1578906>
[17:19] <sinzui> mgz: rick_h_ : I am returning to the maas email now. I had to rescue a child from school.
[17:21] <mgz> sinzui: had good news, rick said both bugs are being worked on
[17:21] <sinzui> the maas issue?
[17:22] <mup> Bug #1578906 changed: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-ci-tools:In Progress by gz> <juju-core:Invalid by ericsnowcurrently> <https://launchpad.net/bugs/1578906>
[18:19] <mup> Bug #1579173 opened: wily onfigSuite.TestNewModelConfig test failure lxd <blocker> <ci> <lxd> <regression> <test-failure> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1579173>
[18:22] <mup> Bug #1579173 changed: wily onfigSuite.TestNewModelConfig test failure lxd <blocker> <ci> <lxd> <regression> <test-failure> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1579173>
[18:27] <rick_h_> sinzui: mgz yes, last report was images with backports were in building/testing
[18:27] <sinzui> \o/
[18:27] <sinzui> thank you rick_h_
[18:31] <mup> Bug #1579173 opened: wily onfigSuite.TestNewModelConfig test failure lxd <blocker> <ci> <lxd> <regression> <test-failure> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1579173>
[18:39] <redir> sinzui: is there an arm64/ppc64el system I can test on ?
[18:40] <sinzui> redir: sure. I think you need a xenial host
[18:41] <redir> sinzui: I don't think I understand what you mean.
[18:41] <sinzui> redir: I think the arm64 host is best, but the xenial ppc64el host is idle. Both hosts are on very restricted networks. I will need to pass you some ssh config to get to them
[18:42] <redir> sinzui: OK let me know.
[18:42] <sinzui> redir: I think you are working on https://bugs.launchpad.net/juju-core/+bug/1579062 which most often happens on xenial
[18:42] <mup> Bug #1579062: localHTTPSServerSuite no trusty arm64/ppc64el images <arm64> <blocker> <ci> <ppc64el> <regression> <test-failure> <unit-tests> <juju-core:In Progress by reedobrien> <https://launchpad.net/bugs/1579062>
[18:43] <redir> sinzui: correct
[18:43] <sinzui> okay, I will get you access to both ppc64el and arm64. You can work on them as you need
[18:43]  * redir goes to make tea
[18:43] <redir> sinzui: tx
[19:01] <sinzui> redir: check your email
[19:05] <redir> sinzui: thanks. Did you add my key to the jump-host?
[19:06] <sinzui> redir: I didn't...I thought I had done that for the s390x host. I will add it since your question implies I need to
[19:08] <redir> sinzui: cool. tx.
[19:09] <sinzui> try it now redir
[19:10] <redir> sinzui: voilà. merci
[19:43] <redir> sinzui: is this also expected to be an issue on 1.x?
[19:45] <sinzui> redir: no, the failure wasn't seen on 1.25 when it was tested yesterday
[19:49] <redir> OK. the provider/openstack tests pass on both platforms.
[19:50] <redir> sinzui:^ I'll refrain from running the full suite since the hosts seem pretty oversubscribed.
[19:51] <sinzui> redir: the arm64-slave host should be idle for about 60-90 minutes
[19:53] <redir> review anyone? https://github.com/juju/juju/pull/5358
[19:53] <redir> RB hasn't picked it up yet
[20:04] <redir> manually created one http://reviews.vapour.ws/r/4785/
[20:14] <redir> Bueller?
[20:15] <natefinch> redir, ericsnow: No space left on device: '/tmp/reviewboard.UipOWm'
[20:15] <ericsnow> natefinch: gah
[20:15] <redir> whoops
[20:15] <ericsnow> natefinch: time to add a cron job :)
[20:15] <natefinch> ericsnow: cron job to delete garbage in tmp?
[20:15]  * natefinch hi5's eric
[20:16]  * ericsnow hi5's natefinch
[20:16] <redir> whelp, now we know why it wasn't picking up new PRs
[20:17] <redir> there's the actual PR https://github.com/juju/juju/pull/5358
[20:17] <redir> not a lot of there, there.
[20:19] <natefinch> oh man, json's policy of not allowing a trailing comma after the last item of a list is horrible
[20:24] <natefinch> redir: it's just an addition to something in export_test?
[20:24] <redir> natefinch: yep, so that tests on arm64 and ppc64el can find appropriate images in test runs
[20:25] <natefinch> ahh, wacky
[20:25] <redir> probably something I should have done in the lts updates, but...I didn't know.
[20:25] <natefinch> LGTM
[20:26] <redir> well tools
[20:26] <redir> we by default build amd64 tools for supported series and then whatever the host arch is.
[20:26] <redir> but there were no images in the exported index for those host arches
[20:26] <redir> natefinch: tx.
[20:26] <redir> $$merge$$ing
[20:27] <redir> ran the failing tests on the respective arches and they passed on both.
[20:30] <natefinch> cool
[20:37]  * redir steps out for a bite
[20:49] <redir> now redir really steps out.
[21:27] <redir> what is 08d:|?
[21:36] <katco> ericsnow: whoops meant to ping you here
[21:36] <katco> ericsnow: hm. review board not picking up my latest commit to a PR. suggestions?
[21:38] <redir> katco: seems it is  out of disk
[21:38] <katco> redir: how are you discovering this?
[21:38] <redir> I made a post manually from rbtools
[21:38] <katco> redir: ah
[21:38] <redir> then http://reviews.vapour.ws/r/4785/diff/#
[21:39] <redir> well nate noted it above when he tried to review
[21:40] <katco> redir: ah
[21:41] <ericsnow> katco: looking into it
[21:41] <katco> ericsnow: ta
[21:41] <ericsnow> katco: having trouble connecting to the juju environment
[22:37] <redir> must be friday
[22:37] <redir> The last blocker is windows.