[01:19] <axw> thumper: http://reviews.vapour.ws/r/3149/ -- FYI
[01:20] <axw> thumper: I'd be keen to know if you still get errors with this applied, since I had to really work hard to get the peergrouper to reliably fail
[01:26] <menn0> thumper: and another easy one: http://reviews.vapour.ws/r/3150/
[01:26] <thumper> axw: I'll grab it down and try
[01:29] <thumper> axw: yay, that looks like it fixed it, for me at least
[01:29] <axw> thumper: sweet
[01:52] <axw> menn0 thumper: can I retarget https://bugs.launchpad.net/juju-core/+bug/1516144 to a different series? it's blocking master but it's not on the master branch...
[01:52] <mup> Bug #1516144: Cannot deploy charms in jes envs <blocker> <charms> <ci> <regression> <juju-core:Fix Committed by menno.smits> <https://launchpad.net/bugs/1516144>
[01:52] <thumper> axw: yeah, it is
[01:53] <axw> thumper: oh. job says functional-jes
[01:53] <thumper> axw: menn0 is currently looking at it
[01:53] <axw> ok then
[01:53] <thumper> yes it is the master branch, but the JES test
[01:53] <axw> righto
[01:53] <axw> sorry
[01:53] <thumper> menn0 has fixed one of the problems, but CI failed again with an IP address error
[01:54] <thumper> so... weird
[02:18] <menn0> thumper: if I repeat what the CI test does with the local provider it all works
[02:18] <menn0> thumper: trying with joyent now
[02:58] <thumper> menn0: any joy with joyent?
[03:06] <menn0> thumper: everything works fine when I do it manually
[03:06] <menn0> with local provider and joyent
[03:06] <thumper> FFS
[03:06] <menn0> i'm going to look over the logs from the test failure again in more detail
[03:07] <thumper> I'd reply to the cursed email, and make sure it is addressed to Curtis and Aaron, cc juju-dev
[03:07] <thumper> let them know
[03:08] <menn0> the test is seeing that the dummy-source/0 unit in the first hosted environment is in an error state
[03:08] <menn0> no idea why or how
[03:08] <thumper> hmm
[03:09] <menn0> thumper: the test is passing a config.yaml to create-environment
[03:09] <menn0> I wonder if that is broken somehow
[03:09] <menn0> I kinda guessed with that
[03:09] <thumper> hmm..
[03:35] <thumper> menn0: there is a cursed email to reply to now
[03:36] <menn0> thumper: thanks, I will
[03:36] <menn0> thumper: I'm using the actual CI test script now
[04:40] <mup> Bug #1331151 changed: 'juju destroy-environment' sometimes errors <destroy-environment> <juju-core:Expired> <https://launchpad.net/bugs/1331151>
[04:49] <davecheney> axw: ping ?
[04:49] <axw> davecheney: pong
[04:50] <davecheney> axw: you mentioned you had some fixes for the peergrouper ?
[04:50] <davecheney> did they land ?
[04:50] <davecheney> ?
[04:50] <axw> davecheney: no, master is blocked
[04:51] <axw> davecheney: fixes here: http://reviews.vapour.ws/r/3149/
[04:57] <davecheney> axw: crap
[05:01] <mgz> menn0: is there some trick to getting hosted env... oh, oh dear
[05:01] <mgz> # TODO(gz): May want to gather logs from hosted env here.
[05:09] <davecheney> thumper: https://bugs.launchpad.net/juju-core/+bug/1516498
[05:09] <mup> Bug #1516498: api/unitassigner: data race <juju-core:New> <https://launchpad.net/bugs/1516498>
[05:16] <mup> Bug #1516498 opened: api/unitassigner: data race <juju-core:New> <https://launchpad.net/bugs/1516498>
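Data races like the one in that bug are normally confirmed with Go's race detector before and after a fix; a typical invocation against that package (path assumed from the bug title) would be:

```
go test -race github.com/juju/juju/api/unitassigner
```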
[08:20] <jam> dimitern: ping
[08:25] <dimitern> jam, hey, sorry I missed our 1:1 :/
[08:43] <jam> dimitern: np, maybe we can chat after the standup if you have anything you want to go over
[08:48] <dimitern> jam, sure, ok
[09:14] <rogpeppe> this PR updates juju-core to the latest charm package version: http://reviews.vapour.ws/r/3152/
[09:14] <rogpeppe> reviews appreciated :)
[09:38] <mup> Bug #1516541 opened: payload/api/private: tests do not pass <juju-core:New> <https://launchpad.net/bugs/1516541>
[09:41] <mup> Bug #1516541 changed: payload/api/private: tests do not pass <juju-core:New> <https://launchpad.net/bugs/1516541>
[09:44] <mup> Bug #1516541 opened: payload/api/private: tests do not pass <juju-core:New> <https://launchpad.net/bugs/1516541>
[10:01] <dimitern> jam, frobware, standup?
[10:46] <dimitern> frobware, dooferlad, voidspace, please take a look when you have a moment - http://reviews.vapour.ws/r/3153/ - almost straight cherry pick from the 1.25 fix for bug 1483879
[10:46] <mup> Bug #1483879: MAAS provider: terminate-machine --force or destroy-environment don't DHCP release container IPs <bug-squad> <destroy-machine> <landscape> <maas-provider> <sts> <juju-core:Triaged> <juju-core 1.24:Won't Fix> <juju-core 1.25:In Progress by dimitern> <https://launchpad.net/bugs/1483879>
[11:03] <frobware> voidspace, ok to start?
[11:03] <voidspace> frobware: yep, omw
[11:10] <dimitern> frankban, hey, do you have any idea when the guibundles branch will land on master?
[11:18] <frankban> dimitern: it is already landed on master
[11:18] <frankban> dimitern: because it was merged on the chicago-cubs one
[11:18] <dimitern> frankban, awesome! so juju deploy bundle.yaml is usable?
[11:18] <frankban> dimitern: yes
[11:19] <dimitern> frankban, nice! I'll give it a try now :) it's a pity it's not mentioned in juju deploy help
[11:19] <frankban> dimitern: it should be mentioned actually
[11:20] <dimitern> frankban, oh, sorry - I missed it - it's there
[11:20] <frankban> dimitern: cool
[11:26] <dimitern> sweet! juju deploy bundle.yaml works just fine with spaces constraints
[11:27] <frankban> dimitern: \o/
[11:27]  * frankban lunches
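For context, a bundle.yaml of the kind dimitern deployed above looks roughly like this in 1.x-era juju — the charm, service, and space names below are made up for illustration, not taken from his test:

```yaml
services:
  mediawiki:
    charm: cs:trusty/mediawiki
    num_units: 1
    constraints: spaces=internal    # spaces constraint, as tested above
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    constraints: spaces=database
relations:
  - [mediawiki, mysql]
```

It is then deployed with `juju deploy bundle.yaml`, as discussed.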
[11:39] <dimitern> frobware, dooferlad, voidspace, I have another review for you to look at when you can - http://reviews.vapour.ws/r/3155/ - fixes spaces-based deployments on ec2 and brings feature parity between master and 1.25
[11:40] <voidspace> dimitern: is that a straight forward port?
[11:41] <dimitern> voidspace, yes, no changes needed
[11:41] <voidspace> dimitern: you don't need a review then if it's already been reviewed
[11:41] <voidspace> dimitern: but LGTM :-)
[11:41] <dimitern> voidspace, it still needs a review :) thanks!
[11:42] <voidspace> dimitern: I don't think we're re-reviewing stuff that is a straight port between branches
[11:43] <voidspace> dimitern: at least I and other people haven't been :-)
[11:43] <voidspace> dimitern: and it doesn't seem like a good use of time
[11:43] <dimitern> voidspace, it still needs a ship it stamp
[11:43] <voidspace> dimitern: that isn't what we've been doing
[11:43] <dimitern> voidspace, isn't it?
[11:43] <voidspace> dimitern: no
[11:43] <frobware> dooferlad, would dimitern's change have any impact to your CI tests? ^^
[11:44] <voidspace> dimitern: and people shouldn't "Ship It" *without* reviewing it
[11:44] <dimitern> voidspace, I agree
[11:44] <voidspace> dimitern: and re-reviewing it is a waste of everyone's time
[11:44] <dimitern> voidspace, not if you reviewed it the first time I guess
[11:45] <voidspace> dimitern: heh, well possibly
[11:45] <frobware> dimitern, voidspace: cherry-pick backports that have already been reviewed shouldn't need re-reviewing, IMO.
[11:45] <dimitern> frobware, ok, I don't mind at all just landing it then :)
[11:45] <voidspace> if they need substantive changes then a re-review is reasonable
[11:46] <dooferlad> frobware: it shouldn't have any impact on tests.
[11:46] <frobware> dimitern, my only observation is for the CI tests
[11:46] <voidspace> dooferlad: how far off getting to the maas test server are you?
[11:46] <voidspace> dooferlad: I'm going to need it "soon-ish"
[11:47] <dooferlad> voidspace: I just ran into a KVM not appearing for one of my tests, which may be due to my MAAS or may just be flake.
[11:47] <dooferlad> voidspace: once I have that sorted I will have answered the review I was looking at, and can get on with the test server
[11:48] <dooferlad> voidspace: so, this afternoon.
[11:48] <voidspace> dooferlad: ok
[11:49] <dimitern> frobware, voidspace, thanks for the Ship It! anyway guys :)
[12:36] <dimitern> frobware, dooferlad, voidspace, yet another for you to review - a really small one this time - http://reviews.vapour.ws/r/3156/ fixes bug 1499426
[12:36] <mup> Bug #1499426: deploying a service to a space which has no subnets causes the agent to panic <network> <juju-core:In Progress by dimitern> <juju-core 1.25:In Progress by dimitern> <https://launchpad.net/bugs/1499426>
[13:36] <dimitern> frobware, thanks for the review - I've replied and updated the PR
[13:46] <mattyw> fwereade, ping?
[13:46] <fwereade> mattyw, pong
[14:43] <frobware> dimitern, are you waiting for a review on http://reviews.vapour.ws/r/3153/  I ask because I saw that it was being merged.
[14:50] <alexisb> thank you wwitzel3 and katco !
[14:50] <katco> alexisb: yep, we'll get it figured out
[14:51] <wwitzel3> alexisb: np
[14:58] <dimitern> frobware, nope that one is for master and it's still blocked
[14:59] <dimitern> frobware, and since we're not reviewing forward ports, I'll just merge it when possible, if that's ok
[15:03] <katco> natefinch: standup
[15:05] <katco> frobware: hey, how close are you to getting a fix for bug 1512371 for 1.25?
[15:05] <mup> Bug #1512371: Using MAAS 1.9 as provider using DHCP  NIC will prevent juju bootstrap <bug-squad> <maas-provider> <network> <juju-core:In Progress> <juju-core 1.25:In Progress by frobware> <https://launchpad.net/bugs/1512371>
[15:06]  * dimitern steps out to the store; bbl
[15:11] <frobware> katco, probably tomorrow
[15:11] <katco> frobware: kk
[15:11] <frobware> katco, actively working on it now
[15:11] <frobware> katco, blocking you?
[15:11] <katco> frobware: cool, just trying to figure out how much wiggle room we have on another bug :)
[15:12] <katco> frobware: nope not blocked.
[15:12] <frobware> katco, in terms of making a 1.25.x release?
[15:12] <katco> frobware: yeah
[15:12] <katco> frobware: i.e. is everyone waiting on us
[15:14] <frobware> cherylj, you mentioned you had the replica set problem again - still holding true?
[15:15] <cherylj> frobware: that maas set up was hosed.  I ended up tearing it down and rebuilding it.  Haven't seen the problem since.
[15:15] <cherylj> frobware: I can't say for sure there wasn't something else going on
[15:19] <katco> cherylj: hey, can you read my comment at the bottom of bug 1382556 and give guidance?
[15:19] <mup> Bug #1382556:  "cannot allocate memory" when running "juju run" <cpe-critsit> <run> <juju-core:In Progress by ericsnowcurrently> <juju-core 1.25:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1382556>
[15:20] <cherylj> katco: sure, taking a look....
[15:20] <katco> cherylj: ty
[15:20] <katco> cherylj: this is one of the last blockers for 1.25.1
[15:20] <cherylj> katco: yeah.  Are you guys in your stand up?  Could I come chat with you guys if you are?
[15:20] <katco> cherylj: of course: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
[15:21] <lazypower> wwitzel3 katco - ping
[15:21] <wwitzel3> lazypower: pong
[15:21] <lazypower> I'm riffing with mbruzek in a hangout, and it appears juju list-payloads isn't available on 1.26-alpha1, is this known/expected behavior?
[15:21] <katco> lazypower: pong
[15:22] <katco> lazypower: it is not yet in master
[15:22] <lazypower> i'm confused as to how it's in 1.25 and not 1.26 :P
[15:22] <mbruzek> How did it get into 1.25 if it is not in master?
[15:22] <mbruzek> is it hidden by feature flag?
[15:22] <mbruzek> You gave us a feature then took it away!
[15:23] <lazypower> ^ yeah, wat
[15:24] <katco> lazypower: mbruzek: sorry in meeting. we started the feature based on 1.25, 1.26 was blocked by lack of a >= Go 1.3 process
[15:24] <lazypower> hmm, hokay
[15:24] <katco> lazypower: it's on the radar. we'll get it landed asap
[15:24] <mbruzek> OK, sorry to interrupt meeting.
[15:24] <lazypower> Thanks for the follow up o/
[15:24] <katco> (we're also on bug squad this iteration)
[15:36] <mup> Bug #1516668 opened: Switch juju-run to an API model (like actions) rather than SSH. <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1516668>
[15:36] <mup> Bug #1516669 opened: Memory/goroutine leaks. <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1516669>
[15:54] <mgz> I replied to menn0's mail about the blocker again, can natefinch or someone take a look?
[16:06] <mup> Bug #1516676 opened: Use of os/exec in juju is problematic in resource limited environments. <tech-debt> <juju-core:New> <https://launchpad.net/bugs/1516676>
[16:17] <natefinch> mgz: reading
[16:32] <natefinch> katco: seems like the jes CI tests are still blocked by code introduced by the unitassigner.  Should I work on that or the juju run bug?  (I presume the blocker, but wanted to confirm)
[16:33] <katco> natefinch: hm
[16:33] <katco> natefinch: my inclination is to say the juju run bug since it's blocking the impending 1.25.1 release
[16:34] <katco> natefinch: we still have some runway on master
[16:34] <katco> natefinch: plus it looks like menno did a fix-committed?
[16:35] <natefinch> katco:  menno responded to the CI test failure with some comments, thread title is "Cursed (final): #3310 gitbranch:master:github.com/juju/juju 0bf7c382 (functional-jes)"
[16:36] <natefinch> katco: basically... he thought it should have been fixed, but the CI test was still having problems
[16:37] <katco> natefinch: i see. well, i think you should still focus on the 1.25.1 blocker
[16:37] <katco> natefinch: that's coming out first
[16:37] <natefinch> katco: yep, that's fine.  That's why I asked :)
[16:37] <katco> natefinch: yep, ty
[16:42] <mbruzek> axw: ping?
[16:44] <voidspace> frobware: change to picking address algorithm landed on 1.25
[16:44] <voidspace> frobware: change discussed in standup fixed that failing test
[16:45] <voidspace> frobware: porting to master now
[16:45] <voidspace> frobware: also I think that the new Subnets implementation is done - but needs tests, which means I need a test harness
[16:45] <voidspace> frobware: I can switch to ListSpaces whilst I wait for that
[16:45] <mup> Bug #1516698 opened: Juju never stops trying to contact charm store <juju-core:Triaged> <https://launchpad.net/bugs/1516698>
[16:57] <natefinch> one wonders if no one thought about what might happen if this was run on an environment of 5000 machines: https://github.com/juju/juju/blob/master/apiserver/client/run.go#L164
[16:58] <dooferlad> voidspace: *sigh*, that CI stuff took ages. I am not going to get far with gomaasapi before I need to stop (now-ish). Will see if I can take a look after dinner.
[16:58] <frobware> voidspace, all sounds good
[17:05] <natefinch> sometimes I think people just randomly decide whether or not to pass around pointers versus values :/
[17:07] <voidspace> dooferlad: thanks
[17:25] <voidspace> natefinch: what's your problem with that function using a pointer?
[17:37] <natefinch> voidspace: it shouldn't be modifying the value, and the value is small enough to be copied easily.
[17:37] <natefinch> voidspace: making it a pointer makes me wonder if it's going to be modified somewhere.
[17:38] <voidspace> natefinch: if it's called 5000 times surely using a pointer is *more* efficient
[17:38] <voidspace> natefinch: and if that's not the issue why does it matter if it's called for 5000 machines as you called out
[17:38] <voidspace> natefinch: or is that a separate issue?
[17:39] <cherylj> katco, does the lxd provider use the container/lxc code to still do container provisioning?
[17:39] <natefinch> voidspace: separate issue... the problem is spawning 5000 goroutines that all do stuff at te same time
[17:39] <voidspace> natefinch: right, instead of queuing
[17:39] <voidspace> yeah, that would be much better...
[17:39] <katco> cherylj: i don't think so. container/lxd
[17:39] <natefinch> voidspace: and pointer dereference versus some small amount of memory copying is not always an obvious win
[17:40] <natefinch> voidspace: queueing is what I'm writing right now, since this code is causing OOM issues
[17:40] <voidspace> not always, just usually
[17:40] <voidspace> natefinch: right, cool
[17:40] <natefinch> voidspace: the pointer thing isn't really a problem, just a pet peeve
[17:40] <voidspace> heh
[17:40] <voidspace> natefinch: thanks for expanding, interesting stuff
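The pointer-versus-value trade-off natefinch describes can be sketched like this — the struct and function names are hypothetical, not the actual juju code:

```go
package main

import "fmt"

// runParams is a small, read-only parameter struct; copying it is cheap.
type runParams struct {
	Commands string
	Timeout  int
}

// byValue takes a copy: the caller's struct cannot be mutated, which
// documents intent for small parameters that are only read.
func byValue(p runParams) string { return p.Commands }

// byPointer signals that p may be modified (or is large enough that
// copying matters); for a struct this small the copy is cheap, so the
// pointer mostly raises the question "is this mutated somewhere?"
func byPointer(p *runParams) string { return p.Commands }

func main() {
	p := runParams{Commands: "uptime", Timeout: 5}
	// Both read the same data; the difference is the contract they imply.
	fmt.Println(byValue(p) == byPointer(&p))
}
```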
[18:30] <natefinch> man I love channels and goroutines
[18:30] <natefinch> bbiab
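The queueing/bounded-concurrency fix natefinch mentions writing — instead of letting run.go fire off one unbounded goroutine per machine — commonly looks something like this sketch (names are illustrative, not the actual juju code):

```go
package main

import (
	"fmt"
	"sync"
)

// runLimited runs fn for every id, but allows at most `limit` calls to
// execute concurrently, using a buffered channel as a counting
// semaphore -- so 5000 machines never means 5000 simultaneous workers.
func runLimited(ids []string, limit int, fn func(string) string) []string {
	sem := make(chan struct{}, limit)
	results := make([]string, len(ids))
	var wg sync.WaitGroup
	for i, id := range ids {
		wg.Add(1)
		go func(i int, id string) {
			defer wg.Done()
			sem <- struct{}{}        // block while `limit` workers are busy
			defer func() { <-sem }() // release the slot when done
			results[i] = fn(id)      // each goroutine writes its own index
		}(i, id)
	}
	wg.Wait()
	return results
}

func main() {
	ids := []string{"machine-0", "machine-1", "machine-2"}
	out := runLimited(ids, 2, func(id string) string { return "ran on " + id })
	fmt.Println(out)
}
```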
[19:43] <katco> natefinch-afk: hey did you get that tech-debt card created?
[19:44] <natefinch> katco: oops, nope, will do now
[19:54] <natefinch> katco: done
[19:55] <katco> natefinch: ty
[21:06] <thumper> sinzui: what's the status with the CI blocker
[21:06] <thumper> ?
[21:06] <thumper> sinzui: menno ran the tests locally yesterday and could not reproduce
[21:06] <thumper> with both local and joyent, and with the CI scripts
[21:07] <sinzui> mgz: ^ I think you are versed in this topic
[21:07] <mgz> thumper: I replied to menno's message
[21:08]  * thumper hasn't got to it yet, still reading
[21:08] <thumper> go there
[21:08] <thumper> s/go/got
[21:08] <mgz> thumper: short version, somehow with trunk the units in the hosted environment are going *through* error, when the machines are not up, rather than pending, but once the machines are up are fine
[21:08] <thumper> wha?
[21:08] <thumper> oh...
[21:08] <thumper> ha
[21:09] <thumper> I bet this is the unit assignment worker
[21:09] <thumper> trying to assign them too early
[21:09] <thumper> and somehow marking them
[21:09] <mgz> there is some layering thing screwed up
[21:09] <thumper> natefinch ^^?
[21:09] <mgz> it thinks they are units for the hosting env... till it gets machines, then it works it out
[21:13] <natefinch> thumper: reading backlog
[21:13] <thumper> natefinch: I was just about to look at the unit assignment worker
[21:13] <thumper> it seems that it is trying to assign the unit twice
[21:13] <thumper> because the machine isn't up yet
[21:13] <thumper> http://juju-ci.vapour.ws/job/functional-jes/276/console
[21:13] <thumper> natefinch: see the status output in there
[21:14] <thumper> natefinch: after two minutes, the status is taken again, and it looks ok
[21:14] <thumper> so it obviously settles itself down
[21:14] <thumper> but putting the unit into an error state is confusing the tests (and users)
[21:15] <natefinch> thumper: I've seen it error and then settle, but I thought I'd fixed that when I told it only to run the worker on the master state machine
[21:15] <natefinch> thumper: already assigned does not sound like the error you'd get if you tried to assign it and there was no machine yet
[21:16] <thumper> no, it sounds like it was assigned, and then attempted to assign it again
[21:18] <natefinch> right, which would imply some sort of race condition - either multiple people getting notified and trying to assign (like I originally fixed) or maybe two notifications firing off in succession and thus causing two unit assignments to run concurrently.... the latter seems possible
[21:19] <mgz> natefinch: the other thing with this is this doesn't appear in a state server log anywhere
[21:19] <mgz> despite being an error that appears in status. this seems very wrong.
[21:20] <thumper> maor logging plz
[21:20] <thumper> :)
[21:20] <natefinch> hmmm... wonder if I went too crazy in removing my debugging logging
[21:22] <thumper> this might be strange, but does the collection watcher fire when docs are removed?
[21:22] <thumper> natefinch: I have a feeling it might, but just a stab in the dark at the moment
[21:22] <thumper> I thought any change to the doc would fire the watcher
[21:23] <thumper> not just insertions
[21:23] <thumper> it appears that the assign units collection just has insertions and deletions and no updates
[21:23] <thumper> is that right?
[21:24] <natefinch> correct
[21:26] <thumper> natefinch: how about logging the unit ids that are being assigned
[21:26] <thumper> I wonder if we'll find a dupe
[21:27] <natefinch> thumper: yeah.... I swear I was, but again, maybe I just took out too much logging
[21:28] <mgz> I can rerun with logging turned up more if that would maybe make things clearer
[21:29] <natefinch> the worker has some tracef calls that you could turn on, but it definitely looks like I took out too many log statements
[21:31] <natefinch> that'll at least tell you what unit ids the worker is seeing firing from the watcher, and log the results of the unit assignment attempt
[21:32] <mgz> natefinch: what do I want... "<main>=DEBUG ?=TRACE"
[21:32] <natefinch> juju.worker.unitassigner
[21:32] <mgz> ta
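For reference, turning that logger up on a 1.x-era client would look something like the following — treat the exact command name as an assumption, since it varies across juju versions:

```
# raise only the unitassigner worker to TRACE, keep everything else quiet
juju set-env logging-config="<root>=WARNING;juju.worker.unitassigner=TRACE"
```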
[21:52] <mgz> natefinch: http://juju-ci.vapour.ws/job/functional-jes/279/console
[21:52] <perrito666> anyone knows how to ask peergrouper which of the machines is the leader?
[21:52] <mgz> you'll want the gathered logs when the job completes
[21:54] <natefinch> mgz: thanks
[21:54] <natefinch> wow, juju status --format=tabular doesn't show containers?
[22:00] <mgz> 2015-11-16 21:52:19 TRACE juju.worker.unitassigner unitassigner.go:56 Unit assignment results: ["cannot assign unit \"dummy-source/0\" to machine: cannot assign unit \"dummy-source/0\" to new machine or container: cannot assign unit \"dummy-source/0\" to new machine: unit is already assigned to a machine" <nil>]
[22:00] <natefinch> lol, this OOM error from juju run is a lot harder to repro now that we've moved to m3.mediums.
[22:02] <natefinch> mgz: not really useful, given that we already knew that.  I'm working on another bug for bug squad right now, that's blocking 1.25, but I'll try to look at that one once I get this one finished up
[22:02] <natefinch> mgz: I'll take a look at the logs from the run later tonight and see if anything obvious pops up
[22:02] <natefinch> gotta run and make dinner for the family
[22:06] <perrito666> dinner, honestly? at 6PM...
[22:44] <thumper> I'm sorry, but seriously?
[22:44] <thumper> a critical blocker stopping the entire team has less priority?
[23:14] <axw> wallyworld: I'm rebasing the azure-arm-provider branch because it's missing fixes from master
[23:14] <axw> wallyworld: assuming no need to review
[23:15] <wallyworld> axw: once, just finishing meeting
[23:16] <wallyworld> axw: sorry, done now, in standup
[23:16] <axw> oops, is that the time
[23:48] <thumper> axw: I have never suggested or required a review of merging master into a feature branch
[23:49]  * thumper off to walk the dog