[00:00] <moqq> (in the logs for the stalled agent its going crazy with many “panic: runtime error: invalid memory address or nil pointer dereference” and their stack traces)
[00:14] <thumper> wallyworld: ha
[00:14] <wallyworld> thumper: i'll raise a bug
[00:14] <thumper> wallyworld: never ran or never did the right thing?
[00:14] <wallyworld> never ran
[00:15] <thumper> ugh
[00:15] <wallyworld> there's a check in the steps that the tag passed in is a unit tag
[00:15] <thumper> haha
[00:15] <wallyworld> and it exits if not
[00:15] <thumper> oops
[00:15] <wallyworld> yeah
[00:15] <wallyworld> we cargo culted a 123 step for 126 and it didn't run
[00:15] <wallyworld> and it caused an issue in CI and that's how we found out
[00:16] <thumper> heh
[00:17] <mwhudson> wallyworld: want to test a package that makes the race detector work?
[00:18] <wallyworld> mwhudson: sure :-)
[00:18]  * thumper is being summoned for lunch
[00:18] <mwhudson> (maybe, this is the first package i've ever created from scratch i think)
[00:18] <thumper> bbl
[00:20] <mwhudson> wallyworld: http://people.canonical.com/~mwh/go-race-detector-runtime_229396-0ubuntu1_amd64.deb
[00:20]  * wallyworld downloads
[00:20] <mwhudson> wallyworld: caveat emptor but at worst it should simply not install / not help
[00:22] <wallyworld> mwhudson: it worked :-)
[00:22] <mwhudson> wallyworld: omg
[00:22] <wallyworld> have some faith in your own awesomeness :-)
[00:41] <davechen1y> is the build blocked ?
[00:42] <mup> Bug #1494070 opened: unit agent upgrade steps not run <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1494070>
[01:05] <natefinch-afk> davechen1y: http://juju.fail
[01:07] <wallyworld> niemeyer: hey gustavo, you around?
[01:08] <davechen1y> natefinch-afk: nice
[01:08] <niemeyer> wallyworld: Heya
[01:09] <niemeyer> wallyworld: Sorry, haven't had a chance to look at the issue you mailed me about, but I will
[01:09] <wallyworld> niemeyer: just wondering about mongo 2.6 onwards not like $ in the doc field names
[01:09] <wallyworld> np
[01:09] <wallyworld> juju seems to run ok
[01:09] <wallyworld> but mongo says it is unhappy
[01:09] <wallyworld> when you run the compatibility check tool
[01:09] <niemeyer> wallyworld: Need to check the details
[01:10] <wallyworld> np, i'll wait to hear back. ty
[01:10] <niemeyer> wallyworld: Thanks for your patience, and for the report
[01:10] <wallyworld> sure, np. i know you are busy
[02:26] <thumper> davechen1y, mwhudson: are either of you actively using rugby? There is an RT to upgrade its firmware
[02:34] <davechen1y> thumper: go for it
[02:53] <natefinch> Quick poll: creating a struct to hold function arguments to avoid code churn for all the millions of places where tests call this function every time it changes.... good idea or bad idea?
[02:54] <natefinch> (the function is state.AddService)
[02:54] <natefinch> (technically a method not a function)
[03:03] <natefinch_> thumper, wallyworld, davechen1y: ^ ?
[03:04] <davechen1y> natefinch_: sounds abstract
[03:04] <davechen1y> hard to comment
[03:04] <wallyworld> natefinch_: i like structs to hold args
[03:04] <davechen1y> as in
[03:04] <davechen1y> i don't think i understand what you are asking
[03:05] <thumper> me neither
[03:05] <natefinch_> davechen1y: instead of func foo(a int, b string, c instance.Something) you have func foo(args FooArgs), where FooArgs is just type FooArgs struct{ A int; B string; C instance.Something }
[03:06] <natefinch_> the idea is that if we expect the arguments to the function to change often with optional parameters getting added to the end, this prevents code churn in the 100 tests that use this function only for the most basic functionality.
[03:07] <natefinch> Notably, AddService is going from 5 arguments to 8 in my change, and there's a todo for modifying it further (from someone else)
[03:07] <davechen1y> natefinch: what is changing that often ?
[03:07] <davechen1y> natefinch: are most of those arguments usually the defaults ?
[03:07] <natefinch> davechen1y: every time we add a new feature - storage, spaces, different kinds of placement etc
[03:07] <davechen1y> natefinch: sounds like you should be using functional arguments
[03:08] <davechen1y> but if you don't want to do that, then use a struct
[03:08] <natefinch> davechen1y: in the tests, yes. There's about 40 places where we do AddService(something, something, something, nil, nil, nil, nil)
[03:08] <davechen1y> methods/functions with many parameters are a smell
[03:08] <davechen1y> sounds like all those parameters have a sensible default, the zero value
[03:08] <natefinch> indeed
[03:09] <natefinch> thus a struct is easy and obvious.  Functional arguments are kinda obscure, though I do sorta like them
[03:09] <mup> Bug #1493850 changed: 1.22 cannot upgrade to 1.26-alpha1: run.socket: no such file or directory <1.22> <blocker> <ci> <regression> <run> <upgrade-juju> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1493850>
[03:10] <davechen1y> natefinch: your call
[03:10] <davechen1y> one request
[03:11] <davechen1y> pass the configuration structure _by value_
[03:11] <davechen1y> so callers cannot hold a reference to it
[03:11] <davechen1y> config := service.Config { .... }
[03:11] <davechen1y> AddService(config)
[03:12] <natefinch> yep, I'm a big fan of passing by value
[03:13]  * thumper agrees with davechen1y
[03:13] <davechen1y> is trunk still blocked ?
[03:14] <davechen1y> apparently not ;)
[03:15] <wallyworld> davechen1y: i unblocked master
[03:15] <wallyworld> marked bug as fix released
[03:15] <mup> Bug #1493850 opened: 1.22 cannot upgrade to 1.26-alpha1: run.socket: no such file or directory <1.22> <blocker> <ci> <regression> <run> <upgrade-juju> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1493850>
[03:16] <davechen1y> wallyworld: ta
[03:21] <mup> Bug #1493850 changed: 1.22 cannot upgrade to 1.26-alpha1: run.socket: no such file or directory <1.22> <blocker> <ci> <regression> <run> <upgrade-juju> <juju-core:Fix Released by cmars> <https://launchpad.net/bugs/1493850>
[04:13] <wallyworld> thumper: replied, hopefully it adds some extra context to the discussion
[04:16] <thumper> ta
[04:17] <wallyworld> awesome, unity crashed, reboot time
[04:31] <davecheney> here is a small one http://reviews.vapour.ws/r/2622/
[04:39] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1494121
[04:39] <mup> Bug #1494121: worker/uniter/remotestate: data race  <juju-core:New> <https://launchpad.net/bugs/1494121>
[04:48] <mup> Bug #1494121 opened: worker/uniter/remotestate: data race  <juju-core:New> <https://launchpad.net/bugs/1494121>
[05:14] <ericsnow> davecheney: could you spare me a review of a backport of a 1.25 patch of yours: http://reviews.vapour.ws/r/2624/
[05:31] <davecheney> ericsnow: looking
[05:39] <axw> wallyworld: what's the problem with the race detector in 1.5? works on my machine
[05:39] <wallyworld> axw: i have no idea. i just saw the bug. davecheney ^^^^ are you using the packaged go 1.5?
[05:40] <axw> (I built from source FWIW)
[05:41] <wallyworld> axw: there is an issue in the go 1.5 packaged for wily. race detection is broken. i installed a patch from mwhudson to fix it for me
[05:41] <axw> wallyworld: ah I see
[05:41] <wallyworld> the breakage though is just compiling the test binaries i think
[05:42] <wallyworld> axw: fwiw, i tried the race detector just before and got the same issue as the bug
[05:42] <axw> wallyworld: yes, I can repro. I was just curious
[05:42] <axw> wallyworld: going to take a break from azure to fix it
[05:42] <wallyworld> tyvm
[05:42] <axw> wallyworld: I just got a failure in WatcherSuite.TestActionsReceived too ... not a data race, just a failure
[05:43] <wallyworld> oh dear
[05:48] <axw> wallyworld: is this really Critical? it's a data race in a test, not in the code under test
[05:54] <wallyworld> axw: i think the policy is that all data races are to be considered critical (or at least that was the case)
[05:55] <wallyworld> now that we have the count at 0, we need to maintain that
[05:55] <wallyworld> but i could be mis remembering the policy
[05:55] <axw> wallyworld: ok. does not seem more important to me than any other shitty tests, but ok ;)
[05:55] <axw> found the issue anyways
[05:56] <wallyworld> axw: it came about i think because of the need to get it to 0 so we could get upstream to fix that gccgo bug
[05:56] <wallyworld> so any data race was totally unacceptable
[06:00] <urulama> wallyworld: that's the Entity in charm store: https://github.com/juju/charmstore/blob/v5-unstable/internal/mongodoc/doc.go
[06:00] <wallyworld> looking
[06:01]  * wallyworld has to go to do school pickup, bbiab
[06:01] <urulama> wallyworld: i'm taking kids to school as well
[06:15] <davecheney> axw: http://reviews.vapour.ws/r/2625/diff/#
[06:15] <davecheney> i don't get it
[06:15] <davecheney> this change just adds a test
[06:15] <axw> wallyworld: no, it splits part off the end of a test
[06:15] <axw> and gives it a name
[06:15] <axw> err sorry, davecheney
[06:16] <axw> davecheney: hang on, I'll point out the problem line
[06:16] <davecheney> oh i see
[06:17] <axw> davecheney: https://github.com/juju/juju/blob/master/worker/uniter/remotestate/watcher_test.go#L374    <- here we trigger a watcher, it wakes up and we expect it to do nothing interesting.. but it will go off and read s.st.storageAttachment
[06:17] <axw> davecheney: the test was overloaded anyway, hence I split it
[06:18] <davecheney> got it
[06:18] <davecheney> thanks
[06:20] <davecheney> axw: http://paste.ubuntu.com/12326655/
[06:20] <davecheney> still racey
[06:25] <axw> davecheney: that one I'm fixing now
[06:26] <axw> davecheney: may as well put it in the same PR I guess
[06:27] <axw> davecheney: oh, I just saw the action failure... I see. storage is still racy
[06:32] <davecheney> yea
[06:32] <davecheney> thanks
[06:46] <axw> davecheney: should be fixed now. that one was a bug in the watcher, not the test. fixed the action test while I was there.
[06:47] <davecheney> looking
[06:52] <davecheney> axw: lgtm
[06:53] <axw> davecheney: thanks
[06:53] <davecheney> axw: http://reviews.vapour.ws/r/2622/
[06:53] <davecheney> how about one in return
[06:54] <axw> davecheney: just a move, no code change?
[06:54] <davecheney> yup
[06:54] <davecheney> moving the bzr pack to utils
[06:54] <davecheney> it doesn't need to be in juju/juju
[06:54] <axw> davecheney: agreed, LGTM
[07:47] <fwereade> wallyworld, worried about https://bugs.launchpad.net/juju-core/+bug/1483672
[07:47] <mup> Bug #1483672: Allow charms to associate structured data with status <cloud-installer> <landscape> <juju-core:Fix Committed by hduran-8> <juju-core 1.25:Fix Committed by hduran-8> <https://launchpad.net/bugs/1483672>
[07:48] <wallyworld> which bit?
[07:48] <fwereade> wallyworld, apparently we've just implemented rich status without a spec?
[07:48] <wallyworld> say wot?
[07:48] <wallyworld> we allowed name/value pairs to be added for non error status
[07:49] <fwereade> wallyworld, yeah
[07:49] <wallyworld> we talked about this when the api was being discussed and neither of us recalled a good reason for disallowing it
[07:49] <wallyworld> and landscape wanted it
[07:49] <fwereade> wallyworld, that's rich status, except that it doesn't take any of sabdfl's requirements for it into account
[07:50] <wallyworld> i must admit i don't see the connection straight up - i'd have to find a rich status spec
[07:51] <fwereade> output documents?
[07:51] <fwereade> that is more or less an output doc
[07:51] <wallyworld> no, really?
[07:51] <fwereade> except it's not persisted usefully
[07:51] <wallyworld> i don't see it as that at all
[07:52] <fwereade> well, we've just grabbed the spelling sabdfl wanted for rich status
[07:53] <wallyworld> aren't output docs a totally different semantic than allowing a charm to record why it is in maintenance
[07:53] <fwereade> possibly
[07:54] <fwereade> but we are now expressly using the spelling earmarked for a different feature, but implemented with completely different semantics
[08:10] <frobware> TheMue, I had to move our 1:1 today as I have a conflict.
[08:10] <TheMue> frobware: yeah, just discovered. it's ok
[08:12] <frobware> TheMue, I have a P&C induction session. Also missing the standup.
[08:12] <TheMue> frobware: P&C?
[08:12] <frobware> TheMue, HR
[08:12]  * TheMue missed that acronym
[08:13] <TheMue> frobware: ah, thx
[08:13] <frobware> TheMue, People & Culture to be more specific.
[08:13] <TheMue> frobware: which definitely sounds better than Human "Resources" *iirks*
[08:16] <TheMue> frobware: hmm, trusting my calendar we're overlapping with the core meeting
[08:16] <TheMue> frobware: but as long as we don't need more than 30 minutes it fits
[08:18] <frobware> TheMue, ah, I see. I'm not in the meeting and I have another at 12 which is why it ended up where it is. 30 mins should be good. Otherwise I have back-2-back meetings from 9-3.
[08:19] <fwereade> wallyworld, ...and it looks like we've invented a new convention for passing k/v data into hook tools?
[08:19] <TheMue> frobware: from my side it's enough, yes. currently mostly focussed on final pre-vacation tasks
[08:19] <wallyworld> fwereade: yaml isn't it?
[08:19] <fwereade> wallyworld, yeah -- where else do hook tools accept yaml?
[08:19] <wallyworld> relation-set i thought
[08:20] <fwereade> wallyworld, definitely not
[08:20] <fwereade> wallyworld, relation-set letter=y
[08:20] <wallyworld> so just kv pairs then
[08:21] <wallyworld> i thought i was told it was yaml
[08:21] <fwereade> wallyworld, that, and the action-set stuff
[08:21] <wallyworld> it can be changed to kv pairs
[08:21] <fwereade> wallyworld, which is the existing convention for arbitrary structured data
[08:21] <fwereade> wallyworld, *but* the rich status plans included output schemas for the arbitrary structured data
[08:22] <fwereade> wallyworld, and we don't have anything like that
[08:22] <fwereade> wallyworld, *and* we've just implemented a new side channel for peer-relation-like data
[08:22] <wallyworld> fwereade: so relation-set does read yaml from a file
[08:22] <wallyworld> just not from cmdline
[08:23] <fwereade> wallyworld, yes, via a --file arg
[08:23] <wallyworld> i think that's where the confusion came from
[08:24] <wallyworld> fwereade: so do we or do we not want to fix that landscape reported bug
[08:24] <wallyworld> i mean, this fix brings the cli into line with the api
[08:25] <wallyworld> the api now allows kv pairs with arbitrary status values
[08:25] <wallyworld> it didn't before
[08:25] <wallyworld> and the hook tools didn't, so they were inconsistent
[08:25] <wallyworld> with the api
[08:26] <fwereade> wallyworld, I honestly think the landscape bug is essentially a feature request for rich status
[08:27] <fwereade> wallyworld, it's certainly not an invitation to occupy a bunch of the hook-env design space without consultation or spec or any apparent consistency with what went before
[08:27] <wallyworld> so we can revert the commit. but that still leaves api inconsisent
[08:28] <wallyworld> the inconsistency was a mistake, should have been kv
[08:29] <wallyworld> the intent was not to occupy design space without a spec - it was to fill in an inconsistency between api and cli
[08:29] <frankban> hi all core devs: could you please take a look at https://github.com/juju/juju/pull/3249 (initial bundle deployment support)? thanks!
[08:30] <wallyworld> fwereade:  because someone could write a cli client to do the same thing as we are now going to disallow from a hook tool directly
[08:30] <fwereade> wallyworld, I don't think the api implementation details have any reason to affect the hook environment we expose
[08:30] <wallyworld> see above
[08:30] <wallyworld> someone could backdoor it
[08:31] <wallyworld> consistency is good
[08:31] <wallyworld> so maybe we should revert the api changes to again disallow it
[08:34] <fwereade> wallyworld, params.SetStatus has accepted data for a good couple of years now?
[08:34] <wallyworld> only for error states
[08:34] <wallyworld> there were explicit checks in code
[08:34] <wallyworld> remember how we talked about this?
[08:34] <wallyworld> and couldn't see a reason to continue that behaviour?
[08:37] <fwereade> wallyworld, right... still not seeing how this means "let's change the data model and interaction patterns for the hook context"
[08:37] <fwereade> wallyworld, you can't just add stuff to the hook env without thinking about it
[08:38] <fwereade> wallyworld, is this intended to be the complement to leader-settings? in a way, it's kinda cool that it is
[08:38] <wallyworld> it was merely meant to bring the cli in line with the api
[08:38] <fwereade> wallyworld, I don't think that's a relevant consideration
[08:39] <fwereade> wallyworld, if you want you can write an api client that impersonates the unit and sets it to dead
[08:39] <wallyworld> otherwise people would just backdoor it anyway
[08:39] <wallyworld> sure, i meant with the SetStatus api
[08:39] <wallyworld> not the api in general
[08:39] <fwereade> wallyworld, then they'd be dumb to do so, because they'd be depending on arbitrary implementation details
[08:39] <wallyworld> people use upload-tool
[08:39] <fwereade> wallyworld, and it would only work half the time anyway
[08:40] <wallyworld> seems easiest to revert for now
[08:41] <fwereade> wallyworld, I guess :(
[08:41] <wallyworld> but how do we give landscape what they want
[08:50] <wallyworld> fwereade: about the side channel comment. i'll use the same argument as you used - "they'd be dumb to do so". i guess people can always find a way to manipulate a system.
[08:51] <fwereade> wallyworld, does the word "affordance" mean anything to you?
[08:51] <wallyworld> so it comes down to - do we or do we not want to allow status other than error to have a little bit of extra data besides a human string
[08:51] <fwereade> wallyworld, ofc we do
[08:51] <fwereade> wallyworld, but this is not an ok way to do it
[08:52] <wallyworld> it seemed ok as it uses the current tool
[08:52] <wallyworld> with extra params analogous to the api
[08:53] <fwereade> wallyworld, right
[08:53] <fwereade> so now a bunch of charms will fail in surprising ways on old jujus
[08:53] <wallyworld> unless we implement min version
[08:53] <wallyworld> what's the status of that?
[08:54] <fwereade> wallyworld, seems like it's been deprioritised again :-/
[08:54] <wallyworld> so suggestions then
[08:54] <wallyworld> how would we implement this
[08:55] <fwereade> wallyworld, (1) stop and think -- who is this side channel for, what data should it contain, who is notified of changes and how, what are the consequences of that
[08:56] <fwereade> wallyworld, what we've done here
[08:56] <fwereade> wallyworld, is create the first channel that outputs both to users and to other units in the service
[08:57] <wallyworld> which can be done via the api
[08:57] <wallyworld> so that's not really a key rebuttal
[08:57] <fwereade> wallyworld, well, no
[08:57] <fwereade> wallyworld, you're exposing the status data dict to the leader
[08:58] <fwereade> wallyworld, it is now suddenly a programmatic control channel, with new, surprising, and undocumented semantics, that will inevitably start to contain sensitive data
[08:58] <wallyworld> only if people put it there
[08:59] <wallyworld> it's only a control channel if people misuse it that way
[08:59] <fwereade> wallyworld, no
[08:59] <fwereade> wallyworld, we control the environment
[08:59] <fwereade> wallyworld, we control the data that goes in and out
[09:00] <wallyworld> except when we don't
[09:00] <fwereade> wallyworld, if we put a big radio in the room, marked "messages from minions you can't get any other way", but tell people they shouldn't use it, we're being actively user-hostile
[09:01] <fwereade> wallyworld, btw, did we ever implement minion-status-change watching?
[09:01] <fwereade> wallyworld, don't think I've seen any code for it
[09:02] <fwereade> wallyworld, making service-status work reliably should, I think, take precedence over adding new ways for it to break the model further
[09:04] <fwereade> wallyworld, or did we explicitly decide that service status should be composed from arbitrarily out-of-date unit statuses?
[09:07] <fwereade> wallyworld, sorry, we're probably desynchronised, I have been whinging down an empty pipe
[09:09] <fwereade> wallyworld, can we go back to the "except when we don't" bit?
[09:09] <fwereade> wallyworld, the point of the hook environment is to provide the underlying guarantees that let juju work
[09:10] <wallyworld> sorry, i keep getting disconnected the past 1 hour or so
[09:10] <fwereade> wallyworld, like, if we tell you some information, we will also tell you when that information has changed
[09:11] <fwereade> wallyworld, and, we will rabidly restrict the information you are allowed to access, because every side channel we provide is an *official* side channel -- we know that people will use everything we provide, so we only provide things we're willing to build a proper eventual-consistency convergence model for
[09:12] <fwereade> wallyworld, and every piece of information you can access *without* a mechanism for seeing when it's changed is, basically, a bug
[09:22] <wallyworld> fwereade: maybe my network will stay up for a bit
[09:22] <fwereade> wallyworld, ...so, did status-get always have --include-data?
[09:22] <wallyworld> not sure, i'd have to check
[09:23] <fwereade> wallyworld, seems like it wasn't added in that CL so I guess it's oldish
[09:23] <wallyworld> yeah, it has been there a bit
[09:23] <fwereade> wallyworld, and, yeah, I suppose *that* is not bad in isolation
[09:24] <fwereade> wallyworld, until we turned it into a subtly-broken variant of a minion-settings bucket, anyway
[09:25] <wallyworld> it's not supposed to be a settings bucket
[09:25] <wallyworld> it's not settings
[09:25] <fwereade> wallyworld, but you've made it one
[09:25] <wallyworld> i guess people could misuse it that way
[09:25] <fwereade> wallyworld, it's a data channel from one unit to another
[09:25] <wallyworld> only if misused
[09:26] <fwereade> wallyworld, you expose it in status-get, therefore you evidently want people to use that data
[09:26] <wallyworld> i think a unit can only get its own settings
[09:26] <fwereade> yeah but the service gets all of them
[09:26] <wallyworld> so it can aggregate overall state using the individual unit status
[09:27] <fwereade> wallyworld, right -- and
[09:27] <fwereade> oh god
[09:27] <fwereade> it's not the workload status, is it?
[09:27] <fwereade> it's the workload status or maybe the agent status
[09:28] <wallyworld> unit and agent have separate status
[09:28] <fwereade> so it's a doubly unreliable channel because we'll hide any important data whenever the agent gets into an error state
[09:29] <wallyworld> yeah because the spec is "wrong"
[09:29] <fwereade> ehh, the spec didn't even bother to consider that case
[09:29] <fwereade> it was all in terms of what we expose to the user
[09:29] <fwereade> it's bad enough that we lie to the user
[09:29] <wallyworld> no, i meant that we were told the workload had to reflect the agent error
[09:30] <fwereade> right, and we got explicit agreement from the very top that that was a UX consideration and shouldn't have to impact the model
[09:30] <wallyworld> right, so we do store separate status always
[09:30] <wallyworld> it is a ui thing
[09:30] <fwereade> wallyworld, different user, different interface
[09:31] <wallyworld> ?
[09:31] <fwereade> wallyworld, lying to end users because we think they can't handle the truth is just kinda dumb
[09:31] <wallyworld> i agree
[09:31] <wallyworld> but we were told to do it
[09:31] <fwereade> wallyworld, actively subverting the mechanism we use to tell the service what its components are up to is actively broken
[09:34] <fwereade> dammit I have to get to the shops before my meeting-block starts, bbiab
[09:35] <dimitern> TheMue, you've got a review
[09:36] <dimitern> axw, are you around by any chance?
[09:36] <axw> dimitern: hiya, I am
[09:37] <TheMue> dimitern: thx
[09:40] <dimitern> axw, about that change in provider common about subnets and zones, do you have a few minutes to discuss it?
[09:40] <axw> dimitern: yes, sure
[09:40] <axw> dimitern: hangout or here?
[09:40] <dimitern> axw, here's fine
[09:41] <dimitern> axw, I'm open to a better solution - basically we need to take into account 3 things: 1) zone placement; 2) units auto distribution across zones; 3) spaces constraints (implying a given list of subnets to use)
[09:41] <dimitern> axw, while 1) when given overrides 2), it can also cause an error if it conflicts with 3)
[09:43] <dimitern> axw, and since most of that is happening in AvailabilityZoneAllocations, I was thinking it's the least obtrusive solution to just give it a list of subnet ids (if spaces constraints are given and the provisioner already populated the SubnetsToZones map in StartInstanceParams)
[09:44] <axw> dimitern: sorry, just need to clarify. you want to prevent the user from forcing a machine into a zone when it specifies constraints?
[09:45] <axw> dimitern: if so - I'm pretty sure up until now the idea has been to ignore constraints when placement is specified
[09:45] <dimitern> axw, ah, well that sounds sane
[09:45] <dimitern> axw, however it might be surprising, if we at least don't issue a warning
[09:45] <axw> dimitern: IMO we just need to consider auto-placement in the face of those constraints
[09:47] <axw> dimitern: could be helpful I guess, but I don't think it's worth introducing more concepts into the AZ handling code. I'd prefer to see a more general way of indicating that a zone is not valid
[09:47] <dimitern> axw, e.g. consider you do $ juju deploy postgres --constraints spaces=db,^apps --to zone=one-of-the-zones-not-matching-db
[09:47] <axw> (not a valid choice for those constraints)
[09:48] <dimitern> axw, right, so if we can detect the conflict at deploy time we fail early (or proceed with a warning), rather than at provisioning time
[09:48] <axw> dimitern: equally in MAAS you can do "juju deploy postgres --constraints mem=1024M --to puny-node"
[09:48] <axw> but yes, it would be ideal to fail early
[09:49] <axw> dimitern: well 1024M is puny, but you know what I mean :)
[09:49] <dimitern> axw, right :)
[09:50] <dimitern> axw, ok, so about AvailabilityZoneAllocations..
[09:50] <dimitern> axw, you're suggesting to change it to call AvailabilityZones() before InstanceAvailabilityZoneNames() ?
[09:52] <axw> dimitern: nope. I assumed you were going to modify AvailabilityZoneAllocations to call SubnetsAvailabilityZoneNames, and ignore any results from AvailabilityZones that are not in that
[09:52] <axw> dimitern: is that right, or am I way off?
[09:52] <dimitern> axw, that was my plan, yes
[09:53] <dimitern> axw, so in case both candidates []instance.Id and subnetIds []network.Id are given, we call InstanceAvailabilityZoneNames() for the former and SubnetsAvailabilityZoneNames() for the latter
[09:54] <axw> dimitern: my only issue is that I'm not sure how many providers that will make sense for
[09:54] <dimitern> axw, and finally return []AvailabilityZoneInstances (which should grow a Subnets field []network.Id, like it has Instances)
[09:56] <dimitern> axw, well, SAZNs() will only be called if the provider supports spaces, as otherwise the SubnetsToZones StartInstanceParams field won't be populated by the provisioner
[09:56] <axw> dimitern: ok, but we are forcing all the implementers of ZonedEnviron to implement SubnetAvailabilityZoneNames
[09:58] <dimitern> axw, right, I see your point - it might be better to have SubnetsAvailabilityZoneNames() as a package-level func
[09:59] <dimitern> axw, but then we need to extend common.AvailabilityZone to have a SubnetIDs() method
[09:59] <axw> dimitern: I think if DistributeInstances (and AvailabilityZoneAllocations) were passed a function to filter out invalid zones, that'd work?
[10:00] <axw> dimitern: not even necessarily to AvailabilityZoneAllocations. the filtering logic could be done in DistributeInstances alone
[10:00] <dimitern> axw, I'm not sure about that
[10:00] <dimitern> axw, DistributeInstances is called in state when assigning a unit, right?
[10:01] <axw> dimitern: yes. it would need to query the instances to determine their subnets.
[10:02] <axw> dimitern: yeah both functions would need the filter, one for add-machine, one for deploy/add-unit
[10:02] <dimitern> axw, hmm..
[10:02] <axw> dimitern: team meeting time, if you're coming
[10:02] <dimitern> axw, but would that be enough for StartInstance to do the right thing?
[10:02] <dimitern> axw, ah, yeah, with a callback it will
[10:02] <dimitern> axw, omw
[10:18] <dimitern> axw, thanks, I'll work out a sketch of what we discussed and propose it - will ping you to have a look tomorrow
[10:18] <axw> dimitern: thanks, sounds good
[12:07] <perrito666> I definitely want one of these at home https://pbs.twimg.com/media/CDC9FfDWEAAYgfS.jpg:large
[12:42] <TheMue> dimitern: time for a quick HO regarding one of your review comments?
[12:48] <perrito666> fantastic, something decided that I wanted my locale in spanish
[12:50] <dimitern> TheMue, not right now - we have another call with the MAAS guys in less than 10m
[12:51] <TheMue> dimitern: ok, only need info about the full stack feature test. I've got them in there.
[12:52] <dimitern> TheMue, I meant in featuretests/ - e.g. cmd_juju_space_test.go
[12:52] <TheMue> dimitern: yes, there I do have them
[12:52] <TheMue> too
[12:52] <TheMue> dimitern: TestSpaceCreateNotSupported and TestSpaceListNotSupported
[12:53] <dimitern> TheMue, then it's fine - no follow-up needed :)
[12:53] <TheMue> dimitern: hehe, ok, thx
[12:54] <dimitern> TheMue, and you can drop the test and helper around the "not supported" case via the supercommand
[12:56] <TheMue> dimitern: you mean RunSuperNotSupported? it only has been a convenience helper for Run
[12:56] <dimitern> TheMue, yeah
[12:57] <dimitern> TheMue, both not supported cases are tested in the subcommand tests
[12:57] <dimitern> TheMue, and testing it via the supercommand running create|list when not supported is covered in featuretests/
[12:57] <TheMue> dimitern: yes, I needed this helper due to the ErrSilent
[13:31] <mgz> rogpeppe: are you around to talk charm/charmstore dependencies?
[13:31] <rogpeppe> mgz: sure
[13:32] <rogpeppe> mgz: wanna hangout?
[13:37] <mgz> rogpeppe: sure
[13:58] <mgz> rogpeppe: hm, http://paste.ubuntu.com/12328486/
[14:05] <pmatulis> re 'upgrade-juju --version',  ① how to get a list of available versions and ② what logic is used to pick a version?
[14:05] <mgz> rogpeppe: bumping github.com/juju/schema to version as in juju/juju works
[14:05] <perrito666> pmatulis: 1) juju upgrade-juju --dry-run
[14:09] <rogpeppe> mgz: i'll push a better version of dependencies.tsv
[14:10] <mgz> rogpeppe: ta
[14:10] <pmatulis> perrito666: did that already. it gives me versions according to some algorithm, not according to a forced version (--version). at this time the output is
[14:10] <mgz> you can save the landing to be a test run of the gating
[14:10] <pmatulis> no upgrades available
[14:10] <perrito666> :(
[14:11] <pmatulis> perrito666: 'xactly
[14:12] <perrito666> pmatulis: current version on the server?
[14:13] <pmatulis> perrito666: my agents are currently running 1.22.8, if that's what you meant
[14:13] <perrito666> pmatulis: yes thank you
[14:16] <rogpeppe> mgz: https://github.com/juju/charm/pull/152
[14:17] <perrito666> mgz: did you ever review the patch I proposed to fix the migration of status history?
[14:19] <mgz> rogpeppe: that's `godeps > dependencies.tsv` with your current working set of deps?
[14:19] <rogpeppe> mgz: pretty much, yes
[14:19] <rogpeppe> mgz: i've landed it
[14:19] <mgz> perrito666: I did read the branch, was past my eod so was hoping someone else would pick it up
[14:19] <rogpeppe> mgz: it should be ok now
[14:19] <perrito666> mgz: no one did
[14:19] <perrito666> how sad
[14:20] <mgz> perrito666: I can +1 but would like someone else to look as well, I am well removed from this code
[14:20] <perrito666> np
[14:21] <mgz> rogpeppe: you didn't let me use the change as a guinea pig... ;_;
[14:21] <rogpeppe> mgz: i can back it out...
[14:21] <mgz> rogpeppe: nah, I can test without needing a real landing
[14:21] <rogpeppe> mgz: ok, cool
[14:25] <mgz> actually, I'll just use 150, it's nice and trivial
[14:26] <TheMue> dimitern: found and removed it, now using your RunCreate and a similar RunList to not swallow the expected error. that was the hint I needed, thx.
[14:28] <mgz> rogpeppe: worked,
[14:28] <mgz> https://github.com/juju/charm/pull/150
[14:28] <mgz> http://juju-ci.vapour.ws:8080/job/github-merge-juju-charm/1/console
[14:29] <rogpeppe> mgz: awesome, thanks
[14:35] <mgz> perrito666: you seem to have some review comments from the antipodes
[14:35] <perrito666> m?
[14:35] <katco> xwwt: ping
[14:36] <mgz> perrito666: trivial stuff
[14:36] <perrito666> mgz: indeed, nice anyway
[14:37] <dimitern> TheMue, cheers :)
[14:40]  * dimitern can't stand what cloudconfig/userdatacfg_test.go has become - turns out we're not even testing what InstanceConfig looks like for non-ubuntu series
[14:41] <dimitern> I'm fixing this and adding centos7 tests
[14:44] <fwereade> natefinch, ping me when you come on and make me talk about the queued-action watcher and why it's good/bad and should be copied/not
[14:45] <fwereade> natefinch, ah, bother, I haven't looked at it properly
[14:45] <natefinch> fwereade: np
[14:47] <katco> frobware: last meeting wrapped up a bit early if you have time now
[14:47] <mup> Bug #1494356 opened: OS-deployer job fails to complete <blocker> <ci> <regression> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1494356>
[14:47] <frobware> katco, sure
[14:48] <frobware> katco, meh. let me restart chrome...
[14:49] <katco> frobware: haha k
[14:49] <fwereade> natefinch, ok, so, actions use a thing called an idPrefixWatcher
[14:49] <fwereade> natefinch, which is a StringsWatcher
[14:49] <fwereade> natefinch, but which behaves differently to other StringsWatchers
[14:49] <frobware> katco, hehe. "There is a problem connecting to this video call. Try again in a few minutes.". joy.
[14:50] <katco> frobware: doh! possibly your auth. expired? seems to happen a lot
[14:50] <natefinch> frobware: make sure you're using the right account, it might not be using your canonical account
[14:50] <natefinch> fwereade: ok
[14:50] <katco> frobware: i see you in the meeting which is odd... now 2 of you :p
[14:52] <fwereade> natefinch, so many StringsWatchers are on sets of lifecycle entities
[14:53] <natefinch> fwereade: ug, this sounds like it's encoding data in the ID and then relying on parsing the ID to re-extract that data.... can we avoid doing that?  It's always bitten me in the past
[14:53] <fwereade> natefinch, and they notify by sending the appropriate entity ids in response to enter-set, change-life-to-dying, and remove-or-set-dead
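The lifecycle-set semantics fwereade describes (notify on enter-set, change-life-to-dying, and remove-or-set-dead) can be sketched roughly as below. All names here (`Life`, `lifeEvent`, `shouldNotify`) are illustrative stand-ins, not juju's real types.

```go
package main

import "fmt"

// Life mirrors the lifecycle states discussed above.
type Life string

const (
	Alive Life = "alive"
	Dying Life = "dying"
	Dead  Life = "dead"
)

// lifeEvent is a hypothetical change record for a watched entity.
type lifeEvent struct {
	id      string
	life    Life
	entered bool // entity just entered the watched set
	removed bool // entity's document was removed
}

// shouldNotify decides whether a lifecycle StringsWatcher would send
// this entity's id: enter-set, becoming Dying, and remove-or-Dead all
// notify; other updates do not.
func shouldNotify(ev lifeEvent) bool {
	return ev.entered || ev.removed || ev.life == Dying || ev.life == Dead
}

func main() {
	fmt.Println(shouldNotify(lifeEvent{id: "mysql/0", life: Alive, entered: true})) // enter-set
	fmt.Println(shouldNotify(lifeEvent{id: "mysql/0", life: Dying}))                // going dying
	fmt.Println(shouldNotify(lifeEvent{id: "mysql/0", life: Alive}))                // plain update
}
```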
[14:53] <fwereade> natefinch, I am keen to hear alternatives
[14:53] <fwereade> natefinch, but we have some fun restrictions
[14:54] <fwereade> natefinch, like, we have to be incredibly stingy with db access in the watchers
[14:55] <fwereade> natefinch, because any time a watcher is not selecting on the channel it registered, it might be blocking *every other watcher*
[14:56] <fwereade> natefinch, that said
[14:56] <fwereade> natefinch, in this case I actually think we don't have to
[14:56] <mup> Bug #1494356 changed: OS-deployer job fails to complete <blocker> <ci> <regression> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1494356>
[14:56] <fwereade> natefinch, or, well, hm
[14:57] <natefinch> fwereade:  I am probably misunderstanding something - I thought watchers were per-collection... and since this is a new collection with a single purpose... do we really need to do more than get the ID from the channel and use it to do the work it needs to do?
[14:57] <fwereade> natefinch, well, if the id does encode the data that's all we need
[14:57] <fwereade> natefinch, if it's a nice opaque id we have to hit the db to find out anything useful
[14:58] <fwereade> natefinch, and, fwiw, yes, almost all the watchers just look in one collection
[14:59] <natefinch> fwereade: how does one watch block all the other watchers?
[14:59] <fwereade> natefinch, but they're all sharing an underlying mechanism, with which they interact by registering/unregistering channels to receive events
[15:00] <fwereade> natefinch, and the underlying watcher just loops over everything that's registered for each event and delivers them all in sequence
[15:00] <fwereade> natefinch, I'll give you a moment to recover ;p
[15:01] <natefinch> fwereade: so is it blocking all other watchers or all other watchers of the same collection?
[15:01] <fwereade> natefinch, all other watchers
[15:01] <fwereade> natefinch, there's just one state/watcher.Watcher
[15:02] <natefinch> so it's like our very own global interpreter lock
[15:02] <fwereade> natefinch, one instance of which backs all the various watchers defined in state
[15:02] <fwereade> natefinch, yeah, close enough :)
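The "GIL-like" behaviour fwereade describes - one shared underlying watcher delivering every event to every registered channel in sequence - can be sketched as below. The `hub`/`Event`/`Register` names are made up for illustration; the real mechanism lives in state/watcher.

```go
package main

import "fmt"

// Event is an illustrative change notification.
type Event struct {
	Collection string
	ID         string
}

// hub mimics the single shared watcher: watchers register channels,
// and one loop delivers each event to every channel in turn.
type hub struct {
	subs []chan Event
}

func (h *hub) Register(ch chan Event) {
	h.subs = append(h.subs, ch)
}

// dispatch sends ev to each subscriber in sequence. If a subscriber is
// not receiving on its (unbuffered) channel, the send blocks, and so
// does delivery to every subscriber after it - hence the advice that a
// watcher must keep selecting on its registered channel.
func (h *hub) dispatch(ev Event) {
	for _, ch := range h.subs {
		ch <- ev
	}
}

func main() {
	h := &hub{}
	a := make(chan Event, 1) // buffered here only so the demo can't stall
	b := make(chan Event, 1)
	h.Register(a)
	h.Register(b)
	h.dispatch(Event{"assignments", "uuid-1"})
	fmt.Println((<-a).ID, (<-b).ID)
}
```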
[15:02] <natefinch> ...this statement coming from someone who knows nothing about the GIL except it's bad and stops multithreading ;)
[15:05]  * fwereade once wrote a bridge between GILful and GILfree python interpreters; that was fun, but most of the time the GIL-handling is safely out of the way of actual code
[15:05] <fwereade> natefinch, the same mitigation strategy probably applies though
[15:06] <fwereade> natefinch, run a bunch of them and distribute your requests among them so no one instance can lock everything up
[15:06] <fwereade> natefinch, although ofc that's a tad wasteful
[15:06] <fwereade> natefinch, taste and discretion required :)
[15:07] <fwereade> natefinch, so, anyway
[15:07] <natefinch> fwereade: well, if you say it's for the best, I believe you... but that sort of sounds like we need to encode everything in the ID
[15:08] <mup> Bug #1494356 opened: OS-deployer job fails to complete <blocker> <ci> <regression> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1494356>
[15:09] <fwereade> natefinch, the forces very often push us that way, yes :(
[15:09] <fwereade> natefinch, however, in this case I think we *can* quite happily use opaque ids -- uuids or something
[15:10] <fwereade> natefinch, and send those out from the watcher
[15:11] <fwereade> natefinch, the worker doesn't *have* to do anything more than send up a call saying "please run this list of assignment request ids"
[15:11] <fwereade> natefinch, and figure out what retry strategy it needs to handle failures
[15:12] <natefinch> fwereade: right, so the worker just passes the id to the API and the API handles it.
[15:13] <fwereade> natefinch, yeah, exactly -- and my expectation here is that we have one global assigner, so there's no benefit to encoding classification data in the id anyway
[15:13] <fwereade> (one assigner per env, rather)
[15:15] <fwereade> natefinch, I think the most important part is going to be surfacing failures in the unit status, and knowing how we go about retrying; and making sure that, arrgh, we don't break the interaction between unit status and fast-forward unit destruction
[15:17] <fwereade> natefinch, am I saying helpful things?
[15:18] <natefinch> fwereade: yep
[15:19] <fwereade> natefinch, cool -- so, to go back to watcher semantics
[15:19] <fwereade> natefinch, I think it's fine to have a StringsWatcher that sends [initial set] on first event, and [newly added ids] on subsequent events
[15:20] <fwereade> natefinch, and I *think* that one is so simple as to be best implemented standalone (well, on top of commonWatcher)
[15:20] <fwereade> natefinch, the main thing is to hunt down the existing StringsWatchers and make sure that what events they're signalling is clearly documented
[15:23] <fwereade> natefinch, (that's something that should have happened when we added the action watcher, sorry I missed it)
[15:24] <fwereade> natefinch, actually, it looks like many of them are already documented correctly
[15:24] <fwereade> natefinch, and we can follow the same form
[15:25] <fwereade> natefinch, // WatchAssignmentQueue returns a StringsWatcher that notifies of every item added to the queue.
[15:25] <fwereade> natefinch, or something
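The semantics agreed above - first event carries the full initial set, later events carry only newly added ids - could look roughly like this minimal sketch. `queueWatcher` and its methods are hypothetical, not juju's real StringsWatcher implementation (which builds on commonWatcher).

```go
package main

import "fmt"

// queueWatcher is an illustrative StringsWatcher-alike: the first event
// on Changes() is the full initial set of queued ids, and each later
// event contains only ids added since.
type queueWatcher struct {
	changes chan []string
	known   map[string]bool
}

func newQueueWatcher(initial []string) *queueWatcher {
	w := &queueWatcher{changes: make(chan []string, 1), known: map[string]bool{}}
	for _, id := range initial {
		w.known[id] = true
	}
	w.changes <- initial // first event: everything currently queued
	return w
}

// add reports a newly queued id; already-seen ids are not re-sent.
func (w *queueWatcher) add(id string) {
	if !w.known[id] {
		w.known[id] = true
		w.changes <- []string{id}
	}
}

func (w *queueWatcher) Changes() <-chan []string { return w.changes }

func main() {
	w := newQueueWatcher([]string{"uuid-1", "uuid-2"})
	fmt.Println(<-w.Changes()) // initial set
	w.add("uuid-3")
	fmt.Println(<-w.Changes()) // just the new id
}
```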
[15:26] <fwereade> natefinch, have you written watchers before?
[15:27] <xwwt> Hi katco
[15:30] <natefinch> fwereade: sorry, had to step away for a second.
[15:31] <natefinch> fwereade: I have, but it was a long time ago at this point
[15:33] <fwereade> natefinch, been a while for me too :)
[15:34] <fwereade> natefinch, I think the important considerations here are: (1) as always, go for at-least-once-delivery, so start the watch before reading initial state
[15:35] <fwereade> natefinch, (2) expect bursty writes, so do that thing where we keep sucking events off the watch channel for a few ms before sending a batch
[15:36] <fwereade> natefinch, (3) send out events as uuids (if that's what you pick) -- but not raw internal ids, or tags, even though we'll want to send them back up as tags
[15:36] <fwereade> natefinch, because even though it's dumb that we send out state-client ids over watcher channels
[15:37] <fwereade> natefinch, we should address this with a watcher-event-translation layer in apiserver
[15:37] <fwereade> natefinch, rather than pervert the state watchers by making *some of them* return api tags
[15:38] <fwereade> natefinch, sane-ish?
[15:39] <fwereade> natefinch, re (2): updates, ok := collect(ch, in, w.tomb.Dying())
[15:41] <natefinch> fwereade: re: send events as UUIDs - do you mean to add a field to the document that is a UUID and separate from the _id itself?
[15:41] <fwereade> natefinch, I forget the details of how watchers interact with the multiEnv stuff in state
[15:42] <natefinch> I think I'm confusing myself, if the watchers only get the IDs anyway
[15:42] <fwereade> natefinch, I *think* that in this case a plain string UUID is how we want to represent it as it leaves state
[15:42] <fwereade> natefinch, and we may or may not need to pay attention, at some point, to the fact that its _id is *really* going to be prefixed with the env id
[15:43] <fwereade> natefinch, menn0 would have the latest on how well the leaks in that abstraction have been patched
[15:43] <fwereade> natefinch, but *most* of the time, when we're safe from multi-env leaks, yes, we can just use the UUID as the _id
[15:44] <natefinch> fwereade: in theory if you're just using the _id as an opaque id, it doesn't matter what we've encoded into it... which is kind of the point of not parsing the id
[15:44] <fwereade> natefinch, yeah, hopefully all that is handled for you one layer below
[15:45] <natefinch> fwereade: when you say we'll want to send them back up as tags, what do you mean?
[15:46] <fwereade> natefinch, I mean that tags are the language of the api, and I would prefer to always represent references to juju entities in that format over the wire
[15:46] <fwereade> natefinch, it's annoying that the watchers don't respect that
[15:47] <natefinch> fwereade: how does a worker translate an id to a tag without accessing the database?
[15:48] <fwereade> natefinch, if the tag is always just, say, "queued-assignment-<uuid>" it's pretty easy to convert
[15:48] <fwereade> natefinch, the watcher concerns have sent tentacles all the way through the codebase, really
[15:49] <fwereade> natefinch, generally tag and id are two-way convertible though
[15:49] <fwereade> natefinch, without context
[15:50] <fwereade> natefinch, as are id and (internal) _id
[15:50] <natefinch> fwereade: so a tag is basically an id that also specifies its type
[15:50] <fwereade> natefinch, yeah
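The two-way, context-free tag/id conversion just discussed is trivial when the tag is only a type prefix plus the id. The "queued-assignment-" prefix comes from fwereade's example above; the function names are illustrative, not juju's real names package API.

```go
package main

import (
	"fmt"
	"strings"
)

// tagPrefix identifies the (hypothetical) queued-assignment entity type.
const tagPrefix = "queued-assignment-"

// tagFromID converts an opaque id into its tag form - no db access needed.
func tagFromID(id string) string { return tagPrefix + id }

// idFromTag recovers the id from a tag, reporting whether the tag had
// the expected type prefix.
func idFromTag(tag string) (string, bool) {
	if !strings.HasPrefix(tag, tagPrefix) {
		return "", false
	}
	return strings.TrimPrefix(tag, tagPrefix), true
}

func main() {
	tag := tagFromID("6f9619ff")
	id, ok := idFromTag(tag)
	fmt.Println(tag, id, ok)
}
```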
[15:56] <dimitern> fwereade, hey
[15:57] <dimitern> fwereade, looking at a few unit logs from the last blocker bug: http://data.vapour.ws/juju-ci/products/version-3040/OS-deployer/build-250/machine-0/unit-mysql-0.log
[15:58] <dimitern> fwereade, why does the uniter always seem to be waiting to lose leadership the first time ModeAbide is entered?
[16:00] <dimitern> looks like it happens just before the first relation-joined hook is called for mysql:cluster
[16:11] <fwereade> dimitern, that should certainly always be happening if it's the only unit of the service
[16:11] <fwereade> dimitern, minions will be waiting to gain leadership
[16:14] <fwereade> dimitern, the vast majority of the time those tickets will never fire
[16:19] <alexisb> katco, dimitern juju team, master and 1.25 are now officially blocked
[16:19] <alexisb> we will need to identify what is causing 1.25 to fail and get a fix committed
[16:19] <alexisb> sinzui, abentley do we have a bug open for the current CI failure on 1.25?
[16:21] <alexisb> mgz, ^^^
[16:21] <mgz> alexisb: bug 1494356
[16:21] <mup> Bug #1494356: OS-deployer job fails to complete <blocker> <ci> <regression> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1494356>
[16:21] <alexisb> mgz, thanks, let me go look
[16:21] <sinzui> alexisb: also bug 1493887
[16:21] <mup> Bug #1493887: statusHistoryTestSuite teardown fails on windows <blocker> <ci> <regression> <test-failure> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1493887>
[16:22] <mgz> master only for that one
[16:22] <alexisb> cherylj, I see you are looking at 1494356
[16:22] <alexisb> first off thank you
[16:23] <alexisb> are you planning on working that bug, cherylj ?
[16:23] <alexisb> if so can you please assign it to yourself
[16:23] <cherylj> alexisb: yeah, I can do that
[16:23] <alexisb> mgz, can you address cherylj's question in the bug please
[16:25] <mgz> alexisb: sure
[16:25] <alexisb> thanks all
[16:26] <cherylj> mgz:  I will need to step out for a bit, but shouldn't be gone for more than an hour.
[16:28] <mgz> cherylj: short version is the jobs should include all the lxc logs if they were actually on the machine, but I will double check
[16:28] <dimitern> alexisb, I was looking at the OS-deployer bug for some time now
[16:28] <alexisb> dimitern, thank you
[16:28] <dimitern> cherylj, alexisb, unfortunately it's not clear why it happens yet
[16:49] <rogpeppe> with feature branches, what's the preferred method for keeping them up to date with master? merge or rebase?
[16:50] <mgz> cherylj, dimitern: I am rerunning the job with a shorter timeout and extra log capturing, should be done in 45mins
[17:03] <dimitern> mgz, cheers - btw do you know what version of maas is that os-deployer trying to use?
[17:06] <mgz> dimitern: it's running on our maas18
[17:07] <mgz> there's nothing obviously borked about it, I've been poking it today looking for something
[17:07] <mgz> but it's possible our networking got screwed up or something else non-obvious
[17:07] <mgz> I just can't see any evidence of that from the run
[17:08] <dimitern> mgz, I found out with maas 1.9 we're having issues, but that's with a yet-uncommitted change on trunk there
[17:10] <dimitern> mgz, it seems both timeouts are due to not provisioning lxc containers, but I can see the juju-trusty-lxc-template is created ok
[17:11] <dimitern> mgz, and the X/lxc/Y machine starts; where are the container logs and cloud-init then?
[17:12] <mgz> dimitern: my rules were only capturing extra lxc logging for the local case, I added in a pattern for remote as well, so we will see
[17:13] <dimitern> mgz, awesome!
[18:57] <mup> Bug #1494441 opened: ppc64el: cannot find package "encoding" <blocker> <ci> <ppc64el> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1494441>
[19:01] <alexisb> rogpeppe, ^^^
[19:02] <rick_h_> alexisb: he's EOD
[19:02] <rogpeppe> alexisb: i am eod, but not sure what you were pointing me at
[19:02] <alexisb> the bug above
[19:02] <alexisb> it is your commit
[19:03] <alexisb> lp 1494441
[19:05] <natefinch> alexisb: the last bug of that type was an environmental one.... I'm pretty sure it's still an environmental issue
[19:05] <natefinch> sinzui: ^^    not being able to find a package in the standard library is not a bug in juju
[19:06] <sinzui> natefinch: sure, but we wont be releasing juju until someone on core fixes it
[19:06] <natefinch> it's an environmental issue, just like it was last time
[19:07] <natefinch> somehow the go standard library on the machine doing the build is messed up
[19:07] <sinzui> natefinch: We have new machines and clean containers.
[19:07] <sinzui> Since we are building like LP, and it fails, I cannot see how we can release
[19:07] <natefinch> certainly, the problem needs to be fixed, I'm just saying, nothing we change on github is going to fix the problem
[19:08] <sinzui> natefinch: I can fix the issue by backing out the bad commit so can any member of the core team
[19:10] <natefinch> sinzui: that's like blaming the car manufacturer for building a car that hits a pothole in the road. The pothole is the problem, not the car
[19:11] <natefinch> in this case, the build infrastructure is the road with the pothole
[19:11] <natefinch> master builds fine with gccgo on my machine
[19:12] <sinzui> natefinch: no it is not. We are obligated to deliver to Ubuntu a version that they can build and distribute on trusty ppc64el. There are several forks of the xml package already in the code base, something needs to be taught to use the fork
[19:12] <natefinch> we need to fix the damn pothole, and stop changing the car to avoid it
[19:13] <alexisb> natefinch, we are not going to get a gccgo update into trusty
[19:13] <sinzui> natefinch: can we hangout. I want to tell you about my upcoming MIR meeting and my hope for the one true path.
[19:13] <natefinch> gccgo works on my machine
[19:13] <natefinch> that's the thing
[19:14] <natefinch> in a meeting now... we can talk after
[19:14] <alexisb> sinzui, natefinch I agree with both of you
[19:14] <alexisb> however, for an immediate fix the commit needs to be reverted
[19:15] <alexisb> katco, given it is rogpeppe eod, if there is a member of your team that has bandwidth we should revert the commit
[19:15] <alexisb> otherwise it will have to wait for tomorrow
[19:15] <katco> alexisb: k, sec in meeting
[19:15]  * alexisb changes location 
[19:15] <alexisb> now that I have caused trouble ;)
[19:30] <mgz> natefinch: are you confusing gccgo bugs?
[19:30] <mgz> natefinch: bug 1440940 wasn't changed by altering the ppc build environment
[19:30] <mup> Bug #1440940: xml/marshal.go:10:2: cannot find package "encoding" <blocker> <ci> <regression> <test-failure> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <juju-release-tools:Fix Released by gz> <https://launchpad.net/bugs/1440940>
[19:30] <mgz> it was fixed twice, by:
[19:31] <mgz> * first me hacking around it in the juju/xml package
[19:31] <mgz> * second by eric making the vsphere provider *never even try to compile* on gccgo
[19:32] <mgz> as juju/httprequest introduces a new problematic import, we're going to have to hack around it again
[19:42] <natefinch> mgz: I swear last time I got on a fresh ppc machine and was able to build just fine with the packages provided by apt.
[19:59] <mwhudson> waait, i fixed that can't find encoding bug like a year ago
[20:03] <mwhudson> or at least i think i did
[20:05] <mwhudson> buuut somehow the fix isn't in trusty
[20:05] <mwhudson> ffs
[20:14] <natefinch> mwhudson: dave said the fix was in the gccgo in trusty-updates
[20:15] <mwhudson> yeah well it doesn't seem to be
[20:15] <natefinch> mwhudson: ahh, well, that explains some things
[20:42] <mup> Bug #1494476 opened: MAAS provider with MAAS 1.9 - /etc/network/interfaces "auto eth0" gets removed and bridge is not setup <juju-core:New> <https://launchpad.net/bugs/1494476>
[22:04] <perrito666> lol I rushed back to a meeting... it was in half an hour
[22:08] <perrito666> wallyworld: disregard my email, Ill be on time
[22:08] <wallyworld> ok
[22:53] <ericsnow> wallyworld: could you take a look at #1493123
[22:53] <ericsnow> wallyworld: it's similar to #1472729 which you fixed in July
[22:53] <mup> Bug #1493123: Upgrade in progress reported, but panic happening behind scenes <landscape> <landscape-release-29> <upgrade-juju> <juju-core:In Progress by ericsnowcurrently>
[22:53] <mup> <juju-core 1.24:In Progress by ericsnowcurrently> <juju-core 1.25:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1493123>
[22:53] <mup> Bug #1472729: Agent shutdown can cause cert updater channel already closed panic <regression> <upgrade-juju> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1472729>
[22:53] <wallyworld> ericsnow: will do, just in a meeting
[22:54] <ericsnow> wallyworld: np; I'll be in and out
[23:03] <wallyworld> ericsnow: yeah, looks like a similar fix is needed at first glance
[23:04] <ericsnow> wallyworld: I'm just not positive that certChangedChan is the offending channel in this case
[23:04] <wallyworld> me either, i haven't looked in detail at the logs
[23:04] <ericsnow> wallyworld: the corresponding timeline wouldn't line up
[23:05] <ericsnow> wallyworld: k, I'll poke at it some more; feel free to grab it too :)
[23:06] <wallyworld> will do, got some stuff i have to get done this morning first up, will look after that