[00:16] <thumper> menn0: meeting?
[00:17] <menn0> thumper: browser is refusing to connect
[00:18] <thumper> tried turning it off and on again?
[00:30] <menn0> thumper: my connectivity to bluejeans seems flaky today
[00:30] <menn0> thumper: internet is working otherwise
[00:30]  * thumper nods
[05:10] <rock_> redir/rich_k: Thank you for escalating my issue. How do I contact the UI team that <rich_h> mentioned?
[05:11] <redir> rock_: try #juju-gui here on freenode
[05:12] <mhilton> rock_: I can probably help you out
[05:13] <mhilton> rock_: what do you need?
[05:13] <rock_> redir: Thanks.
[05:13] <rock_> mhilton : Thank you.
[05:14] <redir> rock_: np, sorry I couldn't be of more help
[05:15] <rock_> mhilton: I will send clear info.
[05:15] <rock_> redir: No problem. I am happy with your support.
[05:17] <rock_> mhilton: Complete issue details: http://paste.openstack.org/show/589216/
[05:17] <mhilton> rock_, thanks I'll take a look
[05:18] <rock_> mhilton: Ok. Thank you.
[09:17] <rock_> mhilton: Hmmm, thank you very much. One main issue is resolved, but we have another issue, as I mentioned in the bug https://github.com/CanonicalLtd/jujucharms.com/issues/372
[10:33] <mhilton> rock_: the old kaminario are no longer available.
[10:45] <rogpeppe> voidspace: hiya
[10:47] <rogpeppe> anyone know what the best way is to get all logs from a HA controller?
[10:50] <mgz> rogpeppe: in what fashion? you can just look at log files on all the machines, but there's also an api call that gathers it up for you
[10:50] <rogpeppe> mgz: do you have a moment for a chat?
[10:50] <mgz> rogpeppe: sure, let's
[11:09] <voidspace> rogpeppe: hey, hi
[11:09] <rogpeppe> voidspace: hiya
[11:09] <rogpeppe> voidspace: sorry, i was just fishing for people that might know about juju logging :)
[11:09] <rogpeppe> voidspace: mgz bit
[11:10] <voidspace> rogpeppe: hah, I used to know about logging - but we changed it :-)
[11:10] <rogpeppe> voidspace: me too :)
[11:10] <voidspace> rogpeppe: mgz is handy to have around...
[11:10] <rogpeppe> voidspace: that he is
[11:18] <rock_> mhilton: Thank you very much. All my issues are resolved.
[11:19] <mhilton> rock_: glad to hear it, happy jujuing
[11:27] <voidspace> really short PR for someone: https://github.com/juju/juju/pull/6567
[11:27] <voidspace> rogpeppe: just FYI, I changed the cookie file locking mechanism in persistent-cookiejar to use juju/mutex
[11:27] <voidspace> rogpeppe: https://github.com/juju/persistent-cookiejar/pull/19
[11:28] <voidspace> rogpeppe: PR 6567 switches to using the newer version
[11:28] <rogpeppe> voidspace: hmm, not sure about that
[11:28] <rogpeppe> voidspace: why?
[11:28] <voidspace> rogpeppe: this addresses https://bugs.launchpad.net/juju-wait/+bug/1632362
[11:28] <mup> Bug #1632362: error during juju-wait ERROR cannot load cookies: file locked for too long; giving up: cannot acquire lock: resource temporarily unavailable <eda> <oil> <juju:In Progress by mfoord> <Juju Wait Plugin:Invalid> <https://launchpad.net/bugs/1632362>
[11:28] <voidspace> rogpeppe: file based locking is unreliable (possibility of stale lock files - that's what we found in juju and why we switched to juju/mutex)
[11:29] <voidspace> rogpeppe: plus the exponential backoff in the old locking mechanism meant it would only poll the lockfile 4 times or so
[11:29] <rogpeppe> voidspace: it didn't use file-based locking
[11:29] <rogpeppe> voidspace: except when absolutely necessary
[11:29] <rogpeppe> voidspace: the exponential backoff is another issue
[11:30] <rogpeppe> voidspace: which indeed sounds like a bug
[11:30] <voidspace> rogpeppe: pretty sure the old lockFile function used file based locking
[11:30] <rogpeppe> voidspace: it used flock
[11:30] <rogpeppe> voidspace: which is file-based but doesn't suffer from the stale lock file issue
[11:31] <voidspace> right, juju/mutex is still good :-)
[11:31] <voidspace> and standardising on a single file locking mechanism is good
[11:32] <rogpeppe> voidspace: for future reference, please squash commits down to a single commit before submitting
[11:32] <voidspace> kk
[11:33] <rogpeppe> voidspace: that's true of all our projects FWIW
[11:35] <rogpeppe> voidspace: 100us seems a bit quick if you're going to be polling for a long time
[11:43] <voidspace> rogpeppe: isn't it ms?
[11:44] <voidspace> rogpeppe: ah, it's not...
[11:44] <rogpeppe> voidspace: not by my reading of the code
[11:44] <voidspace> rogpeppe: yeah, I think you're right
[11:44] <voidspace> rogpeppe: every 100us is probably too much
[11:44] <rogpeppe> voidspace: originally it was 100us because it would exponentially backoff
[11:44] <rogpeppe> voidspace: which i think is still a good thing to do
[11:44] <voidspace> yeah, that means I'm wrong about the number of polls too
[11:44] <rogpeppe> voidspace: it should probably exponentially back off up to a maximum delay
[11:44] <rogpeppe> voidspace: which is easy to obtain
[11:45] <voidspace> sure
[11:45] <rogpeppe> voidspace: but not using juju/mutex
[11:45] <voidspace> well, I think a fixed number of polls is fine too
[11:45] <voidspace> and we really want to just maintain one set of locking code
[11:46] <voidspace> but you're right 100us is too frequent I'll look again
[11:46] <voidspace> and I think the whole retry loop within persistent-cookiejar may be unneeded with juju/mutex
[11:49] <rogpeppe> voidspace: agreed, but i think exponential backoff is an appropriate strategy, but not one that juju/mutex uses
[11:49] <rogpeppe> voidspace: tbh i'm not sure that the locking package should be doing the retry loop - it's easy for callers to do that and that gives them all the flexibility they need
[11:49] <voidspace> rogpeppe: I agree it's an appropriate strategy, I don't think fixed polling is inappropriate though, it simplifies the code and means we use one consistent file locking mechanism throughout
[11:50] <voidspace> rogpeppe: and hopefully it fixes a critical bug too
[11:50] <voidspace> we'll see
[11:51] <voidspace> rogpeppe: I'm not going to debate it endlessly unless you can suggest another fix for the bug, sorry
[11:51] <voidspace> coffee - brb
[11:51] <rogpeppe> voidspace: i'll take a longer look at the bug in a bit - i'm on a call currently
[11:53] <rogpeppe> mgz, mhilton: i just raised https://bugs.launchpad.net/juju/+bug/1641927
[11:53] <mup> Bug #1641927: debug-log on a model doesn't log any provisioner events <juju:New> <https://launchpad.net/bugs/1641927>
[11:59] <voidspace> rogpeppe: ok - the underlying issue (I'm pretty sure) is that "juju status" attempts to read the cookie file for all models simultaneously
[11:59] <rogpeppe> voidspace: really? it should only be reading the cookie file once
[11:59] <voidspace> well, maybe "pretty sure" is the wrong phrase...
[11:59] <rogpeppe> voidspace: for each juju command
[12:00] <voidspace> rogpeppe: I'll take a look - the error is that acquiring the lock times out in "juju status" when there are multiple models
[12:00] <rogpeppe> voidspace: but i guess you'd have significant contention if you're running lots of concurrent juju commands
[12:00] <voidspace> rogpeppe: it's triggered by the juju-wait plugin that polls status
[12:00] <rogpeppe> voidspace: if this is the problem then your fix won't address the issue, i reckon
[12:01] <rogpeppe> voidspace: ah, do you know where that lives?
[12:01] <voidspace> rogpeppe: reading the cookie file should not take very long
[12:01] <rogpeppe> voidspace: indeed
[12:01] <voidspace> rogpeppe: no, but the bug is in "juju status" not the plugin itself
[12:01] <voidspace> and the specific bug is the timeout in persistent-cookiejar
[12:01] <rogpeppe> voidspace: so the plugin invokes juju status for each model?
[12:02] <voidspace> rogpeppe: ah, not sure about that
[12:02] <rogpeppe> voidspace: do you know where the juju-wait plugin source code is?
[12:02] <voidspace> rogpeppe: no, reading the bug report - it's for a given model
[12:03] <voidspace> rogpeppe: I believe https://code.launchpad.net/juju-wait
[12:03] <rogpeppe> voidspace: thanks
[12:03] <voidspace> I'm looking now
[12:04] <voidspace> and specifically: https://git.launchpad.net/juju-wait/tree/juju_wait/__init__.py
[12:09] <voidspace> leadership_poll can call juju concurrently.
[12:10] <voidspace> rogpeppe: I'm pretty sure that if we take this to the tech board, though, they will greatly prefer juju/mutex over an flock approach - as (if my memory is correct) we still had problems with that in juju, which is *why* juju/mutex was written
[12:11] <voidspace> rogpeppe: and if the bug is genuinely lock contention then my changes *should* fix it - although they're not quite correct (polling too many times currently and the loop left in)
[12:11] <rogpeppe> voidspace: juju/mutex uses flock in some cases
[12:11] <rogpeppe> voidspace: unix domain sockets are not a decent solution
[12:11] <rogpeppe> voidspace: as they cannot work if you're using a shared fs
[12:12] <voidspace> well, I can take that to the tech board and see what they say - but I still need to fix the bug
[12:12] <rogpeppe> voidspace: did you manage to reproduce the bug?
[12:13] <voidspace> rogpeppe: nope
[12:13] <rogpeppe> voidspace: so how do you know you've fixed it?
[12:13] <voidspace> rogpeppe: I provided custom binaries for the OIL folk to try
[12:13] <voidspace> rogpeppe: they want to wait until it gets into a PPA it seems
[12:14] <voidspace> rogpeppe: so getting the change into develop is the best way to find out currently - see rick's comment on the bug
[12:14] <voidspace> but if it *is* lock contention (or stale lock file), it should be fixed
[12:26] <rogpeppe> voidspace: the reason juju/mutex was written was because the old juju lock mechanism relied on actual file creation
[12:27] <rogpeppe> voidspace: and the one that persistent-cookiejar is using was NIH
[12:27] <rogpeppe> voidspace: although that last reason is not necessarily the case :)
[12:27] <voidspace> heh
[12:28] <voidspace> rogpeppe: I still think, and I think others are likely to agree, that maintaining a single file locking mechanism is a great benefit
[12:28] <voidspace> rogpeppe: I'll talk to the team about it in our standup
[12:29] <voidspace> rogpeppe: I'll also fix the problems with the existing head of master on persistent-cookiejar and if necessary we can just roll all of that back and just tweak the parameters of the old mechanism
[12:30] <voidspace> rogpeppe: it's still the case that once the exponential back-off gets to 100ms delay it only does 4 *further* polls
[12:30] <voidspace> rogpeppe: so genuine lock contention is still a contender for cause of the bug - and yes I know we discussed above a way to fix that
[12:30] <rogpeppe> voidspace: yeah, it should back off to 100ms and then stay there for 2s or so
[12:31] <voidspace> cool
[12:33] <voidspace> rogpeppe: I can raise a bug against juju/mutex that it should allow exponential backoff (to a MaxDelay)
[12:33] <rogpeppe> voidspace: tbh i'd kinda prefer if the file locking mechanism was orthogonal to the retry loop
[12:33] <rogpeppe> voidspace: (which isn't to say that there shouldn't be a standard higher level function to acquire a lock with polling)
[12:34] <voidspace> which is *effectively* what juju/mutex provides, that "higher level function" no?
[12:35] <rogpeppe> voidspace: juju/mutex combines both.
[12:36] <rogpeppe> voidspace: the retry strategy depends quite a bit on what you're locking with respect to
[12:36] <voidspace> rogpeppe: I understand that you dislike that, I'm not really clear why (other than structural distaste)
[12:37] <voidspace> I've raised an issue against juju/mutex *anyway*
[12:40] <voidspace> ah, "tweakable retry strategy" - ok, understood
[12:40] <voidspace> I can explain this clearly to the team I think - we have two ways forward (revert and tweak or switch to juju/mutex)
[12:40] <voidspace> if we don't have consensus or feel we have enough understanding we'll take it to the tech board
[12:41] <voidspace> rogpeppe: and I'll copy you in
[12:42] <urulama> voidspace: so, by changing this without consulting the macaroon implementors, you guys will now cover any possible problems that arise from it, as we've used cookiejar for a while without any problems ... any macaroon issues on services in production, on the controller, and so on, right?
[12:42] <voidspace> urulama: this is shared code by the juju team in the same way as any other code
[12:42] <voidspace> urulama: unless you want to take on this juju bug that seems to be caused by it?
[12:42] <urulama> voidspace: sure. np. just stating.
[12:43] <voidspace> urulama: I do not own this code, I will help where I can
[12:43] <urulama> all i am saying is i'm sure it has been properly tested and QA on all clients that depend on it
[12:44] <voidspace> urulama: I understand, this is true of any changes to shared code - there are risks
[12:44] <voidspace> urulama: and yes, it should be QA'd before any clients switch to the new version
[12:44] <voidspace> urulama: just like any other change
[12:44] <voidspace> urulama: I won't be paralysed by that though
[12:45] <voidspace> urulama: reverting is easy :-)
[12:45] <urulama> no need to revert, just stating that any change of the shared code needs to be properly tested before landing
[12:46] <voidspace> urulama: I mean if it causes problems
[12:46] <rogpeppe> voidspace: from an operational point of view I'm concerned that the persistent-cookiejar package is used by a bunch of external users (e.g. https://godoc.org/github.com/juju/persistent-cookiejar?importers) and that this breaks the existing semantics that the file is protected when using a shared filesystem
[12:47] <voidspace> rogpeppe: I'm going to write this up - it is protected on a shared filesystem precisely *because* then it uses file based locking, right?
[12:47] <rogpeppe> voidspace: yes
[12:47] <rogpeppe> voidspace: if you want to lock a file, a file-based lock is a good way to do it
[12:48] <voidspace> rogpeppe: right, and (I believe) the experience/consensus of those who wrote juju/mutex is that file based locking *cannot* be made reliable (my understanding)
[12:48] <rogpeppe> voidspace: so if it's ok, i'm going to revert it for the time being and fix the timeout issue
[12:48] <rogpeppe> voidspace: i'm afraid i don't believe that
[12:48] <voidspace> rogpeppe: so I think there is a technical disagreement here
[12:48] <voidspace> *sigh*
[12:49] <voidspace> rogpeppe:  I disagree with your decision
[12:49] <rogpeppe> voidspace: the contention issue would have happened regardless of whether it was file-based or unix-domain-socket-based locking
[12:49] <voidspace> right, but with unix-domain-socket-based locking stale lock files are *not* possible
[12:49] <rogpeppe> voidspace: that's true with flock too
[12:49] <rick_h> rogpeppe: just for context this is a critical stakeholder issue preventing oil and others from functioning. We need a fix asap
[12:50] <voidspace> rogpeppe: if you can get contention fixed today we can try that and see if it resolves the issue
[12:50] <rick_h> rogpeppe: so if we need a better path please help identify and move on that path with as much urgency as possible.
[12:50] <rogpeppe> voidspace: i'll do it now
[12:50] <voidspace> rogpeppe: a longer timeout and a max delay
[12:50] <voidspace> rogpeppe: ok, thanks
[12:50] <rogpeppe> voidspace: should only take a 10 minutes
[12:50] <voidspace> rogpeppe: if that doesn't fix it we'll have to look at alternatives - I will work harder on a repro before doing that
[13:08] <rogpeppe> voidspace: https://github.com/juju/persistent-cookiejar/pull/21
[13:25] <voidspace> rogpeppe: thanks, looking
[13:27] <voidspace> rogpeppe: yep, LGTM (making sensible assumptions about the semantics of retry)
[13:27] <voidspace> rogpeppe: land it and we'll try it
[13:36] <voidspace> rogpeppe: ta
[13:36] <voidspace> rogpeppe: thanks for doing it so quickly
[13:36] <rogpeppe> voidspace: thanks for the review :)
[13:37] <voidspace> :-)
[13:47] <voidspace> natefinch: mgz:  macgreagoir:  very short review if you  have time: https://github.com/juju/juju/pull/6568
[13:55] <rick_h> macgreagoir: ping, can you peek at https://github.com/juju/juju/pull/6563 please?
[13:55] <mgz> voidspace: lgtm
[13:58] <voidspace> mgz: you rock
[13:59] <mgz> rick_h: that change seems plausible, ideally we'd qa it
[13:59] <rick_h> mgz: the PR?
[14:00] <mgz> rick_h: yeah
[14:01] <rick_h> mgz: cool, I'm curious. It mentions firewall rules, but rackspace didn't have a firewall on things. /me wonders
[14:01] <mgz> rackspace is special
[14:01] <mgz> they don't really have any of the networking components enabled
[14:02] <mgz> so, we don't create security groups on rackspace, instead use a controller-sshs-in-and-futzes-with-conf method
[14:12] <macgreagoir> rick_h: belated ack on 6568
[14:15] <rick_h> macgreagoir: ty
[15:55] <deanman> evilnickveitch: Is it possible to syntax highlight a specific word inside a code snippet with a specific color?
[15:56] <mgz> hml: wotcha, got some time to catch up in a sec?
[15:56] <evilnickveitch> deanman, Not the way we currently do it, no :(
[15:57] <evilnickveitch> The source just marks something as code. The syntax highlighting is handled by the javascript on the site
[15:58] <deanman> evilnickveitch: should I add an image of the actual output then, or is adding images in general not encouraged?
[15:58] <evilnickveitch> deanman, images are fine. In fact, we use quite a few images for output because of this
[15:59] <evilnickveitch> we just don't use them for commands etc, or stuff that people may want to cut and paste
[15:59] <evilnickveitch> deanman, was there a specific part of docs you had in mind for this?
[16:00] <deanman> evilnickveitch: https://jujucharms.com/docs/stable/help-openstack
[16:01] <deanman> evilnickveitch: `list-clouds` command doesn't seem to prefix with `local` anymore. Instead it colors the custom cloud as seen here https://s15.postimg.org/u8rz0stej/Screen_Shot_2016_11_15_at_17_21_21.png
[16:02] <hml> mgz: i’m here
[16:02] <deanman> evilnickveitch: Should I go forth and make a minor wording adjustment and include an image, or do you have any other suggestion?
[16:03] <evilnickveitch> deanman, yeah, there were some late additions to the output for 2.0, and we haven't tidied them all up yet
[16:03] <evilnickveitch> deanman, please do!
[16:04] <evilnickveitch> deanman, you know where the docs live right? - https://github.com/juju/docs
[16:04] <mgz> hml: so, no alexis today but I think we want to go ahead and get your branches landed
[16:05] <mgz> hml: there's also https://github.com/juju/juju/pull/6563 which will need to be rebased on your change
[16:05] <hml> mgz: sure - i don’t have permissions to do the $$merge$$ thing.
[16:05] <hml> mgz: i haven’t seen anything yet today, but has the juju code been reviewed?  i know you had some high level questions.
[16:05] <hml> mgz: i think goose has to go first, then update dependencies with goose
[16:06] <mgz> I only have some mini points on the openstack provider code
[16:06] <mgz> and yeah, goose needs to happen first
[16:06] <mgz> so the juju change can include the dependency update
[16:09] <hml> mgz: got it - i’m in CA teaching a class for canonical this week.  i have time in the AM to do the code updates, since I’m still on east coast time.  and some in the evenings as well
[16:10] <mgz> hml: I did have one additional thought which would not make you happy... reading through the provider changes made me less and less happy about the V2 on the end of all the function names; it really feels like that should be an aspect of the client interface you get, rather than of the methods on it
[16:10] <mgz> I don't think it makes sense to change that as part of this branch though
[16:11] <hml> mgz: for neutron that would be okay - however in glance, there are different functions for GetImages within the same package.  one uses compute v1 and one uses the image service v2.  what to do in that case?
[16:12] <mgz> you'd need two glance clients, which would at least be explicit about using multiple versions
[16:13] <hml> mgz: that’s a long term change idea for sure - but like you said, it might not work for this set of PRs
[16:13] <mgz> the structs also make sense to have the version embedded rather than being package scoped or similar I think
[16:19] <hml> mgz: if that change is necessary can we chat on it later? or in email?  i have to run shortly for my ride
[16:20] <mgz> hml: it's not needed. I'm leaving some general comments in the juju pr, and will approve the goose pr
[16:20] <mgz> feel free to tackle later today, email me if you have any concerns
[16:20] <hml> mgz: thank you.
[16:20] <mgz> I'd say no, thank you... but you already did :P
[17:18] <voidspace> rick_h: hmmm... it doesn't look to me like the devel ppa is building 2.1 (our develop branch)
[17:18] <voidspace> rick_h: https://launchpad.net/~juju/+archive/ubuntu/devel
[17:18] <voidspace> rick_h: so I might need to get my change into the 2.0 branch to get it into the ppa
[17:19] <rick_h> I thought we were not doing the ppa voidspace
[17:20] <rick_h> I was just talking about the snap. And the snap is rebuilt once a day
[17:20] <voidspace> ah
[17:20] <voidspace> rick_h: your comment is "We won't have a PPA of this until it lands/hits devel and then we'll have a dev snap available."
[17:20] <voidspace> rick_h: which I interpreted as "it will then be in a ppa"
[17:21] <voidspace> rick_h: but you meant, then there will be a snap
[17:21] <voidspace> rick_h: my mistake
[17:23] <voidspace> rick_h: where are the dev snaps (not on uappexplorer it seems)?
[17:24] <rick_h> No since it's not stable. Grabbing lunch, search the mailing list please
[17:24] <voidspace> rick_h: ok
[17:24] <rick_h> I'll look when I'm back at my computer
[17:24] <voidspace> there is a juju one there though, a 2.0 beta from July
[17:24] <voidspace> rick_h: I'll find it
[17:31] <voidspace> rick_h: snap install juju --edge --devmode
[17:40] <perrito666> could anyone review https://github.com/juju/juju/pull/6564 ? it's a rather long one (sorry, 700, but it was a change in the env interface so it spanned many files)
[17:41] <perrito666> will run an errand and be back later
[17:41] <perrito666> cheers
[17:44] <jcastro> rick_h: is there an easy way to see which version of juju is in which channels of the snap store?
[17:46] <voidspace> jcastro: I couldn't find one
[17:46] <voidspace> it looks like even the --edge channel might be 2.0 :-/
[17:47] <jcastro> yeah I'm on 2.0.1
[17:47] <voidspace> that's an issue for me then :-/
[17:47] <voidspace> I'll create a PR for 2.0 with my fix in
[17:58] <rick_h> voidspace: hmm, balloons is the --edge the daily? or is it a different name? /me doesn't recall
[17:58] <voidspace> rick_h: that command line invocation was the one suggested on the mailing list
[17:58] <rick_h> voidspace: for the daily?
[17:58] <voidspace> rick_h: I have a PR against 2.0, which I'm sending to nate to review
[17:58] <voidspace> rick_h: yes, let me find the mail (from August)
[17:59] <rick_h> voidspace: ah ok coolio then...but booo that it's 2.0 and not 2.1
[17:59] <rick_h> seems like worth a note up to the QA folks because I thought it was pointed at develop, or maybe staging branches now
[18:00] <voidspace> rick_h: I'm shortly going EOD and nate is currently offline - so I've asked him to land it on 2.0
[18:00] <rick_h> voidspace: rgr
[18:00] <voidspace> rick_h: I'll contact them, staging gets updated sporadically, so develop would be much more useful for us
[18:00] <hoenir> https://github.com/juju/juju/pull/6570 any thoughts on this?
[18:00] <rick_h> voidspace: +1
[18:00] <voidspace> rick_h: and "juju status" does show the leader, so I've filed a bug against juju-wait
[18:01] <rick_h> voidspace: awesome, still good to fix the bug but also good to help them use changes that have been made
[18:01] <voidspace> yep
[18:02] <rick_h> perrito666: or redir, either of you up for reviewing/QA'ing hoenir's branch above please?
[18:05] <redir> rick_h: looking
[18:06] <redir> hoenir: PR 6570?
[18:13] <hoenir> redir, yeah
[18:45] <redir> hoenir: looks good with a couple minor nits
[20:49] <babbageclunk> hey veebers - any idea why this build failed? http://juju-ci.vapour.ws/job/github-check-merge-juju/212/
[20:49] <babbageclunk> veebers: I can see that it says trusty failed, but I can't see any reason in trusty.out/err
[20:50] <veebers> babbageclunk: I'll take a look now
[20:50] <babbageclunk> veebers: thanks!
[20:50] <babbageclunk> veebers: searching for "--- FAIL" normally works in that output, but it doesn't find anything for me this time.
[20:52] <babbageclunk> veebers: oh, sorry - found it.
[20:52] <veebers> babbageclunk: ah, was just going to say, look for "panic: close of closed channel" in .out
[20:53] <veebers> err trusty-out.log
[20:53] <babbageclunk> veebers: yup, that's what I found too. Thanks!
[20:55] <veebers> nw
[20:59] <babbageclunk> veebers: huh, annoying. That same test passes for me locally - I guess a race between the test closing the channel and the code doing it?
[21:01] <veebers> babbageclunk: possibly. You could always try a rebuild. If that fails too then there is a proper issue there and not a transient one
[21:02] <babbageclunk> veebers: Might do that anyway for an extra data point.
[21:38] <babbageclunk> help! I can't see how this line triggers "close of closed channel" panics! https://github.com/juju/juju/pull/6566/files#diff-0b1eab7c51bd2977a8e922580b9cf3bbR154
[21:39] <anastasiamac> babbageclunk: is it consistent or intermittent?
[21:39] <babbageclunk> anastasiamac: I can't get it to fail locally, only in jenkins
[21:39] <babbageclunk> anastasiamac: but it seems consistent there.
[21:40] <anastasiamac> babbageclunk: :(
[21:53] <perrito666> Alexisb: am I right to think you are off today?
[21:59] <babbageclunk> perrito666: she's so off she's not even in here.
[22:01] <anastasiamac> perrito666: wait until standup :) she might join then \o/
[22:11] <perrito666> redir: anastasiamac: I am in a weird limbo where I don't know which of you is OCR :p so https://github.com/juju/juju/pull/6564 let me give you this link to both
[22:12] <anastasiamac> perrito666: I'm in wednesday so for me it's reed
[22:12] <anastasiamac> perrito666: but i can peek in about an hr if the timing suits?
[22:14] <perrito666> but I think from reed's perspective you are today and he is tomorrow, I hate relativism
[22:17]  * redir hates absolute relativism
[22:17] <redir> funny, it's tuesday here.
[22:17] <redir> :)
[22:17] <anastasiamac> perrito666: i'll look soon :)
[22:27] <perrito666> bbl sport
[22:52] <babbageclunk> anastasiamac: gah, worked it out - someone else (probably axw) had fixed the close channel problem in a better way that merged cleanly with mine - so it had both closes.
[23:07] <anastasiamac> babbageclunk: ur patience and perseverance is inspiring \o/ tyvm :D
[23:19] <babbageclunk> anastasiamac: :) Would be better if I could just work these problems out quicker - then I wouldn't have to be so patient!