[00:24] <axw> wallyworld: https://github.com/juju/juju/pull/8065 is part of a fix for the enable-ha bug
[00:24] <axw> will look at the replicaset stuff after school drop off
[00:24] <wallyworld> axw: nw, ty, will look after talking to xtian
[00:27]  * thumper needs food badly
[00:36] <wallyworld> hml: we need a unit test
[00:36] <hml> wallyworld: okay
[00:36] <wallyworld> there should be stuff to copy from; it's a bit hairy
[01:22]  * thumper is grumpy walking through resources code
[02:25] <hml> wallyworld: just a little hairy.  ha! - pushing the unit test now
[02:25] <wallyworld> great
[02:29] <wallyworld> hml: we just need to also check call names to ensure the provider behaved as expected, in addition to not crashing with the error
[02:29] <wallyworld> there's examples to copy from
[02:33] <hml> wallyworld: i saw examples for storage clients and such… but not the general sender
[02:34] <wallyworld> hml: yeah, guess so. it seems TestStopInstancesNotFound() for example just checks err is nil
[02:35] <wallyworld> so should be ok to land based on that precedent
[02:35] <hml> wallyworld: looked around, not much setup for checking the call tree - though i did verify with some logger messages before finalizing
[02:36] <wallyworld> sgtm
[02:36] <hml> wallyworld: had a few false positives so i wanted to verify
[02:36] <wallyworld> yeah, testing manually is good for this type of issue
[02:40] <hml> wallyworld: ty - merging now
[03:03] <axw> wallyworld: can you please take a look at https://github.com/juju/juju/pull/8065?
[03:04] <wallyworld> sure
[03:04] <wallyworld> sorry, forgot
[03:16] <wallyworld> axw: done
[03:16] <axw> ta
[04:23] <axw> jam: do you know why mongo.SelectPeerAddress allows machine-level addresses?
[04:23] <jam> axw: you mean 127.* stuff?
[04:23] <jam> axw: you can run an HA cluster for testing on just your local machine
[04:23] <axw> jam: I meant to say machine-local, but yeah
[04:24] <axw> hmm ok.
[04:24] <jam> axw: we don't want to allow them ourselves
[04:24] <jam> so doing so is a bug
[04:24] <jam> axw: but I think that's why *mongo* doesn't refuse them
[04:24] <axw> jam: ok, I'll change it then. I meant in our juju/mongo package
[04:25] <jam> axw: so I don't think we personally ever do local-only testing
[04:25] <jam> and if we did, we could just use your eth0 ip address 3 times
[05:06] <axw> jam: can you please take a look at https://github.com/juju/juju/pull/8066?
[05:49] <jam> axw: will do
[06:14] <axw> wallyworld: I've added another commit to https://github.com/juju/juju/pull/8056/, can you please look at the last commit? moves the CACert methods around
[06:14] <wallyworld> ok
[06:14] <axw> wallyworld: sorry wait a sec
[06:14] <axw> I mucked up rebase
[06:14] <axw> wallyworld: ok, all good now
[06:14] <wallyworld> ok
[06:22] <wallyworld> axw: so there's 3 facades that dupe the getting of ca cert from controller config - you're saying that since it's only half a dozen lines of code each time, it's not worth a common plugin
[06:23] <jam> axw: 8066 lgtm
[07:17] <axw> jam: thanks
[07:17] <axw> wallyworld: sorry, was afk. I took it off APIAddresser because (a) it doesn't have anything to do with API addresses, and (b) it was being exposed by things that didn't care about API addresses, and vice versa
[07:17] <axw> wallyworld: i.e. things only cared about CACert and not API addresses
[07:18] <axw> which should be a pretty clear indication that they're orthogonal
[07:19] <wallyworld> sure, i was thinking about a new common plugin
[07:19] <wallyworld> but probably overkill
[07:19] <wallyworld> for what it saves
[07:22] <wallyworld> anyway, lgtm
[07:25] <axw> wallyworld: yeah I don't think it's worthwhile. if it's used again maybe, but I don't see that happening any time soon
[07:25] <axw> I guess the caas provisioner might need it. I'll add it then if required
[07:25] <wallyworld> np
[08:17] <thumper> jam: ping
[08:26] <jam> thumper: pong
[08:26] <thumper> jam: got time for a quick chat about pingers?
[08:26] <thumper> I'm past EOD, but wanted to follow up
[08:27] <thumper> jam: I know you are on your standup so I'll leave ideas...
[08:27] <thumper> Dealing with resources is required but perhaps not sufficient
[08:28] <thumper> I agree that we should work out where the other pingers are coming from
[08:28] <thumper> here's a thought...
[08:28] <thumper> api.Open will try all the apiservers, and kill those that aren't the first to respond
[08:29] <thumper> perhaps some of those don't get a close noticed on the apiserver, so they hang around for ~1 minute before the agent pinger closes them for not calling Pinger.Ping
[08:30] <thumper> if we were trying to open every few seconds, and there were some left around, this might be a reason why it floats around 20-30
[08:30] <thumper> just a thought
[08:30] <thumper> given that it is required I'd still like to land it
[08:31] <thumper> I'll leave it to you to do the $$merge$$ if you are happy enough with my comments and rationale
[08:31]  * thumper out
[10:08] <axw> balloons: something's borked in CI, https://github.com/juju/juju/pull/8056 says it's been accepted, but there's nothing running in jenkins
[10:27] <jam> axw: possibly. I've run into a few of those where the bot fails in such a way that it doesn't respond to the PR
[10:27] <jam> I can trigger a rebuild if you feel its ready to land
[10:28] <jam> I do see a http://ci.jujucharms.com/job/github-merge-juju/508/
[10:28] <jam> which says it failed
[10:33] <axw> jam: should be ready, it just failed on an intermittent unit test -- will try and fix that on develop tomorrow
[10:33] <axw> jam: what's the procedure? I can probably do it too, I have jenkins login
[10:34] <jam> axw: I *do* think we should bring it up to balloons / veebers, since I know when it was happening to me, it was a bug in the test script that it wasn't talking back to the bug.
[10:34] <jam> axw: if you log into CI (I use 'developer') you should be able to go back to the build and just use "rebuild"
[10:34] <jam> on http://ci.jujucharms.com/job/github-merge-juju/508/ on the left hand side is a link to: http://ci.jujucharms.com/job/github-merge-juju/508/rebuild
[10:35] <axw> jam: ok, thanks
[10:36] <jam> externalreality: can you confirm the PR that you wanted me to review? It seems I had linked to the wrong one earlier
[10:37] <axw> jam: seems like jenkins is busted. rebuilding, or starting a new build with the same parameters, does not result in a build job...
[10:37] <axw> balloons: ^
[10:38] <jam> axw: hm. maybe the blue ocean stuff broke what I used to do
[10:39] <jam> axw: the other option is that you just reply with the same message that the bot usually does
[10:39] <axw> jam: tried that :(
[10:39] <axw> never mind, I can land this tomorrow
[10:39] <jam> ah, I see you did try that
[10:53] <externalreality> jam: https://github.com/juju/juju/pull/8048
[10:53] <jam> thx
[10:53] <externalreality> np
[10:59] <jam> externalreality: I'll see about running your stuff in a sec as well
[11:00] <externalreality> cool
[11:05] <jam> wpk: did you do a patch to show normal machine error messages in tabular 'juju status'?
[11:06] <jam> I'm running 2.3b3 to test things out, and I had an upgrade try-but-fail which is weird in its own right, but then the machines went to "error" but I don't see it in normal status
[11:14] <wpk> It's even in 2.2
[11:14] <wpk> the 'Message' field, so it should be there
[11:30] <jam> wpk: is it not there because we only include Instance status and not Juju Agent status?
[11:30] <jam> wpk: bug #1732156
[11:30] <mup> Bug #1732156: juju upgrade-juju --build-agent allows invalid upgrades <upgrade-juju> <juju:Triaged> <https://launchpad.net/bugs/1732156>
[11:32] <wpk> we're showing machine-status: message:
[11:32] <wpk> not juju-status:
[11:32] <wpk> IIRC
[11:39] <jam> wpk: so, arguably we should allow for both
[11:39] <jam> the former shows provisioning errors
[11:39] <jam> the latter shows machiner errors once things are up
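The distinction being discussed is visible in `juju status --format yaml` output: `machine-status` carries the provider/instance-level message, while `juju-status` carries the agent-level one. The values below are illustrative only:

```yaml
machines:
  "0":
    juju-status:        # agent-level status, e.g. machiner errors once up
      current: error
      message: some machiner error
    machine-status:     # instance-level status, e.g. provisioning errors
      current: running
      message: running
```

The tabular view showing only the `machine-status` message is why an agent-level error can be invisible there.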
[11:40] <jam> http://github.com/juju/juju/pull/8063 and http://github.com/juju/juju/pull/8068 could both use reviews
[11:40] <jam> externalreality: wpk ^^ if you have a chance
[11:40] <jam> I'm happy to be on-hand if someone wants context
[11:40] <jam> though I think axw effectively approved 8068 because he approved the upstream mgo patch.
[11:42] <jam> I think I figured out the problem with Trello's github integration, is that it doesn't default to hiding closed PRs
[11:51] <mup> Bug #1732163 opened: juju status triggers some uninteresting DEBUG level mesasges <logging> <juju-core:Triaged> <https://launchpad.net/bugs/1732163>
[11:54] <jam> externalreality: so, how were you testing this that you found sometimes it breaks? Is it the CI tests, or just running "go test" in the right directory?
[11:55] <jam> you were mentioning you thought it might be your mongo version, so I'm guessing it was somewhere in local tests
[11:56] <jam> balloons: axw: I can confirm the same bad bot behavior for PR #8057
[11:56] <jam> something seems very wedged with the bot.
[11:59] <jam> wpk: can you join: https://hangouts.google.com/hangouts/_/canonical.com/juju-doc?authuser=1 he had some FAN questions
[11:59] <externalreality> jam: I can't be completely sure what it was
[12:00] <jam> externalreality: right, I'm just trying to make sure that I'm exercising the same test that you saw failing
[12:00] <jam> I know you said it was blocked at one point, but I don't see what was actually failing.
[12:01] <externalreality> Ah, for example, initialization_test.go would fail attempting to build "txns.log" twice.
[12:02] <externalreality> other tests would fail too, all suites that used stateSuite to establish connections to mongo
[12:02] <jam> externalreality: I don't see an "initialization_test.go" file
[12:02] <jam> am I just missing it?
[12:03] <externalreality> hmm
[12:03] <jam> initialize_test.go ?
[12:05] <externalreality> jam, correct. And a good example of a test that was failing is `TestDoubleInitializeConfig`
[12:33] <jam> externalreality: so, that test doesn't have anything to do with your changes, and I don't think it could possibly fail because of your changes (AFAICT).
[12:33] <jam> since its a state/state.go test
[12:33] <jam> might still be worth looking at, but otherwise its just a flaky test, and not related to your patch
[12:38] <externalreality> Yes, perhaps a flaky test or something related to the specific vm that I was running it on (something akin to a messed up clock).
[12:39] <wpk> jam: blah, missed it while lunching. Are you still there?
[12:40] <jam> wpk: no, we're done, but if you can respond to peter's questions around setting up VPC and the FAN would be useful.
[12:48] <wpk> kk
[13:31] <jam> balloons: just to note, the CI bot seems thoroughly wedged right now, not sure if there is something we could do to fix it. we should probably learn how, so that we can be landing code even when part of the world is asleep
[13:34]  * jam heads away for EOD, though I'm likely to stop back again later.
[13:52] <balloons> I'll look
[13:52] <balloons> And I agree
[14:51] <balloons> just fyi, I did nothing but it seems to have worked itself out
[14:51] <balloons> I'm curious if someone can comment about what was wrong
[15:27] <wpk> jam: I realized that I've never created a VPC for Juju, always used existing ones
[15:27] <wpk> (and if we don't have a clear doc on how to do it that's bad...)
[16:32] <jam> balloons: we were submitting requests, and it was saying "going into the queue" but the queue itself was not updating.
[16:33] <balloons> jam, are things still pending?
[16:34] <jam> balloons: I know axw had a PR, but also PR 8057
[16:34] <jam> balloons: actually, still just as broken for us
[16:34] <jam> balloons: axw was trying to resubmit PR 8056
[16:35] <jam> and that is the top of the queue, but didn't get retried, and nothing else got queued
[16:35] <jam> balloons: we also tried manually "rebuild" from the Jenkins UI, but didn't seem to do anything
[16:35] <balloons> hmm
[16:43] <balloons> jam, ah-hah! the disk is full
[17:19] <wpk> balloons: ... and there's no nagios to tell anyone ;)
[17:19] <balloons> wpk, indeed. Jenkins monitors all the nodes; but not itself
[17:29] <thedac> hml: fyi https://bugs.launchpad.net/juju/+bug/1732233
[17:29] <mup> Bug #1732233: Exiting from a debug-hook session puts hook in error state <juju:New> <https://launchpad.net/bugs/1732233>
[17:31] <hml> thedac: was debug-hooks used because of an hook error?  if so was it resolved before exit?
[17:31] <thedac> hml: I purposefully jumped into debug-hooks to run them serially. Tried to exit cleanly but no matter what I do it goes into error state
[17:32] <thedac> all those log entries are me trying exit, exit 0 etc
[17:33] <thedac> I then have to do juju resolved --no-retry but this never actually passes relation data as juju thinks the hook has not "run"
[17:33] <hml> thedac: well that’s not cool, i’m trying to remember if we changed debug-hooks recently…
[17:33] <thedac> Should be easily reproducible, not specific to openstack
[17:33] <hml> thanks
[17:33] <thedac> no problem
[18:25] <hml> balloons: are we back in business with jenkins?
[20:48] <balloons> hml, sorry, I missed your ping. I was following up on the pr's that seemed stuck
[20:49] <balloons> hml, yours failed to merge "FAIL	github.com/juju/juju/worker/firewaller	1502.008s"
[20:49] <balloons> can I get a review on https://github.com/juju/juju/pull/8072?
[20:49] <balloons> just bumping the version
[21:37] <hml> balloons: ty for restarting my merge, that failure is really odd, esp with my change, retrying
[22:12] <wallyworld> babbageclunk: how goes it with the ss stuff?
[22:13] <babbageclunk> wallyworld: got confused about it again yesterday afternoon. But going alright again now.
[22:13] <wallyworld> ok, i'll review once it's ready
[22:33] <babbageclunk> wallyworld: have you got a moment for a quick hangout? want to check something with you.
[22:46] <balloons> babbageclunk, wallyworld, https://github.com/juju/juju/pull/8074. This does juju-versions.yaml now in the snap
[22:48] <balloons> wallyworld, babbageclunk, however, note the juju-versions.yaml file will be in /snap/bin/juju; aka, next to the binaries
[22:48] <babbageclunk> balloons: nice
[22:49] <balloons> tomorrow I'll get the patches included as well, and test it works for how we build / release
[22:49] <balloons> that will be a bit trickier. I may want to add a note about how to seed an agent yourself
[23:20] <wallyworld> balloons: yay, good progress