[01:50] <wallyworld> thumper: small PR? https://github.com/juju/juju/pull/8492
[01:50]  * thumper looks
[01:52] <thumper> wallyworld: one small comment
[01:52] <wallyworld> ty
[02:02] <thumper> https://github.com/juju/juju/pull/8493
[03:06] <thumper> wallyworld: can you look at this one? http://10.125.0.203:8080/view/unit%20tests/job/RunUnittests-ppc64el/286/testReport/junit/github/com_juju_juju_worker_firewaller/TestAll/
[03:07] <thumper> I've been staring at it for about 10-15 minutes, and also haven't been able to reproduce failure locally
[03:07] <thumper> ...
[03:07] <thumper> hmm...
[03:07] <thumper> I do wonder though...
[03:07] <thumper> this is most likely a watcher bug
[03:08] <thumper> and we do know that mongo has a bug on these architectures where we miss things...
[03:08] <thumper> holy crap I want to get off mongo 3.2
[03:09] <thumper> wallyworld: I do think now that this bug is likely an observation of the same mgo.OpLog test failure we see
[04:09] <wallyworld> thumper: looking
[04:24] <wallyworld> hmmm, i think i agree, it's a plausible explanation
[05:08] <wallyworld> anastasiamac: small review? https://github.com/juju/juju/pull/8494
[05:09]  * anastasiamac looking
[06:01] <jam> I have a patch to bring 2.3 up to go-1.10, turns out we have a few tests that fail because of things like changes in error strings.
[06:01] <jam> Anyone want to confirm that my fixes make sense?
[06:01] <jam> https://github.com/juju/juju/pull/8480
[07:48] <manadart> jam: LGTM.
[08:43] <jam> thanks manadart
[08:48] <jam> manadart: I just approved your expenses, but wasn't there another meal in there, like Sun lunch ?
[08:51] <manadart> jam: Thanks. Doesn't matter. It's OK as is.
[09:11] <wpk> jam: do you have a sec?
[10:10] <jam> wpk: sure
[10:11] <wpk> jam: I msged you about the problem I have with vsphere. I'm stuck with it since yesterday
[10:14] <jam> wpk: happy to talk it over with you, not that I know that much about vsphere. Let me grab a coffee, and I'll hit you up for a hangout
[10:15] <wpk> jam: ok
[10:15] <wpk> I might grab a coffee as well
[10:18] <wpk> witold-john
[10:38] <manadart> jam: Scenario: No HA space, 3 controllers in HA (one cloud-local address each). Cloud-local address is added to one of the machines. Peergrouper errors out thereafter.
[10:40] <manadart> Until one goes back to one cloud-local each, or sets a correct HA space config.
[10:47] <manadart> jam: Initial thought is maybe to hold the err value of the up-front address check, and only return it if the members are different. Report no change if they are the same and still have the prior addresses...
[10:59] <jam> manadart: peergrouper starts erroring, but presumably the replicaset itself is untouched?
[11:01] <wpk> jam: linking jujud killed everything
[11:02] <manadart> jam: Yes.
[11:20] <jam> manadart: I'm about 3 conversations deep right now. I'm not positive what to do here. it could be fine to preserve the addresses, rather than going into failure mode, but we'd probably at least want some sort of message to users, since it is unclear
[11:20] <jam> maybe it's an info or even just DEBUG message...?
[11:31] <manadart> jam: Yeah; I see you're all over. I will implement something now, review later.
[14:31] <mup> Bug #1752662 changed: ssh-proxy does not work as expected on AWS <juju:New> <https://launchpad.net/bugs/1752662>
[16:29] <wpk> jam: balloons: damn, I found the issue. After importing, the disk has -two- files (file.vmdk and file-flat.vmdk); the first one is a ~500-byte text file with a link to the latter. Since I was creating the temporary VM for conversion in the same directory as the 'final' one, it worked the first time (because the 'large' file was there too), but for all later launches the directory was different and the VM
[16:29] <wpk> was left with an unusable short text file
[16:39] <balloons> wpk, wow..
[16:39] <balloons> wpk, glad you figured it out.. Foiled by a symlink then?
[16:43] <wpk> balloons: kind of
[16:44] <wpk> balloons: I'll switch to creating a template VM altogether, not an image. That should work and it should be 'cleaner'
[16:51] <jam> wpk: balloons: i think it is because a VM disk in VSphere can be a bunch of chained files that point to each other. there are comments while searching about "my thing is now 100 objects, how do I get it back to 1"
[16:53] <wpk> jam: yep, the file contains an 'extents' section mapping block -> file
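For readers unfamiliar with the format wpk is describing: the small `file.vmdk` is a plain-text descriptor whose extent section points at the large `file-flat.vmdk` holding the actual data, so copying only the descriptor yields an unusable disk. A typical descriptor looks roughly like this (all values illustrative):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 16777216 VMFS "file-flat.vmdk"

# The Disk Data Base
ddb.virtualHWVersion = "11"
ddb.geometry.cylinders = "1044"
```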
[16:53] <wpk> jam: so, I'll be doing it 'the template way'.
[16:53] <jam> wpk: it honestly does seem cleaner. hopefully it's not a lot more effort
[16:55] <balloons> jam, ahh, interesting.
[16:56] <wpk> jam: hopefully create a VM, mark it as a template
[16:56] <balloons> wpk, will the template vm be reused as well, or will you make a new "template" each time?
[16:56] <wpk> reused
[16:57] <wpk> I'd create VM, clone it (to convert disk), then mark the clone as template.
[17:13] <balloons> wpk, so this is the same as before -- subsequent bootstraps would use the same template
[18:56] <wpk> balloons: not really, previously we had no template at all
[19:15] <jam> night all
[19:16] <jam> wpk: I thought it was more exporting a vm to a template
[19:16] <jam> anyway, still good
[19:54] <agprado> Team, does anybody know how to use testing.IsolationSuite? And have a few minutes for some questions?
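For reference, Juju's test suites are built on gopkg.in/check.v1 with the helpers in github.com/juju/testing; `IsolationSuite` isolates a test from the host environment (environment variables, `$HOME`, and so on). A minimal skeleton, assuming it lives in a `*_test.go` file alongside those dependencies, looks roughly like:

```go
package mypkg_test

import (
	stdtesting "testing"

	"github.com/juju/testing"
	gc "gopkg.in/check.v1"
)

// Embed IsolationSuite to get per-test environment isolation,
// then register the suite with gocheck.
type mySuite struct {
	testing.IsolationSuite
}

var _ = gc.Suite(&mySuite{})

// Hook gocheck into the standard "go test" runner.
func TestPackage(t *stdtesting.T) {
	gc.TestingT(t)
}

func (s *mySuite) TestSomething(c *gc.C) {
	c.Assert(1+1, gc.Equals, 2)
}
```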
[20:29] <balloons> agprado, I imagine APAC will be awake shortly :-)
[20:32] <agprado> balloons: ty, I know what I want, but I'm not sure how to make it run.
[21:46] <agprado> balloons: are you available?