[00:00] <perrito666> thumper: I could kiss you .... and also kill you
[01:13] <axw> anastasiamac: when you're back, could you please review http://reviews.vapour.ws/r/2736/ for me?
[01:22] <mup> Bug #1499570 opened:  public no address <backup-restore> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1499570>
[01:22] <mup> Bug #1499571 opened: Restore failed: error fetching address <backup-restore> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1499571>
[01:26] <anastasiamac> axw: looking :D
[01:30] <anastasiamac> axw: lgtm :)
[01:30] <axw> anastasiamac: thanks
[01:34] <mup> Bug #1499570 changed:  public no address <backup-restore> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1499570>
[01:34] <mup> Bug #1499571 changed: Restore failed: error fetching address <backup-restore> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1499571>
[01:43] <mup> Bug #1499570 opened:  public no address <backup-restore> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1499570>
[01:43] <mup> Bug #1499571 opened: Restore failed: error fetching address <backup-restore> <reliability> <retry> <juju-core:Triaged> <https://launchpad.net/bugs/1499571>
[01:52] <mup> Bug #1499573 opened: TestString failed on windows <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Incomplete> <juju-core feature-proc-mgmt:Triaged> <https://launchpad.net/bugs/1499573>
[02:10] <cherylj> is there anyone around who's familiar with envworkermanager?
[02:33] <thumper> davecheney: you have multiple versions of go handy right?
[02:34] <thumper> I was testing a bug backport to 1.22
[02:34] <thumper> but golang 1.5 really doesn't seem to like it, even with GOMAXPROCS=1
[02:35] <thumper> I was wondering if I could get you to see if you can get the tests to pass with a different version
[02:36] <thumper> https://github.com/howbazaar/juju/tree/ignore-machine-addresses-1.22
[03:04] <anastasiamac> axw: series from version translation in utils :D PTAL if u have a chance http://reviews.vapour.ws/r/2751/
[03:06] <axw> anastasiamac: before I review, can you show me where you need it? since we've not needed it until now...
[03:08] <anastasiamac> axw: sure, in https://github.com/juju/juju/blob/master/apiserver/imagemetadata/metadata.go#L210
[03:08] <anastasiamac> in files, image metadata has versions
[03:08] <anastasiamac> but we are storing series in structured image metadata
[03:09] <anastasiamac> hence we need a translation
[03:09] <anastasiamac> apiserver is the first place... another place will be
[03:09] <anastasiamac> when we are caching custom images
[03:10] <anastasiamac> since this is clearly a util, it should belong in a centralised, logical location, not apiserver where I originally implemented it
[03:10] <anastasiamac> in addition, it's counter-intuitive that we can go one way and not the other... i guess the need did not arise until now
[03:56] <thumper> axw: cheers for finding the service config problem, how'd you identify the issue?
[03:57] <axw> thumper: nps. I looked through the commits listed as suspects, and that was the only likely candidate. then I repro'd exactly as described in the bug (took me a few goes because I picked the wrong bundle to start with)
[05:12] <thumper> axw or anastasiamac: I'm after a favour from someone
[05:12] <anastasiamac> thumper: on friday arvo?
[05:12] <thumper> I need someone with an earlier golang to run the unit tests for a 1.22 potential fix
[05:12] <thumper> golang 1.5.1 hates 1.22
[05:13] <thumper> anastasiamac: yeah, it is pretty easy though
[05:15] <thumper> I have this https://github.com/howbazaar/juju/tree/ignore-machine-addresses-1.22
[05:15] <thumper> which I think isolates and backports a 1.24 fix into the 1.22 branch
[05:15] <thumper> but I can't run the tests here successfully
[05:15] <thumper> not at all
[05:15] <thumper> I even tried to downgrade golang, but then go failed for other reasons
[05:15] <thumper> and I couldn't get it working
[05:15] <thumper> and at the end of friday, I'm losing the will
[05:16] <anastasiamac> thumper: :( i know the feeling - it being friday and all...
[05:16] <axw> thumper: I'll give it a shot
[05:16] <thumper> axw: ta
[05:17] <thumper> so if you start from a fresh 1.22
[05:17] <axw> thumper: any particular tests?
[05:17] <thumper> and pull in that branch
[05:17] <axw> mk
[05:17] <thumper> axw: no, just all of them :)
[05:17] <thumper> it touches a few places
[05:17] <thumper> machiner worker
[05:17] <thumper> state
[05:17] <thumper> and apiserver
[05:17] <thumper> could probably get away with: api, apiserver, cmd/jujud, state, worker/machiner
[05:17] <thumper> if you wanted to limit it
[05:20] <axw> thumper: apiserver is happy. I'll run the lot and let you know
[05:21] <thumper> I've never been so tempted to spend 2.5kUSD https://glowforge.com
[05:21] <thumper> axw: ta
[05:23] <thumper> axw: what would the differences be in the binary created between me compiling with golang 1.5.1 and what you are using (which I'm assuming is an earlier one) ?
[05:23] <thumper> I know that golang 1.5 changed the default GOMAXPROCS
[05:23] <thumper> but does that impact the binary created?
[05:24] <axw> thumper: yes, the runtime is linked in statically. I'm using 1.4.2.
[05:24] <axw> thumper: couldn't tell you the differences without poring over the release notes
[05:25] <thumper> ah
[05:25] <thumper> in which case, can I get you to upload the built juju and jujud binaries to chinstrap somewhere?
[05:26] <thumper> I'll propose the backport, but I'm going to see if we can get some confirmation that it actually helps first
[05:28] <axw> thumper: sure
[05:36] <axw> thumper: tests are just hanging.. nfi what they're doing
[05:36] <thumper> bugger
[05:38] <axw> thumper: anyway, binaries are at https://chinstrap.canonical.com/~axw/ignore-machine-addresses-1.22.tgz
[05:38] <thumper> axw: ta
[05:39] <axw> thumper: giving up on tests now, they don't appear to be doing anything
[05:39] <thumper> kk
[05:39] <thumper> which ones hung?
[05:41] <axw> thumper: I think it was in the cmd/juju package
[05:42] <thumper> axw: or they were just taking ages?
[05:42] <axw> thumper: maybe, hard to tell. it was going on for quite a long time (didn't have timestamps but around 10 mins?)
[05:43] <thumper> hmm...
[06:03] <thumper> axw: fyi https://bugs.launchpad.net/juju-core/+bug/1464304
[06:03] <mup> Bug #1464304: Sending a SIGABRT to jujud process causes jujud to uninstall (wiping /var/lib/juju) <sts> <juju-core:Triaged> <https://launchpad.net/bugs/1464304>
[06:04] <thumper> axw: I wonder if we should move the manual cleanup / removal code to SIGUSR1 or SIGUSR2 rather than SIGABRT
[06:04] <axw> thumper: yes, I think so
[06:04] <thumper> axw: for some unknown reason, various folks have hit this
[06:05] <axw> thumper: main problem is how to change it while still being able to destroy old environments
[06:05] <thumper> yeah... always a problem
[06:05] <thumper> here's my suggestion
[06:05] <thumper> we fix it in 1.25 / master
[06:06] <thumper> in 1.25, when the api client connects, it stores the server version in the client api
[06:06] <thumper> so we can ask the server "what version are you"
[06:06] <thumper> then switch based on known version
[06:06] <thumper> or alternatively
[06:06] <thumper> ask for the environ config agent version
[06:06] <thumper> which is probably technically more correct
[06:07] <thumper> as the api says what version the server is
[06:07] <thumper> not what version the environment is
[06:07] <axw> thumper: that doesn't work when there's no API connection though. we still have to support --force
[06:07] <axw> thumper: probably could just "jujud --version" instead
[06:07] <thumper> does that work?
[06:07] <thumper> yep
[06:07] <thumper> it does
[06:08] <thumper> I think that would be a good start
[06:08] <axw> thumper: I believe we're on bugs next week, I'll fix it then. trying to finish up some ceph-related storage changes atm
[06:08] <thumper> kk
[06:08] <thumper> have a good weekend
[06:08] <thumper> chat next week
[06:08] <axw> thumper: cheers, you too
[06:26] <mup> Bug #1499613 opened: Windows device path mismatch in volumeSuite.TestListVolumesStorageLocationBlockDevicePath <ci> <test-failure> <unit-tests> <windows> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1499613>
[06:38] <mup> Bug #1499617 opened:  juju machine add --constraints arch=i386 says machine created, then doesn't create machine <juju-core:New> <https://launchpad.net/bugs/1499617>
[06:44] <mup> Bug #1499617 changed:  juju machine add --constraints arch=i386 says machine created, then doesn't create machine <juju-core:New> <https://launchpad.net/bugs/1499617>
[06:47] <mup> Bug #1499617 opened:  juju machine add --constraints arch=i386 says machine created, then doesn't create machine <juju-core:New> <https://launchpad.net/bugs/1499617>
[06:50] <mup> Bug #1499617 changed:  juju machine add --constraints arch=i386 says machine created, then doesn't create machine <juju-core:New> <https://launchpad.net/bugs/1499617>
[06:56] <mup> Bug #1499617 opened:  juju machine add --constraints arch=i386 says machine created, then doesn't create machine <juju-core:New> <https://launchpad.net/bugs/1499617>
[08:21] <frobware> voidspace, dooferlad: Thinking of postponing planning/retrospective until Monday when dimiter and frank are back. Thoughts?
[08:21] <voidspace> frobware: agreed
[08:21] <dooferlad> frobware: seems reasonable
[08:21] <frobware> voidspace, dooferlad: it would allow us to concentrate on the spaces bugs
[08:21] <voidspace> frobware: so, Monday?
[08:21] <frobware> voidspace, yep
[08:21] <voidspace> standup as usual today then
[08:21] <frobware> voidspace, yes, probably not long but we should...
[08:22] <voidspace> ok
[08:23] <voidspace> frobware: how's your git-fu?
[08:23] <voidspace> frobware: I wish to branch off 1.25
[08:23] <frobware> voidspace, mostly OK within Emacs... :)
[08:23] <voidspace> heh
[08:23] <frobware> voidspace, in your github clone?
[08:24] <voidspace> if I checkout 1.24 (a tag from my upstream) it works fine
[08:24] <voidspace> frobware: yep
[08:24] <voidspace> if I checkout 1.25 I get told it doesn't exist
[08:24] <voidspace> git checkout upstream/1.25 works
[08:24] <voidspace> but puts me in a detached head
[08:24] <frobware> voidspace, I have 1.2x branches in my github fork so I just branch from those
[08:25] <voidspace> frobware: I wonder why my github fork doesn't have a 1.25 branch/tag
[08:25] <voidspace> and how I get one...
[08:25] <voidspace> if
[08:25] <voidspace> th
[08:25] <voidspace> e
[08:25] <frobware> voidspace, I ran into this the other day and just pushed a branch
[08:25] <voidspace> oops
[08:25] <voidspace> if
[08:25] <voidspace> if
[08:25] <voidspace> 8f0
[08:25] <voidspace> s-a
[08:26] <voidspace>  
[08:26] <voidspace>   
[08:26] <dooferlad> well, this is interesting to watch
[08:26] <dooferlad> it looks like voidspace just had a cat sit on his enter key :-)
[08:27] <dooferlad> btw, you need to git checkout -b localname origin/branchname
[08:28] <dooferlad> http://stackoverflow.com/questions/471300/git-switch-branch-without-detaching-head
[08:36] <voidspace> my keyboard switched into "crazy mode" and I failed to get it to behave itself
[08:36] <voidspace> even on reboot
[08:36] <voidspace> so new keyboard it is
[08:36] <voidspace> dooferlad: thanks, I can branch off the detached head
[08:37] <voidspace> dooferlad: I just wondered why I apparently have a 1.24 tag/branch and not a 1.25 one
[08:37] <voidspace> and also I wondered if the detached head was expected/correct
[08:37] <dooferlad> voidspace: you may have branched rather than checked out previously?
[08:37] <voidspace> your implication is that it is
[08:37] <voidspace> dooferlad: possibly
[08:37] <fwereade> frankban, ping
[08:38] <frankban> fwereade: hi
[08:38] <fwereade> frankban, ancient history now, but: https://github.com/juju/juju/commit/c67e13c37948d5b3e41125c40425fccbee592452
[08:38] <dooferlad> voidspace: this is why I paid for http://www.syntevo.com/smartgit/ -- it is all the git-fu I need
[08:38] <dooferlad> voidspace: shame it doesn't do bzr!
[08:38] <fwereade> frankban, do you recall what the motivation for adding JujuOsEnvSuite to BaseSuite was?
[08:38] <voidspace> dooferlad: your mental model is better than mine too I think
[08:39] <voidspace> although I'll look at smartgit
[08:39] <voidspace> anything that makes my life easier is good...
[08:39] <dooferlad> voidspace: it feels like there should be a joke in there about me being mental...
[08:40] <voidspace> dooferlad: oh, I wouldn't joke about that!
[08:40] <dooferlad> voidspace: :p
[08:42] <frankban> fwereade: it seems that BaseSuite was cleaning up env var before as well
[08:42] <fwereade> frankban, yeah, sorry, on closer inspection it looks like it's just an extraction
[08:43] <frankban> fwereade: yeah, np
[09:02] <dooferlad> voidspace: https://plus.google.com/hangouts/_/canonical.com/sapphire
[09:03] <voidspace> dooferlad: omw
[09:32] <voidspace> dooferlad: frobware: oh, I forgot to mention in standup. I have a meeting at daughter's school for an hour this afternoon (school just round the corner so not much travel time). Will work later to make up the time.
[11:08] <mgz_> master is broken.
[11:09] <anastasiamac> mgz_: oh?.. why?
[11:10] <mgz_> anastasiamac: pr3210
[11:11] <mgz_> doesn't build on windows. last time dave poked the version it also broke the build.
[11:12] <anastasiamac> :(
[11:16] <bogdanteleaga> mgz_: we really ought to get a GOOS=windows build test in the hook if this actually happens that often
[11:16] <mgz_> bogdanteleaga: I have a bigger, more painful for everyone solution
[11:16] <mgz_> that I've been procrastinating over
[11:18] <mgz_> but given the failure rate recently on windows testing, it's justified to just gate on a full windows run, even though it doubles the time
[11:21] <mup> Bug #1499689 opened: Windows ftb after version.Binary.OS change <blocker> <ci> <regression> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1499689>
[11:23] <bogdanteleaga> mgz_: might still save time overall :)
[12:42] <mup> Bug # changed: 1466498, 1466514, 1488576, 1497456, 1498481
[12:47] <rogpeppe> mgz_: why would gating on a full windows run double the time? does it really take three times as long to run the tests on windows?
[12:50] <mgz_> rogpeppe: takes longer, and doing two runs means the various intermittent failures are more likely to be hit on any given merge attempt
[12:51] <rogpeppe> mgz_: presumably you could do the two runs concurrently?
[12:51] <mgz_> yeah, which is why it's only going to ~double the time
[12:51] <mup> Bug # opened: 1466498, 1466514, 1488576, 1497456, 1498481
[12:51] <rogpeppe> mgz_: (in general it might be interesting to consider running all the tests concurrently across a few machines)
[12:52] <rogpeppe> mgz_: (shouldn't be that hard, i'd think)
[12:52] <mgz_> rogpeppe: if the tests would just run reliably, either in parallel or under lxc, we could already massively speed up the run
[12:53] <rogpeppe> mgz_: why would running them under lxc help?
[12:53] <mgz_> don't actually need multiple machines, just to be able to use all the cpus
[12:53] <rogpeppe> mgz_: doesn't go test use all cpus anyway?
[12:54] <mgz_> rogpeppe: because then I can just use multiple containers, also avoids the overhead of starting up a new machine each time
[12:54] <rogpeppe> mgz_: i'm fairly sure that by default it runs GOMAXPROCS packages' tests at the same time
[12:54] <mup> Bug # changed: 1466498, 1466514, 1488576, 1497456, 1498481
[12:54] <rogpeppe> mgz_: so we can't run tests under lxc?
[12:55] <mgz_> rogpeppe: I may be wrong, but I believe GOMAXPROCS defaults to 1 for us
[12:56] <mgz_> rogpeppe: they mostly work, but are less reliable and hits timing issues much more
[12:57] <mgz_> though, the job is also not helped by being on wily these days and the tests not passing on wily
[12:57] <rogpeppe> mgz_: i think you'd need to deliberately set GOMAXPROCS to get it to default to 1
[12:57] <rogpeppe> mgz_: you can find out with 	fmt.Println(runtime.GOMAXPROCS(0))
[12:57] <rogpeppe> mgz_: tbh we *should* always run with GOMAXPROCS>1
[12:58] <rogpeppe> mgz_: anyway, spreading tests across machines would also be very feasible
[12:58] <mgz_> rogpeppe: wasn't the default changed in some go > 1.2.1?
[13:06] <rogpeppe> mgz_: i think it's controlled by the -p flag:
[13:06] <rogpeppe> 	-p n
[13:06] <rogpeppe> 		the number of builds that can be run in parallel.
[13:06] <rogpeppe> 		The default is the number of CPUs available.
[13:07] <rogpeppe> mgz_: so unless you're deliberately running with -p 1, i think you'll be running num-CPU tests at a time currently anyway
[13:08] <rogpeppe> mgz_: contrary to my assertion above, GOMAXPROCS is orthogonal to this
[13:09] <mgz_> rogpeppe: yeah, they're different effects. we used to force -p to something small, but that at least is no longer needed.
[13:10] <rogpeppe> mgz_: so probably you'll need to split across multiple machines in order to speed things up
[13:11] <rogpeppe> mgz_: another way to speed things up would be to have a git cache so that we're not fetching all the deps each time
[13:12] <mgz_> rogpeppe: another way would be make the tests less pants :)
[13:12] <rogpeppe> mgz_: i'm not asking for miracles :)
[13:13] <mgz_> the download isn't much of a speed issue - it's more painful when github api is down and the job fails out
[13:14] <rogpeppe> mgz_: if i had the task of speeding up the tests, i'd start by looking at setup/teardown time - there are lots of tests that take almost no time but fixture setup and teardown takes at least 0.25s
[13:15] <mgz_> rogpeppe: yeah, a bunch of the problem is still we're using suites that are terrible
[13:15] <mgz_> though there was some connsuite destruction recently
[13:15] <bogdanteleaga> does anybody have an idea whether juju uses the proxy settings right on the first setup? and if so, how?
[13:15] <bogdanteleaga> I can see it takes the config values from the state machine when proxyupdater starts
[13:15] <rogpeppe> mgz_: and i'd also look at the problem that there are quite a few tests that run for 5s or 10s or 15s because they're waiting for the poll interval. that shouldn't be too hard to sort out.
[13:16] <rogpeppe> mgz_: one thing that might potentially speed things up is to use a single external mongodb instance for all packages.
[13:17] <rogpeppe> mgz_: running 4 (or however many) mongodb instances at a time is not gonna be good for speed
[13:17] <rogpeppe> mgz_: i think that moving towards mocking everything because it's too slow is actually a step backwards in some ways.
[13:18] <rogpeppe> mgz_: and yeah, jujuconnsuite sets up much more than it needs to (directories, files, etc). most tests don't need that.
[13:21] <mup> Bug #1499617 changed:  juju machine add --constraints arch=i386 says machine created, then doesn't create machine <add-machine> <constraints> <juju-core:New> <https://launchpad.net/bugs/1499617>
[13:54] <rogpeppe> hi all. i have another step in my apiserver changes for macaroon auth available for your delight and edification. i'm sure you'll all be dying to review it. http://reviews.vapour.ws/r/2758/
[13:54] <mgz_> how cute rog.
[13:55] <rogpeppe> mgz_: :)
[13:58] <rogpeppe> TheMue, cmars: it seems you're OCR... any chance of a review? :) http://reviews.vapour.ws/r/2758/
[14:16] <cmars> rogpeppe, yep, got a bit of a backlog already though
[14:17] <rogpeppe> cmars: fair enough, had to try :)
[15:35] <frobware> rogpeppe, TheMue is out this week
[15:35] <rogpeppe> frobware: ah, ok
[15:36] <frobware> rogpeppe, which is obviously a bit late in the day to find out...
[15:36] <rogpeppe> frobware: it's fine, 'cos cmars is on the case, right casey? :)
[15:37] <cmars> rogpeppe, reviewing!
[15:37] <rogpeppe> cmars: much appreciated
[16:15] <natefinch> fwereade: you around?
[16:24] <mattyw> fwereade, ping?
[16:31] <mup> Bug #1499781 opened: macaroonLoginSuite fails on windows on chicago-cubs <ci> <test-failure> <windows> <juju-core:Incomplete> <juju-core chicago-cubs:Triaged> <https://launchpad.net/bugs/1499781>
[17:27] <fwereade> natefinch, just passing -- can I help?
[17:47] <natefinch> fwereade: no worries, was wondering about the unit assigning worker I'm writing... I presume it should be a singular worker, since we don't want multiple workers assigning units at the same time.
[18:46] <mup> Bug #1499689 changed: Windows ftb after version.Binary.OS change <blocker> <ci> <regression> <windows> <juju-core:Fix Released by dave-cheney> <https://launchpad.net/bugs/1499689>
[19:00] <sinzui> abentley: hangout?
[19:01] <abentley> sinzui: let's.
[20:39] <cmars> http://reviews.vapour.ws/r/2762/, fixes LP:#1499613
[20:39] <mup> Bug #1499613: Windows device path mismatch in volumeSuite.TestListVolumesStorageLocationBlockDevicePath <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1499613>
[20:41]  * natefinch changes code... test fails.
[20:41]  * natefinch changes test... test passes.
[20:42]  * natefinch wonders if anyone actually cares what error is returned from this function, so long as it actually fails.
[20:47] <cmars> natefinch, can you review these? http://reviews.vapour.ws/r/2762/ http://reviews.vapour.ws/r/2763/
[20:49] <natefinch> cmars: ship 'em
[20:49] <cmars> natefinch, thanks!
[20:49] <natefinch> cmars: thanks for fixing it, and FWIW, I wish we gated on the windows tests too
[21:56] <mup> Bug #1499900 opened: scope: container is too ambiguous and confusing <juju-core:New> <https://launchpad.net/bugs/1499900>