[00:17] davechen1y: np
[00:48] Bug #1519190 changed: worker/addresser: FAIL: worker_test.go:260: workerEnabledSuite.TestWorkerAcceptsBrokenRelease
[01:24] thumper: it feels like the fslock debate is crawling towards stalemate
[01:26] thumper: what do you want to see from the discussion?
[01:27] sorry davechen1y, head down with a customer fire right now
[01:35] axw: http://reviews.vapour.ws/r/3238/
[01:35] can you please review my last comment
[01:35] thumper: my condolences
[01:35] davechen1y: looking
[01:36] davechen1y: I'm not really sure what you mean by that. It's valid for "createAliveLock" to be entered twice
[01:36] davechen1y: (just not concurrently)
[01:41] axw: createAliveFile starts a goroutine whose job is to scribble in some state file
[01:41] the sanity check fails because that goroutine is being started more than once
[01:41] this is why the original data race existed
[01:41] and I think the bug is still here; it wasn't the data race, that was just a side effect
[01:42] davechen1y: no, before you could have two of those goroutines running at the same time. now you cannot, because of the wait group
[01:42] axw: i don't believe that is true
[01:42] see the sanity check
[01:43] why is createAliveFile being entered more than once
[01:43] why is createAliveFile being entered more than once?
[01:43] davechen1y: the sanity check shows that it was entered more than once, not that they were both in there at the same time
[01:44] davechen1y: if you close a channel twice, sequentially, it still panics
[01:44] yes
[01:45] axw: ok, so I should be able to fix the sanityCheck by creating a new sanityCheck channel in Unlock?
[01:46] davechen1y: yep, that would make sense: recreate channel before removing the file
[01:46] directory, whatever
[01:46] i think Unlock is the right place to refresh the sanity check
[01:46] i'm testing that now
[01:47] i don't trust this code
[01:47] i want triple underpants on it
[03:12] thumper: for allowing 2.0 upgrades, should I also allow people who've installed a 1.26 alpha to upgrade? or are we just going to assume they need to destroy the env and bootstrap with 2.0?
[03:13] cherylj: there are two problems here
[03:13] one is getting a version out for 1.25 that'll work
[03:13] people with 1.26 will have a 2.0 client to upgrade
[03:13] so we need to make sure it lands in master too
[03:13] yes I think 1.26 alpha/beta would be fine for 2.0 upgrades
[03:14] thumper: sounds good. I also updated the check to actually query the AgentVersion, not just version.Current
[03:15] there will probably be a follow-on change to show users that a 2.0 upgrade is available, but never select it for an automatic upgrade.
[03:16] well, not automatic. Just never select it when upgrading a 1.25 env, unless explicitly specified
[03:32] oh ffs
[03:32] menn0: you around still?
[03:32] I have questions
[03:33] ugh
[03:33] wallyworld: I think we need to get rid of the short-circuit removal of relations involving remote services,
unless they're not registered
[03:33] * thumper headdesks
[03:33] wallyworld: otherwise I don't think we can guarantee they'll be unregistered
[03:34] wallyworld: so Destroy should just mark them Dead, and then the worker will see that, unregister, then remove
[03:34] axw: sounds reasonable at first glance
[03:43] thumper: still here
[03:43] menn0: https://github.com/howbazaar/juju-121-upgrades
[03:43] menn0: and a quick hangout?
[03:43] menn0: 1:1 hangout?
[03:43] thumper: ok
[03:51] lucky(~/src/github.com/juju/juju) % ls /tmp
[03:51] check-5577006791947779410 hugo_cache juju-tools295324784 juju-tools-released484109868 test-mgo611590862
[03:51] config-err-g4oT3p juju-exec001679756 juju-tools327758394 juju-tools-released817795995 test-mgo998542473
[03:51] deps.svg juju-exec158644967 juju-tools689131141 t tmux-1000
[03:51] fileNq80up juju-exec845043821 juju-tools849695094 test-mgo169368628 unity_support_test.0
[03:51] hsperfdata_root juju-exec855581256 juju-tools891530041 test-mgo562608033
[03:51] here are all the things juju leaks per test run
[03:51] :(
[04:28] Bug #1501490 changed: juju-local can't bootstrap as root user
[04:31] * thumper heads for the alcohol
[07:14] axw: when you have time, ptal http://reviews.vapour.ws/r/3291/
[07:14] wallyworld: sure
[07:14] ta
[07:38] fwereade: ping
[08:03] dooferlad, frobware, jam, fwereade, morning :) reviews on http://reviews.vapour.ws/r/3285/ will be appreciated
[08:04] dimitern, looking
[08:05] frobware, ta!
[08:06] fwereade: when you have a chance, here's 32 pages of diffs :) http://reviews.vapour.ws/r/3292/
[08:06] fwereade: seriously though, see your email
[08:15] dimitern, what's the [1:] for on the noDevicesWarning string?
[08:16] frobware, so that it doesn't start with a \n
[09:04] frobware, thanks for the review
[12:08] katco, ericsnow, wwitzel3: in merging master into machine-dep-engine, it looks like component bits have leaked into worker/meterstatus and uniter.Paths; was this intentional?
[12:25] dimitern, I reopened http://reviews.vapour.ws/r/3285/ as I found a go vet error.
[12:26] Bug #1522001 opened: worker/instancepoller: intermittent data race
[12:30] frobware, yeah, I caught this while forward porting
[12:30] frobware, it will be fixed in master, and I'll propose a 1.25 fix
[12:30] dimitern, thx. I noticed as I rebased to push my changes.
[12:30] bbl
[13:44] Bug #1522025 opened: juju upgrade-juju should confirm it has enough disk space to complete
[14:32] Bug #1519527 changed: juju 1.25.1: lxc units all have the same IP address - changed to claim_sticky_ip_address
[14:55] dimitern/frobware/voidspace: PTAL: http://reviews.vapour.ws/r/3294/
[15:14] Bug #1520380 changed: worker/provisioner: unit tests fail on xenial
[15:26] dimitern, dooferlad, voidspace: PTAL @ http://reviews.vapour.ws/r/3298/
=== JoseeAntonioR is now known as jose
[16:09] alexisb, hey
[16:35] Bug #1511138 opened: Bootstrap with the vSphere provider fails to log into the virtual machine
[16:35] Bug #1513982 opened: Juju can't find daily image streams from cloud-images.ubuntu.com/daily
=== frobware_ is now known as frobware
[17:01] dimitern, voidspace, dooferlad: PTAL @ http://reviews.vapour.ws/r/3300/
=== jose is now known as Guest78115
[17:11] abentley: I have some hacks in assess_jes_deploy.py on master to add logging. I believe your changes should obsolete this, but be aware on merging
[17:12] mgz: My changes to assess_jes_deploy landed on Monday, IIRC.
[17:13] did you run the thingy as well?
maybe they didn't conflict and I actually need to make a working branch for 'em then
[17:17] frobware: looking
[17:17] frobware: the new api is *almost* the same as the existing facade, but annoyingly needs space provider ids not names
[17:17] frobware: and it seems that creating a new facade is probably easier than versioning the existing one and adding the extra info...
[17:18] frobware: if that's just a forward port of your previous branch then LGTM
[17:53] bbl
[19:40] wwitzel3: I presume you're making the upload API client something like Upload(service, name string, resource io.Reader) error?
[19:41] natefinch: why not keep it decoupled? specify the interface you expect and we can adapt w/e wwitzel3 makes to that interface
[19:44] katco: I figure we might as well start off with a perfect match if there's no reason not to... if we later need an adapter, we can always do that, but not even trying to make them the same seems like it just adds unnecessary complexity to the code. I'd be looking at the code wondering why the two interfaces were different, and the answer would be ¯\_(ツ)_/¯
[19:45] natefinch: the answer would be because they don't need to be the same? but yes, they are passed in, in the order of the command in the spec. Service, name, blob
[19:45] natefinch: the reason to take in a func or interface is to guard against changes in the future; if the api layers change, your code doesn't have to. only the adapter
[19:49] katco: yeah, I was going to take an interface... just might as well make the interface match to start with. If I see someone's written an adapter, and all the adapter does is change the order of the arguments, I'm gonna be annoyed that I had to go read that code
[19:50] natefinch: agreed for order, but not arity
[19:53] quick review please? https://github.com/juju/juju/pull/3882
[19:59] davecheney: lgtm
[20:03] katco: ta!
=== rharper` is now known as rharper
[20:18] sinzui, mgz: the machine-dep-engine feature branch CI runs are never actually happening. it's always "Blocked by parallel-streams-publish-aws, build-binary-vivid-amd64, build-revision"
[20:18] what's that about?
[20:18] abentley too: ^^
[20:19] menn0: it needs master merged to get new version
[20:19] menn0: read http://reports.vapour.ws/releases/3380
[20:19] menn0: Build-revision isn't completing successfully, because it hasn't been updated to a new version.
[20:20] mgz, abentley: I merged master into the feature branch last night. so it should work next attempt?
[20:21] yup, at least that initial step
[20:21] menn0: Yes. If the version in the code is now greater than 1.26-alpha2.
[20:22] mgz, abentley: it should be. it was master from yesterday.
[20:22] mgz, abentley: Thanks for explaining. I didn't know we had that guard, but it makes sense now that I think about it.
[20:47] abentley: I think you fell off #juju. To catch you up: we lost access to HP, but we now have access back; in the meantime I started to move the last machines from there. quickstart still sucks. I have a hack in place to use gce. I quadrupled our cpus in gce to deploy landscape tests there.
[20:47] sinzui: ack.
[21:39] katco: is that kb plastic or rubber?
[21:42] perrito666: plastic i think
[22:12] thumper, ping
[22:15] niedbalski: hey
[22:16] niedbalski: I'm downloading the files to my laptop now
[22:16] niedbalski: although the current logs show more problems than just the db bits
[22:16] which sucks
[23:02] davecheney: I have mixed news. the run-unit-tests-race job is now on the new machine and can complete in less than 40 minutes. I think the new speed or the new host has uncovered errors. CI is retesting now. I may make the job non-voting until we get the tests stable again
[23:03] sinzui: davecheney: the race flag will only fail tests if it experiences the race, correct?
[23:04] katco: In my experience, a failure of a test, or a panic, will fail the test
[23:07] sinzui: right, but the test won't fail if it contains a race condition unless that race is triggered... even with the -race flag. at least that's my understanding
[23:09] katco: if go test returns an exit code greater than 0 it will fail. That previous run failed because of panics and fails; I don't see a report of races
[23:10] sinzui: right, i understand what you're saying. what i'm trying to add is that i think whether the tests fail because of races is as non-deterministic as the races themselves. so you may see wildly variable results.
[23:10] katco: yeah.
[23:10] sinzui: i.e. the tests may not stabilize for a long while
[23:13] davecheney: do you remember what the difference between GOARM=6 and GOARM=7 is?
[23:14] https://github.com/juju/juju/pull/3886
[23:14] anyone
[23:14] mwhudson: the extra 8 floating point regs in VFPv3
[23:14] what do they call it, D16 or something
[23:15] oh right
[23:15] go defaults to goarm=6
[23:15] that is the right choice
[23:15] ok
[23:15] it's what ubuntu builds with too for armhf, apparently
[23:15] armv6 vs v7 makes a huge difference in kernel space 'cos the cache model is completely different
[23:15] sounds like that's ok?
[23:15] in user space, bugger all difference
[23:15] k
[23:15] yes, that is the best choice
[23:16] thanks
[23:42] https://github.com/juju/juju/pull/3886
[23:42] anyone, please