/srv/irclogs.ubuntu.com/2015/10/01/#juju-dev.txt

wallyworldaxw: a small one https://github.com/juju/charm/pull/160 00:17
wallyworldaxw: ty, i've pushed up an additional change to use typed errors00:34
axwwallyworld: ok, looking00:34
wallyworldbtw the original implementation was sort of on purpose because upstream handled the empty series stuff, but easier to push that down into charm00:35
axwwallyworld: ok. still LGTM00:36
wallyworldty00:36
thumperwallyworld: quick chat?00:39
wallyworldsure00:39
thumper1:1 hangout00:40
wallyworldaxw: see http://reports.vapour.ws/releases/3124 - it suggests bug 1479546 may be a cause, could you take a look?01:08
mupBug #1479546: Storage provisioner timeouts spawning extra volumes on AWS <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1479546>01:09
axwwallyworld: looking01:09
wallyworldty01:09
axwwallyworld: as I responded in the bug, it's a different issue. but still an issue nonetheless01:11
axwwallyworld: I guess we need to increase the timeout01:11
wallyworldaxw: ty, i didn't read the bug too closely01:11
axwwallyworld: np, I'm just a bit annoyed there's a rule matching a bug which I explicitly stated is not the same. whatever01:12
wallyworldaxw: agreed. is there a bug for the new issue?01:13
axwwallyworld: about to check and file if not01:13
wallyworldty01:13
axwwallyworld: https://bugs.launchpad.net/juju-core/+bug/1501559 01:17
mupBug #1501559: provider/ec2: bootstrap fails with "failed to bootstrap environment: cannot start bootstrap instance: tagging root disk: timed out waiting01:17
mupfor EBS volume to be associated" <bootstrap> <ec2-provider> <intermittent-failure> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1501559>01:17
wallyworldty01:17
axwwallyworld: there's not much to review. I'll have a look after I've reviewed perrito666's mongo branch01:17
wallyworldaxw: i'll just fix the series on the bug, plus i'll have a review for you real soon :-)01:18
axwwallyworld: lucky me :)01:18
wallyworldi know right01:18
mupBug #1501559 opened: provider/ec2: bootstrap fails with "failed to bootstrap environment: cannot start bootstrap instance: tagging root disk: timed out waiting01:22
mupfor EBS volume to be associated" <bootstrap> <ec2-provider> <intermittent-failure> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1501559>01:22
* perrito666 feels summoned by axw01:24
mupBug #1501563 opened: Connection shutdown <ci> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1501563>01:25
mupBug #1501563 changed: Connection shutdown <ci> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1501563>01:31
mupBug #1501563 opened: Connection shutdown <ci> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1501563>01:34
axwperrito666: sorry (sorry again), no summoning intended01:36
axwSORRY01:36
axwwallyworld: and sorry about continuously setting the wrong milestones on bugs :/01:37
wallyworldtis ok :-)01:37
axwwallyworld: it'd be great if LP didn't let me do that... or was smarter about assigning series from milestones01:37
wallyworldyes01:38
perrito666axw: if you really want to troll me, add my name to a bug title and let mup do the rest01:39
axwperrito666: ok. when you least expect it01:40
axwcould be 2am... could be 5am...01:40
perrito666axw: could be, yet I don't have notifications for IRC; I happen to be working, which is why I saw the notification01:41
axw:)01:41
perrito666I have too many troll friends to tie my phone to any sort of exploitable notifications01:41
natefinchthat sounds like a challenge01:41
natefinchThe worst thing that ever happened to me was when somehow my cell number got confused for a fax number.01:44
mupBug #1501563 changed: Connection shutdown <ci> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1501563>01:46
mupBug #1501563 opened: Connection shutdown <ci> <test-failure> <juju-core:New> <https://launchpad.net/bugs/1501563>01:52
perrito666natefinch: I am at phone safe distance01:55
natefinchgood lord, who wrote this crap?  Evidently if you pass "invalid" as a placement directive to the dummy provider, it'll return an error if you try to use that placement01:56
perrito666natefinch: you can always git blame it :p01:56
natefinchperrito666: I usually do, but honestly, it doesn't matter.. unless it's someone who has left the company, I can't really call them out on it :/01:57
natefinchperrito666: https://github.com/juju/juju/blob/6e4a2cf80781a77934fcf559f3b7db88b4d9a271/provider/dummy/environs.go#L694 01:58
mupBug #1501568 opened: TestRebootFromJujuRun Failed <ci> <intermittent-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1501568>01:58
perrito666well you just asked who wrote it :p01:58
natefinchperrito666: lol.... just a figure of speech, really01:58
perrito666there is some humor value in "invalid placement is invalid"02:00
natefinchyeah, I thought so02:00
natefinchblame says axw, but looking at the commit, he just refactored the code someone else wrote.   I need like a blame navigator so I can drill down to who started the whole mess.02:02
axwnatefinch: I think I did write that crap. that's how most of the dummy provider works I think?02:03
axwnatefinch: what's the issue with it, and how would you test placement with the dummy provider differently?02:03
natefinchaxw: sorry for the harsh tone, just frustrated.  My problem is that it's a magic string that causes an error in the dummy provider, and the only way you can know that it is supposed to cause an error is to go read the sourcecode deep in the dummy provider.  I'd much rather have a setting that can be toggled with an obvious name and functionality I can immediately go read.02:05
perrito666natefinch: you could also change the error string to be more informative02:06
axwnatefinch: no problem. fair enough, it could be more obvious. feel free to change it02:07
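The two approaches being contrasted here can be sketched as follows. This is a hypothetical illustration, not the actual dummy provider code: the names magicStringPrecheck, toggledProvider and FailOnPlacement are invented for the example.

```go
package main

import (
	"errors"
	"fmt"
)

// magicStringPrecheck mimics the current behaviour natefinch objects
// to: a reserved placement value silently triggers an error, and the
// only way to learn that is to read the provider source.
func magicStringPrecheck(placement string) error {
	if placement == "invalid" {
		return errors.New("invalid placement is invalid")
	}
	return nil
}

// toggledProvider sketches the alternative: an explicit, named switch
// that tests can flip, with no magic values hidden in strings.
type toggledProvider struct {
	FailOnPlacement bool // obvious name; greppable in test setup
}

func (p *toggledProvider) Precheck(placement string) error {
	if p.FailOnPlacement {
		return fmt.Errorf("placement %q rejected: FailOnPlacement is set", placement)
	}
	return nil
}

func main() {
	fmt.Println(magicStringPrecheck("invalid"))
	p := &toggledProvider{FailOnPlacement: true}
	fmt.Println(p.Precheck("zone=us-east-1a"))
}
```

The toggle version trades a little setup boilerplate for discoverability: the failure mode is visible at the test's configuration site rather than buried in the provider.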
natefinchaxw: some context.... I'm here, trying to debug why this test is suddenly failing: https://github.com/juju/juju/blob/master/apiserver/service/service_test.go#L372 02:07
perrito666"%s is invalid, the only valid is blah"02:07
mupBug #1501568 changed: TestRebootFromJujuRun Failed <ci> <intermittent-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1501568>02:07
natefinchperrito666: that would help... certainly it would make it more searchable.02:07
axwnatefinch: I see. is this in your unit assigner branch?02:08
natefinchaxw: yes, which, now that I know where this error originates from, I know why it's not getting triggered02:08
axwnatefinch: okey dokey02:09
natefinchaxw:  actually would like your input on this.  Now that the unit assignment is being done in a worker... this will never fail.  The assignment from the worker will fail, but that's obviously asynchronous.  Not sure what to do with this test.02:11
axwnatefinch: we should still be doing the prechecking, even if we don't do assignment02:11
natefinchaxw: hmm good point. I made sure we were still doing some of the more basic parsing, but was missing the precheck.  Cool, I'll hook that in.02:12
axwnatefinch: thanks, SGTM. if it's not obvious, precheck is "are these args obviously wrong". it can still fail asynchronously if that passes, and that's fine02:13
natefinchaxw: yeah, I figured, thanks.02:16
mupBug #1501568 opened: TestRebootFromJujuRun Failed <ci> <intermittent-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1501568>02:19
natefinchheh, failing that test did reveal that the test code just panics if there's no error.... it calls Results[0].Error.Error() without checking if that first .Error is nil.02:20
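The panic natefinch describes is a nil pointer dereference on a possibly-absent error field. A minimal sketch of the fix, with hypothetical result types standing in for the real apiserver params structs:

```go
package main

import "fmt"

// Error and Result are stand-ins for the apiserver result types; the
// real juju structs differ, but the shape of the bug is the same.
type Error struct{ Message string }

func (e *Error) Error() string { return e.Message }

type Result struct{ Err *Error }

// describe checks the error pointer for nil before calling Error()
// on it, avoiding the panic the test code hit.
func describe(r Result) string {
	if r.Err == nil {
		return "<no error>"
	}
	return r.Err.Error()
}

func main() {
	fmt.Println(describe(Result{}))                    // nil error: no panic
	fmt.Println(describe(Result{Err: &Error{"boom"}}))
}
```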
natefincharg.... of course precheck instance doesn't actually take a real instance.Placement...02:23
natefinchjust some string that has to be in a magically correct format :/02:23
mupBug #1501569 opened: MachineSuite failed <ci> <intermittent-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1501569>02:28
beisnero/ thumper02:39
beisnerso the retry on the openstack provider destroy is better (46 bootstrap/destroy iterations successful vs. 9)02:41
beisnerbut then we hit a new one:  it tried to destroy twice, with a nil error:  http://paste.ubuntu.com/12629198/02:41
beisner(full loop output if interested:  http://paste.ubuntu.com/12629219/)02:43
beisneradded timing stats to the bug, as well as that ^ info02:59
thumperbeisner: hey, looking at paste now02:59
beisnerthumper, kk.  see last bug comment too for min/max/avg timings observed.03:00
thumperhmm... so now what?03:01
beisnerha!03:03
beisnerok, so i found a secret03:03
beisner| 2d794379-3472-492b-9034-7c6d87727883 | juju-beis1-machine-0              | ERROR   | -          | NOSTATE     |03:03
beisnerthat turned out to be an exercise in how well it handles an error in the cloud03:03
beisnerso, when the undercloud behaved, it looks like it works well (though 30s may still be pushing it)03:04
beisnerand when the undercloud misbehaves, that unexpected nil thing happens.  i'd expect it to fail differently.03:04
beisnerbut basically, the instance never disappeared, instead, hit a resource issue.03:04
beisnerand errored by nova-compute03:04
thumperhmm...03:06
thumperso nova errored out trying to terminate the instance?03:06
beisnerno it errored trying to spawn it03:06
thumperah03:06
beisner(neutron-api timed out as the undercloud was under fairly heavy load at that moment)03:06
thumpernew bug plz :)03:06
beisnerso, on the work done so far, suggestions:03:07
thumperbeisner: I think I'm about to crusade on better retries of cloud based errors across the board03:07
beisnerone iteration was 34s from 'terminate' to 'finished' in the juju destroy --debug output03:07
beisnerso, an increased max_wait may be in order03:07
beisner2nd suggestion:03:07
beisnerthat's a while to wait on apparently nothing03:08
thumper:-|03:08
beisneruser feedback (at least in --debug) while iterating, would be good03:08
* thumper nods03:08
beisneravg 7s, min 3s ... seem like reasonable normal/best cases03:09
* thumper nods03:10
thumperI can land additional tweaks to 1.24/1.2503:10
thumperboth have landed with the current fix I think03:10
thumperI'm currently poking something else :)03:10
beisnerunder load, ~30s seems like about as long as one should be expected to wait around on something to terminate.   but, as this shows, that may not be resilient with a production cloud under dynamic load changes.03:11
thumperchanging our fslock implementation on linux/macos to use flock and only use the rename dir on windows03:11
beisner++ for 1.24.x & 1.25  yes please03:11
beisnerone of the roles of this ci is to always consume whatever is in ppa:juju/stable, and run that against the released and the dev versions of the openstack charms03:12
beisnerplus, when we get the "stakeholders, please test proposed juju X" email, we flip a bit and run it all against that03:12
thumperbeisner: ack03:13
beisnerso, if i run on a fixed version, we'd lose that ability to flex03:13
thumperbeisner: we'll try our best to look after you :)03:13
beisnerditto :)03:13
beisneror at least feed info to ya03:13
thumper:)03:14
thumperthat's appreciated03:14
lazypowera thumper crusade :D i cant wait to read the commit stream on this one03:15
thumperheh03:15
thumperI'm getting pretty sick of Juju's inability to handle transient cloud errors03:16
lazypowerI feel your pain03:16
lazypowerthumper, have you seen how prominent they are on the public cloud space?03:16
thumpernot really, but I can guess03:17
beisnerthumper, thanks a ton.  plz lmk when/where i can get 1.24.x with the new goods ;-)03:17
lazypowerhttp://reports.vapour.ws/cloud-health/trends03:17
thumperbeisner: as in a released version of 1.24.x?03:17
thumperlazypower: why does that graph go back in time to the right03:18
thumperlazypower: that just looks weird03:18
lazypowerI'm not sure really03:18
thumperlazypower: also, what does the red really mean?03:18
lazypowerred means it encountered an error that we didn't handle and retry provisioning03:18
beisnerso tldr from a non-juju-go-dev:   the opportunity for this race to occur has always been present (not waiting on secgroup delete before attempting to create another).   something got better/faster in 1.24.6, which removed just enough wait for a line to be crossed.  or at least that's how my brain has resolved it.  ;-)03:18
lazypowerthats what i understand anyway. Not necessarily the exact science03:18
wallyworldaxw: thanks for review, in this branch there's not yet any new series order of precedence stuff - the charm store repo does not yet return supported series03:19
beisnerthumper, ah of course.  i'll notice that for sure.  thx once again.03:19
* thumper nods03:19
thumperkk03:19
axwwallyworld: ah I see, I was wondering why that result was ignored. can you please add a TODO there03:20
wallyworldok03:21
wallyworldaxw: i've responded to some of the questions, working on updating some of the doc as requested03:25
axwwallyworld: thanks, looking03:25
wallyworldaxw: so local repos. all charms are by convention/definition meant to be interpreted as single series only, because they are located in a directory named after the series. so if a charm author decides to modify the charm to declare supported series, we ignore it. if they want to have the charm be interpreted as multiseries, they move it or use the path syntax04:10
wallyworldso for now, all the repo related code ignore supported series so the system behaves as today04:10
wallyworldsupported series will be used for charm store only once it supports it04:11
wallyworldand local repo is considered deprecated04:11
axwwallyworld: that's fine. when I wrote the comment I thought the default series precedence was implemented, since you had updated the doc04:11
axwwallyworld: I think the default-series code I pointed at needs to change when it's implemented, regardless of local/cs04:12
axwbut I may be wrong. I don't know it well.04:12
axw(and it doesn't matter until it's implemented)04:12
wallyworld_damn, this kernel bug killing my networking is giving me the shits04:16
axw[12:11:58] <axw> wallyworld: that's fine. when I wrote the comment I thought the default series precedence was implemented, since you had updated the doc04:16
axw[12:12:29] <axw> wallyworld: I think the default-series code I pointed at needs to change when it's implemented, regardless of local/cs04:16
axw[12:12:41] <axw> but I may be wrong. I don't know it well.04:16
axw[12:12:54] <axw> (and it doesn't matter until it's implemented)04:16
axwwallyworld_: also, LGTM04:16
wallyworld_\o/04:17
wallyworld_ty04:17
wallyworld_i just need to get my charmrepo branch landed04:17
axwwallyworld_: if you have time, could you review some of my branches? master is blocked, but I'd like to back-port to 1.25 while it's still unblocked04:20
wallyworld_axw: yeah, was just about to do that04:20
axwwallyworld_: thanks04:21
thumperah fark04:49
thumperI suppose I should have grepped first...04:49
thumpertwo hours down the drain attempting to change a base implementation only to find that people rely on existing behaviour...04:50
thumperpoo04:50
wallyworld_axw: your reviews done, i'm afk for a bit04:50
axwwallyworld_: thanks04:50
axwwaigani: the "move lastlogin and last connection to their own collections" upgrade step for 1.25 is in the master branch, but not in the 1.25 branch. intentional?05:13
=== akhavr1 is now known as akhavr
mupBug #1501637 opened: provider/ec2: "iops" should be per-GiB in EBS pool config <ec2-provider> <juju-core:Triaged by axwalk> <juju-core 1.25:In Progress by axwalk> <https://launchpad.net/bugs/1501637>08:20
mupBug #1501642 opened: provider/ec2: min/max EBS volume sizes are wrong for SSD/IOPS <ec2-provider> <juju-core:Triaged by axwalk> <juju-core 1.25:In Progress by axwalk> <https://launchpad.net/bugs/1501642>08:20
rogpeppewallyworld: reviewed https://github.com/juju/charmrepo/pull/3208:30
wallyworldty08:30
axwwallyworld: would you agree with changing https://bugs.launchpad.net/juju-core/+bug/1501637 for 1.25?08:33
mupBug #1501637: provider/ec2: "iops" should be per-GiB in EBS pool config <ec2-provider> <juju-core:Triaged by axwalk> <juju-core 1.25:In Progress by axwalk> <https://launchpad.net/bugs/1501637>08:33
axwwallyworld: I mean, making the suggested change08:33
wallyworldrogpeppe: maybe we should just remove charm.Reference as you say, but in a followup if that's ok08:33
rogpeppewallyworld: definitely - it's quite a big job08:33
wallyworldaxw: yes, best to do it before release08:34
axwwallyworld: this demo prep has been enlightening ;)08:35
wallyworldi bet08:35
wallyworlddog food tastes awesome08:35
axwwallyworld: I got the benchmark GUI working before. I'll send you a link when I've tested these changes and got it up again08:35
wallyworldawesome08:36
dooferladdimitern, voidspace: hangout!09:03
voidspacedooferlad: omw09:03
dooferladfwereade: hangout?09:04
fwereadedooferlad, oops, ty09:04
dimiternjam, HO?09:04
dooferladdimitern, frobware: hangout?10:02
dimiterndooferlad, I think I'll skip it today10:11
dooferladdimitern: test for your demo passes :-) http://pastebin.ubuntu.com/12630924/10:12
dooferladdimitern: the last four lines are the good bit!10:13
axwwallyworld: results are a little underwhelming, but here's the GUI: http://52.64.145.252/ (will be taking it down soon)10:14
wallyworldlooking10:14
axwwallyworld: mysql-benchmark/2 is provisioned IOPS, mysql-benchmark/0 is SSD, mysql-benchmark/1 is magnetic10:15
axwwallyworld: (I have made the suggestion that the GUI show related unit info on the screen)10:15
dimiterndooferlad, awesome!10:16
dimiterndooferlad, and the logs look nice as well10:17
axwwallyworld: mysql-benchmark/2 is provisioned IOPS, mysql-benchmark/0 is SSD, mysql-benchmark/1 is magnetic10:18
axwwallyworld: (I have made the suggestion that the GUI show related unit info on the screen)10:18
axw(in case you got cut off before)10:18
dimiterndooferlad, I think we should add a scaling step - i.e. add-unit mysql and mediawiki and check they end up in the same spaces, but different subnets?10:19
wallyworldaxw: too bad there aren't labels that can be set to show that detail on the summary10:19
dooferladdimitern: already doing that :-)10:19
dimiterndooferlad, this will give us guarantee the AZ distribution works with spaces10:19
wallyworldyes i did get cut off again :-(10:19
dimiterndooferlad, great! cheers :)10:19
dooferladdimitern: just got sidetracked with trying to access haproxy, which even though it is exposed isn't responding.10:19
wallyworldaxw: without labels, it's impossible to easily see what benchmark ran on what10:20
axwwallyworld: if I demo this, I'll deploy the benchmark charm once per mysql10:20
axwwallyworld: and give them useful names10:20
wallyworldsounds good, what cloud was it again?10:20
axwwallyworld: AWS10:20
wallyworldcool. can we do gce or azure as well?10:20
axwwallyworld: yes, can do10:21
axwwallyworld: we only do one disk type on each of them, so we'd be comparing multiple clouds rather than disk types10:22
wallyworldthat's fine, just to show storage run on those platforms10:23
axwwallyworld: tearing it down now10:24
wallyworldok10:24
axwwallyworld: if you're still working, another small one that will be helpful for the demo: https://github.com/juju/juju/pull/341710:24
axwif not, tomorrow10:25
wallyworldsure10:25
wallyworldaxw: i already lgtm that one10:26
wallyworldyou meant 2087?10:26
axwwallyworld: nope, http://reviews.vapour.ws/r/2807/10:26
wallyworldyeah, was dyslexic10:27
wallyworldaxw: lgtm, ty10:30
axwwallyworld: thanks10:30
dimiterndooferlad, voidspace, frobware, TheMue, I managed to pre-patch the gui charm so it can be deployed from a local repo; just get http://people.canonical.com/~dimitern/spaces-demo-local-repo.tar.bz2, extract it and use $ juju deploy --repository ./repo local:trusty/juju-gui --to 0 10:48
mupBug #1501709 opened: "juju deploy" does not validate volume/filesystem params <juju-core:Triaged by axwalk> <juju-core 1.25:Triaged by axwalk> <https://launchpad.net/bugs/1501709>11:05
mupBug #1501710 opened: worker/storageprovisioner: worker bounces upon finding invalid volume/filesystem params <juju-core:Triaged by axwalk> <juju-core 1.25:Triaged by axwalk> <https://launchpad.net/bugs/1501710>11:05
mupBug #1501709 changed: "juju deploy" does not validate volume/filesystem params <juju-core:Triaged by axwalk> <juju-core 1.25:Triaged by axwalk> <https://launchpad.net/bugs/1501709>11:08
mupBug #1501710 changed: worker/storageprovisioner: worker bounces upon finding invalid volume/filesystem params <juju-core:Triaged by axwalk> <juju-core 1.25:Triaged by axwalk> <https://launchpad.net/bugs/1501710>11:08
mupBug #1501709 opened: "juju deploy" does not validate volume/filesystem params <juju-core:Triaged by axwalk> <juju-core 1.25:Triaged by axwalk> <https://launchpad.net/bugs/1501709>11:11
mupBug #1501710 opened: worker/storageprovisioner: worker bounces upon finding invalid volume/filesystem params <juju-core:Triaged by axwalk> <juju-core 1.25:Triaged by axwalk> <https://launchpad.net/bugs/1501710>11:11
rogpeppewould anyone be able to give this a review please? it's been up for review for more than a day now. http://reviews.vapour.ws/r/2794/11:31
rogpeppeericsnow, axw: ^11:31
=== anthonyf is now known as Guest37825
=== 18WAATBUF is now known as REVAGOMES
=== REVAGOMES is now known as revagomes
mupBug #1501786 opened: juju cannot provision precise instances: need a repository as argument <blocker> <ci> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1501786>14:24
ericsnowpotentially a *much* faster "go test": https://github.com/rsc/gt14:26
ericsnowthanks for pointing it out, natefinch!14:26
TheMuedimitern: quick question, on command line a subnet can be created and directly added to a space, also a space can be direct created with and without subnets. is it possible to create a subnet without adding it to a space?14:28
mupBug #1501786 changed: juju cannot provision precise instances: need a repository as argument <blocker> <ci> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1501786>14:36
=== anthonyf is now known as Guest68817
natefinchgt is frigging amazing.  I can basically just always run go test over the entire repo and not pay the price of all the tests that can't possibly have changed14:37
natefinchericsnow: btw, there's a -f flag to tell gt to force-rerun stuff14:40
mupBug #1501786 opened: juju cannot provision precise instances: need a repository as argument <blocker> <ci> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1501786>14:42
mupBug #1501786 changed: juju cannot provision precise instances: need a repository as argument <blocker> <ci> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1501786>14:45
mupBug #1501786 opened: juju cannot provision precise instances: need a repository as argument <blocker> <ci> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1501786>14:48
rogpeppeericsnow: I see you're OCR; would you be able to do a review for me, by any chance? http://reviews.vapour.ws/r/2794/14:51
rogpeppenatefinch: is that russ's tool?14:51
natefinchrog yep14:51
katcorogpeppe: it has a ship it doesn't it?14:52
rogpeppekatco: i need a review from someone on juju-core14:53
katcorogpeppe: ahh ok14:53
katcorogpeppe: i'll tal14:53
rogpeppekatco: ta!14:53
natefinchrogpeppe: it's pretty great... it's even handy for our flaky tests, because you can cache the tests from when they pass ;)14:53
rogpeppenatefinch: ha14:53
rogpeppenatefinch: my problem with it was that i was often changing something on which lots of things depended14:54
rogpeppenatefinch: e.g. if you're working on the charm package, it doesn't really help much14:54
natefinchrogpeppe: right, it's useful for full-repo runs of juju/juju14:55
natefinchrogpeppe: I like it because it means I run full tests more often... and if I'm modifying something that a ton of stuff is using, at least I'm more aware of that fact.14:56
rogpeppenatefinch: yeah14:56
rogpeppenatefinch: i should try using it again14:56
dimiternTheMue, sorry, was in a call14:57
rogpeppenatefinch: i'd like someone to seriously look at speeding the tests up though. i'm not convinced that it requires wholesale refactoring.14:57
TheMuedimitern: no problem, found it14:57
dimiternTheMue, so a subnet without a space is forbidden by the model14:57
TheMueyep, seen it14:57
TheMue;)14:57
TheMuedimitern: thx anyway14:58
natefinchrogpeppe: yeah, I think we guessed that if we just cleared the db rather than restarting mongo it would go faster14:58
rogpeppenatefinch: a lot of the time is spent dialing mongo. providing a way to make a State from a mongo session might help a lot14:59
rogpeppenatefinch: looking at the state tests, even when the test itself only take 0.01 seconds, the actual time taken is more like 0.25s - there's a quarter second overhead on every test15:00
natefinchrogpeppe: yeah, I think one thing that would help a lot is if gocheck's time output included test setup/teardown and if it could output suite times, to include setup/teardown15:00
rogpeppenatefinch: i suspect that if someone was given a week of time, they could make the tests run twice as fast.15:00
natefinchrogpeppe: that would seem to be a no brainer for someone to work on15:03
rogpeppenatefinch: +115:03
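The quarter-second-per-test overhead rogpeppe measures is dominated by dialing, so dialing once per suite and only resetting state between tests amortizes it. A generic stdlib sketch of the pattern (dial and reset are stand-ins, not juju or gocheck APIs; the sleep simulates the dial cost):

```go
package main

import (
	"fmt"
	"time"
)

// dial stands in for the expensive per-test work (~0.25s to dial
// mongo in the real tests); reset stands in for clearing the
// database, which is cheap by comparison.
func dial() string   { time.Sleep(50 * time.Millisecond); return "session" }
func reset(s string) {}

// runPerTestDial dials once per test: total cost grows with n.
func runPerTestDial(n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		s := dial()
		reset(s)
	}
	return time.Since(start)
}

// runSharedSession dials once per suite and only resets between
// tests, the structure rogpeppe suggests (a State built from an
// existing mongo session).
func runSharedSession(n int) time.Duration {
	start := time.Now()
	s := dial()
	for i := 0; i < n; i++ {
		reset(s)
	}
	return time.Since(start)
}

func main() {
	fmt.Println("per-test:", runPerTestDial(10))
	fmt.Println("shared:  ", runSharedSession(10))
}
```

In gocheck terms this is the difference between doing the expensive work in SetUpTest versus SetUpSuite.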
cheryljperrito666: ping?15:11
perrito666cherylj: pong15:11
perrito666good afternoon15:12
cheryljperrito666: what's the story with https://github.com/juju/utils/pull/155 ?15:12
cheryljperrito666: was it in response to a bug?15:12
cheryljperrito666: good afternoon :)  Sorry to jump past the pleasantries :)15:12
perrito666cherylj: np, I assume that broke something else?15:13
cheryljperrito666: yeah bug 150178615:13
mupBug #1501786: juju cannot provision precise instances: need a repository as argument <blocker> <ci> <precise> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1501786>15:13
perrito666mmpf, I presume something changed between versions of apt-add-repository then15:14
perrito666cherylj: its a bug found and fixed in place, when adding a ppa it would break by saying something similar15:14
perrito666well actually when adding anything at all15:14
cheryljon trusty?15:14
perrito666vivid15:14
cheryljah15:14
perrito666cherylj: you can revert it, it breaks nothing in production15:16
cheryljperrito666: k, will do15:16
perrito666cherylj: I'll devise a way to make that smarter15:16
cheryljperrito666: cool, thanks.  I'll get on the bug, then :)15:17
katcorogpeppe: lgtm15:22
rogpeppekatco: tvm!15:22
katcorogpeppe: sorry for the confusion15:23
rogpeppekatco: np15:23
rogpeppekatco: in general on our side we always require two reviews anyway, FWIW15:23
dimiternfrobware, i'm back15:28
frobwaredimitern, me too15:28
dimiternfrobware, shall we use the same HO?15:29
frobwaredimitern, ah... yes. off-by-1 for me... omw15:29
dimitern:)15:29
beisnermgz - are those openstack provider retry bits in the pkgs @ ppa:juju/devel ?15:36
mgzthe ones from tim yesterday?15:36
mgznot yet.15:37
beisnermgz - ack.  before i go drop custom bins onto 23 slaves, do you know an eta of when that might land there?15:46
katcomgz: rackspace call is now if you're interested16:01
mgzkatco: omw16:02
cheryljRB isn't picking up my PRs.  Can someone review https://github.com/juju/utils/pull/158 ?16:13
cheryljit's for the new blocker16:13
perrito666cherylj: ship it16:16
cheryljperrito666: thanks!16:16
rogpeppekatco: i wonder if you might want to take a look at this one - very similar to the previous one (but only test code this time) so should be quick: http://reviews.vapour.ws/r/2812/16:44
natefinchkatco: btw, all tests are passing on my branch, just adding a few more to cover things I realized I missed... but adding tests is easy, so it'll be ready for a PR in a bit.16:49
katconatefinch: awesome!16:52
rogpeppeericsnow: any chance of a review? http://reviews.vapour.ws/r/2812/17:00
ericsnowrogpeppe: katco has offered to handle at least some of the reviewing for me today17:01
rogpeppeericsnow: ok, cool17:01
ericsnowkatco: would you mind taking a look at this one?17:01
katcoericsnow: rogpeppe: res, plan to17:01
katcoyes17:01
rogpeppeericsnow: i already had one review of katco; maybe asking for two in a day is pushing it :)17:01
rogpeppes/of/off/17:01
rogpeppekatco: ta!17:01
ericsnowrogpeppe: we're heads-down on some pre-Seattle work :)17:02
rogpeppeericsnow: me too :)17:02
rogpeppeericsnow: (that's what this is)17:02
ericsnowrogpeppe: surprise, surprise lol17:02
cmarshey rogpeppe i'll take a look17:07
rogpeppecmars: ta!17:07
rogpeppecmars: i'd much appreciate your thoughts anyway as you know the territory...17:07
cheryljCan I get another review?  http://reviews.vapour.ws/r/2813/17:45
cheryljagain, for the blocker17:45
=== akhavr1 is now known as akhavr
mgzcherylj: juju code change looks fine to me, explain the utils revert?17:51
mgzwill it actually break something on vivid going back to the previous quoting?17:51
cheryljmgz: no, I tested that17:52
mgzcherylj: okay, lgtm17:52
katconatefinch: running a little late18:57
natefinchkatco: ok, just working on the PR for my branch18:58
katconatefinch: finish that up and ping me18:58
natefinchkatco: ok18:59
jcastroalexisb: wow, creepy. Everyone in eco is like, happy with 1.25 other than the bugs filed already19:06
jcastrombruzek actually demo'ed at nagios conf with 1.2519:06
jcastroaisrael has one bug he needs to reproduce before filing a bug19:06
jcastrobut overall I got thumbs up from everyone in our daily calls19:07
aisraeljcastro: two possible bugs19:07
aisraelI'll get that done tomorrow19:07
natefinchkatco: http://reviews.vapour.ws/r/2814/19:15
katconatefinch: rock tal19:15
natefinchkatco: sorry it's big.19:22
natefinchkatco: oops, crap, I see some code I left commented out19:22
cheryljwwitzel3!  I got virtual MAAS working without sacrificing a chicken!19:23
katconatefinch: no worries, i can review while you fix :)19:23
wwitzel3cherylj: yeah, that part isn't actually required, I should probably stop doing it19:23
cheryljwwitzel3: haha.  Thanks for the links19:23
wwitzel3cherylj: np, glad it worked19:24
natefinchkatco: gah, the merge with a more up to date master broke some tests... looking into them now.  I'm sure it's more of the same junk19:48
katconatefinch: k, i'll keep with the review19:49
natefinchkatco: haha, no it was that code I had commented out... it actually should have been deleted, not uncommented.  Oops.19:55
katconatefinch: fortunate :)19:55
katcoericsnow: wwitzel3: natefinch: want to meet up?20:02
wwitzel3katco: sure20:02
ericsnowkatco: sure20:02
natefinchkatco: yep20:02
natefinchis there any way to stop systemd from spamming every single terminal I have?20:03
perrito666natefinch: yes, upgrade to mongo 3 ;)20:04
ericsnowping20:21
ericsnowpong20:21
natefinchpong20:21
ericsnowthumper: ping20:22
thumperericsnow: hey, otp just now20:22
katcothumper: join moonstone whenever you're ready? https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=120:23
katcorick_h_: ping20:27
voidspaceericsnow: ping22:01
ericsnowvoidspace: hey :)22:01
voidspaceericsnow: hi22:02
voidspaceericsnow: you're OCR today I believe :-)22:02
voidspaceif you have a chance:22:02
voidspacehttp://reviews.vapour.ws/r/2816/22:02
voidspaceericsnow: you got over jetlag yet?22:02
ericsnowvoidspace: I'll take a look22:02
ericsnowvoidspace: just barely22:02
ericsnowvoidspace: got hit by an awful sinus infection22:03
ericsnowvoidspace: feeling better and my body is finally back on track22:03
voidspaceericsnow: :-(22:03
voidspacegood22:03
ericsnowvoidspace: LGTM22:17
voidspaceericsnow: thanks22:18
voidspaceericsnow: I wonder if 1.25 is blocked22:18
voidspacemaster has been blocked for days22:18
ericsnowvoidspace: good luck :/22:18
voidspace:-)22:18
voidspaceericsnow: did you work on storage?22:46
ericsnowvoidspace: not at all22:46
voidspaceheh, can't blame you22:46
voidspacecan't destroy an environment (without force) because a volume I didn't create doesn't exist22:47
voidspaceit's a shared amazon account, so I think it's getting confused by other things in the account that it shouldn't be concerned with22:47
voidspaceERROR environment destruction failed: destroying storage: destroying volumes: querying volume: The volume 'vol-45ac83a7' does not exist. (InvalidVolume.NotFound), destroying "vol-cbc0ce22": The volume 'vol-cbc0ce22' does not exist. (InvalidVolume.NotFound)22:48
voidspaceah well, I'll look into it tomorrow22:52
voidspaceericsnow: g'night22:52
ericsnowvoidspace: have a good one22:52
axwrogpeppe: sorry, I ignored it because it had a shipit already23:01
perrito666my kingdom for a big machine where to compile stuff23:28