[00:17] axw: a small one https://github.com/juju/charm/pull/160
[00:34] axw: ty, i've pushed up an additional change to use typed errors
[00:34] wallyworld: ok, looking
[00:35] btw the original implementation was sort of on purpose because upstream handled the empty series stuff, but easier to push that down into charm
[00:36] wallyworld: ok. still LGTM
[00:36] ty
[00:39] wallyworld: quick chat?
[00:39] sure
[00:40] 1:1 hangout
[01:08] axw: see http://reports.vapour.ws/releases/3124 - it suggests bug 1479546 may be a cause, could you take a look?
[01:09] Bug #1479546: Storage provisioner timeouts spawning extra volumes on AWS
[01:09] wallyworld: looking
[01:09] ty
[01:11] wallyworld: as I responded in the bug, it's a different issue. but still an issue nonetheless
[01:11] wallyworld: I guess we need to increase the timeout
[01:11] axw: ty, i didn't read the bug too closely
[01:12] wallyworld: np, I'm just a bit annoyed there's a rule matching a bug which I explicitly stated is not the same. whatever
[01:13] axw: agreed. is there a bug for the new issue?
[01:13] wallyworld: about to check and file if not
[01:13] ty
[01:17] wallyworld: https://bugs.launchpad.net/juju-core/+bug/1501559
[01:17] Bug #1501559: provider/ec2: bootstrap fails with "failed to bootstrap environment: cannot start bootstrap instance: tagging root disk: timed out waiting
[01:17] for EBS volume to be associated"
[01:17] ty
[01:17] wallyworld: there's not much to review. I'll have a look after I've reviewed perrito666's mongo branch
[01:18] axw: i'll just fix the series on the bug, plus i'll have a review for you real soon :-)
[01:18] wallyworld: lucky me :)
[01:18] i know right
[01:22] Bug #1501559 opened: provider/ec2: bootstrap fails with "failed to bootstrap environment: cannot start bootstrap instance: tagging root disk: timed out waiting
[01:22] for EBS volume to be associated"
[01:24] * perrito666 feels summoned by axw
[01:25] Bug #1501563 opened: Connection shutdown
[01:31] Bug #1501563 changed: Connection shutdown
[01:34] Bug #1501563 opened: Connection shutdown
[01:36] perrito666: sorry (sorry again), no summoning intended
[01:36] SORRY
[01:37] wallyworld: and sorry about continuously setting the wrong milestones on bugs :/
[01:37] tis ok :-)
[01:37] wallyworld: it'd be great if LP didn't let me do that... or was smarter about assigning series from milestones
[01:38] yes
[01:39] axw: if you really want to troll me, add my name to a bug title and let mup do the rest
[01:40] perrito666: ok. when you least expect it
[01:40] could be 2am... could be 5am...
[01:41] axw: could be, yet I don't have notifications for IRC; I happen to be working, that is why I saw the notification
[01:41] :)
[01:41] I have too many troll friends to tie my phone to any sort of exploitable notifications
[01:41] that sounds like a challenge
[01:44] The worst thing that ever happened to me was when somehow my cell number got confused for a fax number.
[01:46] Bug #1501563 changed: Connection shutdown
[01:52] Bug #1501563 opened: Connection shutdown
[01:55] natefinch: I am at phone safe distance
[01:56] good lord, who wrote this crap? Evidently if you pass "invalid" as a placement directive to the dummy provider, it'll return an error if you try to use that placement
[01:56] natefinch: you can always git blame it :p
[01:57] perrito666: I usually do, but honestly, it doesn't matter.. unless it's someone who has left the company, I can't really call them out on it :/
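
Editor's note: for readers unfamiliar with the "typed errors" pattern mentioned at 00:34, the sketch below shows its general shape in Go. The names (unsupportedSeriesError, IsUnsupportedSeriesError) are invented for illustration and are not the identifiers used in the actual juju/charm pull request.

```go
package charm

import "fmt"

// unsupportedSeriesError is a distinct error type, so callers can detect the
// condition with a type check instead of matching on the error string.
type unsupportedSeriesError struct {
	series string
}

func (e *unsupportedSeriesError) Error() string {
	return fmt.Sprintf("series %q not supported by charm", e.series)
}

// NewUnsupportedSeriesError wraps a series name in the typed error.
func NewUnsupportedSeriesError(series string) error {
	return &unsupportedSeriesError{series: series}
}

// IsUnsupportedSeriesError reports whether err is the typed error above;
// this is what "typed errors" buy over a bare fmt.Errorf.
func IsUnsupportedSeriesError(err error) bool {
	_, ok := err.(*unsupportedSeriesError)
	return ok
}
```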
[01:58] perrito666: https://github.com/juju/juju/blob/6e4a2cf80781a77934fcf559f3b7db88b4d9a271/provider/dummy/environs.go#L694
[01:58] Bug #1501568 opened: TestRebootFromJujuRun Failed
[01:58] well you just asked who wrote it :p
[01:58] perrito666: lol.... just a figure of speech, really
[02:00] there is some humor value in "invalid placement is invalid"
[02:00] yeah, I thought so
[02:02] blame says axw, but looking at the commit, he just refactored the code someone else wrote. I need like a blame navigator so I can drill down to who started the whole mess.
[02:03] natefinch: I think I did write that crap. that's how most of the dummy provider works I think?
[02:03] natefinch: what's the issue with it, and how would you test placement with the dummy provider differently?
[02:05] axw: sorry for the harsh tone, just frustrated. My problem is that it's a magic string that causes an error in the dummy provider, and the only way you can know that it is supposed to cause an error is to go read the source code deep in the dummy provider. I'd much rather have a setting that can be toggled with an obvious name and functionality I can immediately go read.
[02:06] natefinch: you could also change the error string to be more informative
[02:07] natefinch: no problem. fair enough, it could be more obvious. feel free to change it
[02:07] axw: some context.... I'm here, trying to debug why this test is suddenly failing: https://github.com/juju/juju/blob/master/apiserver/service/service_test.go#L372
[02:07] "%s is invalid, the only valid is blah"
[02:07] Bug #1501568 changed: TestRebootFromJujuRun Failed
[02:07] perrito666: that would help... certainly it would make it more searchable.
[02:08] natefinch: I see. is this in your unit assigner branch?
[02:08] axw: yes, which, now that I know where this error originates from, I know why it's not getting triggered
[02:09] natefinch: okey dokey
[02:11] axw: actually would like your input on this. Now that the unit assignment is being done in a worker... this will never fail. The assignment from the worker will fail, but that's obviously asynchronous. Not sure what to do with this test.
[02:11] natefinch: we should still be doing the prechecking, even if we don't do assignment
[02:12] axw: hmm good point. I made sure we were still doing some of the more basic parsing, but was missing the precheck. Cool, I'll hook that in.
[02:13] natefinch: thanks, SGTM. if it's not obvious, precheck is "are these args obviously wrong". it can still fail asynchronously if that passes, and that's fine
[02:16] axw: yeah, I figured, thanks.
[02:19] Bug #1501568 opened: TestRebootFromJujuRun Failed
[02:20] heh, failing that test did reveal that the test code just panics if there's no error.... it calls Results[0].Error.Error() without checking if that first .Error is nil.
[02:23] arg.... of course precheck instance doesn't actually take a real instance.Placement...
[02:23] just some string that has to be in a magically correct format :/
[02:28] Bug #1501569 opened: MachineSuite failed
[02:39] o/ thumper
[02:41] so the retry on the openstack provider destroy is better (46 bootstrap/destroy iterations successful vs. 9)
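
Editor's note: the panic natefinch describes at 02:20 comes from calling Results[0].Error.Error() when that error is nil. A minimal sketch of the guarded version follows; the helper name, result type and expected message are stand-ins, not the actual apiserver/service test code.

```go
package service_test

import (
	gc "gopkg.in/check.v1"

	"github.com/juju/juju/apiserver/params"
)

// assertFirstResultError fails cleanly, rather than panicking, when the API
// call unexpectedly succeeds and Results[0].Error is nil.
func assertFirstResultError(c *gc.C, results params.ErrorResults, expect string) {
	result := results.Results[0]
	if !c.Check(result.Error, gc.NotNil) {
		return // no error came back; the Check above already failed the test
	}
	c.Check(result.Error, gc.ErrorMatches, expect)
}
```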
[02:41] but then we hit a new one: it tried to destroy twice, with a nil error: http://paste.ubuntu.com/12629198/
[02:43] (full loop output if interested: http://paste.ubuntu.com/12629219/)
[02:59] added timing stats to the bug, as well as that ^ info
[02:59] beisner: hey, looking at paste now
[03:00] thumper, kk. see last bug comment too for min/max/avg timings observed.
[03:01] hmm... so now what?
[03:03] ha!
[03:03] ok, so i found a secret
[03:03] | 2d794379-3472-492b-9034-7c6d87727883 | juju-beis1-machine-0 | ERROR | - | NOSTATE |
[03:03] that turned out to be an exercise in how well it handles an error in the cloud
[03:04] so, when the undercloud behaved, it looks like it works well (though 30s may still be pushing it)
[03:04] and when the undercloud misbehaves, that unexpected nil thing happens. i'd expect it to fail differently.
[03:04] but basically, the instance never disappeared, instead, hit a resource issue.
[03:04] and errored by nova-compute
[03:06] hmm...
[03:06] so nova errored out trying to terminate the instance?
[03:06] no it errored trying to spawn it
[03:06] ah
[03:06] (neutron-api timed out as the undercloud was under fairly heavy load at that moment)
[03:06] new bug plz :)
[03:07] so, on the work done so far, suggestions:
[03:07] beisner: I think I'm about to crusade on better retries of cloud based errors across the board
[03:07] one iteration was 34s from 'terminate' to 'finished' in the juju destroy --debug output
[03:07] so, an increased max_wait may be in order
[03:07] 2nd suggestion:
[03:08] that's a while to wait on apparently nothing
[03:08] :-|
[03:08] user feedback (at least in --debug) while iterating, would be good
[03:08] * thumper nods
[03:09] avg 7s, min 3s ... seem like reasonable normal/best cases
[03:10] * thumper nods
[03:10] I can land additional tweaks to 1.24/1.25
[03:10] both have landed with the current fix I think
[03:10] I'm currently poking something else :)
[03:11] under load, ~30s seems like about as long as one should be expected to wait around on something to terminate. but, as this shows, that may not be resilient with a production cloud under dynamic load changes.
[03:11] changing our fslock implementation on linux/macos to use flock and only use the rename dir on windows
[03:11] ++ for 1.24.x & 1.25 yes please
[03:12] one of the roles of this ci is to always consume whatever is in ppa:juju/stable, and run that against the released and the dev versions of the openstack charms
[03:12] plus, when we get the "stakeholders, please test proposed juju X" email, we flip a bit and run it all against that
[03:13] beisner: ack
[03:13] so, if i run on a fixed version, we'd lose that ability to flex
[03:13] beisner: we'll try our best to look after you :)
[03:13] ditto :)
[03:13] or at least feed info to ya
[03:14] :)
[03:14] that's appreciated
[03:15] a thumper crusade :D i can't wait to read the commit stream on this one
[03:15] heh
[03:16] I'm getting pretty sick of Juju's inability to handle transient cloud errors
[03:16] I feel your pain
[03:16] thumper, have you seen how prominent they are on the public cloud space?
[03:17] not really, but I can guess
[03:17] thumper, thanks a ton. plz lmk when/where i can get 1.24.x with the new goods ;-)
[03:17] http://reports.vapour.ws/cloud-health/trends
[03:17] beisner: as in a released version of 1.24.x?
[03:18] lazypower: why does that graph go back in time to the right
[03:18] lazypower: that just looks weird
[03:18] I'm not sure really
[03:18] lazypower: also, what does the red really mean?
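
Editor's note: thumper's planned "crusade" on transient cloud errors, plus beisner's suggestions of a larger max_wait and progress feedback under --debug, amount to a retry loop like the one below. This is a generic sketch under those assumptions, not juju's actual retry code; the 3s/30s figures only echo the timings quoted in the discussion.

```go
package retry

import (
	"fmt"
	"time"
)

// WithBackoff keeps calling op until it succeeds, returns a non-transient
// error, or maxWait elapses. Each attempt is logged so a --debug style run
// shows progress instead of apparently hanging.
func WithBackoff(op func() error, isTransient func(error) bool, maxWait time.Duration, logf func(string, ...interface{})) error {
	deadline := time.Now().Add(maxWait)
	delay := 3 * time.Second // the "min 3s" case observed above
	for attempt := 1; ; attempt++ {
		err := op()
		if err == nil || !isTransient(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after %d attempts: %v", attempt, err)
		}
		logf("attempt %d failed (%v); retrying in %s", attempt, err, delay)
		time.Sleep(delay)
		if delay < 30*time.Second { // cap near the ~30s wait discussed above
			delay *= 2
		}
	}
}
```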
[03:18] red means it encountered an error that we didn't handle and retry provisioning
[03:18] so tldr from a non-juju-go-dev: the opportunity for this race to occur has always been present (not waiting on secgroup delete before attempting to create another). something got better/faster in 1.24.6, which removed just enough wait for a line to be crossed. or at least that's how my brain has resolved it. ;-)
[03:18] thats what i understand anyway. Not necessarily the exact science
[03:19] axw: thanks for review, in this branch there's not yet any new series order of precedence stuff - the charm store repo does not yet return supported series
[03:19] thumper, ah of course. i'll notice that for sure. thx once again.
[03:19] * thumper nods
[03:19] kk
[03:20] wallyworld: ah I see, I was wondering why that result was ignored. can you please add a TODO there
[03:21] ok
[03:25] axw: i've responded to some of the questions, working on updating some of the doc as requested
[03:25] wallyworld: thanks, looking
[04:10] axw: so local repos. all charms are by convention/definition meant to be interpreted as single series only, because they are located in a directory named after the series. so if a charm author decides to modify the charm to declare supported series, we ignore it. if they want to have the charm be interpreted as multiseries, they move it or use the path syntax
[04:10] so for now, all the repo related code ignore supported series so the system behaves as today
[04:11] supported series will be used for charm store only once it supports it
[04:11] and local repo is considered deprecated
[04:11] wallyworld: that's fine. when I wrote the comment I thought the default series precedence was implemented, since you had updated the doc
[04:12] wallyworld: I think the default-series code I pointed at needs to change when it's implemented, regardless of local/cs
[04:12] but I may be wrong. I don't know it well.
[04:12] (and it doesn't matter until it's implemented)
[04:16] damn, this kernel bug killing my networking is giving me the shits
[04:16] [12:11:58] wallyworld: that's fine. when I wrote the comment I thought the default series precedence was implemented, since you had updated the doc
[04:16] [12:12:29] wallyworld: I think the default-series code I pointed at needs to change when it's implemented, regardless of local/cs
[04:16] [12:12:41] but I may be wrong. I don't know it well.
[04:16] [12:12:54] (and it doesn't matter until it's implemented)
[04:16] wallyworld_: also, LGTM
[04:17] \o/
[04:17] ty
[04:17] i just need to get my charmrepo branch landed
[04:20] wallyworld_: if you have time, could you review some of my branches? master is blocked, but I'd like to back-port to 1.25 while it's still unblocked
[04:20] axw: yeah, was just about to do that
[04:21] wallyworld_: thanks
[04:49] ah fark
[04:49] I suppose I should have grepped first...
[04:50] two hours down the drain attempting to change a base implementation only to find that people rely on existing behaviour...
[04:50] poo
[04:50] axw: your reviews done, i'm afk for a bit
[04:50] wallyworld_: thanks
[05:13] waigani: the "move lastlogin and last connection to their own collections" upgrade step for 1.25 is in the master branch, but not in the 1.25 branch. intentional?
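
Editor's note: the local-repository convention wallyworld describes at 04:10 — a charm's series comes from the directory it sits in, and any declared supported series are ignored — boils down to something like this hypothetical helper (not the actual charmrepo code):

```go
package repo

import "path/filepath"

// seriesFromCharmDir infers a local charm's series from its parent directory,
// e.g. ./repo/trusty/mysql is treated as a trusty charm regardless of what
// its metadata declares. Illustration only.
func seriesFromCharmDir(charmDir string) string {
	return filepath.Base(filepath.Dir(filepath.Clean(charmDir)))
}
```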
=== akhavr1 is now known as akhavr
[08:20] Bug #1501637 opened: provider/ec2: "iops" should be per-GiB in EBS pool config
[08:20] Bug #1501642 opened: provider/ec2: min/max EBS volume sizes are wrong for SSD/IOPS
[08:30] wallyworld: reviewed https://github.com/juju/charmrepo/pull/32
[08:30] ty
[08:33] wallyworld: would you agree with changing https://bugs.launchpad.net/juju-core/+bug/1501637 for 1.25?
[08:33] Bug #1501637: provider/ec2: "iops" should be per-GiB in EBS pool config
[08:33] wallyworld: I mean, making the suggested change
[08:33] rogpeppe: maybe we should just remove charm.Reference as you say, but in a followup if that's ok
[08:33] wallyworld: definitely - it's quite a big job
[08:34] axw: yes, best to do it before release
[08:35] wallyworld: this demo prep has been enlightening ;)
[08:35] i bet
[08:35] dog food tastes awesome
[08:35] wallyworld: I got the benchmark GUI working before. I'll send you a link when I've tested these changes and got it up again
[08:36] awesome
[09:03] dimitern, voidspace: hangout!
[09:03] dooferlad: omw
[09:04] fwereade: hangout?
[09:04] dooferlad, oops, ty
[09:04] jam, HO?
[10:02] dimitern, frobware: hangout?
[10:11] dooferlad, I think I'll skip it today
[10:12] dimitern: test for your demo passes :-) http://pastebin.ubuntu.com/12630924/
[10:13] dimitern: the last four lines are the good bit!
[10:14] wallyworld: results are a little underwhelming, but here's the GUI: http://52.64.145.252/ (will be taking it down soon)
[10:14] looking
[10:15] wallyworld: mysql-benchmark/2 is provisioned IOPS, mysql-benchmark/0 is SSD, mysql-benchmark/1 is magnetic
[10:15] wallyworld: (I have made the suggestion that the GUI show related unit info on the screen)
[10:16] dooferlad, awesome!
[10:17] dooferlad, and the logs look nice as well
[10:18] wallyworld: mysql-benchmark/2 is provisioned IOPS, mysql-benchmark/0 is SSD, mysql-benchmark/1 is magnetic
[10:18] wallyworld: (I have made the suggestion that the GUI show related unit info on the screen)
[10:18] (in case you got cut off before)
[10:19] dooferlad, I think we should add a scaling step - i.e. add-unit mysql and mediawiki and check they end up in the same spaces, but different subnets?
[10:19] axw: too bad there aren't labels that can be set to show that detail on the summary
[10:19] dimitern: already doing that :-)
[10:19] dooferlad, this will give us a guarantee the AZ distribution works with spaces
[10:19] yes i did get cut off again :-(
[10:19] dooferlad, great! cheers :)
[10:19] dimitern: just got sidetracked with trying to access haproxy, which even though it is exposed isn't responding.
[10:20] axw: with labels, it's impossible to easily see what benchmark ran on what
[10:20] wallyworld: if I demo this, I'll deploy the benchmark charm once per mysql
[10:20] wallyworld: and give them useful names
[10:20] sounds good, what cloud was it again?
[10:20] wallyworld: AWS
[10:20] cool. can we do gce or azure as well?
[10:21] wallyworld: yes, can do
[10:22] wallyworld: we only do one disk type on each of them, so we'd be comparing multiple clouds rather than disk types
[10:23] that's fine, just to show storage run on those platforms
[10:24] wallyworld: tearing it down now
[10:24] ok
[10:24] wallyworld: if you're still working, another small one that will be helpful for the demo: https://github.com/juju/juju/pull/3417
[10:25] if not, tomorrow
[10:25] sure
[10:26] axw: i already lgtm that one
[10:26] you meant 2087?
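
Editor's note: bug #1501637 above asks for the EBS pool's "iops" setting to be a per-GiB ratio. The sketch below shows one possible conversion and validation; the function and the 30 IOPS/GiB ceiling (EC2's io1 ratio limit around that time) are assumptions for illustration, not the ec2 provider's actual code.

```go
package ec2

import "fmt"

// maxIopsPerGiB is assumed from EC2's io1 ratio limit at the time; check the
// current EBS documentation before relying on it.
const maxIopsPerGiB = 30

// provisionedIops turns a per-GiB iops pool setting into the absolute value
// the EC2 API expects, rejecting ratios the service would refuse.
func provisionedIops(iopsPerGiB, sizeGiB int64) (int64, error) {
	if iopsPerGiB > maxIopsPerGiB {
		return 0, fmt.Errorf("%d IOPS per GiB requested, maximum is %d", iopsPerGiB, maxIopsPerGiB)
	}
	return iopsPerGiB * sizeGiB, nil
}
```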
[10:26] wallyworld: nope, http://reviews.vapour.ws/r/2807/
[10:27] yeah, was dyslexic
[10:30] axw: lgtm, ty
[10:30] wallyworld: thanks
[10:48] dooferlad, voidspace, frobware, TheMue, I managed to pre-patch the gui charm so it can be deployed from a local repo; just get http://people.canonical.com/~dimitern/spaces-demo-local-repo.tar.bz2, extract it and use $ juju deploy --repository ./repo local:trusty/juju-gui --to 0
[11:05] Bug #1501709 opened: "juju deploy" does not validate volume/filesystem params
[11:05] Bug #1501710 opened: worker/storageprovisioner: worker bounces upon finding invalid volume/filesystem params
[11:08] Bug #1501709 changed: "juju deploy" does not validate volume/filesystem params
[11:08] Bug #1501710 changed: worker/storageprovisioner: worker bounces upon finding invalid volume/filesystem params
[11:11] Bug #1501709 opened: "juju deploy" does not validate volume/filesystem params
[11:11] Bug #1501710 opened: worker/storageprovisioner: worker bounces upon finding invalid volume/filesystem params
[11:31] would anyone be able to give this a review please? it's been up for review for more than a day now. http://reviews.vapour.ws/r/2794/
[11:31] ericsnow, axw: ^
=== anthonyf is now known as Guest37825
=== 18WAATBUF is now known as REVAGOMES
=== REVAGOMES is now known as revagomes
[14:24] Bug #1501786 opened: juju cannot provision precise instances: need a repository as argument
[14:26] potentially a *much* faster "go test": https://github.com/rsc/gt
[14:26] thanks for pointing it out, natefinch!
[14:28] dimitern: quick question, on command line a subnet can be created and directly added to a space, also a space can be direct created with and without subnets. is it possible to create a subnet without adding it to a space?
[14:36] Bug #1501786 changed: juju cannot provision precise instances: need a repository as argument
=== anthonyf is now known as Guest68817
[14:37] gt is frigging amazing. I can basically just always run go test over the entire repo and not pay the price of all the tests that can't possibly have changed
[14:40] ericsnow: btw, there's a -f flag to tell gt to force-rerun stuff
[14:42] Bug #1501786 opened: juju cannot provision precise instances: need a repository as argument
[14:45] Bug #1501786 changed: juju cannot provision precise instances: need a repository as argument
[14:48] Bug #1501786 opened: juju cannot provision precise instances: need a repository as argument
[14:51] ericsnow: I see you're OCR; would you be able to do a review for me, by any chance? http://reviews.vapour.ws/r/2794/
[14:51] natefinch: is that russ's tool?
[14:51] rog yep
[14:52] rogpeppe: it has a ship it doesn't it?
[14:53] katco: i need a review from someone on juju-core
[14:53] rogpeppe: ahh ok
[14:53] rogpeppe: i'll tal
[14:53] katco: ta!
[14:53] rogpeppe: it's pretty great... it's even handy for our flaky tests, because you can cache the tests from when they pass ;)
[14:53] natefinch: ha
[14:54] natefinch: my problem with it was that i was often changing something on which lots of things depended
[14:54] natefinch: e.g. if you're working on the charm package, it doesn't really help much
[14:55] rogpeppe: right, it's useful for full-repo runs of juju/juju
[14:56] rogpeppe: I like it because it means I run full tests more often... and if I'm modifying something that a ton of stuff is using, at least I'm more aware of that fact.
[14:56] natefinch: yeah
[14:56] natefinch: i should try using it again
[14:57] TheMue, sorry, was in a call
[14:57] natefinch: i'd like someone to seriously look at speeding the tests up though. i'm not convinced that it requires wholesale refactoring.
[14:57] dimitern: no problem, found it
[14:57] TheMue, so a subnet without a space is forbidden by the model
[14:57] yep, seen it
[14:57] ;)
[14:58] dimitern: thx anyway
[14:58] rogpeppe: yeah, I think we guessed that if we just cleared the db rather than restarting mongo that it would go faster
[14:59] natefinch: a lot of the time is spent dialing mongo. providing a way to make a State from a mongo session might help a lot
[15:00] natefinch: looking at the state tests, even when the test itself only takes 0.01 seconds, the actual time taken is more like 0.25s - there's a quarter second overhead on every test
[15:00] rogpeppe: yeah, I think one thing that would help a lot is if gocheck's time output included test setup/teardown and if it could output suite times, to include setup/teardown
[15:00] natefinch: i suspect that if someone was given a week of time, they could make the tests run twice as fast.
[15:03] rogpeppe: that would seem to be a no brainer for someone to work on
[15:03] natefinch: +1
[15:11] perrito666: ping?
[15:11] cherylj: pong
[15:12] good afternoon
[15:12] perrito666: what's the story with https://github.com/juju/utils/pull/155 ?
[15:12] perrito666: was it in response to a bug?
[15:12] perrito666: good afternoon :) Sorry to jump past the pleasantries :)
[15:13] cherylj: np, I assume that broke something else?
[15:13] perrito666: yeah bug 1501786
[15:13] Bug #1501786: juju cannot provision precise instances: need a repository as argument
[15:14] mmpf, I presume something changed between versions of apt-add-repository then
[15:14] cherylj: it's a bug found and fixed in place, when adding a ppa it would break by saying something similar
[15:14] well actually when adding anything at all
[15:14] on trusty?
[15:14] vivid
[15:14] ah
[15:16] cherylj: you can revert it, it breaks nothing in production
[15:16] perrito666: k, will do
[15:16] cherylj: I'll devise a way to make that smarter
[15:17] perrito666: cool, thanks. I'll get on the bug, then :)
[15:22] rogpeppe: lgtm
[15:22] katco: tvm!
[15:23] rogpeppe: sorry for the confusion
[15:23] katco: np
[15:23] katco: in general on our side we always require two reviews anyway, FWIW
[15:28] frobware, i'm back
[15:28] dimitern, me too
[15:29] frobware, shall we use the same HO?
[15:29] dimitern, ah... yes. off-by-1 for me... omw
[15:29] :)
[15:36] mgz - are those openstack provider retry bits in the pkgs @ ppa:juju/devel ?
[15:36] the ones from tim yesterday?
[15:37] not yet.
[15:46] mgz - ack. before i go drop custom bins onto 23 slaves, do you know an eta of when that might land there?
[16:01] mgz: rackspace call is now if you're interested
[16:02] katco: omw
[16:13] RB isn't picking up my PRs. Can someone review https://github.com/juju/utils/pull/158 ?
[16:13] it's for the new blocker
[16:16] cherylj: ship it
[16:16] perrito666: thanks!
[16:44] katco: i wonder if you might want to take a look at this one - very similar to the previous one (but only test code this time) so should be quick: http://reviews.vapour.ws/r/2812/
[16:49] katco: btw, all tests are passing on my branch, just adding a few more to cover things I realized I missed... but adding tests is easy, so it'll be ready for a PR in a bit.
[16:52] natefinch: awesome!
[17:00] ericsnow: any chance of a review? http://reviews.vapour.ws/r/2812/
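
Editor's note: rogpeppe's observation that most of the per-test overhead is spent dialing mongo suggests dialing once per suite and resetting state between tests. A rough gocheck sketch under that assumption — the suite, database name and address are placeholders, and juju's real test fixtures work differently:

```go
package state_test

import (
	gc "gopkg.in/check.v1"
	"gopkg.in/mgo.v2"
)

// fastSuite dials mongod once for the whole suite, hands each test a copied
// session, and drops the scratch database between tests instead of
// restarting the server.
type fastSuite struct {
	root    *mgo.Session
	session *mgo.Session
}

func (s *fastSuite) SetUpSuite(c *gc.C) {
	var err error
	s.root, err = mgo.Dial("localhost:37017") // placeholder address
	c.Assert(err, gc.IsNil)
}

func (s *fastSuite) TearDownSuite(c *gc.C) {
	if s.root != nil {
		s.root.Close()
	}
}

func (s *fastSuite) SetUpTest(c *gc.C) {
	s.session = s.root.Copy()
}

func (s *fastSuite) TearDownTest(c *gc.C) {
	// Cheap reset between tests: drop the data, keep the connection.
	c.Assert(s.session.DB("juju-test").DropDatabase(), gc.IsNil)
	s.session.Close()
}
```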
[17:01] rogpeppe: katco has offered to handle at least some of the reviewing for me today
[17:01] ericsnow: ok, cool
[17:01] katco: would you mind taking a look at this one?
[17:01] ericsnow: rogpeppe: res, plan to
[17:01] yes
[17:01] ericsnow: i already had one review of katco; maybe asking for two in a day is pushing it :)
[17:01] s/of/off/
[17:01] katco: ta!
[17:02] rogpeppe: we're heads-down on some pre-Seattle work :)
[17:02] ericsnow: me too :)
[17:02] ericsnow: (that's what this is)
[17:02] rogpeppe: surprise, surprise lol
[17:07] hey rogpeppe i'll take a look
[17:07] cmars: ta!
[17:07] cmars: i'd much appreciate your thoughts anyway as you know the territory...
[17:45] Can I get another review? http://reviews.vapour.ws/r/2813/
[17:45] again, for the blocker
=== akhavr1 is now known as akhavr
[17:51] cherylj: juju code change looks fine to me, explain the utils revert?
[17:51] will it actually break something on vivid going back to the previous quoting?
[17:52] mgz: no, I tested that
[17:52] cherylj: okay, lgtm
[18:57] natefinch: running a little late
[18:58] katco: ok, just working on the PR for my branch
[18:58] natefinch: finish that up and ping me
[18:59] katco: ok
[19:06] alexisb: wow, creepy. Everyone in eco is like, happy with 1.25 other than the bugs filed already
[19:06] mbruzek actually demo'ed at nagios conf with 1.25
[19:06] aisrael has one bug he needs to reproduce before filing a bug
[19:07] but overall I got thumbs up from everyone in our daily calls
[19:07] jcastro: two possible bugs
[19:07] I'll get that done tomorrow
[19:15] katco: http://reviews.vapour.ws/r/2814/
[19:15] natefinch: rock tal
[19:22] katco: sorry it's big.
[19:22] katco: oops, crap, I see some code I left commented out
[19:23] wwitzel3! I got virtual MAAS working without sacrificing a chicken!
[19:23] natefinch: no worries, i can review while you fix :)
[19:23] cherylj: yeah, that part isn't actually required, I should probably stop doing it
[19:23] wwitzel3: haha. Thanks for the links
[19:24] cherylj: np, glad it worked
[19:48] katco: gah, the merge with a more up to date master broke some tests... looking into them now. I'm sure it's more of the same junk
[19:49] natefinch: k, i'll keep with the review
[19:55] katco: haha, no it was that code I had commented out... it actually should have been deleted, not uncommented. Oops.
[19:55] natefinch: fortunate :)
[20:02] ericsnow: wwitzel3: natefinch: want to meet up?
[20:02] katco: sure
[20:02] katco: sure
[20:02] katco: yep
[20:03] is there any way to stop systemd from spamming every single terminal I have?
[20:04] natefinch: yes, upgrade to mongo 3 ;)
[20:21] ping
[20:21] pong
[20:21] pong
[20:22] thumper: ping
[20:22] ericsnow: hey, otp just now
[20:23] thumper: join moonstone whenever you're ready? https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
[20:27] rick_h_: ping
[22:01] ericsnow: ping
[22:01] voidspace: hey :)
[22:02] ericsnow: hi
[22:02] ericsnow: you're OCR today I believe :-)
[22:02] if you have a chance:
[22:02] http://reviews.vapour.ws/r/2816/
[22:02] ericsnow: you got over jetlag yet?
[22:02] voidspace: I'll take a look
[22:02] voidspace: just barely
[22:03] voidspace: got hit by an awful sinus infection
[22:03] voidspace: feeling better and my body is finally back on track
[22:03] ericsnow: :-(
[22:03] good
[22:17] voidspace: LGTM
[22:18] ericsnow: thanks
[22:18] ericsnow: I wonder if 1.25 is blocked
[22:18] master has been blocked for days
[22:18] voidspace: good luck :/
[22:18] :-)
[22:46] ericsnow: did you work on storage?
[22:46] voidspace: not at all
[22:46] heh, can't blame you
[22:47] can't destroy an environment (without force) because a volume I didn't create doesn't exist
[22:47] it's a shared amazon account, so I think it's getting confused by other things in the account that it shouldn't be concerned with
[22:48] ERROR environment destruction failed: destroying storage: destroying volumes: querying volume: The volume 'vol-45ac83a7' does not exist. (InvalidVolume.NotFound), destroying "vol-cbc0ce22": The volume 'vol-cbc0ce22' does not exist. (InvalidVolume.NotFound)
[22:52] ah well, I'll look into it tomorrow
[22:52] ericsnow: g'night
[22:52] voidspace: have a good one
[23:01] rogpeppe: sorry, I ignored it because it had a shipit already
[23:28] my kingdom for a big machine where to compile stuff
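
Editor's note: the destroy failure voidspace pastes at 22:48 is the provider aborting on InvalidVolume.NotFound while deleting volumes it believes it owns. One plausible shape for a fix is to treat "not found" as already destroyed; the sketch below passes the provider calls in as functions so it stays self-contained, and it is not the actual juju ec2 provider code.

```go
package ec2

// destroyVolumes deletes each volume, tolerating ones that have already
// disappeared. destroy would be the EC2 DeleteVolume call and isNotFound a
// check for the "InvalidVolume.NotFound" error code seen in the paste above.
func destroyVolumes(volumeIDs []string, destroy func(string) error, isNotFound func(error) bool) []error {
	errs := make([]error, len(volumeIDs))
	for i, id := range volumeIDs {
		err := destroy(id)
		if err != nil && isNotFound(err) {
			// Already gone (or never ours): don't fail environment
			// destruction over a volume that no longer exists.
			err = nil
		}
		errs[i] = err
	}
	return errs
}
```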