[00:25] Bug #1569097 opened: jujud fails to start with "could not find a suitable binary for "0.0/mmapv1""
[00:30] wallyworld: could you or someone from your team take bug 1569097?
[00:30] Bug #1569097: jujud fails to start with "could not find a suitable binary for "0.0/mmapv1""
[00:30] ok
[00:30] thanks!
[00:31] cherylj: part of the issue is the mongo stuff got merged too soon, so we'll need to look into how to deal with that. i'm still ramping up on the issues
[00:31] cherylj: i also added a bug to the board - HA doesn't use bootstrap constraints
[00:31] wallyworld_: I guess that was a miscommunication - we thought it was safe to merge because it had a fallback :/
[00:32] cherylj: my PR didn't have a fallback - it expected mongo 3.2 to be in xenial
[00:33] the above bug happened on trusty, if it makes a difference
[00:33] anyways, all good, we'll fix
[00:33] on trusty it was supposed to use mongo 2.4 stuff, hmmm, i'll need to check
[00:34] i bootstrapped yesterday without issues, but that may have been on xenial, i'll need to check
[00:34] i wonder if wily is also broken
[00:35] cherylj: good news though - it's in the queue, so progress :-) https://launchpad.net/ubuntu/xenial/+queue
[00:35] yay!
[00:35] brb
[00:44] cherylj: i reckon bug 1534627 should be high rather than medium, since it quite adversely affects stakeholder deployments
[00:44] Bug #1534627: Destroyed models still show up in list-models <2.0-count>
[00:46] wallyworld_: +1 and the change on it is backward incompatible
[00:46] yep, that too
[00:46] Bug #1569106 opened: juju deploy --to lxd:0 does not work
[01:07] wallyworld_: hey, ruthere?
[01:08] maybe
[01:08] depends who's asking
[01:08] I would make a taxes joke, but I have no idea what the Aussie IRS is called
[01:08] ATO
[01:09] australian tax office
[01:09] will it kill you? like everything in australia?
[01:09] it can
[01:09] feeding it money helps
[01:10] so lemme know when you can HO
[01:11] anytime
[01:11] k standup room?
[01:11] ok
[01:25] Bug #1569109 opened: Juju makes wrong network configuration when adding physical machine
[01:36] evening folks
[01:43] good evening all, see you in the morning
[01:47] bugger...
[01:48] * thumper sighs
[01:48] shelving all current work to pop the stack and fix other bits.
[01:55] cherylj: is there something I should be working on to help unblock master?
[01:56] natefinch: want to take a look at https://bugs.launchpad.net/juju-core/+bug/1564791 ?
[01:56] Bug #1564791: 2.0-beta3: LXD provider, jujud architecture mismatch
[01:56] looks like an interesting one
[01:56] not really ;) But I will :)
[01:57] cherylj: actually, it gets less bad toward the end of the bug :)
[01:59] Bug #1569120 opened: wrong lxc bridge still used in juju beta4
[02:07] axw: got a sec?
[02:07] cherylj: yup?
[02:07] axw: I'm looking at bug 1569024
[02:07] Bug #1569024: Region names for rackspace should accept caps and lowercase
[02:07] and was thinking that for public clouds, we could strings.ToLower the region names
[02:07] that way we don't mess with any user-defined cloud regions
[02:08] and maintain compatibility for rax
[02:08] cherylj: gah, yeah, we should and I meant to do that
[02:08] cherylj: on input, lower case
[02:08] but just for public clouds, yes?
[02:08] or for all?
[02:09] cherylj: hrm. well, maybe not lowercase when we pass through, just compare case-insensitively
[02:09] * axw looks at the code
[02:10] ah, that works too
[02:10] strings.EqualFold()
[02:10] neato
[02:11] cherylj: I *think* it's just a matter of changing "getRegion" in cmd/juju/commands/bootstrap.go
[02:11] where we check region.Name ==
[02:11] cherylj: also the set-default-region command
[02:12] axw: yeah, I had some changes in there already, just wanted to verify what we should do
[02:12] axw: so don't change the region, just do a case-insensitive comparison?
[02:12] cherylj: I think that's safest, yeah
[02:12] axw: sounds good, thanks
[02:24] natefinch: I have access to the arm hardware for that lxd bug. Need me to forward it your way?
[02:26] cherylj: yes please
[02:27] cherylj: though it probably will be a matter of looking at the code and then thinking real hard.
[02:27] break out the hamster
[02:27] hey rcj, slumming it with the juju devs?
[02:37] cherylj: my brain refuses to read arm64 ... every time it translates it into amd64, and I have to do a double take to make sure it says the right thing
[02:38] natefinch: oh me too
[03:19] menn0: when you're importing a model, will it be visible during import? will it be mutable while importing?
[03:19] (import as in migration)
[03:20] axw: there's a migration-mode flag which will be set to "importing"
[03:20] axw: that blocks critical txns as well as preventing API logins for it
[03:20] axw: the former has been done but not the latter
[03:20] menn0: ok, cool. but you'll still be able to see it in list-models?
[03:21] axw: I guess so, but we could make it so they didn't show up
[03:21] menn0: I'm thinking it might make sense to have a status entry for models
[03:21] axw: that could be done
[03:21] available, importing, destroying, archived
[03:21] something like that
[03:21] axw: sounds useful
[03:22] menn0: we need to be able to filter out Dead models in list-models, but I think we should show status of Alive vs. Dying
[03:22] but a more descriptive status would be better
[03:22] I'll look at adding that
[03:24] menn0: speaking of migrations... I added a field to charmDoc and tried to figure out if there was anything I needed to do for migration, but couldn't find code migrating charm stuff. What's up with charms and migration?
[03:26] natefinch: migration of charms and tools is still in progress... there is code but it needs reworking and isn't plugged in to the process yet
[03:27] natefinch: for most collections there are tests that fail if fields are added
[03:27] natefinch: but probably not for charms yet
[03:27] natefinch: so just add your field for now and email thumper and me about it just to make sure
[03:27] menn0: ok, cool, will do
[03:28] axw: you thinking this status would replace the migration-mode field?
[03:28] axw: or is the status a virtual concept only for the status API?
[03:28] menn0: probably not, it's just for human consumption
[03:28] axw: kk
[03:29] axw: you know that there's already an environment-status (hopefully model-status) section which can optionally appear in the status output
[03:29] axw: perrito666 added it to support reporting that there's a tool upgrade available
[03:30] axw: model migration status will appear there too
[03:30] menn0: ah ok, I'll check that out - thanks
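For reference, a minimal sketch of the case-insensitive region match axw and cherylj settle on above (02:09–02:12). The findRegion helper is hypothetical — the real change goes into "getRegion" in cmd/juju/commands/bootstrap.go and the set-default-region command — but the strings.EqualFold comparison is the point:

```go
package main

import (
	"fmt"
	"strings"
)

// findRegion matches a user-supplied region name against the cloud's
// region list case-insensitively, returning the canonical casing from
// the cloud definition rather than whatever the user typed.
// (Hypothetical helper, for illustration only.)
func findRegion(regions []string, name string) (string, bool) {
	for _, r := range regions {
		// EqualFold reports whether two strings are equal under
		// Unicode case-folding, so "dfw" matches "DFW".
		if strings.EqualFold(r, name) {
			return r, true
		}
	}
	return "", false
}

func main() {
	rackspaceRegions := []string{"DFW", "ORD", "IAD"}
	if r, ok := findRegion(rackspaceRegions, "dfw"); ok {
		fmt.Println("bootstrapping in", r) // prints "bootstrapping in DFW"
	}
}
```

Comparing rather than lowercasing on the way through is what keeps user-defined cloud regions untouched while still accepting any casing for the rackspace names.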
[03:37] cherylj, what did I do?
[03:38] cherylj, I'm just here to remind everyone to use the 'daily' stream when running with the xenial series until it ships, otherwise you have a very stale experience.
[03:39] heh
[03:39] I mean, that's not why I'm here, but I'll make that public service announcement whenever the opportunity presents itself.
[03:39] this has been a CPC public service announcement
[03:39] cloud images, best consumed fresh
[03:40] also, I'm not in charge of any actual branding efforts
[03:45] axw: Can you do a quick review? http://reviews.vapour.ws/r/4528/
[03:45] cherylj: looking
[03:46] natefinch: how's that arm bug coming? (I'm curious because it's such a weird bug)
[03:46] it's not a nag :)
[03:47] cherylj: looks good, but can you please do set-default-region while you're there?
[03:47] gah, I forgot you said that
[03:47] yes
[03:47] cherylj: thanks :)
[04:19] axw: can you take another look? http://reviews.vapour.ws/r/4528/
[04:19] I had to do it a bit differently for set-default-region
[04:20] so that we wrote out what was in the cloud region list, not what the user specified
[04:30] cherylj, thumper: what was the decision on where to land stuff while master is blocked?
[04:30] bleh, I haven't done that.
[04:30] * thumper waits...
[04:30] cherylj: what was the decision?
[04:31] it was a back and forth for a while, but the general consensus was "yeah, sure"
[04:32] cherylj: sorry was afk, looking
[04:33] menn0, thumper, since it's already tomorrow, I can go either way on a bug branch.
[04:33] if either one of you wants to create one, go for it
[04:33] I'm just waiting to land this rackspace fix so I can go to bed
[04:33] cherylj: LGTM, thank you
[04:33] sorry for keeping you from bed :(
[04:33] it happens :)
[04:33] thanks for the review!
[04:33] cherylj: was it acceptable to have a release branch?
[04:34] thumper: I'd rather not do that at this point because I don't know if CI would run on it tonight (until the QA team wakes up)
[04:34] ah... good point
[04:40] thumper, cherylj: let's make a "next" branch
[04:41] ack
[04:41] next branch created
[04:41] the compression ratio achieved by lrzip is amazing but geez it's slow
[04:42] * menn0 has been waiting for almost 2 hours for a file to decompress
[04:43] menn0: two *hours*?
[04:43] menn0: seems unlikely the extra compression saved you two hours of download time...
[04:46] mwhudson: I agree but that's how the file came
[04:46] it's a 365MB file that's currently up to 11GB and climbing
[04:46] lrzip is even using every core and it's still taking this long
[04:57] wow
[05:05] mwhudson, thumper: just finished... a little over 2 hours. 365 MB to 14GB
[05:08] what was in that giant file?
[05:09] menn0: that is quite a ratio
[05:11] davecheney: DB dump from a broken system
[05:14] axw: if you get a chance, here's a small mongo ha fix for beta4 http://reviews.vapour.ws/r/4529/
[05:14] wallyworld_: ok, a little later, trying not to context switch right now
[05:15] (unless it's urgent)
[05:15] tis fine, whenever suits
[05:15] nah, can wait
[05:15] so long as it lands sometime today so CI can run
[05:16] i could bug menn0 :-) if he is waiting for lrzip
[05:17] m.Server = httptest.NewServer(nil)
[05:17] c.Assert(m.Server, gc.NotNil)
[05:17] m.oldHandler = m.Server.Config.Handler
[05:17] create a new server, then save the value of its handler ...
[05:17] then restore the handler in the tear down
[05:17] then the new test overwrites the value we just restored ...
[05:17] wat
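The teardown pattern being grumbled about at 05:17 is easier to see laid out flat. A self-contained sketch, not the actual suite — plain testing package instead of the gocheck assertions, names illustrative — showing why the save/restore is pointless when every test overwrites the handler anyway:

```go
package example

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHandlerSaveRestore(t *testing.T) {
	// SetUpTest: create a new server, then save the value of its handler...
	server := httptest.NewServer(nil)
	defer server.Close()
	oldHandler := server.Config.Handler

	// TearDownTest: ...then restore the handler in the teardown.
	defer func() { server.Config.Handler = oldHandler }()

	// Test body: ...then the new test overwrites the value we just
	// restored, which is the "wat".
	server.Config.Handler = http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK)
		})

	if _, err := http.Get(server.URL); err != nil {
		t.Fatal(err)
	}
}
```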
[05:24] wallyworld_: i'm waiting for a long mgopurge run
[05:24] wallyworld_: i'll take a look
[05:24] ty
[05:25] is there a customer issue?
[05:38] wallyworld_: ship it
[05:38] menn0: yay, tyvm
[05:39] wallyworld_: even though you're deleting a lot of my turd polishing :)
[05:39] menn0: sorry :-)
[05:39] less turds left now
[05:41] wallyworld_: actually hang on
[05:41] * wallyworld_ hangs
[05:41] wallyworld_: can't you be a bit more aggressive about test removal?
[05:41] possibly
[05:41] i thought about removing the whole fakeensure stuff
[05:41] wallyworld_: some of those asserts you've removed were the point of those tests so I suspect the whole test can go
[05:41] that's what I was thinking too
[05:42] yeah, had the same thought
[05:42] if it's not being used
[05:42] i was trying to be a bit conservative
[05:42] i'll take another look
[05:42] wallyworld_: it's really just TestMachineAgentUpgradeMongo
[05:42] and perhaps the fakeensuremongo
[05:42] yep, i convinced myself that test remained useful
[05:43] but seems not
[05:50] menn0: yeah, a lot of extra code can just be deleted
[05:50] wallyworld_: excellent
[05:51] peergroup is having a big haircut
[05:51] peergrouper
[06:15] wallyworld_: before you delete all that...
[06:15] already gone :-)
[06:15] is it still possible to promote machines to controllers with your changes?
[06:16] axw: you mean ones which are not yet has-vote
[06:16] wallyworld_: I mean "enable-ha --to 0,1,2"
[06:16] where we transform a non-state-server into a state-server
[06:16] i'll double check, i didn't test that explicitly
[07:08] whew it finally worked!
[07:31] Bug #1569196 opened: enable-ha with placement fails due to invalid JobManageNetworking
[08:04] morning everyone
[08:05] back to the routine of the school-run this morning
[08:05] *sigh*
[08:13] morning voidspace
[08:23] dimitern: so thumper broke my code *again* overnight :-)
[08:23] voidspace: oh yeah? :)
[08:24] dimitern: see here: https://docs.google.com/document/d/1YmbdGpP7Oy5uglOwqbRXf1k_7siaxfEpoWkshk5_oPo/edit?ts=56fb30ca#
[08:24] dimitern: basically you were right about not_networks so he changed the allocate machine args again
[08:24] dimitern: and I was just updating the code to work with master as it was yesterday :-)
[08:24] it's not a big change - so not difficult
[08:25] voidspace: cool :)
[08:36] voidspace: it is my mission in life to make your mornings miserable
[08:37] however, dimitern will like to hear that he was right
[08:37] thumper: ah, that explains why you joined our standups!
[08:37] thumper: :-)
[08:37] thumper: hey, so gomaasapi now has its own dependencies.tsv
[08:38] voidspace: if you want to jump in the hangout now, we can chat - that way I don't have to work later
[08:38] voidspace: yeah, needed for the merge bot
[08:38] thumper: sure
[08:38] thumper: right, but the versions of its dependencies are different than the juju ones
[08:38] thumper: I'll join the hangout
[08:38] babbageclunk: you too?
[08:38] voidspace: shouldn't be off by much
[08:38] babbageclunk: early hangout
[08:38] voidspace: probably just testing
[08:38] all of them are now different I think
[08:38] voidspace: sure
[08:39] thumper: but everything still works
[08:39] we just need to be careful
[08:39] * thumper nods
[08:49] thumper: I told you ;)
[09:10] morning
[09:10] TheMue: \o
[09:15] dimitern: under heavy fire with Juju 2 and also 16.04?
[09:17] TheMue: oh yeah :)
[09:18] Hi, I wonder if somebody could please point me to the place for a quick question on BigData charms?
[09:18] dimitern: how is J2 different from J1.*? were there so many incompatible changes that the major release number had to change?
[09:20] TheMue: a lot has changed, and some things in an incompatible way, check the release notes :)
[09:23] dimitern: will do. still very interested in juju and always trying to place it in projects or give interested people a hint. many don't know about it.
[09:35] fwereade_: the branch I put up is for 2.0, in which compatibility breaks are many and varied
[09:44] Anyone know why building the next branch is failing?
[09:59] axw, oops, fair enough, I do default to unthinkingly-maintain-compat
[09:59] frobware: managed to figure it out - erc-email-userid needs to match my nick for i.canonical.c to accept it along with the server password
[09:59] fwereade_: and I thank you for it :)
[09:59] dimitern: no turning back now :)
[10:00] frobware: indeed :)
[10:00] babbageclunk: guessing... did you run godeps -u ...
[10:01] frobware: not locally - in the github-merge-juju Jenkins job.
[10:01] frobware: http://juju-ci.vapour.ws:8080/job/github-merge-juju/7313/console
[10:02] frobware: looks like lots of provider/lxd tests
[10:02] babbageclunk: fwiw I see the same errors even after upgrading to xenial when running make check on master tip
[10:02] fwereade_: responded to your other questions on RB, will look again tomorrow. thanks for the review
[10:03] if anything it got worse - I only saw a couple of failures yesterday on wily
[10:03] frobware: some of the failing builds under that are against master, some against next.
[10:04] frobware, dimitern: I tried running provider/lxd tests for next locally and I don't see the failures (although I didn't run the full test suite).
[10:05] dimitern: I'll try running make check
[10:05] babbageclunk: I'll try next now to see if it's any better
[10:06] but first I need to reboot..
[10:18] hi all
[10:23] menn0: hi!
[10:23] babbageclunk: how's it?
[10:23] Hey, I saw a build of yours failed with lots of lxd provider failures.
[10:24] Did you work out why? A branch of mine had that just now too.
[10:24] babbageclunk: everyone's builds seem to be failing like that. I wonder if there's a problem with the test runner hosts.
[10:24] any QA people about?
[10:24] frobware: launchpad seems to have gone read-only, so I can't put this in the bridged bond bug right now. The pre-up/post-down thing is a red herring. Even if you include them cloudinit hangs. Rebooting always works and cloudinit seems to finish happily.
[10:25] frobware: and I really need to get the proxy bug fix landed, so pausing on this for now.
[10:25] dooferlad: ack
[10:26] frobware: ah, bug just updated. Yay web services.
[10:26] dooferlad: really need to conclude the investigation of replacing ENI and rebooting...
[10:28] menn0: Running the full test suite locally (on juju/next) I get the same failures
[10:30] babbageclunk: interesting... so not the build hosts then
[10:31] babbageclunk: I'm just finishing something else up and then I'll try on my machine.
[10:31] menn0: takes ages though so I haven't run the tests against master as well yet - I saw that cherylj has some failing runs against master with the same errors.
[10:34] dimitern: whoa! that's subtle...
[10:35] dimitern: we currently have 00-juju.cfg and eth0.cfg
[10:35] dimitern: which would/could/should give us 2 addresses on eth0
[10:36] dimitern: but because we specify a mac addr, the ifup via DHCP on eth0.cfg gives us the same IP addr
[10:37] dimitern: ok, that explains it (for me at least) :)
[10:37] babbageclunk: if you run just one of the tests that's failing in CI does it fail then? (that shouldn't take too long)
[10:38] menn0: Yeah, it turns out running just ./provider/lxd fails.
[10:38] frobware: interesting
[10:39] frobware: and lucky I guess :)
[10:39] menn0: But now I can't find a version where it doesn't fail.
[10:39] * menn0 runs those tests
[10:39] babbageclunk: they pass for me
[10:39] dimitern: I was trying to understand the behaviour. If I try this outside of juju the ifup on another foo.cfg (which also specifies eth0) will just add another IP addr to eth0.
[10:40] babbageclunk: yep that should do it (as long as you have mongodb installed)
[10:40] babbageclunk: and I guess you probably need to have lxd installed for some tests too
[10:40] menn0, babbageclunk: isn't the underlying problem related to the configuration of lxdbr0?
[10:40] menn0: ok, so it'll rebuild everything.
[10:40] or lack thereof
[10:41] frobware: sure... but why is it suddenly happening in CI and on babbageclunk's machine?
[10:41] menn0, frobware: ok - I installed lxd last friday.
[10:41] babbageclunk: what does "lxc version" show?
[10:41] 0.20
[10:41] menn0: I'm on wily
[10:42] babbageclunk: I'm on vivid but I'm running 2.0.0.rc1
[10:42] so I see exactly the same test failures on next as on master
[10:42] Maybe I should upgrade to that.
[10:42] dimitern: yes, all recent merge attempts have had the same lxd/lxcbr0 problems
[10:43] menn0: potentially that's also the problem on the build machine(s)
[10:43] babbageclunk: there's a PPA for the current lxd from the lxd/lxc team
[10:43] menn0, babbageclunk: to repro this just 'cd provider/lxd; go test'?
[10:44] frobware: I believe so
[10:44] frobware: yup - might need a godeps in there too
[10:44] * menn0 prefers "go test ./provider/lxd" but whatever
[10:45] frobware: same thing with running only provider/lxd tests
[10:45] menn0, babbageclunk: ok && not terribly helpful but OK: 77 passed, 1 skipped
[10:45] menn0, babbageclunk: however, I am _only_ at dd9828ec7003d1a6ec1fc4dbcb7e6d17467a21f0
[10:46] menn0, frobware - ok, I'm going to add the ppa and upgrade.
[10:46] babbageclunk, menn0: or go back to dd9828ec and try there... it may be something more recent in master
[10:46] menn0, frobware: I guess if that fixes it, it's an indication that someone should do the same on build hosts.
[10:47] frobware: I suspect you did run `sudo dpkg-reconfigure -p medium lxd` as suggested by the tests?
[10:47] otherwise how are you not seeing the failures..
[10:47] dimitern: nope, not medium. but I did reconfigure some time last week
[10:47] dimitern: I can't repro the problem, and I haven't run dpkg-reconfigure in a long time
[10:48] menn0: it was probably last tue/wed when I did the dpkg-reconfigure
[10:48] I haven't since I installed lxd (about 2 months ago?)
[10:48] * dimitern *facepalm*
[10:48] babbageclunk: my /etc/default/lxd-bridge config: http://pastebin.ubuntu.com/15784102/
[10:49] anyone have the ppa handy?
[10:49] I remember what I did - changed /e/default/lxd-bridge to not have IPv4 addresses as it was messing up my lxd multi-nic testing
[10:50] * frobware would like to kickstart/jumpstart all his machines every morning to avoid state...
[10:50] dimitern: but why is this also happening on the build hosts?
[10:51] menn0: which is why I was suggesting first going back to my current rev ^^ to see if it's just recent churn in master.
[10:53] frobware: I'm on that rev - it's upstream/next and upstream/master (since no one's been able to land anything)
[10:53] oooohhh. I am at that rev. apologies...
[10:54] babbageclunk: my lxd package is:
[10:54] menn0: not sure - perhaps when /e/d/lxd-bridge was introduced it did not have IPv4 config and CI machines haven't been updated since?
[10:54] $ apt-cache madison lxd
[10:54] lxd | 2.0.0-0ubuntu2 | http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
[10:54] now all provider/lxd tests pass
[10:54] dimitern: to confirm, you're running xenial?
[10:54] frobware: yep
[10:56] however, p/lxd tests should NOT fail if anything like that on the machine happens - they should be properly isolated
[10:58] Huh. lxc version still says 0.20, but the tests pass for me now.
[10:58] babbageclunk: you're on wily?
[10:59] dimitern: yup
[11:00] babbageclunk: so I needed to add `deb http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu wily main` to /e/a/srcs.list to get lxd to work on wily
[11:01] dimitern: yeah - I did the same, but via add-apt-repository for ppa:ubuntu-lxc/lxd-stable
[11:01] babbageclunk: yeah - same thing, and then a-g update && a-g dist-upgrade
[11:02] dist-upgrade if you already installed lxd I mean
[11:03] ok, so how do we get the tests passing on the build machines?
[11:10] ooh, down to two failures
[11:13] mgz: ^^
[11:14] mgz: istm the dpkg configure for lxd might have been skipped with the noninteractive frontend
[11:14] mgz: it will be useful to keep the machine around when the merge job fails to see what's going on
[11:15] dimitern: I'm running apt-get under ansible, so maybe it's being run noninteractively as well?
[11:16] babbageclunk: well, what's in /etc/default/lxd-bridge ?
[11:17] dimitern: http://pastebin.ubuntu.com/15784982/
[11:17] dimitern: But the tests are passing now that I've upgraded.
[11:18] dimitern: so maybe it was just that I added the ppa.
=== babbageclunk is now known as babbageclunch
[11:18] babbageclunk: yeah, it looks like the tests should still fail (?!) mine was very similar before I fixed it
[11:21] babbageclunch: looks like most of the config is empty
=== babbageclunch is now known as babbageclunk
[11:35] frobware: dimitern: babbageclunk http://reviews.vapour.ws/r/4535/
[11:45] frobware: dimitern: babbageclunk http://reviews.vapour.ws/r/4535/
[11:45] * voidspace lurches to lunch
[11:48] voidspace: looking
[11:51] dimitern: thanks
[11:54] frobware, dimitern, voidspace: http://reviews.vapour.ws/r/4536/
[11:54] voidspace: looking at yours now.
[11:54] babbageclunk: dimitern just reviewed it, but thanks
[11:54] voidspace: yeah, I just saw that it's merging
[11:55] it will fail
[11:55] dammit
[11:55] I missed off some test fixes - didn't push them
[11:55] voidspace: probably should have a look anyway - I'm OCR tomorrow.
[11:55] ooh
[11:55] voidspace: Well, it was going to fail due to the lxd thing anyway, right? ;)
[11:55] hah
[11:56] babbageclunk: are all merges backed up on that?
[11:57] voidspace: there are 9 failures in a row on github-merge-juju that I think are provider/lxd ones.
[11:59] babbageclunk: nice :-)
[11:59] right
[11:59] * voidspace really goes on lunch
[12:55] cherylj: a small one for a ha bug i found testing ha http://reviews.vapour.ws/r/4537/
[12:59] voidspace: that AllocateMachine change is biting me too - I'll use a version that has Link.IPAddress() but not the AllocateMachine change until you've updated stuff.
[13:22] wallyworld_: we've been having merge jobs fail on Jenkins because of LXD provider tests - do you know about that?
[13:22] i saw my job fail, but don't know what's wrong with lxd
[13:23] babbageclunk: wallyworld_, cherylj, and QA are looking into it, I think
[13:23] but i did see a bug where lxd behaves differently on trusty vs xenial with the bridge
[13:23] i strongly suspect an upstream lxd issue
[13:23] rick_h_: ok, thanks
[13:24] babbageclunk: bug 1569120 may be related / relevant
[13:24] Bug #1569120: wrong lxc bridge still used in juju beta4
[13:25] rick_h_, wallyworld_: if it helps, I was getting the same failures on my machine (wily) until I added the PPA for lxd-stable and upgraded.
[13:25] we're using daily xenial images for the merge bot (which we have to, as the last one has too old an lxc)
[13:26] and there's a new lxd as of 2016-04-11 that's probably in today's image
[13:26] with various changes, bug 1548489
[13:26] Bug #1548489: [FFe] Let's get LXD 2.0 final in Xenial
[13:26] so it's likely we just got broken again
[13:26] mgz: ah, ok - thanks
[13:39] bug #1569361 makes it hard to iterate on fixing container bugs...
[13:39] Bug #1569361: LXD containers fail to upgrade because the bridge config changes to a different IP address
[13:43] frobware: :(
[13:43] * perrito666 gets budgeted for his next home internet... U$D450/5M
[13:43] Bug #1569361 opened: LXD containers fail to upgrade because the bridge config changes to a different IP address
[13:57] so... where are we actually at with lxd?
[13:57] our master doesn't work with their 2.0 - plus various other bugs?
[14:00] babbageclunk: or you can merge my branch
[14:00] babbageclunk: https://github.com/juju/juju/pull/5094/files
[14:01] babbageclunk: yeah, but this was pretty easy and likely to require less explanation at review time.
[14:02] babbageclunk: cool, that branch is ready to land though
[14:02] morning all
[14:02] voidspace: true - I'll need to merge it in eventually.
[14:02] katco: o/
[14:10]
[14:10] \
[14:10] wallyworld_: your arm is falling off
[14:11] pressed wrong key
[14:16] katco: rogpeppe1 is proposing a small API change in csclient.Client which would require a likewise small (isolated) change in core
[14:16] katco: any objections?
[14:16] ericsnow: yeah saw the email... cherylj what would a change to core look like at this point? would it still go into rc1?
[14:17] katco: it pulls in an updated dep, right?
[14:17] * ericsnow ignores wallyworld_ since he can't possibly be coherent at this point
[14:17] cherylj: and a small change to core
[14:18] katco: I'm going to say that should go into rc1. (not what we're trying to release this week)
[14:18] cherylj: that's fine
[14:18] ericsnow: ok, no objections
[14:18] so put it in the next branch that thumper created
[14:18] cherylj: FYI, it *is* a bug
[14:18] katco: k
[14:18] yes, I know
[14:18] rogpeppe1: ^^^
[14:18] ericsnow: is there a bug opened? the email I saw didn't mention one?
[14:18] cherylj: not yet, I expect
[14:19] cherylj: no, i didn't file a bug yet. will do.
[14:19] rogpeppe1: thanks
[14:19] thanks rogpeppe1
[14:19] and thanks for noticing the bug :)
[14:19] ericsnow: fix lands here: https://github.com/juju/juju/tree/next
[14:19] cherylj: not sure if i should file the bug against juju-core or charmrepo/csclient
[14:20] rogpeppe1: you can target to both
[14:20] cherylj: interesting. how would I do that?
[14:20] katco: is master for 2.0.1 now?
[14:21] rogpeppe1: "Also affects project" [14:21] rogpeppe1: Use "also affects project" [14:21] ericsnow: that is my understanding. cherylj, correct? [14:21] ericsnow: no, master is for beta4. We didn't branch for the release last night because I didn't know if the branch would've been picked up for testing overnight [14:22] (but it didn't matter anyway because no merge jobs passed because of lxd) [14:22] cherylj: so the fix for rogpeppe1's bug should go in master or next? [14:22] cherylj: what is the "next" branch for? [14:22] cherylj: do i have to do that after submitting the bug? i don't see that option in the "new bug" page. [14:22] next is for rc1 [14:22] when we release beta4, we will merge next into master [14:23] cherylj: ah, okay [14:23] it's 100% that someone is going to screw up targetting here [14:23] that seems... backwards [14:23] rogpeppe1: yes, after you create the bug you can target to a different project [14:24] katco: yes, I know, but we did it that way because we wanted to make sure master / whatever we're going to release got a CI run overnight and I didn't know if it would pick up a new branch [14:24] and it was way past EOD for the qa team [14:24] cherylj: our tooling T.T [14:24] cherylj: it doesn't like the fact that there's no launchpad project for charmrepo (it's in github) [14:24] rogpeppe1: then just target to juju-core [14:24] cherylj: i've created the bug. https://bugs.launchpad.net/juju-core/+bug/1569386 [14:25] Bug #1569386: list resources will not work correctly [14:25] thanks! [14:25] guess I should create a 2.0 rc1 milestone [14:29] hey natefinch, any luck with bug 1564791? [14:29] Bug #1564791: 2.0-beta3: LXD provider, jujud architecture mismatch [14:31] cherylj: it's kind of a twisty maze of code getting passed around, but I have some suspicious lines I'm looking at. e.g. if result.Arch == "" {result.Arch = "amd64"} [14:46] Bug #1569386 opened: list resources will not work correctly [14:57] ericsnow: just had a good idea about the bug 3 lines up... I think this is another case of needing to make our "local" provider special. LXD has to always default to the arch of the host machine, but we have provider code that says that if you don't specify the arch, we default to amd64, which obviously fails to run on other arches. I think we never see this in development, because we always use --upload-tools [14:57] Bug #3: Custom information for each translation team [14:58] natefinch: yep [15:02] ericsnow: natefinch: standup time [15:13] cherylj: is there a card for https://bugs.launchpad.net/juju-core/+bug/1564791 [15:13] Bug #1564791: 2.0-beta3: LXD provider, jujud architecture mismatch [15:13] natefinch: not yet, I can make one for you [15:13] anybody else see bootstrap failures related to mongod not found in PATH? [15:14] I have bootstrapped quite a few times today but has failed twice in a row now [15:14] see bug #1569408 [15:14] cherylj: thanks [15:14] Bug #1569408: Failed to bootstrap because exec: "mongod": executable file not found in $PATH [15:18] cherylj: can redir land help text changes into the next branch? [15:19] katco: yes [15:19] cherylj: k ta [15:20] Bug #1569408 opened: Failed to bootstrap because exec: "mongod": executable file not found in $PATH [15:21] :) [15:25] redir: what's your launchpad id? 
[15:34] reedobrien
[15:34] katco: ^
[15:34] redir: ty
[15:50] now everything is broken
[15:53] maas cannot bootstrap due to missing mongod, aws can't add lxc containers as cloud-init sets a non-present locale en_US.UTF-8
[15:54] and the locale is missing because apt-get update & upgrade are apparently required for xenial now
[15:56] dimitern, voidspace, tych0: PTAL @ https://github.com/juju/juju/pull/5099
[15:57] dimitern: I went back to trusty and added backports to sources.list -- working there. \o/
[15:58] frobware: looking
[15:58] frobware: I managed to get xenial to work as well by doing a-g up & upg & a-g install language-pack-en-base
[15:59] dimitern: I can no longer bootstrap with xenial...
[15:59] frobware: on maas, I have the same issue - but I'm using AWS now to verify dropping address-allocation ff does not break something there
[16:00] dimitern: gotcha
[16:01] lol, I now have 3 unkillable lxd environments
[16:02] frobware: LGTM
[16:02] dimitern: ty
[16:03] uh.... anyone know what this means?
[16:03] $ juju bootstrap local-Apr-12 lxd --upload-tools
[16:03] ERROR invalid config: no addresses match
[16:04] throw some debug there?
[16:04] natefinch: try --debug?
[16:04] oh, maybe this is the lxd problem everyone's been having, that I avoided by just not using lxd for a while :/
[16:04] 2016-04-12 16:04:14 DEBUG juju.cmd.juju.commands bootstrap.go:365 preparing controller with config: map[type:lxd name:admin uuid:0a58e9ef-099f-4cf8-8a48-2772cf8b5c05 controller-uuid:0a58e9ef-099f-4cf8-8a48-2772cf8b5c05]
[16:04] 2016-04-12 16:04:14 ERROR cmd supercommand.go:448 invalid config: no addresses match
[16:10] that's a new issue to me
[16:11] natefinch: there's a good thread on that with rogpeppe1 and redir
[16:12] natefinch: search email for that error message
[16:14] natefinch: i think the underlying cause is this: https://bugs.launchpad.net/juju-core/+bug/1567952
[16:14] Bug #1567952: container/lxd: TestDetectSubnetLocal fails with link/none
[16:16] natefinch: you need to do the dpkg-reconfigure to set up the bridge, then service lxd restart
[16:17] perrito666: do you have a minute?
[16:19] cherylj: is this something that'll get fixed? Or is this something special because we ran old versions of lxd, or?
[16:19] natefinch: you should only have to do it once
[16:20] but it's something that right now, you have to do every time for newly provisioned instances
[16:20] cherylj: ew
[16:20] yeah
[16:23] cherylj: omg, this is so much worse than I expected
[16:23] hahaha
[16:23] seriously, an order of magnitude
[16:23] yeah
[16:23] it's *awesome*
[16:23] I hope the only thing I have to change the default for is the name of the bridge
[16:24] cherylj: but at least I can run --upgrade-juju now with LXD containers... makes debugging a little quicker.
[16:25] and lol still fails with the same error message
[16:27] * natefinch reboots just in case
[16:27] natefinch, for master I was able to get it working by running lxd init and configuring the bridge and network that way
[16:27] alexisb: ok
[16:28] alexisb: oh, it doesn't want me to do that since I have existing containers, let me dump those
[16:28] is there a way to replace the tools that are in state?
[16:28] to deploy a machine with freshly built tools?
[16:28] natefinch, yep you have to dump those, then I also removed my lxc bridge
[16:28] not sure if that step was necessary, but that was my process
[16:32] still thinks I have containers around, even though list says there aren't. Sigh. Gotta run to lunch, will pick this up after.
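Back to the architecture-mismatch bug natefinch describes at 14:31 and 14:57: the suspicious pattern is `if result.Arch == "" { result.Arch = "amd64" }`, and the proposed direction is that a "local" provider like LXD must default to the host's architecture instead. A hedged sketch of that idea — defaultArch is a hypothetical helper, not the actual provider code:

```go
package example

import "github.com/juju/utils/arch"

// defaultArch picks an architecture when no arch constraint was given.
// An LXD container always runs on the host machine, so the empty case
// must resolve to the host architecture; hard-coding "amd64" is what
// makes jujud fail to run on arm64. (Illustrative shape only.)
func defaultArch(constraintArch string) string {
	if constraintArch != "" {
		return constraintArch
	}
	// HostArch returns a normalised form of runtime.GOARCH,
	// e.g. "arm64" on an arm64 host.
	return arch.HostArch()
}
```

This also explains why the bug hides in development: --upload-tools builds and uploads tools for the local machine, so the architecture always happens to match.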
=== natefinch is now known as natefinch-lunch
=== redir is now known as redir_afk
[16:53] so the missing juju-mongodb3.2 package on xenial broke AWS bootstrap as well as MAAS (with update/upgrade enabled)
[16:53] perrito666: ping?
[16:53] dimitern: yeah, I'm working on it
[16:54] dimitern: well, the problem is now that it's there
[16:54] and we're not looking in the right place for it
[16:54] cherylj: oh, cheers then! :)
[16:54] dimitern: what?
[16:54] perrito666: hey, I've got mongo questions for you :)
[16:55] dimitern: current master won't fail if the package is not there
[16:55] cherylj: sure
[16:55] perrito666: it does :(
[16:55] perrito666: yeah? :)
[16:55] wait
[16:55] perrito666: sure
[16:55] sorry
[16:55] it fails if it *is* there
[16:55] heh
[16:55] cherylj: ok, I'll need more details
[16:55] I can has english
[16:56] perrito666: can you HO?
[16:56] cherylj: gimme a sec
[16:56] perrito666: np, when you're ready: https://plus.google.com/hangouts/_/canonical.com/mongo-fun?authuser=0
[16:59] that sounds fun
[17:07] cherylj: mgz look it's no fun adding bugs to stuff if you people are going around finding them
[17:07] ha
[17:07] so it *IS* sabotage?
[17:08] hey mgz - about functional-container-networking
[17:08] Bug #1569467 opened: backup-restore loses the hosted model
[17:08] oh yeah, I was going to look at that ^^
[17:08] good timing, mup
[17:08] mgz: there was a change in juju ssh to default to not using the proxy
[17:09] which breaks that test on AWS
[17:10] oh fun
[17:10] mgz: but an easy fix. Just use juju ssh --proxy=true
[17:10] and backwards compatible to boot
[17:10] how long have we had the --proxy flag?
[17:11] I guess it doesn't matter too much, can just supply it always for 2.0
[17:11] https://goo.gl/X0oQBt
[17:23] cloud "lxd" not found, trying as a provider name <--- such is my luck
[17:27] perrito666: that's an expected warning
[17:28] perrito666: it should still continue fine from there
[17:31] mm I am getting same error as nate, I wonder if the upgrade did something to my conf
=== natefinch-lunch is now known as natefinch
[17:47] so... lxc list returns an empty list, but lxd init says error: You have existing containers or images. lxd init requires an empty LXD.
[17:48] * natefinch reboots just in case
[17:49] sigh
[17:50] hey, that's a different error message
[17:50] $ juju bootstrap local-apr-12 lxd --upload-tools
[17:50] ERROR cannot find network interface "lxcbr0": route ip+net: no such network interface
[17:50] ERROR invalid config: route ip+net: no such network interface
[17:50] natefinch, it should not be looking for lxcbr0
[17:51] lxdbr0
[17:51] are you working off master?
[17:51] natefinch: did you dpkg-reconfigure lxd ?
[17:52] be sure to say yes to the ipv4 config
[17:52] perrito666: yes, I did, but I changed lxdbr0 to lxcbr0... I guess that was not the right thing to do.
[17:52] natefinch: I did too and am working now with lxd
[17:52] bootstrapping a xenial as we speak
[17:53] natefinch, you have time for a hangout
[17:53] we should be able to work through this
[17:53] alexisb: definitely... I'd love to get past this
[17:53] k, our 1x1 HO
[17:54] lemme know if I can help you
[17:56] ugh, I can't even get restore to work
[17:56] the db just exits
[17:56] like "see ya, suckers"
[17:56] Apr 12 17:49:41 ubuntu mongod.37017[3194]: Tue Apr 12 17:49:41.356 [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[17:57] perrito666: thanks, alexis is helping me out
[17:57] I hit this during the last restore problem I was debugging
[17:58] perrito666: did you ever see this restore problem?
[17:58] cherylj: never, which is weird since I coded it
[18:00] I hit this every time. am I doing something wrong? argh!
[18:02] yay, alexisb fixed it for me :)
[18:03] :)
[18:03] ugh wow, I wonder if there's a network problem between me and wherever the images are hosted 'cause dayum this is a slow download
[18:03] natefinch, they have been very slow
[18:03] once the image is cached it is easy
[18:04] yeah
[18:04] you can always copy the image over and alias it with the tag
[18:04] lxd will look for the tag and use it
[18:07] alexisb: this will be done in 5 minutes or so, it's ok
[18:16] gah... is juju ssh supposed to work?
[18:17] natefinch: might fail in lxd
[18:17] natefinch: just lxc list and ssh to the machine
[18:17] Bug #1569490 opened: storage-get crashes on xenial (aws)
[18:17] you just need to put -m before the machine number for some reason
[18:18] oh, I guess because if you put it after, it thinks that's the ssh command
[18:18] blech
[18:19] perrito666: works fine in lxd... just PEBCAK
[18:19] lol I just ssh I am lazy
[18:20] I juju ssh because I'm lazy :)
[18:31] whelp, figured out why I always call kill-controller and not destroy-controller... I don't have to type out the pesky --destroy-all-models
[18:32] yep
[18:33] natefinch: are you trying to ssh to machine 0 just after a bootstrap?
[18:34] cherylj: yes, but it was just a problem of spelling the command correctly, what with multiple models and stuff
[18:34] ah, ok
[18:35] ..... and now mongo3.2 has hit the mirrors for my region
[18:35] yay
[18:35] cherylj: like, it defaults to an empty model, but I wanted to ssh to the controller, so I had to specify the model, but if you put that after juju ssh 0 then it thinks it's a command....
[18:35] yeah, that's totes annoying
[18:36] and I happened to have already created a machine in the non-admin model, so juju ssh 0 still worked and tried to run -m admin as an ssh command, which gave a wacky error message
[18:41] if unrelated tests fail in CI, do I need to resubmit the PR?
[18:41] yes, if you think they're spurious and will go away
[18:42] ...which is fairly common, unfortunately. But if you're not sure, send a link and we can help
[18:47] well the first failure is a termination worker timeout which is fine locally and I can't imagine that it would be related to helptext updates, so I'll resubmit in a bit.
[18:49] second failure is because it can't untar juju-core_2.0-beta4.tar.gz...
[18:49] which seems like a CI hiccup
[18:49] sinzui: ^
[18:49] I'll resubmit both after the queue shrinks.
[18:50] redir: I wouldn't count on the queue shrinking, just sayin' :)
[18:50] true dat
[18:50] redir: yes, sounds like one-off failures, though the failure to untar is concerning
[18:51] no such file/dir so prolly failed to DL in time.
[18:51] redir: looks like a hiccup, the tar file didn't arrive on the testing instance.
[18:51] natefinch: looking at the log, we got ssh disconnected when scping the source to the ec2 test running machine
[18:52] ...sinzui won ;_;
[18:52] oh, is this using the xenial ami?
[18:53] mgz: We are testing with the xenial from last week.
[18:53] yeah I see 'lost connection' above
[18:53] I know exactly what will fix this for me.
[18:53] Soup and/or sandwich
=== redir is now known as redir_lunch
[18:54] if only that were the answer to all problems. sigh...
[18:54] mmm, soup
[18:55] I guess it is just a work-around
=== redir_lunch is now known as redir
[19:17] Bug #1569529 opened: update-clouds strips "DO NOT EDIT" warning
[19:18] sinzui, mgz: is there a trick to compiling for arm64? GOARCH=arm64 go build github.com/juju/juju/cmd/juju returns errors from lxd about undefined functions
[19:19] natefinch: You can compile on the actual host if you like. That is what we do
[19:20] sinzui: I guess... cross compile *should* work and lets me edit in my local environment... but I guess I can copy my code up
[19:21] natefinch: We cross compile windows. In the case of all builds, we use the release tarfile. The script that makes it double-checks the deps and purges undocumented packages.
[19:23] natefinch: The installed lxd packages can differ between archs in ubuntu.
[19:23] sinzui: Okay, I actually got a restore to work. Does the test kill the controller? or use destroy-controller?
[19:23] sinzui: yes, not the code, though... and I'm getting a compile error
[19:23] cherylj: kill-controller.
[19:24] thanks.
[19:24] btw - the output makes it look like it's a status command that's failing:
[19:24] ERROR:root:Command '('juju', '--show-log', 'show-status', '-m', 'functional-backup-restore', '--format', 'yaml')' returned non-zero exit status 1
[19:24] it's just confusing for me
[19:24] but anyway
[19:25] sinzui: oh, it uses cgo, that's probably the problem
[19:25] natefinch: I don't think arm64 golang-1.6 is using cgo. only the osx is using cgo to my knowledge
[19:26] sinzui: no no, sorry, not being clear. The LXD code uses cgo, which complicates cross compilation
[19:26] ah
[19:26] yeah it does, natefinch. We had to set up a dedicated OS X builder because it does need cgo to link to the native crypto libs
[19:28] natefinch: this long log shows the last build of arm64 for master http://reports.vapour.ws/releases/3881/job/build-binary-xenial-arm64/attempt/424
[19:31] looks like the reason that you can cross compile windows is because the cgo stuff is all linux only.... what a PITA.
[19:35] ..well, duh, of course the lxd stuff isn't compiled in Windows :)
[19:36] sinzui: the arm64 machine can't access github, can it? :/
[19:37] natefinch: I just sent you an email with the ssh rules I use. The machine is on Canonical's network. It cannot see much
[19:37] sinzui: yeah, I got the ssh config stuff from cherylj last night. I guess tgz it is
[19:41] cherylj: I got it fixed, I'll make a PR, this goes against master?
[19:45] What happens when I targz a brand new gopath with just juju in it: -rw-rw-r-- 1 nate nate 216M Apr 12 15:44 src.tar.gz
[19:47] oh well, ship it. It'd take longer to fix it than to just push it up. Yay for a decent upload speed.
[19:47] 4.8 MB/s... I'll take it
[19:50] natefinch, I was wondering if you had a minute to repay the favor from earlier :)
[19:51] I am stuck on a test update that I am sure is a simple "how go works" type q
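On the cross-compilation thread at 19:18–19:35: the usual Go mechanism behind "the cgo stuff is all linux only" is a build constraint on the cgo-using files, which is why a GOOS=windows build simply excludes them while GOARCH=arm64 GOOS=linux still needs CGO_ENABLED=1 and a C cross-compiler (e.g. CC=aarch64-linux-gnu-gcc). A sketch of the gating pattern — the package name and layout are illustrative, not juju's actual source tree:

```go
// +build linux

// This file is only compiled for linux targets. Anything in it may
// use cgo; non-linux builds never see it, so they need no C toolchain.
// Cross-compiling it for a different linux arch, by contrast, needs
// both CGO_ENABLED=1 and a matching C cross-compiler.

package lxdclient
```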
[19:54] "kill-controller" seems to be stuck waiting [19:55] bogdanteleaga, you should be able to with kill-controller [19:55] if it is not working it is a bug [19:56] seems to be very happily stuck on "Waiting on 1 model, 2 machines, 3 services" [19:56] alexisb, but I might be able to help with the how go works thing :p [20:06] sinzui: got a second? [20:06] I do [20:07] perrito666: yes, against master [20:07] bogdanteleaga, alexisb if the model is not in a good state, kill controller can "hang" [20:08] alexisb: sorry, yes, I can help [20:08] bogdanteleaga: see bug 1566426 [20:08] Bug #1566426: kill-controller should always work to bring down a controller [20:08] cherylj, yeah I turned of the controller, issued it again and it went straight to the provider [20:08] there's a "workaround" in there [20:08] s/of/off [20:08] yeah, that's the workaround :) [20:09] natefinch, back to the 1x1 hangout [20:09] while I have you here, bogdanteleaga, is bug 1516668 addressed by your action changes? [20:09] Bug #1516668: Switch juju-run to an API model (like actions) rather than SSH. <2.0> <2.0-count> [20:10] cherylj, yup [20:10] bogdanteleaga: and that landed, right? [20:10] cherylj, correct [20:11] yay, fix committed it is, then! [20:11] I think sometime last week [20:11] tx sinzui [20:11] bogdanteleaga: also bug 1470820 - now that we're ta go 1.6, should this be done? [20:11] Bug #1470820: Remove github.com/gabriel-samfira/sys/windows once go 1.4 lands [20:12] cherylj, https://bugs.launchpad.net/juju-core/+bug/1426729 [20:12] Bug #1426729: juju-run does not work on windows hosts [20:12] this too probably [20:13] cherylj, yeah I've talked with curtis about that one last week before the CI switch [20:13] however I'm still unsure [20:13] since the tests on windows get ran using 1.2 [20:13] bogdanteleaga: maybe something to look at for 2.1 then? [20:17] Bug #1545116 changed: When I run "juju resources " after a service is destroyed, resources are still listed. <2.0-count> [20:20] cherylj, I was about to say it shouldn't be that hard to get the tests passing on 1.6 until I saw the last email with the job [20:21] cherylj, we might have to push it further back I guess [20:21] any idea what's up with all the "no tools" test errors? [20:22] bogdanteleaga: do you have a job link you could send? [20:22] mgz: you still around? [20:22] cherylj: yo [20:23] cherylj, http://reports.vapour.ws/releases/3881/job/run-unit-tests-centos7-amd64-go1_6/attempt/1 [20:23] sorry [20:23] http://reports.vapour.ws/releases/3881/job/run-unit-tests-win2012-amd64-go1_6/attempt/1 [20:23] this one [20:24] hey mgz could you help me figure out the juju commands that are run as part of the functional-backup-restore test? [20:24] I can't recreate using what I *think* is going on, and the job output is unhelpful [20:24] bogdanteleaga: let me take a look [20:25] bogdanteleaga: I *think* there is one place in the test suite I could change to fix a lot of those problems [20:27] cherylj: sure, also refer to assess_recovery.py for the details [20:28] how can git not be able to fix a conflict where one commit has nothing and the other has something there.... [20:31] cherylj: did anyone just land anything in master? [20:31] cherylj: if you want, we can also rerun a CI job with --verbose for the explicit [20:32] mgz: would you be able to do that for this job? 
[20:32] mgz: would you be able to do that for this job? It would be most helpful to see the output of the rebootstrap
[20:32] bogdanteleaga, sounds good, I don't understand how changing the go version can give that kind of error
[20:32] cherylj: well a change from wallyworld_ has just landed that broke my patch and pseudo-fixed the issue
[20:32] hmm
[20:32] cherylj: backup-restore exactly, not one of the other variants?
[20:32] Bug # changed: 1175580, 1235529, 1276403, 1279879, 1280949
[20:33] mgz: yeah functional-backup-restore
[20:33] building
[20:34] I really need a punching bag in my office
[20:34] sounds like an idea for the next team sprint, perrito666
[20:34] instead of tshirts
[20:34] here's a punching bag! (complete with juju logo)
[20:34] cherylj: oh no need, in the sprint I can use wallyworld_ :p
[20:34] lol
[20:35] perrito666: wot you talking about?
[20:36] wallyworld_: GO TO SLEEEEEEEEP
[20:36] hehe
[20:36] oh it's 6:30, tis ok
[20:36] perrito666: i just woke up
[20:36] wallyworld_: go breakfast?
[20:36] punching bags w/o sand, otherwise hard to take it as hand luggage on the plane
[20:36] anyway we just clashed on a fix
[20:37] perrito666: getMongoDumpPath still needs to be fixed
[20:37] perrito666: I'll go to bed instead of wallyworld_. here it is almost 11pm now, so time is getting closer.
[20:37] wallyworld_: no sts call today, btw
[20:37] cherylj: yeah, saw, ty
[20:38] i can actually have breakfast :-)
[20:39] this inability to actually finish destroying controllers is beginning to get on my nerves
[20:40] finally
[20:44] oh, what the pants. assess_recovery.py is one of our few jobs that doesn't use common args yet
[20:47] Bug # changed: 1158187, 1280953, 1289619, 1374906
[20:51] cherylj: really rebuilding this time
[20:52] I'm watching it now, mgz :)
[20:53] hm, I want to make our wait loops nicer with --verbose
[20:57] gah, I can't tell if I've fixed this bug, because --upload-tools hides it
[20:57] yeah, what a pain :(
[20:59] cherylj: I gotta run to make dinner for the kids. won't be back for a few hours until after they're in bed.
[21:00] natefinch: can you push your changes somewhere? maybe we could make a branch and test?
[21:02] cherylj: here's a PR... I am honestly not super confident in the fix, since I was kind of running blind... and furthermore the tests in that package pass both before and after I made my change, which means they're not actually testing that
[21:02] cherylj: https://github.com/juju/juju/pull/5116
[21:02] :(
[21:02] thanks, natefinch, we'll see what we can do
[21:02] cherylj: I'll be back on likely in 3.5 hours.
=== natefinch is now known as natefinch-afk
[21:08] cherylj: run finished, 'INFO juju --show-log' search should get you all the commands
[21:09] Bug #1554863 changed: juju bootstrap does not error on unknown or incorrect config values <2.0-count>
[21:10] thanks mgz
[21:12] cherylj: http://reviews.vapour.ws/r/4552/
[21:15] does lxd placement work? as in, should i be able to juju deploy xyz --to lxd: ?
[21:15] cherylj: i need a new bug to work on. all of them seem to require a lot of context... any suggestions on what to pick up?
[21:15] let me look
[21:16] does lxd placement work? as in, should i be able to juju deploy xyz --to lxd: ?
[21:16] katco: you can review perrito666's PR while I do that?
[21:16] ^^ :)
[21:16] sorry, wrong window
[21:16] cherylj: sure
[21:16] perrito666: if you can review mine :) http://reviews.vapour.ws/r/4551/
[21:16] was up-arrow,enter-ing in a term
[21:16] katco: just in case, check it in github too, I am not sure how well rb takes amends
[21:16] katco: sure
[21:18] perrito666: where's the test for this?
[21:20] katco: mm, you are right, that did not break a test, lemme check that again
[21:21] katco: ship it, but, I am curious, why this change?
[21:21] this is going to make development testing incredibly hard
[21:24] perrito666: just going on what the bug said. "the decision has been made"
[21:24] perrito666: i was not part of that conversation
[21:25] oh, ok, well It is time to resurrect my fake streams builder it seems
[21:26] well of course I did not break any tests... there aren't tests for that, well, let's fix that
[21:26] perrito666: :)
[21:27] aaand of course, external tests
[21:27] perrito666: what do you mean external tests?
[21:30] package_test tests
[21:30] perrito666: i think that's devs discretion and i actively avoid doing that
[21:30] perrito666: because it just causes boilerplate churn
[21:31] I believe I'll do regular unit tests
[21:31] * katco cheers
[21:31] I am on your side, I was protesting that the existing ones are externals
[21:31] I can't wait for this semester discussion about internal vs external tests
[21:32] perrito666: lol
[21:32] * perrito666 has it pretty much like one of the sprint events
[21:40] * redir is in a maze of twisty little passages, all alike
[21:40] redir: please beware of the grue, we've only begun to get to know you.
[21:41] :)
[21:47] oh don't worry, if you find it just throw the status tests to it, that should keep it occupied a good half an hour
[21:48] Bug #1565089 changed: create-model does not use the same config format as bootstrap
[21:48] Bug #1566303 changed: uniterV0Suite.TearDownTest: The handle is invalid
[22:18] Bug #1339931 changed: Status panicks during juju-upgrade
[22:22] perrito666: quick chat?
[22:22] wallyworld_: sure
[22:22] where?
[22:22] standup
[22:30] this is impressive http://classicprogrammerpaintings.tumblr.com/
[22:31] wallyworld_: frozen
[22:32] wallyworld_: cannot hear you, you are frozen
[22:35] wallyworld_: you left me speaking alone
[22:37] perrito666: wow.. i think something exciting just happened on our side... i was kicked out from perrito666uassel at least... maybe ian experiences fun too..
[22:37] quassel that is..
[22:39] perrito666: sorry, chrome ate all my memory :-(
=== alexisb is now known as alexisb-afk
[23:15] Bug #1567690 changed: Can't push charm to my new LP home
[23:25] ugh... struct equality again...
[23:25] what's valid?
[23:33] struct equality?
[23:34] nm
[23:34] interestingly...
[23:35] if args == StructType{} {
[23:35] return nil
[23:35] doesn't work
[23:35] but
[23:35] var empty StructType
[23:35] if args == empty {
[23:35] does
[23:35] hit this before, and no idea why Go doesn't like it
[23:37] thumper, have you tried args == (StructType{})?
[23:37] no
[23:37] but I find that less readable
[23:38] so would probably go with empty var
[23:42] wallyworld: ? :/
[23:44] it breaks symmetry though :P
[23:50] thumper: http://play.golang.org/p/C16rPMEAlO
[23:51] it's a parsing ambiguity because the parser cannot tell where the start of the block begins and the struct literal ends
[23:52] ironically it can with this even more verbose version
[23:52] http://play.golang.org/p/R_ui2oTlma
[23:52] but, what you're trying to do smells bad
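The struct-equality puzzle at 23:25 (and in the two playground links) is worth spelling out: in `if args == StructType{} {`, the parser cannot tell whether the `{` opens a composite literal or the if-block, so the Go spec requires parentheses around the literal in that position. A self-contained example showing both fixes:

```go
package main

import "fmt"

type StructType struct {
	Name string
	N    int
}

func isZero(args StructType) bool {
	// if args == StructType{} {   // does not compile: the parser
	//         ...                 // takes the "{" as the if-body
	// }

	// Fix 1: parenthesise the composite literal.
	if args == (StructType{}) {
		return true
	}

	// Fix 2: compare against a named zero value (thumper's preference).
	var empty StructType
	return args == empty
}

func main() {
	fmt.Println(isZero(StructType{}))          // true
	fmt.Println(isZero(StructType{Name: "x"})) // false
}
```

Either form only works when every field of the struct is comparable; and as noted above, wanting this at all can be a smell that the zero value is doing too much work.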
[23:54] katco: around?
[23:54] can i get some juju usage help?
[23:55] cherylj: katco: i think we got the PR for bug 1567170 backwards
[23:55] i'm trying to test the juju-mongo-tools3.2 package i made
[23:55] so i need to try to make a backup
[23:55] i have a controller bootstrapped in ec2
[23:55] mwhudson: bootstrap with mongo 3.2 is broken at the moment
[23:55] but now i get
[23:55] (master *)mwhudson@aeglos:juju-mongo-tools3.2$ juju backups create
[23:55] ERROR backups are not supported for hosted models
[23:55] wallyworld: i merged perrito666's PR
[23:56] mwhudson: juju create-backup -m admin
[23:56] or first switch to admin
[23:56] juju switch admin
[23:56] when you bootstrap, you are switched to the hosted model
[23:56] ERROR while preparing for DB dump: mongodump not available: failed to get mongod path: exec: "mongod": executable file not found in $PATH
[23:56] win, i think
[23:56] wallyworld: now how do i log into the controller node?
[23:56] wallyworld, katco I think you're right
[23:57] mwhudson: yeah, that's a bug i told horatio i found yesterday when doing a code read
[23:57] oh juju ssh 0
[23:57] oh right
[23:57] mwhudson: the mongodump path needs to be fixed
[23:57] i'll do a fix today
[23:57] wallyworld: well mongodump is not even installed
[23:58] mwhudson: that's because the mongotools package is not installed
[23:58] juju should depend on it
[23:58] wallyworld: because i haven't uploaded it yet :-)
[23:58] right :-)
[23:58] so i was going to install the package from the ppa
[23:58] mwhudson: but even when it is uploaded, juju will look in the mongo2.4 path :-(
[23:58] but you're saying that even that won't work, because the path is wrong?
[23:58] excellent
[23:59] i will do a fix this morning
[23:59] i only just saw it yesterday doing a code read by accident
[23:59] the backup code is not something i am 10000% familiar with
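A closing sketch of the mongodump-path fix wallyworld signs up for above: look for mongodump alongside the mongod that juju is actually running (the juju-mongodb3.2 path rather than the 2.4 one), and fall back to $PATH, where the new juju-mongo-tools3.2 package would install it. Everything here is hypothetical — the real code is around getMongoDumpPath in the backups machinery:

```go
package example

import (
	"os/exec"
	"path/filepath"
)

// mongoDumpPath looks for mongodump next to the mongod binary in
// use, then falls back to the system PATH. (Hypothetical helper.)
func mongoDumpPath(mongodPath string) (string, error) {
	candidate := filepath.Join(filepath.Dir(mongodPath), "mongodump")
	// LookPath checks a path containing a separator directly,
	// without consulting $PATH.
	if _, err := exec.LookPath(candidate); err == nil {
		return candidate, nil
	}
	// Fall back to $PATH, e.g. /usr/bin/mongodump from a
	// mongo tools package.
	return exec.LookPath("mongodump")
}
```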