[01:20] grrr, something is leaking /tmp/gui* turds
=== menn0 is now known as menno-afk
[01:25] thumper: "Generally LGTM" = shipit? I've answered your question about newline
[01:26] thanks for reviewing btw
[01:31] axw: yes
[01:31] thumper: ta
=== menno-afk is now known as menn0
[02:50] // If any post-MVP command suite enabled the flag, keep it.
[02:50] hasFeatureFlag := featureflag.Enabled(feature.PostNetCLIMVP)
[02:50] s.BaseSuite.SetUpTest(c)
[02:50] s.FakeJujuHomeSuite.SetUpTest(c)
[02:50] if hasFeatureFlag {
[02:50] s.BaseSuite.SetFeatureFlags(feature.PostNetCLIMVP)
[02:50] }
[02:50] wut
[02:50] if the feature flag is enabled, then enable the feature flag
[02:57] that feeling when you find the same bug copy-pasted into several test suites
[02:57] heh
[04:26] heh
[04:26] golint needs more smarts
[04:27] don't use underscores in Go names; type interface_ should be interface (golint) at line 13 col 6
[04:27] also complains about type_
[04:53] thumper: here's an important one: http://reviews.vapour.ws/r/4414/
[04:53] * thumper looks
[04:54] that was fun
[04:54] * menn0 hasn't been in the zone like that for a long time
[04:54] I'm sure Will will like that one
[04:55] thumper: yes, he doesn't like these custom watchers
[04:55] thumper: and I now pretty much agree with him
[04:57] shipit
[04:58] thumper: cheers
=== thumper is now known as thumper-afk
[05:00] bbl to catch europeans
[07:28] fixing cleanup suite failures in the ec2 tests is making me sad
[07:28] all day
[07:29] no fucking idea what is broken
[07:29] touch one thing, and magic values in other parts of the program don't get overwritten
[07:29] i hate patch value
[07:29] it's a tumor
[07:39] davecheney: sorry about that
[07:39] davecheney: it seemed like a good idea at the time
[07:51] FFFFFFFFFFFFFFFFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
[07:51] Cleanup suite uses a package singleton
[07:51] so you can have multiple CleanupSuites
[07:51] pooped all the way through your suite construction
[07:51] and they all
COMPETE FOR THE SAME FUCKING SingleTON!
[07:51] oh
[07:51] no
[07:51] i was wrong
[07:52] i take that rant back
[07:58] https://github.com/juju/testing/pull/94
[09:00] frobware, voidspace: hangout time
[09:01] dooferlad: frobware: be there in 2 minutes!
[09:06] dooferlad: omw
[09:32] babbageclunk: I'll ping you shortly, just fending off some emails
[09:32] ok cool
[09:34] babbageclunk: I've pulled in the changes from use-boot-resources onto our maas2instance branch
[09:34] babbageclunk: and am doing the rename
[09:34] voidspace: the one Tim suggested? ok
[09:34] babbageclunk: yeah, I think it's clearer
[09:35] * babbageclunk nods
[09:35] babbageclunk: so the old maasInstance becomes maas1Instance and the interface becomes maasInstance
[09:36] babbageclunk: we need to update to the latest version of gomaasapi and fix verifyCredentials to not hit the Machines endpoint, plus use the new gomaasapi error instead of the net/http one
[09:36] babbageclunk: we can do that in a new branch though
[09:45] voidspace: shall I make a start on the verifyCredentials change?
=== menn0 is now known as menn0-afk
[10:26] babbageclunk: sorry, missed your message
[10:26] babbageclunk: sure, you can look at errors too
[10:28] babbageclunk: just grabbing coffee, then I'm ready
[10:30] babbageclunk: use-boot-resources landed
[11:40] frobware: dooferlad: gah, updating to use the latest version of gomaasapi from thumper causes 23 test failures :-/
[11:40] just venting
[11:41] now bisecting to find what caused it
[12:03] voidspace: 23 test failures in juju or gomaasapi?
[12:23] marcoceppi: hello, you had mentioned the name of a charm that implements update-status, what was it?
[12:23] also mgz ping
[12:49] frobware: juju - it turned out to be an errors.Trace that was added in gomaasapi.
[12:49] frobware: it meant adding errors.Cause in a few places.
[12:54] anyone know the exact version of lxd that works with master?
[13:18] frobware: that was in juju
[13:18] frobware: we found the cause, and fixed it
[13:18] ah yes
[13:18] babbageclunk: grabbing coffee
[14:06] frobware: dooferlad: http://reviews.vapour.ws/r/4422/
[14:17] perrito666: any of the bigdata charms
[14:18] yup, already found it, tx a lot marcoceppi
[14:18] Bug # changed: 1229903, 1292157, 1323446, 1365542, 1531954
[14:27] Bug #972515 changed: Charm store needs search functionality
[14:42] Bug #972515 opened: Charm store needs search functionality
[14:46] voidspace, sorry, sidetracked by a customer issue.
[14:46] dooferlad: would you mind obliging with voidspace's review ^^
[14:46] frobware: sure
[14:48] Bug #972515 changed: Charm store needs search functionality
[14:48] Bug #1565826 opened: Unable to build juju2 from master
[14:48] Bug #1565827 opened: TestTimeoutRun fails
[14:52] voidspace: got a +1
[14:59] dooferlad: thanks!
[15:01] natefinch: ericsnow: standup time
[15:05] morning all
[15:05] if anything needs attention from me today please ping me directly
[15:06] crawling through email atm
[15:08] alexisb: wb
[15:08] thanks katco !
[15:09] Bug #1565831 opened: unable to create authenticated client with maas 1.9
[15:15] returning from holidays my brain is all: "oh, that english thing again, sure, lemme switch everything; in the meanwhile, talk like scooby doo"
[15:17] perrito666: you did fine ;)
[15:17] katco: when I just connected my brain was in "what are these people talking about?" mode
[15:22] perrito666: lol
=== ericsnow is now known as ericsnow-afk
=== ericsnow-afk is now known as ericsnow
[15:47] fwereade: thanks for the review; processing now
[16:00] rick_h_: is the meeting on today?
[16:01] frobware: dooferlad: is the meeting with rick_h_ on today?
[16:01] I added babbageclunk as a guest
[16:02] ericsnow: charmstore meeting
[16:09] Bug #1565826 changed: Unable to build juju2 from master
[16:10] voidspace: frobware babbageclunk sorry!
[16:10] dooferlad: frobware: rick_h_ is here!
[16:12] ericsnow: natefinch: so when we all have a moment, we should point new work, and I can give you an update on the projected completion of the project to see if we're in trouble
[16:13] voidspace, rick_h_: omw
[16:13] katco: k
[16:15] rick_h_: sorry, was clicking as you were speaking...
[16:16] frobware: all good
[16:18] Bug #1565872 opened: As a juju user I would like to use docker on the local provider
[16:19] katco, ping
[16:19] alexisb: pong
[16:27] Bug #1565872 changed: Juju needs to support LXD profiles as a constraint
[16:39] Bug #1565872 opened: As a juju user I would like to use docker on the local provider
[16:39] Bug #1565880 opened: juju list-credentials --show-secrets does not do anything
[16:40] katco: done meeting with the charmstore guys, would like to get lunch if that's ok?
[16:42] natefinch: that's fine
[16:42] ericsnow: natefinch: we'll meet after
[16:42] katco: k
[16:44] natefinch: just as a reference, re lxd, the version required for juju to work is the one in the ppa, not the one in wily
[16:45] perrito666: wait, I thought they reversed that
[16:46] natefinch: so did I, but juju says otherwise
[16:46] perrito666: whatever, my lxd works, that's all I care about for now
[16:47] natefinch: heh, mine too
[16:47] perrito666: you're right, mine's from the PPA
[16:47] * natefinch lunches
=== natefinch is now known as natefinch-lunch
[16:58] frobware: dooferlad: review if you get a chance http://reviews.vapour.ws/r/4425/
=== natefinch-lunch is now known as natefinch
[17:50] katco, ericsnow: ready when you guys are
[17:51] natefinch: k
[17:52] natefinch: ericsnow: let's do it!
[18:17] can I remove storage from a service?
[18:19] why do we have a detaching hook, but no way to detach storage?
[18:19] rick_h_: ^
[18:39] urulama: rick_h_: hey, can you be authorized to a specific channel?
[18:39] yes
[18:39] acls are different for each channel
[18:39] urulama: so a macaroon is channel-specific?
[18:40] Bug #1503029 changed: juju plugins which exit > 0 report a subprocess ERROR
[18:40] katco: yes, the idea being you give your tester folks access to the dev channel
[18:40] rick_h_: makes sense... suspicion confirmed, ty :)
[18:41] natefinch: the macaroon is not, but the ACL is
[18:46] Bug #1565943 opened: Can't bootstrap on VSphere
[18:59] urulama: so that's kinda weird... there's no channel parameter sent up when you get a charm archive. I guess if that charm is in a public channel it'll just work? and if it's not it'll return an access denied error of some sort?
[19:09] natefinch: sure it does. check out https://api.jujucharms.com/charmstore/v5/~jorge/bundle/wiki-simple/archive?channel=unpublished vs https://api.jujucharms.com/charmstore/v5/~jorge/bundle/wiki-simple/archive
[19:11] urulama: ahh, ok, I was looking at the docs, where it doesn't seem to indicate channels get passed to get the archives
[19:12] urulama: https://github.com/juju/charmstore/blob/v5-unstable/docs/API.md#get-idarchive
[19:12] natefinch: ha, ok, that needs some updating then
[19:12] urulama: oh, I guess the blurb about channels at the top covers that: any endpoint that takes an id can take a channel
[19:13] natefinch: works also on the /meta endpoint
[19:13] urulama: cool, thanks for clarifying
[19:13] np
[19:57] cherylj: Have you seen issues with deploying xenial on LXD where the juju agent never finishes installation?
[19:57] can anyone tell me why we aren't using logging here? https://github.com/juju/juju/blob/master/provider/common/bootstrap.go#L133
[19:59] ping anyone else working with LXD and xenial
[19:59] katco: weird
[20:00] mbruzek: I haven't tried it in a couple days
[20:00] mbruzek: are you using the lxd provider on a daily xenial image?
[20:00] cherylj: yes.
[20:01] Marco and I looked at the logs, we saw a modprobe fail.
[20:01] mbruzek: k, let me provision a new xenial instance
[20:01] katco: that's the output for the bootstrap command, to the CLI
[20:01] natefinch: was just typing that
[20:01] natefinch: got lost in a lot of --debug output, so it incorrectly seemed odd when I was looking at backtraces
[20:01] katco: it would still make sense to wrap it in a logger so you don't risk interleaved writes, but it's less likely on the client, I guess
[20:03] cherylj: I assert the agent will be stuck "Waiting for agent initialization to finish"
[20:03] after deploying a charm in the xenial series
[20:04] Or at least that is what I am seeing
[20:04] * thumper dives into email
[20:05] mbruzek: so it's also a xenial container you're deploying?
[20:05] mbruzek: is it a trusty controller?
[20:05] I was following this page: https://jujucharms.com/docs/devel/config-LXD
[20:06] That bootstrap command looks like I am requesting a xenial controller.
[20:13] cherylj: Once you get bootstrapped, deploy a xenial charm.
[20:13] juju deploy ubuntu --series xenial --force
[20:13] cherylj: ^
[20:14] mbruzek: are you using beta3, or tip of master?
[20:14] beta3
[20:14] omg, I'm deploying an openstack bundle on my vmaas and my laptop is screaming!
[20:15] much lag, such slow
[20:15] cherylj: Oh
[20:15] cherylj: Yikes!
[20:16] cherylj: I just deployed the ubuntu charm on both trusty and xenial
[20:16] k, my instance in aws is coming up :)
[20:16] LXD inside aws?
[20:17] mbruzek: yeah, I like to spin up a clean xenial install when testing the lxd provider
[20:19] cherylj: so I got both of the charms to deploy completely, but I see an error in the machine log that will never work on lxd
[20:20] The juju agent checks for kvm: WARNING juju.cmd.jujud machine.go:760 determining kvm support: INFO: /dev/kvm does not exist
[20:20] Not there in the base cloud image
[20:20] HINT: sudo modprobe kvm_intel
[20:21] cherylj: Then I see an error when the juju agent tries to do a modprobe.
[20:21] modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-15-generic/modules.dep.bin'
[20:21] : exit status 1
[20:21] no kvm containers possible
[20:21] mbruzek: and that's causing the machine agent to barf?
[20:21] mbruzek: that should be fine
[20:22] bootstrapping now...
[20:22] cherylj: This time it does not appear that they barfed; both trusty and xenial seem OK now.
[20:23] mbruzek: did you do anything differently? or did just doing it a second time work?
[20:23] cherylj: why do you think that is OK? doing a modprobe within an LXD container is never going to work.
[20:23] mbruzek: it's just a check to see if we have the capability of hosting KVMs
[20:24] happens on all providers
[20:26] cherylj: but since LXD is sharing the kernel, a modprobe will not work in LXD. So the error is handled appropriately?
[20:26] Because they share the kernel
[20:26] you cannot insert any more mods
[20:30] cherylj: OK, I was able to bootstrap, deploy and clean up correctly this time.
[20:32] mbruzek: is there a better way to detect whether or not we can host KVMs? (I'm not familiar with how we would do that)
[20:33] mbruzek: and on this lxd instance issue - when the unit was waiting for agent initialization, did it have an instance ID yet?
[20:33] cherylj: Not that I am aware of, but perhaps we should check for LXD / containers first, and if not in a container then check for kvm.
[20:33] cherylj: let me check my status
[20:34] cherylj: yes, I was even able to ssh to it, but the juju agent was dead
[20:34] ah, I'm running into a different issue then
[20:34] mbruzek: were both unit and machine agents dead?
[20:34] cherylj: I think it was the machine agent, and I was not able to destroy the controller
[20:35] mbruzek: when you did destroy-controller, what was the error? or did it just hang?
[20:35] cherylj: it went into an infinite loop
[20:36] mbruzek: ah, in the "waiting for x service" blurb?
[20:36] I hit control-c and re-ran it with --debug on
[20:36] Waiting on 2 models, 1 machine, 2 services
[20:37] cherylj: with debug:
[20:37] 2016-04-04 19:32:20 INFO cmd cmd.go:129 Waiting on 2 models, 1 machine, 2 services
[20:37] 2016-04-04 19:32:20 INFO cmd cmd.go:141 admin@local/default (alive), 1 service
[20:37] forever, I could not --force it
[20:37] mbruzek: this sounds familiar. I think I've run into this before, but couldn't recreate it
[20:38] cherylj: well, kill the machine agent and try to destroy the controller
[20:38] mbruzek: you can manually terminate the controller machine, then run kill-controller
[20:38] But this time everything came down OK
[20:39] ok
[20:39] let me poke more, see if I can recreate it
[20:39] cherylj: Yes, I figured out the lxc commands to stop and delete the images
[20:40] But all the Juju stuff was left in .local/share/juju
[20:40] mbruzek: yeah, if you just kill the controller machine and run kill-controller, it will give up talking to the controller and then go to the provider to clean up
[20:40] and then it will clean up your local cache information
[20:41] cherylj: OK, I was not able to reproduce that, but I didn't know that worked
[20:41] mbruzek: I wrote a lot of the destroy / kill controller code so I know ALL ITS SECRETS
[20:41] cherylj: OK, well thank you for the information
[20:42] mbruzek: there's also a bug open to provide a way to clean up stale controller information
[20:42] It'd be super awesome if we could get that done for 2.0
[20:43] cherylj: OK, so you were able to get a xenial ubuntu and everything looks good?
[20:43] mbruzek: no, I'm running into a different bug
[20:43] mbruzek: what I see is that on a new xenial install, deploying in lxd doesn't even get an instance associated with the service
[20:43] cherylj: so let me understand, you are using amazon to run a local LXD provider?
[20:44] mbruzek: yes, I manually deploy a xenial machine, install juju2 on it, and bootstrap lxd
[20:44] cherylj: OK, I just wanted to understand the workflow
[20:44] mbruzek: :) I do it to make sure it's clean and I haven't messed things up somehow
[20:45] that is a good idea!
[20:45] mbruzek: if you haven't seen it before, I use the cloud image finder to quickly launch a particular instance in a particular region: https://cloud-images.ubuntu.com/locator/daily/
[20:46] Bug #1564622 changed: Suggest juju1 upon first use of juju2 if there is an existing JUJU_HOME dir
[20:46] Bug #1565991 opened: juju commands don't detect a fresh juju 1.X user and helpfully tell them where to find juju 1.X
[20:46] cherylj: Thanks for the link
[20:47] cherylj: which type do you pick? I imagine the ssd ones are more expensive
[20:47] mbruzek: I just go with the ssd ones
[20:53] thumper: ping
[20:53] voidspace: hey
[20:53] thumper: morning
[20:54] thumper: I just wanted to say thanks
[20:54] for?
[20:54] thumper: your changes to gomaasapi broke 24 tests and it took us nearly two hours to work out why
[20:54] ;-)
[20:54] thumper: we fixed it, but I do wonder if we ought to change it (which is why I'm really pinging)
[20:54] won't make that mistake again then, will you?
[20:54] hehe
[20:54] thumper: you added an errors.Trace to the error the client returns
[20:55] gomaasapi.GetServerError(err error) (ServerError, bool)
[20:55] thumper: which means that anything that attempts to cast to a ServerError fails
[20:55] sure
[20:55] I added that for that exact reason
[20:55] but it's backwards incompatible
[20:55] eh...
[20:55] yeah
[20:55] ish
[20:55] a bit
[20:55] not heaps
[20:55] well, it broke 23 tests on master...
[20:55] so yeah - ish
[20:55] you're welcome
[20:55] :-D
[20:55] it's *better*
[20:56] maybe we just see if anyone else complains
[20:56] gomaasapi does have users
[20:56] not many probably, but some :-)
[20:56] (I mean the change is better)
[20:57] anyway, I'm off to bed
[20:57] well - watch TV anyway
[20:57] thumper: just thought as you were around I'd hassle you :-)
[20:57] thumper: have a good day
[20:57] :)
[21:34] Bug # opened: 1564157, 1564163, 1564165, 1566011, 1566014
[21:37] Bug # changed: 1564157, 1564163, 1564165, 1566011, 1566014
[21:46] Bug #1565880 changed: juju list-credentials --show-secrets does not do anything
[21:46] Bug # opened: 1564157, 1564163, 1564165, 1564168, 1564670, 1566011, 1566014, 1566023, 1566024
[22:19] thumper: PR #20 LGTM. It's a bit hard to read #21 b/c it builds on it, so let me know when you've merged #20.
=== menn0-afk is now known as menn0
[22:20] menn0: ack, will land, and update pr
[22:37] Bug #1566044 opened: list-models and show-model should represent similar keys for controller model
[22:49] Bug #1566044 changed: list-models and show-model should represent similar keys for controller model
=== redir is now known as redir_afk
[22:52] Bug #1566044 opened: list-models and show-model should represent similar keys for controller model
[23:15] axw: anastasiamac: is standup still happening?
[23:15] perrito666: yes, omw
[23:15] m here
[23:38] # github.com/juju/juju/provider/ec2_test
[23:38] local_test.go:93: ambiguous selector t.AddCleanup
[23:38] FATAL: command "test" failed: exit status 2
[23:39] pungent smell that CleanupSuite is present more than once in this suite
[23:44] davecheney: odd, although I found someone had refactored something upstream last time I was nearby
=== redir_afk is now known as redir