[00:11] thumper: alexisb email sent [00:11] tell me what you think [00:11] certainly more than 6 weeks work :) [00:22] sinzui: i am blocked [00:23] something is fucked up on batuan and I cannot pull source from github [00:23] [rt.admin.canonical.com #72748] [00:23] <_mup_> Bug #72748: Crash while exiting [00:23] going to talk to #is now [00:24] davecheney, I have copied files to a machine inside the network, hacked /etc/hosts to point to that machine. I have also created certs that claim to be the other site [00:24] * sinzui doesn't accept no [00:26] sinzui: i'll try to debug this on amd64 for the moment [00:30] bodie_: review of PR 141 finished for now [00:39] sinzui: crisis averted [00:39] connectivity restored [00:44] sinzui: interesting, I cannot reproduce the build failure on my ppc64 machine [00:44] we saw it several times in ci http://juju-ci.vapour.ws:8080/job/run-unit-tests-trusty-ppc64el/ [00:45] dave the machine is stilson-07, do you want to visit it? [00:46] davecheney, ^ [00:50] davecheney, ssh ubuntu@10.245.67.135. Your keys are there. maybe you can find an older or newer version of gcc there [00:53] davecheney, oh, 10.245.67.136 saw the same error. so both the unittest machine and the packaging machine hate juju [00:58] sinzui: interesting [01:01] sinzui: trying stilson now [01:08] sinzui: stilson-07 is running the outdated version of gccgo [01:08] sinzui: are you permitted to update it ? [01:18] sinzui: wtf, stilson-07 doesn't even have bzr in the path ... [01:18] that's like saying the machine is actually running fedora [01:19] menn0: thanks for the review [01:19] wallyworld: can you also take a look at https://github.com/juju/juju/pull/148/ ? there's some changes to the instance-type constraint handling that you added [01:20] davecheney, We try to keep the machines clean.
tests purge the left overs, they tests often export all the locations needed [01:20] axw: oh hi, i had looked, was waiting for you to come online [01:21] thumper: added some comments on the doc, heading your way [01:21] sinzui: short version [01:21] axw: i don't quite see the purpose from the covering letter - the azure provider already calls into common instance matching functionality [01:21] you need to upgrade the compiler [01:22] hurray [01:22] to the one that is (hopefully) in trusty updates [01:22] wallyworld: I'll find the code in question for you, just a sec [01:22] bug is fixed in that version [01:22] wallyworld: would've been more obvious with a branch preqreq'ing ;) [01:22] sinzui: I don't know how you can reconcile that with the requirements of the lp builders [01:22] yup [01:23] wallyworld: https://github.com/juju/juju/blob/master/provider/azure/instancetype.go#L19 [01:23] all this preferredTypes stuff can go [01:23] and be replaced with a call to MatchingInstanceTypes [01:23] axw: also, this line "if len(itypes) == 0 && cons.Mem != origCons.Mem {" <-- the semantics seem slightly different from the original origCons.Mem != nil [01:24] davecheney, the unittest run called this to get the compiler [01:24] sudo apt-get install -y build-essential bzr distro-info-data git-core mercurial zip rsyslog-gnutls juju-mongodb gccgo-4.9 gccgo-go [01:24] wallyworld: the idea is that above it'll only try again if it tried above with implied mem=1G [01:24] sinzui: the update has not landed [01:25] it's still stuck in whatever process is blocking it [01:25] i had to install the compiler from ppa [01:25] The builder called this sudo apt-get install -y build-essential fakeroot dpkg-dev debhelper bash-completion gccgo-4.9 gccgo-go [01:25] yes, you said [01:25] but it looks liek the compiler has not made it out of the -propose pocket [01:25] the original ppc bug is not marked fixed-released [01:25] wallyworld: I could do "!cons.HasInstanceType() && origCons.Mem == nil", but I thought this was clearer [01:26] sinzui: https://bugs.launchpad.net/ubuntu/+source/gccgo-4.9/+bug/1304754 [01:26] <_mup_> Bug #1304754: gccgo has issues when page size is not 4kB [01:27] davecheney, more interestingly. all machines got their compilers from ppa, but the ppa [01:27] what ppa ? [01:27] davecheney, the machines use ubuntu-toolchain-r-ppa-trusty.list per your instructions [01:27] axw: so you plan on changing the selectInstanceTypeAndImage() method? [01:28] davecheney, stilson-7 is missing it though [01:28] wallyworld: yes, but I think selectMachineType is more relevant [01:28] wallyworld: actually it could probably just be replaced [01:29] wallyworld: I could just propose my other branch [01:29] as a wIP [01:29] axw: i'm thinking that both openstack and ec2 providers call into FindInstanceSpec() and their code handles instance type constraints etc as is - can we just tweak azue to use the same code? [01:29] without exporting anything? [01:29] wallyworld: maybe... I'll see what they do [01:30] sinzui: ppa is not installed on stilson-07 [01:30] apt-get update confirms it [01:30] I am adding it now. I think a test replaced it... [01:30] wallyworld: do they allow you to explicitly specify the image name? [01:30] axw: i'm all for refactoring if it improves the code, so if it's necessary, go for it. 
but i also think we'd want azure, ec2, openstack, joyent etc to be consistent [01:30] ah, imagename [01:30] davecheney, the machine with the right ppa failed the same way [01:30] sorry [01:31] i was getting confused with instance type [01:31] wallyworld: this is caused by having to deal with force-image-name [01:31] hmmm. we specifically removed image name selection from ec2 etc [01:31] ie the image-id config was removed [01:31] because people could do bad things [01:32] like forcibly specify an image that didn't match tools [01:32] the fact that azure added support for it is unfortunate [01:33] i'm almost inclined to see if we shouldn't look at deprecating it [01:33] axw: np [01:33] it may have been useful early on when azure images were changing rapidly and we didn't have simplestreams [01:34] wallyworld: can do, but I think we need to give people notice. is exposing this function really that bad? [01:34] sinzui: can I see the failure from that machine ? [01:36] davecheney, http://juju-ci.vapour.ws:8080/job/run-unit-tests-trusty-ppc64el/606/consoleFull [01:36] axw: yeah, it would be deprecated over a release cycle. in which case i guess we could expose the function. but any time we expose previously internal stuff is potentially an issue we'd prefer to avoid. but in this case we could do at and hide it again when azure is changed [01:37] yep, that sounds fine to me [01:37] ok, let me take another look at the pr [01:37] ta [01:37] davecheney: arm email looks good to me [01:38] axw: also, i'd be interested in your thoughts on review board as per martin's email [01:38] davecheney: thanks [01:38] wallyworld: yeah taking a look now [01:39] sinzui: that machine is also lacking the ppa [01:39] oh no [01:39] i'm sorry [01:39] davecheney, ubuntu-toolchain-r? [01:39] i guess [01:39] i cannot see from that output [01:40] all i see is [01:40] Get:3 http://ppa.launchpad.net trusty/main ppc64el Packages [11.9 kB] [01:40] sinzui: can you add a [01:40] gccgo -v to the build script, just before doing go build [01:40] that will settle it [01:41] davecheney, that is a little tricky since that would error on golang [01:41] eh ? [01:41] davecheney, http://pastebin.ubuntu.com/7693053/ [01:41] wallyworld: are you able to get onto reviewboard atm? it's refusing my connection, just wondering if I've set up the tunnel right or not [01:41] interesting [01:41] that is the correct version [01:42] that matches winton-07 [01:42] that matches winton-09 [01:44] wallyworld_: are you able to get onto reviewboard atm? it's refusing my connection, just wondering if I've set up the tunnel right or not [01:44] axw: yeah, worked for me [01:45] hrm [01:45] did you set up proxy in browser? [01:45] wallyworld_: seems busted. I can't even telnet to 8080 on the machine. [01:45] oh, it's 80 isn't it [01:46] axw: all i did was run the ssh -D command, and then that logged me into a session. then i changed the browser to use a socks proxy and it worked [01:46] i used port 8080 in the socks proxy config [01:47] wallyworld_: I was trying to be clever and just forward the port, but I chose the wrong destination port. 
never mind :) [01:47] ok :-) [01:48] davecheney, stilson-07 with the correct ppa failed the same way http://juju-ci.vapour.ws:8080/job/publish-revision/546/console [01:48] davecheney, I am still deploying a hack to print the compiler to all the slaves [01:49] gccgo -v :: [01:51] sinzui: confirmed [01:51] same compiler [01:51] virtually same kernel [01:51] different results :( [01:51] sinzui: do you know what the policy would be in terms of deprecating and removing a juju feature which shipped in trusty? can we do that over a 1.20->1.22 release, or are we stuck with it forever? [01:54] wallyworld_, jamespage's proposal is co-installable version, so we can deprecate in devel and obsolete next devel. We will continue to state modern client works with older server 1 stable behind === wallyworld_ is now known as wallyworld [01:59] davecheney, I can reboot? I have rebooted stilson-08 a lot actually because the ppc bug past week cause the disk and ram to fill up [02:00] wallyworld: how did you create a review from a PR in reviewboard? [02:00] axw: i clicked on the new review link and it showed the recent branches [02:00] wallyworld: did you just get the diff and upload it? [02:00] i then clicked on the branch [02:00] huh ok [02:00] I don't get that... [02:01] no New Review menu? [02:01] wallyworld: it just shows me the juju/juju repo and master branch [02:01] hmmm [02:01] so you have the menu but it doesn't show any branches? [02:02] wallyworld: I clicked "New Review Request" at the top, and the only repo is "juju" which is juju/juuju [02:02] juju* [02:02] no obvious way to add my own [02:04] hmm, let me re setup the tunnel etc and try [02:04] sinzui: sure [02:05] sinzui: stilson-07 looks fucking sit [02:05] dmesg shows continual crashes [02:08] axw: just emailed a screenshot showing what i get when i click "New Review Request" [02:08] s/sit/sick [02:08] wallyworld: yeah that's what I see too. that's merged commits. [02:09] oh balls, it is too [02:09] i didn't look too closely [02:09] so looks like the rbt tool is needed after all [02:10] yeah, which is a bit crap IMO [02:10] yep :-( [02:10] i wonder if gerrit is any better [02:10] waigani: you're ocr today ? [02:10] wallyworld: everyone says gerrit is the one to go for [02:11] davecheney: we'll take a look at it. trouble is, they're all crap compared to lp [02:11] wallyworld: i'll agree to disagree with you there [02:12] no inline commenting on lp was a non starter for me [02:12] davecheney: what don't you like about lp? it has the best review queue, can mark stuff as wip, is *not* patch based, supports pre-req branches etc etc [02:12] none of the others do that very well at all [02:12] wallyworld: lack of inline commenting on reviews [02:12] meh [02:12] lp has that now anyway [02:13] damn, too late [02:13] since maybe about a month [02:13] like i say, we don't have to agree on this point [02:13] sure :-) [02:36] sinzui: did you reboot stilson ? [02:36] davecheney, 7 and 8 are rebooted. 7 is back [02:37] and 8 is back [02:37] [ 5.238055] init: plymouth-upstart-bridge main process ended, respawning [02:37] [ 7.117990] init: pollinate main process (797) terminated with status 1 [02:37] this machine is sick [02:45] davecheney, we could backout your change for a half a day to get a pass rev we can release. or may rewrite the offending code to be friendlier to gccgo [02:47] sinzui: yup, will revert [02:47] does anyone know how to do that ? [02:49] wallyworld: can you help revert a merge ? [02:49] which one? 
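For anyone following along who, like davecheney above, has never reverted a merge on git: the usual approach is a single revert commit that names which parent to keep. The commands below are a generic sketch only, not the exact steps used for the pull request discussed here; the SHA and branch names are placeholders.

    # find the merge commit to back out
    git log --merges --oneline -n 20
    # branch from current master and create a commit that undoes the merge;
    # -m 1 keeps the first parent (master) and discards the merged branch's changes
    git checkout -b revert-the-merge origin/master
    git revert -m 1 <merge-commit-sha>
    # push the branch to your own remote and propose it as a normal pull request
    git push <your-remote> revert-the-merge

Note that a reverted merge cannot simply be re-merged later without first reverting the revert, which is one reason working around the underlying problem instead (as happens below) is sometimes the easier path.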
[02:49] https://github.com/juju/juju/pull/144 [02:49] i think you just reverse cherry pick and propose a pr [02:49] wallyworld: i have _never_ done this before on git [02:50] me either :-) [02:50] it's guaranteed i'll screw it up [02:50] might be easier to work around the compiler bug [02:50] davecheney, I don't have confidence in that myself. [02:50] davecheney: is that what breaks the compiler? [02:53] sinzui: wallyworld I have a workaround for the compiler crash [02:53] pls hold [02:53] ok [02:53] \o/ [02:54] hang on [02:54] shit is weird [02:55] my local working copy of master does not match what i'm seeing on stilson [02:55] fuck, forgot to pull master [02:57] ok, running tests [02:57] PR coming ASAP [03:07] wallyworld: did you see the review request for gwacl? [03:07] no, i'll look [03:07] wallyworld: https://code.launchpad.net/~axwalk/gwacl/rolesizes-update/+merge/224103 [03:09] axw: i know, let's just write an html page scraper, WCPGW [03:10] wallyworld: heh :) [03:10] sinzui: wallyworld https://github.com/juju/juju/pull/151 [03:11] davecheney, jam made a similar fix last week [03:13] davecheney: we could leave the authEntityTag alone and just do if names.NewMachineTag(parentId).String() == authEntityTag.String() { ?? [03:14] wallyworld: sure, [03:14] just a suggestion to preserve more of the original branch [03:14] but i've tested this version [03:15] ok, i was just thinking that we'd want to keep various getAuth functions consistent [03:15] this isn't forever [03:15] ok, +1 [03:17] davecheney, there is a cursed branch in front of yours. github-merge-juju will be available in 30 minutes [03:17] sinzui: ok [03:17] i don't know what you mean [03:17] but ok [03:40] I think this is ready to go -- https://github.com/juju/juju/pull/140 [03:40] would appreciate a lgtm / lbtm :) [03:49] bodie_: I'll take a look [03:49] davecheney: do you want to get this merged or would you prefer that the tests I suggested get done first? https://github.com/juju/juju/pull/127 [03:50] menn0: i'm not fussed [03:51] davecheney: let's just get it in. it's got my LGTM. [03:57] cool, thanks [03:58] panic: runtime error: invalid memory address or nil pointer dereference [03:58] [signal 0xb code=0x1 addr=0x1 pc=0x40761f] [03:58] goroutine 1438 [running]: [03:58] runtime.panic(0xed70c0, 0x1f9c7c8) /usr/lib/go/src/pkg/runtime/panic.c:266 +0xb6 [03:58] github.com/juju/juju/state/apiserver.func·009(0x0, 0x0, 0xc210256c60, 0x30941b60a0, 0x414361, ...) /home/ubuntu/juju-core_1.19.4/src/github.com/juju/juju/state/apiserver/root.go:169 +0x5ed [03:58] github.com/juju/juju/state/apiserver.(*srvCaller).Call(0xc21033cb40, 0x0, 0x0, 0x0, 0x0, ...) /home/ubuntu/juju-core_1.19.4/src/github.com/juju/juju/state/apiserver/root.go:101 +0x3f [03:58] github.com/juju/juju/rpc.(*Conn).runRequest(0xc210859500, 0x7f9e941e1b18, 0xc21033cb40, 0x129fb20, 0x12, ...) /home/ubuntu/juju-core_1.19.4/src/github.com/juju/juju/rpc/server.go:533 +0xd5 [03:58] created by github.com/juju/juju/rpc.(*Conn).handleRequest /home/ubuntu/juju-core_1.19.4/src/github.com/juju/juju/rpc/server.go:462 +0x671 [03:58] err [03:58] didn't jam make a fix for this ?
[03:59] r.objectCache[objKey] = objValue [03:59] return objValue, nil [03:59] the hell [03:59] that means r.objectCache is nil [04:00] could use a bit more review on this as well -- menn0 if you're interested -- jcw4 https://github.com/juju/juju/pull/141 [04:02] davecheney: he has a fix but he hasn't merged it yet [04:02] davecheney: PR 146 [04:02] cool, thanks [04:02] i'll check it out [04:03] davecheney: it's fairly sporadic [04:04] bodie_: that extension to the tests for #141 looks good. I had some other comments too. [04:05] thumper: did you see this? https://github.com/juju/juju/pull/108#issuecomment-46822183 [04:06] thumper: fwereade is right that the implementation is probably not quite right given the plans in the identity spec. most of the work is still useful though. I wonder if it should merged but disabled. [04:07] menn0: urgh, shit, lots of races in cmd/jujud [04:07] davecheney: where? what? [04:08] menn0: pls hold [04:09] sinzui: that gccgo fix landed [04:09] \o/ [04:09] how do I kick off the ppc build ? [04:10] menn0: http://paste.ubuntu.com/7693425/ [04:10] as suspected there is a race on the api server root hashmap [04:10] davecheney, you can't CI already sees there is a new revision http://juju-ci.vapour.ws:8080/ [04:10] sinzui: ok [04:11] davecheney, Both publish-revision and run-unit-tests-trusty-ppc64el will start in 5 minutes [04:12] sinzui: https://bugs.launchpad.net/juju-core/+bug/1333513 [04:12] <_mup_> Bug #1333513: state/apiserver: data race on on apiserver method hashmap [04:12] ^ blocks 1.19.4 i'm afraid [04:12] :( [04:13] i can try jam's fix and see if that helps [04:14] menn0: hey, back at my desk now [04:14] menn0: let me look [04:15] thumper: k [04:16] davecheney: that fail is due to the thing that jam has fixed but not merged. I believe there was discussion about the best way to fix it which might be why there's a delay. [04:22] menn0, I had a few replies for your comments -- I need confirmation from jcw4 on at least one of them since 141 is actually an API for the Watcher, which itself is tested elsewhere -- I'm not entirely sure all the error cases need testing, since many of them are things from other packages [04:22] the API should really be simple, I think, but I'm open to being wrong about that :) [04:23] bodie_: give me a bit, I'm in the middle of your other review [04:24] sure thing, it's way past EOD for me here so I'll be reviewing in the morning -- hopefully we can sync effectively :) I really appreciate the comments [04:57] bodie_: no problems [05:01] sinzui: ppc build passed! [05:01] w00t [05:06] menn0: what shold we do [05:06] 1. release with this know issue ? [05:06] 2. wait a few hours for jam ? [05:06] 3. try to fix it ourselves ? [05:07] davecheney: I don't think we can release. This problem will happen in actual use. [05:07] ok [05:07] i agree [05:07] there are also a shitload of other races in that paste [05:07] davecheney: it looks like jam has a valid fix that several people approved [05:07] i only made an issue for the first one [05:07] menn0: hit it with $$merge$$ [05:08] davecheney: there was quite a bit of discussion though so I wonder if jam1 was planning on doing something better [05:09] davecheney: menn0: the fix for *my* code is just fine, the ancillary fix for other stuff that showed up with -race is in question, but I'm looking into it right now. [05:09] :) [05:10] davecheney: maybe just go with what jam has right now and he can always change it later. 
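Bug #1333513 above (a data race on the apiserver method hashmap) and the nil objectCache panic are both symptoms of an unsynchronised, lazily built map being touched from several goroutines. The sketch below is not jam's actual fix, just a minimal, self-contained illustration of the usual pattern: initialise and mutate the map only while holding a mutex. All names here are invented for the example.

    package main

    import (
        "fmt"
        "sync"
    )

    // cachingRoot caches resolved method values, loosely analogous to the
    // apiserver root's objectCache. The mutex makes concurrent lookups safe.
    type cachingRoot struct {
        mu    sync.Mutex
        cache map[string]string
    }

    // lookup returns the cached value for key, computing and storing it on a miss.
    func (r *cachingRoot) lookup(key string) string {
        r.mu.Lock()
        defer r.mu.Unlock()
        if r.cache == nil {
            r.cache = make(map[string]string) // never write to a nil map
        }
        if v, ok := r.cache[key]; ok {
            return v
        }
        v := "resolved:" + key
        r.cache[key] = v
        return v
    }

    func main() {
        root := &cachingRoot{}
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                fmt.Println(root.lookup(fmt.Sprintf("Method%d", n%3)))
            }(i)
        }
        wg.Wait()
    }

Run under the race detector (go run -race / go test -race) this stays clean; drop the mutex and the detector reports exactly the kind of race pasted above.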
[05:10] i agree [05:10] hitting merge now [05:12] davecheney: trying to use "go test -compiler=gccgo" I got this error: http://paste.ubuntu.com/7693569/ [05:12] thoughts? [05:12] jam1: fix in trunk [05:12] running it a second time and it passed... [05:12] ah, nm, still breaks [05:12] landed a few minutes ago [05:13] I forgot I run the regular tests first [05:13] davecheney: thanks [05:13] jam1: did you catch that davecheney and I just hit $$merge$$ on your objectCache PR? [05:13] davecheney: is that a fix in "juju" trunk, or a fix in gccgo trunk [05:13] ? [05:13] jam1: workaround in juju [05:13] menn0: yeah, though I did *just* push up the alternative fix [05:14] jam1: crap. [05:14] jam1: shall we cancel the merge then? [05:14] menn0: meh we can just submit it again, I think if we missed something. It is noise but not terrible [05:17] menn0: there is a queue 3 deep already and maybe the bot doesn't track the tip version that was voted on, in which case it will get the updated one. We'll see. [05:18] jam1: I was wondering the same thing. [05:19] tarmac was careful about it, because you could be approving 3rd party proposals, though it often was annoying to have to go and reapprove since most of our branches were actually trusted. [05:19] I don't know the new bot nearly as well. [05:20] davecheney: I just send off that status API PR for merging too [05:20] thanks [05:20] sinzui: still there ? [05:21] when do we have to stop screwing around so you can cut a release ? [05:50] jam1: did you still want to catch up? [05:54] OMG... lots of addressing document comments, but no other work [05:54] * thumper feels exhausted [05:54] it is all rick_h_'s fault [05:54] yes rick_h_, read this when you wake up and know that you did this to me :-P [05:54] * thumper will add more tomorrow [05:54] night all [06:02] wallyworld: axw http://juju-ci.vapour.ws:8080/job/run-unit-tests-trusty-ppc64el/ [06:02] ^ can you push go on this build please [06:02] sure [06:02] ta [06:02] davecheney: what revision? tip? [06:05] wallyworld: tip is fine [06:05] ta, i started it with the sha of your latest commit [06:05] yup [06:05] that is the one i want to check [06:05] make sure I havn't just screwed things again === vladk|offline is now known as vladk [06:16] fwereade: if you are around and have time for a 5-minute chat about a tasteful way to share common code I'd like to run some ideas by you. [06:19] jam1, sure [06:19] fwereade: https://plus.google.com/hangouts/_/canonical.com/juju-sapphire [06:43] mornin' all [07:09] wallyworld: playing with wercker [07:09] it looks like we can setup custom build environments [07:09] trying to get the juju tests to pass [07:09] cool [07:09] wallyworld: you get pre commit tests [07:09] on a per branch basis [07:10] the werker bot participates on the PR and says if the build passes or not [07:12] i've probably screwed my working copy [07:12] doing all this [07:17] https://app.wercker.com/#buildstep/53a925f9770aadd70b0f1944 [07:19] well [07:19] that didn't work [07:19] and i'm out of fucks for today [07:22] fwereade: so I've made a couple comments, but now Google is telling me "file not found" anytime I click on the doc. [07:23] fwereade: It comes to mind that if we do the "this hook is only called if the other hook doesn't exist" model, then we could call it "unhandled-changes" [07:23] jam1, that's just the "missing" hook, and it'll get called 10000 times, isn't it? 
[07:23] jam1, (fwiw loading it in a new tab works for me) [07:23] fwereade: well it could be "missing" but have the semantics that it is only called once for a sequence of hooks [07:26] jam1, it's not quite working for me, but maybe I just need to think it through more [07:26] fwereade: so I mentioned in my comments that either semantic would be possible [07:26] "always queue one of these" vs "only queue if the hook wasn't there" [07:27] fwereade: I still get it to load and show the first time, but clicking anywhere says there is a problem. Can you share the doc directly with me? [07:28] jam1, shared [07:29] fwereade: thanks, it seems happier [07:30] fwereade: I just brought it up because I think it might change the name of the thing, and naming it seems problematic right now :) [07:32] jam1, I dunno, calling it once in response to any sequence of unimplemented hooks feels quite different to calling it when stable (and occasionally if it looks like we're not stabilising any time soon) [07:33] jam1, tying it to other hooks feels like complexity for no benefit -- maybe I'm just not getting the value though [07:33] jam1, you can implement install, you can implement relation-broken, etc etc [07:34] fwereade: so tying it to other hooks is because existing charms are going to have this symlink farm, and it will mean that it actually gets called significantly more often [07:34] jam1, having it so if you implement r-b you basically always see s-c, r-b, s-c, r-b, s-c, r-b, s-c, r-b, s-c when leaving a few relations -- vs r-b, r-b, r-b, r-b, s-c [07:34] jam1, ok, but the symlink farm is stupid and broken [07:35] jam1, getting rid of that is almost the whole point [07:35] fwereade: sure, getting there from here is something to be aware of. [07:36] jam1, I don't follow [07:36] fwereade: juju certainly doesn't have the concept of "I can deploy version X of charm Y", but not the latest. [07:36] jam1, symlink farms keep working until they delete everything [07:36] jam1, er, yes it does [07:36] so charms that *just implement* the new hook still need the symlink farm [07:36] jam1, ahhh ok [07:36] jam1, sorry, yes, I think we need charm feature flags much sooner than later [07:38] jam1, i reverted the network model doc to before domas changes [07:38] fwereade: so the thing with even feature flags is that Juju-1.18 could still install version 10 of the charm, it is just version 11 that it can no longer install, but juju has no way of knowing/representing that to the user. [07:38] dimitern: is that the giant paste in the header? [07:38] jam1, yep [07:38] jam1, expand please? [07:39] jam1, the charm store needs to know about feature flags too, it is true [07:39] jam1, and getting a version of 1.18 out that understands and uses them is important [07:39] fwereade: if I try to "juju deploy mysql" can it pick a version that doesn't have the features it doesn't know about ? 
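The model jam1 is probing here, and which fwereade confirms just below (the store hides only the charm *revisions* whose required features the client does not understand, so an older juju still gets some deployable revision), can be pictured with a small filter like the one sketched here. This is purely an illustration of the idea under discussion, not a real charm-store API; every type and name is made up.

    package main

    import "fmt"

    // revision stands in for a charm revision plus the feature flags it requires.
    type revision struct {
        Number   int
        Features []string
    }

    // latestSupported returns the newest revision whose required features are all
    // understood by the client, so old clients still see *a* version of the charm.
    func latestSupported(revs []revision, supported map[string]bool) (revision, bool) {
        var best revision
        found := false
        for _, r := range revs {
            ok := true
            for _, f := range r.Features {
                if !supported[f] {
                    ok = false
                    break
                }
            }
            if ok && (!found || r.Number > best.Number) {
                best, found = r, true
            }
        }
        return best, found
    }

    func main() {
        revs := []revision{
            {Number: 10},
            {Number: 11, Features: []string{"something-changed-hook"}},
        }
        // A client that declares no feature flags still gets revision 10.
        if r, ok := latestSupported(revs, map[string]bool{}); ok {
            fmt.Println("deploying revision", r.Number)
        }
    }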
[07:40] jam1, that's my expectation and assumption, yeah -- pass supported feature flags when querying charm store [07:40] the spec as I read it says that it wouldn't list "mysql" if the last version of mysql had "feature=unknown" [07:40] "charms with features not included in list" [07:40] jam1, it wouldn't list that version of mysql [07:40] is not the same as [07:40] "charm versions with features are not included in the list" [07:40] jam1, perhaps poorly worded then [07:41] fwereade: oh google docs, why is "ctrl+alt+m" write a comment but "ctrl+shift+m" is go into some weird 320x200 mobile view [07:41] the latter is far easier to type [07:41] jam1, but, er, I was not trying to propose a feature in which valid charm versions just *disappear* ;) [07:42] jam1, I was trying to convey that invalid ones are hidden from clients that can't handle them [07:42] jam1, I guess the issue is the fuzziness between charm and charm revision [07:45] fwereade: so the idea of r-b being implemented or not and how it effects queuing s-c being available. [07:45] fwereade: you missed my point a bit [07:45] the idea is that *if* r-b is implemented you get "r-b r-b r-b" full stop [07:45] if it is not [07:45] you get "s-c" full stop [07:46] when I say "queued" I meant "queued for some time in the future when we reach quiescent point" [07:46] so if you have config-changed, c-c, c-c, relation-changed, c-c, and only relation-changed and something-changed are implemented, you would get [07:46] r-c s-c [07:47] after the first "c-c" was triggered, s-c would be marked as "I want to run this when I can" [07:47] if, on the other hand, you had just "r-c r-c" then you would get exactly that, and not "r-c r-c s-c" [07:48] I don't think anyone is saying *if* r-c is implemented then immediately call s-c afterward [07:48] that would, indeed, be silly. [07:48] I realized "queued" is a bit of a bad term to use here, as it has meta meaning and real-meaning in juju hooks [07:48] fwereade: i'm thinking of moving the charm package to gopkg.in/juju/charm.v2 to avoid breaking the current API (and to try to commit to maintaining a stable API in the future). [07:49] fwereade: also with a view to potentially moving other juju packages there too [07:49] jam1, fwereade: does that sound reasonable to you? [07:50] the actual branch would be named "v2" at github.com/juju/charm [07:51] jam1: with your something-changed hook, would that mean that even if there was an hour gap between two things changing, something-changed wouldn't be called the second time? [07:51] jam1, I am certainly saying "call s-c whenever the queues clear out, however many hooks were implemented or not in the interim" === mwhudson is now known as mwhudson- [07:52] jam1, what's the issue there? [07:53] fwereade: my issue with that is that "queue clearing out" is a very fuzzy concept (it's not with our current highly dubious 5-second polling system of course, but hopefully we can move to something better in the future and i hope we can design for that) [07:54] fwereade: queue clear for... how long? [07:55] dimitern: please, take a look https://github.com/juju/juju/pull/121 [07:55] rogpeppe, a few seconds with nothing else firing? 
[07:55] rogpeppe, but the 5-second polling is so many layers away that I don't really see the relevance [07:55] vladk, looking [07:56] fwereade: without the 5 second polling, things can keep on firing indefinitely, and maybe that's ok [07:56] fwereade: well, even *with* the 5 second polling, things can keep on firing indefinitely [07:57] fwereade: are you trying to address the "fire something when we're in a `stable state'" issue? [07:57] rogpeppe, sure, that's why we fire every N minutes when not stable [07:58] rogpeppe, I'm more trying to address the "most charms are stupid and wasteful and boilerplatey" issue [07:58] rogpeppe, and I think fire-a-hook-when-stable is a good solution to that [07:58] fwereade: oh, you mean it would be nice to write a charm that just had a "tell me when something happens" hook? [07:58] rogpeppe, that's what people do already [07:58] rogpeppe, but they have to implement every hook as a symlink to their entry point [07:59] fwereade: ah, but you want to amalgamate events [07:59] rogpeppe, yes, because what happens is that the entry point gets called 30 times and returns without doing anything because not enough context is around [07:59] fwereade: presumably you can't amalgamate events for different relations? [08:00] fwereade: or do you just have an env var that can hold all the relations that have changed? [08:00] rogpeppe, and then once there's enough, it gets called another 100 times in a row, rebuilding the full service config every time, diffing against the running config and maybe replacing and bouncing the service [08:00] rogpeppe, nobody cares what's changed [08:00] rogpeppe, they all just slurp up the complete environment state and translate it into a config [08:00] fwereade: right [08:01] fwereade: the main issue with amalgamating events is that you won't get such a timely response, because you can never fire a hook immediately [08:01] rogpeppe, but I think you *converge* much faster [08:02] rogpeppe, because you don't do the same processing 100x over [08:02] rogpeppe, you do no processing 100x [08:02] fwereade: you may do - it depends how costly your hook executions are [08:02] rogpeppe, and then do the actual work just once [08:02] fwereade: if your actual work only takes a millisecond, then it's not a problem [08:02] rogpeppe, quite -- this is presented as a way of working better with the charms which are just one, big, complex hook [08:03] fwereade: given that we currently always have up to 5 second delay, perhaps that's the self-imposed queue-gathering delay is not a problem even in a non-polling system [08:04] rogpeppe, didn't quite follow that [08:05] fwereade: if you get an event, you have to wait for some length of time to gather other events before you can fire your something-changed hook [08:05] fwereade: otherwise you'll regress to always firing an event every time [08:06] fwereade: alternatively... 
[08:06] fwereade: (and probably better) [08:06] rogpeppe, or we could integrate the hook queues and have them generate an s-c whenever they empty out [08:06] fwereade: is to just fire the first hook anyway, then gather any events that happen while the hook is firing, then fire all of them at once when that completes [08:07] rogpeppe, may or may not be enough better to justify the cost [08:07] rogpeppe, I never imagined *not* firing any other hook -- just that I'd expect most of those hooks to not be implemented [08:07] fwereade: i don't really have an idea of what "integrating the hook queues" implies [08:08] rogpeppe, there's one per relation at the moment, and other hooks coming in from a variety of sources -- eg config-changed from the filter, install/start/stop according to the state machine === mwhudson is now known as mwhudson-bip [08:09] * rogpeppe goes to have a glance at the uniter source [08:11] rogpeppe, move relationId from AliveHookQueue to UnitInfo and you're quite a lot of the way there, although you probably want to maintain a linked list per relation as well alongside the global one [08:11] fwereade: perhaps life would be easier if the filter only had a single output channel [08:12] rogpeppe, mmmmm if we only had one channel for *hooks*, yes, it probably would [08:12] rogpeppe, single output chan on filter doesn't seem helpful to me [08:13] fwereade: ah, yeah, i was thinking of a single channel that modeAbideAliveLoop could be waiting on [08:13] rogpeppe, but regardless, we don't *need* any of that [08:14] rogpeppe, waiting a few seconds and firing s-c if nothing else happens, then not waiting if we just ran s-c, would I think work fine [08:15] fwereade: mmm, probably. it does feel a bit hacky though [08:16] fwereade: after all, the recipient *knows* that something has changed [08:17] rogpeppe, recipient == the charm? or the uniter? [08:17] fwereade: the uniter [08:17] fwereade: specifically modeAbideAliveLoop, though there may be other places [08:17] rogpeppe, right, but that's the uniter setting things up to tell itself when things *stop* changing [08:18] rogpeppe, and deciding then to inform the charm that s-c [08:18] fwereade: isn't that what you want? [08:18] rogpeppe, it is, we may be in violent agreement -- I think I didn't understand [08:18] after all, the recipient *knows* that something has changed [08:36] TheMue: I'd like to turn https://docs.google.com/a/canonical.com/document/d/1fPOSUu7Dc_23pil1HGNTSpdFRhkMHGxe4o6jBghZZ1A/edit# into a concrete doc in the source tree about how it was actually implemented and how people interact with the system. It is currently a little too "how do we do this" in places. Do you think you can work on that, or is it stuff that only I have in my head. [08:39] jam1: that’s a doc that will move into the Juju API Design Specification I’m currently working on [08:39] jam1: it’s pretty detailed and together with your code changes I think it’s no problem to get it [08:39] jam1: in unclear cases I’ll simply ask you :D [08:41] fwereade: FWIW i was thinking along these kinds of lines: http://paste.ubuntu.com/7694109/ [08:42] fwereade: though i'm sure it doesn't interact correctly with hook error retries, shutdowns and all that jazz [08:42] jam1: btw, thx for review. beside changing the tests and adding a failing case, do you think I should add a validity check to the API, or even deeper the RPC layer? 
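rogpeppe's paste above is not reproduced in this log, but the behaviour he and fwereade converge on, run the individual hooks as usual and fire a single something-changed once the queue has been quiet for a short interval, can be sketched with plain channels and timers as below. This is only an illustration of the coalescing idea, not the real uniter filter or hook-queue code; the quiet window and every name here are placeholders.

    package main

    import (
        "fmt"
        "time"
    )

    // coalesce consumes individual change events and calls fire exactly once
    // after no further event has arrived for the quiet duration.
    func coalesce(events <-chan string, quiet time.Duration, fire func()) {
        var timer *time.Timer
        var timeout <-chan time.Time
        for {
            select {
            case ev, ok := <-events:
                if !ok {
                    return
                }
                fmt.Println("dispatching:", ev) // implemented hooks would still run here
                if timer != nil {
                    timer.Stop()
                }
                timer = time.NewTimer(quiet)
                timeout = timer.C
            case <-timeout:
                fire()
                timer, timeout = nil, nil
            }
        }
    }

    func main() {
        events := make(chan string)
        go func() {
            for _, ev := range []string{"config-changed", "relation-changed", "config-changed"} {
                events <- ev
                time.Sleep(100 * time.Millisecond)
            }
            time.Sleep(time.Second) // let the quiet period elapse
            close(events)
        }()
        coalesce(events, 500*time.Millisecond, func() {
            fmt.Println("queue quiet: run something-changed")
        })
    }

Whether the quiet window is two seconds or ten milliseconds, as discussed above, only changes the constant passed in.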
[08:42] TheMue: I'd like a test at at least the state/api/ layer (which means probably in state/apiserver/client_test.go [08:42] rogpeppe, yeah, that's roughly what I was thinking too, although I agree it probably isn't *exactly* right as it is [08:43] vladk, reviewed [08:43] dimitern: thanks [08:43] fwereade: cool [08:44] fwereade: i thought you were considering something much more upstream than that [08:45] rogpeppe, considering, yeah -- there's something architecturall skewed about relation handling and filters and so on -- but I think the s-c can be done at that layer completely independently [08:45] fwereade: cool [08:45] rogpeppe, I just keep wanting to find excuses to look into that stuff again ;) [08:45] fwereade: FWIW the 2 second delay could probably be 10 milliseconds and it would still be useful [08:45] rogpeppe, concur [08:46] fwereade: BTW did you have an opinion on moving the charm package to use gopkg.in ? [08:47] rogpeppe, I rather like gopkg.in, I don't see why not [08:47] fwereade: cool. i've got a couple of outstanding proposals which i'd like to merge, but they break the API horribly, so i was wanting to merge only after moving to a new api version === mwhudson-bip is now known as mwhudson [09:12] dimitern: I answered some comments in https://github.com/juju/juju/pull/121 [09:13] vladk, thanks, will look in a bit === vladk is now known as vladk|offline [09:33] mornming all [09:36] jam1: do you have time to go over the multi env state server spec? Seems like we have some comments to talk about [09:36] dimitern, vladk|offline: either we need a stringswatcher, in which case we should write it as a stringswatcher, or we want a one=shot in which case we should write it as a one-shot [09:37] morning natefinch [09:38] wwitzel3: up early huh? [09:39] natefinch: woke up and it was only 30 minutes till my alarm anyway [09:39] ahh yeah, I know that one [09:42] natefinch: just reading over that email from wallyworld I'll get started on that [09:43] wwitzel3: yeah that seems good === mwhudson is now known as mwhudson-bip [09:47] jam1: i'm free wherever you are, just ping when when you have a break in your schedule [09:48] wwitzel3: i'm not sure how long that will take you, i can find more to do if you get that done :-) [09:50] fwereade: will you get a chance to review https://github.com/juju/juju/pull/124 today? I have most of the proof-of-access followup done which I plan to propose tomorrow [09:50] natefinch, jam1: based on a super-quick look at that, what I was hoping for was a plan for fixing the db to handle multiple envs, rather than a redesign of the stuff tim's been working on for the last couple of weeks -- added a couple more comments just now [09:51] wallyworld_, sorry :( [09:51] np :-) [09:51] wallyworld_, it mostly looked good, I will try to do it properly today [09:51] ok, ta [09:52] fwereade: well shit [09:53] natefinch, ehh, communications screwup, it happens [09:53] fwereade, we'll primarily watch the machine itself i think, but we'll also need to watch the req. networks (the watcher server-side will handle the machines + services deployed on it ofc, so we don't need to care about the services), raw addresses and subnets attached to the machine's interfaces [09:54] dimitern, (1) why watch the machine (2) are any of those other things relevant other than in the context of "we need to configure X network, let's look up the details of how we do so"? [09:54] fwereade: is there more to fixing the db than scoping each document's id by adding the environment UUID to it? 
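natefinch's question here is picked up in the next few messages, but for readers new to the idea, "scoping each document's id by adding the environment UUID" is just namespacing: one collection serves many environments, and every _id carries the owning environment's UUID. The sketch below only illustrates that prefixing; it is not juju's eventual scheme, and the separator, helper names and UUID are invented.

    package main

    import (
        "fmt"
        "strings"
    )

    // docID namespaces an environment-local document ID with the environment UUID
    // so documents from many environments can share one collection without colliding.
    func docID(envUUID, localID string) string {
        return envUUID + ":" + localID
    }

    // localID recovers the environment-local part of a namespaced ID.
    func localID(id string) string {
        if i := strings.Index(id, ":"); i >= 0 {
            return id[i+1:]
        }
        return id
    }

    func main() {
        env := "8a3e43f7-7d26-4f2c-89cd-0d8e2cc9b7f1" // placeholder UUID
        id := docID(env, "machine-0")
        fmt.Println(id)          // 8a3e43f7-...:machine-0
        fmt.Println(localID(id)) // machine-0
    }

As fwereade notes below, the harder parts are choosing a shard key and getting the existing apiserver/state code there incrementally, not the string manipulation itself.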
[09:54] fwereade, check the https://docs.google.com/a/canonical.com/document/d/16SYAlZFc19YPXrB7BRwufZVoeLFpqGzBTAdo4EoQIHg/edit#heading=h.idpldjoq36jf section about the networker's responsibilities [09:55] fwereade, sorry, not the machine, but other things [09:55] natefinch, there's (1) picking a shard key -- I don't think _id makes for a good one -- and (2) Getting There From Here: how we change apiserver/state to work with multi-env without requiring rewriting All The Things [09:55] dimitern, checking [09:56] fwereade, fwiw, it seems more and more we'll actually need a worker not based on a watcher though [09:57] dimitern, how can a subnet start dying when a machine's using it? I thought non-zero refcounts would block that [09:57] fwereade, but not a one-shot single use worker, something that can still handle things in a loop using multiple watchers and "watcher-like" things, i.e. monitoring ifaces [09:58] fwereade, a subnet with no enabled interfaces attached to it has a refcount of 0 [09:58] dimitern, I am suspicious that there's a big pile of necessary jobs that are all being crammed into one worker because the name's roughly related [09:58] dimitern, then why can the machine see that subnet? [09:58] fwereade, i had that feeling as well [09:58] dimitern, mainly I'm scared of another firewaller [09:58] fwereade, because it has disabled ifaces attached to it [09:59] dimitern, that thing's a horrorshow [09:59] dimitern, hmm. can't we just disable ifaces that don't correspond to desired networks, bam, done? [10:00] fwereade, so we can split the tasks in two at least - and "addresser", which handles watching interfaces as they come up and update raw addresses of the machine, as opposed to the instance poller, which calls the provider; the addresser can take care of filtering addresses as well perhaps, moving them from raw to official [10:00] dimitern, the address handling stuff is IMO nothing to do with a worker [10:01] fwereade, why? who's gonna monitor what addresses are assigned to the new interfaces and save them in state? [10:01] dimitern, SetAddresses/SetMachineAddresses *themselves* are expected to look at all the raw addresses and update the *actual* addresses in one go [10:01] dimitern, SetMachineAddresses is really just "ehh, we have a bunch of IPs, figure them out please state server" [10:02] fwereade, yes, but someone has to call that with the discovered addreses, right? [10:02] fwereade, will it be another worker or something in the MA? [10:02] dimitern, sure, that's a worker, but *all* the worker does is discover IPs and call the API [10:03] dimitern, it is not expected to do anything sophisticated with those addresses [10:03] fwereade, right [10:04] fwereade, the networker can to watch the machine's network interfaces to make sure we do the right thing when they are enabled/disabled [10:05] fwereade, which will happen as part of provisioning or deployment (i.e. when they're added in the first place, or when a unit gets deployed and the combined req. networks change, triggering interfaces being marked as enabled/disabled) [10:06] dimitern, maybe I'm being dense, but I still don't see why the networker needs to do anything other than (1) watch the list of networks and (2) every time it changes, look up relevant info on those nets from the API server, config the ones we have, and disable the ones that don't map to those reported active by the state server [10:07] dimitern, why watch the interfaces though? 
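The worker shape fwereade argues for in the exchange above, watch one collection of things and, on every change, fetch the current desired set and reconcile the machine against it, is the recurring juju worker pattern. The sketch below is only a schematic of that loop: a buffered channel stands in for the real API watcher, and the "configure"/"disable" steps are print statements; none of this is the actual networker code.

    package main

    import "fmt"

    // reconcile brings local state in line with the networks the state server
    // says should be active: configure missing ones, disable the leftovers.
    func reconcile(wanted []string, configured map[string]bool) {
        want := make(map[string]bool)
        for _, n := range wanted {
            want[n] = true
            if !configured[n] {
                fmt.Println("configuring", n)
                configured[n] = true
            }
        }
        for n := range configured {
            if !want[n] {
                fmt.Println("disabling", n)
                delete(configured, n)
            }
        }
    }

    func main() {
        // changes stands in for a StringsWatcher-style channel from the API server.
        changes := make(chan []string, 3)
        changes <- []string{"net-public"}
        changes <- []string{"net-public", "net-storage"}
        changes <- []string{"net-storage"}
        close(changes)

        configured := make(map[string]bool)
        for wanted := range changes {
            reconcile(wanted, configured)
        }
    }

The appeal of this shape, and the worry about "another firewaller" voiced above, is that all the state lives in one place and every iteration is a full reconciliation rather than an accumulation of special cases.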
[10:07] fwereade, ok it seems we need another talk on this [10:07] dimitern, yeah, probably :) [10:07] fwereade, and sequence diagrams of the order of events and who handles them :) [10:07] dimitern, you free now? [10:07] dimitern, +1 [10:07] fwereade, can we do it in 5m? [10:09] dimitern, sure [10:16] fwereade, sent you a link [10:21] fwereade: so it turns out that there is a reason to use the 'interface' style for Client facing facades. because they will be exposing both BestAPIVersion *and* at least Close. I'm not sure if there will be another common function yet or not. [10:22] And while yes, we could create 2 'simple' thunks [10:22] once we have >1 it feels better to put that as a common embed to me. === vladk|offline is now known as vladk === vladk is now known as vladk|offline === vladk|offline is now known as vladk [11:21] morning everybody === vladk is now known as vladk|offline === vladk|offline is now known as vladk [11:24] morning perrito666 [11:33] sinzui: did we manage to get 1.19.4 out? [11:51] wallyworld_: is it just me, or is the bot way worse than usual today? [11:51] not just you, does seem bad :-( [12:09] wallyworld_: if your still around, poke [12:09] hi [12:09] fwereade, OpenPort in in state/unit.go - this looks like a bug - just checking it's not intended behaviour: https://github.com/juju/juju/blob/master/state/unit.go#L653 [12:09] wallyworld_: can you meet me over here: https://plus.google.com/hangouts/_/canonical.com/juju-sapphire?authuser=1 [12:09] sure [12:33] bac, dimitern, jam1, wwitzel3, mgz, natefinch, perrito666, wallyworld_: here's the start of bundles implementation in the charm package; reviews very much appreciated: https://github.com/juju/charm/pull/9 [12:58] natefinch: poke [12:58] jam1: howdy [12:59] axw: finally landed :-/ [12:59] natefinch: hey, sorry we didn't get the focus on multi-environment stuff, I do feel we need to work on it. however, we have a release blocker bug (I believe) === vladk is now known as vladk|offline [13:00] the gccgo stuff? [13:01] natefinch: I think I fixed those, this is an upgrade failure [13:01] https://bugs.launchpad.net/juju-core/+bug/1333682 [13:01] <_mup_> Bug #1333682: upgrading 1.18 to 1.19 breaks agent.conf [13:01] specifically, juju 1.19 expects there to be an "apiaddresses" line in agent.conf and panics on a null pointer dereference if it isn't there. [13:01] natefinch: but 1.18 doesn't have it, and nothing seems to *put* it in there. [13:03] jam1: mm, shouldn't peergrouper do that? [13:04] perrito666: peergrouper can't come up because the data isn't there and we try to connect to the state with a nill API address [13:04] well, maybe maybe not it can come up, It isn't quite clear, but the process itself panics [13:05] perrito666: natefinch: this *might* not be a 1.19 regression, because it appears 1.18.1 is putting the line in agent.conf [13:05] so the bug is that potentially these people are upgrading from originally 1.16 [13:05] and 1.18 doesn't add the line, but doesn't care if it isn't there [13:05] and 1.19 now expects it to be there. [13:06] morning all [13:07] morning bodie_ [13:15] mramm: ping [13:16] jam1: pong [13:16] mramm: were we doing 1:1 call now? [13:16] oh, I'm in the cloudbase sprint [13:16] I can drop out and come over to the 1 on 1 [13:16] be there in one min === vladk|offline is now known as vladk [13:47] bzr branch takes a bit.... [14:08] natefinch: standup [14:16] wwitzel3: sorry, in a meeting with the cloudbase guys. Can you guys deal for now? 
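Returning to the release blocker jam1 describes earlier in this stretch (bug 1333682: a 1.19 agent assumes agent.conf contains apiaddresses and hits a nil pointer dereference when a config originally written by 1.16 lacks it), the underlying pattern is a missing-field check plus an upgrade step that back-fills the value. The sketch below is not juju's agent config code; the struct, field and messages are invented purely to illustrate failing with an error instead of panicking.

    package main

    import (
        "errors"
        "fmt"
    )

    // agentConf models the slice of agent.conf we care about; APIAddresses is
    // nil/empty when the file predates the field.
    type agentConf struct {
        APIAddresses []string
    }

    // apiAddresses fails loudly rather than letting callers dereference nil later.
    func (c *agentConf) apiAddresses() ([]string, error) {
        if len(c.APIAddresses) == 0 {
            return nil, errors.New("agent.conf has no apiaddresses; an upgrade step must populate it")
        }
        return c.APIAddresses, nil
    }

    func main() {
        old := &agentConf{} // what a config originally written by juju 1.16 parses into
        if _, err := old.apiAddresses(); err != nil {
            fmt.Println("refusing to open API connection:", err)
        }
    }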
[14:17] natefinch: we got started without you :) [14:17] natefinch: yep ^ [14:24] dimitern: ping [14:25] mgz, you around? [14:26] natefinch: will you be available for our 1on1 in a few minutes? [14:26] perrito666, hey [14:26] ericsnow: can we move it to this afternoon? [14:26] sounds good [14:27] alexisb: yup [14:28] mgz: ah you here too [14:28] good [14:28] mgz, can you reach out to perrito666, he is working a critical bug and could use your great wisdom :) [14:28] perrito666: feel free to bug me :) [14:29] jam1: any chance to take another look at https://github.com/juju/juju/pull/150 [14:29] jam1: ? [14:30] thanks mgz, this is important given it is our latest block for the release which is now a week behind :) [14:31] TheMue: my immediate thought is that "juju set" isn't returning an error code when it gets bad data? [14:34] jam1: thought about it too, but it would also be for names or any string arguments which pass the API [14:37] jam1: best would be imho if here no invalid encoded data would pass [14:37] TheMue: reviewed [14:38] jam1: thx [14:42] jam1: Main() so far returns no error as it doesn’t recognize it as error. as I said, to recognize it we would have to check any value for valid utf-8 best on rpc level [14:49] is there a way to connect to the mongo db created by a test ? [15:03] tasdomas: you mean manually? by default they get cleaned up at the end of the test, so they go away... but you can comment out the cleanup code if you want to go poke at the DB by hand [15:03] is anyone else having juju report dns-name for local deploys as localhost? [15:04] I have a bug problem on precise with local provider https://gist.github.com/anonymous/9ecb23a51844627028b0 Anyone able to point out something here? Here's a bug that maybe somewhat related https://bugs.launchpad.net/juju-core/+bug/1330406 ?? [15:04] <_mup_> Bug #1330406: juju deployed services to lxc containers error executing "lxc-create" with bad template: ubuntu-cloud [15:04] natefinch, wwitzel3: what you you think about https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1328958 [15:05] automatemecolema: maybe you need to update lxc? [15:05] The summary is juju requires the ubuntu user to be on the client machine...except I don't and I don't know anyone who does [15:06] I have never seen this issue...and if this is about server images, they come with ubuntu so juju local still works [15:06] sinzui: I think we create the user in cloud init if it doesn't exist [15:07] natefinch, right, but the issue here is local host machine your desktop [15:07] oh sorry, I missed that it was local [15:07] natefinch, the juju client does not create the ubuntu user on the host machine [15:07] sinzui: no, I wouldn't expect it would [15:09] * fwereade was working at 1am and 8am today, calling it a day now [15:10] * fwereade probably back on later, ping me if you need me [15:11] natefinch, thanks - I'm actually suspending the test using a sleep [15:15] natefinch, I think the issue with https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1328958 is that the localhost is a server image. juju recognises that, and then wants to use the ubuntu user. This works most of the time because servers always come with ubuntu user. [15:16] natefinch, The error implies that something in .juju/local is owned by ubuntu...I have never seen that on any of the 5 machines that test local. 
I would be looking though unless the test failed [15:18] tasdomas: that works [15:19] natefinch: I can't however, find the correct username/password to log into the db [15:19] perrito666: ^^ [15:19] * perrito666 reads [15:20] perrito666: I know you were just talking about that... where do you get the username/pw? [15:20] natefinch: well I followed the juju guide on what packages needed to be installed [15:20] natefinch: var/lib/juju/agents//agents.conf [15:20] tasdomas: ^ [15:20] natefinch: apt-get install juju-local linux-image-generic-lts-raring linux-headers-generic-lts-raring [15:20] user==Tag: password=apipassword iirc [15:21] perrito666, natefinch thanks [15:21] natefinch: not sure how more up to date I can get with lxc on precise?? [15:28] hey guys is there documentation for people who want to interact with the juju api like the GUI does? [15:28] automatemecolema: that sounds valid.... [15:28] hatch: no, sorry [15:28] natefinch ok np [15:29] natefinch: so I think I have a bug then.... [15:29] hatch: right now I'm recommending that if you want to script against juju that you do it using the CLI, since that's a lot better documented and much more suitable for human consumption [15:29] hatch: which I know doesn't work for a lot of use cases [15:29] (like a web gui) [15:30] natefinch yeah I'm sure the primary use cases would work with the CLI, but we should probably document it sometime :) [15:31] hatch: it's on the list of things to do, definitely. You're far from the first to ask about it. [15:31] natefinch, if only we had unlimited resources :) [15:32] fwereade, ping? [15:32] hatch: yep [15:32] automatemecolema: precise definitely works.... oh maybe you need to set default series [15:36] automatemecolema: in ~/.juju/environments.yaml add default-series: precise to the local provider section [15:37] automatemecolema: and/or edit ~/.juju/environments/local.jenv to add default-series: precise [15:40] natefinch: trying your suggestion out right now [15:43] natefinch: so I can't run a trusty charm with a bootstrapped precise environment? [15:44] the problem is trying to deploy trusty/juju-gui when my bootstrap node is precise [15:45] you should be able to deploy that to something other than the bootstrap node (like you couldn't do juju deploy trusty/juju-gui --to 0) [15:45] Yea, but every time I tried that it failed with an lxc-create problem [15:45] I seem to remember there being a problem running trusty containers on precise..... but I forget exactly what the problem was [15:46] automatemecolema: so.... generally when I hit an lxc problem with the local provider, I just reboot, because most of the time it's lxc getting itself wedged in a bad state [15:49] yea I tried a reboot, that didnt work out === vladk is now known as vladk|offline [15:52] mgz: any ideas to help out ^^? [15:54] natefinch: there were issues in the original juju release especially without a default series and such defined. [15:56] natefinch: maybe you were thinking of https://bugs.launchpad.net/ubuntu/+source/juju-quickstart/+bug/1306537 ? [15:56] <_mup_> Bug #1306537: LXC local provider fails to provision precise instances from a trusty host [16:04] rick_h_: ahh that may have been what I was thinking of [16:56] natefinch: alexisb I need to step off for a moment, I did not manage to fix the upgrade issue (I did nevertheless manage to break part of my env) Ill be back later but if anyone else wants to take a look he/she is welcome [18:54] hi sinzui, are you running jenkins for any go projects? [18:55] bac no. 
which ones interest you [19:00] natefinch: ping [19:00] ericsnow: coming [19:19] back [19:20] anyone clear enough on the State.Caller interface to tell me where to look for the point where I can fake it in my API client test? [19:21] basically I just want to have st.caller.Call("WatchActions", blah blah) use my mocked-up function instead of the real one [19:22] however, since caller isn't an exported field of uniter.State, I'm not sure where I can poke the new function (I'm using testing.Patch to replace the function) [19:22] the example I'm working from is state/api/usermanager/client_test.go but that appears to use a slightly different call technique [19:38] how do I specify to upgrade-juju --version (usinglocal provider) my own stream? [19:38] I want to go to 1.19.4 from 1.18.1.1 [19:44] perrito666, local provider ignores streams, it is impossible...we reported the bug months ago [19:45] sinzui: mmpf [19:46] perrito666, we have test streams in aws, hp, and joyent that you can use [19:48] sinzui: its ok, I have my own ;) I just thought that reproducing this bug was possible locally [19:49] perrito666, If you have both juju's installed, I think you can use the lower juju to upgrade (downgrade) [19:49] perrito666, one report of this bug was about local host I thought [19:50] perrito666, since local ignores streams, and only provides a crippled subset of archs, that the machine had two version of juju installed... [19:50] perrito666, and since few people know about the need to strictly define $PATH, the two jujus can mix [20:11] Can we call on user-data scripts / cloud init customizations at any time during the boot process? i dont see any documented constraints i can pass to populate user data on my cloud provider. [20:11] s/boot/provisioning [20:19] lazypower: no [20:19] menn0: morning :) [20:19] lazypower: we don't expose that [20:19] natefinch: ok, so if we cant modify user data, can we specify custom AMI's then? or are we bound to the official ubuntu images? [20:20] the idea behind this is to eliminate bloating charms install hooks installing a-z toolkits that will be required on every machine for compliance. [20:21] wait, we do this. i foudn an AU post on it. [20:21] or rather, we did but dont now [20:21] http://askubuntu.com/questions/84333/how-do-i-use-a-specific-ami-for-juju-instances [20:22] lazypower: yeah, right now it's not implemented [20:22] ok, thats all i needed. Thanks natefinch [20:22] * lazypower doffs hat [20:23] welcome [20:38] natefinch: did anyone on your team make any progress on bug1333682? [20:38] Hey guys is there a way to set environment variables for the local machine/units in Juju? [20:39] wallyworld, I believe that is the one perrito666 is working on [20:39] let me go verify the number [20:39] I was reading the documentation that describes changing the environment, but I am specifically interested in environment variables. [20:40] alexisb: ok, thanks. the bug is still marked as triaged [20:40] rather than in progress [20:40] mbruzek: doesnt juju set-environment do that? [20:40] wallyworld, yeah perrito666 has been looking at that one, but he will have to update you on progress I am not sure how far he has gotten [20:40] mbruzek: https://juju.ubuntu.com/docs/commands.html -- see: set-env [20:41] lazypower, The set-environment command will set a configuration option to the specified value. [20:41] ah, since we use environment interchangeably in our jargon, i see what you're saying. Thats wrt your juju env, not the bash env correct? 
[20:41] set-env only sets juju config values [20:41] I think that means that it will set a config option in our environments.yaml [20:41] i'm not sure about how to set environment variables [20:42] no, it will set a config value in a running deployment [20:42] environments.yaml is only used when first bootstrapping [20:43] after that, juju maintains a database on the state servers which contain the system config, plus also a local jenv file with certs and things like that [20:43] mbruzek: a perhaps crappy solution is to use juju run [20:44] that command runs a given set of commands on each machine/unit [20:44] the script that can be run could include export FOO=bar [20:44] yeah, that was my thought [20:45] i think that's the only way at the moment [20:45] OK thanks wallyworld and natefinch for responding [20:45] good luck, ping back if you get stuck [20:47] alexisb: wallyworld I saw him reply in one of these channels, looking for this comment on progress [20:47] alexisb: wallyworld it was basically, someone can have fun with it tonight [20:47] rick_h_: you talking about bug 13333682? [20:48] oh hmm, that was some 4hrs ago so maybe there's more [20:48] wallyworld: an upgrade issue? [20:48] perrito66| natefinch: alexisb I need to step off for a moment, I did not manage to fix the upgrade issue (I did nevertheless manage to break part of my env) Ill be back later but if anyone else wants to take a look he/she is welcome [20:48] rick_h_: yeah, the panic [20:48] was the last thing I saw related in irc [20:48] fyi [20:48] rick_h_: great, thank you [20:49] rick_h_, I think perrito666 has since come back, but no sure if he is still around [20:49] eitherway, wallyworld I would consider that bug fair game [20:49] it needs to get resolved [20:50] and it is perrito666 eod [20:50] alexisb: yes indeed. i just wanted to see where others may have got to before i started my day [20:50] alexisb: yea, I hadn't realize how long ago that was [20:50] time flies when you're fixing bugs [20:50] alexisb: this 1.19 release sure is cursed :-( [20:51] hopefully this will be the last blocker [20:53] * alexisb keeps her fingers crossed === automate_ is now known as automatemecolem_ [21:05] alexisb: looking for me? [21:06] perrito666, wallyworld was looking for an update on the bug given his team will be working the "night" shift on it [21:06] perrito666, can you please touch base with wallyworld ? [21:06] s/team/wallyworld :-) [21:09] wallyworld: let me touch your base [21:09] :p [21:09] ooooh [21:09] * wallyworld braces [21:14] lol [21:14] * perrito666 touched bases with wallyworld in priv [21:17] ok I dissapear, Ill be back later [21:17] wallyworld: anything else before I leave? [21:18] perrito666: nah, thanks, enjoy your evening. you can touch my base anytime :-) [21:18] wallyworld: http://www.youtube.com/watch?v=z13qnzUQwuI [21:18] lol [21:19] bye [21:21] wallyworld, I will be happy for you to declare the bug not a regression. 
I can release what we have while this bug is fixed for the next release [21:21] alrighty all, I am headed into town [21:22] I will check back in later this evening, if you need me before then email or call my cell [21:22] see ya === alexisb is now known as alexisb_bbl [21:22] sinzui: i am still ramping up but there's a line in the bug comment which says "So potentially it is a different bug, which is that 1.19 is actually *removing* the line that used to be there" [21:23] if that's the case, then we do have a problem it seems [21:23] I defer to your judgement === mwhudson-bip is now known as mwhudson === hatch__ is now known as hatch === makyo_ is now known as Makyo [21:41] menn0, I really want to avoid redundancy at high time cost in these tests, we have a meeting with Mark approaching Friday and we're really trying to push the Actions down through these layers to the RunHook call [21:42] I understand your concern over the test coverage, but I think the api client methods here should really only be tested as far as their responsibility goes, I'm not positive we need to add redundancy here [21:44] e.g., the StringsWatcher is being checked for duplication at the State level; therefore, if duplicates are coming in, it seems that would be an issue with the st.call() method [21:44] which itself is being exercised elsewhere [21:47] bodie_: I think we're misunderstanding each other. I'm not talking about anything particularly heavyweight. [21:47] bodie_: I'm just interested in seeing tests that hit the error handling lines that aren't currently being tested in state/api/uniter/unit.go:WatchActions [21:48] yeah, I was just discussing with jcw4 -- that function variable could be returned by the st.call() method [21:48] specifically 423, 426 and 430 [21:48] bodie_: I'll whip up a quick example of what I mean ... it won't take long [21:49] we spent some time trying to emulate the usermanager Patch technique, but we were having trouble I think since we weren't using the function var as you'd mentioned [22:01] bodie_: this is what I'm thinking: http://paste.ubuntu.com/7697305/ [22:01] untested but should be close [22:02] fwereade: still around? [22:03] bodie_: saw your comments about the time pressure you're under. Feel free to leave this bit until later if need be. [22:03] thanks, I think this should be pretty straightforward to implement :) [22:04] menn0, I was actually thinking of simply inserting a var call = ... which is then called by the existing st.call() function -- thus requiring no refactor [22:04] any reason not to do so? [22:04] besides being horribly lazy... [22:06] bodie_: that should be ok as well I think [22:08] bodie_: you'll probably still need to do something in export_test.go or the tests won't be able to get to call to patch it. [22:09] yep, got that bit in [22:09] thanks for the code example, btw! [22:10] I'm actually really happy about how close we are to finally getting Actions pulled together, the last few inches can be a little frustrating at times === tvansteenburgh1 is now known as tvansteenburgh [23:26] and pr 141 updated [23:26] menn0, would much appreciate a clean bill of health ;) [23:28] bodie_: in a call... should be done soon [23:37] sinzui: i have a theory as to what's happening. it's only a guess based on reading the code.
i will make a trivial change which may help but i cannot be certain it will definitively fix things https://bugs.launchpad.net/juju-core/+bug/1333682/comments/5 [23:37] <_mup_> Bug #1333682: upgrading 1.18 to 1.19 breaks agent.conf [23:44] I wonder how much work will be required to support this architecture? http://www.engadget.com/2014/06/23/russian-government-avoids-intel-amd-chips-for-baikal/ [23:44] well it's a cortex a57 [23:45] so once it's booted you should be ok... [23:57] thumper: you got time for a trivial review? critical fix for 1.19.4 release https://github.com/juju/juju/pull/155
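To close out the WatchActions testing thread from earlier in the evening: the approach bodie_, menn0 and jcw4 settle on, declare the RPC call as a package-level function variable, expose it through export_test.go, and patch it from the test, looks roughly like the sketch below. This is a generic illustration of the pattern rather than the actual state/api/uniter code; the names are invented, and the Patch helper mentioned in the log is replaced with a plain save-and-restore so the example stands alone.

    package main

    import (
        "errors"
        "fmt"
    )

    // call is the indirection point: production code installs the real RPC call
    // here, and tests overwrite it to drive the error-handling branches without
    // a live API server.
    var call = func(method string, result interface{}) error {
        return errors.New("no API connection in this standalone example")
    }

    // watchActions stands in for the client method under test.
    func watchActions(unit string) error {
        var result struct{ WatcherID string }
        if err := call("WatchActions", &result); err != nil {
            return fmt.Errorf("watching actions for %s: %v", unit, err)
        }
        return nil
    }

    func main() {
        // What a test would do via a Patch-style helper: swap in a fake and
        // restore the original afterwards.
        orig := call
        defer func() { call = orig }()
        call = func(method string, result interface{}) error {
            return fmt.Errorf("boom in %s", method)
        }
        fmt.Println(watchActions("wordpress/0")) // exercises the error path
    }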