[01:32] thumper: hi, just taking the dog to the vet. i'm just about to propose the status stuff. took ages to get the new tests passing. i ended up exporting ParentId() and adding a couple of other helpers
[01:32] i'll do the watcher stuff next
[01:32] wallyworld_: ok
[01:33] the status stuff supported arbitrary nesting
[01:33] but we don't use it yet - it will be there for when we do
[01:33] bbiab
[02:23] thumper: found another problem - there are docs ordered by machine id which do an Atoi() but container ids fail that since they are not numbers. so i'll have to find a way around that
[02:24] hahaha
[02:24] bummer
[02:25] yeah, i'll need to split the id and process from left to right
[03:00] thumper: we you have a moment https://codereview.appspot.com/10252044
[03:01] when
[03:07] thumper: question about container watchers - i'll first do a lifecycle watcher. does that suit your needs?
[03:18] wallyworld_: hey,
[03:18] ho
[03:18] wallyworld_: I think that is what I need
[03:18] ok, can add an entity watcher later if needed
[03:18] what's the difference?
[03:19] i think lifecycle is created/deleted etc, entity is state changes and such
[03:19] not sure exactly yet
[03:19] i'll know once i write some tests
[03:28] wallyworld_: I want a watcher like the one that the provisioner watches :)
[03:28] don't know what type that is
[03:28] ok
[03:58] thumper: https://codereview.appspot.com/10250044
[04:06] wallyworld_: soon
[04:06] thumper: no problem, just wanted to let you know they were there
[04:06] kk
[04:48] * thumper crosses fingers
[04:54] * thumper waits for ec2 to bootstrap
[05:11] wallyworld_: http://paste.ubuntu.com/5760381/
=== tasdomas_afk is now known as tasdomas
=== thumper is now known as thumper-afk
[07:29] mornin' all
=== thumper-afk is now known as thumper
[07:47] hi rogpeppe1
[07:47] thumper: yo!
[07:48] rogpeppe1: http://paste.ubuntu.com/5760381/
[07:48] rogpeppe1: manual poking to get to this stage
[07:48] but the bits are working
[07:48] well, mostly working
[07:49] thumper: why the lack of quotes around the container name?
[07:49] thumper: good stuff BTW!
[07:49] rogpeppe1: yaml output
[07:49] needed around 0 so it's not an int
[07:49] thumper: ah of course. a number needs to be quoted
[07:49] but 1/lxc/0 is obviously a string
[07:49] so no quotes needed
[07:50] still using manual hackery to create a provisioner on the machine
[07:50] but it is using the provisioner task and lxc broker
[07:50] with a real cloud-init to start the machine agent
[07:51] * thumper jumps into the meeting hangout
[07:55] thumper: saw the pastebin earlier, looks nice, will look even nicer with my juju status branch landed :-)
[08:01] mgz: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
[08:02] wallyworld_: ^^
[08:02] oh different
[08:02] TheMue: ^^^^^^^^^^
[08:03] wallyworld_: where did you get your link from? (This one was on the calendar)
[08:03] jam: the calendar
[08:04] wallyworld_, jam, I think I'm in the calendar one too... https://plus.google.com/hangouts/_/bf3f4cfe715017bf60521d59b0628e5873f2a1d3
[08:04] * fwereade__ leaves
[08:04] https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.mut2dk4mvoj39eq8jqni20ukoc
[08:04] https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.mut2dk4mvoj39eq8jqni20ukoc
[08:05] fwereade__: yes, thumper stuffed up :-)
[08:07] thumper: were you going to chat about containers? perhaps a joint meeting with me too?
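The container-id problem raised at [02:23] is that a flat Atoi() on the machine id breaks once ids can nest (e.g. "1/lxc/0"). A rough Go sketch of the left-to-right split described at [02:25]; parseMachineID and the alternating number/type segment layout are illustrative assumptions, not the actual juju-core code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMachineID splits a possibly-nested machine id such as "1/lxc/0" and
// validates it left to right: even-indexed segments must be numbers (machine
// or container sequence numbers), odd-indexed segments are container types.
func parseMachineID(id string) ([]string, error) {
	parts := strings.Split(id, "/")
	if len(parts)%2 == 0 {
		return nil, fmt.Errorf("invalid machine id %q", id)
	}
	for i, p := range parts {
		if i%2 == 0 {
			if _, err := strconv.Atoi(p); err != nil {
				return nil, fmt.Errorf("invalid machine id %q: %q is not a number", id, p)
			}
		}
	}
	return parts, nil
}

func main() {
	for _, id := range []string{"0", "1/lxc/0", "bogus"} {
		parts, err := parseMachineID(id)
		fmt.Println(parts, err)
	}
}
```

Sorting by the parsed segments rather than by a single Atoi() of the whole id also keeps "1/lxc/0" ordered after "1" without treating it as a number.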
[08:07] wallyworld_: I wanted to discuss the provisioner refactoring
[08:08] ok, i have some questions as well
[08:43] hey all,
[08:43] sorry I set my alarm wrong
[08:43] just went off
[08:43] and I realize the team meeting is not starting in 15 min, it started 45 min ago
[08:45] mramm: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
[09:05] jam: did you guys chat about when tarmac is going to "land"?
[09:06] today
[09:06] mramm: I'll be making it live today. Everyone seemed happy with the plan I proposed.
[09:07] cool
[09:07] sounds good
=== tasdomas is now known as tasdomas_afk
=== tasdomas_afk is now known as tasdomas
[09:21] thumper: quick poke about the "InParallel" test.
[09:22] What if we change it from sleeping, to having one subprocess block until the other one is started?
[09:22] jam: yes...
[09:22] so it would wait for a file that the other one creates, for example
[09:23] I don't really care how it is changed as long as it is testing the parallel gathering
[09:23] you could have subprocess a create file a and wait for b
[09:23] and vice versa
[09:23] so they must both be called at the same time
[10:15] fwereade__: ping
[10:35] TheMue, pong
[10:35] TheMue, still eating lunch
[10:36] TheMue, I may have failed to communicate that if your tests depend on some specific number of events they're Doing It Wrong
[10:36] TheMue, but if the tests just watch for events until there are no more cleanups to perform... then... hmm. there may actually still be a race. bah
[10:37] TheMue, yeah, there's a race, but it only affects the followup
[10:37] TheMue, so all you need to do is watch for cleanup events until you get one and there aren't any more cleanups required
[10:38] TheMue, at that point you can be sure that the cleaner did its job
[10:38] fwereade__: https://codereview.appspot.com/10234047 makes a test that occasionally fails due to timing hiccups into a faster and reliable 'event' sort of mode.
[10:38] Something I'd like to land before switching to tarmac
[10:39] jam, LGTM
[10:39] fwereade__: Yeah, it does. Only want to talk about the timeout with you.
[10:39] TheMue, ah, go on
[10:41] fwereade__: You wondered about a "typo", but sadly it is none. The watcher in the test gets the second event (where then no more cleanups are needed, I followed it) only after such a long time.
[10:41] fwereade__: Maybe you have an idea of the reason.
[10:41] fwereade__: Because that definitely should not be.
[10:41] TheMue, ah, are you not syncing in your select loop? StartSync at the start of that loop, sorry I missed it
[10:41] TheMue, default sync period is 5s
[10:42] TheMue, ha
[10:42] fwereade__: IMHO I've tested it also with StartSync in the loop, but I'll verify it.
[10:42] TheMue, no, I know what's going on, sorry
[10:42] fwereade__: Oh *listening*
[10:42] TheMue, StartSync just at the top of the loop is not good enough
[10:43] TheMue, because the watcher event could come in after that
[10:43] TheMue, you need to keep on independently StartSyncing as you go through
[10:43] TheMue, with StartSync at the top, `case <-time.After(50 * time.Millisecond): continue` should do the trick
[10:44] mgz: can you look at https://codereview.appspot.com/10234047 so I can land it?
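The loop shape fwereade__ describes at [10:42]-[10:43] — sync at the top of every pass plus a short timeout case that loops back — looks roughly like this. This is only a schematic sketch: sync and needsCleanup stand in for State.StartSync and the real cleanup query, they are not the actual juju-core test helpers.

```go
package watchertest

import (
	"errors"
	"time"
)

// waitForCleanup watches for cleanup events until one arrives and no cleanups
// remain. It calls sync() on every pass, so an event scheduled after the
// previous sync is still flushed promptly instead of waiting out the default
// 5s sync period.
func waitForCleanup(changes <-chan struct{}, sync func(), needsCleanup func() bool, limit time.Duration) error {
	timeout := time.After(limit)
	for {
		sync() // StartSync equivalent, run at the top of every iteration
		select {
		case _, ok := <-changes:
			if !ok {
				return errors.New("watcher closed")
			}
			if !needsCleanup() {
				return nil // the cleaner has done its job
			}
		case <-time.After(50 * time.Millisecond):
			continue // no event yet: loop back and sync again
		case <-timeout:
			return errors.New("timed out waiting for cleanup event")
		}
	}
}
```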
[10:44] fwereade__: ah, push harder, nice idea
[10:44] fwereade__: will add it and come back to you
[10:45] fwereade__: but please don't let me disturb you at lunch, i'm sorry
[10:45] jam, fwiw, I think there may still be branches approved that aren't necessarily ready to land, please unapprove them before you start tarmaccing ;)
[10:46] looking at it
[10:47] and ugh, shell
[10:47] mgz: :)
[10:48] fwereade__: just one last thing: fantastic, it works!
[10:48] fwereade__: thanks
[10:53] mgz: so have I turned you off so completely?
[10:53] jam: commented
[11:05] mgz: thanks
[11:13] * TheMue => lunchtime
[11:30] mgz, jam: hey, https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3 :)
[11:31] ta
[12:42] mgz, I am back, fwiw (and actually already fixing another annoyance I hit)
[12:43] likewise
[12:44] anybody want to take a peek at a trivial branch: https://codereview.appspot.com/10244044
[12:47] fwereade__: any good idea on how to test the resumer?
[13:09] jam: you had a fix for the StateSuite.TestOpenDelaysRetryBadAddress failure or was that something else?
[13:22] TheMue, none whatsoever -- just start it and stop it I reckon
[13:22] TheMue, (sorry I missed you)
[13:23] TheMue, we'd have to inflict damage to the transaction log on purpose to test it, and spending time on that in particular seems like a bit of a waste of effort
[13:23] fwereade__: no prob, i'll add this. the rest has been simple. cleaner is also proposed again
[13:24] fwereade__: yep, think so too. i took a look at how ResumeAll() works, and here we would have to do evil stuff under the hood :)
=== wedgwood_away is now known as wedgwood
[14:25] mgz or danilos: the juju cross-team meeting is supposed to happen in about 10 minutes, could one of you make it?
[14:25] I can if needed
[14:26] mgz: thanks, I can't really make it today
[14:26] (today is my wife's b-day)
[14:26] mgz: https://plus.google.com/hangouts/_/calendar/YW50b25pby5yb3NhbGVzQGNhbm9uaWNhbC5jb20.2ijeg1lk8l6gq6d4ilo6jot1j4 is the link from email
[14:26] ta
[14:49] tarmac help needed. i've got an "old" branch now with two lgtms (after a final local change). now i've committed and merged it locally. but i'm not sure about the reproposal for launchpad.
[14:50] see the instructions in jam's post to the list
[14:51] mgz: i've seen them, but still have problems/questions. if i set my branch to approved now the final changes won't be in there.
[14:52] you just go to the branch webpage, and repropose against lp:juju-core
[14:53] with "resubmit proposal"?
[14:53] yup
[14:54] and my still-local changes? simple bzr push before resubmit?
[14:55] yes
[14:56] that sounds good, thanks
[15:04] so, now i'm curious how it will work ;)
[15:08] TheMue, mramm: ready?
[15:08] so, meeting
[15:08] yep
[15:08] hangout?
[15:08] yep
[15:08] mramm, TheMue, I'll start one
[15:22] just going to pick up the van from its MOT test. back in 15 mins.
[15:37] fwereade__: ping => https://codereview.appspot.com/10266043
[15:37] TheMue, cheers
=== tasdomas is now known as tasdomas_afk
[16:56] mgz, do you have a release tarball for 1.11.0?
[16:59] jamespage: no, but I can make one
[16:59] it just won't be what was actually used (as I haven't switched dave onto the new method yet)
[16:59] mgz, great - it would be nice to stuff a new release into saucy as we make it the default
[18:09] so juju-core trunk now wants a service name in the config yaml file?
[18:09] https://code.launchpad.net/~fwereade/juju-core/config-5-state-service-config-yaml/+merge/168579
[18:09] just checking
[18:47] time to stop. g'night all.
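On the [18:09] question: with a service-name-keyed config file, options are nested under the name of the service they apply to rather than sitting at the top level. A small illustration, assuming the format matches what the linked merge proposal describes; the service and option names here are invented for the example:

```yaml
# passed via `juju deploy --config config.yaml mysql`
mysql:
  dataset-size: 512M
```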
[21:34] morning folks
[21:49] fwereade__: ping
[22:04] * thumper bootstraps ec2
[22:04] geez this takes a while...
[22:13] mramm2: ping
[22:15] jcastro: ping?
[22:16] m_3: ping?
[22:17] * thumper feels the loneliness of being at utc+12
[22:18] yo thumper
[22:18] jcastro: hey man
[22:18] I am in the middle of BBQing, so in and out
[22:18] so keep talking
[22:18] jcastro: what is a nice simple charm to deploy?
[22:18] jcastro: the first one to be in an lxc container in ec2
[22:19] thumper: pong
[22:19] what's up?
[22:19] m_3: just feeling lonely :)
[22:19] ha
[22:19] m_3: wanting an easy charm for testing
[22:19] mysql
[22:19] it's my go-to for when I need to precache lxc and stuff like that
[22:19] precache?
[22:21] thumper: wordpress is our de facto
[22:21] etherpad-lite has no deps though
[22:21] if you only want a single container with no relations
[22:21] thumper: yeah, the juju-0.7 workflow was pretty flawed wrt lxc
[22:22] thumper: you'd run juju bootstrap and that'd run zk and prep stuff
[22:22] but the lxc template images wouldn't build out (or download cloud images) until the first time you actually _deployed_ a service
[22:23] um... without letting the user know what was going on
[22:23] :)
[22:23] so it'd just quietly block while downloading
[22:23] you'd naturally kill it
[22:23] and wind up with a halfway-downloaded image in /var/cache/lxc
[22:24] let's not spoil him with how it sucked
[22:24] ha
[22:24] sorry
[22:24] he asked
[22:24] that way he can come up with something awesome without being jaded like we are
[22:24] yes, /me will sit on his hands
[22:24] "and then, we asked 10 people to download 450mb of images on conference wireless for this charm school"
[22:24] hahahah
[22:24] <--- what were we thinking
[22:24] * thumper has some thoughts around that
[22:25] jcastro: what if we wrote a simple go http server for you
[22:25] jcastro: so at a charm school, you could run this, and it serves just the ubuntu-cloud lxc template
[22:25] people could then run wget to grab it locally
[22:25] we can cache cloud images lots of ways
[22:25] lxc builds nicely from cloud images now
[22:27] I also liked having the download happen during package install... i.e., if the local provider is in its own package, it can get the image templates during package install
[22:27] people are used to waiting on apt
[22:27] thumper: but really we're open to whatever makes sense man
[22:28] well, I can see how to make the local provider now that I've messed around a lot with lxc
[22:28] but containerisation first, then network, then local provider
[22:28] but it is coming
[22:28] * m_3 loves lxc
[22:30] thumper: that sounds like a plan
[22:40] thumper: you happy with the juju status mp now?
[22:40] wallyworld_: sorry, not looked at it
[22:41] trying to work out why what worked yesterday isn't today
[22:41] np. fat chance i'll get a 2nd +1 today :-(
[22:45] at least I have worked out why my programs were hanging
[22:48] reason?
[22:49] hmm...
[22:49] lxc isn't nice
[22:49] golxc runs lxc-start
[22:50] then waits forever for it to be running
[22:50] if there is an error in the config somehow...
[22:50] then it won't start
[22:50] and lxc-start doesn't return a non-zero exit
[22:50] also found
[22:51] that if I pass the lxc.conf file to lxc-create, the mount works
[22:51] if I only pass it to lxc-start, the mount fails
[22:51] so it doesn't start
[22:55] * thumper quietly rages
[22:58] wallyworld_: http://paste.ubuntu.com/5763044/
[22:59] thumper: niiiiiice
[22:59] would look better with my juju status :-)
[22:59] :)
[22:59] can we wrap lxc-start somehow?
[22:59] that is with --force-machine
[22:59] wallyworld_: I am
[22:59] twice
[22:59] maybe 3 times
[23:00] I'm going to ask for some help around that
[23:01] * thumper continues blindly (well with poor vision)
[23:02] wallyworld_: I think part of the reason is that we are asking to start --daemon
[23:02] which appears to be fire and forget
[23:02] sounds about right
[23:02] so, zero rc, all good
[23:02] my lxc foo is not excellent
[23:02] but the issue appears to be "we wait forever"
[23:03] i really wish we had decent progress monitoring
[23:03] baked into the cli
[23:03] wallyworld_: I am using my logging to trace stuff though
[23:03] very nice
[23:03] \o/
[23:03] but we need to productise stuff for a user
[23:04] * thumper nods
[23:04] yeah
=== wedgwood is now known as wedgwood_away
[23:32] wallyworld_: lxc containers on the bootstrap node are working fine
[23:32] yay
[23:32] wallyworld_: got a machine agent up and running on machine 0
[23:32] 0/lxc/0 that is
[23:32] manually?
[23:32] sure...
[23:32] I need that container watcher
[23:33] thumper: up for review yesterday
[23:33] can't automate the provisioner until I get that
[23:33] thumper: pong
[23:33] \o/
[23:33] hi mramm2
[23:33] mramm2: was nothing important, was feeling lonely :)
[23:33] haha
[23:33] thumper: i did tell you it was there :-P
[23:33] * thumper sighs
[23:34] thumper: I don't know when andrew starts
[23:34] but nate the other new hire is now slated to start on the 29th
[23:34] mramm2: so... using --force-machine, you can deploy into a running container
[23:34] mramm2: but the 29th of july, not june
[23:34] manual network bridging or what?
[23:34] mramm2: just using the lxc default bridge
[23:35] july
[23:35] so no access in, just out
[23:35] wow, I missed that
[23:35] you are of course right
[23:35] that is a long time
[23:35] hopefully andrew will start sooner
[23:35] * thumper nods
[23:36] so you are bridging out, and everything is working
[23:36] rock and roll
=== mramm2 is now known as mramm
=== mramm is now known as mramm22
=== mramm22 is now known as mramm
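The hang discussed from [22:49] onwards — lxc-start --daemon exiting 0 and golxc then waiting forever for a container that never comes up — points at a bounded wait rather than trusting the exit code. A rough Go sketch of that idea; this is not golxc's actual API, and the polling-lxc-info approach is an illustrative assumption:

```go
package lxcutil

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// StartContainer launches a container in daemon mode and then polls lxc-info
// until it reports RUNNING or the deadline passes, instead of waiting forever
// on a fire-and-forget lxc-start --daemon that returned zero.
func StartContainer(name, configPath string, timeout time.Duration) error {
	out, err := exec.Command("lxc-start", "-n", name, "-d", "-f", configPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("lxc-start failed: %v (%s)", err, out)
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		info, err := exec.Command("lxc-info", "-n", name).CombinedOutput()
		if err == nil && strings.Contains(string(info), "RUNNING") {
			return nil // the container really is up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q did not reach RUNNING within %v", name, timeout)
}
```

A wrapper along these lines also gives somewhere to surface progress to the user, which the CLI itself doesn't provide, as noted at [23:03].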