[01:32] <wallyworld_> thumper: hi, just taking the dog to the vet. i'm just about to propose the status stuff. took ages to get the new tests passing. i ended up exporting ParentId() and adding a couple of other helpers
[01:32] <wallyworld_> i'll do the watcher stuff next
[01:32] <thumper> wallyworld_: ok
[01:33] <wallyworld_> the status stuff supported arbitrary nesting
[01:33] <wallyworld_> we don't use it yet, but it will be there for when we do
[01:33] <wallyworld_> bbiab
[02:23] <wallyworld_> thumper: found another problem - there are docs ordered by machine id which do an Atoi() but container ids fail that since they are not numbers. so i'll have to find a way around that
[02:24] <thumper> hahaha
[02:24] <thumper> bummer
[02:25] <wallyworld_> yeah, i'll need to split the id and process from left to right
[03:00] <wallyworld_> thumper: when you have a moment https://codereview.appspot.com/10252044
[03:07] <wallyworld_> thumper: question about container watchers - i'll first do a lifecycle watcher. does that suit your needs?
[03:18] <thumper> wallyworld_: hey,
[03:18] <wallyworld_> ho
[03:18] <thumper> wallyworld_: I think that is what I need
[03:18] <wallyworld_> ok, can add an entity watcher later if needed
[03:18] <thumper> what's the difference?
[03:19] <wallyworld_> i think lifecycle is created/deleted etc, entity is state changes and such
[03:19] <wallyworld_> not sure exactly yet
[03:19] <wallyworld_> i'll know once i write some tests
[03:28] <thumper> wallyworld_: I want a watcher like the one that the provisioner uses :)
[03:28] <thumper> don't know what type that is
[03:28] <wallyworld_> ok
[03:58] <wallyworld_> thumper: https://codereview.appspot.com/10250044
[04:06] <thumper> wallyworld_: soon
[04:06] <wallyworld_> thumper: no problem, just wanted to let you know they were there
[04:06] <thumper> kk
[04:48]  * thumper crosses fingers
[04:54]  * thumper waits for ec2 to bootstrap
[05:11] <thumper> wallyworld_: http://paste.ubuntu.com/5760381/
[07:29] <rogpeppe1> mornin' all
[07:47] <thumper> hi rogpeppe1
[07:47] <rogpeppe1> thumper: yo!
[07:48] <thumper> rogpeppe1:  http://paste.ubuntu.com/5760381/
[07:48] <thumper> rogpeppe1: manual poking to get to this stage
[07:48] <thumper> but the bits are working
[07:48] <thumper> well, mostly working
[07:49] <rogpeppe1> thumper: why the lack of quotes around the container name?
[07:49] <rogpeppe1> thumper: good stuff  BTW!
[07:49] <thumper> rogpeppe1: yaml output
[07:49] <thumper> needed around 0 so not an int
[07:49] <rogpeppe1> thumper: ah of course. a number needs to be quoted
[07:49] <thumper> but 1/lxc/0 is obviously a string
[07:49] <thumper> so no quotes needed
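The quoting rule being discussed, illustrated as a fragment (machine keys and values are made up): YAML resolves a plain `0` as an integer, so an id that is all digits must be quoted to stay a string, while a nested id like `1/lxc/0` is already unambiguous.

```yaml
machines:
  "0":          # bare 0 would parse as an int, so it must be quoted
    instance-id: i-abc123
  1/lxc/0:      # contains "/", already obviously a string, no quotes needed
    instance-id: pending
```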
[07:50] <thumper> still using manual hackery to create a provisioner on the machine
[07:50] <thumper> but it is using the provisioner task and lxc broker
[07:50] <thumper> with a real cloud-init to start the machine agent
[07:51]  * thumper jumps into the meeting hangout
[07:55] <wallyworld_> thumper: saw the pastebin earlier, looks nice, will look even nicer with my juju status branch landed :-)
[08:01] <jam> mgz: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
[08:02] <jam> wallyworld_: ^^
[08:02] <wallyworld_> oh different
[08:02] <wallyworld_> TheMue: ^^^^^^^^^^
[08:03] <jam> wallyworld_: where did you get your link from? (This one was on the calendar)
[08:03] <wallyworld_> jam: the calendar
[08:04] <fwereade__> wallyworld_, jam, I think I'm in the calendar one too... https://plus.google.com/hangouts/_/bf3f4cfe715017bf60521d59b0628e5873f2a1d3
[08:04]  * fwereade__ leaves
[08:04] <wallyworld_> https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.mut2dk4mvoj39eq8jqni20ukoc
[08:04] <thumper> https://plus.google.com/hangouts/_/calendar/bWFyay5yYW1tLWNocmlzdGVuc2VuQGNhbm9uaWNhbC5jb20.mut2dk4mvoj39eq8jqni20ukoc
[08:05] <wallyworld_> fwereade__: yes, thumper stuffed up :-)
[08:07] <wallyworld_> thumper: were you going to chat about containers? perhaps a joint meeting with me too?
[08:07] <thumper> wallyworld_: I wanted to discuss the provisioner refactoring
[08:08] <wallyworld_> ok, i have some questions as well
[08:43] <mramm> hey all,
[08:43] <mramm> sorry I set my alarm wrong
[08:43] <mramm> just went off
[08:43] <mramm> and I realize the team meeting is not starting in 15 min, it started 45 min ago
[08:45] <mgz> mramm: https://plus.google.com/hangouts/_/d3f48db1cccf0d24b0573a02f3a46f709af109a6
[09:05] <mramm> jam: did you guys chat about when tarmac is going to "land"?
[09:06] <jam> today
[09:06] <jam> mramm: I'll be making it live today. Everyone seemed happy with the plan I proposed.
[09:07] <mramm> cool
[09:07] <mramm> sounds good
[09:21] <jam> thumper: quick poke about the "InParallel" test.
[09:22] <jam> What if we change it from sleeping, to having one subprocess block until the other one is started?
[09:22] <thumper> jam: yes...
[09:22] <jam> so it would wait for a file that the other one creates, for example
[09:23] <thumper> I don't really care how it is changed as long as it is testing the parallel gathering
[09:23] <jam> you could have subprocess a creates file a and waits for b
[09:23] <jam> and vice versa
[09:23] <jam> so they must both be called at the same time
[10:15] <TheMue> fwereade__: ping
[10:35] <fwereade__> TheMue, pong
[10:35] <fwereade__> TheMue, still eating lunch
[10:36] <fwereade__> TheMue, I may have failed to communicate that if your tests depend on some specific number of events they're Doing It Wrong
[10:36] <fwereade__> TheMue, but if the tests just watch for events until there are no more cleanups to perform... then... hmm. there may actually still be a race. bah
[10:37] <fwereade__> TheMue, yeah, there's a race, but it only affects the followup
[10:37] <fwereade__> TheMue, so all you need to do is watch for cleanup events until you get one and there aren't any more cleanups required
[10:38] <fwereade__> TheMue, at that point you can be sure that the cleaner did its job
[10:38] <jam> fwereade__: https://codereview.appspot.com/10234047 makes a test that occasionally fails due to timing hiccups into a faster and reliable 'event' sort of mode.
[10:38] <jam> Something I'd like to land before switching to tarmac
[10:39] <fwereade__> jam, LGTM
[10:39] <TheMue> fwereade__: Yeah, it does. Only want to talk about the timeout with you.
[10:39] <fwereade__> TheMue, ah, go on
[10:41] <TheMue> fwereade__: You wondered about a "typo", but sadly it is none. The watcher in the test gets the second event (where then no more cleanups are needed, I followed it) after such a long time.
[10:41] <TheMue> fwereade__: Maybe you've any idea for the reason.
[10:41] <TheMue> fwereade__: Because that definitely should not be.
[10:41] <fwereade__> TheMue, ah, are you not syncing in your select loop? StartSync at the start of that loop, sorry I missed it
[10:41] <fwereade__> TheMue, default sync period is 5s
[10:42] <fwereade__> TheMue, ha
[10:42] <TheMue> fwereade__: IMHO I've tested it also with StartSync in the loop, but I'll verify it.
[10:42] <fwereade__> TheMue, no, I know what's going on, sorry
[10:42] <TheMue> fwereade__: Oh *listening*
[10:42] <fwereade__> TheMue, StartSync just at the top of the loop is not good enough
[10:43] <fwereade__> TheMue, because the watcher event could come in after that
[10:43] <fwereade__> TheMue, you need to keep on independently StartSyncing as you go through
[10:43] <fwereade__> TheMue, with StartSync at the top, `case <-time.After(50 * time.Millisecond): continue` should do the trick
[10:44] <jam> mgz: can you look at https://codereview.appspot.com/10234047 so I can land it?
[10:44] <TheMue> fwereade__: ah, push harder, nice idea
[10:44] <TheMue> fwereade__: will add it and come back to you
[10:45] <TheMue> fwereade__: but please don't let me disturb you at lunch, i'm sorry
[10:45] <fwereade__> jam, fwiw, I think there may still be branches approved that aren't necessarily ready to land, please unapprove them before you start tarmaccing ;)
[10:46] <mgz> looking at it
[10:47] <mgz> and ugh, shell
[10:47] <jam> mgz: :)
[10:48] <TheMue> fwereade__: just one last thing: fantastic, it works!
[10:48] <TheMue> fwereade__: thanks
[10:53] <jam> mgz: so have I turned you off so completely ?
[10:53] <mgz> jam: commented
[11:05] <jam> mgz: thanks
[11:13]  * TheMue => lunchtime
[11:30] <danilos> mgz, jam: hey, https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3 :)
[11:31] <mgz> ta
[12:42] <danilos> mgz, I am back, fwiw (and actually already fixing another annoyance I hit)
[12:43] <mgz> likewise
[12:44] <danilos> anybody wants to take a peek at trivial branch: https://codereview.appspot.com/10244044
[12:47] <TheMue> fwereade__: any good idea on how to test the resumer?
[13:09] <danilos> jam: you had a fix for StateSuite.TestOpenDelaysRetryBadAddress failure or was that something else?
[13:22] <fwereade__> TheMue, none whatsoever -- just start it and stop it I reckon
[13:22] <fwereade__> TheMue, (sorry I mised you)
[13:23] <fwereade__> TheMue, we'd have to inflict damage to the transaction log on purpose to test it, and spending time on that in particular seems like a bit of a waste of effort
[13:23] <TheMue> fwereade__: no prob, i'll add this. the rest has been simple. cleaner is also proposed again
[13:24] <TheMue> fwereade__: yep, think so too. i took a look at how ResumeAll() works, and here we would have to do evil stuff under the hood :)
[14:25] <jam> mgz or danilos: the juju cross-team meeting is supposed to happen in about 10 minutes, could one of you make it?
[14:25] <mgz> I can if needed
[14:26] <jam> mgz: thanks, I can't really make it today
[14:26] <jam> (today is my wife's b-day)
[14:26] <jam> mgz: https://plus.google.com/hangouts/_/calendar/YW50b25pby5yb3NhbGVzQGNhbm9uaWNhbC5jb20.2ijeg1lk8l6gq6d4ilo6jot1j4 is the link from email
[14:26] <mgz> ta
[14:49] <TheMue> tarmac help needed. i've got an "old" branch now with two lgtms (after a final local change). now i've committed and merged it locally. but i'm not sure about the reproposal for launchpad.
[14:50] <mgz> see the instructions in jam's post to the list
[14:51] <TheMue> mgz: i've seen them, but still have problems/questions. if i set my branch to approved now the final changes won't be in there.
[14:52] <mgz> you just go to the branch webpage, and repropose against lp:juju-core
[14:53] <TheMue> with "resubmit proposal"?
[14:53] <mgz> yup
[14:54] <TheMue> and my still local changes? simple bzr push before resubmit?
[14:55] <mgz> yes
[14:56] <TheMue> that sounds good, thanks
[15:04] <TheMue> so, now i'm curious how it will work ;)
[15:08] <fwereade__> TheMue, mramm: ready?
[15:08] <mramm> so, meeting
[15:08] <mramm> yep
[15:08] <mramm> hangout?
[15:08] <TheMue> yep
[15:08] <fwereade__> mramm, TheMue, I'll start one
[15:22] <rogpeppe1> just going to pick up van from MOT test. back in 15 mins.
[15:37] <TheMue> fwereade__: ping => https://codereview.appspot.com/10266043
[15:37] <fwereade__> TheMue, cheers
[16:56] <jamespage> mgz, do you have a release tarball for 1.11.0?
[16:59] <mgz> jamespage: no, but I can make one
[16:59] <mgz> it just won't be what was actually used (as I haven't switched dave onto the new method yet)
[16:59] <jamespage> mgz, great - it would be nice to stuff a new release into saucy as we make it the default
[18:09] <ahasenack> so juju-core trunk now wants a service name in the config yaml file?
[18:09] <ahasenack> https://code.launchpad.net/~fwereade/juju-core/config-5-state-service-config-yaml/+merge/168579
[18:09] <ahasenack> just checking
[18:47] <rogpeppe1> time to stop. g'night all.
[21:34] <thumper> morning folks
[21:49] <thumper> fwereade__: ping
[22:04]  * thumper bootstraps ec2
[22:04] <thumper> geez this takes a while...
[22:13] <thumper> mramm2: ping
[22:15] <thumper> jcastro: ping?
[22:16] <thumper> m_3: ping?
[22:17]  * thumper feels the loneliness of being at utc+12
[22:18] <jcastro> yo thumper
[22:18] <thumper> jcastro: hey man
[22:18] <jcastro> I am in the middle of BBQing, so in and out
[22:18] <jcastro> so keep talking
[22:18] <thumper> jcastro: what is a nice simple charm to deploy?
[22:18] <thumper> jcastro: the first one to be in an lxc container in ec2
[22:19] <m_3> thumper: pong
[22:19] <m_3> what's up?
[22:19] <thumper> m_3: just feeling lonely :)
[22:19] <m_3> ha
[22:19] <thumper> m_3: wanting an easy charm for testing
[22:19] <m_3> mysql
[22:19] <m_3> it's my goto for when I need to precache lxc and stuff like that
[22:19] <thumper> precache?
[22:21] <jcastro> thumper: wordpress is our defacto
[22:21] <jcastro> etherpad-lite has no deps though
[22:21] <jcastro> if you only want a single container with no relations
[22:21] <m_3> thumper: yeah, the juju-0.7 workflow was pretty flawed wrt lxc
[22:22] <m_3> thumper: you'd run juju bootstrap and that'd run zk and prep stuff
[22:22] <m_3> but the lxc template images wouldn't build out (or download cloudimages) until the first time you actually _deployed_ a service
[22:23] <m_3> um... without letting the user know what was going on
[22:23] <thumper> :)
[22:23] <m_3> so it'd just quietly block while downloading
[22:23] <m_3> you'd naturally kill it
[22:23] <m_3> and wind up with a halfway downloaded image in /var/cache/lxc
[22:24] <jcastro> let's not spoil him with how it sucked
[22:24] <m_3> ha
[22:24] <m_3> sorry
[22:24] <m_3> he asked
[22:24] <jcastro> that way he can come up with something awesome without being jaded like we are
[22:24] <m_3> yes, /me will sit on his hands
[22:24] <jcastro> "and then, we asked 10 people to download 450mb of images on conference wireless for this charm school"
[22:24] <jcastro> hahahah
[22:24] <jcastro> <--- what were we thinking
[22:24]  * thumper has some thoughts around that
[22:25] <thumper> jcastro: what if we wrote a simple go http server for you
[22:25] <thumper> jcastro: so at a charm school, you could run this, and it serves just the ubuntu-cloud lxc template
[22:25] <thumper> people could then run wget to grab it locally
[22:25] <m_3> we can cache cloud images lots of ways
[22:25] <m_3> lxc builds nicely from cloud images now
[22:27] <m_3> I also liked having the download happen during package install... i.e., if the local provider is in its own package, it can get the image templates during package install
[22:27] <m_3> people are used to waiting on apt
[22:27] <m_3> thumper: but really we're open to whatever makes sense man
[22:28] <thumper> well, I can see how to make the local provider now that I've messed around a lot with lxc
[22:28] <thumper> but containerisation first, then network, then local provider
[22:28] <thumper> but it is coming
[22:28]  * m_3 loves lxc
[22:30] <jcastro> thumper: that sounds like a plan
[22:40] <wallyworld_> thumper: you happy with the juju status mp now?
[22:40] <thumper> wallyworld_: sorry, not looked at it
[22:41] <thumper> trying to work out why what worked yesterday isn't today
[22:41] <wallyworld_> np. fat chance i'll get a 2nd +1 today :-(
[22:45] <thumper> at least I have worked out why my programs were hanging
[22:48] <wallyworld_> reason?
[22:49] <thumper> hmm...
[22:49] <thumper> lxc isn't nice
[22:49] <thumper> golxc runs lxc-start
[22:50] <thumper> then waits forever for it to be running
[22:50] <thumper> if there is an error in the config somehow...
[22:50] <thumper> then it won't start
[22:50] <thumper> and lxc-start doesn't return non-zero exit
[22:50] <thumper> also found
[22:51] <thumper> that if I pass the lxc.conf file to lxc-create, the mount works
[22:51] <thumper> if I only pass it to lxc-start, the mount fails
[22:51] <thumper> so it doesn't start
[22:55]  * thumper quietly rages
[22:58] <thumper> wallyworld_: http://paste.ubuntu.com/5763044/
[22:59] <wallyworld_> thumper: niiiiiice
[22:59] <wallyworld_> would look better with my juju status :-)
[22:59] <thumper> :)
[22:59] <wallyworld_> can we wrap lxc-start somehow?
[22:59] <thumper> that is with --force-machine
[22:59] <thumper> wallyworld_: I am
[22:59] <thumper> twice
[22:59] <wallyworld_> maybe 3 times
[23:00] <thumper> I'm going to ask for some help around that
[23:01]  * thumper continues blindly (well with poor vision)
[23:02] <thumper> wallyworld_: I think part of the reason is that we are asking to start --daemon
[23:02] <thumper> which appears to be fire and forget
[23:02] <wallyworld_> sounds about right
[23:02] <thumper> so, zero rc, all good
[23:02] <wallyworld_> my lxc foo is not excellent
[23:02] <thumper> but the issue appears to be "we wait forever"
[23:03] <wallyworld_> i really wish we had decent progress monitoring
[23:03] <wallyworld_> baked into the cli
[23:03] <thumper> wallyworld_: I am using my logging to trace stuff though
[23:03] <thumper> very nice
[23:03] <wallyworld_> \o/
[23:03] <wallyworld_> but we need to productise stuff for a user
[23:04]  * thumper nods
[23:04] <thumper> yeah
[23:32] <thumper> wallyworld_: lxc containers on the bootstrap node are working fine
[23:32] <wallyworld_> yay
[23:32] <thumper> wallyworld_: got a machine agent up and running on machine 0
[23:32] <thumper> 0/lxc/0 that is
[23:32] <wallyworld_> manually?
[23:32] <thumper> sure...
[23:32] <thumper> I need that container watcher
[23:33] <wallyworld_> thumper: up for review yesterday
[23:33] <thumper> can't automate the provisioner until I get that
[23:33] <mramm2> thumper: pong
[23:33] <thumper> \o/
[23:33] <thumper> hi mramm2
[23:33] <thumper> mramm2: was nothing important, was feeling lonely :)
[23:33] <mramm2> haha
[23:33] <wallyworld_> thumper: i did tell you it was there :-P
[23:33]  * thumper sighs
[23:34] <mramm2> thumper: I don't know when andrew starts
[23:34] <mramm2> but nate the other new hire is now slated to start on the 29th
[23:34] <thumper> mramm2: so... using --force-machine, you can deploy into a running container
[23:34] <thumper> mramm2: but the 29th of july, not june
[23:34] <mramm2> manual network bridging or what
[23:34] <thumper> mramm2: just using the lxc default bridge
[23:35] <mramm2> july
[23:35] <thumper> so no access in, just out
[23:35] <mramm2> wow, I missed that
[23:35] <mramm2> you are of course right
[23:35] <mramm2> that is a long time
[23:35] <mramm2> hopefully andrew will start sooner
[23:35]  * thumper nods
[23:36] <mramm2> so you are bridging out, and everything is working
[23:36] <mramm2> rock and roll