[00:00] <bigjools> why do we have canonical-written code that isn't hosted in LP?
[00:01] <davecheney> bigjools: which code ?
[00:02] <bigjools> mgo
[00:03] <davecheney> bigjools: https://launchpad.net/mgo
[00:03] <davecheney> it is
[00:03] <davecheney> but gustavo is using a 'vanity url'
[00:03] <bigjools> ah ok
[00:03] <bigjools> thanks
[00:03] <niemeyer> bigjools: mgo is not Canonical written, by the way
[00:03] <niemeyer> bigjools: it is Gustavo written
[00:03] <bigjools> I'd change that purely from the PoV of getting lTAB to work in the shell :)
[00:04] <bigjools> niemeyer: under Canonical time?
[00:04] <niemeyer> bigjools: under Gustavo time
[00:04]  * davecheney butts out
[00:04] <niemeyer> Its creation predates even Canonical's adoption of Go, so there's little to be said
[00:54] <wallyworld> thumper: have you ever had a bzr pointless merge error?
[01:04] <thumper> yes
[01:05] <thumper> wallyworld: I'm off to walk the dog now
[01:05] <thumper> bbl
[01:05] <wallyworld> thumper: ok, can you ping me when you get back
[01:05] <wallyworld> WARNING  Merging https://code.launchpad.net/~wallyworld/goose/null-project-description into https://code.launchpad.net/~go-bot/goose/trunk would be pointless
[01:05] <wallyworld> i'm not sure why the merge fails
[01:06] <wallyworld> the mp shows a nice diff etc
[01:29] <thumper> wallyworld: interesting, haven't had exactly that before
[01:29] <thumper> wallyworld: try merging trunk into null-project-description first
[01:29] <thumper> wallyworld: also, try 'bzr missing --mine ' from the null-project
[01:29] <wallyworld> thumper: ok. i had done that prior to proposing, i'd swear, but i'll try again
[01:30] <wallyworld> thumper: i've commented on https://codereview.appspot.com/10447045/, perhaps you could do the same
[01:30]  * thumper looks
[01:45] <wallyworld> thumper: so, no revs to merge in from trunk, and bzr missing shows what i would expect https://pastebin.canonical.com/93338/
[01:47] <thumper> wallyworld: ok, I have no idea
[01:47] <wallyworld> np
[01:47] <wallyworld> i'll bug john later
[01:48] <wallyworld> thumper: i'd like another quick chat about constraints when you have a moment
[01:48] <thumper> wallyworld: sure, let me go make a drink and I'll be right back
[01:48] <wallyworld> kk
[01:53] <thumper> got a hangout?
[01:53] <wallyworld> i'll make one
[01:54] <wallyworld> https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3
[04:53]  * thumper bbl for more fwereade chats :)
[05:51] <rogpeppe> mornin' all
[05:56] <jam> morning rogpeppe, you seem up early
[05:56] <rogpeppe> jam: i wanna get stuff done before i go away on thurs
[06:09] <wallyworld> jam: i have to take my son to the dr but do you know why i get a bzr pointless merge error trying to get my branch into trunk?
[06:09] <wallyworld> WARNING  Merging https://code.launchpad.net/~wallyworld/goose/null-project-description into https://code.launchpad.net/~go-bot/goose/trunk would be pointless
[06:09] <wallyworld> i get the above in the tarmac log
[06:09] <wallyworld> the mp diff looks fine
[06:09] <wallyworld> bzr missing looks fine - shows the 2 revs that i have committed
[06:10] <wallyworld> tim doesn't know what's wrong
[06:10] <wallyworld> maybe you do?
[06:10] <wallyworld> i'll check in when i get back
[06:11] <jam> wallyworld: because goose successfully merged and committed your change, but failed to push it back to Launchpad
[06:11] <jam> so its local branch has merged your changes.
[06:12] <jam> wallyworld: I just pushed it out. I don't specifically know why it would have gotten into this situation.
[06:13] <rogpeppe> jam: i have some outstanding reviews, BTW, which it would be great to get moving; in particular fwereade has verbally ok'd this; i need another review and i'd appreciate your input. https://codereview.appspot.com/10259049/
[06:16] <bigjools> hi.  We need to generate an iso9660 from the user data that the Azure provider needs. We can shell out, or invoke some C, or .... anything better you can think of?
[06:20] <jam> bigjools: the recommendation that seems to come from the lxc work is that you should shell out
[06:20] <bigjools> jam: ok cheers.
[07:08] <rogpeppe> bigjools: would it be that hard to write a little package that generates an iso9660 image? the format doesn't look that abstruse, at first glance anyway.
[07:09] <bigjools> rogpeppe: I'd really rather not reinvent the wheel
[07:11] <rogpeppe> bigjools: i'm slightly concerned about juju acquiring many external dependencies. is there something that can produce an iso9660 image installed by default in ubuntu?
[07:11] <bigjools> yes
[07:12] <bigjools> what's the problem with external dependencies?
[07:14] <rogpeppe> bigjools: they might not always be available on platforms we want juju to run on.
[07:14] <bigjools> ok
[07:15] <bigjools> the one I am looking at is genisoimage
[07:16] <bigjools> ah it gets installed by ubuntu-desktop
[07:20] <bigjools> rogpeppe: I think we'll shell out for now (we're short on time) but the motivated can easily replace it with a native piece of code
[07:21] <bigjools> at least, it's good to show the rest of the code works before spending the time writing this
[07:21] <rogpeppe> bigjools: that seems reasonable. you could make a package for it anyway, designed to be reimplemented at some future point.
[07:22] <bigjools> rogpeppe: yes, that would be good
[07:22] <bigjools> gives a nice clean break
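The package boundary rogpeppe and bigjools agree on could look something like the sketch below: a thin wrapper that shells out to genisoimage today and can be swapped for a native implementation later. This is purely illustrative, not juju-core's actual code; the flags mirror the common cloud-init NoCloud invocation (`-output`, `-volid cidata`, `-joliet`, `-rock`), and the `Args`/`Create` names are made up for this example.

```go
package main

import (
	"fmt"
	"os/exec"
)

// Args builds the genisoimage command line for packing srcDir into
// an ISO image at output with the given volume label.
func Args(output, label, srcDir string) []string {
	return []string{
		"-output", output,
		"-volid", label,
		"-joliet", "-rock", // long-filename support for both worlds
		srcDir,
	}
}

// Create shells out to genisoimage. A future native Go implementation
// could replace this body without changing any callers.
func Create(output, label, srcDir string) error {
	cmd := exec.Command("genisoimage", Args(output, label, srcDir)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("genisoimage failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(Args("/tmp/cloud.iso", "cidata", "/tmp/data"))
}
```

Keeping command construction (`Args`) separate from execution (`Create`) also lets the command line be unit-tested without genisoimage installed.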
[07:26] <TheMue> morning
[08:33] <wallyworld> fwereade: would you be free for a quick chat?
[08:37] <fwereade> wallyworld, sure, start a hangout, with you in a sec
[08:39] <wallyworld> fwereade: https://plus.google.com/hangouts/_/8868e66b07fa02bdc903be4601200d470dae9ee3
[09:46] <thumper> jam: hello there
[09:47] <thumper> jam: landing issue because I tried to add a new dependency
[09:47] <thumper> jam: launchpad.net/golxc
[09:47] <thumper> mgz: morning
[09:48] <thumper> jam: oh, just saw the emails, seems like you are on it
[09:51] <mgz> hey thumper
[09:51] <thumper> mgz: hey, have you managed to hand off the api stuff yet?
[09:53] <mgz> thumper: yeah... +/- one annoying branch that's ready to land
[10:04] <jam> thumper: I installed the new dependency, still fails
[10:04] <jam> # launchpad.net/juju-core/container/lxc container/lxc/instance.go:49: undefined: instance.Metadata
[10:04] <thumper> oh ffs
[10:04] <thumper> that's right
[10:04] <thumper> wallyworld removed it again
[10:04] <thumper> I'll have to fix it tomorrow
[10:04] <thumper> jam: but thanks
[10:05] <jam> thumper: np. Poke me if you need updated golxc/etc.
[10:05] <thumper> ok, ta
[10:05] <wallyworld> thumper: i did tell you :-)
[10:08] <rogpeppe> mgz: is that branch really ready to land now? i'm blocked on it
[10:09] <mgz> rogpeppe: really :)
[10:11] <rogpeppe> mgz: i don't see anything different from yesterday (assuming we're both talking about https://codereview.appspot.com/10439043)
[10:17] <mgz> pushing now-ish
[10:18] <jam> rogpeppe: so the reason thumper went with another package name was so that you could do: import . "launchpad.net/juju-core/testing/checkers". is there a reason you prefer not to use '.'?
[10:18] <jam> (the idea is to make it act the same as gocheck checkers)
[10:19] <rogpeppe> jam: yeah, i really don't think we should do any more importing to .
[10:19] <rogpeppe> jam: having one package imported from . is bad enough
[10:20] <rogpeppe> jam: and i don't think the saved typing is a good enough justification
[10:20] <jam> rogpeppe: I'd rather be consistent, so we should probably have that discussion on the list. thumper's assertion at least was that without 'import .' there isn't really a benefit of 'testing.IsTrue' over (Equals, true)
[10:20] <jam> rogpeppe: I do think it makes the code 'read' better.
[10:20] <jam> c.Assert(err, Satisfies, errors.IsNotFoundError) without the extra "checkers." in there.
[10:21] <rogpeppe> jam: we're introducing pollution to the local name space. there's a reason Go doesn't do that all the time
[10:21] <rogpeppe> jam: yes, i understand that point
[10:21] <jam> rogpeppe: so at least my argument is: either (a) put them in 'testing' and import it as a module or (b) put them in checkers and import them as '.'
[10:22] <rogpeppe> jam: i hadn't realised that was the only reason for putting the checkers in a new package
[10:23] <rogpeppe> jam: in which case i'd put them all in testing
[10:24] <rogpeppe> jam: there is another possible reason for putting them in a new package, which is that testing has lots of dependencies, where checkers has almost none.
[10:24] <mgz> rogpeppe: 's up
[10:24] <rogpeppe> mgz: ta
[10:25] <mgz> I think william might have wanted something different on the initial-event handling, but I didn't understand where he wanted the change, and just landing this seems... like a desired thing
[10:29] <jam> mgz: for the test, I think if we move NotifyWatcherC somewhere we can use it, then you can just NotifyWatcherC(resource).AssertOneChange()
[10:30] <mgz> yup, though I got the impression he wanted the actual logic, not the test, to change? I wasn't completely clear.
[10:30] <rogpeppe> jam: surely reading from a channel with a timeout is not something we *need* to factor out
[10:30] <rogpeppe> jam: we do it all over the place
[10:31] <jam> mgz: "then do some basic verification of the watcher's state with something like NotifyWatcherC"
[10:31] <mgz> rogpeppe: the current code, for instance, does not check that there aren't further events
[10:31] <jam> rogpeppe: I think NotifyWatcherC is intended to become something more than just reading off a channel.
[10:31] <rogpeppe> mgz: that's true, but we aren't testing the watcher here
[10:31] <mgz> helpers are as much about making sure everyone gets the code right as saving typing
[10:31] <rogpeppe> mgz: we're testing that the watcher is there and watching the right thing
[10:31] <rogpeppe> mgz: your test tests the former but not actually the latter
[10:32] <rogpeppe> mgz: to be honest, i think the client test should test that and kill two birds with one stone
[10:32] <mgz> I agree there's some question over what should be covered in the client tests rather than here
[10:33] <rogpeppe> mgz: originally i *only* did client tests, reasoning that they cover almost exactly the same ground
[10:33] <rogpeppe> mgz: but since the advent of bulk-for-everything (and the lack of a client interface to that), server-side-only tests are necessary
[10:33] <jam> mgz: testing that the side effect happened is very unit-y vs integration-y, and since we don't have a client-side thing yet...
[10:34] <jam> rogpeppe: while integration tests can cover everything unit-tests do, they tend to be overbroad and trigger too many failures when something low-level changes, vs a unit test that tends to be more precise. (I'm personally in favor of having both)
[10:35] <rogpeppe> jam: i don't really see that interposing the API server makes it that much more of an integration test
[10:35] <rogpeppe> jam: we're still calling all the way down to mongo
[10:35] <jam> rogpeppe: more pieces == more integration
[10:36] <jam> I don't see how that is hard to see.
[10:36] <jam> testing-per-layer is a good thing to do IMO
[10:36] <rogpeppe> jam: i'm more interested in test coverage
[10:36] <rogpeppe> jam: and keeping tests from taking hours
[10:37] <rogpeppe> jam: and i dislike having a lot of seriously overlapping test code
[10:37] <rogpeppe> jam: because it wastes time and energy
[10:39] <rogpeppe> jam: i do see your point about the failures being harder to diagnose though
[10:39] <jam> rogpeppe: easier to diagnose failures often save *lots* of debugging time. Which in the lifetime of a project can easily dominate the overall cost.
[10:40] <jam> rogpeppe: I agree spending huge amounts of runtime testing the same codepath is a bit of a waste.
[10:40] <jam> I personally am in favor of layer testing, and a small number of integration tests.
[10:40] <jam> So you don't have to look at all the edge cases at integration time
[10:41] <jam> and just cover things that fail because of the combination, and general "does it work" tests.
[10:41] <TheMue> fwereade: ping
[10:41] <rogpeppe> jam: in this case, i tend to see the server type + the rpc package as one "layer"
[10:42] <rogpeppe> jam: and that nothing other than the rpc package will ever talk to the server types
[10:42] <mgz> so, I have a friend who might be interested in helping with the containerisation work
[10:42] <rogpeppe> jam: and the overhead of talking through that is fairly minimal
[10:43] <mgz> thumper, have we got any reasonably separate parts that someone could have a go at?
[10:43] <jam> rogpeppe: except anytime you have an RPC you *really* want to test both sides in isolation, so you don't run into the "client 1.11 tests all pass, and 1.10 tests all pass, but 1.11 can't talk to 1.10 because we weren't asserting what the conversation actually was"
[10:44] <TheMue> jam: thx for review
[10:45] <rogpeppe> jam: that's an interesting point; i think we need both.
[10:45] <TheMue> jam: I came to chmod the dir because it is created externally in tests. this can also happen later
[10:46] <rogpeppe> jam: i'm not entirely sure of the best way to do the compatibility checks.
[10:46] <TheMue> jam: that's why I correct it during the initial writing ;)
[10:48] <rogpeppe> mgz: you have a review
[10:49] <jam> rogpeppe: I agree we need both
[10:50] <rogpeppe> jam: i think we could have automated tests for rpc message compatibility, but version compatibility involves much more than just the format of the rpc messages
[10:53] <frankban> fwereade, anyone else: could you please review  https://codereview.appspot.com/10497043 ? Thanks!
[10:53] <rogpeppe> frankban: looking
[10:53] <frankban> thanks
[10:55] <fwereade> frankban, I'll take a look after lunch
[10:55] <fwereade> TheMue, pong, very quickly
[10:55] <frankban> fwereade: great thank you
[10:57] <fwereade> mgz, rogpeppe, jam: I had liked rogpeppe's(?) original model in which the Watch call returns only when the guaranteed initial event has been read off the watcher, so the client-side watcher can hand that straight over to the out chan on creation, which seems kinda nice
[10:57] <rogpeppe> fwereade: ah, good catch
[10:58] <fwereade> mgz, rogpeppe, jam: ofc this is a degenerate case, there's no actual data to send, but for consistency's sake we should still be consuming the initial event before we hand it over as a resource to be Watch()ed
[10:58] <rogpeppe> mgz: yeah, it should do that
[10:58] <mgz> fwereade: okay, that's the statement from the review I wasn't clear on
[10:58] <fwereade> mgz, Next()ed, rather
[10:58] <mgz> so, the Watch call needs to pull from the channel, then the test needs to assert there are no further events
[10:59] <fwereade> mgz, yeah, do an AssertNoChange, tweak the machine, do an AssertOneChange
[10:59] <rogpeppe> mgz: yes to the former; for the latter, i'd change something in the Machine, Sync, and verify you get a change
[10:59] <fwereade> rogpeppe, I think the NotifyWatcherC gives quite a good vocabulary for those tests
[11:00] <rogpeppe> fwereade: i don't think it's necessary to AssertNoChange
[11:00] <rogpeppe> fwereade: we're not testing the actual watcher here
[11:00] <rogpeppe> fwereade: just that it exists and is attached to the right thing
[11:00] <TheMue> fwereade: oh, a quick pong? so we should talk about auto-sync later
[11:00] <fwereade> rogpeppe, how else do we verify the original event was read? we're testing the SUT by reference to the known and tested-elsewhere characteristics of the watcher
[11:01] <rogpeppe> fwereade: hmm, good point
[11:03] <fwereade> TheMue, ok, can we maybe talk about it just before kanban? or are you blocked? ...I never double-checked your english, sorry
[11:03] <fwereade> TheMue, would you link me that CL quickly please?
[11:04] <mgz> fwereade: so, I still don't know *where* exactly you want that initial event pulled off
[11:04] <TheMue> fwereade: we can talk before kanban, yes
[11:05] <fwereade> mgz, the Watch call should create the watcher, read its initial event, register it as a resource and return its resource id
[11:05] <TheMue> fwereade: review is https://codereview.appspot.com/10441044/
[11:05] <mgz> which one though, the MachinerAPI Watch call, or the Machine Watch call, or newEntityWatcher...
[11:05] <fwereade> mgz, MachinerAPI
[11:06] <mgz> okay.
[11:07] <fwereade> mgz, this is what we're currently implementing, in terms of the watchers that already exist and have this somewhat convenient initial-event model, which we want to take advantage of in the api
[11:07] <mgz> ...I think I'll do this in a new branch
[11:07] <mgz> because it really wants the test helpers
[11:08] <fwereade> mgz, ok, sgtm
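fwereade's description of the Watch call at L195 (create the watcher, read its guaranteed initial event, register it as a resource, return the resource id) can be sketched as follows. NotifyWatcher, Resources and fakeWatcher here are illustrative stand-ins for this example, not juju-core's real types.

```go
package main

import (
	"errors"
	"fmt"
)

// NotifyWatcher delivers an empty struct per change; the first
// event on Changes() is guaranteed.
type NotifyWatcher interface {
	Changes() <-chan struct{}
}

// Resources hands out ids for registered watchers.
type Resources struct {
	next int
	m    map[string]NotifyWatcher
}

func (r *Resources) Register(w NotifyWatcher) string {
	if r.m == nil {
		r.m = make(map[string]NotifyWatcher)
	}
	r.next++
	id := fmt.Sprint(r.next)
	r.m[id] = w
	return id
}

// Watch consumes the initial event *before* registering, so the
// client-side watcher can hand its own initial event straight to
// its out chan on creation.
func Watch(r *Resources, w NotifyWatcher) (string, error) {
	if _, ok := <-w.Changes(); !ok {
		return "", errors.New("watcher channel closed")
	}
	return r.Register(w), nil
}

// fakeWatcher delivers its guaranteed initial event from a buffered channel.
type fakeWatcher struct{ ch chan struct{} }

func (w fakeWatcher) Changes() <-chan struct{} { return w.ch }

func main() {
	w := fakeWatcher{ch: make(chan struct{}, 1)}
	w.ch <- struct{}{} // the guaranteed initial event
	r := &Resources{}
	id, err := Watch(r, w)
	fmt.Println(id, err, len(w.ch)) // 1 <nil> 0
}
```

A test in the style discussed above would then AssertNoChange on the registered watcher, tweak the machine, and AssertOneChange, verifying that the initial event really was consumed server-side.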
[11:09] <rogpeppe> fwereade: i have about two other branches blocked on mgz's branch BTW, so i'd very much like something to go in
[11:09] <rogpeppe> fwereade: soon
[11:09] <mgz> landing the current, then doing that change
[11:30] <rogpeppe> fwereade, frankban: here's an interesting question: if someone has a service with minUnits=5, then destroys a unit, should the new unit be created when the old unit has gone away entirely, or when it starts to die?
[11:32] <rogpeppe> davecheney: hiya
[11:44] <rogpeppe> lunch
[11:56] <frankban> rogpeppe: my understanding is that MinimumUnits expresses the concept of "minimum amount of units that should be alive". If that's correct, it seems sane to me to react right when one unit starts to die.
[11:59] <jam> mramm: I saw you show up for a second. I just got back from the restroom
[12:09] <rogpeppe> mgz: i think you need to re-approve your branch
[12:09] <rogpeppe> mgz: it died with one of those "bad MAC" errors
[12:10] <mgz> rogpeppe: do you know about those?
[12:10] <mgz> first time I've seen it, will resubmitting help?
[12:10] <rogpeppe> mgz: yeah, it's intermittent
[12:10] <rogpeppe> mgz: i thought that was the problem that jam fixed
[12:11] <jam> mgz, rogpeppe: I think this is a different error. This is using system mongo instead of the tarball mongo
[12:11] <jam> I'll try to fix it quickly.
[12:13] <jam> rogpeppe, mgz: I'm resubmitting now.
[12:14] <mgz> jam: thanks!
[12:23] <jam> mgz: when I update the config that the tarmac charm uses, it rewrites crontab, which means I have to manually go fix it up, and I forgot to set the PATH
[12:24] <mgz> urgh
[13:16] <jam> mgz, rogpeppe: Looks like my fix worked.
[13:17] <rogpeppe> jam: cool
[13:17] <jam> (uninstalled the system mongo which the charm also installs, and fixed up the PATH for the test suite.)
[13:17] <jam> I'm actually pretty impressed that there is only 1 failing test
[13:17] <jam> (well 2, but it looks like 1 fails to tear down so 2 notices the setup isn't clean)
[13:56] <Makyo> gary_poster, mramm - is the cross-team meeting today? I have something on my calendar, but it looks old.
[13:57] <mramm> Makyo: it's thursdays now
[13:57] <Makyo> mramm, Thanks.  Same time?
[13:58] <Makyo> mramm, actually, I won't have the hangout; can you invite me?
[13:59] <gary_poster> Makyo, sorry, I might have misunderstood what you said you wanted last Friday. cross-team is half hour after kanban time
[13:59] <gary_poster> Makyo, I'll delete the Tuesday juju core kanban appt for you?
[13:59] <Makyo> gary_poster, That's what I meant, I just had the old event on my board for today.
[13:59] <Makyo> gary_poster, I think I beat you to it :)
[14:00] <gary_poster> :-) you declined, I deleted
[14:00] <Makyo> Ah, okay.
[16:15] <rogpeppe> a branch to review, if anyone fancies it: https://codereview.appspot.com/10494043
[16:18] <rogpeppe> frankban: your merge proposal seems to be corrupted; could you re-propose, please? https://codereview.appspot.com/10497043
[16:20] <frankban> rogpeppe: done
[16:21] <rogpeppe> frankban: ta
[16:21] <ackk> hi, I'm hitting an error deploying with juju-core on openstack: instances go into ERROR state with "ProcessExecutionError". I tried to remove the unit and destroy the service, but it all seems stuck
[16:23] <mgz_> ackk: that sounds pretty solidly like an issue with nova, not juju
[16:23] <ackk> mgz_, can you force the destroy of the service/unit in juju?
[16:23] <mgz_> just `juju destroy-environment`, and manually `nova delete` if needed
[16:24] <mgz_> you can use terminate-machine for one machine, but starting from scratch seems wiser
[16:25] <ackk> mgz_, yeah I tried that, it doesn't work because the machine is still associated with the unit (which has life: dying)
[16:25] <mgz_> right, you just need to wipe if things get that screwed
[16:25] <rogpeppe> frankban: you've got a review
[16:26] <frankban> rogpeppe: thanks
[16:26] <ackk> mgz_, I see. thanks
[16:36] <dpb1> To get the unit tests to pass, is there a reference ~/.juju/environments.yaml file that i need to have?
[16:38] <dpb1> I'm trying to follow the "Testing" section from the "CONTRIBUTING" file, but what is written there is not working.
[16:42] <rogpeppe> i have to go a bit early today
[16:42] <rogpeppe> g'night all
[16:45] <hazmat> fwereade, in your nomenclature LKP = ?
[16:45] <mgz_> dpb1: do you mean the live tests? you don't need environments.yaml for the unit tests
[16:48] <dpb1> mgz_: I'm running just make check from the juju-core checkout. I installed mongodb (obvious error); the errors I'm getting now are not as obvious. I can paste in one at a time, but I'm wondering if there is more than what the CONTRIBUTING file states.
[16:49] <dpb1> mgz_: I checked out juju core with go get -v -u launchpad.net/juju-core/...
[16:51] <dpb1> mgz_: I just found the README as well.  I'm missing some things, let me check back, thanks
[16:51] <mgz_> dpb1: feel free to pastebin anything if you get stuck
[17:05] <dpb1> mgz_: for starters, I'm getting this: http://paste.ubuntu.com/5799084/  (cstack2 is an environment from my ~/.juju/environments.yaml file, which is why I asked my original question the way I did).
[17:17] <mgz_> that's probably just bad isolation in one test...
[17:18] <mgz_> dpb1: try running with JUJU_HOME=/tmp or something
[17:19] <dpb1> mgz_: ok, I have tons of failures like these, will try now.  Hope that will be it
[17:24] <dpb1> mgz_:  you were close.  apparently some tests don't isolate against JUJU_ENV (which I had set).  All good now
[17:43] <dpb1> lbox is throwing an error about not being able to diff branches when I submit a proposal: http://paste.ubuntu.com/5799185/  -- what am I missing?
[18:10] <andreas__> so I have a unit in pending state, and nova list shows the instance is in ERROR. It never launched. Any idea how to recover without destroying the environment?
[18:11] <andreas__> http://pastebin.ubuntu.com/5799285/
[18:12] <andreas__> it's related to #1187959
[18:12] <_mup_> Bug #1187959: juju does not detect instance launch error, waits forever? <juju-core:Triaged> <https://launchpad.net/bugs/1187959>
[18:14] <andreas__> ok, a combination of juju destroy-service and destroy-unit allowed me to "terminate" that machine
[18:20] <andreas__> ...except terminate-machine doesn't work
[18:20] <andreas__> does nothing
[18:48] <hazmat> maybe i'm misunderstanding something.. when does a service in state 'dying' get garbage collected?
[19:04] <ahasenack> hazmat: I don't think it does
[19:04] <ahasenack> hazmat: or I haven't waited long enough
[21:12] <thumper> morning
[21:37] <thumper> poke fwereade
[21:38] <thumper> hazmat: ping
[21:42] <hazmat> thumper, pong
[21:43] <thumper> hazmat: what was the dependency tool you found that caused you to drop the requirements.txt work for juju-core?
[21:44] <hazmat> thumper, none, the use case for a commit or ci test runner was fulfilled i thought
[21:45] <thumper> hmm... not really
[21:45] <thumper> not in a reproducable way
[21:53] <hazmat> thumper, why is that.. it puts all deps at a known revision based on req.txt or gets head?
[21:53] <thumper> head
[21:53] <thumper> AFAIK
[21:54] <thumper> I thought, and I may be wrong here, that we just have tarmac doing the landings for us
[21:54] <thumper> not that there is any special revno checking there
[21:54] <hazmat> thumper, a frontend script/make blows away the tree between runs
[21:56] <hazmat> thumper, go get -u should still pull/update to trunk afaik, but blowing away the tree is simple as well
[21:57] <hazmat> thumper, there's a bunch of other build tools, one other i might have mentioned is https://github.com/mozilla-services/heka-build
[21:59] <thumper> ok, I may take a look
[22:04] <hazmat> thumper, g+?
[22:04] <thumper> hazmat: sure
[23:38] <thumper> wallyworld_: you know how at the end of last week I said your watcher was returning dupes...
[23:38] <thumper> wallyworld_: well, I was wrong
[23:38] <thumper> wallyworld_: my code was bad
[23:38] <wallyworld_> \o/
[23:39] <thumper> I'm just trying to work out how to write a test for it
[23:39] <wallyworld_> my code would never have any bugs :-P
[23:41] <wallyworld_> thumper: talked to martin last night. he's across what we need to do. i asked him to send an email to us summarising the steps to address the main use case (deploy into container on new instance) as well as the next use case (mysql and wordpress in separate containers on an instance)
[23:41]  * thumper nods
[23:41] <wallyworld_> there's something easy we can do initially. it might get complicated later
[23:42] <wallyworld_> but that can wait
[23:42] <wallyworld_> lxc.net is an easy thing to get inter-container networking on the same machine
[23:42] <wallyworld_> and we can bridge to get the first use case going
[23:43] <wallyworld_> he already has some work in progress towards the goals so that's good
[23:56] <thumper> wallyworld_: provisioner_test, TestProvisioningDoesNotOccurForContainers
[23:56] <thumper> wallyworld_: why do you have cleanup code at the end of the test?
[23:56] <thumper> wallyworld_: doesn't the test framework clean that up?
[23:58]  * wallyworld_ looks