[01:50] <hazmat> davecheney, the newer samsung ssds (830?) are 7mm
[01:51] <davecheney> hazmat: yeah, i'll have to get one of those
[01:51] <davecheney> hazmat: for a moment I considered taking the metal case off my sandforce
[01:51] <davecheney> but only for a moment
[06:30] <TheMue> davecheney, fwereade, rogpeppe: Morning.
[06:30] <fwereade> TheMue, heyhey
[06:30] <fwereade> and everyone else :)
[06:32] <bigjools> fwereade: o/
[06:33] <fwereade> bigjools, heyhey
[06:33] <bigjools> who do I bug to get my branches reviewed today? :)
[06:34] <fwereade> bigjools, consider me bugged, I'll do that now :)
[06:34] <bigjools> tip top, thanks
[06:43] <fwereade> bigjools, one minor on https://code.launchpad.net/~julian-edwards/juju/maas-provider-non-mandatory-port/+merge/107577
[06:43] <bigjools> yup?
[06:43] <fwereade> bigjools, just _s for unused vars
[06:44] <fwereade> bigjools, otherwise both LGTM
[06:44] <fwereade> hazmat, ping
[06:44] <bigjools> fwereade: is that some juju convention?
[06:44] <bigjools> (for reference)
[06:45] <fwereade> bigjools, my heart says it is, but I couldn't point you to a document saying so
[06:45] <bigjools> heh
[06:45] <bigjools> fwereade: so: _s, port, _s = connect_call.args
[06:45] <bigjools> ?
[06:46] <fwereade> bigjools, sorry, _
[06:46] <fwereade> bigjools, I should have said `_`s or something
[06:46] <bigjools> right, makes more sense now!
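A short illustration of the convention fwereade is describing: in Python, values unpacked from a tuple that the code does not otherwise use are conventionally bound to `_` rather than given names. The `FakeCall` class below is a hypothetical stand-in for the mocked call record in the branch under review (`connect_call.args`); the values are invented for illustration.

```python
# Hypothetical stand-in for the mocked call record discussed above.
class FakeCall:
    def __init__(self, args):
        self.args = args

connect_call = FakeCall(("example.com", 5240, None))

# The corrected form from the discussion: `_` marks values the test
# does not care about. (The earlier `_s, port, _s = ...` was a typo
# for this.)
_, port, _ = connect_call.args
print(port)  # → 5240
```

Note that hazmat argues below for the opposite choice in this particular case: when the surrounding values are mocked and undocumented, naming them (or better, asserting on them) is more useful than eliding them.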
[06:47] <bigjools> fwereade: ok change pushed up, thanks muchly
[06:47] <fwereade> bigjools, also I'm not sure what the current gatekeeper rules are for python; I'll have a word with hazmat when he's around and either extract more approvals if required, or just merge them
[06:47] <bigjools> you will land them right?
[06:49] <fwereade> bigjools, (I should probably just land them but I'm reluctant to come charging back into the python without checking)
[06:49]  * bigjools nods
[06:59] <fwereade> bcsaller, ping
[07:00] <hazmat> fwereade, pong
[07:14] <davecheney> fwereade: rogpeppe TheMue does anyone have any comments about testing commands and package main_test ?
[07:15] <fwereade> hazmat, sorry: what are the current landing requirements for juju? bigjools has a couple of patches I'm happy with myself... what else should be done before landing?
[07:16] <hazmat> fwereade, the sru just landed
[07:16] <hazmat> for the connect args, i think in this case it's actually more helpful to have the args documented.. it's a completely mocked situation
[07:17] <fwereade> davecheney, I'm not sure my own preferences are useful... main_test seems to me like the obvious and sane thing to do but if it's flaky I guess we should do things differently
[07:17] <fwereade> hazmat, heh, sorry
[07:17] <hazmat> fwereade, and those params are never checked anywhere else in the codebase
[07:17] <TheMue> davecheney: Not yet. So far I had no main commands to test.
[07:17] <hazmat> so it's completely unclear what they are if they're elided
[07:17] <hazmat> bigjools, ^ sorry
[07:19] <hazmat> bigjools, actually i'd take the reverse notion.. and actually verify those values
[07:19] <davecheney> fwereade: for a long time gotest didn't work with main packages
[07:19] <hazmat> since they're not really checked otherwise
[07:19] <hazmat> which is the root cause of this bug
[07:20] <davecheney> as they are the end of the line, dependency wise
[07:20] <davecheney> all the other code i've found that does test main, tests it inside package main
[07:20] <davecheney> i don't think there is a problem doing unit tests inside the package you are testing
[07:20]  * hazmat checks tests_auth
[07:21] <fwereade> davecheney, putting the tests in the package you're testing feels icky to me; but like I said, if main_test is flaky and not expected to be fixed any time soon, I guess we should just change them
[07:23] <davecheney> fwereade: you aren't putting them in the same package, well, only when testing, but there is no other way to test unexported functions
[07:24] <davecheney> anyway, i've got to go
[07:24] <davecheney> i'll see you on the flip side
[07:26] <hazmat> fwereade, trunk is green for this fix
[07:26] <hazmat> fwiw
[07:29] <fwereade> hazmat, ok then, I'll land them both once the change you asked for is done
[07:29] <fwereade> hazmat, cheers
[07:29] <hazmat> fwereade, thanks
[07:36] <rogpeppe> fwereade, TheMue, hazmat: mornin' all
[07:37] <fwereade> rogpeppe, heyhey
[07:37] <hazmat> g'night all
[07:45] <bigjools> hazmat: ok I'll reverse the last commit
[07:45] <bigjools> I did think it was rather odd to remove useful variable names :)
[07:46] <hazmat> when they're well known and unused, we do tend to elide them.. but this is a special case.. since they're effectively unknown.
[07:47] <bigjools> well they are test case logs
[07:47] <bigjools> the other ones are tested elsewhere
[07:48] <bigjools> hazmat: ok pushed up
[07:48] <hazmat> bigjools, are they? :-)
[07:48] <hazmat> bigjools, i couldn't find anywhere else where args was looked at
[07:49] <hazmat> bigjools, thanks
[07:50] <bigjools> I thought I'd seen it somewhere
[07:50] <bigjools> my brain ceased to function 30 minutes ago, it's been a long day
[07:51] <bigjools> hazmat: ah just saw your paste, I'll poke that in too, one sec
[07:53] <hazmat> cool
[07:53]  * hazmat gives up on going back to sleep
[07:53] <bigjools> and pushed
[07:54]  * bigjools looks forward to a merge email
[08:53] <hazmat> bigjools, shouldn't this also do port 443 for https?
[08:58] <rogpeppe> fwereade: i've been looking at CL 6244060. i *think* that having multiple zk Servers sitting around in dummy isn't a good idea.
[08:59] <fwereade> rogpeppe, I'm somewhat ambivalent, but I think in practice that when one is added we will remember to defer a removal as well
[08:59] <rogpeppe> fwereade: i think we want to reuse the zk server regardless of the name that the test gives to the environment
[09:00] <rogpeppe> fwereade: how do we remove a server?
[09:00] <fwereade> rogpeppe, I guess doing it this way does effectively force us to use one dummy env name per package, which prevents us from encoding any helpful context in there
[09:00] <fwereade> rogpeppe, SetZookeeper("foo", nil)
[09:00] <rogpeppe> fwereade: that too
[09:01] <rogpeppe> fwereade: and if we're going to do that, why bother allowing multiple zk servers
[09:01] <rogpeppe> ?
[09:02] <rogpeppe> there's an alternative, actually
[09:02] <fwereade> rogpeppe, it seemed sensible to do that regardless... I think the issues are orthogonal?
[09:02] <fwereade> go on
[09:04] <rogpeppe> fwereade: hmm, another thought: why do we bother with SetZookeeper. why don't we let the dummy environ start its own zk?
[09:04] <rogpeppe> fwereade: then the dummy environ can manage the server "cache". (it could keep just one server around after a Reset)
[09:05] <fwereade> rogpeppe, hmm, how do we know when to shut the servers down?
[09:05] <rogpeppe> fwereade: oh yeah, good point. *that* was why :-)
[09:07] <rogpeppe> fwereade: anyway, i *think* we want to be able to reuse a zookeeper regardless of the name of the environ.
[09:08] <fwereade> rogpeppe, tbh that was my initial preference but we do seem to have multiple storages (right?) and consistency seems sensible
[09:09] <rogpeppe> fwereade: storage is *much* lighter weight than zk. and we trash all the storages after Reset.
[09:09] <rogpeppe> fwereade: i think the 10s startup overhead of zk means we will never want to start up two
[09:11] <fwereade> rogpeppe, my instinct says that indeed we won't, and that it will be much easier to note this and remove the capacity at some point in the future than it will to get into heavy discussions with niemeyer about it right now
[09:11] <fwereade> rogpeppe, my primary consideration, sad to say, is getting it merged so I can get to work on deploy without it being at the end of a looong pipeline of CLs under discussion ;)
[09:12] <niemeyer> Good mornings!
[09:12] <fwereade> niemeyer, heyhey!
[09:12] <rogpeppe> fwereade: i'm concerned that this kinda breaks the dummy environs model
[09:12] <rogpeppe> niemeyer: yo!
[09:12] <rogpeppe> niemeyer: just expressing mild concern about SetZookeeper taking an environ name
[09:13] <rogpeppe> niemeyer: we want the zk to be reused regardless of the name of the environment that's opened, i think.
[09:13] <niemeyer> fwereade, rogpeppe: No need to get into heavy discussions.. it was just an opinion.. happy to be shown the other side of it :)
[09:13] <niemeyer> rogpeppe: So what's the point of having environment names in the dummy provider?
[09:13] <fwereade> niemeyer, my personal view is that the cost of either approach is dwarfed by the cost of talking about it for more than 20s ;)
[09:14] <niemeyer> rogpeppe: Environment names explicitly enable talking to completely different environments
[09:14] <rogpeppe> niemeyer: so we can open multiple environments. but Reset should reset everything
[09:14] <niemeyer> rogpeppe: If we have two different environments that have different internal representations but that in fact are the same ZooKeeper state, isn't it the same environment?
[09:14] <niemeyer> rogpeppe: Agreed, but if we have a single ZooKeeper state, it's not multiple environments
[09:14] <rogpeppe> niemeyer: we weren't allowing that
[09:15] <rogpeppe> niemeyer: we weren't going to allow multiple zk environments
[09:15] <rogpeppe> niemeyer: because i don't think we'll ever actually want that
[09:15] <niemeyer> rogpeppe: There's no such thing as a "zk environment"..  an environment contains a ZooKeeper
[09:15] <rogpeppe> niemeyer: lots of tests use the dummy env without requiring zk
[09:15] <niemeyer> rogpeppe: If we have two environments with one zookeeper, we don't have two environments
[09:16] <niemeyer> rogpeppe: Yep, that sounds fine
[09:16] <rogpeppe> niemeyer: what sounds fine?
 niemeyer: lots of tests use the dummy env without requiring zk
[09:17] <niemeyer> fwereade: Do we have a use case for opening two environments with a single ZooKeeper?
[09:17] <fwereade> niemeyer, I don't think so
[09:17] <rogpeppe> niemeyer: at the same time, you mean?
[09:17] <niemeyer> fwereade: So what's the concern?
[09:18] <niemeyer> fwereade: I'm concerned we're modeling something unrealistic in the dummy environment
[09:18] <rogpeppe> niemeyer: we don't want to restart zk every time we start a test.
[09:18] <niemeyer> fwereade: But doing dummy.SetZooKeeper("name", zk) vs dummy.SetZooKeeper(zk) execute in the same amount of time
[09:18] <fwereade> niemeyer, from my perspective, that it's just more bookkeeping... I expect to run a bunch of sequential tests, each of which uses a dummy env, and which in practice may as well all use the same ZK
[09:18] <niemeyer> rogpeppe: Yep.. agreed
[09:19] <niemeyer> fwereade: Sounds fine too?
[09:19] <fwereade> niemeyer, doing this effectively means we either need multiple zookeepers, or to explicitly set multiple envs to use the same one, if we ever want to use different env names
[09:20] <fwereade> niemeyer, but I really have no horse in this race, I think the cost is minimal either way and I'm happy enough with either approach
[09:20] <niemeyer> fwereade: The cost is zero if we're never doing that.. and if we ever do it we can consciously choose to assign to the same ZooKeeper, right?
[09:21] <niemeyer> fwereade: I guess I'm still lacking what's the argument against making the interface of the dummy provider a bit more realistic in terms of what happens
[09:21] <niemeyer> fwereade: We're debating about SetZooKeeper(name, zk) vs. SetZooKeeper(zk), right?
[09:22] <rogpeppe> niemeyer: i like the fact that dummy.Reset forgets all environ names
[09:22] <fwereade> niemeyer, the cost is (1) a little bit of extra bookkeeping code and (2) consciously choosing to set up extra zookeepers in the package test function
[09:22] <niemeyer> rogpeppe: Sure, so continue doing that.. (?)
[09:22] <rogpeppe> niemeyer: but if we add SetZooKeeper(name, zk) that name remains around
[09:22] <rogpeppe> niemeyer: even after Reset
[09:22] <fwereade> niemeyer, like I say, not enough for me to worry about
[09:23] <niemeyer> rogpeppe: I don't get it.. are you suggesting that Reset() won't reset what SetZooKeeper does? That sounds weird
[09:23] <rogpeppe> niemeyer: if it does reset it, then we'll need to call SetZooKeeper on every test
[09:23] <fwereade> niemeyer, ah, hmm, so Reset() should clear all zookeepers and each test should explicitly set one at the start?
[09:23] <fwereade> niemeyer, that does feel heavyweight to me
[09:23] <rogpeppe> niemeyer: perhaps that's what we should be doing i guess
[09:24] <fwereade> niemeyer, having the magic test ZK in python felt like a win to me tbh
[09:24] <niemeyer> fwereade: Assigning an entry to a map sounds heavyweight?
[09:24] <niemeyer> fwereade: Reset resets everything else in Dummy.. why is it not acting on what SetZooKeeper does?
[09:24] <fwereade> niemeyer, keeping track of the zookeeper in each test package, rather than just in the package setup func
[09:25] <fwereade> niemeyer, Reset trashes the content of every active zookeeper
[09:25] <fwereade> niemeyer, what I'm trying to avoid is starting a fresh ZK for every test
[09:25] <niemeyer> fwereade: You don't have to start a fresh zookeeper.. trashing content != starting zookeeper
[09:25] <rogpeppe> niemeyer: the idea behind SetZooKeeper was "here is a zookeeper server; use it for a zk environment whenever it's opened"
[09:25] <niemeyer> fwereade: Each test *should* get a zeroed out zookeeper, right?
[09:25] <fwereade> niemeyer, agreed; and it does
[09:26] <niemeyer> fwereade: Ok, so I misunderstood your point
[09:26] <fwereade> niemeyer, what rogpeppe said^^
[09:26] <niemeyer> rogpeppe, fwereade: Sure.. how does that change what was just pointed out, though?
[09:26] <fwereade> niemeyer, I think that allowing different ZKs for different envs is fine; I'm not sure we'll use it much in practice
[09:27] <niemeyer> fwereade: Agreed.. I'm not arguing that we use it.. I'm arguing that we don't have unrealistic scenarios for testing
[09:27] <niemeyer> fwereade: Two different environments in a configuration, with different names, will have different backing ZooKeepers
[09:28] <fwereade> niemeyer, and I can get behind that; I've proposed a CL that, I think, does make the scenario more realistic as you suggest, and I'm perfectly happy with it :)
[09:28] <rogpeppe> niemeyer: if we have Reset clear out all the zk servers too, then every test will need to call SetZooKeeper, which means that every test suite will need a zk server field. if we have just a single zk server instance, then we can do the setup exactly as we do in state.TestPackage
[09:29] <rogpeppe> niemeyer: will we ever want to have two different environments concurrently with different zookeepers?
[09:29] <rogpeppe> niemeyer: it's a heavyweight test and one i can't really see a reason for doing
[09:29] <fwereade> niemeyer, I'm not so keen on the idea of having a zkServer package var for every test package, and explicitly setting it as the dummy zookeeper in every test; is that what you're proposing?
[09:30] <rogpeppe> niemeyer: if you never want to do that (and you can make it panic if that ever happens accidentally) then the scheme is not unrealistic - it's indistinguishable, i think.
[09:30] <niemeyer> rogpeppe: <niemeyer> fwereade: Two different environments in a configuration, with different names, will have different backing ZooKeepers
[09:30] <niemeyer> rogpeppe: That's how environments work for real
[09:31] <rogpeppe> niemeyer: yes, but will we *ever* want to open two such environments *at the same time* in a test?
[09:31] <niemeyer> rogpeppe: If we're creating a dummy provider that works in a different way, it sounds like we're doing something wrong..
[09:32] <niemeyer> rogpeppe: You were the one changing the dummy package so it could take multiple environments with different names.. (!?)
[09:32] <rogpeppe> niemeyer: that's for basic tests that don't require zk.
 niemeyer: yes, but will we *ever* want to open two such environments *at the same time* in a test?
[09:33] <niemeyer> rogpeppe: That does answer your own question
[09:33] <rogpeppe> niemeyer: by "such environments" i meant environments with a backing zk instance
[09:34] <niemeyer> rogpeppe: We're going in circles. I've already pointed out the reasoning behind that. You can agree or disagree.
[09:34] <rogpeppe> niemeyer: the approach i'm suggesting is the same one taken by state, for instance - we have a single zk instance that is reused in each test.
[09:34] <niemeyer> rogpeppe: That's where our conversation started.
[09:35] <rogpeppe> niemeyer: so... should Reset forget the servers registered with SetZooKeeper?
[09:36] <niemeyer> rogpeppe: That was my expectation, but I guess the dummy package is being used in a different way than what I expected
[09:37] <rogpeppe> niemeyer: if so, i could live with that. it's another line in each test or suite that opens a zk environ, but the zookeeper.Server could live in a global var just like state.TestingZkAddr.
[09:37] <niemeyer> rogpeppe, fwereade: So far we set up the dummy package on each suite, I think
[09:38] <fwereade> niemeyer, ATM we set it up per package
[09:38] <fwereade> niemeyer, following the model in state
[09:39] <niemeyer> fwereade: The dummy package?  Do you have an example where we set it up per package I could look at in the current source?
[09:40] <rogpeppe> cmd/juju/cmd_test.go sets it up per test
[09:41] <niemeyer> Right, I meant an example where we use the dummy provider on a package context, rather than per test or suite
[09:41] <fwereade> niemeyer, only in https://codereview.appspot.com/6243067/ which technically doesn't exist yet
[09:42] <fwereade> niemeyer, (that's calling SetZookeeper in the package test func; did you mean something different?)
[09:42] <niemeyer> fwereade: Right, so that's why I was assuming a given way of using it, which doesn't match what you had in mind.. not saying either is right or wrong, but part of the argument originates there
[09:42] <fwereade> niemeyer, I'm just monkey-see-monkey-doing what we did for state wrt zookeepers; seems to work ok ;)
[09:43]  * fwereade wonders whether "following established conventions" is a better way of putting that ;)
[09:43] <niemeyer> fwereade: Yeah, you could have monkey-see-monkey-done what the dummy package currently does as well, though :-)
[09:44] <fwereade> niemeyer, so it seems; state is the one I already knew :0
[09:44] <rogpeppe> fwereade: "blindly following over a cliff"?
[09:44] <fwereade> rogpeppe, if all your friends jumped off a cliff and it all seemed to be going just fine...
[09:45] <rogpeppe> fwereade: come on in, the water's lovely!
[09:46] <niemeyer> fwereade: Ok, let's just move back to the SetZooKeeper you had then.. it's very straightforward and will get you going
[09:46] <fwereade> niemeyer, the cost of setting up a ZK was a contributing factor... if that weren't a consideration I'd happily do everything per-suite
[09:46] <rogpeppe> anyway, to get back to the point. i think i'd be happy with either: one zk instance, reused automatically, as originally; or an explicit SetZookeeper required after Reset
[09:46] <fwereade> niemeyer, ok, cool
[09:46] <fwereade> niemeyer, cheers :)
[09:46] <niemeyer> fwereade: I'm happy to delay my preciousism in the design (if that's even a word) to a second moment
[09:46] <fwereade> niemeyer, and a panic if anyone tries to SetZookeeper while one is already set?
[09:47] <niemeyer> fwereade: The fact it matches what's being done with zookeeper now in terms of the test suite is what makes me more comfortable
[09:47] <rogpeppe> niemeyer, fwereade: thanks for bearing with me. i feel like i've been the precious one :-)
[09:47] <rogpeppe> fwereade: i don't see why we shouldn't allow SetZookeeper twice.
[09:48] <niemeyer> fwereade: Not sure.. how do you plan to reset the zk?
[09:48] <fwereade> rogpeppe, ah ok, perhaps I misremembered an earlier comment of yours
[09:48] <niemeyer> fwereade: as in, how do you plan to unset the package variable
[09:48] <fwereade> niemeyer, `defer SetZookeeper(nil)` in the package test func; clear contents on Reset() or Destroy()
[09:48] <niemeyer> fwereade: Ah, ok.. so accept nil, but not another zk
[09:48] <niemeyer> Anyway, I have to move rooms
[09:48] <fwereade> niemeyer, yeah
[09:49] <niemeyer> Which means I have to shutdown
[09:49] <niemeyer> :(
[09:49] <niemeyer> brb
[09:50] <rogpeppe> fwereade: what bad thing might happen if you allowed setting another zk?
[09:50] <fwereade> rogpeppe, having two zookeepers around :p
[09:50] <fwereade> rogpeppe, if someone comes up with a legitimate use case removing the panic is not hard
[09:51] <fwereade> rogpeppe, but trying to do so indicates to me that something is likely to be being Done Wrong
[09:52] <rogpeppe> fwereade: i'm not sure i see the big difference between { SetZookeeper(nil); SetZookeeper(anotherZk)} and SetZookeeper(anotherZk)
[09:53] <rogpeppe> fwereade: but i don't mind actually. require the SetZookeeper(nil) if you prefer.
[09:53] <fwereade> rogpeppe, in either case, assuming package-level setup, either you need to keep track of the original package ZK *or* someone has failed to clear it out from a previous package
[09:54] <rogpeppe> fwereade: i can't see how that would happen. we only test one package at a time, right?
[09:55] <fwereade> rogpeppe, and if a server from a previous package is still there, someone's ballsed something up
[09:55] <rogpeppe> fwereade: so it would only happen if we happened to call SetZookeeper twice in the same package. which could happen, i guess, but i don't see it as an easy-to-find failure mode.
[09:55] <fwereade> rogpeppe, fair enough :)
[09:55] <rogpeppe> fwereade: how could a server from a previous package still be there? they're different programs.
[09:56]  * fwereade waves his hands vaguely, then runs out of steam and slinks off looking embarrassed
[09:56] <rogpeppe> :-)
[10:06] <fwereade> rogpeppe, should Destroy be clearing out the appropriate key in providerInstance.state (and closing the listener)?
[10:07] <rogpeppe> fwereade: erm, zookeeper.Server.Destroy?
[10:07] <fwereade> rogpeppe, sorry, dummy.environ.Destroy
[10:08] <rogpeppe> fwereade: i don't see that method. do you mean Reset?
[10:08] <rogpeppe> oh!
[10:08] <fwereade> rogpeppe, Reset clears them all
[10:08] <rogpeppe> just grepped the wrong dir
[10:09] <rogpeppe> i don't *think* so
[10:09] <rogpeppe> fwereade: it's not like s3 goes away when you destroy an environment
[10:10] <rogpeppe> fwereade: i think it's ok for Reset to do that job
[10:11] <rogpeppe> Aram: yo!
[10:11] <Aram> morning all.
[10:11] <fwereade> rogpeppe, I'll read some more, see if I can figure out why your proposed tweak to open panics... that was just a half-formed hypothesis
[10:12] <TheMue> Aram: Hi
[10:12] <rogpeppe> fwereade: it could be that something's not calling Reset
[10:12] <fwereade> Aram, heyhey
[10:14] <fwereade> rogpeppe, HAHA DISREGARD THAT I wrote the code in the wrong place :/
[10:14]  * rogpeppe disregards it
[10:50] <rogpeppe> fwereade, TheMue, niemeyer: i'm wondering about the way that we manage the state with watchers. if i'm watching something, i'll get a notification that something has happened to it (because its config node changed, for example) but then i have to re-read the same info from zk if i want to find out attributes of the object. but the info might have changed - the info i see might not be consistent with what i'm told
[10:51] <rogpeppe> i'm wondering about a model where each kind of thing (Unit, Service, Machine) caches the info that was known when it was created.
[10:51] <TheMue> rogpeppe: Typically the specialized watchers return the needed info.
[10:52] <TheMue> rogpeppe: So in your case a watcher returns some info but you need different info depending on it?
[10:52] <rogpeppe> TheMue: i'm looking at doing a unit watcher. the obvious thing to do is have the channel sending *Units that have been added or removed.
[10:52] <niemeyer> rogpeppe: That's what the machine watcher does..
[10:53] <rogpeppe> TheMue: but if i want to get any info out of those units, i have to reread it
[10:53] <rogpeppe> niemeyer: yeah, and i'm a bit concerned about it
[10:53] <niemeyer> rogpeppe: No, you're not re-reading.. you're reading.. the information out of those units wasn't read before
[10:53] <niemeyer> rogpeppe: There are different ways to model that problem for sure
[10:54] <niemeyer> rogpeppe: And perhaps the model we have in place today isn't the best one, but changing it will incur other issues
[10:54] <rogpeppe> niemeyer: ah, that's true. the info *about* the units is in the topology, but the info *of* the units is in the unit's config node, right?
[10:54] <niemeyer> rogpeppe: Yep
[10:54] <niemeyer> rogpeppe: So, in either case, I suggest not modifying that today so that we can move forward
[10:54] <TheMue> rogpeppe: Nice description, yes.
[10:56] <niemeyer> rogpeppe: I do think eventually we'll want to cache more than we do, but that can't be done without pondering further about the algorithms
[10:56] <rogpeppe> niemeyer: it doesn't seem that that's entirely true. i'm looking at AssignedMachineId and it reads the topology. to find new units, i'm going to need to watch the topology. so i'll be reading the topology for every unit as well.
[10:57] <TheMue> niemeyer: The test of store fails for me due to a not matching error message in lpad_test.go:42:. Is that only here or well known?
[10:57] <niemeyer> rogpeppe: Yeah, it's a mix of both
[10:59] <niemeyer> TheMue: Certainly not well known by me.. trunk should never be broken
[11:00] <rogpeppe> a possible model to consider for the future: each object acts as a local cache of some state. it only goes to zk when it doesn't already hold that state. an Update method could be implemented on each type to fetch new values from zk.
[11:00] <niemeyer> rogpeppe: Yeah
[11:00] <rogpeppe> TheMue: tests all pass for me
[11:01] <TheMue> niemeyer: Ah, found the error. It's the locale. So bzr returns a German message here while an English one is expected in the test. (sigh)
[11:01] <rogpeppe> niemeyer: that way most methods could lose their error return perhaps - the only place returning an error would be on Update.
[11:01] <TheMue> rogpeppe: See above, it's my locale.
[11:02] <rogpeppe> TheMue: ah. useful to have someone in a different language locale!
[11:02] <TheMue> rogpeppe: Yes. While our own messages are all English the ones of external tools may vary.
[11:03] <niemeyer> rogpeppe: and then all writes have to be cached as well?  That would mean the flush operation could fail, and we wouldn't know why.. which means we'd have to implement retry points that don't really know what is being retried
[11:03] <niemeyer> rogpeppe: etc
[11:04] <rogpeppe> niemeyer: no, i think writes would just write through
[11:04] <niemeyer> rogpeppe: It's a different model.. worth thinking about at some point.. not obviously better
[11:05] <TheMue> rogpeppe: As long as we don't have any problem with re-reading I won't do any application-based caching. This may be the task of the backend (like ZK, Mongo, whatever else).
[11:09] <rogpeppe> niemeyer: it's this kind of thing that concerns me: client A watches units. client B changes unit 1 to X. client A observes change X. client B changes unit 1 to Y. client A reads Y. client A observes change Y. client A reads Y.
[11:10] <niemeyer> rogpeppe: How's that an issue?
[11:11] <rogpeppe> niemeyer: it's very inefficient, particularly when we're observing the topology and it's getting large.
[11:11] <rogpeppe> niemeyer: maybe i shouldn't worry about efficiency here though.
[11:12] <niemeyer> rogpeppe: Yeah, let's make it work first
[11:12] <niemeyer> rogpeppe: Reliably
[11:13] <niemeyer> rogpeppe: Note that, in your example, even if we cached the data and client B read X instead, there would be a second notification
[11:13] <rogpeppe> niemeyer: it's reliability i'm concerned about too. working on a slightly newer version than we've just been told about. maybe it's not an issue, but it seems potentially problematic.
[11:13] <niemeyer> rogpeppe: We haven't been told about anything.. we have simply been notified that a change happened
[11:13] <rogpeppe> niemeyer: yeah, there are two changes, so there would be two notifications
[11:14] <rogpeppe> niemeyer: we've been told that some new units have been created, for example.
[11:14] <niemeyer> rogpeppe: Yes, we have, but that's unrelated to your example
[11:14] <rogpeppe> niemeyer: and we *did* know some info about those units, but we've thrown it away
[11:14] <niemeyer> rogpeppe: Yes, but that's unrelated to reliability
[11:16] <rogpeppe> the caching model seems to me like it has less potential for odd race conditions.
[11:18] <rogpeppe> niemeyer: anyway, food for thought.
[11:18] <hazmat> rogpeppe,  agreed it is inefficient
[11:19] <hazmat> rogpeppe, the other options are to have some notion of logical transactions that the topology records for
[11:19] <hazmat> not necessarily the 3.4 multi-node txn, but a logical one where we carry the topology across multiple ops
[11:19] <hazmat> just to be clear it's not a race condition per se that it's avoiding, it's the spurious notifications caused by every state change, that may actually compose a larger logical op
[11:20] <hazmat> but really the largest source of spurious notifies, is everything listening to the topology, getting hit by notifications they don't care about
[11:20] <rogpeppe> hazmat: yeah that's true
[11:21] <niemeyer> rogpeppe: Indeed, I do think those are good ideas, and that there's something there for us to evolve towards
[11:21] <rogpeppe> hazmat: but it doesn't help when we've been listening to the topology, we're told that it's changed and what it changed to, and then we do a round trip just to fetch it again.
[11:22] <niemeyer> rogpeppe: We quickly did that when we started.. but didn't quite work well
[11:22] <niemeyer> Lunch time
[11:22] <rogpeppe> niemeyer: did what, sorry?
[11:23] <hazmat> as an alternative i think we either need to carefully devolve into multiple topo indexes, and/or starting using multi-node transactions..
[11:23] <hazmat> rogpeppe, that's the nature of watches
[11:24] <hazmat> rogpeppe, oh.. we got told what it changed to?
[11:24] <rogpeppe> hazmat: ah, no, of course. in this case we're doing a round trip to fetch it. and then another round trip to fetch it again.
[11:25] <hazmat> rogpeppe, ah
[11:26] <hazmat> rogpeppe, we did briefly experiment with an up to date object cache, but lacking transactions or pub/sub events (from watches), it ended up just being more complexity imo.
[11:27] <hazmat> rogpeppe, but if you could interject into the watch channel to an object cache invalidation protocol it might have some value, but effectively anything in cache would need to be watched
[11:27] <hazmat> else it could be stale
[11:27] <rogpeppe> hazmat: i wasn't thinking of a global cache, just a local cache for an object when you read it.
[11:27] <hazmat> alternatively work with an operation context, like a transaction scope
[11:27] <hazmat> er. scoped cache
[11:28] <rogpeppe> hazmat: and a way of saying "please update"
[11:28] <hazmat> rogpeppe, yeah.. that makes sense, but speaking of race conditions ;-)
[11:28] <hazmat> its unavoidable i suppose, the local cache gives a consistent view at least
[11:28] <rogpeppe> hazmat: yeah, that's my thought
[11:28] <hazmat> we always have to consider the possibility of state change
[11:29] <rogpeppe> hazmat: indeed.
[11:29] <hazmat> but if we go to modify a cached obj, we have to reconcile current state and modifications, not cache state and mods
[11:29] <rogpeppe> hazmat: but it can change when we ask it to
[11:29] <hazmat> ie. capture versions
[11:29] <hazmat> at a min or use retry change utilities
[11:29] <rogpeppe> hazmat: i'd be happy with a write-through cache
[11:30] <rogpeppe> hazmat: it doesn't solve all the problems, but it alleviates the worst
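The write-through cache rogpeppe is describing could be sketched along these lines. The names and the dict-backed store are hypothetical illustrations (the real state objects are Go types backed by ZooKeeper): reads are served from the local copy with no round trip, writes go to the backing store and the cache together, and an explicit update() re-reads from the store.

```python
# Hypothetical write-through cache: reads come from the local copy,
# writes go through to the backing store, update() re-reads on demand.
class CachedNode:
    def __init__(self, store):
        self._store = store
        self._cache = dict(store)   # snapshot taken when created

    def get(self, key):
        return self._cache[key]     # no round trip

    def set(self, key, value):
        self._store[key] = value    # write through to the store...
        self._cache[key] = value    # ...and keep the local view in sync

    def update(self):
        self._cache = dict(self._store)  # the one call that can fail

store = {"port": 80}
node = CachedNode(store)
print(node.get("port"))   # 80
store["port"] = 443       # another client changes the store
print(node.get("port"))   # still 80: a consistent, possibly stale view
node.update()
print(node.get("port"))   # 443 after the explicit re-read
```

This also illustrates hazmat's caveat: before modifying a cached object, the code must reconcile against current store state (e.g. by capturing versions or retrying), not against the cached copy, since the local view may be stale.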
[11:30] <hazmat> niemeyer, are you in uk?
[11:30] <rogpeppe> hazmat: he is
[11:30]  * hazmat notes it's not lunch time yet ;-)
[11:30] <hazmat> rogpeppe, cool, is there a gojuju sprint going on?
[11:30] <rogpeppe> hazmat: nope
[11:31] <rogpeppe> hazmat: niemeyer's at some sprint in london
[11:31] <hazmat> we should get a juju sprint going on
[11:31] <hazmat> i can't even remember the last sprint we had
[11:31] <hazmat> like august of last year maybe..
[11:31] <rogpeppe> hazmat: +1
[11:32] <rogpeppe> hazmat: i haven't done a sprint yet
[11:32] <hazmat> we should fix that..
[11:32] <TheMue> rogpeppe, hazmat: Isn't something planned when Mark is on board?
[11:32] <hazmat> i dunno
[12:52] <niemeyer> rogpeppe: ping
[12:53] <rogpeppe> niemeyer: pong
[12:53] <niemeyer> rogpeppe: Yo
[12:53] <niemeyer> rogpeppe: Are you game to come tomorrow and return on Friday?
[12:53] <rogpeppe> niemeyer: i think so
[12:53] <niemeyer> rogpeppe: Awesome, sorting out hotel
[12:53] <rogpeppe> niemeyer: will just have a word with carmen
[12:53] <niemeyer> rogpeppe: Cheers
[12:57] <fwereade> rogpeppe, any particular reason StorageReader.URL doesn't return a url.URL?
[12:58] <Aram> niemeyer: eager to meet you tomorrow!
[12:58] <Aram> anyone else there?
[12:58] <niemeyer> Aram: Looking forward too
[12:58] <niemeyer> Aram: Lots of people, but no jujuers unfortunately
[12:59] <niemeyer> Aram: rog should come, though
[12:59] <Aram> great.
[13:35] <niemeyer> rogpeppe: Are we good to go?
[13:51] <fwereade> niemeyer, do I recall you saying we should revisit the way we handle unit placement in python?
[13:51] <fwereade> niemeyer, because we're approaching the point at which we should be considering it :)
[13:51] <niemeyer> fwereade: Hmm.. not that I remember off hand
[13:52] <fwereade> niemeyer, jolly good then -- if it crosses your mind again, do let me know (I didn't see anything wrong with it myself, but...)
[14:04] <TheMue> niemeyer: I now start the go test with a temporary locale en_US. But now I've got this error: http://paste.ubuntu.com/1014818/
[14:06] <rogpeppe> niemeyer: yes.
[14:06] <rogpeppe> niemeyer: sorry about the wait. got into an involved discussion.
[14:07] <niemeyer> rogpeppe: Cool, thanks
[14:07] <niemeyer> rogpeppe: Moving forward then
[14:07] <rogpeppe> fwereade: there's no need for it to be a parsed URL. http.Get doesn't take a url.URL as an argument.
[14:07] <niemeyer> TheMue: Sorry, can't look at that now
[14:08] <TheMue> niemeyer: OK, no problem so far and doesn't hinder state development. ;)
[14:08] <rogpeppe> fwereade: a string seems a fine representation of a URL to me if you don't need to be looking inside it.
[14:09] <fwereade> rogpeppe, ah, ok; in that case the question becomes: TheMue, is there any particular reason AddCharm takes a url.URL?
[14:10] <TheMue> fwereade: Have to think back. IMHO because I'm getting it from an API call. One moment, I'll take a look.
[14:10] <fwereade> TheMue, thanks
[14:10] <rogpeppe> fwereade, TheMue: it looks like it could as well be a string, as all it does is call String on it...
[14:11] <rogpeppe> fwereade, TheMue: and charmData.BundleURL is a string
[14:11] <fwereade> rogpeppe, concur
[14:15] <TheMue> fwereade: Don't know the reason anymore, it became a URL during the discussion.
[14:16] <fwereade> TheMue, rogpeppe: any reason not to make it a string again?
[14:16] <TheMue> fwereade: Are there any problems with URL?
[14:16] <rogpeppe> TheMue: it's worth using the same representation for the same thing throughout, i'd say
[14:16] <fwereade> TheMue, just that it seems kinda silly to turn a string into a url just to turn it back into a string a few µs later :)
[14:17] <rogpeppe> fwereade: +1
[14:18] <TheMue> rogpeppe: Same representation for the same thing: yes. A URL is a URL. ;)
[14:18] <TheMue> rogpeppe: But it's ok for me.
[14:19] <TheMue> rogpeppe: Can parse it to URL if access to the parts is needed.
[14:19] <rogpeppe> TheMue: yeah
[14:19] <TheMue> rogpeppe: The only thing is: parsing to a URL verifies that it has a valid scheme.
[14:21] <rogpeppe> TheMue: i don't think so
[14:21] <rogpeppe> TheMue: it just verifies that the scheme matches [a-zA-Z][a-zA-Z0-9+-.]*
[14:22] <TheMue> rogpeppe: OK, not much (but even more than a pure string).
[14:24] <TheMue> rogpeppe: So AddCharm() gets a charm URL as charm.URL but the bundle URL as string. Hmm.
[14:27] <rogpeppe> TheMue: assuming we're expecting to have got the charm URL from the charm package, i don't think that's an issue
[14:28] <TheMue> rogpeppe: Yes, no issue, it only 'smells' a bit when reading the function signature.
[14:38] <rogpeppe> TheMue: can you enlighten me as to the role of unit names?
[14:38] <rogpeppe> TheMue: i can't quite see why a unit has both a name and a sequence number
[14:38] <rogpeppe> TheMue: and a key
[14:39] <TheMue> rogpeppe: One moment.
[14:41] <TheMue> rogpeppe: The name is built from the service name and sequence number.
[14:41] <TheMue> rogpeppe: The key is the internal key in ZK, like unit-0000000001.
[14:43] <rogpeppe> TheMue: i don't quite see why key and sequence number aren't pretty much the same thing in different forms
[14:43] <rogpeppe> ah! i see it, i think
[14:44] <rogpeppe> TheMue: i *think* it's because a unit is built in two operations; make unit directory, then add the newly made name to the topology.
[14:44] <rogpeppe> TheMue: i'm still not quite sure why we can't just take the unit number from the unit directory name.
[14:45] <TheMue> rogpeppe: Yes
[14:45] <rogpeppe> TheMue: and live with the fact that unit numbers might sometimes skip
[14:45] <rogpeppe> fwereade: any idea?
[14:46] <TheMue> rogpeppe: Do you have any trouble with it?
[14:46] <rogpeppe> TheMue: yeah, it's awkward because i'm implementing topology.UnitsForMachine and i'm not sure what to return from it
[14:46] <fwereade> rogpeppe, TheMue: I'm torn
[14:47] <rogpeppe> TheMue: i can't return []*zkUnit because that doesn't include the service name
[14:47] <rogpeppe> TheMue: i can't return *Unit because that's outside the purview of topology.go
[14:47] <TheMue> rogpeppe: The unit stuff is almost a 1:1 of the Py code.
[14:47] <fwereade> rogpeppe, TheMue: smoothly increasing ids throughout would obviously be nicest, and wouldn't represent a clear encapsulation breakdown
[14:48] <TheMue> rogpeppe: So I would investigate there.
[14:48] <rogpeppe> TheMue: i don't want to define *another* type representing a unit
[14:48] <rogpeppe> fwereade: that's what we've got now, i think.
[14:48] <rogpeppe> fwereade: at least, smoothly increasing *external* ids
[14:48] <fwereade> rogpeppe, TheMue: cool, because that's what we have with machines for sure
[14:49] <rogpeppe> fwereade: exactly.
[14:49] <fwereade> rogpeppe, TheMue: and in this I favour consistency
[14:49] <rogpeppe> fwereade: i'm thinking that machines is doing a very similar thing, and there (i *think*) we have a 1-1 correspondence between key and machine id
[14:49] <fwereade> rogpeppe, yep, I think so too
[14:50] <TheMue> rogpeppe: The sequence no is increasing per service, the unit key over all units.
[14:50] <rogpeppe> fwereade, TheMue: doing this would also mean we could get rid of all the UnitSequence logic in topology.go
[14:50] <rogpeppe> TheMue: i don't think so, but let me check
[14:51] <rogpeppe> TheMue: oh yeah
[14:51] <TheMue> rogpeppe: It is, just checked. It increments a counter per service name in the topology.
[14:51] <rogpeppe> TheMue: that's a bit odd.
[14:51] <rogpeppe> TheMue: i'm surprised it's not a directory inside the service.
[14:51] <fwereade> rogpeppe, hmm, yeah, that might be nicer indeed
[14:52] <TheMue> rogpeppe: It's a value of the topology, maybe because of the 'transactional' behavior.
[14:52] <rogpeppe> TheMue: it's both actually
[14:52] <rogpeppe> TheMue: well... it's a directory and a node inside the topology
[14:52] <TheMue> rogpeppe: The increment only happens when the topology change succeeds.
[14:52] <rogpeppe> TheMue: the directory is inside /units though
[14:53] <TheMue> rogpeppe: The unit node is below units, yes.
[14:53] <rogpeppe> TheMue: yeah. but if the directory was inside the service, we could use a zk auto-increment to do the counting for us
[14:54] <rogpeppe> TheMue: but perhaps there's another reason for having units in a global namespace
[14:54] <TheMue> rogpeppe: We can do a lot. Do we need it while the state isn't even complete?
[14:55] <rogpeppe> TheMue: probably not. it would simplify the code a bit though.
[14:55] <hazmat> it's to give a semantically useful number
[14:55] <hazmat> zk sequence numbers can have gaps if an op fails
[14:56] <rogpeppe> hazmat: that's true. machine ids can do that.
[14:56] <hazmat> and they're global
[14:56] <rogpeppe> hazmat: do gaps matter?
[14:56] <hazmat> across all services, whereas we want per-service unit increments
[14:56] <hazmat> rogpeppe, it's rather disconcerting to see wordpress/5 and wordpress/222
[14:57] <hazmat> as sequential units
[14:57] <hazmat> er.. sequentially allocated that is
[14:57] <hazmat> ie.. the zk sequence is across all units
[14:57] <TheMue> hazmat: So while the key is purely internal, the name containing the sequence no is also externally visible?
[14:57] <hazmat> TheMue, yup
[14:58] <TheMue> hazmat: Makes sense, thx.
[14:58] <hazmat> TheMue, the python code base distinguishes between names and internal_ids
[14:58] <rogpeppe> hazmat: doesn't the same logic apply to machine ids?
[14:58] <hazmat> names being visible, and internal ids being the impl detail
[14:58] <TheMue> hazmat: Yes, in Go the internal ids are now keys, but have the same role.
[14:59] <hazmat> rogpeppe, to some extent that applies, but it's not really something we ever care about per se, a) juju cares about presenting the services not the machines b) the machine global sequence is correct for machine allocation
[14:59] <rogpeppe> hazmat: i guess i wonder if it's worth putting logic and code behind something that is a) very rare and b) only mildly disconcerting if it does happen.
[14:59] <hazmat> whereas the units want service-namespaced sequences
[14:59] <hazmat> rogpeppe, i doubt it re machines
[15:00] <hazmat> rogpeppe, for service units it's important, unless you have another impl of per-service unit sequences
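hazmat's distinction can be illustrated with a toy allocator (hypothetical code, not the actual topology logic): the ZK-style key comes from one counter that is global across all services, while the user-visible name needs a per-service counter. Naming units straight off the global sequence would produce exactly the wordpress/5-then-wordpress/222 effect described above:

```go
package main

import "fmt"

// allocator contrasts a single global sequence (what raw ZK
// sequential nodes give across /units) with per-service counters
// (what the topology's UnitSequence map provides).
type allocator struct {
	global     int            // backs the internal key
	perService map[string]int // backs the user-visible name
}

// add allocates a new unit for service: the key advances globally,
// the name advances only within that service.
func (a *allocator) add(service string) (key, name string) {
	key = fmt.Sprintf("unit-%010d", a.global) // ZK-style zero-padded suffix
	a.global++
	n := a.perService[service]
	a.perService[service] = n + 1
	return key, fmt.Sprintf("%s/%d", service, n)
}

func main() {
	a := &allocator{perService: make(map[string]int)}
	for _, s := range []string{"wordpress", "mysql", "mysql", "wordpress"} {
		key, name := a.add(s)
		fmt.Printf("%s -> %s\n", name, key)
	}
	// wordpress/0 -> unit-0000000000
	// mysql/0     -> unit-0000000001
	// mysql/1     -> unit-0000000002
	// wordpress/1 -> unit-0000000003
}
```

Note how the second wordpress unit is wordpress/1 even though its global key is unit-0000000003; using the global number as the name would have yielded wordpress/3.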
[15:00] <rogpeppe> hazmat: do we rely on the sequentiality anywhere?
[15:01] <hazmat> rogpeppe, never
[15:01] <hazmat> rogpeppe, it's a user interface question
[15:01] <rogpeppe> hazmat: or is it just the fact that someone might see 1 followed by 3 and wonder what's going on.
[15:02] <hazmat> rogpeppe, it's not about the 1 and 3, it's about 1 and 25..
[15:02] <hazmat> rogpeppe, the unit sequence in zk is global to all units across all services
[15:02] <rogpeppe> hazmat: if we put unit directories inside service directories, that wouldn't be a problem
[15:02] <rogpeppe> hazmat: but maybe there's a good reason why that is a bad idea
[15:02] <hazmat> rogpeppe, right.. if you have another impl of per service unit sequences
[15:02] <rogpeppe> hazmat: ah, i see what you mean by that now
[15:02]  * hazmat loves repetition
[15:03] <hazmat> rogpeppe, that sounds fine to me
[15:04] <rogpeppe> TheMue: a reason for doing this is that it means i can map directly from a unit name e.g. wordpress/0 to its directory, without needing to consult the topology.
[15:04] <rogpeppe> TheMue: so unit name, sequence number and key are all derivable from each other.
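Under the layout rogpeppe proposes, the three identifiers would be mechanically interconvertible. A minimal sketch of what "derivable from each other" could look like (hypothetical helpers, assuming unit directories live inside the service and share its sequence; not the existing state code):

```go
package main

import "fmt"

// unitName builds the user-visible name, e.g. "wordpress/0".
func unitName(service string, seq int) string {
	return fmt.Sprintf("%s/%d", service, seq)
}

// unitKey builds the internal key, e.g. "unit-0000000000".
// ZK sequential nodes use a zero-padded 10-digit suffix.
func unitKey(seq int) string {
	return fmt.Sprintf("unit-%010d", seq)
}

// seqFromKey recovers the sequence number from a key, so name,
// key and sequence number are all derivable from one another
// without consulting the topology.
func seqFromKey(key string) (int, error) {
	var seq int
	_, err := fmt.Sscanf(key, "unit-%d", &seq)
	return seq, err
}

func main() {
	fmt.Println(unitName("wordpress", 0)) // wordpress/0
	fmt.Println(unitKey(0))               // unit-0000000000
	seq, _ := seqFromKey("unit-0000000007")
	fmt.Println(seq) // 7
}
```

This is the neatness argument in code form: no extra lookup is needed to map wordpress/0 to its directory, at the cost of restructuring the /units namespace.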
[15:05] <TheMue> rogpeppe: Do we have problems so far consulting the topology?
[15:06] <TheMue> rogpeppe: Or asking it differently: when do we get scaling probs due to the topology size?
[15:06] <rogpeppe> TheMue: what should UnitsForMachine return?
[15:06] <hazmat> it hasn't been a problem at scale, but we haven't been measuring the network traffic to get a better understanding of the outflow
[15:06] <rogpeppe> TheMue: it's not really a performance issue, but one of neatness.
[15:07] <hazmat> and as for the size of the topology, it's really a non-issue with any compression
[15:07] <hazmat> the sequence numbers that is
[15:07] <hazmat> we can fit about 15k machines/services/service units atm without blowing 1MB, even without compression
[15:07] <rogpeppe> TheMue: i'll have a play and see how it looks
[15:07] <TheMue> rogpeppe: So both solutions have to be compared.
[15:08] <rogpeppe> TheMue: yeah
[15:08] <TheMue> rogpeppe: Breaking today's topology concept could lead to a larger redesign, step by step. Let's do it here, oh, and there.
[15:09] <hazmat> rogpeppe, it's not clear that it's meaningful though, you need the topology for most anything.. and unit-to-service-name lookups aren't common
[15:09] <TheMue> rogpeppe: I only see the risk of changing too much of what we reached before the rest is done. 12.10 is coming. ;)
[15:09] <hazmat> rogpeppe, ie. that translation is pretty rare in practice for the runtime
[15:09] <hazmat> rogpeppe, because typically it's holding onto a service unit state which has both values captured
[15:09] <rogpeppe> hazmat, TheMue: ok, i'll leave as is. it should be fairly easy to refactor later actually.
[15:10] <hazmat> yes.. avoid.. premature optimization ;-)
[15:10] <TheMue> hazmat: Hehe, yes.
[15:11] <hazmat> rogpeppe, i agree though that we can do better about the topology slinging, i just don't think that particular case is particularly relevant.
[15:11] <TheMue> rogpeppe: Maybe we get a better solution when the layer around ZK is done and we change the backend in a second step.
[15:11] <rogpeppe> hazmat: as always. although i wasn't really considering this in performance terms.
[15:11] <rogpeppe> hazmat: just that we could reduce the number of entities w.l.o.g.
[15:11] <hazmat> neatness is an eye-of-the-beholder thing, isn't it?
[15:12] <hazmat> and is the neatness you defined avoiding fetching the topology?
[15:12] <rogpeppe> hazmat: no, that was another thing.
[15:12] <hazmat> nm.. i should move on.. silly deadlines
[15:12] <rogpeppe> hazmat: in this case we've already got the topology
[15:12] <hazmat> gotcha
[15:14] <rogpeppe> TheMue, hazmat: my impulse to change things stemmed from my "so why does a unit need two names" question. and the answer is "it doesn't, but that's the way it's been done" AFAICS.
[15:15] <hazmat> rogpeppe, then you've ignored the discussion to this point.. it's twofold: to present a nice interface, and to separate interface from impl
[15:15] <TheMue> rogpeppe: So let's keep it as long as there's no real need. There are still enough tasks to do.
[15:16] <rogpeppe> hazmat: putting unit directories inside their service directories solves both of those issues, i think.
[15:17] <hazmat> rogpeppe, agreed
[15:18] <hazmat> rogpeppe, but that's solving those two issues in a different way, that doesn't obviate the reasons
[15:18] <rogpeppe> hazmat: if we did that, then a unit would only have one name
[15:19] <rogpeppe> hazmat: or at least, the different categories of names would map 1-1
[15:19]  * hazmat nods
[15:25] <niemeyer> rogpeppe: Did you get the hotel confirmation?
[15:25] <rogpeppe> niemeyer: yes thanks
[15:26] <rogpeppe> niemeyer: should i organise my own travel?
[15:26] <rogpeppe> niemeyer: (if so, what time should i plan to arrive/leave?)
[15:26] <niemeyer> rogpeppe: Yeah, I think that's simpler in your case
[15:27] <niemeyer> rogpeppe: Arriving around noonish or early afternoon sounds fine.. I expect Aram will be arriving around that time as well
[15:29] <rogpeppe> niemeyer: ok, that's easy enough. leaving?
[15:29] <niemeyer> rogpeppe: Planning for end of afternoon/early night on Friday should work well
[15:31] <rogpeppe> niemeyer: not cheap, those train tickets!
[15:34] <rogpeppe> niemeyer: looks like minimum price is around £230.
[15:34] <rogpeppe> niemeyer: i bet Aram's flight isn't far off that
[15:35] <niemeyer> rogpeppe: Ugh.. a bit unfortunate indeed
[15:36] <niemeyer> rogpeppe: Still a lot cheaper than flying you to Brazil, though ;)
[15:36] <rogpeppe> niemeyer: very true
[15:38] <rogpeppe> niemeyer: if i want a flexible return time, the price is £301 !
[15:42] <rogpeppe> niemeyer: so i think i'll plan to leave at 1700 on friday, which should get me home by 2100, if that sounds ok
[15:42] <niemeyer> rogpeppe: Yep, sounds good
[15:43] <rogpeppe> niemeyer: that's the train leaving at 1700, so i guess i'd be leaving millbank (which is presumably where we'd be?) at 1600.
[15:43] <niemeyer> rogpeppe: No, we'll actually not have a room on Friday afternoon.. so we'll be meeting somewhere more open
[15:44] <niemeyer> rogpeppe: We have a room for the morning rented somewhere, though
[15:44] <rogpeppe> niemeyer: just as long as it's not too far away from king's cross, that'd be good
[15:44] <niemeyer> rogpeppe: Everybody is leaving EOD.. so won't be an issue
[15:45] <rogpeppe> niemeyer: cool. will book tickets now.
[16:18] <rogpeppe> niemeyer: all booked.
[17:07] <niemeyer> rogpeppe: Woohay
[17:07] <niemeyer> Aram: Do you have the new flight confirmation?
[17:07] <niemeyer> Aram: For real Thursday?
[17:08] <Aram> niemeyer: yes, I printed my boarding pass.
[17:09] <niemeyer> Aram: Cool, just to be sure, you noticed that the flight was booked wrong and then re-booked, right?
[17:09] <Aram> yes, yes, got a funny email asking why I wasn't present on today's flight.
[17:09] <niemeyer> Aram: Cool
[17:10] <niemeyer> Aram: It was half my mistake.. I was silly to assume they would actually book on the day I asked for
[17:10] <Aram> heh.
[17:10] <niemeyer> Rather than double checking the confirmation
[17:13] <rogpeppe> niemeyer, fwereade, TheMue: first step towards the unit-assignment watcher: https://codereview.appspot.com/6256070/
[17:13] <niemeyer> Aram: I'm not being ironic.. that's why they send the flights for confirmation
[17:14] <rogpeppe> i'm off for the night
[17:14] <rogpeppe> Aram, niemeyer: see ya tomorrow!
[17:14] <Aram> have fun.
[17:14] <niemeyer> rogpeppe: Awesome, have a good one
[17:15] <niemeyer> I shouldn't take too long either
[17:24] <niemeyer> Aram: I assume you have address and all?
[17:25] <Aram> 27th Floor, Millbank Tower
[17:25] <Aram> 21-24 Millbank
[17:25] <Aram> it's good, right?
[17:28] <niemeyer> Yep!
[17:28] <niemeyer> Aram: So if we don't talk again today, I'll see you here tomorrow
[17:28] <niemeyer> Stepping out for dinner
[17:28] <niemeyer> Cheers all
[17:28] <Aram> yes, great!
[17:28] <Aram> enjoy!
[17:29] <niemeyer> Thanks!
[22:30] <davecheney> niemeyer: are you happy for me to extend, go-cmd-jujud-fix-testing, to include the other commands then submit ?
[22:31] <davecheney> or should I do another round of reviews ?
[22:53] <davecheney> is schema_test.go failing for anyone else ?