[05:25] <wrtp> davecheney: mornin' boss
[05:28] <davecheney> wrtp: woop woop
[05:28] <wrtp> davecheney: how's tricks?
[05:29] <davecheney> wrtp: so, i added a feature to environ.Instances: if you pass nil, you get whatever it knows about at that point
[05:29] <davecheney> you may find it upsetting
[05:29] <wrtp> davecheney: i think it's probably the best way forward at this point
[05:30] <wrtp> davecheney: another alternative was to add a bool "all" argument to request all instances in addition to the ones requested.
[05:30] <davecheney> http://paste.ubuntu.com/1029872/
[05:30] <davecheney> the PA calls environ.Instances(nil) in a loop anyway
[05:30] <davecheney> so I think we can cope with it eventually discovering all machines
[05:31] <wrtp> davecheney: seems reasonable. what a pity ec2 makes everything so darn hard.
[05:32] <davecheney> wrtp: it has to be, it's cloud scale
[05:32] <davecheney> wrtp: http://paste.ubuntu.com/1029874/
[05:32] <davecheney> I think it's fine to call this in a loop
[05:32] <davecheney> environ.Instances(nil) that is
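The semantics under discussion (a nil argument to environ.Instances returning everything the provider currently knows about, a non-nil slice filtering to the requested ids) can be sketched with a toy environ; the types and names below are illustrative stand-ins, not the real juju provider interface:

```go
package main

import "fmt"

// Instance is a stand-in for the provider's instance type.
type Instance struct{ ID string }

// fakeEnviron models the discussed semantics: Instances(nil) returns
// every instance the environ currently knows about, while a non-nil
// slice filters to the requested ids.
type fakeEnviron struct {
	known map[string]Instance
}

func (e *fakeEnviron) Instances(ids []string) []Instance {
	if ids == nil {
		all := make([]Instance, 0, len(e.known))
		for _, inst := range e.known {
			all = append(all, inst)
		}
		return all
	}
	var insts []Instance
	for _, id := range ids {
		if inst, ok := e.known[id]; ok {
			insts = append(insts, inst)
		}
	}
	return insts
}

func main() {
	env := &fakeEnviron{known: map[string]Instance{
		"i-0": {ID: "i-0"},
		"i-1": {ID: "i-1"},
	}}
	fmt.Println(len(env.Instances(nil)))             // all known instances: 2
	fmt.Println(len(env.Instances([]string{"i-0"}))) // filtered: 1
}
```

Calling this in a loop, as the PA does, eventually discovers all machines as the provider's view converges.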
[05:32] <wrtp> davecheney: i suppose so, but the eventual consistency stuff seems a little unnecessary.
[05:33] <davecheney> wrtp: it does sound a little overengineered
[05:33] <davecheney> it's not like the console is fast or anything because of it
[05:34] <wrtp> :-)
[05:35] <wrtp> davecheney: i thought the provisioning agent only needed to get the instance list on startup
[05:39] <wrtp> davecheney: at least, that's how i understood your message on the mailing list
[05:41] <TheMue> Morning.
[05:41] <davecheney> wrtp: well, the pythonic version did it periodically
[05:41] <wrtp> TheMue: hiya
[05:41] <davecheney> TheMue: hello!
[05:42] <davecheney> wrtp: also, findUnknownInstances() can probably cache its results after the first run
[05:42] <wrtp> davecheney: i suppose that's useful if there might be several provisioning agents running concurrently.
[05:42] <davecheney> it's more to fix the problem of
[05:42] <davecheney> add machine, stop PA, remove machine, start PA == instance is lost and not shutdown
[05:44] <wrtp> davecheney: yeah, but that could happen in a 2nd PA instance, i think, so the "unknown instance" thing could happen for the first PA instance even when it wasn't shut down.
[05:44] <davecheney> wrtp: i'm aware of it, but not putting any effort towards concurrent PAs at this point
[05:44] <wrtp> davecheney: ok.
[05:55] <davecheney> TheMue_: internet troubles ?
[06:07] <TheMue> davecheney: Mobile access. ;)
[06:11] <davecheney> TheMue: i'll speak slowly then
[06:11] <davecheney> TheMue: mobile data speeds in australia are pitiful
[06:15] <TheMue> davecheney: Here it's mostly ok.
[06:17] <davecheney> bugger, after implementing code that shuts down unknown instances, the other side of the unit tests have broken
[06:17] <davecheney> FAIL: provisioning_test.go:245: ProvisioningSuite.TestProvisioningDoesNotProvisionTheSameMachineAfterRestart
[06:17] <davecheney> Error: provisioner started an instance
[06:47] <davecheney> TheMue: wrtp have you ever had visibility issues between different zookeeper connections
[06:47] <wrtp> davecheney: no
[06:47] <davecheney> ie, you write something with connection A, but it isn't visible to connection B ?
[06:48] <wrtp> davecheney: using the same server?
[06:48] <davecheney> yup, localhost
[06:48] <wrtp> davecheney: nope. i suspect a bug in your code :-)
[06:48] <davecheney> yeah, good call
[06:51] <fwereade> davecheney, wrtp: I have suspicions that two connections to the same local server *can* have rather different views of what time it is
[06:52] <wrtp> fwereade: interesting. more than just transient phase difference you mean?
[06:52] <davecheney> fwereade: the problem I am seeing is, in a test, I start an instance, then write the provider id back to the state with machine.SetInstanceId
[06:52] <fwereade> davecheney, wrtp: they'll see the same history, sure, but I don't think any guarantees are made about distinct connections being synchronized in any way
[06:53] <davecheney> then I close that state connection, open another one, but the Id is not there
[06:53] <wrtp> fwereade: aren't they looking at the same underlying state?
[06:54] <davecheney> http://codereview.appspot.com/6307049/diff/2001/cmd/jujud/provisioning_test.go
[06:54] <davecheney> ^ line 245 etc
[06:55] <fwereade> wrtp, I'm afraid I don't know exactly what form that underlying state takes; but given the terms in which the guarantees are couched I wouldn't find it surprising
[06:55] <wrtp> davecheney: my first inclination would be to put a debugging print inside zk to print out what attributes are being set
[06:56] <davecheney> yeah, i'll keep digging
[06:56] <wrtp> fwereade: i think i would, but then again i haven't delved too deeply into zk storage internals.
[06:56] <davecheney> this is almost certainly my fault
[06:57] <wrtp> fwereade: when you've a moment, a chat about upgrade would be good
[06:58] <fwereade> wrtp, specifically:
[06:58] <fwereade> Timeliness
[06:58] <fwereade>     The client's view of the system is guaranteed to be up-to-date within a certain time bound. (On the order of tens of seconds.) Either system changes will be seen by a client within this bound, or the client will detect a service outage.
[06:58] <wrtp> fwereade: surely that applies only when you've got multiple servers?
[06:58] <davecheney> wrtp: i'd think so
[06:58] <davecheney> crap
[06:59] <davecheney> that is weird
[06:59] <fwereade> Sometimes developers mistakenly assume one other guarantee that ZooKeeper does not in fact make. This is:
[06:59] <fwereade> Simultaneously Consistent Cross-Client Views
[06:59] <fwereade> davecheney, wrtp: but I *hadn't* previously noticed the sync() call they mention for getting round this
[07:00] <fwereade> davecheney, wrtp: all the above is from the section "Sometimes developers mistakenly assume one other guarantee that ZooKeeper does not in fact make. This is: Simultaneously Consistent Cross-Client Views"
[07:00] <fwereade> er, http://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#ch_zkGuarantees
[07:01] <wrtp> fwereade: "ZooKeeper by itself doesn't guarantee that changes occur synchronously *across all servers*"
[07:01] <davecheney> which is fair
[07:01] <wrtp> fwereade: i still think it's talking about cross-server consistency, not single-server consistency
[07:02] <wrtp> fwereade: i may of course be wrong :-)
[07:02] <fwereade> wrtp, Single System Image
[07:02] <fwereade>     A client will see the same view of the service regardless of the server that it connects to.
[07:02] <wrtp> fwereade: i'm not sure i see the relevance of that
[07:03] <wrtp> fwereade: we've only got one server.
[07:04] <fwereade> wrtp, for that to hold, a client can't just be getting the latest state from whatever server it happens to connect to... can it?
[07:04] <fwereade> wrtp, I agree that I haven't seen anything specifically stating that there are no additional guarantees with single servers
[07:05] <wrtp> fwereade: i don't see why not. isn't that the whole point, in fact?
[07:05] <fwereade> wrtp, I think the whole point is that everyone sees the same history but not necessarily at the same time
[07:09] <wrtp> fwereade: i guess i'm just having trouble seeing how this would happen in the single-server case. a write would have to put the data into a slow pipeline, to be written eventually, but the write consistency guarantees mean that a write must always see the latest view, i think, so on a single server i can't see that happening.
[07:09] <wrtp> davecheney: you could try putting a sleep after the zk write and see if that made a difference.
[07:10] <davecheney> fwereade: sounds like 'everyone will see the same events, in the same order, eventually'
[07:10] <davecheney> uh oh, australia is lagging out
[07:10] <davecheney> wrtp: nope didn't help, _but_ if I did NOT call st.Close() on the first connection, before opening the second everything worked
[07:11] <fwereade> davecheney, I don't even think it guarantees that everyone will see the same events... just a consistent series of state snapshots, if you like
[07:11] <wrtp> davecheney: interesting. maybe the close is losing events or something.
[07:11] <davecheney> 'everyone will see most of what happened, probably in order'
[07:11] <davecheney> wrtp: indeed, I wonder if we need an explicit flush
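The behaviour davecheney is chasing can be modelled with a toy two-connection store in which writes are acknowledged immediately but only become visible to other sessions after an explicit sync, roughly mirroring the ZooKeeper guarantee fwereade quotes (no simultaneously consistent cross-client views unless you call sync()). This is a simulation only, not the gozk API:

```go
package main

import "fmt"

// store models a ZooKeeper server whose writes are acknowledged but
// only become visible to other sessions after a flush. Toy model.
type store struct {
	committed map[string]string
	pending   map[string]string
}

type conn struct{ s *store }

// Set acknowledges the write but leaves it pending for other sessions.
func (c conn) Set(k, v string) { c.s.pending[k] = v }

// Get reads only what has been committed to this session's view.
func (c conn) Get(k string) (string, bool) {
	v, ok := c.s.committed[k]
	return v, ok
}

// Sync plays the role of ZooKeeper's sync() call: it brings the view
// seen by subsequent reads fully up to date.
func (c conn) Sync() {
	for k, v := range c.s.pending {
		c.s.committed[k] = v
	}
	c.s.pending = map[string]string{}
}

func main() {
	s := &store{committed: map[string]string{}, pending: map[string]string{}}
	a, b := conn{s}, conn{s}
	a.Set("instance-id", "i-1234")
	_, ok := b.Get("instance-id") // not necessarily visible yet
	fmt.Println(ok)               // false
	b.Sync()
	v, _ := b.Get("instance-id")
	fmt.Println(v) // i-1234
}
```

If the real bug is of this shape, the "explicit flush" davecheney wonders about would be a sync before the second connection's first read.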
[07:12] <wrtp> davecheney: the close is happening in the same thread, right?
[07:12] <fwereade> davecheney, I think it does guarantee "in order", if somewhat more weakly than I would prefer
[07:12] <davecheney> wrtp: yes, i kill the tomb, which closes in a defer
[07:12] <wrtp> davecheney: you might want to try reproducing the behaviour in a less convoluted setting.
[07:13] <davecheney> wrtp: yes
[07:15] <davecheney> wrtp: i'm certainly seeing some scary things happen around close
[07:15] <davecheney> will do a simple test case
[08:13] <wrtp> fwereade: ping
[08:23] <fwereade> wrtp, pong
[08:24] <wrtp> fwereade: upgrades...
[08:24] <fwereade> wrtp, ah yes
[08:24] <wrtp> fwereade: this was my first thought: http://paste.ubuntu.com/1030040/
[08:25] <wrtp> fwereade: but then i realised that "exit and let upstart start new version" is asking for trouble
[08:25] <fwereade> wrtp, really? handing responsibility over to upstart seems pretty sane to me
[08:25] <fwereade> wrtp, what are you worried about?
[08:26] <wrtp> fwereade: i'm worried about what happens if someone uploads a dodgy set of tools. suddenly everything will break.
[08:26] <fwereade> wrtp, hm
[08:26] <wrtp> fwereade: i have a solution
[08:26] <wrtp> fwereade: which you might or might not like
[08:27]  * fwereade listens
[08:28] <wrtp> fwereade: it would be nice if a program could upgrade itself, but it can't do that and still have upstart handle the case where it crashes.
[08:29] <wrtp> fwereade: so the idea is to add another "upgrader" command
[08:29] <wrtp> fwereade: which handles the rendezvous between the old and the new versions
[08:30] <fwereade> wrtp, that sounds fine until there's a problem with the upgrader :p
[08:30] <fwereade> wrtp, actually, I think I see another problem
[08:30] <wrtp> fwereade: the idea is that the upgrader is simple enough that it never needs upgrading itself.
[08:30] <fwereade> wrtp, sometimes the args we use to start the agents will change
[08:30] <wrtp> fwereade: that's fine. i've catered for that.
[08:30] <fwereade> wrtp, so we can't necessarily just reuse the upstart script
[08:30] <fwereade> ah
[08:30] <fwereade> cool :)
[08:31] <fwereade> wrtp, I'm reasonably happy with the upgrader idea
[08:31] <wrtp> fwereade: the idea is that when you upgrade, you actually run both programs together for a while.
[08:31] <wrtp> fwereade: the new program connects to the state, does some checks and then says "ok, i'm ready to start"
[08:31] <fwereade> wrtp, my only objection is that "the upgrader is simple enough that it never needs upgrading itself" feels a touch optimistic to me
[08:32] <fwereade> wrtp, I don't really like running 2 at a time
[08:32] <wrtp> fwereade: then the old program shuts down and the upgrader tells the new program to go ahead
[08:32] <wrtp> fwereade: the nice thing about running them both at the same time is that you get zero down time
[08:33] <fwereade> wrtp, yeah, it's just my gut reaction
[08:33] <fwereade> wrtp, I will probably come around to it
[08:34] <wrtp> fwereade: i've built the upgrader part already (although i've still got some compiler errors)
[08:34] <wrtp> fwereade: i think it's possible to do some exhaustive verification of it to check that it's correct
[08:35] <wrtp> fwereade: it's 280 lines of code
[08:35] <wrtp> fwereade: which is bigger than i'd hoped, but still pretty small
[08:36] <wrtp> fwereade: at the moment, the assumption is that it talks to the commands that it runs via stdin and stdout. that could change.
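A rough channel-based sketch of the rendezvous wrtp describes (new version connects to the state, reports ready, the old version is stopped, then the new one is released) might look like this. All names are illustrative; the real upgrader talks to its child processes over stdin/stdout, and the old version's shutdown confirmation is elided here:

```go
package main

import "fmt"

// version models an agent process managed by the upgrader.
type version struct {
	name    string
	ready   chan struct{} // new version says "ok, i'm ready to start"
	stop    chan struct{} // upgrader asks the old version to shut down
	proceed chan struct{} // upgrader tells the new version to go ahead
}

// rendezvous sketches the handover: both versions run together, the
// upgrader waits for the new one to report ready, stops the old one,
// then releases the new one, giving zero downtime. In the real design
// the upgrader would also wait for the old version to confirm exit.
func rendezvous(prev, next *version, log func(string)) {
	<-next.ready
	log(next.name + " ready; stopping " + prev.name)
	close(prev.stop)
	log(prev.name + " asked to stop; releasing " + next.name)
	close(next.proceed)
}

func main() {
	prev := &version{name: "juju-1.0", stop: make(chan struct{})}
	next := &version{name: "juju-1.1", ready: make(chan struct{}), proceed: make(chan struct{})}
	go func() {
		// the new version connects to the state, does some checks...
		close(next.ready)
	}()
	rendezvous(prev, next, func(s string) { fmt.Println(s) })
	<-next.proceed
	fmt.Println(next.name + " running")
}
```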
[08:37] <fwereade> wrtp, cool
[08:37] <fwereade> wrtp, sorry, I have to go help cath with the car for a bit :/
[08:37] <wrtp> fwereade: np
[08:38] <fwereade> wrtp, idea sounds broadly sane but I'd like to see an implementation ;)
[08:38] <wrtp> fwereade: later this morning, i hope :-)
[08:38] <fwereade> wrtp, sweet
[09:55] <fwereade_> TheMue, is it sane to have settings on a service relation?
[09:56] <TheMue> fwereade_: Today we have them on a relation. But I don't know yet how they are used.
[09:57] <fwereade_> TheMue, it seems to me that *unit* relation settings are sane (and are set by units in their hooks) and *service* settings are sane (set by the user via command line)
[09:57] <fwereade_> TheMue, but that we don't currently have a use case for *relation* or *service relation* settings
[09:58] <TheMue> fwereade_: Did you check Py state where those settings are changed?
[09:58] <TheMue> fwereade_: To make sure we don't only create that node and nothing else.
[09:59] <fwereade_> TheMue, what I'm dancing towards is a suggestion that we drop all creation of role/settings nodes in state.AddRelation, because I think they're meaningless until we add units
[10:01] <TheMue> fwereade_: Sounds reasonable.
[10:01] <fwereade_> TheMue, sweet, I think it makes things simpler
[10:02] <TheMue> fwereade_: Yes. Not much, but every bit counts.
[10:08] <Aram> morning.
[10:08] <wrtp> Aram: hiya
[10:21] <TheMue> Aram: Moin.
[10:27] <fwereade_> Aram, heyhey
[10:35] <wrtp> fwereade_: here's the WIP. i've compiled it only, no tests yet. https://codereview.appspot.com/6307061
[10:37] <fwereade_> wrtp, cheers
[11:12] <fwereade_> TheMue, are there any existing tests for the scope stuff in AddRelation?
[11:14] <TheMue> fwereade_: Only the validation of the value, it's in my last proposals.
[11:15] <TheMue> fwereade_: What exactly do you want to see tested?
[11:19] <fwereade> TheMue, sorry, lost history; did you respond?
[11:19] <TheMue> fwereade: Hehe, modern technology.
[11:19] <TheMue> fwereade: I said that my latest proposal does a value validation, but nothing else.
[11:19] <TheMue> fwereade: And I asked what do you want to see tested.
[11:20] <TheMue> fwereade: We've not yet reached the parts that add container scoped paths in ZK.
[11:20] <fwereade> TheMue, what is meant to happen when the scope doesn't match
[11:20] <fwereade> TheMue, that's edging its way towards a proposal on my end
[11:22] <TheMue> fwereade: One moment.
[11:24] <TheMue> fwereade: https://bugs.launchpad.net/juju-core/+bug/1007373 See latest question, it is still open.
[11:24] <TheMue> fwereade: Today, in Py as in Go, only one endpoint has to be container, that's enough.
[11:24] <fwereade> TheMue, I think that's correct; but we need to verify that we store ScopeContainer on each
[11:25] <fwereade> TheMue, that's what you do, but it's not tested AFAICS
[11:26] <TheMue> fwereade: You're right.
[11:26] <fwereade> TheMue, I was also wondering why we return the ServiceRelations from AddRelation; seems to me it would make more sense to return just the Relation, and give that a Services method
[11:27] <TheMue> fwereade: Did you check the callers of AddRelation()?
[11:27] <fwereade> TheMue, there aren't any
[11:28] <TheMue> fwereade: In Py?
[11:28] <TheMue> fwereade: Interesting.
[11:28] <fwereade> TheMue, but if there were, they could just as easily extract the service relations from the service
[11:28] <fwereade> TheMue, sorry, not in py
[11:28] <TheMue> fwereade: Then you maybe see the reason.
[11:29] <TheMue> fwereade: But a relation has not enough information for all services, it would have to dynamically retrieve them again. So maybe it's better to only return the service relations, they have a Relation() method.
[11:30] <fwereade> TheMue, sorry, I don't see the reason
[11:30] <fwereade> TheMue, the only client discards the result
[11:31] <TheMue> fwereade: Hmm, then the old reason for it may be gone.
[11:32] <fwereade> TheMue, and honestly I don't see what good the Relation type actually does (maybe status?)
[11:32] <fwereade> TheMue, given a service, I will definitely want to know what relations it's in
[11:33] <fwereade> TheMue, but for that I want ServiceRelations, I think
[11:33] <fwereade> TheMue, hmm, actually, even for status a bare Relation type is pretty useless I think
[11:34] <fwereade> TheMue, I think it should actually be AddRelation(...) error
[11:34] <TheMue> fwereade: It's used in RemoveRelation() ;)
[11:34] <fwereade> TheMue, why don't we just pass the same endpoints, calculated in the same way?
[11:35] <fwereade> morning mramm
[11:35] <TheMue> fwereade: We can do, no problem.
[11:35] <TheMue> mramm: Moin.
[11:36] <fwereade> TheMue, I don't *think* there are any other use cases... but I may be missing something
[11:36] <TheMue> fwereade: But that change would affect all callers too.
[11:36] <fwereade> TheMue, how many?
[11:37] <fwereade> TheMue, AFAICS the only use of get_relation_state is in remove-relation, which then uses it to... remove the relation
[11:37] <TheMue> fwereade: I don't know. I only want to take care that a redesign of today's API, instead of a pure port, doesn't cost too much time (up to 12.10).
[11:38] <TheMue> fwereade: And you know, our last API change is now gone almost completely back into the former solution.
[11:38] <fwereade> TheMue, but a bit of checking of the existing API, and seeing that parts of it are 100% useless, saves us an awful lot of implementation time
[11:38] <fwereade> TheMue, which one?
[11:39] <TheMue> fwereade: I'm not confident enough in today's Py code and the history which led to it. So if you see opportunities in those changes please talk to Gustavo.
[11:40] <TheMue> fwereade: The topology related relation stuff we talked about at UDS.
[11:41] <fwereade> TheMue, this one:      Services  map[string]*topoRelationService
[11:41] <fwereade>  ?
[11:42] <TheMue> fwereade: That's one of the parts, and some implementation details. The last proposals almost looked like the first ones.
[11:43] <fwereade> TheMue, well, I guess this is the same area... but you are aware that we're burning time implementing putative multi-endpoint relation support that isn't on any roadmap I know of?
[11:44] <fwereade> TheMue, based on that I can deal with the data format change
[11:44] <TheMue> fwereade: Yep
[11:45] <fwereade> TheMue, ok, I just don't understand why it's worth spending time on this
[11:45] <TheMue> fwereade: By the way, did you read my error handling mail?
[11:45] <fwereade> TheMue, I thought you seemed to make a solid case but I haven't got much more of a response
[11:46] <fwereade> TheMue, anyway, sorry about that whole review cycle
[11:46] <fwereade> TheMue, I have no idea on what basis niemeyer makes that sort of API decision though
[11:47] <fwereade> TheMue, the AddRelation(args...) business is a known ugliness in python which I'm sure niemeyer complained about himself at one stage
[11:47] <TheMue> fwereade: That's no problem. It's only that I'm maybe not the best discussion partner for some ideas due to my lack of the project history and many design motivations.
[11:48] <fwereade> TheMue, yeah, understood -- but I think it's worthwhile talking to you about these things anyway
[11:48] <TheMue> fwereade: That's true.
[11:49]  * fwereade tries to figure out what he's actually doing
[11:50] <fwereade> TheMue, do we have Service.Relations() yet?
[11:51] <fwereade> TheMue, wait, sorry, ignore me
[11:51] <TheMue> fwereade: No
[11:52] <TheMue> fwereade: I'm currently taking a deeper look into our code base regarding error handling.
[11:53] <fwereade> TheMue, ok, cool
[11:53] <TheMue> fwereade: There are many places where errors are just passed up. Not only ZK, topology or os.
[11:53] <fwereade> TheMue, I think the thing I *need* now is Service.Relations; I'll get to work on that
[11:54] <fwereade> TheMue, I think we should also fix the interface to AddRelation and RemoveRelation
[11:54] <TheMue> fwereade: IMHO we should separate between errors containing possible additional information (as own types) and the visualization of this error as a message, log entry, or something else on the UI level (commands, logs, web).
[11:54] <fwereade> TheMue, would you have any objection to a CL that combined the two? they're somewhat intertwined
[11:55] <TheMue> fwereade: Combining Add and Remove?
[11:56] <fwereade> TheMue, making Add return error, and Remove take endpoints... like add
[11:58] <TheMue> fwereade: So instead of a relation to remove I would have to get the endpoints, then (internally) search a matching relation and then delete it? Where do you get the EPs from? And did you take a look into today's RemoveRelation code?
[11:59] <fwereade> TheMue, how do you get a relation instance to remove now?
[11:59] <fwereade> TheMue, the answer is "you can't"
[11:59] <TheMue> fwereade: Are there no callers of remove_relation_state in Py today?
[12:00] <fwereade> TheMue, the only caller uses a relation it got from get_relation_state, which takes endpoints and returns a relation
[12:00] <fwereade> TheMue, that place is the only client of get_relation_state
[12:00] <fwereade> TheMue, the actual operation is "given these endpoints, which came from the command line, remove the corresponding relation"
[12:01] <fwereade> TheMue, just like AddRelation but in reverse
[12:01] <TheMue> fwereade: OK, I see. So it sounds reasonable.
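The agreed shape (AddRelation returning only an error, and RemoveRelation taking the same endpoints, calculated the same way from the command line) might be sketched like this; the type and field names are hypothetical, not the actual state package API:

```go
package main

import (
	"errors"
	"fmt"
)

// RelationEndpoint is a stand-in for the state package's endpoint
// type; the fields here are illustrative.
type RelationEndpoint struct {
	Service, Relation string
}

// State sketches the interface discussed above: AddRelation returns
// only an error, and RemoveRelation takes the same endpoints that
// were used to add the relation, mirroring the command line.
type State struct {
	relations map[string]bool
}

// key derives a lookup key from the endpoints; both Add and Remove
// must calculate it the same way.
func key(eps ...RelationEndpoint) string {
	k := ""
	for _, ep := range eps {
		k += ep.Service + ":" + ep.Relation + " "
	}
	return k
}

func (s *State) AddRelation(eps ...RelationEndpoint) error {
	k := key(eps...)
	if s.relations[k] {
		return errors.New("relation already exists")
	}
	s.relations[k] = true
	return nil
}

func (s *State) RemoveRelation(eps ...RelationEndpoint) error {
	k := key(eps...)
	if !s.relations[k] {
		return errors.New("relation does not exist")
	}
	delete(s.relations, k)
	return nil
}

func main() {
	st := &State{relations: map[string]bool{}}
	db := RelationEndpoint{"mysql", "server"}
	app := RelationEndpoint{"wordpress", "db"}
	fmt.Println(st.AddRelation(db, app))    // <nil>
	fmt.Println(st.RemoveRelation(db, app)) // <nil>
}
```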
[12:01] <fwereade> TheMue, cool, thanks
[12:35] <Aram> I wonder why Gustavo made mgo take bson.{M,D} arguments instead of using reflection to determine the type of arguments, it's not like you buy any type safety since you can put whatever crap you want inside the bson objects.
[12:35] <Aram> err = k.nodes.Update(bson.M{"_id": parent}, bson.M{"$push": bson.M{"children": path}})
[12:36] <Aram> there's a lot of bson.M noise.
[12:36] <Aram> if you use bson.D it's even worse.
[12:36] <Aram> surely we can do better.
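One direction for reducing the noise Aram describes is a pair of small helpers that hide the nested update documents; `Push` and `ByID` here are hypothetical names, and the local `M` type stands in for bson.M so the sketch needs no external packages:

```go
package main

import "fmt"

// M mirrors the shape of bson.M so this sketch is self-contained;
// in real code it would be bson.M itself.
type M map[string]interface{}

// Push is a hypothetical helper hiding the nested-map noise of
// update documents like bson.M{"$push": bson.M{"children": path}}.
func Push(field string, value interface{}) M {
	return M{"$push": M{field: value}}
}

// ByID builds the common selector bson.M{"_id": id}.
func ByID(id interface{}) M {
	return M{"_id": id}
}

func main() {
	// The original line would then read roughly:
	//   err = k.nodes.Update(ByID(parent), Push("children", path))
	sel, update := ByID("parent-node"), Push("children", "/a/b")
	fmt.Println(sel["_id"])                  // parent-node
	fmt.Println(update["$push"].(M)["children"]) // /a/b
}
```

This keeps the documents as plain maps underneath, so no type safety is gained; it only trades repetition for a small named vocabulary of update shapes.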
[12:54] <wrtp> Aram: what would you want that update line to look like?
[12:54] <Aram> thinking.
[13:50] <hazmat> jimbaker, the branch is looking much nicer
[14:34] <wrtp> fwereade: did you have a look at the upgrade interface in https://codereview.appspot.com/6307061/diff/1/upgrade/upgrade.go ? does it look kinda reasonable? all agents would call Start, and then Upgrade when they need to upgrade.
[14:36] <fwereade> wrtp, sorry: yeah, it looks sensible, but for some reason I'm a bit suspicious of stdin/stdout
[14:36] <fwereade> wrtp, trying to figure out whether I'm just being irrational
[14:36] <wrtp> fwereade: yeah, i wondered about that.
[14:37] <wrtp> fwereade: it would be perfectly possible to use e.g. unix sockets in the same framework
[14:37] <fwereade> wrtp, indeed, but then I'm not really sure what that buys us
[14:37] <wrtp> fwereade: but in the end, stdin/stdout seem to give us exactly what we want, i think.
[14:37] <fwereade> wrtp, indeed :)
[14:37] <wrtp> fwereade: i don't think we're gonna use it for anything else
[14:38] <fwereade> wrtp, consider my reaction to be "tentative approval" then :)
[14:38] <wrtp> fwereade: and i've added some safeguards in there in case anyone prints random stuff
[14:38] <wrtp> fwereade: thanks
[14:39] <wrtp> fwereade: i think the upgrader implementation is really quite neat, BTW. i'm happy to have written it, even if it goes in the bin on monday.
[14:39] <fwereade> wrtp, haha, yeah, it's nice :)
[14:40] <wrtp> fwereade: in particular the way that cmd.run recursively calls itself in a new goroutine and talks to the new version of itself via a channel.
[14:41] <wrtp> fwereade: i started doing it without goroutines and realised it was surprisingly hard to do nicely.
[14:43] <fwereade> wrtp, ++goroutines :)
[14:43] <wrtp> fwereade: +1
[15:00] <fwereade> TheMue, if you have a moment I would appreciate a look over https://codereview.appspot.com/6303060 which we discussed earlier
[15:01] <TheMue> *click*
[15:10] <wrtp> hmm, linux really isn't good at coping with runaway processes. whatever happened to time sharing?
[15:14] <TheMue> fwereade: It will conflict a bit with my latest validation, but so far LGTM.
[15:14] <TheMue> fwereade: I'm interested what niemeyer will say.
[15:21] <Aram> I've written the longest commit message of my life
[15:22] <Aram> http://paste.ubuntu.com/1030536/
[15:33] <Aram> TheMue: fwereade wrtp: I don't understand something about bzr, say I did these 5 commits in my branch. I want to propose a merge. After the merge is done, will my 5 changes appear in the trunk history or just the merge proposal?
[15:34] <wrtp> Aram: just the merge proposal
[15:34] <Aram> hmm.
[15:34] <Aram> that's unfortunate.
[15:35] <wrtp> Aram: if you want that commit to appear, you should have it as part of the merge proposal description
[15:35] <wrtp> Aram: that's what we'll read when looking over the CL anyway
[15:35] <wrtp> Aram: so you might as well put it there.
[15:35] <fwereade> TheMue, cool
[15:36] <Aram> thanks. I know this is how contributing to the Go project also works, but I find it counterintuitive in a distributed versioning system world. it feels more like cvs to me.
[15:37] <wrtp> Aram: seems reasonable to me. the commits aren't lost, they're just one level deeper.
[15:37] <Aram> oh? You can still access them?
[15:37] <wrtp> Aram: yeah
[15:37] <Aram> how?
[15:38] <wrtp> Aram: bzr log -n 0
[15:38] <wrtp> Aram: or -n 2 to show just two levels
[15:38] <Aram> interesting.
[15:45] <Aram> ugh, lbox uses exp/html
[15:45] <Aram> I guess I should install the ppa?
[15:47] <wrtp> Aram: yeah, i just install the ppa
[15:52] <Aram> wrtp:
[15:52] <Aram> 2012/06/08 17:50:06 Authenticating in Launchpad...
[15:52] <Aram> Go to your browser now and authorize access to Launchpad.
[15:52] <Aram>  
[15:52] <Aram> how do I do that?
[15:53] <wrtp> Aram: i guess your browser should have come up with a new window, assuming you're using a standard web browser. i doubt it'll work with lynx :-)
[15:53] <Aram> ah, damn.
[15:53] <Aram> this is a headless box
[15:53] <Aram> hmm...
[15:53] <hazmat> Aram, you can do it on a box with a head, and transfer the auth token
[15:53] <hazmat> Aram, actually you don't even need to do that
[15:54] <hazmat> Aram, you can go to lp, and authorize the app separately
[15:54] <hazmat> and copy the token
[15:54] <hazmat> maybe.. ;-0
[15:55] <hazmat> hmm.. no that doesn't quite work
[15:55] <hazmat> i was looking at https://launchpad.net/~<your_id>/+oauth-tokens
[15:55] <hazmat> you can't actually retrieve the token there though, just manage the apps
[15:55] <Aram> yes, no option to add
[15:56]  * Aram is installing a browser in his headless box
[15:57] <hazmat> hmm.. it might work with lynx
[15:58] <Aram> great, chrome doesn't like remote X.
[15:58] <wrtp> lol
[15:58] <Aram> I have the URL now though
[15:59] <Aram> and I can do it, I think
[16:07] <Aram> I think I finally did it.
[16:08] <Aram> wrtp: fwereade: TheMue: care to review a small branch? (my first, heh...): https://codereview.appspot.com/6298062/
[16:08] <fwereade> Aram, gladly :)
[16:10] <wrtp> Aram: will do
[16:10] <Aram> thanks.
[16:11] <TheMue> Aram: Today only time for a quick look, have to leave now. Will look again tomorrow. So far ok, only separating comments like // ------ are uncommon in our project.
[16:12] <Aram> I know, I want to delete those, niemeyer wrote them though
[16:12] <Aram> heh
[16:12] <TheMue> Aram: Even if I like them, because I'm adding optical separators in my private code too. ;)
[16:12] <Aram> there's a lot of dead code.
[16:13] <TheMue> So, I'm off, birthday party of sister-in-law.
[16:13] <Aram> enjoy
[16:13] <TheMue> Have a nice weekend.
[16:13] <Aram> likewise
[16:13] <TheMue> Aram: Thx.
[16:22] <wrtp> Aram: i'd delete 'em. gustavo's deleted them in another project recently AFAIR.
[16:22] <wrtp> Aram: but maybe in another branch...
[16:22] <Aram> yes.
[16:36] <wrtp> fwereade: any chance you could have a glance at this before i post it to the list? http://paste.ubuntu.com/1030658/
[16:36] <fwereade> wrtp, reading
[16:38] <fwereade> wrtp, sorry, another problem has just crystallized in my brain
[16:38] <wrtp> fwereade: go on
[16:38] <fwereade> wrtp, how will this handle zk data format changes? when we need to stop everything, fix zk, and restart everything?
[16:39] <fwereade> wrtp, not that the original proposal was explicit about that either...
[16:39] <fwereade> wrtp, and not that I can see anything that will make it *harder* than the original
[16:39] <wrtp> fwereade: i did have a plan for that... let me think
[16:40] <fwereade> wrtp, but that does all look good otherwise
[16:48] <wrtp> fwereade: here's a possibility - if the version is tagged as "pending", then all agents download the new version, stop what they're doing, then wait for the "pending" tag to be removed. *then* they go through the normal upgrade procedure.
[16:50] <wrtp> fwereade: it's possible we might want to alternate between two server ports, so that new-major-version agents can connect to the new version of the db while others are still hanging on to the old one.
[16:50] <fwereade> wrtp, first bit sounds sane, bit nervous about the second part
[16:51] <fwereade> wrtp, I don't think we ever want 2 things connected to the same db that aren't in agreement about what data formats are in play ;)
[16:51] <wrtp> fwereade: me neither. hence before we bring up the new db, we make sure that everything in the system is halted, waiting for the pending tag to be removed.
[16:52] <fwereade> wrtp, if we can halt everything we can also tell them to drop their connections, surely?
[16:52] <wrtp> fwereade: yes, but we can't then tell them when to reconnect, right?
[16:53] <fwereade> wrtp, the laws of logic be a harsh mistress
[16:56] <wrtp> fwereade: i'm not sure i can see a way of reliably upgrading the db without using two db instances.
[16:58] <wrtp> fwereade: and also we'd like to be able to switch the underlying db technology too.
[16:58] <fwereade> wrtp, hmm, yeah, makes sense
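The "pending" protocol sketched above (each agent sees the pending tag, downloads the new tools, stops what it is doing, and waits for the tag to clear before running the normal upgrade procedure) can be modelled with channels; this is a toy, ignoring watches and the second db instance:

```go
package main

import "fmt"

// agent models the pending dance: download new tools, halt, wait for
// the pending tag to be removed, then do the normal upgrade.
func agent(pendingCleared <-chan struct{}, log func(string)) {
	log("pending tag seen: downloading new tools")
	log("stopping work; waiting for pending tag removal")
	<-pendingCleared
	log("pending tag cleared: running normal upgrade procedure")
}

func main() {
	cleared := make(chan struct{})
	done := make(chan struct{})
	go func() {
		agent(cleared, func(s string) { fmt.Println(s) })
		close(done)
	}()
	// Once every agent is known to be halted, the new db can be
	// brought up and the pending tag removed.
	close(cleared)
	<-done
}
```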
[17:01] <wrtp> Aram: there's a convention of using "pathPkg" as an alternative name when importing "path".
[17:02] <wrtp> fwereade: last thing i saw from you was "[17:55:53] <fwereade> wrtp, the laws of logic be a harsh mistress"
[17:02] <wrtp> fwereade: in case i missed anything
[17:02] <fwereade> wrtp, you missed a "yeah, makes sense"
[17:02] <wrtp> fwereade: cool, thanks
[17:11] <wrtp> right, i'm off for the weekend. see y'all monday.
[17:11] <wrtp> fwereade: have a good weekend. enjoy the lovely sun which we haint got.
[17:13] <fwereade> wrtp, cheers, enjoy :)
[17:31] <Aram> wrtp: ok, thanks, will change
[17:35] <hazmat> wrtp, you stop the activity, wait till everyone is stopped, upgrade, then reboot the agents
[17:35] <hazmat> wrtp, have a good weekend
[17:36] <hazmat> er. upgrade == upgrade, run migrations