/srv/irclogs.ubuntu.com/2012/04/23/#juju-dev.txt

=== flepied_ is now known as flepied
=== mthaddon` is now known as mthaddon
wrtpTheMue, fwereade: good morning!08:03
fwereadewrtp, heyhey08:03
fwereadewrtp, I'm getting test failures in the ssh stuff08:03
TheMuewrtp, fwereade: moin08:03
wrtpfwereade: let me see 'em!08:03
fwereadewrtp, permissions on the key file are "too open"08:03
wrtpfwereade: oh yes, sorry, i forgot to fix that. darn. will do.08:04
wrtpfwereade: workaround is: chmod go-rwx state/sshtest/*08:04
fwereadewrtp, cheers08:05
wrtpfwereade: bloody ssh being too clever for its own good08:05
fwereadewrtp, haha, yeah08:05
fwereadewrtp, TheMue: nice weekends?08:05
wrtpfwereade: yes thanks. had a couple of friends to stay, had large quantities of curry, a nice walk, and a certain amount of nice whisky too.08:06
wrtpfwereade: you?08:06
fwereadewrtp, lovely :)08:06
wrtpfwereade: still got a pot full of some of the curry left for lunch :-)08:06
fwereadewrtp, yeah, very nice, wandering valletta with the family on sat and an afternoon of booze and boardgames with some friends on sun08:07
fwereadewrtp, awesome08:07
wrtpsounds nice08:07
TheMuefwereade: Yes, with two parties. On Saturday our neighbor turned 50, on Sunday my niece had her confirmation.08:07
fwereadewrtp, actually afternoon/evening I guess, munchkin takes too long really08:07
TheMuefwereade: So too much to drink and eat. *lol*08:07
fwereadewrtp, I guess we're still learning it, just need to be a bit snappier about playing08:07
* wrtp doesn't know about munchkin08:07
fwereadeTheMue, lovely, sounds like we all did that ;)08:07
fwereadewrtp, it's a card game that aims to be the (humorous) essence of D&D08:08
fwereadewrtp, killing monsters, stealing treasure, betraying friends08:08
wrtpfwereade: with success?08:08
wrtpfwereade: (its aim, that is)08:08
fwereadewrtp, a surprising amount actually08:08
fwereadewrtp, down to the scope for rules-lawyering arguments ;)08:09
fwereadewrtp, the rules are pretty simple but a lot of the cards tweak them in one way or another08:09
fwereadewrtp, and the overlaps/conflicts are not always well-specified08:09
fwereadewrtp, so it's not a technically "good" game IMO but it's kinda fun08:10
fwereadewrtp, also, most of the cards have/are some sort of joke08:10
fwereadewrtp, "invoke obscure rules: go up a level"08:11
wrtpfwereade: funnier if you've previously played D&D, perhaps?08:11
fwereadewrtp, probably08:11
fwereadewrtp, but people seem to enjoy it even if they haven't08:11
wrtpfwereade: sounds fun08:12
fwereadewrtp, it's a nice change of pace from catan/carcassonne08:12
wrtpfwereade: oh yeah, minor version compatibility:09:27
fwereadewrtp, oh yes?09:27
wrtpfwereade: if you increment a minor version, you can add (backwardly compatible) features09:28
wrtpfwereade: so if you've got a client with a minor version of 2, it might not work with agents with a minor version of 109:28
fwereadewrtp, ah; my reading had been that, if it doesn't deal with older versions, that's a non-backward-compatible change, and demands a major version bump09:29
wrtpfwereade: hmm. interesting.09:30
fwereadewrtp, it seems to me that in that sort of mixed environment we'd really want to just expose a lowest-common-denominator feature set09:30
wrtpfwereade: i thought that 1.0.0 is compatible with 1.1.0 but not vice versa.09:30
wrtpfwereade: because otherwise AFAICS minor versions can't actually add any features09:31
fwereadewrtp, that makes sense as well tbh09:31
fwereadewrtp, they can add features but only expose the common capabilities of the whole env09:31
wrtpfwereade: yeah09:31
fwereadewrtp, sounds like a hassle to implement, but... ;)09:32
wrtpfwereade: so things are backwardly compatible09:32
wrtpfwereade: but not necessarily forward compatible.09:32
fwereadewrtp, that's what it intuitively means to me but I expect someone's come up with a rigorous definition of the phrase "backward compatible" somewhere ;)09:32
wrtpfwereade: yeah, well this is why it's a good time to be having the conversation... the version package seems like a good place to start09:33
fwereadewrtp, absolutely so :)09:33
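The compatibility rule being converged on here can be sketched in a few lines of Go. This is a minimal illustration only; the `Version` type and `CanRead` helper are hypothetical, not the API of the version package under review.

```go
package main

import "fmt"

// Version is a hypothetical semantic-version triple, standing in for
// whatever the juju version package ends up defining.
type Version struct {
	Major, Minor, Patch int
}

// CanRead reports whether code at version v can read data written at
// version w, under the rule discussed above: same major version, and
// minors are backward compatible but not necessarily forward compatible.
func CanRead(v, w Version) bool {
	return v.Major == w.Major && v.Minor >= w.Minor
}

func main() {
	fmt.Println(CanRead(Version{1, 1, 0}, Version{1, 0, 0})) // 1.1.0 can read 1.0.0 data
	fmt.Println(CanRead(Version{1, 0, 0}, Version{1, 1, 0})) // but not vice versa
}
```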
fwereadewrtp, ok, think of the common situation, that someone upgrades their client juju from 2.2 to 2.409:34
wrtpok09:34
fwereadewrtp, to be suddenly unable to interact, without upgrading the whole env, is a serious problem IMO09:34
fwereadewrtp, so 2.4 *must* be able to emit stuff that can be understood by versions back to 2.0.009:35
fwereadewrtp, plausible reading of it?09:35
wrtpfwereade: yes09:35
wrtpfwereade: however09:35
wrtpfwereade: think about when we've upgraded a client and now *bootstrap* a new environment09:36
wrtpfwereade: we want to be able to use the new client features09:36
fwereadewrtp, yeah; and because everything in the env has version >=2.4, we can09:36
wrtpfwereade: yeah09:37
fwereadewrtp, if not everything does, then we block attempts to use them with friendly errors09:37
wrtpfwereade: yes09:37
wrtpfwereade: although09:37
wrtpfwereade: i suppose that perhaps we shouldn't worry too much. if the latest minor version is there, we'll use it. if it's not, then we should probably still use the earlier version.09:38
wrtpfwereade: so perhaps my compatibility test *is* wrong.09:38
wrtpfwereade: because even though all the features of 2.4 won't work with a 2.2 backend, if a 2.2 backend is all we can find, we shouldn't bomb out09:39
fwereadewrtp, exactly09:39
fwereadewrtp, the thing is, part of me wants to impose versioning at the node-data-struct level09:40
fwereadewrtp, but I can't really justify it very well, even with lots of handwaving09:41
wrtpfwereade: doesn't that amount to the same thing, given that all of those are created by the code?09:41
fwereadewrtp, kind of... it's just that the various kinds of node data won't necessarily be changing in sync with one another09:42
fwereadewrtp, and that's to (1) accommodate the old-environment thing and (2) not bump major versions all the time09:43
wrtpfwereade: i think it might be useful to use node-data-struct compatibility to determine the overall version number09:43
fwereadewrtp, we'll need some way of saying "write the 2.2 format of a unit workflow state node"09:44
fwereadewrtp, yeah, that too09:44
wrtpfwereade: i think we can be simpler than that09:44
fwereadewrtp, oh yes?09:44
wrtpfwereade: like having a rule to say: sub-nodes can only be added, not removed or changed, without losing backward compatibility09:45
wrtpfwereade: kinda structural compatibility09:45
wrtpfwereade: rather than versioning every node09:45
fwereadewrtp, my concern is that that has a fossilizing effect and, long term, leads to horrid frankensteiny data structures09:46
wrtpfwereade: if anything is frankenstein it's 2.2 unit nodes bolted on to 2.1 machine nodes :-)09:47
wrtpfwereade: and we can clean up at major versions09:47
fwereadewrtp, "we can clean up at <point in future>" is an argument of which I have learned to be suspicious09:48
wrtpfwereade: if everything is versioned, you end up with n versions of the code, one for each version09:48
wrtpfwereade: i think the protobuf/gob approach can work better in general09:48
fwereadewrtp, yes: N datatype versions per node09:49
wrtpfwereade: yuck.09:49
wrtpfwereade: don't add a node unless you mean it :-)09:49
fwereadewrtp, it seems to me to encapsulate the yuckiness better than having a 2.6 format extended from 2.5 from ... from 2.009:49
fwereadewrtp, indeed, we should not abuse it09:50
wrtpfwereade: i think it would significantly complicate the code09:50
fwereadewrtp, but either way promiscuously changing data formats is icky and costly09:50
wrtpfwereade: presumably you'd need a separate data type for each version09:50
fwereadewrtp, yeah; I don't anticipate too many versions09:50
fwereadewrtp, but if we have 3 versions I'd rather have 3 versions than one version that has to handle reading/writing all 3 formats09:51
fwereadewrtp, even *with* the restrictions on how the format can change which at least cuts down on that specific burden09:51
wrtpfwereade: i don't think it's that much of a problem - you do it with subset/superset09:51
wrtpfwereade: read the "A bit of history" section at https://developers.google.com/protocol-buffers/docs/overview for some context09:53
fwereadewrtp, like I said, it's a pretty handwavey preference... and protobuf does seem to me to be essentially a more sophisticated version of what I propose09:55
wrtpfwereade: i think so. and it doesn't have version numbers.09:55
fwereadewrtp, I wouldn't like it if we had to manage all the version-munging by hand09:55
wrtpfwereade: agreed. so i think we can work out ways to proceed forward in such a way that version n is automatically compatible with version n-109:56
wrtpfwereade: (version == minor version that is)09:56
fwereadewrtp, yeah, SGTM09:56
wrtpfwereade: but i do think that writing down the schema in some semi-formalised way might help to see potential compatibility problems as they happen09:57
wrtpfwereade: not sure what syntax we'd use though09:58
fwereadewrtp, tbh if we're not generating stuff from the schema and *knowing* that we cannot change the schemas except in tightly prescribed ways I don't think it'd be worth the effort09:59
fwereadewrtp, it's basically just a comment, and has all the attendant problems09:59
wrtpfwereade: i, for one, would quite like to see that comment :-)10:00
wrtpfwereade: but i take the point10:00
fwereadewrtp, I agree that a description of the intent as it was at some unknown time in the past is better than nothing, so long as you are always aware that's what you have10:01
wrtpfwereade, TheMue: i wonder if it might make sense to have some table-driven tests in state (maybe there are already and i've missed them) where we start with a given zk tree (maybe generated by a previous version) and verify that it has the expected properties.10:03
fwereadewrtp, +110:03
wrtpfwereade: that way we could do automatic backward-compatibility checking10:03
fwereadewrtp, exactly10:03
TheMuewrtp: so far there are table-driven tests only for the watches10:04
TheMuewrtp: the other ones are almost 1:1 ported from Python10:04
fwereadewrtp, but then I start to feel that we're putting all the infrastructure for separate datatypes in place but trying to kid ourselves that they're not10:04
wrtpfwereade: that's just testing10:04
wrtpfwereade: the actual code remains straightforward (hopefully)10:05
TheMuewrtp: ah, get it, you don't mean unit tests but a kind of compatibility test as part of the features10:06
wrtpfwereade: as part of the testing, yes.10:06
wrtpoop10:06
wrtps10:06
wrtpTheMue: ^10:06
fwereadewrtp, indeed10:07
TheMuewrtp: unit testing during dev or prerequisite testing during upgrades?10:07
wrtpTheMue: unit testing during dev10:07
fwereadewrtp, so +1 table-driven tests, and we'll see if anything else turns out to be a good idea at some point in the future10:07
wrtpTheMue: at every version, you'd generate some representative zk trees and dump them, along with the results of various API calls on those trees (e.g. Unit.CharmURL, etc etc etc)10:08
TheMuewrtp: Then I didn't get it. What exactly do you want to test?10:08
wrtpTheMue: i want to test that a subsequent version still returns the same results on the same zk tree10:08
wrtpTheMue: even though the zk tree was generated by a previous version of the code.10:09
wrtpof course, it would be nice if we could test the old code against the new data structures too...10:09
TheMuewrtp: OK. Makes it more clear.10:10
wrtpTheMue: i'm imagining just testing the read-only methods, because testing the writing is harder and i can't think of a way of doing it nicely.10:13
wrtpTheMue: for read-only methods, i think it might be possible to do something neat with reflection to automatically call methods and check/store the results10:14
TheMuewrtp: Yes, it's a hard topic. What exactly is the result you would expect by such a test?10:15
wrtpTheMue: this is my thought (it might be wrong!): call a load of functions to create a given state. dump the zk tree for that state, and also dump the results of enumerating all read-only methods on that state (reflection *might* make it possible to do that by simply naming the read-only methods). to test, we restore the zk tree, then do the same enumeration and check the result are the same as the dumped values.10:18
TheMuewrtp: So as a result you get a verification that the read methods with a newer version are able to read the data created with an older version. Am I right?10:20
wrtpTheMue: yes10:20
wrtpTheMue: you could also branch the older version and add test data generated by a newer version.10:21
TheMuewrtp: Sounds good, so far I only dislike the reflection part. I would like to make it more explicit.10:22
wrtpTheMue: i think it might just save lots of boilerplate code, but if you don't mind writing boilerplate...10:23
TheMuewrtp: There would be an initial effort, but later adding new readers is simple. I fear the read-by-accident.10:24
wrtpTheMue: read-by-accident?10:24
TheMuewrtp: And this first list of readers could be generated by reflection and then manually controlled.10:25
TheMuewrtp: How do you know (by reflection) which methods are the (important) readers?10:25
wrtpTheMue: you don't - you name the methods explicitly. but use reflection to make the call and dump (or restore) the data.10:26
wrtpTheMue: but you're probably right - it might not be too much work and perhaps it's better to be explicit10:27
TheMuewrtp: OK, that's the explicit part I missed and wanted. I thought your idea would even start at a higher level and 'detect' those read methods by reflection. That has been my fear.10:27
wrtpTheMue: no, that's not possible, i think10:27
TheMuewrtp: Exactly10:28
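The explicit-names-plus-reflection approach wrtp describes could look roughly like the sketch below. The `Unit` type and its method names are illustrative stand-ins, not the real state API.

```go
package main

import (
	"fmt"
	"reflect"
)

// Unit is a hypothetical stand-in for a state entity with read-only methods.
type Unit struct{ name, charmURL string }

func (u *Unit) Name() string     { return u.name }
func (u *Unit) CharmURL() string { return u.charmURL }

// callReaders names the read-only methods explicitly (as agreed above),
// but uses reflection to invoke them and collect the results into a
// marshalable map, saving the per-method boilerplate.
func callReaders(v interface{}, names []string) map[string]interface{} {
	out := make(map[string]interface{})
	rv := reflect.ValueOf(v)
	for _, name := range names {
		m := rv.MethodByName(name)
		out[name] = m.Call(nil)[0].Interface()
	}
	return out
}

func main() {
	u := &Unit{name: "wordpress/0", charmURL: "cs:series/wordpress-3"}
	fmt.Println(callReaders(u, []string{"Name", "CharmURL"}))
}
```

Only methods that are explicitly named get called, so there is no read-by-accident; reflection is confined to the call-and-collect plumbing.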
wrtpTheMue: i guess all one would need would be a function that takes a state and returns JSON-marshalable data from that10:28
wrtpTheMue: then we can do a DeepEqual on the dumped JSON and current result10:29
TheMuewrtp: Why exactly JSON?10:29
wrtpTheMue: 'cos it's a nice format for dumping10:29
TheMuewrtp: We could also gob it and write the binary.10:29
TheMuewrtp: I like JSON (especially more than XML or YAML), but here I could live with gob.10:30
wrtpTheMue: i think the transparency of JSON could be good here. and we might want to change the struct names.10:30
wrtpTheMue: i think it would be useful to be able to manually inspect both the output tree and the dumped state data.10:31
TheMuewrtp: Why? If something were wrong, the compatibility check would fail anyway.10:31
wrtpTheMue: the compatibility check would fail unnecessarily10:32
TheMuewrtp: The test is "read old, dump, deploy new, read and compare dumed data".10:32
TheMues/dumed/dumped/10:32
wrtpTheMue: i don't think the test itself would dump10:33
wrtpTheMue: that would be a separate thing, done every so often10:33
TheMuewrtp: Not just the mechanical run with go test, I described the whole test procedure.10:33
wrtpTheMue: ok. yes, that sounds right.10:34
wrtpTheMue: so you don't want the "compare dumped data" step to fail unnecessarily10:34
TheMuewrtp: You have to call the read methods of version A, dump those results, because you expect them later, install version A+1, run the read methods again and compare.10:34
wrtpTheMue: yes10:35
wrtpTheMue: except that we do the version A thing, then take those files and store them in version A+1 as test data10:35
TheMuewrtp: Oh, yes, missed it.10:36
TheMuewrtp: We need a ZK dump for simple restore of test data and a Go-readable dump for the comparisons.10:37
wrtpTheMue: exactly10:37
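The "read and compare dumped data" step could be as simple as round-tripping both sides through JSON before comparing, so that key order and whitespace don't matter. This is a sketch only; a real test would enumerate the state read methods as discussed above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// sameAsDump reports whether the current results, once round-tripped
// through JSON, match a dump produced by an earlier version. Comparing
// the unmarshalled forms rather than raw bytes makes the check
// insensitive to key order and formatting.
func sameAsDump(dumped []byte, current interface{}) (bool, error) {
	cur, err := json.Marshal(current)
	if err != nil {
		return false, err
	}
	var a, b interface{}
	if err := json.Unmarshal(dumped, &a); err != nil {
		return false, err
	}
	if err := json.Unmarshal(cur, &b); err != nil {
		return false, err
	}
	return reflect.DeepEqual(a, b), nil
}

func main() {
	// Hypothetical dumped results of read-only calls under version A,
	// compared against the same calls re-run under version A+1.
	dump := []byte(`{"Unit.CharmURL": "cs:series/wordpress-3"}`)
	now := map[string]string{"Unit.CharmURL": "cs:series/wordpress-3"}
	ok, _ := sameAsDump(dump, now)
	fmt.Println(ok)
}
```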
TheMuewrtp: Sounds like an interesting job. Any experience dumping and restoring ZK?10:38
wrtpTheMue: and i think it might make sense for both dumps to be human-readable - that way we get automatic "documentation" of what the zk tree actually looks like, killing two birds with one stone10:38
wrtpTheMue: should be trivial - just: type ZkNode struct {Contents string; Children map[string] ZkNode}; func GetContents(zk *Zk) ZkNode10:40
wrtpTheMue: i don't think we need to check the metadata10:40
TheMuewrtp: Then you should write a novel marshaller. *lol* "The identificator of this unit is …"10:40
TheMuewrtp: But indeed, JSON is pretty fine here.10:40
wrtpTheMue: don't understand previous remark...10:41
wrtpyeah, JSON should do all the work just fine10:41
wrtpahh!10:41
wrtpyou mean a marshaller that produces documentation!10:42
wrtpa fine idea. i'll leave it as an exercise for the reader :-)10:42
TheMuewrtp: But a novel, generated out of ZK, could be fun too. And you could configure whether it is crime, mystery or love. *rofl*10:43
TheMuewrtp: Hehe.10:43
wrtpTheMue: definitely science fiction10:43
wrtpTheMue: all this stuff is already sf :-)10:43
TheMuewrtp: That's pretty simple. "$ dump" leads to 42.10:44
TheMuewrtp: Oh, no, error, that's the Python version.10:45
TheMue*lol*10:45
wrtpTheMue: i've thought of a particular reason why dumping as JSON might be better for the values too.10:45
wrtpTheMue: it means we can potentially do forward-compatibility tests even when the new version has introduced some new fields.10:46
wrtpTheMue: so rather than using DeepEqual, we could use an equality check that ignored new fields in the new version10:47
TheMuewrtp: Sounds good.10:47
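The "ignore new fields" equality check suggested above might look like this sketch: every key in the old dump must appear unchanged in the new one, while keys only present in the new dump (fields added by a later version) are ignored. A full version would recurse into nested objects; this compares one level for brevity.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// subsetOf reports whether every key in oldM appears in newM with an
// equal value; keys that exist only in newM are ignored. fmt.Sprint is
// a shortcut for comparing the scalar values JSON unmarshalling yields.
func subsetOf(oldM, newM map[string]interface{}) bool {
	for k, v := range oldM {
		nv, ok := newM[k]
		if !ok || fmt.Sprint(nv) != fmt.Sprint(v) {
			return false
		}
	}
	return true
}

func main() {
	var oldState, newState map[string]interface{}
	json.Unmarshal([]byte(`{"name":"wordpress/0"}`), &oldState)
	json.Unmarshal([]byte(`{"name":"wordpress/0","resolved":false}`), &newState)
	fmt.Println(subsetOf(oldState, newState)) // the added field is ignored
}
```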
fwereadeTheMue, you've done some stuff with tomb recently, right?11:50
TheMuefwereade: Yes.11:50
fwereadeTheMue, I was wondering what happened to tomb.Stop11:51
TheMuefwereade: Isn't the old Stop() now Kill()?11:52
fwereadeTheMue, IIRC Stop was a constant that you could use to kill without it showing up as an error11:53
fwereadeTheMue, now it seems that Err() returns nil in 2 cases: "no error has yet been encountered" and "we're shutting down cleanly"11:54
TheMuefwereade: Kill(nil) is allowed.11:54
fwereadeTheMue, and if you're checking the error return from a blocking call to something that gets Closed in another goroutine in response to <-Dying()11:54
fwereadeTheMue, then it's important to distinguish between "someone is shutting us down deliberately, so the read error caused by the Close should be ignored"11:55
fwereadeTheMue, and "whoa, read error! help help"11:55
fwereadeTheMue, checking for the error being tomb.Stop allowed for that11:56
fwereadeTheMue, is there another way to do what I ask?11:56
TheMuefwereade: So watchers use two tombs. The first one is inside of the generic watcher types. They only handle the inner state of the watcher.11:57
TheMuefwereade: And those concrete watchers that use it have their own tombs.11:58
TheMuefwereade: They are used if there is a logical/technical error on this level (e.g. an illegal content delivered by the content watcher).11:58
fwereadeTheMue, sorry, reading code, trying to figure out the analogous bit11:59
fwereadeTheMue, ah, hold on, I think I see what to do12:00
TheMuefwereade: So the tomb of the ContentWatcher may return a problem with ZK. And if everything is ok the content watcher delivers a string that may not be interpretable by the surrounding ConfigWatcher.12:00
fwereadeTheMue, ok, wait, I don't think it's the same situation12:01
fwereadeTheMue, say I have `conn, err = listener.Accept` in my looping bit12:01
fwereadeTheMue, this blocks, and makes select on Dying() tricky12:02
fwereadeTheMue, so I have a goroutine which blocks on Dying() and calls Close on the listener12:03
fwereadeTheMue, this causes the blocking Accept to return an error12:03
fwereadeTheMue, which in *this* case is expected, and should be swallowed12:04
fwereadeTheMue, but I need some mechanism to tell me whether that was the case12:04
TheMuefwereade: Uff, nice task. Sounds tricky.12:04
fwereadeTheMue, tomb.Stop was a magic value to pass to Fatal(), which allowed me to distinguish between the two cases12:04
fwereadeTheMue, it was always a little ugly, but it did work ;)12:05
TheMuefwereade: Could you paste it? Or is it too much code?12:05
fwereadeTheMue, relevant bit is here: http://paste.ubuntu.com/942426/12:06
fwereadeTheMue, there are a couple of bits that appear conspicuously stupid12:06
fwereadeTheMue, I have no idea why I blocked on Dead() as well in the killer goroutine12:06
niemeyerMorning!12:07
TheMueniemeyer: Morning.12:07
TheMuefwereade: Just scanning it.12:07
fwereadeniemeyer, heyhey12:08
TheMuefwereade: Don't really get why you're using the goroutine here.12:11
fwereadeTheMue, how would you structure it?12:11
TheMuefwereade: One moment, maybe I've got an idea. You want to keep the err to return it to a possible caller?12:14
wrtpniemeyer: yo!12:14
fwereadeTheMue, I want server.Close() to return the error but only if it's a real error12:15
TheMuefwereade: OK12:15
fwereadeTheMue, and likewise it seems sensible to have Wait() working as I would expect12:15
fwereadeTheMue, (maybe I shouldn't be *embedding* a Tomb at all... it seemed like it might be simpler... misuse?)12:16
wrtpfwereade: one initial trivial remark: there's no reason for the goroutine to wait on <-s.Dead()12:17
niemeyerwrtp: Heya12:17
niemeyerfwereade, TheMue: What's up folks12:17
fwereadewrtp, yeah, I noticed that... the more I look at this the more I think I should start from scratch again12:17
* wrtp nods12:17
fwereadewrtp, I have a horrible feeling that I was halfway through a change :/12:17
TheMuefwereade: Back in a moment ...12:17
wrtpniemeyer: we had a discussion about versions12:18
wrtpniemeyer: you might have seen this: https://codereview.appspot.com/6082044/12:18
niemeyerwrtp: Indeed, and I'm participating on it :_)12:18
wrtpniemeyer: (my impression of semantic versions)12:18
wrtpniemeyer: this morning we also discussed ways of testing state backward compatibility12:19
niemeyerwrtp: Ok.. I haven't seen this code yet, and I'd prefer to continue the conversation until we reach consensus before jumping into an implementation12:21
wrtpniemeyer: ok. the versions package was only a couple of hours' work - i thought people seemed pretty much agreed on using semantic versioning, so it seemed to make sense to write it as a point of reference.12:23
TheMue*: Have an interruption here, will be back later. Sorry fwereade12:23
niemeyerwrtp: Sounds good.. we'll likely need this in a bit12:23
wrtpniemeyer: (kapil's reply came quite a bit later)12:23
fwereadeTheMue, np, I'll experiment12:23
wrtpniemeyer: so are we not agreed that your plan seems good? i was all set to go ahead with it.12:24
niemeyerwrtp: I think people are mostly happy with it, so yeah, we can move forward with it I suppose12:25
wrtpniemeyer: what do you think about my suggestion of using prerelease versions instead of odd-numbered versions?12:25
niemeyerwrtp: There's just some minor disagreements there that we should settle on12:25
niemeyerwrtp: I'm trying to answer it! :-)12:25
wrtpniemeyer: :-)12:26
niemeyerUgh.. does bzr not have a hash for its revisions?12:27
wrtpniemeyer, TheMue, fwereade: are we gonna have a meeting?13:01
niemeyerwrtp: Yeah.. it'd be in 1h per the previous agreement, but I'm happy to do it sooner again13:05
fwereadewrtp, niemeyer, TheMue: now is fine for me13:05
niemeyerTheMue?13:06
wrtpniemeyer: ah, sorry, i got my calendar time wrong by an hour13:07
wrtpniemeyer: happy in an hour if TheMue is lunching13:07
fwereadewrtp, niemeyer: ah yeah, TheMue did mention he had to go for a bit, didn't he?13:08
wrtpno, it seems that google calendar is just confusing. it says "GMT+00" when it actually means "BST-01" (or is that +01 ?)13:10
wrtpniemeyer: i'm not sure using the build scheme will work for dev versions13:19
wrtpniemeyer: build versions have higher priority than the base versions13:19
wrtpniemeyer: which means that 1.2.0+dev would override 1.2.0, which isn't what we want, i think.13:20
wrtpniemeyer: but i'm probably misunderstanding13:21
niemeyerwrtp: It actually is what we want13:29
niemeyerwrtp: Oh, wait13:29
niemeyerwrtp: No, it's not..13:29
* niemeyer reads again13:29
niemeyerwrtp: Ah, interesting, I misunderstood indeed, but it actually works just as well13:29
niemeyerwrtp: It just inverts when what we put the +dev on13:30
niemeyers/when//13:30
niemeyerwrtp: We need to put +dev on the last release rather than on the next one13:30
fwereadeniemeyer, wrtp: pre-release on the next version feels slightly cleaner/clearer to me13:31
niemeyerfwereade: That's no better than the odd/even scheme..13:31
niemeyerfwereade: That said, I guess using the build scheme in that way is no better either13:33
fwereadeniemeyer, yeah, I was trying to figure out where the relevant distinction lay13:33
fwereadebtw, I was chatting to someone about their organisation's data the other day13:35
niemeyerfwereade, wrtp: Maybe -dev and -pre$N would do it..13:36
fwereadeniemeyer, that looks sensible at first glance13:36
* fwereade decides the thread of conversation it looked like he was going to start is a derail13:36
niemeyerWe're still breaking the concept a little bit by allowing -dev to change, but this should be fine really13:36
fwereadeniemeyer, heh, dev-<UUID>13:37
niemeyerfwereade: Exactly.. :)13:37
wrtpniemeyer: i don't *think* the concept is broken by allowing -dev to change, as the spec says nothing about breaking changes *between* prerelease versions13:38
niemeyerfwereade: I guess we could have part of the bzr revid as a hash.. the last few chars.. something like 1.0.2-dev.$REVID[-8:]13:38
fwereadeniemeyer, SGTM13:39
niemeyerwrtp: There's a distinction between changing *between* pre-release versions and changing *pre-release* versions13:39
wrtpniemeyer: ah, i don't think we'd do that. i see what you're suggesting now.13:39
niemeyerwrtp: Well, we need to say so, and say how that's happening13:39
wrtpniemeyer: we can use build versions on top of prerelease versions for that if we want13:40
wrtp1.0.2-dev+build113:40
niemeyerwrtp: Who defines build1?13:40
niemeyerwrtp: and why? This looks ugly13:40
wrtpniemeyer: we think we need to talk about who is going to use what versions...13:40
wrtps/we/i/13:41
wrtpniemeyer: we've got versions that we'll hand out to people13:41
wrtpniemeyer: we've also got versions that we'll use for our own development13:41
niemeyerwrtp: Sure, that's 1.2.3-pre413:41
niemeyerwrtp: Yep13:41
wrtpniemeyer: for versions for our own development, everyone that wants can have their "own" prerelease tag13:42
wrtpe.g. 1.2.3-rog13:42
niemeyerwrtp: the suggestion above puts those as 1.2.3-dev.hash13:42
niemeyerwrtp: No no no13:42
wrtpniemeyer: if i number my own prereleases, then i can use version numbering between my own releases13:42
niemeyerwrtp: Let's please have a simple and well defined scheme..13:42
wrtpniemeyer: that seems well defined to me13:43
niemeyerwrtp: Exactly. It is well defined *to you*13:43
wrtpniemeyer: we allocate each developer a part of the revision namespace13:43
wrtpniemeyer: the semantic version spec takes care of the rest13:43
niemeyerwrtp: No, that's going in an entirely different atmosphere which is completely out of the conversation in the list13:43
wrtpniemeyer: we can't use revid13:44
niemeyerwrtp: Because..?13:44
wrtpniemeyer: because revid only applies to trunk13:44
niemeyerwrtp: Huh?13:44
niemeyerwrtp: revision id != revision number13:44
wrtpniemeyer: oh, sorry, yeah13:45
wrtpniemeyer: the problem with that is they're not ordered13:45
niemeyerwrtp: Yep.. they're not..13:45
niemeyerwrtp: Which is why the idea is handling dev builds as flag13:46
wrtpniemeyer: which is a problem if i'm iteratively deploying a test env, no?13:46
niemeyeras a flag13:46
wrtpniemeyer: ah, so what does the flag do?13:46
niemeyerwrtp: It is a problem indeed, and I'm trying to address it taking in account the characteristics of bzr13:46
niemeyerwrtp: a dev build would always upgrade something with a version <= version (respecting the details for major)13:47
niemeyerwrtp: Even if the -dev bit matches13:47
niemeyerwrtp: so "juju upgrade-juju --dev" would do the right thing as we iterate through development13:48
wrtpniemeyer: i'm not convinced we need a dev mode. if we can work out how to order dev releases, then the versions can do the work13:48
niemeyerwrtp: Without having to artificially bump revision numbers all the time13:49
niemeyerwrtp: We can't order dev releases.. there's simply no way13:49
wrtpniemeyer: i think there might be13:49
wrtpniemeyer: after all, revisions are ordered within a given branch13:49
niemeyerwrtp: Ok.. so please explain how two different branches can cross-upgrade to each other?13:49
wrtpniemeyer: they don't need to, i don't think13:50
niemeyerwrtp: Heh13:50
niemeyerwrtp: I want them to.. I don't want to be shutting down an environment just because I switched branches13:50
niemeyerwrtp: I'm trying to solve actual problems. semver.org is only relevant to me for as long as it13:51
niemeyer's solving issues. It won't solve this issue, so we'll need to agree on something else that does.13:51
wrtpniemeyer: hmm. there are two issues here13:51
wrtpniemeyer: 1) can i deploy a given client version against a given backend? 2) can i upgrade a given client to a given client software version?13:53
wrtpniemeyer: it seems to me that we haven't really touched on 2) yet.13:53
niemeyerwrtp: 2 is completely out of our control..13:54
wrtpniemeyer: oh? i thought that's what kapil was talking about in his message.13:55
wrtpniemeyer: so if we're just talking about 1), i don't see why you'd need to shut down an environment because you switched branches. the deployed environment would still be compatible.13:56
wrtpniemeyer: (because all branches with the same major version are compatible with each other)13:57
niemeyer<wrtp> niemeyer: i'm not convinced we need a dev mode. if we can work out how to order dev releases, then the versions can do the work13:57
niemeyer<niemeyer> wrtp: Without having to artificially bump revision numbers all the time13:57
niemeyer<niemeyer> wrtp: We can't order dev releases.. there's simply no way13:57
niemeyerwrtp: Because it's an *upgrade*..13:58
wrtpniemeyer: sorry, i don't understand. what's an upgrade?13:58
niemeyerwrtp: Ok.. we're not making progress13:59
niemeyerwrtp: THe suggestion is this:13:59
niemeyeractually.. I've just figured that pre-releases scheme isn't friends with Debian versions.. hah14:00
niemeyerSpamapS: Was that your concern?14:01
wrtpniemeyer: how do you mean?14:01
niemeyerwrtp: 1.0.0-pre1 is greater than 1.0.014:01
* wrtp doesn't know anything about Debian versions14:01
wrtpniemeyer: in Debian versions?14:01
niemeyerwrtp: Yeah14:02
wrtpniemeyer: does Debian versioning matter for our version-selection algorithm?14:02
niemeyerwrtp: Absolutely.. that's the environment we're living in14:02
niemeyerwrtp: Using 1.0.0-pre1 will mean people will have to bump the series version every single release14:03
niemeyerwrtp: We'd end up with a version like 123:4.2.114:03
wrtpniemeyer: what's the "123:" ?14:03
niemeyerwrtp: Because that's the only way to force the Debian package to upgrade 1.0.0-pre1 to 1.0.014:04
niemeyerwrtp: It's the series version.. a monotonically increasing number that overrides the whole string-based comparison for cases like this14:04
wrtpniemeyer: ah. it does lexical comparison?14:05
niemeyerwrtp: It breaks down the version into parts and compares lexically, usually, but not if there's a series14:05
niemeyerwrtp: Well.. not if there's a *different* series14:05
niemeyerwrtp: So, I'm coming back to the conclusion that odd/even is perfectly fine14:05
wrtpniemeyer: i'm still not quite sure why these versions have to pertain directly to debian versions. kapil was suggesting a single number. that wouldn't be a debian version either.14:06
niemeyerwrtp: We'll be doing semver.org at the evens, and less strict on the odds14:06
niemeyerwrtp: That'd compare fine as a deb package version14:06
niemeyerwrtp: I'm talking about packaging14:06
wrtpniemeyer: why would we bother packaging a prerelease version?14:07
niemeyerwrtp: Because we want people to use it?14:07
niemeyerwrtp: There's an alternative, though.. which is simple and might work14:09
niemeyerwrtp: and maybe you're happy with..14:09
wrtpniemeyer: for prerelease versions, we *could* say "just get the bzr repo"14:09
wrtpniemeyer: go on14:09
niemeyerwrtp: We could use the +dev tag, as suggested at semver.org14:10
niemeyerwrtp: In the *previous* version14:10
wrtpniemeyer: yes14:10
wrtperm, no14:10
niemeyerwrtp: So immediately after 1.0.0, we tag it as 1.0.0+dev14:10
niemeyerwrtp: But this still won't solve the pre-release issue, I guess14:11
wrtpniemeyer: no, i think that breaks semantic versions too badly14:11
wrtpbut i really don't like the odd-numbered dev versions either14:11
wrtpniemeyer: if we want people to use a prerelease version, why don't we just release it?14:12
wrtpniemeyer: then we can bump the patch version as necessary when their feedback comes in14:12
niemeyerwrtp: Because it's a pre-release.. I think the concept of a pre-release is well understood, but I can explain if not14:12
wrtpniemeyer: please do. this area is all quite new to me.14:13
niemeyerwrtp: A pre-release is a candidate to be the given release14:13
niemeyerwrtp: Which isn't yet blessed as stable for production use14:13
wrtpniemeyer: so... does debian packaging have pre-releases?14:13
niemeyerwrtp: It can also be *really* unstable, rather than a candidate, though14:13
niemeyerwrtp: The goal is to get people in the wild to experiment with what's coming14:13
niemeyerwrtp: But only the brave souls that understand the risks of doing so14:14
niemeyerwrtp: Debian packages may contain pre-releases if the authors decide to offer them as such14:14
wrtpniemeyer: so, in your proposal, all odd numbered branches are pre-releases, yes?14:14
niemeyerwrtp: No14:14
niemeyerwrtp: They can also be development snapshots14:15
wrtpniemeyer: one might say the two things played a similar role14:15
wrtpok...14:15
wrtpall odd numbered branches are *potential* pre-releases, then14:15
wrtp?14:15
niemeyerwrtp: Development snapshots are changing by the minute..14:16
wrtpniemeyer: many potential pre-releases :-)14:16
niemeyerwrtp: They can be seen that way, yes, if we find a way to tag them appropriately with a mark that is unique14:17
wrtpniemeyer: so with Debian packages, how do you tag a pre-release version?14:17
niemeyerwrtp: But even that is problematic during development.. you'd have to commit for every test you do, for example14:17
niemeyers/test/test deployment/14:17
wrtpha14:18
niemeyerMaybe that's fine, though14:18
niemeyerwrtp: Debian packages don't care about the details of the version being packaged. It just has an algorithm to define what's new and what's old that must be respected, or the upgrade won't happen.14:18
wrtpi think it's reasonable.14:18
fwereadeniemeyer, wrtp: commit per deployment sounds sane -- it hasn't been a notable pain point in the past14:18
niemeyerfwereade: Cool14:19
wrtpniemeyer: so debian prerelease versions would always use an explicit series version?14:20
wrtpniemeyer: hmm, that seems unlikely.14:20
niemeyerwrtp: No.. series would have to be bumped every single time a pre-release is used, to enable an upgrade like 1.0.0-pre1 to 1.0.014:20
wrtpniemeyer: how *do* Debian packages deploy pre-release versions?14:21
niemeyerniemeyer> wrtp: Debian packages don't care about the details of the version being packaged. It just has an algorithm to define what's new and what's old that must be respected, or the upgrade won't happen.14:21
wrtpniemeyer: sure. i'm just wondering what people do in practice.14:21
wrtpniemeyer: do they bump the version number one more time, e.g. 1.2.3 for prerelease, then 1.2.4 for actual release?14:22
wrtpniemeyer: or do they append to the previous version e.g. 1.2.2-prerelease before bumping to 1.2.3 for actual release?14:23
niemeyerwrtp: In general I believe people end up splitting the version number in two to avoid the pain of series management14:23
niemeyerwrtp: Ending up with something like 1.0.0-0~pre114:23
niemeyerwrtp: So that 1.0.0-1 would upgrade it14:24
wrtpniemeyer: ok, so they'd never use 1.0.0 as a version after a pre-release then?14:24
niemeyerwrtp: I'm not sure about what you mean in that case14:24
wrtpniemeyer: if i've pre-released as 1.0.0-0~pre1 then i can't release as 1.0.014:25
hazmat fwereade munchkin is awesome and well suited for the kids in the family ;-)14:25
niemeyerwrtp: You can, as I just explained14:25
niemeyer<niemeyer> wrtp: Ending up with something like 1.0.0-0~pre114:25
niemeyer<niemeyer> wrtp: So that 1.0.0-1 would upgrade it14:25
wrtpniemeyer: yeah 1.0.0-1 != 1.0.014:25
niemeyerwrtp: -1 is the release information that is available in all packages14:26
niemeyerwrtp: Do dpkg -l <whatever>14:26
niemeyerdpkg -l bash14:26
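[Editor's note: the `~pre` trick works because dpkg's version comparison gives `~` a weight below even the end of the string. A minimal Go sketch of just that rule — it omits dpkg's numeric-run handling, which doesn't change the outcome for these fragments, and the function names are the editor's, not dpkg's:]

```go
package main

import "fmt"

// charWeight mimics the one fragment of dpkg's ordering that matters
// here: end-of-string weighs 0, and '~' weighs less than that, so a
// revision ending in "~pre1" sorts before the same revision with
// nothing after it.
func charWeight(s string, i int) int {
	if i >= len(s) {
		return 0 // end of string
	}
	if s[i] == '~' {
		return -1 // '~' sorts before everything, even nothing at all
	}
	return int(s[i])
}

// less compares two Debian revision fragments character by character
// using charWeight.
func less(a, b string) bool {
	for i := 0; i < len(a) || i < len(b); i++ {
		wa, wb := charWeight(a, i), charWeight(b, i)
		if wa != wb {
			return wa < wb
		}
	}
	return false
}

func main() {
	fmt.Println(less("0~pre1", "0")) // true: 1.0.0-0~pre1 is older than 1.0.0-0
	fmt.Println(less("0", "1"))      // true: 1.0.0-1 upgrades both
}
```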
fwereadehazmat, don't think laura would follow it quite yet, we can play carcassonne as a jigsaw though ;)14:26
wrtpniemeyer: ah, i didn't know about that14:27
niemeyerwrtp: The basic rule is that the release information is distro-selected.. the version itself is upstream selected14:27
niemeyerwrtp: In most cases, at least14:27
niemeyerwrtp: If we put 1.0.0-pre1 or -dev as upstream, it'll certainly be broken, though, since it'd be a pain14:28
wrtpniemeyer: so there's an easy mapping from semantic versions to debian versions: put a '0' before the pre-release version...14:28
wrtpniemeyer: alternatively always use a 0 as the first char of the pre-release version14:28
wrtpand enforce it14:28
niemeyerwrtp: Nah, doesn't sound necessary14:29
niemeyerwrtp: Anyway, you're right.. that's mostly a red-herring..14:29
niemeyerwrtp: It can be hacked to fit14:30
niemeyerSo.. what can we agree on?14:30
niemeyer1.0.0-pre$N is a pre-release14:30
niemeyer1.0.0-dev.$REVID[-9:] is a snapshot?14:30
niemeyerMaybe pre.1 to conform?14:31
wrtpwhat's the "[-9:]" bit?14:31
niemeyerThat should be -814:31
wrtpa python slice operator?14:31
niemeyerwrtp: The last N bytes of the revid14:31
wrtpah14:31
wrtpyeah, that sounds good14:32
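[Editor's note: the snapshot scheme just agreed on — base version plus the last 8 bytes of the revision id — can be sketched as below. The helper name and example revid are hypothetical, not juju's actual code:]

```go
package main

import "fmt"

// snapshotVersion builds a development-snapshot version string of the
// form 1.0.0-dev.$REVID[-8:], i.e. the base version plus the last 8
// bytes of the branch's revision id.
func snapshotVersion(base, revid string) string {
	if len(revid) > 8 {
		revid = revid[len(revid)-8:]
	}
	return base + "-dev." + revid
}

func main() {
	fmt.Println(snapshotVersion("1.0.0", "someone@example.com-20120423120000-abcdef12"))
	// prints 1.0.0-dev.abcdef12
}
```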
niemeyerWe'll need a --force flag on upgrade-juju14:32
wrtpordering of releases is a red herring anyway. with prereleases we want exact specification or nothing.14:32
niemeyerTo compensate for the lack of ordering14:32
niemeyerActually, we don't.. we just need to enable cross-dev upgrades14:33
niemeyerwrtp: Yeah.. the issue is on development snapshots.. pre-releases are properly ordered14:33
wrtpniemeyer: the main issue AFAICS is that you don't want to use someone else's dev snapshot14:34
wrtpniemeyer: and that, i think, is solved by simply using a different bucket for each developer14:34
niemeyerwrtp: My suggestion is that we enforce the use of --dev to get from a stable release onto a dev release, and then after being into this dev release, upgrading to a dev release on the same patch level is done ignoring the ordering of hash14:34
niemeyerof revid I mean14:34
niemeyerwrtp: I disagree.. I *do* want to use someone else's dev snapshot14:35
niemeyerwrtp: and I do want to use my own snapshot, on different branches, out of order too14:35
niemeyerwrtp: There's no questions.. we have to solve that problem14:35
niemeyerwrtp: Development is a nightmare without that14:35
wrtpniemeyer: how would you want to specify the snapshot to use?14:37
wrtpniemeyer: rev id hash isn't enormously friendly :-)14:37
niemeyerwrtp: Actually, given we'll be using revids, it's an issue regardless.. the revids will be unordered even within a single branch14:38
wrtpyes14:38
wrtpniemeyer: which is why i was presuming that you'd always be pushing the current branch's executables14:38
wrtpniemeyer: and then specifying that version exactly14:38
niemeyerwrtp: Yep, we're just talking about what "that version" means14:39
niemeyerwrtp: and the behavior the client will enforce when working with it14:40
wrtpniemeyer: maybe we should push a hash of the actual executables14:40
wrtp1.0.0-dev.HASH14:40
wrtpniemeyer: then you'll push correctly if you've changed the Go version, for example14:41
niemeyerwrtp: That might work as well.. but it'd just be changing where the hash is being obtained from.. the issues debated still hold14:41
wrtpniemeyer: not quite, as above14:41
niemeyerwrtp: It's also potentially less useful information14:41
wrtpniemeyer: it does mean that if you branch and do nothing, then (assuming a deterministic compiler) you won't need to push the executables again14:42
niemeyerwrtp: Having such a hash will be pretty equivalent to having a random number14:42
wrtpniemeyer: but i take the point about less useful information too14:42
wrtpniemeyer: yeah, that's true.14:43
wrtpniemeyer: but does the revid really matter too much either?14:43
wrtpniemeyer: what *really* matters IMHO is replicability14:43
niemeyerwrtp: It's not ideal, but it's useful.. I can look at a running env and tell which code it is running14:44
wrtpniemeyer: all the juju bits anyway, but not necessarily the bits that they rely on14:44
niemeyerwrtp: Well.. :-)14:45
wrtpniemeyer: so if there's a fix to the tomb package, say, then it might not push when it should14:45
niemeyerwrtp: Yeah, if there's a change in the kernel we won't know as well14:45
wrtpniemeyer: we can't avoid that. we can avoid this.14:45
niemeyerwrtp: not really..14:46
niemeyerwrtp: Unless you introduce a random or pseudo-random number14:46
wrtpniemeyer: if i add some logging code to some package that's not in juju and rebuild, i want to be able to deploy that executable14:46
wrtpniemeyer: ?14:46
wrtpniemeyer: how is the hash of the executables not good enough?14:46
niemeyerwrtp: It's random..14:47
SpamapSniemeyer: pre release scheme is fine if we use ~pre14:47
niemeyerSpamapS: Cool, cheers14:47
wrtpniemeyer: random? it's a deterministic hash of the code, surely?14:47
SpamapSniemeyer: I was more thinking about those who want to automate on top of juju. It's a very small corner case though.14:48
niemeyerwrtp: No, it's surely not..14:48
wrtpniemeyer: no?14:48
niemeyerwrtp: Nope.. hash of executable != hash of code..14:48
wrtpniemeyer: how's that?14:48
niemeyerwrtp: It's also not useful for what a version is useful for..14:49
SpamapSniemeyer: just in general, I've never liked the idea of tagging releases with something like that. But I would not be wholly against using that scheme given all the parameters.14:49
wrtpniemeyer: this isn't a sequential version - it's a scheme that means that, hopefully, we can reliably deploy development versions of juju and have them work predictably.14:50
niemeyerwrtp: It's just not the same thing.. there are zero guarantees that two builds produce the same hash14:50
SpamapShash of executable has a lot of parameters other than the code14:51
niemeyerwrtp: There's also the detail that one can't lock a drawer with the key inside14:51
wrtpniemeyer: it doesn't matter if they don't, i think. it's more important that two builds with different code will produce *different* hashes14:51
wrtpniemeyer: would we store the hash inside the executable?14:52
niemeyerwrtp: I hope we can type "juju version" and have it..14:52
wrtpniemeyer: yeah, that would be good. but i still want to be certain that the code that's executing is the code i just compiled.14:54
niemeyerwrtp: You can always md5sum the executable.. that's beyond the role of a version14:54
wrtpSpamapS: code+compiler is it, i think. i don't think it embeds a time stamp.14:55
niemeyerwrtp: building code doesn't have to produce the same binary every time.. there are zero guarantees about that14:55
SpamapScompiler+all libraries14:55
TheMueSo, phew, back again.14:55
niemeyerwrtp: Ok, I have to step out for lunch, and have a medical appointment first thing in the afternoon.. will think about that conversation meanwhile, and be back in action later in the day14:56
SpamapSoddly enough, the same problem has existed in charms for a while14:56
wrtpSpamapS: yeah. i count libraries as part of the code though.14:56
TheMueniemeyer: Could you please later also review my latest proposal? Thx.14:56
SpamapSwe added the 'deploy --upgrade' flag to deal with it. Arbitrary revision number in the charm that is bumped +1 every time you ask to upgrade it.14:56
niemeyerTheMue: Yeah, unfortunately the meeting time is gone now14:56
niemeyerTheMue: Will do later14:56
TheMueniemeyer: Emergency, my daughter had a cut in her food.14:57
niemeyerTheMue: Ouch!14:57
TheMuefoot14:57
niemeyerTheMue: How's she?14:57
wrtpTheMue: for a moment i thought she'd eaten some glass! i'm glad she'll only lose a foot :-)14:57
TheMueniemeyer: She now has a bandage, but walking will be difficult the next days.14:57
wrtpTheMue: (sorry to hear it, hope she's ok)14:57
niemeyerHeh14:57
niemeyerTheMue: Glad to hear14:58
niemeyerOk.. will be back later!14:58
TheMuewrtp: Yeah, only one char, but totally different meaning. ;)14:58
TheMueniemeyer: Thx, me too.14:58
wrtpSpamapS: by "the same problem" you mean that the current version wasn't pushed out, even though it had actually changed?14:58
TheMuefwereade: Found a solution?15:02
fwereadeTheMue, tbh, no, I always seemed to converge on what I was doing before15:02
fwereadeTheMue, here's how it looks at the moment...http://paste.ubuntu.com/942518/15:03
TheMuefwereade: OK, will do a quick draft I had time to think about while waiting. ;)15:03
fwereadeTheMue, sweet, tyvm15:03
wrtpfwereade: so you want to propagate the listener close to the Server close?15:05
wrtpfwereade: is that the problem?15:05
fwereadewrtp, I don't know any other way to stop the loop while it blocks on listener.Accept15:06
wrtpfwereade: is there other code that's also using the tomb?15:06
fwereadewrtp, if I'm closing the listener explicitly, via server.Close, then I know I'll get an error out of Accept15:06
fwereadewrtp, no15:06
TheMuefwereade: Somehow this way http://paste.ubuntu.com/942618/15:06
wrtpfwereade: i don't think i'd use a tomb15:07
fwereadeTheMue, I don't get that, the goroutine is surely wrong15:07
fwereadewrtp, ha, I suppose that's an option, but it does seem to give me most of what I need15:08
fwereadewrtp, (to continue dropped sentence) ...and I don't want that error to count as an error15:09
fwereadewrtp, but, in general, I feel that I shouldn't just swallow real errors that happen inside the loop, even if the only way to get at them ATM is to close the server15:10
TheMuefwereade: What's exactly what you don't get?15:12
fwereadeTheMue, that you kill the listener immediately for no apparent reason15:12
wrtpfwereade: something like this perhaps, with one notable omission...: http://paste.ubuntu.com/942639/15:14
TheMuefwereade: Ooops, wrote go instead of defer.15:14
TheMuefwereade: That's indeed wrong.15:14
wrtpfwereade: i'm not sure whether it's possible to portably define errorWasBecauseOfClose15:15
TheMuefwereade: I meant http://paste.ubuntu.com/942647/15:15
fwereadewrtp, indeed15:16
fwereadeTheMue, ok, but this appears to be dropping the errors on the floor, which I feel is a bad move unless they're expected15:16
TheMuefwereade: No, the error that leads to the leaving of the loop is passed to Kill in the deferred func.15:17
fwereadeTheMue, ahhhh15:17
wrtpfwereade: i think this does the job: http://paste.ubuntu.com/942650/15:18
fwereadeTheMue, we do still have the "is the error (overwhelmingly likely to be) due to a deliberate Close" issue15:18
wrtpfwereade: i think my paste solves that issue15:18
fwereadewrtp, I don't quite follow what the select is doing15:19
SpamapSwrtp: well perhaps more generally, the same problem was that we needed to maintain remote binaries efficiently15:19
wrtpfwereade: you're right, it's unnecessary15:19
wrtpfwereade: this should work fine: http://paste.ubuntu.com/942657/15:20
fwereadewrtp, and how do I distinguish between good errors and bad errors out of Accept?15:20
wrtpSpamapS: ironically, i've discovered that Go *does* include a build time stamp in its binaries.15:21
wrtpfwereade: doh! what was i thinking?15:21
wrtpfwereade: in general, you can't tell15:22
wrtpfwereade: unless you manage to work it out from the error itself15:22
wrtpfwereade: because there's a race.15:22
fwereadewrtp, in general it is indeed always possible that I could issue a Close(), and the Accept could error out on its own at just the right moment15:22
fwereadewrtp, but that is a possibility I am not overly concerned about15:22
SpamapSwrtp: perhaps that would be a good modifier then15:23
fwereadewrtp, the important thing is that we asked it to close and now it's closed (or at least broken ;p)15:23
fwereadewrtp, the disturbing case is when it broke without us asking it to close15:24
fwereadewrtp, and while there's no Wait method yet it seems like the sort of thing that will be useful for orderly shutdown15:24
wrtpfwereade: yeah. there should be an "IsErrorOnClosedConnection(err)" function in net15:24
fwereadewrtp, that would be the ideal, yeah15:25
fwereadewrtp, I'm reluctant to try to write that myself though15:25
wrtpfwereade: how about this: http://paste.ubuntu.com/942664/15:26
fwereadewrtp, I think I like that a lot15:27
fwereadewrtp, tyvm15:27
wrtpSpamapS: i'm not sure that we can avoid uploading the binaries every time while still guaranteeing we're executing the right code.15:27
wrtpfwereade: np15:27
wrtpfwereade: easy when you think about it the right way, as usual :-)15:28
wrtpfwereade: took me a while tho15:28
fwereadewrtp, yeah, it's taking a while to reroute my normal thinking15:28
fwereadewrtp, a month of python didn't help ;)15:28
wrtpfwereade: w e   w i l l   r e w i r e   y o u r   b r a i n15:29
fwereadewrtp, haha15:30
wrtpfwereade: there's still a potential problem though15:31
wrtpfwereade: well, i suppose it depends15:31
fwereadewrtp, bah! go on :)15:32
wrtpfwereade: if you *didn't* get an error, you need to wait for the error anyway, otherwise the server may still be active15:32
wrtpfwereade: and i presume that ServeConn is guaranteed to finish in a fixed amount of time??15:33
wrtpfwereade: the first problem is easily solved: http://paste.ubuntu.com/942676/15:33
fwereadewrtp, well, it depends on what the request actually does... it shouldn't just get stuck, in theory; I'd assumed we'd want to allow inflight requests to complete15:34
wrtpfwereade: well, yeah, it's a matter of choice.15:34
fwereadewrtp, we may want to change that choice at some stage, sure15:35
wrtpfwereade: i think maybe you shouldn't be calling ServeConn synchronously.15:35
fwereadewrtp, hmm, maybe, but I got confused about allowing multiple connections at once15:36
fwereadewrtp, so I thought I'd punt on that for now15:36
wrtpfwereade: i don't think it helps15:37
wrtpfwereade: you can have multiple RPCs in parallel anyway, theoretically.15:37
fwereadewrtp, yeah, there's nothing stopping us on the backend15:37
wrtpfwereade: how about something as simple as this? http://paste.ubuntu.com/942683/15:39
wrtpfwereade: but perhaps we *want* to block Close until the current request terminates. hmm.15:40
fwereadewrtp, I had been fretting over the multiple connections to the socket, not on locking around backend stuff15:40
fwereadewrtp, I thought it seemed like a sensible thing to do; it may be that future discoveries will prove it wrong15:40
wrtpfwereade: ah. i'm not sure i see why we should prevent two commands executing concurrently. one might block while the other might return some quick info.15:41
wrtpfwereade: in which case, just "go s.server.ServeConn" would be fine15:42
fwereadewrtp, ok, cool, I guess I have been paranoid without reason ;p15:43
wrtpfwereade: it is worth considering whether you want Server.Close to close all existing client connections. that would take more code, but not hard.15:44
fwereadewrtp, IMO I may want that at some stage but it's premature for now15:52
wrtpfwereade: easy to fix anyway15:52
SpamapSwrtp: isn't that what makefiles are for?15:53
SpamapSwrtp: like, "if X changed, compile it" ?15:53
wrtpSpamapS: yes, that's a good point. i guess i was hoping to avoid upload if i had the same binaries as someone else. but in a dev environment, we don't care too much. i wonder how long it takes me to upload 4MB to S3.15:54
wrtpSpamapS: so we could use revid + time of day.15:55
wrtpSpamapS: or mod time of executable, rather15:55
SpamapSI haven't looked, what does go use to build? I didn't see autotools stuff... :)15:56
SpamapSwrtp: yeah mtime would actually be accurate15:56
fwereadesorry all: eod, and I can't really hang around today -- I'll be around all tomorrow, can we meet then instead?16:02
fwereadeI'll try to pop on again later16:03
wrtpfwereade: sounds good16:07
wrtpSpamapS: it uses the go tool16:08
wrtpSpamapS: it looks at the source files (recursively) to determine dependencies16:08
wrtpSpamapS: http://golang.org/doc/articles/go_command.html16:09
SpamapSwrtp: but not all of the "code" is go16:18
SpamapSyou have documentation, and examples...16:19
wrtpSpamapS: the documentation and examples are in the code16:19
wrtpSpamapS: well... the Go docs and examples are16:19
SpamapSwrtp: where are the fairies and toilets made of solid gold? ;)16:20
wrtpSpamapS: for other docs, i guess we'd use make or something similar16:20
wrtpSpamapS: how did you guess? :-)16:20
wrtpSpamapS: sorry, i thought you meant by "what does go use to build?", "what does go use to build Go executables?"16:21
wrtpSpamapS: if you meant "what does go use to build restructured-text documentation and everything else?" i guess the answer is "it doesn't" :-)16:23
SpamapSI figured as much. :)16:27
wrtpSpamapS: i'm quite happy it does a good job with the code tbh16:29
SpamapSI'm just thinking about how to make release.16:31
SpamapSreleases even16:31
wrtpSpamapS: for getting the juju executables into a known place without knowing the names of all of them, this could work well: GOBIN=someplace go install launchpad.net/juju/go/...16:42
wrtpSpamapS: the documentation can probably be done exactly as it is now16:42
* wrtp is off for the evening, see y'all tomorrow17:01
* niemeyer is back17:18
niemeyerrobbiew: Is the call still running?17:19
niemeyerwrtp: Still around? Just wanted to run an idea by you17:34
niemeyermthaddon: ping18:06
wrtpniemeyer: i've got 15 minutes before dinner if you're still around19:28

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!