=== flepied_ is now known as flepied | ||
=== mthaddon` is now known as mthaddon | ||
wrtp | TheMue, fwereade: good morning! | 08:03 |
---|---|---|
fwereade | wrtp, heyhey | 08:03 |
fwereade | wrtp, I'm getting test failures in the ssh stuff | 08:03 |
TheMue | wrtp, fwereade: moin | 08:03 |
wrtp | fwereade: let me see 'em! | 08:03 |
fwereade | wrtp, permissions on the key file are "too open" | 08:03 |
wrtp | fwereade: oh yes, sorry, i forgot to fix that. darn. will do. | 08:04 |
wrtp | fwereade: workaround is: chmod go-rwx state/sshtest/* | 08:04 |
fwereade | wrtp, cheers | 08:05 |
wrtp | fwereade: bloody ssh being too clever for its own good | 08:05 |
fwereade | wrtp, haha, yeah | 08:05 |
fwereade | wrtp, TheMue: nice weekends? | 08:05 |
wrtp | fwereade: yes thanks. had a couple of friends to stay, had large quantities of curry, a nice walk, and a certain amount of nice whisky too. | 08:06 |
wrtp | fwereade: you? | 08:06 |
fwereade | wrtp, lovely :) | 08:06 |
wrtp | fwereade: still got a pot full of some of the curry left for lunch :-) | 08:06 |
fwereade | wrtp, yeah, very nice, wandering valletta with the family on sat and an afternoon of booze and boardgames with some friends on sun | 08:07 |
fwereade | wrtp, awesome | 08:07 |
wrtp | sounds nice | 08:07 |
TheMue | fwereade: Yes, with two parties. On Saturday our neighbor turned 50, on Sunday my niece had her confirmation. | 08:07 |
fwereade | wrtp, actually afternoon/evening I guess, munchkin takes too long really | 08:07 |
TheMue | fwereade: So too much to drink and eat. *lol* | 08:07 |
fwereade | wrtp, I guess we're still learning it, just need to be a bit snappier about playing | 08:07 |
* wrtp doesn't know about munchkin | 08:07 | |
fwereade | TheMue, lovely, sounds like we all did that ;) | 08:07 |
fwereade | wrtp, it's a card game that aims to be the (humorous) essence of D&D | 08:08 |
fwereade | wrtp, killing monsters, stealing treasure, betraying friends | 08:08 |
wrtp | fwereade: with success? | 08:08 |
wrtp | fwereade: (its aim, that is) | 08:08 |
fwereade | wrtp, a surprising amount actually | 08:08 |
fwereade | wrtp, down to the scope for rules-lawyering arguments ;) | 08:09 |
fwereade | wrtp, the rules are pretty simple but a lot of the cards tweak them in one way or another | 08:09 |
fwereade | wrtp, and the overlaps/conflicts are not always well-specified | 08:09 |
fwereade | wrtp, so it's not a technically "good" game IMO but it's kinda fun | 08:10 |
fwereade | wrtp, also, most of the cards have/are some sort of joke | 08:10 |
fwereade | wrtp, "invoke obscure rules: go up a level" | 08:11 |
wrtp | fwereade: funnier if you've previously played D&D, perhaps? | 08:11 |
fwereade | wrtp, probably | 08:11 |
fwereade | wrtp, but people seem to enjoy it even if they haven't | 08:11 |
wrtp | fwereade: sounds fun | 08:12 |
fwereade | wrtp, it's a nice change of pace from catan/carcassonne | 08:12 |
wrtp | fwereade: oh yeah, minor version compatibility: | 09:27 |
fwereade | wrtp, oh yes? | 09:27 |
wrtp | fwereade: if you increment a minor version, you can add (backwardly compatible) features | 09:28 |
wrtp | fwereade: so if you've got a client with a minor version of 2, it might not work with agents with a minor version of 1 | 09:28 |
fwereade | wrtp, ah; my reading had been that, if it doesn't deal with older versions, that's a non-backward-compatible change, and demands a major version bump | 09:29 |
wrtp | fwereade: hmm. interesting. | 09:30 |
fwereade | wrtp, it seems to me that in that sort of mixed environment we'd really want to just expose a lowest-common-denominator feature set | 09:30 |
wrtp | fwereade: i thought that 1.0.0 is compatible with 1.1.0 but not vice versa. | 09:30 |
wrtp | fwereade: because otherwise AFAICS minor versions can't actually add any features | 09:31 |
fwereade | wrtp, that makes sense as well tbh | 09:31 |
fwereade | wrtp, they can add features but only expose the common capabilities of the whole env | 09:31 |
wrtp | fwereade: yeah | 09:31 |
fwereade | wrtp, sounds like a hassle to implement, but... ;) | 09:32 |
wrtp | fwereade: so things are backwardly compatible | 09:32 |
wrtp | fwereade: but not necessarily forward compatible. | 09:32 |
fwereade | wrtp, that's what it intuitively means to me but I expect someone's come up with a rigorous definition of the phrase "backward compatible" somewhere ;) | 09:32 |
wrtp | fwereade: yeah, well this is why it's a good time to be having the conversation... the version package seems like a good place to start | 09:33 |
fwereade | wrtp, absolutely so :) | 09:33 |
fwereade | wrtp, ok, think of the common situation, that someone upgrades their client juju from 2.2 to 2.4 | 09:34 |
wrtp | ok | 09:34 |
fwereade | wrtp, to be suddenly unable to interact, without upgrading the whole env, is a serious problem IMO | 09:34 |
fwereade | wrtp, so 2.4 *must* be able to emit stuff that can be understood by versions back to 2.0.0 | 09:35 |
fwereade | wrtp, plausible reading of it? | 09:35 |
wrtp | fwereade: yes | 09:35 |
wrtp | fwereade: however | 09:35 |
wrtp | fwereade: think about when we've upgraded a client and now *bootstrap* a new environment | 09:36 |
wrtp | fwereade: we want to be able to use the new client features | 09:36 |
fwereade | wrtp, yeah; and because everything in the env has version >=2.4, we can | 09:36 |
wrtp | fwereade: yeah | 09:37 |
fwereade | wrtp, if not everything does, then we block attempts to use them with friendly errors | 09:37 |
wrtp | fwereade: yes | 09:37 |
wrtp | fwereade: although | 09:37 |
wrtp | fwereade: i suppose that perhaps we shouldn't worry too much. if the latest minor version is there, we'll use it. if it's not, then we should probably still use the earlier version. | 09:38 |
wrtp | fwereade: so perhaps my compatibility test *is* wrong. | 09:38 |
wrtp | fwereade: because even though all the features of 2.4 won't work with a 2.2 backend, if a 2.2 backend is all we can find, we shouldn't bomb out | 09:39 |
fwereade | wrtp, exactly | 09:39 |
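A minimal sketch of the compatibility rule the discussion converges on here, assuming a plain Major.Minor.Patch value; the names `Number`, `CanInteract` and `SupportsFeature` are invented for illustration, and this is not the actual juju version package:

```go
package version

// Number is an assumed semantic-version value for this sketch.
type Number struct {
	Major, Minor, Patch int
}

// CanInteract reports whether a client and a backend can work together at
// all: same major version, regardless of minor/patch. A newer-minor client
// falls back to the older backend's feature set rather than refusing to run.
func CanInteract(client, backend Number) bool {
	return client.Major == backend.Major
}

// SupportsFeature reports whether a feature introduced in minVersion can be
// used against the given backend; callers would gate new features on this
// and emit a friendly error otherwise.
func SupportsFeature(backend, minVersion Number) bool {
	if backend.Major != minVersion.Major {
		return false
	}
	if backend.Minor != minVersion.Minor {
		return backend.Minor > minVersion.Minor
	}
	return backend.Patch >= minVersion.Patch
}
```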
fwereade | wrtp, the thing is, part of me wants to impose versioning at the node-data-struct level | 09:40 |
fwereade | wrtp, but I can't really justify it very well, even with lots of handwaving | 09:41 |
wrtp | fwereade: doesn't that amount to the same thing, given that all of those are created by the code? | 09:41 |
fwereade | wrtp, kind of... it's just that the various kinds of node data won't necessarily be changing in sync with one another | 09:42 |
fwereade | wrtp, and that's to (1) accommodate the old-environment thing and (2) not bump major versions all the time | 09:43 |
wrtp | fwereade: i think it might be useful to use node-data-struct compatibility to determine the overall version number | 09:43 |
fwereade | wrtp, we'll need some way of saying "write the 2.2 format of a unit workflow state node" | 09:44 |
fwereade | wrtp, yeah, that too | 09:44 |
wrtp | fwereade: i think we can be simpler than that | 09:44 |
fwereade | wrtp, oh yes? | 09:44 |
wrtp | fwereade: like having a rule to say: sub-nodes can only be added, not removed or changed, without losing backward compatibility | 09:45 |
wrtp | fwereade: kinda structural compatibility | 09:45 |
wrtp | fwereade: rather than versioning every node | 09:45 |
fwereade | wrtp, my concern is that that has a fossilizing effect and, long term, leads to horrid frankensteiny data structures | 09:46 |
wrtp | fwereade: if anything is frankenstein it's 2.2 unit nodes bolted on to 2.1 machine nodes :-) | 09:47 |
wrtp | fwereade: and we can clean up at major versions | 09:47 |
fwereade | wrtp, "we can clean up at <point in future>" is an argument of which I have learned to be suspicious | 09:48 |
wrtp | fwereade: if everything is versioned, you end up with n versions of the code, one for each version | 09:48 |
wrtp | fwereade: i think the protobuf/gob approach can work better in general | 09:48 |
fwereade | wrtp, yes: N datatype versions per node | 09:49 |
wrtp | fwereade: yuck. | 09:49 |
wrtp | fwereade: don't add a node unless you mean it :-) | 09:49 |
fwereade | wrtp, it seems to me to encapsulate the yuckiness better than having a 2.6 format extended from 2.5 from ... from 2.0 | 09:49 |
fwereade | wrtp, indeed, we should not abuse it | 09:50 |
wrtp | fwereade: i think it would significantly complicate the code | 09:50 |
fwereade | wrtp, but either way promiscuously changing data formats is icky and costly | 09:50 |
wrtp | fwereade: presumably you'd need a separate data type for each version | 09:50 |
fwereade | wrtp, yeah; I don't anticipate too many versions | 09:50 |
fwereade | wrtp, but if we have 3 versions I'd rather have 3 versions than one version that has to handle reading/writing all 3 formats | 09:51 |
fwereade | wrtp, even *with* the restrictions on how the format can change which at least cuts down on that specific burden | 09:51 |
wrtp | fwereade: i don't think it's that much of a problem - you do it with subset/superset | 09:51 |
wrtp | fwereade: read the "A bit of history" section at https://developers.google.com/protocol-buffers/docs/overview for some context | 09:53 |
fwereade | wrtp, like I said, it's a pretty handwavey preference... and protobuf does seem to me to be essentially a more sophisticated version of what I propose | 09:55 |
wrtp | fwereade: i think so. and it doesn't have version numbers. | 09:55 |
fwereade | wrtp, I wouldn't like it if we had to manage all the version-munging by hand | 09:55 |
wrtp | fwereade: agreed. so i think we can work out ways to proceed forward in such a way that version n is automatically compatible with version n-1 | 09:56 |
wrtp | fwereade: (version == minor version that is) | 09:56 |
fwereade | wrtp, yeah, SGTM | 09:56 |
wrtp | fwereade: but i do think that writing down the schema in some semi-formalised way might help to see potential compatibility problems as they happen | 09:57 |
wrtp | fwereade: not sure what syntax we'd use though | 09:58 |
fwereade | wrtp, tbh if we're not generating stuff from the schema and *knowing* that we cannot change the schemas except in tightly prescribed ways I don't think it'd be worth the effort | 09:59 |
fwereade | wrtp, it's basically just a comment, and has all the attendant problems | 09:59 |
wrtp | fwereade: i, for one, would quite like to see that comment :-) | 10:00 |
wrtp | fwereade: but i take the point | 10:00 |
fwereade | wrtp, I agree that a description of the intent as it was at some unknown time in the past is better than nothing, so long as you are always aware that's what you have | 10:01 |
wrtp | fwereade, TheMue: i wonder if it might make sense to have some table-driven tests in state (maybe there are already and i've missed them) where we start with a given zk tree (maybe generated by a previous version) and verify that it has the expected properties. | 10:03 |
fwereade | wrtp, +1 | 10:03 |
wrtp | fwereade: that way we could do automatic backward-compatibility checking | 10:03 |
fwereade | wrtp, exactly | 10:03 |
TheMue | wrtp: so far there are table-driven tests only for the watches | 10:04 |
TheMue | wrtp: the other ones are almost 1:1 ported from Python | 10:04 |
fwereade | wrtp, but then I start to feel that we're putting all the infrastructure for separate datatypes in place but trying to kid ourselves that they're not | 10:04 |
wrtp | fwereade: that's just testing | 10:04 |
wrtp | fwereade: the actual code remains straightforward (hopefully) | 10:05 |
TheMue | wrtp: ah, got it, you don't mean unit tests but a kind of compatibility test as part of the features | 10:06 |
wrtp | fwereade: as part of the testing, yes. | 10:06 |
wrtp | oop | 10:06 |
wrtp | s | 10:06 |
wrtp | TheMue: ^ | 10:06 |
fwereade | wrtp, indeed | 10:07 |
TheMue | wrtp: unit testing during dev or prerequisite testing during upgrades? | 10:07 |
wrtp | TheMue: unit testing during dev | 10:07 |
fwereade | wrtp, so +1 table-driven tests, and we'll see if anything else turns out to be a good idea at some point in the future | 10:07 |
wrtp | TheMue: at every version, you'd generate some representative zk trees and dump them, along with the results of various API calls on those trees (e.g. Unit.CharmURL, etc etc etc) | 10:08 |
TheMue | wrtp: Then I didn't get it. What exactly do you want to test? | 10:08 |
wrtp | TheMue: i want to test that a subsequent version still returns the same results on the same zk tree | 10:08 |
wrtp | TheMue: even though the zk tree was generated by a previous version of the code. | 10:09 |
wrtp | of course, it would be nice if we could test the old code against the new data structures too... | 10:09 |
TheMue | wrtp: OK. Makes it more clear. | 10:10 |
wrtp | TheMue: i'm imagining just testing the read-only methods, because testing the writing is harder and i can't think of a way of doing it nicely. | 10:13 |
wrtp | TheMue: for read-only methods, i think it might be possible to do something neat with reflection to automatically call methods and check/store the results | 10:14 |
TheMue | wrtp: Yes, it's a hard topic. What exactly is the result you would expect by such a test? | 10:15 |
wrtp | TheMue: this is my thought (it might be wrong!): call a load of functions to create a given state. dump the zk tree for that state, and also dump the results of enumerating all read-only methods on that state (reflection *might* make it possible to do that by simply naming the read-only methods). to test, we restore the zk tree, then do the same enumeration and check the result are the same as the dumped values. | 10:18 |
TheMue | wrtp: So as a result you get a verification that the read methods with a newer version are able to read the data created with an older version. Am I right? | 10:20 |
wrtp | TheMue: yes | 10:20 |
wrtp | TheMue: you could also branch the older version and add test data generated by a newer version. | 10:21 |
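A rough sketch of the kind of table-driven compatibility test proposed above. All identifiers and file names are invented, and the two helpers are stubs; the idea is simply to restore a tree dumped by an older version, re-run the read-only methods, and compare against the recorded results:

```go
package state_test

import (
	"encoding/json"
	"io/ioutil"
	"reflect"
	"testing"
)

// restoreTree would rebuild a zookeeper tree from a dumped fixture file
// against the test zk server; implementation omitted (hypothetical helper).
func restoreTree(t *testing.T, file string) {}

// readOnlyResults would call each read-only State method against the
// restored tree and collect the results keyed by method name
// (hypothetical helper).
func readOnlyResults(t *testing.T) map[string]interface{} { return nil }

func TestBackwardCompatibility(t *testing.T) {
	cases := []struct {
		treeFile   string // zk tree dumped by an earlier version
		resultFile string // read-only method results recorded at dump time
	}{
		{"testdata/tree-2.0.json", "testdata/results-2.0.json"},
		{"testdata/tree-2.1.json", "testdata/results-2.1.json"},
	}
	for _, c := range cases {
		restoreTree(t, c.treeFile)
		data, err := ioutil.ReadFile(c.resultFile)
		if err != nil {
			t.Fatalf("reading %s: %v", c.resultFile, err)
		}
		var want map[string]interface{}
		if err := json.Unmarshal(data, &want); err != nil {
			t.Fatalf("decoding %s: %v", c.resultFile, err)
		}
		if got := readOnlyResults(t); !reflect.DeepEqual(got, want) {
			t.Errorf("%s: read-only results differ from %s", c.treeFile, c.resultFile)
		}
	}
}
```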
TheMue | wrtp: Sounds good, so far I only dislike the reflection part. I would like to make it more explicit. | 10:22 |
wrtp | TheMue: i think it might just save lots of boilerplate code, but if you don't mind writing boilerplate... | 10:23 |
TheMue | wrtp: There would be an initial effort, but later adding new readers is simple. I fear the read-by-accident. | 10:24 |
wrtp | TheMue: read-by-accident? | 10:24 |
TheMue | wrtp: And this first list of readers could be generated by reflection and then manually controlled. | 10:25 |
TheMue | wrtp: How do you know (by reflection) which methods are the (important) readers? | 10:25 |
wrtp | TheMue: you don't - you name the methods explicitly. but use reflection to make the call and dump (or restore) the data. | 10:26 |
wrtp | TheMue: but you're probably right - it might not be too much work and perhaps it's better to be explicit | 10:27 |
TheMue | wrtp: OK, that's the explicit part I missed and wanted. I thought your idea would even start at a higher level and 'detect' those read methods by reflection. That has been my fear. | 10:27 |
wrtp | TheMue: no, that's not possible, i think | 10:27 |
TheMue | wrtp: Exactly | 10:28 |
wrtp | TheMue: i guess all one would need would be a function that takes a state and returns JSON-marshalable data from that | 10:28 |
wrtp | TheMue: then we can do a DeepEqual on the dumped JSON and current result | 10:29 |
TheMue | wrtp: Why exactly JSON? | 10:29 |
wrtp | TheMue: 'cos it's a nice format for dumping | 10:29 |
TheMue | wrtp: We could also gob it and write the binary. | 10:29 |
TheMue | wrtp: I like JSON (especially more than XML or YAML), but here I could live with gob. | 10:30 |
wrtp | TheMue: i think the transparency of JSON could be good here. and we might want to change the struct names. | 10:30 |
wrtp | TheMue: i think it would be useful to be able to manually inspect both the output tree and the dumped state data. | 10:31 |
TheMue | wrtp: Why? If something were wrong the compatibility check would fail anyway. | 10:31 |
wrtp | TheMue: the compatibility check would fail unnecessarily | 10:32 |
TheMue | wrtp: The test is "read old, dump, deploy new, read and compare dumed data". | 10:32 |
TheMue | s/dumed/dumped/ | 10:32 |
wrtp | TheMue: i don't think the test itself would dump | 10:33 |
wrtp | TheMue: that would be a separate thing, done every so often | 10:33 |
TheMue | wrtp: Not the technological test run with go test, I described the whole test. | 10:33 |
wrtp | TheMue: ok. yes, that sounds right. | 10:34 |
wrtp | TheMue: so you don't want the "compare dumped data" step to fail unnecessarily | 10:34 |
TheMue | wrtp: You have to call the read methods of version A, dump those results, because you expect them later, install version A+1, run the read methods again and compare. | 10:34 |
wrtp | TheMue: yes | 10:35 |
wrtp | TheMue: except that we do the version A thing, then take those files and store them in version A+1 as test data | 10:35 |
TheMue | wrtp: Oh, yes, missed it. | 10:36 |
TheMue | wrtp: We need a ZK dump for simple restore of test data and a Go-readable dump for the comparisons. | 10:37 |
wrtp | TheMue: exactly | 10:37 |
TheMue | wrtp: Sounds like an interesting job. Any experience dumping and restoring ZK? | 10:38 |
wrtp | TheMue: and i think it might make sense for both dumps to be human-readable - that way we get automatic "documentation" of what the zk tree actually looks like, killing two birds with one stone | 10:38 |
wrtp | TheMue: should be trivial - just: type ZkNode struct {Contents string; Children map[string] ZkNode}; func GetContents(zk *Zk) ZkNode | 10:40 |
wrtp | TheMue: i don't think we need to check the metadata | 10:40 |
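The one-line sketch above, fleshed out. The `zkConn` interface and `DumpTree` name are stand-ins for whatever zookeeper client the code actually uses (the real API also returns stat values, ignored here):

```go
package state

// ZkNode is the dump structure sketched above: the node contents plus a
// map of child name to child node, which marshals naturally to JSON.
type ZkNode struct {
	Contents string            `json:"contents"`
	Children map[string]ZkNode `json:"children,omitempty"`
}

// zkConn is a hypothetical stand-in for the real zookeeper client interface.
type zkConn interface {
	Get(path string) (string, error)
	Children(path string) ([]string, error)
}

// DumpTree recursively reads the node at path and all of its children,
// producing a human-readable, JSON-marshalable snapshot of the tree.
func DumpTree(zk zkConn, path string) (ZkNode, error) {
	contents, err := zk.Get(path)
	if err != nil {
		return ZkNode{}, err
	}
	names, err := zk.Children(path)
	if err != nil {
		return ZkNode{}, err
	}
	node := ZkNode{Contents: contents}
	if len(names) > 0 {
		node.Children = make(map[string]ZkNode)
	}
	for _, name := range names {
		childPath := path + "/" + name
		if path == "/" {
			childPath = "/" + name
		}
		child, err := DumpTree(zk, childPath)
		if err != nil {
			return ZkNode{}, err
		}
		node.Children[name] = child
	}
	return node, nil
}
```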
TheMue | wrtp: Then you should write a novel marshaller. *lol* "The identificator of this unit is …" | 10:40 |
TheMue | wrtp: But indeed, JSON is pretty fine here. | 10:40 |
wrtp | TheMue: don't understand previous remark... | 10:41 |
wrtp | yeah, JSON should do all the work just fine | 10:41 |
wrtp | ahh! | 10:41 |
wrtp | you mean a marshaller that produces documentation! | 10:42 |
wrtp | a fine idea. i'll leave it as an exercise for the reader :-) | 10:42 |
TheMue | wrtp: But a novel, generated out of ZK, could be fun too. And you could configure if it is crime, mystery or love. *rofl* | 10:43 |
TheMue | wrtp: Hehe. | 10:43 |
wrtp | TheMue: definitely science fiction | 10:43 |
wrtp | TheMue: all this stuff is already sf :-) | 10:43 |
TheMue | wrtp: That's pretty simple. "$ dump" leads to 42. | 10:44 |
TheMue | wrtp: Oh, no, error, that's the Python version. | 10:45 |
TheMue | *lol* | 10:45 |
wrtp | TheMue: i've thought of a particular reason why dumping as JSON might be better for the values too. | 10:45 |
wrtp | TheMue: it means we can potentially do forward-compatibility tests even when the new version has introduced some new fields. | 10:46 |
wrtp | TheMue: so rather than using DeepEqual, we could use an equality check that ignored new fields in the new version | 10:47 |
TheMue | wrtp: Sounds good. | 10:47 |
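A small sketch of the "equality check that ignores new fields" mentioned just above, working on values decoded from the JSON dumps; the function name is invented and this is an illustration only:

```go
package state_test

import "reflect"

// compatibleValue compares an old dump against a new one, both decoded from
// JSON into generic values, ignoring keys that exist only in the newer dump
// so that newly added fields do not fail the forward-compatibility check.
func compatibleValue(oldVal, newVal interface{}) bool {
	switch o := oldVal.(type) {
	case map[string]interface{}:
		n, ok := newVal.(map[string]interface{})
		if !ok {
			return false
		}
		for k, ov := range o {
			nv, present := n[k]
			if !present || !compatibleValue(ov, nv) {
				return false
			}
		}
		return true // extra keys in the new dump are ignored
	case []interface{}:
		n, ok := newVal.([]interface{})
		if !ok || len(o) != len(n) {
			return false
		}
		for i := range o {
			if !compatibleValue(o[i], n[i]) {
				return false
			}
		}
		return true
	default:
		return reflect.DeepEqual(oldVal, newVal)
	}
}
```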
fwereade | TheMue, you've done some stuff with tomb recently, right? | 11:50 |
TheMue | fwereade: Yes. | 11:50 |
fwereade | TheMue, I was wondering what happened to tomb.Stop | 11:51 |
TheMue | fwereade: Isn't the old Stop() now Kill()? | 11:52 |
fwereade | TheMue, IIRC Stop was a constant that you could use to kill without it showing up as an error | 11:53 |
fwereade | TheMue, now it seems that Err() returns nil in 2 cases: "no error has yet been encountered" and "we're shutting down cleanly" | 11:54 |
TheMue | fwereade: Kill(nil) is allowed. | 11:54 |
fwereade | TheMue, and if you're checking the error return from a blocking call to something that gets Closed in another goroutine in response to <-Dying() | 11:54 |
fwereade | TheMue, then it's important to distinguish between "someone is shutting us down deliberately, so the read error caused by the Close should be ignored" | 11:55 |
fwereade | TheMue, and "whoa, read error! help help" | 11:55 |
fwereade | TheMue, checking for the error being tomb.Stop allowed for that | 11:56 |
fwereade | TheMue, is there another way to do what I ask? | 11:56 |
TheMue | fwereade: So watchers use two tombs. The first one is inside of the generic watcher types. They only handle the inner state of the watcher. | 11:57 |
TheMue | fwereade: And those concrete watchers that use it have their own tombs. | 11:58 |
TheMue | fwereade: They are used if there is a logical/technical error on this level (e.g. an illegal content delivered by the content watcher). | 11:58 |
fwereade | TheMue, sorry, reading code, trying to figure out the analogous bit | 11:59 |
fwereade | TheMue, ah, hold on, I think I see what to do | 12:00 |
TheMue | fwereade: So the tomb of the ContentWatcher may return a problem with ZK. And if everything is ok the content watcher delivers a string that may be not interpretable by the surrounding ConfigWatcher. | 12:00 |
fwereade | TheMue, ok, wait, I don't think it's the same situation | 12:01 |
fwereade | TheMue, say I have `conn, err = listener.Accept` in my looping bit | 12:01 |
fwereade | TheMue, this blocks, and makes select on Dying() tricky | 12:02 |
fwereade | TheMue, so I have a goroutine which blocks on Dying() and calls Close on the listener | 12:03 |
fwereade | TheMue, this causes the blocking Accept to return an error | 12:03 |
fwereade | TheMue, which in *this* case is expected, and should be swallowed | 12:04 |
fwereade | TheMue, but I need some mechanism to tell me whether that was the case | 12:04 |
TheMue | fwereade: Uff, nice task. Sounds tricky. | 12:04 |
fwereade | TheMue, tomb.Stop was a magic value to pass to Fatal(), which allowed me to distinguish between the two cases | 12:04 |
fwereade | TheMue, it was always a little ugly, but it did work ;) | 12:05 |
TheMue | fwereade: Could you paste it? Or is it too much code? | 12:05 |
fwereade | TheMue, relevant bit is here: http://paste.ubuntu.com/942426/ | 12:06 |
fwereade | TheMue, there are a couple of bits that appear conspicuously stupid | 12:06 |
fwereade | TheMue, I have no idea why I blocked on Dead() as well in the killer goroutine | 12:06 |
niemeyer | Morning! | 12:07 |
TheMue | niemeyer: Morning. | 12:07 |
TheMue | fwereade: Just scanning it. | 12:07 |
fwereade | niemeyer, heyhey | 12:08 |
TheMue | fwereade: Don't really get why you're using the goroutine here. | 12:11 |
fwereade | TheMue, how would you structure it? | 12:11 |
TheMue | fwereade: One moment, maybe I've got an idea. You want to keep the err to return it to a possible caller? | 12:14 |
wrtp | niemeyer: yo! | 12:14 |
fwereade | TheMue, I want server.Close() to return the error but only if it's a real error | 12:15 |
TheMue | fwereade: OK | 12:15 |
fwereade | TheMue, and likewise it seems sensible to have Wait() working as I would expect | 12:15 |
fwereade | TheMue, (maybe I shouldn't be *embedding* a Tomb at all... it seemed like it might be simpler... misuse?) | 12:16 |
wrtp | fwereade: one initial trivial remark: there's no reason for the goroutine to wait on <-s.Dead() | 12:17 |
niemeyer | wrtp: Heya | 12:17 |
niemeyer | fwereade, TheMue: What's up folks | 12:17 |
fwereade | wrtp, yeah, I noticed that... the more I look at this the more I think I should start from scratch again | 12:17 |
* wrtp nods | 12:17 | |
fwereade | wrtp, I have a horrible feeling that I was halfway through a change :/ | 12:17 |
TheMue | fwereade: Back in a moment ... | 12:17 |
wrtp | niemeyer: we had a discussion about versions | 12:18 |
wrtp | niemeyer: you might have seen this: https://codereview.appspot.com/6082044/ | 12:18 |
niemeyer | wrtp: Indeed, and I'm participating in it :-) | 12:18 |
wrtp | niemeyer: (my impression of semantic versions) | 12:18 |
wrtp | niemeyer: this morning we also discussed ways of testing state backward compatibility | 12:19 |
niemeyer | wrtp: Ok.. I haven't seen this code yet, and I'd prefer to continue the conversation until we reach consensus before jumping into an implementation | 12:21 |
wrtp | niemeyer: ok. the versions package was only a couple of hours' work - i thought people seemed pretty much agreed on using semantic versioning, so it seemed to make sense to write it as a point of reference. | 12:23 |
TheMue | *: Have an interruption here, will be back later. Sorry fwereade | 12:23 |
niemeyer | wrtp: Sounds good.. we'll likely need this in a bit | 12:23 |
wrtp | niemeyer: (kapil's reply came quite a bit later) | 12:23 |
fwereade | TheMue, np, I'll experiment | 12:23 |
wrtp | niemeyer: so are we not agreed that your plan seems good? i was all set to go ahead with it. | 12:24 |
niemeyer | wrtp: I think people are mostly happy with it, so yeah, we can move forward with it I suppose | 12:25 |
wrtp | niemeyer: what do you think about my suggestion of using prerelease versions instead of odd-numbered versions? | 12:25 |
niemeyer | wrtp: There's just some minor disagreements there that we should settle on | 12:25 |
niemeyer | wrtp: I'm trying to answer it! :-) | 12:25 |
wrtp | niemeyer: :-) | 12:26 |
niemeyer | Ugh.. does bzr not have a hash for its revisions? | 12:27 |
wrtp | niemeyer, TheMue, fwereade: are we gonna have a meeting? | 13:01 |
niemeyer | wrtp: Yeah.. it'd be in 1h per the previous agreement, but I'm happy to do it sooner again | 13:05 |
fwereade | wrtp, niemeyer, TheMue: now is fine for me | 13:05 |
niemeyer | TheMue? | 13:06 |
wrtp | niemeyer: ah, sorry, i got my calendar time wrong by an hour | 13:07 |
wrtp | niemeyer: happy in an hour if TheMue is lunching | 13:07 |
fwereade | wrtp, niemeyer: ah yeah, TheMue did mention he had to go for a bit, didn't he? | 13:08 |
wrtp | no, it seems that google calendar is just confusing. it says "GMT+00" when it actually means "BST-01" (or is that +01 ?) | 13:10 |
wrtp | niemeyer: i'm not sure using the build scheme will work for dev versions | 13:19 |
wrtp | niemeyer: build versions have higher priority than the base versions | 13:19 |
wrtp | niemeyer: which means that 1.2.0+dev would override 1.2.0, which isn't what we want, i think. | 13:20 |
wrtp | niemeyer: but i'm probably misunderstanding | 13:21 |
niemeyer | wrtp: It actually is what we want | 13:29 |
niemeyer | wrtp: Oh, wait | 13:29 |
niemeyer | wrtp: No, it's not.. | 13:29 |
* niemeyer reads again | 13:29 | |
niemeyer | wrtp: Ah, interesting, I misunderstood indeed, but it actually works just as well | 13:29 |
niemeyer | wrtp: It just inverts when what we put the +dev on | 13:30 |
niemeyer | s/when// | 13:30 |
niemeyer | wrtp: We need to put +dev on the last release rather than on the next one | 13:30 |
fwereade | niemeyer, wrtp: pre-release on the next version feels slightly cleaner/clearer to me | 13:31 |
niemeyer | fwereade: That's no better than the odd/even scheme.. | 13:31 |
niemeyer | fwereade: That said, I guess using the build scheme in that way is no better either | 13:33 |
fwereade | niemeyer, yeah, I was trying to figure out where the relevant distinction lay | 13:33 |
fwereade | btw, I was chatting to someone about their organisation's data the other day | 13:35 |
niemeyer | fwereade, wrtp: Maybe -dev and -pre$N would do it.. | 13:36 |
fwereade | niemeyer, that looks sensible at first glance | 13:36 |
* fwereade decides the thread of conversation it looked like he was going to start is a derail | 13:36 | |
niemeyer | We're still breaking the concept a little bit by allowing -dev to change, but this should be fine really | 13:36 |
fwereade | niemeyer, heh, dev-<UUID> | 13:37 |
niemeyer | fwereade: Exactly.. :) | 13:37 |
wrtp | niemeyer: i don't *think* the concept is broken by allowing -dev to change, as the spec says nothing about breaking changes *between* prerelease versions | 13:38 |
niemeyer | fwereade: I guess we could have part of the bzr revid as a hash.. the last few chars.. something like 1.0.2-dev.$REVID[-8:] | 13:38 |
fwereade | niemeyer, SGTM | 13:39 |
niemeyer | wrtp: There's a distinction between changing *between* pre-release versions and changing *pre-release* versions | 13:39 |
wrtp | niemeyer: ah, i don't think we'd do that. i see what you're suggesting now. | 13:39 |
niemeyer | wrtp: Well, we need to say so, and say how that's happening | 13:39 |
wrtp | niemeyer: we can use build versions on top of prerelease versions for that if we want | 13:40 |
wrtp | 1.0.2-dev+build1 | 13:40 |
niemeyer | wrtp: Who defines build1? | 13:40 |
niemeyer | wrtp: and why? This looks ugly | 13:40 |
wrtp | niemeyer: we think we need to talk about who is going to use what versions... | 13:40 |
wrtp | s/we/i/ | 13:41 |
wrtp | niemeyer: we've got versions that we'll hand out to people | 13:41 |
wrtp | niemeyer: we've also got versions that we'll use for our own development | 13:41 |
niemeyer | wrtp: Sure, that's 1.2.3-pre4 | 13:41 |
niemeyer | wrtp: Yep | 13:41 |
wrtp | niemeyer: for versions for our own development, everyone that wants can have their "own" prerelease tag | 13:42 |
wrtp | e.g. 1.2.3-rog | 13:42 |
niemeyer | wrtp: the suggestion above puts those as 1.2.3-dev.hash | 13:42 |
niemeyer | wrtp: No no no | 13:42 |
wrtp | niemeyer: if i number my own prereleases, then i can use version numbering between my own releases | 13:42 |
niemeyer | wrtp: Let's please have a simple and well defined scheme.. | 13:42 |
wrtp | niemeyer: that seems well defined to me | 13:43 |
niemeyer | wrtp: Exactly. It is well defined *to you* | 13:43 |
wrtp | niemeyer: we allocate each developer a part of the revision namespace | 13:43 |
wrtp | niemeyer: the semantic version spec takes care of the rest | 13:43 |
niemeyer | wrtp: No, that's going in an entirely different atmosphere which is completely out of the conversation in the list | 13:43 |
wrtp | niemeyer: we can't use revid | 13:44 |
niemeyer | wrtp: Because..? | 13:44 |
wrtp | niemeyer: because revid only applies to trunk | 13:44 |
niemeyer | wrtp: Huh? | 13:44 |
niemeyer | wrtp: revision id != revision number | 13:44 |
wrtp | niemeyer: oh, sorry, yeah | 13:45 |
wrtp | niemeyer: the problem with that is they're not ordered | 13:45 |
niemeyer | wrtp: Yep.. they're not.. | 13:45 |
niemeyer | wrtp: Which is why the idea is handling dev builds as flag | 13:46 |
wrtp | niemeyer: which is a problem if i'm iteratively deploying a test env, no? | 13:46 |
niemeyer | as a flag | 13:46 |
wrtp | niemeyer: ah, so what does the flag do? | 13:46 |
niemeyer | wrtp: It is a problem indeed, and I'm trying to address it taking in account the characteristics of bzr | 13:46 |
niemeyer | wrtp: a dev build would always upgrade something with a version <= its own version (respecting the details for major) | 13:47 |
niemeyer | wrtp: Even if the -dev bit matches | 13:47 |
niemeyer | wrtp: so "juju upgrade-juju --dev" would do the right thing as we iterate through development | 13:48 |
wrtp | niemeyer: i'm not convinced we need a dev mode. if we can work out how to order dev releases, then the versions can do the work | 13:48 |
niemeyer | wrtp: Without having to artificially bump revision numbers all the time | 13:49 |
niemeyer | wrtp: We can't order dev releases.. there's simply no way | 13:49 |
wrtp | niemeyer: i think there might be | 13:49 |
wrtp | niemeyer: after all, revisions are ordered within a given branch | 13:49 |
niemeyer | wrtp: Ok.. so please explain how two different branches can cross-upgrade to each other? | 13:49 |
wrtp | niemeyer: they don't need to, i don't think | 13:50 |
niemeyer | wrtp: Heh | 13:50 |
niemeyer | wrtp: I want them to.. I don't want to be shutting down an environment just because I switched branches | 13:50 |
niemeyer | wrtp: I'm trying to solve actual problems. semver.org is only relevant to me for as long as it | 13:51 |
niemeyer | 's solving issues. It won't solve this issue, so we'll need to agree on something else that does. | 13:51 |
wrtp | niemeyer: hmm. there are two issues here | 13:51 |
wrtp | niemeyer: 1) can i deploy a given client version against a given backend? 2) can i upgrade a given client to a given client software version? | 13:53 |
wrtp | niemeyer: it seems to me that we haven't really touched on 2) yet. | 13:53 |
niemeyer | wrtp: 2 is completely out of our control.. | 13:54 |
wrtp | niemeyer: oh? i thought that's what kapil was talking about in his message. | 13:55 |
wrtp | niemeyer: so if we're just talking about 1), i don't see why you'd need to shut down an environment because you switched branches. the deployed environment would still be compatible. | 13:56 |
wrtp | niemeyer: (because all branches with the same major version are compatible with each other) | 13:57 |
niemeyer | <wrtp> niemeyer: i'm not convinced we need a dev mode. if we can work out how to order dev releases, then the versions can do the work | 13:57 |
niemeyer | <niemeyer> wrtp: Without having to artificially bump revision numbers all the time | 13:57 |
niemeyer | <niemeyer> wrtp: We can't order dev releases.. there's simply no way | 13:57 |
niemeyer | wrtp: Because it's an *upgrade*.. | 13:58 |
wrtp | niemeyer: sorry, i don't understand. what's an upgrade? | 13:58 |
niemeyer | wrtp: Ok.. we're not making progress | 13:59 |
niemeyer | wrtp: THe suggestion is this: | 13:59 |
niemeyer | actually.. I've just figured that pre-releases scheme isn't friends with Debian versions.. hah | 14:00 |
niemeyer | SpamapS: Was that your concern? | 14:01 |
wrtp | niemeyer: how do you mean? | 14:01 |
niemeyer | wrtp: 1.0.0-pre1 is greater than 1.0.0 | 14:01 |
* wrtp doesn't know anything about Debian versions | 14:01 | |
wrtp | niemeyer: in Debian versions? | 14:01 |
niemeyer | wrtp: Yeah | 14:02 |
wrtp | niemeyer: does Debian versioning matter for our version-selection algorithm? | 14:02 |
niemeyer | wrtp: Absolutely.. that's the environment we're living in | 14:02 |
niemeyer | wrtp: Using 1.0.0-pre1 will mean people will have to bump the series version every single release | 14:03 |
niemeyer | wrtp: We'd end up with a version like 123:4.2.1 | 14:03 |
wrtp | niemeyer: what's the "123:" ? | 14:03 |
niemeyer | wrtp: Because that's the only way to force the Debian package to upgrade 1.0.0-pre1 to 1.0.0 | 14:04 |
niemeyer | wrtp: It's the series version.. a monotonically increasing number that overrides the whole string-based comparison for cases like this | 14:04 |
wrtp | niemeyer: ah. it does lexical comparison? | 14:05 |
niemeyer | wrtp: It breaks down the version in parts and compares lexically, usually, but not if there's a series | 14:05 |
niemeyer | wrtp: Well.. not if there's a *different* series | 14:05 |
niemeyer | wrtp: So, I'm coming back to the conclusion that odd/even is perfectly fine | 14:05 |
wrtp | niemeyer: i'm still not quite sure why these versions have to pertain directly to debian versions. kapil was suggesting a single number. that wouldn't be a debian version either. | 14:06 |
niemeyer | wrtp: We'll be doing semver.org at the evens, and less strict on the odds | 14:06 |
niemeyer | wrtp: That'd compare fine as a deb package version | 14:06 |
niemeyer | wrtp: I'm talking about packaging | 14:06 |
wrtp | niemeyer: why would we bother packaging a prerelease version? | 14:07 |
niemeyer | wrtp: Because we want people to use it? | 14:07 |
niemeyer | wrtp: There's an alternative, though.. which is simple and might work | 14:09 |
niemeyer | wrtp: and maybe you're happy with.. | 14:09 |
wrtp | niemeyer: for prerelease versions, we *could* say "just get the bzr repo" | 14:09 |
wrtp | niemeyer: go on | 14:09 |
niemeyer | wrtp: We could use the +dev tag, as suggested at semver.org | 14:10 |
niemeyer | wrtp: In the *previous* version | 14:10 |
wrtp | niemeyer: yes | 14:10 |
wrtp | erm, no | 14:10 |
niemeyer | wrtp: So immediately after 1.0.0, we tag it as 1.0.0+dev | 14:10 |
niemeyer | wrtp: But this still won't solve the pre-release issue, I guess | 14:11 |
wrtp | niemeyer: no, i think that breaks semantic versions too badly | 14:11 |
wrtp | but i really don't like the odd-numbered dev versions either | 14:11 |
wrtp | niemeyer: if we want people to use a prerelease version, why don't we just release it? | 14:12 |
wrtp | niemeyer: then we can bump the patch version as necessary when their feedback comes in | 14:12 |
niemeyer | wrtp: Because it's a pre-release.. I think the concept of a pre-release is well understood, but I can explain if not | 14:12 |
wrtp | niemeyer: please do. this area is all quite new to me. | 14:13 |
niemeyer | wrtp: A pre-release is a candidate to be the given release | 14:13 |
niemeyer | wrtp: Which isn't yet blessed as stable for production use | 14:13 |
wrtp | niemeyer: so... does debian packaging have pre-releases? | 14:13 |
niemeyer | wrtp: It can also be *really* unstable, rather than a candidate, though | 14:13 |
niemeyer | wrtp: The goal is to get people in the wild to experiment with what's coming | 14:13 |
niemeyer | wrtp: But only the brave souls that understand the risks of doing so | 14:14 |
niemeyer | wrtp: Debian packages may contain pre-releases if the authors decide to offer them as such | 14:14 |
wrtp | niemeyer: so, in your proposal, all odd numbered branches are pre-releases, yes? | 14:14 |
niemeyer | wrtp: No | 14:14 |
niemeyer | wrtp: They can also be development snapshots | 14:15 |
wrtp | niemeyer: one might say the two things played a similar role | 14:15 |
wrtp | ok... | 14:15 |
wrtp | all odd numbered branches are *potential* pre-releases, then | 14:15 |
wrtp | ? | 14:15 |
niemeyer | wrtp: Development snapshots are changing by the minute.. | 14:16 |
wrtp | niemeyer: many potential pre-releases :-) | 14:16 |
niemeyer | wrtp: They can be seen that way, yes, if we find a way to tag them appropriately with a mark that is unique | 14:17 |
wrtp | niemeyer: so with Debian packages, how do you tag a pre-release version? | 14:17 |
niemeyer | wrtp: But even that is problematic during development.. you'd have to commit for every test you do, for example | 14:17 |
niemeyer | s/test/test deployment/ | 14:17 |
wrtp | ha | 14:18 |
niemeyer | Maybe that's fine, though | 14:18 |
niemeyer | wrtp: Debian packages don't care about the details of the version being packaged. It just has an algorithm to define what's new and what's old that must be respected, or the upgrade won't happen. | 14:18 |
wrtp | i think it's reasonable. | 14:18 |
fwereade | niemeyer, wrtp: commit per deployment sounds sane -- it hasn't been a notable pain point in the past | 14:18 |
niemeyer | fwereade: Cool | 14:19 |
wrtp | niemeyer: so debian prerelease versions would always use an explicit series version? | 14:20 |
wrtp | niemeyer: hmm, that seems unlikely. | 14:20 |
niemeyer | wrtp: No.. series would have to be bumped every single time a pre-release is used, to enable an upgrade like 1.0.0-pre1 to 1.0.0 | 14:20 |
wrtp | niemeyer: how *do* Debian packages deploy pre-release versions? | 14:21 |
niemeyer | niemeyer> wrtp: Debian packages don't care about the details of the version being packaged. It just has an algorithm to define what's new and what's old that must be respected, or the upgrade won't happen. | 14:21 |
wrtp | niemeyer: sure. i'm just wondering what people do in practice. | 14:21 |
wrtp | niemeyer: do they bump the version number one more time, e.g. 1.2.3 for prerelease, then 1.2.4 for actual release? | 14:22 |
wrtp | niemeyer: or do they append to the previous version e.g. 1.2.2-prerelease before bumping to 1.2.3 for actual release? | 14:23 |
niemeyer | wrtp: In general I believe people end up splitting the version number in two to avoid the pain of series management | 14:23 |
niemeyer | wrtp: Ending up with something like 1.0.0-0~pre1 | 14:23 |
niemeyer | wrtp: So that 1.0.0-1 would upgrade it | 14:24 |
wrtp | niemeyer: ok, so they'd never use 1.0.0 as a version after a pre-release then? | 14:24 |
niemeyer | wrtp: I'm not sure about what you mean in that case | 14:24 |
wrtp | niemeyer: if i've pre-released as 1.0.0-0~pre1 then i can't release as 1.0.0 | 14:25 |
hazmat | fwereade munchkin is awesome and well suited for the kids in the family ;-) | 14:25 |
niemeyer | wrtp: You can, as I just explained | 14:25 |
niemeyer | <niemeyer> wrtp: Ending up with something like 1.0.0-0~pre1 | 14:25 |
niemeyer | <niemeyer> wrtp: So that 1.0.0-1 would upgrade it | 14:25 |
wrtp | niemeyer: yeah 1.0.0-1 != 1.0.0 | 14:25 |
niemeyer | wrtp: -1 is the release information that is available in all packages | 14:26 |
niemeyer | wrtp: Do dpkg -l <whatever> | 14:26 |
niemeyer | dpkg -l bash | 14:26 |
fwereade | hazmat, don't think laura would follow it quite yet, we can play carcassonne as a jigsaw though ;) | 14:26 |
wrtp | niemeyer: ah, i didn't know about that | 14:27 |
niemeyer | wrtp: The basic rule is that the release information is distro-selected.. the version itself is upstream selected | 14:27 |
niemeyer | wrtp: In most cases, at least | 14:27 |
niemeyer | wrtp: If we put 1.0.0-pre1 or -dev as upstream, it'll certainly be broken, though, since it'd be a pain | 14:28 |
wrtp | niemeyer: so there's an easy mapping from semantic versions to debian versions: put a '0' before the pre-release version... | 14:28 |
wrtp | niemeyer: alternatively always use a 0 as the first char of the pre-release version | 14:28 |
wrtp | and enforce it | 14:28 |
niemeyer | wrtp: Nah, doesn't sound necessary | 14:29 |
niemeyer | wrtp: Anyway, you're right.. that's mostly a red-herring.. | 14:29 |
niemeyer | wrtp: It can be hacked to fit | 14:30 |
niemeyer | So.. what can we agree on? | 14:30 |
niemeyer | 1.0.0-pre$N is a pre-release | 14:30 |
niemeyer | 1.0.0-dev.$REVID[-9:] is a snapshot? | 14:30 |
niemeyer | Maybe pre.1 to conform? | 14:31 |
wrtp | what's the "[-9:]" bit? | 14:31 |
niemeyer | That should be -8 | 14:31 |
wrtp | a python slice operator? | 14:31 |
niemeyer | wrtp: The last N bytes of the revid | 14:31 |
wrtp | ah | 14:31 |
wrtp | yeah, that sounds good | 14:32 |
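A tiny illustration of the snapshot-version form just agreed on; the helper name is hypothetical:

```go
package version

// snapshotVersion tags a development snapshot with the last 8 bytes of the
// bzr revision id, e.g. snapshotVersion("1.0.2", "...-abcdef12") yields
// "1.0.2-dev.abcdef12". Hypothetical helper, not the real juju code.
func snapshotVersion(base, revid string) string {
	suffix := revid
	if len(revid) > 8 {
		suffix = revid[len(revid)-8:]
	}
	return base + "-dev." + suffix
}
```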
niemeyer | We'll need a --force flag on upgrade-juju | 14:32 |
wrtp | ordering of releases is a red herring anyway. with prereleases we want exact specification or nothing. | 14:32 |
niemeyer | To compensate for the lack of ordering | 14:32 |
niemeyer | Actually, we don't.. we just need to enable cross-dev upgrades | 14:33 |
niemeyer | wrtp: Yeah.. the issue is on development snapshots.. pre-releases are properly ordered | 14:33 |
wrtp | niemeyer: the main issue AFAICS is that you don't want to use someone else's dev snapshot | 14:34 |
wrtp | niemeyer: and that, i think, is solved by simply using a different bucket for each developer | 14:34 |
niemeyer | wrtp: My suggestion is that we enforce the use of --dev to get from a stable release onto a dev release, and then after being into this dev release, upgrading to a dev release on the same patch level is done ignoring the ordering of hash | 14:34 |
niemeyer | of revid I mean | 14:34 |
niemeyer | wrtp: I disagree.. I *do* want to use someone else's dev snapshot | 14:35 |
niemeyer | wrtp: and I do want to use my own snapshot, on different branches, out of order too | 14:35 |
niemeyer | wrtp: There's no questions.. we have to solve that problem | 14:35 |
niemeyer | wrtp: Development is a nightmare without that | 14:35 |
wrtp | niemeyer: how would you want to specify the snapshot to use? | 14:37 |
wrtp | niemeyer: rev id hash isn't enormously friendly :-) | 14:37 |
niemeyer | wrtp: Actually, given we'll be using revids, it's an issue regardless.. the revids will be unordered even within a single branch | 14:38 |
wrtp | yes | 14:38 |
wrtp | niemeyer: which is why i was presuming that you'd always be pushing the current branch's executables | 14:38 |
wrtp | niemeyer: and then specifying that version exactly | 14:38 |
niemeyer | wrtp: Yep, we're just talking about what "that version" means | 14:39 |
niemeyer | wrtp: and the behavior the client will enforce when working with it | 14:40 |
wrtp | niemeyer: maybe we should push a hash of the actual executables | 14:40 |
wrtp | 1.0.0-dev.HASH | 14:40 |
wrtp | niemeyer: then you'll push correctly if you've changed the Go version, for example | 14:41 |
niemeyer | wrtp: That might work as well.. but it'd just be changing where the hash is being obtained from.. the issues debated still hold | 14:41 |
wrtp | niemeyer: not quite, as above | 14:41 |
niemeyer | wrtp: It's also potentially less useful information | 14:41 |
wrtp | niemeyer: it does mean that if you branch and do nothing, then (assuming a deterministic compiler) you won't need to push the executables again | 14:42 |
niemeyer | wrtp: Having such a hash will be pretty equivalent to having a random number | 14:42 |
wrtp | niemeyer: but i take the point about less useful information too | 14:42 |
wrtp | niemeyer: yeah, that's true. | 14:43 |
wrtp | niemeyer: but does the revid really matter too much either? | 14:43 |
wrtp | niemeyer: what *really* matters IMHO is replicability | 14:43 |
niemeyer | wrtp: It's not ideal, but it's useful.. I can look at a running env and tell which code it is running | 14:44 |
wrtp | niemeyer: all the juju bits anyway, but not necessarily the bits that they rely on | 14:44 |
niemeyer | wrtp: Well.. :-) | 14:45 |
wrtp | niemeyer: so if there's a fix to the tomb package, say, then it might not push when it should | 14:45 |
niemeyer | wrtp: Yeah, if there's a change in the kernel we won't know as well | 14:45 |
wrtp | niemeyer: we can't avoid that. we can avoid this. | 14:45 |
niemeyer | wrtp: not really.. | 14:46 |
niemeyer | wrtp: Unless you introduce a random or pseudo-random number | 14:46 |
wrtp | niemeyer: if i add some logging code to some package that's not in juju and rebuild, i want to be able to deploy that executable | 14:46 |
wrtp | niemeyer: ? | 14:46 |
wrtp | niemeyer: how is the hash of the executables not good enough? | 14:46 |
niemeyer | wrtp: It's random.. | 14:47 |
SpamapS | niemeyer: pre release scheme is fine if we use ~pre | 14:47 |
niemeyer | SpamapS: Cool, cheers | 14:47 |
wrtp | niemeyer: random? it's a deterministic hash of the code, surely? | 14:47 |
SpamapS | niemeyer: I was more thinking about those who want to automate on top of juju. It's a very small corner case though. | 14:48 |
niemeyer | wrtp: No, it's surely not.. | 14:48 |
wrtp | niemeyer: no? | 14:48 |
niemeyer | wrtp: Nope.. hash of executable != hash of code.. | 14:48 |
wrtp | niemeyer: how's that? | 14:48 |
niemeyer | wrtp: It's also not useful for what a version is useful for.. | 14:49 |
SpamapS | niemeyer: just in general, I've never liked the idea of tagging releases with something like that. But I would not be wholly against using that scheme given all the parameters. | 14:49 |
wrtp | niemeyer: this isn't a sequential version - it's a scheme that means that, hopefully, we can reliably deploy development versions of juju and have them work predictably. | 14:50 |
niemeyer | wrtp: It's just not the same thing.. there are zero guarantees that two builds produce the same hash | 14:50 |
SpamapS | hash of executable has a lot of parameters other than the code | 14:51 |
niemeyer | wrtp: There's also the detail that one can't lock a drawer with the key inside | 14:51 |
wrtp | niemeyer: it doesn't matter if they don't, i think. it's more important that two builds with different code will produce *different* hashes | 14:51 |
wrtp | niemeyer: would we store the hash inside the executable? | 14:52 |
niemeyer | wrtp: I hope we can type "juju version" and have it.. | 14:52 |
wrtp | niemeyer: yeah, that would be good. but i still want to be certain that the code that's executing is the code i just compiled. | 14:54 |
niemeyer | wrtp: You can always md5sum the executable.. that's beyond the role of a version | 14:54 |
wrtp | SpamapS: code+compiler is it, i think. i don't think it embeds a time stamp. | 14:55 |
niemeyer | wrtp: building code doesn't have to produce the same binary every time.. there are zero guarantees about that | 14:55 |
SpamapS | compiler+all libraries | 14:55 |
TheMue | So, phew, back again. | 14:55 |
niemeyer | wrtp: Ok, I have to step out for lunch, and have a medical appointment first thing in the afternoon.. will think about that conversation meanwhile, and be back in action later in the day | 14:56 |
SpamapS | oddly enough, the same problem has existed in charms for a while | 14:56 |
wrtp | SpamapS: yeah. i count libraries as part of the code though. | 14:56 |
TheMue | niemeyer: Could you please later also review my latest proposal? Thx. | 14:56 |
SpamapS | we added the 'deploy --upgrade' flag to deal with it. Arbitrary revision number in the charm that is bumped +1 every time you ask to upgrade it. | 14:56 |
niemeyer | TheMue: Yeah, unfortunately the meeting time is gone now | 14:56 |
niemeyer | TheMue: Will do later | 14:56 |
TheMue | niemeyer: Emergency, my daughter had a cut in her food. | 14:57 |
niemeyer | TheMue: Ouch! | 14:57 |
TheMue | foot | 14:57 |
niemeyer | TheMue: How's she? | 14:57 |
wrtp | TheMue: for a moment i thought she'd eaten some glass! i'm glad she'll only lose a foot :-) | 14:57 |
TheMue | niemeyer: She now has a bandage, but walking will be difficult the next days. | 14:57 |
wrtp | TheMue: (sorry to hear it, hope she's ok) | 14:57 |
niemeyer | Heh | 14:57 |
niemeyer | TheMue: Glad to hear | 14:58 |
niemeyer | Ok.. will be back later! | 14:58 |
TheMue | wrtp: Yeah, only one char, but totally different meaning. ;) | 14:58 |
TheMue | niemeyer: Thx, me too. | 14:58 |
wrtp | SpamapS: by "the same problem" you mean that the current version wasn't pushed out, even though it had actually changed? | 14:58 |
TheMue | fwereade: Found a solution? | 15:02 |
fwereade | TheMue, tbh, no, I always seemed to converge on what I was doing before | 15:02 |
fwereade | TheMue, here's how it looks at the moment...http://paste.ubuntu.com/942518/ | 15:03 |
TheMue | fwereade: OK, will do a quick draft I had time to think about while waiting. ;) | 15:03 |
fwereade | TheMue, sweet, tyvm | 15:03 |
wrtp | fwereade: so you want to propagate the listener close to the Server close? | 15:05 |
wrtp | fwereade: is that the problem? | 15:05 |
fwereade | wrtp, I don't know any other way to stop the loop while it blocks on listener.Accept | 15:06 |
wrtp | fwereade: is there other code that's also using the tomb? | 15:06 |
fwereade | wrtp, if I'm closing the listener explicitly, via server.Close, then I know I'll get an error out of Accept | 15:06 |
fwereade | wrtp, no | 15:06 |
TheMue | fwereade: Somehow this way http://paste.ubuntu.com/942618/ | 15:06 |
wrtp | fwereade: i don't think i'd use a tomb | 15:07 |
fwereade | TheMue, I don't get that, the goroutine is surely wrong | 15:07 |
fwereade | wrtp, ha, I suppose that's an option, but it does seem to give me most of what I need | 15:08 |
fwereade | wrtp, (to continue dropped sentence) ...and I don't want that error to count as an error | 15:09 |
fwereade | wrtp, but, in general, I feel that I shouldn't just swallow real errors that happen inside the loop, even if the only way to get at them ATM is to close the server | 15:10 |
TheMue | fwereade: What's exactly what you don't get? | 15:12 |
fwereade | TheMue, that you kill the listener immediately for no apparent reason | 15:12 |
wrtp | fwereade: something like this perhaps, with one notable omission...: http://paste.ubuntu.com/942639/ | 15:14 |
TheMue | fwereade: Ooops, wrote go instead of defer. | 15:14 |
TheMue | fwereade: That's indeed wrong. | 15:14 |
wrtp | fwereade: i'm not sure whether it's possible to portably define errorWasBecauseOfClose | 15:15 |
TheMue | fwereade: I meant http://paste.ubuntu.com/942647/ | 15:15 |
fwereade | wrtp, indeed | 15:16 |
fwereade | TheMue, ok, but this appears to be dropping the errors on the floor, which I feel is a bad move unless they're expected | 15:16 |
TheMue | fwereade: No, the error that leads to the leaving of the loop is passed to Kill in the deferred func. | 15:17 |
fwereade | TheMue, ahhhh | 15:17 |
wrtp | fwereade: i think this does the job: http://paste.ubuntu.com/942650/ | 15:18 |
fwereade | TheMue, we do still have the "is the error (overwhelmingly likely to be) due to a deliberate Close" issue | 15:18 |
wrtp | fwereade: i think my paste solves that issue | 15:18 |
fwereade | wrtp, I don't quite follow what the select is doing | 15:19 |
SpamapS | wrtp: well perhaps more generally, the same problem was that we needed to maintain remote binaries efficiently | 15:19 |
wrtp | fwereade: you're right, it's unnecessary | 15:19 |
wrtp | fwereade: this should work fine: http://paste.ubuntu.com/942657/ | 15:20 |
fwereade | wrtp, and how do I distinguish between good errors and bad errors out of Accept? | 15:20 |
wrtp | SpamapS: ironically, i've discovered that Go *does* include a build time stamp in its binaries. | 15:21 |
wrtp | fwereade: doh! what was i thinking? | 15:21 |
wrtp | fwereade: in general, you can't tell | 15:22 |
wrtp | fwereade: unless you manage to work it out from the error itself | 15:22 |
wrtp | fwereade: because there's a race. | 15:22 |
fwereade | wrtp, in general it is indeed always possible that I could issue a Close(), and the Accept could error out on its own at just the right moment | 15:22 |
fwereade | wrtp, but that is a possibility I am not overly concerned about | 15:22 |
SpamapS | wrtp: perhaps that would be a good modifier then | 15:23 |
fwereade | wrtp, the important thing is that we asked it to close and now it's closed (or at least broken ;p) | 15:23 |
fwereade | wrtp, the disturbing case is when it broke without us asking it to close | 15:24 |
fwereade | wrtp, and while there's no Wait method yet it seems like the sort of thing that will be useful for orderly shutdown | 15:24 |
wrtp | fwereade: yeah. there should be an "IsErrorOnClosedConnection(err)" function in net | 15:24 |
fwereade | wrtp, that would be the ideal, yeah | 15:25 |
fwereade | wrtp, I'm reluctant to try to write that myself though | 15:25 |
wrtp | fwereade: how about this: http://paste.ubuntu.com/942664/ | 15:26 |
fwereade | wrtp, I think I like that a lot | 15:27 |
fwereade | wrtp, tyvm | 15:27 |
wrtp | SpamapS: i'm not sure that we can avoid uploading the binaries every time while still guaranteeing we're executing the right code. | 15:27 |
wrtp | fwereade: np | 15:27 |
wrtp | fwereade: easy when you think about it the right way, as usual :-) | 15:28 |
wrtp | fwereade: took me a while tho | 15:28 |
fwereade | wrtp, yeah, it's taking a while to reroute my normal thinking | 15:28 |
fwereade | wrtp, a month of python didn't help ;) | 15:28 |
wrtp | fwereade: w e w i l l r e w i r e y o u r b r a i n | 15:29 |
fwereade | wrtp, haha | 15:30 |
wrtp | fwereade: there's still a potential problem though | 15:31 |
wrtp | fwereade: well, i suppose it depends | 15:31 |
fwereade | wrtp, bah! go on :) | 15:32 |
wrtp | fwereade: if you *didn't* get an error, you need to wait for the error anyway, otherwise the server may still be active | 15:32 |
wrtp | fwereade: and i presume that ServeConn is guaranteed to finish in a fixed amount of time?? | 15:33 |
wrtp | fwereade: the first problem is easily solved: http://paste.ubuntu.com/942676/ | 15:33 |
fwereade | wrtp, well, it depends on what the request actually does... it shouldn't just get stuck, in theory; I'd assumed we'd want to allow inflight requests to complete | 15:34 |
wrtp | fwereade: well, yeah, it's a matter of choice. | 15:34 |
fwereade | wrtp, we may want to change that choice at some stage, sure | 15:35 |
wrtp | fwereade: i think maybe you shouldn't be calling ServeConn synchronously. | 15:35 |
fwereade | wrtp, hmm, maybe, but I got confused about allowing multiple connections at once | 15:36 |
fwereade | wrtp, so I thought I'd punt on that for now | 15:36 |
wrtp | fwereade: i don't think it helps | 15:37 |
wrtp | fwereade: you can have multiple RPCs in parallel anyway, theoretically. | 15:37 |
fwereade | wrtp, yeah, there's nothing stopping us on the backend | 15:37 |
wrtp | fwereade: how about something as simple as this? http://paste.ubuntu.com/942683/ | 15:39 |
wrtp | fwereade: but perhaps we *want* to block Close until the current request terminates. hmm. | 15:40 |
fwereade | wrtp, I had been fretting over the multiple connections to the socket, not on locking around backend stuff | 15:40 |
fwereade | wrtp, I thought it seemed like a sensible thing to do; it may be that future discoveries will prove it wrong | 15:40 |
wrtp | fwereade: ah. i'm not sure i see why we should prevent two commands executing concurrently. one might block while the other might return some quick info. | 15:41 |
wrtp | fwereade: in which case, just "go s.server.ServeConn" would be fine | 15:42 |
fwereade | wrtp, ok, cool, I guess I have been paranoid without reason ;p | 15:43 |
wrtp | fwereade: it is worth considering whether you want Server.Close to close all existing client connections. that would take more code, but not hard. | 15:44 |
fwereade | wrtp, IMO I may want that at some stage but it's premature for now | 15:52 |
wrtp | fwereade: easy to fix anyway | 15:52 |
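The pastebin links in this exchange have expired, so the following is a hedged reconstruction of the pattern the discussion settles on: a goroutine waits on the tomb and closes the listener, the accept loop treats the resulting error as a clean shutdown only when the tomb is already dying, and each connection is served concurrently. All names, the tomb import path, and its method set (Kill, Dying, Done, Wait) are assumptions based on the conversation:

```go
package server

import (
	"net"
	"net/rpc"

	"launchpad.net/tomb"
)

// Server is a sketch of the RPC server being discussed; field names are
// assumptions for illustration.
type Server struct {
	tomb     tomb.Tomb
	listener net.Listener
	server   *rpc.Server
}

// run accepts connections until the server is asked to die. A helper
// goroutine closes the listener when the tomb starts dying, which unblocks
// Accept; the resulting error is swallowed only in that case.
func (s *Server) run() {
	defer s.tomb.Done()
	go func() {
		<-s.tomb.Dying()
		s.listener.Close()
	}()
	for {
		conn, err := s.listener.Accept()
		if err != nil {
			select {
			case <-s.tomb.Dying():
				return // deliberate shutdown: ignore the expected error
			default:
				s.tomb.Kill(err) // a real error: record it and die
				return
			}
		}
		// Serve each connection concurrently; requests need not block
		// one another.
		go s.server.ServeConn(conn)
	}
}

// Close asks the server to shut down and returns the first real error
// seen, or nil if the shutdown was clean.
func (s *Server) Close() error {
	s.tomb.Kill(nil)
	return s.tomb.Wait()
}
```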
SpamapS | wrtp: isn't that what makefiles are for? | 15:53 |
SpamapS | wrtp: like, "if X changed, compile it" ? | 15:53 |
wrtp | SpamapS: yes, that's a good point. i guess i was hoping to avoid upload if i had the same binaries as someone else. but in a dev environment, we don't care too much. i wonder how long it takes me to upload 4MB to S3. | 15:54 |
wrtp | SpamapS: so we could use revid + time of day. | 15:55 |
wrtp | SpamapS: or mod time of executable, rather | 15:55 |
SpamapS | I haven't looked, what does go use to build? I didn't see autotools stuff... :) | 15:56 |
SpamapS | wrtp: yeah mtime would actually be accurate | 15:56 |
fwereade | sorry all: eod, and I can't really hang around today -- I'll be around all tomorrow, can we meet then instead? | 16:02 |
fwereade | I'll try to pop on again later | 16:03 |
wrtp | fwereade: sounds good | 16:07 |
wrtp | SpamapS: it uses the go tool | 16:08 |
wrtp | SpamapS: it looks at the source files (recursively) to determine dependencies | 16:08 |
wrtp | SpamapS: http://golang.org/doc/articles/go_command.html | 16:09 |
SpamapS | wrtp: but not all of the "code" is go | 16:18 |
SpamapS | you have documentation, and examples... | 16:19 |
wrtp | SpamapS: the documentation and examples are in the code | 16:19 |
wrtp | SpamapS: well... the Go docs and examples are | 16:19 |
SpamapS | wrtp: where are the fairies and toilets made of solid gold? ;) | 16:20 |
wrtp | SpamapS: for other docs, i guess we'd use make or something similar | 16:20 |
wrtp | SpamapS: how did you guess? :-) | 16:20 |
wrtp | SpamapS: sorry, i thought you meant by "what does go use to build?", "what does go use to build Go executables?" | 16:21 |
wrtp | SpamapS: if you meant "what does go use to build restructured-text documentation and everything else?" i guess the answer is "it doesn't" :-) | 16:23 |
SpamapS | I figured as much. :) | 16:27 |
wrtp | SpamapS: i'm quite happy it does a good job with the code tbh | 16:29 |
SpamapS | I'm just thinking about how to make release. | 16:31 |
SpamapS | releases even | 16:31 |
wrtp | SpamapS: for getting the juju executables into a known place without knowing the names of all of them, this could work well: GOBIN=someplace go install launchpad.net/juju/go/... | 16:42 |
wrtp | SpamapS: the documentation can probably be done exactly as it is now | 16:42 |
* wrtp is off for the evening, see y'all tomorrow | 17:01 | |
* niemeyer is back | 17:18 | |
niemeyer | robbiew: Is the call still running? | 17:19 |
niemeyer | wrtp: Still around? Just wanted to run an idea by you | 17:34 |
niemeyer | mthaddon: ping | 18:06 |
wrtp | niemeyer: i've got 15 minutes before dinner if you're still around | 19:28 |