=== slank is now known as slank_away
08:04 <rogpeppe> fwereade, dimitern, jam1, wallyworld, mgz: morning!
08:04 <fwereade> rogpeppe, et al, heyhey
08:25 <rogpeppe> fwereade: ping
08:25 <fwereade> rogpeppe, pong
08:26 <rogpeppe> fwereade: i wonder if you could have a look at some speculative stuff i've been working up before i run it past gustavo later
08:26 <fwereade> rogpeppe, sure
08:26 <rogpeppe> fwereade: it's to do with where we're aiming with the API server and how the machine agent will deal with that
08:26 <fwereade> rogpeppe, ok, cool
08:27 <rogpeppe> fwereade: here's some pseudocode: http://paste.ubuntu.com/1544082/
08:28 <rogpeppe> fwereade: the idea is to solve the problem of multiple API servers and state servers
08:28 <rogpeppe> fwereade: the main additions are the API server (obviously) and the publisher and locationer (provisional names) workers
08:30 <rogpeppe> fwereade: i also have a sketch for what i'm doing right now, which is a transitional step where we don't have multiple servers, but we are running an API server and connecting to it
08:30 <rogpeppe> fwereade: http://paste.ubuntu.com/1544098/
08:31 <fwereade> rogpeppe, still processing
08:31 <rogpeppe> fwereade: np
08:31 <rogpeppe> fwereade: i hope the pseudocode is reasonably clear
08:32 <fwereade> rogpeppe, yeah, I think so, I'm mainly just trying to get my head around the bootstrap process
08:32 <fwereade> rogpeppe, and how it fits in with davecheney's stater work
08:32 <rogpeppe> fwereade: i'm not sure i know about that
08:33 <rogpeppe> fwereade: what's the stater work?
08:33 <fwereade> rogpeppe, they're in review at the moment
08:33 <fwereade> rogpeppe, https://code.launchpad.net/~dave-cheney/juju-core/069-worker-stater-I/+merge/143629
08:33 <fwereade> rogpeppe, https://code.launchpad.net/~dave-cheney/juju-core/071-cmd-jujud-stater-I/+merge/143638
08:35 <fwereade> rogpeppe, I guess the heart of the problem is that you're working on the state API and he is working on HA state
08:35 <fwereade> rogpeppe, and they have a fair number of collision points...
08:35 <rogpeppe> fwereade: yeah
08:36 <rogpeppe> fwereade: it would be good to chat with dave at some point, but i haven't been online at the same time this year so far
08:36 <fwereade> rogpeppe, heh, I would suggest trying to arrange something
08:37 <fwereade> rogpeppe, communication by CL is ok for CLs but not so much for major design considerations
08:39 <fwereade> rogpeppe, but modulo bootstrap worries, and concerns about managing what machines will be running mongo (as opposed to just the API server) I think it's roughly sane
08:40 <rogpeppe> fwereade: bootstrap worries?
08:40 <rogpeppe> fwereade: yeah, i have entirely delegated the decisions about which machines run what to a higher level.
08:40 <fwereade> rogpeppe, I don't see how to fit it together with davecheney's stuff
08:41 <fwereade> rogpeppe, I'm on the fence as to whether state server maintenance should be separate from the machine agent
08:41 <rogpeppe> fwereade: i'm not sure there's a problem. from what i've seen so far, i think it'll fit in ok
08:41 <rogpeppe> fwereade: i think it's fine to be part of the MA
08:41 <rogpeppe> fwereade: (i'm assuming dfc's branch does that)
08:41 <fwereade> rogpeppe, yeah, me too, but that definitely collides badly with dave's direction
08:41 <fwereade> rogpeppe, nope
08:42 <fwereade> rogpeppe, IMO it's crack, but it's apparently pre-agreed with niemeyer
08:42 <rogpeppe> fwereade: oh, i see
08:43 <rogpeppe> fwereade: the question in my mind is how other things are going to know how to contact the state
08:43 <fwereade> rogpeppe, I also think that doing it in the MA will be fiddly, but still probably the right thing
08:44 <rogpeppe> fwereade: i don't think it should be *too* fiddly. i'll have a quick check to see how it fits into my plan.
08:44 <fwereade> rogpeppe, cool
08:45 <dimitern> rogpeppe, fwereade, wallyworld: morning :)
08:45 <rogpeppe> dimitern: hiya
08:46 <fwereade> dimitern, heyhey
08:54 <rogpeppe> fwereade: i think the stater worker fits in quite easily actually: http://paste.ubuntu.com/1544194/
08:57 <rogpeppe> fwereade: but i think i agree that it shouldn't be a new subcommand
08:57 <fwereade> rogpeppe, I don't see a stater involved in the bootstrap machine
08:58 <rogpeppe> fwereade: i'm not sure it needs to, although there's no harm if it does
08:59 <rogpeppe> fwereade: on a bootstrap machine, the state must be started manually
08:59 <fwereade> rogpeppe, I'm -1 on anything that makes machine 0 work differently from any other machine with the same responsibilities
08:59 <rogpeppe> fwereade: or probably bootstrap itself would do it
08:59 <fwereade> rogpeppe, we may need to be clever in bootstrap, but after that it should be just the same as any other
08:59 <rogpeppe> fwereade: machine 0 is fundamentally different, because there's no existing state to connect to
09:00 <rogpeppe> fwereade: but i agree in principle
09:00 <fwereade> rogpeppe, once there's state, if we treat it differently, I think we're Doing It Wrong
09:08 <rogpeppe> fwereade: ok, i think this might be better: http://paste.ubuntu.com/1544253/
09:12 <TheMue> Morning all.
09:13 <dimitern> TheMue: morning
09:15 <TheMue> dimitern: Hiya
09:15 <fwereade> TheMue, heyhey
09:17 <fwereade> rogpeppe, yeah, looks sane to me; lots of other details (do machine jobs change (I think they will have to), how do we deal with job changes (significant differences in task-running)) etc
09:17 <fwereade> rogpeppe, except, well, hmm
09:17 <rogpeppe> fwereade: yes, job changes are an interesting question.
09:18 <fwereade> rogpeppe, I guess I still don't quite know what a stater will be doing... watching for a job change and killing itself?
09:18 <fwereade> rogpeppe, (sorry, removing mongo and then killing itself?)
09:19 <rogpeppe> fwereade: yeah, when jobs can change
09:19 <rogpeppe> fwereade: in my sketch above, it does nothing after initial check-and-install
09:19 <fwereade> rogpeppe, indeed, that's what I'm waffling on about
09:21 <rogpeppe> fwereade: perhaps i should work out how we might deal with changing jobs. there are some interesting issues around that (perhaps we'd need to keep a count of api servers and state servers in the state, and disallow a job change if it will lose the last one of either)
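The guard rogpeppe sketches here could be expressed roughly as follows. This is a hypothetical illustration, not actual juju-core code: the `Job` constants and `changeJobs` helper are invented for the example, and a real implementation would run the check inside a transaction.

```go
package main

import (
	"errors"
	"fmt"
)

// Job is a hypothetical stand-in for juju-core's machine job constants.
type Job int

const (
	JobHostUnits Job = iota
	JobServeAPI
	JobManageState
)

// changeJobs applies newJobs to a machine, refusing any change that
// would leave the environment without its last API or state server.
func changeJobs(current map[string][]Job, machine string, newJobs []Job) error {
	count := func(jobs map[string][]Job, want Job) int {
		n := 0
		for _, js := range jobs {
			for _, j := range js {
				if j == want {
					n++
				}
			}
		}
		return n
	}
	// Build the proposed job assignment and compare server counts.
	proposed := map[string][]Job{}
	for m, js := range current {
		proposed[m] = js
	}
	proposed[machine] = newJobs
	for _, j := range []Job{JobServeAPI, JobManageState} {
		if count(current, j) > 0 && count(proposed, j) == 0 {
			return errors.New("job change would remove the last server for a required job")
		}
	}
	current[machine] = newJobs
	return nil
}

func main() {
	jobs := map[string][]Job{"0": {JobManageState, JobServeAPI}, "1": {JobHostUnits}}
	err := changeJobs(jobs, "0", []Job{JobHostUnits})
	fmt.Println(err != nil) // true: machine 0 is the last state/API server
}
```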
09:27 <rogpeppe> wallyworld: i've just replied to https://codereview.appspot.com/7133043/; i'm interested if you think my suggestion is viable.
09:28 <fwereade> rogpeppe, yeah, I think so
09:28 <fwereade> rogpeppe, do you have a moment to revisit the destroy-* command plan?
09:28 <fwereade> rogpeppe, I'm pretty sure I convinced you of crack the other day
09:28 <rogpeppe> fwereade: lol
09:28 <fwereade> rogpeppe, but really all the possible approaches seem crackful one way or another
09:29 <rogpeppe> fwereade: what in particular was possibly crack?
09:29 <fwereade> rogpeppe, refusing if any unit's already Dying
09:29 <rogpeppe> fwereade: refusing what, sorry?
09:29 <fwereade> rogpeppe, to do anything :)
09:30 <fwereade> rogpeppe, it certainly doesn't fit well with the Destroy methods in state
09:30 <fwereade> rogpeppe, all of which are fine with dying/dead/removed entities
09:30 <fwereade> rogpeppe, that's the main problem for me
09:31 <rogpeppe> fwereade: are we talking about https://codereview.appspot.com/7138062/ here?
09:31 <fwereade> rogpeppe, not really, I'm doing destroy-machine -- with destroy-service I weakened and went with allow-if-exists
09:32 <rogpeppe> fwereade: or is this a general discussion, unrelated to any particular CL?
09:32 <fwereade> rogpeppe, but service for some reason only destroys one at a time
09:32 <fwereade> rogpeppe, (ok that is crazy too, but for another time)
09:34 <rogpeppe> fwereade: i'm a bit at sea here. are we talking about the "destroy gives an error if destroy has already been called on the entity" decision?
09:34 <fwereade> rogpeppe, yes: from the perspective of the CLI
09:34 <rogpeppe> fwereade: (assuming there *was* such a decision, which i can't quite remember)
09:34 <fwereade> rogpeppe, such an approach is incoherent in state IMO
09:34 <fwereade> rogpeppe, "no error, it's being destroyed" makes a lot more sense at that level
09:35 <rogpeppe> fwereade: what if it's already been removed?
09:35 <fwereade> rogpeppe, no error -- the user had a *state.Whatever, it must have existed at some point, now it doesn't, everyone's happy
09:36 <rogpeppe> fwereade: not from the perspective of the CLI though
09:36 <fwereade> rogpeppe, well, yeah, the question is: what *is* correct from that perspective
09:36 <rogpeppe> fwereade: you'll get a different error if it's been removed from if it's been destroyed-but-not-removed
09:36 <rogpeppe> fwereade: yeah
09:36 <rogpeppe> fwereade: i'm ok with the rm analogy
09:37 <rogpeppe> fwereade: actually, the kill(1) analogy is probably stronger
09:37 <rogpeppe> fwereade: 'cos processes can take a while to go away
09:39 <rogpeppe> fwereade: so i think i'm fine if we get no error if the entity is Dying, but we do get an error if it's been removed.
09:39 <fwereade> rogpeppe, the philosophical issue here is then how we treat Dead
09:40 <rogpeppe> fwereade: ha
09:40 <fwereade> rogpeppe, in theory, Dead/removed are not states we should really be distinguishing between
09:40 <fwereade> rogpeppe, in practice, I don't see how we can avoid it
09:40 <fwereade> rogpeppe, and I can't tell whether that matters or not ;p
09:41 <rogpeppe> fwereade: yeah, i think i agree. perhaps we should go with "if it doesn't show up in status, you should get an error trying to talk to it"
09:41 <fwereade> rogpeppe, hmm, yeah, does status strip Dead things yet?
09:41 <fwereade> rogpeppe, (and is it even sane for it to do so?)
09:41 <rogpeppe> fwereade: i dunno
09:42 <rogpeppe> fwereade: but if we go with the above guiding principle, it doesn't matter either way, just that we're consistent in that respect
09:42 <fwereade> rogpeppe, if we strip (say) dead units, then the user won't be able to understand why they can't terminate a machine that appears to have no units
09:42 <fwereade> rogpeppe, or possibly they'll see the unit on the machine, but see it as a dangling pointer
09:42 <fwereade> rogpeppe, actually, dangling pointers in status cannot be avoided in general
09:43 <rogpeppe> fwereade: i'm ok with status showing dead things
09:43 <rogpeppe> fwereade: so then the implementation all becomes obvious, no subterfuge required
09:43 <fwereade> rogpeppe, ok, yeah, that SGTM
09:44 <fwereade> rogpeppe, so as long as an identified entity can be obtained from state, we allow a destroy, even if it's a no-op
09:44 <rogpeppe> fwereade: yup
09:44 <fwereade> rogpeppe, if any entity in the list is not found, bail
09:45 <rogpeppe> fwereade: what about the other entities on the list? do they get destroyed or not?
09:45 <fwereade> rogpeppe, (not that we can destroy them all txnly anyway, so we're still vulnerable to conns dropping half way through processing...)
09:45 <rogpeppe> fwereade: i think that we should go through trying to destroy all the entities, even if one fails
09:46 <fwereade> rogpeppe, I'm imagining we collect up all the identified entities that are Alive, error if any isn't there, and then just Destroy the ones we found
09:46 <rogpeppe> fwereade: when you say "error if any isn't there", you mean "log an error" not "return an error", yes
09:47 <fwereade> rogpeppe, meh, maybe -- so we keep track of any such errors and then exit 1 if we hit one?
09:47 <rogpeppe> fwereade: yup
09:47 <rogpeppe> fwereade: that's consistent with rm and kill, for example
09:48 <fwereade> rogpeppe, yeah, ok, I think that's sanest then
09:48 <rogpeppe> fwereade: it means that if you've got a big list of entities to destroy, it's less fragile if one of those has just been removed.
09:49 <fwereade> rogpeppe, yeah, sgtm
09:50 <fwereade> rogpeppe, hmm... I guess if we can't tell whether state is broken, we have to just keep on trying and failing when the conn goes down half way through
09:51 <rogpeppe> fwereade: no, i think that if the error is not "entity doesn't exist", we should abort
09:52 <rogpeppe> fwereade: that's the approach we take in other places.
09:52 <fwereade> rogpeppe, (incidentally, destroy-unit won't scale sanely anyway... we're going to need something like `juju resize wordpress 27`)
09:52 <fwereade> rogpeppe, hmm
09:52 <rogpeppe> fwereade: i totally agree
09:52 <rogpeppe> fwereade: having a target number of units makes much more sense than managing individual units
09:53 <fwereade> rogpeppe, ok, so if someone added a JobManageEnviron to a machine and made it unkillable, that should stop all the other machines from being destroyed?
09:53 <rogpeppe> fwereade: no
09:53 <rogpeppe> fwereade: ok, so there are probably other errors to check for :-)
09:53 <fwereade> rogpeppe, that is not an "entity doesn't exist" error ;)
09:53 <rogpeppe> fwereade: i really want us to be able to check for state upness
09:54 <fwereade> rogpeppe, yeah, agreed
09:54 <rogpeppe> fwereade: i was thinking recently that the best place for it is probably in mgo itself.
09:55 <fwereade> rogpeppe, hmm, yeah, if that could reliably give us ErrConnDown or something that would be great
09:55 <rogpeppe> fwereade: as that's the place that is actually (potentially) in the knowledge of whether it can still talk to the state or not.
09:55 <rogpeppe> fwereade: i was thinking more of mgo.Database.Up() bool
09:56 <fwereade> rogpeppe, ah, yeah, nice
09:56 <rogpeppe> fwereade: which would return false if any operations had failed permanently
09:59 <rogpeppe> fwereade: the problem is that i'm not sure it's possible to tell that, because there's a cluster, and just because one operation has failed doesn't mean that a subsequent op will. i don't know enough about mongodb and mgo's client implementation to be able to tell how feasible this is.
10:00 <fwereade> rogpeppe, indeed
10:00 <fwereade> rogpeppe, nor do I
10:12 <fwereade> rogpeppe, hmm -- in the absence of such a thing, how about: (1) collect what we can, ignore not-Alive, log removed, return other errors directly; (2) destroy everything we collected, logging all errors; (3) return an error if we encountered any
10:12 <fwereade> rogpeppe, except, no, maybe not
10:12 <rogpeppe> fwereade: perhaps we should not try to be clever
10:12 <fwereade> rogpeppe, state.IsCannotDestroy(err)
10:13 <rogpeppe> fwereade: just make all the attempts we can, logging each error as it happens
10:13 <fwereade> rogpeppe, the only things that are actually ever undestroyable are machines
10:13 <rogpeppe> fwereade: that won't be true for long
10:13 <rogpeppe> fwereade: we'll be able to get permission-denied errors, i think
10:14 <fwereade> rogpeppe, ah, yes, cool
10:14 <rogpeppe> fwereade: i think if we go with the naive approach for now, it'll be easy to retrofit state-upness checking if/when it's available
10:14 <fwereade> rogpeppe, hmm, are the destroy-machine errors really special cases of permission errors? I think they might be
10:15 <rogpeppe> fwereade: i don't think so
10:15 <rogpeppe> fwereade: or... nah, i don't think so
10:15 <fwereade> rogpeppe, yeah, maybe it's not quite right
10:16 <rogpeppe> fwereade: it's not that you don't have permission, it's that you haven't arranged things correctly to enable the operation
10:16 <TheMue> rogpeppe: ping
10:16 <fwereade> rogpeppe, heh, I'd seen it as "*nobody* has permission to do that, are you crazy?" ;p
10:17 <rogpeppe> fwereade: lol
10:17 <rogpeppe> TheMue: ping
10:17 <rogpeppe> TheMue: pong
10:17 <rogpeppe> TheMue: pung
10:17 <rogpeppe> TheMue: peng
10:17 <TheMue> rogpeppe: pang
10:17 <TheMue> rogpeppe: Did you read Dave's mail?
10:17 <rogpeppe> TheMue: yeah
10:17 <TheMue> rogpeppe: Thoughts?
10:17 <fwereade> rogpeppe, apparently "pung" is an obscene swedish word; this was pointed out when I used it in test data once
10:17 <rogpeppe> TheMue: i'm just writing a reply actually
10:18 <TheMue> rogpeppe: OK, thx.
10:18 <rogpeppe> fwereade: "pung" is also a term for a set of three of the same tile in mah jong
10:18 <TheMue> rogpeppe: I'm a bit torn.
10:18 <fwereade> rogpeppe, when you say the "naive" approach, I'm not quite sure how naive you mean
10:18 <rogpeppe> TheMue: he's still off track - changing the delay in juju.Conn won't affect anything
10:19 <rogpeppe> fwereade: go through each thing, getting it, trying to destroy it, and log any errors. exit(1) if any error occurred.
10:20 <fwereade> rogpeppe, ok, sgtm
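The naive approach agreed above — fetch each entity, attempt to destroy it, log failures rather than aborting, and exit non-zero if anything failed — can be sketched like this. The `destroyer` interface and `get` lookup are illustrative stand-ins, not juju-core's actual state types:

```go
package main

import (
	"errors"
	"fmt"
)

// destroyer stands in for state entities that have a Destroy method.
type destroyer interface {
	Destroy() error
}

var errNotFound = errors.New("entity not found")

// destroyAll tries to destroy every named entity, logging each error
// as it happens, and reports whether any error occurred. This is the
// rm/kill behaviour: one missing entity doesn't block the rest.
func destroyAll(get func(name string) (destroyer, error), names []string) bool {
	failed := false
	for _, name := range names {
		d, err := get(name)
		if err == nil {
			err = d.Destroy()
		}
		if err != nil {
			fmt.Printf("cannot destroy %q: %v\n", name, err)
			failed = true
		}
	}
	return failed
}

type fakeMachine struct{}

func (fakeMachine) Destroy() error { return nil }

func main() {
	get := func(name string) (destroyer, error) {
		if name == "gone" {
			return nil, errNotFound
		}
		return fakeMachine{}, nil
	}
	if destroyAll(get, []string{"0", "gone", "2"}) {
		fmt.Println("exit status 1")
	}
}
```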
=== jtv1 is now known as jtv
10:21 <TheMue> rogpeppe: I have to try if the error still exists w/o a change, because I've been able to reproduce it last year and now, with both approaches, it's gone.
10:22 <rogpeppe> TheMue: what error?
10:22 <TheMue> rogpeppe: That one that has been the reason for that change. In the LP ticket. Or let's call it "behavior", those quick retries.
10:26 <rogpeppe> TheMue: which LP ticket?
10:26 <rogpeppe> TheMue: (the CL doesn't seem to reference it)
10:29 <TheMue> rogpeppe: Yes, missed it, but the kanban card. *lol* It's https://bugs.launchpad.net/juju-core/+bug/1084867.
10:29 <_mup_> Bug #1084867: conn: conn connections should pause between retry attempts <juju-core:In Progress by themue> < https://launchpad.net/bugs/1084867 >
10:30 <rogpeppe> TheMue: ah, i thought you were talking about an actual error
10:31 <rogpeppe> TheMue: how were you checking whether the behaviour had changed?
10:33 <TheMue> rogpeppe: Calling status with debug like shown before and after.
10:34 <rogpeppe> TheMue: i haven't seen any before and after - have you got a paste?
10:38 <TheMue> rogpeppe: No. In the tests without a change I had the same retries. Afterwards one retry sometimes has been enough and sometimes some more (with a different timing than before due to the different delay).
10:38 <TheMue> rogpeppe: But never again in such an amount.
10:38 <rogpeppe> TheMue: were you calling status immediately after bootstrap?
10:39 <rogpeppe> TheMue: (because that's the only time when retries are significant)
10:40 <TheMue> rogpeppe: Yes, as immediate as possible by typing or pasting the command.
10:41 <rogpeppe> TheMue: one mo, i'll try your branch and see if i see the same
10:41 <wallyworld> rogpeppe: hi, your suggestions are viable i think. my personal preference is still to run all the tests with the -local or -live flags since it will be a royal pita to have to comment out all the skips all the time
10:42 <TheMue> rogpeppe: Thanks
10:42 <rogpeppe> wallyworld: in practice, you'll probably just comment out the skips that you think will be fixed by the openstack changes you're making
10:42 <wallyworld> rogpeppe: that's part of the issue - it's hard to know what will be fixed each time
10:42 <rogpeppe> wallyworld: i think there's a significant advantage in being able to incrementally enable more tests
10:42 <wallyworld> since a small change to some of the provider stuff magically makes a bunch more tests pass
10:43 <rogpeppe> wallyworld: surely commenting out the skips is a single editor command - not a great hassle?
10:43 <wallyworld> sure, but it's still work to do, and changing code means there's always a possibility of accidentally committing something
10:43 <rogpeppe> wallyworld: we'll catch that!
10:44 <wallyworld> i guess my workflow the past few weeks has been very suited to having all the tests run with -live and seeing what improves with each change
10:45 <wallyworld> but if i can't change your mind then i'll put in the skips
10:45 <rogpeppe> wallyworld: i think that the tests that pass should be enabled by default in trunk, so that anyone running the tests will run them too.
10:45 <wallyworld> you mean the service double ones?
10:46 <rogpeppe> wallyworld: yeah
10:47 <wallyworld> part of the issue as well is that we seem to be making good progress (touch wood) and so there will be a lot of extra juju-core branches which simply uncomment skips, and all the overhead that involves getting reviews etc
10:47 <wallyworld> so it affects velocity
10:48 <dimitern> wallyworld: but these CLs uncommenting skipped tests should be trivial and easy to review
10:49 <wallyworld> sure, but the turnaround time is still 12-24 hours, and by that time, you have several branches queued up. but it seems your mind is made up so i will add the skips
10:50 <rogpeppe> wallyworld: i think any CL that just uncomments skipped tests could be counted as trivial and submitted immediately.
10:51 <rogpeppe> wallyworld: although i'd check that out with others to make sure that's ok
10:51 <wallyworld> \o/ ok!
10:52 <rogpeppe> TheMue: i'm trying your branch now, and i don't see any backoff. it's redialling just as fast as ever.
10:52 <wallyworld> i've had several beers (it's friday evening after all), so i will tackle the skips tomorrow :-)
10:52 <rogpeppe> TheMue: http://paste.ubuntu.com/1544692/
10:52 <rogpeppe> TheMue: shit, one mo, i'm trying the wrong branch!
10:53 <dimitern> wallyworld: thanks for submitting the other CL btw
10:53 <TheMue> rogpeppe: I already wondered, because yesterday I had a different behavior. But I'll test too.
10:54 <wallyworld> dimitern: you mean the test double fixes? not submitted yet, still some more stuff to fix, but soon
10:54 <dimitern> wallyworld: no, the one about fake tools
10:54 <wallyworld> ah ok
10:55 <rogpeppe> TheMue: testing with your branch now, and still no change, i'm afraid
10:56 <rogpeppe> TheMue: can you paste me the output of juju status --debug showing exponential backoff happening on the retries?
10:58 <TheMue> rogpeppe: Aaargh, just seeing it here too. Seems yesterday the net has been too good; now I have multiple retries too and can confirm your observation of no exponential delay.
10:59 <rogpeppe> TheMue: ok, good. nice to know i'm not going bonkers :-)
10:59 <TheMue> rogpeppe: Let me try to add it to state.Open(), like in my first approach. Just as a test, because there we dial mongo.
11:00 <TheMue> rogpeppe: That would confirm your idea of mongo having it as a feature.
11:03 <rogpeppe> TheMue: that will work - just revert to your earlier revision
11:18 <TheMue> rogpeppe: Currently it tries to connect, wonderfully with increasing delays.
11:18 <TheMue> rogpeppe: Bing, and now it's in.
11:22 <rogpeppe> TheMue: cool
11:22 <niemeyer> Hi all
11:23 <TheMue> rogpeppe: And besides that I can now connect to the web console of MAAS from this VM while it is installed in another one. Hehe!
11:23 <TheMue> niemeyer: Hello
11:23 <rogpeppe> niemeyer: yo!
11:23 <dimitern> niemeyer: hiya
11:25 <fwereade> niemeyer, morning
11:28 <niemeyer> Wow, hi all! :-)
11:28 <niemeyer> What a warm welcome
11:29 <rogpeppe> niemeyer: when you have a moment for discussion about the API architecture, please let me know.
11:29 <TheMue> niemeyer: Matching to your weather (21° more than here).
11:33 <niemeyer> rogpeppe: Sounds good
11:33 <niemeyer> rogpeppe: Just give me a few minutes and I'll be with you
11:33 <rogpeppe> niemeyer: cool
12:09 <niemeyer> rogpeppe: Alright
12:09 <rogpeppe> niemeyer: okeydokey
12:10 <rogpeppe> niemeyer: i've only realised this morning how actively dave cheney has been working on the HA stuff, so there's some overlap here.
12:45 <fwereade> niemeyer_, resync on the question "is it sane/meaningful for a service whose charm has a peer relation to ever not be in a peer relation?"
12:45 <fwereade> niemeyer_, I believe the answer is no, but I wanted to check your opinion
12:46 <niemeyer_> fwereade: Hmm
12:46 <niemeyer_> fwereade: If I get what you mean, yes, it will always be in that relation
12:46 <fwereade> niemeyer_, that was (roughly) the justification given for blocking peer relation interactions in the command line
12:47 <niemeyer_> fwereade: "be" is a bit vague, though, so maybe it's worth expanding
12:47 <niemeyer_> fwereade: The relation may always be there, even if there are no counterpart units
12:48 <fwereade> niemeyer_, yeah, I'm not thinking about units
12:48 <niemeyer_> fwereade: Well, this raises a few extra ideas
12:48 <niemeyer_> fwereade: That maybe we just shouldn't even worry right now, for the benefit of progress
12:48 <fwereade> niemeyer_, I'm thinking of whether or not the existence of such a relation doc is tied to the existence of the service doc
12:49 <niemeyer_> fwereade: I'll just put something on the back of your mind for awareness:
12:49 <niemeyer_> fwereade: It's quite reasonable to, some day, have peer relations with multiple services in them
12:50 <fwereade> niemeyer_, yeah, I am aware of this, and I don't *think* it affects this problem
12:50 <fwereade> niemeyer_, the reason is that I just picked up https://bugs.launchpad.net/juju-core/+bug/1072750
12:50 <_mup_> Bug #1072750: deploy does not add relations transactionally <juju-core:In Progress by fwereade> < https://launchpad.net/bugs/1072750 >
12:51 <niemeyer_> fwereade: Ah, okay
12:51 <fwereade> niemeyer_, we can (1) ignore the bug; (2) implement peer-relation-addition within service addition; (3) allow direct control of peer relations; (4) ???
12:52 <niemeyer_> fwereade: I'd vote for (2) if it's not too much trouble
12:52 <fwereade> niemeyer_, such was my instinct too
12:52 <niemeyer_> fwereade: and for (4) postpone the solution, if it is
12:52 <fwereade> niemeyer_, IMO that's just a nicer way of saying (1) ;)
12:53 <fwereade> niemeyer_, ah ok, I misinterpreted
12:53 <fwereade> niemeyer_, yeah, I'll see how annoying it feels
12:53 <niemeyer_> fwereade: It actually is.. it was just a different way of saying "The bug is relevant, but maybe not worth fixing before other critical things on our pipeline"
12:54 <niemeyer_> fwereade: (1) felt a bit like "We don't care." :-)
12:54 <fwereade> niemeyer_, yeah, it's slightly better than (1) because it says "we have a plan but we won't do it yet"
12:55 <fwereade> niemeyer_, followup question -- state API and CLI are inconsistent wrt peer relations
12:56 <fwereade> niemeyer_, I think that to do this cleanly we want (1) AddRelation to reject peer relations and (2) Relation.Destroy to reject direct attempts to destroy a peer relation
12:56 <fwereade> niemeyer_, (they'd still get destroyed but only when their service was)
12:57 <niemeyer_> fwereade: +1.. sounds reasonable with the status quo
12:57 <fwereade> niemeyer_, cool, thanks
12:57 <niemeyer_> fwereade: We may have to fix it if we add what I suggested above, but that's totally fine
12:58 <fwereade> niemeyer_, I don't think it collides with multi-endpoint peer relations at all actually -- that will require new code but I *think* not invalidate any that exists
12:58 <fwereade> niemeyer_, logic, that is, not code
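The agreed behaviour — AddRelation rejecting peer endpoints, since peer relations live and die with their service — reduces to a simple role check along these lines. The `Endpoint` struct here is a simplification invented for the example, not juju-core's actual type:

```go
package main

import "fmt"

// Endpoint is a simplified version of a relation endpoint.
type Endpoint struct {
	ServiceName  string
	RelationName string
	Role         string // "provider", "requirer", or "peer"
}

// checkAddRelation rejects any attempt to add a relation involving a
// peer endpoint: peer relations exist only as part of their service's
// lifecycle, never added or destroyed directly.
func checkAddRelation(eps []Endpoint) error {
	for _, ep := range eps {
		if ep.Role == "peer" {
			return fmt.Errorf("cannot add relation to peer endpoint %s:%s", ep.ServiceName, ep.RelationName)
		}
	}
	return nil
}

func main() {
	err := checkAddRelation([]Endpoint{{"riak", "ring", "peer"}})
	fmt.Println(err) // cannot add relation to peer endpoint riak:ring
}
```

The same check, inverted, would guard Relation.Destroy: a direct destroy of a peer relation is refused, while service removal tears it down internally.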
13:41 <rogpeppe> mramm: when are the southern-hemisphere kanban meetings?
=== slank_away is now known as slank
13:41 <mramm> 17:30 eastern time
13:41 <mramm> 22:00 GMT
13:41 <mramm> 22:30 GMT
13:43 <mramm> rogpeppe: if you want I can throw you on the invite list so you know about them
13:44 <rogpeppe> mramm: please. i'll try to make it along some time - it would be useful to have some interaction with dave.
13:44 <mramm> rogpeppe: yea, that would be good
13:51 <niemeyer_> rogpeppe: Re. "with regard to 3), Write *is* checking that the configuration is correct unless i'm missing something stupid."
13:51 <niemeyer_> rogpeppe: Maybe I misunderstood then.. why is Change calling Check before Write?
13:52 <rogpeppe> niemeyer_: ha, good question. i can't remember. the point is moot now, 'cos i've deleted Change.
13:52 <niemeyer_> rogpeppe: Cool
13:53 <rogpeppe> niemeyer_: but Write does call Check still
13:53 <niemeyer_> rogpeppe: Cool.. so it was probably just unnecessary
13:53 <rogpeppe> niemeyer_: probs, yeah
13:55 <rogpeppe> trivial CL anyone: https://codereview.appspot.com/7128055
14:09 <niemeyer_> rogpeppe: Done
14:10 <rogpeppe> niemeyer_: thanks
14:11 <rogpeppe> "<unknown job constant: %d>" would probably be better
14:11 <rogpeppe> niemeyer_: it's not a constant when it's printed :-)
14:11 <niemeyer_> rogpeppe: Hm?
14:11 <rogpeppe> niemeyer_: <unknown job value %d> ?
14:12 <rogpeppe> niemeyer_: although i actually think i prefer the unknown value being in the same form as the known value
14:12 <niemeyer_> rogpeppe: I don't understand what you mean
14:12 <niemeyer_> rogpeppe: JobFooBar are constants
14:12 <niemeyer_> rogpeppe: If we get a *constant* we don't recognize, we say so
14:13 <rogpeppe> niemeyer_: we're not getting a constant we don't recognise. we're getting a value we don't recognise...
14:13 <rogpeppe> niemeyer_: just <unknown job %d> would be better perhaps
14:13 <niemeyer_> rogpeppe: 42 is not a valid job constant.. it's a perfectly reasonable value :-)
=== niemeyer_ is now known as niemeyer
14:14 <niemeyer> rogpeppe: I won't bikeshed over this, though. It's a pretty straightforward issue.
14:14 <rogpeppe> niemeyer: indeed. i'll go with <unknown job %d>, if that's ok
14:15 <niemeyer> rogpeppe: Sure, whatever works
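The formatting settled on — known jobs print their names, anything outside the known set falls back to `<unknown job %d>` — is the usual Go Stringer pattern. The job names below are illustrative, not necessarily the exact juju-core set:

```go
package main

import "fmt"

// MachineJob enumerates the work a machine agent may be asked to do.
type MachineJob int

const (
	JobHostUnits MachineJob = iota
	JobManageEnviron
	JobServeAPI
)

var jobNames = map[MachineJob]string{
	JobHostUnits:     "JobHostUnits",
	JobManageEnviron: "JobManageEnviron",
	JobServeAPI:      "JobServeAPI",
}

// String returns the job's name, or a recognisable placeholder for
// values outside the known set.
func (j MachineJob) String() string {
	if name, ok := jobNames[j]; ok {
		return name
	}
	return fmt.Sprintf("<unknown job %d>", int(j))
}

func main() {
	fmt.Println(JobServeAPI)    // JobServeAPI
	fmt.Println(MachineJob(42)) // <unknown job 42>
}
```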
14:16 <rogpeppe> niemeyer: if you might be able to take another look at https://codereview.appspot.com/7085062/, that would be great too, thanks.
14:17 <rogpeppe> niemeyer: that's the CL that deletes the Change method
14:17 <rogpeppe> niemeyer: (as well as the CL's stated purpose...)
14:19 <niemeyer> rogpeppe: Cool, will check again
=== TheRealMue is now known as TheMue
14:33 <niemeyer> rogpeppe: LGTM
14:36 <rogpeppe> niemeyer: thanks
14:44 <rogpeppe> niemeyer: there's a trivial followup that i seem to have neglected to unWIP: https://codereview.appspot.com/7128047
14:48 <niemeyer> rogpeppe: LGTM, with a couple of points to ponder about
14:48 * niemeyer => lunch, so he can type again
15:39 <rogpeppe> i'm heading off early today to try and beat the snow. will be leaving in about an hour.
15:45 <fwereade_> niemeyer, ping
15:45 <fwereade_> rogpeppe, take care
15:46 <rogpeppe> fwereade_: and you - have a good trip to austin...
15:48 <fwereade_> rogpeppe, cheers :)
16:06 <niemeyer> fwereade_: pongus
16:07 <fwereade_> niemeyer, heyhey
16:07 <fwereade_> niemeyer, so, I got a little distracted with another bug, because I had a silly approach to the peer-relation one
16:08 <fwereade_> niemeyer, and ISTM that we can hugely simplify AddRelation if we're explicit about it taking 2 endpoints only
16:08 <niemeyer> fwereade_: Hmm
16:09 <fwereade_> niemeyer, how insane do you consider that suggestion to be? :)
16:09 <niemeyer> fwereade_: It sounds to me like a significant refactoring to change something that is already working
16:09 <niemeyer> fwereade_: All the logic around relations handles an arbitrary number of endpoints
16:10 <niemeyer> fwereade_: Hardcoding a limit on that seems like a step backwards for no great reason
16:13 <fwereade_> niemeyer, are you free for a very quick call?
16:13 <niemeyer> fwereade_: Let's do it
16:14 <fwereade_> niemeyer, ^^
16:14 <niemeyer> fwereade_: On the way
16:47 <rogpeppe> niemeyer: here's a CL that changes agent.Conf.StateInfo and APIInfo to pointer types, after your remark earlier: https://codereview.appspot.com/7124062/
16:47 <rogpeppe> have a great weekend all
16:48 <TheMue> rogpeppe: Enjoy the snow. ;)
16:48 <TheMue> rogpeppe: And have a great weekend too.
16:48 <rogpeppe> TheMue: thanks, you too
16:49 <niemeyer> rogpeppe: Have a good one!
16:54 <niemeyer> rogpeppe: Reviewed, btw
17:04 <rogpeppe> niemeyer: thanks. yeah, it was just a debugging remnant, and i wondered about and/or there too!
17:04 <rogpeppe> niemeyer: have a great weekend, see ya monday
17:04 <niemeyer> rogpeppe: Cheers!

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!