/srv/irclogs.ubuntu.com/2012/10/16/#juju-dev.txt

davecheneywho is supposed to call unit.UnassignFromMachine() ?00:17
davecheneyis it the UA on the way out the door ?00:17
davecheneythe MA on observing the death of the UA ?00:17
davecheneyis juju remove-unit ?00:18
niemeyerdavecheney: Nobody calls it at the moment.. the unit dies and should be removed00:25
niemeyerdavecheney: And morning!00:25
davecheneyhey hey00:29
davecheneyfor the moment, i stuck it in juju remove-unit00:29
davecheneybut that may not be correct00:29
davecheneyniemeyer: correct my logic: the MA is responsible for removing units that have reached dead from the state00:31
niemeyerdavecheney: Right.. remove-unit should not unassign or remove00:32
davecheneyniemeyer: right, then we have a problem00:34
davecheneyniemeyer: https://bugs.launchpad.net/juju-core/+bug/106712700:34
niemeyerdavecheney: Not entirely surprising00:38
niemeyerdavecheney: We're just now getting to the end of the watcher support for lifecycle, unit dying, etc00:38
davecheneyniemeyer: cool00:39
davecheneyso, i can add unassignfrommachine to remove-unit00:39
niemeyerdavecheney: That said, it would be useful to understand what's the missing spot there00:39
niemeyerdavecheney: I bet it's something simple00:39
niemeyerdavecheney: But I can't tell if it's on anyone's plate yet00:39
niemeyerdavecheney: No, that's not the way to go00:39
davecheneyniemeyer: lets talk about it this evening00:39
niemeyerdavecheney: The unit doesn't have to be unassigned, ever00:40
niemeyerdavecheney: Because it's being removed with the assignment00:40
davecheneyniemeyer: can you say that another way00:40
davecheneyniemeyer: currently we have two actions, EnsureDying and UnassignFromMachine00:40
niemeyerdavecheney: The machiner should remove the unit once it's dead00:41
niemeyerdavecheney: That's probably the missing link00:41
davecheneyniemeyer: i agree00:41
niemeyerdavecheney: So there's no point in unassigning it00:41
* davecheney reads worker/uniter00:41
davecheneyworker/machiner00:41
niemeyerdavecheney: It should be removed, and then its assignment is gone, whatever it was00:41
davecheneyi agree, the machiner should be responsible for calling Unassign00:41
davecheneyit is a Machine after all :)00:41
niemeyerdavecheney: Nobody has to call unassign :)00:42
davecheneyniemeyer: well, then there is a bug00:43
davecheneysee above00:43
davecheneywhen you say 'nobody has to call unassign'00:43
davecheneyyou mean, no person, ie, nobody typing juju remove-unit ?00:43
niemeyer<niemeyer> davecheney: It should be removed, and then its assignment is gone, whatever it was00:50
niemeyer<niemeyer> davecheney: The machiner should remove the unit once it's dead00:50
niemeyer<niemeyer> davecheney: That's probably the missing link00:50
davecheneyniemeyer: ok, thanks, understood00:50
niemeyerdavecheney: No unassignment in the picture00:51
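
A hypothetical Go sketch of the behaviour agreed above: the machiner removes a Dead unit from state, and the assignment disappears with the unit document, so nothing ever calls UnassignFromMachine. The m.Units accessor, the RemoveUnit signature and the import path are guesses for illustration, not the real worker/machiner code.

    package sketch

    import (
        "launchpad.net/juju-core/state"
    )

    // removeDeadUnits removes any Dead units assigned to the machine.
    // Removing the unit document discards its machine assignment with it.
    func removeDeadUnits(st *state.State, m *state.Machine) error {
        units, err := m.Units() // assumed accessor for the machine's assigned units
        if err != nil {
            return err
        }
        for _, u := range units {
            if u.Life() == state.Dead {
                if err := st.RemoveUnit(u); err != nil { // signature assumed
                    return err
                }
            }
        }
        return nil
    }
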
wrtpdavecheney, fwereade: morning!07:02
fwereadewrtp, davecheney, heyhey07:02
davecheneymorning07:08
davecheney70 working charms07:08
davecheneywhoo hoo!07:08
TheMuemorning07:14
TheMuedavecheney: cheers07:14
TheMuedavecheney: those are very good news07:14
fwereadeTheMue, morning07:39
TheMuefwereade: hiya07:39
wrtpfwereade: i'm not sure about the idea of making all hooks in a container mutually exclusive08:32
fwereadewrtp, go on08:33
wrtpfwereade: it seems a bit like unwarranted interaction between independent charms08:34
wrtpfwereade: for instance, one charm might be very slow to execute certain hooks, making another one slow to react08:35
wrtpfwereade: in fact, if one charm's hook hangs up for a while, it would lock out all other charms in the same container, which seems... dubious08:36
wrtpfwereade: isn't apt-get supposed to work if run concurrently with itself?08:36
fwereadewrtp, you don't think that, say, a subordinate might try to make concurrent tweaks to a setting file being changed by the principal?08:39
wrtpfwereade: i think that would be extremely dodgy behaviour08:40
wrtpfwereade: just because it runs in the same container doesn't mean a subordinate has a right to delve into the inner workings of its principal08:40
wrtpfwereade: if it's a setting file not in the charm directory, then it's not so dodgy, but it's fundamentally flawed if there's no locking, because the principal might not be making the change in a hook context.08:41
fwereadewrtp, yeah I'd been thinking of the second case08:43
fwereadewrtp, but please expand on the changes that might be made outside a hook context08:43
wrtpfwereade: it's perfectly reasonable that a charm might trigger some changes in a hook that don't execute synchronously with the hook08:44
wrtpfwereade: for instance, it might have started a local server that manages some changes for it, and the hook might just be telling that server about the changes.08:45
wrtpfwereade: that's an implementation detail, and not a technique we should preclude08:45
wrtpfwereade: my view is that charms should be views as independent concurrent entities08:46
fwereadewrtp, I dunno, I still have a strong instinct that the Right Thing is to explicitly declare all activity outside either a hook context or an error state (in which you're expected to ssh in) is unsafe08:46
wrtps/views/viewed/08:46
wrtpfwereade: huh?08:46
wrtpfwereade: so you can't run a server?08:46
wrtpfwereade: isn't that kinda the whole point?08:47
fwereadewrtp, there is juju activity and service activity08:47
fwereadewrtp, the service does what it does, regardless of juju08:47
wrtpfwereade: it seemed to me like you were talking about the subordinate mucking with service setting files08:48
fwereadewrtp, yes, but not stuff inside the charm directory... the settings of the actual service08:48
wrtpfwereade: ok, so that's service activity, right?08:48
fwereadewrtp, ensuring a logging section has particular content, or something08:48
fwereadewrtp, that is juju activity... acting on the service's config08:48
wrtpfwereade: i think it's a grey area08:49
fwereadewrtp, IME it is rare for services to wantonly change their own config files at arbitrary times08:49
wrtpfwereade: "rare" isn't good enough.08:49
wrtpfwereade: we want charms to be able to work with one another regardless of how they're implemented08:50
wrtpfwereade: and it seems to me like it's perfectly reasonable for a charm to start a service which happens to manage another service.08:50
fwereadewrtp, how does allowing parallel hook execution do anything except make it harder for charms to work reliably together?08:51
wrtpfwereade: it means the failure of one charm is independent on the failure of another08:52
wrtps/on the/of the/08:52
wrtpfwereade: and in that sense, it makes it easier for charms to work reliably together08:52
fwereadewrtp, sorry, I may be slow today: how is hook failure relevant?08:53
fwereadewrtp, having them execute in parallel makes it *more* likely that hooks will fail due to inappropriately parallelised operations08:53
wrtpfwereade: if i write a command in a hook that happens to hang for a long time (say 15 minutes, trying to download something), then that should not block out any other charms08:53
wrtpfwereade: i think that if you write a subordinate charm, it's your responsibility to make it work correctly when other things are executing concurrently.08:54
fwereadewrtp, and if you write a principal charm, it's also your responsibility to know everything that every subordinate charm might do so you can implement your side of the locking correctly?08:55
wrtpfwereade: no. i think that kind of subordinate behaviour is... insubordinate :-)08:55
wrtpfwereade: i think that we should not think of subordinates as ways to muck directly with the operations of other charms.08:56
wrtpfwereade: if you want that, then you should change the other charms directly.08:56
wrtpfwereade: ISTM that if you've got two things concurrently changing the same settings file (whether running in mutually exclusive hooks or not) then it's a recipe for trouble.08:58
fwereadewrtp, the point is to eliminate the concurrency...08:58
fwereadewrtp, by mandating that if you want to make a change you must do it in a hook, and serialising hook executions across all units, we do that08:58
fwereadewrtp, the other drawbacks may indeed sink the idea08:59
fwereadewrtp, but I'm pretty sure that doing this gives us a much lower chance of weird and hard-to-repro interactions08:59
wrtpfwereade: yeah, but if you're a principal and you change a settings file, you might be warranted in expecting that it's the same when the next hook is called.08:59
wrtpfwereade: for instance you might just *write* the settings file, rather than read it in and modify it09:00
fwereadewrtp, (it also depends on adding juju-run so that we *can* run commands in a hook context at arbitrary times)09:00
Arammoin.09:01
wrtpAram: hiya09:01
fwereadeAram, heyhey09:01
fwereadewrtp, well, it is true that a hook never knows what hook (if any) ran last09:03
wrtpfwereade: i don't think we should be making it easier to write the kind of charms that this would facilitate09:03
fwereadewrtp, what's the solution to the apt issues then?09:04
wrtpfwereade: so is it true that apt does not work if called concurrently?09:04
fwereadewrtp, it appears to be09:04
wrtpfwereade: i would not be averse to providing an *explicit* way to get mutual exclusion across charms in a container09:05
wrtpfwereade: so you could do, e.g.: juju-acquire; apt-get...; juju-release09:06
wrtpfwereade: last thing i saw: [10:04:39] <fwereade> wrtp, it appears to be09:08
wrtpfwereade: it would be better if apt-get was fixed though - that seems to be the root of this suggestion.09:09
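
A minimal sketch of the explicit juju-acquire / juju-release idea wrtp floats above. This is an assumption about how such a tool could look, not an existing juju command: it wraps a command (apt-get, say) in a container-wide flock so only one cooperating hook runs the critical section at a time. The command name and lock path are invented for the sketch.

    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        if len(os.Args) < 2 {
            log.Fatal("usage: juju-exclusive <command> [args...]")
        }
        // One lock file per container; every cooperating hook uses the same path.
        f, err := os.OpenFile("/var/lib/juju/container.lock", os.O_CREATE|os.O_RDWR, 0644)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        // Block until no other hook in the container holds the lock.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            log.Fatal(err)
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        cmd := exec.Command(os.Args[1], os.Args[2:]...)
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

Usage would then look like the line suggested above, e.g. juju-exclusive apt-get install -y somepkg (the wrapper name is hypothetical).
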
fwereade_wrtp, sorry -- I was composing something like "that sounds potentially good, please suggest it in a reply"09:09
fwereade_wrtp, I still find it hard to believe that apt-get is the only possible legitimate source of concurrency issues09:09
wrtpfwereade_: of course it's not - but we're in a timesharing environment - it's all concurrent and people need to deal with that.09:10
fwereade_wrtp, and if it's not then everybody has to carve out their own exceptions for the things they know and care about09:10
fwereade_wrtp, and I have a strong suspicion that everyone will figure out that the Best Practice is to grab the lock at the start of the hook and release it at the end09:10
wrtpfwereade_: i think that trying to pretend that in this fluffy juju world everything is sequential and lovely, is going to create systems that are very fragile09:11
wrtpfwereade_: that may well be true for install hooks at any rate09:11
wrtpfwereade_: i'm not so sure about other hooks09:11
fwereade_wrtp, I am not trying to "pretend" anything... I am saying we can implement things one way, or another way, and that I think one way might be good. you seem to be asserting that even if we do things sequentially they still won't be sequential09:13
fwereade_wrtp, it's not about pretending it's about making a choice09:14
wrtpfwereade_: yeah. i'm saying that hook sequencing doesn't necessarily make the actions of a charm sequential09:14
fwereade_wrtp, if we only pretend to make choices I agree we'll be screwed there ;)09:14
fwereade_wrtp, wait, you have some idea that any charm knows anything about what happened before it was run?09:14
fwereade_wrtp, s/any charm/any hook/09:15
wrtpfwereade_: i think it's reasonable for a charm to assume ownership of some system file.09:15
fwereade_wrtp, ok, but that still implies nothing about what the last hook to modify that file was, at any given time09:16
wrtpfwereade_: it means that you know that whatever change was made, it was made by your hooks09:17
fwereade_wrtp, I don't remotely care about hook *ordering* in this context... is that the perspective you're considering?09:17
wrtpfwereade_: no, not at all09:17
fwereade_wrtp, wait, you were just telling me that "rare" isn't good enough when considering the possibility of, say, a service changing its own config... ISTM that it follows that we must have some magical system which is safe from any and all concurrent modifications (or, really, that every charm author has to build compatible pieces of such a system)09:18
fwereade_wrtp, or we have a simple solution, which is, don't run two hooks at a time09:19
wrtpfwereade_: or... don't have one charm that modifies the same things as another. keep out of each others' hair.09:19
fwereade_wrtp, so, no apt then09:19
fwereade_wrtp, and nothing else that doesn't like concurrent modifications09:20
wrtpfwereade_: apt needs to be fixed. or we need to provide a workaround for that, in particular.09:20
fwereade_wrtp, *or* a vast distributed multi-author locking algorithm using new hook commands09:20
wrtpfwereade_: "vast distributed multi-author" ??09:21
fwereade_wrtp, every single charm author has to do the locking dance right09:21
wrtpfwereade_: only if you're changing something that others might change concurrently.09:22
wrtpfwereade_: i think this all comes down to how we see the role of subordinates09:22
fwereade_wrtp, requiring that charm authors have perfect precognition doesn't strike me as helpful ;p09:22
wrtpfwereade_: have you looked at what subordinate charms are out there now, and whether any potentially suffer from these issues? (ignore apt-get issues for the moment)09:24
fwereade_wrtp, no, because these sorts of issues are by their very nature subtle and hidden09:24
wrtpfwereade_: i'm not so sure. i think it should be fairly evident if a subordinate is doing stuff that may interact badly with a principal.09:25
fwereade_wrtp, I think that if it were that clear, everybody would have spotted the apt problem and worked around it in every single charm09:25
wrtpfwereade_: i assume that hardly anyone uses subordinates yet tbh09:26
wrtpfwereade_: i don't mean evident from behaviour, but evident from what the purpose of the subordinate charm is09:27
fwereade_wrtp, right -- my position is that the reason that apt is the only problem we've seen is likely to be because we don't use many yet09:27
fwereade_wrtp, IMO it is consistent with the general feel of juju to make it easier, not harder, for charms to play together09:29
wrtpfwereade_: IMO it's also consistent with juju to make independent components that have independent failure modes09:30
fwereade_wrtp, we provide a consistent snapshot of remote state in a hook context -- why mess that up by explicitly encouraging inconsistency in local state?09:30
wrtpfwereade_: because we *can't* provide a consistent snapshot of local state?09:30
fwereade_wrtp, and yet you seem to consider that adding a class of subtle and hard-to-detect concurrency-based failures is consistent with this goal09:30
fwereade_wrtp, we can either have a hook which is equivalent to logging into a machine yourself, or logging into a machine with N concurrent administrators09:31
fwereade_wrtp, all making changes at the same time09:31
fwereade_wrtp, I don't see how the second scenario is more robust09:32
wrtpfwereade_: if one of those N concurrent administrators hangs for ages, the others can continue uninterrupted. i think that's a very useful property.09:33
fwereade_wrtp, I think that's a very situation-specific property and not worth introducing this class of bug for09:34
wrtpfwereade_: it means that if i decide to install a subordinate charm, the principal service can carry on regardless.09:34
fwereade_wrtp, I feel if we ever do something like this it should be a release/reacquire pair around the long-running operations09:34
fwereade_wrtp, making people have to lock by default seems really unhelpful to me09:34
wrtpfwereade_: tbh, i'm very surprised that apt-get doesn't work concurrently by default. i haven't managed to find any bug reports so far.09:36
wrtpfwereade_: it seems to take out file locks09:36
fwereade_wrtp, so plausibly 2 things are installing things with overlapping dependencies?09:36
wrtpfwereade_: we could always provide a version of apt-get that *is* exclusive...09:39
fwereade_wrtp, I dunno, it feels to me like we'll end up with a bunch of special cases sooner or later09:48
fwereade_wrtp, can we take it to the lists for further discussion? need to pop out to baker before it closes09:48
wrtpfwereade_: sure, i'll try and draft a reply09:49
niemeyerGood morning!11:01
davecheneyhello11:01
TheMuehi11:01
niemeyerAnyone has the calls active already, or should I?11:02
TheMueniemeyer: feel free to start, imho none has done it yet11:03
niemeyerCool, starting it up11:03
niemeyerhttps://plus.google.com/hangouts/_/2a0ee8de20f9362c47ab06b9b5635551d4959416?authuser=0&hl=en11:04
davecheneyno camera today11:04
davecheneynot sure why11:04
davecheneythe mac says it can see the device11:05
davecheneybut no green light :(11:05
niemeyerwrtp: ping11:05
wrtpniemeyer: pong11:05
niemeyerwrtp: Party time11:05
wrtpniemeyer: am just sorting out the hangout laptop11:05
AramI hate this technical shit11:06
wrtplunch11:59
wrtpback12:19
niemeyerfwereade_: Sent a more carefully considered comment on the lock-stepping issue12:54
mrammHow are folks doing this fine morning?13:05
niemeyerfwereade_: ping13:06
niemeyermramm: Heya13:06
mrammI'm about to go over Mark S's open stack design summit keynote with him (and kapil and clint)13:06
niemeyermramm: All good 'round here13:06
niemeyermramm: Brilliant, good luck there13:06
mrammI think we have a really good story to tell around openstack upgrades thanks to the cloud archive13:06
mrammand the look and feel of the juju gui is impressive13:07
niemeyerfwereade_: When you're back and you have a moment, I'd appreciate talking a bit about https://codereview.appspot.com/668704313:08
niemeyerfwereade_: Both about the logic in EnterScope, and about the fact the CL seems to include things I've reviewed elsewhere13:09
niemeyermramm: What's the cloud archive?13:09
niemeyermramm: Good to hear re. GUI13:09
mrammit's just a package archive13:10
mrammwith all the new stuff, backported and tested against the LTS13:10
niemeyermramm: LOL13:10
niemeyermramm: So we manage to stick the word "cloud" on package archives? ;-)13:10
niemeyermanaged13:10
mrammit's all "cloud" stuff in the archive13:10
mrammyes13:11
mrammgotta make things cloudy13:11
fwereade_niemeyer, pong, sorry I missed you13:13
niemeyerwrtp: no problem13:13
niemeyerErm13:13
niemeyerfwereade_: no problem13:13
fwereade_niemeyer, haha13:13
niemeyerfwereade_, wrtp: I was actually about to ask something else13:14
wrtpniemeyer: go on13:14
fwereade_niemeyer, 043 was meant to be a prereq for 046, i didn't realise I'd skipped it until yesterday13:14
niemeyerfwereade_, wrtp: I think it'd make sense to have the interface of juju.Conn exposing at least similar functionality to what we have in the command line13:14
wrtpniemeyer: are you talking about your Deploy bug?13:14
fwereade_niemeyer, yes, I think I like that idea13:15
niemeyerNo, I'm talking about https://codereview.appspot.com/670004813:15
niemeyerfwereade_, wrtp: We've been going back and forth on what we have in juju.Conn, and the state we're in right now is quite neat13:15
wrtpniemeyer: ah, i think that's a tricky one.13:15
niemeyerfwereade_, wrtp: But the decision to put something there or not is a bit ad-hoc at the moment13:15
wrtpniemeyer: i *do* think it's perhaps a bit confusing that RemoveUnit in Conn isn't anything like RemoveUnit in State.13:16
niemeyerwrtp: Agreed, and I have a proposal: DestroyUnit13:16
wrtpniemeyer: in state?13:16
niemeyerwrtp: No, in juju.Conn13:16
niemeyerIdeally that'd be the name of the command-line thing as well, but that's too late13:16
wrtpniemeyer: hmm13:17
niemeyerWe do have destroy-service and destroy-environment, though13:17
fwereade_niemeyer, honestly I would prefer us to change Add/Remove in state to Create/Delete, say, and save the meanings of those verbs for the user-facing add/removes13:17
wrtpniemeyer: Destroy sounds more drastic than Remove tbh13:17
niemeyerfwereade_: Remove vs. Delete feels awkward13:17
niemeyerwrtp: It's mean to be drastric13:17
niemeyerdrastic13:17
niemeyermeant13:17
niemeyerI can't spell13:17
wrtpniemeyer: ah, i thought the command-line remove-unit just set dying.13:18
fwereade_niemeyer, well, the trouble is we have this awkward remove-unit verb, which doesn't really mean remove at all13:18
niemeyerwrtp: and dying does what? :)13:18
wrtpniemeyer: then again, i suppose... yeah13:18
niemeyerfwereade_: We can obsolete the command, and have destroy-unit13:18
niemeyerfwereade_: (supporting the old name, of course)13:19
fwereade_niemeyer, I'm -0.5 on the add/destroy pairing but it doesn't seem all that bad13:20
wrtpfwereade_: we already have add-service, destroy-service, no?13:20
niemeyerwrtp: We don't have add-service, yet13:20
niemeyerwrtp: WE may have some day..13:20
wrtpniemeyer: good point13:20
niemeyerWe do have AddService, though, so the pairing is already there in some fashion at least13:21
fwereade_wrtp, we also have terminate-machine rather than destroy-machine13:21
niemeyerI quite like destroy precisely because it's drastic, and because it avoids the add/remove/dying conflict13:21
niemeyerfwereade_:+1 on destroy-machine too13:21
fwereade_wrtp, niemeyer: and in general I am in favour of making the commands more consistent13:21
wrtpfwereade_: +1 too13:22
niemeyerfwereade_: destroy-service, destroy-unit, destroy-machine, destroy-environment..13:22
niemeyerI'm happy with that, at least13:22
wrtpdestroy for destructive actions seems good13:22
fwereade_wrtp, niemeyer: any quibbles I may have over the precise verb are drowned out by my approval for consistency13:22
wrtpniemeyer: sounds like a plan13:22
fwereade_niemeyer, wrtp: destroy-relation13:23
niemeyerwrtp, fwereade_: Awesome, let's document and move in that direction13:23
niemeyerI'll add a comment to Dave's branch13:23
niemeyerfwereade_: +113:23
fwereade_niemeyer, great, thanks13:23
wrtpfwereade_: i don't mind about remove-relation actually - it doesn't feel like that much of a destructive operation.13:23
niemeyerwrtp: It actually is13:23
fwereade_wrtp, strong disagreement13:23
* niemeyer has to take the door.. biab13:23
wrtpfwereade_: ok, cool13:23
fwereade_niemeyer, re the review -- if I were you I'd just drop that one, you've seen it all already in the one without the prereq13:25
fwereade_niemeyer, I will try to figure out exactly where I am and whether I've introduced anything that deserves a test, then I should have the fixed one-you've-seen ready to repropose soon13:25
wrtpinteresting; this test didn't *fail*, but it did take over 2 minutes to execute on my machine: http://paste.ubuntu.com/1283102/13:36
wrtpi'm not sure if i'm being pathological there or not13:37
niemeyerwrtp: Would be very useful to know where the time is being spent13:55
niemeyerfwereade_: Awesome, thanks13:55
wrtpniemeyer: i'm looking into it right now.13:55
niemeyerfwereade_: Can we speak about EnterScope when have a moment?13:55
fwereade_niemeyer, any time13:56
fwereade_niemeyer, now?13:56
niemeyerfwereade_: Let's do it13:56
wrtpniemeyer: quick check before you do that13:56
niemeyerwrtp: Sure13:56
wrtpniemeyer: should there be any watcher stuff running in a normal state unit test?13:56
wrtpniemeyer: (i'm seeing hundreds of "watcher: got changelog document" debug msgs13:56
wrtp)13:57
niemeyerwrtp: The underlying watcher starts on state opening13:57
wrtpniemeyer: ah13:57
niemeyerwrtp: If you're creating hundreds of machines, that's expected13:57
wrtpniemeyer: i see 4600 such messages initially13:57
fwereade_niemeyer, actually, I'm just proposing -wip13:57
fwereade_niemeyer, not quite sure it's ready, have ended up a bit confused by the branches13:58
fwereade_niemeyer, but it does have an alternative approach to EnterScope13:58
fwereade_niemeyer, that I am not quite sure whether I should do as it is, or loop over repeatedly until I get so many aborteds that I give up13:58
niemeyerwrtp: You'll get as many messages as changes13:59
fwereade_niemeyer, https://codereview.appspot.com/667804613:59
niemeyerfwereade_: Cool13:59
niemeyerfwereade_: Invite sent14:01
wrtphmm, i see the problem, i think14:10
niemeyerwrtp: Found it?14:26
wrtpniemeyer: the problem is that all the goroutines try to assign to the same unused machine at once, but only one succeeds; then the all try with the next one etc etc14:26
wrtps/the all/they all/14:26
wrtpniemeyer: i think i've got a solution14:27
wrtpniemeyer: i'm not far off trying it out14:27
wrtpniemeyer: my solution is to read in batches, and then try to assign to each machine in the batch in a random order.14:28
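
A sketch of that batch-and-shuffle idea: try a batch of candidate unused machines in a random order so concurrent callers don't all fight over the same one. How the batch is read and what error a lost race produces are assumptions; only the shape matters here.

    package sketch

    import (
        "fmt"
        "math/rand"

        "launchpad.net/juju-core/state"
    )

    // assignToSomeUnused walks the batch in a random order and assigns the
    // unit to the first machine it wins.
    func assignToSomeUnused(u *state.Unit, batch []*state.Machine) error {
        for _, i := range rand.Perm(len(batch)) {
            if err := u.AssignToMachine(batch[i]); err == nil {
                return nil
            }
            // Assume the error means another unit won the race for this
            // machine; fall through and try the next candidate.
        }
        return fmt.Errorf("no unused machine available in this batch")
    }
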
niemeyerwrtp: What about going back to the approach we had before?14:28
wrtpniemeyer: which was?14:29
niemeyerwrtp: Create a machine and assign to it14:29
wrtpniemeyer: what if we don't need to create a machine?14:29
wrtpniemeyer: this is in AssignToUnusedMachine, which doesn't create machines14:30
niemeyerwrtp: My understanding is that we had an approach to allocate machines that was simple, and worked deterministically14:31
wrtpniemeyer: and the approach we used before is inherently racy if someone else *is* using AssignToUnusedMachine14:31
wrtpniemeyer: that's fine (modulo raciness), but that doesn't fix the issue i'm seeing in this test (which we may, of course, decide is pathological and not worth fixing)14:31
niemeyerwrtp: The only bad case was that if someone created a machine specifically for a service while someone else attempted to pick a random machine, the random one could pick the machine just allocated for the specific service14:32
wrtpniemeyer: so in that case we should loop, right?14:32
niemeyerwrtp: I'm not sure14:33
wrtpniemeyer: actually, we *do* create a machine and then assign the unit to that machine14:35
wrtpniemeyer: and that's the cause of the bug that dfc is seeing (i now realise)14:35
niemeyerwrtp: Indeed, sounds plausible14:37
wrtpniemeyer: in the case i'm dealing with currently, we have a big pool of machines already created, all unused, and we're trying to allocate a load of units over them.14:37
wrtpniemeyer: that seems like a reasonable scenario actually.14:37
niemeyerwrtp: Agreed14:38
wrtpniemeyer: so i think it's worth trying to make that work ok.14:38
niemeyerwrtp: +114:38
wrtpniemeyer: so... do you think my proposed solution is reasonable?14:38
niemeyerwrtp: It seems to reduce the issue, but still feels racy and brute-forcing14:40
wrtpniemeyer: alternatives are: - read *all* the machines, then choose them in random order; - add a random value to the machine doc and get the results in a random order14:40
wrtpniemeyer: yeah, i know what you mean14:40
wrtpniemeyer: there's probably a way of doing it nicely, though i haven't come up with one yet14:41
niemeyerwrtp: I think we could introduce the concept of a lease14:43
wrtpniemeyer: interesting way forward, go on.14:43
niemeyerwrtp: When a machine is created, the lease time is set to, say, 30 minutes14:44
niemeyerwrtp: AssignToUnused never picks up machines that are within the lease time14:44
wrtpniemeyer: that doesn't solve the big-pool-of-already-created-machines problem AFAICS14:45
wrtpniemeyer: which is, admittedly, a different issue14:45
niemeyerwrtp: Hmm, good point14:45
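
For reference, a sketch of what the lease idea could look like (the conversation defers it just below). The 30-minute figure comes from the chat; the document shape, field names and query are assumptions made up for the illustration.

    package sketch

    import (
        "time"

        "labix.org/v2/mgo/bson"
    )

    // assignmentLease is how long a freshly added machine is left alone by
    // the unused-machine picker.
    const assignmentLease = 30 * time.Minute

    type leasedMachineDoc struct {
        Id          string    `bson:"_id"`
        LeaseExpiry time.Time `bson:"leaseexpiry"`
    }

    // leaseExpiredQuery selects only machines whose lease has run out, so
    // an AssignToUnused-style query never grabs a machine created moments
    // ago for somebody else.
    func leaseExpiredQuery(now time.Time) bson.D {
        return bson.D{{"leaseexpiry", bson.D{{"$lte", now}}}}
    }
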
niemeyerwrtp: You know what.. I think we shouldn't do anything right now other than retrying14:47
wrtpniemeyer: and ignore the time issue?14:47
niemeyerwrtp: Yeah14:48
wrtpniemeyer: the random-selection-from-batch isn't much code and will help the problem a lot14:48
niemeyerwrtp: It makes the code more complex and bug-prone for a pretty unlikely scenario14:48
wrtpniemeyer: ok. it's really not that complex, though, but i see what you're saying.14:49
niemeyerwrtp: I recall you saying that before spending a couple of days on the last round on unit assignment too :-)14:50
wrtpniemeyer: i've already written this code :-)14:50
wrtpniemeyer: and it's just an optimisation that fairly obviously doesn't affect correctness.14:51
niemeyerwrtp: I don't think it's worth it.. it's increasing complexity and the load of the system in exchange for a reduction in the chance of conflicts in non-usual scenarios14:52
niemeyerwrtp: We'll still have conflicts, and we still have to deal with the problem14:53
niemeyerwrtp: People adding 200 machines in general will do add-machine -n 20014:53
niemeyerwrtp: and we should be able to not blow our own logic out with conflicts in those cases14:53
wrtpniemeyer: i'm thinking of remove-service followed by add-service14:55
niemeyerwrtp: Ok?14:56
wrtpniemeyer: sure.14:57
wrtpniemeyer: i'll scale back my test code :-)14:57
niemeyerwrtp: Sorry, I was asking what you were thinking14:57
niemeyerwrtp: What about remove-service follows by add-service?14:57
niemeyerfollowed14:57
wrtpniemeyer: if someone does remove-service, then two add-services concurrently, they'll see this issue.14:58
wrtpniemeyer: that doesn't seem that unusual a scenario14:58
wrtpniemeyer: i mean two "deploy -n 100"s of course14:59
wrtpniemeyer: assuming the original service had 200 units.14:59
niemeyerwrtp: If someone does destroy-service, they'll put units to die.. if they run add-service twice immediately, they'll create two new machines14:59
niemeyerwrtp: What's the problem with that?14:59
wrtpniemeyer: if someone does destroy-service, then waits, the machines lie idle with no units after a while, yes?14:59
niemeyerwrtp: Sorry, what's the scenario again?  Different scenarios are not "of course" the same15:00
wrtpniemeyer: here's the scenario i'm thinking of:15:00
wrtpjuju deploy -n 200 somecharm; juju remove-service somecharm; sleep 10000; juju deploy -n 100 othercharm & juju deploy -n 100 anothercharm15:02
niemeyerwrtp: I don't understand why we're talking about deploy + remove-service15:02
niemeyerwrtp: What's the difference between that and add-machine -n 200?15:02
wrtpniemeyer: because that leaves a load of machines allocated but unused, no?15:03
niemeyerwrtp: add-machine -n 200?15:03
wrtp[15:53:33] <niemeyer> wrtp: People adding 200 machines in general will do add-machine -n 20015:03
niemeyerwrtp: Yes, what's the difference?15:03
wrtpniemeyer: but they are more likely to remove a service and add another one, i think15:03
niemeyerwrtp: Doesn't matter to the allocation algorithm, does it?15:03
wrtpniemeyer: "juju deploy -n 200 foo" doesn't have the issue15:04
wrtpniemeyer: if the machines are not currently allocated15:04
niemeyerwrtp: Agreed.. that's why I'm saying the whole problem is not important..15:05
niemeyerwrtp: I still don't get what you're trying to say with deploy+remove-service+sleep15:05
niemeyerwrtp: Isn't that an expensive way to say add-machine -n 200?15:06
wrtpniemeyer: i'm trying to show a moderately plausible scenario that would exhibit the pathological behaviour we're seeing here.15:06
wrtpniemeyer: yeah, sure.15:06
niemeyerwrtp: Okay, phew..15:06
niemeyerwrtp: So how is add-machine -n 200 + deploy -n 200 an issue?15:06
wrtpniemeyer: it's only an issue if you've got two concurrent deploys.15:07
niemeyerwrtp: Okay, so we should just ensure that these cases actually work by retrying, until we sort a real solution out in the future that actually prevents the conflict15:08
wrtpniemeyer: sounds reasonable.15:08
=== TheMue is now known as TheMue-AFK
wrtpniemeyer: AssignToUnusedMachine does currently retry as it stands actually.15:09
niemeyerwrtp: So how is Dave stumbling upon issues?15:09
wrtpniemeyer: the problem is in AssignUnit, but there's a trivial fix, i think15:09
niemeyerwrtp: Coo15:09
niemeyerll15:09
wrtpniemeyer: currently AssignUnused calls Unit.AssignToMachine(m) but it should call Unit.assignToMachine(m, true)15:10
wrtpniemeyer: yeah, i was surprised when my test didn't fail.15:10
niemeyerwrtp: I'm not sure this solves the issue15:10
wrtpniemeyer: no?15:11
wrtpniemeyer: i *think* it solves the case of AssignUnit racing against itself15:12
wrtpniemeyer: it doesn't solve the problem of AssignUnit racing against AssignToUnusedMachine15:12
wrtpniemeyer: if we want to solve that, we'll need to loop, i think.15:12
wrtpniemeyer: (but that's not the problem that dave is seeing)15:13
wrtpniemeyer: erk15:14
wrtpniemeyer: no, you're right15:14
wrtpniemeyer: i'm thinking of something like this: http://paste.ubuntu.com/1283247/15:19
niemeyerwrtp: This doesn't feel great.. allocating a machine and having it immediately stolen is pretty awkward15:21
niemeyerwrtp: If we want to solve this stuff for real, I suggest two different fronts:15:21
wrtpniemeyer: AddMachineWithUnit ?15:22
niemeyer1) Introduce a lease time on AddMachine that prevents someone else from picking it up non-explicitly15:22
niemeyer2) Do a variant of your suggestion that picks the highest and the smallest id of all unused machines, and picks the first one >= a random id in the middle15:23
niemeyerwrtp: -1 I think.. this would mean we'll have to do a bunch of transaction merging that right now are totally independent15:24
wrtpniemeyer: ok15:24
wrtpniemeyer: do we have a way of getting an agreed global time for leaseholding?15:25
wrtpniemeyer: presumably the presence stuff does that currently15:25
wrtpniemeyer: hmm, maybe mongo provides access to the current time15:25
niemeyerwrtp: Yeah15:26
niemeyerwrtp: Although, ideally we'd not even load that time15:27
wrtpniemeyer: i'm thinking we shouldn't need to15:28
niemeyerwrtp: if the machine is created with a bson.MongoTimestamp, that's automatically set15:29
niemeyerwrtp: It needs to be the second field, though, IIRC15:29
wrtpniemeyer: weird15:29
niemeyerwrtp: Yeah, it's a bit of an internal time type15:29
niemeyerwrtp: it'd be nicer to use a normal time, actually15:30
niemeyerI don't recall if there's a way to create it with "now", though15:30
* niemeyer checks15:30
niemeyerNothing great15:32
niemeyerI talked to Eliot before about $now.. I think it'll come, but doesn't exist yet15:33
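
A sketch of the bson.MongoTimestamp idea, relying on the behaviour described in the chat (a zero timestamp filled in server-side on insert, with the field positioned early in the document) without verifying it here. This is not the real juju-core machine schema.

    package sketch

    import (
        "labix.org/v2/mgo/bson"
    )

    type machineDoc struct {
        Id      string              `bson:"_id"`
        Created bson.MongoTimestamp `bson:"created"` // left zero; assumed to be filled in server-side on insert
        Life    int                 `bson:"life"`
    }
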
wrtpniemeyer: ah15:34
niemeyerAnyway, will think further about that over lunch15:34
wrtpniemeyer: cool15:34
wrtpniemeyer: for the time being, perhaps it's best just to do the loop?15:34
wrtpniemeyer: as it's a quick fix for a current bug15:35
niemeyerwrtp: Yeah, that's what I think we should do15:35
niemeyerwrtp: The real solution is involved and will steal our time15:36
wrtpniemeyer: agreed15:36
wrtpniemeyer: also, what we will have will be correct, just not very efficient.15:36
niemeyerwrtp: Yeah, but those are edge cases really.. the cheap answer is "don't allocate tons of machines and then do tons of assignments in parallel"15:37
niemeyerwrtp: Which isn't hard to avoid15:37
wrtpniemeyer: yeah15:37
wrtpniemeyer: concurrent deploys are inefficient. we can live with that for the time being.15:38
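
A minimal sketch of the agreed stopgap: retry the unused-machine assignment a bounded number of times and accept the inefficiency. The receiver, method name and return values are guesses at the API under discussion, not the real code.

    package sketch

    import (
        "launchpad.net/juju-core/state"
    )

    // assignWithRetry retries the assignment, treating a failure as
    // "probably lost a race for that machine".
    func assignWithRetry(u *state.Unit) (*state.Machine, error) {
        var lastErr error
        for attempt := 0; attempt < 10; attempt++ {
            m, err := u.AssignToUnusedMachine()
            if err == nil {
                return m, nil
            }
            // The real code would distinguish "no unused machines left"
            // (give up immediately) from a transaction abort (retry).
            lastErr = err
        }
        return nil, lastErr
    }
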
niemeyerCool, lunch time.. biab15:38
niemeyerwrtp: Concurrent deploys with spare machines, specifically15:38
wrtpniemeyer: all concurrent deploys will suffer from the someone-stole-my-new-machine problem, i think.15:39
wrtpniemeyer: this seems to work ok. https://codereview.appspot.com/671304516:45
niemeyerwrtp: Looking16:47
niemeyerwrtp: Nice16:49
fwereade_niemeyer, hmm, is it OK to go from Dying straight to removed without passing through Dead?16:49
niemeyerwrtp: How long does it take to run?16:49
fwereade_niemeyer, blast sorry can't talk now16:49
wrtpniemeyer: <2s16:49
wrtpniemeyer: one mo, i'll check16:49
wrtpniemeyer: 0.753s to run the state tests with just that one test.16:50
niemeyerwrtp: Beautiful, thanks16:50
niemeyerwrtp: LGTM16:50
wrtpniemeyer: thanks16:50
wrtpniemeyer: it was surprisingly difficult to provoke the race before applying the fix16:51
niemeyerfwereade_: If nothing else, I see how it might be okay in cases we have tight control on16:51
niemeyerfwereade_: We can talk more once you're back16:51
niemeyerwrtp: Those are very useful tests to hold16:52
wrtpniemeyer: agreed16:52
wrtpniemeyer: any chance of getting some feedback on https://codereview.appspot.com/6653050/ ?16:54
niemeyerwrtp: I was reviewing that when I stopped to review your request here16:55
wrtpniemeyer: ah brilliant, thanks!16:55
niemeyerwrtp: Why does it reset the admin password on tear down of ConnSuite?17:01
fwereade_niemeyer, well, it was what we were discussing earlier... that it seemed sensible for the last unit to leave a relation scope to be the one to finally remove it, and that we should do it in a transaction17:01
wrtpniemeyer: because every time we connect to the state, the admin password gets set.17:02
niemeyerwrtp: Where's that done?17:02
wrtpniemeyer: in juju.NewConn17:03
wrtpniemeyer: actually, in Bootstrap17:03
wrtpniemeyer: and then juju.NewConn resets it, as is usual17:04
niemeyerwrtp:  135         // because the state might have been reset17:04
niemeyer 136         // by the test independently of JujuConnSuite.17:04
niemeyerwrtp: Is that done when the password change fails?17:05
niemeyerwrtp: I mean, where do we reset and put it in such a non-working state17:05
wrtpniemeyer: one mo, i'll check the code again17:05
niemeyerwrtp: Cheers17:06
wrtpniemeyer: ah, yes, it's when we have tests that Bootstrap then Destroy17:06
wrtpniemeyer: any test that does a Destroy will cause the SetAdminPassword call to fail17:07
niemeyerwrtp: Hmm..17:08
niemeyerwrtp: I'm pondering about what it means.. won't the follow up tear down fail too17:08
niemeyer?17:08
wrtpniemeyer: no, it doesn't. can't quite remember why though, let me check again.17:08
niemeyerwrtp: It feels a bit wild17:09
niemeyerwrtp: You've just worked on that and can't remember.. neither of us will have any idea of that stuff in a bit :(17:09
wrtpniemeyer: i know what you mean17:09
wrtpniemeyer: the interaction between JujuConnSuite, MgoSuite and the dummy environ isn't ideal17:10
niemeyerwrtp: What actually fails if we don't reset the password there?17:10
wrtpniemeyer: lots of tests need the server to be restarted then17:10
wrtpniemeyer: nothing fails - tests just get slower17:11
niemeyerwrtp: That's good17:11
wrtpniemeyer: when someone calls Environ.Destroy, it calls mgo.Reset, but the JujuConn.State variable remains pointing to the old connection.17:12
niemeyerwrtp: Right17:12
niemeyerwrtp: Okay, I'll just suggest a comment there17:13
wrtpniemeyer: sounds good17:13
niemeyer/ Bootstrap will set the admin password, and render non-authorized use17:14
niemeyer/ impossible. s.State may still hold the right password, so try to reset17:14
niemeyer/ the password so that the MgoSuite soft-resetting works. If that fails,17:14
niemeyer/ it will still work, but it will take a while since it has to kill the whole17:14
niemeyer/ database and start over.17:14
niemeyerAh, will add a note about when it happens too17:15
niemeyerwrtp: LGTM17:18
niemeyerwrtp: Pleasantly straightforward17:19
wrtpniemeyer: great, thanks17:19
wrtpniemeyer: yeah, when i realised that all tests were going to need to connect with authorisation, i thought the changes would be worse than they ended up.17:20
niemeyerfwereade_: I see17:30
niemeyerfwereade_: I think it sounds reasonable in that case17:31
niemeyerfwereade_: Is there anyone else that might be responsible for taking the unit from dying => dead => remove?17:31
wrtpright, submitted. good way to end the day. see y'all tomorrow.17:32
niemeyerwrtp: Indeed, have a good one!17:34
fwereade__niemeyer, it's the relation I'm pondering taking directly Dying->gone19:12
fwereade__niemeyer, I *think* it's ok, because the last thing to be doing anything with it should be that last unit19:13
niemeyerfwereade__: Makes sense.. have you seen my comments on it?19:13
fwereade__niemeyer, sorry, I'm not sure which comments.. I haven't seen any comments of yours less than ~ 1 day old on the CLs I'm thinking of20:11
niemeyerfwereade__: My apologies, I meant on IRC, right above20:14
niemeyer<niemeyer> fwereade_: I see20:14
niemeyer<niemeyer> fwereade_: I think it sounds reasonable in that case20:14
niemeyer<niemeyer> fwereade_: Is there anyone else that might be responsible for taking the unit from dying => dead => remove?20:14
fwereade__niemeyer, it's not the unit, it's the relation20:14
niemeyerfwereade__: Ah, sorry, yes, s/unit/relation20:15
fwereade__niemeyer, the other thing that might have to do it is the client, if no units are in the relation yet20:15
fwereade__niemeyer, I'm actually starting to feel less keen on the idea20:15
fwereade__niemeyer, I'm starting to think that it would be better to set it to Dead and add a cleanup doc for it20:16
fwereade__niemeyer, we can do it in one transaction but don't have to get overly clever20:16
niemeyerfwereade__: What's the benefit?20:17
fwereade__niemeyer, we get (1) consistent lifecycle progress and (2) a single transaction that the unit agent uses to wash its hands of a dying relation20:18
niemeyerfwereade__: Actually, hmm20:19
niemeyerfwereade__: Well, before we derail..20:19
niemeyerfwereade__: Both don't look like very strong points.. we're exchanging simple and deterministic termination by a hand-off of responsibility20:20
niemeyerfwereade__: There's perhaps an alternative that might offer a middle ground solving some of your concerns, though20:21
fwereade__niemeyer, (the big one is "LeaveScope will be less complicated")20:22
fwereade__niemeyer, but go on please20:22
niemeyerfwereade__: Sorry, I spoke too soon, I think the idea would introduce further races down the road20:23
fwereade__niemeyer, ha, no worries20:23
fwereade__niemeyer, anyway I'm hardly married to the idea, I'll take it round the block another time and try to simplify a bit more20:24
niemeyerfwereade__: I don't see any simple alternatives..20:24
niemeyerfwereade__: Adding a cleanup document would mean persisting service and associated relations for an undetermined amount of time20:25
niemeyerfwereade__: Even in the good cases20:25
fwereade__niemeyer, really? a Dead relation has decreffed its service... I don't think there's anything blocking service removal at that point20:26
fwereade__niemeyer, that's almost the whole point of it being dead20:26
fwereade__niemeyer, if anything else is reacting in any way to a dead relation I think they're doing it wrong20:27
niemeyerfwereade__: It would be the first time we're keeping dead stuff around referencing data that does not exist20:28
niemeyerfwereade__: This feels pretty bad20:29
niemeyerfwereade__: Find(relation)... oh, sorry, your service is gone20:29
niemeyerfwereade__: Worse.. Find(relation).. oh, look, your service is live, again, but it's a different service!20:29
niemeyerfwereade__: The purpose of Dead as we always covered was to implement clean termination, not to leave old unattended data around20:30
fwereade__niemeyer, fair enough, as I said I'm happy to take it round the block again -- I had seen it as just one more piece of garbage in the same vein as all its unit settings, but mileage clearly varies20:32
fwereade__niemeyer, just to sync up on perspective: would you agree that we should, where possible, be making all related state changes in a single transaction, and only falling back to a CA when dictated by potentially large N?20:36
niemeyerfwereade__: Settings have no lifecycle support, and an explicit free pass in the case of relation unit settings because we do want to keep them up after scope-leaving for reasons we discussed20:37
fwereade__niemeyer, yep, ok, I am not actually arguing for it any more, I think I have gone into socratic mode largely for my own benefit20:38
niemeyerfwereade__: Regarding CA use, yes, it feels like a last resort we should use when that's clearly the best way forward20:39
niemeyerfwereade__: Again, it sounds sensible in the case of settings precisely because we have loose control of when to remove20:40
fwereade__niemeyer, thanks, but in fact I have a more general statement: we should be making state changes as single transactions where possible, and exceptions need very strong justifications20:40
fwereade__niemeyer, because I am suddenly fretting about interrupted deploys20:40
niemeyerfwereade__: I'm finding it a bit hard to agree with the statement open as it is because I'm not entirely sure about what I'd be agreeing with20:41
fwereade__niemeyer, it feels like maybe we should actually be adding the service, with N units and all its peer relations, in one go20:41
niemeyerfwereade__: The end goal is clear, though: our logic should continue to work reliably even when things explode in the middle20:41
fwereade__niemeyer, ok, thank you, that is a much better statement of the sentiment I am trying to express20:42
niemeyerfwereade__: In some cases, we may be forced to build up a single transaction20:42
niemeyerfwereade__: In other cases, it may be fine to do separate operations because they will independently work correctly even if there's in-between breakage20:42
niemeyerfwereade__: and then, just to put the sugar in all of that, we have to remember that our transaction mechanism is read-committed20:43
niemeyerfwereade__: We can see mid-state20:43
niemeyerfwereade__: even if we have some nice guarantees that it should complete eventually20:43
fwereade__niemeyer, I have been trying to keep that at the forefront of my mind but I bet there are some consequences I've missed somewhere ;)20:44
niemeyerfwereade__: That's great to know20:44
niemeyerfwereade__: We most likely have issues here and there, but if nothing else we've been double-checking20:45
fwereade__niemeyer, so, to again consider specifically the extended LeaveScope I'm looking at now20:46
niemeyerfwereade__: I often consider the order in which the operations are done, and the effect it has on the watcher side, for example20:46
niemeyerfwereade__: Ok20:46
fwereade__niemeyer, (huh, that is not something I had properly considered... are they ordered as the input {}Op?)20:47
niemeyerfwereade__: Yep20:47
niemeyerfwereade__: We've been getting it right, I think :)20:47
niemeyerfwereade__: E.g. add to principals after unit is in20:48
niemeyerfwereade__: I like to think it's not a coincidence :-)20:48
fwereade__niemeyer, cool, but that's one of those totally unexamined assumptions I think I've been making, but could easily casually break in pursuit of aesthetically pleasing code layout or something ;)20:48
fwereade__niemeyer, good to be reminded20:48
niemeyerfwereade__: So, LeaveScope20:49
niemeyerfwereade__: What do you think?20:49
fwereade__niemeyer, you know, I'm not sure any more, I need to write some code :/20:50
fwereade__niemeyer, thank you, though, this has helped some things to fall into place20:50
niemeyerfwereade__: Okay, since we're here with state loaded in our minds, this is my vague understanding of what we probably need:20:51
niemeyer1) If the relation is live, run a transaction doing the simplest, asserting that the relation is still alive20:52
niemeyer2) If there are > 1 units in the relation we're observing, run a transaction asserting that this is not the last unit, and just pop out the scope20:54
niemeyer3) If there is exactly 1 unit remaining, or 2 was aborted, remove relation and scope, unasserted20:55
fwereade__niemeyer, yeah, that matches my understanding20:56
niemeyerActually, sorry, (3) has to assert the scope doc exists20:56
niemeyerOtherwise we may havoc the system in some edge cases20:56
niemeyerfwereade__: ^20:57
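
A sketch of how cases (2) and (3) above might be written as mgo/txn operations (case (1), the relation still being alive, is left out). Collection names, field names and keys are invented for the illustration; this is not the real juju-core LeaveScope code.

    package sketch

    import (
        "labix.org/v2/mgo/bson"
        "labix.org/v2/mgo/txn"
    )

    // leaveScopeOps returns the operations for case (2) (other units remain
    // in scope) or case (3) (we believe we are the last unit).
    func leaveScopeOps(relationKey, scopeKey string, lastUnit bool) []txn.Op {
        if !lastUnit {
            // Case (2): assert we are not the last unit, remove our scope
            // doc and decrement the relation's unit count.
            return []txn.Op{{
                C:      "relations",
                Id:     relationKey,
                Assert: bson.D{{"unitcount", bson.D{{"$gt", 1}}}},
                Update: bson.D{{"$inc", bson.D{{"unitcount", -1}}}},
            }, {
                C:      "relationscopes",
                Id:     scopeKey,
                Remove: true,
            }}
        }
        // Case (3): remove the scope doc and the relation itself, asserting
        // only that our scope doc still exists. On txn.ErrAborted the caller
        // refreshes from the database and decides again.
        return []txn.Op{{
            C:      "relationscopes",
            Id:     scopeKey,
            Assert: txn.DocExists,
            Remove: true,
        }, {
            C:      "relations",
            Id:     relationKey,
            Remove: true,
        }}
    }
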
fwereade__niemeyer, it was the refcount checks I had been thinking of when you said unasserted20:58
fwereade__niemeyer, but then actually, hmm: by (3) a failed assertion should be reason enough to blow up, unless a refresh reveals that someone else already deleted it... right?20:59
niemeyerfwereade__: Right20:59
fwereade__niemeyer, we can't do anything sophisticated with the knowledge, it's always going to be an error: may as well assert for everything even in (3)21:00
fwereade__niemeyer, at least we fail earlier if state does somehow become corrupt21:00
niemeyerfwereade__: Hmm21:00
niemeyerfwereade__: Sounds like the opposite21:00
niemeyerfwereade__: If we assert what we care about, we can tell how to act21:00
niemeyerfwereade__: If we assert just on existence of scope doc, which is the only thing we care about, we know exactly what happened if it fails21:00
niemeyerfwereade__: Even if we don't load anything else21:01
niemeyerfwereade__: We don't care about refcounts, in theory21:01
niemeyerfwereade__: If it's 1, there's only 1.. if someone removed that one, and it wasn't us, that's okay too21:01
niemeyerfwereade__: 1 should never become 2 unless we have a significant bug21:02
niemeyerfwereade__: Makes sense?21:02
fwereade__niemeyer, so if we assert lots, fail early, and recover if it turns out that the relation was removed by someone else, I think we're fine, and in the case of such a significant bug at least we haven't made *more* nonsensical changes to the system ;)21:03
niemeyerfwereade__: My point is that we don't have to "recover if it turns out ..."21:04
fwereade__niemeyer, yeah, fair enough, I see that side too21:04
niemeyerfwereade__: Otherwise, agreed regarding assert lots21:04
niemeyerfwereade__: In fact, we're doing in-memory Life = Dead, which sounds pretty dangerous in that place21:05
niemeyerfwereade__: We need to make sure to not use an in-memory value we got from elsewhere in that > 1 logic.21:06
fwereade__niemeyer, sorry, I am suddenly at sea21:06
niemeyerfwereade__: EnsureDead does in-memory .doc.Life = Dead21:06
niemeyerfwereade__: Picking a count and life from an external relation doc and saying "Oh, if it's dying, it surely has no more than 1 units" will bite21:07
niemeyerfwereade__: Because someone else may have inc'd before it became Dying21:09
fwereade__niemeyer, I may have misunderstood: but my reading was that it would be ok to use the in-memory values to pick a transaction to start off with, but that we should refresh on ErrAborted21:10
fwereade__niemeyer, and use those values to figure out what to do next21:10
niemeyerfwereade__: It depends on how you build the logic really21:13
fwereade__niemeyer, I think I know roughly what I'm doing... time will tell :)21:13
niemeyerfwereade__: If we load a value from the database that says life=dying and units=1, you don't have to run a transaction that says >1 because you know it'll fail21:13
niemeyerfwereade__: If you have a value in memory you got from elsewhere that says the same thing, you can't trust it21:14
fwereade__niemeyer, yes, this is true, there are inferences I can draw once it's known to be dying21:14
niemeyerfwereade__: That was the only point I was making in the last few lines21:14
fwereade__niemeyer, cool21:14
niemeyerfwereade__: I'll go outside to exercise a tad while I can.. back later21:15
fwereade__niemeyer, enjoy21:15
niemeyerfwereade__: Have a good evening in case we don't catch up21:15
fwereade__niemeyer, and you :)21:15
niemeyerfwereade__: Cheers21:15
=== hazmat is now known as kapilt
=== kapilt is now known as hazmat
