/srv/irclogs.ubuntu.com/2012/07/11/#juju-dev.txt

=== bcsaller1 is now known as bcsaller
[06:41] * davecheney waves
[07:07] <wrtp> davecheney, fwereade, TheMue: mornin' all
[07:07] <fwereade> wrtp, davecheney, TheMue: heyhey
[07:07] <davecheney> wrtp: morning
[07:08] <davecheney> wrtp: i reapplied the local ec2 tests and was pleased to discover none of the tests were broken
[07:08] <davecheney> but that was just pure luck
[07:08] <davecheney> as they were blindly being changed
[07:08] <wrtp> davecheney: great!
[07:08] <davecheney> wrtp: you may not think so when you discover why I needed to add UseLocalStateInfo
[07:09] <wrtp> davecheney: why's that?
[07:09] <wrtp> (to LiveTests, presumably?)
[07:09] <davecheney> wrtp: ec2test is hard-coded to hand back machine DNS names in the form i-NNN.example.com
[07:11] <wrtp> davecheney: well ec2test is there to be changed for tests' convenience...
[07:11] <wrtp> davecheney: but maybe there's no convenient way of changing it
[07:12] <davecheney> wrtp: I tried for a while
[07:12] <davecheney> given how inception-like jujutest is
[07:12] <davecheney> there is no way to easily access the underlying ec2test
[07:12] <davecheney> or even know it is being used
[07:14] <davecheney> have a good evening
[07:14] <davecheney> i've gotta fly
[10:44] <Aram> hello.
[10:48] <TheMue> Aram: Hi
[10:49] <TheMue> Aram: Took a deeper look into mstate and really like it.
[10:49] <Aram> great :).
[10:52] * TheMue digs into environs to get a better idea of where get_machine_provider() is or will be in Go, and to better integrate a new firewall approach into the provisioning agent
[12:13] <wrtp> TheMue: ping
=== Aram2 is now known as Aram
[13:03] * Aram is off for a few hours.
[13:08] <TheMue> wrtp: pong
[13:09] <wrtp> TheMue: i wonder if we could have a chat about the firewall code
[13:09] <wrtp> TheMue: not right now though... i've just got involved in fixing another bug
[13:09] <TheMue> wrtp: for sure
[13:09] <wrtp> TheMue: 15 minutes or so
[13:10] <wrtp> ?
[13:10] <TheMue> wrtp: ok, I'm currently starting with smaller chunks
[13:10] <wrtp> TheMue: i think it's worth working out what the overall structure might look like (without actually doing it)
[13:11] <TheMue> wrtp: yeah, there have to be changes from the old approach
[13:11] <wrtp> TheMue: currently you've got several independent agents all doing their own thing, and i think that's potentially problematic
[13:11] <wrtp> TheMue: i'm wondering whether it might be better to funnel all events into a central goroutine that keeps track of the state and issues port open/close requests.
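
[A minimal sketch, in Go, of the central-goroutine idea wrtp describes above: every watcher event is funnelled into one loop that owns the port state and issues open/close requests serially. All names and types here are hypothetical, not actual juju-core code.]

    package firewaller

    // event is one funnelled change: some watcher decided a port on a
    // machine should be open or closed.
    type event struct {
        machineId string
        port      int
        open      bool
    }

    // run is the single goroutine that owns the desired-port state.
    // Watcher goroutines only send events; they never touch this map.
    func run(events <-chan event, openPort, closePort func(machineId string, port int) error) {
        open := make(map[string]map[int]bool) // machineId -> open ports
        for e := range events {
            ports := open[e.machineId]
            if ports == nil {
                ports = make(map[int]bool)
                open[e.machineId] = ports
            }
            switch {
            case e.open && !ports[e.port]:
                ports[e.port] = true
                openPort(e.machineId, e.port) // real code would retry on error
            case !e.open && ports[e.port]:
                delete(ports, e.port)
                closePort(e.machineId, e.port)
            }
        }
    }
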
[13:21] <niemeyer> Good morning all
[13:28] <TheMue> niemeyer: hello, just had half a doomsday here. lots of rain.
[13:28] <niemeyer> TheMue: Heya
[13:28] <niemeyer> TheMue: Woohay :)
[13:43] <TheMue> So, I have to step out shortly for the dentist; they just called to ask if I want to come earlier. It shouldn't take long.
[13:48] <niemeyer> TheMue: Awesome, good luck there
[13:50] <wrtp> niemeyer: small bug fix for you. should fix the charm store upload process. https://codereview.appspot.com/6344105
[13:50] <wrtp> niemeyer: good morning, BTW!
[13:51] <niemeyer> wrtp: Heya
[13:53] <niemeyer> wrtp: Neat!
[13:56] <hazmat> g'morning
[13:57] <hazmat> wrtp, cool
[13:58] <hazmat> fwiw.. i think there are two charms that applies to atm. there's a larger listing of other charms that don't appear in the charm store here.. http://jujucharms.com/tools/store-missing
[13:58] <hazmat> niemeyer, does the charm store require a maintainer?
[13:58] <niemeyer> hazmat: Not yet
[14:00] <hazmat> hmm.. ok, for several charms that's the only thing clint's lint/proof tool reports, so it's unclear what the issue is with them
[14:00] <hazmat> niemeyer, how do you like gce?
[14:01] <niemeyer> hazmat: Great stuff
[14:39] <wrtp> niemeyer: what happens currently if two charms in the same container each open the same port?
[14:39] <wrtp> hazmat: ^
[14:40] <wrtp> i suppose i should really ask what *should* happen in that case?
[14:40] <niemeyer> wrtp: They conflict
[14:40] <niemeyer> wrtp: and will always continue to conflict
[14:40] <wrtp> niemeyer: there's an error?
[14:40] <niemeyer> wrtp: A single container is a single port namespace
[14:40] <wrtp> niemeyer: open-port fails?
[14:40] <niemeyer> wrtp: Oh, no, that should work
[14:40] <hazmat> at a juju level there is currently no error; at a system level the port binding is an error
[14:41] <niemeyer> wrtp: Well.. I don't know if it "should" work, but I bet it "will" work
[14:41] <wrtp> hazmat: so a charm shouldn't open-port until it's actually bound the socket?
[14:41] <hazmat> wrtp, not necessarily.
[14:41] <wrtp> niemeyer: i quite like the idea that a given port is "owned" by a particular unit.
[14:42] <wrtp> niemeyer: then open-port by another unit would give an error
[14:42] <hazmat> wrtp, it could be reserving a port for future exposed usage
[14:42] <niemeyer> wrtp: +1
[14:42] <niemeyer> wrtp: Specifically in the case of subordinates, right?
[14:43] <wrtp> niemeyer: absolutely
[14:43] <niemeyer> wrtp: Cool, makes sense
[14:43] <wrtp> niemeyer: i've been going over the firewall semantics
[14:43] <hazmat> sounds good: detect errors structurally instead of having undetected failures at runtime.
[14:43] <wrtp> niemeyer: and that would make sense.
[14:43] <wrtp> hazmat: yeah
[14:48] <imbrandon> should ports be part of the unit's metadata then, instead of in an arbitrary hook?
[14:49] <imbrandon> so it's owned from the get-go
[14:49] <wrtp> imbrandon: that's a much-discussed question...
[14:49] <hazmat> imbrandon, that viewpoint is part of the mailing-list archive on this topic
[14:49] <imbrandon> ahh
[14:50] <wrtp> imbrandon: i wasn't actually suggesting that though
[14:50] <hazmat> to date though, nothing is actually using dynamic ports
[14:50] <wrtp> imbrandon: i intended to suggest that open-port would take ownership of a given port, if possible.
[14:50] <imbrandon> right, not much does, incoming-wise iirc
[14:51] <imbrandon> wrtp: right, but what if it can't? the charm would need logic to handle that, right?
[14:51] <imbrandon> and maybe try another port
[14:51] <wrtp> imbrandon: yup.
[14:52] <wrtp> imbrandon: if you're deploying two charms which want to use the same port, there's no way around that
[14:52] <imbrandon> right
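
[A sketch of the port-ownership idea discussed above: open-port succeeds for the owning unit and fails for any other unit in the same container. The types are hypothetical, not the actual juju command or state API.]

    package firewaller

    import "fmt"

    // portOwners maps a container's ports to the unit that opened them.
    type portOwners map[int]string

    // openPort records unit as the owner of port. The owning unit may
    // call it again freely; a different unit opening the same port in
    // the same container gets an error it must handle, e.g. by trying
    // another port.
    func (o portOwners) openPort(unit string, port int) error {
        if owner, ok := o[port]; ok && owner != unit {
            return fmt.Errorf("port %d is already owned by unit %q", port, owner)
        }
        o[port] = unit
        return nil
    }
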
[15:04] <wrtp> TheMue, niemeyer, fwereade_: here's a pseudocode sketch of a slightly different approach to the firewall management code: http://paste.ubuntu.com/1086303/
[15:04] <TheMue> *click*
[15:04] <niemeyer> wrtp: Can you talk me through it?
[15:05] <wrtp> niemeyer: ok
[15:05] <niemeyer> wrtp: Is this a worker.. what's unit/machine/etc
[15:05] <wrtp> niemeyer: so, we've got one central goroutine that has a coherent idea of the current state of the system (with regard to ports)
[15:06] <wrtp> niemeyer: this is to be started by the provisioning agent.
[15:06] <fwereade_> wrtp, that looks broadly sensible to me
[15:06] <niemeyer> wrtp: Okay, so it is a worker
[15:06] <wrtp> niemeyer: yeah.
[15:07] <TheMue> wrtp: we have two kinds of service changes: adding/removing and exposed flag.
[15:07] <wrtp> niemeyer: and it *probably* will work ok when run concurrently with itself, assuming a sensible implementation of Open and ClosePort in the provider
[15:07] <niemeyer> wrtp: machine/unit/etc are local structs, I assume, rather than representing changes to state.Unit/etc
[15:07] <fwereade_> wrtp, I presume portManager is something separate, with state, that worries about EC2 errors and suchlike and keeps retrying on errors?
[15:07] <wrtp> niemeyer: yes
[15:07] <niemeyer> wrtp: cool
[15:07] <niemeyer> wrtp: Re-reading with that info
[15:07] <wrtp> niemeyer: portManager was my name for the main loop
[15:08] <wrtp> niemeyer: but it would be restarted on errors, yes
[15:08] <fwereade_> wrtp, it was also the thing that had OpenPort and ClosePort called on it
[15:08] <wrtp> fwereade_: oh, sorry, i've got two portManagers!
[15:08] <fwereade_> wrtp, if that's an env I'm a little uncertain
[15:09] <wrtp> fwereade_: no, portManager is intended to be an environs.Instance
[15:09] <fwereade_> wrtp, ah-ha, ok, sorry
[15:09] <wrtp> there is actually a problem
[15:10] <fwereade_> wrtp, but still... any errors there will surely mean that we have to keep retrying, there, until we succeed... right?
[15:10] <wrtp> fwereade_: i guess so.
[15:10] <TheMue> wrtp: sounds good so far; the only thing missing is the differentiation between adding/removing and exposing of services
[15:11] <niemeyer> wrtp: The data coming from the change on line 38 looks curious
[15:11] <fwereade_> wrtp, that feels a little icky to me but not enough to sink the concept :)
[15:11] <wrtp> niemeyer: yes, i glossed over that bit
[15:12] <wrtp> niemeyer: since we're waiting for many watchers at once, we have a goroutine for each watcher that adds context to the change passed on the channel, then sends to a single channel.
[15:12] <wrtp> niemeyer: so where the pseudocode says "add port watcher...", it implies setting up a goroutine to do that too
[15:13] <wrtp> niemeyer: but those goroutines don't mess with the state at all
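
[A sketch of the fan-in wrtp describes: one forwarding goroutine per watcher tags each change with its context and sends it on the single channel the central loop reads, so only that loop ever touches the state. Hypothetical types.]

    package firewaller

    // unitPortsChange is a raw ports change tagged with the unit it
    // came from, giving the central loop the context it needs.
    type unitPortsChange struct {
        unitName string
        ports    []int
    }

    // forwardUnitPorts wraps each change from one unit's watcher and
    // forwards it to the shared channel; it holds no state of its own.
    func forwardUnitPorts(unitName string, in <-chan []int, out chan<- unitPortsChange) {
        for ports := range in {
            out <- unitPortsChange{unitName: unitName, ports: ports}
        }
    }
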
[15:14] <wrtp> the main problem i can see currently is that there needs to be another phase at the start
[15:15] <wrtp> where we need to interrogate the currently open ports and close them if they need to be.
[15:16] <wrtp> fwereade_: it's possible that we might want another layer, being a proxy for a machine, that deals with retrying port changes for that machine.
[15:17] <fwereade_> wrtp, yeah, something like that
[15:20] <TheMue> wrtp: right now the real state is retrieved from the provider and compared to the state information
[15:20] <wrtp> TheMue: if OpenPort and ClosePort are idempotent, i'm not sure that's necessary.
[15:20] <wrtp> s/idem/each idem/
[15:21] <TheMue> wrtp: would be the better solution, indeed
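
[A sketch of the idempotency assumption above: if the provider's OpenPort treats an already-open port as a no-op, the firewaller need not first fetch the provider's real state and diff it. The environ type is hypothetical.]

    package provider

    // environ stands in for a hypothetical provider environment.
    type environ struct {
        open map[string]map[int]bool // machineId -> open ports
    }

    // OpenPort is idempotent: opening an already-open port succeeds
    // silently rather than failing, so blind retries are always safe.
    func (e *environ) OpenPort(machineId string, port int) error {
        ports := e.open[machineId]
        if ports == nil {
            ports = make(map[int]bool)
            e.open[machineId] = ports
        }
        if ports[port] {
            return nil // already open: nothing to do
        }
        ports[port] = true // real code would call the cloud API here
        return nil
    }
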
[15:22] <wrtp> it's entirely possible that this scheme is crackful though. i just thought i'd give it as a talking point.
[15:23] <wrtp> one thing that's not currently taken into account is that the instance for a machine can change
[15:23] <TheMue> wrtp: today the fw is notified when services are added. those get an exposed watcher. if exposed, a unit watcher is set up. and those are watching the units' ports. *sigh* deeply nested.
[15:23] <niemeyer> wrtp: Looks very sensible
[15:24] <wrtp> TheMue: i didn't see any point in watching services that have no machines, so i add the service watcher only when necessary
[15:24] <wrtp> niemeyer: thanks
[15:24] <TheMue> wrtp: sounds reasonable
[15:31] <niemeyer> wrtp: Have you seen this: https://codereview.appspot.com/6333067/
[15:31] <wrtp> niemeyer: no. will look.
[15:32] <niemeyer> wrtp: Cool, it's good to sync up with Dave on that, since they both seem to be overlapping
[15:32] <wrtp> niemeyer: looks pretty compatible to me
[15:33] <wrtp> niemeyer: i *think* the environ watching would go inside the same loop
[15:34] <niemeyer> wrtp: It is compatible so far for sure. I'm just saying that they're both supposed to implement the same functionality, so synchronizing is important
[15:34] <niemeyer> wrtp: Or we'll end up with two people working on the same thing
[15:35] <wrtp> niemeyer: definitely. i wasn't actually proposing to write this code - TheMue is there already.
[15:35] <niemeyer> wrtp: Perfect, thanks
[15:35] <wrtp> niemeyer: this was borne out of my looking at TheMue's initial stab, which was invaluable for me to see what actually needed to be done.
[15:36] <niemeyer> wrtp: Super
[15:36] <niemeyer> wrtp: Thanks for diving into this. Very useful.
[15:36] <TheMue> niemeyer: If the firewall is only used by provisioning, is it worth creating its own service?
[15:36] <wrtp> TheMue: hopefully this will be useful input to your next steps, and perhaps we have a better idea of what we might be aiming for
[15:36] <TheMue> wrtp: Yes, thx.
[15:37] <wrtp> TheMue: i think it should be a file within the provisioning agent
[15:37] <TheMue> niemeyer: There are two connection points in the provisioner.
[15:37] <wrtp> s/a file/implemented in a file/
[15:38] <TheMue> wrtp: The PA starts the provisioner. And there is a loop where today the machines are watched. In the Py code, services are watched here as well.
[15:39] <TheMue> wrtp: So I would see it as a non-exported type for the provisioner (same package, own file).
[15:39] <wrtp> TheMue: yup
[15:39] <wrtp> TheMue: that's what i was trying to suggest
[15:39] <TheMue> wrtp: h5
[15:39] <wrtp> TheMue: h5
[15:41] <niemeyer> TheMue: It is worth creating a *worker*, yes
[15:41] <niemeyer> wrtp: I'd prefer to have this as an independent worker
[15:41] <niemeyer> wrtp: Its functionality is completely unrelated to the rest of the provisioner
[15:41] <wrtp> niemeyer: a separate executable?
[15:42] <niemeyer> wrtp: No
[15:42] <niemeyer> A different worker, not a different agent
[15:42] <wrtp> niemeyer: a separate goroutine with the PA?
[15:42] <wrtp> niemeyer: (that's what i had envisaged)
[15:42] <wrtp> s/with the/within the/
[15:42] <niemeyer> wrtp: Yes, and a different package under juju-core/worker/firewaller
[15:43] <niemeyer> wrtp: I only disagreed with "a file within the provisioning agent"
[15:43] <wrtp> niemeyer: ah, i hadn't seen juju-core/worker
[15:43] <wrtp> niemeyer: presumably a CL waiting to land
[15:43] <niemeyer> wrtp: It's currently named juju-core/service, but that's wrong and we should rename ASAP
[15:43] <niemeyer> wrtp: No, we've agreed that was the best nomenclature, and Dave had stuff in progress that he wanted to push forward without distractions. Sounded sensible
[15:44] <wrtp> niemeyer: yes, that all sounds very sensible
[15:44] <wrtp> niemeyer: now i understand what you mean by "worker" :-)
[15:44] * TheMue too
[15:48] <TheMue> niemeyer: Today the notification about added/removed services or machines is done by the PA (in Py). The corresponding code fragments in Go are in the provisioner worker.
[15:49] <TheMue> niemeyer: So should the provisioner call those two exported methods in future too, or should the firewaller set up its own watchers and work standalone?
[15:49] <wrtp> here's a version with logic for dealing with instance ids coming and going: http://paste.ubuntu.com/1086373/
[15:50] <wrtp> TheMue: i think they'd each set up their own watchers
[15:50] <wrtp> TheMue: it's a little less efficient, but nicer structurally
[15:50] <TheMue> wrtp: sounds clearer, yes. more maintainable
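
[A guess at the shape of the "instance ids coming and going" handling wrtp mentions: a machine may temporarily have no instance, so wanted ports are kept and replayed when a new instance appears. The paste's actual contents are unknown; these types are hypothetical.]

    package firewaller

    // machinePorts tracks one machine whose instance may come and go.
    type machinePorts struct {
        instanceId string       // "" while the machine has no instance
        wanted     map[int]bool // ports that should be open
    }

    // setInstance records the machine's new (possibly empty) instance
    // id and replays the wanted ports against a fresh instance.
    func (m *machinePorts) setInstance(id string, openPort func(instanceId string, port int) error) error {
        m.instanceId = id
        if id == "" {
            return nil // instance gone; ports stay wanted for its successor
        }
        for port := range m.wanted {
            if err := openPort(id, port); err != nil {
                return err
            }
        }
        return nil
    }
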
[15:51] <niemeyer> TheMue: Sorry, I don't get the question
[15:51] <niemeyer> TheMue: As a hint, though, the Python structure of provisioner+firewaller is not an example to be followerd
[15:51] <niemeyer> followed
[15:52] <TheMue> niemeyer: Yes, that's what I've done with the first approach.
[15:53] <TheMue> niemeyer: Doing so, the FW only needs two public methods for the PA.
[15:53] <TheMue> niemeyer: But it leads to this nested structure.
[15:53] <TheMue> niemeyer: So as its own worker with its own watchers it's definitely better.
[15:54] <niemeyer> TheMue: Yeah, that's the direction we should follow
[15:54] <niemeyer> TheMue: and *small branches*, with *tests*, please!
[15:57] <TheMue> niemeyer: Yeah, I still have problems proposing something while it does nothing yet. And the nested structure of today's FW quickly led to this amount of code, sorry.
[15:57] <TheMue> niemeyer: So from now on I will propose code with stubs.
[15:59] <niemeyer> TheMue: Thanks a lot
[15:59] <niemeyer> TheMue: As a hint, try to write tests with the code, rather than getting it ready and then testing
[16:00] <niemeyer> That's lunch time.. biab
[17:19] <wrtp> niemeyer: do you think we should do the same thing with security groups as the python version (i.e. one security group per instance)?
[17:19] <wrtp> niemeyer: i've been wondering about ways to do better (e.g. one security group per combination of ports)
[17:20] <wrtp> niemeyer: the latter uses as many security groups as machines in the worst case, but the usual case would be many fewer, i think.
[17:22] <niemeyer> wrtp: There's no way to do it differently
[17:22] <niemeyer> wrtp: Security groups can only be defined at instance creation time
[17:23] <wrtp> niemeyer: oh darn it! i'd forgotten that
[17:31] <niemeyer> wrtp: I wish it worked as well
[17:34] <wrtp> niemeyer: originally i had environs.Instance.OpenPort and ClosePort, but i'm thinking that doesn't map very well to how it works. perhaps environs.Environ.OpenPort(machineId int) would be better.
[17:34] <wrtp> niemeyer: thoughts?
[17:35] <wrtp> anyway, gotta go. see y'all tomorrow!
[17:56] <niemeyer> wrtp: In case you read this, yes, that's how it currently works in Python too
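
[The shape of the Environ-level API wrtp proposes above, written out as a Go interface. This is a hypothetical sketch, not the actual juju-core environs package.]

    package environs

    // Firewall opens and closes ports by machine id rather than via an
    // Instance value, since the instance behind a machine can change.
    type Firewall interface {
        OpenPort(machineId int, port int) error
        ClosePort(machineId int, port int) error
    }
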
[20:45] <niemeyer> flaviamissi: Hey!
[20:46] <flaviamissi> niemeyer: hi!
[20:46] <niemeyer> flaviamissi: Just now I was looking at the auth problem
[20:46] <niemeyer> flaviamissi: I think I can reproduce it
[20:47] <niemeyer> flaviamissi: Should be fixed in a moment hopefully
[20:47] <flaviamissi> niemeyer: good! I debugged it, but I couldn't get anywhere...
[20:47] <flaviamissi> niemeyer: do you have something in mind that could be causing this problem?
[20:50] <niemeyer> flaviamissi: Not yet, but should have something in a bit :)
[20:51] <niemeyer> flaviamissi: It's not hard to find this kind of inconsistency between EC2 and similar services
niemeyerunfortunately20:51
niemeyerflaviamissi: Even between *regions* of EC2, sometimes there are inconsistencies20:51
flaviamissiniemeyer: hmmm, first time i've seen something like that, good to know though20:52
[20:53] <flaviamissi> niemeyer: I'm really curious about what is causing that problem; if you can let me know when you find something, I would really appreciate it :)
[20:57] <niemeyer> flaviamissi: Oh yeah, I'll certainly let you know
[20:58] <flaviamissi> niemeyer: thanks :)
[21:04] <niemeyer> flaviamissi: It's the path
[21:04] <niemeyer> flaviamissi: The endpoint path, more specifically
[21:05] <niemeyer> I'll have a fix in a moment
[21:08] <niemeyer> Works!
[21:08] <flaviamissi> niemeyer: oh man!
[21:08] <niemeyer> :)
[21:08] <flaviamissi> niemeyer: We didn't think about it...
[21:09] <flaviamissi> niemeyer: really great. Thanks a log
[21:09] <flaviamissi> lot*
[21:09] <niemeyer> flaviamissi: My pleasure
[21:35] <flaviamissi> niemeyer: is the change in trunk yet?
[21:40] <flaviamissi> niemeyer: well, i'll leave now; when you merge it into trunk i'll try it :)
[21:47] <niemeyer> Come on Launchpad.. why don't you like me
[23:08] <niemeyer> Dinner time
