=== bcsaller1 is now known as bcsaller | ||
* davecheney waves | 06:41 | |
wrtp | davecheney, fwereade, TheMue: mornin' all | 07:07 |
fwereade | wrtp, davecheney, TheMue: heyhey | 07:07 |
davecheney | wrtp: morning | 07:07 |
davecheney | wrtp: i reapplied the local ec2 tests and was pleased to discover none of the tests were broken | 07:08 |
davecheney | but that was just pure luck | 07:08 |
davecheney | as they were being changed blindly | 07:08 |
wrtp | davecheney: great! | 07:08 |
davecheney | wrtp: you may not think so when you discover why I needed to add UseLocalStateInfo | 07:08 |
wrtp | davecheney: why's that? | 07:09 |
wrtp | (to LiveTests, presumably?) | 07:09 |
davecheney | wrtp: ec2test is hard-coded to hand back DNS names for machines in the form i-NNN.example.com | 07:09 |
wrtp | davecheney: well ec2test is there to be changed for tests' convenience... | 07:11 |
wrtp | davecheney: but maybe there's no convenient way of changing it | 07:11 |
davecheney | wrtp: I tried for a while | 07:12 |
davecheney | given how inception-like jujutest is | 07:12 |
davecheney | there is no way to easily access the underlying ec2test | 07:12 |
davecheney | or even know it is being used | 07:12 |
davecheney | have a good evening | 07:14 |
davecheney | i've gotta fly | 07:14 |
Aram | hello. | 10:44 |
TheMue | Aram: Hi | 10:48 |
TheMue | Aram: Took a deeper look into mstate and really like it. | 10:49 |
Aram | great :). | 10:49 |
* TheMue digs into environs to get a better idea of where get_machine_provider() is or will be in Go and to better integrate a new firewall approach into the provisioning agent | 10:52 | |
wrtp | TheMue: ping | 12:13 |
=== Aram2 is now known as Aram | ||
* Aram is off for a few hours. | 13:03 | |
TheMue | wrtp: pong | 13:08 |
wrtp | TheMue: i wonder if we could have a chat about the firewall code | 13:09 |
wrtp | TheMue: not right now though... i've just got involved in fixing another bug | 13:09 |
TheMue | wrtp: for sure | 13:09 |
wrtp | TheMue: 15 minutes or so | 13:09 |
wrtp | ? | 13:10 |
TheMue | wrtp: ok, I'm currently starting with smaller chunks | 13:10 |
wrtp | TheMue: i think it's worth working out what the overall structure might look like (without actually doing it) | 13:10 |
TheMue | wrtp: yeah, there have to be changes from the old approach | 13:11 |
wrtp | TheMue: currently you've got several independent agents all doing their own thing, and i think that's potentially problematic | 13:11 |
wrtp | TheMue: i'm wondering whether it might be better to funnel all events into a central goroutine that keeps track of the state and issues port open/close requests. | 13:11 |
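To make the fan-in idea concrete, here is a minimal Go sketch of a single goroutine that owns the port state and is the only place open/close requests are issued from, with every event source funneling into one channel. The event type and the map layout are illustrative assumptions, not juju-core code:

```go
package main

import "fmt"

// event is a hypothetical change notification delivered by any of the
// watchers (machine, service, or unit ports) feeding the central loop.
type event struct {
	machineID int
	port      int
	open      bool
}

// firewaller is the single goroutine that owns the port state; nothing
// else mutates it, so no locking is needed.
func firewaller(events <-chan event, done <-chan struct{}) {
	open := make(map[int]map[int]bool) // machineID -> set of open ports
	for {
		select {
		case e := <-events:
			ports := open[e.machineID]
			if ports == nil {
				ports = make(map[int]bool)
				open[e.machineID] = ports
			}
			switch {
			case e.open && !ports[e.port]:
				ports[e.port] = true
				fmt.Printf("open port %d on machine %d\n", e.port, e.machineID)
			case !e.open && ports[e.port]:
				delete(ports, e.port)
				fmt.Printf("close port %d on machine %d\n", e.port, e.machineID)
			}
		case <-done:
			return
		}
	}
}

func main() {
	events := make(chan event)
	done := make(chan struct{})
	finished := make(chan struct{})
	go func() {
		firewaller(events, done)
		close(finished)
	}()
	events <- event{machineID: 0, port: 80, open: true}
	close(done)
	<-finished
}
```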
niemeyer | Good morning all | 13:21 |
TheMue | niemeyer: hello, just had half a doomsday here. lots of rain. | 13:28 |
niemeyer | TheMue: Heya | 13:28 |
niemeyer | TheMue: Woohay :) | 13:28 |
TheMue | So, I have to step out shortly for the dentist, they just called to ask if I wanna come earlier. Shouldn't take long. | 13:43 |
niemeyer | TheMue: Awesome, good luck there | 13:48 |
wrtp | niemeyer: small bug fix for you. should fix the charm store upload process. https://codereview.appspot.com/6344105 | 13:50 |
wrtp | niemeyer: good morning, BTW! | 13:50 |
niemeyer | wrtp: Heya | 13:51 |
niemeyer | wrtp: Neat! | 13:53 |
hazmat | g'morning | 13:56 |
hazmat | wrtp, cool | 13:57 |
hazmat | fwiw.. i think that applies to two charms atm; there's a larger listing of other charms that don't appear in the charm store here.. http://jujucharms.com/tools/store-missing | 13:58 |
hazmat | niemeyer, does the charm store require a maintainer? | 13:58 |
niemeyer | hazmat: Not yet | 13:58 |
hazmat | hmm.. ok, for several charms that's the only thing clint's lint/proof tool reports, so it's unclear what the issue is with them | 14:00 |
hazmat | niemeyer, how do you like gce? | 14:00 |
niemeyer | hazmat: Great stuff | 14:01 |
wrtp | niemeyer: what happens currently if two charms in the same container each open the same port? | 14:39 |
wrtp | hazmat: ^ | 14:39 |
wrtp | i suppose i should really ask what *should* happen in that case? | 14:40 |
niemeyer | wrtp: They conflict | 14:40 |
niemeyer | wrtp: and will always continue to conflict | 14:40 |
wrtp | niemeyer: there's an error? | 14:40 |
niemeyer | wrtp: A single container is a single port namespace | 14:40 |
wrtp | niemeyer: open-port fails? | 14:40 |
niemeyer | wrtp: Oh, no, that should work | 14:40 |
hazmat | at a juju level there is currently no error; at a system level the port binding is an error | 14:40 |
niemeyer | wrtp: Well.. I don't know if it "should" work, but I bet it "will" work | 14:41 |
wrtp | hazmat: so a charm shouldn't open-port until it's actually bound the socket? | 14:41 |
hazmat | wrtp, not necessarily. | 14:41 |
wrtp | niemeyer: i quite like the idea that a given port is "owned" by a particular unit. | 14:41 |
wrtp | niemeyer: then open-port by another unit would give an error | 14:42 |
hazmat | wrtp, it could be reserving the port for future exposed usage | 14:42 |
niemeyer | wrtp: +1 | 14:42 |
niemeyer | wrtp: Specifically in the case of subordinates, right? | 14:42 |
wrtp | niemeyer: absolutely | 14:43 |
niemeyer | wrtp: Cool, makes sense | 14:43 |
wrtp | niemeyer: i've been going over the firewall semantics | 14:43 |
hazmat | sounds good, detect errors structurally instead of runtime undetected failures. | 14:43 |
wrtp | niemeyer: and that would make sense. | 14:43 |
wrtp | hazmat: yeah | 14:43 |
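A minimal sketch of the ownership rule being agreed on here: the first unit to open a port owns it, and a second unit asking for the same port gets an error at open-port time instead of a silent runtime clash. machinePorts and openPort are hypothetical names, not the actual state API:

```go
package main

import "fmt"

// machinePorts tracks which unit owns each port on one machine.
// These names are illustrative, not the actual juju-core types.
type machinePorts struct {
	owner map[int]string // port -> owning unit
}

// openPort records unit as the owner of port; a second unit asking
// for the same port gets the "structural" error discussed above.
func (m *machinePorts) openPort(unit string, port int) error {
	if m.owner == nil {
		m.owner = make(map[int]string)
	}
	if cur, ok := m.owner[port]; ok && cur != unit {
		return fmt.Errorf("port %d already opened by unit %q", port, cur)
	}
	m.owner[port] = unit
	return nil
}

func main() {
	var m machinePorts
	fmt.Println(m.openPort("wordpress/0", 80)) // <nil>
	fmt.Println(m.openPort("logging/0", 80))   // error: port owned by wordpress/0
}
```

A charm hitting this error could fall back to trying another port, which is exactly the case imbrandon raises just below.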
imbrandon | should ports be part of the unit's metadata then, instead of in an arbitrary hook? | 14:48 |
imbrandon | so it's owned from the get-go | 14:49 |
wrtp | imbrandon: that's a much-discussed question... | 14:49 |
hazmat | imbrandon, that viewpoint is part of the discussion in the ml archive on this topic | 14:49 |
imbrandon | ahh | 14:49 |
wrtp | imbrandon: i wasn't actually suggesting that though | 14:50 |
hazmat | to date though, nothing is actually using dynamic ports | 14:50 |
wrtp | imbrandon: i intended to suggest that open-port would take ownership of a given port, if possible. | 14:50 |
imbrandon | right, not much does, incoming-wise iirc | 14:50 |
imbrandon | wrtp: right, but what if it can't? the charm would need logic to handle that, right? | 14:51 |
imbrandon | and maybe try another port | 14:51 |
wrtp | imbrandon: yup. | 14:51 |
wrtp | imbrandon: if you're deploying two charms which want to use the same port, there's no way around that | 14:52 |
imbrandon | right | 14:52 |
wrtp | TheMue, niemeyer, fwereade_: here's a pseudocode sketch of a slightly different approach to the firewall management code: http://paste.ubuntu.com/1086303/ | 15:04 |
TheMue | *click* | 15:04 |
niemeyer | wrtp: Can you talk me through it? | 15:04 |
wrtp | niemeyer: ok | 15:05 |
niemeyer | wrtp: Is this a worker.. what's unit/machine/etc | 15:05 |
wrtp | niemeyer: so, we've got one central goroutine that has a coherent idea of the current state of the system (with regard to ports) | 15:05 |
wrtp | niemeyer: this is to be started by the provisioning agent. | 15:06 |
fwereade_ | wrtp, that looks broadly sensible to me | 15:06 |
niemeyer | wrtp: Okay, so it is a worker | 15:06 |
wrtp | niemeyer: yeah. | 15:06 |
TheMue | wrtp: we have two kinds of service changes: adding/removing and the exposed flag. | 15:07 |
wrtp | niemeyer: and it *probably* will work ok when run concurrently with itself, assuming a sensible implementation of Open and ClosePort in the provider | 15:07 |
niemeyer | wrtp: machine/unit/etc are local structs, I assume, rather than representing changes to state.Unit/etc | 15:07 |
fwereade_ | wrtp, I presume portManager is something separate, with state, that worries about EC2 errors and suchlike and keeps retrying on errors? | 15:07 |
wrtp | niemeyer: yes | 15:07 |
niemeyer | wrtp: cool | 15:07 |
niemeyer | wrtp: Re-reading with that info | 15:07 |
wrtp | niemeyer: portManager was my name for the main loop | 15:07 |
wrtp | niemeyer: but it would be restarted on errors, yes | 15:08 |
fwereade_ | wrtp, it was also the thing that had OpenPort and ClosePort called on it | 15:08 |
wrtp | fwereade_: oh, sorry, i've got two portManagers! | 15:08 |
fwereade_ | wrtp, if that's an env I'm a little uncertain | 15:08 |
wrtp | fwereade_: no, portManager is intended to be an environs.Instance | 15:09 |
fwereade_ | wrtp, ah-ha, ok, sorry | 15:09 |
wrtp | there is actually a problem | 15:09 |
fwereade_ | wrtp, but still... any errors there will surely mean that we have to keep retrying, there, until we succeed... right? | 15:10 |
wrtp | fwereade_: i guess so. | 15:10 |
TheMue | wrtp: sounds good so far, only the missing differentiation between adding/removing and exposing of services | 15:10 |
niemeyer | wrtp: The data coming from the change on line 38 looks curious | 15:11 |
fwereade_ | wrtp, that feels a little icky to me but not enough to sink the concept :) | 15:11 |
wrtp | niemeyer: yes, i glossed over that bit | 15:11 |
wrtp | niemeyer: since we're waiting for many watchers at once, we have a goroutine for each watcher that adds context to the change passed on the channel, then sends to a single channel. | 15:12 |
wrtp | niemeyer: so where the pseudocode says "add port watcher...", it implies setting up a goroutine to do that too | 15:12 |
wrtp | niemeyer: but those goroutines don't mess with the state at all | 15:13 |
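A sketch of one such forwarding goroutine, assuming a raw watcher channel of port lists: it tags each change with its machine id and fans it into the shared channel, without ever touching the shared state itself. All types here are invented for illustration:

```go
package main

import "fmt"

// portsChange is what the central loop receives: the raw watcher
// payload plus the context (here, a machine id) attached by the
// forwarding goroutine. These types are sketches, not juju-core's.
type portsChange struct {
	machineID int
	ports     []int
}

// forwardPorts runs as one goroutine per watcher: it never touches
// shared state, it only tags each change and fans it into out.
func forwardPorts(machineID int, raw <-chan []int, out chan<- portsChange, done <-chan struct{}) {
	for {
		select {
		case ports, ok := <-raw:
			if !ok {
				return
			}
			select {
			case out <- portsChange{machineID, ports}:
			case <-done:
				return
			}
		case <-done:
			return
		}
	}
}

func main() {
	raw := make(chan []int, 1)
	out := make(chan portsChange)
	done := make(chan struct{})
	go forwardPorts(3, raw, out, done)
	raw <- []int{80, 443}
	ch := <-out
	fmt.Println(ch.machineID, ch.ports) // 3 [80 443]
	close(done)
}
```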
wrtp | the main problem i can see currently is that there needs to be another phase at the start | 15:14 |
wrtp | where we need to interrogate the currently open ports and close them if they need to be. | 15:15 |
wrtp | fwereade_: it's possible that we might want another layer, being a proxy for a machine, that deals with retrying port changes for that machine. | 15:16 |
fwereade_ | wrtp, yeah, something like that | 15:17 |
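Such a per-machine retry proxy might look like the sketch below: one function that keeps reissuing a single port change until it succeeds or the worker shuts down. The delay and the openPort callback are assumptions, not provider API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryOpen keeps reissuing one port change for one machine until it
// succeeds or the worker is torn down via done. openPort stands in
// for the provider call; all names here are illustrative guesses.
func retryOpen(openPort func(port int) error, port int, delay time.Duration, done <-chan struct{}) error {
	for {
		err := openPort(port)
		if err == nil {
			return nil
		}
		select {
		case <-time.After(delay): // crude fixed backoff between attempts
		case <-done:
			return err // give up with the last error on shutdown
		}
	}
}

func main() {
	attempts := 0
	flaky := func(port int) error { // fails twice, then succeeds
		attempts++
		if attempts < 3 {
			return errors.New("transient provider error")
		}
		return nil
	}
	fmt.Println(retryOpen(flaky, 80, 10*time.Millisecond, nil)) // <nil>
}
```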
TheMue | wrtp: right now the real state is retrieved from the provider and compared to the state information | 15:20 |
wrtp | TheMue: if OpenPort and ClosePort are each idempotent, i'm not sure that's necessary. | 15:20 |
TheMue | wrtp: would be the better solution, indeed | 15:21 |
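The point about idempotency, sketched below: if repeating OpenPort or ClosePort is a harmless no-op, the firewaller can replay its desired state after a restart without first comparing against the provider. fakeInstance is a stand-in, not the real environs.Instance implementation:

```go
package main

import "fmt"

// fakeInstance sketches idempotent OpenPort/ClosePort: repeating a
// call is a no-op rather than an error, so desired state can simply
// be replayed without querying the provider first.
type fakeInstance struct {
	open map[int]bool
}

func (i *fakeInstance) OpenPort(port int) error {
	if i.open == nil {
		i.open = make(map[int]bool)
	}
	i.open[port] = true // opening an already-open port is fine
	return nil
}

func (i *fakeInstance) ClosePort(port int) error {
	delete(i.open, port) // closing an already-closed port is fine too
	return nil
}

func main() {
	var inst fakeInstance
	fmt.Println(inst.OpenPort(80), inst.OpenPort(80))   // <nil> <nil>
	fmt.Println(inst.ClosePort(80), inst.ClosePort(80)) // <nil> <nil>
}
```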
wrtp | it's entirely possible that this scheme is crackful though. i just thought i'd give it as a talking point. | 15:22 |
wrtp | one thing that's not currently taken into account is that the instance for a machine can change | 15:23 |
TheMue | wrtp: today the fw is notified when services are added. those get an exposed watcher. if exposed, a unit watcher is set up. and those are watching the units' ports. *sigh* deeply nested. | 15:23 |
niemeyer | wrtp: Looks very sensible | 15:23 |
wrtp | TheMue: i didn't see any point in watching services that have no machines, so i add the service watcher only when necessary | 15:24 |
wrtp | niemeyer: thanks | 15:24 |
TheMue | wrtp: sounds reasonable | 15:24 |
niemeyer | wrtp: Have you seen this: https://codereview.appspot.com/6333067/ | 15:31 |
wrtp | niemeyer: no. will look. | 15:31 |
niemeyer | wrtp: Cool, it's good to sync up with Dave on that, since they both seem to be overlapping | 15:32 |
wrtp | niemeyer: looks pretty compatible to me | 15:32 |
wrtp | niemeyer: i *think* the environ watching would go inside the same loop | 15:33 |
niemeyer | wrtp: It is compatible so far for sure. I'm just saying that they're both supposed to implement the same functionality, so synchronizing is important | 15:34 |
niemeyer | wrtp: Or we'll end up with two people working on the same thing | 15:34 |
wrtp | niemeyer: definitely. i wasn't actually proposing to write this code - TheMue is there already. | 15:35 |
niemeyer | wrtp: Perfect, thanks | 15:35 |
wrtp | niemeyer: this was borne out of my looking at TheMue's initial stab, which was invaluable for me to see what actually needed to be done. | 15:35 |
niemeyer | wrtp: Super | 15:36 |
niemeyer | wrtp: Thanks for diving into this. Very useful. | 15:36 |
TheMue | niemeyer: If the firewall is only used by provisioning, is it worth creating its own service? | 15:36 |
wrtp | TheMue: hopefully this will be useful input to your next steps, and perhaps we have a better idea of what we might be aiming for | 15:36 |
TheMue | wrtp: Yes, thx. | 15:36 |
wrtp | TheMue: i think it should be implemented in a file within the provisioning agent | 15:37 |
TheMue | niemeyer: There are two connection points in the provisioner. | 15:37 |
TheMue | wrtp: The PA starts the provisioner. And there is a loop where today the machines are watched. In the Py code, services are watched here too. | 15:38 |
TheMue | wrtp: So I would see it as a non-exported type for the provisioner (same package, own file). | 15:39 |
wrtp | TheMue: yup | 15:39 |
wrtp | TheMue: that's what i was trying to suggest | 15:39 |
TheMue | wrtp: h5 | 15:39 |
wrtp | TheMue: h5 | 15:39 |
niemeyer | TheMue: It is worth creating a *worker*, yes | 15:41 |
niemeyer | wrtp: I'd prefer to have this as an independent worker | 15:41 |
niemeyer | wrtp: Its functionality is completely unrelated to the rest of the provisioner | 15:41 |
wrtp | niemeyer: a separate executable? | 15:41 |
niemeyer | wrtp: No | 15:42 |
niemeyer | A different worker, not a different agent | 15:42 |
wrtp | niemeyer: a separate goroutine within the PA? | 15:42 |
wrtp | niemeyer: (that's what i had envisaged) | 15:42 |
niemeyer | wrtp: Yes, and a different package under juju-core/worker/firewaller | 15:42 |
niemeyer | wrtp: I only disagreed with "a file within the provisioning agent" | 15:43 |
wrtp | niemeyer: ah, i hadn't seen juju-core/worker | 15:43 |
wrtp | niemeyer: presumably a CL waiting to land | 15:43 |
niemeyer | wrtp: It's currently named juju-core/service, but that's wrong and we should rename ASAP | 15:43 |
niemeyer | wrtp: No, we've agreed that was the best nomenclature, and Dave had stuff in progress that he wanted to push forward without distractions. Sounded sensible | 15:43 |
wrtp | niemeyer: yes, that all sounds very sensible | 15:44 |
wrtp | niemeyer: now i understand what you mean by "worker" :-) | 15:44 |
* TheMue too | 15:44 | |
TheMue | niemeyer: Today the notification about added/removed services or machines is done by the PA (in Py). The corresponding code fragments in Go are in the provisioner worker. | 15:48 |
TheMue | niemeyer: So should the provisioner call those two exported methods in the future too, or would it be better to set up its own watchers and work standalone? | 15:49 |
wrtp | here's a version with logic for dealing with instance ids coming and going: http://paste.ubuntu.com/1086373/ | 15:49 |
wrtp | TheMue: i think they'd each set up their own watchers | 15:50 |
wrtp | TheMue: it's a little less efficient, but nicer structurally | 15:50 |
TheMue | wrtp: sounds clearer, yes. more maintainable | 15:50 |
niemeyer | TheMue: Sorry, I don't get the question | 15:51 |
niemeyer | TheMue: As a hint, though, the Python structure of provisioner+firewaller is not an example to be followed | 15:51 |
TheMue | niemeyer: Yes, that's what I've done with the first approach. | 15:52 |
TheMue | niemeyer: Doing so, the FW only needs two public methods for the PA. | 15:53 |
TheMue | niemeyer: But it leads to this nested structure. | 15:53 |
TheMue | niemeyer: So as an own worker with own watchers it's definitely better. | 15:53 |
niemeyer | TheMue: Yeah, that's the direction we should follow | 15:54 |
niemeyer | TheMue: and *small branches*, with *tests*, please! | 15:54 |
TheMue | niemeyer: Yeah, I still have trouble proposing something when it does nothing. And the nested structure of today's FW quickly led to this amount of code, sorry. | 15:57 |
TheMue | niemeyer: So from now on I'll propose code with stubs. | 15:57 |
niemeyer | TheMue: Thanks a lot | 15:59 |
niemeyer | TheMue: As a hint, try to write tests with the code, rather than getting it ready and then testing | 15:59 |
niemeyer | That's lunch time.. biab | 16:00 |
wrtp | niemeyer: do you think we should do the same thing with security groups as the python version (i.e. one security group per instance) ? | 17:19 |
wrtp | niemeyer: i've been wondering about ways to do better (e.g. one security group per combination of ports) | 17:19 |
wrtp | niemeyer: the latter uses as many security groups as machines in the worst case, but the usual case would be many fewer, i think. | 17:20 |
niemeyer | wrtp: There's no way to do it differently | 17:22 |
niemeyer | wrtp: Security groups can only be defined at instance creation time | 17:22 |
wrtp | niemeyer: oh darn it! i'd forgotten that | 17:23 |
niemeyer | wrtp: I wish it worked as well | 17:31 |
wrtp | niemeyer: originally i had environs.Instance.OpenPort and ClosePort, but i'm thinking that doesn't map very well to how it works. perhaps environs.Environ.OpenPort(machineId int) would be better. | 17:34 |
wrtp | niemeyer: thoughts? | 17:34 |
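The two interface shapes being weighed: tying OpenPort/ClosePort to an Instance versus keying on the machine id at the Environ level, which also sidesteps the earlier problem of a machine's instance changing. The signatures below are a guess at the proposal, not the settled juju-core API:

```go
package main

// Instance-level ports: the caller must hold the right Instance,
// which becomes stale if the machine's instance is replaced.
type Instance interface {
	OpenPort(port int) error
	ClosePort(port int) error
}

// Environ-level ports keyed by machine id: the firewaller need not
// track instance identity at all; the provider resolves it.
type Environ interface {
	OpenPort(machineId, port int) error
	ClosePort(machineId, port int) error
}

func main() {}
```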
wrtp | anyway, gotta go. see y'all tomorrow! | 17:35 |
niemeyer | wrtp: In case you read this, yes, that's how it currently works in Python too | 17:56 |
niemeyer | flaviamissi: Hey! | 20:45 |
flaviamissi | niemeyer: hi! | 20:46 |
niemeyer | flaviamissi: Just now I was looking at the auth problem | 20:46 |
niemeyer | flaviamissi: I think I can reproduce it | 20:46 |
niemeyer | flaviamissi: Should be fixed in a moment hopefully | 20:47 |
flaviamissi | niemeyer: good! I debugged it, but I couldn't get anywhere... | 20:47 |
flaviamissi | niemeyer: do you have something in mind that could be causing this problem? | 20:47 |
niemeyer | flaviamissi: Not yet, but should have something in a bit :) | 20:50 |
niemeyer | flaviamissi: It's not hard to find this kind of inconsistency between EC2 and similar services | 20:51 |
niemeyer | unfortunately | 20:51 |
niemeyer | flaviamissi: Even between *regions* of EC2, sometimes there are inconsistencies | 20:51 |
flaviamissi | niemeyer: hmmm, first time i've seen something like that, good to know though | 20:52 |
flaviamissi | niemeyer: I'm really curious about what is causing that problem, if you can let me know when you find something, I would really appreciate that :) | 20:53 |
niemeyer | flaviamissi: Oh yeah, I'll certainly let you know | 20:57 |
flaviamissi | niemeyer: thanks :) | 20:58 |
niemeyer | flaviamissi: It's the path | 21:04 |
niemeyer | flaviamissi: The endpoint path, more specifically | 21:04 |
niemeyer | I'll have a fix in a moment | 21:05 |
niemeyer | Works! | 21:08 |
flaviamissi | niemeyer: d'oh! | 21:08 |
niemeyer | :) | 21:08 |
flaviamissi | niemeyer: We didn't think about it... | 21:08 |
flaviamissi | niemeyer: really great. Thanks a lot | 21:09 |
niemeyer | flaviamissi: My pleasure | 21:09 |
flaviamissi | niemeyer: is the change in trunk yet? | 21:35 |
flaviamissi | niemeyer: well, i'll leave now, when you merge it with trunk i'll try it :) | 21:40 |
niemeyer | Come on Launchpad.. why don't you like me | 21:47 |
niemeyer | Dinner time | 23:08 |