nick | message | time
---|---|---
fwereade | heya TheMue | 09:07 |
TheMue | fwereade: Hi | 09:24 |
TheMue | fwereade: Phew, it's raining cats and dogs here. | 09:24 |
fwereade | TheMue, it's pretty hot here | 09:25 |
fwereade | TheMue, we're just approaching the too-damn-hot point of the year | 09:25 |
fwereade | TheMue, give me a few days and I'll be begging for a decent rainstorm :) | 09:25 |
* TheMue dcc's fwereade some rain. | 09:26 | |
fwereade | TheMue, :) | 09:26 |
fwereade | TheMue, so how's it going? I hope the format 2 stuff isn't too much of a hassle -- I feel like I maybe should have done it myself, but I got caught up in the relations and I felt it was getting pretty important | 09:27 |
TheMue | fwereade: I'll do firewall first, then format 2. | 09:29 |
fwereade | TheMue, ah, excellent | 09:29 |
fwereade | TheMue, are we going with the security groups style or are we doing it properly this time? | 09:30 |
TheMue | fwereade: I'm talking about today's firewall.py in state. Security and auth will be handled later. | 09:31 |
fwereade | TheMue, ah cool | 09:31 |
TheMue | fwereade: firewall is used by the PA. | 09:31 |
TheMue | fwereade: So I'm moving it out of state to cmd. | 09:32 |
* fwereade suddenly gets suspicious | 09:32 | |
* fwereade goes to read code a mo | 09:32 | |
fwereade | TheMue, doesn't implementing that presuppose the security groups approach? | 09:33 |
TheMue | fwereade: As far as I've seen so far, no. | 09:33 |
TheMue | fwereade: But I've just started. | 09:34 |
fwereade | TheMue, it seems to me that if the PA is going to use it, then we're assuming that the PA will remain responsible for opening/closing ports | 09:34 |
fwereade | TheMue, a proper solution using firewalls on the units surely shouldn't involve the PA at all? | 09:35 |
TheMue | fwereade: Sorry, don't know. | 09:36 |
fwereade | TheMue, blast, wish niemeyer was on | 09:36 |
TheMue | fwereade: So what would your solution look like? | 09:37 |
fwereade | TheMue, unit agent messing with iptables, rather than PA messing with the provider | 09:38 |
fwereade | TheMue, we've certainly talked about our use of security groups being a serious problem, and about the need for a cross-provider firewall solution | 09:38 |
TheMue | fwereade: Pls go on ... | 09:39 |
fwereade | TheMue, but it would not necessarily be *irrational* for us to go with the tried, tested, known-working-at-small-scale solution (given the time constraints that are starting to wear at me slightly) | 09:40 |
fwereade | TheMue, the problems with security groups are (1) aws is really not designed to handle what we're doing with them and (2) the solution only works for aws | 09:41 |
fwereade | TheMue, (2) is not important wrt our critical short-term goals | 09:41 |
fwereade | TheMue, but disregarding (1) feels like the sort of decision that we should get some sort of consensus on before writing code that presupposes that approach | 09:42 |
TheMue | fwereade: Those are valid worries, OK, but what would a proper solution look like? | 09:43 |
fwereade | TheMue, I'm afraid I don't have a clear idea of the *precise* problem with our use of security groups... just that we're not meant to use any, and an apocryphal amazon engineer was said to look somewhat horrified by the prospect :) | 09:44 |
fwereade | TheMue, I think it comes down to the *unit* agents watching the ports that should be open in their container and taking charge of it themselves | 09:44 |
fwereade | TheMue, we'd still need *some* security groups, but probably just 2: one for PA machines and one for everything else | 09:45 |
fwereade | TheMue, make sense? | 09:45 |
TheMue | fwereade: Yep, so far understandable. | 09:46 |
=== wrtp is now known as rogpeppe | ||
TheMue | rogpeppe: Hey, you are not here. ;) | 09:46 |
rogpeppe | TheMue: that's right. i'm an invisible ghost. | 09:46 |
rogpeppe | TheMue: i've been given special dispensation :-) | 09:47 |
TheMue | rogpeppe: Ah, ok, then it's ok. | 09:47 |
fwereade | TheMue, that's pretty much it... | 09:47 |
fwereade | rogpeppe, heyhey | 09:47 |
rogpeppe | fwereade: yo! | 09:47 |
fwereade | rogpeppe, are you aware of any official preference as to how we implement firewalling this time round? | 09:47 |
rogpeppe | i seem to remember we've got a meeting scheduled in 13 minutes, so i thought i'd try and turn up for it... | 09:47 |
rogpeppe | (maybe i've got it wrong though!) | 09:47 |
fwereade | rogpeppe, btw, finished To Hold Infinity, very enjoyable | 09:48 |
rogpeppe | fwereade: cool, glad you enjoyed it. am enjoying wwz, in a slightly grim kinda way | 09:48 |
rogpeppe | hmm, firewalling | 09:48 |
rogpeppe | until we containerise everything, i think the current approach is probably the only one | 09:49 |
fwereade | rogpeppe, enjoying Axiomatic too, fun to have a more thinky, less experiencey read once in a while | 09:49 |
fwereade | rogpeppe, ah, expand please? I don't see the issue | 09:49 |
fwereade | rogpeppe, after all everything is containerised already... in a sense... which feels like the appropriate sense for this context | 09:50 |
rogpeppe | fwereade: how do we firewall without making use of ec2's facilities? | 09:50 |
fwereade | rogpeppe, iptables? | 09:50 |
rogpeppe | fwereade: can't anything get around that? | 09:51 |
fwereade | rogpeppe, I have always presumed that it works as advertised, but I can't point to anything proving that | 09:51 |
fwereade | rogpeppe, and I'm not saying we don't use security groups at all -- we have to -- but we know that using one per machine is a problem | 09:52 |
fwereade | rogpeppe, I just don't know whether it's the sort of problem we want to fix now, or the sort of problem we leave for 13.04 | 09:52 |
rogpeppe | fwereade: am i right about the meeting, BTW? | 09:53 |
fwereade | rogpeppe, er, I have no idea... I had a vague feeling it was weds, but maybe I missed another change | 09:53 |
fwereade | rogpeppe, but davecheney is on, and that may lend support to your theory ;p | 09:54 |
rogpeppe | dammit, it's an hour later | 09:55 |
rogpeppe | bugger, my dispensation is invalid | 09:55 |
rogpeppe | fwereade: iptables are manipulatable by root, and the charms run as root. | 09:55 |
rogpeppe | fwereade: we need to talk to niemeyer about this | 09:55 |
rogpeppe | fwereade, TheMue: well, gotta go. will miss the meeting, i think. have fun, and post any interesting/relevant conversations to juju-dev, where i will see 'em and sneakily read 'em... | 09:56 |
fwereade | rogpeppe, yeah, indeed -- I'm not even sure I have a strong position on this, I just feel it's something we should get niemeyer's input on before we implement code that supposes either way | 09:57 |
fwereade | rogpeppe, enjoy the holiday :) | 09:57 |
TheMue | rogpeppe: OK, have fun. | 09:57 |
fwereade | TheMue, I think that either way you can certainly implement something that keeps an eye on both sets of conditions, and emits events when ports should actually open or close | 09:59 |
rogpeppe | fwereade: we could cache groups, because we're unlikely to have too many configurations of ports. | 09:59 |
rogpeppe | fwereade: which might mitigate the issue | 09:59 |
TheMue | fwereade: That's what firewall does today. | 09:59 |
rogpeppe | TheMue: ah, it must've changed since i last looked | 10:00 |
rogpeppe | TheMue: i thought there was one group for each machine | 10:00 |
rogpeppe | anyway, gotta go | 10:00 |
TheMue | rogpeppe: The firewall.py doesn't do very much. It's only used by the PA. | 10:00 |
fwereade | TheMue, where does it do that? | 10:01 |
fwereade | TheMue, I don't see anything that shares groups in there | 10:01 |
TheMue | fwereade: I didn't say anything about groups. I meant watching the ports. | 10:02 |
fwereade | TheMue, if anything does that, it's in the individual provider's open_port/close_port methods | 10:02 |
fwereade | TheMue, ah got you | 10:02 |
fwereade | TheMue, all I'd suggest then is to make sure that the thing that watches an individual machine remains distinct from the thing that watches all machines | 10:03 |
fwereade | TheMue, do I appear to be approximately sane there? | 10:05 |
TheMue | fwereade: I'll keep it in mind. I'm not yet deep enough into it. I've just started the porting and, as a prerequisite, the watcher for the exposed flag. | 10:05 |
fwereade | TheMue, cool | 10:06 |
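The design fwereade suggests above, a per-machine watcher kept distinct from the all-machines watcher, which emits an event only when the set of ports that should be open actually changes, can be sketched in Go. Everything below (the `PortsEvent` type, `watchPorts`, the channel shapes) is a hypothetical illustration, not juju-core's actual API.

```go
package main

import "fmt"

// PortsEvent is a hypothetical change event: the set of ports that
// should be open on one machine.
type PortsEvent struct {
	MachineID int
	Open      []int
}

// watchPorts is a sketch of the per-machine watcher: it reads desired
// port sets from updates and forwards an event only when the set has
// actually changed, until updates is closed.
func watchPorts(machineID int, updates <-chan []int, out chan<- PortsEvent) {
	var last []int
	seen := false
	for ports := range updates {
		if !seen || !equal(last, ports) {
			seen = true
			last = ports
			out <- PortsEvent{machineID, ports}
		}
	}
	close(out)
}

// equal reports whether two port slices hold the same ports in order.
func equal(a, b []int) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	updates := make(chan []int, 3)
	updates <- []int{80}
	updates <- []int{80} // duplicate: no event emitted
	updates <- []int{80, 443}
	close(updates)

	out := make(chan PortsEvent, 3)
	watchPorts(1, updates, out)
	for ev := range out {
		fmt.Println(ev.MachineID, ev.Open)
	}
}
```

A separate all-machines watcher could then multiplex several such per-machine channels, keeping the two responsibilities distinct as suggested.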
* fwereade starts to wonder whether he's right about it being up to the UA... maybe the MA would be better... | 10:06 | |
TheMue | fwereade: You've got more insight than me. I sometimes miss an architecture diagram where the components, their responsibilities and roles, and how they communicate are visible. | 10:07 |
* TheMue is a very visual being. | 10:07 | |
fwereade | TheMue, I think the issue there is that the responsibilities in python are not necessarily as they should be | 10:07 |
fwereade | TheMue, eg the MA being responsible for the first download of the charm, and the UA being responsible for subsequent ones | 10:08 |
TheMue | fwereade: OK, then two diagrams: today's implementation and the wanted implementation | 10:08 |
fwereade | TheMue, the first one is of limited value and the second one is subject to change as we figure out *how* we should be doing things... | 10:09 |
fwereade | *should* | 10:09 |
fwereade | TheMue, hopefully without succumbing to second-system effect | 10:09 |
TheMue | fwereade: That's a problem of working remotely. I've used whiteboards a lot for discussing how something is and how it should change. | 10:11 |
TheMue | fwereade: My intention is no first class diagram | 10:11 |
Aram | moin. | 10:34 |
fwereade | Aram, heyhey | 10:54 |
TheMue | Aram: Moin | 11:03 |
Aram | fwereade: TheMue: had a little bit of fun yestarday: http://play.golang.org/p/D-qPq8uIw3 | 11:05 |
fwereade | Aram, haha, nice | 11:14 |
TheMue | Aram: *lol* | 11:18 |
TheMue | Hmm, seems it's time for a topology watcher. | 13:08 |
TheMue | fwereade: Any experiences with the size of topologies in large installations? | 14:05 |
fwereade | TheMue, all I know is that yaml was too big for the 2k deployment, json makes it small enough for that with room to spare | 14:06 |
TheMue | fwereade: I'm asking because topology watchers keep an old one in memory and pass it together with the new one to the consuming callbacks/watcher users. | 14:06 |
fwereade | TheMue, IIRC max ZK node size is 1MB, so order of that, I guess | 14:07 |
TheMue | fwereade: I would store it already parsed, so there should be no whitespace problem. | 14:07 |
fwereade | TheMue, it shouldn't be an overwhelming load though | 14:08 |
TheMue | fwereade: ok | 14:08 |
fwereade | TheMue, however you may want to look at recent topology watchers in go, which don't keep a whole topology around | 14:08 |
fwereade | TheMue, they just keep the bits they're interested in | 14:08 |
TheMue | fwereade: Which ones are you talking about? Most I've seen so far watch simple nodes. | 14:09 |
fwereade | TheMue, MachinesWatcher and MachineUnitsWatcher | 14:09 |
TheMue | fwereade: Also, a change event always forces me to read at least one complete node. | 14:10 |
fwereade | TheMue, also ServiceRelationsWatcher, new in review today | 14:10 |
fwereade | TheMue, yeah, you always read the whole new topology | 14:10 |
fwereade | TheMue, no reason to keep unit info around when all you care about is relations for one service | 14:10 |
TheMue | fwereade: Thx, will take a look. I need it for the ServiceUnitsWatcher. | 14:10 |
fwereade | TheMue, cool | 14:11 |
fwereade | TheMue, a suggestion, don't know if it applies: | 14:11 |
* TheMue listens | 14:12 | |
fwereade | TheMue, when doing the ServiceRelationsWatcher, it was very convenient to add (*Service)relationsFromTopology(t *topology) and use it both in Relations and the watcher | 14:12 |
fwereade | TheMue, haven't looked at MW or MUW to see whether they'd benefit from similar | 14:13 |
TheMue | fwereade: OK, will look, it sounds good. | 14:13 |
fwereade | TheMue, it may be that the code to extract the stuff we care about is small enough not to bother in those cases and maybe in yours | 14:14 |
TheMue | fwereade: Huh, the last sentence is difficult for me to understand. | 14:15 |
fwereade | TheMue, sorry | 14:15 |
fwereade | TheMue, I'm saying that getting a []*Relation from a service and a topology is enough work to make it worth factoring out | 14:16 |
fwereade | TheMue, but getting a []*Unit from a service and a topology may be trivial enough that it's better to duplicate the code | 14:16 |
fwereade | TheMue, similar may apply to MW and MUW | 14:17 |
TheMue | fwereade: OK, understand, I will see how much it is. | 14:17 |
niemeyer | Hellos! | 15:19 |
twobottux | aujuju: Is juju specific to ubuntu OS on EC2 [closed] <http://askubuntu.com/questions/149952/is-juju-specific-to-ubuntu-os-on-ec2> | 15:27 |
TheMue | niemeyer: Hello to the far west. | 15:31 |
niemeyer | TheMue: Hi :) | 15:35 |
niemeyer | TheMue: How's been the weekend? | 15:35 |
TheMue | niemeyer: Fine, a bit of support for my brother-in-law, who is building a house, and sitting on the couch on Sunday while it rained cats and dogs. | 15:37 |
TheMue | niemeyer: And your travel to SFO? | 15:37 |
niemeyer | TheMue: Hah :) | 15:37 |
niemeyer | TheMue: The trip was quite fine | 15:38 |
niemeyer | Hmm.. so it seems that Go's behavior on redirections has changed somehow.. lpad seems broken :( | 16:17 |
* niemeyer investigates | 16:17 | |
fwereade | niemeyer, heyhey | 16:27 |
fwereade | niemeyer, TheMue: please confirm that it is not safe to select on a send to a channel that might be closed | 16:28 |
niemeyer | fwereade: It is actually safe | 16:28 |
fwereade | niemeyer, really? oh, cool | 16:29 |
niemeyer | fwereade: It depends a bit on what you mean by that, though | 16:29 |
niemeyer | fwereade: Oh, wait.. *send*.. hmm | 16:29 |
fwereade | niemeyer, select {dodgy <- event: blah; <-t.Dying()} | 16:29 |
fwereade | niemeyer, select {dodgy <- event: blah; <-t.Dying():} | 16:29 |
niemeyer | fwereade: No, that's not ok, sorry for the misinfo | 16:30 |
fwereade | niemeyer, no worries :) | 16:30 |
niemeyer | fwereade: It's considered a bad practice (hence why it blows up) because it's a clear statement that the life time of the channel is messed up. | 16:32 |
fwereade | niemeyer, that was what I thought | 16:32 |
Aram | hi niemeyer, how's SF? | 16:32 |
fwereade | niemeyer, and I'm pretty sure I'm in a situation where I can just leave the channel alone without ever closing it anyway :) | 16:32 |
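niemeyer's answer can be demonstrated directly: in Go, a send case in a `select` on a closed channel is considered ready, gets chosen, and panics, while receiving from a closed channel is always safe. A minimal self-contained demo (the `trySend` helper and channel names are invented for illustration):

```go
package main

import "fmt"

// trySend runs a select containing a send case on events and reports
// the panic message, if any. With a closed channel the send case is
// considered ready, is chosen, and panics.
func trySend(events chan int) (panicked string) {
	defer func() {
		if r := recover(); r != nil {
			panicked = fmt.Sprint(r)
		}
	}()
	done := make(chan struct{})
	select {
	case events <- 1:
		// unreachable when events is closed: the send panics
	case <-done:
	}
	return
}

func main() {
	events := make(chan int)
	close(events)
	fmt.Println(trySend(events)) // send on closed channel

	// Receiving from a closed channel, by contrast, is always safe:
	v, ok := <-events
	fmt.Println(v, ok) // 0 false
}
```

This is why fwereade's conclusion (never close the channel, and let it be garbage-collected) is a reasonable way out: the panic signals that the channel's lifetime is owned by the wrong side.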
niemeyer | Aram: Pretty nice, sunny.. had a good time with Andrew yesterday as well | 16:33 |
Aram | niemeyer: nice. | 16:33 |
niemeyer | fwereade: That's a possible answer | 16:33 |
Aram | niemeyer: did you see my silly paste entry? http://play.golang.org/p/D-qPq8uIw3 | 16:33 |
niemeyer | Aram: Yeah, that was awesome :) | 16:34 |
niemeyer | robbiew: ping | 16:35 |
Aram | niemeyer: I could have made it an actual animated PNG, but animated PNGs don't work in webkit browsers yet. | 16:35 |
robbiew | niemeyer: pong | 16:35 |
niemeyer | robbiew: Heya | 16:35 |
niemeyer | robbiew: Do we have a meeting today? | 16:35 |
robbiew | niemeyer: heh...as usual, I have no idea...checking | 16:36 |
niemeyer | Aram: Surprisingly short | 16:36 |
niemeyer | robbiew: Cool.. I better find out a good way to call out of the hotel if so | 16:36 |
robbiew | niemeyer: no meeting | 16:36 |
niemeyer | robbiew: Super, thanks for checking | 16:37 |
fwereade | gn all | 17:28 |
fwereade | niemeyer, btw, I have to go again in a sec, but I meant to ask: | 18:08 |
fwereade | niemeyer, are we planning to replicate the security-group firewalling for 12.10? | 18:08 |
niemeyer | fwereade: Yeah | 18:09 |
niemeyer | fwereade: Should be easy, and gets us parity | 18:09 |
niemeyer | fwereade: We can then fix it another way later | 18:09 |
niemeyer | fwereade: But, | 18:09 |
niemeyer | fwereade: We should try to make the implementation sensible, so that we can reuse bits | 18:09 |
fwereade | niemeyer, yep, I approve (despite emotionally wanting to Do It Right ;)) | 18:09 |
niemeyer | fwereade: I've been talking to Frank about that | 18:09 |
niemeyer | fwereade: He's working on the firewall port watcher stuff | 18:10 |
fwereade | niemeyer, excellent, I realised I didn't know what plan we were following when he mentioned it this morning | 18:10 |
niemeyer | fwereade: That we have under state/firewall.py in Python | 18:10 |
niemeyer | fwereade: But with some twists.. the Python version assumes it knows about a provider and what not | 18:10 |
niemeyer | fwereade: The Go version will be a normal watcher | 18:10 |
fwereade | niemeyer, yeah, I presume we'll just be outputting changes | 18:10 |
fwereade | niemeyer, perfect | 18:10 |
niemeyer | fwereade: Exactly | 18:11 |
fwereade | niemeyer, I would guess two levels of watchers so we can reuse the inner one when it becomes the MA (UA???)'s responsibility? | 18:11 |
niemeyer | fwereade: Yeah, we actually already have one in the unit | 18:12 |
niemeyer | fwereade: So this is adding the second one, on Machine | 18:12 |
fwereade | niemeyer, ah, nice | 18:12 |
niemeyer | fwereade: WatchPorts | 18:12 |
niemeyer | fwereade: I think we'll use the exact same thing when we move | 18:12 |
niemeyer | fwereade: The difference is that the machine agent will call Machine.WatchPorts, rather than the provisioning | 18:12 |
fwereade | niemeyer, perfect :) | 18:13 |
robbiew | mramm: looking for me? | 19:27 |
Aram | niemeyer: something intriguing is happening... compare this: http://bazaar.launchpad.net/~gophers/juju-core/trunk/view/head:/mstate/state.go#L56 with this: https://codereview.appspot.com/6304099/diff2/9002:18002/mstate/state.go | 19:33 |
Aram | the machine function | 19:33 |
Aram | is different :) | 19:33 |
Aram | how can this be? | 19:33 |
Aram | the AllMachines function is the same though, and both have been altered in the same commit. | 19:33 |
niemeyer | Aram: Why should they be the same, just so I get the context? | 19:34 |
Aram | niemeyer: because I submitted what's on codereview, and what's in launchpad seems an earlier version. | 19:35 |
niemeyer | Aram: Ah, it's actually not | 19:36 |
niemeyer | Aram: https://codereview.appspot.com/6330045/ | 19:37 |
Aram | interesting. | 19:37 |
Aram | why the removal of that branch? | 19:38 |
niemeyer | Aram: The new error will look like "can't get machine 42: not found", which is fine | 19:38 |
niemeyer | Aram: I had to touch that logic due to the NotFound renaming | 19:38 |
niemeyer | Aram: (ErrNotFound now) | 19:38 |
Aram | yes, yes. | 19:38 |
niemeyer | Aram: But rather than replacing it, I just dropped and allowed the underlying error to go through as per the message above | 19:39 |
Aram | well yes, that was my initial version as well. | 19:39 |
niemeyer | Aram: Not really | 19:39 |
niemeyer | Aram: your initial version was the opposite.. any error would lead to "not found" | 19:39 |
Aram | right. | 19:40 |
Aram | niemeyer: anyway, thanks for clearing the confusion. | 19:41 |
niemeyer | Aram: np, and sorry for the trouble.. I wanted to ask for your review on it too, but at the same time didn't want to leave trunk broken | 19:41 |
Aram | of course | 19:41 |
Aram | niemeyer: first piece of the puzzle: https://codereview.appspot.com/6341050 | 20:01 |
niemeyer | Aram: Awesome, thanks! | 20:01 |
Aram | niemeyer: the diff on codereview is always done against lp:juju-core? can't I do it against some other branch I have? | 20:12 |
niemeyer | Aram: You can, with -req | 20:13 |
niemeyer | Aram: It only allows trees rather than graphs, but it works | 20:13 |
Aram | strange, that's what I did, lbox propose -cr -wip -req="lp:~aramh/juju-core/mstate-charm-basic" | 20:14 |
niemeyer | Aram: -req has to be used at propose time | 20:14 |
Aram | but it generated this: https://codereview.appspot.com/6325057 which is wrong because it should only be two lines | 20:14 |
niemeyer | Aram: After the merge proposal is created, it doesn't work anymore | 20:14 |
niemeyer | Aram: (because Launchpad doesn't allow changing it) | 20:14 |
Aram | can I delete a merge proposal and do it again from the same branch? | 20:14 |
niemeyer | Aram: Yeah | 20:14 |
niemeyer | Aram: That works fine | 20:14 |
Aram | ok, thanks | 20:15 |
niemeyer | np | 20:17 |
niemeyer | Okay, lpad works again.. I'll go out for finding some food, and will be back to work on reviews | 20:33 |
Aram | morning davecheney | 22:50 |
davecheney | morning Aram | 22:51 |
davecheney | hows it going ? | 22:51 |
Aram | great | 22:51 |
Aram | niemeyer: I believe three pieces of the puzzle should be in the queue now | 22:56 |
niemeyer | Aram: Super, thanks! | 22:56 |
niemeyer | davecheney: Heya | 22:56 |
davecheney | howdy lads | 22:57 |
=== Aram2 is now known as Aram |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!