[00:04] arosales: ping [00:04] wallyworld: ping [00:06] morning [00:10] jamespage: ping, [00:10] thank you for uploading 1.11.4 into saucy [00:10] any issues ? [00:36] * thumper waves [00:39] thumper: if we branch 1.12 [00:39] is the bot going to be able to cope with this ? [00:39] NFI [00:40] survey says: fuckup [00:44] thumper: do I need to wait for william on branching 1.12 [00:44] or do you speak for him ? [00:44] or in fact does he speak for you ? [00:44] :) [00:44] it is hard to make him out, from his exhaultedly high architects chair [00:44] william is on holiday [00:44] if we need it done now [00:44] do it [00:45] I'll take any heat - although I expect only the good kind [00:45] we don't need to do it _RIGHT_NOW_ [00:45] but it feels like the right time [00:45] do it [00:45] and that way we can have a stable release in backports when you go to Isle of Man [00:45] this bit was imporant to Mramm [00:45] * thumper nods [00:45] important either [00:53] sodding, shit [00:54] the bot will have to own the series branch right ? [01:06] davecheney: why do we want a bot controlled series branch? [01:07] so the bot owns juju-core/trunk [01:08] is it ok for ~gopher/juju-core/1.12 to be a thing ? [01:09] sure is [01:10] /s/gopher/gophers or whatever our team is called [01:10] ok, let me see if I can figure out how to do that [01:11] where can I list the series for a project ? [01:11] apart from the graphy thing on the front page ? [01:12] ah, ok, i think i have it now [01:21] hey folks. does whatever bot that does the merging not also update bug status to "fix committed"? [01:21] is that a manual step? [01:22] also, is there an up-to-date doc describing merge procedures? 
cos CONTRIBUTING talks about lbox submit, which apparently isn't used anymore [01:23] axw: the bot should mark linked bugs as committed, yes [01:23] but they have to be linked to the branch [01:24] errr not the bot sorry, Launchpad's branch scanner will do it [01:24] when it sees the branch merged [01:24] CONTRIBUTING needs fixing, as you noticed :) [01:25] I assume that would've run in the last 12 hours [01:25] https://code.launchpad.net/~axwalk/juju-core/lp1203935-ec2-octal-prices/+merge/176315 [01:25] is merged [01:25] yes [01:25] but the bug is still in progress [01:25] * bigjools looks [01:25] but... the mp says pending? [01:26] huh what's up here [01:26] oh the pending is because reviews are done on shiteveld not LP, so there's no approvals on the MP itself [01:27] lbox should do that really [01:28] what am I saying, lbox should die [01:28] :) [01:28] * thumper tries to focus [01:29] is the pending status stopping the scanner from updating the bug? [01:29] or something else is wrong? [01:30] axw: not entirely sure, thumper might know? [01:30] (he wrote most of that code) [01:30] * thumper looks up [01:30] bigjools: nps, thank you [01:30] ah [01:30] thumper: my MP is merged, but the bug wasn't automatically updated [01:30] thumper: branch scanner has not set fix committed on a linked bug where the branch is merged [01:30] curious what I did wrong, or missed [01:30] LP has never done that AFAIK [01:31] but people wrote scripts [01:31] really? [01:31] really [01:31] does tarmac do it? [01:31] I think so [01:31] I've seen it done [01:31] ah [01:32] tarmac? [01:32] axw: was the bug linked before marking the MP approved? [01:32] Tarmac is the landing bot code [01:32] bigjools: yes [01:33] ok so either it's a bug in Tarmac or something else is wrong, but no idea what! [01:33] mkay [01:33] I've definitely had this working on other projects [01:33] never mind, not a big deal for now [01:33] yeah, you can manually mark it [01:44] bigjools: how do I branch from a tag ? 
bzr branch -r ... lp:... ? [01:45] davecheney: bzr help revisionspec [01:45] bzr branch -r tag:BLAH lp:thing [01:45] thank you [01:46] i would grumble that every dvcs has a different convetion for rev specs [01:46] but actally i'm just a dumpass [01:46] dumbass [01:46] heh, np [01:46] bzr's help is actually very good, I reckon [01:48] thumper: would you think it is safe to say to #eco that 1.12 won't have the worlds best local provider, and they should follow 1.13 ? [01:49] who is #eco? [01:49] um... world's best? it is likely to get continual improvements [01:49] hopefully we won't be too long between non-dev releases [01:49] #eco is m_3 marcoceppi etc, who lurk elsewhere [01:49] so 1.14 should be soon enough (I hope) [01:50] o/ [01:50] to be more specific, 1.12 (stable) won't see a lot of lxc fixes [01:50] we'll deliver a better version in 1.14 [01:50] correct [01:50] thumper: cool, thanks for clarifying [01:50] I don't see us spending much time fixing things in 1.12 [01:50] * davecheney has given up trying to branch locally [01:50] too slow on this tiny internet [02:00] https://launchpad.net/juju-core/1.12 [02:35] davecheney: add a trunk series and you'll get a prettier graph [02:36] oh wait there is one, ignore me [02:37] bigjools: pretty isn't the word i'd use for it :) [02:37] elongated springs to mind [02:37] I did say "prettier".... it's all relative :) [02:38] it reflects your lack of release branches though [02:38] bigjools: if you squint, the shape looks like a cowboy hat [02:39] so.. I might be full of crap, but it seems that one of the cmd/juju tests is a bit wrong/broken [02:39] UpgradeJujuSuite.TestUpgradeJujuWithRealUpload always builds tools from the trunk [02:39] davecheney: seems appropriate [02:39] axw: wouldn't surprise me [02:39] is this intended? 
[02:41] axw: probably not [02:43] actually [02:43] I am full of crap [02:43] ignore me [02:53] axw: glad you figured it you :P [03:05] * thumper might do some more clean up of code [03:05] been writing documents [03:20] urgh, lunch coffee tastes like armpit [03:39] thumper: pingy ping [03:39] wallyworld_: hey [03:40] so, i've finally done some work in my validation branch to satisy rogr hopefully. to do it as a plugin.... i was thinking i'd make a plugin dir, add the cmd in there, and update the release scripts [03:41] wallyworld_: hmm... sounds interesting... [03:41] however [03:41] so the juju-foo binary is put in the deb [03:41] the point I raised about doing it as a plugin is to keep it out of the tree [03:41] and out of band [03:41] if you are just going to put it into the tree and build a binary and ship it [03:41] i think it belongs in the tree [03:41] just keep it in the freaking tree as a command [03:42] ok [03:42] it makes zero point to have a plugin in the tree [03:42] it belongs in the tree because the validation is tied to the version of juju [03:42] command is best IMO then [03:42] ok [03:43] another trivial gc prefix branch https://codereview.appspot.com/11754043 [03:43] when i say tied to..... it's not right now but there may well be thinsg we build into the metadata that only later version of juju can rad [03:43] read [03:44] wallyworld_: that's good enough for me [03:44] ok, ta. will re-propose [03:48] I have a failing test locally, what is wrong here folks? http://pastebin.ubuntu.com/5906306/ [03:49] bigjools: "-":"0" [03:49] ^ weird key in the map [03:49] yeah - nothing I changed either [03:51] bigjools: o_O, where does the "-" leak in from status ... 
[03:53] I dunno, I know very little about this stuff [03:54] * davecheney runs tests [03:56] I suspect a local setup problem but I've no idea how to work out what [03:56] the test looks fragile [03:59] bigjools: it is [04:00] i tried to use the same test harness to validate both json and yaml outputs [04:01] thumper: you free for a hangout sometime? [04:01] hey gang, anyone have a quick pointer to syntax for 'relation-get' from within a hook? I'm looking for args [04:01] m_3: shoot [04:01] what have you tried ? [04:01] * davecheney goes to look at the source [04:03] wallyworld_: sure, if you don't mind getting interrupted by me being a manual gps service [04:03] getting breakage from... popen("relation-get --format json - $node", 'r'); [04:03] thumper: it can wait till you're free [04:03] davecheney: I'm guessing that I can't pass in the unit-id [04:04] m_3: - $node looks suspect [04:04] yeah [04:04] did you mean -- $node ? [04:04] wallyworld_: ok [04:05] ahh, this is interesting [04:05] if c.Key = args[0]; c.Key == "-" { [04:05] c.Key = "" [04:05] } [04:05] davecheney: nope, the syntax was a single '-' apparently [04:05] looks like we consume it and remove it [04:05] m_3: do you have more context [04:05] one line isn't helping with language either [04:06] thumper: from within a hook, you'll typically call `relation-list` [04:06] thumper: then loop on the results of that and do a `relation-get --format json - $remote_unit` to get info for each of the related units [04:07] m_3: right, so I'm assuming bash? [04:07] actually python [04:07] in which case the $ looks wrong [04:07] popen("relation-get --format json - $node", 'r'); [04:08] m_3: can you pastebin the whole thing? [04:08] sure [04:08] hahaha [04:08] ok, sorry... php [04:08] fuck my life [04:08] mediawiki charm... lemme find the link [04:08] m_3: that's going on the quotes page [04:09] but really, I'm guessing something changed between py and go wrt this '-'? 
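The `-` handling quoted above (`if c.Key = args[0]; c.Key == "-" { c.Key = "" }`) can be illustrated with a small standalone sketch. The `normalizeKey` helper here is hypothetical, not the actual juju-core code, but it mirrors the behaviour discussed: a literal `-` argument means "no key, return all settings", matching the old Python code where `settings_name == "-"` was reset to `""`.

```go
package main

import "fmt"

// normalizeKey treats a literal "-" (or no argument at all) as
// "no key", i.e. return all relation settings; any other value is
// taken as the settings key to look up.
func normalizeKey(args []string) string {
	if len(args) == 0 {
		return ""
	}
	key := args[0]
	if key == "-" {
		return "" // "-" is a placeholder for "all settings"
	}
	return key
}

func main() {
	fmt.Printf("%q\n", normalizeKey([]string{"-", "mediawiki/0"})) // "" — all settings
	fmt.Printf("%q\n", normalizeKey([]string{"hostname"}))         // "hostname"
}
```

So `relation-get --format json - $node` asks for all settings of the named unit, which is why the lone `-` in the mediawiki charm is valid syntax rather than a stray flag.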
[04:09] do we expect the current relation get to accept a unit-id? [04:10] pls hold, bootstrapping [04:10] actually, don't need to do that [04:11] thumper: context... http://bazaar.launchpad.net/~charmers/charms/precise/mediawiki/trunk/view/head:/hooks/slave-relation-changed [04:11] do you guys have any details for the Brisbane induction sprint? [04:12] bigjools: no, i was going to ask you [04:12] heh [04:12] i'd good to see we're sticking to the code [04:13] the first rule of induction club ... [04:13] indeed, arrive at unknown destination, begin a linear search for your hotel [04:14] * bigjools emails persons [04:15] m_3: dude, that is PHP! [04:15] not python [04:15] right [04:15] lulz [04:15] was only looking at the system call to relation-get, not the surrounding code [04:16] * thumper vomits a little into his mouth [04:16] * m_3 cries [04:16] $node now makes sense [04:16] but I can't help you [04:16] but really, the same thing is done in bash and python in other charms [04:17] relation-list... relation-get - unit-id [04:17] sure... [04:17] my lack of ability to help is more around NFI what the actual commands do [04:17] or return [04:17] etc [04:17] :) [04:18] yeah, I'll start grepping through core [04:18] I do think it is funny that one uses exec [04:18] the other popen [04:18] to do the same shit [04:18] I'm not even going to start on that [04:18] heh [04:19] wallyworld_: https://plus.google.com/hangouts/_/2e05ac3602809a0997b9970e5bf785f4c607e745?hl=en [04:22] same from the old code [04:22] if self.options.settings_name == "-": [04:22] self.options.settings_name = "" [04:22] this - thing is a noop [04:23] buuuuuut, i wonder if gnuflags parses it correctly ... [04:35] I think I'm just gonna punt on this one. I'll find some other stack that's working and pretty to look at in the gui [04:35] thanks for the help [04:51] m_3: shit [04:51] that sucks [04:51] please log a bug [06:33] davecheney: is there a place we can store environment bootstrap data? 
we need the same data at env teardown time [06:33] bigjools: i'm thinking JUJU_HOME [06:33] we put a lot of other crap in there [06:33] ssl certs [06:33] caches [06:33] directory? [06:34] yup [06:34] on the client? [06:34] yes [06:34] maybe I misunderstood your question [06:34] I mean user's machine [06:34] what happens if they do stuff from a different machine later [06:34] What we need to store is information about resources on the cloud that we've allocated as part of setting up the environment. [06:34] So it has to be centralized. [06:34] what he saud === jtv2 is now known as jtv [06:35] said [06:40] davecheney: nowhere centralised then? [06:40] davecheney: can the provider update its own config for this sort of thing? (It happens to be a configurable resource — provided by the user or allocated by the provider) [06:46] bigjools: the two options are [06:46] ~/.juju and the state [06:46] jtv: yes, in theory [06:46] in practice no, it porbably doesn't have access to the state [06:47] ~/.juju is a non-starter really [06:47] oh sorry [06:47] one more place [06:47] * davecheney smacks head [06:47] the private bucket [06:47] otherwise destroy-environment would not work except on your original client [06:48] bigjools: it is unlikely anything will work except on the original client [06:48] private bucket dear liza [06:48] srsly? [06:48] if the user does not have a whole copy of ~/.juju [06:48] The private bucket... I guess that's the EC2 equivalent of our storage account & container. [06:48] no certs baby [06:48] so if you local machine loses its HD .... ? [06:48] your* [06:48] you are royally fucked [06:48] !!! [06:48] dafuq [06:52] private bucket means you'd need to know the data to get the data ... 
[07:06] bigjools: i can tell that you and I are going to get on like a house on fire in BNE [07:06] me telling you things [07:06] you trying not to hit me [07:13] that wasn't me cutting his internets, honest === tasdomas_afk is now known as tasdomas [07:35] bigjools: what environment bootstrap data do you want to store? [07:39] rogpeppe: otp, back in a while [07:42] bigjools: okeydoke [07:51] rogpeppe: right [07:52] rogpeppe: when we bootstrap, we have to create some storage account objects that need naming [07:53] we're configuring that to use existing storage accounts at the moment but there's also no reason why they can't get created by bootstrap code [07:53] but if we do that we also need to delete them when destroy-env runs [07:54] bigjools: i think the private bucket is exactly what you need here. it's where (for instance) the id of the bootstrap instance is stored. [07:54] bigjools: so the bootstrap code already writes there [07:54] rogpeppe: I was worried we had a chicken and egg situation [07:54] bigjools: i don't *think* so... [07:55] is the private bucket creds/details cached somewhere? [07:55] bigjools: you name the private bucket in your environments.yaml [07:55] bigjools: what about creating the storage with a fixed name, derived from the environment name? This way we won't need to store the name anywhere, because it can be derived from the env name. [07:55] rogpeppe: can't do that, it has to be globally unique [07:55] rvba: that's exactly how the private bucket works [07:56] err rvba sorruy [07:56] bigjools: that's the same as the ec2 private bucket [07:56] bigjools: you need to choose something globally unique [07:56] bigjools: is that a problem? 
[07:56] azure creates a DNS entry out of your account name [07:56] storage account I mean [07:57] would you like to guess how many people are going to call their environment "azure" [07:57] so yes it's a problem :) [07:57] I see :) [07:57] bigjools: how about adding a config entry to the azure environments.yaml: global-environment-name, or something [07:58] rogpeppe: could make it a {{rand}} [07:58] bigjools: yeah, that would probably be a good default [07:58] we did consider this earlier, but there is a snag [07:59] if someone wants to re-use an existing storage account instead of having juju create one on the fly, how do we know whether to delete it or not later? [07:59] bigjools: two possibilities there [08:00] bigjools: 1) if we create the account at bootstrap time, mark the private data with "this needs cleaning up", and clean up the account at destroy-environment time only if that flag is set [08:01] bigjools: 2) just delete it regardless and document that juju always requires its own storage account [08:01] bigjools: for 2) you could probably do some sanity checking to make sure there's nothing in the account that you wouldn't expect to be created by juju [08:01] rogpeppe: if the former, where can we store a flag? [08:02] bigjools: in the private bucket? (which presumably is in the storage account) [08:02] inside storage accounts, you have another layer of indirection called a container, BTW. These are separately public and private. [08:03] bigjools: hmm [08:03] fun isn't it :) [08:03] there's a pricing implication as well I think, which is why I am being careful [08:04] bigjools: the container name space is within a storage account, or global? [08:04] we took the cheap way out at the moment and made configuration of an existing account/container mandatory [08:04] within the account [08:04] bigjools: ah, you pay for a storage account? [08:04] I'm not sure but I expect so [08:05] bigjools: can you delete a storage account if it has containers? 
[08:05] no [08:06] anyway I think we can write a file to the private storage for now [08:06] it will suffice [08:07] thanks for the advice [08:07] bigjools: hmm, this might work: at destroy-environment time, you destroy the private bucket(container) and try to delete the storage account. if it fails because there are containers, ignore the error. [08:08] bigjools: this assumes that there's no other useful stuff attached to an account, i guess [08:08] well if someone created a storage account and didn't put any containers in it yet, they would get a surprise [08:08] bigjools: yeah, but they'll have named the account in their juju environments.yaml, and hopefully this behaviour is documented. [08:08] rvba: I was thinking - we might *have* to create a container on the fly to ensure it's private [08:09] bigjools: or... [08:09] bigjools: maybe there is a way to check if a container is private and issue a warning or even an error if it's not. [08:09] rogpeppe: well I personally dislike surprises like that, it ought to be simpler. I think we can make it simple. [08:09] bigjools: add a "do-not-delete-storage-account" attribute to environments.yaml, i guess [08:10] rogpeppe: jtv came up with that one too, I accused him of coming up with a developer solution and not a user solution :) [08:10] bigjools: the wrinkle that i see is that several juju environments might use the same storage account, presumably [08:10] rogpeppe: correct [08:12] bigjools: so i guess the question is: what semantics do we want? do we want the storage acct to be destroyed after the last environment is destroyed in the acct? [08:12] bigjools: but only if the account was created automatically [08:13] rvba: rogpeppe: my ideal scenario is to delete only if created automatically. [08:13] and that is decided by whether you configure it or not [08:13] Sounds like the best story from a user pov. 
[08:13] indeed [08:13] bigjools: don't you have to configure it, otherwise you won't know how to find the private bucket? [08:14] bigjools: assuming the private bucket is addressed relative to the storage account [08:14] right [08:14] so.... seems like we can't do this then === racedo` is now known as racedo [08:14] privacy is defined at the container level, not the account [08:15] so if a private "bucket" is required in the config up front, we can't auto-generate anything can we? [08:15] this is my chicken and egg question from earlier [08:17] bigjools: how about this: you must specify the storage account name. within that, the private bucket is named after the environment name. if the storage account does not exist, it's created, but we never remove a storage account. [08:18] could work [08:18] bigjools: i think that's a clear story to the user with minimal magic [08:18] indeed [08:19] mgz: ping [08:20] rvba: what do you think? [08:30] rvba is too busy deploying juju-gui on azure :) === vorpalbunny is now known as thumper [08:30] * thumper wanders off for a bit === thumper is now known as thumper-afk [08:30] rogpeppe: I /think/ it is… and that's why leaving an account after auto-creating it is a bit nasty. [08:30] heh [08:31] rvba: on the other hand, perhaps it's nice for a user to still have around the account that incurred the billing [08:32] rvba: the user is explicitly naming it after all. [08:33] rvba: we should find out definitively the cost implications of having a storage account [08:35] * rvba otp [09:03] rogpeppe: you're right, that's really the deciding factor. [09:03] bigjools: ^ [09:04] rvba: let's go half way for now and auto-generate the containers [09:04] named after the environment [09:04] auto generating storage accounts is fraught with problems === allenap` is now known as allenap [09:32] bigjools: that seems reasonable to me. 
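The scheme rogpeppe converges on above can be sketched as follows. The helper name and `"juju-"` prefix are assumptions for illustration, not the actual juju-core implementation: the user names the storage account in environments.yaml, and the private "bucket" is a container inside it derived from the environment name, so nothing extra has to be stored in order to find it again at destroy-environment time.

```go
package main

import "fmt"

// privateContainerName derives the private container ("bucket") name
// from the environment name. Because the name is derived rather than
// generated, destroy-environment can reconstruct it from config alone.
func privateContainerName(envName string) string {
	return "juju-" + envName
}

func main() {
	// The storage account name is user-supplied config (globally
	// unique); the container within it is derived.
	account := "mystorageaccount"
	fmt.Println(account + "/" + privateContainerName("staging"))
}
```

Under this model the account is created if missing but never removed, which sidesteps the chicken-and-egg problem of needing stored state to know whether deletion is safe.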
a decent error message ('Juju storage account "foo" not found - please create it and try again') could help, and make things less obscure to the user [09:36] mgz ping === thumper-afk is now known as thumper [09:37] wallyworld_: got a moment to introduce me to the wonders of simplestreams? [09:41] thumper: pong [09:41] mgz: hey there [09:41] mgz: got any voip capabilities? [09:41] either hangout or mumble [09:42] your pick === ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: thumper | Bugs: 5 Critical, 76 High - https://bugs.launchpad.net/juju-core/ [10:12] rogpeppe: ping [10:12] dimitern: pong [10:13] rogpeppe: hey, so the deployer needs to know state/API addresses for the simple context [10:13] rogpeppe: I missed that before, do you think we should have both in the deployer facade? [10:14] dimitern: yes, i think so [10:14] rogpeppe: and just return whatever's there in state [10:14] dimitern: yeah [10:14] rogpeppe: ok [10:14] rogpeppe: and CACert as well [10:15] dimitern: yes [10:15] rogpeppe: isn't this a security issue? [10:15] rogpeppe: maybe not, because it's the public cert [10:15] dimitern: exactly [10:15] rogpeppe: ok then [10:16] dimitern: we'll also need to provide the private key too, but only to nodes which are configured to run the state server. [10:16] rogpeppe: and there's no deployer there usually, right? [10:17] dimitern: there might be. i don't think we mind too much at this level. [10:17] rogpeppe: but it's not part of the deployer api [10:18] dimitern: the deployer wouldn't need the private key. i think that would probably be something for the machineagent API [10:18] rogpeppe: right [10:22] jtv: sorry, i was at soccer training and am about to go out again. tomorrow [10:22] Ah. [10:22] dimitern: i'm going out for a late dinner and a movie so will miss the meeting [10:23] gnight [10:35] wallyworld_: ok [10:35] wallyworld_: is there a rising trend there? 
:) kidding [10:38] i'm looking for a review on https://codereview.appspot.com/11723043/ if anyone cares to take a look [10:47] rogpeppe: looking [10:51] what ? [10:51] i can have a ubuntu edge phone for $625 [10:51] or iu can have the same phone for $675 [10:51] or $600 [10:51] or 800 [10:51] or 830 [10:51] or 700 in packs of two [10:51] WHAT THE FUCK [10:52] rogpeppe: reviewed [10:52] davecheney: weird, ain't it [10:52] davecheney: so much for the "for 24 hours only" thing [10:52] dimitern: ta [10:52] davecheney: you can't for less than 700, no longer [10:53] davecheney: ha! [10:53] davecheney: they updated it again :) [10:53] davecheney: it seems the campaign started skidding and they refreshed it to up the flattening curve :) [10:54] if I had pledged $800 I would feel fucking ripped off [10:54] * davecheney writes to Jane [10:55] davecheney: the 825 payers will be refunded [10:55] davecheney: apparently [10:55] dimitern: is there really a point in a blank separation line when there are only two imports? [10:56] davecheney: the only point of the blank line is so that each section is sorted, which is unnecessary there [10:56] oops [10:56] s/davecheney/dimitern/ [10:56] dimitern: ignore me, sorry, i'll just make the change as requested [10:57] rogpeppe: the idea is to format imports as agreed [10:58] dimitern: what rogpeppe said [10:58] davecheney: uh? [11:00] gah [11:00] * davecheney gets another beer [11:00] * rogpeppe hands davecheney a few beers [11:01] email sent [11:01] this is not quality [11:02] we are a quality organisiation selling a top shelf (and top price) product [11:02] but someone is acting like they are running a fire sale [11:02] rogpeppe: how about having a NetworkInfo call in DeployerAPI which returns Addresses, APIAddresses and CACert? 
[11:02] davecheney: +1 [11:03] dimitern: thinking [11:03] rogpeppe: and at client-side it will be cached [11:04] and with that career limiting move, i leave you in the grace and favor of the lord [11:04] dimitern: yeah, i think that's reasonable; we'll probably want to watch all those things later. [11:04] dimitern: not entirely sure about the NetworkInfo name though [11:05] rogpeppe: i was sure about that :) let's bikeshed it [11:05] dimitern: if you're happy with it, gfi [11:05] rogpeppe: i'm open to suggestings [11:05] suggestions [11:07] dimitern: i haven't thought of anything better yet. my reservation is it isn't really information about the network - it's information about how to connect to the state servers. [11:07] dimitern: maybe ServerInfo ? [11:07] rogpeppe: good point [11:07] rogpeppe: I like ServerInfo [11:08] rogpeppe: will do and propose it shortly [11:08] rogpeppe: that's obviously a non-bulk call [11:08] dimitern: oooh noooo! [11:08] rogpeppe: :D [11:08] dimitern: no, but it must be a bulk call! [11:08] rogpeppe: well I can pretend to make it bulk [11:08] dimitern: params []struct{} :-) [11:09] rogpeppe: like give me this number of the same struct as results :) [11:09] dimitern: yup [11:09] rogpeppe: it can be without args, right? [11:09] rogpeppe: at server-side [11:10] * rogpeppe still finds the bulk call thing very hard to deal with [11:10] dimitern: yes [11:10] rogpeppe: cool [11:11] rogpeppe: but wait.. maybe at least a machineTag, so I can call AuthOwner on it? [11:11] dimitern: wat? [11:11] rogpeppe: no authorization at all? free for all? [11:12] dimitern: the machine agent has already authorized [11:12] rogpeppe: so any agent [11:12] dimitern: well, any agent which can create a deployer API, yes [11:13] rogpeppe: how do we guarantee this? [11:13] dimitern: guarantee what? [11:14] rogpeppe: the only thing we check is AuthMachineAgent in NewDeployerAPI [11:15] dimitern: isn't that enough? 
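The ServerInfo call being discussed could look roughly like this. The struct and field names are assumed from the conversation (they may not match the final juju-core API): a single result carrying the state addresses, API addresses, and CA certificate that the deployer needs to build its simple context.

```go
package main

import "fmt"

// ServerInfoResult bundles the connection details discussed above.
type ServerInfoResult struct {
	StateAddresses []string // mongo/state server addresses
	APIAddresses   []string // API server addresses
	CACert         []byte   // public CA certificate (PEM)
}

// serverInfo stands in for the server-side facade method; a real
// implementation would read these values from state rather than
// returning placeholders.
func serverInfo() ServerInfoResult {
	return ServerInfoResult{
		StateAddresses: []string{"10.0.0.1:37017"},
		APIAddresses:   []string{"10.0.0.1:17070"},
		CACert:         []byte("-----BEGIN CERTIFICATE-----\n..."),
	}
}

func main() {
	info := serverInfo()
	fmt.Println(len(info.StateAddresses), len(info.APIAddresses))
}
```

Only the public CA cert is exposed here; as rogpeppe notes, the private key would belong to a separate, more tightly authorized machine-agent API.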
[11:15] rogpeppe: so any MA can call it [11:15] dimitern: definitely [11:15] dimitern: any MA can run a deployer, right? [11:16] rogpeppe: but arguably the info ServerInfo returns you probably already know [11:16] rogpeppe: except the state addresses [11:16] dimitern: yes [11:16] rogpeppe: which is a huge security hole I think [11:16] dimitern: really? [11:16] rogpeppe: maybe not return state addresses [11:17] dimitern: why is knowing the state addresses a security hole? [11:17] rogpeppe: well, if any rouge MA can connect directly to state, then why we need the API? [11:17] dimitern: just having the address doesn't allow you to connect [11:17] dimitern: you need an identity and password too [11:17] dimitern: the address is not secret [11:17] rogpeppe: well, if you were able to connect to the API to call ServerInfo, you probably can connect to state with the same creds, no? [11:18] dimitern: no [11:18] dimitern: the two sets of credentials are separate [11:18] rogpeppe: ah, ok [11:18] rogpeppe: but setpassword sets them both, right? 
[11:18] dimitern: currently, yes (because everything connects directly to mongo) [11:19] rogpeppe: so it is a security leak then [11:19] dimitern: but in the future, SetPassword will only set the mongo password for agents that are allowed to connect to mongo [11:20] dimitern: it's not a security leak currently because the state server addresses are identical to the API server addresses [11:20] dimitern: and machine addresses are not private [11:20] rogpeppe: which is the api worker in the MA on 0 [11:20] dimitern: yup [11:20] rogpeppe: so if that gets compromised we're as good as wide open [11:20] dimitern: we should never rely on machine addresses being secret - they are easily enumerable and findable by other means [11:20] rogpeppe: but other MAs rouge or not won't be able to access state [11:21] dimitern: when we've moved to API-only, that's right [11:21] rogpeppe: that considerably diminishes the risk, but it's still there, we need to think about it later [11:22] dimitern: i don't see that there's a risk from publishing machine addresses [11:22] rogpeppe: no, provided the creds are not the same ;) [11:22] dimitern: if something can compromise the system just from having a machine address, then our system is broken [11:22] rogpeppe: for now, i agree we're no better of with publishing addresses or not [11:31] rogpeppe: standup [11:34] what the hell google [11:34] mgz: we've lost you [11:34] google logged me out of everything... 
for no apparent reason [12:25] rogpeppe: https://codereview.appspot.com/11765043/ [12:26] rogpeppe: the ServerInfo stuff [12:27] dimitern: i'm sure that having three separate calls in the client API is right [12:27] dimitern: that seems to me to miss part of the point of putting them together in the first place [12:28] dimitern: if you want all the info, you're going to make three API calls, all of which return exactly the same info [12:28] dimitern: if we're going to have separate client entry points, i think we should have separate API calls [12:28] dimitern: (which i'd be just fine with - i almost suggested that) [12:31] rogpeppe: I prefer to limit the number of calls at the server, if possible [12:31] rogpeppe: I was thinking of making a single serverInfo call at construction time [12:31] dimitern: construction of what? [12:31] rogpeppe: and cache them, but that seemed wrong - could they change? [12:31] rogpeppe: client deployer facade [12:31] dimitern: yes, they could and will change [12:32] rogpeppe: so caching is not a good idea, hence this [12:32] dimitern: we'll want a watcher too [12:32] dimitern: eventually [12:32] rogpeppe: but not for the deployer [12:32] rogpeppe: yeah [12:32] dimitern: when you say "limit the number of calls", do you mean the number of entry points, or the overall number of API calls at runtime? [12:33] s/at runtime/made at runtime/ [12:33] rogpeppe: entry points [12:33] rogpeppe: because this call is special [12:33] dimitern: i think our API is vast anyway - 2 extra entry points aren't going to make any difference [12:34] rogpeppe: you're probably right [12:34] rogpeppe: ok, will do them separately then [12:34] dimitern: thanks. [12:34] dimitern: i think that makes sense [12:35] dimitern: and means we can use a conventional StringsWatcher for watching the addresses in the future [12:36] rogpeppe: do I have to have an error result in an apiserver method, even though CACert doesn't return an error? 
[12:36] dimitern: no you don't [12:36] rogpeppe: cool [12:36] dimitern: read the rpc docs for details [12:37] rogpeppe: :) sorry, too lazy [12:37] rogpeppe: I did some time ago, but forgot [12:38] dimitern: the most important bit is in the Conn.Serve documentation - the list of allowed method signatures: http://paste.ubuntu.com/5907455/ [12:39] rogpeppe: yeah, just revisited that [12:51] rogpeppe: updated https://codereview.appspot.com/11765043/ [12:53] TheMue, mgz: can one of you take a look as well please? ^^ [12:54] dimitern: i think i'd probably send the CA cert as a string - it's already ascii AFAIR [12:54] rogpeppe: if it's []byte why pretend it's a string? [12:55] rogpeppe: how about if some unicode char gets there? [12:55] dimitern: it's a PEM-encoded certificate [12:55] dimitern: which is defined to be ascii [12:56] rogpeppe: so why are we not using string for it in other places? [12:56] dimitern: and is already base64-encoded [12:56] dimitern: so we'll be base-64 encoding twice [12:56] dimitern: we're not using string for it because the crypto entry points use []byte [12:57] rogpeppe: we can keep it as a string and convert it to []byte before passing it to crypto calls [12:57] dimitern: i played around with quite a few permutations. it's nicer to keep it as []byte most of the time, i think [12:58] rogpeppe: double encoding as base64 is not such a bad thing [12:58] dimitern: anyway, i don't mind much; if some js client wants to interpret it, i'm sure it's easy for them to b64 decode [12:58] dimitern: yeah, it's not *too* much bigger [12:58] rogpeppe: the size grows only marginally [12:59] dimitern: well, a third bigger, but that's probably of no particular import here [13:01] dimitern: reviewed [13:02] rogpeppe: thanks [13:03] rogpeppe: MongoAddresses only server-side or both? 
[13:04] dimitern: both [13:04] dimitern: "Addresses" made sense as a method on the mongo-based state [13:04] dimitern: but not as an API call [13:04] rogpeppe: that'll require changing Addresser interface in the deployer and several other places [13:04] dimitern: yeah, i think that's worth doing [13:05] rogpeppe: why not StateAddresses then? [13:05] dimitern: because i think that's ambiguous... then again, we have "StateWorker" vs "APIWorker". [13:05] dimitern: yeah, go with StateAddresses [13:06] rogpeppe: ok [13:07] rogpeppe: oops [13:07] rogpeppe: I think I forgot to do common.ServerError(err) when I'm returning it [13:07] dimitern: sigh [13:08] dimitern: actually, that doesn't matter here [13:08] dimitern: it's not a bulk call [13:08] rogpeppe: hmm [13:08] rogpeppe: sure about that? [13:08] dimitern: the usual error return always gets translated through ServerError [13:08] rogpeppe: ah, right [13:08] dimitern: that's one of the things we lost by moving to bulk calls for everything [13:10] rogpeppe: it was a good thing, you'll see at some point - definitely in the provisioner [13:11] * rogpeppe might be more convinced if there was a single bulk call implementation that didn't just do everything serially. [13:11] rogpeppe: having a bulk interface for the calls allows us to change the implementation later [13:12] dimitern: i have a feeling we standardised on a bulk call interface that's hard to implement in practice [13:12] rogpeppe: to be bulk, rather than serial like now [13:13] dimitern: seen your review request, doing it now [13:13] TheMue: thanks [13:13] dimitern: sure there are places where a bulk interface is appropriate, but for 95% of calls it's not and never will be IMHO [13:15] dimitern: and having a bulk call interface for calls that can only ever allow one thing is just farcical. [13:16] dimitern: sorry, you got me started. 
[13:16] * rogpeppe shuts up about bulk calls [13:17] rogpeppe: :) [13:17] rogpeppe: my bad [13:18] dimitern: you've got a review [13:18] TheMue: cheers [13:42] interesting [13:43] dimitern, rogpeppe : does anyone have an idea? in several places tests of err == tools.ErrNoTools work, but not where I use it in bootstrap command [13:43] dimitern, rogpeppe : printing the error shows that it is that error [13:43] TheMue: could you expand more on the context please? [13:44] rogpeppe: bootstrap calls environs.Bootstrap() [13:44] rogpeppe: and there Bootstrap() of the current environment is called [13:45] rogpeppe: in there it is environs.FindBootstrapTools() [13:45] s/it is/it calls/ ? [13:46] rogpeppe: "it is" in the sense of "it is what's called" ;) === tasdomas is now known as tasdomas_afk [13:47] rogpeppe: and here it reads the list of the storage and then the public storage, but both are empty (what I want, that's correct) [13:47] rogpeppe: so ErrNoFiles is returned upwards until my bootstrap command, where I can evaluate it/print it [13:50] rogpeppe: tools.ReadList() is the one that returns it [13:51] hmm, will try something [13:53] strange [13:56] i'll add a print chain to see where the error is "lost" ;) [14:04] wow, that's really interesting [14:05] rogpeppe: in FindAvailableTools() == is true, but in FindBootstrapTools(), which calls FAT(), == is false *shrug* [14:07] dimitern: btw, you asked for a link: https://codereview.appspot.com/11588043/ [14:07] TheMue: cheers, will take a look [14:11] rogpeppe: please take a look at http://paste.ubuntu.com/5907706/ [14:11] rogpeppe: that's the most interesting part [14:17] hmmpf, disconnected by the German Telekom [14:30] TheMue: looking; sorry, was at lunch [14:30] rogpeppe: np [14:31] TheMue: what do you see if you print the type of the error (with %T) in each place? 
[14:32] rogpeppe: oh, good idea, will try [14:32] TheMue: print the error message too (with %q) [14:36] rogpeppe: TYPE *errors.NotFoundError MSG "no tools available" [14:36] rogpeppe: it mutates to a reference [14:38] TheMue: you've missed the call to convertToolsError [14:39] TheMue: you need to use errors.IsNotFoundError, i think [14:40] rogpeppe: ah, cool [14:40] rogpeppe: I'll tell you about the success [14:42] rogpeppe: you're fucking amazing ;) thx [14:42] TheMue: yw [14:43] type Removerer interface { [14:43] ha ha! [14:43] :D [14:44] dimitern: ping [14:47] rogpeppe: pong [14:48] dimitern: i'm thinking of changing the definition of params.ErrorResults [14:48] rogpeppe: oh? [14:48] dimitern: so that it's potentially forward compatible if we want to change a method to return a value in the future [14:48] rogpeppe: expand please [14:48] dimitern: currently it's defined as type ErrorResults {Errors []*Error} [14:48] dimitern: i propose defining it as http://paste.ubuntu.com/5907815/ [14:49] dimitern: then if a call that previously just returned an error decides to return a value, they can just create a new structure with some data in the result struct as well as the error. [14:50] rogpeppe: I don't see how anything stops us from changing the methods now to do that [14:50] dimitern: you couldn't do that in a backwardly compatible way [14:51] dimitern: you'd have to change the function result to look like: type MyResults {Errors []*Error; DataResults []MyDataResult} [14:51] dimitern: whereas really we'd like to have (following the rest of the API) type MyResults {Results []MyResult} where MyResult contains the error as well as the data [14:52] TheMue: reviewed [14:52] dimitern: does that make sense? 
[14:52] rogpeppe: wait [14:52] rogpeppe: we don't currently have that [14:53] rogpeppe: we have func f() -> results, error, where results contain both a result and an error [14:53] dimitern: exactly [14:53] dimitern: but if you're changing an API call that used to return ErrorResults, and want to return some data, you can't get that [14:54] dimitern: because the error is directly in each result element, rather than under an Error field which it would be in the result+error case [14:54] rogpeppe: I hear you [14:55] rogpeppe: but still can't see why we need to change ErrorResults-returning methods to return something else [14:55] rogpeppe: I need some examples [14:57] dimitern: well, as a particular example, i'm just changing the UpgraderAPI.SetTools signature. it did return extra data (the tag) but now i'm changing [14:57] ... [14:57] dimitern: it to return just an error [14:57] dimitern: it would be good if that kind of change could be done backwardly compatibly [14:58] dimitern: in all other places we give our API calls the freedom to add extra data [14:58] dimitern: i think we should do that for calls that currently just return an error too [14:58] rogpeppe: how can changing a method signature ever be backwards compatible? [14:58] dimitern: easily [14:59] dimitern: if you call a method and it returns more fields than you expect, the extra fields are ignored [14:59] dimitern: if you call a method with more fields than the method expects, the extra fields are ignored too [15:00] dimitern: that's all by virtue of the json unmarshal semantics [15:00] rogpeppe: can you paste your proposed changes to that method, I still can't see it, sorry [15:00] dimitern: to which method? 
[15:02] rogpeppe: UpgraderAPI.SetTools [15:02] dimitern: basically this change just regularises our call conventions so that errors are always in result.Results[i].Error [15:02] dimitern: ok, i want to change UpgraderAPI.SetTools to look like this: [15:02] func (u *UpgraderAPI) SetTools(args params.SetAgentsTools) (params.ErrorResults, error) { [15:03] rogpeppe: and now it looks like this [15:03] dimitern: it currently looks like this: [15:03] func (u *UpgraderAPI) SetTools(args params.SetAgentTools) (params.SetAgentToolsResults, error) { [15:03] rogpeppe: yeah [15:04] rogpeppe: so how will ErrorResults look like? [15:04] dimitern: as i pasted before http://paste.ubuntu.com/5907815/ [15:05] rogpeppe: but SetAgentTools returns a tag [15:05] rogpeppe: as well as an error [15:05] dimitern: it did - i'm changing it so it doesn't [15:05] dimitern: there's no need for it to return a tag [15:07] rogpeppe: so the question is between ErrorResults{Errors []*Error} and ErrorResults{Results []ErrorResult{Error *Error}} ? [15:07] dimitern: yup [15:08] rogpeppe: and having an extra nested struct helps us how? we can add stuff to ErrorResult later? [15:09] dimitern: it's compatible with SomeOtherType{Results []SomeOtherResult{Error *Error; Data SomeOtherData}} [15:09] dimitern: that is, we can make a method return some extra data without compromising the backwards-compatibility of the API [15:10] rogpeppe: but should we do that? [15:10] rogpeppe: ISTM this is exactly a type of change that needs versioning [15:10] rogpeppe: of the api [15:10] dimitern: not necessarily [15:11] rogpeppe: speaking from software development best practices, if you will [15:11] dimitern: if a client doesn't care about the new data returned from the call, then it can happily ignore it [15:11] rogpeppe: "thou shall not break the contract" [15:11] :) [15:12] dimitern: i disagree. see https://developers.google.com/protocol-buffers/docs/overview under "A bit of history" for some justification. 
[15:13] rogpeppe: these are different things [15:13] dimitern: the whole API is designed explicitly so we can get this kind of backward compatibility without having many different (fragile) versions [15:13] rogpeppe: interface and its over-the-wire format [15:15] dimitern: sorry, i don't understand [15:15] rogpeppe: the API defines the interface -> F(x, y) (a, b) [15:16] dimitern: ok [15:16] rogpeppe: the RPC layer defines the serialization mechanism [15:16] dimitern: ok [15:16] rogpeppe: you say these two don't have to match 1-1 [15:16] rogpeppe: from the client's POV [15:17] dimitern: i'm saying that there are defines ways of changing the API so as to preserve backward compatibility [15:17] s/defines/defined/ [15:17] rogpeppe: and you still oppose versioning in general [15:17] dimitern: so we can change the API to F to, say F(x, y) (a, b, c) [15:18] dimitern: and clients that expected the old version will continue to work [15:18] rogpeppe: so changing it to F(a, b, c) (x, y) is also ok? [15:19] dimitern: yes (assuming that F is implemented with the knowledge that c might be unset from old clients. [15:19] ) [15:19] dimitern: assuming you actually means F(x, y, z) (a, b) [15:19] s/means/meant/ [15:19] rogpeppe: yup [15:19] rogpeppe: and if z is required then what? 
[15:19] dimitern: then it's not backwardly compatible [15:20] rogpeppe: exactly [15:20] dimitern: but the point is that it's *possible* to change things in a backwardly compatible way [15:20] rogpeppe: the same applies to the client I think [15:20] rogpeppe: we cannot assume all clients will be as lenient in parsing the response as go is [15:21] dimitern: i think we can [15:21] rogpeppe: it *is* possible, but fragile [15:21] dimitern: in fact, we should probably write a set of API usage guidelines that specify that [15:21] dimitern: i think versions are more fragile in some ways [15:21] dimitern: if you get the wrong version you can't speak at all [15:22] dimitern: and that applies particularly to our API where we have many different kinds of client [15:22] rogpeppe: and that's probably fine, because your behavior will be undefined then [15:22] dimitern: it doesn't have to be [15:23] dimitern: i'd prefer to do versioning by renaming entry points and/or facades (only) when necessary [15:23] rogpeppe: could there be possible security issues with that approach? [15:24] rogpeppe: buffer overruns, etc? [15:24] dimitern: how so? 
[15:24] rogpeppe: at client-side [15:24] dimitern: then we still have the freedom to change things in a backwardly compatible way, but we can change things in a safe backwardly incompatible and fine-grained way too [15:24] rogpeppe: well, i'm attempting to deserialize 2 fields and I get 5 [15:25] dimitern: the json package discards extra fields [15:25] dimitern: other clients will almost certainly just unmarshal to a map [15:25] rogpeppe: client-side you cannot guarantee that [15:25] dimitern: and then if there are extra fields in the map, that's unlikely to be a problem [15:26] dimitern: and particularly if we document that new fields may be added, which we should [15:26] rogpeppe: but not removed [15:26] dimitern: yup [15:27] dimitern: similar to the protobuf approach [15:27] rogpeppe: ok, seems sane, at least I can't see obvious holes, although I'm trying [15:28] rogpeppe: but please, as a separate CL [15:28] dimitern: please keep trying. i also try to find flaws in the approach [15:28] dimitern: which as a separate CL? [15:28] rogpeppe: if you're going to change ErrorResults, do it everywhere in one go, separately [15:28] dimitern: that's what i'm proposing, yes [15:29] rogpeppe: but please document what we discussed about policies [15:29] rogpeppe: adding fields is ok, changing or removing - not [15:30] s/policies/guidelines/ [15:30] dimitern: yeah. i should spend some time updating doc/api.txt [15:31] rogpeppe: +100 [15:31] dimitern: i've added a kanban card [15:31] rogpeppe: incl. stuff mentioned in the huge doc comment about method signatures [15:32] dimitern: i'm not sure that implementation-specific stuff is appropriate for that document === tasdomas_afk is now known as tasdomas [15:32] dimitern: but there could easily be another document talking about that stuff [15:32] rogpeppe: api_hacking.txt ? === tasdomas is now known as tasdomas_afk [15:33] dimitern: yeah, probably [15:33] dimitern: the doc comment is only 36 lines though. 
it's not *that* huge :-) [15:34] rogpeppe: :) [15:35] rogpeppe: the point is, it's not easy to find, once you've forgotten where to look [15:35] rogpeppe: it's better to have docs in one place [15:35] rogpeppe: possibly even moving most of it into a txt file and referring to it in the comment itself [15:35] dimitern: i'd prefer to go the other way [15:36] dimitern: the package documentation is complete and kept up to date [15:36] dimitern: it says exactly how to use the rpc package [15:36] rogpeppe: either way, there has to be a link from the doc to it [15:36] dimitern: yeah [15:36] dimitern: definitely [15:36] http://godoc.org/launchpad.net/juju-core/rpc#Conn.Serve :-) [15:37] rogpeppe: nice! [16:08] hey, hey [16:08] OSCON is going very well [16:08] juju charm school went well [16:10] ace [16:11] I definitely feel like we've hit an inflection point -- people are starting to get it -- and the pace of user adoption/corp interest is definitely accelerating [16:12] there will be a bit of juju in mark's OSCON keynote [16:12] though also quite a bit of phone [16:12] keynote will be in half an hour, and will be live-streamed here: http://www.oscon.com/oscon2013/public/content/video [16:16] mramm: cool [16:17] dimitern, TheMue, mgz: any chance of a review on this: a large CL but entirely mechanical in nature: https://codereview.appspot.com/11760045/ [16:18] looking [16:19] rogpeppe: dimitern: how are things going with the API work? [16:19] rogpeppe: will take a look [16:19] mramm: coming along ok, i think. the machiner now actually talks to the API. deployer and upgrader in the works. [16:19] rogpeppe: so, the relevant change is - version.Binary + Version version.Binary? [16:20] rogpeppe: that's cool [16:20] mgz: yes [16:20] lgtmed. I trust the bot to catch any missed bits :) [16:21] mgz: ta! [16:21] rogpeppe: so the uniter is the big bit that hasn't been started yet...? [16:21] mramm: yes [16:21] when do you think deployer and upgrader will land? 
[16:25] haha -- conversation killing question... ;) [16:30] mramm: ya' know, estimation is always a bad topic [16:30] mramm: :D [16:36] mramm: oh sorry, didn't see your question [16:36] mramm: i'm quite hopeful for upgrader this week, assuming i don't get too bogged down with reviews [16:37] rogpeppe: you've got another revie [16:37] review [16:37] TheMue: thanks! [16:41] rogpeppe: yw [17:11] weird, my machine is running like a dog, the load average is 8, the system monitor shows all cpus pegged around 100%, but then adding up all the processes' cpu percentages comes to about 20% [17:11] where's my power going?! [17:12] i guess it can only be something in the kernel [17:12] rogpeppe: try powertop [17:13] rogpeppe: in these cases I usually blame either compiz or Xorg [17:17] juju section of mark's talk starting now [17:22] dimitern: here's ErrorResults changed as we talked about: https://codereview.appspot.com/11674045/ [17:25] rogpeppe: LGTM [17:25] dimitern: thanks. anyone else still around for a review? [17:28] TheMue: ? [17:37] * rogpeppe has to go [17:37] dimitern: see ya tomorrow [17:38] dimitern: thanks as always for the prompt reviews :-) [17:38] g'night all [17:38] rogpeppe: 'night! [18:03] gary_poster: heh, I forgot about the presentation and was at lunch; I'm glad it worked ;) [18:41] rogpeppe: if you're looking: lgtm [20:49] morning [21:05] who knows MAAS? 
[21:05] * thumper looks for bigjools [21:54] o/ [21:54] hey [21:54] so, right now I have a package that wraps lxc as juju cares about it [21:54] also right now there is one and only one network config [21:55] that uses veth [21:55] this works fine for the local provider [21:55] what I'm thinking is having a way *waves hands* for the container manager to ask for network config [21:55] and will get one of three responses [21:55] use host - which means we don't configure any network bits [21:55] use device - and pass in the device name [21:56] so uses the phys setting [21:56] just to be clear - you do not want to be in the business of juju creating an openvswitch GRE based network right? [21:56] or use default [21:56] is that related to SDN? [21:56] yes [21:56] not just yet [21:56] k [21:56] I'm deferring any SDN discussions to IOM [21:57] the idea of using the host networking will mean we can have semi-functional containers on hosts that won't give us extra IP addresses [21:57] like openstack [21:57] the default veth stuff is how we'll use local provider [21:57] but the ideal is to have a new device created and passed through [21:57] which means the networking *should* be set up automagically [21:57] so we can default all the providers to use the host networking initially [21:58] and implement the physical nic creation as we can [21:58] with the physical bits meaning we won't have port clashes in the containers [21:58] so two containers could have the same ports open [21:58] which obviously we wouldn't be able to with the host network option [21:59] does that sound reasonable to you? [21:59] i think it'd be worth having smoser in on this conversation (or zul) as they would know more about what openstack and maas can provide [21:59] hallyn: well I have talked with lifeless about openstack [21:59] what is 'use default' for case 3? 
[21:59] and openstack isn't going to give us extra IPs [22:00] case 3 is the veth that works for local provider [22:00] bridge over lxcbr0 [22:00] case 1 and 2 are for cloud providers [22:00] ok [22:01] i understand what maas will do. i'm still not clear on what you plan to get from openstack [22:01] I'm actually feeling like this might actually work :) [22:01] the plan for openstack at this stage is to use the host network [22:01] and deal with the limitations of port clashes [22:01] until we have a possible SDN [22:01] I think SDN is the only way we'll get proper network isolation for openstack [22:02] given what I have been told about openstack networking [22:02] what is lifeless' relation to openstack? [22:02] lifeless now works for HP on their openstack on openstack [22:02] i see no reason to take it for granted that we *can't* support multiple ip's per instance [22:03] hallyn: neutron as it is now won't allocate multiple ip's for a cloud instance [22:03] and also apparently the NAT is done prior to getting to the host [22:03] so if you did have multiple IPs, you wouldn't know anyway [22:03] so I'm deferring caring about containers inside openstack for now [22:03] thumper: the 'right now' matters to your demo, but should not limit our long-term planning [22:04] but containers to deploy openstack is important [22:04] that's good :) (deferring caring on openstack) [22:04] right [22:04] I think the "correct" approach in the future is to have SDN for the containers [22:04] so we have a "cloud" of containers inside our cloud [22:04] thumper: I'm not 100% sure, but seem to recall that was rejected at last sprint [22:04] which is getting a little meta [22:04] rejected because we thought we wouldn't need it [22:05] so either [22:05] we need to fix openstack to support our use case (like AWS) [22:05] or have another solution [22:05] also [22:05] I thought it was rejected bc we didn't want to complicate things on behalf of cloud providers which don't fully do their job 
[22:05] azure only gives one IP address [22:05] (that's my own phrasing :) [22:05] yeah, kinda [22:05] ok. [22:06] but I guess it matters to how much we want to support containers on clouds that don't do their job properly [22:06] from what I've seen, EC2 will work fine [22:06] and MAAS should work [22:06] i don't mean to beat this to death - my only point is: openstack is OSS, so we should not let current limitations limit our long term planning [22:06] sure [22:06] we could have openstack updated to support our use case [22:06] which could mean we don't need SDN at all [22:06] it may well be the easier approach [22:07] in fact, [22:07] if we get containers working nicely on MAAS, and EC2 [22:07] then we can point at openstack and say "you need to work better" [22:07] and provide concrete use cases [22:07] i was thinking more of working with zul to push patches to do what we need :) [22:07] I've found that concrete problems are the best way to get new features [22:07] aye [22:07] * thumper tries to remember who zul is [22:07] but yeah we would still use the 'look there' to help push the patches [22:08] Chuck Short [22:08] ah Chuck [22:08] yeah, just poked mup [22:08] THE [22:08] :) [22:08] http://whereschuck.org/ [22:09] on that note, it's roundabout dinner time - ttyl [22:09] ok [22:09] thanks for your help [22:09] np - i'll check backlog later if you have more questions, but sounds like you have a plan [22:11] I do [22:11] now to see if the plan works [23:55] * thumper goes to the gym while amazon spins things up