[04:44] <_mup_> juju/unit-with-addresses r395 committed by kapil.thangavelu@canonical.com
[04:44] <_mup_> unit pub/priv address retrieval on ec2, orchestra, and lxc
[05:01] <_mup_> juju/unit-with-addresses r396 committed by kapil.thangavelu@canonical.com
[05:01] <_mup_> service unit state accessors/mutators for public/private addresses
[06:45] hi all
[06:47] <_mup_> juju/unit-with-addresses r397 committed by kapil.thangavelu@canonical.com
[06:47] <_mup_> dummy unit address for tests
[10:13] SpamapS: hey
[11:45] mornings
[11:57] hi hazmat
[11:57] although it's evening for me :D
[12:05] https://juju.ubuntu.com/docs/write-formula.html
[12:05] so i will replace ensemble with juju
[12:06] ensemble: formula == juju: charm
=== niemeyer_ is now known as niemeyer
[20:04] <_mup_> juju/go-charm-bits r12 committed by gustavo@niemeyer.net
[20:04] <_mup_> Improvements and fixes in charm directory packing:
[20:04] <_mup_> - Ignore hidden files, as the Python version does.
[20:04] <_mup_> - Use the new filepath.Walk interface.
[20:04] <_mup_> - Pack unix file mode into the charm bundle.
[20:04] <_mup_> The last change requires this Go CL:
[20:04] <_mup_> http://codereview.appspot.com/5124044/
[20:08] <_mup_> Bug #859151 was filed: Improvements and fixes in charm directory packing < https://launchpad.net/bugs/859151 >
[20:21] niemeyer, do you want another look at the placement stuff?
[20:22] niemeyer, also i think bcsaller addressed the review comments on the clone-lib
[20:22] hazmat: Have you changed the placement stuff since we last talked?
[20:22] we've basically got things working and factored out, just landing things into the queue and trunk in the right order
[20:23] niemeyer, yes.. it's basically redone to use a provider-supplied list of supported policies, intersected with the cli option and environment option
[20:23] niemeyer, just pushed the latest
[20:23] hazmat: Unless it'd make a difference for bcsaller (as in, he'd work on it now), I'll review it first thing in my morning tomorrow before he comes online
[20:23] hazmat: Sure, I can check it out
[20:23] niemeyer, he'd work on it now.. he's pending on pushing another branch into review, but it's got a two-headed pre-req
[20:24] I'll push the merge tip now, but I could really use the rest of the day off :)
[20:24] with lxc-provider-config (which is held up on placement) and lxc-lib-clone
[20:24] so we need to get at least one of the pre-reqs in or the merge proposal diff is going to be messy
[20:24] and then i've got the status/ssh/debug-hooks working with that as a pre-requisite
[20:25] bcsaller: Sure.. I imagined that was the case.. I'll review it tomorrow during my morning then
[20:25] ie.. (provider-placement -> lxc-provider-config && lxc-library-clone) -> lxc-omega -> units-with-addresses -> local-cli-support
[20:26] hazmat, bcsaller: two-headed pre-reqs are not a big deal.. just mention the puzzle in the summary so that I can sort out the base
[20:26] niemeyer, cool
[20:27] bcsaller, cool, then if you're comfortable with lxc-omega can you put that into review? i'll push the last two into the review queue; i've got some minor work to finish on local-cli-support (debug-hooks tests, and a functional test round)
[20:27] it doesn't need to be in the review queue, but it's just nice for pre-reqs
[20:28] it's been interesting to see how bzr/lp work with concurrent dev
[20:28] lots of interconnected dev
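A minimal Python sketch of the packing behavior the go-charm-bits r12 commit above describes (skip hidden files, record each file's unix mode in the bundle); pack_charm_dir is a hypothetical helper for illustration, not code from either the Python or Go codebase:

```python
import os
import zipfile

def pack_charm_dir(charm_dir, bundle_path):
    """Pack a charm directory into a zip bundle, skipping hidden
    files and preserving the unix file mode of each entry.

    Hypothetical sketch of the behavior described in the
    go-charm-bits r12 commit; not the actual juju implementation.
    """
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as bundle:
        for root, dirs, files in os.walk(charm_dir):
            # Ignore hidden files and directories, as the Python version does.
            dirs[:] = [d for d in dirs if not d.startswith(".")]
            for name in files:
                if name.startswith("."):
                    continue
                path = os.path.join(root, name)
                info = zipfile.ZipInfo(os.path.relpath(path, charm_dir))
                # Pack the unix file mode into the entry's external attributes
                # (the high 16 bits of external_attr carry the unix mode).
                info.external_attr = (os.stat(path).st_mode & 0xFFFF) << 16
                with open(path, "rb") as f:
                    bundle.writestr(info, f.read())
```

Note that this version writes only files; empty directories would be lost, which is exactly the gap the later r13 commit in this log closes.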
[20:28] <_mup_> juju/trunk r361 committed by gustavo@niemeyer.net
[20:28] <_mup_> Added .dir and hooks/install in repository/dummy test data.
[20:28] <_mup_> This is just to sync up with the Go port.
[20:28] <_mup_> [trivial]
[20:32] <_mup_> Bug #859180 was filed: LXC driven local development story < https://launchpad.net/bugs/859180 >
[20:39] hazmat: niemeyer hello
[20:39] koolhead17, hello, yes, re replacing ensemble/juju and formula/charm in the docs
[20:40] koolhead17, canonical does require a contributor agreement though .. http://www.canonical.com/contributors
[20:41] hazmat: you mean i cannot contribute to juju without signing that? :P
[20:41] hazmat: also, wordpress/hooks/db-relation-changed
[20:42] can i use this in the case of other charms too? hostname=`curl http://169.254.169.254/latest/meta-data/public-hostname`
[20:42] ?
[20:42] koolhead17, yeah.. that's how contrib agreements work.. actually they've become quite common in corporate- or foundation-backed projects (openstack, plone, etc. all use them)
[20:43] hazmat: will sign it once i start contributing; all this while i've only been reporting bugs :D
[20:43] ah.. those are welcome, but if you're going to submit a patch to the core it will be needed
[20:44] hazmat: point noted!!
[20:44] koolhead17, charms don't count.... those are your own unless derived from an existing one
[20:44] hazmat: We can move forward with this given timing
[20:44] hazmat: But I have a pretty bad feeling about the back and forth going on
[20:44] koolhead17, a patch to the docs would count as a core contrib though, the way things are structured in the repo
[20:46] hazmat: command line knows about provider, placement, and env; placement knows about env and provider; provider knows about env and placement
[20:47] hazmat: and in the end we're solving an extremely simple problem, one that right now would work fine with "placement = name" in the provider class
[20:47] niemeyer, i pushed it into placement because otherwise we'd be duplicating the logic in the commands
[20:47] the pick_policy stuff is just a factoring out of what the cli uses to do exactly that
[20:47] hazmat: We don't need any logic in the commands either right now
[20:48] hazmat: We'd be totally fine just hardcoding the placement in the provider
[20:48] niemeyer, then the user has no selection?
[20:48] niemeyer, yeah.. i could drop list_policies easily
[20:48] hazmat: Yep.. that sounds fine right now
[20:48] hazmat: I mean, giving the user no selection
[20:48] hazmat: Each provider has only a single option that works
[20:48] hmmm
[20:48] hazmat: the --placement option is a lie, pretty much
[20:49] i'd like to introduce new policies in the cycle
[20:49] niemeyer, only against local
[20:49] a min/max machines policy, if we can get lxc working on ec2, would be sweet for example
[20:49] hazmat: I also feel bad about state.placement poking into provider.config
[20:49] hazmat: It knows about the provider schema, when it shouldn't
[20:49] yeah..
[20:50] niemeyer, with the provider not having selection anymore.. somebody has to poke at the config
[20:50] hazmat: This is the default..
[20:50] ?
[20:50] hazmat: The first entry in the preferences list
[20:51] hazmat: get_placement_policies()
[20:51] niemeyer, it is the default in the absence of user choice
[20:51] or configuration
[20:51] hazmat: If we're putting the user choice in the provider's configuration, the provider should validate the user choice, not state.placement
[20:52] i'm crazy tired btw... i might be a little dense picking things up.. i've had 3hrs sleep over the last 48
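The pick_policy() factoring under discussion works roughly like this: take the user's requested policy (cli flag or environment config), check it against the provider's supported list, and fall back to the first entry of the provider's preferences as the default. A minimal sketch; pick_policy and get_placement_policies are names from the log, but these signatures are assumptions, not juju's actual API:

```python
def pick_policy(preference, provider):
    """Pick a placement policy: the user's preference (from the cli
    or environment config) if the provider supports it, else the
    provider's default (the first entry in its preference list).

    Sketch only; the real juju code may differ.
    """
    policies = provider.get_placement_policies()
    if preference is None:
        return policies[0]  # provider default
    if preference not in policies:
        raise ValueError(
            "unsupported placement policy %r; provider supports: %s"
            % (preference, ", ".join(policies)))
    return preference
```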
[20:52] hazmat: Ouch
[20:52] actually i exaggerate, it's more like 36
[20:52] hazmat: go sleep :D
[20:52] koolhead17, much to do for the release
[20:53] ooh. when is it happening?
[20:53] hazmat: So, a suggestion to clean things up:
[20:53] koolhead17, well we're trying to get into the oneiric cycle, so we need to be there with some testing time; at this point that's monday
[20:53] (btw, again, I'm fine with doing this later if you'd rather not work further on that)
[20:54] niemeyer, okay.. suggest away
[20:54] waoo. ok
[20:54] 1) Remove the placement option from the command line entirely
[20:55] 2) Support it in the provider's configuration
[20:55] 3) Validate it just like we validate the rest of the provider configuration
[20:55] 4) Make Provider.get_placement_policy() a single item again, which is either the default preferred for the provider, or the placement config
[20:55] 5) Kill pick_policy()
[20:56] niemeyer, so basically revert to previous and drop the preference arg
[20:56] hazmat: Well, you've also moved to support config in the env
[20:58] niemeyer, ? that support was already there (placement config in env yaml)
[20:58] hazmat: I missed it then
[20:58] niemeyer, that's why the previous usage was doing things like serializing the env, so it could pick out the value on the cli
[20:59] actually i think i yanked that a few branches back
[20:59] hazmat: Hmmm..
[20:59] hazmat: Yeah, I'm pretty sure it wasn't in the branch I reviewed last
[21:00] hazmat: Either way, it looks like a good idea.. certainly more sensible than the command line option
[21:00] hazmat: It will also avoid the wide distribution of knowledge going on
[21:00] hazmat: and probably become trivial
[21:01] so we'll rely on the env yaml schema for validation.. yeah.. that sounds reasonable
[21:01] bcsaller, you still around? just curious if you had any concern over dropping cli placement
[21:06] niemeyer, okay, i'll have a look at reverting and simplifying after i finish up the cli support and do some end-to-end testing
[21:06] i think i'm gonna have a nap first though
[21:07] hazmat: Awesome, thanks a lot for that, and please do.. well deserved
[21:07] hazmat: I'm pushing some additional tweaks to the Go bundling fixes
[21:07] hazmat: It wasn't packing directories, now it will
[21:07] doh..
[21:07] niemeyer, cool, saw some of the commits floating by
[21:07] hazmat: Not in the sense you probably think, though
[21:08] hazmat: zip files work fine without intervening directories
[21:08] hazmat: people have used placement as a kludge for some things, like doing deployments all to one machine, apparently
[21:08] hazmat: e.g. "foo/bar" is unpacked fine
[21:08] but I'm not really opposed
[21:08] hazmat: but after a while they started to pack "foo/" as an entry too
[21:08] hazmat: as a consequence the Go port wasn't handling empty directories
[21:08] bcsaller, but they could do that via env.yaml config just as well afaics
[21:08] Yeah
[21:09] hazmat: yes, but if you can picture them having to change that file between deploys to get the effect they want, then we have other work to do ;)
[21:09] <_mup_> juju/go-charm-bits r13 committed by gustavo@niemeyer.net
[21:09] <_mup_> Bundle directories into the zip as well, so that empty directories are
[21:09] <_mup_> handled properly.
[21:10] bcsaller, better to come up with a min/max placement to solve the issue i think
[21:10] bcsaller, afaics there's two usage modes.. fast deploy against local
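Niemeyer's five steps boil down to: the placement choice lives in the provider's env.yaml configuration, gets validated alongside the rest of the provider config, and Provider.get_placement_policy() returns a single value again, with no cli option and no pick_policy(). A minimal sketch under assumed names (MachineProvider and placement_policies here are illustrative, not juju's actual classes):

```python
class MachineProvider:
    """Sketch of the simplified placement flow suggested above.
    Names and structure are assumptions, not the real juju code.
    """

    # Ordered preferences; the first entry is the provider default.
    placement_policies = ("unassigned", "local")

    def __init__(self, config):
        # Validate placement just like the rest of the provider
        # configuration, as the env.yaml schema would.
        placement = config.get("placement")
        if placement is not None and placement not in self.placement_policies:
            raise ValueError("invalid placement policy: %r" % (placement,))
        self.config = config

    def get_placement_policy(self):
        # A single item again: the configured value, else the default.
        return self.config.get("placement", self.placement_policies[0])
```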
[21:10] hazmat: I think SpamapS and adam_g have both used it and would be better to ask than me. I don't strongly feel the need to keep it, but I don't want to make it harder to do something people can already do
[21:10] actually that's it... i don't think folks are doing it ad hoc
[21:10] ie per deploy
[21:11] you might be right
[21:11] the intention was to expand on the allowed models though
[21:11] if we had min/max placement, with some pre-booted background machines, and a max cost of machines (max machines), i think people would use that in preference to the kludge usage
[21:11] to pair a cpu-bound and an i/o-bound set of services, for example
[21:12] bcsaller, local placement doesn't do that
[21:12] although i see your point
[21:12] hazmat: right, it's not about local
[21:12] Done!
[21:12] Dirs, modes, etc.. all work well. Tested with "zip" as well to make sure everything is compatible.
[21:12] niemeyer: that is such a satisfying word.. Done
[21:13] I'll go outside for a while to check out a bit of daylight..
[21:13] bcsaller: Very true!
[21:13] bcsaller, hmmm.. maybe it's magic pie in the sky.. but ideally juju could sample and do that for us, rebalancing units as needed
[21:13] to get max efficiency out of machines
[21:13] crazy talk
[21:14] hazmat: our model has always been to build the primitive first and then think about controlling layers later
[21:14] bcsaller, but that sort of manual placement isn't easily encoded in a policy
[21:14] manual or strategic ... agreed that modeling that is hard
[21:14] bcsaller, that's admin knowledge of machine usage and assigned units, with manual assignment to a particular machine when deploying
[21:14] and the basis is usually cost savings
[21:15] or performance benefits
[21:15] utilization
[21:15] ie deploy the namenode on a machine with these characteristics
[21:15] we are not really prepared to assist on performance as we intend to isolate in VMs anyway
[21:16] bcsaller, it's more a question of capacity and machine characteristics guided by usage constraints
[21:16] and things move during their lifetime; it gets complex, like machine 1 goes down and those 2 services end up where?
[21:16] when deploying a service
[21:16] bcsaller, volume management will open up new doors on this stuff
[21:16] agreed, it will help
[21:17] it's just not clear that placement helps here
[21:17] but the perf thing is often about faster IPC, which we don't naturally aid
[21:17] because it's either trying to describe machine constraints of capacity, or against current usage assuming sampling
[21:17] for this scenario
[21:18] hmm
[21:18] hazmat: the placement strategies were supposed to be smarter than they are now, but yeah, in their current form they don't allow that kind of reasoning
[21:18] cross-az placement
[21:18] is interesting here though
[21:18] deploy these units... for this unit deploy in a different az
[21:18] yeah
[21:18] which the openstack stuff wants anyway
[21:19] but we're overloading placement policies; we'll want to have multiple policies with that scenario, given our current usage of placement
[21:19] or overlapping responsibility
[21:20] sounds like the choice is grow or die. I'd say grow, but we need more info before we do that
[21:21] ok, off for a while again
[21:23] yeah.. sleep time
[21:23] bbiab
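For reference, the empty-directory fix confirmed above ("Dirs, modes, etc.. all work well"): "foo/bar" unpacks fine without an intervening "foo/" entry, but an empty directory is lost unless the directory itself is written into the zip. A hypothetical Python companion to the earlier packing sketch; the actual r13 fix lives in the Go port:

```python
import os
import zipfile

def add_directory_entries(bundle, charm_dir):
    """Write explicit "name/" entries for every directory so that
    empty directories survive the bundle round-trip.

    Companion to the earlier pack_charm_dir sketch; hypothetical
    helper, not the actual fix (which is in the Go port).
    """
    for root, dirs, _ in os.walk(charm_dir):
        dirs[:] = [d for d in dirs if not d.startswith(".")]  # skip hidden dirs
        for d in dirs:
            path = os.path.join(root, d)
            # A trailing slash marks the entry as a directory.
            info = zipfile.ZipInfo(os.path.relpath(path, charm_dir) + "/")
            mode = os.stat(path).st_mode
            # Unix mode in the high bits, MS-DOS directory bit (0x10) low.
            info.external_attr = ((mode & 0xFFFF) << 16) | 0x10
            bundle.writestr(info, b"")
```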