=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
[01:17] hazmat: it was during upgrade charm yes
[01:18] hazmat: on the client yes
=== defunctzombie is now known as defunctzombie_zz
=== dannf` is now known as dannf
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== wedgwood_away is now known as wedgwood
=== jmb^ is now known as jmb
=== wedgwood is now known as wedgwood_away
=== wedgwood_away is now known as wedgwood
[13:48] evilnickveitch: old-docs are building again
[13:48] evilnickveitch: they were totally broken
[13:48] evilnickveitch: when will a staging site be ready? I need to add things to policy and create a best practices page but I don't want to do it on old docs
=== mramm2 is now known as mramm
[15:42] <_mup_> Bug #1185496 was filed: Packaging needs to support parallel installation with pyju
[15:51] Weekly Charm meeting coming up at the top of the hour
[15:51] Notes at the pad: http://pad.ubuntu.com/7mf2jvKXNa
=== defunctzombie_zz is now known as defunctzombie
[15:51] Hangout URL = https://plus.google.com/hangouts/_/c34514986acd40525bcd5fa34bcb11f10d118ba8?authuser=0&hl=en
[15:52] and if you want to watch the live web cast, the link is http://youtu.be/mQT25UVtLjk
[15:53] arosales: if marco or jaca
[15:53] or jcastro are around, you can use ubuntuonair
[15:53] on it!
[15:54] jcastro, thanks
[15:54] jcastro, any bits I need to change?
[15:54] nope
[15:54] got it
[15:56] jcastro: remember that what should go on the iframe is http://www.youtube.com/embed/mQT25UVtLjk
[15:56] not what you pasted in there :)
[15:56] oh
[15:57] fixed!
[15:57] now, gtg, classes ahead
[15:57] thanks for the save!
[15:57] jcastro: not fixed, fix the whole link instead
[15:57] what you did won't work
[15:57] oh I need the whole embed link from arosales?
[15:58] (remove watch?v=)
[15:58] * JoseAntonioR disappears
[15:58] oh I don't have that
[15:58] jcastro: remove the link that you pasted, put http://www.youtube.com/embed/mQT25UVtLjk instead
[15:58] it's a different page
[15:58] I did put that link in there
[15:58] hard refresh
[15:59] according to my browser, it's http://www.youtube.com/watch?v=embed/mQT25UVtLjk&feature=youtu.be
[15:59] but anyways
[15:59] have fun!
=== dosaboy_ is now known as dosaboy
=== defunctzombie is now known as defunctzombie_zz
[17:45] is there a way to hack the juju client to use my own charm store? Like, when I run "juju deploy foo", it grabs foo from somewhere other than the charm store
[17:46] because we have branches up for review for a month already, and that would make our lives easier
[17:46] so we wouldn't have to specify urls for each charm, just the name
[17:50] I believe the plan was to allow any arbitrary url there
[17:50] hazmat: ?
[17:50] ahasenack, no
[17:50] jcastro, highlander style is the intent.
[17:51] jcastro, it had an env var config once upon a time, but that got shot down
[17:51] maybe we can revisit
[17:51] in future there's a notion this could be an env property, with the env downloading charms
[17:52] ok
[17:56] ahasenack, atm the only prod implementation of the store is the charm store one that pulls from lp
[17:56] so effectively you'd end up with the same content
[17:56] unless it was reimplemented
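
For reference, the practical alternative to a custom store — deploying the charm from a local repository, which comes up again a few lines below — looked roughly like this. A minimal sketch assuming the juju client of this era; the path and charm name are invented:

    # a local repository is just a directory with one subdirectory per series,
    # e.g. ~/charms/precise/foobar containing metadata.yaml and hooks/
    juju deploy --repository=~/charms local:precise/foobar

    # or point JUJU_REPOSITORY at it once and drop the flag
    export JUJU_REPOSITORY=~/charms
    juju deploy local:precise/foobar
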
[18:04] hi folks, does juju enforce (at certain times) whether services are exposed or unexposed?
[18:05] i.e. if you've manually "exposed" services via an external security group command, could juju stomp on that?
[18:05] hazmat: I was hoping for something simple, like some sort of "prefix" url to use when the charm name is unqualified
[18:05] like, if I say "juju deploy foobar", it uses lp:~ahasenack/something/foobar
[18:06] ahasenack: any reason why you can't deploy from a local branch instead?
[18:08] mthaddon: I can, and with deployer it's certainly easier, but sometimes I would just like "juju deploy foo" to use *our* charm and not worry about the url
[18:08] mthaddon, it monitors the groups for the instance and checks their content periodically (in practice every 30s)... but extraneous/external groups aren't checked.. however in ec2, because juju launches the instance and, sans vpc, there's no way to attach groups afterwards, juju's enforcement suffices; not so in openstack
[18:09] hazmat: we've just had two instances where a service that's been unexposed (as far as juju is concerned) but with manual secgroup rules for over a month suddenly had those rules deleted by the bootstrap node - possibly triggered by seeing an instance in error state or some other event - does that sound feasible?
[18:10] (in openstack)
=== wedgwood is now known as wedgwood_away
=== defunctzombie_zz is now known as defunctzombie
[19:14] mthaddon, the secgroup rules should be getting polled regularly and acted upon
[19:14] from juju
[19:15] based on the exposed ports
[19:15] mthaddon, if it's something you want, i'd suggest attaching a new group to those instances with the ports you want
[19:15] ok, so the fact that it's been over a month or so is just luck on our part - it could have happened any time after deploying the service
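
hazmat's suggestion above — keep manually managed rules in a separate security group that juju did not create, since only the juju-managed groups get reconciled — might look roughly like this with the OpenStack nova client of this era. A sketch only; the group name, port and instance id are invented:

    # create a group juju knows nothing about and put the manual rule there
    nova secgroup-create manual-extra "manually managed ports"
    nova secgroup-add-rule manual-extra tcp 443 443 0.0.0.0/0

    # attach it to the instance alongside the juju-managed groups
    nova add-secgroup <instance-id> manual-extra
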
=== defunctzombie is now known as defunctzombie_zz
=== BradCrittenden is now known as bac________
=== defunctzombie_zz is now known as defunctzombie
[19:59] HexChat: 2.9.5 ** OS: Linux 3.2.0-44-generic x86_64 ** Distro: Ubuntu "precise" 12.04 ** CPU: 8 x Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz (GenuineIntel) @ 800MHz ** RAM: Physical: 7.7GB, 55.8% free ** Disk: Total: 450.6GB, 70.7% free ** VGA: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller ** Sound: HDA-Intel - HDA Intel PCH1: HDA-Intel - HDA NVidia ** Ethernet: Intel Corporation 82579LM Gigabit Network Connection ** Uptime: 3h 49m 39s **
=== defunctzombie is now known as defunctzombie_zz
=== bac________ is now known as bac
[20:08] a relation-set command on one unit is what triggers a *_changed hook run in the other?
[20:10] or rather, between services
[20:13] it feels like calling relation-get in a _joined hook is always racy
=== defunctzombie_zz is now known as defunctzombie
[20:24] ahasenack: you can try to call relation-get. there's no guarantee that there will be data in either the joined or changed hook. both should check for values. though joined only gets called once per relation, changed gets called at least once
[20:25] marcoceppi: I'm not liking the mixing of _joined and _changed that most charms seem to be doing
[20:25] marcoceppi: that "if there is no value, exit 0 and wait to be called again". That doesn't mean you will be called again out of the blue, it's just because you are (likely) in _joined, not _changed yet
[20:27] ahasenack: not true. joined is called once, then changed is called once.
[20:27] after that, each time data on the wire changes, changed is called again
[20:28] there are times when changed won't have data in relation-get
[20:34] marcoceppi: I'm seeing something in a charm, and have a question, maybe it's a difference between gojuju and pyjuju
[20:34] marcoceppi: the question is
[20:34] marcoceppi: in _joined I do, for example, 10 relation-set
[20:35] marcoceppi: do these variables get propagated to the other side immediately, or only after _joined finishes running?
[20:35] do you know?
[20:35] or, put it another way
[20:35] let's say _joined has two commands: relation-set foo=bar
[20:35] sleep 10h
[20:35] should be investigator
[20:36] immediate *
[20:36] and _joined on the other side does relation-get foo in a loop
[20:36] will it see foo=bar before 10h?
[20:36] but I can't confirm
[20:41] marcoceppi: another one: you said each time data on the wire changes, _changed is called
[20:41] marcoceppi: is that batched up? For example, a hook calls 10 relation-set commands
[20:41] only when the hook ends will the other _changed hook be called? Or right after each relation-set?
[20:50] another great question ahasenack, I don't actually know. I assume it would create 10 separate changed requests, which is probably why a lot of charms send their data all at once at the end of the hook execution
[20:51] hm
[21:01] marcoceppi: the relation-set doesn't seem to be immediate
[21:02] http://pastebin.ubuntu.com/5714726/ we have these as joined hooks
[21:02] and the lower one is not printing the foo, bar or baz values
[21:05] ahasenack: interesting
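
marcoceppi's advice above — treat relation data as possibly absent in both -joined and -changed, exit 0 and wait for a later -changed run — is usually written as a hook along these lines. A minimal sketch; the relation name, keys and file path are invented, and it assumes the standard hook tools (relation-get, relation-set, unit-get, juju-log):

    #!/bin/sh
    # hooks/db-relation-changed (illustrative only)
    set -e

    host=$(relation-get host)
    password=$(relation-get password)

    # no guarantee the remote side has set anything yet: bail out cleanly
    # and let a later -changed invocation pick the work up
    if [ -z "$host" ] || [ -z "$password" ]; then
        juju-log "relation data not ready yet, waiting for the next -changed"
        exit 0
    fi

    # ... configure the service with $host and $password here ...

    # publishing everything in a single relation-set call sidesteps the
    # question raised above of how several separate calls get batched
    relation-set hostname="$(unit-get private-address)" ready=true
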
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
[21:27] hi all
[21:27] Is "machine recycling" no longer something that juju does in juju-core?
[21:28] what's "machine recycling"?
[21:29] my term for re-using instances from destroyed services/removed units.
[21:29] s/instances/machines/
[21:30] ah, as a way of avoiding the expense of giving a machine back to the cloud, just to re-request, and re-image the thing?
[21:30] i.e., when I do juju remove-unit ubuntu/0; juju add-unit ubuntu; pyjuju would not spin up another instance, but would re-use the machine that ubuntu/0 was on
[21:31] sarnold: right. I never liked the feature, since it would never work to do this in practice.
[21:31] hehe, d'oh.
[21:31] just when it sounded kind of convenient. :)
[21:31] though I could imagine that the goal would be to provide a clean slate for deployment every time
[21:31] sarnold: ha, well. i mean, charms just never co-existed well
[21:31] sarnold: so is this a goal of juju-core?
[21:31] I mean, a purposeful change?
[21:32] dunno, sorry, was just curious what you meant. :)
[21:32] sarnold: ah, ok haha
[21:47] * thumper reads scroll-back
[21:47] sarnold: hi there, I worked on this, so can give some info
[21:48] and dpb1
[21:48] ah yes, would be nice. :)
[21:48] thumper: cool :)
[21:48] dpb1: right now, inside juju-core there is a machine assignment policy that is hard-coded for each provider (ec2, openstack, etc)
[21:48] right now we are looking to make this configurable...
[21:48] some time back, machines used to get reused
[21:49] however we couldn't guarantee the state of the machines at all
[21:49] as there isn't good uninstall stuff around charms
[21:49] so a new policy was created to not reuse machines at all
[21:49] and the different providers changed to use that policy
[21:49] this is a temporary measure
[21:49] until we get containerisation in
[21:49] let me guess, when lxc isolation gets in...
[21:49] when you install a unit onto a machine, it makes it dirty
[21:49] yes, that would be good
[21:50] lxc containers will appear as "machines"
[21:50] so when you install a unit into a container, the container gets dirty
[21:50] but the machine running the container is clean
[21:50] so when you destroy the unit
[21:50] the container goes away with it
[21:50] but the hosting machine will still be clean
[21:50] so will get reused
[21:50] this is work in progress now
[21:50] thumper: do you know approximately how far out containerisation is? (just looking to slot work on my end as well, not looking to hold your feet to the fire)
[21:51] real soon now™
[21:51] ah ok..
[21:51] in practice, probably several weeks
[21:51] if it is more than a month, something is horribly wrong
[21:51] I'll keep an eye out then, thanks.
[21:51] np
[21:51] thanks for the explanation as well, makes sense
[21:52] thumper: cool, thanks :)
[21:52] happy to help
[21:52] thumper: will that containerisation work allow co-mingling charms in a container without using the subordinate hack?
[21:53] sarnold: probably, in the same way you can install (or force) two charms onto one machine now
[21:53] ideally we'd like to record the intent with some form of constraint
[21:53] but yes...
[21:53] hopefully
[21:53] containers will look just like machines
[21:53] so you could deploy --force to a container
[21:54] * dpb1 is looking forward to this feature arriving. :)
[21:54] thumper: nice, thanks :)
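
One practical consequence of the no-reuse policy thumper describes above: removing a unit leaves its (now dirty) machine allocated, so until containerisation lands it has to be cleaned up by hand. A sketch only; the machine number is invented and the exact command name varied between releases of the era:

    juju remove-unit ubuntu/0
    juju status                  # the machine ubuntu/0 ran on is still listed, now empty
    juju terminate-machine 3     # spelled destroy-machine in some releases
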
[22:10] heya thumper
[22:11] you pretty much got that dave guy plugged in or need anything from me?
[22:12] hi jcastro
[22:12] jcastro: I think me and FunnyLookinHat are good for now
[22:13] marcoceppi: thx for the review on landscape-client. Understood about jitsu, I'll keep an eye out for what is coming next.
[22:13] oh that was his irc
[22:13] funny
[22:13] jcastro: yeah :)
[22:19] Yo - thumper - quick Q - what sort of density could we expect when it comes to using the containerization you were suggesting (via LXC, I believe)? And how would the overhead factor in? Could I host, say, 50 PHP apps on one 4 GB box?
[22:19] FunnyLookinHat: sorry, NFI
[22:20] lxc is supposed to be very low overhead
[22:20] but I guess the density will depend on how much the different PHP apps are being hit
=== defunctzombie is now known as defunctzombie_zz
[22:20] NFI? :)
[22:20] if they are 50 apps doing not much, then I guess so
[22:20] No Fucking Idea
[22:20] haha
[22:21] I'd expect it to depend upon the way the lxc containers are created.. if it is hardlinked or bind-mounted files shared amongst them, it ought to be very cheap, not much more than just hosting 50 php apps on one box without containers..
[22:21] What if we were to try to put both the DB and Apache within each LXC (for portability and back-up purposes)? I guess I'm not sure how to predict multiple container'd daemons all floating around working with limited resources
[22:22] .. but if the files are independent to each container, the kernel won't be able to share RSS of them all, and it'll be a lot like 50 VMs, but with shared kernel space instead... not -as- bad as 50 vms, but not like a single machine, either..
=== defunctzombie_zz is now known as defunctzombie
[22:23] sarnold, mind if I PM?
[22:23] err - nvr mind
[22:23] FunnyLookinHat: hmm, you'd be hit with a bunch of IO? though again, probably not more than if you had them all running outside a container
[22:24] So - our goal is to deploy a web application on a per-customer basis... with a better density than Juju currently offers (i.e. one instance per VPS)
[22:24] For security reasons each user has their own directory with the full PHP application - encryption keys, etc.
[22:25] Jorge had suggested containers as a solution - which I believe are being worked on currently...
[22:25] FunnyLookinHat: i'd make sure you deploy a kernel with user namespaces tho, if you get root inside an lxc container without user NS's, that's the same as root outside it, apparently
[22:25] So I'm trying to estimate what sort of density I could expect - if I could usually get away with 50 per 4 GB box using the same MySQL and Apache daemons, what would I get using LXC containers?
[22:25] kyhwana: mostly, apparmor confinement on the lxc container provides some protection against e.g. echo b > /proc/sysrq-trigger ...
[22:26] FunnyLookinHat: i'd _guess_ about the same
[22:27] sarnold: hmm, what about a kernel priv escalation exploit?
[22:27] kyhwana: those are also available to userns roots :) just how well-written do you think all those 50-ish filesystem drivers are? :)
[22:29] sarnold: :P
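
kyhwana's point above about user namespaces can be checked on the host before trusting root inside a container. A rough sketch, assuming the LXC userspace tools are installed:

    # lxc-checkconfig reports which namespaces the running kernel supports,
    # including user namespaces
    lxc-checkconfig | grep -i namespace

    # a cruder check: /proc/self/uid_map only exists on kernels built
    # with user namespace support (CONFIG_USER_NS)
    test -e /proc/self/uid_map && echo "user namespaces available"
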