=== defunctzombie is now known as defunctzombie_zz | ||
=== defunctzombie_zz is now known as defunctzombie | ||
sidnei | hazmat: it was during upgrade charm yes | 01:17 |
sidnei | hazmat: on the client yes | 01:18 |
=== defunctzombie is now known as defunctzombie_zz | ||
=== dannf` is now known as dannf | ||
=== defunctzombie_zz is now known as defunctzombie | ||
=== defunctzombie is now known as defunctzombie_zz | ||
=== wedgwood_away is now known as wedgwood | ||
=== jmb^ is now known as jmb | ||
=== wedgwood is now known as wedgwood_away | ||
=== wedgwood_away is now known as wedgwood | ||
jcastro | evilnickveitch: old-docs are building again | 13:48 |
jcastro | evilnickveitch: they were totally broken | 13:48 |
jcastro | evilnickveitch: when will a staging site be ready? I need to add things to policy and create a best practices page but I don't want to do it on old docs | 13:48 |
=== mramm2 is now known as mramm | ||
_mup_ | Bug #1185496 was filed: Packaging needs to support parallel installation with pyju <juju:New> <juju-core:New> <https://launchpad.net/bugs/1185496> | 15:42 |
arosales | Weekly Charm meeting coming up at the top of the hour | 15:51 |
arosales | Notes at the pad: http://pad.ubuntu.com/7mf2jvKXNa | 15:51 |
=== defunctzombie_zz is now known as defunctzombie | ||
arosales | Hangout URL = https://plus.google.com/hangouts/_/c34514986acd40525bcd5fa34bcb11f10d118ba8?authuser=0&hl=en | 15:51 |
arosales | and if you want to watch the live web cast, the link is http://youtu.be/mQT25UVtLjk | 15:52 |
JoseAntonioR | arosales: if marco or jcastro are around, you can use ubuntuonair | 15:53 |
jcastro | on it! | 15:53 |
arosales | jcastro, thanks | 15:54 |
arosales | jcastro, any bits I need to change? | 15:54 |
jcastro | nope | 15:54 |
jcastro | got it | 15:54 |
JoseAntonioR | jcastro: remember that what should go on the iframe is http://www.youtube.com/embed/mQT25UVtLjk | 15:56 |
JoseAntonioR | not what you pasted in there :) | 15:56 |
jcastro | oh | 15:56 |
jcastro | fixed! | 15:57 |
JoseAntonioR | now, gtg, classes ahead | 15:57 |
jcastro | thanks for the save! | 15:57 |
JoseAntonioR | jcastro: not fixed, fix the whole link instead | 15:57 |
JoseAntonioR | what you did won't work | 15:57 |
jcastro | oh I need the whole embed link from arosales? | 15:57 |
JoseAntonioR | (remove watch?v=) | 15:58 |
* JoseAntonioR disappears | 15:58 | |
jcastro | oh I don't have that | 15:58 |
JoseAntonioR | jcastro: remove the link that you pasted, put http://www.youtube.com/embed/mQT25UVtLjk instead | 15:58 |
JoseAntonioR | it's a different page | 15:58 |
jcastro | I did put that link in there | 15:58 |
jcastro | hard refresh | 15:58 |
JoseAntonioR | according to my browser, it's http://www.youtube.com/watch?v=embed/mQT25UVtLjk&feature=youtu.be | 15:59 |
JoseAntonioR | but anyways | 15:59 |
JoseAntonioR | have fun! | 15:59 |
=== dosaboy_ is now known as dosaboy | ||
=== defunctzombie is now known as defunctzombie_zz | ||
ahasenack | is there a way to hack the juju client to use my own charm store? Like, when I run "juju deploy foo", it grabs foo from somewhere other than the charm store | 17:45 |
ahasenack | because we have branches up for review for a month already, and that would make our lives easier | 17:46 |
ahasenack | so we wouldn't have to specify urls for each charm, just the name | 17:46 |
jcastro | I believe the plan was to allow any arbitrary url there | 17:50 |
jcastro | hazmat: ? | 17:50 |
hazmat | ahasenack, no | 17:50 |
hazmat | jcastro, highlander style is the intent. | 17:50 |
hazmat | jcastro, it had an env var config once upon a time, but that got shot down | 17:51 |
hazmat | maybe we can revisit | 17:51 |
hazmat | in future there's a notion this could be an env property, with the env downloading charms | 17:51 |
ahasenack | ok | 17:52 |
hazmat | ahasenack, atm the only prod implementation of the store is the charm store one that pulls from lp | 17:56 |
hazmat | so effectively you'd end up with the same content | 17:56 |
hazmat | unless it was reimplemented | 17:56 |
mthaddon | hi folks, does juju enforce (at certain times) whether services are exposed or unexposed? | 18:04 |
mthaddon | i.e. if you've manually "exposed" services via an external security group command, could juju stomp on that? | 18:05 |
ahasenack | hazmat: I was hoping for something as simple as some sort of "prefix" url, to use when the charm name was unqualified | 18:05 |
ahasenack | like, if I say "juju deploy foobar", it uses lp:~ahasenack/something/foobar | 18:05 |
mthaddon | ahasenack: any reason why you can't deploy from a local branch instead? | 18:06 |
ahasenack | mthaddon: I can, and with deployer it's certainly easier, but sometimes I would just like "juju deploy foo" to use *our* charm and not worry about the url | 18:08 |
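(A minimal sketch of the local-repository route mthaddon mentions, assuming the pyjuju layout of `<repository>/<series>/<charm>` and a `default-series` set in environments.yaml; the directory and the `foobar` charm name are placeholders taken from ahasenack's example above.)

```bash
# Hypothetical layout; the branch URL comes from ahasenack's example above.
mkdir -p ~/charms/precise
bzr branch lp:~ahasenack/something/foobar ~/charms/precise/foobar

# Either name the repository on each deploy...
juju deploy --repository=$HOME/charms local:precise/foobar

# ...or set it once, so a bare local charm name is enough:
export JUJU_REPOSITORY=$HOME/charms
juju deploy local:foobar
```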
hazmat | mthaddon, it monitors the instance's groups and checks their content periodically (in practice every 30s)... but extraneous/external groups aren't checked.. however in ec2, because juju launches the instance and (sans vpc) there's no way to attach extra groups, juju's enforcement suffices; not so in openstack | 18:08 |
mthaddon | hazmat: we've just had two instances where a service that's been unexposed (as far as juju is concerned) but has had manual secgroup rules for over a month suddenly had those rules deleted by the bootstrap node - possibly triggered by seeing an instance in error state or some other event - does that sound feasible? | 18:09 |
mthaddon | (in openstack) | 18:10 |
=== wedgwood is now known as wedgwood_away | ||
=== defunctzombie_zz is now known as defunctzombie | ||
hazmat | mthaddon, the secgroup rules should be getting polled regularly and acted upon | 19:14 |
hazmat | from juju | 19:14 |
hazmat | based on the exposed ports | 19:15 |
hazmat | mthaddon, if it's something you want I'd suggest attaching a new group to those instances with the ports you want | 19:15 |
mthaddon | ok, so the fact that it's been over a month or so is just luck on our part - it could have happened any time after deploying the service | 19:15 |
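(Two ways to keep ports open without juju stomping on them, as a rough sketch: let juju manage the rules itself via expose, or attach an extra group it does not own, per hazmat's suggestion. The service name, ports, instance id and the exact nova subcommands here are illustrative assumptions.)

```bash
# Juju-managed: declare the ports in a charm hook and expose the service, so
# the periodic secgroup reconciliation keeps the rules instead of removing them.
open-port 80/tcp            # run inside a charm hook
juju expose my-service      # "my-service" is a placeholder

# Out-of-band: attach a separate group juju didn't create (and so won't
# reconcile) to the instance -- rough nova-client invocation of that era:
nova secgroup-create extra-ports "manually managed rules"
nova secgroup-add-rule extra-ports tcp 443 443 0.0.0.0/0
nova add-secgroup <instance-id> extra-ports
```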
=== defunctzombie is now known as defunctzombie_zz | ||
=== BradCrittenden is now known as bac________ | ||
=== defunctzombie_zz is now known as defunctzombie | ||
=== defunctzombie is now known as defunctzombie_zz | ||
=== bac________ is now known as bac | ||
ahasenack | a relation-set command on one unit is what triggers a *_changed hook run in the other? | 20:08 |
ahasenack | or rather, between services | 20:10 |
ahasenack | it feels like calling relation-get in a _joined hook is always racy | 20:13 |
=== defunctzombie_zz is now known as defunctzombie | ||
marcoceppi | ahasenack: you can try to call relation-get. there's no guarantee that there will be data in either the joined or changed hook; both should check for values. though joined only gets called once per relation, changed gets called at least once | 20:24 |
ahasenack | marcoceppi: I'm not liking the mixing of _joined and _changed that most charms seem to be doing | 20:25 |
ahasenack | marcoceppi: that "if there is no value, exit 0 and wait to be called again". That doesn't mean you will be called again out of the blue, it's just because you are (likely) in _joined, not _changed yet | 20:25 |
marcoceppi | ahasenack: not true. joined is called once, then changed is called once. after that, each time data on the wire changes, changed is called again | 20:27 |
marcoceppi | there are times where changed won't have data in relation-get | 20:28 |
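(A minimal bash sketch of the "check for values" pattern marcoceppi describes; the hook name `db-relation-changed` and the `host` key are placeholders.)

```bash
#!/bin/bash
# hooks/db-relation-changed -- hypothetical hook; neither -joined nor
# -changed is guaranteed to see the remote unit's data yet, so check first.
set -e

host=$(relation-get host)
if [ -z "$host" ]; then
    # Remote side hasn't run relation-set yet; -changed fires again when
    # its data changes, so exit cleanly and wait for that.
    exit 0
fi

echo "configuring against $host"
# ...render config, restart the service, etc.
```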
ahasenack | marcoceppi: I'm seeing something in a charm, and have a question, maybe it's a difference between gojuju and pyjuju | 20:34 |
ahasenack | marcoceppi: the question is | 20:34 |
ahasenack | marcoceppi: in _joined I do, for example, 10 relation-set | 20:34 |
ahasenack | marcoceppi: do these variables get propagated to the other side immediately, or only after _joined finishes running? | 20:35 |
ahasenack | do you know? | 20:35 |
ahasenack | or, put it another way | 20:35 |
ahasenack | let's say _joined has two commands: relation-set foo=bar | 20:35 |
ahasenack | sleep 10h | 20:35 |
marcoceppi | should be immediate | 20:35 |
ahasenack | and _joined on the other side does relation-get foo in a loop | 20:36 |
ahasenack | will it see foo=bar before 10h? | 20:36 |
marcoceppi | but I can't confirm | 20:36 |
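(Spelled out, the experiment ahasenack describes looks roughly like this; the relation/hook names are placeholders.)

```bash
# Unit A, hooks/website-relation-joined (name is a placeholder):
relation-set foo=bar
sleep 10h

# Unit B, hooks/website-relation-joined: poll until the value shows up.
while true; do
    foo=$(relation-get foo)
    [ -n "$foo" ] && break
    sleep 5
done
echo "saw foo=$foo"
```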
ahasenack | marcoceppi: another one: you said each time data on the wire changes, _changed is called | 20:41 |
ahasenack | marcoceppi: is that batched up? For example, a hook calls 10 relation-set commands | 20:41 |
ahasenack | only when the hook ends will the other _changed hook be called? Or right after each relation-set? | 20:41 |
marcoceppi | another great question ahasenack, I don't actually know. I assume it would create 10 separate changed requests, which is probably why a lot of charms send their data all at once at the end of the hook execution | 20:50 |
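(Whatever the batching behaviour turns out to be, relation-set accepts several key=value pairs in one invocation, so a hook can push everything in a single call at the end, which is the pattern marcoceppi mentions; the keys and values here are placeholders.)

```bash
# One relation-set at the end of the hook instead of ten along the way;
# key names and values are placeholders.
relation-set host=10.0.0.5 port=5432 database=app user=app
```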
ahasenack | hm | 20:51 |
ahasenack | marcoceppi: the relation-set doesn't seem to be immediate | 21:01 |
ahasenack | http://pastebin.ubuntu.com/5714726/ we have these as joined hooks | 21:02 |
ahasenack | and the lower one is not printing the foo, bar or baz values | 21:02 |
marcoceppi | ahasenack: interesting | 21:05 |
=== defunctzombie is now known as defunctzombie_zz | ||
=== defunctzombie_zz is now known as defunctzombie | ||
=== defunctzombie is now known as defunctzombie_zz | ||
=== defunctzombie_zz is now known as defunctzombie | ||
dpb1 | hi all | 21:27 |
dpb1 | Is "machine recycling" no longer something that juju does in juju-core? | 21:27 |
sarnold | what's "machine recycling"? | 21:28 |
dpb1 | my term for re-using instances from destroyed services/removed units. | 21:29 |
dpb1 | s/instances/machines/ | 21:29 |
sarnold | ah, as a way of avoiding the expense of giving a machine back to the cloud, just to re-request, and re-image the thing? | 21:30 |
dpb1 | i.e., when I do juju remove-unit ubuntu/0; juju add-unit ubuntu; pyjuju would not spin up another instance, but would re-use the machine that ubuntu/0 was on | 21:30 |
dpb1 | sarnold: right. I never liked the feature, since it would never work to do this in practice. | 21:31 |
sarnold | hehe, d'oh. | 21:31 |
sarnold | just when it sounded kind of convenient. :) | 21:31 |
sarnold | though I could imagine that the goal would be to provide a clean-slate for deployment every time | 21:31 |
dpb1 | sarnold: ha, well. i mean, charms just never co-existed well | 21:31 |
dpb1 | sarnold: so is this a goal of juju-core? | 21:31 |
dpb1 | I mean, a purposeful change? | 21:31 |
sarnold | dunno, sorry, was just curious what you meant. :) | 21:32 |
dpb1 | sarnold: ah, ok haha | 21:32 |
* thumper reads scroll-back | 21:47 | |
thumper | sarnold: hi there, I worked on this, so can give some info | 21:47 |
thumper | and dpb1 | 21:48 |
dpb1 | ah yes, would be nice. :) | 21:48 |
sarnold | thumper: cool :) | 21:48 |
thumper | dpb1: right now, inside juju-core there is a machine assignment policy that is hard coded for each provider (ec2, openstack, etc) | 21:48 |
thumper | right now we are looking to make this configurable... | 21:48 |
thumper | some time back, machines used to get reused | 21:48 |
thumper | however we couldn't guarantee the state of the machines at all | 21:49 |
thumper | as there isn't good uninstall stuff around charms | 21:49 |
thumper | so a new policy was created to not reuse machines at all | 21:49 |
thumper | and the different providers changed to use that policy | 21:49 |
thumper | this is a temporary measure | 21:49 |
thumper | until we get containerisation in | 21:49 |
dpb1 | let me guess, when lxc isolation gets in... | 21:49 |
thumper | when you install a unit onto a machine, it makes it dirty | 21:49 |
dpb1 | yes, that would be good | 21:49 |
thumper | lxc containers will appear as "machines" | 21:50 |
thumper | so when you install a unit into a container, the container gets dirty | 21:50 |
thumper | but the machine running the container is clean | 21:50 |
thumper | so when you destroy the unit | 21:50 |
thumper | the container goes away with it | 21:50 |
thumper | but the hosting machine will still be clean | 21:50 |
thumper | so will get reused | 21:50 |
thumper | this is work in progress now | 21:50 |
dpb1 | thumper: do you know approximately how far out containerisation is? (just looking to slot work on my end as well, not looking to hold your feet to the fire) | 21:50 |
thumper | real soon now™ | 21:51 |
dpb1 | ah ok.. | 21:51 |
thumper | in practice, probably several weeks | 21:51 |
thumper | if it is more than a month, something is horribly wrong | 21:51 |
dpb1 | I'll keep an eye out then, thanks. | 21:51 |
thumper | np | 21:51 |
dpb1 | thanks for the explanation as well, makes sense | 21:51 |
sarnold | thumper: cool, thanks :) | 21:52 |
thumper | happy to help | 21:52 |
sarnold | thumper: will that containerisation work allow co-mingling charms in a container without using the subordinate hack? | 21:52 |
thumper | sarnold: probably, in the same way you can install (or force) two charms onto one machine now | 21:53 |
thumper | ideally we'd like to record the intent with some form of constraint | 21:53 |
thumper | but yes... | 21:53 |
thumper | hopefully | 21:53 |
thumper | containers will look just like machines | 21:53 |
thumper | so you could deploy --force to a container | 21:53 |
* dpb1 is looking forward to this feature arriving. :) | 21:54 | |
sarnold | thumper: nice, thanks :) | 21:54 |
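(The placement syntax that eventually landed in juju-core looks roughly like this; it was still work in progress at the time of this conversation, so treat the exact flags as an assumption.)

```bash
# Deploy a unit into a fresh LXC container on machine 1 (placeholder id):
juju deploy mysql --to lxc:1

# Add another unit of the same service into a container on machine 2:
juju add-unit mysql --to lxc:2
```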
jcastro | heya thumper | 22:10 |
jcastro | you pretty much got that dave guy plugged in or need anything from me? | 22:11 |
thumper | hi jcastro | 22:12 |
thumper | jcastro: I think me and FunnyLookinHat are good for now | 22:12 |
dpb1 | marcoceppi: thx for the review on landscape-client. Understood about jitsu, I'll keep an eye out for what is coming next. | 22:13 |
jcastro | oh that was his irc | 22:13 |
jcastro | funny | 22:13 |
thumper | jcastro: yeah :) | 22:13 |
FunnyLookinHat | Yo - thumper - quick Q - what sort of density could we expect when it comes to using the containerization you were suggesting (via LXC, I believe)? And how would the overhead factor in? Could I host, say, 50 PHP apps on one 4 GB box? | 22:19 |
thumper | FunnyLookinHat: sorry NFI | 22:19 |
thumper | lxc is supposed to be very low overhead | 22:20 |
thumper | but I guess the density will depend on how much the different PHP apps are being hit | 22:20 |
=== defunctzombie is now known as defunctzombie_zz | ||
FunnyLookinHat | NFI? :) | 22:20 |
thumper | if they are 50 apps doing not much, then I guess so | 22:20 |
thumper | No Fucking Idea | 22:20 |
FunnyLookinHat | haha | 22:20 |
sarnold | I'd expect it to depend upon the way the lxc containers are created.. if it is hardlinked or bindmounted files shared amongst them, it ought to be very cheap, not much more than just hosting 50 php apps on one box without containers.. | 22:21 |
FunnyLookinHat | What if we were to try to put both the DB and Apache within each LXC (for portability and back-up purposes)? I guess I'm not sure how to predict multiple containerized daemons all floating around working with limited resources | 22:21 |
sarnold | .. but if the files are independent to each container, the kernel won't be able to share RSS of them all, and it'll be a lot like 50 VMs, but with shared kernel space instead... not -as- bad as 50 vms, but not like a single machine, either.. | 22:22 |
=== defunctzombie_zz is now known as defunctzombie | ||
FunnyLookinHat | sarnold, mind if I PM ? | 22:23 |
FunnyLookinHat | err - nvr mind | 22:23 |
kyhwana | FunnyLookinHat: hmm, you'd be hit with a bunch of IO? though again, probably not more than if you had them all running outside a container | 22:23 |
FunnyLookinHat | So - our goal is to deploy a web application on a per-customer basis... with a better density than Juju currently offers (i.e. one instance per VPS) | 22:24 |
FunnyLookinHat | For security reasons each user has their own directory with the full PHP application - encryption keys, etc. | 22:24 |
FunnyLookinHat | Jorge had suggested containers as a solution - which I believe are being worked on currently... | 22:25 |
kyhwana | FunnyLookinHat: i'd make sure you deploy a kernel with user namespaces tho, if you get root inside a lxc container without user NS's, that's the same as root outside it, apparently | 22:25 |
FunnyLookinHat | So I'm trying to estimate what sort of density I could expect - if I could usually get away with 50 per 4 GB box using the same MySQL and Apache daemons, what would I get using LXC containers? | 22:25 |
sarnold | kyhwana: mostly, apparmor confinement on the lxc container provides some protection against e.g. echo b > /proc/sysrq-trigger ... | 22:25 |
kyhwana | FunnyLookinHat: i'd _guess_ about the same | 22:26 |
kyhwana | sarnold: hmm, what about a kernel priv escalation exploit? | 22:27 |
sarnold | kyhwana: those are also available to userns roots :) just how well-written do you think all those 50-ish filesystem drivers are? :) | 22:27 |
kyhwana | sarnold: :P | 22:29 |