/srv/irclogs.ubuntu.com/2013/05/29/#juju.txt

=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
<sidnei> hazmat: it was during upgrade charm yes  [01:17]
<sidnei> hazmat: on the client yes  [01:18]
=== defunctzombie is now known as defunctzombie_zz
=== dannf` is now known as dannf
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== wedgwood_away is now known as wedgwood
=== jmb^ is now known as jmb
=== wedgwood is now known as wedgwood_away
=== wedgwood_away is now known as wedgwood
<jcastro> evilnickveitch: old-docs are building again  [13:48]
<jcastro> evilnickveitch: they were totally broken  [13:48]
<jcastro> evilnickveitch: when will a staging site be ready? I need to add things to policy and create a best practices page but I don't want to do it on old docs  [13:48]
=== mramm2 is now known as mramm
<_mup_> Bug #1185496 was filed: Packaging needs to support parallel installation with pyjuju <juju:New> <juju-core:New> <https://launchpad.net/bugs/1185496>  [15:42]
<arosales> Weekly Charm meeting coming up at the top of the hour  [15:51]
<arosales> Notes at the pad: http://pad.ubuntu.com/7mf2jvKXNa  [15:51]
=== defunctzombie_zz is now known as defunctzombie
<arosales> Hangout URL = https://plus.google.com/hangouts/_/c34514986acd40525bcd5fa34bcb11f10d118ba8?authuser=0&hl=en  [15:51]
<arosales> and if you want to watch the live web cast, the link is http://youtu.be/mQT25UVtLjk  [15:52]
<JoseAntonioR> arosales: if marco or jaca  [15:53]
<JoseAntonioR> or jcastro are around, you can use ubuntuonair  [15:53]
<jcastro> on it!  [15:53]
<arosales> jcastro, thanks  [15:54]
<arosales> jcastro, any bits I need to change?  [15:54]
<jcastro> nope  [15:54]
<jcastro> got it  [15:54]
<JoseAntonioR> jcastro: remember that what should go on the iframe is http://www.youtube.com/embed/mQT25UVtLjk  [15:56]
<JoseAntonioR> not what you pasted in there :)  [15:56]
<jcastro> oh  [15:56]
<jcastro> fixed!  [15:57]
<JoseAntonioR> now, gtg, classes ahead  [15:57]
<jcastro> thanks for the save!  [15:57]
<JoseAntonioR> jcastro: not fixed, fix the whole link instead  [15:57]
<JoseAntonioR> what you did won't work  [15:57]
<jcastro> oh I need the whole embed link from arosales?  [15:57]
<JoseAntonioR> (remove watch?v=)  [15:58]
* JoseAntonioR disappears  [15:58]
<jcastro> oh I don't have that  [15:58]
<JoseAntonioR> jcastro: remove the link that you pasted, put http://www.youtube.com/embed/mQT25UVtLjk instead  [15:58]
<JoseAntonioR> it's a different page  [15:58]
<jcastro> I did put that link in there  [15:58]
<jcastro> hard refresh  [15:58]
<JoseAntonioR> according to my browser, it's http://www.youtube.com/watch?v=embed/mQT25UVtLjk&feature=youtu.be  [15:59]
<JoseAntonioR> but anyways  [15:59]
<JoseAntonioR> have fun!  [15:59]
=== dosaboy_ is now known as dosaboy
=== defunctzombie is now known as defunctzombie_zz
<ahasenack> is there a way to hack the juju client to use my own charm store? Like, when I run "juju deploy foo", it grabs foo from somewhere other than the charm store  [17:45]
<ahasenack> because we have branches up for review for a month already, and that would make our lives easier  [17:46]
<ahasenack> so we wouldn't have to specify urls for each charm, just the name  [17:46]
<jcastro> I believe the plan was to allow any arbitrary url there  [17:50]
<jcastro> hazmat: ?  [17:50]
<hazmat> ahasenack, no  [17:50]
<hazmat> jcastro, highlander style is the intent.  [17:50]
<hazmat> jcastro, it had an env var config once upon a time, but that got shot down  [17:51]
<hazmat> maybe we can revisit  [17:51]
<hazmat> in future there's a notion this could be an env property, with the env downloading charms  [17:51]
<ahasenack> ok  [17:52]
<hazmat> ahasenack, atm the only prod implementation of the store is the charm store one that pulls from lp  [17:56]
<hazmat> so effectively you'd end up with the same content  [17:56]
<hazmat> unless it was reimplemented  [17:56]
<mthaddon> hi folks, does juju enforce (at certain times) whether services are exposed or unexposed?  [18:04]
<mthaddon> i.e. if you've manually "exposed" services via an external security group command, could juju stomp on that?  [18:05]
<ahasenack> hazmat: I was hoping for something as simple as some sort of "prefix" url, to use when the charm name was unqualified  [18:05]
<ahasenack> like, if I say "juju deploy foobar", it uses lp:~ahasenack/something/foobar  [18:05]
<mthaddon> ahasenack: any reason why you can't deploy from a local branch instead?  [18:06]
<ahasenack> mthaddon: I can, and with deployer it's certainly easier, but sometimes I would just like "juju deploy foo" to use *our* charm and not worry about the url  [18:08]
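(For reference, the local-branch workaround mthaddon mentions looks roughly like this in pyjuju/juju-core of this era; the repository path and branch name below are made up for illustration, and some releases also honour a JUJU_REPOSITORY environment variable so the flag can be omitted.)

    # Lay the charm out in a local repository keyed by series, then deploy by name.
    mkdir -p /home/user/charms/precise
    bzr branch lp:~ahasenack/charms/precise/foobar/trunk /home/user/charms/precise/foobar   # hypothetical branch
    juju deploy --repository=/home/user/charms local:precise/foobar

This still names a path once per charm, but avoids spelling out a URL on every deploy.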
<hazmat> mthaddon, it monitors the groups for the instance and checks their content periodically (in practice 30s)... but extraneous/external groups aren't checked.. however, in ec2, because juju launches the instance and (sans vpc) there's no way to attach groups, juju's enforcement suffices; not so in openstack  [18:08]
<mthaddon> hazmat: we've just had two instances where a service that's been unexposed (as far as juju is concerned) but with manual secgroup rules for over a month suddenly had those rules deleted by the bootstrap node - possibly triggered by seeing an instance in error state or some other event - does that sound feasible?  [18:09]
<mthaddon> (in openstack)  [18:10]
=== wedgwood is now known as wedgwood_away
=== defunctzombie_zz is now known as defunctzombie
<hazmat> mthaddon, the secgroup rules should be getting polled regularly and acted upon  [19:14]
<hazmat> from juju  [19:14]
<hazmat> based on the exposed ports  [19:15]
<hazmat> mthaddon, if it's something you want, I'd suggest attaching a new group to those instances with the ports you want  [19:15]
<mthaddon> ok, so the fact that it's been over a month or so is just luck on our part - it could have happened any time after deploying the service  [19:15]
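(One approach consistent with hazmat's suggestion is to keep the manual rules in a security group that juju never manages. A sketch with the OpenStack nova client of the time; the group name, port, and instance id are illustrative, and exact subcommand names may vary by client release.)

    # Create a separately-managed group and attach it to the existing instance.
    # Since juju only reconciles the groups it created, rules here should survive its polling.
    nova secgroup-create ops-manual "manually managed ports"
    nova secgroup-add-rule ops-manual tcp 443 443 0.0.0.0/0
    nova add-secgroup <instance-id> ops-manual

For ports a charm itself opens, "juju expose <service>" remains the supported path.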
=== defunctzombie is now known as defunctzombie_zz
=== BradCrittenden is now known as bac________
=== defunctzombie_zz is now known as defunctzombie
<paraglade> HexChat: 2.9.5 ** OS: Linux 3.2.0-44-generic x86_64 ** Distro: Ubuntu "precise" 12.04 ** CPU: 8 x Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz (GenuineIntel) @ 800MHz ** RAM: Physical: 7.7GB, 55.8% free ** Disk: Total: 450.6GB, 70.7% free ** VGA: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller ** Sound: HDA-Intel - HDA Intel PCH1: HDA-Intel - HDA NVidia ** Ethernet: Intel Corporation 82579LM Gigabit Network Connection ** Uptime: 3h 49m 39s **  [19:59]
=== defunctzombie is now known as defunctzombie_zz
=== bac________ is now known as bac
<ahasenack> a relation-set command on one unit is what triggers a *_changed hook run in the other?  [20:08]
<ahasenack> or rather, between services  [20:10]
<ahasenack> it feels like calling relation-get in a _joined hook is always racy  [20:13]
=== defunctzombie_zz is now known as defunctzombie
<marcoceppi> ahasenack: you can try to call relation-get. there's no guarantee that there will be data in either the joined or changed hook. both should check for values. though joined only gets called once per relation; changed gets called at least once  [20:24]
<ahasenack> marcoceppi: I'm not liking the mixing of _joined and _changed that most charms seem to be doing  [20:25]
<ahasenack> marcoceppi: that "if there is no value, exit 0 and wait to be called again". That doesn't mean you will be called again out of the blue, it's just because you are (likely) in _joined, not _changed yet  [20:25]
<marcoceppi> ahasenack: not true. joined is called once, then changed is called once. after that, each time data on the wire changes, changed is called again  [20:27]
<ahasenack> there are times where changed won't have data in relation-get  [20:28]
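(The guard marcoceppi describes usually looks something like this in a *-relation-changed hook. A minimal sketch for a hypothetical charm; the relation and key names are made up.)

    #!/bin/bash
    # hooks/db-relation-changed
    set -e
    host=$(relation-get host)
    password=$(relation-get password)
    # The remote unit may not have published its settings yet; exit cleanly
    # and rely on the hook firing again when the relation data changes.
    if [ -z "$host" ] || [ -z "$password" ]; then
        exit 0
    fi
    # ...configure the service using $host and $password here...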
<ahasenack> marcoceppi: I'm seeing something in a charm, and have a question, maybe it's a difference between gojuju and pyjuju  [20:34]
<ahasenack> marcoceppi: the question is  [20:34]
<ahasenack> marcoceppi: in _joined I do, for example, 10 relation-set  [20:34]
<ahasenack> marcoceppi: do these variables get propagated to the other side immediately, or only after _joined finishes running?  [20:35]
<ahasenack> do you know?  [20:35]
<ahasenack> or, put it another way  [20:35]
<ahasenack> let's say _joined has two commands: relation-set foo=bar  [20:35]
<ahasenack> sleep 10h  [20:35]
<marcoceppi> should be investigator  [20:35]
<marcoceppi> immediate *  [20:36]
<ahasenack> and _joined on the other side does relation-get foo in a loop  [20:36]
<ahasenack> will it see foo=bar before 10h?  [20:36]
<marcoceppi> but I can't confirm  [20:36]
<ahasenack> marcoceppi: another one: you said each time data on the wire changes, _changed is called  [20:41]
<ahasenack> marcoceppi: is that batched up? For example, a hook calls 10 relation-set commands  [20:41]
<ahasenack> only when the hook ends will the other _changed hook be called? Or right after each relation-set?  [20:41]
<marcoceppi> another great question ahasenack, I don't actually know. I assume it would create 10 separate changed requests, which is probably why a lot of charms send wire data all at once at the end of the hook execution  [20:50]
<ahasenack> hm  [20:51]
<ahasenack> marcoceppi: the relation-set doesn't seem to be immediate  [21:01]
<ahasenack> http://pastebin.ubuntu.com/5714726/ we have these as joined hooks  [21:02]
<ahasenack> and the lower one is not printing the foo, bar or baz values  [21:02]
<marcoceppi> ahasenack: interesting  [21:05]
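(The "send it all at once" practice marcoceppi alludes to works because relation-set accepts several key=value pairs in a single invocation, so the remote side sees one batch of data rather than many incremental changes. A sketch with illustrative names only.)

    #!/bin/bash
    # hooks/website-relation-joined
    set -e
    # One relation-set call with all the settings, instead of many separate
    # calls scattered through the hook.
    relation-set hostname="$(unit-get private-address)" port=80 protocol=http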
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
<dpb1> hi all  [21:27]
<dpb1> Is "machine recycling" no longer something that juju does in juju-core?  [21:27]
<sarnold> what's "machine recycling"?  [21:28]
<dpb1> my term for re-using instances from destroyed services/removed units.  [21:29]
<dpb1> s/instances/machines/  [21:29]
<sarnold> ah, as a way of avoiding the expense of giving a machine back to the cloud, just to re-request, and re-image the thing?  [21:30]
<dpb1> i.e., when I do juju remove-unit ubuntu/0; juju add-unit ubuntu; pyjuju would not spin up another instance, but would re-use the machine that ubuntu/0 was on  [21:30]
<dpb1> sarnold: right.  I never liked the feature, since it would never work to do this in practice.  [21:31]
<sarnold> hehe, d'oh.  [21:31]
<sarnold> just when it sounded kind of convenient. :)  [21:31]
<sarnold> though I could imagine that the goal would be to provide a clean slate for deployment every time  [21:31]
<dpb1> sarnold: ha, well.  i mean, charms just never co-existed well  [21:31]
<dpb1> sarnold: so is this a goal of juju-core?  [21:31]
<dpb1> I mean, a purposeful change?  [21:31]
<sarnold> dunno, sorry, was just curious what you meant. :)  [21:32]
<dpb1> sarnold: ah, ok haha  [21:32]
* thumper reads scroll-back  [21:47]
<thumper> sarnold: hi there, I worked on this, so can give some info  [21:47]
<thumper> and dpb1  [21:48]
<dpb1> ah yes, would be nice. :)  [21:48]
<sarnold> thumper: cool :)  [21:48]
<thumper> dpb1: right now, inside juju-core there is a machine assignment policy that is hard coded for each provider (ec2, openstack, etc)  [21:48]
<thumper> right now we are looking to make this configurable...  [21:48]
<thumper> some time back, machines used to get reused  [21:48]
<thumper> however we couldn't guarantee the state of the machines at all  [21:49]
<thumper> as there isn't good uninstall stuff around charms  [21:49]
<thumper> so a new policy was created to not reuse machines at all  [21:49]
<thumper> and the different providers changed to use that policy  [21:49]
<thumper> this is a temporary measure  [21:49]
<thumper> until we get containerisation in  [21:49]
<dpb1> let me guess, when lxc isolation gets in...  [21:49]
<thumper> when you install a unit onto a machine, it makes it dirty  [21:49]
<dpb1> yes, that would be good  [21:49]
<thumper> lxc containers will appear as "machines"  [21:50]
<thumper> so when you install a unit into a container, the container gets dirty  [21:50]
<thumper> but the machine running the container is clean  [21:50]
<thumper> so when you destroy the unit  [21:50]
<thumper> the container goes away with it  [21:50]
<thumper> but the hosting machine will still be clean  [21:50]
<thumper> so will get reused  [21:50]
<thumper> this is work in progress now  [21:50]
<dpb1> thumper: do you know approximately how far out containerisation is?  (just looking to slot work on my end as well, not looking to hold your feet to the fire)  [21:50]
<thumper> real soon now™  [21:51]
<dpb1> ah ok..  [21:51]
<thumper> in practice, probably several weeks  [21:51]
<thumper> if it is more than a month, something is horribly wrong  [21:51]
<dpb1> I'll keep an eye out then, thanks.  [21:51]
<thumper> np  [21:51]
<dpb1> thanks for the explanation as well, makes sense  [21:51]
<sarnold> thumper: cool, thanks :)  [21:52]
<thumper> happy to help  [21:52]
<sarnold> thumper: will that containerisation work allow co-mingling charms in a container without using the subordinate hack?  [21:52]
<thumper> sarnold: probably, in the same way you can install (or force) two charms onto one machine now  [21:53]
<thumper> ideally we'd like to record the intent with some form of constraint  [21:53]
<thumper> but yes...  [21:53]
<thumper> hopefully  [21:53]
<thumper> containers will look just like machines  [21:53]
<thumper> so you could deploy --force to a container  [21:53]
* dpb1 is looking forward to this feature arriving. :)  [21:54]
<sarnold> thumper: nice, thanks :)  [21:54]
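(The "force two charms onto one machine" behaviour thumper mentions maps to juju-core's machine-placement flag, spelled --force-machine in early releases and --to later; once containers land, the same mechanism is expected to address them. Machine numbers below are hypothetical.)

    juju deploy mysql --to 1          # place the new unit on existing machine 1
    juju add-unit mysql --to lxc:1    # later releases: an LXC container on machine 1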
<jcastro> heya thumper  [22:10]
<jcastro> you pretty much got that dave guy plugged in or need anything from me?  [22:11]
<thumper> hi jcastro  [22:12]
<thumper> jcastro: I think me and FunnyLookinHat are good for now  [22:12]
<dpb1> marcoceppi: thx for the review on landscape-client.  Understood about jitsu, I'll keep an eye out for what is coming next.  [22:13]
<jcastro> oh that was his irc  [22:13]
<jcastro> funny  [22:13]
<thumper> jcastro: yeah :)  [22:13]
<FunnyLookinHat> Yo - thumper - quick Q - what sort of density could we expect when it comes to using the containerization you were suggesting (via LXC, I believe)?  And how would the overhead factor in?  Could I host, say, 50 PHP apps on one 4 GB box?  [22:19]
<thumper> FunnyLookinHat: sorry NFI  [22:19]
<thumper> lxc is supposed to be very low overhead  [22:20]
<thumper> but I guess the density will depend on how much the different PHP apps are being hit  [22:20]
=== defunctzombie is now known as defunctzombie_zz
<FunnyLookinHat> NFI?  :)  [22:20]
<thumper> if they are 50 apps doing not much, then I guess so  [22:20]
<thumper> No Fucking Idea  [22:20]
<FunnyLookinHat> haha  [22:20]
<sarnold> I'd expect it to depend upon the way the lxc containers are created.. if it is hardlinked or bindmounted files shared amongst them, it ought to be very cheap, not much more than just hosting 50 php apps on one box without containers..  [22:21]
<FunnyLookinHat> What if we were to try to put both the DB and Apache within each LXC (for portability and back-up purposes)?  I guess I'm not sure how to predict multiple container'd daemons all floating around working with limited resources  [22:21]
<sarnold> .. but if the files are independent to each container, the kernel won't be able to share RSS of them all, and it'll be a lot like 50 VMs, but with shared kernel space instead... not -as- bad as 50 vms, but not like a single machine, either..  [22:22]
=== defunctzombie_zz is now known as defunctzombie
<FunnyLookinHat> sarnold, mind if I PM?  [22:23]
<FunnyLookinHat> err - nvr mind  [22:23]
<kyhwana> FunnyLookinHat: hmm, you'd be hit with a bunch of IO? though again, probably not more than if you had them all running outside a container  [22:23]
<FunnyLookinHat> So - our goal is to deploy a web application on a per-customer basis...  with a better density than Juju currently offers (i.e. one instance per VPS)  [22:24]
<FunnyLookinHat> For security reasons each user has their own directory with the full PHP application - encryption keys, etc.  [22:24]
<FunnyLookinHat> Jorge had suggested containers as a solution - which I believe are being worked on currently...  [22:25]
<kyhwana> FunnyLookinHat: i'd make sure you deploy a kernel with user namespaces tho, if you get root inside a lxc container without user NS's, that's the same as root outside it, apparently  [22:25]
<FunnyLookinHat> So I'm trying to estimate what sort of density I could expect - if I could usually get away with 50 per 4 GB box using the same MySQL and Apache daemons, what would I get using LXC containers?  [22:25]
<sarnold> kyhwana: mostly, apparmor confinement on the lxc container provides some protection against e.g. echo b > /proc/sysrq-trigger ...  [22:25]
<kyhwana> FunnyLookinHat: i'd _guess_ about the same  [22:26]
<kyhwana> sarnold: hmm, what about a kernel priv escalation exploit?  [22:27]
<sarnold> kyhwana: those are also available to userns roots :) just how well-written do you think all those 50-ish filesystem drivers are? :)  [22:27]
<kyhwana> sarnold: :P  [22:29]
