[08:23] <lukasa> lazyPower: Ping =)
[09:45] <dweaver> Having a problem with LXC containers in Juju 1.23.3 on trusty with a vivid kernel (3.19.0-21-generic).  We ask Juju to deploy some LXC containers.  The containers start, but get no IP address.  Anyone want to give us any pointers on how to debug further?
[09:52] <dweaver> Seems to be a problem with DHCP responses not getting through to the container network namespace.
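A quick host-side sanity check for this DHCP path could look like the sketch below. It assumes a stock LXC setup where dnsmasq on the host answers DHCP for lxcbr0; if nothing on the host is bound to UDP port 67, containers will never get a lease. The function names are illustrative, and the parsing assumes the standard Linux `/proc/net/udp` layout.

```python
DHCP_SERVER_PORT = 67  # appears as hex 0043 in /proc/net/udp

def udp_listen_ports(proc_net_udp_text):
    """Parse /proc/net/udp text and return the set of local UDP ports in use."""
    ports = set()
    for line in proc_net_udp_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 2:
            continue
        local_addr = fields[1]                       # e.g. '00000000:0043'
        ports.add(int(local_addr.split(':')[1], 16))
    return ports

def dhcp_server_listening(path='/proc/net/udp'):
    """True if anything on the host is bound to the DHCP server port."""
    with open(path) as f:
        return DHCP_SERVER_PORT in udp_listen_ports(f.read())
```

If this returns False, the problem is on the host side (dnsmasq not running or not bound to the bridge); if True, the next step is watching the bridge with tcpdump to see whether DHCP requests from the container actually arrive.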
[14:23] <lazyPower> lukasa: pong
[14:32] <lukasa> lazyPower: Mind if I PM?
[14:32] <lazyPower> no, go right on ahead
[14:48] <frankban> cory_fu: thanks for your review! what do I need to do for the promulgation at this point? just wait?
[14:52] <cory_fu> We need a full charmer (which I am not, quite yet) to do the promulgation.  It would also be good to make sure marcoceppi has reviewed it, since he is the maintainer of the charms it will be supplementing / replacing.
[14:52]  * rick_h_ sends bribes of car parts to marcoceppi 
[14:53] <marcoceppi> frankban: rick_h_ it seems fine to me, I recently had Redis Labs look over it, and had a hard time explaining its structure. I worry it may be slightly more complex a layout for a relatively simple execution
[14:53] <marcoceppi> but that's not really a blocker, I'll take a pass at it and a few others later today
[14:54] <frankban> marcoceppi: thanks! what did you find complex about the charm?
[14:55] <rick_h_> marcoceppi: all good, ty. We're trying to help bring up charms we use in prod so we're helping maintain more of them.
[14:56] <marcoceppi> frankban: I had a hard time explaining the logic tree from the services framework, but this is probably ultimately a fail on my part of lack of general understanding of the framework
[14:57] <rick_h_> marcoceppi: :( we thought this was the cool new way to write charms these days
[14:57] <rick_h_> we try to keep up :P
[14:59] <marcoceppi> Don't take my lack of familiarity with a framework as it not being the cool way, I'm not too hip these days ;)
[15:00] <cory_fu> The framework has its upsides, mainly when dealing with multiple dependencies, but it's acknowledged that it does make the simple case harder to follow
[15:11] <khuss> i'm using Juju charms to install openstack. I also have to make some additional changes in the nova.conf file. However changes seem to be overwritten when I reboot the machine.
[15:11] <khuss> How do I make changes in the configuration files so that they are not overwritten by Juju
[15:13] <marcoceppi> khuss: you need to embody these changes in either the nova charm, or as a subordinate. The OpenStack charms own those configuration files, so you really can't (and shouldn't) make changes out of band of Juju
[15:13] <marcoceppi> khuss: out of curiosity, what are you trying to change in nova.conf?
[15:14] <khuss> marcoceppi: changes in the network_api_class. metadata agent configuration etc
[15:16] <khuss> marcoceppi: if I use a subordinate charm, do I edit the file directly or use some helper functions? Exactly how does Juju determine if files need to be overwritten?
[15:17] <khuss> marcoceppi: we also have changes in cinder.conf and neutron.conf
[15:19] <jrwren> khuss: any juju hook might rewrite the file. config-changed and relation add/remove being most likely.
[15:24] <khuss> jrwren: then there has to be a dependency to make sure that the last charm did the right configuration?
[15:25] <jrwren> khuss: I am not sure what you mean.
[15:25] <jrwren> khuss: typically a charm owns a config file.
[15:26] <marcoceppi> khuss: juju doesn't manage the config files
[15:26] <marcoceppi> the openstack charms do
[15:27] <marcoceppi> they just happen to own those files, each charm is different. You either need to modify the openstack charms to do what you want, or in some cases, like cinder and nova, you can build subordinate charms which can communicate with the parent charm (nova, cinder, etc) on what values should be in the configuration file
[15:27] <khuss> marcoceppi: yes.. i understand. The nova-compute charm edits the nova.conf file. Now if I want to add my changes, I can probably create a subordinate charm which will edit the same config file
[15:27] <jrwren> khuss: There are some facilities in some charms which can help you go off the tested path. e.g. nova-compute charm has a nova-config setting. You could use that to have the charm write to a different config file and then merge with your needs yourself, but you are on your own :)
[15:27] <marcoceppi> khuss: no, the subordinate editing the file won't work, the openstack charms have special relations which allow you to convey a context describing the changes it should include when building the configuration file
[15:28] <khuss> marcoceppi: not sure how the subordinate charms communicate the information with the parent charm.. For example, I want to add "security_group_api=nova" in the nova.conf file
[15:29] <khuss> marcoceppi: how would subordinate charm communicate this with parent charm
[15:31] <jrwren> khuss: it wouldn't. the parent charm would need to be updated to support this.
[15:33] <khuss> you mean to say we need to have a custom version of nova-compute to add some changes in the nova.conf file?
[15:34] <jrwren> khuss: I think so, but I am not sure.
[15:37] <marcoceppi> okay
[15:37] <marcoceppi> khuss: the openstack charms, a lot of them, have been created with a special relationship that allows you to send data, over the relation wire, to tell it how to write some portions of the configuration
[15:38] <marcoceppi> khuss: this is something that's unique to the openstack charms, it's so vendors can create "charms" for cinder without having to fork the cinder charm over and over again, etc
[15:38] <marcoceppi> I'm not sure it works as well with the nova charm, but cinder definitely has this concept, though it's more geared for cinder backends, the logic is there
[15:39] <marcoceppi> Depending on what the changes are, and how many you have to make, will ultimately drive if this should be a subordinate or a fork
[15:39] <marcoceppi> if it's one or two changes in nova.conf for general/generic options, having it as a configuration option on the charm is probably the best way to go; if it's a lot of stuff, or for a custom compute plugin, a subordinate is probably the way to go
[15:40] <khuss> marcoceppi: apart from looking into the code, is there any other way to see what relationships are supported and what type of data can be sent?
[15:40] <marcoceppi> khuss: OpenStack charms use this notion of "contexts", which are Python classes that let you define the file you want to change, the section of that file (if supported), and the keys/values to set. You then send that "context" over the relation
[15:41] <marcoceppi> there's not much documentation around this, and only a few charms (cinder, definitely; neutron to an extent; and nova, I think) support it so far
[15:42] <marcoceppi> it's best to find a charm that closely resembles what you're trying to do, there are a few cinder charms, and some neutron charms that touch both their services and nova-compute
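The payload shape being described could be sketched like this. The relation key `subordinate_configuration` and the file/section/key layout below are assumptions for illustration, not the actual OpenStack charm interface; a real subordinate would consult the principal charm's code for the expected format.

```python
import json

def build_subordinate_context():
    """Build the 'context' a subordinate might convey to its principal:
    the target file, the section within it, and the keys/values to set."""
    return {
        'nova.conf': {                       # target configuration file
            'DEFAULT': {                     # target section
                'security_group_api': 'nova',
                'network_api_class': 'nova.network.api.API',
            },
        },
    }

def relation_payload(context):
    """Flatten the context to the flat key/value strings relation data allows."""
    return {'subordinate_configuration': json.dumps(context)}

# In a real hook, this payload would be handed to relation-set, e.g. via
# charmhelpers.core.hookenv.relation_set(**relation_payload(context)),
# and the principal charm would merge it when it next renders nova.conf.
```

The design point is that relation data is just flat key/value strings, so structured "what to write where" information gets serialised (here as JSON) and deserialised by the principal charm before rendering.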
[15:42] <khuss> marcoceppi: if you have some examples of "sending context over the relation" it would be great
[15:43] <marcoceppi> khuss: this is one that I'm familiar with: https://jujucharms.com/u/marcoceppi/cinder-vnx/trusty/4
[15:43] <marcoceppi> but it's a storage backend for cinder, so it sends a slightly different context. An openstack-charmer could probably give you a better example, mailing juju@lists.ubuntu.com would be a good start to that conversation if none are listening right now :)
[15:48] <khuss> marcoceppi: ok  thanks I will take a look at this charm
[16:54] <pmatulis> hello, re restrictions (juju block|unblock), does an unblock override a restriction set in environments.yaml ?
[18:42] <moqq> so, jujud on both of our environments is sitting at 100% cpu
[18:42] <moqq> for no apparent reason
[18:43] <moqq> has anyone seen this before?
[18:44] <gQuigs> moqq: disk isn't full? (that's the only time I've seen it)
[18:44] <moqq> nope
[18:44] <moqq> lots of space
[18:44] <moqq> i think its talking to mongo a lot because the mongo process is spiking quite a bit too
[18:47] <amcleod-> moqq: strace -c?
[18:50] <moqq> amcleod-:  95.44    1.453291        4844       300        17 futex
[18:51] <moqq> amcleod- https://gist.github.com/tysonmalchow/d77900349832ebea23aa
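A small helper for pulling the dominant syscall out of an `strace -c` summary like the one pasted above. This is a sketch assuming the standard `strace -c` table layout (% time, seconds, usecs/call, calls, errors, syscall); the function name is illustrative.

```python
def dominant_syscall(strace_c_output):
    """Return (syscall_name, percent_time) for the entry with the highest
    share of time in an `strace -c` summary table, or None if none found."""
    best = None
    for line in strace_c_output.splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue
        try:
            percent = float(fields[0])   # first column is '% time'
        except ValueError:
            continue                     # skip header and separator lines
        name = fields[-1]                # last column is the syscall name
        if best is None or percent > best[1]:
            best = (name, percent)
    return best
```

On the output pasted above this would report futex at ~95%, which by itself only says the process is blocked on lock contention, not what it is contending over.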
[18:52] <amcleod-> hm right so maybe db or fs?
[18:52] <moqq> mongodb is sitting above average too
[18:53] <moqq> but more spikes and less consistent than juju
[18:53] <moqq> and if it was waiting on IO shouldn’t that be an interrupt/idle wait? why would that cause cpu to spike
[18:54] <amcleod-> i dont know if userspace io would be shown that way, is there something like gluster on it?
[18:56] <amcleod-> moqq: im just guessing now
[18:57] <moqq> no, its an aws block device
[18:59] <amcleod-> moqq: maybe just strace the process and see what its doing, probably a lot of futex wait and nothing particularly helpful.
[18:59] <amcleod-> ..
[18:59] <moqq> yeah
[18:59] <moqq> that’s all i’m seeing
[19:00] <moqq> most time spent on futex
[19:00] <moqq> 95%+
[19:00] <moqq> i dont understand what kind of io it could even be concerned with that much when there are no commands and the cluster is otherwise idle??
[19:01] <amcleod-> well maybe its not io, maybe its db as you suggested
[19:01] <amcleod-> http://stackoverflow.com/questions/17211357/debug-a-futex-lock
[19:02] <amcleod-> ^also not particularly helpful
[19:13] <amcleod-> moqq: maybe check mongo processes? http://docs.mongodb.org/manual/reference/method/db.currentOp/
[19:15] <amcleod-> moqq: ....errr.. http://compgroups.net/comp.unix.programmer/futex-high-sys-cpu-utilization/1391360
[19:32] <moqq> seemed so promising but no dice
[19:36] <amcleod-> hmm :/
[19:37] <moqq> trying to connect to mongo now but it seems to be failing.. where does the juju mongo db keep its logs?
[19:40] <amcleod-> not sure sorry
[20:20] <jrwren> syslog, I thought.
[23:08] <hazmat> anyone in town for the docker hackathon?