[09:01] <jamespage> gnuoy`, could you give me a review of https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/network-splits/+merge/247384
[09:02] <jamespage> I've manually run lint, amulet and unit tests
[09:15] <gnuoy`> jamespage, I don't see the link between network splits and the new ceph.create_pool line
[09:15] <jamespage> gnuoy`, there is none other than I noticed it was broken
[09:15] <jamespage> gnuoy`, I can do that as a separate MP if you like
[09:15] <gnuoy`> jamespage, no, that's fine.
[09:17] <gnuoy`> jamespage, approved
[09:21] <jamespage> gnuoy`, I've also switched osci to use next for rabbitmq-server
[09:22] <gnuoy`> thanks
[10:00] <jamespage> marcoceppi_, gnuoy`, btw I backported the latest juju-deployer release into the stable PPA
[10:28] <mwak_> hi
[10:34] <jamespage> gnuoy`, https://launchpad.net/charms/+milestone/15.01
[11:08] <occc> hi all
[11:08] <occc> i would like to run my django tests over several (say 100) servers
[11:09] <occc> i was considering deploying ec2 instances and make each of them run a part of the tests
[11:09] <occc> a friend of mine told me about juju
[11:11] <occc> so my need is basically to create 100 clones of the same instance, run a slightly different command-line on each of them and aggregate results
[11:47] <gnuoy`> jamespage, could I get a review of the mps associated with Bug #1403132 when you have a moment please?
[11:47] <mup> Bug #1403132: hacluster default ports conflict between openstack charms <landscape> <openstack> <smoosh> <cinder (Juju Charms Collection):New> <glance (Juju Charms Collection):New> <keystone (Juju Charms Collection):New> <neutron-api (Juju Charms Collection):Invalid> <nova-cloud-controller (Juju Charms Collection):New> <percona-cluster (Juju Charms Collection):Invalid> <swift-proxy (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1403132>
[11:52] <jamespage> gnuoy`, +1 that looks fine
[13:39] <jamespage> gnuoy`, all of the nrpe stuff has landed right?
[13:40] <gnuoy`> jamespage, almost, I think there was a mongodb mp that someone else was reviewing and ceilometer-agent slipped through the net but I'm hoping to have that done rsn
[15:14] <bloodearnest> arg, juju test doesn't respect $JUJU_HOME
[15:15] <bloodearnest> must have a hardcoded ~/.juju ?
[15:50] <lazyPower> bloodearnest: we've been superseding juju test with bundletester
[15:51] <bloodearnest> lazyPower, I assumed that was for testing bundles?
[15:51] <lazyPower> That was the original target; however, it executes everything in /tests that's chmod +x
[15:52] <lazyPower> and sniffs Makefile targets to run linting + an implicit charm proof
[15:52] <lazyPower> it's a parity tool for use with CI, as that's our exclusive tool for CI test running
[15:57] <bloodearnest> lazyPower, where can I get it?
[15:57] <lazyPower> bloodearnest: pip install bundletester - its not a package in the repos yet
[15:57] <lazyPower> we have that as a target for this/next cycle.
[15:58] <bloodearnest> ack
[16:00] <marcoceppi_> bloodearnest fwiw in the next few weeks when charm-tools is released, bundletester will replace juju-test, so running juju test will execute bundletester under the hood
[16:03] <bloodearnest> lazyPower, marcoceppi_ : and does bundletester respect $JUJU_HOME
[16:03] <bloodearnest> ?
[16:03] <marcoceppi_> bloodearnest: it should
[16:05] <bloodearnest> pipsi install bundletester works nicely, much better output
[16:06] <bloodearnest> does it leave the env intact on failure?
[16:07] <lazyPower> it can be configured to do either/or
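The either/or behavior lazyPower mentions is typically driven by a tests.yaml file next to the tests. The fragment below is a sketch from memory, not confirmed in this log; the key names (in particular `reset`) are assumptions:

```yaml
# Hypothetical bundletester tests.yaml fragment; key names are assumptions.
bootstrap: true   # bootstrap an environment if one isn't already running
reset: false      # don't tear the environment down between/after tests,
                  # leaving it intact so failures can be inspected
```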
[17:09] <skay> is block-storage-broker okay to use in trusty?
[17:10] <skay> I've been using trusty and now want to add storage for my postgresql relation
[17:10] <skay> but I see block-storage-broker is in precise. but maybe my searchfu is weak?
[17:12] <skay> s/relation/service
[17:14] <skay> are there docs I can look at on how to upgrade a charm? how much work would it be for a naive user?
[17:14] <skay> can I do it in a day?
[17:20]  * skay talks to someone about it
[17:20] <skay> for a workaround I'll collect the precise repo and tell mojo to think it is trusty
[17:20] <skay> so cowboy. much irresponsible
[17:35] <lazyPower> skay: the upgrade work is kind of dependent on how it's installing the package. ergo: if the package exists in both the trusty and precise repos it should be a fairly quick porting process, ensuring there are tests in the charm.
[17:36] <lazyPower> if the package doesn't exist in the trusty repo, that estimate can go up by an order of magnitude.
[17:36] <lazyPower> IIRC we only have block-storage-broker in precise due to the libs it's consuming - it's not using the AWS SDK last time I checked, it was using euca2ools
[17:36] <skay> lazyPower: I don't have time to look in to it today or over the weekend. maybe I can look in to it next week. it might be a good exercise in going through the process so that I can participate more in reviews and so on
[17:36] <cory_fu> whit: cf-weekly?
[17:36] <lazyPower> certainly - we're here to help :)
[19:01] <adalbas> mbruzek1, arosales , in one charm hook, is there a way to get specific information from the deployer (such as hostname, ip)?  something like what charmhelpers.core.host does, but for the deployer
[19:02] <mbruzek1> adalbas: Hello.  Let me see if I understand your question.  Is there a way to get the hostname/ip from a hook?
[19:03] <arosales> adalbas: hello
[19:04] <adalbas> mbruzek1, for instance, i want to add the hostname or ip to /etc/hosts of the machine where i'm deploying a charm in the service
[19:06] <mbruzek1> adalbas: that can be done.  What is the use case here?  The host computer (that is deploying the services) can already access the VMs in the cloud?
[19:08] <adalbas> mbruzek1, yes, but i want the vm in the cloud to have the host's hostname, since i'm not using external dns resolution
[19:10] <mbruzek1> adalbas: I am not sure I follow why you would want to do this but here is how I would get that information.
[19:10] <mbruzek1> juju status etcd/0 | grep public-address | cut -d: -f 2
[19:11] <mbruzek1> Where "etcd/0" is the deployed charm that you want to find the address for.
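mbruzek1's pipeline above can be sketched against canned status output. The sample text below is a hand-written stand-in for what `juju status etcd/0` prints; in practice you would pipe the real command's output instead:

```shell
# Illustrative stand-in for `juju status etcd/0` output (not captured
# from a real environment).
status_sample='etcd/0:
  agent-state: started
  public-address: 10.0.3.42'

# Same extraction pipeline as above: grab the public-address line and
# keep everything after the colon, stripping the leading space.
echo "$status_sample" | grep public-address | cut -d: -f2 | tr -d ' '
```

Note that `cut -d: -f2` leaves a leading space, hence the `tr -d ' '`; an address containing colons (IPv6) would need a different field split.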
[19:12] <adalbas> mbruzek1, it is actually the other way around. I'm deploying etcd/0 from my server (juju) and in the install hook, i want to be able to get the information from juju, so etcd/0 would have the ip information of its deployer.
[19:14] <adalbas> so in the hooks/install, i could add something like "echo $deployer_ip $deployer_hostname >> /etc/hosts"
[19:15] <mbruzek1> adalbas: I am not sure how to get that information from the install hook.
[19:16] <adalbas> mbruzek1, i see. i think i might need to change my approach here or have something in config.yaml.
[19:17] <adalbas> tks
[19:17] <mbruzek1> adalbas:  From the deployer you could embed the IP address into the VM like this.
[19:17] <mbruzek1> juju ssh etcd/0 "echo $deployer_ip $deployer_hostname >> /etc/hosts"
[19:17] <mbruzek1> adalbas: I am not sure the deployed vm knows where it came from. Lets say you are deploying from your laptop.
[19:18] <mbruzek1> when you bootstrap you create a Juju server on the cloud that you are using.  The juju client just talks to the bootstrap node which does all the provisioning of the other vms
[19:18] <adalbas> yes, exactly, i wasn't sure juju had a way to identify who is the deployer.
[19:19] <mbruzek1> So the VMs don't know about your laptop, and I am not sure how they would get that information because your bootstrap node is doing all the work.
[19:20] <adalbas> right. that makes sense.
[19:21] <mbruzek1> Since your laptop can modify the units that sounds like something that you could do from the laptop, I will check to see if the nodes can get that information, but I kind of doubt it.
[19:29]  * marcoceppi_ reads back
[19:30] <marcoceppi_> adalbas: that's not possible within juju
[19:30] <marcoceppi_> adalbas: if you needed the hostname/ip of the machine that's executing the deployment
[19:31] <marcoceppi_> you'd have to set it as a configuration value in the charm
[19:31] <marcoceppi_> and the person would have to `juju set mycharm my-hostname=$(hostname)`
[19:31] <adalbas> marcoceppi_, tks!
[19:32] <adalbas> yes, i ll set it on the config
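marcoceppi_'s config-value approach might look like this in the charm's config.yaml. The option names here are hypothetical, chosen to match the conversation, not taken from any real charm:

```yaml
# Hypothetical config.yaml options for passing the deploying machine's
# identity into the charm; juju itself cannot discover this.
options:
  deployer-hostname:
    type: string
    default: ""
    description: Hostname of the machine driving the deployment.
  deployer-ip:
    type: string
    default: ""
    description: IP address of the machine driving the deployment.
```

The operator would then run `juju set mycharm deployer-hostname=$(hostname) deployer-ip=<ip>`, and a hook could append `$(config-get deployer-ip) $(config-get deployer-hostname)` to /etc/hosts.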
[20:11] <hazmat> adalbas, the latest juju releases tie identity to the services/units in play
[20:12] <hazmat> along with apis for share/unshare env which create additional principals
[20:12] <hazmat> although perhaps i mistake the intent
[20:55] <skay> I've got juju bootstrapped to an openstack environment rather than a local one, but when I run protect-new and then workspace-new it attempts to make lxc containers
[20:55] <skay> I'm trying to figure out what step I missed
[21:14] <skay> oh, I see that mojo creates a local container, but when it gets to running the manifest juju behaves as I'd expect it to
[22:24] <whit> so bundletester doesn't support --upload-tools currently eh?