[03:59] <tvansteenburgh> stub are you around?
[04:50] <stub> tvansteenburgh: morning
[07:00] <reith> Somehow juju master advertises the wrong IP address for us, do you have any idea how to fix it?
[07:01] <reith> It worked for several months but now it sends `apiaddess` as a private ip address (10.0.*)
[09:39] <jamespage> thedac, feedback on mps
[10:36] <bloodearnest> hazmat, tvansteenburgh, dpb1: hello! Could one of you juju-deployers folks please merge this already-reviewed branch for deployer?
[10:36] <bloodearnest> https://code.launchpad.net/~bloodearnest/juju-deployer/annotate-branches/+merge/239270
[11:48] <hazmat> bloodearnest: looking
[11:49] <hazmat> bloodearnest: getting an error running the tests on import annotation test
[11:50] <hazmat> test_data/wiki-branch.yaml missing
[12:02] <tvansteenburgh> anyone know how to work around this problem?
[12:02] <tvansteenburgh> $ juju set postgresql install_keys="`cat ACCC4CF8.asc`"
[12:02] <tvansteenburgh> 2015-07-16 04:19:50 INFO install yaml.scanner.ScannerError: mapping values are not allowed here
[12:02] <tvansteenburgh> 2015-07-16 04:19:50 INFO install   in "<unicode string>", line 2, column 8:
[12:02] <tvansteenburgh> 2015-07-16 04:19:50 INFO install     Version: GnuPG v1
[12:03] <tvansteenburgh> i get the same error if i paste the key contents into juju-gui
[12:10] <tvansteenburgh> full traceback: http://pastebin.ubuntu.com/11887422/
[12:10] <tvansteenburgh> i'm just trying to figure out how to escape the colon
[12:13]  * tvansteenburgh tries \x3A ...
[12:13] <stub> tvansteenburgh: That property is yaml encoded as a string
[12:14] <stub> tvansteenburgh: So "echo '[' "'"`cat ACC.asc` "'"']'" or similar bash monstrosity that actually works
[12:15]  * stub wishes juju cli didn't make things difficult by trying to keep things simple
[12:17] <tvansteenburgh> stub: thanks i'll try that. i'm surprised it breaks in the same way when using the gui
[12:17] <stub> tvansteenburgh: The charm is just passing the string to charm-helpers, which spits out that traceback since it isn't valid yaml.
[12:19] <stub> It could do with better validation and error messages, until juju gives us actual structured data here and we can stop serializing things ourselves.
[12:19] <stub> Whoops, I'm late
[12:19]  * stub wanders off
[12:19] <tvansteenburgh> o/
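(Editor's note: stub's fix above can be sketched in Python. The charm treating `install_keys` as a YAML list of key strings is inferred from stub's description; the key text below is a stand-in, not a real key.)

```python
# Sketch of the install_keys encoding issue: the charm parses the config
# value as YAML, so a raw ASCII-armored key breaks the scanner at the
# colon in "Version: GnuPG v1" (line 2), as in the pasted traceback.
# The key text here is a stand-in, not a real key.
import yaml

key = (
    "-----BEGIN PGP PUBLIC KEY BLOCK-----\n"
    "Version: GnuPG v1\n"
    "\n"
    "mQINBFAgAgABEAC...\n"
    "-----END PGP PUBLIC KEY BLOCK-----\n"
)

# Feeding the raw key to the YAML parser fails: line 2 looks like a mapping.
try:
    yaml.safe_load(key)
    raised = False
except yaml.YAMLError:
    raised = True  # the ScannerError seen in the paste above

# Encoding the key as a YAML list of strings escapes the colons safely;
# the resulting string is what you would hand to `juju set ... install_keys=`.
encoded = yaml.safe_dump([key])
assert yaml.safe_load(encoded) == [key]
```

This sidesteps the shell-quoting monstrosity by letting PyYAML do the escaping.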
[12:23] <bloodearnest> hazmat, hmm, ok looking
[12:27] <bloodearnest> hazmat, added and pushed
[12:27] <bloodearnest> thanks
[12:33] <coreycb> jamespage, gnuoy: could one of you take a look?  it fixes a master branch deploy from source failure that came about yesterday.  https://code.launchpad.net/~corey.bryant/charm-helpers/upgrade-pip/+merge/264886
[12:39] <jamey-uk> Can anyone help me with the error when deploying my Rails charm? I can't recreate the problem on Heroku, Dokku or locally: https://gist.github.com/anonymous/be6132615872fde0848a. I also have tried recreating the error on the VM itself with no luck.
[13:57] <beisner> gnuoy, jamespage - fyi bug 1475320
[13:57] <mup> Bug #1475320: rmq next charm:  pkg install fails when reverse dns fails - Error: unable to connect to node 'rabbit@10-245-173-55': nodedown <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1475320>
[14:03] <asanjar> cory_fu: kwmonroe bigdata review q
[14:03] <cory_fu> asanjar: kwmonroe has a meeting.  I thought you did, as well
[14:03] <cory_fu> You want to shift it 30 min?
[14:03] <asanjar> cory_fu: sure
[14:04] <kwmonroe> +1, thanks
[14:22] <jamespage> gnuoy, beisner: https://code.launchpad.net/~james-page/charm-helpers/liberty-versioning/+merge/264998
[14:28] <jamespage> beisner, I see the problem
[14:28] <jamespage> my fault
[14:28] <jamespage> refactoring missed something
[14:32] <jamespage> beisner, can you try this branch - lp:~james-page/charms/trusty/rabbitmq-server/fixup-configure-nodename
[14:34] <jamespage> beisner, gnuoy: https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/fixup-configure-nodename/+merge/265001
[14:38] <thedac> jamespage: thanks for the feedback. That all makes sense.
[14:39] <thedac> gnuoy: any chance you could comment on jjo's nrpe bug? https://bugs.launchpad.net/charms/+source/nrpe/+bug/1473205
[14:39] <mup> Bug #1473205: nrpe charm creates checks with _sub postfix, breaking compatibility with nrpe-external-master <canonical-bootstack> <nrpe (Juju Charms Collection):New> <https://launchpad.net/bugs/1473205>
[14:40] <gnuoy> thedac, sure, otp atm
[14:40] <thedac> take your time
[14:40] <beisner> jamespage, thanks - sure will do
[14:42] <jamespage> thedac, np
[15:01] <asanjar> lazyPower: hi there
[15:01] <lazyPower> asanjar: greetings
[15:02] <asanjar> lazyPower: I am reviewing your AWESOME etcd charm, but bundletester fails with http://paste.ubuntu.com/11888125/. Am I missing anything?
[15:06] <lazyPower> asanjar: hmm looks like one of the etcd services didn't start in the cluster
[15:06] <asanjar> lazyPower: i am using "local"
[15:07] <asanjar> lazyPower: going to try on AWS
[15:08] <asanjar> lazyPower: according to this I got all services http://paste.ubuntu.com/11888149/
[15:09] <asanjar> lazyPower: but idle
[15:09] <lazyPower> asanjar: what i imagine happened is it hit a race condition we haven't seen in our testing/deployments yet
[15:24] <marcoceppi> hey stub, I noticed that cassandra only runs on the public-address, is that by design?
[15:24] <marcoceppi> it's problematic because it appears that we're hitting network latency since cassandra has to be routed out of the cloud then back in again
[15:35] <beisner> jamespage, no love with that rmq branch.  fails @ the same place.  it looks like the main diff here is that rmq is in lxc, and the ptr record @ maas doesn't match the juju unit's hostname:  http://paste.ubuntu.com/11888267/
[15:35] <beisner> (diff between this test and our os-on-os testing that is)
[15:45] <xoritor> hello
[15:46] <xoritor> does juju have support for scaleio?  i know it has support for ceph
[15:46] <xoritor> ceph is fast enough, so scaleio is not an issue
[15:46] <beisner> jamespage, added to bug
[15:47]  * xoritor is a juju newb
[15:47] <xoritor> i am looking here https://jujucharms.com/store
[15:48] <xoritor> is that the right place to look for supported things?
[15:48]  * xoritor is doing a lot of reading
[15:49] <lathiat> xoritor: Charm store is the right place to look, though there may be charms not in the store you can find elsewhere (github or something)
[15:50] <xoritor> lathiat, thanks!
[15:50] <xoritor> lathiat, or it may be named something that i am not searching for
[15:50] <lathiat> xoritor: also may simply not exist
[15:50] <xoritor> lol
[15:50] <xoritor> true
[15:51] <xoritor> ceph is what i have been using and is fast enough for me
[15:56] <aisrael> lazyPower: That drone-ci stuff is really sweet. Nicely done!
[15:56] <aisrael> http://blog.dasroot.net/2015-continuous-integration-with-juju-and-drone-ci.html
[15:57] <lazyPower> aisrael:  thanks! :)
[16:08] <xoritor> so when using juju if i want all machines to run all things i can manually place them all on all hosts?
[16:08] <xoritor> in the web ui
[16:09] <lazyPower> xoritor: using machine view, you can do unit placement, correct.
[16:09] <xoritor> i need the ha-proxy to make them H/A though correct
[16:09] <xoritor> lazyPower, that seems too easy
[16:10] <lazyPower> Juju is good at that. Making complex things deceptively simple
[16:10] <ddellav> :D
[16:11] <xoritor> if that REALLY is that simple you may have won me over
[16:11] <marcoceppi> rick_h_: is there still a delay in the charmstore
[16:11] <rick_h_> marcoceppi: not that I'm aware of.
[16:11] <xoritor> one more thing... can juju+maas run on one of the openstack nodes?
[16:11] <rick_h_> marcoceppi: it took a few hours for things to sync, but that was all.
[16:12] <xoritor> or does it need to be outside of the stack
[16:12] <marcoceppi> I pushed a charm to lp:~marcoceppi/charms/trusty/cassandra-stress/trunk a bit a go, but don't see it
[16:12] <marcoceppi> rick_h_: is there anything that will reject a charm from ingestion?
[16:12] <rick_h_> marcoceppi: looking
[16:13] <rick_h_> marcoceppi: proof (we still have to keep charmworld/charmstore in sync so it first has to be in charmworld before the charmstore will load it)
[16:13] <xoritor> ok i also see you need a minimum of 7 machines... will it work with 5?
[16:13] <marcoceppi> rick_h_: will a proof Warning stop ingestion from a personal namespace?
[16:13] <rick_h_> marcoceppi: no
[16:13] <xoritor> we are a really small shop
[16:16] <marcoceppi> xoritor: so, maas can be put in a virtual machine on one of the nodes, though it's not really recommended
[16:17] <marcoceppi> xoritor: juju however runs from your laptop, it's a client side tool
[16:20] <xoritor> hmm
[16:20] <xoritor> could i put it in a VM?
[16:20] <marcoceppi> xoritor: could what? juju or maas?
[16:20] <xoritor> marcoceppi, juju
[16:21] <marcoceppi> xoritor: sure, again, it's just a command line tool for managing a deployed juju environment. It could be installed on windows, mac, or linux, in a vm or a docker container, doesn't matter
[16:21] <xoritor> cool... multiple instances?
[16:22] <xoritor> ie... one in a VM and one on my laptop?   without detriment?
[16:23] <aisrael> xoritor: You could, but it's not necessary. You can control multiple juju deployments from your laptop
[16:23] <marcoceppi> xoritor: sure, again, it's just the client side tool. Juju then uses MAAS, or Amazon, or GCE, or Azure, or OpenStack, or whatever other cloud to actually provision and deploy machines
[16:23] <xoritor> AWESOME!!!
[16:23] <lazyPower> xoritor: Juju is a client/server model - each environment gets a single state server to drive the deployment/orchestration/statemanagement of the environment.
[16:23] <lazyPower> marcoceppi: dont forget about the state server :)
[16:23] <marcoceppi> lazyPower: I'm not, but you don't install that
[16:23] <marcoceppi> juju installs that
[16:23] <lazyPower> well,
[16:23] <lazyPower> ok you're right
[16:23] <lazyPower> in that context
[16:24] <xoritor> where is the "state server" located?
[16:24] <rick_h_> marcoceppi: getting logs pulled from IS and looking into it. Sorry for the delay. I'm not aware of any issues and lazyPower had something ingested yesterday so not sure what it doesn't like about this one
[16:25] <lazyPower> xoritor: it occupies a single node in your environment that you're managing with juju
[16:26] <xoritor> lazyPower, you mean runs on one node beside other things or takes an entire node
[16:27] <marcoceppi> xoritor: you, from your computer, run `juju bootstrap`; this connects to whatever machine provisioning tool you want to use (gce, aws, maas, openstack, etc) and creates a node on that service. Once the environment is bootstrapped you can start deploying and managing that environment. Juju can manage and control multiple environments at a time and each environment can have its own Juju GUI so you don't even have to use the command line
[16:27] <marcoceppi> xoritor: while the bootstrap node does take an entire instance, you can put other things on it
[16:27] <xoritor> ok
[16:27] <xoritor> whew
[16:27] <xoritor> just making sure
[16:28] <marcoceppi> xoritor: I've managed to deploy openstack on to just two physical nodes, so it's definitely possible to model that level of complexity but at a very small physical footprint
[16:29] <marcoceppi> well, technically three machines if you include MAAS
[16:29] <xoritor> can i use just 5 physical machines if i have juju+maas
[16:30] <xoritor> if they all have everything with h/a
[16:30] <marcoceppi> xoritor: I'm not sure about the minimum requirements for OpenStack HA, but I think 5 nodes would be enough
[16:31] <marcoceppi> though, it would be pretty tight ;)
[16:33] <xoritor> they are all pretty beefy (12 cores, 128 GB DDR4, 4x1 Gbit, 2x40 Gbit, 1x256 GB SSD, 1x1 TB SSD, 1x2 TB PCIe)
[16:34] <xoritor> i can get more ram if we need it
[16:34] <marcoceppi> xoritor: tight in the amount of services you'd have to spread out, though those are some nice machines
[16:35] <xoritor> yea the new haswell xeons are pretty nice i have to say
[16:36] <xoritor> we are small but we are not afraid of buying what we need to get the job done
[16:37] <xoritor> thats why i was asking if scaleio was supported
[16:37] <xoritor> it is supposed to be much much faster than ceph with ssd/pcie
[16:38] <xoritor> http://virtualgeek.typepad.com/virtual_geek/2015/05/emc-day-3-scaleio-unleashed-for-the-world.html
[16:39] <xoritor> that says 24 times faster with SSD only
[16:39] <marcoceppi> xoritor: so, scaleio can be supported, the charms are a plugin architecture and there's a pretty easy way to build a charm for cinder/storage
[16:39] <marcoceppi> we have a few charms for other EMC products; while it's not scaleio, they could be modified to add support for it
[16:40] <xoritor> hmm
[16:42] <xoritor> i have my reservations on using it anyway...
[16:43] <xoritor> i dont fully trust emc
[16:43] <xoritor> ;-)
[16:50] <marcoceppi> rick_h_: it showed up fwiw
[16:50] <rick_h_> marcoceppi: ah that's good. /me checks when you pushed it up
[17:05] <beisner> jamespage, gnuoy - i've changed the stance of bug 1475320 to be:   The rabbitmq-server next charm works on bare metal but not in a container; the stable charm works fine on both.
[17:05] <mup> Bug #1475320: rmq next charm:  pkg install fails when deployed to lxc <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1475320>
[17:05] <beisner> also added a reproducer, and that enviro is still up @ dellstack for a bit while i collect logs.
[17:14] <rick_h_> NOTICE: jujucharms.com is having a webui outage due to a failed redis. Charm deploys should work as normal and the API is available.
[17:14] <xoritor> aaah redis
[17:14] <xoritor> how i love/hate thee
[17:37] <rick_h_> NOTICE: jujucharms.com webui is back up
[18:04] <xoritor> can juju and maas be run in docker containers?
[18:11] <lazyPower> xoritor: the juju client can
[18:12] <lazyPower> xoritor: i haven't tested MAAS in a docker container, but in theory you should be able to isolate the applications into containers. I know it can be driven from a LXC container - as that has been tested to drive VMAAS. set up your MAAS region/cluster controller in lxc, and drive KVM containers as the nodes.
[18:12] <marcoceppi> lazyPower: maas needs at least a KVM, as it needs - or at least works best with - two nics
[18:13] <lazyPower> marcoceppi: you can create two virtual nics in a container and make it work.
[18:13] <xoritor> i have found a few docker files to build maas server(s)
[18:13] <xoritor> interesting idea
[18:14] <xoritor> i am trying to figure out the easiest approach to getting maas+juju up and running with my limited hardware
[18:15] <xoritor> i will run everything everywhere if i need to that does not bother me ;-)
[18:16] <xoritor> just need to get things up and running
[18:16] <xoritor> i also cant spare a machine or three for just doing maas
[18:17] <xoritor> sorry for bringing that up here not much response over in #maas when i asked
[18:31] <jcastro> well, if you don't have a ton of hardware then you don't really need maas for anything right?
[18:31] <xoritor> i would like to use it for the initial setup and the "in case i get hit by a bus" scenario
[18:32] <xoritor> i like to be a bit pro active for things like that
[18:33] <xoritor> heck its the whole reason i am looking into this in the first place
[18:34] <xoritor> if it were just me i would hand roll a custom solution that fit my needs and worked exactly how i wanted it to work as minimal as i could make it
[18:34] <xoritor> ;-)
[18:34] <xoritor> but thankfully i am not immortal
[18:41] <marcoceppi> xoritor: so MAAS is really geared toward setups of 10 or more machines, though that's not to say you can't use it for your use case (I have 4 machines total at home, 3 I use to deploy onto and 1 for MAAS)
[18:41] <marcoceppi> but MAAS has some pretty specific requirements, the first being that it really /really/ will want two nics
[18:41] <marcoceppi> but you don't need a giant beefy machine to run maas, at home I use an old optiplex, and regularly people have used $300 little Intel NUCs
[18:42] <xoritor> i have an 8 core system i can use a VM for and give it 16GB ram 200 GB SSD and up to 4 1Gb nics via passthrough would that work?
[18:42] <xoritor> i just cant give it the whole machine
[18:42] <marcoceppi> xoritor: yeah, that should work fine
[18:42] <xoritor> it would have to be kvm
[18:42] <marcoceppi> right
[18:43] <marcoceppi> I mean, you only need like 2-4 gb
[18:43] <xoritor> cool
[18:43] <marcoceppi> and a core
[18:43] <marcoceppi> and maybe 50-80GB
[18:43] <marcoceppi> depending on how many images you want to roll out
[18:43] <xoritor> i really do not have much ;-)
[18:43] <xoritor> but i want to be able to have something in place should i have a heart attack
[18:44] <marcoceppi> well if it's just Ubuntu 14.04, that's one image
[18:44] <xoritor> it has happened to us before
[18:44] <xoritor> we are overly paranoid
[18:44] <xoritor> im 43 and the YOUNGEST employee
[18:44] <xoritor> ;-)
[18:44] <xoritor> heh
[18:45] <xoritor> you would never believe how many people drop a load on that one
[18:57] <tasdomas> hi
[18:57] <tasdomas> amulet does not support actions yet, does it?
[18:59] <tvansteenburgh> tasdomas: no
[19:13] <tasdomas> is there a good e
[19:13] <tasdomas> sorry
[19:13] <tasdomas> tvansteenburgh, thanks
[19:13] <tasdomas> what version of juju are tests currently being run against?
[19:14] <tvansteenburgh> tasdomas: which tests? :)
[19:14] <tasdomas> tvansteenburgh, well, when a charm goes through the approval process
[19:14] <tvansteenburgh> charm tests run against latest, so currently 1.24.2
[19:14] <tasdomas> tvansteenburgh, ah great - thanks
[19:15] <tvansteenburgh> btw, fwiw, adding action support to amulet is on the todo list. should be pretty straightforward once we get a little time to do it
[19:19] <tasdomas> tvansteenburgh, hm - I'll take a look at it, have a charm that uses actions extensively (lp:~tasdomas/charms/trusty/git/trunk)
[19:20] <tvansteenburgh> tasdomas: sure if you wanna submit a PR that'd be awesome :)
[19:28] <tasdomas> hm, running charm test -e local returns:
[19:28] <tasdomas> ERROR there was an issue examining the environment: failure setting config: mkdir /.juju: permission denied
[19:30] <marcoceppi> tasdomas: try using bundletester instead
[19:31] <tasdomas> marcoceppi, bundletester?
[19:32] <marcoceppi> tasdomas: yes, bundletester is actually what we use in CI
[19:32] <marcoceppi> https://pypi.python.org/pypi/bundletester
[19:34] <coreycb> marcoceppi: any chance you could review this?  the other guys are eod.   https://code.launchpad.net/~corey.bryant/charm-helpers/upgrade-pip/+merge/264886
[19:34] <marcoceppi> coreycb: sure, give me a few mins
[19:34] <coreycb> marcoceppi, thanks
[19:35] <marcoceppi> coreycb: LGTM, this may actually be the errors I was hitting with neutron
[19:35] <marcoceppi> it was something about version strings
[19:35] <marcoceppi> never thought it was because pip was too old
[19:36] <coreycb> marcoceppi, thanks.  could be if you were using master.  could you land that if you weren't going to already?
[19:37] <marcoceppi> coreycb: merged, will this be in the 07.2015 release of the openstack charms?
[19:37] <coreycb> marcoceppi, it will be now!
[19:37] <marcoceppi> err, rather, can you let me know when these land in trunk?
[19:37] <marcoceppi> s/trunk/devel
[19:38] <marcoceppi> I'll give the deployment another go
[19:38] <coreycb> marcoceppi, yep will do
[19:38] <marcoceppi> thanks!
[19:40] <tasdomas> marcoceppi, thanks
[19:42] <tvansteenburgh> tasdomas: we also have a docker container with bundletester already installed, so you can, for example:
[19:42] <tvansteenburgh> sudo docker run --rm --net=host \
[19:42] <tvansteenburgh>     -u ubuntu \
[19:42] <tvansteenburgh>     -v ${JUJU_HOME}:/home/ubuntu/.juju \
[19:42] <tvansteenburgh>     -t jujusolutions/charmbox:devel \
[19:42] <tvansteenburgh>     sudo bundletester -F -e $env -t $url -l DEBUG -v
[19:42] <tasdomas> tvansteenburgh, interesting - where can I find that container?
[19:43] <tvansteenburgh> jujusolutions/charmbox in the docker registry
[19:45] <tvansteenburgh> tasdomas: you may also like lazyPower's new blog post on charm testing with drone (uses this same docker container) http://blog.dasroot.net/2015-continuous-integration-with-juju-and-drone-ci.html
[19:45] <tasdomas> tvansteenburgh, ah, thanks
[19:47] <lazyPower> tasdomas: full rundown of context management with that container: http://blog.dasroot.net/2015-local-isolation-with-docker-and-juju.html
[19:47] <lazyPower> but thats extra credit :)
[19:57] <tasdomas> lazyPower, thanks ;-]
[19:57] <tasdomas> lazyPower, are unit tests for charm functionality required to get the charm approved?
[19:57] <lazyPower> its best practice and will expedite any reviews
[19:57] <lazyPower> but its not strictly required, only preferred
[19:58] <lazyPower> the requirement is an amulet level integration test so we ensure it continues to deploy as the charm ages
[19:58] <lazyPower> which involves standing up your charm, any ancillary charms related to it, and inspecting data sent over the wire + host configuration. eg: if you alter networking settings, verify they did indeed happen.
[20:02] <marcoceppi> tasdomas: unit tests are a super ++ in the review process, since it gives us an idea of how much coverage is done for the code, but like lazyPower said not required to be in the store
[20:05] <tasdomas> marcoceppi, lazyPower: thanks
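(Editor's note: the amulet integration test lazyPower describes might look roughly like the sketch below. The charm names, relation endpoints, and the grep check are hypothetical examples; only the amulet API calls themselves are real.)

```python
# Sketch of an amulet integration test as described above: stand up the
# charm plus an ancillary charm, relate them, then inspect state on a
# deployed unit. Charm names, endpoints, and the check are hypothetical.

def integration_test():
    import amulet  # assumed available where the test runs

    d = amulet.Deployment(series='trusty')
    d.add('haproxy')
    d.add('my-webapp')  # hypothetical charm under test
    d.relate('haproxy:reverseproxy', 'my-webapp:website')
    d.setup(timeout=900)  # deploy everything and wait for units to settle

    # Verify the host configuration actually happened, e.g. that haproxy
    # picked up a backend for the webapp unit
    haproxy = d.sentry.unit['haproxy/0']
    output, code = haproxy.run('grep -c server /etc/haproxy/haproxy.cfg')
    if code != 0:
        amulet.raise_status(amulet.FAIL, msg='haproxy has no backends')
```

The function is only defined here, not run; in a charm it would live under `tests/` and be invoked by bundletester.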
[20:35] <xoritor> thanks everyone
[21:01] <thumper> tvansteenburgh: hey there
[21:01] <thumper> tvansteenburgh: still around?
[21:01] <tvansteenburgh> thumper: hey
[21:01] <thumper> hey
[21:02] <thumper> do the python-jujuclient tests pass for you?
[21:02] <tvansteenburgh> hee, hot topic!
[21:02] <tvansteenburgh> i haven't tried, but whit just submitted an MP that allegedly fixes them
[21:02] <tvansteenburgh> wanna review it? :P
[21:03] <thumper> I'll look too, but I need something to run it in
[21:14] <tvansteenburgh> thumper: running tests now
[21:16]  * beisner stands by waiting eagerly for rabbitmq-server next charm bare metal + lxc unbreak validation
[21:18] <tvansteenburgh> thumper: 18 passed, 5 skipped
[21:19] <tvansteenburgh> py27 and 34
[21:24] <whit> thumper: ported them to py-test
[21:24] <whit> you still have to set a env var
[21:25] <thumper> whit: is there instructions on how to run the tests anywhere?
[21:25] <tvansteenburgh> $ JUJU_TEST_ENV=local tox
[21:25] <tvansteenburgh> that's it
[21:25] <tvansteenburgh> just bootstrap local env
[21:25] <whit> thumper: should be in the readme iirc
[21:26] <whit> thumper: I've set the tests to skip some dodgy incomplete implementation and some tests with broken cleanups
[21:26] <whit> thumper: until the implementations and cleanups are complete
[21:27] <whit> hazmat: do you have a commit waiting to be pushed somewhere for the facade stuff?
[21:28] <tvansteenburgh> facades are already in
[21:38] <whit> tvansteenburgh: removed that print statement
[21:39] <whit> tvansteenburgh: I marked several tests as skips because of what appeared to be incomplete implementation that caused the tests to error
[21:39] <whit> tvansteenburgh: but maybe there was some magic I missed
[21:40] <whit> tvansteenburgh: I did add comment to the skips so it should be obvious
[21:40] <whit> <3 py.test
[21:42] <tvansteenburgh> whit: ack
[21:45] <beisner> dear jamespage and/or gnuoy scrollback,
[21:46] <beisner> bug 1475320 is sorted, new proposal ready for you
[21:46] <mup> Bug #1475320: rmq next charm:  config-changed hook fails when deployed to lxc <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1475320>
[22:02] <marcoceppi> beisner: UHG
[22:03] <beisner> marcoceppi, uhgtastic?
[22:03] <marcoceppi> that bug was kicking my ass
[22:04] <beisner> ditto
[22:04] <beisner> lp:~1chb1n/charms/trusty/rabbitmq-server/fixup-configure-nodename2
[22:04] <marcoceppi> I would have to ssh in, force kill rabbit, then service rabbitmq-server start, then resolved --retry
[22:04] <beisner> ^ that should do it for now, until reviewed/landed
[22:04] <marcoceppi> glad this is going to be fixed!
[22:04] <marcoceppi> should be able to do a smoosh without any issues now
[22:04] <beisner> yes
[22:05] <beisner> happy smooshing, gotta run
[22:10] <thumper> whit: where is the main code for python-jujuclient now?
[22:10] <thumper> whit: I assumed it was LP
[22:11] <thumper> whit: or is this fix just proposed ATM?
[22:19] <thumper> whit: I hit problems trying to run tests with your branch
[22:19] <thumper> whit: no idea what I'm doing wrong
[22:21] <tvansteenburgh> thumper: sorry i saw your comment right after i merged it :/
[22:21] <thumper> :)
[22:21] <tvansteenburgh> thumper: pip install -U tox
[22:21] <thumper> tvansteenburgh: I did that earlier
[22:21]  * thumper tries again
[22:22] <thumper> I currently have 2.0.2
[22:23] <thumper> tvansteenburgh:  E   EnvironmentNotBootstrapped: Environment "test" is not bootstrapped
[22:23] <thumper> lots of that
[22:23] <tvansteenburgh> juju bootstrap -e test
[22:23] <thumper> so I'm guessing it expects an environment called "test" to be available?
[22:23] <thumper> tvansteenburgh: should that exist first?
[22:24] <tvansteenburgh> no you have to tell it the name of your juju env
[22:24] <tvansteenburgh> $ JUJU_TEST_ENV=local tox
[22:24] <tvansteenburgh> but you must bootstrap that env first
[22:24] <thumper> tvansteenburgh: would be nice to have that written down somewhere
[22:24] <tvansteenburgh> agree, not user friendly
[22:25] <thumper> tvansteenburgh: however I thiink I have enough to get this working with JES now
[22:26] <tvansteenburgh> thumper: i'll make a proper Makefile/readme when i run out of other things to do :D
[22:27] <thumper> haha
[22:27] <thumper> yeah...
[22:27] <thumper> like that'll happen
[22:28] <tvansteenburgh> thumper: check the HACKING doc
[22:28] <tvansteenburgh> turns out there are some instructions
[22:29] <thumper> :)