[10:02] <jamespage> alai, hey - whats the current status on the vnx charm? specifically https://bugs.launchpad.net/charms/+source/cinder/+bug/1394276
[10:02] <mup> Bug #1394276: Invalid config charm cinder-vnx <openstack> <partner> <cinder (Juju Charms Collection):New> <https://launchpad.net/bugs/1394276>
[10:02] <jamespage> alai, is that work complete? can we promulgate the charm now?
[10:46] <johnmce> Hi, can anyone give any quick advice on how to remove a charm/service that simply won't go away. I performed a "juju destroy-machine --force" on the last machine/unit. Now the charm can't be removed or re-installed. There seems to be no --force option for services.
[11:10] <jamespage> johnmce, I'd try juju terminate-machine --force on the machines it's on
[11:11] <jamespage> and then destroy-service afterwards
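The sequence jamespage suggests can be sketched as follows. The machine number and service name here are illustrative (taken from the service johnmce mentions later), not from a real environment:

```shell
# Force-remove the machine the stuck unit sits on, then the service itself.
juju terminate-machine --force 2
juju destroy-service openstack-dashboard
# Confirm both are gone before redeploying under the same name:
juju status openstack-dashboard
```

Note that in johnmce's case the machine was already force-destroyed, which is what left the service record orphaned in the first place.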
[11:12] <johnmce> jamespage: Hi James. I've already destroyed the machine (--force), and now have a charm not linked to any machine (that exists).
[11:13] <johnmce> jamespage: This is the output when I stat the service:
[11:13] <johnmce> $ juju stat openstack-dashboard
[11:13] <johnmce> environment: maas
[11:13] <johnmce> machines: {}
[11:13] <johnmce> services: {}
[11:16] <johnmce> jamespage: the Juju GUI still shows the charm, and juju won't allow the same charm to be deployed under the same name because it still exists. It can, however, be deployed under a different alias.
[13:01] <jamespage> stub, erm do you think we should back out the py3 changes and consider exactly how we deal with this across 12.04 and 14.04 py3 versions?
[13:02] <stub> jamespage: I think the branch that works with the ancient six is good for now.
[13:02] <stub> jamespage: Unless you have found new problems
[13:02] <stub> https://code.launchpad.net/~stub/charm-helpers/py3-2/+merge/242653
[13:04] <stub> jamespage: That branch also fixes some other package revisions, so we are testing against precise versions. We could run the tests multiple times against the precise, trusty and trunk versions, but that is probably overkill.
[13:08] <stub> I think backing it out is a bad idea; if we can't fix the issues now, I doubt anybody will bother to fix them later.
[13:17]  * stub wanders off for a bit
[13:46] <alai> jamespage, we are moving the vnx charm to run on Juno.  For the charm to run on Juno, some changes need to go into prodstack, which will happen sometime this week.
[13:49] <alai> jamespage, we will not maintain the direct driver ppa for icehouse; Juno should have the features that we need.
[13:58] <gnuoy> dosaboy, jamespage either of you have a moment for https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/pop-unused-resources/+merge/242662 ? If you agree with the approach I'd like to apply it to the other os charms that can use corosync.
[13:58] <jamespage> gnuoy, use of haproxy is not tied to hacluster
[13:59] <jamespage> that's the most typical use case, but it works just fine without it
[13:59] <gnuoy> jamespage, ok, let me check again
[14:01] <gnuoy> jamespage, it looks like it would make sense to just change the gate to check for cluster rather than ha
[14:02]  * gnuoy goes to test that
[14:02] <jamespage> gnuoy, maybe we should think about this the other way round and just always run haproxy
[14:10] <gnuoy> interesting, that would trigger fewer changes when scaling out for the first time, I guess
[14:47] <jamespage> gnuoy, yeah - that was my thinking
[14:48] <jamespage> it would then be possible to just reload haproxy
[14:48] <jamespage> much less disruption
[14:48] <jamespage> gnuoy, it's what I made the openstack-dashboard charm do
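The reload jamespage mentions is what makes the always-run-haproxy approach low-impact: a reload starts a new haproxy process on the existing sockets instead of restarting the service. A minimal sketch, assuming the stock `haproxy` service and config path on trusty:

```shell
# Validate the freshly templated config before touching the running service:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Reload picks up backend changes without dropping in-flight connections:
sudo service haproxy reload
```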
[14:50] <gnuoy> jamespage, oh, I'm surprised you have that already, since the charmhelpers code seems to have hardcoded references to enable haproxy only when peers are present
[14:50]  * gnuoy goes to look at the dashboard
[14:54] <jamespage> gnuoy, I think I override that behaviour in a subclass
[14:55] <jamespage> and as it always runs in apache anyway :-)
[14:55] <jamespage> gnuoy, actually that might be something to think on
[14:55] <jamespage> gnuoy, switching to using wsgi in apache rather than the native stuff
[14:55] <jamespage> just a thought
[16:57] <avoine> is it easy to launch an automated test on a charm?
[16:57] <avoine> I would like to know if my MP is finally passing the test -> https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/pure-python/+merge/226742
[18:01] <tvansteenburgh> avoine: I kicked one off for you http://juju-ci.vapour.ws:8080/job/charm-bundle-test/10415/console
[18:56] <avoine> thanks
[20:11] <mbruzek> avoine:  You can also run bundletester on your local laptop https://github.com/juju-solutions/bundletester
[20:12] <mbruzek> Install bundletester per the readme, and then bundletester -F -e local -l DEBUG -v
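mbruzek's invocation, annotated. The flag meanings below are my reading of the bundletester README, and the pip install step is an assumption; verify both against the project's README and `bundletester --help`:

```shell
# Install bundletester (install method assumed from the README):
pip install bundletester
# Run the charm's tests against the local (LXC) provider:
bundletester -F -e local -l DEBUG -v
# -F  stop on the first failure
# -e  the juju environment to deploy into
# -l  log level
# -v  verbose output
```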
[20:39] <skay> avoine: one test failed :(
[21:05] <mhall119> I think I broke my LXC again, do we have a "nuke it all and start over" option yet?
[21:06] <marcoceppi> mhall119: there's a juju-clean plugin. What version of juju?
[21:07] <marcoceppi> I'm more concerned that lxc keeps breaking
[21:07] <mhall119> marcoceppi: 1.20.10-utopic-i386
[21:08] <mhall119> my machine-2 gets stuck
[21:08] <mhall119> "2":
[21:08] <mhall119>     agent-state-info: 'open /var/lib/lxc/mhall-local-machine-2/config: no such file or directory'
[21:08] <mhall119>     instance-id: pending
[21:08] <mhall119>     series: trusty
[21:08] <mhall119> even if I juju destroy-environment and re-bootstrap, machine-2 does this
[21:08] <mhall119> not machine 1, not machine-3 or higher, only machine-2
[21:09] <mhall119> I think I was chroot'd into its rootfs last week when I tried to destroy it
[21:11] <avoine> skay: yeah, I think I'm hitting the timeout
[21:13] <lazyPower> mhall119: do you have something leftover when you destroy the environment in /var/lib/lxc?
[21:14] <lazyPower> mhall119: I've seen weird issues crop up due to stale lxc modifications left around - I just expected them to get cleared out when I destroyed the environment, but the machine config was left over
[21:14] <lazyPower> anecdotal - but worth looking into
[21:14] <mhall119> lazyPower: destroying it to find out
[21:14] <mhall119> lazyPower: it turns out that I do
[21:15] <mhall119> http://paste.ubuntu.com/9220770/
[21:15] <mhall119> and machine-2 is one of the things left behind
[21:15] <lazyPower> mhall119: I think we know why it's doing that now - if you wipe that stuff out, it should be able to create what it needs when the container is provisioned.
[21:16] <mhall119> lazyPower: delete all of it?
[21:16] <mhall119> or just machine-2?
[21:16] <lazyPower> just machine-2
[21:16] <lazyPower> i mean it *should* be fine
[21:17] <lazyPower> but be surgical in what you're modifying so it's easier to unwind what's been done, and you can isolate behavior change.
[21:17] <mhall119> alright, deleted that and bootstrapping again
[21:17] <lazyPower> I've hosed my local provider before by assuming that blowing things away willy-nilly would be fine, and had to start from scratch by nuking everything.
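The surgical cleanup lazyPower describes could look like the sketch below. The container name is taken from the status paste above, and `lxc-destroy` behavior is assumed from the trusty-era lxc tools:

```shell
# List leftover containers after juju destroy-environment:
sudo lxc-ls --fancy
# Remove only the broken container, not everything under /var/lib/lxc:
sudo lxc-destroy -n mhall-local-machine-2
# If lxc-destroy refuses because the config file is already gone,
# fall back to deleting its directory directly:
sudo rm -rf /var/lib/lxc/mhall-local-machine-2
```

Being selective here preserves the other containers' state, which makes it possible to tell whether removing just the stale config actually fixed the stuck machine-2.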
[21:37] <lazyPower> mbruzek: can I get a quick review of this bundle rev for CTS? https://code.launchpad.net/~lazypower/charms/bundles/hdp-core-batch-processing/bundle/+merge/242715
[21:38] <mbruzek> yes
[22:13] <mwak> hi