/srv/irclogs.ubuntu.com/2014/11/24/#juju.txt

=== CyberJacob is now known as CyberJacob|Away
=== fuzzy_ is now known as Ponyo
=== erkules_ is now known as erkules
=== CyberJacob|Away is now known as CyberJacob
[10:02] <jamespage> alai, hey - what's the current status on the vnx charm? specifically https://bugs.launchpad.net/charms/+source/cinder/+bug/1394276
[10:02] <mup> Bug #1394276: Invalid config charm cinder-vnx <openstack> <partner> <cinder (Juju Charms Collection):New> <https://launchpad.net/bugs/1394276>
[10:02] <jamespage> alai, is that work complete? can we promulgate the charm now?
[10:46] <johnmce> Hi, can anyone give any quick advice on how to remove a charm/service that simply won't go away? I performed a "juju destroy-machine --force" on the last machine/unit. Now the charm can't be removed or re-installed. There seems to be no --force option for services.
[11:10] <jamespage> johnmce, I'd try juju terminate-machine --force on the machines it's on
[11:11] <jamespage> and then destroy-service afterwards
[11:12] <johnmce> jamespage: Hi James. I've already destroyed the machine (--force), and now have a charm not linked to any machine (that exists).
[11:13] <johnmce> jamespage: This is the output when I stat the service: $ juju stat openstack-dashboard -> environment: maas, machines: {}, services: {}
[11:16] <johnmce> jamespage: the juju-agent gui still shows the charm, and juju won't allow the same charm to be deployed under the same name because it still exists. It can, however, be deployed under a different alias.
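
The sequence jamespage suggests above, as a minimal sketch (the machine number is a placeholder and openstack-dashboard is taken from johnmce's example; exact behaviour depends on the Juju 1.x release in use):

    # force-remove the machine the stuck unit was assigned to
    juju terminate-machine --force 2
    # then remove the service itself
    juju destroy-service openstack-dashboard
    # confirm it is gone
    juju status openstack-dashboard
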
[13:01] <jamespage> stub, erm do you think we should back out the py3 changes and consider exactly how we deal with this across 12.04 and 14.04 py3 versions?
[13:02] <stub> jamespage: I think the branch that works with the ancient six is good for now.
[13:02] <stub> jamespage: Unless you have found new problems
[13:02] <stub> https://code.launchpad.net/~stub/charm-helpers/py3-2/+merge/242653
[13:04] <stub> jamespage: That branch also fixes some other revisions, so we are testing against precise versions. We could run the tests multiple times against precise, trusty and trunk versions, but that is probably overkill.
[13:04] <stub> other package revisions, I mean.
[13:08] <stub> I think backing it out is a bad idea; if we can't fix any issues now, I doubt anybody is going to bother attempting to fix them later.
[13:09] <stub> EENGLISH
[13:17] * stub wanders off for a bit
[13:46] <alai> jamespage, we are moving the vnx charm to run on Juno. For the charm to run on Juno, some changes need to go into prodstack, which will happen sometime this week.
[13:49] <alai> jamespage, we will not maintain the direct driver PPA for Icehouse; Juno should have the features that we need.
=== scuttle|afk is now known as scuttlemonkey
[13:58] <gnuoy> dosaboy, jamespage: either of you have a moment for https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/pop-unused-resources/+merge/242662 ? If you agree with the approach I'd like to apply it to the other OpenStack charms that can use corosync.
[13:58] <jamespage> gnuoy, use of haproxy is not tied to hacluster
[13:59] <jamespage> that's the most typical use case, but it works just fine without it
[13:59] <gnuoy> jamespage, ok, let me check again
[14:01] <gnuoy> jamespage, it looks like it would make sense to just change the gate to check for cluster rather than ha
[14:02] * gnuoy goes to test that
[14:02] <jamespage> gnuoy, maybe we should think about this the other way round and just always run haproxy
[14:10] <gnuoy> interesting, that would trigger less change when scaling out for the first time, I guess
=== wwitzel3_ is now known as wwitzel3
[14:47] <jamespage> gnuoy, yeah - that was my thinking
[14:48] <jamespage> it would then be possible to just reload haproxy
[14:48] <jamespage> much less disruption
[14:48] <jamespage> gnuoy, it's what I made the openstack-dashboard charm do
[14:50] <gnuoy> jamespage, oh, I'm surprised you have that already since the charmhelpers code seems to have hardcoded references to enable haproxy when peers are present
[14:50] <gnuoy> s/when/only when/
=== kadams54 is now known as kadams54-away
[14:50] * gnuoy goes to look at the dashboard
[14:54] <jamespage> gnuoy, I think I override that behaviour in a subclass
[14:55] <jamespage> and as it always runs in apache anyway :-)
[14:55] <jamespage> gnuoy, actually that might be something to think on
[14:55] <jamespage> gnuoy, switching to using wsgi in apache rather than the native stuff
[14:55] <jamespage> just a thought
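
A hedged illustration of the "just reload haproxy" point above: reloading (rather than restarting) haproxy picks up backend changes with much less disruption to existing connections. The unit name here is only an example.

    # soft-reload haproxy on a unit instead of restarting it
    juju ssh neutron-api/0 'sudo service haproxy reload'
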
=== kadams54-away is now known as kadams54
[16:57] <avoine> is it easy to launch an automated test on a charm?
[16:57] <avoine> I would like to know if my MP is finally passing the test -> https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/pure-python/+merge/226742
=== kadams54 is now known as kadams54-away
[18:01] <tvansteenburgh> avoine: I kicked one off for you: http://juju-ci.vapour.ws:8080/job/charm-bundle-test/10415/console
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[18:56] <avoine> thanks
[20:11] <mbruzek> avoine: You can also run bundletester on your local laptop: https://github.com/juju-solutions/bundletester
[20:12] <mbruzek> Install bundletester per the readme, and then run: bundletester -F -e local -l DEBUG -v
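
A local run along the lines mbruzek describes, as a sketch (this assumes bundletester installs via pip and that a "local" environment is defined in environments.yaml; see the linked README for the canonical instructions):

    # install bundletester (see the project README)
    pip install bundletester
    # run the tests against the local environment, with the flags suggested above
    bundletester -F -e local -l DEBUG -v
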
[20:39] <skay> avoine: one test failed :(
=== kadams54 is now known as kadams54-away
=== roadmr is now known as roadmr_afk
=== kadams54-away is now known as kadams54
[21:05] <mhall119> I think I broke my LXC again, do we have a "nuke it all and start over" option yet?
[21:06] <marcoceppi> mhall119: there's a juju-clean plugin. What version of juju?
[21:07] <marcoceppi> I'm more concerned that lxc keeps breaking
[21:07] <mhall119> marcoceppi: 1.20.10-utopic-i386
[21:08] <mhall119> my machine-2 gets stuck
[21:08] <mhall119> "2":
[21:08] <mhall119>     agent-state-info: 'open /var/lib/lxc/mhall-local-machine-2/config: no such file or directory'
[21:08] <mhall119>     instance-id: pending
[21:08] <mhall119>     series: trusty
[21:08] <mhall119> even if I juju destroy-environment and re-bootstrap, machine-2 does this
[21:08] <mhall119> not machine-1, not machine-3 or higher, only machine-2
[21:09] <mhall119> I think I was chroot'd into its rootfs last week when I tried to destroy it
[21:11] <avoine> skay: yeah, I think I'm hitting the timeout
[21:13] <lazyPower> mhall119: do you have something left over in /var/lib/lxc when you destroy the environment?
[21:14] <lazyPower> mhall119: I've seen weird issues crop up due to stale lxc modifications left around - I just expected them to get cleared out when I destroyed the environment, but the machine config was left over
[21:14] <lazyPower> anecdotal - but worth looking into
[21:14] <mhall119> lazyPower: destroying it to find out
[21:14] <mhall119> lazyPower: it turns out that I do
[21:15] <mhall119> http://paste.ubuntu.com/9220770/
[21:15] <mhall119> and machine-2 is one of the things left behind
[21:15] <lazyPower> mhall119: I think we know why it's doing that now - if you wipe that stuff out, it should be able to create what it needs when the machine is provisioned.
[21:15] <lazyPower> s/machine/container/
[21:16] <mhall119> lazyPower: delete all of it?
[21:16] <mhall119> or just machine-2?
[21:16] <lazyPower> just machine-2
[21:16] <lazyPower> I mean it *should* be fine
[21:17] <lazyPower> but be surgical in what you're modifying so it's easier to unwind what's been done, and you can isolate behavior changes.
[21:17] <mhall119> alright, deleted that and bootstrapping again
[21:17] <lazyPower> I've hosed my local provider by thinking blowing away things willy-nilly would be fine, and had to start from scratch by nuking everything.
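
The cleanup lazyPower walks mhall119 through, as a sketch (the container name comes from the error in the status output above; destroy the environment first so nothing is still using the container):

    # after juju destroy-environment, look for leftover local-provider containers
    sudo ls /var/lib/lxc
    # remove only the stale container rather than nuking everything
    sudo rm -rf /var/lib/lxc/mhall-local-machine-2
    # then re-bootstrap the local environment
    juju bootstrap
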
[21:37] <lazyPower> mbruzek: can I get a quick review of this bundle rev for CTS? https://code.launchpad.net/~lazypower/charms/bundles/hdp-core-batch-processing/bundle/+merge/242715
[21:38] <mbruzek> yes
=== kadams54 is now known as kadams54-away
=== roadmr_afk is now known as roadmr
[22:13] <mwak> hi
=== fuzzy_ is now known as Ponyo
=== kadams54 is now known as kadams54-away
