[13:57] hi, was wondering if there is an easy way to use juju to deploy apache with centos?
[13:58] using local lxc
[14:13] hey, a juju env is not responding to commands to remove-relation
[14:13] just does nothing
[14:14] status is clean, no errors, command completes ok, but the relation is still there
[14:14] 1.20.11-trusty-amd64
[14:15] hmm,
[14:15] did you remove the unit?
[14:15] nope
[14:15] try that?
[14:15] no chance - production env
[14:15] oh shit
[14:15] what are you trying to achieve?
[14:16] g3naro, I would like to remove the relation :)
[14:16] decouple the service from this machine?
[14:16] I want to decouple the 2 services. The relation is only supposed to be temporary, to perform a db migration, then be removed
[14:17] we've been doing it this way for a while
[14:17] ahh ok, I'm sorry, I can't advise anything in this case
[14:17] I've only been using it in dev envs so far
[14:25] g3naro, thanks anyway! :)
[14:30] ah, I seem to have a rogue debug-hooks session running. Sorry for the noise
[14:47] bloodearnest: that happens to me a lot
[14:48] lazyPower, yeah, tell me about it!
[14:49] lazyPower, I would really like a script to elevate a juju ssh session into a debug-hooks session.
[14:49] then I can always start with ssh, rather than using debug-hooks "in case I need it"
[14:49] bloodearnest: I want to go a step further and debug-hooks into a particular hook context.
[14:50] juju debug-hooks
[14:50] that would be great. Only hook into one relation
[14:57] How do I tell curtin to use GPT instead of MBR with MAAS? I have a 3TB disk and MAAS only uses 2TB because it is using MBR
[15:27] sto: no idea, you may wish to ask in #maas
[15:47] marcoceppi: thanks
[16:03] Hi everyone, I have an issue with a juju charm being hung on "running install hook". Can anyone give me a hint on how to destroy that unit and machine without losing the whole environment?
[16:12] bleepbloop: juju help destroy-machine
[16:13] bleepbloop: destroy-service and destroy-unit may also be of interest
[16:19] tvansteenburgh: I actually tried all of those. destroy-machine and destroy-service return but never destroy it, and destroy-unit says "ERROR no units were destroyed: state changing too quickly; try again soon"
[16:22] tvansteenburgh: I tried later and, well, same thing
[16:22] bleepbloop: are any units in error state according to `juju status`?
[16:24] tvansteenburgh: workload-status: current: maintenance, message: installing charm software, since: 25 Jun 2015 15:14:50-04:00
[16:24] agent-status: current: executing, message: running install hook, since: 25 Jun 2015 15:14:51-04:00
[16:26] tvansteenburgh: I found one other person with this issue, https://bugs.launchpad.net/juju-core/+bug/1459761, however I'm not sure how to manually modify the mongo database to force it into an error state as suggested in the comments
[16:26] Bug #1459761: Unable to destroy service/machine/unit
[16:27] bleepbloop: does `juju debug-log` show any actual activity?
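A minimal sketch of the check being asked about here, assuming the stuck unit is docker/2 (the name appears in the log excerpt that follows) and a juju 1.20-era client whose debug-log supports the standard filter flags:

    juju status docker/2                              # confirm the unit's workload/agent state
    juju debug-log --include unit-docker-2 -n 100     # tail only that unit's agent log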
[16:28] tvansteenburgh: unit-docker-2[1787]: 2015-07-08 16:27:09 ERROR juju.worker.uniter.filter filter.go:137 state changing too quickly; try again soon
[16:28] unit-docker-2[1787]: 2015-07-08 16:27:09 ERROR juju.worker runner.go:219 exited "uniter": state changing too quickly; try again soon
[16:28] tvansteenburgh: those two errors over and over
[16:31] bleepbloop: if destroying the environment is not an option, I would comment on that bug and ask Gabriel how he did the mongo update. Sorry, I don't know what else to suggest
[16:32] tvansteenburgh: No problem, thanks
[20:55] cory_fu: ping
[20:55] Hey, what's up
[20:55] looking @ the new redis charm that got a +1 - did you run bundletester against the charm?
[20:55] I see consistent failures without disabling the venv in the test plan yaml
[20:55] https://launchpad.net/bugs/1459345 - for context
[20:55] Bug #1459345: Review/promulgation request for the Redis charm
[20:56] the makefile works perfectly as is, though - it's just that when it's routed through CI I noticed failures due to not being able to find the venv targets - because bundletester gets hinky with venvs
[20:57] Yeah, I ran it via bundletester, in a charmbox
[20:57] ok, that's where I am too - in charmbox :|
[20:57] wonder why you didn't run into this, it bit me and CI as well
[20:57] Can you pastebin me the error?
[20:58] http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/182/console
[20:58] :)
[20:58] it's a real minor fix, just adding venv: false to the tests.yaml
[20:58] Um, says Jenkins is getting ready to work?
[20:58] lolwut
[20:59] looks like CI just got recycled
[20:59] give me a sec to spin up another charmbox and I'll re-run
[20:59] lazyPower: Check out my comment #6. I ran into an issue with the venv and it was addressed
[21:00] that fix did not fix it.
[21:00] it needs the virtualenv: false flag in tests/tests.yaml to function appropriately
[21:01] otherwise it skips the venv yet again, thinking it should be using bundletester's venv
[21:01] I retested after that change and it was fixed for me
[21:01] shenanigans
[21:01] but ok - what's different in our envs then?
[21:01] something's got to be askew
[21:01] btw, charmbox juju is 1.24 stable now for me
[21:01] which charmbox did you pull? jujusolutions/charmbox?
[21:02] Yep
[21:02] ok, I'm in charmbox:devel
[21:02] that's one thing isolated - bueno
[21:05] I'm running bundletester now, btw
[21:05] Hrm. I got a venv error
[21:05] I swear this worked
[21:06] :) I don't doubt that something worked at one time
[21:06] cory_fu: do me a favor and drop in that tests.yaml fix suggested above and see if it works for you
[21:17] lazyPower: It seems to, but I'm confused as to why. Was bundletester creating an incomplete .venv underneath the charm?
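For reference, the fix being tested above is a one-line addition to the charm's tests/tests.yaml; the key name virtualenv is the one given in the exchange (the earlier "venv: false" appears to be shorthand for the same option):

    $ cat tests/tests.yaml
    virtualenv: false    # per the discussion above: stop bundletester managing its own
                         # venv, so the charm's Makefile-built .venv is used instead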
[21:17] I'm so confused how I got a successful run and didn't hit this
[21:17] honestly I don't know - https://github.com/juju-solutions/bundletester/issues/15 - but that is what I came across looking for the proposed fix
[21:17] I had to recommend this to thumper as well when we were riffing over django
[21:22] I wonder if I forgot to clear the .venv from a previous manual run
[21:23] that happens to me; I've had to adopt the workflow of exiting charmbox and re-running between reviews.
[21:23] as I run the charmbox with --rm
[21:24] this is a newbie question: when I'm doing multiple 'juju add-relation' commands with the same service (like adding all relations to mysql), should I wait until the first one is applied, or just execute all statements and juju makes sure the service is correctly configured and restarted after every relation change?
[21:25] wolverineav: they are typically executed in the order they are received
[21:25] so you can add all relations at once, and they will execute sequentially
[21:26] lazyPower: ok. that's good. I don't need to monitor using 'juju debug-log' then :)
[21:27] not unless you get hinky behavior :)
[21:27] in which case, please file bugs against the charms
[21:27] yep, will do.
[21:27] wolverineav: while that's typically true, there's no guarantee when an event will run. Juju will queue things though, so you should just run all the commands you want and the system will take care of that for you
[21:29] that's very true
[21:29] +1 marcoceppi
[21:29] marcoceppi: yes, as long as it doesn't apply changes to the same service simultaneously and result in an inconsistent state, I'm ok with it handling them in any order.
[21:30] wolverineav: hooks run asynchronously across the environment, but serially on each node
[21:30] you'll never have two hooks running at the same time on a single machine
[21:31] got it! that's a very useful piece of info!
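A small sketch of the pattern discussed above, with illustrative service names; per the exchange, the commands can be issued back to back, and juju queues the resulting hooks rather than requiring you to wait between them:

    juju add-relation wordpress mysql
    juju add-relation wordpress memcached
    # no need to pause between commands: each relation's hooks are queued,
    # and no two hooks ever run concurrently on the same machine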