[00:16] <routelastresort> I know that Ubuntu GNOME is like 99% the same, but maybe I'm the only person in the world using juju + Saucy GNOME
[00:16] <routelastresort> far fewer issues on my new stock 13.10 system
[00:47] <routelastresort> I'm searching the bugs, but is there a bug for "dying" services that can never be deleted on the local provider?
[00:48] <routelastresort> I've just been destroying my environment because it's easier
[00:55] <sarnold> routelastresort: I think zradmin was fighting that earlier -- zradmin, did you ever find an answer for that?
[00:58] <routelastresort> lp:charms/pbuilder
[00:58] <routelastresort> written for quantal
[00:59] <routelastresort> fails because precise doesn't have pbuilder-scripts w/o backports
[00:59] <routelastresort> easy fix, but a) dying services should still be able to be killed
[01:00] <routelastresort> and b) should it be letting me install Charms that aren't for precise in the first place??
[01:27] <zradmin> sarnold: i found a subrelation that didnt destroy itself... it wouldnt show under juju status, but if i wrote it to a file it showed up
[01:28] <sarnold> zradmin: crazy :) thanks
[02:10] <zradmin> sarnold: no problem, i kind of wish there was a juju "view tasklist" so you could see what was holding it up sometimes
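Until something like that "view tasklist" exists, one workaround along zradmin's lines is to dump the full status and filter it yourself. A minimal sketch, assuming `juju status --format json` output shaped roughly like the invented sample below (the field names follow juju 1.x status output; the data is illustrative, not from a real environment):

```python
import json

# Invented sample of `juju status --format json` output (juju 1.x-style
# field names; real output contains many more keys).
sample = """
{
  "services": {
    "ceph-osd": {
      "units": {
        "ceph-osd/1": {"agent-state": "error",
                       "agent-state-info": "hook failed: \\"install\\"",
                       "life": "dying"},
        "ceph-osd/2": {"agent-state": "started"}
      }
    }
  }
}
"""

def stuck_units(status):
    """Return names of units that are dying or stuck in an error state."""
    stuck = []
    for svc in status.get("services", {}).values():
        for name, unit in svc.get("units", {}).items():
            if unit.get("life") == "dying" or unit.get("agent-state") == "error":
                stuck.append(name)
    return stuck

status = json.loads(sample)
print(stuck_units(status))
```

Dumping to a file and filtering like this is also how zradmin spotted the lingering subrelation that plain `juju status` wasn't showing.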
[04:05] <sarnold> marcoceppi: ^^ I like zradmin's idea of a "view tasklist"  :)
[06:37] <marcoceppi> sarnold: zradmin there's talk of exposing this information from the juju-core team this cycle. Not sure where it lands on the roadmap
[06:38] <sarnold> marcoceppi: *nod* you guys are ambitious :)
[08:08] <nesusvet_> Hello everyone. I have the following question: I tried to deploy hosts via MAAS and everything went well after the juju bootstrap command, but after deploying the whole environment I see only one machine in the "juju status" output
[11:36] <AskUbuntu> Not able to find juju charm mysql root password? | http://askubuntu.com/q/365557
[15:30] <marcoceppi> nesusvet_: How many nodes do you have in MAAS?
[16:11] <Azendale> I have machines running on MaaS using Juju. Some of them failed to deploy because a hook didn't run, due to an (invalid) setting I set in the config. In the UI, I tried marking them as resolved and then removing them (and repeated through a few cycles of them going green and then red). (I believe doing resolve + remove will make juju not get stuck on the fact that the hook didn't work, and let juju just get rid of the machine)
[16:12] <Azendale> Now I have units that seem stuck and say "agent-state: error, life: dying" in juju status. I've tried destroying the units and the machines they are on. Is there any way to just give up on those units and recycle the machines they are on for another try?
[16:19] <Azendale> If I just stop those machines in MaaS (I'm pretty sure that unassigns them) will juju notice, or will it just break juju more?
[16:19] <kurt__> you have to mark them as "resolved"
[16:20] <kurt__> "juju resolved"
[16:20] <kurt__> then they will destroy
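The sequence kurt__ describes can be sketched as follows, using juju 1.x command names. The unit and machine names here are placeholders, not taken from the conversation, and the wrapper only executes the commands when juju is actually installed:

```shell
# Sketch of the resolve-then-destroy sequence (juju 1.x CLI).
# "ceph-osd/1" and machine "3" are placeholder names.
unit="ceph-osd/1"
machine="3"

run() {
  # Print each command, and only execute it when juju is installed,
  # so the sketch stays runnable anywhere.
  echo "+ $*"
  if command -v juju >/dev/null 2>&1; then "$@"; fi
}

run juju resolved "$unit"           # clear the failed-hook state
run juju destroy-unit "$unit"       # then ask for the unit's removal
run juju destroy-machine "$machine" # finally recycle the machine
```

As the exchange below shows, `juju resolved` can still report "already resolved" while the unit stays dying, so this sequence is a first attempt rather than a guaranteed fix.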
[16:23] <Azendale> That's what I was trying through the juju gui; I just tried it on the command line and I'm getting conflicting messages: "ERROR cannot set resolved mode for unit "ceph-osd/1": already resolved" when I tried to mark it resolved, but status says "agent-state: error, agent-state-info: 'hook failed: "install"', life: dying"
[16:24] <kurt__> you may need to destroy env and start again
[16:25] <kurt__> if I cannot destroy services after resolving, that is typically the path I take
[16:26] <Azendale> kurt__: not my first choice (other things that took a bit are working in the environment), but it is a test environment, so if I have to I can do that
[16:27] <kurt__> Azendale: understood and definitely not mine either
[16:27] <Azendale> kurt__: I've typically done the same thing the other times I've run into this
[16:28] <Azendale> kurt__: I've just been trying to learn a bit more about how to fix things instead of always just starting over
[16:29] <kurt__> Azendale: your cause is noble. If you have the time, post a question on AskUbuntu and the experts will get to it, just perhaps not in the timeframe you need.
[16:29] <kurt__> I know they are all on a plane today
[16:30] <mgz> Azendale: it never hurts to file a bug against juju-core with all the logs attached for cases like this
[16:30] <kurt__> that too :)
[16:31] <mgz> you want all-machines.log from machine 0 if you're not sure which particular machine was at issue
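Fetching that log for a bug report can look like the sketch below. The path is the usual juju 1.x location for the aggregate log on the bootstrap node; treat it as an assumption and adjust for your deployment. The command only runs when juju is installed:

```shell
# Sketch: copy the aggregate log off the bootstrap node (machine 0)
# before filing a juju-core bug. Path is the usual juju 1.x default.
cmd="juju scp 0:/var/log/juju/all-machines.log ."
echo "+ $cmd"
if command -v juju >/dev/null 2>&1; then $cmd; fi
```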
[16:31] <Azendale> kurt__: ok, I probably will ask on askubuntu. I started with IRC because of the faster iteration, but I'm trying to get into the habit of documenting what I've learned on AskUbuntu, because it seems like there are a lot of semi-confused people trying this stuff
[16:32]  * Azendale realizes I confused kurt__ and mgz
[16:32] <kurt__> confusion?
[16:32] <kurt__> lol
[16:33] <Azendale> thanks kurt__ and mgz, I will have to get back to this in about an hour or two, but I will do what you suggested
[16:33] <kurt__> Azendale: good luck
[17:02] <AskUbuntu> Juju remove units stuck in dying state so I can start over? | http://askubuntu.com/q/365724
[17:14] <AskUbuntu> JUJU and ERROR environment has no access-key or secret-key | http://askubuntu.com/q/365734
[19:06] <AskUbuntu> What's the correct way to share a Juju environment? | http://askubuntu.com/q/365807
[19:58] <zradmin_> is anyone around who's dealing with the precise/havana charm updates?
[20:13] <marcoceppi> zradmin_: care to elaborate?
[20:31] <zradmin_> marcoceppi: sure thing. I've been setting up a havana lab with juju and everything seems to stand up just fine, except quantum/neutron doesn't seem to work at all, which is preventing me from launching anything in nova or logging into horizon. Trying to run a neutron net-list gives me a 503 server unavailable error
[21:01] <marcoceppi> zradmin_: jamespage and adam_g should be working on that
[21:29] <zradmin_> marcoceppi: ok hopefully they're on today :)