[07:51] <kjackal> Good morning juju world!
[07:56] <juju_world> Good morning kjackal
[08:23] <magicaltrout> *facepalm*
[08:41] <kjackal> lol!
[10:48] <tzon> hello
[10:48] <tzon> does anybody know how to fix hook failed "update-status"?
[10:48] <tzon> I cannot resolve it
[10:52] <kjackal> hi tzon
[10:52] <kjackal> you have an update-status hook that is failing, right?
[10:53] <tzon> kjackal, yeah it says hook failed "update-status" on nova-cloud-controller
[10:55] <kjackal> ah, nova-cloud-controller not something I know anything about but I will try to help as much as I can
[10:56] <tzon> ok
[10:56] <kjackal> so, let's see: you are in a state where the unit is in an error state?
[10:57] <tzon> yeah it is in error state
[10:57] <kjackal> when you do a juju resolved --retry <unit>, are you left again with an error?
[10:57] <kjackal> what do the logs say?
[10:57] <kjackal> juju debug-log
[10:58] <kjackal> leave the logs running in one console and fire a resolved --retry in another
[11:00] <kjackal> you could filter logs of only the failing unit with something like this: juju debug-log --include unit-mysql-0
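A minimal sketch of the retry-while-watching workflow kjackal describes; the unit name nova-cloud-controller/0 is an assumption based on tzon's failing unit:

    # terminal 1: stream logs for just the failing unit (unit tag assumed)
    juju debug-log --include unit-nova-cloud-controller-0

    # terminal 2: retry the failed hook while the logs stream
    juju resolved --retry nova-cloud-controller/0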
[11:08] <tzon> it gets me an error with the --include
[11:09] <tzon> I used juju debug-log | grep nova-cloud
[11:09] <tzon> but I did not get any results
[11:09] <tzon> :/
[11:11] <kjackal> hm... what kind of include error?
[11:12] <tzon> sorry, I don't get you
[11:12] <tzon> with the resolved --retry it says that it is already resolved
[11:13] <kjackal> ok so if it says it is already resolved then your unit should not be in an error state
[11:13] <kjackal> could you double check that?
[11:15] <tzon> yeah I checked it again it is in error state
[11:15] <tzon> maybe its a bug?
[11:17] <kjackal> doesn't seem right. Could be a bug, but it is surprising...
[11:18] <kjackal> it is a rather basic "use case": error state -> resolved --retry
[11:19] <tzon> yeah, I have resolved similar issues this way in the past, but I have no idea what's going on now
[11:20] <tzon> also I have another service that has been in the executing state, running an update, for 2 days now
[11:20] <tzon> that is also not normal
[11:22] <kjackal> 2 days! Super strange, I would expect things to expire after 2 hours or something
[11:23] <kjackal> So, do you know how to trigger an update-status hook?
[11:23] <kjackal> juju run --unit namenode/1 'hooks/update-status'
[11:23] <kjackal> this could save you some time
[11:24] <kjackal> tzon: ^
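A sketch of firing the hook by hand, following kjackal's juju run pattern (his example unit is namenode/1); nova-cloud-controller/0 is again an assumption:

    # run the update-status hook directly on the failing unit
    juju run --unit nova-cloud-controller/0 'hooks/update-status'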
[11:27] <tzon> ok I will give it a shot
[11:33] <tzon> it just gets stuck when I run it :)
[11:35] <kjackal> Cool, that means the hook is running OR that another hook is now running
[11:37] <infinityplusb> hi folks, after upgrading to 2.0-beta11 I have issues running any juju commands. Is there a way to "reset/reboot" juju?
[12:06] <tzon> finally I got a "timed out" error :/
[12:08] <babbageclunk> infinityplusb: do you have a bootstrapped controller?
[12:10] <infinityplusb> @babbageclunk: I do have a controller present when I do `juju list-controllers`, but if I try to get details about the model with `juju models` it hangs
[12:11] <infinityplusb> juju also hangs when I do `juju status` so I can't see what is happening
[12:12] <babbageclunk> infinityplusb: What about when you run `juju status --debug`?
[12:14] <infinityplusb> @babbageclunk: ah, that gives me ... something. It seems there is something "amiss" with lxd. I don't seem to have permissions to the charms that are already deployed ...
[12:15] <babbageclunk> infinityplusb: Want to put the output into a pastebin?
[12:18] <infinityplusb> @babbageclunk: http://pastebin.com/f5Gi51J3
[12:20] <infinityplusb> which is odd, cause if I do a `groups` command, I can see I am in the lxd group
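The --debug flag babbageclunk suggested is what surfaced the lxd permission problem; when a juju command hangs silently, it shows what the client is attempting:

    # show what the client is actually doing while it appears to hang
    juju status --debug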
[12:21] <babbageclunk> infinityplusb: ok, it seems like you can't connect to the container running the controller? Can you ssh to ubuntu@10.31.19.19?
[12:23] <infinityplusb> @babbageclunk: via `juju ssh ...` no, but I can via regular ssh
[12:25] <babbageclunk> infinityplusb: hmm. Inside the container can you see jujud running?
[12:27] <infinityplusb> @babbageclunk: if I do a `service jujud status` it returns it as "inactive (dead)" ... probably not a good sign
[12:27] <babbageclunk> infinityplusb: no, doesn't sound great!
[12:27] <babbageclunk> infinityplusb: What can you see in /var/log/juju?
[12:31] <infinityplusb> @babbageclunk: many many errors - a lot similar to "juju.rpc server.go:576 error writing response: write tcp 10.31.19.19:17070->10.31.19.14:59690: write: connection reset by peer"
[12:32] <infinityplusb> @babbageclunk: and lots of "broken pipe" messages
[12:32] <babbageclunk> infinityplusb: Is that on the controller host?
[12:33] <infinityplusb> @babbageclunk: yup, in the "machine-0.log"
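A sketch of the inspection steps used above; the 10.31.19.19 address comes from the pastebined error, and the service name is the one infinityplusb used (on Juju 2.x machines it is typically jujud-machine-<n>):

    # ssh straight into the controller container
    ssh ubuntu@10.31.19.19

    # inside the container: is the machine agent alive?
    service jujud status

    # inspect the agent's logs
    less /var/log/juju/machine-0.log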
[12:36] <babbageclunk> infinityplusb: Maybe put it in a pastebin again? (This kind of stuff ends up using a lot of pastebins. ;)
[12:37] <babbageclunk> infinityplusb_: Does the controller have anything deployed? Is the ip address it's trying to write to the host's?
[12:37] <infinityplusb_> @babbageclunk: it's like 300k lines long. I'll pastebin it somewhere :P And I *think* there is stuff deployed, but I can't do a `juju status` to see what is deployed where.
[12:38] <babbageclunk> infinityplusb_: So was this bootstrapped with a previous beta of juju?
[12:40] <infinityplusb_> @babbageclunk: ... maybe. I (stupidly) didn't check if anything was up before updating
[12:41] <babbageclunk> infinityplusb_: That might be part of the problem - I think we've had some backwards-incompatible changes in the latest beta. Could you try bootstrapping a new controller and see if you get the same issue?
[12:43] <infinityplusb_> @babbageclunk: if I try a new bootstrap, I get a permission error about being in the lxd group (which I am).
[12:44] <babbageclunk> infinityplusb_: That's weird. I don't really know much about lxd permissions - it might be best to email the juju list? Sorry I couldn't be more help.
[12:45] <infinityplusb_> @babbageclunk: nah that's cool. I've learned some new things along the way. Thanks for trying. I'll keep digging. :)
[12:46] <babbageclunk> infinityplusb_: ok, good luck!
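A sketch of the fresh-bootstrap test babbageclunk suggests; the controller name is hypothetical, and in Juju 2.0 the built-in lxd cloud is named localhost (the argument order shifted during the 2.0 beta series):

    # stand up a brand-new controller on the local lxd cloud
    # to see whether the permission failure reproduces
    juju bootstrap localhost test-controller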
[13:19] <neiljerram> Morning all!
[13:21] <neiljerram> If I've written a new layer XYZ, how do I publish it, so that some other charm can say "includes: [ 'layer:XYZ' ]"?
[13:28] <kjackal> hi neiljerram, you could/should go and register your layer at http://interfaces.juju.solutions/
[13:30] <neiljerram> kjackal, I see, thanks.  What about during development?  Is that just a matter of putting the layer code under ${LAYER_PATH} ?
[13:31] <kjackal> neiljerram: yes, putting your layer under ${LAYER_PATH} will work
[13:31] <neiljerram> kjackal, Many thanks.
[13:33] <kjackal> neiljerram: note that charm build will first look in your ${LAYER_PATH} and then try to grab the layer from a remote repo. That means that your local copy will shadow anything else that might be out there
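A sketch of the local-layer workflow kjackal outlines; the directory layout and the layer name XYZ are assumptions:

    # point charm build at your local layers first
    export LAYER_PATH=$HOME/layers

    # your layer's source lives at $LAYER_PATH/XYZ; a consuming charm then
    # declares in its layer.yaml:
    #   includes: ['layer:XYZ']

    # charm build resolves layer:XYZ from $LAYER_PATH before trying the
    # remote registry, so your local copy shadows any published version
    charm build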
[14:59]  * D4RKS1D3 Hi
[17:44] <cargonza> fyi: openstack irc meeting in #ubuntu-meeting this week. check out the details : https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting