=== spammy is now known as Guest80454
=== spammy is now known as Guest29965
=== frankban|afk is now known as frankban
[07:51] Good morning juju world!
[07:56] Good morning kjackal
[08:23] *facepalm*
[08:41] lol!
=== spammy is now known as Guest93444
=== Guest93444 is now known as spammy
[10:48] hello
[10:48] does anybody know how to fix a hook failed "update-status"?
[10:48] I cannot resolve it
[10:52] hi tzon
[10:52] you have an update-status hook that is failing, right?
[10:53] kjackal, yeah, it says hook failed "update-status" on nova-cloud-controller
[10:55] ah, nova-cloud-controller is not something I know anything about, but I will try to help as much as I can
[10:56] ok
[10:56] so, let's see: you are not in a state where the unit is in an error state?
[10:57] yeah, it is in an error state
[10:57] when you do a juju resolved --retry you are left with an error again?
[10:57] what do the logs say?
[10:57] juju debug-log
[10:58] you leave the logs running in a console and you fire a resolved --retry
[11:00] you could filter the logs to only the failing unit with something like this: juju debug-log --include unit-mysql-0
[11:08] it gets me an error with the include
[11:09] I used juju debug-log | grep nova-cloud
[11:09] but I did not get any results
[11:09] :/
[11:11] hm.... what kind of an include error?
[11:12] sorry, I don't get you
[11:12] with the resolved --retry it says that it is already resolved
[11:13] ok, so if it says it is already resolved then your unit should not be in an error state
[11:13] could you double check that?
[11:15] yeah, I checked it again, it is in an error state
[11:15] maybe it's a bug?
[11:17] doesn't seem right. Could be a bug, but it is surprising...
[11:18] it is a rather basic "use case": error state -> resolved --retry
[11:19] yeah, I have resolved similar issues in the past this way but I have no idea what's going on now
[11:20] also, I have another service that is in the executing state and has been running an update for 2 days now
[11:20] this also is not normal
[11:22] 2 days! Super strange, I would expect things to expire after 2 hours or something
[11:23] So do you know how to trigger an update-status hook?
[11:23] juju run --unit namenode/1 'hooks/update-status'
[11:23] this could save you some time
[11:24] tzon: ^
[11:27] ok, I will give it a shot
[11:33] it just gets stuck when I run it :)
[11:35] Cool, that means that the hook runs OR that another hook is now running
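The retry-and-watch loop kjackal describes above boils down to something like the following sketch. The unit name nova-cloud-controller/0 is an assumption; substitute whatever unit juju status reports as failed.

    # terminal 1: stream the logs, filtered to the failing unit
    juju debug-log --include unit-nova-cloud-controller-0

    # terminal 2: clear the error state and retry the failed hook,
    # then watch terminal 1 for the traceback
    juju resolved --retry nova-cloud-controller/0

    # once the unit is no longer in an error state, the hook can also
    # be fired by hand (this blocks while another hook is still running)
    juju run --unit nova-cloud-controller/0 'hooks/update-status'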
[11:37] hi folks, after upgrading to 2.0-beta11 I have issues running any juju commands. Is there a way to "reset/reboot" juju?
=== BlackDex_ is now known as BlackDex
[12:06] finally I got an error: timed out :/
[12:08] infinityplusb: do you have a bootstrapped controller?
[12:10] @babbageclunk: I do have a controller present when I do `juju list-controllers` but if I try and get details about the model with `juju models` it hangs
[12:11] juju also hangs when I do `juju status` so I can't see what is happening
[12:12] infinityplusb: What about when you run `juju status --debug`?
[12:14] @babbageclunk: ah, that gives me ... something. It seems there is something "amiss" with lxd. I don't seem to have permissions to the charms that are already deployed ...
[12:15] infinityplusb: Want to put the output into a pastebin?
=== spammy is now known as Guest78769
=== dpm_ is now known as dpm
[12:18] @babbageclunk: http://pastebin.com/f5Gi51J3
[12:20] which is odd, because if I do a `groups` command, I can see I am in the lxd group
[12:21] infinityplusb: ok, it seems like you can't connect to the container running the controller? Can you ssh to ubuntu@10.31.19.19?
[12:23] @babbageclunk: via `juju ssh ...` no, but I can just via regular ssh
[12:25] infinityplusb: hmm. Inside the container can you see jujud running?
[12:27] @babbageclunk: if I do a `service jujud status` it returns it as "inactive (dead)" ... probably not a good sign
[12:27] infinityplusb: no, doesn't sound great!
[12:27] infinityplusb: What can you see in /var/log/juju?
[12:31] @babbageclunk: many many errors - a lot similar to "juju.rpc server.go:576 error writing response: write tcp 10.31.19.19:17070->10.31.19.14:59690: write: connection reset by peer"
[12:32] @babbageclunk: and lots of "broken pipe" messages
[12:32] infinityplusb: Is that on the controller host?
[12:33] @babbageclunk: yup, in the "machine-0.log"
[12:36] infinityplusb: Maybe put it in a pastebin again? (This kind of stuff ends up using a lot of pastebins. ;)
=== Guest78769 is now known as spammy
[12:37] infinityplusb_: Does the controller have anything deployed? Is the ip address it's trying to write to the host's?
[12:37] @babbageclunk: it's like 300k lines long. I'll pastebin it somewhere :P And I *think* there is stuff deployed, but I can't do a `juju status` to see what is deployed where.
[12:38] infinityplusb_: So was this bootstrapped with a previous beta of juju?
[12:40] @babbageclunk: ... maybe. I (stupidly) didn't check if anything was up before updating
[12:41] infinityplusb_: That might be part of the problem - I think we've had some backwards-incompatible changes in the latest beta. Could you try bootstrapping a new controller and see if you get the same issue?
[12:43] @babbageclunk: if I try a new bootstrap, I get a permission error about being in the lxd group (which I am).
[12:44] infinityplusb_: That's weird. I don't really know much about lxd permissions - might be best to email the juju list? Sorry not to be too much help.
[12:45] @babbageclunk: nah, that's cool. I've learned some new things along the way. Thanks for trying. I'll keep digging. :)
[12:46] infinityplusb_: ok, good luck!
[13:19] Morning all!
[13:21] If I've written a new layer XYZ, how do I publish it, so that some other charm can say "includes: [ 'layer:XYZ' ]"?
[13:28] hi neiljerram, you could/should go and register your layer at http://interfaces.juju.solutions/
[13:30] kjackal, I see, thanks. What about during development? Is that just a matter of putting the layer code under ${LAYER_PATH}?
[13:31] neiljerram: yes, putting your layer under {LAYER_PATH} will work
[13:31] kjackal, Many thanks.
[13:33] neiljerram: note that charm build will first look in your {LAYER_PATH} and then try to grab the layer from a remote repo. That means that your local copy will shadow anything else that might be out there
[14:59] * D4RKS1D3 Hi
=== Tristit1a is now known as Tristitia
=== frankban is now known as frankban|afk
=== spammy is now known as Guest70034
=== Guest70034 is now known as spammy
[17:44] fyi: openstack irc meeting in #ubuntu-meeting this week. check out the details: https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting
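For the hanging-commands problem babbageclunk walks infinityplusb through (12:08 to 12:41), the checks amount to roughly the sketch below. The controller address 10.31.19.19, the service name jujud and the machine-0.log file are taken from that conversation; on another deployment the address and agent names will differ.

    # does the client reach the controller API at all?
    juju status --debug

    # fall back to plain ssh into the controller container
    ssh ubuntu@10.31.19.19

    # inside the container: is the agent actually running?
    # ("inactive (dead)" means the controller agent is down)
    service jujud status

    # inspect the controller agent log for rpc / connection errors
    less /var/log/juju/machine-0.log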
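The local-layer workflow kjackal describes to neiljerram (13:21 to 13:33) looks roughly like the sketch below. The layer name XYZ comes from the question; the LAYER_PATH location and the my-charm directory are placeholders, and the layer:basic entry is only there as the usual base layer.

    # keep the layer you are developing directly under LAYER_PATH,
    # so charm build finds it before any remote copy
    export LAYER_PATH=$HOME/layers
    mkdir -p $LAYER_PATH/XYZ

    # in the consuming charm's layer.yaml:
    #   includes: ['layer:basic', 'layer:XYZ']

    # charm build resolves layer:XYZ from LAYER_PATH first, so the local
    # copy shadows whatever is registered at http://interfaces.juju.solutions/
    cd my-charm
    charm build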