=== spammy is now known as Guest80454 | ||
=== spammy is now known as Guest29965 | ||
=== frankban|afk is now known as frankban | ||
kjackal | Good morning juju world! | 07:51 |
---|---|---|
juju_world | Good morning kjackal | 07:56 |
magicaltrout | *facepalm* | 08:23 |
kjackal | lol! | 08:41 |
=== spammy is now known as Guest93444 | ||
=== Guest93444 is now known as spammy | ||
tzon | hello | 10:48 |
tzon | does anybody know how to fix hook failed "update-status"? | 10:48 |
tzon | I cannot resolve it | 10:48 |
kjackal | hi tzon | 10:52 |
kjackal | you have an update status that is failing, right? | 10:52 |
tzon | kjackal, yeah it says hook failed "update-status" on nova-cloud-controller | 10:53 |
kjackal | ah, nova-cloud-controller is not something I know much about, but I will try to help as much as I can | 10:55 |
tzon | ok | 10:56 |
kjackal | so, let's see, you are now in a state where the unit is in an error state? | 10:56 |
tzon | yeah it is in error state | 10:57 |
kjackal | when you do a juju resolved --retry <unit>, are you left again with an error? | 10:57 |
kjackal | what do the logs say? | 10:57 |
kjackal | juju debug-log | 10:57 |
kjackal | leave the logs running in one console and fire a resolved --retry from another | 10:58 |
kjackal | you could filter errors of only the failing unit with something like this: juju debug-log --include unit-mysql-0 | 11:00 |
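The workflow kjackal describes (tail the logs in one console, retry the failed hook in another, filtered to the failing unit) can be sketched as a small dry-run script. The unit name `nova-cloud-controller/0` is an assumption based on this conversation; substitute your own failing unit.

```shell
#!/bin/sh
# Dry-run sketch of the retry-while-watching-logs workflow.
# UNIT is an assumption taken from the conversation above.
UNIT="nova-cloud-controller/0"

# Console 1: stream only the failing unit's log messages.
# Note the --include tag form: unit-<name>-<number>, not <name>/<number>.
echo "juju debug-log --include unit-nova-cloud-controller-0"

# Console 2: ask juju to re-run the failed hook while the logs scroll.
echo "juju resolved --retry $UNIT"
```

The commands are echoed rather than executed so the two-console split stays explicit; run them for real in separate terminals.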
tzon | it gets me an error with the include | 11:08 |
tzon | I used juju debug-log | grep nova-cloud | 11:09 |
tzon | but I did not get any results | 11:09 |
tzon | :/ | 11:09 |
kjackal | hm.... what kind of an include error? | 11:11 |
tzon | sorry I dont get you | 11:12 |
tzon | with the resolved --retry it says that it is already resolved | 11:12 |
kjackal | ok so if it says it is already resolved then your unit should not be in an error state | 11:13 |
kjackal | could you double-check that? | 11:13 |
tzon | yeah I checked it again it is in error state | 11:15 |
tzon | maybe its a bug? | 11:15 |
kjackal | doesn't seem right. Could be a bug, but it is surprising... | 11:17 |
kjackal | it is a rather basic "usecase": error state -> resolve --retry | 11:18 |
tzon | yeah, I have resolved similar issues this way in the past, but I have no idea what's going on now | 11:19 |
tzon | also I have another service that is in executing state and has been running an update for 2 days now | 11:20 |
tzon | this also is not normal | 11:20 |
kjackal | 2 days! Super strange, I would expect things to expire after 2 hours or something | 11:22 |
kjackal | So do you know how to trigger an update-status hook? | 11:23 |
kjackal | juju run --unit namenode/1 'hooks/update-status' | 11:23 |
kjackal | this could save you some time | 11:23 |
kjackal | tzon: ^ | 11:24 |
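kjackal's tip can be generalized: juju's periodic update-status timer can be bypassed by invoking the hook directly with `juju run` (juju 2.x syntax, as shown in the log). The unit name below is a placeholder.

```shell
#!/bin/sh
# Dry-run sketch: manually fire a unit's update-status hook instead of
# waiting for juju's periodic timer. UNIT is a placeholder.
UNIT="nova-cloud-controller/0"

# Build the command string (juju 2.x form, as used in the conversation).
CMD="juju run --unit $UNIT 'hooks/update-status'"
echo "$CMD"
```

Running the hook by hand reproduces the failure immediately, which is the time-saver kjackal is pointing at.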
tzon | ok I will give it a shot | 11:27 |
tzon | it just got stuck when I ran it :) | 11:33 |
kjackal | Cool that means that the hook runs OR that another hook is now running | 11:35 |
infinityplusb | hi folks, after upgrading to 2.0-beta11 I have issues running any juju commands. Is there a way to "reset/reboot" juju? | 11:37 |
=== BlackDex_ is now known as BlackDex | ||
tzon | finally I got a timed out error :/ | 12:06 |
babbageclunk | infinityplusb: do you have a bootstrapped controller? | 12:08 |
infinityplusb | @babbageclunk: I do have a controller present when I do `juju list-controllers` but if I try and get details about the model with `juju models` it hangs | 12:10 |
infinityplusb | juju also hangs when I do `juju status` so I can't see what is happening | 12:11 |
babbageclunk | infinityplusb: What about when you run `juju status --debug`? | 12:12 |
infinityplusb | @babbageclunk: ah, that gives me ... something. It seems there is something "amiss" with lxd. I don't seem to have permissions to the charms that are already deployed ... | 12:14 |
babbageclunk | infinityplusb: Want to put the output into a pastebin? | 12:15 |
=== spammy is now known as Guest78769 | ||
=== dpm_ is now known as dpm | ||
infinityplusb | @babbageclunk: http://pastebin.com/f5Gi51J3 | 12:18 |
infinityplusb | which is odd, cause if I do a `groups` command, I can see I am in the lxd group | 12:20 |
babbageclunk | infinityplusb: ok, it seems like you can't connect to the container running the controller? Can you ssh to ubuntu@10.31.19.19? | 12:21 |
infinityplusb | @babbageclunk: via `juju ssh ...` no, but I can just via regular ssh | 12:23 |
babbageclunk | infinityplusb: hmm. Inside the container can you see jujud running? | 12:25 |
infinityplusb | @babbageclunk: if I do a `service jujud status` it returns it as "inactive (dead)" ... probably not a good sign | 12:27 |
babbageclunk | infinityplusb: no, doesn't sound great! | 12:27 |
babbageclunk | infinityplusb: What can you see in /var/log/juju? | 12:27 |
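The debugging sequence babbageclunk walks through (client-side debug output, then ssh into the controller container, then check the agent service and its logs) can be summarized as a checklist. The controller IP comes from this conversation, and the exact systemd service name is an assumption (on juju 2.x it is typically `jujud-machine-0`, not plain `jujud`); adjust both for your environment.

```shell
#!/bin/sh
# Dry-run checklist of the controller health checks from the conversation.
# CONTROLLER_IP is taken from the log; substitute your own.
CONTROLLER_IP="10.31.19.19"

echo "juju status --debug                        # see where the client hangs"
echo "ssh ubuntu@$CONTROLLER_IP                  # reach the controller container directly"
echo "service jujud-machine-0 status             # is the controller agent running? (name is an assumption)"
echo "tail -n 100 /var/log/juju/machine-0.log    # recent agent errors"
```

An "inactive (dead)" agent plus connection-reset errors in machine-0.log, as seen here, points at the agent itself rather than client configuration.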
infinityplusb | @babbageclunk: many many errors - a lot similar to "juju.rpc server.go:576 error writing response: write tcp 10.31.19.19:17070->10.31.19.14:59690: write: connection reset by peer" | 12:31 |
infinityplusb | @babbageclunk: and lots of "broken pipe" messages | 12:32 |
babbageclunk | infinityplusb: Is that on the controller host? | 12:32 |
infinityplusb | @babbageclunk: yup, in the "machine-0.log" | 12:33 |
babbageclunk | infinityplusb: Maybe put it in a pastebin again? (This kind of stuff ends up using a lot of pastebins. ;) | 12:36 |
=== Guest78769 is now known as spammy | ||
babbageclunk | infinityplusb_: Does the controller have anything deployed? Is the ip address it's trying to write to the host's? | 12:37 |
infinityplusb_ | @babbageclunk: it's like 300k lines long. I'll pastebin it somewhere :P And I *think* there is stuff deployed, but I can't do a `juju status` to see what is deployed where. | 12:37 |
babbageclunk | infinityplusb_: So was this bootstrapped with a previous beta of juju? | 12:38 |
infinityplusb_ | @babbageclunk: ... maybe. I (stupidly) didn't check if anything was up before updating | 12:40 |
babbageclunk | infinityplusb_: That might be part of the problem - I think we've had some backwards incompatible changes in the latest beta, could you try bootstrapping a new controller and see if you get the same issue? | 12:41 |
infinityplusb_ | @babbageclunk: if I try a new bootstrap, I get a permission error about being in the lxd group (which I am). | 12:43 |
babbageclunk | infinityplusb_: That's weird. I don't really know much about lxd permissions - might be best to email the juju list? Sorry not to be too much help. | 12:44 |
infinityplusb_ | @babbageclunk: nah that's cool. I've learned some new things along the way. Thanks for trying. I'll keep digging. :) | 12:45 |
babbageclunk | infinityplusb_: ok, good luck! | 12:46 |
neiljerram | Morning all! | 13:19 |
neiljerram | If I've written a new layer XYZ, how do I publish it, so that some other charm can say "includes: [ 'layer:XYZ' ]"? | 13:21 |
kjackal | hi neiljerram you could/should go and register your layer at http://interfaces.juju.solutions/ | 13:28 |
neiljerram | kjackal, I see, thanks. What about during development? Is that just a matter of putting the layer code under ${LAYER_PATH} ? | 13:30 |
kjackal | neiljerram: yes, putting your layer under ${LAYER_PATH} will work | 13:31 |
neiljerram | kjackal, Many thanks. | 13:31 |
kjackal | neiljerram: note that charm build will first look in your ${LAYER_PATH} and then try to grab a layer from a remote repo. That means that your local copy will shadow anything else that might be out there | 13:33 |
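The local-development flow kjackal describes can be sketched as follows: a layer directory placed under `$LAYER_PATH` shadows any remote copy when `charm build` resolves `includes: [ 'layer:XYZ' ]`. The paths below are illustrative, not prescribed.

```shell
#!/bin/sh
# Sketch of local layer development: layers under $LAYER_PATH shadow
# remote copies during 'charm build'. Paths here are illustrative.
LAYER_PATH="${TMPDIR:-/tmp}/layers-example"
export LAYER_PATH

# A local working copy of layer:XYZ that charm build would find first.
mkdir -p "$LAYER_PATH/XYZ"

# Building a charm that lists "includes: [ 'layer:XYZ' ]" would now pick
# up the local copy; shown as a dry run here.
echo "charm build"
```

Once the layer is stable, registering it at http://interfaces.juju.solutions/ (as suggested above) makes it resolvable for everyone else.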
* D4RKS1D3 Hi | 14:59 | |
=== Tristit1a is now known as Tristitia | ||
=== frankban is now known as frankban|afk | ||
=== spammy is now known as Guest70034 | ||
=== Guest70034 is now known as spammy | ||
cargonza | fyi: openstack irc meeting in #ubuntu-meeting this week. check out the details : https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting | 17:44 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!