jackweirdy | Well now I think I'm being bitten by this https://bugs.launchpad.net/juju-core/+bug/1452422 but at least I'm further along ;) | 00:01 |
---|---|---|
mup | Bug #1452422: Cannot boostrap from custom image-metadata-url or by specifying metadata-source <sts> <juju-core:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1452422> | 00:01 |
=== scuttle`` is now known as scuttle|afk | ||
bradm | wallyworld: sigh, this is still busted. | 03:28 |
bradm | wallyworld: ah, wrong channel :) | 03:29 |
=== rogpeppe1 is now known as rogpeppe | ||
stub | marcoceppi: https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/262608 isn't showing up in the review queue and I don't know why | 12:22 |
marcoceppi | stub: check back in 30 mins | 12:23 |
marcoceppi | it should show up then | 12:23 |
stub | marcoceppi: The MP was created yesterday, so unless you have just poked it... | 12:24 |
marcoceppi | stub: I just poked the review queue | 12:24 |
stub | ta :) | 12:24 |
marcoceppi | stub: it seems to have popped in, we'll have a look in due time! | 12:52 |
=== brandon is now known as web | ||
=== rogpeppe1 is now known as rogpeppe | ||
=== scuttle|afk is now known as scuttlemonkey | ||
danielbe | there was a talk at UDS about developers getting credits for a cloud for testing juju charms. Are there any news concerning this? | 15:49 |
=== caribou_ is now known as caribou | ||
cholcombe | I think the answer to this is no, but is it possible to control the kernel version of my instances in juju? I know the containers can't, but maybe with the VMs? | 16:07 |
marcoceppi | danielbe: not yet, it's in progress | 16:08 |
marcoceppi | cholcombe: not really outside of MAAS | 16:08 |
cholcombe | ok i figured | 16:08 |
danielbe | ok, thanks marcoceppi | 16:12 |
=== kadams54_ is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54_ | ||
=== kadams54_ is now known as kadams54-away | ||
lkraider | I have machines stuck in dying state | 19:41 |
lkraider | How do I force-clear them? | 19:41 |
lkraider | running under vagrant (virtualbox) lxc | 19:42 |
lkraider | there is no corresponding machine in /var/lib/juju/containers/ | 19:52 |
lkraider | what is the command to clear a machine state? | 20:22 |
marcoceppi | lkraider: is it an error on a unit or a machine provisioning? | 20:27 |
lkraider | I believe in the machine | 20:27 |
lkraider | the network was down at the moment I tried to create a service | 20:27 |
marcoceppi | well, you can `juju retry-provisioning #` | 20:27 |
lkraider | "11": | 20:28 |
lkraider | instance-id: pending | 20:28 |
lkraider | life: dying | 20:28 |
lkraider | series: trusty | 20:28 |
=== kadams54 is now known as kadams54-away | ||
lkraider | "machine 11 is not in an error state" | 20:28 |
lkraider | the machine basically does not exist, for all I know | 20:29 |
marcoceppi | then, that's that. No, there's no way to really "clear" it out afaik | 20:30 |
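For readers hitting the same symptom, the exchange above can be sketched as a short shell session. The status fragment is taken from the log (machine `11`); the `juju` invocations are shown as comments because they need a live environment:

```shell
# The status fragment lkraider pasted, saved for inspection. A machine
# with "instance-id: pending" and "life: dying" was never actually
# provisioned, so there is nothing on disk to clean up.
cat > /tmp/machine-11.yaml <<'EOF'
"11":
  instance-id: pending
  life: dying
  series: trusty
EOF

# retry-provisioning only applies to machines in an *error* state, which
# is why it was rejected here:
#   juju retry-provisioning 11   # -> "machine 11 is not in an error state"
grep -q 'life: dying' /tmp/machine-11.yaml && echo stuck
```

This prints `stuck`, confirming the machine record is in the dying-but-unprovisioned limbo discussed above.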
lkraider | but juju still has some reference to it | 20:30 |
lkraider | but now I cannot provision new machines | 20:30 |
lkraider | juju is stuck | 20:30 |
lkraider | can I force add a new machine? | 20:32 |
lkraider | otherwise it also stays stuck in pending | 20:32 |
lkraider | can't I edit the juju database or something? | 20:34 |
thumper | lkraider: sorry, late to the story, but which provider? | 20:38 |
lkraider | @thumper lxc local | 20:38 |
thumper | lkraider: and host? | 20:39 |
thumper | also, juju version | 20:39 |
lkraider | mac -> vagrant trusty64 -> juju 1.23.3-trusty-amd64 | 20:40 |
thumper | lkraider: I think you can do 'juju remove-machine --force' | 20:41 |
thumper | lkraider: although you might first have to remove the service or unit that is deployed on that machine, I don't recall | 20:41 |
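A minimal sketch of the cleanup thumper suggests. The service name `mysvc` is a hypothetical placeholder; machine `11` comes from the log. The real commands are commented out because they need a live juju environment, so the runnable part only assembles the command line:

```shell
# Remove whatever is deployed on the machine first, then force-remove
# the machine record itself (juju 1.x syntax):
#   juju destroy-service mysvc        # "mysvc" is a placeholder name
#   juju remove-machine 11 --force
machine=11
cmd="juju remove-machine $machine --force"
echo "$cmd"
```

This prints `juju remove-machine 11 --force`. As the log shows, even `--force` can fail to clear a record for a machine that was never provisioned, which is the bug territory explored below.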
lkraider | I destroyed the service already | 20:42 |
=== scuttlemonkey is now known as scuttle|afk | ||
lkraider | @thumper I tried remove-machine --force --verbose --show-log | 20:43 |
lkraider | but it still shows in juju status | 20:43 |
thumper | hmm... | 20:44 |
thumper | was there an error from remove-machine? | 20:44 |
lkraider | https://gist.github.com/anonymous/28409911b3b95bf8fdad | 20:45 |
lkraider | no, not in the output | 20:45 |
lkraider | can I connect to the juju mongodb ? | 21:24 |
lkraider | @thumper I think my bug is this one: https://bugs.launchpad.net/juju-core/+bug/1233457 | 21:28 |
mup | Bug #1233457: service with no units stuck in lifecycle dying <cts-cloud-review> <destroy-service> <state-server> <juju-core:Fix Released by fwereade> <juju-core 1.16:Fix Released by fwereade> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):Won't Fix> <https://launchpad.net/bugs/1233457> | 21:28 |
lkraider | @thumper sorry, that is for a service, not a machine | 21:29 |
lkraider | this bug has some scripts that seem useful: https://bugs.launchpad.net/juju-core/+bug/1089291 | 21:30 |
mup | Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <iso-testing> <theme-oil> <juju-core:Fix Released by fwereade> <juju-core 1.16:Fix Released by fwereade> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):Won't Fix> <https://launchpad.net/bugs/1089291> | 21:30 |
lkraider | but they all fail on the mongo auth | 21:31 |
lkraider | pymongo.errors.OperationFailure: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'15c36a2f77cdc42d'), ('key', u'9c3a5ccb7ba361b481b5ffd2ffa86a07')]) failed: auth fails | 21:31 |
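The auth failure above usually means the scripts were run with the wrong credentials. For juju-core 1.x, the state server's mongod admin password is commonly read from the machine agent's config; the path, the `oldpassword` key, and port 37017 below are the values typically cited for 1.x and should be verified on your own deployment. The fragment here is simulated so the extraction step is runnable:

```shell
# Simulated fragment; the real file on the state server (machine 0) is
# /var/lib/juju/agents/machine-0/agent.conf in the juju-core 1.x layout.
cat > /tmp/agent.conf <<'EOF'
oldpassword: s3cret
EOF

# Extract the admin password from the key: value line:
pw=$(grep oldpassword /tmp/agent.conf | awk '{print $2}')
echo "$pw"

# With the real file, a direct mongo connection looks like this
# (verify the port and TLS flags against your deployment):
#   mongo --ssl -u admin -p "$pw" localhost:37017/admin
```

With the correct password in hand, the hotfix scripts from the bug report should get past the `auth fails` error.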
lkraider | @hazmat it's about your mhotfix-machine.py script | 21:32 |
ddellav | lazyPower: this is my new nick, i created a new one for this channel | 21:41 |
lazyPower | ddellav: awesome :) Glad to see you're already idling here | 21:41 |
ddellav | unifying usernames across everything helps | 21:41 |
ddellav | of course :) | 21:41 |
lkraider | is the only solution to destroy the environment? | 21:51 |
lkraider | what happens if I restart the host without destroying the environment? | 21:52 |
lkraider | what happens if I restart the local provider host without destroying the environment? | 21:54 |
lkraider | (reboot) | 21:55 |
lkraider | Ok, seems restarting the local provisioner host breaks the whole environment | 23:16 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!