[00:01] Well now I think I'm being bitten by this https://bugs.launchpad.net/juju-core/+bug/1452422 but at least I'm further along ;)
[00:01] Bug #1452422: Cannot bootstrap from custom image-metadata-url or by specifying metadata-source
=== scuttle`` is now known as scuttle|afk
[03:28] wallyworld: sigh, this is still busted.
[03:29] wallyworld: ah, wrong channel :)
=== rogpeppe1 is now known as rogpeppe
[12:22] marcoceppi: https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/262608 isn't showing up in the review queue and I don't know why
[12:23] stub: check back in 30 mins
[12:23] it should show up then
[12:24] marcoceppi: The MP was created yesterday, so unless you have just poked it...
[12:24] stub: I just poked the review queue
[12:24] ta :)
[12:52] stub: it seems to have popped in, we'll have a look in due time!
=== brandon is now known as web
=== rogpeppe1 is now known as rogpeppe
=== scuttle|afk is now known as scuttlemonkey
[15:49] there was a talk at UDS about developers getting credits for a cloud for testing juju charms. Is there any news concerning this?
=== caribou_ is now known as caribou
[16:07] i think the answer to this is no, but is it possible to control the kernel version of my instances in juju? I know the containers can't, but maybe with the VMs?
[16:08] danielbe: not yet, it's in progress
[16:08] cholcombe: not really, outside of MAAS
[16:08] ok i figured
[16:12] ok, thanks marcoceppi
=== kadams54_ is now known as kadams54-away
=== kadams54-away is now known as kadams54_
=== kadams54_ is now known as kadams54-away
[19:41] I have machines stuck in a dying state
[19:41] How do I force-clear them?
[19:42] running under vagrant (virtualbox) lxc
[19:52] there is no corresponding machine in /var/lib/juju/containers/
[20:22] what is the command to clear a machine state?
[20:27] lkraider: is it an error on a unit or a machine provisioning?
[20:27] I believe in the machine
[20:27] the network was down at the moment I tried to create a service
[20:27] well, you can `juju retry-provisioning #`
[20:28] "11":
[20:28]   instance-id: pending
[20:28]   life: dying
[20:28]   series: trusty
=== kadams54 is now known as kadams54-away
[20:28] "machine 11 is not in an error state"
[20:29] the machine basically does not exist, for all I know
[20:30] then, that's that. No, there's no way to really "clear" it out afaik
[20:30] but juju still has some reference to it
[20:30] but now I cannot provision new machines
[20:30] juju is stuck
[20:32] can I force-add a new machine?
[20:32] otherwise it also stays stuck on pending
[20:34] can't I edit the juju database or something?
[20:38] lkraider: sorry, late to the story, but which provider?
[20:38] @thumper lxc local
[20:39] lkraider: and host?
[20:39] also, juju version
[20:40] mac -> vagrant trusty64 -> juju 1.23.3-trusty-amd64
[20:41] lkraider: I think you can do 'juju remove-machine --force'
[20:41] lkraider: although you might first have to remove the service or unit that is deployed on that machine, I don't recall
[20:42] I destroyed the service already
=== scuttlemonkey is now known as scuttle|afk
[20:43] @thumper I tried remove-machine --force --verbose --show-log
[20:43] but it still shows in juju status
[20:44] hmm...
[20:44] was there an error from remove-machine?
[20:45] https://gist.github.com/anonymous/28409911b3b95bf8fdad
[20:45] not in the output
[21:24] can I connect to the juju mongodb?
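(For context on that last question: a Juju 1.x state server runs its own mongod on port 37017 with SSL enabled, so a plain connection to the default mongo port fails. Below is a minimal pymongo sketch of such a connection; the agent.conf path, the password keys, the 'admin' user, and the 'juju' database name are all assumptions to verify against your own deployment, not confirmed Juju internals.)

    import ssl
    import yaml
    from pymongo import MongoClient

    # Sketch for a Juju 1.x local provider. The agent.conf path and the
    # password keys below are assumptions; check your own agent.conf.
    AGENT_CONF = '/var/lib/juju/agents/machine-0/agent.conf'
    with open(AGENT_CONF) as f:
        conf = yaml.safe_load(f)
    password = conf.get('statepassword') or conf.get('oldpassword')

    # Juju's embedded mongod listens on 37017 and presents a self-signed
    # certificate, so certificate verification is disabled here.
    client = MongoClient('localhost', 37017,
                         ssl=True, ssl_cert_reqs=ssl.CERT_NONE)
    client.admin.authenticate('admin', password)  # pymongo 2.x/3.x auth

    # 'juju' as the state database name is also an assumption; list the
    # collections first to confirm before touching anything.
    print(client.juju.collection_names())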
[21:28] @thumper I think my bug is this one: https://bugs.launchpad.net/juju-core/+bug/1233457
[21:28] Bug #1233457: service with no units stuck in lifecycle dying
[21:29] @thumper sorry, that is for a service, not a machine
[21:30] this bug has some scripts that seem useful: https://bugs.launchpad.net/juju-core/+bug/1089291
[21:30] Bug #1089291: destroy-machine --force
[21:31] but they all fail on the mongo auth
[21:31] pymongo.errors.OperationFailure: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'15c36a2f77cdc42d'), ('key', u'9c3a5ccb7ba361b481b5ffd2ffa86a07')]) failed: auth fails
[21:32] @hazmat it's about your mhotfix-machine.py script
[21:41] lazyPower: this is my new nick, i created a new one for this channel
[21:41] ddellav: awesome :) Glad to see you're already idling here
[21:41] unifying usernames across everything helps
[21:41] of course :)
[21:51] is the only solution to destroy the environment?
[21:52] what happens if I restart the host without destroying the environment?
[21:54] what happens if I restart the local provider host without destroying the environment?
[21:55] (reboot)
[23:16] Ok, seems restarting the local provider host breaks the whole environment
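(On the auth failure above: scripts from that era predate SSL on Juju's mongod and hardcode credentials, which would produce exactly that OperationFailure. Here is a hypothetical sketch of a fixed connection plus the kind of inspection such a cleanup script performs; the 'machines' collection, the '_id' key, and everything else schema-related are guesses inferred from the 'life: dying' status output earlier, so back up /var/lib/juju and verify before deleting anything.)

    import ssl
    import yaml
    from pymongo import MongoClient

    # Hypothetical cleanup in the spirit of the bug #1089291 scripts.
    # Collection and field names ('machines', '_id', 'life') are
    # assumptions inferred from the status output, not confirmed schema.
    with open('/var/lib/juju/agents/machine-0/agent.conf') as f:
        conf = yaml.safe_load(f)

    client = MongoClient('localhost', 37017,
                         ssl=True, ssl_cert_reqs=ssl.CERT_NONE)
    client.admin.authenticate(
        'admin', conf.get('statepassword') or conf.get('oldpassword'))

    machines = client.juju.machines
    doc = machines.find_one({'_id': '11'})  # the stuck machine from status
    print(doc)

    # Only after confirming the document really is the orphaned machine
    # would a script delete it; even then, dangling references may remain.
    # machines.remove({'_id': '11'})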