[12:58] Hey folks. I'm having trouble with juju destroy-environment timing out and being unable to contact the juju server. Anyone got suggestions?
[13:02] lukasa_work: stale environment you just want to get rid of?
[13:03] lukasa_work: or do you still have machines active in the env that you need to have destroyed?
[13:03] Still got active machines
[13:04] hmm.. if it's not able to reach the state machine, this gets a bit trickier. Is this an openstack env?
[13:07] lukasa_work: to take a sledgehammer to the situation, juju destroy-environment --force + manual deletion of any leftover machines is an option. However, if it cannot reach the state server, that suggests either the API tanked on the state server or something has changed in networking.
[13:08] Yeah, I think the first
[13:08] But I don't really know how to debug it. ;)
[13:08] juju ssh ubuntu@ -i ~/.juju/ssh/juju_id_rsa
[13:08] should be able to sudo from there and go inspect logs in /var/log/juju
[13:09] sorry, no juju ssh, just ssh
[13:09] and that's my cue to go get my morning coffee
[13:09] =P yeah, I checked that log, nothing exciting in there. =P
[13:13] hmm
[13:13] juju-db isn't running, but it doesn't appear to be logging
[13:14] service jujud-machine-0 status is stopped as well?
[13:14] Nope
[13:14] can you restart those services?
[13:15] Did so, no change. =)
[13:15] juju-db fails to run
[13:15] you should see output in syslog from trying to start that, as to why it's failing
[13:16] it's typically due to a db lock: mongo tanked and didn't clean up properly, so the service refuses to restart until it's manually massaged
[13:19] Hmmmmm
[13:19] Nothing in syslog from juju-db
[13:22] natefinch: do you have a moment to help debug juju-db not running?
[13:24] lukasa_work: pinged the big guns in #dev. Sorry i'm not much help :|
[13:30] lazyPower: sure
[13:30] That's fine lazyPower, I'll clean it up. =)
[13:31] lukasa_work: if it's a matter of destroy environment not working... probably the easiest thing is just to manually delete machines from the provider yourself.
[13:33] natefinch: i'm more concerned why the juju-db failed ;|
[13:33] i haven't seen that failure in a loooooong time.
[13:33] lazyPower: well, yes.
[13:33] like, 1.18 days
[13:33] lukasa_work: what version of juju?
[13:33] 1.23.3-trusty-i386
[13:38] Hello all, I have a partner who is seeing "2015-07-20 11:48:42 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)" when trying to bootstrap a manual environment.
[13:39] Can anyone give me pointers as to how to start debugging this?
[13:39] Odd_Bloke: that's almost always a networking issue.
[13:40] natefinch: Is that local trying to connect to the remote node?
[13:40] Odd_Bloke: check firewall settings on the provider.
[13:40] Odd_Bloke: i ran into that when bootstrapping in a VPC and i didn't have the 27017 port opened in the firewall config
[13:41] Odd_Bloke: that's juju trying to connect to mongo... ironically, it usually is getting the info from mongo itself and then trying to connect to the master that is indicated in master. Usually it's a matter of the computer name not being resolvable to its IP address
[13:42] er, indicated in its database
[13:42] yeah, that or it's configured to use the public ip, which i've run into as well
[13:42] however i'm not entirely certain that wasn't *my* fault
[13:43] Ah, OK, non-resolvable local name is also a decent bet.
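For reference, a rough shell-level sketch of the recovery path discussed above, assuming a Juju 1.x environment on trusty. The environment name, state-server address, and the mongod lock-file location are placeholders/assumptions and will differ per deployment:

    # Sledgehammer option: force-destroy the environment, then delete any
    # leftover machines by hand in the provider's own console or CLI.
    juju destroy-environment my-env --force        # "my-env" is a placeholder

    # Inspect the state server directly with plain ssh (not "juju ssh",
    # since the API server is down). Fill in the address yourself.
    ssh -i ~/.juju/ssh/juju_id_rsa ubuntu@<state-server-ip>
    sudo tail -n 100 /var/log/juju/machine-0.log

    # Check and bounce the Juju services (upstart names on trusty, as in
    # the log above), then see why mongod refused to start.
    sudo service jujud-machine-0 status
    sudo service juju-db restart
    grep juju-db /var/log/syslog | tail

    # If mongo died uncleanly it can leave a stale lock file behind;
    # /var/lib/juju/db is the usual dbpath, but verify before touching it.
    sudo ls -l /var/lib/juju/db/mongod.lock

    # For the EMPTYCONFIG replica-set warning on a manual bootstrap: check
    # that mongo's port is reachable and that the machine's hostname
    # resolves to a usable (private) address.
    nc -zv <state-server-ip> 27017
    hostname -f && getent hosts "$(hostname -f)"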
[13:47] jamespage, gnuoy: I could use a review of this quantum-gateway EOL when one of you has a moment: https://code.launchpad.net/~corey.bryant/charms/trusty/quantum-gateway/end-of-life/+merge/265035
[13:51] lazyPower, lukasa_work: sorry, I have a meeting now and need to run some errands after. Can't really do much with investigating the DB problem now.
[13:57] thanks for helping natefinch, cheers
[14:14] coreycb, I have one request for the README - other than that looks OK
=== JoseeAntonioR is now known as jose
=== kadams54_ is now known as kadams54
=== hazmat_ is now known as hazmat
=== perrito667 is now known as perrito666
[14:50] fyi jamespage, looks like an undercloud hiccup false-failed a couple of tests on your neutron-gateway merge prop ("message": "Connection to neutron failed: Maximum attempts reached"). will re-kick those as soon as i confirm that the enviro is ok.
[14:51] beisner, ah ok
[14:51] nice
[15:02] fyi coreycb, since re-enabling the 052-basic-trusty-kilo-git amulet test on nova-compute next, it is consistently failing w/ "create_instance ERROR: instance creation timed out"
[15:05] beisner, ok I'll take a look
[15:07] coreycb, ack thanks!
=== arosales__ is now known as arosales
=== scuttle|afk is now known as scuttlemonkey
[16:03] beisner, coreycb: hey - does a charm have to be a bzr branch to run its amulet tests? I'm getting oddness with a git-based one I'm working on
[16:04] jamespage, does deployer support git branches? iiuc, amulet uses deployer under the hood.
[16:08] jamespage, ah there are indeed some lp:-specific amulet helpers too.
[16:08] hmm
[16:11] jamespage: i use git based charms all the time. they work
[16:11] as anecdotal evidence, I can point you to etcd/docker/flannel-docker/kubernetes and company
[16:11] jamespage: we had a git based issue as well I think. I'm trying to ask bac what was up if he recalls. We did a PR against the deployer for something
[16:11] lazyPower, think it might be something to do with our helpers for openstack
[16:12] lazyPower, I already have the charm locally - everything else deploys, and then deployer borks on the charm I'm testing
[16:12] jamespage, lazyPower - i'm fairly certain the openstack/amulet/deployment.py helpers are going to need more smarts
=== lukasa is now known as lukasa_away
[16:12] jamespage: define borked
[16:12] just getting the error message now
=== lukasa_away is now known as lukasa
[16:16] lazyPower, beisner:
[16:16] RuntimeError: ('No charm metadata @ %s', 'trusty/openvswitch-odl/metadata.yaml')
[16:16] openvswitch-odl being my charm, in which i ran juju test
[16:17] jamespage: are you using deployer from repo or pip?
[16:18] repo I think
[16:18] I can do a venv - lemme try that
[16:20] ah-ha!
[16:20] 2015-07-20 16:20:01 Could not branch /home/ubuntu/tools/ovs-odl/trusty/openvswitch-odl to trusty/openvswitch-odl
[16:39] jamespage, got it figured out?
[18:45] interesting that it's trying to branch a git repository a-la bzr
[20:05] Odd_Bloke: ping
=== natefinch is now known as natefinch-afk
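A minimal sketch of the workaround that surfaced above: run the amulet tests against a pip-installed juju-deployer in a virtualenv rather than a repo checkout, with the charm sitting in the local-repository layout deployer expects. The paths mirror the openvswitch-odl example from the log; the venv name and test invocation details are assumptions:

    # Layout deployer/amulet expect: $JUJU_REPOSITORY/<series>/<charm>/
    export JUJU_REPOSITORY=~/tools/ovs-odl
    ls "$JUJU_REPOSITORY"/trusty/openvswitch-odl/metadata.yaml

    # Use pip-installed deployer/amulet/charm-tools in a venv rather than a
    # repo checkout of deployer (the "venv" suggestion above).
    virtualenv venv && . venv/bin/activate
    pip install juju-deployer amulet charm-tools

    # Re-run the charm's tests from inside the charm directory.
    cd "$JUJU_REPOSITORY"/trusty/openvswitch-odl
    juju test -v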
[22:54] marcoceppi: maybe we should spit out the hardware info, like, tacked onto a result
[22:54] with like, CPUs, drive info, etc.
[22:54] sort of how landscape does
[22:54] jcastro: no, there's no way to gather that data
[22:54] jcastro: but the bundle will have constraints
[22:55] which will be either the instance-type submitted, or the hardware output (cpu-cores, cpu-power, mem, etc)
[22:55] oh ok
[22:55] so in other words, just make the constraints very specific
[22:56] jcastro: well, instance type will be enough to determine exactly what the profile of the machine is
[22:56] and if no constraints are given, i.e. the default instance size in Juju, you'll get the hardware data juju provides
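To make that concrete, a brief sketch of pinning constraints at deploy time; the service name and instance type are only examples, and the status grep is an assumption about where the provisioned hardware is reported:

    # Pin an exact instance type so the machine profile is unambiguous...
    juju deploy mysql --constraints "instance-type=m3.medium"

    # ...or spell the hardware out directly.
    juju deploy mysql --constraints "cpu-cores=2 mem=4G root-disk=16G"

    # With no constraints, Juju uses its default instance size; the
    # hardware it actually provisioned is reported per machine in status.
    juju status --format yaml | grep hardware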