[12:58] <lukasa_work> Hey folks. I'm having trouble with juju destroy-environment timing out and being unable to contact the juju server. Anyone got suggestions?
[13:02] <lazyPower> lukasa_work: stale environment you just want to get rid of?
[13:03] <lazyPower> lukasa_work: or do you still have machines active in the env that you need to have destroyed
[13:03] <lukasa_work> Still got active machines
[13:04] <lazyPower> hmm.. if it's not able to reach the state server, this gets a bit trickier. Is this an openstack env?
[13:07] <lazyPower> lukasa_work: to take a sledgehammer to the situation, juju destroy-environment --force + manual deletion of any leftover machines is an option. However, if it cannot reach the state server, that suggests either the api tanked on the state server, or something has changed in networking
[13:08] <lukasa_work> Yeah, I think the first
[13:08] <lukasa_work> But I don't really know how to debug it. ;)
[13:08] <lazyPower> juju ssh ubuntu@<ip> -i ~/.juju/ssh/juju_id_rsa
[13:08] <lazyPower> should be able to sudo from there and go inspect logs in /var/log/juju
[13:09] <lazyPower> sorry, no juju ssh, just ssh
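What lazyPower is suggesting, as a minimal sketch (the state server IP is a placeholder, and the key path is juju 1.x's default client key; the `show_logs` helper is purely illustrative, not a juju tool):

```shell
# plain ssh onto machine 0, not `juju ssh`:
#   ssh -i ~/.juju/ssh/juju_id_rsa ubuntu@<state-server-ip>
# once on the machine, the agent logs live under /var/log/juju:
show_logs() {
    ls "${1:-/var/log/juju}" 2>/dev/null || echo "no logs at ${1:-/var/log/juju}"
}
show_logs
```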
[13:09] <lazyPower> and that's my cue to go get my morning coffee
[13:09] <lukasa_work> =P yeah, I checked that log, nothing exciting in there. =P
[13:13] <lazyPower> hmm
[13:13] <lukasa_work> juju-db isn't running, but it doesn't appear to be logging
[13:14] <lazyPower> service jujud-machine-0 status is stopped as well?
[13:14] <lukasa_work> Nope
[13:14] <lazyPower> can you restart those services?
[13:15] <lukasa_work> Did so, no change. =)
[13:15] <lukasa_work> juju-db fails to run
[13:15] <lazyPower> you should see output in syslog from trying to start that, as to why it's failing
[13:16] <lazyPower> it's typically due to a db lock, mongo tanked and didn't clean up properly, so the service refuses to restart until it's manually massaged
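The "manual massage" for a stale mongo lock looks roughly like the sketch below. The db path is an assumption (juju's state server typically keeps mongo data under /var/lib/juju/db, but verify on your machine), and the service commands are left commented so nothing destructive runs by accident:

```shell
# check for a leftover mongod.lock and report the pid it holds;
# only then stop juju-db, remove the lock, and restart
check_lock() {
    dbdir="${1:-/var/lib/juju/db}"
    if [ -s "$dbdir/mongod.lock" ]; then
        echo "stale lock found (pid $(cat "$dbdir/mongod.lock"))"
        # sudo service juju-db stop
        # sudo rm "$dbdir/mongod.lock"
        # sudo service juju-db start
    else
        echo "no stale lock in $dbdir"
    fi
}
check_lock    # defaults to /var/lib/juju/db on the state server
```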
[13:19] <lukasa_work> Hmmmmm
[13:19] <lukasa_work> Nothing in syslog from juju-db
[13:22] <lazyPower> natefinch: do you have a moment to help debug juju-db not running?
[13:24] <lazyPower> lukasa_work: pinged the big guns in #dev. Sorry i'm not much help :|
[13:30] <natefinch> lazyPower: sure
[13:30] <lukasa_work> That's fine lazyPower, I'll clean it up. =)
[13:31] <natefinch> lukasa_work: if it's a matter of destroy environment not working... probably easiest thing is just to manually delete machines from the provider yourself.
[13:33] <lazyPower> natefinch: i'm more concerned about why the juju-db failed ;|
[13:33] <lazyPower> i haven't seen that failure in a loooooong time.
[13:33] <natefinch> lazyPower: well, yes.
[13:33] <lazyPower> like, 1.18 days
[13:33] <natefinch> lukasa_work: what version of juju?
[13:33] <lukasa_work> 1.23.3-trusty-i386
[13:38] <Odd_Bloke> Hello all, I have a partner who is seeing "2015-07-20 11:48:42 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)" when trying to bootstrap a manual environment.
[13:39] <Odd_Bloke> Can anyone give me pointers as to how to start debugging this?
[13:39] <natefinch> Odd_Bloke: that's almost always a networking issue.
[13:40] <Odd_Bloke> natefinch: Is that local trying to connect to the remote node?
[13:40] <lazyPower> Odd_Bloke: check firewall settings on the provider.
[13:40] <lazyPower> Odd_Bloke: i ran into that when bootstrapping in a VPC and i didn't have the 27017 port opened in the firewall config
[13:41] <natefinch> Odd_Bloke: that's juju trying to connect to mongo... ironically, it usually is getting the info from mongo itself and then trying to connect to the master that is indicated in master. Usually it's a matter of the computer name not being resolvable to its IP address
[13:42] <natefinch> er, indicated in its database
[13:42] <lazyPower> yeah, that or it's configured to use the public ip, which i've run into as well
[13:42] <lazyPower> however i'm not entirely certain that wasn't *my* fault
[13:43] <Odd_Bloke> Ah, OK, non-resolvable local name is also a decent bet.
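Both suspects above (name resolution and firewalling) can be triaged quickly; a hedged sketch, where the `resolves` helper is illustrative and the host/port in the commented check are placeholders for your own environment:

```shell
# does a given name resolve on this box? (the common EMPTYCONFIG cause)
resolves() {
    if getent hosts "$1" > /dev/null; then
        echo "$1 resolves"
    else
        echo "$1 does NOT resolve"
    fi
}
resolves "$(hostname)"
# and from the client, verify the mongo port is reachable, e.g.:
#   nc -zv <state-server-ip> <mongo-port>
```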
[13:47] <coreycb> jamespage, gnuoy: I could use a review of this quantum-gateway EOL when one of you have a moment:  https://code.launchpad.net/~corey.bryant/charms/trusty/quantum-gateway/end-of-life/+merge/265035
[13:51] <natefinch> lazyPower, lukasa_work: sorry, I have a meeting now and need to run some errands after.  Can't really do much with investigating the DB problem now.
[13:57] <lazyPower> thanks for helping natefinch, cheers
[14:14] <jamespage> coreycb, I have one request for the README - other than that looks OK
[14:50] <beisner> fyi jamespage, looks like an undercloud hiccup false-failed a couple of tests on your neutron-gateway merge prop ("message": "Connection to neutron failed: Maximum attempts reached").   will re-kick those as soon as i confirm that the enviro is ok.
[14:51] <jamespage> beisner, ah ok
[14:51] <jamespage> nice
[15:02] <beisner> fyi coreycb, since re-enabling the 052-basic-trusty-kilo-git amulet test on nova-compute next, it is consistently failing w/ "create_instance ERROR: instance creation timed out"
[15:05] <coreycb> beisner, ok I'll take a look
[15:07] <beisner> coreycb, ack thanks!
[16:03] <jamespage> beisner, coreycb: hey - does a charm have to be a bzr branch to run its amulet tests? I'm getting oddness with a git based one I'm working on
[16:04] <beisner> jamespage, does deployer support git branches?  iiuc, amulet uses deployer under the hood.
[16:08] <beisner> jamespage, ah there are indeed some lp:specific amulet helpers too.
[16:08] <jamespage> hmm
[16:11] <lazyPower> jamespage: i use git based charms all the time. they work
[16:11] <lazyPower> as anecdotal evidence. I can point you to etcd/docker/flannel-docker/kubernetes and company
[16:11] <rick_h_> jamespage: we had a git based issue as well I think. I'm trying to ask bac what was up if he recalls. We did a PR against the deployer for something
[16:11] <jamespage> lazyPower, think it might be something to do with our helpers for openstack
[16:12] <jamespage> lazyPower, I already have the charm locally - everything else deploys, and then deployer borks on the charm I'm testing
[16:12] <beisner> jamespage, lazyPower - i'm fairly certain the  openstack/amulet/deployment.py helpers are going to need more smarts
[16:12] <lazyPower> jamespage: define borked
[16:12] <jamespage> just getting the error message now
[16:16] <jamespage> lazyPower, beisner:
[16:16] <jamespage> RuntimeError: ('No charm metadata @ %s', 'trusty/openvswitch-odl/metadata.yaml')
[16:16] <jamespage> openvswitch-odl being my charm in which i ran juju test
[16:17] <lazyPower> jamespage: are you using deployer from repo or pip?
[16:18] <jamespage> repo I think
[16:18] <jamespage> I can do a venv - lemme try that
[16:20] <jamespage> ah-ha!
[16:20] <jamespage> 2015-07-20 16:20:01 Could not branch /home/ubuntu/tools/ovs-odl/trusty/openvswitch-odl to trusty/openvswitch-odl
[16:39] <coreycb> jamespage, got it figured out?
[18:45] <lazyPower> interesting that it's trying to branch a git repository a-la bzr
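The "No charm metadata" error above comes down to the deployer expecting a `<series>/<charm>/metadata.yaml` layout in the repo it branches into. A quick sanity check before running `juju test`, as a sketch (the `check_charm` helper is illustrative; the path is the one from jamespage's error, reused as an example):

```shell
# verify the charm directory the deployer will look at actually
# contains a metadata.yaml
check_charm() {
    if [ -f "$1/metadata.yaml" ]; then
        echo "charm metadata found in $1"
    else
        echo "No charm metadata @ $1/metadata.yaml"
    fi
}
check_charm "trusty/openvswitch-odl"
```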
[20:05] <jose> Odd_Bloke: ping
[22:54] <jcastro> marcoceppi: maybe we should spit out the hardware info, like, tacked onto a result
[22:54] <jcastro> with like, CPUs, drive info, etc.
[22:54] <jcastro> sort of how landscape does
[22:54] <marcoceppi> jcastro: no, there's no way to gather that data
[22:54] <marcoceppi> jcastro: but the bundle will have constraints
[22:55] <marcoceppi> which will be either the instance-type submitted, or the hardware output (cpu-cores, cpu-power, mem, etc)
[22:55] <jcastro> oh ok
[22:55] <jcastro> so in other words, just make the constraints very specific
[22:56] <marcoceppi> jcastro: well, instance type will be enough to determine exactly what the profile of the machine is
[22:56] <marcoceppi> and if no constraints are given, IE default instance size in Juju, you'll get the hardware data juju provides