/srv/irclogs.ubuntu.com/2015/07/20/#juju.txt

12:58 <lukasa_work> Hey folks. I'm having trouble with juju destroy-environment timing out and being unable to contact the juju server. Anyone got suggestions?
13:02 <lazyPower> lukasa_work: stale environment you just want to get rid of?
13:03 <lazyPower> lukasa_work: or do you still have machines active in the env that you need to have destroyed?
13:03 <lukasa_work> Still got active machines
13:04 <lazyPower> hmm.. if it's not able to reach the state machine, this gets a bit trickier. Is this an openstack env?
13:07 <lazyPower> lukasa_work: to take a sledgehammer to the situation, juju destroy-environment --force + manual deletion of any leftover machines is an option. However, if it cannot reach the state server, that suggests either the API tanked on the state server or something has changed in networking
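A sketch of that sledgehammer (the environment name is a placeholder, and the provider-side cleanup commands assume the openstack env mentioned above; neither is from this log):

```shell
# Force-destroy ignores a dead state server and tears down what it can
# (juju 1.x syntax; "my-env" is a placeholder environment name).
juju destroy-environment my-env --force

# Anything juju could not reach must then be deleted at the provider,
# e.g. with the OpenStack client:
nova list            # find orphaned instances
nova delete <id>     # remove each one by hand
```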
13:08 <lukasa_work> Yeah, I think the first
13:08 <lukasa_work> But I don't really know how to debug it. ;)
13:08 <lazyPower> juju ssh ubuntu@<ip> -i ~/.juju/ssh/juju_id_rsa
13:08 <lazyPower> should be able to sudo from there and go inspect logs in /var/log/juju
13:09 <lazyPower> sorry, no juju ssh, just ssh
13:09 <lazyPower> and that's my cue to go get my morning coffee
13:09 <lukasa_work> =P yeah, I checked that log, nothing exciting in there. =P
13:13 <lazyPower> hmm
13:13 <lukasa_work> juju-db isn't running, but it doesn't appear to be logging
13:14 <lazyPower> service jujud-machine-0 status is stopped as well?
13:14 <lukasa_work> Nope
13:14 <lazyPower> can you restart those services?
13:15 <lukasa_work> Did so, no change. =)
13:15 <lukasa_work> juju-db fails to run
13:15 <lazyPower> you should see output in syslog from trying to start it, saying why it's failing
13:16 <lazyPower> it's typically due to a db lock: mongo tanked and didn't clean up properly, so the service refuses to restart until it's manually massaged
13:19 <lukasa_work> Hmmmmm
13:19 <lukasa_work> Nothing in syslog from juju-db
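For reference, the "manual massage" lazyPower alludes to usually means clearing a stale mongod lock file; a cautious sketch (the db path is the conventional juju 1.x location — verify it locally before deleting anything):

```shell
sudo service juju-db stop

# An unclean mongod shutdown leaves the lock file behind with a stale PID;
# that is the classic reason the service refuses to restart.
sudo ls -l /var/lib/juju/db/mongod.lock

# Only remove the lock once no mongod process is actually running.
pgrep mongod || sudo rm /var/lib/juju/db/mongod.lock
sudo service juju-db start
```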
13:22 <lazyPower> natefinch: do you have a moment to help debug juju-db not running?
13:24 <lazyPower> lukasa_work: pinged the big guns in #dev. Sorry i'm not much help :|
13:30 <natefinch> lazyPower: sure
13:30 <lukasa_work> That's fine lazyPower, I'll clean it up. =)
13:31 <natefinch> lukasa_work: if it's a matter of destroy-environment not working... probably the easiest thing is just to manually delete machines from the provider yourself.
13:33 <lazyPower> natefinch: i'm more concerned about why juju-db failed ;|
13:33 <lazyPower> i haven't seen that failure in a loooooong time.
13:33 <natefinch> lazyPower: well, yes.
13:33 <lazyPower> like, 1.18 days
13:33 <natefinch> lukasa_work: what version of juju?
13:33 <lukasa_work> 1.23.3-trusty-i386
13:38 <Odd_Bloke> Hello all, I have a partner who is seeing "2015-07-20 11:48:42 WARNING juju.replicaset replicaset.go:98 Initiate: fetching replication status failed: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)" when trying to bootstrap a manual environment.
13:39 <Odd_Bloke> Can anyone give me pointers as to how to start debugging this?
13:39 <natefinch> Odd_Bloke: that's almost always a networking issue.
13:40 <Odd_Bloke> natefinch: Is that local trying to connect to the remote node?
13:40 <lazyPower> Odd_Bloke: check firewall settings on the provider.
13:40 <lazyPower> Odd_Bloke: i ran into that when bootstrapping in a VPC and i didn't have the 27017 port opened in the firewall config
13:41 <natefinch> Odd_Bloke: that's juju trying to connect to mongo... ironically, it usually gets the info from mongo itself and then tries to connect to the master that is indicated in master. Usually it's a matter of the computer name not being resolvable to its IP address
13:42 <natefinch> er, indicated in its database
13:42 <lazyPower> yeah, that or it's configured to use the public ip, which i've run into as well
13:42 <lazyPower> however i'm not entirely certain that wasn't *my* fault
13:43 <Odd_Bloke> Ah, OK, a non-resolvable local name is also a decent bet.
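Both theories are quick to test from the affected machine. A sketch (the port number is just the one mentioned in the discussion; juju's own mongo may listen on a different port, so confirm with `ss -tlnp` on the state server):

```shell
# Theory 1: the machine's own hostname does not resolve to an address.
getent hosts "$(hostname)" || echo "local hostname does not resolve"

# Theory 2: the mongo port is firewalled. 27017 is the port mentioned
# in the discussion; substitute whatever your state server actually uses.
nc -zv localhost 27017 || echo "port closed, filtered, or nc missing"
```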
13:47 <coreycb> jamespage, gnuoy: I could use a review of this quantum-gateway EOL when one of you has a moment: https://code.launchpad.net/~corey.bryant/charms/trusty/quantum-gateway/end-of-life/+merge/265035
13:51 <natefinch> lazyPower, lukasa_work: sorry, I have a meeting now and need to run some errands after. Can't really do much with investigating the DB problem now.
13:57 <lazyPower> thanks for helping natefinch, cheers
14:14 <jamespage> coreycb, I have one request for the README - other than that it looks OK
=== JoseeAntonioR is now known as jose
=== kadams54_ is now known as kadams54
=== hazmat_ is now known as hazmat
=== perrito667 is now known as perrito666
14:50 <beisner> fyi jamespage, looks like an undercloud hiccup false-failed a couple of tests on your neutron-gateway merge prop ("message": "Connection to neutron failed: Maximum attempts reached"). will re-kick those as soon as i confirm that the enviro is ok.
14:51 <jamespage> beisner, ah ok
14:51 <jamespage> nice
15:02 <beisner> fyi coreycb, since re-enabling the 052-basic-trusty-kilo-git amulet test on nova-compute, it is consistently failing w/ "create_instance ERROR: instance creation timed out"
15:05 <coreycb> beisner, ok I'll take a look
15:07 <beisner> coreycb, ack thanks!
=== arosales__ is now known as arosales
=== scuttle|afk is now known as scuttlemonkey
16:03 <jamespage> beisner, coreycb: hey - does a charm have to be a bzr branch to run its amulet tests? I'm getting oddness with a git-based one I'm working on
16:04 <beisner> jamespage, does deployer support git branches? iiuc, amulet uses deployer under the hood.
16:08 <beisner> jamespage, ah there are indeed some lp:specific amulet helpers too.
16:08 <jamespage> hmm
16:11 <lazyPower> jamespage: i use git based charms all the time. they work
16:11 <lazyPower> as anecdotal evidence, I can point you to etcd/docker/flannel-docker/kubernetes and company
16:11 <rick_h_> jamespage: we had a git-based issue as well I think. I'm trying to ask bac what was up, if he recalls. We did a PR against the deployer for something
16:11 <jamespage> lazyPower, think it might be something to do with our helpers for openstack
16:12 <jamespage> lazyPower, I already have the charm locally - everything else deploys, and then deployer borks on the charm I'm testing
16:12 <beisner> jamespage, lazyPower - i'm fairly certain the openstack/amulet/deployment.py helpers are going to need more smarts
=== lukasa is now known as lukasa_away
16:12 <lazyPower> jamespage: define borked
16:12 <jamespage> just getting the error message now
=== lukasa_away is now known as lukasa
16:16 <jamespage> lazyPower, beisner:
16:16 <jamespage> RuntimeError: ('No charm metadata @ %s', 'trusty/openvswitch-odl/metadata.yaml')
16:16 <jamespage> openvswitch-odl being my charm in which i ran juju test
16:17 <lazyPower> jamespage: are you using deployer from repo or pip?
16:18 <jamespage> repo I think
16:18 <jamespage> I can do a venv - lemme try that
16:20 <jamespage> ah-ha!
16:20 <jamespage> 2015-07-20 16:20:01 Could not branch /home/ubuntu/tools/ovs-odl/trusty/openvswitch-odl to trusty/openvswitch-odl
16:39 <coreycb> jamespage, got it figured out?
18:45 <lazyPower> interesting that it's trying to branch a git repository a-la bzr
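For anyone hitting the same thing, the pip/venv experiment jamespage mentions is roughly the following (package names as published on PyPI at the time; `juju test` is the charm-tools test runner he refers to above):

```shell
# Build an isolated environment with the released deployer instead of a
# checkout from the repo.
virtualenv deployer-venv
. deployer-venv/bin/activate
pip install juju-deployer amulet

# Re-run the charm's tests from the charm directory.
juju test
```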
20:05 <jose> Odd_Bloke: ping
=== natefinch is now known as natefinch-afk
22:54 <jcastro> marcoceppi: maybe we should spit out the hardware info, like tack it onto a result
22:54 <jcastro> with like, CPUs, drive info, etc.
22:54 <jcastro> sort of how landscape does
22:54 <marcoceppi> jcastro: no, there's no way to gather that data
22:54 <marcoceppi> jcastro: but the bundle will have constraints
22:55 <marcoceppi> which will be either the instance-type submitted, or the hardware output (cpu-cores, cpu-power, mem, etc)
22:55 <jcastro> oh ok
22:55 <jcastro> so in other words, just make the constraints very specific
22:56 <marcoceppi> jcastro: well, instance type will be enough to determine exactly what the profile of the machine is
22:56 <marcoceppi> and if no constraints are given, IE the default instance size in Juju, you'll get the hardware data juju provides
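In other words, the recorded profile comes either from an explicit instance type or from explicit hardware constraints. An illustrative sketch only (the service name and sizes are made up, not from this log):

```shell
# Pin an exact provider flavour; the bundle then records instance-type.
juju deploy mysql --constraints "instance-type=m3.medium"

# Or spell the hardware out; the bundle records cpu-cores/mem instead.
juju deploy mysql --constraints "cpu-cores=2 mem=4G"
```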

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!