/srv/irclogs.ubuntu.com/2015/09/08/#juju.txt

=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
g3narohi, i'm having issues trying to access lxc- commands when running juju11:52
g3naroboth pointed to local environment11:52
g3narois this a known issue?11:52
=== coreycb` is now known as coreycb
beisnergnuoy, wolsen - avail to land this reviewed c-h bit, and review/land the dependent charm amulet test update MPs?  i'd like to move on to update others, but holding until these hit.  tia!  https://code.launchpad.net/~1chb1n/charm-helpers/amulet-svc-restart-race/+merge/26909813:47
beisnerthedac, gnuoy - resync'd the test branch to get fri's changes on lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes & testing.  thanks!14:02
g3naroanyone here?14:19
g3naroknow how i can deploy a centos6 box?14:19
lukasaHey, fun, I'm not successfully authing into the onboarding site in Firefox14:20
lukasaUgh, wrong channel, sorry. =D14:20
marcoceppig3naro: you'll need to follow this guide14:22
ennobleIs there a way to know that a configuration setting has reached the machine and been processed? I've been checking that agent-status is idle to know that the config has been applied, but I don't know how to differentiate between idle because the config has been applied and idle because it hasn't been applied yet.15:57
marcoceppiennoble: you can do a juju status-history for the unit in question16:22
marcoceppiit should show you when it processed the config-changed event16:22
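For reference, a sketch of the CLI form being suggested here; the exact syntax and flags vary between juju releases, so check 'juju help status-history' before relying on it:
    # Show recent status events for a unit. A config-changed entry appearing after
    # you set the option means the new configuration has been processed.
    juju status-history myunit/0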
ennoblemarcoceppi: thanks, is there a way to access that information from the python juju client library?16:28
marcoceppiennoble: that's a great question, and I'm not aware of a way at the moment.16:28
marcoceppiIt's definitely exposed in the API, as that's where the cli gets it16:28
ennoblemarcoceppi: I've been using the watcher functionality in the juju client to do that, but I can still miss it.16:28
ennoblemarcoceppi: so I may be able to do an RPC call to get back the status-history? Is it its own RPC call?16:29
marcoceppiennoble: I'm trying to find that out right now16:32
ennoblemarcoceppi: Thanks!16:33
marcoceppiennoble: I'm asking the devs in #juju-dev but I haven't gotten an answer yet16:47
ennoblemarcoceppi: jujuenv._rpc({"Type":"Client", "Request":"UnitStatusHistory", "Params": {"Name": 'myunit/0', "Size" : 20, "Kind" : "agent"}}) did the trick16:50
marcoceppiennoble: good find, if you wanted to add that as a method to jujuclient I'm sure it'd be greatly appreciated16:51
beisnerthedac, gnuoy - friday revs sync'd in, passed the first run.  i've re-queued a couple add'l iterations.  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b/+merge/27010217:48
thedacbeisner: excellent17:48
ennoblemarcoceppi: I've got to jump through quite a few hoops to submit a patch; but I do have two other bug reports on jujuclient that have been outstanding for a while with suggested solutions in them. I'm trying to work through the hoops on my end, but if someone had a couple minutes to fix the issues I've reported (the fixes are in the bug reports) that would be great.18:26
marcoceppiennoble: I'll take a look and help triage those through, thanks18:31
ennoblethanks marcoceppi: the two I'm referring to are #1455302  and #148629718:35
mupBug #1455302: enqueue_units doesn't correctly pass parameters to action <python-jujuclient:New> <https://launchpad.net/bugs/1455302>18:36
mupBug #1486297: Action doesn't correctly translate unit name into tag if hyphen present <juju-core:Won't Fix> <python-jujuclient:Confirmed> <https://launchpad.net/bugs/1486297>18:36
=== bdx_ is now known as bdx
josehey guys, manual provisioning can't use ports other than 22, right?19:32
marcoceppijose: elaborate19:41
josemarcoceppi: I have someone who has a server running ssh on port 2222, and wants to set it up with juju + the manual provider19:42
marcoceppijose: juju add-machine ssh:user@host:port19:43
joseoh, ok19:43
josethanks19:43
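A sketch of that command for the port-2222 case above; the user and host below are placeholders:
    # Manually provision a server whose sshd listens on port 2222.
    juju add-machine ssh:ubuntu@203.0.113.10:2222
    # Confirm the machine registered and its agent came up.
    juju status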
beisnerthedac, gnuoy - 2 more rmq iterations ok (18 of 18 clustered ok and happy).  that is:20:04
beisnerhttps://code.launchpad.net/~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes   +=>  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b20:04
beisnerprecise through vivid, juju w/ LE.20:05
thedacack20:05
beisnerpoor ack, the real ack.  he probably gets highlighted all day every day.20:05
thedacI'll start using /me nods :)20:06
beisnernah, as you were, as you were.  i ack as well.   ack ack20:06
ntpttrHey everyone, I'm getting an error launching the mysql charm from the juju-gui, "1 hook failed: "install"". Here's the contents of /var/log/juju/unit-mysql-0.log on the machine the service was booted on: http://pastebin.com/GSgz50Sd20:07
ntpttrIt looks like the issue is a CalledProcessError related to the keyserver20:17
thedacntpttr: yeah, the output suggests the config setting 'key' has a value of null. I just fired off a deploy of mysql (without the gui) and it works. Not sure if this is a juju gui related issue. Do you mind filing a bug? https://bugs.launchpad.net/charms20:19
ntpttrthedac: I think the issue is proxy related - when I did this at home it did work but behind a proxy now I'm having trouble. Would you still like me to file a bug?20:20
beisnerUnknown source: u'null'  seems unexpected, as it passes to the cmd as --recv null20:20
thedacright ^^20:20
thedacntpttr: yes on the bug report. With as much detail on how to recreate as possible20:22
ntpttrthedac: Okay, I'm doing this on the default bootstrapped setup provided by the Orange Box, should I mention that?20:23
beisneryep i think the network is probably restricting tcp 80 egress.  and since the charm is pushing hkp over 80, it's probably getting blocked.   i would suspect in that environment, a simple apt-get install of anything would also fail.20:23
beisner(worth checking manually)20:24
ntpttrbeisner: To check that manually should I just juju ssh into the machine and try an install of anything?20:24
beisneri'd make sure something like this succeeds:   sudo apt-get update && sudo apt-get install multitail20:25
thedacntpttr: try 'nc -vz keyserver.ubuntu.com 80'20:25
beisneror, that's even better ;-)20:25
beisnerthedac's is-foo is quite handy20:25
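Pulling those two checks together, to run on the deployed machine (multitail is just an arbitrary small test package):
    # Can the machine reach the keyserver over port 80 (the charm fetches its apt key via hkp over 80)?
    nc -vz keyserver.ubuntu.com 80
    # Does a plain archive fetch work at all?
    sudo apt-get update && sudo apt-get install multitail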
ntpttrbeisner thedac: All right, one second while the machine deploys again and I'll run that command (I cleaned up the environment after the last failure, but I've had it happen multiple times so I'm confident it'll happen again)20:27
thedacok20:28
ntpttrthedac beisner: Uh, so I have no idea what happened, but it worked that time. I was looking at 'tail -f /var/log/juju/unit-mysql-0.log' and it went through the whole apt-get update, finished, and the service started20:37
ntpttrthedac beisner: Do you want me to pastebin the log for you?20:37
thedacntpttr: that is odd. Let us know if you run into it again.20:38
ntpttrthedac: okay, will do. Thank you20:38
=== natefinch is now known as natefinch-afk
isantopjcastro: Can I ping you about some crazy juju-ness?21:02
=== zz_CyberJacob is now known as CyberJacob
marcoceppiisantop: it's getting to be EOD for most people on the east coast, anything I can help with?21:09
isantopI'm trying to manage services on a personal server using juju21:10
marcoceppiisantop: sounds reasonable21:11
isantopjoseantonior had me going through manual provisioning, but I'm hitting issues with the agent-status not progressing past allocating. I'm not sure what configuration needs to be done for the lxc side of things21:13
marcoceppiisantop: so you're manually provisioning lxc?21:15
isantopWell, I think I'm not and that's my problem ;-)21:15
* isantop is a total cloud n00b21:15
marcoceppiisantop: no worries, why don't you recount how you got to where you are right now?21:17
isantopdid "sudo apt-get install lxc" on the remote server, and "sudo apt-get install juju-core" on my local machine (after adding the PPA)21:18
isantopthen did "juju generate-config", "switch manual", and "bootstrap"21:18
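Spelled out, that sequence is roughly the following (assuming the juju stable PPA was already added and the environments.yaml entry is named 'manual'):
    # On the remote server that will be managed:
    sudo apt-get install lxc
    # On the local workstation:
    sudo apt-get install juju-core
    juju generate-config       # writes ~/.juju/environments.yaml with sample environments
    # edit environments.yaml so the manual environment's bootstrap-host points at the remote server
    juju switch manual
    juju bootstrap             # turns the remote server into machine 0, the state server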
marcoceppiisantop: so you're trying to bootstrap an LXC machine on the remote server? or trying to bootstrap the remote server?21:19
isantopthe environments.yaml file is currently pointing to the remote server itself. I haven't done any lxc-related things on the remote server apart from installing it.21:20
marcoceppiisantop: right, okay, and so is the bootstrap node stuck at allocating?21:21
isantopExactly21:21
marcoceppiisantop: or did it bootstrap properly?21:21
isantopEr21:21
isantopno, It did bootstrap properly21:21
marcoceppiisantop: can you put the output of `juju status` into paste.ubuntu.com and send the link over?21:22
isantopI removed the services I deployed and destroyed the environment already. I'll re-bootstrap21:23
isantophttp://paste.ubuntu.com/12316663/21:26
marcoceppiisantop: I thought there was a failure to allocate?21:28
isantopIt bootstraps fine, but I can't deploy any charms21:28
marcoceppiisantop: how are you trying to deploy charms?21:28
isantop'juju deploy charm --to lxc:0'21:29
marcoceppiisantop: I'd do that and wait, it can take some time to download and cache the lxc image the first time21:29
marcoceppiisantop: it's best to do that, then tail the machine-0.log on the remote host21:29
marcoceppiisantop: that will give insight into any errors21:29
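Concretely, the pattern being described (owncloud is the charm isantop deploys a little later):
    # Deploy into a container on the manually provisioned machine 0.
    # The first container takes a while because the lxc image is downloaded and cached.
    juju deploy owncloud --to lxc:0
    # Meanwhile, on the remote host, watch the provisioner log and the container list:
    sudo tail -f /var/log/juju/machine-0.log
    sudo lxc-ls --fancy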
isantophere is the machine-0.log: http://paste.ubuntu.com/12316694/21:34
isantopAnd the juju-status after attempting to deploy the owncloud charm: http://paste.ubuntu.com/12316705/21:34
marcoceppiisantop: what does `sudo lxc-ls --fancy` show on the remote server?21:35
isantopmarcoceppi: http://paste.ubuntu.com/12316722/21:38
marcoceppiisantop: that's a good sign21:39
marcoceppiisantop: status still show allocating?21:39
isantopmarcoceppi: Still allocating, yes21:48
marcoceppiisantop: I think I see the problem21:48
marcoceppithe containers are running21:48
marcoceppibut don't have networking21:48
isantopAh21:48
isantopHow do I get them networking?21:48
marcoceppiit should just happen21:49
marcoceppithis may sound bad, but try restarting the server if you can. Juju should come back online and the containers should auto-restart and hopefully the networking bridge will be active21:49
isantopI'll give that a shot in a while21:49
isantopWon't be able to do it right now21:49
lazyPowerisantop: can you run ifconfig and look for a lxcbr0 device?21:51
lazyPower^ on the state-server, thats currently trying to provision those lxc containers.21:51
isantoplazyPower: I can see lxcbr0 when I ssh into the server, but there's no entry for it in ifconfig21:52
lazyPowerhmm21:52
isantopWait21:52
isantopI just did that on my local machine. :|21:52
lazyPower:) that'll do it every time21:52
lazyPoweras a side note, i freaked myself out one day doing that, when i was ssh'd into my maas machine which should be *loaded* with virtual ethernet devices for each vm/container running on it.. and i instantly panic'd when it came back with only a wireless card and loopback interface.21:53
isantophaha21:53
isantophttp://paste.ubuntu.com/12316823/21:53
isantopAssuming those all look okay?21:53
lazyPowerhmm, ok so 10.0.3.1 - it created the device and gave it networking21:53
lazyPowerand looking at your --fancy output none of the containers are listing with an IP Address, can you stop/restart the container to see if it brings up with networking? (it should have done this already and been peachy) - sudo lxc-stop -n <name-of-lxc-container> && sudo lxc-start -n <name-of-lxc-container>21:54
lazyPowerif its a temporary thing, that should kick it into acting right21:55
isantopWould it just be the -1?21:55
lazyPowerjuju-machine-0-lxc-021:56
lazyPowerit's unfortunately going to be the full string of the container name21:56
isantopNah, that's not too bad to type21:56
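Putting that together with the names from the --fancy output:
    # On the state server: confirm the lxc bridge is up and has an address.
    ifconfig lxcbr0
    # Bounce the container that never got an IP (-d keeps lxc-start in the background).
    sudo lxc-stop -n juju-machine-0-lxc-0 && sudo lxc-start -d -n juju-machine-0-lxc-0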
isantopI did -1, but it's stuck at waiting 120 seconds for network device21:57
lazyPowerok, so somethings actively stopping the container from grabbing a virtual device21:57
lazyPowercan you poke in /var/log/syslog to see if you see anything related to CGROUPS stopping anything?21:58
isantopgrepping syslog for cgroups (with -i) doesn't give me any output21:59
lazyPowerI'm not sure what happened, but i imagine that marcoceppi's proposed fix will clear this up - just rebooting the machine will be the simplest path forward.21:59
isantopI'm currently on a ZNC on the server in question, so brb22:02
isantoplazyPower: No change, it seems22:07
isantopCurrent juju status: http://paste.ubuntu.com/12316885/22:07
isantopSame issue if I lxc-stop/lxc-start22:10
lazyPowerhmm22:44
lazyPowerisantop: sorry i stepped away, one moment22:45
isantopnp22:45
lazyPowerok, lets try something slightly different and see if we get networking... sudo lxc-create -t download -n u1 -- --dist ubuntu --release trusty --arch amd6422:47
lazyPowerrun that on the state server and see if the manually provisioned lxc container gets proper networking22:47
isantopI tried manually provisioning a container a while ago, and it got stuck at "Setting up the GPG Keyring"22:48
isantopAnd eventually fails because it can't download the keyring from the keyserver22:48
isantop(So, looks like no)22:49
lazyPowerinteresting22:49
lazyPowerAre you behind some form of firewall/proxy?22:49
isantopOh, we do have csf set up22:49
lazyPowergoogling csf returned cerebrospinal fluid... i don't think that's what you're referring to22:50
isantopgoogle "csf firewall"? :-p22:50
lazyPowerbut i'll assume its an egress firewall?22:50
isantopIt's an iptables utility22:50
lazyPowerhum.. that doesn't explain the lack of addressing on the lxc containers, but it's reasonable to assume there's a rule blocking the gpg service from contacting the keyring server @ keyserver.ubuntu.com22:51
isantopWhat port does that run over?22:51
lazyPowerport 11371 TCP22:52
isantopokay, lemme open that up22:52
lazyPowerhowever22:52
lazyPowerah wait this is seed so you cant edit the config, disregard22:52
isantopAh, yeah, now I can manually provision a container22:54
isantopAre there any other ports that will need to be opened up for it to work correctly?22:55
lazyPower80 and 11371 should be it22:55
lazyPower11371 should also have fallen back over http tbh22:55
lazyPowerthat fix was put in place circa natty narwhal....22:56
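For reference, the egress involved; the csf detail below is an assumption, so check your csf.conf:
    # Outbound ports the key fetch needs: 11371 (native hkp) and 80 (the http fallback, plus the archives).
    # With csf that generally means listing both in TCP_OUT in /etc/csf/csf.conf and reloading the rules (csf -r).
    nc -vz keyserver.ubuntu.com 11371
    nc -vz keyserver.ubuntu.com 80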
=== CyberJacob is now known as zz_CyberJacob
isantopOkay, so that appears to be working. If I restart the two juju containers, will that get them back up and running?22:56
lazyPowerIts worth a shot22:56
isantopHmmmm, nope22:57
lazyPowerif that doesn't work, you may need to juju destroy them and attempt reprovisioning22:57
lazyPowerThe last thing to inspect if reprovisioning doesn't work is to take a look through the LXC configs to see if there's a divergence between the networking setup by your manually provisioned lxc container and what juju thought was right.22:58
lazyPowerand i can help there ^ its 2 text files and some diffing22:58
isantopNo luck stopping/starting22:59
lazyPowerjuju destroy-service owncloud should start tearing them down22:59
lazyPowerif the containers appear stuck, you can juju destroy-machine --force 0/lxc/1 - or the path to the container in juju status23:00
isantopLooks like juju-machine-0-lxc-0 is gone. But -1 is still there23:02
lazyPowerthe /1 was allocated to which service?23:02
isantopI'm not sure, it's not listed in juju status23:03
lazyPowerah, looks like its some leftover23:03
lazyPoweryou can safely remove that with sudo lxc-destroy -n23:04
isantopHow do I delete lxc containers?23:04
isantopthanks23:04
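The tear-down sequence from the last few lines, in order (names as they appear in juju status and lxc-ls --fancy):
    # Remove the service and its stuck unit from juju's point of view.
    juju destroy-service owncloud
    juju destroy-machine --force 0/lxc/1
    # Then clean up any leftover container directly on the host.
    sudo lxc-destroy -n juju-machine-0-lxc-1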
isantopOkay, I redeployed owncloud --to lxc:023:08
* lazyPower crosses fingers23:10
isantopThe "workload-status/message" is currently "Waiting for agent initialization to finish"23:12
isantopCurrent machine-0.log23:13
isantophttp://paste.ubuntu.com/12317287/23:13
lazyPoweryeah, machine-0 is kind of a black hole of provisioning information relating to lxc/kvm machines.23:15
lazyPower:( we're flying kind of blind at the moment23:15
isantopif I run juju ssh lxc:0, it looks like it grabbed one of my public IPs from eth123:17
lazyPowererr23:18
lazyPowerhmm, this might be part of the newer networking stuff that recently landed... but unless this came from a MAAS/AWS environment it should be using the lxcbr0 networking bridge23:19
isantopI only say that because the primary IP on eth1 is 173.248.161.18, and "juju ssh lxc:0" asked me to confirm the identity of "193.248.161.20". (We have a /29 static, and five of the addresses are allocated to eth1 via /etc/network/interfaces.)23:23
lazyPowerjuju routes ssh requests in from the public interface, and back out the unit's private interface to the requisite unit23:29
lazyPowerthat could be a lxc container on the bootstrap node, or a remote vm/server elsewhere in the DC23:29
lazyPowerthe stateserver acts as a proxy for just about everything you do23:30
isantopYeah, still stuck at "allocating"23:30
isantopCan I chroot into the lxc?23:31
lazyPowerThat i'm not sure of without an IP Address. I know the containers receive a cloudinit config to register w/ the juju ssh credentials so you can ssh ubuntu@host23:33
isantopnever mind, I figured that out: "lxc-attach -n juju-machine-0-lxc-1"23:33
lazyPowerdid that attach you to a running console or a login prompt?23:33
isantopNo, it said it couldn't get init pid23:34
isantopOh, but running as sudo works23:34
isantopI'm going to try re-bootstrapping the environment23:37
isantoplazyPower: Hmmmm, doesn't appear to have changed anything.23:43
lazyPower:(23:43
lazyPowerisantop: I need to step out for a bit. I'll be back around tomorrow and can assist you further then. Or you can fire off a mail to the list juju@lists.ubuntu.com - and one of the EU core folks may be able to lend a hand23:44
lazyPowersorry I wasn't able to get you sorted this evening23:45
isantopI'll let it steam about it overnight and see if anything magic happens. If not, I'll let you or jcastro know tomorrow23:45

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!