[11:52] <g3naro> hi, i'm having issues trying to access lxc- commands when running juju
[11:52] <g3naro> both pointed to local environment
[11:52] <g3naro> is this a known issue?
[13:47] <beisner> gnuoy, wolsen - avail to land this reviewed c-h bit, and review/land the dependent charm amulet test update MPs?  i'd like to move on to update others, but holding until these hit.  tia!  https://code.launchpad.net/~1chb1n/charm-helpers/amulet-svc-restart-race/+merge/269098
[14:02] <beisner> thedac, gnuoy - resync'd the test branch to get fri's changes on lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes & testing.  thanks!
[14:19] <g3naro> anyone here?
[14:19] <g3naro> know how i can deploy a centos6 box?
[14:20] <lukasa> Hey, fun, I'm not successfully authing into the onboarding site in Firefox
[14:20] <lukasa> Ugh, wrong channel, sorry. =D
[14:22] <marcoceppi> g3naro: you'll need to follow this guide
[15:57] <ennoble> Is there a way to know that a configuration setting has reached the machine and been processed? I've been checking that agent-status is idle to know that the config has been applied, but I don't know how to differentiate between idle because the config has been applied or idle because the config hasn't been applied yet.
[16:22] <marcoceppi> ennoble: you can do a juju status-history for the unit in question
[16:22] <marcoceppi> it should show you when it processed the config-changed event
[16:28] <ennoble> marcoceppi: thanks, is there a way to access that information from the python juju client library?
[16:28] <marcoceppi> ennoble: that's a great question, and I'm not aware of a way at the moment.
[16:28] <marcoceppi> It's definitely exposed in the API, as that's where the cli gets it
[16:28] <ennoble> marcoceppi: I've been using the watcher functionality in jujuclient to do that, but I seem to be able to miss it.
[16:29] <ennoble> marcoceppi: so I may be able to do an RPC call to get back the status-history? Is it its own RPC call?
[16:32] <marcoceppi> ennoble: I'm trying to find that out right now
[16:33] <ennoble> marcoceppi: Thanks!
[16:47] <marcoceppi> ennoble: I'm asking the devs in #juju-dev but I haven't gotten an answer yet
[16:50] <ennoble> marcoceppi: jujuenv._rpc({"Type": "Client", "Request": "UnitStatusHistory", "Params": {"Name": "myunit/0", "Size": 20, "Kind": "agent"}}) did the trick
[16:51] <marcoceppi> ennoble: good find, if you wanted to add that as a method to jujuclient I'm sure it'd be greatly appreciated
[17:48] <beisner> thedac, gnuoy - friday revs sync'd in, passed the first run.  i've re-queued a couple add'l iterations.  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b/+merge/270102
[17:48] <thedac> beisner: excellent
[18:26] <ennoble> marcoceppi: i've got to jump through quite a few hoops to submit a patch; but I do have two other bug reports on jujuclient that have been outstanding for a while with suggested solutions in them. I'm trying to work through the hoops on my end, but if someone had a couple minutes to fix the issues I've reported (the fixes are in the bug reports) that would be great.
[18:31] <marcoceppi> ennoble: I'll take a look and help triage those through, thanks
[18:35] <ennoble> thanks marcoceppi: the two I'm referring to are #1455302  and #1486297
[18:36] <mup> Bug #1455302: enqueue_units doesn't correctly pass parameters to action <python-jujuclient:New> <https://launchpad.net/bugs/1455302>
[18:36] <mup> Bug #1486297: Action doesn't correctly translate unit name into tag if hyphen present <juju-core:Won't Fix> <python-jujuclient:Confirmed> <https://launchpad.net/bugs/1486297>
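[editor's note: Bug #1486297 concerns translating a unit name into its tag when the service name contains a hyphen. The sketch below is an illustrative reimplementation of the conversion rule as juju defines it (prefix `unit-`, replace only the final `/` separator), not the actual python-jujuclient code.]

```python
def unit_tag(unit_name):
    """Convert a juju unit name like 'my-service/0' into its tag
    'unit-my-service-0'. Only the '/' separator becomes '-';
    hyphens already present in the service name are left alone,
    which is the case the bug report says was mishandled."""
    service, _, number = unit_name.rpartition("/")
    return "unit-%s-%s" % (service, number)

print(unit_tag("mysql/0"))       # unit-mysql-0
print(unit_tag("my-service/3"))  # unit-my-service-3
```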
[19:32] <jose> hey guys, manual can't run in ports other than 22, right?
[19:41] <marcoceppi> jose: elaborate
[19:42] <jose> marcoceppi: I have someone who has a server running on port 2222, and wants to set it up with juju + the manual provider
[19:43] <marcoceppi> jose: juju add-machine ssh:user@host:port
[19:43] <jose> oh, ok
[19:43] <jose> thanks
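[editor's note: a small sketch of building the `ssh:` placement directive marcoceppi gave for `juju add-machine`. The function name is hypothetical; the `user@host:port` form is taken from his example above, and whether every juju release accepts the `:port` suffix is an assumption worth verifying.]

```python
def manual_machine_target(user, host, port=22):
    """Build the 'ssh:' placement directive for `juju add-machine`
    under the manual provider, e.g. ssh:ubuntu@203.0.113.5:2222.
    Omits the port suffix when it is the SSH default of 22."""
    target = "ssh:%s@%s" % (user, host)
    if port != 22:
        target += ":%d" % port
    return target

print(manual_machine_target("ubuntu", "203.0.113.5", 2222))  # ssh:ubuntu@203.0.113.5:2222
```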
[20:04] <beisner> thedac, gnuoy - 2 more rmq iterations ok (18 of 18 clustered ok and happy).  that is:
[20:04] <beisner> https://code.launchpad.net/~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes   +=>  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b
[20:05] <beisner> precise through vivid, juju w/ LE.
[20:05] <thedac> ack
[20:05] <beisner> poor ack, the real ack.  he probably gets highlighted all day every day.
[20:06] <thedac> I'll start using /me nods :)
[20:06] <beisner> nah, as you were, as you were.  i ack as well.   ack ack
[20:07] <ntpttr> Hey everyone, I'm getting an error launching the mysql charm from the juju-gui, "1 hook failed: "install"". Here's the contents of /var/log/juju/unit-mysql-0.log on the machine the service was booted on: http://pastebin.com/GSgz50Sd
[20:17] <ntpttr> It looks like the issue is a calledprocesserror happening related to the keyserver
[20:19] <thedac> ntpttr: yeah, the output suggests the config setting 'key' has a value of null. I just fired off a deploy of mysql (without the gui) and it works. Not sure if this is a juju gui related issue. Do you mind filing a bug? https://bugs.launchpad.net/charms
[20:20] <ntpttr> thedac: I think the issue is proxy related - when I did this at home it did work but behind a proxy now I'm having trouble. Would you still like me to file a bug?
[20:20] <beisner> Unknown source: u'null'  seems unexpected, as it passes to the cmd as --recv null
[20:20] <thedac> right ^^
[20:22] <thedac> ntpttr: yes on the bug report. With as much detail on how to recreate as possible
[20:23] <ntpttr> thedac: Okay, I'm doing this on the default bootstrapped setup provided by the Orange Box, should I mention that?
[20:23] <beisner> yep i think the network is probably restricting tcp 80 egress.  and since the charm is pushing hkp over 80, it's probably getting blocked.   i would suspect in that environment, a simple apt-get install of anything would also fail.
[20:24] <beisner> (worth checking manually)
[20:24] <ntpttr> beisner: To check that manually should I just juju ssh into the machine and try an install of anything?
[20:25] <beisner> i'd make sure something like this succeeds:   sudo apt-get update && sudo apt-get install multitail
[20:25] <thedac> ntpttr: try 'nc -vz keyserver.ubuntu.com 80'
[20:25] <beisner> or, that's even better ;-)
[20:25] <beisner> thedac's is-foo is quite handy
[20:27] <ntpttr> beisner thedac: All right, one second while the machine deploys again and I'll run that command (I cleaned up the environment after the last failure, but I've had it happen multiple times so I'm confident it'll happen again)
[20:28] <thedac> ok
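[editor's note: thedac's `nc -vz keyserver.ubuntu.com 80` reachability check can be reproduced from Python with only the standard library. A minimal sketch; the host and port are the ones from the log, and the function name is made up for illustration.]

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds,
    mimicking `nc -vz host port`; False on refusal or timeout."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False
    sock.close()
    return True

# e.g. can_connect("keyserver.ubuntu.com", 80)
```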
[20:37] <ntpttr> thedac beisner: Uh so I have no idea what happened but it worked that time, I was looking at 'tail -f /var/log/juju/unit-mysql-0.log' and it went through the whole apt-get update and finished and the service started
[20:37] <ntpttr> thedac beisner: Do you want me to pastebin the log for you?
[20:38] <thedac> ntpttr: that is odd. Let us know if you run into it again.
[20:38] <ntpttr> thedac: okay, will do. Thank you
[21:02] <isantop> jcastro: Can I ping you about some crazy juju-ness?
[21:09] <marcoceppi> isantop: it's getting to be EOD for most people on the east coast, anything I can help with?
[21:10] <isantop> I'm trying to manage services on a personal server using juju
[21:11] <marcoceppi> isantop: sounds reasonable
[21:13] <isantop> joseantonior had me going through manual provisioning, but I'm hitting issues with the agent-status not progressing past allocating. I'm not sure what configuration needs to be done for the lxc side of things
[21:15] <marcoceppi> isantop: so you're manually provisioning lxc?
[21:15] <isantop> Well, I think I'm not and that's my problem ;-)
[21:15]  * isantop is a total cloud n00b
[21:17] <marcoceppi> isantop: no worries, why don't you recount how you got to where you are right now?
[21:18] <isantop> did "sudo apt-get install lxc" on the remote server, and "sudo apt-get install juju-core" on my local machine (after adding the PPA)
[21:18] <isantop> then did "juju generate-config", "switch manual", and "bootstrap"
[21:19] <marcoceppi> isantop: so you're trying to bootstrap an LXC machine on the remote server? or trying to bootstrap the remote server?
[21:20] <isantop> the environments.yaml file is currently pointing to the remote server itself. I haven't done any lxc-related things on the remote server apart from installing it.
[21:21] <marcoceppi> isantop: right, okay, and so is the bootstrap node stuck at allocating?
[21:21] <isantop> Exactly
[21:21] <marcoceppi> isantop: or did it bootstrap properly?
[21:21] <isantop> Er
[21:21] <isantop> no, It did bootstrap properly
[21:22] <marcoceppi> isantop: can you put the output of `juju status` into paste.ubuntu.com and send the link over?
[21:23] <isantop> I removed the services I deployed and destroyed the environment already. I'll re-bootstrap
[21:26] <isantop> http://paste.ubuntu.com/12316663/
[21:28] <marcoceppi> isantop: I thought there was a failure to allocate?
[21:28] <isantop> It bootstraps fine, but I can't deploy any charms
[21:28] <marcoceppi> isantop: how are you trying to deploy charms?
[21:29] <isantop> 'juju deploy charm --to lxc:0'
[21:29] <marcoceppi> isantop: I'd do that and wait, it can take some time to get the first cache of the lxc image
[21:29] <marcoceppi> isantop: it's best to do that, then to tail the machine-0.log on the remote host
[21:29] <marcoceppi> isantop: that will give insight to any errors
[21:34] <isantop> here is the machine-0.log: http://paste.ubuntu.com/12316694/
[21:34] <isantop> And the juju-status after attempting to deploy the owncloud charm: http://paste.ubuntu.com/12316705/
[21:35] <marcoceppi> isantop: what does `sudo lxc-ls --fancy` show on the remote server?
[21:38] <isantop> marcoceppi: http://paste.ubuntu.com/12316722/
[21:39] <marcoceppi> isantop: that's a good sign
[21:39] <marcoceppi> isantop: status still show allocating?
[21:48] <isantop> marcoceppi: Still allocating, yes
[21:48] <marcoceppi> isantop: I think I see the problem
[21:48] <marcoceppi> the containers are running
[21:48] <marcoceppi> but don't have networking
[21:48] <isantop> Ah
[21:48] <isantop> How do I get them network?
[21:49] <marcoceppi> it should just happen
[21:49] <marcoceppi> this may sound bad, but try restarting the server if you can. Juju should come back online and the containers should auto-restart and hopefully the networking bridge will be active
[21:49] <isantop> I'll give that a shot in a while
[21:49] <isantop> Won't be able to do it right now
[21:51] <lazyPower> isantop: can you run ifconfig and look for a lxcbr0 device?
[21:51] <lazyPower> ^ on the state-server, thats currently trying to provision those lxc containers.
[21:52] <isantop> lazyPower: I can see lxcbr0 when I ssh into the server, but there's no entry for it in ifconfig
[21:52] <lazyPower> hmm
[21:52] <isantop> Wait
[21:52] <isantop> I just did that on my local machine. :|
[21:52] <lazyPower> :) that'll do it every time
[21:53] <lazyPower> as a side note, i freaked myself out one day doing that, when i was ssh'd into my maas machine which should be *loaded* with virtual ethernet devices for each vm/container running on it.. and i instantly panic'd when it came back with only a wireless card and loopback interface.
[21:53] <isantop> haha
[21:53] <isantop> http://paste.ubuntu.com/12316823/
[21:53] <isantop> Assuming those all look okay?
[21:53] <lazyPower> hmm, ok so 10.0.3.1 - it created the device and gave it networking
[21:54] <lazyPower> and looking at your --fancy output none of the containers are listing with an IP Address, can you stop/restart the container to see if it brings up with networking? (it should have done this already and been peachy) - sudo lxc-stop -n <name-of-lxc-container> && sudo lxc-start -n <name-of-lxc-container>
[21:55] <lazyPower> if its a temporary thing, that should kick it into acting right
[21:55] <isantop> Would it just be the -1?
[21:56] <lazyPower> juju-machine-0-lxc-0
[21:56] <lazyPower> it's unfortunately going to be the full string of the container
[21:56] <isantop> Nah, that's not too bad to type
[21:57] <isantop> I did -1, but it's stuck at waiting 120 seconds for network device
[21:57] <lazyPower> ok, so somethings actively stopping the container from grabbing a virtual device
[21:58] <lazyPower> can you poke in /var/log/syslog to see if you see anything related to CGROUPS stopping anything?
[21:59] <isantop> grepping syslog for cgroups (with -i) doesn't give me any output
[21:59] <lazyPower> I'm not sure what happened, but i imagine that marcoceppi's proposed fix will clear this up - just rebooting the machine will be the simplest path forward.
[22:02] <isantop> I'm currently on a ZNC on the server in question, so brb
[22:07] <isantop> lazyPower: No change, it seems
[22:07] <isantop> Current juju status: http://paste.ubuntu.com/12316885/
[22:10] <isantop> Same issue if I lxc-stop/lxc-start
[22:44] <lazyPower> hmm
[22:45] <lazyPower> isantop: sorry i stepped away, one moment
[22:45] <isantop> np
[22:47] <lazyPower> ok, lets try something slightly different and see if we get networking... sudo lxc-create -t download -n u1 -- --dist ubuntu --release trusty --arch amd64
[22:47] <lazyPower> run that on the state server and see if the manually provisioned lxc container gets proper networking
[22:48] <isantop> I tried manually provisioning a container a while ago, and it got stuck at "Setting up the GPG Keyring"
[22:48] <isantop> And eventually fails because it can't download the keyring from the keyserver
[22:49] <isantop> (So, looks like no)
[22:49] <lazyPower> interesting
[22:49] <lazyPower> Are you behind some form of firewall/proxy?
[22:49] <isantop> Oh, we do have csf set up
[22:50] <lazyPower> googling csf returned cerebro spinal fluid... i dont think thats what you're referring to
[22:50] <isantop> google "csf firewall"? :-p
[22:50] <lazyPower> but i'll assume its an egress firewall?
[22:50] <isantop> It's an iptables utility
[22:51] <lazyPower> hum.. that doesn't explain the lack of addressing on the lxc containers, but it's reasonable to assume there's a rule blocking the gpg service from contacting the keyring server @ keyserver.ubuntu.com
[22:51] <isantop> What port does that run over?
[22:52] <lazyPower> port 11371 TCP
[22:52] <isantop> okay, lemme open that up
[22:52] <lazyPower> however
[22:52] <lazyPower> ah wait this is seed so you cant edit the config, disregard
[22:54] <isantop> Ah, yeah, now I can manually provision a container
[22:55] <isantop> Are there any other ports that will need to be opened up for it to work correctly?
[22:55] <lazyPower> 80 and 11371 should be it
[22:55] <lazyPower> 11371 should also have fallen back over http tbh
[22:56] <lazyPower> that fix was put in place circa natty narwhal....
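[editor's note: HKP, the protocol discussed above, is plain HTTP on port 11371, which is why a port-80 fallback is possible. A sketch of the lookup URL a key fetch ultimately hits, useful for verifying a firewall rule by hand; the key ID is a placeholder and the helper name is invented.]

```python
def hkp_lookup_url(key_id, host="keyserver.ubuntu.com", port=11371):
    """Build the standard HKP lookup URL for fetching a public key.
    With port=80 the same path serves as the plain-HTTP fallback
    mentioned above."""
    return "http://%s:%d/pks/lookup?op=get&search=0x%s" % (host, port, key_id)

print(hkp_lookup_url("DEADBEEF"))           # the default hkp port, 11371
print(hkp_lookup_url("DEADBEEF", port=80))  # the port-80 fallback form
```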
[22:56] <isantop> Okay, so that appears to be working. If I restart the two juju containers, will that get them back up and running?
[22:56] <lazyPower> Its worth a shot
[22:57] <isantop> Hmmmm, nope
[22:57] <lazyPower> if that doesn't work, you may need to juju destroy them and attempt reprovisioning
[22:58] <lazyPower> The last thing to inspect if reprovisioning doesn't work is to take a look through the LXC configs to see if there's a divergence between the networking setup by your manually provisioned lxc container and what juju thought was right.
[22:58] <lazyPower> and i can help there ^ its 2 text files and some diffing
[22:59] <isantop> No luck stopping/starting
[22:59] <lazyPower> juju destroy-service owncloud should start tearing them down
[23:00] <lazyPower> if the containers appear stuck, you can juju destroy-machine --force 0/lxc/1 - or the path to the container in juju status
[23:02] <isantop> Looks like juju-machine-0-lxc-0 is gone. But -1 is still there
[23:02] <lazyPower> the /1 was allocated to which service?
[23:03] <isantop> I'm not sure, it's not listed in juju status
[23:03] <lazyPower> ah, looks like its some leftover
[23:04] <lazyPower> you can safely remove that with sudo lxc-destroy -n
[23:04] <isantop> How do I delete lxc containers?
[23:04] <isantop> thanks
[23:08] <isantop> Okay, I redeployed owncloud --to lxc:0
[23:10]  * lazyPower crosses fingers
[23:12] <isantop> The "workload-status/message" is currently "Waiting for agent initialization to finish"
[23:13] <isantop> Current machine-0.log
[23:13] <isantop> http://paste.ubuntu.com/12317287/
[23:15] <lazyPower> yeah, machine-0 is kind of a black hole of provisioning information relating to lxc/kvm machines.
[23:15] <lazyPower> :( we're flying kind of blind at the moment
[23:17] <isantop> if I run juju ssh lxc:0, it looks like it grabbed one of my public IPs from eth1
[23:18] <lazyPower> err
[23:19] <lazyPower> hmm, this might be part of the newer networking stuff that recently landed... but unless this came from a MAAS/AWS environment it should be using the lxcbr0 networking bridge
[23:23] <isantop> I only say that because the primary IP on eth1 is 173.248.161.18, and "juju ssh lxc:0" asked me to confirm the identity of "193.248.161.20". (We have a /29 static, and five of the addresses are allocated to eth1 via /etc/network/interfaces.)
[23:29] <lazyPower> juju routes ssh requests in from the public interface, and back out the units private interface to the requisit unit
[23:29] <lazyPower> that could be a lxc container on the bootstrap node, or a remote vm/server elsewhere in the DC
[23:30] <lazyPower> the stateserver acts as a proxy for just about everything you do
[23:30] <isantop> Yeah, still stuck at "allocating"
[23:31] <isantop> Can I chroot into the lxc?
[23:33] <lazyPower> That i'm not sure of without an IP Address. I know the containers receive a cloudinit config to register w/ the juju ssh credentials so you can ssh ubuntu@host
[23:33] <isantop> never mind, I figured that out: "lxc-attach -n juju-machine-0-lxc-1"
[23:33] <lazyPower> did that attach you to a running console or a login prompt?
[23:34] <isantop> No, it said it couldn't get init pid
[23:34] <isantop> Oh, but running as sudo works
[23:37] <isantop> I'm going to try re-bootstrapping the environment
[23:43] <isantop> lazyPower: Hmmmm, doesn't appear to offer any changes.
[23:43] <lazyPower> :(
[23:44] <lazyPower> isantop: I need to step out for a bit. I'll be back around tomorrow and can assist you further then. Or you can fire off a mail to the list juju@lists.ubuntu.com - and one of the EU core folks may be able to lend a hand
[23:45] <lazyPower> sorry I wasn't able to get you sorted this evening
[23:45] <isantop> I'll let it steam about it overnight and see if anything magic happens. If not, I'll let you or jcastro know tomorrow