=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
[11:52] hi, I'm having issues trying to access lxc- commands when running juju
[11:52] both pointed to the local environment
[11:52] is this a known issue?
=== coreycb` is now known as coreycb
[13:47] gnuoy, wolsen - avail to land this reviewed c-h bit, and review/land the dependent charm amulet test update MPs? i'd like to move on to update others, but holding until these hit. tia! https://code.launchpad.net/~1chb1n/charm-helpers/amulet-svc-restart-race/+merge/269098
[14:02] thedac, gnuoy - resync'd the test branch to get fri's changes on lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes & testing. thanks!
[14:19] anyone here?
[14:19] know how I can deploy a centos6 box?
[14:20] Hey, fun, I'm not successfully authing into the onboarding site in Firefox
[14:20] Ugh, wrong channel, sorry. =D
[14:22] g3naro: you'll need to follow this guide
[15:57] Is there a way to know that a configuration setting has reached the machine and been processed? I've been checking that agent-status is idle to know that the config has been applied, but I don't know how to differentiate between idle because the config has been applied and idle because the config hasn't been applied yet.
[16:22] ennoble: you can do a juju status-history for the unit in question
[16:22] it should show you when it processed the config-changed event
[16:28] marcoceppi: thanks, is there a way to access that information from the python juju client library?
[16:28] ennoble: that's a great question, and I'm not aware of a way at the moment.
[16:28] It's definitely exposed in the API, as that's where the CLI gets it
[16:28] marcoceppi: I've been using the watcher functionality in jujuclient to do that, but I seem to be able to miss it.
[16:29] marcoceppi: so I may be able to do an RPC call to get back the status-history? Is it its own RPC call?
[16:32] ennoble: I'm trying to find that out right now
[16:33] marcoceppi: Thanks!
[16:47] ennoble: I'm asking the devs in #juju-dev but I haven't gotten an answer yet
[16:50] marcoceppi: jujuenv._rpc({"Type":"Client", "Request":"UnitStatusHistory", "Params": {"Name": "myunit/0", "Size": 20, "Kind": "agent"}}) did the trick
[16:51] ennoble: good find, if you wanted to add that as a method to jujuclient I'm sure it'd be greatly appreciated
[17:48] thedac, gnuoy - friday revs sync'd in, passed the first run. i've re-queued a couple add'l iterations. https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b/+merge/270102
[17:48] beisner: excellent
[18:26] marcoceppi: I've got to jump through quite a few hoops to submit a patch, but I do have two other bug reports on jujuclient that have been outstanding for a while with suggested solutions in them. I'm trying to work through the hoops on my end, but if someone had a couple minutes to fix the issues I've reported (the fixes are in the bug reports) that would be great.
[18:31] ennoble: I'll take a look and help triage those through, thanks
[18:35] thanks marcoceppi: the two I'm referring to are #1455302 and #1486297
[18:36] Bug #1455302: enqueue_units doesn't correctly pass parameters to action
[18:36] Bug #1486297: Action doesn't correctly translate unit name into tag if hyphen present
=== bdx_ is now known as bdx
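For anyone wanting to reuse ennoble's finding programmatically, here is a minimal Python sketch that wraps the UnitStatusHistory call from 16:50 in python-jujuclient. The API endpoint, admin secret, and unit name are placeholders, and the shape of the response isn't asserted here; inspect it and look for the config-changed entry, which answers the original question of whether a config change has actually been processed.

    # Hedged sketch: query a unit's status history via python-jujuclient's
    # low-level _rpc() helper, using the exact request shown above.
    # The endpoint, admin secret, and unit name are placeholders - adjust them.
    from jujuclient import Environment

    env = Environment("wss://10.0.0.1:17070")  # state-server API address (placeholder)
    env.login("your-admin-secret")             # admin-secret from environments.yaml

    history = env._rpc({
        "Type": "Client",
        "Request": "UnitStatusHistory",
        "Params": {"Name": "myunit/0", "Size": 20, "Kind": "agent"},
    })

    # Inspect the raw response and look for a config-changed entry newer than
    # the moment the config was set, rather than relying on the agent being idle.
    print(history)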
[19:32] hey guys, the manual provider can't run on ports other than 22, right?
[19:41] jose: elaborate
[19:42] marcoceppi: I have someone who has a server running SSH on port 2222, and wants to set it up with juju + the manual provider
[19:43] jose: juju add-machine ssh:user@host:port
[19:43] oh, ok
[19:43] thanks
[20:04] thedac, gnuoy - 2 more rmq iterations ok (18 of 18 clustered ok and happy). that is:
[20:04] https://code.launchpad.net/~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes => https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/amulet-refactor-1509b
[20:05] precise through vivid, juju w/ LE.
[20:05] ack
[20:05] poor ack, the real ack. he probably gets highlighted all day every day.
[20:06] I'll start using /me nods :)
[20:06] nah, as you were, as you were. i ack as well. ack ack
[20:07] Hey everyone, I'm getting an error launching the mysql charm from the juju-gui, "1 hook failed: "install"". Here's the contents of /var/log/juju/unit-mysql-0.log on the machine the service was booted on: http://pastebin.com/GSgz50Sd
[20:17] It looks like the issue is a CalledProcessError related to the keyserver
[20:19] ntpttr: yeah, the output suggests the config setting 'key' has a value of null. I just fired off a deploy of mysql (without the gui) and it works. Not sure if this is a juju-gui related issue. Do you mind filing a bug? https://bugs.launchpad.net/charms
[20:20] thedac: I think the issue is proxy related - when I did this at home it did work, but behind a proxy now I'm having trouble. Would you still like me to file a bug?
[20:20] Unknown source: u'null' seems unexpected, as it passes to the cmd as --recv null
[20:20] right ^^
[20:22] ntpttr: yes on the bug report. With as much detail on how to recreate as possible
[20:23] thedac: Okay, I'm doing this on the default bootstrapped setup provided by the Orange Box, should I mention that?
[20:23] yep, I think the network is probably restricting tcp 80 egress. and since the charm is pushing hkp over 80, it's probably getting blocked. i would suspect in that environment, a simple apt-get install of anything would also fail.
[20:24] (worth checking manually)
[20:24] beisner: To check that manually should I just juju ssh into the machine and try an install of anything?
[20:25] i'd make sure something like this succeeds: sudo apt-get update && sudo apt-get install multitail
[20:25] ntpttr: try 'nc -vz keyserver.ubuntu.com 80'
[20:25] or, that's even better ;-)
[20:25] thedac's is-foo is quite handy
[20:27] beisner thedac: All right, one second while the machine deploys again and I'll run that command (I cleaned up the environment after the last failure, but I've had it happen multiple times so I'm confident it'll happen again)
[20:28] ok
[20:37] thedac beisner: Uh, so I have no idea what happened but it worked that time. I was looking at 'tail -f /var/log/juju/unit-mysql-0.log' and it went through the whole apt-get update and finished and the service started
[20:37] thedac beisner: Do you want me to pastebin the log for you?
[20:38] ntpttr: that is odd. Let us know if you run into it again.
[20:38] thedac: okay, will do. Thank you
=== natefinch is now known as natefinch-afk
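If you want to script the connectivity check beisner and thedac suggest above rather than running nc by hand, here is a minimal Python sketch, assuming a plain TCP egress check to keyserver.ubuntu.com on ports 80 and 11371 is what you need; run it on the affected unit, e.g. after juju ssh mysql/0.

    # Hedged sketch: TCP reachability check roughly equivalent to
    # 'nc -vz keyserver.ubuntu.com 80', covering the ports the charm's
    # hkp key import uses. Run it on the unit behind the proxy/firewall.
    import socket

    HOST = "keyserver.ubuntu.com"
    PORTS = [80, 11371]  # hkp-over-http fallback and the native hkp port

    for port in PORTS:
        try:
            sock = socket.create_connection((HOST, port), timeout=5)
            sock.close()
            print("%s:%d reachable" % (HOST, port))
        except (socket.error, socket.timeout) as exc:
            print("%s:%d blocked or unreachable: %s" % (HOST, port, exc))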
[21:02] jcastro: Can I ping you about some crazy juju-ness?
=== zz_CyberJacob is now known as CyberJacob
[21:09] isantop: it's getting to be EOD for most people on the east coast, anything I can help with?
[21:10] I'm trying to manage services on a personal server using juju
[21:11] isantop: sounds reasonable
[21:13] joseantonior had me going through manual provisioning, but I'm hitting issues with the agent-status not progressing past allocating. I'm not sure what configuration needs to be done for the lxc side of things
[21:15] isantop: so you're manually provisioning lxc?
[21:15] Well, I think I'm not, and that's my problem ;-)
[21:15] * isantop is a total cloud n00b
[21:17] isantop: no worries, why don't you recount how you got to where you are right now?
[21:18] did "sudo apt-get install lxc" on the remote server, and "sudo apt-get install juju-core" on my local machine (after adding the PPA)
[21:18] then did "juju generate-config", "switch manual", and "bootstrap"
[21:19] isantop: so you're trying to bootstrap an LXC machine on the remote server? or trying to bootstrap the remote server?
[21:20] the environments.yaml file is currently pointing to the remote server itself. I haven't done any lxc-related things on the remote server apart from installing it.
[21:21] isantop: right, okay, and so is the bootstrap node stuck at allocating?
[21:21] Exactly
[21:21] isantop: or did it bootstrap properly?
[21:21] Er
[21:21] no, it did bootstrap properly
[21:22] isantop: can you put the output of `juju status` into paste.ubuntu.com and send the link over?
[21:23] I removed the services I deployed and destroyed the environment already. I'll re-bootstrap
[21:26] http://paste.ubuntu.com/12316663/
[21:28] isantop: I thought there was a failure to allocate?
[21:28] It bootstraps fine, but I can't deploy any charms
[21:28] isantop: how are you trying to deploy charms?
[21:29] 'juju deploy charm --to lxc:0'
[21:29] isantop: I'd do that and wait, it can take some time to get the first cache of the lxc image
[21:29] isantop: it's best to do that, then to tail the machine-0.log on the remote host
[21:29] isantop: that will give insight into any errors
[21:34] here is the machine-0.log: http://paste.ubuntu.com/12316694/
[21:34] And the juju status after attempting to deploy the owncloud charm: http://paste.ubuntu.com/12316705/
[21:35] isantop: what does `sudo lxc-ls --fancy` show on the remote server?
[21:38] marcoceppi: http://paste.ubuntu.com/12316722/
[21:39] isantop: that's a good sign
[21:39] isantop: status still shows allocating?
[21:48] marcoceppi: Still allocating, yes
[21:48] isantop: I think I see the problem
[21:48] the containers are running
[21:48] but don't have networking
[21:48] Ah
[21:48] How do I get them networking?
[21:49] it should just happen
[21:49] this may sound bad, but try restarting the server if you can. Juju should come back online and the containers should auto-restart, and hopefully the networking bridge will be active
[21:49] I'll give that a shot in a while
[21:49] Won't be able to do it right now
[21:51] isantop: can you run ifconfig and look for a lxcbr0 device?
[21:51] ^ on the state-server that's currently trying to provision those lxc containers.
[21:52] lazyPower: I can see lxcbr0 when I ssh into the server, but there's no entry for it in ifconfig
[21:52] hmm
[21:52] Wait
[21:52] I just did that on my local machine. :|
[21:52] :) that'll do it every time
[21:53] as a side note, i freaked myself out one day doing that, when i was ssh'd into my maas machine which should be *loaded* with virtual ethernet devices for each vm/container running on it.. and i instantly panic'd when it came back with only a wireless card and loopback interface.
[21:53] haha
[21:53] http://paste.ubuntu.com/12316823/
[21:53] Assuming those all look okay?
[21:53] hmm, ok so 10.0.3.1 - it created the device and gave it networking
[21:54] and looking at your --fancy output none of the containers are listing with an IP address. can you stop/restart the container to see if it comes up with networking? (it should have done this already and been peachy) - sudo lxc-stop -n <container-name> && sudo lxc-start -n <container-name>
[21:55] if it's a temporary thing, that should kick it into acting right
[21:55] Would it just be the -1?
[21:56] juju-machine-0-lxc-0
[21:56] it's unfortunately going to be the full string of the container name
[21:56] Nah, that's not too bad to type
[21:57] I did -1, but it's stuck at waiting 120 seconds for network device
[21:57] ok, so something's actively stopping the container from grabbing a virtual device
[21:58] can you poke in /var/log/syslog to see if you see anything related to CGROUPS stopping anything?
[21:59] grepping syslog for cgroups (with -i) doesn't give me any output
[21:59] I'm not sure what happened, but i imagine that marcoceppi's proposed fix will clear this up - just rebooting the machine will be the simplest path forward.
[22:02] I'm currently on a ZNC on the server in question, so brb
[22:07] lazyPower: No change, it seems
[22:07] Current juju status: http://paste.ubuntu.com/12316885/
[22:10] Same issue if I lxc-stop/lxc-start
[22:44] hmm
[22:45] isantop: sorry i stepped away, one moment
[22:45] np
[22:47] ok, let's try something slightly different and see if we get networking... sudo lxc-create -t download -n u1 -- --dist ubuntu --release trusty --arch amd64
[22:47] run that on the state server and see if the manually provisioned lxc container gets proper networking
[22:48] I tried manually provisioning a container a while ago, and it got stuck at "Setting up the GPG Keyring"
[22:48] And eventually fails because it can't download the keyring from the keyserver
[22:49] (So, looks like no)
[22:49] interesting
[22:49] Are you behind some form of firewall/proxy?
[22:49] Oh, we do have csf set up
[22:50] googling csf returned cerebrospinal fluid... i don't think that's what you're referring to
[22:50] google "csf firewall"? :-p
[22:50] but i'll assume it's an egress firewall?
[22:50] It's an iptables utility
[22:51] hmm.. that doesn't explain the lack of addressing on the lxc containers, but it is reasonable to assume there's a rule blocking the gpg service from contacting the keyring server @ keyserver.ubuntu.com
[22:51] What port does that run over?
[22:52] port 11371 TCP
[22:52] okay, lemme open that up
[22:52] however
[22:52] ah wait, this is seed so you can't edit the config, disregard
[22:54] Ah, yeah, now I can manually provision a container
[22:55] Are there any other ports that will need to be opened up for it to work correctly?
[22:55] 80 and 11371 should be it
[22:55] 11371 should also have fallen back over http tbh
[22:56] that fix was put in place circa natty narwhal....
=== CyberJacob is now known as zz_CyberJacob
[22:56] Okay, so that appears to be working. If I restart the two juju containers, will that get them back up and running?
[22:56] It's worth a shot
[22:57] Hmmmm, nope
[22:57] if that doesn't work, you may need to juju destroy them and attempt reprovisioning
[22:58] The last thing to inspect if reprovisioning doesn't work is to take a look through the LXC configs to see if there's a divergence between the networking setup by your manually provisioned lxc container and what juju thought was right.
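A minimal Python sketch of the config comparison suggested just above ("2 text files and some diffing", as lazyPower puts it below): diff the juju-provisioned container's LXC config against the manually created u1 container from the lxc-create command earlier. The /var/lib/lxc/<name>/config paths assume stock LXC on trusty, and it needs to be run as root on the state server.

    # Hedged sketch: diff the LXC config of the container juju provisioned
    # against the manually created 'u1' container. Paths assume the default
    # /var/lib/lxc layout; run with sudo on the state server.
    import difflib

    juju_cfg = "/var/lib/lxc/juju-machine-0-lxc-1/config"
    manual_cfg = "/var/lib/lxc/u1/config"

    with open(juju_cfg) as f:
        juju_lines = f.readlines()
    with open(manual_cfg) as f:
        manual_lines = f.readlines()

    # Pay particular attention to the lxc.network.* lines (type, link, hwaddr).
    for line in difflib.unified_diff(juju_lines, manual_lines,
                                     fromfile=juju_cfg, tofile=manual_cfg):
        print(line.rstrip())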
[22:58] and i can help there ^ it's 2 text files and some diffing
[22:59] No luck stopping/starting
[22:59] juju destroy-service owncloud should start tearing them down
[23:00] if the containers appear stuck, you can juju destroy-machine --force 0/lxc/1 - or the path to the container in juju status
[23:02] Looks like juju-machine-0-lxc-0 is gone. But -1 is still there
[23:02] the /1 was allocated to which service?
[23:03] I'm not sure, it's not listed in juju status
[23:03] ah, looks like it's some leftover
[23:04] you can safely remove that with sudo lxc-destroy -n <container-name>
[23:04] How do I delete lxc containers?
[23:04] thanks
[23:08] Okay, I redeployed owncloud --to lxc:0
[23:10] * lazyPower crosses fingers
[23:12] The "workload-status/message" is currently "Waiting for agent initialization to finish"
[23:13] Current machine-0.log
[23:13] http://paste.ubuntu.com/12317287/
[23:15] yeah, machine-0 is kind of a black hole of provisioning information relating to lxc/kvm machines.
[23:15] :( we're flying kind of blind at the moment
[23:17] if I run juju ssh lxc:0, it looks like it grabbed one of my public IPs from eth1
[23:18] err
[23:19] hmm, this might be part of the newer networking stuff that recently landed... but unless this came from a MAAS/AWS environment it should be using the lxcbr0 networking bridge
[23:23] I only say that because the primary IP on eth1 is 173.248.161.18, and "juju ssh lxc:0" asked me to confirm the identity of "193.248.161.20". (We have a /29 static, and five of the addresses are allocated to eth1 via /etc/network/interfaces.)
[23:29] juju routes ssh requests in from the public interface, and back out the unit's private interface to the requisite unit
[23:29] that could be an lxc container on the bootstrap node, or a remote vm/server elsewhere in the DC
[23:30] the state-server acts as a proxy for just about everything you do
[23:30] Yeah, still stuck at "allocating"
[23:31] Can I chroot into the lxc?
[23:33] That i'm not sure of without an IP address. I know the containers receive a cloud-init config to register with the juju ssh credentials so you can ssh ubuntu@host
[23:33] never mind, I figured that out: "lxc-attach -n juju-machine-0-lxc-1"
[23:33] did that attach you to a running console or a login prompt?
[23:34] No, it said it couldn't get init pid
[23:34] Oh, but running as sudo works
[23:37] I'm going to try re-bootstrapping the environment
[23:43] lazyPower: Hmmmm, doesn't appear to offer any changes.
[23:43] :(
[23:44] isantop: I need to step out for a bit. I'll be back around tomorrow and can assist you further then. Or you can fire off a mail to the list juju@lists.ubuntu.com - and one of the EU core folks may be able to lend a hand
[23:45] sorry I wasn't able to get you sorted this evening
[23:45] I'll let it steam about it overnight and see if anything magic happens. If not, I'll let you or jcastro know tomorrow