[00:07] Laney: I don't think so..
[00:08] Laney: note that the EC2 API is secure against all but replay attacks (and those are mitigated by expiring most commands very soon after they are run)
[00:10] the server can do replay detection anyhow
[00:11] if we wanted to teach it
[00:13] lifeless: if you +1 this https://code.launchpad.net/~clint-fewbar/juju/docs-clarify-service-name/+merge/104995 I'll just merge it.
[00:20] looking
[00:21] I think on line 6
[00:21] fair
[00:21] you should replace :: with
[00:22] \n\nFirst we need a database for wordpress, so we use the optional name parameter to call it wordpress-db::'
[00:22] SpamapS: ^ what do you think ?
[00:23] can I have a service exposed on an allocated public address?
[00:24] Laney: yes, it will expose the public-address
[00:25] SpamapS: do I have to give it as a parameter to any command?
[00:26] lifeless: yeah, that makes sense. http://paste.ubuntu.com/974771/
[01:25] SpamapS: +1 :)
[04:40] Oh crap, shazzner is working on a gitolite charm
[04:44] yo yo yo
[04:44] o/
[04:46] popey: did lxc finally get running?
[04:46] * marcoceppi is lazy
[04:57] marcoceppi: yes
[04:57] lol
[04:57] excellent
[04:58] however...
[04:58] alan bell is having fun
[04:58] he had to reboot his machine
[04:58] now doesn't know how to start the 3 lxc containers up
[05:11] popey: that sounds fun...
[06:03] popey: the incantations for recovering the lxc containers after a reboot have not been fully discovered yet
[06:03] popey: best to destroy/bootstrap again
[06:07] SpamapS: that's what we ended up doing. I thought LXC could survive reboots; at least it has been for me.
[06:11] marcoceppi: seeing as zookeeper and the containers are not restarted, that's surprising
[06:12] which is particularly problematic as the machine agent *is* started, and spews forth with ridiculous amounts of zk timeouts
[06:12] SpamapS: happened on my desktop a week ago, did a reboot, came back 5 days later with my disk full
[06:13] SpamapS: ah, that's why.
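[Editor's note: the optional service-name parameter from the docs merge proposal above can be sketched as follows. This is a hypothetical usage sketch assuming a bootstrapped environment; the charm and service names come from the conversation.]

```shell
# Deploy the mysql charm under the custom service name "wordpress-db",
# using juju deploy's optional service-name parameter.
juju deploy mysql wordpress-db

# Deploy wordpress and relate it to the named database service.
juju deploy wordpress
juju add-relation wordpress wordpress-db

# Expose wordpress so its public-address becomes reachable.
juju expose wordpress
```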
the error log just grew due to timeouts
[06:13] so, would restarting zookeeper fix that?
[06:13] yes, that
[06:13] just need to restart it with the same data dir
[06:14] and then in theory you could start the containers back up and it would all work
[06:22] ugh, I have to sleep.. have to wake up in 5 hours
[06:39] SpamapS: see ya o/
[07:00] <_mup_> juju/scale-test r538 committed by kapil.thangavelu@canonical.com
[07:00] <_mup_> disable extraneous ec2 api usage
[07:36] thanks guys. i had fun playing with juju tonight, looking forward to more playing during breaks and lunch tomorrow :D
=== almaisan-away is now known as al-maisan
[07:58] gnight charmers
=== al-maisan is now known as almaisan-away
[11:57] I am trying out juju on my local machine. I have done a bootstrap and then I deploy a mysql charm. However, "juju status" just keeps telling me that it is pending. When I look in the output.log for the unit, the only info there is "/usr/bin/python: No module named juju.agents"
[11:58] When I look in the lxc rootfs for the unit, there is no juju python package installed (in /usr/share/pyshared). Is this a bug I should report or am I doing something wrong?
=== phschwartz_ is now known as phschwartz
[15:44] so what does one do to make a charm subordinate? Is it at creation time or deploy time, or both?
[15:47] * james_w finds https://juju.ubuntu.com/docs/subordinate-services.html
[16:00] james_w: we need to improve the tutorial to include a subordinate example.
[16:00] we also need to update the yaml examples to match the new format
[16:00] o/ SpamapS
[16:00] SpamapS, also the config.yaml documentation seems outdated?
[16:01] marcoceppi: join us in junior ballroom 1 .. talking about mysql.
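[Editor's note: the reboot-recovery discussion above (restart zookeeper with its existing data dir, then start the containers) can be sketched roughly as below. This is a hypothetical, untested sketch; service names, container names, and the lxc commands shown are assumptions that will vary by release and setup.]

```shell
# Restart zookeeper; it reuses its existing data directory, so the
# environment state survives.
sudo service zookeeper start

# Start each of the environment's LXC containers back up, detached.
for c in $(sudo lxc-ls); do
    sudo lxc-start -n "$c" -d
done

# The unit and machine agents should reconnect to zookeeper,
# and status should stop timing out.
juju status
```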
:)
[16:04] https://bugs.launchpad.net/charms/+bug/886362
[16:04] <_mup_> Bug #886362: New charm proposal: txstatsd < https://launchpad.net/bugs/886362 >
[16:05] james_w: woot
[16:05] need to complete the graphite-web side now
[16:43] SpamapS: at the Ubuntu Cloud thing :)
[16:43] marcoceppi, you at the cloud summit?
[16:43] koolhead17: yes
[16:43] Heyo - was at Charm School last night and was running local juju w/o issue, but all of a sudden today I get this error when I try to bootstrap: "ERROR Unable to create file storage for environment"
[16:44] ok. i will join in the afternoon. the mysql roundtable talk sounds interesting :)
[16:44] also - I'm at the System76 booth in case anyone wants to stop by... :D
[16:44] marcoceppi, any luck with the glusterfs?
[16:54] and a reboot fixed it...
[16:54] :-/
[17:00] FunnyLookinHat: nothing like a Windows fix to an Ubuntu glitch :)
[17:29] FunnyLookinHat, local juju environments don't recover that well after a reboot
[17:29] if you get that again, try a 'juju destroy-environment' first - that normally sorts things out..
[17:33] FunnyLookinHat, i think you need to restart to get the networking part working if you are using LXC
[17:34] I destroyed the environment and re-did the bootstrap after rebooting - not sure ..
[17:34] Before rebooting, the bootstrap didn't work right
[17:34] very strange indeed.
[19:04] Ok - so SSL... how would I configure a service to use a specific private and public key file? Is there an easy way to pass that along as an argument?
[19:30] negronjl: that should be public now
[20:18] Anyone? Bueller? Don't make me come up to the charm school...
[20:59] FunnyLookinHat: ?
[21:00] FunnyLookinHat: usually SSL keys are generated on the hosts that they live on.
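[Editor's note: to answer the subordinate-charm question above ("creation time or deploy time?"): subordination is declared at charm-creation time, in metadata.yaml, via `subordinate: true` plus a container-scoped relation, as described at https://juju.ubuntu.com/docs/subordinate-services.html. A minimal sketch; the charm and relation names here are hypothetical:]

```yaml
name: my-subordinate        # hypothetical charm name
summary: Example subordinate charm
description: Deploys into the container of each unit it is related to.
subordinate: true           # declared at creation time, not deploy time
requires:
  host:                     # hypothetical relation name
    interface: juju-info
    scope: container        # container scope places the unit alongside the principal
```

At deploy time nothing special is needed: deploy the subordinate as usual, then `juju add-relation` it to a principal service, and a subordinate unit appears inside each principal unit's container.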
[21:03] SpamapS, Yeah - I guess I'm just wondering what the process would be in terms of rolling out and scaling a service that uses SSL
[21:03] FunnyLookinHat: It looks like you're going to just have to come up
[21:03] marcoceppi, hahaha - I'll try to sneak out of the booth in a bit.
[21:08] Hey all, I'm trying to get juju working locally with lxc. Every command seems to succeed, but I don't get IP addresses appearing in juju status
[21:09] oh, nm I think. Seems that while deploy works, it returns instantly instead of waiting for juju to finish its background work
[21:12] NCommander: the first time you deploy locally, juju takes a while to build the master image
[21:12] each subsequent deployment should be relatively quick
[21:12] marcoceppi: the behavior confused me since I'm used to commands waiting until they're done
[21:13] NCommander: yeah, Juju is asynchronous in that respect
[21:13] * NCommander is learning to write a juju charm for quassel-core
[21:13] commands get pushed to the juju bootstrap node, and bootstrap "queues" them and manages/coordinates everything
=== carif_ is now known as carif
[21:16] * NCommander tests his charm
[21:18] marcoceppi: now I've got a weird issue. My charm "works" but the output from APT during RSA key generation is showing up as errors
[21:18] (I think openssl is writing to stderr ...)
[21:18] NCommander: ERROR is just "stderr"
[21:18] Right, but agent-status then ended up in start-error
[21:18] NCommander: we all agree we should probably change that. :)
[21:19] NCommander: your start-error is based solely on the exit code of the hook
[21:19] Do I need to redirect stderr to stdout? :-/
[21:19] Weird
[21:19] oh, I see what happened
[21:19] SpamapS, so here's my confusion - when I set up an apache server to use SSL, I upload my cert to it and then set the apache virtual host to use that
[21:21] i.e. I have to generate the cert locally with something I grab from my cert signer...
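[Editor's note: the point above is that juju tags anything a hook writes to stderr as ERROR in the unit log, but start-error status depends only on the hook's exit code. A minimal hypothetical install-hook sketch; redirecting stderr is optional cosmetics, while the exit code is what matters:]

```shell
#!/bin/sh
# Sketch of a charm install hook. Tools like openssl write progress to
# stderr, which juju logs as ERROR lines, but the unit only enters
# start-error if the hook exits non-zero.
set -e  # any failing command makes the hook (and thus the unit) fail

# Optionally fold stderr into stdout to avoid the ERROR-tagged log noise:
openssl genrsa -out /etc/ssl/private/server.key 2048 2>&1

exit 0  # the exit code, not stderr output, determines hook success
```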
and I'm not seeing a proper place to inject those steps or that data
[21:21] SpamapS: alright, second question. It seems every juju deploy creates a new instance. That is probably desirable for some setups, but for quassel-core, it's probably best to have it and its postgresql backend on the same machine (or I suppose I could set it up so that when the relation is added, it migrates from sqlite -> postgresql)
[21:22] * flepied finished the first version of a naxsi charm
[21:23] * NCommander is a bit surprised there is no Launchpad charm ...
[21:27] hrm, also, I could connect to quassel-core without having to expose
[21:30] flepied: naxsi?
[22:18] SpamapS: help needed in #ubuntu-server :( is there some doc we can point folk at?
=== objectiveous_ is now known as objectiveous
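[Editor's note: on "I could connect to quassel-core without having to expose" above: the local LXC provider does no firewalling, so containers are directly reachable; on providers with firewall support (such as EC2), a service's opened ports stay blocked until the service is exposed. A hypothetical sketch using the service name from the conversation:]

```shell
# On EC2 and similar providers, ports a charm opens (via open-port in its
# hooks) are firewalled until the service is exposed. The local LXC
# provider skips firewalling, so this step is effectively a no-op there.
juju expose quassel-core

# juju status will then report the service as exposed.
juju status
```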