[02:44] <bl3u> has anyone successfully run juju within docker?
[02:45] <bl3u> docker doesn't work well with upstart, and it's causing juju bootstrap some pain =/
[02:45] <waigani> thumper: ping
[02:48] <waigani> Are we still having our weekly team meeting?
[02:49] <thumper> hey
[02:49] <thumper> waigani: not sure
[02:50] <thumper> waigani: I'm guessing no... a few too many people, unless I hear otherwise
[02:50] <waigani> thumper: cool, I pmed you too
[02:51] <thumper> bl3u: you won't really be able to run juju within docker
[02:51] <thumper> bl3u: not unless you are running a full os docker image as the base
[02:52] <thumper> bl3u: also, nested lxc type containers don't really work
[02:52] <thumper> bl3u: I'm assuming you were wanting to run the local provider inside a docker image
[02:52] <bl3u> thumper: that is correct
[02:52] <bl3u> i am running the ubuntu 14.04 base docker image
[02:55] <bl3u> i'm trying to play with it in a safe way; i'll just stand up a VM i suppose
[08:00] <kryptos> hi
[08:00] <kryptos> i have a problem with my juju configuration
[08:00] <kryptos> can you assist me with it?
[08:02] <kryptos> Error details: cannot parse "/root/.juju/environments.yaml": YAML error: line 326: could not find expected ':'
[08:39] <s3an2_> what is on line 326 of the yaml file?
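(Editor's note: the parser error above usually means a mapping key is missing its colon. A minimal sketch of how to reproduce and inspect it; the demo file below is invented for illustration, the real file was ~/.juju/environments.yaml with the problem on line 326.)

```shell
# Demo: a YAML mapping line missing its ':' trips the same parser error.
cat > /tmp/environments-demo.yaml <<'EOF'
environments:
  local:
    type local
EOF
# Print the suspect line (line 3 in this demo; line 326 in the original report):
sed -n '3p' /tmp/environments-demo.yaml
```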
[11:56] <_benoit_> I am patching goamz to support a new provider
[11:56] <_benoit_> Should the goamz external API be kept intact at all cost ?
[11:58] <_benoit_> I would need to modify the S3 constructors to return *S3, ok for example
[13:32] <mhall119> weird, irssi disconnected me from here and re-connected me to oftc/#juju
[13:32] <mhall119> anyway, is `juju upgrade-charm` supposed to update the files in /var/lib/juju/agents/ on my instances?
[14:23] <lazyPower> mhall119: indeed, for the charm specified after the upgrade-charm
[14:24] <lazyPower> eg: juju upgrade-charm mysql will upgrade mysql if there is a new revision in the store. otherwise it will return an error that you're already running the latest revision.
[14:26] <mhall119> lazyPower: I think my problem was git and LXC
[14:27] <lazyPower> ah, you're still running 1.18.x right?
[14:27] <lazyPower> Once the 1.19 branch lands as 1.20, your git merge conflicts will go away. It's a brilliant thing.
[14:30] <mhall119> hmmm, I think I broke LXC
[14:32] <mhall119> marcoceppi: does anything look wrong here: http://paste.ubuntu.com/7416306/ ?
[14:33] <mhall119> that's my all-machines.log
[14:34] <mhall119> juju bootstrap returned without error, and I've issued several juju deploy commands that are all listed as pending, but nothing is happening
[14:34] <mhall119> http://paste.ubuntu.com/7416337/ is `juju status`
[14:34] <mhall119> $ sudo initctl list |grep juju
[14:34] <mhall119> juju-db-mhall-local start/running, process 456
[14:34] <mhall119> juju-agent-mhall-local start/running, process 485
[14:34] <lazyPower> mhall119: he's out today and tomorrow on swap
[14:35] <mhall119> ah, jcastro then ^^
[14:35] <mhall119> help me jcastro, you're my only hope
[14:35] <avoine> mhall119: it's like you don't have cpu-checker installed
[14:36] <mhall119> it was working fine an hour ago
[14:36] <mhall119> I haven't rebooted or anything
[14:38] <avoine> mhall119: what is the result of that command: kvm-ok
[14:39] <mhall119> kvm-ok
[14:39] <mhall119> INFO: /dev/kvm does not exist
[14:39] <mhall119> HINT:   sudo modprobe kvm_intel
[14:39] <mhall119> INFO: For more detailed results, you should run this as root
[14:39] <mhall119> HINT:   sudo /usr/sbin/kvm-ok
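(Editor's note: for reference, a rough shell approximation of what kvm-ok checks — this is a sketch, not the real cpu-checker script: hardware virtualization flags in /proc/cpuinfo plus the /dev/kvm device node.)

```shell
# Approximation of kvm-ok (the real implementation ships in cpu-checker);
# checks CPU virtualization flags and the kvm device node.
if grep -Eq '(vmx|svm)' /proc/cpuinfo && [ -e /dev/kvm ]; then
    echo "KVM acceleration can be used"
else
    echo "KVM acceleration can NOT be used"
fi
```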
[14:40] <avoine> mhall119: it looks like you tried to setup a kvm local environment but you don't have kvm support
[14:41] <avoine> do you have something like: container: kvm in your environment?
[14:42] <mhall119> nope
[14:43] <mhall119> and like I said, it was working an hour ago
[14:43] <mhall119> container: lxc is in my ~/.juju/environments/local.jenv
[14:46] <avoine> mhall119: I had a problem like that when I tried to use lxc-clone: true
[14:46] <avoine> and with an old version of lxc
[14:48] <mhall119> well, I have lxc-clone: true
[14:48] <mhall119> but, I'm on Trusty with the juju PPA, it shouldn't be that old
[14:50] <mhall119> might try rebooting
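(Editor's note: the two settings discussed above can be confirmed straight from the generated .jenv. A sketch using an invented demo file; the real path mentioned is ~/.juju/environments/local.jenv.)

```shell
# Demo .jenv fragment (invented for illustration) containing the two
# settings discussed above, plus the grep used to confirm them.
mkdir -p /tmp/demo-juju/environments
cat > /tmp/demo-juju/environments/local.jenv <<'EOF'
bootstrap-config:
  type: local
  container: lxc
  lxc-clone: true
EOF
grep -E 'container:|lxc-clone:' /tmp/demo-juju/environments/local.jenv
```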
[15:08] <roadmr> hey folks, I tried following https://juju.ubuntu.com/docs/config-LXC.html but juju bootstrap got stuck trying to connect to mongodb (which wasn't installed). I had to manually install mongodb-server. Should juju-mongodb take care of this dependency for me?
[15:20] <lazyPower> roadmr: did you install juju-local?
[15:20] <lazyPower> that should pull down all the dependencies you need for lxc deployment
[15:20] <roadmr> lazyPower: I did
[15:21] <roadmr> lazyPower: it did pull everything *except* for mongodb-server :D
[15:21] <lazyPower> did you have mongodb installed prior to installing juju-local?
[15:21] <roadmr> lazyPower: no, it's a fresh install (utopic if it helps, but at this point it's almost identical to trusty)
[15:22] <lazyPower> ah, i have no idea whats going on in utopic... but i just booted a fresh vm with trusty (thanks maas master) and installed juju + juju-local and i was able to bootstrap
[15:22] <lazyPower> so, it installs juju-mongodb
[15:22] <lazyPower> if you're missing that package, give it a go and ping me with the results.
[15:22] <roadmr> lazyPower: yes, mine installed juju-mongodb too
[15:23] <roadmr> lazyPower: ok, I'll try redoing the whole thing from scratch, maybe I messed up somewhere :)
[15:24] <marcoceppi> mhall119: did you get it sorted?
[15:27] <mhall119> marcoceppi: not yet
[15:28] <marcoceppi> mhall119: what does sudo lxc-ls --fancy show?
[15:29] <mhall119> $ sudo lxc-ls --fancy
[15:29] <mhall119> Swipe your right index finger across the fingerprint reader
[15:29] <mhall119> NAME                   STATE    IPV4  IPV6  AUTOSTART
[15:29] <mhall119> -----------------------------------------------------
[15:29] <mhall119> juju-trusty-template   STOPPED  -     -     NO
[15:29] <mhall119> mhall-local-machine-1  STOPPED  -     -     NO
[15:30] <mhall119> I destroyed my old env, bootstrapped a new one and deployed just a single charm (postgresql) this time
[15:30] <marcoceppi> well, mhall-local-machine-1 stopped isn't nice
[15:34] <tvansteenburgh> marcoceppi: https://github.com/marcoceppi/amulet/pull/31
[15:35]  * tvansteenburgh drops the mic and walks off stage
[15:45] <mbruzek> mhall119, Have you resolved your problem?
[16:12] <mhall119> mbruzek: not yet, haven't tried rebooting yet though
[16:12] <themonk> marcoceppi: how do i restart a pending unit?
[16:13] <marcoceppi> mhall119: destroy environment with the --force flag, run sudo lxc-ls --fancy again see if machine-1 goes away
[16:13] <marcoceppi> themonk: what provider?
[16:13] <mbruzek> mhall119, what marco-vacation said.
[16:14] <themonk> marcoceppi: lxc local
[16:14] <marcoceppi> themonk: then the machine probably never came up, themonk what does sudo lxc-ls --fancy say for you? (and what does juju status show)
[16:15] <mbruzek> mhall119, Please paste results of ls /var/lib/juju/locks/
[16:16] <mhall119> marcoceppi: machine-1 does not go away
[16:16]  * marcoceppi laments the lack of an updated local provider troubleshooting guide
[16:17] <marcoceppi> mhall119: lxc-destroy -n blahblah-machine-1
[16:17] <mhall119> mbruzek: http://paste.ubuntu.com/7416886/
[16:18] <mhall119> marcoceppi: $ lxc-destroy --n mhall-local-machine-1
[16:18] <mhall119> Container is not defined
[16:18] <marcoceppi> mhall119: single -n
[16:18] <mhall119> ok, now it's gone
[16:18] <marcoceppi> mhall119: okay, bootstrap and deploy again
[16:20] <mbruzek> mhall119, Is the juju-trusty-template directory out of the locks directory now as well?
[16:21] <mhall119> mbruzek: nope
[16:21] <marcoceppi> mhall119: it shouldn't disappear between destroys
[16:21] <marcoceppi> mbruzek: ^
[16:22] <marcoceppi> it only needs to be removed if it exists and the trusty-template vm does not (or is not stopped)
[16:22] <mbruzek> OK.  Was just checking my history, for what Thumper helped me with last week.
[16:23] <marcoceppi> mbruzek: yeah a lot of people had template creation problems, cory and tim not the least
[16:25] <mhall119> marcoceppi: still doesn't appear to be deploying anything
[16:25] <mhall119> machine-0: 2014-05-08 16:19:27 WARNING juju.cmd.jujud machine.go:277 determining kvm support: exit status 1
[16:25] <mhall119> machine-0: no kvm containers possible
[16:26] <marcoceppi> that's silly
[16:28] <marcoceppi> mhall119: what does your environments.yaml look like?
[16:28] <marcoceppi> for the local provider
[16:31] <tvansteenburgh> marcoceppi: mind if i take a crack at https://bugs.launchpad.net/amulet/+bug/1293878 ?
[16:31] <marcoceppi> tvansteenburgh: not in the slightest
[16:31] <marcoceppi> tvansteenburgh: I was essentially going to rsync the contents of the charm to a temp location, run a bzr init, then a commit, then use that as the branch location in the deployer file
[16:32] <marcoceppi> tvansteenburgh: if you can think of a more clever path, by all means
[16:32] <tvansteenburgh> marcoceppi: cool, thanks - i'll start there
[16:35] <themonk> marcoceppi: themonk-local-machine-1  RUNNING  10.0.3.113  -     YES
[16:35] <themonk> marcoceppi: this is the output
[16:35] <qhartman> I have a juju environment running, and I accidentally re-issued a bootstrap command for it (the default env). It looks like it nuked the environment, I can no longer get status from it.
[16:35] <qhartman> Is that expected?
[16:36] <marcoceppi> qhartman: uh, no. What version of juju are you running?
[16:36] <qhartman> 0.18.2  I think, latest in Trusty
[16:36] <mhall119> marcoceppi: https://pastebin.canonical.com/109888/
[16:36] <qhartman> 1.18.2
[16:37] <mhall119> marcoceppi: but it was working for me this morning, so I don't think it's a config thing, I think I just broke the runtime somehow
[16:37] <marcoceppi> qhartman: yeah, that should never happen, you should get a warning about it already being bootstrapped
[16:37] <marcoceppi> mhall119: interesting
[16:38] <qhartman> marcoceppi, yeah, I did see that warning, but the file that I expect to exist in .juju/environments to track it is definitely gone
[16:38] <qhartman> :-\
[16:38] <marcoceppi> qhartman: was this local provider?
[16:38] <qhartman> marcoceppi, nope, maas
[16:39] <marcoceppi> oh bother.
[16:39] <qhartman> yeah
[16:39] <marcoceppi> qhartman: do you have the output from the boostrap command still?
[16:39] <qhartman> I interrupted it, but I can get it
[16:40] <marcoceppi> qhartman: ah, interesting, that might be the expected behavior
[16:40] <marcoceppi> an interrupted bootstrap will attempt to "clean itself up"
[16:40] <qhartman> oh
[16:40] <qhartman> well, that's quite the landmine
[16:40] <marcoceppi> if you run a bootstrap against a bootstrap it will eventually error out about already being bootstrapped
[16:40] <marcoceppi> but I'm curious the output regardless
[16:40] <qhartman> yeah, I'll go grab it
[16:40] <qhartman> one sec.
[16:42] <cjohnston> How is the order of relation hooks being run determined?
[16:42] <cjohnston> I'm using a deployer file, so is it just the order of the relations in the deployer file?
[16:45] <themonk> marcoceppi: do you have the solution?
[16:45] <qhartman_too> marcoceppi, http://pastebin.com/qsbnMnPD <- ctrl-c double-bootstrap landmine
[16:45] <marcoceppi> themonk: can you paste the output of juju status?
[16:46] <marcoceppi> qhartman: qhartman_too interesting, it looks like it errored properly
[16:47] <themonk> environment: local
[16:47] <themonk> machines:
[16:47] <themonk>   "0":
[16:47] <qhartman_too> I find it interesting that it talks about cleanup before the error about being already bootstrapped. Makes me wonder if there's something happening out of order?
[16:47] <themonk>     agent-state: started
[16:47] <themonk>     agent-version: 1.18.1.1
[16:47] <themonk>     dns-name: localhost
[16:47] <themonk>     instance-id: localhost
[16:47] <themonk>     series: precise
[16:47] <themonk>   "1":
[16:47] <themonk>     agent-state: started
[16:47] <themonk>     agent-version: 1.18.1.1
[16:47] <themonk>     dns-name: 10.0.3.113
[16:47] <themonk>     instance-id: themonk-local-machine-1
[16:47] <themonk>     series: precise
[16:47] <themonk>     hardware: arch=amd64
[16:47] <themonk> services:
[16:47] <themonk>   opendj:
[16:47] <themonk>     charm: local:precise/opendj-0
[16:47] <themonk>     exposed: false
[16:47] <themonk>     units:
[16:47] <themonk>       opendj/0:
[16:47] <themonk>         agent-state: pending
[16:48] <themonk>         agent-version: 1.18.1.1
[16:48] <themonk>         machine: "1"
[16:48] <themonk>         public-address: 10.0.3.113
[16:48] <themonk> marcoceppi: sorry for bad paste
[16:48] <marcoceppi> themonk: paste.ubuntu.com would be helpful next time :)
[16:48] <marcoceppi> themonk: so, the machine is up, what is likely happening is the install hook is stuck
[16:48] <themonk> marcoceppi: https://gist.github.com/anonymous/e23b27a7b301b86e7759
[16:49] <marcoceppi> themonk: pastebin `cat ~/.juju/local/log/unit-opendj-0.log` for me?
[16:49] <marcoceppi> qhartman qhartman_too yes, that is a clear issue of order of operations
[16:50] <marcoceppi> qhartman qhartman_too can you open a bug with that and I'll flag it as kind of a big issue
[16:51] <qhartman_too> marcoceppi, sure thing.
[16:51] <qhartman_too> I'll drop a link in here once it's up.
[16:51] <marcoceppi> qhartman_too: excellent
[16:52] <qhartman> marcoceppi, juju-core on launchpad?
[16:53] <marcoceppi> qhartman: yes
[16:56] <qhartman> marcoceppi, any recommended steps to clean this up? Is this a case of nuke-and-pave and start over?
[16:59] <marcoceppi> qhartman: so, all you need to do is regenerate the jenv file
[16:59] <marcoceppi> qhartman: I don't know how to do that out of band
[17:01] <qhartman> marcoceppi, yeah, I figured. I have some old "juju status" output lying around. Might that have the needed info?
[17:01] <qhartman> marcoceppi, https://bugs.launchpad.net/juju-core/+bug/1317596
[17:02] <marcoceppi> qhartman: not really; the jenv file that's created is basically everything in the environments.yaml definition plus some other values
[17:02] <qhartman> marcoceppi, I figured
[17:02] <marcoceppi> natefinch: hey, are you around?
[17:02] <qhartman> marcoceppi, oh well. Luckily this was not yet a production deployment
[17:02] <natefinch> marcoceppi: I was just about to ping you.  What's up?
[17:02] <marcoceppi> natefinch: is there anyway to generate a jenv file other than bootstrap?
[17:03] <marcoceppi> natefinch: qhartman has a bug where, in 1.18.2, if you bootstrap an already-bootstrapped environment, it errors out but does a "cleanup" after erroring, resulting in his jenv going missing
[17:03] <marcoceppi> bug # 1217596
[17:03] <marcoceppi> bug #1217596
[17:03] <marcoceppi> lazy mup.
[17:03] <natefinch> yeah, I think it's broken
[17:04] <marcoceppi> WHY MUP WHYYYYYYY
[17:04] <natefinch> no, there's no other way to make a jenv
[17:04] <marcoceppi> \o/
[17:04] <marcoceppi> qhartman: so, starting over looks like the only choice
[17:04] <natefinch> rebootstrapping definitely should not blow away your jenv though
[17:04] <marcoceppi> natefinch: oh, yeah, that's clearly a bug
[17:05] <qhartman> I hit ctrl-c when I realized I had accidentally done it again, so that might have played a role
[17:05] <marcoceppi> qhartman natefinch I think the log is the most interesting
[17:05] <marcoceppi> Bootstrap failed, destroying environment
[17:05] <marcoceppi> ERROR environment is already bootstrapped
[17:05] <natefinch> ahh.... control-c in the middle of a second bootstrap.... that could do it
[17:06] <qhartman> yeah, I'm guessing it traps the ctrl-c and goes to cleanup before realizing it already exists
[17:06] <marcoceppi> natefinch: I think the Ctrl-C is a red herring
[17:06] <qhartman> could be that too for sure
[17:06] <marcoceppi> it looks like on-error cleanup is performed, but if "already bootstrapped" is an error, we enter a weird state
[17:06] <natefinch> marcoceppi: I'm not so sure.  double bootstrapping is something I do all the time by accident
[17:06]  * marcoceppi goes to verify with a non Ctrl-C
[17:08] <qhartman> any rate, thanks for the help guys, I'm going to go fire up my orbital laser platform and start over.
[17:08] <qhartman> *womp womp*
[17:08] <natefinch> evilnickveitch: if there's a minor edit to the docs, should I do it right in the github editor, or is there a better process?
[17:09] <mbruzek> I know charmhelpers best was discussed recently, are we supposed to put the charm helpers inside the hook directory or inside the charm directory in $CHARM_DIR/lib/charmhelpers  ?
[17:11] <themonk> marcoceppi: it continuously logging this, https://gist.github.com/anonymous/31e426b5314ef51cfb47
[17:11] <marcoceppi> mbruzek: I'd prefer they be put in lib, but that requires some mucking around for inheritance
[17:11] <marcoceppi> mbruzek: most people put them in the hooks directory
[17:12] <mbruzek> marcoceppi, I am doing one from scratch what is the best practise we discussed last week?
[17:12] <marcoceppi> mbruzek: outside the hooks directory is always considered a best practice
[17:12] <mbruzek> Done.  Thanks marcoceppi
[17:13] <marcoceppi> themonk: that's annoying, and will be fixed in 1.19, but this means that the container didn't get set up properly
[17:13] <marcoceppi> so the agent is running, everything is working as expected, but for some reason cloud-init failed
[17:13] <marcoceppi> themonk: this likely means that the container doesn't have outside networking access and can't reach the archives
[17:14] <marcoceppi> themonk: you should be able to `juju ssh 1` and run sudo apt-get update to verify
[17:14] <themonk> marcoceppi: ok
[17:14] <marcoceppi> natefinch: the README outlines the process, but you can totally do it from the editor
[17:15] <themonk> marcoceppi: it has network access
[17:15] <themonk> marcoceppi: apt-get is updating
[17:15] <marcoceppi> themonk: can you sudo apt-get install git ?
[17:16] <themonk> marcoceppi: inside machine 1
[17:16] <marcoceppi> themonk: yes
[17:16] <themonk> marcoceppi: ok
[17:16] <themonk> marcoceppi: git is installing
[17:17] <natefinch> marcoceppi: sweet.  moving to github and markdown just reduced the barrier for me fixing that docs bug to nearly zero.... which is about where it needs to be for me to actually get off my lazy butt to do it :)
[17:18] <marcoceppi> I'm adding that to the quotes page ;)
[17:18] <marcoceppi> jcastro: ^^ o/ \o
[17:20] <qhartman_too> So, in using maas and juju together, can I specify which node I want juju to bootstrap onto if I have multiple available in maas?
[17:20] <qhartman_too> haven't found anything in the docs about that
[17:21] <natefinch> qhartman_too: you can give constraints to bootstrap, which can sorta help, but you can't precisely say "go onto the machine named foo" .... though I think we have stuff in the works for that
[17:23] <natefinch> qhartman_too: juju help constraints will show you how to use the constraints.  You can set a tag on the node and then use that to get bootstrap to go where you want it to
[17:23] <qhartman_too> natefinch, ok, will do. Being able to specify by name would be nice too, so I hope that is in the works.
[17:23] <qhartman_too> thanks
[17:25] <natefinch> qhartman_too: code for it was merged into trunk a couple weeks ago, so it'll be available in 1.20
[17:25] <natefinch> (I just checked)
[17:25] <cjohnston> marcoceppi: any idea why amulet appears to be running tests before the deployment is complete?
[17:25] <qhartman_too> awesome. So, by set a tag, you mean a maas tag, correct natefinch?
[17:25] <marcoceppi> cjohnston: it shouldn't, do you have the output from the run?
[17:26] <natefinch> qhartman_too: yep, then you can do --constraints tags=foo
[17:26] <qhartman_too> natefinch, cool
[17:28] <natefinch> marcoceppi: I have a problem with debug hooks on local... for some reason it seems like it's not setting up the environment correctly
[17:28] <natefinch> marcoceppi: I get a prompt like this: root@nate-local-machine-2:~#
[17:28] <cjohnston> marcoceppi: L77 http://paste.ubuntu.com/7417300/ I guess it shows complete, then 2 seconds later it starts another deployment... L78 shows the output of a command that is being run by the test, where we are trying to get the IP address and the open ports.. the open ports don't exist yet for some reason.. L84 is that same command, and they exist there
[17:28] <natefinch> marcoceppi: any ideas? I haven't used debug hooks much
[17:29] <marcoceppi> natefinch: are you in a tmux session?
[17:29] <natefinch> marcoceppi: yeah, it's a tmux session, it just doesn't have the right prompt, according to the docs, and seems to be in the wrong directory, etc
[17:30] <marcoceppi> natefinch: you're in window 0
[17:30] <marcoceppi> natefinch: you need to trigger a hook to get it to bring up a hook context
[17:30] <marcoceppi> natefinch: so either run a config changed, or if it's in error run resolved --retry
[17:30] <natefinch> marcoceppi: what I hear is: the docs need to be clarified ;)
[17:31] <cjohnston> when I run debug-hooks, I get the normal prompt for a bit until the hooks start running, then the hook prompt comes up
[17:32] <natefinch> marcoceppi: the docs say "When the first hook event is queued for a hook that is in the list of those to be debugged"  .... but it doesn't say how that happens.  Do I have to do something for that to happen?
[17:32] <marcoceppi> natefinch: you have to have a hook queued for execution then have it execute
[17:32] <marcoceppi> natefinch: feel free to word smith that line better :)
[17:32] <natefinch> marcoceppi: I did juju debug-hooks wordpress/0 install
[17:33] <marcoceppi> natefinch: yeah, that doesn't work (yet)
[17:33] <natefinch> marcoceppi: that brought me to this prompt.  I don't know what to do now
[17:33] <marcoceppi> juju debug-hooks wordpress/0 is what got executed
[17:33] <marcoceppi> natefinch: this is why I break the install hook when I write charms, so I can get to that hook context
[17:33] <natefinch> marcoceppi: that's fine.  What do I do now?
[17:33] <marcoceppi> natefinch: trigger a hook
[17:33] <marcoceppi> natefinch: ie, run juju set against the unit and change a config value
[17:34] <natefinch> marcoceppi: well, install is the one that failed
[17:34] <cjohnston> you need to run debug-hooks sooner
[17:34] <marcoceppi> natefinch: okay, run juju resolved --retry
[17:34] <marcoceppi> that will re-trigger the hook
[17:34] <marcoceppi> and put you in a hook context in debug-hooks
[17:35] <marcoceppi> that's documented /somewhere/ in the docs
[17:35] <natefinch> marcoceppi: ahh, yeah, that is under "debugging early hooks"
[17:37] <natefinch> man, juju resolved and juju debug-hooks seriously need better command line docs
[17:37] <natefinch> purpose: marks unit errors resolved
[17:39] <cjohnston> any thoughts on the amulet issue marcoceppi ?
[17:40] <marcoceppi> cjohnston: I'd need to see your tests to determine further what's going on
[17:41] <cjohnston> marcoceppi: http://bazaar.launchpad.net/~canonical-ci-engineering/uci-engine/trunk/view/head:/tests/test_ppa_assigner.py
[17:41] <avoine> cjohnston: btw your graphite-txstatsd charm looks pretty cool
[17:41] <cjohnston> avoine: thanks
[17:41] <cjohnston> still trying to get postgres happy
[17:42] <natefinch> hmm... this sounds familiar: failed to fstat previous diversions file: No such file or directory
[17:42] <jose> lazyPower: ping, have a minute?
[17:42] <lazyPower> jose: kind of, whats up?
[17:43] <jose> lazyPower: tests are running 'good' but failing because of the (expected) 'invalid' SSL certs, as they're self-signed
[17:43] <lazyPower> it's using python requests, right?
[17:43] <jose> lemme double check
[17:44] <lazyPower> wow banana hands... whit, you're contagious.
[17:44] <lazyPower> its got a loading phase of 3 days though
[17:45] <jose> lol, what?
[17:45] <jose> (they're using requests.post
[17:45] <lazyPower> the massive typos in a single sentence. Whit coined the phrase bananahands... i refuse to let that one go. it's too fitting for what's happening.
[17:45] <lazyPower> jose: http://stackoverflow.com/questions/10667960/python-requests-throwing-up-sslerror
[17:46] <jose> :P
[17:46] <lazyPower> so, load the service with a CAFile, and point it at that.
[17:46] <whit> http://www.lestelecreateurs.com/projects/altoids-banana-hands/
[17:46] <lazyPower> yep. i think i just found my new canonical directory photo
[17:47] <jose> :P
[17:51] <lazyPower> jose: the other alternative would be to set verify=false (meaning you're not really verifying the ssl cert at that point), but if the url remains https:// and you get the respective content, it's 'basically' doing the job we've requested. soo....
[17:51] <lazyPower> use your judgement on which approach you want to use. I'd accept either
[17:51] <jose> I prefer to do the latter, as I'm not sure how to generate the other
[17:51] <jose> as long as I get it to pass it'll be awesome
[17:51] <jose> then we can move on to upgrading it to 6.1.3 (I think)
[17:51] <cory_fu> What's the point of doing ssl if you don't verify the cert?
[17:52] <lazyPower> cory_fu: The fact it's still served over https would validate it's being sent over a secure channel; you're not validating the key exchange. What's a good way to perform that using python requests, cory_fu?
[17:52] <lazyPower> with a snakeoil cert
[17:52] <cory_fu> Or are you saying that the service you're downloading from doesn't have a valid cert?
[17:53] <lazyPower> its a snakeoil cert, not invalid, but not issued from a CA
[17:53] <jose> which makes it look 'invalid'
[17:53] <cory_fu> Hrm.  If you don't validate the cert at all, then it's not secure since it's open to MITM and spoofing
[17:53] <lazyPower> again, suggestions?
[17:53] <cory_fu> You should probably add that specific cert to the list of trusted, temporarily
[17:53] <lazyPower> ah
[17:54] <lazyPower> so suggestion 1 - you gen it locally and provide it to the charm right?
[17:54] <cory_fu> Um, there's a way to give requests a cert that you consider valid.  Lemme look
[17:54] <cory_fu> Yeah
[17:54] <lazyPower> jose: i think i'm inclined to agree with cory_fu, he's got a valid point.
[17:55] <lazyPower> we can't test if the cert is genned on the machine, but there are options to provide a certificate, and it's reasonable to assume it doesn't matter if that cert is genned on your workstation or the server. that's just a nicety the charm provides.
[17:55] <lazyPower> its the same symptom and will yield the same result.
[17:56] <jose> wait!
[17:56] <jose> charm-helpers
[17:56] <jose> will it work in a test?
[17:57] <jose> if it can then I can just generate the cert and parse it
[17:58] <lazyPower> jose: I would gen a certificate once and package it as part of the test
[17:58] <lazyPower> make a non-expiring self-signed SSL certificate and bundle it.
[17:58] <cory_fu> Right
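(Editor's note: generating that bundled long-lived self-signed cert once could look like the following — one plausible openssl invocation; the file names and subject are placeholders, not from the charm's tests.)

```shell
# One-off generation of a ~10-year self-signed cert to ship with the
# test fixtures; adjust -subj and -days as needed.
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout /tmp/test-key.pem -out /tmp/test-cert.pem \
    -days 3650 -subj "/CN=localhost"
# Sanity-check the result:
openssl x509 -in /tmp/test-cert.pem -noout -subject
```

The generated cert.pem can then be passed to requests via its `verify=` parameter instead of `verify=False`.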
[17:58] <jose> hmm, good idea too
[18:00] <cory_fu> Actually, I may be missing some context here.  Is this cert for something the charm is downloading from an external (but self-signed) resource, or for testing a service that self-signs itself?
[18:00] <cory_fu> ("Self-signs itself?"  From the department of redundancy department.)
[18:00] <jose> it generates a self-signed cert in order to enable SSL
[18:01] <cory_fu> And you just want to test that it's up and running?
[18:01] <jose> yeah
[18:01] <cory_fu> Oh
[18:01] <jose> not verifying any downloads or anything
[18:01] <jose> it's just a unit test
[18:01] <cory_fu> I thought you were talking about downloading a resource from a self-signed location
[18:01] <jose> oh, nope
[18:01] <cory_fu> I think just tacking on verify=False is fine for a test
[18:01] <jose> ok then
[18:02] <jose> I'm running the test now so let's see how it does
[18:02] <cory_fu> Sorry to complicate the discussion by jumping in without looking.  :p
[18:02] <lazyPower> cory_fu: hah :)
[18:02] <lazyPower> <3
[18:02] <lazyPower> all we really need to verify is that a) it's open on port 443, and b) it responds with content we define.
[18:02] <lazyPower> the SSL cert validation is a good idea though, and providing your own cert validates that it is indeed working with a certificate instead of faking it out by serving HTTP over HTTPS
[18:03] <cory_fu> Yeah.  It would be better, but I'm not sure it's worth the extra headache
[18:03] <cory_fu> Maybe if charm-helpers had some sugar for it
[18:04] <jose> if this passes I'll push and then look into it, think it's worth it
[18:04] <cory_fu> jose: Not to heap too much on you, but I was wondering if you saw my comments on the tracks charm?
[18:04] <jose> cory_fu: yep, I did!
[18:04] <cory_fu> thoughts?
[18:05] <jose> I'm working on them now :)
[18:05] <jose> sorry I missed the website-relation-joined, I thought I added it
[18:05] <jose> about the rake db:migrate being called multiple times... I'm not sure if it breaks anything
[18:05] <jose> maybe lazyPower knows? /me is no ruby expert
[18:06] <lazyPower> jose: nope
[18:06] <lazyPower> it wont break anything
[18:06] <jose> awesome then
[18:06] <lazyPower> it no-ops if there are no new migrations
[18:06] <lazyPower> see: gitlab-ci charm for more examples
[18:06] <lazyPower> https://code.launchpad.net/~lazypower/charms/precise/gitlab-ci/trunk
[18:07] <jose> what the... timed out
[18:27] <jose> cory_fu: changes pushed to the branch
[18:27] <cory_fu> Thanks.  I'll take a look in a minute
[18:30] <cory_fu> jose: There's still an issue with the port config setting.  Try changing it after deploying with the default port.  Because the .portnotsupported file doesn't exist, neither branch executes and the port doesn't change
[18:33] <jose> cory_fu: that's why the if [ -f .port ] && [ `cat .port` != "${PORT}" ]; then is in place
[18:34] <cjohnston> marcoceppi: anything stand out from those tests?
[18:49] <cory_fu> jose: Oh, I see, yeah.  But there's still the issue of once the port is set to > 1024, the .portsnotsupported file is removed and so neither of the checks is ever made again, so you could set it to 3000 and then change it to 80
[18:50] <jose> right
[18:52] <jose> cory_fu: fixed and pushed
[18:54] <cory_fu> Thanks!  :)
[18:55] <jose> np :)
[18:55] <jose> now, time for me to go and do some university stuff, be back later!
[18:55] <cory_fu> Could we just remove that first block which references .portnotsupported entirely now?
[18:55] <cory_fu> ok!
[18:55] <cory_fu> ttyl
[18:56] <jose> cory_fu: what?
[18:56] <cory_fu> Oh, I was just saying that the first if / else in config-changed seems redundant now.  Not a big deal
[18:56] <cory_fu> It looks like it'll work fine
[18:56] <jose> ooh, got it
[18:56] <jose> fixing
[18:57] <jose> cory_fu: changes pushed!
[18:57] <cory_fu> Thanks again!
[18:57] <jose> np, let me know if there's anything else I could fix and I'll make sure to fix it once I'm back :)
[18:58] <cory_fu> Awesome
[18:58] <cory_fu> You're the best
[19:28] <tvansteenburgh> hazmat, marcoceppi: either of you around?
[19:28] <marcoceppi> tvansteenburgh: o/
[19:29] <tvansteenburgh> hey so about this local charm amulet thing
[19:29] <marcoceppi> yeah
[19:29] <tvansteenburgh> it is actually deployer that can't handle a non-vcs charm
[19:29] <tvansteenburgh> and i'm wondering if it must stay that way
[19:29] <marcoceppi> tvansteenburgh: basically, and there's no reason that should change
[19:29] <marcoceppi> tvansteenburgh: the "branch" key assumes a VCS
[19:30] <tvansteenburgh> fair enough, i'll proceed with your original idea then, thanks
[19:30] <marcoceppi> tvansteenburgh: we could use the "charm" key
[19:30] <marcoceppi> and set JUJU_REPOSITORY
[19:30] <marcoceppi> and build it that way instead
[19:30] <marcoceppi> either way requires you to build a directory
[19:30] <tvansteenburgh> yeah ok
[19:31] <tvansteenburgh> i thought it was curious that deployer required your charm dir to be in vcs
[19:31] <tvansteenburgh> but if it must be so, that's fine
[19:31] <marcoceppi> tvansteenburgh: only using the branch options
[19:31] <marcoceppi> tvansteenburgh: if you use 'charm' instead it doesn't have to be
[19:42] <natefinch> marcoceppi: where's the right place to report a bug for the wordpress charm?
[20:12] <marcoceppi> natefinch: http://bugs.launchpad.net/charms/+source/wordpress/
[20:53] <waigani> fwereade: replace was a typo. I don't know how I managed to propose that, as I thought I'd reverted those changes. Sorry.
[21:00] <fwereade> waigani, no worries, all too easy to do
[21:18] <hazmat`> tvansteenburgh, deployer absolutely supports non-vcs charms..
[21:47] <cory_fu> jose: You around?
[21:47] <jose> cory_fu: just came back!
[21:47] <jose> perfect timing!
[21:48] <cory_fu> :)
[21:48] <cory_fu> I was just testing the changes on tracks, and I got one more error
[21:48] <jose> what's it?
[21:48] <cory_fu> 2014-05-08 21:26:11 INFO config-changed /var/lib/juju/agents/unit-tracks-0/charm/hooks/config-changed: line 7: 1025: No such file or directory
[21:48] <cory_fu> I think we need to double up the brackets.  [[ "$PORT" < 1025 ]]
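(Editor's note: the "No such file or directory" above comes from `<` being treated as input redirection inside single brackets; `[[ ]]` avoids the redirection but still compares lexically, while `-lt` compares numerically. A sketch with an invented PORT value, not the charm's actual hook:)

```shell
# Inside [ ... ], '<' is input redirection, hence the
# "1025: No such file or directory" error in the hook log.
# [[ "$PORT" < 1025 ]] avoids that but sorts as strings
# ("900" > "1025" lexically); -lt does a numeric comparison.
PORT=80   # invented example value
if [ "$PORT" -lt 1025 ]; then
    echo "privileged port: $PORT not supported"
else
    echo "port $PORT ok"
fi
```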
[21:49] <jose> oooh that's right
[21:57] <jose> cory_fu: pushed
[21:57] <cory_fu> Thanks
[21:57] <cory_fu> Checking
[21:57] <jose> ok!
[22:08] <cory_fu> jose: I'm still not able to change the port after deployment for some reason.  :(
[22:08] <jose> cory_fu: let me deploy and debug what's going on
[22:09] <cory_fu> Were you able to change the port?  It seems like the .port file isn't in place for me
[22:09] <jose> it should be in place
[22:09] <jose> but let's check
[22:09] <cory_fu> Oh!
[22:10] <jose> huh?
[22:10] <cory_fu> db-relation-changed does a cd /home/tracks/tracks before it creates the .port file
[22:10] <cory_fu> But the config-changed hook looks for it in the charm dir
[22:10] <jose> boom!
[22:10] <cory_fu> :)
[22:11] <jose> good catch
[22:12] <jose> let me deploy before pushing, I want to make sure it works
[22:35] <cory_fu> jose: I have to head out for the evening.  I might be back on briefly a bit later, but most likely, I'll review what you push tomorrow morning.  Thanks again for being so on top of this!  :)
[22:35] <jose> no problem
[22:35] <jose> it's supposed to be changing ports now :)
[22:35] <cory_fu> Excellent  :)
[22:43] <qhartman> anyone working on maas/juju/openstack deployments, feel free to drop into #majuos .
[22:45] <Term1nal> http://pastebin.com/fi86u6DK trying to deploy openstack-dashboard
[23:59] <Term1nal> how would I go about seeing how many containers I'm running on a node?