[07:08] <jcastro> https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
[07:08] <jcastro> thumper, ^^^
[07:09] <thumper> bac, fwereade: https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
[07:38] <bac> thumper: https://docs.google.com/a/canonical.com/document/d/1t_55N1il3XoL8z-jfa1CBoSxzOQjC90cgSpCqx5wkH0/edit#
[07:38] <bac> thumper: no, https://wiki.canonical.com/InformationInfrastructure/IS/Mojo
[11:57] <stub> Tribaal: I think the non-corosync leader election stuff is still racy, in that you can have two or more units that think they are the leader running hooks at the same time.
[11:58] <Tribaal> stub: interesting, but how can that work?
[11:58] <Tribaal> stub: seems like "I am the unit with the smallest unit number" should be relatively easy to determine?
[11:59] <Tribaal> stub: or do you mean it races with the peer list fetching?
[11:59] <stub> A three-unit cluster: units 2 and 3 have joined the peer relation and are happily running hooks; unit 1 is finally provisioned and joins the peer relation later
[11:59] <Tribaal> ah
[11:59] <Tribaal> smartass units :)
[11:59] <stub> Last I checked, it is impossible to elect a leader reliably if you create a service with more than 2 units
[12:00]  * stub looks for the bug number
[12:00] <Tribaal> yeah, seems very dodgy to do so. I guess the documentation should reflect that, but the comments are still valid
[12:00] <Tribaal> stub: can we query the juju state server for the list of peers?
[12:00] <Tribaal> :)
[12:01] <stub> Tribaal: I haven't looked into unsupported mechanisms :)
[12:01] <Tribaal> stub: hehe
[12:01] <stub> Tribaal: I'm just sticking with the 'create 2 units, wait, then add more' as a documented limitation until juju gives us leader election
[12:01] <stub> https://bugs.launchpad.net/juju-core/+bug/1258485
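
As context for the race stub describes: a naive "lowest unit number wins" election in a peer-relation hook might look like the sketch below. The hook itself is hypothetical, but relation-list, juju-log, and JUJU_UNIT_NAME are real hook tools/variables. relation-list only reflects the peers this unit has seen so far, so a lower-numbered unit that joins late and an earlier unit can both conclude they are the leader while hooks are still in flight.

    #!/bin/bash
    # Hypothetical peer-relation-changed hook: lowest unit number wins.
    set -e

    me="$JUJU_UNIT_NAME"   # e.g. mycharm/2
    # Sort myself plus my currently-visible peers by unit number.
    lowest=$( { echo "$me"; relation-list; } | sort -t/ -k2 -n | head -1 )

    if [ "$lowest" = "$me" ]; then
        # Racy: a lower-numbered unit joining the relation later also
        # passes this test, while our hooks may still be running.
        juju-log "assuming leadership as $me"
    fi
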
[12:02]  * Tribaal looks into how complex a corosync setup is
[12:02] <stub> Let me know, that might solve my issues too...
[12:03] <Tribaal> stub: seems like it would be generally useful, yes. seems like a job zookeeper would have handled well though
[12:03] <Tribaal> sorry if I'm breaking a taboo :)
[12:04] <stub> I think juju has the information we need, it just needs to be exposed to the charms ;)
[12:07] <Tribaal> stub: yeah
[12:07] <Tribaal> stub: ohh
[12:08] <Tribaal> stub: I think I have an idea :)
[12:09] <Tribaal> stub: I'll give it a spin when I'm on the beach this week and see if it can work
[12:10] <stub> Tribaal: I've proven to myself that it is impossible, and nobody has yet corrected me, but you are more than welcome to prove me wrong :)
[12:10] <stub> My test suite seems guaranteed to trigger the race conditions :)
[12:10] <Tribaal> stub: sweet!
[12:10] <Tribaal> stub: a reproducible race is half the battle already
[12:49] <Tribaal> so, corosync uses multicast it seems
[12:49] <Tribaal> that comes with its own set of problems
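
(If multicast is the blocker: corosync can also run over unicast UDP. A rough, untested sketch; the transport setting is real, but all addresses below are placeholders.)

    # Rough sketch: write a unicast (udpu) corosync config instead of
    # relying on multicast. Addresses are placeholders, not a tested config.
    cat > /etc/corosync/corosync.conf <<'EOF'
    totem {
        version: 2
        transport: udpu
    }
    nodelist {
        node {
            ring0_addr: 10.0.0.11
        }
        node {
            ring0_addr: 10.0.0.12
        }
    }
    EOF
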
[14:08] <tvansteenburgh> jacekn: hi, i'm working the charm review queue this week, do you have any updates for https://code.launchpad.net/~jacekn/charms/precise/rabbitmq-server/queue-monitoring/+merge/218580 ?
[14:11] <jacekn> tvansteenburgh: sorry no another team took over this project
[14:11] <jacekn> tvansteenburgh: I will let them know
[14:11] <tvansteenburgh> jacekn: ok thanks
[14:13] <bigtree> I am having an issue with the juju mongodb filling up my 8gb micro sd card -- is there a way I can periodically flush this db?
[14:59] <jamespage> dimitern, http://paste.ubuntu.com/7961799/
[15:00] <jamespage> dimitern, http://paste.ubuntu.com/7961802/
[15:06] <khuss> I'm creating a new charm, my-nova-compute, which has to be installed on top of nova-compute. This means my-nova-compute has to be installed after nova-compute on the same machine. What kind of relationship can I use to achieve this?
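
What khuss describes is the usual case for a subordinate charm: with subordinate: true and a container-scoped relation, my-nova-compute is installed onto the same machine as the nova-compute unit it is related to, and only after that principal exists. A hedged sketch; the relation name "host" and the series are illustrative, and juju-info is the generic interface:

    # metadata.yaml for the subordinate (sketch):
    cat > my-nova-compute/metadata.yaml <<'EOF'
    name: my-nova-compute
    summary: runs alongside nova-compute
    description: example subordinate charm
    subordinate: true
    requires:
      host:
        interface: juju-info
        scope: container
    EOF

    # A subordinate places no units until related to a principal:
    juju deploy nova-compute
    juju deploy --repository=. local:precise/my-nova-compute
    juju add-relation nova-compute my-nova-compute
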
[15:30] <rbasak> sinzui: did you sort that source tarball for me, please?
[15:31] <rbasak> sinzui: I was having connectivity issues, so don't know if I missed a URL.
[15:34] <sinzui> rbasak, I am so sorry. I forgot. http://juju-ci.vapour.ws:8080/job/build-revision/1666/
[15:35] <rbasak> sinzui: no problem. Only getting to it now, as I wait on some very slow mysql tests :-/
[15:53] <rbasak> sinzui: are you free in eight minutes? The TB meeting has had some questions about Juju upstream QA for the exception request.
[15:53] <rbasak> sinzui: looks like it's dragged on for a while. If you could answer their questions, that might speed things up.
[15:53] <rbasak> sinzui: #ubuntu-meeting-2
[15:54] <sinzui> rbasak, I don't have time, sorry. I am sprinting and debating at this moment
[15:54] <rbasak> sinzui: OK, I'll try and do what I can.
[16:39] <hatch> anyone know why I would get this error when trying to bootstrap using local?
[16:39] <hatch> WARNING ignoring environments.yaml: using bootstrap config in file "/home/vagrant/.juju/environments/local.jenv"
[16:39] <hatch> 1.20.1-saucy-amd64
[16:40] <jcw4> hatch: I believe that's just a warning letting you know it's using the local.jenv instead of the environments.yaml
[16:41] <jcw4> hatch: if the local.jenv doesn't exist juju will create it the first time using environments.yaml as the template
[16:41] <jcw4> hatch: but after the local.jenv has been created, any changes in that section of the environments.yaml won't get picked up
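
In other words, the cached .jenv wins over environments.yaml, so to pick up edited settings you regenerate it, roughly:

    juju destroy-environment local --force   # or: rm ~/.juju/environments/local.jenv
    juju bootstrap -e local                  # recreates local.jenv from environments.yaml
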
[16:42] <hatch> ohh ok, it subsequently fails with:
[16:42] <hatch> ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[16:42] <hatch> so I thought that might have been the problem
[16:43] <jcw4> hatch: hmm, that seems like an unrelated error.  Not sure what that one is
[16:44] <hatch> here is the full output https://gist.github.com/hatched/5849510b38afac01b6cf
[16:45] <hatch> not sure if that helps at all heh
[16:46] <jcw4> hatch: interesting.  The WARNING unknown config field "shared-storage-port" bit is interesting
[16:46] <jcw4> hatch: but I'm not sure it's related either
[16:46] <jcw4> hatch: I'm suspecting lxc issues maybe
[16:47] <jcw4> hatch: can you 'juju destroy-environment local' and 'juju bootstrap' again?
[16:47] <hatch> yeah, I have to use --force though, because it seems to have created a 'partial' env
[16:47] <hatch> the same issue happens
[16:47] <jcw4> hatch: hmm
[16:47] <hatch> yeah I'm at a loss at how to debug this heh
[16:48] <jcw4> hatch: I'm afraid I don't know much more than that.  What does 'sudo lxc-ls --fancy' show?
[16:48]  * jcw4 grasping at straws
[16:48] <hatch> a fancy empty table :)
[16:48] <jcw4> hmm; that's interesting.  I would expect at least one row
[16:48] <hatch> after destroying?
[16:49] <abrimer> jamespage, are you available for a question?
[16:49] <hatch> jcw4 well thanks for the help, I'll keep poking around
[16:49] <jcw4> hatch: yeah, I think the 'juju-*-template'  would stay around
[16:50] <jcw4> hatch: yw... good luck :)
[16:50] <hatch> thanks - I'll need it haha
[16:51] <jcw4> hatch: lazyPower or marcoceppi or someone else may know better, if they're available right now
[16:51]  * lazyPower reads scrollback
[16:52] <lazyPower> hatch: do you have the juju-plugins repository added?
[16:52] <lazyPower> there's a plugin, juju-clean, to help clean this up and get you to a known good state - fresh from the cloud
[16:52] <hatch> lazyPower not sure....
[16:52] <hatch> unrecognized command
[16:52] <hatch> so probably not
[16:52] <lazyPower> https://github.com/juju/plugins
[16:53] <lazyPower> install instructions are in the README. just clone and add to $PATH
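
That is, something like the following (the clone location is arbitrary):

    git clone https://github.com/juju/plugins.git ~/juju-plugins
    export PATH="$PATH:$HOME/juju-plugins"   # add to ~/.bashrc to persist
    juju clean                               # plugins appear as juju subcommands
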
[16:53] <hatch> oh ok will try
[16:54] <themonk> how do I view a unit's log on an Amazon instance?
[16:55] <lazyPower> themonk: either juju debug-log, or cat/tail/less it in /var/log/juju/unit-service-#.log
[16:55] <themonk> ok thanks :)
[16:57] <hatch> lazyPower I don't want to jinx it but it appears to be working now....
[16:57] <lazyPower> woo
[16:57] <hatch> so...was that caused by the upgrade path or something?
[16:57] <hatch> any idea why it was broken?
[16:57] <lazyPower> hard to say
[16:57] <lazyPower> local provider can be picky
[16:58] <hatch> is this plugins stuff in the docs? I couldn't find it, it definitely should be :)
[16:58] <lazyPower> nope
[16:58] <lazyPower> it's very unofficial atm
[16:59] <themonk> lazyPower, it's not there; I have /var/log/juju-themonk-local, but that only has the local unit logs. I want the Amazon instance unit log
[16:59] <lazyPower> themonk: you need to juju ssh to the unit, then look for it in /var/log/juju
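
Concretely, the two options for a unit on a remote (e.g. EC2) machine look like this; the unit name is illustrative:

    juju debug-log                               # stream consolidated logs from the state server
    juju ssh myservice/0                         # or log in to the unit itself...
    tail -f /var/log/juju/unit-myservice-0.log   # ...and tail its log there
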
[17:00] <lazyPower> bbiaf, lunch
[17:01] <themonk> ok got it
[17:18] <natefinch> man, memtest is not fast
[17:51] <hatch> natefinch you're sure having bad luck lately :)
[17:56] <natefinch> probably same problem as before... I just thought it wasn't hardware, since the live disk worked, but maybe it's something specific to booting
[18:22] <lazyPower> natefinch: no sir
[18:22] <lazyPower> memtest is slooowwwww especially when you have quite a bit of it.
[18:23] <sarnold> heh, reminds me of the first time using it on a machine with 16 gigs.. "oh haha look how long this is going to take! *wait five minutes* oh. this is annoying."
[18:24] <lazyPower> haha, seems about right
[18:40] <natefinch> took an hour... no errors in the first pass though
[18:57] <abrimer> Can anyone help me with my quantum configuration for openstack using maas and juju?
[20:29] <npasqua> Hello all. Does anybody have experience using the hacluster charm? We received agent-state-info: 'hook failed: "ha-relation-changed/joined"' on most subordinates.
[20:48] <abrimer> jamespage, do you have a minute to help me with my quantum issue?
[20:54] <themonk> I am not getting anything when I hit the Amazon public-address, and I can't ping it either!
[20:55] <themonk> the Amazon dashboard shows me that the instances are running
[20:57] <lazyPower> themonk: did you expose it?
[20:57] <themonk> lazyPower, yes
[20:57] <lazyPower> did you validate your security groups were modified to actually open the ports?
[20:57] <lazyPower> and that it's not some hiccup on the AWS API side of things?
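
A quick way to run those checks (placeholder address and port):

    juju status                   # the service should show "exposed: true"; the unit, its "open-ports"
    nc -zv <public-address> 80    # confirm the port actually answers from outside
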
[20:58] <themonk> 30 min ago it was ok
[20:58] <lazyPower> did your units public address change on you?
[20:58] <themonk> I just redeployed my charm
[20:59] <themonk> I used --to 2, so the public address should not change
[20:59] <themonk> and it remains the same
[21:00] <themonk> I just exposed another service and I can't access it now either!