[00:02] hazmat: yep, comment makes sense... will update to ensure the current set of clients are enabled.
[01:07] noodles775, you could also do something like get some previous reviews acked by someone actively involved in the charm
[01:07] so that when a ~charmer on rotation gets to it, it's already had a few people's eyeballs on it ...
[01:10] jcastro: Yep, that makes sense when you've not got someone actively involved who is also a charmer.
[04:00] trying to deploy juju-gui on my bootstrap node
[04:01] I get the error "ERROR cannot assign unit "juju-gui/0" to machine 0: series does not match"
[04:01] the unit gets created but stays in the pending state
[04:03] projekt2: what's the series on the 0 machine?
[04:03] precise
[04:04] the command I ran was "juju deploy juju-gui --to 0"
[04:24] projekt2: can you please try `juju deploy cs:precise/juju-gui --to 0`?
[04:31] jose, that worked, thank you
[04:31] projekt2: cool, no problem. let me know if there's anything else you need help with!
=== uru_ is now known as urulama
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== nate_ is now known as natefinch
[10:58] jamespage, I have a small charmhelpers MP if you have a moment at some point https://code.launchpad.net/~gnuoy/charm-helpers/add-peer_ips/+merge/228640
[11:00] gnuoy: comments
[11:00] on MP
[11:01] ta
[11:01] gnuoy: thanks for merging the network splits stuff
[11:01] much appreciated
[11:01] np
[11:08] jamespage, updated
[11:10] gnuoy: a niggle, but the round brackets are surplus
[11:10] sure, 1 sec
[11:12] jamespage, fixed
[11:12] hello juju folks. I have been asked why, for juju ensure-availability (https://juju.ubuntu.com/docs/charms-ha.html), one cannot use an even number of nodes. I guess it's because one needs at least (n/2)+1 votes to get a quorum, and having an odd number for n makes split brain avoidable.
[11:13] can someone point me to more precise documentation on the odd-number prerequisite (like, which voting algorithm is being used or something?) and tell me... what happens when the master node dies? the whole thing ends up with an even number of nodes, does it not?
[11:14] gnuoy: merged
[11:14] thanks
[11:14] does this mean that as soon as one of the odd number of nodes dies, the juju environment cluster is at risk of split brain?
[11:14] melmoth, doing HA with even numbers of nodes is not reliable due to split brain
[11:14] melmoth, so a minimum of 3 units is required - the same applies in ceph as well
[11:15] melmoth, so that quorum is maintained
[11:15] in the event of a single node failure
[11:15] i got that... but then, when one node dies, you're again in a situation where a split brain can occur, right?
[11:15] melmoth, in that situation most tools just freeze - ceph does this - it knows it had a peer but it's disappeared, so io is frozen
[11:16] melmoth, I'm not 100% sure what juju's behaviour would be
[11:16] urm - rogpeppe1 might have an idea
[11:16] melmoth: jamespage is right
[11:17] rogpeppe1, so in the event that 2 out of 3 nodes fail, juju goes into lockdown, right?
[11:17] jamespage: yes
[11:18] melmoth, jamespage: see http://docs.mongodb.org/manual/core/replica-set-members/ for some detail
[11:18] rogpeppe1, awesome
[11:18] ahh, thanks
[11:18] jamespage: i don't think there's anything else that can be usefully done
[11:18] so, the actual "engine" of the HA cluster is native mongodb stuff, right?
[11:19] anyway, i think i got enough data to feed whoever asked me about this for a while :) cool. thanks.
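A rough sketch of the arithmetic behind the odd-number rule discussed above, assuming mongodb-style majority voting; the -n value is only an example, not a recommendation:

    # Quorum needs a strict majority of voting members: floor(n/2) + 1.
    #   n=3 -> majority is 2, tolerates 1 failure
    #   n=4 -> majority is 3, still tolerates only 1 failure (more machines, no extra margin)
    #   n=5 -> majority is 3, tolerates 2 failures
    # which is why ensure-availability expects an odd number of state servers, e.g.:
    juju ensure-availability -n 3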
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[13:47] Hello all. Trying to install an HA OpenStack instance on 6 nodes (3 for quantum/neutron and 3 for compute/ceph/all-in-one using lxc), and when all of the lxc containers come up, they receive the error agent-state-info: 'hook failed: "ha-relation-changed"'
[13:48] Each lxc container has received an IP in the same range as the mgmt IP addresses, and there is a br0 set up for these lxc containers to use
[14:06] sinzui: 1.18.4 is in trusty-proposed now. Do you have automated QA you can run on it yet?
[14:06] rbasak, no, honestly that is a feature that needs to be built
[14:06] sinzui: I'm not sure it's worth getting you to do any 1.18-specific work for this though, since we want to be on 1.20 really.
[14:07] Hmm, OK. I guess I should just verify that the client works in a local environment then?
[14:09] rbasak, +1, but I can do a little better by manually running the test scripts with the new juju. That will exercise other substrates/clouds
[14:10] rbasak, Once I stop developers from landing non-fixes for devel regressions, I can return to feature work. I can promise to have this tested in the next 24 hours
[14:15] sinzui: no problem - thanks.
[14:40] Trying to install HA OpenStack and having hook errors... getting this error in my hacluster logs on my lxc containers: Missing required principle configuration: ['corosync_bindnetaddr', 'corosync_mcastport']
[15:04] when i bring up a node using MAAS, I would like it to update to the latest kernel. How can this be automated?
[15:07] jamespage: do you have a moment?
[15:08] I fixed up MongoDB and added a really brief ceilometer test in amulet... i'd like input on what else you would like to see validated with Ceilometer so i have a robust test
[15:11] lazyPower: hello
[15:11] jamespage: i pushed the prelim here: https://launchpad.net/~lazypower/charms/trusty/mongodb/fixing_st00f
[15:12] i'm refactoring the test now to validate that the address is in ceilometer.conf - are there any other levers i need to pull, services to deploy?
[15:12] this basically just validates that it related successfully and has the conf file written
[15:35] mramm, alexisb can you include me in that discussion
[15:35] hazmat, yes I will have to, I do not have enough context to give jam guidance
[15:36] hazmat: that discussion is just about whether we can have JAM look at toku, from a resource availability standpoint
=== CyberJacob is now known as CyberJacob|Away
[16:08] jamespage: https://code.launchpad.net/~lazypower/charms/trusty/mongodb/fixing_st00f/+merge/228714
[16:10] lazyPower: that should be good - hook errors would indicate any other problems....
=== psivaa is now known as psivaa-bbl
[16:29] ahhh i just broke it backporting the storage-subordinate work
=== jrwren_ is now known as jrwren
=== Ursinha-afk is now known as Ursinha
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[17:31] could anybody point me to docs on creating custom images for MAAS?
=== roadmr is now known as roadmr_afk
[17:38] how does a charm know which image to use while installing the charm?
[18:06] looks like this forum is dead
[18:09] khuss: I believe the series comes from the 'precise' or similar in the path to the charm storage in the repository... https://bazaar.launchpad.net/~openstack-charmers/charms/precise/ceph/trunk/view/head:/metadata.yaml
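A minimal illustration of that point, assuming a local charm repository; the paths and charm name here are made up, and the series comes from the directory name rather than from metadata.yaml:

    # hypothetical local repository layout
    #   ~/charms/precise/mycharm/metadata.yaml
    #   ~/charms/precise/mycharm/hooks/
    # the 'precise' path component is what juju matches against the target machine's series
    juju deploy --repository=$HOME/charms local:precise/mycharm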
[18:11] sarnold: I would like to use a custom image while bringing up the charm. trying to see how to specify this custom image in the charm
[18:12] khuss: I don't think you'd really specify that in the charm itself; just guessing here, but you could probably create your own 'series' like precise or trusty or whatever, rename a distro, rename the directory in your repository - but I lose the thread when you have to get simplestreams to know about it..
[18:12] khuss: look into juju-metadata, I know it's how you build the metadata used to specify images
[18:13] khuss: but I don't know how that works with maas
[18:14] sarnold: do you know the procedure for creating a custom image? essentially i want to create an image with a different kernel and have MAAS boot the node with that image
[18:15] khuss: sorry, no idea there, I've always been content with the default images :(
[18:16] sarnold: the problem is that upgrading the kernel from the default image will require a reboot.. not sure how to handle that from a charm. I want the charm execution to continue after doing the reboot.. maybe there is a way to do this with startup scripts.. but I can avoid all that hassle if I use a custom image
[18:18] khuss: hrm, what problem are you trying to solve? there might be an easier way around it.
[18:18] sarnold: my charm requires a different kernel than what is provided by the default image
[18:19] (I'm not saying that I'll know the easier way -- just that this doesn't sound like the usual "orchestrate service interactions" that people use juju for :)
[18:21] sarnold: how would you install a charm that requires a different kernel - the juju way?
[18:24] khuss: I -think- I might first try using a subordinate charm whose sole job is to install the new kernel
[18:24] khuss: scheduling the reboot would be annoying of course.
[18:25] sarnold: the reboot is the problem
[18:26] sarnold: hope you understand the problem i am trying to solve
[18:27] khuss: yes, makes much more sense now :) thanks. I hope one of the wizards can suggest something good for you :)
[18:28] sarnold: pretty hard to get any help from this forum though.. thanks for your help anyway
[18:30] khuss: yeah, people are busy and don't always have time to head over to irc. you could also try asking on askubuntu.com with the [juju] tag - that might draw input from a different audience, and has the benefit that it would be recorded for posterity :)
[18:30] sarnold: thanks
=== Ursinha-afk is now known as Ursinha
=== psivaa-bbl is now known as psivaa
=== roadmr_afk is now known as roadmr
[19:41] hitting some blocks on deploying a charm that uses a postgresql db. my plan only has 8GB of local HD, so I need to use a volume - but how do I do that with the charm in Juju? there is nothing in the trusty/postgresql readme about this that I can see :( any hints anyone?
[19:47] hmmm I guess the storage charm has something to do with this
[20:07] bummer, block-storage-broker does not work on trusty+openstack :( filed bug #1350021
[20:07] <_mup_> Bug #1350021: block-storage-broker on Trusty + OpenStack fails in install
[20:11] jaywink: trusty supports icehouse
[20:13] oh, that bug I just reported has been fixed in code revision 53; the charm store is at 52
[20:14] the merge was done 18th July - I guess it takes some time for the store to update?
[20:15] someone must promulgate it (I think that is the term)
[20:19] hmm no wait, the trusty branch hasn't got the latest commit, only the precise branch. that kinda doesn't help since the fix is for trusty hehe
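One way to pick up a charm fix before the store catches up is to deploy from a branch in a local repository; a rough sketch, assuming the revision you need is in a branch you can reach (the branch location below is illustrative, not necessarily where the fixed revision lives):

    # put the branch in a local repository named after the target series
    mkdir -p ~/charms/trusty
    bzr branch lp:charms/trusty/block-storage-broker ~/charms/trusty/block-storage-broker
    # deploy the local copy instead of cs:trusty/block-storage-broker
    juju deploy --repository=$HOME/charms local:trusty/block-storage-broker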
[20:48] Hi -- when I deployed a node with juju on MAAS, it got /dev/sdb as its root device. Is this a valid possibility, or a bug?