/srv/irclogs.ubuntu.com/2014/07/29/#juju.txt

[00:02] <noodles775> hazmat: yep, comment makes sense... will update to ensure the current set of clients are enabled.
[01:07] <jcastro> noodles775, you could also do something like, get some previous reviews acked by someone actively involved in the charm
[01:07] <jcastro> so that when a ~charmer on rotation gets to it, if it's had a few people's eyeballs on it already ....
[01:10] <noodles775> jcastro: Yep, that makes sense when you've not got someone actively involved who is also a charmer.
[04:00] <projekt2> trying to deploy juju-gui on my bootstrap node
[04:01] <projekt2> I get the error "ERROR cannot assign unit "juju-gui/0" to machine 0: series does not match"
[04:01] <projekt2> the unit gets created but stays in pending state
[04:03] <jose> projekt2: what's the series on machine 0?
[04:03] <projekt2> precise
[04:04] <projekt2> the command I ran was "juju deploy juju-gui --to 0"
[04:24] <jose> projekt2: can you please try `juju deploy cs:precise/juju-gui --to 0`?
[04:31] <projekt2> jose, that worked, thank you
[04:31] <jose> projekt2: cool, no problem. let me know if there's anything else you need help with!
=== uru_ is now known as urulama
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== nate_ is now known as natefinch
[10:58] <gnuoy> jamespage, I have a small charmhelpers MP if you have a moment at some point https://code.launchpad.net/~gnuoy/charm-helpers/add-peer_ips/+merge/228640
[11:00] <jamespage> gnuoy: comments
[11:00] <jamespage> on the MP
[11:01] <gnuoy> ta
[11:01] <jamespage> gnuoy: thanks for merging the network splits stuff
[11:01] <jamespage> much appreciated
[11:01] <gnuoy> np
[11:08] <gnuoy> jamespage, updated
[11:10] <jamespage> gnuoy: a niggle, but the round brackets are surplus
[11:10] <gnuoy> sure, 1 sec
[11:12] <gnuoy> jamespage, fixed
[11:12] <melmoth> hello juju folks. I've been asked why, for juju ensure-availability (https://juju.ubuntu.com/docs/charms-ha.html), one cannot use an even number of nodes. I guess it's because you need at least (n/2)+1 votes to get a quorum, and having an odd number for n makes split brain avoidable.
[11:13] <melmoth> can someone point me to more precise documentation on the odd-number prerequisite (like which voting algorithm is being used, or something)? and tell me... what happens when the master node dies? the whole thing ends up with an even number of nodes, does it not?
[11:14] <jamespage> gnuoy: merged
[11:14] <gnuoy> thanks
[11:14] <melmoth> does this mean that as soon as one of the odd number of nodes dies, the juju environment cluster is at risk of split brain?
[11:14] <jamespage> melmoth, doing HA with even numbers of nodes is not reliable due to split brains
[11:14] <jamespage> melmoth, so a minimum of 3 units is required - the same applies in ceph as well
[11:15] <jamespage> melmoth, so that quorum is maintained
[11:15] <jamespage> in the event of a single node failure
[11:15] <melmoth> i got that.. but then, when one node dies, you're again in a situation where a split brain can occur, right?
[11:15] <jamespage> melmoth, in that situation most tools just freeze - ceph does this - it knows it had a peer but it's disappeared, so io is frozen
[11:16] <jamespage> melmoth, I'm not 100% sure what juju's behaviour would be
[11:16] <jamespage> urm - rogpeppe1 might have an idea
[11:16] <rogpeppe1> melmoth: jamespage is right
[11:17] <jamespage> rogpeppe1, so in the event that 2 out of 3 nodes fail, juju goes into lockdown, right?
[11:17] <rogpeppe1> jamespage: yes
[11:18] <rogpeppe1> melmoth, jamespage: see http://docs.mongodb.org/manual/core/replica-set-members/ for some detail
[11:18] <jamespage> rogpeppe1, awesome
[11:18] <melmoth> ahh, thanks
[11:18] <rogpeppe1> jamespage: i don't think there's anything else that can be usefully done
[11:18] <melmoth> so, the actual "engine" of the HA cluster is native mongodb machinery, right?
[11:19] <melmoth> anyway, i think i got enough data to feed whoever asked me about this for a while :) cool. thanks.
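[The majority-vote arithmetic melmoth describes, (n/2)+1, can be sketched in a few lines. This is a generic illustration of quorum sizing, not juju or mongodb code:]

```python
def quorum(n):
    """Minimum votes needed for a majority among n voting members."""
    return n // 2 + 1

def tolerable_failures(n):
    """How many members can fail before quorum is lost."""
    return n - quorum(n)

for n in range(3, 8):
    print(n, quorum(n), tolerable_failures(n))
```

[Note that 3 and 4 members both tolerate only a single failure, which is why the extra even member buys no additional resilience - it only adds another node that can fail.]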
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[13:47] <npasqua> Hello all. Trying to install an HA Openstack instance on 6 nodes (3 for quantum/neutron and 3 for compute/ceph/all-in-one using lxc), and when all of the lxc containers come up, they receive the error agent-state-info: 'hook failed: "ha-relation-changed"'
[13:48] <npasqua> Each lxc container has received an IP in the same range as the mgmt ip addresses, and there is a br0 set up for these lxc containers to use
[14:06] <rbasak> sinzui: 1.18.4 is in trusty-proposed now. Do you have automated QA you can run on it yet?
[14:06] <sinzui> rbasak, no, honestly that is a feature that needs to be built
[14:06] <rbasak> sinzui: I'm not sure it's worth getting you to do any 1.18-specific work for this though, since we want to be on 1.20 really.
[14:07] <rbasak> Hmm, OK. I guess I should just verify that the client works in a local environment then?
[14:09] <sinzui> rbasak: +1, but I can do a little better by manually running the test scripts with the new juju. That will exercise other substrates/clouds
[14:10] <sinzui> rbasak, Once I stop developers from landing non-fixes for devel regressions, I can return to feature work. I can promise to have this tested in the next 24 hours
[14:15] <rbasak> sinzui: no problem - thanks.
[14:40] <npasqua> Trying to install HA Openstack and having hook errors... getting this error in my hacluster logs on my lxc containers: Missing required principle configuration: ['corosync_bindnetaddr', 'corosync_mcastport']
[15:04] <khuss> when i bring up a node using MAAS, I would like it to update to the latest kernel. How can this be automated?
[15:07] <lazyPower> jamespage: do you have a moment?
[15:08] <lazyPower> I fixed up MongoDB and added a really brief ceilometer test in amulet... i'd like input on what else you would like to see validated with Ceilometer so i have a robust test
[15:11] <jamespage> lazyPower: hello
[15:11] <lazyPower> jamespage: i pushed the prelim here: https://launchpad.net/~lazypower/charms/trusty/mongodb/fixing_st00f
[15:12] <lazyPower> i'm refactoring the test now to validate that the address is in ceilometer.conf; are there any other levers i need to pull, services to deploy?
[15:12] <lazyPower> this basically just validates it related successfully, and has the conf file written
[15:35] <hazmat> mramm, alexisb can you include me in that discussion
[15:35] <alexisb> hazmat, yes I will have to, I do not have enough context to give jam guidance
[15:36] <mramm> hazmat: that discussion is just about whether we can have JAM look at toku, from a resource-availability standpoint
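[The error npasqua hits says the *principal* charm never handed those two corosync values to the hacluster subordinate over the ha relation. The OpenStack principal charms expose `ha-bindiface` and `ha-mcastport` options for this; a hypothetical deployer-style config fragment - the service name and values below are placeholders, not taken from this log:]

```yaml
# hypothetical config.yaml, passed via `juju deploy --config config.yaml keystone`
keystone:
  ha-bindiface: br0    # interface whose subnet corosync should bind to
  ha-mcastport: 5434   # multicast port handed to the hacluster subordinate
```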
=== CyberJacob is now known as CyberJacob|Away
[16:08] <lazyPower> jamespage: https://code.launchpad.net/~lazypower/charms/trusty/mongodb/fixing_st00f/+merge/228714
[16:10] <jamespage> lazyPower: that should be good - hook errors would indicate any other problems....
=== psivaa is now known as psivaa-bbl
[16:29] <lazyPower> ahhh i just broke it backporting the storage-subordinate work
=== jrwren_ is now known as jrwren
=== Ursinha-afk is now known as Ursinha
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[17:31] <khuss> could anybody point to docs on creating custom images for MAAS
=== roadmr is now known as roadmr_afk
[17:38] <khuss> how does a charm know which image to use while installing the charm?
[18:06] <khuss> looks like this forum is dead
[18:09] <sarnold> khuss: I believe the series comes from the 'precise' or similar in the path to the charm storage in the repository... https://bazaar.launchpad.net/~openstack-charmers/charms/precise/ceph/trunk/view/head:/metadata.yaml
[18:11] <khuss> sarnold: I would like to use a custom image while bringing up the charm. trying to see how to specify this custom image in the charm
[18:12] <sarnold> khuss: I don't think you'd really specify that in the charm itself; just guessing here, but you could probably create your own 'series' like precise or trusty or whatever, rename a distro, rename the directory in your repository, but I lose the thread when you have to get simplestreams to know about it..
[18:12] <ahasenack> khuss: look into juju-metadata, I know it's how you build the metadata used to specify images
[18:13] <ahasenack> khuss: but I don't know how that works with maas
[18:14] <khuss> sarnold: do you know the procedure for creating a custom image? essentially i want to create an image with a different kernel and have MAAS boot the node with that image
[18:15] <sarnold> khuss: sorry, no idea there, I've always been content with the default images :(
[18:16] <khuss> sarnold: problem is that upgrading the kernel from the default image will require a reboot.. not sure how to handle that from a charm. I want the charm execution to continue after doing the reboot.. maybe there is a way to do this with startup scripts.. but I can avoid all that hassle if I use a custom image
[18:18] <sarnold> khuss: hrm, what problem are you trying to solve? there might be an easier way around it.
[18:18] <khuss> sarnold: my charm requires a different kernel than what is provided by the default image
[18:19] <sarnold> (I'm not saying that I'll know the easier way -- just that this doesn't sound like the usual orchestrate-service-interactions work that people use juju for :)
[18:21] <khuss> sarnold: how would you install a charm if that charm requires a different kernel - the juju way?
[18:24] <sarnold> khuss: I -think- I might first try using a subordinate charm, whose sole job is to install the new kernel
[18:24] <sarnold> khuss: scheduling the reboot would be annoying, of course.
[18:25] <khuss> sarnold: reboot is the problem
[18:26] <khuss> sarnold: hope you understand the problem i am trying to solve
[18:27] <sarnold> khuss: yes, makes much more sense now :) thanks. I hope one of the wizards can suggest something good for you :)
[18:28] <khuss> sarnold: pretty hard to get any help from this forum though.. thanks for your help anyway
[18:30] <sarnold> khuss: yeah, people are busy and don't always have time to head over to irc. you could also try asking on askubuntu.com with the [juju] tag; that might draw input from a different audience -- and has the benefit that it would be recorded for posterity :)
[18:30] <khuss> sarnold: thanks
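[One way to approach the reboot problem khuss describes: make the kernel-installing hook idempotent, so on each run it checks whether the running kernel already matches what the charm needs and only installs/reboots when it doesn't; the hook then converges after the machine comes back up (later juju releases added a `juju-reboot` hook tool for requesting the reboot itself). A minimal sketch of the check, where the required version prefix is a hypothetical parameter:]

```python
import platform

def needs_reboot(required_prefix):
    """True if the running kernel does not match the required one.

    `required_prefix` is a hypothetical kernel-version prefix such as
    "3.16"; an install hook would call this on every run and skip the
    install/reboot step once the desired kernel is already booted.
    """
    return not platform.release().startswith(required_prefix)

# An empty prefix matches any kernel, so no reboot would be requested:
print(needs_reboot(""))
```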
=== Ursinha-afk is now known as Ursinha
=== psivaa-bbl is now known as psivaa
=== roadmr_afk is now known as roadmr
[19:41] <jaywink> hitting some blocks on deploying a charm that uses a postgresql db. my plan only has 8GB of local HD, so I need to use a volume - but how do I do that with the charm with Juju? there is nothing in the trusty/postgresql readme about this that I can see :( any hints anyone?
[19:47] <jaywink> hmmm I guess the storage charm has something to do with this
[20:07] <jaywink> bummer, block-storage-broker does not work on trusty+openstack :( filed bug #1350021
[20:07] <_mup_> Bug #1350021: block-storage-broker on Trusty + OpenStack fails in install <openstack> <trusty> <block-storage-broker (Juju Charms Collection):New> <https://launchpad.net/bugs/1350021>
[20:11] <jrwren> jaywink: trusty supports icehouse
[20:13] <jaywink> oh, that bug I just reported has been fixed in code revision 53; the charm store is at 52
[20:14] <jaywink> the merge was done on 18th July - I guess it takes some time for the store to update?
[20:15] <jrwren> someone must promulgate it (I think that is the term)
[20:19] <jaywink> hmm no wait, the trusty branch hasn't got the latest commit, only the precise branch. that kinda doesn't help since the fix is for trusty hehe
[20:48] <dpb1> Hi -- when I deployed a node with juju on maas, it got /dev/sdb as its root device. Is this a valid possibility? or a bug?

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!