/srv/irclogs.ubuntu.com/2014/07/18/#juju.txt

=== thomi_ is now known as thomi
=== Ursinha-afk is now known as Ursinha
[02:21] * hazmat looks at wwitzel3's state services branch
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== ashipika1 is now known as ashipika
=== wallyworld__ is now known as wallyworld
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[09:19] <schegi> jamespage?
[09:48] <jamespage> schegi, hello!
[09:51] <schegi> hi, got two questions. you mentioned that you fixed something related to the missing zap with ceph. i updated your network-split charms but bzr says they are still up to date. have you committed?
[09:52] <schegi> second: i have problems deploying the checked-out network-split versions of cinder and cinder-ceph. juju does not recognize them as charms when i try to deploy them from the local repo. is there something missing in the charm?
[09:56] <schegi> and i have some strange behaviour related to the hacluster charms. when i deploy it as described in https://wiki.ubuntu.com/ServerTeam/OpenStackHA, or switch to the percona-cluster charm (doesn't matter), the deployment works fine: every service is up and running and no hook fails. BUT if i log in to one of the machines and check the corosync/pacemaker cluster with crm_mon or crm status, it seems to me that the single services are running
[09:57] <schegi> but it looks like they are not connected. i am no corosync/pacemaker pro, but i know from a manual deployment that all nodes in the cluster should appear in the output of crm status, and they don't.
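A minimal sketch of the kind of check schegi describes; the node names below are hypothetical, and on a healthy corosync/pacemaker cluster every member should show up in the Online list:

    sudo crm status
    # expected on a joined cluster (hypothetical node names):
    #   Online: [ juju-machine-1 juju-machine-2 juju-machine-3 ]
    # a node listed as OFFLINE, or missing entirely, matches the symptom described above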
[09:59] <jamespage> schegi, rev 84 in ceph contains the proposed fix
[10:00] <jamespage> schegi, you have multicast enabled on your network right?
[10:00] <jamespage> (re the hacluster issue)
[10:01] <jamespage> schegi, cinder and cinder-ceph should be OK - do you get a specific error message?
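A minimal sketch of how the rev 84 fix could be picked up in a local checkout and applied to a running service; the checkout path reuses the local repository schegi mentions below, and the exact parent branch location is not shown in the log:

    cd /usr/share/custom-charms/trusty/ceph   # assumed location of the checked-out ceph charm
    bzr pull                                  # fetch the latest revisions from the parent branch
    bzr revno                                 # confirm the checkout is at rev 84 or later
    juju upgrade-charm --repository /usr/share/custom-charms ceph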
[10:02] <schegi> currently rebootstrapping, but if i do something like juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder i get a charm not found message
[10:02] <schegi> doing the same with ceph works fine
[10:09] <schegi> ok, the first problem was my fault, i just use bzr too rarely. tried update but had to pull. :) (still using too much svn)
[10:10] <schegi> jamespage, here is the error message i get: ERROR charm not found in "/usr/share/custom-charms/": local:trusty/cinder
[10:11] <schegi> trying to do juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:trusty/cinder or juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder
[10:11] <jamespage> schegi, dumb question but it is under a trusty subfolder right?
[10:12] <schegi> all the charms are there and for all the others it works perfectly
[10:12] <schegi> could run the same line just replacing cinder with ceph and it deploys ceph.
[10:12] <schegi> away for 20 mins brb
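A minimal sketch of the layout juju 1.x expects under --repository for local:trusty/cinder to resolve; the file list is illustrative, and a missing or unparsable metadata.yaml in the cinder directory is a common cause of the "charm not found" error above:

    /usr/share/custom-charms/
        trusty/
            ceph/
                metadata.yaml    # a directory is only recognised as a charm if this exists and parses
                config.yaml
                hooks/
            cinder/
                metadata.yaml    # check this file in the checkout that fails to deploy
                config.yaml
                hooks/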
[10:16] <g0d_51gm4> hello guys, my situation is the following: i have a vMaaS environment with 3 vm nodes and a juju environment. i've deployed juju-gui on vm node 1 and openstack on the other 2 nodes. now i'd like to add 3 more vm nodes and use them to deploy a hadoop master plus a slave and cassandra, but i have a doubt: is it necessary to create another juju environment dedicated to that? thanks..
[10:22] <g0d_51gm4> the error is the following http://paste.ubuntu.com/7813669/ while if i specify the node i get this error http://paste.ubuntu.com/7813674/
[10:33] <g0d_51gm4> is there anyone who can help me resolve this? thanks
[10:43] <schegi> jamespage, back
[10:44] <jamespage> schegi, great
[10:51] <schegi> jamespage, ok checked it twice. path is correct but still not able to deploy cinder from local repo
[11:04] <jamespage> schegi, testing myself now
[12:39] <g0d_51gm4> can anyone help me?
=== gsamfira1 is now known as gsamfira
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[13:37] <g0d_51gm4> please, is there someone who can help me resolve it? please
[13:38] <Sh3rl0ck> Hello.. Has anyone observed ceph.conf being configured wrongly after deploying the juju ceph charm?
[13:40] <Sh3rl0ck> The keyring values in the ceph.conf file seem to be incorrectly populated, as they contain "$cluster.$name.keyring" rather than actual values
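A minimal sketch of a ceph.conf keyring entry for context; $cluster and $name are metavariables that ceph expands at runtime (e.g. to ceph.client.admin.keyring), so seeing them literally in the file is not by itself a misconfiguration:

    [global]
    keyring = /etc/ceph/$cluster.$name.keyring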
[14:00] <ctlaugh> jamespage - I'm having an issue with the cinder charm on a single-drive system. Would you be able to help?
=== sebas538_ is now known as sebas5384
[15:28] <jcastro> Hey everyone, so for a demo I want to launch a bundle into multiple environments
[15:28] <jcastro> I know I can do
[15:28] <jcastro> juju deploy blah
[15:28] <jcastro> juju switch
[15:28] <jcastro> juju deploy blah again
[15:28] <jcastro> Is there a way I can do `juju deploy this bundle into these environments` all at once? Not serially.
[15:31] <rick_h__> jcastro: I've not tried it but quickstart takes a -e flag
[15:32] <rick_h__> jcastro: so you could in theory run a bunch of quickstart commands backgrounded, each with a diff -e?
[15:32] <jcastro> yeah, but then I am concerned that they will step on each other
=== Ursinha is now known as Ursinha-afk
[15:32] <rick_h__> jcastro: yea, that's the untested part
[15:33] <jcastro> hmm, actually, as long as I don't switch while they are launching ....
[15:33] <jcastro> that _should_ work
[15:33] <rick_h__> but I think it would work since once the script runs it's got all the config loaded.
[15:33] <jcastro> ok I will try that
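A minimal sketch of the backgrounded approach discussed above, assuming two environments named env-a and env-b in environments.yaml and a local bundle file (all names hypothetical):

    juju-quickstart -e env-a bundle.yaml &
    juju-quickstart -e env-b bundle.yaml &
    wait    # wait for both backgrounded quickstart runs to finish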
[15:44] <Sh3rl0ck> Hello.. has anyone used the ceph charm as the ceph backend for cinder? Do we need to add a relation between nova-compute and ceph?
=== Ursinha-afk is now known as Ursinha
[16:02] <jcastro> rick_h__, so that totally worked!
[16:04] <rick_h__> jcastro: woot!
=== vladk is now known as vladk|offline
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[16:50] <Sh3rl0ck> Hello.. has anyone used the ceph charm as the ceph backend for cinder? Do we need to add a relation between nova-compute and ceph?
[16:50] <jcastro> hey sinzui
[16:50] <jcastro> I am having some issues with juju on joyent
[16:51] <jcastro> it bootstraps, and deploys the gui
[16:51] <sinzui> yeah, you see that too
[16:51] <jcastro> but subsequent deploys are waiting on the machine
[16:51] <sinzui> jcastro, Exactly what CI sees
[16:51] <jcastro> but oddly enough, juju just returns "pending" and continues on with life
[16:51] <jcastro> oh whew! so not crazy!
[16:52] <sinzui> jcastro, I manually deleted all the machines in our joyent account this morning to fix a test that was failing
[16:52] <sarnold> Sh3rl0ck: note also http://manage.jujucharms.com/charms/precise/cinder-ceph
[16:52] <jcastro> sinzui, I have one more issue with hp cloud
[16:52] <jcastro> do you have a network config to put into their horizon console to make juju work?
[16:53] <sinzui> jcastro, This is an old problem that happens, but Juju might be partially to blame. I often see stopped machines instead of deleted machines. My api calls also fail to delete, so maybe joyent is at fault
[16:54] <sinzui> jcastro, creating one network is enough to make juju work in new HP. New accounts get a default network, but migrated accounts may need to add one; just accept the recommended settings
[16:54] <jcastro> yeah I tried that but no joy, I'll give it another shot
[16:54] <jcastro> good to know I wasn't going crazy wrt joyent though
[16:55] <sinzui> jcastro, I wrote this a few weeks ago. HP has been lovely since then. http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/
[16:55] <Sh3rl0ck> sarnold: Thanks for the link. I actually tried out the cinder-ceph charm as well, but it seems like it's required only if you want multi-backend support for Cinder
[16:55] * sinzui reads hp network config
[16:55] <jcastro> thanks, I'll try that
[16:55] <jcastro> <-- lunch, bbl
[16:55] <Sh3rl0ck> But not sure whether the ceph charm needs to be related to nova-compute
[16:56] <Sh3rl0ck> Also, does anyone know if there is a way to add a netapp backend to Cinder? Is there a separate charm for this?
[16:56] <sinzui> jcastro, CI has "default  default  10.0.0.0/24  No  ACTIVE  UP"
=== Ursinha is now known as Ursinha-afk
[16:57] <sinzui> jcastro, charm-testing is similar, the only difference is "default" is a string that tells me Antonio named it
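A minimal sketch of creating an equivalent network from the neutron CLI instead of the Horizon console, mirroring the "default 10.0.0.0/24" entry quoted above; the exact client flags may differ between HP Cloud regions:

    neutron net-create default
    neutron subnet-create --name default default 10.0.0.0/24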
[17:13] <sebas5384> if an agent of a machine is down
[17:13] <sebas5384> what can I do?
[17:14] <sebas5384> agent-state: down
[17:14] <sebas5384> agent-state-info: (started)
=== Ursinha-afk is now known as Ursinha
[17:28] <sinzui> sebas5384, There are a few things to do after you ssh into the machine with the down agent
[17:28] <sinzui> sebas5384, get a listing of the procs; this will show both the machine and the unit agents: ps ax | grep jujud
=== roadmr is now known as roadmr_afk
[17:29] <sinzui> sebas5384, sudo kill -HUP <pid> will restart a stalled agent
[17:30] <sebas5384> sinzui: there's only the 0 machine running
[17:31] <sinzui> okay, that leads to starting the unit. the jujud procs are under upstart, but uniquely named. We need to learn the name: ls /etc/init/jujud-*
[17:32] <sinzui> sebas5384, you can run something like this: sudo start jujud-unit-arm64-slave-0
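A minimal sketch that strings sinzui's steps together, run against the machine whose agent is down; the machine number and the jujud-unit-arm64-slave-0 job name are just examples, not taken from this deployment:

    juju ssh 1                            # juju ssh still works when the agent is reported down
    ps ax | grep jujud                    # list the machine and unit agents that are running
    sudo kill -HUP <pid>                  # restart a stalled agent
    ls /etc/init/jujud-*                  # find the upstart job names for the agents
    sudo start jujud-unit-arm64-slave-0   # start a unit agent that never came up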
[17:33] <sebas5384> hmmm sinzui yeah! I did a restart of the agent
[17:33] <sebas5384> because the jujud init isn't there
[17:34] <sinzui> oh
[17:34] <sinzui> sebas5384, that implies the service didn't complete installation
[17:35] <sebas5384> ls /etc/init | grep juju
[17:35] <sebas5384> juju-agent-devop-local.conf
[17:35] <sebas5384> juju-db-devop-local.conf
[17:35] <sinzui> but the status you reported says it was there
[17:36] <sebas5384> hmmm so it should be there
[17:36] <sebas5384> this one is really tricky
[17:36] <sinzui> sebas5384, right, those are the machine and db procs that comprise the state server, commonly on machine 0. Did you deploy a service?
[17:37] <sebas5384> sinzui: i'm using the cloud-installer
[17:38] <sebas5384> it tries to deploy a bunch of openstack services into one nested container inside a kvm
[17:38] <sinzui> sebas5384, is this the first time you've used it on that machine? lxc needs to build a template first, and that is slow. After the first time, it gets fast
[17:38] <sinzui> ah kvm, not template
[17:38] <sebas5384> yeah
[17:38] <sebas5384> hehe
[17:40] <sinzui> sebas5384, I don't have much experience with the cloud installer. the agent you want to start/restart will be in the container, kvm or lxc. your local host only gets the state-server in this case
[17:41] <sebas5384> sinzui: right
[17:41] <sinzui> sebas5384, in my example, I sshed to the machine the agent was down in, then restarted the proc (2 weeks ago actually, that example was from my history)
[17:42] <sebas5384> but because i can't ssh into the machine
[17:42] <sebas5384> i'm a bit in limbo
[17:42] <sebas5384> hehe
[17:42] <sinzui> sebas5384, "juju ssh" will work regardless of the state of the agent, and it is a wrapper for real ssh
[17:43] <sebas5384> ok, but it's like the machine isn't started
[17:43] <sebas5384> but when i do juju status, it says that it is
[17:44] <sebas5384> another thing: i went to the virsh console to look at the kvm machines
[17:44] <sinzui> sebas5384, well that is a different matter. kvm provisioning is slow or something else has gone amiss.
[17:44] <sebas5384> yeah, I think something else happened here
[17:46] <sinzui> stokachu, do you have any cloud-installer experience that might help sebas5384?
[17:47] <stokachu> sinzui, yea im working with him to figure out whats happening
[17:49] <sebas5384> thanks sinzui and stokachu !! :)
[17:49] <sebas5384> so i restarted the machine via the virsh console
[17:49] <sebas5384> and it seems to have worked
[17:49] <sebas5384> now i'm into the machine
[17:50] <sebas5384> holy moly!!
[17:50] <sebas5384> it's all installing now
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== roadmr_afk is now known as roadmr
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== vladk is now known as vladk|offline
[20:29] <shiv> trying to deploy openstack HA on physical servers... has anyone seen issues while deploying on physical servers? the mysql hook is failing with cinder... this is the error: shared-db-relation-changed 2014-07-18 20:22:28.637 28128 CRITICAL cinder [-] OperationalError: (OperationalError) (1130, "Host '4-4-1-95.cisco.com' is not allowed to connect to this MySQL server") None None
[20:30] <shiv> here is the stack trace
[20:30] <shiv> http://pastebin.com/DRKfi4d1
[20:30] <shiv> 4.4.1.95 is the VIP for cinder
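A minimal sketch of how MySQL error 1130 is usually narrowed down on the mysql/percona unit; the grant listing below is illustrative, since the charm creates users from the shared-db relation rather than by hand:

    mysql -u root -p -e "SELECT user, host FROM mysql.user;"
    # error 1130 means no user@host grant matches the connecting client;
    # here the client presents itself as 4-4-1-95.cisco.com (the cinder VIP),
    # so the cinder grant has to cover that hostname or address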
=== CyberJacob is now known as CyberJacob|Away
