=== thomi_ is now known as thomi
=== Ursinha-afk is now known as Ursinha
* hazmat looks at wwitzel3's state services branch | 02:21 |
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== ashipika1 is now known as ashipika
=== wallyworld__ is now known as wallyworld
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
schegi | jamespage? | 09:19 |
jamespage | schegi, hello! | 09:48 |
schegi | hi, got two questions. you mentioned that you fixed something related to the missing zap with ceph. i updated your network-split charms but bzr says they are still up to date. have you committed? | 09:51 |
schegi | second: i have problems deploying the checked-out network-split versions of cinder and cinder-ceph. juju does not recognize them as charms when i try to deploy them from the local repo. is there something missing in the charm? | 09:52 |
schegi | and i have some strange behaviour related to the hacluster charms. when i deploy it as described in https://wiki.ubuntu.com/ServerTeam/OpenStackHA, or switch to the percona-cluster charm (doesn't matter), the deployment works fine: every service is up and running, no hook fails. BUT if i log in to one of the machines and check the corosync/pacemaker cluster with crm_mon or crm status, it looks to me like the single services are running | 09:56 |
schegi | but it looks like they are not connected. i am no corosync/pacemaker pro, but i know from a manual deployment that all nodes in the cluster should appear in the output of crm status, and they didn't. | 09:57 |
jamespage | schegi, rev 84 in ceph contains the proposed fix | 09:59 |
jamespage | schegi, you have multicast enabled on your network right? | 10:00 |
jamespage | (re the hacluster issue) | 10:00 |
jamespage | schegi, cinder and cinder-ceph should be OK - do you get a specific error message? | 10:01 |
schegi | currently rebootstrapping but if i do something like juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder i get some charm not found message | 10:02 |
schegi | doing the same with ceph works fine | 10:02 |
schegi | ok, first problem was my fault, i just use bzr too rarely. tried update but had to pull. :) (still using too much svn) | 10:09 |
schegi | jamespage, here is the error message i get ERROR charm not found in "/usr/share/custom-charms/": local:trusty/cinder | 10:10 |
schegi | trying to do juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:trusty/cinder or juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder | 10:11 |
jamespage | schegi, dumb question but it is under a trusty subfolder right? | 10:11 |
schegi | all the charms are there and for all the others it works perfectly | 10:12 |
schegi | could do the same line just replacing cinder with ceph and it deploys ceph. | 10:12 |
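A minimal sketch of the local-repository layout juju expects here, which jamespage's trusty-subfolder question is getting at (paths are illustrative; a directory is only treated as a charm if it holds a readable metadata.yaml):

```
/usr/share/custom-charms/
└── trusty/
    ├── ceph/
    │   └── metadata.yaml    # recognized, so "local:trusty/ceph" deploys
    └── cinder/
        └── metadata.yaml    # if missing or unreadable, juju reports
                             # "charm not found" even with a correct path

# deploy against that repository root:
juju deploy --repository /usr/share/custom-charms/ local:trusty/cinder
```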
schegi | away for 20 mins brb | 10:12 |
g0d_51gm4 | hello guys, my situation is the following: i have a vMaaS environment with 3 vm nodes and a juju environment. I've deployed juju-gui on vm node 1 and openstack on the other 2 nodes. now I'd like to add 3 more vm nodes and use them to deploy a hadoop master plus a slave, and cassandra. but i have a doubt: is it necessary to create another juju environment dedicated to that? thanks. | 10:16 |
g0d_51gm4 | the error is the following: http://paste.ubuntu.com/7813669/ ; if i specify the node instead, i obtain this error: http://paste.ubuntu.com/7813674/ | 10:22 |
g0d_51gm4 | is there anyone who can support me in resolving that? thanks | 10:33 |
schegi | jamespage, back | 10:43 |
jamespage | schegi, great | 10:44 |
schegi | jamespage, ok checked it twice. path is correct but still not able to deploy cinder from local repo | 10:51 |
jamespage | schegi, testing myself now | 11:04 |
g0d_51gm4 | anyone can help me? | 12:39 |
=== gsamfira1 is now known as gsamfira
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
g0d_51gm4 | please, is there someone who can help me resolve it? please | 13:37 |
Sh3rl0ck | Hello.. Has anyone observed ceph.conf being configured wrongly after deploying the juju ceph charm? | 13:38 |
Sh3rl0ck | The keyring values in the ceph.conf file seem to be incorrectly populated, as the file contains "$cluster.$name.keyring" rather than actual values | 13:40 |
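For what it's worth, $cluster and $name are metavariables that ceph itself expands at runtime, so a keyring line like the one below is usually valid as written rather than misconfigured (a generic example, not necessarily what the charm renders):

```
# ceph expands $cluster (e.g. "ceph") and $name (e.g. "client.admin") when it
# reads the config, so this resolves to /etc/ceph/ceph.client.admin.keyring
grep keyring /etc/ceph/ceph.conf
# keyring = /etc/ceph/$cluster.$name.keyring
```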
ctlaugh | jamespage - I'm having an issue with the cinder charm on a single-drive system. Would you be able to help? | 14:00 |
=== sebas538_ is now known as sebas5384
jcastro | Hey everyone, so for a demo I want to launch a bundle into multiple environments | 15:28 |
jcastro | I know I can do | 15:28 |
jcastro | juju deploy blah | 15:28 |
jcastro | juju switch | 15:28 |
jcastro | juju deploy blah again | 15:28 |
jcastro | Is there a way I can do `juju deploy this bundle into these environments` all at once? Not serially. | 15:28 |
rick_h__ | jcastro: I've not tried it but quickstart takes a -e flag | 15:31 |
rick_h__ | jcastro: so you could in theory run a bunch of quickstart commands backgrounded each with a diff -e? | 15:32 |
jcastro | yeah, but then I am concerned that they will step on each other | 15:32 |
=== Ursinha is now known as Ursinha-afk
rick_h__ | jcastro: yea, that's the untested part | 15:32 |
jcastro | hmm, actually, as long as I don't switch while they are launching .... | 15:33 |
jcastro | that _should_ work | 15:33 |
rick_h__ | but I think it would work since once the script runs it's got all the config loaded. | 15:33 |
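A minimal sketch of the backgrounded approach discussed above (environment names and the bundle path are illustrative, not from this log):

```
#!/bin/sh
# launch the same bundle into several environments in parallel; each
# juju-quickstart reads its -e target at startup, so a later `juju switch`
# does not affect runs already in flight
for env in aws-east hp-cloud joyent; do
    juju-quickstart -e "$env" ~/bundles/demo.yaml &
done
wait    # return once every background run has kicked off its deployment
```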
jcastro | ok I will try that | 15:33 |
Sh3rl0ck | Hello..has anyone used the ceph charm for ceph backend to cinder? Do we need to add a relation between Nova compute and Ceph? | 15:44 |
=== Ursinha-afk is now known as Ursinha
jcastro | rick_h__, so that totally worked! | 16:02 |
rick_h__ | jcastro: woot! | 16:04 |
=== vladk is now known as vladk|offline
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
Sh3rl0ck | Hello..has anyone used the ceph charm for ceph backend to cinder? Do we need to add a relation between Nova compute and Ceph? | 16:50 |
jcastro | hey sinzui | 16:50 |
jcastro | I am having some issues with juju on joyent | 16:50 |
jcastro | it bootstraps, and deploys the gui | 16:51 |
sinzui | yeah you see that too | 16:51 |
jcastro | but subsequent deploys are waiting on the machine | 16:51 |
sinzui | jcastro, Exactly what CI sees | 16:51 |
jcastro | but oddly enough, juju just returns "pending" and continues on with life | 16:51 |
jcastro | oh whew! so not crazy! | 16:51 |
sinzui | jcastro, I manually deleted all the machines in our joyent account this morning to fix a test that was failing | 16:52 |
sarnold | Sh3rl0ck: note also http://manage.jujucharms.com/charms/precise/cinder-ceph | 16:52 |
jcastro | sinzui, I have one more issue with hp cloud | 16:52 |
jcastro | do you have a network config to put into their horizon console to make juju work? | 16:52 |
sinzui | jcastro, This is an old problem that happens, but Juju might be partially to blame. I often see stopped machines instead of deleted machines. My api calls also fail to delete, so maybe joyent is at fault | 16:53 |
sinzui | jcastro, creating one network is enough to make juju work in new HP. New accounts get a default network, but migrated accounts may need to add one; just accept the recommended settings | 16:54 |
jcastro | yeah I tried that but no joy, I'll give it another shot | 16:54 |
jcastro | good to know I wasn't going crazy wrt joyent though | 16:54 |
sinzui | jcastro, I wrote this a few weeks ago. HP has been lovely since then. http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/ | 16:55 |
Sh3rl0ck | sarnold: Thanks for the link. I actually tried out the cinder-ceph charm as well, but it seems like it's required only if you want multi-backend support for Cinder | 16:55 |
* sinzui reads hp network config | 16:55 |
jcastro | thanks, I'll try that | 16:55 |
jcastro | <-- lunch, bbl | 16:55 |
Sh3rl0ck | But i'm not sure whether the ceph charm needs to be connected to nova-compute | 16:55 |
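For reference, a hedged sketch of the relations commonly added for a ceph-backed cinder in deployments of this era (service names assume the default charm names; verify against your own topology):

```
juju add-relation cinder ceph          # simplest case: cinder talks to the ceph cluster directly
juju add-relation nova-compute ceph    # gives compute hosts the secrets to attach rbd volumes
# or, for multi-backend cinder, go through the cinder-ceph subordinate instead:
juju add-relation cinder cinder-ceph
juju add-relation cinder-ceph ceph
```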
Sh3rl0ck | Also, does anyone know if there is a way to add netapp backend to Cinder? Is there a separate charm for this? | 16:56 |
sinzui | jcastro, CI has "default default 10.0.0.0/24 No ACTIVE UP" | 16:56 |
=== Ursinha is now known as Ursinha-afk
sinzui | jcastro, charm-testing is similar, the only difference is "default" is a string that tells me Antonio named it | 16:57 |
sebas5384 | if an agent of a machine is down | 17:13 |
sebas5384 | what can I do? | 17:13 |
sebas5384 | agent-state: down | 17:14 |
sebas5384 | agent-state-info: (started) | 17:14 |
=== Ursinha-afk is now known as Ursinha
sinzui | sebas5384, There are a few things to do after you ssh into the machine with the down agent | 17:28 |
sinzui | sebas5384, get a listing of the procs; this will show both the machine and the unit agents: ps ax | grep jujud | 17:28 |
=== roadmr is now known as roadmr_afk
sinzui | sebas5384, sudo kill -HUP <pid> will restart a stalled agent | 17:29 |
sebas5384 | sinzui: there's only the 0 machine running | 17:30 |
sinzui | okay, that leads us to starting the unit. jujud jobs are under upstart, but uniquely named. We need to learn the name: ls /etc/init/jujud-* | 17:31 |
sinzui | sebas5384, you can run something like this: sudo start jujud-unit-arm64-slave-0 | 17:32 |
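The recovery steps from this exchange, collected into one sketch (the machine number and the unit name jujud-unit-arm64-slave-0 are sinzui's examples; substitute your own):

```
juju ssh 1                            # juju ssh works even when the agent is down
ps ax | grep jujud                    # list the machine agent and any unit agents
sudo kill -HUP <pid>                  # restart a stalled agent (upstart respawns it)
ls /etc/init/jujud-*                  # learn the upstart job names on this machine
sudo start jujud-unit-arm64-slave-0   # start a unit agent that is not running at all
```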
sebas5384 | hmmm sinzui yeah! I did a restart of the agent | 17:33 |
sebas5384 | because the jujud init isn't there | 17:33 |
sinzui | oh | 17:34 |
sinzui | sebas5384, that implies the service didn't complete installation | 17:34 |
sebas5384 | ls /etc/init | grep juju | 17:35 |
sebas5384 | juju-agent-devop-local.conf | 17:35 |
sebas5384 | juju-db-devop-local.conf | 17:35 |
sinzui | but the status you reported says it was there | 17:35 |
sebas5384 | hmmm so it should be there | 17:36 |
sebas5384 | this one is really tricky | 17:36 |
sinzui | sebas5384, right, those are the machine and db procs that comprise the state server, commonly on machine 0. Did you deploy a service? | 17:36 |
sebas5384 | sinzui: i'm using the cloud-installer | 17:37 |
sebas5384 | that tries to deploy a bunch of openstack services into a nested container inside a kvm | 17:38 |
sinzui | sebas5384, is this the first time you used it on that machine? lxc needs to build a template first, and that is slow. After the first time, it gets fast | 17:38 |
sinzui | ah kvm, not template | 17:38 |
sebas5384 | yeah | 17:38 |
sebas5384 | hehe | 17:38 |
sinzui | sebas5384, I don't have much experience with cloud installer. the agent you want to start/restart will be in the container, kvm or lxc. your local host only gets the state-server in this case | 17:40 |
sebas5384 | sinzui: right | 17:41 |
sinzui | sebas5384, in my example, I sshed to the machine the agent was down on, then restarted the proc (2 weeks ago actually; that example was from my history) | 17:41 |
sebas5384 | but i can't ssh into the machine | 17:42 |
sebas5384 | so i'm a bit in limbo | 17:42 |
sebas5384 | hehe | 17:42 |
sinzui | sebas5384, "juju ssh" will work regardless of the state of the agent, and that is a wrapper for real ssh | 17:42 |
sebas5384 | ok, but it's like the machine isn't started | 17:43 |
sebas5384 | but when i do juju status, it's saying that it is | 17:43 |
sebas5384 | another thing: i went to the virsh console to see the kvm machines | 17:44 |
sinzui | sebas5384, well that is a different matter. kvm provisioning is slow or something else has gone amiss. | 17:44 |
sebas5384 | yeah, I think something else happened here | 17:44 |
sinzui | stokachu, do you have any cloud-installer experience that might help sebas5384? | 17:46 |
stokachu | sinzui, yea i'm working with him to figure out what's happening | 17:47 |
sebas5384 | thanks sinzui and stokachu !! :) | 17:49 |
sebas5384 | so i restarted the machine via the virsh console | 17:49 |
sebas5384 | and it seems to have worked | 17:49 |
sebas5384 | now i'm in the machine | 17:49 |
sebas5384 | holy moly!! | 17:50 |
sebas5384 | it's all installing now | 17:50 |
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== roadmr_afk is now known as roadmr
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== vladk is now known as vladk|offline
shiv | trying to deploy openstack HA on physical servers... has anyone seen issues while deploying on physical servers? the MySQL hook is failing with cinder; this is the error: shared-db-relation-changed 2014-07-18 20:22:28.637 28128 CRITICAL cinder [-] OperationalError: (OperationalError) (1130, "Host '4-4-1-95.cisco.com' is not allowed to connect to this MySQL server") None None | 20:29 |
shiv | here is the stack trace | 20:30 |
shiv | http://pastebin.com/DRKfi4d1 | 20:30 |
shiv | 4.4.1.95 is the VIP for cinder | 20:30 |
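A hedged first diagnostic for this class of error (grants on these deployments are normally managed by the mysql charm's shared-db hooks, so inspect before changing anything by hand):

```
# on the mysql unit: list which hosts each user may connect from; the VIP's
# reverse-resolved hostname must appear, or be covered by a wildcard entry
mysql -u root -p -e "SELECT User, Host FROM mysql.user;"
```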
=== CyberJacob is now known as CyberJacob|Away