[02:21] * hazmat looks at wwitzel3's state services branch
[09:19] jamespage?
[09:48] schegi, hello!
[09:51] hi, got two questions. you mentioned that you fixed something related to the missing zap with ceph. i updated your network-split charms but they still show as already up to date. have you committed?
[09:52] second: i have problems deploying the checked-out network-split versions of cinder and cinder-ceph. juju does not recognize them as charms when i try to deploy them from a local repo. is there something missing in the charms?
[09:56] and i have some strange behaviour related to the hacluster charms. when i deploy as described in https://wiki.ubuntu.com/ServerTeam/OpenStackHA, or switch to the percona-cluster charm (doesn't matter), the deployment works fine: every service is up and running, no hook fails. BUT if i log in to one of the machines and check the corosync/pacemaker cluster with crm_mon or crm status, it looks to me like the individual services are running
[09:57] but it looks like they are not connected. i am no corosync/pacemaker pro, but i know from a manual deployment that all nodes in the cluster should appear in the output of crm status, and they didn't.
[09:59] schegi, rev 84 in ceph contains the proposed fix
[10:00] schegi, you have multicast enabled on your network right?
[10:00] (re the hacluster issue)
[10:01] schegi, cinder and cinder-ceph should be OK - do you get a specific error message?
[10:02] currently rebootstrapping, but if i do something like juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder i get a charm not found message
[10:02] doing the same with ceph works fine
[10:09] ok, the first problem was my fault, i just use bzr too rarely. tried update but had to pull. :) (still using too much svn)
[10:10] jamespage, here is the error message i get: ERROR charm not found in "/usr/share/custom-charms/": local:trusty/cinder
[10:11] trying juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:trusty/cinder or juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder
[10:11] schegi, dumb question, but it is under a trusty subfolder right?
[10:12] all the charms are there, and for all the others it works perfectly
[10:12] i could run the same line just replacing cinder with ceph and it deploys ceph.
[10:12] away for 20 mins brb
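For reference, the local-repository layout jamespage's "trusty subfolder" question is getting at looks roughly like this; the paths and charm names are the ones used in the conversation, and the exact tree is an assumption about schegi's setup:

    /usr/share/custom-charms/
        trusty/
            ceph/          # one directory per charm, each with its metadata.yaml, hooks/, etc.
            cinder/
            cinder-ceph/

    # with that layout in place, either form should resolve (juju 1.x local-repo syntax):
    juju deploy --repository /usr/share/custom-charms/ local:trusty/cinder
    juju deploy --repository /usr/share/custom-charms/ local:cinder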
[10:16] hello guys, my situation is the following: i have a vMaaS environment with 3 VM nodes and a juju environment. I've deployed juju-gui on VM node 1 and openstack on the other 2 nodes. now I'd like to add 3 more VM nodes and use them to deploy a hadoop master plus a slave, and cassandra, but i have a doubt: is it necessary to create another juju environment dedicated to that? thanks..
[10:22] the error is the following http://paste.ubuntu.com/7813669/ while if i specify the node i get this error http://paste.ubuntu.com/7813674/
[10:33] is there anyone who can support me in resolving that? thanks
[10:43] jamespage, back
[10:44] schegi, great
[10:51] jamespage, ok, checked it twice. the path is correct but i'm still not able to deploy cinder from the local repo
[11:04] schegi, testing myself now
[12:39] can anyone help me?
[13:37] please, is there someone who can help me resolve it? please
[13:38] Hello.. Has anyone observed ceph.conf being configured wrongly after deploying the juju ceph charm?
[13:40] The keyring values in the ceph.conf file seem to be incorrectly populated, as it contains "$cluster.$name.keyring" rather than actual values
[14:00] jamespage - I'm having an issue with the cinder charm on a single-drive system. Would you be able to help?
[15:28] Hey everyone, so for a demo I want to launch a bundle into multiple environments
[15:28] I know I can do
[15:28] juju deploy blah
[15:28] juju switch
[15:28] juju deploy blah again
[15:28] Is there a way I can do `juju deploy this bundle into these environments` all at once? Not serially.
[15:31] jcastro: I've not tried it but quickstart takes a -e flag
[15:32] jcastro: so you could in theory run a bunch of quickstart commands backgrounded, each with a different -e?
[15:32] yeah, but then I am concerned that they will step on each other
[15:32] jcastro: yea, that's the untested part
[15:33] hmm, actually, as long as I don't switch while they are launching ....
[15:33] that _should_ work
[15:33] but I think it would work since once the script runs it's got all the config loaded.
[15:33] ok I will try that
[15:44] Hello.. has anyone used the ceph charm for a ceph backend to cinder? Do we need to add a relation between nova-compute and ceph?
[16:02] rick_h__, so that totally worked!
[16:04] jcastro: woot!
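A rough sketch of the backgrounded-quickstart approach rick_h__ suggested and jcastro reported working; the environment names and bundle path are made up for illustration, and the caveat about not switching environments mid-launch comes from the discussion above:

    # launch the same bundle into several environments in parallel instead of serially
    for env in demo-aws demo-hp demo-joyent; do
        juju-quickstart -e "$env" ./my-demo-bundle.yaml &
    done
    wait   # block until every backgrounded quickstart has finished
    # note: avoid running `juju switch` while these are still launching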
[16:50] Hello.. has anyone used the ceph charm for a ceph backend to cinder? Do we need to add a relation between nova-compute and ceph?
[16:50] hey sinzui
[16:50] I am having some issues with juju on joyent
[16:51] it bootstraps, and deploys the gui
[16:51] yeah you see that too
[16:51] but subsequent deploys are waiting on the machine
[16:51] jcastro, Exactly what CI sees
[16:51] but oddly enough, juju just returns "pending" and continues on with life
[16:51] oh whew! so not crazy!
[16:52] jcastro, I manually deleted all the machines in our joyent account this morning to fix a test that was failing
[16:52] Sh3rl0ck: note also http://manage.jujucharms.com/charms/precise/cinder-ceph
[16:52] sinzui, I have one more issue with hp cloud
[16:52] do you have a network config to put into their horizon console to make juju work?
[16:53] jcastro, This is an old problem that happens, but Juju might be partially to blame. I often see stopped machines instead of deleted machines. My api calls also fail to delete, so maybe joyent is at fault
[16:54] jcastro, creating one network is enough to make juju work in new HP. New accounts get a default network, but migrated accounts may need to add one; just accept the recommended settings
[16:54] yeah I tried that but no joy, I'll give it another shot
[16:54] good to know I wasn't going crazy wrt joyent though
[16:55] jcastro, I wrote this a few weeks ago. HP has been lovely since then. http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/
[16:55] sarnold: Thanks for the link. I actually tried out the cinder-ceph charm as well, but it seems like it's required only if you want multi-backend support for cinder
[16:55] * sinzui reads hp network config
[16:55] thanks, I'll try that
[16:55] <-- lunch, bbl
[16:55] But not sure the ceph charm needs to be connected to nova-compute
[16:56] Also, does anyone know if there is a way to add a netapp backend to cinder? Is there a separate charm for this?
[16:56] jcastro, CI has "default default 10.0.0.0/24 No ACTIVE UP"
[16:57] jcastro, charm-testing is similar; the only difference is "default" is a string that tells me Antonio named it
[17:13] if an agent of a machine is down
[17:13] what can I do?
[17:14] agent-state: down
[17:14] agent-state-info: (started)
[17:28] sebas5384, There are a few things to do after you ssh into the machine with the down agent
[17:28] sebas5384, get a listing of the procs; this will show both the machine and the unit agents: ps ax | grep jujud
[17:29] sebas5384, sudo kill -HUP will restart a stalled agent
[17:30] sinzui: there's only the 0 machine running
[17:31] okay, that leads to starting the unit. jujud runs under upstart, but uniquely named. We need to learn the name: ls /etc/init/jujud-*
[17:32] sebas5384, you can run something like this: sudo start jujud-unit-arm64-slave-0
[17:33] hmmm sinzui yeah! I did a restart of the agent
[17:33] because the jujud init isn't there
[17:34] oh
[17:34] sebas5384, that implies the service didn't complete installation
[17:35] ls /etc/init | grep juju
[17:35] juju-agent-devop-local.conf
[17:35] juju-db-devop-local.conf
[17:35] but the status you reported says it was there
[17:36] hmmm so it should be there
[17:36] this one is really tricky
[17:36] sebas5384, right, those are the machine and db procs that comprise the state server, commonly on machine 0. Did you deploy a service?
[17:37] sinzui: i'm using the cloud-installer
[17:38] that tries to deploy a bunch of openstack services into one nested container inside a kvm
[17:38] sebas5384, is this the first time you used it on that machine? lxc needs to build a template first, and that is slow. After the first time, it gets fast
[17:38] ah kvm, not template
[17:38] yeah
[17:38] hehe
[17:40] sebas5384, I don't have much experience with cloud-installer. the agent you want to start/restart will be in the container, kvm or lxc. your local host only gets the state-server in this case
[17:41] sinzui: right
[17:41] sebas5384, in my example, I sshed to the machine the agent was down in, then restarted the proc (2 weeks ago actually, that example was from my history)
[17:42] but because i can't ssh into the machine
[17:42] i'm a bit in limbo
[17:42] hehe
[17:42] sebas5384, "juju ssh" will work regardless of the state of the agent, and that is a wrapper for real ssh
[17:43] ok, but it's like the machine isn't started
[17:43] but when i do juju status, it's saying that it is
[17:44] another thing: i went to the virsh console to look at the kvm machines
[17:44] sebas5384, well that is a different matter. kvm provisioning is slow or something else has gone amiss.
[17:44] yeah, I think something else happened here
[17:46] stokachu, do you have any cloud-installer experience that might help sebas5384?
[17:47] sinzui, yea im working with him to figure out what's happening
[17:49] thanks sinzui and stokachu !! :)
[17:49] so i restarted the machine via the virsh console
[17:49] and it seems to have worked
[17:50] now im into the machine
[17:50] holly molly!!
[17:50] its all installing now
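Pulling sinzui's recovery steps together in one place for a machine whose agent shows "down"; the unit name in the last command is sinzui's own example from his history, not something from sebas5384's environment:

    # "juju ssh <machine>" still works even when the agent is down; it wraps real ssh
    ps ax | grep jujud                     # list the machine and unit agent processes
    ls /etc/init/jujud-*                   # learn the upstart job names of the agents
    sudo kill -HUP <jujud-pid>             # restart a stalled agent
    sudo start jujud-unit-arm64-slave-0    # or start a stopped unit agent by its upstart name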
[20:29] trying to deploy openstack HA on physical servers... has anyone seen issues while deploying on physical servers? the mysql hook is failing with cinder... this is the error: shared-db-relation-changed 2014-07-18 20:22:28.637 28128 CRITICAL cinder [-] OperationalError: (OperationalError) (1130, "Host '4-4-1-95.cisco.com' is not allowed to connect to this MySQL server") None None
[20:30] here is the stack trace
[20:30] http://pastebin.com/DRKfi4d1
[20:30] 4.4.1.95 is the VIP for the cinder
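MySQL error 1130 here means the server has no grant matching the host cinder is connecting from, in this case the VIP's hostname. A diagnostic sketch only, not the charm's own remediation; the database and user names are assumptions based on the cinder charm's usual defaults, and the password placeholder is hypothetical:

    # on the mysql unit (e.g. juju ssh mysql/0), check which hosts the cinder user may connect from
    mysql -u root -p -e "SELECT host, user FROM mysql.user WHERE user = 'cinder';"

    # if the VIP hostname is missing, the shared-db relation never granted it; a stopgap
    # while debugging the relation could look like this (credentials are placeholders):
    mysql -u root -p -e "GRANT ALL ON cinder.* TO 'cinder'@'4-4-1-95.cisco.com' IDENTIFIED BY '<password-from-cinder.conf>';"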