=== thumper is now known as thumper-afk
kjackal | Good morning Juju world! | 07:19 |
=== frankban|afk is now known as frankban
jwd | morning | 08:01 |
dakj | jamespage: hi, I rebuilt the whole lab; all nodes have the same clock as the host. The issue is still present on Nova-Cloud-Controller and Ceph-Mon | 10:53 |
dakj | Does anyone have a solution for this issue (https://askubuntu.com/questions/913007/issue-with-nova-cloud-controller-and-ceph-mon-with-openstack-base-bundle)? | 12:36 |
dakj | Can anyone help me resolve it? Thanks | 13:37 |
icey | dakj it looks like your ceph-osd charm doesn't have any block devices configured with `osd-devices` | 13:42 |
dakj | icey: if you have a look at its juju status, ceph-mon/11 is active, ceph-mon/9 is blocked, and ceph-mon/10 is in maintenance. | 13:46 |
icey | dakj: can you paste the `juju status` into paste.ubuntu.com | 13:47 |
icey | dakj: https://i.stack.imgur.com/5fkrF.png looks like it's clustered | 13:48 |
dakj | icey: here it is (https://paste.ubuntu.com/24587013/) | 13:48 |
icey | right dakj, the ceph-mons are (Unit is ready and clustered) and the OSDs are (No block devices detected using current configuration) | 13:49 |
icey | you have to configure block devices for the OSD charms (and Ceph) to provide block storage to the cloud | 13:49 |
jamespage | dakj: still wedged on that second unit right? | 13:49 |
jamespage | odd | 13:49 |
* jamespage ponders whether it's a network MTU mismatch of some description | 13:49 |
dakj | jamespage: yes | 13:50 |
jamespage | dakj: might be worth a check | 13:50 |
dakj | icey: how can I do that after deploying the bundle? | 13:50 |
jamespage | dakj: 99% of odd problems turn out to be some sort of network misconfig in my experience | 13:50 |
jamespage | dakj: I'd use iperf | 13:50 |
jamespage | and test the performance to and from the LXD container with the problem ceph-mon | 13:51 |
jamespage | so say from ceph-mon/10 -> ceph-mon/9 and from ceph-mon/11 -> ceph-mon/9 | 13:51 |
jamespage | there is a flag you can use to display the actual MTU | 13:51 |
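A sketch of the check jamespage outlines, assuming iperf (v2) is available in the containers; the address placeholder has to be filled in from `juju status`:

```
# Start an iperf server on the suspect unit (apt-get install iperf first
# if the container doesn't have it):
juju ssh ceph-mon/9 'iperf -s'

# From the other monitors, measure throughput towards ceph-mon/9; the -m
# flag prints the negotiated TCP MSS along with the detected MTU
# (replace <ceph-mon/9-address> with the container's IP from juju status):
juju ssh ceph-mon/10 'iperf -c <ceph-mon/9-address> -m'
juju ssh ceph-mon/11 'iperf -c <ceph-mon/9-address> -m'

# Compare interface MTUs on the containers directly:
juju ssh ceph-mon/9 'ip link show'
```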
icey | dakj: something like: `juju config ceph-osd osd-devices=/dev/sdb`, where /dev/sdb stands in for a space-separated list of block devices or directories | 13:52 |
dakj | icey: do I have to run that before launching the deploy of OpenStack Base via Juju? | 13:53 |
icey | dakj: if you want to set OSD devices up before deploy, you would want to put them into the bundle, that command is something you can run to add devices after deploying | 13:53 |
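For the pre-deploy route icey mentions, a minimal sketch of what the bundle fragment could look like (device name hypothetical; older bundles use `services:` instead of `applications:`):

```yaml
applications:
  ceph-osd:
    options:
      osd-devices: /dev/vdb   # space-separated list of disks or directories
```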
admcleod | jamespage: there's something else that might be useful... | 13:55 |
jamespage | admcleod: suggest away | 13:56 |
dakj | icey: does the command have to be run against the ceph-mon application? | 13:56 |
admcleod | oh no, nvm, wrong idea. gotta run to a meeting | 13:56 |
icey | dakj: you would run that command from your juju client | 13:56 |
dakj | icey: ok, let me try that; I'll let you know the result soon. | 13:57 |
=== rye is now known as ryebot
dakj | icey: it gives me this error "ERROR application "ceph-osd" not found (not found)" | 14:28 |
icey | dakj: and you ran that on the same machine that you have been running juju status on? I just deployed ceph-osd and ran that command, and I can confirm that it works | 14:29 |
dakj | Icey: yes | 14:29 |
dakj | icey: wait, I have 4 virtual nodes used for deploying OpenStack and another one for Juju. On this last one, do I have to deploy ceph-osd first, and then deploy OpenStack Base via the Juju GUI? Is that right? | 14:33 |
icey | I don't understand what you're asking dakj, but you should be configuring the ceph-osd application from the same client where you deployed the openstack-base bundle | 14:34 |
dakj | icey: sorry, let me try to explain. | 14:35 |
dakj | icey: I have 1 VM used for MAAS, 1 VM used for the Juju GUI, and 4 VMs used for OpenStack. I tried to deploy the OpenStack Base bundle via the Juju GUI. Where do I have to run the command you suggested? | 14:37 |
icey | from which machine did you run `juju bootstrap`? | 14:38 |
icey | alternatively, you could change that configuration value from within the Juju GUI | 14:38 |
dakj | icey: on the MAAS node I used this command: "juju bootstrap maaslab maaslab-controller --to juju.maas" | 14:39 |
icey | ok, so either you should run that command on that MAAS node, or you should update the configuration through the GUI | 14:39 |
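A few sanity checks, as a sketch, to run on whichever node `juju bootstrap` was actually executed from, confirming the client can see the controller and the application before setting the option:

```
juju controllers                              # should list maaslab-controller
juju status ceph-osd                          # the application must exist in the current model
juju config ceph-osd osd-devices='/dev/vdb'   # then the config change will apply
```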
dakj | Perfect; on that MAAS node I got this error: https://paste.ubuntu.com/24587244/ | 14:41 |
icey | dakj, then you did not run `juju bootstrap...` on that MAAS node | 14:42 |
icey | dakj, can you try to change the osd-devices configuration option for the ceph-osd application from within the Juju GUI instead? | 14:43 |
dakj | icey: in ceph-osd, osd-devices is already /dev/sdb. | 14:45 |
icey | dakj: the value /dev/sdb is a default, you need to configure it to match your disk setup | 14:48 |
dakj | icey: this is one of the nodes dedicated to OpenStack on MAAS: https://pasteboard.co/706aKSJHl.png | 14:50 |
icey | dakj: it can be a space separated list of either disks or directories | 14:50 |
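For illustration only (device and directory names hypothetical), a multi-device value would look like:

```
juju config ceph-osd osd-devices='/dev/vdb /dev/vdc /srv/ceph-osd'
```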
icey | dakj: according to your bundle, machines 12, 13, and 14 are the machines with ceph-osd on them; what disks do those machines have available? | 14:51 |
dakj | icey: I have to run the commit in Juju to see that, because I cleaned everything up to re-run it from the beginning. | 14:56 |
dakj | icey: I've started it; when it's finished I'll check what you asked. | 14:59 |
icey | dakj: I'm about at end of day, but there are other people around who can help with questions :) | 15:01 |
dakj | icey: thanks a lot for your support. Have a nice day; see you soon. | 15:04 |
dakj | icey: now osd-devices is set to /dev/vdb | 15:27 |
icey | does /dev/vdb exist on the ceph-osd nodes dakj ? | 15:28 |
icey | dakj I'm EOD but cholcombe can probably help with ceph questions | 15:31 |
cholcombe | dakj: o/ | 15:31 |
dakj | icey: here is the fdisk output: https://paste.ubuntu.com/24587473/ | 15:32 |
cholcombe | dakj: so they all have a 400GB sdb on them. | 15:33 |
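One way to confirm what disks each OSD unit actually sees, a sketch assuming a Juju 2.x client; the configured osd-devices value has to match these names (sdb vs vdb matters):

```
# Run lsblk on every ceph-osd unit and compare against osd-devices:
juju run --application ceph-osd 'lsblk'
```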
dakj | Cholcombe: yes | 15:36 |
dakj | icey: thanks a lot | 15:36 |
dakj | cholcombe: in the ceph-osd config, /dev/vdb is present | 15:38 |
dakj | cholcombe: I'm at EOD; can we meet tomorrow to see how to resolve my issue? | 15:39 |
cholcombe | dakj: sure | 15:39 |
dakj | cholcombe: thanks, have a nice day, and see you tomorrow with my lab. Are you around in the morning or the evening? | 15:41 |
cholcombe | dakj: I'm on Pacific (US west coast) time | 15:42 |
dakj | cholcombe: I'm on European time :-) See you! | 15:45 |
bdx | @stokachu | 15:51 |
rahworkx | hello all, can someone point me in the direction of how to entirely uninstall the conjure-up that was installed with "apt-get install conjure-up", so I can install it successfully via snap? | 15:52 |
bdx | http://installion.co.uk/ubuntu/yakkety/universe/c/conjure-up/uninstall/index.html | 15:58 |
bdx | someone needs to take care of that bad boy | 15:58 |
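The apt-to-snap switch rahworkx asks about would look roughly like this (a sketch; the conjure-up snap is installed with classic confinement):

```
sudo apt-get remove --purge conjure-up   # drop the deb-packaged version
sudo apt-get autoremove                  # clean up leftover dependencies
sudo snap install conjure-up --classic   # reinstall from the snap store
```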
=== frankban is now known as frankban|afk
rick_h | SimonKLB: ping | 18:09 |
Budgie^Smore | woot! almost time for a final in-person interview round! | 18:58 |
lazyPower | Budgie^Smore: good luck mate | 19:14 |
Budgie^Smore | got 3 in-person final rounds this week! this job hunting is a full-time thing! | 19:15 |
Budgie^Smore | oh and for anyone interested, I have been told by recruiters that there are more positions in the area than we have good people to fill them! | 19:16 |
rahworkx | hello all, when deploying CDK with conjure-up on a local bare-metal server, the etcd nodes are failing with "Missing relation to certificate authority." Are there any suggestions for a fix? | 19:26 |
stokachu_ | rahworkx: easyrsa should be deployed and active | 19:27 |
rahworkx | stokachu_: it is deployed with msg "Certificate Authority ready." | 19:27 |
stokachu_ | rahworkx: what's the output of `juju status --format yaml`? | 19:28 |
rahworkx | stokachu_: https://paste.ubuntu.com/24588600/ | 19:29 |
stokachu_ | rahworkx: how long has it been blocked for? | 19:30 |
stokachu_ | maybe lazyPower has an idea ^ | 19:31 |
lazyPower | rahworkx: interesting, I don't see the etcd->easyrsa relation declared in that status yaml. Try 'juju add-relation etcd easyrsa' and see if that resolves the status message | 19:32 |
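Spelled out, with a follow-up check (the status filter is a sketch; etcd should leave the blocked state once certificates are issued):

```
juju add-relation etcd easyrsa
juju status etcd easyrsa
```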
rahworkx | stokachu_: hmm, I was waiting until each app finished installing before selecting the next... I kicked off the last app, "workers", and its status is "started" now. | 19:32 |
stokachu_ | rahworkx: yeah, relations don't get set until after everything is deployed | 19:34 |
stokachu_ | b/c there could be applications the relations require that are not yet known to juju | 19:34 |
rahworkx | lazypower: previously, when selecting all apps to deploy at once, it was failing with a "failed to find hook" msg; this is probably hardware related (older server) | 19:35 |
lazyPower | failed to find hook? | 19:35 |
lazyPower | that's... not expected at all | 19:35 |
stokachu_ | yeah, that's a new one to me | 19:35 |
lazyPower | the hooks are charm components, the executable events we invoke when things happen | 19:35 |
lazyPower | that relationship, for example, will trigger a certificates-relation-joined hook | 19:35 |
rahworkx | my mistake... I may have seen that elsewhere. | 19:38 |
lazyPower | rahworkx: there's a known deficiency right now where etcd isn't starting due to some changes in how snap presents version info in 'snap info etcd' | 19:39 |
lazyPower | there's a PR landing and we'll have a fix out the door today; that might have been what you saw... if it was etcd that was the problem child | 19:40 |
rahworkx | lazypower: this is the error I saw before.... https://paste.ubuntu.com/24588663/ | 19:44 |
lazyPower | rahworkx: yep, that's the bug I was just referencing | 19:44 |
lazyPower | that happened sometime between last night and today; it didn't seem to affect existing deployments, only new ones. | 19:45 |
rahworkx | ohh ok, makes sense | 19:45 |
rahworkx | stokachu_: lazyPower: thanks for shedding some light on that... | 19:51 |
lazyPower | np rahworkx - I'll ping you when the fix gets published in the bundles | 19:51 |
lazyPower | we're working through some nuances with this breakage, and have a functional fix, but it's still a bit brittle. Trying to make it more robust so you don't hit this six months later. | 19:52 |