=== thumper is now known as thumper-afk
[07:19] Good morning Juju world!
=== frankban|afk is now known as frankban
[08:01] morning
[10:53] jamespage: hi, I rebuilt the whole lab; all nodes now have the same clock as the host. The issue is still present on Nova-Cloud-Controller and Ceph-Mon.
[12:36] Does anyone have a solution for this issue (https://askubuntu.com/questions/913007/issue-with-nova-cloud-controller-and-ceph-mon-with-openstack-base-bundle)?
[13:37] Can anyone help me resolve the issue? Thanks
[13:42] dakj it looks like your ceph-osd charm doesn't have any block devices configured with `osd-devices`
[13:46] Icey: if you have a look at its juju status, ceph-mon/11 is in active status, ceph-mon/9 is blocked and ceph-mon/10 is in maintenance.
[13:47] dakj: can you paste the `juju status` into paste.ubuntu.com
[13:48] dakj: https://i.stack.imgur.com/5fkrF.png looks like it's clustered
[13:48] icey: here it is (https://paste.ubuntu.com/24587013/)
[13:49] right dakj, the ceph-mons show (Unit is ready and clustered) and the OSDs show (No block devices detected using current configuration)
[13:49] you have to configure block devices for the OSD charms (and Ceph) to provide block storage to the cloud
[13:49] dakj: still wedged on that second unit right?
[13:49] odd
[13:49] * jamespage ponders whether it's a network MTU mismatch of some description
[13:50] Jamespage: yes
[13:50] dakj: might be worth a check
[13:50] icey: how can I do that after deploying the bundle?
[13:50] dakj: 99% of odd problems turn out to be some sort of network misconfig in my experience
[13:50] dakj: I'd use iperf
[13:51] and test the performance between the lxd containers and the problem ceph-mon
[13:51] so say from ceph-mon/10 -> ceph-mon/9 and from ceph-mon/11 -> ceph-mon/9
[13:51] there is a flag you can use to display the actual MTU
[13:52] dakj: something like: `juju config ceph-osd osd-devices=/dev/sdb`, where /dev/sdb is a space-separated list of block devices or directories
[13:53] Icey: do I have to run that before launching the deploy of OpenStack Base via Juju?
[13:53] dakj: if you want to set OSD devices up before deploy, you would want to put them into the bundle; that command is something you can run to add devices after deploying
[13:55] jamespage: there's something else that might be useful...
[13:56] admcleod: suggest away
[13:56] icey: does the command have to be applied to the ceph-mon application?
[13:56] oh no, nvm, wrong idea. gotta run to a meeting
[13:56] dakj: you would run that command from your juju client
[13:57] Icey: ok, let me try that; I'll let you know the result soon.
=== rye is now known as ryebot
[14:28] icey: it gives me this error: "ERROR application "ceph-osd" not found (not found)"
[14:29] dakj: and you ran that on the same machine that you have been running juju status on? I just deployed ceph-osd and ran that command, and I can confirm that it works
[14:29] Icey: yes
[14:33] Icey: wait, I have 4 virtual nodes used for deploying OpenStack and another one for Juju. On that last one, do I have to deploy ceph-osd first and then deploy OpenStack Base via the Juju GUI? Is that right?
[14:34] I don't understand what you're asking dakj, but you should be configuring the ceph-osd application from the same client where you deployed the openstack-base bundle
[14:35] icey: sorry, I'll try to explain.
[14:37] Icey: I have 1 VM used for MAAS, 1 VM used for the Juju GUI and 4 VMs used for OpenStack. I tried to deploy the OpenStack Base bundle via the Juju GUI. Where do I have to run the command you suggested?
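A minimal sketch of what icey describes above, assuming a Juju 2.x client, the openstack-base application name ceph-osd, and that the OSD machines really expose the device you pass in (the /dev/sdb value below is just the charm default and must match the actual disks):

```
# Run from the Juju client, i.e. the machine where `juju bootstrap` was run.
# osd-devices takes a whitespace-separated list of block devices or directories.
juju config ceph-osd osd-devices='/dev/sdb'

# Optional sanity check of which disks the ceph-osd machines actually have
# (Juju 2.x `juju run` syntax assumed):
juju run --application ceph-osd 'lsblk -d -o NAME,SIZE,TYPE'

# To set the value before deploying instead, put it in the bundle under the
# ceph-osd application's options, roughly:
#   ceph-osd:
#     options:
#       osd-devices: /dev/sdb
```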
[14:38] from what machine did you run `juju bootstrap`?
[14:38] alternately, you could change that configuration value from within the juju gui
[14:39] icey: on MAAS I used this command: "juju bootstrap maaslab maaslab-controller --to juju.maas"
[14:39] ok, so either you should run that command on that MAAS node, or you should update the configuration through the GUI
[14:41] Perfect; on that MAAS node I got this error: https://paste.ubuntu.com/24587244/
[14:42] dakj, then you did not run `juju bootstrap...` on that MAAS node
[14:43] dakj, can you try to change the osd-devices configuration option from within the Juju GUI for the ceph-osd application instead
[14:45] Icey: in ceph-osd it is already /dev/sdb.
[14:48] dakj: the value /dev/sdb is a default, you need to configure it to match your disk setup
[14:50] Icey: this is a node dedicated to OpenStack on MAAS: https://pasteboard.co/706aKSJHl.png
[14:50] dakj: it can be a space-separated list of either disks or directories
[14:51] dakj: according to your bundle, machines 12, 13, and 14 are the machines with ceph-osd on them; what disks do those machines have available?
[14:56] Icey: I have to commit the deploy in Juju to see that, because I cleaned everything up to re-run it from the beginning.
[14:59] icey: I've started that; when it's finished I'll see what you asked me
[15:01] dakj: I'm about at End of Day, but there are other people around who can help with questions :)
[15:04] Icey: thanks a lot for your support. Have a nice day, see you soon.
[15:27] icey: now in osd-devices there is /dev/vdb
[15:28] does /dev/vdb exist on the ceph-osd nodes dakj ?
[15:31] dakj I'm EOD but cholcombe can probably help with ceph questions
[15:31] dakj: o/
[15:32] Icey: here is the fdisk output: https://paste.ubuntu.com/24587473/
[15:33] dakj: so they all have a 400GB sdb on them.
[15:36] Cholcombe: yes
[15:36] icey: thanks a lot
[15:38] Cholcombe: on ceph-osd, /dev/vdb is present
[15:39] Cholcombe: I'm EOD; can we meet tomorrow to see how to resolve my issue?
[15:39] dakj: sure
[15:41] Cholcombe: thanks, have a nice day and see you tomorrow with my lab...... Are you around in the morning or the evening?
[15:42] dakj: i'm on pacific west coast time
[15:45] Cholcombe: I'm on Europe time :-) see you!!!
[15:51] @stokachu
[15:52] hello all, can someone point me in the direction of how to uninstall "apt-get install conjure-up" entirely so I can use snap to install successfully?
[15:58] http://installion.co.uk/ubuntu/yakkety/universe/c/conjure-up/uninstall/index.html
[15:58] someone needs to take care of that bad boy
=== frankban is now known as frankban|afk
[18:09] SimonKLB: ping
[18:58] woot! almost time for a final in-person interview round!
[19:14] Budgie^Smore: good luck mate
[19:15] got 3 in-person final rounds this week! this job hunting is a full-time thing!
[19:16] oh and for anyone interested, I have been told by recruiters that there are more positions in the area than we have good people to fill them!
[19:26] hello all, when deploying CDK with conjure-up on a local bare-metal server, the etcd nodes are failing with "Missing relation to certificate authority." Are there any suggestions for a fix?
[19:27] rahworkx: easyrsa should be deployed and active
[19:27] stokachu_: it is deployed with msg "Certificate Authority ready."
[19:28] rahworkx: what's the output of `juju status --format yaml`
[19:29] stokachu_: https://paste.ubuntu.com/24588600/
[19:30] rahworkx: how long has it been blocked for?
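A sketch of how the missing relation shows up in the status output just pasted, assuming a Juju 2.x client and the default canonical-kubernetes application names (etcd and easyrsa):

```
# Each application in the YAML status has a 'relations' map; etcd should list
# a 'certificates' relation to easyrsa.
juju status --format yaml etcd easyrsa | grep -A 6 'relations:'

# If that relation is absent, adding it should clear the
# "Missing relation to certificate authority" message:
juju add-relation etcd easyrsa
```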
[19:31] maybe lazyPower has an idea ^
[19:32] rahworkx: interesting, i don't see the etcd->easyrsa relation declared in that status yaml. try 'juju add-relation etcd easyrsa' and see if that resolves the status message
[19:32] stokachu_: hmm, I was waiting until each app finished installing before selecting the next.. I kicked off the last app, "workers", and its status is "started" now.
[19:34] rahworkx: yea, relations don't get set until after everything is deployed
[19:34] b/c there could be applications the relations require that are not yet known to juju
[19:35] lazypower: Previously, when selecting all apps to deploy at once, it was failing with a "failed to find hook" msg; this is probably hardware related ("older server")
[19:35] failed to find hook?
[19:35] that's... not expected at all
[19:35] yea that's a new one to me
[19:35] the hooks are charm components, the executable events we invoke when things happen
[19:35] like that relationship for example will trigger a certificates-relation-joined hook
[19:38] my mistake.. may have seen that elsewhere..
[19:39] rahworkx: there's a known deficiency right now where etcd isn't starting due to some changes in how snap presents version info in 'snap info etcd'
[19:40] there's a PR landing and we'll have a fix out the door today; that might have been what you saw... if it was etcd that was the problem child
[19:44] lazypower: this is the error I saw before.... https://paste.ubuntu.com/24588663/
[19:44] rahworkx: yep, that's the bug i was just referencing
[19:45] that happened sometime between last night and today; it didn't seem to affect existing deployments, only new deployments.
[19:45] ohh ok, makes sense
[19:51] stokachu_: lazyPower: thanks for shedding some light on that...
[19:51] np rahworkx - i'll ping you when the fix gets published in the bundles
[19:52] we're working through some nuances with this breakage, and have a functional fix, but it's still a bit brittle. Trying to make this more robust so you don't find this six months later.
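For anyone hitting the same blocked etcd units, a rough way to see the output lazyPower refers to, since the charm's parsing of the snap version info is what broke; the unit name etcd/0 is a placeholder:

```
# Show what snapd reports for the etcd snap from inside a unit; the version
# field of this output is what changed underneath the charm.
juju ssh etcd/0 'snap info etcd'

# The installed version/revision is also visible in tabular form:
juju ssh etcd/0 'snap list etcd'
```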