[01:39] @stormmore @rick_h @jamespage @catbus @Dweller_ @zeestrat I GOT IT! MWHAHAH! Ok so the issue is with the Dell series systems: with the PERC cards you have to convert each drive from a PD to its own RAID 0 VD, configuring and initializing each drive individually so that every disk gets created as a separate VD. Once you do that, it works like a champ
[01:41] and here I was so excited :(
[01:41] I only see 400 of the total 3TB of storage available x..x
[01:42] The upgrade from before, though, is that all OSD devices show as active/idle, unit is ready, 1 OSD each. So that's a good sign at the very least.
=== frankban|afk is now known as frankban
[07:59] Hey all
[08:00] Could anyone help answer a question
=== frankban is now known as frankban|afk
[10:55] hello folks, random design pattern question. We have a requirement to set up some services, in effect, as a base layer on all our nodes
[10:56] other than manually applying them to each node, or making them subordinate to something
[10:56] is there a way to do that that's nice? or not? :)
=== frankban|afk is now known as frankban
[11:40] magicaltrout: heh we do that internally. There's a basenode charm that is actually what's deployed and then other charms are hulk-smashed to those.
[11:40] magicaltrout: subordinate is a nice way if you can work it that way
[11:41] magicaltrout: I think one of the issues our folks had was that some of the setup needed to be done before the main charm could operate, so it had to be a promise that basenode was laid down first
[11:59] 'basenode' exists to configure apt repositories to our internal mirrors, which needs to happen before charms attempt to install stuff
[12:02] rick_h: Some sort of a predeploy hook that lets us mess with a machine between provisioning and deploying the charm would make my life a lot easier. We could lose basenode, and all the fallout, which is significant.
[12:05] magicaltrout: We are currently deploying from local branches, which lets us dump stuff in $CHARMDIR/exec.d; it's probably best documented in https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/execd.py
[12:05] (all reactive charms get the feature, whereas non-reactive ones need to explicitly make the correct charm-helpers calls)
[12:23] stub: rgr
[13:13] hey errbody!
[13:14] mornin
[13:14] o/
[13:16] fallenour: how'd a night's rest go? Saw you made some progress late?
[14:18] @rick_h yea I've got it fully up and running, now I just need to know why it only sees 400 of the 3 TB of space in horizon. Any ideas? All OSDs reporting as good to go.
[14:19] I'm assuming that since the system will take the first drive for the OS and the rest for storage, I should only be down roughly 3 drives of the total available 22, so 19 at 146 GB each.
[14:19] fallenour: sorry, /me is storage ignorant. I just like building stuff that uses storage
[14:19] @rick_h lol @stokachu @jamespage any ideas then guys?
[14:36] @fallenour: I suggest taking a look at ceph to get an idea of the state of your storage and then seeing if that matches up with what is presented in horizon
[14:39] @zeestrat I really wish I could, but I don't exactly know how to do that.. :(
[14:39] @zeestrat I know that the difference between the pure ceph-osd node and the ones working is that the nova-compute services are installed, which is why they appear in the hypervisor section. But other than that, and running drive-related commands on the systems themselves, I don't know what to do next.
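On the 14:36 suggestion, a minimal sketch of how to ask Ceph itself what it thinks the capacity is, assuming the cluster has a monitor unit reachable as ceph-mon/0 (the unit name is an assumption; adjust it to the actual deployment):

  # Overall cluster health and raw capacity (assumed unit name ceph-mon/0)
  juju ssh ceph-mon/0 sudo ceph status
  # Total and per-pool usage, to compare against what horizon reports
  juju ssh ceph-mon/0 sudo ceph df
  # Confirm every OSD is up/in and see how much weight each one contributes
  juju ssh ceph-mon/0 sudo ceph osd tree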
[15:00] fallenour: hmm - not sure that horizon will tell you the available storage in cinder?
[15:00] normally it tells you the ephemeral storage capacity on the compute nodes
[15:01] @jamespage Yea that was my concern. I guess my next question is: I made the ceph nodes in order to expand my overall storage in openstack, so how do I make sure that storage is available for my instances?
[15:01] @jamespage otherwise, I've lost over 85% of the available storage for viable use, which is bad news bears.
[15:01] fallenour: so ceph can be used in multiple ways in openstack
[15:02] i suspect you have it configured as a cinder backend only; with this configuration you can do boot-from-volume for instances + attach persistent volumes to instances which have root disks on local disk on the compute hypervisors
[15:03] fallenour: you can also configure nova to use ceph as the backend for all instance storage - the nova-compute charm supports this, but it's not the 'default'
[15:03] @jamespage right now all I have installed on the nodes specifically is ceph-osd. Did I need to install something in addition to that?
[15:03] you have to toggle a configuration option on the nova-compute charm
[15:03] @jamespage ahh I'm guessing that's the missing link. I do believe I installed cinder-ceph, but I don't know if nova accepts that.
[15:04] @jamespage does that mean another rebuild? Q___Q
[15:04] fallenour: no, it should be switchable
[15:04] fallenour: https://jujucharms.com/nova-compute/#charm-config-libvirt-image-backend
[15:04] set that option to rbd
[15:06] @jamespage Libvirt-image-backend from lvm to rbd, I take it?
[15:06] @jamespage will that also update the rados gateway and ceph-osd nodes accordingly?
[15:06] fallenour: it won't be set at all right now
[15:06] fallenour: it should dtrt
[15:08] @jamespage is there a specific series of commands I should issue, or do I need to just modify the /etc/nova/nova.conf config files on all the nova boxes?
[15:20] @jamespage Another question: I would like to expand on the ceph option osd-devices. The default only includes /dev/sdb, but on systems with more than two devices, will the drives be named /dev/sda, /dev/sdb, /dev/sdc, etc? If so, won't the default capture only the first available drive and ignore the others? If so, I would like to configure that option to grab the others accordingly
[16:27] @rick_h: Please enjoy https://bugs.launchpad.net/juju/+bug/1716948 I sure didn't
[16:27] Bug #1716948: juju controller caches credentials from bootstrap
=== frankban is now known as frankban|afk
[16:52] xarses_: ruh roh
[16:52] ya, I didn't enjoy yesterday
[17:19] The Juju Show #21 in 40 minutes. Get your coffee cups ready!
[17:41] hml: kwmonroe tvansteenburgh hatch marcoceppi bdx magicaltrout beisner and anyone else able to make the show in 20min <3
[17:55] folks that want to chat in the show can join via https://hangouts.google.com/hangouts/_/okwrzmr46fgrvcwcokclw7yqcie
[17:55] and those that want to watch, load up https://www.youtube.com/watch?v=3658lsehjKM
[18:23] rick_h: include-file config only available in bundles, not charm config?
[18:29] bdx: correct, it's done client side as the bundle is processed
[18:29] bdx: charm already takes a --config option that can be yaml, so not sure how useful it'd be
[18:37] Got it, thx
=== med_ is now known as med
=== med is now known as Guest20317
=== Guest20317 is now known as med_
[18:39] rick_h: how do we get stale charms out of the charm store?
[18:39] rick_h: contact the team and ask them to update the charm or revoke privs?
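A rough sketch of the two configuration changes discussed at 15:04 and 15:20, done through juju config; libvirt-image-backend=rbd is the option linked at 15:04, while the device list below is purely illustrative and has to match the drives actually present in the nodes:

  # Store nova instance disks in Ceph (RBD) instead of local hypervisor disk
  juju config nova-compute libvirt-image-backend=rbd
  # Point the ceph-osd charm at every data drive, not just the default /dev/sdb
  # (osd-devices takes a space-separated list; the device names here are examples)
  juju config ceph-osd osd-devices="/dev/sdb /dev/sdc /dev/sdd"

Per the 15:06 "it should dtrt" answer, the charms are expected to rewrite /etc/nova/nova.conf and restart services themselves, so no manual edits on the nova boxes should be needed.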
[18:39] File a bug. We file a thing with folks that have delete permissions.
[18:40] rick_h: ah nice
[18:40] bdx: oh, yea if it's someone else's, yea. Engaging them is the first step
[18:40] totally
[18:40] ok, looks like I need to do both
[18:40] I've got crufty trial-and-error charms just hanging around that it would be nice to wipe off the face of the earth
[18:41] rick_h: file a bug where?
[18:42] bdx: github.com/canonicalltd/jujucharms.com
[18:46] bdx: super quick fix is to remove read perms from everyone... like "charm revoke ~user/charm everyone". You'll still see it (assuming you're the owner of the charm), but no one else will.
[18:52] rick_h: totally
[18:52] rick_h: https://github.com/CanonicalLtd/jujucharms.com/issues/487
[21:46] sudo cat /var/lib/mysql/mysql.passwd
[21:46] I get: No such file or directory
[21:47] is there another method for getting the root mysql password?
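On the 21:47 question, a hedged guess: some MySQL/Percona charm deployments keep the root password in Juju leader settings rather than in /var/lib/mysql/mysql.passwd, so something along these lines may work. The unit name mysql/0 and the root-password key are assumptions that depend on the charm and revision in use:

  # Read the password from leader settings (key name is an assumption)
  juju run --unit mysql/0 'leader-get root-password'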