/srv/irclogs.ubuntu.com/2017/09/13/#juju.txt

fallenour@stormmore @rick_h @jamespage @catbus @Dweller_ @zeestrat I GOT IT! MWHAHAH! OK, so the issue is with the Dell series systems: you have to convert each drive from PD to VD as RAID 0, configuring and initializing each drive individually, so that the PERC card creates each disk as its own VD. Once you do that, it works like a champ01:39
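For reference, a minimal CLI sketch of the same per-drive RAID 0 setup, assuming a PERC controller managed with Broadcom's storcli tool; the controller index /c0 and the enclosure:slot IDs (32:0, 32:1) are placeholders, and the same result can be reached through the PERC BIOS configuration utility as described above:

    # list the physical drives seen by controller 0
    storcli64 /c0/eall/sall show
    # create one single-drive RAID 0 virtual disk per physical drive
    storcli64 /c0 add vd type=raid0 drives=32:0
    storcli64 /c0 add vd type=raid0 drives=32:1
    # verify the resulting virtual disks and their initialization state
    storcli64 /c0/vall show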
fallenourand here I was so excited :(01:41
fallenourI only see 400 of the total 3TB of storage available x..x01:41
fallenourThe upgrade from the previous attempt, though, is that all OSD devices show as active/idle, unit is ready, 1 OSD each. So that's a good sign at the very least.01:42
=== frankban|afk is now known as frankban
nofunHey all07:59
nofunCould anyone help answer a question?08:00
=== frankban is now known as frankban|afk
magicaltrouthello folks, random design pattern question. We have a requirement to set up some services, in effect, as a base layer on all our nodes10:55
magicaltroutother than manually applying them to each node, or making them subordinate to something10:56
magicaltroutis there a way to do that that's nice? or not? :)10:56
=== frankban|afk is now known as frankban
rick_hmagicaltrout: heh we do that internally. There's a basenode charm that is actually what's deployed and then other charms are hulk-smashed to those.11:40
rick_hmagicaltrout: a subordinate is a nice way if you can work it that way11:40
rick_hmagicaltrout: I think one of the issues our folks had was that some of the setup needed to be done before the main charm could operate so it had to be a promise that basenode was laid down first11:41
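Both patterns above reduce to a couple of juju commands; a rough sketch, with myapp and mysubordinate as stand-in application names:

    # subordinate pattern: once related, the subordinate gets a unit on every
    # machine that hosts a unit of the principal application
    juju deploy myapp
    juju deploy mysubordinate
    juju add-relation mysubordinate myapp
    # "hulk-smash" pattern: co-locate another principal charm on an existing machine
    juju deploy basenode --to 3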
stub'basenode' exists to configure apt repositories to our internal mirrors, which needs to happen before charms attempt to install stuff11:59
stubrick_h: Some sort of a predeploy hook that lets us mess with a machine between provisioning and deploying the charm would make my life a lot easier. We could lose basenode, and all the fallout, which is significant.12:02
stubmagicaltrout: We are currently deploying from local branches, which lets us dump stuff in $CHARMDIR/exec.d; it's probably best documented in https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/execd.py12:05
stub(all reactive charms get the feature, whereas non-reactive need to explicitly make the correct charm-helpers calls)12:05
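A minimal sketch of the exec.d mechanism stub describes, for a locally deployed charm; the directory name and mirror hostname below are placeholders, and the convention (per the execd.py linked above) is that executables named charm-pre-install under exec.d/*/ run before the charm's install hook:

    # inside the local charm directory, before `juju deploy ./mycharm`
    mkdir -p exec.d/00-internal-mirror
    cat > exec.d/00-internal-mirror/charm-pre-install <<'EOF'
    #!/bin/sh
    # runs on the unit before the charm installs anything:
    # point apt at an internal mirror (hostname is a placeholder)
    sed -i 's|archive.ubuntu.com|mirror.internal.example|g' /etc/apt/sources.list
    apt-get update
    EOF
    chmod +x exec.d/00-internal-mirror/charm-pre-install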
rick_hstub: rgr12:23
fallenourhey errbody!13:13
Dweller__mornin13:14
tvansteenburgho/13:14
rick_hfallenour: how'd a night's rest go? Saw you made some progress late?13:16
fallenour@rick_h yea, I've got it fully up and running; now I just need to know why it only sees 400 of the 3 TB of space in horizon. Any ideas? All OSDs reporting as good to go.14:18
fallenourI'm assuming that since the system will take the first drive for the OS, and the rest for storage, I should only be down roughly 3 drives of the total available 22, so 19 at 146 GB each.14:19
rick_hfallenour: sorry, /me is storage ignorant. I just like building stuff that uses storage14:19
fallenour@rick_h lol @stokachu @jamespage any ideas then guys?14:19
zeestrat@fallenour: I suggest taking a look at ceph to get an idea of the state of your storage and then seeing if that matches up with what is presented in horizon14:36
fallenour@zeestrat I really wish I could, but I don't exactly know how to do that... :(14:39
fallenour@zeestrat I know that the difference between the pure ceph-osd node and the ones working is that the nova-compute services are installed, which is why they appear in the hypervisor section. But other than that, and running drive-related commands on the systems themselves, I don't know what to do next.14:39
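A few read-only commands that would show the cluster's actual state and capacity, assuming the usual charm deployment has a ceph-mon (or ceph) application alongside ceph-osd; the application names are guesses for this model:

    juju status ceph-osd                    # confirm every OSD unit is active
    juju ssh ceph-mon/0 sudo ceph status    # overall health, OSDs in/up
    juju ssh ceph-mon/0 sudo ceph osd tree  # which disks actually joined the cluster
    juju ssh ceph-mon/0 sudo ceph df        # raw vs. available capacity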
jamespagefallenour: hmm - not sure that horizon will tell you the available storage in cinder?15:00
jamespagenormally it tells you the ephemeral storage capacity on the compute nodes15:00
fallenour@jamespage Yea, that was my concern. I guess my next question is: I made the ceph nodes in order to expand my overall storage in openstack, so how do I make sure that storage is available to my instances?15:01
fallenour@jamespage Otherwise, I've lost over 85% of the available storage for viable use, which is bad news bears.15:01
jamespagefallenour: so ceph can be used in multiple ways in openstack15:01
jamespageI suspect you have it configured as a cinder backend only; with this configuration you can boot instances from volume + attach persistent volumes to instances whose root disks live on local disk on the compute hypervisors15:02
jamespagefallenour: you can also configure nova to use ceph as the backend for all instance storage - the nova-compute charm supports this, but it's not the default15:03
fallenour@jamespage right now all I have installed on the nodes specifically is ceph-osd. Did I need to install something in addition to that?15:03
jamespageyou have to toggle a configuration option on the nova-compute charm15:03
fallenour@jamespage ahh, I'm guessing that's the missing link. I do believe I installed cinder-ceph, but I don't know if nova accepts that.15:03
fallenour@jamespage does that mean another rebuild? Q___Q15:04
jamespagefallenour: no it should be switchable15:04
jamespagefallenour: https://jujucharms.com/nova-compute/#charm-config-libvirt-image-backend 15:04
jamespageset that option to rbd15:04
fallenour@jamespage Libvirt-image-backend from lvm to rbd I take it?15:06
fallenour@jamespage will that also update the rados gateway and ceph osd nodes accordingly?15:06
jamespagefallenour: it won't be set at all right now15:06
jamespagefallenour: it should DTRT (do the right thing)15:06
fallenour@jamespage is there a specific series of commands I should issue, or do I need to just modify the /etc/nova/nova.conf config files on all the nova boxes?15:08
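Per jamespage's pointer above, this is a single charm config change rather than hand-editing nova.conf; the charm rewrites /etc/nova/nova.conf on its units when its configuration changes. A sketch, assuming the application is named nova-compute:

    # show the current value (unset by default)
    juju config nova-compute libvirt-image-backend
    # switch instance disks to Ceph RBD
    juju config nova-compute libvirt-image-backend=rbd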
fallenour@jamespage Another question: I would like to expand on the ceph option osd-devices. The default only includes /dev/sdb, but on systems with more than two devices, will the system use a naming of /dev/sda, /dev/sdb, /dev/sdc, etc.? If so, won't the default only capture the first available drive and ignore the others? If so, I would like to configure it to grab the others accordingly15:20
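On osd-devices: the ceph-osd charm only uses the devices listed in that option rather than scanning for extra disks, so widening the list is the fix; a sketch, with the device names below as examples for this hardware:

    juju config ceph-osd osd-devices='/dev/sdb /dev/sdc /dev/sdd /dev/sde'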
xarses_@rick_h: Please enjoy https://bugs.launchpad.net/juju/+bug/1716948 I sure didn't16:27
mupBug #1716948: juju controller caches credentials from bootstrap <docs> <juju:New> <https://launchpad.net/bugs/1716948>16:27
=== frankban is now known as frankban|afk
rick_hxarses_: ruh roh16:52
xarses_ya, I didn't enjoy yesterday16:52
rick_hThe Juju Show #21 in 40 minutes. Get your coffee cups ready!17:19
rick_hhml: kwmonroe tvansteenburgh hatch marcoceppi bdx magicaltrout beisner and anyone else able to make the show in 20min <317:41
rick_hfolks that want to chat in the show can join via https://hangouts.google.com/hangouts/_/okwrzmr46fgrvcwcokclw7yqcie 17:55
rick_hand those that want to watch, load up https://www.youtube.com/watch?v=3658lsehjKM 17:55
bdxrick_h: include-file config only available in bundles, not charm config?18:23
rick_hbdx: correct, it's done client side as the bundle is processed18:29
rick_hbdx: charm already takes a --config option that can be yaml so not sure how useful it'd be18:29
bdxGot it, thx18:37
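A sketch of the two mechanisms discussed above, with made-up application and option names: include-file:// is resolved by the client while it processes a bundle, and a single charm deploy can take a YAML file of options via --config:

    cat > bundle.yaml <<'EOF'
    applications:
      myapp:
        charm: cs:~me/myapp
        num_units: 1
        options:
          ssl_cert: include-file://ssl_cert.pem   # read client side, relative to the bundle
    EOF
    juju deploy ./bundle.yaml

    # single-charm equivalent: supply the options as a YAML file
    juju deploy cs:~me/myapp --config myapp-options.yaml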
=== med_ is now known as med
=== med is now known as Guest20317
=== Guest20317 is now known as med_
bdxrick_h: how do we get stale charms out of the charm store?18:39
bdxrick_h: contact the team and ask them to update the charm or revoke privs?18:39
rick_hFile a bug. We file a thing with folks that have delete permissions.18:39
bdxrick_h: ah nice18:40
rick_hbdx: oh, yea, if it's someone else's, yea. Engaging them is the first step18:40
bdxtotally18:40
bdxok, looks like I need to do both18:40
bdxI've got crufty trial-and-error charms just hanging around that it would be nice to wipe off the face of the earth18:40
bdxrick_h: file a bug where?18:41
rick_hbdx: github.com/canonicalltd/jujucharms.com 18:42
kwmonroebdx: super quick fix is to remove read perms from everyone... like "charm revoke ~user/charm everyone". You'll still see it (assuming you're the owner of the charm), but no one else will.18:46
bdxrick_h: totally18:52
bdxrick_h: https://github.com/CanonicalLtd/jujucharms.com/issues/487 18:52
tychicussudo cat /var/lib/mysql/mysql.passwd21:46
tychicusI get: No such file or directory21:46
tychicusis there another method for getting the root mysql password?21:47
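No answer was logged; one hedged guess, assuming the database was deployed by the mysql or percona-cluster charm, is that the generated root password lives in Juju leader settings rather than in that file. The key name varies by charm, so dumping all leader settings is the safer first step:

    # dump all leader settings for the application (may include generated passwords)
    juju run --unit mysql/0 leader-get
    # percona-cluster stores it under a key such as root-password (assumption)
    juju run --unit mysql/0 'leader-get root-password'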
