[01:41] <[Kid]> if i run a juju bootstrap manual/ubuntu@10.1.1.5 test-model
[01:41] <[Kid]> which SSH key does it use?
[01:41] <[Kid]> my understanding was the juju-client-key
[01:41] <[Kid]> which is in the /home/user/.local/share/ssh/ directory
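[Editor's note] A hedged sketch of the key handling as I understand it for juju 2.x; note the client key directory is `~/.local/share/juju/ssh/` (with a `juju` component, slightly different from the path quoted above), and the exact behavior may vary by juju version:

```shell
# The juju client generates its own key pair the first time it needs one.
# On a default juju 2.x install it lives under ~/.local/share/juju/ssh/:
ls ~/.local/share/juju/ssh/
# expected files: juju_id_rsa  juju_id_rsa.pub

# For a manual-provider bootstrap, the *initial* connection to the target
# host uses your normal SSH credentials (ssh-agent / ~/.ssh keys); the juju
# client key is then installed on the machine for subsequent juju access.
juju bootstrap manual/ubuntu@10.1.1.5 test-model
```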
[15:51] <xilet> juju v2.0.2, it looks like the controller isn't coming up; any juju commands come back with ERROR unable to connect to API: websocket.Dial wss://IP:17070/model/<UUID>/api. And nothing is listening on 17070. Is there a way to manually start that service?
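[Editor's note] A hedged sketch of how one might check the API server by hand. It assumes SSH access to the controller machine, a systemd-based series, and that the controller is machine 0 (so the agent service is `jujud-machine-0`, the usual default):

```shell
# On the controller machine itself. The juju 2.x API server runs inside
# the machine agent; there is no separate API service to start.
sudo systemctl status jujud-machine-0    # is the agent running at all?
sudo systemctl restart jujud-machine-0   # restart it manually if not

# Check whether anything is now listening on the API port:
sudo ss -tlnp | grep 17070

# The agent log usually says why the API server failed to come up:
sudo tail -n 100 /var/log/juju/machine-0.log
```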
[15:57] <bdx> elasticsearch-peeps: making headway - http://paste.ubuntu.com/25831054/, http://paste.ubuntu.com/25831045/, https://github.com/jamesbeedy/layer-elasticsearch-base/blob/refactor_for_network_get_juju_2_3/reactive/elasticsearch_base.py
[17:10] <jamesbenson> quick question:  For those who have maas and juju: If you raid all of your disk into one large volume, is there a way to get an openstack deploy to split up that one disk vs use sdb?
[17:12] <bdx> jamesbenson: yea, you have to create the partitions in maas, then specify the partitions in the charm config
[17:12] <jamesbenson> bdx!!  Awesome! Can you tell me how to specify that in the charm?  I've looked and not sure where/what to modify
[17:13] <bdx> jamesbenson: are you using the ceph-osd charm?
[17:14] <jamesbenson> yeah
[17:14] <bdx> https://jujucharms.com/ceph-osd/#charm-config-osd-devices
[17:16] <jamesbenson> you are awesome!  Thank you bdx
[17:16] <bdx> np
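[Editor's note] The advice above as a config call, a sketch only: the partition names below are placeholders for whatever partitions were created in MAAS, and the option name comes from the linked ceph-osd charm config page:

```shell
# Point the ceph-osd charm at the MAAS-created partitions (or devices).
# osd-devices takes a whitespace-separated list.
juju config ceph-osd osd-devices='/dev/sdb1 /dev/sdc1'
```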
[18:41] <jamesbenson> bdx, do those directories need to exist prior or will juju make them?
[18:41] <bdx> jamesbenson: which?
[18:41] <jamesbenson> "For ceph >= 0.56.6 these can also be directories instead of devices - the charm assumes anything not starting with /dev is a directory instead."
[18:41] <jamesbenson> osd-devices
[18:43] <bdx> ah, my bad
[18:43] <bdx> jamesbenson: https://imgur.com/a/nEqeR
[18:43] <bdx> ^ for that setup, I would specify /dev/md0
[18:43] <bdx> I see what you are saying
[18:44] <bdx> you want to raid all of your disks in maas, and install ubuntu on to that raid, and then use a directory in the filesystem for ceph osd-device?
[18:45] <bdx> jamesbenson: are you trying to make the most of your drive bays here?
[18:45] <jamesbenson> So maas only picks up 1 HD because we raided them all in our controller, not in maas.  But I don't want to go into all of our servers to change the raid controller...
[18:45] <bdx> jamesbenson: I see, thats unfortunate
[18:46] <bdx> jamesbenson: for openstack with ceph ......
[18:46] <bdx> it's going to be optimal if you feed ceph physical devices
[18:46] <bdx> instead of doing what you are trying to do
[18:47] <jamesbenson> https://snag.gy/qPgVtR.jpg
[18:47] <bdx> jamesbenson: moreover, if you want to use all of the disks connected to your controller as ceph osd|journal devices, just get a satadom to plug into your mobo, and configure that to be your / partition in maas
[18:47] <jamesbenson> So my 6 HDD's read as 10TB.
[18:48] <bdx> right
[18:49] <bdx> so you don't want to waste any of those by installing the host os on it
[18:49] <jamesbenson> Agreed on the physical disks... it's just that we have OLD hard drives and, well... things fail a lot, so RAID is easier and means less babying...
[18:49] <bdx> jamesbenson: right
[18:50] <bdx> so like, you can get a POC openstack going with that
[18:50] <jamesbenson> I don't mind repartitioning the servers through maas, just don't want to go into the raid controller if necessary....
[18:50] <bdx> but if you are actually wanting ceph to support any type of workload
[18:50] <bdx> I just don't think the filesystem backend is really supported
[18:51] <jamesbenson> are you familiar with fuel?
[18:51] <bdx> jamesbenson: yeah
[18:51] <bdx> it's what initially turned me onto using juju to deploy openstack years ago
[18:52] <[Kid]> guys, i am trying to enable HA on my controllers and I am getting ERROR failed to create new controller machines:
[18:52] <[Kid]> i have 1 controller and 2 machines added
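[Editor's note] A hedged sketch of the HA attempt being described; the machine numbers 1 and 2 are assumptions based on "1 controller and 2 machines added", and flag support may vary by juju 2.x version:

```shell
# Promote the controller to a 3-member HA cluster, placing the two new
# controller machines on the already-added machines 1 and 2:
juju enable-ha -n 3 --to 1,2

# If machine creation fails, the controller model's status and logs
# usually say why:
juju status -m controller
juju debug-log -m controller --replay
```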
[18:52] <jamesbenson> I'm not sure how fuel did it, but they were able to deploy openstack over our raided systems....  but trying to migrate off since it's not supported with mirantis anymore.
[18:52] <jamesbenson> yeah, we are using kolla now, but also trying to test juju to see how that stability is and also for deployments of other systems...
[18:53] <bdx> yeah ... I have done that with fuel too
[18:53] <bdx> https://github.com/openstack-charmers/openstack-on-lxd/blob/master/bundle-mitaka-novalxd.yaml#L81
[18:53] <bdx> jamesbenson: I think it's an arbitrary filesystem path
[18:53] <bdx> try using /srv/osd
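[Editor's note] bdx's suggestion as a config call (a sketch; `/srv/osd` is his example path, and per the charm config quoted earlier, any osd-devices entry not starting with `/dev` is treated as a directory):

```shell
# Use a directory on the root filesystem as the OSD backing store.
# The charm creates/uses the directory rather than a block device.
juju config ceph-osd osd-devices='/srv/osd'
```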
[18:53] <jamesbenson> should I deploy juju with lxd then?
[18:54] <bdx> jamesbenson: there are a number of ways to go, it all depends on the use case
[18:54] <jamesbenson> lol, simple!  we're not fancy here.... ;-)
[18:55] <bdx> jamesbenson: first things first, the filesystem backend is going to kill you
[18:55] <bdx> I have to be honest
[18:55] <[Kid]> jamesbenson, you can only deploy juju to lxd on a single localhost
[18:55] <[Kid]> using the localhost provider
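[Editor's note] The localhost (LXD) provider mentioned above, as a sketch; the model name is a placeholder:

```shell
# The localhost provider bootstraps a controller into LXD containers on
# this single machine - handy for dev/POC, not a multi-node cloud:
juju bootstrap localhost lxd-test
juju add-model openstack
```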
[18:55] <jamesbenson> well, we potentially want it in production, so more than a devstack env...
[18:56] <bdx> jamesbenson: yeah, latest ceph doesn't even work with filesystem backend I don't think (with bluestore and all) ... possibly it does
[18:58] <bdx> last time I tried, I encountered issues
[18:58] <jamesbenson> Ceph L has xfs as a backend...
[18:59] <bdx> totally
[18:59] <bdx> I mean do what you want to do
[18:59] <jamesbenson> https://snag.gy/yadZHm.jpg
[18:59] <jamesbenson> I did this for another rack with Luminous...
[18:59] <jamesbenson> using ceph deploy and manually modified the raid controllers
[18:59] <bdx> ah,
[18:59] <bdx> I see
[19:00] <skay> I got a stack trace out of hookenv.log because of inadvertently trying to log too much info, https://paste.ubuntu.com/25831914/
[19:00] <bdx> jamesbenson: https://jujucharms.com/ceph-osd/#charm-config-bluestore
[19:00] <skay> (I had django.db logging turned to DEBUG)
[19:01] <bdx> jamesbenson: you are going to want bluestore, and direct-io
[19:01] <bdx> https://jujucharms.com/ceph-osd/#charm-config-use-direct-io
[19:01] <bdx> you just won't be able to take advantage of the goodies with the directory based osd devices
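[Editor's note] The two options bdx links above, as one config call; the option names (`bluestore`, `use-direct-io`) come from the linked ceph-osd charm config pages, but whether they apply to directory-backed OSDs is exactly the caveat raised here:

```shell
# Enable the bluestore backend and direct I/O on the ceph-osd charm:
juju config ceph-osd bluestore=true use-direct-io=true
```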
[19:01] <jamesbenson> yeah, I was a bit confused as to why bluestore wasn't being used by default...
[19:02] <jamesbenson> but that's how ceph-deploy did it... so I didn't argue.
[19:02] <bdx> right, well, ceph-deploy should be how you learn ceph, before professionally deploying with juju
[19:07] <jamesbenson> completely agree... unfortunately time is my enemy.... too frequently I hope/assume the defaults are "good enough" for us here and dig into it further as necessary.
[23:19] <dvavili> Can Juju be used to deploy P3 instances on AWS that was announced a couple of days back?
[23:42] <shadoxx> Can I run Juju on my Maas controller?
[23:53] <bdx> dvavili: submit a bug on that and it will get added
[23:53] <bdx> shadoxx: you can run the juju client from wherever you install it at
[23:54] <shadoxx> bdx: i mean, can i run juju on the same machine as my maas region/rack controller?
[23:55] <bdx> shadoxx: run juju(very ambiguous), you mean the juju client?
[23:56] <shadoxx> Is there a Juju server?
[23:57] <bdx> shadoxx: yes, there are a few different components for sure
[23:57] <bdx> The juju client is what you use to communicate with a juju controller
[23:58] <shadoxx> Ok, so. Can I install the Juju Controller on the same machine as my MaaS Region/Rack Controller
[23:58] <bdx> You *can*
[23:58] <shadoxx> But not recommended lol
[23:59] <shadoxx> I figured that it's not recommended. Just not sure if it was technically possible or not.
[23:59] <bdx> It is definitely possible
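[Editor's note] One hedged way this "possible but not recommended" setup could be done: since MAAS cannot deploy its own host, a controller can be bootstrapped onto the MAAS machine with the manual provider. The hostname and controller name below are placeholders, and a manual-provider controller manages only manually-added machines, not the MAAS cloud itself:

```shell
# Bootstrap a controller directly onto the MAAS host over SSH
# (assumes passwordless sudo for the ubuntu user on that host):
juju bootstrap manual/ubuntu@maas-host.example.com maas-ctrl
```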