[Kid] | if i run a juju bootstrap manual/ubuntu@10.1.1.5 test-model | 01:41 |
[Kid] | which SSH key does it use? | 01:41 |
[Kid] | my understanding was the juju-client-key | 01:41 |
[Kid] | which is in the /home/user/.local/share/ssh/ directory | 01:41 |
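For reference, a rough sketch of what [Kid] is describing (paths and file names are the usual juju 2.x defaults, so treat them as assumptions to verify on your install):

```shell
# Manual provider: at bootstrap time juju connects with your normal SSH
# credentials (~/.ssh keys or ssh-agent), so the target must accept those.
juju bootstrap manual/ubuntu@10.1.1.5 test-model

# The client-generated key pair juju uses for later access typically lives
# under ~/.local/share/juju/ssh/ (juju_id_rsa and juju_id_rsa.pub).
ls ~/.local/share/juju/ssh/
```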
=== frankban|afk is now known as frankban | ||
=== freyes__ is now known as freyes | ||
=== Makyo is now known as Guest26189 | ||
xilet | juju v2.0.2, it looks like the controller isn't coming up, any juju commands come back with ERROR unable to connect to API: websocket.Dial wss://IP:17070/model/<UUID>/api. And nothing is listening on 17070, is there a way to manually start that service? | 15:51 |
bdx | elasticsearch-peeps: making headway - http://paste.ubuntu.com/25831054/, http://paste.ubuntu.com/25831045/, https://github.com/jamesbeedy/layer-elasticsearch-base/blob/refactor_for_network_get_juju_2_3/reactive/elasticsearch_base.py | 15:57 |
=== Guest26189 is now known as Makyo | ||
=== frankban is now known as frankban|afk | ||
jamesbenson | quick question: For those who have maas and juju: If you raid all of your disk into one large volume, is there a way to get an openstack deploy to split up that one disk vs use sdb? | 17:10 |
bdx | jamesbenson: yea, you have to create the partitions in maas, then specify the partitions in the charm config | 17:12 |
jamesbenson | bdx!! Awesome! Can you tell me how to specify that in the charm? I've looked and not sure where/what to modify | 17:12 |
bdx | jamesbenson: are you using the ceph-osd charm? | 17:13 |
jamesbenson | yeah | 17:14 |
bdx | https://jujucharms.com/ceph-osd/#charm-config-osd-devices | 17:14 |
jamesbenson | you are awesome! Thank you bdx | 17:16 |
bdx | np | 17:16 |
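For anyone following along, the charm option bdx links would be set roughly like this (device paths here are placeholders for the partitions created in MAAS):

```shell
# Point ceph-osd at specific block devices or partitions.
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc'

# Or set it at deploy time instead:
juju deploy ceph-osd --config osd-devices='/dev/md0'
```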
jamesbenson | bdx, do those directories need to exist prior or will juju make them? | 18:41 |
bdx | jamesbenson: which? | 18:41 |
jamesbenson | "For ceph >= 0.56.6 these can also be directories instead of devices - the charm assumes anything not starting with /dev is a directory instead." | 18:41 |
jamesbenson | osd-devices | 18:41 |
bdx | ah, my bad | 18:43 |
bdx | jamesbenson: https://imgur.com/a/nEqeR | 18:43 |
bdx | ^ for that setup, I would specify /dev/md0 | 18:43 |
bdx | I see what you are saying | 18:43 |
bdx | you want to raid all of your disks in maas, and install ubuntu on to that raid, and then use a directory in the filesystem for ceph osd-device? | 18:44 |
bdx | jamesbenson: are you trying to make the most of your drive bays here? | 18:45 |
jamesbenson | So maas only picks up 1 HD because we raided them all in our controller, not in maas. But I don't want to go into all of our servers to change the raid controller... | 18:45 |
bdx | jamesbenson: I see, thats unfortunate | 18:45 |
bdx | jamesbenson: for openstack with ceph ...... | 18:46 |
bdx | its going to be optimal if you feed ceph physical devices | 18:46 |
bdx | instead of doing what you are trying to do | 18:46 |
jamesbenson | https://snag.gy/qPgVtR.jpg | 18:47 |
bdx | jamesbenson: moreover, if you want to use all of the disks connected to your controller as ceph osd|journal devices, just get a satadom to plug into your mobo, and configure that to be your / partition in maas | 18:47 |
jamesbenson | So my 6 HDD's read as 10TB. | 18:47 |
bdx | right | 18:48 |
bdx | so you don't want to waste any of those by installing the host os on it | 18:49 |
jamesbenson | Agreed with the physical disks... just we have OLD hard drives and well... things fail a lot, so RAID is easier so we have less babying... | 18:49 |
bdx | jamesbenson: right | 18:49 |
bdx | so like, you can get a POC openstack going with that | 18:50 |
jamesbenson | I don't mind repartitioning the servers through maas, just don't want to go into the raid controller if necessary.... | 18:50 |
bdx | but if you are actually wanting ceph to support any type of workload | 18:50 |
bdx | I just don't think the filesystem backend is really supported | 18:50 |
jamesbenson | are you familiar with fuel? | 18:51 |
bdx | jamesbenson: yeah | 18:51 |
bdx | its what initially turned me onto using juju to deploy openstack years ago | 18:51 |
[Kid] | guys, i am trying to enable HA on my controllers and I am getting ERROR failed to create new controller machines: | 18:52 |
[Kid] | i have 1 controller and 2 machines added | 18:52 |
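For context, what [Kid] is attempting looks roughly like this in juju 2.x (the placement flag and machine numbers are illustrative):

```shell
# Promote the controller to HA with three controller machines; juju
# provisions new machines unless placement is given.
juju enable-ha -n 3

# Optionally place the new controllers on specific existing machines:
juju enable-ha -n 3 --to 1,2

# Watch the controller model for the new machines coming up:
juju status -m controller
```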
jamesbenson | I'm not sure how fuel did it, but they were able to deploy openstack over our raid'ed systems.... but trying to migrate off since it's not supported with mirantis anymore. | 18:52 |
jamesbenson | yeah, we are using kolla now, but also trying to test juju to see how that stability is and also for deployments of other systems... | 18:52 |
bdx | yeah ... I have done that with fuel too | 18:53 |
bdx | https://github.com/openstack-charmers/openstack-on-lxd/blob/master/bundle-mitaka-novalxd.yaml#L81 | 18:53 |
bdx | jamesbenson: I think its arbitrary filesystem path | 18:53 |
bdx | try using /srv/osd | 18:53 |
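A sketch of the directory-backed variant bdx suggests (the path is arbitrary; per the charm docs quoted earlier, anything not starting with /dev is treated as a directory rather than a device):

```shell
# Use a filesystem path as the OSD backend instead of a block device.
juju config ceph-osd osd-devices='/srv/osd'
```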
jamesbenson | should I deploy juju with lxd then? | 18:53 |
bdx | jamesbenson: there are a number of ways to go, it all depends on the use case | 18:54 |
jamesbenson | lol, simple! we're not fancy here.... ;-) | 18:54 |
bdx | jamesbenson: first things first, the filesystem backend is going to kill you | 18:55 |
bdx | I have to be honest | 18:55 |
[Kid] | jamesbenson, you can only deploy juju to lxd on a single localhost | 18:55 |
[Kid] | using the localhost provider | 18:55 |
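That is, the localhost provider bootstraps an all-in-one environment into LXD containers on a single machine, roughly:

```shell
# Bootstrap a controller into local LXD containers (single host only).
juju bootstrap localhost lxd-test

# Models and units are then deployed as containers on that host.
juju add-model test
```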
jamesbenson | well we want it production potentially so more than devstack env... | 18:55 |
bdx | jamesbenson: yeah, latest ceph doesn't even work with filesystem backend I don't think (with bluestore and all) ... possibly it does | 18:56 |
bdx | last time I tried, I encountered issues | 18:58 |
jamesbenson | Ceph L has xfs as a backend... | 18:58 |
bdx | totally | 18:59 |
bdx | I mean do what you want to do | 18:59 |
jamesbenson | https://snag.gy/yadZHm.jpg | 18:59 |
jamesbenson | I did this for another rack with Luminous... | 18:59 |
jamesbenson | using ceph deploy and manually modified the raid controllers | 18:59 |
bdx | ah, | 18:59 |
bdx | I see | 18:59 |
skay | I got a stack trace out of hookenv.log because of inadvertently trying to log too much info, https://paste.ubuntu.com/25831914/ | 19:00 |
bdx | jamesbenson: https://jujucharms.com/ceph-osd/#charm-config-bluestore | 19:00 |
skay | (I had django.db logging turned to DEBUG) | 19:00 |
bdx | jamesbenson: you are going to want bluestore, and direct-io | 19:01 |
bdx | https://jujucharms.com/ceph-osd/#charm-config-use-direct-io | 19:01 |
bdx | you just won't be able to take advantage of the goodies with the directory based osd devices | 19:01 |
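The two options bdx links would be set together, roughly as follows (option names per the linked charm config pages; as noted above, directory-based OSD devices won't get the benefit):

```shell
# Enable bluestore and direct I/O on the ceph-osd units.
juju config ceph-osd bluestore=true use-direct-io=true
```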
jamesbenson | yeah, I was a bit confused as to why bluestore wasn't being used by default... | 19:01 |
jamesbenson | but that's how ceph-deploy did it... so I didn't argue. | 19:02 |
bdx | right, well, ceph-deploy should be how you learn ceph, before professionally deploying with juju | 19:02 |
jamesbenson | completely agree... unfortunately time is my enemy.... and too frequently hope/assume defaults are "good enough" options for us here and dig into it further as necessary. | 19:07 |
dvavili | Can Juju be used to deploy P3 instances on AWS that was announced a couple of days back? | 23:19 |
shadoxx | Can I run Juju on my Maas controller? | 23:42 |
bdx | dvavili: submit a bug on that and it will get added | 23:53 |
bdx | shadoxx: you can run the juju client from wherever you install it at | 23:53 |
shadoxx | bdx: i mean, can i run juju on the same machine as my maas region/rack controller? | 23:54 |
bdx | shadoxx: run juju(very ambiguous), you mean the juju client? | 23:55 |
shadoxx | Is there a Juju server? | 23:56 |
bdx | shadoxx: yes, there are a few different components for sure | 23:57 |
bdx | The juju client is what you use to communicate with a juju controller | 23:57 |
shadoxx | Ok, so. Can I install the Juju Controller on the same machine as my MaaS Region/Rack Controller | 23:58 |
bdx | You *can* | 23:58 |
shadoxx | But not recommended lol | 23:58 |
shadoxx | I figured that it's not recommended. Just not sure if it was technically possible or not. | 23:59 |
bdx | It is definitely possible | 23:59 |
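One way to do what shadoxx asks, co-locating a controller on the MAAS host itself, is the manual provider (the hostname below is hypothetical; juju just needs non-interactive SSH access to that box):

```shell
# Bootstrap a juju controller directly onto the MAAS region/rack host,
# treating it as a manually provisioned machine.
juju bootstrap manual/ubuntu@maas-host.example maas-colo
```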