[02:00] thedac: My juju version is 2.3.1-xenial-amd64
[02:03] thedac, maas version is 2.3.0-6434-gd354690-0ubuntu1~16.04.1
[03:38] is there any way to completely wipe a juju controller/env? I made the mistake of trying to upgrade the juju version, which corrupted the env. while trying to recover/remove/clean I just released the bootstrap node and uninstalled juju (apt-get remove juju). then tried to start from scratch - install juju, etc. the problem is there are still remnants of the old system. I just want to wipe and start from scratch.
[03:38] juju kill-controller, destroy-controller all fail.
[03:40] found this on the web - will this still work? "sudo apt-get purge --auto-remove juju"
[03:42] jac_cplane that doesn't work, i tried
[03:42] yep
[03:42] I just tried. btw - i guess there is no way to update juju
[03:42] oops, i found the error with lxc's. This is in the logs:
[03:43] ERROR juju.worker runner.go:392 exited "0-container-watcher": worker "0-container-watcher" exited: setting up container dependencies on host machine: could not find unused subnet
[03:44] thedac ^^
[03:45] jac_cplane, can you open a doc issue on the wipe stuff please? https://github.com/juju/docs/issues/new
[03:46] McL0v1n - looks like i found the way. juju unregister
[03:46] then create a new one - same name overwrites the prev. then you can bootstrap it
[03:48] ahh, forgot about that
[03:48] also juju kill-controller
[03:48] juju kill-controller does not work in this case i mentioned.
[03:48] I'll add to the docs
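(A minimal sketch of the wipe-and-restart sequence worked out above; the controller and cloud names, mycontroller and mymaas, are hypothetical placeholders, not from the log:)

    # try the supported teardown first; this destroys the controller and all its models
    juju kill-controller mycontroller
    # if the controller is corrupted or unreachable, drop the client's local record of it instead
    juju unregister mycontroller
    # bootstrapping again under the same name replaces the stale entry
    juju bootstrap mymaas mycontroller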
[03:50] any idea about why my lxcs are not provisioning? The error i posted above is in the host machine's logs
[03:59] jac_cplane, see it, thanks
[04:03] I think it might have to do with: https://bugs.launchpad.net/juju/+bug/1665648
[04:03] Bug #1665648: Juju 2.0.3 fails to deploy LXD container lxdbr0 overlapping subnets
[04:03] mup we posted at the same time
[04:03] i'm using a /8 10. subnet
=== gsamfira_ is now known as gsamfira
[14:30] hey folks, can you have key names like "foo-bar" in config.yaml or do you need to use underscores (foo_bar)?
[14:33] mattyw: we do stuff like:
[14:33] storage-default-block-source:
[14:33]   value: maas
[14:33]   source: model
[14:33] so foo-bar seems fine in yaml
[14:33] jam, ok great thanks
[14:34] mattyw: yamllint.com likes it fine, too
[14:34] jam, I've seen lots of charms use _, even going so far as converting them back to - in the charm code
[14:34] jam, so maybe it was a problem and not anymore?
[14:35] mattyw: being ok in yaml != being ok in Mongo docs
[14:35] though I think it is ok there, too.
[14:35] jam, charm config docs?
[14:35] as we certainly use {"model-uuid": blah}
[14:35] ok great
[14:35] I'll go with - for now thanks
[17:06] when i bootstrap a controller to e.g. vsphere/dc1 and want another one in vsphere/dc2, what do i have to do
[17:07] i mean a failover controller
[17:09] or do i have to add another controller and move it with vmotion
=== frankban is now known as frankban|afk
[17:22] ybaumy: here is some documentation on juju high availability: https://jujucharms.com/docs/2.3/controllers-ha
[17:22] ybaumy: the juju enable-ha help page has info on specifying already created machines to be controllers. not sure if the vsphere dcs are part of constraints or not.
[17:23] ybaumy: you'd spin up a vsphere instance where you wanted it - then use juju add-machine ssh:… to let juju know about it
[17:24] ybaumy: but the --to or --constraints flags would be better if possible
[17:27] hml i now did an enable-ha and moved one controller to dc2
[17:27] hml couldn't find anything that relates to creating it in dc2 in the first place
[17:30] it also would make sense to distribute kubernetes etcd and master VMs to different DCs
[17:31] but apart from that and some dns problems i'm happy so far with it
[17:34] ybaumy: typically juju distributes between zones,
[17:35] ybaumy: looks like juju considers a vsphere dc as a region
[17:36] ybaumy: not sure if a region works with a placement directive or not.
[17:36] hml so i have to create yaml to create masters in region dc1 and then dc2 ... with constraints?
[17:38] ybaumy: you don't specify a region with a constraint: https://jujucharms.com/docs/2.3/reference-constraints#vsphere-provider
[17:44] rick_h: any ideas on how to do juju ha across regions (or vsphere dcs)?
[17:47] hml: hmm, can you add-machine into the region and try to enable-ha --to ?
[17:47] hml: I think it'll be on you to make sure the controllers are reachable on a network level there
[17:47] rick_h: that was my thought too, just wondering…
[17:47] hml: yea, not tried it but I think that should work
[17:47] ybaumy: ^^^
[17:48] rick_h: regions don't work with placement directives correct?
[17:48] hml: no, they're model level when you add-model and such
[17:48] hml: as all of the model is in the same region, otherwise there could be issues
[17:49] hml: region->region would need to be a CMR setup and controllers don't do that atm
[17:49] rick_h: right
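(A sketch of the manually-placed HA flow suggested above; untested, per the log, and ubuntu@dc2-host is a hypothetical stand-in for an instance you created yourself in the second datacenter:)

    # create the instance in vsphere dc2 out of band, then register it with juju
    juju add-machine ssh:ubuntu@dc2-host    # prints the new machine id, e.g. "created machine 3"
    # grow the controller to 3 nodes, placing one of them on that machine
    juju enable-ha -n 3 --to 3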
[18:00] rick_h: hml thanks for clearing that up. but from a HA perspective it's suboptimal
[18:01] ybaumy: I'm sure there's ways to do it differently but my experience with clouds has been regions are for fallback and cross region comms can be slower/more $$ than intra-region.
[18:01] having a controller in different regions raises all kinds of questions about the workloads in that region/etc.
[18:04] rick_h: from your perspective it's ok to handle it this way. from mine it isn't, since the datacenters are only a few kilometers apart
[18:05] rick_h: so HA is something i'm looking for
[18:05] ybaumy: understood, it's an interesting perspective and if the networking/latency is good I can see that. It starts to feel like regions should be zones in that situation, as far as spreading all units of workloads out across them as well.
[18:06] I mean if the region goes boom then any workloads the controller is managing go with it
[18:06] rick_h: well i'm talking about an on-premise cloud though
[18:06] ybaumy: right, do you have zones in each region?
[18:07] rick_h: that's a different approach
[18:07] * rick_h is curious how it's set up
[18:08] currently we spread masters and workers and etcd's across two datacenters. pods are also set up in a way to handle workloads if one DC is out
[18:08] kubernetes is what we are interested in with juju
[18:08] ybaumy: which on-prem cloud is this? /me reads back
[18:09] oic, vsphere
[18:09] rick_h: our company's vsphere
[18:09] rick_h: we are using openshift atm from redhat but it costs .. though i'm looking into other directions
[18:10] yea, I'm not familiar with vsphere as much. I know in maas/openstack we think of zones the way you're doing dc's in vsphere. It's why we're hitting this mismatch, as we obviously really care/want to have HA wherever possible in Juju and the workloads running.
[18:10] rick_h: we would really like to see that
[18:12] rick_h: first i made a test with maas and vsphere and juju and then kubernetes on top, but maas itself is also not what i want. 2.3 is really neat
[18:12] juju 2.3
[18:12] i like that it's talking directly with the API now
[18:12] ybaumy: I'm glad you're finding Juju interesting. I wish I had better news on the cross zone HA setup. Like I was saying, you can work around it by manually adding machines I think, but then once you go deploy things they'll only go into the original region, which isn't what you're going to want
[18:14] rick_h: maybe in the future. tomorrow our architects will get a presentation of the juju setup from me. i hope they like it
[18:14] as much as i do
[18:14] ybaumy: in the long run it'd be interesting with 2.3 and cross model relations if the charms in k8s could treat relations across the models as a way to do its HA work, so you'd just deploy two k8s and relate them and they'd combine into a super k8s
[18:14] ybaumy: cool, let us know if we can be of any help
[18:14] rick_h: i will give feedback
[18:14] <3
[18:15] i'm the canonical guy though .. most ppl think of canonical as desktop client OS and services
[18:15] i hope i can change that
[18:19] ybaumy: same here, we've put a lot of time and effort beyond the desktop :)
[18:22] rick_h: in germany i tell you it's really hard to find ppl who use ubuntu and other services professionally
[18:23] ybaumy: ah, you're in germany? Yea, that's a tough nut there. Wasn't/isn't germany still very suse friendly, as they were from there?
[18:24] rick_h: we use suse for SAP of course because there is no competition in this case. we use redhat and centos for everything else. but you are right, SuSE is really popular
[18:25] i tried their container platform lol
[18:26] and ceph storage
[18:26] it's full of bugs
[18:26] you just can't use it
[18:26] SLES enterprise server is really stable for SAP and HANA
[18:32] anyways, i call it a day. beer is waiting. ttyl
=== thumper is now known as thumper-away