[02:00] <McL0v1n> thedac: My juju version is 2.3.1-xenial-amd64
[02:03] <McL0v1n> thedac, maas version is 2.3.0-6434-gd354690-0ubuntu1~16.04.1
[03:38] <jac_cplane> is there any way to completely wipe a juju controller/env? I made the mistake of trying to upgrade the juju version, which corrupted the env. While trying to recover/remove/clean it I released the bootstrap node and uninstalled juju (apt-get remove juju), then tried to start from scratch: install juju, etc. The problem is there are still remnants of the old system. I just want to wipe and start from scratch.
[03:38] <jac_cplane> juju kill-controller and destroy-controller both fail.
[03:40] <jac_cplane> found this on the web - will this still work? "sudo apt-get purge --auto-remove juju"
[03:42] <McL0v1n> jac_cplane: that doesn't work, I tried it
[03:42] <jac_cplane> yep
[03:42] <jac_cplane> I just tried. btw - I guess there is no way to update juju
[03:42] <McL0v1n> oops, I found the error with the lxcs. This is in the logs:
[03:43] <McL0v1n> ERROR juju.worker runner.go:392 exited "0-container-watcher": worker "0-container-watcher" exited: setting up container dependencies on host machine: could not find unused subnet
[03:44] <McL0v1n> thedac ^^
[03:45] <pmatulis> jac_cplane, can you open a doc issue on the wipe stuff please? https://github.com/juju/docs/issues/new
[03:46] <jac_cplane> McL0v1n - looks like I found the way: juju unregister <controller-name>
[03:46] <jac_cplane> then create a new one - the same name overwrites the previous one. then you can bootstrap it
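The recovery path jac_cplane describes can be sketched as follows. The controller and cloud names are hypothetical, and note that `unregister` only removes the controller from the local client's records; it does not release any machines, which is why the bootstrap node had to be released manually first:

```shell
# Drop the client's record of the broken controller; this does NOT
# destroy any machines, it only forgets the local registration.
juju unregister old-controller

# Manually release/wipe the old bootstrap node in MAAS (or the cloud),
# then bootstrap a fresh controller; re-using the same name is fine
# once the stale registration is gone.
juju bootstrap my-maas-cloud old-controller
```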
[03:48] <McL0v1n> ahh, forgot about that
[03:48] <McL0v1n> also juju kill-controller
[03:48] <jac_cplane> juju kill-controller does not work in the case I mentioned.
[03:48] <jac_cplane> I'll add it to the docs
[03:50] <McL0v1n> any idea why my lxcs are not provisioning? The error I posted above is in the host machine's logs
[03:59] <pmatulis> jac_cplane, see it, thanks
[04:03] <McL0v1n> I think it might have to do with: https://bugs.launchpad.net/juju/+bug/1665648
[04:03] <mup> Bug #1665648: Juju 2.0.3 fails to deploy LXD container lxdbr0 overlapping subnets <juju> <maas> <juju:Fix Released by hduran-8> <juju 2.1:Fix Released by jameinel> <https://launchpad.net/bugs/1665648>
[04:03] <McL0v1n> mup we posted at the same time
[04:03] <McL0v1n> I'm using a 10.0.0.0/8 subnet
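A workaround discussed around that bug was to point lxdbr0 at a subnet outside the host's 10.0.0.0/8 range so it no longer overlaps; juju then has no trouble finding the bridge usable. A hedged sketch - the 192.168.100.0/24 range is an arbitrary example, pick any range that is actually free on your network:

```shell
# Move the LXD bridge onto a subnet that does not overlap the
# host's 10.0.0.0/8 network (example range only).
lxc network set lxdbr0 ipv4.address 192.168.100.1/24

# Confirm the new bridge configuration.
lxc network show lxdbr0
```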
[14:30] <mattyw> hey folks, can you have key names like "foo-bar" in config.yaml or do you need to use underscore (foo_bar) ?
[14:33] <jam> mattyw: we do stuff like:
[14:33] <jam> storage-default-block-source:
[14:33] <jam>   value: maas
[14:33] <jam>   source: model
[14:33] <jam> so foo-bar seems fine in yaml
[14:33] <mattyw> jam, ok great thanks
[14:34] <jam> mattyw: yamllint.com likes it fine, too
[14:34] <mattyw> jam, I've seen lots of charms use _, even going so far as converting them back to - in the charm code
[14:34] <mattyw> jam, so maybe it was a problem once and isn't anymore?
[14:35] <jam> mattyw: being ok in yaml != being ok in Mongo docs
[14:35] <jam> though I think it is ok there, too.
[14:35] <mattyw> jam, charm config docs?
[14:35] <jam> as we certainly use {"model-uuid": blah}
[14:35] <mattyw> ok great
[14:35] <mattyw> I'll go with - for now thanks
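For reference, a charm `config.yaml` with a hyphenated key, as discussed above, would look like this. The option name, default, and description are made up for illustration:

```yaml
options:
  foo-bar:
    type: string
    default: baz
    description: Hyphenated option names are valid YAML and valid charm config.
```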
[17:06] <ybaumy> when I bootstrap a controller to e.g. vsphere/dc1 and want another one in vsphere/dc2, what do I have to do?
[17:07] <ybaumy> I mean a failover controller
[17:09] <ybaumy> or do I have to add another controller and move it with vMotion?
[17:22] <hml> ybaumy: here is some documentation on juju high availability: https://jujucharms.com/docs/2.3/controllers-ha
[17:22] <hml> ybaumy: the juju enable-ha help page has info on specifying already-created machines to be controllers. not sure if the vsphere dcs are part of constraints or not.
[17:23] <hml> ybaumy: you'd spin up a vsphere instance where you wanted it - then use juju add-machine ssh:…. to let juju know about it
[17:24] <hml> ybaumy: but the --to or --constraints flags would be better if possible
[17:27] <ybaumy> hml: I just did enable-ha and moved one controller to dc2
[17:27] <ybaumy> hml: couldn't find anything that relates to creating it in dc2 in the first place
[17:30] <ybaumy> it also would make sense to distribute the kubernetes etcd and master VMs across different DCs
[17:31] <ybaumy> but apart from that and some dns problems I'm happy so far with it
[17:34] <hml> ybaumy: typically juju distributes between zones
[17:35] <hml> ybaumy: looks like juju considers a vsphere dc as a region
[17:36] <hml> ybaumy: not sure if a region works with a placement directive or not.
[17:36] <ybaumy> hml: so I have to create a yaml to place masters in region dc1 and then dc2 ... with constraints?
[17:38] <hml> ybaumy: you don't specify a region with a constraint: https://jujucharms.com/docs/2.3/reference-constraints#vsphere-provider
[17:44] <hml> rick_h: any ideas on how to do juju ha across regions (or vsphere dc)?
[17:47] <rick_h> hml: hmm, can you add-machine into the region and try to enable-ha --to ?
[17:47] <rick_h> hml: I think it'll be on you to make sure the controllers are reachable at a network level there
[17:47] <hml> rick_h: that was my thought too, just wondering…
[17:47] <rick_h> hml: yea, not tried it but I think that should work
[17:47] <hml> ybaumy: ^^^
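rick_h's suggestion, explicitly untested in the discussion above, would look roughly like this. The SSH host addresses and machine numbers are hypothetical, and whether `enable-ha --to` behaves correctly across vsphere DCs was not confirmed:

```shell
# Spin up instances in the second DC out-of-band (vsphere side),
# then tell juju about them as manual machines.
juju add-machine ssh:ubuntu@10.1.0.5
juju add-machine ssh:ubuntu@10.1.0.6

# Ask for HA using those specific machines (assumes they came up
# as machines 1 and 2). Network reachability between controllers
# is your responsibility, per rick_h's note above.
juju enable-ha -n 3 --to 1,2
```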
[17:48] <hml> rick_h: regions don’t work with placement directives correct?
[17:48] <rick_h> hml: no, they're model level when you add-model and such
[17:48] <rick_h> hml: as all of the model is in the same region, otherwise there could be issues
[17:49] <rick_h> hml: region->region would need to be a CMR setup and controllers don't do that atm
[17:49] <hml> rick_h: right
[18:00] <ybaumy> rick_h: hml: thanks for clearing that up. but from an HA perspective it's suboptimal
[18:01] <rick_h> ybaumy: I'm sure there are ways to do it differently, but my experience with clouds has been that regions are for fallback, and cross-region comms can be slower/more $$ than intra-region.
[18:01] <rick_h> having a controller in different regions raises all kinds of questions about the workloads in that region/etc.
[18:04] <ybaumy> rick_h: from your perspective it's ok to handle it this way. from mine it isn't, since the datacenters are only a few kilometers apart
[18:05] <ybaumy> rick_h: so HA is something I'm looking for
[18:05] <rick_h> ybaumy: understand, it's an interesting perspective and if the networking/latency is good I can see that. It starts to feel like regions should be zones in that situation as far as spreading all units of workloads out across them as well.
[18:06] <rick_h> I mean if the region goes boom then any workloads the controller is managing go with it
[18:06] <ybaumy> rick_h: well, I'm talking about an on-premise cloud though
[18:06] <rick_h> ybaumy: right, do you have zones in each region?
[18:07] <ybaumy> rick_h: thats a different approach
[18:07]  * rick_h is curious how it's setup 
[18:08] <ybaumy> currently we spread masters, workers and etcds across two datacenters. pods are also set up in a way to handle workloads if one DC is out
[18:08] <ybaumy> kubernetes is what we are interested in with juju
[18:08] <rick_h> ybaumy: which on prem cloud is this? /me reads back
[18:09] <rick_h> oic, vsphere
[18:09] <ybaumy> rick_h: our companies vsphere
[18:09] <ybaumy> rick_h: we are using openshift from redhat atm but it costs .. so I'm looking in other directions
[18:10] <rick_h> yea, I'm not as familiar with vsphere. I know in maas/openstack we think of zones the way you're doing dcs in vsphere. It's why we're hitting this mismatch, as we obviously really care about/want HA wherever possible in Juju and the workloads running.
[18:10] <ybaumy> rick_h: we would really like to see that
[18:12] <ybaumy> rick_h: first I made tests with maas and vsphere and juju and then kubernetes on top, but maas itself is also not what I want. 2.3 is really neat
[18:12] <ybaumy> juju 2.3
[18:12] <ybaumy> I like that it's talking directly with the API now
[18:12] <rick_h> ybaumy: I'm glad you're finding Juju interesting. I wish I had better news on the cross-zone HA setup. Like I was saying, you can work around it by manually adding machines I think, but then once you deploy things they'll only go into the original region, which isn't what you're going to want
[18:14] <ybaumy> rick_h: maybe in the future. tomorrow our architects will get a presentation of the juju setup from me. I hope they like it
[18:14] <ybaumy> as much as i do
[18:14] <rick_h> ybaumy: in the long run it'd be interesting with 2.3 and cross model relations if the charms in k8s could treat relations across the models as a way to do its HA work and so you'd just deploy two k8s and relate them and they'd combine into a super k8s
[18:14] <rick_h> ybaumy: cool, let us know if we can be of any help
[18:14] <ybaumy> rick_h: I will give feedback
[18:14] <rick_h> <3
[18:15] <ybaumy> I'm the canonical guy though .. most ppl think of canonical as a desktop client OS and services
[18:15] <ybaumy> I hope I can change that
[18:19] <rick_h> ybaumy: same here, we've put a lot of time and effort beyond the desktop :)
[18:22] <ybaumy> rick_h: in germany, I tell you, it's really hard to find ppl who use ubuntu and other services professionally
[18:23] <rick_h> ybaumy: ah you're in germany? Yea, that's a tough nut there. Wasn't/isn't germany still very suse friendly as they were from there?
[18:24] <ybaumy> rick_h: we use suse for SAP of course cause there is no competition in this case. we use redhat and centos for everything else. but you are right SuSe is really popular
[18:25] <ybaumy> i tried their container platform lol
[18:26] <ybaumy> and ceph storage
[18:26] <ybaumy> its full of bugs
[18:26] <ybaumy> you just cant use it
[18:26] <ybaumy> SLES enterprise server is really stable for SAP and HANA
[18:32] <ybaumy> anyways, I'll call it a day. beer is waiting. ttyl