=== sparkieg` is now known as sparkiegeek
=== frankban|afk is now known as frankban
=== xnox_ is now known as xnox
[08:31] Hello! Is it possible to move juju 'units' deployed in LXD containers between machines?
[08:32] i mean move containers
[08:36] well if your units are stateless, it's better to just "juju add-unit --to lxd:" and "juju destroy-unit"
=== Trefex_ is now known as Trefex
[08:46] Thank you, sparkiegeek. That was the solution I thought of.
=== med_ is now known as Guest45512
=== Guest45512 is now known as med_
[15:04] o/ juju world
[15:05] hi here
[15:05] party
[15:06] just for info, for one of my 1.5.3 CDK clusters upgrading to 1.6.1, I had a strange issue with kube-dns claiming that its pod cannot mount its volume (kube-dns has a volume??): http://paste.ubuntu.com/24407777/
[15:06] don't know if it's normal
[15:43] http://paste.ubuntu.com/24408100/
[15:43] kubernetes-dashboard is also unavailable
[15:44] (it's a test cluster, no urgency, but just to let you know in case somebody on the CDK team has already seen that kind of problem, before I submit a bug)
[15:52] hmm, seems it's the return of https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
=== frankban is now known as frankban|afk
[17:19] Does anyone know how to change the default bootstrap config options for juju? I managed to get enable-os-update turned off by default and it's preventing me from bootstrapping without manual intervention
[17:22] cory_fu: perhaps you set it using "juju model-defaults"?
[17:23] sparkiegeek: That helps for setting defaults on new models, but it requires a controller to already be bootstrapped. I'm trying to influence the default config of the controller during bootstrap.
[17:24] sparkiegeek: I can do it per-bootstrap with `juju bootstrap --config enable-os-update=true` but I'm trying to figure out how I ended up with it defaulting to false
[17:24] Oh, wait
[17:25] Of course. I created an alias that's sending those options.
[17:25] >_<
[17:25] haha
[18:19] cory_fu: wtg ;)
=== scuttlemonkey is now known as scuttle|afk
[20:59] mbruzek you around?
[20:59] Sure
[21:00] I am looking at doing an install of 1.6.1 k8s
[21:00] on top of OpenStack; didn't know the best way you would recommend. Just use conjure-up? https://jujucharms.com/canonical-kubernetes/
[21:02] You only need conjure-up if you want to deploy to LXD. Otherwise just make sure Juju can talk to your OpenStack and you should be good with "juju deploy canonical-kubernetes".
[21:02] juju 2.0?
[21:02] conjure-up just calls Juju for you. Yep 2.x
[21:02] and does it "just work" with ingress and OpenStack?
[21:05] Networking is complicated. If you are able to reach your OpenStack VMs without Kubernetes then you should be fine. In my test cases the VMs did not have ingress access.
[21:07] so will I be able to reach my services somehow though?
[21:07] firl: so, the way it works is your workers deploy ingress controllers on ports 80/443 respectively
[21:07] yes,
[21:08] I mean Juju doesn't block the ports to prevent me from putting up an ha_proxy, for example
[21:08] firl: so long as you have a route to those VMs and you can reach ports 80/443, the rest will be handled by the ingress objects you declare with your applications.
[21:08] ok, I just remember juju not exposing those ports
[21:08] firl: correct, you can expose/unexpose the workers respectively, but yeah.
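
A minimal sketch of the expose/unexpose flow described just above, assuming the worker application is named kubernetes-worker as in the CDK bundle:

    # open the firewall for the ports the charm has declared (80/443 for the ingress controller)
    juju expose kubernetes-worker

    # withdraw external access again later if needed
    juju unexpose kubernetes-worker
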
[21:08] so only port 80/443 support right now?
[21:08] firl: what you'll find that's slightly more complicated is if you want to use the NodePort networking model
[21:09] right, you'll wind up needing to do a juju run --application kubernetes-worker "open-port 6000" for example
[21:09] that's the only caveat: you have to manually expose those ports
[21:09] gotcha
[21:09] that's perfectly acceptable, I just remember the first time I tried 8 months ago I couldn't get that going
[21:09] is the `juju run --application kubernetes-worker "open-port 6000"` documented anywhere?
[21:09] * lazyPower checks the readme
[21:10] i'm not positive we documented that
[21:10] yeah, undocumented behavior at the moment firl, but i'll file a bug and get that added for the next release
[21:10] sweet
[21:11] I will go through and see what I can do; I think I have to adjust my environment to accept juju 2.0 first
[21:11] I will report back here if you guys want on how it went
[21:12] sounds good firl, make sure you ping me :)
[21:12] sweet, thanks again as always
[21:12] I monitor #juju but less actively than prior
[21:12] s/prior/previously/
[21:13] gotcha
[21:13] I can imagine, looks like you guys have been busy with Juju as a Service too
[21:13] Is the hope to get it integrated with the Kubernetes deployments there, to kind of make it an easier deployment than the current Azure one?
[21:14] firl: i'm not sure i understand the question?
[21:15] https://jujucharms.com/jaas
[21:15] for example the default Kubernetes in Azure doesn't allow for scaling post-install or attaching to a scaling group etc
[21:16] Juju-deployed Kubernetes certainly supports both of those cases (however instead of scaling groups, we use an autoscaler or manual scaling)
[21:17] firl: one such autoscaler exists as a community submission. SimonKLB wrote the elastisys autoscaler charm, so you get all the autoscaling goodies it brings with it.
[21:17] I will have to check that out. It's nice to see you guys progressing towards that
[22:14] anyone know where the config data for juju2 is stored locally?
[22:15] firl: ~/.local/share/juju
[22:15] ty
[22:23] anyone know of a juju 2.0 environment generator for openstack? I am having issues specifying the network id
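
On that last question, one approach that may help, assuming the OpenStack provider's `network` model-config key (which takes a network label or UUID; the cloud name and placeholder value below are illustrative):

    # pick the OpenStack network when bootstrapping
    juju bootstrap my-openstack --config network=<network-name-or-uuid>

    # or change it on an existing model
    juju model-config network=<network-name-or-uuid>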