/srv/irclogs.ubuntu.com/2017/04/18/#juju.txt

=== sparkieg` is now known as sparkiegeek
=== frankban|afk is now known as frankban
=== xnox_ is now known as xnox
[08:31] <filiplt> Hello! Is it possible to move Juju 'units' deployed in LXD containers between machines?
[08:32] <filiplt> I mean move the containers
[08:36] <sparkiegeek> well, if your units are stateless, it's better to just "juju add-unit --to lxd:<TARGET_MACHINE> <MY_APP>" and "juju destroy-unit <OLD_UNIT>"
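A minimal sketch of sparkiegeek's suggestion, assuming a stateless application; `myapp` and machine `3` are placeholder names, and note that on Juju 2.x the removal command is spelled `juju remove-unit`:

```shell
# Add a replacement unit in a fresh LXD container on the target machine
juju add-unit --to lxd:3 myapp

# Wait for the new unit to become active before removing the old one
juju status myapp

# Remove the original unit (juju destroy-unit on 1.x, remove-unit on 2.x)
juju remove-unit myapp/0
```

This swaps a unit between machines without any live migration of the container itself, which is why it only works cleanly for stateless workloads.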
=== Trefex_ is now known as Trefex
[08:46] <filiplt> Thank you, sparkiegeek. That was the solution I thought of.
=== med_ is now known as Guest45512
=== Guest45512 is now known as med_
[15:04] <Budgie^Smore> o/ juju world
[15:05] <Zic> hi here
[15:05] <rick_h> party
[15:06] <Zic> just for info, for one of my 1.5.3 CDK clusters upgrading to 1.6.1, I had a strange issue with kube-dns claiming that its pod cannot mount its volume (kube-dns has a volume??): http://paste.ubuntu.com/24407777/
[15:06] <Zic> don't know if it's normal
[15:43] <Zic> http://paste.ubuntu.com/24408100/
[15:43] <Zic> kubernetes-dashboard is also unavailable
[15:44] <Zic> (it's a test cluster, no urgency, but just to let you know in case somebody on the CDK team has already seen that kind of problem, before submitting a bug)
[15:52] <Zic> hmm, seems it's the return of https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
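As an aside, kube-dns does mount volumes (its ConfigMap and a service-account token), so a failure like Zic's can be inspected with standard kubectl commands; the pod name below is a hypothetical placeholder to be taken from the pod listing:

```shell
# List the kube-dns pods in the kube-system namespace
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Show events for a failing pod, including FailedMount errors
# (replace the pod name with one from the listing above)
kubectl -n kube-system describe pod kube-dns-1234567890-abcde

# Recent namespace events can also surface volume-mount failures
kubectl -n kube-system get events
```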
=== frankban is now known as frankban|afk
[17:19] <cory_fu> Does anyone know how to change the default bootstrap config options for Juju? I managed to get enable-os-update turned off by default and it's preventing me from bootstrapping without manual intervention
[17:22] <sparkiegeek> cory_fu: perhaps you set it using "juju model-defaults"?
[17:23] <cory_fu> sparkiegeek: That helps for creating new models, but it requires a controller to already be bootstrapped. I'm trying to influence the default config of the controller during bootstrap.
[17:24] <cory_fu> sparkiegeek: I can do it per-bootstrap with `juju bootstrap --config enable-os-update=true` but I'm trying to figure out how I ended up with it defaulting to false
[17:24] <cory_fu> Oh, wait
[17:25] <cory_fu> Of course. I created an alias that's sending those options.
[17:25] <cory_fu> >_<
[17:25] <sparkiegeek> haha
[18:19] <marcoceppi> cory_fu: wtg ;)
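A sketch of the two config scopes discussed above, for reference; the cloud and controller names are placeholders, and the `<cloud> <controller>` argument order assumes Juju 2.1+:

```shell
# Set a default for all *future models* on an existing controller
juju model-defaults enable-os-update=true

# Inspect the current value of a default and where it came from
juju model-defaults enable-os-update

# Override at bootstrap time, before any controller exists
juju bootstrap mycloud mycontroller --config enable-os-update=true
```

The distinction is the one cory_fu ran into: `model-defaults` requires a running controller, while `--config` at bootstrap shapes the controller model itself.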
=== scuttlemonkey is now known as scuttle|afk
[20:59] <firl> mbruzek you around?
[20:59] <mbruzek> Sure
[21:00] <firl> I am looking at doing an install of 1.6.1 k8s
[21:00] <firl> on top of OpenStack, didn't know the best way you would recommend. just use conjure-up? https://jujucharms.com/canonical-kubernetes/
[21:02] <mbruzek> You only need conjure-up if you want to deploy to LXD. Otherwise just make sure Juju can talk to your OpenStack and you should be good with "juju deploy canonical-kubernetes".
[21:02] <firl> juju 2.0?
[21:02] <mbruzek> conjure-up just calls Juju for you. Yep, 2.x
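Assuming the OpenStack cloud is already registered with the Juju client, mbruzek's advice amounts to something like the following (cloud and controller names are placeholders):

```shell
# Bootstrap a controller on the registered OpenStack cloud (Juju 2.x)
juju bootstrap myopenstack k8s-controller

# Deploy the Canonical Distribution of Kubernetes bundle
juju deploy canonical-kubernetes

# Watch the deployment converge
juju status
```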
[21:02] <firl> and does it “just work” with ingress and OpenStack?
[21:05] <mbruzek> Networking is complicated. If you are able to reach your OpenStack VMs without Kubernetes then you should be fine. In my test cases the VMs did not have ingress access.
[21:07] <firl> so will I be able to reach my services somehow though?
[21:07] <lazyPower> firl: so, the way it works is your workers deploy ingress controllers on ports 80/443 respectively
[21:07] <firl> yes,
[21:08] <firl> I mean Juju doesn't block the ports to prevent me from putting up an haproxy, for example
[21:08] <lazyPower> firl: so long as you have a route to those VMs and you can reach ports 80/443, the rest will be handled by the ingress objects you declare with your applications.
[21:08] <firl> ok, I just remember Juju not exposing those ports
[21:08] <lazyPower> firl: correct, you can expose/unexpose the workers respectively, but yeah.
[21:08] <firl> so only ports 80/443 are supported right now?
[21:08] <lazyPower> firl: what you'll find that's slightly more complicated is if you want to use the NodePort networking model
[21:09] <lazyPower> right, you'll wind up needing to do a juju run --application kubernetes-worker "open-port 6000" for example
[21:09] <lazyPower> that's the only caveat: you have to manually open those ports
[21:09] <firl> gotcha
[21:09] <firl> that's perfectly acceptable, I just remember the first time I tried 8 months ago I couldn't get that going
[21:09] <firl> is the `juju run --application kubernetes-worker "open-port 6000"` documented anywhere?
[21:09] * lazyPower checks the readme
[21:10] <lazyPower> I'm not positive we documented that
[21:10] <lazyPower> yeah, undocumented behavior at the moment firl, but I'll file a bug and get that added for the next release
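Putting lazyPower's NodePort caveat together, a sketch of the extra steps (the port 30080 is a hypothetical NodePort value):

```shell
# Open an extra port (e.g. a NodePort) on every kubernetes-worker unit
juju run --application kubernetes-worker "open-port 30080"

# Ensure the workers are exposed so the opened ports are reachable
juju expose kubernetes-worker
```

Ports 80/443 for the ingress controllers are handled by the charm; only ports outside that pair need this manual `open-port` step.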
[21:10] <firl> sweet
[21:11] <firl> I will go through and see what I can do; I think I have to adjust my environment to accept juju 2.0 first
[21:11] <firl> I will report back here, if you guys want, on how it went
[21:12] <lazyPower> sounds good firl, make sure you ping me :)
[21:12] <firl> sweet, thanks again as always
[21:12] <lazyPower> I monitor #juju but less actively than prior
[21:12] <lazyPower> s/prior/previously/
[21:13] <firl> gotcha
[21:13] <firl> I can imagine, looks like you guys have been busy with Juju as a Service too
[21:13] <firl> Is the hope to get it integrated with the Kubernetes deployments there, to kind of make it an easier deployment than the current Azure one?
[21:14] <lazyPower> firl: I'm not sure I understand the question?
[21:15] <firl> https://jujucharms.com/jaas
[21:15] <firl> for example, the default Kubernetes in Azure doesn't allow for scaling post-install or attaching to a scaling group etc.
[21:16] <lazyPower> Juju-deployed Kubernetes certainly supports both of those cases (however, instead of scaling groups, we use an autoscaler or manual scaling)
[21:17] <lazyPower> firl: one such autoscaler exists as a community submission. SimonKLB wrote the elastisys autoscaler charm, so you get all the autoscaling goodies it brings with it.
[21:17] <firl> I will have to check that out. It's nice to see you guys progressing towards that
[22:14] <firl> anyone know where the config data for juju2 is stored locally?
[22:15] <blahdeblah> firl: ~/.local/share/juju
[22:15] <firl> ty
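For reference, the Juju 2.x client keeps its state in that directory; the listing below is a typical layout and may vary by setup and version:

```shell
# Juju 2.x client state lives under ~/.local/share/juju
ls ~/.local/share/juju

# Typical contents:
#   accounts.yaml      - controller login details
#   controllers.yaml   - known controllers and their API endpoints
#   credentials.yaml   - cloud credentials
#   clouds.yaml        - user-defined clouds
#   ssh/               - client SSH keys used by Juju
```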
[22:23] <firl> anyone know of a Juju 2.0 environment generator for OpenStack? I am having issues specifying the network ID
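Juju 2.x replaced the old environments.yaml generator with clouds.yaml plus per-model config; for the OpenStack provider, the instance network can be selected via the `network` model config key. A sketch, where the cloud name, file path, and `<NETWORK_UUID>` are placeholders:

```shell
# Register the OpenStack cloud from a clouds.yaml definition
juju add-cloud myopenstack ./clouds.yaml

# Bootstrap, pinning the Neutron network Juju should attach instances to
juju bootstrap myopenstack --config network=<NETWORK_UUID>

# Or set it as a default for future models on an existing controller
juju model-defaults network=<NETWORK_UUID>
```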

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!