[07:21] <kjackal> Good morning Juju World!
[09:13] <magicaltrout> it's average at best
[09:13] <magicaltrout> it is monday after all
[09:13] <admcleod> who's a little mr grumpypants
[09:13] <blahdeblah> Hi all.  juju add-model has a --credential flag; is there a way to set this after the model has been added? or show which credentials are in use for a model?
[09:14] <magicaltrout> too much wine, too little sleep ;)
[09:14] <blahdeblah> magicaltrout: That'll do it every time
[09:14] <magicaltrout> indeed
[09:14] <magicaltrout> i never learn
[09:16] <admcleod> blahdeblah: well, if you're in the model, you can 'juju add-credential'
[09:16] <blahdeblah> admcleod: I was about to ask whether that did the same thing
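[Editor's note] A sketch of how this can be done from the CLI. `juju show-model` reports the credential a model is using; `juju set-credential` (added in Juju releases newer than this conversation) repoints an existing model at another credential — verify availability with your Juju version:

```shell
# Show which credential a model is using (look for the "credential" key)
juju show-model mymodel --format yaml

# Register a new credential with the controller
juju add-credential aws

# Point an existing model at a different credential
# (only in newer Juju releases)
juju set-credential -m mymodel aws my-other-credential
```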
[14:20] <cnf> btw, is there a proxmox provider for juju?
[15:35] <cory_fu> tvansteenburgh: For https://github.com/juju-solutions/interface-kube-control/pull/2 can you point me to the charm that uses that layer?  Is it kubes-master?
[15:35] <tvansteenburgh> cory_fu: yeah, thanks for looking at that
[15:36] <tvansteenburgh> cory_fu: https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py
[15:36] <cory_fu> tvansteenburgh: Thanks
[15:38] <cory_fu> tvansteenburgh: I assume the issue is that https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L308 is triggering when there are still workers (or whatever they're called)?
[15:39] <tvansteenburgh> cory_fu: https://github.com/juju-solutions/interface-kube-control/issues/1
[15:39] <tvansteenburgh> cory_fu: so, yes
[15:41] <cory_fu> tvansteenburgh: That's very odd.  That's not a peer relation so I can't see any way it should be affected by scaling.
[15:44] <cory_fu> tvansteenburgh: Deploying to see if I can replicate
[15:44] <tvansteenburgh> cory_fu:  oh. i think the problem is on the worker side.
[15:44] <tvansteenburgh> cory_fu: you remove a master, and then all the workers think they're not connected, when they are in fact still connected to the other master
[15:44] <cory_fu> Hrm.  Looking at the requires now
[15:47] <cory_fu> tvansteenburgh: Hrm.  Logic still seems fine.
[15:48] <cory_fu> tvansteenburgh: When my kubes-core finishes coming up, I'll debug it a bit
[15:48] <tvansteenburgh> cory_fu: tyvm
[16:22] <kwmonroe> cory_fu: is it frowned upon to use @hook('upgrade-charm')?  on upgrade, i want to re-trigger an install() method that typically prevents a re-trigger by setting an 'installed' state.  i'd like to remove this state on upgrade and can either do it with @hook, or set additional kv data that i can use in a data_changed handler. is one way better than the other?
[16:24] <kwmonroe> .. keeping in mind that i don't currently have a data_changed handler, so it would be a new function either way.
[16:28] <cory_fu> kwmonroe: There's not really an alternative at this point.  I'd like to remove the need for it, but I think upgrade-charm is probably the best case for @hook.
[16:29] <cory_fu> kwmonroe: Though, if you're talking about upgrade-charm being triggered by a new resource, rather than new charm code, you could use data_changed (or I think there is a file_changed helper as well) instead if you wanted
[16:32] <kwmonroe> cory_fu: both attach triggers upgrade-charm, which triggers config-changed, right?  if so, i think i'll use a when(config.changed) with appropriate data_changed conditions in it.
[16:33] <kwmonroe> s/both//
[16:34] <cory_fu> kwmonroe: Yeah, I think that's right and will work
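[Editor's note] The pattern kwmonroe describes can be sketched as follows. This is a plain-Python simulation of the flag handling only — in a real charm you would use `@hook('upgrade-charm')` and `remove_state('installed')` from charms.reactive, as cory_fu suggests:

```python
# Simulation of the reactive pattern discussed above: install() guards
# itself with an 'installed' flag, and the upgrade-charm hook clears
# that flag so install() re-runs after a charm upgrade.

states = set()
install_runs = 0

def install():
    """Idempotent install: does work once, until the flag is cleared."""
    global install_runs
    if 'installed' in states:
        return
    install_runs += 1          # real install work would happen here
    states.add('installed')

def upgrade_charm():
    """Hook handler: drop the flag so install() runs again."""
    states.discard('installed')

install()        # first run does the work
install()        # no-op: flag is already set
upgrade_charm()  # charm upgraded, clear the flag
install()        # runs again
assert install_runs == 2
```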
[16:52] <cory_fu> tvansteenburgh: Added a comment on https://github.com/juju-solutions/interface-kube-control/issues/1
[16:52] <cory_fu> tvansteenburgh: TL;DR is that the issue is the GLOBAL scope
[16:52] <tvansteenburgh> cory_fu: yeah, that makes sense :/
[17:54] <bdx> kubernetes-peeps: how can I set EXTRA_DOCKER_OPTS on the kubelets .... can I just "snap set kubelet insecure-registry=10.0.0.0/8" ?
[17:54] <lazyPower> bdx: i've been following along with 1685782 - :(  just added some bug heat there too
[17:55] <lazyPower> bdx: that should be the case, yeah. Is that something you need exposed in the charms?
[17:55] <bdx> lazyPower: using a mix of @SaMnCo's blog and my own findings, I have a working CDK + deis ... its amazing
[17:55] <lazyPower> bdx: <3
[17:55] <lazyPower> fan-freaking-tastic news
[17:55] <lazyPower> keep notes on your pain points for issues and I'll be happy to get those on our next planning cycle
[17:56] <bdx> lazyPower: yea, thanks for the heat there... I had to focus somewhere else, as all my other ops are blocked by that darn bug
[17:56] <bdx> I'm beside myself as to how that made its way into a release
[17:56] <bdx> but whatever
[17:57] <lazyPower> bdx: I know you're going to find more dragons in our charms using spaces, just be aware that testing that particular scenario historically hasn't been heavily invested in, and we do indeed need to add that to our future roadmap
[17:57] <SaMnCo> bdx: awesome! Let us blog about this. Today I met with riseML they do a sort of PaaS for Deep Learning
[17:57] <bdx> lazyPower: ok
[17:57] <lazyPower> we're going to need to add support for extra bindings and spaces and get some test suites written around that. It's an eventual certainty
[17:57] <SaMnCo> Will probably need some of that XP for them as well
[17:58]  * lazyPower silently sets modifier to 2x xp rates on the server
[17:59] <lazyPower> magicaltrout: speaking of xp and server rates, i found an older container build for ark servers that's even better than what i sent over. Would love to collab with you on a chart if that's something you're interested in.
[17:59] <bdx> SaMnCo, lazyPower: so ... my problem with the registry (one that I've hit before) can be found here https://github.com/deis/registry/issues/64
[17:59] <bdx> SaMnCo, lazyPower: I've applied the patch (near the bottom) to the workflow ... now I just need to add some EXTRA_DOCKER_OPTS to my kubelets
[18:00] <bdx> for the insecure local registry https://deis.com/docs/workflow/en/v2.2.0/installing-workflow/system-requirements/#docker-insecure-registry
[18:00] <lazyPower> bdx: yeah, your snap config kubelet should work a treat for that, we don't do any validation so you can pass malformed config all day ;)
[18:00] <bdx> so I need to set EXTRA_DOCKER_OPTS="--insecure-registry=10.0.0.0/8"
[18:00] <bdx> ok
[18:00] <bdx> sweet
[18:00] <lazyPower> Cynerva: or ryebot may correct me, but i'm 98% certain that's the case
[18:00] <lazyPower> as they were the primary authors of that feature of the snap(s)
[18:01]  * ryebot catches up
[18:04] <ryebot> hmm
[18:04] <ryebot> That's an env var for kubelet?
[18:05] <ryebot> pretty sure our config handling only works for cli args
[18:06] <bdx> do the kubernetes charms allow for setting EXTRA_DOCKER_OPTS somewhere?
[18:07] <ryebot> Hmm let me see
[18:08] <bdx> ryebot: sudo docker -d --insecure-registry 10.0.0.26:5000
[18:08] <bdx> it is a cli opt
[18:08] <ryebot> Not that I can see, correct me if I'm wrong lazyPower or cynerva
[18:08] <ryebot> ah I see it
[18:09] <ryebot> looks like layer-docker has an extra-opts configuration setting
[18:09] <ryebot> so if you do a `juju config kubernetes-worker`, you should see a `docker-opts` configuration you can alter
[18:10] <bdx> ryebot: so, which should I favor then?
[18:10] <ryebot> bdx: afaict using docker-opts is your only option
[18:11] <bdx> ryebot: if you don't mind me asking, why can't/shouldn't this be done via `snap set`?
[18:11] <ryebot> bdx: looks to me like it's a docker config, which is currently exposed via layer docker
[18:12] <ryebot> bdx: so, I don't think there's currently a snap config that would work
[18:12] <bdx> ryebot: doesn't the snap config accept all cli args?
[18:14] <ryebot> bdx: it does for our k8s snaps, but we're not currently snapping docker itself, just the kubelet half of that relationship
[18:14] <bdx> ryebot: I see. thx
[18:14] <ryebot> bdx: no problem, feel free to ping me if you hit any roadblocks
[18:14] <bdx> ryebot: thanks
[18:16] <lazyPower> bdx: well DOCKER_OPTS is a thing, but do you need to pass this along to kubelet as well?
[18:16] <lazyPower> sorry i was afk otp
[18:19] <lazyPower> ryebot: bdx: yeah setting the dockeropts should satisfy this looking at it. If you come up with issues with kubelet lmk and we can work through that nuance.
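[Editor's note] Putting the resolution together: since the kubelet snap config only covers kubelet's own CLI args and docker itself isn't snapped, the docker daemon flag goes through the charm option that layer-docker exposes on the worker. The option name below is taken from ryebot's reading of layer-docker above — confirm it against your deployed charm first:

```shell
# List the worker charm's options; look for docker-opts
juju config kubernetes-worker

# Pass the insecure-registry flag through to the docker daemon
juju config kubernetes-worker docker-opts="--insecure-registry=10.0.0.0/8"
```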
[18:19] <ryebot> thanks lazyPower
[18:21] <skay_> hm, on my juju2 deployed system my landscape-client is failing in the config-changed hook because it has missing info from juju, but I'm not familiar enough with the landscape client to know exactly what is causing it. anyone here familiar with it?
[18:22] <skay_> it's expecting to get environment-uuid from a juju config file. I'm not sure where that lives
[18:24] <skay_> happens around here https://bazaar.launchpad.net/~landscape/landscape-client/trunk/view/head:/landscape/broker/registration.py#L192 and it's reading a json file somewhere, as you can see here https://bazaar.launchpad.net/~landscape/landscape-client/trunk/view/head:/landscape/lib/juju.py
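[Editor's note] A rough sketch of what the linked landscape-client code does with that JSON file. The path and keys here are illustrative only — the real location comes from the client's configuration (its `juju_filename` setting), and registration fails much like this when the `environment-uuid` key is absent:

```python
import json
import os
import tempfile

def read_juju_info(path):
    """Mimic landscape.lib.juju: load the JSON blob the charm wrote,
    failing loudly if the key registration needs is missing."""
    with open(path) as fh:
        info = json.load(fh)
    if "environment-uuid" not in info:
        raise KeyError("juju info file %s lacks environment-uuid" % path)
    return info

# Demo with a stand-in file (the real path comes from the client config)
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump({"environment-uuid": "deadbeef",
               "unit-name": "landscape-client/0"}, fh)
    demo_path = fh.name

info = read_juju_info(demo_path)
os.unlink(demo_path)
```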
[20:57] <erik_lonroth> Hello. I was trying to deploy to a "centos7" series on my local lxc controller and it didn't seem to work. As this seems like key functionality to me - can anyone tell me what I need to do to make this work?
[20:58] <erik_lonroth__> This is how I try to deploy: juju deploy ~/git/juju/charms/hello-world --series=centos7
[20:59] <erik_lonroth__> Juju outputs: juju deploy ~/git/juju/charms/hello-world --series=centos7
[20:59] <erik_lonroth__> oops
[20:59] <erik_lonroth__> Deploying charm "local:centos7/hello-world-2".
[21:00] <erik_lonroth__> But after that.... juju status shows Machine as "down" and lxc shows no machine has been spawned.
[22:52] <lazyPower> erik_lonroth__: it can take a few moments for the lxd provider to download the cloud image. How long has it been since you issued the deploy command?