[01:11] <kklimonda> is charms.reactive and layering "the way" to write new charms?
[01:14] <hloeung> kklimonda: yeah. As I understand, it's also a requirement to get charms accepted into the charmstore / official
[03:46] <mwhudson> axw, menn0: you'll be getting the go 1.8.1 snap on the next refresh, hope it works :)
[03:46] <axw> mwhudson: thanks!
[03:46] <axw> mwhudson: I'll let you know if I have any more issues
[03:49] <menn0> mwhudson: sweet!
[03:49] <menn0> mwhudson: what changes in this?
[04:09] <mwhudson> menn0: dunno!
[04:09] <mwhudson> menn0: https://golang.org/doc/devel/release.html#go1.8.minor
[07:18] <kjackal> good morning juju world!
[07:22] <kjackal> hloeung: kklimonda: you can push any charm (reactive or not) to the store under your namespace. Also, as far as I know the use of reactive is not a requirement (e.g. I have reviewed charms in perl). The requirement is that the charm has to be reviewable and have enough tests
[07:23] <hloeung> kjackal: ah, thanks for clearing that up
[07:24] <kjackal> hloeung: kklimonda: hloeung when you mention "charmstore / official" you mean charms that have gone through the review process and become recommended. This is why these charms have to have tests
[07:24] <kjackal> hloeung: kklimonda: you can push charms under your namespace without any restrictions and unattended
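For reference, a minimal sketch of the unattended namespace push kjackal describes, using the charm CLI of that era (the charm name and `~kklimonda` namespace are placeholders; adjust to your own):

```shell
# Push the charm in the current directory to a personal namespace --
# no review needed, it becomes available as cs:~kklimonda/my-awesome-charm
charm push . cs:~kklimonda/my-awesome-charm

# Release a pushed revision to a channel so others can deploy it
charm release cs:~kklimonda/my-awesome-charm-0 --channel stable
```

Only promulgated (reviewed and recommended) charms drop the `~namespace` prefix and are served from the top-level `cs:my-awesome-charm` name.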
[07:26] <kklimonda> [insert a comment about fragmenting community]
[07:26] <kklimonda> kjackal: so, as long as it's not a part of a larger project (like openstack-charms) everything goes?
[07:28] <kklimonda> I was a bit confused about charms.reactive given that it's mentioned in docs, and yet pypi release is from 2015 - but now I see that there are newer releases on github
[07:32] <kjackal> kklimonda: you can push a charm that uses/extends openstack charms/bundles. Your charms will be available under cs:~kklimonda/my-awesome-charm . If you want to partner with Canonical - partnership means having Canonical recommend your charm, shared marketing events, etc. - you will have to go through a charm review process. This process includes "charm schools", meetings where our engineers show you how charms work, how they are tested, best
[07:32] <kjackal>  practices, etc. As soon as a charm is reviewed and accepted as recommended ("promulgated") the charm is served from cs:my-awesome-charm
[07:34] <kjackal> kklimonda: why would that fragment the community?
[07:36] <kklimonda> kjackal: you've mentioned a charm written in perl, I've seen examples of bash charms, someone was discussing a puppet-based charm, and I've seen code for wrapping ansible. That doesn't strike me as a desirable trait, given that the barrier of entry to contribute to and/or fix charms is basically unbounded.
[07:37] <kklimonda> kjackal: even in my time with openstack (and related) charms I've seen 3 ways of defining apt repositories, with subtly different syntax.
[07:44] <kjackal> kklimonda: It is true that Juju does not stand behind a single language. There are some (strongly) recommended practices like reactive. The reason for this is that Juju is not here to install your software on a machine. Instead Juju will handle the lifecycle of your software and your infrastructure as a whole. What I mean by that is that you may have puppet, ansible, python, perl, or bash scripts that do the configuration
[07:44] <kjackal> and deployment of your software. You should be able to reuse the (operational) logic you have for managing a service. When this logic gets complicated (see for example openstack) Juju gives you a great environment to model your infrastructure and manage its lifecycle. Think of it like this: juju is a higher level set of abstractions that allow you to model and manage the changing states of your infrastructure
[07:49] <kklimonda> kjackal: so the idea is that you can either start from scratch, or wrap your existing internal tools in juju, and have it working with as few modifications as possible?
[07:52] <kjackal> kklimonda: Yes. And as you get into more complicated states you should align your operational logic to match the abstractions juju suggests (mainly charms and relations)
[07:56] <kjackal> kklimonda: let me also point out that Juju shines when you have dynamic environments where charms interact with each other. From a theoretical perspective Juju allows you to do service choreography, although most of the time the term orchestration is used to describe what juju can do..... I am taking this too far... I should stop here
[08:00] <kklimonda> mhm, interactions between charms - provisioning users, sharing secrets, are indeed pretty cool
[08:04] <kklimonda> what about orchestrating applications on top of kubernetes cluster with juju? There's a lot of chatter about juju kubernetes distribution, but not much about how to provision stuff on top of the cluster.
[08:32] <kjackal> kklimonda: at the moment we do not have a solution for delivering and orchestrating apps on kubernetes
[09:06] <BlackDex> blahdeblah: Do you know if it is possible to add two nrpe plugins to the same main-charm?
[09:38] <farfetchd> Hi All, we are using the new juju library 0.3.0 where the AddMachine method was implemented. We are facing some issues: can somebody help? Here is the log: https://p.rrbone.net/paste/BKUdtoIr#twKArUtuTYMelvtKi0NmwXezdSImPiyCYtzwNuySxcd
[09:41] <farfetchd> what we are executing is here: https://p.rrbone.net/paste/4V3MquPk#CEjpqUArvyiS1QA2APxpaezgP7WZNqdGAeR64sOmsZc
[10:28] <blahdeblah> BlackDex: I *think* so, as long as they use two different application names, but I've never tried it, nor am I aware of any instances of it being used.
[10:29] <cnf> hmm
[10:29] <cnf> proxy use in juju is a mess
[10:33] <Zic> do you know if running "juju config kubernetes-worker install_from_upstream=true" on a CDK production cluster is possible? what Juju will do, upgrading Docker everywhere or one worker by one worker?
[10:33] <Zic> (oh sorry, forget to say hello :'()
[10:33] <Zic> forgot*
[10:41] <kjackal> Hi Zic, looking at https://github.com/juju-solutions/layer-docker/blob/master/reactive/docker.py#L85 - this will change docker under the hood. I think it will run on all workers at the same time.
[10:58] <Zic> kjackal: ok, so I will plan a maintenance upgrade at night, thanks :)
[11:09] <Zic> just for info, I tried it on our preproduction cluster, all is working except I needed to restart kubelet on every kubernetes-worker
[11:09] <Zic> all green after that in juju status :)
[11:22] <cnf> hmz
[11:22] <cnf> juju is a mess when you need it to be behind a proxy :/
[11:51] <BlackDex> blahdeblah: Okay, cool, maybe i will try. If i do, i will let you know :)
[12:16] <cnf> any idea how i can make juju retry provisioning a container?
[12:17] <cnf> this is very much a painpoint in juju, imo
[12:19] <BlackDex> provisioning of the container?
[12:19] <BlackDex> did it start the container?
[12:19] <cnf> it can't _start_ it
[12:19] <cnf> it couldn't even download it
[12:20] <cnf> i set the proxy, for now, so i need to tell it to retry it
[12:20] <cnf> and after that, remove the proxy again
[12:20] <BlackDex> you could try to restart the jujud machine unit
[12:21] <BlackDex> is the machine in error state?
[12:21] <cnf> no, nothing is in error
[12:22] <cnf> hence why i can't poke anything
[12:22] <cnf> i can do neither juju resolved nor juju retry-provisioning
[12:23] <cnf> i'm kinda stuck
[12:25] <stub> I haven't needed to retry - it seems to do it automatically as soon as things are right (eg. fixing my VPN so I have network connectivity again, or bouncing my web proxy so it binds to the correct IP address)
[12:26] <cnf> yeah, it's not moving
[12:26] <cnf> at all
[12:26] <stub> It isn't very informative about it, but works. i.e. it just sits there, unhelpful, until I diagnose the problem myself and fix it
[12:27] <stub> Bouncing the jujuds on the controller would be a good kick in the teeth, but I have no idea what side effects that would have
[12:29] <cnf> hmz
[12:29] <stub> I have seen it sit there for a long, long time if images are being downloaded over a slow link
[12:30] <cnf> it's not doing anything
[12:30] <cnf> and it's not a slow link
[12:32] <stub> juju destroy-machine --force and retry now you have sorted your proxy?
[12:32] <cnf> yeah, no, because it needs to NOT have a proxy set for some things, and a proxy set for others
[12:32] <cnf> because juju is stupid that way
[12:33] <cnf> and resetting a machine takes 20+ minutes
[12:33] <stub> There are separate settings for apt proxy vs http proxy if that helps
[12:33] <cnf> no, it doesn't
[12:33] <cnf> because lxd doesn't use the apt proxy
[12:34] <cnf> and setting http-proxy sets it for _everything_
[12:34] <stub> Oh, I'm dealing with the lxd provider rather than lxd containers on a machine so I'm unsure if my experience counts here
[12:34] <stub> yeah, I've had to add specific proxy config options to some charms (eg. the proxy for the snap layer to use for snap store requests)
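To make stub's point about separate settings concrete, here is a sketch of the per-purpose proxy keys Juju 2.x exposes via model config (the squid URL and model name are placeholders, not values from this discussion):

```shell
# APT traffic and general HTTP traffic are proxied independently --
# setting apt-http-proxy does NOT imply http-proxy, and vice versa.
juju model-config -m default \
    apt-http-proxy=http://squid.internal:3128 \
    http-proxy=http://squid.internal:3128 \
    no-proxy=localhost,127.0.0.1

# Inspect the current value of a single key
juju model-config http-proxy
```

As cnf notes, the catch is that `http-proxy` is exported into the environment of everything Juju runs, while LXD image downloads do not honour `apt-http-proxy`, so there was no key at the time that proxied LXD fetches alone.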
[12:34] <cnf> i keep putting in places where it doesn't know how to recover
[12:34] <cnf> not confidence inspiring
[12:35] <cnf> brb, got to pick up a package
[12:36] <kjackal> Zic could you open a ticket to track down the issue of needing to restart kubelet? https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
[12:43] <cnf> k, disk swapped
[12:43] <cnf> stub: i don't know how to configure proxy for lxd, but not system wide...
[12:44] <cnf> if i set system wide proxy, openstack won't work
[12:44] <cnf> and no_proxy doesn't understand CIDR
[12:44] <cnf> so i'd have to add every single IP possible, which is a long list
[12:45] <cnf> https://bugs.launchpad.net/juju/+bug/1488139 is relevant, btw
[12:45] <mup> Bug #1488139: juju should add nodes IPs to no-proxy list <landscape> <network> <oil> <proxy> <juju:Triaged> <https://launchpad.net/bugs/1488139>
[12:46] <cnf> so i don't know how to solve this... apt has apt-http-proxy, but lxd doesn't use that
[12:47] <cnf> and i don't know how to make juju retry the lxd setup, ffs >,<
[12:48] <stub> #openstack-charms might have insight. I vaguely recall this being discussed before re: no_proxy
[12:49] <cnf> stub: yes, with me
[12:51] <cnf> hmz, it seems it just isn't possible
[12:51] <cnf> juju stops trying when it can't pull a container
[12:56] <cnf> logs don't show anything, either
[12:56] <cnf> just not doing anything
[12:59] <cnf> ugh, this is stupid
[13:05] <cnf> juju retry-provisioning should support containers
[15:53] <Zic> do you have any release date planned for CDK with K8s 1.6?
[16:03] <kjackal> Zic: We are finishing up our testing today. If all goes well we should be releasing it tomorrow
[16:03] <kjackal> Zic: it should be 1.6.1
[16:03] <kjackal> Zic: what are you looking to find in 1.6?
[16:09] <lazyPower> o/ Zic
[16:25] <Zic> kjackal: just a question of my customer, nothing particular in my concern :)
[16:25] <Zic> hi lazyPower :)
[19:00] <kwmonroe> cnf: not sure if it'll help, but here's a shortcut for getting all lxd subnet IPs into no-proxy when adding a model:  juju add-model foo --config no-proxy=`echo 10.x.y.{1..255} | sed 's/ /,/g'`
[19:01] <cnf> kwmonroe: times 4, for each subnet
[19:02] <kwmonroe> or.. `echo 10.x.{a,b,c,d}.{1..255} | sed 's/ /,/g'`
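A variant of kwmonroe's brace-expansion trick, written out for clarity (the `10.0.{0,1}` subnets are placeholders; substitute your own four /24s):

```shell
# Brace expansion emits every host IP space-separated; sed joins with commas.
no_proxy_list=$(echo 10.0.{0,1}.{1..255} | sed 's/ /,/g')

# Sanity check: 2 subnets x 255 hosts = 510 comma-separated entries
echo "$no_proxy_list" | tr ',' '\n' | wc -l
```

The result can then be passed as `--config no-proxy=$no_proxy_list` to `juju add-model` as above. Note this relies on bash brace expansion, so it needs bash rather than a plain POSIX sh.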
[19:03] <cnf> kwmonroe: i'll probably do something like that to get me going, thanks
[19:03] <cnf> but it should be fixed in juju, though
[19:06] <kwmonroe> agreed cnf -- good find on 1488139.  that feels like the right bug to get this fixed under.  after reading the final comment there, i'm not sure what the ramifications of adding so many IPs to no_proxy will be for you.  and i'm sure at some point, there's a limit to the length of data that add-model will allow :/
[19:07] <kwmonroe> still, hopefully it'll get you moving
[19:07] <cnf> uhu, i think my 1681495 is relevant too, though
[19:07] <cnf> even if you just want to cache lxd images at the edge
[19:08] <cnf> admcleod:
[19:08] <cnf> oops
[19:09] <kwmonroe> yeah cnf - i like your idea of lxd-proxy in addition to apt, http, etc
[19:09]  * cnf nods
[19:10] <cnf> btw, you don't know of a way to solve https://bugs.launchpad.net/juju/+bug/1681435, do you?
[19:10] <mup> Bug #1681435: juju retry-provisioning should support containers <juju:New> <https://launchpad.net/bugs/1681435>
[19:12] <kwmonroe> cnf: afraid i don't
[19:12] <cnf> k
[19:18] <kwmonroe> cnf: is rebooting the machine housing the lxd units an option (juju run --machine X 'sudo reboot')?  i wonder if juju would attempt some kind of retry if the controller <-> host lxd communication gets bounced.
[19:18] <cnf> kwmonroe: i did, doesn't help
[19:18] <kwmonroe> boo
[19:19] <arosales> Cynerva: lazyPower: any success on https://github.com/juju-solutions/layer-etcd/issues/89
[19:19] <arosales> hopefully that is the last issue needing to be resolved for 1.6.1
[19:20] <Cynerva> arosales: we have a fix merged in and tested
[19:20] <arosales> thats an awesome answer :-)
[19:20] <arosales> sorry I missed that, I was following updates in https://github.com/juju-solutions/layer-etcd/issues/89
[19:21] <arosales> but good stuff
[19:21] <arosales> mbruzek: by chance, have you built canonical-kubernetes with the latest bits in the candidate channel?
[19:22] <lazyPower> arosales: thats what we're doing now, we're releasing to candidate to test
[19:22] <mbruzek> arosales: we are doing that now, should be available in minutes
[19:22] <arosales> solid
[19:22] <arosales> sounds like we may have a good canonical-k8 to test
[19:22] <arosales> Cynerva: lazyPower: mbruzek thanks
[19:22] <mbruzek> arosales: we will ping you when done.
[19:23] <arosales> mbruzek: thanks, could you also give larrymi a ping. I think he was trying to get some run time on maas.
[19:23] <arosales> thanks
[19:23] <mbruzek> Great
[19:46] <mbruzek> arosales: We just pushed to candidate
[19:51] <larrymi> arosales, mbruzek: working on it now
[19:52] <arosales> mbruzek: thanks
[19:52] <arosales> larrymi: candidate channel should have the latest bits, and should be a good bundle to test on maas with
[19:53] <larrymi> arosales, cool will use those.
[19:54] <arosales> larrymi: thanks for the testing, ping if you have any issues
[19:54] <larrymi> will do arosales