[01:17] <Budgie^Smore> I never got an opportunity to "play" with that charm rick_h, still want to
[01:34] <ME_54> hi
[01:34] <ME_54> good night ***
[01:35] <ME_54> Anyone from Brazil out there?
[09:00] <Zic> hmm, just saw that kubernetes-master & kubernetes-worker automatically upgrades to 1.6.2 from 1.6.1 without action... so what part of https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/ do I need ? :D
[09:00] <Zic> (hi here o/)
[09:07] <kjackal_> hi Zic
[09:10] <kjackal_> Zic: The binaries of kubernetes are shipped as snap packages. This is why you see them being upgraded to 1.6.2. The installation & management of your cluster is done through charms, so by upgrading the charms you get your cluster lifecycle management upgraded.
[09:16] <Zic> kjackal_: thanks, so I always need to run a juju upgrade-charm to upgrade the operational code
[09:33] <kjackal_> Zic: yes
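A minimal sketch of that per-application upgrade, assuming the default application names from the canonical-kubernetes bundle (yours may differ in your model):

```shell
# Upgrade the lifecycle-management (charm) code for each CDK
# application; the k8s snaps inside upgrade on their own within
# the track, as described above.
juju upgrade-charm etcd
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer
juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
```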
[09:37] <Zic> I also have a lab which doesn't run on bare metal but in LXD via conjure-up; do you know if I can install a specific version of the charms/K8s via conjure-up+LXD?
[09:38] <Zic> (it's to test the 1.5.3 -> 1.6.2 upgrade of our production cluster in a lab)
[09:47] <kim0> What I understood from being here yesterday, was that k8s auto-upgrades itself within the 1.6.x series .. but jumping to say 1.7.x will need me changing the config variables to point to 1.7
[09:51] <kim0> I'd love to test upgrading 1.6.1 -> 1.6.2 like I did yesterday .. Can I force juju to install a specific version of k8s ?
[09:52] <kim0> I think I need to change the "config --channel" thing .. but not sure what to set it to (instead of stable) .. Where do I see available channels?
[09:58] <kjackal_> Zic upgrading snaps on lxd will not work due to this bug: https://bugs.launchpad.net/snapd/+bug/1668659
[09:58] <mup> Bug #1668659: Using snaps in lxd can be problematic during refresh (using squashfuse) <snapd:In Progress by zyga> <https://launchpad.net/bugs/1668659>
[09:59] <Zic> kjackal_: hmm, seems like it will be easier if I running my lab like a baremetal cluster, so without LXD and just classical VMs :/
[10:00] <Zic> it's not a problem, from ressources point of view,  but I prefer to use LXD instead of classical VMs for labs
[10:01] <Zic> s/I running/I run/
[10:01] <kjackal_> Zic yes, physical machines should work.
[10:02] <kjackal_> kim0: there is no easy way to pin snaps to 1.6.1 during deployment and then unpin them in order to trigger the upgrade
[10:02] <Zic> kjackal_: for forcing 1.5.3, I just need to specify the old version of all charms composing the CDK bundle, right?
[10:03] <Zic> (in the Juju GUI)
[10:03] <Zic> before deploying
[10:03] <kjackal_> the 1.6.x snaps are in the same track and in a way new versions shadow the old ones (nothing special, it's like upgrading debs)
[10:03] <kjackal_> kim0: ^
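For kim0's channel question, a hedged sketch, assuming the usual snap track layout and the `channel` config option the CDK charms expose:

```shell
# Available channels/tracks for the k8s snaps can be listed with
# snap info (kubectl shown here; the other k8s snaps follow the
# same track layout):
snap info kubectl

# The charms follow a track via their "channel" config option, e.g.:
juju config kubernetes-master channel=1.6/stable
juju config kubernetes-worker channel=1.6/stable
```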
[10:04] <kim0> Any option to demo "upgrades" to a friend ?
[10:06] <kjackal_> Zic: kim0: You can upgrade from 1.5.3 to 1.6.x . Give me a moment to get the revisions of the bundles and the upgrade procedure
[10:07] <kim0> it would be good to know how to get those revisions too I guess
[10:08] <Zic> I extracted them from our production cluster, which is still on 1.5.3, since I'm looking for a way to build a 1.5.3 lab-cluster where our customer can put all his pods, then upgrade this lab to 1.6.2, and if that succeeds, upgrade the production-cluster
[10:08] <Zic> the production-cluster is running on baremetal & VMs at our DC
[10:08] <Zic> the lab-cluster was just a machine with LXD
[10:09] <Zic> but as conjure-up doesn't let me deploy a specific version of CDK, I'm a bit screwed :) I think I'll need to build a full CDK cluster for the lab as well
[10:09] <kjackal_> Zic: kim0: Here is the upgrade process from 1.5 to 1.6 https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/
[10:09] <kim0> Thanks!
[10:10] <kjackal_> Zic: kim0: you can lookup the charm revisions from the bundle. Give me a moment
[10:11] <Zic> kjackal_: yup, but then I can only do that through the juju CLI or GUI, not in conjure-up, so no LXD labs :( I'll need to spin up some VMware VMs, and that upsets me as a big fan of LXD :)
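Pinning to an older release amounts to deploying explicit charm-store revisions; the revision numbers below are placeholders, and the real ones for 1.5.3 have to be read off the bundle's revision history on jujucharms.com:

```shell
# Deploy a pinned revision of the whole bundle (revision number
# here is illustrative, not the actual 1.5.3 revision):
juju deploy cs:bundle/canonical-kubernetes-33

# Individual charms can be pinned the same way (again, an
# illustrative revision number):
juju deploy cs:~containers/kubernetes-master-6
```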
[10:24] <BlackDex> Hello, I have an environment where juju seems to be very, very slow.
[10:24] <BlackDex> And i see these messages in the logs of juju-controller
[10:25] <BlackDex> 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:197 Remote applications: map[]
[10:25] <BlackDex> 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:552 error fetching public address: "no public address(es)"
[10:25] <BlackDex> 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:558 no IP addresses fetched for machine "arh4pg"
[10:25] <BlackDex> 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:814 error fetching public address: no public address(es)
[10:25] <BlackDex> 2017-05-05 10:20:48 DEBUG juju.apiserver request_notifier.go:140 -> [410] user-admin 121.930409ms {"request-id":2,"response":"'body redacted'"} Client[""].FullStatus
[10:25] <BlackDex> 2017-05-05 10:20:48 DEBUG juju.apiserver request_notifier.go:80 [410] user-admin API connection terminated after 182.883216ms
[10:25] <BlackDex> does anyone know what the problem could be?
[10:26] <BlackDex> I have tested the network, speed is fine, ping (incl. flood-ping) all works fine
[10:27] <kjackal_> BlackDex: how do you measure the speed of Juju? Why do you say it seems slow?
[10:30] <magicaltrout> I validate my juju speed against kjackal_ 's response time
[10:30] <magicaltrout> for example
[10:33] <kjackal_> what's up magicaltrout? I saw your http://spicule.co.uk/juju-bigdata.html ! Nice!
[10:34] <magicaltrout> actually we've just started work yesterday on our new "Spicule Big Data Platform" stuff
[10:34] <magicaltrout> for supported big data workloads via Juju/JAAS
[10:34] <magicaltrout> so there will be a load more new stuff coming soon
[10:35] <kjackal_> well done!
[10:36] <magicaltrout> we're also going to be bringing a couple of column-store databases to Juju for our supported "Business Analytics" environment
[10:42] <kjackal_> magicaltrout: which ones?
[10:42] <magicaltrout> probably greenplum and monetdb
[10:42] <magicaltrout> so people can pick based on data volume and complexity etc
[10:43] <kjackal_> any plans to integrate asterixdb? https://asterixdb.apache.org/ i know one of the professors behind this from UCLA
[10:43] <kjackal_> (I think)
[10:44] <magicaltrout> yeah I like asterixdb i was tracking it through incubation
[10:44] <magicaltrout> its certainly on the roadmap
[10:45] <kjackal_> nice
[10:45] <magicaltrout> i'd also like to do Druid.io
[10:47] <magicaltrout> need to find some customers first I guess :)
[11:32] <BlackDex> kjackal_: Well, I have deployed a lot of OpenStack bundles, and this one is taking a long time
[11:36] <BlackDex> and magicaltrout the response time was quicker than the time `juju deploy` took ;)
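For what it's worth, a rough sketch of how one might dig into a slow controller like this, using standard juju logging commands:

```shell
# Turn up logging on the model to see where the time is going:
juju model-config logging-config="<root>=DEBUG"

# Replay the controller model's log and focus on the apiserver
# module, where the slow FullStatus calls above were recorded:
juju debug-log -m controller --replay --include-module juju.apiserver
```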
[13:16] <lazyPower> stub - are you around? (its super late and past EOW so i'm not hopeful)
[13:18] <Budgie^Smore> o/ juju world
[13:19] <lazyPower> o/ Budgie^Smore
[13:34] <vmorris> hi all, running juju and lxd here on xenial. I'm soak testing the openstack-on-lxd installation and keep running into a problem where not all the containers will start properly. one always gets hung up in the forkstart and causes things like `lxc list` and `juju status` to hang
[14:09]  * Budgie^Smore is still waiting ... 
[14:21] <Zic> lazyPower: hi! I have a small question today :) -> if the kube-api-loadbalancer is KO, does it affect the quality of the service, or just cluster health (daily operations)?
[14:21] <lazyPower> Zic: when you say KO you mean knocked out?
[14:21] <Zic> yup
[14:21] <lazyPower> as in taken offline?
[14:21] <Zic> exactly
[14:22] <lazyPower> it'll stop your kubelets from being able to contact the masters, so your kubelets will enter a degraded state
[14:22] <lazyPower> but the workloads should remain running in their current formation
[14:22] <Zic> ok, so no "PodEviction" action?
[14:22] <lazyPower> basically you will be unable to effect change to your workloads, but you will stay online.
[14:22] <lazyPower> correct
[14:22] <lazyPower> the workers are on a polling loop with the configured master
[14:23] <Zic> so normally my "user-traffic" stays OK, but my operations (sysadmin & dev stuff) will be degraded
[14:23] <lazyPower> without an active master to talk to, they just kind of go funky
[14:23] <lazyPower> yeah
[14:23] <lazyPower> user traffic should be unaffected
[14:23] <Zic> lazyPower: thanks, because we had an outage between Paris & New York and it had some bad effects on user traffic
[14:23] <lazyPower> hmmm
[14:23] <Zic> but I suspect it's because we have no kube-dns pod on the New York nodes
[14:23] <lazyPower> yeah
[14:24] <lazyPower> that sounds like the likely culprit
[14:24] <lazyPower> yeah the ingress tables *should* be cached on the worker units
[14:24] <lazyPower> if thats not the case i've missed something in the network design of ingress
[14:49] <lazyPower> Zic we should be able to simulate this failure pretty easily if you can replicate a smallish cluster where kube-dns doesn't exist in your NA nodes.
[14:54] <Zic> lazyPower: actually the only cases I feared were a "PodEviction" being launched if the node can't contact the master, or some unknown requests that the node needs to pass to the master before responding to the actual user (like Ingress things)
[14:55] <lazyPower> the "pod connecting to the master before responding to user" is what i'm concerned with as well
[14:55] <lazyPower> that should be cached, but i might have a misunderstanding there
[14:56] <lazyPower> so long as the node has a listing of endpoints, it shouldn't have to contact the master. but if that table has a TTL that tells it when to re-check the endpoint mapping, we're in trouble.
[14:56] <Zic> I sincerely expect it will, because my model is kinda screwed up if not :D
[14:57] <Zic> lazyPower: yeah, that's what I expect too; it would only be an outage if the kubernetes-worker reboots while it can't contact the kube-apilb
[14:57] <Zic> but otherwise, I see no reason why it would
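The failure lazyPower proposes simulating could be sketched like this in a lab; the application name is the CDK default, the LB is nginx-based, and the ingress address is a placeholder:

```shell
# Knock out the API load balancer (it fronts the masters with nginx):
juju run --application kubeapi-load-balancer "sudo systemctl stop nginx"

# kubectl can no longer reach the masters and nodes will drift
# toward NotReady, but existing workloads should keep serving:
curl -s http://<worker-ingress-address>/
```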
[15:13] <lzyPower> lazyPower: you lazy lout
[15:13] <lazyPower> oh hey lzyPower. <3
[15:22] <Zic> lzyPower: do you have any schedule (no need of a precise date :p) about enabling cloud-integration and federated cluster ?
[15:23] <lzyPower> Zic: a member of the team is working on federation. we just landed a kube-fed snap
[15:23] <lzyPower> Zic: and cloud-native still has no ETA. There's alpha-level code there but it hasn't been touched in a couple of weeks. It's high priority on this cycle's roadmap though
[15:24] <Zic> thanks for the update :)
[15:25] <Zic> cloud-integration is a "must-have" for one of our customers, who does hacky things without it :/ he asks about cloud-integration at each K8s upgrade
[15:25] <Zic> federated-cluster is more my own wish :) because I know that the model I use for our multi-region production cluster is not that good
[15:25] <Zic> the K8s docs recommend federated clusters for multi-region
[15:26] <Zic> (without even getting into my earlier question about the kube-apilb, we have some trouble with kube-dns & multi-region without federation, as requests from "region A" can be resolved by a kube-dns in "region B", since there is only one kube-dns endpoint for the entire cluster)
[15:28] <Zic> (about response time; not so important as we have good transit providers between EU & US, but still this 0.300ms thing :/)
[15:34] <lazyPower> Zic: yeah, fed is def where you want to go with that
[15:35] <lazyPower> we're actively cycling on it, and i suspect cloud-native integration will resume later this month or early next month
[15:35] <lazyPower> Zic: atm i'm working on user auth and soak testing, so it's all good stuff for ya :)
[15:36] <Zic> oh nice too :p we always have our "Kubernetes as a Service" project somewhere
[15:36] <Zic> user auth will help
[15:37] <Zic> (the day we can give a customer an exclusive account with an exclusive K8s namespace... \o/)
[17:40] <tychicus> LazyPower: just got CDK 1.6.2 up and running, maybe I missed something in the docs, but I can't seem to find what the username and password are for the k8s dashboard
[17:40] <lazyPower> tychicus: it's behind TLS key validation. You need to kubectl proxy to establish a secure tunnel, and then you can access the dashboard at http://localhost:8001/dashboard
[17:40] <tychicus> ok
[17:41] <lazyPower> tychicus: https://github.com/juju-solutions/bundle-canonical-kubernetes/tree/master/fragments/k8s/cdk#accessing-the-kubernetes-dashboard
[17:42] <tychicus> "To access the dashboard, you may establish a secure tunnel to your cluster with
[17:42] <tychicus> the following command:"
[17:42] <tychicus> found it, sorry
[17:42] <lazyPower> hey no problem :)
[17:42] <lazyPower> that readme is info dense
[17:42] <lazyPower> easy to miss that blurb
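For the record, the tunnel the README describes boils down to two steps, assuming kubectl is already configured against the cluster:

```shell
# kubectl proxy opens an authenticated tunnel to the API server,
# listening on 127.0.0.1:8001 by default:
kubectl proxy

# Then browse the dashboard over the tunnel, per the exchange above:
# http://localhost:8001/dashboard
```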
[17:59] <tychicus> LazyPower: just as an FYI, rbd requests for persistent volume claims are still DENIED by apparmor
[17:59] <lazyPower> :/  this is troublesome
[17:59] <tychicus> is that something that you would like me to file a bug for?
[17:59] <lazyPower> yeah
[17:59] <lazyPower> we're going to have to take this to the snappy team and see about getting a plug/slot created for this
[18:01] <tychicus> should the bug be filed against juju or against snappy?
[18:01] <lazyPower> Against the K8s charms
[18:01] <lazyPower> tychicus: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
[18:02] <tychicus> thanks
[18:28] <tychicus> lazyPower: bug report is in.  Thanks again for all your help.
[18:29] <lazyPower> tychicus: thanks for filing that bug. I'll take this to the snappy team at my first availability and see what steps we can take to resolve this
[18:29] <lazyPower> tychicus: i'll keep communication high on that bug as well so you know when there's something for you to test with the rbd autoprovisioner.
[18:30] <tychicus> sounds good, I'm happy to test
[23:53] <bdx> stokachu, marcoceppi: cs:~jamesbeedy/lounge-irc-8, https://github.com/jamesbeedy/layer-lounge-irc
[23:57] <bdx> https://review.jujucharms.com/reviews/122
[23:58] <bdx> also https://jujucharms.com/u/jamesbeedy/lounge-irc/8
[23:59] <bdx> it has sasl support
[23:59] <bdx> so it works on aws