[01:17] I never got an op to "play" with that charm rick_h, still want to
[01:34] hi
[01:34] good night
[01:35] Anyone from Brazil around?
[09:00] hmm, just saw that kubernetes-master & kubernetes-worker automatically upgrade to 1.6.2 from 1.6.1 without action... so what part of https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/ do I need? :D
[09:00] (hi here o/)
[09:07] hi Zic
[09:10] Zic: The binaries of kubernetes are shipped as snap packages. This is why you see them being upgraded to 1.6.2. The installation & management of your cluster is done through charms, so by upgrading the charms you get your cluster lifecycle management upgraded.
[09:16] kjackal_: thanks, so I always need to run juju upgrade-charm to upgrade the operational code
=== frankban|afk is now known as frankban
[09:33] Zic: yes
[09:37] I also have a lab which does not run on baremetal but in LXD via conjure-up, do you know if I can install a specific version of charms/K8s via conjure-up+LXD?
[09:38] (it's to test the 1.5.3 -> 1.6.2 upgrade of our production cluster in a lab)
[09:47] What I understood from being here yesterday was that k8s auto-upgrades itself within the 1.6.x series .. but jumping to, say, 1.7.x will require me to change the config variables to point to 1.7
[09:51] I'd love to test upgrading 1.6.1 -> 1.6.2 like I did yesterday .. Can I force juju to install a specific version of k8s?
[09:52] I think I need to change the "config --channel" thing .. but not sure what to set it to (instead of stable) .. Where do I see available channels?
[09:58] Zic: upgrading snaps on lxd will not work due to this bug: https://bugs.launchpad.net/snapd/+bug/1668659
[09:58] Bug #1668659: Using snaps in lxd can be problematic during refresh (using squashfuse)
[09:59] kjackal_: hmm, seems like it will be easier if I running my lab like a baremetal cluster, so without LXD and just classical VMs :/
[10:00] it's not a problem from a resources point of view, but I prefer to use LXD instead of classical VMs for labs
[10:01] s/I running/I run/
[10:01] Zic: yes, physical machines should work.
[10:02] kim0: there is no easy way to pin snaps to 1.6.1 during deployment and then unpin them in order to trigger the upgrade
[10:02] kjackal_: for forcing 1.5.3, I just need to specify the old version of all charms composing the CDK bundle, right?
[10:03] (in the Juju GUI)
[10:03] before deploying
[10:03] the 1.6.x snaps are in the same track and in a way new versions shadow the old ones (nothing special, it's like upgrading debs)
[10:03] kim0: ^
[10:04] Any option to demo "upgrades" to a friend?
[10:06] Zic: kim0: You can upgrade from 1.5.3 to 1.6.x. Give me a moment to get the revisions of the bundles and the upgrade procedure
[10:07] it would be good to know how to get those revisions too I guess
[10:08] I extracted them from our production cluster, which is still on 1.5.3, since I'm looking for a way to build a 1.5.3 lab-cluster where our customer can put all his pods, then upgrade this lab to 1.6.2, and if it succeeds, upgrade the production-cluster
[10:08] the production-cluster is running on baremetal & VMs at our DC
[10:08] the lab-cluster was just a machine with LXD
[10:09] but as conjure-up does not allow deploying a specific version of CDK, I'm a bit screwed :) I think I will need to build a full CDK cluster for the lab also
[10:09] Zic: kim0: Here is the upgrade process from 1.5 to 1.6: https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/
[10:09] Thanks!
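A rough sketch of the charm-driven upgrade and the version pinning discussed above, as it looks on the CLI. The charm names match the CDK bundle in the charm store, but the channel value and the revision number shown are illustrative assumptions, not the exact revisions kjackal_ was about to look up.

# upgrade the cluster lifecycle code; the snaps then follow whatever channel the charms are configured for
juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
# the "config --channel" thing: point masters/workers at a specific snap channel
# (the exact channel string is an assumption; `juju config kubernetes-worker` lists the available options)
juju config kubernetes-master channel=1.6/stable
juju config kubernetes-worker channel=1.6/stable
# pinning an older release at deploy time means naming explicit charm revisions,
# either in a bundle or directly, e.g. (revision 11 is purely illustrative):
juju deploy cs:~containers/kubernetes-master-11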
[10:10] Zic: kim0: you can look up the charm revisions from the bundle. Give me a moment
[10:11] kjackal_: yup, but then I can only do that through the juju CLI or GUI, not in conjure-up, so no LXD labs :( I will need to pop some VMware VMs and it upsets me as a big fan of LXD :)
[10:24] Hello, I have an environment where juju seems to be very, very slow.
[10:24] And I see these messages in the logs of the juju-controller
[10:25] 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:197 Remote applications: map[]
[10:25] 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:552 error fetching public address: "no public address(es)"
[10:25] 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:558 no IP addresses fetched for machine "arh4pg"
[10:25] 2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:814 error fetching public address: no public address(es)
[10:25] 2017-05-05 10:20:48 DEBUG juju.apiserver request_notifier.go:140 -> [410] user-admin 121.930409ms {"request-id":2,"response":"'body redacted'"} Client[""].FullStatus
[10:25] 2017-05-05 10:20:48 DEBUG juju.apiserver request_notifier.go:80 [410] user-admin API connection terminated after 182.883216ms
[10:25] does someone know what the problem could be?
[10:26] I have tested the network, speed is fine, ping (incl. flood-ping) all work fine
[10:27] BlackDex: how do you measure the speed of Juju? Why do you say it seems slow?
[10:30] I validate my juju speed against kjackal_'s response time
[10:30] for example
[10:33] what's up magicaltrout? I saw your http://spicule.co.uk/juju-bigdata.html ! Nice!
[10:34] actually we just started work yesterday on our new "Spicule Big Data Platform" stuff
[10:34] for supported big data workloads via Juju/JAAS
[10:34] so there will be a load more new stuff coming soon
[10:35] well done!
[10:36] we're also going to be bringing a couple of column store databases to Juju for our supported "Business Analytics" environment
[10:42] magicaltrout: which ones?
[10:42] probably greenplum and monetdb
[10:42] so people can pick based on data volume and complexity etc
[10:43] any plans to integrate asterixdb? https://asterixdb.apache.org/ I know one of the professors behind this from UCLA
[10:43] (I think)
[10:44] yeah I like asterixdb, I was tracking it through incubation
[10:44] it's certainly on the roadmap
[10:45] nice
[10:45] I'd also like to do Druid.io
[10:47] need to find some customers first I guess :)
[11:32] kjackal_: Well, I have deployed a lot of openstack bundles, and this one is taking a long time
[11:36] and magicaltrout, the response time was quicker than the time `juju deploy` took ;)
=== grumble is now known as 14WAA001L
=== 14WAA001L is now known as grumble
[13:16] stub - are you around? (it's super late and past EOW so I'm not hopeful)
[13:18] o/ juju world
[13:19] o/ Budgie^Smore
=== mup_ is now known as mup
[13:34] hi all, running juju and lxd here on xenial. I'm soak testing the openstack-on-lxd installation and keep running into a problem where not all the containers will start properly. One always gets hung up in the forkstart and causes things like `lxc list` and `juju status` to hang
=== petevg_afk is now known as petevg
[14:09] * Budgie^Smore is still waiting ...
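A minimal sketch for the revision question raised earlier ([10:07]/[10:10]): the charm revisions a running model is using appear in juju status output, so they can be read off the production cluster and pinned in a lab bundle. The grep pattern is an assumption about the YAML layout, not something quoted from the log.

# each application entry carries a line like "charm: cs:~containers/kubernetes-master-NN"
juju status --format=yaml | grep 'charm: cs:'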
[14:21] lazyPower: hi! I have a small question today :) -> if kube-api-loadbalancer is KO, does it affect the quality of the service? or just the cluster health (daily operation)
[14:21] Zic: when you say KO you mean knocked out?
[14:21] yup
[14:21] as in taken offline?
[14:21] exactly
[14:22] it'll stop your kubelets from being able to contact the masters, so your kubelets will enter a degraded state
[14:22] but the workloads should remain running in their current formation
[14:22] ok, so no "PodEviction" action?
[14:22] basically you will be unable to effect changes to your workloads, but you will stay online.
[14:22] correct
[14:22] the workers are on a polling loop with the configured master
[14:23] so normally, my "user-traffic" stays OK, but my operations (sysadmin & dev stuff) will be degraded
[14:23] without an active master to talk to, they just kind of go funky
[14:23] yeah
[14:23] user traffic should be unaffected
[14:23] lazyPower: thanks, because we had an outage between Paris & New York and it had some bad effects on user traffic
[14:23] hmmm
[14:23] but I suspect it's because we have no kube-dns pod on the New York nodes
[14:23] yeah
[14:24] that sounds like the likely culprit
[14:24] yeah the ingress tables *should* be cached on the worker units
[14:24] if that's not the case I've missed something in the network design of ingress
[14:49] Zic: we should be able to simulate this failure pretty easily if you can replicate a smallish cluster where kube-dns doesn't exist on your NA nodes.
[14:54] lazyPower: actually the only cases I feared were "PodEviction launched" if the node can't contact the master, or some unknown requests that the node needs to pass to the master before responding to the actual user (like Ingress things)
[14:55] the "pod connecting to the master before responding to the user" is what I'm concerned with as well
[14:55] that should be cached, but I might have a misunderstanding there
[14:56] so long as the node has a listing of endpoints, it shouldn't have to contact the master. but if that table has a TTL that tells it when to re-check the endpoint mapping, we're in trouble.
[14:56] I sincerely expect it will, because my model is kinda screwed up if not :D
[14:57] lazyPower: yeah, it's what I expect also, it would only be an outage if a kubernetes-worker reboots while it can't contact the kube-apilb
[14:57] but if not, I see no reason why it would
[15:13] lazyPower: you lazy lout
[15:13] oh hey lzyPower. <3
[15:22] lzyPower: do you have any schedule (no need for a precise date :p) for enabling cloud-integration and federated clusters?
[15:23] Zic: a member of the team is working on federation. we just landed a kube-fed snap
[15:23] Zic: and cloud-native still has no ETA. there's alpha level code there but it hasn't been touched in a couple of weeks. It's high priority on this cycle's roadmap though
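A small sketch tied to the "no kube-dns pod on the New York nodes" suspicion above: checking where kube-dns actually landed is a one-liner. The label shown is the stock one kube-dns ships with; if a deployment names things differently this needs adjusting.

# -o wide adds the node column, so you can see which region each kube-dns replica is running in
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide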
[15:24] thanks for the update :)
[15:25] cloud-integration is a "must-have" for one of our customers, who does hacky things without it :/ he asks about cloud-integration at each K8s upgrade
[15:25] federated-cluster is more my own demand :) because I know that the model I use for our multi-region production cluster is not that good
[15:25] the K8s docs recommend a federated cluster for multi-region
[15:26] (without speaking of my earlier question about kube-apilb, we have some trouble with kube-dns & multi-region without federation, as a request from "region A" can be resolved by a kube-dns in "region B", as there is only one kube-dns endpoint for the entire cluster)
[15:28] (about response time, not so important as we have good transit providers between EU & US, but still this 0.300ms thing :/)
=== lzyPower is now known as lazyPower
[15:34] Zic: yeah, fed is def where you want to go with that
[15:35] we're actively cycling on it, and I suspect cloud-native integration will resume later this month or early next month
[15:35] Zic: atm I'm working on user auth and soak testing, so it's all good stuff for ya :)
[15:36] oh nice too :p we always have our "Kubernetes as a Service" project somewhere
[15:36] user auth will help
[15:37] (the day we can give a customer an exclusive account with an exclusive K8s namespace... \o/)
=== frankban is now known as frankban|afk
[17:40] LazyPower: just got CDK 1.6.2 up and running, maybe I missed something in the docs, but I can't seem to find what the username and password are for the k8s dashboard
[17:40] tychicus: it's behind TLS key validation. You need to kubectl proxy to establish a secure tunnel and then you can access the dashboard at http://localhost:8001/dashboard
[17:40] ok
[17:41] tychicus: https://github.com/juju-solutions/bundle-canonical-kubernetes/tree/master/fragments/k8s/cdk#accessing-the-kubernetes-dashboard
[17:42] "To access the dashboard, you may establish a secure tunnel to your cluster with
[17:42] the following command:"
[17:42] found it, sorry
[17:42] hey no problem :)
[17:42] that readme is info dense
[17:42] easy to miss that blurb
[17:59] LazyPower: just as an FYI, rbd requests for persistent volume claims are still DENIED by apparmor
[17:59] :/ this is troublesome
[17:59] is that something that you would like me to file a bug for?
[17:59] yeah
[17:59] we're going to have to take this to the snappy team and see about getting a plug/slot created for this
[18:01] should the bug be filed against juju or against snappy?
[18:01] Against the K8s charms
[18:01] tychicus: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues
[18:02] thanks
[18:28] lazyPower: the bug report is in. Thanks again for all your help.
[18:29] tychicus: thanks for filing that bug. I'll take this to the snappy team at my first availability and see what steps we can take to resolve this
[18:29] tychicus: I'll keep communication high on that bug as well so you know when there's something for you to test with the rbd autoprovisioner.
[18:30] sounds good, I'm happy to test
[21:31] \q
[21:46] #maas
=== bdx73 is now known as bdxbdx
=== bdxbdx is now known as bdx
[23:53] stokachu, marcoceppi: cs:~jamesbeedy/lounge-irc-8, https://github.com/jamesbeedy/layer-lounge-irc
[23:57] https://review.jujucharms.com/reviews/122
[23:58] also https://jujucharms.com/u/jamesbeedy/lounge-irc/8
[23:59] it has sasl support
[23:59] so it works on aws
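For anyone reproducing the AppArmor denial reported at [17:59], here is a hypothetical claim of the kind that exercises the rbd provisioner. The storage class name "rbd" and the claim name are assumptions for illustration, not details taken from that cluster.

# create a small test claim; if the rbd provisioner is blocked by AppArmor,
# the claim typically stays Pending and the denial shows up in the worker's kernel log
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "rbd"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc rbd-test-claim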