/srv/irclogs.ubuntu.com/2017/05/05/#juju.txt

Budgie^SmoreI never got an op to "play" with that charm rick_h, still want to01:17
ME_54hi01:34
ME_54good night01:34
ME_54good night ***01:34
ME_54Anyone from Brazil around?01:35
Zichmm, just saw that kubernetes-master & kubernetes-worker automatically upgrades to 1.6.2 from 1.6.1 without action... so what part of https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/ do I need ? :D09:00
Zic(hi here o/)09:00
kjackal_hi Zic09:07
kjackal_Zic: The binaries of kubernetes are shipped as snap packages. This is why you see them being upgraded to 1.6.2. The installation & management of your cluster is done through charms, so by upgrading the charms you get your cluster lifecycle management upgraded.09:10
Zickjackal_: thanks, so I always need to do some juju upgrade-charm for upgrading the operational code09:16
=== frankban|afk is now known as frankban
kjackal_Zic: yes09:33
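
For reference, a minimal sketch of what that juju upgrade-charm step looks like; the application names assume the standard canonical-kubernetes bundle:

    # upgrade the charm (cluster lifecycle) code; the k8s binaries themselves are delivered as snaps
    juju upgrade-charm kubernetes-master
    juju upgrade-charm kubernetes-worker
    juju upgrade-charm etcd
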
ZicI also have a lab which does not run on baremetal but in LXD via conjure-up, do you know if I can install a specific version of charms/K8s via conjure-up+LXD ?09:37
Zic(it's to test the 1.5.3 -> 1.6.2 upgrade of our production cluster in a lab)09:38
kim0What I understood from being here yesterday, was that k8s auto-upgrades itself within the 1.6.x series .. but jumping to say 1.7.x will need me changing the config variables to point to 1.709:47
kim0I'd love to test upgrading 1.6.1 -> 1.6.2 like I did yesterday .. Can I force juju to install a specific version of k8s ?09:51
kim0I think I need to change the "config --channel" thing .. but not sure what to set it to (instead of stable) .. Where do I see available channels!09:52
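
A hedged sketch of the "config --channel" thing kim0 mentions; the exact channel name and the assumption that the CDK charms expose a channel option come from the conversation, not from verification here:

    # point the charms at a specific snap channel (channel name is illustrative)
    juju config kubernetes-master channel=1.6/stable
    juju config kubernetes-worker channel=1.6/stable
    # available channels for a given kubernetes snap can be listed with snap info, e.g.:
    snap info kubectl
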
kjackal_Zic upgrading snaps on lxd will not work due to this bug: https://bugs.launchpad.net/snapd/+bug/166865909:58
mupBug #1668659: Using snaps in lxd can be problematic during refresh (using squashfuse) <snapd:In Progress by zyga> <https://launchpad.net/bugs/1668659>09:58
Zickjackal_: hmm, seems like it will be easier if I run my lab like a baremetal cluster, so without LXD and just classical VMs :/09:59
Zicit's not a problem from a resources point of view, but I prefer to use LXD instead of classical VMs for labs10:00
kjackal_Zic yes, physical machines should work.10:01
kjackal_kim0: there is no easy way to pin snaps to 1.6.1 during deployment and then unpin them in order to trigger the upgrade10:02
Zickjackal_: for forcing 1.5.3, I just need to specify the old version of all charms composing the CDK bundle, right?10:02
Zic(in the Juju GUI)10:03
Zicbefore deploying10:03
kjackal_the 1.6.x snaps are in the same track and in a way new versions shadow the old ones (nothing special, it's like upgrading debs)10:03
kjackal_kim0: ^10:03
kim0Any option to demo "upgrades" to a friend ?10:04
kjackal_Zic: kim0: You can upgrade from 1.5.3 to 1.6.x . Give me a moment to get the revisions of the bundles and the upgrade procedure10:06
kim0it would be good to know how to get those revisions too I guess10:07
ZicI extracted them from our production cluster, which is still on 1.5.3, since I'm looking for a way to build a 1.5.3 lab-cluster where our customer can put all his pods, then upgrade this lab to 1.6.2, and if it succeeds, upgrade the production-cluster10:08
Zicthe production-cluster is running on baremetal & VMs at our DC10:08
Zicthe lab-cluster was just a machine with LXD10:08
Zicbut as conjure-up does not allow deploying a specific version of CDK, I'm a bit screwed :) I think I will need to build a full CDK cluster for the lab also10:09
kjackal_Zic: kim0: Here is the upgrade process from 1.5 to 1.6 https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/10:09
kim0Thanks!10:09
kjackal_Zic: kim0: you can lookup the charm revisions from the bundle. Give me a moment10:10
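
While waiting for the exact revisions, a hedged illustration of pinning charm revisions at deploy time; the cs:~containers namespace and the XX/YY revision numbers are placeholders, not the real 1.5.3 values:

    # deploy a specific charm revision instead of the latest one in the store
    juju deploy cs:~containers/kubernetes-master-XX
    juju deploy cs:~containers/kubernetes-worker-YY
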
Zickjackal_: yup, but then I can only do that through the juju CLI or GUI, not in conjure-up, so no LXD labs :( I will need to pop some VMware VMs, which upsets me as a big fan of LXD :)10:11
BlackDexHello, I have an environment where juju seems to be very, very slow.10:24
BlackDexAnd i see these messages in the logs of juju-controller10:24
BlackDex2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:197 Remote applications: map[]10:25
BlackDex2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:552 error fetching public address: "no public address(es)"10:25
BlackDex2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:558 no IP addresses fetched for machine "arh4pg"10:25
BlackDex2017-05-05 10:20:48 DEBUG juju.apiserver.client status.go:814 error fetching public address: no public address(es)10:25
BlackDex2017-05-05 10:20:48 DEBUG juju.apiserver request_notifier.go:140 -> [410] user-admin 121.930409ms {"request-id":2,"response":"'body redacted'"} Client[""].FullStatus10:25
BlackDex2017-05-05 10:20:48 DEBUG juju.apiserver request_notifier.go:80 [410] user-admin API connection terminated after 182.883216ms10:25
BlackDexdoes anyone know what the problem could be?10:25
BlackDexI have tested the network, speed is fine, ping (incl. flood-ping) all works fine10:26
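
A couple of commands commonly used to dig into this kind of slowness, offered as a sketch rather than a diagnosis:

    # stream the model/controller logs instead of grepping files on the controller
    juju debug-log --replay --level WARNING
    # if the DEBUG chatter itself is the noise, raise the logging threshold
    juju model-config logging-config="<root>=WARNING"
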
kjackal_BlackDex: how do you measure the speed of Juju? Why do you say it seems slow?10:27
magicaltroutI validate my juju speed against kjackal_ 's response time10:30
magicaltroutfor example10:30
kjackal_what's up magicaltrout? I saw your http://spicule.co.uk/juju-bigdata.html ! Nice!10:33
magicaltroutactually we've just started work yesterday on our new "Spicule Big Data Platform" stuff10:34
magicaltroutfor supported big data workloads via Juju/JAAS10:34
magicaltroutso there will be a load more new stuff coming soon10:34
kjackal_well done!10:35
magicaltroutwe're also going to be bringing a couple of column store databases to Juju for our supported "Business Analytics" environment10:36
kjackal_magicaltrout: which ones?10:42
magicaltroutprobably greenplum and monetdb10:42
magicaltroutso people can pick based on data volume and complexity etc10:42
kjackal_any plans to integrate asterixdb? https://asterixdb.apache.org/ i know one of the professors behind this from UCLA10:43
kjackal_(I think)10:43
magicaltroutyeah I like asterixdb, I was tracking it through incubation10:44
magicaltroutit's certainly on the roadmap10:44
kjackal_nice10:45
magicaltrouti'd also like to do Druid.io10:45
magicaltroutneed to find some customers first I guess :)10:47
BlackDexkjackal_: Well, I have deployed a lot of openstack bundles, and this one is taking a long time11:32
BlackDexand magicaltrout the response time was quicker than the time `juju deploy` took ;)11:36
=== grumble is now known as 14WAA001L
=== 14WAA001L is now known as grumble
lazyPowerstub - are you around? (it's super late and past EOW so I'm not hopeful)13:16
Budgie^Smoreo/ juju world13:18
lazyPowero/ Budgie^Smore13:19
=== mup_ is now known as mup
vmorrishi all, running juju and lxd here on xenial. I'm soak testing the openstack-on-lxd installation and keep running into a problem where not all the containers will start properly. one always gets hung up in the forkstart and causes things like `lxc list` and `juju status` to hang13:34
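
A minimal sketch of how a container stuck in forkstart is often inspected, assuming the LXD daemon still responds; the container name below is hypothetical:

    # the per-container LXC log usually records why forkstart failed
    lxc info juju-machine-0-lxd-3 --show-log
    # kernel messages frequently show apparmor or cgroup errors for stuck containers
    dmesg | tail -n 50
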
=== petevg_afk is now known as petevg
* Budgie^Smore is still waiting ... 14:09
ZiclazyPower: hi! I have a small question today :) -> if kube-api-loadbalancer is KO, does it affect the quality of service? or just the cluster health (daily operations)14:21
lazyPowerZic: when you say KO you mean knocked out?14:21
Zicyup14:21
lazyPoweras in taken offline?14:21
Zicexactly14:21
lazyPowerit'll stop your kubelets from being able to contact the masters. so your kubelets will enter a degraded state14:22
lazyPowerbut the workloads should remain running in their current formation14:22
Zicok, so no "PodEviction" action ?14:22
lazyPowerbasically you will be unable to effect change to your workloads, but you will stay online.14:22
lazyPowercorrect14:22
lazyPowerthe workers are on a POLL'ing loop with the configured master14:22
Zicso normally, my "user-traffic" stays OK, but my operations (sysadmin & dev stuff) will be degraded14:23
lazyPowerwithout an active master to talk to, they just kind of go funky14:23
lazyPoweryeah14:23
lazyPoweruser traffic should be unaffected14:23
ZiclazyPower: thanks, because we had an outage between Paris & New York and it had some bad effects on user traffic14:23
lazyPowerhmmm14:23
Zicbut I suspect it's because we have no kube-dns pod on the New York nodes14:23
lazyPoweryeah14:23
lazyPowerthat sounds like the likely culprit14:24
lazyPoweryeah the ingress tables *should* be cached on the worker units14:24
lazyPowerif that's not the case I've missed something in the network design of ingress14:24
lazyPowerZic we should be able to simulate this failure pretty easily if you can replicate a smallish cluster where kube-dns doesn't exist in your NA nodes.14:49
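
A sketch of how that simulation could be observed with kubectl, assuming a kubeconfig that still reaches the apiserver directly:

    # workers that lose their API endpoint eventually report NotReady
    kubectl get nodes
    # check which nodes the kube-dns pods actually landed on, per region
    kubectl get pods -n kube-system -o wide | grep kube-dns
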
ZiclazyPower: actually the only cases I feared were "PodEviction launched" if the node can't contact the master, or some unknown requests that the node needs to pass to the master before responding to the actual user (like Ingress things)14:54
lazyPowerthe "pod connecting to the master before responding to user" is what i'm concerned with as well14:55
lazyPowerthat should be cached, but i might have a misunderstanding there14:55
lazyPowerso long as the node has a listing of endpoints, it shouldn't have to contact the master. but if that table has a TTL that tells it when to re-check the endpoint mapping, we're in trouble.14:56
ZicI sincerely expect it will, because my model is kinda screwed up if not :D14:56
ZiclazyPower: yeah, it's what I expect also, it will be an outage only if the kubernetes-worker reboot during it can't contact the kube-apilb14:57
Zicbut if not, I see no reason why it will14:57
lzyPowerlazyPower: you lazy lout15:13
lazyPoweroh hey lzyPower. <315:13
ZiclzyPower: do you have any schedule (no need for a precise date :p) for enabling cloud-integration and federated clusters?15:22
lzyPowerZic: a member of the team is working on federation. we just landed a kube-fed snap15:23
lzyPowerZic: and cloud-native still has no ETA. there's alpha level code there but it hasn't been touched in a couple weeks. It's high priority on this cycle's roadmap though15:23
Zicthanks for the update :)15:24
Ziccloud-integration is a "must-have" for one of our customers, who does hacky things without it :/ he asks about cloud-integration at each K8s upgrade15:25
Zicfederated-cluster is more my own request :) because I know that the model I use for our multi-region production cluster is not that good15:25
Zicthe K8s docs recommend a federated cluster for multi-region15:25
Zic(setting aside my earlier question about kube-apilb, we have some trouble with kube-dns & multi-region without federation, as a request from "region A" can be resolved by a kube-dns in "region B", since there is only one kube-dns endpoint for the entire cluster)15:26
Zic(about response time; not so important as we have good transit providers between EU & US, but there is still this 0.300ms thing :/)15:28
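
For what it's worth, the single cluster-wide kube-dns endpoint Zic describes can be inspected like this (a sketch; names assume the default kube-system deployment):

    kubectl get svc kube-dns -n kube-system
    kubectl get endpoints kube-dns -n kube-system
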
=== lzyPower is now known as lazyPower
lazyPowerZic: yeah, fed is def where you want to go with that15:34
lazyPowerwe're actively cycling on it, and I suspect cloud-native integration will resume later this month or early next month15:35
lazyPowerZic: atm I'm working on user auth and soak testing, so it's all good stuff for ya :)15:35
Zicoh nice too :p we still have our "Kubernetes as a Service" project somewhere15:36
Zicuser auth will help15:36
Zic(the day we can give a customer an exclusive account with an exclusive K8s namespace... \o/)15:37
=== frankban is now known as frankban|afk
tychicusLazyPower: just got CDK 1.6.2 up and running, maybe I missed something in the docs, but I can't seem to find what the username and password are for the k8s dashboard17:40
lazyPowertychicus: it's behind TLS key validation. You need to run kubectl proxy to establish a secure tunnel and then you can access the dashboard at http://localhost:8001/dashboard17:40
tychicusok17:40
lazyPowertychicus: https://github.com/juju-solutions/bundle-canonical-kubernetes/tree/master/fragments/k8s/cdk#accessing-the-kubernetes-dashboard17:41
tychicus"To access the dashboard, you may establish a secure tunnel to your cluster with17:42
tychicusthe following command:"17:42
tychicusfound it, sorry17:42
lazyPowerhey no problem :)17:42
lazyPowerthat readme is info dense17:42
lazyPowereasy to miss that blurb17:42
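
The tunnel described above boils down to the following; the /dashboard path is the one quoted in the conversation and may differ between CDK releases:

    # open a local authenticated proxy to the apiserver
    kubectl proxy
    # then browse to the dashboard through it:
    #   http://localhost:8001/dashboard
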
tychicusLazyPower: just as an FYI, rbd requests for persistent volume claims are still DENIED by apparmor17:59
lazyPower:/  this is troublesome17:59
tychicusis that something that you would like me to file a bug for?17:59
lazyPoweryeah17:59
lazyPowerwe're going to have to take this to the snappy team and see about getting a plug/slot created for this17:59
tychicusshould the bug be filed against juju or against snappy?18:01
lazyPowerAgainst the K8s charms18:01
lazyPowertychicus: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues18:01
tychicusthanks18:02
tychicuslazyPower: bug report is in.  Thanks again for all your help.18:28
lazyPowertychicus: thanks for filing that bug. I'll take this to the snappy team at my first availability and see what steps we can take to resolve this18:29
lazyPowertychicus: i'll keep communication high on that bug as well so you know when there's something for you to test with the rbd autoprovisioner.18:29
tychicussounds good, I'm happy to test18:30
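
A sketch of how such apparmor denials are typically confirmed on a worker node, e.g. for attaching evidence to the bug report:

    # denials show up in the kernel log
    dmesg | grep -i 'apparmor.*denied'
    # or, via journald on a systemd host
    journalctl -k | grep -i denied
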
bdx\q21:31
bdx#maas21:46
=== bdx73 is now known as bdxbdx
=== bdxbdx is now known as bdx
bdxstokachu, marcoceppi: cs:~jamesbeedy/lounge-irc-8, https://github.com/jamesbeedy/layer-lounge-irc23:53
bdxhttps://review.jujucharms.com/reviews/12223:57
bdxalso https://jujucharms.com/u/jamesbeedy/lounge-irc/823:58
bdxit has sasl support23:59
bdxso it works on aws23:59
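
A minimal sketch of trying that charm out; the charm URL is the one bdx posted, and no config options are assumed:

    juju deploy cs:~jamesbeedy/lounge-irc-8
    juju status lounge-irc
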
