[00:03] BlackDex: hi - best to keep charm Qs in the channel here. It's a team effort and someone else may be able to help you in a more timely fashion due to time zone differences. :-) === mskalka is now known as mskalka|afk [05:24] bdx, is that the proxy error message again? [06:13] re here [06:13] all my Ingress are fine since yesterday :) [06:13] lazyPower: do I need to disable something to shut the debuf off `juju run-action debug kubernetes-master/0` command of yesterday? [06:57] hmm, I rebooted all the kubernetes-worker to test the resilience, all Ingress controller are CLBO, kube-dns and kubernetes-dashboard also :/ [06:58] http://paste.ubuntu.com/23879175/ [06:59] so I identify the step-to-reproduce in my cluster: rebooting the node which host the default-http-backend-0wt64 pod cause this kind of error [07:02] http://paste.ubuntu.com/23879186/ <= kube-dns logs [07:02] Killing container with docker id e5f551e889bc: pod "kube-dns-3216771805-w2853_kube-system(854c0971-e4cc-11e6-b87d-0050569e741e)" container "dnsmasq" is unhealthy, it will be killed and re-created [07:02] Liveness probe failed: HTTP probe failed with statuscode: 503 [08:07] if you have any reviews before I fill the issue on Kubernete's GitHub [08:21] Zic: Most of the k8s team is asleep, unfortunately [08:21] hell, I don't even know why I'm awake === marcoceppi_ is now known as marcoceppi [09:18] Hi marcoceppi , you still there? :) [09:18] junaidali: o/ yes [09:21] marcoceppi: have you got time to look into the issue? [09:21] junaidali: which issue? [09:21] aws credentials [09:22] junaidali: oh, jeeze, ofc [09:22] junaidali: give me 10 mins and I'll be back with some more info [09:51] marcoceppi: no problem, and it's the weekend btw :) [14:05] Zic: The debug action is a one-shot thing, no need to shut it off. Did your ingress controllers eventually come back up? [14:13] ryebot: nope, and it is atm stuck in CLBO since this morning [14:14] same for kube-dns and kubernetes-dashboard [14:15] Zic: Sorry about that. Please move forward with your bug. I'll see if I can repro your issue soon. [14:17] Zic: In the meantime, you might try destroying them all and seeing if they come back up (our charm should reinstate them automatically). === hml_ is now known as hml [14:30] ryebot: about Ingress controller or also the kube-dns and kubernetes-dashboard pods? [14:46] ryebot: (I did nothing for now) after 109 restarts (saw in the Restart column of a `kubectl get pods -o wide --all-namespaces`), all my Ingress and kubenertes-dashboard/kube-dns are running [14:46] but I suspect if I reboot again some nodes it will stay in CLBO for hours :/ [14:47] I'm preparing the issue for GitHub [15:02] https://github.com/kubernetes/kubernetes/issues/40648 (cc ryebot lazyPower) [15:03] Zic: Excellent, thanks! We'll be tracking this. [15:05] sadly the default-http-backend pod have no logs :/ [15:05] (kubectl logs on it return nothing) [19:49] having issues with `juju attach` ... it doesn't seem my resource ever makes it to the charm [19:50] even though uploading to the controller is successful [19:50] my charm just doesn't get the resource [19:50] I wonder if it has something to do with the hosted controller [19:52] in all honesty .... I can't upload resources to the charmstore, can't upload to the controller .... I guess resources are just broken right now? [19:52] thats the impression all my new uses have at least