blahdeblah | BlackDex: hi - best to keep charm Qs in the channel here. It's a team effort and someone else may be able to help you in a more timely fashion due to time zone differences. :-) | 00:03 |
---|---|---|
=== mskalka is now known as mskalka|afk | ||
stokachu | bdx, is that the proxy error message again? | 05:24 |
Zic | re here | 06:13 |
Zic | all my Ingress are fine since yesterday :) | 06:13 |
Zic | lazyPower: do I need to disable something to shut off the debug from yesterday's `juju run-action debug kubernetes-master/0` command? | 06:13 |
Zic | hmm, I rebooted all the kubernetes-workers to test resilience; all the Ingress controllers are in CrashLoopBackOff (CLBO), and kube-dns and kubernetes-dashboard too :/ | 06:57 |
Zic | http://paste.ubuntu.com/23879175/ | 06:58 |
Zic | so I identified the steps to reproduce in my cluster: rebooting the node which hosts the default-http-backend-0wt64 pod causes this kind of error | 06:59 |
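For anyone trying to reproduce this, the node hosting a given pod appears in the NODE column of `kubectl get pods -o wide`; a minimal sketch using the pod name from the report above:

```sh
# Show which node hosts the default-http-backend pod (see the NODE column).
kubectl -n kube-system get pod default-http-backend-0wt64 -o wide

# Per the report above, rebooting that node triggers the CrashLoopBackOff.
```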
Zic | http://paste.ubuntu.com/23879186/ <= kube-dns logs | 07:02 |
Zic | Killing container with docker id e5f551e889bc: pod "kube-dns-3216771805-w2853_kube-system(854c0971-e4cc-11e6-b87d-0050569e741e)" container "dnsmasq" is unhealthy, it will be killed and re-created | 07:02 |
Zic | Liveness probe failed: HTTP probe failed with statuscode: 503 | 07:02 |
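The 503 indicates the dnsmasq container's HTTP liveness endpoint is failing; a sketch of how the probe definition and related events could be inspected with standard kubectl (pod name taken from the log above):

```sh
# Show the liveness probe configuration and recent events for the pod.
kubectl -n kube-system describe pod kube-dns-3216771805-w2853

# Correlate probe failures across the cluster via events.
kubectl get events --all-namespaces | grep -i unhealthy
```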
Zic | let me know if you have any reviews before I file the issue on Kubernetes' GitHub | 08:07 |
marcoceppi_ | Zic: Most of the k8s team is asleep, unfortunately | 08:21 |
marcoceppi_ | hell, I don't even know why I'm awake | 08:21 |
=== marcoceppi_ is now known as marcoceppi | ||
junaidali | Hi marcoceppi, you still there? :) | 09:18 |
marcoceppi | junaidali: o/ yes | 09:18 |
junaidali | marcoceppi: have you got time to look into the issue? | 09:21 |
marcoceppi | junaidali: which issue? | 09:21 |
junaidali | aws credentials | 09:21 |
marcoceppi | junaidali: oh, jeeze, ofc | 09:22 |
marcoceppi | junaidali: give me 10 mins and I'll be back with some more info | 09:22 |
Zic | marcoceppi: no problem, and it's the weekend btw :) | 09:51 |
ryebot | Zic: The debug action is a one-shot thing, no need to shut it off. Did your ingress controllers eventually come back up? | 14:05 |
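For reference, a minimal sketch of how such a one-shot action is run and its output retrieved (assuming Juju 2.x action commands; `juju run-action` takes the unit first, then the action name):

```sh
# Queue the one-shot debug action; this prints an action ID.
juju run-action kubernetes-master/0 debug

# Fetch the action's result once it completes, using the printed ID.
juju show-action-output <action-id>
```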
Zic | ryebot: nope, it has been stuck in CLBO since this morning | 14:13 |
Zic | same for kube-dns and kubernetes-dashboard | 14:14 |
ryebot | Zic: Sorry about that. Please move forward with your bug. I'll see if I can repro your issue soon. | 14:15 |
ryebot | Zic: In the meantime, you might try destroying them all and seeing if they come back up (our charm should reinstate them automatically). | 14:17 |
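A sketch of the suggested approach: delete the stuck pods and let their controllers (and the charm) recreate them; the pod names here are from the pastes above and would differ per cluster:

```sh
# Delete the crash-looping pods; their controllers (or the charm)
# should recreate them automatically.
kubectl -n kube-system delete pod default-http-backend-0wt64
kubectl -n kube-system delete pod kube-dns-3216771805-w2853

# Watch the replacements come back up.
kubectl -n kube-system get pods -w
```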
=== hml_ is now known as hml | ||
Zic | ryebot: the Ingress controllers only, or also the kube-dns and kubernetes-dashboard pods? | 14:30 |
Zic | ryebot: (I did nothing for now) after 109 restarts (seen in the RESTARTS column of `kubectl get pods -o wide --all-namespaces`), all my Ingress and kubernetes-dashboard/kube-dns pods are running | 14:46 |
Zic | but I suspect that if I reboot some nodes again they will stay in CLBO for hours :/ | 14:46 |
Zic | I'm preparing the issue for GitHub | 14:47 |
Zic | https://github.com/kubernetes/kubernetes/issues/40648 (cc ryebot lazyPower) | 15:02 |
ryebot | Zic: Excellent, thanks! We'll be tracking this. | 15:03 |
Zic | sadly the default-http-backend pod has no logs :/ | 15:05 |
Zic | (kubectl logs on it returns nothing) | 15:05 |
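When `kubectl logs` returns nothing for a crash-looping pod, the previous container instance's logs can sometimes still be recovered; a minimal sketch:

```sh
# Fetch logs from the previous (crashed) container instance.
kubectl -n kube-system logs default-http-backend-0wt64 --previous

# For multi-container pods, name the container explicitly with -c.
kubectl -n kube-system logs kube-dns-3216771805-w2853 -c dnsmasq --previous
```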
bdx | having issues with `juju attach` ... it doesn't seem like my resource ever makes it to the charm | 19:49 |
bdx | even though uploading to the controller is successful | 19:50 |
bdx | my charm just doesn't get the resource | 19:50 |
bdx | I wonder if it has something to do with the hosted controller | 19:50 |
bdx | in all honesty .... I can't upload resources to the charmstore, can't upload to the controller .... I guess resources are just broken right now? | 19:52 |
bdx | that's the impression all my new users have, at least | 19:52 |
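For context on the flow being debugged: a resource is attached to a deployed application on the controller, after which the charm should be able to fetch it; a minimal sketch (assuming Juju 2.x resource commands; the application and resource names are hypothetical):

```sh
# Attach a local file as a resource for a deployed application.
juju attach myapp my-resource=./payload.tgz

# Verify what the controller and the application's units report.
juju list-resources myapp
```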