[06:31] how do I give someone ssh access to the controller?
[06:31] I've done juju grant -c [controller] [user] superuser
[06:32] and that did *something* but now I can't figure out how to access it
[06:32] juju ssh -m controller 0 says that there is no such model ctrl:[user]/controller
[06:32] I've tried juju ssh -m admin/controller 0 but that also didn't work
=== frankban|afk is now known as frankban
=== caribou_ is now known as caribou
[13:24] kklimonda: I don't think add-user actually adds the ssh key. There's a juju add-ssh-key command that has to be run in order for that to work. rick_h would know best though
[13:24] lazyPower: correct, atm the admin has to add the key for the user
[13:24] lazyPower: kklimonda it's a known issue and there's a task for the future to make keys end-user manageable
[13:25] ty for the alley-oop rick_h
[13:42] I'm experiencing some extreme craziness
[13:43] c4-type instances have some issue with lxd from what I can tell
[13:43] not sure if it's juju, or lxd, or what
[13:48] the issue is happening with t2 instances too
[13:54] lazyPower: hi, long time with no problems but today I have one :) on our test cluster (happily...) upgraded from 1.5.3 to 1.6.1, kube-dns keeps crashing along with kubernetes-dashboard, saying this kind of thing: http://paste.ubuntu.com/24413931/
[13:54] juju status is all green
[13:54] seems like a problem with endpoint services that don't respond (last part of my pastebin)
[13:55] as it's a test cluster, I tried to reboot every single machine in it, with no more luck
[13:57] http://paste.ubuntu.com/24413949/ <= same kind of message from a kubectl logs on the kube-dns container
[13:58] lxd is failing across the board for me right now .... on aws instances
[14:00] http://paste.ubuntu.com/24413976/
[14:00] ^ is something I've been doing on a daily basis
[14:00] does look pretty broken
[14:01] I woke up early to test out some newnew, and that's what I get
[14:01] yeah ... at first I thought it was specific to instance type ... but it's happening on all instance types (at least the 5 I've tried)
[14:02] then I thought it might be a juju 2.1.2 thing .... as I just created my first model on 2.1.2 .... but I just verified it's happening on 2.0.3 models as well
[14:03] @team, what is going on here?
[14:03] bdx: we're going to need at bare minimum a bug report with a juju-crashdump log (you can report skinny, we don't need the charm artifacts)
[14:04] lazyPower: is crashdump a plugin?
[14:04] bdx: snap install juju-crashdump --classic, juju-crashdump -s should get you moving
[14:04] nice
[14:04] thx
=== salmankhan1 is now known as salmankhan
[14:09] lazyPower: http://paste.ubuntu.com/24414014/
[14:09] lutostag: ping
[14:09] pong
[14:09] son of a
[14:09] lutostag: I think we found a scenario where crashdump is misbehaving because of unstarted units
[14:10] bdx: --edge
[14:10] bdx: to be clear, snap refresh juju-crashdump --edge --classic
[14:10] (fixed that bug, need to release it to stable)
[14:10] lutostag: ty <3
[14:12] bdx_: 16:10 < lazyPower> bdx: to be clear, snap refresh juju-crashdump --edge --classic
[14:14] rip
[14:15] crashdump now spams me with
[14:15] http://paste.ubuntu.com/24414051/
[14:15] lol
[14:15] oh no
[14:15] lazyPower: I appreciate the willingness to help out nonetheless
[14:15] bdx_: that's fine
[14:15] bdx_: the spam is expected
[14:15] ok
[14:15] it's doing a lot of subprocess calls, and that cgo bit is golang doing what it does best
[14:15] gotcha .. nice
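For reference, a rough sketch of the admin-side workflow rick_h and lazyPower describe above: add-user/grant alone do not install an SSH key, so the admin has to add the user's public key to the controller model before juju ssh to machine 0 can work. The user name, controller name and key file below are placeholders:

    # as the controller admin
    juju add-user alice
    juju grant -c mycontroller alice superuser

    # add-user does not install an SSH key; add the user's public key to the
    # controller model so its machines accept the user
    juju switch mycontroller:admin/controller
    juju add-ssh-key "$(cat alice_id_rsa.pub)"

Once the key is in place, the user should be able to reach the controller machine with juju ssh -m mycontroller:admin/controller 0.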
[14:16] it's that or snap, I'm unconvinced on which level is spamming that
[14:16] but it's known and expected all the same, it takes a bit to grab everything on a large deployment, I hope you passed -s or --skinny so it doesn't spend forever nabbing all the charm source
[14:17] the idea behind crashdump is we've professionalized nabbing state and debug/status messaging so we can tease apart the deployment artifacts and find root causes. Feel free to inspect the package and see what we're grabbing
[14:17] any ideas on improvement are welcome
[14:18] oh ...
[14:18] ha
=== salmankhan1 is now known as salmankhan
[14:19] I shall, thx
[14:21] (lazyPower: did you see my last messages, or did they scare you so much that I must be cursed? :D)
[14:23] lazyPower: these models are on the beta controller
[14:23] Zic: totally missed it, what's up?
[14:23] bdx_: so something went fubar during collection or...?
[14:23] lazyPower: do you think there is a possibility that juju-crashdump can't collect the info it needs because my user doesn't have permission?
[14:24] lazyPower: (repasting my messages & pastes here: http://paste.ubuntu.com/24414104/)
[14:24] lutostag: have we tested crashdump with jaas?
[14:24] no ... it's just spamming hard though with "runtime/cgo: pthread_create failed: Resource temporarily unavailable"
[14:24] bdx_: it takes a while, seriously. it's nabbing a ton of data
[14:25] ok
[14:25] on a 4-unit small k8s cluster the collection can take ~5 minutes.
[14:25] to sum up: seems I have a Service/Endpoint problem on my K8s test cluster upgraded to 1.6
[14:25] but I didn't pass --skinny.
[14:25] Zic: looking now
[14:25] thx
[14:26] Zic: check on flannel on the unit running the dashboard, is the flannel.1 interface up?
[14:27] Zic: also, check that the kube-proxy service is started and not in error
=== salmankhan1 is now known as salmankhan
[14:28] http://paste.ubuntu.com/24414128/
[14:28] flannel is OK but kube-proxy has crashed
[14:28] Zic: that's why it's failing
[14:28] let's dig into why kube-proxy is dead, anything in the logs?
[14:28] lazyPower, lutostag: the last message it gave after 5 mins of spam was "runtime/cgo: need to run as root or suid"
[14:29] I'm guessing it needs to be run as root?
[14:29] hmmm
[14:29] lazyPower: I'm running journalctl -u kube-proxy but nothing except start/stop/backoff from systemd, do I have better logs somewhere else?
[14:29] Zic: can you just recycle the daemon? does it stick or does it immediately crash?
[14:29] alright ... running again as root
[14:30] bdx_: hang on, you shouldn't need to run it as root
[14:30] lutostag: ^ wat
[14:30] lazyPower: http://paste.ubuntu.com/24414138/ <= logs from a fresh restart
[14:30] error code 203 :x
[14:31] Cynerva: ryebot -- post standup, let's dig into this together ^
[14:31] lazyPower: +1
[14:31] Zic: need you on ice for a bit while we do standup, will return to ask more questions
[14:32] here's the bug https://bugs.launchpad.net/juju/+bug/1684143
[14:32] Bug #1684143: applications deployed to lxd on aws instances failing
[14:32] I'll attach crashdump output when I can get it working
[14:34] lazyPower: no problem, thanks :)
[14:38] lazyPower: I found this in plain-text syslog: syslog.1:Apr 18 15:20:10 ig1-k8s-04 systemd[1163]: kube-proxy.service: Failed at step EXEC spawning /usr/local/bin/kube-proxy: No such file or directory
[14:38] Zic: oooo snap, that looks like a stale hash. it should be spawning from /snap/bin/kube-proxy
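The checks lazyPower asks Zic for just above can be run directly on the affected worker. A minimal sketch, assuming the snap-based 1.6 layout the rest of this thread confirms (the bare kube-proxy.service unit is the stale pre-snap one):

    # is the flannel overlay interface up?
    ip addr show flannel.1

    # is kube-proxy (the snap service) running, and why did it die?
    systemctl status snap.kube-proxy.daemon
    journalctl -u snap.kube-proxy.daemon --since "1 hour ago"

    # the leftover pre-snap unit points at a binary that no longer exists,
    # which is exactly what the syslog line above shows
    grep ExecStart /lib/systemd/system/kube-proxy.service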
[14:39] the log is from yesterday, I'm looking at the systemd .service unit to see if it's really the case
[14:39] hmm, I have similar logs from our restart test earlier
[14:40] http://paste.ubuntu.com/24414178/
[14:40] so the ExecStart is wrong :)
[14:41] -r--r--r-- 1 root root 425 Feb 16 11:15 /lib/systemd/system/kube-proxy.service
[14:41] not touched by the snap upgrade
[14:42] seems I hit the spot! :D
[14:43] Zic: before you update that, hang on
[14:43] the snaps have a different system exec scheme, they use bash wrappers and a systemd script that gets installed on snap install.
[14:47] (for info, kube-proxy is dead on all kubernetes-worker units, I just checked that, not only on the kube-dns/kubernetes-dashboard nodes)
[14:57] Zic: systemctl status snap.kube-proxy.daemon
[14:58] http://paste.ubuntu.com/24414247/
[15:00] xref with https://github.com/kubernetes/kubernetes/issues/26003
[15:00] Zic: are you using network policies?
[15:01] this test cluster is not customized at all, the only parameter we change was docker_from_upstream
[15:01] changed*
[15:01] hmm
[15:02] ok still in standup, will circle back in a sec
[15:02] (docker_from_upstream was set to "true" before the upgrade to 1.6)
[15:04] Zic: this is in reference to your workload objects
[15:05] Zic: sudo iptables --list
[15:06] let's see if it even created the iptables rule chains to do the service IP forwarding
[15:06] paste.ubuntu.com/24414272/
[15:06] http://paste.ubuntu.com/24414272/
[15:10] hmm, bdx, this is with a jaas deployment?
[15:11] I'll see if I can get a one-off run to test that real quick
[15:17] that seems fine...
[15:17] * lazyPower ponders
[15:20] Zic: I just remembered hitting something like this during my upgrade testing. What eventually got me into a working state was to recreate the pods that are failing
[15:21] Cynerva: was my first attempt :)
[15:21] Zic: which templates did you use?
[15:21] Zic: the ones found in /etc/kubernetes?
[15:21] the ones at ~/cdk
[15:22] oops
[15:22] precisely at ~/snap/cdk-addons/current/addons :)
[15:22] ok
[15:23] dang, okay
[15:23] through a kubectl replace -f
[15:23] hmm I wonder if that recreates the pods? or just the deployment objects?
[15:24] it *should* have recreated the pods
[15:24] in nuke/repave style
[15:24] it doesn't blue/green unless you specify a rolling update
[15:24] okay
[15:25] Zic: for grins, on the worker
[15:25] can you curl the http endpoint for your kubernetes-apiserver VIP?
[15:25] curl https://10.152.183.1
[15:25] (I tried something more aggressive: http://paste.ubuntu.com/24414342/)
[15:25] (about kubectl replace)
[15:26] ok
[15:26] don't know if all these errors are ignorable
[15:26] so, that tells me any attempt to replace has failed
[15:26] yup :(
[15:26] you'll need to kubectl rm -f
[15:26] and then reschedule
[15:26] this *may* fix the issue
[15:26] but I doubt it
[15:26] ah, I didn't try this one, I will immediately
[15:27] kubectl rm seems to not exist (?)
[15:27] delete?
[15:27] ya
[15:27] just checking if you're awake ;)
[15:27] :D
[15:28] http://paste.ubuntu.com/24414356/
[15:28] container still creating
[15:28] I'm waiting a bit
[15:29] http://paste.ubuntu.com/24414359/ <= the second line is strange
[15:30] about the curl test: root@ig1-k8s-01:~# curl https://10.152.183.1
[15:30] Unauthorized
[15:30] OH
[15:30] Well that's good!
[15:31] if your VIP is responding, it's not a networking issue
[15:31] and we expect that since you don't have the tls key on that curl command. had you included the k8s key(s) on that curl request it would 404 (I think) you, as it begins at /api.
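A rough version of the two checks above: whether kube-proxy ever programmed the service-IP forwarding rules, and whether the apiserver service VIP answers from the worker. The 10.152.183.1 address comes from the pastes in this conversation, and an "Unauthorized" body is the expected reply to an unauthenticated request:

    # kube-proxy (in iptables mode) writes the service forwarding rules into
    # the nat table; an empty or missing KUBE-SERVICES chain means it never ran
    sudo iptables -t nat -L KUBE-SERVICES | head

    # probe the kubernetes service VIP from the worker itself; -k skips
    # certificate verification since we only care about reachability
    curl -k https://10.152.183.1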
[15:32] http://paste.ubuntu.com/24414374/ ContainerCreating finished but... it's in another bad state now :(
[15:32] don't know why it reached an ImagePullBackOff
[15:33] Zic: give it a sec
[15:33] that can happen when there are issues hitting the gcr.io registry
[15:33] temporary networking issue, saturation, noisy neighbors, etc.
[15:33] 24s 24s 1 kubelet, ig1-k8s-05 spec.containers{influxdb} Warning Failed Failed to pull image "gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1": rpc error: code = 2 desc = Error pulling image (v1.1.1) from gcr.io/google_containers/heapster-influxdb-amd64, Get https://gcr.io/v1/images/55d63942e2eb6a74ea81cbfccd95ef0f44f599a04ed4a46a41dc868a639c847d/ancestry: dial tcp 64.233.166.82:443: i/o timeout
[15:33] seems like
[15:33] yeah
[15:34] oh, except grafana, all pods are now Running
[15:34] I suspect you're experiencing an outage atm. let me check here
[15:34] \o/
[15:34] nice
[15:34] so it self-resolved
[15:34] kube-system kube-dns-806549836-w842j 2/3 CrashLoopBackOff 3 6m 10.1.79.7 ig1-k8s-02
[15:34] * lazyPower chalks it up to internet gremlins
[15:34] kube-system kubernetes-dashboard-2917854236-qmvn3 0/1 Error 5 6m 10.1.36.7 ig1-k8s-04
[15:34] spoke too fast :'(
[15:34] Zic: you're playing with my heart man
[15:34] :'(
[15:34] ok, let's start with dns
[15:35] was blocked in CLBO for so much time I was too happy to see a Running state :(
[15:35] what's the story with the dns clbo?
[15:35] failed health check, failed to reach apiserver?
[15:35] 1m 1m 2 kubelet, ig1-k8s-02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubedns pod=kube-dns-806549836-w842j_kube-system(c5838bc9-2514-11e7-b7ef-005056949324)"
[15:35] let me do a kubectl logs on it
[15:37] http://paste.ubuntu.com/24414398/
[15:38] grafana hit Running and stayed in Running, but kube-dns & kubernetes-dashboard are stuck in CLBO now
[15:38] http://paste.ubuntu.com/24414411/ <= for the dashboard
[15:40] hmm
[15:40] I'm uncertain why the dashboard isn't able to reach the VIP
[15:40] but I'm still concerned about kube-dns
[15:40] looks like the sidecar for dnsmasq metrics is what's causing it to fail
[15:42] Zic: give me a repeat describe for the dns pods now that they are out of errimgpull
[15:42] I got more info about kubernetes-dashboard through a direct `docker logs` on the local worker: Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.152.183.1:443/version: dial tcp 10.152.183.1:443: i/o timeout
[15:43] don't know why it got a timeout if I can curl it...
[15:43] right
[15:43] I'm not sure what's fishy there but something's up
[15:43] and to make this all the more interesting, our upgrade tests didn't surface this, the addons upgraded without issue
[15:44] http://paste.ubuntu.com/24414438/
[15:44] lazyPower: you know that I'm cursed and love to hit all the bugs that nobody else has :D
[15:45] so the primary issue here is the kubednsmasq pod is still failing to pull.
[15:46] Zic: can you paste journalctl logs for snap.kubelet.daemon, snap.kube-proxy.daemon, and flannel?
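One way to gather the three logs lazyPower asks for here from every worker at once rather than box by box; the service names assume the snap-based layout used elsewhere in this thread, and the loop is just a convenience sketch:

    for svc in snap.kubelet.daemon snap.kube-proxy.daemon flannel; do
      # runs the command on every kubernetes-worker unit and saves the output
      juju run --application kubernetes-worker \
        "journalctl -u $svc --no-pager --since '1 hour ago'" > "$svc.log"
    done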
[15:47] Zic: additionally, on any unit, try this: docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
[15:47] well, any worker. the master doesn't have docker so you'll figure out real quick not to do it there.
[15:49] Cynerva: http://paste.ubuntu.com/24414459/
[15:49] thanks
[15:50] lazyPower: http://paste.ubuntu.com/24414470/
[15:50] * lazyPower blinks
[15:50] that's *literally* the manual interaction of what that stupid kubelet operation is trying to make happen
[15:50] * lazyPower flips tables
[15:51] Zic: juju run --application kubernetes-worker "docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1"
[15:51] pre-load all the workers with that image. if it resolves itself, again, I don't know why, but gremlins.
[15:52] ok, it's loading :)
[15:53] nothing interesting in the service logs aside from the stream errors, and those aren't telling us much O.o
[15:53] Cynerva: I see we're missing the conntrack bin. we should probably add that and pack it into kube-proxy
[15:53] that'll be needed for large-scale deployments so it properly tracks and terminates stale connections. conntrack bits were causing rimas problems before on another distro. I want to learn from that mistake if we can.
[15:54] lazyPower: http://paste.ubuntu.com/24414492/
[15:54] problem on one of the units
[15:54] hmm that's weird, kube-proxy is classic confinement
[15:54] seems very likely that gcr.io has an issue
[15:54] Zic: UnitId: kubernetes-worker/1 <-- so we need to figure out why that unit is having connectivity issues
[15:55] kubernetes-worker/1 active idle 10 ig1-k8s-01
[15:55] it's ig1-k8s-01, I will do a manual check
[15:55] at least, it can ping 64.233.166.82
[15:56] http://paste.ubuntu.com/24414505/
[15:56] wtf :>
[15:56] ah looks like it might have been 3
[15:56] I misread the yaml
[15:56] kubernetes-worker/3
[15:56] oops, I did not check either :D
[15:57] ok so it's kubernetes-worker/3* active idle 12 ig1-k8s-03
[15:57] Zic: again, just making sure you're awake
[15:57] :)
[15:58] pinging is OK, it's pulling now, in progress...
[15:58] http://paste.ubuntu.com/24414515/
[15:58] it stopped
[15:58] so either there's a network issue on that unit, or gcr.io is having trouble
[15:58] I wouldn't be surprised by either
[15:59] if you retry does it succeed or does it keep getting rejected?
[15:59] ig1-k8s-03 has the exact same network configuration as the other 4 kubernetes-worker units (they are all NATed by our hypervisor through the same public IP)
[15:59] Status: Downloaded newer image for gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
[15:59] it just worked on the second attempt...
[15:59] silly gcr.io
[16:01] Zic: did that resolve the deployment?
[16:01] lazyPower: hmm, I saw in describe pod kube-dns that it tries to re-download the docker image
[16:01] now that the image is cached on all the workers, it shouldn't be complaining about image pull sync
[16:01] even if it's already pulled :(
[16:01] 21s 21s 1 kubelet, ig1-k8s-03 spec.containers{kubedns} Normal Pulling pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1"
[16:01] * lazyPower sighs
[16:01] it probably has pull: always in the manifest
[16:02] because "let's DDOS our registry" sounds like a great plan.
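A quick way to see which pull policy the kube-dns containers actually carry, which is what the next exchange settles; namespace and deployment name match the pastes above:

    # print each container's name and imagePullPolicy from the kube-dns deployment
    kubectl -n kube-system get deployment kube-dns \
      -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{"\t"}{.imagePullPolicy}{"\n"}{end}'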
[16:02] http://paste.ubuntu.com/24414538/
[16:02] huhu
[16:02] it's sidecar now
[16:03] Zic: edit the manifest for kube-dns and set the stupid image pull policy from imagePullPolicy: Always to imagePullPolicy: IfNotPresent
[16:03] and reschedule kubedns
[16:03] (delete and recreate)
[16:04] mind you this is all a workaround to whatever networking issue we're seeing
[16:04] kubedns-cm.yaml kubedns-controller.yaml kubedns-sa.yaml kubedns-svc.yaml
[16:04] I'm not convinced
[16:04] in the controller?
[16:04] kubedns-controller.yaml
[16:04] o/ juju world :)
[16:04] ya
[16:04] Budgie^Smore: o/
[16:04] Budgie^Smore: did you bring your rocket launcher? We're on a bug hunt
[16:05] lazyPower: in fact there is 0 imagePullPolicy in the controller :D
[16:05] so it must be the default value, which is... IfNotPresent
[16:05] I don't understand :D
[16:05] * lazyPower flips tables
[16:05] Zic: I don't know what to recommend at this point
[16:05] I've given every thought I can to work around this issue, the crux is the connectivity of grabbing that image for kubedns
[16:06] lazyPower no rocket launcher... popcorn to watch the show though ;-)
[16:06] and I have no clue why the dashboard pod is unable to contact the VIP if the host machine can contact the VIP
[16:06] you did however give us some clues that our removal was not working as expected and have a fix en route for that
[16:07] lazyPower: sidecar just finished pulling... but the health check is not good: http://paste.ubuntu.com/24414621/
[16:07] well, progress
[16:07] what's in the logs for the pod?
[16:07] (s)
[16:07] same thing where dns can't reach the service vip of kube-apiserver?
[16:08] http://paste.ubuntu.com/24414651/
[16:08] seems like it yup
[16:08] it times out on the VIP
[16:08] like the dashboard :(
[16:08] reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.152.183.1:443/api/v1/services?resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
[16:09] yah, I see that :(
[16:09] so we've resolved the other nit-noid issues but the core of why it can't find the vip is still alien to me
[16:09] if the host can see it, the container should see it
[16:09] Zic: can you fire up an ubuntu pod and attempt the same curl test?
[16:09] Zic: from within the container, via kubectl exec
[16:10] http://paste.ubuntu.com/24414704/ <= re-doing the test, it answered
[16:10] Budgie^Smore: share that popped corn
[16:10] lazyPower: yup, trying that
[16:11] lazyPower come and get it :P
[16:12] Open source the corn man
[16:12] s/corn man/corn, man/
[16:12] hum
[16:12] lazyPower: no network inside a container
[16:12] no network at all
[16:12] Zic: boom
[16:12] progress
[16:12] can't do an apt install curl :(
[16:12] now let's figure out why the container has no network
[16:12] what's in /etc/default/docker?
[16:13] http://paste.ubuntu.com/24414713/
[16:13] pretty empty
[16:13] lazyPower lol now you're going to get me to code a corny corn popper ;-) pun most assuredly intended!
[16:14] hmm
[16:14] lazyPower: strange thing: this new container has no network
[16:14] but I tried a kubectl exec on an ingress controller
[16:14] network is up ghere
[16:14] -g
[16:15] why does the ingress-controller have network while the new container doesn't :o
[16:15] (me being lazy) you have tried killing the container and having it start on another node right?
[16:17] lazyPower: http://paste.ubuntu.com/24414727/
[16:17] don't understand this part...
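The in-container test lazyPower suggests above, sketched end to end. The pod name and image are arbitrary, and since installing curl can itself fail when the pod has no network (as it did here), the probe below uses bash's built-in /dev/tcp redirection instead; the kubectl run flags are the pod-creating form:

    # schedule a throwaway ubuntu pod
    kubectl run netcheck --image=ubuntu:16.04 --restart=Never --command -- sleep 3600

    # try to open a TCP connection to the apiserver service VIP from inside it
    kubectl exec netcheck -- timeout 5 bash -c 'echo > /dev/tcp/10.152.183.1/443' \
      && echo "VIP reachable from the pod" || echo "no route from the pod"

    # clean up
    kubectl delete pod netcheck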
[16:18] ubuntu is running on kubernetes-worker/0
[16:18] the ingress-controller I used is running on kubernetes-worker/4
[16:19] Zic: what's the age on that ingress controller?
[16:19] is it from pre-upgrade?
[16:19] 1d
[16:19] so post-upgrade
[16:19] (was 62 days before)
[16:19] default default-http-backend-35bpm 1/1 Running 1 62d 10.1.80.5 ig1-k8s-01
[16:19] Zic, lazyPower have you checked the IPs and iptables yet? could it be a flannel / overlay network issue?
[16:19] this, however, has been up for 62d
[16:20] and is also located on ig1-k8s-01
[16:20] Budgie^Smore: it could be, however it should fall back to the default docker network driver iirc.
[16:21] Zic: try again but watch the kubelet log
[16:21] see if anything leaps out at you there
[16:22] lazyPower: for info, pods in kube-system have no network either
[16:22] tried in the grafana-influxdb pod
[16:22] no network
[16:22] seems like just the ingress-controllers have network :o
[16:23] Zic: ingress controllers have hostNetwork: true, so I think they bypass flannel/cni entirely
[16:23] Zic: I'm at an impasse now, but we've gotten deeper into the issue; that seems like yet another symptom, but not the root cause
[16:24] not entirely sure how that works, but they're definitely a special case
[16:24] Cynerva: that would be the case; if it specifies host network it doesn't use any of the containerd networking bits. it's binding on the host's tcp stack.
[16:24] lazyPower: could it be tied to our use of docker_from_upstream?
[16:24] I can switch it to false if you want
[16:24] Zic: quite possible, if you switch it back to archive, do things work?
[16:25] I will try now
[16:25] Cynerva: ryebot - I don't think we've tested with upstream docker in quite some time... is that true, yeah?
[16:26] lazyPower: yeah, we haven't that I'm aware of
[16:26] I thought so, I might actually submit a PR this week to remove that option from the k8s charms as it's inherited from layer-docker.
[16:26] if we're not extensively testing it, we shouldn't offer it
[16:27] lazyPower: we have a serious garbage collection issue in our prod cluster with the docker version from Ubuntu :(
[16:27] it's why we switched to the PPA version
=== frankban is now known as frankban|afk
[16:31] Zic: that's unfortunate, if this resolves the issue
[16:32] yup :( with the docker version from the Ubuntu archive, we got a lot of dockerd processes stuck at garbage collecting
[16:32] if it doesn't, I'm not really sure where to go from here either, as it makes no sense to me that your container network just falls out
[16:32] switching docker_from_upstream resolved this issue immediately
[16:32] seriously?
[16:32] yup :(
[16:32] welp
[16:32] nothing to do here
[16:32] * lazyPower jetpacks away
[16:32] it was Kibana containers which crashed docker garbage collection
[16:33] hmmm
[16:33] Zic: it's 1.11.x coming from the archive, correct?
[16:33] lazyPower: careful, I'm saying that about our production cluster; for the test cluster we're debugging, the downgrade is in progress
[16:34] lazyPower: the downgrade is finished and... all my pods are Running and have network connectivity
[16:34] :o
[16:34] Zic: perhaps it was just recycling docker that did it?
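The toggle Zic flips here, expressed as a juju config change. Both docker_from_upstream and install_from_upstream are mentioned in this conversation (the option comes from layer-docker); running juju config kubernetes-worker with no key lists what your charm revision actually exposes. As the recap below notes, the charm does not bounce docker for you, so restarting it across the workers is part of the switch:

    # switch the workers back to the archive docker packages
    juju config kubernetes-worker docker_from_upstream=false
    juju status kubernetes-worker

    # once the change has settled, restart docker on every worker
    juju run --application kubernetes-worker "sudo systemctl restart docker"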
[16:35] to recap what I said: we used docker_from_upstream because we hit a severe garbage collection bug in dockerd in production with heavy, usage-intensive containers like Kibana; with the version from the docker.com PPA it was fixed (in 1.5.3)
[16:36] but it seems that this docker.com version breaks networking in 1.6
[16:36] I'm running a deploy with install_from_upstream=true right now
[16:36] (to be clear, as we mixed our conversation about two different clusters earlier)
[16:36] yep, I follow you now
[16:36] for now, the test cluster we're debugging here is fixed
[16:36] with docker_from_upstream set to false
[16:36] Zic: prior to doing that, did you attempt to restart the docker daemon?
[16:37] was that part of your troubleshooting?
[16:37] it's a bit lame as we are now using docker_from_upstream=true in production :/
[16:37] lazyPower: after the downgrade, yeah, I restarted docker
[16:37] Zic: I meant before
[16:37] ah, yeah, rebooted the whole cluster too
[16:37] Zic: well, I just deployed and upgraded
[16:37] so far so good
[16:38] to be clear - deployed with docker from the archive, enabled install_from_upstream, things are still running
[16:38] did you enable docker_from_upstream at 1.5.3, then upgrade to 1.6? :D
[16:38] that was the correct path
[16:38] don't know if it comes into play
[16:41] lazyPower: the *exact* path was: switching to docker_from_upstream=true, watching juju status and, when it ended, restarting docker on every kubernetes-worker unit (as the juju scenario doesn't handle this part) -> upgrade to 1.6 with the Ubuntu Insights tutorial -> CLBO on kube-dns + kubernetes-dashboard after the upgrade / no network in containers
[16:45] Zic: running another deploy through the upgrade scenario
[16:45] but I got networking with upstream docker from a fresh deployment
[16:45] so, murky water here...
[16:57] Zic: looks like Cynerva may have confirmed the behavior
[16:57] still debugging but yeah, we're close to identifying the symptom
[17:04] lazyPower: great! I'm leaving my office to go back home, I will read my backlog later if you find something else :)
[17:04] lazyPower: the issue seems to be with us-east-1a ..... the only way I can get an instance to deploy to us-east-1a is by a spaces constraint, where the subnet in the space is in 1a
[17:05] otherwise, `juju deploy ubuntu -n10` will not deploy anything to 1a
[17:05] it's the instances that I get into 1a with the spaces constraint that exhibit the issue of failing lxd
[17:16] Zic: the only thing to note here is that the upstream version of docker (1.28 API) is well beyond what's been tested by upstream. From the 1.6 release notes: "Drop the support for docker 1.9.x. Docker versions 1.10.3, 1.11.2, 1.12.6 have been validated." Anything outside of that is likely to have gremlins, as we're finding.
[19:41] Hi
=== frankban|afk is now known as frankban
=== jasondotstar_ is now known as jasondotstar
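A minimal sketch of the reproduction bdx describes above for the us-east-1a/LXD issue, assuming a space (called zone-1a here, a placeholder) already exists whose only subnet lives in us-east-1a:

    # plain deploys spread across AZs and, per the report, never land in 1a
    juju deploy ubuntu -n10

    # force a machine into us-east-1a via the space constraint
    juju deploy ubuntu repro-1a --constraints spaces=zone-1a

    # the failure shows up when something is then placed into lxd on that
    # machine (machine number illustrative)
    juju deploy ubuntu repro-lxd --to lxd:12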