/srv/irclogs.ubuntu.com/2020/01/23/#juju.txt

[05:58] <rick_h> skay: "values in the services file"?
[09:28] <nvalerkos> hi
[09:28] <stickupkid> nvalerkos, hi
[09:28] <nvalerkos> I've deployed a controller for the Kubernetes charm
[09:28] <nvalerkos> I've got a problem which seems a bit odd
[09:30] <nvalerkos> Stopped services: kube-proxy
[09:30] <nvalerkos> status of the kubernetes-master is blocked
[09:30] <nvalerkos> I am using LXC on a single machine, localhost
[09:31] <nvalerkos> I am not sure why this is happening, because I just deployed a charm which should work without any problems
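
    (For context, the setup being described is roughly the following; a
    minimal sketch, assuming the charmed-kubernetes bundle, since the exact
    bundle name is not stated in the log:)

    # Bootstrap a controller on the local LXD cloud, then deploy the
    # Kubernetes bundle (bundle name is an assumption).
    juju bootstrap localhost
    juju deploy charmed-kubernetes
    juju status
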
[09:32] <stickupkid> nvalerkos, can you pastebin a juju status here
[09:33] <nvalerkos> pasted juju status:

App                    Version  Status   Scale  Charm                  Store       Rev  OS      Notes
containerd                      active       5  containerd             jujucharms   54  ubuntu
easyrsa                3.0.1    active       1  easyrsa                jujucharms  296  ubuntu
etcd                   3.3.15   active       3  etcd                   jujucharms  488  ubuntu
flannel                0.11.0   active       5  flannel                jujucharms  468  ubuntu
kubeapi-load-balancer  1.14.0   active       1  kubeapi-load-balancer  jujucharms  704  ubuntu  exposed
kubernetes-master      1.17.2   blocked      2  kubernetes-master      jujucharms  792  ubuntu
kubernetes-worker      1.17.2   active       3  kubernetes-worker      jujucharms  634  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   1        10.38.12.8                      Certificate Authority connected.
etcd/0                    active    idle   3        10.38.12.103    2379/tcp        Healthy with 3 known peers
etcd/1*                   active    idle   0        10.38.12.236    2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   5        10.38.12.60     2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   7        10.38.12.39     443/tcp         Loadbalancer ready.
kubernetes-master/0       waiting   idle   9        10.38.12.10     6443/tcp        Waiting for 6 kube-system pods to start
  containerd/4            active    idle            10.38.12.10                     Container runtime available
  flannel/4               active    idle            10.38.12.10                     Flannel subnet 10.1.4.1/24
kubernetes-master/1*      blocked   idle   2        10.38.12.113    6443/tcp        Stopped services: kube-proxy
  containerd/3            active    idle            10.38.12.113                    Container runtime available
  flannel/3               active    idle            10.38.12.113                    Flannel subnet 10.1.93.1/24
kubernetes-worker/0       active    idle   8        10.38.12.54     80/tcp,443/tcp  Kubernetes worker running.
  containerd/1            active    idle            10.38.12.54                     Container runtime available
  flannel/1               active    idle            10.38.12.54                     Flannel subnet 10.1.11.1/24
kubernetes-worker/1       active    idle   6        10.38.12.237    80/tcp,443/tcp  Kubernetes worker running.
  containerd/2            active    idle            10.38.12.237                    Container runtime available
  flannel/2               active    idle            10.38.12.237                    Flannel subnet 10.1.45.1/24
kubernetes-worker/2*      active    idle   4        10.38.12.89     80/tcp,443/tcp  Kubernetes worker running.
  containerd/0*           active    idle            10.38.12.89                     Container runtime available
  flannel/0*              active    idle            10.38.12.89                     Flannel subnet 10.1.96.1/24

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.38.12.236  juju-0a24b5-0  bionic      Running
1        started  10.38.12.8    juju-0a24b5-1  bionic      Running
2        started  10.38.12.113  juju-0a24b5-2  bionic      Running
3        started  10.38.12.103  juju-0a24b5-3  bionic      Running
4        started  10.38.12.89   juju-0a24b5-4  bionic      Running
5        started  10.38.12.60   juju-0a24b5-5  bionic      Running
6        started  10.38.12.237  juju-0a24b5-6  bionic      Running
7        started  10.38.12.39   juju-0a24b5-7  bionic      Running
8        started  10.38.12.54   juju-0a24b5-8  bionic      Running
9        started  10.38.12.10   juju-0a24b5-9  bionic      Running
[09:33] <stickupkid> nvalerkos, to here https://pastebin.canonical.com/
[09:33] <stickupkid> :)
[09:33] <nvalerkos> https://pasteboard.co/IRhbIoA.png
[09:34] <nvalerkos> I don't have access to that one
[09:34] <stickupkid> nvalerkos, it seems like there is an issue with kube-proxy which is blocking the kube-master
[09:35] <stickupkid> nvalerkos, I'm not sure why that would be
[09:35] <stickupkid> nvalerkos, it might be worth asking here https://discourse.jujucharms.com/
[09:42] <manadart> stickupkid nvalerkos: pastebin.canonical.com is internal; pastebin.ubuntu.com is public.
[09:42] <stickupkid> d'oh
[09:46] <nammn_de> rick_h: morning, by "provider" the clouds are meant, right? (MAAS, AWS, ...)
[09:46] <nvalerkos> localhost
[09:46] <nvalerkos> Warning  FailedCreatePodSandBox  3m48s (x125 over 31m)  kubelet, juju-0a24b5-6  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1745/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1745/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
[09:47] <nvalerkos> All pods are stuck at ContainerCreating
[09:47] <nvalerkos> something to do with the storage volume
[09:47] <nvalerkos> I am using LXD with ZFS
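
    (The "failed to mount rootfs component ... invalid argument" error is
    consistent with containerd's overlayfs snapshotter running on top of a
    ZFS-backed LXD storage pool, which the kernel overlay driver did not
    support at the time. A minimal sketch of how one might confirm that,
    assuming access to the LXD host and root inside an affected container:)

    # On the LXD host: check which storage backend the pool uses.
    lxc storage list

    # Inside an affected container: try a manual overlay mount. If this
    # also fails with "invalid argument", the backing filesystem (ZFS)
    # is the problem, not containerd itself.
    mkdir -p /tmp/ov/lower /tmp/ov/upper /tmp/ov/work /tmp/ov/merged
    mount -t overlay overlay \
      -o lowerdir=/tmp/ov/lower,upperdir=/tmp/ov/upper,workdir=/tmp/ov/work \
      /tmp/ov/merged
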
[10:45] <stickupkid> manadart, achilleasa CR please https://github.com/juju/juju/pull/11140
[11:25] <manadart> stickupkid: Yep.
[13:12] <rick_h> nammn_de: right, I'm hinting there's a cloud-specific trick to that PR
[13:12] <nammn_de> rick_h: ahh, yes, I was planning to write code where specific cloud providers support renaming (AWS does, MAAS does not)
[13:12] <nammn_de> something along those lines
[13:13] <rick_h> nammn_de: ok cool, just tossing a hint your way since I didn't see anything in the PR around that
[13:13] <nammn_de> rick_h: yeah, thanks!
[13:13] <nammn_de> better safe than sorry =D
[13:14] <nammn_de> my PR is still in the early phase, just out for people to give fundamental hints, like you did. Still constantly updated
[13:14] <rick_h> nammn_de: understood
[13:14] <nammn_de> rick_h: I answered your concerns in my old PR (show-space), as well as here in chat, late though. Does that answer your concerns?
[13:15] <rick_h> nammn_de: I think so, I'm just trying to find time to play with it myself then
[13:42] <skay> rick_h: services file == the YAML file that mojo uses. Anyway, I figured out the problem: the YAML was not correct. All good. The magic thing happened where I found the error right after asking for help
[14:03] <rick_h> skay: ah ok, awesome
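
    (A quick way to catch this class of error before the tool does; a sketch
    assuming python3 with PyYAML is available, and a file named services.yaml,
    since the real filename isn't given in the log:)

    # Parse the file and fail loudly on any YAML syntax error.
    python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1]))' services.yaml
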
[14:51] <manadart> Patch for systemd file relocation: https://github.com/juju/juju/pull/11147
[14:51] <stickupkid> manadart, I'll give it a look now
[14:54] <manadart> stickupkid: Thanks.
[14:54] <stickupkid> manadart, I'm assuming that this needs regression testing
[14:56] <manadart> stickupkid: I tested on Focal and Bionic. I assume that covers back to Xenial as well.
[14:56] <stickupkid> manadart, yeah, but what about what's supported under ESM?
[14:57] <stickupkid> manadart, answering my own question - 14.04 is still in ESM, but that's not systemd, right? That's Upstart?
[14:57] <stickupkid> manadart, so we need to test Xenial onwards?
[14:57] <manadart> stickupkid: Series upgrades definitely need rework now. I will follow with a patch for that. At the same time I will test/fix the old upgrade steps.
[14:58] <manadart> stickupkid: Trusty is Upstart, IIRC.
[14:58] <stickupkid> yeah, thought as much
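
    (One way to verify which init system a given machine actually runs,
    rather than relying on memory of the series mapping; the machine number
    is a placeholder:)

    # PID 1 is "systemd" on xenial and later, "init" (Upstart) on trusty.
    juju ssh 0 -- ps -p 1 -o comm=
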
=== narindergupta is now known as narinderguptamac
[14:59] <stickupkid> manadart, so upgrading series will now need to move files, right?
[15:04] <manadart> stickupkid: Actually, thinking about it, upgrading from Trusty should work out of the box (I will test it). Aside from that, we handle it in a Juju upgrade step.
[15:04] <manadart> I think. Testing to follow.
[15:05] <manadart> Series upgrade leaves the files alone if the upgrade is systemd -> systemd.
[15:06] <stickupkid> but that's not what we want if a restart happens, though, right?
[15:09] <manadart> It should be OK. If you do a series upgrade with Juju < 2.7.2, you will not be able to go up to Focal.
[15:09] <stickupkid> interesting
[15:09] <manadart> So you upgrade Juju, then do it.
[15:09] <manadart> If you are > Trusty and you upgrade Juju, it just needs to relocate the files.
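
    (The relocation being discussed amounts to something like the following
    sketch; the unit name and paths are assumptions based on where Juju
    agents have historically kept their unit files, not a quote from the
    actual patch:)

    # Move the agent unit file into the standard systemd directory and
    # re-register it (unit name and source path are assumptions).
    mv /var/lib/juju/init/jujud-machine-0/jujud-machine-0.service \
       /etc/systemd/system/jujud-machine-0.service
    systemctl daemon-reload
    systemctl enable jujud-machine-0
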
[15:10] <stickupkid> > sudo: setrlimit(RLIMIT_CORE): Operation not permitted
[15:11] <manadart> stickupkid: That is a symptom of sudo on Focal (at least on LXD). It doesn't come from Juju.
[15:11] <stickupkid> yeah, really didn't like that
[15:11] <manadart> stickupkid: You can test it on the container directly - just sudo something.
[15:12] <stickupkid> anyway, it finished :D
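
    (Per manadart's suggestion, the warning can be reproduced outside Juju
    entirely; a sketch, with the container name a placeholder for any Focal
    LXD container:)

    # Any sudo invocation in a Focal container shows the same warning,
    # confirming it comes from sudo itself rather than from Juju.
    lxc exec focal-test -- sudo true
    # sudo: setrlimit(RLIMIT_CORE): Operation not permitted
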
[15:22] <stickupkid> manadart, done
[15:23] <nammn_de> achilleasa manadart: is there an easy way to bootstrap a controller on AWS with --config juju-ha-space=<any space>? I thought alpha should work, but it fails for me
[15:29] <manadart> nammn_de: I think there might be a chicken/egg thing there with AWS, because you are trying to land a controller on a space that doesn't come from the provider. Juju hasn't loaded the subnets and created the space yet, so it can't satisfy it.
[15:29] <nammn_de> manadart: hmmm, makes sense. Just wanted to set the config setting so that I can test the rename code
[15:29] <manadart> nammn_de: Try the other config value (juju-mgmt-space).
[15:29] <nammn_de> I could, maybe, just add a key-value to the collection :D?
[15:30] <nammn_de> manadart: same error
[15:30] <manadart> nammn_de: OK, both should work if you set it post-bootstrap with `juju controller-config juju-ha-space=alpha`
[15:31] <nammn_de> manadart: oh yes, hmm, I tried that before. Now it works after trying again.. weird. Thanks!
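
    (Putting the two approaches side by side; the bootstrap invocations are
    assumptions, while the controller-config command is quoted above:)

    # Fails on AWS: the target space does not exist at bootstrap time,
    # since Juju has not yet imported the provider's subnets.
    juju bootstrap aws --config juju-ha-space=alpha

    # Works: bootstrap first, then set the value once spaces are loaded.
    juju bootstrap aws
    juju controller-config juju-ha-space=alpha
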
[16:32] <nammn_de> manadart: regarding your comment
[16:32] <nammn_de> didn't we discuss last time that we cannot remove them and need to add them as empty, because we need to fulfil the interface? https://github.com/juju/juju/pull/11088#discussion_r370208805
[16:44] <manadart> Wrap the network backing stub in a new type for the old facade tests and put them on that.
[16:45] <manadart> The old facade tests will be strangled out in favour of the new mock suite, and they will ultimately disappear. Those methods have nothing to do with the common backing.
[16:50] <achilleasa> hml: can you take a look at https://github.com/juju/description/pull/72 ?
[16:50] <hml> achilleasa: looking
[16:52] <nammn_de> manadart: not sure if I can follow 100%; do you have an example for me?
[17:00] <nammn_de> because AFAICT spaces.NewAPIWithBacking(...) takes a backing interface. That's why it initially had those implementations. But actually there is no network backing usage in that test anymore
[17:10] <hml> achilleasa: approved
[17:10] <achilleasa> hml: tyvm
