[05:58] <rick_h> skay:  "values in the services file"?
[09:28] <nvalerkos> hi
[09:28] <stickupkid> nvalerkos, hi
[09:28] <nvalerkos> I've deployed a controller for kubernetes charm
[09:28] <nvalerkos> I've got a problem which seems a bit odd
[09:30] <nvalerkos> Stopped services: kube-proxy
[09:30] <nvalerkos> status of the kubernetes master is blocked
[09:30] <nvalerkos> I am using LXC on a single machine, localhost
[09:31] <nvalerkos> I am not sure why this is happening; I just deployed a charm which should work without any problems
[09:32] <stickupkid> nvalerkos, can you pastebin a juju status here
[09:33] <nvalerkos> App                    Version  Status   Scale  Charm                  Store       Rev  OS      Notes
[09:33] <nvalerkos> containerd                      active       5  containerd             jujucharms   54  ubuntu
[09:33] <nvalerkos> easyrsa                3.0.1    active       1  easyrsa                jujucharms  296  ubuntu
[09:33] <nvalerkos> etcd                   3.3.15   active       3  etcd                   jujucharms  488  ubuntu
[09:33] <nvalerkos> flannel                0.11.0   active       5  flannel                jujucharms  468  ubuntu
[09:33] <nvalerkos> kubeapi-load-balancer  1.14.0   active       1  kubeapi-load-balancer  jujucharms  704  ubuntu  exposed
[09:33] <nvalerkos> kubernetes-master      1.17.2   blocked      2  kubernetes-master      jujucharms  792  ubuntu
[09:33] <nvalerkos> kubernetes-worker      1.17.2   active       3  kubernetes-worker      jujucharms  634  ubuntu  exposed
[09:33] <nvalerkos> Unit                      Workload  Agent  Machine  Public address  Ports           Message
[09:33] <nvalerkos> easyrsa/0*                active    idle   1        10.38.12.8                      Certificate Authority connected.
[09:33] <nvalerkos> etcd/0                    active    idle   3        10.38.12.103    2379/tcp        Healthy with 3 known peers
[09:33] <nvalerkos> etcd/1*                   active    idle   0        10.38.12.236    2379/tcp        Healthy with 3 known peers
[09:33] <nvalerkos> etcd/2                    active    idle   5        10.38.12.60     2379/tcp        Healthy with 3 known peers
[09:33] <nvalerkos> kubeapi-load-balancer/0*  active    idle   7        10.38.12.39     443/tcp         Loadbalancer ready.
[09:33] <nvalerkos> kubernetes-master/0       waiting   idle   9        10.38.12.10     6443/tcp        Waiting for 6 kube-system pods to start
[09:33] <nvalerkos>   containerd/4            active    idle            10.38.12.10                     Container runtime available
[09:33] <nvalerkos>   flannel/4               active    idle            10.38.12.10                     Flannel subnet 10.1.4.1/24
[09:33] <nvalerkos> kubernetes-master/1*      blocked   idle   2        10.38.12.113    6443/tcp        Stopped services: kube-proxy
[09:33] <nvalerkos>   containerd/3            active    idle            10.38.12.113                    Container runtime available
[09:33] <nvalerkos>   flannel/3               active    idle            10.38.12.113                    Flannel subnet 10.1.93.1/24
[09:33] <nvalerkos> kubernetes-worker/0       active    idle   8        10.38.12.54     80/tcp,443/tcp  Kubernetes worker running.
[09:33] <nvalerkos>   containerd/1            active    idle            10.38.12.54                     Container runtime available
[09:33] <nvalerkos>   flannel/1               active    idle            10.38.12.54                     Flannel subnet 10.1.11.1/24
[09:33] <nvalerkos> kubernetes-worker/1       active    idle   6        10.38.12.237    80/tcp,443/tcp  Kubernetes worker running.
[09:33] <nvalerkos>   containerd/2            active    idle            10.38.12.237                    Container runtime available
[09:33] <nvalerkos>   flannel/2               active    idle            10.38.12.237                    Flannel subnet 10.1.45.1/24
[09:33] <nvalerkos> kubernetes-worker/2*      active    idle   4        10.38.12.89     80/tcp,443/tcp  Kubernetes worker running.
[09:33] <nvalerkos>   containerd/0*           active    idle            10.38.12.89                     Container runtime available
[09:33] <nvalerkos>   flannel/0*              active    idle            10.38.12.89                     Flannel subnet 10.1.96.1/24
[09:33] <nvalerkos> Machine  State    DNS           Inst id        Series  AZ  Message
[09:33] <nvalerkos> 0        started  10.38.12.236  juju-0a24b5-0  bionic      Running
[09:33] <nvalerkos> 1        started  10.38.12.8    juju-0a24b5-1  bionic      Running
[09:33] <nvalerkos> 2        started  10.38.12.113  juju-0a24b5-2  bionic      Running
[09:33] <nvalerkos> 3        started  10.38.12.103  juju-0a24b5-3  bionic      Running
[09:33] <nvalerkos> 4        started  10.38.12.89   juju-0a24b5-4  bionic      Running
[09:33] <nvalerkos> 5        started  10.38.12.60   juju-0a24b5-5  bionic      Running
[09:33] <nvalerkos> 6        started  10.38.12.237  juju-0a24b5-6  bionic      Running
[09:33] <nvalerkos> 7        started  10.38.12.39   juju-0a24b5-7  bionic      Running
[09:33] <nvalerkos> 8        started  10.38.12.54   juju-0a24b5-8  bionic      Running
[09:33] <nvalerkos> 9        started  10.38.12.10   juju-0a24b5-9  bionic      Running
[09:33] <stickupkid> nvalerkos, to here https://pastebin.canonical.com/
[09:33] <stickupkid> :)
[09:33] <nvalerkos> https://pasteboard.co/IRhbIoA.png
[09:34] <nvalerkos> I don't have access to that one
[09:34] <stickupkid> nvalerkos, it seems like there is an issue with kube-proxy which is blocking the kube-master
[09:35] <stickupkid> nvalerkos, I'm not sure why that would be
[09:35] <stickupkid> nvalerkos, it might be worth asking here https://discourse.jujucharms.com/
[09:42] <manadart> stickupkid nvalerkos: pastebin.canonical.com is internal; pastebin.ubuntu.com is public.
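(For reference, a quick way to get status output onto the public pastebin, assuming the stock pastebinit tool; the target shown is just Ubuntu's public default:

    sudo apt install pastebinit
    juju status | pastebinit -b paste.ubuntu.com    # prints the paste URL
)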
[09:42] <stickupkid> d'oh
[09:46] <nammn_de> rick_h:  morning, by "provider" you mean the clouds (MAAS, AWS, ...), right?
[09:46] <nvalerkos> localhost
[09:46] <nvalerkos> Warning  FailedCreatePodSandBox  3m48s (x125 over 31m)  kubelet, juju-0a24b5-6  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay
[09:46] <nvalerkos> [workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1745/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1745/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
[09:47] <nvalerkos> All pods are stuck at ContainerCreating
[09:47] <nvalerkos> something to do with the storage volume
[09:47] <nvalerkos> I am using LXD with ZFS
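(The mount failure above is consistent with overlayfs refusing ZFS as its lower filesystem, which containerd needs inside the LXD containers. A hedged workaround sketch, assuming LXD; the pool and profile names are illustrative, not from the log:

    # check whether the kernel offers overlayfs at all
    grep overlay /proc/filesystems
    # create a non-ZFS storage pool and point a profile's root disk at it
    lxc storage create k8spool btrfs
    lxc profile create k8s
    lxc profile device add k8s root disk path=/ pool=k8spool
)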
[10:45] <stickupkid> manadart, achilleasa CR please https://github.com/juju/juju/pull/11140
[11:25] <manadart> stickupkid: Yep.
[13:12] <rick_h> nammn_de:  right, I'm hinting there's a cloud-based specific trick to that PR
[13:12] <nammn_de> rick_h: ahh, yes, I was planning to write the code so that only specific cloud providers support renaming (AWS does, MAAS does not)
[13:12] <nammn_de> something along the line
[13:13] <rick_h> nammn_de:  ok cool just tossing a hint your way since I didn't see anything in the PR around that
[13:13] <nammn_de> rick_h: yeah thanks!
[13:13] <nammn_de> better safe than sorry =D
[13:14] <nammn_de> my PR is still in an early phase, just out for people to give fundamental hints, like you did. Still being constantly updated
[13:14] <rick_h> nammn_de:  understand
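(A rough Go sketch of the per-provider gating nammn_de describes; the interface and all names below are illustrative of the shape, not Juju's actual API:

    package main

    import (
        "errors"
        "fmt"
    )

    // SpaceRenamer is a hypothetical capability interface that a cloud
    // provider implementation can satisfy to advertise rename support.
    type SpaceRenamer interface {
        SupportsSpaceRename() bool
    }

    // checkRenameSupported fails unless the provider opts in.
    func checkRenameSupported(env interface{}) error {
        r, ok := env.(SpaceRenamer)
        if !ok || !r.SupportsSpaceRename() {
            return errors.New("space renaming not supported by this provider")
        }
        return nil
    }

    // awsEnviron stands in for a provider that supports renaming (AWS
    // does, per the discussion); MAAS would simply not implement it.
    type awsEnviron struct{}

    func (awsEnviron) SupportsSpaceRename() bool { return true }

    func main() {
        fmt.Println(checkRenameSupported(awsEnviron{})) // <nil>
    }
)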
[13:14] <nammn_de> rick_h: I answered your concerns in my old PR (show-space), as well as here in chat, late though. Does that answer your concerns?
[13:15] <rick_h> nammn_de:  I think so, I'm just trying to find time to play with it myself then
[13:42] <skay> rick_h: services file == YAML file that mojo uses. Anyway, I figured out the problem: the YAML was not correct. All good. The magic thing happened where I found the error right after asking for help
[14:03] <rick_h> skay:  ah ok awesome
[14:51] <manadart> Patch for systemd file relocation: https://github.com/juju/juju/pull/11147
[14:51] <stickupkid> manadart, I'll give it a look now
[14:54] <manadart> stickupkid: Thanks.
[14:54] <stickupkid> manadart, I'm assuming that this needs regression testing
[14:56] <manadart> stickupkid: I tested on Focal and Bionic. I assume that covers back to Xenial as well.
[14:56] <stickupkid> manadart, yeah, what's supported under ESM
[14:57] <stickupkid> manadart, answering my own questions - 14.04 is still in ESM, but that's not systemd, right? That's Upstart.
[14:57] <stickupkid> manadart, so we need to test xenial onwards?
[14:57] <manadart> stickupkid: Series upgrades now definitely need rework. I will follow with a patch for that. At the same time I will test/fix the old upgrade steps.
[14:58] <manadart> stickupkid: Trusty is Upstart IIRC.
[14:58] <stickupkid> yeah, thought as much
[14:59] <stickupkid> manadart, so upgrading series will now need to move files, right?
[15:04] <manadart> stickupkid: Actually, thinking about it, upgrading from Trusty should work out of the box (I will test it). Aside from that, we handle it in a Juju upgrade step.
[15:04] <manadart> I think. Testing to follow.
[15:05] <manadart> Series upgrade leaves the files alone if the upgrade is systemd -> systemd.
[15:06] <stickupkid> but that's not what we want if a restart happens, right?
[15:09] <manadart> It should be OK. If you do a series upgrade with Juju < 2.7.2, you will not be able to go up to Focal.
[15:09] <stickupkid> interesting
[15:09] <manadart> So you upgrade Juju, then do it.
[15:09] <manadart> If you are > Trusty and you upgrade Juju, it just needs to relocate the files.
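(Roughly what "relocate the files" amounts to for one machine agent, as a manual sketch only; it assumes Juju's historical layout, where the unit file lives under /var/lib/juju/init with a symlink in /etc/systemd/system, and the real work is done by the upgrade step in the PR:

    systemctl stop jujud-machine-0
    rm /etc/systemd/system/jujud-machine-0.service    # drop the old symlink
    cp /var/lib/juju/init/jujud-machine-0/jujud-machine-0.service /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable --now jujud-machine-0
)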
[15:10] <stickupkid> > sudo: setrlimit(RLIMIT_CORE): Operation not permitted
[15:11] <manadart> stickupkid: That is a symptom of sudo on Focal (at least on LXD). It doesn't come from Juju.
[15:11] <stickupkid> yeah, really didn't like that
[15:11] <manadart> stickupkid: You can test it on the container directly - just sudo something.
[15:12] <stickupkid> anyway, it finished :D
[15:22] <stickupkid> manadart, done
[15:23] <nammn_de> achilleasa manadart: is there an easy way to bootstrap a controller on AWS with --config juju-ha-space=<any space>? I thought alpha should work, but it fails for me
[15:29] <manadart> nammn_de: I think there might be a chicken/egg thing there with AWS, because you are trying to land a controller on a space that doesn't come from the provider. Juju hasn't loaded the subnets and created the space yet, so it can't satisfy the constraint.
[15:29] <nammn_de> manadart: hmmm, makes sense. Just wanted to set the config value so that I can test the rename code
[15:29] <manadart> nammn_de: Try the other config value (juju-mgmt-space).
[15:29] <nammn_de> I could, maybe, just add a key-value to the collection :D?
[15:30] <nammn_de> manadart: same error
[15:30] <manadart> nammn_de: OK, both should work if you set it post bootstrap with `juju controller-config juju-ha-space=alpha`
[15:31] <nammn_de> manadart: oh yes, hmm, I tried that before; now it works after trying again... weird. Thanks!
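(For reference, the sequence that ended up working per the exchange above; the controller name is illustrative:

    juju bootstrap aws test-controller
    juju controller-config juju-ha-space=alpha    # set post-bootstrap
    juju controller-config juju-ha-space          # read it back to confirm
)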
[16:32] <nammn_de> manadart: regarding your comment
[16:32] <nammn_de> didn't we discuss last time that we cannot remove them and need to add them as empty https://github.com/juju/juju/pull/11088#discussion_r370208805
[16:32] <nammn_de> because we need to fulfill the interface?
[16:44] <manadart> nammn_de: Wrap the network backing stub in a new type for the old facade tests and put them on that.
[16:45] <manadart> The old facade tests will be strangled out in favour of the new mock suite, and they will ultimately disappear. Those methods have nothing to do with the common backing.
[16:50] <achilleasa> hml: can you take a look at https://github.com/juju/description/pull/72 ?
[16:50] <hml> achilleasa:  looking
[16:52] <nammn_de> manadart: not sure I can follow 100%. Do you have an example for me?
[17:00] <nammn_de> because AFAICT spaces.NewAPIWithBacking(...) takes a backing interface. That's why it initially used those implementations. But actually there is no network backing usage in that test anymore
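(A minimal runnable Go sketch of the wrapper manadart is suggesting; every name below is illustrative of the shape, not the real juju test code:

    package main

    import "fmt"

    // stubBacking is the assumed shared test stub.
    type stubBacking struct{}

    func (stubBacking) ModelTag() string { return "model-deadbeef" }

    // legacyBacking embeds the stub and adds methods only the old
    // facade tests require, so the shared stub stays minimal instead
    // of carrying empty implementations for everyone.
    type legacyBacking struct {
        stubBacking
    }

    // AllSpaces is needed only by the old facade's backing interface;
    // return canned fixture data here as needed.
    func (legacyBacking) AllSpaces() ([]string, error) {
        return []string{"alpha"}, nil
    }

    func main() {
        b := legacyBacking{}
        fmt.Println(b.ModelTag())
        fmt.Println(b.AllSpaces())
    }
)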
[17:10] <hml> achilleasa:  approved
[17:10] <achilleasa> hml: tyvm