[02:37] <wallyworld> thumper: here's a 2.7 PR https://github.com/juju/juju/pull/11627
[03:11] <wallyworld> thumper: so that other related bug... this will fix part of that too, but there's a similar issue with destroy-model. that will probably take a bit of thought to fix, since it'll likely need coordination (which doesn't exist yet) between the various moving parts
[04:04] <tlm> wallyworld: what does it mean when a worker is in a "Starting" state in the engine report ?
[04:05] <wallyworld> tlm: is it stuck in that state?
[04:05] <tlm> wallyworld: yeah
[04:05] <wallyworld> i'd have to check - i think it may be waiting for a dependency
[04:05] <wallyworld> is that plausible?
[04:05] <tlm> k
[04:06] <tlm> i'll check
[04:06] <timClicks_> 3 new tutorials available - Postgres, RabbitMQ, ElasticSearch https://discourse.juju.is/t/3117
[04:07] <wallyworld> tlm: you talking about the httpserver worker?
[04:07] <tlm> wallyworld: yeah
[04:07] <wallyworld> that attribute seems bespoke to that worker
[04:08] <tlm> oh ?
[04:08] <wallyworld> it goes from starting to running after it gets the mutex inside the worker loop
[04:08] <wallyworld> see juju/worker/httpserver/worker.go
[04:09] <wallyworld> the Worker struct has a status attribute
[04:09] <wallyworld> which is printed in the Report() method
[04:09] <tlm> ah  yep,
[04:09] <tlm> cheers
[04:37] <wallyworld> manadart: for bug 1831580, it seems like we just loop over all addresses recorded against a machine and splat those out as ingress-addresses, without regard for the fact that 127.x.x.x should probably be filtered out?
[04:37] <mup> Bug #1831580: wrong etcd_connection_string on kubernetes-master charm <network> <sts> <juju:Triaged> <https://launchpad.net/bugs/1831580>
[04:38]  * thumper looks at wallyworld's PR
[04:43] <thumper> hey team
[04:44] <thumper> the github actions seem to still be using go 1.10, but an azure library has brought in strings.ReplaceAll
[04:44] <timClicks_> thumper: hi
[04:44] <thumper> which seems to not be in go 1.10
[04:44] <thumper> so all github actions are failing to build
[04:44] <tlm> hmmmm
[04:45] <tlm> all the github action files seem to say 1.14
[04:53] <thumper> tlm: ah... wallyworld's PR is from 2.7
[04:53] <thumper> branch
[04:54]  * thumper wonders why that'd have issues
[04:54] <thumper> did something get backported there?
[04:54] <thumper> for azure updates?
[04:54] <wallyworld> thumper: the azure focal fix landed in 2.7 and was forward ported
[04:56] <wallyworld> IIANM
[04:56] <wallyworld> i think it is likely the actions are out of date in 2.7
[05:19] <thumper> hpidcock: I tried to go get -u the loggo repo from juju but it didn't change go.mod file
[05:19] <hpidcock> and your loggo change was merged?
[05:23] <thumper> hpidcock: yep
[05:23] <hpidcock> GOCACHE=direct go get -u github.com/juju/loggo
[05:23] <hpidcock> try that
[05:24] <hpidcock> oh
[05:24] <hpidcock> GOMODCACHE=direct
[05:25] <hpidcock> no
[05:25] <hpidcock> GOPROXY=direct
[05:25] <hpidcock> hah
[05:25] <hpidcock> brain isn't working today
[05:29] <thumper> ... ok, will try that
[05:30] <thumper> it noticed that time
[05:31] <thumper> hpidcock: should I care that go.sum lists multiple versions?
[05:31] <hpidcock> no
[05:31] <thumper> ok
[05:31] <thumper> why are they there?
[05:32] <hpidcock> go.sum is all the versions that have been seen AFAIK
[05:33] <hpidcock> as in, it's like an append-only log
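To illustrate why multiple versions appear: go.sum keeps a hash pair (module zip and go.mod) for every module version the build has ever resolved, so after a `go get -u` both the old and new pseudo-versions remain. The entries below are a hypothetical example for the module discussed above (versions and hashes invented/elided):

```
github.com/juju/loggo v0.0.0-20190526231331-xxxxxxxxxxxx h1:...
github.com/juju/loggo v0.0.0-20190526231331-xxxxxxxxxxxx/go.mod h1:...
github.com/juju/loggo v0.0.0-20200526014432-xxxxxxxxxxxx h1:...
github.com/juju/loggo v0.0.0-20200526014432-xxxxxxxxxxxx/go.mod h1:...
```

Only go.mod decides which version is actually built; the extra go.sum lines are harmless.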
[05:34] <wallyworld> thumper: thanks for review, i left a comment around the facade version bump, or lack thereof
[07:42] <manadart> wallyworld: I will take a look at the ingress-address bug. Thanks for the reviews on my patches.
[07:44] <wallyworld> manadart: no worries. we had that perhaps naive approach in mind for the ingress bug but wanted to check in case we had missed some intended modelling etc
[07:47] <manadart> wallyworld: The correct thing to do will be to ensure that it is correctly identified as machine-local at detection, and omitted based on that characteristic.
[07:49] <wallyworld> manadart: yeah, that was what was discussed, but we weren't sure why those 127 addresses weren't being classified that way
[07:52] <manadart> wallyworld: I think I know. The addresses for link-layer devices are different from those on the machines collection and don't have scope classification (?) It should be easy to sort out.
[07:54] <manadart> stickupkid: https://github.com/juju/juju/pull/11626
[07:54] <stickupkid> manadart, sure, just writing up an essay, will get to that in a bit
[08:08] <stickupkid> wallyworld, whilst you're here https://github.com/juju/juju/pull/11627#discussion_r430230340
[08:20] <wallyworld> stickupkid: ty for comment. a little while ago, all facades were at a top level, and the categorisation into agent/client/controller was rather arbitrary. but i think we are at a stage where we could adopt more formal expectations around the categorisations
[08:21] <stickupkid> wallyworld, 100% agreed, I believe we could really cleanup libjuju as well
[08:21] <wallyworld> yes indeed
[08:22] <wallyworld> libjuju should only really use the client facades, but there's also Ping() and maybe others that are exceptions
[08:22] <stickupkid> wallyworld, we could have agent, controller, client and common maybe
[08:23] <wallyworld> yeah. we do that for controller vs model (at the apiserver level)
[08:29] <stickupkid> manadart, payback https://github.com/juju/juju/pull/11621
[09:16] <stickupkid> manadart, when you get a sec, ping-a-rooney
[09:27] <manadart> stickupkid: I'm in Daily.
[10:05] <achilleasa> manadart: so the traffic from the container goes eth0 (container) -> veth (host) -> ens4 (host)
[10:06] <achilleasa> lxc profile show default on the host shows lxdbr0 as the parent of eth0
[10:07] <achilleasa> but the juju log lines from the getContainerSpec show ovsbr0 as the parent of eth0...
[10:07] <achilleasa> ...and brctl show has lxdbr0 but nothing under the interfaces column
[10:07] <achilleasa> looks like I broke it :D
[10:10] <stickupkid> achilleasa, https://github.com/caseykneale/VIMKiller
[10:11] <manadart> achilleasa: I think we are going to have issues with the fact that we always create lxdbr0.
[10:13] <manadart> achilleasa: Actually no, because we already explicitly bridge NICs when container-networking-method=provider and it works fine.
[10:33] <stickupkid> manadart, https://github.com/juju/juju/pull/11628
[10:35] <stickupkid> took longer to install the dependencies :sigh:
[10:43] <stickupkid> manadart, shall I fix the ReplaceAll whilst I'm here in 2.7 hell?
[10:46] <manadart> stickupkid: Might as well.
[10:49] <stickupkid> manadart, https://github.com/juju/juju/pull/11628/commits/fbb98cd20bd2f270185f30198a75e3f251cd1445
[10:50]  * manadart nods.
[11:05] <achilleasa> manadart: getting closer... https://pastebin.canonical.com/p/x7VBZxBnHr/
[11:05] <stickupkid> haha "profit"
[11:06] <achilleasa> ;-)
[11:06] <manadart> achilleasa: This will be a good chance to sanitise some of that flow if you're all up in it.
[11:07] <achilleasa> manadart: could do but I am not sure if I understand how it all works... also traffic still does not seem to flow through my bridge :-(
[11:07] <stickupkid> it's trolling you
[11:08] <achilleasa> is there any tool to show the parent configuration?
[11:08] <achilleasa> stickupkid: yeah... the packets are practicing social distancing from the bridge apparently...
[11:08] <stickupkid> achilleasa, haha, funny very funny
[11:09]  * achilleasa wonders if veth should be added to the ovs bridge manually...
[11:12] <stickupkid> achilleasa, bug fixes in real life https://preview.redd.it/6dgzo0qor1151.jpg?width=960&crop=smart&auto=webp&s=4cc65a4b3744c1f79a230218f30fad674f78b6fa
[11:13] <manadart> achilleasa: The veth devices show up for each container NIC connected via bridge to its parent NIC on the host.
[11:17] <achilleasa> manadart: I've edited the lxd profile and after deploying --to lxd:X I see the veth as a port in my ovs... https://pastebin.canonical.com/p/nX7zzBnxyV/
[11:24] <stickupkid> manadart, I'll check the bindings issue on aws
[13:13] <rick_h> manadart:  stickupkid_ petevg just a heads up I need to take over guimaas this week. Anything running that folks care deeply about?
[13:14] <manadart> rick_h: All clear here.
[13:14] <petevg> rick_h: we are good w/ whatever you do to guimaas.
[13:22] <rick_h> ok ty
[14:26] <stickupkid_> manadart, it's building goodra instead of grumpig
[14:33] <manadart> stickupkid_: Great.
[14:50] <stickupkid_> manadart, pingy mc ping ping
[14:54] <manadart> stickupkid_: Pong.
[14:54] <stickupkid_> daily
[16:24] <rick_h> kjackal:  ping, I'm trying to bootstrap on microk8s and getting hung up on the connecting to the new address space? Any hints on issues/working out the networking bits I need to figure out?
[16:25] <rick_h> kjackal_:  this is for a demo I need to do and appreciate the help. I get stuck at: "Contacting Juju controller at 10.152.183.207 to verify accessibility..." and nothing I'm seeing in notes speaks to network bits other than enable dns and storage?
[16:25] <rick_h> guild ^ if anyone has any hints
[16:27] <stickupkid_> sorry rick_h not touched microk8s from that perspective...
[16:28] <rick_h> stickupkid_:  :( ok be that way
[18:20] <rick_h> petevg:  wheee, have enough VMs setup on guimaas to get a CK deploy going...here's hoping it settles lol
[18:20] <petevg> rick_h: woot!
[19:45] <rick_h> petevg:  I'm hitting some lease errors in an HA controller on the RC. I know there was some conversations around that on the standup the other day. Known issue?
[19:47] <petevg> rick_h: is this on k8s? I thought that we had solved most of those issues w/ the latest rc.
[19:47] <rick_h> petevg:  no, no gcp
[19:48] <rick_h> ruh roh, might have broken the world now...
[19:49] <petevg> rick_h: uh-oh. We don't have any open bugs against a release candidate milestone, so I think that we closed out all the known issues ...
[19:50] <petevg> rick_h: I'm having a really hard time getting a controller to bootstrap on MicroStack in guimaas, though that might just be DNS stuff being unhappy. (e.g., related to my config, not to juju)
[19:52] <rick_h> petevg: filing a bug, will send you the link
[19:52] <petevg> cool cool
[19:53] <rick_h> petevg:  https://bugs.launchpad.net/juju/+bug/1880751
[19:54] <rick_h> petevg:  :( on bootstrap issue
[19:54] <mup> Bug #1880751: flapping HA controller machine with lease issue on 2.8-rc2 on GCP <juju:New> <https://launchpad.net/bugs/1880751>
[19:55] <rick_h> petevg:  ok, have to tear this down, it's fubar (and the 4 models of demo stuff with it :( )
[20:37] <kjackal_> rick_h: sorry, I just saw that.
[20:38] <kjackal_> rick_h: here is how we bootstrap for kubeflow https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/enable.kubeflow.sh#L183
[20:38] <kjackal_> this is after enabling dns storage
[20:38] <kjackal_> I am not aware of any issues, but I am not testing bootstrap regularly
[20:39] <kjackal_> asking around
[20:41] <rick_h> kjackal_:  ok, thanks. I'll grab APAC folks when they're around to help as well
[20:41] <rick_h> kjackal_:  yea that's all I'm doing atm but something's missing
[20:42] <kjackal_> rick_h: what versions (microk8s and juju) do we have?
[20:43] <kjackal_> latest/stable both?
[20:46] <knkski> rick_h: kjackal_ said you were having issues with bootstrapping juju onto microk8s?
[20:47] <rick_h> knkski:  yea, I get hung on when the client reaches out to the controller IP
[20:47] <rick_h> knkski:  so then it fails
[20:47] <knkski> rick_h: what commands are you running to bootstrap?
[20:47] <rick_h> knkski:  16:47:32 INFO  cmd controller.go:89 Contacting Juju controller at 10.152.183.215 to verify accessibility...
[20:47] <rick_h> knkski:  so I added the k8s and then "juju bootstrap k8s --debug"
[20:47] <knkski> rick_h: how did you add the k8s?
[20:48] <rick_h> knkski:  juju add-k8s k8s (after I setup the .kube/config)
[20:48] <rick_h> knkski:  so I did the microk8s.config > ~/.kube/config
[20:50] <knkski> rick_h: the issue is probably that you're attempting to bootstrap onto microk8s remotely. see here for an issue i raised about it: https://bugs.launchpad.net/juju/+bug/1841960
[20:50] <mup> Bug #1841960: Juju requires Kubernetes cluster to generate external IP for bootstrapping <k8s> <juju:Fix Released by wallyworld> <https://launchpad.net/bugs/1841960>
[20:51] <knkski> rick_h: the `--controller-service-*` config options should let you bootstrap remotely, but i never got them working since it was easier to just use juju's built-in local microk8s bootstrapping
[20:52] <knkski> rick_h: so if you get that working i'd like to hear about it
[20:54] <kjackal_> bootstrapping the internal microk8s cloud seems to have worked here https://pastebin.ubuntu.com/p/S7wSYM5XDR/
[20:54] <kjackal_> let me try the add-k8s
[20:55] <rick_h> knkski:  kjackal_ so it's all on my laptop but does that count as "remote" since it was through add-k8s?
[20:59] <rick_h> knkski:  kjackal_ ah you're right. So I didn't see microk8s as a cloud at first. I bet I had to restart after the usermod command to have the right perms/etc
[20:59] <rick_h> kjackal_:  knkski so I can bootstrap with just "juju bootstrap microk8s"
[20:59] <rick_h> petevg:  ^
[20:59] <knkski> rick_h: yep
[21:00] <rick_h> I'll file a bug on the add-k8s path because it seems like that "should" work but guess not
[21:00] <petevg> Aha! Glad that got sorted :-)
[21:03] <rick_h> ok, bug filed, ty knkski and kjackal_ for having me look at the built in cloud option again
[21:03] <knkski> 👍
[21:17] <kjackal_> rick_h: adding the cloud also worked https://pastebin.ubuntu.com/p/M2bhw9hH3d/
[21:22] <rick_h> kjackal_:  lol boooo you cheater :P
[21:22] <rick_h> kjackal_:  did you previously bootstrap microk8s? I wonder if there's some setup that it does on that path that isn't done on the other
[21:22] <rick_h> kjackal_:  and so once you bootstrap microk8s once, it works with the cloud then after
[21:23] <kjackal_> :) I snap removed and re-installed microk8s
[21:25] <kjackal_> rick_h: here it is https://pastebin.ubuntu.com/p/NFQrqgfRPh/
[21:25] <rick_h> kjackal_:  or fine, maybe it just hates me then :P
[21:30] <rick_h> kjackal_:  knkski one other question, anyone know who does the k8s mongodb charm? It was mentioned it does HA with "enable-ha config set" but I don't see the config and wondering the right path
[21:45] <knkski> rick_h: looks like it's the charmed-osm team, and i think tvansteenburgh would be the person to talk to about that
[21:46] <rick_h> knkski:  yea, email sent ty