[00:04] <hpidcock> tlm: approved
[00:08] <tlm> thanks hpidcock
[03:06] <timClicks> I would like to change a few headings in the `juju status` output
[03:06] <timClicks> "Inst id" -> "Instance ID"
[03:06] <timClicks> "AZ" -> "Availability Zone"
[03:06] <timClicks> "Rev" -> "Revision"
[03:07] <timClicks> "SAAS" -> "Remote Application"
[03:07] <timClicks> "App" -> "Application"
[03:36] <tlm> sounds better to me timClicks
[03:43] <timClicks> for the CMR section of `juju status`, we use the heading "Store" when we should probably use "Controller" or "Source"
[03:44] <timClicks> For models on the same controller, "Store" is reporting the controller name
[06:06] <kelvinliu> could anyone help take a look plz? thanks!  https://github.com/juju/juju/pull/11743
[06:22] <zeestrat> timClicks: Those suggestions sound great and make it clearer, especially the SAAS -> Remote Application one.
[11:30] <manadart_> stickupkid or achilleasa: IP address extra field migration: https://github.com/juju/juju/pull/11744
[12:21] <rick_h> petevg:  morning, when you're around wondered if I could sync up on the gitlab image party
[12:21] <rick_h> timClicks[m]:  so the SAAS has actually been nice in these demos
[12:22] <rick_h> timClicks[m]:  as folks are actively talking about building "internal SAAS" and so it's made a lot of sense the way it's worded in the business cases (just for some customer engaging contexts)
[12:22] <rick_h> zeestrat:  ^
[12:23] <manadart_> stickupkid: You'll like this: https://github.com/juju/juju/pull/11745
[12:24] <zeestrat> rick_h: I see your point, but me and past self disagree :) https://bugs.launchpad.net/juju/+bug/1728631/comments/4
[12:24] <mup> Bug #1728631: [2.3] consume feature hard to manage <docteam> <juju:Expired> <https://launchpad.net/bugs/1728631>
[12:25] <rick_h> zeestrat:  I understand, it's just something that I've been surprised at how well that explains things in the past few weeks engaging with some folks in the real world
[12:25] <stickupkid> manadart_, wooooow
[12:25] <stickupkid> finally, hate that method
[12:25] <tvansteenburgh> manadart_: can i bug you with a question?
[12:26] <manadart_> tvansteenburgh: Shoot.
[12:27] <tvansteenburgh> manadart_: i've got a bundle with some "to: [lxd/0, lxd/1]" directives in it. When I deploy, my machines are getting ipv4 addrs, but the lxds are getting ipv6 addrs. Is there a way to force the lxds to get ipv4 addrs?
[12:28] <tvansteenburgh> I mean I know how to do it with lxd directly, but not sure how to do it since Juju is setting up lxd
[12:30] <manadart_> tvansteenburgh: 1, which Juju version; 2, are the containers space constrained/bound? I.e. are they using a bridged NIC from the host, or lxdbr0?
[12:32] <tvansteenburgh> manadart_: 2.7.6, and it appears they are using a bridged nic from the host
[12:33] <tvansteenburgh> manadart_: https://pastebin.canonical.com/p/XKZPtPWhzF/
[12:36] <manadart_> tvansteenburgh: Gimme a few.
[12:37] <petevg> rick_h: so I have promised to make folks waffles this morning, which means I won’t be at my desk until the sync. I have Adam Israel’s gitlab charm deployed on AKS, though.
[12:38] <petevg> I’m tempted to just replace the Juju one, since Adam’s actually has public source code, and is up to date!
[12:38] <rick_h> petevg:  ok, all good enjoy waffles. Just when you're in I wanted to say hi.
[12:44] <manadart_> manadart_: Can you get me /etc/netplan/whateveritis.yaml?
[12:45] <tvansteenburgh> manadart_: was that for me?
[12:45] <manadart_> tvansteenburgh: Derp. Yes.
[12:45] <tvansteenburgh> talking to yourself again?
[12:46] <manadart_> tvansteenburgh: Someone's got to.
[12:46] <tvansteenburgh> manadart_: http://paste.ubuntu.com/p/DqjMc2qSYw/
[12:48] <achilleasa> manadart_: stickupkid https://pastebin.canonical.com/p/bJ2SV9fJQg/ ... boo :-(
[12:48] <stickupkid> achilleasa, knew it
[12:49] <achilleasa> so now I need to figure out how the firewaller works :D
[12:49] <stickupkid> my bet still stands
[12:50] <achilleasa> at least we know that it's not the provider... it's the firewaller
[12:50]  * achilleasa needs to dig deeper
[13:00] <manadart_> tvansteenburgh: This is odd. I need to look into it.
[13:03] <tvansteenburgh> manadart_: ack
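For readers hitting the same IPv6-container issue: a hedged sketch of one common workaround, which disables IPv6 on LXD's default bridge so containers only pick up IPv4. The bridge name `lxdbr0` is the LXD default and is an assumption here; if Juju bridged the host NIC instead (as in the paste above), this setting will not apply.

```shell
# Sketch: turn off IPv6 addressing and NAT on the default LXD bridge
# so newly launched containers fall back to IPv4 only.
lxc network set lxdbr0 ipv6.address none
lxc network set lxdbr0 ipv6.nat false

# Inspect the resulting bridge config to confirm the keys took effect.
lxc network show lxdbr0
```

Existing containers keep their addresses; the change affects containers started after the bridge is reconfigured.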
[13:52] <stickupkid> hml, this is the charmhub find PR https://github.com/juju/juju/pull/11736
[14:40] <Eryn_1983_FL> well, boss finally got ovn to work, but then he rebooted and broke mysql
[14:40] <Eryn_1983_FL> also removed octavia, so thats got to go back in
[14:48] <Eryn_1983_FL> any idea what this means for mysql, guys?
[14:48] <Eryn_1983_FL> https://paste.debian.net/1153489/
[14:49] <Eryn_1983_FL> it can't connect to the mysql cluster
[14:49] <Eryn_1983_FL> yet i can telnet to port 3306
[14:49] <Eryn_1983_FL> and ping just freaking fine
[15:15] <petevg> beisner, jamespage: does the error Eryn_1983_FL is running into above look familiar to you?
[15:16] <petevg> rick_h: Looping back to your request from this morning: want to jump into the Juju daily? I've got some time to chat.
[15:16] <Eryn_1983_FL> i think i made progress
[15:16] <Eryn_1983_FL> it had a cluster issue but i started replication again
[15:16] <rick_h> petevg:  omw
[15:16] <Eryn_1983_FL> i got 2/3 working so far
[15:18] <Eryn_1983_FL> https://paste.debian.net/1153489/
[15:18] <Eryn_1983_FL> ok now its working
[15:29] <rick_h> tvansteenburgh:  anyone free that knows how expose works on k8s charms able to hop into a call? https://meet.google.com/dxr-hngd-beo
[15:35] <Eryn_1983_FL> ok so vault is in blocked status
[15:35] <Eryn_1983_FL> i restart the services and it seems ok in the lxd
[15:35] <Eryn_1983_FL> but im still blocked
[15:35] <Eryn_1983_FL> should i juju resolve vault?
[15:38] <tvansteenburgh> rick_h: kelvinliu should know
[15:38] <tvansteenburgh> oh maybe he's eod
[15:39] <tvansteenburgh> somebody on wallyworld's juju k8s team
[15:40] <tvansteenburgh> rick_h: i think juju expose adds an ingress rule for the charm, do you need more specifics than that?
[15:45] <tvansteenburgh> rick_h: there's some stuff in Discourse about it too, see https://discourse.juju.is/t/getting-started/152 and search page for "Exposing gitlab"
[15:47] <rick_h> tvansteenburgh:  yea, trying to figure it out but I'm on aks and so "ingress rule" and what needs to work is a bit fuzzy
[15:47] <rick_h> everything I see is that in CK you have to configure/get a worker node IP of the cluster
[15:48] <rick_h> tvansteenburgh:  so I'm not sure how this works in a hosted k8s world
[15:51] <tvansteenburgh> rick_h: if you're just trying to make it work, the steps in that discourse post ^ should be sufficient
[15:51] <Eryn_1983_FL> so is there a way i can check why vault is blocked?
[15:53] <rick_h> tvansteenburgh:  so I deployed it with the loadbalancer config argument, grabbed the IP of the unit, set the config for juju-external... to that $IP.xp.io and exposed it and nadda
[15:53] <rick_h> tvansteenburgh:  I don't get how .xp.io gets invoked into it
[15:53] <tvansteenburgh> rick_h: what cloud?
[15:55] <tvansteenburgh> rick_h: the ip should be that of the LB, if you have one, or the IP of a worker node if you don't have an LB
[15:58] <rick_h> tvansteenburgh:  ok, so I need to get those details in an AKS setup from kubectl then probably
[16:01] <rick_h> tvansteenburgh:  ok, so I've got a "kubenet" networking setup by default on AKS. And in the networking details the only interesting thing is DNS Service 10.0.0.10?
[16:03] <rick_h> tvansteenburgh:  hmmm, there's a http routing option I'm turning on and see if that gives me anything
[16:04] <knkski> rick_h: i just got on, so am missing the context. are you trying to get external access to your charms?
[16:10] <rick_h> knkski:  yes, on aks atm
[16:10] <rick_h> knkski:  looking at https://docs.microsoft.com/en-us/azure/aks/http-application-routing which seems in the ballpark but not sure how that integrates with juju's pod spec config provided
[16:40] <knkski> rick_h: if you're using AKS, you're not using Charmed Kubernetes at all, right? If so, I'm not really sure how to get the external IP of the cluster, but once you do, it should be as easy as `juju config charm-name juju-external-hostname=$EXTERNAL_IP && juju expose charm-name`
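Pulling the thread above together, a hedged sketch of the expose flow knkski and tvansteenburgh describe. The charm name `gitlab` is a placeholder, and which address to use (LB vs worker node) follows tvansteenburgh's rule of thumb at 15:55:

```shell
# 1. Find an external address: the LoadBalancer's EXTERNAL-IP if one
#    exists, otherwise a worker node's external IP.
kubectl get svc --all-namespaces -o wide   # look for an EXTERNAL-IP on a LoadBalancer service
kubectl get nodes -o wide                  # fallback: a worker node's external IP

# 2. Point the charm at that address and expose it.
juju config gitlab juju-external-hostname=$EXTERNAL_IP
juju expose gitlab
```

After exposing, `juju status` should show the application as exposed; if it does but the app is still unreachable, the cluster's ingress setup is the next thing to check.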
[16:45] <rick_h> knkski:  yea, I moved to gke because there I can get it on the load balancer, set the config and have the address but :(
[16:45] <knkski> rick_h: also, what charm are you trying to access externally?
[16:45] <rick_h> knkski:  gitlab or mediawiki
[16:51] <knkski> rick_h: so no connection after you've exposed the charm? does `juju status` show the charm as exposed?
[16:51] <knkski> and if so, what address does it show?
[16:54] <rick_h> knkski:  yea gke and juju status show one pod up on 35.223.146.32
[16:57] <rick_h> knkski:  i might just hijack a juju meeting later in the day tonight and make them make it work. :)
[16:57] <knkski> rick_h: would you be able to send me the kubeconfig for the cluster? i can poke at it if you'd like.
[16:57] <rick_h> knkski:  going into a call, thanks for the offer.
[17:51] <josephillips> hi
[17:51] <josephillips> question: i'm performing a fork of a juju charm
[17:52] <josephillips> but reading the code i found this in part of it: "If the charm was installed from source we cannot upgrade it. For backwards compatibility a config flag must be set for this code to run, otherwise a full service level upgrade will fire on config-changed."
[17:52] <josephillips> what exactly does that mean
[17:52] <josephillips> is charm-swift-proxy
[17:55] <josephillips> https://github.com/openstack/charm-swift-proxy/blob/0ce1ee67f8ee69f7c6fada10979aaf1415c7cf68/charmhelpers/contrib/openstack/utils.py#L1358
[17:56] <josephillips> so do i just have to set action-managed-upgrade to true
[17:56] <josephillips> on config
[17:56] <josephillips> ?
[18:50] <pmatulis> josephillips, what is your objective?
[19:26] <josephillips> pmatulis: understand how i can perform the upgrade
[19:27] <josephillips> if i use my fork
[19:57] <pmatulis> josephillips, https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-upgrade-openstack.html
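For context on the flag josephillips asks about, a hedged sketch of the action-managed upgrade flow from the deploy guide pmatulis links. The target `openstack-origin` value and unit number are assumptions for illustration:

```shell
# With action-managed-upgrade=true, changing openstack-origin no longer
# fires a full service upgrade from the config-changed hook; instead the
# upgrade is triggered per unit via an action, so units can be done one
# at a time.
juju config swift-proxy action-managed-upgrade=true
juju config swift-proxy openstack-origin=cloud:bionic-train  # target release is an assumption

# Run the upgrade on one unit at a time.
juju run-action --wait swift-proxy/0 openstack-upgrade
```

Note the docstring josephillips quotes also says charms installed from source cannot be upgraded this way at all; the flag only changes behavior for package-installed deployments.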
[21:06] <jesseleo> Hello all, I have been having trouble getting juju bootstrapped on lxd here is the bug I filed: https://bugs.launchpad.net/juju/+bug/1884814
[21:06] <mup> Bug #1884814: bootstrap localhost: ERROR Get "https://10.194.144.1:8443/1.0": Unable to connect to: 10.194.144.1:8443 <juju:New> <https://launchpad.net/bugs/1884814>
[21:06] <jesseleo> let me know if you want me to provide any more info
[21:07] <hml> jesseleo:  bootstrap timed out and failed.  because the juju client was unable to connect to the lxd instance created.
[21:10] <jesseleo> Hey hml I can launch a container manually on the same machine and have connectivity, I just don't know how to verify the connectivity exists with the juju controller container as well
[21:10] <hml> jesseleo:  correction…
[21:12] <hml> jesseleo:  it’s timing out, but failed waiting for the lxd server to respond to writing an lxd profile
[21:13] <hml> hrm
[21:15] <jesseleo> I tried reinstalling the juju and lxd snaps a number of times to no avail. I was wondering what the next step in troubleshooting should be?
[21:17] <jesseleo> https://paste.ubuntu.com/p/4nvKhfbwXj/ it doesn't look like it's using the socket inside the snap
[21:17] <jesseleo> but i could be wrong just pokin around
[21:18] <hml> jesseleo:  the error in the pastebin is usually around sudo and groups.  though i didn't think that was an issue with the snap
[21:18] <pmatulis> jesseleo, what ubuntu release?
[21:18] <jesseleo> 20.04
[21:19] <pmatulis> confirm that you are using the snap that is installed by default?
[21:19] <pmatulis> the 'lxd' snap
[21:19] <pmatulis> maybe you have LXD deb packages installed too
[21:20] <jesseleo> https://paste.ubuntu.com/p/8PvSG6d44q/
[21:22] <thumper> jesseleo: have you logged out and back in after setting up lxd?
[21:22] <thumper> I think there may be some group changes that impact permissions
[21:25] <pmatulis> jesseleo, so you're running 'lxd.migrate'. that tells me you *are* running the lxd deb packages
[21:30] <jesseleo> pmatulis correct
[21:32] <pmatulis> jesseleo, that could be the problem
[21:32] <pmatulis> jesseleo, so you have existing containers that you need to move to the snap?
[21:34] <pmatulis> jesseleo, if so, i suggest moving to #lxd to resolve the migration and then look at juju
[21:39] <jesseleo> pmatulis ohh now I get what you mean. I just reinitialized lxd more than once so i thought I needed to run lxd.migrate. I just spun up a clean virtual box because I needed 20.04
[21:41] <pmatulis> jesseleo, ah ha. you are running Ubuntu on Virtualbox
[21:43] <jesseleo> pmatulis yeah running it headless then I ssh into it
[21:43] <pmatulis> btw, lxd.migrate is to migrate containers that exist under the deb lxd packages to the env managed by the lxd snap
[21:44] <jesseleo> pmatulis thank you good to know
[21:46] <pmatulis> jesseleo, did you confirm that lxd is even supposed to work in a virtualbox environment?
[21:50] <jesseleo> pmatulis yeah it works. been using it for months on my 18.04 server
[21:51] <jesseleo> yeah its been working well on my other machine
[21:52] <pmatulis> with Juju as well?
[22:00] <jesseleo> pmatulis Yeah I was writing charms on my other machine and everything was working great. but charmcraft wouldn't install my requirements, so I elected to move to 20.04 and that's where I'm running into this bootstrap issue
[22:06] <pmatulis> jesseleo, can you launch a native lxd container?
[22:07] <pmatulis> (e.g. lxc launch ubuntu:20.04)
[22:08] <jesseleo> yeah I tried that earlier it works
[22:12] <jesseleo> https://paste.ubuntu.com/p/K8DDG69sfR/
[22:12] <pmatulis> it would be good to doublecheck that the versions for juju, vbox, and lxd work on 18.04 but do not work on 20.04
[22:28] <rick_h> tlm:  can I bug you please about exposing an k8s app on gke/aks?
[22:28] <tlm> you can but not sure i'll be able to help :|
[22:29] <rick_h> tlm:  ok, no pressure but I kinda need to solve this or find someone to help or bust tbh
[22:29] <rick_h> tlm:  meet you in your standup room?
[22:29] <tlm> yep
[23:43] <rick_h> tlm:  do you know if the mediawiki charm actually configures the wiki? Looking at the source it sets pod_spec data but when you launch it mediawiki wants to walk you through setting up the LocalSettings.php so the db relation details don't actually get you a configured db?
[23:46] <tlm> I think you need to set the relation info
[23:46] <tlm> kelvinliu: you need to add the relations when deploying mediawiki ?
[23:47] <kelvinliu> tlm: yes, mediawiki requires the relation
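To close the loop on kelvinliu's answer, a hedged sketch of deploying mediawiki with the database relation it requires. The charm names (`mediawiki-k8s`, `mariadb-k8s`) and endpoint names (`db`, `server`) are placeholders for illustration; the actual names come from each charm's metadata:

```shell
# Deploy the application and a database into the k8s model, then add
# the relation so the db connection details reach mediawiki's pod_spec,
# rather than walking through LocalSettings.php setup by hand.
juju deploy mediawiki-k8s mediawiki
juju deploy mariadb-k8s mariadb
juju add-relation mediawiki:db mariadb:server
```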