/srv/irclogs.ubuntu.com/2020/06/23/#juju.txt

[00:04] <hpidcock> tlm: approved
[00:08] <tlm> thanks hpidcock
[03:06] <timClicks> I would like to change a few headings in the `juju status` output
[03:06] <timClicks> "Inst id" -> "Instance ID"
[03:06] <timClicks> "AZ" -> "Availability Zone"
[03:06] <timClicks> "Rev" -> "Revision"
[03:07] <timClicks> "SAAS" -> "Remote Application"
[03:07] <timClicks> "App" -> "Application"
[03:36] <tlm> sounds better to me timClicks
[03:43] <timClicks> for the CMR section of `juju status`, we use the heading "Store" when we should probably use "Controller" or "Source"
[03:44] <timClicks> For models on the same controller, "Store" is reporting the controller name
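For illustration, the machine section headers would go from roughly the current layout to the longer forms proposed above (an abbreviated, hypothetical sketch, not actual `juju status` output):

    current:   Machine  State  DNS  Inst id      Series  AZ                 Message
    proposed:  Machine  State  DNS  Instance ID  Series  Availability Zone  Message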
[06:06] <kelvinliu> anyone could help to take a look plz? thanks!  https://github.com/juju/juju/pull/11743
[06:22] <zeestrat> timClicks: Those suggestions sound great and make it clearer, especially the SAAS -> Remote Application one.
[11:30] <manadart_> stickupkid or achilleasa: IP address extra field migration: https://github.com/juju/juju/pull/11744
[12:21] <rick_h> petevg:  morning, when you're around wondered if I could sync up on the gitlab image party
[12:21] <rick_h> timClicks[m]:  so the SAAS has actually been nice in these demos
[12:22] <rick_h> timClicks[m]:  as folks are actively talking about building "internal SAAS" and so it's made a lot of sense the way it's worded in the business cases (just for some customer engaging contexts)
[12:22] <rick_h> zeestrat:  ^
[12:23] <manadart_> stickupkid: You'll like this: https://github.com/juju/juju/pull/11745
[12:24] <zeestrat> rick_h: I see your point, but me and past self disagree :) https://bugs.launchpad.net/juju/+bug/1728631/comments/4
[12:24] <mup> Bug #1728631: [2.3] consume feature hard to manage <docteam> <juju:Expired> <https://launchpad.net/bugs/1728631>
[12:25] <rick_h> zeestrat:  I understand, it's just something that I've been surprised at how well that explains things in the past few weeks engaging with some folks in the real world
[12:25] <stickupkid> manadart_, wooooow
[12:25] <stickupkid> finally, hate that method
[12:25] <tvansteenburgh> manadart_: can i bug you with a question?
[12:26] <manadart_> tvansteenburgh: Shoot.
[12:27] <tvansteenburgh> manadart_: i've got a bundle with some "to: [lxd/0, lxd/1]" directives in it. When I deploy, my machines are getting ipv4 addrs, but the lxds are getting ipv6 addrs. Is there a way to force the lxds to get ipv4 addrs?
[12:28] <tvansteenburgh> I mean I know how to do it with lxd directly, but not sure how to do it since Juju is setting up lxd
[12:30] <manadart_> tvansteenburgh: 1, which Juju version; 2, are the containers space constrained/bound? I.e. are they using a bridged NIC from the host, or lxdbr0?
[12:32] <tvansteenburgh> manadart_: 2.7.6, and it appears they are using a bridged nic from the host
[12:33] <tvansteenburgh> manadart_: https://pastebin.canonical.com/p/XKZPtPWhzF/
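For readers without access to the paste, a minimal sketch of the kind of bundle placement being described (hypothetical charm, application, and machine names; the real bundle is in the pastebin above):

    applications:
      my-app:                 # hypothetical application
        charm: cs:ubuntu
        num_units: 2
        to:
          - "lxd:0"           # container on machine 0
          - "lxd:1"           # container on machine 1
    machines:
      "0": {}
      "1": {}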
[12:36] <manadart_> tvansteenburgh: Gimme a few.
[12:37] <petevg> rick_h: so I have promised to make folks waffles this morning, which means I won’t be at my desk until the sync. I have Adam Israel’s gitlab charm deployed on AKS, though.
[12:38] <petevg> I’m tempted to just replace the Juju one, since Adam’s actually has public source code, and is up to date!
[12:38] <rick_h> petevg:  ok, all good enjoy waffles. Just when you're in I wanted to say hi.
[12:44] <manadart_> manadart_: Can you get me /etc/netplan/whateveritis.yaml?
[12:45] <tvansteenburgh> manadart_: was that for me?
[12:45] <manadart_> tvansteenburgh: Derp. Yes.
[12:45] <tvansteenburgh> talking to yourself again?
[12:46] <manadart_> tvansteenburgh: Someone's got to.
[12:46] <tvansteenburgh> manadart_: http://paste.ubuntu.com/p/DqjMc2qSYw/
[12:48] <achilleasa> manadart_: stickupkid https://pastebin.canonical.com/p/bJ2SV9fJQg/ ... boo :-(
[12:48] <stickupkid> achilleasa, knew it
[12:49] <achilleasa> so now I need to figure out how the firewaller works :D
[12:49] <stickupkid> my bet still stands
[12:50] <achilleasa> at least we know that it's not the provider... it's the firewaller
[12:50] * achilleasa needs to dig deeper
[13:00] <manadart_> tvansteenburgh: This is odd. I need to look into it.
[13:03] <tvansteenburgh> manadart_: ack
[13:52] <stickupkid> hml, this is the charmhub find PR https://github.com/juju/juju/pull/11736
[14:40] <Eryn_1983_FL> well boss finally got ovn to work, but then he rebooted and broke mysql
[14:40] <Eryn_1983_FL> also removed octavia, so thats got to go back in
[14:48] <Eryn_1983_FL> any idea what this means guys for mysql
[14:48] <Eryn_1983_FL> https://paste.debian.net/1153489/
[14:49] <Eryn_1983_FL> it cant connect to the mysql cluster
[14:49] <Eryn_1983_FL> yet i can telnet port 3306
[14:49] <Eryn_1983_FL> and ping just freaking fine
[15:15] <petevg> beisner, jamespage: does the error Eryn_1983_FL is running into above look familiar to you?
[15:16] <petevg> rick_h: Looping back to your request from this morning: want to jump into the Juju daily? I've got some time to chat.
[15:16] <Eryn_1983_FL> i think i made progress
[15:16] <Eryn_1983_FL> it had a cluster issue but i started replication again
[15:16] <rick_h> petevg:  omw
[15:16] <Eryn_1983_FL> i got 2/3 working so far
[15:18] <Eryn_1983_FL> https://paste.debian.net/1153489/
[15:18] <Eryn_1983_FL> ok now its working
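For anyone following along, a couple of generic checks for this kind of recovery (a sketch, assuming the application is named mysql; adjust to the actual deployment):

    # overall workload status and messages for the units
    juju status mysql

    # recent status history for one unit, useful to see when it recovered
    juju show-status-log mysql/0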
[15:29] <rick_h> tvansteenburgh:  anyone free that knows how expose works on k8s charms able to hop into a call? https://meet.google.com/dxr-hngd-beo
[15:35] <Eryn_1983_FL> ok so vault is in blocked status
[15:35] <Eryn_1983_FL> i restart the services and it seems ok in the lxd
[15:35] <Eryn_1983_FL> but im still blocked
[15:35] <Eryn_1983_FL> should i juju resolve vault?
[15:38] <tvansteenburgh> rick_h: kelvinliu should know
[15:38] <tvansteenburgh> oh maybe he's eod
[15:39] <tvansteenburgh> somebody on wallyworld's juju k8s team
[15:40] <tvansteenburgh> rick_h: i think juju expose adds an ingress rule for the charm, do you need more specifics than that?
[15:45] <tvansteenburgh> rick_h: there's some stuff in Discourse about it too, see https://discourse.juju.is/t/getting-started/152 and search page for "Exposing gitlab"
[15:47] <rick_h> tvansteenburgh:  yea, trying to figure it out but I'm on aks and so "ingress rule" and what needs to work is a bit fuzzy
[15:47] <rick_h> everything I see is that in CK you have to configure/get a worker node IP of the cluster
[15:48] <rick_h> tvansteenburgh:  so I'm not sure how this works in a hosted k8s world
[15:51] <tvansteenburgh> rick_h: if you're just trying to make it work, the steps in that discourse post ^ should be sufficient
[15:51] <Eryn_1983_FL> so is there a way i can check why vault is blocked?
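Generic ways to see why a unit reports blocked (a sketch, assuming the application is named vault):

    # the Message column normally explains why the charm set blocked status
    juju status vault

    # status history and recent log lines for the unit give more detail
    juju show-status-log vault/0
    juju debug-log --include vault/0 --replay --no-tail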
[15:53] <rick_h> tvansteenburgh:  so I deployed it with the loadbalancer config argument, grabbed the IP of the unit, set the config for juju-external... to that $IP.xp.io and exposed it and nadda
[15:53] <rick_h> tvansteenburgh:  I don't get how .xp.io gets invoked into it
[15:53] <tvansteenburgh> rick_h: what cloud?
[15:55] <tvansteenburgh> rick_h: the ip should be that of the LB, if you have one, or the IP of a worker node if you don't have an LB
=== grumble is now known as rawr
[15:58] <rick_h> tvansteenburgh:  ok, so I need to get those details in an AKS setup from kubectl then probably
[16:01] <rick_h> tvansteenburgh:  ok, so I've got a "kubenet" networking setup by default on AKS. And in the networking details the only interesting thing is DNS Service 10.0.0.10?
[16:03] <rick_h> tvansteenburgh:  hmmm, there's a http routing option I'm turning on to see if that gives me anything
[16:04] <knkski> rick_h: i just got on, so am missing the context. are you trying to get external access to your charms?
[16:10] <rick_h> knkski:  yes, on aks atm
[16:10] <rick_h> knkski:  looking at https://docs.microsoft.com/en-us/azure/aks/http-application-routing which seems in the ballpark but not sure how that integrates with juju's pod spec config provided
[16:40] <knkski> rick_h: if you're using AKS, you're not using Charmed Kubernetes at all, right? If so, I'm not really sure how to get the external IP of the cluster, but once you do, it should be as easy as `juju config charm-name juju-external-hostname=$EXTERNAL_IP && juju expose charm-name`
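Putting the pieces of this thread together, a rough sketch of the workflow on a hosted cluster (hypothetical application name and address; the Kubernetes namespace Juju uses matches the model name):

    # find the external address of the service Juju created for the application
    kubectl get svc -n my-model        # look at the EXTERNAL-IP column

    # point Juju's external hostname at it, then expose the application
    juju config gitlab juju-external-hostname=203.0.113.10
    juju expose gitlab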
[16:45] <rick_h> knkski:  yea, I moved to gke because there I can get it on the load balancer, set the config and have the address but :(
[16:45] <knkski> rick_h: also, what charm are you trying to access externally?
[16:45] <rick_h> knkski:  gitlab or mediawiki
[16:51] <knkski> rick_h: so no connection after you've exposed the charm? does `juju status` show the charm as exposed?
[16:51] <knkski> and if so, what address does it show?
[16:54] <rick_h> knkski:  yea gke and juju status show one pod up on 35.223.146.32
[16:57] <rick_h> knkski:  i might just hijack a juju meeting later in the day tonight and make them make it work. :)
[16:57] <knkski> rick_h: would you be able to send me the kubeconfig for the cluster? i can poke at it if you'd like.
[16:57] <rick_h> knkski:  going into a call, thanks for the offer.
[17:51] <josephillips> hi
[17:51] <josephillips> question: im performing a fork of a juju charm
[17:52] <josephillips> but reading the code i found this in a part of the code: "If the charm was installed from source we cannot upgrade it. For backwards compatibility a config flag must be set for this code to run, otherwise a full service level upgrade will fire on config-changed."
[17:52] <josephillips> what exactly does that mean
[17:52] <josephillips> it's charm-swift-proxy
[17:55] <josephillips> https://github.com/openstack/charm-swift-proxy/blob/0ce1ee67f8ee69f7c6fada10979aaf1415c7cf68/charmhelpers/contrib/openstack/utils.py#L1358
[17:56] <josephillips> what do i have to do, just set action-managed-upgrade to true
[17:56] <josephillips> on config
[17:56] <josephillips> ?
[18:50] <pmatulis> josephillips, what is your objective?
[19:26] <josephillips> pmatulis: understand how i can perform the upgrade
[19:27] <josephillips> if i use my fork
[19:57] <pmatulis> josephillips, https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-upgrade-openstack.html
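For reference, a rough sketch of the sequence being discussed (hypothetical paths and unit names; the deploy guide linked above is the authoritative procedure):

    # point the existing application at the forked charm
    juju upgrade-charm swift-proxy --path /path/to/charm-swift-proxy-fork

    # gate payload upgrades behind a per-unit action instead of config-changed
    juju config swift-proxy action-managed-upgrade=true

    # then trigger the upgrade one unit at a time (action name assumed from the OpenStack charms)
    juju run-action swift-proxy/0 openstack-upgrade --wait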
[21:06] <jesseleo> Hello all, I have been having trouble getting juju bootstrapped on lxd, here is the bug I filed: https://bugs.launchpad.net/juju/+bug/1884814
[21:06] <mup> Bug #1884814: bootstrap localhost: ERROR Get "https://10.194.144.1:8443/1.0": Unable to connect to: 10.194.144.1:8443 <juju:New> <https://launchpad.net/bugs/1884814>
[21:06] <jesseleo> let me know if you want me to provide any more info
[21:07] <hml> jesseleo:  bootstrap timed out and failed because the juju client was unable to connect to the lxd instance created.
[21:10] <jesseleo> Hey hml I can launch a container manually on the same machine and have connectivity, I just don't know how to verify the connectivity exists with the juju controller container as well
[21:10] <hml> jesseleo:  correction…
[21:12] <hml> jesseleo:  it’s timing out, but failed waiting for the lxd server to respond to writing an lxd profile
[21:13] <hml> hrm
[21:15] <jesseleo> I tried reinstalling juju and lxd snaps a number of times to no avail. I was wondering what the next step in troubleshooting should be?
[21:17] <jesseleo> https://paste.ubuntu.com/p/4nvKhfbwXj/ it doesn't look like its using the socket inside the snap
[21:17] <jesseleo> but i could be wrong just pokin around
[21:18] <hml> jesseleo:  the pastebin is usually around sudo and groups.  though i didn’t think that was an issue with the snap
[21:18] <pmatulis> jesseleo, what ubuntu release?
[21:18] <jesseleo> 20.04
[21:19] <pmatulis> confirm that you are using the snap that is installed by default?
[21:19] <pmatulis> the 'lxd' snap
[21:19] <pmatulis> maybe you have LXD deb packages installed too
[21:20] <jesseleo> https://paste.ubuntu.com/p/8PvSG6d44q/
[21:22] <thumper> jesseleo: have you logged out and back in after setting up lxd?
[21:22] <thumper> I think there may be some group changes that impact permissions
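A sketch of the group check being referred to (assuming the lxd group created by the snap):

    # confirm your user is in the lxd group; if not, add it and re-login
    getent group lxd
    sudo usermod -aG lxd $USER
    newgrp lxd    # or log out and back in to pick up the new group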
[21:25] <pmatulis> jesseleo, so you're running 'lxd.migrate'. that tells me you *are* running the lxd deb packages
[21:30] <jesseleo> pmatulis corret
[21:30] <jesseleo> correct
[21:32] <pmatulis> jesseleo, that could be the problem
[21:32] <pmatulis> jesseleo, so you have existing containers that you need to move to the snap?
[21:34] <pmatulis> jesseleo, if so, i suggest moving to #lxd to resolve the migration and then look at juju
[21:39] <jesseleo> pmatulis ohh now I get what you mean. I just reinitialized lxd more than once so i thought I needed to run lxd.migrate. I just spun up a clean virtual box because I needed 20.04
[21:41] <pmatulis> jesseleo, ah ha. you are running Ubuntu on Virtualbox
[21:43] <jesseleo> pmatulis yeah running it headless then I ssh into it
[21:43] <pmatulis> btw, lxd.migrate is to migrate containers that exist under the deb lxd packages to the env managed by the lxd snap
[21:44] <jesseleo> pmatulis thank you good to know
[21:46] <pmatulis> jesseleo, did you confirm that lxd is even supposed to work in a virtualbox environment?
[21:50] <jesseleo> pmatulis yeah it works. been using it for months on my 18.04 server
[21:51] <jesseleo> yeah its been working well on my other machine
[21:52] <pmatulis> with Juju as well?
[22:00] <jesseleo> pmatulis Yeah I was writing charms on my other machine and everything was working great. but charmcraft wouldn't install my requirements so I elected to move to 20.04 and thats where I'm running into this bootstrap issue
[22:06] <pmatulis> jesseleo, can you launch a native lxd container?
[22:07] <pmatulis> (lxc launch ubuntu)
[22:08] <jesseleo> yeah I tried that earlier it works
[22:12] <jesseleo> https://paste.ubuntu.com/p/K8DDG69sfR/
[22:12] <pmatulis> it would be good to doublecheck that the versions for juju, vbox, and lxd work on 18.04 but do not work on 20.04
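A sketch of diagnostics that help narrow this kind of bootstrap failure down (assuming the lxd snap and its default lxdbr0 bridge):

    # confirm the snap daemon is running and which bridge/addresses it uses
    snap list lxd juju
    snap services lxd
    lxc network show lxdbr0

    # re-run the bootstrap with verbose output to see where it stalls
    juju bootstrap localhost lxd-test --debug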
[22:28] <rick_h> tlm:  can I bug you please about exposing a k8s app on gke/aks?
[22:28] <tlm> you can but not sure i'll be able to help :|
[22:29] <rick_h> tlm:  ok, no pressure but I kinda need to solve this or find someone to help or bust tbh
[22:29] <rick_h> tlm:  meet you in your standup room?
[22:29] <tlm> yep
[23:43] <rick_h> tlm:  do you know if the mediawiki charm actually configures the wiki? Looking at the source it sets pod_spec data but when you launch it mediawiki wants to walk you through setting up the LocalSettings.php so the db relation details don't actually get you a configured db?
[23:46] <tlm> I think you need to set the relation info
[23:46] <tlm> kelvinliu: you need to add the relations when deploying mediawiki ?
[23:47] <kelvinliu> tlm: yes, mediawiki requires the relation
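A sketch of what that looks like on the CLI (illustrative charm and application names only; check the store for the actual k8s charms):

    # deploy the wiki plus a database for it
    juju deploy cs:~juju/mediawiki-k8s mediawiki
    juju deploy cs:~juju/mariadb-k8s mariadb

    # the db relation is what fills in LocalSettings.php for the workload
    juju add-relation mediawiki mariadb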
