/srv/irclogs.ubuntu.com/2020/08/25/#juju.txt

06:14 <kelvinliu> hpidcock: wallyworld_: https://github.com/juju/juju/pull/11932 this PR syncs k8s-spike with dev, +1 plz
06:14 <wallyworld_> looking
06:17 <wallyworld_> kelvinliu: +1
06:17 <kelvinliu> ty
06:21 <kelvinliu> wallyworld_: hpidcock and here is the 1st PR for adding the api layer for the k8s provider https://github.com/juju/juju/pull/11933
07:58 <stickupkid> manadart, I changed the PR, can you review again?
07:58 <manadart> stickupkid: Stand by.
07:58 <stickupkid> https://github.com/juju/juju/pull/11929
07:59 <stickupkid> it's a pretty drastic change, but I think we should do this more often
08:09 <icey> hey manadart: it gets more awesome: https://pastebin.ubuntu.com/p/PSQJ95XWzd/
08:09 <icey> one machine managed to get the spaces correct, but the rest are failing :)
08:09 <stickupkid> wow
08:09 <stickupkid> ha
08:12 <icey> it looks a bit more mixed actually, there are some containers on machine 2 that did come up, but a lot that have missing-spaces errors
08:22 <manadart> icey: Hmm, and those container hosts were provisioned fresh after the work to replace the agent binaries?
08:23 <icey> manadart: they're all in a bundle I deployed this morning
08:25 <manadart> icey: Can you confirm that the good ones have the br-{nic} device (according to Juju) and the failed ones are missing it?
08:25 <icey> and I did a `db.toolsmetadata.find({})` search before deploying, got no results
08:26 <icey> manadart: so, even just looking at juju status --format=yaml, there are some strange differences :)
08:28 <icey> manadart: https://pastebin.ubuntu.com/p/gSNWxcKtQc/
08:28 <icey> machine 0 has all containers up
08:28 <icey> machine 2 is mostly down
08:31 <icey> and there are br-{nic} entries, but not br-{nic}-{vlan} (mostly), in linklayerdevices on machine 2
08:55 <manadart> icey: What about the tools metadata after deploying?
08:56 <icey> manadart: db.toolsmetadata.find({}) returns nothing
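The toolsmetadata and linklayerdevices collections mentioned above live in the controller's MongoDB. For anyone following along, a commonly shared recipe for opening a mongo shell against a 2.x controller looks roughly like this (the agent.conf paths, port, and field names follow the usual layout but vary by Juju version, and the exact collection schemas are not guaranteed):

    juju ssh -m controller 0
    # on the controller machine, read the agent credentials from agent.conf
    user=$(sudo awk '/^tag:/ {print $2}' /var/lib/juju/agents/machine-*/agent.conf)
    password=$(sudo awk '/statepassword:/ {print $2}' /var/lib/juju/agents/machine-*/agent.conf)
    mongo --ssl --sslAllowInvalidCertificates --authenticationDatabase admin \
        -u "$user" -p "$password" localhost:37017/juju
    # then, inside the mongo shell, the queries from this conversation:
    #   db.toolsmetadata.find({})
    #   db.linklayerdevices.find({name: /^br-/}).pretty()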
09:00 <stickupkid> manadart, any idea what the restrictions are for upgrading a model/controller?
09:00 <stickupkid> manadart, i.e. do you need to be a super user/admin user?
09:05 <manadart> stickupkid: For upgrade-model you need super-user and write.
09:05 <stickupkid> wicked, did it right :)
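For reference, manadart's answer maps onto juju's access-control commands: superuser is a controller-level permission and write a model-level one. A minimal sketch (the user and model names are illustrative):

    juju grant alice superuser        # controller-level permission
    juju grant alice write mymodel    # model-level permission
    juju upgrade-model -m mymodel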
09:38 <manadart> icey: I think I have it. Got a sec? https://meet.google.com/fgr-xbog-nyb
10:51 <bthomas> kubectl shows that the juju init container (juju-pod-init) is in status Running, and the charm-specific container is in status PodInitializing. I was under the impression that the init container must run to completion. If this is correct, how can I find out why the init container is persistently in the Running state? I have checked the init container log already.
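bthomas's impression is correct: a Kubernetes init container must run to completion (exit 0) before the main containers start, which is why the charm container sits in PodInitializing. Beyond the init container's log, describing the pod and checking namespace events may show what it is stuck on (the pod and namespace names here are illustrative):

    kubectl describe pod mycharm-0 -n mymodel            # Init Containers section: state, exit code, restarts
    kubectl logs mycharm-0 -n mymodel -c juju-pod-init   # the init container's own output
    kubectl get events -n mymodel --sort-by=.metadata.creationTimestamp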
11:25 <icey> manadart: in the end, one of the machines (2) had containers that wouldn't start
11:37 <manadart> icey: OK, can you get me the machine log for it?
11:37 <manadart> stickupkid: https://github.com/juju/juju/pull/11934
11:39 <icey> sent via PM as it has some IPs in it :)
12:24 <stickupkid> manadart, is that it? SupportsSpaces == true
12:24 <stickupkid> manadart, ah, SSHAddresses
12:24 <manadart> That's it.
12:25 <stickupkid> I'll do the Q&A in a bit...
12:29 <stickupkid> manadart, I'm going to be annoying and say I want an integration test for this
12:30 <manadart> stickupkid: Fair enough. I'll do another card.
18:43 <qthepirate> Hello everyone!
18:45 <qthepirate> Having an issue with a cached IP address in juju. Essentially: my charm (nova-cloud-controller) keeps pointing to an IP address of an OLD percona charm. The old charm was removed AND all relations/applications are gone from it. I then removed the N-C-C charm and redeployed it, and it's STILL pulling the old IP address.
18:45 <qthepirate> Is there a way to clear/refresh the metadata-cache in the juju controller?
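Stale addresses like this usually live in relation data set by the charms rather than in any controller-side cache, so a first step might be to dump what the unit actually sees. A sketch, assuming the usual shared-db endpoint between nova-cloud-controller and percona; the relation id and unit names are illustrative:

    juju show-unit nova-cloud-controller/0                # includes relation data on recent 2.x
    juju run --unit nova-cloud-controller/0 -- relation-ids shared-db
    juju run --unit nova-cloud-controller/0 -- relation-get -r shared-db:42 - percona-cluster/0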
19:07 <mirek186> has anyone experienced odd juju ssh timeouts/hangs?
19:08 <mirek186> for some charms it's good every single time; for others, like keystone or nova-cloud-controller, juju will ssh in, then hang after a few minutes and time out
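One way to narrow this down is to compare juju's ssh path against a direct connection; --proxy tunnels the session through the controller, and a raw verbose ssh shows where things stall (the unit name and address are illustrative):

    juju ssh --proxy keystone/0           # go via the controller instead of connecting directly
    juju status keystone --format=yaml    # look up the unit's public/private addresses
    ssh -v ubuntu@10.5.0.42               # raw ssh with verbose output to see where it hangs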
21:10 <qthepirate> is there a way to clear out the metadata-cache on a juju controller?
21:41 <qthepirate> I think it's holding on to a variable that needs to be updated
22:02 <wallyworld> qthepirate: you may want to ask in #openstack (which is where the openstack charming folks tend to hang out) as this sounds like an openstack charming question. the charms themselves are responsible for passing around data about how the deployment is set up. you could also ask on https://discourse.juju.is where the openstack charming folks can better see the post
22:05 <qthepirate> wallyworld: I would agree, but the issue I've chased down leads to this config line: destinations=metadata-cache://jujuCluster/?role=PRIMARY
22:13 <wallyworld> that's not a juju config item though, right? that looks like charm config?
22:20 <qthepirate> right, it's in an app .conf file, but it's looking to the juju cluster for an item
22:30 <wallyworld> in the context of juju itself, juju knows nothing about "jujuCluster" - it seems like that's something the charms manage
22:34 <qthepirate> wallyworld: Thanks for letting me bounce it off you. I checked the logs and it's definitely something else. Still trying to track down this error.
22:35 <wallyworld> qthepirate: no worries, sorry if I've been a bit vague, as the openstack charms are not something I know a lot about
23:02 <thumper> hpidcock: https://github.com/juju/collections/compare/master...howbazaar:use-base-testing
23:34 <hpidcock> thumper: looking good :)
23:36 <hpidcock> thumper: and love the use of testing.T subtests
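As an aside, one practical benefit of testing.T subtests is that go test can target them individually by name; a small illustrative invocation (the test names are hypothetical):

    go test -v -run 'TestSet/empty_input' ./...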
23:56 <qthepirate> wallyworld: do you know anything about juju not releasing unused IP addresses?
