[06:14] <kelvinliu> hpidcock: wallyworld_: https://github.com/juju/juju/pull/11932 this PR syncs k8s-spike with dev, +1 plz
[06:14] <wallyworld_> looking
[06:17] <wallyworld_> kelvinliu: +1
[06:17] <kelvinliu> ty
[06:21] <kelvinliu> wallyworld_: hpidcock and here is the 1st PR for adding the api layer for k8s provider https://github.com/juju/juju/pull/11933
[07:58] <stickupkid> manadart, I changed the PR, can you review again
[07:58] <manadart> stickupkid: Stand by.
[07:58] <stickupkid> https://github.com/juju/juju/pull/11929
[07:59] <stickupkid> it's a pretty drastic change, but I think we should do this more often
[08:09] <icey> hey manadart: it gets more awesome: https://pastebin.ubuntu.com/p/PSQJ95XWzd/
[08:09] <icey> one machine managed to get the spaces correct, but the rest are failing :)
[08:09] <stickupkid> wow
[08:09] <stickupkid> ha
[08:12] <icey> it looks a bit more mixed actually, there are some containers on machine 2 that did come up, but a lot that have missing spaces errors
[08:22] <manadart> icey: Hmm, and those container hosts were provisioned fresh after the work to replace the agent binaries?
[08:23] <icey> manadart: they're all in a bundle I deployed this morning
[08:25] <manadart> icey: Can you confirm that the good ones have the br-{nic} device (according to Juju) and the failed ones are missing it?
[08:25] <icey> and I did a `db.toolsmetadata.find({})` search before deploying, got no results
[08:26] <icey> manadart: so, even just looking at juju status --format=yaml, there are some strange differences :)
[08:28] <icey> manadart: https://pastebin.ubuntu.com/p/gSNWxcKtQc/
[08:28] <icey> machine 0 has all containers up
[08:28] <icey> machine 2 is mostly down
[08:31] <icey> and there are br-{nic} entries, but not br-{nic}-{vlan} (mostly) in linklayerdevices on machine 2
[08:55] <manadart> icey: What about the tools metadata after deploying?
[08:56] <icey> manadart: db.toolsmetadata.find({}) returns nothing
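For reference, the `db.toolsmetadata.find({})` query above is run from a mongo shell on the controller machine. A rough sketch of getting there, assuming a typical Juju 2.x controller layout; the agent path, the 37017 port, the `statepassword` key, and the `machine-id`/`name` field names in `linklayerdevices` are assumptions to verify against your own controller:

```shell
# Assumed Juju 2.x controller layout; verify paths on your system.
juju ssh -m controller 0

# On the controller machine: pull the machine agent's mongo credentials
# from its agent.conf, then open a mongo shell against the juju database.
agent=$(ls /var/lib/juju/agents | grep '^machine-')
password=$(sudo awk '/^statepassword:/ {print $2}' "/var/lib/juju/agents/$agent/agent.conf")
mongo --ssl --sslAllowInvalidCertificates \
      --authenticationDatabase admin \
      -u "$agent" -p "$password" localhost:37017/juju

# At the mongo prompt, the queries discussed in this conversation:
#   db.toolsmetadata.find({})
#   db.linklayerdevices.find({"machine-id": "2", "name": /^br-/})
```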
[09:00] <stickupkid> manadart, any idea what the restrictions are for upgrading a model/controller?
[09:00] <stickupkid> manadart, i.e. do you need to be a super user/admin user?
[09:05] <manadart> stickupkid: For upgrade-model you need super-user and write.
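The permissions manadart describes can be granted with `juju grant`. A minimal sketch, assuming Juju 2.x syntax; `alice` and `mymodel` are placeholder names, and `juju help grant` on your version is the authority:

```shell
# "alice" and "mymodel" are illustrative placeholders.
juju grant alice superuser         # controller-level access
juju grant alice write mymodel     # write access on the target model
juju upgrade-model -m mymodel      # should now be permitted
```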
[09:05] <stickupkid> wicked, did it right :)
[09:38] <manadart> icey: I think I have it. Got a sec? https://meet.google.com/fgr-xbog-nyb
[10:51] <bthomas> kubectl shows that the juju init container (juju-pod-init) is in status Running, and the charm-specific container is in status PodInitializing. I was under the impression that the init container must run to completion. If this is correct, how can I find out why the init container is persistently in the Running state? I have checked the init container log already.
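On bthomas's question: init containers do have to exit before the regular containers start, so a persistently Running juju-pod-init will keep the charm container stuck in PodInitializing. A few generic kubectl checks beyond reading the log; the namespace and pod names here are illustrative placeholders:

```shell
# "mymodel" and "myapp-0" are placeholders for the model namespace and pod.
kubectl -n mymodel describe pod myapp-0                      # events plus per-init-container state
kubectl -n mymodel logs myapp-0 -c juju-pod-init --follow    # stream the init container's output
kubectl -n mymodel exec myapp-0 -c juju-pod-init -- ps aux   # see what process it is blocked on
```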
[11:25] <icey> manadart: in the end, one of the machines (2) had containers that wouldn't start
[11:37] <manadart> icey: OK, can you get me the machine log for it?
[11:37] <manadart> stickupkid: https://github.com/juju/juju/pull/11934
[11:39] <icey> sent via PM as it has some IPs in it :)
[12:24] <stickupkid> manadart, is that it, SupportsSpaces == true
[12:24] <stickupkid> manadart, ah, SSHAddresses
[12:24] <manadart> That's it.
[12:25] <stickupkid> I'll do the Q&A in a bit...
[12:29] <stickupkid> manadart, I'm going to be annoying and say I want an integration test for this
[12:30] <manadart> stickupkid: Fair enough. I'll do another card.
[18:43] <qthepirate> Hello everyone!
[18:45] <qthepirate> Having an issue with a cached IP address in juju. Essentially: My charm (nova-cloud-controller) keeps pointing to an IP address of an OLD percona charm. The old charm was removed AND all relations/applications are gone from it. I then removed the N-C-C charm and redeployed it, and it's STILL pulling the old IP address.
[18:45] <qthepirate> Is there a way to clear/refresh the metadata-cache in the juju controller?
[19:07] <mirek186> has anyone experienced odd juju ssh timeouts/hangs?
[19:08] <mirek186> for some charms it's good every single time; for others, like keystone or nova-cloud-controller, juju will ssh in and then after a few minutes hang and then time out
[21:10] <qthepirate> is there a way to clear out the metadata-cache on a juju controller?
[21:41] <qthepirate> I think it's holding on to a variable that needs to be updated
[22:02] <wallyworld> qthepirate: you may want to ask in #openstack (which is where the openstack charming folks tend to hang out) as this sounds like an openstack charming question. the charms themselves are responsible for passing around data about how the deployment is set up. you could also ask on https://discourse.juju.is and openstack charming folks can better see the post there
[22:05] <qthepirate> wallyworld: I would agree, but the issue i've chased down leads to this config line: destinations=metadata-cache://jujuCluster/?role=PRIMARY
[22:13] <wallyworld> that's not a juju config item though right? that looks like charm config?
[22:20] <qthepirate> right, it's in an app .conf file, but it's looking to the juju cluster for an item
[22:30] <wallyworld> in the context of juju itself, juju knows nothing about "jujuCluster" - it seems like that's something the charms manage
[22:34] <qthepirate> wallyworld: Thanks for letting me bounce it off you. I checked the logs and it's definitely something else. Still trying to track down this error
[22:35] <wallyworld> qthepirate: no worries, sorry if i've been a bit vague as the openstack charms are not something i know a lot about
[23:02] <thumper> hpidcock: https://github.com/juju/collections/compare/master...howbazaar:use-base-testing
[23:34] <hpidcock> thumper: looking good :)
[23:36] <hpidcock> thumper: and love the use of testing.T subtests
[23:56] <qthepirate> wallyworld: do you know anything about juju not releasing unused ip addresses?