[06:14] hpidcock: wallyworld_: https://github.com/juju/juju/pull/11932 this PR syncs k8s-spike with dev, +1 plz
[06:14] looking
[06:17] kelvinliu: +1
[06:17] ty
[06:21] wallyworld_: hpidcock and here is the 1st PR adding the API layer for the k8s provider https://github.com/juju/juju/pull/11933
[07:58] manadart, I changed the PR, can you review it again?
[07:58] stickupkid: Stand by.
[07:58] https://github.com/juju/juju/pull/11929
[07:59] it's a pretty drastic change, but I think we should do this more often
[08:09] hey manadart: it gets more awesome: https://pastebin.ubuntu.com/p/PSQJ95XWzd/
[08:09] one machine managed to get the spaces correct, but the rest are failing :)
[08:09] wow
[08:09] ha
[08:12] it looks a bit more mixed actually; there are some containers on machine 2 that did come up, but a lot that have missing-spaces errors
[08:22] icey: Hmm, and those container hosts were provisioned fresh after the work to replace the agent binaries?
[08:23] manadart: they're all in a bundle I deployed this morning
[08:25] icey: Can you confirm that the good ones have the br-{nic} device (according to Juju) and the failed ones are missing it?
[08:25] and I did a `db.toolsmetadata.find({})` search before deploying, got no results
[08:26] manadart: so, even just looking at juju status --format=yaml, there are some strange differences :)
[08:28] manadart: https://pastebin.ubuntu.com/p/gSNWxcKtQc/
[08:28] machine 0 has all containers up
[08:28] machine 2 is mostly down
[08:31] and there are br-{nic} entries, but not br-{nic}-{vlan} (mostly) in linklayerdevices on machine 2
[08:55] icey: What about the tools metadata after deploying?
[08:56] manadart: db.toolsmetadata.find({}) returns nothing
[09:00] manadart, any idea what the restrictions are for upgrading a model/controller?
[09:00] manadart, i.e. do you need to be a super user/admin user?
[09:05] stickupkid: For upgrade-model you need super-user and write.
[09:05] wicked, did it right :)
[09:38] icey: I think I have it. Got a sec? https://meet.google.com/fgr-xbog-nyb
[10:51] kubectl shows that the juju init container (juju-pod-init) is in status Running, and the charm-specific container is in status PodInitializing. I was under the impression that the init container must run to completion. If this is correct, how can I find out why the init container is persistently in the Running state? I have checked the init container log already.
[11:25] manadart: in the end, one of the machines (2) had containers that wouldn't start
[11:37] icey: OK, can you get me the machine log for it?
[11:37] stickupkid: https://github.com/juju/juju/pull/11934
[11:39] sent via PM as it has some IPs in it :)
[12:24] manadart, is that it, SupportsSpaces == true
[12:24] manadart, ah, SSHAddresses
[12:24] That's it.
[12:25] I'll do the Q&A in a bit...
[12:29] manadart, I'm going to be annoying and say I want an integration test for this
[12:30] stickupkid: Fair enough. I'll do another card.
[18:43] Hello everyone!
[18:45] Having an issue with a cached IP address in juju. Essentially: my charm (nova-cloud-controller) keeps pointing to an IP address of an OLD percona charm. The old charm was removed AND all relations/applications are gone from it. I then removed the N-C-C charm and redeployed it and it's STILL pulling the old IP address.
[18:45] Is there a way to clear/refresh the metadata-cache in the juju controller?
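(A minimal sketch of how the stale address above could be tracked down from the Juju side. The unit name nova-cloud-controller/0, the shared-db endpoint, the relation id and the remote unit are all illustrative placeholders; the IP a charm hands out normally lives in relation data rather than in any controller-side cache.)

    # show the unit plus the data its related units have published on each relation
    juju show-unit nova-cloud-controller/0
    # list relation ids for the (assumed) shared-db endpoint
    juju run --unit nova-cloud-controller/0 -- relation-ids shared-db
    # dump every setting the remote unit published on one of those relations
    juju run --unit nova-cloud-controller/0 -- relation-get -r shared-db:0 - <remote-unit>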
[19:07] has anyone experienced odd juju ssh timeouts/hangs?
[19:08] for some charms it's good every single time; for others like keystone or nova-cloud-controller, juju will ssh in and then after a few minutes hang and then time out
[21:10] is there a way to clear out the metadata-cache on a juju controller?
[21:41] I think it's holding on to a variable that needs to be updated
[22:02] qthepirate: you may want to ask in #openstack (which is where the openstack charming folks tend to hang out) as this sounds like an openstack charming question. the charms themselves are responsible for passing around data about how the deployment is set up. you could also ask on https://discourse.juju.is and the openstack charming folks can better see the post there
[22:05] wallyworld: I would agree, but the issue I've chased down leads to this config line: destinations=metadata-cache://jujuCluster/?role=PRIMARY
[22:13] that's not a juju config item though, right? that looks like charm config?
[22:20] right, it's in an app .conf file, but it's looking to the juju cluster for an item
[22:30] in the context of juju itself, juju knows nothing about "jujuCluster" - it seems like that's something the charms manage
[22:34] wallyworld: Thanks for letting me bounce it off you. I checked the logs and it's definitely something else. Still trying to track down this error
[22:35] qthepirate: no worries, sorry if I've been a bit vague as the openstack charms are not something I know a lot about
[23:02] hpidcock: https://github.com/juju/collections/compare/master...howbazaar:use-base-testing
[23:34] thumper: looking good :)
[23:36] thumper: and love the use of testing.T subtests
[23:56] wallyworld: do you know anything about juju not releasing unused ip addresses?
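(A minimal sketch of how one could check which addresses Juju itself still reports, picking up the mongo-shell queries used earlier in the channel. Only linklayerdevices and toolsmetadata are collection names taken from the discussion above, so list the collections first rather than guessing at others.)

    # what Juju reports per machine
    juju status --format=yaml | grep -A 3 ip-addresses
    # from the controller's mongo shell: find the relevant collections, then inspect them
    db.getCollectionNames()
    db.linklayerdevices.find({}).count()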