[01:03] wallyworld, kelvinliu: have updated that PR based on discussion this morning https://github.com/juju/juju/pull/11610
[01:15] tlm: lgtm
[04:10] babbageclunk, kelvinliu: have we fixed this vsphere bug? "In 2.7.0, networking rules must allow direct access to the ESX host for the Juju client and the controller VM. The Juju client’s access is required to upload disk images and the controller requires access to finalise the bootstrap process. If this access is not permitted by your site administrator, remain with Juju 2.6.9. This was an inadvertent regression and will be fixed in a future release"
[04:13] timClicks: probably not yet
[04:19] seems we had a workaround, but we didn't have a chance to fix it properly
[04:44] where are clouds.yaml and credentials.yaml stored on macOS and Windows?
[05:19] timClicks: no, I don't think we know any way to fix it (without the workaround of allowing access to the host).
[05:22] wallyworld: got a few minutes?
[05:22] sure
[05:22] 1:1?
[05:45] wallyworld: it looks like the stuck offer also stops the model from being destroyed
[05:45] in the same repro
[05:46] hmmm, ok, even with --force?
[05:50] I was using kill-controller
[05:50] I would assume that would --force things
[05:50] * thumper has to EOD
[05:50] I think the same underlying issue is causing the destroy failure
[05:52] * thumper out
[07:30] this little bad boy keeps failing on me atm: MachineWithCharmsSuite.TestManageModelRunsCharmRevisionUpdater
[07:30] anybody know anything about it?
[07:47] manadart, fix for the above https://github.com/juju/juju/pull/11613
[07:49] stickupkid: Approved.
[07:50] ta
[08:20] manadart, that didn't work, I believe that the long wait just isn't long enough, as I can't replicate it locally
=== ulidtko|k is now known as ulidtko
[08:41] yeah, it's hard to get the replicaset sorted, which is causing this I believe
[08:44] manadart, If you get a second as well --> https://github.com/juju/juju/pull/11611
[08:45] precursor to more work around endpoint binding changes
[11:28] manadart, got a sec?
[11:29] stickupkid: Yep.
[11:29] in daily
[13:07] petevg: ping, do you have a few I can steal?
[13:07] rick_h: a few minutes? Not yet. But I do after the daily sync.
[13:08] petevg: rgr ty
[13:08] np
[14:52] manadart, I'm guessing for endpoint bindings that have an alpha space, we should be ignoring that for the machine topology?
[14:54] stickupkid:
[14:54] Hmm.
[14:54] I don't think we are ready for always spaces yet, so yes.
[14:55] manadart, thought as much, just trying to hammer out the tests around this
[14:59] stickupkid: Actually, this won't work for O7k.
[14:59] manadart, what do you mean?
[15:00] stickupkid: If we have a charm with bindings to space alpha and beta, we'd only get a NIC in beta and the charm wouldn't work.
[15:00] ah, right right
[15:01] so you'd end up with space topology all the time then I believe
[15:01] manadart,
[15:02] stickupkid: The case where you could ignore it and omit generation is when there is only one space, because that will be alpha and the operator doesn't care about spaces.
[15:02] right right
[15:02] I mean omit generation of the topology.
[15:02] Yeah, I think that's the rule.
[16:24] manadart, PR is ready for re-review
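(Editor's note: the 04:44 question above about where clouds.yaml and credentials.yaml live went unanswered in-channel. As a hedged aside, the Juju 2.x client is believed to keep both files in its data directory; the paths below are assumptions from memory, not confirmed anywhere in this log, so verify them against your client version.)

    # Default Juju client data directory (holds clouds.yaml and credentials.yaml),
    # unless overridden with $JUJU_DATA -- assumed defaults, please verify:
    #   Linux and macOS:  ~/.local/share/juju
    #   Windows:          %APPDATA%\Juju
    ls ~/.local/share/juju/clouds.yaml ~/.local/share/juju/credentials.yaml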
[17:21] hi
[17:22] hi/
[17:22] i already have an ubuntu openstack deployed with juju and kvm. is it possible to assign some nodes to run on lxd and others with kvm?
[17:22] josephillips: for the nova compute nodes?
[17:23] yep
[17:23] i want to designate some nodes with kvm and other nodes with lxd
[17:23] josephillips: hmm, have to check with the openstack folks. There was a lxd charm at one point but I'm not sure how that's managed today.
[17:23] beisner: do you know someone that can point josephillips in the right direction? ^
[17:24] is it possible to perform config per unit and not per application?
[17:24] o/
[17:25] josephillips: no, the idea of applications is that the units are consistent, so you'd definitely need to split into two sets of units
[17:25] https://github.com/openstack-archive/charm-lxd
[17:26] because in this documentation the usage is changing virt-type on nova-compute
[17:26] if i do that it will change kvm to lxd on all nodes
[17:26] josephillips rick_h - the nova-lxd charm and hypervisor work is deprecated. KVM is the hypervisor that we support.
[17:26] with the nova charms, that is.
[17:27] beisner: ok, I wasn't sure if there was a new path for the lxd as a hypervisor. Thanks for the guidance
[17:27] any reason for that?
[17:28] the main reason was: no actual adoption of it vs. the effort to maintain and develop it.
[17:28] s/no/very little/
[17:28] but openstack itself keeps supporting it, right?
[17:29] we are openstack itself in this context.
[17:29] oh
[17:30] josephillips: virt-type lxc technically is all vanilla and might just work, but we do not validate it, nor do we put it forth for users to use.
[17:30] via charms
[17:31] oh got it, but that still has the same problem as lxd
[17:31] if i do that, will it replace kvm with lxc on all compute nodes?
[17:32] josephillips: yes. charm config is application-wide.
[17:32] another question
[17:32] josephillips: you can deploy two nova-computes as two differently-named applications and that works
[17:32] oh
[17:33] how can i do that?
[17:33] "works" in the sense of you can have two differently-configured personalities of nova-compute units
[17:34] not "works" in the sense that you will succeed with virt-type lxc.
[17:34] :)
[17:35] josephillips: all the same relations, just deploy another nova-compute with a name like nova-compute-foo.
[17:35] do i have to download the charm locally to do that?
[17:35] ie. nova-compute-bar and nova-compute-foo would exist in the deployment.
[17:35] you can do it in a bundle
[17:36] also, you could download a local copy
[17:37] and is there no plan for another kind of container support in the future?
[17:38] josephillips: lots of container support around here for sure :-)
[17:39] josephillips: nova driving lxd is the specific use case that isn't in development. lxd/lxc are in full-gear adoption and use, just not with the nova shim.
[17:40] josephillips: have you checked out lxd clustering?
[17:40] nope
[17:41] i was looking for a solution so users can create lxc containers on nodes for databases
[17:44] josephillips: gotcha. so you can do that with juju on openstack in kvm instances; also, circling back to the lxd clustering (which is sans openstack): https://linuxcontainers.org/lxd/docs/master/clustering
[17:48] i will check that
=== sai_p___ is now known as sai_p_
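(Editor's note: a minimal sketch of the approach described at 17:32-17:35 above -- deploying the nova-compute charm twice under different application names so each application carries its own config. The application names, the virt-type values, and the relations shown are illustrative assumptions rather than a validated recipe; as noted in the discussion, only KVM is supported with the nova charms.)

    # deploy the same charm under two application names, each with its own config
    juju deploy nova-compute nova-compute-foo --config virt-type=kvm
    juju deploy nova-compute nova-compute-bar --config virt-type=lxc   # unvalidated, per the discussion above
    # each new application needs the same relations the existing nova-compute has, e.g.:
    juju add-relation nova-compute-bar nova-cloud-controller
    juju add-relation nova-compute-bar rabbitmq-server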
[20:08] hey, i was hoping someone might be able to help me with an issue i'm experiencing trying to enable kubeflow on microk8s. the issue arises with the mongodb pod complaining about what i assume is ipv6 binding. i tried disabling ipv6 on lxd, but it doesn't seem to have fixed anything.
[20:08] 2020-05-21T20:07:28.442+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true, ipv6: true, port: 37017, ssl: { PEMKeyFile: "/var/lib/juju/server.pem", PEMKeyPassword: "", mode: "requireSSL" } }, replication: { oplogSizeMB: 1024, replSet: "juju" }, security: { authorization: "enabled", keyFile: "/var/lib/juju/shared-secret" }, storage: { dbPath: "/var/lib/juju/db", engine: "wiredTiger", journal: { enabled: true } }, systemLog: { quiet: true } }
[20:08] 2020-05-21T20:07:28.443+0000 I STORAGE [initandlisten] exception in initAndListen std::exception: open: Address family not supported by protocol, terminating
[20:09] this happens when calling the command microk8s.juju --debug bootstrap microk8s --config juju-no-proxy=10.0.0.1
[20:10] i need to pass ipv6: false i think to the mongo pod but am lost on where to do that
[22:05] thumper: i commented on the bug - their log level was =WARNING, so yeah, not much got included :-)
[22:16] thumper: one small step https://github.com/juju/juju/pull/11615
[22:57] kelvinliu: will that bootstrap to EKS PR land in Juju 2.8.0?
[22:58] timClicks: sry, it should target the develop branch
[22:58] kelvinliu: no need to apologise :)
[23:00] we want it for 2.8
[23:10] since so far it's a small CLI tweak only
[23:41] wallyworld: let's discuss the target branch for eks?
[23:42] ok
[23:55] thumper: sigh, the cmr thing all worked for me on 2.8, will have to test with 2.7. you tested with 2.8 though right?