[01:03] <tlm> wallyworld, kelvinliu: have updated that PR based on discussion this morning https://github.com/juju/juju/pull/11610
[01:15] <kelvinliu> tlm: lgtm
[04:10] <timClicks> babbageclunk, kelvinliu: have we fixed this vsphere bug? "In 2.7.0, networking rules must allow direct access to the ESX host for the Juju client and the controller VM. The Juju client’s access is required to upload disk images and the controller requires access to finalise the bootstrap process. If this access is not permitted by your site administrator, remain with Juju 2.6.9. This was an inadvertent regression and will be fixed in a future release"
[04:13] <kelvinliu> timClicks: probably not yet
[04:19] <kelvinliu> seems we have a workaround for it, but we didn't have a chance to fix it properly
[04:44] <timClicks> where are clouds.yaml and credentials.yaml stored on macOS and Windows?
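[Editor's note: the question above isn't answered in the log. For reference, the Juju client keeps clouds.yaml and credentials.yaml in its data directory: ~/.local/share/juju on Linux and macOS, %APPDATA%\Juju on Windows, with the JUJU_DATA environment variable overriding the default on any platform. A minimal sketch for Linux/macOS:]

```shell
# Where the Juju client keeps clouds.yaml and credentials.yaml.
# JUJU_DATA overrides the default; otherwise the client uses
# ~/.local/share/juju on Linux/macOS (%APPDATA%\Juju on Windows).
juju_data="${JUJU_DATA:-$HOME/.local/share/juju}"
echo "clouds.yaml:      $juju_data/clouds.yaml"
echo "credentials.yaml: $juju_data/credentials.yaml"
```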
[05:19] <babbageclunk> timClicks: no, I don't think we know any way to fix it (without the workaround of allowing access to the host).
[05:22] <thumper> wallyworld: got a few minutes?
[05:22] <wallyworld> sure
[05:22] <thumper> 1:1?
[05:45] <thumper> wallyworld: it looks like the stuck offer also stops the model from being destroyed
[05:45] <thumper> in the same repro
[05:46] <wallyworld> hmmm, ok, even with --force?
[05:50] <thumper> I was using kill-controller
[05:50] <thumper> I would assume that would --force things
[05:50]  * thumper has to EOD
[05:50] <thumper> I think the same underlying issue is causing the destroy failure
[05:52]  * thumper out
[07:30] <stickupkid> this little bad boy keeps failing on me atm MachineWithCharmsSuite.TestManageModelRunsCharmRevisionUpdater
[07:30] <stickupkid> anybody know anything about it?
[07:47] <stickupkid> manadart, fix for the above https://github.com/juju/juju/pull/11613
[07:49] <manadart> stickupkid: Approved.
[07:50] <stickupkid> ta
[08:20] <stickupkid> manadart, that didn't work, I believe that the long wait just isn't long enough, as I can't replicate it locally
[08:41] <stickupkid> yeah, it's hard to get the replicaset sorted, which is causing this I believe
[08:44] <stickupkid> manadart, If you get a second as well --> https://github.com/juju/juju/pull/11611
[08:45] <stickupkid> precursor to more work around endpoint binding changes
[11:28] <stickupkid> manadart, got a sec?
[11:29] <manadart> stickupkid: Yep.
[11:29] <stickupkid> in daily
[13:07] <rick_h> petevg:  ping, do you have a few I can steal?
[13:07] <petevg> rick_h: a few minutes? Not yet. But I do after the daily sync.
[13:08] <rick_h> petevg:  rgr ty
[13:08] <petevg> np
[14:52] <stickupkid> manadart, I'm guessing for endpoint bindings that have an alpha space, we should be ignoring that for the machine topology?
[14:54] <manadart> stickupkid:
[14:54] <manadart> Hmm.
[14:54] <manadart> I don't think we are ready for always spaces yet, so yes.
[14:55] <stickupkid> manadart, thought as much, just trying to hammer out the tests around this
[14:59] <manadart> stickupkid: Actually, this won't work for O7k.
[14:59] <stickupkid> manadart, what do you mean?
[15:00] <manadart> stickupkid: If we have a charm with bindings to space alpha and beta, we'd only get a NIC in beta and the charm wouldn't work.
[15:00] <stickupkid> ah, right right
[15:01] <stickupkid> so you'd end up with space topology all the time then I believe
[15:01] <stickupkid> manadart,
[15:02] <manadart> stickupkid: The case where you could ignore it and omit generation is when there is only one space, because that will be alpha and the operator doesn't care about spaces.
[15:02] <stickupkid> right right
[15:02] <manadart> I mean omit generation of the topology.
[15:02] <manadart> Yeah, I think that's the rule.
[16:24] <stickupkid> manadart, PR is ready for re-review
[17:21] <josephillips> hi
[17:22] <josephillips> hi/
[17:22] <josephillips> i already have an ubuntu openstack deployed with juju and kvm. is it possible to assign some nodes to run on lxd and others with kvm?
[17:22] <rick_h> josephillips:  for the nova compute nodes?
[17:23] <josephillips> yep
[17:23] <josephillips> i want to designate some nodes with kvm and other nodes with lxd
[17:23] <rick_h> josephillips:  hmm, have to check with the openstack folks. There was a lxd charm at one point but I'm not sure how that's managed today.
[17:23] <rick_h> beisner:  do you know someone that can point josephillips in the right direction? ^
[17:24] <josephillips> is it possible to perform a config per unit and not per application?
[17:24] <beisner> o/
[17:25] <rick_h> josephillips:  no, the idea of applications is the units are consistent so you'd definitely need to split into two sets of units
[17:25] <josephillips> https://github.com/openstack-archive/charm-lxd
[17:26] <josephillips> because in this documentation the usage is changing the virt-type on nova-compute
[17:26] <josephillips> if i do that, it will change kvm to lxd on all nodes
[17:26] <beisner> josephillips rick_h - the nova-lxd charm and hypervisor work is deprecated.  KVM is the hypervisor that we support.
[17:26] <beisner> with the nova charms, that is.
[17:27] <rick_h> beisner:  ok, I wasn't sure if there was a new path for the lxd as a hypervisor. Thanks for the guidance
[17:27] <josephillips> any reason for that?
[17:28] <beisner> the main reason was:  no actual adoption of it vs. the effort to maintain and develop it.
[17:28] <beisner> s/no/very little/
[17:28] <josephillips> but openstack itself is keep supporting it right?
[17:29] <beisner> we are openstack itself in this context.
[17:29] <josephillips> oh
[17:30] <beisner> josephillips:  virt-type lxc technically is all vanilla and might just work, but we do not validate it, nor do we put it forth for users to use.
[17:30] <beisner> via charms
[17:31] <josephillips> oh got it, but that still has the same problem as lxd
[17:31] <josephillips> if i do that, will it switch all compute nodes from kvm to lxc?
[17:32] <beisner> josephillips: yes.  charm config is application-wide.
[17:32] <josephillips> another question
[17:32] <beisner> josephillips: you can deploy two nova-computes as two differently-named applications and that works
[17:32] <josephillips> oh
[17:33] <josephillips> how can i do that?
[17:33] <beisner> "works" in the sense of you can have two differently-configured personalities of nova-compute units
[17:34] <beisner> not "works" in the sense that you will succeed with virt-type lxc.
[17:34] <beisner> :)
[17:35] <beisner> josephillips: all the same relations, just deploy another nova-compute with a name like nova-compute-foo.
[17:35] <josephillips> do i have to download the charm locally to do that?
[17:35] <beisner> ie. nova-compute-bar and nova-compute-foo would exist in the deployment.
[17:35] <beisner> you can do it in a bundle
[17:36] <beisner> also, you could download a local copy
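[Editor's note: a sketch of what beisner describes, assuming the nova-compute charm's `virt-type` option and illustrative application names; the commands are printed via `echo` as a side-effect-free dry run, so drop the `echo` to run them for real against a controller:]

```shell
# Sketch: deploy the same nova-compute charm twice under different
# application names, so each application carries its own virt-type
# config while sharing the usual relations. Names are illustrative.
echo juju deploy nova-compute nova-compute-kvm --config virt-type=kvm
echo juju deploy nova-compute nova-compute-lxc --config virt-type=lxc
```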
[17:37] <josephillips> and is there no plan to support another container type in the future?
[17:38] <beisner> josephillips:  lots of container support around here for sure :-)
[17:39] <beisner> josephillips:  nova driving lxd is the specific use case that isn't in development.  lxd/lxc are in full-gear adoption and use, just not with the nova shim.
[17:40] <beisner> josephillips: have you checked out lxd clustering?
[17:40] <josephillips> nope
[17:41] <josephillips> i was looking for a solution so users can create lxc containers on nodes for databases
[17:44] <beisner> josephillips:  gotcha.  so you can do that with juju on openstack in kvm instances;  also, circling back to the lxd clustering (which is sans openstack):  https://linuxcontainers.org/lxd/docs/master/clustering
[17:48] <josephillips> i will check that
[20:08] <jim31> hey, i was hoping someone might be able to help me with an issue i'm experiencing trying to enable kubeflow on microk8s. the issue arises with the mongodb pod complaining about what i assume is ipv6 binding. i tried disabling ipv6 on lxd, but it doesn't seem to have fixed anything.
[20:08] <jim31> 2020-05-21T20:07:28.442+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true, ipv6: true, port: 37017, ssl: { PEMKeyFile: "/var/lib/juju/server.pem", PEMKeyPassword: "<password>", mode: "requireSSL" } }, replication: { oplogSizeMB: 1024, replSet: "juju" }, security: { authorization: "enabled", keyFile: "/var/lib/juju/shared-secret" },
[20:08] <jim31> storage: { dbPath: "/var/lib/juju/db", engine: "wiredTiger", journal: { enabled: true } }, systemLog: { quiet: true } }2020-05-21T20:07:28.443+0000 I STORAGE  [initandlisten] exception in initAndListen std::exception: open: Address family not supported by protocol, terminating
[20:09] <jim31> this happens when calling the command microk8s.juju --debug bootstrap microk8s --config juju-no-proxy=10.0.0.1
[20:10] <jim31> i need to pass ipv6: false i think to the mongo pod but am lost on where to do that
[22:05] <wallyworld> thumper: i commented on the bug - their log level was <root>=WARNING, so yeah, not much got included :-)
[22:16] <wallyworld> thumper: one small step https://github.com/juju/juju/pull/11615
[22:57] <timClicks> kelvinliu: will that bootstrap-to-EKS PR land in Juju 2.8.0?
[22:58] <kelvinliu> timClicks:  sry, it should target develop branch
[22:58] <timClicks> kelvinliu: no need to apologise :)
[23:00] <wallyworld> we want it for 2.8
[23:10] <wallyworld> since so far it's a small CLI tweak only
[23:41] <kelvinliu> wallyworld: let's discuss the target branch for eks?
[23:42] <wallyworld> ok
[23:55] <wallyworld> thumper: sigh, the cmr thing all worked for me on 2.8, will have to test with 2.7. you tested with 2.8 though right?