/srv/irclogs.ubuntu.com/2020/05/21/#juju.txt

[01:03] <tlm> wallyworld, kelvinliu: have updated that PR based on discussion this morning https://github.com/juju/juju/pull/11610
[01:15] <kelvinliu> tlm: lgtm
[04:10] <timClicks> babbageclunk, kelvinliu: have we fixed this vsphere bug? "In 2.7.0, networking rules must allow direct access to the ESX host for the Juju client and the controller VM. The Juju client's access is required to upload disk images and the controller requires access to finalise the bootstrap process. If this access is not permitted by your site administrator, remain with Juju 2.6.9. This was an inadvertent regression and will be fixed in a future release"
[04:13] <kelvinliu> timClicks: probably not yet
[04:19] <kelvinliu> seems we had a workaround, but we didn't have a chance to fix it properly
[04:44] <timClicks> where are clouds.yaml and credentials.yaml stored on macOS and Windows?
[05:19] <babbageclunk> timClicks: no, I don't think we know any way to fix it (without the workaround of allowing access to the host).
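Re timClicks's question about where the client stores clouds.yaml and credentials.yaml: to the best of my knowledge the defaults are the same `~/.local/share/juju` path on macOS as on Linux, and `%APPDATA%\Juju` on Windows, with the `JUJU_DATA` environment variable overriding both. A hedged Go sketch of that lookup (`jujuDataDir` is an illustrative name, not Juju's actual API; check juju's osenv package for the authoritative logic):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// jujuDataDir sketches where the Juju client keeps clouds.yaml and
// credentials.yaml by default. JUJU_DATA, if set, overrides the
// per-OS default. The function name is illustrative.
func jujuDataDir() string {
	if d := os.Getenv("JUJU_DATA"); d != "" {
		return d
	}
	if runtime.GOOS == "windows" {
		return filepath.Join(os.Getenv("APPDATA"), "Juju")
	}
	// Linux and macOS share the same default location.
	home, err := os.UserHomeDir()
	if err != nil {
		return ""
	}
	return filepath.Join(home, ".local", "share", "juju")
}

func main() {
	fmt.Println(filepath.Join(jujuDataDir(), "clouds.yaml"))
	fmt.Println(filepath.Join(jujuDataDir(), "credentials.yaml"))
}
```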
[05:22] <thumper> wallyworld: got a few minutes?
[05:22] <wallyworld> sure
[05:22] <thumper> 1:1?
[05:45] <thumper> wallyworld: it looks like the stuck offer also stops the model from being destroyed
[05:45] <thumper> in the same repro
[05:46] <wallyworld> hmmm, ok, even with --force?
[05:50] <thumper> I was using kill-controller
[05:50] <thumper> I would assume that would --force things
[05:50] * thumper has to EOD
[05:50] <thumper> I think the same underlying issue is causing the destroy failure
[05:52] * thumper out
[07:30] <stickupkid> this little bad boy keeps failing on me atm: MachineWithCharmsSuite.TestManageModelRunsCharmRevisionUpdater
[07:30] <stickupkid> anybody know anything about it?
[07:47] <stickupkid> manadart, fix for the above https://github.com/juju/juju/pull/11613
[07:49] <manadart> stickupkid: Approved.
[07:50] <stickupkid> ta
[08:20] <stickupkid> manadart, that didn't work; I believe the long wait just isn't long enough, as I can't replicate it locally
=== ulidtko|k is now known as ulidtko
[08:41] <stickupkid> yeah, it's hard to get the replicaset sorted, which is causing this I believe
[08:44] <stickupkid> manadart, if you get a second as well --> https://github.com/juju/juju/pull/11611
[08:45] <stickupkid> precursor to more work around endpoint binding changes
[11:28] <stickupkid> manadart, got a sec?
[11:29] <manadart> stickupkid: Yep.
[11:29] <stickupkid> in daily
[13:07] <rick_h> petevg: ping, do you have a few I can steal?
[13:07] <petevg> rick_h: a few minutes? Not yet. But I do after the daily sync.
[13:08] <rick_h> petevg: rgr ty
[13:08] <petevg> np
[14:52] <stickupkid> manadart, I'm guessing for endpoint bindings that have an alpha space, we should be ignoring that for the machine topology?
[14:54] <manadart> stickupkid:
[14:54] <manadart> Hmm.
[14:54] <manadart> I don't think we are ready for always spaces yet, so yes.
[14:55] <stickupkid> manadart, thought as much, just trying to hammer out the tests around this
[14:59] <manadart> stickupkid: Actually, this won't work for O7k.
[14:59] <stickupkid> manadart, what do you mean?
[15:00] <manadart> stickupkid: If we have a charm with bindings to spaces alpha and beta, we'd only get a NIC in beta and the charm wouldn't work.
[15:00] <stickupkid> ah, right right
[15:01] <stickupkid> so you'd end up with space topology all the time then I believe
[15:01] <stickupkid> manadart,
[15:02] <manadart> stickupkid: The case where you could ignore it and omit generation is when there is only one space, because that will be alpha and the operator doesn't care about spaces.
[15:02] <stickupkid> right right
[15:02] <manadart> I mean omit generation of the topology.
[15:02] <manadart> Yeah, I think that's the rule.
[16:24] <stickupkid> manadart, PR is ready for re-review
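The rule manadart and stickupkid converge on above can be sketched as: generate the machine's space topology unless every endpoint binding is in the default "alpha" space, since an alpha-only binding set means the operator has expressed no interest in spaces. A hypothetical Go sketch (function and constant names are illustrative, not Juju's actual API):

```go
package main

import "fmt"

// alphaSpace is Juju's default network space name.
const alphaSpace = "alpha"

// needsSpaceTopology reports whether machine network topology
// generation is required for the given endpoint bindings: it is
// skipped only when all bindings are in the default alpha space.
func needsSpaceTopology(boundSpaces []string) bool {
	for _, s := range boundSpaces {
		if s != alphaSpace {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(needsSpaceTopology([]string{"alpha"}))         // false
	fmt.Println(needsSpaceTopology([]string{"alpha", "beta"})) // true
}
```

This also captures the O7k caveat above: a charm bound to both alpha and beta still needs the full topology, so only the single-space (alpha-only) case is exempt.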
[17:21] <josephillips> hi
[17:22] <josephillips> hi/
[17:22] <josephillips> I already have Ubuntu OpenStack deployed with Juju and KVM; is it possible to assign some nodes to run on lxd and others with kvm?
[17:22] <rick_h> josephillips: for the nova compute nodes?
[17:23] <josephillips> yep
[17:23] <josephillips> I want to designate some nodes with kvm and other nodes with lxd
[17:23] <rick_h> josephillips: hmm, have to check with the OpenStack folks. There was a lxd charm at one point but I'm not sure how that's managed today.
[17:23] <rick_h> beisner: do you know someone that can point josephillips in the right direction? ^
[17:24] <josephillips> is it possible to perform a config per unit and not per application?
[17:24] <beisner> o/
[17:25] <rick_h> josephillips: no, the idea of applications is that the units are consistent, so you'd definitely need to split into two sets of units
[17:25] <josephillips> https://github.com/openstack-archive/charm-lxd
[17:26] <josephillips> because per this documentation, the usage is changing virt-type on nova-compute
[17:26] <josephillips> if I do that, it will change kvm to lxd on all nodes
[17:26] <beisner> josephillips, rick_h: the nova-lxd charm and hypervisor work is deprecated. KVM is the hypervisor that we support.
[17:26] <beisner> with the nova charms, that is.
[17:27] <rick_h> beisner: ok, I wasn't sure if there was a new path for lxd as a hypervisor. Thanks for the guidance
[17:27] <josephillips> any reason for that?
[17:28] <beisner> the main reason was: no actual adoption of it vs. the effort to maintain and develop it.
[17:28] <beisner> s/no/very little/
[17:28] <josephillips> but OpenStack itself keeps supporting it, right?
[17:29] <beisner> we are openstack itself in this context.
[17:29] <josephillips> oh
[17:30] <beisner> josephillips: virt-type lxc technically is all vanilla and might just work, but we do not validate it, nor do we put it forth for users to use.
[17:30] <beisner> via charms
[17:31] <josephillips> oh, got it, but that still has the same problem as lxd
[17:31] <josephillips> if I do that, will it replace kvm with lxc on all compute nodes?
[17:32] <beisner> josephillips: yes. charm config is application-wide.
[17:32] <josephillips> another question
[17:32] <beisner> josephillips: you can deploy two nova-computes as two differently-named applications and that works
[17:32] <josephillips> oh
[17:33] <josephillips> how can I do that?
[17:33] <beisner> "works" in the sense of you can have two differently-configured personalities of nova-compute units
[17:34] <beisner> not "works" in the sense that you will succeed with virt-type lxc.
[17:34] <beisner> :)
[17:35] <beisner> josephillips: all the same relations, just deploy another nova-compute with a name like nova-compute-foo.
[17:35] <josephillips> do I have to download the charm locally to do that?
[17:35] <beisner> ie. nova-compute-bar and nova-compute-foo would exist in the deployment.
[17:35] <beisner> you can do it in a bundle
[17:36] <beisner> also, you could download a local copy
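beisner's two-application approach above could look like this in a bundle; the application names, unit counts, and option values are illustrative, and as noted in the discussion virt-type lxc is unvalidated:

```yaml
# Hypothetical bundle fragment: two differently named nova-compute
# applications, each with its own application-wide config.
applications:
  nova-compute-kvm:
    charm: cs:nova-compute
    num_units: 2
    options:
      virt-type: kvm
  nova-compute-lxc:
    charm: cs:nova-compute
    num_units: 1
    options:
      virt-type: lxc
```

Each application would then need the same relations added separately, as beisner notes.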
[17:37] <josephillips> and there are no plans for other container support in future?
[17:38] <beisner> josephillips: lots of container support around here for sure :-)
[17:39] <beisner> josephillips: nova driving lxd is the specific use case that isn't in development. lxd/lxc are in full-gear adoption and use, just not with the nova shim.
[17:40] <beisner> josephillips: have you checked out lxd clustering?
[17:40] <josephillips> nope
[17:41] <josephillips> I was looking for a solution where users can create lxc containers on nodes for databases
[17:44] <beisner> josephillips: gotcha. so you can do that with juju on openstack in kvm instances; also, circling back to the lxd clustering (which is sans openstack): https://linuxcontainers.org/lxd/docs/master/clustering
[17:48] <josephillips> i will check that
=== sai_p___ is now known as sai_p_
[20:08] <jim31> hey, i was hoping someone might be able to help me with an issue i'm experiencing trying to enable kubeflow on microk8s. the issue arises with the mongodb pod complaining about what i assume is ipv6 binding. i tried disabling ipv6 on lxd, but it doesn't seem to have fixed anything.
[20:08] <jim31> 2020-05-21T20:07:28.442+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true, ipv6: true, port: 37017, ssl: { PEMKeyFile: "/var/lib/juju/server.pem", PEMKeyPassword: "<password>", mode: "requireSSL" } }, replication: { oplogSizeMB: 1024, replSet: "juju" }, security: { authorization: "enabled", keyFile: "/var/lib/juju/shared-secret" },
[20:08] <jim31> storage: { dbPath: "/var/lib/juju/db", engine: "wiredTiger", journal: { enabled: true } }, systemLog: { quiet: true } }
[20:08] <jim31> 2020-05-21T20:07:28.443+0000 I STORAGE  [initandlisten] exception in initAndListen std::exception: open: Address family not supported by protocol, terminating
[20:09] <jim31> this happens when calling the command microk8s.juju --debug bootstrap microk8s --config juju-no-proxy=10.0.0.1
[20:10] <jim31> i need to pass ipv6: false i think to the mongo pod but am lost on where to do that
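A hedged note on the mongod failure above: "Address family not supported by protocol" with ipv6: true usually means the host or container cannot create IPv6 sockets at all (IPv6 disabled in the kernel), so the host running microk8s is the place to check rather than LXD config. This Go probe (illustrative, not part of Juju) tries to bind an IPv6 loopback listener, which is roughly the operation mongod is failing at:

```go
package main

import (
	"fmt"
	"net"
)

// ipv6Available reports whether this host can bind an IPv6 loopback
// socket. If this returns false, a daemon configured with ipv6: true
// (like Juju's mongod here) can fail with "Address family not
// supported by protocol".
func ipv6Available() bool {
	ln, err := net.Listen("tcp6", "[::1]:0")
	if err != nil {
		return false
	}
	ln.Close()
	return true
}

func main() {
	fmt.Println("IPv6 bindable:", ipv6Available())
}
```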
[22:05] <wallyworld> thumper: i commented on the bug - their log level was <root>=WARNING, so yeah, not much got included :-)
[22:16] <wallyworld> thumper: one small step https://github.com/juju/juju/pull/11615
[22:57] <timClicks> kelvinliu: will that bootstrap-to-EKS PR land in Juju 2.8.0?
[22:58] <kelvinliu> timClicks: sry, it should target the develop branch
[22:58] <timClicks> kelvinliu: no need to apologise :)
[23:00] <wallyworld> we want it for 2.8
[23:10] <wallyworld> since so far it's a small CLI tweak only
[23:41] <kelvinliu> wallyworld: let's discuss the target branch for eks?
[23:42] <wallyworld> ok
[23:55] <wallyworld> thumper: sigh, the cmr thing all worked for me on 2.8, will have to test with 2.7. you tested with 2.8 though right?

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!