/srv/irclogs.ubuntu.com/2019/10/18/#juju.txt

[00:53] <wallyworld> hpidcock: once you've put up your init container PR, I'll swap you https://github.com/juju/juju/pull/10754
[00:55] <hpidcock> might be a bit longer, I'll check it out when I'm back
[00:56] <wallyworld> no rush
[01:33] <thumper> wallyworld: did you know that all the caas workers use "juju.workers" as the base logging module, but the rest of the workers use "juju.worker"?
[01:33] * thumper is fixing as he goes
[01:33] <wallyworld> sigh
[01:33] <wallyworld> hard to get good help nowadays
[01:33] <thumper> heh
[01:33] <thumper> I'm almost done
[01:33] <wallyworld> i put it there to see if you were paying attention
[01:33] <thumper> have about two more I think
[01:52] <thumper> wallyworld: last 7 workers: https://github.com/juju/juju/pull/10756
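For context on the rename in the PR above: juju workers get their loggers from the loggo package, so the fix is just a matter of which base module name each worker passes to GetLogger. A minimal sketch of the pattern (the caas worker name here is made up):

// Rough sketch only: juju workers obtain their loggers from
// github.com/juju/loggo, and the fix above standardises the caas workers
// on the same "juju.worker" base module as everything else.
// "caasexample" is an invented worker name for illustration.
package caasexample

import "github.com/juju/loggo"

// Before: the caas workers used the odd-one-out "juju.workers" prefix.
// var logger = loggo.GetLogger("juju.workers.caasexample")

// After: consistent with the rest of the workers.
var logger = loggo.GetLogger("juju.worker.caasexample")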
[01:52] <kelvinliu> wallyworld: got this PR to add ResourceNames and NonResourceURLs support on RBAC rules, +1 plz thanks! https://github.com/juju/juju/pull/10755
[01:52] <wallyworld> i'm popular today
[01:52] <kelvinliu> ur always popular lol
[01:54] <wallyworld> depends who you ask :-)
[02:01] <wallyworld> thumper: kelvinliu: both lgtm
[02:01] <thumper> thanks wallyworld
[02:03] <kelvinliu> thanks wallyworld
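As background to kelvinliu's PR: ResourceNames and NonResourceURLs are standard Kubernetes RBAC fields that the k8s charm spec can now express. The sketch below is plain k8s.io/api/rbac/v1 usage to show what the two fields mean; it is not code from the juju PR, and the config map name is invented.

// Illustration of the underlying Kubernetes RBAC fields the PR exposes.
package rbacexample

import rbacv1 "k8s.io/api/rbac/v1"

func exampleRules() []rbacv1.PolicyRule {
	return []rbacv1.PolicyRule{
		{
			// ResourceNames narrows a rule to specific named objects.
			APIGroups:     []string{""},
			Resources:     []string{"configmaps"},
			ResourceNames: []string{"my-config"},
			Verbs:         []string{"get", "watch"},
		},
		{
			// NonResourceURLs cover endpoints like /healthz or /metrics;
			// they are only meaningful in ClusterRole rules and are used
			// instead of Resources, not alongside them.
			NonResourceURLs: []string{"/metrics"},
			Verbs:           []string{"get"},
		},
	}
}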
[02:28] <hpidcock> wallyworld: just running through the QA steps but the code looks good
[02:29] <wallyworld> for your branch?
[02:33] <hpidcock> your branch
[02:34] <wallyworld> oh, nice ok
[03:54] <stub> wallyworld: I just reopened https://bugs.launchpad.net/juju/+bug/1830252 , but I'm working on second-hand reports rather than direct experience.
[03:54] <mup> Bug #1830252: k8s charm sees no egress-subnets on its own relation <k8s> <juju:New for wallyworld> <https://launchpad.net/bugs/1830252>
[03:55] <wallyworld> stub: ok, will look. it would be good to get some more detail, how to repro etc
[03:56] <stub> davigar15 tripped over it working on his k8s prometheus stuff
[03:57] <wallyworld> just reading the bug, maybe it's a charm-helpers issue, not sure, will look
[03:58] <stub> Best I can tell, network_get works to get the ingress-address, but pulling it from the relation data like charmhelpers does doesn't. And we can fix charm-helpers if necessary I think.
[04:03] <wallyworld> stub: it may well be juju rather than the charm helpers code. need to look. but it is a different issue IMO. i'd prefer to open a new bug
[04:04] <wallyworld> although i guess it's all in the same area
[04:04] <wallyworld> i'll comment on the bug
[04:09] <wallyworld> stub: the juju code looks ok i think. as noted in a comment on the bug, ingress-address is not guaranteed to be available immediately when a relation is joined, because the pod and service need time to spin up. so there will be times early on when the relation data will not have the value. the charm needs to deal with that
[04:11] <stub> Yeah, I was surprised the workaround worked. But apparently it does with a freshly deployed service.
[04:11] <wallyworld> it may be a timing thing
[04:11] <wallyworld> it would be good to know whether the value shows up if a sleep or something were added
[04:12] <wallyworld> network-get and the relation data use the same code to get the ingress address
[04:13] <wallyworld> it could also be that juju does not react when the address info becomes available and update the relation settings
[04:13] <wallyworld> so i will investigate that aspect to be sure
[04:14] <wallyworld> there *could* be a juju issue, not sure yet
[04:14] <wallyworld> there's been a fair bit of refactoring of the network code to do the spaces work this cycle
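The practical upshot of wallyworld's point is that a k8s charm cannot assume ingress-address is already in the relation data on the first hook; it has to tolerate the gap and retry or fall back to network-get. Hooks are just executables, so here is a rough Go illustration of that fallback pattern; the endpoint and unit names are invented, the exact hook-tool flags should be checked against the juju release in use, and real charms would usually do this through charm-helpers instead.

// Rough illustration of the "deal with a not-yet-populated ingress-address"
// pattern from a hook's point of view. Not code from juju or charm-helpers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ingressAddress tries the remote unit's relation data first and, if the
// value has not been published yet, falls back to network-get for our own
// side of the endpoint. An empty result means "try again on a later hook".
func ingressAddress(remoteUnit, endpoint string) (string, error) {
	out, err := exec.Command("relation-get", "ingress-address", remoteUnit).Output()
	if err == nil && strings.TrimSpace(string(out)) != "" {
		return strings.TrimSpace(string(out)), nil
	}
	// Relation data not populated yet (e.g. the k8s service is still
	// spinning up); ask network-get instead.
	out, err = exec.Command("network-get", endpoint, "--ingress-address").Output()
	if err != nil {
		return "", fmt.Errorf("ingress address not available yet: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	addr, err := ingressAddress("prometheus/0", "monitoring")
	if err != nil {
		// Not fatal: just wait for a later relation-changed hook.
		fmt.Println("no ingress address yet:", err)
		return
	}
	fmt.Println("ingress address:", addr)
}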
[04:30] <thumper> https://github.com/juju/worker/pull/13
[04:32] <wallyworld> stub: do you know if the issue is on juju 2.6 or 2.7?
[04:32] <wallyworld> it's not clear from the prometheus MP, but i may have missed it
[04:33] <wallyworld> thumper: lgtm
[04:33] <thumper> wallyworld: cheers
[04:42] <stub> I don't know, sorry
[04:42] <wallyworld> stub: no worries, i opened bug 1848628 and subscribed you and david and asked for more info
[04:42] <mup> Bug #1848628: relation data not updated when unit address changes <k8s> <juju:Incomplete> <https://launchpad.net/bugs/1848628>
[04:42] <wallyworld> i closed the other one again
[04:45] <wallyworld> hpidcock: thanks for review. init container takes precedence, but after that, for monday before we cut beta1, there's a follow-up to add --watch to show-task https://github.com/juju/juju/pull/10757
[08:52] <manadart> achilleasa: Reviewing 10750 now. Can you look over https://github.com/juju/juju/pull/10751 ?
[08:53] <achilleasa> manadart: sure
[10:22] <achilleasa> manadart: lgtm, left some minor comments about using the default space id const; running QA steps
[10:27] <manadart> Ta.
[11:15] <nammn_de> achilleasa: let's say i want to deploy to an aws public subnet (spaces -> using public subnet), is there something i need to take care of or should it work without additional action?
[11:23] <nammn_de> is there a way to see which constraints are currently applied to a machine?
[11:25] <rick_h> nammn_de: juju show-machine X
[11:25] <rick_h> nammn_de: should show the constraints
[11:41] <achilleasa> rick_h: got a few min for a quick chat (no rush, can wait till standup time)
[12:38] <nammn_de> manadart: regarding the bug you filed about the version not getting parsed correctly: did you look into it a bit deeper? This is what I found out: https://bugs.launchpad.net/juju/+bug/1848149/comments/3 . Wdyt?
[12:38] <mup> Bug #1848149: Charm Deployed or Upgrade from Dir Ignores "version" <juju:Triaged> <https://launchpad.net/bugs/1848149>
[12:47] <manadart> nammn_de: I *suspect* we have to add the version member/accessor to CharmDir and populate it with charm.ReadVersion. Beyond that I am not sure how it will work on the receiver side. Haven't looked further.
[12:54] <nammn_de> manadart: just looked into that dir, would make sense.
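A rough sketch of the approach manadart describes for bug 1848149: have the charm-directory type carry the version, populated from the charm's version file when the directory is read rather than only when it is archived. The field, function, and file names below are assumptions for illustration, not the actual juju/charm patch.

// Sketch of the idea discussed above, not the actual juju/charm change:
// have the charm-directory representation carry the version read from the
// charm's "version" file, so it is available even when the charm is
// deployed straight from a directory instead of an archive.
package charmdir

import (
	"os"
	"path/filepath"
	"strings"
)

// Dir is a stand-in for charm.CharmDir; the real type has more fields.
type Dir struct {
	Path    string
	version string
}

// readVersion mirrors what a charm.ReadVersion-style helper might do:
// return the contents of the "version" file at the charm root, if any.
func readVersion(path string) (string, error) {
	data, err := os.ReadFile(filepath.Join(path, "version"))
	if os.IsNotExist(err) {
		return "", nil // no version file is not an error
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

// ReadDir populates the version at load time rather than only when the
// directory is archived, which is roughly the fix being discussed.
func ReadDir(path string) (*Dir, error) {
	v, err := readVersion(path)
	if err != nil {
		return nil, err
	}
	return &Dir{Path: path, version: v}, nil
}

// Version is the accessor manadart mentions adding.
func (d *Dir) Version() string { return d.version }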
[13:19] <mbeierl> stub: thanks for your help.  I had read that already, but could not get the specific version of ansible from the PPA I wanted to use working.  It would appear that I needed to put the packages: line in layer.yaml and the repo in config.yaml.  I could not get extra_packages to do anything.  Either way, I have the PPA version of ansible loaded now.
[15:38] <achilleasa> hml: running state tests and deploying to lxd/guimaas to test my patch
[15:38] <hml> achilleasa: rgr
[15:41] <nammn_de> manadart achilleasa stickupkid any way that this can be false? https://github.com/nammn/charm/blob/a595818559134f9e9981a08a32cd9de76cb478f0/charmdir.go#L444
[15:41] <nammn_de> am i overlooking something or is that just a mistake?
[15:41] <achilleasa> hml: actually if you can push your latest changes for lxd I can append my changes, test and push
[15:42] <stickupkid> nammn_de, yeah it seems like that should just return
[15:43] <hml> achilleasa: haven’t kicked the lxd issue yet
[15:43] <stickupkid> nammn_de, yeah, doesn't seem right tbh https://github.com/nammn/charm/commit/33dbb063fecda733f37b4dc0e476fd3f04d28c6d
[15:43] <hml> achilleasa: i’m using Map() but looking at the code, that should be okay with the other changes… just missing something.  the spacenames were used to map spaceInfos… with the spaceInfos being used, not the map key.  need to track it
[15:47] <achilleasa> hml: I will try to deploy the defender/invader charms on machines to verify that the hash works
[15:48] <hml> achilleasa: you’ll EOD hours before me.  if you’re reasonably confident of your changes, push it to the PR… and i’ll take it from there.
[15:48] <achilleasa> (which means that temporarily we will be out of maas nodes)
[15:48] <achilleasa> (multi-nic ones that is)
[15:48] <hml> ah…
[15:48] <achilleasa> just need 5-10min to test
[15:49] <hml> achilleasa: i’ll release the machines for you then.
[15:49] <hml> achilleasa: was about to get lunch anyways
[15:49] <hml> achilleasa: you should be good to use the dual nic machines..
[15:50] <achilleasa> hml: yeap, got them. thanks
[16:47] <nammn_de> manadart rick_h: this should fix the bug we were talking about before: https://github.com/juju/charm/pull/294 . I still need to test it by changing the Gopkg.toml and sending a PR against juju main
[16:49] <manadart> nammn_de: Should be easy enough to verify by applying the patch straight to the dep tree and building juju from it.
[16:49] <nammn_de> manadart: oh yeah, you are right xD
[17:04] <Fallenour> hey does anyone know the background on this video? https://www.youtube.com/watch?v=CidoBy3iqUw
[17:04] <Fallenour> I want to build this and replicate it, but they didn't show how they built the HA juju controller cluster on the clustered LXD nodes
[17:05] <Fallenour> @bdx, @rick_h
[17:10] <nammn_de> manadart: should be working; to be sure, i'm running the unit tests on juju as well. But does the change make sense to you from the perspective of the bug you filed?
[17:14] <manadart> nammn_de: Yes; pretty much along the lines of what I thought we'd have to do.
[17:16] <nammn_de> manadart: one thing I am not 100% sure about is that, by extracting the version generation from "archiveTo" to the initial "charmdir" creation, any updates which happen between "creation" and "archiveTo" would not be added. I am not sure whether there can be any updates anyway, or whether they should be included. I will add this to the PR as a "disclaimer" to think about while going through
[17:36] <rick_h> Fallenour: howdy, so it was just a normal bootstrap to the cloud once it was added, and then juju enable-ha
[17:36] <rick_h> Fallenour: hitting an issue with getting the cluster going, or the bootstrap, or the making-it-HA part?
[17:42] <Fallenour> @rick_h, I'm looking to rebuild my current stack, and instead of having 3 controllers on 3 physical machines, I wanted to reduce it to 3 lxd containers on 3 clustered lxd machines in HA to reduce power, so I could add more machines given current power constraints.
[17:43] <rick_h> Fallenour: definitely, juju + lxd cluster is a great dense setup for workstation/etc
[17:43] <Fallenour> rick_h, will it work well in prod?
[17:43] <Fallenour> rick_h, rephrase, well enough?
[17:44] <rick_h> Fallenour: it'll work in prod, but there are limitations you have to be aware of. In a lxd cluster all systems are treated as homogeneous, so one flat network/etc
[17:44] <rick_h> Fallenour: and so it's hard to do things like different spaces, storage, etc
[17:44] <Fallenour> rick_h, currently building the LXD nodes now. yeah, I expanded to a /22 to give me plenty of space, with HA included. I should have enough room for roughly 200-300 different applications, which is more than enough for me, especially given current power constraints.
[17:45] <rick_h> Fallenour: I've not tried to use it in anger in production to see what assumptions or limitations it fully bakes in
[17:46] <Fallenour> rick_h, i'm getting pretty clever with it: adding lxd using cluster mode, then adding ceph storage using the other 4 drives (raid 1 for the OS, raid 0 for the ceph drives), then gonna deploy ceph to the systems via juju, let juju manage ceph, build volumes on ceph, and store data on those via databases.
[17:46] <Fallenour> rick_h, currently lxd doesn't support ceph and HA concurrently, but it does support HA and extra storage spaces, sooo...
[17:49] <rick_h> Fallenour: lol
[17:49] <rick_h> Fallenour: make sure to write up your project/hits and misses on discourse! :)
[17:49] <rick_h> I can't wait to hear how it goes
[17:50] <Fallenour> rick_h, hopefully it's not a full-on trainwreck!
[17:50] <rick_h> Fallenour: naw, trains are cool and rarely wreck :P
[17:50] <Fallenour> rick_h, But! I do believe if it works, I should be able to take advantage of the best of both worlds, and if this works, it's going to do wonders for the CI/CD food chain, because people absolutely loathe service containers vs system containers for security.
