[00:53] hpidcock: once you've put up your init container PR, I'll swap you https://github.com/juju/juju/pull/10754
[00:55] might be a bit longer, I'll check it out when I'm back
[00:56] no rush
[01:33] wallyworld: did you know that all the caas workers use "juju.workers" as the base logging module, but the rest of the workers use "juju.worker"?
[01:33] * thumper is fixing as he goes
[01:33] sigh
[01:33] hard to get good help nowadays
[01:33] heh
[01:33] I'm almost done
[01:33] i put it there to see if you were paying attention
[01:33] have about two more I think
[01:52] wallyworld: last 7 workers: https://github.com/juju/juju/pull/10756
[01:52] wallyworld: got this PR to add ResourceNames and NonResourceURLs support on RBAC rules, +1 plz thanks! https://github.com/juju/juju/pull/10755
[01:52] i'm popular today
[01:52] ur always popular lol
[01:54] depends who you ask :-)
[02:01] thumper: kelvinliu: both lgtm
[02:01] thanks wallyworld
[02:03] thanks wallyworld
[02:28] wallyworld: just running through the QA steps but the code looks good
[02:29] for your branch?
[02:33] your branch
[02:34] oh, nice ok
[03:54] wallyworld: I just reopened https://bugs.launchpad.net/juju/+bug/1830252 , but I'm working on second-hand reports rather than direct experience.
[03:54] Bug #1830252: k8s charm sees no egress-subnets on its own relation
[03:55] stub: ok, will look. it would be good to get some more detail, how to repro etc
[03:56] davigar15 tripped over it working on his k8s prometheus stuff
[03:57] just reading the bug, maybe it's a charm helpers issue, not sure, will look
[03:58] Best I can tell, network_get works to get the ingress-address, but pulling it from the relation like charmhelpers does doesn't. And we can fix charm-helpers if necessary I think.
[04:03] stub: it may well be juju. reading the charm helpers code, need to look. but it is a different issue IMO. i'd prefer to open a new bug
[04:04] although i guess it's all in the same area
[04:04] i'll comment on the bug
[04:09] stub: the juju code looks ok i think. as noted in a comment on the bug, ingress-address is not guaranteed to be available immediately when a relation is joined because the pod and service need time to spin up. so there will be times early on when the relation data will not have the value. the charm needs to deal with that
[04:11] Yeah, I was surprised the workaround worked. But apparently it does with a freshly deployed service.
[04:11] it may be a timing thing
[04:11] it would be good to know if the value shows up when a sleep or something is added
[04:12] network-get and the relation data use the same code to get the ingress address
[04:13] it could also be that juju does not react when the address info becomes available and so does not update the relation settings
[04:13] so i will investigate that aspect to be sure
[04:14] there *could* be a juju issue, not sure yet
[04:14] there's been a fair bit of refactoring of the network code to do the spaces work this cycle
[04:30] https://github.com/juju/worker/pull/13
[04:32] stub: do you know if the issue is on juju 2.6 or 2.7?
[04:32] it's not clear from the prometheus MP, but i may have missed it
[04:33] thumper: lgtm
[04:33] wallyworld: cheers
[04:42] I don't know sorry
[04:42] stub: no worries, i opened bug 1848628 and subscribed you and david and asked for more info
[04:42] Bug #1848628: relation data not updated when unit address changes
[04:42] i closed the other one again
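(For readers following the bug above: the sketch below only illustrates the pattern wallyworld describes at [04:09]-[04:13] — on k8s the ingress address only exists once the pod/service has spun up, so something has to react to the later address-change event and publish the value into relation settings rather than assume it is present at relation-joined. All type and function names here are hypothetical, not juju's actual API.)

    // Package addresssync is a hypothetical sketch, not juju code.
    package addresssync

    import (
    	"fmt"
    	"time"
    )

    // addressWatcher stands in for whatever watcher fires when the unit's
    // address information changes (e.g. once the k8s service gets an IP).
    type addressWatcher interface {
    	Changes() <-chan string // emits the current ingress address, "" if unknown
    }

    // relationSettings stands in for the unit's settings on a relation.
    type relationSettings interface {
    	Set(key, value string) error
    }

    // syncIngressAddress waits for a non-empty address and then writes it to the
    // relation so the remote side's relation-changed hook can see it. A real
    // implementation would be a long-lived worker loop, not a one-shot helper.
    func syncIngressAddress(w addressWatcher, rel relationSettings, timeout time.Duration) error {
    	deadline := time.After(timeout)
    	for {
    		select {
    		case addr := <-w.Changes():
    			if addr == "" {
    				continue // pod/service not up yet; keep waiting
    			}
    			return rel.Set("ingress-address", addr)
    		case <-deadline:
    			return fmt.Errorf("ingress address not available after %s", timeout)
    		}
    	}
    }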
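(Context for kelvinliu's RBAC PR at [01:52]: ResourceNames and NonResourceURLs are standard fields on a Kubernetes PolicyRule. A minimal sketch using the upstream k8s.io/api/rbac/v1 types follows; the concrete resource names and URL paths are made up for illustration and are not taken from the PR.)

    package main

    import (
    	"fmt"

    	rbacv1 "k8s.io/api/rbac/v1"
    )

    func main() {
    	// ResourceNames restricts a rule to specific named objects rather than
    	// every object of that resource type.
    	namedSecretRule := rbacv1.PolicyRule{
    		APIGroups:     []string{""},
    		Resources:     []string{"secrets"},
    		ResourceNames: []string{"my-app-token"}, // illustrative name
    		Verbs:         []string{"get", "watch"},
    	}

    	// NonResourceURLs (valid in ClusterRoles) covers endpoints that are not
    	// backed by an API resource, such as health or metrics paths.
    	metricsRule := rbacv1.PolicyRule{
    		NonResourceURLs: []string{"/metrics", "/healthz"}, // illustrative paths
    		Verbs:           []string{"get"},
    	}

    	fmt.Printf("%+v\n%+v\n", namedSecretRule, metricsRule)
    }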
[04:45] hpidcock: thanks for review. init container takes precedence, but after that, for Monday before we cut beta1, there's a follow-up to add --watch to show-task https://github.com/juju/juju/pull/10757
[08:52] achilleasa: Reviewing 10750 now. Can you look over https://github.com/juju/juju/pull/10751 ?
[08:53] manadart: sure
[10:22] manadart: lgtm, left some minor comments about using the default space id const; running QA steps
[10:27] Ta.
[11:15] achilleasa: let's say i want to deploy to an aws public subnet (spaces -> using public subnet), is there something i need to take care of or should it work without additional action?
[11:23] is there a way to see which constraints are currently applied to a machine?
[11:25] nammn_de: juju show-machine X
[11:25] nammn_de: should show the constraints
[11:41] rick_h: got a few min for a quick chat (no rush, can wait till standup time)
[12:38] manadart: regarding the bug you filed about the version not getting parsed correctly: did you look into it a bit deeper? This is what I have found out: https://bugs.launchpad.net/juju/+bug/1848149/comments/3 . Wdyt?
[12:38] Bug #1848149: Charm Deployed or Upgrade from Dir Ignores "version"
[12:47] nammn_de: I *suspect* we have to add the version member/accessor to CharmDir and populate it via charm.ReadVersion. Beyond that I am not sure how it will work on the receiver side. Not looked further.
[12:54] manadart: just looked into that dir, would make sense.
[13:19] stub: thanks for your help. Had read that already but could not get the specific version of ansible from a PPA that I wanted to use working. It would appear that I needed to put the packages: line in layer.yaml and the repo in config.yaml. I could not get extra_packages to do anything. Either way, I have the PPA version of ansible loaded now.
[15:38] hml: running state tests and deploying to lxd/guimaas to test my patch
[15:38] achilleasa: rgr
[15:41] manadart achilleasa stickupkid any way that this can be false? https://github.com/nammn/charm/blob/a595818559134f9e9981a08a32cd9de76cb478f0/charmdir.go#L444
[15:41] am i overlooking something or is that just a mistake?
[15:41] hml: actually if you can push your latest changes for lxd I can append my changes, test and push
[15:42] nammn_de, yeah it seems like that should just return
[15:43] achilleasa: haven’t kicked the lxd issue yet
[15:43] nammn_de, yeah, doesn't seem right tbh https://github.com/nammn/charm/commit/33dbb063fecda733f37b4dc0e476fd3f04d28c6d
[15:43] achilleasa: i’m using Map() but looking at the code, that should be okay with the other changes… just missing something. the space names were used to map spaceInfos… with the spaceInfos being used, not the map key. need to track it
[15:47] hml: I will try to deploy the defender/invader charms on machines to verify that the hash works
[15:48] achilleasa: you’ll EOD hours before me. if you’re reasonably confident of your changes, push it to the PR… and i’ll take it from there.
[15:48] (which means that temporarily we will be out of maas nodes)
[15:48] (multi-nic ones that is)
[15:48] ah…
[15:48] just need 5-10min to test
[15:49] achilleasa: i’ll release the machines for you then.
[15:49] achilleasa: was about to get lunch anyways
[15:49] achilleasa: you should be good to use the dual-nic machines..
[15:50] hml: yeap, got them. thanks
[16:47] manadart rick_h: this should fix the bug we were talking about before: https://github.com/juju/charm/pull/294 . I still need to test it by changing the gopkg toml and sending a PR for the main juju repo
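(A rough sketch of the direction manadart suggests at [12:47] — give CharmDir a version member populated when the directory is read, and expose it via an accessor, instead of only computing the version in ArchiveTo. This is an approximation for readers of the log, not the actual diff in juju/charm#294; the field and helper names are guesses, and the caveat nammn_de raises later about changes made between reading the dir and archiving it still applies.)

    // Approximate sketch only; not the actual juju/charm code.
    package charm

    // CharmDir would gain a version field populated when the directory is read.
    type CharmDir struct {
    	Path    string
    	version string
    	// ... other existing fields elided
    }

    // ReadCharmDir would populate the version up front. Note: anything that
    // changes the version between ReadCharmDir and ArchiveTo would be missed.
    func ReadCharmDir(path string) (*CharmDir, error) {
    	dir := &CharmDir{Path: path}
    	v, err := readVersionFile(path) // stand-in for charm.ReadVersion
    	if err != nil {
    		return nil, err
    	}
    	dir.version = v
    	return dir, nil
    }

    // Version exposes the value so deploy/upgrade from a directory can use it.
    func (d *CharmDir) Version() string {
    	return d.version
    }

    func readVersionFile(path string) (string, error) {
    	// In the real repo this would read the charm's "version" file or derive
    	// it from VCS metadata; elided here.
    	return "", nil
    }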
[16:49] nammn_de: Should be easy enough to verify by applying the patch straight to the dep tree and building juju from it.
[16:49] manadart: oh yeah, you are right xD
[17:04] hey does anyone know the background on this video? https://www.youtube.com/watch?v=CidoBy3iqUw
[17:04] I want to build this and replicate it, but they didn't show how they built the HA juju controller cluster on the clustered LXD nodes
[17:05] @bdx, @rick_h
[17:10] manadart: should be working; to be sure i'm running the unit tests on juju as well. But does the change make sense to you from the perspective of the bug you filed?
[17:14] nammn_de: Yes; pretty much along the lines of what I thought we'd have to do.
[17:16] manadart: one thing I am not 100% sure about is that, by extracting the version generation from "archiveTo" to the initial "charmdir" creation, any updates which happen between "creation" and "archiveTo" would not be added. I am not sure whether there can be any updates anyway, or whether there should be. I will add this to the PR as a "disclaimer" to think about while going through
[17:36] Fallenour: howdy, so it was just a normal bootstrap to the cloud once it was added, and then juju enable-ha
[17:36] Fallenour: hitting an issue with getting the cluster going, or the bootstrap, or the making-ha part?
[17:42] @rick_h, I'm looking to rebuild my current stack, and instead of having 3 controllers on 3 physical machines, I wanted to reduce it to 3 lxd machines on 3 lxd machines in HA to reduce power so I could add more machines because of current power constraints.
[17:43] Fallenour: definitely, juju + lxd cluster is a great dense setup for workstation/etc
[17:43] rick_h, will it work well in prod?
[17:43] rick_h, rephrase, well enough?
[17:44] Fallenour: it'll work in prod, but there are limitations you have to be aware of. In an lxd cluster all systems are treated as homogeneous, so one flat network/etc
[17:44] Fallenour: and so it's hard to do things like different spaces, storage, etc
[17:44] rick_h, currently building the LXD nodes now. yea I expanded to a /22 to give me plenty of space, with HA included. I should have enough room for roughly 200-300 different applications, which is more than enough for me, especially given current power constraints.
[17:45] Fallenour: I've not tried to use it in anger in production to see what assumptions or limitations it fully bakes in
[17:46] rick_h, i'm getting pretty clever with it, adding lxd using Cluster, and then adding ceph storage, using the other 4 drives, on a raid 1 for OS, raid 0 for ceph drives, then gonna deploy ceph to the systems via juju, let juju manage ceph, and then build volumes on ceph, and store data on those via databases.
[17:46] rick_h, currently lxd doesn't support ceph and HA concurrently, but it does support HA and extra storage spaces, sooo...
[17:49] Fallenour: lol
[17:49] Fallenour: make sure to write up your project/hits and misses on discourse! :)
[17:49] I can't wait to hear how it goes
[17:50] rick_h, hopefully it's not a full-on trainwreck!
[17:50] Fallenour: naw, trains are cool and rarely wreck :P
[17:50] rick_h, But! I do believe if it works, I should be able to take advantage of the best of both worlds, and if this works, it's going to do wonders for the CI/CD food chain, because people absolutely loathe service containers vs system containers for security.