[00:06] wallyworld: +1
[00:58] hpidcock: or kelvinli_: for today sometime, no rush https://github.com/juju/juju/pull/10848
[00:59] np
[01:01] wallyworld: so we wanna make it support LB? I thought this change was just for microk8s
[01:03] it's for k8s in general, to handle the case where our opinionated default service type isn't suitable, eg we say loadbalancer but they just want clusterip, or vice versa
[01:03] and to support externalName
[01:03] this is just for the controller service at the moment
[01:04] to plug the gap where you just cannot bootstrap due to different k8s cluster setups
[01:04] ic
[01:04] the initial use case was microk8s, to add in externalName support
[01:05] but it's applicable everywhere really
[01:05] ty for review
[03:21] hpidcock: for 2.7.1, this adds the proxy-update-worker to the caas operator. it's already there for the k8s controller https://github.com/juju/juju/pull/10855
[03:21] alright
[04:49] wallyworld: sorry got distracted, so the proxyupdater worker writes out conf files with the proxies so that hooks can use them, correct?
[04:50] hpidcock: that's only on iaas. for k8s (and also iaas) it ensures that http clients constructed have the proxy values set correctly
[04:50] we already run that worker in the k8s and iaas controller
[04:50] and unit agent
[04:50] but not the k8s operator
[04:51] juju automatically adds the controller ip address to no-proxy
[04:52] to augment anything else that the user sets. so having that worker ensures that the operator agent can always see the controller
[04:52] hmm did you test exec with this?
[04:54] not exec. is it failing for you?
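The service-type override discussed at the top would presumably surface as bootstrap config. A hedged sketch of what that invocation might look like — the key names `controller-service-type` and `controller-external-name`, the value `external`, and the hostname are all assumptions for illustration, not confirmed in the conversation:

```shell
# Hypothetical: override Juju's opinionated default controller service type
# at bootstrap time, e.g. on a cluster where loadbalancer services are
# unavailable. Key names and values here are assumed, not confirmed above.
juju bootstrap my-k8s-cloud \
    --config controller-service-type=external \
    --config controller-external-name=controller.example.com
```

The point of the change is exactly this escape hatch: when the default (loadbalancer vs clusterip) doesn't match the cluster's setup, bootstrap can still proceed.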
[04:55] no I'm just trying to think of things that the operator does that aren't talking to the controller
[04:55] i can't think of a pathway where it would fail
[04:55] we might need to add the k8s controller ips to the no-proxy
[04:55] juju does that automatically
[04:55] 10/10
[04:55] it adds them to juju-no-proxy
[04:55] I mean, the kube api
[04:56] not the juju controller
[04:56] i see that as a cluster setup thing?
[04:56] outside of juju
[04:56] the juju-proxy settings are just to make juju agents work
[04:56] and charms
[04:57] the charm can see the juju-proxy values
[04:57] and can use them as it needs
[04:57] that's one of the reasons why we deviated from the standard proxy values
[04:57] yeah but `settings.SetEnvironmentValues()` sets env vars on the agent
[04:57] yes
[04:58] and we expose those JUJU_ values to the charm hook context
[04:59] hmm well the vendored github.com/juju/proxy uses "HTTPS_PROXY" etc
[04:59] we use different values on purpose
[05:00] so as not to interfere with any system values, and to let the charms decide how they want to use them
[05:00] we kept getting bitten by corner cases
[05:00] and decided to make the juju-configured proxy values totally separate
[05:01] and juju stuff (controller, agents, charms) would use those values
[05:08] ok so we need to document that if you specify proxy settings you may need to add your k8s apiserver address to no_proxy
[05:08] do you know if that is specified anywhere?
[05:08] hmmm, in the cloud metadata endpoint?
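The convention described above — Juju exposing its own JUJU_-prefixed proxy values to the hook context instead of clobbering the system-wide ones — could be used from a charm hook roughly like this. A minimal sketch; the exact variable name `JUJU_CHARM_HTTPS_PROXY` is an assumption:

```shell
#!/bin/sh
# Hypothetical charm-hook logic: prefer the Juju-managed proxy value over any
# system-wide HTTPS_PROXY, since the charm gets to decide how to apply them.
# The JUJU_CHARM_HTTPS_PROXY name is assumed for illustration.
pick_proxy() {
    printf '%s' "${JUJU_CHARM_HTTPS_PROXY:-${HTTPS_PROXY:-}}"
}

HTTPS_PROXY=http://system-proxy:3128
JUJU_CHARM_HTTPS_PROXY=http://juju-proxy:3128
echo "hook would use: $(pick_proxy)"
```

Because the names are deliberately separate, a charm that wants the system behaviour can still read `HTTPS_PROXY`; nothing is overwritten underneath it.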
[05:09] it would be the same issue for aws or vm clouds i think
[05:09] i don't think the k8s api endpoint would be any different in that regard
[05:10] i am not sure what our doc says off hand
[05:11] ok fair enough
[05:11] thanks for clarifying
[05:12] there is some doc https://jaas.ai/docs/offline-mode-strategies
[05:12] https://jaas.ai/docs/configuring-offline-usage
[05:12] which seems to cover what to put in no-proxy
[05:13] but maybe doesn't call it out explicitly
[07:37] wallyworld: tiny PR PTAL :D https://github.com/juju/juju/pull/10858
[07:38] ok
[07:42] anastasiamac: i don't get why we don't think controller clouds have credentials. they must have credentials as they are in use
[07:42] if i add-k8s and use that cluster, there's a credential there
[07:42] wallyworld: i don't think we get this info from the api...
[07:43] when we list clouds
[07:43] irrespective... is it really relevant for controller clouds?..
[07:43] just trying to understand the root cause
[07:43] we'd want to list all of them, no?
[07:43] for the controller i'd think so
[07:44] if the user has perm to see them
[07:44] wallyworld: so whether controller clouds have creds or not does not matter - we want them :D
[07:44] so this check is redundant and there is no need to change the api..
[07:45] sgtm
[07:46] \o/
[10:29] manadart: can you take a quick look at this one? https://github.com/juju/juju/pull/10833
[10:29] we can change something for review :P
[11:01] nammn_de: Looks good. 2 suggestions.
[11:06] stickupkid achilleasa you guys worked with it before. Want to give a really small review? https://github.com/juju/charmrepo/pull/156 Description has the reason
[11:07] nammn_de: looking
[11:07] nammn_de, do you want to wire it up in juju and change the dep package source
[11:07] ?
[11:08] stickupkid: was planning to do so in the near future.
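The documentation point settled above — if you set proxy config, you may also need the kube apiserver address in the no-proxy list — would look roughly like this with the `juju-no-proxy` key mentioned in the conversation. A sketch only: the `juju-http-proxy`/`juju-https-proxy` key names, the squid host, and `10.152.183.1` (a typical microk8s kubernetes-service ClusterIP) are illustrative assumptions:

```shell
# Hypothetical model config: route agent/charm traffic through a proxy, but
# keep the k8s apiserver address (10.152.183.1 here, illustrative) reachable
# directly via the juju-no-proxy list. Key names other than juju-no-proxy
# are assumed, not confirmed in the discussion above.
juju model-config \
    juju-http-proxy=http://squid.internal:3128 \
    juju-https-proxy=http://squid.internal:3128 \
    juju-no-proxy=127.0.0.1,localhost,10.152.183.1
```

As noted in the chat, Juju adds the controller address to the no-proxy set automatically; the apiserver address is the part the operator would have to add by hand.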
Either I change from juju/juju how we create the repo and charmarchive, or I add something to charmrepo
[11:08] achilleasa manadart: ta
[11:09] nammn_de, i'd get everything full end to end, so you don't see churn?
[11:09] stickupkid: dependency-wise, changing the place to tmp for charm testing is actually quite the hassle, I must say :/
[11:09] nammn_de: where are we going to use this?
[11:11] achilleasa: I was planning to migrate some code here: https://github.com/juju/juju/pull/10836/files
[11:11] reason: currently our charm tests are taken from the juju/juju path as a repo and used from there. Thus we create a version string from there, and the tests fail because they expect us to parse the version file. Solution we discussed: moving them to a tmp path
[11:12] As it seems, a lot of tests are using the `repo` from `charmrepo`. But as it currently is, I cannot migrate the code, because it forces me to use a relative path
[11:12] nammn_de: fair enough; thanks for the info
[11:13] achilleasa: thanks 🦹‍♂️
[11:14] woah got the feeling that i have to touch 100 places (and repos) for one small change meeeeeh xD
[11:22] stickupkid: before adding the repo I tried a workaround to see whether it would work here: https://github.com/juju/juju/pull/10836/files A better way would be to change the test suite and create a tmp repo there and access it from the tests. But imo this would be more related to an additional pr. With the new change in repo I would "just" change this to be a "tmpRepo" https://github.com/juju/juju/blob/dfd597542ef537162a0ecff27e6a4095c23adc1e/testcharms/charm.go#L42 instead of a "tmpArchive" Wdyt?
[11:23] stickupkid: for now I only changed the places where the tests failed (2 places) to use a tmpcharm instead of our repo
[11:23] nammn_de, i'd go with the lightest touch atm
[11:23] stickupkid: great, on the same page here
[11:57] stickupkid: that's the final change i suggest, if tests go through.
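The tmp-path workaround discussed above can be sketched in a few lines: stage the test charms in a throwaway directory rather than serving them straight out of the juju/juju checkout, so nothing tries to derive a version from the repository itself. All paths below are illustrative stand-ins, not the real testcharms layout:

```shell
#!/bin/sh
# Sketch of the tmp-repo idea: copy charms into a temp directory and point the
# test suite there, instead of at a path inside the juju/juju tree.
src="$(mktemp -d)"                    # stand-in for a testcharms/charm-repo dir
mkdir -p "$src/quantal/dummy"
printf 'name: dummy\n' > "$src/quantal/dummy/metadata.yaml"

repo="$(mktemp -d)"                   # the tmp repo the tests would point at
cp -R "$src/." "$repo/"
ls "$repo/quantal/dummy"              # the copied charm, outside the checkout
```

The design point is isolation: once the repo lives under `mktemp -d`, version-file parsing and relative-path assumptions in the source tree can no longer leak into the tests.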
https://github.com/juju/juju/pull/10836
[11:58] nammn_de, yeah, let's wait for all the tests to go through, but looks much better
=== fenris is now known as Guest81494
[13:54] bdx: hi! found an issue with the memcache interface, my proposed fix is: https://github.com/omnivector-solutions/interface-memcache/pull/2
[14:08] manadart: I tried to incorporate your suggestions https://github.com/juju/juju/pull/10833
[14:16] stickupkid: any particular reason why the root test runner is not using #!/bin/bash?
[14:16] because portability
[14:16] achilleasa, i've said previously if people want to move to bash then we can do, but I think we don't really need to
[14:17] no 'set -o pipefail' on my box :-(
[14:17] yeah, that's bitten me
[14:17] THAT is the ONLY thing I want
[14:17] the workaround is to split :(
[14:18] but on the plus side it reduces the complexity of a one-liner
[14:20] stickupkid: so how can I make a helper func abort the test with an error message?
[14:20] achilleasa, let's try it, we can always improve
[14:21] stickupkid: what’s the expectation for which directory the integration tests are run from?
[14:22] hml, features should have their own, but we should have a group of tests that we expect to be groupable
[14:23] stickupkid: i’m having “fun” with specifying the path to a local bundle yaml
[14:23] hml, deploy = stuff around deploy, machines = add machines etc, cli = test failures, help messages
[14:23] stickupkid: right
[14:23] stickupkid: if you run main.sh blah blah blah... from which directory? juju? tests?
[14:23] tests
[14:24] tests is the root directory, everything changes into there and runs main.sh
[14:24] stickupkid: rgr, got confused by the “bundle” path in run_deploy_cmr_bundle()
[14:25] stickupkid: but i can change it
[15:00] hml: Mind taking a look at https://github.com/juju/juju/pull/10859 ?
[15:00] manadart: sure
[15:00] hml: Ta.
[15:00] stickupkid: review pls? https://github.com/juju/juju/pull/10860 I made some changes to the cmr bundle test in there.
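The "workaround is to split" remark above refers to the fact that POSIX sh has no `set -o pipefail`, so `cmd1 | cmd2` reports only `cmd2`'s exit status. A minimal sketch of the splitting workaround, with illustrative commands:

```shell
#!/bin/sh
# Portable stand-in for pipefail: instead of `producer | consumer`, run the
# producer into a temp file and check its exit status before consuming.
tmp="$(mktemp)"
if ! printf 'one\ntwo\n' > "$tmp"; then   # stand-in for the producer step
    echo "first step failed" >&2
    exit 1
fi
lines="$(grep -c '' < "$tmp")"            # consumer runs only after success
echo "lines: $lines"
rm -f "$tmp"
```

It is more verbose than a one-liner under bash, but as noted in the chat it keeps the runner on `#!/bin/sh` for portability, and each step's failure is visible rather than masked by the pipeline.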
Hopefully i wasn’t just running things wrong. :-D
[15:03] hml, looks spot on, let me test it
[15:10] manadart: do we have an O7k cloud with multiple zones to test with?
[15:10] hml: Don't know. Both serverstack and canonistack just list "nova".
[15:21] hml, almost, it's just paths.
[15:21] stickupkid: yeah, just saw the lint failures
[15:22] hml, once I applied the fixes, it worked as designed :D
[15:22] stickupkid: oh?
[15:22] hml, of course
[16:17] manadart: reviewed with 2 small things.
[16:19] hml: Thanks. The Cinder tests do verify the AZ at https://github.com/juju/juju/pull/10859/files#diff-2dc3b00e7504a446604d294e712eae0aR180
[16:20] manadart: ah, missed that. ty
[16:58] stickupkid: updated 10860
[17:52] hml, https://github.com/juju/juju/pull/10862
[17:53] stickupkid: looking
[17:58] stickupkid: hey… thought ./main.sh was the wrong way to run those tests. :-D
[20:55] if someone has time for a really small one-line PR? https://github.com/juju/charm/pull/298 Old PR forgot to remove an old logger
[20:57] nammn_de: timClicks got there before me. :-)
[20:58] ha, thanks timClicks, hml. Was just trying to get things running under the test suite and saw weird logging messages :-D
[20:59] nammn_de: shouldn't you shut down your computer?
[20:59] it's late there
[21:04] timClicks: haha don't mind me, I'm leaving. Have a nice day!
[21:07] night nammn_de