kelvinliu | tlm: have u figured out what the root cause is? | 01:05 |
---|---|---|
tlm | nah still tracing code at the moment | 01:10 |
tlm | just looks like the broker is loading the default creds from the container and not the kube-system ones | 01:10 |
kelvinliu | that's weird, when u bootstrap then juju add-k8s model1 microk8s, the controller model and model1 should be using the same credential | 01:37 |
tlm | should but my debug points to no | 01:38 |
tlm | the bug will be in our creds loading | 01:38 |
kelvinliu | tlm: I just had a double check, I log the credential(token) in newK8sBroker, it's the correct token in microk8s(cloud).microk8s(credential) | 01:46 |
kelvinliu | and it's the token in microk8s kubectl get secret/juju-credential-microk8s-token-hpdjs -n kube-system -o yaml | 01:47 |
kelvinliu | I think the credential should be correct. there might be some other issue | 01:47 |
tlm | two ticks will grab you my tests | 01:48 |
tlm | what field are you logging ? | 01:48 |
kelvinliu | logger.Criticalf("newK8sBroker newCfg -> %s", pretty.Sprint(newCfg)) | 01:48 |
kelvinliu | logger.Criticalf("newK8sBroker k8sRestConfig -> %s", pretty.Sprint(k8sRestConfig)) | 01:48 |
tlm | also bootstrap the controller under a different name then the default | 01:48 |
kelvinliu | i tested bootstrap to a different controller name. no problem as well | 01:52 |
tlm | will take a look | 01:55 |
tlm | yeah I agree after decoding the JWT | 01:59 |
tlm | hmmm back to the drawing board | 01:59 |
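[editor's note] The check tlm describes at 01:59 (decoding the JWT to confirm which credential the broker actually loaded) can be reproduced without a debugger. Below is a minimal sketch assuming a standard three-segment `header.payload.signature` service-account token; the token here is fabricated for illustration, and the commented-out command shows where the real one would come from (secret name taken from the log above):

```shell
#!/bin/sh
# Fetching the real token would look like this (not run here):
#   microk8s kubectl get secret juju-credential-microk8s-token-hpdjs \
#       -n kube-system -o jsonpath='{.data.token}' | base64 -d

# Fabricated token with a known payload, for illustration only.
payload_json='{"sub":"system:serviceaccount:kube-system:juju-credential-microk8s"}'
payload_b64=$(printf '%s' "$payload_json" | base64 | tr -d '=\n' | tr '/+' '_-')
token="header.$payload_b64.sig"

# Decode segment 2 of the JWT: undo base64url, re-pad to a multiple
# of 4 characters, then base64-decode. No signature verification.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
printf '%s' "$seg" | base64 -d
echo
```

Comparing this decoded payload against what `newK8sBroker` logs shows whether the controller model and model1 really hold the same credential.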
timClicks_ | can the application-version-set hook tool be executed outside of the update-status hook? (our docs seem to indicate that it can) | 02:48 |
=== timClicks_ is now known as timClicks | ||
thumper | timClicks: perhaps through juju-run, but not by itself | 02:51 |
timClicks | oh I meant can it be run in another hook, eg config-changed | 02:52 |
timClicks | in the same way that the add-metric hook tool is restricted to the collect-metrics hook | 02:53 |
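[editor's note] For the record, application-version-set is not confined to a single hook the way add-metric is confined to collect-metrics; the Juju docs describe calling it from ordinary hooks such as install or config-changed. A hedged sketch of such a hook follows; `run_hook_tool` and the version string are illustrative, and the stub branch exists only because the real hook tool is present only inside a unit agent:

```shell
#!/bin/sh
# Sketch of a config-changed hook reporting the workload version.
# run_hook_tool falls back to echoing the invocation when the real
# hook tool is unavailable (i.e. when run outside a Juju unit agent).
run_hook_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "stub: $*"
    fi
}

workload_version="1.2.3"   # normally queried from the workload itself
run_hook_tool application-version-set "$workload_version"
```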
wallyworld | hpidcock: here's a PR for the bug you raised, still need to do a followup for model migration https://github.com/juju/juju/pull/11473 | 03:15 |
timClicks | in goal-state... does the relation-get hook tool work if the relation to another unit is in the "broken" state? | 03:18 |
hpidcock | wallyworld: cool, will look | 03:35 |
tlm | hpidcock: SetConfig is the secret killer | 04:15 |
tlm | wallyworld: got 5 min for HO ? | 04:29 |
wallyworld | ok | 04:29 |
wallyworld | hpidcock: ty for review, that migration todo is because there needs to be a juju/description update first https://github.com/juju/description/pull/76 | 05:23 |
hpidcock | wallyworld: Oh it was just not obvious as there was no comment | 05:24 |
wallyworld | fair enough, we often just do a quick todo since the followup PR is landing imminently | 05:24 |
wallyworld | and todos like that in the migration_import code are fairly common | 05:25 |
wallyworld | hpidcock: if you were free to do a small +1, it would unblock me to propose the last juju PR to remove those todos https://github.com/juju/description/pull/76 | 06:08 |
hpidcock | ok | 06:31 |
hpidcock | wallyworld: LGTM | 06:32 |
wallyworld | ty | 06:33 |
achilleasa | quick poll: I want to modify a provisioner API method which returns a map to *additionally* include an extra KV pair. As this is an agent API call and pylibjuju may potentially call it, do I really really need to bump the version? Note that there are *no* actual schema changes so bumping (and having V{latest-1} remove the key from the response) seems like overkill to me | 08:27 |
achilleasa | any thoughts? stickupkid_ ^ | 08:27 |
achilleasa | (call is "ContainerManagerConfig" and the new key is the channel for the lxd snap which comes from a model cfg) | 08:28 |
stickupkid_ | achilleasa, let me check, but if it's a map, then that seems fine to me | 08:29 |
stickupkid_ | achilleasa, if you add or remove in the map[string]string then no need to bump | 08:30 |
achilleasa | stickupkid_: thought so but wanted to confirm... pheww | 08:31 |
stickupkid_ | achilleasa, we should just make the api a map[string]string :troll: | 08:31 |
* achilleasa runs away | 08:32 | |
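[editor's note] The compatibility argument above can be made concrete: a client that treats the response as a string map and only reads the keys it knows about never notices a new key, so no facade version bump is needed. A sketch with fabricated data; `lxd-snap-channel` is an invented stand-in for the new key (the chat only says it carries the lxd snap channel), and the "old client" is modelled as a plain key lookup:

```shell
#!/bin/sh
# Two fabricated ContainerManagerConfig-style responses: the "new" one
# carries one extra key (name invented for illustration).
resp_old='{"provider-type":"lxd"}'
resp_new='{"provider-type":"lxd","lxd-snap-channel":"4.0/stable"}'

# An "old client" extracts only the keys it knows; unknown keys are
# simply never looked at, so they cannot break it.
read_key() { printf '%s' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

read_key provider-type "$resp_old"   # -> lxd
read_key provider-type "$resp_new"   # -> lxd, extra key is invisible
```

This is exactly why adding or removing entries in a map[string]string response is wire-compatible, while changing the schema (field names, types) is not.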
manadart | stickupkid_, achilleasa: Able to review a real small one? https://github.com/juju/juju/pull/11477 | 08:50 |
manadart | stickupkid_: Ta. | 08:51 |
achilleasa | tiny PR https://github.com/juju/packaging/pull/9 | 09:43 |
achilleasa | manadart: can you take a look at ^? | 09:54 |
manadart | achilleasa: Tick. | 09:55 |
achilleasa | manadart: tyvm | 09:55 |
achilleasa | manadart stickupkid or hml: any ideas why this gets stuck? Am I doing something wrong? (that's on develop/HEAD) https://pastebin.canonical.com/p/P864ZXPTYm/ | 12:40 |
hml | achilleasa: if you ssh to 0/lxd/0, has juju been setup? | 12:42 |
achilleasa | hml: let me check | 12:43 |
achilleasa | hml: btw, if I run ' juju add-unit ubuntu-lite --to lxd' it starts a bionic one that does work | 12:44 |
achilleasa | hml: I cannot ssh to 0/lxd/0 (no available addresses) | 12:45 |
hml | achilleasa: can you get to it from machine 0? | 12:45 |
achilleasa | hml: yes. no juju and cloud-init has completed without an error | 12:47 |
hml | achilleasa: weird… hrm… i’m trying similar with only add-machine | 12:47 |
hml | achilleasa: i get the same with only add-machine: machine-2 https://pastebin.canonical.com/p/rH4Y88H5Kh/ | 12:51 |
achilleasa | hml: can you try with 2.7.6? | 12:51 |
achilleasa | wonder if this is a 2.8 issue | 12:52 |
hml | achilleasa: sure | 12:52 |
hml | achilleasa: hit with 2.7.6 | 13:11 |
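[editor's note] A dry-run sketch of the triage used in the thread above for a nested container that never gets an address. The commands are printed rather than executed since they need a live Juju machine; the machine number and container name are placeholders:

```shell
#!/bin/sh
# Print (dry-run) the host-side commands for inspecting a nested LXD
# container stuck without an address. Names below are placeholders.
machine=0
container=juju-XXXX-0-lxd-0

print_triage() {
    cat <<EOF
juju ssh $machine -- lxc list
juju ssh $machine -- lxc info $container
juju ssh $machine -- lxc exec $container -- cloud-init status --long
EOF
}

print_triage
```

Checking `lxc list` for an IP and `cloud-init status` for errors is what distinguished "cloud-init completed without an error" from an agent problem in the log above.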
hml | anyone seeing this bootstrapping lxd develop focal? “sudo: setrlimit(RLIMIT_CORE): Operation not permitted” | 13:55 |
hml | as part of the machine config piece | 13:55 |
stickupkid | hml, yeah, it's fine, it's a focal issue | 13:55 |
stickupkid | hml, I made a bug out of it, it got closed | 13:55 |
hml | stickupkid: awesome! | 13:55 |
stickupkid | hml, https://github.com/juju/juju/pull/11482/files | 13:56 |
stickupkid | achilleasa, got an issue with windows build | 14:00 |
stickupkid | ##[error]worker/provisioner/container_initialisation.go:260:37: too many arguments in call to "github.com/juju/juju/container/lxd".NewContainerInitialiser | 14:00 |
hml | achilleasa: thumper renamed the config values etc: https://github.com/juju/juju/pull/11475 | 14:14 |
hml | achilleasa: once i used the correct config values… HA with OpenStack qa’d good | 14:25 |
achilleasa | rick_h: manadart: stickupkid hml looks like my patch works fine on aws; so it's probably the nested lxds that exhibit the problem... | 15:12 |
manadart | achilleasa: I bet it is specific to snap-installed LXD too. AppArmor/cgroups or some such. | 15:14 |
achilleasa | stickupkid: can you run the qa steps on aws or maas to confirm? | 15:16 |
achilleasa | manadart: I tried with latest/stable on focal (aws) which should give you 4.0 | 15:17 |
stickupkid | achilleasa, sure, give me a sec | 15:40 |
stickupkid | achilleasa, I would but github is down | 15:40 |
achilleasa | anyone doing the homebrew bit of the 2.7.6 release? | 16:26 |
achilleasa | ^^^ doing the homebrew bits | 16:38 |
=== bdx1 is now known as bdx | ||
hml | quick review pls? https://github.com/juju/juju/pull/11484 | 20:01 |
rick_h | hml: +1 | 20:03 |
hml | rick_h: ty | 20:03 |
wallyworld | babbageclunk: hey, could you comment on https://bugs.launchpad.net/juju/+bug/1874031? | 20:34 |
mup | Bug #1874031: [vsphere 6.5] root-disk-source is failing to select from available datastores <juju:New> <https://launchpad.net/bugs/1874031> | 20:34 |
babbageclunk | wallyworld: yup, looking | 21:29 |
wallyworld | babbageclunk: ty. also a very small PR https://github.com/juju/juju/pull/11478 | 21:49 |
babbageclunk | love a small pr | 21:51 |
babbageclunk | wallyworld: approved | 21:52 |
wallyworld | \o/ | 21:53 |
hpidcock | babbageclunk: did you forget to update https://github.com/juju/utils/pull/309 in Gopkg.toml? | 23:01 |
hpidcock | I'm guessing this is safe for 2.8? | 23:02 |
babbageclunk | I added that for juju-restore. It didn't occur to me to update the dep in the juju repo. | 23:03 |
babbageclunk | hpidcock: yup, definitely safe | 23:03 |
babbageclunk | But I really should have, sorry! | 23:04 |
babbageclunk | At the moment the only use of UntarFiles in juju extracts to an existing directory tree, so all of the directories are already there. | 23:05 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!