[00:03] wallyworld: can you take a look at this? https://github.com/juju/juju-restore/pull/14
[00:04] sure
[00:10] Very quick PR https://github.com/juju/rpcreflect/pull/1
[00:12] hpidcock: approved
[00:12] I didn't read it all
[00:13] hpidcock: is that pulled out of the juju/rpc package?
[00:14] (I mean rpcreflect, not the licence)
[00:14] I'm not sure, ask Simon
[00:14] but he's always asleep!
[00:15] and you're right there
[00:15] but the files themselves already have a license
[00:15] doing a quick license audit
[00:15] oh right
[00:22] another very quick PR https://github.com/juju/jsonschema-gen/pull/5
[01:08] babbageclunk?
[01:08] hpidcock: looking
[01:09] approved
[01:22] thanks
[01:30] sorry babbageclunk, because you are so awesome, I missed something https://github.com/juju/jsonschema-gen/pull/6
[01:31] * babbageclunk clicks the button
[01:31] * hpidcock appreciates babbageclunk
[01:32] :)
=== lifeless_ is now known as liffeless
=== liffeless is now known as lifeless
[03:43] babbageclunk: if you are free, given the circumstances, a second +1 and even QA on this would be awesome https://github.com/juju/juju/pull/11449
[03:44] wallyworld: looking
[03:51] hpidcock: a few tweaks if you have a chance sometime today https://github.com/juju/juju/pull/11444
[04:06] looking
[04:53] turns out a lot of things get sad when you delete the credential for a cloud.
[04:53] I mean the controller model
[04:53] true that. i did my testing with a different model
[04:54] to simulate the field issue
[04:54] yeah, I'll try again with that
[04:54] suddenly couldn't get status or ssh to the machines
[04:56] status won't work with the cred missing from any model
[05:08] I just kept my ssh sessions open this time
[05:09] babbageclunk: if you still have a model, remove-application --destroy-storage is a test i should have done
[05:09] since that will fail without --force
[05:10] controller's still up
[05:11] removing the bad model is difficult, since the storage is still there
[05:11] babbageclunk: what i did was rename the cred id so i still had it to rename back after
[05:12] then i could clean up as normal
[05:12] ah right - turns out I could update it back into the controller
[05:12] yeah, that would work too
[05:13] ok, redeploying postgres and trying again
[05:14] how did the credential go away in the first place?
[05:17] wallyworld: so if I do `remove-application --destroy-storage` without --force after renaming the cred, it should fail? Or will the cleanup fail?
[05:18] babbageclunk: with destroy storage, the cleanup has to be able to talk to the cloud, which will fail. i am guessing there will be errors logged and the app will hang around in dying state
[05:19] but then redoing the remove-application with --force should still clean it up
[05:19] right?
[05:19] yeah
[05:19] assuming --force works as expected
[05:23] wallyworld: can I grab the charm you were using for the juju-run bug?
[05:24] oh bollocks, sorry forgot
[05:26] wallyworld: I don't get how you are updating the _id? Did you just paste it in again?
[05:26] * babbageclunk just does that
[05:27] babbageclunk: i loaded the record into a doc variable using FindOne(), updated the doc._id, inserted it into the collection, then removed the old record. then later I updated the doc._id again and inserted/removed again
[05:28] from go code?
[05:28] I'm in the shell
[05:31] wallyworld: ok, that seemed to work fine
[05:32] wallyworld_: did you see my comments on the PR?
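For reference, a minimal sketch of the "rename the credential's _id" trick described at 05:27 above, written against gopkg.in/mgo.v2 rather than the mongo shell that was actually used. The database address, the "cloudCredentials" collection name, and the _id values are placeholders, not necessarily what Juju uses.

```go
// Sketch only: rename a document's _id so the credential temporarily
// "disappears", then reverse the steps to put it back. MongoDB does not
// allow updating _id in place, hence the insert-new/remove-old dance.
package main

import (
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func renameID(c *mgo.Collection, oldID, newID string) error {
	var doc bson.M
	// Load the existing record (the shell equivalent would be findOne()).
	if err := c.FindId(oldID).One(&doc); err != nil {
		return err
	}
	doc["_id"] = newID
	// Insert the copy under the new _id, then drop the original.
	if err := c.Insert(doc); err != nil {
		return err
	}
	return c.RemoveId(oldID)
}

func main() {
	// Address and auth details are illustrative; a real controller's
	// mongo requires credentials and TLS.
	session, err := mgo.Dial("localhost:37017")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Collection name and _id values are placeholders.
	creds := session.DB("juju").C("cloudCredentials")
	if err := renameID(creds, "aws#admin#cred", "aws#admin#cred-disabled"); err != nil {
		log.Fatal(err)
	}
	// ...run the QA steps, then swap the arguments to restore the credential.
}
```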
[05:39] babbageclunk: thanks for QA etc, I've attempted to answer your questions on the PR, see if they make sense
[05:39] one core issue is that we've over-applied the use of --force
[05:39] in general
[05:40] wallyworld: makes sense to me - just checking
[06:23] wallyworld: the problem is that on caas, juju-run always runs the commands on the remote, not in the operator https://github.com/juju/juju/commit/d780faaf774bddd0b904e4428ae6c33076683f55#diff-89a10e4adacaacbb2dabc98d1b50011e
[06:23] the reason it was working for me is I misunderstood the question
[06:24] we will need to add a --operator flag to juju-run I think
[06:52] hpidcock: ah, doh. of course. for charms with deploymentMode=operator, it should only ever run on --operator
[06:52] well we have no juju-run --operator flag
[06:53] for operator charms and normal iaas charms there isn't a problem at the moment
[06:53] right, but as for run-action, we can handle the deployment mode case without the flag first up
[06:54] yeah, well for juju run and actions we already have the information we need
[06:54] but juju-run has no arg for operator or workload
[06:55] so you're saying it works as expected for deploymentMode=operator charms?
[06:55] it's just workload charms that need any changes?
[06:55] I'm not sure, depends on the changes you made
[06:55] yeah, i'd need to check the code as well
[06:57] ok so action and juju-run (the action made by juju run) should handle this
[06:57] juju-run doesn't handle either case properly
[06:57] sounds about right
[06:58] juju-run should execute on the pod it was invoked from
[06:58] without any --operator flag, now that i think about it
[06:58] can't juju-run call across units/applications though?
[06:59] yes, i mean more operator vs workload
[06:59] as a start anyway
[08:36] manadart, trolling you via a git diff
[08:37] stickupkid_: Cunning.
[08:39] manadart, give me 5-10 minutes and wanna discuss VIPs?
[08:39] stickupkid_: Sure.
[08:48] stickupkid_: Be there in a sec.
[08:49] was scoffing my breakfast down tbh, just there now
[09:03] pmatulis: This guide https://juju.is/docs/aws-cloud
[09:04] Good morning
[10:46] stickupkid_, achilleasa: Anyone able to tick a little race fix? https://github.com/juju/juju/pull/11452
[10:49] manadart, tick
[10:52] stickupkid_: Ta.
[12:24] manadart, https://github.com/juju/juju/pull/11454
[12:25] stickupkid_: Yep gimme a couple; got one to swap you.
[12:25] manadart, runaway
[12:27] manadart, https://i.imgflip.com/3wva2g.jpg
[12:33] stickupkid_: https://github.com/juju/juju/pull/11455. It's the real fix for the one you approved.
[12:34] manadart, well that's a horrid test :( // so TearDownTest does not reattempt.
[12:41] I have a serious question: if github actions is saying use "ubuntu-latest", but that comes with a ton of other services that aren't supported by ubuntu, is that not a trademark issue?
[12:42] I'm getting angry at github for saying something is ubuntu, when it clearly isn't just ubuntu here
[13:31] manadart, WINNER -> https://github.com/juju/juju/pull/11456
[13:31] CR please
[13:46] mongo snap PR is ready for review: https://github.com/juju/juju/pull/11453
[13:46] QA-ing this will be hard so please spend some time trying different things...
[14:33] stickupkid_: https://github.com/juju/juju/pull/11457. review pls?
[14:34] hml, ah this makes sense, it's because I turned them on for aws by default the other month
[14:35] stickupkid_: i think i caught them all
[14:36] hml, ticked
[14:36] stickupkid_: cool. ty!
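Going back to the juju-run discussion above (06:23 to 06:59): the gist is that the execution target should follow the charm's deployment mode, with a possible explicit flag on top. The sketch below is purely illustrative; the type names, the --operator flag, and the selection rules are assumptions drawn from the conversation, not Juju's actual implementation.

```go
// Illustrative sketch of the target-selection logic discussed above.
// None of these identifiers come from the Juju codebase.
package main

import "fmt"

// ExecTarget says where a juju-run command should be executed.
type ExecTarget string

const (
	TargetOperator ExecTarget = "operator" // the operator pod
	TargetWorkload ExecTarget = "workload" // the workload pod
)

// selectTarget picks where to run the command for a CAAS unit:
//   - charms deployed with deploymentMode=operator have no workload pod,
//     so everything runs on the operator;
//   - otherwise a hypothetical explicit --operator flag wins;
//   - the default is the pod the hook tool was invoked from, which for a
//     workload charm is the workload pod.
func selectTarget(deploymentModeOperator, operatorFlag bool) ExecTarget {
	if deploymentModeOperator || operatorFlag {
		return TargetOperator
	}
	return TargetWorkload
}

func main() {
	fmt.Println(selectTarget(true, false))  // operator
	fmt.Println(selectTarget(false, false)) // workload
	fmt.Println(selectTarget(false, true))  // operator
}
```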
[14:37] damn it, why won't anything land
[14:37] stickupkid_: i'm seeing mongo not installing on the static analysis
[14:38] sorry - client tests… analysis is good. :-)
[14:45] stickupkid_: This fixes one of the intermittent failures: https://github.com/juju/juju/pull/11458
[14:45] manadart, I'm seeing two failures
[14:45] stickupkid_: There are loads.
[15:06] can someone remind me how to 'juju add-machine' and get an eoan one?
[15:09] achilleasa: juju add-machine --series=eoan
[15:09] rick_h: thanks... thought that was only for charms :D
[15:20] rick_h: AFAICT the eoan image comes with an lxd snap; this means that the lxd-snap-channel flag that I am adding will only affect focal+ (if lxd is already installed on the host we do nothing)
[15:20] is that ok?
[15:21] achilleasa: it's not going to be that way in focal?
[15:21] achilleasa: I'd expect it to be part of the focal testing going forward with the mongodb snap
[15:22] rick_h: it's a separate PR but currently install-lxd is a noop if lxd is already installed. So the channel will only do something if lxd is not already there
[15:22] testing now... brb
[15:23] achilleasa: right, but if you change the model config and juju add-machine, it should use the track for the new machine on focal, since focal is getting it from a pre-seeded snap?
[15:25] rick_h: that's the expectation. Just wanted to point out that it will have no effect when spinning up machines with series < focal
[15:25] achilleasa: yep, understand
[15:25] (iow, we won't force it to use the channel if lxd is already there)
[15:59] stickupkid_: something is definitely horked in ci for make check and the merge job. my make check job failed at a 60 min timeout… the merge job queue is 3 long
[15:59] stickupkid_: i found a place where we were losing 15 min… but still
[16:09] hml, we need to get manadart's patch to land
[16:10] stickupkid_: which one?
[16:10] https://github.com/juju/juju/pull/11458
[16:11] although there is an issue with it: worker/caasunitprovisioner/worker_test.go:47:22: undefined: "github.com/juju/juju/worker/caasunitprovisioner".MockProvisioningStatusSetter
[16:11] stickupkid_: i was gunna say… that could be a problem
[16:18] hmm... are we installing lxd when we create new machines? I have patched container/lxd/initialization_linux.ensureDependencies and added logging. That code path gets triggered when adding an lxd container to the machine. It seems like the machine already has the lxd snap installed for focal even though if I 'lxc launch ubuntu:20.04 focal' I don't see it installed. Any ideas?
[16:19] There doesn't seem to be anything related in the cloudinit bits (just apt pkgs)
[16:21] achilleasa, lxc launch needs daily as well
[16:21] achilleasa, lxc launch ubuntu-daily:focal focal
[16:22] stickupkid_: doesn't 'lxc launch images:ubuntu/focal focal' work for you?
[16:22] achilleasa, it might now, it didn't back when i was testing it
[16:23] stickupkid_: let me try with the image you suggested. Is that the one we pull in juju?
[16:24] stickupkid_: ok... mystery solved; that one has lxd
[16:25] rick_h: not sure how to proceed ^
[16:38] achilleasa: stickupkid_ sorry was otp... I'm confused? Yes, you need image streams for focal still. It'll be cleaner once the images start rolling out of daily in the coming week
[16:39] rick_h: what I meant was that the focal image seems to have the lxd snap pre-installed
[16:39] achilleasa, that's the case
[16:39] rick_h: I could change the code to 'snap refresh lxd' though
[16:45] achilleasa: yes, that's expected. Yes, I think that's the goal, to snap refresh to the track in the config as part of our install/provision bits.
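A rough idea of what "refresh lxd to the configured track" could look like, assuming it is done by shelling out to snap. The function name, channel value, and config-key naming are placeholders; this is a sketch, not the juju/packaging implementation.

```go
// Sketch: refresh an already-installed lxd snap onto a configured channel.
// Illustrative only; juju/packaging may do this quite differently.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// refreshLXD moves the lxd snap to the given channel (e.g. "4.0/stable").
// Running `snap refresh --channel=<channel> lxd` is safe even on images
// that pre-seed lxd and already track that channel.
func refreshLXD(channel string) error {
	cmd := exec.Command("snap", "refresh", "--channel="+channel, "lxd")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("snap refresh failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	// "latest/stable" stands in for whatever the lxd-snap-channel model
	// config is set to; the config key name is an assumption.
	if err := refreshLXD("latest/stable"); err != nil {
		log.Fatal(err)
	}
}
```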
[16:50] rick_h: ok. that probably needs additional changes to juju/packaging so I will need to revisit this on Tuesday
[16:52] achilleasa: ok, are the other bits landing for the mongodb changes?
[16:52] we can pick up and see what steps we need to update/move forward for focal tests/etc
[16:53] rick_h: PR is up; lots of timeout issues with tests (keep !!build!! to get it green) but it is ready for review/QA. Will fire an email to the list in a few min
[17:28] hml: small PR for you https://github.com/juju/packaging/pull/8
[17:28] achilleasa: looking
[17:40] achilleasa: approved.
[17:51] rick_h: got it working with snap refresh: https://paste.ubuntu.com/p/mgkhwrxwWK. Will push a PR on Tuesday
[17:52] achilleasa: ok, got EOD and run. Thanks
[17:52] sorry, go EOD I mean
[17:52] ;-)
[17:57] pmatulis: So, we actually got through it now. But there are tons of errors and warnings in this step, which makes it very uncertain whether adding the credentials was successful or not.
[17:58] pmatulis: We also had to include "juju add-model foobar aws/eu-west-1 --credential credname". Omitting that seems not to upload the credentials.
[23:11] babbageclunk: the PR from yesterday, here's the forward port :-) https://github.com/juju/juju/pull/11451
[23:28] wallyworld_: sorry otp with thumper - no conflicts presumably
[23:28] nah, easy peasy
[23:39] Well, that was a thoroughly brain-melting discussion
[23:43] sounds like it was fun
[23:56] wallyworld_: would love some feedback on https://github.com/juju/juju/pull/11462 before I jump into writing unit tests