[02:39] kelvinliu: found it. https://github.com/juju/jujusvg/pull/58
[02:40] wallyworld: not easy! hah approved!
[02:40] ty :-)
[02:42] 🎊
[02:42] babbageclunk: do you know if there are any plans for storage on vsphere? https://discourse.jujucharms.com/t/adding-additional-disks-to-machine/2367
[02:47] timClicks: not sure - wallyworld?
[02:50] there are currently no plans - the vsphere provider in juju does not support storage directives, it seems. we'd need to size up any work as a bug
[02:50] it would be a medium chunk of work
[02:55] so the recommendation today would be to use the vSphere console or an equivalent VMware CLI command?
[03:05] wallyworld: if the charm already provides ingress for the app svc itself in the podspec, should we touch that spec at all if the user runs `juju expose`, or just reject and error out?
[03:07] kelvinliu: that's a good question. i am not sure. we can land the PR without changing expose, discuss, and maybe do a followup
[03:08] wallyworld: ok, i will create a card for this as a TODO
[03:09] ty
[03:09] np
[03:50] timClicks: I don't know of a workaround that would work reasonably
[03:52] thumper: no, not for deploying ceph w/ juju
[03:53] ceph on vsphere?
[03:53] seems weird
[04:01] sorry, I've got a bit nerdsniped trying to work out whether we could use different image metadata to make a machine that was centos on vsphere.
[04:09] it's led me to realise I don't actually understand how we get non-ubuntu images at all
[04:12] thumper: they want to deploy cdk on vsphere, and use ceph as storage for that cdk instance
[04:12] timClicks: but where is the ceph?
[04:12] ew
[04:12] on vsphere apparently..
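[editor's note] For readers following the storage thread above: a hedged sketch of the kind of storage directive the vsphere provider reportedly cannot satisfy. The charm, unit count, and sizes here are illustrative assumptions, not commands taken from the discussion.

```shell
# Illustrative only: a deploy that relies on Juju storage directives.
# On providers with dynamic storage support (e.g. AWS/EBS) this works;
# per the discussion above, the vsphere provider does not support
# storage directives, so an equivalent deploy there would need the
# disks attached out-of-band (vSphere console or a VMware CLI) instead.
juju deploy ceph-osd -n 3 --storage osd-devices=32G,2
```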
[04:12] ah
[04:12] no we don't support that
[04:12] I don't think so
[04:13] not natively anyway
[04:13] I've added a note to the request https://discourse.jujucharms.com/t/adding-additional-disks-to-machine/2367
[04:13] there may be charmed-kubernetes config for external storage
[04:13] i expect we'll receive a grumpy response along the lines of "what's the point in deploying ceph manually?"
[04:14] we've received an HA-related question
[04:15] https://discourse.jujucharms.com/t/auto-remove-var-lib-juju-when-i-restarting-jujud-in-empty-mongodb/2362
[04:16] timClicks: I think if you add a machine, then add the disks via CLI, then deploy ceph to the devices it may work
[07:06] wallyworld: PTAL https://github.com/juju/juju/pull/10957 - cred validation relaxation as discussed
[07:07] ok, in a bit
[07:07] no rush at all... it's for 2.7 (which i think is blocked until .0 anyway, right?)
[07:13] yeah
[10:24] stickupkid: Breaking up my patch. This one is for replacing migration-master test mocks with generated ones: https://github.com/juju/juju/pull/10958
[10:25] manadart, we should do the same with the minion
[10:32] stickupkid: did the email from ian work for you?
[10:32] he did give me a review/comment as well
[10:34] nammn_de1, we were missing expose :D
[10:34] stickupkid: yeah, so doing that shows something in the logs and does something. I guess we would need a charm which uses the juju application offer to open some ports, so that we can see that in the security group, right? At least that's how I understand it
[10:35] :D
[10:35] yeap
[10:36] stickupkid: do you have one in mind? At least trying it with mariadb and mysql shows log activity but no change in the security group; it seems they did not open a port. I do think it should be fine codewise.
[10:36] nammn_de1, not sure tbh
[12:00] manadart: got a few min for a quick HO?
[12:03] achilleasa: Sure.
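[editor's note] The workaround suggested above (add a machine, attach disks outside of juju, then deploy ceph to it) might look roughly like the following. The machine number, disk name, size, and device path are assumptions for illustration:

```shell
# 1. Ask juju for a bare machine on the vsphere-backed model.
juju add-machine            # suppose this allocates machine 0

# 2. Attach an extra virtual disk out-of-band, e.g. with govc
#    (the vSphere CLI); the VM name and size are placeholders:
#      govc vm.disk.create -vm <vm-name> -name extra-disk -size 50G

# 3. Deploy ceph-osd to that machine and point the charm at the
#    new block device (the ceph-osd charm exposes an osd-devices
#    config option for this).
juju deploy ceph-osd --to 0
juju config ceph-osd osd-devices='/dev/sdb'
```

This sidesteps Juju's storage model entirely, so Juju will not track or manage the attached disks.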
[12:13] damn, we don't import application offers - wonder why my code wasn't working :sigh:
[12:21] manadart, quick HO before i grab some lunch?
[12:33] stickupkid: Eating mine now. Do it after?
[12:33] manadart, sure can... i'll ping back in about 45 mins
[12:52] * manadart has to do a drop-off at short notice. 20-30 mins.
[14:15] manadart, ho
[14:15] ran away
[14:39] stickupkid: so i updated the qa steps on my pr https://github.com/juju/juju/pull/10943
[14:39] so because it is related to the "offer", this pr, or another one, needs to move the offer as well, which can then be consumed again
[14:40] i'm on it :D
[14:40] not sure if we want to have that in this pr, because i thought that you and manadart are working on it
[14:40] ha! :D
[14:40] nammn_de1, let's call it done
[14:40] biab
[14:58] stickupkid: cool, happy for a review then
[14:58] nammn_de1, give me a bit
[15:02] stickupkid: no worries, take your time. A suggestion for the next task from the cmr column: seems like some are related to the one you are doing currently. Just to make sure that we do not overlap
[15:03] nammn_de1, unsure tbh
[15:03] thinking...
[15:04] stickupkid: if you have something with the workers that would be cool, but open to anything
[16:14] stickupkid: I have to make a move. Patch is here: https://github.com/juju/juju/pull/10959.
[16:14] manadart, sure nps
[16:14] stickupkid: I still need to regenerate the facade schema. Once I do that, I'll squash the choppy commits.
[16:14] manadart, fine by me
[20:47] what is the command I need to use to inspect/trace a unit agent hook's execution history?
[21:15] timClicks: juju show-status-log ?
[21:15] rick_h: that's the one
[21:15] thanks!
[21:22] timClicks: <3
[21:23] is it possible to inspect calls to hook-tools, such as relation-get and relation-set?
[21:25] timClicks: you're going to need debug-hooks at that point I think
[21:25] yeah, that's what I thought..
[21:25] timClicks: because you need the hook context which isn't normally available. You could try walking the relation ids and going that route but it takes a little work
[21:25] well, not hook context, but the relation context in the hook context
[21:27] yeah I experimented with that in a noop charm I've written, explore-relations
[21:27] gotcha
[21:27] juju run --unit explore-relations/0 "relation-ids peer-relation"
[21:58] new tutorial up https://discourse.jujucharms.com/t/what-is-a-juju-relation-and-what-purpose-do-they-serve-part-2/2378
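[editor's note] The "walking the relation ids" approach mentioned above could be sketched as follows. The unit and endpoint names come from the explore-relations example in the log and will differ in other models; `juju run --unit` supplies the hook context that the hook tools need:

```shell
# List relation ids for a named endpoint (juju run provides the
# hook context, so relation-ids works outside an actual hook):
juju run --unit explore-relations/0 'relation-ids peer-relation'

# Walk every relation id for the endpoint, list the remote units on
# each, and dump their settings ("-" asks relation-get for all keys):
juju run --unit explore-relations/0 '
  for rid in $(relation-ids peer-relation); do
    for unit in $(relation-list -r "$rid"); do
      echo "== $rid / $unit =="
      relation-get -r "$rid" - "$unit"
    done
  done
'
```

This only reads relation data; tracing the actual relation-get/relation-set calls a charm makes still needs `juju debug-hooks`, as noted above.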