[07:51] <t0mb0> Hey stub, in the old reactive way of doing things if I wanted to execute a piece of code that configures something I'd just set some reactive state and configure a @when handler. Is the right pattern in the operator framework to create a custom EventBase and then register it with framework.observe? Are there examples of creating a custom event that isn't due to a juju hook or relationship state change?
[07:52] <stub> One way is just to call the function directly.
[07:53] <stub> Otherwise, yeah, you define an event, register an observer, then fire the event when you want the observer to run
[07:53] <t0mb0> stub, right but atm I have a chicken and the egg problem where I need to wait for the pod to be ready before I can call the function
[07:54] <stub> I'm just deferring events in that case
[07:54] <t0mb0> stub, yeah that's what I'm doing too :(
[07:54] <stub> 'if not_ready: event.defer(); return'
[07:54] <t0mb0> yeah I'm doing that too but it means I have to pass this event object around
[07:55] <t0mb0> i'd much rather just set some state and then signal a new event
[07:55] <t0mb0> than have to pass the event around and have massive callbacks
[07:55] <t0mb0> er callstack even
[07:56] <stub> Why fire an event to trigger the observer, instead of calling it directly?
[07:56] <stub> oh, I guess you want to defer it there if things are not ready
[07:56] <t0mb0> yeah
[07:57] <t0mb0> I don't want to configure my application in the same function where I spin up the pod
[07:57] <t0mb0> I want to spin up the pod and issue a new event, say "init", then have my init handler just defer loop until the pod is live
[07:57] <t0mb0> I can get it to work; I just wondered if there was a cleaner way than passing the event down the stack
[08:04] <stub> https://pastebin.ubuntu.com/p/pgGQKdkFVb/ is roughly how it should go
[08:04] <stub> It gets more complex if you want to add behavior to your events or extra __init__ arguments
[08:06] <stub> https://pastebin.ubuntu.com/p/RHMcTQPJBv/ sorry (typo)
[08:06] <t0mb0> stub, so line 19 is where I am signalling an on.ants_in_my_pants event?
[08:06] <stub> Yup
[08:07] <t0mb0> awesome thanks!
[08:07] <stub> So the emit would happen after the pod_set_spec
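A standalone sketch of the pattern stub's pastebin describes: define a custom event, observe it, emit it right after setting the pod spec, and defer until the pod is ready. In a real charm these pieces would be `ops.framework.EventBase`, `EventSource`, `CharmEvents`, and `framework.observe`; this toy reimplements just the emit/defer mechanics so it runs on its own, and every name in it (`InitEvent`, `MiniCharm`, `pod_ready`) is invented for illustration.

```python
# Toy re-creation of the custom-event + defer pattern; not the ops API itself.

class InitEvent:
    def __init__(self):
        self.deferred = False

    def defer(self):
        # In ops, event.defer() asks the framework to replay this event
        # at the start of the next hook invocation.
        self.deferred = True


class MiniCharm:
    def __init__(self):
        self.pod_ready = False
        self.configured = False
        self._queue = []              # deferred events awaiting re-emission

    def on_config_changed(self):
        # ... spin up the pod here (pod_set_spec in a real charm) ...
        self.emit_init()              # then signal the custom "init" event

    def emit_init(self):
        event = InitEvent()
        self._on_init(event)
        if event.deferred:
            self._queue.append(event)

    def reemit(self):
        # The framework replays deferred events when the next hook runs.
        pending, self._queue = self._queue, []
        for event in pending:
            event.deferred = False
            self._on_init(event)
            if event.deferred:
                self._queue.append(event)

    def _on_init(self, event):
        if not self.pod_ready:        # 'if not_ready: event.defer(); return'
            event.defer()
            return
        self.configured = True        # configure the application here


charm = MiniCharm()
charm.on_config_changed()             # pod not ready yet: init is deferred
charm.pod_ready = True
charm.reemit()                        # the next "hook" replays the deferred event
```

Note the observer never needs the original hook's event passed down the stack: the custom event carries its own defer capability, which is the point t0mb0 was after.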
[08:13] <Chipaca> good moring!
[08:13] <Chipaca> morning, also
[08:13]  * Chipaca decides the typo is ominous and cancels the whole day
[08:17] <jam> at least it wasn't "mourning"
[08:18] <Chipaca> indeed
[08:19] <niemeyer> Good morning
[08:20]  * niemeyer reads stub's pastebin and notices facubatista's recent change will break people's code
[08:21] <niemeyer> Events was renamed to EventSet
[08:22] <Chipaca> niemeyer: which pastebin?
[08:22] <niemeyer> Chipaca: There was a conversation here in the channel shortly before you joined
[08:22] <mthaddon> https://pastebin.ubuntu.com/p/RHMcTQPJBv/
[08:22] <mthaddon> ^ that's the one
[08:22] <Chipaca> ta
[08:25] <jam> niemeyer, actually it doesn't break that code, as it is using CharmEvents and EventBase which didn't change.
[08:26] <jam> but yes, custom events on types that use EventsBase break unless we do something like "@deprecated\nEventsBase = EventSetBase"
[08:26] <niemeyer> jam: If it didn't change it must change, or we need to revert it.. CharmEvents needs to be CharmEventSet
[08:27] <Chipaca> niemeyer: https://github.com/canonical/operator/pull/191#issuecomment-605875917
[08:27] <niemeyer> Otherwise we're neither here nor there
[08:28] <niemeyer> Chipaca: Well, yeah, that sounds like an awkward non-agreement
[08:28] <niemeyer> Chipaca: You raised the inconsistency, jam said he liked the old naming better, facundo merged it
[08:28] <t0mb0> I have a requirement to store some secrets, previously we stored them as files inside the leader pod but I'd like to change that so we store these as k8s secrets, does the framework support that yet? I see in this example charm the developer has written his own k8s API https://github.com/majduk/charm-k8s-mongodb/tree/b0e55efd53d38abe26ea9deb271af9922b88b238/src
[08:29] <t0mb0> will the StoredState support this kind of thing?
[08:29] <stub> Personally I'd revert given they are not sets (like set(), frozenset() or collections.abc.Set)
[08:30] <Chipaca> stub: the issue is EventBase vs EventsBase
[08:30] <niemeyer> stub: Yeah, I think we should just revert it.. it sounds like multiple people don't agree the naming is better
[08:32] <niemeyer> t0mb0: What API do you need to access?  We definitely want to grow the supported APIs over time
[08:32] <stub> If we have CharmEvents, we can have ObjectEvents
[08:33] <t0mb0> niemeyer, the kubernetes secrets and configmaps
[08:33] <Chipaca> stub: ooo
[08:33] <Chipaca> niemeyer: wdyt?
[08:34] <t0mb0> we're initialising a wordpress deployment with a random password and it'd be nice if we could expose that so operators can retrieve it if the pod dies
[08:38] <niemeyer> Chipaca, stub: Sounds fine to me
[08:38] <Chipaca> PR coming up
[08:38] <Chipaca> niemeyer: another thing: the check for having multiple names for a single instance of a StoredState is bogus, never worked AFAICT, and is untested. Should I drop it, or change it to work and add tests?
[08:39] <Chipaca> this is for class C: foo=StoredState(); bar=foo
[08:41] <jam> Chipaca, did you see my suggestion?
[08:41] <Chipaca> jam: I did. I'll PR that up as well so we can discuss.
[08:41] <jam> (when iterating attributes, just don't exit on first discovery)
[08:41] <Chipaca> ah
[08:41] <Chipaca> no i did not see that one
[08:41] <niemeyer> Chipaca: I see it as preventing an obvious invariant from being broken.. in which case would it be fine to ignore the invariant being wrong?
[08:41] <jam> the real question is whether that actually works.
[08:41] <jam> niemeyer, the problem is that it doesn't actually find the invariant
[08:42] <niemeyer> jam: Invariants don't have to be looked for :)
[08:42] <niemeyer> jam: If the attribute name is wrong, this is all bogus..
[08:43] <niemeyer> jam: But I agree with your statement, we might look further to detect a bug
[08:44] <Chipaca> so I can change the code to not break out of the loop, and if it finds itself twice it'll catch that
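A minimal sketch of jam's suggestion as Chipaca restates it: scan all class attributes instead of breaking out of the loop at the first match, so a single StoredState instance bound under two names is actually detected. The `attach` method and error message here are hypothetical, not the operator framework's real API.

```python
# Hypothetical descriptor-name resolution; illustrative names only.

class StoredState:
    def attach(self, owner_cls):
        # Scan the *whole* class namespace rather than stopping at the
        # first hit, so duplicate bindings of the same instance surface.
        names = [name for name, value in vars(owner_cls).items()
                 if value is self]
        if len(names) > 1:
            raise RuntimeError(
                "StoredState bound under multiple names: %s" % names)
        return names[0]


class Good:
    state = StoredState()


class Bad:
    foo = StoredState()
    bar = foo                 # the 'class C: foo=StoredState(); bar=foo' case
```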
[08:45] <Chipaca> I'll do that, and then run to the shops
[08:45] <stub> Can we have relids being relname:16 format again? I just spent ages tracking down a bug, where I was failing to retrieve data by relid because somewhere thought it should be str(15) instead of int(15)
[08:45] <Chipaca> I'll look at @deprecated after
[08:46] <mthaddon> t0mb0: essentially we're talking about interacting with kubernetes secrets here (being able to store and retrieve them). I think filing an issue about that would be the best way to go for now
[08:46] <niemeyer> stub: The relation ID is really the number.. the name is there to aid people when they are in a hook context. But in Python we have both the name and the ID at hand.
[08:47] <niemeyer> mthaddon, t0mb0: +1
[08:47] <niemeyer> And also +1 in supporting the API internally somehow
[08:48]  * mthaddon nods - sounds good
[08:48] <stub> yeah, and now I need to ensure nothing casts it to a string or it becomes a different key according to dictionary lookups. Just stuffing in a pile of asserts to see what fed me a string instead of an int.
[08:49] <niemeyer> stub: Good question.. please let me know when you find out. If we have other APIs manipulating these IDs independently, we definitely need to look into it.
[08:49] <t0mb0> stub, do you know if there is a maximum number of times an event will be reemitted?
[08:49]  * stub shrugs
[08:50] <jam> t0mb0, under what circumstance? as in, if you keep calling defer(), will it eventually stop being reemitted?
[08:51] <niemeyer> stub: This should only be an issue if you're running juju CLI commands next to the framework, in which case the answer would be to use the APIs in the framework
[08:52] <niemeyer> We'll soon start talking directly to the agent, and allow iterating over multiple hook events if desired.. this will not work if one is interacting with the agent externally
[08:52] <t0mb0> jam, so i'm basically defer looping until self.model.get_binding('website').network.ingress_addresses[0] returns something
[08:53] <t0mb0> I'd expect juju debug-log to be spamming my logger statements but it doesn't
[08:53] <jam> t0mb0, the only thing that causes deferred events to reemit is another hook invocation. if you are deferring and then exiting the hook, you are waiting for the next hook to tell you to check.
[08:54] <jam> that hook might be 'config-changed' or 'update-status', but nothing that would 'spam the logs' as update-status is on a 5-min timer.
[08:54] <t0mb0> oh
[08:56] <stub> t0mb0: (and you won't see log.debug() level messages atm; they are missing entirely)
[08:56] <jam> t0mb0, generally state that you read from Juju won't change during the lifetime of a hook anyway (otherwise hooks have to check at the end of the hook if something is different than at the beginning of the hook)
[08:57] <jam> juju generally fires another hook to let you know if stuff that you responded to changed (eg, you get 'config-changed' when config is different than the last time config-changed ran)
[08:57] <t0mb0> jam, so if I'm waiting for my pod to be active before resuming the rest of the config-changed stuff what should I be doing?
[08:58] <jam> (also, I think set-pod-spec doesn't take effect until the hook exits, either, but I know there was some question there)
[09:00] <jam> t0mb0, so I thought there was supposed to be an event if your IP address changes, which might match what you're hoping to do?
[09:00] <jam> Juju doesn't currently feed the charm operator the full k8s events, IIRC
[09:02] <stub> Sounds like check if things are ready in a config-changed hook, and if not just wait for the next config-changed hook
[09:02] <t0mb0> jam, https://github.com/johnsca/charm-gitlab-k8s/blob/master/src/charm.py this charm makes use of a custom HTTPInterface is that what you're talking about?
[09:04] <t0mb0> stub, but there won't be another config-changed hook right? because you're returning from the on_config_changed function?
[09:05] <t0mb0> unless you do a self.on.config_changed.emit() or something similar?
[09:05] <t0mb0> if the state isn't set
[09:06] <stub> There will be another config-changed hook when something changes that Juju wants to tell you about. So if it is sending one when the pod IP addresses change, you can wait until that config-changed hook is fired. Might not be the first, won't be the last.
[09:07] <niemeyer> t0mb0: Also, this is normal code.. you don't have to (and most likely should not) have all of your logic living _inside_ the on_config_changed method
[09:08] <niemeyer> Instead, ideally code should be organized in methods and types that properly reflect what they are doing, and then you simply call them by their name whenever an appropriate situation shows up
[09:09] <t0mb0> niemeyer, that's roughly how I have it now: I'm checking whether various variables on my state object are set, calling methods when they are, and calling event.defer() and returning when they're not
[09:10] <niemeyer> t0mb0: Yeah, if I get what you're saying that's a pattern I've seen repeatedly in charms, but it doesn't feel like a very nice way to organize code in general.. what I've seen looks like:
[09:10] <niemeyer> 1) Check that everything in the state and on the machine, for everything the charm could possibly want to do, is as it should be
[09:11] <niemeyer> 2) Do everything the charm would like to do, based on the checks previously done
[09:12] <niemeyer> The problem with this approach is that those several checks are detached from the code they actually "protect", and local encapsulation is broken down
[09:14] <niemeyer> A much better approach is to do things in smaller pieces, with local information.. for example, we want small components that are responsible for their own knowledge, and emit high-level events that are meaningful for what they do
[09:14] <t0mb0> niemeyer, so in that case I'd need k8s to tell Juju the pod is alive and ready so I can observe that event right?
[09:15] <niemeyer> t0mb0: Yes, that definitely sounds like something we want to have more comfortable access to
[09:15] <t0mb0> niemeyer, i'm guessing that doesn't exist yet and I should raise an issue?
[09:16] <niemeyer> t0mb0: Yes please.. but also, meanwhile let's find a nice way for you to move forward with your needs without being blocked on that issue
[09:16] <t0mb0> I feel like for initial setups of wordpress we don't mind waiting 5 minutes or so for another hook to run
[09:18] <niemeyer> t0mb0: Also note that what I described above says more about the overall organization of the charm itself than about external needs
[09:18] <niemeyer> t0mb0: In other words, improving things in that way doesn't need external events
[09:18] <t0mb0> niemeyer, yeah it makes sense. that's why i asked (s)tub about writing custom events earlier
[09:19] <niemeyer> t0mb0: Right, we made events very easy to create and use precisely so we can reuse that form of encapsulation into our own custom components as well
[09:20] <niemeyer> t0mb0: As a random example, it would make sense for example to have a MySQLServer endpoint handler that would consume the relation events, and emit higher-level on_database_available, etc
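A toy illustration of niemeyer's example: a small endpoint handler consumes low-level relation data and only fires a meaningful high-level callback once everything it needs is present. Real code would subclass `ops.framework.Object` and declare the high-level event with an `EventSource`; `MySQLClient` and `observe_database_available` are invented names standing in for that machinery.

```python
# Standalone sketch of a component that translates raw relation data
# into one high-level "database available" notification.

class MySQLClient:
    def __init__(self):
        self._host = None
        self._database = None
        self._observers = []      # callbacks for the high-level event

    def observe_database_available(self, callback):
        self._observers.append(callback)

    def on_relation_changed(self, remote_data):
        # Low-level juju event: raw key/value data from the remote unit.
        self._host = remote_data.get("host", self._host)
        self._database = remote_data.get("database", self._database)
        if self._host and self._database:
            # Everything needed is present: emit the meaningful event.
            for callback in self._observers:
                callback(self._host, self._database)


seen = []
client = MySQLClient()
client.observe_database_available(lambda host, db: seen.append((host, db)))
client.on_relation_changed({"host": "10.0.0.5"})                      # incomplete
client.on_relation_changed({"host": "10.0.0.5", "database": "wp"})    # fires
```

The charm then observes only `database_available` and never sees the half-populated intermediate states, which is the encapsulation being described.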
[09:21] <niemeyer> I'm somewhat surprised that the k8s support in juju doesn't inform the charm of the pod being ready.. I wonder what kind of pattern people have been using so far
[09:21] <niemeyer> Do people just fork off a daemon that keeps trying to do things with it?
[09:22] <niemeyer> I will be digging further into k8s in the coming days
[09:24] <t0mb0> niemeyer, it seems like the existing charms shy away from doing application initialisation and it's left to the user. I imagine you could bake initialisation logic into the docker image and then have some kind of lock to stop non leader pods from racing
[09:24] <stub> So.... using the relation id as a dictionary key isn't great if you are going to convert it to json, cause it will be cast into a string and won't round trip
[09:24] <niemeyer> stub: Where's the json conversion taking place
[09:24] <niemeyer> ?
[09:25] <stub> my code
[09:26] <stub> stuffing a data structure into leadership settings in this case
[09:26] <niemeyer> I see
[09:26] <stub> >>> assert json.loads(json.dumps({1: True})) == {"1": True}
[09:26] <stub> solution... stick to yaml :)
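stub's round-trip problem in full, plus one possible workaround if you stay with JSON rather than switching to YAML: JSON object keys are always strings, so integer relation IDs come back as strings unless you convert them on load. `load_by_relid` is just an illustrative helper, not a framework function.

```python
import json

# JSON object keys are always strings, so int keys do not round-trip:
assert json.loads(json.dumps({1: True})) == {"1": True}

def load_by_relid(blob):
    """Restore integer relation-id keys after a JSON round trip."""
    return {int(relid): value for relid, value in json.loads(blob).items()}

data = {16: {"password": "secret"}}
assert load_by_relid(json.dumps(data)) == data   # keys are ints again
```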
[09:26] <niemeyer> stub: Yeah, I'm surprised the json package doesn't complain about it
[09:27] <niemeyer> "Oh, sure.. here is your data completely different now sir."
[09:28] <niemeyer> t0mb0: I'd like to significantly improve the control charms have over their workload containers
[09:28] <niemeyer> t0mb0: That's one of my key goals over the next couple of cycles
[09:28] <t0mb0> niemeyer, I mean something like an "pod-ready" event/hook would be nice
[09:29] <t0mb0> once the readinessProbe is green
[09:29] <niemeyer> I need to get my hands dirty to get a better view of how we'll accomplish that
[09:30] <niemeyer> t0mb0: Yeah, something like that as well
[09:32] <Chipaca> ok, now yes, to the shops
[09:38]  * Chipaca really really needs to go
[09:57] <stub> git+ssh://git.launchpad.net/~stub/interface-pgsql/+git/operator seems to be working now if anyone wants a look. Unit tests next week.
[10:03] <stub> And the 'operator' branch of git+ssh://git.launchpad.net/~stub/plinth/+git/plinth-charm is a (private) non-k8s operator framework charm using it, still needing its actions wired up and a tidy
[10:05]  * stub wanders off into the long weekend
[10:05] <niemeyer> stub: Thanks for the branches, and have fun!
[10:05] <niemeyer> stub: Or have a good isolation I guess :)
[10:06] <stub> I'm shopping for everyone tomorrow... got to keep my ancestors fed.
[10:08] <niemeyer> stub++
[11:06] <Chipaca> jam: was @deprecated a suggestion to use https://pypi.org/project/deprecation/, or was it something else?
[11:07] <jam> Chipaca, I think I looked at 2 or 3 different ways of doing deprecation. It feels a bit overdone to do deprecation before we hit a 1.0 but certainly I do think we want a *plan* around deprecation.
[11:07] <jam> If we feel we need it from now, then we can
[11:08] <jam> Chipaca, given the simplicity of deprecation, it would be nice to not need an external lib
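For reference, a deprecation shim needs nothing beyond the stdlib `warnings` module, roughly along these lines. This decorator is a sketch of the idea, not the framework's actual helper, and the names are made up.

```python
import functools
import warnings

def deprecated(replacement):
    """Mark a callable as deprecated, pointing callers at its replacement."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated; use %s instead" % (func.__name__, replacement),
                DeprecationWarning,
                stacklevel=2)       # attribute the warning to the caller
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("new_name")
def old_name():
    return 42
```

An alias like `EventsBase = EventSetBase` could be wrapped the same way so existing charms keep working while emitting a DeprecationWarning.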
[11:25] <Dmitrii-Sh> Added unit tests for a peer relation to one of my charm PRs: https://github.com/canonical/cockroachdb-operator/pull/1/commits/95c61ebf40eaf812eb46e4b90055f47f8545704e (they need PR #196 to be merged to pass because of the issue #202)
[11:32] <Dmitrii-Sh> https://git.io/JvdO4 - TestCharmClass here is something worth discussing I think. The peer relation interface code expects a parent charm class to provide an event so that it can expose cluster id details on a relation when it happens. However, using the concrete CockroachDBCharm type in interface-specific unit tests is problematic because
[11:32] <Dmitrii-Sh> interface component events can trigger CockroachDBCharm handlers that do real work (such as writing config). Instead of using mocking, I am using CharmBase here but with an event set for CockroachDBCharm.
[11:33] <Dmitrii-Sh> This provides me with a way to still trigger CockroachDBCharm-specific events for the interface code to react to but without invoking the charm logic I don't need.
[11:38] <jam> Dmitrii-Sh, so why isn't the code that depends on an event/generates an event a separate class? it doesn't have to be, but it might be interesting to think about keeping the Charm as only generating Juju events, and having something else that generates higher level events.
[12:18] <Dmitrii-Sh> jam:  *thinking about how to rework it*. The first impression is that I would still have to avoid triggering handlers but only on a different type.
[12:31] <Chipaca> so many reviews to do
[12:31] <Chipaca> so little coffee
[12:32] <Chipaca> also, PEP 9999
[12:41] <jam> Chipaca, so we should use foo,bar,baz,quux as animal-friendly alternatives?
[12:42] <Chipaca> jam: I say we go anti-pep-9999 and make all our variables refer to animals being mutilated in 'orrible ways
[12:42] <jam> deadbeef we have
[12:42] <jam> slaughterhouse
[12:43] <Chipaca> 'foo' can become 'gib'
[12:51] <jam> Dmitrii-Sh, so I would think you'd want abstractions for places that your code interacts with "other things" like the filesystem, and then in tests you would substitute an alternate implementation of those that doesn't actually interact with the system.
[12:54] <Dmitrii-Sh> jam: ty, I see what you mean. The charm itself would only poke an implementation of this abstraction which I would replace in the test in some way.
[12:57] <jam> right
[12:57] <jam> certainly you don't want your actual charm rewriting /etc/postgresql.conf while you are testing.
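A minimal sketch of that separation, with invented names throughout: the charm logic only talks to an injected writer object, and the test substitutes a fake that records writes instead of touching `/etc`.

```python
# Illustrative dependency-injection pattern; none of these classes
# come from the operator framework.

class DiskWriter:
    """Real implementation: actually writes to the filesystem."""
    def write_config(self, path, content):
        with open(path, "w") as f:
            f.write(content)


class FakeWriter:
    """Test double: records writes instead of performing them."""
    def __init__(self):
        self.written = {}          # path -> content

    def write_config(self, path, content):
        self.written[path] = content


class PostgresCharmLogic:
    def __init__(self, writer):
        self.writer = writer       # injected, so tests never hit the real fs

    def configure(self, max_connections):
        self.writer.write_config(
            "/etc/postgresql.conf",
            "max_connections = %d\n" % max_connections)


fake = FakeWriter()
PostgresCharmLogic(fake).configure(100)   # nothing on disk is touched
```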
[12:57] <niemeyer> Is that PEP-9999: https://www.blackcomb-shop.eu/en-PT/skirt-craft-1904867pep-9999black/
[12:57] <niemeyer> It's the first hit on Google at least..
[12:58] <niemeyer> It's also no longer available, so we don't need to worry
[12:58] <Chipaca> niemeyer: https://github.com/gerritholl/peps/blob/animal-friendly/pep-9999.rst
[12:59] <jam> http://www.montypython.net/scripts/fruit.php "Self Defense Against Fruit" I don't know that one nearly as well.
[13:06] <Chipaca> niemeyer: aw, but bugs with side effects are my favourite!
[13:06] <Chipaca> they're the gift that carries on giving!
[13:10] <niemeyer> I know.. I also like how colorful they are :)
[14:46] <Chipaca> niemeyer: got a couple of questions about debug-hook, after talking with rick h and reading through the discussion on the spec
[14:48] <Chipaca> niemeyer: if you have 15 minutes to a half hour that'd be good
[14:49] <Chipaca> (non-overlapping holidays + feature freeze means I'm trying to get this sorted myself, rather than wait for facubatista to be back)
[14:50] <niemeyer> Chipaca: I'm just off a call.. if you have time in ~5 minutes, I'll prepare a mate and we can dive into it
[14:50] <Chipaca> niemeyer: sgtm
[15:01] <niemeyer> Alright
[15:03] <niemeyer> Chipaca: Standup?
[15:03] <Chipaca> niemeyer: omw
[15:26] <Chipaca> niemeyer: i'll ping you for a meet as soon as i get out of my next one
[15:45] <jam> Chipaca, oth
[15:45] <jam> Chipaca, anything I can help with?
[15:47] <Chipaca> jam: if you're around, you're welcome to join
[15:47] <Chipaca> jam: but it is rather late for you
[15:50] <jam> true enough, though I wouldn't worry terribly about having the Charmcraft code finished for Juju feature freeze.
[15:51] <Chipaca> jam: so I hear :)
[15:53] <Chipaca> jam, niemeyer, standup meet if you could
[15:53] <Chipaca> jam: very optional for you
[15:53] <jam> I can give it a few mins
[15:53] <niemeyer> I need a couple of minutes, but will be there
[16:24] <Chipaca> crazy day today
[16:25] <Chipaca> i got to do 0 of Dmitrii-Sh's reviews
[16:25] <Chipaca> i also got to run 0
[16:26] <Chipaca> i'm going to go run because daylight is nearly over, then make dinner, then review some things
[16:26] <Chipaca> ttfn!
[16:27] <niemeyer> Enjoy!
[17:56] <Dmitrii-Sh> Would be good to get more eyes on this one - https://github.com/canonical/operator/issues/206. It seems to ask for `goal-state`-like functionality but I suggested a different idea.
[18:00] <Dmitrii-Sh> this seems to be a common problem when it comes to setting up stateful applications - you need to know whether to have one or multiple units at the beginning of a unit lifetime to initialize a cluster state from the initial cluster unit. In some cases this is merely about setting an initial replication factor to 1 vs 3 (or some other value).
[18:35] <jam> Dmitrii-Sh, I'm of the mind that we should let them know about it. But the other option is to have 'we've told you what we're going to tell you for now' events that let you make progress.
[18:35] <jam> eg, "start" will be called after all the relation-created
[18:35] <jam> so having some form of "we're done telling you about relation-joined for now"
[18:36] <jam> but relation-joined is blocked on the units actually coming up, and doesn't let you know what the admin's *intent* of deployment is
[19:03] <Dmitrii-Sh> jam: yes, it feels like a unit needs reasonable checkpoints with enough data about the overall model to make progress. Most importantly at the beginning of its lifetime, where initial state is created.
[19:04] <Dmitrii-Sh> one other use-case that comes to mind besides clustering is setting up TLS
[19:05] <Dmitrii-Sh> at the beginning it is important not to have any exchanges without TLS between units if the end goal is to have TLS enabled
[19:06] <Dmitrii-Sh> you might later decide to remove a relation with a particular CA charm that generates certs/keys and choose another but this is unlikely to change quickly
[19:26] <niemeyer> There are different ideas intermixed here
[19:27] <niemeyer> We already discussed this (jam and myself) a while ago and we both seem to agree that having a number of units as a hint, and a created relation hook, sorts out a good part of the desire
[19:28] <niemeyer> We should avoid having something like goal state, though, as it creates a mixed and confusing reality for the unit
[19:28] <niemeyer> Another part of what's described above sounds like just configuration