[08:08] <Chipaca> mo'in all
[08:09] <bthomas> morning
Chipaca or facubatista: in case bthomas comes back, it feels like he has it backwards — leader_settings_changed notifies *non*-leaders that the settings have changed; leader_elected tells the unit that it is now the leader
[08:22] <bthomas> Chipaca: Does leader_elected always follow leader_settings_changed or vice versa ?
[08:23] <Chipaca> bthomas: what does 'before' and 'after' mean when it's different units?
[08:23] <Chipaca> (nothing, that's what)
[08:23] <Chipaca> unless you have atomic clocks on the units
[08:23] <Chipaca> then we can talk
[08:24] <bthomas> Chipaca: wouldn't all unit receive the same events ? I do appreciate the problem with distributed clocks. However isn't there a single source of the firing of events - Juju .
[08:33] <Chipaca> bthomas: yes, but my point is there can be no ordering guarantee, so why do you care to ask?
[08:36] <bthomas> Chipaca: If there is a single source of events then the source guarantees ordering. Suppose there is no ordering guarantee but there is the weaker guarantee that a leader_elected is always associated with a leader_settings_changed. Then even with this weaker guarantee it becomes hard to make a case for differentiating the use cases for these two events. An ordering guarantee actually helps differentiate the use cases. Do I make sense?
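To make the distinction concrete, here is a toy model in plain Python — not Juju itself, just an illustration of which unit sees which leadership event: leader_elected fires on the unit that has just become leader, while leader_settings_changed notifies the units that are *not* the leader. The function name and unit names are invented for this sketch.

```python
# Toy model (not Juju) of which unit sees which leadership event.
def leadership_events(units, new_leader):
    """Return {unit: event} for a leadership change among `units`."""
    events = {}
    for unit in units:
        if unit == new_leader:
            events[unit] = "leader_elected"          # only the new leader
        else:
            events[unit] = "leader_settings_changed"  # everyone else
    return events

events = leadership_events(["app/0", "app/1", "app/2"], new_leader="app/1")
print(events["app/1"])  # leader_elected
print(events["app/0"])  # leader_settings_changed
```

The point of the sketch: the two events target disjoint sets of units, so their use cases differ regardless of any ordering guarantee.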
[09:15] <oscar51> I have a question regarding juju's Operator framework
[09:18] <oscar51> does anyone know what exactly triggers "on.config_changed"?
[09:24] <Chipaca> oscar51: running 'juju config <app> <config>=<something>'
[09:25] <oscar51> I see
[09:25] <Chipaca> oscar51: why?
[09:26] <oscar51> I'm having trouble with one of our charms, on.config_changed keeps being called, causing downstream trouble. I'm trying to troubleshoot it
[09:27] <oscar51> but it helps to know that something is calling juju config
[09:27] <Chipaca> oscar51: https://discourse.juju.is/t/charm-hooks/1040#heading--config-changed
[09:27] <Chipaca> oscar51: there are 3 other situations in which it's triggered
[09:28] <Chipaca> oscar51: from your description, assuming you don't have something rogue doing 'juju config', your agent might be dying a lot?
[09:29] <Chipaca> oscar51: what does debug-log show?
[09:30] <oscar51> thank you for those helpful pointers
[09:32] <Chipaca> oscar51: what is the charm in question?
[09:32] <oscar51> I'm using debug-log a lot, it's useful but does not reveal what is going on
[09:32] <oscar51> we have multiple custom charms. our stack is complex
[09:32] <Chipaca> oscar51: have you done 'juju model-config logging-config="<root>=DEBUG"'?
[09:32] <Chipaca> (to up the log level of most everything)
[09:33] <Chipaca> oscar51: is this on k8s?
[09:34] <oscar51> model-config is already set to debug:
[09:34] <oscar51> logging-config                model       <root>=DEBUG;unit=DEBUG
[09:34] <Chipaca> a'ight
[09:34] <oscar51> no, it's not on k8s.
[09:36] <Chipaca> oscar51: if it were the hook itself having issues i'd say try 'juju debug-code', but iiuc the issue is with whatever is triggering config-changed
[09:40] <Chipaca> oscar51: one other thing
[09:40] <oscar51> I see in debug-log that before entering "on.config_changed" juju prints: "Using local storage: /var/lib/juju/agents/unit-*****/charm/.unit-state.db already exists", do you think this might be a cause?
[09:40] <Chipaca> oscar51: your charm itself could be doing .on.config_changed.emit()
[09:41] <oscar51> good point
[09:41] <oscar51> something I'm going to check
[09:41] <Chipaca> oscar51: nah, that's correct for non-k8s
[09:42] <oscar51> I checked the charm code, and there are no emit() at all
[09:42] <Chipaca> oscar51: (on k8s you don't necessarily have non-ephemeral storage on the operator pod so state gets shipped back to the controller)
[09:43] <Chipaca> oscar51: good hunting, let me know what you find
[09:44] <Chipaca> if it turns out there's something we could log to make things easier we can do that
[09:44] <oscar51> I'm continuing to hunt this.. we need to resolve it :)
[10:27] <oscar51> Do you think this might trigger a config event in the operator framework:
[10:27] <oscar51> DEBUG juju.worker.uniter.remotestate update status timer triggered
[10:36] <oscar51> There is also this one:
[10:36] <oscar51> juju.worker.uniter.operation running operation run update-status hook
[10:27] <facubatista> Good morning everyone!
[10:54] <bthomas> namaste facubatista
[10:55] <Chipaca> oscar51: no, that's the update-status event that happens periodically, https://discourse.juju.is/t/charm-hooks/1040#heading--update-status
[10:55] <facubatista> hello bthomas
[10:56] <Chipaca> oscar51: (you can change the periodicity with e.g. «juju model-config update-status-hook-interval=1m»)
[10:56] <Chipaca> facubatista: 👋❗
[10:57] <facubatista> Chipaca, hello
[10:57] <oscar51> Chipaca: I see
[10:57] <Chipaca> oscar51: now, your charm could connect update_status to your config-changed handler
[10:58] <Chipaca> oscar51: i.e., e.g., self.framework.observe(self.on.update_status, self._config_changed)
[10:58] <Chipaca> oscar51: but that's on you :)
[10:58] <oscar51> Chipaca: I will look at the code if that happens
[11:00] <Chipaca> oscar51: but presumably you've observed something like 'juju.worker.uniter.remotestate got config change' in the logs also
[11:06] <oscar51> Chipaca: I appreciate your input. I'm still looking into code and logs
[11:07] <oscar51> I was hoping for something simple, like assigning the on.update to the config function ;)
[11:21] <facubatista> jam, 'juju show-status-log' is not really what we want... it just shows the past changes (no "follow"), and you need to run it in a separate terminal (no mixed lines with the rest of the logs)
[11:22] <facubatista> jam, what would better match what users are doing would be something like "juju debug-log --show-status-changes", for example
[11:22] <facubatista> jam, do you think an option like this would be something feasible for juju to implement?
[12:06] <oscar51> I see these in debug-log, does it mean that the charm is restarted?: DEBUG juju.worker.uniter.runner starting jujuc server  {unix @/var/lib/juju/agents/unit-****-***/agent.socket <nil>}
[12:09] <oscar51> And after that line comes this one: Operator Framework 0.9.0 up and running.
[12:26] <Chipaca> oscar51: the charm isn't a long-lived thing, it runs once per event
[12:27] <Chipaca> oscar51: so you should see the "Operator Framework X.Y.Z up and running" once per event
[12:27] <Chipaca> oscar51: the 'starting jujuc server', I don't know, I'd have to ask somebody with more juju juju
[12:27] <oscar51> Chipaca: okay, thanks
[12:27] <Chipaca> maybe jam is around?
[12:31] <oscar51> is there some way to know what config changed when on.config_changed is called?
[12:36] <Chipaca> oscar51: maybe setting unit=TRACE ?
[12:36] <Chipaca> (i don't know but i'd try that)
[12:36] <oscar51> for debug-log?
[12:37] <Chipaca> oscar51: i mean, juju model-config logging-config="<root>=DEBUG;unit=TRACE"
[12:37] <Chipaca> oscar51: and then debug-log yes
[12:45] <oscar51> running that command required admin access to the controller, so I did it using what was described here: https://juju.is/docs/logs
[12:52] <oscar51> it didn't reveal anything. what is the purpose of the 'event' argument in the events? (talking about the Python package used to write charms)
[12:53] <oscar51> it has the type ops.charm.ConfigChangedEvent, can it be used to figure out what config has changed?
[12:56] <Chipaca> oscar51: not in the event itself, but you can grab the whole config and print it out
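Since ConfigChangedEvent doesn't say *which* key changed, a handler can keep the previous config (e.g. in StoredState) and diff it against the current one. A minimal sketch of that diff, using plain dicts as stand-ins for the charm's config; the function name is invented here:

```python
# Sketch: find which config keys changed between two snapshots.
def config_diff(previous, current):
    """Return {key: (old, new)} for every key whose value changed."""
    keys = set(previous) | set(current)
    return {
        k: (previous.get(k), current.get(k))
        for k in keys
        if previous.get(k) != current.get(k)
    }

old = {"port": 80, "debug": False}
new = {"port": 8080, "debug": False, "workers": 4}
print(config_diff(old, new))  # {'port': (80, 8080), 'workers': (None, 4)} (order may vary)
```

In a real charm the `old` snapshot would be stored in `self._stored` at the end of each config-changed run, then compared on the next one.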
[12:56] <jam> Chipaca, oscar51 : when running a hook, we spin up a worker to respond to the events from the charm, it is expected, and is DEBUG for a reason, since you shouldn't typically care
[12:57] <Chipaca> jam: oscar51 is seeing config-changed again and again
[12:58] <jam> oscar51, Chipaca my guess is something in the framework is calling event.defer() so the framework emits it again before the next hook
[12:58] <oscar51> jam: what does event.defer() do?
[12:58] <oscar51> I saw a defer in the code
[12:58] <Chipaca> jam: queues the event for processing later
[12:58] <Chipaca> er
[12:59] <Chipaca> oscar51: ^
[12:59] <Chipaca> oscar51: so the next time you get e.g. update-status, first you'd handle the deferred event
[12:59] <Chipaca> i should've known to ask this already :-/ thanks jam :)
[13:01] <oscar51> hmm. not sure defer is the problem but I will check the code
[13:04] <jam> it's pretty easy to end up with an infinite defer() loop if the precondition for why you might want to defer never changes. But defer() in config-changed is almost never what you want
[13:04] <oscar51> there is an event.defer() at the end of on_config_changed, do you think it might be the cause?
[13:04] <jam> because you aren't waiting for the same config again, you are waiting for the *next* config-changed with new config
[13:05] <jam> oscar51, that will tell the framework that it should emit on.config_changed before the next hook, and since it defers at the end of the function, it will be queued before the next event as well
[13:05] <oscar51> sounds like it might be the cause. I really wonder why it was typed there in the first place
[13:09] <Chipaca> oscar51: in particular for config-changed, it's almost certainly a bug
[13:10] <Chipaca> and if it's unconditional at the end of a handler, it's also a bug
[13:10] <oscar51> I'm testing out the updated code right now
[13:13] <oscar51> it looks like it was solved, but
[13:14] <oscar51> thanks chipaca and jam for your help
[13:14] <oscar51> very much appreciated
[13:14] <Chipaca> methinks we should log "deferring this event" and "running this event because it was deferred"
[13:14] <jam> Chipaca, if we don't, we definitely should, but I thought there were messages around that
[13:15] <Chipaca> i don't think there are
[13:15] <Chipaca> i think we talked about doing it but didn't
[13:15] <Chipaca> i'll file a bug
[13:17] <Chipaca> #404
[13:17] <mup> Issue #404: log event deferral and re-emission <Created by chipaca> <https://github.com/canonical/operator/issues/404>
[13:17] <mup> Issue operator#404 opened: log event deferral and re-emission <Created by chipaca> <https://github.com/canonical/operator/issues/404>
[13:18] <jam> Chipaca, ah, it is not trivial because *new* events are "record this event, reemit()" vs running deferred events are reemit()
[13:18] <Chipaca> yep
[13:18] <Chipaca> should still be doable :)
[13:18] <jam> Chipaca, I think we could watch for "single_event_path is not None"
[13:20] <Chipaca> jam: looks like that will work yes
[13:27] <facubatista> Chipaca, it looks like your timing comparisons are flaky
[13:27] <facubatista> >               assert p1.stat().st_mtime == p2.stat().st_mtime
[13:27] <facubatista> E               assert 1600257598.0 == 1600257598.6792173
[13:27] <facubatista> https://travis-ci.org/github/canonical/charmcraft/jobs/727674829
[13:27] <Chipaca> aha!
[13:27] <facubatista> my branch failed only for py 3.5 and py 3.7 in arm (amd64 both ok)
[13:28] <Chipaca> looks like it uses utime instead of utimensat
[13:28] <Chipaca> silly copy2
[13:28] <Chipaca> facubatista: i can fix in a bit
[13:28] <facubatista> Chipaca, thanks
[13:28] <Chipaca> facubatista: but you should be able to retry for it to pass for now
[13:29] <facubatista> Chipaca, I still miss a review there, so no rush
[13:29] <facubatista> (actually, I'm two approves behind, so...)
[13:29] <Chipaca> facubatista: I'd change that test to be |p1-p2| < 1
[13:30] <facubatista> Chipaca, you have a pytest.approx()
[13:30]  * Chipaca checks his pockets
[13:30] <Chipaca> no, i probably left it on the piano
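The fix being discussed, sketched with the two timestamps from the failing assert above: copying can truncate sub-second mtime precision (utime vs utimensat), so exact equality is flaky; comparing with an absolute tolerance, e.g. `pytest.approx(..., abs=1)` or stdlib `math.isclose`, is robust. Only stdlib is used here so the sketch runs without pytest.

```python
import math

# The two mtimes from the failing assert: equal to the second,
# differing in sub-second precision after the copy.
m1 = 1600257598.0
m2 = 1600257598.6792173

# Flaky: exact equality fails whenever precision was truncated.
print(m1 == m2)  # False

# Robust: allow up to 1 second of absolute difference, e.g.
#   assert p2.stat().st_mtime == pytest.approx(p1.stat().st_mtime, abs=1)
# or, without pytest:
print(math.isclose(m1, m2, abs_tol=1))  # True
```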
[13:48] <Chipaca> facubatista: what would be a good __str__ for EventBase?
[13:49] <Chipaca> feels like a generic <EventFoo at 0xDEADBEEF> works
[13:49] <Chipaca> or maybe <EventFoo via eventhandle>
[13:56] <facubatista> the type of event for sure, don't know if there's some internal info that could surface
[13:56] <facubatista> the id() of it doesn't make sense, right? as we would not have N of those, and if we defer() it later it will have another id
[13:57] <Chipaca> yeh
[13:57] <Chipaca> eventhandle probably is the best shot
[13:57] <Chipaca> deploying some charms now to get a handle on what that looks like irl
[14:57]  * facubatista -> early lunch before afternoon's meetings storm
[15:05] <axino> hi
[15:06] <axino> is there an easy way to test running a hook with some data in StoredState() ?
[15:12] <bthomas> axino: Not sure what you are asking. But you can always assign a key with an integer counter to your charm's instance of StoredState. Then on each update_status hook, set ActiveStatus with the message being the value of the counter from stored state, and increment the counter in stored state. This way you will see the counter increasing in juju's show-status. I have not tried this.
[15:16] <Chipaca> axino: harness.charm._stored.foo = 42 should dtrt?
[15:17] <deej> Chipaca: I suspect he's looking for something like what i was asking for a couple weeks back, a good way to bulk set state
[15:17] <deej> Ala what you can do with config
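A hypothetical helper in the direction deej describes — bulk-setting stored state from a dict, the way `Harness.update_config` bulk-sets config. This is not part of ops; `FakeStored` stands in for a StoredState object so the sketch runs standalone. In a real test you'd call it as `bulk_set(harness.charm._stored, {...})`.

```python
# Hypothetical bulk-set helper (not an ops API).
class FakeStored:
    """Stand-in for ops StoredState: plain attribute storage."""
    pass

def bulk_set(stored, values):
    """Set every key in `values` as an attribute on `stored`."""
    for key, value in values.items():
        setattr(stored, key, value)
    return stored

stored = bulk_set(FakeStored(), {"foo": 42, "count": 0})
print(stored.foo, stored.count)  # 42 0
```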
[15:27] <bthomas> Chipaca, jam facubatista : Q4) Why does one need the relation_departed hook/event? Current docs say that relation departed is "to give the charm an opportunity to clean up any optimistically-generated configuration." But it is not clear what optimistically generated configuration is.
[15:28] <jam> bthomas, several reasons. Wordpress is using Postgresql as its storage, wordpress is going away, Postgresql needs to know that it can delete those dbs.
[15:28] <jam> bthomas, Cassandra cluster was scaled up to 7 nodes, it is being scaled down to 5
[15:29] <jam> cassandra 0-5 need to know that 6/7 are going away, and should no longer be used as part of the cluster.
[15:29] <Chipaca> cassandra going away parties are *wild*
[15:29] <bthomas> jam : but there is relation departed and relation broken. Sorry, I pasted the explanation for relation broken as relation departed. Nevertheless I am unclear as to why both are required. BTW standup now
[15:30] <jam> bthomas, I have a different meeting now, so won't be there.
[15:30] <bthomas> no worries, will catch up with you later
[15:30] <jam> departed is per unit
[15:30] <jam> broken is the overall relation to the application
[15:30] <bthomas> ok thank you
[15:31] <bthomas> will think a bit more about this.
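jam's distinction can be sketched in plain Python (lists standing in for Juju): relation_departed fires once per unit leaving the relation, while relation_broken fires once, only when the relation itself is gone. The function and unit names are invented for this toy model.

```python
# Toy model: per-unit departed events vs a single broken event.
def scale_down(units, remaining):
    """Return the events seen when `units` shrinks to `remaining`."""
    events = [f"relation_departed:{u}" for u in units if u not in remaining]
    if not remaining:
        events.append("relation_broken")  # whole relation is gone
    return events

# Cassandra scaled from 7 nodes to 5: two departed events, no broken.
print(scale_down([f"c/{i}" for i in range(7)],
                 [f"c/{i}" for i in range(5)]))
# ['relation_departed:c/5', 'relation_departed:c/6']

# Relation removed entirely: departed for each unit, then broken.
print(scale_down(["wordpress/0"], []))
# ['relation_departed:wordpress/0', 'relation_broken']
```

So Postgresql would use departed to stop serving an individual wordpress unit, and broken to know it can delete the databases for the whole application.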
[15:31]  * facubatista is back
[15:46] <mup> Issue operator#405 opened: Harness begin_with_initial_hooks() fails when no relations added <Created by zzehring> <https://github.com/canonical/operator/issues/405>
[17:19] <mup> PR operator#406 opened: Add empty list as default return if no relation ids in map <Created by zzehring> <https://github.com/canonical/operator/pull/406>
[18:36] <mup> Issue operator#405 closed: Harness begin_with_initial_hooks() fails when no relations added <Created by zzehring> <Closed by chipaca> <https://github.com/canonical/operator/issues/405>
[18:36] <mup> PR operator#406 closed: Add empty list as default return if no relation ids in map <Created by zzehring> <Merged by chipaca> <https://github.com/canonical/operator/pull/406>
[20:12] <mup> Issue operator#407 opened: Have a quick "what you should look for" as packaging docs <Created by jameinel> <https://github.com/canonical/operator/issues/407>