[04:05] <Dmitrii-Sh> o/
[04:50] <jam> morning Dmitrii-Sh
[04:51] <Dmitrii-Sh> morning jam
[07:10] <jam> Dmitrii-Sh, feedback given on https://github.com/canonical/operator/pull/212
[07:14] <Dmitrii-Sh> jam: ty, I'll spend some time on it. I added .begin() in different test cases to be able to get to the charm metadata (so that I can check whether a relation is a peer relation or not)
[07:15] <jam> Dmitrii-Sh, I believe we have self._meta, so we don't need self.charm.meta, IIRC
[07:16] <Dmitrii-Sh> jam: ok, I think I missed this. If so, I can replace the check
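A minimal sketch of the check being discussed, assuming the framework-held metadata is an ops CharmMeta, which keeps peer relations in a `peers` mapping; the `_is_peer` helper name is illustrative, not from the actual PR:

```python
# Hypothetical helper: consult the framework's own CharmMeta instead of
# calling harness.begin() just to reach self.charm.meta.
def _is_peer(self, relation_name):
    # CharmMeta stores peer relations in a dict keyed by relation name.
    return relation_name in self._meta.peers
```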
[07:52] <niemeyer> Good morning folks
[07:58] <jam> morning
[08:29] <Chipaca> good morning charmcraft!
[08:29] <niemeyer> Morning!
[08:33] <jam> morning
[10:01] <Chipaca> jam: having some hardware issues, be there in a minute
[10:23] <vgrevtsev> hi everyone - is this the right place for new developers to ask questions about the new framework? :)
[10:33] <Chipaca> vgrevtsev: yep!
[10:34] <vgrevtsev> Chipaca: good to know, thanks - just starting to write my first charm, so I'll definitely have some questions. As far as I understand, there is no single doc/howto, only READMEs from the different ops charms with some hidden knowledge?
[10:35] <Chipaca> vgrevtsev: yes :(
[10:35] <Chipaca> vgrevtsev: working on fixing that, but for now, yes
[11:23] <facubatista> A very good morning to everyone!
[11:24] <Chipaca> facubatista: 👋
[11:25] <facubatista> hello Chipaca!
[11:25] <Dmitrii-Sh> facubatista: o/
[11:25] <facubatista> hello Dmitrii-Sh!
[11:41] <facubatista> jam, I feel like the previous version of the branch was clearer: if the envvar is set, the logging setup is told to go into debug mode, which means two decisions that belong to the logging setup: a) the logger at debug level, b) also a handler to stderr
[11:42] <jam> on a call, will chat with you in a sec
[11:42] <facubatista> jam, now it feels mixed: it looks like which stream to use is decided outside the logging setup, and that stream is passed to the logging setup, which, when it's present, not only builds a handler for it (that part is fine) but also changes the level of the logger
[11:42]  * niemeyer => lunch
[11:46]  * Chipaca ⇝ also lunch
[12:08] <jam> facubatista, so I don't have a huge problem with the fact that setting a 'debug_stream' means enabling debug messages. That said, since the log handler already filters at INFO, we could just have the log level always DEBUG?
[12:08] <jam> the fact that handlers have separate levels from loggers is always a bit wonky
[12:09] <facubatista> different levels for handlers and loggers is a good feature
[12:09] <facubatista> they filter different levels for different purposes
[12:09] <facubatista> I mean, they have different semantics
[12:11] <facubatista> jam, we can put the JujuLogHandler always in DEBUG, as it just sends stuff to juju, which also gives the user a way to select the level
[12:11] <facubatista> but the logger must be in INFO by default
[12:16] <facubatista> jam, actually, this affects that interaction: when the user jumps into a debug hook, juju will start getting DEBUG logs, as we're changing the logger level, which I think is fine
[12:16] <facubatista> the default is still info
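A minimal sketch of the arrangement facubatista describes, assuming a juju-log handler pinned at DEBUG and a root logger that defaults to INFO but flips to DEBUG when a debug stream is supplied; the `JujuLogHandler` here is a stand-in, not the real operator class, and the `JUJU_DEBUG` check is an assumption about how the envvar would be consumed:

```python
import logging
import os
import sys

class JujuLogHandler(logging.Handler):
    """Stand-in for a handler that forwards records to the juju-log hook tool."""
    def emit(self, record):
        print("juju-log:", self.format(record))  # real code would invoke juju-log

def setup_logging(debug_stream=None):
    logger = logging.getLogger()
    juju_handler = JujuLogHandler()
    juju_handler.setLevel(logging.DEBUG)   # juju lets the user pick a level itself
    logger.addHandler(juju_handler)
    logger.setLevel(logging.INFO)          # the default stays INFO
    if debug_stream is not None:           # e.g. inside a debug-hooks session
        logger.addHandler(logging.StreamHandler(debug_stream))
        logger.setLevel(logging.DEBUG)     # juju then starts receiving DEBUG too

setup_logging(sys.stderr if os.environ.get('JUJU_DEBUG') else None)
```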
[12:33] <facubatista> jam, so?
[12:34] <jam> facubatista, so... that isn't how it currently works, so I'm trying to parse out a bit of how you're suggesting we should make it work, and the various pieces at play.
[12:35] <facubatista> jam, this is related: https://github.com/canonical/operator/issues/198
[12:35] <facubatista> we need to keep in mind how this will end up as a whole
[12:38] <Dmitrii-Sh> https://github.com/canonical/operator/pull/216 - sent another interface for review. I'll need to modify the existing charms to adjust to this change.
[12:38] <Dmitrii-Sh> Of note: as I introduce more complex types into interfaces, I feel the need to use some form of serialization. For example, even using timedate.delta for various health-checking timeouts requires me to do that.
[12:43] <Chipaca> datetime.timedelta?
[12:43] <Dmitrii-Sh> yes
[12:44] <Dmitrii-Sh> got it backwards while writing :^\
[12:44] <Chipaca> :)
[12:45] <jam> Dmitrii-Sh, if it is a timedelta, why wouldn't you just serialize it as floating-point seconds?
[12:48] <Chipaca> jam: I'm assuming that's "some form of serialization"
[12:49] <Dmitrii-Sh> jam: in this particular case I could convert the datetime.timedelta to a float and also expose a property on the type that does the reverse
[12:50] <jam> Chipaca, fair enough, though there is a fair difference between saying "use a float" and "I need to encode things using JSON because I want to round trip a serialized form"
[12:50] <jam> Chipaca, IIRC, charms.reactive defaulted to translating all relation-set values into JSON blobs
[12:51] <Chipaca> ew :)
[12:52] <facubatista> jam, can you help me with something? As IIUC you already tested this IRL... see this: http://linkode.org/#ipk9mNkD5aiE8vA78xkUD1 <-- there I exercised debug-hooks, set up the envvar myself, ran the hook, and got the pdb prompt..
[12:52] <facubatista> jam, of course those two lines will not exist; if I write a message, will it appear after the other one (like in node 2 of that linkode), or alone (like in node 3)?
[12:52] <Dmitrii-Sh> @property
[12:52] <Dmitrii-Sh> def timeout(self):
[12:52] <Dmitrii-Sh>     return self._timeout.total_seconds()
[12:53] <Dmitrii-Sh> something like that ^
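Spelled out a bit more, the float-seconds round trip might look like the sketch below; the `HealthCheck` class and its default value are illustrative, not taken from PR #216:

```python
import datetime

class HealthCheck:
    def __init__(self, timeout=datetime.timedelta(seconds=30)):
        self._timeout = timeout

    @property
    def timeout(self):
        # Serialize: relation data only holds strings, so store plain seconds.
        return str(self._timeout.total_seconds())

    @timeout.setter
    def timeout(self, value):
        # Deserialize: rebuild the timedelta from the float-seconds string.
        self._timeout = datetime.timedelta(seconds=float(value))

hc = HealthCheck()
hc.timeout = hc.timeout  # round-trips through the serialized form
assert hc._timeout == datetime.timedelta(seconds=30)
```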
[12:53] <jam> facubatista, so you'll need juju 2.8 from the edge snap, and the command is "juju debug-code" if you want to test the new behavior.
[12:53] <jam> facubatista, but the new behavior is in (3)
[12:53] <jam> I intentionally skip the Juju message, because it isn't helpful when you are ending up in PDB
[12:55] <Dmitrii-Sh> jam: but, yes, to use the same type on both sides of the relation I need something to serialize/deserialize more complex objects, whether it's pickle, JSON, or other boilerplate code to fit the data into relation data bags
[12:55] <Dmitrii-Sh> the question is how complex do we want it to be for somebody implementing an interface
[12:56] <jam> Dmitrii-Sh, given the whole point of interfaces is to be the point of abstraction, I certainly wouldn't want Python types on the wire
[12:57] <jam> even pickle between Python processes isn't safe, because one might be Python 3.5 running on Xenial and the other Python 3.8 running on Focal
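This is the case for JSON (or plain strings) as the wire format: it stays language- and version-agnostic, unlike pickle, which ties both sides to compatible Python versions. A trivial illustration, with made-up keys:

```python
import json

payload = {"timeout": 30.0, "retries": 3}
wire = json.dumps(payload)            # what relation-set would carry
assert json.loads(wire) == payload    # the remote side reads it back intact
```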
[12:58] <jam> facubatista, if you want to test it, you should be able to do "snap install --channel=edge juju --classic" and then 'juju bootstrap lxd' should leave you with a '2.8beta1' controller.
[12:58] <Dmitrii-Sh> jam: yes, plus the code on both sides of a relation may be of different revisions
[12:58] <facubatista> jam, so, it would be better to have a more complex text, like http://linkode.org/#Pu9DEIIWWybq5Q2eDSelJ
[12:59] <jam> facubatista, yeah, I think something like that.
[12:59] <facubatista> Chipaca, it worries me that we'd be lying in line 6 ^
[12:59] <jam> is there a way we can tell ^D/exit was done and make sure the hook exits nonzero?
[12:59] <facubatista> jam, can I upgrade juju and keep the same controller? how does that work?
[13:00] <jam> facubatista, if you have the new snap, you might be able to "juju upgrade-controller". However, if it is a 2.7 stable controller, getting it to a 2.8 beta can be a bit trickier, IIRC.
[13:02] <facubatista> jam, I can totally destroy and re-bootstrap, I just don't want to do that very often; do you think I will be able to just keep 2.8beta1 on my system?
[13:02] <jam> Chipaca, just make him write the documentation before he can land his code, then it won't be a lie.
[13:03] <jam> Though I believe the official discourse just switched to discourse.juju.is
[13:03] <jam> like 3 days ago or something.
[13:04] <jam> facubatista, until we hit official beta, Juju doesn't guarantee compatibility/upgrades. While it is coming real-soon-now and I don't see any particular problems, I certainly wouldn't use it for anything you need to keep around.
[13:04] <jam> other than that, things in 'edge' have passed the unit test suite, but not necessarily the full CI regression suite.
[13:05] <jam> (I've always tended to run straight off develop/or a feature branch rather than edge, so I don't have a lot of experience with it)
[13:12] <facubatista> jam, ack, thanks
[14:12] <Dmitrii-Sh> niemeyer: just sent an invite - I hope you can make it.
[14:19] <niemeyer> Dmitrii-Sh: Thanks! If the meeting doesn't run over I will make it
[14:19] <facubatista> niemeyer, I put the message here: https://docs.google.com/document/d/1H_3V19XGnEvUtE_tk2CFWmtiW8Zc4y1lvxTPz55VSJY/edit (the original one, and a newer already improved version below)
[14:23] <niemeyer> facubatista: Thanks, let's sync later today as your timezone is friendly to that :)
[14:26] <facubatista> niemeyer, +1
[15:03] <crodriguez> Dmitrii-Sh: niemeyer: I need to dig more into the mssql components and the solution I want to put in place before asking questions about how to transpose that to the framework itself. The more I read about it, the less sure I am of what I want to do, haha. I will dig into that more and maybe we can reschedule the call in a few days?
[15:04] <crodriguez> It's a bit complex since the mssql components are only in preview mode for docker, with no official doc about transposing them to k8s itself, so I'm experimenting
[15:09] <Dmitrii-Sh> crodriguez: I think we can still have a call (even if it's going to be short) just to go over what's involved at a high level
[15:09] <Dmitrii-Sh> and then dive into the details as we get more info
[15:10] <crodriguez> Sure Dmitrii-Sh
[15:10] <niemeyer> crodriguez: Yeah, I'd appreciate even just listening to what you've been up to, so we can learn about your initial views on the problems you're working through
[15:11] <crodriguez> okay np :) I just didn't want to waste anyone's time since I'm still figuring things out
[15:18] <niemeyer> crodriguez: No worries.. that's exactly the sort of opinion I'd be happy to learn about.. once you learn a bit more about the problem space and the candidate solutions, these early feelings will be replaced by feasible alternatives given what you've learned
[15:20] <niemeyer> crodriguez: Some of the idiosyncrasies might be worth fixing, though, if we can learn about them
[16:05] <vgrevtsev> hi Dmitrii-Sh - are you able to have a quick sync now?
[16:05] <Dmitrii-Sh> crodriguez: thanks for the feedback, feel free to reach out going forward
[16:05] <Dmitrii-Sh> vgrevtsev: sure, just finished the last meeting
[16:05] <vgrevtsev> Dmitrii-Sh: https://meet.google.com/sxc-wcti-vko?authuser=1
[16:07] <niemeyer> Chipaca: How's the Office Hours thing going?
[16:07] <Chipaca> niemeyer: still need to meet with tim v and look at when and how to do it (soon)
[16:08] <niemeyer> Chipaca: Ack, thanks
[16:49] <Chipaca> good news is i know what i'm charming :-)
[16:56] <vgrevtsev> many thanks Dmitrii-Sh - it's much clearer for me now :)
[16:56]  * vgrevtsev starts charming
[16:56] <Dmitrii-Sh> cheers
[17:11] <facubatista> mmm... juju bootstrap crashed with juju from edge, will ask in #juju
[17:16]  * Chipaca EODs
[17:20] <narindergupta> Hi team
[17:21] <crodriguez> o/
[17:22] <narindergupta> When I scale up my zookeeper or Kafka, pods do stop and start.
[17:22] <narindergupta> And when I run juju status I get a new juju unit
[17:23] <narindergupta> While the pod name is the same
[17:23] <narindergupta> For example kafka-k8s/0 becomes kafka-k8s/4
[17:23] <narindergupta> While the pod name remains the same, kafka-k8s-0
[17:24] <narindergupta> Due to that, sometimes the storage volume gets stuck and the new pod does not start successfully
[17:24] <narindergupta> Any ideas?
[17:37] <niemeyer> narindergupta: So you mean pods get destroyed and recreated but remain with the same name?
[17:37] <narindergupta> I am scaling up and there is a change in the spec
[17:37] <narindergupta> Juju status says pods stop and pods start
[17:38] <narindergupta> And I am expecting the pod name to remain the same, which it does
[17:38] <narindergupta> But the juju unit changes
[17:38] <narindergupta> From kafka/0 to kafka/4
[17:38] <narindergupta> niemeyer, ^
[17:40] <niemeyer> narindergupta: That's a bit awkward.. I wasn't involved in the design of the k8s implementation so I don't yet know how that works internally, but I can't imagine how the unit would get destroyed and recreated but the pod would remain the same
[17:40] <narindergupta> That's what is happening with the operator framework; haven't tried with reactive though
[17:41] <niemeyer> narindergupta: The operator framework doesn't have a say in whether the entire unit is getting destroyed.. it's living inside the unit
[17:42] <narindergupta> So you are suggesting it is juju behavior?
[17:42] <narindergupta> We are calling set_spec whenever config changes
[17:43] <narindergupta> self.model.pod.set_spec
[17:43] <niemeyer> narindergupta: Yes, the operator framework is a library to help you implement the charm.. a charm is not deployed until juju says it should be
[17:43] <niemeyer> narindergupta: set_spec should not destroy your unit and recreate it
[17:43] <narindergupta> It is doing it
[17:44] <narindergupta> Not the first time
[17:44] <narindergupta> But on config changes
[17:44] <narindergupta> But when adding a new unit
[17:46] <narindergupta> As adding a unit in zookeeper requires changing the number of units as part of the zookeeper config
[17:46] <narindergupta> So I have to call set_spec during scale-up
[17:46] <narindergupta> Which is causing it to destroy the old unit and create a new one while the pod remains the same
[17:53] <narindergupta> Actually, whenever a pod stops during an upgrade, the Kafka unit gets terminated
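For context, the pattern narindergupta describes boils down to something like the sketch below (using the current ops observe style); `ZookeeperCharm`, `_build_pod_spec`, and the spec contents are illustrative names, not the actual charm code:

```python
from ops.charm import CharmBase
from ops.main import main

class ZookeeperCharm(CharmBase):
    """Illustrative only; a stand-in for the zookeeper/kafka k8s charm."""

    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        # Re-rendering the spec on every config change is what coincides
        # with the pod restarts and the unexpected unit renumbering.
        if self.unit.is_leader():          # only the leader may set the spec
            self.model.pod.set_spec(self._build_pod_spec())

    def _build_pod_spec(self):
        # Hypothetical spec; the new unit count would be folded into the
        # zookeeper config here during scale-up.
        return {
            'containers': [{
                'name': 'zookeeper',
                'imageDetails': {'imagePath': 'zookeeper:latest'},
            }],
        }

if __name__ == '__main__':
    main(ZookeeperCharm)
```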
[17:54] <niemeyer> narindergupta: We need someone that knows more deeply how the juju integration works.. what's your timezone?
[17:54] <narindergupta> I am US CST
[17:54] <niemeyer> Ok, so about 1PM now
[17:55] <niemeyer> narindergupta: How about we try to get hold of someone first thing tomorrow in your morning?
[17:55] <narindergupta> Sounds good to me
[17:55] <niemeyer> narindergupta: I'm keen on being present to understand how this is being done as well.. the behavior you describe doesn't seem to make sense, or I have a broken model in mind. So I'm keen to learn too.
[17:56] <narindergupta> Sure that would be helpful
[17:56] <niemeyer> narindergupta: Alright, can you please ping tomorrow when you get online?
[17:56] <narindergupta> Ok sounds good
[17:56] <niemeyer> narindergupta: Deal, thanks
[17:57] <niemeyer> facubatista: Are you around for that last call?
[17:57] <facubatista> niemeyer, otp, what about in 10'? or are you EODing already? I can hang...
[17:57] <niemeyer> facubatista: 10' is fine
[17:58] <facubatista> niemeyer, thanks
[17:58] <niemeyer> np
[18:14] <facubatista> niemeyer, ready
[18:14] <facubatista> niemeyer, https://meet.google.com/veq-yfqm-kdk?
[18:14] <niemeyer> omw
[20:29]  * facubatista eods