#smooth-operator 2020-04-06
<Dmitrii-Sh> o/
<jam> morning Dmitrii-Sh
<Dmitrii-Sh> morning jam
<jam> Dmitrii-Sh, feedback given on https://github.com/canonical/operator/pull/212
<Dmitrii-Sh> jam: ty, I'll spend some time on it. I added .begin() in different test cases to be able to get to charm metadata (so that I can check whether a relation is a peer relation or not)
<jam> Dmitrii-Sh, I believe we have self._meta we don't need self.charm.meta IIRC
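The peer-relation check being discussed could look something like this; a minimal sketch where `meta_peers` stands in for the parsed `peers` section of metadata.yaml (the names are illustrative, not the actual operator API):

```python
# Minimal sketch of a peer-relation check against charm metadata.
# `meta_peers` stands in for the parsed `peers` mapping of metadata.yaml;
# names here are illustrative, not the real operator API.
def is_peer_relation(meta_peers, relation_name):
    """True if relation_name is declared under `peers` in metadata.yaml."""
    return relation_name in meta_peers

# Usage with a plain dict standing in for the parsed metadata:
peers = {'cluster': {'interface': 'cockroachdb-peer'}}
print(is_peer_relation(peers, 'cluster'))  # declared as a peer relation
print(is_peer_relation(peers, 'db'))       # not declared as a peer
```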
<Dmitrii-Sh> jam: ok, I think I missed this. If so, I can replace the check
<niemeyer> Good morning folks
<jam> morning
<Chipaca> good morning charmcraft!
<niemeyer> Moin!
<jam> morning
<Chipaca> jam: having some hardware issues, be there in a minute
<vgrevtsev> hi everyone - is this a right place for a new developers to ask their questions about the new framework? :)
<Chipaca> vgrevtsev: yep!
<vgrevtsev> Chipaca: good to know, thanks - just starting writing my first charm, so I'll definitely have some questions around. As far as I understood, there are no single doc/howto, only READMEs from the different ops charms with some hidden knowledge?
<Chipaca> vgrevtsev: yes :(
<Chipaca> vgrevtsev: working on fixing that, but for now, yes
<facubatista> Good morning everyone!
<Chipaca> facubatista: ð
<facubatista> hi Chipaca!
<Dmitrii-Sh> facubatista: o/
<facubatista> hi Dmitrii-Sh!
<facubatista> jam, I feel like the previous version of the branch was clearer: if the envvar is set, the logging setup is told to go to debug mode, which implies two decisions that belong to the logging setup: a) logger in debug level, b) also a handler to stderr
<jam> on a call, will chat with you in a sec
<facubatista> jam, now it feels mixed: which stream to use is decided outside the logging setup and then passed in, and when present it not only builds a handler for that stream (that part is fine) but also changes the level of the logger
 * niemeyer => lunch
 * Chipaca => also lunch
<jam> facubatista, so I don't have a huge problem with the fact that setting a 'debug_stream' means enabling debug messages. That said, since the log handler already filters at INFO, we could just have the log level always DEBUG?
<jam> the fact that handlers have separate levels from loggers is always a bit wonky
<facubatista> different levels for handlers and loggers is a good feature
<facubatista> they filter different levels for different purposes
<facubatista> I mean, they have different semantics
<facubatista> jam, we can put the JujuLogHandler always in DEBUG, as it just sends stuff to juju, which also presents the user a way to select the level
<facubatista> but the logger must be in INFO by default
<facubatista> jam, actually, this affects that interaction: when the user jumps into a debug hook, juju will start getting DEBUG logs, as we're changing the logger level, which I think is fine
<facubatista> the default is still info
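The division of labor being discussed can be sketched with stdlib logging alone (the logger name and stream are illustrative): the handler stays at DEBUG and the logger's own level does the filtering.

```python
import io
import logging

# Sketch of the setup under discussion: the handler that would forward
# records to juju-log stays at DEBUG (Juju filters on its own side),
# while the logger's level (INFO by default) decides what gets emitted.
stream = io.StringIO()
logger = logging.getLogger('demo-charm')
logger.setLevel(logging.INFO)        # default; flipped to DEBUG under debug-hooks
handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)      # handler passes through everything it receives
logger.addHandler(handler)

logger.debug('dropped')   # filtered out by the logger's INFO level
logger.info('kept')       # reaches the handler
print(stream.getvalue())
```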
<facubatista> jam, so?
<jam> facubatista, so... that isn't how it currently works, so I'm trying to parse out a bit of what you're suggesting about how we should make it work, and the various pieces at play.
<facubatista> jam, this is related: https://github.com/canonical/operator/issues/198
<facubatista> we need to keep in mind how this will end up as a whole
<Dmitrii-Sh> https://github.com/canonical/operator/pull/216 - sent another interface for review. I'll need to modify the existing charms to adjust to this change.
<Dmitrii-Sh> Of note: as I introduce more complex types into interfaces, I feel the need to use some form of serialization. For example, even using timedate.delta for various health-checking timeouts requires me to do that.
<Chipaca> datetime.timedelta?
<Dmitrii-Sh> yes
<Dmitrii-Sh> got it backwards while writing :^\
<Chipaca> :)
<jam> Dmitrii-Sh, if it is a timedelta, why wouldn't you just serialize it as floating-point seconds?
<Chipaca> jam: I'm assuming that's "some form of serialization"
<Dmitrii-Sh> jam: in this particular case I could convert the datetime.timedelta to a float and also expose a property on a type that does the reverse
<jam> Chipaca, fair enough, though there is a fair difference between saying "use a float" and "I need to encode things using JSON because I want to round trip a serialized form"
<jam> Chipaca, IIRC, charms.reactive defaulted to translating all relation-set values into JSON blobs
<Chipaca> ew :)
<facubatista> jam, can you help me with something? IIUC you already tested this IRL... see this: http://linkode.org/#ipk9mNkD5aiE8vA78xkUD1 <-- there I exercised debug-hooks, set the envvar myself and ran the hook, and got the pdb prompt..
<facubatista> jam, of course those two lines will not exist; if I write a message, will it appear after the other one (like in node 2 of that linkode), or alone (like in node 3)?
<Dmitrii-Sh> @property
<Dmitrii-Sh> def timeout(self):
<Dmitrii-Sh>     return self._timeout.total_seconds()
<Dmitrii-Sh> something like that ^
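A hedged round-trip sketch of that idea (the class and attribute names are hypothetical): store the timedelta as plain float seconds, and rebuild it on the way back in.

```python
from datetime import timedelta

# Hypothetical round-trip for the timeout above: relation data can only
# carry simple values, so serialize the timedelta as float seconds and
# reverse it with a setter. Class and attribute names are made up here.
class HealthCheck:
    def __init__(self, timeout):
        self._timeout = timeout

    @property
    def timeout(self):
        # serialize: plain float seconds, safe to put in a data bag
        return self._timeout.total_seconds()

    @timeout.setter
    def timeout(self, seconds):
        # deserialize: accept str or float coming back off the wire
        self._timeout = timedelta(seconds=float(seconds))

hc = HealthCheck(timedelta(minutes=2))
print(hc.timeout)        # 120.0
hc.timeout = '30.5'      # value as it would arrive from relation data
print(hc._timeout)       # 0:00:30.500000
```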
<jam> facubatista, so you'll need juju 2.8 from the edge snap, and the command is "juju debug-code" if you want to test the new behavior.
<jam> facubatista, but the new behavior is in (3)
<jam> I intentionally skip the Juju message, because it isn't helpful when you are ending up in PDB
<Dmitrii-Sh> jam: but, yes, to use the same type on both sides of the relation I need something to serialize/deserialize more complex objects, whether it's pickle, JSON, or other boilerplate code to fit the data into relation data bags
<Dmitrii-Sh> the question is how complex do we want it to be for somebody implementing an interface
<jam> Dmitrii-Sh, given the whole point of interfaces is to be the point of abstraction, I certainly wouldn't want Python types on the wire
<jam> even pickle between Python processes isn't safe, because one might be Python 3.5 running on Xenial and the other Python 3.8 running on Focal
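The JSON approach being contrasted with pickle could look like this; a sketch with illustrative function names, encoding each relation-data value as a JSON string so either side can decode it regardless of Python version.

```python
import json

# Sketch of the JSON approach discussed above: encode each relation-data
# value as a JSON string so both sides of the relation can decode it
# regardless of Python version. Function names here are illustrative.
def encode_bag(data):
    return {key: json.dumps(value) for key, value in data.items()}

def decode_bag(bag):
    return {key: json.loads(value) for key, value in bag.items()}

original = {'timeout': 30.5, 'hosts': ['10.0.0.1', '10.0.0.2']}
wire = encode_bag(original)          # every value is now a plain string
assert all(isinstance(v, str) for v in wire.values())
print(decode_bag(wire) == original)  # lossless round trip for JSON types
```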
<jam> facubatista, if you want to test it, you should be able to do "snap install --channel=edge juju --classic" and then 'juju bootstrap lxd' should leave you with a '2.8beta1' controller.
<Dmitrii-Sh> jam: yes, plus the code on both sides of a relation may be of different revisions
<facubatista> jam, so it would be better to have a more complete text, like http://linkode.org/#Pu9DEIIWWybq5Q2eDSelJ
<jam> facubatista, yeah, I think something like that.
<facubatista> Chipaca, it worries me that we're lying in line 6 ^
<jam> is there a way we can tell ^D/exit was done and make sure the hook exits nonzero?
<facubatista> jam, can I upgrade juju and keep the same controller? how does that work?
<jam> facubatista, if you have the new snap, you might be able to "juju upgrade-controller". However, if it is a 2.7 stable controller, getting it to a 2.8 beta can be a bit trickier, IIRC.
<facubatista> jam, I can totally destroy and re-bootstrap, I just don't want to do that very often; do you think I will be able to just keep 2.8beta1 on my system?
<jam> Chipaca, just make him write the documentation before he can land his code, then it won't be a lie.
<jam> Though I believe the official discourse just switched to discourse.juju.is
<jam> like 3 days ago or something.
<jam> facubatista, until we hit official beta, Juju doesn't guarantee compatibility/upgrades. While it is coming real-soon-now and I don't see any particular problems, I certainly wouldn't use it for anything you need a guarantee will stick around.
<jam> other than that, things in 'edge' have passed the unit test suite, but not necessarily the full CI regression suite.
<jam> (I've always tended to run straight off develop/or a feature branch rather than edge, so I don't have a lot of experience with it)
<facubatista> jam, ack, thanks
<Dmitrii-Sh> niemeyer: just sent an invite - I hope you can make it.
<niemeyer> Dmitrii-Sh: Thanks! If the meeting doesn't run over I will make it
<facubatista> niemeyer, I put the message here: https://docs.google.com/document/d/1H_3V19XGnEvUtE_tk2CFWmtiW8Zc4y1lvxTPz55VSJY/edit (the original one, and a newer already improved version below)
<niemeyer> facubatista: Thanks, let's sync later today as your timezone is friendly to that :)
<facubatista> niemeyer, +1
<crodriguez> Dmitrii-Sh: niemeyer : I need to dig more into the mssql components and the solution I want to put in place before asking questions about how to transpose that to the framework itself. The more I read about it the less I'm sure of what I want to do haha. I will dig more about that and maybe we can reschedule the call in a few days?
<crodriguez> It's a bit complex since the mssql components are only in preview mode for docker, no official doc about transposing it to k8s itself, so I'm experimenting
<Dmitrii-Sh> crodriguez: I think we can still have a call (even if it's going to be short) just to go over what's involved at a high level
<Dmitrii-Sh> and then dive into the details as we get more info
<crodriguez> Sure Dmitrii-Sh
<niemeyer> crodriguez: Yeah, I'd appreciate even just listening to what you've been up to, so we can learn about your initial views on the problems you're working through
<crodriguez> okay np :) I just didn't want to waste anyone's time since I'm still figuring things out
<niemeyer> crodriguez: No worries.. that's exactly the sort of opinion I'd be happy to learn about.. once you learn a bit more about the problem space and the candidate solutions, these early feelings will be replaced by feasible alternatives given what you've learned
<niemeyer> crodriguez: Some of the idiosyncrasies might be worth fixing, though, if we can learn about them
<vgrevtsev> hi Dmitrii-Sh - are you able to have a quick sync now?
<Dmitrii-Sh> crodriguez: thanks for the feedback, feel free to reach out going forward
<Dmitrii-Sh> vgrevtsev: sure, just finished the last meeting
<vgrevtsev> Dmitrii-Sh: https://meet.google.com/sxc-wcti-vko?authuser=1
<niemeyer> Chipaca: How's the Office Hours thing going?
<Chipaca> niemeyer: still need to meet with tim v and look at when and how to do it (soon)
<niemeyer> Chipaca: Ack, thanks
<Chipaca> good news is i know what i'm charming :-)
<vgrevtsev> many thanks Dmitrii-Sh - it's much clearer for me now :)
 * vgrevtsev starts charming
<Dmitrii-Sh> cheers
<facubatista> mmm... juju bootstrap crashed for juju from edge, will ask in #juju
 * Chipaca EODs
<narindergupta> Hi team
<crodriguez> o/
<narindergupta> When I scale up my zookeeper or Kafka, pods stop and start.
<narindergupta> And when I run juju status I get a new juju unit
<narindergupta> While the pod name is the same
<narindergupta> For example kafka-k8s/0 becomes kafka-k8s/4
<narindergupta> While the pod name remains the same: kafka-k8s-0
<narindergupta> Due to that, sometimes the storage volume gets stuck and the new pod does not start successfully because the volume is stuck
<narindergupta> Any ideas?
<niemeyer> narindergupta: So you mean pods get destroyed and recreated but remain with the same name
<niemeyer> ?
<narindergupta> I am scaling up and there is a change in the spec
<narindergupta> Juju status says pods stop and pods start
<narindergupta> And I expect the pod name to remain the same, which it does
<narindergupta> But juju unit changes
<narindergupta> From Kafka/0 to Kafka/4
<narindergupta> niemeyer, ^
<niemeyer> narindergupta: That's a bit awkward.. I wasn't involved in the design of the k8s implementation so I don't yet know how that works internally, but I can't imagine how the unit would get destroyed and recreated but the pod would remain the same
<narindergupta> That's what is happening with the operator framework; I haven't tried with reactive though
<niemeyer> narindergupta: The operator framework doesn't have a say in whether the entire unit gets destroyed.. it lives inside the unit
<narindergupta> So you are suggesting it is juju behavior?
<narindergupta> We are calling set_spec whenever config changes
<narindergupta> self.model.pod.set_spec
<niemeyer> narindergupta: Yes, the operator framework is a library to help you implement the charm.. a charm is not deployed until juju says it should be
<niemeyer> narindergupta: set_spec should not destroy your unit and recreate it
<narindergupta> It is doing it
<narindergupta> Not the first time
<narindergupta> But on config changes
<narindergupta> And when adding a new unit
<narindergupta> Adding a unit in zookeeper requires changing the number of units as part of the zookeeper config
<narindergupta> So I have to call set_spec during scale-up
<narindergupta> Which causes it to destroy the old unit and create a new one while the pod remains the same
<narindergupta> Actually, whenever a pod stops during an upgrade, the Kafka unit gets terminated
<niemeyer> narindergupta: We need someone that knows more deeply how the juju integration works.. what's your timezone?
<narindergupta> I am US CST
<niemeyer> Ok, so about 1PM now
<niemeyer> narindergupta: How about we try to get hold of someone first thing tomorrow in your morning?
<narindergupta> Sounds good to me
<niemeyer> narindergupta: I'm keen on being present to understand as well how this is being done.. the behavior you describe doesn't seem to make sense, or I have a broken model in mind. So I'm keen to learn too.
<narindergupta> Sure that would be helpful
<niemeyer> narindergupta: Alright, can you please ping tomorrow when you get online?
<narindergupta> Ok sounds good
<niemeyer> narindergupta: Deal, thanks
<niemeyer> facubatista: Are you around for that last call?
<facubatista> niemeyer, otp, what about in 10'? or are you EODing already? I can hang...
<niemeyer> facubatista: 10' is fine
<facubatista> niemeyer, thanks
<niemeyer> np
<facubatista> niemeyer, ready
<facubatista> niemeyer, https://meet.google.com/veq-yfqm-kdk?
<niemeyer> omw
 * facubatista eods
#smooth-operator 2020-04-07
<Dmitrii-Sh> o/
<jam> morning Dmitrii-Sh
<Dmitrii-Sh> morning jam
<niemeyer> Morning all
<niemeyer> Today I'm running through the entire PR queue.. please don't merge anything as I go through it so we don't clash in the middle.
<Dmitrii-Sh> jam: https://github.com/canonical/operator/pull/212#discussion_r404621103 addressed comments for 212
<Chipaca> good and paperwork-flooded morning, all!
<Dmitrii-Sh> o/
<Dmitrii-Sh> jam: also: I think that emitting -changed events for remote_app_data if .begin() is called after a relation is added is an important use-case but probably belongs to a different PR
<Dmitrii-Sh> we haven't discussed it in 212 but one test case I just added is related to that (it needs a counterpart where a unit is not a leader and a peer relation is tested)
<niemeyer> Dmitrii-Sh: #216 is reviewed
<Dmitrii-Sh> niemeyer: ty
<niemeyer> np, glad to see this kind of component showing up
<niemeyer> Hopefully the first of many
<jam> Dmitrii-Sh, are you saying if you do add_relation_unit/add_relation after begin() ?
<jam> Chipaca, no, please don't wish that upon me :)
<jam> (hopefully you don't get too many papercuts from the paper flood)
<jam> morning Gustavo
<Chipaca> :)
<niemeyer> jam: o/
<Dmitrii-Sh> jam: not exactly. If somebody does `add_relation(... remote_app_data=some_dict)` or `set_leader(is_leader=False) ; add_relation(... initial_app_data=some_dict)` before calling .begin(), once they call begin, the charm object should get a relation_changed event
<Dmitrii-Sh> it's the scenario where app data exists in the model but your unit hasn't come up yet
<jam> Dmitrii-Sh, so we can certainly discuss that. My concern is mixing "setting up stuff as a precondition" with "sending events during the lifetime"
<Dmitrii-Sh> jam: ok, I'll switch to a different PR for now but just wanted to bring that up - in my view, it's a condition that might occur and that a conscious developer would want to test
<Chipaca> whoa, 30 people. Feels like a celebration in here :)
<niemeyer> facubatista: You got a review on #213
<niemeyer> jam: #212 too.. this one feels like it might be better for you to think it through and take it over
<niemeyer> jam: Let's catch up about it at some point today or tomorrow
<jam> sure
<Chipaca> jam: I had a note to talk with you about something similar, adding network_get support to the harness
<Chipaca> as that's also going to be blocking Dmitrii-Sh soon (if not already)
<Dmitrii-Sh> Might be useful to address https://github.com/canonical/operator/issues/175 along with adding relation-created https://github.com/canonical/operator/pull/218
<Dmitrii-Sh> it feels like we'd need to change the signature of Relation.__init__ here https://github.com/canonical/operator/blob/17f4885b73d7b2bfb7445388e7e62b9fcd040859/ops/model.py#L368-L383 to be able to get the remote application based on JUJU_REMOTE_APP, instead of using units for that
<Dmitrii-Sh> there is no such thing as `relation-list -r <id> --app` to list the remote app either
<Dmitrii-Sh> happy to discuss this in more detail if anybody is interested
 * Chipaca steps out for a bit
<Dmitrii-Sh> FYI: there's an issue with Travis runs & aarch64: https://travis-ci.community/t/no-cache-support-on-arm64/5416/27 - that's why there were test failures in the tls-certificates PR which pulled the cryptography package
<Dmitrii-Sh> I applied a workaround to .travis.yml in this PR. Giving it more visibility because it's hard to find online.
<facubatista> Good morning everyone!
<niemeyer> facubatista: Good morning!
<facubatista> niemeyer, hi :)
<facubatista> niemeyer, ack, for the review
<Dmitrii-Sh> facubatista: o/
<facubatista> hi Dmitrii-Sh!
<jam> hi facubatista, the patch has landed in edge and you can bootstrap again
<facubatista> jam, yes! read the LP issue, thanks! (I put it to refresh, and then dived into a review
<facubatista> )
<facubatista> all: I don't know if this is more a github or a team convention, but I'm a little lost in this regard (never been involved in long multi-person reviews on GH before)... so: developer X opens a PR, dev Y makes a comment in the review; if X agrees (answers 'OK' or whatever, and pushes a commit with the proper change), WHO should "resolve" that conversation? X or Y?
<jam> facubatista, the way *I* do reviews: If I mark it Approved with a comment, that is saying that if you agree with my comment and fix it, then I don't need to review it again for you to land it.
<jam> If I mark it Comment then I want to see it again, but I'm not concerned, if I mark it Needs Fixing then I definitely don't want it to land without seeing it again.
<facubatista> jam, not yet approved... you did a review with 5 comments, I address 4 of those, answering each "ok" and pushing a commit... one conversation is still open! but those 4 are resolved; shall I close them, or you?
<jam> facubatista, but it is always on the developer to push to get their PR landed.
<facubatista> jam, my question is not the PR as a whole, but each conversation
<jam> good question. I'd generally say the person who raised the question should be the one who acknowledges it is resolved.
<Dmitrii-Sh> jam: interesting comment https://github.com/canonical/operator/pull/210/files#r402267572 I responded but not sure if we should do something about it in this particular PR
<jam> Dmitrii-Sh, that wasn't something for this patch, mostly something that we should think about.
<Dmitrii-Sh> ok
<jam> facubatista, though I think Dmitrii tends to use the comment list as the "things left to do" so he marks them resolved when he finishes them.
<niemeyer> Good practice
 * niemeyer => lunch
<Dmitrii-Sh> jam, facubatista: I tend to resolve them referencing the commit where I address the issue/comment. With that I can at least sanely track the remaining items without looping over them and remember where I think I fixed things
<Dmitrii-Sh> in short: jam is correct
<jam> Dmitrii-Sh, I like the comment about "addressed", I'm just not sure whether it is on the responder to resolve the thread, because that hides it when the reviewer comes back
<jam> I often have to dig a bit to find your response to my comment.
<Dmitrii-Sh> hmm, I could tag the responses somehow if that helps
<Dmitrii-Sh> just need to agree on something easily searchable
<facubatista> jam, that's my main point in favor of "all comments should be resolved by the person who initiated it"... (which has its downsides, of course)
<jam> Dmitrii-Sh, my pattern is "I put it up for review, facu comments on it, I reply with 'done', facu gets to close it"
<jam> that acks that I saw what he asked, and that I've done the work myself (including the git hash on top of that is great)
<facubatista> jam, I prefer that; however, I need to pay attention to finally close all comments I initiated! (which IMO is my responsibility as a reviewer, anyway, UNLESS I already approved the PR)
<jam> It does mean your list of 'things left to do' isn't the list of open questions, but the list of open questions that you haven't replied to.
<jam> facubatista, yeah, I personally would probably only Resolve them if I was then asking for more, vs just an overall approval
<jam> facubatista, I did have a comment on the Breakpoint PR (#213). It feels like we should only print the help text one time at startup, vs on each breakpoint. Thoughts?
<jam> Dmitrii-Sh, given gustavo's comment. What do you think about letting update_relation_data set self values, but only prior to begin()
<jam> that would let it be incremental
<jam> but also avoid having the harness change things that the charm should be taking as its responsibility?
 * jam takes the dog out quickly, bbiab
<facubatista> jam, I didn't see that question, sorry :( ... so, we should present the comment every time the process is started, as juju would do that in a "clean tmux", right? but if we handle several breakpoints in the same run, one message should be enough
<Dmitrii-Sh> jam: I can rework it as you suggested. Then I think we also need to remove the existing remote_app_data argument and handle everything via update_relation_data
<Dmitrii-Sh> otherwise people will need to be educated that they can only add remote data like that but have to use update_relation_data before begin to set initial data for our unit and app
<Dmitrii-Sh> before `.begin()` *
<Dmitrii-Sh> I'll wait for feedback and then implement it if we all agree on the approach
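The rule under discussion ("own-side writes only before begin()") could be prototyped like this; `MiniHarness` is a made-up stand-in, not the real test harness API.

```python
# Made-up stand-in for the behavior being discussed: tests may seed any
# relation data before begin(), but after begin() the charm itself owns
# its own unit/app data, so the harness refuses to change it.
class MiniHarness:
    OWN_KEYS = {'our-unit', 'our-app'}

    def __init__(self):
        self._begun = False
        self.relation_data = {}

    def begin(self):
        self._begun = True

    def update_relation_data(self, key, data):
        if self._begun and key in self.OWN_KEYS:
            raise RuntimeError('after begin(), the charm owns its own data')
        self.relation_data.setdefault(key, {}).update(data)

h = MiniHarness()
h.update_relation_data('our-app', {'version': '1'})    # fine: setup phase
h.begin()
h.update_relation_data('remote-app', {'host': 'db1'})  # remote data still OK
try:
    h.update_relation_data('our-app', {'version': '2'})
except RuntimeError as e:
    print(e)
```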
<Dmitrii-Sh> jam: also, if you have any thoughts on https://github.com/canonical/operator/pull/218#issuecomment-610364750, it would be good to hear
<Dmitrii-Sh> either I don't see something obvious, or we need `relation-list -r <rid> --app`
<narindergupta> Dmitrii-Sh, niemeyer hi
<jam> facubatista, right. clean tmux per hook, but only 1 message when handling multiple breakpoints in a single exec
<jam> Dmitrii-Sh, don't we have JUJU_REMOTE_APP ?
<Dmitrii-Sh> jam: in a relation event context - yes. But not in "leader-elected" or "start"
<jam> yeah, I hadn't finished going through your comment.
<jam> hm
<facubatista> jam, indeed, will improve that
<narindergupta> Dmitrii-Sh, niemeyer in my case any change in the pod spec and calling set_spec removes the juju unit and adds a new one. So the existing unit zookeeper/0 gets removed and zookeeper/1 gets added.
<jam> narindergupta, changing a pod spec will update the pod spec with k8s, which causes k8s to spin up new pods and tear down old pods
<jam> thus they show up as new Juju units.
<jam> (eg, if you have n=3, at some point there will be a 4th k8s pod before it kills the 1st one)
<jam> narindergupta, I'm not positive how it works if you have a StatefulSet that supports storage, because there the identifiers get to be reused.
<narindergupta> jam, for persistent pods the name is the same, so it tries to delete the volume and create it again, and gets stuck
<Dmitrii-Sh> jam: so if I want to write something to an app relation data bag in any non-relation event, I need to reference the app somehow. And currently we retrieve a remote app for a Relation object based on a remote unit
<Dmitrii-Sh> before we have remote units .app is None
<jam> Dmitrii-Sh, "reference the app", I don't quite follow. You don't get to write to the remote app's data.
<jam> Dmitrii-Sh, are you saying to *read* the remote app's data ?
<Dmitrii-Sh> jam: ok, sorry, read/write
<Dmitrii-Sh> actually, "just read" because we have special handling for peer relations
<narindergupta> jam, will it be an issue for any k8s charm update, upgrade, etc. if we change the pod spec? In other words, should we not change the pod spec when using a persistent volume? Is there a way to not delete the persistent volume during pod stop?
<jam> narindergupta, so you're past my knowledge for persistent volumes. I'd say ask wallyworld or kelvinliu in #juju
<narindergupta> jam, ok let me check with them.
<jam> Dmitrii-Sh, seems ok to drop remote_app_data if we are dropping the rest. I prefer the single setup-step approach but obviously that isn't universal
<Dmitrii-Sh> jam: ok, I think we can spend a couple of minutes during the standup to go over that, just to make sure everybody is on the same page, and then I'll implement this.
<jam> Dmitrii-Sh, I think we have a cross team sync during our standup
<jam> Dmitrii-Sh, https://github.com/canonical/operator/pull/212/files#r404561805 . "if we make our unit a leader we should see it". Yes we should be able to read the data, but I don't think you get a relation-changed event for something the previous leader set.
<Dmitrii-Sh> jam: indeed :^)
<jam> Dmitrii-Sh, ok. I think it is just my confusion around the ordering of the checks. "if is_our_app_updated: " should return, but shouldn't return if it is a peer relation and we aren't the leader.
<jam> Trying to figure out if there is a clearer way to convey that.
<jam> I do see what you're checking.
<Dmitrii-Sh> jam: ok
<jam> Dmitrii-Sh, so you wanted this to handle the case of being promoted to leader?
<jam> Dmitrii-Sh, I have a proposed spelling/comment. if it doesn't feel clearer I'm not stuck on it. But I think my confusion was because you were checking "not is_peer" first
<Dmitrii-Sh> jam: "being promoted to leader?" - which line is that, sorry?
<jam> Dmitrii-Sh, given that your unit doesn't get to see app data when it is updated, why did you want to update the app data for the remote?
<jam> so that you could set it, and then promote the unit to leader and see that it sees the app data ?
<pekkari> Hi guys, I have a charm that seems to be skipping the install hook; is there anything else required besides declaring the observer on start and defining the on_install function?
<Dmitrii-Sh> pekkari: is that a k8s charm on Juju 2.7.x?
<Dmitrii-Sh> in that case, this is expected per https://bugs.launchpad.net/juju/+bug/1854635 but will be fixed in 2.8.x
<Dmitrii-Sh> you can use on_start though
<pekkari> no k8s, Dmitrii-Sh, I'm just trying to write gsss charm using the op framework
<Dmitrii-Sh> you need to have a symlink at hooks/install -> ../src/charm.py
<pekkari> so in __init__ I add the line: self.framework.observe(self.on.install, self) , and then add the def on_install(self, event):
<pekkari> that link is in place also
<jam> Dmitrii-Sh, I just checked with Juju, that was released in a 2.7.? so it should be fixed in 2.7.5
<Dmitrii-Sh> jam: thanks
<jam> getting them to update the bug
<Dmitrii-Sh> pekkari: could you paste your charm.py somewhere? Or at least the top and the bottom of it
<Dmitrii-Sh> #!/usr/bin/env python3
<Dmitrii-Sh> from ops.main import main
<Dmitrii-Sh> # ...
<Dmitrii-Sh> class MyCharm(ops.charm.CharmBase):
<Dmitrii-Sh> # ...
<Dmitrii-Sh> if __name__ == '__main__':
<Dmitrii-Sh>     main(MyCharm)
<Dmitrii-Sh> pekkari: you also need that ^
<jam> Dmitrii-Sh, facubatista : there is a "Weekly K8s Charm Sync" that got a collision with our usual standup time.
<Chipaca> Dmitrii-Sh: jam: standup?
<Chipaca> jam: or are you going to that sync?
<jam> Chipaca, going to that sync today at least
<niemeyer> jam: If that was just booked, ideally it should deconflict
<Dmitrii-Sh> Chipaca:  at this cross-team sync
<niemeyer> Dmitrii-Sh: Same
<niemeyer> We've moved the standup
<pekkari> Dmitrii-Sh: let me try that and if I observe no change I'll paste it, to let you read, thanks!
<narindergupta> jam, it seems to be a juju bug; I have raised the bug, and thanks for the guidance. Another question: is there any event that gets fired when pod/unit/app status changes? I am most interested in an app status event, as it would help me determine whether all my pods are up, instead of waiting for the update-status event after 5 minutes.
<niemeyer> narindergupta: I'd like to have a call with you and jam to understand this behavior a bit better
<Dmitrii-Sh> ok, sorry, I thought that sync conflict was agreed upon
<narindergupta> niemeyer, I am available now
<niemeyer> Okay, let's do it.. jam cannot join us but I'd like to understand at least what you're observing.
<niemeyer> narindergupta: https://meet.google.com/gri-hkiq-itv
<pekkari> hum, even moving the install code to the start event doesn't seem to help; can you please take a quick read here, Dmitrii-Sh? https://pastebin.canonical.com/p/7TMVMG954c/
<narindergupta> niemeyer, https://bugs.launchpad.net/juju/+bug/1871388
<Dmitrii-Sh> pekkari: could you also share a pastebin with `tree <charm-source-dir>`?
<pekkari> absolutely
<pekkari> here it is, Dmitrii-Sh: https://pastebin.canonical.com/p/q5GVvb6rpw/
<niemeyer> narindergupta: Can you please paste the pod spec for the example we've explored?
<niemeyer> narindergupta: Just for completeness
<narindergupta> niemeyer, sure
<narindergupta> This is zookeeper pod spec http://paste.ubuntu.com/p/mqxjxZyBxh/
<narindergupta> niemeyer, added in the bug comment also.
<niemeyer> narindergupta: Thanks!
<Dmitrii-Sh> pekkari: haven't found anything apparent yet. I'll have a look after meetings
<Dmitrii-Sh> juju show-status-log <unit> might also help
<pekkari> take your time Dmitrii-Sh, it seems to be no trivial issue; I struggled for a good while with this. show-status-log here: https://pastebin.canonical.com/p/3rzGz3M93J/
<jam> pekkari, install only happens once. it looks like you're long past install by the time you're doing these steps
<pekkari> jam, that is what I expected, though after the creation of the unit I never see the packages I expect installed. The install hook is nearly a copy & paste of the former charm, and the apt packages library works when run locally
<pekkari> so it seems like the workflow never hits the on_start function when running the install hooks
<pekkari> I'm going to remove the unit and add a new one though, perhaps with the changes will help
<pekkari> no luck, it shows install status, goes forward to config-changed and errors there; the expected packages don't get installed
<jam> pekkari, config-changed happens before start
<jam> pekkari, install, relation-created, config-changed, start IIRC
<jam> (relation-created is new in Juju 2.8)
<pekkari> in that case, I'd need to consider moving the code back to the install event, and observing it as it was before
<jam> pekkari, so if you need something before you can respond, I would, indeed, put it in install; and if it is the sort of thing that would change over time, consider also putting it in upgrade-charm
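The startup ordering jam describes can be written down as data, purely to reason about where one-time setup belongs; this is illustrative, not Juju code.

```python
# Illustrative only: the startup hook ordering described above, as data,
# to reason about where one-time setup belongs. Not Juju code.
STARTUP_HOOKS = ['install', 'relation-created', 'config-changed', 'start']

def fires_before(first, second, sequence=STARTUP_HOOKS):
    return sequence.index(first) < sequence.index(second)

# config-changed runs before start, so package installation that
# config-changed depends on belongs in the install hook:
print(fires_before('config-changed', 'start'))
print(fires_before('install', 'config-changed'))
```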
<pekkari> aha! now I see a traceback with my code, thanks for the hint jam and Dmitrii-Sh!
<Dmitrii-Sh> pekkari: ok, so the "install" symlink needs to be in the hooks directory at the deployment time
<Dmitrii-Sh> then all others will be created by the framework automatically
<Chipaca> my canonical calendar is now suggesting personal contacts instead of canonical contacts for meetings
<Chipaca> something's effed up o_o
<niemeyer> Chipaca: It just knows you care personally about it
<Dmitrii-Sh> Chipaca: maybe it's just the EOD time
<Chipaca> :-)
<niemeyer> Chipaca: "Hey, how about asking your son to fix that!?"
<Chipaca> I mean, I can _ask_
<niemeyer> I know *my* son would be super excited to get involved..
<Dmitrii-Sh> pekkari: if you have any juju storage defined in metadata.yaml you also need to have storage event symlinks because they fire before "install" but it doesn't look like you have any
<niemeyer> We might have some trouble handling that excitement, though
<pekkari> Dmitrii-Sh: no, so far not yet. I took this charm because it was always broken, and it seems small enough to let me properly learn the main points of the operator framework, but thanks for the hint, I may hit that later
<facubatista> niemeyer, jam, already addressed your comments on #213
 * facubatista -> lunch
<Dmitrii-Sh> pekkari: ok, have you managed to get to a proper state or can I still help?
<pekkari> Dmitrii-Sh: yes, I got to the install code I wanted to reach, your hints about the ops.main import and the if __name__ clause, plus keeping the install code in the install event, made it through, and now the problems around it are problems for tomorrow's José
<pekkari> thanks man!
<Dmitrii-Sh> pekkari: great, np!
<Dmitrii-Sh> have a good evening
<pekkari> you too!
<niemeyer> facubatista: LGTM still
 * facubatista is back
<facubatista> niemeyer, thanks
 * Chipaca goes off to make dinner
<Chipaca> ttfn!
<facubatista> Chipaca, enjoy the cooking
 * facubatista eods
 * Chipaca eods
#smooth-operator 2020-04-08
<jam> morning all
<Dmitrii-Sh> moring jam
<Dmitrii-Sh> morning*
<t0mb0> are there any examples for testing operator charms?
<Dmitrii-Sh> t0mb0: we have PRs for interfaces with unit tests that use the test harness which was recently merged (https://github.com/canonical/operator/pull/146/files)
<Dmitrii-Sh> I am working on getting my interface PRs to a better state and will then switch my charm PRs
<Dmitrii-Sh> https://github.com/canonical/cockroachdb-operator/pull/1/files - this charm PR has unit tests but they are for the peer relation
<Dmitrii-Sh> I have a task to make it more modular so that I can test it in isolation from the operations that mutate the underlying machine and calls to network-get which are not yet handled by the harness
<Dmitrii-Sh> t0mb0: sorry it's at that stage right now but I am doing what I can to move it forward
<mthaddon> Dmitrii-Sh: how much are you expecting in the way of changes to how testing is done in the framework? Just wondering if it's worth us / t0mb0 investing time in beginning to write tests now, or whether we should hold off for a bit
<mthaddon> test/test_cluster.py certainly looks like it has some useful examples from https://github.com/canonical/cockroachdb-operator/pull/1/files but just want to confirm how much we'd expect things to change at this point
<Dmitrii-Sh> mthaddon: I would expect remote_app_data parameter to disappear here https://github.com/canonical/operator/blob/11a1849205d750e28aaa4a13938b5864659f928b/ops/testing.py#L143 per a discussion in https://github.com/canonical/operator/issues/211 but so far I am not aware of any other changes.
<Dmitrii-Sh> I think it's worthwhile to start using the harness as-is
<mthaddon> Dmitrii-Sh: thx - sounds like some minor changes, but certainly worth getting started with it. t0mb0 ^
<niemeyer> Morning all
<Dmitrii-Sh> morning
<niemeyer> mthaddon, t0mb0: If you find any issues / struggles with the harness, this is a good time to raise them
<niemeyer> jam has been pushing it forward and polishing it as we go
<mthaddon> thx
<t0mb0> no worries will dpo
<t0mb0> *do
<niemeyer> jam: Second review on #209
<Chipaca> moin moin moin
<jam> thanks niemeyer
<jam> morning Chipaca
<niemeyer> Chipaca: Morning!
<niemeyer> Chipaca: You just got another review on #203
<Chipaca> niemeyer: 👍
<niemeyer> jam: You have another review on #196 as well
 * Chipaca takes a break
<Chipaca> dang, i messed up the revue thing
<Dmitrii-Sh> Updated 216. I got rid of pickle but we still need to discuss serialization.
<facubatista> Good morning, everyone!
<Dmitrii-Sh> facubatista o/
<Chipaca> facubatista: good morning!
<facubatista> hola Dmitrii-Sh, Chipaca :)
<Chipaca> I'm off to make lunch for m'boys
<niemeyer> Off to lunch as well
<facubatista> have a nice lunch!
<t0mb0> Dmitrii-Sh, is state something that the harness supports? https://pastebin.canonical.com/p/V44mnMF9NK/ I've tried to pass my charm to the harness but it seems to trip up creating the state attribute
<jam> t0mb0, https://github.com/canonical/operator/pull/199
<jam> probably you just need to update your operator directory
<Dmitrii-Sh> t0mb0: jam speaks the truth :^)
<t0mb0> well that's easy!
<Dmitrii-Sh> t0mb0: git submodule update --recursive --remote
<Dmitrii-Sh> that's one way to update all of your submodules
<t0mb0> \o/ works
<Dmitrii-Sh> great!
 * facubatista brb, doorbell
<jam> facubatista, ping me when you're back
 * facubatista is back
<facubatista> sorry it took so long, it was the meat delivery
 * niemeyer is sort of back too.. starting the meeting marathon
<facubatista> jam, ping, I'm back :)
<jam> hi facubatista, just hitting my meetings now as well. I wanted to check up on https://github.com/canonical/operator/pull/209
<jam> Dmitrii-Sh, it seems https://bugs.launchpad.net/juju/+bug/1854635 didn't actually get backported. So it will be in 2.7.6 (landed today)
<Dmitrii-Sh> jam: ok, then my tests were sane
<facubatista> jam, we initiated a conversation in IRC, so for that to not be forgotten, I wrote this comment: https://github.com/canonical/operator/pull/209#issuecomment-609983367
<Chipaca> Dmitrii-Sh: huzzah for sane tests
<facubatista> jam, I kind of was expecting to continue the conversation there
<jam> facubatista, hence why I pinged, but we'll chat after standup, I guess, since I can't chat live very well right now.
<jam> I did intend to discuss it
<jam> just ran out of time before meetings.
<facubatista> jam, ah, perfect
<jam> facubatista, if I understand you correctly, you're happy with the behavior, but you feel it is more obvious to pass a boolean rather than having a stream that isn't None indicating that you want the new behavior?
<facubatista> jam, it's more about which part is deciding what... if you call setuplogging and say debug=True, it's fine for that setup function to decide a level and a stream; it's weird that the stream is decided outside, and that debug is implied by that stream (this opens a lot of questions: what if the external caller wants that stream but at INFO level, etc.)
<Chipaca> facubatista: stand up
<jam> facubatista, so if it feels cleaner to you, go back to a boolean, and make the decisions internally again?
<jam> I definitely prefer the env var to be evaluated in main(), but I can live with passing debug as a bool and having that decide to add a debug level log handler.
<facubatista> jam, it feels cleaner to me to go back to the external caller saying "I want logs in debug mode" (with a bool), and the setup code internally deciding what "debug mode" means; for us here that is DEBUG level and also sending logs to stderr
<facubatista> jam, the env var is perfect to be evaluated in main, yes
<facubatista> (the main code may decide "go to debug mode" for several reasons in the future)
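The split facubatista argues for can be sketched like this. The function name and logger name are hypothetical, not the actual ops.main implementation; the point is that the caller only says *whether* it wants debug, and the setup code decides what debug means:

```python
import logging
import sys

def setup_root_logging(debug=False):
    """Hypothetical sketch: caller passes a bool, setup decides the rest.

    "Debug mode" here means DEBUG level plus an extra stderr handler,
    both decided inside this function rather than by the caller.
    """
    logger = logging.getLogger("demo-charm")
    logger.setLevel(logging.DEBUG if debug else logging.INFO)
    if debug:
        # In debug mode, also mirror records to stderr for the operator.
        handler = logging.StreamHandler(sys.stderr)
        handler.setLevel(logging.DEBUG)
        logger.addHandler(handler)
    return logger
```

main() would then read the environment variable and call `setup_root_logging(debug=True)`, keeping the env-var check in main while the logging policy stays inside the setup code.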
<facubatista> so... who can help me split this schema into N charms? https://is.gd/WqBWkK
<facubatista> to explain it...
<facubatista> 1) everything is behind an apache, and two domains go to it
<facubatista> 2) if that domain is blog.t.c.a, the apache just serves some static files (which are prepared beforehand)
<facubatista> 3) if that domain is comentarios.t.c.a, apache just proxies the request to a gunicorn serving other project
<facubatista> so, I attempted to split that structure into charms... AFAIU, I should have two units?
<facubatista> one with Apache serving local files... but I need a charm for the "blog" part, can I "mix it" with Apache?
<Chipaca> facubatista: so. For docs. Get a new category on the discourse juju (charm-docs perhaps?)
<facubatista> the other with the comments system, exposing a port, for which I'll add a relation to the first unit, so apache can talk to it
<Chipaca> facubatista: we probably need to talk with somebody to get the category set up fwiw
<facubatista> is this structure correct?
<facubatista> Chipaca, we already have https://discourse.juju.is/c/docs/charming-docs
<facubatista> "charming-docs"
<facubatista> Chipaca, do we want a different section? reuse that one? for sure we could reuse a lot of text from there, or will we start something from scratch?
<Chipaca> facubatista: 1 sec
<Chipaca> facubatista: I think we need a separate category
<Chipaca> maybe ops-charms?
<Chipaca> ops-charms-docs?
<Chipaca> docops
<facubatista> Chipaca, and start everything from scratch?
<Chipaca> facubatista: we are
<facubatista> "ops" is too obscure
<Chipaca> starting from scratch i mean
<facubatista> Yeap, but a lot of those texts are reusable, they tell a story, they talk about juju commands, etc
<facubatista> which hooks exist and what they mean
<facubatista> we could start from the current structure, discuss whether we want it verbatim or which changes we should make, then start to fill in the titles
<Chipaca> facubatista: I'd rather start with a blank slate, in a separate category, and point to the ones that apply, than make things confusing to newcomers by having things that are all about reactive or layers or w/e
<facubatista> Chipaca, I agree; my point is: let's not grow the doc structure organically, let's plan it, and we can use the current structure as a source for that planning
<Chipaca> er
<Chipaca> facubatista: i don't understand what you mean
<Chipaca> maybe i'm missing something
<Chipaca> facubatista: you needed a place to write _a_ doc
<Chipaca> so let's make that place
<Chipaca> it is still empty, missing all kinds of things that, yes, need planning and such
<facubatista> Chipaca, yes; I'm also talking about what we want in a month
<facubatista> or a year
<facubatista> but yes, writing that one doc unblocks me
<Chipaca> facubatista: so for now, write a doc there. It'll be weird, and disconnected from everything, but it's a place to write them
<facubatista> that's it
<Chipaca> exactly
<Chipaca> facubatista: and it's a place to write them that won't further confuse things, and will let us move forward better once other things line up
<facubatista> we need a category, then... operator-charms-docs?
<Chipaca> charm-operator-docs, or chods for short
 * Chipaca hides
<mthaddon> Chipaca: surely you mean "cods" :)
<Chipaca> I dunno, there's something fishy in that one
<mthaddon> I set them up, you knock 'em down...
<Chipaca> mthaddon: :-D
<facubatista> we can go with "ops-charm-docs" (which is just a token that will be used in the URLs), and "Operator Charms Docs" as the general title (which is what users will see)
<Chipaca> facubatista: sgtm
<facubatista> jam, you're admin in that Discourse, maybe you could create this new category for us? under general "Docs"? thanks!!
<jam> facubatista, I have the edit rights to be able to create a category (once I finally found the page that has a new category button), do we care about badge colors/etc ? or just leave it at defaults for now
<Chipaca> fwiw I don't mind (especially as you can change these later)
<jam> https://discourse.juju.is/c/docs/ops-charm-docs
<facubatista> jam, thanks!!
<facubatista> jam, we should fix https://discourse.juju.is/t/about-the-operator-charm-docs-category/2893
<Chipaca> jam: but not now. go away, now :-p
 * jam runs away
 * facubatista -> lunch
 * facubatista is back
<Dmitrii-Sh> jam:  ty for `self._framework = framework.Framework(":memory:", self._charm_dir, self._meta, self._model)` in the harness code
<Dmitrii-Sh> makes things so much simpler in writing independent test cases with components that have state
 * Chipaca steps away for a while
<Dmitrii-Sh> Made some progress on splitting out logic in the cockroachdb-operator PR
<Dmitrii-Sh> https://github.com/canonical/cockroachdb-operator/pull/1/commits/e6741a9a01432f19d445914a77ff3a65050f0bcd
<Dmitrii-Sh> There are a couple of points to discuss related to injecting dependencies and https://github.com/canonical/operator/pull/196 needs to be merged to uncomment important lines in 3 test cases.
<Dmitrii-Sh> otherwise, I managed to cover the logic I had to test manually previously so that's good in my view
<Dmitrii-Sh>  *logging out for today*
<Chipaca> Dmitrii-Sh: awesome, thank you, and good night
<Dmitrii-Sh> Chipaca: cheers o/
<Chipaca> facubatista: ping?
<facubatista> Chipaca, pong
<Chipaca> facubatista: pm
 * facubatista eods, and eows! see you all on Monday!
<niemeyer> facubatista: Have a good rest
<facubatista> niemeyer, thanks!
<facubatista> same for you
<niemeyer> Thanks o/
<Chipaca> facubatista: see you tuesday :D
<facubatista> yes
 * facubatista reboots for a long due update
<Chipaca> I too disappear, but only until tomorrow
<Chipaca> 👋
#smooth-operator 2020-04-09
<Dmitrii-Sh> o/
<jam> morning Dmitrii-Sh
<Dmitrii-Sh> morning jam
<Dmitrii-Sh> regarding https://github.com/canonical/operator/pull/216#discussion_r405455364 - we'll need to change our copyright header checks then
<Dmitrii-Sh> as for serialization, I converted the interface code to use json instead https://github.com/canonical/operator/pull/216/commits/68152461b5b104ccf4aaea3c04ea36e9d1fd5c7b
<jam> Dmitrii-Sh, indeed, it really depends how we want to do it as a project, and then update the source checks, etc to match that.
<jam> I think I clarified that my preference is to keep Copyright as a comment and put the docstring after copyright comment
<jam> which wouldn't require changes
<Dmitrii-Sh> jam: ok, I reverted the change. Might be good to discuss during the standup
<niemeyer> Good morning all
<jam> morning
<Chipaca> gooood morning team!
 * jam steps away for lunch, bbaib
<Chipaca> 'be back after i burp' 😄
<jam> I thought I was the one that made up backronyms. I did realize once I hit submit that it was a typo, wasn't sure it was worth a second message :)
<jam> I just came across this https://www.theonion.com/nasa-launches-vengeance-rover-to-pay-back-mars-for-kill-1842750166
<jam> Dmitrii-Sh, Chipaca : thoughts on parameter formatting. 'git annotate' says that I use ":param foo: description" style, while the two of you have used more "foo -- description" style.
<jam> we should probably get shared agreement on that style (possibly based on what Sphinx/readthedocs renders)
<Chipaca> jam: agreed
<jam> s/we should probably/we need to get/
<Chipaca> so, questions
<Chipaca> do we want to use python type annotations or not?
<Chipaca> (if we did, we could then use mypy to do static analysis while not requiring it to run)
<jam> I've never used them, my primary Python time was in the 2.* era. AIUI they are just documentation as well, right?
<jam> eg, not actually enforced.
<Chipaca> right, except mypy can check for them
<Chipaca> ie you can do static analysis
<jam> I don't have enough experience with them to know whether they feel good or feel like noise.
<Chipaca> when running with plain python, they're just metadata
<Chipaca> to me they feel more natural than the :type: docstrings
<Chipaca> but I don't have a super-strong preference :)
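Chipaca's point that annotations are just metadata under plain Python can be seen directly. The function below is illustrative; only a static checker like mypy would flag the bad call, the interpreter never does:

```python
# Annotations are stored on the function object and never enforced at
# runtime; mypy would report an arg-type error on the call below, but
# plain python runs it happily.
def scale(value: int, factor: int = 2) -> int:
    return value * factor

result = scale("ha")              # str * int is fine at runtime
annotations = scale.__annotations__
```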
<Chipaca> let's talk about it on tuesday when facu is back
<Chipaca> facu and myself, that is :)
<jam> Given the language has built-in support for them, I'd tend to go with that over :type:. It also depends heavily on how well supported they are in editors
<Chipaca> i'll set up a meeting so we remember
<jam> but PyCharm seems to support either.
<Chipaca> sphinx supports either also (with a plugin)
<Chipaca> i need to check if readthedocs supports that plugin but it should (and i have a friend working on the readthedocs backend so i might be able to influence that if they don't)
<jam> My main concern with "foo --" is that it is missing the "bulleted list" indicator (the start of the new section comes after the section has already started)
<Chipaca> jam: 1 sec
<Chipaca> jam: http://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html
<Chipaca> jam: I'd prefer either of those styles (google or numpy) to the default rst style
<Chipaca> jam: in any case 'foo --' is wrong :)
<Chipaca> jam: and, 1 further sec for me to find the other extension
<jam> Chipaca, interestingly, their statement "Compare the jumble above to this alternative". I actually find it easier to read the "jumble" but that is likely just lots of usage of that style. :)
<jam> I can imagine that practice with an alternative will similarly improve my ability to read it.
<Chipaca> jam: https://github.com/agronholm/sphinx-autodoc-typehints would be the extension that gets type from the annotation, and it supports napoleon also
<jam> for some reason I don't like "foo (type):"
<jam> I would have preferred it "foo: (type)"
<niemeyer> I'm not a big fan of the :type: etc notation.. I've never seen it being worth the cost of having to write and read it, compared to normal English text in the summary
<jam> In that, I prefer Numpy, though it feels rather spaced out.
<niemeyer> We generally end up re-stating the type in the sentence anyway
<Chipaca> jam: note that with the typehints thing we'd just write foo: in the google style
<niemeyer> :type:FooBar:  The FooBar used to process it.
<Chipaca> maybe we should write them out in full for a single realistic docstring for the meeting tuesday
<Chipaca> i'll get on that after finishing dispatch support
<niemeyer> +1
<niemeyer> Thanks for that
<jam> Chipaca, sounds like some nice examples to have on hand
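For the Tuesday discussion, a side-by-side sketch of the two styles being debated: the classic Sphinx/reST field style and the Google style that the napoleon extension renders. The functions themselves are throwaway examples:

```python
def connect_rst(host, port):
    """Open a connection (classic Sphinx/reST field style).

    :param host: Hostname to connect to.
    :param port: TCP port number.
    :returns: An opaque connection handle.
    """
    return (host, port)

def connect_google(host, port):
    """Open a connection (Google style, rendered via sphinx.ext.napoleon).

    Args:
        host: Hostname to connect to.
        port: TCP port number.

    Returns:
        An opaque connection handle.
    """
    return (host, port)
```

With sphinx-autodoc-typehints in play, the `(type)` part of the Google style can be dropped entirely and picked up from the annotations instead.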
<niemeyer> Are we skipping the standup today?
<Chipaca> we can do it on IRC just to make sure we're aware of what we're doing
<niemeyer> Well.. :)
<Chipaca> :-D
<Chipaca> but, no, i don't think we should (it's just facu that's away)
<Chipaca> i did delete it and then re-created it because i thought we'd do the bug revue today (but facu's away)
<Chipaca> otoh i don't mind doing it on irc because introvert
<niemeyer> I don't have it in my calendar.. if it's just Facundo out, we should do it
<jam> so the weekly operator sync meeting that collided on Tues got moved to Friday evening....
<Chipaca> jam: yeap
<Chipaca> jam: don't worry about it
<jam> I was going to say that it isn't a time that I think I'll be attending regularly.
<niemeyer> Best time for regular weekly meetings.. Friday evening..
<Chipaca> I've emailed Anthony about it being sub-ideal, we'll see
<Chipaca> meanwhile, don't fret it
<niemeyer> Better than that only if it was a Good Friday..
<Chipaca> they're all good fridays bront
<Chipaca> oh drat 12pm and i didn't have mah mid-morning protein
<Chipaca> lunch is gonna be a big thing
 * jam heads to help my son study Spanish
<Chipaca> I think this is relevant to our interests: https://mobile.twitter.com/fasterthanlime/status/1248025337235210243
<niemeyer> Weeeeee
<Chipaca> niemeyer, Dmitrii-Sh, jam, 👋
<Dmitrii-Sh> o/
<Chipaca> I came back from getting a cup of tea to find my desktop wouldn't un-map the lock screen
<Chipaca> annoying
<niemeyer> We're in the standup..
<Chipaca> omw
<jam> Chipaca, Dmitrii-Sh https://meet.google.com/fqw-mdqc-dsf
<Chipaca> ta
<jam> https://github.com/canonical/operator/pull/221 reviewed
<jam> 'osp' is that supposed to be 'ops' ?
<pekkari> Hi, can anybody take a quick look at this pastebin to see what I'm doing wrong setting relation data? https://pastebin.canonical.com/p/sJGfT8Cx7v/
<jam> pekkari, you are setting "admin_url" to a tuple (url,) is that intended?
<pekkari> jam: no, those commas are not intended; I was trying to set a full dictionary first, and then moved to key-by-key assignment, thanks for the hint
<jam> pekkari, the other bug is that "event.unit" is the *remote* unit whose data has changed
<jam> pekkari, you're not allowed to write to the remote unit's data
<jam> pekkari, you want event.relation.data[self.model.unit]
<jam> pekkari, event.relation.data[event.unit] is to read the data that the remote side is telling you about during relation-changed
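A toy model of jam's rule, using plain dicts rather than the real ops API: relation data is keyed by unit, you may read the remote unit's bag but write only your own. Unit names and keys are made up:

```python
# Illustrative only: mirrors the framework's behaviour of rejecting
# writes to another unit's relation data bag.
class RelationData(dict):
    def __init__(self, own_unit):
        super().__init__()
        self.own_unit = own_unit

    def write(self, unit, key, value):
        if unit != self.own_unit:
            raise PermissionError("cannot write another unit's data")
        self.setdefault(unit, {})[key] = value

data = RelationData(own_unit="myapp/0")
# Writing your own bag (event.relation.data[self.model.unit]) is fine:
data.write("myapp/0", "admin_url", "http://10.0.0.1:35357")
# Writing the remote bag (event.relation.data[event.unit]) would raise.
```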
<pekkari> I see, yeah, I was suspecting it was a newbie issue here
<pekkari> thanks jam! I'll test it this way
<Chipaca> jam: thanks for the review
<Chipaca> jam: the "run the hook _as well_" was not what I had understood... but that's ok :)
<jam> Chipaca, so we had a number of discussions about it, where I ended up was that the operator framework already likes to have multiple event handlers
<jam> lots of things can listen for 'install'
<jam> the hooks/* is just another one of them
<Chipaca> right, so that seems in tune with that
<Chipaca> i'll fix
<jam> If you don't do that, dropping in a shell script ends up breaking all the components that you're using
<Chipaca> you say that like it's a bad thing
 * Chipaca hides
<jam> Chipaca, no worries, I was originally more in favor of what you implemented. It was when I realized the multi-event thing that it clicked in my head.
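The multi-handler idea jam describes can be sketched with a toy event object (not the ops implementation, just the shape of the idea): several components all observe the same event, and a legacy `hooks/*` script is simply one more observer rather than a replacement for the others:

```python
# Toy observer pattern: every registered handler runs when the event
# fires, so a hooks/install script coexists with framework observers.
class Event:
    def __init__(self, name):
        self.name = name
        self._observers = []

    def observe(self, handler):
        self._observers.append(handler)

    def emit(self):
        return [handler(self.name) for handler in self._observers]

install = Event("install")
install.observe(lambda name: f"charm handled {name}")
install.observe(lambda name: f"library component handled {name}")
install.observe(lambda name: f"hooks/{name} script ran")  # legacy hook
results = install.emit()
```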
<Chipaca> jam: and yes i seem to have been in more of a hurry than needed when writing the commit message, will fix that also
<Chipaca> I think I'm going to go for a run now, and return to this after
 * Chipaca goes
<niemeyer> #218
<mup_> PR #218: Add relation-created events <Created by dshcherb> <https://github.com/canonical/operator/pull/218>
<Chipaca> #waat
<Chipaca> cockroachdb-operator#1
<mup_> PR cockroachdb-operator#1: initial cockroachdb-operator charm for review <Created by dshcherb> <https://github.com/canonical/cockroachdb-operator/pull/1>
<Dmitrii-Sh> reworked #212
<mup_> PR #212: Harness: do not emit extra relation changed events <Created by dshcherb> <https://github.com/canonical/operator/pull/212>
<Dmitrii-Sh> nice ^
<Dmitrii-Sh> thanks for making mup_ work niemeyer
<niemeyer> My pleasure.. this is still not 100% online as it's running in my laptop for now, but hopefully next week I can finish the migration
<niemeyer> This is a separate instance from mup.. eventually mup will die and this will get renamed to it
<Chipaca> OK guys, i'm off, mostly. Will check back from time to time to poke at things, but other than that, see you Tuesday!
<niemeyer> Chipaca: Have a good evening and a good holiday
<Chipaca> i've got a slow-cooking pork shoulder going since breakfast time
<Chipaca> the whole house smells of it
<Chipaca> so it's off to a good start i reckon :-)
<niemeyer> Poor neighbors :P
<Chipaca> :-D
#smooth-operator 2020-04-10
<Dmitrii-Sh> o/
