[00:41]  * MarkMaglana likes butternut squash soup with coconut milk
[00:41] <MarkMaglana> but first, coffee
[08:33] <__chip__> MarkMaglana: calabacín relleno (stuffed zucchini) ftw
[08:33] <__chip__> good morning all
[08:34]  * __chip__ still in alter-ego mode for today
[08:39] <MarkMaglana> __chip__: damn that relleno looks yummy! now i gotta find me one of those.
[08:43] <__chip__> jam: dunno if you saw we landed a fix to master for the flake8 woes, just changing the charm.py for now
[08:43] <jam> __chip__, I didn't see that, thank you
[08:43] <__chip__> we can look at pinning versions at some point but for now this was alright
[08:43] <__chip__> jam: i was tempted to merge it into everything red but didn't want to trip you up :)
[08:44] <__chip__> (just 279 bah)
[08:44] <__chip__> (i merged it into mine, and i expect facu merged it into his)
[08:53] <jam> __chip__, do we have any idea why it fails on travis but not locally?
[08:53] <jam> I struggled to get a working 'different' version of flake8
[08:53] <jam> tip seems broken against the pycodestyle it requires
[08:53] <__chip__> jam: it fails locally if you install flake8 and autopep8 from pip
[08:53] <__chip__> jam: last night the rules seemed more subtle than what you'd reported so maybe the issue you saw was fixed
[08:55] <__chip__> jam: flake8 now asks for pycodestyle<2.7.0,>=2.6.0a1
[08:55] <__chip__> jam: and autopep8 asks for pycodestyle>=2.5.0
[08:56] <__chip__> so I get pycodestyle 2.6.0 and everything's happy (except our tests until we fixed master)
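[A quick sketch of why pycodestyle 2.6.0 satisfies both constraints quoted above. This toy parser ignores pre-release tags like "a1" and is only an approximation of PEP 440 semantics; real resolvers (pip, the `packaging` library) do this properly.]

```python
import re

def parse(version):
    # keep only the numeric major.minor.patch part; pre-release
    # suffixes like "a1" are deliberately ignored in this sketch
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", version)
    return tuple(int(g) for g in m.groups())

def flake8_ok(v):
    # flake8 wants pycodestyle <2.7.0,>=2.6.0a1 (lower bound approximated)
    return parse("2.6.0") <= parse(v) < parse("2.7.0")

def autopep8_ok(v):
    # autopep8 wants pycodestyle >=2.5.0
    return parse(v) >= parse("2.5.0")

print(flake8_ok("2.6.0") and autopep8_ok("2.6.0"))   # True: 2.6.0 satisfies both
print(flake8_ok("2.5.0"))                            # False: too old for flake8
```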
[08:56] <jam> Chipaca, https://paste.ubuntu.com/p/zVsD7qkmG9/
[08:56] <jam> flake8 3.8.1, pycodestyle 2.6.0, pyflakes 2.2.0, and flake8 fails AFAICT
[08:56] <__chip__> jam: maybe pip install -U ?
[08:57] <__chip__> jam: I get flake8-3.8.1, pycodestyle 2.6.0, and pyflakes 2.2.0, and it's happy
[08:57] <jam> The original install was pip3 install --upgrade --user flake8 pydocstyle autopep8
[08:58] <__chip__> jam: maybe it's python 3.6 that's the issue?
[08:58] <__chip__> i'm on 3.8 here
[08:58] <__chip__> let me try with 3.6
[08:59] <jam> the traceback implicates importlib-metadata, but that has "skipping upgrade importlib-metadata; python_version < '3.8'"
[08:59] <__chip__> er, can't do 3.6, doing 3.5
[09:00] <__chip__> jam: that's happy too
[09:00] <jam> __chip__, is there a clear 'uninstall everything and try again' ? Or I can just go back to whatever flake8 was working and stop fighting it :)
[09:01] <__chip__> jam: https://gitlab.com/pycqa/flake8/-/issues/406
[09:01] <__chip__> which points to …
[09:01] <jam> __chip__, opened 2 years ago, and fixed by depending on a specific version of pycodestyle
[09:01] <__chip__> yeah
[09:02] <__chip__> jam: wrt 'uninstall everything and try again', i just blow away the venv
[09:02] <jam> which now we're 2 versions beyond that
[09:02] <__chip__> jam: let's compare 'pip list' output
[09:03] <__chip__> jam: https://paste.ubuntu.com/p/nRgxRx4NSn/
[09:03] <jam> https://paste.ubuntu.com/p/FDyjwXB8x3/
[09:04] <jam> (that is the one that works, let me upgrade and try again)
[09:05] <jam> this flake8 fails to load: https://paste.ubuntu.com/p/QxcZMwwfrT/
[09:07] <__chip__> jam: what happens if you grep for break_around_binary_operator in the lib?
[09:07] <__chip__> I only see a _break_around_binary_operator method in pycodestyle
[09:07] <__chip__> nothing calling into it
[09:09] <mup_> Issue operator#282 opened: Harness: Unable to assert if framework was called correctly by charm <Created by relaxdiego> <https://github.com/canonical/operator/issues/282>
[09:10] <jam> lib/python3.6/site-packages/pycodestyle.py:1279:def _break_around_binary_operators(tokens):
[09:10] <jam> it is an '_break'
[09:11] <__chip__> jam: right, but what tries to call break_around_binary_operator ?
[09:12] <jam> __chip__, so it is something about magically loading plugins, but my system doesn't seem to have that string, digging
[09:12] <__chip__> that exception should really print what file it's trying to load, huh
[09:13] <__chip__> MarkMaglana: thanks for the bug report!
[09:13] <jam>  /usr/lib/python3/dist-packages/pycodestyle.py
[09:13] <jam> who installed that...
[09:13] <MarkMaglana> __chip__: no worries! :)
[09:14] <__chip__> jam: gremlins
[09:14] <jam> __chip__, 'apt install flake8' depends on pycodestyle
[09:14] <jam> apparently the plugin loading finds the dist package but imports from the non-dist ?
[09:14] <jam> so to have a 'pip3 install --user flake8' I can't have an 'apt install flake8'
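[The shadowing jam hit can be reproduced without apt or pip: when two copies of a module sit on `sys.path`, Python imports whichever directory comes first. Everything below (`fakestyle`, the directory names) is made up for the demonstration; it just mirrors the apt dist-packages copy shadowing the pip user copy.]

```python
import os
import sys
import tempfile

top = tempfile.mkdtemp()
dist = os.path.join(top, "dist-packages")   # stand-in for apt's copy
user = os.path.join(top, "user-site")       # stand-in for pip's copy
for d, version in [(dist, "old"), (user, "new")]:
    os.makedirs(d)
    with open(os.path.join(d, "fakestyle.py"), "w") as f:
        f.write(f"VERSION = {version!r}\n")

sys.path.insert(0, user)
sys.path.insert(0, dist)    # dist-packages ends up first on the path

import fakestyle
print(fakestyle.VERSION)    # "old" -- the stale copy shadows the new one
```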
[09:17] <jam> __chip__, so with apt remove flake8 and a bunch of dependencies, ./run_tests fails for the "right" reason
[09:18] <__chip__> jam: huzzah, maybe
[09:18] <jam> ok, so I at least can reproduce and merged the fix and pushed it
[09:19] <__chip__> MarkMaglana: question for you
[09:20] <__chip__> MarkMaglana: what does 'build_juju_unit_status' do?
[09:21] <__chip__> MarkMaglana: ah, found it
[09:21] <MarkMaglana> __chip__: it just returns a *Status object based on arguments: https://github.com/relaxdiego/charm-k8s-grafana/blob/b689a77cb2714075b2c88e5fe6442e26f1ca033e/src/domain.py#L72-82
[09:21] <MarkMaglana> :)
[09:21] <__chip__> MarkMaglana: what is pod_status?
[09:21]  * __chip__ digs more
[09:21] <__chip__> ah, something from k8s?
[09:22] <MarkMaglana> __chip__: you got it
[09:22] <jam> __chip__, so if I understand correctly, what Mark wants is to see that his code is setting the unit status to A, and then to B, and then finally to C as part of responding to the config_changed event.
[09:23] <jam> __chip__, however, we don't assert at each step, so you can only see the final "after all is done the status is correct"
[09:23] <__chip__> MarkMaglana: why wouldn't k8s.get_pod_status return a framework status directly?
[09:23] <jam> __chip__, the config-changed handler is looping on the result of a mocked get_pod_status and should notice the transition of the pod to being active
[09:23] <MarkMaglana> __chip__: that one is to actually check if the k8s pod's livenessProbe and readinessProbe both return true.
[09:24] <jam> MarkMaglana, does the above sound correct for what you are trying to do?
[09:24] <jam> There are 2 possibilities as I see it. 1) I do have code for a _get_backend_calls() that would give details of how you interact step by step, I don't know that I recommend that route, but the PR is https://github.com/canonical/operator/pull/264
[09:25] <mup_> PR #264: test/test_model.py: Test Model using Harness._get_backend calls <Created by jameinel> <https://github.com/canonical/operator/pull/264>
[09:25] <jam> 2) You could change your mock so that you check the unit status just before you return a new pod status
[09:25] <MarkMaglana> i think i found that calling the framework status directly doesn't give me the fine-grained result that i needed. at the time I wrote that, the framework reported the pod as ready but in the k8s layer it was just live but not yet ready to receive requests. I wanted to make sure I reported that correctly to `juju status` so that the user doesn't panic or wonder what's going on.
[09:25] <jam> so you can see "the unit status started as X, then I returned pod status Y, then the unit status updated to Y, then I returned Z, and the unit status updated to Z"
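[A minimal, framework-free sketch of jam's option (2): the mocked `get_pod_status` records the unit's status just before handing back the next pod status, so the test can see every intermediate transition. The names here (`Unit`, `get_pod_status`, the status strings) are stand-ins, not the real ops/Harness API.]

```python
class Unit:
    status = "maintenance: configuring pod"

unit = Unit()
seen = []                        # unit status observed at each poll
pod_statuses = iter(["pending", "running"])

def get_pod_status():
    seen.append(unit.status)     # snapshot *before* returning new pod status
    return next(pod_statuses)

# Stand-in for the config-changed handler's polling loop:
while True:
    status = get_pod_status()
    if status == "running":
        unit.status = "active"
        break
    unit.status = f"waiting: pod is {status}"

print(seen)          # every status the charm set before each poll
print(unit.status)   # "active"
```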
[09:26] <__chip__> "the framework reported the pod as ready but in the k8s layer it was just live but not yet ready to receive requests" ← isn't that a bug in the framework?
[09:26] <jam> __chip__, it is a practical reality that saying "start this" takes a while for it to actually start
[09:26] <jam> __chip__, there is the k8s layer that says the container has started
[09:27] <jam> __chip__, and there is the "can I actually connect to the socket of the app"
[09:27] <__chip__> ahhh
[09:27]  * __chip__ ahhhs
[09:27] <MarkMaglana> that's what i report with PodStatus: https://github.com/relaxdiego/charm-k8s-grafana/blob/b689a77cb2714075b2c88e5fe6442e26f1ca033e/src/adapters/k8s.py#L75-L99
[09:28] <MarkMaglana> is_running looks at the livenessProbe and is_ready looks at the readinessProbe
[09:31] <MarkMaglana> jam: sorry i didn't notice that you were chatting me up with a different topic :)
[09:31] <jam> MarkMaglana, all of that should be prefaced with, I believe in Juju 2.8 pod-spec-set was made "transactional" which means Juju won't ask k8s to start the pod until your hook exits. So setting the pod spec and spinning isn't going to get you anywhere because Juju hasn't actually started the pod yet.
[09:51] <MarkMaglana> jam: that "transactional" tip is good information. in that case what would be the best way to report back to the user about the actual status of the underlying workload?
[09:52] <jam> MarkMaglana, I'm raising the question internally if it should be transactional. one option is to hook into the 'update-status' hook that Juju fires periodically (approx every 5 min). It isn't a great update rate
[09:52] <jam> MarkMaglana, the other option is to use something like cron / a forked subprocess
[09:52] <jam> and use "juju-run" to get back into the hook context and report back
[09:52] <jam> I don't think we have much to make that easy in the framework, unfortunately
[09:53] <MarkMaglana> jam: yeah 5 mins is not ideal. :)
[09:53] <jam> There have also been conversations that Juju should be triggering events on pod state changes
[09:53] <jam> But all that is hypothetical
[09:53] <MarkMaglana> gotcha. hmmmm.
[09:54] <MarkMaglana> jam: interesting challenge to tackle but for now, what do you suggest i do with the assertion? should i forego checking if the status was set 3x in my unit test and just check if the final status is correct?
[09:55] <MarkMaglana> brb my washer/dryer is singing lol
[09:55] <jam> so there is (1) which is use my patch that adds direct checking of every backend call, (2) check in the individual steps, (3) break apart the two so the one side emits events, and then you can have a second handler that monitors what events are sent. (4) just assert the final state
[10:01] <MarkMaglana> jam: thanks! i'll check out all options. :)
[10:04] <__chip__> yuss, got a dispatch shim main working
[10:04] <__chip__> main.py is now a mess
[10:05] <__chip__> refactoring :)
[10:41] <MarkMaglana> in case this helps anyone else, this has been a godsend for me when reviewing code on github: https://chrome.google.com/webstore/detail/sourcegraph/dgjhfomjieaadpoljlnidmbgkdffpack
[11:03] <mup_> PR operator#283 opened: ops/main: handle dispatch being a shim  <Created by chipaca> <https://github.com/canonical/operator/pull/283>
[11:04] <__chip__> jam: ^ wdyt?
[11:05] <jam> __chip__, about SourceGraph or about dispatch? :)
[11:05] <__chip__> jam: the latter
[11:05] <__chip__> haven't looked at the former yet
[11:05]  * __chip__ looks
[11:06] <__chip__> MarkMaglana: doesn't github already do that?
[11:07] <MarkMaglana> __chip__: well sh**. you're right! lol.
[11:08] <__chip__> MarkMaglana: doesn't quite work in firefox yet :( but they'll get it there
[11:09] <MarkMaglana> __chip__: I'm more of a Brave browser hipster guy.
[11:10] <facubatista> A very good morning to everyone!
[11:11] <__chip__> MarkMaglana: ah! it does work, i was just impatient
[11:11] <__chip__> facubatista: heyy
[11:12] <facubatista> hello __chip__
[11:13] <__chip__> facubatista: could you start your day with a nice fresh cup of #276?
[11:13] <mup_> PR #276: explicitly ask for a newer sphinx on rtd <Created by chipaca> <https://github.com/canonical/operator/pull/276>
[11:13] <facubatista> __chip__, gladly
[11:14]  * facubatista pours himself a cup of that
[11:14]  * __chip__ passes facubatista a smaller cup
[11:14]  * facubatista clumsily spills all the bytes all over the desk and keyboard
[11:15]  * __chip__ hands facubatista the kitchen towels
[11:15]  * __chip__ watches facubatista dress up as a mummy
[11:15]  * __chip__ /o\
[11:16] <facubatista> hahaha
[11:16] <jam> __chip__, did you even send an email/meeting for Ajduk to reschedule?
[11:17] <__chip__> i did not even
[11:17] <__chip__> i did send one to jay
[11:23] <mup_> PR operator#276 closed: explicitly ask for a newer sphinx on rtd <Created by chipaca> <Merged by chipaca> <https://github.com/canonical/operator/pull/276>
[11:38] <__chip__> jam: to schedule meetings with people in utc-6, should i make you optional, or is there any time that might work for you?
[11:40] <jam> __chip__, depends how much you want me to be there. :) I can make exceptions and work an evening here and there,and I'm willing to do a regular "monday nights I work evenings" but I wouldn't want it to be at arbitrary times
[11:41] <__chip__> jam: you kick ass at these meetings, not having you there makes them so much worse :) maybe one day a week you work evenings and we schedule these meetings for then? (i think it's just 2 (so far!) that are that far west)
[11:42] <jam> __chip__, one day a week is manageable
[11:43] <__chip__> jam: as it's just two, bi-weekly, tell me what day and at what hour :)
[11:43] <facubatista> jam, so, I deployed my blog as subordinate, which implied adding the relation to apache; at that moment "relation_joined" triggered on my charm, and I did some prints at that time...
[11:43] <jam> __chip__, so your thought is 2 every other week, or one every week?
[11:44] <facubatista> jam, as a next step, I wrote the proper code for that hook (configuring apache, if I'm right)... and then upgraded my charm... but "relation_joined" didn't trigger again... only "configuration changed"
[11:45] <facubatista> jam, what is the appropriate thing to do here? shall I hook that method to some other specific event and make it trigger? use config_changed for that? or am I failing to do a step properly?
[11:47] <jam> facubatista, so relation-joined only happens 1 time, when the new unit is seen. relation-changed would happen multiple times if Apache is giving you feedback.
[11:47] <jam> facubatista, I would think that telling Apache "here use this directory" only really needs to be told to it one time.
[11:47] <jam> Now, if some of your config causes a change in what content you are telling Apache about
[11:48] <jam> then in *config-changed* you would call self.model.get_relation('apache').data[self.unit] = new stuff
[11:48] <facubatista> jam, I don't really have a config that is related to Apache, in that sense
[11:48] <facubatista> jam, so, this is useful except when "building" my code :(
[11:49] <facubatista> yes, when all is done it should work ok
[11:50] <facubatista> jam, can I trigger that "relation_joined" manually somehow?
[11:50] <facubatista> or I could use an "action"
[11:50] <jam> __chip__, so 8am UTC-6 is 6pm UAE time. If that works, then doing those on Mondays at 6pm (just after our standup) would be a good time.
[11:50] <jam> If we need to go later because people don't start at 8am, then I'd still probably want to do them on Monday.
[11:51] <jam> facubatista, if you just want to test your code, I think you can do "juju run --unit some/1 'hooks/relation-joined'"
[11:52] <jam> the other option is to remove the relation and add it again, but that will kill the unit and create it again
[11:53] <facubatista> jam, if I do "juju run --unit some/1 'hooks/relation-joined'", will the event I get still have a .relation pointing to apache?
[11:53] <jam> __chip__, so that would be Monday, UTC 14:00. But I could go until UTC 18:00 if necessary
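[Double-checking the scheduling arithmetic above: 08:00 at UTC-6 is 14:00 UTC, which is 18:00 in the UAE (UTC+4). Fixed offsets only; this sketch ignores DST, and the date is arbitrary.]

```python
from datetime import datetime, timezone, timedelta

meeting = datetime(2020, 5, 18, 8, 0, tzinfo=timezone(timedelta(hours=-6)))
utc = meeting.astimezone(timezone.utc)                  # 14:00 UTC
uae = meeting.astimezone(timezone(timedelta(hours=4)))  # 18:00 UAE
print(utc.hour, uae.hour)   # 14 18
```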
[11:54] <jam> facubatista, I don't think Juju sets the JUJU_RELATION_ID, etc if you manually trigger a hook
[11:56] <jam> facubatista, 'get_relation' should still work, because the relation is still defined, but event.relation probably won't be set
[11:57] <facubatista> another reason to stop using `event.relation.data` and go to `self.model.get_relation('apache').data`, which looks less "backwards thinking" than the former
[11:59] <facubatista> jam, mmm... in `self.model.get_relation('apache').data` I'm expressing "the data from an application", not "the data from a unit"
[11:59] <jam> facubatista, you're looking at all the relation data there
[11:59] <jam> so you still need
[12:00] <jam> self.model.get_relation('apache').data[unit/or/app]
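[The pattern jam describes, sketched with toy stand-ins: the relation holds one data bag per unit (or per app), and the charm writes its own unit's bag. The `Relation`/`Model` classes and the `document_root` key below are invented for illustration and are not the real ops.model API.]

```python
class Relation:
    def __init__(self):
        self.data = {}   # one data bag per unit (or app) on the relation

class Model:
    def __init__(self):
        self._relations = {"apache": Relation()}

    def get_relation(self, name):
        return self._relations[name]

model = Model()
unit = "blog/0"   # in a real charm this would be self.unit

# the config-changed handler writes this unit's side of the relation data:
model.get_relation("apache").data.setdefault(unit, {})["document_root"] = "/srv/blog"
print(model.get_relation("apache").data[unit])
```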
[12:00] <__chip__> jam: ack. I'll look at the calendar later and let you know what it looks like.
[12:00] <jam> __chip__, #283 reviewed. I think we're still missing a case
[12:00] <__chip__> I've just realised the time, the bug revue, and the lack of lunch
[12:00] <mup_> PR #283: ops/main: handle dispatch being a shim  <Created by chipaca> <https://github.com/canonical/operator/pull/283>
[12:00]  * __chip__ takes the laptop to the kitchen
[12:02] <facubatista> jam, meeting?
[13:31] <facubatista> jam, __chip__, standup?
[13:42] <__chip__> https://github.com/sphinx-doc/sphinx/pull/7655  :-D
[13:42] <mup_> PR sphinx-doc/sphinx#7655: Make sphinx.util.typing.stringify render optional unions better <autodoc> <enhancement> <Created by chipaca> <Merged by tk0miya> <https://github.com/sphinx-doc/sphinx/pull/7655>
[13:47] <__chip__> facubatista: #280 GTG
[13:47] <mup_> PR #280: Included the dependencies in requirements files, updated README and .travis <Created by facundobatista> <https://github.com/canonical/operator/pull/280>
[13:47] <facubatista> __chip__, thanks
[13:47] <mup_> PR operator#280 closed: Included the dependencies in requirements files, updated README and .travis <Created by facundobatista> <Merged by facundobatista> <https://github.com/canonical/operator/pull/280>
[13:48] <facubatista> jam, boo https://paste.ubuntu.com/p/jSgxJZjzbT/
[13:48]  * facubatista will write an action for this
[13:48] <mup_> Issue operator#282 closed: Harness: Unable to assert if framework was called correctly by charm <Created by relaxdiego> <Closed by relaxdiego> <https://github.com/canonical/operator/issues/282>
[14:29] <DominikF> Hey all! I tested the `charm create -t operator-python` command today and wanted to provide some feedback. I didn't find the changes here https://github.com/juju/charm-tools , has the repository changed?
[14:34] <__chip__> DominikF: which changes?
[14:35] <DominikF> __chip__: The changes that enabled this new template
[14:35] <__chip__> DominikF: charm-tools is that repository, but we're not charm-tools (i don't really know what that command does... so maybe it's unmaintained?)
[14:38] <DominikF> __chip__: sorry maybe I misunderstood. Last week we had a conversation about having skeleton charms for the operator framework and I was told that this was added to the charm-tools. The command that I listed above does exactly that and works great, I thought it was the work that your team was talking about.
[14:39] <__chip__> DominikF: i'm glad it works :-D but if it was done by my team, it was done before i was on it
[14:39] <__chip__> DominikF: maybe cory_fu_?
[14:40] <DominikF> __chip__: sorry! I thought it was recent work and found it strange that there was no new commit for it.
[14:41] <__chip__> DominikF: sadly, no
[14:41] <__chip__> at _some_ point charmcraft will grow that ability, but not just yet (it didn't make the roadmap)
[15:54] <mthaddon> __chip__, DominikF: charm tools is maintained by cory_fu https://github.com/juju/charm-tools/tree/master/charmtools/templates and the operator-python template pulls from https://github.com/devec0/template-python-operator currently
[15:54] <__chip__> mthaddon: ack
[15:55] <mthaddon> __chip__: maybe that should at some point be owned by the charmcraft team (the python-operator template)?
[15:56] <__chip__> mthaddon: at some point there'll be a 'charmcraft init'
[15:56]  * __chip__ will also add a 'charmcraft innit' which will be like 'bruv'
[15:57] <mthaddon> hahaha, ah cool - would that have different options for "give me an IaaS charm" and "give me a k8s charm"?
[15:57] <DominikF> mthaddon: that's exactly what I wanted to ask
[15:57] <__chip__> mthaddon: it got kicked to the next cycle so i don't have details :) or i could say "probably"
[15:57] <DominikF> If there could be a template for k8s charms
[15:57] <__chip__> i mean
[15:57] <__chip__> ideally there'd be no difference :)
[15:58] <__chip__> we're almost there with dispatch
[16:00] <mthaddon> DominikF: there's some discussion about this on https://discourse.juju.is/t/first-steps-with-the-operator-framework/3006/4 fwiw
[16:03] <DominikF> mthaddon: thanks!
[16:46]  * __chip__ breathes a sigh of relief
[17:16] <cory_fu> mthaddon, __chip__: Sorry for the long delay in replying; had some particularly long meetings today.  Getting caught up on the discussion
[17:16] <cory_fu> DominikF: ^
[17:20] <cory_fu> I'm the de facto maintainer of charm-tools but it's basically just been in maintenance mode rather than active development since we expect it to be phased out by charmcraft.  The template PR was contributed and comes from that external repo, as noted, so changes to the template should be proposed there.
[17:21] <cory_fu> My understanding, though, is that the intention with the new framework is to try to keep boilerplate small enough that ideally a template wouldn't be needed.  Ideally, creating a K8s charm vs a machine charm shouldn't be significantly different but might involve importing some different helpers to use with your charm class.
[17:22] <cory_fu> I think the template is a stop-gap and there is the issue that it is likely to get out of date as the design patterns evolve.
[17:23] <__chip__> cory_fu: thanks!
[17:24] <cory_fu> I'm also looking forward to charmcraft being a much more focused tool.  A lot of things, like the template system, have grown organically into charm-tools over the years that I think really make things more confusing for the end user.
[18:20] <mup_> Issue operator#269 closed: Warn that model.unit.is_leader() can raise an exception <Created by timClicks> <Closed by timClicks> <https://github.com/canonical/operator/issues/269>
[21:32]  * facubatista is out
[22:01] <Chipaca> 👋