[10:12] <facubatista> Good morning, everyone!
[10:17] <Chipaca> facubatista: 👋❗
[10:17] <facubatista> hello Chipaca
[10:22] <bthomas> Namaste facubatista: I have pushed an update to the charm.py docs, as discussed in the standup before last.
[10:23] <facubatista> chacharm.pycs?
[10:24] <bthomas> https://github.com/canonical/operator/pull/417
[10:26] <facubatista> bthomas, wonderful, I trust Chipaca's and Justin's reviews, so when you get two approvals there, land it, thanks!
[10:27] <bthomas> ok
[10:27]  * Chipaca is reviewing it right now
[10:27] <bthomas> It would be good if someone from the Juju team, perhaps jam, also reviews.
[10:52] <Chipaca> bthomas: where's the deprecation notice for leader-settings-changed from?
[10:54] <bthomas> Chipaca: The deprecation notice was already in there. Wasn't it?
[10:54] <bthomas> I only put it in all CAPS.
[10:54] <Chipaca> ah, true :)
[10:54] <bthomas> Please do make any changes as required.
[11:09] <Chipaca> bthomas: just finished, great work
[11:09] <bthomas> Thanks Chipaca
[11:09] <Chipaca> bthomas: lots of suggestions of course
[11:09] <Chipaca> 🙂
[11:09] <bthomas> Will fix
[11:10] <Chipaca> jam: I've called you out in a couple of places in that PR, if you could give those specific bits a quick read (the whole thing is massive, but just those things shouldn't be too hard …)
[12:09] <Chipaca> i should lunch
[12:09]  * Chipaca goes
[14:53]  * justinclark is struggling with actions :(
[14:55] <justinclark> facubatista / Chipaca, is there something special about writing actions for k8s charms that requires something besides defining the action in actions.yaml and observing the proper action event in the charm?
[14:57] <justinclark> I've tested on several different charms and get errors like this: "env: can't execute 'python3': No such file or directory" with return code 127.
[14:59] <justinclark> The error messages are different for each app (Grafana, elasticsearch, Prometheus), which implies it might have something to do with the container in which the apps are running.
[14:59] <facubatista> Mmm...
[15:00] <facubatista> jam, actions are fully supported by Juju under k8s?
[15:00] <justinclark> I can talk more in standup, but the actions themselves are very simple - just test logging and a return message via event.set_results()
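The minimal kind of action justinclark describes would be declared roughly like this in actions.yaml (the action name and wording here are hypothetical, not taken from the actual Grafana/Prometheus/elasticsearch charms):

```yaml
# actions.yaml — hypothetical minimal action
do-test:
  description: Log a message and return a simple result.
```

with a matching observer in the charm, e.g. observing `self.on.do_test_action` and calling `event.set_results({"result": "ok"})` in the handler.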
[15:01] <jam> facubatista, actions in k8s are executed in the application pod, not the operator pod
[15:03] <facubatista> so, they don't reach the OF/charm, right? it would be a way to execute stuff from the app directly?
[15:04] <Chipaca> justinclark: the application pod often doesn't have python3 installed
[15:05] <Chipaca> justinclark: ps if the application pod were ubuntu-based it (sh|w)ould have it
[15:06] <justinclark> I see. Well the most surprising error message I had (last night) was something like: "./src/charm.py no such file or directory". So I'm not sure it can even execute the code defined in the charm itself.
[15:06] <justinclark> I'll reproduce that one now.
[15:07] <jam> facubatista, the content of the charm code is copied into the application pod during the init container
[15:08] <jam> justinclark, I wonder if that is a $PATH issue, where we execute it but with the wrong $CWD
[15:09] <facubatista> justinclark, you can jump into the app pod?
[15:12] <justinclark> jam, that seems reasonable. I'll get the error output in a moment.
[15:12] <justinclark> facubatista, yes I can get into the pod.
[15:14] <justinclark> jam, here is the real error message: "/var/lib/juju/agents/unit-prometheus-0/charm/dispatch: line 3: ./src/charm.py: not found"
[15:18] <facubatista> justinclark, can you do a "tree" on /var/lib/juju/agents/unit-prometheus-0/charm ?
[15:18] <justinclark> However, I can do this: "microk8s.kubectl exec -it -n lma prometheus-0 -- ls /var/lib/juju/agents/unit-prometheus-0/charm/src" and it shows charm.py
[15:19] <Chipaca> the thing is
[15:19] <Chipaca> that charm.py will say
[15:19] <Chipaca> #!/usr/bin/env python3
[15:19] <Chipaca> and the container probably doesn't have /usr/bin/env
[15:21] <justinclark> Chipaca, that's correct. The only thing in /usr/bin is juju-run
[15:21] <Chipaca> thus, no such file or directory
[15:21] <facubatista> I would have expected another error, though
[15:21] <facubatista> 12:20:50|facundo@blackfx:~$ cat x.py
[15:21] <facubatista> #!/usr/bin/notenv python3
[15:21] <facubatista> whatever
[15:21] <facubatista> 12:20:54|facundo@blackfx:~$ ./x.py
[15:21] <facubatista> bash: ./x.py: /usr/bin/notenv: bad interpreter: No such file or directory
[15:21] <Chipaca> facubatista: now try with dash
[15:22] <facubatista> $ ./x.py
[15:22] <facubatista> dash: 1: ./x.py: not found
[15:23] <Chipaca> ¯\_(ツ)_/¯
[15:23] <facubatista> can we verify somehow that the docker image has Python3?
[15:23] <facubatista> I mean, at build time or some such, otherwise the OF will never work
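The verification facubatista asks about could look something like the sketch below; `check_interpreter` is a hypothetical helper, and the pod and namespace names are the ones justinclark used earlier in this log:

```shell
# Hypothetical helper: print the path of an interpreter if the
# environment provides it, or MISSING otherwise.
check_interpreter() {
    command -v "$1" 2>/dev/null || echo MISSING
}

# Locally this reports the shell's own path:
check_interpreter sh

# Against the application pod it would be run as (not executed here):
#   microk8s.kubectl exec -n lma prometheus-0 -- \
#       sh -c 'command -v python3 || echo MISSING'
```

Running such a check at image-build time (rather than at action-run time) is what would catch the "only juju-run in /usr/bin" situation early.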
[15:24] <Chipaca> facubatista: "the only thing in /usr/bin is juju-run"
[15:24] <Chipaca> ah
[15:24] <Chipaca> well, only actions won't
[15:24] <Chipaca> and only actions run in the application pod
[15:24] <Chipaca> and only if the charm uses dispatch, and the action isn't shell
[15:24] <Chipaca> by this i mean that there _are_ ways to get it to work, but they suck
[15:25] <facubatista> Chipaca, the charm uses dispatch, and the actions are not shell, because we're in the future
[15:25] <jam> Chipaca, charmcraft's dispatch can't handle it if python isn't available at all and you have an actions/foo, right?
[15:25] <Chipaca> jam: right, hence 'only if the charm uses dispatch'
[15:26] <Chipaca> we _could_ make it smarter, but not sure it's worth it
[15:26] <Chipaca> kinda cornercasey
[15:26] <jam> but you could have a dispatch with "if JUJU_ACTION and -x ./actions/$JUJU_DISPATCH_PATH" etc in dispatch and have it work.
[15:28] <facubatista> why are we considering "if charm uses dispatch"? all our charms use dispatch
[15:30] <Chipaca> jam: that's what i mean by 'make it smarter'
[15:30] <Chipaca> facubatista: only the ones built with charmcraft and not then further tweaked
[15:31] <facubatista> Chipaca, which will be 100% of our base some months from now, right?
[15:31] <Chipaca> jam: further, one has to check whether that hook is a symlink to dispatch, or the charm itself, etc
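Sketching jam's idea as a self-contained function: a dispatch that falls back to a per-action executable when the charm's interpreter is missing. Everything here is hypothetical (this is not what charmcraft's real dispatch does), except that Juju does set `JUJU_DISPATCH_PATH` to something like `actions/<name>`:

```shell
# Hypothetical dispatch fallback, roughly as jam describes.
# choose_entrypoint INTERPRETER CHARM_DIR DISPATCH_PATH
# Echoes what dispatch would exec; returns 127 if nothing works.
choose_entrypoint() {
    interp=$1; dir=$2; hook=$3
    if command -v "$interp" >/dev/null 2>&1; then
        # Normal case: the interpreter exists, run the charm code.
        echo "$dir/src/charm.py"
    elif [ -n "$hook" ] && [ -x "$dir/$hook" ]; then
        # Fallback: the action ships its own executable script.
        # (Chipaca's caveat — checking that this is not just a
        # symlink back to dispatch — is not handled in this sketch.)
        echo "$dir/$hook"
    else
        return 127
    fi
}
```

A real dispatch would `exec` the echoed path, passing `$JUJU_DISPATCH_PATH` through so the charm knows which hook or action fired.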
[15:31] <facubatista> Chipaca, jam, standup
[15:31] <jam> ok, but I don't see how that helps the discussion :) was there something on my chair?
[15:50] <Chipaca> jam: ants in yer pants?
[15:50] <jam> :)
[16:15]  * facubatista -> lunch
[16:19] <bthomas> There are a lot of commits in the docstrings pull request. Is it ok to squash them into a single commit with a good commit message?
[17:10] <bthomas> I addressed as many review changes as possible. Still some left.
[17:11]  * bthomas -> out for dinner bb much later
[17:18] <facubatista> bthomas, it's ok to squash them, if you like
[18:55] <justinclark> Question about ElasticSearch-K8s HA: in the pod spec, we need to configure the network host for each ES unit. In other words, there will be a different pod spec for each unit. However, self.model.pod.set_spec() only runs for the master unit. Is there any way to have a different pod spec for each unit?
[19:13] <jam> @justinclark, i'm not sure what you *need* to do vs what is done so far. But the juju model is that multiple units should not be measurably different from each other. If you need a difference-per-object then that may need to be different apps.
[19:24] <justinclark> jam, basically what the ES docs say is if we want to have ES nodes on different servers (machines), we need to explicitly give ES a network host. Here is the short page in their docs: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/network.host.html
[19:27] <justinclark> This is one of the first things we (Balbir and I) need to figure out since the ES charm should support HA by default.