[09:30] <mup> Issue operator#345 closed: Harness could start with config initialized from config.yaml <Created by jameinel> <Closed by chipaca> <https://github.com/canonical/operator/issues/345>
[09:30] <mup> PR operator#403 closed: Make Harness load default values from config.yaml <Created by johnsca> <Merged by chipaca> <https://github.com/canonical/operator/pull/403>
[11:05] <facubatista> Good morning, everyone!
[11:06] <bthomas> morning facubatista
[11:10] <facubatista> hello bthomas
[11:22]  * facubatista begs for a review https://github.com/canonical/charmcraft/pull/156
[11:22] <mup> PR charmcraft#156: Text builders for simple usage, detailed help, and command help <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/156>
[11:23] <facubatista> bthomas, Chipaca ^
[11:59] <bthomas> done
[12:37] <mup> PR operator#375 closed: first pass at getting travis to do a windows run <Created by chipaca> <Closed by chipaca> <https://github.com/canonical/operator/pull/375>
[12:37] <mup> PR operator#408 opened: These changes make it so that the test suite passes on Windows <Created by chipaca> <https://github.com/canonical/operator/pull/408>
[12:43] <Chipaca> Progress: Downloading python3 3.8.5... 2PProgrProgress:Progress: DowProgress: DownloaProgress: DownloadingProgress: Downloading pytProgressProgress: DoProgress: DownloProgress: DownloadinProgress: Downloading pyProgress: Downloading pythonProgress: DowProgress: Downloading python3 3.8.5... 54PrProgreProgress: Downloading python 3.8.5... 100%
[12:43]  * Chipaca wonders who thought that was a'ight
[14:01] <Chipaca> #408 is green 🙂
[14:01] <mup> PR #408: These changes make it so that the test suite passes on Windows <Created by chipaca> <https://github.com/canonical/operator/pull/408>
[14:02] <jam> o/
[14:22] <facubatista> yes!
[16:05] <crodriguez> hello hello. I ran into a weird situation just now. So, upon deployment of a charm, both the start hook and the config-changed hook are triggered automatically. If my start hook fails on something, I set a blocked status. However, the config-changed hook runs right after and removes that blocked status, so the error is hidden
[16:06] <crodriguez> So, I see a few options. 1) is it possible to *not* run the config-changed hook at the same time as the start hook? I do not see any benefit to that
[16:07] <crodriguez> 2) can we get the option of setting the status to error? maybe that would actually block the other hooks from overwriting the status?
[16:07] <crodriguez> @facubatista, I'd like to get your input on this ^
[16:51] <facubatista> crodriguez, reading
[16:53] <facubatista> this is more "juju behaviour", so let's see what jam says about it, but:
[16:53] <narindergupta>   File "/var/lib/juju/agents/unit-cassandra-0/charm/venv/ops/model.py", line 979, in _run
[16:53] <narindergupta>     raise ModelError(e.stderr)
[16:53] <narindergupta> ops.model.ModelError: b'ERROR json: unknown field "lifecycle"\n'
[16:53] <facubatista> 1) it surprises me that config-changed is triggered after start ended with error
[16:53] <narindergupta> @facubatista have you seen this recently?
[16:53] <facubatista> 2) ah, mmm
[16:54] <facubatista> crodriguez, when you say that the "start hook fails on something", is it actually making the call crash, or do you just set the status to blocked and then end the hook "correctly"?
[17:00] <facubatista> narindergupta, nope, do you have more context?
[17:03] <narindergupta> @facubatista I just deployed my old cassandra charm and I'm having this issue.
[17:03] <narindergupta> facubatista: I will try to build the charm again and test; if that does not solve it then I'll have to look. I know this charm was working some time back, and I wanted to implement a few comments
[17:07] <narindergupta> facubatista: still the same error, so it looks like ops.model.ModelError: b'ERROR json: unknown field "lifecycle"\n'
[17:09] <facubatista> narindergupta, do you have a bigger traceback? where are you seeing this?
[17:09] <narindergupta> facubatista: never mind, it looks like in my template I am adding lifecycle
[17:09] <narindergupta> which is not supported so far.
[17:10] <facubatista> so the "i just deployed my old cassandra" is "i just added an invalid key to metadata.yaml and tried the deploy"? :)
[17:12] <narindergupta> facubatista: somehow this was in the charm template as well, which is weird; when I tested last it was not an issue. Maybe while uploading I pushed a new charm, as I was experimenting with the lifecycle feature
[17:12] <narindergupta> in other words, the cassandra charm on the charmstore is broken
[17:35] <mup> Issue operator#409 opened: Setting model.app.status causes the deployment to hang <Created by camille-rodriguez> <https://github.com/canonical/operator/issues/409>
[17:36] <jam> facubatista, crodriguez from her comments on mattermost, it is just setting Blocked in 'start' but config-changed is apparently causing it to set Active.
[17:36] <jam> so both (a) start isn't raising an exception/failing non-zero exit code and (b) something in config-changed isn't evaluating the same logic to realize it needs more.
[17:39] <facubatista> jam, but it's OK for "start" to finish OK, right? Even if it detects an underlying problem/issue and sets the status to Blocked
[17:39] <crodriguez> facubatista, well there's no way for me to easily set an error status, so until now I was using blocked status
[17:39] <crodriguez> I'll try the sys.exit strategy and see how that goes. I also opened bug#409
[17:39] <crodriguez> for something else I found (app.status doesn't work..)
[17:55] <jam> you can exit(1) or raise Exception to go into error status
[17:56] <jam> juju doesn't let you status-set error
[18:21] <facubatista> jam, crodriguez, my question is what is expected to happen at the "juju level"... Blocked means that manual intervention is needed... so should the method raise an exception instead of finishing correctly?
[18:21] <facubatista> is it always that way?
[18:22] <jam> facubatista, no. you may be blocked because you need a relation to a database. That needs a human, but the charm should still be able to respond to relation-created
[18:22] <facubatista> ah, good
[18:23] <facubatista> so, in crodriguez's case she *should* end it in error because it makes no sense to keep receiving events, right?
[18:23] <jam> from what she's saying, yes.
[18:26] <facubatista> good, thanks
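The pattern jam describes above, where config-changed re-evaluates the same readiness logic instead of unconditionally setting Active, can be sketched roughly like this. This is a self-contained sketch, not crodriguez's actual charm: in a real ops charm these would be ops.model.BlockedStatus / ActiveStatus assigned to self.unit.status, and "required-option" is a made-up config key.

```python
# Stand-in status classes; a real charm would use ops.model's
# BlockedStatus / ActiveStatus on self.unit.status.
class BlockedStatus:
    def __init__(self, message):
        self.message = message

class ActiveStatus:
    def __init__(self, message=""):
        self.message = message

def evaluate_status(config):
    """Single source of truth for unit status (hypothetical check)."""
    if not config.get("required-option"):   # assumed config key
        return BlockedStatus("required-option is not set")
    return ActiveStatus()

def on_start(unit, config):
    unit["status"] = evaluate_status(config)

def on_config_changed(unit, config):
    # Runs the same check as start, so it reaches the same
    # conclusion instead of blindly overwriting Blocked with Active.
    unit["status"] = evaluate_status(config)
```

With this shape, a config-changed fired right after a failed start keeps the unit Blocked until the underlying problem is actually fixed.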
[18:50] <facubatista> jam, is the "params" part in actions.yaml used for anything? I have a charm with an action without that section, and I was able to call the action using the parameter, and it reached the charm's method just fine
[18:51] <jam> facubatista, there is a field for "allow additional parameters" which you can set to False IIRC, let me check
[18:51] <jam> facubatista, additionalProperties: False
[18:51] <jam> sorry, 'false'
[18:51] <jam> lowercase
[18:51] <facubatista> ah, thanks!
[18:52] <jam> facubatista, with additionalProperties: false, then Juju won't let you supply things that aren't in the params.
[18:56] <facubatista> jam, so "params" is more about validation before the sent information actually reaches the charm, right?
[18:56] <facubatista> e.g.: ERROR validation failed: (root).foo : must be of type string, given 123
[18:57] <jam> facubatista, I believe so, yes
[18:57] <facubatista> it can get weird, though:
[18:57] <facubatista> $ juju run-action bdv/1 refresh foo='123' --wait
[18:57] <facubatista> {}
[18:57] <facubatista> ERROR validation failed: (root).foo : must be of type string, given 123
[18:59] <jam> facubatista, I would guess the CLI is parsed as YAML, so you would need appropriate quoting to pass a 'looks-like-integer' as a string
[18:59] <jam> foo='"123"' might work
[18:59] <facubatista> it did
[18:59] <facubatista> it's a nice thing to mention
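Putting the pieces of this exchange together, a hypothetical actions.yaml for the "refresh" action with the "foo" parameter mentioned above might look like this (the descriptions are made up; the key points are the params schema and the lowercase additionalProperties: false that jam mentions):

```yaml
refresh:
  description: Refresh the thing (hypothetical description).
  params:
    foo:
      type: string
      description: A string parameter (hypothetical).
  additionalProperties: false   # Juju rejects params not listed above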
[19:57] <crodriguez> jam, facubatista I have another question and I'm not sure if it's more a juju thing or an operator thing. The K8s API commands that I execute in my charm are launched inside the app controller pod. Everything works well when RBAC is not enabled, but if RBAC is enabled, then the operator pod does not have the required permissions to use the API. I don't understand why in this context, the controller pod is able
[19:57] <crodriguez> to set the pod_spec, but not to use the API
[19:57] <jam> crodriguez, the application pod asks Juju to set the pod spec, not the K8s api directly
[19:59] <crodriguez> mhm ok. So what are my options then? I have to give extra permissions to the app operator, but I do not think that juju allows that?
[19:59] <crodriguez> the k8s guys have done a workaround in Multus where they spin up another container, which they give enough permissions to run kubectl/the API
[20:00] <crodriguez> but that's an ugly solution :(
[20:06] <crodriguez> so I don't know, jam, what you think about that. I'd like to avoid the extra container just to run the API..
[20:09] <jam> crodriguez, it would be good to understand what actual RBAC you need
[20:16] <jam> I agree that spinning up another container is not the right way to do it.
[20:20] <crodriguez> jam: I need to create pod security policies, create a namespaced role, and bind this role. So I need rbac to be allowed to do this in the controller pod
[20:21] <crodriguez> and I'm using the k8s api for this because of LP:1886694 and LP:1896076
[20:26] <jam> bug #1886694
[20:26] <jam> bug #1896076
[20:26] <jam> I guess mup doesn't do that here?
[20:27] <jam> crodriguez, PodSecurityPolicy is essentially allowing you to root the K8s hosts, it is a pretty dangerous thing to give arbitrarily
[20:29] <crodriguez> jam it's not arbitrarily, it's the way upstream MetalLB is designed.
[20:31] <jam> to put it a different way
[20:31] <jam> I understand the desire for the functionality. But it isn't appropriate for every charm to have root on the entire K8s cluster
[20:31] <jam> MetalLB is a bit special in that it very much has to control the host to provide the functionality
[20:32] <jam> but it is, essentially, getting elevated privileges over everything else running in the cluster.
[20:32] <jam> eg, kernel level attacks on every K8s worker node
[20:34] <jam> now, we can probably model that with something akin to "juju trust" that would allow for a specific charm to operate with elevated privileges
[20:36] <jam> In the short term, it feels very appropriate to explicitly ask for a Cluster role to be done by someone outside of the charm, before the charm can operate
[20:38] <crodriguez> I don't understand how having the ability to create pod security policies means it would give you root to the entire cluster. Being a juju admin already assumes that you have access to pretty much everything anyway, doesn't it? Pod security policies enable more security: they can let you prevent users from running privileged containers, etc.
[20:39] <jam> PodSecurityPolicy defines things like "Can I load a Kernel driver for this pod"
[20:39] <jam> On AWS we don't give the AWS credentials to every charm, or even every machine agent
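The short-term approach jam suggests, a cluster admin granting the permissions out of band before the charm is deployed, could look roughly like the manifest below. Every name here is a placeholder, not anything the charm actually ships; a ClusterRole is used because PodSecurityPolicy is a cluster-scoped resource.

```yaml
# Hypothetical manifest applied by a cluster admin before deploying.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-operator-psp        # placeholder name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]   # cluster-scoped resource
  verbs: ["create", "get", "list", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-operator-psp
subjects:
- kind: ServiceAccount
  name: metallb-operator      # the operator pod's service account
  namespace: metallb-model    # the Juju model's namespace
roleRef:
  kind: ClusterRole
  name: metallb-operator-psp
  apiGroup: rbac.authorization.k8s.io
```

This keeps the dangerous grant as an explicit, auditable step by someone with cluster-admin rights, rather than something the charm claims for itself.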
[20:48] <crodriguez> jam: I've replied to you in MM/charmcraft, since this is becoming more of an internal discussion
[21:05] <mup> Issue operator#410 opened: Framework does not log re-emission of events <Created by camille-rodriguez> <https://github.com/canonical/operator/issues/410>
[21:10]  * facubatista eods and eows