[09:30] Issue operator#345 closed: Harness could start with config initialized from config.yaml
[09:30] PR operator#403 closed: Make Harness load default values from config.yaml
[11:05] Good morning, everyone!
[11:06] morning facubatista
[11:10] hi bthomas
[11:22] * facubatista begs for a review https://github.com/canonical/charmcraft/pull/156
[11:22] PR charmcraft#156: Text builders for simple usage, detailed help, and command help
[11:23] bthomas, Chipaca ^
[11:59] done
[12:37] PR operator#375 closed: first pass at getting travis to do a windows run
[12:37] PR operator#408 opened: These changes make it so that the test suite passes on Windows
[12:43] Progress: Downloading python3 3.8.5... 2PProgrProgress:Progress: DowProgress: DownloaProgress: DownloadingProgress: Downloading pytProgressProgress: DoProgress: DownloProgress: DownloadinProgress: Downloading pyProgress: Downloading pythonProgress: DowProgress: Downloading python3 3.8.5... 54PrProgreProgress: Downloading python 3.8.5... 100%
[12:43] * Chipaca wonders who thought that was a'ight
[14:01] #408 is green 🙂
[14:01] PR #408: These changes make it so that the test suite passes on Windows
[14:02] o/
[14:22] yes!
[16:05] hello hello. I got into a weird situation rn. So, upon deployment of a charm, both the start hook and the config-changed hook are triggered automatically. If my start hook fails on something, I set a blocked status. However, the config-changed hook runs right after and removes that blocked status, so the error is being hidden
[16:06] So, I see a few options. 1) is it possible to *not* run the config-changed hook at the same time as the start hook? I do not see any benefit
[16:07] 2) can we get the option of setting the status to error? maybe that would actually block the other hooks from overwriting the status?
[16:07] @facubatista, I'd like to get your input on this ^
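A minimal sketch of the pattern crodriguez describes, assuming an ops-based charm; the class name and the _backend_ready check are hypothetical, not taken from her code:

```python
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, BlockedStatus


class ExampleCharm(CharmBase):
    """Shows how config-changed can hide a Blocked status set in start."""

    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.start, self._on_start)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_start(self, event):
        if not self._backend_ready():
            # The hook still exits 0, so from Juju's point of view it succeeded.
            self.unit.status = BlockedStatus("backend not reachable")
            return
        self.unit.status = ActiveStatus()

    def _on_config_changed(self, event):
        # On deployment this runs right after start and unconditionally
        # sets Active, overwriting the Blocked status set above.
        self.unit.status = ActiveStatus()

    def _backend_ready(self):
        return False  # placeholder for the real check


if __name__ == "__main__":
    main(ExampleCharm)
```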
[16:51] crodriguez, reading
[16:53] this is more "juju behaviour", so let's see what jam says about it, but:
[16:53] File "/var/lib/juju/agents/unit-cassandra-0/charm/venv/ops/model.py", line 979, in _run
[16:53] raise ModelError(e.stderr)
[16:53] ops.model.ModelError: b'ERROR json: unknown field "lifecycle"\n'
[16:53] 1) it surprises me that config-changed is triggered after start ended with error
[16:53] @facubatista have you seen this recently?
[16:53] 2) ah, mmm
[16:54] crodriguez, when you say that the "start hook fails on something", is it actually making the call crash, or do you just set it to blocked and then end the hook "correctly"?
[17:00] narindergupta, nope, do you have more context?
[17:03] @facubatista i just deployed my old cassandra charm and having this issue.
[17:03] facubatista: i will try to build the charm again and test, and if that does not solve it then i have to look; i know this charm was working some time back and wanted to implement a few comments
[17:07] facubatista: still same error, so it looks like ops.model.ModelError: b'ERROR json: unknown field "lifecycle"\n'
[17:09] narindergupta, do you have a bigger traceback? where are you seeing this?
[17:09] facubatista: never mind, it looks like in my template i am adding lifecycle
[17:09] which is not supported so far.
[17:10] so the "i just deployed my old cassandra" is "i just added an invalid key to metadata.yaml and tried the deploy"? :)
[17:12] facubatista: somehow this was there in the charm template as well, which is weird, and when i tested last it was not an issue; maybe while uploading i pushed a new charm, as i was experimenting with the lifecycle feature
[17:12] in other words the cassandra charm on the charmstore is broken
[17:35] Issue operator#409 opened: Setting model.app.status causes the deployment to hang
[17:36] facubatista, crodriguez: from her comments on mattermost, it is just setting Blocked in 'start', but config-changed is apparently causing it to set Active.
[17:36] so both (a) start isn't raising an exception/failing with a non-zero exit code and (b) something in config-changed isn't evaluating the same logic to realize it needs more.
[17:39] jam, but it's ok for "start" to "finish ok", right? Even if it detects an underlying problem/issue and sets the status to Blocked
[17:39] facubatista, well there's no way for me to easily set an error status, so until now I was using blocked status
[17:39] I'll try the sys.exit strategy and see how that goes. I also opened bug#409
[17:40] for something else I found (app.status doesn't work..)
[17:55] you can exit(1) or raise Exception to go into error status
[17:56] juju doesn't let you status-set error
[18:21] jam, crodriguez, my question is what is expected to happen at the "juju level"... Blocked means that manual intervention is needed... so should the method raise an exception instead of finishing correctly?
[18:21] is it always that way?
[18:22] facubatista, no. you may be blocked because you need a relation to a database. That needs a human, but the charm should still be able to respond to relation-created
[18:22] ah, good
[18:23] so, in the case of crodriguez she *should* end it in error because it makes no sense to keep receiving events, right?
[18:23] from what she's saying, yes.
[18:26] good, thanks
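A sketch of the two remedies discussed above, continuing the same hypothetical charm: start and config-changed share one readiness check so they cannot disagree, and the handler raises (equivalent to exiting non-zero) when it makes no sense to keep receiving events, which puts the unit into error status. The _fatal_misconfiguration check is hypothetical:

```python
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, BlockedStatus


class ExampleCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.start, self._on_start)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_start(self, event):
        self._update_status()

    def _on_config_changed(self, event):
        if self._fatal_misconfiguration():
            # Raising (or sys.exit(1)) makes the hook fail, so Juju shows the
            # unit in error and stops dispatching events until it is resolved.
            raise RuntimeError("cannot continue with this configuration")
        self._update_status()

    def _update_status(self):
        # Shared logic, so config-changed re-evaluates the same condition
        # instead of blindly setting Active.
        if not self._backend_ready():
            self.unit.status = BlockedStatus("backend not reachable")
        else:
            self.unit.status = ActiveStatus()

    def _backend_ready(self):
        return False  # placeholder for the real check

    def _fatal_misconfiguration(self):
        return False  # placeholder for the real check


if __name__ == "__main__":
    main(ExampleCharm)
```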
[18:50] jam, is the "params" part in actions.yaml used for something? I have a charm with an action without that section, and I was able to call the action using the parameter, and it reached the charm's method just fine
[18:51] facubatista, there is a field for "allow additional parameters" which you can set to False IIRC, let me check
[18:51] facubatista, additionalProperties: False
[18:51] sorry, 'false'
[18:51] lowercase
[18:51] ah, thanks!
[18:52] facubatista, with additionalProperties: false, Juju won't let you supply things that aren't in the params.
[18:56] jam, so "params" is more about validation before the sent information actually reaches the charm, right?
[18:56] e.g.: ERROR validation failed: (root).foo : must be of type string, given 123
[18:57] facubatista, I believe so, yes
[18:57] it can get weird, though:
[18:57] $ juju run-action bdv/1 refresh foo='123' --wait
[18:57] {}
[18:57] ERROR validation failed: (root).foo : must be of type string, given 123
[18:59] facubatista, I would guess the CLI is parsed as YAML, so you would need appropriate quoting to pass a 'looks-like-integer' as a string
[18:59] foo='"123"' might work
[18:59] it did
[18:59] it's a nice thing to mention
[19:57] jam, facubatista I have another question and I'm not sure if it's more a juju thing or an operator thing. The K8s API commands that I execute in my charm are launched inside the app controller pod. Everything works well when RBAC is not enabled, but if RBAC is enabled, then the operator pod does not have the required permissions to use the API.
[19:57] I don't understand why, in this context, the controller pod is able to set the pod_spec but not to use the API
[19:57] crodriguez, the application pod asks Juju to set the pod spec, not the K8s api directly
[19:59] mhm ok. So what are my options then? I have to give extra permissions to the app operator, but i do not think that juju allows that?
[19:59] the k8s guys have done a workaround in multus where they spin up another container, which they give enough permissions to run kubectl/the api
[20:00] but that's an ugly solution :(
[20:06] so idk jam what you think about that. I'd like to avoid the extra container just to run the api..
[20:09] crodriguez, it would be good to understand what actual RBAC you need
[20:16] I agree that spinning up another container is not the right way to do it.
[20:20] jam: I need to create pod security policies, create a namespaced role, and bind this role. So I need RBAC to allow doing this from the controller pod
[20:21] and I'm using the k8s api for this because of LP:1886694 and LP:1896076
[20:26] bug #1886694
[20:26] bug #1896076
[20:26] I guess mup doesn't do that here?
[20:27] crodriguez, PodSecurityPolicy is essentially allowing you to root the K8s hosts, it is a pretty dangerous thing to give arbitrarily
[20:29] jam it's not arbitrary, it's the way upstream MetalLB is designed.
[20:31] to put it a different way
[20:31] I understand the desire for the functionality. But it isn't appropriate for every charm to have root on the entire K8s cluster
[20:31] MetalLB is a bit special in that it very much has to control the host to provide the functionality
[20:32] but it is, essentially, getting elevated privileges over everything else running in the cluster.
[20:32] eg, kernel level attacks on every K8s worker node
[20:34] now, we can probably model that with something akin to "juju trust" that would allow a specific charm to operate with elevated privileges
[20:36] In the short term, it feels very appropriate to explicitly ask for a Cluster role to be created by someone outside of the charm, before the charm can operate
[20:38] I don't understand how having the ability to create pod security policies means that it would give you root to the entire cluster. Being a juju admin already assumes that you have access to pretty much everything anyway, doesn't it? Pod security policies enable more security: they can let you prevent users from running privileged containers, etc.
[20:39] PodSecurityPolicy defines things like "Can I load a Kernel driver for this pod"
[20:39] On AWS we don't give the AWS credentials to every charm, or even every machine agent
[20:48] jam: I've replied to you in MM/charmcraft, since this is becoming more of an internal discussion
[21:05] Issue operator#410 opened: Framework does not log re-emission of events
[21:10] * facubatista eods and eows
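For context on the RBAC discussion above, a hedged sketch of the kind of call the charm is attempting from inside the operator pod, using the kubernetes Python client; the role name, namespace, and rules are hypothetical. With RBAC enabled and no extra grants, the request fails with 403 Forbidden, which is why jam suggests having a cluster admin create the ClusterRole/RoleBinding outside the charm first:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

# Authenticate with the operator pod's in-cluster service account.
config.load_incluster_config()
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="metallb-speaker"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],
            resources=["services", "endpoints", "nodes"],
            verbs=["get", "list", "watch"],
        ),
    ],
)

try:
    rbac.create_namespaced_role(namespace="metallb-system", body=role)
except ApiException as e:
    # Without an explicit grant, the operator pod's service account is not
    # allowed to create roles, so this is where the charm falls over.
    print("Role creation failed (HTTP {}): {}".format(e.status, e.reason))
```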