[01:53] <justinclark> Update: I haven't been able to confirm, but I think my issue is that event.defer() might put the deferred event at the front of the event trigger queue - my code would only work if the deferred event got put at the back of the event queue. I'll find another way to do what I want to do.
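The ordering question can be illustrated with a toy queue. This is a simulation of the two possible behaviours only, not the ops framework's actual implementation; the function names are illustrative:

```python
from collections import deque

# Toy model: shows why code that assumes a deferred event goes to the
# BACK of the queue breaks if the framework puts it at the FRONT.
def run(events, defer_first, to_front):
    queue = deque(events)
    order = []
    deferred_once = False
    while queue:
        ev = queue.popleft()
        if ev == defer_first and not deferred_once:
            deferred_once = True
            if to_front:
                queue.appendleft(ev)  # retried immediately, before anything else
            else:
                queue.append(ev)      # other events get a chance to run first
            continue
        order.append(ev)
    return order

print(run(["a", "b", "c"], "a", to_front=True))   # ['a', 'b', 'c']
print(run(["a", "b", "c"], "a", to_front=False))  # ['b', 'c', 'a']
```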
[08:24]  * Chipaca takes a break
[09:16] <bthomas> Chipaca: I am thinking it would be useful to allow opslib k8s to be used outside charm context. For instance it would be use to me now to run the get_pod_status() method from a python console passing in my juju model, app and unit. What do you think ?
[09:17] <Chipaca> bthomas: no objections
[09:18] <bthomas> What I am worried about is in the future if opslib picks up functionality that can only be run in a charm context (as is the case for operator framework) this will be problematic
[09:21] <bthomas> As far as I can tell all that is necessary is to allow injection of service account credentials, which I can easily do
[09:24] <Chipaca> bthomas: which things in k8s depend on charm context today?
[09:25] <bthomas> Chipaca: I do not see anything that depends on charm context. However it does require service account CA and TOKEN.
[09:27] <bthomas> A developer/client can always create their own service account, if they have admin access to the cluster. I would think they should be able to inject such service account credentials into the APIServer class in k8s to then use it to query the juju cluster
[09:28] <Chipaca> bthomas: wrt 'require service account ca and token', it's picking those up from disk, from their 'canonical' locations, isn't it?
[09:28] <Chipaca> ie not charm-related
[09:28] <Chipaca> or is that something juju does for charms?
[09:30] <bthomas> Chipaca: there is no /var/run/secrets in my host (ubuntu laptop) system. So I would expect that the service account details are coming from Juju or microk8s. Will dig a bit.
[09:36] <bthomas> kubectl -n lma exec prometheus-operator-0 -- ls /var/run/secrets
[09:36] <bthomas> shows that the service account details are available in the prometheus charm pods
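For reference, Kubernetes mounts service-account credentials at a well-known path inside pods. A minimal sketch of the injection idea discussed above might look like this; `APIServerClient` and its constructor are hypothetical, not the actual opslib class:

```python
import os

# Standard in-pod location where Kubernetes mounts service-account credentials.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

class APIServerClient:
    """Hypothetical sketch: accept injected credentials so the client can
    also be used outside a charm pod (e.g. from a developer's console)."""

    def __init__(self, token=None, ca_cert=None):
        # Fall back to the canonical in-pod locations when nothing is injected.
        self.token = token if token is not None else self._read(os.path.join(SA_DIR, "token"))
        self.ca_cert = ca_cert if ca_cert is not None else os.path.join(SA_DIR, "ca.crt")

    @staticmethod
    def _read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            # Outside a pod (e.g. a laptop) the mount does not exist.
            return None

# Outside a cluster, credentials are injected explicitly:
client = APIServerClient(token="my-injected-token")
```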
[10:03] <Chipaca> my dog, fetching individual treats and scritches for each opcode, would still run test_main.py faster than windows does
[10:05] <bthomas> :)
[11:09] <facubatista> A very good morning to everyone!
[11:12] <Chipaca> that was a short day
[11:17] <bthomas> Good Morning
[11:43] <bthomas> Is there any method call in the Operator Framework that a charm needs to make in order to ensure that the juju init container runs to completion ?
[11:50] <Chipaca> bthomas: what do you mean by the 'juju init container'?
[11:51] <bthomas> Chipaca: when you deploy a charm juju creates a container called juju-pod-init. This is a kubernetes "init container". As far as I understand this must run to completion before other pods will start.
[11:51] <bthomas> oops, I meant other containers will start
[11:58]  * bthomas breaks for lunch
[12:40] <facubatista> Chipaca, what we discussed: https://github.com/canonical/charmcraft/pull/137
[12:40] <mup> PR charmcraft#137: Fixed global options behaviour <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/137>
[12:40] <facubatista> bthomas, reviews appreciated ^ thanks
[12:42] <bthomas> facubatista: looking now
[12:49] <bthomas> facubatista: args.verbose_global and args.quiet_global seem to be exactly opposite things. So I do not understand why we have both and use both.
[12:51] <facubatista> bthomas, the user needs the option to use one or the other
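A generic sketch of the pattern being discussed (illustrative argparse code, not charmcraft's actual parser): the two flags are opposites, so they typically live in a mutually exclusive group, and both attributes still exist on the parsed namespace — which is why the code checks both:

```python
import argparse

# --verbose and --quiet are opposites, but the user needs the option to
# request either one explicitly, so both flags exist and cannot be combined.
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("-v", "--verbose", action="store_true")
group.add_argument("-q", "--quiet", action="store_true")

args = parser.parse_args(["--quiet"])
print(args.quiet, args.verbose)  # True False
```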
[13:22] <justinclark> Good morning !
[13:24] <facubatista> hello justinclark
[13:29] <bthomas> Morning
[13:50] <bthomas> Inside my operator charm pod, the pip3 installed in /usr/local/bin/pip3 gives the error "AttributeError: module 'platform' has no attribute 'linux_distribution'". Oddly enough, even though dpkg -L python3-pip shows it has installed a pip3 at /usr/bin/pip3, when trying to run /usr/bin/pip3 I get the error "No such file or directory".
[13:50]  * bthomas is perplexed
[13:52] <bthomas> Any idea why there are two pip3's in a charm pod ? The one in /usr/local/bin/pip3 seems to be outdated.
[14:03] <Chipaca> bthomas: what put /usr/local/bin/pip3 there, and why are you even running pip3 on a charm pod
[14:05] <bthomas> Chipaca: I do not know what put /usr/local/bin/pip3 there. The pod is called prometheus-operator-0. I am installing python modules to help in remote debugging of operator hooks.
[14:06] <Chipaca> bthomas: so does /usr/bin/pip3 actually exist? (or is it a dangling symlink)
[14:06] <bthomas> Chipaca: /usr/bin/pip3 does not exist
[14:07] <bthomas> Chipaca: Very, very odd that dpkg -l python3-pip reports that the deb is actually installed
[14:10] <bthomas> I cannot find any other way to debug this charm. Without being able to get pip3 working I have to throw up my hands in despair.
[14:15] <Chipaca> bthomas: why not 'apt instlal --reinstall python3-pip'?
[14:15] <Chipaca> or the same without typos
[14:16] <bthomas> Chipaca: result = "Reinstallation of python3-pip is not possible, it cannot be downloaded.". I am going to retry after apt update
[14:17]  * bthomas does cartwheels
[14:17] <bthomas> update followed by install worked
[14:17] <bthomas> Chipaca: still, aren't there issues here that need to be reported or a bug filed? If so, would you like me to do it, and where?
[14:23] <Chipaca> bthomas: are these things in the operator pod, or the application pod?
[14:23] <bthomas> Chipaca: operator pod
[14:24] <Chipaca> bthomas: does sound like a bug yes
[14:25] <bthomas> Chipaca: if it is a bug report for our team I can create a bug report describing what I said above. Would you like me to do it ? If it is for another team I would rather you do it knowing more about the issue, but will be happy to help.
[14:29] <Chipaca> bthomas: it isn't for us, it'd be for juju probably
[14:38] <bthomas> Adding a pdb set_trace() call in a hook does not do anything. I would expect that it should have halted the charm hook execution at that point. I can actually see the hook being called by watching juju status.
[14:38] <bthomas> 😕
[14:41] <bthomas> Hmm. I tried adding my python modules to requirements.txt and find they are not installed in the operator pod. I guess this means they are only installed in the application pod.
[15:08] <Chipaca> bthomas: how are you building the charm?
[15:09] <bthomas> Chipaca: charmcraft build
[15:09] <Chipaca> bthomas: and where are you looking for those python modules?
[15:09] <bthomas> Chipaca: in prometheus-operator-0 where the charm.py file is located.
[15:10] <Chipaca> bthomas: and where in there?
[15:10] <bthomas> Chipaca: /var/lib/juju/agents/unit-prometheus-0/charm/src/charm.py
[15:13] <Chipaca> bthomas: and where in there are you looking for the libs?
[15:13] <bthomas> Chipaca: by launching python3 after kubectl exec -it -- /bin/bash
[15:17] <Chipaca> bthomas: try « PYTHONPATH=/var/lib/juju/agents/unit-prometheus-0/charm/venv python »
[15:17] <Chipaca> or rather python3
[15:17] <Chipaca> bthomas: « PYTHONPATH=/var/lib/juju/agents/unit-prometheus-0/charm/venv python3 »
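The same effect can be had from inside an already-running interpreter by prepending the charm's bundled venv to `sys.path` (the path below is the one from the session above):

```python
import sys

# Equivalent of launching with PYTHONPATH=<charm venv>: put the venv
# ahead of the default search path so the modules installed from
# requirements.txt into the charm become importable.
VENV = "/var/lib/juju/agents/unit-prometheus-0/charm/venv"
if VENV not in sys.path:
    sys.path.insert(0, VENV)
```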
[15:17] <bthomas> thanks will do
[15:26]  * Chipaca gets tea
[16:11] <mup> Issue operator#392 opened: Testing harness does not re-emit deferred event <Created by justinmclark> <https://github.com/canonical/operator/issues/392>
[16:12] <justinclark> Here's the issue I mentioned during stand up today ^^
[16:16] <Chipaca> justinclark: thank you
[16:18] <Chipaca> 39 failed, 321 passed in 375.80s (0:06:15)
[16:18] <Chipaca> *SIX MINUTES*
[16:18] <Chipaca> grmbl
[16:18] <Chipaca> but also, only 39 failed :-D
[16:32] <bthomas> Chipaca: Can one expect k8s.get_pod_status() to return an empty dictionary if it is given valid model, app, and unit ?
[16:33] <Chipaca> bthomas: if everything is valid, it should not return an empty dict no
[16:34] <bthomas> Chipaca: Ok, thanks. I am seeing an empty dictionary using pdb. But let us not get alarmed right now. I will try and dig a bit more for the rest of the day (although I really could do with a break for today :-) )
[16:48]  * Chipaca ← 34 failed, 326 passed
[16:52]  * bthomas Chipaca exercises windows muscle 💪
[17:02] <moon127> facubatista: thanks for the initial comments on https://code.launchpad.net/~moon127/charm-k8s-unifi-poller/+git/charm-k8s-unifi-poller/+merge/389643 - I added a response on the required settings code change, mainly as I'd tried it before and found juju deploying for real in microk8s behaved differently.  I could use some feedback on that.
[17:04] <bthomas> 🍛
[17:05] <facubatista> moon127, let me see...
[17:09] <facubatista> moon127, ah, oh, mmm... it looks like options are *always* there, but default to ""? So it's probably a misnomer to search for "missing" values; better if we call them "empty"?
[17:09] <facubatista> moon127, are we sure that "" is never a valid value?
[17:11] <moon127> never valid value for my use case indeed
[17:12] <moon127> thanks for confirming I am not going mad!
[17:13] <moon127> you are right though: if I don't explicitly declare the config option as at least "" in the test harness it does raise a KeyError... but I guess internally, for a real deployment, as you say juju always has the option there.
[17:13] <Chipaca> while we can confirm that this particular instance is not evidence of you going mad, we do not mean to imply the more general statement
[17:14] <moon127> ha yeah true dat
[17:26] <Chipaca> down to 28 failures
[17:26] <Chipaca> and off for evening run + evening meal + evening evening
[17:26]  * Chipaca will bbl for more windows tests
[17:28] <facubatista> moon127, the tests should reflect reality, so I'm ok with your "not config['foo']", but let's just rename missing to empty, so it's clearer when reading the code; extra points for a comment
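A sketch of the suggested rename, assuming the Juju behaviour described above (config options are always present, defaulting to ""); the helper name and config keys here are illustrative, not from the merge proposal:

```python
# The options are always present in a real deployment (Juju fills them
# with "" by default), so the check is for EMPTY values, not missing keys.
def empty_options(config, required):
    # "" (or any falsy value) counts as unset for our purposes; a genuinely
    # missing key would indicate a metadata/harness mismatch instead.
    return [name for name in required if not config.get(name, "")]

config = {"username": "admin", "password": "", "url": ""}
print(empty_options(config, ["username", "password", "url"]))  # ['password', 'url']
```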
[19:24] <facubatista> Chipaca, and this is the other PR that I want for before release: https://github.com/canonical/charmcraft/pull/140
[19:24] <mup> PR charmcraft#140: Always log the charmcraft version when indicated <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/140>
[19:24] <facubatista> bthomas, justinclark, review appreciated ^ (thanks!)
[19:25] <Chipaca> facubatista: but does it run on windows
[19:25]  * Chipaca sets python.exe on fire and runs away
[19:25] <facubatista> Chipaca, of course
[19:58] <bthomas> facubatista: I had already reviewed and approved "Fixed global options behaviour". This looks like the same pull request with another commit "Always log the charmcraft version when indicated." I did a quick review of that. +1
[19:59] <facubatista> yes, it depends on the other one
[21:00]  * facubatista eods
[23:30] <Chipaca> 360 tests passed, 8 skipped (4 of those still need work)
[23:30]  * Chipaca EODs
[23:47] <drewn3ss> Is there a reason relation data must be a string in operator?  https://github.com/canonical/operator/blob/master/ops/model.py#L687
[23:47] <drewn3ss> I'm seeing historical code in reactive that works both with dict structures and strings.
[23:48] <drewn3ss> perhaps the code we used to use was providing the serialization on the backend and operator wants it done by the operator?  I'm seeing interfaces that prefer the deserialized data.
[23:49] <drewn3ss> (though handle both, thankfully)
[23:49] <drewn3ss> https://git.launchpad.net/charm-grafana/tree/src/reactive/grafana.py#n1120
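Since the operator framework accepts only str values in relation data, structured data is typically serialized by the charm on the writing side and parsed on the reading side, commonly as JSON. A minimal sketch (the key name and data are illustrative):

```python
import json

# Structured data a charm might want to publish over a relation.
structured = {"hosts": ["10.0.0.1", "10.0.0.2"], "port": 9090}

# Writing side: relation data values must be strings, so serialize first.
relation_data = {"config": json.dumps(structured)}

# Reading side: parse, tolerating peers that send plain (non-JSON) strings,
# as some reactive-era interfaces do.
raw = relation_data["config"]
try:
    value = json.loads(raw)
except json.JSONDecodeError:
    value = raw  # fall back to the plain string

print(value == structured)  # True
```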