[01:34] Leaving a note here for anyone who can answer by tomorrow. I'm trying to provide relation information in an operator framework charm and am getting hung up. The interface is "prometheus", and here's the old reactive class: https://git.launchpad.net/interface-prometheus/tree/provides.py#n28 I'm trying to write this into my helper library as:
[01:34] https://git.launchpad.net/~afreiberger/charm-cloudstats/tree/lib/lib_cloudstats.py?h=feature/public-charm#n126
[01:36] relation.data[hookenv.unit_name()]['extended_data'] = {"some-data"} is failing with a KeyError for hookenv.unit_name() and I'm betting I'm just misunderstanding the data model for Relation.
=== Dmitrii-Sh9 is now known as Dmitrii-Sh
[08:38] https://github.com/canonical/interface-pgsql is (finally) there, but I was wondering about naming
[08:38] Maybe the repo should be called opslib-pgsql
[08:40] I'm also thinking that unless future stuff on the roadmap relies on it, ops.lib.use() is not going to be helpful now that we have charmcraft assembling a venv
[08:41] stub: how so?
[08:41] (that branch supports both traditional 'import pgsql' syntax as well as ops.lib.use())
[08:42] stub: huzzah on the interface being there finally :)
[08:42] Per the example in the README.md, to use my library you can just declare it in requirements.txt and do 'import pgsql'.
[08:42] ok
[08:42] Or you can declare it in requirements.txt and do 'pgsql = ops.lib.use(...)'
[08:43] stub: yup :)
[08:43] stub: it's fine, for now i'd recommend people go the opslib route so we explore it a bit more, but if in the end it doesn't fill a need we can easily drop it and move on
[08:43] I'd like to update the docs to just have whatever we officially prefer, and decide between interface-pgsql and opslib-pgsql (so I can upload to pypi and remove the git URL from requirements.txt)
[08:44] ack
[08:55] I'll await opinions on naming the repo and python module before switching our charms over.
[09:23] Chipaca: refactored to ops-lib-k8s and pushed. Worked like a "charm". Am using get_pod_status but will look into other minor refactorings along with the switch to PodStatus.for_charm().
[09:23] also ops-lib-k8s does fail tests when built with pybuild but not pip3 install. Have not looked into it.
[09:30] bthomas: ack
[09:30] bthomas: what is pybuild
[09:30] :)
[09:32] :) portability is good for the soul. Had a very quick but not thorough look. It is because the setup.py file is not being found in the test setup. I suppose this can be fixed with some type of argument to pybuild but I am avoiding this distraction now. Want to focus on learning deployment with OCI images.
[10:01] drewn3ss: hookenv.unit_name() failing with a KeyError sounds like the JUJU_UNIT environment variable isn't set.
[10:05] drewn3ss: In an event handler passed event 'ev', I think you want """ ev.relation.data[self.model.unit]['extended_data'] = "data" """
[10:26] Issue operator#365 closed: Make JujuVersion.from_environ return 0,0,0 if JUJU_VERSION isn't set
[10:26] PR operator#379 closed: Assume version 0.0.0 if JUJU_VERSION is not set
[10:58] PR operator#380 closed: Add info about fixing pyyaml in README.md
[12:49] stub: thanks... I think, after ruminating on it, that the helper being outside the operator's class is working against me because, as you note, I need self.model.unit.
(which I suppose I can pass to the helper, but may not be in the spirit of things)
[13:08] I am trying to understand what this juju debug-log error implies: "application-prometheus: 14:00:24 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1"
[13:08] Note there is no install hook in the prometheus charm
[13:11] bthomas: dispatch failed
[13:11] Chipaca: you mean dispatching the "install" hook?
[13:11] bthomas: no i mean you have a 'dispatch' binary, so it's called for every hook
[13:11] bthomas: and that has failed
[13:28] bthomas: what's the rest of the traceback before the exit status 1?
[13:29] dispatch always calls your charm.py entrypoint and sets observable bits like self.on.install in this case
[13:29] drewn3ss: https://pastebin.com/6ie9jez9
[13:30] thanks.
[13:30] You may want to run debug-hooks and run the dispatch yourself to see what's happening on stdout.
[13:31] whatever's happening isn't getting hookenv logged.
[13:31] thanks, was not aware of debug-hooks. will try.
[13:33] or juju run -u "hooks/install"
[13:33] which should call into dispatch
[13:33] and send you stdout/err
[13:35] btw, how do we call dispatch with the proper args, or does it have to be called from a wrapper so argv[0] is the right hook name?
[13:36] this is the dispatch script: "JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv ./src/charm.py"
[13:36] thanks for the debugging tips. much appreciated.
[13:36] np
[13:48] I should have mentioned I am deploying on a microk8s cluster. I just found out that juju debug-hooks is not supported on k8s models.
[13:49] juju debug-log from another terminal was helpful debugging k8s charms for me, the hook tracebacks appear in those logs.
[13:50] thank you
[13:50] Chipaca, justinclark, what was promised from my last branch: https://github.com/canonical/charmcraft/pull/123
[13:50] PR charmcraft#123: Show paths with str, not repr, as the latter is horrible in windows
[14:27] bthomas, justinclark, drewn3ss, please remember about https://github.com/canonical/charmcraft/pulls and https://github.com/canonical/operator/pulls (feel free to shout here which one you are tackling so you don't step over each other... we do not need a zillion reviews on each PR, so let's coordinate)
[14:27] Chipaca, ^
[14:31] facubatista: will do
[14:31] justinclark, thanks
[14:31] Chipaca, from what we spoke about recently: https://github.com/canonical/charmcraft/issues/125
[14:36] Chipaca, also, https://github.com/canonical/charmcraft/issues/126 and https://github.com/canonical/charmcraft/issues/127
[14:41] Chipaca, and finally, https://github.com/canonical/charmcraft/issues/128
[14:42] (what a morning!!)
[14:42] facubatista: ...teté
[14:43] jajaj
[14:44] facubatista: "what a morning, tea-tea", i guess?
[14:45] no, no, it's not "ti ti"
[14:45] * facubatista needs to start writing phonetics
[14:46] * bthomas
[14:47] * bthomas 😕
[15:16] bthomas: ?
[15:25] I was trying to decipher Spanish :)
[15:31] facubatista: stand up!
[15:31] oops
[15:44] justinclark: so, starting from Dmitrii-Sh's https://github.com/dshcherb/cockroachdb-operator you can see it uses https://github.com/dshcherb/interface-tcp-load-balancer which isn't exactly what you mentioned but might help?
[15:44] I thought Dmitrii-Sh also wrote a generic 'http' interface but can't see it now
[15:49] Thanks Chipaca. I'll look over this. Also, for a bit more context, I'm trying to get the host and port for an incoming relation (via an http interface).
Here's the exact line I'm trying to decouple from k8s: https://github.com/charmed-lma/charm-k8s-grafana/blob/88006babe566f3dd04d24dc183d3af40f242e82f/src/interface_http.py#L112
[15:53] Chipaca: I had some HTTP health-checking-related code in the TCP load-balancer interface, since just checking socket establishment wasn't enough, but not the HTTP interface, unfortunately.
[15:58] @chipaca Do you know about pathlib? I think it could have been useful for some of these Windows fixes
[16:04] justinclark: looking at line 104, and your todo, event.relation is the current event's relation handle.
[16:04] you don't have to get_relations if you're in the relation event
[16:07] Thanks drewn3ss. I'm actually rewriting this entire charm from scratch but hoping to at least take some nice tidbits from that version. My goal is to get the host/port of the incoming relation in a way that will work for k8s and non-k8s charms.
[16:07] https://pastebin.canonical.com/p/NJtNPQ9WSv/
[16:08] ack, not sure about how k8s may throw a wrench in there, but this is how I'm consuming a very simple http interface ^
[16:12] Oh this looks helpful drewn3ss. Still getting up to speed on the framework as a whole -- would you be willing to share what your http interface looks like? I'm trying to learn by example at the moment :)
[16:14] how do you mean, what it looks like? The relation data for "http" is defined as "private-address": , "port": , AFAIK
[16:14] I'm only consuming a single website on the interface, so I'm not trying to build a loadbalancer.
[16:14] the website in my code is the prometheus:website provides interface
[16:16] I believe that answers my question. Thanks!
[16:17] https://pastebin.canonical.com/p/pp9Gg3z6dZ/
[16:17] cloudstats/0 is the "requires: website" charm, and prometheus2/0 is the "provides:website" charm
[16:17] the code above is from cloudstats
[16:18] I guess technically ingress-address should be preferred over private-address
[16:18] but all relations have private-address by virtue of the framework
[16:20] That's exactly the information I wasn't aware of. Perfect. Thanks again drewn3ss.
[16:28] Issue operator#386 opened: Add best practice suggestions into docs
[17:47] dstathis: yeah we use pathlib extensively, that's why the windows fixes aren't completely overwhelmed by path changes
[17:47] dstathis: not as extensively as we'd like because in 3.5 it wasn't as integrated as it is from 3.6 on
[17:47] but still :)
[19:52] Chipaca: So I was double checking a couple things after addressing your comments on that MP and it looks like the latest charmcraft release breaks interfaces in the build step?
[19:52] i.e. I suddenly have a build/lib/interfaces/pgsql/ directory that's empty instead of a symlink to the real interface
[19:53] deej: latest from edge?
[19:53] The snap, so not the latest latest
[19:54] deej: right, but the snap from edge channel, or beta?
[19:54] Ah, edge, yes
[19:54] deej: alternatively, what does 'charmcraft version' say
[19:54] r42
[19:54] ah
[19:54] facubatista: ^^^
[19:54] facubatista: we broke something
[19:54] Version: 0.3.1+38.g9f241a8
[19:56] deej, can you please do a "ls -l" of the original file, and provide us with the build log (you can see it with -v)
[19:56] ?
[19:56] deej, thanks
[19:57] Sure, one sec
[19:58] facubatista: https://pastebin.canonical.com/p/3Qrg2jwmP9/
[20:01] deej, and what is mod/interface-pgsql/pgsql/?
[20:01] does that exist?
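[Editor's note: a minimal sketch of the 16:04–16:18 "http" interface discussion above, for reference. It is an assumption-laden illustration, not code from the charms being discussed: the charm name, the "website" relation name, handler names, and the example port are made up, and in practice the requirer and provider handlers would live in separate charms. The provider-side write also illustrates the self.model.unit point from the 10:05/12:49 exchange.]
```python
#!/usr/bin/env python3
# Sketch only; assumes a "website" relation with interface "http" in metadata.yaml.
from ops.charm import CharmBase
from ops.main import main


class MyCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # Requirer side: react to the relation the event is about;
        # no need for model.get_relations() inside a relation event.
        self.framework.observe(self.on.website_relation_changed, self._on_website_changed)
        # Provider side: publish this unit's data when the relation is joined.
        self.framework.observe(self.on.website_relation_joined, self._on_website_joined)

    def _on_website_changed(self, event):
        for unit in event.relation.units:
            data = event.relation.data[unit]
            # Prefer ingress-address, falling back to private-address.
            host = data.get("ingress-address") or data.get("private-address")
            port = data.get("port")
            if host and port:
                pass  # e.g. render a scrape target of host:port

    def _on_website_joined(self, event):
        # Keyed by our own unit object (self.model.unit), not hookenv.unit_name();
        # relation data values must be strings.
        event.relation.data[self.model.unit]["port"] = "9103"  # illustrative value


if __name__ == "__main__":
    main(MyCharm)
```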
[20:01] https://pastebin.canonical.com/p/WtmZHdpzYd/
[20:01] It does
[20:01] ah, oh
[20:01] mmm
[20:03] deej, Chipaca, I found the issue... we're asking first if it's a dir, before if it's a symlink
[20:05] facubatista: you mean walk gives you symlinks to dirs in dirnames?
[20:07] Chipaca, mmm... well, is_dir() answers True
[20:08] Chipaca, and yes, just confirmed
[20:08] * Chipaca hugs deej
[20:08] thanks
[20:08] (that os.walk gives it in the dirnames)
[20:08] facubatista: ok. I presume you're working on a fix? if so @ me when you've got a PR
[20:09] Chipaca, I am, yes
[20:09] k
[20:09] i'm over there in the corner reading some stuff
[20:13] Heh
[20:17] Hello hello. I'm trying to understand better the concept of units in kubernetes with juju. When I deploy an app with juju, in the juju status, a unit "app/0" appears. The charm code executes in that "unit". Is that the equivalent of the app-operator-0 pod that appears in kubernetes? So the charm code actually runs in the operator pod?
[20:18] The reason for this question is that I am trying to use the kubernetes API in the charm. For that, I need to load the cluster config, and when I do it manually in the operator pod, it works, but when I run it in the charm, it fails with this:
[20:18] ```application-metallb-controller: 12:41:16 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install Traceback (most recent call last):
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 20, in
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install class MetallbCharm(CharmBase):
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "./src/charm.py", line 130, in MetallbCharm
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install config.load_incluster_config()
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
[20:18] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
[20:19] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install self._load_config()
[20:19] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
[20:19] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install raise ConfigException("Service host/port is not set.")
[20:19] application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install kubernetes.config.config_exception.ConfigException: Service host/port is not set.```
[20:19] mmh sorry, I'll put that in a pastebin
[20:19] https://pastebin.canonical.com/p/B77mMm4tYw/
[20:19] So, it seems to me that the charm does not run in the operator pod... so where does it?
Just trying to figure this out
[20:22] crodriguez: in the pod list you should be seeing two pods
[20:27] yeah, the app pod and the operator pod. Well, install is failing right now because of this so only the operator pod is up
[20:27] crodriguez: you'll have one -0, and one -operator-0
[20:27] right
[20:27] crodriguez: the operator pod is the one that runs the charm
[20:30] does it run in a virtual environment? Because if I go into the operator pod, I can run the python commands just fine https://pastebin.canonical.com/p/VbC2kvbXDR/
[20:30] but running these same commands in the charm causes the error I linked https://pastebin.canonical.com/p/B77mMm4tYw/
[20:31] crodriguez: looks like you didn't exec that venv you were in the directory of.
[20:31] I couldn't find a bin/activate (:
[20:31] oh fun.
[20:32] look at the dispatch file for how python is called
[20:32] maybe it might give a hint
[20:32] good idea
[20:32] it's _not_ a venv, fwiw
[20:32] it's just the libraries
[20:33] it's a fake venv called venv? :P
[20:33] yes
[20:33] lol
[20:33] awesome
[20:33] Chipaca, https://github.com/canonical/charmcraft/pull/131
[20:33] PR charmcraft#131: Respect the symlink even if it's a directory when building
[20:33] deej, ^
[20:33] may I suggest a rename to "lib"?
[20:33] crodriguez: people are using lib for their own nefarious purposes :-p
[20:33] aah man :P
[20:34] lol...now that we all know :)
[20:34] but yeah, maybe .lib or sth
[20:34] the nice thing is, it acts as a venv for charm code library pathing purposes
[20:34] drewn3ss, crodriguez, this is not the venv you're looking for *does some jedi hand waving*
[20:34] XD
[20:35] hahaha
[20:35] Chipaca, ".lib or sth" is a horrible directory name
[20:35] ok... so dispatch only has this here: https://pastebin.canonical.com/p/z2fwtnMqrt/
[20:35] facubatista: make it .💩
[20:36] facubatista, but venv is confusing!
[20:36] crodriguez, if you do "PYTHONPATH=venv python3" and then the import, it should work
[20:36] ooh, lib is being added automatically by dispatch...that's handy to reduce all the import not at top errors \o/
[20:36] crodriguez, for sure we can improve that
[20:37] i have to agree with crodriguez here :-|
[20:37] (we just need a better name)
[20:37] my days of taking Chipaca seriously are certainly coming to a middle.
[20:37] charm-packages?
[20:37] installed-python-libs
[20:37] drewn3ss: there's a trick to knowing when i'm being serious and when i'm not
[20:38] * drewn3ss is going to take the p00p emoji as a sign.
[20:38] drewn3ss: (presumably) (i don't know it myself)
[20:38] lmao
[20:38] thanks facubatista, trying that now
[20:38] facubatista: how about something that says where this comes from? like 'reqs' (or 'requirements' or sth)
[20:39] deps
[20:39] pydeps
[20:39] btw, mad props to the operator team. this stuff makes charming SO much easier. no more finite-state-machine madness, and relation-state unit testing are killer apps.
[20:39] psydeps
[20:40] * drewn3ss is usually a curmudgeon when it comes to having to learn yet-another-framework, but this one is well worth the (oddly rather low) effort
[20:40] \o/
[20:40] I know, I know, let's call yet another thing that's not a wheel a /wheel
[20:40] well it works just as well, I can't reproduce what the charm is complaining about https://pastebin.canonical.com/p/VnJm9wWbRv/
[20:40] ooohh, we could call it something wheely and make them all wheels
[20:41] crodriguez: :-(
[20:42] crodriguez: want to share the charm so i can take a peek?
[20:43] Sure.
It's an ugly baby right now though, I warn you
[20:43] i'll put on my charm grandma glasses; all charms will be beautiful
[20:44] Chipaca, https://github.com/camille-rodriguez/charm-metallb-controller
[20:45] Context is that I have to write a charm for metallb for k8s. MetalLB consists of a controller and a speaker, so I'll have a bundle of 2 charms. I'm just getting started with the controller. And I want to use the k8s API directly in the charm because the juju pod_spec is not able to create PodSecurityPolicies yet
[20:45] Chipaca, haha perfect
[20:46] crodriguez: and you're importing kubernetes, which is part of the image the charm runs on?
[20:47] facubatista: Awesome, ta
[20:47] Chipaca, I added this block here https://github.com/camille-rodriguez/charm-metallb-controller/blob/master/src/charm.py#L130 to test the k8s api yeah
[20:49] I'm testing with microk8s rn
[20:50] heh, error about envget
[20:51] yeah sorry, pull again :)
[20:51] I removed that crap 20 sec after sending you the link lol. You're too quick!
[20:54] this upgrade loop is so tedious
[20:57] now i got there
[20:58] :) yeah I usually just get rid of the app and redeploy... no upgrade for me haha
[21:00] * facubatista needs to eod
[21:01] kenneth just shared this with me https://github.com/juju-solutions/bundle-cert-manager/blob/master/charms/cert-manager-webhook/reactive/webhook.py#L37-L50
[21:01] apparently there's a bug in how the juju agent passes env variables to the charm
[21:02] aha! this rings a bell
[21:02] * Chipaca edits src/charm.py inside the pod directly
[21:05] I wonder if there's a bug open for this..
[21:06] crodriguez: that worked, fwiw
[21:06] crodriguez: you know of the k8s opslib, yes?
[21:08] great, I'm testing it rn too
[21:09] I knew there was a k8s lib under your personal github, I thought that it was not ready yet
[21:09] crodriguez: it's very minimal, just what mark maglana created with some tweaks, but ¯\_(ツ)_/¯
[21:09] it certainly doesn't know about PodSecurityPolicies :-D
[21:10] haha :P
[21:10] but maybe it should include this hack until juju fixes it
[21:10] well once my k8s charm is better defined, maybe I'll contribute to it
[21:10] yeah true
[21:11] crodriguez: do you know if there's a bug about this?
[21:11] kenneth didn't respond when I asked about a bug #
[21:11] I'll dig
[21:11] crodriguez: i'll stick around a bit longer to try to catch what you dig up
[21:12] * Chipaca ← not working though
[21:14] the closest I can find is the one John raised a few days ago for another problem I got lol https://bugs.launchpad.net/juju/+bug/1891337
[21:15] i don't think it's related to the metrics hooks specifically though. I might open a new bug
[21:27] There, a new one: https://bugs.launchpad.net/juju/+bug/1892255
[21:35] thanks for your help :) have a great evening!
[21:46] crodriguez: thanks!
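[Editor's note: the ConfigException in the traceback above ("Service host/port is not set.") comes from the kubernetes Python client finding the KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT environment variables missing, which fits the env-var-passing bug discussed at 21:01. Below is a minimal sketch of the kind of workaround being referenced; the fallback values kubernetes.default.svc and 443 are assumptions and may differ from what the linked cert-manager-webhook code actually does.]
```python
# Sketch only: fill in the in-cluster service env vars if the hook
# environment didn't pass them through, then load the in-cluster config.
import os

from kubernetes import config


def load_k8s_config():
    # load_incluster_config() raises "Service host/port is not set."
    # when these variables are absent from the environment.
    os.environ.setdefault("KUBERNETES_SERVICE_HOST", "kubernetes.default.svc")  # assumed default
    os.environ.setdefault("KUBERNETES_SERVICE_PORT", "443")  # assumed default
    config.load_incluster_config()
```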