#smooth-operator 2020-08-17
<Chipaca> mo'in
<bthomas> Morning
<bthomas> I am on a do or die mission to refactor the prometheus charm (except tests) this week. I hope I will have a decent burial and epitaph if I fail.
 * bthomas needs more coffeeeeeeeee
<Chipaca> bthomas: what do you want on the wreath?
<Chipaca> bthomas: I'd go with YOLO
<bthomas> Chipaca: I always fancied "Shit Happens" from "Forrest Gump" (love that movie).
 * Chipaca bbiab
<gnuoy> Chipaca, sorry for the late notice but I don't have much to add from our last meeting. I'd suggest we skip it unless you have something you'd like to discuss ?
<Chipaca> gnuoy: skip it is :)
<gnuoy> ack, thanks
<Chipaca> gnuoy: you miss out on watching me do unthinkable things to a rather boring wrap
<gnuoy> I'm sorry to have missed that :)
<Chipaca> you're really not :)
<narindergupta> Chipaca, I am facing a weird issue where I set the PodSpec and then poll the k8s status in a loop while the workload status is unknown. But it stops executing and the pod never gets created?
<Chipaca> narindergupta: juju's pod spec setting is transactional
<Chipaca> narindergupta: i.e. it's applied after your hook finishes successfully
<Chipaca> narindergupta: if you set it, and then wait for it to change in a loop in the same hook, you're going to have a bad time
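The transactional behaviour Chipaca describes can be sketched with a toy model (ToyJuju is a stand-in for illustration, not Juju's actual implementation): the spec a hook sets is only staged, and is applied after the hook exits successfully, so polling for the change inside the same hook can never succeed.

```python
# Toy illustration of "pod spec setting is transactional": the spec set by
# a hook is staged, and only applied after the hook exits successfully.
# ToyJuju is illustrative only, not Juju's actual code.
class ToyJuju:
    def __init__(self):
        self.applied_spec = None   # what k8s would actually see
        self._pending = None       # staged by pod-spec-set

    def set_spec(self, spec):
        self._pending = spec       # staged only; nothing applied yet

    def run_hook(self, hook):
        hook(self)                         # charm code runs here
        self.applied_spec = self._pending  # applied only after success

juju = ToyJuju()

def install_hook(j):
    j.set_spec({"containers": []})
    # Polling j.applied_spec in a loop here would spin forever:
    assert j.applied_spec is None  # still not applied, mid-hook

juju.run_hook(install_hook)
print(juju.applied_spec)  # {'containers': []}
```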
<narindergupta> Chipaca, oh ok do we have update_status event?
<Chipaca> narindergupta: if you mean an event in juju about k8s status, not currently
<narindergupta> Thinking of changing in update_status in that case?
<narindergupta> Oh ok, so what's the solution as of now, as my juju unit status is unknown while my pod status is active?
<Chipaca> narindergupta: there is update-status, which runs every 5 minutes on a clock
<narindergupta> I think I can use that at least so that status gets updated after 5 minutes.
<narindergupta> And once we have solution then I will switch in charm
<Chipaca> narindergupta: you can make that shorter, see https://discourse.juju.is/t/whats-the-update-status-interval/2571/5?u=chipaca
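For reference, the knob discussed in that thread is a per-model config setting; this is a config fragment, and the 30s value is just an example, not a recommendation:

```shell
# shorten update-status from the default 5m for the current model
juju model-config update-status-hook-interval=30s

# read the current value back
juju model-config update-status-hook-interval
```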
<narindergupta> Chipaca, Huh, it is a model-level change.
<Chipaca> yes
<Chipaca> jldev: you around?
<jldev> Chipaca: yes
<Chipaca> jldev: i'm wanting to leave a little early to get to physio on time, meaning i wouldn't be there at the start of the standup
<Chipaca> jldev: were you planning on going to the standup?
<jldev> Chipaca: yes, I'll be there, no worries if you need to leave early
<Chipaca> jldev: ok :)
<Chipaca> jldev: (i would've stayed if you couldn't make it, and just made it super-fast)
<narindergupta> Chipaca, looks like pod status is returning unknown every time from k8s, which is weird, while kubectl pod status returns active
<Chipaca> narindergupta: what's "pod status" in the above?
<Chipaca> as opposed to "kubectl pod status" i mean
<narindergupta> Chipaca, I am testing k8s api using the operator library
<narindergupta> Chipaca,     k8s_pod_status = k8s.get_pod_status(juju_model=juju_model,
<narindergupta>                                         juju_app=juju_app,
<narindergupta>                                         juju_unit=juju_unit)
<narindergupta> Do I need to encode the parameters?
<Chipaca> narindergupta: and what does the kubectl pod status output? (with -o yaml please)
<Chipaca> narindergupta: or, better, what is in the debug log?
<Chipaca> as that'll include the output i'm looking for
<Chipaca> narindergupta: i'm looking for the log line that starts 'Received k8s pod status: '
<narindergupta> Chipaca, unit.charm-k8s-cassandra/0.juju-log Received k8s pod status: <k8s.PodStatus object at 0x7f54cbb83df0>
<Chipaca> hah, lolfail
<Chipaca> i'll file a bug about that
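The unhelpful `<k8s.PodStatus object at 0x...>` in that log line is Python's default repr for an object that defines no `__repr__`/`__str__`; presumably that is the bug being filed. A minimal sketch (PodStatus here is a stand-in, not the real library class):

```python
# Default repr vs. a readable one. PodStatus is a stand-in class for
# illustration; the real k8s library class is not reproduced here.
class PodStatus:
    def __init__(self, phase):
        self.phase = phase

class ReadablePodStatus(PodStatus):
    def __repr__(self):
        return "PodStatus(phase={!r})".format(self.phase)

print(repr(PodStatus("Running")))          # <...PodStatus object at 0x...>
print(repr(ReadablePodStatus("Running")))  # PodStatus(phase='Running')
```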
<narindergupta> :)
<Chipaca> narindergupta: sorry, i'm going to need the output of the kubectl command
<narindergupta> Chipaca, yes just a second
<narindergupta> microk8s.kubectl get pods -n cassandra
<narindergupta> NAME                             READY   STATUS    RESTARTS   AGE
<narindergupta> charm-k8s-cassandra-0            1/1     Running   0          25m
<narindergupta> charm-k8s-cassandra-operator-0   1/1     Running   0          25m
<narindergupta> modeloperator-7d75bd7454-tfzgn   1/1     Running   0          25m
<narindergupta> It says running
<Chipaca> narindergupta: can you do: microk8s.kubectl get pods -n cassandra -o yaml | pastebinit -f yaml
<Chipaca> narindergupta: (you might need to apt install pastebinit first)
<narindergupta> Chipaca, somehow pastebinit is not working on this node
<Chipaca> ah
<narindergupta> I tried to upgrade this to 20.04 and pastebinit is messed up now
<Chipaca> ok, so copy-paste into a pastebin, please
<narindergupta> Yeah will do that give me a sec
<Chipaca> narindergupta: and then show me more context of your code above that called get_pod_status, so i know the values of juju_model, _app, and _unit :)
<narindergupta> Sure
<narindergupta> Those value I am getting from model but let me give you that first
<narindergupta>     juju_model = self.model.name
<narindergupta>     juju_app = self.model.app.name
<narindergupta>     juju_unit = self.model.unit
<narindergupta> Chipaca, https://paste.ubuntu.com/p/XqTrYrtBZR/
<narindergupta> Chipaca, I have pasted only the Cassandra-pod status rather than operators as well.
<Chipaca> narindergupta: charm-k8s-cassandra is the juju app name?
<narindergupta> correct
<narindergupta> And I can see the same in pod labels as well
<narindergupta> Model is cassandra
<Chipaca> narindergupta: and charm-k8s-cassandra/0 is the unit?
<narindergupta> Yes correct
<narindergupta> juju status
<narindergupta> Model      Controller  Cloud/Region        Version  SLA          Timestamp
<narindergupta> cassandra  k8s-cloud   microk8s/localhost  2.8.1    unsupported  17:53:30Z
<narindergupta> App                  Version                         Status  Scale  Charm                Store  Rev  OS          Address        Notes
<narindergupta> charm-k8s-cassandra  gcr.io/google-samples/cassa...  active      1  charm-k8s-cassandra  local    0  kubernetes  10.152.183.66
<narindergupta> Unit                    Workload     Agent  Address      Ports                       Message
<narindergupta> charm-k8s-cassandra/0*  maintenance  idle   10.1.16.172  7000/TCP,7001/TCP,9042/TCP  Waiting for pod to appear
<Chipaca> narindergupta: from here it looks like it should work
<narindergupta> Chipaca, let me send you my charm code as well in case that helps
<Chipaca> narindergupta: can you send me an email, with the charm code and how to deploy it on microk8s?
<narindergupta> Chipaca, ok
<Chipaca> I'll review it as soon as I can (which is in no less than ~2 hours)
<Chipaca> thanks!
<narindergupta> Chipaca, https://paste.ubuntu.com/p/s7FYfVqKKZ/
<narindergupta> My charm code and I will send you in email as well
<narindergupta> Let me check it into GitHub so it would be easy for you to grab the whole charm
<narindergupta> Chipaca, sent you an email and attached the charm code
<Chipaca> narindergupta: reproduced the issue, debugging it now
<narindergupta> Ok cool
<Chipaca> narindergupta: i think i'm seeing something different though
<Chipaca> narindergupta: my kubectl shows the cassandra pod in CrashLoopBackOff
 * Chipaca reinstalls just in case
<Chipaca> narindergupta: yeah, it's always stuck in CrashLoopBackOff here
<narindergupta> Chipaca, in meeting with manager
<narindergupta> Chipaca, I can have a look after that but run juju debug-log and see
<Chipaca> narindergupta: i think it might be that i was running an older juju
<Chipaca> forgot i'd downgraded to test something else :)
<Chipaca> will fix
<narindergupta> Chipaca, :)
<narindergupta> I am using 2.8.1
<Chipaca> narindergupta: and now i got it to status:active
<Chipaca> ah, there we go (spoke too soon? was impatient?)
<narindergupta> Chipaca, still having crash
<Chipaca> no, now the kube side works
<narindergupta> Chipaca, ok cool
<Chipaca> so now i can look at the data and see what the k8s library is getting wrong :)
<narindergupta> But the k8s api library shows unknown
<narindergupta> Ok cool
<narindergupta>  juju status
<narindergupta> Model      Controller  Cloud/Region        Version  SLA          Timestamp
<narindergupta> cassandra  k8s-cloud   microk8s/localhost  2.8.1    unsupported  21:05:31Z
<narindergupta> App                  Version                         Status  Scale  Charm                Store  Rev  OS          Address         Notes
<narindergupta> charm-k8s-cassandra  gcr.io/google-samples/cassa...  active      1  charm-k8s-cassandra  local    0  kubernetes  10.152.183.223
<narindergupta> Unit                    Workload     Agent  Address      Ports                       Message
<narindergupta> charm-k8s-cassandra/0*  maintenance  idle   10.1.16.177  7000/TCP,7001/TCP,9042/TCP  Waiting for pod to appear
<narindergupta> Hopefully you have the same status
<narindergupta> Chipaca, I am stepping out please send me email if I do not respond on this channel
<Chipaca> narindergupta: found the problem
<Chipaca> or at least _a_ problem :-) seeing if it fixed it now
<Chipaca> …
#smooth-operator 2020-08-18
<axino> hi
<axino> https://github.com/canonical/charmcraft/blob/12ceda1ecfa64be5d2636e453e968d574e952324/charmcraft/templates/init/.jujuignore.j2
<axino> what is /env for ?
<bthomas> If I try to deploy the prometheus charm with a microk8s controller I get the error : "ERROR local charm missing OCI images for: nginx-image, prometheus-image"
<bthomas> If I try to supply the "--to lxd" option to "juju deploy" I still get an error : "ERROR --to cannot be used on k8s models"
<bthomas> Is there any way to deploy a charm locally using microk8s ?
<Chipaca> bthomas: with microk8s, yes
<Chipaca> bthomas: but I don't know how to tell microk8s about an oci image
 * Chipaca googles
 * bthomas googles
<bthomas> Chipaca: Let me try and change the yaml config to use lxc/lxd images. It may simplify things. I see no reason why we should insist on OCI images and spend the time to deal with this complexity. I will google a bit though.
<Chipaca> bthomas: but lxc/lxd isn't k8s
<Chipaca> bthomas: 1 sec
<Chipaca> juju deploy /path/to/mycharm --resource workload_image=myimage:latest
<Chipaca> bthomas: ^
<Chipaca> bthomas: similarly,
<Chipaca> juju upgrade-charm myapp --path /path/to/charm --resource my_image=imagename
<Chipaca> all this from https://discourse.juju.is/t/error-attaching-oci-resource-to-juju-charm-using-containerd/1354/6?u=chipaca
<bthomas> Chipaca: thanks. Let me read that and try.
<bthomas> Yep, the "--resource" option did work. There are other issues but those are with my charm and for me to dig into.
<Chipaca> :) huzzah
<facubatista> ¡Muy buenos días a todos! (Good morning, everyone!)
<Chipaca> facubatista: 👋
<facubatista> Hola Chipaca!
<bthomas> Morning facubatista : Hope you had a good break .
<facubatista> hello bthomas!
<facubatista> thanks
<bthomas> State handling in my refactored prometheus charm is buggy (throws an exception). I am stuck, unable to destroy the model into which I deployed this prometheus charm. juju destroy times out after 30m. Second attempt has been running for 25. Help, I am drowning.
 * bthomas prays to the gods of charmcraft to protect himself from his own stupidity
<Chipaca> bthomas: does your last mean you figured it out?
<bthomas> Chipaca: just added --no-wait and --force options to juju destroy-model and trying again
<bthomas> this worked
<Chipaca> k
<bthomas> In ops.model.status_get() a subprocess call (from _run) is made to execute the command "status-get". Where is "status-get" coming from ?
<Chipaca> bthomas: hook tools
<Chipaca> bthomas: https://discourse.juju.is/t/hook-tools/1163#heading--status-get
<bthomas> Chipaca: thank you
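A rough sketch of the mechanism behind Chipaca's answer: hook tools such as status-get are executables Juju places on the unit agent's PATH, and the framework shells out to them. `echo` stands in for a real hook tool here so the sketch runs anywhere; `run_hook_tool` is a hypothetical helper, not ops' actual `_run`.

```python
# Hook tools are external executables; the framework invokes them with
# subprocess and captures stdout. `echo` is a stand-in for a real tool.
import subprocess

def run_hook_tool(*args):
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# On a real unit this would be run_hook_tool("status-get", "--format=json")
print(run_hook_tool("echo", "maintenance"))  # maintenance
```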
<narinderguptamac> Chipaca, Thank you for your help. I have Cassandra status working currently and I can create a Cassandra cluster.
<narinderguptamac>  juju status
<narinderguptamac> Model           Controller  Cloud/Region        Version  SLA          Timestamp
<narinderguptamac> cassandramodel  k8s-cloud   microk8s/localhost  2.8.1    unsupported  14:05:18Z
<narinderguptamac> App        Version                         Status  Scale  Charm      Store  Rev  OS          Address         Notes
<narinderguptamac> cassandra  gcr.io/google-samples/cassa...  active      3  cassandra  local    0  kubernetes  10.152.183.237
<narinderguptamac> Unit          Workload  Agent  Address      Ports                       Message
<narinderguptamac> cassandra/0*  active    idle   10.1.16.254  7000/TCP,7001/TCP,9042/TCP  Pod is Ready
<narinderguptamac> cassandra/1   active    idle   10.1.16.3    7000/TCP,7001/TCP,9042/TCP  Pod is Ready
<narinderguptamac> cassandra/2   active    idle   10.1.16.2    7000/TCP,7001/TCP,9042/TCP  Pod is Ready
<facubatista> Chipaca, standup?
<Chipaca> facubatista: and dance!
<bthomas> Chipaca: could you kindly check mattermost
<bthomas> Whew! Am going to call it a day.
<bthomas> I will try and refactor prometheus charm to opslib tomorrow.
 * facubatista eods
<Chipaca> EODing sounds like a Good Idea
#smooth-operator 2020-08-19
<drewn3ss> Leaving a note here for anyone that can answer by tomorrow.  I'm trying to provide relation information in an operator framework charm and am getting hung up.  The interface is "prometheus", and here's the old reactive class: https://git.launchpad.net/interface-prometheus/tree/provides.py#n28  I'm trying to write this into my helper library as:
<drewn3ss> https://git.launchpad.net/~afreiberger/charm-cloudstats/tree/lib/lib_cloudstats.py?h=feature/public-charm#n126
<drewn3ss> relation.data[hookenv.unit_name()]['extended_data'] = {"some-data"} is failing on key error for hookenv.unit_name() and I'm betting I'm just misunderstanding the data model for Relation.
<stub> https://github.com/canonical/interface-pgsql is (finally) there, but I was wondering about naming
<stub> Maybe the repo should be called opslib-pgsql
<stub> I'm also thinking that unless future stuff on the roadmap relies on it, ops.lib.use() is not going to be helpful now we have charmcraft assembling a venv
<Chipaca> stub: how so?
<stub> (that branch supports both traditional 'import pgsql' syntax as well as ops.lib.use())
<Chipaca> stub: huzzah on the interface being there finally :)
<stub> Per the example in the README.md, to use my library you can just declare it in requirements.txt and do 'import pgsql'.
<Chipaca> ok
<stub> Or you can declare it in requirements.txt and do 'pgsql = ops.lib.use(...)'
<Chipaca> stub: yup :)
<Chipaca> stub: it's fine, for now i'd recommend people go the opslib route so we explore it a bit more, but if in the end it doesn't fill a need we can easily drop it and move on
<stub> I'd like to update the docs to just have whatever we officially prefer, and decide between interface-pgsql and opslib-pgsql (so I can upload to pypi and remove the git URL from requirements.txt)
<stub> ack
<stub> I'll await opinions on naming the repo and python module before switching our charms over.
<bthomas> Chipaca: refactored to ops-lib-k8s and pushed. Worked like a "charm". Am using get_pod_status but will look into other minor refactorings along with the switch to PodStatus.for_charm().
<bthomas> also ops-lib-k8s does fail tests when built with pybuild but not pip3 install. Have not looked into it.
<Chipaca> bthomas: ack
<Chipaca> bthomas: what is pybuild
<Chipaca> :)
<bthomas> :) portability is good for the soul. Had a very quick but not thorough look. It is because the setup.py file is not being found in the test setup. I suppose this can be fixed with some type of argument to pybuild but I am avoiding this distraction now. Want to focus on learning deployment with oci images.
<stub> drewn3ss: hookenv.unit_name() failing with a KeyError sounds like the JUJU_UNIT_NAME environment variable isn't set.
<stub> drewn3ss: In an event handler passed event 'ev',  I think you want """ ev.relation.data[self.model.unit]['extended_data'] = "data" """
<mup> Issue operator#365 closed: Make JujuVersion.from_environ return 0,0,0 if JUJU_VERSION isn't set <Created by chipaca> <Closed by chipaca> <https://github.com/canonical/operator/issues/365>
<mup> PR operator#379 closed: Assume version 0.0.0 if JUJU_VERSION is not set <Created by dstathis> <Merged by chipaca> <https://github.com/canonical/operator/pull/379>
<mup> PR operator#380 closed: Add info about fixing pyyaml in README.md <Created by dstathis> <Merged by chipaca> <https://github.com/canonical/operator/pull/380>
<drewn3ss> stub: thanks...I think after ruminating on it, that the helper being outside the operator's class is working against me, because, as you note, I need self.model.unit. (which I suppose I can pass to the helper, but may not be in the spirit of things)
<bthomas> I am trying to understand what this juju debug-log error implies: "application-prometheus: 14:00:24 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1".
<bthomas> Note there is no install hook in the prometheus charm
<Chipaca> bthomas: dispatch failed
<bthomas> Chipaca: you mean dispatching "install" hook ?
<Chipaca> bthomas: no i mean you have a 'dispatch' binary, so it's called for every hook
<Chipaca> bthomas: and that has failed
<drewn3ss> bthomas: what's the rest of the traceback before the exit status 1?
<drewn3ss> dispatch always calls your charm.py entrypoint and sets observable bits like self.on.install in this case
<bthomas> drewn3ss: https://pastebin.com/6ie9jez9
<bthomas> thanks.
<drewn3ss> You may want to run debug-hooks and run the dispatch yourself to see what's happening on stdout.
<drewn3ss> whatever's happening isn't getting hookenv logged.
<bthomas> thanks was not aware of debug-hooks. will try.
<drewn3ss> or juju run -u <unit> "hooks/install"
<drewn3ss> which should call into dispatch
<drewn3ss> and send you stdout/err
<drewn3ss> btw, how do we call dispatch with the proper args, or does it have to be called from a wrapper so argv[0] is the right hook name?
<bthomas> this is the dispatch script : "JUJU_DISPATCH_PATH="${JUJU_DISPATCH_PATH:-$0}" PYTHONPATH=lib:venv ./src/charm.py"
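That script forwards JUJU_DISPATCH_PATH (falling back to $0 for legacy hook invocations), and the framework derives the event name from that path. `event_from_dispatch_path` below is a hypothetical helper sketching the mapping, not ops' actual code:

```python
# Sketch of how a dispatch-style entry point recovers the event name from
# JUJU_DISPATCH_PATH. Hypothetical helper; the dash-to-underscore mapping
# matches how hook names become event names (e.g. self.on.update_status).
import os

def event_from_dispatch_path(path):
    # "hooks/install" -> "install"
    return os.path.basename(path).replace("-", "_")

print(event_from_dispatch_path("hooks/install"))        # install
print(event_from_dispatch_path("hooks/update-status"))  # update_status
```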
<bthomas> thanks for the debugging tips. much appreciated.
<drewn3ss> np
<bthomas> I should have mentioned I am deploying on a microk8s cluster. I just found out that juju debug-hooks is not supported on k8s models.
<moon127> juju debug-log from another terminal was helpful debugging k8s charms for me, the hook tracebacks appear in those logs.
<bthomas> thank you
<facubatista> Chipaca, justinclark, what's promised from my last branch: https://github.com/canonical/charmcraft/pull/123
<mup> PR charmcraft#123: Show paths with str, not repr, as the latter is horrible in windows <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/123>
<facubatista> bthomas, justinclark, drewn3ss, please remember about https://github.com/canonical/charmcraft/pulls and https://github.com/canonical/operator/pulls (feel free to shout here which one are you tackling so you don't step over each other... we do not need a zillion reviews on each PR, so let's coordinate)
<facubatista> Chipaca, ^
<justinclark> facubatista: will do
<facubatista> justinclark, thanks
<facubatista> Chipaca, from what we spoke recently: https://github.com/canonical/charmcraft/issues/125
<facubatista> Chipaca, also, https://github.com/canonical/charmcraft/issues/126 and https://github.com/canonical/charmcraft/issues/127
<facubatista> Chipaca, and finally, https://github.com/canonical/charmcraft/issues/128
<facubatista> (what a morning!!)
<Chipaca> facubatista: ...teté
<facubatista> jajaj
<Chipaca> facubatista: "what a morning, tea-tea", i guess?
<facubatista> no, no, it's not "ti ti"
 * facubatista needs to start writing phonetics
 * bthomas 🤔
<Chipaca> bthomas: ?
<bthomas> I was trying to decipher spanish :)
<Chipaca> facubatista: stand up!
<facubatista> oops
<Chipaca> justinclark: so, starting from Dmitrii-Sh's https://github.com/dshcherb/cockroachdb-operator you can see it uses https://github.com/dshcherb/interface-tcp-load-balancer which isn't exactly what you mentioned but might help?
<Chipaca> I thought Dmitrii-Sh also wrote a generic 'http' interface but can't see it now
<justinclark> Thanks Chipaca. I'll look over this. Also, for a bit more context, I'm trying to get the host and port for an incoming relation (via an http interface). Here's the exact line I'm trying to decouple from k8s: https://github.com/charmed-lma/charm-k8s-grafana/blob/88006babe566f3dd04d24dc183d3af40f242e82f/src/interface_http.py#L112
<Dmitrii-Sh> Chipaca: I had some HTTP health-checking-related code in the TCP load-balancer interface since just checking socket establishment wasn't enough but not the HTTP interface unfortunately.
<dstathis> @chipaca Do you know about pathlib? I think it could have been useful for some of these Windows fixes
<drewn3ss> justinclark: looking at line 104, and your todo, event.relation is the current event's relation handle.
<drewn3ss> you don't have to get_relations if you're in the relation event
<justinclark> Thanks drewn3ss. I'm actually rewriting this entire charm from scratch but hoping to at least take some nice tidbits from that version. My goal is to get the host/port of the incoming relation in a way that will work for k8s and non-k8s charms.
<drewn3ss> https://pastebin.canonical.com/p/NJtNPQ9WSv/
<drewn3ss> ack, not sure about how k8s may throw a wrench in there, but this is how I'm consuming a very simple http interface ^
<justinclark> Oh this looks helpful drewn3ss. Still getting up to speed on the framework as a whole -- would you be willing to share what your http interface looks like? I'm trying to learn by example at the moment :)
<drewn3ss> how do you mean, what it looks like?  The relation data for "http" is defined as "private-address": <remote unit's address hosting the website>, "port": <port website is listening on>, AFAIK
<drewn3ss> I'm only consuming a single website on the interface, so I'm not trying to build a loadbalancer.
<drewn3ss> the website in my code is the prometheus:website provides interface
<justinclark> I believe that answers my question. Thanks!
<drewn3ss> https://pastebin.canonical.com/p/pp9Gg3z6dZ/
<drewn3ss> cloudstats/0 is the "requires: website" charm, and prometheus2/0 is the "provides:website" charm
<drewn3ss> the code above is from cloudstats
<drewn3ss> I guess technically ingress-address should be preferred over private-address
<drewn3ss> but all relations have private-address by virtue of the framework
<justinclark> That's exactly the information I wasn't aware of. Perfect. Thanks again drewn3ss.
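The "http" interface data drewn3ss describes can be consumed with a few lines. Here `unit_data` is a plain dict standing in for the remote unit's relation data bag, not the ops API itself; the ingress-address preference is the one noted just above.

```python
# Read host/port from "http" interface relation data: the remote unit
# publishes "private-address" and "port"; prefer "ingress-address" when
# present. unit_data is a stand-in dict for the relation data bag.
def http_endpoint(unit_data):
    host = unit_data.get("ingress-address") or unit_data.get("private-address")
    return host, unit_data.get("port")

print(http_endpoint({"private-address": "10.1.16.5", "port": "9090"}))
# ('10.1.16.5', '9090')
print(http_endpoint({"private-address": "10.1.16.5",
                     "ingress-address": "203.0.113.7", "port": "9090"}))
# ('203.0.113.7', '9090')
```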
<mup> Issue operator#386 opened: Add best practice suggestions into docs <Created by balbirthomas> <https://github.com/canonical/operator/issues/386>
<Chipaca> dstathis: yeah we use pathlib extensively, that's why the windows fixes aren't completely overwhelmed by path changes
<Chipaca> dstathis: not as extensively as we'd like because in 3.5 it wasn't as integrated as it is from 3.6 on
<Chipaca> but still :)
<deej> Chipaca: So I was double checking a couple things after addressing your comments on that MP and it looks like the latest charmcraft release breaks interfaces in the build step?
<deej> i.e. I suddenly have a build/lib/interfaces/pgsql/ directory that's empty instead of a symlink to the real interface
<Chipaca> deej: latest from edge?
<deej> The snap, so not the latest latest
<Chipaca> deej: right, but the snap from edge channel, or beta?
<deej> Ah, edge, yes
<Chipaca> deej: alternatively, what does 'charmcraft version' say
<deej> r42
<Chipaca> ah
<Chipaca> facubatista: ^^^
<Chipaca> facubatista: we broke something
<deej> Version: 0.3.1+38.g9f241a8
<facubatista> deej, can you please do a "ls -l" of the original file, and provide us with the build log (you can see it wiht -v)
<facubatista> ?
<facubatista> deej, thanks
<deej> Sure, one sec
<deej> facubatista: https://pastebin.canonical.com/p/3Qrg2jwmP9/
<facubatista> deej, and what is? mod/interface-pgsql/pgsql/
<facubatista> does that exist?
<deej> https://pastebin.canonical.com/p/WtmZHdpzYd/
<deej> It does
<facubatista> ah, oh
<facubatista> mmm
<facubatista> deej, Chipaca, I found the issue... we're asking first if it's a dir, before if it's a symlink
<Chipaca> facubatista: you mean walk gives you symlinks to dirs in dirnames?
<facubatista> Chipaca, mmm... well, is_dir() answers True
<facubatista> Chipaca, and yes, just confirmed
 * Chipaca hugs deej
<Chipaca> thanks
<facubatista> (that os.walk gives it in the dirnames)
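The bug just diagnosed reproduces in a few lines: os.walk lists a symlink-to-a-directory in dirnames, and Path.is_dir() follows symlinks, so a dir check made before a symlink check silently treats the link as a plain directory.

```python
# os.walk puts symlinks-to-directories in dirnames, and is_dir() follows
# the link, so ask is_symlink() first when symlinks must be preserved.
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "real").mkdir()
    (root / "link").symlink_to(root / "real")

    _, dirnames, _ = next(os.walk(root))
    print(sorted(dirnames))               # ['link', 'real']

    print((root / "link").is_dir())       # True -- follows the symlink
    print((root / "link").is_symlink())   # True -- so check this first
```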
<Chipaca> facubatista: ok. I presume you're working on a fix? if so @ me when you've got a PR
<facubatista> Chipaca, I am, yes
<Chipaca> k
<Chipaca> i'm over there in the corner reading some stuff
<deej> Heh
<crodriguez> Hello hello. I'm trying to understand better the concept of units in kubernetes with juju.  When I deploy an app with juju, in the juju status, a unit "app/0" appears. The charm code executes in that "unit". Is that the equivalent of the app-operator-0 pod that appears in kubernetes? So the charm code actually runs in the operator pod?
<crodriguez> The reason for this question is that I am trying to use the kubernetes API in the charm. For that, I need to load the cluster config; when I do it manually in the operator pod it works, but when I run it in the charm, it fails with this:
<crodriguez> ```application-metallb-controller: 12:41:16 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install Traceback (most recent call last):
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "./src/charm.py", line 20, in <module>
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     class MetallbCharm(CharmBase):
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "./src/charm.py", line 130, in MetallbCharm
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     config.load_incluster_config()
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 45, in load_and_set
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     self._load_config()
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install   File "/var/lib/juju/agents/unit-metallb-controller-0/charm/venv/kubernetes/config/incluster_config.py", line 51, in _load_config
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install     raise ConfigException("Service host/port is not set.")
<crodriguez> application-metallb-controller: 12:43:56 DEBUG unit.metallb-controller/0.install kubernetes.config.config_exception.ConfigException: Service host/port is not set.```
<crodriguez> mmh sorry, I'll put that in a pastebin
<crodriguez> https://pastebin.canonical.com/p/B77mMm4tYw/
<crodriguez> So, it seems to me that the charm does not run in the operator pod... so where does it run? Just trying to figure this out
<Chipaca> crodriguez: in the pod list you should be seeing two pods
<crodriguez> yeah, the app pod and the operator pod. Well, install is failing right now because of this so only operator pod is up
<Chipaca> crodriguez: you'll have one <charm>-0, and one <charm>-operator-0
<Chipaca> right
<Chipaca> crodriguez: the operator pod is the one that runs the charm
<crodriguez> does it run in a virtual environment ? Because if I go into the operator pod, I can run the python commands just fine https://pastebin.canonical.com/p/VbC2kvbXDR/
<crodriguez> but running these same commands in the charm causes the error I linked  https://pastebin.canonical.com/p/B77mMm4tYw/
<drewn3ss> crodriguez: looks like you didn't exec that venv you were in the directory of.
<crodriguez> I couldn't find a bin/activate (:
<drewn3ss> oh fun.
<drewn3ss> look at the dispatch file for how python is called
<drewn3ss> maybe it might give a hint
<crodriguez> good idea
<Chipaca> it's _not_ a venv, fwiw
<Chipaca> it's just the libraries
<crodriguez> it's a fake venv called venv? :P
<Chipaca> yes
<drewn3ss> lol
<drewn3ss> awesome
<facubatista> Chipaca, https://github.com/canonical/charmcraft/pull/131
<mup> PR charmcraft#131: Respect the symlink even if it's a directory when building <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/131>
<facubatista> deej, ^
<crodriguez> may I suggest a rename to "lib" ?
<Chipaca> crodriguez: people are using lib for their own nefarious purposes :-p
<crodriguez> aah man :P
<drewn3ss> lol...now that we all know :)
<Chipaca> but yeah, maybe .lib or sth
<drewn3ss> the nice thing is, it acts as a venv for charm code library pathing purposes
<facubatista> drewn3ss, crodriguez, this is not the venv you're looking for  *does some jedi hand waving*
<drewn3ss> XD
<crodriguez> hahaha
<facubatista> Chipaca, ".lib or sth" is a horrible directory name
<crodriguez> ok... so dispatch only has this here : https://pastebin.canonical.com/p/z2fwtnMqrt/
<Chipaca> facubatista: make it .💩
<crodriguez> facubatista, but venv is confusing!
<facubatista> crodriguez, if you do "PYTHONPATH=venv python3" and then the import, it should work
<drewn3ss> ooh, lib is being added automatically by dispatch...that's handy to reduce all the import not at top errors \o/
<facubatista> crodriguez, for sure we can improve that
<Chipaca> i have to agree with crodriguez here :-|
<facubatista> (we just need a better name)
<drewn3ss> my days of taking Chipaca seriously are certainly coming to a middle.
<crodriguez> charm-packages ?
<facubatista> installed-python-libs
<Chipaca> drewn3ss: there's a trick to knowing when i'm being serious and when i'm not
 * drewn3ss is going to take the p00p emoji as a sign.
<Chipaca> drewn3ss: (presumably) (i don't know it myself)
<crodriguez> lmao
<crodriguez> thanks facubatista , trying that now
<Chipaca> facubatista: how about something that says where this comes from? like 'reqs' (or 'requirements' or sth)
<facubatista> deps
<facubatista> pydeps
<drewn3ss> btw, mad props to the operator team.  this stuff makes charming SO much easier.  no more finite-state-machine madness, and relation-state unit testing are killer apps.
<Chipaca> psydeps
 * drewn3ss is usually a curmudgeon when it comes to having to learn yet-another-framework, but this one is well worth the (oddly rather low) effort
<Chipaca> \o/
<drewn3ss> I know, I know, let's call yet another thing that's not a wheel a /wheel
<crodriguez> well it works just as well, I can't reproduce what the charm is complaining about https://pastebin.canonical.com/p/VnJm9wWbRv/
<Chipaca> ooohh, we could call it somehting wheely and make them all wheels
<Chipaca> crodriguez: :-(
<Chipaca> crodriguez: want to share the charm so i can take a peek?
<crodriguez> Sure. It's an ugly baby right now though, I warn you
<Chipaca> i'll put on my charm grandma glasses; all charms will be beautiful
<crodriguez> Chipaca, https://github.com/camille-rodriguez/charm-metallb-controller
<crodriguez> Context is that I have to write a charm for metallb for k8s. MetalLB is composed of a controller and a speaker, so I'll have a bundle of 2 charms. I'm just getting started with the controller. And I want to use the k8s API directly in the charm because the juju pod_spec is not able to create PodSecurityPolicies yet
<crodriguez> Chipaca, haha perfect
<Chipaca> crodriguez: and you're importing kubernetes, which is part of the image the charm runs on?
<deej> facubatista: Awesome, ta
<crodriguez> Chipaca, I added this block here https://github.com/camille-rodriguez/charm-metallb-controller/blob/master/src/charm.py#L130 to test the k8s api yeah
<crodriguez> I'm testing with microk8s rn
<Chipaca> heh, error about envget
<crodriguez> yeah sorry, pull again :)
<crodriguez> I removed that crap 20 sec after sending you the link lol. You're too quick!
<Chipaca> this upgrade loop is so tedious
<Chipaca> now i got there
<crodriguez> :) yeah  I usually just get rid of the app and redeploy... no upgrade for me haha
 * facubatista needs to eod
<crodriguez> kenneth just shared this with me https://github.com/juju-solutions/bundle-cert-manager/blob/master/charms/cert-manager-webhook/reactive/webhook.py#L37-L50
<crodriguez> apparently there's a bug in how the juju agent passes env variables to the charm
<Chipaca> aha! this rings a bell
 * Chipaca edits src/charm.py inside the pod directly
<crodriguez> I wonder if there's a bug open for this..
<Chipaca> crodriguez: that worked, fwiw
<Chipaca> crodriguez: you know of the k8s opslib, yes?
<crodriguez> great, I'm testing it rn too
<crodriguez> I knew there was a k8s lib under your personal github, I thought that it was not ready yet
<Chipaca> crodriguez: it's very minimal, just what mark maglana created with some tweaks, but ¯\_(ツ)_/¯
<Chipaca> it certainly doesn't know about PodSecurityPolicies :-D
<crodriguez> haha :P
<Chipaca> but maybe it should include this hack until juju fixes it
<crodriguez> well once my k8s charm is better defined, maybe I'll contribute to it
<crodriguez> yeah true
<Chipaca> crodriguez: do you know if there's a bug about this?
<crodriguez> kenneth didn't respond when I asked about a bug #
<crodriguez> I'll dig
<Chipaca> crodriguez: i'll stick around a bit longer to try to catch what you dig up
 * Chipaca … not working though
<crodriguez> the closest I can find is the one John raised a few days ago for another problem I got lol https://bugs.launchpad.net/juju/+bug/1891337
<crodriguez> i don't think it's related to the metrics hooks specifically though. I might open a new bug
<crodriguez> There, a new one https://bugs.launchpad.net/juju/+bug/1892255
<crodriguez> thanks for your help :) have a great evening!
<Chipaca> crodriguez: thanks!
#smooth-operator 2020-08-20
<axino> hi
<axino> is it expected that my very basic charm weighs ~10MB?
<xavpaice> crikey what did you put in there, a video instruction on installation?
<xavpaice> just checked one or two of my operator charms, yeah they really are that big
<xavpaice> git submodule and all that - though the charmcraft setup might change that quite a bit
<axino> I think it was ~300k a few days ago
<axino> and was deploying fine
<axino> I just removed the venv and it's back to ~300k
<axino> and it deploys fine
<Chipaca> axino: 'sup
<axino> Chipaca: my charm was ~10M big because it included "venv" and a full python3.6 distro
<Chipaca> axino: ! how
<Chipaca> axino: steps to repro plz?
<axino> Chipaca: that's what happens if you follow README.md
<axino> from charmcraft init
<axino> (I'm developing on an 18.04 machine btw)
<Chipaca> axino: with charmcraft from beta, or edge?
<axino> Chipaca: edge
<axino> Chipaca: I ran "charmcraft init" on Aug 17 though
<axino> if that helps
<Chipaca> no, yes, totally, i can see what's happening now
<Chipaca> axino: this is more of the issue we were discussing last night
<Chipaca> axino: two workarounds to unblock you while we address it
<axino> Chipaca: I added "venv" to .jujuignore and it seems to work
<Chipaca> axino: 1. add the venv to .jujuignore
<axino> :)
<Chipaca> yeah :-)
<Chipaca> axino: 2. was to create the venv outside of the project
<Chipaca> axino: 3. was to run charmcraft from beta instead of edge
<Chipaca> I suspect the fix is just to add "add venv to jujuignore" to the generated README, because "include everything not in jujuignore" has exactly this consequence, and it's what people requested
<axino> Chipaca: or just add venv to the jujuignore generated by charmcraft init ?
<Chipaca> it already has /env
<Chipaca> which might be a tyop
<axino> Chipaca: ah
<Chipaca> so, yeah
<axino> Chipaca: because I saw /env and was wondering why it was there
<Chipaca> yeah i'll push a fix that just env->venv in README.md.j2
<Chipaca> what we were discussing last night was venv in the generated charm, which isn't a venv at all
<Chipaca> so not really related to this
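For reference, the workaround axino and Chipaca converge on above amounts to a one-line addition to the generated .jujuignore. Assuming the file emitted by charmcraft init (which, per the discussion, already carries /env), the result would look roughly like this:

```
# .jujuignore: paths matched here are left out of the built charm
/env
/venv
```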
<bthomas> Chipaca: Are you free anytime today for a 1:1 ?
<Chipaca> bthomas: sure. in 30?
<bthomas> thanks. will wait for your ping
<Chipaca> bthomas: belated ping
<Chipaca> bthomas: standup meet
<bthomas> Chipaca: yep going there now
<Chipaca> I HELPED!
 * Chipaca celebrates
<bthomas> Yep : thank you Chipaca
<bthomas> ops-lib-k8s is not in pypi so I guess I have to add it as git repo to requirements.txt
<facubatista> Good morning everyone!
<Chipaca> bthomas: correct
<Chipaca> facubatista: ｇｏｏｄ ｍｏｒｎｉｎｇ！
<bthomas> Morning facubatista
<Chipaca> hah!
<bthomas> I was faster but didn't get the font right
<Chipaca> facubatista: you have been blessed by the gods of $RANDOM
 * bthomas hail facubatista 
<facubatista> Chipaca, that's me, hello
<facubatista> hello bthomas
<Chipaca> how 'eval printf "o%.0s" {2..$((RANDOM/512))}' can result in 'god' requires some thought
<facubatista> Chipaca, carrot time
 * Chipaca goes
<bthomas> I do not see logs from my charm in juju debug-log. Will it make juju unresponsive if I set juju model-config update-status-hook-interval to a few seconds ?
<bthomas> Also "eval printf "o%.0s" {2..1}" produces oo and {2..0} produces ooo
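A quick sketch of why the one-liner behaves as bthomas observes (bash specifics, so this assumes bash): printf reuses its format string once per argument, and "%.0s" consumes an argument while printing zero characters of it, so each argument contributes exactly one literal "o". Bash brace ranges also count downwards, which explains the {2..1} and {2..0} cases.

```shell
# one "o" per argument; {2..5} expands to four arguments
printf "o%.0s" {2..5}; echo   # oooo

# descending ranges still expand, to two and three arguments:
printf "o%.0s" {2..1}; echo   # oo
printf "o%.0s" {2..0}; echo   # ooo
```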
<mup> Issue operator#387 opened: passing k8s_resources to pod.set_spec() requires a dict with a single key "kubernetesResources" <Created by axinojolais> <https://github.com/canonical/operator/issues/387>
<Chipaca> bthomas: update-status-hook-interval is documented as having a minimum of 1m
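For reference, the interval Chipaca mentions is a per-model setting; against a live controller, something along these lines reads it and shortens it (1m being the documented minimum):

```
juju model-config update-status-hook-interval      # show the current value
juju model-config update-status-hook-interval=1m   # shortest supported value
```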
<Chipaca> facubatista: i'm back fwiw
<facubatista> Chipaca, let's do it
<facubatista> Chipaca, same than before?
<Chipaca> facubatista: actually, give me a minute to get a quick coffee thing
<facubatista> Chipaca, ack, I'm there
<bthomas> Chipaca: I am hosed then, am I not? How do I debug why both original and refactored charms do not progress beyond waiting for the pod to be set up?
<Chipaca> bthomas: are you doing a pod-set-spec and then waiting, in the same hook, for the pod to come up?
<Chipaca> facubatista: did you drop?
<bthomas> Chipaca: yes that looks to be the case .
<bthomas> I have not changed this from the original charm.
<Chipaca> bthomas: that's not going to work; pod-set-spec is done once the hook finishes successfully
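The transactional behaviour Chipaca describes can be sketched with a toy model (FakeJuju and config_changed are invented stand-ins, not real juju or ops APIs): the spec a hook sets is only applied after the hook returns successfully, so polling for the pod inside the same hook can never succeed.

```python
class FakeJuju:
    """Toy model of juju's transactional pod-spec handling."""

    def __init__(self):
        self.desired_spec = None   # what the hook asked for
        self.applied_spec = None   # what actually reached kubernetes

    def set_spec(self, spec):
        # recorded now, applied later
        self.desired_spec = spec

    def run_hook(self, hook):
        try:
            hook(self)
        except Exception:
            return  # failed hook: nothing is applied
        # the spec is applied only after the hook exits successfully
        self.applied_spec = self.desired_spec

def config_changed(juju):
    juju.set_spec({"containers": [{"name": "web"}]})
    # a loop here polling for the pod would spin forever:
    # applied_spec is still None until this function returns
    assert juju.applied_spec is None

juju = FakeJuju()
juju.run_hook(config_changed)
assert juju.applied_spec == {"containers": [{"name": "web"}]}
```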
<facubatista> Chipaca, nop, https://meet.google.com/eyx-jtan-dac
 * justinclark awake
<justinclark> Question Chipaca/facubatista: I'm working on the GrafanaBase charm that has all the shared functionality between k8s and non-k8s and I think I have an HTTP interface that works, but I want to get feedback in case I'm doing something silly.
<justinclark> Here's the interface_http.py file: https://github.com/justinmclark/grafana-charm-base/blob/grafana-base/src/interface_http.py
<facubatista> justinclark, let's talk in the standup? I'm in a meeting and have another one before that (I mean, standup is the earliest slot I may have to provide you feedback)
<justinclark> Sounds great. No rush. Thanks facubatista
<deej> https://code.launchpad.net/~deej/charm-k8s-openldap/+git/openldap/+merge/389606 <- Chipaca facubatista for our call in a few, though 90% of that is Docker stuff
<Chipaca> justinclark: was your review of charmcraft#124 a +1?
<mup> PR charmcraft#124: cope with bionic's slightly broken pip3 command <Created by chipaca> <https://github.com/canonical/charmcraft/pull/124>
<justinclark> Oh yes sorry about that Chipaca. Just gave it the official +1
<deej> Chipaca: https://code.launchpad.net/~deej/charm-k8s-openldap/+git/openldap/+merge/389345
<deej> Chipaca: https://code.launchpad.net/~deej/charm-k8s-openldap/+git/openldap/+merge/389606
<deej> Line 285 of that diff, I was hoping for a cleaner way to do that
<deej> Though if I go with facubatista's suggestion of storing a dict in the state and unpacking it when I do the pod configuration, that solves that issue for me
<deej> So I guess it's sort of a matter of best practices for how to store data structures in state
<Chipaca> deej: so a dict works, but if you'd rather use a class because db.user is better than db['user'], you can do one of two things: write an actual class with the appropriate methods, or, use a namedtuple and explicitly build it from state
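Chipaca's namedtuple suggestion can be sketched like this (stored and DB are illustrative names; stored stands in for what a charm would keep in its state):

```python
from collections import namedtuple

# what you'd keep in state: a plain dict
stored = {"user": "admin", "password": "hunter2"}

# explicitly rebuild a namedtuple from state for attribute access
DB = namedtuple("DB", ["user", "password"])
db = DB(**stored)

assert db.user == "admin"       # db.user instead of db["user"]
assert db._asdict() == stored   # round-trips back to the stored dict
```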
<mthaddon> Chipaca, facubatista: have ~charmcrafters as a requested reviewer for https://code.launchpad.net/~axino/charm-k8s-gunicorn/+git/charm-k8s-gunicorn/+merge/389554 fwiw (sorry for so many at once)
<deej> Chipaca: Nah, a dict totally works, I'm not fussy
 * Chipaca marks them Disapprove and moves on
 * mthaddon cries :(
<Chipaca> deej: OTOH WRT that line in the diff itself, there isn't currently a better way but I don't see why we couldn't have one
<Chipaca> I mean, given set_default, an update makes sense
<Chipaca> i
<Chipaca> i'll file a bug
<deej> ta
<deej> Awesome
<Chipaca> deej: #388
<mup> Issue #388: give BoundStoreState an update method to set multiple things <Created by chipaca> <https://github.com/canonical/operator/issues/388>
<mup> Issue operator#388 opened: give BoundStoreState an update method to set multiple things <Created by chipaca> <https://github.com/canonical/operator/issues/388>
<Chipaca> facubatista: did you get to see that library with super interesting progress bars i shared a while ago?
<facubatista> Chipaca, I don't remember which one... I know about this https://pypi.org/project/progress/ and https://tqdm.github.io/
<Chipaca> facubatista: https://github.com/willmcgugan/rich
<Chipaca> facubatista: it's rather big, but of interest to us are the tables and the progress bar i think
<Chipaca> but it might be too much also :)
<facubatista> Chipaca, we can revisit it when we get the order to show help messages with markdown format
<Chipaca> facubatista: FINE
<Chipaca> facubatista: :-p
<facubatista> :)
<Chipaca> have i mentioned how weird python is in windows
<Chipaca> the actual weirdest thing is how much stuff just works
<Chipaca> the second thing is the stuff that doesn't :)
<mthaddon> so basically everything is weird
<facubatista> Chipaca, meeting
<Chipaca> already?
<Chipaca> mthaddon: oh yeah. and slow.
<Chipaca> bthomas: 1 min and then we meet
<bthomas> Thanks Chipaca
<bthomas> Chipaca: we can't use the standup meeting url so https://meet.google.com/mhc-svks-wnx
<Chipaca> justinclark: i'm not sure i followed what you meant in #378 but the fix addresses the issue you point out
<mup> Issue #378: Docstyle for `ops` is Google doc style but parsed with Sphinx <Created by justinmclark> <https://github.com/canonical/operator/issues/378>
<bthomas> Thanks for your time Chipaca : I am also going to explore adding OCI resource feature to opslib, after looking at available implementations. If you have a preferred approach in this regard, do mention it.
<Chipaca> bthomas: if you arrange it into opslib directly in your charm, we can look at pulling it out next week
<bthomas> will do
<Chipaca> bthomas: my main concerns are wrt who the original author is, and ensuring when we do split it we bring authorship information along so we're not stealing without crediting it :)
<Chipaca> bthomas: also I don't think it has any tests
<Chipaca> so that's a problem
<bthomas> Understood. That is fair. I like fair :).
<Chipaca> bthomas: ultimately if it's more than a half hour of shunting code around just leave it and i can pick it up next week
<bthomas> Ok. It probably is for me right now.
<Chipaca> that's also fair :)
<bthomas> :)
<Chipaca> facubatista: ping
<facubatista> Chipaca, meeting
<Chipaca> facubatista: one I should be in?
<facubatista> Chipaca, don't know, nvidia one
<Chipaca> facubatista: can you run charmcraft from your commands-long-help branch? i have a question for you
<facubatista> run it how?
<Chipaca> facubatista: to see the help for build and for login
<facubatista> Chipaca, https://pastebin.canonical.com/p/7WnxW75jBt/
<Chipaca> facubatista: so, do you want the overview indented as in build, or not as in login?
<Chipaca> i like it slightly indented as the build one but noted you use dedent elsewhere
<justinclark> facubatista it looks like others have had to check if event.unit is None. e.g. https://github.com/canonical/operator/issues/175#issuecomment-597042504
<facubatista> Chipaca, ah, got it! No indentation, please
<Chipaca> facubatista: wrt version, i think we should drop "Version: " from its output
<facubatista> Chipaca, want me to include that change in this branch:
<facubatista> ?
<Chipaca> facubatista: i'm editing them as you requested
<facubatista> (in the long-commands-help, I mean)
<Chipaca> oh wait you mean Version:?
<Chipaca> could be :)
<Chipaca> i'm struggling with some of these texts but we can revisit, right?
<Chipaca> facubatista: pushed, please let me know what you think
 * facubatista checks
<Chipaca> and i bungled something up, pushing a fix
<facubatista> Chipaca, I see it now (it was in my original text, I think): if we quote "latest", shouldn't we also quote edge, beta, etc? or unquote them all?
<facubatista> Chipaca, also, need to update version's test
<Chipaca> psh
<Chipaca> fixing
<facubatista> Chipaca, beyond those two details, I like it
<facubatista> so we need just a third eye on the branch
<Chipaca> facubatista: 👁
<facubatista> ja
<Chipaca> facubatista: wrt the quotes, probably, and we should probably use the right quotes also
<Chipaca> facubatista: in this case the “right” quotes are “these”
<facubatista> Chipaca, as you wish, I'm a quote impaired person
<Chipaca> facubatista: http://www.eng-lang.co.uk/ogs.htm
<Chipaca> facubatista: probably
<Chipaca> facubatista: «Use quotation marks to enclose an unfamiliar word or phrase, or one to be used in a technical sense.  The effect is similar to that of highlighting the term through italics»
<Chipaca> ANYway
<Chipaca> EOW for me babyy
<facubatista> yes, that's quoting
 * Chipaca sets the room on fire and runs away
<facubatista> I use " for that
<facubatista> I don't like the asymmetric ones
<facubatista> and I almost hate the "compressed double greater/less than"
<justinclark> I noticed the reactive, non-k8s Grafana charm [1] is using snap/apt to install Grafana. Is this preferred to using a container image (for non-k8s charms)? I'm sure this partly depends on the app itself but I didn't know if there are best practices for charmers.
<justinclark> [1] https://git.launchpad.net/charm-grafana/tree/src/reactive/grafana.py
<justinclark> We'd have to set up persistent storage if using the grafana docker container but that's easy enough to do.
<justinclark> I suppose docker would also need to be running on the unit which could be a hassle (but I don't know that for sure)
<facubatista> justinclark, don't know either
<facubatista> justinclark, are there significant version differences between the snap and the repo?
<justinclark> It doesn't look like repo vs. snap have significant version differences
<justinclark> facubatista ^
<facubatista> justinclark, which is the benefit of using a container image over "snap install grafana"?
<justinclark> facubatista, Since k8s requires a container image for the pod spec, I figured I'd see if using that same container image is just as easy to use in the non-k8s charm. Again, this is just me trying to find the overlap between k8s and non-k8s charms.
<justinclark> Snap does seem easier though
<drewn3ss> honestly, the history is that all lma/infra charms were moved to using snaps for A) dogfooding, and B) ease of iteration and installation and auto-updating nature of snaps.
<drewn3ss> it's lighter from an operational perspective
<drewn3ss> but if youre moving into a containerized world, and can build processes around managing container images like we have for managing snapcraft files in upstream projects, I don't see why a move to containers instead of snaps would be a problem.
<drewn3ss> in fact, the cdk-addons uses the grafana/influx container image
<drewn3ss> but that product lacks the persistent volume storage for upgradability and node failover management
<justinclark> Ah okay that makes sense drewn3ss. Whatever will make things easier for people down the line is what I'll end up doing. Right now I'm experimenting so I won't make decisions that will lock anyone into a particular way of doing things.
<mup> PR operator#389 opened: minor docs fix <Created by justinmclark> <https://github.com/canonical/operator/pull/389>
#smooth-operator 2020-08-21
<facubatista> Good morning everyone!
<bthomas> Morning facubatista
<facubatista> hello bthomas
<bthomas> Looks like there is no easy way to debug charms on microk8s. juju debug-log does not show logging messages from the charm because even the smallest update interval of 1min is too slow to capture the logs. The other means (debug-hooks, debug-code, etc.) are not supported on k8s.
<bthomas> facubatista: if you have some time for a 1:1 anytime I will be much obliged.
<facubatista> bthomas, in ~10' it's fine
<bthomas> facubatista: awesome thank you ping me when you are ready
 * facubatista could use a review on this dead simple branch https://github.com/canonical/charmcraft/pull/133
<mup> PR charmcraft#133: Added a note in the release howto about sending a mail when done <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/133>
 * bthomas looking at https://github.com/canonical/charmcraft/pull/133
<facubatista> bthomas, anytime now, I'm in https://meet.google.com/fqw-mdqc-dsf
<bthomas> facubatista: going now
<facubatista> bthomas, join #juju
<moon127> hey all, I've added charmcrafters as a reviewer on the MP for unifi-poller charm being written in IS - https://code.launchpad.net/~moon127/charm-k8s-unifi-poller/+git/charm-k8s-unifi-poller/+merge/389643
<facubatista> moon127, ack, thanks
<justinclark> bthomas, I'm not sure about microk8s, but I can do juju ssh into units in a charmed-kubernetes bundle (the machines are LXD). Could you maybe juju deploy kubernetes-core and then deploy charms to that bundle?
<justinclark> Not sure if that'll work, but just offering suggestions.
<facubatista> another easy small PR for review, thanks! https://github.com/canonical/charmcraft/pull/134
<mup> PR charmcraft#134: Use original permissions when creating a dir at building time <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/134>
<justinclark> I'll take a look now facubatista
<facubatista> thanks
<justinclark> I have a (potentially noob) question: When looking at the docs, I see that leader units can set application data [1], but the only way I've found to set application data is through a relation. So my question: is there a way to access the application data directly rather than through a relation?
<justinclark> [1] https://ops.readthedocs.io/en/latest/index.html#ops.model.RelationData
<bthomas> justinclark: thanks. Will look into charmed kubernetes
<justinclark> bthomas: just a heads up - it's pretty resource intensive. You might have better luck with kubernetes-core bundle.
<bthomas> got it
<justinclark> Context for my question: when a relation_changed hook fires, I want to save some data that comes through that relation (host and port) to the main application's data.
<justinclark> I currently have a silly looking line like this: event.relation.data[self.app]['data-sources'][event.relation.id] = {...}
<justinclark> I think it would be much nicer to access app data directly but I don't know if that even makes sense in the context of Juju / operator framework.
<bthomas> I am looking at those docs myself and can see "They are allowed to read remote unit and application data." But it does not elaborate.
<bthomas> "They" refers to units.
<facubatista> justinclark, what happens if you do ...
<facubatista> relation_data = event.relation.data[self.unit]
<facubatista> relation_data[somekey] = somevalue
<facubatista> ?
<justinclark> Yeah that would clean things up. Let me try something like that.
<justinclark> facubatista, Is there a rule of thumb for what kind of data should be stored at the unit level vs. application level?
<justinclark> (Again, probably showing my newness to charming)
<facubatista> IIUC you set the information to the application at the other end of the relation
<facubatista> not the unit
<facubatista> that `[self.unit]` I'm doing above is because you have the data for the apps in both units
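The unit vs. application buckets facubatista is describing can be sketched with a toy model (Relation and publish are invented names, not the real ops API): a relation carries one writable bucket per local unit and one per local application, and only the leader may write the application one.

```python
class Relation:
    """Toy model of a relation's writable data buckets."""

    def __init__(self, local_unit, local_app):
        self.data = {local_unit: {}, local_app: {}}

def publish(relation, unit, app, is_leader, host, port):
    # per-unit data: any unit may write its own bucket
    relation.data[unit]["host"] = host
    # application data: leader only
    if is_leader:
        relation.data[app]["port"] = str(port)

rel = Relation("grafana/0", "grafana")
publish(rel, "grafana/0", "grafana", is_leader=True,
        host="10.0.0.7", port=3000)
assert rel.data["grafana/0"]["host"] == "10.0.0.7"
assert rel.data["grafana"]["port"] == "3000"
```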
<justinclark> So if my Grafana charm accepts a Prometheus relation, setting event.relation.data[self.app] would set prometheus app data rather than Grafana app data?
<justinclark> Okay, so I think what I actually need is to store the data in the charm's StoredState: https://ops.readthedocs.io/en/latest/index.html#ops.framework.StoredState
<justinclark> Basically, I'm just trying to make Grafana data source information (host/port/etc.) easily accessible.
<facubatista> justinclark, yes, you can store stuff in the charm's StoredState (btw, you should call that "store", not "state", as we want to get away from that misnaming: it's a general store, not the "state of the charm")
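What justinclark wants can be sketched in plain Python (store, on_source_changed, and on_source_broken are illustrative names; store stands in for a StoredState attribute): keep the data-source info in the charm's own store, keyed by relation id, rather than writing it into relation data.

```python
# a dict acting as the charm's store; in a real charm this would
# live on a StoredState attribute
store = {"data_sources": {}}

def on_source_changed(store, relation_id, host, port):
    # remember this relation's data source
    store["data_sources"][relation_id] = {"host": host, "port": port}

def on_source_broken(store, relation_id):
    # forget it when the relation goes away
    store["data_sources"].pop(relation_id, None)

on_source_changed(store, 7, "prometheus.local", 9090)
assert store["data_sources"][7] == {"host": "prometheus.local", "port": 9090}
on_source_broken(store, 7)
assert store["data_sources"] == {}
```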
<facubatista> another easy PR: https://github.com/canonical/charmcraft/pull/135
<mup> PR charmcraft#135: Support the global options also after the command is given <Created by facundobatista> <https://github.com/canonical/charmcraft/pull/135>
<facubatista> mthaddon, just took a look at the charm-k8s-gunicorn MP, added some comments
<crodriguez__> Does anyone know if it is possible to define the metadata in pod_spec_set for a role ? I want to turn this into a pod_spec, and the metadata field is not recognized with the way I do it https://pastebin.canonical.com/p/zQSXfQGnkq/
<crodriguez__> I get this https://pastebin.canonical.com/p/RYXNzDW6z6/
<mup> Issue operator#390 opened: Only one role allowed <Created by camille-rodriguez> <https://github.com/canonical/operator/issues/390>
<crodriguez__> I opened that bug in case you can do something about it in the framework itself. Idk if it is more juju than ops though
 * facubatista eods
<facubatista> and eows
<facubatista> see you all on Monday
