[04:25] <narindergupta> facubatista, I am using juju deploy .
[05:22] <mup> Issue operator#371 opened: Use controller storage if no local storage exists <Created by stub42> <https://github.com/canonical/operator/issues/371>
[09:17] <Chipaca> moin moin moin
[09:17] <Chipaca> (my previous good morning went to the wrong channel)
[09:18] <jam> morning @Chipaca
[09:18] <Chipaca> heh, the mattermost effect
[09:20] <bthomas> 🌅 Good Morning 🌅
[09:21] <Chipaca> 🏜
[09:25] <bthomas> :) only missing a rattle snake
[09:38] <Chipaca> I think I'm going to make the tests of a 'charmcraft init' charm fail
[09:38] <Chipaca> until the author does the XXXs :)
[09:43] <bthomas> How about a message saying what XXX is along with the failed message ?
[09:43] <Chipaca> yup
[09:43] <Chipaca> (that's already there)
[09:44] <Chipaca> you see the messages when you charmcraft init already
[09:51] <ballot> Hello !
[09:52] <ballot> Chipaca: trying to test your code change, however, unlike pjdc, when I run a "juju run --unit mm-pd-bot/3 -- pod-spec-set --file /pod.json --k8s-resources /k8s.json", it hangs doing nothing
[09:52] <ballot> same goes for any juju run really ...
[09:52] <ballot> I realize I never tried a juju run on a k8s charm
[09:53] <Chipaca> jam: that works, right?
[09:54] <jam> Chipaca, I had just been trying to say good morning in mattermost, and immediately decided to say hello here instead
[09:54] <jam> ballot, Chipaca IIRC in 2.8 you can't juju run on K8s because it uses 'juju ssh' under the hood and K8s containers don't run ssh
[09:55] <jam> I think they are working on that for 2.9, though I would be surprised that it just hangs.
[09:56] <Chipaca> ballot: ^ :-/
[09:56] <ballot> interesting
[09:56] <ballot> As a workaround, what would be the right move to use pod-spec-set with the right context then ?
[09:57] <ballot> (lacking juju knowledge)
[09:57] <jam> ballot, hack it into your charm and test that way?
[09:57] <jam> kubectl exec ?
[09:57] <jam> but kubectl wouldn't have the context
[10:00] <ballot> i've created the file on the operator pod right now, but if I run it directly from there I have : ERROR JUJU_CONTEXT_ID not set
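The error ballot hits is the hook tools refusing to run outside a hook context; JUJU_CONTEXT_ID is how a tool like pod-spec-set locates the hook it belongs to. A minimal charm-side guard can be sketched as follows — the helper names are illustrative, not a real juju or ops API:

```python
import os
import subprocess


def in_hook_context() -> bool:
    """True when running inside a Juju hook context (hypothetical helper).

    Hook tools read JUJU_CONTEXT_ID to find the running hook; without it
    they fail with "ERROR JUJU_CONTEXT_ID not set", as seen above.
    """
    return "JUJU_CONTEXT_ID" in os.environ


def pod_spec_set(spec_path: str, resources_path: str) -> None:
    """Invoke the pod-spec-set hook tool, guarding against a missing context."""
    if not in_hook_context():
        raise RuntimeError("pod-spec-set only works inside a hook context")
    subprocess.run(
        ["pod-spec-set", "--file", spec_path, "--k8s-resources", resources_path],
        check=True,
    )
```

This is why running the tool from an operator-pod shell fails while running it from charm code (or `juju run`, on substrates that support it) succeeds: only the latter executes inside a hook environment.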
[10:01] <ballot> yeah, that's the issue, I'm trying to see if Chipaca's branch fixes https://bugs.launchpad.net/juju/+bug/1880637
[10:01] <ballot> what is strange is that pjdc could do it ...
[10:01] <ballot> oh well I can try a more recent juju
[10:04] <Chipaca> ballot: maybe he wasn't using a k8s charm
[10:04] <ballot> nevermind I'll just hack through my charm and separate kubernetesResources from the pod_spec and test this way
[10:05] <ballot> Chipaca: I doubt it, but I'll make it work somehow :). But after lunch. Never stand between a frenchman and his lunch !
[10:05]  * Chipaca would never
[10:06] <Chipaca> reminds me i need to go to the shops to set fire to some baguettes
[10:06]  * Chipaca runs
[10:07] <Chipaca> jk i need to buy some hydration … pills? the fizzy kind
[10:38]  * Chipaca bbiab
[10:45] <facubatista> ¡Muy buenos días a todos! [Good morning, everyone!]
[10:51] <bthomas> Morning
[11:26] <facubatista> hola bthomas
[11:31] <bthomas> yep
[11:31] <bthomas> how can I be of service facubatista
[11:32] <bthomas> just watched the testing.Harness video by Paul Goins you gave me last week
[11:36] <facubatista> awesome
[12:34] <ballot> Chipaca: I confirm your fix works
[12:34] <Chipaca> ballot: \\\\\oooo/////
[12:35] <ballot> Chipaca: I have a WIP branch waiting for your change to land then :) : https://code.launchpad.net/~ballot/charm-k8s-mm-pd-bot/+git/charm-k8s-mm-pd-bot/+merge/389156
[12:38] <ballot> The pastebin showing the fix : https://pastebin.ubuntu.com/p/VPpRGpr9D8/
[12:38] <ballot> Updating the github PR
[12:40] <mup> Issue operator#293 closed: passing k8s_resources to pod.set_spec does not work <juju-workaround> <Created by jetpackdanger> <Closed by chipaca> <https://github.com/canonical/operator/issues/293>
[12:40] <mup> PR operator#369 closed: use yaml instead of json for pod-spec-set call <Created by chipaca> <Merged by chipaca> <https://github.com/canonical/operator/pull/369>
[12:40] <Chipaca> ballot: ^^^ :)
[12:40] <ballot> ah ! too fast for me :)
[12:40] <ballot> Just updated the PR with my findings
[12:41] <ballot> well, all good, I'll check for secrets and configmaps later on to have a real "k8s friendly" configuration pattern then, thanks for the help Chipaca
[12:41] <Chipaca> thanks for testing!
[12:57] <bthomas> I thought the standard for naming Python unittest tests was test_* . The prometheus charm has "tese" as a suffix. Also it is lacking a package initializer (__init__.py) so if I use a script such as "run_tests" it does not run any tests.
[13:01] <Chipaca> bthomas: "tese"?
[13:01] <bthomas> oops test_
[13:02] <bthomas> test files are named charm_test.py instead of test_charm.py
[13:05]  * bthomas goes to get coffee before standup
[13:22] <Chipaca> jam: facubatista: what's a good value of foo in juju_version.has_foo() to know if the juju has state-set/state-get?
[13:22] <Chipaca> has_state()? has_state_get()? has_controller_state()?
[13:22] <facubatista> has_state_setting_capability()
[13:23] <Chipaca> has_embalmed_ones()
[13:23] <Chipaca> has_trained_ones()
[13:23] <Chipaca> has_suckling_pigs()
[13:23] <facubatista> can_set_state()
[13:23] <Chipaca> has_mermaids()
[13:25] <Chipaca> facubatista: can_haz_state()
[13:26] <jam> Chipaca, supports_controller_storage is the one I like
[13:26] <Chipaca> has_controller_storage? :)
[13:27] <jam> Chipaca, if you want your has you can have it
[13:27]  * Chipaca can has it
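The naming jam and Chipaca settle on (has_controller_storage) amounts to a capability flag on a parsed Juju version. A minimal sketch, assuming a 2.8 cutoff for controller-side state-set/state-get — check the real JujuVersion class in ops before relying on the cutoff or the name:

```python
class JujuVersionSketch:
    """Tiny stand-in for a parsed Juju version with capability flags."""

    def __init__(self, version: str):
        # "2.8.1" -> (2, 8, 1); ignores build tags and release suffixes
        self.parts = tuple(int(p) for p in version.split(".") if p.isdigit())

    @property
    def has_controller_storage(self) -> bool:
        # Assumption for illustration: state-set/state-get (controller
        # storage) appeared in the 2.8 series.
        return self.parts >= (2, 8)
```

Tuple comparison gives the expected ordering for free: `(2, 7, 6) < (2, 8)` while `(2, 8, 1) >= (2, 8)`.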
[13:31] <Chipaca> jam: facubatista: poke
[13:31] <facubatista> oops
[13:31] <jam> sorry, just finishing standup
[13:45] <Chipaca> facubatista: https://grammarist.com/idiom/lay-of-the-land-or-lie-of-the-land/
[13:49] <crodriguez> good day everyone! I'm hitting this issue with a subordinate charm deployed on a ubuntu VM. I'm not sure what is causing it, any idea?  https://paste.ubuntu.com/p/pMmg365QR8/
[13:52] <Chipaca> crodriguez: hi! at what point do you get that? doing what, i mean
[13:54] <crodriguez> Chipaca, simply by deploying the charm. I don't think it has time to go through any events really, from what I can see in the logs. I didn't run into this before and I have been editing the charm, so maybe something I changed triggered this
[13:55] <crodriguez> I'll go back a few commits and compare if I'm hitting that still
[13:55] <Chipaca> crodriguez: 0.8 looks at JUJU_VERSION for some things, more than 0.7
[13:55] <Chipaca> so it might be us
[13:55] <Chipaca> but it's supposed to always be set, jam checked all the way back to 2.6 or sth
[13:55] <crodriguez> I'm using juju 2.8.1
[13:56] <crodriguez> it's just checking for the JUJU_VERSION environment variable, right?
[13:56] <Chipaca> yep
[13:58] <facubatista> crodriguez, good day!
[13:58] <crodriguez> https://paste.ubuntu.com/p/V75wy85pq7/ humm maybe something changed in juju ?
[13:59] <Chipaca> there's no JUJU env vars
[14:00] <jam> crodriguez, "juju run --unit ubuntu/0 env"
[14:01] <jam> That should create a hook environment, where env vars like JUJU_VERSION would be set.
[14:01] <jam> they *aren't* set inside of an SSH session.
[14:01] <narindergupta> Chipaca, I am still getting this error ERROR cannot repackage charm: symlink "config.yaml" is absolute: "/home/ubuntu/narinder/charm-k8s-cassandra/config.yaml"
[14:01] <Chipaca> narindergupta: what are you trying to do?
[14:02] <jam> narindergupta, Chipaca : If you are using charmcraft to build the charm, you need to do "charmcraft build; juju deploy ./CHARMNAME.charm"
[14:02] <narindergupta> Chipaca, I build the charms using charmcraft and now trying to deploy
[14:02] <jam> you can't deploy from the directory itself.
[14:02] <narindergupta> jam, oh ok let me try tht
[14:03] <crodriguez> ah thanks jam. So doing this, https://paste.ubuntu.com/p/rk8337KpJf/ , looks like the env is set correctly
[14:03] <jam> crodriguez, for your initial pastebin, do you have a bit more context?
[14:03] <jam> I wonder if we are failing to set JUJU_VERSION for something like "during collect-metrics hook"
[14:03] <narindergupta> jam, ok I can confirm that your method is working
[14:03] <jam> narindergupta, great
[14:04] <jam> narindergupta, I saw that when you were talking last night, but you weren't around for me to mention it :)
[14:04] <crodriguez> jam I can get you a complete pastebin of the charm execution
[14:04] <narindergupta> jam, no worries and thanks;
[14:05] <jam> JUJU_VERSION in hook environments hasn't been around forever, but the line existed in 2017-09
[14:05] <jam> so it definitely should be in ~2.4+
[14:05] <narindergupta> jam, Chipaca I got an install hook error just wondering why we need install in k8s?
[14:06] <jam> narindergupta, you don't have to implement it, but it is a point the framework itself uses to unpack/setup the python environment that the charm executes in.
[14:06] <crodriguez> jam: https://paste.ubuntu.com/p/WZfMRkvgyn/ that's all there is in /var/log/juju/unit-iscsi-connector-0
[14:07] <narindergupta> jam, in that case should I remove the install hook? As I am getting application-charm-k8s-cassandra: 14:03:28 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
[14:07] <jam> crodriguez, 2020-08-12 13:39:05 ERROR juju.worker.meterstatus runner.go:77 error running "meter-status-changed": exit status 1
[14:07] <jam> so it is one of the metrics hooks
[14:07] <narindergupta> And I am not aware about the dispatch script
[14:08] <jam> narindergupta, it sounds more like you have a typo/etc in your Charm code, and during 'install' is the first time we try to load your code.
[14:08] <jam> so its less about 'install' and more just we had a problem executing your charm
[14:08] <jam> if you remove 'install' then we'll hit the same thing in whatever other hook we call.
[14:08] <narindergupta> jam, ok let me read my  code once more time then
[14:09] <jam> crodriguez, the reason we don't normally hit that (I believe) is because if you don't have a metrics.yaml file, then Juju doesn't fire the metrics hooks.
[14:09] <crodriguez> mmh.. what is the metrics.yaml file for ? :)
[14:10] <bthomas> Thank you for your suggestions in standup. That was a very useful discussion. Here is a start on the google docs (I will keep adding more questions iteratively as discussed) https://docs.google.com/document/d/14EKsjHtzXnOvY6vaI47VwFzl6MEQ6OxaSBHsN7bcG9Q/edit
[14:10] <jam> crodriguez, hm. I don't see a metrics.yaml in your code, so I could be off base.
[14:10] <jam> Chipaca, ^^ it looks like JUJU_VERSION isn't set in the metrics hooks, one more reason they should be normalized.
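Until the gap jam describes is fixed, code that reads JUJU_VERSION has to tolerate its absence in the metrics hooks. A defensive sketch — the "0.0.0" fallback is an assumption for illustration, not what ops actually does:

```python
import os


def juju_version_from_environ(default: str = "0.0.0") -> str:
    """Read JUJU_VERSION, falling back when the hook environment lacks it
    (as happens in the metrics hooks discussed above)."""
    return os.environ.get("JUJU_VERSION", default)
```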
[14:11] <crodriguez> narindergupta, jam, there is a way to add more debug logs (so you could see what failed during the install hook). I don't recall the command though, maybe jam remembers it ?
[14:11] <jam> crodriguez, narindergupta : juju model-config logging-config="<root>=DEBUG" would certainly do it.
[14:11] <jam> that will give a lot more logging information generally
[14:11] <jam> bthomas, don't forget to ping here as well, in case we miss stuff getting added to the doc.
[14:12] <bthomas> jam: will do
[14:12] <jam> bthomas, also, you'll want to share the document with at least Comment rights
[14:12] <narindergupta> jam, ok
[14:12] <jam> bthomas, I'm only able to read that doc right now
[14:12] <jam> (my standard is Everyone At Canonical can Comment)
[14:12] <bthomas> jam: of course. Let me fix that
[14:12] <narindergupta> jam, in Kafka charm there was no install hook as it does not make sense for k8s charm
[14:14] <bthomas> jam: anyone should be able to edit now. Let me know if that does not work
[14:14] <jam> narindergupta, in Juju 2.7 it wasn't calling install, in 2.8 it always calls install. You don't have to implement it, but it is there to make things more regular
[14:15] <narindergupta> jam, gotcha
[14:15] <narindergupta> jam, thanks
[14:22] <mup> Issue operator#372 opened: JUJU_VERSION not set during metrics hooks <Created by jameinel> <https://github.com/canonical/operator/issues/372>
[14:23] <jam> facubatista, I've pushed an update to charmcraft#99
[14:23] <mup> PR charmcraft#99: charmcraft/jujuignore.py: Allow extending the list of patterns <Created by jameinel> <https://github.com/canonical/charmcraft/pull/99>
[14:31] <facubatista> jam, is there any particular reason for the relative import from parent? and not using absolute as we do with the rest for that?
[14:31] <crodriguez> jam re:metrics. Thanks for opening the bugs! It's not a blocker at least. My charm was stopped because of something else (my own charm code ofc), but it made me find this issue
[14:40] <narindergupta> Chipaca, I am getting this error k8s = ops.lib.use("k8s", 0, "chipaca@canonical.com")
[14:40] <narindergupta> NameError: name 'ops' is not defined
[14:40] <Chipaca> narindergupta: you need to import ops.lib
[14:40] <narindergupta> Chipaca, ok
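The NameError narindergupta hit is the general Python rule that importing a package does not bind its submodules: `import ops` alone leaves `ops.lib` undefined, hence Chipaca's fix of adding `import ops.lib` before the `ops.lib.use(...)` call. The same behaviour can be demonstrated with a stdlib package:

```python
# Importing a package does not automatically import its submodules.
import xml

try:
    xml.etree.ElementTree  # typically fails: `import xml` did not load etree
except AttributeError:
    pass

# Importing the submodule explicitly binds it, just as `import ops.lib`
# must precede ops.lib.use("k8s", 0, "chipaca@canonical.com") in a charm.
import xml.etree.ElementTree

tree = xml.etree.ElementTree.fromstring("<ok/>")
```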
[14:53] <Chipaca> jam: metric hooks don't work with controller state, right?
[14:56] <narindergupta> Chipaca, I am getting this error now ImportError: cannot find library "k8s" from "chipaca@canonical.com"
[14:57] <Chipaca> narindergupta: did you add the requirements.txt line?
[14:57] <narindergupta> Yes I have those two entries
[14:57] <narindergupta> ops
[14:57] <narindergupta> git+https://github.com/chipaca/ops-lib-k8s.git
[14:58] <Chipaca> narindergupta: and where do you get that error?
[14:58] <narindergupta> Chipaca, while executing k8s = ops.lib.use("k8s", 0, "chipaca@canonical.com")
[14:59] <narindergupta> In the charm class code
[14:59] <Chipaca> narindergupta: in a charm built with charmcraft?
[14:59] <narindergupta> Yes it was built with charmcode
[14:59] <narindergupta> charmcraft
[14:59] <Chipaca> narindergupta: can i see the debug logs that lead up to that error please?
[14:59] <narindergupta> And I can see k8s under venv
[15:01] <narindergupta> Chipaca, http://paste.ubuntu.com/p/kzJrVyBzTR/
[15:02] <narindergupta> Here is the http://paste.ubuntu.com/p/28hR8mTTmD src/charm.py
[15:02] <Chipaca> why are there no debug logs from the framework there?
[15:05] <Chipaca> narindergupta: can you look in venv/k8s to see if there's an 'opslib' directory there?
[15:06] <narindergupta> Chipaca, I am not seeing any directory there named opslib
[15:07] <narindergupta> Chipaca, build/venv contains  k8s  ops  ops-0.8.0.dist-info  ops_lib_k8s-0.0.dev0+unknown.dist-info  PyYAML-5.3.1.dist-info  test  yaml
[15:07] <Chipaca> very strange
[15:07] <narindergupta> And in ops/lib/ there is no k8s...
[15:08] <narindergupta> Yeah
[15:08] <Chipaca> give me a bit
[15:08] <narindergupta> Sure no problem let me know in case need any info
[15:08] <Chipaca> oh i know
[15:08]  * Chipaca fixes
[15:08] <narindergupta> :)
[15:10] <Chipaca> narindergupta: try now
[15:11] <narindergupta> Do I need to build again?
[15:12] <narindergupta> Yes I can see opslib now
[15:12] <narindergupta> Let me kill my controller and retry
[15:13] <Chipaca> poor controller
[15:13] <narindergupta> Chipaca, I know in microk8s that's the challenge
[15:16] <jam> Chipaca, metrics hooks don't work with controller state. We have checks for 'collect-metrics' but didn't implement the same check for meter-status-changed, so that might be a different bug/fix we should do.
[15:16] <jam> Chipaca, an update for 'is_restricted_context'
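jam's is_restricted_context remark boils down to a hook-name check. A sketch, where the hook set is an assumption: collect-metrics is the known restricted hook, and the meter-status-changed behaviour discussed above suggests it belongs in the same set:

```python
# Hooks that run in a restricted context: no controller state access,
# and (per the JUJU_VERSION issue above) possibly an incomplete environment.
RESTRICTED_HOOKS = {"collect-metrics", "meter-status-changed"}


def is_restricted_context(hook_name: str) -> bool:
    """Hypothetical check mirroring the one discussed for ops."""
    return hook_name in RESTRICTED_HOOKS
```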
[15:16] <Chipaca> right
[17:44] <Chipaca> facubatista: i'm having an issue with my tests and logassert, where the tests pass on their own but fail when run in the whole suite
[17:44] <Chipaca> facubatista: anything in particular i should look out for?
[17:47] <Chipaca> facubatista: https://github.com/canonical/operator/compare/master...chipaca:more-heuristics-for-storage
[17:47] <Chipaca> ideas welcome
[17:47] <Chipaca> it might be something silly, looking at it for too long
[17:47] <Chipaca> i'm going to take a break, get dinner, etc
[17:47] <Chipaca> ttfn
[17:48] <facubatista> Chipaca, will check
[17:49] <facubatista> Chipaca, which logassert version do you have?
[17:53] <Chipaca> facubatista: 5
[17:55] <facubatista> let's see if I introduced a bug there :)
[17:57] <Chipaca> unpossible
[18:12] <facubatista> Chipaca, which is the specific test?
[18:13] <Chipaca> facubatista: all of TestStorageHeuristics
[18:18] <facubatista> Chipaca, I have a lot of other failures, I suspect because I'm getting messages of "yaml does not have libyaml extensions, using slower pure Python yaml", and tests don't expect that
[18:18] <facubatista>     self.assertRegex(calls.pop(0), 'Using local storage: not a kubernetes charm')
[18:18] <facubatista> AssertionError: Regex didn't match: 'Using local storage: not a kubernetes charm' not found in 'juju-log --log-level DEBUG yaml does not have libyaml extensions, using slower pure Python yaml loader'
[18:19] <Chipaca> facubatista: that means you haven't run the tests in quite a while :-)
[18:19] <Chipaca> and it also means we probably should fix that
[18:20] <facubatista> Chipaca, yes and yes
[18:20] <Chipaca> facubatista: 'apt build-dep python3-yaml' should get you the bits you need
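The warning facubatista's test run trips over comes from PyYAML falling back to its pure Python loader when the package was built without libyaml. The usual selection pattern looks like this:

```python
# PyYAML only exposes the C-accelerated loader when built against
# libyaml; fall back to the pure Python one otherwise (the fallback is
# what produces the "slower pure Python yaml loader" message above).
try:
    from yaml import CSafeLoader as Loader
except ImportError:
    from yaml import SafeLoader as Loader

import yaml

data = yaml.load("storage: {kind: controller}", Loader=Loader)
```

Installing the libyaml headers before building PyYAML (hence the `apt build-dep python3-yaml` suggestion) makes the first import succeed.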
[18:28] <Chipaca> facubatista: in any case commenting out that log line should get you places
[18:29] <facubatista> Chipaca, shall I rebuild the VM?
[18:29] <facubatista> s/VM/venv/
[18:30] <Chipaca> facubatista: i don't know what you're doing :)
[18:30] <facubatista> Chipaca, I just apt build-dep as you suggested, error still happening
[18:41] <facubatista> Chipaca, so, the difference I'm finding so far between running them all or running just a bunch is NOT logassert related
[18:42] <facubatista> Chipaca, when running just test.test_main.TestStorageHeuristics, all is fine
[18:43] <facubatista> Chipaca, but when running test.test_main, the test I'm supervising (test_fallback_to_current_juju_version__too_old) fails because
[18:43] <facubatista> FileNotFoundError: [Errno 2] No such file or directory: 'juju-log'
[18:43] <Chipaca> facubatista: huh
[18:44] <Chipaca> facubatista: so probably it's our log setup code
[18:44] <Chipaca> that's not getting reset
[18:44] <Chipaca> facubatista: nice pointer. i'll follow it after dinner :)
[18:44] <facubatista> Chipaca, the logger has all these handlers: [<JujuLogHandler (DEBUG)>, <JujuLogHandler (DEBUG)>, <JujuLogHandler (DEBUG)>, <JujuLogHandler (DEBUG)>, <JujuLogHandler (DEBUG)>, <JujuLogHandler (DEBUG)>]
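Six copies of JujuLogHandler on one logger is the classic symptom of each test run attaching a fresh handler to the root logger without detaching the old ones. A reset helper along the lines of the reset_logging mentioned in this thread might look like:

```python
import logging


def reset_logging() -> None:
    """Detach and close every handler on the root logger, so repeated
    test setups don't accumulate duplicate handlers."""
    root = logging.getLogger()
    for handler in list(root.handlers):
        root.removeHandler(handler)
        handler.close()
```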
[18:45] <facubatista> Chipaca, ack, let me know if you need something else
[19:46] <Chipaca> facubatista: where do you see those handlers?
[19:47] <facubatista> Chipaca, I printed the handlers when logassert is hooked
[19:47] <facubatista> added this in l.38 of env/lib/python3.8/site-packages/logassert/logassert.py :
[19:47] <facubatista>         print("
[19:55] <Chipaca> facubatista: i get those errors if i remove the reset_logging from test_log
[19:55] <Chipaca> anyway, i give up
[19:55] <Chipaca> EOD for me
[19:56] <Chipaca> tomorrow shall bring new joys
[22:55]  * facubatista eods