[04:25] facubatista, I am using juju deploy .
[05:22] Issue operator#371 opened: Use controller storage if no local storage exists
[09:17] moin moin moin
[09:17] (my previous good morning went to the wrong channel)
[09:18] morning @Chipaca
[09:18] heh, the mattermost effect
[09:20] 🌅 Good Morning 🌅
[09:21] 🏜
[09:25] :) only missing a rattle snake
[09:38] I think I'm going to make the tests of a 'charmcraft init' charm fail
[09:38] until the author does the XXXs :)
[09:43] How about a message saying what XXX is along with the failed message ?
[09:43] yup
[09:43] (that's already there)
[09:44] you see the messages when you charmcraft init already
[09:51] Hello !
[09:52] Chipaca: trying to test your code change, however, unlike pjdc, when I run a "juju run --unit mm-pd-bot/3 -- pod-spec-set --file /pod.json --k8s-resources /k8s.json", it hangs doing nothing
[09:52] same goes for any juju run really ...
[09:52] I realize I never tried a juju run on a k8s charm
[09:53] jam: that works, right?
[09:54] Chipaca, I had just been trying to say good morning in mattermost, and immediately decided to say hello here instead
[09:54] ballot, Chipaca IIRC in 2.8 you can't juju run on K8s because it uses 'juju ssh' under the hood and K8s containers don't run ssh
[09:55] I think they are working on that for 2.9, though I would be surprised that it just hangs.
[09:56] ballot: ^ :-/
[09:56] interesting
[09:56] As a workaround, what would be the right move to use pod-spec-set with the right context then ?
[09:57] (lacking juju knowledge)
[09:57] ballot, hack it into your charm and test that way?
[09:57] kubectl exec ?
[09:57] but kubectl wouldn't have the context
[10:00] i've created the file on the operator pod right now, but if I run it directly from there I have : ERROR JUJU_CONTEXT_ID not set
[10:01] yeah, that's the issue, I'm trying to see if Chipaca's branch fixes https://bugs.launchpad.net/juju/+bug/1880637
[10:01] what is strange is that pjdc could do it ...
[10:01] oh well I can try a more recent juju
[10:04] ballot: maybe he wasn't using a k8s charm
[10:04] nevermind I'll just hack through my charm and separate kubernetesResources from the pod_spec and test this way
[10:05] Chipaca: I doubt it, but I'll make it work somehow :). But after lunch. Never stand between a frenchman and his lunch !
[10:05] * Chipaca would never
[10:06] reminds me i need to go to the shops to set fire to some baguettes
[10:06] * Chipaca runs
[10:07] jk i need to buy some hydration … pills? the fizzy kind
[10:38] * Chipaca bbiab
[10:45] ¡Muy buenos días a todos!
[10:51] Morning
[11:26] hola bthomas
[11:31] yep
[11:31] how can I be for service facubatista
[11:32] just watched the testing.Harness video by Paul Goins you gave me last week
[11:36] awesome
[12:34] Chipaca: I confirm your fix works
[12:34] ballot: \\\\\oooo/////
[12:35] Chipaca: I have a WIP branch waiting for your change to land then :) : https://code.launchpad.net/~ballot/charm-k8s-mm-pd-bot/+git/charm-k8s-mm-pd-bot/+merge/389156
[12:38] The pastebin showing the fix : https://pastebin.ubuntu.com/p/VPpRGpr9D8/
[12:38] Updating the github PR
[12:40] Issue operator#293 closed: passing k8s_resources to pod.set_spec does not work
[12:40] PR operator#369 closed: use yaml instead of json for pod-spec-set call
[12:40] ballot: ^^^ :)
[12:40] ah ! too fast for me :)
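
A minimal sketch of the in-charm workaround jam suggests above: call pod.set_spec() from the charm itself instead of invoking pod-spec-set through juju run. The charm class, event choice and spec contents are illustrative, not the actual mm-pd-bot charm.

# Illustrative sketch only: set the pod spec and Kubernetes resources from
# charm code (the workaround suggested above), rather than via `juju run`.
from ops.charm import CharmBase
from ops.main import main


class MyK8sCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        if not self.unit.is_leader():
            return  # only the leader may set the pod spec
        spec = {
            'version': 3,
            'containers': [{
                'name': self.app.name,
                'image': 'example/image:latest',  # placeholder image
                'ports': [{'containerPort': 8080, 'protocol': 'TCP'}],
            }],
        }
        # Shape of the extra resources is illustrative; this is the argument
        # exercised by issue operator#293 / PR operator#369 above.
        k8s_resources = {'kubernetesResources': {'secrets': []}}
        self.model.pod.set_spec(spec, k8s_resources)


if __name__ == '__main__':
    main(MyK8sCharm)
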
[12:40] Just updated the PR with my findings
[12:41] well, all good, I'll check for secrets and configmaps later on to have a real "k8s friendly" configuration pattern then, thanks for the help Chipaca
[12:41] thanks for testing!
[12:57] I thought the standard for naming Python unittest tests was test_* . The prometheus charm has "tese" as a suffix. Also it is lacking a package initializer (__init__.py) so if I use a script such as "run_tests" it does not run any tests.
[13:01] bthomas: "tese"?
[13:01] oops test_
[13:02] test files are named charm_test.py instead of test_charm.py
[13:05] * bthomas goes to get coffee before standup
[13:22] jam: facubatista: what's a good value of foo in juju_version.has_foo() to know if the juju has state-set/state-get?
[13:22] has_state()? has_state_get()? has_controller_state()?
[13:22] has_state_setting_capability()
[13:23] has_embalmed_ones()
[13:23] has_trained_ones()
[13:23] has_suckling_pigs()
[13:23] can_set_state()
[13:23] has_mermaids()
[13:25] facubatista: can_haz_state()
[13:26] Chipaca, supports_controller_storage is the one I like
[13:26] has_controller_storage? :)
[13:27] Chipaca, if you want your has you can have it
[13:27] * Chipaca can has it
[13:31] jam: facubatista: poke
[13:31] oops
[13:31] sorry, just finishing standup
[13:45] facubatista: https://grammarist.com/idiom/lay-of-the-land-or-lie-of-the-land/
[13:49] good day everyone! I'm hitting this issue with a subordinate charm deployed on a ubuntu VM. I'm not sure what is causing it, any idea? https://paste.ubuntu.com/p/pMmg365QR8/
[13:52] crodriguez: hi! at what point do you get that? doing what, i mean
[13:54] Chipaca, simply by deploying the charm. I don't think it has time to go through any events really, from what I can see in the logs. I didn't run into this before and I have been editing the charm, so maybe something I changed triggered this
[13:55] I'll go back a few commits and compare if I'm hitting that still
[13:55] crodriguez: 0.8 looks at JUJU_VERSION for some things, more than 0.7
[13:55] so it might be us
[13:55] but it's supposed to always be set, jam checked all the way back to 2.6 or sth
[13:55] I'm using juju 2.8.1
[13:56] it's just checking for the JUJU_VERSION environment variable, right?
[13:56] yep
[13:58] crodriguez, good day!
[13:58] https://paste.ubuntu.com/p/V75wy85pq7/ humm maybe something changed in juju ?
[13:59] there's no JUJU env vars
[14:00] crodriguez, "juju run --unit ubuntu/0 env"
[14:01] That should create a hook environment, where env vars like JUJU_VERSION would be set.
[14:01] they *aren't* set inside of an SSH session.
[14:01] Chipaca, I am still getting this error ERROR cannot repackage charm: symlink "config.yaml" is absolute: "/home/ubuntu/narinder/charm-k8s-cassandra/config.yaml"
[14:01] narindergupta: what are you trying to do?
[14:02] narindergupta, Chipaca : If you are using charmcraft to build the charm, you need to do "charmcraft build; juju deploy ./CHARMNAME.charm"
[14:02] Chipaca, I build the charms using charmcraft and now trying to deploy
[14:02] you can't deploy from the directory itself.
[14:02] jam, oh ok let me try tht
[14:03] ah thanks jam. So doing this, https://paste.ubuntu.com/p/rk8337KpJf/ , looks like the env is set correctly
[14:03] crodriguez, for your initial pastebin, do you have a bit more context?
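
For reference, a small sketch of the JUJU_VERSION check under discussion: the variable only exists inside a hook environment (hence jam's `juju run --unit ubuntu/0 env`), and ops 0.8 parses it with JujuVersion. The 2.8 cut-off below is only a placeholder; the real minimum for state-get/state-set, and the has_controller_storage-style name for the check, are exactly what is being worked out above.

# Sketch, not the framework's actual check: read JUJU_VERSION from the hook
# environment and gate behaviour on it. The (2, 8) threshold is a placeholder.
import os

from ops.jujuversion import JujuVersion

raw = os.environ.get('JUJU_VERSION', '0.0.0')  # unset outside hook contexts
version = JujuVersion(raw)

if (version.major, version.minor) >= (2, 8):
    print('assuming state-get/state-set (controller storage) is available')
else:
    print('falling back to local storage')
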
[14:03] I wonder if we are failing to set JUJU_VERSION for something like "during collect-metrics hook"
[14:03] jam, ok I can confirm that your method is working
[14:03] narindergupta, great
[14:04] narindergupta, I saw that when you were talking last night, but you weren't around for me to mention it :)
[14:04] jam I can get you a complete pastebin of the charm execution
[14:04] jam, no worries and thanks;
[14:05] JUJU_VERSION in hook environments hasn't been around forever, but the line existed in 2017-09
[14:05] so it definitely should be in ~2.4+
[14:05] jam, Chipaca I got an install hook error just wondering why we need install in k8s?
[14:06] narindergupta, you don't have to implement it, but it is a point the framework itself uses to unpack/setup the python environment that the charm executes in.
[14:06] jam: https://paste.ubuntu.com/p/WZfMRkvgyn/ that's all there is in /var/log/juju/unit-iscsi-connector-0
[14:07] jam, in that case should I remove the install hook? As I am getting application-charm-k8s-cassandra: 14:03:28 ERROR juju.worker.uniter.operation hook "install" (via hook dispatching script: dispatch) failed: exit status 1
[14:07] crodriguez, 2020-08-12 13:39:05 ERROR juju.worker.meterstatus runner.go:77 error running "meter-status-changed": exit status 1
[14:07] so it is one of the metrics hooks
[14:07] And I am not aware about the dispatch script
[14:08] narindergupta, it sounds more like you have a typo/etc in your Charm code, and during 'install' is the first time we try to load your code.
[14:08] so its less about 'install' and more just we had a problem executing your charm
[14:08] if you remove 'install' then we'll hit the same thing in whatever other hook we call.
[14:08] jam, ok let me read my code once more time then
[14:09] crodriguez, the reason we don't normally hit that (I believe) is because if you don't have a metrics.yaml file, then Juju doesn't fire the metrics hooks.
[14:09] mmh.. what is the metrics.yaml file for ? :)
[14:10] Thank you for your suggestions in standup. That was a very useful discussion. Here is a start on the google docs (I will keep adding more questions iteratively as discussed) https://docs.google.com/document/d/14EKsjHtzXnOvY6vaI47VwFzl6MEQ6OxaSBHsN7bcG9Q/edit
[14:10] crodriguez, hm. I don't see a metrics.yaml in your code, so I could be off base.
[14:10] Chipaca, ^^ it looks like JUJU_VERSION isn't set in the metrics hooks, one more reason they should be normalized.
[14:11] narindergupta, jam, there is a way to add more debug logs (so you could see what failed during the install hook). I don't recall the command though, maybe jam remembers it ?
[14:11] crodriguez, narindergupta : juju model-config logging-config="<root>=DEBUG" would certainly do it.
[14:11] that will give a lot more logging information generally
[14:11] bthomas, don't forget to ping here as well, in case we miss stuff getting added to the doc.
[14:12] jam: will do
[14:12] bthomas, also, you'll want to share the document with at least Comment rights
[14:12] jam, ok
[14:12] bthomas, I'm only able to read that doc right now
[14:12] (my standard is Everyone At Canonical can Comment)
[14:12] jam: ofcourse. Let me fix that
[14:12] jam, in Kafka charm there was no install hook as it does not make sense for k8s charm
[14:14] jam: anyone should be able to edit now. Let me know if that does not work
[14:14] narindergupta, in Juju 2.7 it wasn't calling install, in 2.8 it always calls install. You don't have to implement it, but it is there to make things more regular
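
A minimal sketch of what jam describes above: a charm that handles the install event. Observing it is optional, but install is also the first time the framework imports the charm code, so a typo in charm.py shows up as a failed install hook. The class name and handler body are illustrative.

# Illustrative sketch of an optional install handler in an ops charm.
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, MaintenanceStatus


class MinimalCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.install, self._on_install)

    def _on_install(self, event):
        self.unit.status = MaintenanceStatus('installing')
        # one-time setup would go here
        self.unit.status = ActiveStatus()


if __name__ == '__main__':
    main(MinimalCharm)
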
[14:15] jam, gotchu you
[14:15] jam, thanks
[14:22] Issue operator#372 opened: JUJU_VERSION not set during metrics hooks
[14:23] facubatista, I've pushed an update to charmcraft#99
[14:23] PR charmcraft#99: charmcraft/jujuignore.py: Allow extending the list of patterns
[14:31] jam, is there any particular reason for the relative import from parent? and not using absolute as we do with the rest for that?
[14:33] jam re:metrics. Thanks for opening the bugs! It's not a blocker at least. My charm was stopped because of something else (my own charm code ofc), but it made me find this issue
[14:40] Chipaca, I am getting this error k8s = ops.lib.use("k8s", 0, "chipaca@canonical.com")
[14:40] NameError: name 'ops' is not defined
[14:40] narindergupta: you need to import ops.lib
[14:40] Chipaca, ok
[14:53] jam: metric hooks don't work with controller state, right?
[14:56] Chipaca, I am getting this error now ImportError: cannot find library "k8s" from "chipaca@canonical.com"
[14:57] narindergupta: did you add the requirements.txt line?
[14:57] Yes I have those two entries
[14:57] ops
[14:57] git+https://github.com/chipaca/ops-lib-k8s.git
[14:58] narindergupta: and where do you get that error?
[14:58] Chipaca, while executing k8s = ops.lib.use("k8s", 0, "chipaca@canonical.com")
[14:59] In the charm class code
[14:59] narindergupta: in a charm built with charmcraft?
[14:59] Yes it was build with charmcode
[14:59] charmcraft
[14:59] narindergupta: can i see the debug logs that lead up to that error please?
[14:59] And I can see k8s under venv
[15:01] Chipaca, http://paste.ubuntu.com/p/kzJrVyBzTR/
[15:02] Here is the http://paste.ubuntu.com/p/28hR8mTTmD src/charm.py
[15:02] why are there no debug logs from the framework there?
[15:05] narindergupta: can you look in venv/k8s to see if there's an 'opslib' directory there?
[15:06] Chipaca, I am not seeing any directory there named opslib
[15:07] Chipaca, build/venv contains k8s ops ops-0.8.0.dist-info ops_lib_k8s-0.0.dev0+unknown.dist-info PyYAML-5.3.1.dist-info test yaml
[15:07] very strange
[15:07] And in ops/lib/ there is no k8s...
[15:08] Yeah
[15:08] give me a bit
[15:08] Sure no problem let me know in case need any info
[15:08] oh i know
[15:08] * Chipaca fixes
[15:08] :)
[15:10] narindergupta: try now
[15:11] Do I need to build again?
[15:12] Yes I can see opslib now
[15:12] Let me kill my controller and retry
[15:13] poor controller
[15:13] Chipaca, I know in microk8s that's the challenge
[15:16] Chipaca, metrics hooks don't work with controller state. We have checks for 'collect-metrics' but didn't implement the same check for meter-status-changed, so that might be a different bug/fix we should do.
[15:16] Chipaca, an update for 'is_restricted_context'
[15:16] right
[17:44] facubatista: i'm having an issue with my tests and logassert, where the tests pass on their own but fail when run in the whole suite
[17:44] facubatista: anything in particular i should look out for?
[17:47] facubatista: https://github.com/canonical/operator/compare/master...chipaca:more-heuristics-for-storage
[17:47] ideas welcome
[17:47] it might be something silly, looking at it for too long
[17:47] i'm going to take a break, get dinner, etc
[17:47] ttfn
[17:48] Chipaca, will check
[17:49] Chipaca, which logassert version do you have?
[17:53] facubatista: 5
[17:55] let's see if I introduced a bug there :)
[17:57] unpossible
[18:12] Chipaca, which is the specific test?
[18:13] facubatista: all of TestStorageHeuristics
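
For context on narindergupta's ops.lib errors earlier in this log, a short recap sketch, assuming requirements.txt lists both ops and the git+https ops-lib-k8s URL quoted above so the library is actually installed into the charm's venv.

# ops.lib.use() is not enough on its own: the ops.lib module has to be
# imported explicitly first, otherwise `ops` is an undefined name (the
# NameError above). The name, API version and author are the ones from
# the log and must match what the library itself declares.
import ops.lib

k8s = ops.lib.use("k8s", 0, "chipaca@canonical.com")
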
[18:18] Chipaca, I have a lot of other failures, I suspect because I'm getting messages of "yaml does not have libyaml extensions, using slower pure Python yaml", and tests don't expect that
[18:18] self.assertRegex(calls.pop(0), 'Using local storage: not a kubernetes charm')
[18:18] AssertionError: Regex didn't match: 'Using local storage: not a kubernetes charm' not found in 'juju-log --log-level DEBUG yaml does not have libyaml extensions, using slower pure Python yaml loader'
[18:19] facubatista: that means you haven't run the tests in quite a while :-)
[18:19] and it also means we probably should fix that
[18:20] Chipaca, yes and yes
[18:20] facubatista: 'apt build-dep python3-yaml' should get you the bits you need
[18:28] facubatista: in any case commenting out that log line should get you places
[18:29] Chipaca, shall I rebuild the VM?
[18:29] s/VM/venv/
[18:30] facubatista: i don't know what you're doing :)
[18:30] Chipaca, I just apt build-dep as you suggested, error still happening
[18:41] Chipaca, so, the difference I'm finding so far between running them all or running just a bunch is NOT logassert related
[18:42] Chipaca, when running just test.test_main.TestStorageHeuristics, all is fine
[18:43] Chipaca, but when running test.test_main, the test I'm supervising (test_fallback_to_current_juju_version__too_old) fails because
[18:43] FileNotFoundError: [Errno 2] No such file or directory: 'juju-log'
[18:43] facubatista: huh
[18:44] facubatista: so probably it's our log setup code
[18:44] that's not getting reset
[18:44] facubatista: nice pointer. i'll follow it after dinner :)
[18:44] Chipaca, the logger has all these handlers: [, , , , , ]
[18:45] Chipaca, ack, let me know if you need something else
[19:46] facubatista: where do you see those handlers?
[19:47] Chipaca, I printed the handlers where logassert is hooked
[19:47] added this in l.38 of env/lib/python3.8/site-packages/logassert/logassert.py :
[19:47] print("========= prv", logger.handlers)
[19:55] facubatista: i get those errors if i remove the reset_logging from test_log
[19:55] anyway, i give up
[19:55] EOD for me
[19:56] tomorrow shall bring new joys
[22:55] * facubatista eods
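
A sketch of the kind of per-test logging reset discussed at the end of the day: ops' main() attaches a root-logger handler that shells out to juju-log, and if it leaks into later tests (where juju-log is not on PATH) log calls fail with the FileNotFoundError shown above. reset_logging() here is illustrative, not the actual helper in the operator test suite.

# Illustrative per-test cleanup: drop any handlers left on the root logger so
# a stale juju-log handler from a previous test cannot leak into this one.
import logging
import unittest


def reset_logging():
    root = logging.getLogger()
    for handler in list(root.handlers):
        root.removeHandler(handler)
        handler.close()


class IsolatedLoggingTestCase(unittest.TestCase):
    def setUp(self):
        reset_logging()
        self.addCleanup(reset_logging)
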