[07:48] morning all
[08:23] * Chipaca bbiab
[08:28] morning
[08:36] jam: how're you doing?
[08:36] pretty good, how about yourself? get any better rest?
[08:37] yeah :-) overslept today but hey
[08:44] hm, just realised i can make the logic a tiny bit simpler
[08:44] * Chipaca goes for it
[08:45] hm, simpler but not clearer
[08:45] * Chipaca undoes
[09:23] jam: i'm moving the bug review to tomorrow
[09:23] ok
[09:23] you could move it to Friday :)
[09:23] I could
[09:23] but I won't
[09:23] :-)
[09:24] * Chipaca is also off Friday, but might have a masochist streak
[09:54] Chipaca, is there any explicit Juju testing that you would like me to do? I'm comfortable with your patch, but I can do a version test if you want
[09:56] jam: i'm happy with it too. If we had concrete steps to reproduce an issue with low 2.7.x then I might ask to look at them, but we don't...
[09:56] so i think the current approach is the best we can do
[09:57] Chipaca, so I'm aware of 'timeit'; I was wondering if there was anything better than that for Python benchmarks?
[09:58] jam: I don't know of anything. I do feel like there has to be something :)
[10:00] pytest mentions benchmarking but there doesn't seem to be stdlib stuff. I do like go test's benchmark utility
[10:01] nose and pytest both seem to have something
[10:02] Chipaca, pytest doesn't like our test suite because you called TestMain "Test*" even though you didn't inherit from TestCase
[10:02] :)
[10:16] jam: how so?
[10:16] * Chipaca pokes
[10:16] ahh
[10:16] Chipaca, anything named "Test*" gets run as a test, but it can't instantiate because it is an ABC
[10:16] silly pytest
[10:16] https://gist.github.com/jameinel/c10f81ad708513eafa489a3cdb0ce046
[10:17] Chipaca, ^^ seems like CSafeLoader is still important
[10:17] I grabbed a modest sized YAML file, and benchmarked loading it.
2ms vs 25ms
[10:17] jam: 'modest'
[10:17] :)
[10:17] Chipaca, it is postgresql's config.yaml: https://api.jujucharms.com/charmstore/v5/postgresql-207/archive/config.yaml
[10:17] that aligns with what i remember when poking at snapcraft's usage of it too
[10:18] It isn't a book, but it isn't a trivial 2-line thing either.
[10:18] ie 8-10x speedups
[10:18] 'modest'
[10:18] Chipaca, I'm actually rather surprised that PyYAML doesn't just use C if it is available
[10:18] having to have try/except/getattr in your code because they seem iffy about it seems odd
[10:19] jam: you lost me there, what try/except/getattr?
[10:19] jam: you mean to do Loader=?
[10:20] Chipaca, yeah. Having the API to your library be "wrap everything in a check to see if you actually have what you want" rather than providing that
[10:21] jam: agreed, that api is weird
[10:21] jam: if you don't like try/except nor getattr, you could use yaml.__with_libyaml__
[10:23] jam: what did you use to build that benchmark btw?
[10:24] Chipaca, pip3 install --user pytest-benchmark and then wrote a single simple file and "pytest test_bench.py"
[10:24] neat
[10:24] Chipaca, it has pretty output, so I figured it's something reasonable to start with
[10:24] I'm still not a huge fan of pytest's magic variable names
[10:24] but their rendering is very nice
[10:24] pretty output + super simple usage == win
[10:25] pretty output + super simple usage - so much magic omg == …
[10:25] i still can't decide that one :)
[10:26] Chipaca, it goes back to discovery/surprise. If I write a function and it magically starts testing it, 'wow'. But then I have TestMain that gets run by accident
[10:26] or a parameter that suddenly means something different because I changed a variable name
[10:27] If you're deep in pytest it is probably reasonable and avoids boilerplate.
When you're not, then it is inscrutable and you don't know what 'caplog' implies
[10:28] Chipaca, so does that mean by default yaml.safe_load and yaml.load *don't* use the optimized backend?
[10:28] jam: correct
[10:28] ugh
[10:29] jam: load's default loader is FullLoader
[10:31] jam: hopefully subclassing CSafeLoader to add tuples works :)
[10:31] dunno
[10:32] anyway. had a meeting, now some exercise before lunch and more meetings
[10:33] Chipaca, it passes the test suite :)
[10:33] jam: huzzah :)
[10:42] Chipaca, interestingly, that is a 20ms speedup per hook for Postgresql, since we load metadata.yaml on every hook (at least it isn't every call to Juju, but it's still a fair bit)
[10:43] Chipaca, this feels very much in violation of the API rule "the obvious way should be the correct way"
[10:44] jam: 'this is YAML. we don't do "obvious" here.'
[10:45] ok, now yes i go
[10:47] PR operator#325 opened: ops/main.py: Use CSafeLoader
[10:47] Chipaca, ^^ when you're back
[10:53] jam: i'm back
[10:54] run has been postponed by rain :)
[10:56] I'm baaa-aaak
[10:57] https://youtu.be/QOkSvLqkafU?t=7
[11:00] * Chipaca concurs
[11:01] PR operator#326 opened: make abc TestMain private to help pytest
[13:25] Chipaca, I'm available for standup whenever you are. Why do YAML and Python versions have to be so hard...
[13:28] jam: let's do it
[14:26] jam: so
[14:26] jam: 1. make sure you have the appropriate python-dev on your system
[14:26] jam: pip install cython
[14:26] erm, that's (2)
[14:27] jam: 3. make sure you nuke any previous yamls from the pip cache: find ~/.cache/pip/ -name 'PyYAML*' -delete
[14:27] jam: 4. make sure you also have libyaml-dev etc (apt build-dep pyyaml or sth)
[14:28] jam: now a pip install PyYAML should rebuild your wheel and get you the good stuff
[14:28] nuking the pip cache is a bit of a pain for 3 different versions but at least we know
[14:29] hi everyone - still working on the Prom charm and found something that I'd like to discuss...
so tl;dr when I'm doing a `model.pod.set_spec` inside of a hook (let's say, `config-changed`), it applies ONLY after the config-changed hook exits, not in the middle of the process
[14:29] jam: it only nukes the wheels, the source will still be there (and you want it rebuilt anyway)
[14:29] so, that makes it impossible to poll the application for the new status (which relies on the pod spec params, a ConfigMap for example)
[14:29] is this an expected behaviour or should I raise a bug about that?
[14:29] Chipaca, $ pip3 uninstall yaml
[14:29] Traceback (most recent call last):
[14:29] ...
[14:29] File "/home/jameinel/dev/operator/.venv/3.8/lib/python3.8/distutils/__init__.py", line 1, in <module>
[14:29] from distutils import dist, sysconfig
[14:30] ImportError: cannot import name 'dist' from partially initialized module 'distutils' (most likely due to a circular import) (/home/jameinel/dev/operator/.venv/3.8/lib/python3.8/distutils/__init__.py)
[14:30] (
[14:30] * Chipaca covers his eyes and hides
[14:31] vgrevtsev: I think that's Juju's behaviour (we call pod-spec-set immediately)
[14:31] :(
[14:32] any ideas or suggestions how we can work around this in the charms?
[14:32] vgrevtsev, I do know that it is intentional behaviour. You're welcome to file a bug about it; I have discussed it a bit, but it is good to know why you need it and lend your voice to the discussion
[14:32] jam: ah... alright. I will file a bug against Juju then. Thanks!
[14:32] * vgrevtsev thinks about what he can do to work around this
[14:33] vgrevtsev, how are you depending on a ConfigMap of another pod that you are creating, but then not setting?
[14:33] not another...
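The behaviour vgrevtsev describes can be sketched with a toy model (pure Python, not real Juju or ops code; `ToyJuju` and its methods are made up for illustration): the spec passed to pod-spec-set is recorded during the hook but only applied once the hook exits, which is why polling the application from inside `config-changed` sees stale state.

```python
class ToyJuju:
    """Toy stand-in for Juju's hook transactionality -- not real Juju code."""

    def __init__(self):
        self.applied_spec = None   # what Kubernetes would actually see
        self._pending_spec = None  # recorded by pod-spec-set, not yet applied

    def pod_spec_set(self, spec):
        # Recorded immediately, but only applied after the hook returns.
        self._pending_spec = spec

    def run_hook(self, hook):
        hook(self)
        # The pending spec is applied only once the hook has exited.
        if self._pending_spec is not None:
            self.applied_spec = self._pending_spec
            self._pending_spec = None


juju = ToyJuju()

def config_changed(j):
    j.pod_spec_set({"foo": "baz"})
    # Inside the hook the running pod still has the old config,
    # so polling the application here would see stale state.
    assert j.applied_spec is None

juju.run_hook(config_changed)
assert juju.applied_spec == {"foo": "baz"}  # applied only after the hook
```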
[14:33] Chipaca, fortunately I can just rm .venv and try again, but pip being broken from inside a virtualenv is annoying
[14:33] jam: I haven't seen that one myself, fwiw
[14:34] vgrevtsev, so, you're running the charm in the -operator pod, you are calling pod-spec-set to create a different pod (for the application), and then what ConfigMap entry exists for that pod that you didn't just set from pod-spec-set?
[14:35] (note that I'm not saying you're wrong in any way, just trying to understand what is going on)
[14:35] jam: so my use case is: I'm deploying a pod which has some ConfigMap, and then I'd like to re-configure this pod (that's what `config-changed` is for, right?). So I'm setting a new PodSpec against my application pod with the new ConfigMap options
[14:35] then, I tell my app via API to reload its config and try to identify whether the config has changed or not
[14:36] the problem here is, the config will never be in its new state, because the PodSpec has not actually been updated
[14:37] so, I can exit the config-changed handler (so the ConfigMap will apply), but nobody will then tell the application to reload its state
[14:40] Chipaca, worked for 3.6 and 3.7; for 3.8 I get warnings about "char*" vs "unsigned char*" but I *think* it worked...
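A quick way to check whether a rebuilt PyYAML wheel actually picked up libyaml, and the `Loader=` fallback pattern discussed earlier in one sketch (the YAML snippet here is a made-up example, not postgresql's actual config.yaml):

```python
import yaml

# True when PyYAML was built against libyaml (the C extension);
# Chipaca's suggested alternative to try/except or getattr.
print("libyaml available:", yaml.__with_libyaml__)

# The getattr fallback from the Loader= discussion: prefer the
# C-backed safe loader when present, else the pure-Python one.
SafeLoader = getattr(yaml, "CSafeLoader", yaml.SafeLoader)

text = """
options:
  version:
    default: 12
    description: major version to install
"""
doc = yaml.load(text, Loader=SafeLoader)
print(doc["options"]["version"]["default"])  # -> 12
```

This keeps working on installs without libyaml, at the cost of silently falling back to the slower loader.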
[14:40] vgrevtsev, if you actually changed the pod spec, then Juju will tell K8s to schedule the new spec, causing your pod to get stopped and a new one started
[14:42] interesting thing, but sometimes it doesn't happen; for example, if you're changing only the ConfigMap part, it does not lead to pod re-creation
[14:42] but if you change the `containerPort`, it will shut the pod down and create a new one, that's true
[14:43] vgrevtsev, so we had a discussion in the past about what a charm should do, and how that acts transactionally wrt other Juju model operations
[14:44] vgrevtsev, eg, 'relation-set' doesn't propagate values to other agents until the hook returns, and that is very much intentional
[14:44] pod-spec-set was seen a bit as something the charm does that then needs a similar transactionality
[14:44] however, I've come to see pod-spec-set as more like "systemd restart"
[14:44] which isn't under the same transaction as the hook
[14:44] I'd be happy to have a way to forcibly restart/rebuild/... the pods.
[14:45] That actually would solve my problem
[14:45] Now I can see that pod-spec-set has been invoked, but the pod was left intact
[14:47] so that leaves the application in an inconsistent state, as the charm config says `foo=baz`, while the app still thinks `foo=bar`, for example :)
[14:47] (while k8s's ConfigMap stays `foo=baz`, as it got updated by pod-spec-set)
[14:49] says*
[15:03] spec_path.write_text(json.dumps(spec))
[15:03] i wonder why that's done that way around :)
[15:04] Chipaca, vs json.dump(open(spec_path)) ?
[15:04] jam: or with spec_path.open(...) …
[15:04] yeh
[15:51] jam: fwiw, https://bugs.launchpad.net/juju/+bug/1882976
[16:17] vgrevtsev: as a (probably sucky) workaround, you could defer and then when you come back the change should be applied (but it'll be 5 minutes later)
[16:17] was thinking about putting the reload logic in the update-status hook btw
[16:19] anyhoo, soft-eod from me, will check in later
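The two styles from the [15:03] exchange side by side, for reference: serialize-then-write in one call versus streaming into an explicitly opened handle. Both land the same bytes on disk; the spec contents and path here are made up.

```python
import json
import tempfile
from pathlib import Path

spec = {"containers": [{"name": "postgresql", "ports": [{"containerPort": 5432}]}]}
spec_path = Path(tempfile.mkdtemp()) / "spec.json"

# The style being discussed: dump to a string, then write it in one call.
spec_path.write_text(json.dumps(spec))

# The alternative raised in the chat: stream into an opened file object.
with spec_path.open("w") as f:
    json.dump(spec, f)

# Either way, the round trip recovers the original structure.
assert json.loads(spec_path.read_text()) == spec
```

For small documents like a pod spec the difference is purely stylistic; `write_text(json.dumps(...))` avoids managing a file handle, while `json.dump` avoids building the whole string in memory.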