[06:08] good morning [06:37] battery running low, brb [06:39] re [06:42] mvo: quiet day, eh? [06:45] zyga: good morning [06:45] zyga: yes, pretty quiet [06:46] zyga: did you enjoy your long weekend? [06:46] yes, though lucy had fever yesterday and we were worried [06:47] it's all meant to be quite different, my wife returned to work today and (before I got sick) I was supposed to take two weeks off [06:47] but reality struck and here we are :) [06:48] zyga: yeah, reality is annoying sometimes - but you can't argue with it - it's always right :/ [06:48] :D [06:48] yep [07:01] morning [07:04] good morning pstolowski [07:04] pstolowski: hope you had a nice long weekend [07:10] mvo: hey! yes, was very nice [07:25] * zyga had breakfast [07:25] Lucy still sleeping :) [07:25] I'm working on the udev integration branch [07:26] it's open for so long and should not take long to finish [07:27] zyga: nice! [07:27] hmm, some store EOFs [07:28] - Download snap "core18" (1885) from channel "stable" (unexpected EOF) [07:29] is unexpected EOF something we retry on? [07:29] pstolowski: unless you disagree I will (squash) merge 9152 [07:29] zyga: probably, might be worthwhile to check the logs [07:29] mvo: here it was during "snap install shellcheck" before we ran spread [07:29] so no logs [07:30] mvo: fantastic! yes, go for it, thanks [07:34] PR snapd#9152 closed: cmd/snapd-generator: generate drop-in to use fuse in container [07:49] store is really unhappy today [07:49] PR snapd#9102 closed: corecfg: add "system.timezone" setting to the system settings [08:25] jamesh: hi! minor comment to your snapctl is-connected PR [08:25] pstolowski: looking [08:31] pstolowski: would it actually be possible to call snapctl in a context where implicit plugs and slots existed? [08:34] jamesh: we have a 'hack' for implicit slots (AddImplicitSlots helper) that we explicitly call over SnapInfo of system snap in various places. if core had interface hooks for slots than I image it would be needed. 
but i'd leave it as is for now because it's not going to be needed any time soon [08:35] s/image/imagine/ [08:36] I suppose core has a configure hook where it could make a difference if it actually checked [08:40] jamesh: it has but the hook is "hijacked" and handled internally [08:43] * zyga moves to the office, lucy is now awake and handled by grandparents [08:53] mvo: there is one question from Samuele pending your input on https://github.com/snapcore/snapd/pull/9084, can you take a look? [08:53] PR #9084: o/snapstate: check disk space before creating automatic snapshot on remove (3/N) [08:54] pstolowski: sure, looking [08:58] pstolowski: replied [08:58] ty [09:47] mvo: uh [09:47] we have a very silly bug in master [09:48] mvo: patch coming right up [09:48] zyga: oh, ok [09:50] PR snapd#9171 opened: [RFC] "virtual" configuration for timezone <⛔ Blocked> [09:55] PR snapd#9172 opened: tests: update spread test for unknown plug/slot with snapctl is-connected [09:58] jamesh: i've updated snapctl spread test to test the new error message ^ [10:03] mvo https://github.com/snapcore/snapd/pull/9173 [10:03] PR #9173: cmd: compile snap gdbserver shim correctly [10:05] PR snapd#9173 opened: cmd: compile snap gdbserver shim correctly [10:19] zyga-mbp: thank you [10:19] I really wonder what tests will say [10:19] I never tried this [10:20] 9084 needs a second review if someone has some spare cycles [10:40] PR snapd#9120 closed: interfaces: add kernel-crypto-api interface [10:40] PR snapd#9165 closed: interfaces: add kernel-crypto-api interface - 2.46 [10:44] looking [10:45] PR snapd#9167 closed: many: correctly calculate the desktop file prefix everywhere [10:50] pstolowski https://github.com/snapcore/snapd/pull/9084#pullrequestreview-469207489 [10:50] PR #9084: o/snapstate: check disk space before creating automatic snapshot on remove (3/N) [10:50] brb, I'll fetch some water [10:52] 9081 also needs a second review [11:00] jdstrand: 8301 has conflicts now, could you please 
have a look? [11:13] pstolowski: just fyi, I uploaded a new snapd to groovy with the lxd generator fix so that should allow groovy lxd preseed images. could you please let the relevant people know (still needs some hours before it enters groovy proper, still building of course) [11:18] mvo: sure, thanks [11:36] * zyga fetched lunch instead [11:36] re [11:40] opensuse tumbleweed seems to be unhappy [12:46] error: cannot query the store for updates: got unexpected HTTP status code 503 via POST to "https://api.snapcraft.io/v2/snaps/refresh" [12:46] more store woes [12:49] pstolowski: ping [12:49] I have this in a debug shell [12:49] + lxd.lxc exec my-nesting-ubuntu -- snap set lxd waitready.timeout=240 [12:49] error: cannot perform the following tasks: [12:49] - Run configure hook of "lxd" snap (snap "lxd" has no "configure" hook) [12:50] nested snaps are all broken, not mounted [12:50] zyga: ok, that's weird [12:51] zyga: is it master? [12:51] yep [12:51] but seems random [12:51] https://paste.ubuntu.com/p/3YspQRfGvT/ [12:51] interesting failed unit [12:51] no snap mounted [12:52] I guess one idea would be to not run hooks on broken snaps [12:52] the snaps are on disk [12:52] looking at the mount units [12:53] there are no mount units?!? [12:53] did i break something with the systemd generator thing..? [12:54] maybe [12:54] looking [12:54] mvo: was #9152 green when it landed, or did you merge manually? [12:54] PR #9152: cmd/snapd-generator: generate drop-in to use fuse in container [12:55] fstab has only LABEL=cloudimg-rootfs / ext4 defaults 0 0 [12:55] but there's no /dev/disk/by-label [12:55] I guess that's because of a container [12:55] and also explains the failed unit [12:55] seems like a separate container bug [12:56] I don't see any evidence that the snapd generator ran [12:56] well [12:56] actually [12:56] I do see snap.mount [12:56] pstolowski: I thought it was, maybe I was mistaken [12:56] and nothing else [12:56] pstolowski: do you see errors now?
[12:57] what's the other file we should have created? [12:57] the file /run/systemd/container is present [12:57] and has the word "lxc" [12:57] I guess that's why nothing mounted btw [12:57] but it's funny [12:57] I guess terrible [12:57] that we INSTALLED snaps in broken state after mounting failed [12:57] mounting should have undone everything [12:58] changes and tasks from the container https://paste.ubuntu.com/p/VPH4DwbGvx/ [12:58] zyga: there should be drop-ins in /run/systemd/generator for all the snap mount units from /etc/systemd/system [12:59] zyga: are there mount units in /etc/../system ? [12:59] there are no drop ins [12:59] https://paste.ubuntu.com/p/ft8pr6qXrC/ [12:59] and there are no mount units [12:59] but we remove them in undo [13:00] although this makes no sense because "snap list" shows all the snaps installed and broken [13:00] this is tests/main/lxd [13:00] perhaps it's using older non-repackaged core18 + snapd [13:00] hmm hmm [13:00] zyga: ok if there are no mount units then the root cause is elsewhere, not in the generator [13:00] can you just spread 16.04 + tests/main/lxd [13:00] and see if that fails [13:01] pstolowski: well, mount units do get removed on undo [13:01] but I really don't know what to make of this [13:01] the generator only has the /snap sharing mount [13:01] but doesn't have the drop ins [13:01] ah [13:01] 1337.2.46~pre2 [13:01] well [13:01] 1337 is the repackaged [13:01] so... [13:01] hmmm [13:01] zyga, pstolowski standup [13:01] it seems broken [13:01] oh [13:01] joining [13:17] mvo: hi! certainly, I'd love to. thanks for the reviews/merges! :) [13:17] jdstrand: thank you! [13:20] mvo: fyi, the cups one should be good for re-review whenever people have a chance (though I'm going to merge master real quick). I'll quickly respond to any feedback there.
moving on to 'misc policy updates' then after, will go through any PR reviews that need me [13:21] jdstrand: could you try to look at https://github.com/snapcore/snapd/pull/7614 again [13:21] PR #7614: cmd/snap-confine: implement snap-device-helper internally [13:21] jdstrand: I pushed it forward [13:23] zyga: sure [13:23] zyga: main/lxd failed for me as well [13:23] good [13:23] so it's a real thing [13:23] (reproducible) [13:24] pstolowski: try reverting the generator changes to see if that does something [13:25] "corecfg: add "system.timezone" setting to the system settings" [13:25] nice! :) [13:25] jdstrand: snap regedit ... [13:25] ;-) [13:25] ah, I wonder if that lxd thing is what was causing my spread tests to fail... [13:25] zyga: heh [13:25] jdstrand: I think we may have broken master [13:26] zyga: ok will do just in case, although it would contradict what ijohnson said [13:26] pstolowski: sure - just a sanity check [13:26] oh it wasn't squash-merged [13:27] * jdstrand will keep an eye on commits to master for something to unbreak things then [13:27] zyga: yeah I saw it on my PRs last night before the generator stuff had landed [13:27] ah, sorry, I couldn't hear that exactly [13:27] that's good, maybe something environmental changed [13:36] jdstrand: and comments in that PR are a bit in a weird state, I could not reply to some [13:36] jdstrand: so please open new comments on anything that you find wrong [13:37] PR snapcraft#3165 closed: Update cmake plugin to support Ninja generator [13:39] pstolowski, ijohnson: interesting [13:39] in a debug shell look at the other container (my-ubuntu) [13:39] it has core installed as well [13:39] and guess what, core works [13:39] all the seeded snaps are broken [13:39] core is not seeded [13:40] does this make _any_ sense to you?
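For reference, the generator behaviour being checked for in this debugging session (drop-ins for the snap mount units, per PR 9152) can be sketched roughly as below. This is only a toy shell illustration: the real snapd generator is a compiled binary shipped with snapd, and the paths, unit name and drop-in content here are made up for the demonstration.

```shell
#!/bin/bash
# Toy sketch: in a container, write a container.conf drop-in under the
# systemd generator directory for every snap mount unit found under
# /etc/systemd/system. All paths are parameterised so the demo below can
# run against a fabricated tree; the drop-in content is illustrative only.
set -euo pipefail

generate_container_dropins() {
    local unit_dir=$1 generator_dir=$2 container_file=$3
    # Drop-ins are only needed when running inside a container.
    [ -f "$container_file" ] || return 0
    local unit name
    for unit in "$unit_dir"/snap-*.mount; do
        [ -e "$unit" ] || continue
        name=$(basename "$unit")
        mkdir -p "$generator_dir/$name.d"
        # The real drop-in adjusts the mount for fuse; illustrative content:
        printf '[Mount]\nType=fuse.squashfuse\n' \
            > "$generator_dir/$name.d/container.conf"
    done
}

# Demonstration against a fake root with one snap mount unit.
root=$(mktemp -d)
trap 'rm -rf "$root"' EXIT
mkdir -p "$root/etc/systemd/system" "$root/run/systemd/generator"
touch "$root/etc/systemd/system/snap-core-9289.mount"
echo lxc > "$root/run/systemd/container"

generate_container_dropins "$root/etc/systemd/system" \
    "$root/run/systemd/generator" "$root/run/systemd/container"

find "$root/run/systemd/generator" -name container.conf
```

On a healthy container this produces paths of the shape the spread test later greps for (.../snap-core-*.mount.d/container.conf); the missing drop-ins are exactly what the debug shell above turned up.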
[13:40] maybe what is really broken [13:40] is that /etc/systemd/system just doesn't have any mount units [13:40] maybe the image is broken to begin with [13:40] * zyga looks [13:42] pstolowski: confirmed [13:42] the image is busted [13:43] however it was made, it's wrong [13:43] pstolowski: should I expect to see mount units inside a squashfs lxd image? (in a container image downloaded by lxd) [13:43] I see the snaps and everything else [13:43] but the mount units are simply missing [13:44] as if it was seeded but then something filtered out the .mount units [13:44] pstolowski: to reproduce look at /var/snap/lxd/common/lxd/images/97c470e427c425cf2ec4d7d55b6f1397ea55043c518b194a58fc6b9da426f540.rootfs on /tmp/dupa type squashfs (ro,relatime) [13:44] cc stgraber ^ [13:44] mvo: fyi, PR 8301 is deconflicted [13:44] PR #8301: interfaces/many: deny arbitrary desktop files and misc from /usr/share [13:45] stgraber: it seems that 20.04.1 images that lxd pulls in are snap seeded in a broken way [13:45] stgraber: etc/systemd/system does not have any mount units [13:45] zyga: that'd be a bit concerning as the squashfs we use is supposed to be an identical filesystem to the cloud image [13:45] stgraber: can you double check that I'm not making some rookie mistake [13:46] I mounted the .rootfs as downloaded by lxd [13:46] and looked at /etc/systemd/system therein [13:46] it's this image in your case: http://cloud-images.ubuntu.com/releases/focal/release-20200804/ [13:47] are they all identical? [13:47] * zyga pulls the squashfs [13:47] they should be [13:48] a bit slow to pull, I'll let it do its thing [13:48] i wonder if they started preseeding them and something went wrong [13:48] the date says 4th of August [13:48] maybe it is before we had some fixes? [13:49] and it's just busted [13:50] zyga, yesterday's focal image that i installed looks fine here ... why would mount units be in the readonly squashfs though ?
[13:50] ogra_: IIRC because we pre-seed things [13:50] so they start faster [13:50] isn't the squashfs expanded anyway, it's just a template [13:50] it's not a snap [13:51] root@focal:~# ls /etc/systemd/system/*.mount|wc -l [13:51] 8 [13:51] ogra_: that's not what I'm seeing [13:51] root@focal:~# snap list|wc -l [13:51] 6 [13:51] what are the mount units? [13:51] I had 0 mount units [13:51] (though I'm still getting the image stgraber suggested) [13:51] root@focal:~# ls /etc/systemd/system/*.mount [13:51] root@focal:~# [13:51] bah [13:51] let me pastebin that [13:51] I saw the failure in a spread test that was just "lxd launch ..." things [13:51] thanks [13:52] https://paste.ubuntu.com/p/HJJ3NN9hxm/ [13:52] right [13:52] this is more like what I would expect [13:52] I think an image is broken then, the only question is why and how to fix it [13:52] stgraber@castiana:~/Downloads$ tar Jtvf ubuntu-20.04-server-cloudimg-amd64-root.tar.xz | grep etc/systemd | grep mount [13:52] that's what i got with "lxc launch ubuntu:20.04 focal" ... [13:52] stgraber@castiana:~/Downloads$ [13:52] zyga: also, it's from 4th Aug but started failing yesterday? [13:53] pstolowski: yeah, I think we are missing something [13:53] stgraber: 97c470e427c425cf2ec4d7d55b6f1397ea55043c518b194a58fc6b9da426f540.rootfs can we map that hash to something on cdimage?
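The sanity check being run by hand above (once an image rootfs is mounted or unpacked, count the snap mount units under /etc/systemd/system) can be written down as a small helper. This is a sketch only; the directory trees in the demonstration are fabricated stand-ins for real seeded and broken images.

```shell
#!/bin/bash
# Sketch: count snap mount units in an image rootfs. A preseeded image
# should have one per seeded snap revision; the broken lxd image discussed
# above had zero. The "seeded" and "broken" trees here are fake.
set -euo pipefail

count_snap_mount_units() {
    local rootfs=$1
    find "$rootfs/etc/systemd/system" -maxdepth 1 -name 'snap-*.mount' | wc -l
}

demo=$(mktemp -d)
trap 'rm -rf "$demo"' EXIT
mkdir -p "$demo/seeded/etc/systemd/system" "$demo/broken/etc/systemd/system"
touch "$demo/seeded/etc/systemd/system/snap-core18-1885.mount"

echo "seeded image: $(count_snap_mount_units "$demo/seeded") mount unit(s)"
echo "broken image: $(count_snap_mount_units "$demo/broken") mount unit(s)"
```

Against a real image you would point the helper at the squashfs mount point (e.g. the /tmp/dupa mount mentioned above) instead of a temp directory.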
[13:53] pstolowski: assuming it's using ubuntu:, the images are manually promoted by CPC IIRC, though indeed feels a bit old [13:53] zyga: I did it for you [13:53] zyga: that's the URL I gave you [13:54] ah [13:54] * zyga looks at one more idea [13:54] current ubuntu:focal is indeed 20200804 on cloud-images.u.c [13:54] ubuntu-daily:focal would be 20200814 [13:55] there's a pretty good chance that the cloud images would similarly be missing those files, it's really meant to be identical across the board [13:55] so you probably should be talking to CPC :) [13:56] PR snapd#9173 closed: cmd: compile snap gdbserver shim correctly [13:57] thanks mvo [13:57] sorry had to step out for IRL things [13:57] so is the lxd image just busted then ? [13:58] ijohnson: it seems so but unclear why [13:58] I noticed something buggy in the test [13:58] looking now [13:58] lxd.lxc launch --quiet "ubuntu:${VERSION_ID:-}" my-ubuntu [13:58] VERSION_ID is not defined anywhere [13:58] what does that fetch? [13:58] latest? [13:59] ah so you went from bionic to focal last week then [13:59] I think it's a bug and it was meant to fetch the one matching [13:59] so it's likely that this bug triggers another one [13:59] "ubuntu:" fetches the latest recommended LTS [13:59] that was bionic until 20.04.1 [13:59] now it's focal [14:00] zyga: I think we get VERSION_ID from /etc/os-release which gets sourced somewhere in the prepare for a spread run iirc [14:00] ijohnson: I doubt that [14:00] there was some code in that task.yaml [14:00] it's gone now [14:00] must have been killed by accident [14:00] and ${VERSION_ID:-} masked that [14:00] haha fair enough [14:00] it used to load version id from the host [14:00] because it was about alignment of the container to the system [14:00] as we copy stuff inside [14:01] zyga: i'm looking at ubuntu-20.04-server-cloudimg-amd64-root.tar.xz (also dated 4/08) is this as good as any other image there?
fwiw it is NOT preseeded, it's still seeding the old way [14:01] I see [14:02] I'll start by fixing that VERSION_ID [14:02] but it seems we have more bugs [14:02] pstolowski: can you focus on trying to understand what's going on in the inner layer [14:03] I'll send a patch fixing the symptom we see [14:05] ijohnson: you broke it ;D [14:06] * ijohnson runs and hides [14:06] zyga: was it my use the same version of lxd everywhere PR ? [14:06] d4a802934b1e62ec8924ea1c165a627695df5044 [14:06] that's the broken patch [14:06] anyway, fixes coming [14:07] zyga: haha to be fair though you approved the PR :-D [14:07] no hard feelings [14:07] at this point I'm beyond shame [14:07] I made so many bugs [14:07] yeah bugs happen [14:07] no worries [14:08] zyga: so what's actually broken? do i need to keep looking? [14:08] pstolowski: that test is broken when invoked with a focal image [14:08] pstolowski: on a xenial host [14:08] pstolowski: it should work [14:08] pstolowski: to reproduce just grab a xenial VM [14:08] pstolowski: install lxd the same way [14:08] pstolowski: and boot focal [14:09] pstolowski: that should work and give you a working focal container with working snaps [14:09] zyga: ok so it's a test issue, not a real problem? [14:09] pstolowski: it's a real problem [14:09] pstolowski: the test just now stumbled onto it [14:09] pstolowski: it was not exercising that before [14:09] ah ok [14:09] pstolowski: but effectively started doing that because of another issue and recent promotion of focal [14:10] pstolowski: PR coming up, I've explained it there as well [14:11] just confirming that my patch fixes it [14:17] zyga: so the right thing for this lxd test is for a xenial host to run a xenial container and a focal host to run a focal container ? [14:17] ijohnson: yes [14:17] there's a comment there explaining why [14:18] zyga: I wonder if we should expand the lxd test to have more variants i.e.
xenial host launches focal, bionic and xenial [14:18] oh [14:18] * ijohnson goes to read that comment [14:18] yes but then we should not just blindly copy snapd [14:18] but I agree [14:18] ah yes [14:18] I see [14:18] see, if we built the snapd snap and used that as part of our tests we could avoid this [14:18] yes [14:18] I'm sure we could [14:18] it's just a limitation of the current setup [14:18] all proper builds will work [14:19] (properly built rpm will work on rpm distros) [14:19] geez that test is sloooooow [14:19] mmmm also this test is disabled for uc20 too [14:19] (network) [14:19] we should re-enable that and make sure lxd works on uc20 [14:19] I agree [14:19] maybe worth splitting lxd test [14:19] and thinking of a cache for lxd images if that helps [14:19] FYI https://github.com/snapcore/snapd/pull/9174 [14:19] PR #9174: tests: fix lxd test wrongly tracking 'latest' <⚠ Critical> [14:20] opened while local test is still going [14:20] so likely it works now [14:20] but I want to break now as my wife just returned from her first day of work after Lucy was born [14:20] I'll work later in the evening, to focus on bug triage [14:20] I think it's still broken though [14:20] just got this [14:21] https://paste.ubuntu.com/p/yJ6HThPdN6/ [14:21] PR snapd#9174 opened: tests: fix lxd test wrongly tracking 'latest' <⚠ Critical> [14:21] * zyga afk [14:21] zyga: approved [14:21] ijohnson: let's see it pass [14:21] well it "looks correct" :-) [14:22] I agree :) [14:22] oh [14:22] it passed [14:22] woooot [14:22] Ok [14:22] I'm really afk now [14:22] zyga \o/ [14:23] jdstrand: one quick question in 8301 [14:30] ijohnson: do you have any idea on why chooser triggering is not working in recover mode? the trigger file in /run is not there but I still didn't check if it's not being generated at all, or it's not being moved from initramfs to the real system [14:31] cmatsuoka: oh hey I tried that and it seemed to work for me with all edge snaps just now [14:31] oh really?
then it must be something I'm doing locally, let me recheck [14:41] cmatsuoka: yes it took like a minute to trigger the chooser, etc. but it definitely wasn't 10 minutes or longer before the chooser came up [14:46] mvo: answered [14:47] PR snapcraft#3251 opened: build providers: honour http proxy settings for snapd [14:52] ijohnson: indeed it works with edge, so probably my remastering of initramfs is the source of the problem [14:57] mvo, hey [14:58] failover tests failing on master as well [14:58] cachio: hey [14:58] cachio: oh no! can you please paste the full error log? [14:59] mvo, https://paste.ubuntu.com/p/7c56xfYGwC/ [14:59] I just merged before running the tests [14:59] so my master is up to date [15:02] PR snapcraft#3252 opened: snapcraft: use system certificates by default for https requests [15:04] Aug 18 14:55:53 localhost.localdomain snapd[3776]: Aug 18 14:55:28 localhost.localdomain systemd[3584]: snapd.service: Failed at step EXEC spawning /snap/snapd/x1/usr/lib/snapd/snapd: Exec format error [15:05] hm, but maybe this is just the message that it failed correctly [15:10] cachio: the "external:" image, how was this built? [15:10] cachio: also, what is the output of "journalctl -u snapd-failover.service"?
[15:11] mvo, ubuntu-image snap [15:11] cachio: I need to be afk for some minutes but I guess my question is, what commands do I need to run to trigger this error myself (to reproduce :) [15:12] * mvo is afk for some minutes but will read scrollback [15:14] let me run again [15:15] mvo, wget https://storage.googleapis.com/spread-snapd-tests/images/pc-amd64-16-beta/pc.img.xz [15:15] sudo kvm -snapshot -smp 2 -m 1500 -net nic,model=virtio -net user,hostfwd=tcp::8022-:22 -serial mon:stdio pc.img [15:15] then run [15:16] export SPREAD_EXTERNAL_ADDRESS=localhost:8022 [15:16] ./tests/lib/external/prepare-ssh.sh localhost 8022 [15:16] spread -debug external:ubuntu-core-16-64:tests/core/core-to-snapd-failover16 [15:16] mvo, following those steps you can reproduce 100% [15:19] zyga: i'm playing with this broken lxc container. in this state i cannot remove or refresh broken snapd snap, a dead end [15:19] ijohnson, mvo: I ran some extra tests here and apparently what makes the difference in the reboot from chooser is using a local, unchanged core20 snap vs the edge asserted core20 [15:20] zyga: interestingly, i could manually snap install lxd and core18 snaps from seeds, and after that snap set lxd waitready... woks [15:20] *works [15:20] ijohnson, mvo: erm, wait. that's strange.
let me do it again [15:21] zyga: also saw this, not sure if it means anything https://pastebin.ubuntu.com/p/sJvDhxp5gy/ [15:41] ijohnson, mvo: the actual problem with the server connection timeout is caused by snapd snap injection, not sure exactly why, but it's local, so sorry for the noise [15:55] pstolowski: re [15:56] pstolowski: that's a known issue, I can explain but I think it is neither new nor a problem for this test [15:56] pstolowski: today it is late so I won't dig deeper [15:56] I will focus on bug triage and a small thing I was working on earlier [15:57] let's attack this tomorrow morning [15:57] zyga: yeah i think i'll stop here as well [15:58] https://github.com/snapcore/snapd/pull/9174 is making progress, though 16.04-64 is still in progress [15:58] PR #9174: tests: fix lxd test wrongly tracking 'latest' <⚠ Critical> [16:16] PR snapd#9175 opened: tests: find -ignore_readdir_race when scanning cgroups [16:18] PR snapcraft#3253 opened: extensions: prepend the snapd glvnd path [16:18] Saviq: o/ I noticed your question yesterday but I was off [16:18] Saviq: can you grab me tomorrow - we can discuss that then [16:32] ijohnson: commented and asked a question on https://bugs.launchpad.net/snapd/+bug/1889092 [16:32] Bug #1889092: getent does not support extrausers on uc18 [16:34] zyga: interesting point, perhaps getent works fine with uc20 or even uc22 [16:36] zyga: also your lxd test PR failed again same problem [16:37] https://github.com/snapcore/snapd/pull/9174#issuecomment-675586745 [16:37] PR #9174: tests: fix lxd test wrongly tracking 'latest' <⚠ Critical> [16:38] no [16:38] not the same problem [16:38] mvo: we may need to revert pawel's generator [16:38] mvo: now that we test it better [16:38] it doesn't work on 16.04 [16:38] it was just never tested correctly [16:38] zyga: sorry how do you know that it is the generator? 
the generator never ran [16:38] or ran and did nothing [16:38] zyga: your PR tests failed with the same configure hook problem [16:38] this test passed when we were testing against a different, more recent container (by accident) [16:39] ijohnson: https://github.com/snapcore/snapd/pull/9174/checks?check_run_id=998982860 shows a different error [16:39] PR #9174: tests: fix lxd test wrongly tracking 'latest' <⚠ Critical> [16:39] Run configure hook of "lxd" snap (snap "lxd" has no "configure" hook) [16:40] I see this failure: [16:40] 2020-08-18T15:30:31.5864620Z Sanity check that mount overrides were generated inside the container [16:40] 2020-08-18T15:30:31.5864910Z + MATCH '/var/run/systemd/generator/snap-core-.*mount.d/container.conf' [16:40] 2020-08-18T15:30:31.5865202Z + lxd.lxc exec my-ubuntu -- find /var/run/systemd/generator/ -name container.conf [16:40] 2020-08-18T15:30:31.5865318Z grep error: pattern not found, got: [16:40] which suggests that what I wrote above is likely [16:40] zyga: ok so 16.04 failed but 20.04 failed same way with the configure hook [16:40] zyga: I was only looking at the 20.04 failure [16:40] I see [16:40] well [16:40] 20.04 is broken because the image is broken :) [16:40] hey bionic is working well then [16:40] we are now testing an image matching the host [16:40] with pawel's generator [16:40] so it's totally expected that focal is broken [16:41] that's the issue pawel was looking into just a moment ago [16:41] and that's what we immediately saw [16:41] the PR doesn't change that, it just makes the host match the container [16:41] right ok so this makes sense then [16:41] yeah [16:41] I think we need to fix more than one issue here [16:41] and if this is something we badly need for 2.46 we need to delay [16:41] so focal is totally broken because of broken images, and xenial is broken because the generator didn't run [16:42] ijohnson: if focal is preseeded then yes [16:42] if it's not pre-seeded then I think something else is at play
[16:42] mvo: ^ please ack if this is a release blocker [16:42] I'll copy this log to pawel [16:42] so that he knows about it [16:43] * cachio lunch [16:44] * zyga EODs === ijohnson is now known as ijohnson|lunch [17:42] PR snapd#9176 opened: cmd/snap: use ⬏ instead of ↑ where applicable [18:03] PR snapcraft#3253 closed: extensions: prepend the snapd glvnd path [18:31] zyga: hey, I was just looking at the scrollback [18:31] mvo: mmm [18:31] zyga: so the generator needs reverting? [18:32] mvo: it might [18:32] mvo: I have a hunch it doesn't work on 16.04 [18:32] zyga: oh, ok [18:32] mvo: the test was flaky, it never ran on a 16.04 container [18:32] mvo: we should discuss with pawel when he reads this tomorrow [18:32] mvo: I just wanted to let you know [18:32] mvo: as this seems to be relevant to 2.46 [18:33] zyga: totally [18:33] zyga: let's tackle this in the morning when pawel and you and me are around [18:33] indeed [18:33] zyga: thanks for the heads up [18:33] :) [18:58] PR snapcraft#3219 closed: meta: detailed warnings for resolution of commands [19:06] hey hey, I am building a ubuntucore image with some snaps targeting raspi4, is there some best practice to test the image in some virtualisation/emulation ? qemu seems not to be trivial. the ubuntucore docs propose `multipass` but I assume this is then x86 only as long as I do not run it on a raspi, right ? [19:09] ijohnson|lunch https://listed.zygoon.pl/17533/measuring-coverage-of-shell-scripts === ijohnson|lunch is now known as ijohnson [19:09] dariball for raspi4 you pretty much need a raspi4 [19:10] zyga-mbp: nice!
:-) [19:13] I'm really curious now how much of our shell scripts actually get run :-) [19:14] ijohnson so was I :) [19:14] although I have a second agenda here, that's a bit more annoying to accomplish [19:14] I want to write unit tests for makefiles [19:15] but the way make executes stuff makes that hard [19:15] this was a step towards that [19:15] ijohnson an aggregate mode would be nice [19:15] you mean a unit test for a makefile itself ? [19:15] something like bashcov --cumulative ... [19:15] yes [19:16] I wrote a build system a while ago [19:16] interesting what's the use case for it? [19:16] and I really want to get to 100% coverage and documentation [19:16] https://github.com/zyga/zmk/ [19:16] documentation is still sparse [19:16] but testing is pretty solid [19:16] zyga-mbp: means using a raspi4 with a regular ubuntu where I then run multipass, right ? [19:16] I know some things are not tested as I didn't write any tests yet (for certain modules) [19:17] but for those that are, I want to see if anything is missing [19:17] dariball no, I mean you need to run it on real metal [19:17] the best way forward is to automate deployment [19:17] we're doing that for tests of snapd and ubuntu core itself [19:17] but it's not something that's easy or fun [19:17] dariball CPU architecture is just one aspect of the problem [19:18] you really need an emulated "raspberry pi" virtualized but nobody made one that's not broken [19:18] so deployment on the real thing is really the only viable option [19:18] there are hardware add-ons that allow you to deploy to a SD card and boot a PI with that card [19:18] yeah this was my impression until now ... 
tried some qemu stuff, but thanks for confirming my impression [19:18] some companies manufacture them [19:19] some people make more or less successful implementations of that idea [19:19] IIRC canonical even has the author of the most successful one as a test engineer now :) [19:19] but anyway, it's not something I can recommend [19:19] unless you have a budget to spend [19:19] ijohnson as for testing [19:19] ijohnson I wrote unit tests in make [19:19] for make [19:20] ijohnson for example, how to compile a library written in C++ [19:20] https://github.com/zyga/zmk/blob/master/examples/libhello-cpp/Test.mk [19:20] but there's some complexity involved in making sure all combinations are covered [19:20] interesting I remember you mentioning zmk before [19:21] it's slowly growing [19:21] I paused all development while I was ill as working was hard as-is [19:21] anyway :) [19:21] i think bashcov is more interesting for us [19:23] ijohnson (although to be fair, that was a smoke test for an example, unit tests are more complex as they try to cover all the features, some being optional) [19:24] yeah bashcov seems very interesting in combination with our spread-shellcheck [19:26] for spread task.yamls the problem will also be the fact that it just streams a bunch of text and doesn't let us trace much [19:26] but we could find ways around that [19:26] we could patch spread to generate something that mimics the tests tree [19:26] and has actual scripts for everything [19:26] that source each other or what not [19:26] then bashcov could trace the whole execution [19:27] please read my post, play with the code, the idea came to me yesterday [19:27] and I implemented a working copy half an hour ago [19:27] I'm sure there's room for improvement [19:27] (bashcov handles ".
foo.sh" sourcing today) [19:28] time to slack now [19:28] * zyga-mbp goes away [20:32] PR snapd#9177 opened: tests: remove support for ubuntu 19.10 from spread tests [21:07] PR snapd#9178 opened: secboot: document exported functions [22:33] PR snapcraft#3254 opened: tests: restrict colcon / ros2-foxy test to amd64 & arm64