[00:39] PR snapd#9139 opened: interfaces/many: miscellaneous updates for strict microk8s
[04:00] PR snapd#9140 opened: vendor: update github.com/kr/pretty to fix diffs of values with pointer cycles
[05:59] morning
[06:27] mvo: hey
[06:29] good morning mborzecki
[06:38] hm the forum gained a canonical branded bar at the top of the page
[06:40] and a roll down 'products' button, nice
=== oerheks is now known as DT_oerheks
=== DT_oerheks is now known as oerheks
[06:48] * mvo hugs jamesh for pr9140
[06:49] :-) Having tests that don't hang when they fail is nice
[06:49] jamesh: +100 for that
[07:00] PR snapd#9140 closed: vendor: update github.com/kr/pretty to fix diffs of values with pointer cycles
[07:03] morning
[07:06] good morning pstolowski
[07:07] pstolowski: great news, the PR for kr/pretty that prevents circular data structure hangs got merged upstream and james did a PR to update our vendor.json
[07:07] mvo: whaaaaat? unbelievable
[07:07] that's great
[07:08] pstolowski: yeah :)
[07:08] pstolowski: https://github.com/kr/pretty/pull/64
[07:08] PR kr/pretty#64: diff: detect cycles in the values being compared
[07:12] kinda annoying my PR was hanging there for 1.5y with no traction, oh well
[07:12] pstolowski: I hadn't seen yours when I wrote mine
[07:13] jamesh: no worries, good it's fixed
[07:14] I noticed the maintainer had merged something at the end of July, so I pinged him in my PR and got lucky.
[07:14] slightly sad that we had to duplicate some work but the outcome is really good, this bug was super annoying in the past. I'm very happy
[07:14] yes
[07:15] my version also detects if you have different structure to the cycles in the two values being compared too
[07:15] e.g. A->A compares different to A->B->A
[07:25] jamesh: ah, nice
[07:48] good morning
[07:48] mvo could you please advise on https://github.com/snapcore/snapd/pull/9125
[07:48] PR #9125: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[07:48] it makes master much more gree
[07:48] *green
[07:48] and it is green itself
[07:48] there are two separate fixes there
[07:49] I can open two new PRs that won't be green but we can review separately and force-merge
[07:49] or I can do something else, up to you
[07:51] zyga: new ones are fine
[07:51] mvo ack, I'll make that happen
[08:07] the first fix is https://github.com/snapcore/snapd/pull/9141
[08:07] PR #9141: tests: cope with ghost cgroupv2 <⚠ Critical>
[08:07] mborzecki ^ mvo ^ (it will require force merging)
[08:10] this is a low-priority fix https://github.com/snapcore/snapd/pull/9142
[08:10] PR #9142: sandbox/cgroup: detect dangling v2 cgroup
[08:11] PR snapd#9141 opened: tests: cope with ghost cgroupv2 <⚠ Critical>
[08:11] PR snapd#9142 opened: sandbox/cgroup: detect dangling v2 cgroup
[08:13] zyga-mbp: looking now
[08:14] zyga-mbp: 9125 is still marked as draft, should that get promoted
[08:14] not yet, I'm removing those commits and adding one more there
[08:14] I'll open it in a moment
[08:16] mvo https://github.com/snapcore/snapd/pull/9125 is now force-pushed to remove the commits split out to the two new branches
[08:16] PR #9125: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[08:16] and is now open
[08:20] zyga-mbp: one comment, thanks for this one
[08:20] zyga-mbp: also remarkably complicated :(
[08:22] zyga-mbp: again, thanks for diving so deep
[08:26] looking
[08:27] mvo which comment should I look like?
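The kr/pretty change discussed above (PR kr/pretty#64) is what stops failing gocheck assertions from hanging: pretty.Diff previously recursed forever on values containing pointer cycles. The Go sketch below only illustrates that behaviour; it is not snapd test code, and the node type and its fields are invented for the example.

```go
// Illustrative sketch of the kr/pretty cycle-detection fix: before PR #64,
// pretty.Diff on values containing pointer cycles could recurse without end,
// so a failing assertion hung instead of printing a diff.
package main

import (
	"fmt"

	"github.com/kr/pretty"
)

type node struct {
	Name string
	Next *node
}

func main() {
	// a points directly at itself: A -> A
	a := &node{Name: "a"}
	a.Next = a

	// b cycles through an intermediate node: A -> B -> A
	b := &node{Name: "a"}
	b.Next = &node{Name: "b", Next: b}

	// With cycle detection this terminates and reports that the cycle
	// structure differs; previously it would not return.
	for _, d := range pretty.Diff(a, b) {
		fmt.Println(d)
	}
}
```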
[08:27] look at
[08:27] sorry
[08:27] I EODd late last night, should make a 2nd coffee
[08:29] I will make coffee and review https://github.com/snapcore/snapd/pull/8960
[08:29] PR #8960: o/snapstate,servicestate: use service-control task for service actions (9/9)
[08:39] back with freshly ground coffee :)
[08:39] and into the review from pawel
[09:25] hi, i'm working on packaging a cmake-based project in snap format. I'm referring to mosquitto since they are similar, both needing a daemon and utils. mosquitto works well, but mine does not. I see snapcraft does not put the files cmake built into stage and prime. I guess my CMakeLists.txt is somewhat non-standard, but I wonder if there is a way to work around it. thanks.
[09:32] shuduo I'd ask this question in #snapcraft in a few hours
[09:33] zyga, or i ask there?
[09:33] yeah, ask there but wait for a few hours till the developers are all there
[09:33] OK
[09:33] thanks
[09:49] zyga: do we still need #9125 with #9141 ? or do we need both?
[09:49] PR #9125: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[09:49] PR #9141: tests: cope with ghost cgroupv2 <⚠ Critical>
[09:50] pedronis we need both, I separated them for review per mvo's advice
[09:50] zyga: I'm not talking about #9142
[09:50] PR #9142: sandbox/cgroup: detect dangling v2 cgroup
[09:51] yes, we should get all three but 9142 is less important - we are immune by lucky coincidence
[09:51] PR snapd#9136 closed: bootstate20: rm bootState20Modeenv, pass around modeenv directly <⛔ Blocked>
[09:51] so 9142 can land later
[09:51] ok
[09:56] PR snapd#9134 closed: boot, o/devicestate: TrustedAssetUpdateObserver stubs, hook up to gadget updates
[09:58] https://github.com/snapcore/snapd/pull/9141 needs a 2nd review and force-merge
[09:58] PR #9141: tests: cope with ghost cgroupv2 <⚠ Critical>
[09:58] so does https://github.com/snapcore/snapd/pull/9125
[09:58] PR #9125: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[09:58] mborzecki, pstolowski: ^ if you have a moment
[10:04] morning folks
[10:07] mvo: a remark about "snap recovery", it seems to produce errors like this: error: cannot list recovery systems: cannot list recovery systems: access denied
[10:07] we should find out what's causing the nesting, and remove it
[10:09] pedronis: uh, I will add a test, thanks for noticing
[10:10] I don't know if one comes from snapd and one from the client or something else
[10:10] brb
[10:10] zyga: hey, with your systemd experience, how do you feel about changing our written mount units to have "[Install]\nWantedBy=local-fs.target" ?
[10:11] hmmmm, I don't know yet
[10:11] let me think and check and get back to you
[10:11] no worries
[10:13] mvo: also a comment about "snap reboot" and related: https://github.com/snapcore/snapd/pull/9021#issuecomment-672783319
[10:13] PR #9021: snap: implement new `snap reboot` command
[10:14] pedronis: thanks
[10:26] PR snapd#9143 opened: snap: fix repeated "cannot list recovery system" and add test
[10:29] mvo: should we close #9110 ?
[10:29] PR #9110: preseed: cherry-pick #8704, #8709, #9088 (2.45)
[10:33] pedronis: yes, let me do that
[10:36] PR snapd#9110 closed: preseed: cherry-pick #8704, #8709, #9088 (2.45)
[10:44] zyga: did you take a look at the arch failure on #9125 ?
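A minimal sketch of how the doubled "cannot list recovery systems: cannot list recovery systems: access denied" prefix reported at 10:07 can arise. As noted at 10:10, it is not yet clear whether the duplication happens in snapd, in the client, or elsewhere, so the function names and the daemon/client split below are purely hypothetical.

```go
// Hypothetical illustration (not snapd's actual code) of a repeated error
// prefix: both a daemon-side handler and a client-side command wrap the
// underlying error with the same context string.
package main

import (
	"errors"
	"fmt"
)

func listRecoverySystemsDaemon() error {
	// stand-in for a low-level failure
	err := errors.New("access denied")
	return fmt.Errorf("cannot list recovery systems: %v", err)
}

func listRecoverySystemsClient() error {
	if err := listRecoverySystemsDaemon(); err != nil {
		// wrapping again with the same prefix duplicates it
		return fmt.Errorf("cannot list recovery systems: %v", err)
	}
	return nil
}

func main() {
	fmt.Println("error:", listRecoverySystemsClient())
	// error: cannot list recovery systems: cannot list recovery systems: access denied
}
```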
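For reference on the 10:10 question about adding "[Install]\nWantedBy=local-fs.target" to the mount units snapd writes, the sketch below just renders what such a unit could look like as text. It is not snapd's unit-writing code, and the snap name, paths and revision are made up.

```go
// Sketch: render a hypothetical snap mount unit that carries an [Install]
// section with WantedBy=local-fs.target, as discussed in the channel.
package main

import "fmt"

func mountUnit(what, where, fstype string) string {
	return fmt.Sprintf(`[Unit]
Description=Mount unit for example-snap

[Mount]
What=%s
Where=%s
Type=%s

[Install]
WantedBy=local-fs.target
`, what, where, fstype)
}

func main() {
	// example-snap and revision 1 are invented for the illustration
	fmt.Print(mountUnit("/var/lib/snapd/snaps/example-snap_1.snap",
		"/snap/example-snap/1", "squashfs"))
}
```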
[10:44] very odd
[10:44] PR #9125: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[10:44] ijohnson I did not yet
[10:44] looking
[10:44] but I just re+1'd that PR for you
[10:44] would be great to sudo land that
[10:44] yeah
[10:44] checking log now
[10:45] I saved it here anyways https://pastebin.ubuntu.com/p/QZBFr9QshT/
[10:45] I can't reproduce on ubuntu, but I've also never seen this test fail before
[10:45] hmm
[10:45] my first hunch was enoent -> it didn't mount
[10:46] it's not the dynamic linker for sure
[10:46] but it's mounted
[10:46] there are a few other things we run that would fail this way
[10:46] too bad it doesn't say _which_ exec fail
[10:46] *failed
[10:46] let me look at the source quickly
[10:47] god this 16" is so much better than the x240
[10:47] almost comfortable in bed
[10:47] damn, that's snap-exec
[10:48] snap-confine failed to execute snap-exec
[10:48] oh weird
[10:48] how could that happen
[10:54] no idea yet
[10:54] actually
[10:54] all snaps are mounted
[10:54] so no, no idea
[11:19] mvo could you please force-merge https://github.com/snapcore/snapd/pull/9125
[11:19] PR #9125: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[11:19] https://github.com/snapcore/snapd/pull/9141 needs a 2nd review
[11:19] PR #9141: tests: cope with ghost cgroupv2 <⚠ Critical>
[11:19] zyga: sure
[11:19] and we should be back on our feet
[11:20] thank you
[11:22] PR snapd#9125 closed: tests: fix issues related to restarting systemd-logind.service <⚠ Critical>
[11:53] zyga, hi
[11:53] hey
[11:55] PR snapcraft#3245 opened: specifications: minor cleanup for package-repositories
[12:01] zyga, did you see the message I left in the standup doc?
[12:01] yes, but I did not investigate yet
[12:01] nice, thanks
[12:02] I'll investigate a bit now
[12:02] cachio it may go away with the fixes that are incoming
[12:02] the one that just landed
[12:02] I don't have spread2 around, can you pull master and see if it happens now?
[12:03] zyga, sure
[12:03] thank you
[12:03] I need a moment, back in 30
[12:16] zyga, it passed now
[12:22] PR snapd#9144 opened: bootloader: add helper for creating a bootloader based on gadget
[12:22] pedronis: ^^
[12:31] cachio I think the fix for linger affected it
[12:31] as that test - the failing cgroup one - otherwise nuked systemd --user for root
[12:31] zyga, good to know, thanks for that!!
[12:31] cachio: sorry I don't think I followed up but do we have a problem with the uc20-recovery test ?
[12:32] ijohnson, sorry, yes, it failed yesterday on the pi3 when I did beta validation for 2.46
[12:32] cachio: do you have logs?
[12:32] and then the debug is showing the same problem as before
[12:32] ijohnson, yes, 1 sec
[12:32] thanks
[12:33] mvo could you please merge https://github.com/snapcore/snapd/pull/9141
[12:33] PR #9141: tests: cope with ghost cgroupv2 <⚠ Critical>
[12:33] we should be good now
[12:33] nice good work zyga!
[12:33] we can start restarting tests, just a few at first, to see where we are
[12:33] nothing like a bad red master to spoil Monday
[12:33] it's Wed so let's just keep pushing now
[12:34] mborzecki: reviewed #9144
[12:34] PR #9144: bootloader: add helper for creating a bootloader based on gadget
[12:35] PR snapcraft#3242 closed: tests: migrate legacy classic patchelf tests to spread
[12:37] ijohnson, https://paste.ubuntu.com/p/kXWV6tQDcN/
[12:37] thanks cachio I'll have a look
[12:37] tx
[12:39] pedronis: btw.
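On the 10:46 remark that the log doesn't say which exec failed: snap-confine itself is written in C, but the general idea of naming the target binary in the error instead of a bare "execv failed" can be sketched in Go. The path below is deliberately nonexistent and only used for the illustration.

```go
// Sketch: report which binary failed to execute, so an ENOENT is easier to
// diagnose than a bare "execv failed: No such file or directory".
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runVerbose(path string, args ...string) error {
	cmd := exec.Command(path, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// name the binary in the error message
		return fmt.Errorf("cannot execute %q: %v", path, err)
	}
	return nil
}

func main() {
	// hypothetical missing helper, not a real snapd path
	if err := runVerbose("/usr/lib/snapd/snap-exec-that-does-not-exist"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```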
[12:39] if you want to look, i've made the changes for install boot config right here: https://github.com/bboozzoo/snapd/commit/9b684dfadba34afd7199891ba8bfe0d6ec56b1d6 mostly works, there's one case for uboot that ijohnson probably knows more about
[12:39] * ijohnson goes to go hide from uboot bootloader details
[12:41] cachio: ah very interesting
[12:41] cachio: so part of what happened is that we failed to finish seeding in recover mode there
[12:42] cachio: then we hit that rather weird/interesting bug with the recursive bind mount output from findmnt which broke cwayne's server
[12:43] mborzecki: if I remember correctly that should be fine, is some test failing?
[12:43] the logic checks for .conf in any case, there is just different behavior if .conf is empty
[12:43] vs not
[12:43] for uboot
[12:45] ijohnson, yes
[12:46] ijohnson, I think it is not breaking the server anymore
[12:46] that's nice
[12:46] because now I see that the output at some point is cut
[12:46] so they implemented something on their side for that
[12:52] PR snapd#9141 closed: tests: cope with ghost cgroupv2 <⚠ Critical>
[12:52] thank you
[13:31] PR snapcraft#3246 opened: plugins v2: quote python packages argument for pip
[13:46] PR snapcraft#3245 closed: specifications: minor cleanup for package-repositories
[13:58] hmm
[13:58] + test-snapd-busybox-static.busybox-static echo hello
[13:58] execv failed: No such file or directory
[13:58] grep error: pattern not found, got:
[13:58] this failed again
[13:59] mborzecki does test-snapd-busybox-static work for you on arch?
[13:59] zyga: i have a fix, just a sec
[13:59] oh even better
[13:59] thanks
[14:05] ah so that is a real issue
[14:07] PR snapd#9145 opened: boot: track trusted assets during initial install, assets cache
[14:12] PR snapd#9146 opened: packaging/arch: use external linker when building statically
[14:13] mborzecki: ^ is needed to make master all green again right ?
[14:13] arch is required and that test is always failing right ?
[14:15] ijohnson: yes
[14:16] thanks
[14:17] got to go for now
[14:17] not fun
[14:17] thank you
[14:45] pstolowski: some comments in #9084
[14:45] PR #9084: o/snapstate: check disk space before creating automatic snapshot on remove (3/N)
[14:45] pedronis: ty
[14:59] * cachio lunch
[15:25] cachio: does that uc20-recovery issue happen reliably such that I could reproduce it with a live rpi3 ?
[15:49] PR core#117 opened: extrafiles/writable-paths: make /etc/default/crda writable
[15:53] PR snapd#9146 closed: packaging/arch: use external linker when building statically <⚠ Critical>
[15:54] pedronis: so I'm looking right now at transitioning from recover to run automatically via any reboot, and we currently detect in devicestate Manager that we are in recover mode via the modeenv, but regardless of how we set that up, at a minimum we need to set bootenv, but we will also need to set the modeenv too
[15:55] the issue is that we use the modeenv to determine what mode we are in from devicestate
[15:55] so I'm thinking that a pre-req to this is to have devicestate decide what mode it is in by 1) just the presence of a modeenv file, and 2) deciding what mode via the kernel command line
[15:55] ijohnson: why that?
[15:55] for ephemeral modes the modeenv is ephemeral
[15:55] oh wait
[15:55] I'm probably missing something
[15:55] the run mode modeenv always has run in it
[15:56] if that's not the case we have a bug
[15:56] ok, but even if it's an ephemeral/tmpfs modeenv, we still write it as "mode=recover" from the initramfs there
[15:56] ah but nevermind the modeenv doesn't matter
[15:56] nvm me the bootenv is all we care about
[15:56] yes, but those modeenv go away each time
[15:56] yes yes yes I had just confused myself
[15:56] all makes sense now
[15:56] it's ok, I was worried we had a bug
[15:57] well fear not, we do have a bug ... points at bugs.launchpad.net/snapd :-D
[15:57] just not in this particular area
[16:01] ijohnson, I could reproduce it twice
[16:02] cachio: ack I will try to reproduce it myself with a debug shell I can poke around with
[16:02] not sure if it's 100% of the time
[16:02] nice
[16:02] ijohnson, this is the image
[16:03] ijohnson, https://storage.googleapis.com/spread-snapd-tests/images/pi3-20-beta/pi.img.xz
[16:03] cachio: nice thanks, do you use tf or do you flash it to your local pi3 ?
[16:04] ijohnson, sudo dd if= of=/dev/mmcblk0 bs=4M oflag=sync status=noxfer
[16:05] thanks cachio
[16:05] ijohnson, yaw
[16:13] ijohnson: you should try to look at #9145 if possible, I looked at it, I think it's mostly good, but there are some implicit assumptions that I find a bit confusing (especially the way they might or might not affect the update case)
[16:13] PR #9145: boot: track trusted assets during initial install, assets cache
[16:14] pedronis: sure
[16:14] thx
[16:17] pedronis: have you seen https://forum.snapcraft.io/t/module-blacklisting-interface/19171 or is it on your queue ?
[16:22] I can put it on my queue, but I will not get to it for a while
[16:29] pedronis: that's fine, just wondering if you had it on your queue already
[16:38] PR core18#155 closed: hooks: fix broken symlink /etc/sysctl.conf.d/99-sysctl.conf
[17:51] PR snapcraft#3240 closed: cli: only issue warning when checking for usage of sudo
[17:53] PR snapd#9086 closed: many: reorg cmd/snapinfo.go into snap and new client/clientutil
[17:57] re
[18:00] * zyga jumps into customer bug analysis
[18:06] mountedfilesystemwriter tests are happy with the new resolvedSrc stuff I'm working on, yay
[18:07] mvo I won't make the morning desktop call tomorrow, I had to move my rehab from Friday to Thursday 9:00 AM
[18:11] PR snapcraft#3232 closed: build providers: install apt-transport-https
[18:14] zyga: ok
[18:14] zyga: I think you mean physio?
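A sketch related to the 15:54–15:57 exchange above: on UC20 the kernel command line set up by the boot chain (snapd_recovery_mode=) says which mode the system booted into, while the run-mode modeenv always carries mode=run and recover/install use an ephemeral modeenv. The code below is a standalone illustration of reading that parameter, not snapd's devicestate logic, and the fallback when the parameter is absent is an assumption of this sketch.

```go
// Sketch: determine the UC20 boot mode from the kernel command line.
package main

import (
	"fmt"
	"os"
	"strings"
)

func modeFromCmdline(cmdline string) string {
	for _, field := range strings.Fields(cmdline) {
		if strings.HasPrefix(field, "snapd_recovery_mode=") {
			return strings.TrimPrefix(field, "snapd_recovery_mode=")
		}
	}
	// an absent parameter is treated as run mode in this sketch
	return "run"
}

func main() {
	raw, err := os.ReadFile("/proc/cmdline")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read kernel command line:", err)
		os.Exit(1)
	}
	fmt.Println("boot mode:", modeFromCmdline(string(raw)))
}
```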
[18:14] I really don't know how it is translated
[18:15] :D
[18:15] I think 'rehabilitation' is also valid but I could be wrong
[18:15] it's not the kind of rehab where rich people try not to kill themselves with alcohol and drugs
[18:15] zyga: I'm probably wrong (not a native speaker etc) but when I hear rehab I usually think of alcohol or something like this
[18:16] zyga: maybe some native speaker can enlighten us :)
[18:16] zyga: anyway I know what you mean
[18:16] it's the kind where they teach you to slowly make progress and get back in shape
[18:16] plus a lot of the time is actually spent on the scar itself, weird but supposedly important so that it doesn't hurt later
[18:16] aaaanyw
[18:17] I can show you my scar but it's in the same spot as before, just colored differently, guess that's poor luck in the body scar trophy department ;)
[18:17] hmm
[18:17] Aug 12 14:32:53 localhost snapd[1507]: udevmon.go:149: udev event error: Unable to parse uevent, err: cannot parse libudev event: invalid env data
[18:17] we may need to update our udev go code
[18:18] and improve error handling so that we can at least get a dump of the event in hex
[18:18] ijohnson ^ how would you describe what we were talking about above?
[18:18] (which word would you use?)
[18:18] PR snapd#9147 opened: [RFC] Support for gadget.yaml "$kernel:" style references (2/N)
=== lfaraone_ is now known as lfaraone
[18:22] zyga: sorry I lost some context
[18:22] ijohnson no worries, not important, just wondering how to say something in English
[18:22] Oh you mean the appointment you have?
[18:22] is "rehabilitation" understood as more than avoiding booze/drugs?
[18:22] yes
[18:22] like special exercises to get back to basic mobility
[18:23] In the US we would call it "physical therapy" or just PT
[18:23] aaah
[18:23] thanks,
[18:23] PT it is
[18:23] though I'm sure Graham would call it something else ;-)
[18:23] But that's just for recovering back to where you were; if it's something from birth or a long-standing thing you do for a really long time then we call it "occupational therapy"
[18:23] zyga haha yeah that's why I prefaced with "in the US"
[18:24] ah, more terms I never used before
[18:24] I wonder what Brits actually call that
[18:24] it's such a cool aspect of our work that we have all those people from different places on one team :)
[18:26] zyga: we call it physio
[18:26] ijohnson: I was wondering if "going to rehab" would be something that is more about drugs or more about physio
[18:26] ijohnson: this is how this started
[18:27] ijohnson: anyway, not *really* important :)
[18:28] back to musl
[18:29] mvo yeah rehab usually has that connotation in casual usage but I've heard it used in a medical setting as well, though it's less common than calling it physical therapy
[18:29] thanks!
[18:29] * mvo feels smarter now :)
[18:30] np I agree with zyga it's really interesting to see how these things are called around the world
[18:30] ijohnson: totally!
[18:31] I think I'll call it a day, one more small victory on kernel-dtb, now I quickly need to call it a day before I see spread failures
[18:31] * mvo waves
[18:35] o/
[18:46] * cachio -> kinesiologist
[20:23] cachio: aroud ?
[20:24] *around
[20:24] ijohnson, yes
[20:24] cachio: what is the command to set up an external system so I can use it with spread ?
[20:24] something from nested.sh I seem to remember...
[20:24] you need to create the users?
[20:25] or the env var to set ?
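A sketch of the 18:18 idea of dumping the unparseable uevent in hex, so that "cannot parse libudev event: invalid env data" failures can be diagnosed after the fact. parseUevent below is a stand-in rather than snapd's udevmon code, and the sample payload is fabricated.

```go
// Sketch: when a uevent fails to parse, log a hex dump of the raw buffer so
// the bad payload can be inspected offline later.
package main

import (
	"encoding/hex"
	"errors"
	"fmt"
	"log"
)

// parseUevent is a placeholder for a real netlink uevent parser.
func parseUevent(raw []byte) (map[string]string, error) {
	return nil, errors.New("cannot parse libudev event: invalid env data")
}

func handleUevent(raw []byte) {
	env, err := parseUevent(raw)
	if err != nil {
		// include the raw payload in hex so the failure can be reproduced
		log.Printf("udev event error: %v\n%s", err, hex.Dump(raw))
		return
	}
	fmt.Println("parsed uevent:", env)
}

func main() {
	handleUevent([]byte("libudev\x00\xfe\xed\xca\xfe broken payload"))
}
```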
[20:25] I have a rpi3 I want to run the external uc20 recovery test with
[20:25] ok, in that case
[20:26] you need
[20:26] (and I have a locally created user)
[20:26] ./tests/lib/external/prepare-ssh.sh 22
[20:26] this will create the users
[20:27] then
[20:27] export SPREAD_EXTERNAL_ADDRESS=
[20:28] awesome thanks
[20:28] then spread external:ubuntu-core-20-arm-32:tests/core/uc20-recovery
[20:28] ijohnson, yaw
[20:30] cachio: btw whatever happened to the tf backend for spread ?
[20:30] was that completed but never merged ?
[20:30] ijohnson, it needs reviews
[20:30] and approval
[20:31] cachio: I see, would be good if we could get that in sometime soon
[20:31] I already asked for reviews and approval
[20:36] PR snapcraft#3246 closed: plugins v2: quote python packages argument for pip
[20:51] cachio: I reproduced the failure with the rpi3 I am using, investigating now
[20:51] interestingly, our new `snap debug seeding` returns `seed-completion: 3195h26m10.26s` on this system hehe
[20:51] ijohnson, nice
[20:51] hehe
[20:52] the customers won't be happy waiting that long
[20:57] haha no I don't think so
[20:57] I think I have an idea what's going on here
[20:57] this happens specifically on the pi where we don't have a real-time clock
[21:23] ijohnson, on the pi4 it didn't fail
[21:23] cachio: right because the pi3 runs slower than the pi4
[21:23] is it the same case?
[21:24] ijohnson, ahh
[21:24] cachio: no, the pi4 is faster
[21:24] makes sense
[22:19] PR snapd#8942 closed: tests: support different images on nested execution
[23:47] vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv88888888888888888
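A sketch of why `snap debug seeding` can report a seed-completion time of thousands of hours on the pi3, following the 20:57 observation that the board has no real-time clock: a timestamp recorded before NTP corrects the clock sits far in the past, so a duration computed against the corrected clock looks absurd. Both timestamps below are invented purely for illustration.

```go
// Sketch: a duration computed across a large clock correction looks like the
// operation took months, similar to the seed-completion value in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// time recorded while the clock still held a stale default, before NTP
	seedStart := time.Date(2020, time.April, 1, 0, 0, 0, 0, time.UTC)

	// time recorded after NTP corrected the clock
	seedDone := time.Date(2020, time.August, 12, 14, 30, 0, 0, time.UTC)

	// the difference suggests seeding took months rather than minutes
	fmt.Println("seed-completion:", seedDone.Sub(seedStart))
}
```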