[00:28] mwhudson: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1863255
[00:28] Bug #1863255: Programs installed in Snap format do not detect the keyboard
[00:33] amurray: good morning
[00:34] hey mwhudson :)
[00:35] yes i see "apparmor="DENIED" operation="connect" profile="snap.chromium.chromium" pid=41956 comm="pool" family="unix" sock_type="stream" protocol=0 requested_mask="send receive connect" denied_mask="send connect" addr=none peer_addr="@/home/mwhudson/.cache/ibus/dbus-FTeGlGGx" peer="unconfined"" too
[00:44] PR snapd#8141 opened: interfaces/desktop-legacy: ibus socket path has moved in focal
[00:51] PR snapd#8141 closed: interfaces/desktop-legacy: ibus socket path has moved in focal
[00:59] hah I should read scrollback from all channels before diving into stuff - jdstrand already sent a PR for this on friday (PR #8139)
[00:59] PR #8139: interfaces/{desktop-legacy,unity7}: adjust for new ibus socket location
[01:05] PR snapd#8139 closed: interfaces/{desktop-legacy,unity7}: adjust for new ibus socket location
[07:08] morning
[07:08] hey mborzecki - good morning
[07:10] mvo: hey
[07:10] mvo: how was your weekend?
[07:18] good morning!
[07:18] mvo: hey,
[07:18] mvo: I merged a small patch last night
[07:18] mvo: I was thinking we should dput that patch into focal
[07:18] mvo: as it breaks snap X input for some users
[07:19] mvo: the patch in question is in https://github.com/snapcore/snapd/pull/8139
[07:19] PR #8139: interfaces/{desktop-legacy,unity7}: adjust for new ibus socket location
[07:19] mvo: it consists of extra permissions to talk to the current ibus api over dbus
[07:19] er. unix
[07:19] (not dbus, sorry)
[07:27] mborzecki: hey, sorry for the delay, was reviewing #8078. weekend was good, a bit windy yesterday and tonight, another storm here
[07:27] PR #8078: daemon: support resuming downloads
[07:27] mborzecki: how was yours?
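The denial above shows the ibus socket moving from its old location to `~/.cache/ibus/` on focal. The fix in PR #8139 presumably adds an AppArmor rule to the desktop-legacy and unity7 interfaces along these lines (a sketch of the shape of the rule, not the exact text from the PR):

```
# allow connecting to the ibus daemon at its new focal socket path (sketch)
unix (connect, receive, send)
    type=stream
    peer=(addr="@/home/*/.cache/ibus/dbus-*"),
```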
[07:27] zyga: hey, let me see
[07:28] zyga: aha, I see
[07:28] mvo: let me know if I can help
[07:28] mvo: I think I cannot dput to the archive (even snapd)
[07:28] mvo: but if you want I can try
[07:29] zyga: should be trivial, let me just finish my current stuff and then I will do it
[07:31] zyga: hey
[07:31] hey maciek :)
[07:32] wow that ibus thing, saw the emails
[07:32] guess a lot of folks were unhappy :/
[07:32] :D
[07:32] yeah, it's still the dev release but I guess people noticed
[07:33] zyga: afaik it broke chromium too
[07:45] brb
[07:53] mborzecki: hey
[07:53] mborzecki: on Friday I was working on modernizing services code
[07:53] mborzecki: and porting it over to syncdir
[07:54] mborzecki: I will post some of that today
[07:54] I noticed I got a review from jamie so I'll park everything and handle that first
[07:55] mborzecki: oh and a small one for you https://github.com/snapcore/snapd/pull/8134 :)
[07:55] PR #8134: interfaces: use commonInteface for desktopInterface
[08:09] PR snapd#8134 closed: interfaces: use commonInteface for desktopInterface
[08:10] PR snapd#8127 closed: interfaces/cpu-control: allow to control cpufreq tunables
[08:10] 8078 needs a second review, then it can go in
[08:11] zyga: uploaded to focal
[08:12] mvo: thank you so much!
[08:16] the download API has evolved, I see
[08:25] * mwhudson installs snapd from https://launchpad.net/ubuntu/+source/snapd/2.43.3+git1.8109f8/+build/18717296
[08:26] yay i can type again
[08:26] haha, same here :)
[08:39] morning
[08:43] pstolowski: hey!
[08:52] hey pstolowski
[08:52] good morning pawel!
[09:03] hey guys, on focal my snapped firefox (and actually some other app prior) lost its gtk runtime… discarding ns helped https://pastebin.canonical.com/p/yZNy7mNjrv/ - but is there a known problem?
[09:03] anything else I can dig out?
[09:06] zyga: FYI ↑
[09:06] Saviq: 1-2-1 now
[09:14] nw
[09:54] Saviq: hmmm
[09:54] Saviq: it might be, perhaps, let me look
[09:55] hmm
[09:56] not sure, we have one thing that fixes a class of bugs like that that's not enabled by default
[09:56] so without extra digging I cannot say with certainty that the issue you experienced is covered by that change
[09:56] Saviq: I'm thinking about 8089
[09:57] this changes the default on an experimental setting
[09:57] Saviq: I'll try to get it ready for 2.44
[09:57] zyga: it's not like I have a reproducer, but it did happen twice on two different snaps for me
[09:57] I'll let you know if it's back, maybe we can dig live
[09:58] thank you
[09:58] Saviq: if you want you can also set the config option
[09:59] Saviq: snap set core experimental.robust-mount-namespace-updates=true
[10:00] #8078 needs a 2nd review
[10:00] PR #8078: daemon: support resuming downloads
[10:00] pedronis: going through it now
[10:26] reviewed with some questions
[10:26] I also requested a review from jamie
[10:27] mvo: ^ fyi
[10:28] zyga: ta
[10:50] mvo: to be clear I don't think it needs to be blocked on jamie given that there is no special bypassing of the usual auth checks, I answered some of the comments there
[10:57] pedronis: ta, i'll add some tests around the range header now, that was indeed under-tested
[10:57] PR core20#21 closed: Add bash-completion support
[11:19] from the forum `> I run a once annual update` why would someone do this to themselves?
[11:20] mborzecki: clearly it is when their asteroid comes in close proximity with earth
[11:22] otoh users seem to have come up with some bizarre ways to work around the auto update, maybe we should just allow disabling it and default to auto update? then any support requests start with -> please update to the latest version
[11:24] mborzecki: on that thread, do we still have the refresh timer?
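The workaround zyga suggests above can be sketched as a short script. Assumptions: "firefox" is just an example snap name, and `/usr/lib/snapd/snap-discard-ns` is where Ubuntu's packaging ships the namespace-discard helper; the guard makes it a no-op on systems without snapd.

```shell
#!/bin/sh
# Sketch of the mount-namespace workaround discussed above.
if command -v snap >/dev/null 2>&1; then
    # opt in to the experimental behaviour that PR 8089 enables by default
    sudo snap set core experimental.robust-mount-namespace-updates=true
    # throw away the snap's current mount namespace; it is rebuilt on next run
    sudo /usr/lib/snapd/snap-discard-ns firefox
    status="applied"
else
    status="skipped (snapd not available)"
fi
echo "$status"
```

Discarding the namespace is the same step Saviq reports as helping ("discarding ns helped"); the experimental flag is what PR 8089 later turns on by default.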
[11:25] zyga: not sure we ship it anymore
[11:27] brb, more tea
[11:31] pedronis: hey, a quick question about the download client api (client/snap_op.go). For Client.Download() we currently pass "SnapOptions", for the download with resume we need to add peekHeader and resumeToken, should I add a new DownloadOptions that embeds SnapOptions or rather add the two fields to SnapOptions?
[11:55] re, sorry, some interrupt at home
[11:55] back now
[11:57] mvo: probably a new struct
[11:57] if you look, the server is also a bit like that
[11:58] mvo: are you pushing anything more to download-resume atm?
[12:01] pedronis: I'm done with download-resume for now
[12:01] pedronis: was looking at the client bits
[12:05] mvo: I pushed some small changes to it
[12:05] if you are okay with it I will support merging download as is
[12:06] we can ask for a security review later, I think the answers given by samuele are sufficient
[12:06] pedronis: thank you! I'll have a look
[12:06] pedronis: and will add the new struct in my other PR
[12:18] mvo: do you understand what this is talking about: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1861648 ?
[12:18] Bug #1861648: When booting 20.04 an 'ld-2.23.so' process consumes 100% CPU for minutes
[12:20] cachio: hey
[12:20] PR snapd#8142 opened: client: add support for "ResumeToken", "HeaderPeek" to download
[12:21] pedronis: I suspect that is fontconfig
[12:21] pedronis: at least that's my gut feeling
[12:27] pstolowski, hey
[12:28] pedronis, mvo: I saw that bug and I came to the same conclusion - perhaps the way we linked fontconfig could be optimized and we are not using the dynamic linker correctly (lots of relocations?)
[12:28] cachio: hi, did you find anything about kill-timeout with preseed test on focal? i've been debugging it, added a bunch of debug statements to the test and .sh helpers; it seems to be always killed at the same spot at the end of prepare, but it doesn't make sense.. when i'm dropped in the shell i can do the next operation of the test (actual preseeding) without problem
[12:29] pedronis: I replied in 1861648
[12:29] thx
[12:29] zyga: I replied in the bug, I suspect it's really the generation of either v6 or v7 or something is off in 20 and there is a v8 that we are not aware of
[12:30] mvo: yeah, could be a good idea to reproduce it
[12:32] mvo: perhaps it would be good to ask about their system as well
[12:32] mvo: I can understand slow cache gen
[12:32] mvo: but it doesn't seem to go to "minutes"
[12:32] mvo: more like 20 seconds
[12:32] mvo: that is: we could be running the same thing over and over, for each snap (perhaps)
[12:32] mvo: or it could be a particularly low-end PC
[12:32] mvo: or we have a gazillion symbols
[12:32] mvo: or all at once
[12:35] pstolowski, I was testing it a bit on friday
[12:35] pstolowski, I am going to run with the profiler now to see the system resources
[12:36] because it completes the prepare and then gets stuck
[12:36] I moved the prepare as part of the execute
[12:36] and it completes all that code and a bit more and then gets stuck
[12:36] not sure why
[12:36] yet
[12:37] cachio: i see; what's weird though is when I run the critical operation (snap-preseed command) manually in the shell after it times out, it works
[12:37] cachio: thanks for looking into it!
[12:37] pstolowski, np
[12:38] cachio: something must have changed, cause it worked a few days earlier, and i ran it several times before
[12:39] pstolowski, I updated the image
[12:40] aah
[12:41] apt update/upgrade
[12:41] pstolowski, just that
[12:52] cachio: yeah, but on focal that means lots of updates atm, maybe something unexpected has changed
[12:54] pstolowski, yes, I know
=== ricab_ is now known as ricab|brb
=== ricab|brb is now known as ricab_
[13:15] Hello there, i am having some issues with the latest spotify snap, it does not want to start. And if i check the journalctl i see the following output: https://pastebin.com/wTiE8jB6
[13:15] Hello
[13:15] o/
[13:15] BlackDex: can you tell me the output of snap version please
[13:16] BlackDex: I fixed part of the stuff you see on https://github.com/snapcore/snapd/pull/8133
[13:16] PR #8133: cmd/snap-confine: allow snap-confine to load nss libs
[13:16] BlackDex: let me read the rest though
[13:16] snap/snapd: 2.43.3-1 - series: 16
[13:16] BlackDex: are you on arch?
[13:16] zyga: Yes
[13:17] right, this is where I saw this problem before
[13:17] having said that, I don't know if the errors there are strongly related to the eventual crash / failure of spotify
[13:17] mborzecki: ^ can you please check what happens on your arch system
[13:17] mborzecki: can you run spotify
[13:18] mborzecki: and perhaps cherry pick the patch into the AUR package
[13:19] BlackDex: perhaps also report a bug with this pastebin as attachment
[13:20] at the snapcraft launchpad?
[13:20] BlackDex: launchpad.net/snapd please
[13:28] done
[13:28] https://bugs.launchpad.net/snapd/+bug/1863613
[13:28] Bug #1863613: spotify fails to load (Trace/breakpoint trap (core dumped))
[13:28] BlackDex: thank you, we'll check it out
[13:29] I do see a somewhat similar report
[13:29] that is for openSUSE
[13:29] refer to it, perhaps it is the same issue
[13:29] but seemed to be resolved and was in 2018
[13:29] hmm
[13:29] i will change my report
[13:29] thanks!
[13:30] BlackDex: does that happen only on stable channel?
[13:31] mborzecki: stable channel of what? Arch? Snapd? Spotify?
[13:31] Since the spotify snap has been updated today it seems
[13:31] BlackDex: spotify snap, but nvm, looks like they updated it today?
[13:32] BlackDex: you can also `snap revert spotify`
[13:32] and then check if it works
[13:32] yea, i did that, that seemed to work. Then i went debugging, and in the end removed that revision :(
[13:33] pstolowski, well
[13:33] BlackDex: well, dumps core here too
[13:33] it is really really weird
[13:33] I profiled it and mem and cpu are ok
[13:34] zyga: i don't think the patch is related at all, the spotify process has already started
[13:34] it finishes the execution
[13:34] mborzecki: the patch would remove a lot of the leading noise
[13:34] pstolowski, but it does not return from the execute script
[13:34] wth is libcef?
[13:34] I see all the output
[13:34] pstolowski, and see how it runs everything but it is not returning
[13:35] mborzecki: no idea
[13:35] pstolowski, perhaps something related to ssh?
[13:35] huh, i only see it in spotify package, steam package, and spotify snap
[13:35] CEF
[13:35] hmm
[13:35] * zyga checks one thing
[13:36] mborzecki: haha
[13:36] mborzecki: it's chrome
[13:37] mborzecki: chromium embedded framework
[13:37] mborzecki: aka, OS in a .so file
[13:38] hm the aur package is at 1.1.10, wonder why
[13:39] ok
[13:39] any clue whether it works on ubuntu?
[13:39] CEF is similar, but also different, to electron
[13:39] or fedora?
[13:39] i just copied over all the files from /var/lib/snapd/snap/spotify/41/usr/share/spotify to ~/bin/spotify and started spotify
[13:39] that seems to work
[13:40] of course that'll work, because you're removing the confinement :-)
[13:40] i only had to install libcurl-gnutls as an extra package
[13:40] yea, but i just wanted to know if it isn't spotify which crashes on my system
[13:40] brb
[13:41] i could try and disable apparmor
[13:41] see what that does
[13:47] BlackDex: not much, coredumps as well
[13:49] BlackDex: zyga: so it works on fedora, but consistently coredumps on arch, even with --devmode
[13:49] same here
[13:49] i also tried to start snap within a lxd container, but that also didn't work :S
[13:50] i could try an nspawn with ubuntu as a base, see what happens
[13:50] but seems a bit much of a hassle
[13:52] BlackDex: just to be sure, do you have the -lts kernel? iirc it's 5.4 now?
[13:52] i'm using the zen latest one
[13:52] 5.5.4
[13:53] BlackDex: right, mine is 5.5.4 too, though the stock repo one
[13:53] i could try the lts
[13:53] let me reboot
[13:53] fedora has 5.4.xx atm
[13:55] hello folks
[13:55] hey Ian :)
[13:55] on 5.4.20-1 i get the same result
[13:56] cachio: perhaps upgrade 20.04 image again?
[13:57] ijohnson: hi, there's probably a few more places in makebootable.go that can use Filename
[13:57] hi pedronis zyga
[13:57] unrelated to the new code
[13:58] pedronis: yes I missed some when rebasing
[13:59] ijohnson: maybe not though, I think we need to be careful, because I don't think Path always has name_rev format there
[14:00] well I specifically was trying to limit usage of that new function to places where we were doing filepath.Base(sn.MountFile())
[14:00] yea, indeed
[14:00] I just grepped for that pattern of filepath.Base\(.*MountFile\(\)\)
[14:01] anyways SU now?
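The grep ijohnson describes can be reproduced on a throwaway tree; the `.go` file and its contents below are invented for illustration (in practice you would run the grep from a snapd checkout):

```shell
#!/bin/sh
# Demonstrate the call-site search on a scratch directory rather than the
# real snapd tree; "makebootable.go" and its one line are made up.
set -e
tree=$(mktemp -d)
cat > "$tree/makebootable.go" <<'EOF'
base := filepath.Base(sn.MountFile())
EOF
# the pattern ijohnson mentions, escaped for grep
matches=$(grep -rl --include='*.go' 'filepath\.Base(.*MountFile())' "$tree")
echo "$matches"
```

Note that this pattern also matches `*_test.go` files, which is exactly the gap zyga mentions later ("had test*.go excluded in my grep").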
[14:03] BlackDex: hah, thanks for checking
[14:11] i will update my report
[14:13] hm flatpak has the same revision as aur
[14:14] yea, i checked the repo, but that is still on an old version 1.10 or something
[14:14] wonder whether there's something off with the version 1.1.20
[14:15] mborzecki: I'm checking if spotify works on ubuntu now
[14:15] it does
[14:16] and which snapd version is that?
[14:16] maybe there is a small difference
[14:22] zyga: what's the version of the spotify snap you installed?
[14:22] mborzecki: stable
[14:22] zyga: rev 41?
[14:22] yes, 41
[14:22] hmmm
[14:26] i just tested older versions of snapd, but no success
[14:31] BlackDex: no, imo it's a problem with spotify snap
[14:31] tried to disable gpu acceleration, but no luck there
[14:33] pstolowski, hey
[14:33] did you see what I wrote?
[14:33] pstolowski, the line > qemu-nbd -c /dev/nbd0 "$CLOUD_IMAGE"
[14:33] cachio: yep, about qemu-nbd?
[14:33] right
[14:33] in the function mount_ubuntu_image
[14:33] cachio: interesting
[14:34] cachio: qemu-nbd doesn't return or what happens?
[14:35] pstolowski, this is affecting ssh somehow
[14:35] huh
[14:35] when I remove this line the systemd does not get stuck anymore
[14:35] in fact what is getting stuck is ssh
[14:35] the script is executed 100%
[14:36] but ssh never returns
[14:36] PR snapcraft#2944 opened: add github action ci/cd
[14:36] pstolowski, don't know why
[14:38] cachio: weird.. thanks, i'm trying to re-arrange this test a little bit
[14:41] pstolowski: is my comment on services sensible, should I send a patch?
[14:45] zyga: yes it is, thanks, i can do it, thanks for spotting!
[14:48] cachio: bingo!
[14:48] cachio: i've re-arranged the test to setup and restore qemu-nbd in execute:
[14:49] cachio: and it passed
[14:49] pstolowski, nice
[14:49] pstolowski, perfect
[14:49] cachio: it may have something to do with what zyga said in the standup (holding fds?)
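The re-arrangement pstolowski describes presumably ends up looking something like this spread task fragment. The `qemu-nbd -c` line is quoted from the discussion above; the mount path, partition, and preseed invocation are invented placeholders, and the point is simply that connect and disconnect both happen inside `execute:`, so no nbd file descriptor from `prepare:` is left open across the ssh session spread uses:

```
# sketch: keep the nbd device's lifetime entirely within execute
execute: |
    qemu-nbd -c /dev/nbd0 "$CLOUD_IMAGE"
    mount /dev/nbd0p1 /mnt/cloudimg
    # ... run the actual preseeding against the mounted image ...
    umount /mnt/cloudimg
    qemu-nbd -d /dev/nbd0
```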
[14:49] please leave a note in the code :)
[14:50] PR snapd#8143 opened: github: add github workflow
[14:50] Hi
[14:51] hey sdhd-sascha
[14:51] :-)
[14:52] Is 8143 useful ?
[14:54] sdhd-sascha: that's a question for mvo and cachio
[14:59] sdhd-sascha, there is already a procedure for creating/releasing snapd snap, perhaps mvo could clarify
[14:59] sdhd-sascha, thanks for proposing this btw
[14:59] cachio: i know. But i need a custom build, because i'm waiting for some PRs
[15:00] So i build the patched version on github
[15:00] sdhd-sascha: in a meeting right now, but this looks interesting, does it mean we get a snap for each commit basically? if so, where is that stored? that might be cool for some people to test etc
[15:00] mvo: for each `git tag`
[15:01] cachio: thanks for finding this! and it's weird 19.10 is happy
[15:01] mvo: here are the results from snapcraft https://github.com/sd-hd/snapcraft/releases
[15:04] pstolowski, yaw
[15:04] mvo: then i do this for installation `snap install --dangerous <(wget -q -O- https://github.com/sd-hd/snapcraft/releases/download/3.9.9-0sdhd2/snapcraft-snap-3.9.9-0sdhd2)`
[15:09] mvo: we can rewrite this, to get a build from each commit, too
[15:14] sdhd-sascha: in a meeting still, will get back to you
[15:24] i'm almost tempted to create an aur package based upon the .snap squashfs file ;)
[15:26] BlackDex: well, there's an aur package, but it extracts a *.deb afaik
[15:26] i know, that is why i was thinking of using the .snap
[15:27] * cachio lunch
[16:00] mborzecki: #8136 needs a 2nd review
[16:00] PR #8136: boot: write current_kernels in bootstate20, makebootable
[16:13] mborzecki: if i check the coredump it seems to fail after the libc-2.27.so
[16:24] * zyga afk / dinner
[16:44] /me missed many tests in using Filename() it seems, had test*.go excluded in my grep :-|
[17:03] PR snapd#8144 opened: tests: use Filename() instead of filepath.Base(sn.MountFile())
[17:15] mvo: oh also I just realized that the next PR I have for snap-bootstrap will break foundations' pi work w/o us also supporting ExtractedRunKernelImageBootloader for uboot because my next PR needs that interface in the initramfs-mounts :-(
[17:17] I will propose that PR today, so maybe in the AM you can think about what to do about this
[17:18] pedronis: ^
[17:41] ijohnson: ok, thank you! please just add a note about it in the PR and I can look tomorrow
[17:45] mvo: sure I will also put it in my SU notes before my EOD, also unfortunately I found a bigger issue with the current kernel verification in initramfs: https://github.com/snapcore/snapd/pull/8136#issuecomment-587100038
[17:45] PR #8136: boot: write current_kernels in bootstate20, makebootable <β›” Blocked>
[17:45] oh darn just missed him
[17:45] oh well
[18:15] ijohnson: I asked some questions there
[18:16] pedronis: did you see the other comment I left on my PR ?
[18:16] pedronis: the attack is that if they switch the symlinks around, then effectively they prevent the system from ever booting the new kernel
[18:17] (with the current code that I had proposed)
[18:17] ijohnson: yes, but I'm trying to understand what we are trying to fix, there are other ways to achieve that
[18:17] or brick the device
[18:17] what we don't allow is to open the disk
[18:17] yes or brick the device
[18:17] ijohnson: they can also remove try-kernel completely
[18:17] they can empty boot
[18:17] right but specifically switching the symlinks around should break something in the boot
[18:18] currently it wouldn't
[18:18] so I am refactoring the initramfs-mounts code I have to make the assumption that current_kernels[0] is _always_ the fallback kernel and current_kernels[1] (if it exists) is always the try kernel
[18:18] then the boot would fail if an attacker switched the symlinks around
[18:19] I guess the concern is that they switch the symlinks around and we still perform a boot without detecting that the attack happened
[18:19] ijohnson: the issue here is what distinguishes a bad filesystem vs an attacker
[18:19] how does a bad filesystem play into this ?
[18:19] ijohnson: we might try to boot without a try-kernel at all
[18:20] Yes they could do other malicious things and in that case we would successfully either fail to boot (because we are broken/untrusted) or we would detect the issue and fail
[18:20] Sorry my point is specifically that we wouldn't be able to know anything happened if they switched the symlink around
[18:21] Unless we enforce an ordering to the modeenv kernels setting
[18:21] ijohnson: my point is that the point where we can check is that we opened the disk
[18:22] anyway
[18:22] so if the old kernel is bad in some ways
[18:22] well, if we opened the disk
[18:22] When I say switch I mean literally switch, i.e. make kernel.efi point to what we set try-kernel.efi to in boot
[18:23] Hmm I suppose that's another way we could check, seems like more work though
[18:23] ijohnson: do you understand my point? any check based on having opened the disk is maybe interesting but slightly too late
[18:23] and by definition anything can be written to boot
[18:24] Perhaps I should also mention that the code in the boot pkg as of my PR doesn't detect this swapping either
[18:25] So even if we get to userspace we don't detect that kernel.efi was changed and try-kernel.efi was changed
[18:26] Maybe I should share my notes on this rather than try to explain over IRC
[18:27] ijohnson: sorry, my point is that either of the try kernel and the kernel will let us open the disk
[18:30] Yes I understand that, one minute while I finish typing the attack in a doc to share
[18:32] pedronis: hey, ok to land #8046 with small spread test fix for 20.04 (just need to push one more)? i don't think it warrants re-reviews
[18:32] PR #8046: many, tests: integrate all preseed bits and add spread tests
[18:33] pedronis I shared you a gdoc can you see it?
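The boot state being argued about can be illustrated with a self-contained sketch. The directory layout, snap names, and modeenv line are invented for illustration; on a real UC20 system these live on the ubuntu-boot and ubuntu-data partitions:

```shell
#!/bin/sh
# Sketch of the state under discussion: kernel.efi points at the known-good
# kernel, try-kernel.efi at the candidate, and modeenv lists both.
set -e
boot=$(mktemp -d)
mkdir -p "$boot/pc-kernel_101.snap" "$boot/pc-kernel_102.snap"
: > "$boot/pc-kernel_101.snap/kernel.efi"
: > "$boot/pc-kernel_102.snap/kernel.efi"
ln -s pc-kernel_101.snap/kernel.efi "$boot/kernel.efi"       # fallback
ln -s pc-kernel_102.snap/kernel.efi "$boot/try-kernel.efi"   # candidate
# the ordering assumption: current_kernels[0] is the fallback, [1] the try kernel
printf 'current_kernels=pc-kernel_101.snap,pc-kernel_102.snap\n' > "$boot/modeenv"
fallback=$(readlink "$boot/kernel.efi")
echo "$fallback"
```

The attack is then simply swapping the two symlink targets: nothing in the files above records which link originally pointed where, which is why an enforced ordering in `current_kernels` (or some extra recorded state) is needed to detect the swap.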
[18:33] ijohnson: not yet
[18:34] pedronis: https://docs.google.com/document/d/1Z4A4tYCaW_ew7tOLxQjJ0vaQ8E6v4lPWVi46fg4aE8Y/edit?usp=drivesdk
[18:36] ijohnson: I need to go have dinner, also I can't even comment in that doc
[18:36] Oh sorry let me add comments
[18:37] We can discuss tomorrow morning, I don't mean to keep you
[19:25] I am having a power outage here
[19:26] if someone needs me please telegram me
[19:38] PR snapd#8145 opened: cmd/snap-bootstrap: verify kernel snap is in modeenv before mounting it
[20:35] PR snapd#8146 opened: tests/core: add swapfiles test
[20:43] ijohnson: what happens if we reboot just before the point where we rewrite modeenv but we have already manipulated the symlinks ?
[20:43] that was the issue why we thought we couldn't track which is which
[20:44] pedronis: hmm in the boot pkg ?
[20:44] yes
[20:44] remember that originally we planned to have try_kernel and kernel in modeenv
[20:44] and then we found issues with that plan and fell back to just having a list
[20:44] what has changed?
[20:46] pedronis: for kernel snap setNext() we set the modeenv first
[20:46] no, the issue is more with mark successful I think
[20:46] * ijohnson looks at markSuccessful
[20:47] with markSuccessful we are only removing kernels from the modeenv, and we do that last, because if we did that first and got rebooted we would not be able to finish boot in the initramfs
[20:47] pedronis: ^
[20:48] yes, but before we remove it
[20:48] the first kernel is the old kernel that might not be there
[20:48] I mean for a bit the symlinks point to the same thing
[20:48] then they point just to the new one
[20:49] but only after the modeenv is adjusted
[20:49] for markSuccessful the symlinks are moved before the modeenv is written ? so if we got rebooted right before writing the modeenv, kernel.efi -> new kernel, and new kernel is still in the modeenv
[20:50] it is, but the new code in your branch
[20:50] in your new branch
[20:50] seems very opinionated about what it expects where
[20:50] ah I see what you mean now
[20:50] that's the point of it
[20:50] hmm yes that does introduce a problem then
[20:50] but will it work with those intermediate states?
[20:51] I mean that's where we started
[20:52] I mean maybe it's tractable but it needs quite a bit of tests about all these states
[20:52] yes I think the code is vulnerable right now, but I think we can move modeenv earlier
[20:52] * move writing the modeenv earlier
[20:53] then we have the reverse problem
[20:54] anyway my point is that this needs testing infrastructure to double check these things
[20:54] yes I think this is why we went with the list in the first place :-/ because there's no way to avoid that short period of time where we really don't want to be rebooted
[20:54] and I don't see an easy win here
[20:54] atm
[20:56] I will think on this a bit
[20:58] be back in a bit, need to reboot to finish upgrade to focal
[21:22] PR snapd#8147 opened: cmd/snap-bootstrap: verify kernel snap is in modeenv before mounting it
[21:23] PR snapd#8145 closed: cmd/snap-bootstrap: strictly verify kernel snap is in modeenv before mounting it
[21:38] not sure this is the best place to ask this but the chromium snap has a sandboxed $HOME, right? so ~/Downloads is not /home/`whoami`/Downloads?
[21:40] PR snapcraft#2945 opened: ci: only run docker builds for cron
[21:42] nevermind answered my own question (yes) XD
[21:50] * ijohnson is now a focal fossa
[21:57] what even is a fossa :)