[05:25] morning
[05:31] PR snapd#8695 opened: data/completion, packaging: cherry-pick zsh completion (2.45)
[06:08] PR snapd#8696 opened: packaging/arch: update PKGBUILD to match one in AUR
[06:26] mvo: hey
[06:27] mvo: opened a cherry-pick of zsh completion: https://github.com/snapcore/snapd/pull/8695
[06:27] PR #8695: data/completion, packaging: cherry-pick zsh completion (2.45)
[06:27] mvo: looks like we need the mount-ns fixes in 2.45 branch
[06:29] mvo: i believe the fixes are: https://github.com/snapcore/snapd/pull/8666 and https://github.com/snapcore/snapd/pull/8669
[06:29] PR #8666: tests/mount-ns: update to reflect new UEFI boot mode
[06:29] PR #8669: tests/mount-ns: stop binfmt_misc mount unit
[07:00] o/
[07:02] morning
[07:05] zyga: pstolowski: hey
[07:08] hey zyga, pstolowski and mborzecki
[07:12] PR snapd#8688 closed: state: log task errors in the journal too
[07:13] 8604 needs a second review please :)
[07:14] PR snapd#8695 closed: data/completion, packaging: cherry-pick zsh completion (2.45)
[07:20] +1
[08:05] pstolowski: https://github.com/snapcore/snapd/pull/8692#discussion_r427101228
[08:05] PR #8692: configcore: add nomanagers buildtag for conditional build
[08:06] mborzecki: thank you
[08:06] pstolowski: maybe we could simplify that even further
[08:20] mborzecki: what do you mean by whitelist and not using tags?
[08:23] pstolowski: instead of building snap command with -tags nomanagers, you could check which overlord packages are imported, we know that overlord/state and overlord/auth are needed (for debug state and macaroons iirc) so those are the whitelist, everything else should throw an error
[08:23] mborzecki: but how? we need to import configcore, it just needs conditional build to exclude certain parts
[08:24] mborzecki: this dependency is introduced in my other PR
[08:24] aaah ok
[08:24] mborzecki: for that reason the simple dependency check will not work either
[08:25] pstolowski: which is the other pr?
[08:30] mborzecki: the spread test will be updated to check if packaging was done right (in followup(s))
[08:30] mborzecki: #8533. the conditional build will fix selinux denials there
[08:30] PR #8533: image, tests: core18 early config
[08:30] mborzecki: this is where we pull configcore from snap prepare-image: https://github.com/snapcore/snapd/blob/055397eb7bb37699ebe4ab885b378d3160771392/image/image_linux.go#L436
[08:30] pstolowski: btw. have you checked the size of snap binary with that branch?
[08:32] pstolowski: uhh, looks like it's importing every package found in overlord now? https://paste.centos.org/view/9964d6af
[08:34] mborzecki: i haven't checked the size, will do in a moment, going to do packaging changes
[08:34] mborzecki: are these imports from current master, or from #8533?
[08:34] PR #8533: image, tests: core18 early config
[08:35] mborzecki: nb snap bootstrap will benefit too
[08:42] pstolowski: isn't this import problem happening only because of devicestate.CanManageRefreshes() in configcore/refresh.go?
[08:45] mborzecki: i'm not sure, i haven't tracked it down precisely. nonetheless we want to eradicate all managers (i think snap bootstrap size is one of the main concerns)
[08:47] fwiw the snapd package is full of big fat binaries already, tried to fix that via symlinking snap to snapd (also the imports would not be a concern in such a setup)
[08:48] pstolowski: snap is 20MB, with 8533 it's 21MB, snapd is 25MB, s-b is 16MB
[08:53] mborzecki: mhm, interesting numbers
[08:55] pstolowski: commented out devicestate.CanManageRefreshes() and the devicestate import in configcore, then proxy.go imports assertstate, but that's not an fs-only option either
[08:57] pstolowski: maybe it'd make sense to make the split with a tag in configcore rather than in individual managers?
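The whitelist check mborzecki sketches above can be expressed in a few lines of shell. This is only a sketch: the real input would come from `go list -deps ./cmd/snap`, which is stubbed here with a canned package list so the snippet is self-contained, and `check_overlord_imports` is a hypothetical helper name, not anything in the snapd tree.

```shell
# Hypothetical whitelist check for overlord imports: everything under
# overlord/ is rejected except overlord/state and overlord/auth.
check_overlord_imports() {
    grep '/overlord/' | grep -v -e '/overlord/state$' -e '/overlord/auth$'
}

# Canned stand-in for `go list -deps ./cmd/snap` output.
deps='github.com/snapcore/snapd/overlord/state
github.com/snapcore/snapd/overlord/auth
github.com/snapcore/snapd/overlord/devicestate'

bad=$(printf '%s\n' "$deps" | check_overlord_imports)
# devicestate is not on the whitelist, so it shows up here
echo "unexpected overlord imports: $bad"
```

In a real check the canned list would be replaced by the `go list` invocation and a non-empty `$bad` would fail the build.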
[08:59] pff nvm
[09:00] pstolowski: it's what you have in your latest change ;)
[09:01] apparently i need a coffee
[09:01] mborzecki: ok, good, i got very confused by this last remark ;)
[09:03] having fun with debian packaging now ;)
[09:08] pedronis: hi, IIRC you hinted at a dbusutil package, did you mean a new top level or something else?
[09:08] mborzecki: thanks for the review!
[09:10] zyga: found some scripts that cachio is using: https://github.com/snapcore/snapd/pull/8693#issuecomment-630692280
[09:10] PR #8693: tests: add fedora 32 to spread.yaml
[09:24] zyga: would it contain only testing stuff atm
[09:24] ?
[09:30] Not only, looking at the current code I would put isSessionBusLikelyPresent and the associated mocking code there
[09:38] zyga: then yes, a top-level dbusutil (I want to move all *util under some package at some point but not immediately)
[09:39] Ok, will do
[09:39] Should I also move the relevant test code from testutil to something like dbusutil/dbustest?
[09:40] It would be the non-trivial dbus call testing stack
[09:40] yes
[09:41] Great, on it
[09:46] mborzecki: mvo: #8675 needs a 2nd review
[09:46] PR #8675: osutil: add disks pkg for associating mountpoints with disks/partitions
[09:48] i'll take a look, already got it opened in a tab since yday
[09:55] mborzecki: thx, I did a pass on 8686 btw, there's a question there
[10:23] I need to do a small errand.
Back in 45
[10:23] It's such a sleepy day that some air will help
[10:24] * mvo is in a meeting
[10:29] PR core20#60 opened: Copy-in launchpad's build-archive
[10:34] PR snapd#8678 closed: tests: port interfaces-contacts-service to session-tool
[11:14] Thanks guys
[11:14] Making progress on tests for dbus bits
[11:15] And perhaps pressure is better as I'm not so sleepy anymore
[11:15] PR snapd#8679 closed: tests: port interfaces-location-control to session-tool
[11:19] Also, it must be spring because look at those green tests :-)
[11:20] * mvo hugs zyga
[11:48] PR snapd#8697 opened: [RFC] packaging: build cmd/snap with nomanagers tag
=== msalvatore_ is now known as msalvatore
=== verterok` is now known as verterok
[13:06] xnox, sil2100, we have some serious probs with customers not being able to disable console-conf anymore ... when you guys removed the --disable-console-conf option in ubuntu-image you only did that for core20 builds, didn't you?
[13:08] ogra: that is correct. What is the customer building? UC20 or 18/16?
[13:08] 18 ... but on a 20.04 host machine it seems
[13:08] ogra: what is the snap revision of ubuntu-image the customer is using?
[13:08] Or are they using the deb?
[13:10] xnox, apparently the deb (for whatever insane reason)
[13:11] PR snapd#8696 closed: packaging/arch: update PKGBUILD to match one in AUR
[13:15] (they are using 1.9-20.04 ... but i think we need to ask them to not use the deb anyway ... why is that still in the archive? it will cause the same problems we have since yeats with the snapcraft db)
[13:15] *deb
[13:15] *since years
[13:16] ogra: because we still use it to build production images.
[13:16] And we have snap haters too.
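Given the back and forth above about which ubuntu-image build (snap channel vs deb) actually has --disable-console-conf, a feature probe of the tool's --help output is more reliable than guessing by version. A sketch only: a stub script stands in for ubuntu-image so it runs anywhere; point `UI` at the real binary in practice.

```shell
# Stub standing in for ubuntu-image (the real one may or may not
# advertise the flag, which is exactly what we want to detect).
tmp=$(mktemp -d)
cat > "$tmp/ubuntu-image" <<'EOF'
#!/bin/sh
echo "usage: ubuntu-image snap [--disable-console-conf] MODEL"
EOF
chmod +x "$tmp/ubuntu-image"
UI="$tmp/ubuntu-image"

# Probe the help output for the flag instead of trusting the version.
if "$UI" --help 2>&1 | grep -q -- '--disable-console-conf'; then
    echo "option available"
else
    echo "option missing"
fi
```

With the stub above this prints "option available"; against a stable-channel or deb build from the discussion it would print "option missing".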
[13:16] fix that and make them use the snap ;)
[13:17] there can't be snap haters in canonical :P
[13:19] 🤭
[13:19] ogra: so
[13:19] ogra: https://paste.ubuntu.com/p/zFMWpCqd2D/
[13:19] mvo: these 3 checks: https://github.com/snapcore/snapd/blob/80c895a0216cee4116826be5ceda8fa2791d7f97/packaging/ubuntu-16.04/rules#L179
[13:19] ogra: the ubuntu-image snap has the --disable-console-conf option; the deb does not yet have it, as we update the snap more often.
[13:19] ogra: but it might depend on channels too, I'm running latest/edge
[13:20] ogra: let me check if stable has that option at all
[13:20] xnox, yeah .. they tried the snap before and switched to the deb because the stable snap was lacking the option ... heh
[13:20] probably time for a release to stable :)
[13:20] ogra: but the deb doesn't have it either.... so..... ?
[13:21] right, then they started to use a script to modify the image on the fly during build ... and then hell broke loose :)
[13:21] sil2100: when will the new ubuntu-image get released?
[13:21] now pointing them to the edge snap should fix them ...
[13:21] ogra: beta has it too
[13:21] yeah
[13:21] stable|candidate do not
[13:21] the deb does not
[13:21] yup ...
[13:22] 1.9+snap3 is the one they want for now
[13:22] ogra: please test & ask for version numbers, before escalating next time =)
[13:22] i was only asking ... not "escalating" ... :)
[13:32] xnox: yes
[13:32] ;)
[13:33] (I know that is not the answer to the question)
[13:34] Seriously speaking, I talked with Samuele and Michael and I will be releasing the beta UI to stable soon, probably after my +1 maintenance shift
[13:40] PR snapd#8674 closed: tests: fix nested tests
[13:54] does github work for you guys?
[13:54] i can't fetch or push anything
[13:58] mborzecki: seems fine to me
[13:58] works fine for me .... tried to re-connect the canonical VPN ?
[13:58] that hangs at times for me
[14:05] ijohnson, hey, which is the other step I need to do after re-signing the kernel?
[14:06] cachio: you need to re-build the gadget snap, signing shim with the snakeoil keys
[14:06] cachio: to do that, you need the snakeoil dir from this git repo: https://github.com/snapcore/pc-amd64-gadget/
[14:06] then unpack the pc gadget snap and run this:
[14:07] https://www.irccloud.com/pastebin/IgmkHoBY/
[14:08] ijohnson, I am using sbsign --key /etc/ssl/private/ssl-cert-snakeoil.key --cert /etc/ssl/certs/ssl-cert-snakeoil.pem
[14:08] currently to sign the kernel
[14:08] cachio: ok, I think the snakeoil keys from /etc/ssl are the same as from the gadget
[14:08] ijohnson, should I use the cert and key provided by the gadget?
[14:08] let me double check that they are the same
[14:09] ijohnson, thanks
[14:09] * zyga -> lunch
[14:11] cachio: hmm no, the snakeoil key is different it seems, so I think you need to sign shim with the keys from the gadget dir
[14:12] cachio: when you sign the kernel, are you signing the kernel.efi ?
[14:12] ijohnson, yes
[14:12] which key and cert should I use?
[14:12] cachio: are you using the ubuntu-core-initramfs package to create the efi?
[14:12] i.e. are you running `ubuntu-core-initramfs create-efi` ?
[14:12] yes
[14:13] cachio: ok, so that is doing the same thing I am doing so you should be okay
[14:14] ijohnson, nice, thanks, I'll try that and re-test
[14:14] cachio: as long as you are not doing anything different from `ubuntu-core-initramfs create-efi` you should be good
[14:14] thanks for the help
[14:14] yeah np, let me know if you run into any more issues
[14:14] ijohnson, sure
[14:15] zyga: any idea why this check was "skipped" ? https://github.com/snapcore/snapd/pull/8683/checks?check_run_id=688936733
[14:15] PR #8683: osutil/disks: support IsDecryptedDevice for mountpoints which are dm devices
[14:28] yeah, I clicked twice
[14:28] and I cancelled a run
[14:29] that's a stale marker of some kind
[14:29] note that actual tests did run
[14:37] sergiusens, hi, I'm seeing a strange behavior wrt the rbac snap on core20.
it works fine when installed on focal, but I get https://paste.ubuntu.com/p/zNNJPdpSh5/ in a bionic container
[14:39] sergiusens, both on snapd 2.44.3
[14:42] ackk: any python native code?
[14:42] zyga, possibly, why?
[14:43] ackk: classic confinement?
[14:43] zyga, it's a fully confined snap, FTR
[14:43] ackk: different .so versions, won't import
[14:43] just guessing
[14:46] zyga, "snaphelpers" doesn't use native code
[14:46] zyga, I just tried in a snap run --shell, trying to import that from the python cli also fails
[14:49] Strace it perhaps
[14:50] ackk: can you share the yaml?
[14:50] i'm guessing you'll need to set PYTHONPATH for your runtime environment
[14:51] ackk: e.g. https://github.com/snapcore/snapcraft/blob/master/tests/spread/plugins/v2/snaps/python-hello-with-stage-package-dep/snap/snapcraft.yaml#L15
[14:52] cjp256, I updated https://bugs.launchpad.net/snapcraft/+bug/1876370
[14:52] Bug #1876370: Python libraries installed by python path are not found in sys.path
[14:53] hey folks - what happens if I install a snap that has an auto-alias that conflicts with an existing command on the system? does the snap's take precedence? does the deb-installed one? does snapd refuse to configure the conflicting alias?
=== lool- is now known as lool
[14:58] ackk: out of curiosity, if you don't filter out files, do you get the same result? the bionic vs focal is interesting
[14:58] cjp256, haven't tried, let me
[14:59] ackk: also consider staging all of python and see if that changes things: https://github.com/snapcore/snapcraft/blob/master/tests/spread/plugins/v2/snaps/python-hello-staged-python/snap/snapcraft.yaml
[14:59] cjp256, well that's what we did before, but it'd be nice not to have to now that snapcraft supports using the builtin one
[15:00] ackk: this is a strict snap, right?
[15:00] cjp256, correct
[15:01] roadmr: it's the same as what happens with snaps that have a name that matches a system command, snapd will install the snap.
The precedence depends on PATH; as set by default, snap commands are searched after system commands.
[15:01] thanks pedronis
[15:02] ackk: i'll see if I can repro on 18.04
[15:02] roadmr: we check for conflicts among snaps, but not with the system commands
[15:02] pedronis: right, I knew about inter-snap checking but was unsure about existing command behavior
[15:02] asking for a friend :)
[15:03] cjp256, it's unfortunately not a public snap, so I can't share the code here
[15:03] with my own attempt at a reproducer anyhow
[15:07] cjp256, same issue without filtering files
[15:15] ackk: i cannot reproduce on bionic. if you want, you can gmail me the full yaml and i can give it a shot
[15:34] maybe one system is not marked as mandatory, let's see what spread says to this PR
[15:36] ah
[15:38] I'm struggling to get snapd to run inside an Ubuntu 18.04 docker container without having run that container in "privileged" mode. I've succeeded when the Docker host is Arch Linux, but am having issues when the host is also Ubuntu. The error is "main.go:123: system does not fully support snapd: cannot mount squashfs image using "fuse.squashfuse": fuse: mount failed: Permission denied"
[15:39] pedronis, mvo I tested that with "secureboot: true" and it worked
[15:41] actually, scratch that, it's not working on Arch Linux anymore either...
[15:41] witquicked: hey
[15:41] witquicked: docker is not a supported snapd host IIRC
[15:41] witquicked: if you need a container and can use lxd instead, that should work
[15:42] mvo, do you have a link to the change where secure boot is not working?
[15:42] cachio: I misunderstood the conversation claudio had with Gustavo
[15:42] I thought the change landed in spread with the preferred syntax
[15:42] but seems not
[15:43] mvo: for your benefit, this is the spread part: https://github.com/snapcore/spread/pull/104/files
[15:43] PR spread#104: Adding option for systems created on google to start with secure boot enabled
[15:43] there's a comment about the syntax by Gustavo
[15:43] zyga, I know it's not officially supported, and that I'm fighting an uphill battle. I have successfully gotten it working several times, using various mount points...
[15:44] witquicked: I think it's a waste of your time, unless you have absolutely no other choice
[15:44] pedronis, cachio aha, thanks
[15:44] I have a reasonable use-case for it...
[15:44] PR snapd#8698 closed: spread.yaml: secureboot is really secure-boot
[15:44] witquicked: can you use lxd instead? that's a really easy way out
[15:44] mvo: anyway it means something else failed
[15:45] which isn't great either
[15:45] witquicked: if you absolutely must use docker you are on your own as we don't support that and there will be random things you need to do each time that may or may not work
[15:45] witquicked: you need systemd and the ability to mount squashfs (probably via fuse)
[15:45] pedronis, good, thanks
[15:45] zyga, yep...
[15:46] like I said, I can get it to work using the "privileged" command, so I know it's within the realm of possible
[15:46] witquicked: I don't know what that does in docker
[15:46] and yes, I've done the work needed to get systemd to run inside Docker
[15:47] and if doing so is not messing up your system
[15:47] that switch essentially allows the container to run as root.
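zyga's requirements above (systemd plus the ability to mount squashfs, probably via fuse) can be probed up front. A rough preflight sketch; note that neither check proves a mount will actually be permitted (seccomp or apparmor policy in the container can still deny it), they only rule out the obvious gaps.

```shell
# Does the kernel itself know squashfs?
if grep -qw squashfs /proc/filesystems; then
    kernel_squashfs=yes
else
    kernel_squashfs=no
fi

# Is the fuse.squashfuse fallback plausible (device node + tool present)?
if [ -e /dev/fuse ] && command -v squashfuse >/dev/null 2>&1; then
    fuse_fallback=yes
else
    fuse_fallback=no
fi

echo "kernel squashfs: $kernel_squashfs"
echo "fuse fallback:   $fuse_fallback"
```

Two "no" answers here would explain a "cannot mount squashfs image" failure like the one quoted above before snapd even starts.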
[15:47] witquicked: out of all containers we only support lxd because it does apparmor nesting
[15:47] witquicked: it seems you are saying this is not a container anymore
[15:47] mvo: anyway indeed store problems
[15:47] witquicked: if you do this it may appear to work but may actually break something on your host
[15:47] mvo: making the PR red atm
[15:47] PRs
[15:47] witquicked: and it may totally break if you run more than one "container" like that at the same time
[15:49] PR snapd#8699 opened: interfaces/desktop-launch: support confined snaps launching other snaps
[15:49] zyga, I have multiple use-cases for running systemd in a container. The one that sidesteps the religious discussion is that I use the container for testing ansible playbooks...
[15:50] witquicked: systemd is probably not the problem
[15:50] witquicked: I'm specifically talking about snapd
[15:50] witquicked: (where "probably" is more of a I don't know)
[15:50] zyga, agreed, it's not systemd. Fair enough. :)
[15:50] witquicked: but snapd will for sure misbehave and, given the chance, change the host, not the container
[15:50] I think we are talking in circles now, I can only wish you good luck
[15:51] witquicked: I'm sorry but we cannot support docker without docker growing more features
[15:51] Actually, what you said about snapd changing the host is very helpful...
[15:51] witquicked: or someone writing a docker helper that lets you run more complex things
[15:53] Are the changes you're referring to in "/sys/fs/fuse/connections", or "/sys/fs/cgroups", or the socket?
[15:54] witquicked: snapd loads apparmor profiles
[15:54] witquicked: apparmor is not namespaced
[15:54] witquicked: without special setup that is clobbering the host apparmor configuration
[15:54] aha, but it can run without apparmor, yes?
[15:54] witquicked: snapd?
[15:54] yes
[15:54] witquicked: yes but all security promises are gone then
[15:55] witquicked: at which point the question of what the docker container brings is questionable
[15:55] yes, I'm aware - and apparmor is shipped to not start in a container...
[15:55] witquicked: if you entirely trust the payload that might work to some extent
[15:55] witquicked: it's not just the container, if your kernel has apparmor snapd will sue it
[15:55] *use it
[15:56] Yeah, it's definitely pushing the boundaries of what it means to be a container...
[15:56] * zyga wonders what's the requirement
[15:56] oh rilly?
[15:56] building in "docker"
[15:56] where it means that docker is the entry point and that's all
[15:56] ?
[15:57] the requirement is to be able to test ansible playbooks in a "somewhat" isolated and repeatable environment
[15:57] I see
[15:57] and how does that interact with snapd? ansible setting up snaps?
[15:57] it's also used to support a build environment, where the setup of that environment uses snap to install tools
[15:58] I know it's a big shift but lxd actually properly supports that
[15:58] especially if you want to test something that feels more like a real system
[15:59] alternatively a vm, as running in a container vs out of a container does matter for both snapd and lxd
[15:59] That's true
[16:01] I know and love lxd, and it would be my first choice... however I need for this to support Linux, Mac and Windows hosts.
[16:01] I understand
[16:03] my only comment is that it's not testing something that looks like a normal system
[16:03] so the test may say "great, it works"
[16:03] but reality will be different
[16:03] I see your point.
[16:03] And it's absolutely valid... you're basically saying that any test that works without apparmor turned on isn't really a valid test.
[16:03] it depends
[16:03] for some snaps it might be okay
[16:03] it's just not something that I would say is a good test
[16:03] it's like a test that says "echo ok" is both fast and portable
[16:03] and it passes too
[16:04] but that's not the goal
[16:24] cmatsuoka, I see, I'll fix it
[16:25] cachio: thanks
=== ijohnson|lunch is now known as ijohnson
[21:24] jdstrand, hmm, i'm playing with the account-control interface ... is there a chance we could get permission to "mkdir /home/$USER" ? it is kind of hard to create a valid user account with the interface the way it is atm
[21:25] (indeed it works if i tell useradd to not create/use a homedir, but that's kind of crippled in the end)
[22:25] ogra: I'll look into it as part of my 2.45 policy updates work. I think there is context I need to dig up
[22:25] thx !