[00:02] <mwhudson> does anyone know why 
[00:02] <mwhudson> lxc delete --force wtf-snapd; lxc launch ubuntu-daily:groovy wtf-snapd -c security.privileged=true; sleep 5; lxc exec wtf-snapd -- bash -c "apt-get update && apt-get install snapd"
[00:02] <mwhudson> might hang?
[00:03] <mwhudson> the privileged flag is important, and it seems to be something around mounting the snapd snap
[09:06] <mwhudson> related to the above https://paste.ubuntu.com/p/XPz7b5s5MY/
[09:34] <pedronis> mwhudson: I think something like that is done when you turn on parallel instances, but I probably lost the beginning of this conversation
[09:34] <pedronis> here
[09:34] <mwhudson> pedronis: lxc delete --force wtf-snapd; lxc launch ubuntu-daily:groovy wtf-snapd -c security.privileged=true; sleep 5; lxc exec wtf-snapd -- bash -c "apt-get update && apt-get install snapd" hangs
[09:35] <mwhudson> are parallel instances enabled by default these days
[09:35] <pedronis> they shouldn't
[09:35] <mwhudson> pedronis: although that's a bit of a red herring, just launch a privileged lxc ubuntu-daily:groovy and admire how much snapd is not working
[09:43] <pedronis> mwhudson: did it work before? I don't think we test privileged containers
[09:43] <mwhudson> pedronis: yeah it started failing a few days ago (it hangs in the subiquity github actions)
[09:43] <mwhudson> hm well
[09:44] <mwhudson> it possibly only shows up on upgrade, i don't think we depend on snapd working inside the container
[09:46] <pedronis> anyway, we do such a mount in another case, but this code has been there for a long time
[09:46] <pedronis> I'm trying to give you a pointer
[09:48] <pedronis> mwhudson: it's related to / itself not being mounted shared fwiw I think
[09:49] <pedronis> mwhudson: https://github.com/snapcore/snapd/blame/master/cmd/snapd-generator/main.c#L46
[09:50] <pedronis> mwhudson: this was the original PR https://github.com/snapcore/snapd/pull/4797
[09:52] <pedronis> mwhudson: basically if / is not mounted as shared we create a generated unit that remounts /snap as shared
[09:52] <pedronis> but we have been doing this for a long time
[10:12] <pedronis> mwhudson: so something other than snapd might have changed in that area?
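[Editor's note] The check pedronis describes — deciding whether / has shared mount propagation, which drives whether snapd-generator emits a unit to remount /snap as shared — can be illustrated by parsing `/proc/self/mountinfo`. This is only a sketch of the idea (snapd's actual generator is written in C, linked above); the sample mountinfo lines below are fabricated for illustration:

```python
def is_shared(mountinfo: str, mount_point: str) -> bool:
    """Return True if mount_point carries a "shared:N" propagation tag.

    In /proc/self/mountinfo, field 5 (index 4) is the mount point, and
    optional propagation tags follow field 6 until the "-" separator.
    """
    for line in mountinfo.splitlines():
        fields = line.split()
        if len(fields) < 7 or fields[4] != mount_point:
            continue
        for opt in fields[6:]:
            if opt == "-":
                break  # end of optional fields, no shared tag found
            if opt.startswith("shared:"):
                return True
        return False
    return False


# Two fabricated mountinfo lines: / is shared, /snap is not.
SAMPLE = (
    "21 1 8:2 / / rw,relatime shared:1 - ext4 /dev/sda2 rw\n"
    "50 21 7:0 / /snap rw,relatime - squashfs /dev/loop0 ro\n"
)

print(is_shared(SAMPLE, "/"))      # True
print(is_shared(SAMPLE, "/snap"))  # False
```

In a privileged lxc container the propagation state of / can differ from the host's, which is consistent with the hang showing up only with security.privileged=true.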
[11:39] <pedronis> mardy: hi, I re-reviewed https://github.com/snapcore/snapd/pull/10282
[12:34] <chaology> hi o/
[12:34] <chaology> does anyone know whether snaps eventually got confinement on Fedora using SELinux as the confinement backend? I know that AppArmor handles the confinement on Ubuntu based systems
[12:38] <chaology> from the following link and from the commit history of the snapd repo on github it looks like there has been a lot of work around SELinux, but I'm not experienced enough with SELinux to know for sure yet
[12:38] <chaology> link https://www.phoronix.com/scan.php?page=news_item&px=Snaps-Fedora-Arch-More
[12:39] <chaology> https://github.com/snapcore/snapd
[12:39] <chaology> https://github.com/snapcore/snapd/tree/master/sandbox/selinux
[12:41] <chaology> to be honest I just would like to know whether snaps on say fedora 34 are equal citizens to snaps on ubuntu with respect to their security via confinement
[12:42] <chaology> thanks for any information that anyone can provide
[12:57] <ijohnson> chaology: no, the selinux work on snapd was to provide policy for snapd itself to run and be packaged as a fedora package; snapd does not yet support using selinux as a backend for confinement, and it's unlikely that work will be scheduled anytime soon
[12:58] <ijohnson> chaology: what's more likely is that linux security module "lsm" stacking would enable folks on fedora to run apparmor stacked inside of selinux, such that snaps are confined the same on fedora as on ubuntu, just with fedora there would be another outer layer beyond apparmor
[13:38] <chaology> ijohnson: ah I see. thanks for the explanation
[13:38] <chaology> I have heard about LSM stacking, which would be cool. so hopefully then
[15:46] <ijohnson> pedronis: ah-ha I figured out the oom-killer thing with the spread test, it's because on those systems, an empty cgroup has 4k usage, so when in the test we try to create a cgroup with 500B limits, it triggers the oom killer on that slice
[15:47] <ijohnson> pedronis: I think this means we should enforce a minimum of 4k as the memory limit for quota groups to avoid this problem
[15:47] <pedronis> ijohnson: I suppose so
[15:48] <ijohnson> ok, I will hold off on filing the PR with the work-around until I first file a PR setting this lower limit I think
[15:48] <ijohnson> just to make it as easy as possible to follow
[17:00] <ijohnson> cachio: for #10298, we need to upload the snaps to the store
[17:01] <cachio> ijohnson, in this case we need a snapcraft.yaml
[17:01] <cachio> because it is required to build it in lp
[17:01] <ijohnson> cachio: ok, please ask for one in the PR then
[17:02] <cachio> is it not possible to install this locally?
[17:03] <cachio> ijohnson, ?
[17:03] <ijohnson> cachio: yes it needs to be pulled by snapd from the store
[17:03] <cachio> ok
[17:13] <cachio> ijohnson, I think I can upload the snap manually as it is just needed for amd64
[17:13] <cachio> right?
[17:13] <cachio> and i386
[17:13] <cachio> perhaps we can skip i386
[17:13] <cachio> otherwise I need to build that on launchpad
[17:19] <ijohnson> cachio: sure if you want to manually upload the snap that's fine, but it should be transferred to the test-snaps-canonical owner
[17:19] <ijohnson> cachio: and you will need to get permission to upload classic, you can talk to the store folks for that I think
[19:02] <pedronis> ijohnson|lunch: should I merge https://github.com/snapcore/snapd/pull/9932 ?
[19:07] <ijohnson|lunch> pedronis: yes that can be merged now 
[19:07] <ijohnson|lunch> Thanks 
[19:34] <cachio> ijohnson|lunch, I pushed the snaps for #10298
[19:34] <cachio> and also included a small fix
[19:34] <cachio> I already gave +1
[19:34] <cachio> need a second +1
[20:16] <pedronis> ijohnson|lunch: I merged the other test change as well, you have 2 open PRs now
[20:17] <ijohnson> pedronis: \o/ amazing, I can't remember the last time I had so few PRs open