[00:48] <mup> PR snapcraft#2423 closed: Do not use `bash` package name in staging <Created by jdahlin> <Merged by sergiusens> <https://github.com/snapcore/snapcraft/pull/2423>
[00:51] <mup> PR snapcraft#2289 closed: local source: only filter out snapcraft artifacts <do-not-merge-yet> <Created by kyrofa> <Closed by sergiusens> <https://github.com/snapcore/snapcraft/pull/2289>
[06:03] <brlin> https://github.com/ubuntu-core/hello-snapcraftio/pull/3
[06:03] <mup> PR ubuntu-core/hello-snapcraftio#3: Fix I18N <Created by Lin-Buo-Ren> <https://github.com/ubuntu-core/hello-snapcraftio/pull/3>
[06:08] <mborzecki> morning
[06:17] <brlin> G. afternoon
[06:28] <mup> PR snapd#6283 opened: osutil: do not import dirs <Simple 😃> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6283>
[07:39] <mup> PR snapd#6284 opened: spread: make opensuse-42.3 manual <Simple 😃> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6284>
[08:04] <pedronis> mvo: hi,  do you have time for a chat?
[08:08] <pstolowski> morning
[08:10] <mvo> pstolowski: good morning
[08:11] <mvo> pedronis: yeah, in 2min, just poking at the core18 image
[08:17] <pedronis> mvo: I'm in the standup
[08:21] <mup> PR snapd#6285 opened: selinux: package to query SELinux status and verify/restore file contexts <SELinux> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6285>
[08:22] <mborzecki> pstolowski: mvo: pedronis: morning guys
[08:26] <zyga> Hello
[08:28] <pedronis> mborzecki: zyga: hi
[08:43] <mborzecki> have you seen https://forum.snapcraft.io/t/snap-interface-to-run-smartctl/8929 ?
[08:47] <zyga> Interesting. I wonder what is the best approach: expose smartctl or provide a replacement that talks to snapd or other trusted helper
[08:58] <mborzecki> zyga: wonder what's the actual interface used by smartctl and hdparm, i'd say hdparm does ioctls, but smartctl?
[08:59] <mborzecki> zyga: hdparm: ioctl(3, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_NONE, cmd_len=16, cmdp="\x85\x06\x20\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x40\xe3\x00", mx_sb_len=32, iovec_count=0, dxfer_len=0, timeout=15000, flags=0, status=0x2, masked_status=0x1, msg_status=0, sb_len_wr=22, sbp="\x72\x01\x00\x1d\x00\x00\x00\x0e\x09\x0c\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x40\x50", host_status=0, driver_status=0x8, resid=0, duration=3914, info=SG_INFO_CHECK}) = 0
[09:01] <Chipaca> some of us wish you all a good morning
[09:01] <zyga> mborzecki: more ioctls :)
[09:01] <mborzecki> zyga: same for smartctl ioctl(3, SG_IO, {interface_id='S', dxfer_direction=SG_DXFER_FROM_DEV, cmd_len=16, cmdp="\x85\x08\x0e\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xec\x00", mx_sb_len=32, iovec_count=0, dxfer_len=512, timeout=60000, flags=0, dxferp="\x40\x00\xff\x3f\x37\xc8\x10\x00\x56\x88\x2a\x02\x3f\x00\x00\x00\x00\x00\x00\x00\x31\x53\x41\x37\x39\x4a\x53\x44\x30\x36\x35\x34"..., status=0, masked_status=0, msg_status=0, sb_len_wr=0, sbp="", host_status=0, driver_status=0, resid=0, duration=7, info=0}) = 0
[09:02] <zyga> mborzecki: sandboxing ioctls with seccomp is not great
[09:02] <zyga> SG_IO, the rest is in the struct
[09:02] <mborzecki> Chipaca: morning!
[09:02] <zyga> mborzecki: I'd vote for a trusted helper
[09:03]  * Chipaca would vote too, but then zyga might decide not to have the vote at the last minute
[09:03] <zyga> lol
[09:03] <zyga> Chipaca: last week I was impressed by the manner the MPs conduct themselves
[09:04] <zyga> Chipaca: guess this week reminds me all too much of home
[09:04] <mborzecki> haha
[09:06] <mborzecki> zyga: i suppose apparmor doesn't do much about ioctls either?
[09:06] <zyga> I don't think it does
[09:07] <zyga> apparmor can do only as much as the LSM layer allows
[09:07] <zyga> and even if, it must be implemented in the kernel
[09:07] <zyga> and even if, it must be implemented in the userspace
[09:07] <zyga> a trusted helper looks 10x more workable
[09:07] <Chipaca> zyga: https://www.youtube.com/watch?v=Tjp5OmoDYQM
[09:07] <mborzecki> and then it must reach distros
[09:07] <mborzecki> ehh
[09:07] <zyga> Chipaca: I saw that, shared that yesterday, brilliant work
[09:08] <Chipaca> zyga: :-)
[09:08] <Chipaca> but also, :-/
[09:08] <Chipaca> too close
[09:08] <zyga> Chipaca: my "radio" recently is https://parliamentlive.tv/Event/Index/d1114342-3529-43e7-86fa-2263df3271f2
[09:08] <zyga> Chipaca: at least it means the brits didn't lose their sense of humour :)
[09:09] <mborzecki> zyga: the struct is in /usr/include/scsi/sg.h, interesting read
[09:10] <zyga> mborzecki: fun fact, installing golang on fedora pulls in subversion and mercurial
[09:10] <zyga> what a blast from the past :)
[09:10] <mborzecki> hahaha
[09:10] <mborzecki> rich dependencies
[09:11] <mborzecki> zyga: another fun fact, installing snapd builddeps on fedora pulls in mongodb
[09:12] <zyga> why on earth?
[09:12] <mborzecki> zyga: hah, probably because we're pulling in bson
[09:12] <zyga> oh, right
[09:12] <zyga> ls
[09:14] <mborzecki> and the travis job in  https://github.com/snapcore/snapd/pull/6284 failed :/
[09:14] <mup> PR #6284: spread: make opensuse-42.3 manual <Simple 😃> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6284>
[09:18] <mup> PR snapd#6279 closed: interfaces: tweak deny-auto-connect policy tests <Created by mvo5> <Closed by mvo5> <https://github.com/snapcore/snapd/pull/6279>
[09:19] <mvo> a review for 5845 would be great
[09:22] <zyga> doing that now
[09:22] <Chipaca> zyga: #5845 still has a changes-requested from you on it
[09:22] <Chipaca> ah ok
[09:22]  * Chipaca leaves you to it
[09:22] <mup> PR #5845: interface: add new `{personal,system}-files`  interface <Created by mvo5> <https://github.com/snapcore/snapd/pull/5845>
[09:23] <pedronis> zyga: I answered/added a couple more comments to the features PR, is there anything we need to discuss on my comments?
[09:23] <zyga> pedronis: I think I need to take some action now, I'll do that soon
[09:23] <pedronis> ok, thx
[09:23] <zyga> thank you!
[09:26]  * zyga has sudden urge to refactor one bug in interface design
[09:26] <mborzecki> zyga: left a note under the topic
[09:27] <zyga> smartctl?
[09:27] <mborzecki> zyga: yup
[09:27] <zyga> k
[09:27] <mborzecki> zyga: btw. https://github.com/snapcore/snapd/pull/6283 iirc you've stumbled upon some issues importing osutil too
[09:28] <mup> PR #6283: osutil: do not import dirs <Simple 😃> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6283>
[09:28] <zyga> mborzecki: yes, I had an idea when you left a comment yesterday
[09:29] <zyga> done
[09:29] <zyga> is master still broken?
[09:30] <mborzecki> zyga: yeah, i've opened a PR to disable opensuse for now, but the travis job failed, couldn't fetch the log and everything is broken :/
[09:30] <zyga> heh, must be Tuesday
[09:30] <zyga> thank you
[09:43] <mborzecki> zyga: https://github.com/snapcore/snapd/pull/6284
[09:43] <mup> PR #6284: spread: make opensuse-42.3 manual <Simple 😃> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6284>
[09:46] <zyga> mvo: done
[09:46] <mvo> zyga: thank you!
[09:56] <pstolowski> mvo: thanks for commenting on that gpio attrs bug, i was (fruitlessly) trying to understand what went wrong...
[09:58] <pstolowski> mvo: i completely forgot it was limited to beta
[10:00] <mvo> pstolowski: yeah, the trouble is really that this runs with the broken code so refreshing out of it is hard
[10:01] <mvo> pstolowski: so let's see if that is good enough for them, if not we need to think harder :)
[10:14] <zyga> brb, quick coffee
[10:14] <zyga> mvo: as a "simple" workaround
[10:14] <zyga> disabling the snap should be enough
[10:15] <zyga> then you can refresh ok
[10:15] <zyga> and re-enable it
[10:16] <mvo> zyga: it's the gadget - will that also work?
[10:16] <zyga> oh,
[10:16] <zyga> I don't know
[10:16] <zyga> I didn't anticipate it was the gadget itself
[10:16] <zyga> one thing one can do is to disconnect that interface
[10:16] <mvo> zyga: aha, nice idea
[10:17] <mborzecki> #6284 is green, yay!
[10:17] <mup> PR #6284: spread: make opensuse-42.3 manual <Simple 😃> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6284>
[10:17] <mvo> zyga: feel free to commment in the bug (if you haven't already)
[10:17] <zyga> I haven't, let me try
[10:22] <mvo> ta
[10:24] <zyga> re
[10:31] <Chipaca> Dec 10 08:44:04 milesdavenport snapd[3255]: error: EOF
[10:32] <Chipaca> hmmm
[10:32] <Chipaca> :-)
[10:33] <zyga> ls
[10:34] <Chipaca> no such file or directory
[10:46] <mvo> Chipaca: heh, that's a useful error
[10:48] <mup> PR snapd#6284 closed: spread: make opensuse-42.3 manual <Simple 😃> <Created by bboozzoo> <Merged by bboozzoo> <https://github.com/snapcore/snapd/pull/6284>
[10:50] <pedronis> pstolowski: I made some comment in #6180
[10:50] <mup> PR #6180: snap/info: bind global plugs/slots to implicit hooks <Created by stolowski> <https://github.com/snapcore/snapd/pull/6180>
[10:51] <pstolowski> pedronis: ty
[10:58] <zyga> some progress :)
[10:58] <mup> PR snapd#6286 opened: release: support probing SELinux state <SELinux> <⛔ Blocked> <Created by bboozzoo> <https://github.com/snapcore/snapd/pull/6286>
[11:07] <pstolowski> pedronis: wrt the retry vs running tasks problem, i wonder if we shouldn't set the task back from Doing to Do status on Retry error with After>0
[11:07] <pedronis> pstolowski: I haven't looked into the issue at all so far
[11:07] <pedronis> you'll need to tell me more
[11:07] <pedronis> but almost lunch here
[11:09] <pstolowski> pedronis: sure, enjoy, let me know when you're back and we can discuss
[11:14] <cachio> mvo, hey
[11:15] <cachio> about the entropy on ubuntu core 18
[11:15] <cachio> any idea how to deal with this?
[11:17] <Chipaca> cachio: y u hate mvo
[11:17] <mvo> cachio: it's hard; if you use spread you can use SPREAD_QEMU_GUI=1 in the environment and then type some keys when the qemu window comes up, keystrokes in the terminal are fed to the entropy pool
[11:17] <mvo> cachio: there is also a way in qemu to use the entropy of the host, however I don't think spread is exposing this :/
[11:17] <mvo> cachio: i.e. this would need a spread change
[11:18] <cachio> mvo, ok, I'll research a bit more
[11:18] <Chipaca> mvo: cachio: spread looks up kvm in PATH; I have a ~/bin/kvm that adds -smp 4
[11:19] <Chipaca> so if you know what options to pass kvm for it to pick up the host's entropy, you can do this
[11:20] <mvo> cachio, Chipaca thanks! please try "-device virtio-rng-pci"
[11:20] <Chipaca> https://pastebin.ubuntu.com/p/yg69DwrqTM/
[11:20] <Chipaca> ^ that's my ~/bin/kvm
[11:20] <Chipaca> but it doesn't need to be this fancy :-)
[11:22] <Chipaca> mvo: cachio: e.g., https://pastebin.ubuntu.com/p/jfw8GqVNV3/
[11:22] <mup> PR snapd#6287 opened: httputil: retry on temporary net errors <Created by mvo5> <https://github.com/snapcore/snapd/pull/6287>
[11:22] <cachio> it is hanging with -device virtio-rng-pci
[11:22] <cachio> Chipaca, if I don't use -nographic
[11:22] <cachio> I can manually add entropy by moving the mouse and with keystrokes
[11:23] <cachio> but not sure if it will work the whole run
[11:24] <mvo> cachio: once it's provided it should work
[11:24] <mborzecki> cachio: try using -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0
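Since both pastebins above have since expired, here is a hypothetical `~/bin/kvm` wrapper along the lines Chipaca describes, with mborzecki's virtio-rng flags folded in. The `-smp 4` option is mentioned in the log; the real kvm path and everything else is an assumption, not the original script.

```sh
#!/bin/sh
# Hypothetical ~/bin/kvm wrapper: spread looks up `kvm` in PATH, so placing
# this ahead of the real binary lets you inject extra options into every run.
# The rng options feed host entropy to the guest via virtio-rng.
exec /usr/bin/kvm \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -smp 4 \
    "$@"
```

This avoids patching spread itself: the guest's snapd.seeded can then find entropy without anyone wiggling the mouse in a qemu window.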
[11:24] <mvo> cachio: I mean, on subsequent reboots it should reuse things
[11:25] <cachio> mborzecki, running this
[11:25] <cachio> mborzecki, hanging
[11:27] <cachio> mvo, trying without -nographic
[11:32] <cachio> mvo, https://paste.ubuntu.com/p/x96XCC69wv/
[11:32] <cachio> snapd.seeded.service failed
[11:32] <mvo> cachio: what is the output of "journalctl -u snapd.seeded.service" ?
[11:34] <cachio> mvo, https://paste.ubuntu.com/p/MjP8mMWGdB/
[11:45] <zyga> I have a good grasp of how to implement refreshes during app downtime now
[11:46] <zyga> need to write down my thoughts
[11:55] <cachio> mvo, tests already running on core 18
[11:55] <mvo> sergiusens: the seeded failure is curious, I'm looking at it now
[11:55] <cachio> I'll let you know the results
[11:56] <sergiusens> mvo: not related to me I hope 🙂
[11:56] <cachio> mvo, wrong sergio I think
[11:56] <mvo> sergiusens: *cough* sorry
[11:56] <mvo> cachio: yeah
[11:58] <cachio> i am gonna create a new vm to see if I can reproduce it
[11:59] <mvo> cachio: I think I have an idea
[11:59] <mvo> cachio: about this failure, working on a possible fix
[11:59] <cachio> mvo, yes same error
[12:00] <mvo> cachio: slightly strange that we don't see this all the time in our spread runs :/
[12:01] <cachio> mvo, yes, but the image is not created exactly the same
[12:01] <cachio> and the hypervisor is not the same as well
[12:03] <cachio> mvo, this is the snapd log https://paste.ubuntu.com/p/X465Jd8Vbs/
[12:04] <cachio> I see some errors
[12:04] <cachio> not sure if they are important
[12:05] <mvo> cachio: can you give me the matching journalctl -u snapd.seeded.service?
[12:05] <mvo> cachio: so that I can compare timestamps?
[12:06] <cachio> https://paste.ubuntu.com/p/R5Hg2DBX4f/
[12:06] <cachio> mvo, I had both
[12:06] <mvo> cachio: thanks, looking
[12:08] <mvo> cachio: can I also get the global journal log please?
[12:08] <mvo> cachio: it looks like something is killing snapd
[12:08] <mvo> cachio: and I wonder what
[12:09] <cachio> sure
[12:10] <cachio> mvo, https://paste.ubuntu.com/p/QGvrtRSThX/
[12:11] <cachio> systemctl restart snapd.socket
[12:11] <cachio> I see this
[12:11] <cachio> snapd[998]: main.go:147: Exiting on terminated signal.
[12:11] <cachio> Dec 11 11:59:23 localhost systemd[1]: Stopping Snappy daemon...
[12:12] <mvo> cachio: so the socket got restarted it seems
[12:12] <cachio> mvo, yes
[12:13] <mvo> cachio: thanks! so this is a freshly generated image, right? we did not modify it as part of some test setup except for adding the user to run the tests?
[12:14] <mvo> cachio: I think I can reproduce this, let me dig some more
[12:14] <sergiusens> mvo: since you have my attention, may I ask if you had time to look at the .snapcraft bug task I had?
[12:15] <cachio> mvo, fresh fresh
[12:15] <cachio> I generated it with ubuntu-image yesterday
[12:15] <mvo> sergiusens: yes, I replied there - is there something missing?
[12:16] <mvo> cachio: I can reproduce now, I take it from here, still not clear yet about the why but it seems to be a bug in the firstboot setup
[12:16] <sergiusens> mvo: nah, I am just getting started and haven't looked yet, but took the opportunity to ask as we've established a link
[12:17] <cachio> mvo, good, nice to find it now :)
[12:17] <mvo> cachio: yeah, I wonder if this is new or if we had it in the previous image as well
[12:17] <cachio> I just tried the stable one and I saw the same error
[12:17] <cachio> last stable published
[12:18] <cachio> mvo, so I think it is not new
[12:18] <cachio> :s
[12:18] <mvo> cachio: :/ at least we found it now
[12:18] <mvo> cachio: good, thanks for double checking
[12:18] <cachio> mvo, np
[12:18] <cachio> mvo, I'll be afk 15 minutes
[12:18] <cachio> need to take my son to kindergarten
[12:19] <cachio> mvo, tests are running on that image
[12:19] <mvo> cachio: no worries, I don't need anything at this point, thanks
[12:20] <mvo> cachio: annoying - it's a race, I just tried again and it worked :(
[12:21] <cachio> mvo, yes, do you want the line I use to start the image?
[12:22] <cachio> I don't know if that could help
[12:22] <mvo> cachio: it's fine, I can reproduce in ~30% of the runs it seems
[12:22] <cachio> mvo, nice
[12:22] <cachio> I'll be back in 15 minutes
[12:22]  * cachio afk
[12:33] <zyga> pedronis: I wrote https://forum.snapcraft.io/t/bug-saves-are-blocked-to-snap-user-data-if-snap-updates-when-it-is-already-running/3226/19?u=zyga - I feel comfortable having a discussion about the feature when you have the time
[12:33] <zyga> pedronis: I will now resume work on my branches
[12:33] <zyga> mborzecki: small leftover I didn't notice: https://github.com/snapcore/snapd/pull/6288
[12:33] <mup> PR #6288: cmd/snap-confine: refactor call to snap-update-ns --user-mounts <Per-user mount ns  🐎> <Created by zyga> <https://github.com/snapcore/snapd/pull/6288>
[12:34] <mup> PR snapd#6282 closed: tests: force test-snapd-daemon-notify exit 0 when the interface is not connected <Created by sergiocazzolato> <Merged by bboozzoo> <https://github.com/snapcore/snapd/pull/6282>
[12:34] <mup> PR snapd#6288 opened: cmd/snap-confine: refactor call to snap-update-ns --user-mounts <Per-user mount ns  🐎> <Created by zyga> <https://github.com/snapcore/snapd/pull/6288>
[12:34] <pedronis> zyga: thx
[12:35] <zyga> pedronis: I will ask chipaca and jdstrand to review the draft
[12:35] <pedronis> ok
[12:36] <sergiusens> mvo: if my reply to that bug is good, then I will implement like that
[12:43] <mvo> sergiusens: what was the bugnumber again?
[12:43] <sergiusens> mvo: https://bugs.launchpad.net/snapcraft/+bug/1805219
[12:43] <mup> Bug #1805219: .snapcraft <19.04> <19.04-external> <Snapcraft:Triaged> <https://launchpad.net/bugs/1805219>
[12:47] <pedronis> pstolowski: I'm looking at the taskrunner, and I'm not sure I understand how the bug happens actually; running is not based on status, it's based on whether there's a goroutine/tomb
[12:49] <pstolowski> pedronis: yes i know, i realized that afterwards, but the idea was part of a "bigger" plan to prevent two tasks from ending up in "Doing" state in case of snapd restart. but in the end i'm back at square one and no longer sure about the proper fix atm tbh
[12:50] <pstolowski> pedronis: i can explain some more things in a HO
[12:50] <pedronis> pstolowski: I have a meeting very soon, we can chat after the standup? about this and a bit about hotplug status?
[12:50] <ackk> Saviq, hi, is there a way to allow a user to use multipass without sudo-ing it?
[12:50] <pstolowski> pedronis: sure
[12:51] <pedronis> thx
[12:53] <zyga> Chipaca: hey
[12:53] <zyga> do you have 10 minutes to discuss something?
[12:55] <zyga> mborzecki: replied on https://github.com/snapcore/snapd/pull/6288
[12:55] <mup> PR #6288: cmd/snap-confine: refactor call to snap-update-ns --user-mounts <Per-user mount ns  🐎> <Created by zyga> <https://github.com/snapcore/snapd/pull/6288>
[12:56] <Saviq> ackk: we normally use the `sudo` group, so if you add the user to that group (and potentially create that group if not there), it will work
[12:56] <Saviq> ackk: we're fixing that to try 'adm' https://github.com/CanonicalLtd/multipass/pull/513, too - but ultimately it will be world-writable as security on it will be on a higher level (RPC encryption certificates)
[12:56] <mup> PR CanonicalLtd/multipass#513: Fix daemon missing group <Created by townsend2010> <https://github.com/CanonicalLtd/multipass/pull/513>
[12:59] <ackk> Saviq, so if I create a "multipass" group and chgrp'd the unix socket to it that would work for users in that group. right?
[12:59] <ackk> Saviq, I'm trying not to sudo our jenkins user
[13:00] <Saviq> ackk: no, "multipass" is not a group we use
[13:00] <Saviq> ackk: in the current builds, "sudo" is the only group we can do - with the above PR, "adm" will also work
[13:00] <ackk> Saviq, ah I see
[13:00] <Saviq> ackk: you could add a post-start command in the multipassd service
[13:01] <Saviq> to change the owner/mode of the socket
[13:01] <mborzecki> huh, our centos images have selinux in enforcing mode by default
[13:02] <mborzecki> off to pick up the kids
[13:02] <Saviq> ackk: `systemctl edit snap.multipass.multipassd` and add a chown you like https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStartPre=
[13:02] <Saviq> ackk: the socket is /run/multipass_socket
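A sketch of the kind of drop-in that `systemctl edit snap.multipass.multipassd` would create for this. The group name and the wait loop are illustrative assumptions, not an official multipass recipe; as the rest of this log shows, a bare ExecStartPost chgrp can fire before the daemon has created the socket, so some form of waiting is needed.

```ini
# /etc/systemd/system/snap.multipass.multipassd.service.d/override.conf
# (the file `systemctl edit snap.multipass.multipassd` opens, empty at first)
# Illustrative only: group name "adm" and the 10s wait are assumptions.
[Service]
ExecStartPost=/bin/sh -c 'for i in $(seq 1 20); do [ -S /run/multipass_socket ] && break; sleep 0.5; done; chgrp adm /run/multipass_socket && chmod g+rw /run/multipass_socket'
```

`systemctl cat snap.multipass.multipassd` afterwards shows the merged result, and `snap restart multipass` applies it.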
[13:05] <ackk> Saviq, won't the systemd service be regenerated on snap updates?
[13:06] <zyga> ackk: not today
[13:06] <ackk> ah cool
[13:07] <ackk> Saviq, is it correct that I get an empty file when editing the service?
[13:26] <mup> PR snapd#6289 opened: cmd/snap-confine: remove SC_NS_MNT_FILE <Simple 😃> <Created by zyga> <https://github.com/snapcore/snapd/pull/6289>
[13:29] <Saviq> ackk: yes
[13:29] <Saviq> ackk: systemctl lets you edit an override file
[13:30] <Saviq> ackk: you can `systemctl cat snap.multipass.multipassd` afterwards to see the results
[13:35] <mup> PR snapd#6290 opened: cmd/snap-confine: fix typo "a pipe" <Simple 😃> <Created by zyga> <https://github.com/snapcore/snapd/pull/6290>
[13:36] <ackk> Saviq, ah I see thanks
[13:38] <Chipaca> zyga: I do have 10 minutes to discuss somehting
[13:38] <zyga> hey
[13:38] <zyga> great
[13:38] <zyga> do you prefer HO or here?
[13:38] <Chipaca> zyga: here
[13:38] <zyga> ok
[13:38] <zyga> I wrote some notes on the forum
[13:38] <zyga> https://forum.snapcraft.io/t/bug-saves-are-blocked-to-snap-user-data-if-snap-updates-when-it-is-already-running/3226/19?u=zyga
[13:38] <zyga> could you please read that and tell me when we can discuss
[13:39] <zyga> I'm interested in your opinion on snap refresh task set design
[13:39] <zyga> if you think the proposal is sound or if you have better ideas
[13:39] <Chipaca> ok, let me check, i think i already read it
[13:39] <jdstrand> mborzecki, zyga: if you're talking about ioctls for smartctl, that is something I need to look into. zyga is right. ioctls are a nightmare to deal with
[13:40] <zyga> jdstrand: hello
[13:40] <jdstrand> mborzecki, zyga: or was it btrfs-- that's the other one I need to look at
[13:40] <zyga> jdstrand: I agree and I don't believe we should attempt to expose the real smartctl
[13:40] <jdstrand> anyway. nightmare, yes
[13:40] <zyga> yes
[13:40] <zyga> nightmare(2) - alias for ioctl(2)
[13:41] <zyga> jdstrand: I will have a question for you if you have a moment as well
[13:41] <zyga> jdstrand: not sure if this is the best time
[13:41] <jdstrand> there is stuff we could do with apparmor. it's long, long been on the apparmor roadmap to handle those, but we don't (beyond mediating caps) at this time. the language would probably make it easier to work with as a policy author than what we have now with seccomp, but still would not be great
[13:42] <jdstrand> zyga: snap restarts? I saw the thread. I haven't read email yet today
[13:42] <jdstrand> zyga: regardless, ask away
[13:43] <zyga> jdstrand: I was surprised that snap-seccomp profile compilation is very slow; do you think we should consider introducing caching?
[13:44] <zyga> jdstrand: where snap-seccomp would sort the system calls, discard comments, and use that as cache key
[13:44] <zyga> (regardless of snap name, so various snaps may end up using the same binary cache)
[13:44] <zyga> jdstrand: unlike apparmor, the cache would be our responsibility to maintain
[13:45] <jdstrand> zyga: how slow are we talking?
[13:46] <jdstrand> zyga: but even without answering that, I think the biggest bang for the buck is only compiling once per set of interface connections. like what I maintain is the best thing to do with apparmor
[13:46] <zyga> I measured a quarter-second compilation for nearly every profile on a Coffee Lake i9; interestingly, the apparmor parser was significantly faster even when a profile was actually compiled
[13:46] <zyga> I agree with that, I started some early prototyping but put it on hold now
[13:46] <jdstrand> zyga: that's a great improvement with or without caching
[13:46] <zyga> but I'd love to get to that
[13:46] <zyga> though caching seems like an order of magnitude easier to implement for _some_ improvement
[13:46] <Chipaca> zyga: how reliable is the release-agent protocol?
[13:47] <zyga> Chipaca: the protocol is that the kernel clones a userspace task with an argument and... that's that
[13:47] <zyga> (nothing else happens)
[13:48] <jdstrand> quarter second is not terrible, but if you have 100 profiles, that's 25 seconds. but, if you have 10 profiles with 10 interfaces (eg a typical desktop app), that is also 100 seconds
[13:49] <zyga> jdstrand: I believe seccomp is likely to have cache hits
[13:49] <zyga> since unlike apparmor it has no paths to spoil the cache
[13:49] <jdstrand> zyga: this isn't an either/or
[13:49] <Chipaca> zyga: it looks like we'll need a lock, either global or per /run/snapd/runctl/snap.$SNAP_INSTANCE_NAME
[13:49] <jdstrand> yes, I think caching could benefit
[13:49] <jdstrand> even would benefit
[13:49] <zyga> Chipaca: we have locks at every level, can you be specific when you think a lock should be acquired
[13:50] <Chipaca> zyga: there's one step which is "After populating the freezer cgroup write /run/snapd/runctl/snap.$SNAP_INSTANCE_NAME.running"
[13:50] <zyga> ah
[13:50] <zyga> yes, we hold a lock at that phase
[13:50] <Chipaca> zyga: there's one step which is "if /run/snapd/runctl/snap.$SNAP_INSTANCE_NAME.running is present, postpones the [refresh]"
[13:50] <zyga> the per-instance lock
[13:50] <Chipaca> zyga: if the second one is run while the first one is populating the freezer, things might be interesting
[13:51] <zyga> I see, so you're looking at the refresh racing
[13:51] <zyga> that's a good point
[13:51] <zyga> yes, we should grab the lock before manipulating such files
[13:51] <jdstrand> but if you took that delayed compilation off the back burner onto the front, you should be able to get two birds with one stone (seccomp *and* apparmor) and likely make the common case faster (ie, you could work on caching second). really, for caching I think you want this, otherwise you are going to be maintaining a forest of caches for all the intermediate steps of going from template to 10 interface connections
[13:52] <zyga> jdstrand: I don't disagree about that
[13:53] <zyga> jdstrand: I would like to work on that after the refresh work
[13:53] <zyga> likely january
[13:53] <zyga> jdstrand: I wrote a prototype showing how significant the improvements were
[13:53] <zyga> jdstrand: I also came to a realisation that we made a mistake in interface design
[13:53] <jdstrand> the problem with caching is increased complexity. there is an opportunity to load the wrong cache. those would be fun and confusing bugs for people
[13:53] <zyga> jdstrand: that we can rectify to have cleaner and faster code still
[13:54] <zyga> jdstrand: yes, though I'm a fan of content-based caches
[13:54] <zyga> jdstrand: which are easier to work with
[13:54] <zyga> and it would especially be suitable to reuse that we are likely to see with seccomp
[13:55] <zyga> jdstrand: as for that mistake in interfaces, I will make a PR if I can today, definitely something in the next few days because it's bugging me more and more, seeing the amount of code we devote to working around it
[13:57] <jdstrand> zyga: so, the bottom line for me is that I'm not opposed to caching but I'm way more in favor of adding delayed compilation that would help both apparmor and seccomp. I worry that if we add caches, we'll put off the delayed compilation further and further (we've known about this a long time)
[13:58] <jdstrand> that's why I'm harping on it. while I believe we'll have bigger overall gains with delayed compilation (which then caching could be added in the fullness of time) that isn't my decision though.
[13:59] <jdstrand> zyga: while it isn't strictly required for the caching work, one thing that has been on the non-official roadmap is organizing seccomp calls by frequency of use, since that will speed up snaps at runtime
[14:00] <jdstrand> zyga: if we grouped (even some of) that into this work, we could avoid a potentially painful recompilation flag day down the line
[14:01] <zyga> jdstrand: right, I'm happy to get the general perf improvements in place at the earliest opportunity
[14:01] <jdstrand> zyga: eg, open, read, write are all *highly* used, but an alphabetical sorting puts them after, say, capget
[14:02] <zyga> (standup time)
[14:04] <jdstrand> zyga: I don't think this has to be terribly accurate. eg, strace like 10 popular snaps (5 desktop and 5 server/iot) and get some stats and put the top 5 first (sorted relative to each other), the next top 20 perhaps alphabetically relative to each other, and then the rest alphabetical
[14:04] <jdstrand> zyga: this is within the binary cache, not the src of course
[14:04] <jdstrand> zyga: something like that. we can fine tune more later, but just something like that would give nice perf gains
[14:05] <jdstrand> zyga: if you were to do the caching work and incorporate that, I would probably stop harping on delayed compilation
[14:06] <jdstrand> zyga: well, I won't promise that... I won't do it as much ;)
[14:07] <jdstrand> zyga: of course, if we are 'good enough' with this (eg, include postgres, nginx/apache, mysql, spotify, chromium, etc) then there may be no point to revisit down the line
[14:12] <jdstrand> zyga: it would be interesting to profile snap-seccomp to understand where the cost is. perhaps there are gains simply in making the code more efficient
[14:12] <zyga> yeah, that's true
[14:12] <zyga> we should measure that
[14:13] <jdstrand> zyga: over the years we (and by we I mean jj) have cut apparmor profile times by something like 80%+
[14:13] <jdstrand> maybe even more
[14:19] <mborzecki> cachio: can you rebuild centos images with selinux set to permissive mode?
[14:20] <cachio> mborzecki, sure
[14:21] <mborzecki> cachio: thanks!
[14:35] <cachio> mborzecki, test-centos-1
[14:35] <cachio> it is ready
[14:37] <mborzecki> cachio: thanks, checking now
[14:41] <cachio> mborzecki, should we use this one instead of the current one?
[14:42] <greyback> @zyga on the missing seccomp profile for multipass - I'll work on it. But I've a question - is there a way multipassd can create a child process with a different seccomp profile to its own?
[14:42] <zyga> greyback: only a subset (more restrictive profile)
[14:42] <zyga> greyback: my point is that I'd be happier with a really rich profile over a profile that is 100% open
[14:42] <zyga> greyback: I doubt running kvm requires that much
[14:43] <zyga> (that is, unbound set)
[14:43] <greyback> zyga: have we an example of that subset profile anywhere? I'm happy to give it a go
[14:43] <zyga> I mean
[14:44] <zyga> you can confine it more than it was before
[14:44] <zyga> just apply another seccomp profile as usual
[14:44] <zyga> they stack in the kernel
[14:44] <zyga> (all profiles will run, most restrictive outcome is selected)
[14:45] <greyback> oh ok. This example seccomp for qemu looked pretty big to me: https://github.com/qemu/qemu/blob/cf9dc9e4807464a9d0b3d7368b818323e14921eb/qemu-seccomp.c#L34 but true, it still does limit it
[14:46] <mborzecki> cachio: the image looks ok to me, can you replace the current one?
[14:47] <cachio> mborzecki, sure, I'll make a run first
[14:48] <mborzecki> cachio: ta
[14:50] <jdstrand> greyback: yes, what zyga said. I suggest comparing the above link to what is in the default template, then add the missing calls to the multipass-support interface. then we can go from there in the PR review
[14:50] <greyback> jdstrand: ack, thanks
[14:51] <jdstrand> greyback: btw, do I still owe you an email?
[14:53] <greyback> jdstrand: nope, I've figured out qn 1 myself and implemented it in that PR. Qn 2, I think I also know the answer (but might run it by you as a sanity check, when the time comes)
[14:53] <greyback> mostly hoping not to be too security paranoid
[14:58] <jdstrand> greyback: feel free to run stuff by me, yes. with the holidays and sprints, etc, if I haven't answered you, feel free to ping me :)
[14:58] <greyback> jdstrand: thanks
[15:05] <mborzecki> mvo: zyga: funny failure in restoring google:ubuntu-18.04-64:tests/main/degraded https://paste.ubuntu.com/p/ypgPkCp4P6/ probably related to KillMode=process
[15:06] <mvo> mborzecki: hm, I thought we now do KillMode=process, so fc-cache should survive this now :/
[15:07] <zyga> mmm
[15:07] <zyga> yes
[15:07] <zyga> I have the same feeling
[15:09] <mborzecki> mvo: zyga: hm i merged master to this branch earlier today
[15:11] <ackk> Saviq, is there a way to debug why snapcraft is failing to use multipass? All I get is "launch failed: Connect Failed" but I can't see why it is failing
[15:13] <mborzecki> zyga: mvo: and I can see it in the source code
[15:13] <Saviq> ackk: we're now trying to improve the error messaging around it
[15:13] <Saviq> ackk: `ls -l /run/multipass_socket`?
[15:13] <zyga> brb for dinner
[15:14] <ackk> Saviq, ah, it's back to root:sudo
[15:14] <ackk> Saviq, I have ExecStartPost=chgrp multipass /run/multipass_socket in the override file but it doesn't seem to be working
[15:17] <Saviq> ackk: you can check in journal if that command results in an error
[15:18] <ackk> Saviq, I don't see anything related
[15:18] <Saviq> ackk: `systemctl status snap.multipass.multipassd` would also show whether it ran that command - maybe it did too early?
[15:19] <ackk> Saviq, https://paste.ubuntu.com/p/xv5trtZpZX/
[15:20] <jdstrand> greyback: ok, well, you coaxed an email out of me :)
[15:20] <Saviq> ackk: might be it ran too early
[15:20] <Saviq> ackk: http://paste.ubuntu.com/p/hn78NJw7CW/ worked for me
[15:21] <ackk> Saviq, ok I'll try that
[15:21] <ackk> thanks
[15:21] <jdstrand> greyback: it was very cunning how you said you didn't need it and then got me to do it. well played ;)
[15:22] <jdstrand> greyback: seriously though, hopefully it helps some
[15:22] <greyback> jdstrand: hah! reverse psychology works every time :)
[15:22] <jdstrand> :)
[15:22] <mup> PR snapcraft#2424 opened: nodejs plugin: fail gracefully when a package.json is missing <Created by sergiusens> <https://github.com/snapcore/snapcraft/pull/2424>
[15:25] <ackk> Saviq, sigh, set it to sleep 5, group is still sudo
[15:25] <Saviq> ackk: did you restart?
[15:25] <ackk> yes
[15:25] <Saviq> ackk: if you apply my change verbatim, does it work?
[15:25] <ackk> (snap restart multipass)
[15:26] <ackk> Saviq, yes
[15:27] <Saviq> ackk: then something's wrong with your chgrp
[15:27] <mup> PR core18#96 opened: run-snapd-from-core: fix race when restarting snapd.socket <Created by mvo5> <https://github.com/snapcore/core18/pull/96>
[15:27] <ackk> Saviq, https://paste.ubuntu.com/p/gx66QysThT/
[15:27] <Saviq> ackk: in `systemctl status ...` there's a "Process: …" line that says what command was executed for ExecStartPost and its success state
[15:28] <ackk> Saviq, I don't see that
[15:29] <mvo> cachio: https://github.com/snapcore/core18/pull/96 <- this fixed it for me
[15:29] <mup> PR core18#96: run-snapd-from-core: fix race when restarting snapd.socket <Created by mvo5> <https://github.com/snapcore/core18/pull/96>
[15:29] <mvo> mborzecki: still slightly strange
[15:32] <ackk> Saviq, ah, found the issue. I didn't have the ExecStartPost in a [Service] section
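The fix ackk found can be sketched like this (written to a temp dir purely for illustration; the real drop-in would live under /etc/systemd/system/snap.multipass.multipassd.service.d/, and the filename here is hypothetical):

```python
# Sketch of a systemd drop-in override: directives such as ExecStartPost
# are [Service] options, so a line outside any section header is silently
# ignored by systemd -- which is exactly the symptom above.
import tempfile
from pathlib import Path

dropin = (
    "[Service]\n"
    "ExecStartPost=/bin/chgrp multipass /run/multipass_socket\n"
)

# Illustrative location; in practice this is a directory under
# /etc/systemd/system/ named after the service, plus daemon-reload.
dropin_dir = Path(tempfile.mkdtemp()) / "snap.multipass.multipassd.service.d"
dropin_dir.mkdir(parents=True)
(dropin_dir / "override.conf").write_text(dropin)

print((dropin_dir / "override.conf").read_text())
```

After placing the real file, `systemctl daemon-reload` and a service restart are needed for the override to take effect.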
[15:32] <cachio> nice
[15:32] <cachio> mborzecki, centos-7 image ready
[15:33] <cachio> mvo, is there any way to test it?
[15:33] <cachio> are you creating a new beta core?
[15:37]  * cachio lunch
[15:38] <ackk> Saviq, sorry, one more question. how does multipass pick the VM network?
[15:39] <ackk> (and how do I know which one it is)
[15:39] <Saviq> ackk: it's a random subnet from the private space, checked for conflicts
[15:40] <ackk> Saviq, is there a command to get which one it is?
[15:40] <mvo> cachio: you can "sudo snapcraft" from this branch - this will create a custom core18. then you can run "ubuntu-image ubuntu-core-18-amd64.model --channel=edge --extra-snaps /path/to/core18_....snap"
[15:41] <mvo> cachio: that's what I did - if it works for you, please comment in the bug
[15:42] <Saviq> ackk: there's a file in /var/snap/multipass/common - it has "subnet" in its name
[15:42] <ackk> Saviq, thanks
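Saviq's pointer can be turned into a small lookup (a hedged sketch: the path comes from the chat, and the exact filename is not specified, so this just globs for anything with "subnet" in its name and degrades gracefully when multipass isn't installed):

```python
# Find the file recording which subnet multipass picked for its VMs.
# Assumption from the chat: it lives somewhere under
# /var/snap/multipass/common with "subnet" in its name.
from pathlib import Path

base = Path("/var/snap/multipass/common")
matches = sorted(base.glob("**/*subnet*")) if base.is_dir() else []

if matches:
    for path in matches:
        print(path, "->", path.read_text().strip())
else:
    print("multipass data dir not found or no subnet file present")
```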
[15:44] <ackk> Saviq, ah, so the proxy set in multipass is not pushed to VMs, is it?
[15:47] <Saviq> ackk: hmm indeed we're not
[15:47] <ackk> Saviq, any way I can inject it?
[15:47] <Saviq> ackk: can you please file an issue in https://github.com/CanonicalLtd/multipass/issues/new
[15:47] <Saviq> sergiusens: is there a way to pass a cloud-init file to multipass on launch?
[15:49] <mborzecki> cachio: thanks, i've restarted the affected PRs
[15:50] <ackk> Saviq, https://github.com/CanonicalLtd/multipass/issues/522
[15:54] <Saviq> ackk: thanks, the only thing I can think of is if you monkey-patch snapcraft/internal/build_providers/_base_provider.py to include proxy settings
[15:54] <ackk> Saviq, yeah but I can't do it in the snap, can I?
[15:55] <Saviq> ackk: right, you can't, TBH I think this is also a snapcraft issue - that's what should ensure that proxy settings are passed into the build environment
[15:56] <Saviq> like, you shouldn't even need to know multipass is in the mix
[15:56] <ackk> right. ideally if snapd has a proxy set (or knows of one from /etc/environment), all snaps should use it without extra config for each one
[15:58] <Saviq> yeah probably
[16:02] <ackk> sergiusens, is there currently a way to do that in snapcraft? ^ should I file a bug about it?
[16:08] <sergiusens> ackk: a bug would be fine
[16:08]  * zyga sees jdstrand typing on the forum and cannot wait to see what's coming 
[16:10] <mup> PR snapd#6266 closed: tests: make security-device-cgroups-{devmode,jailmode} work on arm devices <Created by sergiocazzolato> <Merged by sergiocazzolato> <https://github.com/snapcore/snapd/pull/6266>
[16:10] <mvo> Chipaca: the CLA checker is acting up: https://api.travis-ci.org/v3/job/466575606/log.txt
[16:10] <mvo> Chipaca: KeyError: 'canonical'
[16:11] <Chipaca> ?
[16:11]  * Chipaca looks
[16:11] <zyga> mvo: I saw that and I wonder if the issue is related to travis-side caching
[16:11] <zyga> if it asked and never bothered to ask again
[16:11] <mvo> Chipaca: this is in https://github.com/snapcore/snapd/pull/6252
[16:11] <mvo> zyga: no worries
[16:11] <mup> PR #6252: userd: handle help urls which requires prepending XDG_DATA_DIRS <Created by kenvandine> <https://github.com/snapcore/snapd/pull/6252>
[16:11] <mvo> Chipaca: ideas welcome :)
[16:13] <Chipaca> mvo: https://launchpad.net/~canonical
[16:13] <Chipaca> mvo: that's a private team
[16:13] <pedronis> always has been
[16:13] <Chipaca> so that check won't work ever
[16:13] <zyga> ah
[16:13] <zyga> my bad
[16:14] <pedronis> how was it working before?
[16:14] <Chipaca> kenvandine: what have you done :-)
[16:14] <zyga> I merged the PR adding that feature
[16:14] <zyga> I understand now, it works if you are a member
[16:14] <zyga> so works locally for testing
[16:14] <kenvandine> it passed in CI though
[16:14] <Chipaca> pedronis: it wasn't: https://github.com/snapcore/snapd/pull/6253
[16:14] <mup> PR #6253: Members of canonical LP group should pass CLA check <Created by kenvandine> <Merged by zyga> <https://github.com/snapcore/snapd/pull/6253>
[16:14] <kenvandine> i thought it did anyway
[16:14]  * Chipaca looks
[16:15] <Chipaca> kenvandine: https://travis-ci.org/snapcore/snapd/jobs/461925801 agrees with you
[16:15] <Chipaca> strange
[16:16] <Chipaca> I don't think it should work, with it being private, but now i don't know
[16:17] <Chipaca> i can confirm it returns a 404 here
[16:18] <Chipaca> maybe there was a bug or sth
[16:19] <Chipaca> in any case
[16:19] <Chipaca> kenvandine: mvo: if this is for 6252, merging master should let it pass
[16:19] <Chipaca> actually, it should already work
[16:19]  * Chipaca looks
[16:19] <mvo> Chipaca: I thought I just did merge master into the PR from ken
[16:20]  * Chipaca debugs
[16:21] <ijohnson> do we have a table/webpage somewhere that lists across distros snapd security confinement supported? I.e. Ubuntu supports everything, Arch supports apparmor, Debian doesn't support apparmor, XYZ supports seccomp, etc.?
[16:25] <zyga> ijohnson: we have snap debug sandbox-features
[16:26] <zyga> we don't have what you are asking for but I can tell you that only the ubuntu kernel has all the apparmor features
[16:26] <zyga> we're close but not there yet
[16:27] <ijohnson> ok, so is the story still basically "only ubuntu (classic and core)" support _all_ security confinement features?
[16:28] <mup> PR snapd#6290 closed: cmd/snap-confine: fix typo "a pipe" <Simple 😃> <Created by zyga> <Merged by zyga> <https://github.com/snapcore/snapd/pull/6290>
[16:32] <zyga> ijohnson: correct
[16:32] <ijohnson> zyga: ack, thanks
[16:40] <zyga> jdstrand: apologies for the edited responses, the UX on the forum is not great for some of those interactions.
[16:40] <zyga> I'm still editing the post to complete my replies
[16:40] <zyga> jdstrand: some of your response has broken formatting
[16:46]  * Chipaca takes a break
[16:46] <mup> PR snapd#6291 opened: Revert "Members of canonical LP group should pass CLA check" <Created by chipaca> <https://github.com/snapcore/snapd/pull/6291>
[16:49] <zyga> jdstrand: I believe I've answered your questions now, I will go and amend the post to fix the inaccuracy with regards to the refresh-pending hook and refresh-pending attribute
[16:51] <Chipaca> mvo: i know why the cla check is failing; i need to fix it
[16:51] <Chipaca> but first i need to take a break, will bbl
[16:51] <cachio> mvo, at least on the first attempt I could not reproduce it
[16:52] <zyga> jdstrand: done, thank you for contributing to that thread
[16:53] <zyga> jdstrand: note that this will be an experimental feature and it will not be on by default when first shipping
[16:53] <zyga> jdstrand: we may even consider enabling it per-snap
[16:57] <mup> PR snapd#6283 closed: osutil: do not import dirs <Simple 😃> <Created by bboozzoo> <Merged by bboozzoo> <https://github.com/snapcore/snapd/pull/6283>
[17:09] <cachio> with the image which I built on edge I don't have a lack of entropy
[17:09] <cachio> mvo, ~
[17:37] <zyga> Chipaca: small go test woe with locale and snapshots https://www.irccloud.com/pastebin/GPrTt5Sg/
[17:38] <zyga> pedronis: pushed some changes to 6190
[17:39] <zyga> I'll push some more tests but I think this is capturing most of what was discussed now
[17:40] <pedronis> zyga: will probably look at it later today or early tomorrow
[17:40] <mvo> cachio: oh, interesting - I wonder if the kernel fixes this
[17:40] <zyga> super, thank you
[17:40] <mvo> cachio: did you have a chance to test my core18 fix?
[17:41]  * pedronis breaks for dinner but will work a bit more after
[17:43] <blackboxsw> Saviq: sorry for the response out of left field if it's not helpful, but multipass accepts cloud-init #cloud-config per something like Josh Powers wrote up https://powersj.github.io/post/cloud-init-multipass/
[17:43] <blackboxsw> ^ if needed
[17:43]  * blackboxsw highlights on cloud-init :/
[17:43] <Saviq> blackboxsw: yeah, I know, we wrote it ;)
[17:44] <Saviq> the question is whether snapcraft allows passing it through to multipass, which it doesn't
[17:44] <blackboxsw> heh :/ sorry, reading out of context without the coffee to start up my brain
[17:45] <mvo> cachio: aha, thanks - I see you commented in the PR
[17:45] <mup> PR core18#96 closed: run-snapd-from-core: fix race when restarting snapd.socket <Created by mvo5> <Merged by mvo5> <https://github.com/snapcore/core18/pull/96>
[17:45] <mvo> cachio: I triggered a new core18 build
[17:46] <mvo> cachio: so soon we should have a new core18 snap in beta
[17:49] <cwayne> plars: ^ ready for some core18 runs? :)
[17:50] <plars> cwayne: definitely - our core18 runs are all happening on remote with dragonboard, rpi3, and cm3
[17:50] <cwayne> :D
[17:51] <cwayne> The rpi3 B, right
[17:51] <cwayne> i.e. not +
[17:52] <plars> cwayne: I think so - it's the one you brought, which I believe was a regular b
[17:52] <plars> v1.2 or 1.3 probably
[17:52] <cwayne> Cool beans
[17:59] <cachio> mvo, nice
[18:00] <mvo> cwayne, plars \o/ thanks!
[18:08] <Chipaca> mvo: kenvandine wrt #6252 I fuxed a pish
[18:08] <mup> PR #6252: userd: handle help urls which requires prepending XDG_DATA_DIRS <Created by kenvandine> <https://github.com/snapcore/snapd/pull/6252>
[18:09] <Chipaca> but at some point we might need to add a whitelist feature
[18:09] <Chipaca> or, maybe, add creds so the cla check isn't anonymous
[18:09] <mvo> Chipaca: \o/
[18:10] <Chipaca> it works for ken because he's already got things on master (just, not in the last 50 commits)
[18:13] <kyrofa> zyga, jdstrand how does confinement affect connection to abstract unix sockets? If you have the network plug, can you connect to any?
[18:17] <jdstrand> kyrofa: af_unix is mediated, yes. abstract sockets need to match a certain path
[18:18] <jdstrand> kyrofa: by default, snaps are allowed to use a snap-specific abstract socket: @snap.@{SNAP_INSTANCE_NAME}.**
[18:19] <jdstrand> kyrofa: beyond that, it is interface specific (look for 'unix' in interfaces/builtin/*.go)
[18:21] <Chipaca> jdstrand: this reminds me that in deeplin (a debian derivative) confined x11 apps don't work
[18:21] <Chipaca> jdstrand: the socket there is different but i don't know enough to know why/how/wat
[18:21] <Chipaca> jdstrand: Deepin*
[18:21] <jdstrand> Chipaca: it is probably a named socket in /tmp
[18:22] <jdstrand> Chipaca: as opposed to an abstract socket. therefore the mount namespace is the issue
[18:22] <jdstrand> Chipaca: I'm guessing
[18:22] <Chipaca> jdstrand: i need to go make dinner, etc
[18:22] <Chipaca> jdstrand: but i've got a deepin image i can test with later, if you have the time to handhold :-)
[18:23] <Chipaca> otherwise i'll just do the old "here's a nickel, kid" thing
[18:23] <jdstrand> Chipaca: well, there is either an apparmor denial if deepin supports strict mode snaps, or the mount namespace is getting in the way
[18:23] <jdstrand> but feel free to circle back
[18:24] <Chipaca> ok
[18:24] <Chipaca> i wonder if you can bind mount sockets
[18:24] <Chipaca> :-)
[18:24]  * Chipaca runs off to make dinner
[18:25] <jdstrand> Chipaca: you can
[18:26] <kyrofa> jdstrand, ah ha, very good thanks
[18:26] <kyrofa> jdstrand, so for a snap to connect to another snap's socket, it can't be abstract
[18:26] <kyrofa> And then content sharing can be used
[18:34] <jdstrand> kyrofa: or an interface can be created for the slot side
[18:35] <jdstrand> kyrofa: there are plenty of examples of slot side abstract sockets in the interfaces. with a named socket, it is (currently) just grouped in with file rules, so the content interface works there
[18:36] <jdstrand> (there are also plenty of examples of slot policy for named sockets in the interfaces too, but I digress)
[18:38] <kyrofa> Ah interesting, okay
[19:28] <mup> PR snapd#6291 closed: Revert "Members of canonical LP group should pass CLA check" <Created by chipaca> <Merged by pedronis> <https://github.com/snapcore/snapd/pull/6291>
[19:57]  * pedronis calls it a day
[20:02]  * zyga returns to see PRs
[20:03] <zyga> cachio: snap command not found :/ https://www.irccloud.com/pastebin/U9QtFWAV/