[02:29] <mup> PR snapd#10306 opened: snap/quota: add CurrentMemoryUsage for current memory usage of a quota group <Simple 😃> <Skip spread> <quota> <Created by anonymouse64> <https://github.com/snapcore/snapd/pull/10306>
[03:03] <freenodecom> This channel has been reopened with respect to the communities and new users. The topic is in violation of freenode policy: https://freenode.net/policies
[03:03] <freenodecom> The new channel is ##snappy
[11:23] <pstolowski> pedronis: i just saw you comment re debug api access while i was updating the PR to restrict it with rootAccess for stacktrace only (with a separate endpoint). shall I instead make it rootOnly for all debug api?
[11:24] <pstolowski> s/rootOnly/rootAccess/
[11:24] <pstolowski> biab, lunch
[11:38] <mardy> so, I need some advice :-)
[11:39] <mardy> the microk8s snap ships a profile for containerd: https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/containerd-profile
[11:40] <mardy> I don't think we want snaps to ship their own profiles, but I'm also not sure what is the best way forward
[11:40] <mardy> one possibility would be to enhance the kubernetes-support interface, and add a sub-profile for containerd in there
[11:41] <mardy> another option is to add a containerd-support interface, and have the containerd-daemon use that
[11:41] <mardy> anything else?
[11:47] <zyga> mardy it's hard to pull permissions
[11:47] <zyga> mardy if it ships a profile it can load it
[11:48] <zyga> so you cannot revoke that without some extra new optional parameter that says it has an interface but wants a more limited version because it expects snapd to do stuff
[11:48] <zyga> note that the profile it ships is not in the snap.* namespace either
[11:48] <zyga> so it's really a case of a snap having permissions to define custom sandbox for something it manages
[11:52] <mardy> zyga: I'm not sure if I'm misunderstanding you, but in this case pulling permissions would be easier, because the microk8s snap is also developed by Canonical
[11:52] <mardy> it's "just" a matter of deciding where to put this profile, and how to load it
[11:53] <zyga> mardy I suspect that a snap interface _could_ load a profile from outside of the snap.* namespace but that is indeed new
[11:53] <zyga> mardy do you know if microk8s leaves its apparmor profile explicitly?
[11:53] <zyga> and sets up things to be in cri-containerd.apparmor.d profile?
[11:53] <zyga> I also wonder what you gain by moving that profile somewhere else
[11:54] <zyga> updates now would require updating snapd 
[11:54] <zyga> arguably this is easier now
[11:54] <mardy> zyga: yes, a script reexecutes itself with aa-exec as unconfined, then loads the cri-containerd profile with apparmor_parser :-)
[11:54] <zyga> okay
[11:54] <mardy> zyga: but I see that here we have a subprofile, for example: https://github.com/snapcore/snapd/blob/master/interfaces/builtin/kubernetes_support.go#L66-L68
[11:54] <zyga> I think as unfortunate as that is, it's cleaner if it's not in the snapd profile
[11:55] <mardy> and I wonder if we shouldn't do the same for containerd
[11:55] <zyga> hmmm
[11:56] <zyga> that's a profile for a specific binary though
[11:56] <zyga> how would you transition?
[11:56]  * ogra wonders what "cri-" stands for
[11:56] <zyga> (here microk8s must manually run things in that profile)
[11:56] <zyga> container runtime icannotmakeupmymind
[11:56] <ogra> haha
[11:56] <ogra> i like the last bit
[11:57] <mardy> zyga: AFAIU, containerd then would create specific apparmor profiles for its containers: https://docs.docker.com/engine/security/apparmor/
[11:59] <zyga> mardy I guess with enough coordination it is possible
[11:59] <zyga> what do microk8s folks say?
[12:04] <mardy> zyga: I will ask them next :-)
[12:12] <zyga> mardy one thing being problematic here is that it may require a new assumes: snapdXYZ in the microk8s snap
[12:36] <mardy> kubernetes-support     microk8s:k8s-journald           -                    -
[12:36] <mardy> kubernetes-support     microk8s:k8s-kubelet            -                    -
[12:36] <mardy> kubernetes-support     microk8s:k8s-kubeproxy          -                    -
[12:36] <mardy> kubernetes-support     microk8s:kubernetes-support     :kubernetes-support  -
[12:37] <mardy> any idea why the last one is autoconnected, while the previous three are not?
[12:40] <pstolowski> mardy: the previous one is a new interface you're working on right?
[12:43] <mardy> pstolowski: no, I'm innocent here :-)
[12:43] <mardy> pstolowski: all those microk8s:k8s-* plugs are different flavors of the kubernetes-support interface
[12:45] <zyga> mardy I have a guess
[12:45] <zyga> actually
[12:45] <zyga> maybe not
[12:45] <zyga> can you share the slot and plug info?
[12:45] <zyga> I thought it's something but I'm not sure anymore
[12:45] <zyga> that info should help
[12:47] <mardy> zyga: https://github.com/ubuntu/microk8s/blob/feature/jdb/strict/snap/snapcraft.yaml#L26-L34
[12:47] <mardy> or did you mean from snapd?
[12:47] <zyga> and can you share the snap decl assertion
[12:47] <zyga> no that's fine
[12:48] <zyga> snap known snap-declaration # I forget what you have to say here
[12:51] <mardy> zyga: you mean this? https://paste.ubuntu.com/p/tMZ3W9RQBV/
[12:52] <zyga> yes
[12:52] <zyga> it has auto-connection on kubernetes-support
[12:52] <zyga> I wonder how the interface name matters in auto-connection rules
[12:53] <zyga> I think pedronis would know but I recall there was some language to constrain interface names as well
[12:53] <mardy> zyga: when you enable autoconnection on one interface, does that apply to all flavors? or do you need to list them explicitly?
[12:54] <zyga> flavour is just an attribute
[12:54] <zyga> it's not special in the system AFAIK
[12:54] <zyga> have a look at interfaces/policy as well
[12:54] <pedronis> mardy: it applies to all unless there are extra constraint on it
[12:54] <zyga> I don't recall
[12:54] <zyga> hey ijohnson 
[12:54] <ijohnson> hey zyga 
[12:54] <pedronis> mardy: for example on attributes or plug names
[12:54] <mardy> mmm.. here it looks like it's doing something special based on the flavor: https://github.com/snapcore/snapd/blob/master/interfaces/builtin/kubernetes_support.go#L306-L323
[12:54] <zyga> mardy it could be that the core policy has something about it
[12:54] <ijohnson> zyga: how are things with you these days ?
[12:55] <zyga> mardy the core policy together with the snap decl on microk8s should have all the input to the engine that decides
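[editor's note: as pedronis says above, an auto-connection rule in a snap-declaration applies to every plug of that interface unless the rule carries extra constraints, e.g. on attributes or plug names. A hypothetical fragment in the snap-declaration constraint language, with an illustrative attribute value, might look like:]

```yaml
plugs:
  kubernetes-support:
    allow-auto-connection:
      # only auto-connect plugs declaring this attribute value
      plug-attributes:
        flavor: kubelet
```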
[12:55] <zyga> ijohnson oh good, a bit over worked as usual but it's still fun
[12:55] <zyga> setting up CI lab for zephyr and linux boards
[12:55] <ijohnson> yeah I've heard about all the adventures in testing real metal boards with spread
[12:55] <zyga> looking into projects that do zephyr updates
[12:56] <zyga> looking into some linux updates for a gateway as well
[12:56] <zyga> ijohnson yeah, we're working with Linaro to extend spread to support using lava
[12:56] <zyga> or to be precise, to emit lava jobs
[12:56] <zyga> so that CI like gitlab or github can schedule lava jobs and collect results back
[12:56] <ijohnson> nice
[12:57] <zyga> yeah, though it took some convincing :D
[12:57] <ijohnson> :-D
[12:58] <zyga> I wish we could merge some of that back
[12:58] <zyga> that's always an option, I know everyone is busy
[12:58] <ijohnson> yeah I wish spread was more actively maintained as well
[12:58] <ijohnson> I mean everybody wishes everything was more actively maintained though haha
[12:59] <zyga> I think we can all collectively fork it if we're happy with that
[12:59] <zyga> I hope it's not needed
[12:59] <ijohnson> true always an option
[12:59] <ijohnson> anyways /me -> SU
[12:59] <zyga> o/
[13:34] <pedronis> ijohnson: this is probably something simple you can review https://github.com/snapcore/snapd/pull/10310 as you were in the discussions
[13:34] <mardy> is there any way for a snap to know if this is its first execution after boot?
[13:35] <ijohnson> pedronis: sure
[13:38] <mardy> microk8s needs this, but their current way of getting this information requires access to /proc/1/environ: https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/common/utils.sh#L684-L686
[13:38] <mardy> so I was wondering that, unless we have something like this already, maybe we could provide this information via a snapctl action?
[13:58] <ijohnson> mardy: that feels a bit broken, can they not just write a file to /tmp the first time they start and if that file exists it's not the first start ?
[14:07] <pedronis> mardy: we try not to provide things via snapctl unless it's really snap specific or there's no other way to mediate the access
[14:25] <pstolowski> mvo: can you land https://github.com/snapcore/snapd/pull/10284 ? unrelated failures..
[14:25] <pedronis> mborzecki: ijohnson: I created https://github.com/snapcore/snapd/pull/10311 related to the SU discussion and 10255
[14:25] <mvo> pstolowski: done
[14:26] <pstolowski> ty!
[14:35] <ijohnson> pedronis: ack thanks for this
[14:36] <ijohnson> pedronis: related, how do you feel about making POSTs to the quota-related endpoints authenticated? it's currently root only and it's ever so slightly annoying to me that I need to use sudo
[14:38] <mardy> ijohnson: uh, why didn't I think of that... :-)
[14:39] <ijohnson> mardy: haha no worries I'm sure this code works fine outside of confinement and wasn't written expecting to be confined at all
[14:42] <mardy> ijohnson: don't we set TMPDIR to something else, when running a snap? Or can confined snaps write to /tmp?
[14:43] <ijohnson> mardy: confined snaps have a different /tmp mounted for them than the host system's; a snap's /tmp is something like /tmp/snap_foobar/
[14:43] <mardy> at least we don't, according to https://snapcraft.io/docs/environment-variables
[14:43] <mardy> ah, nice
[14:43] <ijohnson> mardy: so we don't set the TMPDIR value to something else, it still points to /tmp by default, but /tmp is not /tmp on the host 
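[editor's note: a minimal sketch of ijohnson's marker-file suggestion, assuming the snap's private /tmp is cleared on boot as the host's /tmp normally is; the marker filename is made up for illustration:]

```shell
#!/bin/sh
# First-start-since-boot check via a marker file in the snap's
# private /tmp (which does not survive a reboot).
MARKER="${TMPDIR:-/tmp}/.started-since-boot"   # hypothetical name
if [ ! -e "$MARKER" ]; then
    echo "first start since boot"
    touch "$MARKER"
else
    echo "already started since boot"
fi
```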
[14:43] <ijohnson> hey roadmr 
[14:43] <mardy> ijohnson: nice
[14:45] <pedronis> ijohnson: I think it's a bit premature to do that change, my current impression is that root still seems correct
[14:46] <ijohnson> pedronis: ack, as I said it's just slightly annoying for me to remember during development
[14:48] <roadmr> o/
[15:15] <ijohnson> pedronis: do you know the status of #8919 ? claudio marked it blocked waiting for some better API from Chris, but it's not clear to me better API was ever implemented
[15:33] <om26er> O Hi!
[15:33] <om26er> I need my cloak back
[15:33] <om26er> ;-)
[15:34] <ogra> om26er, #ubuntu-irc hands them out 
[15:35] <ogra> (and the libera ones are much nicer too ... if you modify them a bit they turn into superhero capes 😉 )
[15:41] <pedronis> ijohnson: we need to resync with chris
[16:05] <ijohnson> pedronis: ack, I will leave the pr be then
[16:38] <pedronis> ijohnson: btw, did you see I commented on https://github.com/snapcore/snapd/pull/9965 a while ago?
[16:39] <ijohnson> oh sorry yes I did see you commented on that, but forgot to respond / address your points
[16:39] <ijohnson> I will respond today
[16:39] <ijohnson> thanks
[16:39] <ijohnson> pedronis: what do you think of my proposal for https://github.com/snapcore/snapd/pull/10304#discussion_r639795752 ?
[16:52] <pedronis> ijohnson: I commented making a suggestion
[16:52]  * ijohnson thanks
[16:52] <ijohnson> err 
[16:52] <ijohnson> thanks pedronis 
[16:52] <pedronis> heh
[17:02] <ijohnson> pedronis: related to the default-providers question, is there also a possibly similar question for super-privileged snaps like lxd or docker being pulled in automatically that the user didn't consent to? I mean those snaps are strictly confined so probably it's fine?
[17:32] <pedronis> ijohnson: no, that's different, also we would likely control which snaps can pull them in
[17:33] <ijohnson> ok makes sense