[08:34] <pedronis> mvo: hi, I reworked 7020
[08:35] <mvo> pedronis: nice, thank you. I have a look
[09:04] <Chipaca> pedronis: do you remember offhand if aliases can have underscores in them?
[09:04]  * Chipaca has the silliest doubts
[09:04] <pedronis> Chipaca: they can
[09:04] <Chipaca> (this is related to a user question)
[09:04] <pedronis> they allow quite a bit more latitude than app names
[09:04] <pedronis> because one of their goals is to support a reasonable range
[09:04] <pedronis> of upstream binary names
[09:05] <Chipaca> good good :-)
[09:05] <Chipaca> somebody is wanting to package a jenkins plugin
[09:05] <Chipaca> and it seems those use underscores
[09:05] <Chipaca> so aliases ftw
[09:05] <Chipaca> (and a snapped jenkins would be yuge)
[09:06] <pedronis> ^[a-zA-Z0-9][-_.a-zA-Z0-9]* is their regexp
[09:06] <pedronis> oops with a $
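For anyone checking candidate names against that regexp, a quick shell sketch (the pattern is taken verbatim from the lines above, with the `$` added; the helper name is made up):

```shell
# Validate candidate alias names against the regexp quoted above:
#   ^[a-zA-Z0-9][-_.a-zA-Z0-9]*$
is_valid_alias() {
    printf '%s\n' "$1" | grep -Eq '^[a-zA-Z0-9][-_.a-zA-Z0-9]*$'
}

is_valid_alias jenkins_plugin && echo "jenkins_plugin: ok"   # underscores allowed
is_valid_alias _bad || echo "_bad: rejected"                 # must not start with - _ .
```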
[09:10] <Chipaca> whoa, england remembered how to rain
[09:28] <pedronis> mvo: mmh, we did this experimentally:  cmd/snapd: ensure GOMAXPROCS is at least 2
[09:28] <pedronis>  but it didn't help. did we remove it ?
[09:38] <mvo> pedronis: we did not remove it, there is a card, I can do this now
[09:39] <Chipaca> something weird just happened
[09:39] <Chipaca> or maybe not i'm not sure
[09:39] <Chipaca> I merged #7083
[09:40] <Chipaca> and #7086 also merged
[09:40] <Chipaca> were those two the same thing?
[09:40] <Chipaca> (that would explain github showing no diff … i thought it was just confused)
[09:40] <Chipaca> those are 3/4 and 4/4 of the snapd type work
[09:41] <Chipaca> truly I don't see a commit in 4/4 that is about what 4/4 claims to be about
[09:42] <Chipaca> ¯\_(ツ)_/¯
[09:43] <Wimpress> Morning o/
[09:43] <pedronis> Chipaca: yes, seems there is no actual renaming there (??)
[09:43] <Chipaca> yeah, probably a mis-push
[09:43] <Chipaca> made a note about it
[09:43] <Wimpress> I just upgraded from 19.04 to 19.10 daily and more than half my installed snaps are showing as "broken" in `snap list`.
[09:43] <Chipaca> Wimpress: kernel 5.2
[09:43] <Chipaca> Wimpress: buggy mc bug
[09:43] <Wimpress> Awesome.
[09:44] <Wimpress> "mc bug" being?
[09:44] <Chipaca> Wimpress: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1836914
[09:44] <Wimpress> Thanks.
[09:44] <Chipaca> niets te danken ("you're welcome")
[09:45] <Chipaca> clearly it's suse's fault
[09:48] <ogra> Chipaca, leave my girlfriend out of the discussion please !
[09:49] <Chipaca> ogra: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1836914/comments/5
[09:49] <Chipaca> ogra: i have iferrutable evidence
[09:49] <Chipaca> or is that iferrutable edivence
[09:49] <mvo> the two ubuntu-core-18 prs from sergio need a second review and should be easy wins
[09:50] <ogra> lol
[09:50] <ogra> damn ... i'll talk to her then to not do such stuff in the future
[09:51] <Wimpress> Chipaca: Any way to manually remount the "broken" snaps?
[09:52] <Chipaca> Wimpress: if you're brave, yes
[09:52] <Chipaca> Wimpress: snap list --all
[09:52] <Chipaca> Wimpress: remove disabled,broken snaps
[09:52] <Chipaca> (just to make things easier)
[09:52] <Chipaca> Wimpress: then, disable+enable broken snaps
[09:52] <Chipaca> Wimpress: should do it
[09:53] <Chipaca> Wimpress: let me know, as i haven't heard back from anybody that's done this
[09:53]  * Wimpress drinks brave juice
[09:53]  * Chipaca spikes Wimpress's juice
[09:53] <Wimpress> Remove the disabled snaps first?
[09:53] <Chipaca> Wimpress: in --all, yes, ie old revisions that just happened to not mount
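Sketched as a script, the recovery recipe above might look like this. The `snap list --all` output here is a made-up sample (on a real system you would pipe the actual output), and the remove/disable/enable commands are shown as comments rather than executed:

```shell
# Hypothetical `snap list --all` output; the Notes column flags broken/disabled.
sample='Name    Version  Rev   Tracking  Publisher     Notes
core    16-2.39  6531   stable    canonical*    -
mumble  1.3.0    463    stable    snapcrafters  broken
mumble  1.2.0    450    stable    snapcrafters  disabled'

# Step 1: old disabled revisions -> `sudo snap remove <name> --revision=<rev>`
printf '%s\n' "$sample" | awk 'NR>1 && $NF=="disabled" {print $1, $3}'

# Step 2: broken snaps -> `sudo snap disable <name> && sudo snap enable <name>`
printf '%s\n' "$sample" | awk 'NR>1 && $NF=="broken" {print $1}'
```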
[09:54] <Chipaca> somebody let popey know not to jump to 5.2 because this'd be a nightmare for him
[09:54] <Wimpress> I have done
[09:54] <Chipaca> it starts hitting people at ~40 snaps mounted
[09:54] <Wimpress> He is on vacation.
[09:54]  * Wimpress has 287 snaps installed
[09:54] <Chipaca> i know, thank you for letting him know
[09:54]  * Chipaca hugs Wimpress 
[09:55] <Chipaca> Wimpress: i didn't have you on my list of people with a lot of snaps, i do now
[09:57] <mvo> Chipaca: any tl;dr updates on the mount issue?
[09:57] <Chipaca> mvo: kernel bug
[09:57] <Chipaca> mvo: don't use 5.2 for a bit
[09:58] <Chipaca> mvo: https://github.com/torvalds/linux/commit/33ec3e53e7b1869d7851e59e126bdb0fe0bd1982
[09:58] <Chipaca> (if you're curious)
[09:59] <Chipaca> mvo: it's very much a race, hits once you're at ~40 snaps mounted (counting old revisions etc, so ~13 snaps is enough if you've got the usual 3x)
[09:59] <popey> Chipaca: thanks. I'm on vacation nowhere near snaps :)
[10:00] <Chipaca> popey: i didn't intend for that to reach you synchronously on vacation
[10:00] <popey> I have 287 snaps at the last count
[10:00] <popey> It's cool :)
[10:00] <Wimpress> Snap
[10:00] <popey> Good to know!
[10:00] <Wimpress> I had 287 until I started sorting this out.
[10:00] <popey> Haha
[10:00] <popey> That's 287 X 3 here :)
[10:01] <popey> Anyway, have fun. Bye!
[10:01]  * Chipaca has fun as instructed
[10:01]  * Chipaca ensures to always have a company-approved amount of fun
[10:02]  * Chipaca sets up N+2 sources of fun to accommodate network issues and demand surges
[10:03] <Wimpress> Chipaca: The disable/enable trick doesn't work :-(
[10:03] <Wimpress> `- Setup snap "mumble" (463) security profiles (cannot find installed snap "mumble" at revision 463: missing file /snap/mumble/463/meta/snap.yaml)`
[10:03] <Chipaca> ah
[10:03] <Wimpress> I get something like that during the enable ^
[10:03] <Chipaca> Wimpress: so
[10:03] <Chipaca> Wimpress: the issue is more complicated probably
[10:03] <Chipaca> Wimpress: try disabling/enabling *everything*
[10:04]  * Wimpress decides to get coffee...
[10:04] <Chipaca> the problem is the kernel gets the loop devices discombobulated with their backing file
[10:04] <Chipaca> so maybe snap A shows up as broken but it's snap B that needs to be disabled to release the device
[10:04] <Chipaca> I don't know
[10:04]  * Chipaca weeps
[10:05]  * Chipaca remembers having fun
[10:08] <Chipaca> mvo: is this affecting people not on the bleeding edge?
[10:21] <Wimpress> Chipaca: I'm down to 25 snaps.
[10:21] <Wimpress> 5 are broken.
[10:21] <Chipaca> Wimpress: at this point a reboot might be easier
[10:21] <Wimpress> I am removing a couple of snaps. Rebooting. Then I'll see how many are broken.
[10:22] <Chipaca> Wimpress: ah
[10:22] <Chipaca> :-/
[10:22] <Wimpress> I am starting to suspect there is not a lower limit on the number of snaps installed.
[10:22] <Chipaca> Wimpress: you can see more info about the breakage if you look at the failing mount units
[10:22] <Wimpress> At this point I am interested to know if there is a lower limit.
[10:23] <Chipaca> Wimpress: systemctl --all --failed |grep snap
[10:24] <Chipaca> Wimpress: ok
[10:24] <Chipaca> Wimpress: the number i was quoting was how many i had had to add before getting a solid reproducer
[10:24] <Chipaca> Wimpress: it's a race, though, so it's certainly possible to lose it with just two snaps :-/
[10:30] <Wimpress> So, enable/disable does not get a "broken" snap working.
[10:30] <Wimpress> Refreshing from a different channel doesn't get a "broken" snap working.
[10:30] <Wimpress> But removing then installing a "broken" snap does get you a working snap.
[10:31] <Wimpress> I have 19 snaps installed. 1 (random on each boot) will always be in a broken state.
[10:37] <ackk> hi, should one use SNAP_NAME or SNAP_INSTANCE_NAME for prefixed names like abstract unix sockets or SHM prefixing?
[10:38] <ackk> I see snapcraft-preload uses $SNAP_NAME at the moment, but won't that cause collisions if you have multiple instances?
[10:38] <Chipaca> ackk: depends :-) but probably SNAP_INSTANCE_NAME
[10:39] <Chipaca> ackk: if you use SNAP_NAME, you risk trying to talk to some other instance of yourself
[10:39] <Chipaca> ackk: and we all know that's when the world ends
[10:39] <ackk> Chipaca, yeah but snapd checks those names, right? so maybe snap.foo_bar won't be allowed for snap "foo" if snapd uses SNAP_NAME
[10:40] <ackk> Chipaca, ISTR it's SNAP_NAME for abstract sockets for socket activation
[10:40] <ackk> but that was before SNAP_INSTANCE
[10:40] <Chipaca> ackk: it _should_ get blocked, yes
[10:40] <Chipaca> ackk: i'm pretty sure it will get blocked, even :)
[10:41] <ackk> Chipaca, which case should?
[10:41] <Chipaca> ackk: using SNAP_NAME for SHM and dbus things
[10:41] <Chipaca> ackk: abstract unix sockets, it's more a free-for-all
[10:41] <Chipaca> whoever gets there first will probably win
[10:41] <Chipaca> and then things'd be weird
[10:41] <ackk> Chipaca, uhm, no, SNAP_NAME is what snapcraft-preload uses for /dev/shm/
[10:41] <Chipaca> (but i'm not sure we mediate them properly)
[10:42] <Chipaca> ackk: ?
 ackk: using SNAP_NAME for SHM and dbus things
[10:42] <Chipaca> ackk: well that probably won't work for an instance'd snap, then
[10:43] <jamesh> ackk: I don't think dbus slots are going to work with instanced snaps
[10:44] <jamesh> in general, applications hard code the bus names they try to acquire, and clients hard code the bus names they want to talk to.
[10:44] <ackk> jamesh, I'm not talking about dbus slots. mainly /dev/shm files and abstract sockets for socket activation
[10:44] <ackk> it doesn't seem ValidateSocket does any filtering
[10:45] <Chipaca> ackk: i don't think we look at abstract socket names in any way
[10:45] <Chipaca> might be wrong, but i don't think so
[10:46] <Chipaca> ackk: i mean: snap foo can create and use an abstract socket that claimed it was from snap bar in some way, and it'd work
[10:46] <ackk> ok
[10:46] <Chipaca> ackk: it shouldn't be able to talk to a socket from snap bar though
[10:46] <Chipaca> ackk: so if it uses SNAP_NAME, it'll work for itself if it's instanced and the non-instanced snap didn't create the socket (yet)
[10:46] <Chipaca> makes sense?
[10:47] <Chipaca> ackk: much like with regular ports, snapd does no arbitration
[10:47] <Chipaca> you can have N snaps trying to grab port 80
[10:47] <ackk> right
[10:47] <Chipaca> only one will 'win' :)
[10:47] <ackk> I see the apparmor template is for /dev/shm/snap.$SNAP_INSTANCE_NAME
[10:47] <ackk> so snapcraft-preload should use that too
[10:48] <Chipaca> yes, shm should use instance name, yes
[10:48] <Chipaca> which is why i said that if snapcraft-preload doesn't use that, it won't work for instanced snaps
[10:48] <Chipaca> instance-named snaps? instancedated?
[10:48]  * Chipaca gives up
[10:48] <ackk> instanced? :)
[10:49] <ackk> instant snaps!
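To make the collision concrete: with parallel installs an instance name has the form `<snap>_<instance-key>`, and `SNAP_NAME` is just the snap part, so any path built from it is shared by every instance. A small sketch (the snap name and paths are made up):

```shell
# Hypothetical instance of a snap named "foo", installed as "foo_test".
SNAP_INSTANCE_NAME="foo_test"
SNAP_NAME="${SNAP_INSTANCE_NAME%%_*}"   # the instance key is dropped -> "foo"

echo "/dev/shm/snap.${SNAP_NAME}.lock"            # same path for every instance
echo "/dev/shm/snap.${SNAP_INSTANCE_NAME}.lock"   # unique per instance
```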
[10:49] <jamesh> ackk: https://github.com/snapcore/snapd/blob/master/snap/validate.go#L211-L219 <- there's the validation for abstract sockets for socket activated services.
[10:49]  * Chipaca gets extra whimsical just before mealtime
[10:49] <ackk> jamesh, ah, right
[10:49] <Chipaca> oh, neat, i'd forgotten that bit
[10:49] <Chipaca> jamesh: nice
[10:49] <ackk> jamesh, so that should be the instance instead?
[10:50]  * ackk forgot he wrote that
[10:51] <ackk> I did have a memory that some validation was in place
[10:51] <Chipaca> huh, i wonder what maciej meant about it coming from the snap decl
[10:51] <jamesh> ackk: as the socket name comes from meta/snap.yaml, and already has the snap name in place, it doesn't really fit with instanced names
[10:51] <Chipaca> ah probably that :-)
[10:51] <Chipaca> s/snap declaration/snap yaml/ i guess
[10:51] <jamesh> ackk: if you want instance friendly unix domain sockets, consider putting them in $SNAP_COMMON (i.e. not abstract)
[10:51] <ackk> jamesh, ah, you have a point
[10:52] <Chipaca> non-abstract are safer anyway :-p
[10:53] <ogra> pfft ... just pipe into netcat :P
[10:53] <ogra> sockets ... pfft
[10:54] <jamesh> it'd be nice if there was a system level area under /run for a snap to use for this kind of thing
[10:55] <ogra> you could use /tmp which is transparently overlayed
[10:57] <jamesh> ogra: the validation code requires it start with $SNAP_DATA, $SNAP_COMMON, or $XDG_RUNTIME_DIR (the last of which doesn't make sense until we've got user session daemons)
[10:57] <ogra> oh, why is that ? /tmp should be as safe as these dirs are since it is fully confined
[10:58] <jamesh> it's systemd that binds the socket
[10:58] <jamesh> not something within the sandbox
[10:58] <Chipaca> meaning that the path needs to be the same outside and inside
[10:58] <ogra> ah
[10:58] <ogra> sorry, i missed that part ... ignore me then :)
[10:58] <Chipaca> now that there's no random component in the tmp path we _could_ possibly make it work
[11:01] <jamesh> would systemd create the parent directories with the right permissions though?
[11:13] <Chipaca> jamesh: ¯\_(ツ)_/¯ no idea
[11:14] <Chipaca> anyway, lunch
[11:14] <Chipaca> jamesh: don't forget to sleep sometime
[11:14] <jamesh> Chipaca: good idea.  It's past 7pm
[11:15] <Chipaca> ah, i figured it'd be even later
[11:15] <Chipaca> even so, go get some life in that work/life balance thing
[11:15] <Chipaca> :)
[11:15] <Chipaca> wait, that sounded dangerously close to 'get a life'
[11:16]  * Chipaca backs away from the whole thing
[11:17] <jamesh> https://www.youtube.com/watch?v=PrVSHa_5Tw4
[12:04] <Chipaca> mvo: #7102 is green, has two +1's, your call if you merge or address the indent question
[12:04] <Chipaca> mvo: #7100 is also green and has two +1's and I'd merge it but dunno :-)
[12:18] <diddledan> dang, wimpress has me beat by a long way: I only have 83 snaps installed
[12:25] <Chipaca> diddledan: try 'snap list --all | wc -l' :-)
[12:25] <diddledan> 154
[12:25] <diddledan> that's better :-)
[12:26] <Chipaca> -1 for the header line, fwiw
[12:26] <ogra> yeah
[12:26] <diddledan> d'oh
[12:26] <Chipaca> no cheating
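For the record, the header line can be skipped at the source instead of subtracting one afterwards (demoed here on fake output; swap the `printf` for the real `snap list --all`):

```shell
# `snap list --all | wc -l` counts the header line too; `tail -n +2` drops it.
printf 'Name Version Rev\nfoo 1.0 1\nfoo 1.1 2\n' | tail -n +2 | wc -l   # -> 2
```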
[12:47] <mvo> Chipaca: sorry, was in a meeting, let me read backlog
[12:53] <pedronis> Chipaca: thanks for the review
[12:53] <mvo> Chipaca: I tweak 7102 now
[12:56] <mvo> pedronis: thanks for the updates to 7020, not quite finished reading but looks very nice so far
[13:22] <mvo> Chipaca: what was the bugnumber of the kernel losetup issue again?
[13:29] <Chipaca> mvo: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1836914
[13:35] <mvo> Chipaca: thank you
[13:41] <pedronis> Chipaca: mvo: btw now that snapd having a type has landed I think 6923 and 6950 are also unblocked
[13:46] <geodb27> People : hi ! snap is stuck on one of my lxd machines : it refuses to run "snap restart lxd", complaining that snap "lxd" has a "refresh-snap" change in progress.
[13:47] <geodb27> I've tried many things, including "snap abort" with the task list, but things remain the same. What can I do to have things up and running again without losing any data ?
[13:51] <Chipaca> geodb27: hi, sorry you're having trouble
[13:52] <Chipaca> geodb27: can you pastebin the output of 'snap changes'?
[13:52] <geodb27> Thanks for your help Chipaca. Here it is : https://pastebin.com/6PmTm0Lx
[13:53] <Chipaca> geodb27: and the output of 'snap tasks 20' ?
[13:54] <geodb27> https://pastebin.com/frhzSbwq Here it is.
[13:55] <Chipaca> looks like stopping the lxd services is struggling
[13:56] <Chipaca> geodb27: what's the output of snap logs lxd ?
[13:57] <geodb27> there is not that much information in "snap logs lxd". It lists the kernel modules in use (I'd guess) and only one warning concerning swap accounting.
[13:58] <Chipaca> geodb27: and what does 'systemctl status snap.lxd.activate.service snap.lxd.daemon.service' think?
[13:58] <Chipaca> geodb27: also the output of 'snap services lxd' please
[13:59] <geodb27> https://pastebin.com/6qkSrTCi I included the latest even if it complains about an error concerning the snapd socket.
[14:01] <Chipaca> geodb27: can you retry it?
[14:01] <Chipaca> doesn't look good though
[14:01] <geodb27> Yes, got it.
[14:02] <geodb27> lxd.activate  enabled   inactive  -
[14:02] <Chipaca> geodb27: 'journalctl -u snapd' might be revealing also
[14:02] <geodb27> lxd.daemon    enabled   active    socket-activated
[14:03] <geodb27> Oh, hang on... Don't know what really helped out there, but "snap restart lxd" seems to be working, at last...
[14:04] <Chipaca> hah, i was about to suggest stopping them manually, but that might also work
[14:04] <Chipaca> in fact, i'd written this:
[14:04] <Chipaca> geodb27: let's try this: sudo systemctl stop snap.lxd.* snapd && sudo systemctl start snapd
[14:05] <geodb27> Well it failed, but I know why : the revert to 3.14 went through, after a long while. But since this machine is now the only one in the lxd cluster with version 3.14, it can't start. I'll have to install the latest 3.15 on this machine.
[14:05] <Chipaca> geodb27: sounds like you're unblocked, at least
[14:06] <Chipaca> i need to set aside some time to figure this one out though
[14:06] <Chipaca> suspect socket activation is breaking things for us
[14:06] <Chipaca> geodb27: good luck! and do reach out if it gets stuck again
[14:07] <geodb27> Indeed. In parallel, while you guided me to find a solution, I issued in another terminal "apt-get update && apt-get upgrade -y && apt dist-upgrade -y && apt autoremove -y".
[14:07] <geodb27> Could be that there was something not up-to-date that led to this "dead-lock". snap refresh lxd is in progress now. I'll let it go.
[14:07] <geodb27> Thanks a lot for your kind help !
[14:12]  * diddledan moans at jdstrand "too many defined layouts (18 > 10) lint-snap-v2_layout"
[14:12] <diddledan> guess I need to trim them
[14:13] <jdstrand> diddledan: this was actually not only my decision. zyga and I decided on 10, but then I saw one with 8 so just upped it to 15 yesterday (not in prod yet)
[14:13] <diddledan> hah!
[14:13] <jdstrand> but layouts take resources so with a system with hundreds of snaps...
[14:14] <diddledan> aye
[14:15] <diddledan> I put so many as an attempt to reduce "magic" in my launcher script:
[14:16] <diddledan> https://www.irccloud.com/pastebin/snNh71xP/layout.yaml
[14:17] <diddledan> I prefer declarative rather than imperative so I felt declaring the layout was a better solution than shoving the world into LD_LIBRARY_PATH
[14:35] <ackk> jdstrand, hi, tested your latest snap package w/snap_daemon, works fine. just one note, the test snap still doesn't
[14:35] <Chipaca> diddledan: https://www.youtube.com/watch?v=tljyCX8-Hw8
[14:36] <ackk> jdstrand, not an issue for our testing though
[14:37] <jdstrand> ackk: that is puzzling, I both tested it and use it in spread tests. can you paste what you did?
[14:37] <ackk> jdstrand, I followed the readme, one sec
[14:42] <ackk> jdstrand, odd, I tried again and was getting:
[14:42] <ackk> $ sudo test-snapd-daemon-user.drop snap_daemon
[14:42] <ackk> execv failed: No such file or directory
[14:42] <ackk> jdstrand, then i removed and re-added the snap and it worked
[14:43] <jdstrand> the combination of deb and disabling reexec and installing a snap can be delicate
[14:43] <ackk> jdstrand, do you need to export SNAP_REEXEC=0 every time in the shell?
[14:43]  * jdstrand chalks it up to that
[14:43] <jdstrand> ackk: yes
[14:43] <ackk> jdstrand, ah so maybe that's why
[14:44] <jdstrand> for 'snap'
[14:44] <ackk> jdstrand, is there an ETA for 2.40 release?
[14:44] <jdstrand> modifying /etc/environment is good for snapd and if you log out
[14:44] <ackk> right
[14:44] <jdstrand> ackk: idk, but fyi, this is now 2.41 material
[14:44] <ackk> jdstrand, so snap_daemon is 2.41 ?
[14:45] <jdstrand> yes
[14:45] <ackk> ok
[14:45] <diddledan> hopefully 2.41 will enable me to get the much requested makemkv out to stable
[14:45] <jdstrand> ackk: have a meeting today on a final detail for snap.yaml, then it is just finishing a couple small details and iterating on PRs
[14:46] <ackk> jdstrand, nice, thanks
[14:46] <jdstrand> ackk: ie, I don't foresee it later than 2.41
[15:07] <ackk> jdstrand, btw, posted https://forum.snapcraft.io/t/request-for-auto-connection-of-interfaces-for-maas/12367
[15:08] <cachio> mvo, hey, any idea who created the snap test-snapd-with-configure ?
[15:09] <cachio> I need to create test-snapd-with-configure-core18
[15:21] <oSoMoN> jdstrand, hey, I'd love to have your opinion on https://forum.snapcraft.io/t/auto-connecting-the-cups-control-and-password-manager-service-interfaces-for-the-chromium-snap/4592/7
[15:22] <mvo> cachio: in a meeting but let me check
[15:23] <cachio> mvo, np, I'll upload the new one
[15:23] <mvo> cachio: cool, go for it
[15:51] <pedronis> mvo: I reviewed 7122, have a nitpick but maybe it doesn't make sense completely
[15:51] <pedronis> not sure
[15:55] <jdstrand> oSoMoN: on my todo (thinking)
[15:57] <oSoMoN> thanks!
[16:06] <mvo> pedronis: thanks, let me see
[16:06]  * cachio lunch
[16:11] <pedronis> mvo: let me know if what I had in mind isn't clear
[16:12] <pedronis> anyway I gave a +1, it's really nitpick, I suppose I'm slightly allergic to GlobalRootDir (I know why it exists)
[16:14] <pedronis> jdstrand: I gave my general +1 (but not details) to gpio-control and packagekit-control, though the former still needs to be locked down to be superprivileged
[16:14] <pedronis> afaict
[16:17] <jdstrand> pedronis: ack, thanks
[16:20] <mvo> pedronis: hm, how would boot/kernel_os_test.go pick up this bootdir in the test suites?
[16:20] <mvo> pedronis: happy to change it but not fully sure what you have in mind, but we can have a quick HO tomorrow, I'm almost eod
[16:20] <pedronis> mvo: I might not have been very clear
[16:22] <mvo> pedronis: or (more likely) I'm just a bit slow this evening, I will look at it with fresh eyes in the morning and if I doN't figure it out we can have a chat, does that sound good?
[16:25] <jdstrand> pedronis: curious if you have an opinion on when to create the (shared) snap_daemon user. I'm of two minds: a) at the time a snap that specifies it is installed and b) after snapd finishes fetching declarations, add any users that don't already exist. 'a' is easy but feels at the wrong level and means inconsistency between systems that have snaps that use the user and ones that don't. if 'b', I'm thinking in daemon.go -- is
[16:25] <jdstrand> there an area I should be looking at?
[16:27] <pedronis> jdstrand: I'm not quite sure about the worry related to the inconsistency, why would we have the user on a system without snaps using it?
[16:28] <pedronis> I'm probably missing something
[16:29] <jdstrand> pedronis: the question is whether to have the shared users unconditionally there or only as needed
[16:30] <jdstrand> pedronis: unconditionally gives consistency for all systems running a snapd that supports the feature (and eventually the snap declarations in the store)
[16:31] <jdstrand> pedronis: which may make sense for snap_daemon, but probably does not make sense for snap_other or scope: private
[16:31] <jdstrand> pedronis: so in that light, I think I answered my own question (at install)
[16:31] <jdstrand> which is cool, cause that is easy :)
[16:31] <pedronis> jdstrand: yes, that seems easier and sensible
[16:32] <jdstrand> pedronis: thanks for being my sounding board :)
[16:32] <pedronis> heh, np
[16:40] <pedronis> mvo: did you re-review 7020, or not quite yet?
[16:41] <pedronis> Chipaca: I think I never noticed it, but the error handling code in client is a bit strange (from a HTTP POV)
[17:28]  * zyga sends greetings from the eastern part of Italy 
[17:28] <zyga> crossing over to Slovenia soon
[17:32] <mvo> pedronis: I did not :( sorry ! in my morning or later tonight
[17:43] <pedronis> np, just wondered
[17:44] <pedronis> Chipaca: could you quickly double check the changes related to context I had to do to fix conflicts in the rereg PR: https://github.com/snapcore/snapd/pull/7007/commits/d904093cc19e1d13ce6b474bec7cdc5220711a82
[17:51]  * cachio afk
[19:11] <Chipaca> pedronis: LGTM
[19:35] <roadmr> jdstrand: review-tools 20190717somethingsomething are now in prod \o/
[19:40] <jdstrand> roadmr: yeah! thanks :)
[21:20] <sergiusens> mvo: hey, any reason the bare base is not on stable?