[00:15] PR snapcraft#2490 closed: test: test-beta [02:39] hmm trying to use the fakestore and failing [02:39] just get http 418s from everything [02:39] anyone around? [02:39] i guess i'll ask in the forum [06:15] morning [06:26] brb kernel update [07:09] Hey guys! I will be in the office in 20 minutes. Doing some babysitting now. [07:12] arch package update pushed === pstolowski|afk is now known as pstolowski [07:52] mornings [08:10] pstolowski: hi, mvo created this: https://trello.com/c/h3v4bN13/172-fix-build-failure-on-armhf-in-edge-ppa [08:10] is that a new test? or associated with recent changes? seems it's failing in the armhf builders [08:10] timing? [08:10] pedronis: hey, yes, i got email from trello, investigating, reproduced locally by running in a loop, seems like a race [08:11] pstolowski: ok, thank you, and yes it's a priority given it's stopping us from producing edge builds for arm [08:11] pedronis: i don't think it's because yesterday's changes, this landed last week [08:11] pstolowski: not quite sure when the error appeared [08:13] pedronis: more worrying is the amd64 failure in same ppa on TestHotplug [08:13] pstolowski: ok [08:14] pedronis: looks like something got stuck on udev monitor, might be something build env specific, hard to tell [08:14] but also, not related to yesterday's change i think, that test was landed long time ago [08:23] pstolowski: ok, let me know how it goes [08:25] pedronis: sure [08:30] rpm triggers for downgrade suck [08:32] pedronis: ok, i got it, PR coming [08:39] PR snapd#6550 opened: daemon/tests: fix race in the disconnect conflict test <⚠ Critical> [08:39] pedronis: ^ [08:41] pedronis: the amd64 failure on 14.04 is unclear atm, not sure why we didn't see it before; the udev mon test run for 10 minutes and got killed, i'll see if i can reproduce on 14.04 [08:43] pstolowski: +1 on the first fix [09:09] moin moin [09:24] Chipaca: hi [09:24] pedronis: 'sup [09:41] hmm rpm triggers are most interesting, semi-useless documentation, only thing you can really do is experiment :/ [09:45] Chipaca: how's the tweaking of 6356 ? [09:46] pedronis: nearly done, just the reRefreshUpdateMany rename pending [09:48] pedronis: problem is I either make it _the_ update many, or fix it with a comment [09:48] dithering too much on that [09:48] Chipaca: can I help? but I'm not understanding those last two comments [09:49] pedronis: I'll let you know :-) [09:57] uh, re [09:57] hey [09:57] sorry, tough night, tough morning [09:58] pstolowski: I see you know about the card mvo made [09:58] we discussed it last night and he asked me to relay that you have a look [09:58] zyga: yep, https://github.com/snapcore/snapd/pull/6550 [09:58] PR #6550: daemon/tests: fix race in the disconnect conflict test <⚠ Critical> [10:00] pstolowski, pedronis: should we do a .5 with this or is it sufficient to just patch master? [10:00] zyga: as far as I understand it was affecting edge [10:00] not 2.37 [10:01] pedronis: the failure was reported on .4 build [10:01] https://launchpad.net/~snappy-dev/+archive/ubuntu/edge/+packages [10:01] on line three there [10:02] then mvo explained himself badly [10:02] let me check the state of beta [10:04] pedronis: I think we're good, I see .4 in beta across all architectures [10:05] zyga: hella good [10:06] Chipaca: :-) hey, how are you doing? [10:06] zyga: hating names [10:06] oh? [10:06] what's wrong? 
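A general illustration of the kind of race pstolowski is chasing in the disconnect-conflict test (snapd#6550) — hypothetical test code, not the actual snapd test. Asserting on state that another goroutine may still be mutating is racy; waiting on a channel before asserting provides the needed happens-before edge, and running the test repeatedly under the race detector (`go test -race -count=100`) is the usual way to reproduce, as mentioned above ("reproduced locally by running in a loop").

```go
package demo

import "testing"

func TestHandlerFinished(t *testing.T) {
	calls := 0
	done := make(chan struct{})

	go func() {
		calls++ // the work the test wants to observe
		close(done)
	}()

	// A racy variant would read (or poll) calls here while the goroutine may
	// still be writing it; receiving from the channel first synchronises the
	// two goroutines, so the read below is safe.
	<-done
	if calls != 1 {
		t.Fatalf("expected 1 call, got %d", calls)
	}
}
```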
[10:06] :-) [10:06] zyga: nothing unusual there though [10:24] google:ubuntu-14.04-64:tests/main/snapd-reexec failed on my PR (unrelated to the PR), and lots of aborts again [10:27] pstolowski: yes, lots of aborts [10:27] in many PRs [10:35] pstolowski: spread error output about aborts is always hard to chase [10:36] uhm [10:37] usually is allocating machines? [10:38] pstolowski: zyga: ah, 2019-02-28 07:08:49 Cannot allocate google:opensuse-tumbleweed-64: cannot find any Google image matching "opensuse-tumbleweed-64" on project "computeengine" or "opensuse-cloud" [10:38] uh [10:38] we should disable it until cachio can debug [10:38] it worked yesterday once or so [10:40] pstolowski: add a manual:true in its stanza (as we have for others) [10:40] in your PR [10:42] pedronis: you mean in #6491? [10:42] PR #6491: interfaces: hotplug nested vm test, updated serial-port interface [10:42] pstolowski: ? [10:43] pstolowski: the easy one [10:43] pstolowski: #6550 [10:43] PR #6550: daemon/tests: fix race in the disconnect conflict test <⚠ Critical> [10:44] pstolowski: the other needs a review process no? [10:45] pstolowski: we need a PR that we want to land and can land soon [10:49] pedronis: yes sure the other PR needs reviews; ok, will set opensuse-tumbleweed to manual for now [10:56] * Chipaca brb [11:04] mborzecki: could you review #6356 when you have some time? [11:04] PR #6356: overlord/snapstate: during refresh, re-refresh on epoch bump [11:04] pstolowski: thanks [11:05] pedronis: sure, gladly, got quite fed up with the rpm crap [11:06] heh, i cannot use snap-mgmt to do the patching of mount units and we'll have to ship a separate script, otherwise downgrade to pre selinux-aware snapd won't work [11:09] 2.37.4 panic in unit tests [11:09] https://www.irccloud.com/pastebin/Ky7i9Vr5/ [11:12] PR snapd#6499 closed: cmd/snap-confine: allow moving tasks to pids cgroup [11:14] I added a trello card [11:14] zyga: about ? [11:14] about that failure, so that I don't forget to look at it later today [11:14] going through the release process now [11:14] zyga: when does it happen? 2.37 is green [11:14] on GH [11:15] pedronis: it seems like a race, it happened in 2.37.4 unit tests during package build [11:15] ok, weird [11:15] I would those tests don't need to do weird racy things [11:15] but what do I know [11:15] yeah :( [11:15] the test is probably complicated [11:19] zyga: no, I fear a real bug [11:20] the manip of the counter is wrong [11:26] will send a fix after lunch [11:31] pedronis: somebody at the sprint said that when we remove a snap we forget all connections, but it was in the middle of a bigger discussion and I didn't correct them even though I don't think that's right [11:31] pedronis: that wasn't you was it? [11:31] we so don't forget them, you can ask for connections of snaps that aren't even installed [11:33] that was me [11:33] when we remove the last revision we add one more task [11:34] zyga: I'm not theorising, I'm looking at it on my system [11:34] oh [11:34] so bug :) [11:34] zyga: is it though [11:34] zyga: I've heard it described as a feature [11:35] it does feel at least partially buggy in that connections only accumulate [11:35] e.g. 
I iterated on the connections on icdiff, and my snapd remembers all the names as separate things [11:36] even more interestingly, 'snap connections' lists it as connected even though the snap is removed [11:36] which sounds like fun :-) [11:36] uh [11:36] can you add a card please [11:36] I'll look [11:37] pedronis: can you comment on this behaviour, wrt whether we want it one way or another please [11:40] PR snapd#6550 closed: daemon/tests: fix race in the disconnect conflict test <⚠ Critical> [11:40] pedronis: ^ merged [11:41] Chipaca: we have a thing called discard-conns [11:41] isn't it called [11:41] ? [11:41] pedronis: no [11:41] that was an easy grep [11:42] Chipaca: ? [11:42] pedronis: it's not called [11:42] pedronis: grep -r --include \*.go --exclude \*_test.go discard-conns [11:42] Chipaca: it was at some point [11:42] have we lost it? [11:43] addHandler("discard-conns", m.doDiscardConns, m.undoDiscardConns) [11:43] ifacestate does define it [11:43] Is there some way to do a "snap why " to find out why a particular snap is installed? e.g. "snap why core18" might tell me opentoonz is why i have it installed? [11:43] pedronis: yes we lost it [11:43] Chipaca: how, when? [11:43] pedronis: yes i think we last it when we switched to running disconnects [11:44] *lost [11:44] pstolowski: but disconnect should remove conns? [11:44] but Chipaca says conns are there [11:44] there is bug [11:44] pedronis: dc290cac8b548ec7369433dda75c9cdd6d71e271 [11:45] Chipaca: anyway, yes it's bug, we don't want to keep conns around for removed snaps [11:45] ok [11:45] we do :-) [11:45] we are not supposed to [11:45] would be good to find out how [11:45] because disconnect should remove them [11:45] at least one by one [11:46] popey: we don't currently track that though [11:46] Chipaca: anyway sounds like we are missing a test somewhere [11:46] or is more complicated [11:46] and we don't lose them only sometimes [11:46] pedronis: yes we do delete(conns, cref.ID()), although its condition on auto flags etc, so maybe that's wrong [11:46] pstolowski: ? [11:46] *conditional [11:46] ah [11:46] ah, maybe we only don't forget manual ones [11:46] pedronis: in doDisconnect [11:47] something seems off [11:47] discard-conns removed everything [11:47] anyway depends which auto flag [11:48] Chipaca: do you see undesired:true on those connections? [11:48] pstolowski: in the state itself? [11:49] Chipaca: yes [11:49] pstolowski: the code seems right to me tough [11:49] it always delete [11:50] pstolowski: "icdiff:personal-files core:personal-files":{"interface":"personal-files","plug-static":{"read":["$HOME/.gitconfig"]}} [11:50] unless it's a autodisconnect [11:50] it's not a auto-disconnect [11:50] are we setting that flag wrong? [11:52] doAutoDisconnect: // "auto-disconnect" flag indicates it's a disconnect triggered as part of snap removal, in which case we want to skip the logic of marking auto-connections as 'undesired' and instead just remove them [11:53] and we set it there [11:53] exactly [11:55] i looked at disconnect() as well, the flag is carried there, looks sane [11:56] we are missing a test and/or not understanding what happens/happened on Chipaca's system [11:56] pedronis, lokking this problem [11:57] pedronis: pstolowski: what happens to connections on refresh? if a snap's connections change, in particular [11:57] I imagine there it's sane to keep them all [11:58] on removal, do we only look at the 'current' connections? 
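To make the conns-vs-repo discussion concrete, here is a stripped-down sketch of the behaviour the chat quotes from doDisconnect — hypothetical types and names, not the actual snapd code. Connection state lives in a map keyed by strings like "icdiff:personal-files core:personal-files"; a plain disconnect of an auto-connection keeps the entry but marks it undesired (so auto-connect won't re-establish it), whereas a disconnect carrying the "auto-disconnect" flag (i.e. triggered by snap removal) is supposed to delete the entry outright.

```go
package demo

// connState mirrors the kind of per-connection record kept in snapd's state
// (simplified; the real one also carries static/dynamic attrs and more).
type connState struct {
	Interface string `json:"interface,omitempty"`
	Auto      bool   `json:"auto,omitempty"`
	Undesired bool   `json:"undesired,omitempty"`
}

// disconnect updates the conns map for the connection identified by id.
// forSnapRemoval corresponds to the "auto-disconnect" flag discussed above:
// when the disconnect happens because the snap is being removed, the entry
// is dropped; otherwise an auto-connection is kept but marked undesired.
func disconnect(conns map[string]*connState, id string, forSnapRemoval bool) {
	cs, ok := conns[id]
	if !ok {
		return
	}
	switch {
	case forSnapRemoval:
		delete(conns, id)
	case cs.Auto:
		cs.Undesired = true
	default:
		delete(conns, id)
	}
}
```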
[11:58] * Chipaca should look at the code [11:58] * Chipaca should also try to reproduce this from scratch [11:58] zyga: fix https://github.com/snapcore/snapd/pull/6551 [11:58] PR #6551: systemd: decrease the checker counter before unlocking otherwise we can get spurious panics [11:58] anyway, first to go back to reviewing connections [11:58] PR snapd#6551 opened: systemd: decrease the checker counter before unlocking otherwise we can get spurious panics [12:00] Chipaca: we don't keep connections per revision, so there is no current [12:02] Chipaca: you might be onto something with this question, doSetupProfiles/setupProfilesForSnap looks a bit suspicious and it relies - again - on reloadConnections (which was wrongly uses on core transition and i was fixing it last week) [12:02] pedronis: ^ [12:03] did we run discard-conns on refreshes before switching to new revision? [12:03] no [12:03] it was strictly for full removal of snaps [12:04] only [12:04] ok [12:06] so yes, a test with refresh to a revision with less connections may give an answer [12:06] pstolowski: but those I think fully remove the snap from repo in a refresh [12:08] pstolowski: there's a DisconnectSnap and RemoveSnap in those [12:08] before adding it back and reloading connections [12:09] pedronis: yes, but where do we update conns be in sync with repo? [12:09] *to be [12:09] pstolowski: ? [12:09] we drop everything from repo [12:11] pstolowski: Chipaca: but there is code that would keep around connections for unknown slots/plugs [12:11] pedronis: yes, but don't we have the same problem we had with ubuntu-core transition? we need to keep conns in sync with repo, but reloadConnections only adds connections from conns in state, never removes [12:12] pstolowski: we DisconnectSnap and RemoveSnap [12:12] on repo [12:12] that should remove everything [12:12] then we add back [12:12] in reloadConnections [12:13] pstolowski: we reset the repo first or that's the intention at least [12:13] yes [12:13] pedronis: yes, but DisconnectSnap/RemoveSnap remove in memory only [12:13] pstolowski: ? [12:13] pstolowski: what else would it remove? [12:13] pstolowski: we keep the same conns [12:13] if what you are asking [12:13] because of the code in reloadConnections [12:14] that ignore some errors [12:14] but that has nothing to do with sync with repo [12:14] repo should be a clean slate at that point [12:14] pedronis: I'm wondering if we should change things so that repo is only constructed from state on demand [12:14] pedronis: repo is very useful [12:15] pedronis: but the mental model of how to sync it is non trivial [12:15] pedronis: RepoFromState(st *state.State) (*interfaces.Repository, error) [12:15] then work on repo all you want [12:15] maybe [12:15] and then some sort of commit/abort to change state again [12:15] seems a bit unrelated to the problem here [12:16] yes [12:16] pedronis: i mean: when we refresh to a snap with less connections, we remove all connections from repo with RemoveSnap/DisconnectSnap, but they stay intact in conns. Then we re-add snap to repo and reloadConnections so we have them back in repo. back we're left with an extra connection in state (conns), no? 
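The helpers.go line pedronis links is the "unknown plug/slot" branch; the net effect being debated is roughly the following (an illustrative sketch, not the real reloadConnections). When the repository is rebuilt after a refresh, entries in the state's conns map whose plug or slot no longer exists are logged and skipped rather than pruned, so they survive in state even though they never make it back into the repo — which is how a renamed plug can leave a stray connection behind.

```go
package demo

import "fmt"

type conn struct{ plugSnap, plugName, slotSnap, slotName string }

// repo stands in for the in-memory interfaces repository: it only knows
// about plugs and slots declared by currently installed snap revisions.
type repo struct {
	hasPlug func(snap, plug string) bool
	hasSlot func(snap, slot string) bool
	connect func(c conn) error
}

// reloadConnections re-establishes connections recorded in state. The point
// made in the chat: unknown plugs/slots are skipped, but the corresponding
// entries stay in conns, so `snap connections` can keep reporting them.
func reloadConnections(conns map[string]conn, r *repo) {
	for id, c := range conns {
		if !r.hasPlug(c.plugSnap, c.plugName) || !r.hasSlot(c.slotSnap, c.slotName) {
			fmt.Printf("%s refers to an unknown plug or slot, skipping\n", id)
			continue // entry stays in conns — nothing prunes it here
		}
		if err := r.connect(c); err != nil {
			fmt.Printf("cannot reconnect %s: %v\n", id, err)
		}
	}
}
```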
[12:16] just thinking about the reasoning about what reload does [12:16] s/back/but/ [12:16] pstolowski: they are not back in repo because you cannot connect things that don't exist [12:16] pstolowski: what happens is this: https://github.com/snapcore/snapd/blob/master/overlord/ifacestate/helpers.go#L311 [12:17] which maybe is what John hit or not [12:17] pedronis: yes, i understand this. we're left with a stray connection in conns, that's it [12:17] we don't have stray connections in repo [12:17] as such [12:17] or shouldn't [12:17] we cannot [12:17] it's not ending up in repo [12:17] yes [12:18] but AFAIK we do keep those in state [12:18] yes [12:18] that's the issue [12:18] and I think it explains those stray connections problem we observed long time ago (which we didn't understand), for which we added that "unknown plug/slot" logic in reload (to ignore them) [12:19] well, there's a comment that says we don't know what to do from ago [12:19] but yes then we added removeStaleConnections [12:19] only on startup [12:19] but that considers snaps [12:20] not plug/slots [12:20] anyway still unclear if this what John hit or something else [12:24] pedronis, I am creating a new tumbleweed image [12:24] I need to step out, but I just finished the review on connections, I can look at reproducing this from scratch if it'll help [12:25] I'm not sure what's needed, from all the above, though [12:25] pedronis, I don't know what happened with the previous one but it is not registered anymore [12:25] * Chipaca bbiab [12:26] cachio: fascinating [12:26] cachio: thanks, I suppose we should a day to see if this one sticks around before turning the tests back on [12:26] *should wait [12:27] pedronis, I see some errors related to lack of permissions in the logs [12:28] pedronis, perhaps that could be the cause [12:28] I'll continue researching [12:29] but what I see is that the image is not in out pool anymore and it was not manually deleted [12:32] Chipaca: did you change the name of the plug at some point? [12:32] pstolowski: is it on you queue to look at the comments by mvo on #6322 ? [12:33] PR #6322: overlord/hookstate: apply pending transaction changes onto temporary configuration for snapctl get [12:33] PR snapd#6552 opened: ifacestate/tests: fix/improve udev mon test [12:34] pedronis: ah, thanks, that one went under a radar a bit due to other things, i'll revisit it today [12:36] pedronis, tumbleweed image is ready again [12:41] pstolowski: that test you fixed, the accesses were, you have channels but you the things you access could be mutated by next loop of the monitor [12:41] while you were checking them [12:41] *were racy [12:41] pedronis: i kinda had a gut feeling about that, thanks [12:42] pstolowski: you can try go test -race , sometimes it useful [12:42] pedronis: will do, thanks! [12:43] pstolowski: here on master it detects are race on remInfo for example [12:46] pedronis: right, i can confirm, works great! 
and doesn't report it anymore on my PR [12:49] cachio: opensuse will need to be enabled back as I disabled it in the PR that landed earlier [12:52] cachio: I'll prepare a PR [12:53] pstolowski, yes, thanks, I am trying to understand what happened [12:54] PR snapd#6553 opened: tests: enable opensuse tumbleweed back [12:54] cachio: ^ [12:55] let's see how it goes, probably we shouldn't merge it until tomorrow [12:56] pstolowski, pedronis I think the problem is a bug in the google lib [13:08] pedronis: I did change the name of the plug, yes [13:09] Chipaca: ok [13:09] pedronis: am i evil [13:09] we need to be a bit more clever in that code [13:09] off to pick up the kids [13:10] PR snapd#6551 closed: systemd: decrease the checker counter before unlocking otherwise we can get spurious panics <⚠ Critical> [13:11] pedronis: that feels like a real bug [13:12] pedronis: I was thinking we should do a .5 [13:12] zyga: I poked mvo about that [13:12] and keep the 2.37 branch around as an LTS, with regular security updates and serious bug fixes [13:12] ? [13:12] it's the idea I was talking to mvo about a few times already [13:13] we move too fast and too slow [13:13] we should not release to the world on day one of 2.38 [13:13] server should stay behind on 2.37 until we had a few cycles of 2.38 desktop feedback [13:13] we cannot do that without tracks [13:14] anyway it's probably not a discussion to have this week [13:14] yeah, we'd have to do something to change how we release [13:14] but I think it's a good candidate for a starting point [13:14] given that mvo is not really around [13:15] also not sure that bug is good candidate for that logic, is probably not very likely to hit in the field [13:15] often [13:15] zyga: pedronis: listen to the wind! ☆c゚a.n*a・r。i゚e.s゚ [13:15] it's not that bug [13:15] is the polishing effort that goes into 2.37 [13:17] Chipaca: anyway if the plug was renamed, is not a regression, we had this kind of behavior since a long time [13:17] probably day 0 of the stuff [13:19] pedronis: a zero-day bug! /o\ [13:19] * Chipaca promotes headless chickens [13:19] zyga: I'm not particularly against or pro, I'm not sure an LTS as usual fits, also notice that the biggest non security issue with 2.37 was on servers [13:19] it was on people CI pipelines === ricab is now known as ricab|lunch [13:20] we will never prevent all regressions but think we should consider this move as it may fix our credibility on stability where people really seriously depend on snaps for day-to-day work [13:21] we could also use lts on ubuntu lts desktops [13:21] and use non-lts on devel releases [13:21] but those are just ideas, I'd like to raise it as a topic next week when we are back in full set [13:21] at some point we will just reduce the pool of testing to the point where we will hit the same issues but later [13:22] zyga: seriously, canarying is for this, or am I missing the point [13:22] if anything our issue is that people don't use beta/candidate enough [13:22] Chipaca: how would canarying prevent what we saw during 2.37? [13:23] zyga: depends on what you mean by "what you saw" [13:23] Chipaca: the point is simple, now we release almost synchronously all around the planet [13:23] "what we saw" i mean [13:23] we try not to really [13:23] Chipaca: I'd like to change that because we cannot maintain quality for production workloads [13:23] zyga: and how is that _not_ canarying? 
[13:24] Chipaca: perhaps I misunderstand but canarying is not a separate release schedule [13:24] it's just a pool of machines that say yes or no automatically [13:24] that's not the same in my eyes [13:24] anyway it's super unclear that the people that hit our bugs would have not run lts [13:24] zyga: canarying is where, say, 1% of the installed base gets a new revision [13:24] and then back to square one [13:25] zyga: if nothing breaks, 5%, 10, … etc [13:25] Chipaca: breaking 1% of server people IMO too much; I also don't trust in the automated aspect and holding that back; it feels like a good additional system but not a replacement for what LTSes are [13:25] zyga: if something breaks, you rollback [13:26] Chipaca: I don't disagree on that, just that the set of times you have to roll back burns our credibility and we should avoid that [13:26] Chipaca: we should do better in all the ways we can, including in how we release [13:26] anyway [13:26] I think we should discuss next week [13:26] just wanted to explain my point of view over canarying [13:27] zyga: as I said, no strong opinion right now, I just have the impression that this will just delay when we see the bugs [13:27] and that's it [13:32] * pstolowski off to the doctor [13:33] * zyga breaks and goes for a late walk [13:34] I'll skip standup today: my update is short, started late after rough night, releasing .4 everywhere; [13:34] one unusual aspect is that I filed bugs to help include snapd in opensuse [13:34] that's all [13:41] zyga: thx for the update [13:45] re [13:52] zyga: erk: https://koji.fedoraproject.org/koji/buildinfo?buildID=1217519 [13:52] did you accidentally chop the changelog entry up? [13:53] Michael's name is missing from the changelog entry and it looks like it was squished with the 2.37.4 release entry [13:53] There was no changelog entry this time [13:53] I see one here? https://github.com/snapcore/snapd/commit/af063cbb045a1e232bfb5131683657f78b0cd7ad [13:54] err. https://github.com/snapcore/snapd/commit/af063cbb045a1e232bfb5131683657f78b0cd7ad#diff-29bff6a81f0669dfd7375cb5e5f13a89 [13:54] Oh? When I looked I saw TBD [13:54] maybe stale view [13:54] I took entries from the release page [13:55] Michael puts them in as part of the tag-release commit [13:55] so they're usually there [13:56] I am afk but when I looked at the Fedora spec it had a placeholder only [13:56] * Son_Goku shrugs [13:56] I'll fix it in dist-git [13:56] no worries [13:57] Thanks! [14:00] zyga, thanks for doing the update stuff anyway :) [14:01] Chipaca: pstolowski: standup? [14:01] yep, omw [14:01] My pleasure :-) [14:07] zyga, new snapd packages are building [14:07] mborzecki: https://www.youtube.com/watch?v=OZpgnYhzdkI [14:08] when you get a moment, go ahead and file them as updates after they've been built [14:08] Yep [14:09] I’m heading home slowly [14:09] Hashtag baby hashtag walk [14:09] Hashtag babywalk? [14:19] popey: where can I find an Evan these days? (I know not IRC) [14:20] * cachio afk 15 mins [14:32] sergiusens: Do you have a moment to assist with a snap build issue I'm having? === ricab|lunch is now known as ricab [14:53] re [15:14] pedronis: sorry, i had an eye check today, i mentioned that yesterday during standup [15:17] pstolowski: and via the forum [15:17] re [15:18] jdstrand: hello [15:19] jdstrand: could you please look at a suggestion we got from the suse security team: https://bugzilla.opensuse.org/show_bug.cgi?id=1127366#c3 [15:19] cory_fu: what version? 
Also /join #snapcraft instead [15:19] the idea is to make snap confine ownership: snap-confine 6750 root:snapd [15:19] this way you can use snaps if you are a member of the group snapd [15:20] though I worry about setgid aspect of snap-confine (group root) [15:20] doing this would speed up entry to suse [15:23] Pharaoh_Atem: did you run another round of builds [15:23] Pharaoh_Atem: or shall I before I send stuff to bodhi? [15:24] pedronis: CC on the message to jdstrand [15:27] zyga: I'm not opposed to the idea, though it may complicate the priv-dropping code in snap-confine somewhat. it will definitely cause usability issues on suse since only people in the snapd group would be able to use snaps [15:27] pedronis: ^ [15:27] jdstrand: right [15:27] jdstrand: I understand the plan is to use this to simplify the review [15:28] jdstrand: and ship snapd in kubic OS by suse [15:28] jdstrand: where the users will have the permission by default [15:28] jdstrand: can you explain how it would complicate the priv dropping code? [15:29] jdstrand: don't we have also issues with reexec and the fact that we have one core [15:29] respectevely snapd snap [15:29] zyga: it would have to be reviewed-- it may not, but it may. I recall we very precisely drop uid and gid [15:29] yeah [15:30] s/drop/drop, raise, drop/ [15:30] I can attempt that to see what happens (just chmod/chown it on my system) [15:30] I suspect we may need a patch on snap-run [15:30] to say "please add the users to snapd group" or something alike [15:30] and some changes to snap-confine (there are probably three places that manipulate uid thre) [15:30] *there [15:31] jdstrand: my question how can we have snap-confine have different perms on different distros while keeping one core snap [15:32] pedronis: I think we are not using snap-confine from core/snapd on suse / fedora [15:32] so perhaps in those places as a special case? [15:43] pedronis: it wouldn't work in the reexec case [15:44] jdstrand: but then it means people can circumvent that check by turning reexec on [15:45] I mean, it could theoretically work if you get the same gid for snapd in all the distros and your distro packaging adds the group with that gid and the snap also uses that gid, but yikes, hard to coordinate [15:45] pe [15:45] pedronis: yes [15:45] we would need to hardwire somehow no reexec [15:45] I think kubic would not allow reexec, [15:45] pedronis: we do that today [15:45] there's no reexec on suse [15:46] zyga: but a user could set the env var, certainly [15:46] I thought we always respected the env var [15:46] jdstrand: it is ignored on suse [15:46] I ... 
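Circling back to the reexec discussion from earlier in the afternoon: jdstrand suggested snapd could disable re-exec when the sgid bit is set on snap-confine. A minimal sketch of what such a check might look like — illustration only, with an assumed distro path, not actual snapd code:

```go
package main

import (
	"fmt"
	"os"
)

// wouldDisableReexec reports whether snap-confine at the given path carries
// the setgid bit — the condition floated above as a signal that the distro
// packaging uses the "snapd group" access model, in which case re-executing
// into the core/snapd snap's snap-confine would be skipped.
func wouldDisableReexec(snapConfine string) (bool, error) {
	fi, err := os.Stat(snapConfine)
	if err != nil {
		return false, err
	}
	return fi.Mode()&os.ModeSetgid != 0, nil
}

func main() {
	// Path is distro-specific; /usr/lib/snapd/snap-confine is an assumption
	// for the example.
	disable, err := wouldDisableReexec("/usr/lib/snapd/snap-confine")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disable re-exec:", disable)
}
```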
don't know [15:46] heh [15:46] worth checking :) [15:46] but I think it does nothing [15:46] because it doesn't function [15:46] interesting [15:46] things just don't work in many ways still [15:46] different paths etc [15:46] I'd love to be able to turn it on [15:46] have a default that distros agree with [15:46] but a knob the user can toggle [15:47] it's just that we're not there yet [15:47] but we they ask for this [15:47] we cannot turn it on [15:47] the two things feels a bit incompatible [15:47] snapd could disable reexec if the sgid bit was set on snap-confile [15:47] I agree [15:47] snap-confine [15:47] jdstrand: I would have it as a first class concept: no reexec mode set at build time [15:47] then there's simpler way to control this [15:47] it's just off [15:48] though for now I think that's premature, we should just check what happens with the different group and see if we'd be happy with that mode during review (and land any extra changes before this goes in) [16:42] Pharaoh_Atem: updates posted to bodhi :) [16:43] you have to tell me one day how to do sets so that all the three updates can be in one big set === pstolowski is now known as pstolowski|afk [17:37] zyga: it's not a thing sadly [17:37] when we moved from Bodhi 1.0 to Bodhi 2.0, the ability to push a single update across multiple distributions went away [17:37] or are you referring to a set of packages per update? [17:41] Pharaoh_Atem: ah, I confused that [17:41] I thought it was a thing [17:41] but that's okay [17:41] anyway, I gave karma to everything [17:41] PR snapcraft#2491 opened: Fix and enable multipass spread tests [17:41] so it'll push out in the next 48 hours [17:46] Pharaoh_Atem: thank you [17:46] Pharaoh_Atem: say, what about manjaro [17:47] should we do something there? [17:47] hm? [17:47] is snapd in manjaro? [17:48] I believe so [17:48] do you have any contacts there? [17:48] someone we could talk to in case we need to release urgently? [17:49] nope [17:49] I don't know anyone in Manjaro anymore [17:49] ok, thank you [17:49] we're doing our best where we can [17:49] I will work on suse concept for some more time and then EOD [17:50] if it works out with the reworked packaging for SUSE, we can look at integrating those changes into the Fedora packaging and tweaking it to build for SUSE distributions too [18:03] Pharaoh_Atem: I already started :) [18:03] Pharaoh_Atem: I need to finish selinux support in snapd.k [18:03] .mk [18:03] it's not a priority for me this week but I expect to switch more and more packaging over to it [18:03] be careful, though [18:04] I still need to be able to build it everywhere [18:04] it has not lost any features [18:04] it should really be just less repetition without any other effects [18:04] and having a way to ensure things like the golang build flags are passed in are important [18:04] yep, that's part of why this is not ready yet [18:04] I'm just saying it's important for me to get this done to simplify packaging work [18:05] sure [18:05] I'd like it to be simpler too [18:05] it's a big part of why I hate Go so much :/ [19:06] * zyga EODs [20:04] pedronis: thanks for the answers about the fake store [21:41] hm [21:41] snap seeding appears to be completely hung in this image [21:41] i've probably broken something but how can i diagnose? 
[21:44] mwhudson: set SNAPD_DEBUG=1 in snapd.service override, hack the image to have serial tty, [21:44] i have a tty [21:44] and i think that env var is set [21:44] yeah it is [21:45] it is complaining about not being able to find a snap-declaration [21:45] ah whoops killed vm by mistake [21:49] well, good luck :) [21:50] Hey, I'm trying to build a snap with a custom plugin, and I'm getting a "/bin/sh: 29: snapcraftctl: not found" on the pulling step. snapcraftctl isn't in the source of any of my files. [21:50] Any ideas? Thanks [21:51] ah the fakestore can make a declaration too [22:06] omg it lives [22:10] hi all [22:10] short question, is there an equivalent to flatpak's --user? [22:10] for unpriviliged and formost userlocal installations [22:11] so installed into their home, system state stays unalterd [22:11] now just the api/v1/snaps/auth/nonces thing [22:11] RalphBa: no :) (short answer) [22:12] ok, and how is an app behaving when used by 2 users? [22:18] say two users use nextcloud snap [22:19] nextcloud-client [22:23] RalphBa, no differently than the deb installed once and used by both users. Settings are stored in a user-specific area [22:24] kyrofa: so all the data downloaded from nextcloud server is stored somewhere in /home/user directory? Where? [22:24] RalphBa, you said nextcloud-client, perhaps we're misunderstanding each other [22:25] the snap nextcloud-client available in store [22:25] I can access its data via the current directory [22:25] But yes-- the nextcloud client syncs stuff from client to server, and its settings are stored per-user. Where the stuff it syncs is located shouldn't really matter [22:25] but where is it stored?`and why is it available via /snap [22:26] it matters because I have multiple paritions for multiple stuff [22:26] I have a backup concept [22:26] I need to know where stuff is [22:27] RalphBa, what I mean to say is: where stuff syncs is up to the user(s). It has no bearing on whether or not multiple users can use it [22:27] default setting of that snap is to store data in some shady mount thing [22:27] how said, I access that data via /snap/.../current [22:27] The snap itself is installed into /snap like any other snap. That's not a shady mount thing, it's a squashfs image. That's what a snap is [22:28] It doesn't store anything there though. It's read-only [22:28] It just executes from there [22:28] Nextcloud downloadded 3 GB, the squashfs image had 87MB [22:28] Just like, if it was a deb, it'd probably be in /usr/bin/ [22:29] hm... [22:29] this was the path ~/snap/nextcloud-client/current/Nextcloud o /snap/nextcloud-client/current/Nextcloud [22:29] and also /... [22:30] the symlink pointed to snap/nextcloud-client/10 [22:30] which was the mountpoint of that .snap file [22:30] /snap/nextcloud-client/* is the snap installation path. ~/snap/nextcloud-client/current/ is the user-specific path where it might store user-specific settings (and yes, it can sync there as well, although I don't recommend it) [22:31] ok, but where is that user specific stuff? [22:31] if I want to put that file on a tape, where to find it? [22:31] Uh. Which file? [22:32] "user-specific path where it might store user-specific settings" << the file or the files in there [22:32] in the current directory [22:33] ok, where is a good technical documentation? [22:33] Anything in ~/snap/nextcloud-client/current/ falls under that category... 
I'm not quite sure what you're looking for [22:33] ~/snap/nextcloud-client/current/ is just a symlink to a mount point [22:34] No, it's a symlink to a directory [22:34] /snap/nextcloud-client/current IS a symlink to a mountpoint though [22:34] The one in $HOME is different [22:34] hm [22:36] oh [22:36] ok, got it [22:37] that was the point, I thought ~/snap/.../10 is a mount point [22:37] seems not to be the case [22:38] Indeed not, perhaps that explains both of our confusion :) [22:39] :-D thanks for that one [22:39] just one to come, can I move /snap to /data/snap and symlink it? [22:39] Snaps have a few defined places where they can always write-- that's one of them (SNAP_USER_DATA). This might enlighten you further: https://askubuntu.com/questions/762354/where-can-ubuntu-snaps-write-data/762405#762405 [22:40] I doubt it, snapd will want to be able to update it etc. [22:40] hmmm... I really would like to get the installed stuff out of the base system [22:40] deb for system stuff, snap and flatpak for application stuff [22:41] might it tolerate /snap beeing an own mount point? [22:43] RalphBa, I'm afraid I don't know, that's a question for the snapd devs, who are all probably gone for the day. I suggest coming back earlier tomorrow to ask [22:46] hmm so now when i try to refresh from the fake store i get a complaint about mismatched revisions [22:47] the assertion says rev 2 but somehow the snap has revision 1? [22:47] but i don't understand how a snap has a revision that is different from what the assertion says...
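On the fakestore revision mismatch mwhudson hits at the end: a snap-revision assertion ties a snap blob's SHA3-384 digest to a revision, so one way to see which assertion a locally built blob actually matches is to recompute its digest and compare it with what the fakestore serves. A small sketch, assuming the unpadded base64url encoding snapd uses for assertion digests:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"io"
	"log"
	"os"

	"golang.org/x/crypto/sha3"
)

// Prints the SHA3-384 digest of a .snap file given as the first argument, in
// unpadded base64url form (assumption: this matches the snap-sha3-384 header
// of snap-revision assertions), for comparison with the served assertion.
func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha3.New384()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	fmt.Println(base64.RawURLEncoding.EncodeToString(h.Sum(nil)))
}
```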