=== chihchun_afk is now known as chihchun [05:08] morning [05:43] PR snapd#5761 closed: store: use stable instance key in store refresh requests [05:51] mvo: hi [05:54] mborzecki: hey, good morning [05:57] mvo: could you take a look at https://github.com/snapcore/snapd/pull/5758 ? [05:57] PR #5758: overlord/snapstate, snap: handle shared snap directories when installing/removing snaps with instance key [05:59] mborzecki: sure, I'm just looking into the 18.10 upgrade hang, I will check this one in parallel [05:59] mvo: thank you [06:11] Sep 10 08:11:10 galeon snapd[9337]: retry.go:52: DEBUG: The retry loop for https://api.snapcraft.io/api/v1/snaps/names?confinement=strict%2Cclassic finished after 1 retries, elapsed time=303.098089ms, status: cannot decode new commands catalog: got unexpected HTTP status code 403 via GET to "https://api.snapcraft.io/api/v1/snaps/names?confinement=strict%2Cclassic" [06:11] 403? [06:18] good morning [06:19] zyga: hey [06:32] ok, breakfast done, time to get to work [06:32] zyga: hey [06:33] hey [06:33] I didn't do much work over the weekend [06:33] I'm snapshotting now, going to upgrade to cosmic [06:37] mvo: is this expected? [06:37] Upgrades to the development release are only [06:37] available from the latest supported release. [06:37] I can update manually but I was expecting do-release-upgrade -d to work [06:38] zyga: yes, this should work. you could try changing /etc/update-manager/release-upgrades from "Prompt=lts" to "Prompt=normal" [06:39] mmm, let's try [06:39] indeed, thanks! [06:40] zyga: I'm confused, why is mkdirAllChown racy? is it that we create mkdirAllChown(/prefix/foo) and mkdirAllChown(/prefix/bar) with different users? [06:44] mvo: two concurrently running instances will eventually race with each other to mv a file over [06:45] (well, a directory) [06:45] zyga: aha, that's the one [06:45] mvo: and only one will win [06:45] zyga: and the other one will get an EEXIST?
[06:45] yes [06:45] mvo: the version in snap-update-ns is not racy because it chooses to construct the directory in-place and chown it a split second later [06:45] as such it already handles EEXIST [06:45] as a normal operation [06:45] it's a trade-off [06:46] trading temporary visibility of incorrect owner [06:46] for robustness [06:46] zyga: ok [06:46] zyga: thanks, that makes sense now [06:46] E:Problem executing scripts APT::Update::Post-Invoke-Success 'if [06:46] /usr/bin/test -w /var/cache/app-info -a -e /usr/bin/appstreamcli; [06:46] then appstreamcli refresh-cache > /dev/null; fi', E:Sub-process [06:46] returned an error code [06:47] mvo: I saw that before but I don't know how I fixed that [06:47] let me just upgrade manually [06:48] zyga: the appstream thing is harmless [06:48] zyga: just ignore it [06:48] it broke the upgrade [06:48] as in, do-release-upgrade failed [06:51] upgrading the old way now [06:51] that appstream thing is not very robust [06:52] zyga: ok [06:52] zyga: I'm surprised that the appstream thing prevented the upgrade [06:52] apt hook magic [06:52] zyga: but yeah, not super important, I'm keen to learn what you find [06:55] PR snapd#5787 closed: tests: update the listing expression to support core from different channels [07:04] mvo: upgrade complete [07:04] no issues [07:05] do you know if the upgrade was headless? [07:05] perhaps that's the key ingredient === pstolowski|eow is now known as pstolowski [07:05] heyas [07:05] hey pawel [07:07] cosmic has new looks! [07:07] I was expecting to stay on bionic but ubuntu is like drugs, you just want more and more and more fresh software [07:10] mvo: do you want me to repeat the test under different conditions?
[07:10] zyga: hm hm, I had no luck reproducing the issue myself earlier too [07:11] zyga: I think there is something missing; no need for you to keep trying [07:12] OK [07:29] hmm, it seems gnome-shell in 18.10 no longer logs errors whenever some window activity happens [07:37] https://bugs.launchpad.net/snapd/+bug/1791587 [07:37] Bug #1791587: [2.34.2] snapd ignores proxy settings [07:41] zyga: cannot be, wonder if they finally fixed it upstream [07:41] anyways, super annoying [07:42] mborzecki: I'm much happier now, it was always annoying to watch journal output with gnome-shell spamming everything [07:43] mborzecki: I left a bunch of comments in the PR, tl;dr it's fine but I have quibbles with some of the naming (yay bike-shed!) [07:44] mborzecki: hope it's still useful [07:45] mvo: thanks, will look in a minute once i'm done with the autotools madness [07:46] mborzecki: no rush, I'm back at apt upgrade madness [07:50] PR snapd#5800 opened: cmd: use systemdsystemgeneratorsdir, cleanup automake complaints, tweaks [07:50] mborzecki: nice one [07:50] mvo: duh, i need coffee now, i'm still shaking :P [07:50] * mvo hugs mborzecki [07:54] mborzecki: "Not it's at the right place to begin with" ? [07:55] s/Not/Now/ ? [07:55] PR core18#70 opened: hooks: add rfkill [07:55] heh, right :) [07:56] zyga: fixed the description [07:57] mborzecki: very nice [07:59] zyga: o/ [07:59] hey [08:00] zyga: one question i had left over from friday was: do we really need to loadmountinfo of /proc/self/mountinfo so many times (in quick succession)? if these are all part of the same operation, can't we load it once and pass it around?
[08:01] Chipaca: yeah, we could; it's somewhat "tricky" because interface backends have no state across invocations [08:02] mborzecki: https://github.com/snapcore/snapd/pull/5801 <- last of the removal PRs [08:02] PR snapd#5801 opened: cmd/snap-update-ns: remove the unused Secure type [08:02] PR #5801: cmd/snap-update-ns: remove the unused Secure type [08:02] mborzecki: after that it's just the additions; hopefully with a way that lets us decide if the names are ok and if the way they are passed around is ok [08:02] zyga: these 2k+ calls are all from separate interface backend invocations? [08:02] is forum.snapcraft.com failing for anyone else right now? [08:02] yes [08:02] err .io [08:02] Chipaca: if we had a "mount manager" that could keep a representative state and share it [08:03] Chipaca: (e.g. use inotify) [08:03] diddledan: yes for me as well [08:03] Chipaca: we could just call it once per mount op [08:03] diddledan: same here [08:03] Chipaca: there's a not-well-known fact that you can poll on /proc/self/mountinfo [08:03] and get events when it changes :) [08:03] diddledan: and it's back [08:03] diddledan: so _that_'s why the morning was nice and quiet [08:04] :-) [08:04] hrm, it's back and it ate the reply I was typing [08:04] *grumpf* [08:04] nomnomnom [08:04] mvo: i thought those were in cookies [08:04] or, well, local state in any case [08:04] boo [08:05] zyga: hmmm [08:05] Chipaca: maybe I had a bad cookie [08:05] or not enough milk [08:05] ¯\_(ツ)_/¯ computers are hard, let's go do something easy like rocket surgery [08:06] Chipaca: do you think we can sit down and work on this at the sprint?
[08:06] zyga: yes [08:06] zyga: if it's all interfaces, then we probably can do the cache+dnotify from interface manager [08:06] yes, I think so [08:07] zyga: before stopping myself and going off on friday i had it down to where the biggest single memory chunk was bolt [08:08] zyga: this was via a hackish have-a-global-mountinfo that expired after a second, and changing asserts/findwildcard to not load the whole directory every time [08:08] Chipaca: so after mountinfo bolt is the biggest heavyweight? [08:09] did someone say rocket surgery? https://youtu.be/fg58hVEY5Og?t=36 [08:09] zyga: on my machine, the asserts db is the next one, after that it's bolt [08:09] diddledan: … no, nobody. You must be having bad dreams again. [08:10] zyga: bah. There's a json.Unmarshal of *json.RawMessage in there [08:10] but I ignored that because I'd have to dig into upgrading our go [08:10] and i'd rather not dream [08:11] :-) [08:12] Chipaca: I probably missed context, what are you looking into? memory usage? [08:13] pedronis: on friday i idly looked at a memory profile of my snapd after it started up and did 5 things (refresh, find, list, changes, info) [08:14] pedronis: and two things stood out: osutil's strings.Split, and asserts's directory reading [08:14] pedronis: osutil's is just because it's called eleventy billion times in quick succession and re-reads the whole thing every time [08:14] Chipaca: yea, I would expect snap-revision directory to be growing over time, we will need to do something about it [08:15] pedronis: that one's an easy and clean refactor actually, i could split it out [08:15] at least if i understood the code correctly -- the tests were green :-) [08:16] ship it [08:16] diddledan: no because then mvo gets nasty emails to his personal inbox [08:16] test green means everything is perfect. right?
[08:17] diddledan: they _should_ though :-) [08:17] :-p [08:17] Chipaca: will need to gc snap-revisions as well at some point, but that can wait a bit longer [08:18] er [08:18] pedronis: directory lookup will become an issue before space, i fear [08:21] any idea what issue this guy may have with the CLA? https://github.com/snapcore/snapd/pull/5799 [08:21] PR #5799: Install snap-failure binary on Fedora [08:21] Chipaca: ? [08:21] isn't what I said [08:21] that gc can wait longer [08:22] pedronis: i meant kernel performance of finding things in a yuge directory [08:22] pedronis: this is separate from go's memory usage from reading a whole big directory into memory [08:22] Chipaca: we scan them, no [08:23] anyway [08:23] Chipaca: are you saying we need to use a real index? [08:23] sooner or later [08:23] pedronis: I'll push the refactor in a bit [08:24] Chipaca: we already have too many PRs :) [08:24] pedronis: I'm saying if we delay gc a lot more, we should think of moving to a two-level directory layout [08:35] so the hang on 18.10 is caused by incomplete seeding. the gtk-common-themes is seeded after the snaps that use it [08:36] * mvo scratches head [08:36] mvo: yes, they decided that we should fix it in snapd [08:36] they only fixed 18.04 [08:37] pedronis: oh, looks like I did not get the memo [08:37] but are waiting for our fix for 18.10 [08:37] ok [08:37] it's in the bug [08:38] mvo: we discussed a fix I think [08:38] it's not a matter of sorting, we need to add some logic to autoconnect in ifacestate [08:40] pedronis: well, I think it's a terrible choice to let users suffer when there is a trivial workaround available but *shrug*. let me dive into the issue [08:40] pedronis: yeah [08:51] mvo: quick workaround: on cosmic, ignore the seed [08:53] Chipaca: ?
[08:54] pedronis: a joke [08:54] heh [08:54] pedronis: calling out the cavalier attitude towards breaking users [08:56] Chipaca: I was actually thinking to have a preinst to reorder things, I [08:56] Chipaca: anyway, looking into the real fix now [08:59] Chipaca: is your comment about lookup still relevant with ext4 dir_index, I see here that firefox cache is flat and ext4 uses a hash directory for it [09:02] Chipaca: yea, same for /var/lib/snapd/assertions/asserts-v0/snap-revision/ [09:04] pedronis: I think the hashed dir made it fine until ~100k entries, but I'd have to look that up [09:06] Chipaca: bit unclear if space will become a problem before or after then, it's not only the size of the directory, there is the size of the assertions too [09:06] though snap-revisions are small === chihchun is now known as chihchun_afk [09:09] also each assertion is a dir [09:13] mvo: apt hooks test unhappy? [09:13] the test runs forever (49 minutes and times out) [09:13] https://api.travis-ci.org/v3/job/426565106/log.txt [09:14] mvo: https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/1772844/comments/21 is the comment [09:14] Bug #1772844: snapd didn't initialize all the seeded snaps [09:14] zyga: not sure [09:15] Chipaca: anyway playing with debugfs wasn't actually my morning plan [09:15] pedronis: thanks, I remembered and found it, I still disagree but it seems like the intended effect was reached [09:15] pedronis: sorry [09:16] Chipaca: anyway 100000 dirs with 4k blocks are 400 mb ? [09:16] mvo: does this mean #1776622 is a dupe of #1772844 ? [09:16] Bug #1776622: snapd on cosmic never finishes installing/updating. Can't install any other updates.
[09:16] Bug #1772844: snapd didn't initialize all the seeded snaps [09:16] === chihchun_afk is now known as chihchun [09:21] Chipaca: too much stuff is too much stuff (tm) [09:24] Chipaca: yes [09:25] Chipaca: well sort of but if the ordering in the seed or our code were fixed then the bug would be fixed [09:25] Chipaca: I mean the bug where users see the hang on an upgrade [09:28] mvo: we order bases now, but we don't want to order content providers (also because that's too specific for seed code) [09:29] pedronis: yeah, I'm building a unit test right now [09:29] pedronis: once that is done iirc you suggested to fix doAutoConnect in ifacestate/handlers.go [09:29] pedronis: so that it would check the wait chain [09:29] yes [09:29] correct [09:29] but tests are the first order of things [09:29] pedronis: and if there is an install of the content provider in the wait chain just do nothing (instead of retrying) [09:30] yes [09:31] wtf, on arch .dirstamp is created in the $(builddir)/snap-mgmt, on ubuntu it's in $(srcdir)/snap-mgmt and generating snap-mgmt obviously fails [09:31] pedronis: great, yeah, working on a test now that models the current cosmic problem closely and once that is done I look at the other bits [09:31] (s/other bits/fix/ [09:31] ) [09:32] mvo: then the other related issue (but not related to the bug) would be to teach image to at least warn if there are missing content providers [09:32] like it does for bases now [09:32] * mvo nods [09:34] mvo: #5797 green after adding comment as requested; can i merge? [09:34] PR #5797: osutil, o/snapshotstate, o/sss/backend: quick fixes [09:36] Chipaca: nice! out of curiosity, why does runuser fail as root?
pardon my ignorance on this [09:36] mvo: because the user doesn't exist [09:36] it's faked [09:36] Chipaca: ok [09:36] mvo: if you're not root, runuser doesn't come into the picture so it's not an issue [09:37] aha, now I get it [09:37] mvo: the test could be changed to fake runuser [09:37] PR snapd#5797 closed: osutil, o/snapshotstate, o/sss/backend: quick fixes [09:37] mvo: but it's diminishing returns land [09:38] Chipaca: no worries, just wanted to understand it better, that's fine [09:39] Chipaca: do we run our unit tests as root in spread? or do we use a test user there [09:39] spread tests default to root [09:40] pedronis: the unit tests are run as the test user afaik [09:40] pedronis: yep [09:40] pedronis: su -l -c "[...] ./run-checks --unit" test [09:41] mborzecki: the meat [09:41] Chipaca: good [09:41] pedronis: the gccgo tests are run differently, but still as test user: su - -c "cd $GOHOME/src/github.com/snapcore/snapd && dpkg-buildpackage -tc -Zgzip" test [09:41] we run the tests as root during package build [09:41] mvo: is that actual root, or fake root? (both would confuse runuser) [09:42] fakeroot [09:42] well [09:42] usually [09:42] but iirc in pbuilder it will be real root [09:42] mborzecki: https://github.com/snapcore/snapd/pull/5802 [09:42] PR #5802: cmd/snap-update-ns: introduce trespassing state tracking [09:43] mborzecki: I staged it so that we can review the algorithm without discussing how the assumptions/restrictions are passed around [09:43] PR snapd#5802 opened: cmd/snap-update-ns: introduce trespassing state tracking [09:43] mborzecki: so that we can separately make that decision in the next branch, either as a new argument or as a helper that holds the state and has MkDir...
methods [09:43] mborzecki: but all of the essential new bits are here now [09:44] mborzecki: have a look please and let me know, after this I will have the RFC branch that just passes Restrictions as an optional argument (as in the code you read last week) [09:45] mborzecki: and keeps track of Assumptions in main and in a few places below [09:45] ok [09:45] mborzecki: I made unit tests 100% cover the new module [09:45] mborzecki: and moved a pair of helpers there (and made them private since they only exist to fix trespassing issues) [09:46] mborzecki: one more thing to ask is if we should move this to osutils now [09:46] mborzecki: or not [09:50] zyga: iirc there was some concern that if we move security-sensitive code to osutils, changes there may go under the radar or sth [09:50] mborzecki: yeah but as I said on Friday during the call, I think we're past that and we just need to be mindful about it; we use mount code from osutil already because keeping two copies was less useful [09:51] zyga: aha, in that case i'm all for it [09:52] mborzecki: thanks, I'd like Chipaca to weigh in as well (and mvo) [10:05] PR snapd#5801 closed: cmd/snap-update-ns: remove the unused Secure type [10:24] seems like the automated review failed again for VLC, " [10:24] Task a7261f59-ffc0-4a32-b732-3e9d16f8457b failed.
[10:32] brb, break [10:53] * zyga never takes breaks [10:53] mborzecki: https://github.com/zyga/snapd/commit/7ab0f9f3344e2306779695aba0c355c767d8b0ee [10:53] that's the final cherry on top [10:53] mvo: ^ CC that's what I mentioned [10:53] mborzecki: now the question is, is that sensible or is there an easier way to pass the required state around [10:56] actually https://github.com/zyga/snapd/commit/75f47d017634159b6415246901a088e308c54cc2 is the better link [10:56] anyway [10:56] break for real now :) [10:56] coffee and dog [10:56] ttyl [11:10] PR snapd#5803 opened: ifacestate: fix hang when retrying content providers [11:31] mvo: ^ nice, thanks for that === chihchun is now known as chihchun_afk [11:38] pstolowski: thanks, my pleasure [11:41] mvo: thank you, a couple of initial comments [11:44] pedronis: aha, good one, will address right after lunch [11:44] pedronis: thank you! [11:45] PR snapd#5804 opened: ifacestate: fix hang when retrying content providers (2.35) [11:48] off to pick up the kids [11:52] zyga: need karma: https://bodhi.fedoraproject.org/updates/snapd-2.35-1.fc29 [11:53] Son_Goku: I'll test on my box today [11:54] thanks! [11:54] PR snapd#5800 closed: cmd: use systemdsystemgeneratorsdir, cleanup automake complaints, tweaks [11:57] zyga, mborzecki, also one of you guys should package the new deps added to snapd [11:57] for Fedora [12:11] is there a way to order services across snaps? like start my service after another snap's service? [12:14] mborzecki: I'll incorporate your review of 5760 into the last branch and close it, ok?
[12:14] Saviq: no, there is no such feature [12:25] PR snapcraft#2252 closed: build providers: shell in provider if debug is used [12:34] PR snapd#5760 closed: cmd/snap-update-ns: don't trespass on host filesystem in /etc and other places [12:36] PR snapd#5788 closed: tests: add publisher regex to fix the snap-info test pass on sru [12:40] PR snapd#5514 closed: daemon, overlord/state: warnings pipeline [12:42] re [12:42] Son_Goku: which new deps? juju/ratelimit? [12:42] mborzecki, yes [12:42] Son_Goku: iirc it's already packaged [12:42] in Fedora? [12:42] well, then [12:42] apparently it is [12:43] Son_Goku: https://src.fedoraproject.org/rpms/golang-github-juju-ratelimit [12:43] * Son_Goku wonders what other canonical project is in Fedora... [12:43] mborzecki: oh, nice, it's already packaged! [12:43] Son_Goku: simple-scan? [12:44] Chipaca, is that written in go? [12:44] Son_Goku: no [12:44] lxd? [12:44] iirc, it's in glibified C [12:44] mborzecki, not in fedora yet, iirc [12:44] Son_Goku: you didn't ask about go :-D [12:44] Chipaca, I figured that was implied :P [12:44] Son_Goku: to a nitpicker like me? heck no [12:46] Son_Goku: I'd say jhbuild but I think that predates canonical by a couple of years [12:47] well I mean there are going to be a bunch of things whose upstreams happen to be Canonical employees but wearing different hats [12:47] I'd have put jhbuild in that category [12:48] cjwatson: :-) yes [12:49] cjwatson: I was just pulling Son_Goku's leg [12:49] also, I went to the wiki only to find it gone :-( [12:49] we used to have a wiki page of all our free software contributions and can't find it now [12:49] anyhoo [12:49] Chipaca, that's... depressing [12:50] Son_Goku: the wiki? yes [12:50] Son_Goku: too big, not very good search [12:50] anyway, the fedora package review gives me no clues, [12:50] whatever it was may not be in the distro anymore [12:50] the reverse dep query brings nothing [12:50] Son_Goku: about what?
[12:51] Chipaca: you might be thinking of the one on the internal wiki - https://wiki.canonical.com/UbuntuEngineering/UpstreamContributions [12:51] what led to golang-github-juju-ratelimit being in Fedora [12:51] the only projects I know of that use juju libraries are lxd and juju itself [12:51] it's the sort of thing that is just about manageable when you're 50 people and completely unmanageable when you're 500 or whatever [12:51] (and obviously now snapd) [12:51] cjwatson: I thought there was a more project-level one hanging off of https://wiki.canonical.com/OpenSource (but my memory is only slightly worse than the wiki's search) [12:52] maybe, not sure [12:52] Son_Goku: there's a what-uses-this-go-lib thing online [12:52] where was it [12:52] cjwatson, Red Hat recently changed their list to a GitHub thing, that Red Hatters can send PRs for: https://redhatofficial.github.io/#!/main [12:53] Son_Goku: https://godoc.org/github.com/juju/ratelimit?importers [12:53] it's nowhere near complete, but it's a start [12:53] Chipaca, so k8s requires it [12:54] and thus openshift, too [12:54] Son_Goku: openshift is still a thing? 
[12:54] yes [12:54] the upstream project is OpenShift Origin / OKD [12:54] https://www.okd.io/ [12:55] I've got an openshift pint glass and was thinking how funny it was that it'd survived longer than the thing it was promoting [12:55] guess I was wrong, for now :-) [12:55] OpenShift 3 rebased on top of k8s [12:56] as a consequence, OpenShift devs are now contributing to k8s [12:57] Son_Goku: the glass 404s now though; http://openshift.com/beershift is no longer a thing [12:58] yeah, that's gone now === sergiusens_ is now known as sergiusens [13:26] mvo: seems like the proxy stuff is working with the core snap from the edge channel [13:28] Dmitrii-Sh: cool [13:28] Dmitrii-Sh: this will be in beta soon [13:28] Dmitrii-Sh: and ~4-5 weeks until it hits stable [13:29] mvo: thanks for the info [13:29] mvo: quick question about the overall design: I can see that /etc/environment is not affected (which is good). Neither do http_proxy/https_proxy/no-proxy of snaps themselves (also good in my view). Am I right to assume that core snap settings will only affect the snapd behavior and not apps themselves? [13:30] Dmitrii-Sh: sorry, at least that is the current design [13:30] i.e. we'd rather configure proxy settings in application-specific ways because they may have multiple network destinations (Internet and local DC network) [13:30] mvo: what I mean is that the current design is good :^) [13:30] Dmitrii-Sh: excellent :) [13:31] Dmitrii-Sh: (in a meeting right now so a bit slow maybe when reading/replying) [13:31] mvo: np, thanks for the quick response [13:35] Can somebody give me a hint why my install hook is failing to find bundled binaries (`/usr/bin/env ruby`: no such file or directory), but when I run the exact same script as an app command after installation, everything works fine? I know that hooks run as root, but they should have access to the snap's binaries nonetheless? 
[13:37] dot-tobias: if you are talking about ruby from stage-packages this is similar to what we had to do with staged python2 https://github.com/lolwww/fcbtest/blob/master/snapcraft.yaml#L16 PATH: $SNAP/bin:$SNAP/usr/bin:$PATH [13:37] dot-tobias: snapd expects all binaries to be called to live inside the snap [13:38] for hooks and apps [13:38] the difference for apps is that we write a wrapper around those that satisfies snapd's requirement [13:39] hooks do not get wrappers if they are just created by adding them to "snap/hooks" but iirc do get them if driven through a part installation [13:40] It's a ruby bundled by the ruby plugin, so I think it should be in the snap, but I'll investigate and test Dmitrii-Sh 's PATH extension example. Thanks! [13:52] mvo: do you remember that occasional unit test failure related to default providers, where the number of tasks is off by one? [13:53] abeato: hey, just FYI, I'm taking over https://github.com/snapcore/snapd/pull/5469 so that it can land [13:53] PR #5469: interfaces/apparmor: (un)load profiles in one apparmor_parser call [13:54] pstolowski: I just noticed it in one of the ppa builds [13:54] pstolowski: do you have a theory about it? [13:56] Wimpress: around? [13:59] mvo: no, i just hit it on travis and was going to ask the same ;). if no one is looking at it, i'll investigate [14:02] pstolowski: no one is looking at this yet afaik [14:04] PR snapcraft#2253 closed: build-providers: add support for --shell-after [14:04] mvo: good, will see if i can find anything [14:04] zyga, ah, saw that, thanks! [14:11] niemeyer: the PR we talked about that needs your re-review is https://github.com/snapcore/snapd/pull/5632 [14:11] PR #5632: overlord: integrate device enumeration with udev monitor [14:13] pstolowski: Ack, thanks [14:45] sergiusens / Dmitrii-Sh : Adding $SNAP/bin to $PATH worked to have /usr/bin/env ruby find the bundled ruby binary. Thank you!
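The fix Dmitrii-Sh links to boils down to a snapcraft.yaml environment override along these lines; the app and command names here are made up for illustration:

```yaml
apps:
  my-tool:
    command: bin/my-tool.rb
    # Put the snap's own bin dirs first so `/usr/bin/env ruby` resolves
    # to the staged interpreter instead of failing on the host system.
    environment:
      PATH: $SNAP/bin:$SNAP/usr/bin:$PATH
```

As noted above, hooks placed directly in snap/hooks get no generated wrapper, so they are the case most likely to need this explicit PATH.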
[14:47] * zyga returns from lunch [14:49] PR snapd#5805 opened: cmd/snap-update-ns: enforce trespassing checks [14:54] mvo: I _suspect_ the latest snapd in cosmic has broken PATH generation for systemd units: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1791691 [14:54] Bug #1791691: PATH broken in systemd units [14:54] Could you take a look? === chihchun_afk is now known as chihchun === sergiusens_ is now known as sergiusens [15:03] I'm currently trying to start an application installed via snap within an unprivileged LXC container. The host is running Debian Stretch, now with a 4.17 kernel from Debian backports [15:03] the error I'm still having is the following though: [15:04] Sep 10 14:13:54 rocketchat2 snapd[150]: AppArmor status: apparmor is enabled but some features are missing: dbus, network [15:04] Sep 10 14:13:54 rocketchat2 snapd[150]: error: cannot start snapd: cannot mount squashfs image using "fuse.squashfuse": mount: /tmp/selftest-mountpoint-278537339: permission denied. [15:04] T_X: that is just a comment, it should be harmless in practice [15:04] T_X: the part about fuse.squashfuse is the real issue [15:05] hm, okay. I do get the following "DENIED" messages in dmesg of the host though: [15:05] but having said that, I _don't_ know if this will work for you; mvo ^ is that part of the unsupported scenarios?
[15:05] [ 70.019332] audit: type=1400 audit(1536588822.918:39): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-rocketchat2_" name="/sys/fs/cgroup/unified/" pid=2048 comm="systemd" fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec" [15:05] [ 70.019405] audit: type=1400 audit(1536588822.918:40): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-rocketchat2_" name="/sys/fs/cgroup/unified/" pid=2048 comm="systemd" fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec" [15:05] [ 74.560221] audit: type=1400 audit(1536588827.462:42): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-rocketchat2_" name="/run/systemd/unit-root/run/" pid=2247 comm="(networkd)" flags="ro, nosuid, nodev, remount, bind" [15:05] [ 78.724518] audit: type=1400 audit(1536588831.626:49): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-seafile_" name="/home/" pid=2393 comm="(networkd)" flags="ro, nosuid, nodev, remount, bind" [15:05] T_X: are you using v2 cgroups? [15:06] or is that a part of LXD somehow? [15:06] zyga: not sure, would I need to enable that explicitly? [15:06] not sure, [15:06] cgroups v2 came with one of the more recent kernels, rights [15:06] *right? [15:06] v2 is not supported widely AFAIK (not in snapd for sure) === chihchun is now known as chihchun_afk [15:07] I was under the impression that snapd within an unprivileged container should in theory be possible: https://blog.ubuntu.com/2016/02/16/running-snaps-in-lxd-containers [15:08] T_X: it is [15:08] T_X: I'm not an LXD expert [15:08] T_X: but, there are a _lot_ of variables :-) [15:08] I'm wondering whether he was using a kernel provided by Ubuntu on the container host, though, with some non-upstream patches [15:09] T_X: what os is the guest running?
[15:09] and whether that might be creating these issues for me [15:09] the guest is running Ubuntu [15:09] with a Debian Stretch in the container I had even more issues [15:10] Ubuntu 18.04 that is [15:10] T_X: at this point I'd ask stgraber [15:10] I hope they don't mind [15:12] also, within the Ubuntu 18.04 container the squashfuse package, version 0.1.100-0ubuntu2 is installed as suggested by stgraber's excellent blog post [15:20] mvo, hey [15:20] mvo, https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1791691 is critical [15:20] Bug #1791691: PATH broken in systemd units [15:20] mvo, do you want me to write a patch for handling empty or unset PATH? [15:21] mvo, cause current behaviour is broken =))) i've blocked release of snapd into bionic-updates. [15:30] mvo, zyga, should we do this for the other test suites #5806 ? [15:30] PR snapd#5806 opened: tests: fix install snaps test by adding link to /snap [15:30] PR #5806: tests: fix install snaps test by adding link to /snap [15:30] mvo, zyga or is it something we should fix on the installer/setup [15:37] xnox: thanks, feel free to write a patch, otherwise I will fix it in my morning [15:38] mvo, well, it is fairly straightforward, and it can wait until tomorrow. plus we'd want fix-up uploads into bionic-proposed and cosmic =/ and i don't know how you do those right, to comply with the sru process etc. [15:38] mvo, writing the fix is the smallest piece here. [15:39] xnox: yes, will do either now (if I manage before I need to leave for hockey) or tomorrow [15:39] xnox: first thing [15:39] mvo, i did not test to see if a simple "#!/usr/bin/printf PATH=/snap/bin\n" is actually enough to cover all cases with and without PATH set in systemd. [15:39] xnox: I'm slightly puzzled, I thought systemd always sets a PATH if none is set [15:39] mvo, hockey is more important =) [15:39] mvo, it does.... after generators are run [15:39] as i have now found out and confirmed.
are we aiming to have /snap/bin prepended, or appended to PATH? [15:39] but it does 'gather' environment in peculiar ways w.r.t. substitution and expansions [15:40] appended [15:40] xnox: interesting, sounds like the test we added was inadequate :/ [15:40] Chipaca, always appended. [15:40] mvo, it was fine for initramfs-full boots which have PATH set when /sbin/init is execed; but that's not true in the lxd case, or initramfs-less boots [15:40] xnox: what will happen if the generator sets the path to "/snap/bin" only - will systemd still add sensible defaults? [15:41] mvo, it does [15:41] xnox: what's the logic then? will it append the default path? [15:41] for the no-PATH-set-in-LXD case. and i did not feel like rebooting my machine. actually let me test these combos in a vm quickly [15:41] default path appears to be prepended [15:41] xnox: nice [15:41] xnox: that sounds like the fix will be trivial, that's good to know [15:43] PR snapd#5807 opened: interfaces: extra argument for static attrs in NewConnectedPlug/NewConnectedSlot [15:45] * cachio lunch [15:51] xnox: https://github.com/snapcore/snapd/pull/5808/files [15:51] PR #5808: snapd-env-generator: fix when PATH is empty or unset [15:51] PR snapd#5808 opened: snapd-env-generator: fix when PATH is empty or unset [15:53] roadmr, I once again have an inbox full of "manual review requested for version ..." for nextcloud's uploads. They seem to have been resolved (perhaps that was you) but now I have tons of revs that aren't in a channel. I have to be honest, this is getting very frustrating [15:55] pstolowski: Just one final detail and looks okay, thanks for the changes! [15:55] I assume those are reviews timing out again.
And the fact that pushes don't remember the channel they're supposed to go to is maddening [15:56] niemeyer: thanks, looking [15:57] kyrofa: One of my bits of upcoming planned work will have the side-effect of helping with that (see comment on https://docs.google.com/document/d/1vabN2wBNtGdBqShN1xi9a-osfGPtmA8EARdXfs2kfDI/edit#heading=h.l2jjh07gbn3p) [15:57] (Only specced so far, not yet started and won't be for a while) [15:58] Dmitrii-Sh: Sorry, I missed your earlier ping. Can I help? [15:58] It would still need rephrasing the push/release API in some way; but it would avoid the obvious negative side-effects of doing that in the current system [16:01] kyrofa: I have a merge proposal to increase that timeout, hmm perhaps I'll ask for it to be cowboyed now so you don't face so much pain and frustration [16:01] kyrofa: the "don't remember the channel they're supposed to go to" thing is https://bugs.launchpad.net/snapstore/+bug/1684529 but it's not as straightforward as "just remember this" [16:01] Bug #1684529: Need for manual review loses intent to release to channel [16:01] mvo, that looks good to me. [16:02] mvo, in testing, things are weird: if PATH is set in the environment, and one doesn't expand it, it looks like it does override things and then initramfs-full boots fail to come up [16:02] cjwatson, that's a helpful doc, thank you. I look forward to those improvements. Are they actually on the store roadmap? [16:02] mvo, so this fix for empty or unset path looks good to me. [16:02] mvo, please upload in all the places =) [16:03] kyrofa: yes [16:03] well, modulo unresolved disagreement about the expiry thing, but the rest are [16:04] niemeyer: could you take a look at https://github.com/snapcore/snapd/pull/5713 ?
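[editor's aside] The snapd-env-generator fix discussed above (PR 5808) is about appending /snap/bin to PATH even when PATH is empty or unset at the time the generator runs. A minimal shell sketch of that corner case, assuming a hypothetical helper name `emit_path` and the append-only policy mentioned in the log; the real generator ships with snapd as a compiled program and may differ:

```shell
# Sketch of an environment-generator's PATH logic (illustrative only).
# Takes the current PATH value as $1 and prints the PATH= line that
# would be emitted for systemd to pick up.
emit_path() {
    path="$1"
    snap_bin=/snap/bin
    if [ -z "$path" ]; then
        # PATH unset or empty: emit only /snap/bin; per the discussion
        # above, systemd prepends its default path afterwards.
        printf 'PATH=%s\n' "$snap_bin"
    else
        case ":$path:" in
            *":$snap_bin:"*) printf 'PATH=%s\n' "$path" ;;       # already present
            *) printf 'PATH=%s:%s\n' "$path" "$snap_bin" ;;      # append
        esac
    fi
}
```

The empty-PATH branch is exactly the case that broke initramfs-less and LXD boots: before the fix, the generator assumed PATH was always set when generators ran.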
i think all outstanding issues have been addressed [16:04] PR #5713: many: mount namespace mapping for parallel installs of snaps [16:04] mborzecki: Will do later today [16:04] niemeyer: thanks [16:04] cachio: looking [16:04] mborzecki: Now I'll take a break and head outside for a while [16:04] mborzecki: Sorry, that wasn't to you specifically :) [16:05] cachio: that's a tricky question, I think it depends on what you want to test [16:05] See you all later or tomorrow [16:05] advocacy basically said "iz broken, pls fix" so it's kind of a pile of things that improve the situation in various ways :) [16:05] cjwatson, this cycle's roadmap, or next? [16:05] cachio: I think it's okay to set up the symlink in prepare.sh but then again, it's not really testing what people will otherwise see so it may hide bugs [16:05] kyrofa: depends how quickly we finish with git per-branch permissions. realistically probably early next cycle [16:06] cjwatson, fair enough, thanks for the info :) [16:08] Wimpress: we are trying to get this snap into the store for field engineering purposes https://forum.snapcraft.io/t/classic-confinement-review-request/7226 It's a test runner (rally) + a tempest plugin for openstack. It's an MVP for now as it allows us to run openstack validation tests. We called it fcbtesting on purpose because the way we implement it may be team-specific [16:09] the point of having it in the snap store is that we can download it from it in proxied environments where we cannot bring all the pip deps along with us === sparkieg` is now known as sparkiegeek [16:09] so, I just wanted to know what the review procedure would be and how we could move it forward [16:12] Dmitrii-Sh: hey, I think you just need two votes that approve it [16:13] we tried working around the python multiprocessing (shmem) issue via a preload but did not have any luck with building the preload on bionic.
We'd probably have to ship a patched python2 version with the snap otherwise that allows using custom paths [16:14] zyga, so, we should do it as part of the installer? [16:14] create that symlink? [16:14] zyga: from the snapcrafters org or? https://github.com/orgs/snapcrafters/people [16:14] that should be the best solution [16:14] ? === pstolowski is now known as pstolowski|afk [16:17] cachio: installer? [16:17] zyga, when we build the deb or rpm [16:18] Dmitrii-Sh: no, from a select group of people on the forum [16:18] cachio: no, I don't think we can do that [16:18] cachio: i see a bunch of failures on amazon linux 2 with 'no space left' [16:18] cachio: I mean, the package cannot ship the symlink [16:18] zyga, but we can create it when installing the package [16:19] mborzecki, do you have a link? [16:19] cachio: https://travis-ci.org/snapcore/snapd/builds/426715967?utm_source=github_status&utm_medium=notification [16:20] cachio: maybe let me ask this way, why do you want to change that? is it for a specific test?
[16:20] cachio: I think we could have a helper that installs a locally built snap [16:20] cachio: that uses classic confinement [16:20] cachio: and also creates the symlink if it is missing [16:20] cachio: but again, I don't have the full context [16:21] zyga, agree on that [16:21] zyga, I ask if a user could face this problem trying to install a classic snap [16:22] cachio: yes, that's unfortunately how this is [16:24] cachio: i'm going to restart that job [16:25] mborzecki, go ahead [16:25] * zyga focuses on Fedora for now [16:25] * zyga hugs Pharaoh_Atem for his tireless packaging effort :) [16:26] * Pharaoh_Atem raises eyebrow [16:26] mborzecki, it is really weird because that image has not changed [16:26] and it is 25GB [16:27] zyga: technically, the package can ship the symlink [16:27] in rpm land, symlinks can be shipped [16:27] they're just files like any other [16:27] zyga, https://paste.ubuntu.com/p/RZnddCXszt/ [16:27] you just have to be careful about how you create them [16:27] this is happening on fedora right now [16:27] zyga, is that an issue? [16:27] cachio: right, that's expected [16:28] cachio: the users have to create the /snap symlink to install snaps using classic confinement [16:28] Pharaoh_Atem: can we ship /snap -> /var/lib/snapd/snap? [16:28] no [16:28] but not because rpm can't do it [16:28] but because we shouldn't have that by default [16:28] PR snapcraft#2255 opened: catkin plugin: use SnapcraftException [16:28] people should create the symlink themselves, because we don't know how their systems are going to be set up [16:29] Pharaoh_Atem: could we create the symlink from snapd iff /snap doesn't exist?
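[editor's aside] The /snap -> /var/lib/snapd/snap symlink that users must create by hand for classic snaps on Fedora (discussed above) can be created on demand like this. The helper name `ensure_snap_symlink` and its parameters are mine for illustration, not snapd API; the paths are the standard snapd ones:

```shell
# Create the /snap symlink only when it is absent. Parameterized so it
# can be exercised against a scratch directory; real use would call it
# with no arguments (and root privileges).
ensure_snap_symlink() {
    link="${1:-/snap}"                     # where the symlink goes
    target="${2:-/var/lib/snapd/snap}"     # snapd's real snap mount dir
    # -e is false for dangling symlinks, so also check -L before creating
    if [ ! -e "$link" ] && [ ! -L "$link" ]; then
        ln -s "$target" "$link"
    fi
}
```

Running it twice is a no-op, which is why a test suite can safely call it before every classic-snap install.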
[16:29] that would be a scriptlet, but yes [16:29] the issue with that is that it makes behavior a bit less predictable [16:29] not from a scriptlet, just from snapd's bootstrap phase [16:29] ah [16:30] people would murder you for that [16:30] zyga: cachio: we have a tests/main/classic-confinement which also runs on fedora and creates the symlink on demand [16:30] I'd have to disable it to prevent it being removed from Fedora [16:30] installing classic snap, /snap absent, create symlink [16:30] eh, ok :) [16:30] cachio: ^ there's your answer :) [16:30] it's just a policy, not a technical limitation [16:30] correct [16:30] zyga, :( [16:31] Pharaoh_Atem, zyga thanks for the answer [16:38] Pharaoh_Atem: where do you need karma? I only see F29 [16:38] zyga: F29 is the only one [16:38] all the others are already pushed through [16:52] Pharaoh_Atem: btw, the fix-info-dir problem is back [16:57] eh? [16:57] really? blech [16:58] as is the cache hash checksum issue [16:58] I'm ignoring that, just a curiosity on the side [17:34] PR snapcraft#2256 opened: local source: don't include .snapcraft directory [18:22] mvo: is there a reason most calls of MockModel* in snapstate_test.go don't restore? [18:43] can a snap that uses the layout feature be released to "stable" ? [18:43] om26er: I actually don't know but I believe it should be [18:43] om26er: layouts should be out of beta very soon now [18:44] om26er: 2.36 IMO [18:45] zyga: I had a snap with "grade: devel" and it got published to the store after manual review (using layout). Today I changed that to "grade: stable" and it seems my upload requires another review as well. [18:45] I think the store has no "memory" of the past approval [18:45] so I was wondering if the problem is because I changed it to "stable" [18:46] I don't think so [18:46] hmm, I'll wait for the review then. [18:47] in case anyone is interested https://dashboard.snapcraft.io/snaps/xbr-dashboard/revisions/23/ [18:48] om26er: how have layouts worked for you?
[18:48] zyga: well, my Qt based app wouldn't start if layouts were not there, so it definitely worked pretty great. [18:48] did you see the new documentation page for them? [18:49] no, I was probably given a working example from _ogra_ [18:49] om26er: check out https://forum.snapcraft.io/t/snap-layouts/7207 [19:02] * cachio afk [19:29] PR snapcraft#2257 opened: meta: take charge of environment used to run commands [19:59] hm, selinux, ... [20:08] Need some help with snaps and apparmor [20:08] hello marcoceppi [20:08] My system's apparmor is crashing after a snap refresh [20:08] can you please tell me more [20:09] do you have logs? is the kernel crashing? which system are you using? can you provide the output of "snap version" [20:09] zyga: snap daemons aren't starting, which led me to logs, which led me to a bug, which led me to apparmor https://paste.ubuntu.com/p/57ZSCCDVp4/ [20:09] zyga: https://paste.ubuntu.com/p/BsvKCkjzY6/ [20:11] kernel is up, but the lxd daemon isn't running, complains about File not found, led me to this bug, but I'm unable to remedy it by restarting apparmor https://github.com/lxc/lxd/issues/4402 [20:11] apparmor is not a daemon, the service just loads the profiles into the kernel [20:12] can you tell me more about your setup please [20:12] is this a lxd container or is the issue on the host? [20:12] 14.04 with HWE kernel, bare metal server in OVH [20:12] issue with the host [20:12] OVH? [20:12] server provider, ovh.com [20:12] ah, I see [20:13] when did this issue start to show up? [20:13] It's been running a long time, moved from apt lxd -> snap lxd about 10 months ago, issue appeared this morning after a container got stuck [20:14] can you show me "snap changes" from the host? [20:14] maybe some snap refreshed and started causing issues (somehow) [20:14] also, what does systemctl status apparmor.service say? [20:14] hmm [20:14] no such file or directory?
[20:14] It's in my first pastebin [20:14] yes [20:14] did you by any chance remove the apparmor userspace package from the system? [20:14] error: no changes found [20:15] what does dpkg -L apparmor say? [20:15] you want big L or little l? [20:15] -L [20:15] https://paste.ubuntu.com/p/NSZHyKnJzH/ [20:16] ah, wait, 14.04 you say? [20:16] correct [20:16] * zyga doesn't remember if on 14.04 + snapd apparmor is managed by upstart or by systemd [20:16] systemd is on here [20:16] what does /etc/init.d/apparmor status say? [20:16] yes but that's a special copy of systemd for snapd [20:17] I don't believe it actually manages apparmor [20:17] https://paste.ubuntu.com/p/knJxXZGkjg/ [20:17] ok, so you have a number of apparmor profiles loaded [20:17] so what's the issue that you observe? is lxd broken? [20:18] I suppose, the daemon isn't starting [20:18] what does systemctl status snapd.service say? [20:18] active, running [20:19] and systemctl status snap.lxd.lxd [20:19] systemctl status snap.lxd.server snap.lxd.server.service Loaded: error (Reason: No such file or directory) Active: inactive (dead) [20:19] it's not lxd.server, it's lxd.lxd.service [20:19] or lxd.daemon.service [20:19] the app names are the same as the apparmor profile names [20:20] snap.lxd.daemon [20:20] hey stgraber, thanks for that! :) [20:20] initctl: Unknown job: snap.lxd.daemon [20:20] sorry, [20:20] yeah, that's going to be a systemd one [20:20] https://paste.ubuntu.com/p/fTfpC9qf52/ [20:21] marcoceppi: probably not related, "dpkg --configure -a" (just to rule out dpkg aspects) [20:21] marcoceppi: journalctl -u snap.lxd.daemon -n 300 [20:21] marcoceppi: what happens when you "systemctl start snap.lxd.daemon.service" ? [20:21] Sep 10 15:47:57 notaws.com lxd.daemon[1038]: cannot change profile for the next exec call: No such file or directory [20:21] now we are getting somewhere!
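[editor's aside] The "cannot change profile for the next exec call: No such file or directory" error generally means the kernel could not find the AppArmor profile requested at exec time. A quick way to check whether a snap's profile is actually loaded is to look at apparmorfs; this sketch parameterizes the profiles file so it can be tested, the real path being /sys/kernel/security/apparmor/profiles:

```shell
# Each line in apparmorfs' profiles file looks like:
#   snap.lxd.daemon (enforce)
# Return success iff the named profile appears there. The helper name
# is mine for illustration; the file format is standard apparmorfs.
profile_loaded() {
    name="$1"
    profiles="${2:-/sys/kernel/security/apparmor/profiles}"
    grep -q "^$name " "$profiles"
}
```

Typical use: `profile_loaded snap.lxd.daemon || echo "profile missing"`. In this incident the profiles were loaded, which is why the suspicion moved to the aa-exec wrapper instead.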
[20:22] ok, so that's a snap-exec thing then [20:22] *snap-confine [20:22] too bad that's a generic apparmor error, not sure if it's coming from snap-confine or lxd [20:22] apparently, when I ran start just now [20:22] stgraber: note that one of the other pastebins shows that the profiles are loaded [20:22] lxd woke up [20:23] stgraber: how does lxd apply apparmor, on exec or immediately? [20:23] zyga: LXD only plays with apparmor when containers start, the failure here is much much before that [20:23] ack [20:24] zyga: what it could be is that our "aa-exec" wrapper which passes "-p unconfined" is somehow failing in this case, but that'd be pretty weird given that unconfined is guaranteed to always exist [20:24] marcoceppi: uname -a [20:24] So, could this just be a 14.04 + lxd + apparmor profile loading ordering issue? [20:24] Linux notaws 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux [20:24] PR #160: Trivial fix for the output of "snappy list" [20:24] marcoceppi: I don't know yet [20:24] that looks like a vanilla ubuntu kernel [20:24] yeah, kernel is good, that matches our test system for LXD on 14.04 [20:25] aye, hwe, but just what comes from the archive [20:25] happy to just shove this machine to xenial if that's a plausible fix, now that lxd is run from the snap I'm less worried about a system update [20:27] marcoceppi: please describe this on the forum -- it's late anyway so I will EOD soon [20:27] I'm looking at a selinux issue now [20:47] PR snapcraft#2258 opened: build providers: snapcraft images for multipass [22:08] PR snapcraft#2259 opened: cli: show trace if no tty [22:38] PR snapcraft#2251 closed: pluginhandler: stop using alias for snapcraftctl [22:45] PR snapd#5809 opened: tests: using single sh snap in interface tests [23:50] PR snapcraft#2254 closed: build providers: add support for --shell
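[editor's note] On the layouts question earlier in the log: a `layout` stanza in snapcraft.yaml lets a strictly confined snap expose parts of $SNAP at fixed system paths, which is what makes many Qt apps (like om26er's) start at all. A minimal sketch of the documented `bind` and `bind-file` forms, with the concrete paths chosen purely for illustration:

```yaml
# snapcraft.yaml fragment: map $SNAP content onto the absolute paths
# the application hard-codes (these particular paths are examples)
layout:
  /usr/share/qt5:
    bind: $SNAP/usr/share/qt5
  /etc/demo-app.conf:
    bind-file: $SNAP/etc/demo-app.conf
```

See https://forum.snapcraft.io/t/snap-layouts/7207 (linked above) for the full set of layout types.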