[00:27] PR snapcraft#1805 opened: lifecycle: use the -no-fragments mksquashfs option
[02:44] PR snapd#4396 opened: snap: use the -no-fragments mksquashfs option
[06:39] morning
[06:56] PR snapd#4395 closed: update HACKING.md for distro build dependencies
=== chihchun is now known as chihchun_afk
=== chihchun_afk is now known as chihchun
[08:03] good morning
[08:08] kalikiana: morning
=== __chip__ is now known as Chipaca
[09:03] mborzecki: o/
[09:03] Chipaca: hey
[09:03] mborzecki: would now be a good time to rant about the insanity of signed st_size
[09:04] hahaha :) that's the ParseInt() thing?
[09:05] mborzecki: yeah
[09:05] mborzecki: stat64's st_size is a long long
[09:06] mborzecki: stat's st_size is unsigned long (not unsigned long long)
[09:07] (it's just an unsigned int on 64 bits, in fact; it's unsigned long on i386)
[09:07] Chipaca: fair enough
[09:08] mborzecki: and golang's os.FileInfo's Size() returns an int64
[09:08] mborzecki: and it bothers me _every_ _time_
[09:08] like, who's this person going around creating negative-sized files
[09:08] i'd like a word
[09:10] Chipaca: is there some call that returns size_t directly, usually the reason is to mark errors as -1
[09:10] Chipaca: can you add a comment there in the code? just in case someone stumbles on it in the future
[09:10] pedronis: getuid returns unsigned ints, and uses -1 as a flag value
[09:11] mborzecki: psh. comments. what's next, *tests*?
[09:11] pedronis: but, yeah, probably
[09:11] Chipaca: nah, tests you can skip
=== chihchun is now known as chihchun_afk
[09:11] pedronis: and I guess the feeling is if reaching the limit of 63 bits is close, 64 bits is just as close
[09:11] mborzecki: :-D
[09:11] mborzecki: i'm quoting you on that one
[09:13] * Chipaca looks forward to having 20EB sd cards
[09:15] Chipaca: also defining off_t would be hard
[09:16] Chipaca: i'm doing a meetup on go at my local group today, will reference both golang bugs that you found (geteuid and syscalls that must fail, even if they don't)
[09:16] mborzecki: heh
[09:17] mborzecki: those are mundane compared to the threading one :-) but ok!
[09:17] i'll be showing the actual bits of go's asm that are responsible
[09:18] pedronis: are you reviewing that PR? just asking to know if i should wait before pushing the changes
[09:18] Chipaca: no, I don't even know what PR you are talking about, I need a belated breakfast actually :)
[09:19] pedronis: breakfast >> reviews
[09:32] * kalikiana coffee break
[09:38] Any idea why none of my snaps can save settings? Happening with Corebird and Mailspring. I'll change settings, they will be reflected in the UI but not saved. When I go back to the settings of the app the old values are still there.
[09:51] JamieBennett, any denials in the logs?
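[An editor's aside on the sizes in the st_size rant above, not from the channel itself: Go's int64 `Size()` tops out at 2^63 - 1 bytes, roughly 9.2 EB (decimal), so the joked-about 20EB sd card (about 2×10^19 bytes) really would overflow it. A quick shell sketch of that ceiling:]

```shell
# Editor's sketch: the ceiling a signed 64-bit file size can express.
# 2^63 - 1 bytes is just under 8 EiB (about 9.2 decimal exabytes).
max_size=9223372036854775807            # 2^63 - 1, the int64 maximum
eib=$(( max_size / (1 << 60) ))         # whole exbibytes that fit (floor)
echo "int64 max: ${max_size} bytes (~${eib} EiB)"
```

This prints `int64 max: 9223372036854775807 bytes (~7 EiB)` — 8 full EiB is exactly 2^63, one byte past the maximum.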
[09:56] JamieBennett check ownership of files/dirs in ~/snap
[09:57] pstolowski: Only denial I see is this but it looks unrelated: Dec 13 09:37:00 ubik kernel: [ 1876.976847] audit: type=1107 audit(1513157820.788:1187): pid=1834 uid=106 auid=4294967295 ses=4294967295 msg='apparmor="DENIED" operation="dbus_method_call" bus="system" path="/" interface="org.freedesktop.DBus.ObjectManager" member="GetManagedObjects" mask="send" name="org.bluez" pid=10850
[09:57] label="snap.mailspring.mailspring" peer_pid=1820 peer_label="unconfined"
=== chihchun_afk is now known as chihchun
[09:57] right, seems unrelated
[09:58] sergiusens: permissions look OK too
[09:58] It's strange
[10:00] JamieBennett, also, does it create any (possibly hidden) files in ~/snap/.. as you modify & save settings? `find ~/snap` before and after might give a clue
[10:00] hmm corebird seems to work just fine here, the settings are preserved and i can actually see them on d-conf
[10:01] Seems I am getting quite a few denied messages for dbus
[10:02] http://paste.ubuntu.com/26175781/
[10:02] JamieBennett: anything with ca.desrt.dconf ?
=== chihchun is now known as chihchun_afk
[10:08] Even stranger, if I change settings then close Corebird and reopen, the new settings are there.
[10:08] JamieBennett: start `dconf watch /org/baedert/corebird/` and run corebird, change some settings, you should see dconf watch listing updated keys
[10:08] if [ ! -e $(ls -1 /var/lib/snapd/snaps/ubuntu-core_*.snap | tail -1) ]; then exit 1; fi
[10:08] * Chipaca wonders
[10:09] mborzecki: nothing
[10:10] Chipaca: ls: cannot access '/var/lib/snapd/snaps/ubuntu-core_*.snap': No such file or directory
[10:10] ogra_: you're my favourite sh expert: does the above make any sense to you? as opposed to just [ -e /var/lib/snapd/snaps/ubuntu-core_*.snap ] ?
[10:10] JamieBennett: uh, sorry, this's unrelated to your woes
[10:10] lol
[10:12] Chipaca: I think you get an error from -e if there is more than one argument
[10:12] hm, probably
[10:13] just the ls, then :-)
[10:13] but yes, it's a weird way to check
[10:14] mborzecki: do external links in tweets work for you?
[10:14] (in Corebird)
[10:14] JamieBennett: are the corebird interfaces connected?
[10:15] Chipaca: yes, gsettings, home, network, ...
[10:15] JamieBennett: yes (note, i'm running latest snapd master)
[10:15] JamieBennett: dbus?
[10:16] * JamieBennett cannot see a dbus interface
[10:16] have we got any green runs in the last days? all the recent PRs seem red
[10:16] what is the exact name? Should it be dbus?
[10:16] pedronis: my pr was green yesterday
[10:16] pedronis: first try, too
[10:16] impressive
[10:17] pedronis: and finishing at ~5pm which is peak SF
[10:17] Chipaca: https://pastebin.ubuntu.com/26175847/
[10:17] pedronis: and timing out
[10:18] JamieBennett: 'snap interfaces corebird' would've been nicer :-D
[10:18] JamieBennett: corebird:dbus-corebird -
[10:18] Chipaca: well, I'm having issues with snaps all round so pasted them all
[10:19] JamieBennett: output of 'snap version'?
[10:19] https://www.irccloud.com/pastebin/I1GemYmg/
[10:22] * mborzecki restarting master branch travis job for the 4th time
[10:23] JamieBennett: so, about corebird, if you even get a window you're ahead of me
[10:24] Mailspring was the other that is not saving settings for me
[10:24] oh it took ages but finally popped up
[10:24] it tried to download and bunzip an h264 decoder (wat)
[10:25] JamieBennett: but, the link on "Create one" worked, so that works
[10:25] Chipaca: What I tried to do was turn autoscroll on in settings, close the settings window, then go back and see if it is still enabled
[10:27] JamieBennett: in corebird?
[10:28] yes
[10:29] JamieBennett: how do i get to settings?
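[Editor's aside on the `[ -e $(ls ...) ]` question settled above: `-e` takes a single argument, so the test errors out as soon as the glob matches more than one snap. A hedged sketch of a plain-shell alternative — the helper name is invented here, nothing like it is claimed to exist in the test suite:]

```shell
# Sketch: check whether any file matches a glob, without the
# `ls | tail` dance quoted in the log. Pass the pattern unexpanded
# (single-quoted) so the function does the globbing itself.
any_match() {
    for f in $1; do
        # if the glob matched nothing, $f is still the literal pattern
        [ -e "$f" ] && return 0
    done
    return 1
}
```

Usage would be `any_match '/var/lib/snapd/snaps/ubuntu-core_*.snap' || exit 1`, which does the same job without spawning `ls` and without tripping over multiple matches.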
[10:29] The cog in the titlebar
[10:30] JamieBennett: i have no cog in my titlebar
[10:30] next to minimise?
[10:31] JamieBennett: https://i.imgur.com/tXRVd4A.png
[10:31] Ah, you have a different theme too, everything is on the left, I'm on 17.10 and everything is on the right
[10:32] JamieBennett: this is sparta^W16.04
[10:33] https://usercontent.irccloud-cdn.com/file/F4QDY6Gw/Screenshot%20from%202017-12-13%2010-32-40.png
[10:33] Chipaca: I think there is a more general problem as no snaps seem to be able to save settings, external links in snaps do not work etc.
[10:33] * JamieBennett keeps digging
[10:34] JamieBennett: it does sound like something is broken
[10:34] JamieBennett: have you tried seeing if running core from stable fixes things?
[10:35] no, let's try that
[10:36] Chipaca: nope, that wasn't it
[10:42] JamieBennett: I don't have many ideas. I'd normally point you at zyga...
[10:45] np, just debugging dbus at the moment, it definitely looks like something is wonky there
[10:58] Chipaca: we will have to turn off some suites or something until we decide what to do, I really would like to merge a couple small PRs before leaving
=== daniellimws_ is now known as daniellimws
[10:59] for the holidays
[11:07] JamieBennett Chipaca: snap run --shell corebird ...
then touch $SNAP_USER_COMMON/stub and verify that works
[11:08] * sergiusens is on his phone warming up on a static bike to get started with physiotherapy
[11:09] pedronis: i'm trying something that might make a difference (maybe too small to be noticeable though, will see if it works at all first)
[11:16] pedronis: the biggest issue seems to be i/o timeouts from linode :-/
[11:17] the second issue seems to be that the images haven't been refreshed in too long (or could use some love to be quicker)
[11:18] (locally in my qemu images i cut down the time by ~15 minutes for 14.04 just by doing all the setup faff beforehand)
[11:18] I'm sure, I'm looking for a remedy for the next 3 days that doesn't need Gustavo though
[11:19] yeah
[11:20] * pedronis lunch
[11:20] pedronis: that's what i'm trying out: i have a branch that does a single apt-get for distro_install_package instead of one per package
[11:20] pedronis: and replaces all -comp xz with -comp gzip in ~5 mksquashfs calls we have in tests
[11:20] between them it might shave 10 minutes or more, depending
[11:20] testing it now
[11:24] Chipaca: are mksquashfs calls done in prepare?
[11:26] mborzecki: in core, yes, one
[11:27] mborzecki: and in the prepare of cmd/snap-confine/spread-tests/regression/lp-1599608/task.yaml
[11:27] mborzecki: and tests/main/ubuntu-core-custom-device-reg-extras/task.yaml
[11:28] mborzecki: and tests/main/confinement-classic/test-snapd-hello-classic/Makefile
[11:28] mborzecki: and tests/main/ubuntu-core-custom-device-reg/task.yaml
[11:28] mborzecki: and tests/main/ubuntu-core-gadget-config-defaults/task.yaml
[11:28] ah, right, the ones that were in the pr
[11:29] mborzecki:
[11:29] :)
[11:29] one thing I observed while browsing logs today: https://paste.ubuntu.com/26176245/
[11:29] this is right before the timeout
[11:30] tests/main/lxd task
[11:30] mborzecki: yeah
[11:31] any idea what's the size of this rootfs?
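[Editor's note on the single-apt-get idea mentioned above: a minimal sketch of what batching `distro_install_package` might look like. The function name comes from the log; the body is the editor's guess at the shape of the change, with an overridable `APT_GET` so the sketch can be dry-run without root:]

```shell
# Sketch of the batching idea from the discussion: collect all packages
# and hand them to one apt-get invocation instead of one per package,
# paying for the dpkg lock and dependency solve only once.
# APT_GET defaults to a dry-run echo so the example needs no privileges.
APT_GET="${APT_GET:-echo apt-get}"

distro_install_package() {
    # one transaction for the whole list
    $APT_GET install -y "$@"
}
```

With the default dry-run, `distro_install_package jq curl git` prints a single `apt-get install -y jq curl git` line; a real run would set `APT_GET=apt-get`.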
[11:31] mborzecki: it should be possible to do that beforehand also :-/
[11:32] 10+ minutes of project prepare, blazing fast download and we're pushing the job timeout limit
[11:38] ummm
[11:39] i just got green on my pr again
[11:39] dunno what y'awl are moanin' about
[11:39] :-p
[11:39] * Chipaca is sure to be run over by a bus after this amount of good luck
[11:45] Chipaca: this 2-line PR regularly takes 49+ mins: https://github.com/snapcore/snapd/pull/4388
[11:45] PR #4388: overlord/snapstate: fix auto-refresh summary for 2 snaps
[11:50] PR snapd#4397 opened: tests/main/lxd: temporarily switch to manual
[11:50] another experiment, trying to see if this is the culprit
[11:53] my test run is dying a death of a thousand "no powered off servers in Linode account exceed halt-timeout" :-/
[12:03] still seeing ~10 min prepares though
[12:27] PR snapd#4398 opened: overlord/auth,daemon: introduce an explicit auth.ErrInvalidUser
[12:28] Chipaca: mborzecki: simple PR (split out from my larger one) ^
[12:43] * Chipaca ~> lunch
=== daniellimws is now known as Guest24586
=== dows is now known as daniellimws
[12:57] Hello, is it possible for me to license snaps that I build? Is there online documentation where I can read more on this?
[12:59] Vamsi: license in what sense?
[13:01] JamieBennett (cc pstolowski): that is definitely unrelated
[13:02] pedronis: standup?
[13:02] JamieBennett (cc pstolowski): (bluez)
[13:02] jdstrand: thanks for confirming
=== alan_g is now known as alan_g|lunch
[13:04] Chipaca: such that I can make my snap a paid snap without falling into trouble.
[13:06] Vamsi: support for for-pay snaps isn't there yet -- JamieBennett or noise][ might have more detail
[13:07] JamieBennett: you should connect screen-inhibit-control for irccloud
[13:07] So no snap currently is a paid snap? All are free?
[13:07] JamieBennett: there are also a couple of unrelated denials in there that I'll investigate
[13:07] Vamsi: that is correct, the service has not been launched yet
[13:09] okay. thanks :)
[13:14] jdstrand, thanks for checking
[13:23] hm looking at some logs from ubuntu-14.04 and analyzing them with mvo's script, there's 13 minutes of prepare time, then the top tests take ~7 minutes, leaving us with 29 minutes for ~237 tests
[13:24] that's 120ms per test :)
[13:28] mborzecki: https://github.com/snapcore/snapd/compare/master...chipaca:dumb-spread-tweaks
[13:29] mborzecki: that's why we have multiple workers (and why prepare time is so crucial)
[13:32] most systems have 4 workers
[13:34] Chipaca: pushed your commit to #4393 and #4391
[13:34] PR #4393: travis: separate unit tests into separate build matrix jobs
[13:34] PR #4391: travis, run-checks: split travis job into build matrix
[13:35] * kalikiana time for lunch!
[13:39] Chipaca: also afaiu we run 3 travis jobs and have 80 machines, atm all workers for a job = 27 (6*4+3), 27*3 = 81, so we are a tiny bit overallocated as well
[13:42] pedronis: ooh, that'll be biting at least 1/3 of our jobs when we're busy :-(
[13:43] can that be dropped to 2, or does that also need gustavo?
[13:45] PR snapd#4398 closed: overlord/auth,daemon: introduce an explicit auth.ErrInvalidUser
[13:45] Chipaca: that can be changed in travis I think
[13:45] ah yes
[13:45] should I?
[13:46] Chipaca: we can do that, or disable one of the distros
[13:47] temporarily (though suse has been disabled temporarily for a long time now)
[13:47] pedronis: i've set it to 2 for now
[13:47] pedronis: let's see with mborzecki's matrix how things look
[13:51] pedronis: i suppose you can find a time scale where 'long time' feels temporary :)
[13:53] mborzecki: is 9.5 minutes prepare for 14.04 an improvement?
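[Editor's note spelling out pedronis's allocation arithmetic above — the numbers (6 systems with 4 workers each plus 3 single-worker ones, times 3 concurrent travis jobs, against an 80-machine linode quota) come straight from the log:]

```shell
# 6 systems * 4 workers + 3 single-worker systems = machines per travis job;
# with 3 jobs running concurrently we need one machine more than the quota.
per_job=$(( 6 * 4 + 3 ))
total=$(( per_job * 3 ))
quota=80
echo "per job: ${per_job}, all jobs: ${total}, over quota by $(( total - quota ))"
```

This prints `per job: 27, all jobs: 81, over quota by 1` — which is why dropping travis to 2 concurrent jobs (2 × 27 = 54 ≤ 80) relieves the pressure.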
[13:54] yup
=== alan_g|lunch is now known as alan_g
[14:10] Chipaca: 14.04 finished in 34 minutes
[14:10] mborzecki: so not necessarily better, but not necessarily worse
[14:11] in a previous run of #4393, ubuntu-14.04 finished in 24 minutes
[14:11] PR #4393: travis: separate unit tests into separate build matrix jobs
[14:13] mborzecki: but also timed out at 49+?
[14:13] 16.04 hit a timeout in that build
[14:14] well if we were overallocated, each run might have depended on when it got the last worker (unless it simply died unable to get it at all within allowed time)
[14:14] I don't know how retrying spread does to get machines
[14:14] *how much
[14:23] kyrofa: hey, I think zyga was looking at the lxc snaps not updating with priority last week. I believe he has vacation to burn and is not around a lot atm
[14:38] ^^ zyga eoy'ed already
[14:38] re
[14:45] hello
[14:45] anddam: hello
[14:47] hi how can I run the scripts in tools/travis locally?
[14:49] daniellimws: you'll probably want to have a chat with elopio about that
[14:49] ok it is regarding this task http://paste.ubuntu.com/26171261/
[14:56] from the conversation above, I probably should not send this to travis untested, to avoid eating up resources right?
[14:57] daniellimws: yes and no... I believe there was another task to make it possible to run it locally as you're asking, but it's not possible right now
[15:00] kalikiana: Well I suppose I'll just commit it then, since there's nothing running now it seems. If it's faulty it will die very quickly.
[15:01] daniellimws: Yeah. I'd say go ahead. And otherwise leo should be around soon
[15:02] is snappy a Canonical project?
[15:06] PR snapcraft#1806 opened: ci: transfer generated snapcraft snap to transfer.sh
=== daniellimws is now known as Guest13514
=== Guest24586 is now known as daniellimws
[15:18] mborzecki: what's the difference between #4391 and #4393 ?
given that John reduced the travis jobs they will eat even more travis chances
[15:18] PR #4391: travis, run-checks: split travis job into build matrix
[15:18] PR #4393: travis: separate unit tests into separate build matrix jobs
[15:18] can one be closed?
[15:18] pedronis: 4393 has unit tests as a separate job
[15:19] afaiu, travis is gating job execution already, so those should not eat up all the nodes
[15:20] they don't eat nodes, they eat travis jobs
[15:20] exactly
[15:20] to be clear I'm not against the experiment, I'm against having the experiment twice
[15:20] given our current constraints
[15:20] also, unit tests were supposedly taking quite long, so the idea was to see whether moving those to separate jobs has a positive effect on the build time
[15:21] hm i guess i can cancel the 4393 job for now, and i'll restart it in the evening
[15:22] Chipaca reduced travis jobs to 2 thinking about other stuff, that approach would need more jobs (if possible) instead :/
[15:22] and i've canceled the jobs that were not started yet in 4391
[15:23] pedronis: yeah, we need to find a sweet spot, or write a spread job runner :P
[15:24] PR snapcraft#1793 closed: project: refactor storeapi
[15:24] yea, except that one of the motivations of spread was indeed not to be on the hook for running a permanent service :/
[15:25] trade-offs
[15:58] anddam: what do you mean?
[16:01] pedronis: how are your jobs faring with the new limit? any further luck?
[16:16] should we make snapcraft or the store discourage people from calling something "foo-snap"?
[16:16] sergiusens: ^ wdyt?
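[Editor's note on the "foo-snap" naming question just above: a purely hypothetical lint — no such check existed in snapcraft or the store at the time of this log — showing how cheap the discouragement could be:]

```shell
# Hypothetical sketch: warn when a snap name carries a redundant "-snap"
# suffix (everything in the store is a snap already).
check_snap_name() {
    case "$1" in
        *-snap)
            echo "warning: '$1' ends in '-snap'; the suffix is redundant" >&2
            return 1
            ;;
    esac
    return 0
}
```

Something in this shape could run in `snapcraft` at build time or store-side at registration; whether it should hard-fail or merely warn is exactly the policy question being asked in the log.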
[16:17] Hey folks
[16:17] I'm in NY and headed to the hotel
[16:17] A bit tired but can probably try to help moving a few things forward when I get a room
[16:17] cc pedronis Chipaca
[16:18] niemeyer: ack
[16:18] niemeyer: tests are unhappy (lots of timeouts/errors/out-of-machines with linode cascading into timeouts in travis)
[16:19] If there are any fires, please drop me a note about them in the forum so I start there
[16:19] niemeyer: ratcheted travis down to 2 runs max
[16:19] niemeyer: not sure whether it's helped
[16:19] Ack
[16:19] It should help
[16:19] niemeyer: mborzecki has a couple of PRs to play with splitting spread runs into N, per arch, which might make things easier
[16:19] I'm planning a revamp of our runs so we allocate systems dynamically
=== dows is now known as daniellimws
[16:20] That might help making it cheaper and faster at the same time
[16:20] Saviq needs this too
[16:20] niemeyer: but when possible there's a bunch of tweaks we should probably do to the images to make them prepare quicker (and some tests hit the network less)
[16:21] niemeyer: but, worst case, we'll sort it out in the new year :-)
[16:21] Agreed
[16:21] And agreed :)
[16:24] * Chipaca ~> bbl
[16:38] elopio: hey. could we chat about the yaml for integration tests in snapcraft#1639?
[16:38] PR snapcraft#1639: grammar: to statement
[16:39] kalikiana: sure.
[16:40] kalikiana: this is the closest I got to what I would like to see, but still not 100% happy: https://github.com/snapcore/snapcraft/blob/master/snapcraft/tests/fixture_setup.py#L939
[16:42] elopio: mind joining me in the weekly?
[16:45] how can I enable core dump generation on ubuntu core?
[16:47] kalikiana: sure, give me a second.
[16:48] Aye
[16:58] what kernel modules are required for the build tests to pass?
[16:59] I keep seeing issues with the seccomp test when testing setuid
[17:03] specifically can: request_module (can-proto-0)
[17:17] Chipaca: I got a timeout again (49+ mins)
[17:18] Chipaca: https://travis-ci.org/snapcore/snapd/builds/315911230?utm_source=github_status&utm_medium=notification
[17:20] elopio: sent you my notes. it's probably easiest if I update my branch first, and you can look into removing update_part later since I'll be adding the parts arguments anyway
[17:21] now time to head out, for drinks and dinner
[17:24] PR snapcraft#1805 closed: lifecycle: use the -no-fragments mksquashfs option
[17:27] Hi guys, is it possible to specify the architecture of a snap when downloading it? e.g. sudo snap download --armhf
[17:32] brunosfer: not super official but UBUNTU_STORE_ARCH=armhf snap download ... should work
[17:45] elopio, you around?
[18:02] hurricanehrndz: which test suite?
[18:06] Chipaca: One sec, I think I might have figured it out. I'm going to run one more test, and I'll report back
[18:06] hurricanehrndz: ok (but also report which test suite you're talking about -- there're several, and who knows how to help varies)
[18:07] Chipaca: sounds good
[18:13] Chipaca: it's about to fail again with a timeout :/
[18:15] pedronis: Thank you. It works, however when I run UBUNTU_STORE_ARCH=amd64 sudo snap download on an amd64 system it downloads a different version of the snap than it does on an armhf with the exact same command.
[18:16] pedronis: Do you have any idea why this happens?
[18:19] brunosfer: not sure, best to try to pass a channel --stable or something explicitly
[18:20] pedronis: I will give it a shot. Thanks
[18:21] Chipaca: yup, figured out the seccomp test, it had already been fixed in master
[18:22] brunosfer: er, you mean UBUNTU_STORE_ARCH=armhf right?
[18:23] pedronis: Doesn't work with the --stable flag.
I could try to specify the version, however my intention is to get the latest version regardless of the architecture downloading the snap
[18:23] Chipaca: I mean UBUNTU_STORE_ARCH=amd64
[18:23] brunosfer: but that's telling it to download the amd64 snap, don't you want the armhf one?
[18:24] Chipaca: but could also be armhf, I just want consistency on the version I download regardless of the architecture downloading the snap
[18:24] brunosfer: that sounds like a strange requirement to me, could you explain further?
[18:24] notice that different arch, different revision
[18:24] pedronis: let _me_ press the 'restart' button this time; it likes me :-p
[18:24] Chipaca: I want to download the snap to send it offline to another device that has a different architecture.
[18:25] for the version you need to look at snap info *.snap
[18:25] pedronis: You mean that the snap I'm trying to download might have different versions for different architectures...
[18:25] brunosfer: I understand that, but then you compare what you download with “UBUNTU_STORE_ARCH=amd64 snap download thesnap” with what you install on an armhf system with “snap install thesnap”; those are different things
[18:25] brunosfer: revisions, not versions
[18:27] Chipaca: I already did :/
[18:27] Chipaca: in this case, I'm downloading nmap (because it's small) for testing purposes, but the version is in the name of the snap when I download the file right?
[18:27] no
[18:27] that's the revision
[18:28] the files produced by snap download have the revision in them
[18:28] I mean the file names of the files
[18:28] pedronis: Ok, thanks, my mistake then, I will install them and check the version.
They should be the same then :)
[18:28] brunosfer: in "snap info nmap" (on amd64), the (29) is the revision; 7.12SVN-0.6 is the version
[18:28] awww they're still using svn
[18:28] :-)
[18:28] brunosfer: snap info can give you the version without installing first
[18:28] Chipaca: Thanks for the help ;)
[18:28] pedronis: Thanks for the help ;)
[18:28] in case
[18:29] pedronis: Yes, but I was being misled by the name of the snap. Thanks for the help ;)
[18:53] elopio kyrofa snapcraft#1801 has no reviews yet, being pretty simple it should go fast
[18:53] PR snapcraft#1801: lifecycle: detect docker to auto setup core
[18:56] is it possible to limit the architectures the snap is built for on build.snapcraft.io ?
[19:17] Saviq not today, not yet
[19:21] ok, let's see what happens then
[19:22] PR snapcraft#1806 closed: ci: transfer generated snapcraft snap to transfer.sh
[19:24] How do I install a snap in a custom directory? its size exceeds /
[19:25] PR snapcraft#1801 closed: lifecycle: detect docker to auto setup core
[19:42] PR snapd#4388 closed: overlord/snapstate: fix auto-refresh summary for 2 snaps
[19:47] Wulong: pardon?
[19:51] jdstrand: what's your advice on the test errors in https://github.com/snapcore/snapd/pull/4396 ?
[19:51] PR #4396: snap: use the -no-fragments mksquashfs option
[19:51] jdstrand: they're due to http requests with the store timing out
[19:51] (unrelated to my changes)
[19:53] Chipaca: how can I install a snap to a custom dir. Eg. snap install vlc --store-to-some-custom-path=/some/custom/path
[19:54] it doesn't look like I have the ability to restart the tests
[19:55] Wulong: currently you can't, but if you have a single location that could hold all the snaps you need, you can mount that
[19:55] Wulong: what sizes are we talking?
[19:55] 30 GB
[19:55] Wulong: the .snap is 30GB?
[19:56] Not sure if it's the snap or its data. Probably the latter.
[19:56] Wulong: the snap is heavily compressed, so it might be significantly smaller
[19:56] Wulong: and, it's mounted compressed; you don't need space for more than just the snap
[19:58] Wulong: but if you do need more space, /var/lib/snapd/snaps/ is where you need to put the space (but you need to mount it before installing any snaps...)
[19:59] Ok, I'll give it another shot. Must have been an application error or config issue.
[19:59] Yea I know, but I don't have a free partition :|
[19:59] Wulong: then where were you going to put the snap?
[19:59] Thanks for the help Chipaca
[19:59] In /home
[20:00] Wulong: mount --bind is your friend
[20:00] Ah, of course. Forgot about that one.
[20:01] Linux has become too easy. Makes me lazy
[20:01] Wulong: :-)
[20:29] tyhicks: let me look
[20:31] tyhicks: jdstrand: we're having issues with spread and linode and travis
[20:32] Chipaca (cc tyhicks): whoops too late. I clicked 'restart build'
[20:32] tyhicks: if this build fails, I would comment in the PR that the test failures are unrelated
[20:32] jdstrand: 👌
[20:33] ack, thanks
[20:37] Chipaca: I managed to merge some stuff, but maybe master is red for all I know
[20:38] pedronis: when is it you're off?
[20:48] elopio look at this error, wrong snapcraft is being called https://travis-ci.org/snapcore/snapcraft/jobs/316053900
[20:51] ohh nvm
[21:02] hmm, aren't --devmode and --classic supposed to disable confinement? I have a tool that still gets permission denied when writing on NFS shares despite being installed with --devmode and --classic.
[21:03] I'm running 2.29.4.2
[21:05] lundmar, yes, although you may still be hitting traditional permissions
[21:11] PR snapd#4399 opened: rewrite snappy-app-dev
[21:57] Chipaca: Friday is eoy for me
[22:05] pedronis: ah ok, same here
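[Editor's closing note on the revision-vs-version thread from earlier: `snap download` names its output `<snap>_<revision>.snap` (e.g. `nmap_29.snap`), and the revision differs per architecture even when the human-readable version (e.g. `7.12SVN-0.6`, visible via `snap info`) does not. A sketch that recovers the revision from such a file name — the helper name is invented:]

```shell
# Sketch: pull the store revision out of a file name produced by
# `snap download`, which names files <snap>_<revision>.snap.
# The revision is a per-architecture counter; the version string is not
# in the file name at all and has to come from `snap info`.
snap_revision_from_filename() {
    local base rev
    base=${1##*/}          # strip any leading directory
    base=${base%.snap}     # strip the .snap extension
    rev=${base##*_}        # keep the text after the last underscore
    case "$rev" in
        ''|*[!0-9]*) return 1 ;;   # not a numeric revision
    esac
    echo "$rev"
}
```

So `snap_revision_from_filename /tmp/nmap_29.snap` yields `29` — which explains the confusion in the log: the same version of a snap downloaded for amd64 and armhf legitimately carries different revision numbers in its file names.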