[00:10] PR # closed: snapd#5644, snapd#5915, snapd#5962, snapd#6079, snapd#6098, snapd#6108, snapd#6111, snapd#6162, snapd#6177, snapd#6238, snapd#6258, snapd#6270, snapd#6281, snapd#6313, snapd#6322, snapd#6324, snapd#6325, snapd#6327, snapd#6329, snapd#6341, snapd#6347, snapd#6356, snapd#6360, [00:10] snapd#6367, snapd#6376, snapd#6381, snapd#6401, snapd#6404, snapd#6410, snapd#6418, snapd#6436, snapd#6472, snapd#6482, snapd#6485, snapd#6487, snapd#6488, snapd#6491, snapd#6492, snapd#6494, snapd#6496, snapd#6497, snapd#6498, snapd#6499, snapd#6501, snapd#6502, snapd#6503, snapd#6504 [00:11] PR # opened: snapd#5644, snapd#5915, snapd#5962, snapd#6079, snapd#6098, snapd#6108, snapd#6111, snapd#6162, snapd#6177, snapd#6238, snapd#6258, snapd#6270, snapd#6281, snapd#6313, snapd#6322, snapd#6324, snapd#6325, snapd#6327, snapd#6329, snapd#6341, snapd#6347, snapd#6356, snapd#6360, [00:11] snapd#6367, snapd#6376, snapd#6381, snapd#6401, snapd#6404, snapd#6410, snapd#6418, snapd#6436, snapd#6472, snapd#6482, snapd#6485, snapd#6487, snapd#6488, snapd#6491, snapd#6492, snapd#6494, snapd#6496, snapd#6497, snapd#6498, snapd#6499, snapd#6501, snapd#6502, snapd#6503, snapd#6504 [03:11] man, the arm build farm on launchpad is backed up something fierce [03:11] i have a 3 hour wait time on my snap packages [06:51] Good morning [07:14] Hey mvo [07:27] * zyga updated his aquaris E4.5 to "16" [07:27] guess my phone runs xenial now [07:30] zyga: good morning [07:31] wow, the phone says in the future it will support snaps! [07:31] that's neat [07:31] (though they said that it is not supported now because of upstart) [07:31] nice [07:45] Hmm, is there a way to reinstall a snap with --classic without losing its data? [07:45] zyga: btw, did you find out more from morphis about the jenkins issue he has with 2.37? [07:45] mvo: no, we need to get more feedback about how to reproduce the issue [07:46] zyga: thanks! so he didn't reply yet? [07:46] I failed to reproduce it on 18.04 with lxd and classic snaps in /var/lib/test [07:46] correct [07:46] zyga: thanks for looking into this! [07:46] morphis: ^ please ping us when you are up [07:54] PR snapd#6501 closed: tests/main/auto-refresh-private: make sure to actually download with the expired macaroon [07:54] 6503 is an easy win [07:57] zyga, mvo: hey! [07:57] good morning :) [07:57] mvo: (reviewed) [07:57] zyga: didn't you say you can easily reproduce it? [07:58] morphis: that was dry run mode, it is easy to reproduce conceptually but it doesn't actually happen for me [07:58] ok [07:58] morphis: so I started digging deeper to the point where I had lxd and the same version and it still wasn't happening [07:58] zyga: so what do you need from me? [07:58] morphis: in your setup, how was jenkins installed? [07:58] good question, plars did it long time ago [07:58] I guess via debs [07:59] can you drop us a bug report with: os version, container installation system on host, guest container os version, snap version on both sides, jenkins installation method in guest and the name of the snap used in the CI job [07:59] I will use that to reproduce and fix it [08:00] I mean, we kind of feel the fix is a one liner [08:00] but without a regression test [08:00] it's a hand-wave-y one [08:03] zyga: where do you guys file bugs these days? github? 
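A minimal sketch of collecting the information zyga asks for above, run on both the host and inside the guest container (the jenkins check assumes a deb install, per the guess above):

    lsb_release -a                  # OS version (host and guest)
    snap version                    # snap/snapd/series versions on both sides
    snap list lxd 2>/dev/null       # whether the container manager on the host is the snap
    dpkg -l jenkins 2>/dev/null     # inside the guest: how jenkins was installed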
=== pstolowski|afk is now known as pstolowski [08:03] morning [08:03] morphis: still on lp [08:03] zyga: ok [08:03] launchpad.net/snapd [08:05] zyga: the snap isn't from the store but I will drop it by mail [08:05] morphis: it is probably sufficient to classify it (strict or classic) [08:05] the denial was not related to the snap but to snap-confine [08:07] it's a classic snap [08:08] perfect [08:08] ok [08:11] zyga: does this ring any bell: cannot perform readlinkat() on the mount namespace file descriptor of the init process: Permission denied [08:11] yes [08:13] phone [08:14] zyga: https://bugs.launchpad.net/snapd/+bug/1815869 [08:14] Bug #1815869: Command of classic snap fails with denial when output is redirected [08:17] pedronis: do we have dmesg output with the apparmor denials for this one? [08:17] mvo: I don't know, it was on a server [08:18] mvo: I'm asking [08:22] One sec, home stuff [08:23] re [08:24] pedronis: it is an issue that surfaces once in a while, typically on older kernels [08:24] pedronis: it doesn't affect anything that I'm aware of in production [08:24] morphis: thank you [08:24] zyga: a service died on it [08:24] but there were other denials before as well [08:25] then it restarted [08:25] I mean snap-confine denials [08:25] do you have the log? [08:26] yes [08:34] morphis: I asked one more question on the bug [08:35] zyga: answered [08:37] morphis: I'm not familiar with lxd that much, how do I set up nesting + privileged [08:37] morphis: is lxd on the host a snap? [08:38] yes [08:39] lxc launch ubuntu:x test -c security.nesting=true -c security.privileged=true [08:39] thank you! [08:39] yw [08:39] launching now [08:39] I suspect it is an unsupported configuration in some form, I will confirm soon :/ [08:41] https://www.irccloud.com/pastebin/3G9BKJuL/ [08:42] morphis: ^ this is from installing core in that system [08:42] Feb 14 08:41:18 xenial snapd[378]: 2019/02/14 08:41:18.178394 backend.go:303: cannot create host snap-confine apparmor configuration: cannot synchronize snap-confine apparmor profile: open /var/lib/snapd/apparmor/profiles/snap-confine.core.6350.Ll210MNsYGb3~: no such file or directory [08:44] I'm installing core now [08:44] mvo: will xenial get an SRU for newer snapd? [08:45] zyga: that one is new, I am running snapd in various LXD containers for a long time now [08:46] zyga: for 2.37.2? yes [08:46] mvo: perfect, thank you [08:46] zyga: I think it's even in the unapproved queue, I need to check [08:46] zyga: hm, that doesn't happen in an unprivileged container [08:46] zyga: is that the fix?
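A sketch of how one could dig into that profile-synchronization failure from inside the container (the unit name and paths are the stock Ubuntu ones, taken from the error above):

    journalctl -u snapd.service | grep snap-confine   # the error above, with context
    ls -l /var/lib/snapd/apparmor/profiles/           # does the directory even exist?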
[08:46] zyga: if so I will push a bit harder for that [08:47] morphis: specifically apparmor stacking but please hold on, I don't know the key part yet [08:47] mvo: no, just curious [08:47] zyga: unprivileged does not mean there is no apparmor profile for the LXD container [08:48] s/unprivileged/privileged/ [08:48] installing a strictly confined snap in a xenial container with privileged/nesting enabled https://www.irccloud.com/pastebin/ghbLQFrn/ [08:50] morphis: ^ I cannot even install a snap in this container [08:51] morphis: but I managed to reproduce the issue [08:51] lut 14 09:50:40 crusty kernel: audit: type=1400 audit(1550134240.752:161): apparmor="DENIED" operation="file_inherit" namespace="root//lxd-xenial_" profile="/snap/core/6405/usr/lib/snapd/snap-confine" name="/home/ubuntu/foo" pid=15267 comm="snap-confine" requested_mask="w" denied_mask="w" fsuid=1000 ouid=1000 [08:51] at least that much is good [08:51] I will investigate [08:51] zyga: then the initial snapd setup broke at some point. This container is running for ~1.5y now [08:52] zyga: great! === cpaelzer__ is now known as cpaelzer [08:55] I updated the bug report again [08:56] morphis: an unsupported setup may not be broken for some time but may still be unsupported, I'm trying to get to the bottom of what that container setup does [08:56] zyga: security.privileged=true means the container is not running in a UI namespace [08:57] security.nesting=true allows processes inside the container to use a stacked AppArmor profile which is otherwise forbidden by the default LXD AppArmor policy [08:57] UI namespace? [08:57] s/UI namespace/UID namespace/ [08:57] ah [08:58] isn't security.nesting the default? otherwise snaps would just be permanently broken inside containers [08:58] no [08:58] that's unexpected, are you sure? [08:58] * zyga tries another container [08:58] security.nesting=false is the default [08:58] check https://lxd.readthedocs.io/en/latest/containers/ [08:59] that is why you need security.nesting=true if you want to run snaps inside LXD containers [08:59] we never set that in our lxc tests [08:59] lxc or LXD? [09:01] zyga: looks like snapd works without security.nesting=true too [09:01] snapd works correctly then [09:01] the whole container has nesting [09:01] perhaps that option controls the ability to use nested apparmor inside the container [09:01] so more nesting [09:01] otherwise we do get nesting [09:02] no, you still can't run nested LXD or docker inside a container without security.nesting=true [09:02] the whole profile is namespaced [09:02] (ie: nested) [09:02] that's different [09:02] lxd does nesting [09:02] so you need security.nesting to run lxd inside lxd [09:02] snapd doesn't do nesting [09:02] yes [09:02] it just requires a clean slate (nested setup as lxd does automatically) [09:02] see https://github.com/lxc/lxd/blob/a6b780703350faff8328f3d565f6bac7b6dcf59f/lxd/apparmor.go#L418 [09:03] so a non-privileged container without extra nesting support works OK [09:04] I ran a busybox snap inside a default container [09:04] zyga@crusty:/proc/19399$ cat attr/apparmor/current [09:04] lxd-xenial-default_//&:lxd-xenial-default_:snap.snapd-hacker-toolbelt.busybox (enforce) [09:04] it has a nested profile applied [09:04] ok, so that much is good [09:05] lxd nowadays also uses zfs [09:05] man, all of those are not tested by us :/ [09:07] degville: when you have a moment, I have some PRs for the tutorial :) [09:08] popey: awesome, thanks!
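A sketch of the reproduction discussed above, assuming the LXD snap on the host; the classic snap name is purely illustrative:

    lxc launch ubuntu:x xenial -c security.nesting=true -c security.privileged=true
    lxc exec xenial -- snap install --classic some-classic-snap    # hypothetical snap
    # redirecting a snap command's output into the user's home triggers the denial:
    lxc exec xenial -- su - ubuntu -c 'some-classic-snap.cmd > /home/ubuntu/foo'
    # on the host, look for the file_inherit denial against snap-confine:
    dmesg | grep 'apparmor="DENIED"' | grep snap-confine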
[09:08] pstolowski: hey [09:08] sudo snap remove gnome-3-26-1604 gnome-calculator gnome-characters gnome-logs gnome-system-monitor gtk-common-themes [09:09] this takes forever on 18.04 [09:09] conflicts [09:09] zyga: it has been using ZFS for quite a long time, if I remember right even 2.0 used it [09:09] pstolowski: conflicts https://www.irccloud.com/pastebin/fgPcJpHn/ [09:10] pstolowski: is this expected? [09:10] zyga: yes, known bug [09:10] there's actually a fix up [09:10] aha, good [09:10] but needs to land early in 2.39 cycle [09:10] because it is delicate [09:11] yep [09:12] zyga: it should succeed after a (long) while [09:13] pedronis: found an interesting bug around hotplug and disconnect thanks to my spread test [09:15] morphis: a container using lxc launch ubuntu:x xenial -c security.nesting=true works OK [09:15] morphis: I can install more snaps but I still get the denial [09:16] morphis: I should be able to fix that this way [09:16] o/ [09:16] morphis: but using privileged containers is (for whatever reason) not supported now as things break on installation [09:16] hey Chipaca :) [09:17] zyga: dunno if you've seen https://forum.snapcraft.io/t/9927/5 [09:17] PR snapd#6505 opened: packaging: avoid race in snapd.postinst [09:17] zyga: it was working until 2.37.* [09:18] morphis: that's unrelated [09:18] hey Chipaca - good morning [09:18] I'm saying _supported_ [09:18] it may work by accident [09:18] mvo: suuuup [09:18] Chipaca: I haven't [09:18] pedronis: almost at the end of the test after a series of successful plugging/replugging i re-plug the device, snap interfaces is happy, connection is restored, but "snap disconnect .." creates an empty change with no tasks (status=Hold). adding an artificial sleep in the test just before disconnect makes it happy. the disconnect bit in api.go looks suspicious, we query repo a few times but there is no locking, so in theory [09:18] we may get an inconsistent view of things. interestingly it happens all the time on 16.04, but not on 18.04 (in nested vm) [09:18] zyga: is it documented anywhere which configuration is supported and which not? [09:18] zyga: any snap app gets "cannot create lock directory /run/snapd/lock: Permission denied" [09:19] morphis: only what we test [09:19] that is pretty hard to dig out [09:19] morphis: regular and cloud versions of ubuntu + several other systems [09:19] morphis: lxd on ubuntu with default config [09:19] Chipaca: more fun with .37 [09:19] morphis: that's that, anything more may work or not but is not something we test for [09:20] morphis: but I'm not saying it's hopeless :) [09:20] morphis: just clarifying why we have not caught that [09:20] sure [09:20] morphis: do you need to run a privileged container in your case or was that just a convenience/accident? [09:24] mvo: in what shape, this time?
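To check what an existing container is configured with, per the discussion above (the container name and PID are illustrative; unset keys print an empty string, and nesting defaults to false):

    lxc config get xenial security.nesting
    lxc config get xenial security.privileged
    # inside the container, a snap process should carry the stacked (namespaced) label:
    cat /proc/<pid>/attr/apparmor/current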
[09:24] pstolowski: ah, interesting [09:25] hmm, I opened a forum topic for unsupported features [09:25] and I cannot find it [09:25] zyga: finding forum topics for unsupported features is an unsupported feature [09:28] pstolowski: ok, I'm not surprised we found issues around our use of the repo, it has an internal lock, doesn't mean it can be consulted many times without an external one [09:28] pedronis: yes [09:28] pedronis: yes, that's true [09:29] pedronis: in all the cases where we need consistency we should craft a method that uses private unlocked versions while holding the lock for the repo [09:29] pedronis: although what's puzzling me is the fact that the snap interfaces check done before disconnect reports the connection, so in theory things should be settled at this point [09:30] but clearly the need for a short "sleep" before disconnect contradicts it [09:33] Chipaca: apparently the catalog update is a bit too non-random and causes store issues [09:33] mvo: I'm discussing that with Chipaca [09:33] ta [09:33] I was about to say - pedronis knows the details better :) [09:34] zyga: re the waiting.. issue you asked about earlier - https://github.com/snapcore/snapd/pull/6472 [09:34] PR #6472: overlord: fix ensure before slowness on Retry <⛔ Blocked> [09:35] pedronis: is that overlord change so risky that we cannot just land it for .38? [09:35] mvo: I was hoping it was the same thing =) [09:36] zyga: we can reevaluate, remember in theory we should have cut 2.38 already [09:36] but here we are [09:37] yeah [09:37] I would vote on landing it to master and reverting if we need to release and feel uneasy about it [09:39] zyga: it looks innocent but is of high risk. if there is anything wrong around this area you may end up with snapd getting stuck and not picking tasks (been there while working on this fix) [09:39] pstolowski: hence edge [09:41] pstolowski: we could probably mitigate the risk of getting totally stuck doing something in the Loop [09:47] pedronis: i can look at it ~next week if time on sprint permits as this week i'd like to address hotplug PRs. i wouldn't rush this fix though given that we don't hear many complaints about it (do we?) [09:47] zyga: pstolowski: I don't think we risk getting totally stuck actually, because of the PruneTicker [09:47] zyga: I'm currently trying to figure out what the reason was for that [09:49] pedronis: getting stuck was most likely specifically related to my attempts to read from timer channel, so you may be right [09:50] pstolowski: yes, that we don't do [09:51] we have one place doing that [09:51] morphis: so I applied a simple fix [09:51] but ... it has no effect? [09:51] digging [09:52] ah, sorry [09:52] all good [09:52] morphis: I will have a fix and a regression test [09:52] mvo: ^ CC, low risk patch re-introducing /var/lib/** rw, [09:53] zyga: that sounds good [09:53] zyga: so re-adding this fixed morphis' bug? [09:53] mvo: yes, as expected [09:53] it is interesting that lxd is a factor [09:54] do we have a +1 from jdstrand? [09:54] because of how profile transitions work [09:54] is the setup supported anyway? [09:54] pedronis: he said as much yesterday in snappy-internal [09:54] pedronis: lxd is supported, home in /var/lib is not but this was working before so we can keep it working until apparmor grows the delegation features jdstrand described [09:55] pedronis: the specific case of lxd with stock profile, snapd with classic snaps and home in /var/lib/* will work [09:58] zyga: can we make a test?
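One way to check whether a given snap-confine profile carries the rule being re-introduced; the paths are the usual Ubuntu locations (the second, generated for the core snap, appeared in the error earlier):

    grep -n '/var/lib/' /etc/apparmor.d/*snap-confine* \
        /var/lib/snapd/apparmor/profiles/snap-confine* 2>/dev/null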
[09:58] pedronis: certainly, I'm on it now :) [09:58] pedronis: that's why I ask for bug reports [09:58] then I can make a regression test and have a matching bug number with more details [10:12] pstolowski: mvo: I removed the blocked on #6472 with a comment [10:12] PR #6472: overlord: fix ensure before slowness on Retry [10:12] thanks [10:12] pedronis: k, thanks [10:18] it needs 2nd review still [10:18] pedronis: i've addressed/replied to your comments on https://github.com/snapcore/snapd/pull/5962 [10:18] PR #5962: ifacestate/hotplug: hotplug handlers [10:30] pstolowski: do we know already an example that would produce different interfaces at the same devpath? [10:31] pedronis: now, but i was thinking about a "foo-observer" and "foo-controller" type of interfaces [10:31] *no [10:31] ok, I see [10:31] so, read-only or full control [10:34] pstolowski: did you not push some changes? [10:35] pedronis: ah, of course [10:35] or is github confusing me [10:38] pedronis: sorry, now [10:42] pedronis: aaa, we have err shadowed in api.go disconnect - this is the reason for the earlier disconnect issue i talked about: error: snap "serial-port-hotplug" has "hotplug-add-slot-serial-port" change in progress [10:43] repo access wrt locks is a separate potential problem, but not in this case i think [10:45] PR snapd#6506 opened: overlord/snapstate: add some randomness to the catalog refresh [10:46] pedronis: ^ à votre santé [10:58] ok, that should do it :) [11:01] PR snapcraft#2472 closed: project loader: do not leak a part's build-environment [11:04] pstolowski: +1 on 5962 but with some requests for tweaks [11:06] pedronis: yay! thank you, will do [11:07] * pstolowski runs malta-related errand [11:07] * pstolowski runs malta-related errand (not a haircut) [11:11] * marcustomlinson loves pstolowski's beautiful long locks [11:12] marcustomlinson: lol [11:15] PR snapd#6507 opened: cmd/snap-confine: allow writes to /var/lib/** [11:16] morphis: https://github.com/snapcore/snapd/pull/6507 [11:16] PR #6507: cmd/snap-confine: allow writes to /var/lib/** [11:17] popey: nice talk at fosdem! [11:17] popey: I asked one question https://www.youtube.com/watch?v=UAEdSjou3c4&lc=UgwCtVt0IXrkcLD1jLB4AaABAg [11:17] thanks [11:18] mvo, pedronis ^ (jenkins regression fix) [11:21] zyga: answered there, but the question came from an Endless developer who wanted to see how multiple versions of the same app were presented to the desktop (desktop files) compared to flatpak [11:21] ah [11:21] neat [11:21] I think making desktop icons smarter would be great [11:22] to add some disambigution in gnome shell [11:22] (classic, snap, flatpak, etc) [11:22] or version informationi [11:22] my wonky "i" make me type like some fake italian [11:32] zyga: thank you [11:45] PR snapd#6505 closed: packaging: avoid race in snapd.postinst [11:47] Chipaca: spread tests for catalog are now failing on that PR :/ [11:47] not too unexpectedly [11:47] pedronis: :-) [11:49] Chipaca: one test doing: test -s /var/cache/snapd/commands.db [11:54] hey zyga, mind pointing me to the spot in snap-confine where nvidia-specific udev stuff is run? I only see udev setup from https://github.com/snapcore/snapd/blob/master/cmd/snap-confine/snap-confine.c#L354 AFAICT which won't run if we turn off udev tagging from snapd [11:54] ijohnson: hey [11:54] one sec [11:54] thanks [11:55] ijohnson: https://github.com/snapcore/snapd/blob/master/cmd/snap-confine/udev-support.c#L237 [11:56] but looking at this now I think you are good [11:56] okay, cool.
I tested this with my machine with nvidia on it, and still didn't see any udev setup with my PR so it looks good in empirical testing too [11:56] I'll add a comment to the PR [12:11] PR snapd#6508 opened: packaging: avoid race in snapd.postinst [12:22] pstolowski: hey [12:22] do you have a sec for a quick discussion [12:22] pstolowski: the context is this bug https://bugs.launchpad.net/snapd/+bug/1815722 [12:22] Bug #1815722: Namespaces for failed snap installations are not discarded [12:22] pstolowski: I'm trying to figure out where the code for this should live [12:22] pedronis, Chipaca: ^ CC [12:22] when the install hook fails we will undo the whole change [12:23] but it's unclear which task should be responsible for discarding the mount namespace [12:23] should we encode that as an undo task for the install hook [12:23] should we encode that as special logic inside the implementation of that specific task (in the do path, when it fails just discard) [12:23] or should we do something entirely different [12:24] zyga: can this happen in some form on a refresh as well? [12:25] pedronis: install hooks don't run then, if any other hook fails we just restore the old revision and that's handled correctly [12:25] this feels like a special case [12:25] is it handled correctly also if layout-related stuff fails? [12:27] pedronis: it depends on what you mean by fine, if layout application fails we fail loudly (commands and hooks cannot run), then proceed to undo; any changes made to the mount namespace remain intact and are accounted for [12:28] the subsequent call to snap-update-ns will attempt to change the mount namespace in accordance with the new desired format [12:28] but in either case we never discard the namespace in such a situation [12:28] re [12:39] zyga: thanks [12:41] any advice? [12:45] zyga: we have the same problem on install if starting services fails, no? [12:46] pedronis: anything that causes a freshly installed snap to fail to install [12:46] and runs something potentially [12:46] yeah, [12:46] just saying that I don't think concentrating on install-hook [12:46] makes sense [12:48] zyga: anyway it seems like something to do in the undo of link snap if the situation is a first install attempt? [12:48] yes, I was thinking about anything that is special to first-install [12:48] I agree [12:48] perhaps link is the right place [12:48] it's also annoying because this is cross manager [12:48] but I think we have handlers for that [12:49] zyga: well, we need to merge bits of tasks of ifacestate and snapstate anyway [12:51] yes but hopefully not for this fix :) [12:51] zyga: where do we discard the namespace for remove? [12:51] there's a dedicated task for that IIRC [12:52] let me look [12:54] zyga: it's done in discard-snap [12:54] so that's already snapstate afaict [12:55] zyga: pstolowski: my impression is basically there might be more bits done in discard-snap [12:56] that we also need to do in undo link-snap [12:56] if it's a first install [12:57] reading the backlog [12:57] pstolowski: do we need to remove the config as well for example?
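For reference, a sketch of the manual equivalent of the namespace discard that discard-snap performs and that the undo path above also needs (the snap name is illustrative; the helper path is the Ubuntu one):

    ls /run/snapd/ns                               # preserved per-snap mount namespaces
    sudo /usr/lib/snapd/snap-discard-ns mysnap     # drop the namespace for "mysnap"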
[12:59] pstolowski: zyga: basically compare the code under https://github.com/snapcore/snapd/blob/master/overlord/snapstate/handlers.go#L1387 vs https://github.com/snapcore/snapd/blob/master/overlord/snapstate/handlers.go#L1084 [12:59] what does make sense for both [13:00] though the 2nd seems to have more len(seq) == 1 sections [13:04] pedronis: indeed, we have a problem with config too, we should remove it in undo of link-snap [13:04] pedronis, zyga and i agree, undo of link snap is a good place for the namespace cleanup [13:05] pstolowski: there is a call to config.RestoreRevisionConfig with an XXX later there [13:06] pstolowski: not sure what it does? are we missing a test? [13:06] pedronis: indeed, thank you, I'm looking at this now [13:09] cachio: i pushed fixes to hotplug-nested-vm, it's now passing for me on 16.04. old qemu is not a problem. [13:12] pstolowski, nice, I'll give it a run again [13:12] thanks for fixing it === ricab is now known as ricab|lunch [13:15] pedronis: RestoreRevisionConfig would restore the configuration of the previous revision of the snap, but that's not going to help on undo of a new install, we will leave config behind. yes i think undo is missing a test for this [13:16] pstolowski: yea, that code looks broken, like the condition should be the reverse? [13:16] != 1 [13:21] pstolowski: pedronis: that's one of the places I added an XXX i think [13:21] as the logic didn't seem to make much sense [13:21] Chipaca: yea, seems there is a bug lurking there and missing tests [13:22] * zyga breaks for a walk with the dog [13:22] pedronis: yes i think you're right [13:23] https://github.com/snapcore/snapd/blob/master/overlord/snapstate/handlers.go#L1125 <- that's the one i meant [13:23] pedronis: i can add it to my todo list and address soon [13:23] pstolowski: thx [13:24] pedronis: it's a different one -- with possibly the same logic [13:24] * Chipaca ⇝ lunch [13:25] Chipaca: heh [13:40] degville: fyi, I edited "Installing snap on kde neon" and replaced your dark theme screenshots with default theme ones :) [13:41] (at the kind and friendly request of upstream) :) [13:41] popey: thanks! you're absolutely right... I've used it so long I forget it's not default. [13:42] I think the words were "Which nerd uses the dark theme?" :) [13:42] (I do too) [13:42] I <3 KDE Neon [13:43] ahaha... yeah. Arc Dark and Papirus ftw. [13:44] PR snapd#6509 opened: overlord/snapstate: discard mount namespace when undoing 1st link snap [13:45] pedronis: ^ thank you for the idea again [13:56] hmmm [13:56] * Chipaca hmms [13:57] How do I downgrade to snapcraft2 with snap? On https://snapcraft.io/snapcraft I can only find 3.x versions [13:57] zyga a question there [14:06] t1mp: snapcraft3 ships 2 as well [14:06] t1mp: if your snapcraft.yaml doesn't have a base, you get 2 [14:07] I use core18 as base. But snapcraft3 broke some things (probably I can fix them, but it takes more time which I don't have right now) [14:09] t1mp: which things? (maybe the fix is easy) [14:09] sergiusens: ^ [14:18] some things were easy, like the version should be "2" instead of 2 (int). [14:18] version is indeed a string [14:18] I worked around the cloud parts that I couldn't use any more [14:18] and now, I have this: [14:18] version-script: python3 ../qmenta_gui/python/qmenta/gui/version.py [14:18] it was a bug that it allowed int [14:18] so when I build the package now, that cannot be executed. I don't have those files in the vm that multipass created. I guess with snapcraft2 it was not executed in a vm.
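A sketch of the adopt-info route that comes up just below, as a replacement for version-script; the part name is illustrative, and the script path is t1mp's:

    # in snapcraft.yaml: top-level "adopt-info: mypart", then in that part's
    # override-build: | scriptlet (which is plain shell):
    snapcraftctl set-version "$(python3 ../qmenta_gui/python/qmenta/gui/version.py)"
    snapcraftctl build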
[14:20] one thing you might want to try is "snapcraft --shell-after" which drops you into the vm post-build, which allows you to debug these things [14:20] I have started using adopt-info: partname, and using "snapcraftctl set-version '$(something)'" to set the version in override-build: | section in the part [14:20] (there are some examples of this in the snapcrafters repo) [14:23] zyga: https://github.com/snapcore/snapd/blob/master/sanity/apparmor_lxd.go#L33 <- this is what we have currently for testing inside containers === ricab|lunch is now known as ricab [14:49] debugging is a bit of a hassle, because every time I rebuild the snap it takes 10 minutes or so [14:50] the handy thing about debugging with --shell-after is that you get dropped in the VM, and can cd project/ and then run snapcraft *in* the VM [14:50] which will save time [14:50] I'll try that in the next re-run :) [14:55] mvo: right, we would have to have some more logic to check if we are running without pid namespace [14:55] mvo: er, uid namespace [14:55] mvo: this is not sufficient (separate issue) [14:55] hmm.. The snap I create locally works as expected. But the snap created from the same source, on jenkins, has a bug. [15:00] I can try to install snapcraft 3 there. But then I have Google VM -> Docker Ubuntu 18.04, with snapcraft 3 snap package, creating a qemu (?) vm to build the snap package..? [15:02] if I am already in an ubuntu 18.04 docker image, do I really need a VM for creating snap packages? [15:02] t1mp: no, if you're in docker, you can tell snapcraft to chill about the vm [15:02] there's a way you can do that, yes. [15:02] if you're already in an ad-hoc vm that is [15:02] yeah, I'm in a container [15:03] * Chipaca nods [15:03] it wasn't --chill-about-the-vm-already [15:03] I can't for the life of me remember what it is [15:03] the container is only created to run tests and build the package [15:03] sergiusens: ^ what's the "let me do dangerous things" parameter for snapcraft? (to build on a host) [15:05] t1mp: popey: --destructive-mode [15:05] * Chipaca hugs grep [15:05] 💣 [15:06] fwiw: grep -hor --include '2019*' 'snapcraft.*--[a-z_-]*' ~/.config/hexchat/logs/ [15:11] Chipaca: I guess that is not printed on purpose when I do --help? [15:11] Chipaca: anyway, thanks :) [15:11] t1mp: your guess is as good as mine [15:12] t1mp: it's probably supposed to only be internal [15:12] bugs.launchpad.net/snapcraft :) [15:12] t1mp: as it's what snapcraft uses to launch snapcraft inside the vm, afaik [15:12] ah. snapcraft help build shows it. [15:27] PR snapd#6510 opened: tests: stop catalog-update test for now [15:29] Chipaca: popey t1mp it is printed with `snapcraft --help` [15:30] snap is the default param for snapcraft, I wish we did not have a default, life would be simpler [15:32] popey: when inside the VM, no need to `cd project`, you can run snapcraft from whatever PWD [15:32] TIL [15:46] mvo: I commented on https://github.com/snapcore/snapd/pull/6508. I think the access() is unneeded... [15:46] PR #6508: packaging: avoid race in snapd.postinst <⚠ Critical> [15:47] is snapcraft using qemu instead of lxd to be compatible with windows/macos? [15:51] t1mp: essentially, yes === sparkieg` is now known as sparkiegeek [15:54] for all the meetings https://twitter.com/garabatokid/status/1016240967702204416 ;-) [15:55] * cachio lunch [15:58] mvo: oh, I misread.
sorry, nm [15:59] jdstrand: thank you [15:59] jdstrand: maybe I should have added a comment there or rewritten the code, it's definitely not obvious (which is why this is buggy :( [16:00] mvo: maybe an added comment "If after the rewrite we are still not ok, die" or something [16:01] mvo: but pr approved as is [16:07] jdstrand: very low priority ping about icdiff [16:09] jdstrand: \o/ thanks, I will add the change [16:15] mvo: why the mount-support.c change there? [16:16] I mean I mentioned that issue in another PR [16:16] but why mixing it with that? [16:19] pedronis: the upgrade test from 2.15.2 fails without it [16:19] pedronis: it will die because it claims it can't find ubuntu-core [16:19] pedronis: 2.15.2ubuntu1 is still using ubuntu-core as base [16:20] Chipaca: you wouldn't know it because I haven't been transparent, but I've been working hard on a review-tools update to support that [16:21] they had some old janky code that was brittle and I rewrote it. should be committing today, at which point I'll add the aliases [16:21] (and ask for a store pull, etc) [16:21] mvo: can you put it in a comment somewhere in it? [16:23] pedronis: sure, adding it now [16:25] zyga: i see that repo.ResolveDisconnect is only used by api - disconnect; i'm going to change it to return []*Connection instead of []*ConnRef to remove the need for a second call to repo. do you see any problem with that? [16:25] pedronis: updated the comment [16:26] pstolowski: I honestly don't know, is it tricky? [16:26] looking at the code now [16:26] zyga: no, not really [16:27] zyga: and it's easy to get ConnRef from Connection, if needs be [16:27] zyga: this will just let us avoid ping-pong with the repo for the disconnect case [16:27] yeah, looks ok [16:27] I wonder if connref should grow something [16:28] to detect unexpected changes colliding [16:28] zyga: connref? can you elabortae [16:28] *elaborate [16:29] a cookie derived from the number of changes to the repo [16:29] so you can get a conn ref in one place [16:29] pass it elsewhere [16:29] and use it if it is still "fresh" [16:29] your desire to change the return type, is it to avoid a race? [16:29] ah, a "smart pointer" ;) [16:32] zyga: yes, to avoid repo.Connection() on a connref returned by repo.ResolveDisconnect(), that may no longer be there [16:33] on the other hand, i'm not sure it's worth fixing [16:34] it's basically for the cases where you don't fully specify plugs or slots, and let resolve compute the list [16:39] hi #snappy, I hit an error with the juju snap, cannot create user data directory, Permission denied, can anybody help? [16:39] pekkari: can you be more specific please [16:40] I see -> cannot create user data directory: /path_to_my_home_folder/snap/juju/6362: Permission denied when I execute anything in juju [16:40] what is your home folder path? [16:42] for some reason, it's decided to be under /var/lib/home/my_user, I cannot tell who had that great idea :$ [16:42] pekkari: that location is not supported by snapd [16:43] nevertheless the folder ~/snap/juju/6362 exists [16:43] hummm... unfortunately I cannot decide on the folder path zyga, customer stuff [16:43] PR snapd#6511 opened: daemon/api: fix error case for disconnect conflict [16:43] pekkari: we cannot support arbitrary folder names either [16:44] I'll need to try to bind mount the folder on /home/my_user [16:48] pekkari: I'm curious what was the motivation to store the home folder in /var/lib/ rather than in /home, is it a service (e.g. jenkins) or just some specific customer convention?
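A sketch of the bind mount pekkari mentions, assuming the data really lives under /var/lib/home; this only papers over the path (homes outside /home stay unsupported by snapd), so back up first:

    sudo mkdir -p /home/my_user
    sudo mount --bind /var/lib/home/my_user /home/my_user
    # a persistent variant would be an fstab line of the form:
    # /var/lib/home/my_user  /home/my_user  none  bind  0  0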
[16:48] zyga: customer convention, no service, but shared account for a team [16:48] jdstrand: thanks for the update! no worries about the rest of it [16:49] have you any experience using mount --move zyga? seems to be the way to go as long as bind still resolves the /var/lib/home/blah folder [16:49] I just never happen to use it [16:50] pekkari: using it in which sense? yeah it works and snapd used to use it internally for one specific case [16:50] pekkari: but I'm not using the feature myself much [16:51] well, use it to solve the path problem without screwing the home folder where I have the sensitive info [16:51] pekkari: I don't understand, can you explain what you mean? [16:52] I just want to know it's a safe operation, ta [16:52] perhaps? I don't know for sure [16:52] on a live production system? [16:53] no, testing env, but still I have an fce workspace there that I'm taking backups of, just in case [17:01] grrrr. shared mount... [17:12] zyga: does that mean that home directories on Silverblue don't work either? [17:12] they're in /var/home rather than /home [17:15] * cachio afk [17:55] Pharaoh_Atem: correct [17:55] Pharaoh_Atem: we cannot support all the random locations people come up with [17:55] well, it's symlinked to /home [17:55] Pharaoh_Atem: we can add support for /var/home perhaps but not for every single random combination :) [17:55] Pharaoh_Atem: with mount namespaces, symlinks don't matter sadly [17:56] well /var/home is used by Atomic Fedora and SUSE MicroOS [17:56] well, they should have just used home [17:56] it's just a location after all [17:56] they can mount there [17:56] it's when there aren't separate partitions [17:56] we can support /var/home eventually [17:56] just weird [17:56] I don't see how this is related to partitions [17:57] the standard layout has two partitions, one managed by OSTree and another umanaged [17:57] *unmanaged [17:57] they've moved all the written stuff there by default [17:57] * Pharaoh_Atem shrugs [17:57] if your ostree has /home you can mount there, right? [17:57] yes [17:57] and people do if they have a separate /home partition [17:57] so what's the need for /var/home? [17:58] for when they don't have it [17:58] you don't need a separate home partition [17:58] usually it'd be / if you don't have a separate /home [17:58] you can bind mount /home or even each /home/whatever [17:58] you can mount subtrees [17:58] I know [17:58] but they do it by symlink right now [17:59] so /home/foo is a symlink to /var/home/foo? [18:00] I guess snapd could learn a trick to make /var/home/$LOGNAME a way to access the user's (whatever) home directory [18:00] but dragons apply :) [18:02] anyway [18:02] I meant to thank you for the release coordination the other day [18:02] I'm sleepy as hell now [18:03] I was meant to say I'll EOD now [18:10] PR snapd#6510 closed: tests: stop catalog-update test for now [18:35] niemeyer: could spread error if you told it to run on a system that doesn't exist? [18:40] Chipaca: Hmm.. doesn't it? [18:40] niemeyer: not with globs [18:41] niemeyer: just spotted a test that had systems: [ubuntu-16-*, ubuntu-18-*] [18:41] (neither of those matches anything) [18:41] Chipaca: Yeah, it's definitely plausible since we have the full real set in memory.. we ought to do that. [18:41] Chipaca: ah, wait [18:42] Chipaca: I misunderstood..
I thought you were asking for it to run on the cmdline and didn't match anything [18:42] * Chipaca feels misunderstood [18:42] Chipaca: Maybe it should warn in those cases you mention [18:43] Chipaca: Not sure if error as it could make a refactoring pretty boring [18:43] Chipaca: Or maybe let's just do it and see if people get mad at us [18:44] Erroring on the safe side can't go too wrong [18:44] niemeyer: a warning would've helped indeed [18:45] niemeyer: and yeah there're probably people depending on the behaviour :-) [19:03] Chipaca: see mvo's find on the PR, we need to use int64 and the corresponding rand helper [19:15] * Chipaca looks [19:15] pesky puny computers [19:16] I'm going to EOD now before I break something important [19:16] o/ [20:42] PR snapd#6507 closed: cmd/snap-confine: allow writes to /var/lib/** [20:43] PR snapd#6509 closed: overlord/snapstate: discard mount namespace when undoing 1st link snap [20:45] PR snapcraft#2471 closed: test: test-beta [20:52] Chipaca: surprisingly or not, catalog-update also passes with /names down [20:52] so we shouldn't turn it off [20:52] pedronis: yes I noticed that locally, assumed mvo saw it fail [20:53] Chipaca: we should push to turn it on [20:53] again [20:54] pedronis: push who? [20:54] Chipaca: you were pushing stuff recently? can you? otherwise I can try to do it [20:55] how come this works when /names is down? just curious, haven't looked at the details of the test [20:55] mvo: it doesn't really check that we got date, it checks that we tried [20:55] mvo: because all it does is check that a catalog refresh was attempted [20:55] yeah [20:55] data [20:55] "at least you tried" golden star [20:56] the other two do check data [20:56] apt-hooks [20:56] and snap-advise-command (except that it was not run) [20:59] Chipaca: are you working on turning it back on? otherwise I'll do it myself [21:09] Chipaca: mvo: pushed to turn catalog-refresh back on [21:09] *catalog-update [21:11] pedronis: ta [21:12] PR snapd#6508 closed: packaging: avoid race in snapd.postinst <⚠ Critical> [21:15] pedronis: sorry, was afk for a bit [21:16] Chipaca: it's ok [21:16] we should all be eoded really [21:18] pedronis: yes, I need to go [21:18] 5606 should be squashed [21:18] I will merge tomorrow into 2.37 [21:18] * mvo waves
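For completeness, a sketch of the stronger check discussed above, asserting that catalog data actually arrived rather than that a refresh was merely attempted (the queried command name is illustrative):

    test -s /var/cache/snapd/names        # section/names data
    test -s /var/cache/snapd/commands.db  # command database
    snap advise-snap --command vim        # should print snaps providing "vim"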