[00:14] hi [00:14] is anyone around? [00:15] seeding the snaps on the latest bionic live-server daily is taking an extraordinary amount of time [00:15] like 4 minutes [00:15] does anyone know anything about this? [02:32] mwhudson: I haven't heard anything about it, and it's certainly not expected [02:33] niemeyer: i forumed https://forum.snapcraft.io/t/extremely-slow-seeding/5891 [02:34] mwhudson: Thanks! [03:01] PR snapd#5313 opened: tests: enable fedora 28 again [04:58] morning [04:58] https://forum.snapcraft.io/t/wildcard-syntax-in-organize-keyword/5895 [05:12] PR snapd#5221 closed: snap: parse connect instructions in gadget.yaml [05:15] o/ [05:15] good morning! [05:30] zyga: is pstolowski still collecting logs from econnreset? [05:31] pstolowski: https://paste.ubuntu.com/p/8BgX5gJdXj/ [05:32] not sure [05:33] getsockopt: connection refused [05:40] econnreset is driving me crazy [05:41] PR snapd#5304 closed: overlord/ifacestate: simplify checkConnectConflicts and also connect signature [05:43] PR snapd#5313 closed: tests: enable fedora 28 again [05:49] PR snapd#5292 closed: interfaces/docker-support: update for docker 18.05 [05:53] PR snapd#5285 closed: tests/main/{snap-repair,ubuntu-core-services}: add debug information [05:54] https://github.com/snapcore/snapd/pull/5230 needs a 2nd review [05:54] PR #5230: interfaces/udisks2: also implement implicit classic slot [05:59] * zyga goes out with the dog [06:44] re [06:51] abeato: hey, nice work on the watchdog PR! how urgent is this for you? is 2.34 (~5 weeks away) ok? or do you need it quicker?
[06:51] 5276 needs a second review [06:51] please :) [06:52] PR snapd#5301 closed: snapstate,ifacestate: remove core-phase-2 handling [06:53] mvo, it is fine to have it for 2.34, thanks for asking (we already have a working workaround and do not need this urgently, it is fine to have it in an update) [06:53] mvo: 5276 needs deconflicting [06:54] abeato: great, thank you [06:55] yw [06:56] mborzecki: indeed, done [07:03] mvo: hi, did you see this: https://forum.snapcraft.io/t/extremely-slow-seeding/5891 [07:08] pedronis: not yet, let me check [07:13] mwhudson: hey, about your forum post about seeding being slow. is this a regression, i.e. was the previous snapd quicker? [07:18] morning [07:20] mborzecki: thanks for the log, i think i have enough logs now [07:25] i'll look at econnreset again now [07:28] mvo: +1 on 5276, also restarted the build as it failed with interfaces-calendar-.. (one that's known to randomly fail) [07:28] mborzecki: thank you [07:28] mborzecki: and thank you :) [07:31] pedronis: hey, is there any benefit of doing task.SetStatus(state.DoneStatus) at the very end of a task handler?
I remember moving it towards the end in the reviews, but now that i look at it i doubt it serves any purpose [07:34] pstolowski: it's mostly for consistency, are you wondering if you need it, we do need it [07:37] pstolowski: the reason we need it is that we want to add the tasks/change to state in the same atomic write as we set the task to done, otherwise we risk having it done again (and adding the tasks again) [07:37] or failing forever to add them because of self-conflicts of some kind [07:38] pedronis: ok, i see; there are some comments in a few other places that do that but they don't tell the whole story, thanks [07:39] pstolowski: it's usually related to having the task change the state in a significant non-idempotent way [07:39] at which point we want to mark it done in the same atomic write [07:39] mvo: yes it's a regression, haven't tried for a couple of weeks though [07:41] pedronis: understood, thanks [07:42] mwhudson: thank you, I tried to reproduce but no luck so far (booted it just two times though). any hints what I could be missing? I used the amd64 image in a standard kvm vm with -m 1500 [07:42] mvo: i'm trying your command line and it's hanging for me [07:42] maybe i should reboot my system [07:42] (as in, my laptop) [07:43] mwhudson: what command line did you use?
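The pattern pedronis describes above, recording a handler's side effects and marking the task done in one atomic state write so a restart cannot re-run it, can be sketched in Go. This is a toy model: State, NewState, and doGenerateTasks are made-up stand-ins, not snapd's real overlord/state API, where unlocking the state is what persists it.

```go
package main

import (
	"fmt"
	"sync"
)

// State is a toy stand-in for snapd's state: in the real code, changes
// made while holding the lock are persisted in one atomic write on Unlock.
type State struct {
	mu    sync.Mutex
	tasks []string
	done  map[string]bool
}

func NewState() *State { return &State{done: make(map[string]bool)} }

func (s *State) Lock()   { s.mu.Lock() }
func (s *State) Unlock() { s.mu.Unlock() }

// doGenerateTasks models a handler that adds new tasks to the change and
// marks itself done inside the same locked section. If the done marker and
// the new tasks were written separately, a crash in between could leave the
// handler re-runnable, adding the tasks a second time (or self-conflicting).
func doGenerateTasks(s *State, id string, newTasks []string) {
	s.Lock()
	defer s.Unlock() // one atomic write: tasks added + status set together
	if s.done[id] {
		return // already done; re-running must not add the tasks again
	}
	s.tasks = append(s.tasks, newTasks...)
	s.done[id] = true // the equivalent of task.SetStatus(state.DoneStatus)
}

func main() {
	s := NewState()
	doGenerateTasks(s, "auto-connect", []string{"connect a", "connect b"})
	// A second run is a no-op thanks to the done marker.
	doGenerateTasks(s, "auto-connect", []string{"connect a", "connect b"})
	fmt.Println(len(s.tasks))
}
```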
I can try that one on my box [07:43] mvo: kvm -m 1500 -snapshot ~/isos/bionic-live-server-amd64.iso [07:43] what's the URL to the iso [07:44] earlier i was using -m 1024 so i was wondering if it was swapping in the system [07:44] I can try here for one more sample [07:44] mwhudson: heh, that looks eerily similar ;) [07:44] my network has higher latency than landlines [07:44] mwhudson: aha, I can try with smaller ram values [07:44] mvo: usually i use -cdrom $ISO -hda target.img [07:44] mwhudson: I was thinking if .au makes a difference [07:44] if it's hitting the network, i have other complaints :) [07:45] the cd is usually plenty fast here [07:45] *cdn [07:45] mwhudson: or .nz - yeah, I shouldn't [07:46] * mwhudson downloads a snap at 11MB/s [07:46] pstolowski: it's usually: do things that could error, change the state, mark it done [07:46] pstolowski: if a task is fully idempotent it can rely on taskrunner marking it done [07:46] argh this vm booted after 4 mins but i forgot to pass -redir [07:48] pedronis, mvo: http://paste.ubuntu.com/p/BQGMG4pZdM/ <- snap tasks 2 [07:48] * mwhudson reboots the host [07:49] mwhudson: have you tried using virtio for net and drive btw? [07:51] Done 2018-06-13T07:42:47Z 2018-06-13T07:44:35Z Mount snap "core" (4650) [07:51] this is certainly unexpected [07:51] is this an SD card? [07:51] unless this includes some waiting on other tasks [07:52] zyga: you need to look at the ready before, not the spawn [07:52] the spawn is not when they start running [07:52] is when they are created [07:52] I see [07:52] in that case it's still around a minute of waiting for mount [07:52] that is curious [07:53] yes, also the time between the spawn and the first ensure ready is strangely long [07:53] 2018-06-13T07:42:47Z 2018-06-13T07:43:49Z [07:53] each mount is slow [07:53] mborzecki: what is the host storage like? is this a HDD or a SSD? 
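mborzecki's virtio suggestion amounts to backing the VM's disk and NIC with virtio devices. A hedged sketch of what that command line might look like, combining mvo's reproducer with mwhudson's -cdrom/-hda setup (paths and the target.img name are placeholders, and exact flags vary by qemu version, so this is illustrative rather than verified):

```shell
# Plain reproducer, as used above:
kvm -m 1500 -snapshot ~/isos/bionic-live-server-amd64.iso

# The same VM with virtio disk and network, usually noticeably faster:
kvm -m 1500 \
    -drive file=target.img,if=virtio \
    -cdrom ~/isos/bionic-live-server-amd64.iso \
    -net nic,model=virtio -net user
```

This is a command-line fragment that needs qemu/kvm and an ISO present, so it is not runnable as-is.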
which filesystem are you using [07:54] er [07:54] mwhudson: ^ [07:54] (bad tab) [07:54] heh ;) [07:56] baaaaaah rebooting my laptop fixed it [07:56] pedronis: question about seeding [07:56] pedronis: on line 28 we have "mark system seeded" [07:56] but then we have more tasks below [07:56] is that expected [07:58] mborzecki: yes, too lazy to remember command line syntax for virtio [07:59] mwhudson: fwiw it makes a huuge difference here ;) [08:00] PR snapd#5276 closed: devicestate: support seeding from a base snap instead of core [08:03] mborzecki: ok, i'll try to teach it to my bash history :) [08:17] moin moin [08:17] I'm en route to the dentist; connection might be flaky [08:18] zyga: https://www.youtube.com/watch?v=ofeCpN99FU8 [08:19] mborzecki: oh, nice, thank you [08:19] mborzecki: apparmor logo looks a little bit like our flag :) [08:19] zyga: mvo: anything else to note on the conncheck->info pr? [08:20] zyga: what happened to the dope penguin with gas mask on? [08:20] asking because I wouldn't like to trigger a whole spread run for a small change only to then have to do it again :-) [08:20] mborzecki: whaaat, I didn't know that was the previous logo [08:21] zyga: https://gitlab.com/apparmor/apparmor this logo [08:27] ooh, 'Add Vulkan abstraction' commit in apparmor [08:27] (4 weeks ago) [08:32] zyga: not completely, it seems related to how auto-connect works [08:32] pstolowski: did you see: http://paste.ubuntu.com/p/BQGMG4pZdM/ the connects at the very end before mark seeded seem a bit wrong [08:33] zyga: pstolowski: ah [08:33] zyga: it's the display that is off, they are sorted by id, not really time or sequence [08:34] oh drat! [08:34] indeed, I didn't see this [08:34] should we change that [08:34] I don't know, it's definitely more confusing now that we have tasks that add more tasks [08:35] they will all show up like that at the end [08:38] ttfn [08:41] pedronis: oh, do we run things twice? does this auto-connection make sense at all?
[08:47] pstolowski: it's because it's a self auto-connection [08:47] pstolowski: we probably can fix that in auto-connect itself [08:47] pedronis: i didn't know we have self-autoconnection, what is it for? [08:48] pstolowski: tbh it's a bit of a corner case, but it is quite important, docker uses it for example [08:48] pstolowski: it's usually when the server and the client of something are in the same snap, that's the docker case I think [08:49] pedronis: ok, what test is this? [08:49] pstolowski: test? [08:49] this is a real image [08:50] pedronis: ah, all right [08:51] pstolowski: anyway, to fix this we would need to keep track of connections already requested in newconns using the connref ID [08:53] pstolowski: it's actually a bug btw, not just an opt, because of the hooks (in this case they don't exist, do nothing) [08:56] pstolowski: btw do we check in ifacestate.Connect if the connection already exists? [08:56] pstolowski: I know at some point doConnect would do nothing if the connection already existed, is that still true? or did we change it [08:57] pedronis: performance review in a moment, will talk to you afterwards, i need to actually check hits [08:57] *this [08:58] np [09:06] all right, i'm back [09:10] * zyga finished his self review [09:11] pedronis: we don't check if connection already exists [09:11] pstolowski: I think repo does, but because of the hooks it's too late there [09:12] pstolowski: we should also add that [09:12] pedronis: yes [09:13] pstolowski: so we need a check in ifacestate.Connect and a check in doAutoConnect in the 2nd loop to make sure we don't add the same connection again [09:22] zyga: I added a mount unit to 5284 [09:22] \o/ [09:22] thank you [09:26] pedronis: i'll address this later today [09:26] thx [09:30] zyga: for https://github.com/snapcore/snapd/pull/5271, I think the only open question was yours about what to do about too-old versions of xdg-desktop-portal.
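The fix pedronis sketches above, tracking connections already requested in newconns by their connref ID so a self auto-connection is only queued once, might look roughly like this in Go. ConnRef and its ID format here are simplified stand-ins, not the real interfaces.ConnRef, and dedupe is an illustrative helper, not snapd's doAutoConnect:

```go
package main

import "fmt"

// ConnRef is a toy stand-in for snapd's interfaces.ConnRef.
type ConnRef struct {
	PlugSnap, PlugName string
	SlotSnap, SlotName string
}

// ID returns a stable key for the connection, in the spirit of the
// "plug-snap:plug slot-snap:slot" connection identifiers.
func (c ConnRef) ID() string {
	return fmt.Sprintf("%s:%s %s:%s", c.PlugSnap, c.PlugName, c.SlotSnap, c.SlotName)
}

// dedupe keeps only the first request for each connection ID, the kind of
// check discussed for doAutoConnect's second loop: a self auto-connection
// is discovered from both the plug side and the slot side, but must only
// produce one connect task (and one set of hooks).
func dedupe(requests []ConnRef) []ConnRef {
	newconns := make(map[string]bool)
	var out []ConnRef
	for _, c := range requests {
		if newconns[c.ID()] {
			continue // already queued for this change
		}
		newconns[c.ID()] = true
		out = append(out, c)
	}
	return out
}

func main() {
	// The docker-style case: plug and slot live in the same snap.
	self := ConnRef{"docker", "docker-daemon", "docker", "docker-daemon"}
	fmt.Println(len(dedupe([]ConnRef{self, self})))
}
```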
[09:30] PR #5271: cmd/snap: attempt to start the document portal if running with a session bus [09:30] * zyga looks [09:30] zyga: I don't have a good answer for that, but the PR doesn't introduce any new problems [09:31] (there's no --version option for any of the daemons, so it'd probably need per-packaging system checks) [09:37] jamesh: ack [09:38] zyga: for snapd delivered via distro packages, we can use a "Conflicts:" header, but that doesn't help when snapd is delivered by the core snap [09:46] jamesh: I think this is still a problem [09:46] AFAIK older portal implementations would not confine non-flatpak apps [09:46] so an older version of the portal is essentially a sandbox escape [09:46] (well, for files) [09:48] 5284 needs a second review (should be simple :) [09:52] zyga: so, I think the old xdg-desktop-portal file chooser won't register files with the document portal when it thinks it is talking to unconfined apps (good), but our attempt to bind mount the document portal will fail because the application ID is in a different format (bad) [09:53] the bind mount failing is bad because it will reveal the entire document portal tree, potentially giving access to any files registered to flatpak apps [09:57] mmmm [09:57] and if we fail and don't start the app this has other negative side effects [09:59] we explicitly set the document portal mounts to ignore failures so it'd work when the portal code isn't present [09:59] I need to jump to a call [09:59] okay [09:59] jamesh: if we ignore the failure, does that ever leave the portal wide open for the app to see? [10:00] could we fix that by perhaps bind mounting /var/lib/snapd/void there?
[10:00] zyga: the apparmor policy lets the app access that mount point, so it is open [10:01] long term, I think it would be better to have a completely private $XDG_RUNTIME_DIR mounted at the usual /run/user/$uid [10:01] so if we failed to mount something in, then it just wouldn't be present [10:06] interesting [10:06] what brought this conversation on? [10:06] I was just having this convo with the Fedora Workstation guys [10:06] Son_Goku: I've been (slowly) working to integrate xdg-desktop-portal with snapd [10:07] yes, I saw the first bits land with snapd 2.33 [10:07] Son_Goku: it works with xdg-desktop-portal 0.2 (patches have been accepted upstream), but fails open with 0.1. [10:07] wait what [10:07] if it failed closed, that would be okay [10:07] I need an unreleased version of xdg-desktop-portal now? [10:07] this is going to be a problem [10:07] 0.2 is released [10:08] this is the latest version: https://github.com/flatpak/xdg-desktop-portal/releases/tag/0.11 [10:08] gah. getting versions mixed up [10:08] 0.11 is the fixed version [10:08] ah [10:08] (not sure where 0.2 came from) [10:09] at the moment, I can't ship snapd 2.33 to Fedora 27, which I'm trying to fix right now [10:09] Son_Goku: I suspect it isn't as big a deal for Fedora, where we don't have AppArmor confinement in place [10:10] well, snapd's confinement is mostly useless anyway, so it's not a big deal in that respect [10:10] but it's still ugly since the document portal moved from flatpak into its own thing [10:10] and that is required for xdg-desktop-portal 0.11 [10:10] yeah. Alex moved it so the two services could share the same auth logic [10:15] yep [10:15] which means that flatpak in Fedora 27 has to be upgraded to... 
[10:15] which they don't typically do if they don't need to [10:24] re [10:32] PR snapd#5314 opened: many: rename snap.Info.Name() to snap.Info.InstanceName(), leave parallel-install TODOs === Son_Goku is now known as Conan_Kudo === Conan_Kudo is now known as Son_Goku [10:40] pedronis: just-renames PR ^^ [11:02] mborzecki: I'll co-review [11:04] mborzecki: I haven't done it all yet but could we perhaps remove Name for a transition period so that no new code lands that uses Name vs InstanceName? [11:06] mborzecki: will look [11:11] zyga: which Name? info.Name() is no more, it's been replaced by InstanceName() [11:11] ah perfect [11:12] that's exactly what I wantee [11:12] *wanted [11:12] to avoid API skew from PRs [11:12] I'm at 1/3 of the diff [11:12] zyga: happy scrolling :) and thanks for going though it [11:12] pleasure :) [11:19] Chipaca: ping [11:19] zyga: bonk [11:19] Chipaca: can you please look at the conversation on https://bugs.launchpad.net/snapd/+bug/1776295 [11:19] Bug #1776295: `stop` and `disable` should kill all processes regardless of daemon stop-mode [11:19] I think what is being asked for is not supported by systemd but I want a 2nd opinion [11:21] zyga: we can easily make explicit stop be harder than the systemd one, can't we? [11:21] define explicit stop? [11:21] is that something other than systemctl stop foo.service? [11:21] zyga: snap stop blah [11:21] mm [11:21] zyga: or snap remove blah [11:21] yes, that we can [11:22] we can definitely do that [11:22] behind the scenes we currently do systemctl stop [11:22] we could easily do killall --user $USER instead [11:22] that'll learn them to complain [11:22] pastebin ~/.ssh/id_rsa [11:22] yeah, it's good that we don't ) [11:22] anyway, this needs some thinking and discussion [11:23] zyga: forum time? 
[11:23] sounds good to me [11:23] basically it'd mean "stop-mode" is for "internal" stoppage, and perhaps also for "restart", but explicit "snap stop" or "snap remove" would slay the whole namespace or w/e [11:24] zyga: shall you, or should I? [11:24] Chipaca: I can [11:24] thank you :) [11:24] zyga: perfect :) [11:24] didn't we do systemctl kill --kill-who=all on snap stop? [11:25] mborzecki: #1776295 [11:25] Bug #1776295: `stop` and `disable` should kill all processes regardless of daemon stop-mode [11:26] mvo: was your "looks good" on #5312 a +1? [11:26] PR #5312: store: switch connectivity check to use v2/info [11:29] Chipaca: yes, that's what i'm referring to, iirc this was discussed a bit while that change was introduced, basically systemctl stop stops all processes in the cgroup, now since that no longer works because of stop mode, snap stop was supposed to do the right thing [11:30] mborzecki: that bug says it doesn't (but I haven't confirmed this) [11:30] mborzecki: I also don't know if disable or remove dtrt [11:30] hmm [11:30] maybe I'm wrong [11:30] zyga: never! [11:31] but what I see is that there's a desire for different action on "systemctl stop" vs on "snap refresh" [11:31] zyga: yep [11:31] Chipaca: I need to tell my wife ;) [11:31] Chipaca: hmmm [11:31] zyga: you're never *locally* wrong [11:31] Chipaca: so when you do snap stop <...> it ends up in servicestate.Control() which calls systemctl stop ... ?
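The two behaviors being contrasted in this thread can be illustrated with systemctl itself. Assumptions: snap.foo.bar.service is a placeholder unit name, and this is a sketch of the idea, not snapd's actual invocation:

```shell
# A plain stop, roughly what snapd does via servicestate.Control today;
# with a non-default stop-mode, processes may be left behind on purpose:
systemctl stop snap.foo.bar.service

# A "harder" explicit stop that signals every process remaining in the
# unit's cgroup, regardless of the service's stop-mode:
systemctl kill --kill-who=all --signal=SIGKILL snap.foo.bar.service
```

These commands need systemd and the unit to exist, so they are shown for illustration rather than execution.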
[11:32] mhmm [11:39] mborzecki: done [11:41] zyga: as for apparmor & SNAP_NAME, jdstrand suggested introducing another variable in the templates and pick one depending on whether it's outside/inside mount ns [11:42] anyways, to be ironed out [11:42] mborzecki ack [11:42] mborzecki: can you do a simple review for me [11:42] 5315 [11:42] PR snapd#5315 opened: cmd/snap-update-ns: introduce MimicRequiredError, make ReadOnlyFsErro… [11:42] I'm trying to shrink my main patch [11:46] * cachio afk [11:55] Morning all [11:56] hey hey :) [11:56] mborzecki: #5030 needs love [11:56] PR #5030: packaging/amzn2: initial packaging of 2.32.5 for Amazon Linux 2 [11:57] niemeyer: i think it's ok to close it for now, it'll still be accessible and we can revive it (or build on what Son_Goku proposed) when needed [11:57] mborzecki: But is it not needed? [11:57] for now, no [11:57] mborzecki: I mean, we do want it building fine there as well [11:57] niemeyer, most likely, this will be obsoleted by eventual introduction of snapd into EPEL [11:57] Son_Goku: Heya [11:58] niemeyer: Hi! [11:58] Chipaca: woah, that merge was quick, did you click merge right after I clicked "approve"? [11:58] PR snapd#5312 closed: store: switch connectivity check to use v2/info [11:58] Son_Goku: Sorry for not responding quickly enough yesterday.. it's my understanding that all of our g-s are submitted upstream as well.. [11:58] mvo: got a notification of your approve [11:58] mvo: mashed the merge button [11:58] mvo: yes [11:58] Chipaca: heh [11:58] Son_Goku: The maintainer was also in a sprint in London with us just a week ago or so [11:58] Chipaca: impressive [11:59] mvo: thank you for the +1 [11:59] Son_Goku: Unfortunately it's a bit tricky to get things flowing when there's so much interest yet a disjoint set of needs [11:59] niemeyer, I'm mainly worried about the g-s plugin rotting to the point I'll be forced to disable it [11:59] the experience is... not great [12:00] Chipaca: you did all the hard work! 
[12:02] niemeyer, the main gaps are snap channels and basic perm management [12:02] the snap channels one is a biggie, because it leads to a confusing experience in g-s [12:03] Son_Goku: Yeah [12:03] the perm management is less important, because snapd confinement doesn't work in Fedora anyway [12:03] Son_Goku: One idea we've been considering is shipping a snap-only version of it as a snap [12:03] PR snapd#5081 closed: interfaces/apparmor: add chopTree [12:03] niemeyer, yes, I saw the snap-master branch of g-s [12:03] I'm not sure that's a good idea [12:04] I think it might be better to create an independent DE-agnostic tool [12:04] which would be useful for DEs that don't have a software center [12:04] there's an elementary app called Snapper or something similar but I think the experience there is not good at all :/ [12:04] Snaptastic [12:04] Son_Goku: Best would be to have a single tool working everywhere with everything [12:04] but yeah, it's kind of weird [12:04] snaptastic, thanks [12:05] but it could be an interesting foundation for the concept [12:05] Son_Goku: But that's taking some time to sort out, and we want to make *something* available that sorts out the GUI side of things while we get the ideal in place [12:05] zyga, one idea would be to use the libyui foundation libraries to build a UI agnostic frontend [12:06] like what I did for DNF with dnfdragora: https://github.com/manatools/dnfdragora [12:06] that would even give a reasonable experience for people who'd like an aptitude-like interface for snap management [12:07] zyga: screenshots of dnfdragora are in https://blog.mageia.org/en/2017/07/21/dandifying-mageia-part-2/ [12:08] personally I don't think aptitude like approach is needed for snaps [12:08] simply because you don't have the zoo of weird packages like -common or -bin or whatever [12:08] but you need a way for people to find and manage them [12:09] gnome software has pretty close idea to a proper app store [12:09] well, then, get the snap 
channel support in g-s ;) [12:11] We did some design work on g-s last week. [12:15] popey: cool, any mockups? :) [12:15] loads of sketches, it's over to design now to do that [12:17] PR snapd#5284 closed: data/systemd/snapd.run-from-snap: ensure snapd tooling is available [12:18] mvo: do we need to do anything about https://github.com/snapcore/snapd/pull/5284#pullrequestreview-128293170 [12:18] PR #5284: data/systemd/snapd.run-from-snap: ensure snapd tooling is available [12:26] zyga: you mean libexecdir? [12:27] yes [12:27] zyga: I replied iirc, is the reply unclear (or did I forget to submit it :/ [12:27] I didn't see a reply [12:28] zyga: """Thanks for checking. This code only runs on ubuntu-core systems, so we can hardcode the (known) libexecdir here.""" <- maybe github ate it :( [12:28] +1 [12:28] I see it now [12:28] thanks [12:28] zyga: \o/ [12:28] zyga: thanks for your careful review [12:28] :( [12:28] PR snapd#5316 opened: store, et al: kill dead code that uses the bulk endpoint [12:28] ^^^^ if anybody enjoys reviewing a +99 -1333 PR, now's your chance! [12:29] * Chipaca -> lunch [12:52] client.actionData.Name appears to be unused, does anyone recall what the purpose of this field was? [12:56] Chipaca, less Go code in the world makes me happier :) [12:57] mborzecki: it was the name of the snap [12:57] Chipaca: 'was' ? [12:58] mborzecki: git show d01831fda7995cc0f2f207000adab192fb2fbf6d [13:00] Chipaca, zyga we are still in a meeting, might overrun a little bit [13:00] ack [13:01] mborzecki: was removed in 15edad32e3d572a2ddd53df3f5b7a9eab4b23333 [13:01] mborzecki: it's ignored server-side, which is probably why [13:01] mborzecki: KILL IT WITH FIRE [13:02] mvo: should we wait? [13:02] Chipaca: oh i won't do that ;) i plan to use it instead [13:02] mvo: or shall we get started? [13:02] zyga: go ahead I would say [13:02] zyga: Meeting running late here..
we'll be there in a moment [13:02] niemeyer: ack [13:02] Chipaca: actually added Instance to SnapOptions, just to find that Name field a bit later [13:03] mborzecki: ¯\_(ツ)_/¯ [13:03] mborzecki: removing it seems the saner thing to do i think [13:21] mborzecki: sorry i didn't see your comment about using it instead [13:21] mborzecki: that works also :-) [13:22] mborzecki: but [13:22] mborzecki: what would you use it for? [13:22] Chipaca: snap install foo.snap --instance foo_bar [13:22] mborzecki: the reason it's not used is because in doSnapAction, the name is already in the path [13:23] mborzecki: that is, it's a POST to /v2/snaps/ [13:23] mborzecki: so saying name in the body of the post is redundant [13:23] redolent [13:23] refulgent [13:24] Chipaca: right, but if you try to install it as an instance then i can either use another form field or reuse this [13:24] Chipaca: and it's also POST /v2/snaps, so the current name comes from the snap itself [13:25] mborzecki: /v2/snaps is doMultiSnapAction [13:29] Chipaca: i'm doing the variant which sideloads a snap file, hits the same endpoints but uploads a snap too [13:31] mborzecki: ah! a'ight [13:40] * zyga needs to feed the dog, brb [13:51] re === pstolowski is now known as pstolowski|lunch [14:21] Chipaca: seems to work now https://paste.ubuntu.com/p/3WJZthdSyY/ :P [14:40] jdstrand: hey [14:40] jdstrand: do you have a sec?
[14:47] pedronis: in #5317 I have increased coverage of details.go while keeping to the spirit of the PR :-D [14:57] Chipaca: ah === pstolowski|lunch is now known as pstolowski [14:57] pedronis: perhaps not what you were hoping for :) [14:58] let's see what we get when the test have run [15:01] pedronis: locally, coverage of store went from 88.3% (96.8% in details.go) to 87.9% (96.6% in details.go) [15:01] the same two lines are not uncovered [15:01] er, not covered* [15:03] PR snapd#5317 opened: overlord: introduce a gadget-connect task and use it at first boot [15:11] * zyga iret's from real-life interrupt [15:16] pstolowski: I will look at your disconnect hooks PR again in the morning === jkridner is now known as jkridner|afk [15:40] * cachio lunch [15:42] pedronis: ty [15:45] PR snapd#5318 opened: interfaces/builtin: add new cuda-support interface [15:58] preview of what I'm working on https://usercontent.irccloud-cdn.com/file/h6uU1nG3/image.png [15:59] @diddledan I'd love if gog would itself distribute snaps [15:59] yeah, they don't want to though [15:59] did we ask? [16:00] last I saw they're "monitoring packaging systems" [16:00] diddledan: I know some people there (or I used to, I need to check) [16:00] pstolowski, mborzecki: can you guys give me a quick +1 on 5319 [16:00] it's a mv from package to package [16:00] PR snapd#5319 opened: cmd/snap-update-ns,strutil: move PathIterator to strutil, add Depth helper [16:00] and a trivial one liner method [16:00] I can split those if you'd feel better [16:03] zyga: done [16:03] mvo: hey, do you want to have the call about core? [16:03] pstolowski: thank you! [16:03] zyga: maybe you will know: [16:04] yes? [16:04] zyga: when we run spread tests, do we ever reuse the test user home dir between runs of entire suite? 
I think we never clean it [16:04] yes, we reuse it [16:05] I only recall seeing code that makes it [16:05] but then that's that [16:05] diddledan: gog has many similarities with steam [16:05] diddledan: so you may look at the (now closed) steam-support interface [16:06] as well as the surrounding discussions [16:06] zyga: just to be sure: i'm not talking about not cleaning it between individual tests; i mean the possibility of having leftovers between executions of the entire suite [16:06] pstolowski: I think it is the same [16:06] pstolowski: it is only cleaned before the first test :) [16:06] after that, until the machine is recycled, we don't touch that [16:26] thank you! [16:26] * zyga just waits for green then proposes the next PR [16:26] well, reopens the next PR [16:27] zyga: lets do it tomorrow morning (unless you have something urgent) [16:27] mvo: no, nothing urgent [16:27] zyga: cool, lets do it around 9ish tomorrow? [16:27] sounds good [16:28] cool [16:35] popey: ping [16:35] eh, blasted fedora 28 failure [16:36] popey: regarding axel snap, do you think we can enable its auto builds now? [16:39] om26er: heya. it already is hooked up. I did it when I closed the bug report. [16:39] om26er: but it fails to build https://build.snapcraft.io/user/snapcrafters/axel [16:39] Can't exec "aclocal": No such file or directory at /usr/share/autoconf/Autom4te/FileUtils.pm line 326.
https://launchpadlibrarian.net/373136884/buildlog_snap_ubuntu_xenial_amd64_1c7edf66b8a27bd4ca35886b0511f15e-xenial_BUILDING.txt.gz [16:40] whoa, it fails on all arches [16:41] mborzecki: shall I change https://github.com/snapcore/snapd/pull/5315 [16:41] PR #5315: cmd/snap-update-ns: introduce MimicRequiredError, make ReadOnlyFsErro… [16:43] mborzecki: I think only you can test this https://github.com/snapcore/snapd/pull/5318 [16:43] PR #5318: interfaces/builtin: add new cuda-support interface [17:03] mborzecki, hey [17:03] mborzecki, I created a new image for arch and when I run the tests I get https://paste.ubuntu.com/p/gWm7dzkKFN/ [17:04] mborzecki, it is checking for updates [17:04] but there are no new updates [17:08] PR snapd#5319 closed: cmd/snap-update-ns,strutil: move PathIterator to strutil, add Depth helper [17:10] PR snapd#5081 opened: interfaces/apparmor: add chopTree [17:10] mborzecki, jdstrand, Chipaca: I updated https://github.com/snapcore/snapd/pull/5081 and it is ready for another review [17:10] PR #5081: interfaces/apparmor: add chopTree [17:17] cachio: let me take a quick look, don't know why it's not showing the 'there is nothing to do' message [17:18] mborzecki, we should remove the update from the snapd test suite [17:18] cachio: pacman got updated recently, maybe it's showing different output when there are no updates now [17:18] I mean, we don't need to update all the packages for arch [17:19] mborzecki, I am updating the images every 3 weeks [17:19] mborzecki, what do you think? [17:19] cachio: well in theory we ought to be [17:19] cachio: i'll take a quick look, it's probably something trivial [17:20] cachio: can I use this image somehow, or have you replaced the one that we used before? [17:20] mborzecki, I already removed the image [17:21] I could create a new one [17:22] cachio: can you build it and name it differently than the one we have now?
i'll update spread.yaml locally to use it [17:22] mborzecki, sure [17:23] mborzecki, in progress [17:31] mborzecki, arch-linux-64-new-v20180613 [17:31] this is the image [17:33] cachio: thx [17:46] I’m going for a bike run [17:47] See you later [17:53] cachio: right, so the logic in prepare_project for arch is wrong, i'll try to do a quick fix [17:53] mborzecki, great, tx === pstolowski is now known as pstolowski|afk [18:07] hmm my shellcheck got updated to 0.5.0 and it's raising some issues in tests/lib/*.sh [18:11] PR snapd#5320 opened: tests/lib/prepare-restore: fix upgrade/reboot handling on arch [18:11] cachio: ^^ [18:18] mborzecki, tx [18:55] PR snapd#5030 closed: packaging/amzn2: initial packaging of 2.32.5 for Amazon Linux 2 [19:46] re [19:46] * zyga is safely back from biking [19:57] zyga, if you have 5 minutes #5320 [19:57] thanks [19:57] PR #5320: tests/lib/prepare-restore: fix upgrade/reboot handling on arch [20:01] PR snapd#5320 closed: tests/lib/prepare-restore: fix upgrade/reboot handling on arch [20:01] cachio: pleasure :-) [20:01] jdstrand: hey, around? [20:02] meeting [20:02] zyga, tx [20:19] guys when I do a ruby build in a build-override, how do I move those files into the stage then? [20:24] mbuahaha, got 'snap info' talking to v2/info \o/ [20:25] sergiusens: you around? [20:25] Chipaca: yup [20:25] sergiusens: two things [20:26] Luke: make sure they end up in `$SNAPCRAFT_PART_INSTALL` [20:26] Chipaca: one by one [20:26] sergiusens: one, building a cross-platform snap-pack turned out to be a lot more work than I thought it would be, so it's on my plate as a Thing (as opposed to something I slot in between the cracks) [20:26] sergiusens: two was Luke and you've done that :-) [20:27] sergiusens: as i see it i should be getting to work on it early next week [20:27] but I don't own my priorities :-) [20:28] Chipaca: oh neat, does that include replacing mksquashfs too or just snap?
I wonder, not require 😉 [20:29] sergiusens: it does not include replacing mksquashfs [20:29] PR snapd#5321 opened: tests: fix interfaces-contacts-service test retrying to remove share dir [20:29] sergiusens: thanks. is that var documented somewhere? I couldn't find it in the docs [20:29] sergiusens: it's mostly a lot of careful separation of code into cross-platformy bits, but also a bunch of figuring out how to run a test suite on windows :-) [20:30] sergiusens: I need to be in a special mindset for that last bit [20:30] or hire somebody :-D [20:30] (number of times I've gone for "hire somebody" in that situation: 3) [20:30] Luke: uses are described on https://snapdocs.labix.org/scriptlets/4892 or https://docs.snapcraft.io/build-snaps/scriptlets [20:31] Chipaca: use appveyor, that is what we use [20:31] i saw the scriptlets docs before. there's no mention of $SNAPCRAFT_PART_INSTALL besides in passing in an example [20:32] i guess I'm looking for an explanation of the best practices/intended use of $SNAPCRAFT_PART_INSTALL etc [20:32] thanks btw [20:32] sergiusens: nice, i'll look that up [20:34] sergiusens: what happens every time I start is that I'm an hour into reading and I'm just adding to a stack of things to figure out :-) hence why i need to set aside time for it [20:35] sergiusens: for example, appveyor seems to boil down to "run msbuild for you" [20:35] so now i need to learn msbuild [20:35] etc :) [20:35] sergiusens: no worries, but not quick for me [20:35] i'll get to it next week [20:35] hopefully [20:39] jdstrand: if you still have time today please look at chopTree again [20:39] It now does what you asked for [20:39] zyga: if you've got beans after the bike run, #5316 :-D [20:39] PR #5316: store, et al: kill dead code that uses the bulk endpoint [20:40] zyga: also 'bike run' seems like you're doing something wrong :-) [20:44] zyga: ok, it's on my list.
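Based on sergiusens' pointer, a minimal snapcraft.yaml fragment showing the intent of $SNAPCRAFT_PART_INSTALL in a build scriptlet might look like the sketch below. The part name, gemspec, gem commands, and the nil plugin choice are all illustrative assumptions, not from the discussion, and the scriptlet keyword depends on the snapcraft version in use:

```yaml
parts:
  my-ruby-app:          # hypothetical part name
    plugin: nil
    source: .
    override-build: |
      # build the gem as usual...
      gem build my-app.gemspec
      # ...then install into this part's install area; anything placed
      # under $SNAPCRAFT_PART_INSTALL is what gets staged and primed:
      gem install my-app-*.gem --install-dir "$SNAPCRAFT_PART_INSTALL"
```

The key point is that scriptlets do not copy files into stage/ themselves; they put build output into $SNAPCRAFT_PART_INSTALL and snapcraft's stage step does the rest.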
it may be tomorrow [20:53] Chipaca: that explains the odd stare ;-) [20:54] I’ll read it first thing tomorrow [21:18] niemeyer: hi! can you add this to your queue to give some thought: https://forum.snapcraft.io/t/camera-raw-usb-plugs-auto-connect-for-qtchildid/2917/4