[00:42] PR snapd#6618 opened: tests: validates snapd from ppa [05:58] Good morning [06:33] morning [06:47] mvo: morning [06:47] mborzecki: hey, good morning [07:43] hey mborzecki, mvo [07:43] zyga: hey [07:43] kids off to school, time to work [07:45] I took some time to clean my office yesterday [07:45] I must say, it feels great to work in a tidy place [07:45] I will try to preserve this condition better than last time [07:46] I have a few small PRs that could land quickly, if anyone wants to see [07:47] zyga: which ones? [07:47] mborzecki: typedef struct mountinfo is trivial [07:48] mborzecki: UMOUNT_NOFOLLOW is super trivial [07:48] those two can just land [07:48] mborzecki: there are some short but not trivial branches like fixed private /tmp [07:48] or fixes to mountinfo parsing [07:58] hey zyga [07:58] good morning :) [07:58] how is your daughter today? [07:59] zyga: all well [07:59] zyga: and yours :) ? [08:00] she was very hungry today, eating a few times in a row [08:00] the older one went to school recently :) [08:00] (need to remember about both) === pstolowski|afk is now known as pstolowski [08:03] mornings! [08:04] pstolowski: hey [08:04] good morning pstolowski [08:14] zyga: do you need a review from pedronis on https://github.com/snapcore/snapd/pull/6608 and the mountinfo one? [08:14] PR #6608: cmd/snap-confine: umount scratch dir using UMOUNT_NOFOLLOW [08:14] I will wait for a pass, yes [08:17] good morning [08:30] hey dot-tobias [08:32] zyga: Morning 😊 BTW, thank you again for the layouts feature, made both of my snaps possible in the first place!
Huge improvement for snapping otherwise “unconfinable” software IMHO [08:52] 6608 can land [09:05] man i wish each log line said what machine it was from :-( [09:05] morning all [09:08] Chipaca: wouldn't be surprised if everyone has a patch for spread to do just that :P i had one [09:08] pedronis: hello [09:08] pedronis: I updated https://github.com/snapcore/snapd/pull/6502 [09:08] PR #6502: dirs,overlord/snapstate: add Soft and Hard refresh checks [09:08] dot-tobias: I'm very glad to hear that :-) [09:09] thank you :) [09:09] Chipaca: good morning [09:09] pedronis: thank you, merged now [09:09] PR snapd#6608 closed: cmd/snap-confine: umount scratch dir using UMOUNT_NOFOLLOW [09:15] zyga: hey, when you have a spare slot, could you please re-review 6491 - aiui it "just" needs a re-review at this point [09:15] pstolowski: (or is anything missing in 6491) [09:18] mvo: no, it needs 2nd review [09:24] Chipaca: hey, can you cast your eye on https://github.com/snapcore/snapd/pull/6617 ? [09:25] PR #6617: cmd/snap: fix regression of snap saved command <⚠ Critical> [09:26] mvo: sure, doing so now [09:26] ah, yes [09:27] zyga: thank you! [09:32] kjackal: ping [09:32] hey Chipaca [09:32] kjackal: hey [09:32] kjackal: which 16.04 image from ovh doesn't support squashfs? [09:33] Let me see what that person said, give me a sec [09:34] Chipaca: unfortunately he did not come back to answer your questions: https://github.com/ubuntu/microk8s/issues/362 [09:35] I did point him to the discourse topic [09:37] kjackal: thanks [09:37] kjackal: I've raised the issue internally as well [09:37] if it doesn't support squashfs it's running some random kernel, meaning it's not Ubuntu [09:37] does the ovh get certified ubuntu images from us?
[09:38] they've been in trouble with canonical before for selling something as Ubuntu and it not being Ubuntu [09:38] but I don't know how that got resolved [09:38] if it did, even [09:39] grrr the testing situation is really annoying [09:40] can we all stop the line and fix tests [09:41] including perhaps spending a moment to ensure that the situation is really better, not just not broken again [09:42] zyga: what's broken now? [09:42] Chipaca: is the situation with /var/snap fixed now? [09:42] zyga: no [09:42] so that's enough ;) [09:42] i'm still working on it [09:42] tests cannot be red for many days [09:42] i'm still working on it [09:42] I understand, I'm saying we should all jump at fixing the fundamental issue [09:43] which is the fundamental issue? [09:43] that is the one-to-all influence of every test [09:43] any test can break any other test [09:43] we do insufficient work in making sure the environment is pristine [09:43] and the definition of pristine is not strict [09:43] it's all reactionary [09:43] quick fix: set workers to the number of tests [09:45] I'm partially not joking, it's just impractical to develop snapd for several years and constantly be in a situation where nearly every PR needs to be restarted [09:45] github is down, fine, happens once a year [09:45] tests don't clean up: *not* fine [09:45] we should really fix this [09:46] i'm not disagreeing, but [09:46] this issue is something else [09:47] 'snap changes' indicates that snapd _did_ remove the snap [09:47] so there's an actual bug somewhere [09:47] that's okay, but the first test that experiences this should stop and fail, [09:48] and not influence something entirely different later [09:48] once you fix the problem with current we will still have the very same unreliability problem with tests [09:49] what do you mean by "experiences" [09:50] a test that, by the end of the execution, due to the bug or otherwise leaves the test environment (system) in a state different from the state
it was in at the beginning of the test [09:51] my impression is that right now our tests feel like DOS software [09:51] one program runs and corrupts something [09:51] then sometime later another program dies because of that corruption [09:51] it's very time-consuming to debug [09:51] we should spend the time in a smarter way [09:52] can i get some eyes on https://github.com/snapcore/snapd/pull/6609 ? [09:53] PR #6609: snap/gadget: introduce volume update info [09:53] so, we have the snapd-state code [09:53] mborzecki: enqueued [09:53] zyga: thanks [09:53] this problem would go away, i expect, if /var/snap were also part of the things that got nuked every iteration [09:54] OTOH, maybe the restore code should fail if there's things that shouldn't be there [09:54] so then the tests _need_ to clean up after themselves [09:54] perhaps the assumption that there's a magic cleanup system was flawed [09:54] so then we are testing that things are clean after cleaning up [09:54] Chipaca: wonder if that may be the package post/pre rm cleanup code [09:54] would tests be really that much longer if install_local enqueued a restore action that removes the test [09:54] and the restore code would just check that nothing is left behind (and fail if there was) [09:54] though it's only happening on ubuntu-core, hmm [09:55] mborzecki: what's only happening on ubuntu-core? these failures you mean? [09:55] mborzecki: interesting [09:55] on core we get extra snaps for testing [09:55] Chipaca: yeah, i recall seeing only ubuntu-core there [09:55] anyway, I'm grumpy because I see us happily ignoring the elephant in the room [09:55] hmmm!
[09:55] our testing infrastructure needs love [09:55] mborzecki: i hadn't noticed :-/ [09:55] it's not healthy [09:55] if it's that, maybe it's silly [09:56] because is_snapd_state_saved behaves differently on core [09:56] heh [09:56] save_snapd_state behaves differently on core too [09:56] Chipaca: yup, looking at 3 logs atm, all ubuntu-core [09:57] i thought i'd seen a non-core one, but can't find it now [09:57] so it might've been unrelated [09:57] the whole 'snap state file' thing does not exist on core [09:57] and we don't uninstall the package on ubuntu-core, because how we'd do it? [09:57] afaict [09:58] mborzecki: we don't do that either, in the save state thing -- are you saying we do that each loop? [09:59] Chipaca: my bad, we don't [10:00] mborzecki: might just be coincidence tho [10:01] hm [10:01] mborzecki: in save_snapd_state, when it's core, we copy /var/lib/snapd [10:02] ah and then restore_snapd_lib [10:02] ok fair [10:04] eep! finally got a shell in a failed one! [10:04] * Chipaca runs [10:05] hahah [10:09] Chipaca: hm, but we don't restore snapd state for each test [10:09] mborzecki: in core we do [10:09] mborzecki: reset_all_snap does that [10:09] and this might be the problem? [10:10] or at least i think we do [10:10] i mean, that's the code that fails [10:11] reset.sh calls reset_all_snap on load [10:11] so everything that sources it will do that [10:11] (on core) [10:11] (on classic, reset_classic) [10:12] so prepare_suite_each and restore_suite both do that [10:16] we should restore the state for each test, lots of things assume that [10:18] Chipaca: ok, then reset_classic does rm -rvf /var/snap, reset_all_snap does not [10:19] it calls snap remove which should do the trick [10:19] pedronis: the -each versions do run for each test [10:19] mborzecki: indeed [10:19] maybe we should have a check there to verify that /var/snap does not have any extra data?
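The check mborzecki suggests, failing the restore if /var/snap gained entries a test never cleaned up, is cheap to sketch. This is a hedged illustration, not the actual spread helpers; the function names are made up:

```go
package main

import (
	"fmt"
	"os"
	"sort"
)

// listDir returns the sorted entry names of dir (nil if it is missing).
func listDir(dir string) []string {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil
	}
	names := make([]string, 0, len(entries))
	for _, e := range entries {
		names = append(names, e.Name())
	}
	sort.Strings(names)
	return names
}

// leftovers returns entries present in "after" but not in "before",
// i.e. state a test created and failed to clean up.
func leftovers(before, after []string) []string {
	seen := make(map[string]bool, len(before))
	for _, n := range before {
		seen[n] = true
	}
	var leaked []string
	for _, n := range after {
		if !seen[n] {
			leaked = append(leaked, n)
		}
	}
	return leaked
}

func main() {
	before := listDir("/var/snap")
	// ... run the test in between ...
	after := listDir("/var/snap")
	if leaked := leftovers(before, after); len(leaked) > 0 {
		fmt.Println("restore: test leaked entries under /var/snap:", leaked)
		os.Exit(1)
	}
}
```

Failing here, at the test that leaked, is exactly the "stop and fail at the first test that experiences this" behaviour argued for above.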
[10:21] mborzecki: we kinda do :-) [10:21] * zyga wonders if we will ever return to the spread measurements idea that he proposed about a year ago [10:21] i mean, that's why it's dying [10:21] well, more than a year now === ricab is now known as ricab|brb === ricab|brb is now known as ricab|bbiab [10:24] zyga: ? [10:25] Chipaca: I proposed a spread extension where we can add a "measure: " section; anything printed there is stored and compared with before/after for each level of nesting [10:25] Chipaca: so if you print the set of installed packages [10:25] you will notice leaked packages [10:25] if you print the state of systemd services [10:25] you will see systemd services changing state across tests [10:25] we can easily measure anything we want [10:26] I iterated on this idea [10:26] and while doing so found a bunch of bugs in our test setup back then [10:34] zyga: and what happened? [10:36] I think not enough eyes looking or agreeing with the gravity of the problem [10:38] Chipaca: I think that we need something that would alert us of a test doing an unexpected system change [10:38] not just trying to fix them [10:38] fixing is costly (we untar / remove all the time) [10:38] checking is presumably cheaper [10:39] so this might also speed up things in the end [10:41] it was also mixed with a change to spread which came with friction [10:41] pedronis: we could replicate that without spread but arguably the issue is that spread is too open and doesn't give us the ability to tame that [10:41] pedronis: perhaps that's the way forward? [10:42] anyway "flaky tests" and "test invariant" are still part of my open problems list [10:43] I would argue we should solve that with priority because it hinders any other development done by the team [10:51] let's look at this in the standup (cc mvo) [10:59] gladly!
[11:04] brb [11:06] PR snapd#6619 opened: partition,bootloader: rename 'partition' package to 'bootloader' [11:29] pstolowski: https://github.com/snapcore/snapd/pull/6491 reviewed [11:29] PR #6491: interfaces: hotplug nested vm test, updated serial-port interface [11:30] mborzecki: looking at https://github.com/snapcore/snapd/pull/6609 [11:30] PR #6609: snap/gadget: introduce volume update info [11:30] zyga: thanks! [11:32] mborzecki: https://github.com/snapcore/snapd/pull/6609 done [11:32] PR #6609: snap/gadget: introduce volume update info [11:32] pstolowski: I think that some of my questions got answers by the time I read to the bottom but I chose not to remove them. [11:33] pstolowski: it's safer and easier to ask and just get a quick answer [11:33] zyga: sure, makes sense [11:36] * zyga goes for a break [11:48] Chipaca: have you found anything interesting in the debug shell? [11:49] mborzecki: nothing as yet, but i'm retrying with more debug [12:03] hm [12:03] mvo: 'snap remodel' doesn't print a last \n [12:04] HAH! found the bugger [12:04] * Chipaca notes that is "the thing that causes the bug", not any other meaning of the word [12:05] mborzecki: the 'snap remodel' spread test leaves things in a confused state [12:06] 'confused' [12:06] well, it's a remodel [12:06] :/ [12:06] Chipaca: installed snap disappeared from the state? [12:07] well, it's hard to call it 'disappear' when the restore of the test overwrites state.json [12:09] pedronis: it isn't the remodel per se; the restore of the test just was untidy [12:09] i now need to test the fix a few times [12:10] pedronis: we can remove required-snaps. Are we supposed to be able to remove required-snaps? [12:10] Chipaca: no, it's a bug in remodel, I actually discussed that mvo this morning [12:10] *with mvo [12:10] Chipaca: it's not setting the flag properly [12:11] grr [12:11] then this fix gets a lot harder [12:18] pedronis: can you remodel 'back'? 
[12:18] do we have a helper to parse size information that has format like 100M ? [12:18] Chipaca: yes [12:18] mborzecki: yes, strutil [12:18] Chipaca: ta! [12:18] it doesn't remove things [12:18] mborzecki: strutil.ParseByteSize [12:18] but under correct behavior [12:18] mborzecki: it's ISO tho [12:18] it would unset the required flag [12:19] Chipaca: hey, do you know what snapd version the """download""" local snaps feature will show up in? [12:19] pedronis: so I can remodel back, and snap remove, and it'd be happy [12:19] Chipaca: yes [12:19] assuming no bugs :) [12:19] sergiusens: 2.38 afaik [12:19] but we are interested to know if we have some [12:19] sergiusens: but let me confirm [12:19] sergiusens: this reminds me we still need to talk about the commandline options (not this week tho) [12:20] sergiusens: maybe 2.39 :-( [12:20] Chipaca: also, can we consider having a `snap wait-for-ready` command for easier scripting, and until that is done, what is the most bulletproof way to write my own? (e.g.; for multipass we run `multipass version` and wait for the `multipassd` version to show up) [12:20] sergiusens: what is wait-for-ready [12:21] Chipaca: block until snapd is ready [12:21] ready in which sense? [12:21] sergiusens: what pedronis said [12:21] sergiusens: snap wait system seed.loaded <- that's one version of 'wait for ready' [12:22] ready to be able to carry over "snap install" commands through the socket (no need to consider networking) [12:23] then what John showed is mostly that [12:23] to the minimum, the socket existing and taking connections [12:23] great [12:35] sergiusens: I should probably ask mvo every time somebody asks me about versions-to-features [12:35] sergiusens: 2.38 isn't cut yet, so 2.38 will have the download api [12:36] thanks Chipaca, that will be a pleasure 🙂 [12:36] sergiusens: you saw what its final form was? [12:36] * pstolowski lunch [12:37] Chipaca: is that to counter my "pleasure" comment? Should I be afraid?
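The helper Chipaca points at is snapd's real strutil.ParseByteSize, which is SI/"ISO" (base-1000) rather than base-1024. As a simplified standalone stand-in, not snapd's exact grammar (strutil accepts forms like "100MB"), parsing "100M"-style sizes looks like:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseByteSize converts strings like "512", "100M" or "2G" to bytes,
// using SI (base-1000) multipliers. A simplified stand-in for snapd's
// strutil.ParseByteSize.
func parseByteSize(s string) (int64, error) {
	multipliers := map[string]int64{
		"": 1, "K": 1000, "M": 1000 * 1000,
		"G": 1000 * 1000 * 1000, "T": 1000 * 1000 * 1000 * 1000,
	}
	num := strings.TrimRight(s, "KMGT")
	mult, ok := multipliers[s[len(num):]]
	if !ok {
		return 0, fmt.Errorf("cannot parse size %q", s)
	}
	n, err := strconv.ParseInt(num, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("cannot parse size %q: %v", s, err)
	}
	return n * mult, nil
}

func main() {
	for _, s := range []string{"512", "100M", "2G"} {
		n, err := parseByteSize(s)
		fmt.Println(s, "=>", n, err)
	}
}
```

The base-1000 choice here mirrors the "it's ISO tho" caveat: "100M" is 100,000,000 bytes, not 100 * 1024 * 1024.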
[12:37] sergiusens: GET ..../v2/snaps//file [12:37] Chipaca: I did not get to see it though, no... did it change much from what we discussed [12:37] ah, that sounds usable [12:37] :-) [12:38] sergiusens: when you saw it it was the same, but .../blob [12:38] * Chipaca <- really good at bad names [12:38] I remember the tusks and husks and all that 😉 [12:39] it's all been downhill since candirú [12:39] cerati/candirú/curucú [12:45] popey: BTW, Feb 11 17:07:15 hal snapd[14673]: desktop.go:129: cannot use line "Exec=/usr/share/code-insiders/code-insiders --new-window %F" for desktop file "/var/lib/snapd/desktop/applications/code-insiders_code-insiders.desktop" (snap code-insiders) [12:45] maybe something to raise with code-insiders [12:47] popey: and, as feared, Mar 19 12:31:02 hal snapd[3271]: main.go:141: DEBUG: activation done in 21.615s [12:47] that's one slow activation [12:49] PR snapcraft#2505 closed: Test store [12:53] Chipaca: question to you sir [12:53] Chipaca: when is a refresh done [12:53] zyga: it was like that when I got there! [12:53] conceptually [12:53] hah :) [12:54] I would like to reset a new time stored in the state when a refresh is ready [12:54] zyga: I did not understand that last statement [12:54] zyga: but [12:54] zyga: please explain [12:54] Chipaca: when refresh is done, when a part of a change that refreshes a particular snap is deemed complete, I'd like to perform some operations [12:55] Chipaca: to account for that in the state [12:55] Chipaca: should that be link-snap? [12:55] Chipaca: or start services? [12:55] or a new task? [12:55] this may also be relevant for health checks [12:55] pedronis: you _can't_ set a revision 'back' :-/ [12:55] Chipaca: do you know CLOS? [12:56] Chipaca: no, you can make a new one though, that is like the first [12:56] pedronis: but i don't have the key for that [12:56] pedronis: dunno who does [12:56] pedronis: do you?
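What zyga is asking for, running extra work once a snap's refresh is deemed complete, is essentially a CLOS-style :after method. Go has no method combination, but the same shape falls out of a plain wrapper. A hedged sketch with hypothetical types (not snapd's task engine; there, the natural place is the tail of the link-snap task, as discussed below):

```go
package main

import "fmt"

// SnapState is a hypothetical stand-in for per-snap state in snapd.
type SnapState struct {
	Name                 string
	PostponedRefreshTime int64 // the value zyga wants to reset after a refresh
}

// refresh is the hypothetical "primary method".
func refresh(s *SnapState) {
	fmt.Println("refreshing", s.Name)
}

// refreshWithAfter wraps refresh with CLOS-:after-style behaviour:
// the hooks run only once the primary operation has completed.
func refreshWithAfter(s *SnapState, hooks ...func(*SnapState)) {
	refresh(s)
	for _, h := range hooks {
		h(s)
	}
}

func main() {
	s := &SnapState{Name: "test-snapd-tools", PostponedRefreshTime: 42}
	refreshWithAfter(s, func(s *SnapState) { s.PostponedRefreshTime = 0 })
	fmt.Println("postponed refresh time:", s.PostponedRefreshTime)
}
```

In snapd terms the wrapper corresponds to appending the bookkeeping to whichever task marks the refresh complete, rather than to a new language mechanism.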
[12:56] Chipaca: it's the test key [12:56] zyga: no i do not know clos [12:56] it's in the code [12:56] but let's talk at the standup about this at this point [12:57] (defmethod refresh :after ((s snap)) (setf (postponed-refresh-time (state-of-snap s)) 0)) [12:57] :-( ok [12:57] something along those lines [12:57] * Chipaca kickbans zyga [12:57] hehe [12:58] zyga: CLOs are collateralized loan obligations, the internet tells me [12:58] https://lispcookbook.github.io/cl-cookbook/clos.html#method-qualifiers-before-after-around [12:58] * pedronis eyes his copy of "The Art of The Metaobject Protocol" [12:58] * Chipaca writes back to the kind person from google [12:58] well, recruiter, 'person' might be a stretch [12:59] I think the answer to "thinks are rather complex and starting to fail in ways that escape us" is not "let's make it more complex" [12:59] all things evolve till they contain an implementation of lisp, that sort of thing ;) [12:59] things* [13:02] pedronis: I mean, I could push a fix that just does 'snap remove', which will then fail when it gets fixed [13:03] Chipaca: that's fine I think [13:03] Chipaca: so.... where would I put something like that? [13:03] it will fail in a clear way [13:03] (at least) [13:03] zyga: nowhere [13:04] zyga: it depends what you need to know? link-snap is the easiest place if it's good enough [13:05] pedronis: yeah, I think link-snap is OK for now [13:05] pedronis: conceptually where post-refresh hooks run probably [13:05] s/where/when/ [13:05] we have rerefresh now, so you have options if you need something else [13:05] but link-snap would be easier [13:05] if it's enough [13:05] off to pick up the kids [13:06] I think it's enough for now [13:06] thanks! [13:06] it will all be a PR soon [13:07] * Chipaca honestly didn't understand [13:07] * Chipaca will look at the pr [13:08] Chipaca: reading backlog - version-to-feature - for some we have this in the forum. for all the "important" ones.
remodel test - I miss some details, how can it leave things in state when we restore state.json from a backup? [13:08] mvo: remodel test, fixing it [13:08] mvo: with comments about how to fix it again when you fix remodel and thus break the test fixer [13:09] mvo: i'll push as soon as it's green here [13:11] Chipaca: woah [13:11] Chipaca: looking forward to that [13:11] ¯\_(ツ)_/¯ it's not lisp [13:11] Chipaca: really curious what I'm missing [13:11] =) [13:13] lots of incredibly silly parentheses [13:13] Chipaca: at least this part I understand [13:13] * mvo is very proud about this [13:25] * zyga is unsure about the LISP comments [13:33] mvo: vvv [13:33] PR snapd#6620 opened: tests/main/remodel: clean up before reverting the state [13:33] mvo: ^^^ [13:34] PR snapd#6621 opened: overlord/snapstate: track time of postponed refreshes [13:36] Chipaca: looking [13:37] Chipaca: thanks! do we actually keep state.json between when we do a restore for the next test? [13:37] Chipaca: what I'm missing is why this makes the state untidy [13:37] mvo: this test itself does [13:37] this test is rather piggy [13:38] on classic, where we restore the state by hand, it doesn't break (because that restoring cleans things up also) [13:38] Chipaca: oh, so this restore runs *after* the full restore? [13:38] on core we use snapd to remove stuff, and that fails because of this [13:38] Chipaca: aha [13:38] so [13:38] it's like this: [13:38] this test restores the state, but does not clean up /var/snap [13:39] Chipaca: it seems on core we also want to restore the previous state.json.
it sounds dangerous what we do right now [13:39] the next test that installs test-snapd-tools, fails on cleanup [13:39] Chipaca: ohhhhh [13:39] Chipaca: now I get it [13:39] so it might not be the immediate next test [13:39] it would have to install test-snapd-tools first :-) [13:39] anyway, that [13:40] mvo: on the contrary i'd be happier if restoring state.json wasn't needed (unless we were snapshotting the whole machine) [13:40] because we _should_ be able to clean back to a known-good state [13:40] but, alas, sometimes we don't [13:40] so i dunno [13:40] Chipaca: right, except that some tests use "jq" to do things [13:40] Chipaca: which might make a mess [13:40] yeah [13:40] Chipaca: but yeah, I guess even then we should write good "jq" [13:40] mvo: those do restore it tho [13:41] at least, i grepped and it looked like they did [13:41] i wasn't too thorough because i'm tired of red tests [13:41] and the jq-using tests are all old :) [13:42] Chipaca: thanks for finding this one, you get a hero medal for this one (https://www.google.com/imgres?imgurl=https://vignette.wikia.nocookie.net/wreckitralph/images/1/1d/RalphHeroesDutyMedal.png/revision/latest?cb%3D20130516213303&imgrefurl=https://wreckitralph.fandom.com/wiki/Medal_of_Heroes&docid=OoNm4g5X82ygWM&tbnid=4S4dCW2qcULhzM:&vet=1&w=783&h=446&source=sh/x/im) [13:42] mvo: quick trivial for your consideration https://github.com/snapcore/snapd/pull/6622 [13:42] PR #6622: cmd/libsnap: rename C enum for feature flag [13:42] zyga: thanks [13:42] PR snapd#6622 opened: cmd/libsnap: rename C enum for feature flag [13:47] hmm, comparing time is apparently harder than I hoped [13:52] brb, see you at standup [13:54] Chipaca: sorry, i don't know what that means, whether it's good or bad (the DEBUG thing) [13:55] popey: I'll be getting you a snapd with moar debug, if you don't mind [13:55] hah [13:55] popey: because your snapd is taking 21 seconds to do … nothing? 
:-) [13:55] sweet [13:55] not nothing, but all very quick things [13:55] so we're missing something :-) [13:56] popey: i've got a few things on my plate before that tho [13:56] so maybe tomorrow [13:56] kk [13:56] np [13:56] any more things on my plate and the carrots will start touching the fish === ricab is now known as ricab|lunch [13:57] Create a breakwater with some potatoes [13:57] the potatoes are at gravy carrying capacity [13:57] as is law [13:58] Chipaca: we initialize the interface manager, that might have slow bits [14:03] pedronis: perhaps we should inject a change (even a fake one) that accumulates the time of the system-key regeneration cost [14:03] pedronis: it would help with accountability [14:03] well, the new infra can measure [14:03] out of changes too [14:04] does the new infra measure non-task cost like this? if so that's great [14:18] Chipaca: if I do not wait by other means, I get `error: cannot communicate with server: Get http://localhost/v2/snaps/system/conf?keys=seed.loaded: dial unix /run/snapd.socket: connect: no such file or directory` [14:20] should I just loop until that call works? I want to avoid using anything internal that is subject to change [14:34] sergiusens: that's strange, because multi-user.target and cloud-final.target both wait for that [14:34] sergiusens: how are you getting 'in'? [14:35] Chipaca: lxc exec [14:35] HRM [14:35] sergiusens: does that ssh? [14:36] shouldn't "apt remove snapd" also remove all installed snaps ?? [14:36] Chipaca: no, it just runs whatever I tell it to run in that container [14:36] ogra: purge does [14:36] bah [14:36] ogra: dpkg -P snapd [14:36] Chipaca: if you say that waiting for cloud-init is enough, then I will do that [14:36] apt purge does it too ... thanks ... i thought a simple remove suffices [14:36] sergiusens: how do you wait for cloud-init?
[14:36] `apt remove --purge snapd` [14:37] Chipaca: I am not, but I can [14:37] sergiusens: if you can wait for cloud-init, can't you wait for snapd? [14:38] Chipaca: I think we just agreed on what to do [14:38] sergiusens: wait for snapd :) [14:53] pstolowski: there are new imacs today :) https://www.apple.com/pl/imac/ [14:54] zyga: i'm not looking. lalalla. [14:54] zyga: ok i lied; i looked [14:58] i'm going for a walk to wake up a bit, bbiab === ricab|lunch is now known as ricab [15:02] * zyga lunch & walk [15:02] jdstrand: 3 second PR https://github.com/snapcore/snapd/pull/6607 [15:02] * zyga really gone now [15:02] PR #6607: cmd: typedef mountinfo structures [15:31] * cachio lunch [15:44] zyga: I'm aware of the PR :) I glanced at it, it isn't 3 seconds, but it is fast. I have several things I'm trying to get to, and that is towards the top [15:47] jdstrand: how can it not be 3 seconds, it's a typedef :) [15:47] jdstrand: but I'm happy to see the final review [15:47] jdstrand: don't forget it's the 360 day [15:47] zyga: cause it is a bunch of them :) [15:47] that's always a priority [15:48] zyga: indeed, that is one of the things that was before it (I just completed that, fwiw) [15:48] I'm doing that now [15:48] just NaN people left :) [15:48] heh, yeah [15:50] brb need to reroute my booter === Luke_ is now known as Guest69457 [16:14] PR # closed: core-build#11, core-build#22, core-build#26, core-build#37 [16:15] PR # opened: core-build#11, core-build#22, core-build#26, core-build#37 [16:27] almost done with 360s [16:27] (and then the walk I promised myself) [16:34] 360 done, I'll go now because it's getting dark [16:34] ttyl [16:39] PR snapd#6623 opened: interfaces/builtin/opengl: allow access to Tegra X1 [16:54] Chipaca: mborzecki: we can close 6615 at this point, right? 
[16:54] pedronis: i'm still running that [16:55] pedronis: to confirm my hypothesis [16:55] (so far it's failed because opensuse is having a day) [16:56] ah [16:57] hi, does snapd do something special with /snap and snap dirs if the underlying fs is btrfs? [16:58] ackk: no [16:58] ackk: any weirdness you see is entirely of btrfs's making :-) [17:00] pedronis: … and again opensuse :-( [17:04] ackk: can you tell me more please? [17:06] Chipaca: push to make it manual if it's having a day [17:07] Chipaca: that's another area that we'll need to think about in our tests [17:08] zyga, so it's a bit complicated dev setup I have here. basically my disk is all btrfs, and I'm using "snap try" in a lxd container (which uses btrfs backend as well) [17:09] zyga, the issue I'm having which I suspect is related is that sometimes I rsync some files to the prime/ dir then "snap restart mysnap" and I get "no such file or directory" when trying to execute snap commands [17:09] but the files are there [17:09] Can you provide a small reproducer [17:09] I wonder if rsync replaces the directory [17:10] Making the bind mount die [17:10] zyga, it doesn't happen all the time.
what I noticed is that the /snap dir in the snap shows as a btrfs subvolume mounted, but that's not true [17:10] Can you provide a copy of proc self mountinfo please [17:10] zyga, specifically: /dev/mapper/ubuntu-root on /snap type btrfs (rw,noatime,ssd,space_cache,subvolid=1337,subvol=/lxd/storage-pools/default/containers/maas/rootfs/snap) [17:11] sure one sec [17:11] Typing from phone, sorry [17:11] I will be home in 10 minutes [17:12] zyga, http://paste.ubuntu.com/p/SXmVh3XYM9/ [17:12] zyga, if you're trying to reproduce in a lxd container on your phone, that's impressive :) [17:12] It is an Ubuntu phone ;-) [17:12] Kidding though [17:12] I will try at home [17:12] thanks [17:13] Do you have the broken setup still [17:13] Can you jump into lxd [17:13] And mountinfo there [17:14] Lastly jump into broken mount namespace (use nsenter -m/run/snapd/ns/snapname.mnt) [17:14] And provide that third mountinfo please [17:14] zyga, I don't have it anymore at the moment [17:14] Please wrap that into a bug report [17:14] Ok [17:14] Once you do please :-) [17:15] zyga, fwiw the snap-try'd dir also was showing as a btrfs mount [17:15] with a subvol= pointing at the dir (which is not really a subvol) [17:15] zyga, that paste is from within the lxd fwiw [17:16] zyga, ok I will, thanks [17:16] I've seen this happening a lot of times [17:16] zyga, if it matters, the snap is --devmode [17:16] Snap try is just a bind mount [17:17] Snapd sets that up based on the path given [17:17] So it would likely be whatever is powering the lxd container [17:17] My personal setup is not using btrfs [17:18] ETOOMUCHHASSLE [17:18] but perhaps I should dog food more [17:18] Is this on openSUSE? [17:20] no, ubuntu [17:20] I've been using btrfs since xenial here [17:21] Aha [17:22] zyga, ah yeah indeed if you bind-mount a dir it shows in subvol === pstolowski is now known as pstolowski|afk [17:30] ackk: so the bind mount subvol thing is a non-issue?
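The line ackk pasted is mount(8) output, and the subvol= option is just how btrfs reports the source path of any mount of a directory, bind mounts included, which is why the bind-mounted /snap "looks like" a subvolume. Pulling such a line apart programmatically, a hedged sketch (parses the "SOURCE on TARGET type FSTYPE (opts)" shape only, not the richer /proc/self/mountinfo format):

```go
package main

import (
	"fmt"
	"strings"
)

// MountEntry holds the pieces of one mount(8) output line:
// "SOURCE on TARGET type FSTYPE (OPT,OPT=VAL,...)".
type MountEntry struct {
	Source, Target, FSType string
	Options                map[string]string // value "" for flag-style options
}

func parseMountLine(line string) (*MountEntry, error) {
	var e MountEntry
	var opts string
	if _, err := fmt.Sscanf(line, "%s on %s type %s (%s", &e.Source, &e.Target, &e.FSType, &opts); err != nil {
		return nil, fmt.Errorf("cannot parse mount line: %v", err)
	}
	e.Options = map[string]string{}
	for _, o := range strings.Split(strings.TrimSuffix(opts, ")"), ",") {
		k, v, _ := strings.Cut(o, "=")
		e.Options[k] = v
	}
	return &e, nil
}

func main() {
	line := "/dev/mapper/ubuntu-root on /snap type btrfs (rw,noatime,ssd,space_cache,subvolid=1337,subvol=/lxd/storage-pools/default/containers/maas/rootfs/snap)"
	e, err := parseMountLine(line)
	if err != nil {
		panic(err)
	}
	// subvol= is filled in by btrfs even for plain bind mounts of a directory.
	fmt.Println(e.Target, e.FSType, "subvol =", e.Options["subvol"])
}
```

For the actual debugging zyga asks for, /proc/self/mountinfo is the better source: it additionally carries the "(deleted)" marker mentioned later, which would confirm rsync replacing the bind-mounted prime/ directory.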
[17:30] ackk: I wonder about the breakage [17:30] ackk: I suspect rsync is replacing the prime directory [17:30] ackk: if this happens, look if mountinfo says "deleted" [17:30] ackk: that would be a good bug report [17:32] PR # closed: snapcraft#1649, snapcraft#1875, snapcraft#2020, snapcraft#2135, snapcraft#2176, snapcraft#2229, snapcraft#2239, snapcraft#2398, snapcraft#2413, snapcraft#2433, snapcraft#2444, snapcraft#2445, snapcraft#2463, snapcraft#2470, snapcraft#2493, snapcraft#2495, snapcraft#2500, snapcraft#2504 [17:32] zyga, yeah it seems the bind mount just shows the dir as subvol, so I guess it's not an issue [17:32] ackk: but you said it was broken too [17:33] zyga, yeah sorry I mean, it's not something different that snapd is doing wrt the mountpoint [17:33] ok [17:33] zyga, yeah it seems binaries are not found although they're there [17:34] can you run "snap run --strace=--raw snap.app" [17:34] I wonder what really happens [17:35] PR # opened: snapcraft#1649, snapcraft#1875, snapcraft#2020, snapcraft#2135, snapcraft#2176, snapcraft#2229, snapcraft#2239, snapcraft#2398, snapcraft#2413, snapcraft#2433, snapcraft#2444, snapcraft#2445, snapcraft#2463, snapcraft#2470, snapcraft#2493, snapcraft#2495, snapcraft#2500, snapcraft#2504 [17:47] * zyga EODs [17:47] tty [17:47] ackk: please follow up tomorrow, I'd love to see that strace [17:51] #6620 could use a second review [17:51] PR #6620: tests/main/remodel: clean up before reverting the state [17:51] PR snapd#6624 opened: overlord/snapstate, store: retry less for auto-stuff [17:52] and, pedronis if you're around, 6624 ^ is nasty but nice [17:52] my wifi goes AWOL in 8 minutes, so my EOD is then [17:52] * Chipaca trying a new thing to see if he can get his sleep schedule sorted [17:54] * Chipaca thinks better about it, and adds two minutes to the scheduled wifi shutoff [18:00] ok, EOD, ttyl, hand, etc etc [18:31] niemeyer, hey [18:32] niemeyer, when you have a minute, could you please take a look at
https://github.com/snapcore/spread/pull/75 [18:32] PR spread#75: Make spread tests for spread project run on google backend [18:32] ? [18:46] mvo: I don't think the whole of 6624 is viable for 2.38 [18:56] pedronis, about the card to add tests for service.ssh.disable [18:56] cachio: yes? [18:57] pedronis, what should the gadget scenario be? [19:00] jdstrand, hey, is there any lp issue for the sru validation? [19:02] cachio: we need something like main/ubuntu-core-gadget-config-defaults/task.yaml at least [19:02] cachio: then we should explore if we can do something where we boot for real (not just snapd restart) with something made with ubuntu-image [19:03] cachio: see privmsg [19:04] pedronis, ok [19:05] pedronis, starting with this, thanks [19:05] pedronis: looking at 6624 [19:09] pedronis: yeah, it looks risky, let's discuss tomorrow, my brain is a bit fried [20:26] Issue # closed: core18#56, core18#86, core18#89, core18#117 [20:26] PR # closed: core18#43, core18#63, core18#72, core18#90, core18#98, core18#120 [20:27] Issue # opened: core18#56, core18#86, core18#89, core18#117 [20:27] PR # opened: core18#43, core18#63, core18#72, core18#90, core18#98, core18#120
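For the service.ssh.disable test cachio is planning, the gadget side of the scenario is a defaults stanza in gadget.yaml that snapd applies on first boot. A hedged sketch of one plausible shape (the exact nesting should be checked against the gadget.yaml documentation and the existing ubuntu-core-gadget-config-defaults test):

```yaml
# gadget.yaml (fragment): default configuration applied on first boot.
# The "system" key targets core/system configuration; a snap-id key
# would target a specific snap's configuration instead.
defaults:
  system:
    service:
      ssh:
        disable: true
```

After seeding, `snap get system service.ssh.disable` should report the default, which is the assertion such a spread task would make.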