[04:52] morning [05:04] Good morning [05:05] zyga: kids already at school, yay ;) [05:15] mborzecki: oldest just went to school [05:15] mborzecki: middle one finished breakfast [05:15] mborzecki: youngest one plays with her toys :) [05:15] mborzecki: our son is 170 now [05:15] he's not even 13 [05:15] mborzecki: easy landing for the morning https://github.com/snapcore/snapd/pull/7218#pullrequestreview-272716535 [05:16] mborzecki: I added the extra checks you asked for [05:16] PR #7218: tests: measure behavior of the device cgroup [05:16] and it's green :) [05:17] woot, thank you [05:17] another easy-ish one is https://github.com/snapcore/snapd/pull/7392 [05:17] PR #7392: tests: rewrite "retry" command as retry-tool [05:17] needs 2nd +1 [05:17] also green [05:17] PR snapd#7218 closed: tests: measure behavior of the device cgroup [05:29] mborzecki: one more kid to walk to school, ttyl [06:00] back now [06:30] mvo: morning [06:32] hey mborzecki [06:36] good morning mvo [06:36] mvo: one more review if I can grab your attention :) [06:36] https://github.com/snapcore/snapd/pull/7392 [06:36] PR #7392: tests: rewrite "retry" command as retry-tool [06:37] mborzecki: let's start chopping your classic mount namespace PR as we discussed in private [06:37] it will help with the review and will help with figuring out what is wrong [06:37] hey zyga [06:38] :-) [06:39] mvo: do you know who is the upstream of command-not-found nowadays? [06:42] zyga: foundations, why? [06:42] mvo: I have some patches [06:47] mvo: zyga: #7393 really simple [06:47] PR snapd#7393 opened: cmd/libsnap-confine-private, cmd/s-c: use constants for snap/instance name lengths [06:47] PR #7393: cmd/libsnap-confine-private, cmd/s-c: use constants for snap/instance name lengths [06:48] mborzecki: +1 [06:48] thx [06:58] PR snapd#7394 opened: tests: move debug section after restore === pstolowski|afk is now known as pstolowski [07:05] morning [07:05] good morning pawel!
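A side note on the retry-tool under review in #7392: at its core it is just a bounded retry loop around an arbitrary command. A minimal sketch of that idea — the `retry attempts delay cmd...` interface here is a simplification for illustration, not the actual tool's CLI:

```shell
#!/bin/sh
# Minimal sketch of a bounded retry loop in the spirit of retry-tool
# (PR #7392). The interface (attempts, delay, then the command) is a
# simplification for illustration, not the real tool's CLI.
retry() {
    attempts="$1"; delay="$2"; shift 2
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$attempts" ]; then
            echo "retry: command failed after $attempts attempts" >&2
            return 1
        fi
        sleep "$delay"
    done
}

retry 3 1 true && echo "succeeded"   # prints "succeeded" immediately
```

The real tool lives in the snapd tests tree; the point of the rewrite discussed above is to turn the ad-hoc `retry` shell function into a shared `retry-tool` helper for spread tests.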
[07:11] pstolowski: hi, what's the status of #7277 ? [07:12] PR #7277: [RFC] overlord/snapstate: fix undo on firstboot seeding [07:12] * zyga goes to read https://github.com/snapcore/snapd/pull/7381 [07:12] PR #7381: seed,image,o/devicestate: extract seed loading to seed/seed16.go [07:13] great for rainy morning focus :) [07:23] pedronis: hi, i've addressed your feedback but not yet happy about some aspects, i need to revisit this PR, i haven't looked at it for a while [07:26] pstolowski: pedronis: hi [07:26] pstolowski: it's the last bit of https://trello.com/c/9Clql0BE/299-first-boot-fixes , considering that the initramfs/grub stuff is more nice to have [07:26] and also not great time given uc20 work [07:28] pedronis: ack, going to look at it today [07:32] is it just me or is github not responding very well now? [07:38] PR snapd#7393 closed: cmd/libsnap-confine-private, cmd/s-c: use constants for snap/instance name lengths [07:43] * zyga short break [07:49] PR snapd#7394 closed: tests: move debug section after restore [08:38] zyga: ping [08:38] hey [08:39] zyga: hiya [08:40] zyga: what's the purpose of osutil.MountOptsToFlags ? [08:40] it seems very sensible, but it's not used -- instead, the unchecked MountOptsToCommonFlags is used [08:41] let me look [08:41] k [08:41] not going to change it, but it caught my attention [08:41] (i'm fixing compilation on darwin) [08:42] ah, [08:42] I see it now [08:42] it's the layer on top of common flags that understands x-snapd and filters them out but errors on unknown flags [08:42] let me look at that [08:42] thanks! [08:43] zyga: np! 
tbh it looks like the Common one should be private, if i understand the difference [08:44] if you look at how it is used, common is the one being used because we acknowledge there are flags we don't understand that we just pass to mount [08:44] PR snapd#7395 opened: cmd/snap-confine: apply global mount namespace initialization for confined and classic snaps [08:44] so I think the strict version is just unused and could be removed [08:50] zyga: any thoughts on https://bugs.launchpad.net/snapd/+bug/1841137 ? [08:50] Bug #1841137: /dev/loopX devices left around for removed snap revisions [08:53] pstolowski: oooh [08:53] interesting [08:54] zyga: garbage collection gone wrong? [08:54] there's no garbage collection, it's explicitly managed malloc/free style [08:54] the deleted aspect is super puzzling! [08:55] zyga: i mean gc in the sense of removing old revisions automatically [08:55] pstolowski: nitpick: triaged literally means: we know the severity [08:55] setting it to triaged without setting one is weird [08:55] pstolowski: normally mount does that [08:55] pstolowski: libmount removes loopback devices [08:56] agreed about triaged [08:56] since I just rebooted [08:56] if you don't know the severity it's confirmed, not triaged :-) [08:56] can everyone with a longer uptime run: sudo losetup | grep deleted [08:56] and see if it shows anything [08:56] nothing right now but i have seen them before [08:57] that's when they appear in unity [08:57] i didn't know it was noteworthy :-) [08:57] that's super curious [08:57] I wonder if that's another low-level bug in kernel/libmount/systemd [08:57] normally systemd doesn't handle those so more likely in libmount [08:57] Chipaca, zyga noted, thanks :) [08:57] what snap is it? here it's always been core when i've looked [08:58] mvo, pedronis, mborzecki ^ perhaps you?
(sudo losetup | grep deleted) [08:58] Chipaca: btw thanks for updating yesterday's bug ;) [08:58] Chipaca: those that refresh [08:58] Chipaca: I will look at adding a test for this though [08:58] Chipaca: refresh three times [08:58] Chipaca: maybe that's exactly what I saw in the core 18 leak detector [08:58] pstolowski: :) [08:58] I saw deleted revisions [08:58] Chipaca: there are more snaps than just core there [08:58] so maybe it's a real bug that just doesn't get measured because refreshes are not 10 a day [08:58] pstolowski: thank you! [08:59] zyga: a bunch for core [08:59] can you check the changes for those if you still have them [08:59] pedronis: I assume you have high uptime [08:59] but that's great, it's probably easy to reproduce then [08:59] I doubt it, they are actually old revisions [09:00] zyga: none here it seems [09:00] < 7700 [09:00] thanks guys! [09:00] (just ran your command) [09:00] thank you [09:00] pstolowski: thanks for sharing, I was working on this anyway without realizing it is not limited to our test suite [09:01] zyga: https://pastebin.ubuntu.com/p/wwXft5wyDg/ [09:01] pedronis: 18.04? [09:01] yes [09:01] perfect, thank you [09:03] zyga: isn't this because of preserved mount namespaces? [09:03] no [09:03] well [09:03] I don't think so [09:03] it's more involved [09:03] the most surprising part is the deleted element [09:03] but let me write a test first [09:04] zyga: deleted seems fine to me, iirc association of a file with a loopback device is done using a file descriptor [09:04] mborzecki: it's not fine [09:04] IMO [09:04] but let's check out little by little [09:05] it only happens when the parent dentry is removed AFAIR [09:05] zyga: is this the highest priority thing on your plate?
[09:05] pedronis: in terms of wrapping up, it's what I was doing anyway but I was chasing logs from runs, this gives me an idea for a test to write, maybe it will uncover the bug more directly [09:07] mborzecki: they are not mounted anywhere [09:08] though there are two separate sides here, losetup deleted vs mountinfo deleted [09:08] Chipaca: do you know what's the status of https://bugs.launchpad.net/snapd/+bug/1804245 ? [09:08] Bug #1804245: empty response from store when throttled allows switch to nonexistent track [09:09] pstolowski: I think i fixed that one :-) updated the bug to reflect [09:09] good :) [09:10] i just need to confirm [09:11] i was wrong i think [09:11] digging [09:13] zyga: this can be closed? https://bugs.launchpad.net/snapd/+bug/1805866 [09:13] Bug #1805866: On core18 system core18 snap was mounted after snapd had started [09:14] pstolowski: no, why? [09:14] it's a real bug IMO [09:16] zyga: ok, it's very old, i assumed it was fixed as otherwise we would probably see it more? [09:16] pstolowski: we don't really boot that often with services, not on classic systems [09:16] ok, fair enough [09:18] * pstolowski is stepping down from triaging job for now [09:23] zyga: is there anything else needed to get https://github.com/snapcore/snapd/pull/6767 merged? [09:23] PR #6767: wrappers: allow snaps to install icon theme icons [09:24] jamesh: no, just need to finish my review [09:24] zyga: okay, thanks. [09:27] pstolowski: confirmed, fixed in 6ce906f7df2 [09:27] Chipaca: ty! [09:27] updating the bug [09:40] PR snapd#7396 opened: tests: don't guess in is_classic_confinement_supported [09:52] pstolowski: I replied to https://github.com/snapcore/snapd/pull/7392#discussion_r320176487 -- let me know if that's okay in your eyes [09:52] PR #7392: tests: rewrite "retry" command as retry-tool [09:52] pstolowski: I'm happy to have a follow up [09:57] zyga: ty, replied [09:57] woot, thanks.
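The `sudo losetup | grep deleted` check zyga asked people to run above can be wrapped into a small helper. A sketch, with the parsing split out so it can be exercised on canned output — the sample lines and snap revisions below are made up, and the exact `losetup` column layout varies across util-linux versions:

```shell
#!/bin/sh
# Sketch of a check for leaked loop devices (LP#1841137): list loop
# devices whose backing file has been removed. Assumes losetup marks
# them with "(deleted)", as seen in the bug report; the sample lines
# and snap revisions below are made up.
find_deleted_loops() {
    awk '/\(deleted\)/ { sub(/:$/, "", $1); print $1 }'
}

# On a live system: sudo losetup | find_deleted_loops
printf '%s\n' \
    '/dev/loop8: []: (/var/lib/snapd/snaps/core_1234.snap (deleted))' \
    '/dev/loop9: []: (/var/lib/snapd/snaps/core_5678.snap)' \
    | find_deleted_loops   # prints /dev/loop8
```

This only reports the losetup side; as noted above, the mountinfo "deleted" entries are a separate question.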
[09:58] PR snapd#7392 closed: tests: rewrite "retry" command as retry-tool [10:08] is there anything we could retry that would help us? [10:08] snap install from the store perhaps [10:08] any ideas? [10:09] though that retries internally so perhaps no [10:09] pstolowski: I cannot immediately reproduce the error with leaking loopback device [10:09] I was wondering if being base is somehow more special [10:10] but then the bug report did include telegram-desktop [10:10] it may also be a more recent-ish environment (a rolling distro) [10:10] I'll run my test on 16.04 and look at finishing some of reviews instead [10:16] pstolowski: ^ quiet in 7397 [10:16] PR snapd#7397 opened: tests: add --quiet switch to retry-tool [10:17] zyga: right, i couldn't repro loopback issue either. perhaps we need to observe our systems more closely for some time [10:17] zyga: thanks for 7397! [10:17] it means we are missing something though :) [10:17] it's going to be good when we finally understand what it was [10:21] pstolowski: I updated https://github.com/snapcore/snapd/pull/7256 [10:21] PR #7256: tests: adding retry command and use it to delete $XDG directory [10:23] PR snapd#7398 opened: boot, etc: simplify BootParticipant (etc) usage [10:24] PR snapd#7399 opened: tests: verify retry-tool not retrying missing commands [10:26] pstolowski: replied on https://github.com/snapcore/snapd/pull/7397#discussion_r320200634 [10:26] PR #7397: tests: add --quiet switch to retry-tool === pedronis_ is now known as pedronis [10:31] pstolowski: zyga: when that happens, aren't those devices listed in nautilus sidebar too?
[10:31] mborzecki: I have *never* seen that in my sidebar on TW [10:32] but I did see some wonkiness in unity a while ago while using a 16.04 vm [10:32] zyga: i have, i recall asking someone during last sprint about that :) [10:33] mmmm [10:33] more interesting to see what causes it [10:35] PR snapd#7400 opened: cmd/snap-update-ns: don't propagate detaching changes [10:36] mborzecki: speaking of mounting https://github.com/snapcore/snapd/pull/7400 [10:36] PR #7400: cmd/snap-update-ns: don't propagate detaching changes [10:36] I'm reviving that MS_SHARED bugfix but this is something I found in that branch [10:37] it's a bit unusual in the sense that it is a fix for something that was hidden by lack of sharing [10:37] but I proposed it ahead of actual sharing [10:37] I explain why in the PR [10:37] I should break for coffee and then really return to reviews or I will never get them done [10:38] * Chipaca takes a break [10:42] mborzecki, pstolowski: https://github.com/snapcore/snapd/pull/7397#discussion_r320206281 [10:42] mborzecki: i think the reporter of that bug meant that when he said gnome file manager [10:42] PR #7397: tests: add --quiet switch to retry-tool [10:43] pstolowski: nautilus? [10:43] mborzecki: on my 18.04 box i only have cli so can't verify; on full-blown 19.04 i couldn't reproduce in my short tests [10:43] mborzecki: yes, nautilus [10:44] woo, it's called files now :/ [10:44] files :) [10:44] nice [10:44] and i keep on typing nau..
in the search [10:44] until it becomes folders ;) [10:44] after coffee / reviews I'll change my test to refresh bases while running various apps [10:44] and see what happens then [10:46] pstolowski: trivial quickie https://github.com/snapcore/snapd/pull/7399 [10:46] PR #7399: tests: verify retry-tool not retrying missing commands [10:46] this is also pretty obvious but I was unsure if there are any side effects [10:46] https://github.com/snapcore/snapd/pull/7396 [10:46] PR #7396: tests: don't guess in is_classic_confinement_supported [10:46] though it feels like the original code was just buggy anyway [10:47] mborzecki: is my reasoning correct? https://github.com/snapcore/snapd/pull/7396#discussion_r320208075 [10:47] PR #7396: tests: don't guess in is_classic_confinement_supported [10:48] mvo: can you please finish the review of https://github.com/snapcore/snapd/pull/7362 [10:48] PR #7362: cmd: unify die() across C programs [11:00] * zyga breaks [11:08] good idea [11:08] * pstolowski lunch [11:39] pedronis: #7092 seems to be happy [11:39] PR #7092: packaging: use snapd type and snapcraft 3.x <⛔ Blocked> [11:44] * Chipaca goes for lunch [11:48] 6705 needs a second review now [11:49] zyga: will do after lunch [11:54] zyga: updated #7387 [11:54] PR #7387: packaging/fedora, tests/lib/prepare-restore: helper tool for packing sources for RPM [11:57] Chipaca: comment on #7398 [11:57] PR #7398: boot, etc: simplify BootParticipant (etc) usage [11:57] *commented [12:13] zyga: duh, mounts do not look nice when there's more snaps installed https://paste.ubuntu.com/p/BHvpFx7P3X/ [12:23] zyga: ok, updated #7302 too [12:23] PR #7302: cmd/snap-confine: add support for parallel instances of classic snaps <⛔ Blocked> [12:36] PR pc-amd64-gadget#19 closed: UC20 spike changes [12:37] mvo, pedronis I need to skip the standup, perhaps arriving late but not sure, forgot I had a meeting at the school [12:37] mborzecki: seeing double :) [12:37] mborzecki: I have an idea about that
[12:37] mborzecki: but in the evening after round of reviews [12:37] mborzecki: for tomorrow, ok? [12:58] jdstrand, i have a problem ... trying to use a DHT11 sensor (temp/humidity) and output to an OLED display on a Pi3 with https://github.com/adafruit/Adafruit_Python_DHT ... i end up with https://paste.ubuntu.com/p/Z7PDBTCjkb/ ... could we add these paths to "hardware-observe" ? [12:59] pedronis: interesting comments [13:17] I'm using Docker to run snapcraft inside Travis. Is there anything I need to do to switch to using core18? [13:17] Currently I'm doing: docker run -v $(pwd):$(pwd) -t snapcore/snapcraft sh -c "apt-get update -qq && apt-get install -qq git && cd $(pwd) && snapcraft" [13:17] I'm wondering if I need to use a different Docker image based on Bionic? [13:17] rbasak: use core18 for what? [13:17] does snapcraft in docker not use multipass? [13:18] * Chipaca doesn't know [13:18] i imagine not, in which case yeah you'd need a bionic image [13:19] Chipaca: to base my snap on core18 I mean. Ie. "base: core18" in snapcraft.yaml AIUI. Yes - that's what I was thinking. [13:19] Is there a published snapcraft-that-uses-bionic image? [13:26] rbasak: I don't know. Maybe sergiusens does? [13:51] cmatsuoka: can this land? https://github.com/snapcore/snapd/pull/7266 [13:51] PR #7266: recovery: update run mode variable name [13:52] zyga: on the uc20 branch, yes [13:52] zyga: thanks [14:00] cachio: hey, I pushed into the XDG branch of yours [14:00] cachio: I also opened something small for your consideration: https://github.com/snapcore/snapd/pull/7396 [14:00] PR #7396: tests: don't guess in is_classic_confinement_supported [14:00] zyga, nice, thanks [14:00] ogra: it is likely fine. they are binary files. add to the list for the next round. thanks! [14:01] added* [14:01] jdstrand: hello :) [14:01] zyga: hi :) [14:01] jdstrand, thanks ! 
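rbasak's Travis/Docker question above is answered just below with the snapcraft-snap-plus-lxd approach; in `.travis.yml` terms that boils down to something like the following. This is a sketch loosely modeled on the snapcrafters examples — the dist, channel, and flag choices here are assumptions to verify against the linked sdlpop repository:

```yaml
# Sketch of a .travis.yml that builds with the snapcraft snap and lxd
# instead of the snapcore/snapcraft docker image. Details (dist,
# channels, flags) are assumptions; see the snapcrafters sdlpop repo
# for a known-good configuration.
os: linux
dist: bionic
language: shell
install:
  - sudo snap install snapcraft --classic
  - sudo snap install lxd
  - sudo lxd init --auto
script:
  - sudo snapcraft --use-lxd
```

Building inside an lxd container this way lets `base: core18` snaps build on a bionic host without rolling a custom Docker image.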
[14:02] rbasak: there is not, you will need to roll your own docker image [14:03] rbasak: if you're building on travis, it is probably much easier for you to just build with lxd and call snapcraft from your job definition directly, travis supports running snapcraft as a snap and lxd as a snap [14:06] ijohnson: OK, thanks. [14:06] ijohnson: do you know of any examples? [14:06] rbasak, sure one minute [14:07] rbasak: take a look at one of the snapcrafters snaps at https://github.com/snapcrafters/sdlpop/blob/master/.travis.yml [14:08] ijohnson: that's great. Thank you! [14:13] zyga, #7256 needs a +1 [14:13] PR #7256: tests: adding retry command and use it to delete $XDG directory [14:13] pstolowski, could you please take a quick look to #7256 [14:15] cachio: yes [14:15] pstolowski, tx [14:16] cachio: looks good but as i said in earlier comment i think we should wait for it to fail again and see if #7336 shows anything interesting [14:16] PR #7336: tests: add debug section to interfaces-contacts-service [14:22] jdstrand: hi are you around? I have a question :) {"docker": {"allow-auto-connection": "true"}} <- this implies allow-installation, right? so a snap with this in the plugs section should automatically pass review, not be held with human review required due to 'allow-installation' constraint (bool) declaration-snap-v2_plugs_installation (docker, docker) [14:23] pstolowski, ok, I'll try to force the error [14:24] pstolowski, otherwise it will take too much time [14:24] cachio: force how? [14:24] pstolowski, retrying execution [14:24] until it fails [14:24] cachio: ok [14:25] and praying [14:25] heheehe [14:25] heh [14:25] roadmr: it does not imply allow-installation [14:26] jdstrand: ah, ok. That makes sense [14:26] it does for snapd [14:26] roadmr: in snapd, auto-connection, connection and installation are all separate from each other. the review-tools only look at auto-connection and installation for when to manual review [14:26] jdstrand: did this change?
did it imply installation? [14:26] it doesn't respect the conventions though [14:27] roadmr: nothing has changed for months [14:27] jdstrand: (context, someone is saying that a snap for which they have the above used to pass automated review but then started failing [14:27] ) [14:27] roadmr: right, months ago it was discovered that we weren't prompting for manual review for the docker interface when we should have. it changed then [14:28] roadmr: there were a handful of snaps that were affected. they must not have uploaded anything in a while [14:28] jdstrand: like, 2 months ago? we narrowed the transition down to "2019-06-17 13:55 - 2 months, 2 weeks ago" [14:28] jdstrand: perfect, that explains it :) do you mind if I give them allow-installation? this is a snap in a brand store and they'd been using docker previously [14:29] roadmr: 693121ef4ad15a0642967d5f7af69e52c1b8969d. Feb 15 [14:29] oh that's more than 2 months ago :) [14:31] pedronis: I guess it makes sense that allow-auto-connection implies installation as a practical matter, but that is not how we initially designed the feature. do you recall when that changed? [14:32] roadmr, the date would be relevant for our bump of review tools though, not necessarily when the review tools changed [14:32] (hi all!) [14:32] jdstrand: it's always been like this [14:33] pedronis: oh, you mean because if it has nothing to say about installation, it is allowed [14:33] yes [14:33] nessita: indeed, but the gap is still puzzling - we certainly updated the tools between February and June, so it's weird this only started happening in June if the change was in the tools since Feb [14:33] roadmr, right [14:34] roadmr, have you, by any chance, downloaded the two revisions to compare if they changed anything in their manifest? [14:35] nessita: nope [14:35] pedronis: right. 
that is different than auto-connection affecting installation if something does have something to say about it, which is what I meant [14:35] anyhoo, yes [14:38] roadmr: can you paste me their snap.yaml? [14:38] jdstrand: sure, incoming [14:39] jdstrand: https://pastebin.canonical.com/p/WJyNTTKQ8X/ [14:39] hi, testing with the confined maas snap and snapd 2.41, I'm getting https://paste.ubuntu.com/p/cJxN7B4KpP/ all of a sudden, what could it be related to? [14:40] ackk: woah [14:40] ackk: is it /root? [14:40] zyga, what is? [14:40] I think it might be confused by root :) [14:40] ackk: there's a check that $HOME is not something crazy [14:40] ackk: but it may not capture /root [14:40] zyga, yeah "maas --help" works but "sudo maas --help" doesn't [14:40] Chipaca: ^ is that us? [14:41] Chipaca: if so that's a regression [14:41] * jdstrand notes that when to manual review is a different question than when to apply at install time [14:41] mvo: ^ [14:41] zyga, oddly, I had just run "sudo maas init" a little before, and that must have worked [14:41] huh [14:41] that's more mysterious [14:41] lemme try a reproducer [14:41] zyga: thanks, in a meeting [14:41] ack [14:41] ackk: what distro? [14:41] ackk: thank you [14:41] Chipaca, bionic [14:41] zyga: but thanks for the heads up, please keep me updated about your findings [14:42] sudo's path should have /snap/bin on it [14:42] might be the new systemd generator? [14:42] (some distros have it in bash but not in sudo which is weird) [14:42] ugh [14:42] https://github.com/snapcore/snapd/blob/fa465e97df3b159f18c77786cdc540d2cdcd29d5/cmd/snap-confine/user-support.c#L44 [14:42] which went in with 2.41 but only in the debs [14:42] Chipaca, I'm pretty sure cloud images don't [14:42] ackk: don't what?
[14:42] Chipaca, I'm in a lxd [14:43] don't have /snap/bin in path [14:43] at least, not for root [14:43] oh actually it seems they do now [14:43] ackk: in 'sudo visudo', you should see /snap/bin in 'secure_path' [14:43] I just shut down my desktop [14:43] but https://github.com/snapcore/snapd/commit/634a17c85718f15babe9fbe3cd85a36225bcfdd3 [14:44] it could be a good attempt to add a /root home directory there to see what happens (in spread test) [14:44] ackk: maybe you installed snapd, and didn't re-login? [14:44] Chipaca, it's there [14:44] Chipaca, nope, that's not a fresh container [14:44] I'm testing now in a fresh one, see if I can reproduce [14:46] roadmr: because of that ^ the tools look at the slots side for both *for slots that specify an installation constraint in the base decl*. the base decl convention for docker here is that a slot implementation may provide a so-called superprivileged interface but another impl of the same slot might not. again, change was intentional [14:47] jdstrand: ah ok - got it [14:47] Chipaca, I did relogin just now fwiw, no difference [14:48] roadmr: oh, and the review-tools error is correct for that snap.yaml [14:48] s/correct/expected/ [14:49] awesome :) yes, just looks like behavior changed here and we didn't update the PD - I'll add allow-installation for them [14:49] (behavior changed expectedly as you explained) [14:49] cool, thanks [14:49] thank you for explaining :) [14:50] I mean, it was pretty clear we needed to add allow-installation, the message says as much and fwiw I usually add both anyway - just wanted to understand why this changed [15:06] roadmr: speaking of the review-tools, can you pull 20190829-1435UTC? not urgent. only meaningful change for the store is sr_lint.py: improve error message with invalid Icon paths [15:06] jdstrand: sure thing!
[15:07] thanks :) [15:19] jdstrand: Hi, I'd be glad if you could weigh in on #7366 in the near future :) Am still in the office for a few hours with a little spare time for possible changes [15:19] PR #7366: interfaces/gpg-keys: Allow access to gpg-agent and creation of lockfiles [15:20] * cachio lunch [15:21] mvo: snapcraft autopkg test regressions in eoan [15:23] ppd1990: yes, it is on my list for today. note there is some history that I need to dig up for why it is the way it is [15:29] doko: hey, thanks for the heads up - sergiusens will take care of it I'm sure! [15:32] mvo: well, it would be nice if the snappy team would notice these regressions on their own by default ... [15:34] doko: right, I hear you and we try to be responsive. snapcraft is a different team though, [15:40] sergiusens: ^ see comments from doko [15:43] second review for 6705 would be great btw [15:59] ackk, zyga, Chipaca still in meetings - is there a tl;dr on this potential regression? [16:00] mvo: no, I took a break but I just stopped and writing a quick regression test [16:00] mvo, I don't have a clean reproducer but it seems related to the fact that I was ending up running snapcraft --destructive-mode with the snap mounted from the prime dir, so maybe that ends up confusing snapd? [16:00] ta [16:01] thanks! please keep me updated, if it turns out to be a regression I want to know as soon as possible [16:01] mvo: understood, sorry for lagging [16:03] zyga: no worries! it was not meant as being pushy :) [16:04] What's the production status of core18? https://forum.snapcraft.io/t/ubuntu-core-18/5169 suggests it's experimental? [16:04] rbasak: that's a very old post [16:04] It's the second hit for "snap core18" on Google. [16:05] The first hit goes to the store page which doesn't answer my question. [16:05] mvo: I will look at #6705 in my morning, slightly surprised by Chipaca's comments on it [16:05] I can't find any announcement that says that core18 is production now? 
[15:05] PR #6705: bootloader: little kernel support [15:06] I will go for a run now and stop surprising people [15:06] Chipaca: enjoy! [15:06] Chipaca: +1 [15:07] pedronis: I will fix the things that john mentioned, it's a good point [15:07] (good points) [15:07] ackk: hey [15:07] still around? [15:08] ackk: what does "sudo env" say in that machine where it happened? [15:08] ackk: feel free to paste in private if sensitive [15:09] rbasak: https://ubuntu.com/blog/ubuntu-core-18-released-for-secure-reliable-iot-devices [15:09] top of https://ubuntu.com/core too [15:09] cwayne: great. Thanks! [15:09] np :) [15:10] PR snapd#7399 closed: tests: verify retry-tool not retrying missing commands [15:11] zyga, https://paste.ubuntu.com/p/gDNq93zrPC/ [15:11] ackk: can you walk me through a case [15:12] ackk: this is inside a container, correct? [15:12] zyga, yes [15:12] ackk: how do you access the host of the container, via ssh? [15:12] ackk: when you access the container, how do you "jump into" it, via lxc shell? [15:13] zyga, via ssh, but I have my home bind-mounted in the container, so I ssh with the same user [15:13] ackk: bind mounted from the host of the container? [15:13] ackk: so you ssh into (guest) from (host) or from a different machine? [15:13] zyga, https://paste.ubuntu.com/p/XNq6w2sVGQ/ [15:14] zyga, no, ssh from my laptop to the lxd on it [15:14] ackk: so (3rd computer) -> (guest) directly, via ssh? [15:16] zyga, why 3rd computer? the containers are on my laptop as well [15:16] ah, I understand now [15:17] so you ssh from the laptop, which is the host, directly into the container [15:17] zyga, correct [15:17] ackk: and inside the container, so in (guest), the user ack exists because it was added manually? [15:18] zyga, yes [15:18] ackk: and when you run sudo maas, snap-confine complains and quits [15:19] ackk: this is on 2.41 from the archive?
[16:19] zyga, 2.41 snapd snap [16:19] from beta [16:19] aha, the snapd snap [16:19] ok, thank you [16:19] let me check stuff [16:19] can you run one simple test [16:19] snap install snapd-hacker-toolbelt [16:19] and see if that works [16:19] it's just a busybox snap [16:20] zyga, so, also the issue doesn't happen right away as I mentioned before. I have the snap installed via snap try, but was then running a sync target which ended up calling snapcraft again [16:20] zyga, sure [16:20] zyga, just run it? [16:20] yeah [16:20] so maas is in try mode? [16:21] zyga, yes [16:21] what is a sync target? [16:21] zyga, (that snap works fwiw) [16:22] ackk: can you try to unsquashfs snapd-hacker-toolbelt, edit it anyway you like (it's a static binary), [16:22] ackk: install it in try mode [16:22] zyga, so we have a makefile target that updates the prime/ dir based on the tree, so you can change code and have it copied. but that because of a recent change ended up calling "snapcraft --destructive-mode prime" again [16:22] and see if you can reproduce the issue somehow [16:22] zyga, it seems after that the snap had the issue [16:23] so there are two aspects [16:23] the message we saw indicates that we failed [16:23] the details only change the error message printed [16:23] so even if /root was handled in that logic, it would at most affect the failure message [16:24] the reason we saw that particular message is that we got EROFS [16:24] so we attempted to create a directory but got an EROFS error from the system [16:24] we can strace that to see what exactly failed [16:24] ackk: actually [16:24] ackk: if you can still reproduce it [16:24] please set SNAP_CONFINE_DEBUG=yes [16:24] and run that command again [16:24] zyga, where in the calling shell? 
that will tell us all the details without the need of strace [16:24] ackk: after sudo [16:25] sudo SNAP_CONFINE_DEBUG=yes maas --help [16:25] oh ok, I don't have it right now but will try to reproduce tomorrow [16:25] ackk: so whatever really happened, it seems something has switched your home directory, or /root to read-only mode === pstolowski is now known as pstolowski|afk [16:55] PR snapcraft#2695 closed: spread tests: install package marker into ament index [17:47] roadmr: sorry, can you pull 20190903-1746UTC? not terribly urgent, just an update to an override for core20 [17:48] xnox: fyi ^ [18:13] jdstrand: sure, here it goes [19:31] roadmr: thanks! [20:05] doko: mvo I did an upload last week and it was all green [20:08] PR snapcraft#2697 opened: Neon extension [20:18] PR snapd#7401 opened: tests: add unstable stage for travis execution [22:10] yootoobe: https://youtu.be/cWlYe0CE2iU [22:15] PR snapd#7402 opened: daemon, client, cmd/snap: include architecture in 'snap version'
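Following up on the read-only-home theory zyga lands on above: snap-confine attempts to create a directory under $HOME on startup (the user-support.c check linked earlier is in that area), and the EROFS he mentions means that mkdir failed. A hypothetical probe for the same condition, independent of snap-confine itself:

```shell
#!/bin/sh
# Hypothetical probe for the failure mode discussed above: snap-confine
# creates $HOME/snap on startup, so a read-only or otherwise unwritable
# home directory for root (EROFS/EACCES) is enough to make it bail out.
check_snap_dir() {
    home="$1"
    if mkdir -p "$home/snap" 2>/dev/null; then
        echo "ok: can create $home/snap"
    else
        echo "fail: cannot create $home/snap (read-only or unwritable?)"
        return 1
    fi
}

# For ackk's sudo case the interesting run (as root) would be:
#   check_snap_dir /root
check_snap_dir "${TMPDIR:-/tmp}/probe-home-$$"
```

Running it as root against /root inside the affected lxd container, alongside `sudo SNAP_CONFINE_DEBUG=yes maas --help`, would confirm or rule out the read-only-home explanation.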