[00:02] hey cmatsuoka what's up
[00:02] * ijohnson is mostly there he thinks
[00:06] hmm I forgot what I was going to say
[00:06] anyway
[00:07] I see many tests failing because pi-kernel is not available, do you know what happened there?
[00:10] cmatsuoka: hmm could be a store outage
[00:10] cmatsuoka: I can see the pi-kernel on my arm64 machine now
[00:11] ijohnson: could you try to download the snap?
[00:11] let me try
[00:12] cmatsuoka: you need to specify a channel, there is no latest channel on the pi-kernel
[00:12] ah I see
[00:12] if you do `snap download pi-kernel --channel=20/edge` it works
[00:12] but just `snap download pi-kernel` won't work
[00:13] but did that change recently?
[00:13] cmatsuoka: hmm don't think so
[00:14] because some old tests started to fail
[00:14] hmm did you check the full snapd service logs?
[00:14] the spread tests will always do verbose HTTP logging when they talk to the store
[00:15] so you should see some 400s or 500s if the store was having a fit
[00:15] let me see...
[00:18] ijohnson: https://github.com/snapcore/snapd/pull/8495/checks?check_run_id=587219964
[00:18] PR #8495: cmd/snap-bootstrap: switch to a 64-byte key for unlocking
[00:19] ijohnson: maybe you can see something there that could tell us what's happening
[00:20] oh it's in prepare-image
[00:20] cmatsuoka: yeah, looks like the store having a fit
[00:20] https://www.irccloud.com/pastebin/6ZKFXKKS/
[00:20] the store responded with 429
[00:20] PR snapcraft#3037 closed: plugins: introduce v2.CMakePlugin
[00:21] too many requests
[00:21] pft, ok
[00:22] do you know when it resets the request counter?
[00:23] cmatsuoka: just restart the test, it's usually temporary
[00:24] ok good, thanks, will do that
[00:24] np
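An aside for readers of this log: HTTP 429 ("too many requests") means the store is rate-limiting the client, and restarting the test is effectively a retry with backoff. A minimal Go sketch of that pattern, honouring a numeric Retry-After header when present; the function, retry policy, and endpoint URL are illustrative assumptions, not snapd's actual httputil code:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetry retries a GET while the server answers 429 Too Many
// Requests, backing off between attempts and honouring a numeric
// Retry-After header when the server sends one.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		// rate limited: discard this response and wait before retrying
		resp.Body.Close()
		delay := time.Duration(1<<uint(i)) * time.Second // exponential backoff
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, perr := strconv.Atoi(s); perr == nil {
				delay = time.Duration(secs) * time.Second
			}
		}
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("still rate-limited after %d attempts", attempts)
}

func main() {
	// example URL only; the real store API needs extra headers
	resp, err := getWithRetry("https://api.snapcraft.io/v2/snaps/info/pi-kernel", 3)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```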
[01:08] PR snapcraft#3042 opened: repo: minor debug log tweaks
[01:47] PR snapcraft#3034 closed: repo: add initialize_snapcraft_defaults() to set default source lists
[01:50] PR snapcraft#3040 closed: V2 autotools plugin
[02:17] PR snapcraft#3042 closed: repo: minor debug log tweaks
[03:31] cjp256: I've updated PR 3031 to use a custom parameter type. Let's see how that goes.
[03:31] PR #3031: cmd/snap-confine-suid-trampoline: add new helper
[04:05] PR snapd#8499 opened: Adding comment on system instability caused by a privileged cp
[05:53] morning
[06:24] Good morning
[06:24] good morning zyga
[06:25] How are you guys
[06:25] Hopefully today I will make progress on dbus
[06:27] zyga: mvo: morning guys
[06:27] have you seen https://github.com/snapcore/snapd/pull/8496 ?
[06:27] PR #8496: interfaces/apparmor: use differently templated policy for non-core bases <⛔ Blocked>
[06:28] super happy this is moving forward
[06:29] good morning mborzecki
[06:36] mborzecki: not yet
[06:36] but we talked about this in Toronto
[07:01] morning
[07:03] hey pawel :)
[07:05] pstolowski: hey
[07:39] hmm, trying to investigate the context deadline exceeded thing that happens in our tests randomly https://paste.ubuntu.com/p/zr6fNXMMbf/
[07:40] but looking at the go stdlib side, the whole timeout detection in http.Client.do() looks quite brittle
[07:48] heh, `time.Now().After(deadline)`: if this is true then Client.do() returns httpError{timeout:true}, otherwise it's urlError
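To unpack the brittleness being discussed: http.Client.do() decides only after the fact, by comparing the wall clock against the deadline, whether a failed request should be reported as a timeout. From the caller's side, the portable way to classify the failure is through the net.Error interface that *url.Error implements. A minimal sketch, with a deliberately tiny timeout to force the error path; the URL is just an example:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Absurdly small timeout so the request reliably hits the
	// deadline check inside http.Client.do().
	client := &http.Client{Timeout: time.Millisecond}
	_, err := client.Get("https://api.snapcraft.io/") // example URL
	if err == nil {
		fmt.Println("no error (unexpectedly fast)")
		return
	}
	// Client methods wrap failures in *url.Error, whose Timeout()
	// result is what the time.Now().After(deadline) comparison decides.
	if uerr, ok := err.(*url.Error); ok && uerr.Timeout() {
		fmt.Println("client timeout:", uerr)
		return
	}
	// Fallback: any net.Error can also report a timeout.
	if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
		fmt.Println("net-level timeout:", nerr)
		return
	}
	fmt.Println("other error:", err)
}
```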
[07:59] mvo: hi
[08:00] pedronis: hi
[08:00] mvo: #8489 is all green but of course the required things don't exist there, it needs reviews and you will have to land it manually I suppose
[08:00] PR #8489: github: partition the github action workflows
[08:02] mborzecki: zyga: ^ needs reviews
[08:03] mm
[08:03] pedronis: yeah, it does not have a review yet, once it does I can land it manually and adjust the required things in the settings
[08:04] zyga: did you notice/do you know why it seems we don't get the tooltip/panels with the detected error lines anymore?
[08:04] yeah, I noticed
[08:04] I was looking at why just now when looking at this diff
[08:05] I think echo "::add-matcher::.github/spread-problem-matcher.json" needs to be in an earlier step
[08:05] or there's a typo
[08:05] but it happens not only here
[08:05] also weirdly enough the emails I get have the count
[08:06] hmm
[08:08] mvo: could you land #8488, it failed on the store in one test, it will unblock various other PRs
[08:08] PR #8488: bootloader: add efi pkg for reading efi variables
[08:10] pedronis: sure, will do
[08:13] pedronis: meh, I had no idea how annoying efi vars are :(
[08:13] pedronis: anyway, landed
[08:13] PR snapd#8488 closed: bootloader: add efi pkg for reading efi variables
[08:13] mvo: thank you, I will pick up Claudio's #8463 now
[08:13] PR #8463: secboot: key sealing also depends on secure boot enabled
[08:40] zyga,pedronis: re, looks like the network is a bit unstable today, did I miss anything in the last 20min?
[08:41] mvo: I'm doing a review of your actions patches
[08:41] mvo: hopefully nothing else
[08:42] mvo: I updated #8463 if you want to double check my changes
[08:42] mvo: https://github.com/snapcore/snapd/pull/8489#pullrequestreview-393570583
[08:42] PR #8463: secboot: key sealing also depends on secure boot enabled
[08:42] PR #8489: github: partition the github action workflows
[08:45] Are updates to core20/edge held up for manual review?
=== pedronis_ is now known as pedronis
[08:59] PR snapd#8498 closed: run-checks: use consistent "Checking ..." style messages
[09:00] omg that github workflow PR, wish we didn't need to have this hack
[09:00] mborzecki: you mean the amount of duplicated code?
[09:01] mborzecki: that is very sad indeed
[09:01] and the full workflow name appears in the jobs list :/
[09:01] and it's controlled by repo level switches :/
[09:01] and it's all duplicated
[09:01] mvo: a clever hack nonetheless ;)
[09:03] feels like gh actions are half baked atm
[09:04] mborzecki: yeah, I wish we had yaml anchors or anything really
[09:05] rustup generates their workflows from parameterised source yaml files
[09:05] jamesh: but it's done offline right?
[09:06] or is there an action that does that? :)
[09:06] mborzecki: yes. The generated workflows need to be committed to git
[09:07] we still need the dependency on the unit tests
[09:07] (even if we were to split workflows per spread system)
[09:08] mvo, mborzecki: jobs and steps can have a name and an ID
[09:09] we can keep the ID as is but give it a shorter name
[09:21] brb
[09:31] PR snapd#8500 opened: httputil: fix client timeout retry tests
[09:31] should help with the random failure in httputil retry tests ^^
[09:31] still a mess, but nothing we can do about it, especially when sticking to older go versions
[09:38] mvo: zyga: so a silly thought about gh actions, what if we just accept to run the unit tests more than once, and run them on the target system if possible (eg. via a docker container and whatnot) or a reasonable default eg 18.04
[09:38] mborzecki: that's what mvo did, no?
[09:38] zyga: i mean a workflow per spread system
[09:38] yeah
[09:38] as for running them only in spread
[09:38] I disagree
[09:39] that's somewhat slower and much more costly
[09:39] but we can just try
[09:40] mborzecki: you suggest one workflow per spread system and each workflow would have the unit tests first?
[09:40] I think we should try to live with a setup for a bit, after mvo's split lands, before making other changes
[09:41] yeah
[09:41] we are really blocked at the moment by everything being red and needing a full re-run or an mvo override
[09:41] I would only make cosmetic tweaks (names) so that it's nicer on a PR page
[09:41] or we land it and do cosmetic tweaks in the followup :)
[09:41] ?
[09:41] (it still needs a +1)
[09:41] yes totally
[09:42] * zyga checks the comments
[09:42] mvo: yes, the unit tests could run in a docker container of the target system or some known default if there's none, or just run the spread unit test job on the target system separately first
[09:43] that seems like it would slow things down
[09:43] anyway it all sounds like more work than we can afford atm
[09:43] the downside is it'd obviously take longer, the upside is that unit tests would run on more systems, where they happen to fail sometimes when mocking is lacking
[09:44] but you can also restart workflows :P
[09:44] mborzecki: we run the unit tests in spread on the target too (tests/unit/go)
[09:45] this all sounds like a discussion to have in 3 weeks at the earliest
[09:45] tbh
[09:45] the trade-off is unit tests first or the risk of spinning up 50 machines that all fail
[09:48] FYI: I scaled down my deployment to 7 nodes (currently busy) to force some traffic onto canonistack instances
[09:48] if something falls over I will scale back up
[09:48] hmmm could we cache the job results somehow?
[09:49] https://github.com/snapcore/snapd/pull/8489#pullrequestreview-393621491
[09:49] PR #8489: github: partition the github action workflows
[09:49] mborzecki: yes
[09:49] as in, cache the result of unit tests or individual spread tests
[09:49] mborzecki: we could
[09:49] mborzecki: that's actually pretty brilliant
[09:49] and then skip those jobs when the workflow restarts?
[09:49] mborzecki: if the sha matches and it's green
[09:49] just go ahead :)
[09:49] oh, that's interesting
[09:49] we could use the cache action for that
[09:50] mvo: https://github.com/snapcore/snapd/pull/8489#pullrequestreview-393621491
[09:52] PR snapd#8489 closed: github: partition the github action workflows
[09:52] mborzecki: can you merge master or rebase https://github.com/snapcore/snapd/pull/8500
[09:52] PR #8500: httputil: fix client timeout retry tests
[09:52] mborzecki: it will need the new checks to pass anyway
[09:52] mborzecki: and I want to use it as a guinea pig
[09:53] haha ok
[09:53] with a bit of luck the canonistack workers will now pick it up
[09:55] hmm Error restoring google:ubuntu-18.04-64:tests/completion/ (apr150717-345956) : read tcp 10.113.57.178:48146->35.229.25.208:22: read: connection reset by peer
[09:56] did we run out of some quota on gce?
[09:56] no, probably my fault
[09:56] let's wait for 20 minutes to see if the new setup works
[09:57] PR snapd#8494 closed: tests: preserve size for centos images on spread.yaml
[09:59] mborzecki: we should figure out caching for "go test"
[09:59] 7 minutes of each run is spent there
[09:59] PR snapd#8495 closed: cmd/snap-bootstrap: switch to a 64-byte key for unlocking
[10:00] ehh more of that `Error restoring google:ubuntu-20.04-64:tests/main/parallel-install-services (apr150717-996191) : read tcp 10.113.57.84:36100->34.74.8.8:22: read: connection reset by peer`
[10:02] we are now using canonistack as well as my "cloud"
[10:02] 6 workers but hopefully with way better network
[10:02] we have enough memory and CPU load is low
[10:03] and the network is much faster actually
[10:20] hmm https://github.com/snapcore/snapd/pull/8499 wat?
[10:20] PR #8499: Adding comment on system instability caused by a privileged cp
[10:21] hmmm?
[10:22] mborzecki: maybe he has a github repo
[10:22] we can send a PR with a response
[10:22] / thank you for this comment
[10:23] it's probably some weird snap that produces lots of data in $SNAP_DATA or somesuch
[10:23] mborzecki: lol
[10:23] he made a typo
[10:23] so we didn't waste money "testing" that PR
[10:23] mvo: I guess that's one for you to handle
[10:24] mvo: https://github.com/snapcore/snapd/pull/8499
[10:24] PR #8499: Adding comment on system instability caused by a privileged cp
[10:25] * mvo is in a meeting
[10:36] brb
[10:57] damn, error: cannot download snap "pi-kernel": no snap revision available as specified
[10:57] keeps repeating
=== pedronis_ is now known as pedronis
[11:00] weird, why is it using 20-pi3/edge while xnox's xnox-core-20-pi-arm64.model is using 20/edge
[11:00] mborzecki: yeah, I think the kernel team did something to "pi-kernel"
[11:00] I think 20-pi3 doesn't exist anymore
[11:00] xnox: do you know whether we should be using 20/edge for pi kernels now?
[11:00] also, why are we using 20 there?
[11:01] oh, funny, "pi-kernel" has no latest track anymore
[11:01] so snap info pi-kernel gives no reply
[11:01] pedronis: this is what we're using https://github.com/snapcore/snapd/blob/master/tests/lib/assertions/developer1-pi-uc20.model.json
[11:02] I think that's broken now
[11:03] ok, the official (?) model ubuntu-core-20-pi-arm64.model is using 20/stable, so 20/edge is ok right?
[11:03] mborzecki: yes
[11:03] I think so
[11:06] mborzecki: please stop using that, and instead use the canonical-signed dangerous model that is now available from the models repo & from the assertions service.
[11:06] PR snapd#8501 opened: tests/lib/assertions/developer1-pi-uc20.model: use 20/edge for kernel and gadget
[11:06] 20/edge is the way to go
[11:07] xnox: thanks!
[11:07] that's our own prepare-image tests
[11:09] xnox, a post on the forum would be really nice (where to find models etc) ... though probably post-release
[11:09] so our customers can start rolling their own core20 imgs
[11:10] ogra: no =) nobody shall find out anything about core20 ever.
[11:10] haha
[11:10] * xnox giggles
[11:10] mvo: for latest you need pi2-kernel I think, pi-kernel is 18/20 only afaict
[11:16] cachio: hey
[11:16] cachio: around? :)
[11:38] PR snapd#8502 opened: github: try caching test results
[11:39] zyga: ^^
[11:39] yeah, noticed :)
[11:40] zyga: poor man's test result cache
=== pedronis_ is now known as pedronis
[11:57] zyga: hey
[11:57] mmm
[11:58] what's up :) ?
[11:58] zyga: are you aware of snap-confine-undesired-mode-group errors in fedora? or are these fixed already in master?
[11:58] cmatsuoka: can you show me the log please
[11:58] sure, just a sec
[11:59] cmatsuoka: there's one thing in master that is fixing the only failure of this test known to me
[11:59] https://github.com/snapcore/snapd/pull/8495/checks?check_run_id=587546789
[11:59] PR #8495: cmd/snap-bootstrap: switch to a 64-byte key for unlocking
[11:59] yes
[11:59] this is fixed in master
[11:59] excellent, thanks!
[12:09] cmatsuoka: hi, I worked a bit on #8463, it should be ready to land I think, but it's very red (for various reasons)
[12:09] PR #8463: secboot: key sealing also depends on secure boot enabled
[12:10] pedronis: I just checked it, thanks for the fixes/improvements
[12:12] cmatsuoka: Chris asked you to review: https://github.com/snapcore/secboot/pull/46
[12:12] PR secboot#46: Make SealKeyToTPM enforce a key size of 64-bytes
[12:12] ack
[12:40] PR snapcraft#3039 closed: build providers: setup initial apt source configuration
[13:07] mvo: can you use your superpowers and merge https://github.com/snapcore/snapd/pull/8501 ?
[13:07] PR #8501: tests/lib/assertions/developer1-pi-uc20.model: use 20/edge for kernel and gadget <⚠ Critical>
[13:23] mborzecki: sure
[13:23] mvo: thanks!
[13:24] PR snapd#8501 closed: tests/lib/assertions/developer1-pi-uc20.model: use 20/edge for kernel and gadget <⚠ Critical>
[13:27] mborzecki: does 8476 look ok otherwise (besides the one point you made there)?
[13:36] ok, I can now turn off my home server
[13:37] we have 38 workers in two locations
[13:38] zyga: maybe you can lease some DC capacity from Stéphane :)
[13:38] roadmr: that's an interesting idea
[13:38] mvo: ^ we probably could :)
[13:39] PR snapd#8476 closed: secboot: add tpm support helpers
[13:41] zyga: heh, interesting
[13:53] * zyga breaks for lunch
[13:59] PR snapd#8481 closed: cmd/snap-update-ns: handle EBUSY when unlinking files
[13:59] PR snapd#8485 closed: cmd/snap/debug/boot-vars: add opts for setting dir and/or uc20 vars
[14:13] thank you mvo!
[14:16] cmatsuoka: not really urgent but we should probably merge secboot_tpm.go and tpm.go at some point, they are not big and it's not very clear what each does. the other option is to rename secboot_tpm.go to something else
[14:21] PR snapd#8500 closed: httputil: fix client timeout retry tests
[14:22] zyga: so is the robust mount namespace updates feature already enabled by default on master?
[14:23] pedronis: agreed, I considered that myself when moving tpm.go to secboot but decided to propose it later to avoid changing too much in a single step
[14:28] ijohnson: yes it is
[14:28] nice
[14:28] ijohnson: 2.45 will have it
[14:28] * zyga is partially away, prepping food
[14:33] ijohnson: cmatsuoka: I did some digging about the SecureBoot variable
[14:34] pedronis: ah interesting, any new findings?
[14:34] cmatsuoka: see the PR comments
[14:34] * cmatsuoka checks
[15:16] hmmmm
[15:16] did canonistack just die?
[15:17] deadstack
[15:19] hmm
[15:20] not sure what happened
[15:20] PR snapd#8453 closed: [RFC] travis.yml: re-enable travis
[15:22] pedronis: #8497 is ready again
[15:22] PR #8497: boot/bootstate20: re-factor kernel methods to use new interface for state
[15:25] ijohnson: I was in meetings, looking now
[15:29] thanks
[15:33] ijohnson: commented
[15:40] pedronis: do you want a separate PR/rebase to extract just the change for https://github.com/snapcore/snapd/pull/8497#discussion_r408941648 ?
[15:40] PR #8497: boot/bootstate20: re-factor kernel methods to use new interface for state
[15:41] * zyga settles for quiet work
[15:41] ijohnson: maybe yes, I would expect a test change for it in some form
[15:42] pedronis: ok, fwiw that was changed in the PR that's open
[15:42] that will be easy to extract out though
[15:46] PR snapd#8503 opened: boot/bootstate20: fix bug in try-kernel cleanup
[15:46] pedronis: broken out as ^
[15:47] mvo: i've pushed my early config test to https://github.com/stolowski/snapd/tree/core18-early-config
[15:47] sil2100 (cc mvo): hey, the core20 snap is failing with https://paste.ubuntu.com/p/DrhfhjWG9y/. can you confirm this is ok or not?
[15:47] mvo: to run it: google-nested:ubuntu-18.04-64:tests/nested/manual/core-early-config
[15:48] cachio: unfortunately we still need secure boot enabled to be able to run the full encryption test
[15:48] cachio: error: cannot seal the encryption key: cannot store encryption key: cannot add EFI secure boot policy profile: cannot compute secure boot policy digests: the current boot was performed with secure boot disabled in firmware
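The error above comes down to the SecureBoot EFI variable. On Linux it is exposed through efivarfs, where each file starts with four bytes of variable attributes followed by the payload; for SecureBoot the payload is a single byte, 1 when secure boot is enabled. A minimal sketch of that check, as an illustration of the mechanism rather than snapd's actual efi package API:

```go
package main

import (
	"fmt"
	"io/ioutil"
)

// secureBootEnabled reads the SecureBoot variable from efivarfs.
// Each efivarfs file is 4 bytes of attributes followed by the
// variable data; for SecureBoot the data is one byte, 1 == enabled.
func secureBootEnabled() (bool, error) {
	const path = "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
	b, err := ioutil.ReadFile(path)
	if err != nil {
		// e.g. legacy BIOS boot: the variable simply does not exist
		return false, err
	}
	if len(b) < 5 {
		return false, fmt.Errorf("unexpected SecureBoot variable size %d", len(b))
	}
	return b[4] == 1, nil
}

func main() {
	enabled, err := secureBootEnabled()
	fmt.Println("secure boot enabled:", enabled, "err:", err)
}
```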
[15:51] sil2100 (mvo): I'm going to approve it to unblock the queue, but please look into it
[15:53] pstolowski: thanks, in a meeting right now, will have a look once that is over
[15:54] cmatsuoka, is that on classic?
[15:54] ijohnson: thanks, it wasn't clear that the call counts weren't a consequence of some of the refactorings
[15:54] jdstrand: thanks, probably https://paste.ubuntu.com/p/DrhfhjWG9y/ is something for xnox to review
[15:54] pedronis: yes, the refactor actually exposed the bug :-)
[15:55] cachio: mm, yes, the test is supposed to run on classic
[15:55] cmatsuoka, ok, I'll take a look
[15:56] mvo: can you explain that output?
[15:56] cachio: thanks, I'll run a few more tests here and will be back after lunch
[15:56] cmatsuoka, just first let me finish the test changes I did
[15:56] could you give me a link to your tests?
[15:56] mvo: i.e. these files were removed from .debs in focal, and hence new revisions of the core20 snap don't have them......
[15:57] xnox: jdstrand runs a tool that compares the files from one base to the next and when stuff is removed it alerts and blocks the snap
[15:57] mborzecki: around?
[15:57] mvo: why are you asserting the contents of .debs that are part of the core20 snap?
[15:57] cachio: yes, of course. I'm running the test with PR #8409
[15:57] xnox: I don't, the store-review-tools do
[15:57] PR #8409: snap-bootstrap: seal and unseal encryption key using tpm <⛔ Blocked>
[15:57] xnox: it's there to ensure that bases are stable and don't drop support
[15:58] jdstrand: those files are dropped in focal, despite equivalent functionality still being present.
[15:58] jdstrand: you should see newly added files, like subiquitycore/palette.py
[15:58] jdstrand: and that systemd-timesyncd is now a standalone package, which is included.
[15:59] xnox: ack, that's fine. thanks!
[15:59] xnox: just wanted to make sure it was intentional
[15:59] jdstrand: it was =)
[15:59] jdstrand: if you have questions like these in the future, the core20 bug tracker is on github at snapcore/core20 issues
[16:00] jdstrand: or foundations-crew@lists.canonical.com as foundations owns/builds core20
[16:00] cmatsuoka, nice, I'll take a look after lunch as well
[16:00] xnox: right, that is why I pinged sil2100, but I can change the workflow
[16:01] jdstrand: he mostly deals with "stable" like core18 / core16
[16:01] jdstrand: core20 will be dead to me next thursday! and onto bootstrapping core22
[16:01] xnox: heh, so should foundations-crew@lists.canonical.com be used for core16 and core18?
[16:01] I like how xnox said "stable" ;)
[16:02] mvo: ok, so core20 revisions are now passing automated reviews again and should be caught up in a few minutes
[16:02] jdstrand: yes you can. foundations-crew@ is a one-stop shop for core snaps, subiquity, netplan, etc.
[16:03] ok, thanks
[16:03] * cachio lunch
[16:04] xnox: that list is internal only, right?
[16:05] jdstrand: yes.
[16:34] cmatsuoka: looks like 8463 is ready to land but one core20 test is failing, i.e. google:ubuntu-core-20-64:tests/core/basic20, could you please disable this part of the test and/or move it to nested (if disabling, TODO:UC20: is probably fine). then this can land
[16:44] cmatsuoka: nevermind
[16:45] cmatsuoka: pushed it
[16:56] cmatsuoka, I don't see any nested test on #8409
[16:57] PR #8409: snap-bootstrap: seal and unseal encryption key using tpm <⛔ Blocked>
[16:57] cmatsuoka, is that the PR?
[17:02] cachio: yes, the test is currently in tests/main/uc20-create-partitions-encrypted, it will be moved to nested
[17:04] cmatsuoka, ah
[17:04] ok
[17:06] this failure is weird: https://pipelines.actions.githubusercontent.com/xS8oSnypZkPEQZqiZgDaRp2kdvQJKbOY08TesHp7E8vn7g4hYR/_apis/pipelines/1/runs/1163/signedlogcontent/31?urlExpires=2020-04-15T17%3A04%3A50.7188367Z&urlSigningMethod=HMACV1&urlSignature=n1FukofiSafDmCcRwGnoncpLJS8mfgpiWZRJABATpvc%3D
[17:07] pedronis: that url seems to be expired
[17:07] which PR was that from?
[17:07] ijohnson: your PR
[17:08] I noticed an odd lxd error from 8503, but I saved the logs and restarted the check
[17:08] ah okay so same thing I saw
[17:08] I'm looking into the issue now
[17:08] yes, weird failure in a lxd regression test
[17:10] ijohnson: it failed also in your other pr
[17:10] so maybe it's real
[17:10] that's silly
[17:10] * ijohnson will look more into it after lunch
=== pedronis_ is now known as pedronis
[17:17] what was the failure?
[17:17] (in the lxd regression test)
[17:40] pedronis: what would be a good place to define the kernel command line used in tpm sealing?
[17:46] ijohnson: that test is failing for master too
[17:46] pedronis: do you have logs?
[17:46] which test is it?
[17:48] something thinks we should have snapd around
[17:48] but actually we have core
[17:48] # lxc file push /usr/bin/snap bionic/usr/bin/snap
[17:48] Error: open /snap/snapd/current/usr/bin/snap: no such file or directory
[17:48] ah
[17:48] I think I know what this is
[17:48] one moment
[17:49] I think it's a race, I've seen things like that today
[17:49] lxc launch ...
[17:49] it's google:ubuntu-18.04-64:tests/regression/lp-1871652
[17:49] lxc file push ...
[17:49] that fails
[17:49] it fails on master, running alone
[17:49] I added lxc exec -- ls -l ...
[17:49] that makes the container "ready" somehow
[17:49] pedronis: thanks, I'll run it now
[17:52] cmatsuoka: the bootloaders should know about this, but for now you can hard code it in devicestate if it's easier
[17:53] pedronis: ok, thanks
[17:53] cmatsuoka: actually it's not devicestate, right? this goes into snap-bootstrap?
[17:54] anyway you can hard code it in snap-bootstrap for now
[17:54] mm, yes, it's in snap-bootstrap
[17:59] * ijohnson is back
[18:00] zyga: did you reproduce/fix the issue?
[18:13] ijohnson: reproduced yes, fixed not yet
[18:14] ah, finally my spread run just got there, I've got it now too
[18:14] so
[18:14] this is weird
[18:14] there are two systems
[18:14] the host and the container
[18:15] neither has the snapd snap
[18:15] yet
[18:15] lxc file push says
[18:15] Error: open /snap/snapd/current/usr/bin/snap: no such file or directory
[18:15] this is lxd 4.0.0
[18:15] maybe a lxd bug?
[18:15] you must have SNAP or SNAP_NAME in the environment
[18:16] stgraber: we don't
[18:16] indeed
[18:16] https://www.irccloud.com/pastebin/dtcv3kfp/
[18:16] and this passed recently as it's a fix for the bug that was plaguing lxd (from the snapd side)
[18:16] what's the "lxc file push" you're running?
[18:16] lxc file push /usr/bin/snap bionic/usr/bin/snap
[18:16] I'm copying the snap command from the host (deb) to the guest
[18:17] since the host has a locally built snapd
[18:17] hmm, odd
[18:18] stgraber: if I switch to the 3.23/stable channel the same command works
[18:18] and switching back to the 4.0/stable channel, the same command fails again
[18:18] looks like a lxd regression to me
[18:18] hmm, we didn't touch any of the file code between 3.23 and 4.0
[18:18] yeah, I just tried that too
[18:19] stgraber: did you release 4.0 recently?
[18:19] we did just switch the snap over to core18 though
[18:19] zyga: did you see the same behavior?
[18:19] yes
[18:19] works perfectly on 3.23
[18:20] ijohnson: shall we update the LXD channel and call it a day?
[18:20] zyga: yeah I think that's fine for now
[18:20] I'll file a PR
[18:20] thank you!
[18:20] may I suggest something?
[18:20] ijohnson: adjust the test to respect a variable
[18:20] testing with current 4.0 and the same file push here to see if it reproduces easily
[18:20] and update the variable in spread.yaml
[18:20] I think I missed that we had this in the first place when I wrote the test
[18:21] zyga: that's a great idea, sure
[18:21] thanks!
[18:21] root@sateda:~# nsenter --mount=/run/snapd/ns/lxd.mnt -- ls -lh /usr/bin/snap
[18:22] lrwxrwxrwx 1 root root 32 Mar 11 05:46 /usr/bin/snap -> /snap/snapd/current/usr/bin/snap
[18:22] that would be why ^
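The symlink above is the whole bug: inside the LXD snap's mount namespace, /usr/bin/snap points into /snap/snapd/current, so naively resolving the source path made `lxc file push` read the file from inside the snap rather than from the host, which is visible at /var/lib/snapd/hostfs. A sketch of the kind of guard the fix needs; the paths come from this conversation, but the function is hypothetical, not LXD's actual shared/util code:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hostPath resolves p and, if the resolved target lives under /snap
// (i.e. inside the snap's own view of the filesystem, like
// /usr/bin/snap -> /snap/snapd/current/usr/bin/snap), re-roots the
// original path onto the real host filesystem mounted at
// /var/lib/snapd/hostfs instead.
func hostPath(p string) (string, error) {
	resolved, err := filepath.EvalSymlinks(p)
	if err != nil {
		return "", err
	}
	if strings.HasPrefix(resolved, "/snap/") {
		return filepath.Join("/var/lib/snapd/hostfs", p), nil
	}
	return resolved, nil
}

func main() {
	fmt.Println(hostPath("/usr/bin/snap"))
}
```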
[18:22] !!!
[18:22] that's weird
[18:22] how did we get that symlink?
[18:22] presumably this wasn't that way on core16 so that wasn't a problem
[18:22] is that a core18 regression?
[18:22] is 3.23 using a different base?
[18:22] I mean, it's still a bug, it shouldn't actually pull the thing from inside the snap, it needs to pull it from the host
[18:22] yeah, we switched bases from core to core18 a few hours ago
[18:22] indeed
[18:23] this makes sense
[18:23] root@sateda:~# nsenter --mount=/run/snapd/ns/lxd.mnt -- ls -lh /var/lib/snapd/hostfs/usr/bin/snap
[18:23] -rwxr-xr-x 1 root root 14M Oct 30 12:17 /var/lib/snapd/hostfs/usr/bin/snap
[18:23] cool, do you need a bug report for tracking?
[18:23] that's still fine and it should be using that
[18:23] you can if you want, but I'll probably have it fixed in 5min or so
[18:23] ok, no need then
[18:27] INFO[04-15|18:27:30] Pushing /var/lib/snapd/hostfs/usr/bin/snap to /usr/bin/snap (file)
[18:27] that's more like it
[18:27] I absolutely hate that logic...
[18:28] such a mess to support absolute and relative paths, with and without symlinks in the middle, ...
[18:29] zyga, ijohnson: https://github.com/lxc/lxd/pull/7198
[18:29] PR lxc/lxd#7198: shared/util: Never look into the snap
[18:29] will be in the stable snap in the next 4-5 hours I'd say
[18:29] cool thanks stgraber
[18:29] stgraber: you like to live dangerously ;-)
[18:29] but I admire that
[18:29] I've been doing daily stable cherry-picks lately
[18:29] ah
[18:29] I see
[18:29] that's interesting
[18:29] perhaps something we could do?
[18:30] ahhhh we would have many folks very mad if we did stable releases as often as lxd :-)
[18:30] well, we have pretty reasonable testing on all distros and architectures, that's why it takes 4-5 hours before we're pretty confident it's not horribly broken :)
[18:30] stgraber: having a regression test for push would be great tho :D
[18:31] yeah, not very practical to test in CI since we don't use any snaps there, but I'll sneak some file push testing into our snap tests
[18:33] test added
[18:33] thank you :)
[18:36] zyga: fancy a +1 before you EOD? https://github.com/snapcore/snapd/pull/8504
[18:36] PR #8504: spread.yaml,tests/many: use global env var for lxd channel <⚠ Critical>
[18:36] :-)
[18:36] looking
[18:36] I'm not EODing yet
[18:36] deeeeply in dbus
[18:36] one might say you are dbusly in dbus
[18:36] * ijohnson stops
[18:36] PR snapd#8504 opened: spread.yaml,tests/many: use global env var for lxd channel <⚠ Critical>
[18:36] hahaha
[18:37] +1
[18:38] there's something magic about the words
[18:38] "in progress"
[18:38] that's so much better than the "queued" we had before
[18:44] PR snapd#8505 opened: spread.yaml: switch back to latest/candidate for lxd snap <⛔ Blocked>
[18:46] ijohnson: ^ nice
[18:46] in progress does sound much better
[18:47] I think in general whenever we need to do things like this, i.e. revert something to unbreak master while something is fixed, we should always open a PR undoing that change at the same time, so at least we remember, because there's an open PR about changing it back
[18:47] yes
[18:47] i.e. same thing when we move a system to unstable, we should open another PR which moves it back to stable
[18:48] pedronis: we fixed the lxd issue on master
[18:48] well, we have a PR open
[18:48] PR snapd#8463 closed: secboot: key sealing also depends on secure boot enabled
[18:49] ijohnson: I see
[18:50] ijohnson: anyway, this proves why I'm not a fan of dangling symlinks in bases
[18:51] pedronis: yes, but also why we should be very strict about letting snaps access things in bases too
[18:52] yes
[18:52] pedronis: also, did you discuss with maciej picking up the necessary changes to the gadget pkg et al to enable mbr support for create-partitions? should I try to push that forward? I was kinda hoping that he would pick it up but I keep forgetting to ask him before he EODs
[18:53] ijohnson: I don't know, it's actually a prereq to make the run mode changes relevant, right? because it's install time?
[18:53] pedronis: yes
[18:54] yes, it's a blocker I mean
[18:54] ijohnson: have you started the other bootloaderKernelState?
[18:55] I'm wrapping it up now, but I need to figure out a way to write tests for it, I'd rather not rewrite the dozens of tests we already have for bootstate20 to use a mock uboot bootloader, etc.
[18:56] ijohnson: maybe just write direct tests for it?
[18:56] we probably need to reorg some of the other tests as table tests so that they are easier to reuse
[18:56] pedronis: yes perhaps, need to think about it a little bit
[18:56] or have actual test mixins
[18:57] we don't have a lot of the latter but we do sometimes
[18:58] yeah, we could refactor some of the tests' big chunks to use helpers too
[18:58] lots of the tests are just bootloader setup anyway
[18:59] cmatsuoka: is #8409 ready for Chris to look at?
[18:59] PR #8409: snap-bootstrap: seal and unseal encryption key using tpm <⛔ Blocked>
[19:00] pedronis: will be in a minute, pushing a final commit
[19:01] pedronis: pushed
[19:23] Issue core20#36 closed: SSH login shell says the system was minimized and to run "unminimize", but should not
[19:23] PR core20#42 closed: drop `unminimize` instructions that are not applicable on Core
[19:52] PR snapd#8284 closed: config: add system.store-certs.[a-zA-Z0-9] support
[19:58] PR snapd#8503 closed: boot/bootstate20: fix bug in try-kernel cleanup
[20:05] PR snapd#8504 closed: spread.yaml,tests/many: use global env var for lxd channel <⚠ Critical>
[20:37] PR snapd#8506 opened: Add libnvidia-opticalflow as Nvidia library
[20:37] hey everyone, if someone could take a quick look at ^ it'd be much appreciated
[21:36] what determines what ends up in parts/$PART/install/usr/lib/python3/dist-packages when using the python plugin?
[22:17] PR snapcraft#3043 opened: package-repositories: initial schema and meta read/write support
[22:45] PR snapcraft#3044 opened: build providers: use ubuntu-ports mirrors for non-x86 platforms