[05:49] <mmikowski> Eickmeyer[m]: @vorlon bdmurray arraybolt3[m]: I will update the bug, but an interesting change from 5.17.0-1015 to 5.17.0-1016 was that the initrd went from 108 MB to 165 MB.
[05:50] <mmikowski> Changing to xz compression brought it to around 120 MB. But the 165 MB figure is still a huge jump, about 50% over before.
[05:51] <arraybolt3> mmikowski: I've seen other people have very similar problems. One thing that helped was changing the "MODULES=most" to "MODULES=dep" in the initramfs config file IIRC.
[05:52] <arraybolt3> mmikowski: Also, the last time I saw someone with the problem, it was getting them on a system that did *not* use NVIDIA, so it might be related to something else.
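[Editor's note: the MODULES change arraybolt3 mentions lives in /etc/initramfs-tools/initramfs.conf on stock Ubuntu. A minimal sketch of the edit, demonstrated on a scratch copy so it is safe to run anywhere; the stand-in file contents are illustrative:]

```shell
# Demonstrate switching initramfs-tools from "most" modules to only the
# modules the current hardware depends on. On a real system the file is
# /etc/initramfs-tools/initramfs.conf, and you follow up with
# "sudo update-initramfs -u" to rebuild the image.
conf=$(mktemp)
printf 'MODULES=most\nCOMPRESS=lz4\n' > "$conf"   # stand-in contents
sed -i 's/^MODULES=most/MODULES=dep/' "$conf"     # the actual change
grep '^MODULES=' "$conf"
rm -f "$conf"
```

After rebuilding, comparing `ls -lh /boot/initrd.img-*` before and after shows the size difference.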
[05:53] <mmikowski> arraybolt3: Thanks! I will definitely make a note of this. Btw, the jump to 165 MB occurred with no config changes for the initrd: it just happened on that kernel install. So perhaps the default changed from MODULES=dep to MODULES=most. I'd have to test to see if prior initrd builds are also affected.
[05:55] <mmikowski> arraybolt3: That's interesting. This was on a NUC with an eGPU, though the eGPU was powered down; the initrd was built with the Nvidia modules for obvious reasons - prime falls back to using Intel if the eGPU isn't present. But again, it's odd this just started happening with the 5.17.0-1016 kernel, whereas the 5.17.0-1015 initrd was so much smaller.
[05:55] <arraybolt3> mmikowski: Also, the guy with the messed-up initramfs has a 112MB initrd for one kernel, and an 82MB one for a different kernel, while the one that was generated after the change was only 62MB. Not sure if that's helpful, but just in case, there it is.
[05:56] <mmikowski> arraybolt3: Hmmm. So that's from changing the MODULES setting? Or is that just noting that a kernel change can sometimes cause huge swings in initrd sizes?
[05:57] <arraybolt3> mmikowski: I believe it was from changing the MODULES setting. And also it appears that initrd sizes can swing relatively largely between kernels (which really doesn't make a whole lot of sense to me).
[05:59] <mmikowski> Another weird thing: on other hardware (no eGPU), I see 164 and 165 MB for the 1015 and 1016 kernels. So there's some strange hardware interaction going on (164 MB on the NV laptop; 108 MB on the NUC with eGPU). Ahhh - this might be it -- apparently the initrd is sensitive to attached displays? The NUC previously had a 1200p display attached; the big bump occurred after attaching a 4K display.
[06:01] <mmikowski> Finally, there may be a BIOS setting to turn off SGX controls which might fix the squeeze on the NUC. The eGPU definitely makes this a bit of an edge-case.
[06:01] <mmikowski> So calling it a night. Thanks @arraybolt3!
[06:02] <arraybolt3> mmikowski: Glad to provide some helpful info!
[06:02] <mmikowski> @arraybolt3: yes, you've been great. More stuff to investigate. Noted and I will report back if I find anything interesting.
[06:59] <khfeng> rbasak: hi! is it possible to upload it for Focal? https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-amdgpu/+bug/1987038
[10:18] <bluca> enr0n: ping - just making sure you saw the message from yesterday
[10:44] <ginggs> bdmurray, kanashiro__: i'm seeing several failures on amd64, e.g. https://autopkgtest.ubuntu.com/packages/3/389-ds-base/kinetic/amd64 2022-09-07 04:15:12 UTC
[10:44] <ginggs> autopkgtest [04:15:11]: ERROR: testbed failure: cannot send to testbed: [Errno 32] Broken pipe
[10:45] <ginggs> also i386
[12:35] <enr0n> bluca: Yes, thanks for the heads up. I was just about to begin debugging that failure when you sent
[12:58] <enr0n> Can a core dev please retry systemd autopkgtests on kinetic: retry-autopkgtest-regressions --blocks systemd -s kinetic ? I am seeing many of the weird failures that ginggs described above (I have also seen this happen in the past).
[14:02] <ginggs> enr0n: running that command I only see 3 packages to retry: coturn, initramfs-tools, and nut. Were you expecting more?
[14:12] <enr0n> ginggs: ah I think sil2100 started several after our meeting but didn't mention it here
[14:26] <ginggs> enr0n: i've just chatted to sil2100, he didn't restart them yet -- it is weird; running the script again now, it showed many more. Anyway, I'll retry all those I can now
[14:27] <enr0n> ginggs: weird indeed, but thanks!
[15:24] <bdmurray> enr0n, ginggs: Have the "cannot send to testbed" errors all been on amd64 / in lgw01 ?
[15:28] <enr0n> bdmurray: For the most part, yes e.g. https://autopkgtest.ubuntu.com/results/autopkgtest-kinetic/kinetic/amd64/f/fetchmail/20220907_043933_fbb77@/log.gz and https://autopkgtest.ubuntu.com/results/autopkgtest-kinetic/kinetic/amd64/c/ceph/20220907_040000_b4874@/log.gz. There are also some i386 e.g.
[15:28] <enr0n> https://autopkgtest.ubuntu.com/results/autopkgtest-kinetic/kinetic/i386/c/clutter-1.0/20220907_050854_f3aae@/log.gz.
[15:30] <bdmurray> enr0n: ack, thanks!
[16:59] <bdmurray> enr0n: some of those logs show kernel oopses w/ systemd-udevd being hung. Have you looked at that at all?
[17:04] <enr0n> bdmurray: No, I haven't looked too closely at those since it looked like an issue we've seen before. Can you link one of those examples?
[17:08] <bdmurray> https://autopkgtest.ubuntu.com/results/autopkgtest-kinetic/kinetic/amd64/f/fetchmail/20220907_164153_f66ff@/log.gz
[17:14] <enr0n> looking
[17:16] <enr0n> bdmurray: hm yeah that could use more investigation. However I think it's unlikely it was introduced in the latest upload, which only modifies systemd-resolved.postinst.
[17:51] <bdmurray> enr0n: I booted a test system in lgw01 with systemd 251.4-1ubuntu3 and it also hit a kernel oops
[18:21] <vorlon> bdmurray, enr0n: the autopkgtest failure logs I'm seeing show a system that hasn't had a chance to install the new systemd
[18:26] <bdmurray> vorlon: new being ubuntu4 or ubuntu3 ?
[18:26] <vorlon> bdmurray: ubuntu4
[18:27] <vorlon> bdmurray: do you think the previous systemd is what broke the env?
[18:27] <vorlon> my point is that these don't look like they should block promotion of ubuntu4
[18:30] <bdmurray> vorlon: bdmurray-test-kerneloops3 in prod-proposed-migration took a very long time to finish provisioning b/c of 'task systemd-udevd:143 blocked for more than 1208 seconds' and it has ubuntu3 installed
[18:30] <vorlon> ok
[18:30] <vorlon> but is there any reason not to skiptest ubuntu4 and potentially unbreak the livefs builds?
[18:31] <bdmurray> not that would be for the greater good
[18:31] <bdmurray> s/not/no/
[18:32] <vorlon> ack, adding the hint, thanks
[18:36] <vorlon> jbicha: so you synced a systemd-cron package from unstable that adds a dependency on a non-existent package?
[18:42] <jbicha> vorlon: I didn't make it worse since the dep change was already done by autosync
[18:43] <vorlon> jbicha: ah
[18:43] <enr0n> vorlon, bdmurray: thanks for adding the hint. In any case I will dig a bit more into the issue
[20:38] <arraybolt3> When building a debian/copyright file, how should I handle supporting documentation files within the source code of a project that have no license specified? I'm dealing with dssi-1.0.0 (part of distrho-ports), and have been met with a whopping load of GNU Autoconf and related junk (all of which I believe I can simply omit from the copyright file according to Debian policy). However, there's also a "README" file and a "ChangeLog"
[20:38] <arraybolt3> file with no license specified in them or in other areas of the software.
[20:39] <arraybolt3> (The software also licenses different parts of itself under different licenses and has no overarching "license" applied to the entire thing, so I can't fall back to the "default license" of the project since there is none.)
[20:46] <arraybolt3> (I think I can probably just omit the files entirely.)
[20:47] <sarnold> users expect readmes and changelogs :)
[20:50] <arraybolt3> sarnold: I mean omit them from the copyright file.
[20:50] <sarnold> ah :)
[20:51] <arraybolt3> (They're obviously supporting documentation for an open-source project, the license is just... missing. I'm guessing no one expected the need to specify a license for files like that.)
[21:10] <vorlon> arraybolt3: the assumption is that files included in the tarball which don't include an explicit per-file license declaration have the same license as the project as a whole.  Where would I find the tarball for this to look at?
[21:23] <arraybolt3> vorlon: It's a part of a giant project, let me see if I can pull up the info...
[21:24] <vorlon> arraybolt3: eh what is it that you're reviewing? :)
[21:24] <arraybolt3> vorlon: The repo is "git clone https://git.launchpad.net/distrho-ports". I'm building the entire copyright file for the whole project.
[21:24] <arraybolt3> The exact part in question is libs/juced/source/dependancies/dssi-1.0.0.
[21:25] <vorlon> hmm
[21:26] <vorlon> arraybolt3: that directory has a COPYING file which we would assume is controlling
[21:26] <arraybolt3> It's essentially a ton of smaller projects lumped into one gigantic one, the project as a whole is GPL-2, but all of the stuff inside is under various different licenses with tons of different copyrights.
[21:26] <arraybolt3> vorlon: That file specifically says it only applies to one specific file in the project.
[21:26] <vorlon> ah
[21:27] <arraybolt3> NOTE: This license applies to the DSSI header file.  See README for a summary of the licensing of other files, and the individual source code for more details.
[21:27] <arraybolt3> Thus is the notice in COPYING.
[21:28] <arraybolt3> Neither one of the things the header mentions helps in determining the copyright of the README itself or the ChangeLog.
[21:29] <vorlon> arraybolt3: shrug I would still treat those as LGPL as that's the most restrictive license there
[21:29] <vorlon> but also this looks like a terrible bundling of things to be trying to package
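[Editor's note: in machine-readable debian/copyright (DEP-5) terms, vorlon's suggestion might be recorded with a stanza along these lines; the paths come from the discussion, but the Comment wording and exact license short name (DSSI's header is LGPL-2.1) are my sketch:]

```
Files: libs/juced/source/dependancies/dssi-1.0.0/README
       libs/juced/source/dependancies/dssi-1.0.0/ChangeLog
Comment: No per-file license declaration; treated under the most
 restrictive license found in the directory (see its COPYING and README).
License: LGPL-2.1
```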
[21:29] <arraybolt3> vorlon: That makes sense. Also, one last thing, are file format specifications licensable, or only patentable? There's an RFC.txt file in the mess with missing copyright info and I don't know how that plays in.
[21:30] <arraybolt3> (Er, I don't think it's a file format specification, but it is a specification.)
[21:30] <vorlon> ugh there has been a lot of nuisance with rfcs because the IETF has applied a copyright license to them which is non-free
[21:30] <vorlon> Debian has largely dealt with this
[21:35] <arraybolt3> vorlon: I don't think the RFC.txt file in here is copyrighted by the IETF. No mention of them is made, and I'm not finding the file in the IETFs RFC stuff.
[21:35] <vorlon> ah
[21:37] <arraybolt3> *shrug* I guess I'll just treat it as LGPL too? I mean, that's the best I can figure. But maybe I or Eickmeyer should toss an email at the DSSI guys and say "GUYS! Your licensing info! Fix it! :P"
[21:38] <Eickmeyer[m]> DSSI is moribund.
[21:40] <arraybolt3[m]> ...well I guess then we'll only have to do this once!
[22:45] <mmikowski> arraybolt3: We've now added hardware support for the NXg1; it's in package testing. This fixes the overly large initrd. It was 169 MB; changing to xz compression brings it to 120; changing to MODULES=dep brings it to 82 (smaller than prior kernels). Eickmeyer[m] is pushing out a testing package, perhaps tomorrow. Plan is to release after a hardware peripheral check. Thanks for your help!
[22:46] <mmikowski> NXg1 = Kubuntu Focus NX gen 1.  FWIW, no other platform requires these tweaks, so only applies to that hardware which is sensed and patched on upgrade.
[22:46] <arraybolt3> mmikowski: Nice! Glad to help!
[22:48] <mmikowski> arraybolt3: Yes, thanks again. Used a symlink drop file in /etc/initramfs-tools/conf.d/ fwiw.  Testing change in initrd file for prior kernel version ...
[22:49] <mmikowski> also, interestingly, it seems to boot much faster now (haven't timed it, though).
[22:50] <mmikowski> Hmmm. I guess we could have dropped into /usr/share/initramfs-tools/conf.d, but this will work for a drop file IMO.
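[Editor's note: a conf.d drop file like the one described might look like this; the filename and exact contents are a sketch, not the actual Kubuntu Focus package:]

```shell
# Hypothetical contents of /etc/initramfs-tools/conf.d/kfocus-nxg1
# conf.d snippets are sourced after initramfs.conf and override it.
MODULES=dep
COMPRESS=xz
```

After dropping the file (or a symlink to it) in place, `sudo update-initramfs -u` rebuilds the image with the new settings.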
[23:12] <mmikowski> arraybolt3: updating the prior initrd results in 29 MB (1/4 the size). Not sure about the nv libs, but that's even more testing (gahhh!)
[23:13] <arraybolt3> Good grief, that seems a bit on the small side.
[23:41] <Eickmeyer[m]> xz gives much higher compression, but is much more CPU intensive.
[23:42] <sarnold> and vastly slower
[23:43] <sarnold> I've got a collection of scripts for unpacking the archive to search through it, and I've seen some of the unpack operations take four minutes of Xeon E5 CPU time
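[Editor's note: the ratio-vs-CPU trade-off is easy to see on toy data; real initramfs contents compress differently, so treat these numbers as indicative only:]

```shell
# Compare gzip and xz on ~1 MB of highly repetitive text: xz produces a
# smaller result but burns noticeably more CPU time doing it.
data=$(mktemp)
yes "initramfs test line" | head -n 50000 > "$data"
gzip -c "$data" | wc -c   # fast, larger output
xz -c "$data" | wc -c     # slower, smaller output
rm -f "$data"
```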
[23:46] <Eickmeyer[m]> sarnold: For the computers in question (11th gen Intel NUCs), the difference has been negligible, and boot time has actually been shorter, believe it or not.
[23:47] <sarnold> Eickmeyer[m]: heh, what were they using for storage? an sd card?
[23:47] <sarnold> I'm prepared to accept that xz might be faster than lz4 or zstd in some situations but I'd mostly expect it to be with *slow* storage
[23:47] <Eickmeyer[m]> sarnold: nvme SSD.
[23:47] <sarnold> o_O
[23:51] <Eickmeyer[m]> Yeah. Somewhat counterintuitive, but it's true.