[01:48] <Unit193> Regarding https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2020-May/018679.html, I actually ran into this issue between Debian unstable and Ubuntu Bionic as well.  I ended up backporting 1.3.0
[06:03] <wgrant> doko, kanashiro, mitya57, RAOF: I've downgraded kernel and firmware (but not qemu) on the riscv64 buildds and retried some affected builds, and it looks promising so far. I've also been able to repro the qtbase-opensource-source gcc segfault on focal on real hardware with 5.4, so it does seem likely it's a kernel regression.
[08:09] <cpaelzer> rbasak: new fun with git-ubuntu on qemu
[08:10] <cpaelzer> rbasak: my special edge cases keep on coming, it seems
[08:10] <cpaelzer> rbasak: newer releases work fine it seems, but historically there was some mismatch between git based on salsa and git-ubuntu imports
[08:10] <cpaelzer> rbasak: that sometimes led to mismatches and is resolved in newer releases
[08:10] <Unit193> cpaelzer: Stop being a corner case, then! :P
[08:11] <cpaelzer> rbasak: but we might face something like it again due to git submodules
[08:11] <cpaelzer> Unit193: that isn't a problem - rbasak is happy to settle the corner-cases with me instead of later on a wider user group :-)
[08:12] <Unit193> cpaelzer: Hah, sorry in case the humor doesn't carry over.  I've hit some weird corner cases in things too, it's pretty nifty when you can find a bug like that and get it fixed. :)
[08:12] <cpaelzer> np, the humor carried over
[08:14] <cpaelzer> rbasak: repro should be 1. check out qemu bionic-devel 2. dpkg-buildpackage -S -nc -d - it will fail complaining about a mismatch between tarball and source in qemu.git/ui/keycodemapdb/..git
[08:14] <cpaelzer> that is a git submodule - part of git-ubuntu import (e.g. I get it restored when I git checkout .) and can be resolved by rm'ing the file before build
[08:45] <ackk> xnox, hi, is there anything I have to do to progress https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1877973 or will it just get picked up at some point?
[08:55] <mitya57> wgrant: thanks! qtbase is building for 5 hours and has not failed yet :)
[09:23] <wgrant> Downgraded just the kernel on hardware, and my test case has been running for 25 minutes now, whereas it previously failed in 10-600s reliably.
[10:12] <cpaelzer> rbalint: hi, I've done the identification and driving of a journald issue in focal in https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1875708
[10:13] <cpaelzer> rbalint: I think all the prep is done, next step would be backporting to focal and pushing it there with the next update
[10:13] <cpaelzer> in the past that part was doen by xnox usually grouping things with a bunch of others in his queue - would now you do such and I can "pass" the bug over to you?
[10:47] <ddstreet> cpaelzer i upload the majority of systemd srus, i can include yours if it's ready, i'm watching your bug
[10:49] <cpaelzer> ddstreet: it is ready upstream
[10:49] <cpaelzer> I didn't look into doing a backport yet and how complex or not that might be
[10:50] <cpaelzer> different topic - did the arm builders just have some hard reset - I see builds in failed state but no build log
[10:50] <cpaelzer> https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4062/+build/19301601
[10:50] <cpaelzer> https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4062/+build/19301602
[10:50] <cpaelzer> https://launchpad.net/builders doesn't look like all would be gone ...
[10:55] <rbalint> cpaelzer, ddstreet  i kept an eye on LP: #1875708, it missed last upload to groovy, but good to go in next one, thanks for driving it and for the heads-up
[10:56] <ddstreet> ack rbalint, and there's a version in focal-proposed that should go out next week, i think it only needs backporting to focal so can go into the next one
[10:57] <rbalint> ddstreet, cool, thanks
[10:57] <ddstreet> i can push to focal once you've got in groovy.  thanks!
[10:57] <cpaelzer> thank you ddstreet and rbalint - so I'll leave 1875708 to you then, to be grouped with the next uploads \o/
[10:58] <cjwatson> cpaelzer: #launchpad
[10:58] <cjwatson> (it is known)
[12:05] <kanashiro> wgrant: thanks! ruby2.7 migrated to the release pocket :)
[12:05] <wgrant> kanashiro: Great. Sorry about that.
[12:05] <wgrant> I have a kernel bisect to look forward to tomorrow.
[12:06] <kanashiro> np
[12:12] <xnox> ackk:  you can ping AA => ~ubuntu-archive team members, i.e. sil2100 vorlon RAOF doko
[12:15] <ogra> AA ... the anonymous archivists ...
[12:34] <rbasak> cpaelzer: on qemu and ..git, that's expected I believe
[12:34] <rbasak> git cannot store anything called .git found in the source tree in general, so it must be stored escaped
[12:35] <rbasak> If you use dpkg-buildpackage directly then you won't get that unescaped
[12:35] <rbasak> The intention is that "git ubuntu build" would do the correct unescaping. I'm not sure if that's implemented yet. But it's not an issue from the importer end.
[12:35] <cpaelzer> rbasak: the problem is that the decision how to "not store" it seems to be different
[12:35] <cpaelzer> rbasak: you stored it as ..git while the tarball doesn't have it at all
[12:36] <cpaelzer> so on build it complains that there is a ..git it doesn't know
[12:36] <rbasak> The extracted source tree must have .git - otherwise there wouldn't have been anything to escape at import time
[12:37] <rbasak> But yes - if .git gets renamed to ..git, then from the perspective of dpkg-source it'll be a new directory not present in the orig tarball and you'll get a complaint
[12:37] <rbasak> This escaping has been present for years, BTW. It's not a new thing.
[12:38] <rbasak> Oh
[12:38] <rbasak> But I did fix a bug in it not too long ago
[12:39] <rbasak> cpaelzer: I think it's an error for Debian to be uploading source packages with .git in them
[12:39] <rbasak> Even if it's a submodule reference thing
[12:40] <rbasak> If the process for generating the source package from git before upload could be adjusted not to do that, then that'd fix the problem for you maybe?
[12:41] <rbasak> Because otherwise the result simply cannot be imported back into git, since git will not accept an entry called .git
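The constraint rbasak describes can be demonstrated in a throwaway repo: git rejects any tree entry whose path contains a `.git` component, while the escaped name `..git` (the convention git-ubuntu uses) is an ordinary, trackable path. A minimal sketch; the keycodemapdb path just mirrors the qemu case:

```shell
# Sketch: git refuses to track a path component named .git, which is why
# git-ubuntu must store the in-tree submodule marker under an escaped name.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
mkdir -p ui/keycodemapdb/.git
echo "gitdir: ../../.git/modules/keycodemapdb" > ui/keycodemapdb/.git/config

# The literal path fails git's path verification:
if git update-index --add ui/keycodemapdb/.git/config 2>/dev/null; then
    echo "unexpectedly accepted"
else
    echo "rejected as expected"
fi

# The escaped name is tracked like any other file:
mv ui/keycodemapdb/.git ui/keycodemapdb/..git
git add ui/keycodemapdb/..git/config
git ls-files
```

On extraction the rename is of course still visible, which is exactly the `dpkg-source` mismatch complaint discussed above: `..git` is a path the orig tarball does not contain.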
[12:45] <cpaelzer> rbasak: it seems it was a recent upload by mdeslaur that did this
[12:45] <cpaelzer> rbasak: I'm usually working off git-ubuntu (which didn't have .git or ..git) but I expect he would work off the source package
[12:45] <cpaelzer> rbasak: he then has got the .git as it is in the tarball
[12:46] <cpaelzer> and on re-import it became ..git
[12:46] <cpaelzer> see git diff import/1%2.11+dfsg-1ubuntu7.22..import/1%2.11+dfsg-1ubuntu7.23 -- ui/keycodemapdb/
[12:46] <cpaelzer> that is why I only see this "recently"
[12:49] <rbasak> cpaelzer: in the reimport that gives me no results. Could you pastebin what you see please? But I'm still puzzled as the orig tarball is the same for both. So git-ubuntu should see the same situation for both versions.
[12:50] <rbasak> What's in your source tree outside debian/, for a "3.0 (quilt)" package, should be unconnected to what ends up in the built source package, assuming your source package build succeeds
[12:50] <rbasak> Because on extraction that part of the source tree will come from the orig tarball and nowhere else.
[12:51] <cpaelzer> rbasak: https://paste.ubuntu.com/p/xQV7m5GfGZ/
[12:52] <cpaelzer> commit 9514499ffce37066c0048f543b33d9ec01ef5cc4 (tag: reimport/import/1%2.11+dfsg-1ubuntu7.22/0, tag: import/1%2.11+dfsg-1ubuntu7.22)
[12:52] <cpaelzer> commit 9360dd8574d10d92b62742d9cbb09e9c2274296e (tag: import/1%2.11+dfsg-1ubuntu7.23)
[12:53] <rbasak> Right I see https://git.launchpad.net/ubuntu/+source/qemu/diff/ui/keycodemapdb/..git?h=9360dd8574d10d92b62742d9cbb09e9c2274296e thanks
[12:55] <rbasak> cpaelzer: you're still looking at the previous import tree I think?
[12:56] <rbasak> That change came from the bugfix I believe - not anything Marc did or didn't do
[13:08] <cpaelzer> rbasak: well formerly it wasn't imported and therefore worked
[13:08] <cpaelzer> rbasak: now it is imported as ..git and breaks on buildpkg
[13:09] <cpaelzer> so "correctly" importing made it worse in this particular case :-/
[13:09] <rbasak> cpaelzer: that sounds right :-/
[13:09] <ahasenack> what if .git isn't imported at all?
[13:09] <cpaelzer> which is as it was before
[13:09] <ahasenack> dpkg knows how to ignore its presence
[13:10] <rbasak> ahasenack: theoretically a source package build might depend on its presence
[13:10] <rbasak> cpaelzer: could you use -I..git?
[13:10] <ahasenack> I've seen builds break because they *see* it
[13:10] <cpaelzer> rbasak: but it won't get that, as it is only ..git, not the .git it was depending on
[13:10] <ahasenack> i.e., they assume they are in a devel environment
[13:10] <cpaelzer> ahasenack: yes I've seen the same
[13:10] <rbasak> cpaelzer: "git ubuntu build" would then round-trip it correctly
[13:12] <cpaelzer> I can work around it, but perception-wise losing the ability to dpkg-buildpackage out of a git-ubuntu checkout is a loss for me
[13:13] <ahasenack> especially since git ubuntu build was almost removed from the snap
[13:13] <cpaelzer> and I don't want it back :-)
[13:14] <cpaelzer> I want the normal paths to work
[13:16] <rbasak> cpaelzer: I don't see any workaround that doesn't result in some source package I could construct that would then break even with a fully implemented "git ubuntu build".
[13:17] <rbasak> cpaelzer: can you convince upstream to stop shipping .git in their release?
[13:17] <rbasak> cpaelzer: or configure dpkg-buildpackage to use -I..git, assuming that works?
[13:17] <ahasenack> how does gbp or dgit import such a package?
[13:17] <ahasenack> (obvious question)
[13:18] <rbasak> I assume they drop the .git
[13:18] <rbasak> Which is fine if you're the package maintainer, because you're in a position to fix your build if dropping .git causes a problem
[13:18] <rbasak> But git-ubuntu doesn't have that luxury
[13:18] <ahasenack> isn't it more surprising to find a ..git directory?
[13:19] <rbasak> It's more surprising to have a build from "git ubuntu build" fail when it succeeds otherwise from the archive.
[13:19] <ahasenack> hm, no, git ubuntu build has many bugs :)
[13:19] <ahasenack> it's surprising when it works ;)
[13:19] <ahasenack> unless you mean a future git ubuntu build
[13:19] <rbasak> A fully implemented "git ubuntu build" with bugs fixed :)
[13:19] <rbasak> Yes
[13:20] <rbasak> I'm avoiding having the import broken by design
[13:20] <ahasenack> and these are not going to be addressed soon
[13:20] <ahasenack> the build bugs
[13:20] <rbasak> Very few packages even have a .git AFAIK
[13:21] <ahasenack> cpaelzer: qemu_4.2.orig.tar.xz does not have a .git
[13:21] <ahasenack> is it a new upstream version that has it?
[13:21] <rbasak> It is a new development that .git is found in an upload that isn't the main .git repository
[13:21] <cpaelzer> no new versions are already good
[13:22] <cpaelzer> it was Bionics 2.11 that has the issue
[13:23] <ahasenack> qemu_2.11+dfsg.orig.tar.xz has it in a subdirectory,
[13:23] <ahasenack> $ find . -name .git
[13:23] <ahasenack> ./ui/keycodemapdb/.git
[13:23] <ahasenack> is that it?
[13:25] <rbasak> That's it
[13:29] <cpaelzer> yep
[13:29] <cpaelzer> as I said I can work around it
[13:29] <cpaelzer> I was mostly wondering why it now shows up
[13:29] <cpaelzer> but it seems we can only choose which set of packages to break
[13:30] <cpaelzer> then being correct on import seems to be the better choice and we can keep things as-is
[13:32] <rbasak> Thanks
[13:33] <rbasak> cpaelzer: so on the other thing
[13:33] <rbasak> The qemu reimport didn't adopt your latest upload tags
[13:33] <rbasak> AFAICT, it's because of the empty directory thing
[13:33] <rbasak> http://paste.ubuntu.com/p/FVnHFDymV4/ is the difference between the import and the upload tag
[13:34] <cpaelzer> rbasak: yes I have seen that it didn't adopt my history
[13:34] <cpaelzer> rbasak: the old packaging around roms was very bad
[13:34] <rbasak> Is that different to before?
[13:34] <cpaelzer> rbasak: that is why I went some lengths to clean it up
[13:34] <cpaelzer> also mjt did on the debian side improve rom handling
[13:35] <rbasak> cpaelzer: does this mismatch any of your expectations from the reimport?
[13:35] <cpaelzer> rbasak: consider anything touching roms/ pre-Eoan as "probably awkward"
[13:35] <cpaelzer> rbasak: I'd be ok wit hthe re-import as-is
[13:35] <rbasak> OK thanks
[13:36] <rbasak> So we got some talking points around edge cases but so far the reimports are "expected behaviour" then
[13:36] <rbasak> Please keep an eye out for anything else :)
[13:37] <cpaelzer> will do
[13:57] <ackk> xnox thanks
[15:45] <ackk> sil2100, hi, around?
[16:15] <sil2100> ackk: hey! Yeah, I'm aroundish now
[16:16] <ackk> sil2100, hi, could you take a look at https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1877973 when you have time? it's not urgent, though
[16:20] <sil2100> ackk: ok, looking in a few moments o/
[16:21] <ackk> sil2100, thanks!
[17:56] <smoser> ahasenack: can you do an import of squashfs-tools-ng
[18:44] <ahasenack> smoser: importing
[18:48] <ahasenack> smoser: done
[18:55] <smoser> gracias.
[18:58] <gpiccoli> vorlon, xnox - do you know exactly why Ubuntu uses "wait-for-root" as its initramfs approach to..wait for root? Debian has no mention of it...
[18:59] <gpiccoli> instead, Debian relies on script-looping on local stage, which for me, makes sense (and would allow me to fix an issue I'm working hehe)
[18:59] <gpiccoli> Tnx in advance
[19:05] <vorlon> gpiccoli: because it makes the boot faster instead of having to wait for udev settle of random events
[19:05] <vorlon> on systems with significant numbers of unrelated disks
[19:05] <vorlon> what is your case where you need to loop?
[19:07] <gpiccoli> vorlon, thanks! My case is a crypto volume on top of raid1 - currently it does not work. Reason is that cryptroot fails on local-top and panics to shell
[19:07] <vorlon> well, that's surprising to me
[19:07] <gpiccoli> sorry vorlon, I expressed myself in an incomplete way hehe
[19:08] <vorlon> because both raid and crypto are meant to be udev-activated, and self-assemble with no need to loop in the main script
[19:08] <gpiccoli> It works! The issue is if a raid1 member is removed
[19:08] <vorlon> ah
[19:08] <gpiccoli> hehe my bad!!
[19:08] <gpiccoli> my idea to fix was to allow cryptroot to fail gently, and let the "xnox's poor man's last resort" mdadm script on local-block start the md array with a missing member...
[19:08] <gpiccoli> then cryptroot (also in local-block) would take over and decrypt the volume
[19:08] <gpiccoli> that relies on the man page idea of local-block, that it loops
[19:09] <vorlon> so the issue then is that you have both mdadm and cryptroot hooks that you want to be able to interact with the console?
[19:09] <vorlon> why do we not in general allow mdadm arrays to be assembled in degraded mode by default?
[19:09] <gpiccoli> no console involved, I want a "broken" raid1 array (or any mirroring) to be able to mount and continue the boot
[19:10] <vorlon> ok
[19:10] <gpiccoli> it's an option vorlon
[19:10] <vorlon> alright
[19:10] <gpiccoli> to allow the degraded by default, without spinning on local-block script
[19:10] <vorlon> if it's an option, and it's enabled, then I think that ought to be handled inline by the udev rules and not depend on falling through to an initramfs script?
[19:10] <vorlon> xnox: ?
[19:11] <gpiccoli> vorlon, again my bad English! It's a design option that I could work on
[19:11] <gpiccoli> It's not currently an option!
[19:11] <vorlon> ah
[19:11] <gpiccoli> Desculpe vorlon =)
[19:11] <vorlon> well, it's possible that not assembling degraded arrays by default is a deliberate design decision also, but I don't remember
[19:11] <gpiccoli> currently, mdadm tries for 2/3*ROOTDELAY iterations before giving up and assembling degraded
[19:12] <vorlon> we should at least ensure that the default behavior between initrd and rootfs is similar
[19:12] <gpiccoli> I guess this is what it's currently doing, keeping the consistency
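For context on the "option" being discussed: Ubuntu has historically exposed a boot-degraded switch for mdadm in the initramfs. The names below are from older Ubuntu releases and are illustrative assumptions here; support and exact behavior vary by release, so don't read them as focal's current local-block handling:

```shell
# Historical Ubuntu knobs for allowing boot from a degraded md array.
#
# On the kernel command line:
#   bootdegraded=true
#
# Or persistently, in /etc/initramfs-tools/conf.d/mdadm:
BOOT_DEGRADED=true
```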
[19:14] <gpiccoli> vorlon, the problem that I see with wait-for-root is that it seems to only care about udev; it doesn't give a chance to re-run scripts on local-block
[19:14] <gpiccoli> how about if we do partial waits for root, allowing local-block scripts to re-run in between,
[19:14] <gpiccoli> accounting for the total time so it does not exceed ROOTDELAY?
[19:21] <vorlon> gpiccoli: that sounds plausible
[19:23] <gpiccoli> great vorlon, I'll try to work something up; let's see xnox's opinion also about the mdadm default-to-degraded option hehe
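gpiccoli's proposal, several short waits with local-block re-runs in between, the total bounded by ROOTDELAY, could look roughly like this. This is a plain-shell sketch, not initramfs-tools code: `wait_for_root` and `run_local_block_scripts` are hypothetical stand-ins, not the real helpers.

```shell
#!/bin/sh
# Sketch: alternate short waits for the root device with re-runs of the
# local-block scripts, never exceeding ROOTDELAY in total.
# wait_for_root and run_local_block_scripts are stand-ins for illustration.
ROOTDELAY=${ROOTDELAY:-30}
step=5
elapsed=0

wait_for_root() {
    # Stand-in: pretend the root device appears after 20s of waiting.
    [ "$1" -ge 20 ]
}

run_local_block_scripts() {
    # Stand-in for iterating scripts/local-block/* (where mdadm could
    # start a degraded array and cryptsetup could then unlock on top).
    echo "local-block pass at ${1}s"
}

while [ "$elapsed" -lt "$ROOTDELAY" ]; do
    if wait_for_root "$elapsed"; then
        echo "root device ready after ${elapsed}s"
        break
    fi
    run_local_block_scripts "$elapsed"
    elapsed=$((elapsed + step))
done
```

The point of the structure is that each failed partial wait gives layered setups (degraded md, then crypto on top) another chance to make progress, instead of one long wait-for-root that only watches udev.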
[19:28] <The_LoudSpeaker> Hey! I have asked the OP to provide requested details on the bug report. Can anyone else also test this? https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1877553
[19:42] <xnox> gpiccoli:  vorlon: the intention is to use udev and assemble all the things, failing that there is a loop to iterate things, and failing that there are fail hooks.
[19:42] <xnox> gpiccoli:  vorlon: it is possible that, when we designed and implemented the loop + eventual fallbacks, it didn't get integrated into Ubuntu properly, as we still had the remains of "just udev-activate everything"
[19:42] <vorlon> xnox: so what is the intended /outcome/ if a raid array is missing members but can be assembled in degraded mode?
[19:43] <gpiccoli> I can say this works! ^
[19:43] <xnox> gpiccoli:  vorlon: loop, loop, loop, start degraded, loop, loop, loop
[19:43] <vorlon> what's the loop loop loop?
[19:43] <vorlon> after starting degraded
[19:43] <xnox> i.e. start it degraded, whilst there are still some loops available to assemble things on top of it.
[19:44] <xnox> vorlon:  the loop that iterates in the initramfs, trying to assemble/start/find rootfs
[19:44] <gpiccoli> but only works due to a hack I found (from Ben Hutchings) in initramfs-tools
[19:44] <vorlon> why does that require looping instead of udev?
[19:44] <vorlon> xnox: we've basically disabled the loop part of initramfs-tools since forever in Ubuntu
[19:44] <xnox> vorlon:  because we don't have systemd, such that udev can start systemd timer units that assemble things degraded after a delay.
[19:44] <xnox> vorlon:  it's not the old loop part, but a new block-disk loop only.
[19:45] <vorlon> xnox: however, assembling the array degraded should trigger dependent udev events with no further looping required
[19:45] <xnox> which uses udev
[19:45] <xnox> vorlon:  correct, and that's what I expect to happen. Because after starting degraded, that block-loop thing calls udevadm trigger/settle, and tries any further hooks.
[19:45] <xnox> gpiccoli:  i guess the question is => does that work on debian? degraded mdadm+crypto?
[19:46] <vorlon> udevadm trigger/settle, instead of wait-for-root
[19:46] <xnox> i think so.
[19:46] <xnox> vorlon:  because that new block-loop, is to support really weird shit like mixing encrypted & non-encrypted PVs in VG, where things jump layers.
[19:46] <xnox> vorlon:  and like nested VGs
[19:46] <gpiccoli> xnox, I don't expect that to work currently due to the first limitation on cryptroot script - it fails on local-top and drops to a shell. With my change, it should work in debian (I can test), but not on ubuntu
[19:47] <vorlon> hmm
[19:47] <gpiccoli> because we don't loop in local-block, instead we use wait-for-root and give-up after a while
[19:47] <xnox> gpiccoli:  ack. i think our crypto script is better than debian's true.
[19:47] <gpiccoli> no, the script is the same hehe
[19:47] <xnox> gpiccoli:  i think you want to open a bug report with details.
[19:47] <gpiccoli> it fails in both currently hehe
[19:47] <xnox> gpiccoli:  against cryptsetup mdadm initramfs-tools
[19:47] <xnox> and we can check what's missing.
[19:48] <gpiccoli> right, I'll do that and propose my change there
[19:48] <xnox> nesting is hard.
[19:48] <xnox> i thought we did well to support "normal stackings" and "degraded stuff" sensibly, but we might be missing some stacks orderings.
[19:48] <gpiccoli> agreed
[19:48] <xnox> i.e. i think 2 LUKS => assembled into RAID1 end up degraded more often than they should.
[19:49] <gpiccoli> tnx xnox and vorlon =)