[06:51] <cpaelzer> hmm - is it a recent change that PPAs are no longer served from launchpad.net but from ppa.launchpadcontent.net?
[06:51] <cpaelzer> Seems to have broken some of my automated tests, so I wonder about the timing and background, if anyone knows more about it
[06:54] <cpaelzer> both servers still seem to be up, and wget from both serves the same content, so maybe only apt or add-apt-repository changed to use the new URL by default now ...
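A quick way to double-check that both hosts really serve the same content, along the lines of the wget comparison above (a minimal sketch; the exact URL path is illustrative only):

    # fetch the same index from the old and new hosts and compare
    wget -qO old.html https://ppa.launchpad.net/
    wget -qO new.html https://ppa.launchpadcontent.net/
    diff old.html new.html && echo "identical"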
[07:10] <vorlon> cpaelzer: yes, in the past month; I know the Launchpad team included it in their internal weekly report, and it also required some changes to, IIRC, the core snap builds, which I saw float by as MPs
[07:13] <cpaelzer> thanks vorlon
[07:13] <cpaelzer> that confirms it isn't just me; my tests were quickly adapted and are fixed now
[10:50] <schopin> athos: I'm doing the python-debian merge, and have forward-ported your zstd patches. Given the comments on the Salsa MR, I figured you might be interested in these, since they keep the external tool approach :)
[10:50] <schopin> athos: https://salsa.debian.org/schopin/python-debian/-/tree/external-zstd
[11:17] <cjwatson> cpaelzer: Out of interest, what did it break?  https://bugs.launchpad.net/launchpad/+bug/1473091 has the background
[11:18] <cjwatson> cpaelzer: and https://bugs.launchpad.net/ubuntu/+source/software-properties/+bug/1959015 for the add-apt-repository change
[11:31] <cpaelzer> thanks for the background cjwatson
[11:32] <cpaelzer> cjwatson: it broke tests that use add-apt-repository - those started to fail since the new URLs were not in the no_proxy env and then ran into some SSL handshake issues
[11:33] <cpaelzer> it had a few different symptoms in different places, but adding launchpadcontent.net + api.launchpad.net to no_proxy resolved it all
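A minimal sketch of that proxy fix, assuming a POSIX shell environment (the exact hosts to list depend on the tooling in use):

    # add the new Launchpad hosts to the proxy bypass list
    export no_proxy="${no_proxy:+${no_proxy},}launchpadcontent.net,api.launchpad.net"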
[11:35] <cjwatson> cpaelzer: right, fair enough
[11:35] <cjwatson> slightly bumpy but it should be better in the long run
[11:35] <cpaelzer> since it wasn't changed on jammy release day, it should be fine after a short bumpy period
[11:36] <cpaelzer> I was told that this was in the launchpad weekly update (I have to admit I do not read that); maybe it would be worth announcing that change a bit more widely
[11:36] <cpaelzer> to make it easier for others who might, like me, need to adapt e.g. their proxy config
[11:37] <cpaelzer> "announce -> fix" is alway nicer than "fail -> debug - arrrrr -> fix" :-)
[12:04] <athos> schopin: thanks! still haven't figured out the merge myself :) => currently dealing with how to handle large files
[12:05] <schopin> athos: fwiw I keep amending the linked branch because the CI seems particularly pedantic.
[16:28] <rbasak> utkarsh2102: hi! Did I hear correctly that you were going to look at the ruby-mysql2 dep8 holdup on mysql-8.0?
[16:29] <rbasak> I'm keen to get the dep8 rdeps fixed for mysql-8.0. I'm not sure what might be caused by the OpenSSL 3.0 transition, what might be caused by a new upstream release, and now we maybe have some issues with language connectors that might be affected by changes upstream, etc. So I don't have much confidence that there isn't a more serious blocker hiding somewhere that we don't know
[16:30] <rbasak> about yet.
[16:36] <utkarsh2102> rbasak: hey, yes, I'll take a look. I'll sync up with you or lenavoytek before diving into it.
[16:36] <rbasak> Thanks!
[16:36] <lena> thanks!
[16:47] <bluca> juliank: it seems some jobs on https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream are running multiple times
[16:48] <bluca> I was watching the amd64 job for PR id 22458 and it completed successfully, went back to queued, and started over
[16:48] <bluca> there were no pushes on the PR
[16:48] <juliank> bluca: it might have failed
[16:48] <bluca> it ran for many hours, and I caught glimpses of its progress from the log snippet
[16:48] <juliank> bluca: was there somehow a push after it passed?
[16:48] <bluca> it completed at least until the second to last job
[16:49] <bluca> s/job/autopkgtest suite/
[16:49] <juliank> so it might have rebooted and failed to come up again
[16:49] <bluca> no push on the PR
[16:49] <bluca> https://github.com/systemd/systemd/pull/22458
[16:50] <juliank> bluca: this is failing
[16:51] <juliank> bluca: last run failed 35 mins ago
[16:52] <juliank> um
[16:52] <juliank> well, the timing is wrong
[16:52] <bluca> is the failed log accessible?
[16:52] <bluca> also any reason it's just rerunning instead of reporting the failure?
[16:52] <juliank> no
[16:52] <juliank> if the machine fails to come back up again after a reboot it's a testbed failure that will be retried 3 times or so
[16:53] <juliank> you'll have to wait until it has failed often enough, or passed
[16:53] <juliank> there is no useful log
[16:54] <juliank> It's also not just systemd upstream jobs, machines just don't always come back up in general
[16:54] <juliank> or lose network connectivity
[16:54] <juliank> about 25% of the tests on amd64 are such abnormal failures
[16:54] <bluca> uhm but why is the machine rebooting before all the autopkgtest suites are done?
[16:55] <juliank> because it's rebooting between tests
[16:55] <bluca> is that expected?
[16:55] <juliank> yes
[16:55] <juliank> or like between testbed setup and tests
[16:55] <jawn-smith> vorlon: I know you love removing packages
[16:55] <bluca> ok, got it, thanks
[16:55] <jawn-smith> LP: #1960433
[16:55] <juliank> I have no idea how to debug this further
[16:55]  * vorlon oils the chainsaw
[16:56] <juliank> I'd have to patch autopkgtest to keep one such failing machine around I suppose
[16:56] <juliank> I've tried to reproduce the issue manually multiple times, but those runs always work
[16:57] <juliank> maybe I should create a VM and run a reboot loop
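A reboot loop along those lines might look like the following sketch, assuming an ssh-reachable guest named testvm (the host name is hypothetical):

    # reboot the guest repeatedly and check it comes back up each time
    while true; do
        ssh testvm sudo reboot || true   # the connection drops when the reboot starts
        sleep 60                         # give the guest time to boot
        if ssh -o ConnectTimeout=5 testvm true; then
            echo "$(date): back up"
        else
            echo "$(date): did NOT come back"; break
        fi
    done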
[16:59] <vorlon> jawn-smith: first I need to know if this is parsed as aegi-sub or aegis-u-b
[17:01] <vorlon> jawn-smith: for reference, the existence of Debian bug #997098 is sufficient for me, no need to file a separate bug in Ubuntu if there's already a release-critical Debian bug for the issue
[17:08] <juliank> vorlon: failure count today of amd64 hosts: https://paste.ubuntu.com/p/F9D7xpvnqJ/
[17:08] <juliank> I'd really like to get an instance on elektra
[17:09] <juliank> IS did a soft restart of all OVS but it did not help
[17:14] <vorlon> :/
[17:19] <juliank> vorlon: OK I see in the failing logs that systemd is starting a job called "/usr/bin/sh -c sleep 3; reboot"
[17:19] <juliank> vorlon: I wonder where that is from
[17:19] <juliank> vorlon: I think if the machine reboots itself rather than being rebooted from the outside, autopkgtest gets confused as it still expects it to be up
[17:22] <juliank> Because the machines take ages to boot due to load right now
[17:24] <juliank> Startup finished in 7.265s (kernel) + 15.902s (userspace) = 23.167s
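That line is the boot-time summary systemd logs, also retrievable after boot with systemd-analyze:

    $ systemd-analyze
    Startup finished in 7.265s (kernel) + 15.902s (userspace) = 23.167s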
[17:27] <jawn-smith> vorlon: it's a subtitles package so I'm going with aegi-sub. Thanks for the tip about the Debian bug
[17:35] <juliank> vorlon: So I added -o ConnectionAttempts=20 to the ssh command autopkgtest's auxverb uses, which will make it try to connect 20 times with 1s timeouts; maybe that helps
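For reference, ssh's ConnectionAttempts option retries the connection (once per second) instead of giving up on the first refusal; a sketch of how it would appear on a generic ssh command line (user and host names are hypothetical):

    # keep retrying while the testbed is still booting
    ssh -o ConnectionAttempts=20 -o ConnectTimeout=1 ubuntu@testbed true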
[17:36] <juliank> I guess I should hack that into existing running processes
[18:07] <juliank> we now have debugging enabled for half the jobs
[18:36] <juliank> vorlon: I locally hacked in NOVA_REBOOT=1 in ssh-setup/nova and will see if that helps. It seemingly helped a long long time ago in bos01
[18:36] <juliank> (pitti added --nova-reboot to the worker.conf; but we don't really have the luxury to reload workers right now)
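A rough sketch of what that local hack amounts to, assuming the setup script simply checks the environment variable (the variable and script names come from the messages above; the internal structure and server name are hypothetical):

    # hypothetical snippet inside ssh-setup/nova: force a nova-side reboot
    if [ "${NOVA_REBOOT:-0}" = "1" ]; then
        nova reboot "$SERVER_NAME"   # $SERVER_NAME is hypothetical
    fi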
[18:41] <bdmurray> I'm trying the new aiocoap w/ python3.10 defaults
[18:44] <bdmurray> hrm, I might have jumped the gun there
[23:46] <jawn-smith> seb128: can you have a look at LP: #1960458 when you get a chance? Your name is on the patch that is causing the issue
[23:46] <jawn-smith> or, at least the changelog entry for that patch