[03:40] <bluesabre> vorlon: I think I've narrowed down that livecd-rootfs always pulls snap packages from the base seed instead.
[03:40] <bluesabre> Are there instructions somewhere on how to build an image with livecd-rootfs so I can iterate and test?
[03:41] <bluesabre> I guess the lazy solution would be to remove the snap packages from the base seed and just use the firefox transition package
[04:48] <amurray> RAOF: hi - any chance I could get the py-macaroon-bakery SRUs for jammy+kinetic promoted to -updates? LP: #1970267 thanks :)
[04:48] -ubottu:#ubuntu-release- Launchpad bug 1970267 in py-macaroon-bakery (Ubuntu Kinetic) "Unable to save macaroons in MozillaCookieJar() under python3.10" [Undecided, Fix Committed] https://launchpad.net/bugs/1970267
[04:51] <RAOF> /me looks
[05:01] <RAOF> Yeah, that looks releaseable. Enjoy!
[05:01] <amurray> awesome - thanks mate
[06:08] <ginggs> vpa1977: i triggered migration-reference/0 tests for openjdk-17 vs chromhmm and beagle
[06:08] <ginggs> https://autopkgtest.ubuntu.com/packages/b/beagle/lunar/ppc64el
[06:09] <ginggs> https://autopkgtest.ubuntu.com/packages/c/chromhmm/lunar/ppc64el
[06:09] <ginggs> and they "failed successfully" (still queued on s390x) so should no longer block openjdk-17
[06:10] <vpa1977> ginggs: Thank you !!!!!!
[06:24] <vorlon> bluesabre: build an image with livecd-rootfs> sudo apt install livecd-rootfs && mkdir build && cp -a /usr/share/livecd-rootfs/live-build/* build/ && cd build && sudo env LANG=C.UTF-8 SHELL=/bin/sh PROJECT=xubuntu SUBPROJECT=minimal ARCH=amd64 SUITE=lunar lb build
[06:25] <vorlon> bluesabre: as far as removal from the base seed, see live-build/auto/config lines 1041ff, which show you how to change the base seed on a per-subproject basis
[06:26] <vorlon> bluesabre: note the above livecd-rootfs invocation can be done on a full system or on a privileged container but not an unprivileged container (needs to be able to do various mount ops) or a chroot (can't do the snap stuff in a chroot without access to a running snapd)
[07:41] <mwhudson> vorlon: i think the base seed nonsense is another thing my branch fixes!!
[07:41] <mwhudson> really must land that when mm opens
[08:24] <guiverc> is anything required on a new release (jammy.2) for torrents to work from https://ubuntu.com/download/alternative-downloads ? OP on reddit claims the torrent just gives an error, as 22.04.2 doesn't exist at https://torrent.ubuntu.com/tracker_index ... I can confirm the error on a quick try (initial download works, but it errors when it gets to transmission)
[10:23] <LocutusOfBorg> :wq:wq
[10:23] <LocutusOfBorg> oops
[10:29] <aciba> RAOF, bdmurray: cloud-init SRU uploads have been queued for B, F, J, and K per https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/2008230 . Could you please review them so we can get them into -proposed?
[10:29] -ubottu:#ubuntu-release- Launchpad bug 2008230 in cloud-init (Ubuntu Kinetic) "sru cloud-init (23.1 update) Bionic, Focal, Jammy, Kinetic" [Undecided, New]
[10:31] <bluesabre> vorlon: Awesome, thanks! Created https://code.launchpad.net/~bluesabre/livecd-rootfs/+git/livecd-rootfs/+merge/438001 for the xubuntu-minimal snap issue
[13:17] <ginggs> is there any way to avoid getting the 6.1.0 kernel installed when running an autopkgtest with a trigger from -proposed?
[13:18] <ginggs> python-fabio consistently passes migration-reference/0 tests, but is incredibly flaky when triggered with any single package from -proposed
[13:18] <ginggs> https://autopkgtest.ubuntu.com/packages/p/python-fabio/lunar/amd64
[13:19] <ginggs> many test images are downloaded from here http://www.silx.org/pub/fabio/testimages/
[13:20] <ginggs> and just one http.client.IncompleteRead fails the autopkgtest
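[A flakiness like the one ginggs describes, where a single http.client.IncompleteRead fails the whole autopkgtest, is often mitigated with a retry wrapper around the download. A minimal sketch — the callable-based design and retry counts are illustrative, not python-fabio's actual test harness:

```python
import http.client
import time

def fetch_with_retries(fetch, attempts=3, delay=0.0):
    """Call fetch() until it succeeds, retrying on IncompleteRead.

    `fetch` is any zero-argument callable returning bytes; in a real
    harness it would wrap urllib.request.urlopen(url).read() for one
    of the silx.org test images.
    """
    last_exc = None
    for i in range(attempts):
        try:
            return fetch()
        except http.client.IncompleteRead as exc:
            last_exc = exc
            time.sleep(delay * i)  # back off a little more on each retry
    raise last_exc
```

Injecting the fetch callable keeps the retry logic trivially testable without touching the network.]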
[14:26] -queuebot:#ubuntu-release- Unapproved: systemd (focal-proposed/main) [245.4-4ubuntu3.19 => 245.4-4ubuntu3.20] (core, i386-whitelist)
[14:38] -queuebot:#ubuntu-release- New sync: ubelt (lunar-proposed/primary) [1.2.3-2]
[14:57] -queuebot:#ubuntu-release- Unapproved: docker.io (kinetic-proposed/universe) [20.10.21-0ubuntu1~22.10.1 => 20.10.21-0ubuntu1~22.10.2] (no packageset)
[15:25] -queuebot:#ubuntu-release- Unapproved: rejected docker.io [source] (kinetic-proposed) [20.10.21-0ubuntu1~22.10.2]
[16:11] <vorlon> number of tests update_excuses is waiting for is coming down nicely (3590).  otoh there are only 2950 tests currently in the queue, so some may have gotten lost and need to be requeued; will wait until the queue fully drains before trying to requeue them though
[16:11] <vorlon> since some of that delta is "tests that have run and completed since the last britney output"
[16:59] <slyon> ginggs: May I ask for your review on https://code.launchpad.net/~slyon/britney/+git/hints-ubuntu/+merge/438045 (cc @bdmurray)
[17:06] <vorlon> whooo only 6 days behind on arm64 autopkgtests
[17:08] <vorlon> slyon: a badtest of dgit/10.7 won't match autopkgtest results for dgit/blacklisted
[17:09] <vorlon> slyon, bdmurray: was dgit 10.5 broken or only the new one?
[17:10] <slyon> vorlon: ahh! good catch. I guess "force-badtest dgit/blacklisted/arm64" would do the trick?
[17:11] <vorlon> slyon: yes that's what we've used elsewhere
[17:11] <slyon> thanks, I've updated the MR
[17:14] <vorlon> slyon: the question about whether this affects 10.5 or only 10.7 is relevant, because if 10.5 isn't broken, this particular hint will let 10.7 migrate to the release pocket and regress coverage
[17:16] <slyon> 10.7 is very recent, i.e. after it got blacklisted. So I guess we don't know for 10.7. But I'll let bdmurray make a call about that as he has the context of why it got blacklisted
[17:37] <bdmurray> dgit was blacklisted because its tests were looping
[17:38] <vorlon> bdmurray: which version of them?
[17:39] <bdmurray> vorlon: likely 10.6 given the commit date and when 10.7 became available.
[17:40] <Eickmeyer> vorlon: Ran into this bug in sshfs on my own infrastructure (bug 2007750). Can't find a matching upstream bug in Debian, considering reverting the -1.1 change if it works.
[17:40] -ubottu:#ubuntu-release- Bug 2007750 in sshfs-fuse (Ubuntu) "Ubuntu lunar. Error after upgrade to 3.7.3-1.1" [Undecided, Confirmed] https://launchpad.net/bugs/2007750
[17:40] <bdmurray> We can add it back and see what happens with 10.7
[17:41] <bdmurray> s/add it back/remove it/
[19:10] <vorlon> Eickmeyer: not sure why you're flagging me on it?
[19:10] <vorlon> bdmurray: I don't see anything in the 10.7 changelog to suggest that it fixes this
[19:17] <vorlon> 306 fewer tests in the queue than listed in update_excuses as awaiting results.  But 'retry-autopkgtest-regressions --state RUNNING' returns 1983 records, so it seems many of the queued tests don't match what britney is waiting for?  I'm going to go ahead and queue these up now
[19:18] <vorlon> fwiw 1412 of those 1983 will be queued on arm64 which isn't great, but
[19:19] <vorlon> is anyone working on boost1.74?
[19:24] <Eickmeyer> vorlon: Yeah, I realized that later. Uploaded a NCR. nbd.
[19:29] <vorlon> getting python3-defaults to migrate might be a bit useful, wrt the fact that packages in -proposed that build-depend on python3-all-dev will currently fail their autopkgtests (mismatch between set of pythons in lunar and lunar-proposed)
[19:45] <LocutusOfBorg> vorlon, question: python-django-celery-results passed from neutral to fail, and it's shown as a regression. Sadly this isn't really a regression, since the new version is just bringing a new "upstream" autopkgtest
[19:46] <LocutusOfBorg> and also in Debian this new upstream autopkgtest is giving troubles, e.g. https://ci.debian.net/data/autopkgtest/unstable/amd64/p/python-django-celery-results/31696277/log.gz
[19:49] <LocutusOfBorg> it's failing for different reasons, but it looks like a meh test
[19:49] <LocutusOfBorg> and not being able to start a postgresql server is not for sure a celery-result bug I would say
[19:56] <vorlon> LocutusOfBorg: patch to disable the new test and let Debian sort it out?
[19:56] <vorlon> (assuming there are other tests that at least provide some value)
[19:57] -queuebot:#ubuntu-release- Unapproved: rocr-runtime (jammy-proposed/universe) [5.0.0-1 => 5.0.0-1ubuntu0.1] (no packageset)
[19:58] <LocutusOfBorg> vorlon, the other tests are "import foo"
[19:58] <LocutusOfBorg> not really a testsuite tbh
[19:59] <vorlon> yet sometimes fails and detects real package breakage :)
[19:59] <LocutusOfBorg> and in Debian the testsuite passes on testing and fails on sid (or experimental), so I can't even open an RC bug
[20:00] <LocutusOfBorg> vorlon, speaking of debian bug #1028371?
[20:00] -ubottu:#ubuntu-release- Debian bug 1028371 in src:bernhard "bernhard: needs rebuilds on top of new protobuf" [Serious, Open] https://bugs.debian.org/1028371
[20:01] <vorlon> LocutusOfBorg: you could open a bug and tag it sid tho?
[20:01] <vorlon> LocutusOfBorg: 1028371> tl;dr the implications/relevance for Ubuntu?
[20:02] <LocutusOfBorg> bernhard is arch:all containing protobuf autogenerated code, needs a rebuild on top of new protobuf, and fails a simple "import". I spot the issue via autopkgtests :D
[20:03] <LocutusOfBorg> in Ubuntu, there is no issue since we don't have multiple protobufs I would say
[20:03] <LocutusOfBorg> but there should be a strict runtime dependency
[20:03] <LocutusOfBorg> https://ci.debian.net/user/locutusofborg/jobs
[20:03] <LocutusOfBorg> asked ci to run some tests, let's see
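[The strict runtime dependency LocutusOfBorg suggests would pin the arch:all package to the protobuf runtime its generated code was built against, so a protobuf transition forces a rebuild rather than a silent import failure. A hypothetical debian/control sketch — package names and version bounds are illustrative, not bernhard's actual control file:

```
Package: bernhard
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends},
         python3-protobuf (>= 3.21), python3-protobuf (<< 4)
```
]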
[20:06] <vorlon> bdmurray: ros-ros-comm/arm64 is now blacklisted and blocks a number of significant packages in -proposed; would be interesting to know history there as well and what versions do or do not have problematic tests, given that the release version had a successful pass on Feb 16 and badtesting it will pull in the -proposed version
[20:09] <vpa1977> ginggs: for openjdk-17 migration, beagle and chromhmm s390x is still considered a regression. Would it be possible to ignore those guys too?
[20:15] <vorlon> dh_vdrplugin_depends: error: Bug in helper: The substvar must not contain a raw newline character (vdr:Depends=vdr-abi-2.6.0-debian\n)
[20:15] <vorlon> super useful
[20:15] <LocutusOfBorg> vorlon, vdr is fixed
[20:15] <vorlon> LocutusOfBorg: so I should just retry this build?
[20:15] <LocutusOfBorg> no.
[20:15] <LocutusOfBorg> deferred/3
[20:15] <LocutusOfBorg> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1026310
[20:15] -ubottu:#ubuntu-release- Debian bug 1026310 in vdr-dev "vdr-dev: dh_vdrplugin_depends fail with 'error: Bug in helper:'" [Serious, Open]
[20:16] <vorlon> mmk
[20:16] <LocutusOfBorg> Date: Sun, 26 Feb 2023 01:42:42 +0200
[20:16] <LocutusOfBorg> so, around 3-4 march it will be synced
[20:16] <bdmurray> vorlon: with ros-ros-comm it looks like version 1.15.15+ds-1, and the package has a large number of tests which require multiple instances to be created, and any one of them failing / timing out causes the whole test suite to be rerun.
[20:16] <LocutusOfBorg> unless you want to pick it up now
[20:16] <vorlon> LocutusOfBorg: I'm not sure I'm going to get around to figuring out how to pluck it from deferred; if it's easy for you to do, it would be nice for us to have this sooner, as it helps unblock boost1.74
[20:17] <bdmurray> vorlon: The same type of thing was happening with boost, dgit etc. I added them to never_run so that *any* progress would be made on the queue.
[20:17] <vorlon> bdmurray: ok so related to the 5-hour runtimes seen on https://autopkgtest.ubuntu.com/packages/r/ros-ros-comm/lunar/arm64 and it's reasonable for me to badtest it for now?
[20:19] <bdmurray> I wouldn't say it's a bad test; it's the underlying cloud failing to provision instances.
[20:20] <vorlon> bdmurray: nevertheless there's a pile of stuff blocked that shouldn't be
[20:21] <bdmurray> vorlon: Okay, so remove from never_run and add a badtest hint for that version?
[20:21] <vorlon> I'm going to mark it badtest blacklisted, and when the package configs are updated it should auto-heal
[20:21] <vorlon> there was at least one successful test with the -proposed version
[20:23] <LocutusOfBorg> vorlon, I just picked up the patch from the bug and uploaded. Retry once it's green
[20:24] <vorlon> LocutusOfBorg: cheers
[20:25] <vorlon> LocutusOfBorg: probably could've done 2.60-1maysync1 or such, but ok :)
[21:35] <UnivrslSuprBox> I've found another odd thing in the jammy archive... The binary package `libhsail-rt0-i386-cross=11.3.0-1ubuntu1~22.04cross1` states that it comes from `Source: gcc-11 (11ubuntu1.1)`. However, the Sources record for src:gcc-11 doesn't list that binary. `apt-cache showsrc libhsail-rt0-i386-cross` instead returns two entries of gcc-10-cross. Was something not updated in gcc-11-cross' source to cause this?
[21:37] <sarnold> UnivrslSuprBox: https://launchpad.net/ubuntu/+source/gcc-11-cross
[21:40] <vorlon> all of the gcc-* debian/control files give me angina
[21:40] <UnivrslSuprBox> sarnold: I agree, but the archive, apparently, does not
[21:41] <vorlon> https://i.kym-cdn.com/entries/icons/original/000/043/877/Screen_Shot_2023-02-22_at_1.09.45_PM.jpg
[21:41] <sarnold> UnivrslSuprBox: I found it by looking at my clone of the archive:
[21:41] <sarnold> $ locate libhsail-rt0-i386-cross 11.3.0-1ubuntu1~22.04cross1
[21:41] <sarnold> /srv/mirror/ubuntu/pool/universe/g/gcc-11-cross/libhsail-rt0-i386-cross_11.3.0-1ubuntu1~22.04cross1_all.deb
[21:43] <vorlon> unhelpfully, '11ubuntu1.1' doesn't appear in the version string of any of the binary packages
[21:44] <vorlon> so, the archive and launchpad are right that it was built from gcc-11-cross source; and it's bad that debian/control doesn't list it
[21:46] <UnivrslSuprBox> So it's not in Sources because the source package's control file is wrong, and the archive software trusts the control when building Sources. Yeah?
[21:46] <vorlon> yes
[21:50] <blackboxsw> SRU vanguard: or RAOF, bdmurray: please reject the cloud-init uploads in the -proposed unapproved queue; we have a mitigation that warrants a hot fix that we'll need to upload. LP: #2008230
[21:50] -ubottu:#ubuntu-release- Launchpad bug 2008230 in cloud-init (Ubuntu Kinetic) "sru cloud-init (23.1 update) Bionic, Focal, Jammy, Kinetic" [Undecided, New] https://launchpad.net/bugs/2008230
[21:50] <blackboxsw> we'll provide new uploads to B, F, J and K for cloud-init when we are ready to push changes to stable releases
[21:51] <blackboxsw> side effect of delayed openstack detection on bare metal that we are trying to mitigate https://bugs.launchpad.net/cloud-init/+bug/2008727
[21:51] -ubottu:#ubuntu-release- Launchpad bug 2008727 in cloud-init "[Lunar/Desktop] 5 min boot delay due cloud-init-local.service" [Low, Triaged]
[22:54] -queuebot:#ubuntu-release- Unapproved: rejected cloud-init [source] (focal-proposed) [23.1-0ubuntu0~20.04.1]
[22:55] -queuebot:#ubuntu-release- Unapproved: rejected cloud-init [source] (bionic-proposed) [23.1-0ubuntu0~18.04.1]
[22:55] -queuebot:#ubuntu-release- Unapproved: rejected cloud-init [source] (bionic-proposed) [23.1-0ubuntu0~18.04.2]
[22:55] -queuebot:#ubuntu-release- Unapproved: rejected cloud-init [source] (jammy-proposed) [23.1-0ubuntu0~22.04.1]
[22:55] -queuebot:#ubuntu-release- Unapproved: rejected cloud-init [source] (kinetic-proposed) [23.1-0ubuntu0~22.10.1]
[22:58] <vorlon> bdmurray: alright, I had a look at the dgit delta in -proposed and there are no new autopkgtests introduced - so there is no regression in likelihood of triggering the problem due to failing to launch instances.  I'll accept slyon's MP
[23:00] <bdmurray> vorlon: reduction?
[23:01] <vorlon> bdmurray: reduction?
[23:02] <bdmurray> vorlon: there is no reduction in likelihood?
[23:03] <vorlon> bdmurray: no, I meant regression; or perhaps more clearly there is no increase
[23:03] <vorlon> if the -proposed version increased the chance of this happening I would remain reluctant to badtest it
[23:04] <vorlon> bdmurray: I know there's a long list of things to worry about, do we have any retry logic within the autopkgtest runners in the event of failing instance allocation that could be tuned to mitigate this arm64 problem?
[23:07] <bdmurray> vorlon: I don't think there is anything easy to tune, because the instance appears to be up but then we time out waiting for ssh to connect.
[23:11] <bdmurray> Oh maybe if we did something more than just checking to see if the instance has an IP address
[23:33] <vorlon> bdmurray: well, or if there's a failure at this stage, kill it and try again?
[23:33] <vorlon> rather than aborting the entire run?
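[vorlon's kill-and-retry suggestion can be sketched as a loop around provisioning: if the SSH check times out, tear the instance down and boot a fresh one instead of aborting the entire run. The function names below are hypothetical, not the actual autopkgtest runner API:

```python
def run_with_fresh_instances(boot, wait_for_ssh, destroy, run_tests,
                             attempts=3):
    """Boot an instance and wait for SSH; on timeout, destroy it and
    retry with a fresh instance rather than failing the whole run."""
    for attempt in range(attempts):
        instance = boot()
        try:
            wait_for_ssh(instance)
        except TimeoutError:
            destroy(instance)   # came up with an IP but SSH never answered
            continue            # try again with a brand-new instance
        return run_tests(instance)
    raise TimeoutError("no usable instance after %d attempts" % attempts)
```

Passing the boot/destroy/test steps in as callables also makes the retry policy easy to exercise against fakes.]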