[10:52] <bdrung> cryfs 0.11.3-5 fails to build on ppc64el due to not enough memory. Can we give the builders more memory?
[10:58] <seb128> bdrung, that's rather a question for #launchpad?
[12:46] <ginggs> bdrung: as far as i know, we have big_packages for autopkgtests, but not for builders
[12:46] <ginggs> you can try fiddling with --max-parallel
[12:46] <ginggs> e.g. https://launchpad.net/ubuntu/+source/deal.ii/9.4.0-1ubuntu1
[12:47] <cjwatson> It's not a system memory question, it's an mlock question
[12:47] <cjwatson> (see discussion in #launchpad)
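A minimal sketch of the distinction cjwatson is drawing: the limit that matters here is the per-process locked-memory cap, which can be inspected from Python via the `resource` module (Linux-specific; the helper name `fmt` is just for illustration).

```python
import resource

# Sketch: a build (or its test suite) calling mlock()/mlockall() is
# checked against RLIMIT_MEMLOCK, not against total system RAM, so
# giving the builders more memory would not change the outcome.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)

def fmt(limit):
    """Render a limit the way `ulimit -l` does."""
    return "unlimited" if limit == resource.RLIM_INFINITY else f"{limit} bytes"

print("mlock soft limit:", fmt(soft))
print("mlock hard limit:", fmt(hard))
```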
[14:05] <ogayot> Hello, I'm looking for a sponsor for python-ansible-pygments -> bug 2019237. It is a package in universe. Thanks!
[14:05] -ubottu:#ubuntu-devel- Bug 2019237 in python-ansible-pygments (Ubuntu) "python-ansible-pygments 0.1.1-5 fails autopkgtest against pygments 2.15" [Undecided, New] https://launchpad.net/bugs/2019237
[14:15] <enr0n> bdrung: If you have time during +1 can you sponsor these syncs? bug 2019241, bug 2019242, bug 2019243, and bug 2019245
[14:15] -ubottu:#ubuntu-devel- Bug 2019241 in golang-github-hashicorp-go-slug (Ubuntu) "Please sync golang-github-hashicorp-go-slug 0.9.1-2 from Debian unstable" [Undecided, New] https://launchpad.net/bugs/2019241
[14:15] -ubottu:#ubuntu-devel- Bug 2019242 in mit-scheme (Ubuntu) "Please sync mit-scheme 12.1-3 from Debian unstable" [Undecided, New] https://launchpad.net/bugs/2019242
[14:15] -ubottu:#ubuntu-devel- Bug 2019243 in prometheus-ipmi-exporter (Ubuntu) "Please sync prometheus-ipmi-exporter 1.6.1-2 from Debian unstable" [Undecided, New] https://launchpad.net/bugs/2019243
[14:15] -ubottu:#ubuntu-devel- Bug 2019245 in python-uvicorn (Ubuntu) "Please sync python-uvicorn 0.17.6-1 from Debian unstable" [Undecided, New] https://launchpad.net/bugs/2019245
[14:21] <bdrung> will do the sponsoring for ogayot (in Debian) and enr0n
[14:21] <enr0n> bdrung: thanks!
[14:21] <ogayot> bdrung: appreciated :)
[14:31] <bdrung> syncs done. Those were easy ones.
[15:00] <bdrung> ogayot, python-ansible-pygments 0.1.1-6 uploaded to unstable
[15:36] <teward> bdrung: let me know if you need any assists in Ubuntu with the sponsors, my coredev is here if it's needed.
[15:36] <teward> at least for the next hour :P
[15:37] <teward> (i'm an hour behind but i'm still here)
[15:41] <bdrung> teward, no assistance needed. I sponsored all requests. The python-ansible-pygments upload to unstable will get auto-synced.
[15:41] <teward> cool cool
[15:41] <teward> i'm still around though if you get a massive batch of stuff and want to split some off, happy to assist where I can :)
[15:42] <teward> (esp. since i need a break from python 3.4 -> python 3.10 code migration today - this is giving me a headache for FT job >.<)
[15:42] <bdrung> teward, feel free to pick up everything else coming in today.
[15:43] <teward> bdrung: i'm not on -sponsors because i got self-sabotaged by SSL interception on my end, but if I see a few and i'm NOT neck deep in python code i'll grab a few.  Though I usually follow traditional bug process and self-assign things i'm handling too :)
[15:43] <teward> *reassigns self to the proper lists*
[15:44] <bdrung> teward, if you need distraction from work, here is a list you could work through: https://reports.qa.ubuntu.com/reports/sponsoring/
[15:44] <teward> *looks*
[15:47] <teward> bdrung: thanks.  SOME of the things here look like they need to be flushed out because EOL or EOSS but that's just my opinion :P
[15:47] <teward> i mean there's stuff from 2018 on here still :P
[15:47] <bdrung> teward, the queue did not get much love in recent years.
[15:47] <teward> accurate
[15:48] <teward> some of these are for xenial which I know is long dead so there might end up some rejects/closures due to time expiry on these i think
[15:48] <teward> *goes and looks at the three old 2018 items*
[15:52] <bdmurray> Wouldn't it make more sense to look at recent patches than cleaning up cruft?
[15:54] <teward> bdmurray: i agree, yes.  but unless the cruft is cleaned up it looks like it's never looked at.
[15:54] <teward> the other problem I see is that there's old stuff in here *already merged* that still shows up on the queue
[15:54] <teward> so "cleaning up the cruft" will make that queue look cleaner as well
[15:54] <teward> i also just got pulled into a meeting so what "free time" i had is now gone >.>
[15:54] <teward> sometimes i dislike working as IT Security, but my hands are in everything at work so
[15:54] <bdrung> probably work from both directions: address some recent entries, but also clean up some old ones.
[15:58] <teward> bdmurray: also, from the Community side of things, one of the complaints I hear from community members is the queue looks like nothing has been addressed since eons ago and it's selectively reviewed.  Which is partly true but mostly false.  Hence the 'cruft cleanup' thing.
[15:58] <teward> *is now away*
[16:11] <rbasak> teward: complaints> there are efforts to get able people at Canonical on a patch piloting rota again. Hopefully coming soon.
[17:32] <enr0n> Can a core dev please retry this autopkgtest? https://autopkgtest.ubuntu.com/request.cgi?release=mantic&arch=amd64&package=linux-lowlatency&trigger=fakeroot%2F1.31-1.2
[17:32] <enr0n> https://autopkgtest.ubuntu.com/request.cgi?release=mantic&arch=amd64&package=sbuild&trigger=fakeroot%2F1.31-1.2
[17:33] <enr0n> Er, sorry, I only meant to link the linux-lowlatency test, please ignore the sbuild test.
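The retry links pasted above follow a fixed query format; a small sketch of assembling one (the helper name `retry_url` is made up for illustration, and request.cgi only queues the retry for authorized logged-in users, hence the "core dev" request):

```python
from urllib.parse import urlencode

def retry_url(release, arch, package, trigger):
    # Hypothetical helper: builds a retry link in the same shape as the
    # ones pasted above. urlencode percent-encodes the "/" in the
    # trigger (e.g. fakeroot/1.31-1.2 -> fakeroot%2F1.31-1.2).
    query = urlencode({"release": release, "arch": arch,
                       "package": package, "trigger": trigger})
    return "https://autopkgtest.ubuntu.com/request.cgi?" + query

print(retry_url("mantic", "amd64", "linux-lowlatency", "fakeroot/1.31-1.2"))
```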
[17:34] <teward> enr0n: queued
[17:34] <enr0n> teward: thanks!
[17:34] <teward> rbasak: hopefully.  that will require people to go back into the queue and clear cruft too though, not just process new stuff
[18:58] <rbasak> teward: agreed. Hopefully we'll get the count going downwards and will be able to clear the old queue too.
[19:02] <enr0n> Can a core dev please retry this autopkgtest? I just ran it locally and it was OK. https://autopkgtest.ubuntu.com/request.cgi?release=mantic&arch=amd64&package=sbuild&trigger=fakeroot%2F1.31-1.2
[19:03] <kanashiro[m]> enr0n: done
[19:03] <enr0n> kanashiro[m]: thanks!