[09:58] <bluca> morning folks, we are having some problems with the GitHub integration of https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream
[09:59] <bluca> for a few days now - are there known issues?
[09:59] <bluca> symptom is that some jobs never report back
[10:00] <bluca> the GH log shows the request is sent successfully when a PR is opened, and the job shows up on the autopkgtest page for a while, so it is running, but then it goes away and the status is never reported back
[10:01] <bluca> eg: https://github.com/systemd/systemd/pull/22067 for s390x I can see the request going out and giving me the API url to check the result
[10:01] <bluca> https://api.github.com/repos/systemd/systemd/statuses/927892e0ecf9b7148543e5a5ab5d866a8255fe07
[10:02] <bluca> which says the job was successful, but on the PR it's still "yellow" (pending)
[10:03] <bluca> wait, it says pending there actually (looked at the wrong arch)
[10:04] <bluca> is it jobs getting stuck somewhere maybe?
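The commit status bluca is watching can be polled directly from the GitHub statuses API. A minimal sketch of how to do that, assuming curl and jq are available (the URL is the one pasted above; each entry in the returned array carries a context and a state such as pending, success, or failure):

    # List each reported status context and its current state for the commit
    curl -s https://api.github.com/repos/systemd/systemd/statuses/927892e0ecf9b7148543e5a5ab5d866a8255fe07 \
      | jq '.[] | {context, state}'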
[10:22] <schopin> doko: if you have some time today could you have a look at LP: #1956765 and maybe sponsor it?
[10:33] <doko> schopin, ok. is this also needed in 3.10?
[10:33] <schopin> yup, I provided both debdiffs.
[12:08] <toabctl> cpaelzer, could you have a look at https://bugs.launchpad.net/ubuntu/+source/fuse3/+bug/1956949 please?
[12:30] <bluca> ddstreet: do you know who we should ping for autopkgtest infra issues?
[13:05] <cpaelzer> toabctl: looking ...
[13:31] <cpaelzer> toabctl: answered on the bug, which triggered many bug pings (you'll find links there)
[13:32] <cpaelzer> TL;DR: we need to check if all other deps are ready now, I added bug tasks for ginggs and didrocks (potentially to reassign it to someone else)
[13:34] <cpaelzer> toabctl: if all the others are fine, they'll get uploads and we'll get it resolved
[13:34] <toabctl> cpaelzer, thx. is there a workaround we could apply to get image builds working now?
[13:34] <cpaelzer> toabctl: if not, I'll revert the last change on open-vm-tools and wait until the others are ready
[13:34] <toabctl> maybe adding "* ! open-vm-tools" to the cloud-image seed?
[13:35] <toabctl> I'm a seed/germinate newbie so no idea if that would work ..
[13:35] <cpaelzer> It depends on what images you are building atm; for some, chances are they don't need open-vm-tools at all (I mentioned that in the bug updates)
[13:35] <cpaelzer> but I'm not gonna massively restructure seeds and make this worse
[13:36] <cpaelzer> right now, after we gave everyone a chance to answer for their packages, we can just revert the change in open-vm-tools
[13:36] <cpaelzer> In fact I could revert it now, do an upload - and if in a few days we know all is good, we can revert the revert
[13:36] <cpaelzer> that seems to be the best short-term solution
[13:37] <toabctl> that would be good imo. then we are not blocked on image builds.
[13:48] <cpaelzer> toabctl: uploaded and bug updated
[13:49] <toabctl> cpaelzer, thanks a lot!
[13:49] <cpaelzer> toabctl: but really, maybe that was a good wake-up call for all involved teams to re-check their packages in this overall transition
[13:49] <cjwatson> toabctl: ! generally isn't useful in seeds
[13:49] <cjwatson> See germinate(1)
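Rather than adding exclusions to the seed, the usual first step is to find out where a package is actually seeded. A sketch of two ways to check, assuming the ubuntu-dev-tools package (which ships the seeded-in-ubuntu helper) is installed, and using the platform seeds repository linked later in this log:

    # Ask the published germinate output which images pull the package in
    seeded-in-ubuntu open-vm-tools

    # Or grep a local checkout of the platform seeds directly
    git clone https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/platform
    grep -rn open-vm-tools platform/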
[13:56] <ginggs> cpaelzer: what uses unionfs-fuse?
[13:58] <cpaelzer> ginggs: it is in the seeds
[13:58] <cpaelzer> ginggs: it has a comment there
[13:58] <cpaelzer> let me fetch a link ...
[13:59] <cpaelzer> ginggs: https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/platform/tree/live-common#n12
[13:59] <ginggs> cpaelzer: thanks
[14:00] <cpaelzer> paride: FYI ^^ you know the bugs, discussion also happened here, thanks for tracking the server portion of this so I can focus on other things
[14:00] <cpaelzer> ginggs: since the bug # is ~1.5M lower than current ones, chances are it isn't needed anymore
[14:02] <paride> cpaelzer, thanks for adding the bug tasks, I'm reading the comments/backlog
[14:02] <ginggs> here's hoping, because unionfs-fuse doesn't look like it's ready for fuse3
[14:14] <rbasak> tjaalton, bdmurray: I commented on https://bugs.launchpad.net/ubuntu/+source/v4l2loopback/+bug/1921474/comments/12. I'm fairly sure others on the SRU team would agree so I posted the comment. Let's discuss if you don't.
[14:20] <paride> ginggs, do we actually have a good reason to keep unionfs-fuse in main?
[14:20] <paride> maybe it's obviously "yes", but it's a question better asked sooner than later :)
[14:20] <paride> `apt rdepends unionfs-fuse` only shows a Suggests: schroot
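For context, apt rdepends lists reverse dependencies straight from the package indexes, so it is a quick check for whether anything still pulls a package in. A sketch of the invocation and the shape of its output here, per paride's finding that only a Suggests from schroot remains:

    $ apt rdepends unionfs-fuse
    unionfs-fuse
    Reverse Depends:
      Suggests: schroot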
[14:22] <ginggs> paride: somehow, unionfs-fuse is not in main, but it is seeded. See the link c.paelzer pasted above
[14:22] <paride> nevermind, it's not in main
[14:23] <tjaalton> rbasak: I'll let vicamo reply
[14:26] <paride> ginggs, but isn't that commented out?
[14:26] <paride> https://git.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/+git/platform/commit/live-common?id=9a6fc9d9cf687a8492d6e19f9425316252d4254e
[14:27] <ginggs> paride: it looks that way :)
[14:28] <cpaelzer> ginggs: paride: indeed it is commented out, I only grepped
[14:29] <cpaelzer> ginggs: paride: but the reported conflict still comes from "the rest" like grub, s390x-tools, ...
[14:29] <cpaelzer> essentially all need to be ready and move at once
[15:16] <cpaelzer> ginggs: paride: (Trevinho?): FYI while unionfs-fuse might not be needed anymore it should be compatible with fuse3 soon, see https://bugs.launchpad.net/ubuntu/+source/fuse3/+bug/1956949/comments/10
[15:38] <ddstreet> bluca I believe that juliank owns the autopkgtest infrastructure now
[15:38] <bluca> thanks - juliank are there known issues?
[15:39] <bluca> jobs disappear from https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream but status is never reported back to the github api
[15:39] <juliank> bluca: there are not
[15:39] <bluca> it seems random which job it affects, it's ~50%
[15:39] <juliank> that makes sense
[15:39] <bluca> also it's not fixed by architecture, it varies
[15:39] <bluca> been going on for at least a week I'd say
[15:41] <bluca> eg, PR 22070 is no longer on that autopkgtest page, but arm64 and s390x jobs are still marked as pending on the github API report page https://api.github.com/repos/systemd/systemd/statuses/5b1cf7a9be37e20133c0208005274ce4a5b5c6a1
[15:41] <bluca> bad bot
[15:46] <juliank> bluca: so presumably one of the 2 web workers fails but both have the right credentials at least
[15:49] <bluca> I'm not really familiar with how this is implemented :-) is there any more data you need? maybe an example HTTP webhook request log from GH?
[15:49] <juliank> bluca: I think results should come in now, the service that submits them seems to have been stuck since Jan 06
[15:49] <bluca> fab, thank you!
[15:49] <juliank> I should add a timeout on this so it gets killed
[15:55] <cpaelzer> slyon: have you heard of incompatibilities between systemd test_resolved_domain_restricted_dns test and dnsmasq 2.86?
[15:55] <cpaelzer> slyon: seeing those in proposed (https://autopkgtest.ubuntu.com/results/autopkgtest-jammy/jammy/ppc64el/s/systemd/20220106_211604_c8ba4@/log.gz) and wanted to avoid debugging known cases
[15:56] <slyon> cpaelzer: I saw the failing tests.. but I am not currently aware of any incompatibilities, didn't do any investigation either, tho
[15:57] <cpaelzer> I'll try to have a look tomorrow and let you know if I find anything useful
[15:58] <slyon> ty!
[16:01] <juliank> bluca: I deleted all test requests older than one day, fwiw, it was going over them again and again, leading nowhere, as they were cancelled or whatever
[16:01] <bluca> yeah that's ok, thanks for taking care of this
[16:58] <mapreri> ddstreet: mh, I saw queuebot's lines on #-release about gallery-dl/amd64.  I might have forgotten it, but is there something about manually accepting binaries after a NEW source was already accepted?
[16:59] <mapreri> (I accepted the source yesterday, but then didn't look anymore, and I just assumed you did something if I saw the lines at this time, right after the dmb meeting :3)
[17:13] <ddstreet> mapreri yeah, if the source is new, then after it's built, you have to go accept the binary/binaries in new also
[18:14] <bdmurray> sil2100: dbungert and I were trying to build the focal branch of ubiquity using debuild and it failed for both of us. Do you know of any secrets to make it build?
[18:15] <bdmurray> see https://pastebin.ubuntu.com/p/Mt27KVw46W/
[18:25] <bdmurray> dbungert, sil2100: Oh I fixed it - run "debian/rules update"
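The fix bdmurray found is a pre-build step: ubiquity's debian/rules provides an update target that regenerates files the build expects. A sketch of the working sequence, assuming a plain unsigned local build is acceptable:

    # in a checkout of the focal branch of ubiquity
    debian/rules update   # regenerate the generated files the build needs
    debuild -us -uc       # the normal build then succeeds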
[18:26] <ahasenack> schopin: when you bumped krb5 to 1.19, I guess there was no Debian upload yet? There is one now in experimental
[18:39] <bdmurray> dbungert: Could you add a test case for the SRU of bug 1942648? Since I installed on my BitLocker system already I don't think I'll be able to recreate it.
[19:18] <dbungert> bdmurray: on it
[19:20] <bdmurray> dbungert: Thanks! That'll help with 20.04.4
[19:27] <smoser> hey. So someone at work needs to mirror some apt archives. What is the current recommended way to do that? by-hash seems to have broken many of the old ways. aptly seems an option. anyone have other advice? The immediate goal is to mirror 20.04.
[19:32] <sarnold> smoser: just setting up a squid-deb-proxy might get 80% of the benefit with less storage and bandwidth use. I think folks use debmirror for 'partial' mirrors; the debian/changelog in there shows a bunch of 2021 work, so I'm optimistic
[19:40] <smoser> sarnold, yeah, I agree that a proxy is a huge win at almost no cost, but the request is for offline.
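For the non-offline case, the proxy sarnold suggests needs no per-repository setup; clients just point apt at it. A sketch, assuming a hypothetical proxy host proxy.internal and squid-deb-proxy's default port of 8000 (the squid-deb-proxy-client package can instead discover the proxy automatically via avahi):

    # /etc/apt/apt.conf.d/30proxy on each client
    Acquire::http::Proxy "http://proxy.internal:8000";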
[19:41] <smoser> git clone https://salsa.debian.org/debian/debmirror.git && git grep -i 'by-hash' isn't making me think there is by-hash support there.
[19:42] <smoser> (I also will admit that I feel the request for offline is very much perfection being the enemy of good-enough here)
[19:45] <sarnold> smoser: heh, yeah, for me it was 'cd /fst/trees/ubuntu/universe/d/debmirror && rg -i by-hash ; rg -i byhash'. It's a bit sad-sounding at first, but maybe the archive layout/design doesn't actually need specific references to it in software in order to use it. I'm not sure. It's been ages since I've read The Blogpost
[19:49] <smoser> sarnold: do you have a 'by-hash' directory?
[19:49] <smoser>  http://archive.ubuntu.com/ubuntu/dists/focal/by-hash/
[19:49] <sarnold> smoser: I just rsync the whole thing :/
[19:49] <smoser> if that isn't there, then it isn't going to work.
[19:49] <smoser> oh. yeah.
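Since by-hash support in the dedicated mirroring tools is unclear, sarnold's approach of copying the whole tree sidesteps the problem: rsync transfers dists/*/by-hash like any other directory. A sketch, assuming the chosen mirror exports rsync and that full-archive disk usage is acceptable:

    # mirror the archive verbatim, by-hash directories included
    rsync -a --delete rsync://archive.ubuntu.com/ubuntu/ /srv/mirror/ubuntu/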
[20:55] <mwhudson> good morning
[20:56] <bdmurray> mwhudson: morning, I retested all the focal glibc autopkgtest issues and the only one that is left is poor yorick; I don't know it well though
[20:56] <mwhudson> bdmurray: oh thanks!
[20:57] <mwhudson> ERROR (testg) failed to open X display or create X window
[20:57] <mwhudson> i hope that's not glibc's fault
[20:58] <mwhudson> i'll try a run with glibc from release
[20:58] <mwhudson> er updates
[22:12] <mwhudson> bdmurray: looks like it's not caused by glibc https://autopkgtest.ubuntu.com/packages/y/yorick/focal/ppc64el
[22:13] <bdmurray> mwhudson: okay, then we could just hint it
[22:14] <mwhudson> bdmurray: yeah, there are like 7 bugs to verify as well so no hurry :)
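For the record, "hint it" here means a proposed-migration hint telling britney to ignore the failing autopkgtest. A sketch of the usual hint line, with VERSION as a placeholder for whatever yorick version is failing at the time:

    # force-badtest package/version[/arch] in the release team's hints
    force-badtest yorick/VERSION/ppc64el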