bluca | #2056461 this one is a bit of a pain, as it of course started to appear at the same time as one of the qemu-only tests started to fail, only in noble -.- | 11:03 |
---|---|---|
bluca | are there any known workarounds aside from downgrading kernel? | 11:03 |
bluca | looks like there's no bot around here https://bugs.launchpad.net/ubuntu/+source/autopkgtest/+bug/2056461 | 11:04 |
Skia | not yet, sadly | 11:25 |
Skia | only thing I can say is that I have this on my radar, and plan on investigating that shortly | 11:26 |
andersson1234 | bluca: another obvious workaround which you're probably aware of is using lxd instead of qemu, which I've been doing since this issue arose | 11:29 |
andersson1234 | ah, wait, I see you said qemu-only, nevermind :D | 11:29 |
bluca | yeah need to run on the noble kernel for this | 11:32 |
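(For reference, a minimal sketch of the lxd-based workaround andersson1234 mentions above, and of the qemu run bluca needs; the exact invocations and image names are assumptions, not taken from the log:)

```sh
# lxd runner: containers share the host kernel, so this sidesteps the noble
# kernel bug but cannot exercise a noble kernel. Image name follows the
# convention used by autopkgtest-build-lxd (assumed here).
autopkgtest-build-lxd ubuntu-daily:noble
autopkgtest systemd -- lxd autopkgtest/ubuntu/noble/amd64

# qemu runner: boots the image's own (noble) kernel, which is why lxd is not
# an option for bluca's case. Image name is an example.
autopkgtest systemd -- qemu autopkgtest-noble-amd64.img
```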
bluca | are there proxy issues from the autopkgtest datacenter to salsa or something? been getting this for the past 2 hours, despite retries: | 12:20 |
bluca | fatal: unable to access 'https://salsa.debian.org/bluca/systemd.git/': Received HTTP code 503 from proxy after CONNECT | 12:20 |
bluca | can clone/fetch fine from here | 12:20 |
bluca | doesn't seem to be the usual intermittent hiccups, happens on all the jobs | 12:21 |
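(A hypothetical way to check the proxy path that bluca's jobs go through; the proxy environment variable and the exact salsa URL below are assumptions for illustration, not from the log:)

```sh
# If the runner reaches salsa.debian.org via an HTTP proxy, a 503 returned
# after CONNECT should be reproducible with a bare curl request through the
# same proxy; $https_proxy is assumed to be set in the test environment.
curl --proxy "$https_proxy" -sSI \
  "https://salsa.debian.org/bluca/systemd.git/info/refs?service=git-upload-pack"
```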
andersson1234 | This issue is familiar to me - I'll look into it shortly :) | 12:22 |
bluca | thanks! | 12:22 |
andersson1234 | bluca: do you have an autopkgtest logfile you can point me to please? | 12:48 |
andersson1234 | this seems datacentre dependent | 12:48 |
andersson1234 | or *feels* | 12:48 |
bluca | andersson1234: https://autopkgtest.ubuntu.com/results/autopkgtest-noble-upstream-systemd-ci-systemd-ci/noble/amd64/s/systemd-upstream/20240502_110511_6a2a7@/log.gz | 12:50 |
andersson1234 | ah, a pesky issue has reared its head once again | 13:00 |
andersson1234 | working on a fix now | 13:01 |
andersson1234 | bluca: I'd expect the salsa.debian.org issue to be fixed within the hour. | 13:06 |
andersson1234 | Please ping me if this isn't the case! | 13:06 |
bluca | thank you, will let you know | 13:08 |
andersson1234 | bluca: FYI I have a fix I'm testing for the issue you saw yesterday (jobs being lost with a restart of the workers) | 16:54 |
bluca | thanks | 16:55 |
bluca | still seeing the salsa problem as of one hour ago | 16:56 |
bluca | triggered again just now to see if it persists | 16:57 |
andersson1234 | ah that's annoying, sorry, I'll look into that again now | 16:58 |
bluca | actually looks like it recovered in the past half hour or so | 17:12 |
bluca | I see jobs moving past the cloning and onto the compilation step | 17:12 |
bluca | so the fix seems to have worked, thanks | 17:12 |