[14:09] <andersson1234> bluca: just checking in to see if everything is still all good for you?
[14:29] <bluca> everything going great and no issues so far, thanks!
[17:07] <andersson1234> awesome to hear, thanks
[17:57] <doko> no autopkgtests are running, but according to update_excuses there should be hundreds running
[18:12] <andersson1234> I see there are a few running here?
[18:12] <andersson1234> https://autopkgtest.ubuntu.com/running
[18:16] <andersson1234> Which tests should be running?
[18:16] <andersson1234> Over the last couple of days I pruned hundreds of tests from the queue - specifically tests that were queued for ESM releases on non-ESM architectures. That's the only thing I can think of that would explain it.
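A minimal sketch of that kind of pruning, assuming the test requests sit on an AMQP queue as JSON bodies with "release" and "arch" fields; the queue name, field names, and release/arch sets below are guesses for illustration, not the actual autopkgtest.ubuntu.com schema:

    import json
    import pika

    # Hypothetical values; the real queue layout and fields may differ.
    QUEUE = 'autopkgtest-requests'       # assumed queue name
    ESM_RELEASES = {'xenial', 'bionic'}  # assumed ESM releases
    ESM_ARCHES = {'amd64'}               # assumed arches still covered by ESM

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()

    keep = []
    while True:
        method, _props, body = ch.basic_get(queue=QUEUE)
        if method is None:  # queue drained
            break
        req = json.loads(body)
        if req.get('release') in ESM_RELEASES and req.get('arch') not in ESM_ARCHES:
            ch.basic_ack(method.delivery_tag)  # drop: ESM release on a non-ESM arch
        else:
            ch.basic_ack(method.delivery_tag)
            keep.append(body)                  # republish everything else below

    for body in keep:
        ch.basic_publish(exchange='', routing_key=QUEUE, body=body)
    conn.close()

Draining and republishing (rather than nack/requeue) avoids re-fetching the same message in the loop.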
[18:17] <ginggs> andersson1234: all the ones marked "Test in progress" here:
[18:18] <ginggs> https://ubuntu-archive-team.ubuntu.com/proposed-migration/update_excuses.html
[18:19] <andersson1234> Ah, I understand why now
[18:21] <andersson1234> per https://discourse.ubuntu.com/t/autopkgtest-service/34490 we're replenishing the db right now - I corrupted it on Monday. Should be done in the next couple of days. I believe tests are marked as no longer in progress once their results land in the db - incomplete results for package X in our db mean the results aren't displayed on the relevant page, e.g. https://autopkgtest.ubuntu.com/packages/p/python-eventlet/noble/arm64
[18:21] <andersson1234> I suspect all of the in-progress tests will get resolved as the database is restored
[18:22] <doko> really, a couple of days? then we're accumulating again
[18:22] <andersson1234> To clarify, the tests have all run. The results exist in our Swift storage but not in our local SQLite db, so right now we're just copying results from Swift to SQLite
[18:23] <andersson1234> so the results for all these tests should be in Swift storage and will appear on the webpage as the db gets restored
[18:23] <andersson1234> the tests aren't staying in the queue or anything like that
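Roughly what that replenishment amounts to - a sketch using python-swiftclient and the stdlib sqlite3 module. The auth details, container name, object-path layout, and table columns here are assumptions for illustration, not the real service's schema:

    import sqlite3
    from swiftclient.client import Connection

    # Placeholder credentials; the real service authenticates differently.
    swift = Connection(authurl='https://keystone.example/v3',
                       user='svc', key='secret', auth_version='3')
    db = sqlite3.connect('autopkgtest.db')
    db.execute('CREATE TABLE IF NOT EXISTS result '
               '(release TEXT, arch TEXT, package TEXT, run_id TEXT, '
               'PRIMARY KEY (release, arch, package, run_id))')

    # Assumed object naming: <release>/<arch>/<prefix>/<package>/<run_id>/result.tar
    _headers, objects = swift.get_container('autopkgtest-noble', full_listing=True)
    for obj in objects:
        parts = obj['name'].split('/')
        if len(parts) < 6 or parts[-1] != 'result.tar':
            continue
        release, arch, _prefix, package, run_id = parts[:5]
        db.execute('INSERT OR IGNORE INTO result VALUES (?, ?, ?, ?)',
                   (release, arch, package, run_id))
    db.commit()

Running this per release container repopulates the local index from the results already stored in Swift, which is why no tests need to be re-run.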
[18:27] <doko> ahh, ok
[21:28] <bluca> andersson1234: we spotted a problem: the reporting is linking to the wrong arch's job. For example, on https://github.com/systemd/systemd/pull/31140 the jammy-arm64 and jammy-s390x results both point to the jammy-amd64 results
[21:28] <bluca> this is happening across multiple PRs, and the links seem to be assigned at random, with no discernible pattern
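One way to spot the mismatch bluca describes from outside the service - a sketch that pulls the commit statuses off a PR's head commit via the GitHub API and checks whether the arch named in each status also appears in its link. The 'autopkgtest/<release>/<arch>' context format is an assumption; adjust it to whatever the reporter actually sets:

    import re
    import requests

    SHA = '<head-commit-of-the-pr>'  # placeholder
    url = f'https://api.github.com/repos/systemd/systemd/commits/{SHA}/statuses'
    for status in requests.get(url, timeout=30).json():
        m = re.match(r'autopkgtest/(\S+)/(\S+)', status.get('context', ''))
        if not m:
            continue
        release, arch = m.groups()
        # Flag statuses whose result link doesn't mention the arch they claim.
        if arch not in status.get('target_url', ''):
            print(f"mismatch: {status['context']} links to {status['target_url']}")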