[08:20] <_hc> hello all, I'm a DD and I'm checking in on the status of my packages in Ubuntu/hirsute.  This update seems to be stuck in "proposed" even though it made it into Debian/testing/bullseye a while ago: https://launchpad.net/ubuntu/+source/androguard
[08:20] <_hc> is there something I can do to get it unstuck?
[08:28] <Unit193> First stop would be to https://people.canonical.com/~ubuntu-archive/proposed-migration/hirsute/update_excuses.html#androguard where you can see autopkgtest regressions.
[08:35] <Unit193> (See also: http://autopkgtest.ubuntu.com/running#pkg-androguard - http://autopkgtest.ubuntu.com/packages/a/androguard)
[08:37] <_hc> could someone retrigger those failed runs? it seems like something killed the tests
[08:37] <_hc> I tried but I don't have permissions
[08:38] <_hc> it is passing on all arches in Debian/bullseye https://ci.debian.net/packages/a/androguard/
[08:38] <_hc> all supported arches
[08:41] <Unit193> Yeah, I did so when you first asked.
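(Editorial aside: retriggering a failed autopkgtest goes through the `request.cgi` endpoint on autopkgtest.ubuntu.com, which requires an Ubuntu SSO login with suitable permissions, matching _hc's "I don't have permissions" above. The sketch below only builds such a retrigger URL; the function name `retrigger_url` and the version string are hypothetical, and the trigger takes the usual `srcpkg/version` form.)

```python
from urllib.parse import urlencode

def retrigger_url(release: str, arch: str, package: str, trigger: str) -> str:
    """Build an autopkgtest retrigger URL (hypothetical helper).

    Visiting the resulting URL in a browser while logged in via Ubuntu SSO
    (with the needed permissions) queues a new test run; the URL alone does
    nothing without that authentication.
    """
    base = "https://autopkgtest.ubuntu.com/request.cgi"
    # Trigger is "source-package/version" of the upload being tested.
    return base + "?" + urlencode({
        "release": release,
        "arch": arch,
        "package": package,
        "trigger": trigger,
    })

# Example (version string is illustrative, not androguard's real version):
print(retrigger_url("hirsute", "armhf", "androguard", "androguard/1.0-1"))
```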
[08:41] <_hc> thanks
[11:30] <ginggs> _hc: out of memory.  "Killed" in the log and passing on armhf are reliable indicators of that
[11:31] <seb128> so, I triggered ['chromium-browser/89.0.4389.82-0ubuntu0.18.04.1'] for bionic/amd64 earlier
[11:31] <_hc> do ubuntu's runners have less RAM than Debian's?  Is there something I should do?
[11:32] <seb128> but there is no result showing on https://autopkgtest.ubuntu.com/packages/chromium-browser/bionic/amd64 and it's not in the running page
[11:32] <seb128> how can I tell what happened there? is that pending a webpage refresh?
[11:32] <seb128> did it silently fail for some reason and if so how do I tell that's the case?
[11:37] <ginggs> _hc: much less.  androguard needs to be added to the big_packages list so the test will run on a bigger cloud instance.  i've been meaning to open an MR for some other packages, i'll add yours to the list
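(Editorial aside: ginggs's diagnosis rests on spotting the kernel OOM-killer's "Killed" marker in the test log. A minimal sketch of that check, assuming plain substring matching on the fetched log text; the helper name `looks_like_oom` and the marker list are illustrative, not part of any real tool.)

```python
# Common strings that suggest the kernel OOM-killer ended a test run.
# This marker list is an assumption for illustration, not exhaustive.
OOM_MARKERS = ("Killed", "Out of memory", "oom-kill")

def looks_like_oom(log_text: str) -> bool:
    """Return True if any common OOM marker appears in the log text."""
    return any(marker in log_text for marker in OOM_MARKERS)

print(looks_like_oom("test_session ... Killed\nautopkgtest: ERROR"))  # True
print(looks_like_oom("all tests passed"))                             # False
```

A package that passes on Debian's larger CI workers but gets Killed on Ubuntu's default instances is the classic case for the big_packages list mentioned above.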
[11:38] <_hc> thanks!
[11:48] <Laney> seb128: it seems to be there, I'm guessing you were just waiting for it to be fetched by the website
[12:13] <seb128> Laney, ah, I should have waited a bit more, thanks :)
[12:13] <seb128> oSoMoN, so yeah, it's also failing, not specific to the ppa
[12:48] <oSoMoN> seb128, ack, thanks for confirming this
[12:49] <seb128> np!
[16:25] <cjwatson> rbalint: Sorry for the long delay!  I'm getting copious coffee and working on a reply now ...
[16:26] <rbalint> rbasak, ^ I guess :-)
[16:26] <cjwatson> Ugh, sorry
[16:26] <cjwatson> Quite right
[16:39] <rbasak> Thanks!
[20:02] <coreycb> Hello release team, I would like to sync python-dogpile.cache from debian experimental as it's needed by a new version of python-oslo.cache for OpenStack.
[23:54] <rbasak> Trevinho: do we need to revert the sssd SRU?
[23:56] <Trevinho> rbasak: It's not clear to me what the affected cases are, as that comment looks more like a warning, or what?
[23:58] <rbasak> AFAICT, the reporter is giving us a use case that breaks auth when your SRU is applied?
[23:58] <Trevinho> rbasak: yeah, not sure there's an actual case though?
[23:59] <Trevinho> rbasak: I mean, I hope so; indeed it would be better to proceed with a safer upgrade if there's such a risk