[09:40] <cpaelzer> of the ppc64el builders there are a few listed as cleaning for >10 hours https://launchpad.net/builders
[09:40] <cpaelzer> do those need a bump?
[09:48] <Laney> cpaelzer: I dunno what the threshold is, but that's usually a #launchpad question / action
[10:09] <cjwatson> cpaelzer: stabbing them
[10:09] <cjwatson> cpaelzer: but we have alerts for when it gets too bad anyway
[10:12] <rbasak> I'm basing some packaging on Focal so that it can be snapped with a core20 base. On the packaging end, I found that dh_bash-completion doesn't seem to use the dh sequencer, in Focal at least. Where should it sit in the sequence? Inserting it into override_dh_auto_install seems to work. Is that appropriate? Digging into a couple of existing source packages wasn't much help to me; I've not been able to
[10:12] <rbasak> find an example.
[10:24] <cpaelzer> rbasak: dh --with bash-completion $@ ?
[10:25] <cpaelzer> found in many packages, so it does not seem to be a one-off way to do it https://codesearch.debian.net/search?q=--with+bash-completion&literal=1
[10:25] <rbasak> That didn't work for me :-/
[10:25] <rbasak> I wondered if it was maybe post-Focal?
[10:26] <cpaelzer> The package I checked first started to use it in Fri Aug 21 12:18:36 2015 +0200
[10:26] <cpaelzer> which seems very much before focal to me
[10:26] <cpaelzer> was initramfs-tools
[10:26] <cpaelzer> not sure if this is a good example, it just was my first hit
[10:27] <rbasak> When I tried, various dh helpers were being called with (IIRC) -O--with-bash-completion or something like that
[10:27] <rbasak> But dh_bash-completion itself didn't get run
[10:27] <rbasak> Hmm
[10:27] <rbasak> Maybe I added a hyphen when I shouldn't have done
[10:28] <rbasak> --with-bash-completion
[10:28] <rbasak> I think maybe that was it
[10:28] <rbasak> Thanks :)
[10:28]  * rbasak tries
[10:28] <cpaelzer> let me know if it worked rbasak
[10:29] <cpaelzer> and thanks cjwatson for stabbing the ppc64 builders
[10:35] <rbasak> It did work cpaelzer, thanks!
[10:35] <rbasak>         dh $@ --with bash-completion,python3 --buildsystem=pybuild
[10:36] <cpaelzer> \o/
[10:36] <rbasak> It turns out that --with-bash-completion is wrong but silently accepted
[10:37]  * rbasak should have known better, but in his defence, it's been a while :)
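The working incantation can be captured in a minimal debian/rules sketch. This is illustrative only: the add-on list and buildsystem come from rbasak's paste above, the rest is standard dh boilerplate.

```make
#!/usr/bin/make -f
# The dh(1) add-on is enabled with "--with bash-completion" (a space, then
# the add-on name). By contrast "--with-bash-completion" (all hyphens) is an
# unknown option that dh silently accepts and ignores, which is the trap
# described above: the helpers run, but dh_bash-completion never does.
%:
	dh $@ --with bash-completion,python3 --buildsystem=pybuild
```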
[11:59] <Laney> seb128: slyon: I will add a hint for netplan.io now, but I would appreciate it if this regression could still be investigated, it might be finding a bug (at least a test bug) on ppc64el
[12:01] <seb128> Laney, thx!
[12:11] <Laney> Filed https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/1916888
[13:01] <slyon> Thanks Laney, I'm currently working on a fix
[13:24] <Laney> slyon: great!
[13:25] <slyon> ;D
[15:03] <seb128> are autopkgtests not being picked up today?
[15:03] <seb128> there doesn't seem to be a backlog on hirsute/i386 yet https://autopkgtest.ubuntu.com/packages/libt/libthai/hirsute/i386 hasn't picked up the upload done this morning
[15:04] <seb128> ok, I just had to ask for it to show up on active list now...
[15:06] <seb128> hum,  not i386 though?
[15:11] <RikMills> seb128: [14:41:45+0000] - Requesting wikidiff2 autopkgtest on amd64 to verify libthai/0.1.28-3ubuntu1
[15:12] <RikMills> [14:41:45+0000] - Requesting libthai autopkgtest on i386 to verify libthai/0.1.28-3ubuntu1
[15:13] <RikMills> possibly they are in the no-man's-land of having just run, but not being in the results yet
[15:13] <JawnSmith> Is any core dev available to restart the samba autopkgtest that is blocking acl on s390x?
[15:15] <leftyfb> It seems the latest python2.7 deps for xenial are missing from the repos. https://p.mort.coffee/OFSO
[15:15] <ginggs> JawnSmith: .
[15:17] <JawnSmith> ginggs: Thanks!
[15:17] <seb128> RikMills, thx, I always have difficulty figuring out the status of a request
[15:18] <RikMills> leftyfb: the update was only released 1 hr ago so could be slow mirror sync?
[15:18] <leftyfb> RikMills: https://launchpad.net/ubuntu/xenial/amd64/python2.7-minimal  looks like they were deleted?
[15:18] <Laney> seb128: it seems to be there now
[15:19] <Laney> the status is that it gets requested at the next proposed-migration run after it got built
[15:19] <Laney> if you just miss the previous one then it could be some time
[15:19] <seb128> seems the proposed-migrations have been taking 3h+ today
[15:19] <Laney> not just today
[15:19] <seb128> :(
[15:19] <Laney> rbalin_t is working on a speedup there
[15:19] <seb128> great
[15:19] <RikMills> leftyfb: 'broken security update' lovely!
[15:20] <leftyfb> RikMills: can we get the index updated so people's updates and build farms aren't broken while it's being resolved?
[15:21] <RikMills> cjwatson: ^ ?
[15:22] <Laney> leftyfb: there's not really a distinction; reverting the security update will make that happen automatically and as fast as possible
[15:22] <leftyfb> Laney: over an hour from the update being deleted?
[15:23] <Laney> I don't know when it was deleted, but deleting it causes the indexes to be republished and it can't be done as a separate step, that's all I'm saying
[15:25] <leftyfb> Laney: according to the link I posted above, they were deleted at 9:10AM EST.
[15:26] <leftyfb> Laney: if the indexes take over an hour to get republished, then it would be nice to get confirmation on that. Otherwise, something in that process seems to be broken currently and should be addressed
[15:31] <cjwatson> I'm working on it, leave me to it please
[15:32] <cjwatson> The trade-off here is that some temporary index breakage is better than having people install a broken update
[15:32] <RikMills> ok. sorry to bother you
[15:32] <RikMills> ^ leftyfb
[15:32] <cjwatson> But it's underway
[15:33]  * RikMills goes back to pondering flatpak tests
[15:34] <ogra> package them as a snap 😉
[15:35] <RikMills> eep!
[15:38] <leftyfb> thank you folks. Sorry to bother, but I saw the reports coming from multiple communities all at once, figured it was about to blow up
[15:58] <cjwatson> leftyfb: indexes should be updated now, subject to mirror propagation times
[16:00] <cjwatson> Just checked all of {xenial,bionic}-{security,updates} on archive.ubuntu.com from outside the datacentre and they look right
[16:03] <cjwatson> Apologies for the disruption
[16:05] <leftyfb> cjwatson: out of curiosity, was there an issue with the updated indexes or do they actually take over an hour to update?
[16:05] <cjwatson> leftyfb: There was an issue because of some unfortunate timing
[16:05] <leftyfb> cjwatson: cool. Thanks for the immediate attention and quick fix
[16:06] <cjwatson> leftyfb: We deleted the broken update from -security quite quickly, but there's a cron job that propagates security updates to -updates and that interfered unexpectedly, which meant everything took longer; we temporarily shut down mirror triggering to minimize the risk of the broken update being re-published while we sorted all that out
[16:07] <cjwatson> (since a broken python2.7 could cause quite a lot of chaos)
[16:07] <leftyfb> ah, that makes sense
[16:07] <Slashman> cjwatson: thanks for the update, do you have an ETA?
[16:08] <leftyfb> Slashman: the main repos have been resolved.
[16:08] <cjwatson> Slashman: at least archive.ubuntu.com already looks fixed, so my ETA is negative :)
[16:09] <Slashman> hm, I still have the issue on "security.ubuntu.com" 2001:67c:1562::15
[16:09] <cjwatson> let me check that
[16:10] <cjwatson> while I do, can you confirm that you've done an apt update very recently?
[16:10] <Slashman> yes, I'm actually trying to do a "do-release-upgrade" from 16.04 to 18.04
[16:11] <Slashman> I'll try to force Ipv4 usage, maybe "security.ubuntu.com" will then be resolved to a different server
[16:11] <cjwatson> 2001:67c:1562::15 looks right to me
[16:12] <cjwatson> oh wait, maybe not
[16:12] <Slashman> https://paste.ubuntu.com/p/9s4cfDW3Fz/
[16:13] <cjwatson> getting our sysadmins to check mirroring there now
[16:13] <Slashman> seems to work over IPv4
[16:13] <cjwatson> I doubt IPv4/v6 is the issue, more likely inconsistent state across multiple machines
[16:13] <cjwatson> or conceivably an HTTP proxy in the way
[16:13] <Slashman> I guess so, but switching to IPv4 may have switched machine
[16:14] <cjwatson> indeed
[16:15] <Slashman> at least I'm not stuck anymore, thanks!
[16:15] <cjwatson> I think you may just have caught it in the middle of a somewhat late update
[16:15] <cjwatson> that machine looks fine now
[16:15] <cjwatson> but we're checking others
[16:16] <seb128> hum, https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-hirsute/hirsute/i386/libi/libinih/20210225_140654_ee264@/log.gz
[16:16] <seb128> autopkgtest [14:06:42]: test unittest.sh: [-----------------------
[16:16] <seb128> make: i686-linux-gnu-gcc: No such file or directory
[16:16] <seb128> i686-linux-gnu-gcc    -c -o parseargs.o parseargs.c
[16:16] <Slashman> thank you for your hard work
[16:16] <seb128> should i686-linux-gnu-gcc be available as a command?
[16:16] <cjwatson> $ for x in 91.189.88.142 91.189.91.39 91.189.88.152 91.189.91.38; do curl -s --resolve "security.ubuntu.com:$x:80" http://security.ubuntu.com/ubuntu/dists/bionic-security/main/binary-amd64/Packages.gz | zcat | grep-dctrl -nsVersion -XP python2.7; done
[16:16] <cjwatson> $ for x in 2001:67c:1562::15 2001:67c:1360:8001::23 2001:67c:1360:8001::24 2001:67c:1562::18; do curl -s --resolve "security.ubuntu.com:[$x]:80" http://security.ubuntu.com/ubuntu/dists/bionic-security/main/binary-amd64/Packages.gz | zcat | grep-dctrl -nsVersion -XP python2.7; done
[16:17] <cjwatson> both of those show 2.7.17-1~18.04ubuntu1.2 across the board
[16:17] <cjwatson> it's always much rougher when we have to abruptly go backwards, we don't do it very often
[16:20] <sdeziel> when I run those for loops, I see both 2.7.17-1~18.04ubuntu1.2 and 1.3
[16:20] <cjwatson> sdeziel: is there an HTTP proxy between you and security.u.c?
[16:20] <sdeziel> cjwatson: no
[16:21] <cjwatson> oh, but *I* have an HTTP proxy in the way
[16:21] <cjwatson> *facepalm*
[16:21] <cjwatson> yes, I see it now
[16:26] <sdeziel> shows all 1.2 now
[16:31] <cjwatson> Yep, I think everything has caught up now and I've unconfused myself about how I was holding curl wrongly
[16:31] <cjwatson> (everything on security.u.c anyway)
[16:34] <cjwatson> and archive.u.c
[16:41] <sdeziel> thanks (TIL about curl's --resolve feature so thanks!)
[16:42] <cjwatson> It's very handy, but I was running it on an old machine which was lacking some features
[16:42] <cjwatson> Especially re v6
[16:49] <stgraber> ah yeah, I love --resolve, been using it quite heavily when you want to check that all servers in a RR setup are up to date
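A hedged sketch of the --resolve technique discussed above. curl documents the spec as HOST:PORT:ADDRESS (the pastes above appear to have the port and address swapped, which may be part of why the results were confusing), and IPv6 addresses must be bracketed. The helper below builds a correct spec; the hostname and addresses in the usage comment are examples from the log, not an authoritative mirror list.

```shell
# Build a curl --resolve spec (HOST:PORT:ADDRESS), bracketing IPv6 addresses
# as curl requires. This pins the hostname to one specific backend, bypassing
# DNS round-robin, so each query hits exactly one mirror machine.
resolve_spec() {
    host=$1; port=$2; addr=$3
    case $addr in
        *:*) printf '%s:%s:[%s]' "$host" "$port" "$addr" ;;  # IPv6 -> bracketed
        *)   printf '%s:%s:%s'   "$host" "$port" "$addr" ;;  # IPv4 as-is
    esac
}

# Example (addresses illustrative), checking each backend in turn:
#   for ip in 91.189.88.142 2001:67c:1562::15; do
#       curl -s --resolve "$(resolve_spec security.ubuntu.com 80 "$ip")" \
#           http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease
#   done
```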
[20:26] <seb128> vorlon, hey, I tried your suggestion for libinih but it failed, https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-hirsute/hirsute/i386/libi/libinih/20210225_140654_ee264@/log.gz ... I'm not sure I understand why, because the same gcc variant seems to work for other packages, do you have any idea maybe?
[20:26] <seb128> make: i686-linux-gnu-gcc: No such file or directory
[21:50] <vorlon> seb128: hmmm doh, I actually reproduced that locally but assumed it was an error in my setup.  I don't know what's different about libinih's debian/tests/control that's causing the cross-detection to fail and not pull in crossbuild-essential-i386
[21:51] <vorlon> seb128: it's *supposed* to get pulled in whenever @builddeps@ is listed in debian/tests/control, and we are doing a cross-test
[21:56] <vorlon> seb128: ah - apparently what I implemented pulls in crossbuild-essential-i386 only if 'build-essential' is explicitly listed as a test dep; this seems to be a bug in my autopkgtest implementation, since @builddeps@ does imply build-essential:native but does not cause crossbuild-essential-i386 to be pulled in
[21:57] <vorlon> seb128: so, can be worked around by adding build-essential to the test deps, but ultimately I should fix the autopkgtest implementation
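The workaround vorlon describes can be sketched as a debian/tests/control fragment. The test name comes from the libinih log pasted earlier ("test unittest.sh"); the explicitly listed build-essential is the point, since per the explanation above @builddeps@ alone does not currently trigger pulling in crossbuild-essential-i386.

```
Tests: unittest.sh
Depends: @builddeps@, build-essential
```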
[22:09] <seb128> vorlon, thanks!