[00:45] <mwhudson> ah waiting for gcc-10 to build
[01:57] <mwhudson> https://code.launchpad.net/~dbungert/curtin/+git/curtin/+merge/405655
[01:57] <mwhudson> no
[01:57] <mwhudson> lrwxrwxrwx root/root         0 2021-06-28 02:52 ./libx32/ld-linux-x32.so.2 -> /libx32/ld-linux-x32.so.2
[01:57] <mwhudson> hmm
[01:57] <mwhudson> how did i do that
[01:57] <sarnold> fun
[01:58] <mwhudson> oh for heavens sake
[01:58] <mwhudson> from=debian/tmp-x32$rtld_so \
[01:58] <mwhudson> soname=`basename $rtld_so` \
[01:58] <mwhudson> to="debian/tmp-x32//libx32/$soname" ; \
[01:58] <mwhudson> if [ $from != $to ]; then \
[01:59] <mwhudson> that's comparing debian/tmp-x32/libx32/ld-linux-x32.so.2 with debian/tmp-x32//libx32/ld-linux-x32.so.2
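[editor's note] The guard mwhudson quotes fails because shell `[ $from != $to ]` is a byte-for-byte string comparison: the two spellings name the same file but differ by a doubled slash, so the "different file" branch runs when it shouldn't. A minimal Python illustration using the paths from the log:

```python
import os.path

# `[ $from != $to ]` compares raw strings; the doubled slash makes the
# two spellings of the same path look different, so the copy/symlink
# branch runs when it shouldn't.
frm = "debian/tmp-x32/libx32/ld-linux-x32.so.2"
to = "debian/tmp-x32//libx32/ld-linux-x32.so.2"

print(frm != to)  # True: the strings differ
# Normalising first collapses the doubled slash and compares the paths:
print(os.path.normpath(frm) != os.path.normpath(to))  # False: same path
```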
[02:08] <mwhudson> tseliot: what happen
[02:08] <mwhudson> (looking at ubuntu-drivers autopkgtests)
[02:08] <mwhudson> er build logs
[02:34] <mwhudson> are these messages coming from apt??
[02:41] <mwhudson> yes they are but it looks like they've been emitted since 2014 so what has changed?
[03:59] <FourDollars> Regarding https://bugs.launchpad.net/bugs/1932559, I saw oem-somerville-gendry-meta on https://launchpad.net/ubuntu/focal/+queue after I executed ./copy-package --from ppa:canonical-oem-metapackage-uploaders/ubuntu/oem-metapackage-staging --from-suite focal --to ubuntu --to-suite focal-proposed oem-somerville-gendry-meta. But its Component is 'universe' instead of 'main'. Any idea?
[04:15] <mwhudson> FourDollars: i'm no expert, but i would guess that will get dealt with when the queue gets processed
[08:43] <seb128> hum, I don't seem to be able to sso login to do autopkgtest retries
[08:44] <seb128> like https://autopkgtest.ubuntu.com/request.cgi?release=impish&arch=amd64&package=colord&trigger=gnome-settings-daemon/40.0.1-1ubuntu1 sends me to a sso login page
[08:44] <seb128> I click the button, get the ubuntu one page with my username listed and checked, I click 'yes, log me in' and I get a non secure redirection warning from firefox
[08:44] <seb128> bah, and now it worked
[08:45] <seb128> alright, probably a transient issue and bad timing
[08:51] <tseliot> mwhudson, ?
[09:32] <slyon> seb128: After investigating the systemd vs network-manager autopkgtests I think there is an actual regression in NM v1.32: https://pad.lv/1936312
[09:46] <seb128> slyon, hey, thanks for investigating. I wonder if that's my fault for removing ubuntu_revert_systemd.patch in https://git.launchpad.net/network-manager/commit/?id=4c4f7172
[09:47] <seb128> slyon, that sounds similar to bug #1914062 which was why it was added but the problem was supposed to be fixed in systemd
[09:51] <slyon> yes that looks very related. could we keep that ubuntu_revert_systemd.patch?
[09:51] <mwhudson> tseliot: https://launchpad.net/ubuntu/+source/ubuntu-drivers-common/1:0.9.1
[09:54] <mwhudson> doko: gcc-10 taking 24 hours + on amd64 doesn't seem normal? :/
[10:16] <slyon> seb128: looks like the supposed fix in systemd (247.3-1ubuntu1) was reverted only one day later by @stgraber https://git.launchpad.net/~ubuntu-core-dev/ubuntu/+source/systemd/commit/?h=ubuntu-impish&id=efebddfe37efff6a259ef7fd59212d65ad1b848b
[10:17] <slyon> I'm not sure what's going on there... looks like we have 2 patches available to fix the problem, both dropped from the current packages
[10:20] <slyon> Maybe stgraber can comment on that situation (^^^) once he is available
[10:25] <seb128> slyon, right, thanks for noticing the revert!
[10:25] <seb128> stgraber, ^
[10:30] <mwhudson> there seems to be an almost universal law that clisp migrations are blocked by sbcl failures and vice versa
[10:31] <mwhudson> (and also that they are completely incomprehensible, but that may be because i haven't really done any common lisp in like 15 years)
[10:32] <doko> mwhudson, it is normal if a build ends up on slow hardware. yes, fastest builds are around 10h
[10:33] <mwhudson> doko: ah ok
[10:33] <doko> mwhudson, I only checked the bison rebuild, nothing else. the other rebuilds did already migrate
[10:34] <mwhudson> doko: yeah, the _dl_catch_error thing on i386 is strange
[10:35] <mwhudson> makes https://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html#glibc ugly to look at
[10:42] <tseliot> mwhudson, yes, I noticed the failures, I suspect the error from the log is just a red herring, and it's the actual writing of the package list to a file that is failing
[11:09] <laney> juliank: any comments on https://gist.github.com/iainlane/cf7a426ea35066e7ace8046cab922e6a as a way to find esm updates you "could" get?
[11:18] <juliank> laney: looks ok to me
[11:19] <laney> 👍
[11:19] <laney> a constant on the API for -32768 would be nice
[11:20] <xnox> slyon:  there was a fix submitted by stgraber's team into systemd, which got merged. We must run the udevd service and sockets in LXC. And the issues that were preventing us from running them should be fixed in the latest systemd release. That's the tl;dr as far as I know.
[11:22] <juliank> laney: I'd just skip that check, to be honest
[11:22] <juliank> If you manually pin ESM down, it will pop up as disabled, but oh well
[11:23] <laney> juliank: not sure what you mean, pop up how?
[11:23] <juliank> laney: If you remove the check, you'd also see ESM upgrades you missed out on because of local pinning config
[11:23] <laney> oic
[11:23] <juliank> laney: Though presumably you'd want to just check priority < 0
[11:23] <laney> is that desirable?
[11:24] <juliank> I realized that you probably want to check for "disabled" ESM and not ESM updates that are not the candidate because you have a PPA with a higher version :)
[11:24] <laney> yeah
[11:25] <laney> I guess I want to say, if we remove this Pin-Priority: never, would it be the candidate?
[11:26] <juliank> laney: I mean, you could also call policy.set_priority(version, 500) and then check if it becomes the candidate :D
[11:26] <juliank> scary
[11:26] <laney> heh
[11:26] <laney> is that smart?
[11:26] <laney> it feels kinda smart
[11:26] <juliank> Or maybe it doesn't work I don't remember
[11:27] <juliank> because it's supposed to not be overridable by preferences files
[11:27] <laney> i'll try it, see what happens
[11:27] <laney> suppose I have to remember the old priority and unset it at the end
[11:28] <laney> make me a context manager for this kind of thing :D
[11:28] <juliank> laney: Um, well, the priority is probably 0 by default (it falls back to the package file)
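[editor's note] The context manager laney asks for could look like the sketch below. It assumes a policy object exposing get_priority()/set_priority() in the shape of python-apt's apt_pkg.Policy (which, as juliank discovers later in the log, may not actually honour set_priority); the DummyPolicy stub is purely illustrative:

```python
from contextlib import contextmanager

@contextmanager
def temporary_priority(policy, version, priority):
    # Save the old pin, apply the new one, and restore on exit,
    # even if the body of the with-block raises.
    old = policy.get_priority(version)
    policy.set_priority(version, priority)
    try:
        yield
    finally:
        policy.set_priority(version, old)

# Illustrative stub standing in for apt_pkg.Policy:
class DummyPolicy:
    def __init__(self):
        self._prio = {}
    def get_priority(self, version):
        return self._prio.get(version, 0)  # 0 default, as juliank notes
    def set_priority(self, version, priority):
        self._prio[version] = priority

policy = DummyPolicy()
with temporary_priority(policy, "esm-version", 500):
    assert policy.get_priority("esm-version") == 500
assert policy.get_priority("esm-version") == 0  # restored afterwards
```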
[11:29] <juliank> laney: But the current thing essentially is what I did in the ua apt hook
[11:29] <juliank> laney: Where do you want to put this code, btw?
[11:29] <juliank> because python-apt is scary slow, hence we went with C++ for the hook (and Go for the new one)
[11:29] <laney> juliank: update-manager
[11:30] <juliank> ah, ok, makes sense there
[11:30] <laney> ideally I'll find an existing loop through the cache to put this in
[11:33] <juliank> laney: You could loop over the PackageFile objects first and check if any actually are a disabled ESM source before checking further
[11:33] <juliank> laney: But it's only accessible from apt_pkg I think
[11:34] <juliank> laney: Probably I think it's best to keep the code as is rather than try to play with the policy, I'm scared a bit by policy.cc
[11:36]  * juliank is not sure it's possible to reset the priority, or well, actually set it
[11:36] <juliank> :D
[11:37] <juliank> yeah, apt is broken, SetPriority() forgets to set the type
[11:38] <juliank> In [14]: c._depcache.policy.set_priority(c["apt"].candidate._cand, 912)
[11:38] <juliank> In [15]: c._depcache.policy.get_priority(c["apt"].candidate._cand)
[11:38] <juliank> Out[15]: 500
[11:38] <juliank> lol
[11:39] <juliank> apparently nothing relies on this :D
[11:57] <laney> juliank: heh, ok
[11:57] <laney> I don't know about PackageFiles sooooo
[11:57] <laney> if you think it's fine I'll leave it as is, maybe fix the -32768 bit
[12:03] <slyon> xnox: thanks! I'll try to dig that up, to find out why it doesn't work
[12:04] <xnox> slyon:  it was fixed upstream.....
[12:04] <slyon> yeah, I'm currently searching upstream github
[12:05] <slyon> xnox: with "latest release" did you mean v249 (from ~8 days ago) or v248?
[12:06] <xnox> slyon:  i think it is https://github.com/systemd/systemd/pull/18559 which got closed in favor of dozen other prs.
[12:07] <xnox> i.e. https://github.com/systemd/systemd/pull/18684/files
[12:07] <xnox> https://github.com/systemd/systemd/pull/18717
[12:07] <slyon> thanks for the pointer!
[12:07] <xnox> so it should be there since v248-rc1
[12:08] <xnox> and v248 should work just fine with udev _enabled_ to run in lxd containers.
[12:08] <xnox> (i can't remember what upstream default is, in Ubuntu we _must_ have udev enabled)
[12:08] <xnox> lots of device passthrough features and products depend on it from Canonical.
[12:08] <slyon> ok. I will double check that
[12:33] <icey> hey jamespage - is 22+ hours normal for a ceph build on riscv64?!
[12:59] <lucasmoura> Hi sil2100, when you have some time, can you take a look at this SRU here: https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1934902
[13:00] <lucasmoura> We need the packages on the proposed pocket to start our tests here which will allow the SRU to proceed
[13:44] <xnox> icey:  you can look how long previous build took..... "Finished on 2021-05-27 (took 2 days, 7 hours, 12 minutes, 53.1 seconds)"
[13:46] <icey> xnox: ah thanks for the pointer; and wow, two days!
[13:54] <sil2100> lucasmoura: hey! Let me take a look in a bit, I didn't start my SRU shift yet - will do shortly
[13:56] <lucasmoura> Thanks sil2100
[14:07] <seb128> bryyce, it would probably make sense to hold off on disruptive updates that get in the way of the openldap transition
[14:08] <seb128> bryyce, the apache2 and backuppc updates from yesterday show autopkgtest regressions
[14:09] <seb128> bdmurray, is anyone working on update-manager autopkgtests failing? that's also in the way to unblock the openldap, gnome, etc transitions
[14:34] <bdmurray> seb128: I started having a look yesterday and will dive into it more today
[14:35] <bdmurray> xnox: Do you have an idea of what's going on in bug 1924850?
[14:38] <seb128> bdmurray, I expect the fix is going to be similar to http://launchpadlibrarian.net/548108821/apturl_0.5.2ubuntu20_0.5.2ubuntu21.diff.gz
[14:38] <seb128> bdmurray, in case that's helping
[14:56] <xnox> bdmurray:  it feels like people have installed a usrmerged system. trying to upgrade. as part of upgrade usrmerge is installed for the first time. and it is confused about life and is trying to convert.... an already converted system.
[14:57] <xnox> bdmurray:  i'm not sure how they installed their systems.
[14:58] <bdmurray> xnox: okay, so it seems like a corner case then?
[15:03] <xnox> bdmurray:  it has not come up in my testing. but it seems like multiple people are hitting it. it would be nice to figure out how to reproduce this issue and see if we can fix anything in usrmerge to make it less confused about life.
[15:06] <bdmurray> xnox: okay, I'll test some more and ask for details
[15:07] <sergiodj> seb128: FWIW openldap has been blocked waiting on gnome-shell to become installable again; that's why it's still in -proposed :(
[15:08] <sergiodj> well, arguably now ceph is also blocking openldap, but I saw the issue has been resolved
[15:25] <bryyce> seb128, thanks for pointing out the test failures; they appear unrelated to apache and from the logs might just be test flaking, but I'll see if I can get them to pass
[15:26] <bryyce> seb128, the apache2 merge itself should be pretty unremarkable, I'm not too worried about it vis a vis openldap.  I agree with you though it would be great to get it cleared since many things are gated on it right now.
[15:27] <TJ-> xnox: bdmurray reading the code in convert_file() and applying the logic to my 20.04 amd64 installation, where /bin is a symlink to /usr/bin, this install would hit the last fatal("Both $n and /usr$n exist") if -e "/usr$n"; it is unfortunate that there are 4 identical error messages, so it is difficult to know which one was triggered
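[editor's note] The guard TJ- quotes amounts to "both the old path and its /usr counterpart exist". A rough Python rendering (the real code is Perl in usrmerge's convert_file(); names here are illustrative, and like Perl's -e the check follows symlinks, which is exactly why a merged /bin -> /usr/bin layout trips it):

```python
import os
import tempfile

def both_exist(root, n):
    # Mirrors the quoted Perl guard: converting path `n` is fatal when
    # both <root>/<n> and <root>/usr/<n> exist.  os.path.exists()
    # follows symlinks, as Perl's -e does.
    rel = n.lstrip("/")
    return (os.path.exists(os.path.join(root, rel)) and
            os.path.exists(os.path.join(root, "usr", rel)))

# Demo on a throwaway tree shaped like an already-merged system:
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "usr", "bin"))
    open(os.path.join(root, "usr", "bin", "sh"), "w").close()
    os.symlink("usr/bin", os.path.join(root, "bin"))  # merged layout
    print(both_exist(root, "/bin/sh"))  # True: the merged layout trips the guard
```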
[15:30] <icey> bryyce: I was looking briefly at the apache2 regression - I'd be tempted to retry the failure as it seemed networking related rather than anything related to either diaspora-installer or apache2; that said, are there networking differences between the different architectures on the autopkgtest setup?
[15:31] <icey> (I'm about to drop offline for the day but wanted to share the observation, didn't get any deeper than that)
[15:31] <bryyce> icey, I looked as well and agree it seems something intermittent.  We'll take care of retriggers and such
[15:32] <icey> bryyce: thanks, I don't have permissions so was thinking about trying to repro on ppc64el tomorrow morning, much easier to poke the retry :)
[15:32] <xnox> TJ-:  that can't be good.
[15:33] <xnox> TJ-:  i would have hoped conversions would not even be started on an already merged system.....
[15:33] <icey> anyways,
[15:33]  * icey drops
[15:33] <TJ-> xnox: ahhh, so 20.04 would be expected to have been converted since it is a fresh install?
[15:37] <xnox> TJ-:  yes.... we started doing mergedusr installations long before enforcing usrmerge on upgrade; so we have only just now started to install the usrmerge package, which should expect either an already-converted system or not. And it shouldn't try to convert preinstalled systems that are already merged.
[15:37] <xnox> TJ-:  upgrade paths from splitusr need to start from an installation of something like bionic.
[15:42] <TJ-> xnox: ahhh, thanks for the clarification :)
[15:49] <seb128> sergiodj, right, I know, we finally got the things cleared up on the GNOME side now, which is why I'm checking on other things that got in the way since
[15:49] <sergiodj> seb128: ack, thanks
[15:52] <sergiodj> bryyce: two of the failing tests (diaspora-installer and munin) passed with a retrigger
[15:52] <sergiodj> I've re-retriggered backuppc with all-proposed
[15:57] <sergiodj> ... and it's failed again.  sigh
[15:58] <bryyce> sergiodj, well, some progress at least
[15:59] <bryyce> the backuppc problem is failure to ping localhost; that seems suspect
[16:00] <bryyce> similar issue when backuppc ran against glibc
[16:02] <bryyce> sergiodj, do you have an armhf login handy?  might be worth logging in and seeing if ping is failing generally
[16:04] <sergiodj> bryyce: not right now.  I had an armhf machine reserved last week
[16:05] <bryyce> the glibc error suggests it could be a permission/ownership issue on /bin/ping6
[16:06] <sergiodj> it's a very similar error to the one seen when using glibc/2.33-0ubuntu9 as the trigger
[16:06] <bryyce> yep
[16:11] <sergiodj> canonistack is misbehaving as usual, I might have to resort to other machines
[16:13] <bryyce> sergiodj, i'm going to give it one more retrigger just in case
[16:13] <sergiodj> bryyce: +1
[16:15] <bryyce> sergiodj, internet has lots of matches on this error message but suggested solutions are all over the map.  But sounds like network config issues might sometimes cause this issue
[16:16] <sergiodj> bryyce: yeah, a lot of solutions revolving around capabilities
[16:16] <sergiodj> I'll have my lunch now, will check more when I'm back
[16:17] <bryyce> ditto
[16:39] <bdmurray> seb128: I've uploaded update-manager
[17:13] <laney> welp
[17:13] <laney> now my /boot can't fit two kernels, initrds etc
[17:52] <seb128> bdmurray, thanks
[21:12] <mwhudson> oh yeah i saw that ping permission denied thing, was odd
[21:12] <bryyce> mwhudson, we're still scratching our heads; any ideas?  I see newpid has a similar ping6 error on armhf
[21:13] <bryyce> er, same error but with ping, not ping6
[21:20] <mwhudson> bryyce: no idea at all sorry
[21:20] <TJ-> does it have CAP_NET_RAW ? or alternatively setting net.ipv4.ping_group_range appropriately?
[21:20] <mwhudson> some seccomp nonsense maybe?
[21:21] <bryyce> mwhudson, could be; the internet is rife with idea variety here
[21:21] <mwhudson> i mean the fact that it only happens on armhf and armhf is the only arch that runs in a container is unlikely to be a coincidence
[21:22] <bryyce> mm, good point
[21:22] <mwhudson> but i would guess the containers are privileged so eh i don't know
[21:22] <mwhudson> trying on canonistack would be interesting but it's not cooperating?
[21:23] <bryyce> sergio hasn't been able to reproduce it in his armhf environments
[21:23] <bryyce> he's currently trying via ppa
[21:30] <mwhudson> those failing makes me think the container-ness is more likely to be the problem, then
[21:30] <mwhudson> in which case we can ignore the failure tbh
[21:54] <bryyce> mwhudson, yeah I'm leaning that way too...  https://paste.ubuntu.com/p/HQSxN9qRYQ/ maybe
[21:55] <bryyce> or just hint it
[21:55] <mwhudson> bryyce: uhh
[21:55] <mwhudson> i think hinting it would be less gross but ymmv :)
[21:56] <bryyce> well I can't upload hints, so it's extra work but agree it's probably cleaner
[21:58] <bdmurray> bryyce: I'm here for you bryyce
[21:58] <bdmurray> so much so I said it twice
[21:58] <bryyce> mwhudson, more seriously though, the patch would disable just the broken portion of the tests, whereas a hint would be a broader brush
[21:59] <mwhudson> true
[21:59] <bryyce> in any case, whatever is done for backuppc, newpid likely needs the same treatment
[22:01] <bryyce> since backuppc and newpid are both syncs currently, a hint probably does make more sense
[22:09] <sergiodj> ah, you're talking about the bug here too
[22:09] <sergiodj> :)
[22:09] <bryyce> sergiodj, sorry :-)
[22:10] <sergiodj> bryyce: no need!  I was AFK anyway ;)
[22:10] <bryyce> presently I'm writing a bug report
[22:10] <bryyce> sergiodj, if you're +1 on hinting backuppc I can put an MP in for it
[22:11] <sergiodj> bryyce: let me just see if reinstalling iputils-ping solves the issue; the test is running right now
[22:11] <bryyce> ok cool
[22:11] <bryyce> sergiodj, meanwhile, https://bugs.launchpad.net/ubuntu/+source/backuppc/+bug/1936437
[22:15] <sergiodj> bryyce: so, reinstalling iputils-ping seems to have solved the problem
[22:15] <sergiodj> bryyce: I can attach a debdiff to the bug if you're OK with it
[22:15] <sergiodj> and if we choose this route, then I'll also propose the change to the debian package
[22:18] <bryyce> sergiodj, yep please do
[22:18] <sergiodj> bryyce: on it
[22:20] <bryyce> sergiodj, huh, I'm glad I mentioned that, but it honestly felt like too much of a long shot
[22:20] <sergiodj> bryyce: yeah, who knows what's going on...  perhaps we should add the iputils package to this bug as well
[22:21] <bryyce> mwhudson, ftr: https://github.com/backuppc/backuppc/issues/177
[22:22] <mwhudson> bryyce: huh
[22:23] <bryyce> maybe https://bugs.launchpad.net/ubuntu/+source/iputils/+bug/1302192 ?
[22:23] <bryyce> that's pretty ancient though
[22:23] <sarnold> hold on..
[22:23] <mwhudson> it could be something like that but not definitely that
[22:24] <sarnold> net.ipv4.ping_group_range = 0	2147483647
[22:24] <sarnold> I think on focal and newer systemd (iirc) forces this ^^^ but on older releases it's set to 0 0
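[editor's note] The sysctl sarnold quotes is a pair of group IDs, inclusive at both ends: a process whose gid falls inside the range may open unprivileged ICMP (ping) sockets without CAP_NET_RAW. A small sketch of the check the kernel applies (the values are the ones from the log; note the kernel's own out-of-the-box default is believed to be the empty range "1 0", which denies everyone):

```python
def gid_may_ping(gid, ping_group_range):
    # net.ipv4.ping_group_range holds two gids (whitespace-separated,
    # often tab-separated in sysctl output); groups inside the
    # inclusive range may open unprivileged ICMP sockets.
    lo, hi = (int(x) for x in ping_group_range.split())
    return lo <= gid <= hi

print(gid_may_ping(1000, "0 2147483647"))  # True: the permissive systemd setting
print(gid_may_ping(1000, "0 0"))           # False: only gid 0 may ping
print(gid_may_ping(1000, "1 0"))           # False: empty range, nobody may ping
```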
[22:26] <sergiodj> sarnold: do you happen to know which Ubuntu base system the autopkgtest runners use?
[22:26] <sergiodj> I tried reproducing this bug on hirsute, focal and bionic, to no avail
[22:26] <sarnold> sergiodj: I don't know but xenial or bionic wouldn't surprise me
[22:26] <sergiodj> using an lxd armhf container, of course
[22:26] <sergiodj> ah, I didn't try xenial
[22:34] <sergiodj> bryyce: posted a patch there
[22:37] <bryyce> sergiodj, would it make sense to add iputils-ping as a Depends in t/control?  It is already a B-D though so may already be pulled in?
[22:37] <bryyce> sergiodj, also should do the reinstall on all arch's or limit the reinstall to just armhf?
[22:38] <sergiodj> bryyce: I *think* it's already there, although it doesn't seem to be pulled in explicitly
[22:39] <sergiodj> it's Priority: important, so I think it's there
[22:40] <sergiodj> bryyce: as for reinstalling only on armhf, I thought about it but decided to keep the patch simpler.  but if you think it's better, then I can certainly update the patch
[22:43] <bryyce> @sergiodj, I don't have strong feelings.  I'd probably limit it but as long as it works *shrug*
[22:43] <bryyce> @sergiodj, ok given that +1 to upload
[22:43] <sergiodj> bryyce: ACK, thanks
[23:28] <mwhudson> that's ... a thing i guess
[23:29] <mwhudson> do the base images lack xattrs or something? i thought we fixed all those bugs :(
[23:30] <sarnold> I thought I heard that fscaps had to be handled via maintainer scripts, and if setting the caps didn't work, then the maintainer script would have to set setuid on the executables
[23:30] <sarnold> .. with the consequence being that fscaps are basically not used
[23:30] <sarnold> (I haven't actually looked lately)