[08:32] <sahid> jamespage: o/ can I ask you to take care of this: https://code.launchpad.net/~sahid-ferdjaoui/ubuntu/+source/python-zeroconf/+git/python-zeroconf ?
[10:07] <jamespage> sahid: looking now
[10:09] <sahid> ack
[10:17] <jamespage> sahid: done, thank you
[11:37] <xnox> jamespage:  so v14 ceph is not happy with boost1.71, but v15 on paper will work fine. Whilst v15 tags exist on github it's not out yet.
[11:37] <jamespage> xnox: no its not
[11:37] <xnox> v15.1.0 -> is that like an RC?
[11:37] <jamespage> its a snapshot
[11:37] <jamespage> kinda a point in time for upstream dev to measure things against only
[11:38] <xnox> ok. when will it be out? / what is focal target version?
[11:38] <xnox> cause should i revert ceph to build against boost1.67  + wait for next ceph, or like should I be cherrypicking patches to make ceph compatible with boost1.71.....
[11:39] <xnox> note that boost-context & boost-coroutine is now available on s390x which might be of interest for ceph/s390x
[11:40] <jamespage> kinda - we don't use the beast frontend for the radosgw so its not directly relevant
[11:42] <jamespage> xnox: start of march is release date target
[11:42] <jamespage> we could bump in 15.1.0 in preparation for that
[11:44] <jamespage> yep - I'll do that, it makes the freeze for ubuntu easier...
[12:00] <Darkchaos> Any particular reason why #ubuntu-beginners-dev is now invite only?
[13:09] <cpaelzer> rbasak: doko: I'm wondering about paths for cpython .so files - I'd need someone to tell me if this is ok
[13:09] <cpaelzer> It has a bunch of files all following the same pattern, just one as example
[13:09] <cpaelzer> New upstream installs it already at: /usr/lib/python3.7/dist-packages/libvirtmod_lxc.cpython-37m-x86_64-linux-gnu.so
[13:09] <cpaelzer> Files were formerly at: /usr/lib/python3/dist-packages/libvirtmod.cpython-37m-x86_64-linux-gnu.so
[13:10] <cpaelzer> is the new path that it has right after build ok?
[13:11] <cpaelzer> or do I have to mess around with it to get paths like we had in the past?
[13:11] <doko> we don't use /usr/lib/python3.X
[13:12] <doko> doesn't dh-python move files around?
[13:12] <cpaelzer> I had another issue on build when handling dh_install
[13:12] <cpaelzer> maybe dh_python would move it around later
[13:12] <doko> yes, it's called after dh_install
[13:13] <cpaelzer> ah ok, then I can recheck paths afer the other issue is out of the way
[13:15] <cpaelzer> .install files had usr/lib/python3.? which didn't install anything on the old package either, but now the "fail to find" seems fatal
[13:15] <cpaelzer> does that ring a bell doko?
[13:16] <doko> hmm, no. I saw some .install files which hard-coded 3.7
[13:17] <cpaelzer> I have removed it, now dh_install is happy and indeed (as you suggested) dh_python moved things around correctly
[13:17] <cpaelzer> the list of files in the final .deb matches the old .deb
[13:17] <cpaelzer> so I guess I'm good here
[13:17] <cpaelzer> thanks doko for the hint about dh_python running after dh_install
[13:17] <doko> sounds ok
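(The ABI-tagged tail on the module paths cpaelzer quotes - `cpython-37m-x86_64-linux-gnu.so` - is the interpreter's extension-module suffix, which is why the filename changes between Python versions while dh_python only needs to move the directory. A quick stdlib-only sketch to inspect it; the exact values printed depend on the interpreter you run it with:)

```python
# Sketch: the "cpython-37m-x86_64-linux-gnu.so" tail on the paths above is
# the per-interpreter extension-module ABI suffix. It can be inspected
# directly; output varies with the running interpreter and platform.
import importlib.machinery
import sysconfig

print(sysconfig.get_config_var("EXT_SUFFIX"))     # the ABI-tagged suffix
print(importlib.machinery.EXTENSION_SUFFIXES)    # all suffixes the import system accepts
```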
[13:18] <doko> when I close the lid, (supposed to suspend), and then re-open, nothing happens. and I have to force a power-off and restart (Carbon X1, current focal). Any idea which information to provide for a bug report?
[13:18] <doko> xnox, apw: ^^^
[13:19] <doko> xnox: this is the oem kernel, as suggested
[13:19] <apw> doko, file a bug against linux-oem or -osp1 depending which it is, and the hwe peeps can have a look
[13:21] <tjaalton> should be the same with master kernel
[13:21] <tjaalton> there's no diff
[13:21] <tjaalton> besides the fact that oem needs to rebase against -12
[13:23] <apw> oh focal, ok then against linux and let us know the number
[13:24] <doko> apw: LP: #1861837
[13:32] <tjaalton> doko: try latest master kernel
[13:32] <tjaalton> I mean generic, distro kernel
[13:42] <xnox> apw:  tjaalton: why use linux-5.4 when linux-oem-5.4 is available?
[13:42] <tjaalton> xnox: it should be identical at this point
[13:43] <apw> xnox, because oem-5.4 is a higher risk proposition (in principle)
[13:43] <tjaalton> but isn't,
[13:43] <xnox> ok
[13:43] <tjaalton> because needs a rebase with the current master
[13:43] <xnox> apw:  linux-oem-5.4 worked better for doko, than linux-5.4 =)
[13:43] <tjaalton> then oem will regress very soon
[13:44] <apw> xnox, that doesn't change its risk envelope
[13:44] <xnox> ack
[13:46] <doko> apw, tjaalton: anyway, same behaviour
[13:48] <apw> doko, if you could grab the info the bug is asking for - that has the actual full hardware details in it. we must have one of those machines -- i assume they have been enabled
[13:50] <tjaalton> there are several generations of X1c
[13:51] <tjaalton> we're at gen7 now I think
[13:51] <apw> tjaalton, right, I assume the apport info will tell us which of them it is, and whether we have one
[13:53] <doko> apport-collect failed: module 'cgi' has no attribute 'parse_qs'
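(doko's apport-collect traceback is a known Python 3.8 change: the long-deprecated `cgi.parse_qs` alias was removed in 3.8, and the function has lived in `urllib.parse` since Python 2.6. A minimal sketch of the drop-in replacement, with a made-up query string:)

```python
# cgi.parse_qs was removed in Python 3.8; urllib.parse.parse_qs is the
# drop-in replacement. Repeated keys collect into lists.
from urllib.parse import parse_qs

query = "field.tag=suspend&field.tag=x1&field.status=NEW"  # illustrative only
print(parse_qs(query))
```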
[13:54] <apw> doko, :) it is going to be a long day
[14:00] <juliank> To be fair, as mentioned before kernel 5.4 has completely broken Intel graphics support and will lock up your machine
[14:00] <juliank> well, any Intel graphics
[14:02] <tjaalton> works here
[14:03] <juliank> tjaalton: There's a massive issue, there are reports it got better in 5.5, but no fix for 5.4 yet
[14:03] <juliank> It's tracked in https://gitlab.freedesktop.org/drm/intel/issues/673
[14:03] <juliank> It only shows if there's quite a bit of activity on the screen
[14:03] <juliank> I feel like 5.4-12 got slightly better than 5.4-9, but might be placebo
[14:05] <juliank> Also, you can get a lot more hangs if you switch mesa driver from i915 to iris, it seems to trigger it more often
[14:06] <juliank> Basically playing video in Chrome, or having the spotify snap running in the foreground, can trigger it because it seems to render a lot
[14:06] <tjaalton> that was filed with 5.3
[14:06] <tjaalton> and I've used chrome to watch the australian open (eurosportplayer.com), because firefox leaks memory. no hangs
[14:07] <juliank> I can tell you that it hung 2-3 times a day last time I used it normally
[14:07] <juliank> Requiring reboots
[14:07] <tjaalton> then it should be bisectable?
[14:07] <juliank> My bug report was closed as a duplicate of that one
[14:08] <tjaalton> -13 had a backported patch for this
[14:08] <tjaalton> which was sent to stable over a month ago but never applied
[14:09] <juliank> -13?
[14:09] <tjaalton> 5.4.0-13
[14:10] <tjaalton> dunno if it's still in proposed
[14:10] <juliank> 5.4.0-12 was the latest I saw
[14:10] <tjaalton> kerneltoast is testing it as well, dunno how it's going
[14:13] <juliank> tjaalton: Well, it's good to know people are aware of it, last time I asked nobody bothered replying
[14:15] <tjaalton> I didn't notice it before late last week when it was assigned to me
[14:16] <juliank> Wondering why I can't find 5.4.0-12 in launchpad publishing history or the git repo
[14:16] <juliank> this is dod
[14:16] <juliank> *odd
[14:16] <ricotz> the last exiv2 0.27.2-8ubuntu1 upload dropped a CVE patch
[14:16] <tjaalton> the tag is there
[14:17] <tjaalton> it's just a rebuild for new binutils
[14:17] <tjaalton> -13 has actual meat
[14:17] <juliank> Am I looking at the wrong git repo? https://kernel.ubuntu.com/git/ubuntu/ubuntu-focal.git/
[14:18] <tjaalton> I think that's a mirror?
[14:18] <tjaalton> git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
[14:18] <seb128> ricotz, LocustusOfBorg isn't online atm but I was planning to point that out to him indeed
[14:18] <juliank> tjaalton: ack
[14:18] <tjaalton> and some browseable url somewhere
[14:18] <juliank> https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
[14:18] <tjaalton> ah
[14:18] <juliank> just s/git:/http:/ :D
[14:19] <tjaalton> right
[14:20] <ricotz> seb128, thanks
[14:20] <seb128> ricotz, np!
[14:21] <juliank> tjaalton: Hah, I was asking about this in #kernel on Jan 21 - should have just reported a bug, but wanted to avoid pointless work in case it had already been reported
[14:22] <juliank> tjaalton: Should have just reported directly then, or days earlier when it happened first time
[14:23] <juliank> tjaalton: I reported it to Intel on Jan 11, Jan 21 is when they marked it as a duplicate (https://gitlab.freedesktop.org/drm/intel/issues/963)
[14:24] <tjaalton> some say it was caused by the cve fixes
[14:24] <tjaalton> which is why 5.3 would be affected as it seems to be for some
[14:24] <juliank> People say it was introduced in 5.3.11
[14:24] <juliank> and I'm on 5.3.9 atm
[14:24] <tjaalton> so there are possibly two separate issues
[14:24] <juliank> I should pick 5.3 from eoan instead of using the last one that was in focal
[14:25] <juliank> or pick the 5.4 testing kernel
[14:25] <tjaalton> 5.3.11 is where the cve fixes were included
[14:25] <juliank> ah, ok
[14:26] <tjaalton> compare eoan 5.3.0-24 and a later one, 5.3.11 was merged in -25
[14:26] <tjaalton> I've asked the same on bug 1861294
[14:32] <cpaelzer> component mismatch proposed seems to update slower than usual - we are still on "Generated: Mon Feb 3 21:12:29 GMT 2020"
[14:32] <cpaelzer> that is like 17h ago
[14:32] <cpaelzer> is there anything ongoing that would stall this and/or does anything need to be restarted?
[14:32] <cpaelzer> I hit all refresh buttons I had on https://people.canonical.com/~ubuntu-archive/component-mismatches-proposed.html - please don't tell me it is some cache that doesn't want to go away
[16:53] <gaughen> Wimpress, juliank helped me get to the bottom of my upgrade issues - seems I'm stuck until libgl1-mesa-dri lands in focal
[16:53] <juliank> tjaalton: sil2100 ^
[16:54] <juliank> libgl1-mesa-dri now has a higher version in bionic-updates than in the focal release pocket, breaking upgrades
[16:57] <sil2100> Ah, yeah, lovely
[16:57] <sil2100> The new mesa is stuck in focal-proposed right now
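(The upgrade breakage comes from Debian version ordering: apt will never "upgrade" to a lower version, so a bionic-updates package sorting above focal's blocks the release upgrade. As a sketch, here is a simplified pure-Python version of dpkg's `verrevcmp()` string comparison - it ignores the epoch and debian-revision handling of the full algorithm, and the version strings below are made-up examples, not the actual mesa versions involved:)

```python
# Simplified sketch of dpkg's verrevcmp() version-string comparison.
# Ignores epochs and the debian-revision split of the full algorithm.
# Returns <0, 0, >0 like a C comparator; '~' sorts before everything.

def _order(c):
    if c == "~":
        return -1
    if c.isdigit():
        return 0          # digits act like end-of-string in the non-digit pass
    if c.isalpha():
        return ord(c)
    return ord(c) + 256   # other characters sort after all letters

def verrevcmp(a, b):
    i = j = 0
    while i < len(a) or j < len(b):
        # compare the non-digit run character by character
        while (i < len(a) and not a[i].isdigit()) or (j < len(b) and not b[j].isdigit()):
            ac = _order(a[i]) if i < len(a) else 0
            bc = _order(b[j]) if j < len(b) else 0
            if ac != bc:
                return ac - bc
            i += 1
            j += 1
        # compare the digit run numerically: skip leading zeros, then
        # compare digit by digit; a longer run of digits wins
        while i < len(a) and a[i] == "0":
            i += 1
        while j < len(b) and b[j] == "0":
            j += 1
        first_diff = 0
        while i < len(a) and a[i].isdigit() and j < len(b) and b[j].isdigit():
            if not first_diff:
                first_diff = ord(a[i]) - ord(b[j])
            i += 1
            j += 1
        if i < len(a) and a[i].isdigit():
            return 1
        if j < len(b) and b[j].isdigit():
            return -1
        if first_diff:
            return first_diff
    return 0

# hypothetical version strings, just to show the ordering rules
print(verrevcmp("19.2.8", "19.2.1") > 0)       # newer point release sorts higher
print(verrevcmp("20.0.0~rc1", "20.0.0") < 0)   # '~' pre-releases sort lower
```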
[16:59] <sil2100> Ok, and the openscad/s390x autopkgtest seems to fail always on every new mesa we had in -proposed
[16:59] <sil2100> tjaalton: ^
[16:59] <juliank> sil2100: also seems to depend on libglvnd, which is a valid candidate
[16:59] <juliank> probably something in update_output to look at
[16:59] <Laney> It's a regression, I've been asking for investigation for a while now
[16:59] <Laney> Timo said he had some trouble accessing an s390x machine
[16:59] <sil2100> Yeah, looks like a regression
[17:00] <Laney> Which I'm sure is a problem we can solve
[17:00] <rbasak> cjwatson, wgrant: FYI, I've reopened the bug on removing python-tickcount entirely from the archive here: bug 1740160
[17:01] <juliank> Laney: It might be worth it to pull it, and replace it with another 19.2.8 for now, if this is going to take some time
[17:01] <Laney> Probably sane
[17:02] <Laney> Unless there are shlibs considerations
[17:02] <cjwatson> rbasak: LP no longer uses it, so as you wish
[17:02] <rbasak> Thanks!
[17:02] <juliank> Laney: Nothing has Depends: mesa in britney excuses, so I guess should be good?
[17:03] <juliank> Or we declare openscad/s390x as too boring to fix
[17:03] <juliank> Who's running 3D CAD software on s390x anyway?
[17:04] <Laney> It's clearly a regression in the new mesa, so I have been asking for some minimal investigation to determine whether we should care or not
[17:04] <Laney> I'm not personally willing to hint it based on 's390x, nobody cares' alone
[17:22] <juliank> Laney: I have uploaded a fix for ubuntu-release-upgrader to not get into such situations anymore and refuse to upgrade instead, so it should not happen to anyone else anymore
[17:22] <juliank> They'll just be told they can't upgrade to focal yet
[17:23] <Laney> cool
[17:23] <Laney> I feel like an sru-release fix would be nice too
[17:24] <juliank> I also feel like this should be part of SRU releasing checks, yes
[17:24] <juliank> But some stuff gets SRUed first and then copied up, so it does not always work
[17:24] <Laney> yeah, I guess you'd need to be able to override it, but that's fine
[17:25] <Darkchaos> Is there a specific policy/requirement to strip the versym/verneed tables from elf headers?
[17:48] <Darkchaos> or put differently: Is this even "legal"? Compared to other binary distributions ubuntu seems to have stripped quite a few tables (symtab), but the deletion of versym makes ld/glibc fail an assertion, which is what I'm trying to solve
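(Darkchaos's question can be checked directly on a binary: the GNU symbol-versioning data lives in the `.gnu.version` (versym) and `.gnu.version_r` (verneed) sections. As a sketch, a stdlib-only section lister - it assumes a 64-bit little-endian ELF, e.g. the running interpreter on amd64, and does not handle 32-bit or big-endian files:)

```python
# Sketch: list an ELF binary's section names, e.g. to check whether the
# .gnu.version / .gnu.version_r versioning tables are present.
# Assumes a 64-bit little-endian ELF; uses only the stdlib.
import struct
import sys

def elf_section_names(path):
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF", "not an ELF file"
    # Elf64_Ehdr fields after the 16-byte e_ident
    (_, _, _, _, _, e_shoff, _, _, _, _,
     e_shentsize, e_shnum, e_shstrndx) = struct.unpack_from("<HHIQQQIHHHHHH", data, 16)
    # Elf64_Shdr entries: sh_name, sh_type, sh_flags, sh_addr, sh_offset, ...
    sections = [struct.unpack_from("<IIQQQQIIQQ", data, e_shoff + i * e_shentsize)
                for i in range(e_shnum)]
    shstrtab_off = sections[e_shstrndx][4]   # sh_offset of the name string table
    names = []
    for sh in sections:
        start = shstrtab_off + sh[0]         # sh_name indexes into .shstrtab
        names.append(data[start:data.index(b"\x00", start)].decode())
    return names

names = elf_section_names(sys.executable)
print(".gnu.version" in names, ".gnu.version_r" in names)
```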
[17:50] <kerneltoast> tjaalton: i haven't had any gpu hangs with the 5.4 kernel you built
[17:51] <kerneltoast> It's been a few days and i ran chromium+youtube for hours
[17:51] <juliank> nice
[17:52] <tjaalton> sil2100: all the more reason to badtest openscad/s390x for now :/
[17:52] <tjaalton> kerneltoast: good to hear
[17:55] <kerneltoast> tjaalton: people on the bug i filed are still complaining of hangs though o_O
[18:29] <tjaalton> kerneltoast: probably not the same bug then
[18:48] <joelkraehemann> hi all
[18:49] <joelkraehemann> https://launchpadlibrarian.net/463439921/buildlog_ubuntu-focal-amd64.gsequencer_3.0.13-1_BUILDING.txt.gz
[18:49] <joelkraehemann> howto fix this?
[18:49] <joelkraehemann> E: Unable to correct problems, you have held broken packages.
[18:53] <sarnold> joelkraehemann: I saw a similar problem debugged yesterday with the advice to create a bare build chroot and try installing the build dependencies one at a time until you find why they aren't installable
[18:58] <joelkraehemann> just tried to search the packages on https://packages.ubuntu.com
[18:58] <joelkraehemann> I get: Internal Server Error
[19:42] <RikMills> Laney: openscad s390x test fails. I got the number of failing tests down from 839 to 14 at build time by extending this patch to cover s390x
[19:42] <RikMills> vorlon: openscad s390x test fails. I got the number of failing tests down from 839 to 14 at build time by extending this patch to s390x
[19:42] <RikMills> maybe that gives a clue
[19:42] <RikMills> sorry, this patch https://salsa.debian.org/knielsen-guest/openscad/blob/master/debian/patches/Work-around-Mesa-llvmpipe-bug-on-MIPS-which-causes-crashe.patch
[19:44]  * RikMills glares at copy/paste fail
[19:55]  * sladen was just using openscad ... but not on s390x...
[21:51] <tjaalton> Laney: build-logs with tests enabled show that s390x fails the same way as sparc64, both are big-endian, and the failures seem to point to this merge-request https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1899
[21:51] <tjaalton> which can't be reverted
[21:51] <tjaalton> I've poked anholt about it
[21:56] <seb128> tumbleweed, thanks for the meson build fix! I meant to look at that but kept being delayed by other things
[21:58] <seb128> tumbleweed, did you plan to submit the patch upstream?
[22:22] <tumbleweed> seb128: nope, but it should be forwardable
[22:31] <mwhudson> blargh why is git tag -s not working on focal
[22:32] <mwhudson> oh wait Author: Michael Hudson-Doyle <Michael Hudson-Doyle michael.hudson@ubuntu.com>
[22:38] <tjaalton> RikMills: looks like the current version in the archive has the same 14 failures
[22:38] <tjaalton> so wonder if the workaround itself helps any
[22:43] <RikMills> tjaalton: when tested against mesa in release pocket, yes. against mesa in proposed, it failed 859 tests
[22:46] <RikMills> also failed 836 tests when it built in the proposed pocket
[22:48] <RikMills> but yes, the workaround seems like it would at best give parity at ~14 fails.
[22:48] <tjaalton> right