[03:46] tjaalton: Would you mind taking a look at https://salsa.debian.org/xorg-team/xorg/-/merge_requests/17 ?
[03:46] -ubottu:#ubuntu-devel- Merge 17 in xorg-team/xorg "60x11-common_xdg_path: Inject the variables set into the session via dbus-update-activation-environment." [Opened]
[04:26] liushuyu: hi, sure
[04:27] tjaalton: Thank you!
[04:27] bluesabre: the rationale is missing?
[04:28] liushuyu: got an sru bug already?
[04:29] tjaalton: Not yet. Do you want me to create one? I am still very new to this SRU process
[04:30] liushuyu: yes, since you know what's broken and how to fix it. the wiki has info on that
[04:30] tjaalton: Thank you! Will do in a moment
[04:58] tjaalton: I have opened an SRU bug for it: https://bugs.launchpad.net/ubuntu/+source/llvm-toolchain-15/+bug/2008755
[04:58] -ubottu:#ubuntu-devel- Launchpad bug 2008755 in llvm-toolchain-15 (Ubuntu) "[SRU] Upgrade to 15.0.7 on Kinetic and Jammy" [Undecided, New]
[05:11] cpaelzer: it's not always the same test that times out - I wonder if QEMU has gotten slower for some of these use cases. I uploaded a package that adds some timing stats to a PPA; I'll do some comparison runs to see if I can narrow it down
[05:11] liushuyu: thanks
[05:14] tjaalton: No problem! I am not sure if I have filed the SRU request correctly; some of the links in https://wiki.ubuntu.com/StableReleaseUpdates are dead. Also, do I need to tag someone from the SRU team for this?
[06:12] dannf: interesting, I didn't spot that difference before
[06:12] dannf: that might also explain why it works just fine on my s390x machine where I tried to reproduce
[06:12] dannf: maybe just more cpu/memory or less contention overall
[06:15] dannf: indeed test_ovmf_4m_ms or test_ovmf_ms so far, hmm
[06:24] dannf: I'm wrapping these in pytest --durations to check if any/all tests slowed down ...
[06:28] dannf: we can assume my machine is faster, but there is also no difference: qemu 7.0: 5.21s-7.36s, qemu 7.2: 5.48s-7.42s - that is the same within noise
[06:28] dannf: the test timeout (that you most likely increased in your PPA) is 60s, which is one order of magnitude away
[06:29] dannf: let me reduce mem/cpu and see where I end up then
[07:15] dannf: I've gone a bit deeper down this rabbit hole and I think we are onto a real issue
[07:16] liushuyu: nope
[07:38] tjaalton: Thank you!
[07:55] liushuyu: which links are dead btw?
[07:57] tjaalton: In https://wiki.ubuntu.com/StableReleaseUpdates#Phasing, the link "blog post by Brian Murray" leads to a 404 page
[07:58] tjaalton: Yeah, hey, sorry about the lack of a good description there; I thought the prior discussion on IRC covered it. I don't know the specifics behind it, just that other Xsession scripts use that to fix the same problem, and after the MR it works here too. Also, I used `salsa`, so a bit blindly. :P
[08:01] JackFrost: but what is the problem? :)
[08:02] liushuyu: okay.. bdmurray ^ do you still have your blog online somewhere?
[08:04] liushuyu: the web never forgets, though: http://web.archive.org/web/20170719231238/http://www.murraytwins.com/blog/?p=127
[08:13] tjaalton: Ah OK. So setting the session name in XDG_CONFIG_DIRS isn't making it into the env, such that /etc/xdg/xdg-xubuntu is getting ignored and Xubuntu is getting stock upstream settings, ignoring all of our settings in xubuntu-default-settings. For us, pretty critical. :3
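For readers without the MR open: the change under discussion adds an Xsession.d hook that pushes the XDG variables into the D-Bus activation environment (and the systemd user manager) so programs started outside the plain Xsession process tree still see them. A minimal sketch of such a hook follows; the filename comes from the MR title, but the variable list and contents here are illustrative, not the actual patch:

    # Illustrative sketch of an /etc/X11/Xsession.d/ hook along the lines of
    # 60x11-common_xdg_path (not the actual contents of MR 17).
    # Xsession.d snippets are sourced by Xsession under sh, so no shebang.
    if command -v dbus-update-activation-environment >/dev/null 2>&1; then
        # Upload the XDG search-path variables set earlier in the session into
        # the D-Bus activation environment; --systemd also forwards them to
        # the systemd --user manager so user services inherit them.
        dbus-update-activation-environment --systemd \
            XDG_CONFIG_DIRS XDG_DATA_DIRS XDG_CONFIG_HOME XDG_DATA_HOME
    fi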
[08:15] can someone retry these tests? https://pastebin.canonical.com/p/t6zbFz97X6/ they are all arm64 failures during the test preparation; the queues are heavily loaded, but these tests might enable the ocaml cluster to migrate
[10:10] I'd also like https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=amd64&package=openmsx&trigger=tcl8.6%2F8.6.13%2Bdfsg-2 (there's a "sleep 3" which might not have been enough during the past weeks) and https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=amd64&package=plplot&trigger=tcl8.6%2F8.6.13%2Bdfsg-2 (testbed wasn't ready in time); both are for amd64 where queues are fairly
[10:10] small at the moment
[10:10] (500 only!)
[10:17] adrien:.
[10:18] thanks! :)
[10:39] JackFrost: ack
[10:40] I'd love to know what changed such that it no longer works, but I'm not really sure where to dig.
[10:56] May I ask someone to re-run a bunch of autopkgtests for me, please?
[10:56] https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=amd64&package=pycurl&trigger=curl/7.88.1-1ubuntu1&trigger=pycurl/7.45.2-3
[10:56] https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=arm64&package=pycurl&trigger=curl/7.88.1-1ubuntu1&trigger=pycurl/7.45.2-3
[10:56] https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=armhf&package=pycurl&trigger=curl/7.88.1-1ubuntu1&trigger=pycurl/7.45.2-3
[10:56] https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=ppc64el&package=pycurl&trigger=curl/7.88.1-1ubuntu1&trigger=pycurl/7.45.2-3
[10:56] https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=s390x&package=pycurl&trigger=curl/7.88.1-1ubuntu1&trigger=pycurl/7.45.2-3
[10:56] dannf: I tracked it down and found an offending commit
[10:57] is there a way to pass a list of archs btw? :P
[10:57] dannf: but while upstream might one day have a fix, I would still appreciate considering bumping the timeouts, or any other way that lets qemu pass for now
[10:57] dannf: here is the report https://gitlab.com/qemu-project/qemu/-/issues/1520
[10:57] -ubottu:#ubuntu-devel- Issue 1520 in qemu-project/qemu "x86 TCG acceleration running on s390x with -smp > host cpus slowed down by x10" [Opened]
[11:00] danilogondolfo: Clicking. And no, AFAIK one link is for one test run (so single arch)
[11:01] thank you, schopin :)
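Since each request.cgi link queues a single run for a single arch, a small loop can at least generate the per-arch links instead of editing them by hand; each one still has to be submitted individually. A rough sketch using the pycurl request above (release, package, and triggers are just that example):

    # Print one autopkgtest retry URL per architecture; request.cgi takes
    # one arch per run, so each printed link is submitted separately.
    release=lunar
    package=pycurl
    triggers="trigger=curl/7.88.1-1ubuntu1&trigger=pycurl/7.45.2-3"
    for arch in amd64 arm64 armhf ppc64el s390x; do
        echo "https://autopkgtest.ubuntu.com/request.cgi?release=${release}&arch=${arch}&package=${package}&${triggers}"
    done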
[13:26] is it possible to also trigger these ones again? https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=arm64&package=coq&trigger=ocaml%2F4.13.1-4ubuntu1 https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=arm64&package=diffoscope&trigger=ocaml%2F4.13.1-4ubuntu1
[13:26] they were triggered not that long ago but failed for the same reasons
[13:26] there's also https://autopkgtest.ubuntu.com/request.cgi?release=lunar&arch=arm64&package=llvm-toolchain-13&trigger=ocaml%2F4.13.1-4ubuntu1 but I don't know how heavy these are
[13:56] .
[14:25] hello, is there any way to get all the MPs I have reviewed or taken part in via LP?
[14:28] utkarsh2102: https://launchpad.net/~/+merges
[14:29] Actually sorry, that might just be your own branches
[14:29] yes! these are just my branches that I proposed
[14:29] There's https://launchpad.net/~/+activereviews but that only gives you ones that are still open
[14:29] not the ones that I reviewed
[14:29] So possibly not
[14:29] yes!
[14:30] uh oh :(
[14:32] If you know the context (project, package, whatever) then there are also views like https://launchpad.net/launchpad-project/+merges
[14:32] I don't think there's precisely what you asked for right now though
[14:37] vorlon: I see you recently seeded (inetutils-)telnet in lunar.standard. Am I correct in my assumption that Foundations should be the owning team of src:inetutils and we should prepare a MIR for it? (We were maintaining netkit-telnet in Kinetic and earlier.) Also, see the discussion in bug #1980663, which I guess does not apply to src:inetutils anymore (not a false-positive anymore).
[14:37] -ubottu:#ubuntu-devel- Bug 1980663 in plzip (Ubuntu) "[MIR] false-positives, do not promote" [Undecided, Fix Released] https://launchpad.net/bugs/1980663
[15:18] Hey! Who do I ping about serious Ubiquity issues where it's trying to install as the wrong user? bug 2008731
=== arraybolt3[m] is now known as Guest4510
[15:27] And since Ubottu totally messed up, https://launchpad.net/bugs/2008731
[15:54] slyon: re: inetutils and Foundations, yes
[15:54] slyon: this came up from jira card review when I discovered I'd accidentally removed netkit-telnet before inetutils was ready to go
[16:02] vorlon: Who do you think would be the best person to ping about the Ubiquity installing-as-the-wrong-user bug for Edubuntu (if not yourself, as I know you're busy)?
[16:05] vorlon: ack. What's the jira ticket number that came from? I've created https://bugs.launchpad.net/ubuntu/+source/inetutils/+bug/2008789 to track this and hand out the MIR work
[16:05] -ubottu:#ubuntu-devel- Launchpad bug 2008789 in inetutils (Ubuntu) "[MIR] inetutils" [Undecided, Incomplete]
[16:06] Eickmeyer: I don't really know. ubiquity itself is being deprecated for Ubuntu Desktop, and support for the installer for flavors is best-effort
[16:06] Eickmeyer: maybe file a bug against ubiquity in LP and see who bites
[16:06] slyon: FR-2983
[16:07] vorlon: Yeah, done. https://launchpad.net/bugs/2008731. Would've gone with the new installer, but the desktop team never got back to any of the flavors on the "how" aspect, as we discussed during our flavor sync meeting.
[16:07] -ubottu:#ubuntu-devel- Launchpad bug 2008731 in ubiquity (Ubuntu) "Ubiquity crashed on first test installation of Edubuntu" [Undecided, Confirmed]
[16:08] And since flavor leads don't have direct commit access, that means any bugs are going to have some very unhappy flavor leads, especially show-stoppers like this one.
[16:08] I wouldn't think that subiquity would be the way to go for bootstrapping a new flavor this cycle
[16:10] Perhaps not, but, with my Studio hat on, we were very interested, and that knowledge could've been useful.
[16:24] (and transferable)
[18:56] rbasak: hi! dropped a small text on MM, could you look at it when you have a sec?
[19:00] ack
=== Guest4510 is now known as arraybolt3[m]
[22:20] Is it expected that the python3 module "lsb_release" no longer exists in Lunar? Calamares has a Python file that runs "from lsb_release import get_distro_information" and it's erroring out because the lsb_release module is missing, breaking Lubuntu's ability to install.
[22:20] I'm about to test more thoroughly, but my Internet has gone into slow mode, so I'm asking here in case this rings a bell for anyone (I saw there was some stuff happening with debootstrap and lsb_release recently).
[22:35] arraybolt3: lsb-release was replaced by lsb-release-minimal in Debian, which is stripped down to the bare minimum. What did you depend on before to get an lsb_release python module?
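For anyone hitting the same change, a quick sanity check on an affected system is sketched below; it assumes the module used to live at the usual dist-packages path, and the exact dpkg output will vary:

    # Is the old python module still importable, and does any installed
    # package still ship the file it used to come from?
    python3 -c 'import lsb_release' 2>/dev/null \
        && echo "lsb_release module present" \
        || echo "lsb_release module is gone"
    dpkg -S /usr/lib/python3/dist-packages/lsb_release.py \
        || echo "no installed package ships lsb_release.py"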
[22:35] looks like /usr/lib/python3/dist-packages/lsb_release.py came from the lsb-release package
[22:35] anyway, no longer
[22:35] so yes, it's expected. I don't know if there's an equivalent that looks at /etc/os-release?
[22:35] Tar.
[22:36] OK, thanks for the update.
[22:36] arraybolt3: I suggest python3-distro https://distro.readthedocs.io/en/latest/
[22:37] The problem is either we need something that provides the lsb_release module so Calamares works mostly unmodified, OR we need an Ubuntu delta, OR we need to get Calamares upstream to change their code.
[22:37] This isn't a Calamares bug, so the latter two options seem weird.
[22:37] But I'm not sure the first one is even an option.
[22:38] This is also a showstopper regression - the installer crashes because of this during a normal installation procedure.
[22:38] arraybolt3: python3-distro uses /etc/os-release but falls back to lsb_release, according to its docs
[22:38] Right, but is it API-compatible with the original lsb_release module?
[22:39] probably not, but you don't need that
[22:39] actually I may be wrong about where the code is, so it may not be a big problem.
[22:40] Yeah, I'm wrong, this is code in Ubuntu only, not from Calamares itself. Panic averted, never mind.
[22:40] OK, I'll try and use python3-distro to replace lsb_release and make things work again.
[22:42] (I mistakenly thought that this had broken a critical part of Calamares, when in fact it broke code that was written specifically for Lubuntu, so it's easy to update that.)
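For anyone carrying similar glue code, a rough sketch of porting from the removed lsb_release module to python3-distro is below. This is not necessarily what Lubuntu ended up shipping; the dict keys mirror what lsb_release.get_distro_information() used to return, and note that distro.id() is lowercase ("ubuntu") where the old module reported "Ubuntu", so callers comparing the ID may need adjusting.

    # Sketch of a python3-distro based stand-in for the removed lsb_release
    # module (illustrative; adjust key handling to what the caller expects).
    import distro

    def get_distro_information():
        return {
            "ID": distro.id(),                 # "ubuntu" (old module: "Ubuntu")
            "RELEASE": distro.version(),       # e.g. "23.04"
            "CODENAME": distro.codename(),     # e.g. "lunar"
            "DESCRIPTION": distro.name(pretty=True),
        }

    if __name__ == "__main__":
        print(get_distro_information())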