[00:06] gemclip: AFAIK, this concept of a group in a group is not possible on Linux, only on Windows
[00:08] making accountants managers is not great anyway
[00:10] gemclip: aha; as sdeziel says, there's no way to do 'nested groups' -- you could rename the group you've got, or you could add all members of one group to the other group.. *maybe* it's possible that sssd would know how to do something similar with AD groups, but that's a big guess on my part
[02:42] thanks for the feedback
[06:38] jamespage: long shot, but have you noticed any issues with the vnc console for the jammy server cloud image on openstack nova with qemu? The image works in that one can SSH and use it, but it's just a black screen on the console; I tried to see if it's actually intended or a bug but didn't find anything
[06:50] tobias-urdin: I heard of issues where, depending on the default virtual graphics config of your guest, it could conflict with the graphics drivers in the guest - but that was gnome-boxes and the real UI of Ubuntu Desktop - yours being a server image should be a different thing :-/
[06:50] tobias-urdin: I just tried a newly spawned jammy server image, it was happy via vnc and spice for me
[06:51] tobias-urdin: since ssh works, let us move up one by one - do you have a console when you `virsh console `
[06:51] tobias-urdin: do you have the problem only with the embedded web-vnc of openstack or also with a dedicated vnc viewer and/or virt-viewer?
=== gschanuel9 is now known as gschanuel
[07:31] doing virsh console gives me the login prompt; checking console.log, which is libvirt reading the VM's tty, shows the complete boot output and login prompt
[07:31] the vnc console only shows this though https://ibb.co/fH7ZkBn
[07:34] cpaelzer: ^
[07:38] thanks tobias-urdin, that seems stuck with very early boot content and not updated since then
[07:51] yeah, doesn't show anything after that echo line from grub
[07:52] the issue is solved when using jammy-server-cloudimg-amd64.img instead of jammy-server-cloudimg-amd64-disk-kvm.img
[07:52] thanks for helping out though!
[08:22] tobias-urdin: oh interesting on those images
[08:23] tobias-urdin: can you check if the bad image has no linux...modules-extra installed?
[08:23] that is one common difference (as it is not meant to need it)
[08:23] but there have been occasions where - due to that - drivers were missing
[08:24] tobias-urdin: if your bad -kvm image indeed has ...modules-extra missing, please install those matching your kernel and reboot
[08:24] tobias-urdin: if it works that is the reason, I'd then ask you to check which modules got loaded in addition and report that
[08:24] tobias-urdin: maybe there is one more that we need to move extra->base
[08:51] sorry I don't really have time to look into it more in detail right now, but I assume it's related to graphics since openstack nova creates model=cirrus in the libvirt xml
[08:51] so some setting or driver is missing in the disk-kvm image
[08:51] yeah - which is just what I said
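A minimal sketch of the check suggested above, assuming the usual Ubuntu kernel package naming (linux-modules-extra-<kernel version>); exact package names for the -kvm image flavour may differ:

    # check whether the extra modules package is present in the bad -kvm image
    dpkg -l 'linux-modules-extra-*'
    # if it is missing, install the variant matching the running kernel and reboot
    sudo apt install linux-modules-extra-$(uname -r)
    sudo reboot
    # afterwards, list loaded modules to spot what got loaded in addition
    lsmod | sort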
=== polymorp- is now known as polymorphic
[11:58] good morning
[12:03] good morning
[12:39] good morning :)
[13:55] rbasak: hi, so what's the story behind the mariadb ftbfs so far that you dug up?
[13:55] from my pov, it looks like mariadb decides to use io_uring support, and that requires a bit more locked memory (memlock rlimit), and that limit in the builders is too low, so the build-time tests fail
[13:56] in particular, it can't even start the daemon in that configuration, as a regular user
[13:58] ahasenack: IIRC, originally Launchpad was FTBFSing on mariadb that included io_uring support because upstream were doing a build-time test for io_uring (and I think still are), which is wrong because it should be done at runtime, since the lack of io_uring availability at build time doesn't tell us about its availability at runtime.
[13:58] But then the Launchpad builders got updated to a newer release and therefore a newer kernel that supported it.
[13:58] AIUI, that's how we ended up with a successful build in the Jammy release pocket (of 10.6).
[13:58] I think the lp builders are using the focal hwe kernel
[13:59] 5.4.0-something
[13:59] let me check that build log
[13:59] But then something changed that caused this current FTBFS, and I haven't tracked down what that is.
[13:59] hm, both are 10.6.7
[13:59] release and proposed
[13:59] What puzzles me is that if the root cause is a memlock rlimit issue then why did it work before?
[14:00] So since there's a contradiction somewhere, maybe one or more of my "facts" above is wrong.
[14:00] this is the current failure
[14:00] 2022-04-14 8:11:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOMEM: try larger memory locked limit, ulimit -l, or https://mariadb.com/kb/en/systemd/#configuring-limitmemlock under systemd (262144 bytes required)
[14:00] and ulimit -l confirms that the limit is lower
[14:00] Max locked memory 65536 65536 bytes
[14:00] just 64kbytes
[14:00] Yeah but then how did the release pocket build work?
[14:01] either the limit was different back then
[14:01] or ... stuff
[14:01] https://launchpad.net/ubuntu/+source/mariadb-10.6/1:10.6.7-3 is the changelog
[14:02] * Fix mysql_install_db by reverting recent addition (MDEV-27980) <-- what's that MDEV?
[14:02] a mysql bugzilla number?
[14:02] the kernel is slightly different too
[14:02] 5.4.0-107-generic #121 failed case
[14:02] 5.4.0-104-generic #118 working case
[14:04] MariaDB Jira I'm guessing?
[14:04] the proposed pkg built/tested on armhf and riscv64 :)
[14:04] hm, armhf also was on 5.4.0-104
[14:04] riscv is something else, on 5.11.0-1031-generic
[14:06] rbasak: indeed, their jira: https://jira.mariadb.org/browse/MDEV-27980
[14:06] not relevant to this I think
[14:06] still trying to reproduce this
[14:07] tried with sbuild yesterday on a focal host, jammy chroot
[14:07] but this build takes ages :/
[14:09] it fails in ppas as well, at least
[14:10] rbasak: search for "ulimit": https://launchpadlibrarian.net/598576864/buildlog_ubuntu-jammy-amd64.mariadb-10.6_1%3A10.6.7-3ubuntu1~ppa2_BUILDING.txt.gz
[14:10] I added some debugging
[14:11] I should have added `|| :` so it continues, but it would fail to bring the daemon up anyway due to this low limit
[14:11] I'm unsure how that limit is set, I *think* based on observation that it could be set to 1/8 of installed RAM?
[14:12] with an upper cap, certainly
[14:12] mine is set to 2016112, and my ram is 16128908
[14:12] >>> 16128908/2016112
[14:12] 8.000005952050284
[14:12] close enough
[14:13] similarly in a VM I brought up
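A quick way to inspect the limit and compare it against the 1/8-of-RAM observation above (a sketch only; the 1/8 heuristic is just what was observed here, not a documented rule):

    # soft memlock limit for the current shell, in KiB
    ulimit -l
    # the same limit as the kernel reports it, in bytes
    grep 'Max locked memory' /proc/self/limits
    # MemTotal is in KiB; divide by 8 to compare against ulimit -l
    awk '/MemTotal/ {print $2 / 8}' /proc/meminfo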
[14:46] rbasak: do we need to do something in git-ubuntu for the archive opening? kinetic?
[14:48] needed|4898
[14:48] So no, but it'll take a while :-/
[14:55] these imports won't take up (much) extra space right now, because the packages are still the same as in jammy, right?
[14:55] just another fork of a branch in the same repository (each package being a repo)
[17:00] Right
[17:00] No extra space needed, but the importer works by cloning from LP into a temporary directory, adding the branch, and then pushing back. So it's not very efficient at this 6-monthly event.
[17:01] needed|4827
=== polymorp- is now known as polymorphic
[17:54] kanashiro: hey, https://bugs.launchpad.net/ubuntu/+source/ruby2.7/+bug/1943823 showed up during my triage. it doesn't seem to be an issue anymore (ruby2.7 builds fine on impish/focal and isn't shipped anymore on jammy). do you mind if I close it as Fix Released?
[17:54] Launchpad bug 1943823 in ruby2.7 (Ubuntu) "ruby2.7 ftbfs on ppc64el using GCC 11.2" [Low, Triaged]
[17:58] hm, scratch that. it's likely that ruby2.7 won't build on ppc64el with gcc-11. I see it's building fine right now because of the workaround you did
[17:59] I think I will leave the bug as is but will target it to Impish
[19:21] rbasak: so I have a focal VM, with 8Gb of RAM
[19:21] ulimit -l in it defaults to 64Mb
[19:22] root can change that, and mariadb via systemd options raises it for the service if needed
[19:22] but a jammy lxd container cannot
[19:22] it gets the 64Mb, and mariadb inside it cannot change this limit
[19:22] and fails to start (core dumps actually)
[19:22] tl;dr: a focal 8Gb VM cannot run jammy's mariadb in a jammy container, in default configs
[19:23] this is as close as I got to a reproducer, with the only caveat that the LP builders use chroots, so they could raise the limit if they were root, but the tests at build time do not run as root
[19:24] I should be able to reproduce this now in any scenario where the memlock limit cannot be raised, with a jammy mariadb
[19:24] trying to figure out now how to change the default memlock limit in focal, so that the lxd container inherits that, to verify mariadb can then start
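One way to try that would be to raise systemd's default limit on the focal host and restart the container. A sketch only: the container name is made up, and the assumption that the lxd container inherits the host's systemd default is exactly what would need verifying:

    # on the focal host: raise the default memlock limit for everything systemd starts
    sudo mkdir -p /etc/systemd/system.conf.d
    printf '[Manager]\nDefaultLimitMEMLOCK=infinity\n' | sudo tee /etc/systemd/system.conf.d/memlock.conf
    sudo systemctl daemon-reexec
    # restart the container so it picks up the new limit, then check from inside it
    lxc restart jammy-test
    lxc exec jammy-test -- sh -c 'ulimit -l'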
[19:25] who's the PoC for postgres packages?
[19:25] just wondering 'cause 'defaults' may need updating (md5 deprecation, so move to scram-sha-256 for postgres password encryption)
[19:27] teward: server team
[19:27] i'll email then
[19:27] bug?
[19:35] sergiodj, re ruby2.7 bug: sorry, I had an issue with my irc client and I did not see your message in time, but yeah your assessment is right :)
[19:35] we will likely not fix this issue tbh
[19:36] kanashiro: yeah, that's what I figured too
[19:36] impish will EOL in 2 months IIRC
[19:36] yep
[19:56] ahasenack: IIRC you were hacking nfs stuff before the release, right? do you happen to have a setup ready to test things there? https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1970264 came up during my triage. I'm installing a Jammy VM here to test it but it'd be great to do a double check
[19:56] Launchpad bug 1970264 in autofs (Ubuntu) "autofs fails to mount nfs4 shares with 'error 0x3 getting portmap client'" [Undecided, New]
[19:56] that rings a bell, I think I saw an autofs bug about it
[19:57] yeah, there's a bug upstream that looks similar
[19:57] the reporter pointed to it
[19:57] I'd like to confirm it's the same issue, though
[19:57] nfsv4 does not require rpcbind, and autofs was trying to follow that, but didn't work
[19:57] this is more about autofs than nfs, I don't have an autofs setup ready
[19:58] if I recall correctly, the ML discussion was ongoing
[19:59] I'm unsure if it was ever resolved definitively
[19:59] there is a patch and the maintainer said he was going to install it, but nothing happened after that
[19:59] that being said, autofs upstream keeps a nice directory with official patches that will be part of the next release
[19:59] we could try one of those, or see if that posted patch in the ML made it to the patches directory from upstream
[19:59] yeah
[19:59] that's my plan. I need to set something up first, though
[20:00] I wonder if it's possible to reproduce this by setting up a simple NFS share and configuring autofs to mount it
[20:00] or if I will need a more complex setup
[20:00] that should be it, no ldap needed
[20:00] hope so :)
[20:00] but without rpcbind on the server
[20:01] (which could be one workaround, though: install rpcbind on the server, if it solves the issue)
[20:01] I'm unclear if autofs tried to contact rpcbind and failed (because it's not running on the server for nfsv4), or if autofs just broke in the nfsv4 scenario
[20:02] isn't rpcbind installed by default when you install nfs-kernel-server?
[20:02] I thought it was
[20:02] sergiodj: btw, nfsv4 has been the default in ubuntu for a while now, no need for kerberos even
[20:02] yeah, you get it, but you can disable it
[20:02] (argh, my Ubuntu machine is thrashing, too many VMs opened)
[20:02] OK
[20:02] it is a depends, though
[20:02] yeah, I noticed it being pulled
[20:48] rbasak: posted a summary in https://bugs.launchpad.net/ubuntu/+source/mariadb-10.6/+bug/1970634/comments/1
[20:48] Launchpad bug 1970634 in mariadb-10.6 (Ubuntu) "FTBFS: test failure due to low memlock limit" [Undecided, In Progress]
[20:49] tl;dr the memlock limit is real, and my comment there just explained why we see it in the special case of a focal kernel with jammy mariadb, and not, say, a jammy kernel with jammy mariadb
[20:50] mariadb has a further check on whether it will use uring or not, and that happens at runtime depending on the kernel version. That is in addition to the check of whether uring is supported in the kernel, and whether it was enabled at build time
[20:51] in the build env, the conditions are right for uring to be enabled, and since the build starts a mariadb process to run tests, that process requires a higher memlock limit because it's using uring...
[20:51] I'll investigate if it's something we can change in the test (skip it if the memlock limit is not high enough, perhaps), and check whether that same test could be run as DEP8 instead
[20:52] maybe it is being run as dep8 already
[20:53] I think they are being run as dep8 already
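For the "skip it if the memlock limit is not high enough" idea, a rough sketch of the kind of guard a test wrapper could grow; 262144 bytes is the figure mariadbd itself reported above, while the wrapper and its exit code are illustrative assumptions, not the package's actual test harness:

    # ulimit -l prints KiB (or "unlimited"); mariadbd reported needing 262144 bytes
    required_kib=$((262144 / 1024))
    current=$(ulimit -l)
    if [ "$current" != "unlimited" ] && [ "$current" -lt "$required_kib" ]; then
        echo "memlock limit ${current} KiB < ${required_kib} KiB, skipping io_uring test"
        exit 77   # many test harnesses treat 77 as "skipped"; purely illustrative here
    fi
    # ... otherwise run the io_uring-enabled test as before ...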