[00:06] <sdeziel> gemclip: AFAIK, this concept of group in group is not possible on Linux, only on Windows
[00:08] <ravage> making accountants managers is not great anyway
[00:10] <sarnold> gemclip: aha; as sdeziel says, there's no way to do 'nested groups' -- you could rename the group you've got, or you could add all members of one group to the other group.. *maybe* it's possible that sssd would know how to do something similar with AD groups, but that's a big guess on my part
[02:42] <gemclip> thanks for the feedback
[06:38] <tobias-urdin> jamespage: long shot, but have you noticed any issues with the VNC console for the jammy server cloud image on OpenStack Nova with QEMU? The image works in that one can SSH in and use it, but the console is just a black screen; I tried to find out whether that's intended or a bug but didn't find anything
[06:50] <cpaelzer> tobias-urdin: I've heard of issues where the default virtual-graphics config for a guest conflicts with the graphics drivers in the guest - but that was gnome-boxes and the real UI of Ubuntu Desktop - yours being a server image should be a different thing :-/
[06:50] <cpaelzer> tobias-urdin: I just tried a new spawned jammy server image, it was happy via vnc and spice for me
[06:51] <cpaelzer> tobias-urdin: since ssh works, let us move up one by one - do you have a console when you `virsh console <guestname>`
[06:51] <cpaelzer> tobias-urdin: do you have the problem only with the embedded web-vnc of openstack or also with a dedicated vnc viewer and/or virt-viewer?
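cpaelzer's layer-by-layer triage, spelled out as commands (the guest name `jammy-guest` and the hypervisor address are placeholders for this sketch, not from the log):

```shell
# 1. Serial console via libvirt: is the guest responsive at all?
virsh console jammy-guest
# 2. Local SPICE/VNC client attached through libvirt, bypassing OpenStack.
virt-viewer jammy-guest
# 3. Dedicated VNC viewer pointed straight at the display, bypassing
#    the embedded web-vnc console.
remote-viewer vnc://hypervisor.example.com:5900
```

If 1 works but 2 and 3 show the same black screen, the problem is in the guest's graphics stack rather than in the web console.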
[07:31] <tobias-urdin> doing virsh console gives me the login prompt; checking console.log, which is libvirt reading the VM's tty, shows the complete boot output and login prompt
[07:31] <tobias-urdin> the vnc console only shows this though https://ibb.co/fH7ZkBn
[07:34] <tobias-urdin> cpaelzer: ^
[07:38] <cpaelzer> thanks tobias-urdin, that seems stuck with very early boot content and not updated since then
[07:51] <tobias-urdin> yeah, doesn't show anything after that echo line from grub
[07:52] <tobias-urdin> the issue is solved when using jammy-server-cloudimg-amd64.img instead of jammy-server-cloudimg-amd64-disk-kvm.img
[07:52] <tobias-urdin> thanks for helping out though!
[08:22] <cpaelzer> tobias-urdin: oh interesting on those images
[08:23] <cpaelzer> tobias-urdin: can you check if the bad image has no linux...modules-extra installed?
[08:23] <cpaelzer> that is one common difference (as it is not meant to need it)
[08:23] <cpaelzer> but there have been occasions where - due to that - drivers were missing
[08:24] <cpaelzer> tobias-urdin: if your bad -kvm image indeed has linux...modules-extra missing, please install the one matching your kernel and reboot
[08:24] <cpaelzer> tobias-urdin: if it works that is the reason, I'd then ask you to check which modules got loaded in addition and report that
[08:24] <cpaelzer> tobias-urdin: maybe there is one more that we need to move extra->base
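A sketch of that check inside the guest booted from the -kvm image (assuming the package name follows the usual `linux-modules-extra-$(uname -r)` pattern):

```shell
# Is the extra-modules package installed for the running kernel?
dpkg -l "linux-modules-extra-$(uname -r)" || echo "not installed"
# Record the loaded modules, install the extras, reboot, then diff:
lsmod | awk '{print $1}' | sort > /tmp/modules.before
sudo apt install "linux-modules-extra-$(uname -r)" && sudo reboot
# ...after reboot:
lsmod | awk '{print $1}' | sort > /tmp/modules.after
comm -13 /tmp/modules.before /tmp/modules.after   # modules loaded in addition
```

The `comm -13` output is the list cpaelzer is asking for: candidates to move from extra to the base modules package.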
[08:51] <tobias-urdin> sorry I don't really have time to look into it more in detail right now, but I assume it's related to graphics since openstack nova creates model=cirrus in libvirt xml
[08:51] <tobias-urdin> so some setting or driver is missing in the disk-kvm image
[08:51] <cpaelzer> yeah - which is just what I said
[11:58] <lvoytek> good morning
[12:03] <ahasenack> good morning
[12:39] <athos> good morning :)
[13:55] <ahasenack> rbasak: hi, so what's the story behind the mariadb ftbfs so far that you dug up?
[13:55] <ahasenack> from my pov, it looks like mariadb decides to use io_uring support, and that requires a bit more locked memory (memlock rlimit), and that limit in the builders is too low, so the build-time tests fail
[13:56] <ahasenack> in particular, it can't even start the daemon in that configuration, as a regular user
[13:58] <rbasak> ahasenack: IIRC, originally Launchpad was FTBFSing on mariadb builds that included io_uring support, because upstream were doing a build-time test for io_uring (and I think still are), which is wrong: it should be done at runtime, since the lack of io_uring availability at build time doesn't tell us about its availability at runtime.
[13:58] <rbasak> But then the Launchpad builders got updated to a newer release and therefore a newer kernel that supported it.
[13:58] <rbasak> AIUI, that's how we ended up with a successful build in the Jammy release pocket (of 10.6).
[13:58] <ahasenack> I think the lp builders are using the focal hwe kernel
[13:59] <ahasenack> 5.4.0-something
[13:59] <ahasenack> let me check that build log
[13:59] <rbasak> But then something changed that caused this current FTBFS, and I haven't tracked down what that is.
[13:59] <ahasenack> hm, both are 10.6.7
[13:59] <ahasenack> release and proposed
[13:59] <rbasak> What puzzles me is that if the root cause is a memlock rlimit issue then why did it work before?
[14:00] <rbasak> So since there's a contradiction somewhere, maybe one or more of my "facts" above is wrong.
[14:00] <ahasenack> this is the current failure
[14:00] <ahasenack> 2022-04-14  8:11:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOMEM: try larger memory locked limit, ulimit -l, or https://mariadb.com/kb/en/systemd/#configuring-limitmemlock under systemd (262144 bytes required)
[14:00] <ahasenack> and ulimit -l confirms that the limit is lower
[14:00] <ahasenack> Max locked memory         65536                65536                bytes     
[14:00] <ahasenack> just 64kbytes
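A quick way to check whether a given environment would hit the same ENOMEM; the 262144-byte figure is taken from the mariadbd warning quoted above, the rest is standard /proc:

```shell
# Compare the memlock soft limit (bytes, field 4 of the
# "Max locked memory" line) against what io_uring_queue_init()
# asked for in the failing build log.
need=262144
soft=$(awk '/Max locked memory/ {print $4}' /proc/self/limits)
if [ "$soft" != "unlimited" ] && [ "$soft" -lt "$need" ]; then
    echo "memlock soft limit ${soft} < ${need}: io_uring init would likely fail"
else
    echo "memlock soft limit ${soft}: io_uring init would fit"
fi
```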
[14:00] <rbasak> Yeah but then how did the release pocket build work?
[14:01] <ahasenack> either the limit was different back then
[14:01] <ahasenack> or ... stuff
[14:01] <ahasenack> https://launchpad.net/ubuntu/+source/mariadb-10.6/1:10.6.7-3 is the changelog
[14:02] <ahasenack>   * Fix mysql_install_db by reverting recent addition (MDEV-27980) <-- what's that MDEV?
[14:02] <ahasenack> a mysql bugzilla number?
[14:02] <ahasenack> the kernel is slightly different too
[14:02] <ahasenack> 5.4.0-107-generic #121 failed case
[14:02] <ahasenack> 5.4.0-104-generic #118 working case
[14:04] <rbasak> MariaDB Jira I'm guessing?
[14:04] <ahasenack> the proposed pkg built/tested in armhf and riscv64 :)
[14:04] <ahasenack> hm, armhf also was on 5.4.0-104
[14:04] <ahasenack> riscv is something else, on 5.11.0-1031-generic
[14:06] <ahasenack> rbasak: indeed, their jira: https://jira.mariadb.org/browse/MDEV-27980
[14:06] <ahasenack> not relevant to this I think
[14:06] <ahasenack> still trying to reproduce this
[14:07] <ahasenack> tried with sbuild yesterday on focal host, jammy chroot
[14:07] <ahasenack> but this build takes ages :/
[14:09] <ahasenack> it fails in PPAs as well, at least
[14:10] <ahasenack> rbasak: search for "ulimit": https://launchpadlibrarian.net/598576864/buildlog_ubuntu-jammy-amd64.mariadb-10.6_1%3A10.6.7-3ubuntu1~ppa2_BUILDING.txt.gz
[14:10] <ahasenack> I added some debugging
[14:11] <ahasenack> I should have added `|| :` so it continues, but it would fail to bring the daemon up anyway due to this low limit
[14:11] <ahasenack> I'm unsure how that limit is set, I *think* based on observation that it could be set to 1/8 of installed RAM?
[14:12] <ahasenack> with an upper cap, certainly
[14:12] <ahasenack> mine is set to 2016112, and my ram is 16128908
[14:12] <ahasenack> >>> 16128908/2016112
[14:12] <ahasenack> 8.000005952050284
[14:12] <ahasenack> close enough
[14:13] <ahasenack> similarly in a VM I brought up
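The 1/8 guess can be eyeballed anywhere with two /proc reads; this is just a check of the observation above, not a documented formula:

```shell
# MemTotal is reported in KiB, /proc/self/limits in bytes: normalise first.
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
lock=$(awk '/Max locked memory/ {print $4}' /proc/self/limits)
if [ "$lock" = "unlimited" ]; then
    echo "memlock is unlimited here; no ratio to compute"
else
    awk -v m="$mem_kib" -v l="$lock" \
        'BEGIN {printf "RAM/memlock = %.2f\n", (m * 1024) / l}'
fi
```

On ahasenack's machine this would print roughly 8.00; note that `ulimit -l` reports KiB while /proc/self/limits reports bytes, which is an easy way to get this comparison wrong.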
[14:46] <ahasenack> rbasak: do we need to do something in git-ubuntu for the archive opening? kinetic?
[14:48] <rbasak> needed|4898
[14:48] <rbasak> So no, but it'll take a while :-/
[14:55] <ahasenack> these imports won't take up (much) extra space right now, because the packages are still the same as in jammy, right?
[14:55] <ahasenack> just another fork of a branch in the same repository (each package being a repo)
[17:00] <rbasak> Right
[17:00] <rbasak> No extra space needed, but the importer works by cloning from LP into a temporary directory, adding the branch, and then pushing back. So it's not very efficient at this 6-monthly event.
[17:01] <rbasak> needed|4827
[17:54] <sergiodj> kanashiro: hey, https://bugs.launchpad.net/ubuntu/+source/ruby2.7/+bug/1943823 showed up during my triage.  it doesn't seem to be an issue anymore (ruby2.7 builds fine on impish/focal and isn't shipped anymore on jammy).  do you mind if I close it as Fix Released?
[17:58] <sergiodj> hm, scratch that.  it's likely that ruby2.7 won't build on ppc64el with gcc-11.  I see it's building fine right now because of the workaround you did
[17:59] <sergiodj> I think I will leave the bug as is but will target it to Impish
[19:21] <ahasenack> rbasak: so I have a focal VM, with 8Gb of RAM
[19:21] <ahasenack> ulimit -l in it defaults to 64Mb
[19:22] <ahasenack> root can change that, and mariadb via systemd options raises it for the service if needed
[19:22] <ahasenack> but a jammy lxd container cannot
[19:22] <ahasenack> it gets the 64Mb, and mariadb inside it cannot change this limit
[19:22] <ahasenack> and fails to start (core dumps actually)
[19:22] <ahasenack> tl;dr: a focal 8Gb VM cannot run jammy's mariadb in a jammy container, in default configs
[19:23] <ahasenack> this is as close as I got to a reproducer, with the only caveat that the LP builders use chroots, so they could raise the limit if they were root, but the build-time tests don't run as root
[19:24] <ahasenack> I should be able to reproduce this now in any scenario where the memlock limit cannot be raised, with a jammy mariadb
[19:24] <ahasenack> trying to figure out now how to change the default memlock limit in focal, so that the lxd container inherits that, to verify mariadb can then start
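Two places the limit could be raised, sketched as systemd config (the values are examples, not recommendations):

```ini
# /etc/systemd/system.conf.d/memlock.conf on the focal host: raise the
# manager-wide default, so that children (including the LXD daemon and,
# presumably, the containers it starts) inherit a bigger limit.
[Manager]
DefaultLimitMEMLOCK=infinity
```

```ini
# Per-service drop-in (e.g. via `systemctl edit mariadb`) - this is the
# LimitMEMLOCK knob the mariadbd warning message itself points at.
[Service]
LimitMEMLOCK=524288
```

After editing, a `systemctl daemon-reexec` (and a restart of LXD or the affected service) is needed before the new limits take effect.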
[19:25] <teward> who's the PoC for postgres packages?
[19:25] <teward> just wondering 'cause 'defaults' may need updating (md5 deprecation, so move to scram-sha-256 for postgres password encryption)
[19:27] <ahasenack> teward: server team
[19:27] <teward> i'll email then
[19:27] <ahasenack> bug?
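For reference, the change teward is describing is roughly this pair of settings (a sketch; note that passwords already hashed with md5 must be re-set after flipping the first knob, before md5 auth is turned off, or those users get locked out):

```
# postgresql.conf: hash newly set passwords with SCRAM
password_encryption = scram-sha-256
```

```
# pg_hba.conf: require SCRAM (instead of md5) for TCP clients - example line
host    all    all    0.0.0.0/0    scram-sha-256
```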
[19:35] <kanashiro> sergiodj, re ruby2.7 bug: sorry, I had an issue with my irc client and I did not see your message in time, but yeah your assessment is right :)
[19:35] <kanashiro> we will likely not fix this issue tbh
[19:36] <sergiodj> kanashiro: yeah, that's what I figured too
[19:36] <sergiodj> impish will EOL in 2 months IIRC
[19:36] <kanashiro> yep
[19:56] <sergiodj> ahasenack: IIRC you were hacking nfs stuff before the release, right?  do you happen to have a setup ready to test things there?  https://bugs.launchpad.net/ubuntu/+source/autofs/+bug/1970264 came up during my triage.  I'm installing a Jammy VM here to test it but it'd be great to do a double check
[19:56] <ahasenack> that rings a bell, I think I saw an autofs bug about it
[19:57] <sergiodj> yeah, there's a bug upstream that looks similar
[19:57] <sergiodj> the reporter pointed to it
[19:57] <sergiodj> I'd like to confirm it's the same issue, though
[19:57] <ahasenack> nfsv4 does not require rpcbind, and autofs was trying to follow that, but it didn't work
[19:57] <ahasenack> this is more about autofs than nfs, I don't have an autofs setup ready
[19:58] <ahasenack> if I recall correctly, the ML discussion was ongoing
[19:59] <ahasenack> I'm unsure if it was ever definitively resolved
[19:59] <sergiodj> there is a patch and the maintainer said he was going to install it, but nothing happened after that
[19:59] <ahasenack> that being said, autofs upstream keeps a nice directory with official patches, that will be part of the next release
[19:59] <ahasenack> we could try one of those, or see if that posted patch in the ML made it to the patches directory from upstream 
[19:59] <sergiodj> yeah
[19:59] <sergiodj> that's my plan.  I need to set something up first, though
[20:00] <sergiodj> I wonder if it's possible to reproduce this by setting a simple NFS share and configuring autofs to mount it
[20:00] <sergiodj> or if I will need a more complex setup
[20:00] <ahasenack> that should be it, no ldap needed
[20:00] <sergiodj> hope so :)
[20:00] <ahasenack> but without rpcbind on the server
[20:01] <ahasenack> (which could be one workaround, though: install rpcbind on the server, if it solves the issue)
[20:01] <ahasenack> I'm unclear if autofs tried to contact rpcbind and failed (because it's not running on the server for nfsv4), or if autofs just broke in the nfsv4 scenario
[20:02] <sergiodj> isn't rpcbind installed by default when you install nfs-kernel-server?
[20:02] <sergiodj> I thought it was
[20:02] <ahasenack> sergiodj: btw, nfsv4 has been the default in ubuntu for a while now, no need for kerberos even
[20:02] <ahasenack> yeah, you get it, but you can disable it
[20:02] <sergiodj> (argh, my Ubuntu machine is thrashing, too many VMs opened)
[20:02] <sergiodj> OK
[20:02] <ahasenack> it is a depends, though
[20:02] <sergiodj> yeah, I noticed it being pulled
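A minimal reproducer along the lines discussed above might look like this (paths, hostname, and map names are invented for the sketch; rpcbind deliberately not running on the server):

```
# NFS server, /etc/exports - a plain NFSv4 share
/srv/share    *(rw,sync,no_subtree_check)

# autofs client, /etc/auto.master.d/test.autofs
/mnt/auto    /etc/auto.test

# autofs client, /etc/auto.test - force an NFSv4 mount
share    -fstype=nfs4    nfs-server.example.com:/srv/share
```

Then `systemctl reload autofs` and an `ls /mnt/auto/share` on the client should trigger the mount, or reproduce the failure.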
[20:48] <ahasenack> rbasak: posted a summary in https://bugs.launchpad.net/ubuntu/+source/mariadb-10.6/+bug/1970634/comments/1
[20:49] <ahasenack> tl;dr the memlock limit is real, and my comment there just explained why we see it in the special case of focal kernel with jammy mariadb, and not, say, jammy kernel with jammy mariadb
[20:50] <ahasenack> mariadb has a further runtime check, based on kernel version, on whether it will use io_uring. That is on top of checking whether io_uring is supported in the kernel at all, and whether it was enabled at build time
[20:50] <ahasenack> in the build env, the conditions are right for uring to be enabled, and since the build starts a mariadb process to run tests, that process requires a higher memlock limit because it's using uring...
[20:51] <ahasenack> I'll investigate if it's something we can change in the test (skip it if memlock limit is not high enough, perhaps), and check if that same test cannot be run as DEP8 instead
[20:52] <ahasenack> maybe it is being run as dep8 already
[20:53] <ahasenack> I think they are being run as dep8 already