[10:06] <raddy> Hello
[10:06] <raddy> intl module is not showing up in phpinfo, but shows up in php -m
[10:07] <raddy> have restarted php-fpm and nginx several times
[10:09] <lotuspsychje> !crosspost | raddy
[10:09] <ubot3> raddy: Please don't ask the same question in multiple Ubuntu channels at the same time. Many helpers are in more than one channel and it's not fair to them or the other people seeking support.
[10:09] <lotuspsychje> ok nvm i see you got forwarded
[13:47] <dn_> I have a very odd nvme problem with a new machine (threadripper); I have 7x the same NVME (980 PRO) installed - one of them is roughly 2x (4k random read) faster than the rest. They all seem to be connected the same way (PCIe4, x4) but testing with fio shows the difference - and the fast one is also the only one with the expected speed. I'm a bit lost
[13:47] <dn_> what I could do/check/verify. It seems slot dependent, but e.g. lspci -vvv looks identical for all.. (LnkSta: Speed 16GT/s (ok), Width x4 (ok)). I'm testing with fio on the raw block device.
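For reference, a fio job of the kind described above might look like this (a sketch only — device path, queue depth, and runtime are placeholder values, not the exact job dn_ ran):

```ini
; 4k random-read test against a raw NVMe block device (read-only, non-destructive)
[randread-4k]
filename=/dev/nvme0n1   ; adjust to the device under test
rw=randread
bs=4k
ioengine=libaio
iodepth=32
numjobs=4
direct=1                ; bypass the page cache
time_based=1
runtime=30
group_reporting=1
```

Run with `fio randread-4k.fio` (requires root for raw device access) and compare the reported IOPS/bandwidth across devices.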
[13:53] <tomreyn> no idea. have you tried physically swapping them, to rule that out?
[13:54] <tomreyn> have you tried playing with any related uefi settings? is hot swap enabled (try disabling it)?
[13:55] <tomreyn> inspect dmesg / systemd journal (if you haven't already). try uefi upgrades, maybe check speeds with the OS supported by mainboard vendor, get support from mainboard vendor.
[13:56] <tomreyn> dn_: ^
[13:57] <dn_> tomreyn: will try disabling hotswapping and switch slots again - there is an error in dmesg, I forgot about it - but it's an error for the fast device - need a moment to reboot to try disabling hotswap & get the msg
[14:48] <dn_> tomreyn: sadly no change, but I got the log entry -> [  103.889739] nvme 0000:43:00.0: AER: aer_status: 0x00000001, aer_mask: 0x00000000
[14:48] <dn_> [  103.890764] nvme 0000:43:00.0:    [ 0] RxErr                  (First)
[14:48] <dn_> [  103.890767] nvme 0000:43:00.0: AER: aer_layer=Physical Layer, aer_agent=Receiver ID
[14:48] <dn_> Also on 43:00.0 is the fastest device .. so not sure what I shall make of it
[14:51] <tomreyn> dn_: and uefi is up to date?
[14:51] <dn_> tomreyn: good question - never checked that, bios is - will google how to check/update
[14:51] <tomreyn> journalctl -b | grep DMI:
[15:02] <dn_> May 15 14:58:54 brrrmm kernel: DMI: ASUS System Product Name/Pro WS WRX80E-SAGE SE WIFI, BIOS 0405 03/17/2021
[15:05] <tomreyn> dn_: i haven't checked whether that's the latest, but that's a rather current build date, so it could be. i suggested other things you could give a try above, such as swapping NVMEs. since this is a hardware error, it could also be interesting what happens if you just remove this one device.
[15:05] <dn_> tomreyn: thank you for the ideas, this really helps. will do that next... because I'm really out of ideas ;-)  it's kinda meh...
[15:05] <tomreyn> but i guess this is really a ##hardware topic at this point.
[15:06] <tomreyn> although, just in case, try different linux (kernel) versions, too
[15:07] <tomreyn> so far i don't know which ubuntu server version and kernel you've tested with
[15:07] <dn_> tomreyn: will do, also a good idea .. it's kinda odd that one disk is fast, I would understand if all were slow ... but I don't see any pattern why one is fast; I just reinstalled 20.04 - will try after fresh install, will update & retry
[15:07] <tomreyn> !mainline
[15:07] <ubot3> The kernel team supply continuous mainline kernel builds which can be useful for tracking down issues or testing recent changes in the Linux kernel. More information is available at https://wiki.ubuntu.com/Kernel/MainlineBuilds
[15:08] <dn_> I also tried archlinux, just to be sure it's not something totally odd - but both are the same (arch & ubuntu) - exact same behaviour
[15:08] <TJ-> dn_: there's quite a lot of interesting info under /sys/ that might help you deduce a reason/cause
[15:09] <dn_> I diffed /sys/block stuff for the devices - I might be blind, but everything like scheduler, queue, readahead seems to be the same
[15:09] <TJ-> my guess though is going to be firmware's configuration of the hardware
[15:09] <dn_> the nvme all have the same firmware version, anything else I could check?
[15:10] <TJ-> dn_: can you show us "pastebinit <(  sudo strings /sys/firmware/acpi/tables/DSDT  | grep -i windows | sort )"
[15:12] <dn_> https://paste.ubuntu.com/p/Tts87YC52h/ - sweet, never used pastebinit
[15:12] <dn_> tomreyn: just for the record, I tried `5.4.0-73-generic` - will now upgrade to latest stable and retry
[15:21] <TJ-> dn_: in /theory/ the kernel should say YES to the most recent of those _OSIs /but/ the kernel expects the firmware's ACPI DSDT caller to pass the most recent _OSI first. In the event it doesn't, it is possible for the firmware to misconfigure or use a less than optimal config
[15:22] <dn_> TJ- got some keywords for me to google, like how change/check etc?
[15:22] <TJ-> dn_: with that in mind it may be worth trying a workaround developed some years ago but now mostly not needed, to force Linux to only recognise the most recent _OSI string. See https://iam.tj/prototype/enhancements/Windows-acpi_osi.html
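The workaround on that page amounts to kernel command-line parameters; a sketch for GRUB follows (the exact `_OSI` string to use depends on what the firmware's DSDT advertises — "Windows 2015" here is an assumption based on the discussion above):

```shell
# /etc/default/grub -- clear all built-in _OSI strings, then advertise only the newest
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=! acpi_osi=\"Windows 2015\""
# apply with: sudo update-grub && reboot
# verify with: cat /proc/cmdline
```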
[15:29] <dn_> TJ- thanks, will look into it
[15:30] <dn_> TJ- might be a stupid question, but which one do I want to use? Windows 2015?
[15:31] <TJ-> dn_: yes 'latest' based on intelligent analysis!
[15:31] <dn_> I think I also just figured out the reason ..
[15:31] <dn_> if so I'll have to cry for a moment
[15:31] <TJ-> go on!?
[15:32] <TJ-> cables?
[15:32] <TJ-> connectors?
[15:32] <dn_> I think trim ... but not 100% sure - trim & space used
[15:33] <dn_> they are all 2TB disks, all but one have around 40% namespace usage ... so the fast one has 0 usage....; I did a blkdiscard and ns format on one of the slow ones - that also has no usage now - and it's fast now
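The discard-and-format step dn_ describes roughly corresponds to the commands below (a sketch using nvme-cli; the device name is a placeholder, and the commands are echoed as a dry run because actually running them destroys all data on the device):

```shell
#!/bin/sh
# DESTRUCTIVE if actually run: both blkdiscard and nvme format wipe $DEV.
# Echoed as a dry run; remove the echo prefixes to execute for real.
DEV=/dev/nvme1n1                 # placeholder: one of the slow devices
echo "blkdiscard $DEV"           # TRIM the entire device
echo "nvme format $DEV --ses=0"  # re-format the namespace (nvme-cli)
echo "nvme id-ns $DEV"           # afterwards, the NUSE field should read 0
```

`nvme id-ns` reports namespace utilisation (NUSE), which is presumably the "namespace usage" figure quoted above.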
[15:33] <dn_> I find that a bit unexpected ...
[15:33] <TJ-> wow!
[15:34] <TJ-> nice find though, and totally left-field
[15:34] <dn_> am I stupid/missing something? .. like why!?
[15:35] <dn_> so, e.g. for 4k rand read - I get now -> `[r=4446MiB/s][r=1138k IOPS]` ... on the device that was slow before, slow devices do `[r=2731MiB/s][r=699k IOPS]`
[15:40] <TJ-> dn_: this thread has some interesting comments and towards the end by 'gerard' is mention of a recent firmware upgrade and Samsung confirming issues. Haven't checked that claim out myself. https://chiaforum.com/t/extremely-slow-results-from-samsung-980-m-2-ssd/900/26
[15:44] <dn_> TJ- thanks! - hm I think it's not even possible to update the firmware under linux, only windows... how annoying