=== chris14_ is now known as chris14
[13:15] Hello - I have put together a NAS that I was planning on running ZFS on. When I initially got the drives, I ran a long test and everything came back fine. Today, after getting the rest of the hardware and bringing it overseas, I am seeing a drive fail out and I can't tell if it is the HBA / SATA card or the drive itself. Attached is a log file of
[13:15] ata12 and /dev/sdf: https://pastebin.ubuntu.com/p/yFJpTMHSqh/
[13:21] I was trying to make this a low-power-consumption server, so it could be that powertop or a BIOS configuration caused this, but I just wanted to be absolutely sure that it wasn't the drive itself failing before I dig in any further.
[13:27] I don't know of a way to re-enable the drive remotely,
[13:27] Whoops, but I also see another drive that shows "SmartSelfTestStatus: Interrupted"
[14:00] Posted in here instead: https://ubuntuforums.org/showthread.php?t=2493444&page=2&p=14176659#post14176659
[14:00] As I likely need to hop off soon.
[14:05] I dunno about the rest, you didn't post any SMART data from the drive
[14:05] but smartd is saying your drive is 65c
[14:05] it really shouldn't be over 50c, I like to keep mine around 40c
[14:07] heh, it's *technically allowed*
[14:07] 0°C to 65°C
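A minimal sketch of the SMART checks discussed above, assuming smartmontools is installed; /dev/sdf is the device named in the log, and attribute numbering varies between drives:

    # Overall health verdict, then the full attribute and error-log dump
    sudo smartctl -H /dev/sdf
    sudo smartctl -a /dev/sdf
    # Just the temperature attribute (ID 194 on most ATA drives)
    sudo smartctl -A /dev/sdf | grep -i temperature
    # Re-run the long self-test that was reported as "Interrupted"
    sudo smartctl -t long /dev/sdf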
[15:32] Any of you folks use Cockpit on your Ubuntu servers?
[15:42] never, no
[15:46] @pilot out
[15:46] Ahem
[15:47] for those who are already on Matrix, we have a Server room now: "#server:ubuntu.com" (https://matrix.to/#/#server:ubuntu.com)
[15:49] sergiodj: will there be a bridge to here too?
[15:49] I'm not sure
[15:50] the folks responsible for setting up the Ubuntu matrix server probably know
[15:51] #ubuntu and #ubuntu-next are already bridged for testing; maybe after a successful test period, more will be bridged
[15:51] my 10s reading of matrix.to doesn't make any sense
[15:51] it says it's to free you from using a specific app; IRC has never had an *app*
[15:53] element is nice patdk-lap
[15:53] dunno what that even means
[15:54] Element is both a secure messenger and a productivity team collaboration
[15:54] app that is ideal for group chats while remote working. This chat app uses
[15:54] end-to-end encryption to provide powerful video conferencing, file sharing
[15:54] and voice calls.
[15:55] ya, I don't need another app
[16:38] o/
[16:38] quick question: is run-qemu.mount supposed to work out of the box in stock Jammy? /cc cpaelzer
[16:52] nibbon_: yes, if you upgrade you should have the modules of the old qemu on that mount point, to allow still-running old guests to load them
[16:53] cpaelzer: so, the expected behavior is to have that systemd unit enabled and running, and not restarted after an upgrade, amirite?
[16:54] because with kernel 5.15.0 the unit is disabled on boot and never gets enabled after upgrades
[16:55] yeah, I'd have expected it to be on and running, and that is how I see it here on jammy with kernel 6.5
[16:55] it will be empty, that is ok
[16:55] until you upgrade
[16:55] I recently installed linux-image-6.2.0-39-generic and now I get the unit enabled and up, and every time I upgrade qemu-block-extra it restarts the unit, wiping out whatever is in /run/qemu
[16:57] I don't get why with 5.15 that unit remains disabled
[16:58] also, I don't get why this https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/rules?h=ubuntu/jammy-updates#n432 isn't respected
[16:58] iiuc on upgrade it shouldn't restart the unit, should it?
[16:59] nibbon_: yes, I'd ignore the 5.15 kernel for now
[17:00] okay, I was asking out of curiosity :)
[17:00] nibbon_: but the fact that the debhelper magic seems to ignore all the --no flags and restarts it is bad
[17:00] yeah, tell me about that. I've lost data :/
[17:00] the only impact should be that you can't hot-add devices of not-loaded drivers to guests that started in the past
[17:01] not a biggie, because a stop/start of the domain will restore it. Still annoying
[17:02] hmm, never had this issue. I assume it goes beyond my use case(s)
[17:03] it surely worked when we added it, and it later got changed a lot (how it is handled in d/rules)
[17:03] having a look at what is affected ...
[17:03] thanks
[17:04] I'm okay with having that new mount point. However, it must be persistent for the whole lifecycle of the machine
[17:05] yep, it should be started at boot, populated with content on upgrades (the modules of the qemu version that is going away), and cleaned by a reboot
[17:06] hmm, so should I expect some process running rm in that directory?
[17:06] no, not rm
[17:06] okay, as long as it's only additions, it's fine
[17:07] the removal maintainer script of qemu will back up its modules there, to be available for guests of that version that might still run and can only load those old bits
[17:10] what's the removal maintainer script of qemu?
[17:12] the prerm / postrm scripts of qemu-block-extra in this case
[17:13] you'll see in there how it does the backup on upgrade, the removal on remove, and only stops the mount unit on purge
[17:16] I don't see any prerm / postrm script in jammy-updates: https://git.launchpad.net/ubuntu/+source/qemu/tree/debian?h=ubuntu/jammy-updates
[17:17] I only see this https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/qemu-block-extra.postinst?h=ubuntu/jammy-updates
[17:17] yeah, because of magic - the lines like https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/rules?h=ubuntu/jammy-updates#n185 will make it exist in the final package
[17:18] I have the feeling that when switching to the mount unit this was ok, but then the logic changed undetected, and since then this restarts the unit and is thereby wasted effort
[17:19] aha, that makes sense
[17:19] so far my check: working in focal, broken in jammy, working in mantic
[17:19] so it is not doomed everywhere
[17:19] likely some unexpected dh_systemd behavior restarting the unit there
[17:20] nibbon_: have you already filed a bug I can add info to?
[17:20] not yet, I wanted to understand the behavior before filing a bug
[17:20] I'll file one right now and pass you a url in a bit
[17:21] sounds good, thanks
[17:21] willing to help solve the issue
[17:31] nibbon_: bug 2051153
[17:31] -ubottu:#ubuntu-server- Bug 2051153 in qemu (Ubuntu Jammy) "run-qemu.mount is restarted on upgrades" [Undecided, Confirmed] https://launchpad.net/bugs/2051153
[17:31] I'm off for the day then; maybe sergiodj can be a rubber duck on this, confirming if he can see the same
[17:31] the actual debug and fix will take some time
[17:31] mostly to find why it does not behave as it should
[17:31] I had very similar issues in other packages, but never realized this might affect this as well
[17:32] debhelper sounds like opening a can of worms :/
[17:32] it helps, therefore its name
[17:32] but I've often seen it behave unexpectedly on .socket units, and now it seems on .mount units too
[17:33] can you give me some pointers on where I could look to figure this out?
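One concrete place to look, sketched on the assumption that an affected jammy machine is at hand: the maintainer scripts debhelper generated are shipped in the binary package, so the autoscript snippet that restarts the unit can be grepped for directly:

    # Installed maintainer scripts live under /var/lib/dpkg/info
    grep -n 'run-qemu.mount' /var/lib/dpkg/info/qemu-block-extra.*
    # debhelper autoscript snippets drive units via deb-systemd-invoke;
    # check which verb (start/restart/try-restart) ended up in postinst
    grep -n 'deb-systemd-invoke' /var/lib/dpkg/info/qemu-block-extra.postinst
    # The same check works on a downloaded .deb from focal or mantic for comparison
    dpkg-deb -e qemu-block-extra_*.deb ctrl && grep -rn 'run-qemu.mount' ctrl/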
[17:33] thanks for filing the bug
[17:44] nibbon_: I've left more info on the bug now, drilling down to where it goes wrong
[17:44] nibbon_: and linked an old case where this affected .socket units
[17:44] that provides plenty of study material
[17:48] cpaelzer: perfect, I'll go read.
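A rough reproduction sketch for the bug as filed; the canary file name is made up, and it assumes a reinstall exercises the same prerm/postinst path as an upgrade:

    # Drop a marker into the mount, then re-run the maintainer scripts
    sudo touch /run/qemu/canary
    sudo apt-get install --reinstall qemu-block-extra
    # If the mount unit was restarted, the marker is gone
    ls /run/qemu
    # Recent unit activity should show the stop/start pair
    journalctl -u run-qemu.mount -n 20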