/srv/irclogs.ubuntu.com/2024/01/24/#ubuntu-server.txt

=== chris14_ is now known as chris14
DriveFailureProbHello - I have put together a NAS that I was planning on running ZFS on. When I initially got the drives, I ran a long test and everything came back fine. Today after getting the rest of the hardware and bringing it overseas, I am seeing a drive fail out and I can't tell if it is the HBA / SATA card, or the drive itself. Attached is a log file of13:15
DriveFailureProbata12 and /dev/sdf: https://pastebin.ubuntu.com/p/yFJpTMHSqh/13:15
DriveFailureProbI was trying to make this a low-power consumption server, so it could be that powertop or a BIOS configuration caused this, but I just wanted to be absolutely sure that it wasn't the drive itself failing before I dig in any further.13:21
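Powertop's autotune can enable aggressive SATA link power management (ALPM), a known cause of drives dropping off the bus. A quick way to rule that in or out, assuming a standard sysfs layout (the host number is a placeholder, and the hostN-to-ataN mapping is not guaranteed):

  # show the current ALPM policy for every SATA host
  grep . /sys/class/scsi_host/host*/link_power_management_policy
  # temporarily force full power on a suspect host (host11 is an assumption)
  echo max_performance | sudo tee /sys/class/scsi_host/host11/link_power_management_policy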
DriveFailureProbI don't know of a way to re-enable the drive remotely13:27
DriveFailureProbWhoops, but I also see another drive that shows "SmartSelfTestStatus: Interrupted"13:27
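If the kernel has marked the disk dead, it can sometimes be revived without physical access by deleting the device node and rescanning the controller; a sketch, assuming /dev/sdf is the dropped disk and host11 its SCSI host (both assumptions):

  # drop the stale device node
  echo 1 | sudo tee /sys/block/sdf/device/delete
  # ask the controller to re-probe the bus
  echo '- - -' | sudo tee /sys/class/scsi_host/host11/scan
  # once the disk is back, restart the interrupted long self-test
  sudo smartctl -t long /dev/sdf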
DriveFailureProbPosted in here instead: https://ubuntuforums.org/showthread.php?t=2493444&page=2&p=14176659#post1417665914:00
DriveFailureProbAs I likely need to hop off soon.14:00
patdk-lapI dunno about the rest, you didn't post any SMART data from the drive14:05
patdk-lapbut smartd is saying your drive is 65°C14:05
patdk-lapit really shouldn't be over 50°C, I like to keep mine around 40°C14:05
patdk-lapheh, it's *technically allowed*14:07
patdk-lap0°C to 65°C14:07
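The temperature smartd warned about is easy to confirm directly; a minimal check, assuming smartmontools is installed and /dev/sdf is the drive in question:

  # attribute 194 (Temperature_Celsius) on ATA drives
  sudo smartctl -A /dev/sdf | grep -i -e temperature -e airflow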
SuperLagAny of you folks use Cockpit on your Ubuntu servers?15:32
patdk-lapnever, no15:42
rbasak@pilot out15:46
rbasakAhem15:46
sergiodjfor those who are already on Matrix, we have a Server room now: "#server:ubuntu.com" (https://matrix.to/#/#server:ubuntu.com)15:47
lotuspsychjesergiodj: will there be a bridge to here too?15:49
sergiodjI'm not sure15:49
sergiodjthe folks responsible for setting up the Ubuntu matrix server probably know15:50
lotuspsychje#ubuntu and #ubuntu-next are already bridged for testing, maybe after a successful trial period more will be bridged15:51
patdk-lapmy 10-second reading of matrix.to doesn't make any sense15:51
patdk-lapit says it's to free you from using a specific app, irc has never had an *app*15:51
lotuspsychjeelement is nice patdk-lap 15:53
patdk-lapdunno what that even means15:53
lotuspsychjeElement is both a secure messenger and a productivity team collaboration15:54
lotuspsychje  app that is ideal for group chats while remote working. This chat app uses15:54
lotuspsychje  end-to-end encryption to provide powerful video conferencing, file sharing15:54
lotuspsychje  and voice calls.15:54
patdk-lapya, I don't need another app15:55
nibbon_o/16:38
nibbon_quick question: is run-qemu.mount supposed to work out of the box in stock Jammy? /cc cpaelzer16:38
cpaelzernibbon_: yes, if you upgrade you should have the modules of the old qemu on that mount point, to allow still-running old guests to load them16:52
nibbon_cpaelzer: so the expected behavior is to have that systemd unit enabled and running, and not restarted after an upgrade, amirite?16:53
nibbon_because with kernel 5.15.0 the unit is disabled on boot and never gets enabled after upgrades16:54
cpaelzeryeah, I'd have expected it to be on and running and that is how I see it here on jammy with kernel 6.516:55
cpaelzerit will be empty, that is ok16:55
cpaelzeruntil you upgrade16:55
nibbon_I recently installed linux-image-6.2.0-39-generic and now I get the unit enabled and up, and every time I upgrade qemu-block-extra it restarts the unit, wiping out whatever is in /run/qemu16:55
nibbon_I don't get why with 5.15 that unit remains disabled16:57
nibbon_also, I don't get why this https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/rules?h=ubuntu/jammy-updates#n432 isn't respected16:58
nibbon_iiuc on upgrade it shouldn't restart the unit, should it?16:58
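The state nibbon_ describes can be inspected with standard systemd tooling; a quick sketch, assuming a stock Jammy host:

  # is the tmpfs mounted, and does the unit think it is active?
  systemctl status run-qemu.mount
  findmnt /run/qemu
  # after upgrading qemu-block-extra, check whether the unit was restarted
  journalctl -u run-qemu.mount --since today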
cpaelzernibbon_: yes, I'd ignore the 5.15 kernel for now16:59
nibbon_okay, I was asking out of curiosity :)17:00
cpaelzernibbon_: but the fact that the debhelper magic seems to ignore all the --no* options and restarts it is bad17:00
nibbon_yeah, tell me about that. I've lost data :/17:00
cpaelzerthe only impact should be that you can't hot-add devices whose drivers aren't loaded to guests that were started in the past17:00
nibbon_not a biggie, because a stop/start of the domain will restore it. Still annoying17:01
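The stop/start workaround nibbon_ mentions, sketched with libvirt and a hypothetical domain name 'guest1' (a plain 'virsh reboot' keeps the same qemu process, so a full stop/start is needed to pick up the new modules):

  virsh shutdown guest1    # graceful ACPI shutdown; 'virsh destroy' if it hangs
  virsh start guest1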
nibbon_hmm, never had this issue. I assume it goes beyond my use case(s)17:02
cpaelzerit surely worked when we added it, and it later got changed a lot (how it is handled in d/rules)17:03
cpaelzerhaving a look what is affected ...17:03
nibbon_thanks17:03
nibbon_I'm okay with having that new mount point. However, it must be persistent for the whole lifecycle of the machine17:04
cpaelzeryep, it should be started at the beginning - populated with content on upgrades (the modules of the qemu version that is going away) and cleaned by a reboot17:05
nibbon_hmm, so should I expect some process running rm in that directory?17:06
cpaelzerno, not rm17:06
nibbon_okay, as long as it's only additions, it's fine17:06
cpaelzerthe removal maintainer script of qemu will back up its modules there, to be available for guests of that version that might still run and can only load those old bits17:07
nibbon_what's the removal maintainer script of qemu?17:10
cpaelzerthe prerm / postrm scripts of qemu-block-extra in this case17:12
cpaelzeryou'll see in there how it does the backup on upgrade, the removal on remove and only stops the mount unit on purge17:13
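The scripts themselves are generated at build time (as comes up just below), but the backup logic cpaelzer describes would look roughly like this; a hand-written sketch, not the real generated script, with the /run/qemu directory naming assumed:

  #!/bin/sh
  # sketch of a prerm fragment for qemu-block-extra
  set -e
  case "$1" in
  upgrade)
      # keep the outgoing block modules around for guests still running that build
      ver="$(dpkg-query -W -f '${Version}' qemu-block-extra)"
      mkdir -p "/run/qemu/$ver"              # directory layout is an assumption
      cp /usr/lib/*/qemu/block-*.so "/run/qemu/$ver/" || true
      ;;
  esac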
nibbon_I don't see any prerm / postrm script in jammy-updates: https://git.launchpad.net/ubuntu/+source/qemu/tree/debian?h=ubuntu/jammy-updates17:16
nibbon_I only see this https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/qemu-block-extra.postinst?h=ubuntu/jammy-updates17:17
cpaelzeryeah, because of magic - the lines like https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/rules?h=ubuntu/jammy-updates#n185 will make it exist in the final package17:17
cpaelzerI have the feeling that when switching to the mount unit this was ok, but then the logic changed undetected, and since then this restarts the unit, making it wasted effort17:18
nibbon_aha, that makes sense17:19
cpaelzerso far my check: working in focal, broken in jammy, working in mantic17:19
cpaelzerso it is not doomed everywhere17:19
cpaelzerlikely some unexpected dh_systemd behavior restarting the unit there17:19
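For context, the --no* options in question are debhelper's dh_installsystemd flags; a minimal sketch of the kind of debian/rules override that is meant to keep the unit alive across upgrades (behavior differs between debhelper compat levels, which fits what cpaelzer suspects):

  # debian/rules excerpt (sketch); older compat levels split this across
  # dh_systemd_enable/dh_systemd_start instead
  override_dh_installsystemd:
  	dh_installsystemd --no-restart-after-upgrade --no-stop-on-upgrade run-qemu.mount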
nibbon_<sigh>17:19
cpaelzernibbon_: have you already filed a bug I can add info to?17:20
nibbon_not yet, I wanted to understand the behavior before filing a bug17:20
cpaelzerI'll file one right now and pass you a url in a bit17:20
nibbon_sounds good, thanks17:21
nibbon_willing to help to solve the issue17:21
cpaelzernibbon_: bug 205115317:31
-ubottu:#ubuntu-server- Bug 2051153 in qemu (Ubuntu Jammy) "run-qemu.mount is restarted on upgrades" [Undecided, Confirmed] https://launchpad.net/bugs/205115317:31
cpaelzerI'm off for the day then, maybe sergiodj can be a rubber duck on this, confirming if he can see the same17:31
cpaelzerthe actual debug and fix will take some time17:31
cpaelzermostly to find out why it does not behave as it should17:31
cpaelzerI had very similar issues in other packages, but never realized this might affect this as well17:31
nibbon_debhelper sounds like opening a can of worms :/17:32
cpaelzerit helps, hence the name17:32
cpaelzerbut I've often seen it behave unexpectedly with .socket units, and now it seems with .mount units too17:32
nibbon_can you give me some pointers where I could look at to figure this out?17:33
nibbon_thanks for filing the bug17:33
cpaelzernibbon_: I've left more info on the bug now, drilling down to where it goes wrong17:44
cpaelzernibbon_: and linked an old case where this affected .socket units17:44
cpaelzerthat provides plenty of study material17:44
nibbon_cpaelzer: perfect, I'll go read.17:48
