[08:11] <icey> jamespage: would you be around to take a look at https://code.launchpad.net/~chris.macnaughton/ubuntu/+source/openvswitch/+git/openvswitch/+merge/387852 ?
[08:39] <Peanut> I've logged in to my 20.04 Focal desktop, and logged out again. Now, 15 minutes later, there are still 33 processes running as this user: /usr/lib/bluetooth/obexd, lots of Evolution processes (I don't use Evolution), geoclue (no clue what that is), telepathy/mission-control (???), gnome-tweak-tool-lib-inhibitor (this is a desktop) etc. Why don't these disappear once I've logged out?
[08:41] <Peanut> Ah, sorry, wrong channel.
[11:26] <Towser> I have a question. Would it be worth converting an old laptop into a DHCP/PBX server? If so, what edition of Ubuntu Server would work best (and cheapest), or would I be better off using an alternative platform?
[12:37] <Orcs53> Hi there! I have Docker (installed the snap) on a Raspberry Pi running Ubuntu 20.04 server. I have done a few power cycles, and now when I reboot, the Docker daemon no longer starts.
[12:38] <Orcs53> Here is a portion of the output of journalctl https://paste.ubuntu.com/p/nVNkpWGQCD/
[12:38] <Orcs53> Any ideas on how to solve this issue?
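For anyone hitting the same thing, a couple of ways to pull the snap daemon's logs after a failed start; the systemd unit name is an assumption about how the Docker snap names its service, so adjust if it differs:
    # Last 100 log lines from the docker snap's services
    sudo snap logs docker -n 100
    # Status of the snap-managed dockerd unit (name assumed)
    systemctl status snap.docker.dockerd.service --no-pager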
[12:43] <jamespage> icey: sorry, was OOO this morning
[12:43] <icey> no worries :)
[12:43] <NTQ> I've got a problem with a stuck cifs mount on an Ubuntu 18.04 server; the remote share isn't available anymore. Every two seconds it logs this: https://paste.ubuntu.com/p/Z9K643qBWr/
[12:43] <NTQ> I can't modprobe -rf cifs, I can't remount it (because the remote is gone), and I can't umount it because it's already reported as unmounted.
[12:43] <NTQ> What can I do except restarting the system? It is a production server with a lot of running services.
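For a mount in that state, a lazy or forced unmount is sometimes enough to detach it without a reboot; a minimal sketch, with /mnt/share standing in for the real mount point:
    # Detach the mount point now; remaining references are cleaned up once released
    sudo umount -l /mnt/share
    # Or force the unmount, which is intended for unreachable network filesystems
    sudo umount -f /mnt/share
    # See whether any process is still holding files open on the mount
    sudo lsof +f -- /mnt/share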
[12:45] <Orcs53> Oh FYI, I solved it: docker.pid was not deleted on power off. Deleting that file and restarting the snap fixed it.
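Sketched as commands, with the pid file path an assumption about where the Docker snap keeps it; locate it first and adjust if it differs:
    # Find the stale pid file left behind by the unclean power-off
    sudo find /var/snap/docker -name docker.pid
    # Remove it (path assumed) and restart the snap's services
    sudo rm /var/snap/docker/current/run/docker.pid
    sudo snap restart docker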
[12:45] <NTQ> It also seems to be the reason why I can't upgrade VirtualBox. It gets stuck at "Preparing to unpack ..."
[15:18] <rbasak> mdeslaur: sergiodj and kanashiro will work on MySQL riscv64 for you
[15:20] <mdeslaur> rbasak: awesome, thanks sergiodj, kanashiro
[15:20] <sergiodj> mdeslaur: hey, np :)
[20:51] <keithzg[m]> Still baffled by file i/o on the BTRFS storage pool at my work eventually slowing to a crawl; it started happening a few weeks back, the only fix seems to be rebooting, and nothing gets logged other than stuff like Dovecot complaining about timing out while writing files to storage, or other servers complaining their NFS mounts are down. Lots of evidence of symptoms but no evidence of any cause :(
[20:54] <sarnold> keithzg[m]: any luck with perf top?
[20:56] <keithzg[m]> sarnold: Alas, it seemed to report "naw not much going on here!" when I tried that during one of these flareups.
[20:56] <sarnold> :(
[20:58] <keithzg[m]> Moving /home from the BTRFS pool to the server's root SSD seems to have stopped email delivery from stalling out, so it's definitely not the server overall, just the RAID 1+0 pool of 4x4TB drives somehow. But I already very strongly suspected as much.
[21:04] <sarnold> smartctl?
[21:04] <sarnold> dmesg?
[21:06] <keithzg[m]> If only! `smartctl` reports all fine, barring a mere 2 bad blocks on one of the drives. Nothing seems relevant in `dmesg`, but I should make a note to read through its output thoroughly next time. Mostly I've been relying on the systemd journal, which similarly only shows evidence of the i/o timeouts themselves and nothing pointing towards a cause.
[21:12] <keithzg[m]> I'm kind of wondering if it's just a matter of getting overloaded; these are only 5400rpm drives, so maybe the write queue is just getting untenably long? But I would think some sort of warning about that would be logged somewhere I've looked . . . hmm. Adding `iostat -x` to my list of outputs I need to peer closely at next time this happens too.
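A sketch of the kind of snapshot worth grabbing during a flareup; the pool mount point is a placeholder:
    # Extended per-device stats every 5 seconds; high await, aqu-sz and %util point at saturated disks
    iostat -x 5
    # Per-device BTRFS error counters for the pool
    sudo btrfs device stats /mnt/pool
    # Kernel messages with readable timestamps, looking for blocked-task or controller errors
    sudo dmesg -T | tail -n 100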
[21:14] <sarnold> are they SMR drives? SMR drives kinda suck at sustained writes
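SMR usually isn't reported directly by the drive, so one way to check is to pull the exact model strings and compare them against the vendor's CMR/SMR lists; /dev/sda is a placeholder:
    # Model, serial and rotation rate for one drive
    sudo smartctl -i /dev/sda
    # Or model and rotational flag for every disk at once
    lsblk -d -o NAME,MODEL,SIZE,ROTA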