[03:10] 2 users using the same pc at the same time, with same gpu, 2 sets of monitor, keyboard
[03:10] any options?
[03:12] hm… vmware has vGPU - i read kvm has something too
[03:35] Hi all. After a reboot with my mdadm/LVM/btrfs system, I get: mount: mounting /dev/mapper/osrootVG-root on /root failed: No such file or directory. After which I get dropped to an initramfs shell, where I can mount the device through /dev/osrootVG/root, which is just a symlink to /dev/dm-1 just like /dev/mapper/osrootVG-root is. Any hints?
[03:40] Also, why is this a problem? The initramfs is stored on that device, so it already found it, and mounted it... I don't get it. What am I missing?
[03:51] fixed it. Just a stupid stupid mistake. . .
[05:07] Intelo: this lists some of the options to split cards https://cpaelzer.github.io/blogs/006-mediated-device-to-pass-parts-of-your-gpu-to-a-guest/ - you might pick and experiment with one of them
[05:07] Intelo: but also it depends a lot on the level of isolation you really need/want - maybe something like https://wiki.ubuntu.com/Multiseat (if that still is a thing these days) would work better for you?
[05:10] or https://askubuntu.com/questions/1054541/multiseat-on-ubuntu-18-04 - but it seems most use multiple GPUs to achieve that
[05:10] so you might be back to splitting the GPU some way as suggested at first
[15:33] cpaelzer_: multiseat needs separate gpu heads. I only have one
[15:49] Intelo: looks like there's vGPU support in KVM for nVidia https://docs.nvidia.com/grid/10.0/grid-vgpu-release-notes-generic-linux-kvm/index.html
[15:51] hm RoyK
[15:52] RoyK: virtualbox?
[15:58] I was talking about KVM, not vbox
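A minimal sketch of the mediated-device (mdev) approach the blog post above describes, assuming an NVIDIA card with vGPU-capable hardware (GRID/Tesla/Quadro class, not consumer GeForce) and the vendor's host driver already loaded; the PCI address 0000:41:00.0 and the profile name nvidia-63 are placeholders for illustration, not values taken from this discussion:

    # list the vGPU profiles the host driver exposes for this physical GPU
    ls /sys/bus/pci/devices/0000:41:00.0/mdev_supported_types

    # inspect one profile (its name and how many instances are still available)
    cat /sys/bus/pci/devices/0000:41:00.0/mdev_supported_types/nvidia-63/name
    cat /sys/bus/pci/devices/0000:41:00.0/mdev_supported_types/nvidia-63/available_instances

    # create a mediated device (one slice of the GPU) with a fresh UUID
    UUID=$(uuidgen)
    echo "$UUID" | sudo tee /sys/bus/pci/devices/0000:41:00.0/mdev_supported_types/nvidia-63/create

    # the new device shows up under /sys/bus/mdev/devices/$UUID and can be
    # handed to a KVM guest, e.g. via libvirt's <hostdev type='mdev'> element

Whether this beats a multiseat setup depends on how much isolation is needed: mdev gives each user a full VM, multiseat shares one host session stack.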
[20:27] i transplanted my janky server into a proper case and when i booted it up i saw "Failed to start Import ZFS pools by cache file. See 'systemctl status zfs-import-cache.service' for details" and this is what it gives me: https://paste.debian.net/hidden/502e5056/
[20:28] kinghat: what does zpool import report?
[20:28] pretty sure the two drives in the mirror pool are plugged into the same ports but i did add a couple other drives to the machine
[20:29] sarnold: https://paste.debian.net/hidden/e3d0417b/
[20:31] kinghat: zpool import data should bring it online
[20:32] persistently?
[20:33] I think that ought to update the cache for the next reboot, but I've never been very clear on the cache
[20:33] $ sudo zpool import data
[20:33] cannot mount '/mnt/data': directory is not empty
[20:34] argh :/ that bit me too; I once was unable to import at boot for something, and then *other* stuff started creating contents in the mountpoints..
[20:34] i did try and check the mount after it happened and it was kind of odd that there were a couple dirs in the mount dir.
[20:34] fix the /mnt/data problem -- either delete things, or move them aside for adding to the pool, etc
[20:36] so i think one dir is in there creating data because i have a container volume in the pool but the other im not sure about.
[20:37] oh. ya i know why. its two containers putting data there
[20:39] hmm so bring down the containers, rm -rf the dirs inside of /mnt/data/ and then try to bring the pool back online?
[20:39] and then the containers, ofc.
[20:49] $ sudo zpool import data
[20:49] cannot import 'data': a pool with that name already exists
[20:49] use the form 'zpool import <pool | id> <newpool>' to give it a new name
[20:50] does zpool status agree?
[20:51] https://paste.debian.net/hidden/5bad3e16/
[20:52] yay
[20:52] theres nothing in the mount though?
[20:52] the /mnt/data dir is empty
[20:52] check zfs list
[20:53] $ zfs list
[20:53] NAME   USED  AVAIL  REFER  MOUNTPOINT
[20:53] data   881G  17.7G   881G  /mnt/data
[20:53] ya its an almost full data set
[20:53] pool or whatever
[20:54] can it be "remounted" or something?
[20:55] whaaaat
[20:55] what does /proc/mounts report?
[20:55] how about /proc/mounts from a different shell spawned via a different mechanism?
[20:56] is /proc/mounts a command?
[20:56] no, it's a file showing the mounts in the current process's namespace
[20:57] this is from a new shell: https://paste.debian.net/hidden/2a3456ff/
[20:57] $ ll /proc/mounts
[20:57] lrwxrwxrwx 1 root root 11 Jul 28 20:18 /proc/mounts -> self/mounts
[20:57] and what's in /proc/mounts?
[20:58] https://paste.debian.net/hidden/8036bbf1/
[21:00] *very* curious, not a single mention of zfs anywhere
[21:00] 😬
[21:01] how about: zfs list -ocanmount,mounted,mountpoint,name
[21:02] $ zfs list -ocanmount,mounted,mountpoint,name
[21:02] CANMOUNT  MOUNTED  MOUNTPOINT  NAME
[21:02] on        no       /mnt/data   data
[21:03] lolol I am so confused. *why* zfs *why*
[21:04] zfs mount -a ?
[21:05] boom
[21:05] $ zfs list -ocanmount,mounted,mountpoint,name
[21:05] CANMOUNT  MOUNTED  MOUNTPOINT  NAME
[21:05] on        yes      /mnt/data   data
[21:05] actual data le mounted
[21:06] so it should be persistent across reboots/shutdowns now? like the cache got cleared or something?
[21:10] I sure hope so :)
[21:11] ok going to give it a go.
[21:11] im not sure what changed to make it freak out? maybe drive mount point?
[21:13] looks like it made it!
[21:13] thanks for being a G, sarnold 🙏
[21:15] kinghat: the usual problem is stuff in the mountpoint, but once you got past that I'm surprised you still had problems :/
[21:15] kinghat: I hope that's it though :)
[23:42] sarnold: so i had to remove the power to the server to move it to a temp location and on booting it again i got the same import cache error
[23:42] :(
[23:44] this time there wasnt any data created in the mount point though
[23:44] yikes. no datasets this time
[23:46] does 'zpool import' show the pool? zpool status show it imported or not?
[23:48] whoops i just imported 'data' vs just zpool import
[23:48] $ zfs list
[23:48] NAME   USED  AVAIL  REFER  MOUNTPOINT
[23:48] data   881G  17.7G   881G  /mnt/data
[23:49] $ zfs list -ocanmount,mounted,mountpoint,name
[23:49] CANMOUNT  MOUNTED  MOUNTPOINT  NAME
[23:49] on        yes      /mnt/data   data
[23:52] survived a reboot again. i have to go to the server again so ill shut it down, without removing the power cord, and see if it does it with regular shutdowns.
[23:52] maybe it does it when it loses actual power?
[23:53] though i wouldnt have a clue why that would matter 🤷‍♂️
[23:56] kinghat: you might want to do a zpool import -d /dev/disk/by-id/ or similar, so that the pool ought to use long names rather than shortnames
[23:58] $ sudo zpool list
[23:58] NAME   SIZE  ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
[23:58] data   928G   881G  46.7G         -   13%  94%  1.00x  ONLINE  -
[23:58] you mean so it doesnt use "data"?
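A sketch of how the non-empty-mountpoint and import-by-id suggestions above fit together, assuming the pool is named data and mounted at /mnt/data as in the pastes; zfs-import-cache.service imports from /etc/zfs/zpool.cache at boot, so refreshing that file after a clean import is the usual way to make it stick. The /mnt/data.stray name is just an illustration, not something from the log:

    # stop whatever writes into /mnt/data (the containers), then clear the
    # mountpoint: ZFS will not mount a dataset over a non-empty directory
    sudo zpool export data
    sudo mv /mnt/data /mnt/data.stray && sudo mkdir /mnt/data   # or rm -rf the stray dirs

    # re-import using stable /dev/disk/by-id names instead of sdX names, so the
    # pool still assembles if drives end up on different ports or controllers
    sudo zpool import -d /dev/disk/by-id data

    # point the pool at the cache file zfs-import-cache.service reads at boot
    sudo zpool set cachefile=/etc/zfs/zpool.cache data

    # mount everything and confirm
    sudo zfs mount -a
    zfs list -ocanmount,mounted,mountpoint,name

The -d /dev/disk/by-id step does not rename the pool; it only changes which device paths the pool records, which is what the "long names rather than shortnames" comment refers to.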