Intelo | 2 users using the same pc at the same time, with same gpu, 2 sets of monitor, keyboard | 03:10 |
Intelo | any options? | 03:10 |
RoyK | hm… vmware has vGPU - i read kvm has something too | 03:12 |
superboot | Hi all. After a reboot with my mdadm/LVM/btrfs system, I get: mount: mounting /dev/mapper/osrootVG-root on /root failed: No such file or directory. After which I get dropped to an initramfs shell, where I can mount the device through /dev/osrootVG/root, which is just a symlink to /dev/dm-1 just like /dev/mapper/osrootVG-root is. Any hints? | 03:35 |
superboot | Also, why is this a problem? The initramfs is stored on that device, so it already found it and mounted it... I don't get it. What am I missing? | 03:40 |
superboot | fixed it. Just a stupid, stupid mistake... | 03:51 |
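(For anyone dropped into the same initramfs shell: a minimal recovery sketch, assuming the VG/LV names from superboot's messages above (osrootVG/root); substitute your own. In a busybox initramfs the LVM tools are usually reached through the `lvm` wrapper.)

```sh
# From the (initramfs) prompt: activate the volume group, mount root, resume boot.
lvm vgchange -ay                   # activate all LVM volume groups
mount /dev/osrootVG/root /root     # mount the root LV on the initramfs target
exit                               # continue the normal boot
```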
cpaelzer_ | Intelo: this lists some of the options to split cards https://cpaelzer.github.io/blogs/006-mediated-device-to-pass-parts-of-your-gpu-to-a-guest/ - you might pick and experiment with one of them | 05:07 |
cpaelzer_ | Intelo: but also it depends a lot on the level of isolation you really need/want - maybe something like https://wiki.ubuntu.com/Multiseat (if that still is a thing these days) would work better for you? | 05:07 |
cpaelzer_ | or https://askubuntu.com/questions/1054541/multiseat-on-ubuntu-18-04 - but it seems most use multiple GPUs to achieve that | 05:10 |
cpaelzer_ | so you might be back to splitting the GPU some way as suggested at first | 05:10 |
Intelo | cpaelzer_: multiseat needs separate gpu heads. I only have one | 15:33 |
RoyK | Intelo: looks like there's vGPU support in KVM for nVidia https://docs.nvidia.com/grid/10.0/grid-vgpu-release-notes-generic-linux-kvm/index.html | 15:49 |
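(The mediated-device (mdev) mechanism behind cpaelzer_'s post and NVIDIA's KVM vGPU is driven through sysfs. A rough sketch of carving one slice off a GPU, assuming a driver that exposes mdev_supported_types; the PCI address and type name here are hypothetical, so inspect your own hardware first.)

```sh
# Hypothetical PCI address (0000:00:02.0) and type name -- check your system.
cd /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
ls                                  # profiles the driver offers, e.g. i915-GVT-g_V5_4
UUID=$(uuidgen)
echo "$UUID" | sudo tee i915-GVT-g_V5_4/create   # create one mediated device
# the slice can then be handed to a KVM guest with something like:
#   qemu-system-x86_64 ... -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
```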
Intelo | hm RoyK | 15:51 |
Intelo | RoyK: virtualbox? | 15:52 |
RoyK | I was talking about KVM, not vbox | 15:58 |
kinghat | i transplanted my janky server into a proper case and when i booted it up i saw "Failed to start Import ZFS pools by cache file. See 'systemctl status zfs-import-cache.service' for details" and this is what it gives me: https://paste.debian.net/hidden/502e5056/ | 20:27 |
sarnold | kinghat: what does zpool import report? | 20:28 |
kinghat | pretty sure the two drives in the mirror pool are plugged into the same ports but i did add a couple other drives to the machine | 20:28 |
kinghat | sarnold: https://paste.debian.net/hidden/e3d0417b/ | 20:29 |
sarnold | kinghat: zpool import data should bring it online | 20:31 |
kinghat | persistently? | 20:32 |
sarnold | I think that ought to update the cache for the next reboot, but I've never been very clear on the cache | 20:33 |
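(The cache sarnold is unsure about is the pool's cachefile property, which zfs-import-cache.service reads at boot. A hedged sketch of refreshing it after a manual import, using the pool name from this log:)

```sh
sudo zpool import data                              # import the pool by name
sudo zpool set cachefile=/etc/zfs/zpool.cache data  # rewrite its cache entry for the next boot
```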
kinghat | $ sudo zpool import data | 20:33 |
kinghat | cannot mount '/mnt/data': directory is not empty | 20:33 |
sarnold | argh :/ that bit me too; I once was unable to import at boot for something, and then *other* stuff started creating contents in the mountpoints.. | 20:34 |
kinghat | i did try and check the mount after it happened and it was kind of odd that there were a couple dirs in the mount dir. | 20:34 |
sarnold | fix the /mnt/data problem -- either delete things, or move them aside for adding to the pool, etc | 20:34 |
kinghat | so i think one dir is in there creating data because i have a container volume in the pool but the other im not sure about. | 20:36 |
kinghat | oh. ya i know why. its two containers putting data there | 20:37 |
kinghat | hmm so bring down the containers, rm -rf the dirs inside of /mnt/data/ and then try to bring the pool back online? | 20:39 |
kinghat | and then the containers, ofc. | 20:39 |
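(Instead of rm -rf, the stray files can be set aside and merged back once the pool mounts; a sketch assuming the paths from this log:)

```sh
sudo mv /mnt/data /mnt/data.stray   # move the non-empty mountpoint out of the way
sudo mkdir /mnt/data                # fresh, empty mountpoint for the pool
sudo zpool import data              # the mount should now succeed
# then copy whatever the containers wrote from /mnt/data.stray into the pool
```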
kinghat | $ sudo zpool import data | 20:49 |
kinghat | cannot import 'data': a pool with that name already exists | 20:49 |
kinghat | use the form 'zpool import <pool | id> <newpool>' to give it a new name | 20:49 |
sarnold | does zpool status agree? | 20:50 |
kinghat | https://paste.debian.net/hidden/5bad3e16/ | 20:51 |
sarnold | yay | 20:52 |
kinghat | theres nothing in the mount though? | 20:52 |
kinghat | the /mnt/data dir is empty | 20:52 |
sarnold | check zfs list | 20:52 |
kinghat | $ zfs list | 20:53 |
kinghat | NAME USED AVAIL REFER MOUNTPOINT | 20:53 |
kinghat | data 881G 17.7G 881G /mnt/data | 20:53 |
kinghat | ya its an almost full data set | 20:53 |
kinghat | pool or whatever | 20:53 |
kinghat | can it be "remounted" or something? | 20:54 |
sarnold | whaaaat | 20:55 |
sarnold | what does /proc/mounts report? | 20:55 |
sarnold | how about /proc/mounts from a different shell spawned via a different mechanism? | 20:55 |
kinghat | is /proc/mounts a command? | 20:56 |
sarnold | no, it's a file showing the mounts in the current process's namespace | 20:56 |
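(A quick way to use it here:)

```sh
grep zfs /proc/mounts   # lists any ZFS filesystems visible in this process's mount namespace
```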
kinghat | this is from a new shell: https://paste.debian.net/hidden/2a3456ff/ | 20:57 |
kinghat | $ ll /proc/mounts | 20:57 |
kinghat | lrwxrwxrwx 1 root root 11 Jul 28 20:18 /proc/mounts -> self/mounts | 20:57 |
sarnold | and what's in /proc/mounts? | 20:57 |
kinghat | https://paste.debian.net/hidden/8036bbf1/ | 20:58 |
sarnold | *very* curious, not a single mention of zfs anywhere | 21:00 |
kinghat | 😬 | 21:00 |
sarnold | how about: zfs list -ocanmount,mounted,mountpoint,name | 21:01 |
kinghat | $ zfs list -ocanmount,mounted,mountpoint,name | 21:02 |
kinghat | CANMOUNT MOUNTED MOUNTPOINT NAME | 21:02 |
kinghat | on no /mnt/data data | 21:02 |
sarnold | lolol I am so confused. *why* zfs *why* | 21:03 |
sarnold | zfs mount -a ? | 21:04 |
kinghat | boom | 21:05 |
kinghat | $ zfs list -ocanmount,mounted,mountpoint,name | 21:05 |
kinghat | CANMOUNT MOUNTED MOUNTPOINT NAME | 21:05 |
kinghat | on yes /mnt/data data | 21:05 |
kinghat | actual data le mounted | 21:05 |
kinghat | so it should be persistent across reboots/shutdowns now? like the cache got cleared or something? | 21:06 |
sarnold | I sure hope so :) | 21:10 |
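(Worth verifying that the boot-time units are actually enabled, and reading why the last import attempt failed; unit names as shipped with Ubuntu's ZFS packaging:)

```sh
systemctl is-enabled zfs-import-cache.service zfs-mount.service zfs.target
journalctl -b -u zfs-import-cache.service   # why the boot-time import failed
```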
kinghat | ok going to give it a go. | 21:11 |
kinghat | im not sure what changed to make it freak out? maybe drive mount point? | 21:11 |
kinghat | looks like it made it! | 21:13 |
kinghat | thanks for being a G, sarnold 🙏 | 21:13 |
sarnold | kinghat: the usual problem is stuff in the mountpoint, but once you got past that I'm surprised you still had problems :/ | 21:15 |
sarnold | kinghat: I hope that's it though :) | 21:15 |
kinghat | sarnold: so i had to remove the power to the server to move it to a temp location and on booting it again i got the same import cache error | 23:42 |
sarnold | :( | 23:42 |
kinghat | this time there wasnt any data created in the mount point though | 23:44 |
kinghat | yikes. no datasets this time | 23:44 |
sarnold | does 'zpool import' show the pool? zpool status show it imported or not? | 23:46 |
kinghat | whoops, i ran 'zpool import data' instead of plain 'zpool import' | 23:48 |
kinghat | $ zfs list | 23:48 |
kinghat | NAME USED AVAIL REFER MOUNTPOINT | 23:48 |
kinghat | data 881G 17.7G 881G /mnt/data | 23:48 |
kinghat | $ zfs list -ocanmount,mounted,mountpoint,name | 23:49 |
kinghat | CANMOUNT MOUNTED MOUNTPOINT NAME | 23:49 |
kinghat | on yes /mnt/data data | 23:49 |
kinghat | survived a reboot again. i have to go to the server again so i'll shut it down, without removing the power cord, and see if it does it with regular shutdowns. | 23:52 |
kinghat | maybe it does it when it loses actual power? | 23:52 |
kinghat | though i wouldnt have a clue why that would matter 🤷‍♂️ | 23:53 |
sarnold | kinghat: you might want to do a zpool import -d /dev/disk/by-id/ or similar, so that the pool ought to use long names rather than short names | 23:56 |
kinghat | $ sudo zpool list | 23:58 |
kinghat | NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT | 23:58 |
kinghat | data 928G 881G 46.7G - 13% 94% 1.00x ONLINE - | 23:58 |
kinghat | you mean so it doesnt use "data"? | 23:58 |
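(No rename involved: -d only changes which device paths zpool scans, so the pool keeps its name but records stable by-id paths for its vdevs. A sketch:)

```sh
sudo zpool export data
sudo zpool import -d /dev/disk/by-id data   # same pool, stable device names
zpool status data                           # vdevs should now list by-id paths
```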