[02:08] anyone have an idea how to deal with unresponsive byobu on a server? it seems to have frozen, and unfortunately I have run byobu-enable, so it always loads into the byobu session (and hangs) when I attempt to log on. is there anything i can do without rebooting?
[02:10] gah, never mind, it became responsive again after 10 minutes or something. really weird
[02:15] peepsalot, you could get the pid in top or htop and kill the process, then restart. i use tmux
[02:16] the thing was I couldn't get into any shell at the time when it was frozen
[02:17] also afaiui byobu is just a frontend for tmux anyway
[02:23] ctrl-f6 should force kill a window
[02:31] byobu is a configuration for tmux or screen
[02:32] calling it a frontend is technically wrong :)
[02:33] and my guess would be that something was delaying login or something like that?
[02:33] or shell startup?
[02:34] or the system was overloaded
[02:35] peepsalot: ^^^
[02:38] well, i do keep it busy with work: 49.94 load on 48 threaded cpu(s) :) but i've never noticed any responsiveness issues like that before
[02:39] it happened while I was in the middle of doing a search of scrollback in a tmux window
[02:40] which only keeps 100k lines in the worst case
[02:41] I/O overload can do that, I guess, or when you get into a swap storm
[02:41] 0 swap usage though, i've got ram to spare
[02:42] only about 33% ram used
[02:42] if you were logged in remotely, it could also have been a network issue
[02:44] peepsalot, you might try byobu-disable and start it from the terminal to see if the hang reproduces that way
[02:46] it was over ssh (on my LAN), but even when I went to the physical computer to try logging into a virtual console, it was hanging upon login (because that would connect to the same byobu session, since that's what "byobu-enable" configures it to do)
[02:47] it might be a login/authentication/PAM issue
[02:48] I've seen that happen under heavy I/O load too
[02:51] well, anyway it's better now. not sure I could reproduce it if I tried (i've already been doing more scrollback searches and nothing like that is happening now), just an odd hiccup that hopefully won't occur again soon
[03:05] peepsalot, re my first reply about the pid, you can alt-f2, alt-f3 to open a new tty and kill the process
[03:09] ra, i couldn't log in from those at the time... because byobu/tmux was stuck churning on something, and login goes directly to the byobu session
[03:12] so try byobu-disable and start it from the terminal to see if that changes anything
[03:14] peepsalot, but you did try alt-f2?
[03:15] login would have to happen before byobu starts on a new virtual console
[04:34] is my zfs mirror pool mounting automatically to the server?
[04:36] what i mean is i can see its mount point and it's working fine, but i don't see it set up in my fstab.
[04:38] and i just added another drive to the system and formatted it to ext4, and it shows up in the disk list. i'm guessing i need to manually give it a mount point?
[06:07] kinghat: ZFS doesn't use fstab unless you explicitly set up mountpoints in legacy mode. zfs-mount.service mounts datasets that aren't in legacy mode.
[15:25] so I would just need to make it auto mount manually?
[15:30] if you're talking to someone specifically, be sure to mention their nickname.
[15:30] otherwise, provide context.
[15:41] tomreyn: nobody in particular, and it's just an added drive to my server. i had forgotten how my pool was mounted since it was set and forget.
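A minimal sketch of what the [06:07] answer describes, for checking how an existing pool gets mounted ("tank" below is a placeholder pool name, not something from the conversation):

    zfs list -o name,mountpoint,canmount,mounted   # datasets whose mountpoint is not "legacy" are handled by zfs-mount.service
    zfs get mountpoint tank                        # "legacy" here would mean fstab is responsible instead
    systemctl status zfs-mount.service             # confirm the mount service itself is enabled and ran at boot

A dataset with a regular mountpoint (and canmount=on) is mounted at boot by zfs-mount.service, which is why nothing for the pool shows up in fstab.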
[15:42] since i don't think i will be adding the drive to the pool and just using it for backup, i need to figure out how to get it to auto mount like a regular drive
[15:43] unless zfs also does backup stuffs?
[15:58] kinghat: you're asking confusing questions.
[15:59] probably
[15:59] ZFS datasets are mounted either by zfs-mount.service -- which looks at the dataset's mountpoint attribute -- or by legacy mounts via fstab (for which mountpoint=legacy is set on the dataset)
[16:00] ya, i'm guessing mine's via zfs-mount.service.
[16:00] so if you want a dataset mounted, you should specify its mountpoint= attribute. if it's "legacy", then you need to use fstab.
[16:01] i have a zfs mirror pool that is mounted and working fine. i added another drive, formatted it, and noticed it doesn't automatically have a mount point.
[16:01] kinghat: "formatted it" ?
[16:01] i think i want to use the drive as a backup drive that gets backed up to from multiple locations every day.
[16:02] you're not answering my questions. how did you "format" the drive (the terminology doesn't exist for ZFS)
[16:02] blackflow: i did mkfs.ext4 /dev/sdc
[16:02] and that has to do with your ZFS questions... what? lol
[16:02] it was previously formatted with ntfs
[16:04] so if you want an ext4 filesystem mounted at boot, you need to add it to fstab.
[16:04] it doesn't. i was confused as to why/how the zfs pool/drives were mounted, as i couldn't remember setting them up. i just added a drive, formatted it, and noticed it's not auto mounted anywhere on the system.
[16:04] yes
[16:04] fstab, or write a proper systemd .mount unit (fstab is converted to them at run time anyway)
[16:04] figured that's what i needed to do, something like that.
[16:06] ZFS is a completely different beast from traditional filesystems. It's a whole fs+volumes+raid+snapshots+compression+encryption+.... kind of kitchen-sink solution, with a set of services, its own specific concepts and idiosyncrasies
[16:07] ext4 is just a passive filesystem, nothing else, no services, no volume management, no raid, but latest versions methinks can do encryption.
[16:30] blackflow: if i didn't want to add the new drive as a 3rd mirror and wanted to use it as a backup drive, does zfs have something for that as well, or should i just use something like rsync?
[16:34] kinghat: you can send ZFS snapshots at block level to another drive, or over the network too
[16:34] what if i'm sending other data from another drive to said backup drive?
[16:35] as well*
[16:35] kinghat: but consider what you're asking. if you're gonna add a third drive to the same chassis, that's hardly a backup. you can add a hot spare to your existing pool, or simply increase redundancy by creating a 3-way mirror
[16:35] you can rsync files from other filesystems, and use zfs snapshot send|recv for ZFS filesystems
[16:36] snapshots are moved between datasets, so you can have rpool/backups-for-zfs, rpool/backups-for-others and then you send|recv to backups-for-zfs, and use rsync under backups-for-others
[16:37] kinghat: something like this? https://xkcd.com/1718/
[16:46] lel
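A rough sketch of the two backup paths suggested above, plus the fstab entry for the new ext4 drive. The device name /dev/sdc and the rpool/backups-for-* dataset names come from the conversation; the mountpoint, source dataset, and snapshot names are placeholders:

    # ext4 drive: look up its UUID, then give it a permanent mountpoint in /etc/fstab
    blkid /dev/sdc
    # example /etc/fstab line:
    # UUID=<uuid-from-blkid>  /mnt/backup  ext4  defaults  0  2

    # ZFS sources: snapshot, then send|recv at block level into a backup dataset
    zfs snapshot rpool/data@2020-06-01
    zfs send rpool/data@2020-06-01 | zfs recv rpool/backups-for-zfs/data

    # non-ZFS sources: plain rsync into wherever the backups-for-others dataset is mounted
    rsync -a /srv/other-drive/ /rpool/backups-for-others/other-drive/

As noted at [16:04], fstab entries are converted into systemd .mount units at run time, so either form works; and for the send|recv path the receiving drive would itself have to be a ZFS pool rather than ext4.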