[02:08] <peepsalot> anyone have an idea how to deal with unresponsive byobu on a server?  it seems to have frozen and unfortunately I have run byobu-enable so it always loads into byobu session(and hangs) when I attempt to logon.  is there anything i can do without rebooting?
[02:10] <peepsalot> gah, nevermind, it became responsive again after 10 minutes or something.  really weird
[02:15] <ra> peepsalot, you could get the pid in top or htop and kill the process, then restart. i use tmux
[02:16] <peepsalot> the thing was I couldn't get into any shell at the time when it was frozen
[02:17] <peepsalot> also afaiui byobu is just a frontend for tmux anyways
[02:23] <ra> ctrl-f6 should force kill a window
[02:31] <JanC> byobu is a configuration for tmux or screen
[02:32] <JanC> calling it a frontend is technically wrong  :)
[02:33] <JanC> and my guess would be that something was delaying login or something like that?
[02:33] <JanC> or shell startup?
[02:34] <JanC> or the system was overloaded
[02:35] <JanC> peepsalot: ^^^
[02:38] <peepsalot> well, i do keep it busy with work: 49.94 load on 48 threaded cpu(s) :)    but i've never noticed any responsiveness issues like that before
[02:39] <peepsalot> it happened while I was in the middle of doing a search of scrollback in a tmux window
[02:40] <peepsalot> which only keeps 100k lines in the worst case
[02:41] <JanC> I/O overload can do that, I guess, or when you get into a swap storm
[02:41] <peepsalot> 0 swap usage though, i got ram to spare
[02:42] <peepsalot> only about 33% ram used
[02:42] <JanC> if you were logged in remotely, it could also have been a network issue
[02:44] <ra> peepsalot, you might try byobu-disable and start it from the terminal to see if the hang reproduces that way
[02:46] <peepsalot> it was over ssh (on my LAN), but even when I went to the physical computer to try logging into a virtual console, it hung at login too (because that would connect to the same byobu session, since that's what "byobu-enable" configures it to do)
[02:47] <JanC> it might be a login/authentication/PAM issue
[02:48] <JanC> I've seen that happen on a heavy I/O load too
[02:51] <peepsalot> well, anyways it's better now; not sure I could reproduce it if I tried (i've already been doing more scrollback finds and nothing like that is happening now), just an odd hiccup that hopefully won't occur again soon
[03:05] <ra> peepsalot, re my first reply about the pid, you can alt-f2, alt-f3 to open a new tty and kill the process
[03:09] <peepsalot> ra, i couldn't login from those at the time... because byobu/tmux was stuck churning on something, and login goes directly to byobu session
[03:12] <ra> so try byobu-disable and start it from the terminal to see if that changes anything
[03:14] <ra> peepsalot, but you did try alt-f2?
[03:15] <JanC> login would have to happen before byobu starts on a new virtual console
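ra's recovery suggestion above (get the pid from another tty and kill the process) can be sketched roughly like this, assuming the stuck byobu session is tmux-backed as peepsalot says; the pid is illustrative:

```shell
# Hedged sketch: recover from a wedged byobu/tmux session from another tty.
# This assumes you can reach a shell that isn't attached to the stuck session.
pgrep -a tmux                 # list tmux server processes and their pids
tmux kill-server              # ask the tmux server to shut down cleanly
# If tmux itself doesn't respond, kill it by pid (replace 1234 with the real pid):
# kill -9 1234
# To stop byobu auto-launching at every login while you investigate:
byobu-disable
```

The catch JanC points out still applies: with byobu-enable active, login on a fresh tty attaches straight to the same stuck session, so byobu-disable (or commenting it out of the shell profile) is what breaks the loop.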
[04:34] <kinghat> is my zfs mirror pool mounting automatically to the server?
[04:36] <kinghat> what i mean is i can see its mount point and its working fine but i dont see it setup in my fstab.
[04:38] <kinghat> and i just added another drive to the system and formatted it to ext4 and it shows up in the disk list, im guessing i need to manually give it a mount point?
[06:07] <blackflow> kinghat: ZFS doesn't use fstab unless you explicitly set up mountpoints in legacy mode. zfs-mount.service mounts datasets that aren't in legacy mode.
[15:25] <kinghat> so I would just need to make it auto mount manually?
[15:30] <tomreyn> if you're talking to someone specifically, be sure to mention their nickname.
[15:30] <tomreyn> otherwise, provide context.
[15:41] <kinghat> tomreyn: nobody in particular and its just an added drive to my server. i had forgotten how my pool was mounted since it was set and forget.
[15:42] <kinghat> since i dont think i will be adding the drive to the pool and just using it for backup i need to figure out how to get it to auto mount like a regular drive
[15:43] <kinghat> unless zfs also does backup stuffs?
[15:58] <blackflow> kinghat: you're asking confusing questions.
[15:59] <kinghat> probably
[15:59] <blackflow> ZFS datasets are mounted either by zfs-mount.service -- which looks at each dataset's mountpoint attribute -- or by legacy mounts via fstab (for datasets whose mountpoint attribute is set to legacy)
[16:00] <kinghat> ya im guessing mines via zfs-mount.service.
[16:00] <blackflow> so if you want a dataset mounted, you should specify its mountpoint= attribute. if it's "legacy", then you need to use fstab.
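blackflow's two cases can be sketched with zfs commands; the pool/dataset name tank/data is made up for illustration:

```shell
# Case 1: non-legacy mountpoint -- zfs-mount.service mounts it automatically
zfs get mountpoint tank/data            # inspect the current attribute
zfs set mountpoint=/srv/data tank/data  # mounted at /srv/data on boot, no fstab needed

# Case 2: legacy mountpoint -- mounted via fstab like any other filesystem
zfs set mountpoint=legacy tank/data
# then add a line to /etc/fstab:
#   tank/data  /srv/data  zfs  defaults  0  0
```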
[16:01] <kinghat> i have a zfs mirror pool that is mounted and working fine. i added another drive, formatted it and noticed it doesnt automatically have a mount point.
[16:01] <blackflow> kinghat: "formatted it" ?
[16:01] <kinghat> i think i want to use the drive as a backup drive that gets backed up to from multiple locations everyday.
[16:02] <blackflow> you're not answering my questions. how did you "format" the drive (the terminology doesn't exist for ZFS)
[16:02] <kinghat> blackflow: i did mkfs.ext4 /dev/sdc
[16:02] <blackflow> and that has to do with your ZFS questions... what? lol
[16:02] <kinghat> it was previously formatted with ntfs
[16:04] <blackflow> so if you want an ext4 filesystem mounted at boot, you need to add it to fstab.
[16:04] <kinghat> it doesnt. i was confused as to why/how the zfs pool/drives were mounted as i couldnt remember setting them up. i just added a drive, formatted it, and noticed its not auto mounted anywhere on the system.
[16:04] <kinghat> yes
[16:04] <blackflow> fstab or write a proper systemd .mount unit  (fstab is converted to them at run time anyway)
[16:04] <kinghat> figured thats what i needed to do something like that.
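For the ext4 drive, either route blackflow mentions looks roughly like this; the UUID and mount path are placeholders:

```shell
# Option A: /etc/fstab entry (find the UUID with: blkid /dev/sdc)
#   UUID=1234-abcd  /mnt/backup  ext4  defaults  0  2

# Option B: a native systemd mount unit, /etc/systemd/system/mnt-backup.mount
# (the unit filename must encode the Where= path: /mnt/backup -> mnt-backup.mount)
#   [Unit]
#   Description=ext4 backup drive
#
#   [Mount]
#   What=/dev/disk/by-uuid/1234-abcd
#   Where=/mnt/backup
#   Type=ext4
#
#   [Install]
#   WantedBy=local-fs.target
#
# then: systemctl daemon-reload && systemctl enable --now mnt-backup.mount
```

As blackflow notes, fstab entries are converted to generated .mount units at runtime anyway, so the two options end up equivalent.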
[16:06] <blackflow> ZFS is a completely different beast from traditional filesystems. It's a whole fs+volumes+raid+snapshots+compression+encryption+.... kind of kitchen sink solution, with a set of services, its own specific concepts and idiosyncrasies
[16:07] <blackflow> ext4 is just a passive filesystem, nothing else, no services, no volume management, no raid, but latest versions methinks can do encryption.
[16:30] <kinghat> blackflow: if i didnt want to add the new drive as a 3rd mirror and wanted to use it as a backup drive, does zfs have something for that as well or just use something like rsync?
[16:34] <blackflow> kinghat: you can send ZFS snapshots at block-level to another drive, or over the network to <wherever>
[16:34] <kinghat> what if im sending other data from another drive to said backup drive?
[16:35] <kinghat> as well*
[16:35] <blackflow> kinghat: but consider what you're asking. if you're gonna add a third drive to the same chassis, that's hardly a backup. you can add a hot spare to your existing pool, or simply increase redundancy by creating a 3-way mirror
[16:35] <blackflow> you can rsync files from other filesystems, and use zfs snapshots send|recv for ZFS filesystems
[16:36] <blackflow> snapshots are moved between datasets, so you can have   rpool/backups-for-zfs ,   rpool/backups-for-others     and then you send|recv to backups-for-zfs, and use rsync under backups-for-others
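The send|recv flow blackflow describes might look like this, reusing his hypothetical rpool/backups-for-zfs and rpool/backups-for-others dataset names (snapshot names are placeholders):

```shell
# Take a snapshot, then replicate it at block level into a backup dataset
zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs recv rpool/backups-for-zfs/data

# Later, send only the delta between two snapshots (incremental send)
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | zfs recv rpool/backups-for-zfs/data

# Non-ZFS sources just get rsync'd into the other dataset's mountpoint
rsync -a /mnt/other-drive/ /rpool/backups-for-others/
```

The same `zfs send` stream can also be piped over ssh to another machine, which addresses blackflow's point that a third drive in the same chassis is hardly a backup.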
[16:37] <RoyK> kinghat: something like this? https://xkcd.com/1718/
[16:46] <kinghat> lel