[02:25] <lotuspsychje> good morning
[02:48] <wez> Afternoon!
[10:30] <jailbreak> Morning 🌅🌞
[12:42] <wez> Evening!
[13:52] <wez> .o/
[13:53] <lotuspsychje> there's currently an unsolved zfs bug on jammy too
[13:53] <lotuspsychje> might still wait a bit
[13:53] <wez> lotuspsychje: Is there a btrfs bug too?
[13:53] <lotuspsychje> i didnt test btrfs on jammy yet
[13:53] <linsux> is btrfs junk, or the linux equivalent of zfs?
[13:53] <wez> If not then great! looks like you can work around the zfs issues by using btrfs
[13:54] <wez> It offers most of what zfs does, depends what you need out of zfs I guess
[13:54] <murmel> linsux: for now, it depends on what you want to do with it
[13:56] <linsux> i want to setup home nas for home photo/video, 100G per year
[13:56] <linsux> i should probably get a backup?
[13:56] <linsux> and use zfs? btrfs?
[13:56] <murmel> linsux: then it doesn't really matter which you choose. (at raid1 level)
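A minimal sketch of the two options murmel is weighing, assuming two hypothetical whole disks (/dev/sda, /dev/sdb) and a pool/label name of your choosing:

    # ZFS: a two-disk mirror (the raid1 equivalent)
    zpool create tank mirror /dev/sda /dev/sdb

    # btrfs: raid1 for both data and metadata on the same two disks
    mkfs.btrfs -L nas -d raid1 -m raid1 /dev/sda /dev/sdb

For a real NAS, stable /dev/disk/by-id paths are safer than /dev/sdX names, which can change between boots.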
[13:56] <lotuspsychje> think synology NAS has btrfs by default
[13:57] <murmel> lotuspsychje: which sadly is not upstream btrfs
[13:57] <linsux> raid0 is mirroring and raid1 is striping?
[13:57] <murmel> additionally they only use it for checksumming the files, raid is by lvm
[13:57] <murmel> linsux: other way around
[13:57] <linsux> if i use raid1 i gotta use zfs/btrfs, not ext4 or xfs, right?
[13:58] <linsux> how likely will i lose data on raid1
[13:58] <wez> linsux: raid1 can be done via hardware, it doesn't need to be done by a filesystem
[13:58] <murmel> linsux: you can go mdraid/lvm raid1 with ext4, but I would go zfs/btrfs route, just because of checksum/scrubbing feature
[13:58] <linsux> so, zfs or btrfs?
[13:59] <murmel> because nobody wants to read every single file to see if corruption happened (over the span of a year)
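The scrubbing murmel refers to reads every block and verifies it against its checksum, repairing from the mirror copy where possible. A sketch, assuming a zfs pool named "tank" and a btrfs filesystem mounted at /mnt/data (both names hypothetical):

    # ZFS: verify every block; "zpool status" reports progress and any repairs
    zpool scrub tank
    zpool status tank

    # btrfs: the same idea for a mounted filesystem
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data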
[13:59] <wez> raid1 is a 1:1 copy, so depends how many drives are in the array and which ones go down
[13:59] <wez> raid6 exists BTW
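wez's point is that raid can also live below the filesystem. A sketch with Linux mdraid, using hypothetical device names: a two-disk raid1 mirror carrying ext4, and a four-disk raid6, which stores double parity and survives any two disk failures:

    # block-level raid1 (mirror) with an ordinary filesystem on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0

    # raid6: double parity, minimum four disks
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[c-f]

Note this layer has no file checksums, which is why murmel still leans zfs/btrfs above.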
[13:59] <linsux> and hardware raid is always better than software right?
[14:00] <wez> linsux: Usually, depends on the raid controller.  Some are just firmware for the config and software via drivers for the implementation
[14:00] <wez> linsux: Others are a full on computer on a card that handles it
[14:00] <wez> complete with CPU and RAM
[14:01] <murmel> linsux: hardware raid is basically dead
[14:01] <murmel> as hw raid has quite a few disadvantages
[14:02] <wez> murmel: Oh?
[14:02] <murmel> yes?
[14:02] <wez> murmel: Go on
[14:03] <wez> I am interested to find out why that is the case
[14:03] <cbreak> usually, hardware raid is much worse than zfs
[14:03] <wez> cbreak: hardware raid usually has a battery backup
[14:03] <cbreak> so?
[14:04] <murmel> wez: the biggest one is, when your controller dies. you literally need to have the exact same one, and even then it's not guaranteed to work
[14:04] <cbreak> zfs works without battery
[14:04] <wez> saves those bits
[14:04] <wez> precious bits
[14:05] <murmel> wez: additionally, with the ongoing "convergence" in datacenters, it doesn't matter that the cpu is tilting just because it manages 90+ disks
[14:08] <wez> .o/
[14:09] <cbreak> at work, we've had data loss once because some hardware raid controller failed
[14:09] <cbreak> and the disks were completely unreadable without it, even with a replacement controller
[14:09] <cbreak> with zfs, this won't happen
[14:09] <cbreak> not even with btrfs or other lesser software raid
[14:10] <cbreak> I don't think there's a good reason to waste money on inferior hardware raid solutions nowadays, now that we have zfs and CPUs that are more than fast enough to handle it
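The portability cbreak describes comes from zfs keeping the raid layout in on-disk pool metadata rather than in a controller. A sketch of moving a pool (name hypothetical) to a replacement machine:

    # on the old machine, if it is still alive
    zpool export tank

    # on any machine with zfs: scan attached disks for importable pools
    zpool import

    # import by name, searching stable by-id device paths
    zpool import -d /dev/disk/by-id tank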
[14:10] <murmel> whatever "lesser software raid" means
[14:10] <murmel> that's another thing, sw raid _is_ cheaper
[14:10] <cbreak> like mdraid, or the stuff apple / microsoft offers
[14:10] <cbreak> block level redundancy raids
[14:10] <JanC> hardware RAID controllers should just use the standard RAID format from linux
[14:10] <cbreak> JanC: they don't.
[14:11] <cbreak> nothing's standard about them as far as I can tell
[14:11] <JanC> I meant "standard" as in a de facto standard  :)
[14:11] <murmel> JanC: how would they lock you in then? ^^
[14:12] <JanC> and I mean someone should make such controllers, and it would be a good way to market them: never lose data that way
[14:12] <cbreak> I use zfs in many usecases, and I've only had data loss so far when I experimented around with alpha versions of zfs or similar
[14:12] <murmel> cbreak: zfs also has other data loss situations, but very few
[14:12] <cbreak> JanC: why bother?
[14:13] <cbreak> murmel: haven't encountered those yet though :)
[14:13] <murmel> cbreak: I don't know if they fixed it already, but there was an issue with zfs send/receive
[14:13] <cbreak> the hole birth one?
[14:13] <cbreak> or the one with encrypted raw sends?
[14:13] <murmel> can't remember, give me a sec to see on gh
[14:14] <murmel> but yeah, every fs has had its issues with data loss, the big question is rather which fs you trust enough with your data
[14:15] <cbreak> the answer should be: Make backups no matter which one you choose.
[14:16] <murmel> cbreak: definitely
[14:16] <cbreak> zfs is super reliable, and defends against corruption caused by hardware more so than any other solution I know of, but it's not immune to problems such as bugs in zfs itself, or the house burning down.
[14:16] <wez> More than a hardware solution?
[14:17] <cbreak> heh, sure
[14:17] <cbreak> hardware raid can't even compete with btrfs
[14:18] <murmel> i do quite like that btrfs brought a lot of firmware bugs to the surface
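On cbreak's "make backups" point above: zfs send/receive (the feature murmel recalls bugs in) is the usual replication path. A sketch, with hypothetical pool and dataset names, of a full send followed by an incremental one:

    # snapshot, then replicate the snapshot to a second pool (local or over ssh)
    zfs snapshot tank/photos@2024-06-01
    zfs send tank/photos@2024-06-01 | zfs receive backup/photos

    # later: send only the changes since the previous snapshot
    zfs snapshot tank/photos@2024-07-01
    zfs send -i @2024-06-01 tank/photos@2024-07-01 | zfs receive backup/photos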
[14:22] <cbreak> https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2022.04%20Root%20on%20ZFS.html <- in case you care about zfs boot on ubuntu, in a more proper way
[14:22] <cbreak> (I recommend using zbm, and not splitting the root into separate /boot, /usr, ...)
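cbreak's "zbm" is ZFSBootMenu, a boot-environment-aware bootloader. A hedged sketch of the unsplit layout being recommended, with hypothetical pool/dataset names (the linked HOWTO has the full procedure):

    # one dataset for the whole root instead of separate /boot, /usr, /var
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
    zpool set bootfs=rpool/ROOT/ubuntu rpool

    # user data on its own dataset, so the root can be snapshotted and
    # rolled back without touching /home
    zfs create -o mountpoint=/home rpool/home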
[14:24] <murmel> cbreak: btw, don't you have to disable the zfs module in the ubuntu kernel when using a different zfs version? (I can remember doing this on 20.04)
[14:24] <cbreak> I don't think you have to do that manually, but I've only used the dkms thing once, a long time ago
[14:25] <cbreak> on ubuntu, I usually stick with the shipped zfs
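On murmel's question: Ubuntu ships zfs.ko inside its kernel packages, while zfs-dkms builds a module from a chosen OpenZFS release. A hedged way to check which one the running system actually uses:

    # userland and loaded kernel module versions, as zfs reports them
    zfs version

    # version of the module modprobe would load for the running kernel
    modinfo zfs | grep -iw version

    # whether a dkms build exists for the running kernel
    dkms status zfs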
[14:31] <murmel> I remember looking into it because 20.04 shipped with 0.8.6 and I wanted to see the newer 2.0 release (especially as I was building a new raid system)
[14:40] <cbreak> 0.8.3. I still use that on a pair of servers I control, works ok.
[14:44] <murmel> cbreak: yeah I wanted to avoid the pool upgrade later down the road
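The pool upgrade murmel wanted to avoid is one-way: enabling newer feature flags makes the pool unimportable by older zfs releases. A sketch, pool name hypothetical:

    # list pools whose supported feature flags are not all enabled yet
    zpool upgrade

    # enable all supported features on one pool; older zfs can no longer import it
    zpool upgrade tank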
[17:16] <jailbreak> Afternoon 😁☺️