[02:25] good morning
[02:48] Afternoon!
[10:30] Morning 🌅🌞
[12:42] Evening!
[13:52] .o/
[13:53] there's currently a zfs bug on jammy unsolved too
[13:53] might still wait a bit
[13:53] lotuspsychje: Is there a btrfs bug too?
[13:53] i didn't test btrfs on jammy yet
[13:53] is btrfs junk, or a linux equivalent of zfs?
[13:53] If not, then great! looks like you can work around the zfs issues by using btrfs
[13:54] It offers most of what zfs does, depends what you need out of zfs I guess
[13:54] linsux: for now, it depends on what you want to do with it
[13:56] i want to set up a home nas for photo/video, 100G per year
[13:56] i should probably get a backup?
[13:56] and use zfs? btrfs?
[13:56] linsux: then it doesn't really matter which you choose. (at raid1 level)
[13:56] think synology NAS has btrfs by default
[13:57] lotuspsychje: which sadly is not upstream btrfs
[13:57] raid0 is mirroring and raid1 is striping?
[13:57] additionally they only use it for checksumming the files, raid is done by lvm
[13:57] linsux: other way around
[13:57] if i use raid1 i gotta use zfs or btrfs, not ext4 or xfs, right?
[13:58] how likely am i to lose data on raid1?
[13:58] linsux: raid1 can be done via hardware, it doesn't need to be done by a file system
[13:58] linsux: you can go mdraid/lvm raid1 with ext4, but I would go the zfs/btrfs route, just because of the checksum/scrubbing feature
[13:58] so, zfs or btrfs?
[13:59] because nobody wants to read every single file to see if corruption happened (over the span of a year)
[13:59] raid1 is a 1:1 copy, so it depends how many drives are in the array and which ones go down
[13:59] raid6 exists BTW
[13:59] and hardware raid is always better than software right?
[14:00] linsux: Usually, depends on the raid controller. Some are just firmware for the config and software via drivers for the implementation
[14:00] linsux: Others are a full-on computer on a card that handles it
[14:00] complete with CPU and RAM
[14:01] linsux: hardware raid is basically dead
[14:01] as hw raid has quite a few disadvantages
[14:02] murmel: Oh?
[14:02] yes?
[14:02] murmel: Go on
[14:03] I am interested to find out why that is the case
[14:03] usually, hardware raid is much worse than zfs
[14:03] cbreak: hardware raid usually has a battery backup
[14:03] so?
[14:04] wez: the biggest one is, when your controller dies, you literally need to have the exact same one, and even then it's not guaranteed to work
[14:04] zfs works without a battery
[14:04] saves those bits
[14:04] precious bits
[14:05] wez: additionally, with the ongoing "convergence" in datacenters, it doesn't matter that the cpu is tilting just because it manages 90+ disks
[14:08] .o/
[14:09] at work, we've had data loss once because some hardware raid controller failed
[14:09] and the disks were completely unreadable without it, even with a replacement
[14:09] with zfs, this won't happen
[14:09] not even with btrfs or other lesser software raid
[14:10] I don't think there's a good reason to waste money on inferior hardware raid solutions nowadays, now that we have zfs and CPUs that are more than fast enough to handle it
[14:10] whatever "lesser software raid" means
[14:10] that's another thing, sw raid _is_ cheaper
[14:10] like mdraid, or the stuff apple / microsoft offers
[14:10] block level redundancy raids
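The three raid1 routes mentioned above come down to a few commands each. A minimal sketch, assuming two spare disks at the hypothetical device names /dev/sdb and /dev/sdc; the pool name tank is also a placeholder:

  # zfs: create a mirrored (raid1) pool; per-block checksums and scrubbing built in
  zpool create tank mirror /dev/sdb /dev/sdc

  # btrfs: raid1 for both data (-d) and metadata (-m) across the same two disks
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

  # mdraid + ext4: block-level mirroring only, no file-level checksums
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mkfs.ext4 /dev/md0

This is the checksum/scrubbing argument in practice: the mdraid/ext4 stack mirrors blocks but cannot tell which copy is correct when the two disagree, while zfs and btrfs can, because every block is checksummed.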
[14:10] hardware RAID controllers should just use the standard RAID format from linux
[14:10] JanC: they don't.
[14:11] nothing's standard about them as far as I can tell
[14:11] I meant "standard" as in a de facto standard :)
[14:11] JanC: how would they lock you in then? ^^
[14:12] and I mean someone should make such controllers, and it would be a good way to market them: never lose data that way
[14:12] I use zfs in many use cases, and I've only had data loss so far when I experimented around with alpha versions of zfs or similar
[14:12] cbreak: zfs also has other data loss situations, but very few
[14:12] JanC: why bother?
[14:13] murmel: haven't encountered those yet though :)
[14:13] cbreak: I don't know if they fixed it already, but there was an issue with zfs send/receive
[14:13] the hole birth one?
[14:13] or the one with encrypted raw sends?
[14:13] can't remember, give me a sec to check on gh
[14:14] but yeah, every fs has had its issues with data loss, the big question is rather which fs you trust enough with your data
[14:15] the answer should be: Make backups no matter which one you choose.
[14:16] cbreak: definitely
[14:16] zfs is super reliable, and defends against corruption caused by hardware more so than any other solution I know of, but it's not immune to problems such as bugs in zfs itself, or the house burning down.
[14:16] More than a hardware solution?
[14:17] heh, sure
[14:17] hardware raid can't even compete with btrfs
[14:18] i do quite like that btrfs brought a lot of firmware bugs to the surface
[14:22] https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2022.04%20Root%20on%20ZFS.html <- in case you care about zfs boot on ubuntu, in a more proper way
[14:22] (I recommend using zbm, and not splitting the root into separate /boot, /usr, ...)
[14:24] cbreak: btw, don't you have to disable the zfs module in the ubuntu kernel when using a different zfs version? (I can remember doing this on 20.04)
[14:24] I don't think you have to do that manually, but I've only used the dkms thing once, a long time ago
[14:25] on ubuntu, I usually stick with the shipped zfs
[14:31] I can remember I looked into it because 20.04 shipped with 0.8.6 and I wanted to see the newer 2.0 release (especially as I was building a new raid system)
[14:40] 0.8.3. I still use that on a pair of servers I control, works ok.
[14:44] cbreak: yeah I wanted to avoid the pool upgrade later down the road
=== luis220413_ is now known as luis220413
[17:16] Afternoon 😁☺️
=== EriC^^_ is now known as EriC^^
=== EriC^^ is now known as Guest7921
=== EriC^ is now known as EriC^^
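On the scrub, version, and pool-upgrade points from the conversation, a minimal sketch, again with placeholder names (the pool tank, the mount point /mnt/nas):

  # show which zfs userland and kernel-module versions are running
  zfs version

  # start a scrub and check its progress/result
  zpool scrub tank
  zpool status tank

  # btrfs equivalent, run against a mounted filesystem
  btrfs scrub start /mnt/nas
  btrfs scrub status /mnt/nas

  # with no arguments this only lists pools whose on-disk features are behind
  # the running zfs; 'zpool upgrade tank' would enable the newer features,
  # after which older zfs versions can no longer import the pool, which is
  # why people hold off on it
  zpool upgrade

Running scrubs periodically is what turns the checksums into actual corruption detection; on Ubuntu, the zfsutils-linux package ships a monthly scrub cron job.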