[02:25] jrwren: anything with a fixed number of inodes is a bad way to go for large numbers of files/directories.
[02:25] jrwren: ZFS is pretty good in that kind of a scenario, I used to admin several large ZFS-based filers that had horrendous workloads.
[02:27] jrwren: when dealing with large numbers of files (I'm counting this as thousands of files and directories nested in thousands of directories), I think it's really useful to be using storage that lets you deal with it at a block level. ZFS and btrfs both have block-level send and receive, which greatly simplify backups.
[02:28] ZFS is gaining resumable sends/receives sometime this summer.
[02:28] btrfs is ok, but I wouldn't trust it with large production datasets.
[02:29] but I'd use btrfs before I used xfs.
[02:29] the fastest thing out there is still probably ext4, but there are so many other problems I see with ext4 that I gladly trade the performance for the things that something like ZFS gets me.
=== Zimdale is now known as Guest69892
[04:03] nodoubleg: i have a btrfs with many files too and it's worse than ext4, what am i doing wrong?
=== Guest69892 is now known as Zimdale
[15:26] jrwren: a COW filesystem could end up having problems if it's only on spinning rust.
[15:30] filesystems like ZFS and btrfs have more overhead than older FSes. Ceph takes this even further. Rebuilds in Ceph are very network- and CPU-intensive, much more so than in ZFS and btrfs.
[15:51] nodoubleg: yup. i think i may try ext2
[15:51] if performance is the only concern, then I'd actually recommend ext4.
[15:51] and by "the only concern" i mean you don't care about the data :-P
[15:52] bwaha
[15:52] i've never lost data on ext[234]
[15:52] that you know of.
[15:52] * nodoubleg won't trust a filesystem that doesn't checksum the data.
[15:53] bcachefs looks promising.
[15:53] jcastro was using that.
[15:53] but it's not really an fs, is it?
[15:57] the developer of bcache realized he was basically making a whole new filesystem, so started work on completing that as bcachefs. it's up as an alpha.
[15:57] jrwren: do you have an SSD handy?
[15:58] and, is your workload synchronous or async? read or write heavy?
[15:59] it sounds like you might need to throw an SSD at the problem. Either using an SSD with bcache, or using it as part of a zpool with ZFS.
[16:00] the filers I ran, 90% of their read IOs were served from memory or the read cache SSD (L2ARC)
[16:02] i've used ZFS on Linux to help absorb some of the brutal read IO that gitlab can do. This was in a VM that was backed by a massively oversubscribed NetApp filer. Writes were still slow, but git clones on common repos were speedy.
[20:01] well, given this is an ubuntu channel ...
[20:01] d'oh
[20:01] caught in backscroll again
[20:01] nm
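
To make the [02:27] point about block-level send/receive concrete, here is a minimal sketch of an incremental snapshot backup driven by zfs send piped into zfs receive. The dataset names (tank/projects, backup/projects), the snapshot naming, and the handling of the previous snapshot are illustrative placeholders, not anything from the conversation; it assumes the zfs CLI is installed and the script runs with enough privilege to use it.

    #!/usr/bin/env python3
    """Sketch: incremental ZFS snapshot backup via send/receive.
    Dataset names are hypothetical; adjust to your own layout."""
    import subprocess
    from datetime import datetime, timezone

    SRC = "tank/projects"    # hypothetical source dataset
    DST = "backup/projects"  # hypothetical destination dataset

    def snapshot(name: str) -> str:
        snap = f"{SRC}@{name}"
        subprocess.run(["zfs", "snapshot", snap], check=True)
        return snap

    def send_incremental(prev: str, curr: str) -> None:
        # zfs send -i streams only the blocks that changed between the two
        # snapshots; zfs receive replays that stream on the destination.
        sender = subprocess.Popen(["zfs", "send", "-i", prev, curr],
                                  stdout=subprocess.PIPE)
        subprocess.run(["zfs", "receive", "-F", DST],
                       stdin=sender.stdout, check=True)
        sender.stdout.close()
        if sender.wait() != 0:
            raise RuntimeError("zfs send failed")

    if __name__ == "__main__":
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
        curr = snapshot(stamp)
        # In real use the previous snapshot name would be recorded from the
        # last successful run; hard-coded here purely for illustration.
        send_incremental(f"{SRC}@previous", curr)

Because only changed blocks cross the wire, this stays cheap even when the dataset holds thousands of directories with thousands of files each, which is the scenario being discussed above.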
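
Similarly, for the [15:59] suggestion of throwing an SSD at a read-heavy pool, attaching it as an L2ARC cache device is a single zpool operation; the pool name and device path below are placeholders.

    #!/usr/bin/env python3
    """Sketch: add an SSD as a read cache (L2ARC) to an existing pool.
    Pool name and device path are placeholders."""
    import subprocess

    POOL = "tank"                               # hypothetical pool name
    CACHE_DEV = "/dev/disk/by-id/nvme-example"  # placeholder device path

    # Cache devices only hold copies of data already in the pool, so losing
    # the SSD degrades read performance but does not lose data.
    subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)
    subprocess.run(["zpool", "status", POOL], check=True)

The [16:00] remark about 90% of read IOs being served from memory or the L2ARC is exactly the effect this is after.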