nodoubleg | jrwren: anything with a fixed number of inodes is a bad way to go for large numbers of files/directories. | 02:25 |
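To see the problem being described here: a filesystem can run out of inodes long before it runs out of space, and on ext4 the inode count is fixed when the filesystem is created. A minimal check plus the mkfs-time knobs (device and mount names are placeholders):

```
df -i /srv/data                  # IFree hitting 0 means no new files, even with space left
mkfs.ext4 -i 4096 /dev/sdb1      # one inode per 4 KiB of capacity, for file-heavy workloads
mkfs.ext4 -N 50000000 /dev/sdb1  # or request an explicit inode count
```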
nodoubleg | jrwren: ZFS is pretty good in that kind of a scenario, I used to admin several large ZFS-based filers that had horrendous workloads. | 02:25 |
nodoubleg | jrwren: when dealing with large numbers of files (I'm counting this as thousands of files and directories nested in thousands of directories), I think it's really useful to be using storage that lets you deal with it at a block level. ZFS and btrfs both have block-level send and receive which greatly simplify backups | 02:27 |
nodoubleg | ZFS is gaining resumable sends/receives sometime this summer. | 02:28 |
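A rough sketch of the block-level replication being described, with made-up pool, dataset, and host names:

```
# Initial backup: snapshot, then stream the whole dataset to another machine.
zfs snapshot tank/projects@2016-05-01
zfs send tank/projects@2016-05-01 | ssh backup01 zfs receive backup/projects

# Later backups only send the blocks changed since the previous snapshot.
zfs snapshot tank/projects@2016-05-02
zfs send -i @2016-05-01 tank/projects@2016-05-02 | ssh backup01 zfs receive backup/projects

# btrfs has the same idea with read-only snapshots:
#   btrfs subvolume snapshot -r /data /data/.snap && btrfs send /data/.snap | ssh backup01 btrfs receive /backup
```

The resumable feature mentioned above shows up in later OpenZFS releases as `zfs receive -s` on the receiving side plus `zfs send -t <token>` to pick up an interrupted stream.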
nodoubleg | btrfs is ok, but I wouldn't trust it with large production datasets. | 02:28 |
nodoubleg | but i'd use btrfs before I used xfs. | 02:29 |
nodoubleg | the fastest thing out there is still probably ext4, but there's so many other problems I see with ext4 that I gladly trade the performance for the things that something like ZFS gets me. | 02:29 |
=== Zimdale is now known as Guest69892 | ||
jrwren | nodoubleg: i have a btrfs with many files too and it's worse than ext4, what am i doing wrong? | 04:03 |
=== Guest69892 is now known as Zimdale | ||
nodoubleg | jrwren: a COW filesystem could end up having problems if it's only on spinning rust; copy-on-write scatters rewritten blocks, and that fragmentation hurts when every read means a seek. | 15:26 |
nodoubleg | filesystems like ZFS and btrfs have more overhead than older FSes. Ceph takes this even further. Rebuilds in ceph are very network and CPU-intensive, much more than ZFS and btrfs. | 15:30 |
jrwren | nodoubleg: yup. i think i may try ext2 | 15:51 |
nodoubleg | if performance is the only concern, then I'd actually recommend ext4. | 15:51 |
nodoubleg | and by "the only concern" i mean you don't care about the data :-P | 15:51 |
jrwren | bwaha | 15:52 |
jrwren | i've never lost data on ext[234] | 15:52 |
nodoubleg | that you know of. | 15:52 |
* nodoubleg won't trust a filesystem that doesn't checksum the data. | 15:52 | |
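The practical payoff of checksumming is that the pool can be told to verify every allocated block; with a redundant vdev, bad copies are rewritten from a good one. For example, on a hypothetical pool named tank:

```
zpool scrub tank        # read every allocated block and verify it against its checksum
zpool status -v tank    # scrub progress, plus per-device read/write/checksum error counts
```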
nodoubleg | bcachefs looks promising. | 15:53 |
jrwren | jcastro was using that. | 15:53 |
jrwren | but it's not really an fs, is it? | 15:53 |
nodoubleg | the developer of bcache realized he was basically building a whole new filesystem, so he started work on completing it as bcachefs. it's out as an alpha. | 15:57 |
nodoubleg | jrwren: do you have an SSD handy? | 15:57 |
nodoubleg | and, is your workload synchronous or async? read or write heavy? | 15:58 |
nodoubleg | it sounds like you might need to throw an SSD at the problem. Either using an SSD with bcache, or using it as part of a zpool with ZFS. | 15:59 |
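For the bcache route, a minimal setup sketch (device and mount names are placeholders; the HDD becomes the backing device, the SSD the cache):

```
make-bcache -B /dev/sdb -C /dev/nvme0n1   # creates and attaches /dev/bcache0
mkfs.ext4 /dev/bcache0
mount /dev/bcache0 /srv/data
echo writeback > /sys/block/bcache0/bcache/cache_mode   # optional: cache writes too (default is writethrough)
```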
nodoubleg | the filers I ran, 90% of their read IOs were served from memory or the read cache ssd (l2arc) | 16:00 |
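The ZFS equivalent is attaching the SSD to the pool as a cache (L2ARC) device, and optionally as a log (SLOG) device for synchronous writes; pool and device names below are placeholders:

```
zpool add tank cache /dev/nvme0n1p1                     # read cache (L2ARC)
zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1   # mirrored SLOG for sync writes (optional)
zpool iostat -v tank 5                                  # watch how much read IO the cache device absorbs
```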
nodoubleg | i've used zfs on linux to help absorb some of the brutal read io that gitlab can do. This was in a VM that was backed by a massively oversubscribed NetApp filer. Writes were still slow, but git clones on common repos were speedy. | 16:02 |
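The chat doesn't say how that dataset was configured; purely as an illustration, dataset properties commonly tuned for a read-heavy git workload look like this (dataset name and values are assumptions):

```
zfs create tank/gitlab
zfs set atime=off tank/gitlab        # git walks huge numbers of files; skip access-time updates
zfs set xattr=sa tank/gitlab         # store xattrs in the dnode (Linux-specific)
zfs set compression=lz4 tank/gitlab  # effectively free; git objects are already zlib-compressed, so gains are modest
```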
dzho | well, given this is an ubuntu channel ... | 20:01 |
dzho | d'oh | 20:01 |
dzho | caught in backscroll again | 20:01 |
dzho | nm | 20:01 |