02:25 <nodoubleg> jrwren: anything with a fixed number of inodes is a bad choice for large numbers of files/directories.
02:25 <nodoubleg> jrwren: ZFS is pretty good in that kind of scenario; I used to admin several large ZFS-based filers that had horrendous workloads.
02:27 <nodoubleg> jrwren: when dealing with large numbers of files (I'm counting this as thousands of files and directories nested in thousands of directories), I think it's really useful to use storage that lets you deal with it at the block level. ZFS and btrfs both have block-level send and receive, which greatly simplifies backups.
02:28 <nodoubleg> ZFS is gaining resumable sends/receives sometime this summer.
02:28 <nodoubleg> btrfs is ok, but I wouldn't trust it with large production datasets.
02:29 <nodoubleg> but I'd use btrfs before I used xfs.
02:29 <nodoubleg> the fastest thing out there is still probably ext4, but there are so many other problems I see with ext4 that I gladly trade the performance for the things that something like ZFS gets me.
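[The block-level send/receive workflow mentioned above can be sketched as follows. Pool, dataset, and host names (`tank/data`, `backup`) are hypothetical; this assumes ZFS is set up on both ends.]

```shell
# Snapshot the dataset, then stream the whole snapshot to a backup host.
zfs snapshot tank/data@backup-1
zfs send tank/data@backup-1 | ssh backuphost zfs receive backup/data

# Later runs send only the blocks changed since the previous snapshot.
zfs snapshot tank/data@backup-2
zfs send -i @backup-1 tank/data@backup-2 | ssh backuphost zfs receive backup/data
```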
=== Zimdale is now known as Guest69892
04:03 <jrwren> nodoubleg: I have a btrfs with many files too and it's worse than ext4; what am I doing wrong?
=== Guest69892 is now known as Zimdale
15:26 <nodoubleg> jrwren: a COW filesystem could end up having problems if it's only on spinning rust.
15:30 <nodoubleg> filesystems like ZFS and btrfs have more overhead than older FSes. Ceph takes this even further: rebuilds in Ceph are very network- and CPU-intensive, much more so than in ZFS and btrfs.
15:51 <jrwren> nodoubleg: yup. I think I may try ext2
15:51 <nodoubleg> if performance is the only concern, then I'd actually recommend ext4.
15:51 <nodoubleg> and by "the only concern" I mean you don't care about the data :-P
15:52 <jrwren> I've never lost data on ext[234]
15:52 <nodoubleg> that you know of.
15:52 * nodoubleg won't trust a filesystem that doesn't checksum the data.
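[On ZFS, the data checksums can be verified on demand with a scrub, which reads every block and compares it against its stored checksum. The pool name `tank` is hypothetical.]

```shell
zpool scrub tank        # walk the pool and verify every block's checksum
zpool status -v tank    # reports checksum errors and any affected files
```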
15:53 <nodoubleg> bcachefs looks promising.
15:53 <jrwren> jcastro was using that.
15:53 <jrwren> but it's not really an fs, is it?
15:57 <nodoubleg> the developer of bcache realized he was basically making a whole new filesystem, so he started work on completing that as bcachefs. it's up as an alpha.
15:57 <nodoubleg> jrwren: do you have an SSD handy?
15:58 <nodoubleg> and, is your workload synchronous or async? read-heavy or write-heavy?
15:59 <nodoubleg> it sounds like you might need to throw an SSD at the problem, either using an SSD with bcache, or using it as part of a zpool with ZFS.
16:00 <nodoubleg> on the filers I ran, 90% of their read IOs were served from memory or the read-cache SSD (L2ARC)
16:02 <nodoubleg> I've used ZFS on Linux to help absorb some of the brutal read IO that GitLab can do. This was in a VM backed by a massively oversubscribed NetApp filer. Writes were still slow, but git clones of common repos were speedy.
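[Attaching an SSD to an existing pool, as suggested above, is a one-liner in ZFS. Pool and device names are hypothetical; the device should be dedicated to this role.]

```shell
# Add an SSD as a read cache (L2ARC) to soak up read IO:
zpool add tank cache /dev/nvme0n1

# Or add it as a separate log device (SLOG) to absorb synchronous writes:
zpool add tank log /dev/nvme0n2
```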
20:01 <dzho> well, given this is an ubuntu channel ...
20:01 <dzho> caught in backscroll again

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!