Which Linux filesystem is best in 2022?

  • Added 18. 06. 2024
  • Today I am exploring four well-known filesystems for Linux and putting each configuration to the test: Btrfs, ext4, ext4 without journaling, XFS and OpenZFS. All of them are tested on real hardware, running the latest versions in the Debian 11 repos. The benchmark tool I am using is iozone 3.493, the latest version of the software. You can find my benchmark scripts on Gitlab. I am running the 13 tests included in iozone to simulate different kinds of workloads and see how well each filesystem performs.
    This video is based on a question I received from one of my viewers (and on the fact that it's been a year since I updated the benchmarks). The main reason, though, is last Saturday's Linux Saloon, where some folks were recommending a particular filesystem and making unsubstantiated performance claims. As an engineer, claims without facts always make me suspicious; oddly enough, marketing sometimes uses that method to hype a product beyond its capabilities. So I wanted to know the truth: how well does Btrfs actually perform against the older filesystems in Linux?
    Hope you enjoy watching the video as much as I enjoyed making it.
    btrfs development website: btrfs.wiki.kernel.org/index.p...
    iozone3 website: www.iozone.org/
    Support me on Patreon: / djware
    Follow me:
    Twitter @djware55
    Facebook: / don.ware.7758
    Discord: / discord
    Gitlab: gitlab.com/djware27
    "Brightly Fancy" Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    creativecommons.org/licenses/b...
    "Militaire Electronic" Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    creativecommons.org/licenses/b...
    Werq by Kevin MacLeod
    Link: incompetech.filmmusic.io/song...
    License: filmmusic.io/standard-license
    Industrial Cinematic by Kevin MacLeod
    Link: incompetech.filmmusic.io/song...
    License: filmmusic.io/standard-license
    Music Used in this video
    "NonStop" Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 3.0 License
    #btrfs #zfs #filesystems
  • Science & Technology

Comments • 297

  • @danvideo2948
    @danvideo2948 Před 2 lety +131

    First, we should define what "best" means: reliability, data integrity, security and features, then comes performance.

    • @coolbean9880
      @coolbean9880 Před 2 lety +9

      Then it's good that the incredibly well-established and reliable options of XFS and non-journaling ext4 share second place in terms of speed, right behind the very feature-rich OpenZFS.

    • @orkhepaj
      @orkhepaj Před rokem +6

      Yep, this is like asking which is the fastest data structure... so pointless.

    • @guss77
      @guss77 Před rokem +6

      "best" is very context sensitive - Facebook uses BTRFS because it's "best" for their use, which is as local fast journalling and snapshotting filesystem that contains no critical data and isn't using RAID, so reliability and security isn't important for that use case, features and performance are much more important - at which BTRFS is very good.

    • @therealb888
      @therealb888 Před rokem +1

      @@guss77 What makes it best for them? What's the workload they're using it for?

    • @guss77
      @guss77 Před rokem +7

      @@therealb888 I'm not a Facebook engineer, but from what I've been told they're using snapshotting and remote data syncing (using btrfs-send). For horizontal scaling (i.e. many small, cheap machines) Btrfs is the only valid solution. ZFS is the only competitor that has all the features, but it can't beat Btrfs performance on small, cheap hardware, not to mention the licensing issues.
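
      A minimal sketch of that snapshot-and-send flow, for the curious (the paths and backup host name are made-up examples):

          # take a read-only snapshot, then stream it to another machine
          btrfs subvolume snapshot -r /data /data/.snapshots/data-2022-06-18
          btrfs send /data/.snapshots/data-2022-06-18 | ssh backup-host btrfs receive /backup
          # later snapshots can be sent incrementally against an earlier one: btrfs send -p <parent> <snapshot>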

  • @thegreatall
    @thegreatall Před 2 lety +87

    Hi DJ, we use various filesystems in our production systems. In our testing we found ZFS is extremely good in almost all cases (like you said), except for three:
    1. Deleting files. ZFS is EXTREMELY slow (I'm talking ~10x slower than ext4) when using a standard `rm -rf`.
    2. ZFS has non-trivial memory management, which can make debugging difficult. ZFS also runs garbage collection in the background, which might cause issues in some workloads.
    3. We ran into a ~0.05% case where ZFS would sometimes return a block of zero bytes when there was data there. This was extremely rare, but when we ran the same test ~50k times we had zero cases of this happening on ext4. (We did talk to some ZFS devs about it, but didn't get far, due to the difficulty of debugging.)
    Also, it's worth pointing out that by default ZFS will journal a write, then write the data into memory and write it to disk in the background (unless fsync is called).
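
    For reference, that async-by-default behaviour can be tuned per dataset; the pool/dataset name here is just an example:

        zfs get sync tank/data          # "standard" honours fsync only; other writes are ACKed from RAM
        zfs set sync=always tank/data   # force every write through the ZIL before it is acknowledged
        zfs set sync=disabled tank/data # never wait on the ZIL: fastest, least safe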

    • @Keechization
      @Keechization Před rokem +5

      don't forget that ZFS can't defragment and can throw a fit if it's >80% full

    • @lilith1504
      @lilith1504 Před rokem +5

      Nah, I couldn't say you were wrong. But ZFS consumes a lot of compute resources, so even on a medium PC it's still a problem. That said, if you're using it on the newest and strongest PCs, you will see its benefits. The truth still holds: many features require many resources.

    • @richardyao9012
      @richardyao9012 Před rokem +10

      @@Keechization That has not been true for years. The slow best fit allocator triggers at 94%.

    • @bertnijhof5413
      @bertnijhof5413 Před rokem +5

      @@lilith1504 Not true! I run Linux VMs on OpenZFS. My hardware is a Ryzen 3 2200G, 16 GB DDR4 (3000 MHz) and a 512 GB SP NVMe SSD (3400/2300 MB/s). My Ryzen is the second-slowest Ryzen ever (4C4T; 3.5/3.7 GHz). I boot Linux VMs in 6 to 12 seconds, and afterwards response times are immediate thanks to the ARC memory cache. The compute resources you need depend on the number of concurrent users and on the type of use cases.

    • @vipvip-tf9rw
      @vipvip-tf9rw Před rokem +6

      @@bertnijhof5413 I use an Athlon 3000G with TrueNAS Scale and a 16 TB striped pool; works well.

  • @albertogonzalez5114
    @albertogonzalez5114 Před 2 lety +11

    Thanks so much for this useful explanation! I have always used ext3 and ext4 with no issues, but the other options are also interesting to consider. Your analysis is very enlightening.

  • @lsatenstein
    @lsatenstein Před 2 lety +89

    Hi DJ
    Thank you for today's video.
    About two years ago, I tried ZFS for a year. I had no issues, and none were expected, since the use was "out of the box Ubuntu ZFS" installed on an NVMe device. Furthermore, the laptop battery backup meant I did not have to worry about crashes. I liked the check-summing and the ability to use the check-summing facility for recovery.
    Then a year or so later, you did a review of file systems which included ZFS on spinning hardware.
    In that review, ZFS came in near to last. It was trivially slower than XFS and was outperformed by the rest of today's list.
    I would switch to ZFS in a flash, but for two reasons. ZFS requires a full disk, so I cannot assign it to a partition; I install on real hardware and cannot dedicate a 1 TB NVMe SSD to ZFS.
    The other constraint is that the only Linux distribution to support ZFS as an installation option is Ubuntu. I happen to like some other distributions (I write code for RH, Debian, SUSE, Ubuntu, Arch et al., and I need hardware access to those environments), and my desktop system supports only one NVMe drive. Further, I am not prepared to recompile patches against the Linux kernel for every kernel update on the non-Ubuntu distributions.
    I do wish the ZFS license had a legal clause allowing it to be cross-licensed under GPLv3. If that were to happen, I could see a very large number of diehard GPLv3 software engineers start looking at ZFS in all ways, so as to make it even better. I would like to use ZFS for my system backups.
    I look forward to your videos, and you do get my `like` for the topics of interest to me. I wish you continued success and happiness in what you are doing.
    Leslie from Montreal
    PS. I am in my 82nd year, with more than 65 years in IT. I wish I could do the IT stuff for another 65 years.

    • @guilherme5094
      @guilherme5094 Před 2 lety +17

      I salute you sir.

    • @alexxx4434
      @alexxx4434 Před 2 lety +4

      That's the spirit! Wish I could keep the passion for IT till the old age.
      Also, "watch out, we have an older dinosaur than DJ Ware here!" ;)

    • @CyberGizmo
      @CyberGizmo  Před 2 lety +17

      Thank you Leslie, I really appreciate your comments, very well thought out. I use Debian 11 and install zfsutils-linux. Normally, yes, you have to give the whole device over to it, but you can also use loop mounts; I think I will add this to my next video to show how to do it. Btw, zfsutils-linux is the OpenZFS version; if it's version 2.0.x or higher, it is the merged version of ZFS which includes the code from BSD. And I too would like to do this IT stuff for another 65 years!
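
      A rough sketch of what that looks like; the device and file names are only examples, and ZFS will also happily take a partition rather than a whole disk:

          sudo zpool create tank /dev/nvme0n1p5            # pool on a partition
          truncate -s 20G /var/lib/zfs-test.img            # or a file-backed vdev for experiments
          sudo zpool create testpool /var/lib/zfs-test.img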

    • @wpyoga
      @wpyoga Před 2 lety +3

      IANAL, but even if the ZFS license were made compatible with GPLv3, it still wouldn't be compatible with Linux, because the kernel is GPLv2.
      Btrfs is really just a reaction to ZFS, not a proper response.

    • @lsatenstein
      @lsatenstein Před 2 lety +3

      @@wpyoga Hi William. As we could say, licensing be damned. There should be a cross-recognition of the BSD and GPL licensing, and let me then choose or merge my choice of software. Here is an example: Rocky Linux (the CentOS alternative) does not support Btrfs, but Fedora is a Btrfs distro. Ergo, I have to go through special efforts to force some partitions to be ext4 formatted.
      What I studied about ZFS is that it has checksums per block, and that is a good thing. I would hope that if a block had a single-bit error, recovery would be possible with the checksumming (an ECC-type operation).
      We are in the era of affordable 20-terabyte drives. I believe these devices already rely on ECC for integrity. Why not an ECC extension for ext4, XFS, Btrfs, etc.?
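
      For what it's worth, the detect-and-repair cycle described here is driven by a scrub, and actually repairing a bad block needs redundancy (a mirror, raidz, or copies=2); the pool name is an example:

          sudo zpool scrub tank
          zpool status -v tank   # per-device READ/WRITE/CKSUM counters, plus any files that could not be repaired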

  • @gwgux
    @gwgux Před 2 lety +6

    One of the best videos I've seen that goes into the differences of how these file systems perform! I use ext4 locally on the system (boot and storage) and ZFS on my NAS. ZFS is great, but the additional system overhead to run it just doesn't make too much sense for local storage devices in my opinion. I'd rather keep that operational overhead off my system and on a dedicated device.

  • @peppe540
    @peppe540 Před 2 lety +3

    Thanks Dj, very clear explanation and further additional information next to your previous videos. I like the structured approach, thanks!

  • @adrianstephens56
    @adrianstephens56 Před rokem +7

    Thank you for posting this. I'm a retired software engineer with a home lab. I've used root on zfs on my server and desktop environments for about 10 years. I value (and have needed to use multiple times) the ability to rollback after some catastrophe. And I value the ability to have several versions of the OS bootable on the same ZFS pool. A pro would probably shudder at the thought, but I'm allowed to do it wrong now.

  • @ThanhTran-uf6pw
    @ThanhTran-uf6pw Před 2 lety +32

    I've moved to Btrfs for my SSD drives, with very fast backup/restore via Timeshift.

    • @orkhepaj
      @orkhepaj Před rokem

      Because it doesn't copy anything that way.

    • @JMRVRGS
      @JMRVRGS Před rokem

      Wouldn't btrfs wear an SSD more for its constant writing?

  • @ovalwingnut
    @ovalwingnut Před 2 lety +2

    Wonderful info. Every time I stumble onto one of your videos I wonder why I haven't made it my "default" on YouTube, like wallpaper (if that were possible :). Thanks so much. You "are" the "Linux File System Whisperer" to me. Cheers.

  • @BindasBadshah
    @BindasBadshah Před rokem

    Thanks DJware, your tests were spot on and saved us from days of research on our own.

  • @EpizodesHorizons
    @EpizodesHorizons Před 2 lety +5

    Thanks for doing these tests DJ. Just one suggestion from a non-techie like me... charts with numbers are much easier to understand if they have worst-to-best arrows. Thanks.

  • @ChimeraX0401
    @ChimeraX0401 Před 2 lety +27

    I've been using Btrfs on my NAS server and so far it is working fine for me. What made me use Btrfs is its self-healing and snapshot features. The snapshot feature is very helpful since I can easily do a rollback whenever I mess something up, while the self-healing keeps file integrity and prevents bit rot even if I store data for a long time....

    • @travis1240
      @travis1240 Před rokem +7

      Good luck. Lost a whole drive of data to btrfs. There was nothing wrong with the drive, btrfs just corrupted itself and the recovery tools couldn't read it at all. Never had any issues with EXT4. I reformatted that drive with Ext4 and used it for five more years without issues. Maybe I'm an outlier, but I'm not trusting it again.

    • @user-mr3mf8lo7y
      @user-mr3mf8lo7y Před rokem +2

      I sincerely do not understand the concept of snapshots. Many things change every minute on my servers. How can I take snapshots every minute? It's just not practical. What am I missing?

    • @ChimeraX0401
      @ChimeraX0401 Před rokem

      @@user-mr3mf8lo7y Just snapshot the important subvolumes, like the root subvolume, and at the same time make a backup of those snapshots. You can do it before changing configuration on your server, or you can schedule a snapshot after the day ends (which is what I do) using a script; it is up to you when you snapshot your subvolumes....
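
      A minimal sketch of that kind of end-of-day snapshot script (it assumes / is a Btrfs subvolume and /.snapshots already exists; the keep-count and paths are arbitrary):

          #!/bin/sh
          # take a dated read-only snapshot of the root subvolume
          btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)
          # keep only the 30 newest (names sort by date; head -n -30 is GNU head)
          ls -d /.snapshots/root-* | head -n -30 | xargs -r -n1 btrfs subvolume delete

      Run it nightly from cron or a systemd timer and pair it with a backup of the snapshots to another device.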

    • @df3yt
      @df3yt Před rokem +2

      Btrfs is anything but self-healing... in my case it didn't even know it was corrupted. Even its fsck equivalent, btrfs check, said no errors, yet I had folder corruption: folders I could not even enter as root, and could not even REMOVE. I had to reformat the entire drive, and yes, that was in 2022. Btrfs is a fetus compared to other, more reliable filesystems.

    • @MH_VOID
      @MH_VOID Před rokem +1

      @@travis1240 how long ago/ to what version did you lose your drive?

  • @crzwdjk
    @crzwdjk Před 2 lety +9

    This is a pretty informative video, and I'm pleasantly surprised to see that XFS is still a good choice for a root filesystem. It had been my default choice for quite a long time but distro support was never great and for a while it wasn't clear if it would continue to be maintained at all. But maybe I should give it another look.

    • @CaptainDangeax
      @CaptainDangeax Před rokem

      XFS is the default file system on Red Hat, so it's seriously backed. Feel free to use it.

  • @AndrewErwin73
    @AndrewErwin73 Před rokem +7

    It is weird... I have been using Linux for almost 20 years, and BTRFS is the best file system I have used in all that time. Never once had an issue. The backups are flawless and have saved my life more than once. I know this is anecdotal and all... but I wish I could get some documentation on the problems?

    • @pietersmit621
      @pietersmit621 Před rokem +1

      Btrfs is great, the multi disk support just next level.

    • @pietersmit621
      @pietersmit621 Před rokem +1

      And compression, no i-node limit

  • @Little-bird-told-me
    @Little-bird-told-me Před 2 lety +11

    In my one-month-old Linux journey I have come across so many videos, but never a good one on the topic of the best file system until today. I thought ext4 was trash and Btrfs, with its Timeshift capability, was the best file system. Thanks for enlightening us.

    • @entelin
      @entelin Před rokem +4

      Filesystems are a complicated topic. Benchmarks are fairly meaningless when done without a specific use case in mind. ZFS's high numbers here are because it aggressively caches and uses a metric boatload of RAM for this and other tasks. You would never consider using ZFS on a system where you just want raw I/O throughput, like gaming, for example. I'm not a fan of Btrfs; it's convoluted to use and lacks some core features like parity RAID. Synology actually standardized on Btrfs, and they don't even use its native volume management, including for RAID 0/10. ZFS is fantastic, but the main reason you would use it is for the features it offers: snapshotting, zfs send/recv, encryption, compression, volume management, cloning, data safety, etc.; it's perfect for a fileserver. For a gaming system or general workstation that wants local high-performance storage on SSDs, XFS or ext4 is the best choice.

    • @vipvip-tf9rw
      @vipvip-tf9rw Před rokem

      @@entelin gaming is not raw io, it caches all files in ram

    • @entelin
      @entelin Před rokem

      @@vipvip-tf9rw That varies significantly from game to game. Some (like the one I'm working on actually) do in fact load everything at startup and basically never significantly touch the disk again. However for many games where performance is going to be a real concern they have far more assets than your computer has ram so they can have varying degrees of sophistication on how they stream content off the disk. In some more intelligent cases it's relatively minor, textures may pop in a little slower for example. In most cases though you'll see it in load times, zoning, whatever. However none of that is even the main reason you wouldn't use ZFS for a gaming use case, that's ram consumption. ZFS is in many ways sort of a cross between a database and a filesystem, and it will eat as much ram as you're willing to give it. I use ZFS on my server, some client servers, and a volume I use for development work on my workstation, I use zfs send and all that fun stuff for offsite backups. However none of ZFS's strengths are things desired for gaming.

    • @orkhepaj
      @orkhepaj Před rokem

      just use ntfs

    • @orkhepaj
      @orkhepaj Před rokem

      @@vipvip-tf9rw nope , depends on the game

  • @johnpvaldez99
    @johnpvaldez99 Před rokem +2

    This is great content! I would suggest, if you can, displaying the graph slides on the full display as you used on the closing slide. As I mentioned, great content; it was just hard to see when switching between the different sizes on smaller screens.

    • @FlorinArjocu
      @FlorinArjocu Před 7 měsíci

      +1 on that. At least we can zoom on phones, otherwise it would be quite hard to read.

  • @EvanCarrollTheGreat
    @EvanCarrollTheGreat Před 10 měsíci

    First, this is the best information on this subject I've seen! Great job! Rather than saying "the defaults", you should document what the default options are by outputting the result of `mount` on the volume you're benchmarking. I would love to see this updated for Debian 12, though, which has discard=async on Btrfs. It almost seems like ZFS is so massively faster that something fishy is happening there -- like noatime.
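
    Agreed; for anyone repeating the tests, the effective options are easy to capture (the mount point and dataset names below are examples):

        findmnt -no FSTYPE,OPTIONS /mnt/bench                  # the options the kernel actually applied
        zfs get atime,compression,recordsize,sync tank/bench   # ZFS properties that act like mount options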

  • @CMD_Line
    @CMD_Line Před 2 lety +13

    I literally use zfs for everything and ext4 for boot. So interesting what you talked about there, I shall have to do further reading. 👍🏼

  • @TheCzele
    @TheCzele Před 2 lety +17

    You never know when those Btrfs snapshots that I never take might come in handy.

    • @CyberGizmo
      @CyberGizmo  Před 2 lety +8

      Or a good backup strategy :D

    • @alexxx4434
      @alexxx4434 Před 2 lety +1

      They are quite useful for rolling back bad system updates.

    • @catchnkill
      @catchnkill Před 2 lety +8

      @@CyberGizmo Rollback is different. On many occasions you do not want to do a restore from backup. For example, you do a sudo pacman -Syu in Arch and it farts up; a rollback is more handy than a full restore from backup. Btrfs is severely underestimated.

    • @a.accioly
      @a.accioly Před 2 lety

      Install openSUSE. It will create snapshots for you on every update and most admin tasks. It has saved my bacon more than once.

    • @aziztcf
      @aziztcf Před 2 lety

      @@a.accioly No need to change your whole OS for that. I've got that, and paired with grub-btrfs you can simply boot into an older snapshot, which is neat.

  • @dipi71
    @dipi71 Před 2 lety +9

    What's frequently missing in fs comparisons is memory consumption, energy consumption, CPU load, number of running fs-related processes and threads and amount of logging (dmesg, journalctl).
    From almost 30 years of personal and anecdotal experience, I'd declare ext4 as the clear winner if you include above metrics into your decisions.

    • @FlorinArjocu
      @FlorinArjocu Před 7 měsíci +1

      Also the partitioning and recovery software compatibility/support. I had issues in the past and these are high priorities for me. It makes no sense to go to a faster one but risk my data, which is the most important part of storage on a local computer.

  • @sagan666
    @sagan666 Před 9 měsíci +1

    I've been using ZFS since '05, back when Solaris was cool (thanks, Sun). It was really slow back then, but the features made it worth using. It's great to see it being used so widely these days, and beating a lot of the competition in the FOSS world.

  • @abobader
    @abobader Před 2 lety

    Great video as always DJ. Well done!

    • @CyberGizmo
      @CyberGizmo  Před 2 lety +1

      Good to see you abobader and thank you

  • @logan225
    @logan225 Před rokem

    Exactly what I was looking for! Subbed!

  • @DaniloMussolini
    @DaniloMussolini Před 2 lety +1

    Thanks for the video. ZFS is just an awesome piece of software.

  • @gjcarter2
    @gjcarter2 Před rokem +1

    I started using BTRFS in 2021 as a backing store for my Amanda backups. So far I haven't had any issues.
    That being said I won't use it in production daily because of some of the issues still outstanding with the file system, including the issue with the layout: It probably will have to be changed.

  • @user-mr3mf8lo7y
    @user-mr3mf8lo7y Před rokem +2

    From experience, I would like to share: I used ReiserFS in (heavy) 24x7 production environments for call centers for many years and never had any issue. I wish the developer were not in such bad circumstances.

  • @Mythologos
    @Mythologos Před rokem +2

    I'm in the RHEL stream and I noticed that small file transfers were taking forever; I figured it was Btrfs, so thanks for explaining how/why. It's a shame, because I really love Fedora and Alma as systems, but Btrfs is ridiculously gigantic and slow as molasses.

    • @GoatzombieBubba
      @GoatzombieBubba Před 10 měsíci +1

      BTRFS is fast on my Gen 4 PCI-E NVME M.2 SSD.

  • @unfa00
    @unfa00 Před rokem +9

    Thanks for this benchmark! I personally use Btrfs for extra features like snapshots and compression. I would probably use ZFS instead if it were in the mainline kernel, but I also find volume management on ZFS more complicated than on Btrfs, so for home use I prefer Btrfs.
    I have been using bcache a bit to combine SSD speed with HDD capacity, and I've heard it's possible bcachefs will be merged into the mainline kernel this year. I'd love to see you tackle that. Surely it's far from mature at this point, but it could be interesting to see how it compares right now.
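
    For anyone curious, compression on Btrfs is just a mount option, and existing files can be (re)compressed with defragment; the UUID and subvolume name below are placeholders:

        # /etc/fstab
        UUID=xxxx-xxxx  /home  btrfs  subvol=@home,compress=zstd:3,noatime  0  0
        # recompress what is already there
        btrfs filesystem defragment -r -czstd /home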

  • @rjmaas
    @rjmaas Před rokem

    In the past 5 years I have used Btrfs mostly in a RAID-1 setup. Except for one issue that happened early on, it has been a very smooth journey so far. Two features I really appreciate are the checksums for both metadata and actual data, which allow me to easily verify the integrity of the volume, and second, snapshots (which can be read-only). The latter is great for creating backups of a volume that is updated continuously, e.g. a database. One thing to keep in mind: make sure to keep plenty of empty space on the volume. Never used RAID-5 / RAID-6.

    • @bluehatguy4279
      @bluehatguy4279 Před rokem

      I've used BTRFS a bunch in the past for RAID 5 and 6. I just feel like when BTRFS works, it's the best thing in the world, and when BTRFS doesn't work, it's the worst thing in the world.

  • @adam872
    @adam872 Před rokem +2

    As a long-time NetApp user, ZFS is the choice for me. It has all the features I want and acceptable, in some cases exceptional, performance. I would also be happy to use XFS, having built up a measure of trust in it from my days running SGI machines. The performance of ext4 without journaling would concern me a bit, because for any data I consider important I would have journaling enabled.

    • @jeffspaulding9834
      @jeffspaulding9834 Před rokem +1

      It was the other way around for me - I used ZFS on FreeBSD for years before I started a job that needed occasional NetAPP work. No one at the office knew anything about NetAPP and we didn't have a support contract. NetAPP's similarity to ZFS made it easy to pick up and learn.

  • @ringoschubert4966
    @ringoschubert4966 Před rokem +4

    For me, as a sysadmin, XFS is simply useless. It doesn't support online resizing, and you can't shrink it (in place) at all. As a root fs it's also not a good choice, because you can't do a read-only check and repair from a running system. These flaws can lead to extensive downtimes, and that (in a production environment) is simply unacceptable.

  • @Hfil66
    @Hfil66 Před rokem +5

    I have tried a number of filesystems (Btrfs, ext, ZFS, NILFS), but performance was only ever a small part of my selection criteria. My root filesystem has always been ext4 with journaling (it was ext2 and ext3 in the past, before ext4 existed), but for archiving systems I like the ability to snapshot and use RAID (ZFS and Btrfs can do both, and NILFS has continuous snapshotting but is otherwise functionally rather limited). I usually keep daily snapshots over several months, but with Btrfs once you get over about 12 snapshots the performance falls off a cliff (not just slightly slower, it becomes painfully slow), which is why I am now using ZFS on everything but root. If I want more performance then I can simply use it with RAID 10 or RAID 50.

    • @RonWolfHowl
      @RonWolfHowl Před rokem

      Interesting. Any more info about the BTRFS performance issue? Is there a tracked issue about it, for instance?

    • @Hfil66
      @Hfil66 Před rokem

      @@RonWolfHowl This was several years ago, but I was finding that as I added ever more snapshots it was taking longer and longer to mount the volumes or to create new snapshots (if I recall correctly, ordinary reads and writes did not suffer as much; it was just the manipulation of sub-volumes that got very slow). At the time I looked around and found references to limits on the number of sub-volumes that share reflinks (and if you are taking snapshots with not many changes between them, most files end up with many, many reflinks pointing to the same data).
      Quickly looking around now, I see one person suggesting the recommended limit for the number of snapshots should be in the tens, but at present I cannot find anything more definitive than hearsay.
      If I can locate something more definitive I will try to remember to post it back here.

  • @bkovacs7
    @bkovacs7 Před 2 lety +2

    Would XFS be just as good as ext4 on a desktop workstation with a 500 GB SSD, seeing how SGI used it as a desktop OS on IRIX?

  • @trionghost
    @trionghost Před rokem +1

    I was one of those who tried Btrfs when it had no recovery tools at all, lost some data, and forgot about it for a long time. Then I discovered Timeshift, which integrates with GRUB and works with Btrfs snapshots. So now I use Btrfs as the root FS and ZFS for everything else. It just saves me a huge amount of time.

  • @treyquattro
    @treyquattro Před 2 lety +1

    I use ext4, with SSDs. I'm researching ZFS to use in a more sophisticated volume set or RAID (maybe combined, still researching...). For a single-user system Ext4 works well enough and has the benefit that it is reliable. Journaling and swap on a single SSD have me somewhat concerned although disk stats seem to show things are OK at the moment

    • @big0bad0brad
      @big0bad0brad Před rokem +1

      I think any modern SSD that does write balancing would pretty much take a directed effort to cause damage by writing (or a workload that just writes a whole ton). You have to do the math on what your SSD claims for total allowable writes and work from there. But I think for the vast majority of people, the worries are more for older or cheap SSDs that aren't managing the flash correctly.

  • @greycell2442
    @greycell2442 Před rokem

    I love Btrfs, but I found out it can fragment platters, so I use ext4 for those. I'm wondering if LVM plays well with Btrfs to reduce fragmentation, as a set-size container, or if it can still fragment inside a logical volume. The snapshots are too handy. My HDDs at the house are like long-term cold storage, but I pool media there. I'm not knowledgeable enough about how LVM works versus Btrfs; I like LVM, though, so I can move those if needed. My deal with Btrfs is not performance, but people should realize it's not a magical band-aid for various kinds of data integrity loss. I only use it on Manjaro for reversals? Not necessarily a good reason. You have to go through data recovery procedures to realize the downfalls of any fs or LVM.

  • @anarcho.pacifist
    @anarcho.pacifist Před 6 měsíci

    Great video! F2FS remains for me the best filesystem for SSDs: great performance overall, especially when working with many small files (similar to ReiserFS for HDDs).

    • @robervaldo4633
      @robervaldo4633 Před 6 měsíci

      that's interesting, I'll look into that, I use f2fs only for removable storage, like SD cards and thumbdrives, because those have more resilient blocks in the FAT region, so when not using (ex)FAT as originally intended, one should care about some wear leveling

  • @Deezter16
    @Deezter16 Před rokem +2

    I get that this is a Linux filesystem test, but I would have loved to know how these compare to FAT, NTFS and exFAT. Sure, Btrfs might be kind of slow compared to the other ones, but if it's still better than NTFS it's still an improvement over what most are used to.

    • @TomAtkinson
      @TomAtkinson Před rokem

      I just found out exFAT does not support permissions on Linux, thus all files are owned by root on this drive. I'm going to format it to XFS since it is spinning rust, and I like the idea of Apple and Red Hat supporting it, and its anti-bitrot features.

  • @r_j_p_
    @r_j_p_ Před 2 lety +8

    Another strength for xfs is large numbers of files in a directory, which happens in science experiments.

    • @Anonymous______________
      @Anonymous______________ Před rokem +5

      Yup, I work in an environment where some of our directories have 100K files of 4 GB each. ext3/4 and ZFS seem to choke in this situation.

    • @orkhepaj
      @orkhepaj Před rokem

      lame design , small files should be merged

    • @orkhepaj
      @orkhepaj Před rokem

      @@Anonymous______________ what is that chinese surveillance system ? bad design

    • @samiraperi467
      @samiraperi467 Před rokem

      @@orkhepaj High energy physics would be my guess.

    • @angeldude101
      @angeldude101 Před rokem

      What about large number of subdirectories? Is that the same kind of situation? And how many files are we talking about, roughly? I have a very large directory that this seems like a good fit, but I'd just want to make sure.

  • @petersilva037
    @petersilva037 Před rokem

    I went to the GitHub site with your benchmarks... All I found was a README that says you used iozone... it would be good to know the actual arguments to iozone for each test.

  • @amortalbeing
    @amortalbeing Před 11 měsíci

    this was great thanks a lot. ❤

  • @fremenarrakis2616
    @fremenarrakis2616 Před 2 lety +10

    Hello, it would be nice to have HAMMER2 included in the list, even if it's not possible to use it on Linux. I agree with your results, but sometimes fs preference is kind of religious; in the days of ext2 I was using ReiserFS and everybody was against my decision, but that's another story. I think that when deciding on a fs you should take into consideration the resources needed and the features offered.

    • @DasIllu
      @DasIllu Před rokem

      ReiserFS, now that is something I last heard about almost 20 years ago.
      People hyped it, hated it, loved it, mostly all simultaneously. At the time I was just ditching SuSE for Debian and did not want to experiment with something I didn't fully understand, so I held on to ext2.

  • @nicholasreynolds6609
    @nicholasreynolds6609 Před rokem +1

    What would you recommend for a Synology that would prevent bitrot and would allow different sized drives? I was thinking of using SHR w/ BTRFS, but saw there was a major performance hit, due to all of the extra features built into it. I just don't think that once I get it, I will want to drop another 1,000 on drives any time soon and might need to piecemeal my way up to higher storage capacity. Any help would be appreciated!

    • @ethograb
      @ethograb Před rokem +1

      I run a small home NAS with some mixed drives in RAID 1. I really like Btrfs because it works really well for what I need; keep in mind speed is a benefit for me, not a requirement.
      Long-term storage is good on Btrfs, just make sure you also have another device you're backing up to. For my backups I just do incremental tar archives and squash my archives every once in a while.

  • @martixy2
    @martixy2 Před rokem

    Btrfs has features, though:
    transparent compression, good TRIM support for SSDs and other optimizations, auto-defrag or auto-dedupe (but possibly not both, at least with bees).
    Copy-on-write and everything enabled by it are killer features for some use cases.
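
    And when CoW gets in the way (VM images, databases), it can be switched off per directory; the paths below are examples, and +C only affects files created afterwards:

        chattr +C /var/lib/libvirt/images                  # new files here are created nodatacow
        sudo mount -o remount,discard=async,autodefrag /   # async TRIM and autodefrag as mount options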

  • @japrogramer
    @japrogramer Před 2 lety

    Would using btrfs on a laptop affect battery life because of the copy on write?

  • @dezmondwhitney1208
    @dezmondwhitney1208 Před 2 lety

    Great Video. Thank you D.j.

  • @geoffreyrenemoiens3089
    @geoffreyrenemoiens3089 Před 2 lety +2

    Thank you, good job :)

  • @mpdavis731
    @mpdavis731 Před rokem

    I use xfs on root on NVMe, and ZFS for data mirrored on 2 HDD. x86_64 and Arch. Only for basic usage, running win11 qemu VM for work. Ryzen 5600G, 64GB RAM, so RAM isn't an issue, and I do have compression on for ZFS, no encryption or dedup.

  • @xuedi
    @xuedi Před rokem

    Btrfs is great, but the native RAID is super slow; Btrfs on top of block-level RAID (mdadm) is awesome for its features: file decoupling for Docker and snapshots for OS updates with clean rollback, as in Fedora Silverblue...

  • @JayantBB78
    @JayantBB78 Před rokem

    For the last 3 years I have been experimenting with different file systems. I 100% agree with your conclusion. 16:54

  • @gjermundification
    @gjermundification Před rokem

    Does it make sense to test ZFS on less than 8 drives?

  • @lsatenstein
    @lsatenstein Před 2 lety

    It seems to me that since NVMes were introduced, the major distros do not bother to create separate /home partitions. I wonder why that is? I do find one advantage to having a separate /home: a refresh of a distro, or a system upgrade, lets me keep my address books, email, Firefox profile and Dropbox without having to re-register these applications and rebuild.

    • @orkhepaj
      @orkhepaj Před rokem

      Hmm, they don't expect you to reinstall the OS.
      What I don't like is having to decide how much space the OS part will take.

  • @ernestuz
    @ernestuz Před rokem

    Top-notch, thank you.

  • @reinoud6377
    @reinoud6377 Před rokem +1

    As for performance, I think the reason ZFS is doing better is because there is too much memory available. Put it under more memory pressure by running big programs and it will drop significantly.

  • @androth1502
    @androth1502 Před rokem

    would it be prudent to build a system that has ext4 as the root file system and zfs for home and other mount points? [if yes, do you already have a video that shows building a system with this process?]

    • @CyberGizmo
      @CyberGizmo  Před rokem +1

      Hi Androth, I have my production do-it-yourself NAS running an ext4 file system for /root, /var, /tmp and /home. I put the ZFS on its own mount, and I forgot to film it. It was when ZFS 2.0 came out and was moving from FreeBSD to Linux... it would have made a good video, but oh well, maybe for ZFS 2.2 :)

  • @tejing2001
    @tejing2001 Před rokem

    I use btrfs, but not because I think it's fast. I just want a filesystem that's in the mainline linux kernel and has snapshots. Only one meets those criteria last I checked. I run it over a separate raid layer, of course, due to the write hole.

  • @CMDRSweeper
    @CMDRSweeper Před 2 lety

    I am ZFSing everything that talks to hardware these days, the feature set is just too tempting.
    But virtual machines, they stay EXT4, I dunno, out of all of them they have felt snappier with EXT4 than any other file system when I have tried it, but it may be a placebo thing.

  • @mcfd
    @mcfd Před 2 lety

    Can I republish this video to other platforms? (Access is restricted to places where youtube is not accessible)

  • @THE16THPHANTOM
    @THE16THPHANTOM Před rokem

    Gonna have to look up what journaling does, and why turning it off gives better results and yet it is on by default.

  • @Froggie92
    @Froggie92 Před rokem

    @djware
    You mentioned an AnandTech link @2:09 - do you have that?
    I'm troubleshooting a system and doing some research.
    I've tried my best to comb the comments, but to no avail,
    and also did a few Google searches and nothing jumped out at me.
    Thank you!

    • @CyberGizmo
      @CyberGizmo  Před rokem +1

      Hi Tommy, unfortunately it looks like they have rolled the article off; however, Ars Technica covers similar issues: arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/
      Hope this helps you.

    • @Froggie92
      @Froggie92 Před rokem

      ​@@CyberGizmo
      you should be applauded for your service - i am blown away by a response within 20 minutes!
      i hope it does too 🙇🙇‍♂🙏

  • @Klffsj
    @Klffsj Před 2 lety +1

    Thanks for this video! I'd be really curious how root on ZFS compares to ext4 when using smaller record sizes (which should be optimized for the smaller files, which are common in the OS). Even if OpenZFS is still slower at OS reads and writes, I might assume that the speed for small files is relatively inconsequential, considering that they are small.
    OpenZFS also offers a lot of other benefits for the root FS other than disk performance, particularly regarding system snapshots and optimized memory loading/unloading with the ARC. Obviously, this degree of complexity makes the question somewhat subjective (to user needs) and difficult to test, but it's something to think about.
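
    The small-record-size idea is cheap to test because recordsize is a per-dataset property; the pool/dataset names here are only examples:

        zfs create -o recordsize=16K -o compression=lz4 rpool/ROOT
        zfs get recordsize,compression rpool/ROOT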

  • @andibiront2316
    @andibiront2316 Před rokem

    I love ZFS, and I use it in my homelab with at least 40 VMs running. But you should have forced sync writes for benchmarking. Your tests show ZFS writing faster than the drive can actually write, and that's because ZFS will ACK your writes in RAM before destaging them to disk. If your PC crashes with writes in flight you'll have corruption. My VM datastores have sync=always enabled, with two NVMe drives for the intent log, because once you disable async writes performance drops significantly.
    I have no issue with caching reads, on the other hand. If you have memory to spare, the FS should cache reads in memory. No problem with that, and it's impossible to corrupt data with reads (well... maybe a bit flip on non-ECC RAM).
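
    The setup described above would look roughly like this (the pool, dataset and device names are examples):

        zfs set sync=always tank/vmstore                      # every write waits on the ZIL
        zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1   # mirrored SLOG so the ZIL lives on fast NVMe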

  • @rizkyadiyanto7922
    @rizkyadiyanto7922 Před 2 lety +10

    Thanks for the benchmark. I think you should include NTFS too next time.

    • @CyberGizmo
      @CyberGizmo  Před 2 lety +26

      I might look into that...although it would mean polluting my linux environment with Microsoft code LOL

    • @UltraZelda64
      @UltraZelda64 Před 2 lety +1

      Meh... I haven't touched a Windows file system in Linux in probably over a decade. Personally I'd rather see how JFS holds up against the others.

    • @rizkyadiyanto7922
      @rizkyadiyanto7922 Před 2 lety +2

      @@UltraZelda64 Windows hasn't been my main OS for at least 3 years either, but I still have an NTFS partition, because it just works and there's no need to change it to ext4 or something. I am sure many people coming from Windows are the same.

    • @LivingLinux
      @LivingLinux Před 2 lety +4

      @@rizkyadiyanto7922 I can see reasons when you dual-boot, but other than that, NTFS is legacy on Linux for me.

    • @big0bad0brad
      @big0bad0brad Před rokem

      Considering this is discussing linux filesystems, I can say NTFS on linux is completely borked. Try copying a large fileset into an NTFS volume with the fuse driver, I dare you.

  • @echoptic775
    @echoptic775 Před 2 lety +1

    Does filesystem speed affect boot time and overall performance, for example on Linux? Would it boot faster if you use ZFS vs ext4? Also, why do people use ext4 when it's the slowest? I mean, I use it too and I didn't have any performance issues.

    • @Klffsj
      @Klffsj Před 2 lety

      I'm not too sure about boot performance, but I suspect it's not bottlenecked by disk I/O, so the file system probably doesn't matter if you're using one of the ones listed in this video.
      ext4 is considered the best for installing your OS on. Compared to XFS, it's better for small filesystems and root file systems. (I believe DJ Ware mentioned that at one point in this video.) Compared to Btrfs, it's far more stable, as Btrfs's stability concerns were mentioned several times throughout the video. And, according to this video, it's faster than ZFS for your root filesystem because of ZFS's slow fwrite and fread performance. Also, root on ZFS isn't exactly for the faint of heart, as it's not well supported by GRUB, the most popular boot loader. Ultimately, I'd personally recommend using ext4 for your OS partition/drive and ZFS or XFS for data partitions/drives.
      Please don't take my words as law on any of this; I'm trying to reproduce facts as well as I remember them, which may not be that accurate. Personally, I choose to use ZFS as much as I can, even for root, because of its generally impressive performance and incredible features. Plus, if OpenZFS improves its fread and fwrite performance, then all I'll have to do is update to be using the best file system for my root filesystem. But there is a fairly steep learning curve to using it, especially if you use its more advanced features, and especially if you install your root file system on it. That degree of customization and complexity is about what I want from my computer, but it's not for everyone; you'll have to do your research.
      Hope this helps! Sorry for the long walls of text...

    • @echoptic775
      @echoptic775 Před 2 lety +1

      @@Klffsj great answer tnx

  • @old486whizz
    @old486whizz Před rokem

    I am extremely surprised at the ext4 and journaled ext4 read speeds... journaling really is for writing. Do we think that's actually down to the access time being updated on the inode? I know I generally set "noatime" since atime is useless for me, but still...

    • @orkhepaj
      @orkhepaj Před rokem

      but still what?

    • @old486whizz
      @old486whizz Před rokem

      @@orkhepaj "but still" goes back to what I've already said - journaling shouldn't really be affecting read speed with ext4. Journaling is really for updating files (data).. a simple datestamp on an inode being updated shouldn't be going through the journal.

    • @orkhepaj
      @orkhepaj Před rokem

      @@old486whizz dunno ,probably everything goes thru it or at least checks it

  • @japamax
    @japamax Před 2 lety

    I have used Btrfs since its 2013 debut. It's sometimes slow and erratic; sometimes I needed to use "btrfs check --repair" because the fs screwed up due to 1 to 5 bad sectors on the disk.
    For the last 4 or 5 years I have used ZFS on a problematic external Seagate 7200 1 TB drive. No errors, no issues, nothing to report.
    I used XFS and ext4 too. I used F2FS on my SSD root, but after some tests I decided to transfer everything to ZFS 3 months ago.
    ZFS is better than F2FS. It seems fastest, but the snapshots are what make it incredibly comfortable. I use it with compression and no deduplication; my used space shrank to 50%. I also decided, even though it was not easy, to use the SSD for caching and logging for the other ZFS HDDs.
    ZFS is not easy with a Linux root (you need a USB-key distro with ZFS drivers, and you need to think about the SSD partitioning before using it), but for me it's incredibly comfortable. The ZFS data HDDs are great with ZFS mirroring. You can change every filesystem property on each dataset (logical partition); for example, you can change the recordsize property for a database or virtual disk image dataset.
    Scrub, compression, deduplication (sometimes), recordsize, snapshots and backups are safe and lovely to use on a ZFS filesystem.

  • @jhonnythejeccer6022
    @jhonnythejeccer6022 Před 2 lety +1

    ZFS is more advanced than any other fs I have worked with (though this only includes ext3, ext4, NTFS, FAT, HFS+, XFS and APFS). Snapshots, compression, encryption, checksums, snapshot/dataset send and receive, offline data repair even without decryption, raidz and now draid. I have found nothing like it.
    The only flaw, which is a bit bigger for everyday usage, is that there is literally no way to defragment it except for sending and receiving the whole dataset, for which it needs to go offline. For a root fs this is terrible, so even though I would love to be able to use snapshots for backups, it is just not possible right now. Maybe this will be solved in the future, but we can only hope.
    One question: you said ZFS is not copy-on-write, but after a snapshot is taken every change to a file from the snapshot is technically CoW, no? And in the wiki I read something about automatic snapshots, so would it not be CoW partially because of those auto snapshots?

    • @katbryce
      @katbryce Před 2 lety

      These days, you should probably have your root filesystem on SSD, and there, fragmentation is not a problem.

    • @jhonnythejeccer6022
      @jhonnythejeccer6022 Před 2 lety +1

      @@katbryce Good point, i had mine on HDD for a long time and never thought about this again after i switched. Thanks for pointing it out.

  • @Keechization
    @Keechization Před rokem

    I know it's fun to see really big numbers, but why are you benchmarking a RAM cache against on-disk write speeds? I think a more compelling comparison would be to benchmark the Linux kernel's page cache against the ZFS ARC/L2ARC.

    • @CyberGizmo
      @CyberGizmo  Před rokem

      Because the ARC and L2ARC are read caches; the writes do not complete until the write to the ZIL completes. Having the ZIL on an SSD will speed that up, since the ZIL is normally stored on the same drives as the ZFS pool. At the time of the benchmark, that was the best I could afford to build.

  • @thiemokellner1893
    @thiemokellner1893 Před 7 měsíci

    Thanks.
    I do not get your meaning of root file system. To me, root is /, and that is just a directory. This does make sense in your setup, i.e. to have / on ZFS and /bin, /usr, /... being on other file systems. There are probably almost no files directly under /. So, what falls under root in your setup?

  • @jeremyjohansson3445
    @jeremyjohansson3445 Před 2 lety +4

    Wow, these results are impressive! Should I switch from Btrfs to ZFS? Would you feel the performance difference?

    • @CyberGizmo
      @CyberGizmo  Před 2 lety +10

      One of the ZFS experts left a comment here; I need to rerun the ZFS benchmarks and try to disable the ARC, since it's slanting the numbers: most of the I/O is running in memory and not showing true disk performance. So stay tuned, I am working on that.

    • @PenguinRevolution
      @PenguinRevolution Před 2 lety +1

      I wouldn't recommend ZFS on a workstation; the way it's designed could break your system if you aren't careful.

    • @lsatenstein
      @lsatenstein Před 2 lety

      @@CyberGizmo I am looking forward to your evaluations.

    • @Klffsj
      @Klffsj Před 2 lety +1

      @@PenguinRevolution I believe that's only if you use hibernation, or are you referring to some other issue? If this is it, the zfsbootmenu boot manager and boot loader has checks to warn you before accessing a hibernated pool and likely corrupting all your data.

    • @PenguinRevolution
      @PenguinRevolution Před 2 lety +1

      @@Klffsj ZFS isn't designed for workstation use at all

  • @snap_oversteer
    @snap_oversteer Před rokem

    No love for JFS? I recently tried it on my laptop with a new SSD and it was the fastest of all the filesystems I tested, so I went with it for the root partition (I just rsync'd my old root and modified /etc/fstab). So far so good after a few months, including a few unexpected shutdowns from running low on power; fsck fixed everything. So far I'm pretty happy with it, but the fact that fsck and the other utils have dates around 2011 is not reassuring, nor is the fact that none of my friends who have been sysadmins for almost a decade have ever heard of it, let alone used it.

  • @franek4always
    @franek4always Před 2 lety

    What I/O scheduler was used in the test?
    Try setting none/noop and retesting.

    • @CyberGizmo
      @CyberGizmo  Před 2 lety

      The standard I/O scheduler for Debian. And I will decline that suggestion; however, if that is something you would like to do, the scripts are available, as the video description indicates.

  • @TalpaDK
    @TalpaDK Před rokem

    I'm quite surprised that ext4 is so slow compared to the CoW filesystems.
    However, any filesystem that doesn't have data CRCs is kind of obsolete to me.
    Also, snapshots are very, very handy, and compression is very nice too.
    Personally I'm quite happy to sacrifice performance for those features.
    Currently running Btrfs on all newer installs.
    A few years back I used ZFS for serving VM images from a magnet centrifuge (with an SSD for L2ARC), and while it was fast it used quite unreasonable amounts of RAM for the ARC.

    • @CyberGizmo
      @CyberGizmo  Před rokem

      ext4 is quite good as a root file system; it isn't tuned for other workloads. If Btrfs suits your needs, great. The devs have been working pretty hard on it to get it finished at last.

  • @eldiabloramon
    @eldiabloramon Před 9 měsíci

    I use pretty much only XFS. I have had some nightmare experiences with ZFS, plus just how much memory it hogs is crazy.

  • @guss77
    @guss77 Před rokem +2

    The results are very interesting in that they're quite the opposite of the testing I've seen at Phoronix, especially considering that ZFS had numbers close to or even faster than the theoretical performance of the drive (as DJ notes himself). This tells me that ZFS manages to cache most of the test data using its very large pre-allocated caches. I think this is an unfair test, as it is very unlikely that a real workload will always be constrained to the amount of data that can fit in RAM.
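
    One way to take the page cache and ARC mostly out of the picture with iozone is to use a working set larger than RAM, or to request O_DIRECT (which ZFS may not fully honour, depending on the version); the sizes below are examples:

        iozone -a -s 64g -r 1m -i 0 -i 1 -e    # 64 GiB file on a smaller-RAM box, fsync included in timing
        iozone -a -s 4g -r 128k -I             # -I asks for O_DIRECT where the filesystem supports it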

    • @guss77
      @guss77 Před rokem +2

      The very poor BTRFS behaviour is also suspect and would suggest to me that the filesystem failed to autodetect the drive as SSD, causing it to use the wrong optimizations. This is easy to test and fix by adding a mount option.

    • @cokesucker9520
      @cokesucker9520 Před rokem

      But then if you completely ignore the cache you're also messing up the test. I guess benchmarking is complicated.

    • @yuryzhuravlev2312
      @yuryzhuravlev2312 Před rokem +2

      Yes, this is a wrong and outdated video, and we can't reproduce it. All the Phoronix tests you can reproduce and double-check yourself with PTS.

  • @gjermundification
    @gjermundification Před rokem

    15:37 If your root pool spans 8 or more drives, those numbers will most likely change.

  • @ralfbaechle
    @ralfbaechle Před rokem +1

    Many inaccuracies. Don't call ext2 / ext3 / ext4 just "ext". There was a filesystem named ext in Linux, introduced in 0.96c and removed in 2.1.21. ext2 was influenced by it and developed by the same main developer, Remy Card, but the differences are extensive enough to consider it a different filesystem rather than an extended version of the extended "ext" filesystem.
    IRIX isn't UNIX-like; IRIX is a proper UNIX with all the licenses, trademarks and trimmings. Well, was.
    XFS at SGI is the successor to EFS in SGI's IRIX, introduced in IRIX 6.5 and ported to Linux starting around 2000. XFS on IRIX was largely optimized for large workloads, as in machines and RAID arrays that need multiple trucks to ship. In 1997 it was able to saturate a RAID system built from 10,000 9 GB drives hooked up to an SGI Origin 2000, of which for performance reasons only the first 4 GB of each drive were used, for single-fd I/O. At the same time it would take minutes to delete a Linux source tree stored on a single drive on a lowly SGI Indy running IRIX. To compare, starting with 2.1.4 Linux got so fast that there was no point in hitting ^C when running rm -rf on a Linux source tree - it was just too fast. That XFS slowness in some metadata and directory operations took many years to fix once XFS was ported to Linux. Basically, XFS ruled the terabytes and petabytes before most people knew what those words meant ;-)
    An XFS volume consists of three sub-volumes: data, metadata and real-time. The latter was optional. Basically, IRIX could guarantee a certain realtime throughput on a number of file descriptors for realtime I/O. At the time that was a unique feature of XFS, so a license on top of the basic OS was required for realtime XFS; the first four file descriptors, however, came for free. Not sure how users were using it. It would appear media streaming was a target market, or maybe journaling financial transactions for banking (there were regulations for that sort of thing). I used to joke about it being designed so data doesn't get lost in nuke tests. Which sounds important, except the feature hit the market when nukes were no longer being tested.
    Filesystems up to XFS had many strategies to optimize for rotating storage. Never mind that many of those got whittled away over the years. Older drives provided the same performance on all cylinders, while newer "notched" drives had fewer blocks per track at higher cylinder numbers. That's why SGI was using only 4 GB per drive in the above-mentioned benchmark. At the same time, IDE and SCSI drives made those notches invisible to the OS (except with deep SCSI magic), rendering rotational layout optimizations (such as BSD FFS's) impracticable. On older drives the number of tracks seeked made a huge difference, while on newer drives the time is dominated by how long it takes the head to settle after moving, so a single-track and a full-stroke seek take about the same time. Again, a bunch of optimizations bit the dust.
    Then came Shingled Magnetic Recording, which overlaps tracks, forcing drives to rewrite data under some circumstances, which makes the performance characteristics of SMR drives somewhat more similar to flash. This opened the opportunity for new performance optimizations. They do exist for dm, F2FS, ext4 and Btrfs but never really hit primetime. I speculate they were run over by the SSD bus ;-)
    Still, flash / SSD has pretty different performance characteristics, and optimizing for those permeates the entire kernel. Older filesystems up to ext4 / XFS were very much designed and optimized for rotating storage. Newer filesystems like Btrfs put the focus on flash.
    So this makes for a generational break in fs design and development. A full set of benchmarks should probably be done on two setups, one with rotating storage (aka a rust tumbler ;-) and one with SSD.
    You imply ext4 filesystems can't redirect their journals to a separate block device. That is in fact possible, using mkfs.ext4 -J device=<external-journal>. I've never seen this feature used in practice, but I'd imagine a scenario might be a journal on an SSD with the rest of the data sitting on a large but slower array of rotating drives.
    As for reliability, the main developer of ext4, Ted Ts'o, is a very methodical, cautious developer. Exactly what it takes to keep a filesystem that has held the majority of data stored on Linux for the past 20+ years safe. Btrfs is being developed more aggressively, which is great on the feature side but not so great for stability. Choose your poison.
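
    The external-journal scenario at the end would be set up roughly like this (device names are examples; the block size of the journal device must match the data filesystem):

        mke2fs -O journal_dev -b 4096 /dev/sdb1          # small journal device on the SSD
        mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/md0   # large ext4 fs on the HDD array, journal redirected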

  • @net_news
    @net_news Před 2 lety

    ext3 or 4 for root and ZFS for everything else (if the system has more than 2gigs of RAM).

  • @solar3mpire
    @solar3mpire Před 2 lety

    XFS has had journaling all along, and it is comparable to EXT4 with no journal;

  • @krazykat64
    @krazykat64 Před 2 lety

    Thumbs up for the Green Lantern shirt alone. 👍🏻

  • @ThorbjrnPrytz
    @ThorbjrnPrytz Před rokem

    What a surprise that a memory-cached fs has higher performance than non-cached systems...
    How would those numbers be on a 4 or 8 GB system?

  • @michaelheimbrand5424
    @michaelheimbrand5424 Před 2 lety +4

    Would be interresting to hear about your opinion on ZFS on Linux vs on FreeBSD. I recently migrated my own file server from OpenBSD to FreeBSD mainly because I heard a lot of good stuff on ZFS. I am a noob on ZFS and just winged it with a single zpool on an ssd for the system and then did a RAIDZ with 3 6TB drives. I was really concerned about the RAM with only 10GB in the machine. But I have to say it worked out well. Of course the system only takes just over 100MB. But the ARC cache stays around 5GB after 100 days of uptime with around 10 TB of data on my "datapool". I have not done any benchmarks, but I have to say both my server and my two old Thinkpads gained a LOT of performance by going to FreeBSD. I also get the feeling that FreeBSD even without ZFS is considerably faster than Linux both in general feel and network performance.

    • @CyberGizmo
      @CyberGizmo  Před 2 lety +1

      Michael, I recently migrated my zfs pools from FreeBSD to Linux when openzfs 2.0.1 was made available. That's not too bad at all, and you can always create an L2ARC device from SSDs so it will hand back your memory :D. Thanks for letting me know, I will add your suggestion to my to-do video list
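      For example, attaching an SSD as a cache device to an existing pool might look like this (the pool and device names are examples only):
          zpool add tank cache /dev/nvme0n1   # add the SSD as L2ARC for the pool named 'tank'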

    • @michaelheimbrand5424
      @michaelheimbrand5424 Před 2 lety

      @@CyberGizmo Thanks for that. An L2ARC drive seems like a good idea.

  • @Jagi125
    @Jagi125 Před rokem

    It feels refreshing to hear RTFD in 2022 again.

  • @tunichtgut5285
    @tunichtgut5285 Před rokem

    Very interesting results! Thank you very much for this video. Many youtubers promote BTRFS because of rollbacks. What are they doing to their systems? I have been using Linux as the only OS on all my computers since the early 90s, and I have messed up a system badly enough to need a reinstall only a handful of times.
    Of course, as a user I messed up my files many times, but the solution to that problem is backups or a Dropbox-style service that lets you revert to previous file versions.

  • @CaptainDangeax
    @CaptainDangeax Před rokem

    I was recently running a database with lots of inserts and updates, and it became very slow on F2FS. I moved the mysql directory to an xfs filesystem and the performance issue was solved. F2FS is good when dealing with old and tired SSDs, not for keeping valuable data

    • @df3yt
      @df3yt Před rokem

      I find F2FS is great for sequential or single-threaded stuff. XFS is the best of both, especially with NVMes. Only F2FS and XFS hit my hardware limits when I'm copying hundreds of gigs. EXT4 and BTRFS fall off and get slow.

    • @robervaldo4633
      @robervaldo4633 Před 6 měsíci

      F2FS is good for removable storage originally intended for use with (ex)FAT, because those devices have more resilient flash in the FAT region, so when not using FAT one should take care of doing some wear leveling. BTRFS does too much I/O for small operations; it's the only FS that (I believe, for that reason) significantly wore down one of my SSDs, so I don't see it ever improving much beyond where it is now. Also, XFS has had reflinks for a few years, which allow for some of the most useful features of BTRFS at less cost.
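      For instance, on an XFS filesystem created with reflink support (the default on recent mkfs.xfs versions), a reflink copy could look like this (the file names are just examples):
          cp --reflink=always big.img big-clone.img   # shares extents instead of duplicating the data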

  • @gatoloco3105
    @gatoloco3105 Před 2 lety

    Do you also DJ as in spin records? Would like to hear one of your mixes. Thanks for the linux content Mr Ware.

  • @Anonymous______________
    @Anonymous______________ Před rokem +3

    XFS. Extended attributes and ACLs are baked in without needing additional kernel modules, DKMS, or special flags on mount points. Also, it performs fantastically with directories that have more than 100K files. Workload matters, and if you are using Linux in an enterprise environment XFS is the way to go. Oh, the only catch is that you cannot non-destructively shrink XFS volumes.
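    Growing, on the other hand, can be done online; a minimal sketch (the mount point is just an example, and the underlying block device must already have been enlarged):
        xfs_growfs /data   # expand the mounted XFS filesystem to fill its device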

  • @user-dj3lb3gi5t
    @user-dj3lb3gi5t Před 9 měsíci

    Thanks!

  • @LampJustin
    @LampJustin Před 2 lety +1

    Hey DJ,
    thanks for the test, but I think some of your conclusions are pretty misleading.
    First, the reason ZFS did so well is that everything was done in memory. That just can't be said for most typical workloads, so tests with a small ARC size should be done as well, to get a real non-cached test like for all the other filesystems (see the sketch at the end of this comment).
    Second, the conclusion is just not a good recommendation! Never use ext4 without journaling. It really doesn't wear out your SSD that much, and any recent flash file system is journaled, so it's just ill-advised to disable it. So XFS is recommended for most use cases, especially because it has CoW features like reflink copies.
    Third, f2fs is another great contender here, I think. Just a recommendation ;)
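    As a rough sketch of the kind of non-cached test meant above (assuming OpenZFS on Linux; the 1 GiB value is just an example):
        echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_max   # cap the ARC at 1 GiB for the test run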

    • @CyberGizmo
      @CyberGizmo  Před 2 lety

      Hi, so first things first: 1) I can certainly add an L2ARC into the mix, but that is to help improve reads when memory becomes the bottleneck, and the first recommendation Sun Microsystems always gave was to add memory to clear up any performance issues with zfs (this was when they were still a free-standing company). Also, the zfs benchmark I did is consistent with results given by the Phoronix Test Suite. 2) Ext4 with no journaling was NOT my recommendation; in fact I said a couple of times during the video that running without a journal was a bad idea. 3) As I said in another comment, I will be adding F2FS back into the lineup; however, I have noticed some user comments/questions in the forums suggesting F2FS may have data loss issues.

    • @LampJustin
      @LampJustin Před 2 lety

      @@CyberGizmo I wasn't referring to L2ARC; the normal cache is just called ARC, so disabling it would show the real performance of the drive + FS combo. Without testing that, we don't know how well disk access works, since everything is cached, and we know ZFS's cache is awesome, but I at least like to know what happens when cold data that is not in the cache gets accessed.
      Regarding journaling, yeah you're right, you definitely said that, but the summary really looks like a list of recommendations, so it can't be stressed enough, like you did with RAID 5/6 on btrfs

    • @lsatenstein
      @lsatenstein Před 2 lety

      @@LampJustin I am after a stable, performing filesystem. Thus far, because I mainly use Fedora out of the box, I rely on btrfs. My own feeling is that btrfs is being updated and is getting better over time. I do mainly program coding, and my test systems are usually around 30 gigs in size. I keep secondary (archive) storage on 1TB 7200rpm spinners, along with external backups. I can go back to Jan 2018 monthly backups if I need to.

    • @LampJustin
      @LampJustin Před 2 lety

      @@lsatenstein same here, use btrfs for all kinds of stuff and it's especially great on my laptop as Timeshift works great after moving everything to @ and @home. I'm on Fedora also but right now timeshift needs a patched version which is annoying, but with the release of 22.04 they will have to patch timeshift as the same bug will affect it as well.

    • @katbryce
      @katbryce Před 2 lety

      I use FreeBSD, and zfs for everything. I have a 2TB L2ARC ssd drive, and between that and the ARC, I have a 98% hit rate.

  • @retjeh23
    @retjeh23 Před rokem

    I'm now a btrfs lover, since I use Arch and I need fast Timeshift

  • @yourpersonalspammer
    @yourpersonalspammer Před 2 lety +1

    How do I know if my NVMe SSD's EXT4 has journaling on or off?

    • @CyberGizmo
      @CyberGizmo  Před 2 lety

      The only way I know for sure is to look in the /proc/fs/ext4/ directory and find your device; there is a file in there called options. If you see journal_checksum it's on, and if you see no_journal_checksum it's off. Also, this only works if your ext4 partition is mounted.

    • @franek4always
      @franek4always Před 2 lety

      use dumpe2fs
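      For example (the device name is just an example; this reads the on-disk superblock whether the partition is mounted or not):
          sudo dumpe2fs -h /dev/nvme0n1p2 | grep -i features   # 'has_journal' in the feature list means journaling is enabled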

  • @freeculture
    @freeculture Před rokem

    Got burned by btrfs once, never again. I stick to ext4. In the case of flash memory, I simply disable the journal with -O ^has_journal.
    Btrfs should be perfect in theory; it should never need an fsck. But pray you never get into a situation where it needs one (don't try) - all hell breaks loose. I was able to recover most of my data after days of struggling and following guides online; it all happened one day when chromium or something caused an OOM situation on my compressed btrfs filesystem... that was about 2 to 3 years ago. The other filesystems I don't care about; ZFS in particular is known to require a ton of memory, no thanks, it's not worth the trouble for a simple desktop, and XFS is meh, I guess useful for large files or something.
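    A quick sketch of both directions (the device name is just an example; the filesystem must be unmounted before tune2fs can remove the journal):
        mkfs.ext4 -O ^has_journal /dev/sda3   # create a new ext4 without a journal
        tune2fs -O ^has_journal /dev/sda3     # or strip the journal from an existing ext4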

  • @Amaqse
    @Amaqse Před rokem

    Well, your slide would suggest BTRFS does not support encryption, compression or data protection, yet it's the only auto-healing fs in the group. ZFS's default setting is to use lzjb, so if you used default settings your zfs could have been compressed, which would explain benchmark scores exceeding drive specs. ZFS does not rewrite a file. Ever. Not unless the disk space is full; if you ask it to overwrite a file with an identical copy it will just update the pointer, so how is a "rewrite" benchmark even possible? And it is also not surprising it won that category by like 200%.
    Half of the benchmarks I don't quite understand; your graphs would suggest that in most benchmarks 1 user copying or generally "doing stuff" is almost as fast as 5 users doing the same thing at the same time. Generally, 2 users copying or writing would divide the bandwidth of 1 user roughly by 2, and 5 users should mean each gets 1/5 of the performance, so how is it that your graphs mostly don't show big differences between 1 and 5 simultaneous workloads? That's a bit unrealistic

  • @Ghennesph
    @Ghennesph Před rokem

    I don't think anyone has ever claimed btrfs to be the fastest. I've only seen people use or recommend it for its user-facing feature set.

  • @gettriggered_ian3269
    @gettriggered_ian3269 Před 2 lety

    Damn I initially thought xfs was the fastest due to previous benchmarks. I'll switch to another fs if I reinstall.

  • @df3yt
    @df3yt Před rokem +1

    btrfs - excellent features, but even in 2022 not stable for PCs that get regular power cuts. "Less reliable" filesystems handle that better. Personally I run XFS + ZFS on all my PCs.

  • @sluxi
    @sluxi Před 11 měsíci

    There is so much more to which FS is the best than just which is the fastest; the newer options blow ext4 out of the water with essential functionality, so as long as they're at least reasonable in speed and reliable, the tradeoff is a million times worth it to me.

  • @uzumakiuchiha7678
    @uzumakiuchiha7678 Před 2 lety

    I do this:
    two partitions, ext4 - 10G
    btrfs - 200G
    I am a noob. If there is any problem with this, please make me aware of it.

  • @BUDA20
    @BUDA20 Před rokem

    I use Btrfs because I want zstd compression, and it's excellent at that

    • @jeffspaulding9834
      @jeffspaulding9834 Před rokem

      Just FYI, ZFS gained that feature in OpenZFS 2. It had compression before, but I don't recall off the top of my head which algorithm it used.

    • @lsatenstein
      @lsatenstein Před rokem

      btrfs supports compress=zstd:x (x is 1 to 15; typically, most people default to x = 2)
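      For example (device, mount point and level are just examples):
          mount -o compress=zstd:3 /dev/sdb1 /mnt/data   # the same option also works in /etc/fstab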

  • @franklemanschik4862
    @franklemanschik4862 Před rokem

    About your btrfs claim: maybe you did not know it, but it is in fact ext4-based and can be configured in a lot of ways; maybe that's why you get bad results. You need to use the right storage algorithm for the workload, it's that simple. Claiming that a filesystem is not good because it does not perform as expected without configuration is like making the same claim about databases - like saying MySQL is slow on queries when the user never built any index.