Why The ZFS Copy On Write File System Is Better Than A Journaling One

  ‱ Added 22 May 2024
  • Article Referenced: Understanding ZFS storage and performance
    arstechnica.com/information-t...
    CULT OF ZFS Shirts Available
    lawrence-technology-services....
    ⏱ Timestamps ⏱
    00:00 File Systems Fundamentals
    01:00 Journaling Systems
    02:00 COW
    03:30 Copy On Write Visuals
    07:49 ZFS with Single Drives
  ‱ Science & Technology

Komentáƙe • 177

  ‱ @LAWRENCESYSTEMS  2 years ago +10

    CULT OF ZFS Shirts Available
    lawrence-technology-services.creator-spring.com/listing/cult-of-zfs
    Article Referenced: Understanding ZFS storage and performance
    arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
    ⏱ Timestamps ⏱
    00:00 File Systems Fundamentals
    01:00 Journaling Systems
    02:00 COW
    03:30 Copy On Write Visuals
    07:49 ZFS with Single Drives

  ‱ @andarvidavohits4962  2 years ago +13

    "In the beginning Sun created the zpools, datasets and zvols. And the data was without form, and void; no checksum was to be found. And Sun said, Let there be Copy on Write: and there was Copy on Write. And Sun saw the CoW, that it was good..."

  ‱ @ChaJ67  2 years ago +67

    I suppose I fit into the ZFS fanboy club. I have extensively tested failure modes of various file systems, especially in RAID / RAID-Z arrays and I can give you a rundown of what I found:
    1. UFS - A simple power loss can hopelessly corrupt this. Good thing pfSense is switching away from it. As an extra kicker when I took over the IT department at a facility where I saw pfSense fail to the point where it could not boot, I found that the UPSes were over 15 years old. I got those swapped out and pfSense configured to monitor the UPSes and haven't seen any more corruption issues. I also rebuilt the failed pfSense firewall to use ZFS before it was the default. There were other random corruption issues across multiple different pieces of hardware until those ancient UPSes were replaced, so I think when they clicked over to battery mode, they were throwing out dirty power and that may have also been why things corrupted.
    2. ZFS - This is the closest thing to an indestructible file system ever made and its vertical integration is unmatched. If you just have one drive and only non-ECC RAM, while this isn't what is recommended for ZFS, ZFS is still going to do better than anything else out there in that scenario. Something important to note above and beyond CoW and snapshotting with CoW is the CRC blocks written for every commit. If the CRC doesn't match, the commit is considered invalid and rolled back. The place non-ECC RAM can hurt you is that maybe an otherwise good commit rolls back because the CRC got corrupted, where a lesser file system will just write out the data and call it a day with no CRC to say whether the data is good or not. Most file systems don't care. ZFS does care. When it comes to ZFS RAID-Z, it is better to use RAID-Z than, say, hardware RAID. To avoid getting into all of the nerdy technical bits on why this is the case, let's just say it is the only RAID I tested where I couldn't find a weak spot to cause it to fail when there could have been a way to recover. In every other RAID I tested, and I tested many, I found a weakness somewhere where the whole RAID goes bye-bye with one or fewer failed disks, or at least corruption with power loss. Of course RAID-Z level 2 is going to be a lot harder to break than level 1. If you really care about your data, you will probably use RAID-Z level 2, maybe stick up to 8 drives in a RAID-Z level 2 vdev, and then just have a bunch of vdevs in your zpool. The one annoying thing about ZFS is trying to add more drives to a vdev. You just add more vdevs, which is kind of lame if you have 5 drives in a RAID-Z level 2 zpool and you want to add 3 more drives to make it a single 8-drive RAID-Z level 2 array. Instead the best you can do, at least historically, is have the 5-disk RAID-Z level 2 vdev and add a separate 3-disk RAID-Z level 2 vdev to the zpool. If you just want to add one drive, for example going from a 3-drive RAID-Z level 1 to a 4-drive RAID-Z level 1, forget it; it won't work (see the command sketch after this list).
    The really cool thing with ZFS is the vertical integration. All you need is ZFS. No LVM. No LUKS. No partitions. Just ZFS talking straight to the disks. It actually works better this way, especially if you are using SSDs and you want TRIM support. Since I like to encrypt data at rest, I just create an encrypted dataset in ZFS and my data volume is encrypted. Easy peasy. None of this container-inside-of-container mess to deal with, which becomes an unwieldy problem, especially if you need to make something bigger or smaller, or are hoping some TRIM command will make it to your SSDs so they don't get slaughtered with write amplification. Actually I just bit the bullet and use high endurance SSDs with RAID because lesser SSDs just get killed, but I don't do ZFS directly to SSD arrays yet. That is to be a future project when I am ready to do more with booting off of ZFS arrays directly as opposed to using ZFS arrays more for just data.
    ZFS arrays are easy to add and remove from a system.
    3. BTRFS - This is more of a wannabe ZFS for Linux that does away with the CDDL license causing Linus Torvalds to hate on ZFS. I use it and it does some good stuff, but it just is not as good or feature filled as ZFS. You can technically RAID with it and at this it is the best GPL software RAID out there, but it has weaknesses. BTRFS needs tweaking and balancing and such to keep its performance from going completely in the toilet with heavy use. BTRFS can corrupt more easily than ZFS, though it is much harder to corrupt than any journaling file system. BTRFS can handle a reasonable amount of drive error / failure issues with RAID as in you catch the drive errors early on and swap out the bad drive and rebuild and it is OK. You can even abuse BTRFS RAID some and if you know your stuff recover the array. However you start pushing it and BTRFS will just crash. You can end up with your array deeply hosed either with really bad hard drives or abuse in just a certain way. These exact same abuse tests ZFS passed with flying colors, so it can be done. In other words I am saying BTRFS is pretty decent and better than anything else that is GPL, but the CDDL licensed ZFS takes the crown in robustness and performance for this category of file system.
    A place where BTRFS does have an advantage over ZFS is you can more easily do migrations with it. You can go from no array to an array. You can go from a few drives in your array to more drives in the array. You can also append multiple drives and arrays together to make it bigger. ZFS historically has only done the last one of adding multiple arrays together, no expanding existing ones directly. Granted this gap is closing in more recent versions of ZFS.
    BTRFS arrays are easy to add and remove from a system.
    4. XFS - This is probably the most robust of the journaling file systems. You can still screw it up and you need a separate RAID mechanism if you are using RAID.
    5. EXT4 - This is probably the most performant of the journaling file systems. Journaling file systems are a lot faster than CoW based file systems. However you will still get your data corrupted in power losses and you don't get snapshots directly with it. Once you have snapshots, especially if you care about your data and want to back it up and such, you just can't use a journaling file system anymore, so EXT4 just won't do. Not that it is a bad file system; it is just no good for a system where you care about your data and want to have good backups.
    6. MD RAID - Software RAID that I consider a worse solution for storing your precious data than just a single drive. A simple power loss can cause write-hole corruption and even master boot record corruption to the point where the array is unmountable.
    7. Supercap backed MegaRAID RAID - This is usually pretty good. If a disk breaks, you get an audible alarm, granted someone goes into the room where the card is and hears it. Can also setup monitoring to tell you if a disk broke. It is fast. It recovers from power loss well. It can rebuild from a drive failure quickly. Obviously RAID-6 is going to be more reliable than RAID-5.
    When it comes to expansion, you can migrate your RAID easily, at least if you are not using RAID 50 or RAID 60. Really though this higher aggregation works well with BTRFS and ZFS and can be done with LVM, so you would do that at a higher level than to use the RAID controller for that. Anyways if you end up with more than one RAID controller, appending / RAIDing the arrays together at the higher level may be the only way.
    Where the MegaRAID controller will get you is removable arrays. It is just not designed for this. Also say the controller breaks and you need to move the drives to another identical controller. If anything at all goes wrong or you just don't use the right commands, the RAID controller will wipe out the drives and start over. You also can't mix and match different storage modes. Either the controller is in RAID mode or it is an HBA (host bus adapter) for all drives connected to it. It doesn't understand anything else. And RAID mode doesn't get you direct access to the drives, which will really mess you up if you try to make single drive 'RAIDs' so ZFS can have access to individual drives as then ZFS doesn't know the health of the drives and things can go really sideways when a drive malfunctions. With SSDs, the TRIM is gone. 3ware would let you mix and match, which was really handy, but MegaRAID doesn't have a concept for this. It is just either running the RAID firmware or the HBA firmware and that is it.
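    A minimal command sketch of the RAID-Z2 layout and encrypted-dataset setup described in point 2 above. The pool name "tank", the dataset names and the device paths are placeholders for illustration, not anything from the video; on a real system you would normally use /dev/disk/by-id paths:

      # create a 5-disk RAID-Z2 pool
      zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # you cannot grow that vdev to 8 disks in place; historically the only
      # option has been to add a second RAID-Z2 vdev to the same pool
      zpool add tank raidz2 /dev/sdf /dev/sdg /dev/sdh

      # encryption at rest without LVM/LUKS: a native encrypted dataset
      zfs create -o encryption=on -o keyformat=passphrase tank/secure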

    ‱ @TanKianW79  2 years ago

      I am also a ZFS fanboy/cult. enough said.

    ‱ @agentsam  2 years ago +4

      Awesome write-up. Thanks Jason.

    ‱ @adymode  2 years ago +4

      Exceptional writeup. Very fair handed towards btrfs especially as a ZFS fan. Consider posting it as an article on medium or even just a gist, I would have appreciated finding this when I had to google about filesystems.

    ‱ @XenonG  2 years ago +1

      For point 3, Torvalds doesn't hate ZFS, he just doesn't recommend using ZFS with Linux due to licensing issues. For point 2, I agree even if you have just one physical disk, filesystems with integrity verification is a good idea e.g. ZFS or BTRFS. Though I wouldn't recommend BTRFS for RAID of any sort. "Hardware RAID" of any sort is not worth the trouble/effort and cost.

    ‱ @ChaJ67  2 years ago +6

      @@XenonG you are right in that I probably used too strong of a term. Torvalds can get to be a bit emotional and maybe I read too much into that.
      You are free to your own opinion as to what to use and not use. ZFS software RAID-Z level 2 is the most resilient, and if that is your main metric, then it will win. When I look at what is most practical across an array of use cases where you care about your data, the two sane options I have come across are RAID-Z level 2 and hardware RAID 6. RAID 5 may be good enough for a small array of high endurance, enterprise class SSDs where the data isn't too critical. RAID 1 is surprisingly not that much better in real life than a small RAID 5, as in I have seen both fail. Big RAID 4 and 5 arrays are an accident waiting to happen and I have seen this go badly on multiple occasions. The problem with hardware RAID is that it is focused on a particular common industrial use case without allowing you to go outside of that box. Also, when you need to do certain operations like going to another RAID controller after one broke, or migrating / expanding the array, there are scary times where it seems like the RAID controller wants to lose your data and you are trying to prevent that, crossing your fingers you can pull off this operation or that operation without the array being wiped. RAID-Z is a lot less scary and goes much further into allowing different use cases than what hardware RAID confines you to. The other thing you want is performance, and both RAID-Z with cache drives and hardware RAID with supercap-to-flash and caching schemes can deliver. Other soft RAID solutions don't deliver, or at least will corrupt your data at some point.
      I should add that I concluded BTRFS RAID is not a terrible option. It mainly recovered well from the more common error / failure scenarios, including ones other software RAID solutions fell short on. Beyond that, when I did my abuse testing on it, I ran into the BTRFS process crashing and had to become an expert at BTRFS recovery tools to get back to a happy place. In other words, if you are a small operator with a small array, don't care too much about the data or the performance of the array, and are willing to do a deep dive when something goes wrong, BTRFS RAID can do the job. It is just that if you want to scale up, really care about the data, are not looking to read through the source code and dig through questionable online forums to come up with recovery ideas, and care about performance, BTRFS falls behind.
      One more thing to add is that the value of snapshots cannot be overstated. RAID is not a backup solution. Snapshots are a great way to get something coherent into a backup, and it can happen behind the scenes. You can do this with WAFL, ZFS, and BTRFS the way it needs to happen. I once told Hans Reiser that LVM snapshots were not enough and he gave me an angry reply right before his ex-wife went missing. Maybe it wasn't such a good idea to bring up that point back then, but now we have ZFS and BTRFS, so life is good.

  ‱ @ne0dam  2 years ago +48

    I do agree that Ars Technica's ZFS 101 article is really great. I read it a while ago, and I still think it's a must-read for someone who wants to better understand the basics of ZFS.

  ‱ @TanKianW79  2 years ago +24

    I started using ZFS to secure important data more than 10 years ago. More than 10 years later, that data is still with me to this day. Never looked back, never looked at another file system, never looked at another off-the-shelf NAS system. I am a ZFS fanboy/cultist, if you like to call it that.

    ‱ @tuttiepadh7556  1 year ago

      ZFS is badass. I can't stand the damn dicks on the Linux side who are so incredibly anal about some stupid gd license.. The product itself is not "incompatible" at all

  ‱ @PeteMiller  2 years ago +3

    Far too many tech people & business leaders fail to comprehend that DATA is the life-blood of their business. Protection of that data is goal#1 (or should be). To that end, the business should always be using the BEST tool(s) to store & protect that data!!

  ‱ @hippodackl1521  1 year ago

    You are doing so well at explaining technical concepts. I've been watching lots of your videos, even completely unrelated ones, and it's always been loads of fun.

  ‱ @alienJIZ1990  2 months ago +1

    The COW feature of ZFS is how Datto's backup tech works. Rather than synthetic backup rollups (reverse incrementals) like Veeam and other backup vendors use, they don't have to reconstruct the data first thanks to the way they use ZFS/COW. It also makes their backup solution way less ubiquitous since you need their appliance, but it's pretty cool the way their tech works.

  ‱ @rolling_marbles  2 years ago +2

    Very nice video! It’s amazing all these products out there touting their magic sauce and you watch the boot screen and see ZFS volumes loading. It’s like pulling back the curtains on the wizard.

  ‱ @GodmanchesterGoblin  2 years ago +1

    Thanks for this! I especially appreciated the explanation of how snapshots work in ZFS. I love to understand how stuff like this works before I use it, and this was a great "light bulb moment" for me.

  ‱ @zebethyal2635  2 years ago +2

    ZFS Fanboy here, I have been using ZFS since 2006 on Solaris when it first came out, and using it on Linux (Ubuntu) since 2016.
    Having lost all data on several 4-drive NAS systems due to disk failures under RAID 5, I now only use RAID 1 ZFS mirrors for data on these small systems; what I lose in capacity I more than make up for with resilience, (read) speed and self-healing capabilities. Daily, weekly, monthly and yearly snapshots of volumes or zpools make data loss a thing of the past (just go looking in your snapshots for the lost file and copy it back out) at very low disk overhead. I then supplement this with additional off-host backups.
    Migration to larger disks and new zpools with zfs send/receive, while using some additional cabling, is again very simple (see the sketch below). Once done, export your old zpool, swap the disks over, import the new one with the old name and off you go again on the bigger disks.
    I have also auto cloned Oracle databases to separate servers on SAN storage to refresh them on a daily basis, initial creation using zpool split, then new snapshot and zfs send/receive to update every night.
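    A rough sketch of that snapshot-and-migrate workflow; the pool names "tank" and "bigtank" are hypothetical stand-ins for whatever you actually have:

      # recursive snapshot of everything you want to move
      zfs snapshot -r tank@migrate

      # replicate the whole tree to the new, larger pool
      zfs send -R tank@migrate | zfs receive -F bigtank

      # retire the old pool and bring the new one up under the old name
      zpool export tank
      zpool export bigtank
      zpool import bigtank tank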

  ‱ @haakoflo  1 year ago +1

    Great visualizations! ZFS is by far my filesystem of choice, except for cases where there are a lot of small writes (like a relational database). I have my VM boot drives on separate ZFS datasets, and can do nightly incremental backups to a TrueNAS box in seconds, and keep the ability to roll the VM's back to any earlier date for as long as I want.

  ‱ @dezznuzzinyomouth2543  1 year ago +3

    I am a zfs fan boy !
    Thank you Tom. Your knowledge is INVALUABLE!!

  ‱ @KentBunn  2 years ago +2

    Saying that snapshots take no space seems patently wrong, since the blocks that have been replaced remain locked/marked as used for the purpose of the snapshot. So while no space is allocated at the time of the snapshot, any additional data changes to the snapshotted data will occupy additional blocks, rather than freeing up blocks as new data is written.

  ‱ @IAmPattycakes  2 years ago +2

    I'm fully in support of the ZFS cult. We have it on freenas and probably a couple other boxes that my partner set up that I don't pay too much attention to. I'm a btrfs fan myself, because it's the default for my distro of choice and it sure seems like it works well.

  ‱ @gonace  2 years ago

    Great video as usual, and thanks for linking the article!

  ‱ @haxwithaxe  2 years ago

    I use zfs as the filesystem on my laptop's boot drive. It works great. No more shuffling partitions and much reduced disk space anxiety.

  ‱ @sagarsriva  2 years ago +1

    Always the best explanation!

  ‱ @nonobrochacho240  2 years ago +1

    Massively interesting. Thanks for this!

  ‱ @h4X0r99221  2 years ago +1

    Great Video Tom!

  ‱ @larzblast  2 years ago +1

    My Synology had a kernel panic and lost an entire BTRFS volume as a result (and yes, I am religious about my backup plan). That was enough to inspire me to do the homework and make the move to TrueNAS Scale to see how ZFS works out for me.

  ‱ @hiddenyid4223  2 years ago +1

    In order to protect a single disk from bit rot by using "copies=2", you would have to set this very early in the installation phase, since it only applies to new writes.
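    For reference, a minimal sketch of what that looks like; the dataset name is a placeholder, and note that the property really does affect only blocks written after it is set:

      zfs set copies=2 tank/important      # new writes now store two copies
      zfs get copies tank/important        # verify the property
      # existing data only gains the extra copy when it is rewritten,
      # e.g. by restoring it from a backup or copying files to a new dataset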

  ‱ @Kulfaangaren  2 years ago +3

    I primarily like ZFS for bulk storage (BTRFS can't really compete here in my opinion), as I wrote in my previous post, I like BTRFS for my workstation root filesystem.
    Nothing prevents me from using both, so I do :)

  ‱ @GCTWorks  2 years ago +2

    Can you do a video on properly backing up and restoring data between pools? Maybe using a Debian-like OS and TrueNAS. I think you mentioned cloning in this video.

  ‱ @glock21guy  2 years ago +1

    There actually is some data correction with single device CoW. HDD's and SSD's use ECC in their firmware when they write and read. If a single bit flips while your data is at rest, then you go to read it, the ECC can correct that, repair it, and re-write it and/or relocate it. Depending on the level of ECC, maybe even more than one bit, I'm not sure. But as more time passes, you may end up with more bits flipped in a sector to the point where ECC can't correct it. That's when you get an actual read error. Filesystem checksums come into play at that time, or when there's a transport layer error between the drive controller and the filesystem driver. Then the FS either errors (with one copy of data), or issues a write of the data copy that does match its checksum, hopefully fixing the bad data - if the device is able to write good data.

  ‱ @franciscolastra  1 year ago

    Great contribution!!!!! Many Thanks!!!
    Any thoughts on using a CoW FS as a drive for file-sharing downloads?
    It is high above my pay grade, but it does not look like a good use case... rewriting the whole file for every acquired chunk.
    Am I on the right track here?

  ‱ @michaelchatfield9700  9 months ago

    How does ZFS distinguish between which disk is "correct" in a mirror?
    Very helpful to know about ECC & ZFS. Great to know. Pleased I chose to use it

  ‱ @rcdenis1  2 years ago +3

    Are "a cult with data integrity" shirts coming to the store? I'll take 2!

  ‱ @semuhphor  6 months ago +2

    I know it's a little late, but thank you for this vid.

  ‱ @Solkre82  2 years ago

    I have a small HP running TrueNas. It can only fit one SSD and I also have a USB3 drive as an external backup target. On those single disks I do set copies=2 in the hopes data is recoverable if the drives partially fail.

  ‱ @bikerchrisukk  2 years ago +1

    Nice one Tom, it is a really good system though I've only used it for a few years now. The fan/cult situation is odd, I wonder if the first car drivers were called fan boys, by the horse and cart owners.

  ‱ @deeneyugn4824  1 year ago

    We used to clone a whole database by taking a snapshot of the volume, then creating a new volume from that snapshot; we then presented it to another server after changing its UUID. It took about 5 minutes manually.
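    The ZFS equivalent of that workflow would look roughly like the following sketch; the pool, dataset and host names are made up for illustration:

      zfs snapshot tank/oradb@refresh              # point-in-time snapshot
      zfs clone tank/oradb@refresh tank/oradb_qa   # writable clone, near-instant
      # or ship it to another server instead of cloning locally:
      zfs send tank/oradb@refresh | ssh qa-host zfs receive -F tank/oradb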

  ‱ @NorthWay_no  1 year ago

    It is 2022 (as of right now) and I have been waiting something like 30 years for a versioning filesystem for whatever OS I am currently using. With TUX3 infinitely in progress and no others that I know of, I think I need to get myself another jug of patience.

  ‱ @GratuityMedia  2 years ago

    thanks for this

  ‱ @jttech44  6 months ago +3

    "You like zfs too much" -people who don't remember how fickle hardware raid was.

  ‱ @gregm1457  2 years ago

    Big fan of zfs here- liked zfs root on my Solaris boxes back in the day but I put it on all my linux servers now. I hope Linux can install zfs on root by default someday.

  ‱ @DJaquithFL  2 years ago +4

    ZFS helps significantly reduce bit rot. Then why the push for Synology especially when QNAP supports ZFS? For that matter TrueNAS has its own hardware offerings. When it gets down to it, both systems have had their fair share of equal amounts of attacks from vulnerabilities. In fact, Synology is subject to one as I'm typing this; quote _"Synology DSM allows execution of arbitrary commands."_

    ‱ @LAWRENCESYSTEMS  2 years ago +5

      QNAP has put in backdoor accounts so I don't recommend them. www.bleepingcomputer.com/news/security/qnap-removes-backdoor-account-in-nas-backup-disaster-recovery-app/

  ‱ @ThePswiegers  2 years ago +1

    Is ZFS not a ROW filesystem (redirect on write) ? found some sources pointing to this - "The copy-on-write file system overwrites original data blocks after reproducing an exact copy in a new location. On the other hand, the redirect-on-write file system (particularly the ZFS file system) updates the metadata so that it points-redirects-towards the newly written blocks."

    ‱ @LAWRENCESYSTEMS  2 years ago +1

      The developers call it cow so I call it cow.

    ‱ @catchnkill  2 years ago +1

      @@LAWRENCESYSTEMS Any way to let readers understand that it is "copy-on-write" in the ZFS developers' sense? There is real CoW, like when you take a snapshot on LVM over ext4. There is a fundamental difference in performance vs fragmentation.

  ‱ @HelloHelloXD  2 years ago +3

    Great video. A little bit off topic: when resilvering a ZFS mirror, does it work the same way as normal RAID1 (exact copy of the second drive), or is it like this: "ZFS only resilvers the minimum amount of necessary data"?

  ‱ @CraigHaydock  2 years ago +4

    I too love ZFS... going back to the early days when it was a product of Sun! 😄 Also, for anyone that's interested, on the Windows side of the house, ReFS is also a CoW. It operates completely differently from ZFS on so many levels though. And, among other things, it's non-bootable at this point... much like ZFS was for oh so many years.

    ‱ @lawrencedoliveiro9104  2 years ago

      Microsoft seems to be gradually backing away from ReFS. Which makes you wonder what they are going to do about the increasingly antiquated NTFS ...

    ‱ @CraigHaydock  2 years ago

      @@lawrencedoliveiro9104 I haven't seen anything that indicates they are "backing away from ReFS." Can you please provide some context for that statement? ReFS has been updated in every server release since its inception and is closing the feature parity gap with NTFS. Slower now than in the earlier releases... but I'm sure that's partly attributable to "low hanging fruit" and the complexities of establishing complete feature parity. ReFS is also the de facto recommended file format for anyone using Storage Spaces or Storage Spaces Direct, or anyone hosting Hyper-V Virtual Machine VHDX files. So, I would be very curious to know what, if anything, has changed on their roadmap to continue improving ReFS... because I haven't personally seen any chatter about ReFS going away, being deprecated, or superseded by anything else coming down the pipe (ReFS v4.0 maybe??? current release is v3.4). If you know something... spill the beans, man! I want to know! :D

    ‱ @lawrencedoliveiro9104  2 years ago

      @@CraigHaydock The fact that they have reduced its OS support since its early days.

    ‱ @CraigHaydock  2 years ago +1

      @@lawrencedoliveiro9104 Well... they stopped allowing Windows 10 to create new ReFS volumes, but it can still read any existing ones as well as any new ones that you create on a newer server and then transfer ownership over to a Windows 10 machine.... nudge nudge wink wink 😉. But, they also did that at the same time that they discontinued allowing Windows 10 to create new Storage Spaces... again, still being able to read any existing Storage Spaces or transferred ones from a new server (a nudge is as good as a wink to a blind bat). I don't see that being so much a move away from supporting ReFS as much as it is a change in their view on wanting ReFS and Storage Spaces to stay in the realm of being a "server product" rather than a home product. (or, maybe they were getting too many support calls from home users who don't understand how to properly implement and maintain it?). I for one have taken advantage of standing up a Windows 10 machine to host ReFS + Storage Spaces to avoid Windows Server licensing requirements on small scale non-enterprise deployments. So, I can see it being more of a money grab decision than shying away from wanting to support it. I've seen Microsoft abbreviated to "M$" so many times I can't count... so a money grab wouldn't surprise me in the least. đŸ€‘ Happy to be proven wrong though. ...well... not really happy... I have a lot of deployments I support using ReFS... so, if it really went away I might cry a little bit! đŸ€Ł

    ‱ @CraigHaydock  2 years ago

      Oh... I should clarify a little bit I guess... The ability to create ReFS volumes as well as Storage Spaces wasn't removed from Windows 10 across the board... it was just from the Pro edition (Home edition as I recall never had the ability... I could be wrong though... I rarely use Home edition). However, the "Windows 10 Pro for Workstations" edition still retains the ability to do both... but the licensing for that edition is also nearly half the cost of a server license and is not a SKU that most people typically purchase.

  ‱ @TheLazyJAK  2 years ago +1

    Is it the default for pfSense+ on their arm devices? I remember they never supported that in the past.

  ‱ @Kulfaangaren  2 years ago +3

    A cult with "file data integrity" :)
    Anyway... You should really try out something like Manjaro where you can have a BTRFS root filesystem and use an application like Timeshift to get rolling snapshots of your root filesystem. Rolling snapshots are of course available in OpenZFS as well, but what makes Timeshift + BTRFS such a powerful combination is the integration with GRUB and the Arch package management. Snapshots can be automatically taken before every upgrade, giving you the ability to roll back your system to a time before you made "that update that broke your system"(TM).
    I have had it running since early January with snapshots going back all the way to the fresh install (1x fresh install snapshot, 3x weekly snapshots, 7x daily snapshots and 8x hourly snapshots) and it takes up about 10 GB of space.
    This is my daily driver workstation used for both work & play.
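    For anyone curious, a rough sketch of the kind of pre-upgrade snapshot being described; the snapshot path and subvolume layout here are assumptions for illustration, not Manjaro's actual defaults:

      # read-only snapshot of the root subvolume before an upgrade
      btrfs subvolume snapshot -r / /.snapshots/root-pre-upgrade
      # list existing subvolumes and snapshots
      btrfs subvolume list /
      # Timeshift can do the same thing on a schedule or on demand, e.g.
      # (check its manual for exact flags): timeshift --create --comments "pre-upgrade"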

  ‱ @johnhricko8212  2 years ago +1

    gotta get one of those "I AM ROOT" shirts!!

  ‱ @alignedfibers  2 years ago

    Is it easy to move an existing LVM volume with an ext4 FS over to a new set of disks that I would like to use ZFS with? Is there a way to use lvconvert and mirror it over to move it easily if the existing ext4 is added to the new volume group? Currently it is just a single PV, but when I get a new set of disks on a PCIe card I just want to move that volume over and move it on to ZFS. I understand that rsync could be used, but I always have trouble with that one and have no idea how to be absolutely confident all permissions, ownership and symlinks are retained.
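    For what it's worth, a hedged sketch of the rsync route; the pool name, device paths and mount points are placeholders, and the -aAXH flags are what preserve permissions, ownership, ACLs, xattrs, symlinks and hard links:

      zpool create tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
      zfs create tank/data
      rsync -aAXH --info=progress2 /mnt/old-ext4/ /tank/data/
      # dry-run comparison of the two trees before retiring the LVM volume
      rsync -aAXHn --delete --itemize-changes /mnt/old-ext4/ /tank/data/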

  ‱ @teachonlywhatiseasy  2 years ago

    ZFS maintains a list of blocks on disk that differ between snapshots. ZFS replication checks the list and backs up only the changes, unlike rsync which has to go through every single file, so it's built for performance right out of the box, albeit with some storage overhead.
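    A small sketch of the incremental replication being described; the dataset, snapshot and host names are made up for illustration:

      zfs snapshot tank/data@monday
      zfs send tank/data@monday | ssh backup zfs receive backup/data                # full send, first time
      zfs snapshot tank/data@tuesday
      zfs send -i @monday tank/data@tuesday | ssh backup zfs receive backup/data    # only the changed blocks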

  ‱ @halowizbox  2 years ago +1

    Any idea if a PERC H730 Mini would work with TrueNAS Core? I would LOVE to try getting a ZFS share going in my home lab. I'm using a mix of SATA and SAS drives.

    ‱ @jdavis4155  2 years ago

      I believe the H730 lacks passthrough, or IT mode as it is sometimes called. TrueNAS wants direct access to the HBA / drives and the RAID card is hiding that. Most likely the root cause is a lack of drivers baked into TrueNAS, because they highly recommend against using hardware RAID. I've not verified that part, it's just a guess. Edit: It does appear as if the H730 will accept some questionable firmware that you can flash to it, but good luck with that :)

    ‱ @Seris_  2 years ago +1

      i have a PERC H330 mini mono that passes through the drives as "non-RAID" drives, works great with ZFS

    ‱ @cjchico  2 years ago +3

      I have an H730 in passthrough (IT) mode and it works great with ZFS on Proxmox. Mine allowed me to put it in passthrough mode in the UI itself instead of flashing anything.

    ‱ @halowizbox  2 years ago

      @SuperWhisk Excellent Info SuperWhisk, much appreciated.

  ‱ @nommindymple6241  2 years ago +2

    Suggestion: could you do a video on getting pfSense and Android phones to play nicely together regarding IPv6? Google vehemently refuses to implement DHCPv6 and I guess we're supposed to use SLAAC. But, I've tried to set it up in pfSense and just can't get it to work. I thought I had it (my phones actually got IPv6 addresses from the network), but after an hour or two, their addresses just disappeared never to be seen again.

  ‱ @aeroturner  2 years ago +1

    Cult with integrity... Lol Daaaaaaddd Joookkkeeee. Love it. I want to use it.... Stickers anyone?

  ‱ @mattguyatt  2 years ago

    Hey Tom just FYI I think in the video title "That" is supposed to be "Than" ? Thanks for the great content!

  ‱ @ThePonymaster2160  2 years ago +1

    Now that major hard-drive OEM's have been caught selling shingled drives for NAS's and other storage arrays, what brand of spinning rust do you recommend for use w/ ZFS? Thanks!

    ‱ @LAWRENCESYSTEMS  2 years ago +5

      Lately a lot of Seagate Exos

    ‱ @williamp6800  2 years ago +1

      Seagate has stated that their Iron Wolf NAS drives don’t and won’t use SMR. WD still uses SMR in their Red NAS drives but the Red Plus drives use CMR not SMR. I believe the same is true for Red Pro but haven’t double checked.
      If you can stomach buying from WD after what they did with SMR drives, you’ve got a choice. If you can’t, then Seagate Iron Wolf drives are your only option for NAS drives.

    ‱ @shadow7037932  2 years ago +2

      Don't bother with specific brand. Look for specific models marked as CMR.

    ‱ @ironfist7789  2 years ago +1

      @@williamp6800 Yeah, wd red pro should be CMR according to their website. I have some old wd red (normal) I was scared about but the particular model I had was CMR, but yeah a lot or all of the new ones are SMR

  ‱ @jayzn1931  11 months ago

    To me it seems like there is more storage used: in the first picture there is a lot of empty space on the left and right, which is later used in the animations for ZFS. Can this space also be used normally? If so, I guess it is bad to have like >90% usage on your drives with ZFS, or am I getting this wrong? I know you said the snapshots don't take up more space, but it seems they have to be overwritten so other data can fit there?

  ‱ @ThePonymaster2160  2 years ago

    One more question - I'm SEVERELY budget constrained, and thus can't afford any rack-mount gear (I'm just starting out in the home-lab fun), so I was wondering how you felt about just slapping a ICY DOCK 6 x 2.5 SSD to 5.25 drive bay into a regular ATX case? Would the drive writes from ZFS chew up the lifespan of the SSD's badly?

    ‱ @ironfist7789  2 years ago +1

      I have a regular raid1 mirror in my atx case which works fine (2x wd red pro) as storage alongside my regular SSD ubuntu install. My server case (which is just a bigger meshify ATX case) has room for at least 8 3.5 inch drives (don't remember the exact specs) which I reckon would work for ZFS... just need to save more to fill it and test. I don't know much about the lifespan of SSDs in the ZFS environment... they do make NAS versions but seem to be pretty pricey. I actually put a 10gb sfp+ card in each comp also to connect the two (separate network from the regular 1gb ports connected to the switched lan), so I didn't have to buy a 10gb switch yet. Kind of fun to play around with that stuff.

    ‱ @techguy3424  2 years ago +2

      You can buy used rack mount servers for less than $300 off of Amazon. Just be prepared for a lot of noise if you start buying server grade hardware.

  ‱ @JeroenvandenBerg82  2 years ago +1

    BTRFS is not production ready, especially not on big data sets, tools are lacking, documentation is sparse at best and often conflicting and requires a lot of tuning.
    For larger volumes, neither XFS (lacks any form of working tooling to manage), ext3/4 (max 16TB unless you want to try and self-compile the tools and device drivers), nor ReFS (64TB max) is suitable.
    The resiliency and management tools for ZFS are WAY better than any of those mentioned before.

  ‱ @JonDisnard  2 years ago

    ZFS is a journaling filesystem; the ZIL, aka ZFS Intent Log, is a filesystem journal. The COW is really nice too, though.

    ‱ @marcogenovesi8570  2 years ago

      No it is not, the ZIL is a write cache where the whole data and metadata are written (still in a COW way) and then eventually flushed to the main array. That's its main purpose

    ‱ @JonDisnard  2 years ago

      @@marcogenovesi8570 You literally just described a filesystem journal. Whether one calls the transaction log a journal or an intent log is ambiguous, because it's the same thing. Please don't pedantically bikeshed, it's Friday for fsck sake đŸ€Ș

    ‱ @marcogenovesi8570  2 years ago

      @@JonDisnard No duh. Journals and write caching are both doing something roughly similar but have opposite goals and drawbacks.
      By your logic, adding a write cache turns a filesystem into a journaling filesystem, which is simply wrong.
      A file system journal exists for metadata (and possibly data) integrity but reduces performance.
      Write caching exists to increase performance but sacrifices integrity. If your cache is destroyed before it has flushed data to main storage, you lose data.
      The ZIL is very much designed for the latter; there is no need for a journal for integrity if the whole filesystem is COW to begin with.

  ‱ @bertnijhof5413  2 years ago

    I have belonged to the ZFS cult since March 2018. I live in the Caribbean and we have 2 to 30 power failures per week; that is why I had to move to ZFS, as too many files got corrupted, especially music files. Now in 2022 I have 3 data pools:
    - my 14 most-used Virtual Machines (VMs) on a 512GB NVMe SSD (3400/2300MB/s);
    - a data pool with 2 datasets, one for the VMs that still receive updates but are not used frequently. The other dataset contains my music, photos, family videos etc. These datasets are stored on two 500GB partitions in RAID-0 on an ancient 500GB and 1TB HDD. That last dataset is stored with copies=2, thus with one copy on each partition/disk. Once per month ZFS runs a scrub, and once in 4 years I noticed that one corrupted file had been corrected automatically. The pool runs with a SATA SSD providing a 90GB L2ARC cache and a 5GB ZIL sync write cache.
    - the last 500GB data pool at the end of the 1TB HDD is for archiving, so for VMs that don't receive updates, like Windows Vista or Ubuntu 4.10, and for old documents and software like 16-bit DOS and Windows programs. The pool runs with the same SATA SSD providing a 30GB L2ARC cache and a 3GB ZIL sync write cache.
    I have two backups, one on my Pentium and one on my 2TB laptop HDD, so I have exactly the same VMs and data available on laptop and desktop. I update the backups each Saturday with "zfs send | ssh receive" (see the sketch below). The Pentium 4 HT (1C2T; 3.0GHz) is 32-bit, so there I moved to FreeBSD 13.0, also with OpenZFS. That machine has a 1 Gbps Ethernet interface, but it just reaches ~220 Mbps due to a 95% load on one CPU thread. Since June 2019 I have used 4 leftover HDDs totalling 1.21TB (3.5" IDE 250+320GB and 2.5" SATA-1 320+320GB).
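    For context, the cache/log devices and the weekly "zfs send | ssh receive" backup described above map roughly to commands like these; the pool name, partition paths, snapshot names and host name are placeholders:

      zpool add tank cache /dev/sdX4     # L2ARC read cache on a SATA SSD partition
      zpool add tank log /dev/sdX5       # SLOG device for sync writes (the ZIL)
      # weekly incremental replication to the backup box
      zfs snapshot -r tank@sat-2022-06-04
      zfs send -R -i tank@sat-2022-05-28 tank@sat-2022-06-04 | ssh backup-server zfs receive -F backup/tank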

    ‱ @SteelHorseRider74  2 years ago

      Would attaching a (small) UPS to your main machines not be an option, to give them the 2 minutes to shut down cleanly and avoid FS corruption from a hard power loss?

    ‱ @bertnijhof5413  2 years ago

      @@SteelHorseRider74 I used such a UPS earlier, but the batteries were unusable within 1 to 2 years with that many power-fails. So now I use a 1200W Surge Protector to avoid damaged hardware and I use OpenZFS on all 3 PCs (desktop; laptop and backup-server) to avoid file corruption.
      Before the surge protector I needed 1 off-lease PC per year, and each year a power supply or a motherboard died. In the last 10 years I have not lost any hardware due to those frequent power failures.

  ‱ @blackmennewstyle  2 years ago +3

    ZFS also loves an NVMe SSD cache; it's actually recommended for improving performance :)

  ‱ @carlynghrafnsson4221  1 year ago

    OpenZFS is fast. It's offered with Fedora during install. But... if you have any platters attached... no. COW will fragment platters, and then your optimum IO scheduler would be mq-deadline, not "none", which is what you want for SSDs. So I would say build a Fedora SSD system to try it out. It should be super nice, way beyond btrfs, which actually underperforms vs. ext4.

  ‱ @TiagoFernandes81  1 year ago +2

    "A cult with integrity" :D

    ‱ @LAWRENCESYSTEMS  1 year ago +1

      I mean if you are going to join a cult, one with integrity makes sense.

  ‱ @CurtisShimamoto  2 years ago

    The most popular copy-on-write filesystem has got to be APFS. There are just too many iOS and MacOS devices out there for it not to have the lead.
    HFS+ filesystems were converted for iOS maybe six years ago, and the macOS devices followed shortly after. So APFS has been shipping on all their devices for five years or more (I think). Thus, APFS must have the widest-reaching user base... right?

    ‱ @catchnkill  2 years ago +1

      Yes. Largest user base. However it means nothing to Linux. It is Apple's IP and can only be used in Apple products. Apple also does not reveal much about APFS. Buy an Apple product and just use it.

  ‱ @GameCyborgCh  1 year ago

    Does ZFS have a mechanism to tell you "hey, I found a data inconsistency with this file, can you look it over and tell me if it's still fine?" When a bit flips in a picture or video, that change will most likely be unnoticeable, and it could be that the bit flip actually happened in the checksum. Being able to tell ZFS that this file is still good(ish) means it could simply calculate a new checksum.
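    As far as I know, ZFS does not offer an "accept this file anyway" operation, but it does report exactly which files failed their checksums so you can restore or delete just those; a minimal sketch, with a placeholder pool name:

      zpool scrub tank          # re-read and verify every block
      zpool status -v tank      # lists permanent errors with the affected file paths
      zpool clear tank          # reset the error counters once the files are dealt with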

  ‱ @kristiansims  2 years ago +2

    “A cult with data integrity”

  ‱ @boxerfencer  1 year ago

    Reiserfs wasnt mentioned? It was in the thumbnail.

    ‱ @LAWRENCESYSTEMS  1 year ago

      Reiserfs is not in the thumbnail and this was about ZFS.

    ‱ @boxerfencer  1 year ago

      @@LAWRENCESYSTEMS must have mistakenly clicked on the wrong video.

  ‱ @xmagat00  1 year ago

    With COW, what will happen if I update a file that is bigger than the remaining empty space?

    ‱ @LAWRENCESYSTEMS  1 year ago

      You can't put more data than you have space to save it.

    ‱ @xmagat00  1 year ago

      @@LAWRENCESYSTEMS Great, I was thinking of testing it on my virtual machine host, but if it is like this then it is simply a no-go. You saved me a lot of time and trouble. Thx

  ‱ @licmi1  2 years ago

    It would be nice to mention Ceph sometime :)

    ‱ @LAWRENCESYSTEMS  2 years ago

      ceph

    ‱ @licmi1  2 years ago

      ​@@LAWRENCESYSTEMS Ceph - open-source, distributed storage system... One episode dedicated to ceph would make my day :)

    ‱ @licmi1  2 years ago

      Just an idea for the new content

  ‱ @RustyBrakes  2 years ago +1

    Am I right in thinking this is why ZFS is "RAM hungry"? Let's say a large file is being written during a backup, it all lives in RAM until the write is done?

    ‱ @LAWRENCESYSTEMS  2 years ago +8

      Nope, zfs uses memory for caching

    ‱ @Felix-ve9hs  2 years ago +11

      ZFS isn't "RAM hungry"; ZFS just uses all the free system memory for caching. Unused RAM is wasted RAM.

    ‱ @bobedgar6647  2 years ago

      @@Felix-ve9hs All the memory you allow it to use. There's a tunable system parameter that limits how much memory ZFS will consume (see the sketch after this thread).

    ‱ @marcogenovesi8570  2 years ago

      that's how all writes are done on all operating systems and all filesystems. ZFS does its own caching for read performance but it's a setting you can change

    ‱ @lawrencedoliveiro9104  2 years ago

      All Linux filesystems can use RAM for caching. The key is that the kernel recognizes that these RAM blocks are cache, which is considered a low-priority use, and quickly dump them as necessary when an app needs the memory for more regular uses.
      ZFS, on the other hand, wants to play by its own rules, and not use the standard kernel caching system. This is where performance problems come in.
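      The tunable mentioned above is, on OpenZFS for Linux, the zfs_arc_max module parameter; a minimal sketch, where the 8 GiB figure is just an example value:

        # runtime change (bytes)
        echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
        # persistent across reboots
        echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf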

  ‱ @TheRealJamesWu  8 months ago

    Took me this long to realize, APFS is also a COW!

  ‱ @nevoyu  2 years ago

    I still prefer btrfs, if anything because ZFS is such a monoculture, to the point that every time I bring up some issue with btrfs on IRC someone is there to tell me to use ZFS (even in the btrfs-specific IRC channels).

  ‱ @gblargg  2 years ago

    Snapshots and rolling back seem like such a primary feature every computing machine should have these days. Try some update etc. Don't like it? Roll back, now it's as if you never did the update. Would take so much risk out of trying new things.

  ‱ @simonpeggboard4004  2 years ago

    Data is a collective noun therefore 'data is' is more appropriate than 'data are'.

  ‱ @skeetabomb  16 days ago

    The most obvious and significant reason for incorporating ZFS (or any COW) into standard operating systems (especially Windows) is to mitigate file-encrypting ransomware. I have heard MS is actually working on such a file system to become the new Windows default FS. Can't recall the name or when it is planned for mass rollout.

    ‱ @LAWRENCESYSTEMS  16 days ago

      Microsoft already has volume Shadow copies, and it's frequent that ransomware disables those.

  ‱ @johskar  1 year ago

    ZFS is more of a ROW than a COW filesystem though, isn't it?
    It redirects changes to a new location, instead of copying the existing data to a new location before writing to the original block.

  ‱ @df3yt  1 year ago

    ZFS rocks. It's a pity though that CoW implementations differ to the extent that btrfs sucks with power failures. I have fewer data issues with xfs+zfs vs btrfs+xfs. I like the btrfs features though, and hate using DKMS with ZFS, but btrfs just can't hold a candle to ZFS.

  ‱ @rfh1987  1 month ago

    I believe ReFS is also a COW FS.

  ‱ @rodfer5406  3 months ago

    😂👍

  ‱ @Hixie101  2 years ago +2

    ZFS, a cult with integrity! *Ba dum tss

  ‱ @ripe_apple  2 years ago +1

    Cult with integrity!!! đŸ€ŁđŸ€ŁđŸ€ŁđŸ€ŁđŸ€ŁđŸ€ŁđŸ€ŁđŸ€Ł

  ‱ @James_Knott  2 years ago

    I switched pfsense to ZFS when I installed it on a new computer last year. Any idea how ZFS compares with btrfs? I'm currently running EXT4 on my Linux systems.

    ‱ @LAWRENCESYSTEMS  2 years ago +2

      Best I can tell btrfs is catching up to some ZFS features, but does not have anything that would compel me to use it over ZFS.

    ‱ @marcogenovesi8570  2 years ago

      btrfs offers most of ZFS's features (compression, COW, snapshots and snapshot send/receive) but its RAID5/6 functionality is still experimental and unsafe. So on Linux it's fine for mirrors and RAID10 but can't compete with ZFS in a NAS.

  ‱ @minigpracing3068  2 years ago

    Cult of ZFS t-shirt?

    ‱ @LAWRENCESYSTEMS  2 years ago

      Right here lawrence-technology-services.creator-spring.com/listing/cult-of-zfs

  ‱ @alexmannen1991  2 years ago

    God, I wish Microsoft updated their 40-year-old file system.

  • @kamertonaudiophileplayer847

    But why does ZFS require a lot of RAM?

  ‱ @MatthewHill  1 year ago

    Think you maybe missed an opportunity to call out the "atomic cows" here. :-P

  ‱ @lawrencedoliveiro9104  2 years ago

    ZFS is to filesystems what Java is to programming languages. Best not to run it on a general-purpose machine. Put it on a dedicated storage appliance, where it can chew up all the RAM to its heart’s content.

  ‱ @beauregardslim1914  2 years ago +2

    Cult comment made me laugh.

  ‱ @gedavids84  2 years ago

    People call it a cult only because they don't understand the truth of ZFS' brilliance.

  ‱ @starlite528  2 years ago +2

    I guess my question now is 'what happens if a power failure occurs at the moment the filesystem tries to unlink the old blocks?' I guess there is no perfect solution.

    ‱ @f-s-r  2 years ago

      My thoughts exactly. But a UPS would allow the computer to do a correct shutdown and avoid that situation.

    ‱ @sumduma55  2 years ago

      You would end up in a state using part of the new and part of the old data. And considering how modern drives write to a cache on the drive before committing to disk, that might be the extreme version of events. Most modern drives do not allow direct low level access to the drive geometry by the operating system due to the complexity involved with their sizes and compression at the hardware level. The drive firmware translates this instead of the operating system. At least that is how I've understood it works since about 2 gig sized drives and the introduction of logical block addressing.
      This is also why most of the best performing drives tend to have rather large cache memory sizes.

  ‱ @andriidrihulias6197  2 years ago

    Why ZFS? Why not btrfs? (btrfs can dynamically change RAID type from RAID1 to RAID5 etc. without destroying the array, but ZFS has none of this.)

  ‱ @gjermundification  1 year ago

    redirect on write I hear

    ‱ @tbettems  1 year ago

      I was thinking the same... Using the term Copy On Write is confusing here. What is explained looks more like Redirect On Write: new blocks are written at another location, then you unlink the old block. Real Copy On Write would mean you first copy the old data to another location before overwriting the original block. There is no copy in the process explained here, hence it is confusing...

  ‱ @iamamish  1 year ago

    In an attempt to spice up our relationship, I asked my wife to come to bed wearing a zfs shirt. Now we're divorced.

  ‱ @tolpacourt  2 years ago

    Better Cult of ZFS COW than Cult of The Dead Cow.

  ‱ @mathesonstep  2 years ago

    Doesn't ZFS require more RAM? Does that mean pfSense requires more RAM?

    ‱ @cjchico  2 years ago

      ZFS only uses RAM for cache, so it is not absolutely necessary.

    ‱ @marcogenovesi8570  2 years ago

      unless your pfsense is running a huge storage array and handling a ton of disk activity, no it does not

  ‱ @asternSF  2 years ago

    Wait, all these years I've been pronouncing it as betterfs, have I been wrong? đŸ€”

    ‱ @LAWRENCESYSTEMS  2 years ago

      Well, I don't think it's "Better" so I will keep calling it butter.

  ‱ @39zack  1 year ago

    So Bart was wrong all along? đŸ€”

  ‱ @ashuggtube  2 years ago

    Cult of ZFS? Some people are so silly!

  ‱ @OcteractSG  2 years ago

    The way ZFS handles a degraded array is far superior to BTRFS.

  ‱ @TechySpeaking  2 years ago

    First

  ‱ @Braiam  2 years ago

    It's funny hearing Tom say btrfs as "butterFS". Or maybe it's because I play videos at 2x.

    ‱ @williamp6800  2 years ago +5

      Nothing unusual in that. It’s frequently referred to as butterFS.

  ‱ @og_myxiplx  2 years ago +1

    Except ZFS isn't copy-on-write, it's redirect-on-write. Copy on write means copying existing data to a new place before writing your new data to the original location, ZFS doesn't do that, new data gets written to a new location, and old data stays in place. Data is redirected, NOT copied.
    Your visualisation 4 mins in shows exactly this too. Copy-on-write is an older and less efficient approach than the one ZFS uses, and it's rarely found in use today.
    And the feature you're actually discussing for the first 3-4 mins on data integrity is more accurately called atomic writes, with new data written to stable storage and a single atomic tree update only occurring once all the data is safe.

    ‱ @LAWRENCESYSTEMS  2 years ago

      I am going to keep calling it "Copy On Write", the same as it's called in all the documentation and by the people working on the project.

  ‱ @richardbennett4365  10 months ago

    ZFS is a ROW.
    SO, Don't HAVE A COW.

  ‱ @-Yogo  2 years ago +4

    my wife was really disappointed last time i was this early....

  ‱ @sanjeev2003  2 years ago +1

    First finally. đŸ˜