How to Layout 60 Hard Drives in a ZFS Pool & Benchmarking Performance.

  • Published 22 Oct 2021
  • Benchmark link
    openbenchmarking.org/result/2...
    More in-depth discussion about ZFS
    forums.lawrencesystems.com/t/...
    #ZFS #Storage #TrueNAS
  • Science & Technology

Comments • 187

  • @berndeckenfels
    @berndeckenfels 2 years ago +111

    Multiple narrower vdevs have the advantage that when you want to replace disks with larger models, you can do it in waves (see the sketch after this thread). Especially good for cost-sensitive homelabs.

    • @PanduPoluan
      @PanduPoluan 2 years ago

      Good point! So, multiple mirror vdevs?

    • @chrishorton444
      @chrishorton444 2 years ago +10

      Mirrors waste the most drive space. I would recommend 5+ drives with Z2 or Z3, which uses 2 or 3 drives for redundancy and allows 2 or 3 drives to fail, while a mirror only allows a single failure before data loss. I have upgraded 3-drive and 6-drive vdevs without any issues. Even on a 24-drive vdev, upgrading worked well for me. In the future I will limit it to 12 drives per vdev max.

    • @MegaWhiteBeaner
      @MegaWhiteBeaner 2 years ago

      Homelabs have the advantage of deleting old data when it's irrelevant especially in cost sensitive homelabs.

    • @SupremeRuleroftheWorld
      @SupremeRuleroftheWorld 2 years ago +1

      home users should not even consider ZFS if cost is any consideration. unraid is vastly superior in that regard.

    • @GardenHard2012
      @GardenHard2012 2 years ago +1

      @@SupremeRuleroftheWorld I'm going the Unraid route with ZFS but it's not easy to install it there. I set it up and lost lots of data. It was because of how I tried to do my mounts.
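
    For readers who want to try the "replace in waves" upgrade described at the top of this thread, a minimal sketch, assuming OpenZFS (pool and device names are placeholders; exact device naming depends on your platform):

      zpool set autoexpand=on tank     # let the vdev grow once every member has been upsized
      zpool replace tank da4 da10      # swap the first old disk for a larger one
      zpool status tank                # wait for the resilver to finish before touching the next disk
      zpool replace tank da5 da11      # repeat for each remaining disk in that vdev
      zpool online -e tank da10        # -e expands a device if autoexpand did not pick it up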

  • @huseyinozsut9949
    @huseyinozsut9949 a year ago +3

    Through some research, some reasoning, and some luck, I managed to set up a TrueNAS server with 12 x 2TB disks in RAIDZ2: 6 disks in one vdev, 2 vdevs in one pool. As you explain in the video, this is basically not narrow, not wide.
    This was before this video; it would have helped me a lot. Very good content! Thank you.

  • @pealock
    @pealock 2 years ago +5

    You did a great job of taking something that seemed daunting and making it seem trivial! Thank you!

  • @KrumpetKruncher
    @KrumpetKruncher a year ago

    Awesome video! I love how you lay out the content, it's finally sinking in for me, thank you!

  • @shaunv1056
    @shaunv1056 2 years ago +19

    Thanks Lawrence! It really helps me prep my ZFS layout even at a smaller scale

  • @krisnagokoel9397
    @krisnagokoel9397 a year ago +2

    Thank you for the easy explanation. I'm actually starting a project and was looking for something like this.

  • @Exploited89
    @Exploited89 2 years ago +2

    Looking forward for the next videos on ZFS!

  • @nickf3242
    @nickf3242 2 years ago

    OMG, I needed this vid! I built a server during 2020 with 100TB from 10 x 10TB shucked WD drives, plus 2TB (2 x 500GB WD Red SSDs and 2 x 500GB WD Red M.2s), and maxed out a SilverStone CS381 chassis. I used the trial of Unraid to preclear all my 10TB drives but decided not to pay for the license; I really like TrueNAS and wasn't too impressed with Unraid. But I have been frozen on what to do next, meanwhile my 100TB-plus of current storage across a WD PR4100 (48TB), a WD EX4100 (40TB), multiple misc 4TB-10TB internal and external single drives in my HTPC, and even now trickling over to flash drives, is all maxed out. This came just in time. I can't wait for more!

  • @tushargupta84
    @tushargupta84 2 years ago +3

    thank you for the very informative video..
    Now with this information i will be able to plan my pools/vdevs in a better way...
    keep up the good work Mr

  • @colbyboucher6391
    @colbyboucher6391 a year ago +1

    The clarification at the beginning of this video makes it pretty useful even for people putting together MUCH smaller systems.

  • @JayantBB78
    @JayantBB78 a year ago

    I have been a subscriber for more than 2 years now. I have learned a lot about TrueNAS and storage servers.
    Thanks a lot. 🙏🏻👍🏻

  • @CMDRSweeper
    @CMDRSweeper 2 years ago +12

    Interesting take; my paranoia landed me at 60% data efficiency for my mdadm array way back in 2010.
    Everyone said, "RAID 6 is such a waste, your 6 drives are wasting so much potential space."
    Well, I had a drive fail, and then a software borkup during the resilver all of a sudden gave me a redundancy of 0.
    The data was still fine though, and I rebuilt onto the drive the software had failed, then operated in a degraded state for a week until I got the RMAed failed drive back.
    A resilver after that point and I was back where I wanted to be.
    But it did leave me shocked and made me build another NAS for 2nd-tier backup that was based on ZFS.
    Similar 6 drives, but built on RAIDZ2, so still a loss of efficiency, but it satisfied my paranoia.

    • @jttech44
      @jttech44 6 months ago +1

      When I build production arrays, they're almost always mirrors, and, I source the mirror pairs from different suppliers so that I get different manufacturing runs of drives in each mirror. The idea is, if a given batch of drives has a defect in the firmware or hardware, you'll only lose 1 half of a mirror, array stays up and no data is lost. Sure it's like 48% real world efficiency, but, drives are cheap and data loss/downtime is expensive.
      Also, that paranoia paid off in spades in the 3TB seagate era, where annual failure rates were like 4-5% and drives were 3x the cost because all of the factories in thailand were flooded. Dark times, but didn't lose a single bit to drive failure.

  • @lipefas
    @lipefas 2 years ago

    Awesome Tom. Thank you for the great explanation as usual.

  • @ewenchan1239
    @ewenchan1239 2 years ago +3

    Thank you for making this video! Very informative.
    I've been using ZFS since ca. 2006, so I was having to deal with this even way back then.
    For a lot of people who are just entering into this space now, it is something that, in my experience, not a lot of people think about when they are trying to plan for a build because most home and desktop users, typically don't have to really think too much nor too hard about centralised storage servers (and the associated tails and caveats that it comes with).
    When it comes to trying to mitigate against failures, there's a whole slew of options nowadays which range from software defined storage, distributed storage, and also just having multiple servers so that you aren't putting (quite literally) all your "eggs" (read: data) into one "basket" (read: server).

  • @tekjoey
    @tekjoey 2 years ago

    This is a great video. Very helpful, thank you!

  • @Moms58
    @Moms58 a year ago +2

    Outstanding video, thank you for your guidance and advice...

  • @zesta77
    @zesta77 2 years ago +26

    If you want to do mirrored pairs and you have multiple HBAs, be careful with the "repeat" option, as it is not smart enough to balance the mirrored pairs across the different controllers for maximum redundancy.

  • @DrRussell
    @DrRussell 6 months ago

    Fantastic information for a beginner like me, thank you

    • @ginxxxxx
      @ginxxxxx 2 months ago

      Did you go mirror? 50% loss of capacity, but best perf and pure ease on everything else.

  • @QuentinStephens
    @QuentinStephens 2 years ago +12

    An interesting implementation I saw 15+ years ago was RAID / Z but the stripes were down backplanes so drive 1 of the first array was on backplane 1, drive 2 of the first array on backplane 2, drive 3 on the third... then drive 1 of the second array back on the first backplane and so on. Then those arrays were consolidated into logical drives. This meant that the whole was not only resilient to multiple drive failures but the failure of a backplane too.
    I'm a big fan of mirrors for the home environment but they're not without non-obvious problems. The big problem with mirrored drives is loss of space, but not so obvious is that mirrored drives tend to be physically proximate. This means that something that causes one drive to fail may also affect the drive next to it - the mirror.

    • @yasirrakhurrafat1142
      @yasirrakhurrafat1142 a year ago

      Yeah, although we live on the chance that the mirror drive doesn't fail.
      You hear about deduplication?

  • @h4X0r99221
    @h4X0r99221 2 years ago +2

    Again! This guy, man, just as I built my 24-drive NAS... how does he always time these things right?! Thanks for all the videos Tom!

  • @chriskk4svr876
    @chriskk4svr876 2 years ago +9

    Another thing that may need to be considered in drive layouts is network throughput to the storage system itself. You may have the fastest theoretical drive layout at the cost of redundancy, but you'll never be able to realize that true performance if you are unable to write that fast over the network (mostly thinking in terms of VM backend storage or multiple-user access).

    • @Wingnut353
      @Wingnut353 2 years ago +2

      Or if you have fast enough network.. but the drives are too slow (in a lower end system).

  • @dennischristian9976
    @dennischristian9976 a year ago

    Another great video!

  • @TrueNAS
    @TrueNAS 2 years ago +8

    ZFS Love.

  • @JordiFerran
    @JordiFerran 2 years ago +5

    I remember an interview with an expert dedicated to storage using ZFS and Ceph; his thinking, based on real testing, was to limit a pool to 6 disks, using 2 spares for metadata, because rebuilding a failed disk could take days, and increasing the number of disks in a pool could increase rebuild time (weeks?); having a pool in a sensitive state for days is too risky for business.
    I remember having 2 disks fail at the same time; you might not lose the data, but if the bit-correcting subsystem detects an error, having no spare data means it cannot repair. This is why my thinking is to have 8 disks with 3 for redundancy; having 5 disks for data reading at 2Gbit each means saturating a 10Gbit network (enough for most people), and losing two disks in that pool type is an accident one can tolerate.

    • @grtxyz4358
      @grtxyz4358 2 years ago +1

      That's true of regular RAID too; now that spinning disks are incredibly large, you don't want a vdev to be less than RAIDZ2 or to have too many disks. My experience though is that the resilvering process after replacing a disk on ZFS is considerably faster than a RAID rebuild used to be (and that was even with smaller drives).
      But it depends a lot on how full your pool is. If it's less full it'll rebuild a lot faster in ZFS, as a RAID rebuild will still go over all the blocks while ZFS only goes over the data actually used (if I understood it correctly).

  • @patrickdileonardo
    @patrickdileonardo a year ago +2

    Thanks!

  • @ihateyoutubehandles
    @ihateyoutubehandles 2 years ago +1

    I could listen to a man with a sriracha mug forever!!!

  • @kurohnosu
    @kurohnosu 2 years ago

    Thanks for the great video. I just have a question: is it possible to have different shapes of vdevs but with each vdev having the same size? Let's say I have a RAIDZ2 of 8 drives (2TB each) in a vdev, so I have a vdev of 12TB; can I add another vdev of 3 disks (6TB each) to the same pool? Also, is it possible to replace one vdev with another to migrate the hardware, or do I have to create a whole new pool?

  • @skug978
    @skug978 2 years ago

    Great content.

  • @gregm1457
    @gregm1457 2 years ago

    I like the simple wide vdev if it's not too crazy. The tricky part happens a couple of years after you create the system, when a disk fails at the most awkward moment: how much do you remember about the setup and how to recover it?

  • @rollinthedice7355
    @rollinthedice7355 2 years ago

    Nice job on the new outro.

  • @mbourd25
    @mbourd25 2 years ago +3

    Thanks Tom for the awesome explanation. Btw, what is the diagram software you were using? Thanks

  • @barryarmstrong5232
    @barryarmstrong5232 2 years ago +7

    It took just under 24 hours to resilver one 14TB drive in my 8-drive RAIDZ2 array. Faster than I expected, tbh, and on par with a parity rebuild inside Unraid.

    • @hadesangelos
      @hadesangelos 2 years ago +2

      thanks for the info, it seems like no one talks about rebuild times

  • @CDReimer
    @CDReimer 2 years ago

    I currently have four two-drive (mirrored) vdevs on a two-port SAS controller card for my home file server. Would it make a performance difference if I had two four-drive vdevs (a vdev on each port)?

  • @dannythomas7902
    @dannythomas7902 2 years ago +4

    You make me want to format all my drives and forget it

  • @tgmct
    @tgmct 2 years ago +3

    I was hoping that you would go into how you built this from a hardware perspective. These are large arrays, and as such there are lots of disk-controller throughput considerations. How about redundant power and networking? There are all sorts of clustering issues that come into play too.
    OK, I just expanded this video series...

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago +2

      The system is a 45Drives XL60 and I did the benchmarking before the full review, which is coming soon.

  • @vulcan4d
    @vulcan4d a year ago

    Have an off-site backup just in case. My RaidZ2 Zfs completely failed. One drive failed so I ordered another. While waiting the second drive failed. Then I installed the new drive and another drive failed during the resilvering. Now the Zpool is gone. Luckily I have an off-site backup.

  • @Sovereign1992
    @Sovereign1992 2 years ago +1

    Great video! What are your thoughts of building a FreeNAS/TrueNAS CORE VDEV layout using RaidZ3 exclusively?

    • @chrishorton444
      @chrishorton444 2 years ago

      It works well but makes sense for more than 6 drives. I don’t think z3 is an option below 5 or 6 drives.

    • @Sovereign1992
      @Sovereign1992 2 years ago

      @@chrishorton444 It requires 8 drives minimum and has 3-disk resiliency. If you're particularly skittish about the data and don't want it to be lost, RAIDZ3 would be your ideal choice.

  • @GCTWorks
    @GCTWorks 2 years ago +4

    Do you recommend different layouts depending on drive types? SSD, HDD, or even by interface, NVMe, SAS, SATA, etc.

    • @texanallday
      @texanallday 2 years ago

      I think best practice is to definitely separate spinning rust, SSD, and NVMe into separate pools (not just vdevs). Not sure about SAS vs SATA, though.

  • @phychmasher
    @phychmasher a year ago

    Team Mirrors! You'll never convince me otherwise!

  • @HerrMerlin
    @HerrMerlin 5 months ago

    Regarding Mirrors, you may do 3 or 4 way mirrors

    • @ginxxxxx
      @ginxxxxx 2 months ago

      Did you go mirror? 50% loss of capacity, but best perf and pure ease on everything else.

  • @UntouchedWagons
    @UntouchedWagons 2 years ago +5

    Another thing worth considering is upgradeability. If you start with a 10 drive wide R2 and you need more storage you need to buy 10 drives. If you want to replace all the drives in a RAIDZ-type VDEV it could take a while since you'd have to replace one drive at a time if you don't have any spare drive bays.
    With mirrors you really don't have that issue. If you want to expand your capacity you only need two drives. If you want to replace the drives in a vdev it's much easier and probably quicker since there's no parity calculations to do.

    • @pfeilspitze
      @pfeilspitze 8 months ago

      Even if I can afford the 50% overhead, I'd be tempted to do that as 4-wide RaidZ2 instead of mirrors. Especially with lots of vdevs where the unlucky 2-drive loss would kill the whole pool. And it's narrow enough that the write overhead would be pretty low.
      Parity calculations are completely trivial compared to the cost of writing to drives. Not even worth talking about in terms of speed.

  • @marcq1588
    @marcq1588 5 months ago

    This is a great video about ZFS VDEVs.
    Although Mirror VDEV are only 50% capacity, I would say there are some very important advantages beside the speed.
    1. Easier to upgrade the capacity of an entire pool, mirror by mirror
    2. Faster rebuild if one drive fails. It is just a copy of the other drive straight, and no overhead about finding each remaining drive's parity
    Did you find out how long a VDEV RAIDZ1 or 2 or 3 would take if all drives are 12TB for example? vs a Mirror VDEV?
    That would be a very interesting number to show.
    You did not mention the use of Mirror VDEV like Raid 10...

    • @ginxxxxx
      @ginxxxxx 2 months ago

      Did you go mirror? 50% loss of capacity, but best perf and pure ease on everything else.

  • @berndeckenfels
    @berndeckenfels 2 years ago

    Did you compare z2 with z1 as well? I didn’t find the benchmark results in the linked forum post?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago +2

      Only Z2, and I added the benchmark to the description (forgot to when I published): openbenchmarking.org/result/2110221-TJ-45DRIVESX73

    • @berndeckenfels
      @berndeckenfels 2 years ago

      @@LAWRENCESYSTEMS thanks for the link

  • @McCuneWindandSolar
    @McCuneWindandSolar 2 years ago

    What I wish they would do: I have a Supermicro case with 24 bays. I wish you could set up a page where you have a layout of the 24 bays as they are in the case, so when a drive goes bad you can go right to the bay, swap it, and be up and running, instead of looking for the serial number of the drive. If you want to check on a drive you could just click on the bay and it would give you all the data you would need, etc. When I found a drive starting to give problems, it took a while just to find that drive.
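
    TrueNAS does expose pool and disk details in the UI, but from the shell a rough workflow for matching a failing disk to a physical bay looks like the sketch below (pool name, device name, and enclosure/slot numbers are placeholders; sas3ircu only applies to LSI/Broadcom HBAs):

      zpool status -v tank                     # identifies which device is FAULTED or throwing errors
      smartctl -i /dev/da7 | grep -i serial    # read that disk's serial number
      sas3ircu 0 display                       # list the HBA's enclosure/slot <-> serial mapping
      sas3ircu 0 locate 2:5 ON                 # blink the locate LED on enclosure 2, slot 5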

  • @swezey1
    @swezey1 a year ago

    Great video explaining the tradeoff between storage efficiency and performance. What I am wondering is, what impact does the Z level have on performance? Let's say I have a 6-drive vdev; what is the performance impact of Z1 versus Z2 versus Z3? Has anyone studied this?

    • @mimimmimmimim
      @mimimmimmimim 9 months ago

      Parity calculation, for one thing...
      When writing, and when resilvering...

    • @pfeilspitze
      @pfeilspitze 8 months ago

      You have to write the parity, so the write overhead is the same as the storage overhead (assuming async writes that can be chunked up fully).
      All else being equal, with 10 drives RaidZ1 will be 10% slower for writes, RaidZ2 20%, and RaidZ3 30%.
      Reads in a healthy pool can generally just not care, since they'll just not read the parity data.
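
    The percentages above are just the parity fraction of the stripe; a quick back-of-the-envelope check for a 10-wide vdev (assumes full-stripe async writes as described, so real-world numbers will vary):

      N=10                                   # drives per RAIDZ vdev
      for P in 1 2 3; do
        echo "RAIDZ$P: ~$(( 100 * P / N ))% of raw write bandwidth goes to parity"
      done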

  • @npetalas
    @npetalas 3 days ago

    Hi Tom, thanks for the video, these benchmark results make no sense to me at the moment, is there a massive CPU bottleneck or something?
    I would expect 30x mirrors to be 5x faster than 6x raidz2-10 but it's only ~40% faster?
    Similarly I thought 12x raidz2-5 would be twice as fast as 6x raidz2-10 but again not even close?
    Why is write performance not scaling proportionally with the number of vdevs?

  • @kumarmvnk3654
    @kumarmvnk3654 a year ago

    My 5 x 8TB = 40TB SSD pool degraded within 3-4 months, with two disks!!!!! Luckily I have a backup of the data, but I just can't see the degraded drives to wipe them :( Any suggestions?

  • @rolling_marbles
    @rolling_marbles 2 years ago +1

    I struggled with this when first starting to use FreeNAS. Got 12 drives, and using it for iSCSI for ESXi. Ended up going with 4 vDEVs, 3 wide on Z1. Seems like the best balance between performance, capacity, and fault tolerance.

  • @bryansuh1985
    @bryansuh1985 2 years ago +3

    Can you make a video explaining dRAID please? I don't get it... at all. I understand it's meant for bigger storage servers, but that's about it :(
    Thanks in advance!

  • @Katnenis
    @Katnenis 2 years ago

    Hi Tom. Can you create a video on how to access TrueNAS from outside the network, please?

  • @SteelHorseRider74
    @SteelHorseRider74 2 years ago +2

    Great video, thanks for providing.
    I can remember reading a document a while ago from ye ZFS Gods which said no moar than 5-7-9 or 7-9-11 disks per vdev (Z1-Z2-Z3); their argument for why was quite obvious and made sense, but I cannot find a reference to it any more.

    • @kjeldschouten-lebbing6260
      @kjeldschouten-lebbing6260 2 years ago

      It's bullshit; that was a thing with ZFS about 10-20 years ago.

    • @SteelHorseRider74
      @SteelHorseRider74 2 years ago

      @@kjeldschouten-lebbing6260 I am sure it wasn't bullshit at the time it was written. Happy to see some new source of enlightenment.

  • @LaLaLa-dj4dh
    @LaLaLa-dj4dh 2 years ago

    Hope to get the translation of video subtitles

  • @joshhardin666
    @joshhardin666 2 years ago

    Couldn't you then make up for the write performance hit of having really wide vdevs by either having an abundance of RAM or using a couple of mirrored high-endurance SSDs as a write cache? I currently have a small home NAS (8 x 10TB drives in a single RAIDZ2 vdev, i7-2600K, 24GB RAM, 10G Ethernet) running TrueNAS CORE (though I may move to SCALE when it comes out of beta, depending on how much effort it will take to recreate my jails as LXD containers). I use it as an archival backup and media storage vault (Plex), but if I had more machines transferring data to or from it in a way that I would want to be performant, I could certainly understand why one would want to implement some kind of caching.

    • @pfeilspitze
      @pfeilspitze 8 months ago

      Write cache (ZIL) only matters for sync writes, which you generally don't use for backup and media store uses.

  • @RahulAhire
    @RahulAhire a year ago

    Do you think RAIDZ3 is overkill with enterprise SSDs like the Kioxia CM6/CD6, since they last longer than HDDs?

  • @afikzach
    @afikzach 2 years ago +2

    Can you please share the full command and method you used for benchmarking? I have a similar system and I'm interested in testing it.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago +3

      www.phoronix-test-suite.com/ and the command: phoronix-test-suite benchmark pts/fio (see the sketch after this thread)

    • @afikzach
      @afikzach 2 years ago

      @@LAWRENCESYSTEMS thank you
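
    For anyone reproducing this, a minimal sketch of the benchmark run Tom references (assumes the Phoronix Test Suite is installed; pts/fio typically prompts for the test options and the disk target, so point it at a dataset on the pool you want to measure):

      phoronix-test-suite install pts/fio      # download and build the fio test profile
      phoronix-test-suite benchmark pts/fio    # run it; repeat once per pool layout and compare results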

  • @rcmaniac10
    @rcmaniac10 2 years ago

    Let's say I buy a 60-drive Storinator and only put 10 drives in it. Can I later put in more drives and add them to the same pool?

    • @pfeilspitze
      @pfeilspitze 8 months ago +1

      Yes, you can add more vdevs to the pool later.
      So if, for example, you have those 10 drives as two vdevs of five, you can add another 5 drives at some point in the future as a third vdev.
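
    A sketch of what that looks like on the command line (pool and device names are placeholders):

      zpool create tank raidz2 da0 da1 da2 da3 da4    # start with one 5-wide RAIDZ2 vdev
      zpool add tank raidz2 da5 da6 da7 da8 da9       # later, add a second vdev of the same shape
      zpool list -v tank                              # shows both vdevs; new writes stripe across them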

  • @handlealreadytaken
    @handlealreadytaken 2 years ago +1

    The advantage of narrower but more vdevs is that you can write to more vdevs at a time. IOPS for a vdev are limited to those of the slowest drive. The parity calculation has little to do with it.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 2 years ago

      Not really. Write IOPS are indeed limited to the slowest drive's IOPS, but reads go only to the intended drive. Therefore reads are limited by how well the data is spread between the drives within the vdev. Optimally, read IOPS scale linearly with vdev size.
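
    A rough illustration of the rule of thumb in this thread, that random write IOPS scale with the number of vdevs rather than the number of disks (the per-disk figure below is an assumption):

      DISK_IOPS=150      # assumed random IOPS for a single 7200 RPM HDD
      VDEVS=12           # e.g. 60 drives laid out as 12 x 5-wide RAIDZ2
      echo "approx pool random-write IOPS: $(( DISK_IOPS * VDEVS ))"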

  • @psycl0ptic
    @psycl0ptic 8 months ago

    Can you do striped mirrors?

    • @ginxxxxx
      @ginxxxxx 2 months ago

      The power of mirrors is raw; once you start using the CPU to calculate stripes, that power is less.
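
    In ZFS terms, striped mirrors are simply a pool made of several mirror vdevs; a minimal sketch (device names are placeholders):

      zpool create tank \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5     # writes stripe across the three mirrors; each mirror can lose one disk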

  • @connellyjohnson145
    @connellyjohnson145 a month ago

    Can you have multiple vdevs of different RAID types?

  • @w.a.hawkins6117
    @w.a.hawkins6117 a year ago

    From what I understand, the choice of RAID type doesn't just come down to the width of the vdev, but also the size of the drives themselves. Members of the TrueNAS forums told me it's risky and generally not a good idea to run RAIDZ1 on vdevs consisting of drives with a capacity larger than 2TB. Even with larger drives having a 1-in-10^15 URE rate, the risk of another error during a resilver is just too high according to them. I get the impression that RAIDZ1 is frowned upon by the TrueNAS and ZFS community.
    Would you agree? There seems to be a lot of conflicting information about this. I recently bought 4 x 12TB drives with the intention of putting them in a RAIDZ1 pool, but I've been advised by some people to seriously consider setting up two mirrored vdevs instead, which would mean 50% storage efficiency.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS a year ago

      RAID is not a backup so it comes down to your risk tolerance. I have Z1 pools with drives greater than 2TB for all my videos but they are backed up hourly to another older slower system.
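
    The hourly backup Tom describes is typically done with snapshots plus zfs send/receive to a second machine; a minimal sketch (pool, dataset, host, and snapshot names are placeholders):

      zfs snapshot tank/videos@hourly-1500
      zfs send -i tank/videos@hourly-1400 tank/videos@hourly-1500 | \
        ssh backupbox zfs receive backup/videos    # incremental send of only the changes since 1400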

  • @markstanchin1692
    @markstanchin1692 2 years ago

    I'm going to be setting up Proxmox storage, but on a smaller scale: a Supermicro motherboard with 2 x 64GB SATA DOMs in a mirror for the OS and 2 x Samsung 970 1TB SSDs in a RAID 1 for storage. But I was reading that ZFS and SSDs don't mix; is that true? What should I use instead? Thanks.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago +1

      ZFS works fine with SSDs; don't let some outdated bad information bias your decision.

  • @zyghom
    @zyghom 8 months ago

    I just did my first NAS and decided to do 2 vdevs, each being a mirror, so 4 HDDs with the total capacity of only 2. In my tests RAIDZ is quite a bit slower than mirrors, maybe even up to 25%.

    • @ginxxxxx
      @ginxxxxx 2 months ago

      You did it right... I have not watched this video, but I know mirrors are the way.

  • @danielvail5196
    @danielvail5196 8 months ago

    Is it ok to use 10 drives in 1 raidz3 vdev? For home use.

  • @berndeckenfels
    @berndeckenfels 2 years ago

    It doesn't show the layout of the controllers. Are the 5/10 drives distributed across enough controllers that one can fail?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago

      Each controller handles 20 drives so a controller failure would take the system offline until it is replaced.

  • @chrishorton444
    @chrishorton444 2 years ago +3

    The widest vdev I have set up was 48 drives. I noticed having more drives also slows boot time. My server has 36 drives internally and 96 connected in JBODs. Boot time can be up to 40 minutes.

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      What is the power consumption of your server? ;-)

    • @chrishorton444
      @chrishorton444 2 years ago +2

      My UPS reports around 1.2kw usage

    • @twistacatz
      @twistacatz 2 years ago +3

      I have the same issue. I have 83 disks on one of my servers and boot time is about 20 minutes.

    • @LtdJorge
      @LtdJorge 2 years ago

      40 minutes? Holy shit

    • @carloayars2175
      @carloayars2175 2 years ago +3

      I've got 160 and it's about 5 minutes.
      You need to flash the HBAs without the onboard BIOS since you don't need it, or, depending on the server, set it to skip BIOS checks (in UEFI mode).
      My main server has 4 HBAs / 16 ports / 64 channels that connect to 22 internal SAS drives and a bunch of disk shelves. Each shelf has 2 SAS expanders.

  • @youtubak777
    @youtubak777 10 months ago +3

    My favorite - 8 drives in raidZ2. I feel like it's the best of all worlds :D And usually just one data vdev, with 8 TB drives, that's 48 TB and that's honestly plenty for most people :D

    • @jttech44
      @jttech44 6 months ago

      Yes and only a 33% chance of a failure during rebuild... or, you know, you can have a 6% chance with mirrors, and still have 32TB available, which is also plenty for most people.

    • @ginxxxxx
      @ginxxxxx 2 months ago

      Just start all over again with mirrors... you will learn why after a lifetime.

  • @fbifido2
    @fbifido2 2 years ago +2

    Which would give the best performance for VM storage?
    5 drives w/ RAIDZ1
    10 drives w/ RAIDZ2

    • @ZiggyTheHamster
      @ZiggyTheHamster 2 years ago +1

      I would go with 5 drive / RAIDZ1 x 2 for a few reasons:
      1. data is striped across the vdevs, so random read/write performance is better
      2. you can increase the size of a vdev by replacing a small number of drives (yank one drive, insert bigger one, resilver, repeat, eventually you have increased the vdev size). this makes it easier to upgrade a whole system when you're close to filling it up, because you give yourself breathing room sooner
      3. each vdev can be operated on different HBAs (and should be), which map to completely separate PCI-e lanes. this increases performance and resiliency because your stripe now goes across CPU cores too (if your CPU has an architecture where different cores have faster access to specific PCI-e lanes).

    • @pfeilspitze
      @pfeilspitze 8 months ago

      RaidZ2 vs RaidZ1 is a resiliency question. It doesn't make sense as an axis for performance. If you want to survive a double failure, use RaidZ2.
      The real trade-off you *can* make is in storage efficiency. More-but-narrower vdevs are more performant but have more storage overhead. If you'd rather pay for more disks to go faster, use smaller vdevs.

  • @magicmanchloe
    @magicmanchloe a year ago

    Why not go for 5-drive RAIDZ1? Would that not give you the same resiliency as 10-drive Z2 but also better performance?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS a year ago +1

      You would not be able to lose 2 drives per vdev.

    • @magicmanchloe
      @magicmanchloe a year ago +2

      I was going to argue it was the same risk, but then I realized it's not, because if you lose 1 vdev you lose them all. Thanks for putting it that way. Good clarification.

  • @heh2k
    @heh2k 2 years ago

    What about 5 drive raidz vs 10 drive raidz2 trade-offs?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago

      The issue is that with RAIDZ1 there is a greater chance that if a drive fails, another may fail during the resilver process, causing a total failure of the pool.

  • @Solkre82
    @Solkre82 2 years ago +1

    Hmm yes. This helps configure my 4 drive box lol.

  • @peny1981
    @peny1981 2 years ago

    How long does a rebuild take for 5 x 18TB disks in RAIDZ2?

  • @DangoNetwork
    @DangoNetwork 2 years ago +4

    It's easier to decide on a pool with a large number of drives. But with a small pool of 12 drives or fewer, that's where it's hard to make decisions.

  • @igoraraujo13
    @igoraraujo13 2 years ago

    How do I do an immutable backup with CentOS?

  • @Saturn2888
    @Saturn2888 6 months ago

    This is a bit old now; I'd recommend dRAID in 2023. My fastest use of 60 HDDs was 4 x dRAID2:15c:5d:1s (see the sketch after this thread). It's not the best redundancy, but it's way better than RAIDZ3 in terms of how quickly a distributed resilver occurs. The faster you're back to working normally, the better. You essentially shrink the vdev on the fly until you physically replace that drive, but once the distributed resilver finishes, you're back to 2 parity (rather than 1) in my use case.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 6 months ago

      I will be making a new video, but it's also worth noting that dRAID has storage efficiency drawbacks, especially for databases or any applications that use small writes. There still is no perfect solution.

    • @Saturn2888
      @Saturn2888 6 months ago

      @@LAWRENCESYSTEMS Ah yeah, I have a special (metadata) vdev for both zpools.
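
    For reference, the layout mentioned at the top of this thread would be created roughly as below (a sketch, assuming OpenZFS 2.1+; disk names are placeholders, and only two of the four dRAID vdevs are written out):

      # draid2:5d:15c:1s = double parity, 5 data disks per stripe, 15 children, 1 distributed spare
      zpool create tank draid2:5d:15c:1s \
        da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14
      zpool add tank draid2:5d:15c:1s \
        da15 da16 da17 da18 da19 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29
      # two more identical "zpool add" lines would use the remaining 30 disks of a 60-bay chassis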

  • @Alphahydro
    @Alphahydro 2 years ago

    I know RAID isn't a substitute for a backup, but as resilient as RaidZ2 is, in addition to having snapshots, what could go wrong outside of your entire server being destroyed?

    • @chrishorton444
      @chrishorton444 2 years ago +1

      Failing drives can also cause data corruption if there are discrepancies about which data is correct, due to one drive or multiple drives with errors.

    • @Alphahydro
      @Alphahydro 2 years ago +1

      @@josephp1592 that's not a fault of ZFS though

    • @BlueEyesDY
      @BlueEyesDY 2 years ago +3

      rm -r /
      All the RAID in the world isn’t going to help with that.

    • @wildmanjeff42
      @wildmanjeff42 8 months ago

      I had a second drive start giving errors during a resilver, and 2 weeks later it went totally dead on me, with 20TB of storage on them. I think it was 9 drives; it was a tense 48-hour rebuild... The drives were bought at about the same time and had close to the same hours on them according to the SMART readings.
      I always have at least 1 backup, usually 2, even if I can only JBOD/RAIDZ1 the second backup in an old unused machine.

  • @KC-rd3gw
    @KC-rd3gw 11 months ago

    If I had 60 drives I would use draid2 instead of plain RAIDZ. With today's large-capacity drives, resilver times are terrible. Having the ability to read from all drives and also write to preallocated spare capacity on all drives can cut resilver times by an order of magnitude.

  • @nandurx
    @nandurx 2 years ago

    I am going to say this again: how do we benchmark and check speed over the network? Any tutorial coming up? You could do a live recording for that.

  • @leonardotoschi585
    @leonardotoschi585 a year ago +1

    48 x 8TB drives in RAIDZ3 is all I need for my school (not *mine*).

  • @Gogargoat
    @Gogargoat 2 years ago +1

    Instead of 5-disk Z2 vdevs I'd aim for 6-disk Z2 vdevs, so you avoid having an odd number of data drives. I'm guessing that would show at least a slight uptick in performance compared to the 5-disk vdev benchmarks, even if that only gives 10 vdevs instead of 12. It should have about 654 TiB of storage.

  • @twistacatz
    @twistacatz 2 years ago +2

    One other thing worth mentioning guys is it's not wise to stretch vDev's over multiple storage shelves or JBODs.

    • @chrishorton444
      @chrishorton444 2 years ago +1

      ZFS is very resilient. If the pool goes offline, repair what's wrong and reboot, and it keeps on going. I currently have 4 disk shelves, 2 with 24 x 600GB drives and 2 with 900GB drives, 48 of each drive type per vdev, and they are combined. I have done a few replacements and resilver times aren't too bad.

    • @AlexKidd4Fun
      @AlexKidd4Fun 2 years ago

      Very much correct.

    • @creker1
      @creker1 2 years ago

      Why? By stretching vdevs over multiple JBODs you're increasing fault tolerance. Instead of losing whole vdev when JBOD fails you lose just a couple of disks.

    • @AlexKidd4Fun
      @AlexKidd4Fun 2 years ago

      @@creker1 If a power outage or even a disconnected signal cable takes out a portion of an active vdev beyond its redundancy level, there could be corruption of the entire vdev. It's better to keep a vdev isolated to a single failure group. If it is, an outage won't cause corruption; the associated pool will just go offline.

    • @creker1
      @creker1 2 years ago +2

      @@AlexKidd4Fun if that’s the case then zfs is essentially useless. I somehow doubt that and everything about zfs indicates that no corruption would occur. And my point was that by stretching vdevs you’re not losing entire vdevs and pool will be online. That’s basic HA design that people been doing since forever. Redundant controllers, paths, jbods, disks, everything. Designing so that everything goes out as one group is the opposite of availability.

  • @annakissed3226
    @annakissed3226 2 years ago

    Tom, I am asking this of various YouTube content creators: can you please put out a pre-Black Friday video about the best drives to buy in the sales, so that YouTube content creators can build storage solutions?

  • @L0rDLuCk
    @L0rDLuCk 2 years ago

    Why does the system only have 256GB of RAM? Isn't that way too little RAM for a petabyte of storage? (What about the 1GB of RAM for every TB of storage rule?)

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 2 years ago +1

      The 1GB of RAM for every TB of storage rule does not apply the same way once you get over 16GB of memory.

    • @L0rDLuCk
      @L0rDLuCk 2 years ago

      @@LAWRENCESYSTEMS Maybe a topic for a video? This "old" rule of thumb is quoted a lot on the internet!

  • @KintaroTakanori
    @KintaroTakanori 13 days ago

    me after watching this video: "how should i lay out my drives?" xD

  • @McCuneWindandSolar
    @McCuneWindandSolar 2 years ago

    I guess if I had a 60-drive storage unit, 10-wide would work. But having a 24-bay, I first started off with 4 vdevs of 6 drives each, then I changed it to 2 vdevs of 12 drives for more storage with RAIDZ2.

  • @rui1863
    @rui1863 a month ago

    The number of data drives should be a power of two, i.e. never create a 4-drive RAIDZ setup; it's inefficient. It should be 5 drives (and 6 for RAIDZ2). A parity RAID setup should be 2^n + P, where 2^n is the number of data drives and P is the number of parity drives (in reality the parity bits are striped).

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS a month ago

      Can you tell me where to find the documentation that says that's true?

    • @rui1863
      @rui1863 a month ago

      @@LAWRENCESYSTEMS The smallest write size for ZFS is 4K (normally, unless you have a 512 setup) and the default is 128K. Take the block size being used and divide by 3; it just doesn't compute. In a RAID5 setup this would be wasteful, as RAID5 always writes a full stripe; however, ZFS doesn't always write a full stripe if it doesn't need to, as in the case of small files. So, in a 4-disk RAIDZ setup, ZFS will write either a 2-block mirror or 2 data blocks plus 1 parity block. It will never use all four disks for a single stripe, so your usable storage is much less than drive size * 3; it is more like 2.x. With RAIDZ it's always something like 2.x, rarely 3; it really depends on how many full-stripe versus partial-stripe writes are done. You are going to have to google how ZFS works internally and/or RAID5. Call me old school, but I follow the formula of a 2^n + P setup. It is extremely important for a true RAID5/6 setup and much more forgiving on ZFS due to its partial-stripe writes for smaller I/O requests.

  • @mitchellsmith4601
    @mitchellsmith4601 2 years ago

    I don’t know that I would compare mirrors and parity RAIDs. The mirrors may be faster, but what happens when ZFS detects corruption? With a mirror, there’s nowhere to go, but with parity, that data can be rebuilt. For me, that’s worth the speed loss and longer resilvering times.

    • @pfeilspitze
      @pfeilspitze 8 months ago

      A mirror is functionally a 2-wide RaidZ1 using a trivial parity calculation, no? So if there's a checksum failure reading from one disk, it can repair it by reading the other disk.
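
    How that self-healing shows up in practice, as a short sketch (pool name is a placeholder):

      zpool scrub tank        # read every block, verify checksums, rewrite bad copies from the good side
      zpool status -v tank    # the CKSUM column counts blocks that were repaired from redundancy
      zpool clear tank        # reset the error counters once you are happy the disk is healthy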

  • @grinder2401
    @grinder2401 2 years ago

    IOPS is just half the story. Need to take into consideration throughput and latency too.

  • @zenginellc
    @zenginellc a year ago

    The Logo looks like LTT 👀

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS a year ago

      You're right, it does. I'm starting to think they copied me! 🤔😂

  • @mimimmimmimim
    @mimimmimmimim 9 months ago

    In reality, when it comes down to the number of years lost from your life due to stress when the pool starts resilvering, 3-way mirror vdevs are the way to go 😂
    Large RAIDZ vdevs have a significant impact on the CPU as well; even ordinary writes need the parity math. Mirrors have none.
    And you get way better IOPS...
    There's a saying about servers: there's nothing cheaper than hard drives. Well, maybe RAM 😊

    • @pfeilspitze
      @pfeilspitze 8 months ago +1

      Any thoughts on 4-wide RaidZ2? Same resiliency as 3-way mirror, but with the storage overhead of a 2-way mirror. I was thinking that the vdev being still pretty narrow might mitigate most of the worries.
      Of course if you need pure speed, nothing will beat the mirrors.

    • @mimimmimmimim
      @mimimmimmimim Před 8 měsíci

      @@pfeilspitze Actually, in practice I've seen an IOPS advantage but no throughput advantage in mirror reads coming from the size of the individual mirror vdevs; it's not like a 3-way mirror reads the way striped vdevs do. Though I assume (or maybe remember) that the members of a mirror vdev can be read independently, which results in the IOPS gain. That's a serious gain in busy systems.
      On the second topic, yes and no. For personal storage I find it practical to have four drives even in RAIDZ vdevs, because we're all cheap when it comes down to money :)
      If you ask me, on all the workstations I use I prefer mirror vdevs. Only 2 servers have 4-drive RAIDZ vdevs.
      But let's not forget: in a home-lab environment, usage conditions rarely get worse between the day the system is set up and the day it's being used heavily. In a professional environment, it gets harsher and harsher by the day... No one cares about the underlying system, and even 3 additional concurrent clients is a significant increase in workload.
      For your actual question, I'd prefer 2 vdevs of 2-way mirrors any day (kind of like RAID 1+0), because no operation requires RAIDZ calculations on mirrors, only block hashes, as it is still ZFS.
      This is a serious advantage when resilvering while the system is still in use (e.g. serving). An n-drive RAIDZ2 is basically quite different from a mirror pool of any size; RAIDZ2 involves a lot of calculation for the redundancy structure.
      And in a crisis, the larger the mirror vdev, the greater the gap compared to RAIDZ; I mean more headroom for recovery reads to resilver the new drive...

    • @jttech44
      @jttech44 6 months ago

      @@pfeilspitze 2x2 mirrors is the right call, but only because it's faster. If your drives are small, like 4TB or less, you can get away with RAIDZ1 and snag an additional 4TB of space. If your drives are larger, 2x2 mirrors will be faster and slightly more resilient than RAIDZ2, but only very slightly more; in reality it's a wash.
      I stay away from 4-drive setups for that reason; 6+ drives is where mirrors really shine.

  • @stephenreaves3205
    @stephenreaves3205 2 years ago +1

    First! Finally

  • @ragtop63
    @ragtop63 11 months ago +1

    Aluminum doesn’t rust. That nomenclature needs to go away.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 11 months ago +1

      The platters are not what is being referenced; it's the coating of iron(III) oxide.

  • @pepeshopping
    @pepeshopping 2 years ago +2

    Only trust RAID 10.

    • @nmihaylove
      @nmihaylove 2 years ago +1

      Btrfs user?

    • @pfeilspitze
      @pfeilspitze 8 months ago

      RAID10 is the same as having a pool with lots of 2-wide mirror vdevs.

  • @bujin5455
    @bujin5455 a year ago

    RAID is not a backup, and ZFS is not RAID.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS a year ago +1

      ZFS with many drives is raid

    • @mdd1963
      @mdd1963 9 months ago

      Wonder why they call it RAIDz, RAIDz2, RAIDz3? :)

  • @curmudgeoniii9762
    @curmudgeoniii9762 2 years ago

    Why no dates on your vid?? A professional YouTuber? Not good.