TrueNAS: How To Expand A ZFS Pool

  • Uploaded 22 May 2024
  • Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?
    ZFS COW Explained
    TrueNAS ZFS VDEV Pool Design Explained: RAIDZ RAIDZ2 RAIDZ3 Capacity, Integrity, and Performance.
    Connecting With Us
    ---------------------------------------------------
    + Hire Us For A Project: lawrencesystems.com/hire-us/
    + Tom Twitter 🐦 / tomlawrencetech
    + Our Web Site www.lawrencesystems.com/
    + Our Forums forums.lawrencesystems.com/
    + Instagram / lawrencesystems
    + Facebook / lawrencesystems
    + GitHub github.com/lawrencesystems/
    + Discord / discord
    Lawrence Systems Shirts and Swag
    ---------------------------------------------------
    ►👕 lawrence.video/swag
    AFFILIATES & REFERRAL LINKS
    ---------------------------------------------------
    Amazon Affiliate Store
    🛒 www.amazon.com/shop/lawrences...
    UniFi Affiliate Link
    🛒 store.ui.com?a_aid=LTS
    All Of Our Affiliates that help us out and can get you discounts!
    🛒 lawrencesystems.com/partners-...
    Gear we use on Kit
    🛒 kit.co/lawrencesystems
    Use OfferCode LTSERVICES to get 10% off your order at
    🛒 lawrence.video/techsupplydirect
    Digital Ocean Offer Code
    🛒 m.do.co/c/85de8d181725
    HostiFi UniFi Cloud Hosting Service
    🛒 hostifi.net/?via=lawrencesystems
    Protect your privacy with a VPN from Private Internet Access
    🛒 www.privateinternetaccess.com...
    Patreon
    💰 / lawrencesystems
    ⏱️ Timestamps ⏱️
    00:00 How to Expand ZFS
    01:23 How To Expand Data VDEV
    02:11 Symmetrical VDEV Explained
    03:05 Mixed Drive Sizes
    04:45 Mirrored Drives
    06:00 What Happens if you lose a VDEV?
    07:37 Creating Pools In TrueNAS
    10:30 Expanding Pool In TrueNAS
    16:00 Expanding By Replacing Drives
    #truenas #NAS #ZFS
  • Science & Technology

Comments • 187

  • @LAWRENCESYSTEMS
    @LAWRENCESYSTEMS  1 year ago +10

    Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?
    czcams.com/video/M4DLChRXJog/video.html
    ZFS COW Explained
    czcams.com/video/nlBXXdz0JKA/video.html
    TrueNAS ZFS VDEV Pool Design Explained: RAIDZ RAIDZ2 RAIDZ3 Capacity, Integrity, and Performance.
    czcams.com/video/-AnkHc7N0zM/video.html
    ⏱ Timestamps ⏱
    00:00 ▶ How to Expand ZFS
    01:23 ▶ How To Expand Data VDEV
    02:11 ▶ Symmetrical VDEV Explained
    03:05 ▶ Mixed Drive Sizes
    04:45 ▶ Mirrored Drives
    06:00 ▶ What Happens if you lose a VDEV?
    07:37 ▶ Creating Pools In TrueNAS
    10:30 ▶ Expanding Pool In TrueNAS
    16:00 ▶ Expanding By Replacing Drives

    • @tailsorange2872
      @tailsorange2872 1 year ago

      Can we just give you a nickname "Lawrence Pooling Systems" instead :)

    • @zeusde86
      @zeusde86 1 year ago

      I'd really wish that you could point out the importance of "ashift" in ZFS. I just recently learned that most SSDs have 512b instead of 4k sectors, and that using "ashift=12" on them (instead of 9) is what really hurts performance, so badly that many SSDs will fall behind spinning-rust performance levels. In general I'd really like to see best practices for SSD pools (which cache type to use, ashift as described above, and which disk types to avoid). While it may sound luxurious to have SSD zpools in a homelab, this is especially important on e.g. Proxmox instances with ZFS-on-root (on SSDs).
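      For anyone wanting to check or set this, a minimal sketch (the pool name "tank" and device names are placeholders; ashift is fixed per vdev at creation time):

          zpool get ashift tank                 # 0 means auto-detect, 12 means 4K sectors
          zdb -C tank | grep ashift             # shows the ashift actually in use on each vdev
          zpool create -o ashift=12 tank mirror sda sdb   # hypothetical creation forcing 4K alignment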

    • @garygrose9188
      @garygrose9188 1 year ago

      Brand new and as green as it gets: when you say "let's jump over here" and land in a command page, exactly how did you get there?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      @@garygrose9188 You can SSH into the system.
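      A minimal way in, for the curious (hostname and user are placeholders; the web UI also offers a shell):

          ssh admin@truenas.local
          sudo zpool status    # ZFS commands are available once logged in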

  • @youtubegaveawaymychannelname

    The funny thing about what you said is that you can essentially add one drive at a time if you replace existing drives with larger ones. What you can't do is replace a single drive and immediately get the benefit of the larger size. But I know from experience: when you finally replace the last of those drives in that vdev and it calculates the new size for the vdev... it is a glorious day.
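    For reference, a rough sketch of that swap-and-resilver cycle (pool and device names are placeholders; swap one drive at a time and let each resilver finish):

        zpool set autoexpand=on tank    # lets the pool grow once every member is larger
        zpool replace tank sdb sdf      # swap one old drive for a larger one and resilver
        zpool status tank               # wait for the resilver to complete before the next swap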

    • @blackrockcity
      @blackrockcity 10 months ago +4

      This was helpful. After I read this comment, I watched the rest of the video where Tom mentions this strategy.

    • @BlackBagData
      @BlackBagData 3 months ago +1

      This is the approach I took!

    • @dracoflame8780
      @dracoflame8780 1 month ago

      He said exactly this at the end.

  • @chromerims
    @chromerims 1 year ago +3

    5:51 -- I like this. Two pools: first one with faster flash, and the second with HDDs. Thank you, Tom! 👍

  • @marklowe7431
    @marklowe7431 7 months ago +1

    Super well explained. Cheers. Enterprise grade integrity, performance & home user flexibility. Pick two.

  • @alex.prodigy
    @alex.prodigy 1 year ago +1

    Excellent video, makes understanding the basics of ZFS very easy.

  • @eggman9713
    @eggman9713 10 months ago +5

    Thank you for the detailed explanation on this topic. I'm just starting to get really into homelab and large data storage. I've been a user of Drobo products (now bankrupt, obsolete, and unsupported) for many years and their "BeyondRAID" system allowing mixed-size drives was a game-changer in 2007 and few other products could do that then or now. I also use Unraid but since it is a dedicated parity disk array and each disk is semi-independent it has limitations (mainly on write speed), but is nice in a data recovery situation where each individual data drive can function outside the machine. I know that OpenZFS developers have announced that "expansion" is coming, and users have been patiently awaiting it, which would make zfs more like how a Drobo works. Better than buying whole VDEVs worth of disks at a time and finding a place for them.

  • @GoosewithTwoOs
    @GoosewithTwoOs 1 year ago

    That last piece of info is really good to know. Got a Proxmox server running and I want to replace the old drives that came with it with some newer, larger drives. Now I know.

  • @David_Quinn_Photography
    @David_Quinn_Photography 1 year ago +1

    16:05 answered the question I had, but I learned some interesting things, thank you for sharing. I have 500GB, 2TB, and 3TB drives and wanted to at least replace my 500GB with an 8TB that I got on sale.

  • @perriko
    @perriko 1 year ago +1

    Great instruction as usual... fact with reason! Thank you!

  • @Anonymousee
    @Anonymousee 10 months ago

    16:02 This is what I really wanted to hear, thank you!
    Too bad it was a side-note at the end, but I did learn some other things that may come in handy later.

  • @davidbanner9001
    @davidbanner9001 8 months ago +2

    I'm just moving from Open Media Vault to TrueNAS SCALE and your uploads have really helped me understand ZFS. Thanks.

    • @gorillaau
      @gorillaau 8 months ago

      What was the deal breaker that made you leave Open Media Vault? I'm pondering a shared storage device as a data store for Proxmox.

    • @davidbanner9001
      @davidbanner9001 7 months ago

      @@gorillaau Overall flexibility and general support. A large number of almost preconfigured apps/Dockers and the ability to run VMs. If you are running Proxmox these are probably less of a concern? Switching to ZFS is also very interesting and something I have not used before.

    • @nitrofx80
      @nitrofx80 6 months ago

      I don't think it's a good idea. I just migrated from OMV to TrueNAS and I'm not very happy about the change. I think there is a lot more value for the home user in OMV than TrueNAS.

    • @nitrofx80
      @nitrofx80 6 months ago

      As far as I know there is only support for one filesystem in TrueNAS. OMV supports all filesystems and it's really up to you what you want to use.

  • @HelloHelloXD
    @HelloHelloXD 1 year ago

    Great video as usual. Thanks

  • @alecwoon6325
    @alecwoon6325 1 year ago

    Thanks for sharing. Great content! 👍

  • @knomad666
    @knomad666 10 months ago

    Great explanation.

  • @SirLothian
    @SirLothian 10 months ago

    I have a boot pool that was originally a single 32GB thumb drive that I mirrored with a 100GB SSD. I wanted to get rid of the thumb drive so I replaced the thumb drive on the boot pool with a second 100 GB SSD. I had expected the capacity to go from 32 GB to 100 GB but it did not. This surprises me since the video said that replacing the last drive on a pool would increase the pool size to the smallest disk in the pool. Looks like I will have to destroy the boot pool and recreate it with full capacity and then reinstall TrueNAS on it.
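    Before destroying the pool it may be worth checking the autoexpand property, which defaults to off; a hedged sketch (TrueNAS names its boot pool "boot-pool"; the device name is a placeholder):

        zpool set autoexpand=on boot-pool
        zpool online -e boot-pool sdX    # -e asks ZFS to claim the device's full capacity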

  • @ewenchan1239
    @ewenchan1239 1 year ago +15

    Two things:
    1) Replacing the disks one at a time to step up a terabyte or two in capacity isn't a terrible issue.
    But if you're replacing 10 TB drives with 20 TB drives, then the resilver process (for each drive) takes an INORDINATE amount of time such that you might actually be better off building a new system with said 20 TB drives and then migrating the data over your network vs. the asynchronous resilvering process.
    2) My biggest issue with ZFS is the lack of OTS data recovery tools that are relatively simple and easy to use. The video that Wendell made with Allan Jude talks about this in great detail.

  • @madeyeQ
    @madeyeQ 1 year ago

    Great video and very informative. I may have to take another look at TrueNAS. At the moment I am using a Debian-based system with ZFS pools managed from the CLI (yes, I am a control freak).
    One thing to note about ZFS raid (or any other raid) is that it's not the same as a backup. If you are worried about losing a drive, make sure you have backups! (Learned that one the hard way about 20 years ago.)

  • @johngermain5146
    @johngermain5146 1 year ago +1

    You saved the best for last (adding larger capacity drives.) As my enclosure has the max # of drives installed and 2 vdevs with no room for more, replacing the drives with larger ones is "almost" my only solution without expanding.

    • @theangelofspace155
      @theangelofspace155 1 year ago +2

      You can add a 12-15 disk DAS for around $200-$250

    • @theangelofspace155
      @theangelofspace155 1 year ago +1

      Well, my last comment was deleted. Check ServerBuilds if you need a guide.

    • @johngermain5146
      @johngermain5146 1 year ago

      @@theangelofspace155 Your last comment is still here!

  • @Mike-01234
    @Mike-01234 1 year ago

    After reviewing everything, I wanted drive redundancy and pool-size efficiency, so I built a RAIDZ2. That was 5 years ago and I've never looked back. My failure rate has been 1-2 drives a year; those were used WD Red drives I bought on eBay. I now only buy brand new WD Reds and haven't had a failure in the last few years. I'm looking at moving the TrueNAS up from 6TB to 14TB drives, and for critical files I back up to mirrored drives on a Windows box. I don't like all the security issues around Windows: if you blue screen, or something happens to the OS, it's sometimes difficult to recover data. My new build will be a 5-drive 14TB RAIDZ2 plus a second mirror vdev as a backup set for critical data, moving that off the Windows box onto the TrueNAS.

  • @romanhegglin
    @romanhegglin 1 year ago +2

    Thanks!

  • @andymok7945
    @andymok7945 4 months ago

    Thanks. Waiting for the feature to add a drive to expand. I used much larger drive sizes when I created my pools. For me, data integrity is way more important. It is for my own use, but it's important stuff, and I have a nightly rsync copying to another TrueNAS setup. Then I also have a 3rd system that is my offline archive copy. It gets powered up and connected to the network and rsyncs away. When done, the network is disconnected and power removed.

  • @Kannulff
    @Kannulff 1 year ago

    Thank you for the great explanation and video, as always :) Is it possible to post the fio command line here? Thank you. :)
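    The exact fio command from the video isn't reproduced here, but a generic sequential-write test along these lines is a reasonable starting point (the job name, path, and sizes are placeholders):

        fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M \
            --size=4G --numjobs=1 --ioengine=posixaio --group_reporting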

  • @deadlymarsupial1236
    @deadlymarsupial1236 1 year ago +1

    I just went with TrueNAS SCALE ZFS using an Intel E-series 6-core / 12-thread Xeon, 32GB RAM, and 4 x 20TB WD RED PROs.
    I like the idea that I can move the whole pool/array of drives to another mainboard and not have to worry about differing proprietary RAID controllers, or such controllers failing.
    I also like using a server mainboard with remote admin built onto the board and a dedicated network interface, so I can power up the machine via VPN remote access if need be.
    Although it is very early days in setup/testing, I am so far very impressed, and it's worth the extra $ for a server hardware platform. People may however be surprised how much storage is allocated for redundancy: at least one drive's worth to survive one drive failing.
    What is a bit tricky is configuring a Windows VM hosted on the NAS that can access the NAS shares.
    Haven't quite figured out how to set up a container to host the Ubiquiti controller either.
    One of the things this NAS will do is host StorageCraft SPX backup sets, and the Windows VM hosts the incremental backup image manager that routinely verifies, consolidates, and purges redundant data as per retention policies.
    I haven't decided on an FTP server for receiving backups of remote hosts yet.
    Could go with FileZilla I suppose.
    Another nice solution would be a PXE boot service providing a range of system boot images for setting up and troubleshooting systems in a workshop environment.
    There have been some implementations where TrueNAS is hosted within a hypervisor such as Proxmox, so TrueNAS can focus exclusively on NAS duties while other VMs run a Windows server, a firewall, and perhaps containers for the Ubiquiti controller. I may need more cores for that; however, when I have the time and get another 32GB of RAM to put in the machine, I plan to see if I can migrate the existing bare-metal install of TrueNAS SCALE to a Proxmox-hosted VM just to see how that goes.

    • @theangelofspace155
      @theangelofspace155 1 year ago

      There are some videos on setting up TrueNAS SCALE as a Proxmox VM; I went that route. I use SCALE just as the file manager, Proxmox as the VM hypervisor, and Unraid as the container (Docker) manager.

    • @deadlymarsupial1236
      @deadlymarsupial1236 1 year ago

      @@theangelofspace155 Thanks, it will be interesting to see how easily (or not) migrating TrueNAS from bare metal to a VM within Proxmox will go. I suspect it comes down to backing up the TrueNAS configuration, mapping the drive and network interfaces to the VM, and setting up auto-boot on restored mains power, but I need to put together a more thoroughly researched plan first.

  • @Darkk6969
    @Darkk6969 1 year ago +8

    One thing I love about ZFS is how incredibly easy it is to manipulate the storage pools. I was able to replace 4 3TB drives with 4 4TB drives without any data loss. It took a while to resilver each time I swapped out a drive. Once all the drives had been swapped out, ZFS automatically expanded the pool.

    • @tubes9181
      @tubes9181 1 year ago +6

      This is available on a lot more than just zfs.

    • @MHM4V3R1CK
      @MHM4V3R1CK 1 year ago

      How long did that take btw?

  • @simonsonjh
    @simonsonjh 1 year ago

    I think I would use the disk replacement method. But I'm waiting for the new ZFS features.

  • @DiStickStoffMono0xid
    @DiStickStoffMono0xid 1 year ago

    I read somewhere that it's possible to "evacuate" data from a vdev to remove it from a pool; is that maybe a new feature?

  • @NickyNiclas
    @NickyNiclas 7 months ago

    Exciting times now that ZFS expansion is almost here!

  • @hpsfresh
    @hpsfresh 9 months ago

    Doesn't ZFS support the attach command even for non-mirrors?

  • @ManVersusWilderness
    @ManVersusWilderness 1 year ago +1

    What is the difference between "add vdevs" and "expand pool" in truenas?

  • @frederichardy1990
    @frederichardy1990 6 months ago

    With "expanding by replacing", assuming you can shut down the TrueNAS server for a few hours, could copying all the existing drives of a vdev (with dd or even a standalone duplicator) to higher-capacity drives work? It would be much faster than replacing one drive at a time for a vdev with a lot of drives.

  • @philippemiller4740
    @philippemiller4740 1 year ago

    Hey Tom, I thought you could remove vdevs from a pool, but only mirrors, not raidz vdevs?

  • @kommentator1157
    @kommentator1157 1 year ago +1

    Would it be possible (though not advisable) to have vdevs with different widths?
    Edit: Just got to the part where you show it. Yep, it's possible, not recommended.

  • @Savagetechie
    @Savagetechie 1 year ago +1

    Extendable vdevs can't be too far away. The OpenZFS Developer Summit is next week; maybe they'll even be discussed there?

  • @johnpaulsen1849
    @johnpaulsen1849 1 year ago +3

    Great video. I know that Wendell from Level1Techs has mentioned that expanding vdevs is coming?
    What do you think about that?
    Also, do you have any content on adding hot spares or SSD cache to an existing pool?

    • @Pythonzzz
      @Pythonzzz 1 year ago +1

      I keep checking around every few months for updates on this. I’m hoping this will be an option by the time I need to add more storage.

  • @zeusde86
    @zeusde86 1 year ago +4

    Actually you CAN remove data vdevs; you just cannot do it with raidz vdevs. With mirrored vdevs this works; see also "man zpool-remove(8)":
    "Top-level vdevs can only be removed if the primary pool storage does not contain a top-level raidz vdev".
    ...on very full vdevs it just takes some time to move the stuff around...
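    A sketch of such a removal (pool and vdev names are placeholders; the top-level vdev name comes from zpool status):

        zpool status tank            # note the vdev name, e.g. mirror-1
        zpool remove tank mirror-1   # evacuates its data onto the remaining vdevs
        zpool status tank            # shows evacuation and remapping progress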

  • @hojo1233
    @hojo1233 10 months ago

    What about TrueNAS and 2 drives in a basic mirror? Is there any way to expand it using bigger drives? Unfortunately I don't have any more free ports in the server.
    In my configuration I have 4 ports total: 2 of them are for data drives (2x4TB), another is for an SSD cache, and the last one is for boot. I've had no issues with that configuration whatsoever, but now I need to increase storage capacity.
    Is there any way to expand it without rebuilding everything from scratch? For example, by replacing the 4TB disks with 8TB ones and resizing the pool?

  • @fredadams1877
    @fredadams1877 1 year ago

    Question: for a TrueNAS server, what would be better, an ASUS X58 Sabertooth with a Xeon X5690, or an ASUS Sabertooth 990FX R2.0 with an AMD 8350, using a SAS card to the drives? I have both; I could use 24GB of memory on the Intel and 16GB on the AMD. Just not sure which would be better. I will also be using an M.2 card with a 256GB M.2 drive as a LOG cache, or would it be better used as just extra cache? This will be a file server to hold all my photos (photographer). Thanks for your time and thoughts on this.

  • @Thomate1375
    @Thomate1375 1 year ago

    Hey, I have a problem with pool creation...
    I have a fresh install of TrueNAS SCALE with 2x 500GB HDDs,
    but every time I try to create a pool with them I get a "...partition not found" error.
    Everything I could find online says I would have to wipe the disks and then reboot the system. I have done this multiple times now but nothing changes.
    I have also done a SMART test, but according to the results the drives seem to be OK.

  • @Im_Ninooo
    @Im_Ninooo 1 year ago

    That's basically why I went with BTRFS: so I could expand slowly, since drives are quite expensive where I live and I can't just buy a lot of them at once.

    • @Im_Ninooo
      @Im_Ninooo 1 year ago

      @@wojtek-33 I've been using it for years now, but admittedly only with a single disk on all of my servers, so can't speak from experience on the resiliency of it.

    • @LesNewell
      @LesNewell 1 year ago +1

      @@wojtek-33 I've been using BTRFS for 10+ years (mostly raid5) and in that time have had two data loss incidents, neither of which could be blamed on BTRFS. One was raid0 on top of LUKS with 2 drives on USB. Basically I was begging for something to go wrong and eventually it did. One USB adapter failed so I lost some data. This was only a secondary backup so no big deal.
      The other time was when I was creating a new Raid5 array of 5x 2TB SSDs and had one brand new SSD with an intermittent fault. I mistakenly replaced the wrong drive. Raid5 can't handle 2 drive failures at the same time (technically one failure and one replacement) so I lost some data. Some of the FS was still readable but it was easier to just wipe and start again after locating the correct faulty drive and replacing it.
      As an aside, I find BTRFS raid5 to be considerably faster than ZFS RaidZ. ZFS also generates roughly twice as many write commands for the same amount of data. That's a big issue for SSDs.
      BTRFS raid5 may have a slightly higher risk of data loss but for SSDs I think that risk is offset by the reduced drive wear and risk of wearing drives out.

    • @Mr.Leeroy
      @Mr.Leeroy 1 year ago

      Each added drive is also ~52kWh per year, so expanding vertically still makes more sense.

  • @deacbeugene
    @deacbeugene 3 months ago

    Questions about dealing with pools: can one move a dataset to another pool? Can one delete a vdev from a pool if there is enough space to move the data?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  3 months ago +1

      You can use ZFS replication to copy them over to another pool.

  • @tank1demon
    @tank1demon 1 year ago

    So there functionally isn't a solution for a system where you will end up with 5 drives in a pool but have to start with 4? As in adding anything to an existing vdev? I'm on Xubuntu 20.04 and I'm trying to work out how to go about that, if possible. Can I just build a pool of drives without a vdev and add to that pool?

  • @AnuragTulasi
    @AnuragTulasi 1 year ago

    Do a video on dRAID too.

  • @__SKYNET__
    @__SKYNET__ 4 months ago

    Tom, can you talk about the new pool expansion features coming in ZFS 2.3? Thanks, appreciate it.

  • @Reminder5261
    @Reminder5261 1 year ago

    Is it possible for you to do a video on creating a ZFS share? There is nothing on YouTube to assist me with this. For some reason, I am unable to get my ZFS shares up and running.

    • @wiktorsz1967
      @wiktorsz1967 1 year ago +1

      Check if your user group has SMB authentication enabled. At first I assumed that if my user settings were set up then it would work, or that the primary group would automatically be allowed to authenticate.
      Also make sure to set the share type to "SMB share" at the bottom when creating your dataset, and add your user and group to the ACL in the dataset permissions.
      I don't know if you have done all that already, but for me it works with all the things I wrote above.
      Edit: if you're using Core (like me) and your share doesn't work on an iPhone, then enable AFP in services.
      On SCALE you need to enable "AFP compatibility" or something like that somewhere in the dataset or ACL settings.

    • @0Mugle0
      @0Mugle0 1 year ago

      Check there are no spaces in the pool or share names. Fixed it for me.

  • @arturbedowski1148
    @arturbedowski1148 4 months ago

    Hi, I copied my HDD onto an SSD and tried expanding the ZFS pool via GParted, but it didn't work (the SSD has waaaay bigger storage). Is it possible to expand my rpool ZFS partition, or is it not possible?

  • @fredadams1877
    @fredadams1877 1 year ago

    Are you able to add a drive so that you can increase your fault tolerance? For instance, I started with 5 drives in Z1; I would like to add another drive and change from Z1 to Z2. Is that possible?

  • @bartgrefte
    @bartgrefte 1 year ago +4

    Can you make a video about which aspects of ZFS are very RAM-demanding? A whole bunch of websites say that with ZFS, you need 1GB of RAM for each TB of storage, but there are also a whole bunch of people out there who are able to use ZFS without problems on systems with far from enough RAM to obey that requirement.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +5

      Right here: Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?
      czcams.com/video/M4DLChRXJog/video.html

    • @Mr.Leeroy
      @Mr.Leeroy 1 year ago +1

      The only demanding thing is deduplication; the rest is caching.
      You can control on a per-dataset basis what gets cached (and whether only metadata or the data itself is cached), as well as where it gets cached: into RAM or into L2ARC.
      Dataset CLI parameters like `primarycache` are what you need.
      Still, be very cautious going below the minimum requirements, e.g. 8GB RAM for FreeNAS; that is not dictated by ZFS but by the particular appliance as a whole OS. Something like ZFS on vanilla FreeBSD may very well go a lot lower than 8GB, all depending on the services you run.
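      A sketch of those per-dataset knobs (the dataset name is a placeholder):

          zfs set primarycache=metadata tank/media   # ARC (RAM) keeps only metadata for this dataset
          zfs set secondarycache=all tank/media      # L2ARC, if present, may hold data and metadata
          zfs get primarycache,secondarycache tank/media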

    • @bartgrefte
      @bartgrefte 1 year ago

      @@Mr.Leeroy I wasn't thinking as low as 8GB, more like 32GB, but with so much storage that the "1GB RAM per TB storage" rule still wouldn't be met.

    • @Mr.Leeroy
      @Mr.Leeroy 1 year ago

      @@bartgrefte 32GB is perfectly adequate. I don't suppose you'll approach a triple-digit-TB pool just yet.

    • @bartgrefte
      @bartgrefte 1 year ago

      @@Mr.Leeroy No pool yet, waiting for a good deal on HDDs. Now if only ZFS had the option to start a RAIDZ2 (or 3) with a small number of drives and add drives later....
      Everything else is ready to go; I built a system from used parts only, and it has 16 3.5" and 6 2.5" hot-swap bays in a Stacker STC-T01 that I managed to get my hands on :)

  • @glitch0156
    @glitch0156 1 month ago

    I think for RAID0 you can add drives to the pool without rebuilding it.
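    That matches how striped pools behave: each disk is its own top-level vdev, and adding another just stripes the pool wider. A sketch with placeholder names:

        zpool add tank sdd      # adds sdd as a new single-disk top-level vdev
        zpool list -v tank      # the added capacity is available immediately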

  • @jms019
    @jms019 1 year ago

    Isn't RAIDZ1 expansion properly in yet?

  • @RobFisherUK
    @RobFisherUK 3 months ago

    I only have two drives and only space for two, so 16:00 is the answer for me!

  • @ovizinho
    @ovizinho 1 year ago

    Hello!
    I have a question that I think is so simple that everywhere I research it, it goes unnoticed...
    I built a NAS with an old PC and everything is ready for the installation of TrueNAS.
    My question: where do I connect the LAN cable? Directly to the internet router, or to the main PC's LAN?
    NAS-router or NAS-main computer?
    Both the NAS and the main computer have 10Gb LAN each...
    If it is NAS-router, after installing TrueNAS do I disconnect it from the router and connect it to the main computer?
    Thanks in advance!
    Processor: i7 6700 3.40 GHz
    Motherboard: ASUS EX-B250-V7
    Video card: GTX 1060 6GB (PG410)
    Memory: DDR4 16GB 3000MHz
    SSD: 500GB NVMe
    HDD: 1TB

  • @IntenseGrid
    @IntenseGrid 10 days ago

    Several RAID systems have a hot spare (or a cool one, by powering down the drive). I would like to have a cold spare for my zpool that gets used automatically, so resilvering can kick off without me knowing a thing. I realize this is sometimes dangerous because we don't know what killed the drive, and it may kill another one while resilvering, but most of the time the drives themselves are the problem. Does ZFS support the hot or cold spare concept?
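    ZFS does support hot spares; zed (the ZFS event daemon) can kick one in automatically when a drive faults. A sketch with placeholder names:

        zpool add tank spare sde    # register sde as a hot spare for the pool
        zpool status tank           # spares are listed in their own section of the output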

  • @LA-MJ
    @LA-MJ 1 year ago

    Would you recommend RAIDZ1 for SSDs?

    • @LesNewell
      @LesNewell 1 year ago

      RaidZ1 generates quite a lot of extra disk writes, which is bad for SSD life. I did some testing a while back between ZFS raidZ and BTRFS Raid5. BTRFS generated roughly half as many disk writes for the same amount of data written to the file system.
      How do you intend to use the system? If it's mostly for backups you'll probably never wear the drives out. If it's for an application with regular heavy disk writes you may have a problem.

  • @Djmaxofficial
    @Djmaxofficial 2 months ago

    But what if I wanna use different-size drives?

  • @z400racer37
    @z400racer37 1 year ago

    Doesn't Unraid allow adding 1 drive at a time, @Lawrence Systems?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      Not sure, I don't use Unraid.

    • @z400racer37
      @z400racer37 1 year ago

      @@LAWRENCESYSTEMS Pretty sure I remember them working some magic there somehow. Could be interesting to check out. But I'm a TrueNAS guy also.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      No, Unraid does not natively use ZFS.

    • @z400racer37
      @z400racer37 1 year ago

      @@LAWRENCESYSTEMS @superWhisk ohh I see, I must have misunderstood when researching it ~a year ago. Thanks for the clarification guys 👍🏼

  • @SandWraith0
    @SandWraith0 1 year ago

    Just one question: how is any of this better than how Unraid does it (or OpenMediaVault with a combination of UnionFS and Snapraid or Windows with Stablebit)?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +3

      ZFS has much better performance and better scalability

  • @yc3X
    @yc3X 8 months ago

    Is it possible to just drag and drop files onto the NAS drive? Secondly, is it possible to run games off the NAS? I have some super old games I wanted to store on it and just play them off it. I wasn't sure if the files are compressed or not when placing them on the NAS.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  8 months ago

      Yes, you can put them on a share, and as long as a game can run from a share it should work.

    • @yc3X
      @yc3X 8 months ago

      @@LAWRENCESYSTEMS Awesome, thanks! Yeah, I'm using a Drobo currently, but who knows when it might die, so I figured I would start looking into something newer. I figured it must be something similar to a Drobo.

  • @Saturn2888
    @Saturn2888 1 year ago

    So I have 4x1TB. Replace 1TB with 8TB, resilver, no change. Replace another 1TB, resilver, now it's 8TB larger from the first one? Or is it that you replace all drives first, then it shows the new size?

    • @gloth
      @gloth 1 year ago +1

      No changes until you replace that last drive and have 4x8TB in your vdev.
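      While swapping drives you can watch the pending growth per device; a sketch (pool name is a placeholder):

          zpool list -v tank    # the EXPANDSZ column shows capacity waiting to be claimed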

    • @Saturn2888
      @Saturn2888 1 year ago

      @@gloth Thanks! I eventually figured it out and switched to all mirrors.

  • @mikew642
    @mikew642 1 year ago

    So on a mirrored pool, if I add a vdev to that pool, my dataset won't know the difference, and just give me the extra storage?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      Yes, datasets don't care how the vdevs they sit on are expanded.

    • @mikew642
      @mikew642 1 year ago +1

      @LAWRENCESYSTEMS Thank you sir! You're one of the main reasons I started playing with ZFS / TrueNAS! THANK YOU for your content!

  • @LukeHartley
    @LukeHartley 1 year ago +3

    What's the most common cause of a vdev failing? I like the idea of creating several vdevs, but the thought of one failing and losing EVERYTHING scares me.

    • @BenVanTreese
      @BenVanTreese 1 year ago +3

      VDEVs would fail due to normal drive failures.
      The issue with a lower raid level is that while you do have the ability to lose 1 drive and keep all data, when you put in a new drive to replace the failed one, it must do a lot of read/write to recalculate the parity on the drive you put in.
      This process can cause any other drives that are close to failing to fail as well.
      Usually people buy drives in bulk, so if you buy 16x drives at once, and they were all made at the same time from same manufacturer, the chances of another drive failing at the same time the first did is higher as well.
      The chance of two drives failing in the same vdev when you're running RAIDZ2 and have a hot spare or two assigned to the pool is lower and lower, but the risk is never 0, which is why you keep backups alongside RAID (RAID is not a backup).
      Anyway, hopefully that is helpful info.

    • @lukehartleyfilms
      @lukehartleyfilms 1 year ago +1

      @@BenVanTreese very helpful! Thanks for the info!

  • @rcdenis1
    @rcdenis1 1 year ago

    How do you reduce the size of a ZFS pool? I have more room than I need and need that extra space for another server.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +6

      As I said in the video, you don't.

    • @rcdenis1
      @rcdenis1 1 year ago +2

      @@LAWRENCESYSTEMS OK, guess I'll have to back up everything, tear it down, start over, and restore. And I wanted to go fishing next weekend! Thanks for the video.

  • @GW2_Live
    @GW2_Live 1 year ago

    This does drive me a little nuts tbh, as a home user. I have an MD1000 disk shelf with 4/15 bays empty; it would be nice to add 4 more 8TB drives to my vdev without restoring all the data from my backup.

    • @emka2347
      @emka2347 4 months ago

      Yeah... this is why I'm thinking about Unraid.

  • @maddmethod5880
    @maddmethod5880 1 year ago +1

    Man, I wish Proxmox had a nice UI like that for ZFS. You've gotta do a lot of this on the command line, like a scrub.
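    For reference, the CLI versions are short (Proxmox usually names its root pool rpool; adjust to yours):

        zpool scrub rpool     # start a scrub in the background
        zpool status rpool    # check scrub progress and results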

  • @ameliazM
    @ameliazM 9 days ago

    How I wish they all did these things automatically. Like, just pop in a drive, then the system maximizes that drive for reliability, and then as you need space it trades reliability within acceptable or user-set limits.

  • @AliB333
    @AliB333 1 month ago

    OK, so the short version: if I have a vdev with 8 disks in it currently, and I don't have space in the machine to add 8 more disks, the answer is in fact no, you can't expand the ZFS pool.
    Basically the only options are to move tens of TB of data elsewhere and recreate the pool, OR replace all 8 drives one at a time with larger drives?
    That's mind-boggling to me... how can anyone run a storage server that way?
    Somehow I'm now going to have to build a new server, and then temporarily connect my 8 old drives to the new server to copy data off them... somehow.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 month ago

      Millions of people do run storage this way.

    • @AliB333
      @AliB333 1 month ago

      @@LAWRENCESYSTEMS Yes, I realize that. Millions of people also believe the earth is flat; it doesn't make it any less frustrating.
      If you're in an office environment it probably doesn't matter much because you likely have hardware sitting around, but as a homelabber I now have to go spend hundreds of dollars to essentially build a second NAS that I can move data to, just to add some capacity.

  • @WillFuI
    @WillFuI 9 days ago

    So there is no way to make a 4-drive Z1 into an 8-drive Z2 without losing all the data currently on the drives. Dang, would have loved that.

  • @whyme2500
    @whyme2500 1 year ago

    Not all heroes wear capes....

  • @tupui
    @tupui 5 months ago

    Did you see OpenZFS added RAIDZ expansion!?

  • @jlficken
    @jlficken 1 year ago

    How would you set up an all-SSD 24-bay NAS with ZFS? I'm thinking either 3 x 8-disk RAIDZ2 vdevs, 2 x 12-disk RAIDZ2 vdevs, or maybe 1 x 24-disk RAIDZ3 vdev? The data will be backed up elsewhere too. It's not necessary to have the best performance ever, but it will be used as shared storage for my Proxmox HA cluster.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +1

      2X12

    • @jlficken
      @jlficken 1 year ago

      @@LAWRENCESYSTEMS Thanks for the reply! I'll try to grab 4 more SSDs over the next couple of months to make the first pool and go from there.

  • @tylercgarrison
    @tylercgarrison 1 year ago +1

    Is that background blue hex image from GamersNexus? lol

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      I never really watch that channel; they were part of an old template I had.

  • @blackrockcity
    @blackrockcity 10 months ago

    Watching this at 2x was the closest thing I've seen to 'The Matrix' that wasn't actually art or sci-fi. 🤣

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  10 months ago +1

      I use 2X as well; YouTube should offer up to 3X.

  • @donaldwilliams6821
    @donaldwilliams6821 1 year ago +2

    Re: expanding vdevs by replacing drives with larger ones. One note: if you are doing that with RAIDZ1, you are intentionally putting the vdev into degraded mode. If another drive should fail during the rebuild, that vdev and zpool will go offline. This is especially risky with spinning drives over 2TB since they have longer rebuild times. A verified backup should be done before attempting that process. Some storage arrays have a feature that mirrors out a drive instead of forcing a complete rebuild; i.e. if SMART errors increase, the drive is mirrored out before it actually fails. I don't believe ZFS has a command like that? You mirror the data to the new drive in the background, then "fail" the smaller drive; the mirrored copy becomes active and a small rebuild is typically needed to get it 100% in sync, depending on the IO activity at the time.

    • @zeusde86
      @zeusde86 1 year ago +7

      You can do this without degrading the pool: just leave the disk to be replaced attached, and perform a "replace" action instead of pulling it out first. You will notice that the pool reads from all available drives to prefill the new one, including the disk designated for removal. If you have spare disk slots, this method is definitely preferred; I've done this multiple times.
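      A sketch of that in-place replace (names are placeholders; the outgoing disk stays online for the whole resilver):

          zpool replace tank sdb sdf   # resilver onto sdf while sdb keeps contributing reads
          zpool status tank            # sdb detaches automatically once the resilver completes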

    • @donaldwilliams6821
      @donaldwilliams6821 1 year ago

      @@zeusde86 Excellent! Thank you. I am still learning ZFS. I use it on my TrueNAS server, many VMs, Linux laptop and Proxmox.

    • @ericfielding668
      @ericfielding668 1 year ago

      ​@@zeusde86 The "replace" action is a great idea. I wonder if the addition of a "hot spare" (i.e. yet another drive) would help if things went sour during the change.

  • @phillee2814
    @phillee2814 6 months ago

    Thankfully, the future has arrived and you can now add one drive to a RAIDZ to expand it.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  6 months ago

      Not yet

    • @phillee2814
      @phillee2814 6 months ago

      @@LAWRENCESYSTEMS So they were misleading us all at the OpenZFS conference then?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  6 months ago

      @@phillee2814 My point is that it's still a coming in the future feature, not in production code yet.

  • @kevinghadyani263
    @kevinghadyani263 1 year ago

    Watching all these ZFS videos on your channel and others, I'm basically stuck saying "I don't know what to do". I was gonna make a RAIDZ2 with my eight 16TB drives, but now I'm thinking it's better to have more vdevs so I can upgrade more easily in the future. It just makes sense, although I can lose a ton of storage capacity doing it.
    I thought about RAIDZ1 with 4 drives like you showed, striped together, but I don't think that's very safe; definitely not as safe as a single RAIDZ2, especially with 16TB drives. I wanna put my photos and videos on there, although I also need a ton of storage capacity for my YouTube videos. Each project is 0.5-1TB. And I don't know if I should use any of my older 2TB drives as part of this zpool or put them in a separate one.
    I feel completely stuck and unable to move. My 16TB drives have been sitting there for some days now, and I need the space asap :(. I don't want to make a wrong decision and not be able to fix it.

  • @donaldwilliams6821
    @donaldwilliams6821 1 year ago +1

    Re: vdev loss. In the case of RAIDZ1 you would need two failures for the vdev to go offline. Your illustration shows one failure bringing the entire vdev offline, which isn't correct; that vdev would be degraded but still online. I do agree that Z2 is a better option. Re: mirrors. Ah yes, the old EMC way of doing things. Haha, I have seen plenty of mirror failures too.

    • @Mr.Leeroy
      @Mr.Leeroy 1 year ago

      @SuperWhisk A triple mirror is far from a terrible idea when you are designing cost-effective tiered storage.
      E.g. as a homelab admin you consider how low the ratio of your non-recoverable data to recoverable trash like Plex storage gets, and suddenly triple-mirror + single-drive pools make sense.

    • @Mr.Leeroy
      @Mr.Leeroy 1 year ago

      @SuperWhisk look up tiered storage concept, or re-read, idk..

  • @praecorloth
    @praecorloth 1 year ago +2

    I'm going to be one of those mirror guys. When it comes to systems that are going to have more than 4 drives, mirrors are pretty much the only way to go. The flexibility in how you can set them up means that if you need space and performance, you can have 3x 2-way mirrors, or if you need better data redundancy (better than RAIDZ2), you can set up 2x 3-way mirrors. The more space for physical drives you have, the less sense parity RAID makes.
    Also, for home labbers using RAIDZ*, watch out for mixing and matching disks with different sector sizes. Like 512 byte vs 4096 byte sector size drives. That will completely fuck ANY storage efficiency you think you're going to get with RAIDZ* over mirrors.

    • @Mike-01234
      @Mike-01234 1 year ago +3

      Mirrors are only good if performance is your top priority. RAIDZ2 wins on space and tolerates up to 2 drive failures compared to a mirror. If you step up to a 3-way mirror you can now lose up to 2 drives, but you still lose more space than with RAIDZ2. The only gain is performance.

    • @praecorloth
      @praecorloth 1 year ago

      @@Mike-01234 storage is cheap, and performance is what people want. Parity RAID just doesn't make sense anymore.

  • @nid274
    @nid274 1 year ago

    Wish it were easier.

  • @LudovicCarceles
    @LudovicCarceles 1 year ago +1

    Thanks!

  • @bridgetrobertson7134
    @bridgetrobertson7134 8 months ago

    Yup, I hate ZFS. Looking to offload from Open Media Vault, which has run flawlessly for 6 years with 3 10TB drives on SnapRAID. I wanted less of a do-it-all server and more of a long-term storage box this time around. Problem is, I can't afford to buy enough drives at clown-world prices to satisfy ZFS if I can't just add a drive or two later. What's worse, 20TB drives are within $10 of my same old 10TB drives. Will look for something else.

  • @lyth1um
    @lyth1um 1 year ago +1

    The worst part about ZFS so far is shrinking; LVM and dumb filesystems can do it. But like in real life, we can't get everything.

  • @84Actionjack
    @84Actionjack 1 year ago

    Must admit the expansion limitation is a reason I'll stick to "Stablebit" on my Windows Server as my main storage but I fully intend to adopt ZFS on TrueNAS as a backup server. Thanks

    • @Im_Ninooo
      @Im_Ninooo 1 year ago

      with BTRFS you can add a drive of any size, at any time and run a balance operation to spread the data (and/or convert the replication method)

    • @84Actionjack
      @84Actionjack 1 year ago +1

      @@Im_Ninooo Stablebit works the same way in windows. Thanks

  • @june5646
    @june5646 1 year ago

    How to expand a pool? You don't unless you're rich lmao

  • @christopherwilliams1878
    @christopherwilliams1878 11 months ago

    Did you know that this video is uploaded to another channel?

  • @Mice-stro
    @Mice-stro 1 year ago

    Something interesting is that while you can't expand a pool by 1 drive, you can add it as a hot spare, and then add it into a full pool later

    • @MHM4V3R1CK
      @MHM4V3R1CK 1 year ago

      I have one hot spare on my 8 disk raidz2. So 9 disks. Are you saying I can expand the storage into that hot spare so it adds storage space and removes the hot spare?

    • @ericfalsken5188
      @ericfalsken5188 1 year ago

      @@MHM4V3R1CK No, but if you expand the raidz later, you can use the hot spare as one of those drives... Not sure if that's quite as awesome... but the drive is still giving you usefulness in redundancy.

    • @MHM4V3R1CK
      @MHM4V3R1CK 1 year ago

      @@ericfalsken5188 Not sure I follow. Could you explain in a little more detail please?

    • @ericfalsken5188
      @ericfalsken5188 1 year ago

      @@MHM4V3R1CK You're confusing 2 different things. The "hot spare" isn't part of any pool; it's swapped into a pool to replace a dead or dying drive when necessary. So it can still be useful to help provide resiliency in the case of a failure, but it isn't going to help you expand your pools. On the other hand, because it isn't being used, when you DO get around to making a new pool with the drive (or if TrueNAS adds ZFS expansion in the meantime) you can still use it. If you do add the drive to a pool, then it's not a hot spare anymore.

    • @MHM4V3R1CK
      @MHM4V3R1CK 1 year ago

      @@ericfalsken5188 Oh yes, I understand the hot spare functionality. For some reason, based on your comment, I thought that having the hot spare configured in the pool meant I got a free pass to use it to expand the storage. I misunderstood. Thanks for your extra explanation!

  • @emka2347
    @emka2347 4 months ago

    I guess Unraid is the way to go...

  • @enkrypt3d
    @enkrypt3d 9 months ago

    So what's the advantage of using several vdevs? If you lose one you lose everything?! EEEK!

  • @ashuggtube
    @ashuggtube 1 year ago

    Boo to the naysayers 😊

  • @bluegizmo1983
    @bluegizmo1983 1 year ago

    How to expand ZFS: Switch to UnRAID and quit using ZFS if you want easy expansion 😂

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      But then you lose all the performance and integrity features of ZFS.

  • @bassjmr
    @bassjmr 1 year ago

    This is why Synology still wins... easy to expand volumes.

    • @bangjago283
      @bangjago283 1 year ago

      Yes, we use Synology for 32TB. But do you have recommendations for 1PB of storage?

    • @TheBlur81
      @TheBlur81 1 year ago

      All other things aside, would a Z2 2 vdev pool (4 drives per vdev) have the same sequential read/write as a single 6 drive vdev? I know the IOPS will double, but strictly R/W speeds...

  • @LesNewell
    @LesNewell 1 year ago

    ZFS doesn't make it very clear but basically a pool is a bunch of vdevs in raid0.

    • @piotrcalus
      @piotrcalus 1 year ago

      Not exactly. In ZFS writes are balanced to fill all free space (all vdevs) at the same time. It is not RAID0.

  • @namerandom2000
    @namerandom2000 1 year ago

    This is so confusing... there must be a simpler way to explain this.

  • @icmann4296
    @icmann4296 2 months ago

    Please remake this video. Starting point, viewer knows raid and mdadm, and knows nothing about zfs, and believes that zfs is useless if it can't do the MOST BASIC multi-disk array function of easily expanding storage. I shouldn't have to watch 75 other videos to understand zfs well enough to get one unbelievably, hilariously basic question answered.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 months ago

      ZFS is complex, and if you are looking for a RAID system that can be easily expanded, then ZFS is not for you.

  • @dariokinoshita8964
    @dariokinoshita8964 1 month ago

    This is very bad!!! Windows Storage Spaces allows adding 1, 2, 3, or any number of disks with the same redundancy.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 month ago

      Windows Storage Spaces is not nearly as robust as ZFS and is a very poorly performing product that I never recommend anyone use.

  • @NekoiNemo
    @NekoiNemo 1 year ago

    "So the only reason to ever use Btrfs went away" was uttered on your podcast no more than 2 weeks ago... And lo and behold, you yourself create a video explaining the main reason to use Btrfs over ZFS: not needing to sink ungodly amounts of money into buying multiple drives at once every time you need to expand your pool, rather than organically growing it one or two drives at a time.
    And yet, over and over, people like you keep telling ordinary people who are running hobby projects like homelabs OUT OF POCKET, without an enterprise bankrolling the infrastructure, to ignore Btrfs and only ever use ZFS. Because sure, everyone has multiple thousands of dollars on standby for a hobby when said hobby needs just a few terabytes more free space...

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago +5

      I don't tell people what to use, I make videos about what I use and why I use it. Use what makes you happy and what works for you and your budget. 🙂

    • @theangelofspace155
      @theangelofspace155 1 year ago

      Do you realize that you can get the best of both worlds? And if you are making any pool with fewer than 4 drives, you are not being serious about your NAS. I personally run Unraid and TrueNAS under Proxmox as VMs. For my main data and Nextcloud I use the ZFS pool; I only use Unraid for my media archive, because I don't care about data loss there, the speed of 1 drive is more than enough for my movies' bitrate, and I can add 1 drive for more movies as I need. For critical data I use ZFS and back up to Unraid. You can start ZFS with as little as 4 drives. And 40TB is more than enough for main data for somebody who does not have enterprise infrastructure with an enterprise budget.

    • @NekoiNemo
      @NekoiNemo 1 year ago +1

      @@theangelofspace155 So, wait, did I understand you correctly: your "solution" to this crucial ZFS flaw is... to make single-drive pools, which defeat the whole point of ZFS, as they offer no protection against drive failure (so your entire file server is fubar if a single drive dies) or even bitrot? But they do allow you to grow one drive at a time.
      > and if you are making any pool with less than 4 drives, you are not being seriuos about your NAS
      Ah, classic "no true Scotsman". Also, wouldn't that mean you then need to expand that pool 4 drives at a time too, which, unless you're using tiny drives, would make it cost >$1.5k at a time? Precisely my point of contention.
      > You can start a zfs with as little as 4 drives.
      I suggest you up your reading comprehension. I never said anything about the issue being "starting"; I specifically talked about expanding. But since you mentioned it: can you start ZFS with 4 drives if 3 of them are already filled to the brim with data and only the 4th is new (because, say, you couldn't afford to drop over a grand on a set of 4 drives to start a pool, and you want to convert your existing non-array storage into a file server)? Once again, speaking of budget considerations.
      > And 40TB is more than enough for main data for somebody that does not have an enterprise infrastructure with enterprise budget.
      That depends on the amount and type of data, and it's also another example of a fallacy. I don't have an enterprise budget, far from it in fact, not living in Murica or western Europe. And yet I have a file server with 140TB of *usable* space. How did I do it without an enterprise budget? Just by expanding one drive every couple of months (basically whenever I started running out of space) over the last 6 years. Because for me, NOT using ZFS, upgrading and expanding as I went was an option.