Fixing my worst TrueNAS Scale mistake!

  • Published on 22 May 2024
  • In this video, I'll fix the worst mistake I made on my TrueNAS Scale storage server. We also talk about RAID-Z layouts, fault tolerance, and ZFS performance, and what I've changed to make this server more robust and solid! #truenasscale #homelab #nas
    Teleport-*: goteleport.com/thedigitallife
    Follow me:
    TWITTER: / christianlempa
    INSTAGRAM: / christianlempa
    TWITCH: / christianlempa
    DISCORD: / discord
    GITHUB: github.com/christianlempa
    PATREON: / christianlempa
    MY EQUIPMENT: kit.co/christianlempa
    Timestamps:
    00:00 - Introduction
    01:32 - Advertisement-*
    02:17 - What was my set-up before?
    06:58 - My new set-up 2x RAID-Z2
    08:15 - New storage pool with SSDs
    ________________
    All links with "*" are affiliate links.

Comments • 225

  • @TrueNAS
    @TrueNAS 1 year ago +164

    It happens to the best of us! Glad you were able to get that sorted out, Christian!

  • @petersimmons7833
    @petersimmons7833 1 year ago +25

    Thanks for being honest and introspective about mistakes. We all make them and sometimes we learn the most from them. Hopefully they don’t cost us too much in the process. Great series.

  • @borealis370
    @borealis370 1 year ago +44

    Good on you for coming clean and sorting that mess out.

    • @christianlempa
      @christianlempa  1 year ago +4

      Thank you :) It's all thanks to your feedback

    • @vicmac3513
      @vicmac3513 1 year ago

      @@christianlempa could you please make a video on how to install/update/maintain it, and how that differs between OSes? I've tried to figure out whether it's even possible, and even my professor said it requires so many resources that manual/scheduled updates are much easier.
      I'm sure using Linode would double your earnings on that project lol.
      Btw, thanks for being such a good teacher.

  • @snowballeffects
    @snowballeffects 1 year ago +8

    There's no better way than learning from the mistakes of others, ahead of having to make them yourself - thank you - brilliant as always!

  • @jonmayer
    @jonmayer 1 year ago

    I made the screengrab! I'm glad you are taking the steps to improve your setup. I must have 2nd, 3rd, and 4th guessed my setup before I deployed it.

  • @Sevbh12
    @Sevbh12 11 months ago +1

    Hi Christian
    Thank you for the great video. I've been learning a lot from you! It would be great to see a video on automatic backup solutions that are out there. I am currently building a NAS with the main goal of automating backups of my production servers, but not sure where to start or what the best practices are. Thank you so much! Keep well.

  • @nixxblikka
    @nixxblikka 1 year ago +1

    I like the exceptional production quality of your videos - one of the few channels where my displays can show what they're capable of - and yeah, the content is also helpful!

  • @dastiffmeister1
    @dastiffmeister1 1 year ago +5

    Great video, Christian. I respect the humility. :)
    I will allow myself to nitpick:
    If you truly want to guard yourself against potential data loss when using identical, new SSDs in a vdev, if I am not mistaken it's best practice to write to (in your case) two drives (one from each separate vdev) before creating the 2x2 striped mirror pool, so that all four SSDs can't potentially fail simultaneously.
    But that would be taking it to an entirely new level of preparation ^^

    • @christianlempa
      @christianlempa  1 year ago +1

      Interesting, I haven't heard about that, but thanks for your insight! :)
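
      For anyone who wants to try that staggered-wear idea: a minimal sketch, assuming the two chosen disks are /dev/sdx and /dev/sdy (placeholder names) and hold no data yet, since this overwrites them:

          # WARNING: destructive. Pre-age two of the four identical SSDs
          # (~100 GB of writes each) so they don't all reach end-of-life together.
          dd if=/dev/urandom of=/dev/sdx bs=1M count=100000 status=progress
          dd if=/dev/urandom of=/dev/sdy bs=1M count=100000 status=progress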

  • @alex.prodigy
    @alex.prodigy 1 year ago

    awesome, these videos are the best ... explaining what went wrong and how it was mitigated
    thanks!

  • @danielfisher1515
    @danielfisher1515 1 year ago

    Great summary, and good changes!

  • @spectreofspace
    @spectreofspace 1 year ago +24

    Keep an eye on your SSD health. ZFS can eat cheap consumer SSDs in no time if you do a lot of writes on them. Forums generally recommend enterprise-grade SSDs for use with ZFS. I had a 128GB Kingston I was using as an L2ARC; in 6 months its health had already dropped to around 70%.

    • @christianlempa
      @christianlempa  1 year ago +2

      Thanks for sharing! I’ll keep an eye on it :)

    • @lorenz323
      @lorenz323 1 year ago +1

      I bought some NAS SSDs, but one already failed after a year. And several Samsung EVOs failed after a few months.

    • @aidanr.579
      @aidanr.579 1 year ago +1

      Samsung Evo running as a virtualization boot drive failed in less than a year. I switched to Intel Optane and after over a year the health hasn’t gone down even a percent.

    • @codycullum2248
      @codycullum2248 1 year ago +1

      @@aidanr.579 How do you change out your boot drive and keep all of your settings? The only way I can think of is to clone the boot drive using a 3rd-party app, but I assume there's an easier way using the software.

    • @aidanr.579
      @aidanr.579 1 year ago +1

      @@codycullum2248 personally I use Proxmox so I did a full backup of my VMs and settings and put it on a new SSD.
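
    On the SSD-health advice above: TrueNAS ships smartmontools, so one way to watch wear from the shell is the sketch below (device names are placeholders):

        # SATA SSDs: look for wear/life attributes such as Wear_Leveling_Count
        smartctl -a /dev/sda
        # NVMe SSDs report an explicit "Percentage Used" field
        smartctl -a /dev/nvme0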

  • @abansalify
    @abansalify 1 year ago +1

    If you add an SSD alongside HDDs in a pool, ZFS can use a portion of the SSD (as a cache or log device) to effectively turn the HDDs into a hybrid drive, which means your I/O speed goes up.

  • @cdm297
    @cdm297 1 year ago

    Great video! I'd request an updated step-by-step video on the TrueNAS setup, including all of this 🙂

  • @ianpogi5
    @ianpogi5 10 months ago

    Thank you for all your videos! How's the health of your SSDs?

  • @TradersTradingEdge
    @TradersTradingEdge 1 year ago +6

    Awesome, Christian. I highly respect you!
    There are not many YouTubers who confess to making mistakes in their domain. And I also learned more about ZFS, because I'm also experimenting with a 6TB TNS server 8-)
    So keep it up, you're doing great.
    Cheers 👊

  • @David_Quinn_Photography
    @David_Quinn_Photography 10 months ago

    I have been using a 2 disk mirror for about 4 years now and I keep a copy on my desktop as well as backing up to a friend's NAS states away from me and the mirror has done well.

  • @cinemaipswich4636
    @cinemaipswich4636 7 months ago

    I scoured YouTube for TrueNAS install videos. This one helped me understand how important RAID-Z is for the security of data. I now have 8 drives in Z2. I lose 1/4 of my storage, but have 2 drives' worth of redundancy.

  • @B20C0
    @B20C0 1 year ago +6

    6:48 Just to add: the recovery process is a critical window not only because of how long it takes when you have no additional fault tolerance, but also because of the extra strain it puts on your drives: during reconstruction you literally read every single bit on the surviving drives.
    Add read failures on top, which happen on average for about 1 bit in every 12 TB read, and you know you're playing with fire.
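
    Putting a rough number on that claim: at the commonly quoted unrecoverable-read-error spec of 1 in 10^14 bits (~12.5 TB), fully reading a 12 TB drive hits at least one URE with probability of roughly 1 - e^(-bits_read x rate). A quick back-of-the-envelope check (my arithmetic, not from the video):

        # P(at least one URE) ~ 1 - exp(-bits_read * error_rate)
        awk 'BEGIN { bits = 12e12 * 8; rate = 1e-14; printf "%.2f\n", 1 - exp(-bits * rate) }'
        # prints 0.62 -- about a 62% chance on a full-drive read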

  • @MrBlackCracker100
    @MrBlackCracker100 3 months ago

    Man, I'm glad I found your video. This is almost the EXACT setup I have running, with 12x 4TB drives. I don't mind sacrificing 33%, because the security is definitely worth it, but full mirroring just felt like overkill. In fact, even though I skipped directly to the part I needed, I'm going to go ahead and rewatch the full video to make sure I didn't miss anything. Maybe you made a mistake I can learn from

  • @passaronegro349
    @passaronegro349 1 year ago

    I'm Brazilian, and new to your channel!!! Congratulations on the work and the captions!!! 🇧🇷

    • @christianlempa
      @christianlempa  1 year ago +1

      Thank you 🙏 but the captions come from YT 😄✌️

    • @passaronegro349
      @passaronegro349 1 year ago

      @@christianlempa I thought it was in the settings of YouTube ... 😂🇧🇷

  • @systemofapwne
    @systemofapwne 5 months ago

    While having a dedicated VM pool on SSDs is absolutely nice, as an intermediate step you could have added an L2ARC and a SLOG device to your main pool and run the VMs from there. For synchronous IOPS (e.g. for databases), the SLOG device is a huge deal, and the L2ARC (especially when marked persistent) will make your VMs feel snappier when reading from their respective disks. In principle, adding more RAM also helps a lot, but the persistent L2ARC is especially helpful when you reboot TrueNAS and then want to spin up your VMs: the ARC in RAM is not yet populated, but the L2ARC still is, giving you SSD IOPS instead of HDD.
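
    For reference, adding those two device classes to an existing pool is a one-liner each; a sketch assuming a pool named tank and placeholder device names:

        # Mirrored SLOG: absorbs synchronous writes (databases, NFS, VMs)
        zpool add tank log mirror /dev/sdx /dev/sdy
        # L2ARC read cache: no redundancy needed, it only holds copies
        zpool add tank cache /dev/sdz
        # Persistent L2ARC is governed by this module parameter
        # (already the default on recent OpenZFS)
        echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled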

  • @alonzosmith6189
    @alonzosmith6189 1 year ago

    Thank you, I'm currently learning TrueNAS and looking to upgrade to Scale.

    • @christianlempa
      @christianlempa  1 year ago

      Oh nice! What are you doing with your NAS? ;)

    • @alonzosmith6189
      @alonzosmith6189 1 year ago

      @@christianlempa For family storage of data (pictures, videos, docs, etc.) and backup. My NAS is their cloud storage.

  • @damani662
    @damani662 1 year ago

    Thanks for the insight.

  • @BrianSez
    @BrianSez 1 year ago +3

    Great video. For the future video mentioned, could you discuss how to revert from the single VDEV to two VDEVs without compromising existing data? Also, I'd love to hear about your backup solution.

    • @christianlempa
      @christianlempa  1 year ago +1

      I needed to copy the data to another location and destroy the pool. That took pretty long, but it was worth it!

    • @andrewr7820
      @andrewr7820 1 year ago +1

      Once you have either a second pool or machine, it's a simple ZFS send/receive (that's what the replication tasks do); rebuild the source array, then push the data back. You *_did_* also have a separate backup before starting, right? 😀
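
      The round trip looks roughly like this from the shell (pool/dataset names are placeholders; the TrueNAS replication tasks wrap the same mechanism):

          # Snapshot everything and push it to the second pool/machine
          zfs snapshot -r tank/data@migrate
          zfs send -R tank/data@migrate | zfs receive -F backup/data
          # ...rebuild the source pool, then send it back the same way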

  • @DarrolKHarris
    @DarrolKHarris 1 year ago +3

    Yes, I would like to see a video on storage plans, replication, backup, and more about TrueNAS Scale.

    • @christianlempa
      @christianlempa  1 year ago +1

      Awesome! Let's do this once I feel comfortable enough ;)

  • @lawrencerubanka7087
    @lawrencerubanka7087 2 months ago

    Thanks for the very clear explanations. I'd love to see your take on backup and recovery of ZFS pools. That would make a good video. I suspect the backup topic would elicit plenty of critical feedback as well. :)

    • @christianlempa
      @christianlempa  2 months ago

      Thanks! :) I hope to make a video about backups at some point

  • @smolicek90
    @smolicek90 1 year ago +1

    Good choice on that redundancy step. Have you considered using the "special vdev" class on your HDD pool? I have a setup of 4x16TB RAIDZ2, a 3x1TB mirrored special vdev, and a 2x200GB SLOG. You can also play with block sizes on datasets to store a dataset entirely on the SSDs.

    • @christianlempa
      @christianlempa  1 year ago

      I haven't done it yet; as far as I understand the docs, SLOG and ZIL come into play when memory is exhausted for caching, but I might do a few more tests with internal NVMes or SSDs in the future. Great tip ;)
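
      For the curious, the special-vdev suggestion above boils down to two commands; a sketch with placeholder names. One caution: a special vdev holds pool metadata, so it must be at least as redundant as the data vdevs - losing it loses the pool:

          # Mirrored special vdev for metadata (and optionally small blocks)
          zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
          # Route all blocks <= 64K of this dataset to the special vdev
          zfs set special_small_blocks=64K tank/vms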

  • @AdenMocca
    @AdenMocca 1 year ago +3

    Good video - glad that Scale is getting a lot of attention. TrueNAS is a great product to help experienced users - but not ZFS developers - get going with a really good solution. The next step for a good backup is to actually create a backup, which means moving the data to a different system. It's expensive at your size, but running a ZFS send/receive - or in TrueNAS, a ZFS replication - is important for real enterprise backup. RAID is not a backup, just a way to enhance availability. For important data you could also consider Backblaze or another solution.

    • @christianlempa
      @christianlempa  1 year ago

      Thank you :)

    • @andrewr7820
      @andrewr7820 1 year ago

      I did just that. A second TN box with fewer, larger drives in striped mirrors. I configured periodic snapshots and replication tasks under the "Data Protection" menu. The next step will be to move the second box off-site to my folks' place.
      FWIW, in the failure recovery scenario, rebuilding a mirror vdev only involves reading the surviving member of the mirror, so the risk of a single drive failing during a rebuild is statistically less likely than in a multi-drive RAIDZn array.

    • @samcan9997
      @samcan9997 7 months ago

      @@andrewr7820 However, on that note it's worth mentioning that an SSD's main cause of failure is being written into the ground, which in a RAID 1/mirror is much more likely to occur at the same relative time for both drives. With spinners, as long as they haven't been dropped, you should be good on write life.

  • @Neo8019
    @Neo8019 1 year ago

    I had 2 servers fail in the space of 2 months because both had 5 disks in RAID 5 and during the rebuild a second disk failed. Luckily I had backups of one, and the other had a mirror. Since then I don't use anything less than RAID 6. An extra HDD is cheaper than the data it holds.
    In the previous video you said you spent about 500 euros on the case. You can find a used HP ProLiant DL380 Gen9 with 2x Intel Xeon E5-2640v3 8-core and 32GB ECC RAM that can hold 12x 3.5" (LFF) drives for around 700-800 euros from a German website, and they provide a 12-month warranty. A dual-port P440ar controller which supports HBA mode will also cost you around 100 euros from the same shop.
    In any case, nice build!!

  • @xordoom8467
    @xordoom8467 1 year ago +4

    I agree, if you require those reads/writes & IOPS then make multiple vdevs. However, I've been running one giant vdev at a size of 172TB without issues for years; it's mainly for Plex and it's been fine all this time. If I have more than 12 streamers the server may buffer once in a while, but for the most part I feel these fears of a large vdev can be exaggerated at times... I do replicate this server to a sister server, just in case...

  • @TheDWehrle
    @TheDWehrle 8 days ago

    I love how he says a second drive dies "sometimes". lol. It has happened to me a few times over the years with the WD Reds, which is why I always use at least raidz2.

  • @TannerWood2k1
    @TannerWood2k1 8 months ago

    This was helpful for understanding how to optimize larger numbers of disks. You said that you have a 10GbE interface - which one are you using? I ended up with 2 different Broadcom cards which would not work and have now ordered a Chelsio card. My system started as a Core build, but my motherboard crashes the installer, so I used Scale instead.

    • @christianlempa
      @christianlempa  8 months ago

      Thanks :) I’m using an X520-DA1/2 card that works great

  • @andrewt9204
    @andrewt9204 1 year ago

    I did something similar: I had 6x 6TB drives all in a Z1 config without doing any research. After reading forums, I decided it was better to have those 6 in a Z2 vdev, or in two vdevs each in Z1. I liked the idea of a bit more performance and went with the 2-vdev option.
    That's pretty much the limit with the board I have. I only have one 16x PCIe slot and two 1x slots. The 16x slot is being used by the 10G fiber card, and based on what I've read, those cheap 1x SATA expansion cards aren't the greatest. I thought about using the two 1x slots with M.2 adapters to mirror two SSDs for a cache. Even a single PCIe 3 lane on an NVMe drive is going to be 3x faster at sequential writes than a SATA HDD vdev, and the IOPS will be magnitudes faster as well. I've basically turned an x4 NVMe drive into a SATA SSD at that point.

    • @christianlempa
      @christianlempa  1 year ago

      That's a good question. I guess in larger systems it's better to buy a server mainboard with more PCIe lanes and faster controllers.

  • @Silent1Majority
    @Silent1Majority 1 year ago

    Excellent breakdown. The question you've created now is: did you need to create a bridge network from the fast (SSD) storage to allow your applications to use the slower pool storage? If so, how? The TrueNAS documentation confuses me on this. 😅

  • @chrisumali9841
    @chrisumali9841 1 year ago

    thanks for the demo and info, yeah, mistakes are a part of life LOL

  • @kewitt1
    @kewitt1 5 months ago

    My setup: NAS 1 - 4x18TB RAIDZ1, NVMe meta and cache, 8TB mirrors, 1TB for apps and VMs. NAS 2 - 8x8TB RAIDZ1, backup. 10GbE between both. A 27TB backup took 15 hours on the 1st sync.

  • @MikeHarris1984
    @MikeHarris1984 1 year ago

    I wish I had seen this last week!!! I was trying to figure out if I should make one big vdev of 20 drives, or two or three smaller RAIDZ1 vdevs, to make my pool and give me better performance with better redundancy but less space. I've got 160TB, so taking 40TB for redundancy is not a big deal. I didn't see the IOPS thing... lol. I have 256GB RAM and am using 8 SSDs for the cache/meta vdevs.

  • @habib.bhatti
    @habib.bhatti 11 months ago

    Quick question: is the fault tolerance PER vdev, not over the entire array as in, say, a traditional hardware RAID solution?

  • @vladimirherrlein3809
    @vladimirherrlein3809 1 year ago

    As soon as you start to play with SSDs (SATA or SAS) with that number of drives, also check your HBA to get the best performance (PCIe lanes used, bandwidth per lane, ...), and whether your backplane uses an expander or not; you may have to change your backplane and/or add another HBA.
    Example: with 4 SAS SSDs I'm reaching the limits of the 6Gb/s HBA on a Dell R720

    • @christianlempa
      @christianlempa  1 year ago +1

      Yeah, that's a great point to keep in mind. I'm not using this storage server extensively, but you're absolutely right. I should do some tests with copying data from both pools at the same time and maybe put the SSDs on a second HBA or internal controller.

  • @RzVa317
    @RzVa317 1 year ago

    I would definitely be interested in a truenas overview video

    • @christianlempa
      @christianlempa  1 year ago

      I already did two videos about truenas scale, maybe that's what you're looking for :)

  • @IvanToman
    @IvanToman 7 months ago +1

    Mirrors only. Simple is always the best.

  • @elonbrisola998
    @elonbrisola998 1 month ago

    I'm configuring my home TrueNAS setup. I started with RAIDZ1 and, after some reading, went to RAIDZ2. I have 5 drives in a single vdev.

  • @wildmanjeff42
    @wildmanjeff42 1 year ago +1

    I have been using TrueNAS/FreeNAS for years and have learned, through years of 24/7 use, that you will have failures. I use Z2 on spinning drives, Z1 (with backup) on SSD arrays. I know a lot of people use Scale for the Linux OS and for running other things in Docker, but my storage is for ONE thing only -- storage and backups. I use the FreeBSD-based Core, as it is VERY established and safer with ZFS than Linux at this point in time. It's your data, and it's your choice of course.
    Thanks for the video!

    • @christianlempa
      @christianlempa  1 year ago

      Thank you for your insight! The good news is, I'm still doing an offline-backup in case the whole server is messed up, but I also have faith in the skills of iX Systems to improve on that ;)

    • @wildmanjeff42
      @wildmanjeff42 1 year ago +1

      @@christianlempa Same here, I have a 2nd server with replication set up to back everything up every 6 hours automatically. I feel like they will get Scale working at the same level as the FreeBSD version; it will just take vetting the product, same as it did over years of use of FreeBSD and the community! The Lawrence Systems YouTube channel goes really in-depth with TrueNAS and is a great resource!

  • @betterwithrum
    @betterwithrum 1 year ago +12

    Christian, given that you're in Germany, I'm curious about your power consumption and what steps you're taking to lower your home lab costs. I'm in the US and I have enough solar on my house to offset 110% of our usage, so this isn't a concern for me. I'm curious how it is for you. Thanks in advance

    • @Oxygen.O2
      @Oxygen.O2 1 year ago +3

      Here in Belgium, running my very old PC turned into an 18TB home server that consumes 65W at idle, 24/7, would cost about 17€/month, and that's not even counting the next price increase in January 2023, which will end up at around 35€/month... So, as you can imagine, I turn it off most of the time! I can't even begin to imagine how much those racked systems use!
      The config:
      Ubuntu 22.04 Server Edition
      Intel Q9450 @ 2.66GHz base clock
      8GB DDR2
      512GB SSD
      HDDs: 14TB + 2x2TB (no RAID, each HDD has its purpose)
      Running multiple Docker apps, a very light homepage through nginx to access all services, and an SMB share for data backup (Time Machine).

    • @christianlempa
      @christianlempa  1 year ago +7

      That's a pretty important topic for me, and I still have so many questions regarding idle power usage and home servers. Currently, the average power consumption of this system is 110W, which is approximately €20 a month, but increasing (due to the Ukraine war and the energy crisis in Europe). Soon I'll need to find another solution. I've heard Intel CPUs idle better than AMD Ryzen; however, the investment in a new CPU + motherboard doesn't pay off - yet.
      My plan is to first build a new Proxmox server with my old PC hardware, once I replace it with a Mac, and take that experience into the decision for the storage server build.

    • @gshadow1987
      @gshadow1987 1 year ago

      @@christianlempa I'm using a cheap HP i5-6400 and have installed Unraid on a USB 3 stick, hooked up to an adapter that sits directly on the mainboard so nobody can rip it out. On it I created a Windows 10 VM, stripped down with the cool utility @ChrisTitusTech has built, and I use the NAS features the OS has to offer. The package power of the CPU under Windows is around 5 watts at idle. I bought the system (mobo + CPU + 8GB RAM) on eBay for around 50 euros from a refurbishing reseller. I'm using two NVMe SSDs inside the 4x slot of the motherboard via an adapter for 2 NVMe drives, which both do 4500MB/s read/write with high IOPS. I also have 12 drives, the same ones you're using; 6 of them are hooked up directly to the 6 SATA ports of the mainboard. For a second pool I added a 16x PCIe card for 6 more SATA III drives. Both pools run reads/writes, cache-supported, at around a GB/s (not Gbit/s) in parallel, more when only one pool is working. Overall total cost: 200 euros, sipping well under 100 watts from the wall. Cost breakdown: 50 for mobo+CPU+RAM, 25 for +8GB RAM, 15 for the PCIe NVMe card, 30 for the PCIe SATA III card, 80 for 2x 500GB NVMe SSDs (with cache chip, very important), 5 for the USB stick (+120 for Unraid).
      For the case I use an old Silverstone Grandia HTPC case with a 500-watt Xilence PSU that I already had. The "server" runs 24/7, buttery smooth and quick as hell.
      Maybe my system inspires you.
      Greetings from the hometown of Blau und Weiss to M-Town (40 min drive) :)

    • @xmine08
      @xmine08 1 year ago

      @@Oxygen.O2 Look into a more modern processor, mobo and PSU. You should be able to get to 25W at idle easily.

    • @xmine08
      @xmine08 1 year ago

      @@Oxygen.O2 For reference, my Ryzen 5950X machine consumes 42W at idle - not great, but for the performance (that I'm not using when idle, lol) it's amazing

  • @gyulamasa6512
    @gyulamasa6512 1 year ago

    With the SSDs, if speed is not the biggest concern, I would do a RAIDZ1 and back it up to a mirrored pair of HDDs. If more speed is needed, I would go for a striped volume of 4 SSDs, backed up to a mirrored HDD volume often enough. In your case, the first setup would result in 1.5TB, the second in 2TB.

    • @christianlempa
      @christianlempa  1 year ago

      Thanks, yeah, there are other possible setups; maybe I'll change it and do some further performance testing. A stripe would probably be the most performant setup.

  • @rahaf.s1217
    @rahaf.s1217 10 months ago

    So in hardware it will be one drive? And then I split it virtually based on the RAIDZ type?

  • @gregjones9601
    @gregjones9601 1 year ago +1

    Love your videos, Christian. Not sure if anyone asked, but I would love to know how you migrated from your original storage layout to the new one. Did you just blow the data away, or is there a way to migrate the data while setting up a new layout? I have two vdevs with 12 drives in each… I questioned my layout originally, and maybe I should have split it up even more! It's always hard to sacrifice storage when you pay $$$$. The flip side is the cost of failure, something I always seem to deny to myself! Thanks for doing what you do!

    • @christianlempa
      @christianlempa  1 year ago

      I needed to copy the data somewhere else, destroy the pool, and create a new one :(

    • @VallabhRao123
      @VallabhRao123 1 year ago

      @@christianlempa How did you manage to have so much spare storage? If you did have it, why not add those drives to the pool alongside all the others to begin with? Did you buy new drives? That brings its own challenges, as now you have a gazillion 4TB drives to manage.
      It would be great if you could explain a bit more, as I am new to the NAS world.

    • @gabrielosvair
      @gabrielosvair 5 months ago

      @@VallabhRao123 I would also love to know this in more detail

  • @dudley810
    @dudley810 1 year ago

    I am pretty sure that "lose all your data" was the other reason why I picked Unraid. I believe you can still read the data on the Unraid drives if more drives fail than the parity covers, but I never tested that. Might be a good test for me as well.

    • @samcan9997
      @samcan9997 7 months ago

      You still technically can with TrueNAS, however you will have blank stripes of missing data, effectively making it unreadable, so unless you're running MultiPar or something, it's as good as lost anyway... And yeah, I've attempted raw data recovery; unless it's gotten a lot better in the last 8 years, there ain't much you can do.

  • @LucS0042
    @LucS0042 1 year ago

    How did you convert without losing data?

  • @nangelo0
    @nangelo0 1 year ago

    Why didn't you combine the storage and VM servers into a single server?

  • @barneybarney3982
    @barneybarney3982 1 year ago

    8:20 Well, it's always about balancing redundancy against cost... Like, of course it's better to have a mirror of two Z2 vdevs, but that way you get only 16TB of capacity from 12x 4TB drives. IMO 1x Z2, 2x Z1 or 3x Z1 is fine for 12 drives...

  • @eloimartinez9446
    @eloimartinez9446 1 year ago +1

    Nice fix, but I have a question: why 12x 4TB HDDs instead of 4x 12TB HDDs? It might be a little bit slower, but it's much more energy efficient, and you have the SSDs for anything I/O intensive.

  • @frets1127
    @frets1127 6 months ago

    So what if you already have data on the new build? I made this mistake: 8x10TB RAIDZ2 in 1 vdev 🤦🏻‍♂️ and copied all my data from the old NAS to it. So now I have to copy it back, reconfigure, then copy it over to the new build again? Ugh. Any recommendations on the best way to copy from new back to old?

  • @erfianugrah
    @erfianugrah 1 year ago

    That panning effect on the "Hey everybody"

    • @christianlempa
      @christianlempa  1 year ago

      Yeah sometimes I still suck at editing :(

    • @erfianugrah
      @erfianugrah 1 year ago

      @@christianlempa Thought it was intentional haha

  • @jwspock1690
    @jwspock1690 1 year ago

    Thanks for the little video

  • @ToxicwasteProductions

    If running multiple drives like you, I tend to use RAID 6 nowadays with larger drives. And to be completely honest, I don't even fully trust that, so I moved over to RAID 6+0 on my main production PC. I have 8 drives in my array, so I get 4TB usable space and ample failure headroom. I can lose basically 4 drives at a time, given the right four drives die, and still be able to recover. Like you say, it makes me sleep a little better at night.

  • @alex10pty
    @alex10pty 1 year ago

    Great video. How do you manage to recreate the pool if the drives already have data? Do you have spare drives to copy the existing data to? I ask because I read that if a drive has data on it, it doesn't show up in the ZFS pool, at least in Proxmox.

    • @christianlempa
      @christianlempa  1 year ago +1

      I needed to copy the data to another location, destroy the pool and re-create it. And yep... that took the whole day and night :D

  • @ozzieo5899
    @ozzieo5899 1 year ago

    hey.. how are those Fanxiang SSDs working out for you? I saw them on Amazon, but was apprehensive about purchasing..

    • @christianlempa
      @christianlempa  1 year ago +1

      Can't really say much negative; so far they're working well! But who knows about reliability and longevity :P

    • @ozzieo5899
      @ozzieo5899 1 year ago

      @Christian Lempa got it.. thanks.. perhaps I'll wait a month or so more.. and if still nothing, I'll pull the trigger on it.. thanks soo much for everything..

  • @zaluq
    @zaluq 7 months ago

    Have you planned a new TrueNAS setup with the changes in ver 23?

    • @christianlempa
      @christianlempa  6 months ago

      Not yet, but I'll look into TrueNAS again when I have time

  • @postnick
    @postnick 1 year ago

    I'm running 3x 1TB SSDs in RAID 0 - I know, I know - but I have my "FILES" backed up to the NVMe boot drive often, and I also keep that key data on a different computer and an extra drive. Thankfully it's only 200 GB at this time.

  • @eNKa007
    @eNKa007 1 year ago

    Why not create a vdev with a higher RAIDZ level instead of breaking the drives into two vdevs?

  • @samuelmoser
    @samuelmoser 11 months ago

    After watching this video..... do I have to be concerned about my configuration? I have 3x8TB in Z1, which means I can only lose one drive, but I don't want to do Z2 with a fourth drive, as then I'd only have an efficiency of 50%. So is it really a problem when I have just 3 drives?

  • @Jimmy_Jones
    @Jimmy_Jones 1 year ago

    Have you come across many bugs? I still think it's too early for the Kubernetes side. Loads of people seem to encounter issues/limitations even on the official pods

    • @christianlempa
      @christianlempa  1 year ago

      Actually not, however, I'm not doing much with the Kubernetes part of truenas, but I haven't seen any bug on my end yet

  • @Bartek2OO219
    @Bartek2OO219 8 months ago

    Isn't RAID 6 (Z2) better than RAID 10 for SSDs?

  • @lalala987
    @lalala987 1 year ago

    Which kind of SSDs fit into the trays?

  • @dillonhansen71
    @dillonhansen71 1 year ago

    What SSDs did you buy? Did you make sure they have NAND flash on them? If they don't, you will get HDD performance :(

    • @christianlempa
      @christianlempa  1 year ago

      I got the Fanxiang S101; they have 3D NAND, if you believe their docs :D

  • @VGAMOTION
    @VGAMOTION 1 year ago

    Can you help me with a question? I'm setting up a server with truenas scale. I only have three bays available and I was planning to put 3 HDDs of 20tb. What do you think is the best configuration? Thank you so much!

    • @sempertard
      @sempertard 11 months ago

      If you truly value the data, then two drives mirrored and the other one for backup. That means you will only have 20TB of storage available out of the 60TB you started with. Yeah.. ouch. Or you could use the three drives in a Z1 (RAID 5) configuration, giving you 40TB available, and use external drives to back up that data. Again, how important is your data?

  • @tabascocrimson7865
    @tabascocrimson7865 1 year ago

    Where did you move all your data off in the process?

    • @christianlempa
      @christianlempa  1 year ago

      I needed to copy them to another hard drive, and yes, that took the whole day and night :D

    • @tabascocrimson7865
      @tabascocrimson7865 1 year ago

      @@christianlempa Me: Need a drive for redundancy in case one fails
      Me: decides one redundancy is not enough
      Also me: When redesigning my array, I rely on a single one.
      Lol

  • @zazuradia
    @zazuradia 7 months ago

    It's not that a second drive fails; it's that if a string of bits fails during parity reconstruction (which is much, much more common), some part of your data is gone.

  • @antonmaier5172
    @antonmaier5172 1 year ago

    I assume you are using the latest TrueNAS Scale version? You didn't mention it.
    What is your idle CPU usage with TrueNAS Scale?
    I tried it about a year ago, and my TrueNAS Scale server's CPU sat at about 25% at idle, which in my opinion is unacceptable.
    The problem then was all those Kubernetes processes doing nothing but still using CPU and electrical power.
    TrueNAS Core 13 on the same hardware uses 0% CPU at idle.
    Has it gotten any better?

    • @christianlempa
      @christianlempa  1 year ago

      I'm using the latest version, and I didn't have any problems with idle, mine is always at 1 to 3%

  • @uuu12343
    @uuu12343 1 year ago

    Question: are you using SSDs for storage?

  • @bsandoval2340
    @bsandoval2340 1 year ago

    Hold on, I'm a little confused. Doesn't a mirror just duplicate the data, meaning you could theoretically lose 3 drives assuming they were all in the same vdev, but if you lose even 1 drive on both vdevs it's all gone? I'm fairly new to a lot of this.

    • @christianlempa
      @christianlempa  1 year ago

      The 2 vdevs aren't in a mirror, but in a stripe, meaning I need both of them to stay intact. Each of them has a parity of 2, so I can lose 2 drives in each vdev, but not more.
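
      For the curious, the layout described here (two striped RAIDZ2 vdevs of six disks each) corresponds to a pool created like this; a sketch with placeholder disk names, since TrueNAS normally does this via the UI:

          zpool create tank \
            raidz2 sda sdb sdc sdd sde sdf \
            raidz2 sdg sdh sdi sdj sdk sdl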

  • @SharkBait_ZA
    @SharkBait_ZA 1 year ago +1

    Please make the video. I want to learn more. 🙂

  • @JohnWeland
    @JohnWeland 1 year ago

    So here's a question: you have multiple vdevs in a single pool. If you wanted to have deduplication, would you need an extra drive per vdev for this, or 1 drive for the entire pool?

    • @christianlempa
      @christianlempa  1 year ago

      I'm not really sure, but I thought deduplication is a compression-like method that takes a lot of your CPU power to compute; its vdev requirements are no different from non-deduplicated pools.

    • @JohnWeland
      @JohnWeland 1 year ago

      @@christianlempa I thought it required a segment of storage to use as a manifest. I may be misremembering. Maybe it’s caching I am thinking of.
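
      Worth noting here: dedup is separate from compression. ZFS keeps a dedup table (DDT) in RAM (or on a dedicated dedup/special vdev) rather than costing an extra drive per vdev, which is likely the "segment of storage" being half-remembered. A sketch for inspecting it (pool name is a placeholder):

          # Show the dedup table histogram and its memory footprint
          zpool status -D tank
          # Dedup is switched on per dataset, not per vdev
          zfs set dedup=on tank/dataset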

  • @roymorrison1075
    @roymorrison1075 1 year ago

    12 drives with only 1 drive of fault tolerance. Yep, I wouldn't have been able to sleep at night. Sometimes size isn't everything! I also have a spare drive per vdev for failover. Drives are cheap when it comes to data that can never be replaced. Try explaining to your wife that you lost all the kids' pictures for the sake of a couple of $150 HDDs. Call it overkill, but I also run a 2nd TrueNAS server that I spin up once a week to replicate the main TrueNAS server from its snapshots. Very quick and easy. But anyway, great video. Thanks Christian.

  • @mt_kegan512
    @mt_kegan512 1 year ago

    Watch your sync write speed to the SSDs when using NFS. It may not be the speed you're expecting for fast VM storage over the network. If you're using anything over 1Gbit/s you may want to look into a separate log device (SLOG). Granted... this will send you down quite the expensive rabbit hole! It's really about how quickly the log device can write and its endurance, not its size. If you don't care about NFS/synchronous speeds, I wouldn't bother, however.
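
    A quick way to test whether sync writes are the bottleneck on an NFS share is to toggle the dataset's sync property; a diagnostic sketch with placeholder names (sync=disabled trades safety for speed, so restore it afterwards):

        zfs get sync tank/vms
        # Temporarily skip sync writes, re-run the benchmark...
        zfs set sync=disabled tank/vms
        # ...then put the default behavior back
        zfs set sync=standard tank/vms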

  • @BrianD-pf4px
    @BrianD-pf4px 1 year ago

    Nice fix-up. Your next mistake was using an Adaptec 71605 16-port SAS/SATA as your controller. The TrueNAS forums all say this controller isn't really an HBA. That being said, I use the same controller in my build, and I get dinged on the forums about it. Also, I'm not sure if you will be able to TRIM those SSD drives with that controller.

    • @christianlempa
      @christianlempa  1 year ago

      Yet it doesn't seem to be a problem; I might hit a performance limitation when I need to use both pools heavily at the same time. But I'm not sure what you mean by it's not a real HBA? It's a controller that runs in HBA mode, so... where is the problem with that?

    • @BrianD-pf4px
      @BrianD-pf4px 1 year ago

      @@christianlempa I am still using that card as well, for rotational drives. Just thought it might be something to look into. The TrueNAS forum seems to be very adamant that the card is a poor choice. Very interested in your take on the card, though. Also, did you check whether you are able to TRIM those SSDs?

  • @heavy1metal
    @heavy1metal 4 months ago

    Fault tolerance is dictated only by how much downtime you can afford; it's not about preventing data loss. If you have everything backed up and have the time to rebuild and recover, then there's nothing wrong with RAIDZ1.

  • @Damarious25
    @Damarious25 3 months ago

    Any update on how those SSDs are holding up?

  • @hpsfresh
    @hpsfresh 1 year ago

    Why not make 3 vdevs of 4 disks each in RAIDZ1?

  • @RossCanpolat
    @RossCanpolat 1 year ago

    I would love to see an NGINX Proxy Manager with SSL for LAN only video. 🙂👍

    • @christianlempa
      @christianlempa  1 year ago

      Mhh I'm not sure if I'd do this, as I'm pretty happy with Traefik as a Reverse Proxy. I will do a video about Traefik on TrueNAS Scale though, maybe that's still interesting ;)

  • @mitchellsmith4601
    @mitchellsmith4601 1 year ago

    I just had two older 4 TB drives fail in a single vdev over a three month period. It happens.

  • @helderfilho4724
    @helderfilho4724 1 month ago

    Please replace your SATA SSDs with Samsung EVOs or Crucial MXs. I bet yours will become slower than old spinning disks as soon as you fill them. That was my case anyway, and I am much happier with good SSDs from reputable brands. I may be too late watching your video, but if you have any news on that, please let me know =). And thanks for the info!

  • @ragtop63
    @ragtop63 6 months ago

    There is no 100% fault-tolerant config. Even with RZ2 it's entirely possible to lose enough drives to kill your entire data storage. In fact, it happened to me many years ago when a lightning storm hit while I was out of town. The storm compromised my PSU, and the PSU killed 5 of the drives and destroyed all of my data.
    Since then, I've come to the conclusion that pools consisting of 4xHDD@RZ1 fit my personal needs. I never have fewer than 2 vdevs in a pool, so my IOPS are better than a single disk's. The throughput has also been good enough for my needs so far. I handle failed disks by having cold spares. Since I'm almost always near my system, if a degraded state were ever to show up, I can just power down and swap the drive in a matter of minutes. It would be different if the server were in a datacenter somewhere that isn't instantly accessible, but for most home users, that's simply not the case. I also have a duplicate identical system at my son's house. The 2 systems are synced, so the data is theoretically always backed up.
    All this is to say, I personally believe that sacrificing 2 disks per vdev in a home environment is a waste of storage space and money. As long as you have 1 or 2 cold spares and a good UPS/protection circuit, you should almost never be put in a situation where the problem can't be resolved immediately.

  • @VassilisKipouros
    @VassilisKipouros 1 year ago

    It could be an idea to add your SSDs to your spinning-disk pool as L2ARC and SLOG devices. Do some research on it. This way you can increase your spinning pool's performance...

    • @christianlempa
      @christianlempa  1 year ago

      Thank you! I’m currently fine with the memory caching but it’s indeed an interesting topic

    • @samcan9997
      @samcan9997 7 months ago +1

      or just buy 1TB of LRDIMMs, as they're cheaper and faster than replacing SSDs every few months, but eh
      special metadata vdevs can also help a lot

  • @Mr_Meowingtons
    @Mr_Meowingtons 1 year ago

    Yeah, I have 10 4TB drives and I put them in RAID-Z2.
    My Plex server running 15 drives is on hardware RAID 6, but I want to change that to TrueNAS + an HBA some day, and run a 2U for the Plex server.

  • @stacygirard647
    @stacygirard647 5 months ago

    Good thing I found this video just before I get the old used PC I'm getting for my NAS, so I will be sure not to make that mistake 🙂
    I'm getting an old i5 7th gen with 16 gigs of RAM (I'll upgrade it over time to 64, the max that motherboard can take),
    and I bought 3 1TB SSDs and a 250GB NVMe SSD for the cache. It already has a 112GB SSD that I'll use as the boot drive.
    I will use it as a cloud (Nextcloud), my Plex server, and maybe other things I'll find over time. I already got a domain name to access my cloud etc.,
    and already have Cloudflare set up too.
    And I'll save money to later get a bigger server where I'll install Proxmox and run VMs, and I'll see if I keep this one as a NAS server or use it only for something else.
    And mistakes happen to anyone.

  • @THEMithrandir09
    @THEMithrandir09 1 year ago

    So with SSDs you need to watch out whether you buy QLC or TLC NAND storage. TLC is great; QLC is slow as hell, but often cheaper per TB. There's more to it than that, but QLC is often a rip-off, especially for SSDs smaller than 2TB.

  • @YouTubeGlobalAdminstrator

    Those SSDs might fail quickly; it's really not recommended to use consumer drives in a server environment due to their endurance.

    • @christianlempa
      @christianlempa  1 year ago

      Well, that's what everybody says, but no one could actually point me to reasonable docs about the impact of the missing features.
      So as I said, I'm going to test it; if an SSD dies, it's just $40 for a new one ;)

    • @cyberagent009
      @cyberagent009 1 year ago

      @@christianlempa I suppose every SSD has a TBW rating before it can fully fail. This information is available on the SSD manufacturer's website. Enterprise drives have a higher MTBF. Just my two cents. Correct me if I'm wrong.

    • @severgun
      @severgun 11 months ago

      @@christianlempa There are actually no fancy "features". Enterprise SSDs just have more spare cells, so they last longer.
      All CoW filesystems suffer from write amplification.

  • @George-rm7yw
    @George-rm7yw 1 year ago

    In my opinion, trying and failing is the only way to learn!

  • @kreaweb-be
    @kreaweb-be 1 year ago

    I used consumer SSDs for a while but went back to HDDs, because ZFS eats up SSDs: way too many write operations cause consumer SSDs to degrade in a few months.

  • @putrag2loh
    @putrag2loh 1 year ago

    What about when the OS breaks? Can we rescue all our data on the vdevs?

    • @christianlempa
      @christianlempa  1 year ago

      You can import the ZFS pool into a new system. So either back up and restore the OS disk, or set up a new one and import the pool
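
      From a fresh install, that import looks like this (pool name is a placeholder; the TrueNAS UI exposes the same thing as "Import Pool"):

          # List pools found on the attached disks
          zpool import
          # Import by name; -f forces it if the old OS never exported the pool
          zpool import -f tank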

  • @MrJonsson9
    @MrJonsson9 8 months ago

    What is this "starch-server"?

  • @heinowalther5023
    @heinowalther5023 1 year ago

    I don't agree that the IOPS of a vdev equal the IOPS of one disk. I think this is "old" information that has been corrected in later ZFS releases... I use a 24-disk shelf with just one vdev (it's a dRAID3 because of the better rebuild times; dRAID can only be set up from the command line on TrueNAS). I just did a simple write test where I was able to reach over 5,000 IOPS with a 128K block size (1.5GB/sec)... so go figure?
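
    For reference, a dRAID layout like the one described might be created along these lines (a sketch, not the commenter's exact command; disk names are placeholders): parity 3, 8 data disks per redundancy group, 24 children, 2 distributed spares:

        zpool create tank draid3:8d:24c:2s /dev/sd[a-x]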

  • @romanrm1
    @romanrm1 1 year ago

    You said performance is the most important thing for the SSD array, and then you left the Encryption checkbox on. What for? Test with and without; typically it will hurt performance a lot, even if your CPU has hardware acceleration. And given "performance matters, and if anything happens I can just restore from backup", I expected you'd just run RAID 0 across all four.

  • @bensatunia8842
    @bensatunia8842 1 year ago

    No Schadenfreude ... The Pro

  • @nexovec
    @nexovec 7 months ago

    Did they also tell you that Z2 has worse performance?

  • @enormouschunks7138
    @enormouschunks7138 1 year ago

    Before watching the video: was the mistake installing TrueNAS?

    • @christianlempa
      @christianlempa  1 year ago +1

      *SLAP* watch the video

    • @enormouschunks7138
      @enormouschunks7138 1 year ago

      @@christianlempa I did and still think installing truenas was the worst mistake in the video.

  • @ArifKamaruzaman
    @ArifKamaruzaman 1 year ago

    I created a stripe and am too lazy to change it.

  • @MokshaDharma
    @MokshaDharma 1 year ago

    Wen Mastodon join?