Testing Synology and TrueNAS NFS VS iSCSI

  • Added 19 June 2024
  • lawrence.video/xcp-ng
    Benchmark Links used in the video
    openbenchmarking.org/result/2...
    openbenchmarking.org/result/2...
    Synology Tutorials
    lawrence.technology/synology/
    XCP-NG Tutorials
    lawrence.technology/xcp-ng-an...
    Linux Benchmarking
    • SDWAN Failover and Ban...
    Getting Started With The Open Source & Free Diagram tool Diagrams.NET
    • Getting Started With T...
    Connecting With Us
    ---------------------------------------------------
    + Hire Us For A Project: lawrencesystems.com/hire-us/
    + Tom Twitter 🐦 / tomlawrencetech
    + Our Web Site www.lawrencesystems.com/
    + Our Forums forums.lawrencesystems.com/
    + Instagram / lawrencesystems
    + Facebook / lawrencesystems
    + GitHub github.com/lawrencesystems/
    + Discord / discord
    Lawrence Systems Shirts and Swag
    ---------------------------------------------------
    ►👕 lawrence.video/swag
    AFFILIATES & REFERRAL LINKS
    ---------------------------------------------------
    Amazon Affiliate Store
    🛒 www.amazon.com/shop/lawrences...
    UniFi Affiliate Link
    🛒 store.ui.com?a_aid=LTS
    All Of Our Affiliates that help us out and can get you discounts!
    🛒 lawrencesystems.com/partners-...
    Gear we use on Kit
    🛒 kit.co/lawrencesystems
    Use offer code LTSERVICES to get 5% off your order at
    🛒 lawrence.video/techsupplydirect
    Digital Ocean Offer Code
    🛒 m.do.co/c/85de8d181725
    HostiFi UniFi Cloud Hosting Service
    🛒 hostifi.net/?via=lawrencesystems
    Protect your privacy with a VPN from Private Internet Access
    🛒 www.privateinternetaccess.com...
    Patreon
    💰 / lawrencesystems
    ⏱️ Timestamps ⏱️
    00:00 NFS VS iSCSI
    01:42 Scope and Setup
    02:45 Difference Between iSCSI & NFS
    08:50 Test Results
    12:48 Storage Design Considerations
  • Science & Technology

Comments • 83

  • @ewenchan1239
    @ewenchan1239 2 years ago +23

    Thank you for this video.
    Yes, I would definitely love to learn more about the different use cases for iSCSI vs. NFS.
    I've never really dove into it much, so thank you for putting this video together and explaining this to us.
    I greatly appreciate it.

  • @joshsmith4998
    @joshsmith4998 2 years ago +8

    I think a deeper dive into the philosophy of storage design would be helpful! I've set up iSCSI, Fibre Channel, and SMB/NFS shares in the past across various VMware topologies, but never really got into the nitty-gritty of optimizing storage for your VMs for performance, security, and scalability :)

  • @falazarte
    @falazarte 2 years ago +1

    Great video! Looking forward to the storage design video.. Thank you!

  • @chrisipad4425
    @chrisipad4425 2 years ago +1

    Thanks for this easy-to-follow comparison between NFS and iSCSI!

  • @joelsmith2525
    @joelsmith2525 2 years ago +2

    The case you mention near the end of the video with a Graylog VM, and how to handle the storage differently would be super helpful to me! I'm planning on setting up Graylog (sort of half started already) and the storage aspect is one part I was very unsure about.

  • @engrpiman
    @engrpiman 2 years ago +5

    Side note: I have found that Synology always reattaches via iSCSI when the VM reboots. My QNAP NASes often had trouble and needed me to manually mount the drive.
    What you do is use block storage and then use Veeam to snapshot and back up the individual VMs. Works great.

  • @RicoCantrell
    @RicoCantrell 2 years ago +3

    Awesome explanation!

  • @GrishTech
    @GrishTech 2 years ago +2

    Ah yes. Thanks for the updated tests.

  • @84Actionjack
    @84Actionjack 2 years ago +2

    Would be very interested in different use cases of running a Windows VM with iSCSI and how the data should interface with the VM. Looking forward to that and more. Thanks

  • @devoid42
    @devoid42 2 years ago +1

    Great video. I'm in the market to build a network storage solution, so this was very much of interest to me. I have a requirement for family storage, but I also host VMs that will be utilizing the storage.

  • @handlealreadytaken
    @handlealreadytaken 2 years ago +8

    Interesting content. This was always a hot topic when implementing either EMC or NetApp systems with VMware and/or Windows running on bare metal in a clustered environment. I'm sure a lot has changed since I touched those, but at the time tiered storage was handled differently at a block vs file level.

    • @fastbimmerrob
      @fastbimmerrob 2 years ago +1

      You would be surprised... not much has changed 😬 It would be like riding a bicycle!

    • @BruceFerrell
      @BruceFerrell 2 years ago

      And it still is. Clustered access to iSCSI requires OS-level disk management to allow it correctly. Without that, file system corruption can and does occur.

  • @chromerims
    @chromerims 11 months ago

    Hmm... I have thin-provisioned iSCSI before. Just this week, in fact.
    Excellent video, sir 👍

  • @michaelchatfield9700
    @michaelchatfield9700 7 months ago

    Very helpful.

  • @andibiront2316
    @andibiront2316 2 years ago

    Running TrueNAS Core and ESXi. Both the NAS and ESXi are using thin provisioning with iSCSI. ESXi even sends UNMAP commands to TrueNAS when a thin disk shrinks (files are deleted on the guest OS file system).
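
    One rough way to verify the space reclamation described above is to watch the zvol's allocated space on the ZFS side while the hypervisor issues UNMAP. A minimal sketch, assuming it runs on the TrueNAS/ZFS host with the stock zfs CLI and a placeholder zvol name:

```python
#!/usr/bin/env python3
"""Poll a zvol backing an iSCSI extent to see whether UNMAP/TRIM from the
hypervisor is actually reclaiming blocks on the ZFS side.
Sketch only: the zvol name is a placeholder."""
import subprocess
import time

ZVOL = "tank/iscsi/esxi-lun0"  # hypothetical zvol backing the iSCSI extent

def zfs_get(prop, dataset):
    """Return a numeric ZFS property in bytes (zfs get -Hp prints raw values)."""
    out = subprocess.run(
        ["zfs", "get", "-Hp", "-o", "value", prop, dataset],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return int(out)

if __name__ == "__main__":
    for _ in range(10):
        used = zfs_get("used", ZVOL)            # space actually allocated in the pool
        logical = zfs_get("logicalused", ZVOL)  # logical usage before compression
        print(f"{ZVOL}: used={used / 2**30:.1f} GiB  logicalused={logical / 2**30:.1f} GiB")
        time.sleep(30)  # delete files in the guest / trigger UNMAP and watch 'used' drop
```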

  • @adam872
    @adam872 2 years ago +3

    In spite of some performance degradation, NFS all the way for me. I find the convenience and flexibility are worth a lot more than the performance gains (in some cases) of iSCSI. Thanks for the video.

  • @RyanOHaganWA
    @RyanOHaganWA 2 years ago +4

    Hey Tom, can we do a segment about CEPH?

  • @phrag5944
    @phrag5944 2 years ago +7

    Super good explanation. I've used the "Ethernet to HDD" analogy before to describe iSCSI, or a "locally appearing, network-attached raw block of a drive that looks like a locally installed drive to the layman."
    "Ethernet to HDD" is better, I guess.

  • @itgoatee
    @itgoatee 2 years ago

    What was your disk layout on the TrueNAS system? I am trying to run the same suite, and I am getting 300 seconds on the SQLite tests.

  • @tedmiles2461
    @tedmiles2461 2 years ago +4

    24:20 If you think you'll need to use snapshots on TrueNAS/ZFS, why not make multiple zvols, one per VM, instead of one zvol for a pool of VMs?
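
    For anyone who wants to try that layout, a minimal sketch of creating one zvol per VM, each with its own snapshot, assuming a placeholder pool path, placeholder VM names/sizes, and the standard zfs CLI on the TrueNAS host (each zvol would then be shared as its own extent/target in the UI):

```python
#!/usr/bin/env python3
"""Create one zvol per VM so each VM gets its own iSCSI extent and can be
snapshotted and rolled back independently.
Sketch only: pool path, VM names and sizes are placeholders."""
import subprocess

POOL = "tank/vms"  # hypothetical parent dataset
VMS = {"web01": "50G", "db01": "200G", "graylog01": "500G"}

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

for name, size in VMS.items():
    zvol = f"{POOL}/{name}"
    run("zfs", "create", "-s", "-V", size, zvol)  # -s = sparse; drop it for thick zvols
    run("zfs", "snapshot", f"{zvol}@initial")     # per-VM snapshot, per-VM rollback
```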

  • @Mr_Sprint
    @Mr_Sprint 2 years ago +1

    As mentioned about TrueNAS and restoring snapshots, this is why I set up separate extents for each VM, so no two VMs live on the same LUN.

    • @Supermansdead81
      @Supermansdead81 2 years ago +1

      That's exactly what I do. I do put test VMs that are considered important production in a larger random LUN... but all important production VMs have their own unique IntelliFlash LUN. I then relax knowing I've got SAN snapshots per LUN on schedules, as well as our Veeam backup jobs to one Veeam storage repository and backup copy jobs to a separate Veeam storage repository. I've also gone through setting up Veeam SureBackup jobs for automatic Veeam restore point verification in a Veeam Virtual Lab. It's a great setup if your stuff is exclusively in vSphere. We mainly use ESXi hosts now, which makes the whole process pretty streamlined at this point. I do thin VMDKs exclusively, and the IntelliFlash does that on the backend as well for iSCSI LUNs.

  • @hazaqames477
    @hazaqames477 2 years ago +3

    Did you use NFS 4.1 and multipathing? I may have missed your NFS setup.

  • @mscari
    @mscari 2 years ago +1

    How about using VMM Pro as a hypervisor? Would the performance be better compared to the setup you tested?

  • @curefanz
    @curefanz 8 months ago

    Great video

  • @hescominsoon
    @hescominsoon 2 years ago +1

    I run a single extent per VM. This way a snapshot is available per VM, instead of putting all of the VMs inside of one extent, which does limit your snapshotting options. :)

  • @tedmiles2461
    @tedmiles2461 2 years ago

    BTW, you can also mount a snapshot on ZFS/TrueNAS and copy out just the one VM that you wanted to restore.
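
    A minimal sketch of that restore path, with placeholder dataset, snapshot, and file names: for a file-backed dataset the snapshot is already browsable under .zfs/snapshot, and for an iSCSI zvol you can clone the snapshot and attach the clone as a temporary extent:

```python
#!/usr/bin/env python3
"""Pull a single VM's disk back out of a ZFS snapshot without rolling back
the whole dataset or zvol.
Sketch only: dataset, snapshot and file names are placeholders."""
import shutil
import subprocess

# Case 1: file-backed (NFS-style) dataset -- snapshots are browsable read-only
# under <mountpoint>/.zfs/snapshot/<snapname>/
SNAP_DIR = "/mnt/tank/vms/.zfs/snapshot/auto-2024-06-01"
shutil.copy2(f"{SNAP_DIR}/graylog01.vhd", "/mnt/tank/vms/graylog01.vhd")

# Case 2: iSCSI zvol -- clone the snapshot into a new zvol, attach the clone as a
# temporary extent/LUN, copy out the one virtual disk you need, then clean up.
subprocess.run(["zfs", "clone",
                "tank/iscsi/vm-lun@auto-2024-06-01",
                "tank/iscsi/vm-lun-restore"], check=True)
# ...attach tank/iscsi/vm-lun-restore as an extent, recover the disk, then:
subprocess.run(["zfs", "destroy", "tank/iscsi/vm-lun-restore"], check=True)
```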

  • @jeffm2787
    @jeffm2787 2 years ago

    I've had excellent luck with ESXi, TrueNAS (not Core), and iSCSI. Effectively thin provisioned and compressed with LZ4. Generally speaking, iSCSI will outperform NFS with ESXi. FC is of course an even better option.

  • @JoeTaber
    @JoeTaber 2 years ago

    Instead of using iSCSI, I wonder if it'd be better to run block-device-level workloads on the VM host in ZFS, then use ZFS send on a frequent schedule to transfer the data to the TrueNAS device, and ZFS receive from TrueNAS when migrating the VM to a new host.
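
    A minimal sketch of that replication loop, assuming SSH key auth to a placeholder TrueNAS host, placeholder dataset names, and the stock zfs CLI on both ends (real setups usually lean on a tool like zrepl or sanoid/syncoid instead):

```python
#!/usr/bin/env python3
"""Periodically replicate a local ZFS dataset/zvol to a TrueNAS box using
incremental zfs send/receive.
Sketch only: dataset names, remote host and interval are placeholders."""
import subprocess
import time

SRC = "tank/vm/graylog01"            # local dataset or zvol backing the VM
DST = "backup/vm/graylog01"          # receiving dataset on the TrueNAS side
REMOTE = "root@truenas.example.lan"  # hypothetical receiver
INTERVAL = 900                       # seconds between replication runs

def replicate(prev):
    """Snapshot SRC and send it (incrementally, if prev is given) to REMOTE."""
    snap = f"{SRC}@repl-{int(time.time())}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    send = ["zfs", "send"] + (["-i", prev] if prev else []) + [snap]
    sender = subprocess.Popen(send, stdout=subprocess.PIPE)
    # pipe: zfs send ... | ssh remote zfs receive -F dst
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", DST],
                   stdin=sender.stdout, check=True)
    sender.wait()
    return snap

if __name__ == "__main__":
    last = None
    while True:
        last = replicate(last)
        time.sleep(INTERVAL)
```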

  • @monamoralisch264
    @monamoralisch264 2 years ago

    Nice one, thanks for the upload.

  • @TimothyHora
    @TimothyHora 2 years ago

    I use TrueNAS bare metal on an HPE DL380p Gen8 as the storage array in my VMware HA cluster (built with 3 HP Z420s). I've implemented the RAIDZ2 storage over iSCSI (VASA support) with Multipath I/O and have no problems with snapshotting/rolling back a VM in a LUN.
    Did you also run your benchmarks with Multipath I/O? That would surely be interesting in the context of this vid :)
    I'm talking here about my home lab, not the enterprises I work for - just to be sure :)

    • @Prime0pt
      @Prime0pt 2 years ago +1

      This video is about XCP-NG. Its way of working with storage is different from VMware's.

    • @TimothyHora
      @TimothyHora 2 years ago

      @@Prime0pt I know. What I mean is: my storage is based on TrueNAS, which is where the VMs of the VMware cluster (different machines) have their home. The 3 ESXi hosts are only the "compute nodes", if you will; they have no storage built in. All storage is centralized on the TrueNAS, which supports VASA. So I'm talking here only about storage, and my storage is built on a DL380. It would interest me if anybody has done performance checks with TrueNAS in iSCSI Multipath I/O mode - ideally with the VASA protocol, but that's not mandatory for my question ;)

  • @berndeckenfels
    @berndeckenfels 2 years ago +3

    iSCSI could thin provision, but the ZFS trim implementation is not very mature. Compression, however, does help in keeping free blocks out of the zvol usage.

    • @fastbimmerrob
      @fastbimmerrob 2 years ago

      I love the videos here, but this info is not correct. You can very much thin provision LUNs and present them over iSCSI or FC.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +1

      You cannot with XCP-NG

    • @fastbimmerrob
      @fastbimmerrob 2 years ago +1

      @@LAWRENCESYSTEMS Ah, I knew it had to have made sense if you're posting it! Thanks again Sir.

    • @saintbenedictscholacantorum
      @saintbenedictscholacantorum 2 years ago +1

      On TrueNAS I just check the "Sparse" option on the zvol for the iSCSI extent, and it thin provisions just fine. I can present the extent to Windows or to Proxmox or I imagine to anything. But I can believe that a trim problem might eventually negate the thin provisioning; I haven't used it long enough to see the impact.
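
      For reference, the CLI equivalent of that Sparse checkbox is roughly the sketch below (placeholder pool/zvol names); the sparse zvol skips the refreservation and only consumes what is actually written:

```python
#!/usr/bin/env python3
"""Thick vs thin (sparse) zvols from the CLI: the 'Sparse' checkbox in the
TrueNAS UI corresponds to zfs create -s.
Sketch only: pool and zvol names are placeholders."""
import subprocess

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("zfs", "create", "-V", "1T", "tank/iscsi/thick-lun")       # reserves the full 1T up front
run("zfs", "create", "-s", "-V", "1T", "tank/iscsi/thin-lun")  # consumes only what gets written
# The thick zvol carries a refreservation; the sparse one does not.
run("zfs", "get", "volsize,refreservation,used",
    "tank/iscsi/thick-lun", "tank/iscsi/thin-lun")
```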

  • @scoopzuk
    @scoopzuk 2 years ago

    Hi Tom - did you do any more videos on storage design considerations? I have watched lots of your videos but haven't seen one about storage design for VMs and data storage. I currently have all my data stored inside the Windows VM, which is making VM snapshots huge and slow. I'd like to learn more about the best way to set up a Windows file server/DC VM but with storage for files to share to 30 workstations. I've been tinkering with TrueNAS SCALE recently and was thinking of setting up the main data store as SMB shares there, linked to AD to handle share permissions. Or is it better to share via iSCSI to the Windows VM and then use Windows to share the files and folders?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +1

      I have not done a video on that yet, but it's on my to-do list, because we often do consulting work to fix issues created by people creating huge VMs. Using iSCSI to connect TrueNAS to Windows is a great way to do it.

    • @scoopzuk
      @scoopzuk 2 years ago +1

      @@LAWRENCESYSTEMS Thanks. I'll keep an eye out for the video in the future; I'm sure it'll be very helpful to me and others with similar bad setups that have been inherited. It was all made worse when I took the VM offline to consolidate snapshots in ESXi, not realising there were snapshots from years ago, so the consolidation took 3 days; a failed P440ar battery meant write speeds were painfully slow.

    • @scoopzuk
      @scoopzuk 2 years ago

      @@LAWRENCESYSTEMS I've also been left undecided between iSCSI to a Windows file server and a direct SMB share from TrueNAS, because I love the TrueNAS snapshot options. Every 10 mins for 8 hrs, every hour for 5 days, every month for a year, etc. I have a consulting engineering company and my employees use "previous versions" a lot when they accidentally "save" instead of "save as", so they can self-fix it without bothering me. I have Windows snapshots/VSS twice a day, but the more granular schedules for snapshot taking and scrubbing that TrueNAS offers, and the fact it integrates with "previous versions", are tempting. So I was all set to go that route... but my employees also use Windows file search a tonne; we have 40+ years of data and the Windows file server index makes search results instantaneous for workstations. Sadly I think I'll never get that functionality from TrueNAS? This is the kind of stuff I'd love to hear you discuss on homelabs or this channel.

  • @juancarlospizarromendez3954

    I suggest a comparison of iSCSI vs NFS vs SMB vs FTP vs SFTP vs HTTP vs HTTPS vs SCP protocols for different workloads.

  • @johnholland2575
    @johnholland2575 2 years ago

    I was wondering how I can migrate XCP-NG zvols or datasets from one server to another. The reasoning being that I'd like to be able to migrate individual PSQL DBs on zvols or datasets from one server to another.
    Thanks Tom for the informative content.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago

      Using ZFS replication czcams.com/video/XOm9aLqb0x4/video.html

  • @teaearlgrayh0t
    @teaearlgrayh0t 2 years ago

    The inability to provision thin volumes has nothing to do with the protocol, but with a limitation of the storage device. I would also use vVols with iSCSI.

  • @lordgarth1
    @lordgarth1 2 years ago +2

    So block storage vs file storage?

  • @jeffreyplum5259
    @jeffreyplum5259 2 years ago

    I am a home user, just exploring servers. The only place many people see this thick versus thin provisioning is in VirtualBox. If one chooses fixed-size disks, the client is guaranteed access to that much disk space. That storage is carved out of the storage pool without question. This is great for performance, but expensive in storage space. iSCSI expands this high-performance model to a disk server, often over a very high-speed connection. NFS thin provisioning is like the VirtualBox dynamic disk. One gives up absolute performance over a known disk size for a more economical storage system.
    In my case, my VM hosts are small, with limited internal storage. Dynamic disk provisioning allows me to squeeze the most out of my modest VM host SSD space. I plan on using NFS storage for user data and images. I can also offload static data and snapshots to a file server. Eventually even my VMs may use the NFS shares as well. I am more comfortable running VMs than containers at the moment. I can load more VMs onto my host with mostly static data offloaded to a file server. I may also add emulated systems to my home lab. I can use storage on my older systems to back up my VM hosts. Many thanks for your help.
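
    For the VirtualBox side of that comparison, fixed vs dynamic disks map to VBoxManage's disk variants; a minimal sketch with placeholder paths and sizes:

```python
#!/usr/bin/env python3
"""Illustrate thick vs thin provisioning with VirtualBox disk images:
'Fixed' allocates the full size immediately, 'Standard' grows on demand.
Sketch only: paths and sizes are placeholders."""
import subprocess

def create_disk(path, size_mb, fixed):
    variant = "Fixed" if fixed else "Standard"  # Standard = dynamically allocated
    subprocess.run(
        ["VBoxManage", "createmedium", "disk",
         "--filename", path, "--size", str(size_mb), "--variant", variant],
        check=True,
    )

create_disk("/vms/thick.vdi", 40960, fixed=True)   # ~40 GB carved out immediately
create_disk("/vms/thin.vdi", 40960, fixed=False)   # tiny on disk until the guest writes
```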

  • @ProjectUnknowEddi
    @ProjectUnknowEddi 2 years ago +4

    What I've seen with iSCSI vs NFS...
    Storage system in DC A and servers in DC B - 40 km of fiber between them over a DWDM layer 1 network.
    For some reason iSCSI fell back to 64 MTU packets... NFS just takes the 9000 MTU. We had tested both directions end to end with a Viavi MTS-5800 -> no problem.
    The customer had an HP storage array / FreeNAS / QNAP for testing - the servers were ESXi. We were not able to find the error. (As a DC, customer equipment is customer owned - so not our problem.)
    But from what I've seen, NFS just handles (longer / changing) latency way better... also, routing NFS is not really a problem. I've seen corrupted iSCSI over a 180 km span - just because the systems of the NetApp MetroCluster got out of sync -> latency (NetApp says it must be the same cable distance, within ~20-30 m...); we had to install a compensation fiber of ~18 km to get the same latency in both directions...
    "Very enterprise stuff... works great... most of the time :P"
    Thanks for sharing your benchmarking :D

    • @rockenrooster
      @rockenrooster 2 years ago

      Do people really have SANs that far away from the compute host? Is this common? Seems like a TERRIBLE idea regardless.

    • @JeroenvandenBerg82
      @JeroenvandenBerg82 2 years ago

      Running iSCSI or NFS over these distances sounds like a bad idea? I have never seen a vendor that supports that?

    • @ProjectUnknowEddi
      @ProjectUnknowEddi 2 years ago

      @@rockenrooster Yes... fully synced clusters. RTT is 4.2 ms.

    • @ProjectUnknowEddi
      @ProjectUnknowEddi 2 years ago

      @@JeroenvandenBerg82 Customers with a full sync cluster just don't care - they build it. RTT is 4.2 ms.
      This specific customer just has his storage over there and does full caching on local SSDs.

    • @rockenrooster
      @rockenrooster 2 years ago

      Ahh, local SSD cache would make a huge difference....

  • @jedring3756
    @jedring3756 2 years ago

    One thing to point out: Synology will thin provision iSCSI and it works correctly under VMware. Additionally, on iSCSI, at least as far as Synology goes, you can use the snapshot of an iSCSI LUN to make a new LUN, then add that to a target, attach it to your host, and pull the VHD you need from your recovered LUN to your live LUN.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago

      Yes, it also gives you the ability to snapshot it, but it gives a performance warning when configured that way.

    • @Prime0pt
      @Prime0pt 2 years ago +1

      VMware VMFS uses thin provisioning, so it will work over iSCSI. XCP-NG uses LVM and thick provisioning, so the Synology's thin provisioning will be useless.

  • @scorpjitsu
    @scorpjitsu 2 years ago

    I recognize that Bigby cup! Are you in MI?

  • @JeroenvandenBerg82
    @JeroenvandenBerg82 2 years ago

    iSCSI does not have to be thick provisioned; this sounds like a limitation of Xen. In my lab I have a thin-provisioned volume in VMware on a thin-provisioned LUN in FreeNAS connected over iSCSI. This does require you to monitor the 'real' free space, because it's easy to overprovision and run out of space.
    We run a Pure Storage SAN (all-flash) at my work environment and that is the recommended configuration. According to my vCenter it's storing 12TB of the 19TB provisioned, and this is using just 3.76TB on the SAN - that's with deduplication, compression, and thin provisioning, all over iSCSI.
    And with the storage integration in Veeam we can restore a single VM from a full volume snapshot within a few minutes.
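
    Monitoring that 'real' free space can be as simple as comparing the total size presented to initiators against what the pool actually has left. A minimal sketch, assuming a placeholder pool name, the stock zfs/zpool CLI, and that every zvol in the pool is exported as a LUN:

```python
#!/usr/bin/env python3
"""Warn when thin-provisioned zvols are overcommitted versus the real free
space in the pool.
Sketch only: the pool name is a placeholder."""
import subprocess

POOL = "tank"  # hypothetical

def rows(cmd):
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return [line.split("\t") for line in out.splitlines()]

# What the initiators think they have: sum of every zvol's advertised volsize.
provisioned = sum(
    int(size)
    for _name, size in rows(["zfs", "list", "-Hp", "-t", "volume",
                             "-o", "name,volsize", "-r", POOL])
)
# What the pool really has left.
free = int(rows(["zpool", "list", "-Hp", "-o", "free", POOL])[0][0])

print(f"provisioned to initiators: {provisioned / 2**40:.2f} TiB")
print(f"actually free in pool:     {free / 2**40:.2f} TiB")
if provisioned > free:
    print("WARNING: overprovisioned -- watch real usage before the pool fills up")
```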

  • @Ajicles
    @Ajicles 2 years ago +2

    Wonder what results you would have with jumbo frames enabled.

    • @BruceFerrell
      @BruceFerrell 2 years ago

      Jumbo frames really need to be done on a separate storage LAN.
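
      For anyone testing that, a minimal sketch for enabling MTU 9000 on a dedicated storage NIC and checking that the whole path really passes jumbo frames (interface name and NAS address are placeholders; the switch ports and the NAS need a matching MTU too):

```python
#!/usr/bin/env python3
"""Enable MTU 9000 on a storage interface and verify the path end to end.
Sketch only: interface name and target address are placeholders."""
import subprocess

IFACE = "eth1"            # hypothetical storage-only NIC
TARGET = "192.168.50.10"  # hypothetical NAS address on the storage VLAN

subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", "9000"], check=True)

# -M do = don't fragment; 8972-byte payload + 20 IP + 8 ICMP = one 9000-byte frame
result = subprocess.run(["ping", "-c", "3", "-M", "do", "-s", "8972", TARGET])
print("jumbo frames OK" if result.returncode == 0
      else "jumbo frames NOT passing end to end -- check switch/NAS MTU")
```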

  • @hescominsoon
    @hescominsoon 2 years ago

    I create a different iSCSI LUN for each VM; it solves the iSCSI restore issue. :)

  • @NetBandit70
    @NetBandit70 2 years ago +2

    I triple dog dare you to make a video on AoE (ATA over Ethernet) and HyperSCSI

  • @apigoterry
    @apigoterry 2 years ago +3

    How about using multipath vs LACP on iSCSI vs NFS?

    • @BruceFerrell
      @BruceFerrell 2 years ago +1

      Multipath IS useful for iSCSI if there are multiple targets accessed as the same device. It has zero effect for NFS. LACP, depending on the configuration, has the potential of giving better throughput or link redundancy.
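
      As a concrete illustration of the iSCSI multipath case, a minimal sketch that logs in to the same target over two portals so dm-multipath can aggregate the paths (portal addresses and target IQN are placeholders; assumes open-iscsi and multipath-tools are installed):

```python
#!/usr/bin/env python3
"""Log in to one iSCSI target over two separate portals/NICs so dm-multipath
can present both paths as a single multipathed device.
Sketch only: portal addresses and the target IQN are placeholders."""
import subprocess

PORTALS = ["10.10.10.5", "10.10.20.5"]             # two storage subnets/NICs
TARGET = "iqn.2005-10.org.freenas.ctl:vm-storage"  # hypothetical IQN

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

for portal in PORTALS:
    run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal)
    run("iscsiadm", "-m", "node", "-T", TARGET, "-p", portal, "--login")

# dm-multipath then shows both paths to the same LUN as one device
run("multipath", "-ll")
```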

  • @sevilnatas
    @sevilnatas 9 months ago

    Wondering if ZFS deduplication buys you something? Seems like dedup could add the advantage of NFS to the speed of iSCSI. So, in other words, run a pre-provisioned iSCSI endpoint with ZFS dedup turned on. Best of both worlds?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  9 months ago

      Deduplication works if you have data that can be deduplicated.

    • @sevilnatas
      @sevilnatas 9 months ago

      @@LAWRENCESYSTEMS Right, but my understanding is that it deduplicates at the block level. So, if I have got that right, it seems that opens up a lot of opportunities for dedup where you don't usually think about it. For example, VM snapshots. It doesn't need to dedup the whole snapshot, just the many duplicative blocks the snapshot is made up of. Anyway, I may be off base here, but it seems doable.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  9 months ago

      @@sevilnatas It's at the block level per dataset, and snapshots are block-level differentials.

    • @sevilnatas
      @sevilnatas 9 months ago

      @@LAWRENCESYSTEMS So if I understand what you are saying: if I have several Win11 VMs that I am using for testing, set up close to identical, deduplication should result in the actual space on disk being near the size of a single one of those VMs, and the snapshots I run off of them will probably also greatly benefit from dedup between VMs - but the individual snapshots, per VM, will not, because a snapshot is essentially operating like a dedup itself. If I've got that right, it still sounds like a pretty good setup, at least for my situation, where many VMs of the same OS are being used.
      The other thing I'd like to experiment with is VMware's Horizon tech that allows for linked clones. Going back to my specific scenario, testing with multiple similar VMs, I think I would greatly benefit from linked clones achieving a similar result to the dedup setup. Might be more straightforward and easier to maintain. I just don't know if I can use the VMware Horizon software on the free license; it is probably a premium offering.
      The additional thing I like about linked clones is the ability to do updates and enhancements to the "golden" VM and then inherit those changes to the linked clones. Seems like a great maintenance benefit.
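
      For anyone experimenting along these lines: dedup is set per dataset and the resulting ratio is reported at the pool level. A minimal sketch with placeholder names (note that dedup keeps its table in RAM, so it is usually tried on a scratch dataset first):

```python
#!/usr/bin/env python3
"""Enable ZFS deduplication on one dataset and check the pool-wide dedup
ratio, e.g. after copying several near-identical Windows VM disks onto it.
Sketch only: pool/dataset names are placeholders; dedup has a real RAM cost."""
import subprocess

DATASET = "tank/vms-dedup"  # hypothetical dataset backing the test VMs

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("zfs", "create", DATASET)
run("zfs", "set", "dedup=on", DATASET)         # only writes made after this are deduplicated
run("zfs", "set", "compression=lz4", DATASET)
# ...copy the near-identical VM disks onto the dataset, then check the savings:
run("zpool", "get", "dedupratio", "tank")
```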

  • @BruceFerrell
    @BruceFerrell 2 years ago

    The most direct difference between iSCSI and NFS: iSCSI presents a non-shareable block device (there are qualifications, but as a general rule...) to the client system. NFS presents a file system that can be accessed by multiple hosts simultaneously.
    The bottom line is they are NOT in the same class at all, and comparisons are apples and oranges.
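
    The practical difference shows up in how a client consumes each one. A minimal sketch from the Linux client side, with placeholder addresses, target IQN, and export path:

```python
#!/usr/bin/env python3
"""Contrast consuming iSCSI (a raw block device the client formats itself,
normally for one host at a time) with NFS (a ready-made file system that
many hosts can mount concurrently).
Sketch only: addresses, IQN and export path are placeholders."""
import subprocess

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# iSCSI: discover and log in -- the target shows up as a new /dev/sdX block device,
# and the client puts its own filesystem on it; sharing the same LUN between hosts
# needs a cluster-aware filesystem or hypervisor-level coordination.
run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", "192.168.50.10")
run("iscsiadm", "-m", "node", "--login")

# NFS: the server owns the filesystem; clients simply mount it, many at once.
run("mount", "-t", "nfs", "-o", "vers=4.1", "192.168.50.10:/mnt/tank/vms", "/mnt/vms")
```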

  • @NickF1227
    @NickF1227 2 years ago +1

    ...wait... but you can thin provision zvols...

  • @mckidney1
    @mckidney1 2 years ago +3

    This video is weird; iSCSI and NFS are not comparable like this - you are using a simulated block device (VMDK) over a simulated block device (iSCSI), compared to a simulated file system (NFS), and then you introduce simulated block devices (ZFS snapshots) and thick provisioning (which actually happens at 3 out of 4 steps already). I suspect this video targets people who care about the hypervisor and do not know which option to choose. From that perspective it makes sense. But from the perspective of designing a NAS/SAN for your hypervisor - thin vs thick, snapshots - all those hurdles are created by the design being a jumbled mess.

  • @JoeTaber
    @JoeTaber 2 years ago

    Apparently NVMe over TCP will be a thing and could supplant iSCSI.
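
    For reference, the client side of NVMe/TCP follows a discover-and-connect flow much like iSCSI's, via nvme-cli; a minimal sketch with a placeholder address and subsystem NQN:

```python
#!/usr/bin/env python3
"""Discover and connect to an NVMe-over-TCP subsystem with nvme-cli.
Sketch only: the address and NQN are placeholders, and the target side needs
an NVMe/TCP-capable array or a Linux box exporting via the nvmet target."""
import subprocess

ADDR = "192.168.50.10"                          # hypothetical storage address
NQN = "nqn.2014-08.org.nvmexpress:example-sub"  # hypothetical subsystem NQN

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("modprobe", "nvme-tcp")                                     # kernel transport
run("nvme", "discover", "-t", "tcp", "-a", ADDR, "-s", "8009")  # discovery service
run("nvme", "connect", "-t", "tcp", "-a", ADDR, "-s", "4420", "-n", NQN)
run("nvme", "list")  # the namespace appears as /dev/nvmeXnY, a plain block device
```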

  • @LAWRENCESYSTEMS
    @LAWRENCESYSTEMS  2 years ago +1

    Benchmark Links used in the video
    openbenchmarking.org/result/2108267-IB-DEBIANXCP30
    openbenchmarking.org/result/2108249-IB-DEBIANXCP11
    Synology Tutorials
    lawrence.technology/synology/
    XCP-NG Tutorials
    lawrence.technology/xcp-ng-and-xen-orchestra-tutorials/
    Linux Benchmarking
    czcams.com/video/YjhEjWs8YzE/video.html
    Getting Started With The Open Source & Free Diagram tool Diagrams.NET
    czcams.com/video/P3ieXjI7ZSk/video.html
    ⏱ Timestamps ⏱
    00:00 NFS VS iSCSI
    01:42 Scope and Setup
    02:45 Difference Between iSCSI & NFS
    08:50 Test Results
    12:48 Storage Design Considerations

  • @trumanhw
    @trumanhw 2 years ago

    I don't think coalesce means what you think it means... :)

  • @Adrayven
    @Adrayven 2 years ago

    Synology does snapshots of iSCSI a lot better than TrueNAS, IMO.

  • @ryzenforce
    @ryzenforce 2 years ago

    For me, it is NFS all the way because I prefer a dedicated system to make the reads and writes instead of multiple devices connected and doing it directly themselves via iSCSI. Also, less corrupted data when using NFS.

  • @pepeshopping
    @pepeshopping 2 years ago +1

    When you do not understand the differences between file and block based shares…
    It shows!!
    It all really comes down to protocol and ASYNC writes!!