iSCSI vs NFS Performance Comparison Using FreeNAS and XCP-ng XenServer

  • Added 29 Oct 2018
  • lawrence.video/xcp-ng
    The ZFS ZIL and SLOG Demystified
    www.freenas.org/blog/zfs-zil-...
    NFS Benchmark Comparison
    openbenchmarking.org/result/1...
    iSCSI Benchmark Comparison
    openbenchmarking.org/result/1...
    www.phoronix-test-suite.com/
    My GitHub
    github.com/lawrencesystems
  • Science & Technology

Comments • 71

  • @edgecrush3r
    @edgecrush3r 4 years ago +2

    Always great to see: whenever I search for something on YouTube, I always end up on one of my favorite channels. Thanks LS!

  • @manjil1234
    @manjil1234 5 years ago +4

    Thank you, Tom, for doing this test. I had done a similar test and ended up using NFS purely for ease of expansion. I had a pool of 4x 256GB SSDs, and with iSCSI, when I added 4 more disks to the pool, for some reason VMware would not see the newly expanded storage even after increasing the zvol size. With NFS it was easy: as soon as the pool was expanded, the storage was available immediately. Since then I use a cheap Optane 800p as my ZIL and use NFS.
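
    For reference, a minimal sketch of what the iSCSI expansion path looks like; the dataset name, new size, and the ESXi-side commands are assumptions for illustration (after the rescan, the VMFS datastore still has to be grown from vSphere, whereas NFS needs none of these steps):
    # grow the zvol backing the iSCSI extent on the FreeNAS side
    zfs set volsize=2T tank/iscsi/vmware01
    # make the ESXi host rescan its storage adapters so it sees the new LUN size
    esxcli storage core adapter rescan --all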

  • @jonjewett
    @jonjewett 4 years ago +8

    I know this is an old post, but I just happened to be listening to it in the background and thought I'd shed a little light on the performance differences you saw (and didn't see) with regard to the ZIL drive usage.
    So, the difference in speed that you saw between disabling NFS sync altogether (fast but susceptible to data loss), and turning it on and using a ZIL, was likely because of the performance characteristics of the device chosen to store the ZIL. Unlike the devices used for general data storage, the device chosen to host the ZIL needs to have very specific performance characteristics. Unlike most scenarios, the r/w speed of the ZIL drive doesn't really matter much. IOPS can be a good indicator, but the most important measure of a drive's suitability as a ZIL is its latency characteristics.
    If you can get a special drive for the ZIL with the proper performance characteristics, you will then be able to actually keep NFS sync ON, and have the same or better performance than with NFS sync OFF. You don't need much capacity, and throughput doesn't really matter. The really important spec is the latency, which needs to be in the DRAM ballpark. Devices such as the DDRdrive X1 (www.ddrdrive.com/) are perfect for this.
    Anyway, just thought I'd chime in. Love the videos!

    • @kwinzman
      @kwinzman 2 years ago

      Yes, I was so puzzled by the video author. Why was he testing the ZIL on/off with the sync disabled? ZIL is supposed to help when sync is enabled!
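
      A minimal sketch of the setup both comments describe, adding a dedicated low-latency log device and leaving sync on so it is actually exercised; the pool, dataset, and device names are assumptions:
      # attach the fast device as a separate log (SLOG) for the pool
      zpool add tank log nvd0
      # keep sync semantics on the NFS-backed VM dataset
      zfs set sync=standard tank/nfs-vmstore
      zfs get sync tank/nfs-vmstore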

  • @shinwadone
    @shinwadone 5 years ago +1

    Good! Helps a lot when deciding between NFS and iSCSI.

  • @ultrait5257
    @ultrait5257 3 years ago

    You did a pretty good job doing benchmarks on it. Thank you so much, I really appreciate this video, and thank you for spending your time bringing knowledge and practical lessons to us. I'm moving away from Ceph on Proxmox (8x R720xd nodes, with 10x 10Gb LACP on Intel X520-DA2 NICs, 24x Intel SSDs and 45x HGST 4TB SAS per node) due to its bad performance for virtual machines (all kinds of them). So now I'm more confident about using NFS, since we can get similar performance to iSCSI without needing to deal with the pain in the ass ZFS-over-iSCSI tutorials. That's awesome, again, thank you so much my friend.

  • @Rostol
    @Rostol 5 years ago +4

    One of the important variables you didn't talk about is NICs. For example, Intel X520s have iSCSI offload; it is important that the iSCSI server has iSCSI (or FCoE, if using FC) offloading NICs so the CPU can leave the frame packing/unpacking to them.

  • @minigpracing3068
    @minigpracing3068 5 years ago +14

    Do SMB shares have the same write penalties as NFS, and if so, any speed tips for SMB shares?

  • @AyoolaBoyejo
    @AyoolaBoyejo 5 years ago +6

    The title of the video alone excites me.

  • @unijabnx2000
    @unijabnx2000 3 years ago +1

    Would like to see this comparison again, but with the record sizes set the same.

  • @hrisheekesh
    @hrisheekesh 4 years ago +2

    Buddy, can you make a video on SLOG and cache disk usage, plus the pros and cons? There isn't much available about it anywhere!

  • @SteinerSE
    @SteinerSE 3 years ago

    Building a FreeNAS soon that's partly going to be a datastore for an ESXi host, and I can't quite decide between NFS and iSCSI. Any advice for that particular use case? (It will be run over a 1Gb NIC, unfortunately, but I might try to add a NIC and run at least 2 bonded links, separate from the rest of the network.)
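
    If you do bond the links, a rough FreeBSD/FreeNAS sketch is below; the interface names and addressing are assumptions, FreeNAS would normally do this through its UI, and note that LACP balances multiple streams but does not double the speed of a single NFS or iSCSI connection:
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 192.168.10.2/24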

  • @philipcook7608
    @philipcook7608 5 years ago +3

    I would be curious to see what your sync=disabled vs sync=always performance is on iSCSI. For me, I take a 100-150 MB/s hit with sync=always and a 900p ZIL; I can live with 500 MB/s with sync=always. With ESXi, iSCSI vs NFS is a moot point with multiple hosts: until there is NFS 4.1 support in FreeNAS, multiple hosts cannot use the same NFS share, but with iSCSI they can. Next fall, when FreeNAS 12 hopefully adds NFS 4.1 support (it's at least added in FreeBSD 12), this discussion will get really interesting for me.
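
    Reproducing that comparison is just a matter of flipping the property on the zvol between benchmark runs; the dataset name here is an assumption:
    zfs set sync=always tank/iscsi-vols/xen01    # force every write through the ZIL/SLOG
    zfs set sync=disabled tank/iscsi-vols/xen01  # skip the ZIL entirely (fast, riskier)
    zfs get sync tank/iscsi-vols/xen01           # confirm the current setting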

  • @peterfricht5528
    @peterfricht5528 5 years ago +3

    @7:27 you use the zpool command. Try the "watch" command, if available on this distro. It's much clearer (at least to me), since watch overwrites the previous output.
    ~# watch -n .5 zpool iostat
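    zpool iostat also takes an interval argument (and -v to break the numbers out per vdev), which avoids the need for watch:
    ~# zpool iostat -v tank 1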

  • @Calm_Energy
    @Calm_Energy 4 years ago

    @11:30 "I read more than I talk" - me too! I always mispronounce things; growing up there was no YouTube, so I read more than I listened as well. One time in college I remember sitting there thinking, "I'm only here so I can hear how to pronounce the things I read," lol.

  • @douglasg14b
    @douglasg14b 3 years ago +1

    Do you find that using ZFS compression increases latency and reduces IOPS?

  • @lanceeilers5061
    @lanceeilers5061 5 years ago

    Hi Tom, please correct me if I am wrong, but didn't you create NFS on a 1Gb link prior to the vid, while iSCSI was created on a 10Gb link? There was also something about the 100GB and 250GB creation on FreeNAS and how it was pointed that may have obscured your testing - maybe I have lost my bearings along the way, lol. Surely you want to compare apples with apples to get the equivalent throughput of both file systems? Great vid either way, love how you go into depth!!! Thanks :-)

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 5 years ago

      I destroyed the array from the first video and rebuilt the system for this video, all connected via the 192.168.10.0/24 network, which is the direct-connect 10GbE.

    • @lanceeilers5061
      @lanceeilers5061 5 years ago

      @@LAWRENCESYSTEMS Yippee, thanks for that Tom, love your vids :-)

  • @berndeckenfels
    @berndeckenfels 4 years ago

    It would be interesting to actually mount NFS inside the guest, so that you don't have the image files but real files.

  • @drrros
    @drrros 5 years ago

    You can use a RAM drive as the best-ever SLOG device for testing purposes.
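
    On FreeBSD/FreeNAS that can be done with a memory-backed md device; a sketch for testing only, with the pool name and size as assumptions:
    mdconfig -a -t swap -s 4g        # creates e.g. /dev/md0 backed by RAM/swap
    zpool add tank log /dev/md0      # attach it as the SLOG
    zpool remove tank md0            # detach it again when the testing is done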

  • @adrianstephens56
    @adrianstephens56 1 year ago

    On cloning the iSCSI VMs, you explained that even with thick provisioning, it is the high ZFS compression ratio that keeps the used space low - similar to thin provisioning, which tracks only the changes in the VHDs.
    I don't buy that compression is the reason. If ZFS deduplication were turned on, it might have this effect, but that is a very expensive ZFS option, which I suspect you have turned off. You can achieve the same as the TrueNAS thin provisioning of the clones by doing a zfs clone to create the new VHDs. Whether XenServer can actually do that, I don't know.
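
    For comparison, the ZFS-native way to get that behaviour is a snapshot plus clone; the dataset names here are assumptions:
    zfs snapshot tank/vm-master@gold
    zfs clone tank/vm-master@gold tank/vm-clone01
    zfs get used,referenced tank/vm-clone01   # the clone only consumes space as it diverges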

  • @vooze
    @vooze 5 years ago +9

    If using a SLOG with iSCSI, you should set it to sync=always - then it actually uses the SLOG.

  • @notpublic7149
    @notpublic7149 5 years ago

    A wise man once said "There are lies, damn lies, and statistics... then there are benchmarks!" Oh, that was Tom! lol. Joking aside, I love watching these videos. I assume this was with disk encryption off? I should probably run my own tests because I have my pools encrypted. For my own setup I set up a pool just for iSCSI on my FreeNAS. I have a Chelsio T320 in xcp-ng (running on a Dell R710); one link is for iSCSI and the second link also goes from the R710 to FreeNAS for NFS. The VM I use that link on handles NFS mounts easily. iSCSI would not be impossible, but since I have a 10GbE link and most of the traffic is coming from the WAN (I am not even on gigabit fiber), the bottleneck would be the pool before it would be NFS or ZFS. That's just a guess. I have 6x 4TB 5400rpm WD Reds and one 64GB consumer SSD for the ZIL. This is a home lab setup, but I do need high availability because a lot of friends rely on my server for their needs; I should charge them... but I don't.

  • @BobBeatski71
    @BobBeatski71 4 years ago

    Nice.

  • @boedekerj1
    @boedekerj1 4 years ago

    Using a 512MB sample is why your test was so fast, and why removing your ZIL made a negligible difference between tests. 512MB is small enough to fit in the FreeNAS memory buffer, which is probably why it looked better. You need more exhaustive testing with larger data sample sizes.
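
    One way to take the cache out of the picture is to write a file well beyond the system's RAM, for example with fio as an alternative to the phoronix suite runs; the path and sizes below are assumptions to adjust to your pool and memory:
    fio --name=seqwrite --directory=/mnt/tank/bench --size=64g \
        --bs=64k --rw=write --ioengine=psync --end_fsync=1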

  • @KB-zq9ny
    @KB-zq9ny 4 years ago

    I would like to write posts for tech blogs, but I'm not familiar with a lot of this terminology. I'm not an IT professional. Where should I go to find out more so that I can understand this?

  • @praecorloth
    @praecorloth 5 years ago

    iSCSI may be getting better compression than NFS because iSCSI has better throughput than NFS. lz4 has an early-abort feature which might trigger under high workload. I don't know for a fact that it aborts under high workload, but I do know that early abort is one of the features that helps keep lz4 performance high.
    One more thing that might squeeze a tiny bit more performance out of your NFS share is turning off atime. For a file share, atime is useful. For VM storage, atime is just wasted IO. I'm not sure that atime=off alone would show a significant difference, but it is one straw on the camel's back.
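
    Turning it off is a one-liner on the dataset backing the VM store; the dataset name is an assumption:
    zfs set atime=off tank/vmstore
    zfs get atime tank/vmstore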

  • @OrianIglesias
    @OrianIglesias 5 years ago

    I always recommend using iSCSI targets on ZFS for your datastores in VMware ESXi because you get more advanced features such as dead space reclamation (UNMAP). Also make sure to set zfs sync=always on your iSCSI zvol. The reason you never want to set sync=disabled is that if you lose data that should have been committed to your zvol, it can corrupt your virtual machine.
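
    For the UNMAP part, a sketch of the manual reclamation path; the datastore label and zvol name are assumptions, and the esxcli form applies to ESXi 5.5 and later:
    # reclaim dead space on the VMFS datastore from the ESXi host
    esxcli storage vmfs unmap --volume-label=freenas-iscsi-ds
    # then watch the space come back on the FreeNAS side
    zfs get used,referenced,volsize tank/iscsi/esxi-zvol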

  • @dosmaiz7361
    @dosmaiz7361 5 years ago

    Are your production FreeNAS servers running on XCP, or bare metal?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 5 years ago +2

      Bare metal

    • @dosmaiz7361
      @dosmaiz7361 5 years ago

      @@LAWRENCESYSTEMS Are you guys running your drives as "pass through" with a JBOD-enabled controller? I have an H700 RAID card, and apparently running my disks in hardware RAID is bad for FreeNAS. Thanks for the help!

    • @kristopherleslie8343
      @kristopherleslie8343 5 years ago

      Dos Maiz, yeah, because in that case it's better for the disks to be passed through physically so FreeNAS can handle the storage better. Also, that RAID card would get in the way, since it's older.

  • @James-xg4jr
    @James-xg4jr 5 years ago

    Wasn’t notified about the video =[ .......youtubeeee

  • @hescominsoon
    @hescominsoon 4 years ago

    How about SMB in the mix?

  • @zesta77
    @zesta77 5 years ago +3

    It really isn't much of a valid test unless you are testing a file size that is double the size of the RAM in the machine, or larger. This is why you are getting crazy results on some of the measurements. Also, it would be interesting to see a test without the Xen crud in the middle. My main FreeNAS has 5x mirrored pairs, so it has more I/O to begin with, but I can push well beyond 600 MB/s with a straight NFS mount (not writing to a boot disk that is stored on NFS with an emulation layer in between). This is with no SLOG or L2ARC at all. I will have to try iSCSI to see, but I don't expect it to be much faster.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 5 years ago +3

      So you are suggesting I test the system attached directly to another Linux machine instead of a XenServer? (And test with larger file sets.)

    • @zesta77
      @zesta77 5 years ago +1

      Lawrence Systems / PC Pickup - you could use the existing virtual machine, but mount an NFS volume directly to it, then do an iSCSI LUN the same way. Testing with a size of double the RAM makes sure the local machine's disk cache is not affecting the results. I'm curious what a smaller setup like your test system will do.

    • @alexnaber349
      @alexnaber349 5 years ago +1

      I want to second Randy Carpenter: as long as you do not exhaust the RAM of the ZFS device, the results will be crazy good. You have to use much bigger file sizes to see the REAL result, and then you will see a performance hit when removing the ZIL. A REAL hit. Thanks for that video! Very interesting.

  • @Vates_tech
    @Vates_tech 5 years ago +2

    So in the end, as expected, NFS is on par with iSCSI. But NFS is thin provisioned, so it's FAR better than iSCSI as soon as you need snapshots/backups. It's also more robust (fewer issues spotted, since the NFS protocol is less picky than iSCSI when you have temporary network issues). Note: the ZIL is only used in sync mode; it's completely "bypassed" with async.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 5 years ago +1

      But slightly more risk of losing data in flight if there were an outage, due to ZFS being in an async write mode.

    • @Vates_tech
      @Vates_tech 5 years ago +4

      @@LAWRENCESYSTEMS True if you don't have enough money to get a decent ZIL drive, which isn't really a problem nowadays (Optane, for example). See www.servethehome.com/exploring-best-zfs-zil-slog-ssd-intel-optane-nand/

    • @rdmclark
      @rdmclark 5 years ago

      @@Vates_tech I want to use my FreeNAS to host iSCSI for my lab with a 10Gb link. I've been thinking of getting an NVMe adapter for Optane since it's cheap, but will 32GB be enough for the ZIL and another 32GB for ZLOG?

    • @andibiront2316
      @andibiront2316 5 years ago +1

      iSCSI can be thin provisioned. It's also easier to multipath. I try to avoid NFS whenever I can, but you can't compare file vs block storage; they serve different purposes. When you can choose between the two (like for VM datastores), iSCSI is the better all-around solution.

    • @Vates_tech
      @Vates_tech 5 years ago

      @@andibiront2316 Nope, not on the XS/XCP-ng side. It doesn't matter if it's thin provisioned on the target; the hypervisor will reserve the whole thing.

  • @mo6152786
    @mo6152786 4 years ago

    How does SMB compare?

  • @Horstonthetop
    @Horstonthetop 5 years ago

    If you disable sync, ZFS commits every 5 seconds (in the default setup)... so I don't see the big risk there :-)
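
    That 5-second window is the ZFS transaction group timeout, which is visible (and tunable) on FreeBSD/FreeNAS; the second value below is only an illustration:
    sysctl vfs.zfs.txg.timeout      # default is 5 (seconds)
    sysctl vfs.zfs.txg.timeout=2    # commit more often, at the cost of more frequent flushes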

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 5 years ago

      It really comes down to "What are 5 seconds of transactions worth to you?" For some people, not much. For a large running DB, that could be a lot of uncommitted data to sort out.

    • @SuperMikkeli
      @SuperMikkeli 5 years ago

      Lawrence Systems / PC Pickup - well, it also depends on the use case. I was writing 30TB to a raidz1 setup over NFS, and pausing writes every 5 seconds made the 30TB write take considerably longer...

    • @Horstonthetop
      @Horstonthetop 5 years ago

      @@LAWRENCESYSTEMS Of course it depends on your setup/situation... like in real life ;-)
      I assumed a typical UPS-backed server: if the power fails, all clients are off and your server has many, many seconds to write the last transferred client data.
      Exactly this was the case some weeks ago at work... a big power loss, but not a single byte lost :-)

  • @dariopetrusic4215
    @dariopetrusic4215 5 years ago +2

    Hi, interesting video as always!
    I was curious, so I installed this test suite on one of my VMware virtual machines (1 core/2GB) and did the test with the same options you chose.
    In short: I have a mirror of Intel D3-S4510 (480GB -> 10GB) as the SLOG for the pool, and I did 305MB/s in the 64k/512MB test (sync enabled, of course).
    Nothing stellar for sure, but I'm happy (for now at least).
    Here is the link, if somebody wants to compare: openbenchmarking.org/result/1811015-RA-TORTUGA7004
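
    For anyone reproducing that setup, adding a mirrored SLOG is a single command; the pool and partition names below are assumptions:
    zpool add tank log mirror da1p1 da2p1
    zpool status tank    # the log vdev should appear as a mirror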

  • @Monasucks
    @Monasucks 3 years ago

    NFS and iSCSI set to sync=always?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 3 years ago

      no

    • @Monasucks
      @Monasucks 3 years ago

      @@LAWRENCESYSTEMS Well, then the comparison kinda sucks.
      TrueNAS/FreeNAS recommend sync=always for hypervisor datastores, for NFS as well as iSCSI.

  • @turbokev3772
    @turbokev3772 3 years ago

    iSCSI is NOT syncing those writes on the VM, which can put the VM's storage in a state that is inconsistent with the backend and lead to data loss. And the VM cruises happily along thinking all the data has been written.
    Taken directly from TrueNAS.com:
    iSCSI by default does not implement sync writes. As such, it often appears to users to be much faster, and therefore a much better choice than NFS. However, your VM data is being written async, which is hazardous to your VM's. On the other hand, the ZFS filesystem and pool metadata are being written synchronously, which is a good thing. That means that this is probably the way to go if you refuse to buy a SSD SLOG device and are okay with some risk to your VM's.
    iSCSI can be made to implement sync writes. Set "sync=always" on the dataset. Write performance will be, of course, poor without a SLOG device.
    At any rate, the test isn't accurate at all: NFS is guaranteeing its writes, while in the case of iSCSI, data is still being written on the backend long after the test ends.

  • @carlsjr7975
    @carlsjr7975 5 years ago

    NFS is a shared filesystem and iSCSI isn't, right? Different use cases.

  • @lulzmachineify
    @lulzmachineify 3 years ago +1

    TL;DR?

  • @pv6596
    @pv6596 5 years ago

    iSCSI always beats NFS due to the protocol design.
    NFS with a ZIL disk improves write performance dramatically, to where it can compete with iSCSI.
    I prefer NFS, as everything is a file.