The Glorious Complexity of Intel Optane DIMMs and Why Micron Quit

  • Uploaded 19. 03. 2021
  • STH Main Site Article: www.servethehome.com/glorious...
    STH Merch on Spring: the-sth-merch-shop.myteesprin...
    STH Top 5 Weekly Newsletter: eepurl.com/dryM09
    In this video, we show some hands-on features of Intel Optane DCPMM that make it both great and complex. We also discuss why Micron is exiting 3D XPoint for the CXL future.
    Other related STH videos:
    - Intel Optane De-lid: • Intel Optane DC Persis...
    - 3rd Gen Intel Xeon Cooper Lake Review: • Gigabyte R292-4S1 Revi...
    - Socket LGA4189 CPU Installation: • 3rd Gen Intel Xeon Sca...
    - Ampere Altra Arm Server Review: • Most Significant Serve...
    - 400GbE switch hands-on: • Inside a 400GbE 32-por...
  • Science & Technology

Comments • 233

  • @BillLambert
    @BillLambert 3 years ago +33

    You're absolutely right Patrick. I like the tech, I like the price, I just don't like the artificially constrained CPU support and that's more than enough to keep me away. If they open it up to the entire Xeon line in the near future, then it could drastically change the way people like me build out our clusters.

    • @gulllars4620
      @gulllars4620 3 years ago +2

      Though if you're paying for per-CPU licensing, as with an RDBMS like SQL Server, the hardware cost of the solution is a small fraction of its lifetime cost, and if you get hung up on standard vs. L SKU pricing and relative hardware costs, you are doing yourself a disservice with regard to TCO for the solution as a whole compared to how much performance you get for that TCO.

  • @tommihommi1
    @tommihommi1 3 years ago +66

    The QR codes on the DIMMs that are readable from the tray or even when the DIMM is installed are a really smart move

    • @PanduPoluan
      @PanduPoluan 3 years ago +13

      Those aren't QR Codes. Those are DataMatrix codes, a 2D barcode competitor to QR Codes. QR codes always have those "bullseye-like" markers, while DataMatrix codes have 2 solid line edges.

    • @tmi1234567
      @tmi1234567 3 years ago +5

      @@PanduPoluan still a really smart move if they have serial data or other info... Could make inventory interesting

  • @sayanchx
    @sayanchx 3 years ago +61

    This is the type of content that keeps bringing me back to your channel. Great original unbiased content! Awesome work

  • @AndreiNeacsu
    @AndreiNeacsu 3 years ago +20

    Optane would have been a killer feature especially in the low-power and low-core-count Xeons. Large data centers focused on high density (buying those extremely expensive CPUs and motherboards specifically) can afford true DRAM that provides the reliable and consistent performance their users demand.
    If cheap (NAS-level) Xeons supported Optane, Intel would very likely have sold an average of 4 P-RAM DIMMs for every Xeon CPU they shipped out the factory doors. Considering that Intel does not sell DRAM, this would only have increased their adoption and profits.
    But the sad reality is that in the last decade Intel made a lot of questionable decisions and relied on a few moments of good luck (and bad luck for AMD) in their strategies. With Micron out, I expect Optane to fade into obscurity like their MIC architecture or their HEDT platform; obviously, without ever being sought after or lusted for the way the HEDT ecosystem was.

    • @danwolfe8954
      @danwolfe8954 3 years ago

      Optane tends to be power hungry and Intel hasn't invested the development time to reduce the power draw... I can't remember where I read it, but IIRC each Optane DIMM adds about 4 watts at idle and double digits (14?) when active to the power drain... and the U.2 drives run hot....

    • @shadowmist1246
      @shadowmist1246 2 years ago

      VROC made sense for Intel to compete with HBA/RAID card providers but optane dimms - not so much. They should leave that one.

  • @josephdtarango
    @josephdtarango 2 years ago +9

    A new technology we have developed is an LLVM compiler extension for Optane such that data structures are automatically persistent. The research paper is under peer review and should make the future uses of Optane easier for developers.
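The compiler extension itself isn't published yet, so as background, here is what such a tool would automate: with App Direct today, persistence is the application's job — it maps a file from a DAX-mounted pmem filesystem and explicitly flushes its stores. A minimal Python sketch of that pattern (an ordinary temp file stands in for a real pmem path like /mnt/pmem, and the record layout is illustrative only):

```python
import mmap
import os
import struct
import tempfile

# Stand-in for a file on a DAX-mounted pmem filesystem (e.g. /mnt/pmem/counter).
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.truncate(4096)  # pre-size the persistent region

# Store a value through the mapping, then explicitly flush it to "persistence".
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:8] = struct.pack("<Q", 42)  # e.g. a persistent counter
    m.flush()  # on real pmem this is the cache-line writeback + fence step
    m.close()

# After a simulated reboot (reopening the file), the value survives.
with open(path, "rb") as f:
    (value,) = struct.unpack("<Q", f.read(8))
print(value)  # 42
```

An LLVM pass making data structures "automatically persistent" would presumably insert that flush/fence step after stores for you; that reading is mine, not a description of the paper.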

  • @stalbaum
    @stalbaum 3 years ago +10

    I'd buy optane just for experimenting if I could do it on a core sku and regular mother board. Let us play, it might be great marketing and open up a whole workstation market.

  • @B4dD0GGy
    @B4dD0GGy 3 years ago +13

    not sure how many times I have to say this, but your tech updates are exceptional in so many ways

  • @setharnold9764
    @setharnold9764 3 years ago +14

    Great explainer, I especially loved the bios walkthrough. Sometimes there's no substitute for seeing something for yourself. Thanks!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +6

      Thanks Seth. That is something that there is not a lot on. I figured if I had not seen much of it, others will not have either. Have a super day.

  • @ericneo2
    @ericneo2 3 years ago +11

    I wish Micron would make more NVDIMMs for both Intel and AMD instead of supporting a system-specific standard. Being able to use DIMMs as a RAMDISK gives amazing performance and means you don't have to worry about write-back in the event of a power outage. Currently a RAMDISK works best with RAM plus NVMe to handle write-back safely, but with NVDIMMs you don't have to worry about write-back at all. Just think of taking old servers running HDDs, swapping half the RAM for NVRAM, and seeing a massive performance increase.
    There is also a latency and lookup improvement. Stats below:
    HDD SEQ1M Q8T1 131R 126WR
    HDD RND4K Q32T1 2.26R 2.48WR
    SSD SEQ1M Q8T1 548R 480WR
    SSD RND4K Q32T1 228R 194WR
    NVME SEQ1M Q8T1 4975R 4257WR
    NVME RND4K Q32T1 606R 554WR
    RAM SEQ1M Q8T1 28,706R 29,352WR
    RAM RND4K Q32T1 723R 655WR
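For perspective, those figures can be reduced to relative speedups (a quick sketch; it only takes ratios of the read numbers quoted above, whatever their units):

```python
# Read throughput figures quoted above: {tier: (SEQ1M Q8T1, RND4K Q32T1)}.
reads = {
    "HDD": (131, 2.26),
    "SSD": (548, 228),
    "NVME": (4975, 606),
    "RAM": (28706, 723),
}

# Sequential reads: RAM is roughly 5.8x NVMe...
print(f"RAM vs NVME, SEQ1M: {reads['RAM'][0] / reads['NVME'][0]:.1f}x")  # 5.8x
# ...but for random 4K the gap nearly vanishes, which is why latency (not
# raw bandwidth) is the headline argument for memory-bus persistence.
print(f"RAM vs NVME, RND4K: {reads['RAM'][1] / reads['NVME'][1]:.1f}x")  # 1.2x
```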

    • @VoSsWithTheSauce
      @VoSsWithTheSauce 1 year ago

      Hey, I saw you mentioned NVDIMMs. Could you tell me whether or not they actually keep the data in memory after a reboot? I've heard some really weird things about how the data is saved, and I just want to hear it from someone who has the same idea of using NVDIMMs.

  • @dika2saja
    @dika2saja 3 years ago +14

    2016 Intel CEO meeting: We predict the future is fast memory, 3D, Data-Fast. Yeah
    Engineering team: But what about our CPUs? 10nm? And GPUs?
    CEO: Our i7 is already great, 4 cores for the customer, and only "nerd" gamers buy GPUs. Nobody would change that fact
    Ah yes... the doom of Intel

    • @smellcaster
      @smellcaster 3 years ago +1

      Maybe they have something in development. Think about a game console the old-fashioned way, where you just push in a module with the game, but with this technology: no boot time, the game starts immediately because it's already in memory, like on the Atari 2600.

    • @KiraSlith
      @KiraSlith 1 year ago

      Overconfidence tends to be Intel's greatest weakness, though their ability to shift quickly (for a corporation anyways) saves them often. The loss of Micron eventually did Optane in outright, sadly.

  • @jolness1
    @jolness1 3 years ago +16

    Great explainer and insight.
    Love STH's content and the neutral perspective.

  • @ArtofServer
    @ArtofServer 3 years ago +9

    Always interesting content Patrick! Thanks for sharing your knowledge on this stuff!

  • @callums____
    @callums____ 3 years ago +9

    Thanks for the great video! Having been a strong believer in the potential of Optane/XPoint technologies for years now, it's great to hear your overview and views on potential future possibilities. It still amazes me how rare even the NVME drive use is in the mid market with the huge performance per dollar it can offer for many database server applications - particularly when factoring in density and software licensing components.

  • @statebased
    @statebased 3 years ago +1

    Wow, so many valuable details. Also thank you so much for your direct engineer to engineer communication style!

  • @dnmr
    @dnmr 3 years ago +2

    This is great stuff, thank you for the content and have an awesome day too!

  • @ericblenner-hassett3945
    @ericblenner-hassett3945 3 years ago +1

    And to think I was under the assumption it was just a hopped-up "RAM drive" with more options....
    Very detailed where I really needed it, keep up the great work!

  • @ErraticPT
    @ErraticPT 3 years ago +6

    Better basic explanation than Intel ever gave, but I still think it's a very niche, very expensive solution looking for a very niche problem to solve.
    If they had pushed a simpler form to the consumer mainstream at a much lower price, it might have been viable by now.

  • @shawnmulberry774
    @shawnmulberry774 2 years ago +1

    I can appreciate your statement about the gap between high level and low level info about this architecture so thanks for filling the gap a little.

  • @James-kk3nm
    @James-kk3nm 3 years ago +2

    Wow, that was really interesting. I've never fully understood the implications of Optane until now, thanks a lot!

  • @jmssun
    @jmssun 3 years ago +7

    26:57 I literally thought I started another video!

  • @kanguruster
    @kanguruster 3 years ago +9

    I wouldn't be boosting a single-sourced, "enterprise" technology where the vendor says "we have a plan [now that the only manufacturer quits the market]". Isn't "that's all we have" a red flag for anyone?
    So due to the risk, you should de-risk your enterprises, and sell out of your inventory of this risky tech. Please. I would like to buy it cheap for my home PC.

    • @curvingfyre6810
      @curvingfyre6810 2 years ago

      Yeah. As sad as it is that there's probably not going to be any more usable upgrades to this, the idea of it becoming early legacy tech, falling out of the server market and into the used market, like with the X79-X99 Xeons... well, let's just say budget never looked so fast

  • @OVERKILL_PINBALL
    @OVERKILL_PINBALL 3 years ago +8

    Congrats on 50K!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +5

      Thank you! It has been a long road.

    • @NebbieNZ
      @NebbieNZ 3 years ago +1

      Yea I just subscribed too, ex Telco Systems Engineer here.

    • @supremelawfirm
      @supremelawfirm 3 years ago

      @@ServeTheHomeVideo Patrick, you're clearly one of THE BEST currently on planet Earth. Sometimes I just enjoy watching how your mind grinds thru this highly technical stuff so effortlessly. Some people's lips are faster than their brains; your brains are much faster than your lips! LOL!! KEEP UP THE GOOD WORK and thanks for the valuable tip today.

  • @MattHessel
    @MattHessel 3 years ago +4

    Really good video, explains so much of the optane stuff that isn't clear from intel

  • @MoraFermi
    @MoraFermi 3 years ago +9

    PMEM could have a billion different applications everywhere -- if not for Intel's insane insistence on keeping the bare chips unavailable.
    Portable devices, IoT, opus, networking... there are so many places where persistent memory could be extremely successful without ever encroaching on Intel's exclusivity on the "use pmem on DDR4 sticks" idea.

    • @gulllars4620
      @gulllars4620 3 years ago

      They invested heavily for a long time to develop it, so I guess it makes sense that they are fighting hard not to let it become a commodity market component, but rather trying to recoup their costs in as many ways as possible. But if they cared about the technology itself generating the funds rather than the overall company bottom line, then I agree, they should sell chips as well. They are using lock-in to increase overall profit (which it probably does, and when you have a fiduciary duty to shareholders it's kind of a legal requirement) even though customers wish they didn't.

    • @mattc1256
      @mattc1256 3 years ago +1

      The problem is more in the complexity of the drive and controller itself. Compared to NAND or DRAM you need a much more sophisticated system to get it to work, in conjunction with a software overhaul to take advantage of the benefits of the device. When you approach many companies about doing this sort of work for the marginal gain the drive provides, they don't want to do it.

    • @jaredeh2
      @jaredeh2 3 years ago +3

      @@gulllars4620 Mora Fermi is right. Remember they aren't making a profit; the factory is losing $400M a year. It's all about scale. No scale == high cost. High cost == high price. High price == low demand. Low demand == no scale. No scale == no profit. A billion niche applications that value the technology IMHO probably would have enabled scale early on. If they care about the company bottom line... the current plan isn't doing a good job of protecting it, near as I can tell.

  • @BenKlassen1
    @BenKlassen1 3 years ago +4

    Awesome video, even for us data center laymen.

  • @askquestionstrythings
    @askquestionstrythings 3 years ago +1

    So if App direct mode is optimized and good for SAP then would optane App direct mode also be valuable for Tableau?

  • @berndeckenfels
    @berndeckenfels 3 years ago +7

    It makes more sense to have pmem modules in U.2++ on CXL, but if Micron no longer has an XPoint fab, what would they connect to that bus? (They just speculate about being bought out by Intel, most likely because they already have a better flash technology in the making...)

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +2

      There are several new technologies coming, and they could always design a DRAM/ capacitor/ NAND solution for CXL

  • @wernerheil6697
    @wernerheil6697 3 years ago +9

    ABSOLUTELY AWESOME !!!

  • @Patrick73787
    @Patrick73787 3 years ago +10

    I can't wait to see some benchmarks of the Optane P5800X SSD.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +5

      I agree. Not sure if we are under embargo for those right now so they are not shown.

    • @supremelawfirm
      @supremelawfirm 3 years ago +1

      Is that P5800X only available in the U.2 form factor, however? After experiencing success with Highpoint's SSD7103, I really like the advantages of hosting such fast storage in a single PCIe x16 slot, particularly by going with the latest PCIe Gen4 add-in cards. Highpoint now has a bootable low-profile add-in card that hosts 2 x M.2 NVMe SSDs, but I think it's only Gen3. And the model SSD7540 supports 8 x Gen4 M.2 NVMe SSDs! It took quite a while for Highpoint to make their AICs bootable, but I'm very glad I waited: the installation was a piece o' cake, e.g. by running the "Migrate OS" feature in Partition Wizard. Everything worked fine the first time I tried to boot from the SSD7103. The only possible "glitch" was the requirement to change a BIOS setting to support UEFI, but I knew about that requirement ahead of time.

    • @Patrick73787
      @Patrick73787 3 years ago +1

      @@supremelawfirm The Highpoint solution looks very interesting but at low queue depth nothing beats Optane. Regarding the P5800X, it comes in both U.2 and E1.S form factors. It supports PCIe 4x4. The capacity goes from 400GB to 3.2TB.

    • @danwolfe8954
      @danwolfe8954 3 years ago

      @@Patrick73787 For clarity - do all the current Optane U.2 models support both PCIe Gen3 x4 and Gen4 x4 interfaces, using whichever PCIe generation the connected host adapter card or motherboard provides? This is something that isn't very clear.... FWIW, I think there's a missing piece in the Optane story... the possibility of a second on-chip memory controller, so that another X DIMM slots could sit on the motherboard and be used as a super-fast SSD - think table indexes for a DB, kernel swap space, memory-mapped files, or a scratch disk for intermediate app results, all without impact on the main DRAM... in short, an early prototype CXL memory system... Bottom line though, I agree with you that Intel is using/abusing its IP and sacrificing Optane to support their current subpar CPUs.... no wonder Micron wants out, as it can't expand the Optane market to become profitable.

    • @Patrick73787
      @Patrick73787 3 years ago

      @@danwolfe8954 My Optane 905P SSD is a U.2 PCIe 3x4 drive. The P5800X is the PCIe 4x4 successor to both the P4800X and the 905P. The consumer variants like the 905P came in both U.2 and add-in PCIe card form factors, while the P5800X is aimed at datacenters only, in U.2 and E1.S form factors. All Optane SSDs use 4 PCIe lanes like regular NVMe drives.
      The CXL protocol comes from Intel and requires at least 16 PCIe 5.0 lanes to work in non-degraded mode. CXL will be first utilized on Sapphire Rapids CPUs. Those CPUs will also be paired with 3rd gen Optane DIMMs as mentioned in this video. Let's wait and see how Optane will fare in the CXL ecosystem.

  • @valshaped
    @valshaped 3 years ago +3

    Meanwhile, I'm over here using a 16GB Intel Optane "Memory" module (the NVMe drive, not one of the fancy DIMMs) as swap space on my tiny home NAS, because it was a $7 impulse buy :P

  • @Z4KIUS
    @Z4KIUS 3 years ago +2

    I'd be happy with NVMe 3D XPoint storage, especially since it just works, unlike the Optane cache and DIMM variants that need a special platform.

  • @Zarcondeegrissom
    @Zarcondeegrissom 2 years ago +2

    That bit about the 'L' CPUs is not too far off from what Intel did to effectively kill Optane for desktop use. All of the older systems that would have benefited from Optane as a drive cache for platter drives were excluded from Optane compatibility, and few with M.2 drives would 'feel' the game load-time difference on the then-newer systems, making Optane effectively DOA for desktops.

  • @chrissybabe8568
    @chrissybabe8568 3 years ago +1

    I got up to 7:46 and still didn't know why I would want to use Optane, whether I could, what it actually is, its advantages, etc. So you are right: so few know, which probably explains why not many buy it.

  • @josephregallis3394
    @josephregallis3394 3 years ago +3

    I watched this video to get more educated about Optane Memory. I was looking at a laptop to possibly buy online and they were offering Optane Memory at no additional cost. Not knowing anything about Optane, I just assumed it was faster memory! Well, after hearing your explanation and how you described how the speeds could be slower, I am still confused about this technology and if I would even want to have something like this in a laptop. When I purchased my laptop 8 years ago I purchased a hybrid model storage technology thinking it was better and faster only because it was more expensive. Turns out I should have called someone at HP and talked about this because I later learned it wasn't the fastest choice. I later purchased a 500GB SSD to replace the HDD and it now is much faster. Thanks again for your explanation.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      For a laptop: Get the RAM you want and the SSD you want, skip Optane Memory. That is caching via a lower-cost NVMe SSD.
      This could be a much longer response, but that is the advice I give everyone, and follow myself.

  • @Hobbiekip
    @Hobbiekip 3 years ago +2

    Hello Patrick, how are you doing?
    Thank you for sharing your insights on cutting edge technology. It is in many ways a glimpse into the future. I sure hope that tech like this gets to be available for consumers too one day, it seems like a win-win situation.

  • @falconeagle3655
    @falconeagle3655 3 years ago +1

    Super informative. Thanks

  • @krusic22
    @krusic22 3 years ago +12

    Do any prosumer/workstation platforms support Intel Optane DIMMs (even unofficially)?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +5

      Prosumer, not really. Even our W-3275 did not work. You can get workstations like the Precision T7920 www.servethehome.com/dell-precision-t7920-dual-intel-xeon-workstation-review/ and those support Optane PMem because they are basically servers in a tower chassis. William did that review, then I got to see the one we purchased for Dmitrij for his router/firewall testing, and I really wanted one solely for DCPMM support.

    • @Ramoonus
      @Ramoonus 3 years ago

      @@ServeTheHomeVideo which home/workstation software would benefit from this?

    • @jcnash02
      @jcnash02 3 years ago

      @@Ramoonus I imagine CAD and product design would benefit.

  • @bobwong8268
    @bobwong8268 3 years ago

    Ah.... glad to have watched this AWESOME video - learned a great deal from your videos!
    Wonder if I understand this right:
    1) Memory Mode: run lots of VMs / containers
    2) Persistence / "ramdisk" mode: for apps that can use the APIs. Perhaps use as cache for a NAS?
    3) Mixed mode: balance between memory-intensive and storage-intensive services - VMs & NAS.

  • @jfkastner
    @jfkastner 3 years ago +2

    Love your presentations on your channel! @7:39 you show "clock x speed" as the total, but a better number would have been gigabytes/sec, so we can directly compare to bus/network needs. Also, I believe it's spelled "Deficit"

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      Jay! You are totally right. Deficit was an error that got corrected in the main site version but apparently did not get into the video portion. Oh well.
      On the clock * speed this was meant to be a high-level piece so it was left as this just to be a conceptual model that was easier to consume. You are totally right that GB/s would be better, but was just trying to get the higher-level relative percentage out there for folks. GB/s was one more conversion that would need to happen.

    • @jfkastner
      @jfkastner 3 years ago

      @@ServeTheHomeVideo Patrick! Thanks for getting back to me, shows how hard you work for your channel!
      Intel recommends 4 to 1 ratio DRAM to Optane, that means give up 25% RAM performance to Optane = 50 of 200 GB/s for 8 channel 3200
      PCIe V4 = 2GB/s per lane, so you need 25 lanes without giving up RAM performance; guess that's why Micron let it go
      That's why I always use GB/s, so it won't matter if we talk OMI, FBDIMM, DDR5 etc.
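That back-of-the-envelope math works out (a quick check; assumes 8 bytes per transfer for a 64-bit DDR4 channel, with the comment rounding 51.2 down to 50 and 25.6 down to 25):

```python
# Peak DRAM bandwidth for 8 channels of DDR4-3200: channels x MT/s x 8 bytes.
channels, transfers_per_s, bytes_per_transfer = 8, 3200e6, 8
dram_gbps = channels * transfers_per_s * bytes_per_transfer / 1e9
print(f"peak DRAM bandwidth: {dram_gbps:.1f} GB/s")  # 204.8

# A 4:1 DRAM:Optane ratio hands a quarter of that to Optane.
optane_gbps = dram_gbps / 4
print(f"channel bandwidth given to Optane: {optane_gbps:.1f} GB/s")  # 51.2

# Delivering the same over PCIe Gen4 at ~2 GB/s per lane:
print(f"PCIe Gen4 lanes needed: {optane_gbps / 2:.1f}")  # 25.6
```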

  • @RealHIFIHelp
    @RealHIFIHelp 3 years ago +1

    I like the level of detail this guys goes into.

  • @ewenchan1239
    @ewenchan1239 3 years ago +1

    Thank you for this explainer.

  • @kenzieduckmoo
    @kenzieduckmoo 3 years ago +2

    I know it might be too soon now, but i definitely want to see more about CXL and its impact on things like truenas. And even things like your desktop gaming systems if system ram and graphics ram run together, cause right now video ram runs on incredibly different standards (like GDDR6X vs 6/5, and differing bus speeds etc)

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      Probably some time until you really see CXL impact on desktops. It will happen but this is being driven by hyperscale data centers. Eventually, the idea is that you can build huge systems and share resources to get cost benefits. Desktops are limited a bit by wall power. Only so big you can make a system in only 1.4-1.6kW. Data center GPUs are already at 500W (A100 80GB SXM4) and going much higher.
      Still, it will trickle down eventually.

  • @berndeckenfels
    @berndeckenfels 3 years ago +5

    What do you run on PMEM and Optane SSD? MySQL? As a filesystem? A detailed tutorial would be nice. Especially with Hypervisor in the mix, mixed mode and also apps which can actually use the pmem for stuff like cache (redis?) or Ceph store w/ (mirrored?) transaction logs on pmem?

    • @danielm.6476
      @danielm.6476 3 years ago

      Graylog comes to mind - coupled with regular backups to slower redundant media. They always say you should use the fastest disks possible, so why not put it on Optane?^^ Having many machines log to Graylog means ungodly amounts of disk IO

    • @jmlinden7
      @jmlinden7 3 years ago

      Databases benefit the most from it. You can store the metadata in the Optane and use it as both a write cache and a read cache, which speeds up your most commonly accessed data

    • @berndeckenfels
      @berndeckenfels 3 years ago

      @@jmlinden7 Well, yes, maybe - which databases have you used with pmem? For other apps, after all, PMEM is slower than DRAM and riskier to use than flash. I haven't found good use cases yet; maybe ephemeral VM drives for a hypervisor, or a large redis session cache (requires an Intel patch)

  • @z0mb1e564
    @z0mb1e564 3 years ago +2

    I think you hit the nail on the head with the timing. Optane just has too little of a use case and even less of a value proposition. Micron sees the writing on the wall and is bailing out.

  • @michaelfitzgeraldnet
    @michaelfitzgeraldnet 3 years ago +1

    What if you had these set up with 4 sockets per server (or cluster) with 100Gb networking, using it as a SAN for a VM cluster?

  • @warpmonkey
    @warpmonkey 3 years ago +4

    It seems Optane is trying to be too many things at once, when they could have made 4 or 5 products that just focused on one feature at time. When someone has a problem, they want a solution, and people will always pick the simplest solution for their problem. If someone wants DRAM with persistence, I'd imagine they would pick a product that was just 'DRAM with persistence' rather than Optane, because Optane is too complex.
    Imagine Intel made a box that could be a CPU, or a GPU, or even a TPU.... If I needed any one of those features, I wouldn't buy the Intel solution (no matter how awesome it might be) because I'm clearly paying for features I don't need, so I would buy the product that just gave me what I needed.
    I see this in startups as well, people buy the simpler things that deliver what they need, and only when the customer is a larger enterprise are they more likely to buy the 'can do everything' product.
    In my opinion Optane is too complex, and it needs to be a sharper, narrower, and split offering product.
    BTW. I still wouldn't buy it, even after this great video, it still sounds too complex.

    • @andyhu9542
      @andyhu9542 2 years ago

      'Imagine Intel made a box that could be a CPU, or a GPU, or even a TPU...' You get an FPGA, and Intel does make those LOL. (I understand that's not what you mean, but I just have to point out this fun fact...)
      However, combining features seems to be a trend now; just take the M1 as an example. It is a CPU, GPU, memory, AI engine and other ASICs combined, and people are buying it like crazy. And I think the reason is that 'what I need' for many people nowadays is 'a bit of everything'.
      Yet I totally agree that Optane should be a sharper product. I would even go as far as saying that it should not be something that fits into a lot of existing product categories; it needs to be something of its own kind (I know Intel has been advertising it this way, but it is not). For example, how about a storage device that the system can directly boot 'into' without having to get the data out?

  • @JonMasters
    @JonMasters 3 years ago +2

    I strongly agree with CXL displacing many of the extra DDR channels. The days of huge pincount may be behind us

  • @FiddleMaker63
    @FiddleMaker63 3 years ago +2

    CXL is the future. It is an open standard and will meet the price-to-performance targets that Optane was designed to meet. It is an interesting technology, but it always comes down to what you get for the $$$.

  • @namibjDerEchte
    @namibjDerEchte 3 years ago +1

    I would like to see someone rig up some DDR4 Optane to an OMI CPU with one of those adapters.
    POWER10 comes to mind.
    That won't drag the system throughput down, except that some memory channels are now for Optane, not DRAM.

  • @thomasholte1828
    @thomasholte1828 3 years ago +1

    Thanks for this.

  • @xenocide8032
    @xenocide8032 3 years ago +2

    Intel is currently buying most if not all of their Optane 3D XPoint memory from the Micron fab in Lehi, Utah. Intel's fab in Albuquerque is partially up but in the same stage as Micron's, with low output. Micron is not selling the technology, just the building and possibly some of the equipment. I work at the Lehi site and this announcement was like getting kicked. We worked our butts off developing this technology, and being told we are getting sold along with the building really sucks.

  • @shammyh
    @shammyh 3 years ago

    Is there any performance impact beyond lowered clock speeds if I want to toss, say, 2x PMem100 DIMMs into an existing optimally-configured 6-channel Skylake/Cascade Lake system? Like lowered peak bandwidth? Increased DRAM latency? Or does it need to be balanced, like RAM, so minimum 6x Pmem DIMMs for a Skylake/Cascade Lake platform?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +1

      There is an impact. With interleaving, going to 6x per socket means you have more modules interleaved than 2x. Also, memory/ DIMM population was a several minute segment that was pulled out of this already very long video. That is another area of high complexity.

  • @curvingfyre6810
    @curvingfyre6810 9 months ago +1

    Remembering what Optane could have been makes me sad every time. The applications for a true midpoint in the data-delegation chain between system memory and long-term storage are more numerous than can be described. Servers, absolutely, it would shine there, but home computing too. We've already seen the system save-state possibilities in the H20 modules, and the way it completely re-invigorated the idea of a cached spinning drive, where before NAND-cached spinners were actually SLOWER than uncached. In gaming, rendering, editing, animating, ANY intensive workload on GPU or CPU could benefit from more immediately available transfers over base memory, especially with the spikes from system memory rewriting. But Intel killed adoption because of artificial compatibility limits. Imagine artificially limiting compatibility, then throwing your hands in the air and shelving the whole technology once you realize it's not being adopted. Yes, Intel, limiting who can use something will cause fewer people to use it. Almost like resources that stand to benefit the entire planet shouldn't be in the hands of people willing to kill those resources to save a buck (recent Unity scandal, anyone?).

  • @garrettkajmowicz
    @garrettkajmowicz 3 years ago +1

    What I'd really like to see is a way of being able to put SRAM into DIMM slots. At scale, it should only cost about 6x as much, but provide one of the biggest performance improvements possible.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 3 years ago

      Terrible idea, actually: 6x the cost for an advantage that will be eaten by caches anyway. The market is even turning the opposite way - eDRAM (DRAM on a die right next to the CPU die within the same substrate)

  • @mosterdpottv
    @mosterdpottv 3 years ago +1

    Great stuff

  • @richfiles
    @richfiles 3 years ago +1

    I'm legit curious about the fundamental physics behind Optane/XPoint memory... If I recall some of the early documentation that I read, the way they described the technology sounded a lot like memristor technology, and honestly sounds a lot like Hewlett Packard's memristor/Crossbar concept from 2007. HP abandoned it years later, due to difficulties manufacturing it. Right around that time, Intel came up with Optane.
    The memristance effect was theorized in 1976, and some anomalous measurements as far back as the 1800s might be attributable to the effect, but not recognized at the time. There are a few hobbyists, including a YouTuber who has published videos on the memristance effect, but it wasn't till HP announced in 2007 that they had been working on a prototype memristor that interest really took off. The basic concept of memristance is that the resistance of a memristor changes with the flow of current. Flow in one polarity increases resistance, and flow in the opposing polarity decreases resistance. The changes made to the materials are non-volatile and highly durable. Reading a value alters the stored value, unless an opposing current is also applied to counter the read current.
    What always got me with the memristor concepts was that HP had conceived a way to not just use it as non-volatile, high-durability memory, but also figured out how to perform massively parallel processing _in the crossbar array itself,_ effectively merging the memory with the processing unit. They had even come up with a way to use the Crossbar system to create neural networks. In later years, they pulled back from the more elaborate concepts and settled on just trying to market it as the heart of what they dubbed "The Machine", which was just a server with massive amounts of encrypted non-volatile memristor RAM that served as both working and storage memory. Really, no different than what Optane is now.
    What's curious is that the initial descriptions of Optane (ones that I saw) really made it look like Intel was just doing memristor memory, though Intel has, as far as I'm aware, always denied that Optane had anything to do with the memristance effect.
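The polarity-dependent, persistent resistance change described in the comment above can be sketched with the linear ion-drift memristor model HP published in 2008. This is a toy illustration only; all parameter values are made-up assumptions, not real device numbers.

```python
# Toy linear ion-drift memristor model (after HP's 2008 formulation).
# State w in [0, 1] tracks the doped fraction of the device: current in one
# polarity grows w (lowering resistance), the opposite polarity shrinks it,
# and w -- hence the resistance -- persists when the drive current is zero.

R_ON, R_OFF = 100.0, 16_000.0   # fully-doped / undoped resistance (ohms), illustrative
K = 1e4                          # lumped mobility term mu_v * R_ON / D^2, illustrative

def step(w, current, dt=1e-6):
    """Advance the internal state by one timestep of drive current."""
    return min(max(w + K * current * dt, 0.0), 1.0)  # bounded by device thickness

def resistance(w):
    return R_ON * w + R_OFF * (1.0 - w)

w = 0.5
r0 = resistance(w)
for _ in range(1000):            # positive current drives dopants across the film
    w = step(w, 1e-3)
r1 = resistance(w)
assert r1 < r0                   # resistance dropped, and the new value is non-volatile
```

This also shows why an un-countered read disturbs the stored value: the read current itself nudges w.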

  • @philipp594
    @philipp594 Před 3 lety +1

    I am using a 900p in my desktop, because I don't like the unreliability of SSD caches. And SLC lasts forever. Most SSD benchmarks don't write more than 1GB, which is conveniently the most common SSD cache size.

  • @andrewseamaster
    @andrewseamaster Před 3 lety +1

    No subtitles? How the hell can I watch this without waking my wife?

  • @NetBandit70
    @NetBandit70 Před 3 lety +1

    CXL and CCIX will ultimately bridge the road to non-x86 alternatives. I can't wait.

  • @salmiakki5638
    @salmiakki5638 Před 3 lety +3

    Do Optane DIMMs have less bandwidth than traditional RAM at a fixed clock speed?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 3 lety +3

      Perhaps the bigger challenge is that they are higher latency than DRAM.

    • @salmiakki5638
      @salmiakki5638 Před 3 lety +1

      @@ServeTheHomeVideo oh yeah I would expect that, thank you for your time anyway!

  • @TheBackyardChemist
    @TheBackyardChemist Před 3 lety

    How would the tradeoff look like between having PMEM in memory mode and having (a few, lets say 4) P5800X drives dedicated to Linux swap partitions? If most of the data in RAM is indeed cold or lukewarm, then maybe even that would be an acceptable compromise? And this would of course work in an EPYC system.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 3 lety

      ScaleMP and Intel had a demo of this before the PMem modules came out
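For the swap-partition route specifically, a sketch of what the configuration might look like (device paths are assumptions; with equal `pri=` values the kernel stripes swap pages round-robin across all the drives, which is how several Optane SSDs could be aggregated):

```
# /etc/fstab -- hypothetical entries placing swap on four Optane NVMe drives.
# Equal priorities make the kernel round-robin swap pages across all four.
/dev/nvme0n1p1  none  swap  sw,pri=100  0  0
/dev/nvme1n1p1  none  swap  sw,pri=100  0  0
/dev/nvme2n1p1  none  swap  sw,pri=100  0  0
/dev/nvme3n1p1  none  swap  sw,pri=100  0  0
```

Unlike PMEM in Memory Mode, this leaves the DRAM channels untouched, but every cold-page access then goes through the page-fault and block I/O path rather than a plain load/store.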

  • @kasra7907
    @kasra7907 Před 3 lety

    The interconnect works great if the nodes are FPGAs which interface with each other through timed, coherent crosspoint switches in the backplane.

  • @stevenclark2188
    @stevenclark2188 Před 3 lety +2

    So it's virtual memory fast enough you stop avoiding it?

  • @denvera1g1
    @denvera1g1 Před 3 lety +3

    As a prosumer, I love Optane in any form factor. I have been using Optane (M.2) as a cache drive in my file servers since it was introduced. At first I added it to my Dell C2100, then I built a backup server using a Ryzen 5 Pro 4650G, and those two 100GB Optane drives keep that slow SAS2 controller seeming very snappy. Sure, they're not "supported" on anything other than Intel..... 7th gen and later? But I've had no issues with it on AMD and older Intel; sure, it doesn't work the way Intel wants it to work unless you have 7th/8th gen, but most server OSes can have storage tiers for caching and use them in much the same way.
    I would like to see someone bring PMEM to the consumer market. I'd be more than happy with 1 channel of 64GB DDR4 2400 and 1 channel of PMEM, if I could get 200GB of PMEM (in mixed persistent mode) for the same price as 64GB of DDR4

    • @andyhu9542
      @andyhu9542 Před 2 lety +1

      Also a prosumer. And I just want to see Optane as the next-gen product that eliminates loading screens......

    • @denvera1g1
      @denvera1g1 Před 2 lety

      @@andyhu9542 Ideally, I'd like triple-channel memory controllers on consumer desktops, designed to work with 3 traditional channels, or 2+1 PMEM channels, but with the ability to do all PMEM. AMD at least will need triple-channel memory for their upcoming RDNA2 APUs, and if they go through with the rumored 16 compute unit APU, then 3 channels probably won't be enough, even with DDR5 5500. Look at the 5500 XT, a low-end GPU with only 22 compute units, but it basically has the bandwidth of 7 channels of DDR4 4000 to keep those compute units fed.
      I don't recall off hand, but IIRC PMEM is about as fast as DDR2. If AMD were to, say, have a huge TSV cache that covered the whole die, instead of just sitting on top of the existing cache on die, then maybe the performance hit won't be as bad for going from 2 channels of DDR4 3200 down to 3 channels of basically DDR2 667 (probably slower). Heck, having 400+MB of total cache on an APU might make RDNA2 usable at only 2 channels of DDR5 6400.
      For reference, Valve engineers decided that 2 channels of DDR5 5500 would not be enough for that low-end APU, so they gave it a whopping 4 channels, though really they're 32-bit channels, whereas DDR4 I believe is 64+8, something like that

  • @blahblahblah1787
    @blahblahblah1787 Před 3 lety +1

    But how does it mine chia in region mode?

  • @AriBenDavid
    @AriBenDavid Před 3 lety

    Optane seemed to start out at 90nm -- a cautious start. Wouldn't it be better at 3nm?? Is my memory correct on this?

  • @Paginski
    @Paginski Před 3 lety +2

    So it works like swap but managed by CPU, not the OS

  • @shadowmist1246
    @shadowmist1246 Před 2 lety +1

    Forget PMEM. With current high and ever-increasing DIMM capacity/speed, plus low-latency, high-capacity, increasing-endurance NVMe Gen4/24G SAS drives, I find it hard to justify this technology and its complexity, which essentially requires software engineers to write for it. Optane SSDs may have their place, though, but only as specialized endurance drives --- maybe --- for now anyways.

  • @salmiakki5638
    @salmiakki5638 Před 3 lety +3

    Could/do Optane DIMMs benefit from ECC?

  • @wowlmito
    @wowlmito Před 7 měsíci +1

    Can we use Optane DIMMs as normal DDR4 RAM?

  • @hoaxuan7074
    @hoaxuan7074 Před 3 lety

    Would be good for vast associative memory AI462 neural networks

  • @shadowarez1337
    @shadowarez1337 Před 3 lety +5

    Is it just me, or do the Optane DIMMs in blue remind you of the old DDR2 heat spreaders, before they became a thing?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 3 lety +3

      The black ones remind me of several DDR3 heat spreaders we had on DIMMs and are constructed in a similar manner

    • @shadowarez1337
      @shadowarez1337 Před 3 lety

      @@ServeTheHomeVideo Knew it, I have some on this ECC RAM in my dual Opteron test bed lol
      Loving the channel. You get all the great gear I'd love to tinker with. I actually have the 905P 380GB M.2 and the 900P 280GB.
      I'm about to move to AMD with a 5950X; I'm hoping I can utilize Optane as cache at the very least.

    • @ryanwallace983
      @ryanwallace983 Před 3 lety

      @@shadowarez1337 I use an Optane SSD on Ryzen - I don't know about using it as cache tho, I know LTT has a video on it

    • @shadowarez1337
      @shadowarez1337 Před 3 lety

      @@ryanwallace983 I have PrimoCache for software, since it's been the default for all my rigs. Thank you, I'll give it a shot once I get the 2TB Sabrent Rocket Plus in the mail

  • @maxhammick948
    @maxhammick948 Před 3 lety +1

    An adapter (probably needing an integrated controller) so you can get a couple of optane DIMMs in a PCIe slot to show up as a normal NVMe SSD would be a pretty handy thing once optane starts hitting ebay in volume. Kind of like the RAM drives of old, but more usable.
    Also seems like intel need to work hard on DIMM capacity - if they could push that up, then a stick of optane and a stick of RAM would retake the capacity crown over two sticks of RAM. It's a lot easier to turn a profit when you're offering something no-one else can, rather than offering a budget alternative to RAM.

    • @supremelawfirm
      @supremelawfirm Před 3 lety

      > a couple of optane DIMMs in a PCIe slot to show up as a normal NVMe SSD
      That's exactly what Highpoint and other vendors of PCIe expansion cards are now offering aka "4x4" add-in cards ("AICs"). There is also a Highpoint "2x4" AIC for low-profile chassis.
      It took Highpoint a while to make their AICs bootable, but that feature is now standard on some of their AICs.
      We've had much success for a whole year already, using Highpoint's SSD7103 to host a RAID-0 array of 4 x Samsung 970 EVO Plus M.2 SSDs.
      Their latest products now support PCIe Gen4 speeds.
      p.s. I'm a big fan of Highpoint's hardware, because it works and it's very reliable.
      It's their documentation that needed lots of attention; hopefully it's getting better with time.

    • @maxhammick948
      @maxhammick948 Před 3 lety

      @@supremelawfirm That's just M.2 to PCIe - I'm talking about DIMM slots, so it can use optane DIMMs rather than optane in M.2 form (or other SSDs)

    • @supremelawfirm
      @supremelawfirm Před 3 lety

      @@maxhammick948 Thanks! I misunderstood your point above. I appreciate the clarification.
      Also, I seem to remember a much older product aka "DDRdrive X1" , that held 2 or 4 DDR DIMMs in a PCIe add-in card. I don't believe it found much of a market, however.
      Found it here: ddrdrive.com
      NOTE WELL the short edge connector (looks like x1), which necessarily limits its raw bandwidth upstream.
      If Optane DIMMs currently max out at DDR4-2666 - as shown by Patrick in this video - then the calculations I did below seem to favor -- by quite a big margin -- a 4x4 add-in card with 4 x Gen4 M.2 NVMe SSDs:
      DDR4-2666 x 8 bytes = 21,328 MB/second raw bandwidth
      16G / 8.125 x 16 lanes = 31,507 MB/second raw bandwidth
      4x4 = x16 = elegant symmetry
      (The PCIe 3.0 "jumbo frame" is 1 start bit + 16 bytes + 1 stop bit = 130 bits / 16 bytes = 8.125 bits per byte transmitted, hence the divisor above.)
      I would choose a Western Digital Black SN850, partly because I prefer having a DRAM cache in each RAID-0 array member: 4 x DRAM cache @ 250MB = 1GB combined cache in RAID-0 mode.
      With PCIe doubling the clock speed at every generation, it appears to be catching and surpassing modern DDR4 DIMMs in raw bandwidth.
      p.s. I ran these numbers today, because for decades I have believed that DRAM was THE fastest memory available. Now, 4x4 AICs are proving to be a real challenge to that belief.
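The arithmetic above can be reproduced in a few lines (decimal MB = 10^6 bytes here; note that the 16 GT/s line rate is actually PCIe Gen4, which uses the same 128b/130b coding as Gen3):

```python
# Raw-bandwidth comparison from the figures above (decimal MB = 10**6 bytes).

# DDR4-2666: 2666 MT/s across one 64-bit (8-byte) channel.
ddr4_mb_s = 2666 * 8                         # 21,328 MB/s per channel

# 16 GT/s per lane (PCIe Gen4 rate), 130 bits per 16-byte "jumbo frame"
# = 8.125 bits on the wire per payload byte, times 16 lanes.
pcie_x16_mb_s = 16e9 / 8.125 * 16 / 1e6

print(ddr4_mb_s, int(pcie_x16_mb_s))         # 21328 31507
```

So one x16 Gen4 slot does have more raw bandwidth than a single DDR4-2666 channel, though latency, not bandwidth, is where DRAM keeps its big lead, as noted elsewhere in this thread.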

  • @jannikmeissner
    @jannikmeissner Před 3 lety +3

    Is Facebook actually utilising Optane in their Cooper Lake systems?

  • @jordonberkove7438
    @jordonberkove7438 Před 3 lety

    Will it work with Ryzen? What about sequential memory scores?

    • @foxbox2879
      @foxbox2879 Před 3 lety +1

      Intel locked it down to the Intel chipsets.

  • @memadmax69
    @memadmax69 Před 3 lety +1

    Wow, I didn't know that Optane still existed in its original context, cause I thought it had just turned into a name for Intel SSD drives that I see in Best Buy once in a while lololol

  • @rogerhonacki5610
    @rogerhonacki5610 Před 3 lety

    Use it for a Linux swap drive?

  • @Tyrim
    @Tyrim Před 3 lety +4

    I would love to use Optane SSDs for my CFD simulations, but the markup is too high compared to regular SSDs. If it was, say, 1.5x $/gig vs an MLC SSD I would buy it, but not like this...

    • @callums____
      @callums____ Před 3 lety +2

      Yeah, it is a bit of shame about the pricing. Earlier on it was only around 4x the cost for a while but since NAND pricing has significantly dropped while Optane pricing hasn't really moved and has even increased slightly in some markets for some products, it seems around 10x now.

    • @TheBackyardChemist
      @TheBackyardChemist Před 3 lety

      If you just need high write endurance, get a Micron 9300 MAX, they are extremely durable, 18 PB of writes on the 3 TB model. A pair of them in RAID 0, if you need more speed.
      If you need no such endurance, there are many more good SSDs on the market, some of them are much faster.

    • @Tyrim
      @Tyrim Před 3 lety

      @@TheBackyardChemist I am using 970 Pros as scratch drives; my simulations can create around 200GB of data per day, but that's not always the case. Just have to make sure everything is backed up and be ready to buy a new one when one of them dies...
      Something like the 9300 MAX would make sense, but I cannot justify the straight-up costs of it :( (I mean, I could, if I had the capital)

    • @TheBackyardChemist
      @TheBackyardChemist Před 3 lety +2

      @@Tyrim lol we had a 6 core Haswell E box generate 100 TB of writes in like 2 months. Ended up using HDDs in RAID0 until the 9300 max came out

    • @charlesselrachski34
      @charlesselrachski34 Před 3 lety

      @@Tyrim They make SATA high-endurance drives too: Micron MAX 5100/5200/5300

  • @justacomment1657
    @justacomment1657 Před 3 lety +1

    The problem with Optane DIMMs is not the performance nor the non-volatile nature of them.
    The problem is, most enterprises use virtualization systems.
    Say you run a VMware cluster of four dual-socket Xeons, powering about 50 servers where two of those are running a production database (i.e. no development or test systems). You need to give all four of these hypervisors access to Optane storage... and at this point things get complicated.
    First, you need to sacrifice one memory channel for Optane on each server, (a) limiting the amount of memory your hypervisors have access to and (b) sacrificing performance of the memory they do have access to.
    So basically you are impeding the performance of 48 VMs just by installing Optane, and it's not even being used yet.
    And now what?
    You've got 4 hypervisors with, say, 1TB of Optane storage each... do you use it locally? If the machine dies, all VMs will get migrated to other servers, except the production database which runs on your local Optane disk? The one that needed storage so big and so fast that Optane was chosen for it? That thing is now offline?
    Or you use vSAN with your Optane drives to share them across your VMware cluster, but this introduces overhead and latency, and the high storage IOPS will be no faster than your bloody Ethernet will allow...
    Optane in this state is dead in most use cases.
    You need to pair it with a storage server solution, something that runs low-latency connections to the hypervisors and has failover capabilities. Or vSAN, if your servers have enough IO to handle very low latency, RDMA-capable Ethernet/InfiniBand connections.
    But guess what high-bandwidth, low-latency looks like?
    Correct:
    High-speed Ethernet (40-100GBit)
    RDMA support on those NICs
    And a ton of fast storage drives in the system... something NVMe SSDs already deliver for a lot less money.
    The vSAN application was something I looked into in our environment... and we got a little shafted by Fujitsu/Intel with our servers.
    First, they do not have very much PCIe capability at all... and now the fun part: for some reason all PCIe x8 and x16 slots share the same interrupt... Very neat if you want something to be fast...
    Outside a virtual environment Optane is fast, but the non-volatile nature of the storage stands in contrast to the potential failure of the host... Oh, and by "failure of the host" I don't mean it has to explode to cause a problem... A network error, a temporary bug of some sort, anything disconnecting the thing from its network will be enough to get you in trouble... No pyrotechnics needed...

  • @theoneyoudontsee8315
    @theoneyoudontsee8315 Před 3 lety

    The real reason Optane matters: if you just have 1.5TB of DRAM, it's actually hard to use it effectively, given the "use it or lose it" issues DRAM has - the kind the extra parity chip helps with, making an ECC RAM DIMM better, and which is also why there is unbuffered ECC and buffered ECC. The cost per GB is just a sweet bonus of Optane!

  • @joechang8696
    @joechang8696 Před 2 lety

    In storage, Optane has great (low) latency compared to most NAND, which has large page and block sizes for cost reasons. Samsung (and one other?) makes a small page/block NAND whose latency is probably good enough relative to Optane. As memory, if an operating system really used persistent memory, then it could have had some interesting uses, but this would take time to develop. Otherwise, the use case is a situation in which the system with max DRAM still has huge IOPS (1M IOPS) but much lower IOPS with the extra capacity possible with DRAM/Optane, and it is unclear that such a workload exists in sufficient numbers

  • @mikebruzzone9570
    @mikebruzzone9570 Před 3 lety

    I can't find Optane DIMM power figures for the memory slot power rail. I have heard there is energy in the structure on writes, and power-efficient reads, but what is the difference vs DRAM? Intel and sources will talk about SSD read and write power, but what about the DIMM? mb

    • @mikebruzzone9570
      @mikebruzzone9570 Před 3 lety

      interesting back in time to current apache/barlow/cooper . . . mb

    • @mikebruzzone9570
      @mikebruzzone9570 Před 3 lety

      Thousands of dollars more for XCL+r you got to be kidding me $400 a pop for % grade SKU split mirroring coming out of the fab in 1M unit orders if u accept Intel terms to use what u need and become a hyperscale or OEM broker dealer for what you don't want and will sell to others less the Intel Inside NRE allowance that of course u pocket for yourself, I mean u'r employer, well, everyone knows what I mean. mb

  • @Wrathofgod220
    @Wrathofgod220 Před 3 lety

    I honestly think Micron realized that the technology wasn't catching on and they needed to exit the market before their losses ended up accruing with no profit in sight. I mean, if their customers were asking for 3d xpoint technology and they were able to ship volume, then I'm sure they would have decided not to exit the market. But I see Micron going into cxl technology because their customers are going to need memory that can communicate at different levels of their system. I can see cxl being useful for machine learning and other applications that have mixed workloads.

  • @supremelawfirm
    @supremelawfirm Před 3 lety

    Patrick: love this video as I do all your videos.
    Question: you are the 1 in a million IT experts who would be able to answer this question off the top of your head: Have you encountered any systems that supported a fresh OS install to Optane DIMMs? I posted a similar question at another IT website reporting Micron's recent decision.
    What originally came to my mind was a re-design of triple channel chipsets, which allowed the third channel to host persistent Optane DIMMs for effectively running an entire OS in a ramdisk that is non-volatile.
    In other words, using Windows terminology, the C: system partition would exist on that Optane ramdisk.
    To implement this hybrid approach correctly, DRAM controllers would need to operate at different frequencies, so as to prevent the problem you described which down-clocks all DRAM to the same frequency as the Optane DIMMs.
    Yes, enhancements would also need to be added to a motherboard's BIOS, chiefly by adding something like a "Format RAM" feature which supports a fresh OS install, and subsequently detects if the OS is already installed in an Optane ramdisk running on that third channel.
    FYI: I filed a provisional patent application for such a "Format RAM" feature, many years ago, but that provisional application expired.

    • @supremelawfirm
      @supremelawfirm Před 3 lety

      My comment at another IT User Forum, FYI:
      [begin quote]
      In the interest of scientific experimentation, if nothing else, I would like to have seen a few radical enhancements to standard server and workstation chipsets, to allow a fresh OS install to a ramdisk hosted by Optane DIMMs.
      Along these lines, one configuration that came to mind was those dated triple-channel motherboards: the third channel could be dedicated to such a persistent ramdisk, and the other 2 or 4 channels could be assigned to current quad-channel CPUs.
      The BIOS could be enhanced to permit very fast STARTUPs and RESTARTS, and of course a "Format RAM" feature would support fresh OS installs to Optane DIMMs installed in a third channel.
      By way of comparison, last year I migrated Windows 10 to a bootable Highpoint SSD7103 hosting a RAID-0 array of 4 x Samsung 970 EVO Plus M.2 NVMe SSDs.
      I recall measuring >11,690 MB/sec. READs with CDM. I continue to be amazed at how quickly that Windows 10 workstation does routine maintenance tasks, like a virus check of every discrete file in the C: system partition.
      p.s. Somewhere in my daily reading of PC-related news, I saw a Forum comment by an experienced User who did something similar -- by installing an OS in a VM. He reported the same extraordinary speed launching all tasks, no matter how large or small.
      [end quote]

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 3 lety

      That would be an odd architecture. Using a high-value DIMM slot/ slots for a low-value OS drive. It would be easier to just use an Optane NVMe SSD.

    • @supremelawfirm
      @supremelawfirm Před 3 lety

      @@ServeTheHomeVideo Aren't Optane NVMe SSDs limited to x4 PCIe lanes? I thought one major advantage of Optane DIMMs was their superior bandwidth i.e. parallel DIMM channels. M.2 and U.2 form factors are both x4. And, all of the Optane AICs I see at Newegg also use x4 edge connectors. Am I missing something important?

    • @supremelawfirm
      @supremelawfirm Před 3 lety

      Should I be comparing Optane DIMMs with a RAID-0 array of 4 x Optane NVMe SSDs?
      If I could afford 4 x Optane M.2 NVMe SSDs, I would be able to compare them when installed in our Highpoint SSD7103. The latter RAID-0 array currently hosts 4 x Samsung 970 EVO Plus m.2 SSDs.

    • @supremelawfirm
      @supremelawfirm Před 3 lety

      Many thanks for the expert direction, Patrick.
      I checked Highpoint's website, and their model SSD7505 supports 4 x M.2 @ Gen4 and it's also bootable, much like our SSD7103 which is booting Windows 10 AOK.
      The latest crop of Gen4 M.2 NVMe SSDs should offer persistence, extraordinary performance, and enormous capacity, even though they are not byte-addressable like Optane.
      As such, Optane M.2 SSDs are up against some stiff competition with the advent of Gen4 M.2 SSDs e.g. Sabrent, Corsair, Gigabyte and Samsung.
      The performance "gap" between DIMM slots and PCIe x16 slots should close even more with the advent of PCIe Gen5.
      From Highpoint's website, see:
      HighPoint SSD7505 PCIe Gen4 x16 4-Port M.2 NVMe RAID Controller
      Dedicated PCIe 4.0 x16 direct to CPU NVMe RAID Solution
      Truly Platform Independent
      RAID 0, 1, 1/0 and single-disk
      4x M.2 NVMe PCIe 4.0 SSD’s
      PCIe Gen 3 Compatible
      Up to 32TB capacity per controller
      Low-Noise Hyper-Cooling Solution
      Integrated SSD TBW and temperature monitoring capability
      Bootable RAID Support for Windows and Linux

  • @synaptichorizons
    @synaptichorizons Před rokem

    Really enjoyed this overview of PMEM 100/200. It also opens my mind to delaying a move to a PCIe Gen4 PowerEdge server with PMEM 200, and instead waiting for PCIe Gen5 PowerEdge in 2023-2024. Why? Because the one thing you didn't bring out about the downside of using the older 2017 PCIe Gen3 servers stuffed with PMEM 100 at really cheap prices for cool technology is server hot-swap of GPUs, NICs, NVMe Gen5 SSDs and other failure-prone devices, directly from the hot-swap bay on the front of the upcoming new PCIe Gen5 servers. What do you think about that comment, does that sound accurate?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před rokem +1

      High-end GPUs are not going to E3.S in this generation, so they will not be swappable. NICs are more OCP NIC 3.0 form factors. You are right, being in the memory channels is a big downside of the technology.

    • @synaptichorizons
      @synaptichorizons Před rokem

      @@ServeTheHomeVideo Thanks for your constant stream of valuable inputs. Can't wait to see your episode on the cheapest two-socket PCIe Gen 5 Windows Server 2022 box that will accept Kioxia PM7 SSDs.

  • @guspaz
    @guspaz Před 3 lety +1

    How much is the cost savings with Optane DIMMs actually? The last time I looked at the larger Optane SSDs, they cost almost as much as the same amount of ECC FB-DIMMs, so my assumption with the Optane DIMMs is that, unless they're substantially cheaper than those large Optane SSDs, you might as well just use regular RAM in the server instead of Optane.
    It seems to me that there are a bunch of ways that Intel could have sold Optane with lower bars to entry, but always chose to go with the approach that maximized the implementation complexity and cost. For example, Optane could work great in the consumer space as a transparent caching layer on an SSD. Pair a bunch of QLC with Optane and present it to the host system as a single drive, with the SSD controller handling the management. That could be neat, right? QLC cost with Optane performance? Well, Intel tried this, only the "integrated" products exposed the single SSD as two different drives (one Optane and one NAND) to the host system, and required the user to set up caching software to leverage it. Then they tried selling tiny Optane-only SSDs for caching, but they inexplicably locked them to certain Intel chipsets and still required external software to leverage them. Even Intel CPU owners were locked out if they had the "wrong" chipset. Talk about shooting yourself in the foot!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 3 lety +1

      We are saving around $3.5k/ server with the Optane DIMMs mentioned in this video. That is ballpark 20% savings on the server.

    • @Knirin
      @Knirin Před 3 lety

      @@ServeTheHomeVideo for what Optane capacity versus what DDR4 and SSD capacity?

  • @ultraderek
    @ultraderek Před 3 lety +1

    I think Intel’s direction on memory is the way to go. If we could get rid of GDDR and DDR life would be amazing.

    • @heyhoe168
      @heyhoe168 Před 3 lety

      what is wrong with ddr?

    • @ultraderek
      @ultraderek Před 3 lety

      For some reason YouTube deleted my reply. It might be your name. It's an added layer of complexity that is there because hard drives were a major bottleneck because of their speed. Soon load times will be zero with the absence of RAM. Also the state of a PC will not be lost if there is a power outage or if it becomes unplugged.

    • @heyhoe168
      @heyhoe168 Před 3 lety

      @@ultraderek We have hibernation and it is quite error-prone. Also don't forget, Optane is slower and more expensive. Today's shortage of electronics should hint at how important price is.

    • @heyhoe168
      @heyhoe168 Před 3 lety

      @@ultraderek Speaking of YT censorship, know this: a lot of people reply to me, so if it deletes you, that shows the inconsistency of Google censorship. Welcome to 2021!

  • @EyesOfByes
    @EyesOfByes Před 2 lety +1

    I'm surprised Apple didn't use this for an AppDirect mode in their M1 Max SoC. If anyone could optimise Optane, it's Apple. Sure, 7000MB/s 8TB storage. But it's still NAND flash

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 2 lety

      Power is too high for that. My M1 Max notebook can chew through battery already.

    • @EyesOfByes
      @EyesOfByes Před 2 lety

      @@ServeTheHomeVideo I've only used a Lenovo Legion 7 with the 160W RTX 3080, so I have no sense of perspective on power consumption 😉

  • @kelvinnkat
    @kelvinnkat Před 3 lety

    3:45 I love me a NAND baed SSD

  • @Mutation666
    @Mutation666 Před 3 lety +2

    Wish the SSDs would have dropped in price; they are great but super expensive

  • @youtubecommenter4069
    @youtubecommenter4069 Před 3 lety

    "...If you really want CXL to take off, it needs to get into other sockets at this point other than just Intel and become an industry solution...", 22:40. Yes and no. It can become an industry solution by diverting from Intel if they become laggards. From new OCP-driven PCIe and DIMM PHYs, internal controller architecture, kernel scripting, memory and CPU compute reallocations, and emergent ARM SoCs/wafer designs: reimagining the silicon. My opinion, we are at this point in the industry.

  • @SteveJones172pilot
    @SteveJones172pilot Před 3 lety

    This is SOOOOO complex.. I may be completely misunderstanding this, but my short takeaway is that they're doing something proprietary that works only with certain CPUs, with the benefit being no better than using an NVMe disk for the persistent storage and leaving your DRAM going as fast as possible, because to take advantage of the persistence effectively anyway, the program running on the system must be specifically written to take advantage of it? (which therefore limits the systems it will run on, since it won't work on non-Xeon systems).. Unless I'm seriously misunderstanding, I see this dying off like the "hard cards" of the '80s. That being said.. wow.. server tech has certainly come a long way...

  • @ultraderek
    @ultraderek Před 3 lety +1

    Being a computer engineer, the shared pool sounds complex af.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Před 3 lety +2

      Just wait until Gen-Z. Probably more like CXL 2.0 (maybe later) but that is where we get shelves of memory connected to fabric and shared across multiple accelerators/ CPUs.

    • @KarrasBastomi
      @KarrasBastomi Před 3 lety

      @@ServeTheHomeVideo That sounds insane but convenient at the same time...

  • @ChuckNorris-lf6vo
    @ChuckNorris-lf6vo Před rokem

    Why didn't the YouTube algorithm suggest this video to me yet? YT is weird af.

  • @Strykenine
    @Strykenine Před 3 lety +1

    Wait...so you're saying that intel cooked up a pretty cool technology that seemed to be implemented just so they could make more money, then a competitor came along and basically said 'No, do it our way, it's cheaper?' Where have I heard this before? *cough*ia64*cough*

  • @edc1569
    @edc1569 Před 3 lety +1

    I'm sure Apple could have done something clever with this tech - though it seems unlikely with Intel at the helm.

  • @Oz-gv5fz
    @Oz-gv5fz Před 3 lety +1

    Intel dragging everyone down

  • @darknase
    @darknase Před rokem

    Yeah, well, that's all year-old info. Optane is dead now. Intel kept telling people "all fine and dandy" until the press release that said: "Scrapped!"
    No supply, no nothing; exception: sales of remaining stock.
    Also, on a low level, 3D XPoint is and always was a chalcogenide-glass resistive memory, a.k.a. phase-change memory... even though they denied it.
    Ian Cutress - TechTechPotato - has a great video about that:
    czcams.com/video/KjQqVD40DLw/video.html

  • @MrV1NC3N7V3G4
    @MrV1NC3N7V3G4 Před 3 lety +1

    Hey

  • @edwarddejong8025
    @edwarddejong8025 Před 10 měsíci +1

    Intel really blew it with Optane. It was a decent technology, but they restricted it in weird ways, and didn't explain well when it should be used. Intel is lost at sea, and has imploded somewhat. They still make great chips though; extremely reliable.