Explaining Server DDR5 RDIMM vs. UDIMM Differences

  • Published Jun 16, 2024
  • DDR5 memory is not just a simple speed upgrade. It is absolutely essential for AMD EPYC and Intel Xeon servers as we go forward. In this video, we explain why we need DDR5, explain the differences between DDR4 and DDR5, show off components like the new RCD, PMIC, and SPD hub, and finally, talk about performance and CXL. As a bonus, we also demonstrate why you can no longer use UDIMMs in DDR5 RDIMM slots, even if they are ECC UDIMMs.
    STH Main Site Article: www.servethehome.com/why-ddr5...
    STH Top 5 Weekly Newsletter: eepurl.com/dryM09
    ----------------------------------------------------------------------
    Become a STH YT Member and Support Us
    ----------------------------------------------------------------------
    Join STH YouTube membership to support the channel: / @servethehomevideo
    STH Merch on Spring: the-sth-merch-shop.myteesprin...
    ----------------------------------------------------------------------
    Where to Find STH
    ----------------------------------------------------------------------
    STH Forums: forums.servethehome.com
    Follow on Twitter: / servethehome
    Follow on LinkedIn: / servethehome-com
    Follow on Facebook: / servethehome
    Follow on Instagram: / servethehome
    ----------------------------------------------------------------------
    Timestamps
    ----------------------------------------------------------------------
    00:00 Introduction
    01:52 DDR4 vs DDR5 Differences and UDIMM vs RDIMM Differences
    04:08 DDR5 now has TWO Channels
    05:39 New chips and components on DDR5 RDIMMs
    06:40 On-chip ECC on DDR5 versus ECC UDIMM and RDIMMs
    08:57 Why Servers NEED DDR5 with AMD EPYC and Intel Xeon
    11:52 Performance Impact of DDR5
    14:37 CXL and the DDR5 Future
    16:00 DDR5 Server Memory Summary
    16:51 Wrap-up
    ----------------------------------------------------------------------
    Other STH Content Mentioned in this Video
    ----------------------------------------------------------------------
    - 4th Gen Intel Xeon Scalable Sapphire Rapids Launch: • $17K Sapphire Rapids S...
    - AMD EPYC 9004 "Genoa" Launch: • AMD EPYC 9004 Genoa Ga...
    - Non-binary DDR5 and more: • This New Server Tech i...
    - Compute Express Link or CXL: • CXL in Next-gen Server...
  • Science & Technology

Comments • 125

  • @mikegrok
    @mikegrok 1 year ago +13

    An example where ECC to the module was needed more than ECC on the chip:
    I was working at a company that had spent 1.5 man-years tracking down a software bug in production, when it suddenly resolved itself.
    A week later we realized that one of the computers in the high-availability cluster had turned off. I was present when it was opened, and I noticed that the RAM had 8 chips per DIMM.
    After we got new RAM, the computer couldn’t complete POST because it had bad memory. In fact, any memory installed into channel 3 was bad.
    We got a replacement motherboard, and that fixed it.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +4

      We had something similar with an early HPE DL385 Gen10

    • @jgurtz
      @jgurtz 1 year ago

      A sign that the app needs better instrumentation. These kinds of bugs happen too often!

    • @johndododoe1411
      @johndododoe1411 1 year ago +3

      @@jgurtz The app wasn't at fault; the hardware was at fault for wasting software dev time by corrupting data.

    • @johndododoe1411
      @johndododoe1411 1 year ago +1

      Such hardware bugs are why I'm skeptical of on-chip RAM ECC. From an architecture perspective, the CPU package should do ECC on entire RAM cache slots, independent of the RAM module design. So when the core needs something from a 64-byte (512-bit) cache slot, the memory interface chiplet loads 80 bytes from the DRAM interface and uses the extra 128 bits as ECC to correct up to 12 single-bit errors. Those extra bytes translate to two extra 64-bit memory cycles, and it's up to the ECC format designer to ensure that flaky PCB traces or stuck DRAM data lines are always detected.

    • @mikegrok
      @mikegrok 1 year ago +1

      @@johndododoe1411 The on-DIMM ECC is mostly just there to allow memory manufacturers to use lower-quality parts with higher error rates that would otherwise disqualify them from use.
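      A quick sanity check on the arithmetic in the hypothetical controller-side scheme described above (the 64-byte line / 80-byte fetch idea from the comment, not any shipping design):

      ```python
      # Overhead arithmetic for the hypothetical 64-byte-line / 80-byte-fetch ECC
      # scheme sketched in the comment above (illustrative only, not a real product).
      line_bits = 64 * 8             # 512 data bits per cache line
      fetch_bits = 80 * 8            # 640 bits actually read from DRAM
      ecc_bits = fetch_bits - line_bits
      print(ecc_bits)                # 128 check bits per line
      print(ecc_bits / line_bits)    # 0.25 -> 25% storage overhead
      print(ecc_bits // 64)          # 2 extra 64-bit bus transfers per line
      ```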

  • @ellenorbjornsdottir1166
    @ellenorbjornsdottir1166 1 year ago +10

    I'm not moving to D5 until D6 comes out, for my own financial health.

  • @kiri101
    @kiri101 1 year ago +11

    You did a great job pacing all the information I needed to keep me up to date with newer memory technologies, thank you.

  • @becktronics
    @becktronics 9 months ago

    Hey Patrick, awesome video explaining the differences between DDR4 and DDR5. I loved the various pictures and captions that you'd place as you were explaining the PMIC, RCD, and SPD hub! You have a knack for articulately and concisely explaining device differences and cementing what the myriad acronyms actually do. I will definitely be coming back for more tech educational content over here. I have a background in chemical engineering and got curious to see the electrical/computer engineering side of semiconductor manufacturing :)

  • @l3xx000
    @l3xx000 1 year ago +1

    Great video, Patrick! This was a really hot topic on the forums, with lots of conversation, especially around ECC UDIMMs. Thanks for clearing everything up. Agreed, this is an excellent resource that will be useful to lots of people - cheers!

  • @ospis12
    @ospis12 1 year ago +24

    There are many more differences between RDIMMs and UDIMMs:
    1. The number of address lines per sub-channel is 13 for UDIMMs but only 7 for RDIMMs;
    thanks to the RCD, RDIMM address lines can run at DDR while UDIMM address lines must run at SDR.
    2. ECC on UDIMMs, according to JEDEC, is only 4 bits per sub-channel, while RDIMMs can have 4 or 8.
    3. UDIMMs are restricted to x8/x16 memory dies, while RDIMMs can use x4 as well;
    this allows UDIMM hosts to mask writes using the DM_n signal, while RDIMM hosts must send full transfers.
    I think points 1 and 3 are mostly responsible for why UDIMMs and RDIMMs are no longer slot compatible.
    The supply voltage is also different, but as far as I know all PMICs can run in the 5-15 V range.
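    A quick arithmetic check on point 1 above, using only the pin counts quoted in the comment (a sketch, not a JEDEC reference): 7 RDIMM CA pins running at double data rate move 14 bits per clock, which roughly matches 13 UDIMM CA pins at single data rate, and suggests why the RCD lets RDIMMs get by with fewer host-side address lines.

    ```python
    # Command/address throughput per sub-channel, using the pin counts from the
    # comment above (illustrative arithmetic only).
    udimm_ca_pins, rdimm_ca_pins = 13, 7
    udimm_bits_per_clock = udimm_ca_pins * 1   # SDR: one transfer per clock
    rdimm_bits_per_clock = rdimm_ca_pins * 2   # DDR behind the RCD: two transfers per clock
    print(udimm_bits_per_clock, rdimm_bits_per_clock)  # 13 vs 14 bits per clock
    ```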

  • @dvone4124
    @dvone4124 1 year ago +2

    Useful content. Thanks! In a few years when I'm going through the next round of server upgrades, I'll understand better what I'm looking at. (Yes, I expect some updates from you before then, too.)

  • @jgurtz
    @jgurtz 1 year ago +2

    Great dive into the state of PC memory architecture, love keeping up with this stuff! My DDR5 key takeaways: cut the memory channel width roughly in half, provide 2 channels per DIMM, and add an on-DIMM power supply and other reliability features to support even higher clocking. The ratio of memory bandwidth to clock cycles has not kept up with high core counts, explaining why Top500 systems use 24-48 core CPUs. Related points: unbuffered DIMMs now have on-chip ECC to support smaller processes and faster speeds (I'll think of it like Reed-Solomon on an HDD); and there's some interesting future potential for RAM over PCIe.
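    To put rough numbers on that bandwidth-per-core point (my own sketch with illustrative JEDEC speeds, not benchmarks from the video): one DDR5-4800 channel moves 4800 MT/s x 8 bytes, or ~38.4 GB/s, so a 12-channel socket tops out around 460 GB/s, and dividing by core count shows how quickly per-core bandwidth shrinks as cores scale.

    ```python
    # Rough per-core memory bandwidth for a 12-channel DDR5-4800 socket
    # (illustrative numbers, not measurements from the video).
    channels = 12
    mt_per_s = 4800            # mega-transfers per second per channel
    bytes_per_transfer = 8     # 64-bit data path per channel
    socket_gbs = channels * mt_per_s * bytes_per_transfer / 1000  # ~460.8 GB/s
    for cores in (32, 48, 64, 96):
        print(f"{cores} cores: {socket_gbs / cores:.1f} GB/s per core")
    ```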

  • @I4get42
    @I4get42 1 year ago +3

    Hi Patrick! Aw man, this is going to stay useful. Thanks for the work 😀

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Hopefully this helps folks. Have a great weekend

    • @I4get42
      @I4get42 1 year ago

      @@ServeTheHomeVideo Thanks! I hope you do too 😃

  • @__--JY-Moe--__
    @__--JY-Moe--__ 1 year ago +1

    Thought I saw smoke rolling out of your ears once! I hope you find some great trade shows to go to!
    Great breakdown! OMG, it's a CXL module in the wild! I've waited 5 years to see where that tech
    was going!

  • @stevekrawcke3937
    @stevekrawcke3937 1 year ago +2

    Great info, now to get a budget for new servers.

  • @DrivingWithJake
    @DrivingWithJake 1 year ago +2

    Quite interesting.
    Should be fun to see how the server world is changing. Just had 8x 7443P servers arrive today at my house, all in just 4U, which is quite nice.

  • @gowinfanless
    @gowinfanless 1 year ago

    Really cool. We plan to use DDR5 for the next design of the R86S mini PC box with an Alder Lake N300 CPU.

  • @mumar100
    @mumar100 1 year ago +3

    Thanks for the very helpful content for a self-"studied" amateur trying to go from desktop to workstation/server.

  • @spuchoa
    @spuchoa 1 year ago +2

    Thank you for the DDR5 explanation. Great video!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +1

      Glad it was helpful! Sadly, the Super Bowl seems to have stopped views on it. Hopefully people share it.

  • @comrade171
    @comrade171 1 year ago +1

    Great breakdown, thanks!

  • @RickJohnson
    @RickJohnson 1 year ago +3

    Very useful to get those of us in the DDR4 world up to speed!

  • @jedcheng5551
    @jedcheng5551 1 year ago +7

    At the beginning of the DDR5 rollout, quite a lot of DIMMs' PMICs broke (including mine) during use.
    It was my first time seeing faulty RAM, but my friend who works in a big tech data centre said that it happens every day at his company's scale. The quality of the PMICs should also be much better now, 1.5 years on, and the PMIC supply is in better shape as well.

  • @johnknightiii1351
    @johnknightiii1351 1 year ago +5

    Consumer CXL seems pretty exciting. We know at least AMD is working on it. We might be getting it with Zen 5 and PCIe 6.0, which is really exciting.

  • @naifaltamimi2885
    @naifaltamimi2885 1 year ago

    Very informative, thank you.

  • @rem9882
    @rem9882 1 year ago +3

    This really was a great video talking about all the new benefits that DDR5 brings

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Thank you. Glad you liked it.

    • @rem9882
      @rem9882 1 year ago +1

      Could you make a video about power IC chips and how they're being hit by the shortages and development problems? I'd love to know more about them.

  • @skaltura
    @skaltura 1 year ago +4

    Awesome! Can't wait till CXL hits sensible pricing too! :)

  • @ander1482
    @ander1482 1 year ago

    Would be nice to see which workloads scale with more memory bandwidth, as there is not much info available out there. Thanks, Patrick, for the video.

  • @VoSsWithTheSauce
    @VoSsWithTheSauce 1 year ago +3

    I agree on needing ECC server memory; the speeds and capacities are awesome, but I hate that persistent memory is dying. AMD EPYC would be nice with it.

  • @edschaller3727
    @edschaller3727 1 year ago +2

    Thanks for the overview of the differences. It is good to know. A couple of questions for you if you have the time:
    With the support components moving onto the memory module (e.g., the PMIC) and PCIe cards with memory, do you think we are headed towards a high-bandwidth serial protocol between CPU and memory, similar to how storage interconnects (e.g., ATA => SATA, parallel SCSI => SAS) and even expansion buses (e.g., PCI => PCIe) went serial?
    How does memory on PCIe work with NUMA configurations for the system?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      CXL Type-3 looks like a NUMA node without cores attached in the topology. On the memory attached bit, that is somewhat the promise of CXL 3.x because it will make sense to start using shelves of memory connected via CXL instead of adding more DDR interfaces on a chip.

  • @wewillrockyou1986
    @wewillrockyou1986 1 year ago +4

    I would consider the advantage of multiple independent channels to really be a latency advantage: it reduces the chance that a memory access has to be queued behind a previous access, and thus the chance of it being delayed. Back-to-back memory accesses to the same bank are the biggest contributor to higher latency under load in memory systems, and increasing the number of channels is the best way to increase bank parallelism without adding more devices to each channel.
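    A toy model of that queuing argument (my own sketch, not from the video or the comment above): if consecutive accesses land on uniformly random channels, the chance that an access collides with the one immediately before it is roughly 1/N, so doubling the number of independent (sub-)channels roughly halves back-to-back conflicts.

    ```python
    import random

    # Toy model: how often does a memory access hit the same channel as the
    # immediately preceding one, for a given number of independent channels?
    def back_to_back_conflict_rate(channels: int, accesses: int = 1_000_000) -> float:
        prev, conflicts = None, 0
        for _ in range(accesses):
            cur = random.randrange(channels)
            if cur == prev:
                conflicts += 1
            prev = cur
        return conflicts / accesses

    for n in (8, 16, 32):   # e.g. fewer wide channels vs. more narrow sub-channels
        print(n, round(back_to_back_conflict_rate(n), 4))   # ~1/n
    ```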

  • @jannegrey593
    @jannegrey593 1 year ago +25

    That is soooooo expensive ATM. I don't even blame EPYC motherboards for not having 24 DIMMs. Though they should start releasing them right about now - that was the promise last year.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +7

      Showed an 8x DIMM one from ASRock Rack in the video.

    • @jannegrey593
      @jannegrey593 1 year ago +1

      @@ServeTheHomeVideo True. I do wonder if there will be 24-DIMM-per-socket (with 2 sockets) motherboards as promised. I don't doubt they could be made, but I'm thinking of how much space that takes. I have to re-watch your earlier videos on Genoa. After the first one, my PC started acting up, and only a couple of days ago did I finally find the root of the problem.

    • @revcrussell
      @revcrussell 1 year ago

      Came here to say the same thing. I am happy to be building with DDR4 right now due to cost.

    • @jannegrey593
      @jannegrey593 1 year ago

      @@revcrussell If past experience is anything to go by - and mine goes back to before DDR - the prices will flip. Though in the case of DDR5 it seems to be taking longer than usual (the average was around 18 months after introduction), but only a bit. I wouldn't be surprised if DDR5-6000 EXPO kits were cheaper than DDR4-4000 kits before the end of the year. Heck, you can find some places where they are cheaper already, but by the end of 2023 it should be universal. Especially since DDR4 will probably go almost completely out of mass production, with only enough made to support legacy systems. This will depend on how many Zen 3 and older Intel CPUs are unsold. And additionally, there was almost a year when DDR5 was only an option, which slowed down the change in production.

    • @radekc5325
      @radekc5325 1 year ago +1

      Gigabyte MZ33-AR0 is an example mobo with 24 DIMMs per socket. Never used Gigabyte server mobos, but at least it means more are likely.

  • @krisclem8290
    @krisclem8290 1 year ago +2

    First 3 seconds of this video had me checking my playback speed.

  • @flagger2020
    @flagger2020 1 year ago +1

    Nice video. For the Top500, most HPL (Linpack) machines use GPUs for the heavy lifting, and the bandwidth mostly goes to them. CPU cores are good for other mixed workloads such as HPCG, etc.

  • @mrsittingmongoose
    @mrsittingmongoose 1 year ago +1

    We are finally seeing DDR5 be beneficial on the consumer side too. Raptor Lake takes a major hit on DDR4 that Alder Lake did not.

  • @therealb888
    @therealb888 1 year ago +3

    If I had $100 every time he says "THIS" I could buy some of THIS set of 32GB x 24 DIMMs 😂
    Excellent video, learned a lot!

  • @BOXabaca
    @BOXabaca 1 year ago +2

    Speaking of DDR5 servers, on a tangent: you should check out the M80q Gen 3, which uses DDR5 SODIMMs in a TinyMiniMicro-class device.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      Wow! Just snagged a great deal on one because of this comment. Thank you!

    • @BOXabaca
      @BOXabaca 1 year ago

      @@ServeTheHomeVideo Can't wait for the review!

  • @Nightowl_IT
    @Nightowl_IT 1 year ago

    The smiley is on^^
    It flickers a bit but it isn't bad :)

  • @Veptis
    @Veptis 6 months ago

    I am currently deciding on parts for a workstation, and picking the right combination of DIMM slots, number of sticks, frequency, timings, capacity, and price... is difficult.

  • @BansheeBunny
    @BansheeBunny 1 year ago +26

    DDR5 is forcing people to know the difference between UDIMM and RDIMM, I love it.

  • @thebyzocker
    @thebyzocker 1 year ago +4

    actually a great video

  • @MrMartinSchou
    @MrMartinSchou 1 year ago +5

    If a CXL module with 4 DDR5 modules only gives you the same bandwidth as 2 DDR5 channels, why not use DDR4 modules?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      That is a major area folks are looking at. Imagine a hyperscaler reusing DDR4.

    • @MrMartinSchou
      @MrMartinSchou 1 year ago

      @@ServeTheHomeVideo Oh - I'm surprised.
      I was thinking "smarter folks than me have thought of this and dismissed it - I would like to know why, so I can become smarter".
      I figured it would be a latency issue, or the bandwidth not being enough to saturate the PCIe 5.0 link, or something, because as you said, reusing DDR4 is cheaper.
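      For rough context on the bandwidth question in this thread (my own back-of-the-envelope numbers, not figures from the video): a PCIe 5.0 x16 CXL link carries roughly 63 GB/s per direction, a DDR5-4800 channel about 38.4 GB/s, and a DDR4-3200 channel about 25.6 GB/s, so only a couple of channels of either generation are needed to saturate one link.

      ```python
      # Back-of-the-envelope bandwidth comparison for a CXL memory expander
      # (illustrative JEDEC speeds; real devices and overheads vary).
      pcie5_x16_gbs = 16 * 32 / 8 * (128 / 130)   # 32 GT/s/lane, 128b/130b -> ~63 GB/s per direction
      ddr5_channel_gbs = 4800 * 8 / 1000          # 38.4 GB/s per 64-bit DDR5-4800 channel
      ddr4_channel_gbs = 3200 * 8 / 1000          # 25.6 GB/s per 64-bit DDR4-3200 channel

      print(f"PCIe 5.0 x16: ~{pcie5_x16_gbs:.0f} GB/s per direction")
      print(f"DDR5-4800 channels to match it: {pcie5_x16_gbs / ddr5_channel_gbs:.1f}")
      print(f"DDR4-3200 channels to match it: {pcie5_x16_gbs / ddr4_channel_gbs:.1f}")
      ```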

  • @JasonsLabVideos
    @JasonsLabVideos 1 year ago +1

    DAMN!!!! this is insane !!

  • @timramich
    @timramich 1 year ago +1

    Are there ever even going to be any E-ATX 2-socket boards for EPYC Genoa (Supermicro H13)? I see a few boards that are for specific cases. It doesn't look like an E-ATX board has room for two of these CPUs. Seems they're going backwards.

    • @revcrussell
      @revcrussell 1 year ago +2

      That is why they need so many cores: you can only get one socket on a board. Just think of the loss of PCIe lanes.

  • @chaosong9628
    @chaosong9628 7 months ago

    What a wonderful video! I just needed this!

  • @rafaelmanochio6990
    @rafaelmanochio6990 1 year ago

    Awesome content!

  • @simonsomething2620
    @simonsomething2620 1 year ago +1

    WoW add-on expansion RAM... it feels almost like "downloading more RAM" is becoming a thing.

  • @marcello4258
    @marcello4258 1 year ago

    This is why I don't see RISC, like Arm, beating CISC in the big servers just yet... it's the same reason why CISC was set for a long time... the memory doesn't keep up.

  • @zerothprinciples
    @zerothprinciples 1 year ago

    How would I build a compute server to maximize RAM (specifically, a single Java address space for AI applications)?
    Four TB on a single motherboard would just be the starting point.

  • @keeperofthegood
    @keeperofthegood 1 year ago +2

    ROI is going to be a pita to reach before DDR6 is out

  • @jackykoning
    @jackykoning 1 year ago +2

    So does AM5 support UDIMMs? I really don't want to use non-ECC anymore, because most of it is unstable out of the box; when you are gaming for 12 hours you are nearly guaranteed to crash.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +3

      Yes. AMD AM5 is a consumer platform. The next use for that ECC UDIMM shown is for an AM5 server platform.

    • @jackykoning
      @jackykoning 1 year ago

      @@ServeTheHomeVideo Good to know. So any unbuffered DDR5 will likely work as long as the slot matches, which it should in theory always do.

  • @johndododoe1411
    @johndododoe1411 1 year ago

    As someone who learned about DRAM when each chip might contain only 8 KiB or less, and studied cache hardware and CPU design later, keeping up with marketing code names such as Death Lake and superclean plus plus mega is a useless game of noise.
    Interesting, though, that the old RAMBUS company is coming back as a maker of standard high-end RAM chips instead of a monopoly.

  • @uncrunch398
    @uncrunch398 2 months ago

    Other than a server built specifically for high-bandwidth apps, with low-bandwidth apps banned from running on it, I see no point in choosing a lower-core-count CPU to save bandwidth. This is the same problem as inefficiencies in farmland use to make massive machine use more efficient and faster: it takes many times more land and other resources to feed a person this way. Put the low-bandwidth apps on the extra cores to save the hardware and floor space they would otherwise take up.

  • @constantinosschinas4503
    @constantinosschinas4503 1 year ago +1

    So why exactly did Micron give you 32x 32GB to test?

  • @tristankordek
    @tristankordek 1 year ago +1

    👍

  • @user-hj8rn5wp8z
    @user-hj8rn5wp8z 1 year ago +1

    What about timings?
    And how do timings impact server work scenarios?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      The CAS latency increase is mostly offset by higher clock speeds, so the latency in ns is only up ~3%.
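      As a worked example of that math (a sketch using illustrative timings, not the exact modules from the video): absolute CAS latency in nanoseconds is the CL count divided by the clock, which is half the data rate, so a higher CL at a higher data rate can land at nearly the same wall-clock latency.

      ```python
      # CAS latency in nanoseconds: CL clock cycles at a clock of (data rate / 2) MHz.
      # Timings below are illustrative examples, not the specific DIMMs in the video.
      def cas_ns(cl: int, data_rate_mts: int) -> float:
          return cl / (data_rate_mts / 2) * 1000

      ddr4 = cas_ns(22, 3200)   # DDR4-3200 CL22 -> 13.75 ns
      ddr5 = cas_ns(34, 4800)   # DDR5-4800 CL34 -> ~14.17 ns
      print(f"{ddr4:.2f} ns vs {ddr5:.2f} ns ({(ddr5 / ddr4 - 1) * 100:.1f}% higher)")
      ```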

  • @berndeckenfels
    @berndeckenfels 1 year ago +2

    So one RDIMM has 2 channels and CPUs have 12; is that now 6 DIMMs per socket, or can you have multiple?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago +2

      12 DIMMs per socket in 1DPC, 24 in 2DPC (when that is available). You are right, it is confusing now.

    • @concinnus
      @concinnus 1 year ago +1

      CPU channel counts still mean 64-bit data width channels, as before. The two channels within a DIMM are best referred to as sub-channels.
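      A tiny counting sketch for this thread (my own summary of the replies above, using the 12-channel figure they assume): channels times DIMMs per channel gives physical DIMM slots, and each DDR5 DIMM then carries two sub-channels.

      ```python
      # DIMM and sub-channel counting for a 12-channel DDR5 socket (per the thread above).
      channels_per_socket = 12
      for dpc in (1, 2):                  # 1DPC today, 2DPC when available
          dimms = channels_per_socket * dpc
          subchannels = dimms * 2         # each DDR5 DIMM exposes two independent sub-channels
          print(f"{dpc}DPC: {dimms} DIMMs, {subchannels} sub-channels per socket")
      ```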

  • @nfavor
    @nfavor 1 year ago +4

    Wow. RAMBUS is still around.

    • @revcrussell
      @revcrussell 1 year ago +1

      Only as a patent troll.

    • @TheBackyardChemist
      @TheBackyardChemist 1 year ago +1

      @@revcrussell I don't think so; they are actually designing memory/PCIe controller blocks and selling them to CPU/GPU/*PU designers.

    • @revcrussell
      @revcrussell 1 year ago

      @@TheBackyardChemist If they are, I stand corrected, but I read recently they were just making money on patents.

    • @TheBackyardChemist
      @TheBackyardChemist 1 year ago +1

      @@revcrussell I do not remember where I read this, but I seem to remember that out of AMD/Nvidia/IBM, at least one is using a DRAM controller block they bought from RAMBUS.

    • @alext3811
      @alext3811 1 year ago

      @@revcrussell I think they're doing both.

  • @clausdk6299
    @clausdk6299 1 year ago

    I mean.. 2 of those would be nice

  • @bandit8623
    @bandit8623 1 year ago

    great vid

  • @Xiph1980
    @Xiph1980 1 year ago +1

    Ehm, about that graph.... Might've been better to put a release year on the x-axis, because the MT-Gbps chart is essentially a chart displaying the relationship between inches on X, and centimeters on Y. Not really informative. 😉

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Somewhat hard to do that. When was DDR5-4800? When it was delivered for consumers? When Genoa launched? When both Genoa and Intel used it? DDR4-3200 is another good example.

  • @mr.b5566
    @mr.b5566 1 year ago +2

    Yeah, information overload. I had to play it at 0.75x just to take it in better.

  • @therealb888
    @therealb888 1 year ago

    Incompatibility is such an anti-consumer jerk move.

  • @reki353
    @reki353 1 year ago

    Me, who still uses DDR3 FB-DIMMs

    • @akirafan28
      @akirafan28 1 year ago +1

      FB-DIMM? What's that?

    • @reki353
      @reki353 1 year ago +1

      @@akirafan28 Fully buffered DIMMs, instead of the regular unbuffered DIMMs.

    • @akirafan28
      @akirafan28 1 year ago +1

      @@reki353 Thanks! 🙂👍

  • @marvintpandroid2213
    @marvintpandroid2213 1 year ago +4

    That looks like a very expensive box

  • @JimFeig
    @JimFeig 1 year ago

    They made it so they can charge a larger premium for server memory. Artificial inflation.

  • @AraCarrano
    @AraCarrano 1 year ago +2

    Smiley face prop light is just a little flickery.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Yes. I am not sure why it is more so in this one than in others. That Canon C70 has not had its settings changed.

  • @minnesnowtan9970
    @minnesnowtan9970 5 months ago +1

    At 41 through 44 seconds, it is UNCLEAR whether you said CAN or CAN'T. So please start learning to say "cannot" when appropriate, and please STOP using contractions entirely. This is ESPECIALLY true when Brits, South Africans, and Indians (3 examples should be enough) speak. Contractions make you less understandable and more likely to be skipped over, avoided, and certainly not subscribed to.

  • @charlievikram4510
    @charlievikram4510 1 year ago

    I am a freelancer from India. I'd love to design thumbnails for you. How can I contact you?

  • @christ2290
    @christ2290 1 year ago

    Jesus, Rambus, the patent troll, is *still* around sticking their name on things. Haven't thought of them in years.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 1 year ago

      Actually, they develop a lot of IP other companies use. I think of patent trolls more as organizations without R&D.

  • @paxdriver
    @paxdriver 1 year ago

    TL;DR - server RAM has no RGB, so it's definitely better lol

  • @abritabroadinthephilippines

    Why do you say "pretty much"? Either it is or it isn't, m8.

  • @mikebruzzone9570
    @mikebruzzone9570 1 year ago

    mb

  • @kimsmith6066
    @kimsmith6066 1 year ago

    Do you let people who have subscribed win one? Have a good one.

  • @sfoyogi8979
    @sfoyogi8979 1 year ago +3

    Sponsored by Micron.