The ACTUAL Difference Between Intel and AMD

  • Published April 4, 2022
  • Visit www.brilliant.org/TechQuickie/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription.
    Learn about Intel's and AMD's contrasting approaches to building CPUs.
    Leave a reply with your requests for future episodes, or tweet them here: / jmart604
    ► GET MERCH: lttstore.com
    ► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/tqsponsors
    ► PODCAST GEAR: lmg.gg/podcastgear
    ► SUPPORT US ON FLOATPLANE: www.floatplane.com/
    FOLLOW US ELSEWHERE
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    TikTok: / linustech
    Twitch: / linustech
  • Science & Technology

Comments • 1.3K

  • @arhyvrapisa
    @arhyvrapisa 2 years ago +2119

    A version of this explaining the difference between Nvidia, AMD, and Intel's GPU architecture would be amazing!

    • @rk3senna61
      @rk3senna61 2 years ago +1

      That's a good idea

    • @literallysteel
      @literallysteel 2 years ago +45

      Intel has GPUs?
      Edit: no way, Intel has GPUs

    • @rk3senna61
      @rk3senna61 2 years ago +22

      @@literallysteel Yes, they do now

    • @this_is_japes7409
      @this_is_japes7409 2 years ago +46

      @@rk3senna61 They've had GPUs for a long time; they just typically weren't discrete, only integrated. They're starting to do discrete now, but GPUs in and of themselves are not new to them.

    • @rk3senna61
      @rk3senna61 2 years ago +1

      @@tuxshake I still prefer Intel

  • @irwainnornossa4605
    @irwainnornossa4605 2 years ago +2171

    Please do more videos like this, focused on the chips, the technologies behind them, and so on. It's awesome content.

    • @forest7424
      @forest7424 2 years ago +15

      Yes, it's nice to get a glimpse into how the hell this stuff works

    • @xADDxDaDealer
      @xADDxDaDealer 2 years ago +18

      And have Anthony host them

    • @dragospahontu
      @dragospahontu 2 years ago +6

      He needs to do Mediatek vs Qualcomm

    • @nitePhyyre
      @nitePhyyre 2 years ago +2

      This. But I wish they were TechLongies. It ran through the material too fast to really comprehend and didn't go into deep detail.

    • @dimasfazlur5926
      @dimasfazlur5926 2 years ago +2

      @@xADDxDaDealer dis iz de wey

  • @EweChewBrrr01
    @EweChewBrrr01 2 years ago +442

    AMD: "We're introducing chip stacking"
    Pringles: 😎

  • @0hMyGandhi
    @0hMyGandhi 2 years ago +1869

    I know he gets mentioned rather frequently, but Anthony is a godsend for this channel. His voice, his mannerisms, his general disposition are just perfect, especially in videos like this.

    • @CyrilJap
      @CyrilJap 2 years ago +61

      Anthony is great at explaining things. Love him.

    • @Papa-Murphy
      @Papa-Murphy 2 years ago +14

      I like the topics he chooses, but Riley, James, Linus, etc are still my preferred hosts.

    • @patrickgronemeyer3375
      @patrickgronemeyer3375 2 years ago +21

      Anthony is the best. And he has a badass track suit.

    • @thebasketballhistorian3291
      @thebasketballhistorian3291 2 years ago +12

      @@Papa-Murphy Agree.
      Anthony is good at explaining things and has a kind of "normal", relatable manner about him. But I personally prefer the other hosts for their energy, rapid delivery, and comedic timing.

    • @sounddrill
      @sounddrill 2 years ago +15

      Linus actually dislikes (not really dislikes, more like avoids) working with Anthony, because he can (in Linus's own words) get a bit too technical. I do enjoy Anthony's content a lot though.

  • @Aharpoon24
    @Aharpoon24 2 years ago +248

    Anthony's tone, inflection, and personability on screen, plus how he arranges his content, make the information he is presenting easy to digest and don't leave you feeling lost. I feel like Anthony is writing the Electronics for Dummies LTT version while making you feel smart just listening to him. He is a great and invaluable asset to the team.

  • @DanRTS
    @DanRTS 2 years ago +140

    I really enjoyed the detail in this. Interesting to deep dive into how the tech actually works. Thanks!

    • @kuttispielt7801
      @kuttispielt7801 2 years ago +1

      I wouldn't call a five-minute video on something as complex as CPUs a deep dive.

  • @IMPureRay
    @IMPureRay 2 years ago +7

    This is a really good video. Just the right amount of depth, pacing and audio/video content. Anthony is very articulate and covers the stuff I care about. Thank you!

  • @davidalangay1186
    @davidalangay1186 2 years ago +68

    Thank you for taking the time to explain the differences between Intel & AMD, especially since the market share between the two is now neck and neck and not the blowout Intel once had.
    I guess what it boils down to, for someone who does a lot of programming and some casual gaming on older games like EVE Online and WoW, is that the differences really don't matter. It's like trying to compare a detached house with a semi-detached house. The architecture might be different, but the house is still your own.

  • @Alvin853
    @Alvin853 2 years ago +160

    The terms "Zen 3" and "Zen 2" are misused here to explain CCD, what you actually mean is "Vermeer" and "Matisse"... there are other Zen 3 and Zen 2 CPUs like Cezanne and Renoir that are monolithic and don't use CCDs.

    • @BeepBeep2_
      @BeepBeep2_ 2 years ago +21

      This, and AMD seems to have dropped the "CCX" terminology for Vermeer / Milan, because these chips no longer have a crossbar connecting 4 cores; instead, all 8 are connected via a ring bus.

    • @saricubra2867
      @saricubra2867 2 years ago +5

      5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count and don't have integrated graphics lmao.
      I'm on team monolithic.

    • @mingyi456
      @mingyi456 2 years ago +15

      @@saricubra2867 Because it uses Zen 3 cores. The 5700G actually loses to the 5800X by quite a huge margin, so much so that it is much closer to a 3700X in multicore performance due to the lack of cache.

    • @saricubra2867
      @saricubra2867 2 years ago +1

      @@mingyi456 That is not true in terms of single core speed.

    • @mingyi456
      @mingyi456 2 years ago +17

      @@saricubra2867 Yes, the 5700g beats the 3600 and 3700x in single core, but that has nothing to do with its packaging. Its monolithic form factor lets it down in multicore performance, because it is restricted in cache capacity.
      Your original comment was "5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count". Why mention the core count if you were comparing single core performance? It is really an unfair statement when you are comparing zen 3 monolithic to zen 2 chiplets, then concluding that chiplets are worse because faster zen 3 cores on a monolithic package are faster in single core compared to older, slower cores on a chiplet design. You should be comparing either the 4700g and 3700x, or the 5700g and 5800x, not the 5700g and 3700x, if you want to argue about the packaging technique for the cores.

  • @bonnome2
    @bonnome2 2 years ago +427

    Stacking chips is actually used a lot in mobile phones. Even the raspberry pi zero has stacked chips

    • @hjups
      @hjups 2 years ago +83

      You're describing a different technology called package-on-package. Chip stacking is 3D integration using through-silicon vias, and it's significantly more complicated and expensive to do.

    • @bonnome2
      @bonnome2 2 years ago +24

      @@hjups yeah you are right I confused the two. But the raspberry pi zero 2 does have true chip stacking with wire bonding. Just take a look at the x-rays!

    • @hjups
      @hjups 2 years ago +26

      @@bonnome2 I wouldn't consider wire-bonding to be stacking. It's more like one of those weird package-in-package things. An evolution of multiple dies on a fiber composite like what Microchip did with some of their SAMD MPUs.
      Chip stacking would imply that wire bonds are not used.

    • @asterphoenix3074
      @asterphoenix3074 2 years ago +3

      @@hjups is package on package less efficient or something?

    • @hjups
      @hjups 2 years ago +11

      @@asterphoenix3074 Not necessarily. It has to do with the interconnect size. Package on package can work for a LPDDR4 chip for example (~60 pins), whereas 3D stacking can be full-scale (~10,000 pins). Also, you get higher parasitics with PoP and still need to translate the signal to something that can go external (that's fine for LPDDR4 though, because it's using the LPDDR4 standard). 3D stacking on the other hand typically just has re-drivers (buffers) to go between dies.
      So I guess tl;dr. If you want to stack something that you could otherwise put on the motherboard, then PoP is fine. If you need something higher performance, you want 3D stacking.
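The pin-count argument in this thread can be made concrete with a rough back-of-the-envelope comparison. All figures below are illustrative assumptions (the ~60 and ~10,000 pin counts come from the comment above; the per-pin data rates are made up for the sketch), not vendor specifications:

```python
# Rough aggregate-bandwidth comparison of package-on-package (PoP)
# vs. 3D stacking with through-silicon vias (TSVs).
# All numbers are illustrative assumptions, not vendor specs.

def aggregate_bandwidth_gbps(pins: int, per_pin_gbps: float) -> float:
    """Total raw bandwidth if every pin carries per_pin_gbps of data."""
    return pins * per_pin_gbps

# PoP: ~60 signal pins, each a fast external-style signal (assume ~3.2 Gb/s)
pop = aggregate_bandwidth_gbps(pins=60, per_pin_gbps=3.2)

# 3D stack: ~10,000 TSVs, each a slower on-die-style signal (assume ~1 Gb/s)
stacked = aggregate_bandwidth_gbps(pins=10_000, per_pin_gbps=1.0)

print(f"PoP:      {pop:,.0f} Gb/s")      # 192 Gb/s
print(f"3D stack: {stacked:,.0f} Gb/s")  # 10,000 Gb/s
```

Even with each TSV running slower than an external pin, the sheer interconnect count is why 3D stacking wins for high-performance links.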

  • @quackmandoo
    @quackmandoo 2 years ago +19

    This was actually quite informative. I was expecting more benchmarking and specific tasking head to head, but I definitely learned something new and useful.
    Always good to see Anthony showing out, good stuff, great channel and as always, I look forward to more!

  • @KuramaKitsune1
    @KuramaKitsune1 2 years ago +46

    One's innovative,
    And the other ...
    Is also innovative

  • @THE.MICHAEL.ANGELO
    @THE.MICHAEL.ANGELO 2 years ago +5

    WOW! Another AWESOME video!! What would be so cool, awesome and appreciated is if you guys did a video on which one (Intel vs. AMD) is good for Cybersecurity, Coding, Programming and the like, although it would be subjective it would also be great to be able to pick your minds about it all. Somewhat a "Knowing What We Know," Series. There are a whole lot of aspiring Cybersecurity/ Coding enthusiasts [such as myself] who are coming into it all blind and even caught up in picking between which one? CES 2022 had us confused even more with the plethora of awesomeness in the CPUs but now...which one would be good for what? Thanks!!!!

  • @stevenaninon5653
    @stevenaninon5653 2 years ago

    Easy to digest. Short. Very informative. Great delivery. Great job. Thank you.

  • @ChuckNorris-lf6vo
    @ChuckNorris-lf6vo 2 years ago +1

    Good job. More videos to help us choose processors for specific workloads. Thank you.

  • @iGrave
    @iGrave 2 years ago +4

    Ahh yes, thanks for making the entirely more relatable link to modern basketball court construction, certainly something I'm far more in tune with :)

  • @jdgrupp
    @jdgrupp 2 years ago +37

    Very good video. I enjoyed it because it discussed the underlying tech of something we use, instead of a million-dollar server that I'll never use or need in my life.

  • @KuruGDI
    @KuruGDI 2 years ago +1

    Wonderful explanation with a wonderful host!
    For some reason I can follow Anthony better than other hosts with these complex topics.

  • @punklejunk
    @punklejunk 1 year ago +2

    This video was brilliant, instructive and accessible. Anthony is a treasure. Keep 'em coming! We love this stuff.

  • @scorcher64
    @scorcher64 2 years ago +44

    I'd love to see a video talking about the differences in instruction sets between CPUs - x86/PowerPC/ARM, etc...

  • @jacksterstream
    @jacksterstream 2 years ago +34

    Linus, give the man his own show already!

    • @Finkelfunk
      @Finkelfunk 2 years ago +6

      Sorry Linus, this is Anthony's Tech Tips now. ATT.

    • @TH3C001
      @TH3C001 2 years ago

      @Finkel - Funk that’s honestly what I was hoping to see in their April Fools video, where Linus is replaced by Anthony and gradually loses everything before waking up at the end of the video revealing it was all a nightmare of his lol. Maybe next year.

    • @colorsafebleach5381
      @colorsafebleach5381 2 years ago

      Lol, he can make his own channel whenever he wants.

    • @ArmanRafique
      @ArmanRafique 2 years ago +1

      @@TH3C001 I hope they see this for next year.

    • @martiananomaly
      @martiananomaly 2 months ago

      "man" lol

  • @montehollandsworth5052
    @montehollandsworth5052 8 months ago

    Keep it up, my friend. Someone has to have some sort of understanding of complex situations that are misunderstood... thank you for helping with this

  • @justaskin8523
    @justaskin8523 1 year ago +2

    Anthony's videos are informative AND entertaining. Well done sir, well done!

  • @t0uchme343
    @t0uchme343 2 years ago +16

    When I put together my PC, I went team red simply because I intended to upgrade later and I knew AMD CPUs have a habit of being backwards compatible with older mobo chipsets. I still haven't upgraded though... (still rocking a 2400G)
    I'd like to say with this edit that I went to a 3600 and it's amazing, but I've hit my limit; I need to get a new motherboard if I ever upgrade further.

  • @wirthiwirth7166
    @wirthiwirth7166 2 years ago +7

    Another great Anthony video. Personally, I would love it if he were allowed to make them even more technical, but I do understand the reasoning of LMG wishing to appeal to a wider audience.

  • @WarriorsPhoto
    @WarriorsPhoto 2 years ago +1

    Good bit of information. I am glad you shared this information with us. Now to get some Pringles.

  • @beachsandinspector
    @beachsandinspector 2 years ago

    Thank you for your description of the differences; you made it easy to follow.

  • @_TeXoN_
    @_TeXoN_ 2 years ago +117

    Smaller chiplets are actually due to EUV lithography.
    Because they have to use mirrors instead of lenses, the area of the chip is quite limited.

    • @mastershooter64
      @mastershooter64 2 years ago +2

      I wish I could get my hands on some EUV lenses lol, I wanna build an EUV microscope

    • @_TeXoN_
      @_TeXoN_ 2 years ago +12

      @@mastershooter64 You will probably get a Nobel Prize if you manage to make EUV work with lenses.

    • @hjups
      @hjups 2 years ago +2

      That's incorrect. 7nm EUV (as well as 5, 4, and 2 nm) can still do full wafer sized chips (i.e. one chip per wafer). The lithography constraint is that you need to expose the wafer in many small intervals. If what you said was true, then Nvidia and Intel would be unable to manufacture their monolithic chips, and neither could AMD manufacture the PS5 / Xbox X/S, both of which are also monolithic.

    • @davidgunther8428
      @davidgunther8428 2 years ago +2

      The size limit is still around 800mm² (or 400mm² for high NA), much larger than the compute chiplets AMD has been making (

    • @davidgunther8428
      @davidgunther8428 2 years ago

      @master shooter64 everything absorbs EUV, so it would be less useful than electron microscopes, and lower resolution.

  • @peterwroberts
    @peterwroberts 2 years ago +14

    This was very interesting Anthony and helped clear up a number of things I wasn't sure about 👍

  • @onlyeyeno
    @onlyeyeno 2 years ago

    @Techquickie
    I really like this video, both the "general type" as well as this one in particular. However, I would love it if you somehow put in a "timeline perspective", preferably some "definitive references", e.g. by mentioning "key" model/generation names and/or their dates/periods.
    That way I believe your videos would become more "valuable", making them useful and interesting both for people looking for a "current TechQuickie" and as "look backs", giving a better understanding of the ever-developing nature of "tech".
    I think this ought to be possible while still keeping the great "Quickie" format.
    Best regards

  • @chrisguli2865
    @chrisguli2865 2 years ago +8

    Nice presentation and explanation of Intel vs AMD tech. It will be hard to imagine what chip design will be like in 20-50 years.

  • @MrRom92DAW
    @MrRom92DAW 2 years ago +8

    There were a lot of differences between AMD and Intel that I really wasn’t familiar with when doing my first build. Like, I saw a lot of things mentioning XMP profiles for RAM, and then I spent god knows how long trying to figure out how to enable XMP, because that’s what you’re supposed to do… nobody ever said anything about DOCP. I wouldn’t even know it existed!

    • @Juggernath
      @Juggernath 2 years ago +3

      Yup. Always had Intel till the 3600 launched and actually had to google AMD XMP to figure out it was called DOCP, though the manual probably would have mentioned that had I read it. Still can't wrap my head around overclocking.

  • @hjups
    @hjups 2 years ago +52

    I know this title seems catchy, but it's an oversimplification of a rather trivial difference...
    The big difference between AMD and Intel performance comes down to the CCX and internal core architecture, not the packaging technology... The packaging technology has more of an impact on manufacturing costs and yields than on performance.
    You could have spent time talking about how the cache sizes and philosophies are different, how the inter-core communication strategies are different, how the branch predictors and target caches are different, how the instruction length decoding is different, how the instruction decoders themselves are different, the differences in the scheduling structure, the differences in the register files and re-order buffer, etc. But instead... you discuss the manufacturing difference and still don't get that quite right...
    So a few clarifications.
    1) The latency in Infinity Fabric is largely due to the off-die communication. The signals within the die are far weaker and have to be translated into something that can leave the die, and then translated into something that can work in the next die. It's sort of like fiber-optic Ethernet: you have to translate the electrical signal into light, travel along the fiber, and then translate the light back into an electrical signal. However, the latency of Infinity Fabric for die-die communication is on par with the far-ring communication on Intel CPUs, so it's not the major contributing factor for performance.
    2) Infinity Fabric is not serial, at least from what I could find. It utilizes SERDES for fewer wires, but it is still able to transfer 32 bits at the 1.6-1.8 GHz interconnect speed. That does not make it serial - it's effectively identical to a 32-bit bus. It should be noted that Infinity Fabric is a NoC, just like the ring bus on Intel chips, where the flits are 32-bit. Granted, the Intel ring-bus NoC is likely wider (possibly 128 bits). I don't believe this is public knowledge, so I'm not sure about the exact parameters.
    3) The video said that core-core communication is slower across Infinity Fabric; however, it should be noted that the majority of the communication is not core-core. Instead, it's cache-cache communication (i.e. maintaining memory consistency and executing atomic operations). Core-core communication would imply mailboxes, IRQs, or some sort of MSR-based messaging.
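Point 2 above implies a concrete peak transfer rate for a 32-bit-wide link at fabric clock. A quick sketch using only the figures from the comment (actual Infinity Fabric widths and transfers per clock vary by generation, so treat these as illustrative):

```python
# Peak bandwidth of a parallel link: width (bits) x clock, converted to bytes.
# Width and clock figures are taken from the comment; illustrative only.

def link_bytes_per_sec(width_bits: int, clock_hz: float) -> float:
    """Raw peak bytes/sec for a link transferring width_bits per clock."""
    return width_bits / 8 * clock_hz

for fclk_ghz in (1.6, 1.8):
    gb_s = link_bytes_per_sec(32, fclk_ghz * 1e9) / 1e9
    print(f"32-bit link @ {fclk_ghz} GHz: {gb_s:.1f} GB/s")
```

So a 32-bit link at 1.6-1.8 GHz moves roughly 6.4-7.2 GB/s per direction, which is why it behaves like a bus rather than a serial lane.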

    • @rcavicchijr
      @rcavicchijr 2 years ago +1

      Yeah!

    • @richardsalazar4817
      @richardsalazar4817 2 years ago +1

      Is that why AMD is implementing 3D V-Cache?

    • @hjups
      @hjups 2 years ago +14

      @@richardsalazar4817 No, the 3d-vcache is just to have a bunch of cache. To do any sort of computation, data needs to be moved from memory into the CPU. If it's in DRAM, then that takes a relatively long amount of time (1000s of CPU cycles), whereas if it's in SRAM (cache), that can be as low as 3 cycles for L1, or 50 cycles for the L3. This is largely due to the inherent properties of the memory technology itself (DRAM vs SRAM). So ideally, you want most of your data in SRAM. But SRAM also has the problem that it's not very dense, making it expensive in large quantities. However, if instead of making the CPU die bigger to fit more SRAM, you can put it in another die sitting atop the CPU die (the 3d-vcache), then you don't need a very big die for the SRAM. There are still limits though, which is why vcache isn't GBs in size.
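The latency figures in that reply (L1 ~3 cycles, L3 ~50 cycles, DRAM in the thousands) explain why more SRAM pays off: a bigger L3 raises the hit rate and cuts the average memory access time (AMAT). A minimal sketch, with made-up hit rates for illustration:

```python
# Average memory access time for a simplified L1 -> L3 -> DRAM hierarchy,
# using the rough cycle counts from the comment above. Hit rates are
# assumed illustrative values, not measurements.

def amat(l1_hit, l3_hit, l1_cyc=3, l3_cyc=50, dram_cyc=2000):
    """Expected cycles per memory access."""
    miss_l1 = 1 - l1_hit
    miss_l3 = 1 - l3_hit
    return l1_cyc + miss_l1 * (l3_cyc + miss_l3 * dram_cyc)

small_l3 = amat(l1_hit=0.95, l3_hit=0.80)  # smaller L3, more DRAM trips
big_l3   = amat(l1_hit=0.95, l3_hit=0.95)  # bigger (e.g. stacked) L3

print(f"small L3: {small_l3:.1f} cycles/access")  # 25.5
print(f"big L3:   {big_l3:.1f} cycles/access")    # 10.5
```

Raising the L3 hit rate from 80% to 95% more than halves the average access cost in this toy model, which is the intuition behind stacking extra cache.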

    • @gabadu529
      @gabadu529 2 years ago +5

      @@hjups who are you?

    • @hjups
      @hjups 2 years ago +20

      ​@@gabadu529 A computer architecture researcher, who doesn't work for Intel or AMD.

  • @johnniejohnson4096
    @johnniejohnson4096 2 years ago

    Your explanations of tech news are given in a way that takes away the intimidation a person may feel when trying to understand the information. Thank you!

  • @srikanthramanan
    @srikanthramanan 2 years ago +19

    Current Ryzen & Epyc chiplets do not use a silicon interposer. They use traces in the package substrate to connect the chiplets. However AMD already has an answer to Intel EMIB by using Elevated Fanout Bridge (EFB) from TSMC in their Instinct MI200.

    • @niks0987
      @niks0987 2 years ago +1

      It's interesting to know what Apple uses in their UltraFusion, I mean whether that is a serial interconnect like AMD's or parallel like Intel's.

    • @srikanthramanan
      @srikanthramanan 2 years ago +1

      @@niks0987 Apple M1 Ultra uses TSMC InFO_LI (Parallel) as confirmed by TSMC. Check the article published in Tom's Hardware on 27-Apr-2022. This is similar to what AMD uses in its Instinct MI200.

    • @niks0987
      @niks0987 2 years ago

      @@srikanthramanan Thanks, Apple indeed means serious business! Great info.

  • @iggysixx
    @iggysixx 2 years ago +116

    Anthony, your presence here is great!
    It looks WAY more natural when you're not trying to hide the 'clicker' thingie :)
    If anything, this fits YOU very well, since YOU are the one who shows us how things work IN DEPTH.
    So it fits 'conceptually' too.
    I approve wholeheartedly.
    We all know 'how the pie is made' by now; so much 'behind the scenes' information about LMG;
    ...there's no need to pretend you're on network television or something :)

    • @TheWayBesst
      @TheWayBesst 2 years ago +3

      I don’t like seeing Anthony in videos. I usually go out of my way to avoid clicking on any video with him in the thumbnail

    • @EliteNK
      @EliteNK 2 years ago +3

      @@TheWayBesst Care to elaborate why?

    • @iggysixx
      @iggysixx 2 years ago +12

      @@TheWayBesst Yet here you are, commenting on a video with Anthony in the thumbnail.
      It seems 'going out of your way to avoid anything with Anthony in the thumbnail' does not include 'NOT CLICKING on anything with Anthony in the thumbnail'.
      Lightly stated; there are some flaws in your methodology.
      More firmly; do something positive in your life - something that you truly love - that drains the energy and need from you to want to be negative towards others.
      Anthony makes complicated topics feel understandable to regular people,
      and is able to make 'us regular folk' feel excited about things we had no idea even existed 2 seconds ago
      That is an exceptional skill.
      -
      My question to you is;
      WHY do you waste your time commenting negative shit;
      especially if you didn't even feel like watching this video "because Anthony's in the thumbnail"?
      -
      There's enough negativity in this world.
      Whenever you want to feel better about yourself by dragging others down, just because your own life isn't working out like you pictured...
      I don't need to hear/read your '2 cents'.
      -
      ... And if that last part is the case; happy to talk sometime, or maybe go see a psychologist (it can help out a lot - trust me on that one).
      You're not alone in your misery; there's better times to come, even if you can't picture them right now.
      I know how tough shit can get. It gets better. Ain't no shame to ask for help along the way - that can save you a couple years (again; trust me. I know)
      Anyways; no more negativity towards people on the internet, please.
      Talk to people about how you feel instead. It's scary as hell at first. You'll get used to it.
      And you might find out who your best friends truly are (they might not be the ones you think of first)
      One love, yo

    • @richardeadon6396
      @richardeadon6396 2 years ago +4

      @@TheWayBesst Opposite of the rest of us then

    • @Tommy50377
      @Tommy50377 2 years ago +6

      @@TheWayBesst Before anyone else responds to this, please remember: Do not feed the trolls.

  • @charliemaybe
    @charliemaybe 2 years ago +39

    I want to see a technological overview on the history of cpu coolers

  • @RiskFlair
    @RiskFlair 1 year ago

    Killer video. Short and to the point and covered everything. 👍good stuff

  • @Skeezy93
    @Skeezy93 2 years ago

    Thanks for the information! Looking sharp today Anthony!

  • @andrewd3899
    @andrewd3899 2 years ago +18

    Would've been nice to mention that AMD still uses monolithic designs for its laptop chips and APUs. Would have been an interesting aside about the space disadvantages of chiplets. Great video though!

  • @eugkra33
    @eugkra33 2 years ago +13

    0:55 that seems labelled wrong. 5600 is Zen3.

    • @Alirezarz62
      @Alirezarz62 2 years ago

      There is no 5600, only the 5600X, and yes, they meant the 3600.

    • @countbaker5595
      @countbaker5595 2 years ago

      @@Alirezarz62 There is a 5600, as of yesterday

  • @jgillette98
    @jgillette98 2 years ago

    You've got a calm, soothing voice that's great to listen to, Anthony!

  • @shivam5878
    @shivam5878 2 years ago

    Anthony is my favorite in the LTT group because he knows all the technicalities and also explains them in simple terms.

  • @lipsucant
    @lipsucant 2 years ago +6

    As a newish gaming PC user, something that has made me wonder is whether an AMD GPU works more efficiently when paired with an AMD CPU, or if it matters at all which brand of processor you pair your GPU with. This would be a useful video topic for a lot of people, I believe.

  • @richardrees5256
    @richardrees5256 2 years ago +4

    I'd love to see a video on whether it's possible to add your own CCD, if you could get the parts, to just add more cores to your existing CPU using a CPU with an empty CCD section. You might want a microscope for that one, and I doubt you could ever do it at home, but it would be interesting to see if it's possible.

    • @gamagama69
      @gamagama69 2 years ago

      I mean you probably could, but there would be tons of issues.
      The chip would not be supported by any motherboard and would need a custom BIOS.
      You'd probably have differences in the chips that ones produced together would not have.
      It would be insanely easy to mess up.
      It might be fused off, which would completely negate doing anything.
      I'm pretty sure people have added more VRAM to GPUs and it has worked, but it was very unstable.

    • @MarcABrown-tt1fp
      @MarcABrown-tt1fp 2 years ago +1

      @@gamagama69 Seems that if the chip can use the signals used to identify 3900X or 3950X silicon, then maybe you could use existing in-BIOS signatures for existing Ryzen chips to make a 3800X into a 3950X, but that would be extremely difficult without nanometer-scale precision tools.

    • @wowza-
      @wowza- 2 years ago +2

      It's a lot more complex than just sticking in another CCD, and not something you can DIY unless you were to buy a personal CPU fabrication plant.

    • @xfy123
      @xfy123 1 year ago

      It's pretty much impossible to do by yourself; even if you could afford the needed tooling, you aren't getting the microcode onto the CPU.

  • @itchy9766
    @itchy9766 2 years ago

    Anthony, u are my fav tech youtuber to watch; I can tell you know your stuff and you come across as so welcoming.

  • @coverfrequency2305
    @coverfrequency2305 2 years ago

    Way more informative than Google searches. I'm upgrading my laptop eventually and I have been fighting to find current specs and upcoming technology improvement predictions.

  • @winstonllamas5163
    @winstonllamas5163 2 years ago +5

    Anthony is just someone who can probably explain almost anything you need to understand - maybe, he should narrate that "easy" quantum mechanics book by Hawking - "The Theory of Everything."

  • @seunfunmiewedairo4161
    @seunfunmiewedairo4161 2 years ago +51

    Anthony is my favorite person, nice to see him in a video

    • @DefeaterMann
      @DefeaterMann 2 years ago +1

      READ MY NAME!!!!!
      !

    • @armsofzeus
      @armsofzeus 2 years ago

      Agreed. I love the way he explains stuff. He does it so clearly, but for some reason, I can't process or retain the videos he's in.

  • @matthewhollick5397
    @matthewhollick5397 2 years ago

    Man I loved that Pringles joke way too much. Great simplicity in the explanation!

  • @ethelryan257
    @ethelryan257 1 year ago +1

    This man always paces his presentations so you can follow them. I really appreciate that - not too slow, not too fast. Some of the other hosts in this group have zero sense of how to structure their presentations.

    • @Nobody-zq8bl
      @Nobody-zq8bl 1 month ago

      No, he thinks he's a woman now. 🙄

  • @Caabooose
    @Caabooose 2 years ago +2

    I've never really cared about either, lol. I'd just try to build comparable systems and then decide based on overall price, taking into consideration reviews of all the parts around them. I was working on it a bit last night as I'm considering upgrading, and noticed that the i7-12700K outperforms the Ryzen 9 5900X by a decent margin and is cheaper, which was interesting to me, as a step up on either side was a huge price jump for not a big jump in power.

  • @SubtractZero
    @SubtractZero 2 years ago +6

    If modern Intel motherboards were competitively priced, I'd consider going that route.
    But given LGA 1700 boards are almost double that of AM4 (especially ITX stuff), it's just not worth the extra 2% gaming performance.

    • @tortugatech
      @tortugatech 2 years ago +3

      You're comparing prices for 5-year-old motherboards with brand spanking new cutting-edge boards with PCIe 5.0 🙄

    • @lpcamargo
      @lpcamargo 2 years ago +3

      @@tortugatech But until PCIe 5 starts making a difference for most people, he's got a point. Go with the proven platform that performs almost as well and is more efficient to boot.

    • @aboveaveragebayleaf9216
      @aboveaveragebayleaf9216 2 years ago

      There is also a point to be made about the backwards/forwards compatibility with AMD. You are more likely to stick with a motherboard through upgrades.

    • @saricubra2867
      @saricubra2867 2 years ago

      "extra 2% gaming performance"
      For dated and overpriced GPUs

    • @aboveaveragebayleaf9216
      @aboveaveragebayleaf9216 2 years ago +1

      @@saricubra2867 this is true as well, but it really depends on the games you play. Some older games take almost no gpu power but need good single thread cpu. Where newer games might be more gpu dependent.

  • @darksprbike
    @darksprbike 2 years ago +1

    Linus recently said Anthony gets super technical. Way to play to his strengths! Great content!

  • @Brokenhill42
    @Brokenhill42 2 years ago

    I thought this was one of the better videos in terms of usage of pictures...so thanks for that!

  • @MW3GlitchSA
    @MW3GlitchSA 2 years ago +9

    Would have loved to see some background and why Intel was better for so long

    • @HyperSnypr
      @HyperSnypr 2 years ago

      In the most simplistic terms, Intel had the bank to crush fair competition, and they had AMD licked on single-core performance for ages. It is only within the last decade that multicore performance really started to become more prominent in the mainstream. AMD went back to the drawing board for their chiplet design and continued multicore performance improvements, which has made them as competitive, and more so, in recent years. There are tons more reasons, but those two stand out most to me.

  • @dudeguy11333
    @dudeguy11333 Před 2 lety +3

    Can you do a video on x86 vs Arm?

  • @trevorelvis1355
    @trevorelvis1355 Před 2 lety +2

    I'm reading about operating systems and I just discovered that the big difference lies in the architecture... both perform the same tasks quite differently, but the results of a top-tier AMD or Intel CPU are hard for the average user to even notice

  • @sliqueh
    @sliqueh Před 2 lety +2

    Hi. Great video from Techquickie. I hope Techquickie will make a comparison video on which current CPUs are better for Linux: AMD or Intel. Thanks

  • @lauchkillah
    @lauchkillah Před 2 lety +29

    0:46 the 5600X has 6 cores, not 8 (unless you're counting laser-cut ones?). And at 0:56, the 5600 is not based on Zen 2. Did you mean the 3600?

    • @jilherme
      @jilherme Před 2 lety +3

      I was confused when it mentioned 5600 as zen 2.

    • @xorkatoss
      @xorkatoss Před 2 lety +2

      lmao who even made that? they should double check those

    • @William-Morey-Baker
      @William-Morey-Baker Před 2 lety

      they are counting laser-disabled ones... because that's how they are made... it's a six-core part, but it has the entire 8-core chip. In theory, 1 or 2 of those cores didn't meet validation requirements due to defects, so they laser them off and sell it as a 6-core CPU instead. It's the cheapest way to manufacture at scale, at least for now anyway...

    • @holobolo1661
      @holobolo1661 Před 2 lety

      @@William-Morey-Baker I hope they're not disabling perfectly good cores... that's so stupid.

    • @glockmanish
      @glockmanish Před 2 lety +1

      @@holobolo1661 The yield on TSMC N7 by now is so high that you can bet they are crippling a tonne of perfectly good chiplets to fulfill demand for the 5600(X). That is the sole reason why AMD up to now didn't offer a non-X 5600 at reduced prices. They only do now because of actual competition from Intel with parts like the 12400.
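
The binning economics in this thread are easy to put numbers on. A minimal back-of-the-envelope model (the 95% per-core yield figure is an assumption for illustration, not a real TSMC number): treat each of the 8 cores on a die as independently defect-free with some probability, then ask how often a die can ship as a full 8-core part versus a cut-down 6-core one.

```python
from math import comb

def p_at_least(total_cores, needed_cores, p_core_good):
    """Probability that at least `needed_cores` of `total_cores`
    independent cores come out defect-free (binomial tail)."""
    return sum(
        comb(total_cores, k) * p_core_good**k * (1 - p_core_good)**(total_cores - k)
        for k in range(needed_cores, total_cores + 1)
    )

# With a hypothetical 95% per-core yield:
print(p_at_least(8, 8, 0.95))  # ~0.66: only two thirds of dies are fully 8-core
print(p_at_least(8, 6, 0.95))  # ~0.99: almost every die can ship as a 6-core
```

The gap between those two numbers is why selling 6-core parts cut from 8-core dies is attractive; and once yields climb, the only way to keep filling 6-core demand is to fuse off good cores, as noted above.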

  • @alexr1969
    @alexr1969 Před 2 lety +59

    on a lower level, the cores are also structured differently between brands, with Intel favoring a large branch predictor and a much higher transistor count for instructions to push through (beyond the more complex branch predictor). This leads to marginally better single-core performance, higher power draw, and less space on the die for cores (ignoring MOSFET size differences). Because AMD favors less branch prediction and generally fewer transistors in an instruction path, they are generally able to have more cores that run more efficiently, with marginally worse single-core performance due to worse branch prediction. There's a lot more to it, but that has been a big difference between the 2 brands since AMD started making their own x86 chips

    • @Aquabyte
      @Aquabyte Před 2 lety +5

      Interesting!!

    • @petrkdn8224
      @petrkdn8224 Před 2 lety +16

      yep, this is why in games (which mostly require high single core performance) intel beats AMD, while workload processes (such as decompression and compression, physics simulations) run better on AMD because it is better suited for it than intel..

    • @robb5828
      @robb5828 Před 2 lety +5

      @@petrkdn8224 and also, at the end of the day, both chips can do gaming and workloads :) unless you are obsessed with numbers... for us it doesn't matter what you choose :)

    • @petrkdn8224
      @petrkdn8224 Před 2 lety +3

      @@robb5828 yes of course, both are good.. I have an i3 7100, sure I can't run modern games on high settings, but it still runs everything (except warzone because that shit is unoptimized as fuck)

    • @alexr1969
      @alexr1969 Před 2 lety +2

      @@robb5828 to add to your point, if hardware/software has "solved" your workload already (common example being word processing) any chip will do and many tasks like gaming are more demanding on other systems within a computer/network. So the differences being marginal already have even smaller impacts if at all in the larger picture.
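
The branch-predictor point above can be seen even from high-level code, with one caveat: the classic demonstration is counting elements that pass a threshold over sorted vs. shuffled data. In a compiled language the sorted case is dramatically faster because the branch becomes almost perfectly predictable; in CPython the interpreter overhead hides most of the gap, so the sketch below is illustrative of the setup rather than a precise benchmark.

```python
import random
import timeit

def count_big(data, threshold=128):
    # This `if` is a conditional branch; on real hardware its cost depends
    # on how often the CPU's branch predictor guesses the outcome correctly.
    total = 0
    for x in data:
        if x >= threshold:
            total += 1
    return total

data = [random.randrange(256) for _ in range(100_000)]
t_shuffled = timeit.timeit(lambda: count_big(data), number=20)
data.sort()  # sorted input makes the branch outcome almost perfectly predictable
t_sorted = timeit.timeit(lambda: count_big(data), number=20)
print(f"shuffled: {t_shuffled:.3f}s  sorted: {t_sorted:.3f}s")
```

Run the same loop in C and the sorted version is typically several times faster; that delta is the branch predictor at work.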

  • @raemondrose3349
    @raemondrose3349 Před rokem

    This was very easy to understand and follow for a beginner, thank you for making this video

  • @istvanlovas2464
    @istvanlovas2464 Před 2 lety +1

    Parallel interconnects have their drawbacks, for instance when there is a curvature in the lanes, so the higher bits have to travel farther. Parallel buses therefore usually had to run at lower frequencies, which made them slower, and they were also more expensive (more lanes -> more design complexity + harder manufacturing + more materials) compared to serial ones. This is why the FSB was replaced by both manufacturers, and there are really successful serial connectors, like USB. And as far as I know, Infinity Fabric can be used to connect distant parts or multiple CPU dies, so it being serial is most likely not a drawback in many of its uses. But over a short distance without bends, like these tiles, Intel's way of "gluing" chiplets/tiles together can be a good option; we will see.
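
The lane-length argument above is easy to quantify. A rough sketch (the ~0.5c signal speed on a PCB is an assumed ballpark; real values depend on the stackup): a 1 cm length mismatch between parallel lanes is negligible at old FSB clocks but eats half of a bit period at modern serial-link rates.

```python
# Signals on a PCB trace travel at very roughly half the speed of light
# (assumed ballpark figure for illustration).
SIGNAL_SPEED_M_PER_S = 1.5e8

def skew_s(length_difference_m):
    """Arrival-time difference between two lanes of different length."""
    return length_difference_m / SIGNAL_SPEED_M_PER_S

def bit_period_s(rate_hz):
    """Duration of one bit/transfer at the given rate."""
    return 1.0 / rate_hz

skew = skew_s(0.01)                # 1 cm mismatch -> ~67 ps of skew
print(skew / bit_period_s(400e6))  # ~0.03 of a 400 MT/s FSB bit period: fine
print(skew / bit_period_s(8e9))    # ~0.53 of an 8 GT/s bit period: fatal
```

That ratio is why wide parallel buses top out at comparatively low clocks (every lane must be length-matched to a fraction of the bit period), while serial links embed the clock in each lane and can run much faster.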

  • @Kermeous
    @Kermeous Před 2 lety +3

    Glad Anthony is getting lots of screen time. He's great

  • @BaghaShams
    @BaghaShams Před 2 lety +14

    I thought this would be about the architecture of the x86 designs they each use, but it turned out to be just about the recent way they're each implementing multicore.

    • @hjups
      @hjups Před 2 lety +1

      The x86 architecture difference is more interesting, in my opinion. They're vastly different strategies, which were last unified with the AMD K6.

    • @scheurkanaal
      @scheurkanaal Před 2 lety +1

      @@hjups I'm not sure if the K6 was the last per-core equivalence. The last truly identical cores were the Intel 80486 and the AMD Am486. As for other cores, AMD until the K10 (Phenom) did not fundamentally change the architecture. Bulldozer (FX) was the first major overhaul.
      Intel changed things up a fair bit sooner, with Netburst (Pentium 4). Funnily enough, both Netburst and Bulldozer were ultimately dead ends, worse than their predecessors. Intel brought back the i686 design in the form of first Pentium M and later Core 2. Core 2 competed against K8 and K10, which I think share the same lineage as the first microcoded "inner-RISC" CPUs like the K6 and Pentium Pro. AMD instead started over once again, and that brings us to Zen.
      What I find interesting is that Zen 3/Vermeer and Golden Cove/Alder Lake are very good at trading blows: depending on what you're doing, one can be wildly faster than the other. As far as I can see though, that mostly seems to be a caching matter; a Cezanne chip does not have the same strengths as Vermeer, but does have the same weaknesses.
      I'm also curious how far hybrid architectures are going to go. On mobile, they're a massive success, and Alder Lake has proven them to be very useful on desktop as well.

    • @hjups
      @hjups Před 2 lety +1

      @@scheurkanaal I think you misunderstood my statement. I'm not referring to performance, I'm referring to architecture. Obviously, there are going to be differences that have a substantial effect, even as far as the node in which the processors are fabricated on.
      Yes, the last time they were identical in architecture was the 486, however, the K5/K6 and the Pentium Pro/Pentium 2/Pentium 3, were all quite similar internally. AMD then diverged with the K7/K8+ line, while Intel tried Netburst with the Pentium 4. After the failure of Netburst, Intel returned to the Pentium 3 structure and expanded it into Core 2/Nehalem/etc. and have a similar structure to this day. Similarly, AMD maintains a similar structure to the K10, with families like Bulldozer diverging slightly in how multi-core was implemented with shared resources.
      Also note that AMD since the K5, and Intel since the original Pentium and the Pentium Pro have used a "RISC" micro-operation based architecture. The original Pentium is the odd one out there though, since it was less apparent due to it being an in-order processor while the others have all been out-of-order.
      Hybrid architectures may not really go much further than Alder Lake and Zen 4D. There isn't much room to innovate in the architectural space, where most of the innovation needs to happen at the OS level (how do you schedule the system resources). It's also driven by the software requirements though. Other than that, there may be some innovation in the efficiency cores themselves, to save power even further, but in exchange for lower performance (the wider the gap, the more useful they will be).

    • @scheurkanaal
      @scheurkanaal Před 2 lety

      ​@@hjups I was also talking about architecture :) I was just not under the impression K7 was much different from K6, since it did not seem all that different from what Intel was doing circa Pentium 3 (which is like "a P2 with SSE", and the P2 in turn was just a tweaked Pentium Pro), and the numbers also imply a more incremental improvement (although to be fair, K5 and K6 were quite different).
      That said, I wouldn't be so sure if Zen and K10 are that similar. As far as I know, Zen was (at least in theory) a clean-sheet design, more-or-less.
      I was also referring to micro-operations when I said "inner-RISC". The word "micro-operation" just did not occur to me. Finding something that said whether or not the original Pentium was based on such a design was also quite hard, so I assumed it didn't. It was superscalar, but I think the multi-issue was quite limited in general, which gave me the impression the decoder was like the one on a 486, just wider (for correctly written code).
      I don't know how far efficiency cores will go. Their use comes not from a wider gap, but rather from more efficiency (performance per watt). Saving 40% of power but reducing performance by 50% is not very effective. Also, in desktop machines, die size is a very big consideration, not just power, and little cores are useful here. Keep in mind that the E-cores from Alder Lake are significantly souped up compared to earlier Atom designs. That's important to maximize their performance in highly threaded workloads.
      I think the next thing that should be looked at is memory and interconnect. CPU's are getting faster, and it's becoming harder and harder to keep them properly fed with enough data.

    • @hjups
      @hjups Před 2 lety +2

      @@scheurkanaal Maybe we have different definitions of architecture. SSE wouldn't be included in that discussion at all, since it's just a special function unit added to one of the issue ports, similar to 3DNow! (which came before SSE).
      The K5 and K6 are much more similar than the K6 and K7... The K5 and K6 even use the same micro-op encoding as I understand it. The K7 diverged from simple operations though into more complex unified operations, that's also when AMD split up the integer and floating point paths. The cache structure changed, the whole front end changed, the length decoding scheme changed, etc.
      As for P2 vs Pentium Pro, the number of ports changed, and the front end was improved to include an additional decoder (which has a substantial difference for the front end performance - it negatively impacts it, requiring a new structure). The micro-op encodings may have also changed with the P2 (I believe they still used the Pentium uops in the Pentium Pro which are very similar to the K5 and K6 uops).
      Zen may have been designed from the "ground up", but it still maintains the same structure and design philosophy - that's likely for traditional reasons (they couldn't think outside of the box). Although, it does have some significant benefits in terms of design complexity over what Intel does - especially when dealing with the x87 stack (the reason why the K5 and K6 performed so poorly with x87 ops, and why the K7 did much better).
      Yeah, I knew what you meant by "inner-RISC". I just used more technical terms. The P1 was touted as two 486's bolted together, but that was an overly simplified explanation meant for marketing people who couldn't tell the difference between a vacuum tube and a transistor. In reality, you're correct, the dual issue was very restricted, since the second pipeline really could only do addition and logical ops, as well as FXCH which was more impactful (again for x87). I would guess that most of the performance improvements came from being able to do CMP with a branch, a load/store and a math op, or two load/stores.
      As for specific information about the P1 using uops, you're not going to find that anywhere, because it's not published. But it can be inferred. You would have to look at the instruction latencies, pipeline structure, know that a large portion of the die / effort was spent on "emulating instructions" (via micro-code), and have knowledge of how to build something like the Pentium Pro/2/K6. At that point, you would realize that the P1 essentially had two of what AMD called "long decoders" and one "vector decoder", with which it could either issue two "long" instructions or one "vector" instruction. The long decoders were hard coded though, and unlike the K6/P2, the uops were issued over time rather than area (i.e. the front end could only issue 2 uops per cycle, and many instructions were 3 uops. So if logically they should be A,B,C,D,E,F, the K6 would issue them as [A,B,C,D] then [E,F], but the P1 issues them as [A,C],[B,D],[E,F]).
      Yes, power efficiency is proportional to performance. The wider the gap implies more power efficient. But there's also the notion of making the cores smaller too and throwing more at the problem (making them smaller also improves power efficiency with fewer transistors). If the performance is too high though, there's no reason to have the performance cores, which is what I meant by the wide gap being important.
      Memory and interconnect are an active area of research. One approach is to reduce the movement of data as much as possible, to the extent of performing the computation in RAM itself (called processing in memory). It's a tricky problem though, because you have to tradeoff flexibility with performance and design complexity (which is usually proportional to area and power usage - effectively energy efficiency).

  • @LosTCoz3000
    @LosTCoz3000 Před 2 lety

    That Anthony, is a great man! Thank you for sharing!!!! The world needs more Anthony!!!

  • @BLKBRDSR71
    @BLKBRDSR71 Před 2 lety +7

    I remember when you could swap an intel CPU for an AMD. How the times have changed.

    • @SpinDlsc
      @SpinDlsc Před rokem +2

      Oh Socket 7... those were some good times.

    • @BLKBRDSR71
      @BLKBRDSR71 Před rokem

      @@SpinDlsc

  • @V3ntilator
    @V3ntilator Před 2 lety +72

    I usually bought Intel CPU's most of the time as they were always reliable, but over 1 year ago i went for AMD Ryzen 9 5900X instead. 100% satisfied with that too.

    • @1mol831
      @1mol831 Před 2 lety +7

      New intel cpu somehow happens to be cheaper here so I use it instead.

    • @kenhew4641
      @kenhew4641 Před 2 lety +10

      AMD ones are now as reliable as Intel, but because they are built differently, it affects certain processing tasks. I'm a 3D visualizer and have been using Intel chips for my rendering process; they're also the standard for most render farms. No problems all this while, until I switched to AMD: while the creation process is very much the same, when it comes to rendering AMD computes differently from Intel, hence the render results are different and inconsistent with those rendered on Intel CPUs. So I had to stick with Intel for my work, but for anything else like coding or gaming there's no issue. I believe it would affect physics simulation as well. I guess what I'm saying is that for the average user the way AMD and Intel chips are built differently won't matter, but for calculation-sensitive tasks it does.

    • @h.mandelene3279
      @h.mandelene3279 Před 2 lety +5

      Does AMD still make their chips run hotter than hell? The only one I ever owned fried itself. I have used Intel since the mid '90s.

    • @Anonymous-qb4vc
      @Anonymous-qb4vc Před 2 lety +2

      @@kenhew4641 AMD (AMF/VCE) definitely sucks when it comes to rendering and encoding compared to Nvidia NVENC and Intel QSV (EposVox made a good analysis on this)

    • @SprunkCovers
      @SprunkCovers Před 2 lety

      @@h.mandelene3279 Sometimes; it depends on the setup, but AMD setups are usually hotter and more power-consuming than Intel ones

  • @223raulh
    @223raulh Před 2 lety

    I appreciate the knowledge. I also like your friendly tone, brother.

  • @BWGPEI
    @BWGPEI Před 2 lety

    As always, appreciate your video, and hope for more. Live long and prosper!

  • @LightningLion500
    @LightningLion500 Před 2 lety +3

    Video Suggestion: How are Programming Languages created?

    • @Slada1
      @Slada1 Před 2 lety +1

      It's not that complicated. The C compiler is written in C, the Java compiler is written in Java, the Python compiler is written in Python,...

    • @techheck3358
      @techheck3358 Před 2 lety +1

      @@Slada1 python interpreter is written in c. java compiler is written in java/c/c++ ;)

    • @LightningLion500
      @LightningLion500 Před 2 lety

      @@Slada1 Yea, but how are those compilers created then?
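
To the question above: a language implementation is just an ordinary program that reads source text and either runs it (an interpreter) or translates it (a compiler). The first compiler for a new language is written in some existing language (or assembly); once it works, the compiler can be rewritten in the new language and compiled with the old version, which is the "bootstrapping" behind "the C compiler is written in C". A toy interpreter (for a hypothetical mini-language, illustration only) makes the first half concrete:

```python
def run(source: str) -> int:
    """Interpret a tiny language: integers joined by + and *,
    with * binding tighter than + (standard precedence)."""
    total = 0
    for term in source.split("+"):      # split into +-separated terms
        product = 1
        for factor in term.split("*"):  # each term is a *-separated product
            product *= int(factor)
        total += product
    return total

print(run("2+3*4"))  # 14
```

A compiler has the same front half (read and analyze the source) but emits equivalent code in another language instead of computing the result directly.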

  • @iriviking774
    @iriviking774 Před rokem +1

    love how you ask for a like OR dislike, haha u are so easy to listen to as always! YOU BRING IT BRIGHT AND CLEAR!

  • @LtColVenom
    @LtColVenom Před 2 lety

    Loved the Pringle joke! Thanks for a crystal clear explanation!

  • @redringofdeathgamer
    @redringofdeathgamer Před 2 lety +6

    More Anthony!

  • @seangarrison3515
    @seangarrison3515 Před 2 lety +4

    Video Idea: I just read something about SGX only being on 10 series for playing UHD Blu-rays in 4K. It is my understanding you can forget about 4K UHD on AMD. I'm wanting to build a home-theater PC and would like to know other "gotchas", or is home theater on a PC no longer possible? It is difficult to find a PC that has a 5.25" bay for Blu-ray or DVD playback. Is Blu-ray playable on AMD? I know the Netflix app can stream 5.1, but are there other ways to get surround sound via streaming on a PC other than that? Anyway, it is a topic I wish could be revisited for that use case. Thanks.

  • @gabequezada2066
    @gabequezada2066 Před rokem

    excellent video... Didnt know these subtle differences at all until now.. Thank you

  • @cyberwaste
    @cyberwaste Před 2 lety +4

    I decided to try an AMD machine after being with intel for a few generations. The Infinity Fabric was completely unstable and caused any audio playing to be filled with static and blue screening occasionally. I tried for a week with updates, tweaks, tutorials but couldn't stabilise it. I sold the parts and bought intel parts and had no problems at all. I've been building computers for myself since I was twelve years old (20 years), and that AMD machine was the only time I was forced to give up when presented with an issue. I've bounced back and forth between the two, as well as ATI and nVidia over the decades, but that experience really put me off AMD for the moment.

    • @cuongtang9539
      @cuongtang9539 Před rokem

      Lol, I've been building for 15 years, always went with Intel. This time the hype about Zen 4 was so huge I could not resist trying it. I bought a 7900X. Had stuttering issues because of fTPM, and issues with the memory controller. After two days I returned it and bought a 13600K, which has worked perfectly since. I can't risk my money on AMD anymore. First impression not good.

  • @Yusufyusuf-lh3dw
    @Yusufyusuf-lh3dw Před 2 lety +4

    One correction here: Meteor Lake doesn't use EMIB, it uses Foveros, basically 3D stacking of silicon. But unlike TSMC/AMD 3D V-Cache, Meteor Lake can be overclocked like normal CPUs.

  • @nekomasteryoutube3232
    @nekomasteryoutube3232 Před 2 lety +1

    This move to chiplet designs kinda reminds me of the move to the Slot 1/Slot A CPUs of the 90's, which improved yields by having parts of the CPU separate on a larger circuit board, since at the time Intel was having issues with chip yields.
    Perhaps this will just be a temporary thing, or maybe we'll figure out how to make the interconnects between chiplets faster, with latency comparable to a monolithic chip of today.

  • @LivelysReport
    @LivelysReport Před rokem

    I would say with Intel, since you are using P-cores and E-cores, it would probably be somewhat beneficial to make them separate chiplets, one for the E-cores and one for the P-cores, and then interconnect them, using shared on-board cache that works for both and giving the P-cores more memory priority when it throttles up. Then you can lay out whichever chiplets you want on the die. If you want 8 P-cores and 8 E-cores on the same die, you place both chiplets on the same die with that shared cache. If you want a 16 P-core chiplet and an 8 E-core chiplet on the same die, no problem, just place them on the die. And the shared on-board cache should also be scalable, so if someone wanted more L3 cache, they could go from, say, 30 MB to 50 MB.

  • @fatimamahmoud4261
    @fatimamahmoud4261 Před 2 lety +8

    Can you make a video on the difference between amd and Intel in terms of performance for different uses? Like a quick guide on which to get

  • @johndoh5182
    @johndoh5182 Před 2 lety +3

    With AMD moving to TSMC N5, I felt they should move back to a monolithic design for parts with 8 cores or fewer, and have a snap-together interface, so if you want to add an 8-core chiplet, you can. I think this would be ideal for Ryzen, considering 16-core compute is still PLENTY of compute power. And then when they want to make their move to big-little, they could use the same approach, but do it with N4 or N3, where the density still allows them to use a very small die without taking many losses.
    In other words, the approach to doing something can't be looked at in a bubble. It has to account for the total ecosystem, including the manufacturing process and the node being used. Sure, this next gen for Intel, or actually two generations from now, will use tiles. But what happens when they can finally produce Intel 20A? Are you going to use tiles to create a 12-core part? I mean, a monolithic design on 20A means the chiplets would be TINY. It would seem better to go back to monolithic for many desktop parts, but leave the ability to snap on another chiplet (tile) to add another core complex.
    Now, for WS and server that's a totally different realm, but desktop, for most people, is STILL browsing the web, office apps and media, not editing media. You don't need the cost of these interconnects, unless maybe it's an APU, in which case the APU could be a tile/chiplet. I think this would be the most cost-effective approach and not a waste of die space. I don't think the total package size could shrink much because of the many connections needed between the motherboard and the CPU. But the die could shrink quite a bit.

  • @maximilianok1
    @maximilianok1 Před 2 lety

    0:43 those pictures kinda look like little villages with football fields in the middle from really far away

  • @Sad_King_Billy
    @Sad_King_Billy Před 2 lety +5

    Anthony always leaves me satisfied and smiling

  • @EnsignLovell
    @EnsignLovell Před 2 lety +6

    It still fascinates me: 2 companies, started in the same decade (okay, Intel was named differently back then), still competing against each other to be "top dog". Kind of reminds me of 2 brothers constantly trying to one-up each other.

    • @youtubeshadowbannedme
      @youtubeshadowbannedme Před 2 lety +1

      Same can be said for Microsoft and Apple, with Windows and Mac/iOS

    • @petrkdn8224
      @petrkdn8224 Před 2 lety

      @The Deluxe Gamer they aren't really "competing" as they would be in other countries where there is an actual left/center party rather than the US's 2 right-wing parties.
      If it were really comparable to the US political system, then both Intel and AMD would have to be competing in a single sector of processors, such as workload or gaming, while they do both (and they have different features etc etc)

  • @Mzansi74
    @Mzansi74 Před rokem

    Awesome presentation of a not-so-simple topic!

  • @jawnTem
    @jawnTem Před 2 lety

    I really enjoy your presentations & thank you for presenting complicated technology in a way that even an illiterate junkie like me might understand. Your voice is easy to follow; you don't talk too fast, nor above a person's level.

  • @SireDragonChester
    @SireDragonChester Před 2 lety +14

    Been using Intel since the 8088/86 days. Intel got lazy and sloppy by the time I had built my i7 3770 3.5 GHz. At the end of 2018 I switched to an AMD 2700X and she's been a beast and a great CPU, at least for the games and software I use. Sure, I've used a few AMD Athlons over the years. But at the time Intel was king for so long because of the lack of competition. They got lazy. Thankfully now both AMD and Intel compete with each other. Been happy with AMD and probably won't be buying any Intel CPU for a while.
    Intel GPUs I'll be watching, looking at some point to replace my GTX 1080 Ti. Then again RDNA3 sounds good. So do future Intel GPUs. Time will tell if either of them will be able to compete with RTX 4xxx when they come out.
    Good video.

    • @13thzephyr
      @13thzephyr Před 2 lety +4

      Intel not only got lazy but also has some dodgy corporate stuff that puts more money in their pockets while the consumers are stuck with "you only need 4c/8t". Whenever I can I have switched to AMD both on my main desktop and laptop and given the chance I also recommend AMD everyone else around me. Not to mention as well that AMD has a really good track record so far of supporting their platform for far more generations, I'm talking about AM4.

    • @xruud24
      @xruud24 Před 2 lety +2

      It seems like Rx 7000 will beat Rtx 4000

    • @SireDragonChester
      @SireDragonChester Před 2 lety +2

      @@13thzephyr
      Yeah, I also have been recommending AMD Ryzen after I built my 2700X. Been very happy with it. And they run cooler than Intel CPUs imo. Soon after I built the 2700X, we started hearing about all the CPU vulnerabilities that date back like 10+ years, Spectre/Meltdown and a bunch of others. I also know AMD isn't perfect and has its share of issues too, but I think AMD chips are generally better made and better secured. TSMC is currently the leader; they're way better imo than anything out there. Yeah, I know my Zen+ (2700X) was on GlobalFoundries. Been happy with it. Prices here in Canada for CPUs and any PC hardware are still kinda high or insane, though slowly starting to come down. Some day I'll upgrade to Zen 3, but I'm in no rush.
      Most of my online friends have also moved to AMD Ryzen, and in the small indie dev team I was helping a few years back (I was the ex-mod/server admin/tech help guy), I know one of the head devs (Vipe) has upgraded to a 3900X or 3950X, I forget which, but I know he was very happy with it. Lets him code in UE4 much faster, which lets them beta test more quickly and then deploy patches more easily for their dino UE4 game. :)
      Would only recommend Intel if you're doing specific tasks or apps that run better on Intel. Zen 4 (AM5) sounds very impressive, as does RDNA3. PCs have come a long way from the old 286/386/486 days. Lol

    • @SireDragonChester
      @SireDragonChester Před 2 lety +2

      @@xruud24
      Yeah, hopefully RDNA3 will be as good as the next RTX GPUs. Been happy with the 1080 Ti, but imo Nvidia needs some spanking; they've been at the top too long and are becoming more and more anti-consumer. Would love to see an AMD or Intel GPU take the lead for a few years. But it'll probably never happen cus Nvidia has $$$$$ to keep their shareholders happy.

    • @steel5897
      @steel5897 Před 2 lety +1

      Intel 12th gen is like the holy grail right now, so fucking good for the pricing.
      Thankfully AMD is starting to release budget CPUs again now too.

  • @spider0804
    @spider0804 Před 2 lety +11

    The difference is you are not replacing your motherboard every time with AMD.
    Gotta love spending $200 bucks on a motherboard for a $300 processor.
    AMD BABY.

    • @fahrai4983
      @fahrai4983 Před 2 lety

      That’s not true this generation. The 5000 series is the last supported one for AM4.

    • @spider0804
      @spider0804 Před 2 lety +1

      @@fahrai4983 Yeah, great, then I will have the AM5 board for the next 6-8 years. The point is that a new generation does not mean a new board EVERY SINGLE TIME like Intel does purposefully. There is zero reason for it. "Oh, we added a pin so it's 1151 pins instead of 1150 now; that extra pin does nothing, but we changed the pattern just to screw you."
      I understand AMD has to update their socket with new technologies, but we got so many glorious years of AM4, and before that, AM3.

    • @hawkeyeaerialphotography6652
      @hawkeyeaerialphotography6652 Před 2 lety +2

      @@fahrai4983 AM4 has been the latest since 2016, that's a long time. AM5 will probably last around the same amount of time.

  • @jaredc7758
    @jaredc7758 Před 2 lety

    Narrator has great radio voice. Great video, quick and educational. Will subscribe

  • @ZearouAyedea497
    @ZearouAyedea497 Před 2 lety

    After watching Anthony in many LTT vids, I think he'd be a great mentor for a budding PC tech enthusiast or aspiring technician.

  • @thatblackchick2766
    @thatblackchick2766 Před 8 měsíci +3

    No clue what you said, but you said it nicely so i like you

  • @user-jy3zk6op8b
    @user-jy3zk6op8b Před 2 lety +3

    The price.

    • @FlopFan69
      @FlopFan69 Před 2 lety +1

      Not really, AMD went expensive after they came back.

    • @__aceofspades
      @__aceofspades Před 2 lety +1

      Yeah Intel's 12th gen is cheaper and gives better performance than AMD's Ryzen these days.

    • @user-jy3zk6op8b
      @user-jy3zk6op8b Před 2 lety

      I don't like AMD over Intel or the other way around; I simply take the best bang for my buck

  • @T0ffik1
    @T0ffik1 Před 2 lety +1

    Great vid, do that for graphics card makers also :). But it should be mentioned that AMD is already making 3D V-Cache and will probably also expand into 3D connections like Intel plans.

  • @brettweltz8135
    @brettweltz8135 Před 2 lety +1

    To do all of the architectures and GPU styles justice, you could do a four-part video thing explaining each GPU and how each architecture works. What do you guys think?

  • @chrisbiggie3466
    @chrisbiggie3466 Před 2 lety +44

    When you invest, you are buying a day that you don't have to work.
    I pray everyone reading this becomes successful.

    • @franksmart3826
      @franksmart3826 Před 2 lety +1

      You are absolutely right 👍

    • @melissaduch6091
      @melissaduch6091 Před 2 lety +2

      Investing in crypto is very cool, especially with the current rise in the market.

    • @auroraemaxeal2972
      @auroraemaxeal2972 Před 2 lety +1

      I really don't know why people still remain poor out of ignorance.

    • @ronbarkley7963
      @ronbarkley7963 Před 2 lety +1

      It is not all about ignorance, there are lots of unprofessional brokers in the market.

    • @blessingdanilo3829
      @blessingdanilo3829 2 years ago +1

      I will introduce you to my trader Mr Lennart Antero, his methods works like magic and is working for me at the moment.

  • @fatihyener7589
    @fatihyener7589 2 years ago +4

    Intel also has less cache compared to AMD CPUs, which tends to slow down your computer after a while. For example, I had switched from a Ryzen 3700X to an i5-11400 system because I had sold that computer to a friend, and at the time an Intel 11400 system cost much less than Zen 3 systems. And the i5-11400 is supposed to be faster than the 3700X for single-threaded applications, right? Yes, it is faster in games, but after only 9-10 months of usage the web browsing experience and a couple of applications like OBS got significantly slower, compared to 2 years of heavy usage on the Ryzen 3700X. I am now just too lazy to reinstall Windows due to my job taking too much of my time and leaving no room for backing up stuff.

    And for those who might ask: I don't have more programs, I'm not using an antivirus, I still have the same SSDs, I am up to date on drivers, and I don't use browser extensions... And no, the CPU or memory usage isn't high. And I got significantly faster memory on this system with super low timings. And yes, the memory overclock is stable; it has passed MemTest 1500%, Linpack, Time Spy, y-cruncher, all of that.

    So yeah, at least as far as I can tell, 11th gen Intel sucks in that case, which I think is caused by 32 megabytes of L3 cache vs 12 megabytes. Making a YouTube video full screen in Chrome takes a couple of seconds, for example. I mean, like, wtf...

    • @zatchbell366
      @zatchbell366 2 years ago +1

      SSDs slow down over time;
      it's not the CPU

    • @fatihyener7589
      @fatihyener7589 2 years ago

      @@zatchbell366 HWiNFO shows 9 TB of total host writes; it's a Samsung 970 EVO Plus 2TB, and it only has Windows and some programs, with 230 GB used in total, so I don't think that's the case

    • @BigFatCone
      @BigFatCone 2 years ago

      @@zatchbell366 Agreed. My MacBook takes ages to read or write, but when something is running in memory it's as fast as ever.

    • @fatihyener7589
      @fatihyener7589 2 years ago +1

      @@danieljimenez1989 Just reinstalled Windows, all the other programs, and all the Windows updates on the same SSD. Everything is now running flawlessly fast. So apparently it was Windows and software updates bloating the system, which made the CPU or cache no longer able to keep up in some programs. I have got my bookmarks and everything else back in Chrome. And all the "default programs" are still running at startup; I installed the same drivers, as I still had my "install" folder on another drive where I keep all my driver setups. I also have my Steam and games installed as well. So it was not the SSD nor anything else, just stupid Windows bloating things up.

    • @fatihyener7589
      @fatihyener7589 2 years ago +1

      @@danieljimenez1989 Thanks pal

  • @chrissartain4430
    @chrissartain4430 2 years ago

    All his videos are always deep but I understand & learn it all. Thanks !!

  • @adrianzendejas7049
    @adrianzendejas7049 2 years ago

    Couldn't focus on the video because I was wondering where Anthony got that sick jacket he's wearing
    Good stuff guys 🤙🏽

  • @DDRWakaLaka
    @DDRWakaLaka 2 years ago +10

    AM4 also got plenty of support while Intel swaps sockets every other gen. Hopefully AM5 lasts just as long.

    • @BeautifulAngelBlossom
      @BeautifulAngelBlossom 2 years ago +2

      that makes upgrading to a new CPU easy with AMD, without having to buy a new mobo

    • @DDRWakaLaka
      @DDRWakaLaka 2 years ago +2

      @@BeautifulAngelBlossom Yup, you don't get PCIe 4.0 on older boards, but that's not an enormous issue