The Secret to NVIDIA's GPU Success: What Do AMD, Tesla and Cerebras Have to Offer?

  • Date added: 25. 07. 2023
  • Invest in Blue-chip Art by signing up for Masterworks: www.masterworks.art/anastasi
    Purchase shares in great masterpieces from Pablo Picasso, Banksy, Andy Warhol, and more.
    See important Masterworks disclosures: www.masterworks.com/about/dis...
    Mentioned Videos:
    Tesla DOJO Explained: • New Tesla DOJO superco...
    Tesla DOJO Update: • Tesla AI Day 2: DOJO U...
    Graphcore AI Chip: • The World’s First WoW ...
    👉 Support me at Patreon ➜ / anastasiintech
    📩 Sign up for my Deep In Tech Newsletter for free! ➜ anastasiintech.substack.com

Comments • 306

  • @AnastasiInTech  11 months ago +108

    Sign up for Masterworks www.masterworks.art/anastasi and purchase shares in great masterpieces from Pablo Picasso, Banksy, Andy Warhol, and more.

    • @TAREEBITHETERRIBLE  11 months ago +1

      *ur so fine woman*

    • @davidstar2362  11 months ago

      My wife in another life. Detroit, Michigan 48221. Thank you very much. I subscribed, liked, and commented: video paid for.

    • @RampariSah  11 months ago

      You look beautiful in black

    • @user-ww2lc1yo9c  11 months ago

      No new video for so many days will put me into cardiac arrest

  • @KenOtwell  11 months ago +161

    Gaming demand didn't cool so much as gamers are sick of the huge price increases.

    • @dchdch8290  11 months ago +15

      I tend to agree

    • @IakobusAtreides  11 months ago +12

      Exactly

    • @Druac  11 months ago +9

      That isn't going to change any time soon...unfortunately. They are selling GPUs hand over fist right now...regardless of gamers.

    • @Wobbothe3rd  11 months ago +7

      That's total bullshit. 7 million RTX 40-series GPUs were sold in 2023, and the first half of the year is ALWAYS slower than the second half. More than 3% of PC gamers have an RTX 40-series GPU already. Gaming may have cooled relative to the pandemic, but gaming GPUs are still selling fine. The RTX 4060 is CHEAPER than the RTX 3060. Nvidia has REDUCED prices at the bottom of the stack! People are not generally using RTX GPUs for non-gaming purposes; there are better options for professional workloads.

    • @shanent5793  11 months ago +2

      @Wobbothe3rd What's better than the RTX A6000 Ada?

  • @kenbrown9438  11 months ago +17

    Anastasi, your videos are amazing. I've only been watching for maybe 5 months, but have been going back and watching all your stuff. Sometimes I watch the same ones several times since there is always so much information presented. Please keep them coming.

  • @dchdch8290  11 months ago +17

    Deep dive! The first video I've watched that actually explained well why everyone is so excited about Nvidia and the GPU rush.

  • @samcrater7296  11 months ago +22

    Anastasi, the H100 is a monolithic design, not a chiplet design. Chiplet designs refer to designs that split the different components of a [traditionally singular] die across multiple dies on the same package, for example AMD's Epyc or MI300 processors having 'chiplets' of cores and a 'chiplet' for the memory controller. In datacenter products, HBM is often packaged with the die but this is not what a chiplet design implies. Also, there have been converters that can transform CUDA code to roughly corresponding OpenCL or HIP code for a while, but CUDA will never be able to run natively on AMD hardware as CUDA compiles to PTX, a proprietary virtual instruction set.

    • @AnastasiInTech  11 months ago +11

      Sorry my mistake

    • @jkd7799Yann  9 months ago

      Absolutely. Chiplets were initially designed by AMD for their CPUs, and then they applied the approach to their GPUs. At first it was simply an MCM with a 2D architecture, which was really good but had some drawbacks, like latency... Then they released chiplets with 3D V-Cache, allowing for better performance, lower latency, better scalability, and lower costs.
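
    Regarding the point above that CUDA compiles to PTX, NVIDIA's proprietary virtual instruction set: here is a minimal sketch that dumps the PTX for a small kernel from Python. It uses Numba purely as a convenient front end (assuming Numba 0.54+ and a CUDA toolkit are installed); the axpy kernel and its type signature are illustrative only.

    ```python
    # Minimal sketch: inspect the PTX that a CUDA kernel is lowered to.
    # Assumes numba >= 0.54 and an NVIDIA CUDA toolkit on the machine.
    from numba import cuda, float32

    def axpy(r, a, x, y):
        i = cuda.grid(1)
        if i < r.size:
            r[i] = a * x[i] + y[i]

    # compile_ptx returns the generated PTX text plus the inferred return type.
    ptx, _ = cuda.compile_ptx(axpy, (float32[:], float32, float32[:], float32[:]))
    print(ptx[:400])  # PTX is NVIDIA-specific, which is why CUDA can't run natively on AMD hardware
    ```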

  • @Sven_Dongle  11 months ago +20

    CUDA is way more than a parallel computation framework; it's a complete series of libraries for solving problems on the GPU with C/C++, including linear algebra, deep learning, sparse matrices, solvers, FFT, nvJPEG, profilers, etc.

    • @TheLazyVideo  3 months ago

      There are higher-level abstractions like PyTorch and TensorFlow Keras.

    • @Sven_Dongle  3 months ago

      @TheLazyVideo Yeah, those add layers of performance-killing cruft on top of CUDA.
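
    To make that library breadth concrete from Python, here is a minimal sketch using CuPy, which routes NumPy/SciPy-style calls to cuFFT, cuBLAS/cuSOLVER, and cuSPARSE. It assumes a CUDA-capable GPU and a cupy build matching the installed toolkit; the array sizes are arbitrary.

    ```python
    # Minimal sketch of reaching the CUDA math libraries through CuPy.
    # Assumes a CUDA-capable GPU and a cupy build matching the installed toolkit.
    import cupy as cp
    import cupyx.scipy.sparse as sp

    x = cp.random.rand(1024, dtype=cp.float32)
    spectrum = cp.fft.fft(x)                        # cuFFT under the hood
    a = cp.random.rand(256, 256, dtype=cp.float32)
    u, s, vt = cp.linalg.svd(a)                     # cuSOLVER/cuBLAS under the hood
    m = sp.random(1024, 1024, density=0.01, format="csr",
                  dtype=cp.float32)                 # cuSPARSE-backed CSR matrix
    y = m @ x                                       # sparse matrix-vector product on the GPU
    print(spectrum.shape, s[:3], y.shape)
    ```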

  • @TheParadoxy  11 months ago +14

    I wonder if AMD's software stack is being underrated. I avoided their graphics cards for years because of the ROCm horror stories I had heard, and I think that was a smart choice at the time. But a month ago I bought a Radeon 6700 XT for my girlfriend's computer, and even without official ROCm support (after changing one line in a text file) it's been incredibly easy to use for Stable Diffusion and Leela Chess Zero. I'm just a hobbyist and don't have any commercial application for ANNs at the moment, so there are probably use-case issues that I am completely unaware of. But I was absolutely amazed at how easy it was to get ROCm up and running on an AMD consumer GPU (it took like an hour, and it was only that long because of slow driver downloads). I ultimately found it less of a hassle than using CUDA on my Nvidia RTX 2060 Super because (at least in Ubuntu 22.04) the drivers that make it work for CUDA break it for Steam and vice versa. After setting up the 6700 XT with ROCm I was so sure I'd have problems getting Steam to run games, but everything went smoothly. I was simply mind blown. I hope AMD brings official ROCm support to their 7000 series cards ASAP. As a hobbyist I don't care that Nvidia cards might train a faceswap model slightly faster, I just care that everything runs smoothly.

    • @crhu319  11 months ago

      China agrees and wants ROCm on RISC-V
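
    For anyone wondering what the "one line in a text file" above likely refers to: on consumer RDNA2 cards such as the 6700 XT, the widely shared community workaround is setting the HSA_OVERRIDE_GFX_VERSION environment variable so the ROCm runtime treats the card as a supported gfx1030 part. A minimal sketch, assuming a ROCm build of PyTorch on Linux; the override value is the community-reported one, not an officially supported setting.

    ```python
    # Minimal sketch: run a ROCm build of PyTorch on an "unsupported" RDNA2 card.
    # HSA_OVERRIDE_GFX_VERSION is the community-reported workaround for gfx103x
    # consumer GPUs; it must be set before the ROCm runtime loads (i.e. before torch).
    import os
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

    import torch

    # ROCm builds reuse the torch.cuda namespace, so CUDA-style code runs unchanged.
    print("accelerator available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
        x = torch.randn(2048, 2048, device="cuda")
        print((x @ x).sum().item())
    ```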

  • @christopher3d475  11 months ago +5

    The first computer I had as a teenager was based on the MOS 6502. It's insane to realize how far these chip architectures and technologies have come since the mid-1980s.

    • @MajorCaliber  11 months ago +2

      A lot of useful devices were built around Chuck Peddle's 6502 "RISC" architecture... now it probably couldn't even serve as a supervisor/side processor for today's multi-core beasts. 😁

    • @GregMoress  11 months ago +2

      Forget about 4MHz becoming 4GHz... As the complexity of the chip architecture and software also grows exponentially... what is it that is looming to take over the burden of design and planning? Our brainchild, which goes by many names, but all call it AI.
      Once let loose into this world, and our satellites and probes... it will live forever and ever. Ever Growing... Ever Learning...

  • @SandyRegion  11 months ago +12

    Thank you for making things so easy to understand

  • @HenryCalderonJr  11 months ago +4

    One of the most beautiful genuine geniuses, who explains things in a way that's easy for us to understand! Thank you so much, always, for your information and updates! You make things that could be boring very easy to want to watch!

  • @RSRrobertwalker  11 months ago +6

    SHE is always the go-to source for credible AI & new tech developments!

  • @maxnao3756  11 months ago +2

    Thank you for another excellent video which, in a nutshell, allows me to stay updated about the GPU/CPU "universe", the upcoming technical hurdles and challenges, as well as where the future may be going.

  • @CoolMusicToMyEars  11 months ago +6

    I cannot wait until optical CPUs are available over the counter, working near the speed of light ❤ Quantum computing is already available for the enthusiastic people out there.
    I remember the days of the £500 CD writer/reader & the 40MB hard drive, thinking that's fantastic; now they're so cheap, and more & more solid-state drives too. I purchased a 200GB SSD for £6 in the UK 🇬🇧
    Have a fantastic weekend ❤

  • @iliasiosifidis4532  11 months ago +9

    Can you explain the differences between x86, ARM, RISC-V, and the GPU architecture whose name I don't know? :D
    Thanks for the great videos. They're an antidote to the pessimism that the consumer-side industry has.

  • @swamihuman9395  11 months ago +2

    - Fascinating!
    - Thx for keeping us updated.
    - Oh and, great presentation. Keep up the great effort/content...

  • @soonts  11 months ago +10

    These hardware companies don't actually need parity with CUDA. They only need to implement custom backends for the TensorFlow and PyTorch libraries. This is much easier than replacing CUDA. I even did similar stuff myself, on top of Direct3D 11 compute shaders.

    • @maxjames00077  10 months ago +1

      When do you think companies like Meta and Google can use Intel's or AMD's GPUs for their TF or PT?

    • @soonts  10 months ago +1

      @maxjames00077 I think they already can; it just involves more friction to set up and use.
      I believe AMD is better than Intel for that. According to PyTorch devs, support for ROCm became stable in version 1.12, June 2022. According to AMD, both PyTorch and TensorFlow work on AMD GPUs on top of ROCm. Both require a Linux OS for that; Windows is unsupported. However, Internet companies like Meta and Google don't care about Windows support because they run their AI stuff on servers, and Linux is better for that use case.
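
    A small sketch of what that backend approach looks like from the user's side: the same PyTorch script runs on whichever accelerator backend the installed build targets, and torch.version.cuda / torch.version.hip show whether it is a CUDA or a ROCm (HIP) build. Assumes PyTorch 1.12 or newer; the model and tensor shapes are arbitrary.

    ```python
    # Minimal sketch: backend-agnostic PyTorch code, plus a check of whether the
    # installed wheel was built against CUDA or ROCm/HIP.
    import torch

    print("CUDA toolkit version:", torch.version.cuda)  # None on ROCm builds
    print("HIP/ROCm version:", torch.version.hip)       # None on CUDA builds

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(512, 10).to(device)
    x = torch.randn(32, 512, device=device)
    loss = model(x).square().mean()
    loss.backward()
    print("ran on:", device)
    ```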

  • @user-lh7dh1ew1c  11 months ago +1

    Thanks for the video, Anastasi!

  • @leematthews6812  11 months ago +5

    Some amazing stuff coming down the line. I'm looking forward to Graphcore's Good computer next year.

  • @jibjc  10 months ago +1

    Anastasia's videos are so great. I love listening to them.

  • @royjones1053  11 months ago +1

    Thanks again for another great informative vlog. Until next time thank you and keep up the great work

  • @garycard1826  10 months ago +1

    Another terrific video Anastasi! Well done.👍

  • @WmLatin  11 months ago +1

    Hey, I saw my name as a supporter! Thanks, Anastasi! 😍

  • @HarpaAI  11 months ago +14

    🎯 Key Takeaways for quick navigation:
    00:00 📈 The demand for GPUs, particularly for AI applications, is surging, causing supply struggles.
    03:02 🚀 Nvidia dominates the GPU market due to its high-performance hardware and proprietary software, like CUDA.
    05:03 💹 The new Nvidia H100 GPU offers significantly better performance for large language model training compared to CPUs.
    09:51 💼 Alternatives to Nvidia GPUs for AI applications include Tesla's Dojo, Google's TPUs, Cerebras' Wafer Scale Engine, and Intel/AMD GPUs.
    16:34 💡 The future of AI hardware is promising with intense competition, and diversifying investments is recommended, including alternative investments like Fine Art through Masterworks.
    Made with HARPA AI

  • @markmalonson7531  11 months ago +3

    Your videos are the best! And you look and sound great.

  • @cool-alien377  11 months ago

    I came back from work. Awesome, thanks for the info!

  • @PaulPiedrahita  11 months ago +10

    Turned on notifications! Can't miss these vids. Thank you for teaching us and keeping us updated! 🤖🌹

  • @gregbarber8166  11 months ago

    Hi Anastasiia, your English is fantastic. You explain the reasons for the shortfall and high prices so clearly and concisely. Keep up the great work. GB

  • @luapo2233  11 months ago +2

    You are extremely educational. Thank you.🎉

  • @petereriksson7166  11 months ago

    Thanks for the information.

  • @dbroers  11 months ago +1

    Love your videos, but just wanted to say: I love your watch, the Santos is one of my all-time favourites.

  • @Mannuu1  11 months ago +22

    Excellent video, very well researched. Let's hope Cerebras (which actually just closed a deal with AMD for a supercomputer to help with ML) and AMD are able to pull ahead and actually compete. The fact that Nvidia has such a big piece of the pie with proprietary software is a real problem IMO... or else there's no AI for everyone, because it's a company focused only on profits and extremely greedy at that.
    I pray ROCm has a chance to shine, but AMD needs to actually focus on it massively.
    AI is in its infancy, so there's a chance for others to join the fight.

    • @elivegba8186  11 months ago +3

      AMD would have done the same if they had pulled ahead before Nvidia did.
      You guys shouldn't behave as if AMD is a saint that will not overprice.

  • @theunderdowners  11 months ago +1

    Very informative, and exciting.

  • @methlonstorm2027  11 months ago +1

    enjoyable and informative as always thank you

  • @alanreader4815  11 months ago +1

    Great video Anastasi.

  • @JulianFoley  11 months ago

    Insightful and clear. Thanks.

  • @MrFoxRobert  11 months ago +1

    Thank you!

  • @dreamphoenix  11 months ago +1

    Thank you.

  • @luisrabalperez7146  11 months ago +7

    I would love a video about the impact the discovery of LK99 could have on the microchip universe.

  • @cool-alien377  11 months ago +1

    Alright will watch this after work

  • @TheFinalRevelation1  11 months ago +26

    There is no parallel between the two: gold retains its value, CPUs and GPUs become e-waste after 10 years or so

    • @klin1klinom  11 months ago +6

      Indeed. One is a store of value, and the other is an investment. You can't really generate an income from gold, though, unless you're a pawnbroker buying at a massive discount.

    • @ihavetubes  11 months ago +3

      @@klin1klinom actually Gold is both

    • @geekinasuit8333  11 months ago +1

      E-waste in much less than 10 years; maybe 5 years on average, and often much less than that.

    • @aldaricJohnes  11 months ago +5

      What's the point of that nonsense? If you want to compare gold's value, you compare it to investments in the tickers of those companies; it makes no sense to compare it to physical chips. Like yeah, obviously, nobody has ever bought a chip expecting it to go up in price...

    • @joelcarson4602  10 months ago +1

      Chips can be used to make money; gold just sits there. Maybe gold increases in value, maybe not so much. If you're hoping gold is gonna make you rich when civilization suddenly goes headfirst into the crapbasket, you'll find you have much bigger problems than your portfolio. Value of any sort is ultimately faith-based and situational, subject to unexpected change.

  • @soheilseyedjamali5423  7 months ago

    Thank you for this video and all the previous ones you did. As far as I know, if we could make a digital chip implementing non-volatile elements such as memristors or MTJ devices, it would actually be useful 👌

  • @markvietti  9 months ago

    You have a great ability to explain. You would be a great teacher.

  • @mylesl2890  11 months ago +3

    Cartier watch?? looks pretty. Will be interesting to see the role GPU chips and quantum chips may play soon...

    • @adrielr5930  11 months ago

      You like it? Cartier Tanks are pretty popular among buyers. That one should be a 36mm Cartier Tank; they're expensive though.

  • @solosailorsv8065  11 months ago +2

    Another insightful pro video, thanks!
    It seems a dedicated AI to help streamline and optimize the software layers of MUX/DeMUX (please excuse the hardware terminology) for all these parallel processors will be another cutting-edge tech.

  • @scottwatschke4192  11 months ago

    Awesome growth ahead and happening now.

  • @marcovillani4427  11 months ago +3

    Great work Anastasi, your videos are always very interesting. I believe that NVIDIA will increasingly be the protagonist in the realization of supercomputers, given the billion-dollar investments. I foresee exciting solutions in a short time!!!

  • @georgearnold977  11 months ago +1

    Thanks!

  • @garycard1826  10 months ago

    This is from the Model 3 manual; it may also apply to the Model Y (I would think): To experience the same amount of deceleration whenever you release the accelerator pedal, regardless of the state of the Battery, you can choose to have the regular braking system automatically engage whenever regenerative braking is limited. Touch Controls > Pedals & Steering > Apply Brakes When Regenerative Braking is Limited.

  • @mvasa2582  9 months ago

    Excellent presentation across the board. I would love to see Cerebras compared to the GH200. That would be more apples-to-apples. And NVLink vs Cerebras's built-in memory should give a good comparison.

  • @optimus888amicus  11 months ago

    Very good ! 🌞😉👍

  • @jasonkocher3513  11 months ago

    You should do a video about multimodal models and see if/when we'll have proper and accurate 3D STEP files imagined from AI inputs. STEP is the only real engineering format for 3D IMO. It means you can machine the parts and build assemblies to 100% proper engineering specifications. When we get there, we'll see a step change in everything.

  • @pandoorapirat8644  9 months ago

    The best market advisors are those who know the most about the company and the technology.

  • @joelcarson4602  10 months ago

    Wasn't it Clive Sinclair and associates back in the 1990s who came up with the idea of using a single wafer (at that time likely a 200 mm wafer) as a single large RAM device, similar in general idea to the Cerebras full-wafer TPU?

  • @peterxyz3541  10 months ago

    Nice info. Any alt GPU for the end users? What are the options for end users for playing with AI text and image generation?

  • @stevemaxwell5559  11 months ago

    I enjoyed your great, intelligent presentation. For me, it filled in some of the details about AI that I was missing.
    For what it's worth, my feeling is that AMD are on a good path.
    Their chiplet design is ultimately very scalable.
    It depends upon them getting the software right.

  • @commanderdante3185  11 months ago

    A room-temperature superconductor patent was released, Ana. Maybe we could have mini supercomputers one day. Could you explain how superconductors would help computing?

  • @altsak840  11 months ago +4

    As a gamer I'm hoping there will be some GPU trickle-down to us, for example failed chips getting new life as gamer GPUs.

    • @abram730  11 months ago

      4090 is a failed 6000 Ada chip.

    • @pixelfairy  11 months ago

      Between AI replacing the financial interest of anyone capable of producing GPUs, and the economies of scale for mobile and dedicated devices, I think those are slowly replacing the PC gamer market, and with it, the gamer GPU market. Won't be long before you're playing your old PC games on an emulator running on your phone. Just give it a few years for mobile to catch up while investment in the desktop drops off.
      Almost all my gaming is on a home-made VR treadmill. There's already one company, Virtuix, releasing a dedicated all-in-one device for this purpose. Even Kat-VR is working on a device, the Kat Nexus, to use their slidemill with the Quest 2 and PSVR2, so theirs won't need a PC either. Don't know if that's diversification or they also see the market headed in that direction.
      Of course, some new development could turn everything upside down, or I could just be wrong about this. I'm into video games, but mostly for exercise, and only a small part of that. I suspect us casual gamers are really the bigger market.

  • @christerwiberg1  11 months ago

    One maybe off-topic question: how will the ongoing and coming AI boom add to data storage? Old forecasts, as far as I understand, were based on more streamed video, but as I understand it, AI will add a lot of stored data as well. Any idea where we are heading? My rough guess is that we will have a lot more data stored, but that the amount that can be manufactured can't grow as fast.

  • @cock699  10 months ago

    Please bring videos on quantum computing.

  • @soundcore183  11 months ago

    16:25 Overall, proprietary code can be relevant to AI development, including LLM and autonomous AI. However, it is crucial to strike a balance between proprietary and open-source components in the AI ecosystem to promote innovation, collaboration, and transparency while protecting intellectual property rights.

  • @mydearfriend007  11 months ago +1

    Can you make a video on the new room temperature superconductor paper?

  • @pazitor  11 months ago +3

    _What?_ Covering my hobbies, now, too? Gee, thanks!
    More seriously, you made a good case for Cerebras. Taken together, these trends may spell deep trouble for Intel in particular.

    • @maxjames00077  10 months ago

      Intel is the only one that isn't in deep trouble in the long run, when they will be manufacturing chips along with TSMC. AMD, on the other hand...

  • @MozartificeR  11 months ago

    The new throughput-to-chiplets tech of Hopper looks exciting :)
    With open source, it will probably be like consoles, where the company that becomes more consumer-friendly dominates.

  • @aliandiazperez7602  10 months ago

    Does Apple have some in-house supercomputer for AI training? The M-series ARM processors show good performance for that.

  • @JazevoAudiosurf  11 months ago

    We are talking about wafer-scale computing, but the truth is there is barely anything preventing chips the size of square meters.

  • @williamgidrewicz4775  11 months ago

    Hey, supremely tech lady maybe in the future they use some sort of messenger RNA coupled with special types of fluid dynamics to impart AI with some sort of human thought! Maybe it takes a decade or 2 or...!

  • @TSulemanW  11 months ago

    Nicely explained. I was born in 1965, so now I am 57 years old, ten years younger than Bill Gates. When I was at university I was a lecture assistant for Systems Information and Electronic Data Processing.

  • @JazevoAudiosurf  11 months ago +1

    The real question is who the alternatives are if TSMC goes down, because Cerebras gets its chips from TSMC.

    • @Chip_in  11 months ago

      The new TSMC fab will be up and running in Yankland in a few years; this should help meet demand ⛳

  • @InfinitelyCurious  11 months ago

    I’m curious as to which companies are using Niobium in their supercomputing chips?

  • @crhu319  11 months ago +1

    The watts to do the same iso-work are the key factor. The cost of the rack matters much less. Heat and software difficulty are a distant third and fourth. The H100 GPU on TSMC N4, with better tensor cores, chiplet memory, and CoWoS, is impressive, but... when a RISC-V solution on DUV with AMD's parallel framework comes out, it'll be far cheaper, I expect.
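
    A tiny worked example of the "energy to do the same work" framing in the comment above. All numbers below are made up purely for illustration; neither line corresponds to a real product.

    ```python
    # Illustration only: all numbers below are hypothetical, not measured figures.
    def joules_per_sample(avg_power_watts: float, samples_per_second: float) -> float:
        """Energy spent to process one training sample: J = W / (samples/s)."""
        return avg_power_watts / samples_per_second

    accel_a = joules_per_sample(avg_power_watts=700.0, samples_per_second=2000.0)
    accel_b = joules_per_sample(avg_power_watts=450.0, samples_per_second=1100.0)
    print(f"A: {accel_a:.2f} J/sample  B: {accel_b:.2f} J/sample")
    # At equal total work, the accelerator needing fewer joules per sample wins on energy.
    ```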

  • @martiddy  11 months ago +2

    I feel like Cerebras can benefit from these bottlenecks by selling the accelerators that NVIDIA can't supply yet.

  • @IlyaEfanov  11 months ago

    What do you think about Tiny Corp and their future Tiny Box?

  • @DYNAMiCDRiVESDESiGN  11 months ago

    How do you explain TSMC's earnings report and guidance of a 10% reduction in sales, with AI only accounting for 5% of those sales?

  • @crusadercatwoman02  11 months ago

    Love the quality of your videos

  • @yw1971  11 months ago

    What about the Photonic Processors & The Quantum on silicon (& Intel's neural network)?

  • @cherubin7th  11 months ago +1

    Crazy how much the other chipmakers slept.

  • @RolandElliottFirstG  11 months ago

    The larger chip area must create much more heat hence thermal waste, just my thoughts.

  • @erickvond6825  11 months ago +1

    The flip side of this is that AMD is making serious market-share gains in the private sector, which may have massive implications later in this trend.

  • @joemeyer6876  11 months ago

    Talk about AI designing itself, how much of that happens, chipwise?

  • @muhammaddaniyaal8991  11 months ago +3

    First comment and first like... Love your videos. I'll watch your videos and become more intelligent.

  • @Sheerwinter  11 months ago +1

    :) The Strix Point APU is the only way for me.

  • @JeffBrazeel-fe4wc  11 months ago

    I've been a big proponent of open source since desktops first came onto the market.

  • @marcelspence5304  10 months ago

    Anastasi, the hissing of your S sounds is killing me. Do you use a de-esser on your mic? I don't know, it might just be me.

  • @pokwerpokwerpokwer  11 months ago +1

    Towelie of South Park has a TPU

  • @taylord1984  11 months ago

    I'm hoping Intel Arc GPUs bring in some real competition. Other than that, some sort of low-power setup for those who don't mind waiting a day or two.

    • @maxjames00077  10 months ago +1

      I see many happy Arc A770 owners already!

  • @joesterling4299  11 months ago

    When even the auto-generated Google subtitles can't quite understand you, you know that you should provide a transcript of your speech in subtitle form.

  • @memoli801  11 months ago

    Will AMD stock fly too, what do you think?
    Since NV cannot deliver to the demand, the next best will catch up.
    I just don't see either of them as very stable and predictable.
    Even more so at NV, where news has been spread way too slowly and therefore too late to counter-react if needed.

  • @user-hl1wg7yd9u  11 months ago +1

    Thank you for the information, very interesting.

  • @GregoryCarnegie  11 months ago +3

    It's annoying because it means the next-gen gaming chips will be super expensive.

    • @dchdch8290  11 months ago +1

      There is already a big difference between gaming and AI GPUs. They just keep the term the same (i.e. GPU) for marketing purposes. However, at some point it won't make sense anymore.

  • @PACotnoir1  11 months ago

    What about photonic chips as the new development trend for GPU or TPU instead of electronic ones?

  • @springwoodcottage4248  11 months ago

    Fabulous: topical, interesting & exciting. The issue with chip competitors to Nvidia seems to be fabrication. Can these competitors get enough access to fabs to create chips at scale? A second issue is whether existing AI can self-train, as the article from Stanford you covered suggested. Ark Invest argues that training costs have fallen & will continue to fall. There is no squaring of these circles: not everyone can be correct. So the question is about who is a reliable indicator of demand & how that demand will grow. Adding into this mix are a few comments from Elon saying the way they had been tackling FSD was too complex & they have found a better way. Does this indicate less demand for GPUs in the future? This is all such a fascinating & complex subject that will change many things. Excellent summary presented in such an exciting way, thank you!

  • @TheFinnmacool  11 months ago

    Well right now with NVDA people are assuming massive sales in the future even if the market isn't there for it and the market can't supply what NVDA needs. Good luck with that.

  • @nightowl263  11 months ago

    Could you please explain approximately how much extra energy (e.g. in megawatts) will be needed in the future to fuel AI? Elon Musk warned about a lack of power plants in the US (and the same in Europe).

  • @royvarley  11 months ago +1

    FYI Grace Hopper: en.wikipedia.org/wiki/Grace_Hopper
    Grace was a consultant to DEC (I'm a former employee)

  • @Maggio100  11 months ago

    The H100 is derived from a GPU design, right? If so, an AI chip designed from scratch would probably be far more efficient (I'm thinking about Dojo v2, because v1 is not designed for AGI and LLMs).

  • @sdsa007  10 months ago

    Very exciting! The real world kinda sucks, but the computer world is so promising! Cheaper, better, faster... exciting opportunities in tech, beautifully and organically presented! I don't think she can be replaced with a bot!

  • @a0z9  10 months ago

    Tensors and transformers. Chips become more sophisticated by specializing in what works. The function makes the organ, or vice versa.

  • @gargamelandrudmila8078  11 months ago

    Hopefully optoelectronic solutions can break the AI compute power bottleneck in terms of BoP/kWh.

  • @TheBestNameEverMade  11 months ago

    Why is the Cerebras chip square and not round? It seems to be losing some of the silicon area.

  • @felicytatomaszewska2934  11 months ago +1

    I feel that Cerebras's direction sounds promising, and I really hope and wish that AMD catches up with Nvidia and ROCm succeeds, because Nvidia is a monopoly as of now and it hurts all of us.