The Real Reason Tesla Built The DOJO Supercomputer!

  • Date added: Nov 6, 2021
  • The first 100 people to go to www.blinkist.com/theteslaspace are going to get unlimited
    access for 1 week to try it out. You'll also get 25% off if you want the full membership!
    The Real Reason Tesla Built The DOJO Supercomputer! Detailing Tesla's supercomputer project Dojo: how it works, its chip specs, and why Tesla is investing so much in AI.
    Last video: The 2022 Tesla Battery Update Is Here
    • The 2022 Tesla Battery...
    ► Subscribe to our sister channel, The Space Race: / @thespaceraceyt
    ► Subscribe to The Tesla Space newsletter: www.theteslaspace.com
    ► Get up to $250 in Digital Currency With BlockFi: blockfi.com/theteslaspace
    ►You can use my referral link to get 1,500 free Supercharger km on a new Tesla:
    ts.la/trevor61038
    Subscribe: / @theteslaspace
    🚘 Tesla Videos: • Why Tesla Will Destroy...
    🚀 SpaceX Videos: • SpaceX Videos
    👽 Elon Musk Videos: • Elon Musk Developing C...
    🚘 Tesla 🚀 SpaceX 👽 Elon Musk
    Welcome to the Tesla Space, where we share the latest news, rumors, and insights into all things Tesla, SpaceX, Elon Musk, and the future! We'll be showing you all of the new details around the Tesla Model 3 2021 and Tesla Model Y 2021, along with the Tesla Cybertruck when it finally arrives (it's already ordered!).
    Instagram: / theteslaspace
    Twitter: / theteslaspace
    Business Email: tesla@ellifyagency.com
    #Tesla #TheTeslaSpace #dojo
  • Science & Technology

Comments • 491

  • @TheTeslaSpace
    @TheTeslaSpace  2 years ago +26

    The first 100 people to go to www.blinkist.com/theteslaspace are going to get unlimited
    access for 1 week to try it out. You'll also get 25% off if you want the full membership!

    • @ryvyr
      @ryvyr 2 years ago +1

      I removed this comment from the main body since it did not seem relevant to the subject material, and am relegating it here. Why do you employ the seamless mid-video sponsorship method rather than, at the very least, announcing it at the beginning, if you insist on a mid-video reel? It really kills the rest of the video, and at times I just click off at that point. Is there an ethical/moral consideration, or no?
      Per your recent video with the CT photoshopped to be black, along with the title, and noting to be "self aware" about the clickbait: was that a sort of hand-wave relying on enough of us not caring?
      I do enjoy your content, though I am disheartened when people seem misleading or irreverent with seamless mid-video sponsorship reels, which feel like a betrayal of trust.

    • @texasblaze1016
      @texasblaze1016 2 years ago

      Where is the DOJO supercomputer being built?

    • @nathanthomas8184
      @nathanthomas8184 2 years ago

      Is it plugged into the Black ooze?

    • @glidercoach
      @glidercoach 2 years ago

      Not sure if using climate change models as an example was a good idea, seeing as all the models have failed miserably.
      As they say, _"Garbage in, garbage out."_

    • @martinheath5947
      @martinheath5947 2 years ago

      While these computers and AI breakthroughs may in themselves be pure, scientifically and mathematically speaking, the potential for malevolent usage is enormous, e.g. 24/7 real-time, comprehensive monitoring and tracking surveillance of entire populations for an all-pervasive and totalitarian social credit system. Recent events around the world relating to pandemic "control measures" suggest our leaders do not have our best interests at the forefront of their concerns. Control is the goal, and I foresee a very dangerous coalescence of supranational elite power once this technology is pressed into service for *their* benefit.

  • @nujuat
    @nujuat 2 years ago +88

    I'm an experimental physics PhD student and I've written a quantum mechanics simulator that runs on graphics cards. When I was writing it, the top priority was to retain the highest accuracy possible with 64-bit floating point numbers (since we want to know exactly what's going to happen when we test the experiment out in the lab). I think most supercomputers are built to do things like that. However, that accuracy is unnecessary for things like graphics and machine learning. So it makes perfect sense that Tesla would cut down on it when designing a supercomputer only for machine learning purposes. I don't think you got anything wrong.
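
    For readers who want to see that tradeoff concretely, here is a minimal numpy sketch in the spirit of the video's pi example (the dtypes are generic stand-ins, not Dojo's actual formats):

      import numpy as np

      # One value stored at two precisions.
      x64 = np.float64(3.141592653589793)   # 8 bytes, ~16 significant digits
      x16 = np.float16(x64)                 # 2 bytes, ~3 significant digits
      print(x64, x16)                       # 3.141592653589793 3.14

      # For a large training batch, a quarter of the bytes means 4x as many
      # values per unit of memory, cache, and interconnect bandwidth.
      a64 = np.zeros(10_000_000, dtype=np.float64)
      a16 = np.zeros(10_000_000, dtype=np.float16)
      print(a64.nbytes // 2**20, "MiB vs", a16.nbytes // 2**20, "MiB")  # 76 MiB vs 19 MiB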

    • @superchickensoup
      @superchickensoup 2 years ago +8

      I once used a search bar on a computer

    • @karlos6918
      @karlos6918 2 years ago +1

      The Chern Simons number has a modulo 64 factorization heavenly equation representation which can map onto a binary cellular automaton with states.

    • @muhorozibb2777
      @muhorozibb2777 2 years ago +3

      @@karlos6918 In human words that means😳😳😳?

    • @BezzantSam
      @BezzantSam 2 years ago

      @@superchickensoup I remember my first beer

    • @BezzantSam
      @BezzantSam 2 years ago

      Do you mine ethereum on the side?

  • @TheMrCougarful
    @TheMrCougarful 2 years ago +24

    This is probably another example of a philosophy most often seen at work at SpaceX: the best part is no part. I would probably call Dojo a super-abacus. But for their purpose, an abacus was perfect, so they built the correct machine.

  • @denismilic1878
    @denismilic1878 2 years ago +33

    Very smart approach: less precise data and more neural networks. Simply put, it's not important whether a pedestrian is 15.1 m or 15.1256335980... m away; what's important is whether he is going to step onto the road or not. For decision making, precise data is not necessary; interpreting and understanding the data is crucial. The second factor why low precision is acceptable: all predictions are made for a short time span, and the calculations are repeated. The third reason: sensor inputs are also relatively low quality, but there is a huge amount of them.
    edit: very good and understandable video.
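
    A toy sketch of that point: at decision time only the comparison matters, so a low-precision estimate produces the same action (the 20 m braking threshold below is a made-up illustration value):

      import numpy as np

      def will_brake(distance_m, threshold_m=20.0):
          # The decision depends only on which side of the threshold the
          # estimate falls, not on its trailing digits.
          return distance_m < threshold_m

      precise = np.float64(15.1256335980)
      coarse = np.float16(precise)                    # rounds to ~15.125
      print(will_brake(precise), will_brake(coarse))  # True True: same decision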

  • @thomasruwart1722
    @thomasruwart1722 2 years ago +56

    Great video! I spent my entire 45-year career in High Performance Computing, specializing in the performance of data storage systems at the various DoE and DoD labs. I am very impressed with Dojo: its design and implementation, not to mention its purpose. Truly amazing and fascinating!
    Tuesday Humor: Frontera, the only computer that can give you one hexabazillion wrong answers per second! 😈

    • @efrainrosso6557
      @efrainrosso6557 2 years ago +2

      So Frontera is the Joe Brandon Biden of computers. Always wrong with authority and confidence. Not one right decision in 50 years.

    • @prashanthb6521
      @prashanthb6521 2 years ago +3

      Awesome career you had, sir. I am right now struggling to string together a few computers in my basement to make money from the stock market :)

    • @thomasruwart1722
      @thomasruwart1722 2 years ago

      @@prashanthb6521 - that sounds like fun! There are lots of inexpensive single-board computers that you can build clusters with. Some have AI coprocessors as well, to run TensorFlow or whatever suits your needs. I wish you all the best with your projects!

  • @anthonykeller5120
    @anthonykeller5120 2 years ago +9

    40+ years of software engineering, starting with machine interfaces. Very good presentation. If I were at the start of my career, this is where I would want to spend my waking hours.

  • @StephenRayner
    @StephenRayner 2 years ago +7

    Software engineer here with 15 years of experience. You did a good job.

  • @stevedowler2366
    @stevedowler2366 2 years ago +17

    Thanks for a very clear explanation of task-specific computing machine design. I've read ... well, skimmed ... er, sampled that Dojo white paper to the point where I glommed onto the idea that lower but sufficient precision yields higher throughput, and thus compute power, for a specific task. Your pi example was the best! Keep these videos coming, cheers.

  • @MichaelAlvanos
    @MichaelAlvanos 2 years ago +33

    Great presentation! It filled in the gaps & I learnt some things I wasn't even aware of. Even your comment section is filled with great info!!

  • @jaybyrdcybertruck1082
    @jaybyrdcybertruck1082 2 years ago +53

    Fun fact: the computers Tesla has been using to train FSD software today amount to the 5th largest supercomputer in the world. It isn't good enough even at that level, so they are leapfrogging everything.

    • @ClockworksOfGL
      @ClockworksOfGL 2 years ago +10

      I have no idea if that’s true, but it sounds like something Tesla would do. They’re not trying to break records, they’re trying to solve problems.

    • @jaybyrdcybertruck1082
      @jaybyrdcybertruck1082 2 years ago +4

      @@ClockworksOfGL here is the actual presentation by Tesla, which explains everything; it's a bit long, but holy cow it's awesome.
      czcams.com/video/j0z4FweCy4M/video.html

    • @scottn7cy
      @scottn7cy 2 years ago +3

      @@ClockworksOfGL They're trying for world domination. Elon Musk is merely a robotic shell. Inside you will find Brain from Pinky and the Brain.

    • @jaybyrdcybertruck1082
      @jaybyrdcybertruck1082 2 years ago

      @@stefanms8803 small potatoes for a car company then, I guess; remind me what GM, Ford, and VW have?

    • @abrakadavra3193
      @abrakadavra3193 2 years ago

      @@ClockworksOfGL It's not true.

  • @robert.2730
    @robert.2730 2 years ago +30

    GO TESLA GO 🚀🚀🚀👍🏻😀

  • @jaybyrdcybertruck1082
    @jaybyrdcybertruck1082 2 years ago +21

    It's worth mentioning that Tesla is already planning out the next upgraded version of Dojo, which will have 10x the performance of the one they are building today.
    Dojo will be up and running sometime in the second half of 2022; after that, I give it 1 year to turn Full Self-Driving into something the world has never seen. It will take all 8 cameras' video and simultaneously label everything they see, in real time, through time.
    Today it's labeling small clips from individual cameras. This will be a HUGE step change in training once it's running.
    It's going to save millions of lives.

    • @gianni.santi.
      @gianni.santi. 2 years ago

      "after that I give it 1 year to turn Full Self driving into something the world has never seen."
      What we're seeing right now is also never seen before.

    • @TusharRathi-zj1wu
      @TusharRathi-zj1wu 3 months ago

      Not yet

  • @raymondtonkin6755
    @raymondtonkin6755 2 years ago +5

    It's not just FLOPS, it's the adaptive algorithms too! The structure of dimensions in a neural network ... pattern recognition, nondeterministic weighted resolution 🤔 and memory.

  • @scotttaylor3334
    @scotttaylor3334 2 years ago +2

    I, for one, welcome our computer overlords... Three comments about the video:
    Fantastic video! Tons of data and lots of background. Love it.
    You made an analogy with Canada getting rid of the $1 bill, and I think you indicated that it reduced the number of coins we carry around, but my experience is exactly the opposite: I find that I come home with a pocket full of change every time I go out and use cash...
    Second thing: Nvidia is pronounced "invidia/envidia". I used to play on the hockey team down in San Jose, California.
    Again, thanks for the great video and great presentation.

  • @incognitotorpedo42
    @incognitotorpedo42 2 years ago +53

    When you start the video with a long (sometimes angry/defensive) tirade about you not knowing anything about supercomputers, it makes me wonder if any of it is going to be worth listening to. You actually did a pretty good job, once you got to it.

    • @KineticEV
      @KineticEV 2 years ago +1

      I was thinking the same thing, especially at the beginning with the supercomputer vs. the human brain. I think that was the only thing I disagreed with, since we know the whole point some companies are aiming for is to solve the AI problem, but they always come up short.

    • @kiaroscyuro
      @kiaroscyuro 2 years ago +3

      I listened to it anyway and he got quite a bit wrong

    • @ravinereedy204
      @ravinereedy204 2 years ago +3

      Not everyone has a degree in CS... I do, and he explained a lot of things pretty well. The thing is, he knows the limits of his knowledge and does his best to explain anyway. How are you gonna bash the guy for that? lol. I suppose I understand what you mean though. At least he is upfront about it and doesn't lie to the viewers to fill the gaps?

    • @vsiegel
      @vsiegel 2 years ago

      @@ravinereedy204 I think he did not bash the author; he pointed out that there is a risk of losing viewers early because they misunderstand what he says.

    • @ravinereedy204
      @ravinereedy204 2 years ago

      @@vsiegel Sure, maybe that's what he was implying, but that's not what he said though lol

  • @j.manuelrios5901
    @j.manuelrios5901 2 years ago +4

    Great video! It was never about the EVs for me, but more about the AI and energy storage. TSLA

  • @oneproductivemusk1pm565
    @oneproductivemusk1pm565 2 years ago +10

    Like I told you before!
    I love your commentary: very natural and conversational!
    Keep it up, my man!

  • @neuralearth
    @neuralearth 1 year ago +1

    The amount of love I felt for this community when you compared it to Goku and Frieza made me feel like there might be somewhere on this planet where I fit in, and that I am not as alone as I feel. Thank you TESLA and ELON and NARRATOR GUY.

  • @d.c.monday4153
    @d.c.monday4153 2 years ago +24

    Well, I am not a computer nerd! But the parts you explained that I knew were right, and the parts you explained that I didn't know sounded right! So I am happy with that. Well done.

  • @lonniebearden9923
    @lonniebearden9923 2 years ago +10

    You did a great job of presenting this information. Thank you.

  • @NarekAvetisyan
    @NarekAvetisyan 2 years ago +8

    The PS5 is 10.2 TFLOPS of FP32, btw, so one of these Tesla tiles is only about 2 times faster, not 35.
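
    A quick like-for-like check of the figures as quoted in this thread (10.2 TFLOPS FP32 for the PS5, ~22 TFLOPS FP32 per D1 chip per the 7:22 comment further down); the ~35x figure presumably comes from comparing Dojo's lower-precision rating against the PS5's FP32 number:

      # FP32 vs FP32, using the numbers quoted in these comments.
      d1_fp32 = 22.0    # TFLOPS, one D1 chip (as quoted)
      ps5_fp32 = 10.2   # TFLOPS, PS5 GPU (as quoted)
      print(f"{d1_fp32 / ps5_fp32:.1f}x")   # ~2.2x, not ~35x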

  • @owenbradshaw9302
    @owenbradshaw9302 2 years ago +8

    Great video. I will say, Dojo has the advantage of incredibly low latency, so the entire supercomputer can process data efficiently, regardless of floating point format. Lots of FLOPS are useless if you can't transfer the data between nodes very fast; it's like trying to push a fire hydrant of water through a garden hose. This is one of the big factors in how good Dojo is.

    • @vsiegel
      @vsiegel 2 years ago +1

      It is still floating point numbers, just less precise, basically with lower resolution. You do not need the precision, and if a number uses less precision, it uses less memory. You cannot transfer the data faster, but you can transfer more in the same time: the latency does not change, while the throughput doubles if you use half the precision.

  • @costiqueR
    @costiqueR 2 years ago +2

    I really enjoyed it: a comprehensive and clear presentation. Thanks!

  • @oneproductivemusk1pm565
    @oneproductivemusk1pm565 2 years ago +5

    I agree that the image is too graphic, but it's perfect for the occasion! Lol 😂😂😂

  • @ChaJ67
    @ChaJ67 2 years ago +30

    To my understanding at least, with current technology it is impossible to make a chip over a certain size and get perfection. This is what limits GPU sizes, which are way smaller than a wafer. The only way to do wafer scale is to design it to work around any and all defects. So they may actually use nearly 100% of the wafers, just with a number of sub-components disabled because of defects.
    The reason wafer scale is so important is the heat dissipation of interconnects. The reason we have gone so long without GPU chiplets is that, with all of the interconnects involved, you can't just distribute a GPU across multiple dies and get better performance. Instead you have a multi-pronged interconnect nightmare, one of those problems being that the sheer heat generated in the die-to-die interconnects outweighs any benefit from spreading across more dies. While there is talk of MCM GPUs from AMD, and AMD already has MCM CPUs, the CPUs are designed with particular limitations to make chiplets work; the issues that would make an MCM GPU possible have been studied for years, and it looks like AMD may have come up with an acceptable solution where spreading across multiple dies is a net benefit. Wafer scale takes a different approach: everything is on the same wafer, so the interconnect issue is eliminated, at the cost of having to deal with the defects of neighboring silicon on the wafer instead of chopping everything up and throwing out all of the defective pieces (at least those defective to the point where more common designs cannot work).
    The only way to dissipate the heat from so much silicon in one spot is liquid cooling, so there is actually another layer on top, which is the water block, if I understand correctly. Another great thing about liquid cooling is you can just bring the heat to outdoor radiators and dissipate it. Something I would be interested in: Tesla seems to have high temperatures figured out, allowing them to boost the performance of the power electronics in the Tesla car, so it would be interesting to know what is going on with Dojo and whether they can use a simple high-heat-load outdoor radiator to cool the supercomputer and thus save a bunch on cooling. Cooling can be quite an expensive process, especially if traditional forced-air CRACs are used, so a simple liquid loop, with mainly just pumps to move the liquid and fans over the radiators from a power perspective, would be a huge power savings. Chilling air to 65 F (about 18 C) and then blowing it over high-performance computer parts with crazy high-powered fans burns a tonne of power, especially if it is 115 F (over 45 C) outside.

    • @goldnutter412
      @goldnutter412 2 years ago +1

      If the car is moving, you can get near-free airflow; ram it in with the right fluting or whatnot.
      Clock speed and routing on the chip know what is coming well before a human's cognition would kick in. "I stopped, it must be underclock time" happened well over a second ago; slowing down for an expected stop is a high-prediction case, so most of the time it won't be fooled. Even if it is, clocking back up from 5% to 100% is so fast it is "instant" to our perception.
      So zero issues should be expected; the wafers that go in should really last, by the sounds of it. Nice essay, cheers. I do enjoy when someone doesn't ignore centigrade. History Channel, shame on you and the ALONE show: always Fahrenheit, never a conversion. Just saying, that temperature in F is below 0 in C as well, which to a layman doesn't seem right, because you first subtract 32 and then almost halve! Lucky the temps don't swing to -40 (aka -40, lol), the coolest temp to almost die in, but I haven't seen it yet.

    • @denismilic1878
      @denismilic1878 2 years ago

      Of course, all these wafers have redundancy built into them, but this is not a new idea. czcams.com/video/LiJaHflemKU/video.html

    • @davidelliott5843
      @davidelliott5843 2 years ago

      The simple way to cool computer processors is to chill the server room. It's not efficient, but it does the job. Directly cooling the wafer with "water"-cooled heat sinks is far more efficient, but the plumbing soon gets seriously complicated.

    • @vsiegel
      @vsiegel 2 years ago

      @@goldnutter412 Thank you for fighting for correct or even sensible use of temperature units. (Maybe it is good that no aliens visit this planet. Not using common units would be really embarrassing.)

    • @traniel123456789
      @traniel123456789 2 years ago

      @@davidelliott5843 Plumbing is complicated when you need 3rd-party manufacturers to install their equipment; it is the preferred way of doing things in a homogeneous datacenter. Fans consume a *lot* of power, and you can't make them go faster. There are even immersion cooling systems in some new datacenters to improve energy efficiency.

  • @thefoss721
    @thefoss721 2 years ago +3

    Dude, your videos are super solid! I'm super impressed with the info and knowledge, and the slight bit of humor to keep things moving swiftly.
    Can't wait to hear some more info!

  • @amosbatto3051
    @amosbatto3051 2 years ago +7

    Very poor info on the D1 at 7:55. The wafer of 25 D1 chips is probably designed to be able to work around bad chips, so they don't have to throw away the entire wafer. Also, Tesla is not the first to make whole-wafer chips with many processors: both UCLA and Cerebras have been doing this since 2019, and there was a company back in the 1980s doing the same.

  • @dan92677
    @dan92677 2 years ago +1

    Both interesting and informative!! Thank you...

  • @markrowland1366
    @markrowland1366 2 years ago +1

    While Dojo is mentioned as needing twelve units to do what is impressive, the architecture is infinitely expandable. A standalone single unit might fit in a bedside cabinet; maybe twelve might take up one wall of a bedroom.

  • @Nobody-Nowhere
    @Nobody-Nowhere 2 years ago +10

    Cerebras is doing wafer-scale AI chips. This year they released the 2nd-gen chip, and they announced the first version back in 2019. So Tesla is not the only one, or the first, doing this.

    • @godslayer1415
      @godslayer1415 2 years ago

      You are fucking clueless.

    • @godslayer1415
      @godslayer1415 2 years ago

      @@IOFLOOD With TSMC's atrocious defect levels - prob half that "wafer" is dead.

    • @gabrielramuglia2055
      @gabrielramuglia2055 2 years ago +1

      @@godslayer1415 In a traditional "monolithic die" design, one bad transistor could potentially require you to disable an entire CPU core, memory channel, or other critical large structure. If you instead design with a larger number of smaller structures that are intended to work together and route around any dead spots, your effective "working"/"active" silicon rate can be dramatically higher even with the same number of actual defects. For example, one could presume that as few as a dozen defects might make a 1-billion-transistor CPU completely unusable; if you end up with 1-in-100,000,000 defects on average, most of your CPUs will be unusable. It seems silly to criticize the fab and say the defect rate is very high (1 in 100 million is insanely good); it's the tolerances required that are insane: maybe with that design of CPU die you need a 1-in-500-million defect rate, whereas a fault-tolerant design may lose only 1% of computing capacity for the exact same defects.
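
      A toy version of that yield argument, using the standard Poisson yield approximation (the defect density and areas below are made-up illustration values, not TSMC figures):

        import math

        # Poisson yield model: P(zero defects in area A) = exp(-D * A)
        def zero_defect_prob(D, area_mm2):
            return math.exp(-D * area_mm2)

        D = 0.001  # defects per mm^2 (illustrative only)

        # Monolithic 600 mm^2 die: one defect anywhere kills it.
        print(f"monolithic yield: {zero_defect_prob(D, 600):.0%}")   # ~55%

        # Fault-tolerant design: the same silicon split into 4 mm^2 blocks
        # that can be individually disabled and routed around.
        dead = 1 - zero_defect_prob(D, 4.0)
        print(f"capacity lost to defects: {dead:.2%}")               # ~0.40%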

    • @zoltanberkes8559
      @zoltanberkes8559 2 years ago +2

      Tesla's Dojo is not a wafer-scale chip. They use normal chip technology and put the chips on a wafer-sized interconnect.

  • @pwells10
    @pwells10 2 years ago +4

    I subscribed based off the thumbnail. I liked and commented because of the quality of content.

  • @nickarnoldi4304
    @nickarnoldi4304 2 years ago +6

    Tesla will most likely keep all ExaPODs in-house and offer a subscription to tile time.
    The Tesla Bot platform will use a Dojo subscription service for training. A VR headset with tactile gloves would allow a user to perform their very complex task, and the client can send builds up to the cloud. Tesla made Dojo compute with scalability at its core.
    Dojo is the gateway to AGI.

  • @norwegianblue2017
    @norwegianblue2017 2 years ago +3

    Anyone else remember when there was talk about hitting the ceiling on computing power with the 486 processor? This was back in the early 1990s.

    • @goldnutter412
      @goldnutter412 2 years ago

      MS-DOS 3.3... hmm, okay, easy enough... might become a coder.
      The next decade... not a chance in hell, no thank you and goodbye.

  • @Fitoro67
    @Fitoro67 2 years ago +2

    Excellent presentation! TESLA's approach in its DOJO project speaks to the point that the most complex things are made up of simple parts.
    This kind of thinking, contrary to the idea of absolute perfection, leads us to incredible potential.
    😀

  • @PeterDoingStuff
    @PeterDoingStuff 2 years ago

    Thanks for making this video, very informative about HPC

  • @JayTemaatFinance
    @JayTemaatFinance 2 years ago +1

    Great content. Funny analogies. Commenting for the algorithm. 👍🏼

  • @craigruchman7007
    @craigruchman7007 2 years ago +3

    Best explanation of Dojo I've heard.

  • @vsiegel
    @vsiegel 2 years ago +1

    Practically speaking:
    AI training normally runs on Nvidia graphics cards, which double as AI training accelerators.
    Dojo is just a fast AI training accelerator. Ideally you can simply choose to use Dojo instead of Nvidia, and your program does the same as before, but much faster.
    Alternatively, you can make your AI larger, similar to raising the resolution on a screen, so much that it runs at the same speed as before, but the AI is better at what it does.
    How it is done, and how much faster it is, is mind-blowing.

  • @citylockapolytechnikeyllcc7936

    Dumb this down one more level, and it will be comprehensible to those of us outside the labcoat set. Very interesting presentation.

  • @BreauxSegreto
    @BreauxSegreto 2 years ago +4

    Well done 👍 ⚡️

  • @konradd8545
    @konradd8545 2 years ago +34

    ASI is beyond our reach for at least 100 years, or until we have AGI (Artificial General Intelligence). AGI in itself is infinitely more complex than the very small task of learning how to drive. Obviously, I'm not saying that self-driving cars are an easy task in terms of computing, but our brain does it infinitely better, faster, and on only 20 W of energy. I love how lay people overestimate the power of HPC or machine learning and underestimate the power of our brains. It's like comparing a single light bulb to a massive star 😂

    • @vivekpraseed918
      @vivekpraseed918 2 years ago +3

      Exactly... not all supercomputers put together can rival the ingenuity of a single rat's or bird's brain (or maybe even bacterial colonies with zero neurons). Apes are nearly AGI.

    • @memocappa5495
      @memocappa5495 2 years ago +3

      Advancement here is exponential, doubling every 9 months, and that rate itself is improving. It'll happen in the next 5-10 years.

    • @dogecoinx3093
      @dogecoinx3093 2 years ago +2

      100 years? More like 6 months ago 5/3/21

    • @konradd8545
      @konradd8545 2 years ago +2

      @@memocappa5495 yeah, sure. The exact same predictions were made around 50-60 years ago. And do we have AGI (let alone ASI)? Not even remotely close. It's not about computing and crunching trillions of FLOPS; it's about being able to learn and adapt to any situation based on experience, and about a million other things. There are two main problems with developing AGI. First, human intelligence is not yet well understood; even the definitions differ from scientist to scientist. So how on earth are we naive enough to think that we can develop something similar if we don't understand our own natural intelligence? Second, we are trying to develop AGI on a Von Neumann architecture, which is a futile attempt in itself, unless we want to spend the energy of the entire universe on a 1 s simulation of a human brain 😂 I can only see neuromorphic computing as a possible candidate, but that is in its infancy. So, despite what media and lay sources say, we are nowhere near AGI. Sorry (not sorry) to burst the bubble.

    • @konradd8545
      @konradd8545 2 years ago

      @@dogecoinx3093 what are you talking about?

  • @emilsantiz3816
    @emilsantiz3816 2 years ago +2

    Excellent Video!!! A very concise explanation of what Dojo is and is not, and its capabilities and limitations!!!!!!

  • @sowjourner
    @sowjourner 1 year ago

    Amazing... exactly on my level of comprehension, without googling in conjunction with listening. Impressive. I immediately subscribed... I never subscribe to any channel. My expectation is hearing more at this perfect and engaging level. A BIG thanks!!

  • @TheRealTomahawk
    @TheRealTomahawk 2 years ago +3

    Hey, did Alan Turing use a supercomputer to crack the Enigma code? That's what this reminded me of...

    • @jabulaniharvey
      @jabulaniharvey 2 years ago +2

      Found this... A young man named Alan Turing designed a machine called a Bombe, judged by many to be the foundation of modern computing. What might take a mathematician years to complete by hand took the Bombe just 15 hours. (Modern computers would be able to crack the code in several minutes... thirteen, to be precise.)

  • @sundownaruddock
    @sundownaruddock 2 years ago

    Thank you for your awesome work

  • @erickdanielsson6710
    @erickdanielsson6710 2 years ago

    Kool beans. I worked on array processors (FPS, Floating Point Systems) in the late '70s: 12 MFLOP, 64-bit systems, hot stuff then. It would take months to solve a problem. I progressed through the years, ending my industry work with SGI/Cray, and spent the last 15 years with DoD and high-speed machines. But this is a step above. Thanks for sharing.

  • @donwanthemagicma
    @donwanthemagicma 2 years ago +3

    A lot of companies don't want to take on the risk of making a system like what Tesla is doing and have it not be adopted, because it also brings down the amount of computing something would need in order to get the calculations right. And that's only if everyone adopts it.

    • @menghawtok7837
      @menghawtok7837 2 years ago

      If Tesla cracks the autonomous driving puzzle, then the financial return would be many times the investment put in. Perhaps most companies don't have a single use case that can potentially reap such a high return, or management that's willing to put in the investment to do it.

    • @donwanthemagicma
      @donwanthemagicma 2 years ago +1

      @@menghawtok7837 most other companies do not have the people who could even begin to design a system like that in the first place

  • @gregkail4348
    @gregkail4348 2 years ago

    Good presentation!!!

  • @matthewtaylor9066
    @matthewtaylor9066 2 years ago

    Thanks, that's cool. Fantastic work on the story. Could you do more on Dojo?

  • @yulpiy
    @yulpiy 2 years ago +4

    It's N-vidia, not Nevidia, btw.

    • @gohansaru7821
      @gohansaru7821 2 years ago

      YouTube offered to translate that into English!

  • @rkaid7
    @rkaid7 2 years ago

    Enjoyed the pants flop and odd swear word. Great video.

  • @francisgricejr
    @francisgricejr 2 years ago

    Wow, that's one hella fast supercomputer!

  • @YaroslavVoytovych
    @YaroslavVoytovych 2 years ago

    The big flaw of your video: you try to introduce an AI supercomputer to the general public by focusing on the computing only, while avoiding even a brief introduction to neural networks: what they do, how they work, why they are used, what training is, why use them at all instead of just programming things, what they are good for and what they are not, etc.

  • @raphaelgarcia3636
    @raphaelgarcia3636 2 years ago

    Well explained... I understood it & I'm no computer expert by any means.. lol.. & entertaining. TY :)

  • @ModernDayGeeks
    @ModernDayGeeks 2 years ago

    Awesome video explaining Tesla's supercomputer. Knowing Tesla may integrate this into their AI work, like the Tesla Bot, means they can further improve how we understand AI today!

  • @Bianchi77
    @Bianchi77 2 years ago

    Nice video clip, keep it up, thank you :)

  • @ottebya
    @ottebya 2 years ago

    BEST summary of that white paper I have heard. Really impressive, since every other video that tries to explain it is a mess; this is such complex stuff, jeez.

  • @sandiegoray01
    @sandiegoray01 2 years ago

    Thank you. I'm only concerned about FSD at this point; as far as I can see (not a super far distance), all other computing needs are gradually being fulfilled. My association with computers in business has ended, as I'm retired. Now my only real connection with computers is trying to find one that will actually be delivered to me, one that doesn't die on me after 3 months like my last purchase, and one that combines that need with a high-end personal computer satisfying my rather complex personal computing needs in one package.

  • @LAKXx
    @LAKXx 2 years ago +2

    Elon: "Been telling people we need to slow down AI."
    Meanwhile, he builds the fastest machine learning computer known to mankind.

    • @broughttoideas
      @broughttoideas 2 years ago

      Not even close; that would be a quantum computer.

    • @nolansmith7923
      @nolansmith7923 2 years ago

      Can’t beat quantum, but quantum couldn’t be used for this purpose, so technically both of y’all are right.

    • @wesleyashley99
      @wesleyashley99 2 years ago

      Nowhere to go but forward. Scary as it may be, slowing down will only allow others to pass.

  • @markbullock3741
    @markbullock3741 2 years ago

    Thank you for the upload.

  • @Philibuster92
    @Philibuster92 2 years ago +2

    This was communicated so well and so clearly. Thank you.

  • @Davethreshold
    @Davethreshold 2 years ago +2

    I need a much finer computer mind than my own to answer this question: right now my home machines with Windows are both 64-bit. Will there come a day when 128-bit is used? I remember when 64-bit came out; it took years for my computer to have even HALF of the programs on it written for 64-bit!

  • @Human-uv3qx
    @Human-uv3qx 2 years ago +2

    Support ♥️

  • @kstaxman2
    @kstaxman2 2 years ago

    Tesla is always ahead on science and technology.

  • @miketharp4914
    @miketharp4914 2 years ago

    Great report.

  • @robertmont100
    @robertmont100 2 years ago

    Adding double precision is a ~15% area hit for the total chip.

  • @Jolly-Green-Steve
    @Jolly-Green-Steve 2 years ago +1

    7:22 Wrong. The PS5's 10.2 teraflops is measured in FP32 (32-bit width), and this chip's FP32 rating is clearly shown as 22 teraflops, which is only 2x faster, not 30-something times faster. Maybe in 8-bit and 16-bit calculations it is more than 2x the PS5, but it's definitely not 30x more like you incorrectly stated.

  • @Jesse_Golden
    @Jesse_Golden 2 years ago

    Good content 👍

  • @GlennJTison
    @GlennJTison 2 years ago

    Dojo can be configured for larger floating point formats.

  • @Leopold5100
    @Leopold5100 2 years ago

    excellent

  • @ambercook6775
    @ambercook6775 2 years ago

    It all sounded logical to me! Lol. I love your channel.

  • @MrGeorgesm
    @MrGeorgesm 2 years ago

    Bravo! It does help in understanding the evolution of Tesla's competitive advantage in FSD and related areas. Thank you!

  • @automateTec
    @automateTec 2 years ago +1

    No matter how large the computer, GIGO (garbage in, garbage out) still applies.

  • @davivify
    @davivify 2 years ago

    I feel confident that if I had gone into writing Broadway musicals,
    I'd also have been able to achieve that high number of flops.

  • @somaday2595
    @somaday2595 10 months ago

    @ 9:20 -- 1 tile, 18,000 A & a 15 kW heat load? Is something like liquid nitrogen removing the heat? Also, is that 18 kA the max, with the average more like 5 kA?
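
    A back-of-envelope check on those two quoted numbers, assuming the 15 kW is drawn at the core supply rail (an assumption, not something stated in the video):

      # P = V * I, using the figures quoted in the comment above.
      power_w = 15_000     # W per training tile (quoted)
      current_a = 18_000   # A (quoted)
      print(f"{power_w / current_a:.2f} V")   # ~0.83 V: a plausible core voltage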

  • @alexforget
    @alexforget 2 years ago

    Another thing that strikes me with Dojo is the bandwidth.
    Most computers can only achieve a small fraction of their advertised power because of bandwidth limitations.
    Dojo's interconnects between chips and wafers mean no slowdown on data access. There is probably a 10x factor in speed right there that is easily overlooked.

  • @AudiTTQuattro2003
    @AudiTTQuattro2003 2 years ago +5

    Dojo chips are designed and being made, but the ExaPOD (the Dojo chip stack) is still a year or so away from actually being supercomputer-capable. So at this point it is just speculation how it will perform, but it will probably do what they project.

  • @arthurwagar6224
    @arthurwagar6224 2 years ago

    Thanks. Interesting but beyond my understanding.

  • @Mikkel111
    @Mikkel111 2 years ago +2

    Nvidia, not Nividia.

  • @howardjohnson2138
    @howardjohnson2138 2 years ago

    Thank you

  • @gti189
    @gti189 2 years ago

    I’m an idiot and I understood this easily. Great video thank you.

  • @mmenjic
    @mmenjic 2 years ago

    15:48 If that were the case, then every first big thing in history would have resulted in major development in the field, but often that is not the case: usually the first just proves the concept, and then the second, third, and others improve, really innovate, and change things significantly.

  • @EdwardTilley
    @EdwardTilley 1 year ago

    Smart video!

  • @johntempest267
    @johntempest267 2 years ago

    Good job.

  • @kimwilliams722
    @kimwilliams722 2 years ago

    I also appreciate it when people keep their graphic language to themselves.

  • @henrycarlson7514
    @henrycarlson7514 2 years ago

    Interesting, thank you.

  • @theword7268
    @theword7268 2 years ago +1

    Good info, but my dude, that was a terrible segue to your sponsor. lol

  • @meshuggeneh14850
    @meshuggeneh14850 2 years ago

    Well done

  • @helder4u
    @helder4u 2 years ago

    Refreshing, thanx.

  • @kenleach2516
    @kenleach2516 2 years ago

    Interesting

  • @jameslmorehead
    @jameslmorehead 1 year ago

    The Mac G3/G4 was considered a supercomputer in its day due to the raw processing power available.

  • @makeworldbette
    @makeworldbette 2 years ago +3

    No, the D1 is made by TSMC, not Samsung.

  • @thegreatdeconstruction

    IBM made a tile-based CPU for supercomputers as well, in the '90s.

  • @teddygreene2000
    @teddygreene2000 2 years ago

    Very interesting

  • @russell2449
    @russell2449 2 years ago +1

    It would be tremendously ironic if, in the end, Elon Musk's Tesla, Neuralink, and Starlink combined with Tesla Robotics (coming soon imo ;?) to become the REAL Skynet, lol. Wouldn't that suck ;?) Let's hope they don't start by naming their first bots Model T-1 :?O

  • @cinemaipswich4636
    @cinemaipswich4636 2 years ago

    The nodes of the Tesla chips are quite chunky; they appear to be of an era from 6 or 7 years ago. The fabs that made 25/30 nm chips back then still exist, and they are mature and refined to a level where perfect chips can be achieved, but with a fairly high bin rate. Since only hundreds or perhaps a few thousand chips are required for a neural network, the prospects are good.

  • @iamthetriplet
    @iamthetriplet 2 years ago

    😂😂😂Great video!!!

  • @larryroben1683
    @larryroben1683 2 years ago

    GOD *** THE AUTHORITY & CREATOR ****

  • @Clint_the_Audio-Photo_Guy
    @Clint_the_Audio-Photo_Guy 3 months ago

    So, how can I build something like this in my spare bedroom? Maybe it can tell me exactly what I should cook for dinner and what movie to watch? J/K

  • @tireman91
    @tireman91 2 years ago

    Beautiful! Just want to remind everyone... DOJO 4 DOGE!