RISC vs CISC Computer Architectures (David Patterson) | AI Podcast Clips with Lex Fridman

  • Published Jun 28, 2020
  • Full episode with David Patterson (Jun 2020): • David Patterson: Compu...
    Clips channel (Lex Clips): / lexclips
    Main channel (Lex Fridman): / lexfridman
    (more links below)
    Podcast full episodes playlist:
    • Lex Fridman Podcast
    Podcasts clips playlist:
    • Lex Fridman Podcast Clips
    Podcast website:
    lexfridman.com/ai
    Podcast on Apple Podcasts (iTunes):
    apple.co/2lwqZIr
    Podcast on Spotify:
    spoti.fi/2nEwCF8
    Podcast RSS:
    lexfridman.com/category/ai/feed/
    David Patterson is a Turing award winner and professor of computer science at Berkeley. He is known for pioneering contributions to RISC processor architecture used by 99% of new chips today and for co-creating RAID storage. The impact that these two lines of research and development have had on our world is immeasurable. He is also one of the great educators of computer science in the world. His book with John Hennessy "Computer Architecture: A Quantitative Approach" is how I first learned about and was humbled by the inner workings of machines at the lowest level.
    Subscribe to this YouTube channel or connect on:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

Comments • 258

  • @ShovelShovel
    @ShovelShovel 3 years ago +60

    Lex is a good interviewer; pretty sure he knew a lot of the stuff David was explaining, but the way David explains it is really good for viewers who aren't as well versed.

    •  3 years ago +4

      And best of all, he almost never interrupts and is never trying to take the spotlight.

    • @asdfg3421
      @asdfg3421 3 years ago

      Yeah... He was talking to Lex like he was in his sophomore year.

  • @hiimjesus123
    @hiimjesus123 3 years ago +76

    Lex, you should try to have a security engineer like Chris Domas on. The instruction set architecture discussion gets so interesting when you consider how other people (ab)use the operating system to do what they want.

  • @DaSkuggo
    @DaSkuggo 3 years ago +232

    I didn't know Bryan Cranston knew this much about CPU architecture.

  • @andy16666
    @andy16666 3 years ago +13

    Great interview with a great guest.

  • @taaaaaaay
    @taaaaaaay 3 years ago +65

    7:19 “These high level languages were just too inefficient”
    First year uni me would be crying if I heard C was a high level language

    • @autohmae
      @autohmae 3 years ago +7

      We can talk all day about low- or high-level languages, but these days we can run Windows 2000 (which was written in C++ and compiled to x86) in JavaScript in the browser on an ARM device, and at half the speed of bare x86 hardware, or faster.

    • @Conenion
      @Conenion 3 years ago +2

      @@autohmae
      The Windows NT _kernel_ is written mostly in C, with some assembly as needed, and maybe some C++ for newer parts. Everything you see, the GUI part, is written in C++ and C#.

    • @drewmandan
      @drewmandan 3 years ago +4

      I have a disagreement with people who claim that C is a "high level language". It's certainly human-readable, but that's an aesthetic choice. They could have renamed all the keywords to things more esoteric and that wouldn't change its "level". Instead, I think the important thing is how easy it is to draw a map between C instructions and machine instructions, and it's almost 1-1. Not only that, but a C programmer needs to actively think about the machine instructions in a way that a Java or Python programmer does not. So perhaps there should be a separate category for C or C++, like "semi-high level" or "medium level".

    • @autohmae
      @autohmae 3 years ago +4

      @@drewmandan C was considered one of the first high-level languages after assembler, so that makes all the even higher-level languages also high-level :-) Maybe something like "super-high-level language" would be a good fit? There are other ways you can talk about languages: Python, like JavaScript, Bash and PowerShell, is considered a scripting language, which implies they are 'super higher-level' languages in practice (my guess is Lua still fits that category too). Another way to distinguish the languages you mentioned is that Java and Python both have a runtime, which usually means they work with bytecode; Python (.pyo), PHP and Java all do that, and JavaScript does something similar at runtime (WebAssembly is very similar to the bytecode for JavaScript). Rust, C, C++, etc. are also often called "system languages".

    • @autohmae
      @autohmae 3 years ago

      @@Conenion yes, you can run it unmodified in the browser with the right Javascript code.

  • @adriangibbs
    @adriangibbs 1 year ago +1

    Brilliant conversation. This just closed the remaining knowledge gap I had when it comes to understanding how modern hardware and software work together.

  • @TNTsundar
    @TNTsundar 3 years ago +19

    Thinking that the fundamental instructions running on my phone's processor were designed by this guy puts a smile on my face. Great video!

  • @Mvobrito
    @Mvobrito 3 years ago +31

    RISC is faster, more energy efficient and easier to design.
    CISC uses less memory and is simpler for compilers.
    It made sense to use CISC in the 1980s, when memory was much more expensive, programming languages were lower level and compiler technology was not yet well established.
    Nowadays, memory is no longer a limiting factor and modern compilers/interpreters can turn high-level languages into machine code very easily.
    The priority now, with the mobile device revolution, is to design faster, less energy-consuming processors, and RISC is the way to go.
    In addition, as RISC chips are simpler to design, the market's transition to RISC would greatly increase competition in this segment, which has been quite stagnant in the last decade because of the Intel/AMD duopoly.

    • @Kpopzoom
      @Kpopzoom 3 years ago +4

      The only difference is in the processor's decoder - less work to do with RISC, but more machine cycles to accomplish the same task.
      With multi-thread, multi-core processors (like AMD Ryzen), CISC is still the best, especially for high-powered computers used for gaming, video editing, etc.

    • @PuntiS
      @PuntiS 3 years ago +1

      For low-power application chips, though, which see much more use in products worldwide, RISC-based processors are being used increasingly and have been gathering much attention in the past couple of years.
      In this case, prioritizing fewer clock cycles per instruction means less clock activity to carry out processes, which in turn translates into less consumption.
      And low consumption is one of the hot words going around, along with security, cloud and ML.

    • @isodoublet
      @isodoublet 3 years ago +1

      "RISC is faster, "
      Empirically false.

    • @lubricustheslippery5028
      @lubricustheslippery5028 3 years ago

      Memory access time is a big factor for a modern CPU. A cache miss takes about 200 cycles, so an instruction set that minimizes the number of necessary memory accesses could improve performance.

  • @rolfw2336
    @rolfw2336 3 years ago +2

    Interview ends kind of abruptly, but I really enjoyed it! You bring out Dr Patterson’s talent for explaining these concepts to a wide audience. Wow, he must have been great in the classroom.

  • @d3ly51d
    @d3ly51d 3 years ago +2

    In university I took two courses on computer architecture where we studied the entire book, and it was my favorite set of lectures from the entire CS curriculum. The book gives you a wonderful insight into how computers and compilers actually work and how various types of speedup are achieved and measured. In the exam we had to unroll a loop in DLX among other things, and to calculate the CPI speedup. I'm so glad to actually see one of the authors behind this amazing book.

    • @abdullahsiddiqui1065
      @abdullahsiddiqui1065 1 year ago

      literally had a midterm today based on one of his books lol

    • @byteme6346
      @byteme6346 5 months ago

      Jim Keller is a journeyman computer architect. Patterson has been slapping the monkey in academia his whole life. Patterson has been working on an ISA for forty freakin' years.

  • @Leeszus
    @Leeszus 3 years ago

    Amazing interview!

  • @sahilchoudhary3002
    @sahilchoudhary3002 2 years ago

    Was looking for a video on MIPS and stumbled across the great Lex

  • @ricosrealm
    @ricosrealm 2 years ago

    I used his book in college... really enjoyed it.

  • @cafeinomano_
    @cafeinomano_ 2 years ago

    I've seen this video like 15 times, I love RISC and its philosophy.

  • @El.Duder-ino
    @El.Duder-ino 1 year ago

    Very well explained difference between these 2 fundamental CPU architectures.

  • @danielwait8555
    @danielwait8555 3 years ago +2

    I love these discussions on Computer Systems! Thanks Lex

  • @albeit1
    @albeit1 3 years ago

    Many small things flow through a system quicker. Works with web requests too: with web requests there are more opportunities for caching, because it's less likely that individual small responses have changed than one monolithic response.

  • @akemp06
    @akemp06 3 years ago

    Loving all your interviews! Great questions, and good for an audience that doesn't know all the details. What I don't understand is your suit. Why are you wearing a fancy outfit when your table looks like a mess? If you want a style element in the show, hide the cables under the table!

  • @sirousmohseni4
    @sirousmohseni4 3 years ago

    Excellent video

  • @sandraviknander7898
    @sandraviknander7898 3 years ago +1

    Awesome interview!
    One thing that I have always wondered about: AVX instructions. Sure, you might have to use an intrinsic for the compiler to use them, but they're a really great way to parallelise. How would those instructions compare to a RISC alternative? You touched a little bit on it at the end, but the answer was a little short for such an important part.

    • @kynikersolon3882
      @kynikersolon3882 3 years ago

      There is a vector extension to RISC-V.

    • @mikafoxx2717
      @mikafoxx2717 4 months ago

      Complexity of instructions doesn't make the difference between CISC and RISC. The basic idea is that RISC has the same size of instruction for everything instead of variable length, the instructions all take a similar time to compute, and RISC uses a load/store architecture - you load the registers with the needed information, then you execute the instructions that operate on them, then you store the required registers back to memory. With CISC, like x86, you can have an instruction of variable length up to 15 bytes, so it'll keep pulling in more information for that one instruction: the contents for the registers, the task to operate on the registers, and then where to put the registers, all in one single instruction. With CISC it could be a simple short instruction like xor a, a, or addsubps... which I don't even want to explain, because I don't fully understand it.

  • @vivichellappa7645
    @vivichellappa7645 3 years ago +12

    Around 8:00, the good professor starts talking about how operating systems and even application programs were written in assembly language to achieve speed, and how, if compilers could have been made smart enough to translate from various languages into complex instructions, that would have been wonderful, but RISC makes it easier. And then he goes off onto Unix, C, C++, etc.
    I now understand why for the last 40 years we have been getting programmers pretty much illiterate about computer architecture.
    Does the good professor know about the Burroughs B5500, B6500, B7500 series and their follow-ups? Those machines had a push-down stack architecture so that they could effectively use Algol as the language in which the operating system was implemented. As opposed to C, which is a high-level machine-oriented language, you had a true high-level language for writing operating systems. And those machines were equally efficient at running Fortran, COBOL and other such languages which did not need a stack for their statically declared variables.
    And if you want register-to-register instructions only (this is claimed to be an essential feature of RISC computers) as opposed to the IBM 360's register-to-register, register-to-memory and memory-to-memory types of (CISC) instructions, then I can tell you that the CDC 3200, dating back to the early 1960s, had that type of instruction set. Every arithmetic instruction meant that the programmer painfully accessed memory to load the two operands into two registers, performed the arithmetic operation (add, subtract, multiply, divide) on the registers and then stored the result back in main memory. What a pain for the programmer who had to program that beast in assembly language! It is the speed increases in hardware over 30 years that enabled a RISC computer to perform fast. If one implemented the CDC 3200 using today's VLSI technology, I am sure it would beat any RISC processor in performance.
    Write a nice proposal to get a grant, get a bunch of PhD students to design the chip and write compilers for C or Smalltalk or C++, and you have a nice decade-long run of research publications and more research funding. That is what RISC was about.

    • @lemonsavery
      @lemonsavery 3 years ago +2

      I'm not sure why Lex doesn't know about RISC and CISC. Having just finished my undergrad CS degree, one of my later classes had us make a rudimentary processor from scratch with gates; we coded a little in assembly and converted it into machine language by hand, and we learned some of the history of RISC ARM / CISC x86 as well.
      Did I just get an abnormally competent professor?

  • @benschulz9140
    @benschulz9140 3 years ago

    Would be neat for a GAN to make a game of writing instruction sets and compilers.

  • @jimreynolds2399
    @jimreynolds2399 3 years ago +6

    I remember the RISC/CISC debate/battle back in the 80s. I always thought that CISC was better and all the hard work could be done by the compiler - which is a piece of software - so I felt that CISC would win out. When Sun started moving away from SPARC I was surprised and puzzled, but I read about commercial arguments that started to change the economic argument in favour of RISC, though I always felt it was like VHS beating Betamax again. I'm surprised to hear about RISC-V now. I'm not convinced things are sufficiently different to justify switching from RISC to CISC for everything, but there are bound to be applications where CISC is way better. Interesting times.

    • @Eugensson
      @Eugensson 2 years ago +1

      In the end, CISC CPUs like x86-64 decompose their long instructions into a set of many simple micro-operations, so effectively modern Intel CPUs are in some sense RISC in disguise. On the other hand, ARM has many very complex instructions which could be considered CISC in nature. Same with RISC-V: although the instructions are simple and short, at a certain stage under the hood they are fused into more complex ones for efficiency purposes.
      The RISC vs CISC debate is irrelevant these days. It is more about: can instructions operate directly on memory (x86), or does one have to use load/store (ARM/RV); and does the CPU rely on fixed-length instructions (basic RISC-V and basic ARM) or on variable-length ones (x86, ARM Thumb, the RISC-V C extension)?

    • @FLMKane
      @FLMKane 4 months ago

      Ever since the mid 90s, x86 processors have used RISC-like micro-operations internally, and there's an x86 front end that translates the binary into that RISC-like form.

    • @mikafoxx2717
      @mikafoxx2717 4 months ago

      @@FLMKane And this is how Intel can patch some issues with certain instructions for security purposes: they can change what the CPU does internally for each instruction. It's almost like an x86 emulator, in a way - just with a hardware-accelerated instruction decoder.

  • @11vag
    @11vag 3 years ago

    What an interesting interview.

  • @sergioropo3019
    @sergioropo3019 3 years ago

    It is fascinating how this guy comes up with instant, perfectly structured answers.

  • @k4vms
    @k4vms 3 years ago

    Talk about the CRISP microprocessor and DEC (Digital Equipment Corp), IBM, Apple, Motorola 68K processors, QuickDraw, WNT, VMS, OpenVMS, OPS5, MVS, VM, VAX, ALPHA, AIX, POWER, z/OS, System p, System z, System x, System i, etc.
    Ricky from DEC and IBM and Apple

  • @msalvi6302
    @msalvi6302 3 years ago +3

    Have you heard about MIPS? The kids at Berkeley could have used that, but they wanted to look cool and started RISC-V

    • @32gigs96
      @32gigs96 3 years ago

      Problem is, MIPS is proprietary.

  • @hassanjaved4091
    @hassanjaved4091 3 years ago

    Wow, great clip from the guy whose textbooks we read in uni

  • @daysofgrace2934
    @daysofgrace2934 10 months ago

    Even in the late 80s, computer games on the Commodore Amiga & Atari ST were written in assembler...

  • @atomspalter2090
    @atomspalter2090 3 years ago +1

    nice video!

  • @peters972
    @peters972 3 years ago +2

    This guy has one brother who makes the highest quality crystal meth, another brother who commands the starship Enterprise, and he himself provided the best CPU design theory. Pretty talented family.

  • @ErwinFranzR
    @ErwinFranzR 3 years ago

    I love these tech radicals.

  • @daysofgrace2934
    @daysofgrace2934 10 months ago

    The Acorn Archimedes was a RISC computer, and it failed commercially against the Amiga & ST, but its CPU, the Acorn RISC Machine (ARM), went on to conquer the world. Should also mention MIPS...

  • @Rudrazz
    @Rudrazz 3 years ago

    Nice talk

  • @petergoodall6258
    @petergoodall6258 3 years ago +1

    Some folks got their Smalltalk VM to be resident in the RISC CPU cache.

  • @Mbd3Bal7dod
    @Mbd3Bal7dod 3 years ago

    They jumped on the open-source instruction set

  • @briancase6180
    @briancase6180 2 years ago +1

    Basically, the RISC opponents didn't understand how optimizing compilers work and what they are capable of; many of them also didn't understand what high-performance processor implementation really requires. The argument basically boils down to that. One thing that Dave didn't get to is that CISC computers tend to have instructions that execute *more slowly* than a sequence of simpler instructions...from *their own* instruction set. This was very true of the Digital Equipment Corporation (DEC) VAX machines. In some ways, the VAX was the CISC-y-est CISC. If you understand hardware and compilers (and software frameworks), you understand why RISC makes sense and why you would never choose to design a CISC architecture from scratch. Even the original ARM architecture was not really a RISC. ARM v8 and v9 are much simpler.

    • @mikafoxx2717
      @mikafoxx2717 4 months ago

      One good way to think about it is the Java runtime environment, where you compile Java to bytecode for a virtual machine, which is then converted into the actual underlying instruction set on the fly. The x86 CPU is doing the same thing under its hood, converting its instructions into simpler micro-operations.

  • @LyubomyrSemkiv
    @LyubomyrSemkiv 1 year ago

    I still don't get the main question: why doesn't having complex operations in the CPU work faster? Hardware must be faster than software, so calculating SHA-256 directly in the CPU must be faster than running primitive instructions. The only thing I can imagine is that the silicon area spent on logic for translating from CISC to some microcode could be used for more processing.

  • @drmosfet
    @drmosfet 3 years ago +1

    He forgot the Intel 8088, the 8-bit-bus version of the 8086. The interview cut off just when it was getting interesting. I'd like to know what he thought about the Intel iAPX 432; it seemed to have so much potential?

    • @Conenion
      @Conenion 3 years ago +2

      > Like to know what he thought about Intel iAPX 432, it seems to have so much potential?
      The iAPX 432 was a total disaster right from the beginning. The idea was to have even higher-level instructions than with CISC, making the processor even more complex. What do you expect one of the RISC inventors to think about such a braindead idea?

  • @maxfmfdm
    @maxfmfdm 3 years ago +4

    As someone who is pro-CISC for economic and software-development-ecosystem reasons, it's important for me to hear the logical reasons and arguments for the merit of the RISC architecture. Thank you.

  • @michaelrenper796
    @michaelrenper796 3 years ago +1

    The RISC-CISC wars are long over. Neither side won; rather, the whole issue was made obsolete by long pipelines and speculative execution. For simple CPUs, which run slowly but are power-optimized, simple instruction sets usually win. For fast CPUs, minimizing code size, and therefore going a bit more CISCy, wins. All modern instruction sets are hybrids, and all have some form of microcode being fed into the decoding pipeline.

  • @TheVincent0268
    @TheVincent0268 3 years ago +4

    I can remember that the Acorn Archimedes had a RISC processor.

    • @DavidRutten
      @DavidRutten 3 years ago +1

      And its operating system was RISC OS. You could run that CPU for hours and it would barely be warm to the touch.

  • @thefreethinker4441
    @thefreethinker4441 3 years ago +4

    SHAKTI is based on this RISC ISA. Good going, team Shakti. Thanks to Lex for bringing knowledge to the world. Russian legend!

    • @alexben8674
      @alexben8674 3 years ago +1

      Technically RISC is more feasible these days than in older times, because processors were slower then, and running those complex programs would have been time-consuming and not so viable; but now we have high-frequency processors which solve those problems. So RISC is the future, and CISC will be history in the coming future, until some kind of radical change happens in the architecture.

    • @ciarfah
      @ciarfah 3 years ago

      @@alexben8674 I think it will flip-flop back and forth. CISC makes more sense when reducing transistor size or memory access latency becomes prohibitively expensive.

  • @paradox_695
    @paradox_695 1 year ago

    10:40 Good sir, if those are inefficient languages, then, still in the context of compiled languages, which ones are efficient?

  • @livingthehardlife
    @livingthehardlife 3 years ago +41

    HEISENBERG

    • @jojojorisjhjosef
      @jojojorisjhjosef 3 years ago +2

      This dude is big; Walter White is more the David Patterson of chemistry.

  • @stabgan
    @stabgan 3 years ago +2

    I have been following Lex since he had 2k connections on LinkedIn. He has also replied to me multiple times in the past. He's my idol. Honestly the apex of male peak performance.

  • @intheshell35ify
    @intheshell35ify 2 years ago

    This is a gold mine for students. Mind your citations, children!!

  • @dirkbastardrelief
    @dirkbastardrelief 9 months ago +1

    Bryan Cranston is way more tech-savvy than I suspected

  • @jaydunstan1618
    @jaydunstan1618 3 years ago

    Brilliant.

  • @macintush
    @macintush 3 years ago +10

    "RISC architecture is going to change everything"

    • @ancestralrocha7709
      @ancestralrocha7709 3 years ago +3

      RISC is good

    • @ogremgtow990
      @ogremgtow990 3 years ago +3

      I heard the same thing back in '96. A few months later everyone wanted NT 4 for network security. All the RISC workstations and servers would not run NT 4, and RISC died.
      I gather history is repeating itself again?

    • @ashishpatel350
      @ashishpatel350 3 years ago

      @@ogremgtow990 The problem with RISC and ARM chips is that they are very basic and need to be redesigned for certain workloads. So if your workload changes, the chips can't run the software 😂. Software has the ability to move much faster than hardware.

    • @Mvobrito
      @Mvobrito 3 years ago

      @@ogremgtow990 Not with Apple going for it

    • @hailtothechief7181
      @hailtothechief7181 3 years ago

      14:29 Sounds like RISC did change everything and Intel adapted.

  • @RyanMitchell-yy4no
    @RyanMitchell-yy4no 1 year ago

    As an early career web developer, CISC architecture sounds like an absolute nightmare.

  • @stevecoxiscool
    @stevecoxiscool 3 years ago +15

    Worked for Compaq in the mid 80's and remember these arguments. Back then x86 = Intel = PC = DOS = "inexpensive" computer. Yes, Compaq had SGI workstations to help out designing the x86 boxes being sold. It wasn't about which architecture was technically superior (everyone knew THAT); it was about which chipset was the cheapest. Just ask Sun Micro, Silicon Graphics, DEC, HP, NeXT. CISC/RISC is a dead argument in the multi-core universe we live in.

    • @Conenion
      @Conenion 3 years ago +8

      > CISC/RISC is a dead argument in the multi-core universe we live in.
      I don't see why, since CISC vs RISC has nothing to do with single- or multi-core.
      It is rather a dead argument because x86 is RISC internally, and many RISC chips which started as pure RISC designs have had more and more instructions and complexity added to them over time.

    • @250txc
      @250txc 3 years ago

      Yep, Intel chips run UNIX too.

    • @neonlost
      @neonlost 3 years ago +1

      Lol, this comment won't age well... this decade will be the decade of RISC; CISC's days are numbered

    • @TheCablebill
      @TheCablebill 3 years ago

      The distinction is arbitrary but the discussion is interesting.

  • @andytroo
    @andytroo 3 years ago +2

    If RISC is better than CISC, then why does "just in time compiled" code work so well? The thing that can best understand how to execute a complex instruction should be the CPU. The breakdown of a complex instruction into micro-instructions is what happens inside a CISC CPU, so why is this less efficient than the compiler doing it up front into RISC instructions?

    • @complexacious
      @complexacious 3 years ago +2

      It was touched upon in the video, but if you don't know the answer already it's easy to miss. The genuine CISC instructions come at a higher penalty through the translation layer. In more detail a compiler that targets modern x86 will greatly favor the "CISC" instructions which are actually 1:1 with the hidden internal RISC instructions. I know, you're thinking "but isn't it just coming up with the same instructions? Why is it slower?" The CPU just has to do more work to get usable instructions out of these CISC instructions and unlike a JIT compiler it doesn't have 16gigs of RAM to store the results for next time. There's also the general efficiency of the instructions in particular. These instructions tend to operate on specific registers, so software that uses them has to use EXTRA instructions to move data from RAM to registers and back again to make use of them. With a 386 this was acceptable since all instructions had that limitation in some fashion, but on a modern version of the ISA you can save all that overhead by using the simpler instructions that can operate on the registers that make the code simpler directly. I'm sure many an Intel engineer has argued for the moving of the CISC decoder to software and exposing the internal RISC to the outside to save space on the die, save power, lower heat, etc. but for business reasons Intel doesn't want to do that.

    • @rolfw2336
      @rolfw2336 3 years ago

      It's a legit question, but JIT is still a kind of compiling. I think Dr Patterson argues that the compiler will better match the available instructions of RISC than CISC.

  • @viacheslavromanov3098
    @viacheslavromanov3098 3 years ago +26

    The Heisenberg guy is telling the truth, listen to him 😂 Hope it won't end up like in the movie...

  • @sharonneedlesfreedomsnotfr813

    -A bunch of computer nerds involved in violent debate... many mothers got the call "Mom, I'm gonna need you to pick me up late"-

  • @beameup64
    @beameup64 2 years ago +1

    "machine language" was the term I was taught in data processing in the '70s. Apple will be using RISC in all their products.

  • @bobweiram6321
    @bobweiram6321 3 years ago

    Ironically, ARM added Jazelle to execute Java bytecodes natively.

  • @Scorch428
    @Scorch428 3 years ago +12

    "RISC is gonna change everything."
    "Yeah, RISC is good."
    - Hackers, 1995

  • @GL-Kageyama2ndChannel
    @GL-Kageyama2ndChannel 3 years ago +13

    RISC-V?

  • @ohdude6643
    @ohdude6643 1 year ago +1

    Give him a goatee, and this man is Heisenberg.

  • @nickharrison3748
    @nickharrison3748 3 years ago

    I personally like the term "opcode", or operation code, rather than calling it an instruction or instruction set.

  • @jasonzhou6437
    @jasonzhou6437 3 years ago +2

    My textbook author ;) great book

    • @Yukke91
      @Yukke91 3 years ago +2

      Haha, I was like "Hey, I know that book!"

    • @mika274
      @mika274 3 years ago

      He also mentioned his friend John Hennessy.

  • @petergoodall6258
    @petergoodall6258 3 years ago

    One man’s software is another man’s hardware

  • @eliasdat
    @eliasdat 3 years ago +6

    Heisenberg actually didn’t die, he just switched to manufacturing processors

    • @bendover4728
      @bendover4728 3 years ago +1

      Now I see where Malcolm got his genes from...

    • @d3ly51d
      @d3ly51d 3 years ago

      He's now in the microprocessor empire business

  • @wolfganglava1511
    @wolfganglava1511 3 years ago +1

    CISC is not secure: it's easy to put a backdoor in it, and hard to audit a CISC platform.

  • @unaphiliated5090
    @unaphiliated5090 3 years ago

    HAL says hello

  • @denni_isl1894
    @denni_isl1894 3 years ago +1

    Sophie Wilson.

  • @akhilaryappatt7209
    @akhilaryappatt7209 3 years ago

    But I'm so nostalgic about x86, can't let it go.
    And I somehow started disliking mobile devices with ARM chips.

  • @WahlanSahlan1982
    @WahlanSahlan1982 4 months ago

    The guy who literally wrote the book on CPU design.

  • @prithviraj-mu8ox
    @prithviraj-mu8ox 3 years ago +1

    Meth to silicon?

  • @TheLkdude
    @TheLkdude 3 years ago +3

    If you look at the current Intel architecture, it is not a pure CISC processor; it is a hybrid (czcams.com/video/NNgdcn4Ux1k/video.html) 14:40. It has a CISC wrapper around a RISC core.

    • @maxmuster7003
      @maxmuster7003 3 years ago

      Did it start with the Pentium architecture?

    • @autohmae
      @autohmae 3 years ago +1

      @@andrewdunbar828 It is in this clip

    • @Gabriel38196
      @Gabriel38196 3 years ago +3

      That's what I don't get: these days nothing is pure RISC or CISC. We have heterogeneous x86 CPUs, microprogrammed ARM chips and every fucking thing in between. And I love them all.

    • @Conenion
      @Conenion 3 years ago

      @@maxmuster7003
      > It start with the Pentium architecture?
      Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).

    • @Conenion
      @Conenion Před 3 lety

      @@andrewdunbar828
      > consensus seems to be that the RISC inside cisc analogy is badly flawed.
      It is a simplified explanation, sure, but certainly not "badly flawed".
      > but too far off the mark if you know how CPUs work.
      Then it would have been explained in this way at 14:35 in the video.

  • @LoneWolf-wp9dn
    @LoneWolf-wp9dn Před 3 lety

    Damn Mr White you know about computers too!?

  • @julianskidmore293
    @julianskidmore293 Před rokem

Prior to university, of course, the vast majority of kids or students who were into computers (which at the time meant 8-bit home computers) had almost no access to the Hennessy and Patterson RISC research. All I knew was from articles in the mid-1980s on the Inmos Transputer and the Acorn RISC Machine.
    archive.org/details/PersonalComputerWorld1985-11/page/136/mode/2up
    So, we were properly introduced to RISC only at University (in my case UEA, Norwich) as part of the computer architecture modules. So, normally, I've understood RISC to be a performance or energy optimisation trade-off. That is, the question is how to get the most work out of a given set of transistors in a CPU, and what RISC does is trade under-utilised circuitry (e.g. for seldom used instructions) for speed. In a similar sense, complex decoding represents an under-utilisation of circuitry (which adds to propagation delays, thus limiting pipeline performance) and because microcode is effectively a ROM cache: ISA ==> Microcode ==> Control Signals, it's better to use the resources to implement an actual cache or a larger register set. Etc.

  • @Cuplex1
    @Cuplex1 Před 3 lety

Hmm, 6:00. That's not how it works. Most of the extra instructions added in the last 20 years have been accelerators only, for example SIMD (SSE4), or a more obvious example, the AES instruction set, which makes encryption and decryption about 20 times faster. All modern heavy compute operations on Windows rely on modern compilers with support for a few optimized instructions like AVX2. You also have pipelining and branch prediction making the x86 side much more attractive. The instruction set war between AMD and Intel has ended, but 20 years ago we had competing and completely different instruction sets like 3DNow!.
12:30, that's BS if you know programming. The very efficient instruction sets are widely used even by higher-level languages. I have been a computer engineer/developer for over 15 years, so what do I know. 😎 I think the majority was right back then if we look at where we are now.
General compute is never as fast as ASICs, which is basically what advanced instruction sets are.

    • @websnarf
      @websnarf Před 3 lety +2

Yeah, this discussion makes it seem like Patterson has not looked at a serious CPU architecture in 25 years. His arguments may have made sense against the 80386 or Motorola 68K, but even by the time of the Pentium (P54/P55c) the claim that "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong. Today, there is no such thing as a "high performance RISC"; the only way to achieve performance is to get a multi-core x86. RISC has been relegated to low-cost/hardware-integrated solutions.

    • @Conenion
      @Conenion Před 3 lety

      @@websnarf
      > but even by the time of the Pentium (P54/P55c) the "many RISC instructions are faster than the equivalent CISC instruction" was demonstrably wrong.
      You have obviously never heard of the Alpha processor.
      > the only way to achieve performance is to get a multi-core x86.
x86 has translated from CISC to RISC-like instructions internally since the Pentium Pro in 1995,
which avoids long RISC instruction sequences for simple instructions like INC.

    • @Conenion
      @Conenion Před 3 lety

> 12:30, thats BS if you know programming.
No, it's not. For a compiler it is still very difficult to map a code snippet to a special instruction that does the same thing. I doubt that a compiler would replace C code that does AES encryption or decryption with an AES instruction, to take your example.

    • @Conenion
      @Conenion Před 3 lety

      @@juliuszkopczewski5759
      Sure, I know. That is exactly the reason, why you add instructions to the instruction set without caring about the compiler. But this is not "general purpose" code, and for such a code the argument from Prof Patterson is still true to this day. Albeit a bit less so, because compilers are smarter today, than they were 30 years ago.

  • @maxmuster7003
    @maxmuster7003 Před 3 lety +3

Why is RISC not so efficient at accessing RAM?

    • @MagnumCarta
      @MagnumCarta Před 3 lety +8

Compiled instructions have to be stored in memory before being loaded into the CPU. CISC systems can narrow down the amount of RAM utilized by keeping the number of bytes needed to store compiled instructions small. The biggest bottleneck between the CPU and RAM is the MMU (Memory Management Unit), which has a fixed size in how many bits it can transfer in any given clock cycle. Since CISC can use less memory, it can load more information in the same unit of time as a RISC system.
A good example of this is the mult instruction to multiply two values. In RISC, you would need a separate load instruction for each of the values you want to multiply, whereas in CISC you could fit all of this into the size of one MMU transfer (so for 64-bit this would be stored in only eight bytes of memory).
So CISC reduces the number of instructions, whereas RISC reduces the number of clock cycles per instruction (only one instruction per clock cycle). The bottleneck is the bandwidth of the MMU.
      That's my understanding of it but please keep in mind I come from the software development perspective not the hardware development perspective. I could be wrong about my interpretations.

    • @maxmuster7003
      @maxmuster7003 Před 3 lety

@@MagnumCarta Thanks, I begin to understand. The Intel Core 2 CPU can execute up to 4 integer instructions at the same time, if the instructions are pairable. I think this works with one complex and three simple instructions. I never used a compiler, but I am familiar with assembler on the Intel 80386.

    • @povelvieregg165
      @povelvieregg165 Před 3 lety +6

      @@maxmuster7003 It isn't really about how many instructions you can execute in parallel but about how quickly you can pull instructions into the CPU. A simple example would be, say a line of C code, may compile into a single CISC machine code instruction. While on RISC it may turn into 4 instructions. However that single CISC instruction may take 4 clock cycles to execute, while each one of the RISC instructions take 1 cycle. Hence in principle there is no performance difference.
      However this means that for a larger program the RISC processor will fill up its CPU cache faster than the CISC processor. That is why RISC processors tend to have larger caches.
However it is apparently not as bad as it sounds for RISC. RISC processors avoid a lot of load and store instructions by having many more registers than CISC processors. As far as I understand, a good compiler will be able to arrange things so that RISC doesn't need that many more instructions than CISC.
      Anyway that is my understanding. I am also a learner here. I stopped caring about RISC and CISC ever since Apple switched to intel. But it is becoming a more interesting topic again.

    • @Conenion
      @Conenion Před 3 lety +2

      @@povelvieregg165
      > However it is apparently not as bad as it sounds for RISC.
      Also because of instruction caches having a high hit rate.

    • @Conenion
      @Conenion Před 3 lety

      @Max Muster
      You can combine both worlds. ARM for examples does this with the thumb instruction set. Those are "compressed" short RISC instructions that are expanded to their long versions during instruction fetch.
      In essence, x86 does this as well. It wasn't planned, though.

  • @willwill2548
    @willwill2548 Před 3 lety +6

For a moment I thought this was a Breaking Bad episode...

  • @mysticalsoulqc
    @mysticalsoulqc Před 3 lety

I shall not add... lol, too touchy of a situation.

  • @ChitranjanBaghiofficial
    @ChitranjanBaghiofficial Před 3 lety +1

hey, the Breaking Bad character is back, nice to see you professor

  • @shableep
    @shableep Před 3 lety +14

    With Apple switching all of their computers over to RISC, and RISC running inside almost all tablets and cellphones, it sounds like RISC won.

    • @maxmuster7003
      @maxmuster7003 Před 3 lety +2

I am not familiar with ARM CPUs, so I use the x86 DOSBox emulator on my Android tablet for x86 assembly. I do not like Apple, with or without CISC.

    • @brent56and1
      @brent56and1 Před 3 lety

      Especially seeing that Intel and AMD are constantly trying to fix newly discovered speculative execution attack vulnerabilities.

    • @stevecoxiscool
      @stevecoxiscool Před 3 lety +1

      I am so proud of you RISC !!!, It's been 40 years and you finally did IT !!!!

    • @lb5928
      @lb5928 Před 3 lety +3

@@andrewdunbar828 Wrong, x86-64 is owned by AMD and it runs microcode that can implement RISC-like routines, not RISC itself. That makes CISC CPUs extremely versatile, with vast capabilities.

    • @lb5928
      @lb5928 Před 3 lety +2

@@stevecoxiscool RISC didn't do anything; the CISC-based market share in terms of revenue is like 90% of the computing market.

  • @mmenjic
    @mmenjic Před 3 lety

why couldn't we have something close to a universal, or even dynamic or reprogrammable, instruction set instead of 17 different hidden and fixed sets?

  • @drewmandan
    @drewmandan Před 3 lety +33

    Wow, I didn't know Walter White knew so much about microprocessors.

  • @LabyrinthMike
    @LabyrinthMike Před 3 lety +2

But, but, but, isn't memory speed your limiting factor? If you execute more instructions and you are waiting on the memory to serve them, wouldn't that make it slower? Have you accomplished your goal? I don't really want to debate this here; I'm just saying that the Intel Itanium wasn't a successful microprocessor. Macs ran for a long time on PowerPC chips and now run on Intel. I just don't see that RISC is commercially successful. Perhaps it is a better microprocessor design, but then why aren't Macs still using them? I've been in the computer biz for a long time. Written a bunch of assembly language. I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.

    • @Conenion
      @Conenion Před 3 lety +1

      > itanium wasn't a successful microprocessor.
      Yep. It was a giant failure. But Itanium was VLIW not RISC.
      > it is a better microprocessor design, but then why aren't Macs still using them?
      Apple just announced to use ARM based processors. Which are RISC. They call it "Apple Silicon".
      Search "Mac transition to Apple Silicon" on Wikipedia.
      > I'm just not convinced that RISC won this competition, as much as I hate the Intel instruction set.
      Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).

    • @LabyrinthMike
      @LabyrinthMike Před 3 lety

      @@Conenion Well, it is not important how it works internally, but, this translating to RISC internally, does that mean microcode? If yes, machines have been doing that for a long time. If I recall, the IBM 360 was a microcoded machine.

  • @danf6975
    @danf6975 Před 3 lety

My simple analogy:
Big block muscle cars have much more scalability than small-piston, high-RPM rice burners, because the rice-burner technology, or in this case the complex instruction sets, doesn't scale well and hits a wall.

  • @_avr314
    @_avr314 Před rokem

    Prof. Patterson :) Go Bears!

  • @PixelPhobiac
    @PixelPhobiac Před 3 lety

    PS3 was RISC, right?

  • @khwezimngoma
    @khwezimngoma Před 3 lety

    Wow

  • @zebratangozebra
    @zebratangozebra Před 3 lety

    Think the guys that write the compiler code are the real wizards, but I'm kinda stupid.

  • @DaiChiMon
    @DaiChiMon Před rokem

Walter White if he were not a chemist

  • @petros_adamopoulos
    @petros_adamopoulos Před 3 lety

    No mention of pipelining, one of the most important leverages for RISC vs CISC early on as a means to achieve single cycle instructions vs many cycles even for some of the simple CISC ones.
    No mention of the amount of registers which was/is typically several fold more on RISC; that's one of the things making it easier to target for a compiler.
    No mention of how the cost of CPU cache changed historically, which made it first advantageous for CISC then for RISC.
This interview is really, really dumbed down, which makes you wonder who the audience for it would be...
    Pipelining and register allocation are very interesting topics, and defining ones in processor architectures.

  • @henrifritsmaarseveen6260
    @henrifritsmaarseveen6260 Před 3 lety +2

The advantage is in the fetch of the instructions:
because CISC has more instructions, it needs more time to fetch and decode an instruction than RISC.
So in the beginning CISC was RISC, but people became lazy and wanted things like multiplication in the instruction set, because code would become easier and less memory was needed to store the instructions.
Also, at that time memory was expensive.
So when memory became cheaper and clock speeds became higher, RISC became faster.
But around that time Intel and, above all, Microsoft blocked these CPUs.
Look at the story of the Acorn Archimedes: there is your first real RISC computer with an OS!! Maybe still one of the best ever!!

  • @filiperocha1465
    @filiperocha1465 Před rokem +1

    "RISC is good"

  • @TheOneTrueMaNicXs
    @TheOneTrueMaNicXs Před 3 lety +3

I kind of feel like he is wrong. On ARM processors all instructions take 4 cycles, and since x86 instructions are variable, today x86 machines are basically 4 times faster.
I still want the EPIC (Explicitly Parallel Instruction Computing) architecture.

    • @povelvieregg165
      @povelvieregg165 Před 3 lety +3

Curtis, ARM instructions take 1 cycle on average to finish because they are pipelined. That is, after all, the whole point of RISC having the same number of cycles per instruction: it makes pipelining a lot easier. I am not up to date on the current status of x86, but at least back in the PowerPC days of Apple it was a point often made that pipelining worked badly with x86. It was hard to keep the pipeline full at all times with a variable number of cycles.
ARM also has a bunch of instructions very well suited for pipelining, such as conditional arithmetic operations. They let you avoid branching, which drains the pipeline.

  • @rezan6971
    @rezan6971 Před 3 lety

the question you didn't ask: if RISC and CISC were engines, which one would be more powerful?

  • @popotit0
    @popotit0 Před 3 lety

RISC will start catching up when you can pay less than US$2k for a server and run Linux on it.

  • @Elmaxo1989
    @Elmaxo1989 Před 3 lety

    Did anyone make a Malcolm in the Middle reference in the comments yet? Y'know, like something about Hal designing HAL? I'll leave the completion of this joke as an exercise for the reader.

    • @ciarfah
      @ciarfah Před 3 lety

      Multiple layers of joke here given "Hal" authored a book on this stuff, haha

  • @parkerd2154
    @parkerd2154 Před 3 lety

If you didn't know this, you didn't know much about computers

  • @AbdulelahAlJeffery
    @AbdulelahAlJeffery Před 3 lety

    Breaking Bad!

  • @Maxkraft19
    @Maxkraft19 Před 3 lety +1

Almost all chips are RISC. Most chips just convert their conventional code to a simpler code inside the CPU. Intel did this with the Pentium 4. So RISC did win. Also, VLIW was superseded by SIMD, in the FPU via special instructions or in the GPU. Modern chips just glue all these different approaches together and hide it in the compiler or the CPU instruction decoder.

    • @Conenion
      @Conenion Před 3 lety

      > Intel did this with the Pentium 4
      Before that. Intel started translating from CISC to RISC-like instructions internally with the Pentium Pro in 1995 (AMD followed shortly after).

  • @mTs1978412
    @mTs1978412 Před 3 lety

Is it just me or is this SCARY asf

    • @shableep
      @shableep Před 3 lety +4

      What’s scary about it?