RISC versus CISC

  • Added 22 Jul 2024
  • In this computer science video tutorial you will learn about some of the differences between RISC and CISC. RISC stands for Reduced Instruction Set Computer, and CISC stands for Complex Instruction Set Computer. You will learn that RISC and CISC are two fundamentally different approaches to processor design. The RISC approach is at the heart of the ARM architecture (Advanced RISC Machine), which can be found in Apple devices for example, whereas the CISC approach is at the heart of Intel's x86 architecture, found in most desktop and laptop PCs for example. You will learn that the fundamental difference between RISC and CISC stems from the number of bits allocated to the operation code versus the operand of a binary machine code instruction inside the current instruction register of a CPU. You will see how this affects the size of a processor's instruction set and the number of memory locations that can be addressed directly by the processor (see the sketch after the chapter list). You will therefore learn why CISC chips are typically more expensive to design and produce than RISC chips and why CISC processors have more addressing modes.
    Chapters:
    00:00 Introduction
    00:55 Assembly code instructions
    04:12 Anatomy of a machine code instruction
    04:47 The operation code and the operand
    07:46 Summary of the differences between RISC and CISC
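    A rough Python sketch of the op code/operand trade-off described above, assuming a hypothetical 16-bit instruction register (the numbers are illustrative, not taken from the video):

        # Moving bits from the operand to the operation code grows the possible
        # instruction set but shrinks the directly addressable memory.
        INSTRUCTION_WIDTH = 16  # assumed width of the current instruction register

        def design(opcode_bits):
            operand_bits = INSTRUCTION_WIDTH - opcode_bits
            return {
                "opcode_bits": opcode_bits,
                "operand_bits": operand_bits,
                "max_instructions": 2 ** opcode_bits,        # distinct operation codes
                "addressable_locations": 2 ** operand_bits,  # directly addressable memory
            }

        print(design(opcode_bits=4))  # RISC-like split: 16 instructions, 4096 locations
        print(design(opcode_bits=8))  # CISC-like split: 256 instructions, 256 locations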

Comments • 109

  • @abdouceesay7461
    @abdouceesay7461 2 years ago +22

    If I had known about this channel in my first year of university it would have been much better for me, but I really appreciate your effort in making this important video

  • @rairoshan7635
    @rairoshan7635 2 years ago +24

    Couldn't be explained any better, thanks for this beautiful explanation.

  • @danielsims5771
    @danielsims5771 8 months ago +1

    The way you explain things just makes people subscribe automatically once they come across your tutorials. Thank you a million times. Please, I'd kindly request a video on CPU pipelines

  • @trebelojaques458
    @trebelojaques458 2 years ago +8

    Haven't even started my computer science course yet, and I've already been watching you for so long!
    God, thank youuu for introducing him into my life ❤️❤️❤️

  • @mth32871
    @mth32871 2 years ago +3

    Excellent description/comparison. Very well done. Thank you.

  • @JuswanthTeeb
    @JuswanthTeeb 10 months ago

    Only today do I feel like I've understood these concepts! Thanks ❣

  • @mortenlund1418
    @mortenlund1418 a year ago +1

    Ohh - really like the style of this video. Clear, clean, reduced instruction!

  • @Joseph-vn8gh
    @Joseph-vn8gh 8 months ago

    This is fantastic, love how you didn't assume we knew everything.

  • @SaradaBani
    @SaradaBani a year ago +2

    Very well explained with the basic concepts. This is the explanation an everyday engineer needs.

  • @syung8709
    @syung8709 4 months ago

    Thanks for the clear explanation!

  • @Mel-jp5vb
    @Mel-jp5vb a year ago

    Great explanation, thank you!

  • @alanmichaelthayil967
    @alanmichaelthayil967 2 years ago +2

    Thank you for this wonderful video!

  • @tylersehon120
    @tylersehon120 2 years ago +1

    Fantastic video. Thank you!

  • @techankhamun838
    @techankhamun838 2 years ago +4

    Great video! Thank you
    I'm wondering whether you have a video on CPU pipelines. It'd be great if you could make one. Thanks again :)

  • @mayank8387
    @mayank8387 2 years ago +2

    Beautiful stuff. Please make more videos like this one.

  • @El.Duder-ino
    @El.Duder-ino a year ago +1

    An excellent explanation, thank you very much👍👍👍

  • @Melpomenex
    @Melpomenex 2 years ago +1

    Fantastic video. Thanks for making it.

  • @hermano8160
    @hermano8160 a year ago +2

    Very good explanation, thank you.
    The cherry on top would be a comparison at the level of the logic gates, to really connect how the different assembly code affects the gate/transistor sequence and thus the complexity of the actual silicon/chip design.

  • @Anonymous-om7sq
    @Anonymous-om7sq a year ago +1

    This is amazing, thank you so much.

  • @jkibeats1466
    @jkibeats1466 2 years ago

    Love these videos

  • @AjinkyaMahajan
    @AjinkyaMahajan 2 years ago +2

    Great Explanation
    Thanks ✨✨✨

  • @vutuan4308
    @vutuan4308 6 months ago +1

    It is so easy for me to understand. Thank you sir

  • @shanesepac7716
    @shanesepac7716 a year ago +1

    amazing explanation

  • @mikey10006
    @mikey10006 2 years ago +3

    I don't know if you still reply to these, but you, Neso Academy, Houston Math Prep and Michael Van Buren got me through my electrical engineering classes with As

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago +3

      I'm still here. I love reading comments like yours. It's great to hear you've been successful. Good luck with the future and I hope, like me, you are a lifelong learner. :)KD

    • @mikey10006
      @mikey10006 2 years ago +1

      @@ComputerScienceLessons I am 100% sure I won't let you down! Thanks, honestly I'm watching this video just for fun haha. We did the x86 ISA way back when and I'm just learning about RISC because it seems cool. Best explanation out there btw haha

  • @osireacts
    @osireacts 2 years ago +1

    Year 2 uni is starting in a few weeks and I need to catch up on all your videos

  • @nguyenxuanquang9864
    @nguyenxuanquang9864 2 years ago +1

    If a CPU uses a 16-bit instruction register with 8 bits for the op code and 8 bits for the operand, and an instruction has 2 operands, such as MOV, does that mean the CPU can only access 2^4 addresses?

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago

      The simple answer is no. Depending on the architecture, the instruction register might support 1, 2, 3 or even 4 operands. If a particular CPU supports, let's say, 2 operands, then some of the instruction register bits will be allocated to the first operand and some of them will be allocated to the second operand. If the instruction register of this particular CPU is executing an instruction with only 1 operand, then some of the operand bits will go unused. I should point out that a modern CPU generally has more than 16 bits available in the instruction register. Note also that at least a couple of bits need to be reserved to indicate the addressing mode being used and, depending on the architecture, an instruction register might allocate bits for other reasons, for example to indicate the number of shifts to be performed by a bit shift instruction. Bit allocation in the instruction register is fixed; I'm not aware of an instruction register format that dynamically allocates bits depending on the instruction - now there's an idea! :)KD

    • @nguyenxuanquang9864
      @nguyenxuanquang9864 2 years ago

      Thank you for the answer. Excuse me if my question is unclear and sounds silly, since English is not my native language :-(. I will clarify my question as follows:
      Let's assume a particular CPU has an 8-bit instruction register, with 4 bits for the op code and 4 bits for the operand(s), and no special bits for anything else.
      In this scenario, the CPU should be able to access 2^4 memory addresses, from 0 to 15, but how can it execute an instruction to MOV a value from any address greater than 3 (i.e. moving the value from address 4 to address 5, MOV 100 101, requires 10 bits)? So in this scenario, it can only access 2^2 addresses, right?
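      A rough Python sketch of the point in this exchange, using a made-up 8-bit format with a 4-bit op code and two 2-bit address fields (an illustration, not the video's own example): a two-operand MOV can then only reach addresses 0 to 3, much as suggested above.

          # Toy encoder: pack one op code and two operand address fields into a
          # fixed-width instruction word. The width of each operand field, not the
          # total number of operand bits, limits the addresses a two-operand
          # instruction like MOV can reach directly.
          def encode(opcode, src, dst, field_bits=2):
              assert src < 2 ** field_bits and dst < 2 ** field_bits, "address out of range"
              return (opcode << 2 * field_bits) | (src << field_bits) | dst

          print(bin(encode(0b0001, src=2, dst=3)))  # 0b11011: opcode 0001, src 10, dst 11
          # encode(0b0001, src=4, dst=5) would fail: each address would need 3 bits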

  • @thomasgavris855
    @thomasgavris855 a year ago +1

    Found the Will Buxton of computer science

  • @Nobody-df7vn
    @Nobody-df7vn 2 years ago +1

    Thanks!

  • @_BWKC
    @_BWKC 2 years ago +2

    Nice video, please make more videos about assembly 🌚

  • @Dudleymiddleton
    @Dudleymiddleton 2 years ago +2

    Swings v Roundabouts! :)

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago

      I thought about saying that at the end of the video (or "six and two threes") :)KD

  • @crocolierrblx9365
    @crocolierrblx9365 a year ago +1

    I'm gonna make the sisc... (Stay tuned)!

  • @peterwan小P
    @peterwan小P 2 years ago +1

    0:29 I like your videos, but CISC is "sisc" btw. I thought I was doing it wrong all the time, but after checking the wiki and other websites, I think I can conclude that it should be pronounced "sisc". Thanks for sharing and making such high-quality content though. Just giving some minor information about the pronunciation; no one else seems to have mentioned it. I think much of it was a new experience for me, though. Thanks! Sir!

    • @peterwan小P
      @peterwan小P 2 years ago

      I know this could be rude, so please excuse my rudeness.

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago

      I think, because the C stands for 'Complex', my pronunciation is more logical. But hey! You say tomayto, I say tomarto, and that's fine by me :)KD

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago

      You're not being rude at all. I pronounce Denary as it sounds but it has been pointed out that "Deenary" is more common. :)KD

  • @abdulmotin196
    @abdulmotin196 11 months ago

    The video starts with x86 in the left column and then finishes with ARM-based in the left column, which confused me. 😢

  • @RecycleBin0
    @RecycleBin0 2 years ago +1

    I know you may have left this out intentionally, but don't some instructions use more than one opcode?

    • @RobertFletcherOBE
      @RobertFletcherOBE 2 years ago

      That's covered in the video ;) MOVE and COPY

    • @RecycleBin0
      @RecycleBin0 2 years ago

      @@RobertFletcherOBE that was multiple operands

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago +1

      There are lots of assembly languages out there, indeed every specialist type of processor has its own instruction set, so I would not be surprised (although I can't think of one). It is not uncommon for an instruction to have a label as well as an op code, to enable branching and looping. :)KD

  • @Deveyus
    @Deveyus 2 years ago +4

    It's interesting that you didn't mention that both the hardware and the ISA of a RISC-based processor are easier to audit for security as well, which in some applications can be quite important.

  • @Anonymous______________
    @Anonymous______________ a year ago +1

    Um, given the presence of Raspberry Pis (SBCs) everywhere, the home automation part is bass-ackwards. Also, ARM (RISC) has a far larger presence in nearly every computing device that isn't a PC.

  • @shahzebkhalid5591
    @shahzebkhalid5591 2 years ago +1

    If you don't mind me asking, what's that in your profile picture? Is it a moon or a black hole?

  • @animatrix1851
    @animatrix1851 2 years ago

    Why would home automation require a CISC? Most home automation devices these days are based on ARM or even lower-end MCUs, while for DSP you might need several new special instructions, making it more CISC-oriented.

  • @_Stin_
    @_Stin_ 2 years ago +2

    I remember when I was fighting with my Wintel PC-user friend in school about RISC vs CISC - I was right pmsl
    It was Acorn vs PC back in the 90s lol

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago +2

      You're showing your age! Factoid - ARM originally stood for Acorn RISC Machine. It was later changed to Advanced RISC Machine. :)KD

    • @_Stin_
      @_Stin_ 2 years ago +1

      @@ComputerScienceLessons That's correct. It was the first assembly language and machine code I learned whilst in high school - I was such a geek lol
      I still have my 233MHz StrongARM RiscPC on the shelf lol
      On a personal note, Prof. Furber and Sophie Wilson are personal idols lol - she could type compressed ARM BASIC code! Last I knew, Prof. Furber was researching spiking neural networks using a cluster of ARM SpiNNaker(?) chips. Fascinating stuff.

  • @richliou22
    @richliou22 a year ago

    What you have explained must be the von Neumann architecture and not Harvard. Please confirm.

    • @ComputerScienceLessons
      @ComputerScienceLessons a year ago +1

      This could be either. Please watch my video about Harvard czcams.com/video/4nY7mNHLrLk/video.html :)KD

    • @richliou22
      @richliou22 a year ago

      @@ComputerScienceLessons thanks for responding. My understanding is that the instruction register and data register are separate in the Harvard architecture. However, in this video it appears there is just one type of register?

  • @paulfalke6227
    @paulfalke6227 2 years ago +1

    At 9:55 you say that a CISC compiler is easier than a RISC compiler. I disagree. An optimizing compiler today has the same front end for a RISC or CISC back end. The intermediate language, used for common subexpression optimization, can be the same. The compiler back end is different, but there are little details that make both kinds of back end complex. The Intel x86 CISC has the registers EAX, EBX, ECX, EDX, but not every operation can work with every register. A classical RISC executes the operation after a conditional jump instruction independent of the result of the compare.

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago

      You are quite right. My video is primarily pitched at A Level computer science students. They learn the fundamental principles of assembly language via the Little Man Computer simulator - so I cite examples in this rather than any 'real' assembly language. The difference between RISC and CISC is actually rather blurred, with modern processors taking ideas from each. I would be interested to know what you think of my series on compilers? :)KD

  • @TheRojo387
    @TheRojo387 a year ago

    You'll find that CISC is pronounced "sisk".
    Furthermore, CISC architectures have instructions of different lengths, with the exact value of the opcode dictating the length of the whole instruction. In contrast, RISC architectures have instructions all of one length.

    • @ComputerScienceLessons
      @ComputerScienceLessons a year ago

      I think it's OK to pronounce CISC anyway you like. :)KD

    • @TheRojo387
      @TheRojo387 11 months ago

      @@ComputerScienceLessons Oh, well, I settled on "sisk" since that was the more common one I had encountered.
      There's a third type of computing architecture: VLIW. It implies instruction-level parallelism. I simply pronounce it "vlee-ew". Tongue-twisting, ain't it!

  • @djneils100
    @djneils100 2 years ago +1

    Good video, but "kissk" rather than "sisk" is just weird

  • @nielsdaemen
    @nielsdaemen 2 years ago

    12:29 That makes no sense. Home automation and security systems use embedded chips which are always RISC

  • @Tamirov-Alexander
    @Tamirov-Alexander a year ago +1

    Hi, I'm studying English. Why do you pronounce it "kisk" and not "sisk"?

    • @ComputerScienceLessons
      @ComputerScienceLessons a year ago

      I don't think there are any rules about this. Other acronyms like CAD (Computer Aided Design) and CAT (Computer Aided Tomography) are pronounced with a 'hard' sounding C. :)KD

    • @Tamirov-Alexander
      @Tamirov-Alexander a year ago +1

      @@ComputerScienceLessons For me it sounds ok, and I know that languages are evolving, but..
      According to the rules,
      the letter c produces the /s/ sound if it is followed by the letters 'e', 'i', or 'y', and
      the letter c produces the /k/ sound if it is followed by the letters 'a', 'o', or 'u', or by a consonant.
      So CAD, right, we should pronounce as "kad", but Cisco as "sisko"..

    • @ComputerScienceLessons
      @ComputerScienceLessons a year ago +1

      When it comes to the English language, pronunciation depends a lot on which part of the country you come from. I'm from the North East (with a hint of Kiwi) so I say 'grAf'. Some of my friends pronounce it 'grarf' but they also say 'grAphics'. There's no logic to it. To be honest, I think I started saying Kisc because it sounds better when it follows Risc - I like the alliteration of it. I also think it's more memorable for students. Rules be damned! By the way, the name Cisco comes from 'San Francisco', or should I say San FranKisco? :)KD

    • @Tamirov-Alexander
      @Tamirov-Alexander a year ago

      @@ComputerScienceLessons Yeah, we have the exception "soccer". CISC could be an exception too, I think 🇦🇽 😊

  • @arm-power
    @arm-power a year ago +2

    - RISC is BETTER, because RISC is the next evolutionary step from CISC.
    - RISC was developed after CISC (solving CISC's problems).
    - All famous 8-bit CPUs were CISC - Intel 8080, Motorola 6800, 68000, Zilog Z80, MOS 6502 etc.
    - DEC had the CISC VAX and then came up with the 64-bit RISC Alpha.
    - Intel had the CISC 8008, 8080, 8086 and later came up with the RISC-like IA64 (Itanium).
    - CISC is obsolete today - there is no new CISC ISA, while there are many new RISC ISAs (Xtensa ESP32, RISC-V, ARMv9, Loongson etc.).
    1) Number of instructions:
    - RISC .... 64-bit ARM has around 700 instructions (around 100 for basic scalar integer/LD/ST, the rest is FPU and vector/SIMD/ML extensions).
    - CISC .... 64-bit x86-64 has around 900 instructions (again, most instructions are FPU/vector/SIMD/ML extensions).
    - summary: the difference between CISC and RISC is almost zero in terms of the number of instructions. Basically it depends on how many extensions there are. In 2022 ARM added the SME2 extensions (on top of the current SVE, SVE2, SME) for matrix computation with registers up to 2048 bits long (x86 has only 512-bit AVX512) - this probably means that RISC ARM today has more instructions than CISC x86.
    - in real software, a 64-bit ARM (RISC) binary is about 10% bigger than an x86-64 (CISC) binary (the difference is negligible)
    - an ARM Thumb-2 binary is 10-20% smaller than x86 (Thumb-2 was developed for the MCU market; for full-size CPUs the difference is negligible).
    2) Op-code:
    - RISC obviously can have thousands of instructions (mentioned above). How is that possible?
    - ARM has multiple op-code templates for one operand, two operands and three operands - the op-code size differs; it does not need to be fixed.
    - I think the author of this video confuses fixed op-code size with fixed instruction length. The instruction length MUST be fixed (in high-performance RISC it is usually 32 bits long (Alpha, ARM, MIPS, SPARC etc.); for MCUs it can sometimes be packed into 16-bit ARM Thumb-2, and small 8-bit MCUs like PIC use 12-bit or 14-bit instruction lengths to save space in tiny onboard flash) because it is the main advantage of a RISC ISA - decoding an unlimited number of instructions in parallel (very important for a modern CPU, as today's best CPUs execute 8 instructions/clock on average).
    - CISC usually uses variable instruction length. An x86 instruction can be anything from 1 byte up to 15 bytes (8 bits up to 120 bits), consisting of two prefix parts, the main instruction and two postfix parts. This was good for hand assembly coding back in the 1970s, when RAM was a few kBytes and very expensive. Also, one instruction on the 8086 took 10 clocks to complete (0.1 instructions/clock is roughly 80x lower IPC than today's CPUs).
    - A modern x86 CPU has a big problem with parallel decode - basically an x86 CPU only knows where the 1st instruction begins but doesn't know where the 2nd, 3rd etc. instructions begin. The 2nd instruction can begin anywhere from the 2nd byte up to the 16th byte, the 3rd instruction from the 3rd byte up to the 31st byte (the number of combinations rises exponentially for every additional instruction loaded in parallel). That's a reason why AMD Zen 4 still has only a 4-wide decoder (plus sophisticated predictors) and a huge micro-op cache (holding already-decoded instructions, at a high cost), while the Apple M1 has an 8-wide decoder and no micro-op cache (because it could decode even 100 instructions very simply if needed).
    - This unnecessary x86 decoding hell costs transistors, engineering hours spent finding workarounds, and energy.
    3) Every CISC (x86) CPU today runs RISC-like internals. The first RISC-inside x86 CPU was the AMD K5, competing with the 486 and the Pentium AKA P5 (the last CISC-inside x86 CPU). The K5 was an in-order RISC CPU, the Am29000, with an x86 decoder added to the front end. The K6, based on NexGen, was an out-of-order RISC-inside CPU (similar to the Pentium Pro AKA P6).
    4) It's pretty easy to modify an Intel or AMD x86 CPU for another ISA, especially a RISC one, because it already runs as RISC inside. AMD converted the RISC Am29000 into x86 in the past. There is no problem stripping the over-complicated x86 decoder away and implementing a more efficient RISC-V or ARM decoder. Jim Keller did that with the K12, when AMD was developing the x86 Zen and the ARM K12 in parallel as sister cores.
    The true reason why neither AMD nor Intel does that is money - the x86 market is big, and nobody else except Intel and AMD has a license to make x86 CPUs. They care about money only. However, every company which focuses on making money rather than delivering the best possible products inevitably goes bankrupt. The smartphone and tablet market is 5x bigger than the server market and growing... and ARM has 100% there while x86 has 0%. In the server market ARM got to 10% in just the last 2 years and is growing fast (cloud providers offer ARM servers at half the price for the same performance - during the upcoming economic recession, how many companies can afford to pay double the price for the same performance?).
    x86 will die soon - because it's old/obsolete/inefficient and because it has a bad business model (basically a monopoly).
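    A rough Python sketch of the decode difference described in point 2, using a made-up variable-length encoding rather than real x86 or ARM formats: with a fixed length, the start of every instruction is known up front, so many decoders can work at once; with a variable length, each start depends on the length of every earlier instruction.

        FIXED_LEN = 4  # bytes, as in classic 32-bit RISC encodings

        def fixed_starts(pc, n):
            # Start of the i-th instruction is pc + 4*i: all n decoders can begin immediately.
            return [pc + i * FIXED_LEN for i in range(n)]

        # Made-up variable-length scheme: the first (opcode) byte selects the total length.
        LENGTH_OF = {0x01: 1, 0x02: 3, 0x03: 5}

        def variable_starts(code, pc, n):
            # Each start is only known after inspecting all earlier opcodes: a sequential walk.
            starts = []
            for _ in range(n):
                starts.append(pc)
                pc += LENGTH_OF[code[pc]]
            return starts

        print(fixed_starts(pc=0, n=4))                                            # [0, 4, 8, 12]
        print(variable_starts(bytes([2, 0, 0, 1, 3, 0, 0, 0, 0, 1]), pc=0, n=4))  # [0, 3, 4, 9]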

    • @ComputerScienceLessons
      @ComputerScienceLessons a year ago

      Good analysis :)KD

    • @simply6162
      @simply6162 11 months ago

      You are a very good computer scientist and a good analyst. I also think Intel and AMD are going to get fcked in the future, but the gaming industry is keeping x86 alive, and it's gonna take a looooong time for x86 to die because of gaming

    • @mikafoxx2717
      @mikafoxx2717 6 months ago

      I think cross-compilation, basically a RISC pre-decode, could help keep backwards compatibility if they removed the complicated on-die decoder. Like you said, modern CPUs could really run any instruction set, more or less, depending on things like register count or how the maths units are optimised for the instruction set.

  • @igneousred1875
    @igneousred1875 2 years ago +1

    The whole RISC vs CISC debate is over, as RISC has won...
    Why? Because more than a decade ago, processors accepting CISC instructions started to internally consist of a RISC core that executes a handful of instructions, plus a decoder that translates every instruction from CISC to RISC.
    So even with the added overhead of translating every single instruction every time it is executed, that was a better choice than trying to make a processor with separate circuits for every instruction (a lot of area) of which only a few are "working" at any moment...
    As x86 (the most popular CISC instruction set) gains more instructions every few years, the trade-off shifts further in favour of internally having a "RISC" processor...
    Other things RISC is better at than CISC:
    1. Compilers are terrible at picking the "best" CISC instructions, whereas it is actually simpler to find the correct RISC instruction because there is not much of a choice.
    Point: converting RISC to CISC will result in, say, half as many instructions (being generous), but for each one (on average) you have many to choose from... Also, computational difficulty with respect to instruction count is roughly O(n), while the same cannot be said for the number of choices per instruction... (Not simpler)
    2. In actual "CISC" processors today, it is sometimes faster to execute a simple instruction multiple times than the more complicated one that was made FOR THAT EXACT computation.
    (This is, as I said, because they are really RISC with a decoder at the front.) Also more reason for (1.).
    3. A RISC-encoded program almost always takes less space, as a vast majority of the CISC instructions are almost never used and yet they take up bits...
    Along with others I may have forgotten...
    Why is CISC still the most dominant in the PC domain? Mostly because of established standards... Sadly, industry leaders think that creating a new standard, marketing the hell out of it and hoping people switch to it is not profitable.
    RISC is dominating in the microcontroller/microprocessor and phone spheres... and accounts for a vast majority of the processors made today, not including the pseudo-CISC ones.
    For more info, a good place to start is the great David Patterson's Lex Fridman interview...
    YT sometimes marks a comment as spam if you include a link, so I won't.
    Thank you for reading! And replies are encouraged!
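    A rough Python sketch of the CISC-to-RISC translation idea described above, using made-up mnemonics rather than any real x86 instruction or micro-op encoding:

        # A complex "CISC-style" add that reads its source from memory is cracked
        # into two simpler RISC-like micro-ops before execution.
        def crack(instruction):
            op, dst, src = instruction                 # e.g. ("ADD", "EAX", "[X]")
            if op == "ADD" and src.startswith("["):
                addr = src.strip("[]")
                return [
                    ("LOAD", "tmp", addr),             # micro-op 1: memory read
                    ("ADD", dst, dst, "tmp"),          # micro-op 2: register-to-register add
                ]
            return [instruction]                       # already simple: pass through unchanged

        print(crack(("ADD", "EAX", "[X]")))
        # [('LOAD', 'tmp', 'X'), ('ADD', 'EAX', 'EAX', 'tmp')]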

    • @ComputerScienceLessons
      @ComputerScienceLessons 2 years ago

      Fascinating interview if you have time (czcams.com/video/naed4C4hfAg/video.html) :)KD

    • @igneousred1875
      @igneousred1875 2 years ago

      @@ComputerScienceLessons Love your videos btw... Keep up the good work!

    • @ngndnd
      @ngndnd 2 years ago

      Thanks, I'm using your comment for my homework assignment, so I hope you don't mind

    • @igneousred1875
      @igneousred1875 2 years ago

      @@ngndnd Not sure how you would... But glad to inspire

  • @paulfalke6227
    @paulfalke6227 2 years ago

    At 8:39 you say that "ADD Y" is a (typical) RISC instruction. But this is a typical CISC instruction. A typical RISC instruction is "ADD R0, R1, R2". I don't know of ANY RISC CPU that has an accumulator register. Why? This is part of the "one clock tick, one operation" idea of RISC. In a classical RISC CPU you can only write "ADD R0, R1, R2", which executes as R0=R1+R2. All registers have to be different. The operation "ADD Y" executes as A=A+Y, with the accumulator A used twice in the operation. But this is a no-no for classical RISC. The CISC CPU does a little cheat: it has a second accumulator, often called Atmp. The ADD Y operation is executed in two clock ticks: first do Atmp=A+Y, then do A=Atmp.
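    A small Python sketch of the contrast drawn above, with made-up register names and only the two ADD forms modelled (an illustration, not real hardware behaviour):

        # Accumulator style, as in "ADD Y": the accumulator is both a source and the
        # destination, and the single operand names a memory location.
        def add_accumulator(state, y):
            state["ACC"] = state["ACC"] + state["MEM"][y]

        # Classic three-operand RISC style, as in "ADD R0, R1, R2": all three registers
        # are named explicitly, and memory is touched only by separate load/store instructions.
        def add_three_operand(regs, rd, rs1, rs2):
            regs[rd] = regs[rs1] + regs[rs2]

        state = {"ACC": 5, "MEM": {7: 10}}
        add_accumulator(state, 7)
        print(state["ACC"])  # 15

        regs = {"R0": 0, "R1": 2, "R2": 3}
        add_three_operand(regs, "R0", "R1", "R2")
        print(regs["R0"])    # 5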

  • @technicalthug
    @technicalthug 21 days ago

    From this great explanation, I feel that CISC might have been more appealing to Intel because it somewhat tied developers to their processors via a slight lock-in. Easier to develop for, but harder to migrate away from.

    • @tookitogo
      @tookitogo 12 days ago

      Every CPU architecture at the time of x86’s birth was CISC.

  • @qusayfadhel1609
    @qusayfadhel1609 4 months ago +1

    "sisk"