why does inheritance suck?

  • Uploaded 5 May 2024
  • You've probably heard this a few times when talking to your fellow programmer friends. "Gee Billy C++ polymorphism sure is slow, I hope Sally doesn't know that I use it!" But why is it so bad? In this video, we'll do a deep dive on what C++ Polymorphism is, what "virtual" does under the hood, and ultimately why it is SUCH a performance hit compared to languages like C and Rust.
    🏫 COURSES 🏫 Check out my new courses at lowlevel.academy
    🙌 SUPPORT THE CHANNEL 🙌 Become a Low Level Associate and support the channel at / lowlevellearning
    How Does Return Work? • do you know how "retur...
    🔥🔥🔥 SOCIALS 🔥🔥🔥
    Low Level Merch!: lowlevel.store/
    Follow me on Twitter: / lowleveltweets
    Follow me on Twitch: / lowlevellearning
    Join me on Discord!: / discord
  • Science & Technology

Comments • 774

  • @LowLevelLearning
    @LowLevelLearning 9 months ago +42

    IM GOING LIVE TODAY AT 12PM EST TO REVIEW MY CHAT'S CODE: czcams.com/users/live-A6Ar4u_Teg

    • @kuhluhOG
      @kuhluhOG 9 months ago +3

      I think you should add that *runtime* polymorphism was discussed in the video.
      C++ can also do compile time polymorphism (which doesn't have a runtime performance overhead).

    • @EmilPrivate
      @EmilPrivate 9 months ago

      @@kuhluhOG but it has other trade-offs

    • @kuhluhOG
      @kuhluhOG 9 months ago +1

      @@EmilPrivate mostly longer compile times, yes

    • @EmilPrivate
      @EmilPrivate 9 months ago +1

      @@michaelxz1305 It's relative

    • @Toda_Ciencia
      @Toda_Ciencia 9 months ago +1

      What's this text editor you used?

  • @Spirrwell
    @Spirrwell 9 months ago +317

    "It is important to note that by default, functions with the same signature in a derived class are virtual in the parent by default." What? No? This isn't Java.

    • @reroman
      @reroman 9 months ago +63

      I was just looking for this comment. Maybe he's not as experienced in C++ as he is in C.

    • @Spirrwell
      @Spirrwell 9 months ago +78

      @@reroman Sure, but it's such an odd mistake. It was such a detailed explanation, but it was just wrong. Especially for a whole video built around why polymorphism supposedly sucks.

    • @michaelgreene6441
      @michaelgreene6441 9 months ago +2

      Is this perhaps c++ version specific rather than outright wrong? (not a cpp dev just a thought)

    • @Spirrwell
      @Spirrwell 9 months ago +28

      @@michaelgreene6441 Think about what you're suggesting. C++ is old. If there were such a drastic difference in behavior between C++ versions, that would actually be terrible. This is simply wrong.

    • @Spartan322
      @Spartan322 9 months ago +7

      @@Spirrwell It would literally mean you couldn't move a project forward across C++ versions. That does happen occasionally: if you use auto in C++98 and then move to C++11, it will throw an error where it was once valid. But such things are rare and not common use; auto was functionally worthless by 1995, almost no one used the feature, and it was deprecated for over a decade. Almost no other element of the language was reworked like that. The other case I can think of is the comma operator in square brackets, which again was a feature nobody used, because the comma operator is almost never used for its return value. (And otherwise a[1, 2, 3] couldn't have become the way to access a three-dimensional container, which they deemed a nightmare to do under the old meaning.)

  • @draconicepic4124
    @draconicepic4124 9 months ago +1079

    I have a number of issues regarding this video:
    First, the example given is an exceptionally poor choice. The number of unique cases is far too few for where polymorphism should properly be used. With only "OP_ADD" and "OP_MUL" as outcomes, the code will compile into conditional jump statements. With more cases, the switch statement would likely be compiled into a jump table, which would have been a far better comparison to polymorphism. As it stands, the video was effectively comparing if-statements to function pointers for a minuscule number of cases.
    Second, the explanation of how vtables are structured assumes the vtable isn't embedded in the object. While I haven't encountered any compilers that do that, the actual implementation of an object's vtable isn't standardized, and the video can wrongly give the impression that it is.
    Third, regarding why virtual methods are slow: while it is true that memory operations are slower than operations on registers, the biggest slowdown actually comes from the fact that you're doing a dynamic jump. Modern CPUs try to fill the execution pipeline with as many instructions as possible via out-of-order execution, looking ahead for independent chains of instructions. This is far easier when the control flow is static, and it works especially well when the branch predictor chooses the correct execution path. When a dynamic jump occurs, the processor basically has to halt everything until the target of the jump is loaded.
    Fourth, I must disapprove of the blanket "this is bad" approach the video takes. Polymorphism is just like any other tool in programming: there are situations where it is good and situations where it is bad, and when choosing a mechanism one needs to weigh benefits against costs. The video shows polymorphism used in an innermost loop, which is quite likely the worst-case scenario for it; the overhead simply isn't worth it there. If you wanted to stick with the calculator theme, heavier operations like square root, trigonometric functions, or logarithms would have been a fairer comparison.
    Lastly, the code presented at the start has some problems. Why are you calling atoi in the loop condition?! It will be pointlessly executed every pass. Additionally, with optimization enabled, the compiler might very well optimize away the entire body of the loop: if it inlines the operation code, it may see that only one case is ever true and that the operand assignments are pointless, leaving an addition to a temporary variable that is only written and never read. Seeing no side effects, it might empty the loop body entirely. Unless you looked at the assembly, you couldn't tell how aggressively the code was optimized and whether the tests were actually fair.

    • @MrJake-bb8bs
      @MrJake-bb8bs 9 months ago +168

      The video gives a clickbait feeling by bashing so hard on something widely used that everyone using it would feel offended.

    • @cesarbretschneider
      @cesarbretschneider 9 months ago +232

      I would also like to add that the video invites premature optimisation. 99.99% of developers will have bigger fish to fry in their codebase than two memory accesses

    • @khatdubell
      @khatdubell 9 months ago +113

      You left off
      Sixthly, he incorrectly states that you can't do OO programming in C. It requires more discipline, and you don't have the benefit of any language features, but there is literally a book called "oo in c". Unless the book is several hundred blank pages; I haven't actually checked.
      Seventhly, sort of the reverse of the above: if you're really dying for that minuscule performance bump, you can write the code in C++ the same way you would in C, without any late binding.

    • @jasonenns5076
      @jasonenns5076 9 months ago +4

      @@khatdubell Minus the restrict keyword in C, but I have never needed to use restrict.

    • @IBelieveInCode
      @IBelieveInCode 9 months ago +1

      @@MrJake-bb8bs I don't feel offended. Am I a freak ?

  • @Mitch-xo1rd
    @Mitch-xo1rd 9 months ago +351

    In the famous words of Chef "There's a time and place for everything, children. It's called college"

    • @bigtymer4862
      @bigtymer4862 9 months ago +1

      So true 😂

    • @gronkhfp
      @gronkhfp 9 months ago +12

      Or enterprise grade software 😂

    • @user-og6hl6lv7p
      @user-og6hl6lv7p 8 months ago +2

      $70K for something you can otherwise learn online by yourself? Nah. Plus modern college is predominantly online anyway. Professors don't have time to answer intricate questions and just tell you to google things. Absolutely pointless.

    • @cgme9535
      @cgme9535 8 months ago +4

      @@user-og6hl6lv7p that’s quite the generalization. All universities and differing degree programs are different.

    • @teaser6089
      @teaser6089 8 months ago +6

      @@user-og6hl6lv7p that's not the case in The Netherlands, don't generalize your experience with other countries.

  • @abaan404
    @abaan404 9 months ago +221

    Cool video but I couldn't stop staring at the CPU with the swick sunglasses

  • @hugo-garcia
    @hugo-garcia 9 months ago +623

    A non-virtual call is exceptionally fast, as it usually consists of a single instruction. On the other hand, a virtual call introduces an extra level of indirection, leading to the purported 20% increase in execution time. However, this increase is merely "pure overhead" that becomes apparent when comparing calls of two parameterless functions. As soon as parameters, especially those requiring copying, come into play, the difference between the two overheads diminishes. For instance, passing a string by value to both a virtual and a non-virtual function would make it challenging to discern this gap accurately.
    It's essential to note that the increased expense is primarily confined to the call instructions themselves, not the functions they invoke. The overhead incurred by function calls constitutes a minor proportion of the overall execution time of the function. As the size of the function grows, the percentage representing the call overhead becomes even smaller.
    Suppose a virtual function call is 25% more costly than a regular function call. In this case, the additional expense pertains only to the call itself, not the execution of the function. It's essential to emphasize this point. Usually, the expense of the function call is much smaller compared to the overall cost of executing the function. However, it is crucial to be cautious because though it may not always be significant, if you excessively use polymorphism, extending it to even the simplest functions, the extra overhead can accumulate rapidly.
    In C++, and in programming in general, whenever there's a price, there's a gain, and whenever there's a gain, there's a price. It's that simple.

    • @marcossidoruk8033
      @marcossidoruk8033 9 months ago +42

      First, your whole line of argumentation about copy overhead is ridiculous. No one will ever pass something big by value unless it is absolutely necessary, and in that case the copy should be considered part of the actual work the function has to do.
      If the function does a lot of work and is not called often, then you eventually run into icache issues, which increase the overhead. If the function is called frequently, the overhead adds up; it doesn't matter that it's small compared to the actual work the function does, because small multiplied by a big N is big, and if you are in a performance-constrained system like a videogame renderer, even small overhead should be considered. Another way in which vtables are horrible for performance that this video doesn't mention is that they introduce bloat into your data types, which is absolutely horrendous for cache efficiency, so even if you don't care at all about the indirect call overhead, vtables may be a non-starter to begin with.
      Also, what "gain" do you get from virtual functions? I would argue you get negative benefits: codebases that use this kind of thing extensively do a lot of inheritance, which is in itself bad for performance (bad for cache, since you end up with HUGE data types) and just overall terrible for readability and simplicity.
      And even if you do single-layer inheritance and are very careful not to bloat your data types, virtual functions greatly obfuscate the control flow of the program. We all know foo.bar() calls the bar method on the foo object. Oh wait, foo has 5 subclasses and it could be calling any of those, so poor me running my program through a debugger will be met with a surprise when I hit that line.
      Readable and pretty are different things. The way you do it in C is not the prettiest, but it is the more readable: in a switch statement you can clearly see how many "overrides" there are and what needs to be true for each function to be called. You can step through with a debugger and see everything that is happening, and not have to worry about vtable nonsense happening behind your back at any moment.
      Never ever use virtual functions; they are always bad, and I absolutely mean this in absolute terms.

    • @monochromeart7311
      @monochromeart7311 9 months ago +19

      ​​@@marcossidoruk8033 I mostly agree with you, but virtual-functions/dynamic-dispatch still have their place.
      A simple example would be a video game, where the Enemy type can be anything and should be easily extensible without changing the calling code.
      About the object bloat, there's an alternative you may have seen in Rust - dynamic types. The vtable only exists if the type is marked as dynamic, which means the original type doesn't have any bloat, but references to it become fat pointers (pointer to object + vtable).

    • @saniel2748
      @saniel2748 9 months ago +26

      @@marcossidoruk8033 To be fair, when you have a switch statement, all possible types are known at compile time. When you use a vtable you can, for example, add types from other DLLs and it'll still work. These are different functionalities.
      Also, I feel like languages should introduce such closed polymorphism, where virtual functions would compile to a function with a switch inside; that would be kinda cool

    • @hugo-garcia
      @hugo-garcia 9 months ago

      @@marcossidoruk8033 I never said any of this

    • @tweakoz
      @tweakoz 9 months ago +11

      @@marcossidoruk8033 The second you start running code where data loaded from storage influences the order of execution (very common in a game engine), the control-flow argument goes out the door. Yes, you can still mitigate some of the chaos by processing the execution graph into a more structured form, but some will remain; it will never be completely predictable. A much worse offender of control-flow obfuscation than vtables is collections of asynchronous lambdas (jobs) processed by a multi-threaded worker queue; that makes me pine for the simple days of running a method on a collection of base classes with vtables.

  • @tzimmermann
    @tzimmermann 9 months ago +157

    I've been coding a game engine from scratch for a few years now, and in most real scenarios I encounter, where functions perform actual work, the virtual call overhead is just negligible. I tend to avoid using virtuals in a hot path (functions that will run many times per frame), but I'm totally fine using them elsewhere.
    This is how I dispatch game system update and render calls in my framegraph, for instance. My game systems must have state, and the engine can't know about the specifics of client-side game system implementations. All it knows is that some will modify a scene (update), and some will push render commands in a queue when traversing a const scene (render). So polymorphism is a good tool here: it makes the API clear enough, it makes development less nightmarish, the abstraction it introduces can be reasoned about, and the call overhead is jack shit when compared to execution times.
    Guys, don't let people decide for you what "sucks" and what is "nice" or "clean". This is ideology, not engineering. Toy examples like this are *not* real code, measure your actual performances and decide for yourself.

    • @Mitch-xo1rd
      @Mitch-xo1rd 9 months ago +10

      Yep, you can believe in whatever programming ideologies you want, but it all goes out the window when you are in a real world situation and actually need to get something working.

    • @jaskij
      @jaskij 8 months ago +11

      I can't find a source on this, but I remember reading around a decade ago, when I started uni, that the OGRE engine was designed the way it is partly to demonstrate that the overhead of virtual calls is negligible.

    • @tzimmermann
      @tzimmermann 8 months ago

      @@jaskij Nice, I didn't know about this! Can't find a source either, sadly.

    • @guillaumebourgeois42
      @guillaumebourgeois42 8 months ago

      If you can ever find it back, I'd be thrilled to read about it @@jaskij !

    • @Takyodor2
      @Takyodor2 8 months ago +3

      You deserve way more upvotes, the example in the video isn't a typical scenario where you'd want to use polymorphism at all.

  • @pheww81
    @pheww81 9 months ago +97

    I get your point: virtual functions have a cost, and that's true.
    But the video is not 100% honest. The C++ code can be extended just by creating a new type of operator, without touching the code that executes operations. The C code cannot: you must change the central switch case and the enum. The virtual function has a cost but also offers functionality. Is the functionality worth the cost? Maybe yes, maybe no; each case is different and must be evaluated.
    Also, C++ != virtual functions and inheritance. Just because a feature exists doesn't mean you must use it. They are tools in the toolset, nothing more.
    You also did a needless operation to force the C++ code through the vtable with the line "Operation *op = &add;". If you had just used "add.execute();" directly, the compiler would know at compile time which function to call and would not go through the vtable. I understand you did that to have a one-page example, but it could lead someone to think an example this simple would always use an overkill feature like the vtable. It makes C++ look dumber than it is.

    • @crimsonmegumin
      @crimsonmegumin 9 months ago +5

      I agree. In fact, it's probably zero cost if you use templates (at least in Rust). In this case, he is using a pointer (in Rust, it would be `&dyn Operation`, the dyn makes it clear it's a fat pointer)

    • @nero008
      @nero008 9 months ago +4

      you only have to write it once, but the user will have to pay the runtime cost forever!

    • @theEndermanMGS
      @theEndermanMGS 9 months ago +15

      @@nero008 You do know that the vast majority of useful code needs maintenance well into the future, right? Not to mention that this sort of thinking almost always falls under the umbrella of premature optimization. Outside of very select circumstances, the performance difference between a polymorphic implementation and a monomorphic implementation is utterly imperceptible to the end user, but the difference it makes to the structure of the code could very well be noticed and appreciated by any programmer doing work on that code

    • @nero008
      @nero008 9 months ago

      @@theEndermanMGS why the hell would i base my point on cold paths ?

    • @theEndermanMGS
      @theEndermanMGS 9 months ago +4

      @@nero008 Your comment certainly didn't read that way, given that you were responding to a critique of the video's lack of nuance, one that makes clear there is room for evaluating whether a virtual call's performance penalty is worthwhile in a given case. I don't see any reasonable way to read your response except as a complete dismissal of the more nuanced view.

  • @metal571
    @metal571 9 months ago +146

    It's always worth noting that when Stroustrup says that C++ obeys the zero-overhead principle, that in NO way means that the abstraction itself is free. It's just that you couldn't hand-code it better yourself with less performance overhead to use that feature (ideally). If you use inheritance with virtual member function overrides, you *will* pay a cost, because if you care about performance 1. you should always measure it, and 2. you should be aware of exactly how it is implemented. Otherwise, don't solve the problem that way.
    There are some cases where inheritance is quite applicable, but needless to say, it is not exactly cheap, and its depth should be minimized if it's used at all. And to quote Gang of Four, "Favor object composition over class inheritance".

    • @heavymetalmixer91
      @heavymetalmixer91 9 months ago +2

      I'm a newbie to OOP and I hear "composition" quite often but I don't get what it means. Can you please explain it in a few phrases?

    • @metal571
      @metal571 9 months ago +19

      @@heavymetalmixer91 it's as simple as including one or more instances of a class inside another class rather than deriving from a base class. So a class that contains objects of another class as opposed to deriving from another class. This is easier to understand especially compared to multiple levels of derived classes

    • @marcossidoruk8033
      @marcossidoruk8033 9 months ago

      Starsoup is full of shit, and by your definition "zero overhead" is actually not zero overhead.
      This is like killing some people and then saying "I killed zero people according to the zero-kill principle, because you couldn't have done any better." It's nonsense.
      Zero overhead means zero overhead; what else would it mean? It means the abstraction costs no extra CPU cycles, and that is exactly what he means; he just never claimed vtables to be an example of what he calls a zero-overhead abstraction.
      However, the whole idea of zero-overhead abstractions is in itself ridiculous: no abstraction ever has zero overhead, and if it does, it is probably a terrible abstraction that doesn't provide you with much abstraction to begin with.

    • @maksymiliank5135
      @maksymiliank5135 9 months ago

      @@heavymetalmixer91 Instead of extending the base class, you put an object of that class inside another class. That way you still have access to all of the fields of the "base" class, but you usually need to write some additional code, like setters, getters, and method wrappers in the new class, to get to those fields and methods. In C++ you can achieve much the same result by inheriting from the base class without using the virtual keyword: the fields of the base class are simply included in the structure of the derived class, and methods are statically resolved at compile time (no vtable and no dynamic dispatch). But it doesn't work that way in other languages; for example, in Java every method is virtual by default.

    • @sanderbos4243
      @sanderbos4243 9 months ago +18

      @@heavymetalmixer91 As a practical example, say you have a Bow and a Sword class, and you want to make a SwordBow class that is like a Sword and also shoots an explosive arrow out of the sword. You could do this by letting SwordBow inherit Bow and Sword, and then overriding the inherited Bow's shoot() method (function) to replace it with a new method that shoots an explosive one. With composition on the other hand, you simply tell Sword and SwordBow to both use a slash() method, and you then define the explosive shoot() method in SwordBow without overriding anything. Composition turns a 2D family tree into a simple 1D list of family members.

  • @francoisgagnon3529
    @francoisgagnon3529 9 months ago +131

    80% to 90% of code execution time is spent in 10% to 20% of the code. If your code is so performance-tuned that its degradation is due to vtable lookups (which I highly doubt), then perhaps you can say "polymorphism sucks". For 99.999% of all programs out there, that's not the case, and polymorphism is a very useful feature that makes code simpler.

    • @taragnor
      @taragnor 9 months ago +26

      Yeah this is an extreme example because in this case, the polymorphic function in question is just: int add(int a, int b) {return a+b;}
      And yeah, if you're doing a bunch of one line functions it'll probably make your polymorphism overhead seem really bad. Polymorphism isn't designed to optimize toy programs, it's designed for large projects.

    • @delphicdescant
      @delphicdescant 9 months ago +9

      You say 99.999%, and your average programmer unhappily employed doing webdev might be able to agree.
      The problem arises when these techniques and philosophies seep into applications programming (or worse, embedded) where the bottleneck is no longer some disgustingly lethargic network activity.
      Then your 99.999% becomes merely "sometimes." Are 99.999% of all web-free programs I/O bound? No, not remotely. And for any of them that aren't bound by even disk I/O (nevermind the abyss of network I/O), a benchmark between polymorphic and non-polymorphic solutions will show a non-trivial difference quite a bit more frequently than you're claiming.
      That small bit of code you mention that occupies the most runtime? That might be a tight logic loop or some recursive tree/graph-walking thing, which isn't that unusual for a program that's algorithmically interesting. And yeah, you'll absolutely see a difference if that logic is running through some enterprise-grade polymorphic mess.
      So it's truly unhealthy that OOP and web/enterprise laziness, which is excusable in that one specific realm, has infiltrated universities to become embedded in the minds of all graduates, when only *some* of those graduates are going to be employed writing code that's already so slow that polymorphism is just a drop in the bucket.
      (And yes, I do understand your point applies to these performance-relevant programs wherein the bulk of the computation time lies within the bodies of functions rather than straddling calls that may or may not be subject to a vtable lookup, but if you want to make that argument more effectively, you'd need to shift that rectally-extracted statistic from 99.999% to something more like 80% to avoid being too obviously hyperbolic)

    • @ErazerPT
      @ErazerPT 9 months ago +1

      @@delphicdescant True. Recently was doing some benchmarking on array performance in C# and... let's say that [y*width+x]!=[x,y]!=[x][y]. And none of those is the same as objects encapsulating them. And none of the former is the same as accessing said objects through an interface.
      That said, such is the cost for "generic not specific". But... if you're laying tracks for a trolley car, you have a LOT more tolerance than if you're laying tracks for a bullet train. People just need to be taught "when is which". And I bet most will want to stick to either side of the fence in their work. "Application coders" and "performance coders" are intrinsically different, but both have their place because business needs vary.

    • @jomo5493
      @jomo5493 9 months ago +1

      @@taragnor It's designed for projects that don't require hyper performance. If your project is big but needs high performance, you will start needing to drop these abstractions, depending on how much optimization you need and where the bottleneck is.

    • @T0m1s
      @T0m1s 9 months ago +2

      "polymorphism is a very useful feature and makes code simpler" - do you have an example of code made simpler by polymorphism, or are you just repeating the marketing ads of OOP people?

  • @jeffspaulding9834
    @jeffspaulding9834 9 months ago +109

    You can write OO code in C. C just doesn't do the heavy lifting for you. You can write your own dispatch tables with function pointers.
    There are plenty of OO libraries for C. GTK+ and Xt are two examples.
    The code doesn't look as nice, but OO is about how you organize and reason about your code, not about whatever syntactic sugar your language gives you.

    • @GregMoress
      @GregMoress 9 months ago +21

      A feature of the language is not just 'syntactic sugar', because if it were, then C is just 'syntactic sugar' for Assembly.

    • @TsvetanDimitrov1976
      @TsvetanDimitrov1976 9 months ago +1

      While all of this is true, I think the main point of the video was that type erasure is just not a good idea in general for a statically typed language from an efficiency point of view, so C++ making it easier is a debatable choice (at least from today's perspective; I'm not suggesting Bjarne should have looked into his crystal ball to see that in the future the additional level of indirection would mess with the branch predictor or the icache)

    • @taragnor
      @taragnor 9 months ago +2

      Right, OOP is a paradigm, not a language feature. C++ originally just compiled into C before it got its own compilers. You can do almost anything in C; it just may take a godawful amount of boilerplate that other languages handle for you with ease.

    • @valizeth4073
      @valizeth4073 9 months ago +5

      @@GregMoress No, in fact C doesn't even target your CPU (that's what your compiler does); C targets an abstract machine.

    • @GregMoress
      @GregMoress 9 months ago +4

      You are blurring the line I drew, so allow me to blur yours...
      In Java, .NET, and even BASIC, there are intermediate opcodes: MSIL for .NET, bytecode for Java, and P-code for BASIC.
      It's possible for C to have such intermediate opcodes as well, which is the assembly language programmers would write if not for high-level 'syntactic sugar' languages...
      The fact that C doesn't use opcodes doesn't change the fact that a programmer COULD write assembly targeting their CPU. And internally, the C compiler DOES have platform-agnostic opcodes.
      The difference between C and C++ is not just 'syntactic sugar'; it is a feature, just as C is a feature and not just 'syntactic sugar' over opcodes.

  • @Minty_Meeo
    @Minty_Meeo 9 months ago +54

    Remember to mark your polymorphic classes as final if they are the last derivation! That lets your compiler optimize virtual function calls into regular function calls in situations where there can be no further-derived class.

  • @sledgex9
    @sledgex9 9 months ago +25

    Mistake at 03:46: if a derived class has the same function as the parent BUT the parent's function isn't marked virtual, it doesn't become virtual implicitly. The two functions co-exist simultaneously, and which one is called depends on the static type of the variable making the call: is it a base-class variable or a derived-class variable? Specifically, if the pointer is of type base*, you assign it an instance of the derived class, and you call the "overridden" function, it will actually call the base class's implementation. This is a subtle C++ pitfall that can lead to bugs.
    What you probably meant to say: if a base class marks a function as virtual, then it is implicitly virtual in every derived class. The derived class doesn't have to mark it virtual explicitly, but for clarity it should.

  • @jonbezeau3124
    @jonbezeau3124 9 months ago +49

    I teach college C/C++, and I think the most important part of introducing C++ is explaining the problems the language was written to solve; they were problems with large-scale software development, not writing faster algorithms. Projects were becoming really hard to manage: library name conflicts, simultaneous changes to common code stepping on each other's toes, human problems like that.
    Polymorphism lets developers build on top of each other's code in a way imperative code can't support. Someone could subclass your calculator, add/remove/change operators, and pass the new calculator into existing calculator-using code with no change to your calculator class, just by instantiating a different class in the code that depends on your calculator.

    • @teaser6089
      @teaser6089 Před 8 měsíci +12

      Indeed, it's a common theme with these CZcamsrs that they take very simple coding problems and think they can extend those experiences to all coding problems. But when you are not working alone on your own codebase and are instead part of a team of half a dozen, or god forbid dozens of people, and you don't implement polymorphism, you are going to grind progress to a halt, needing to rewrite a bunch of code, and any new members of the team will need to first spend weeks or months studying the entire codebase...

    • @hanspeterbestandig2054
      @hanspeterbestandig2054 Před 8 měsíci

      Bravo! 👍👏👏👏 Exactly!

    • @Turalcar
      @Turalcar Před 8 měsíci

      Polymorphism is all over the place in linux kernel (written in C). They just have to define vtables manually.
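
      A minimal sketch of such a hand-written vtable: a struct of function pointers, the same idiom kernel structures like file_operations use. The names here are illustrative; the code is C-style and compiles as both C and C++:

      ```cpp
      #include <cassert>

      // A "manual vtable": one function pointer per operation.
      struct op_vtable {
          int (*execute)(int a, int b);
      };

      static int add_exec(int a, int b) { return a + b; }
      static int mul_exec(int a, int b) { return a * b; }

      static const struct op_vtable add_ops = { add_exec };
      static const struct op_vtable mul_ops = { mul_exec };

      int main() {
          const struct op_vtable* op = &add_ops;  // implementation chosen at runtime
          assert(op->execute(2, 3) == 5);
          op = &mul_ops;
          assert(op->execute(2, 3) == 6);
          return 0;
      }
      ```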

  • @TheSimoriccITA
    @TheSimoriccITA Před 9 měsíci +5

    This video only shows a comparison where the C++ virtual approach is disadvantaged and doesn't count the disadvantages of the C switch approach, using that to arrive at the simplistic conclusion "virtual bad".
    In the example shown in the video, the number of operations and the operations themselves are very small, so the comparison here is "dereference a virtual table and call a function" vs "directly call a slightly bigger function". But in a more realistic context, with more derived classes and bigger methods, the C approach has a bigger switch that has to call other functions, which means two function calls in the C approach instead of one in the C++ approach, with all the overhead that entails (or you put the logic of all the cases in one function and have a massive switch). And THE CACHE MISS CAN HAPPEN IN THE C APPROACH TOO: the switch has to be loaded from memory, just like a vtable.
    The C switch has the advantage of being easier to optimize and can be faster, but you have to know and implement the call for every type of operation up front. This means it can't be used in situations like extending the functionality of an already compiled library.
    And lastly, which compiler and flags were used?

  • @givememorebliss
    @givememorebliss Před 9 měsíci +26

    Incredibly misleading. You're not comparing the same things. Full-fledged polymorphism with virtual calls cannot be compared to a case over an enum. If you were comparing C++ to a function pointer call (with the destination of the pointer compiled in a separate TU and the program compiled without link-time optimisation) in C, that could be considered "comparable" at least.

  • @NysShortCut
    @NysShortCut Před 9 měsíci +138

    In large projects, polymorphism can literally save tons of time and be really helpful for organizing the code. Things like OOP and design patterns are basically for large projects, not for something small like a calculator.

    • @calebsteinmetz9471
      @calebsteinmetz9471 Před 9 měsíci +52

      Exactly, they aren't done to speed up execution time, they are done to speed up development time

    • @eddiebreeg3885
      @eddiebreeg3885 Před 9 měsíci +18

      Thing is, they don't always make your code easier to write. Using polymorphism often comes from the assumption that your code has to be generic in order to be future proof and easily scalable. But the reality is: very rarely do you actually know whether you will need this extra scalability in the future, ESPECIALLY in large projects. Using stuff like enums, or sometimes type erasure, doesn't have to be that complicated (I mean, enums are really simple, seriously) and they can minimize the performance overhead. Polymorphism can be okay, if you KNOW that you need it for any given task. If you're just trying to plan ahead, chances are you will mess it up, because no one is that good. Maybe it's okay when you only have a few polymorphic classes with a single inheritance level... but when your WHOLE API is almost exclusively polymorphic, with sometimes 3 or 4 levels, it's a nightmare performance wise, for a minimal benefit.

    • @calebsteinmetz9471
      @calebsteinmetz9471 Před 9 měsíci +2

      @@eddiebreeg3885 Agreed, if you try to use polymorphism on everything you are going to have a bad time. I look at it more as a tool for very specific things. Any feature of a language can be bad if used without care.

    • @taragnor
      @taragnor Před 9 měsíci +11

      The thing with OOP is it's far more flexible. Imagine you wanted to add a new operation. Using the C method, you'd need to be able to directly alter the original source code. If you're using an external library, that may not always be an option. Under C++ polymorphism you can build a library that can be extended with new operations simply by extending the base class and you never need to alter the base code. And that's one of the main advantages of OOP, being able to create libraries of classes and functions that people can just drop into their programs and use.

    • @mskiptr
      @mskiptr Před 9 měsíci +2

      (or you could just use FP and have nice abstractions from the start)

  • @joeroeinski1107
    @joeroeinski1107 Před 9 měsíci +29

    Sounds deceptive to say it sucks. It doesn't. However, it's easy to misuse.

    • @homelessrobot
      @homelessrobot Před 9 měsíci

      some might say that tools that are easy to misuse categorically suck more than tools that aren't easy to use at all, though that wasn't his point, so that wouldn't help his case.

    • @valizeth4073
      @valizeth4073 Před 9 měsíci

      I mean this guy just doesn't know what he's talking about, which is rather apparent. This whole video carries the same argument as "random access data structures suck! they're slow!" when all he's doing is performing tons of insertions and deletions in the middle of the list. He takes one example, compares it to another completely abused case of a language feature, and draws the conclusion that it "sucks".

  • @sharksandbananas
    @sharksandbananas Před 9 měsíci +6

    This is a nice video but I think it's also a bit misleading...
    The C code isn't *really* polymorphic: every 'method' added has its own switch statement, every 'child' added is a new case for every switch in every function, and the monolith grows.
    And in the C++: of course it'd be silly to use that kind of indirection on the hot path (and std::visit() might be preferable).
    Either way, the compiler will optimize away where possible for the target platform, including switch statements.

  • @tomb5372
    @tomb5372 Před 9 měsíci +10

    I think you're missing the point a bit. Every competent C++ developer knows that virtual functions aren't the most efficient. Using them in "hot" code (e.g. tight loops) causes the "penalty" to exponentially grow. But it's a way to abstract things. The C example here doesn't really abstract anything, whereas the C++ example does. You could compile and run the same C example code in C++. You could change it to use classes (which are just structs after all). Heck, unlike C, you could even turn it into templates and do all kinds of fun stuff and get the compiler to inline the whole thing, making it potentially even more efficient than the C counterpart. Bottom line is, polymorphism doesn't really "suck" for this reason. The biggest complaint is the complexities that come into play when you inherit from multiple classes at the same time, which many languages don't even allow.

    • @zeez7777
      @zeez7777 Před 29 dny

      Why would it grow exponentially? It should be linear right?

  • @sinom
    @sinom Před 9 měsíci +16

    std::function, lambdas, concepts etc.
    v-tables and virtual are rarely actually required and are usually just a convenience tool for doing something in a specific way.

  • @MyManJohnny
    @MyManJohnny Před 9 měsíci +27

    I'd say that polymorphism doesn't necessarily suck. Poor use of polymorphism sucks. If your virtual function takes only a couple of cycles to complete, then it might not be a good idea to use polymorphism, but when it takes a couple of milliseconds, the slowdown caused by the extra lookup is negligible.

  • @EmilPrivate
    @EmilPrivate Před 9 měsíci +3

    Firstly, I'd like to emphasize that polymorphism, like any tool in the programmer's toolbox, has its optimal use cases. It's not inherently 'bad' or 'good', but should be used when appropriate for the problem at hand. An overly narrow focus on performance can inadvertently lead to premature optimization, and it's important to remember the classic Donald Knuth quote: "Premature optimization is the root of all evil".
    Polymorphism shines in scenarios where you need to process a collection of objects that share a common interface but have different internal behaviors. In these cases, it allows you to write more concise, readable, and maintainable code by avoiding verbose switch-case statements or if-else ladders. Although polymorphism introduces a performance overhead due to the indirection through the vtable, this is often insignificant in comparison to the cost of the actual function execution. And while it's true that in a tight loop this overhead can accumulate, this isn't where polymorphism typically brings the most value.
    To your point about how vtables are implemented, it's true that there's no standardization across compilers. However, most C++ compilers adopt similar strategies, and the vtable is usually stored separately from the object instance, with a pointer to the vtable added to the object's memory layout. This additional level of indirection can indeed add to the function call cost, and that cost is further compounded by the challenge for CPUs to predict dynamic jumps, as another commenter rightly pointed out.
    I also agree with the critique about the video's simplistic example. A more complex set of operations would have made the comparison more meaningful, as the benefits of polymorphism become more apparent when you're dealing with larger codebases and more complex behavior differences between classes.
    Finally, it's worth noting that C++ offers more than one way to achieve polymorphism. Beyond the classical (runtime) polymorphism we're discussing here, there's also compile-time polymorphism, also known as static polymorphism, via templates. The latter can eliminate the runtime overhead, at the expense of potentially increasing the binary size. It's yet another trade-off to consider based on the specific needs of your program.
    Overall, I think it's crucial not to dismiss polymorphism out of hand because of performance concerns. Instead, we should understand its costs and benefits, and use it where it makes the most sense.

    • @T0m1s
      @T0m1s Před 9 měsíci

      "Firstly, I'd like to emphasize that polymorphism, like any tool in the programmer's toolbox, has its optimal use cases" - name one.

    • @EmilPrivate
      @EmilPrivate Před 9 měsíci +1

      @@T0m1s
      Well if you had read a bit further than just the first few lines you'd see that I'm providing actual examples, starting at "Polymorphism shines in scenarios...". I will forgive you though since attention span is a common issue these days. Below is another example that will further elaborate this.
      Suppose we have a number of distinct shape classes, such as Circle, Square, and Triangle. Each of these shapes can be drawn and resized, but the specifics of how these operations are implemented vary between each shape type. By defining a base Shape class with virtual 'draw' and 'resize' methods, we can implement polymorphism.
      Through this approach, we're able to maintain a collection of Shape pointers, without needing to know the specific type of shape they point to. When we need to draw or resize a shape, we simply call the appropriate method. The correct implementation is chosen at runtime based on the actual object type, thanks to the magic of polymorphism.
      This not only makes our code more concise and maintainable (as we're not wrestling with unwieldy switch-case statements or if-else ladders), but also more extensible. If we decide to introduce a new shape type into our program, we simply create a new class that extends Shape and provides its own 'draw' and 'resize' implementations. No need to touch existing code.
      Polymorphism thus allows us to write code that is open to extension but closed for modification, embodying a key principle in object-oriented design, known as the Open-Closed Principle.
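
      A minimal sketch of the Shape hierarchy described above (the draw method here returns a string instead of rendering, purely so the dispatch is observable):

      ```cpp
      #include <cassert>
      #include <memory>
      #include <string>
      #include <vector>

      struct Shape {
          virtual ~Shape() = default;
          virtual std::string draw() const = 0;  // each shape draws differently
      };

      struct Circle : Shape {
          std::string draw() const override { return "circle"; }
      };

      struct Square : Shape {
          std::string draw() const override { return "square"; }
      };

      int main() {
          // A heterogeneous collection handled through the common interface:
          std::vector<std::unique_ptr<Shape>> shapes;
          shapes.push_back(std::make_unique<Circle>());
          shapes.push_back(std::make_unique<Square>());
          std::string out;
          for (const auto& s : shapes)
              out += s->draw() + " ";  // implementation chosen at runtime
          assert(out == "circle square ");
          return 0;
      }
      ```

      Adding a Triangle later means adding one new class; the loop above is untouched, which is the Open-Closed Principle in action.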

  • @01001000010101000100
    @01001000010101000100 Před 9 měsíci +18

    You explained very nicely why polymorphic code is slower, but it sucks only if you need to squeeze all possible performance out of the hardware for a specific task, which is pretty rarely the case. The most common scenario is that something (I mean, most of the code) is waiting for something slower, meaning it has more than enough time to not create any measurable lag. But yes, when you're optimizing a tight loop and every cycle matters, then good old C is the way to go. But again, it would be pretty tricky to provide the right example. In my practice, if I have a tight loop and I want to save every cycle, I just avoid calling anything, inlining as much as possible. And this can happen inside a C++ overridden method; it just doesn't matter as long as I don't call other methods from said tight loop. Sometimes I mix C with C++, where C++ is for general app logic and C is doing time-critical things. The common C++ usage is for tasks too tedious to code in C and usually not time critical, like UI and network. The code waits "ages" for something to happen, then you invoke a C function that does the heavy lifting, pushing gigabytes of data because the user just moved a mouse or clicked a button.

    • @teaser6089
      @teaser6089 Před 8 měsíci

      Yeah, the title should have been specific to C++; instead he chose to clickbait and generalize it to all code, which is not the case.
      If you need to develop software with a team of people, not using polymorphism will cause more problems than the potential performance loss, which in most real-world cases that aren't basic C++ scripts running on an Arduino doesn't exist in the first place.

  • @giorgiobarchiesi5003
    @giorgiobarchiesi5003 Před 9 měsíci +8

    Good point, but in the calculator example, the C code has conditional code that the C++ code does not have.
    Believe me, after having programmed in C++ for 30 years, I can assure you that the supposed overhead of polymorphism is negligible in the overall execution time of a real world program.

    • @maxp3141
      @maxp3141 Před 3 měsíci

      Depends on the specific case. If you loop over 100 million elements and call an add function (virtual or otherwise, as long as it's not inlined) in every iteration, then it's way heavier than just summing the numbers in the loop. As always, it's about choosing the right tool for the right job.

    • @giorgiobarchiesi5003
      @giorgiobarchiesi5003 Před 3 měsíci +2

      @@maxp3141 Ok, but this is why I specified “real world program”. I have no experience of real world programs that loop a million times calling a function that merely performs a sum. Functions are typically more elaborate than this, so I think that my reasoning is sound.

  • @KokahZ777
    @KokahZ777 Před 9 měsíci +7

    For such simple operations the vtable lookup adds 20% more execution time, but the truth is that in a real program it would not represent such a high proportion of the execution time. And the C code might be less easy to maintain down the line.

    • @taragnor
      @taragnor Před 9 měsíci +3

      Yeah most functions aren't going to be "return a + b;"

  • @haiphamle3582
    @haiphamle3582 Před 9 měsíci +11

    An alternative title: "How C++ implements polymorphism and its cost"

  • @SophieJMore
    @SophieJMore Před 9 měsíci +6

    Polymorphism doesn't need to involve dynamic dispatch though. In Rust, for instance, polymorphism can be achieved with generics. And the cool thing about generics is that at compile time they get monomorphized, i.e. the compiler will just copy-paste the function for every combination of concrete types it's called with, so you end up with roughly the same setup you had in C, with no performance cost.
    I haven't had much experience with C++, so I don't know how templates work there, but presumably it would be possible to do the same thing.
    Also, there are situations where dynamic dispatch is needed, so if you were writing it in C you'd use function pointers and it would be just as slow as virtual methods.
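
    For what it's worth, C++ templates do monomorphize the same way. A minimal sketch (Add/Mul/apply are illustrative names, not from the video):

    ```cpp
    #include <cassert>

    // Each operation is a function object; the template below is stamped out
    // (monomorphized) once per operation type, so every call can be inlined
    // statically - no vtable, no indirect call.
    struct Add { int operator()(int a, int b) const { return a + b; } };
    struct Mul { int operator()(int a, int b) const { return a * b; } };

    template <typename Op>
    int apply(Op op, int a, int b) {
        return op(a, b);  // resolved at compile time for each instantiation
    }

    int main() {
        assert(apply(Add{}, 2, 3) == 5);
        assert(apply(Mul{}, 2, 3) == 6);
        return 0;
    }
    ```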

  • @davivify
    @davivify Před 8 měsíci +3

    I LOVE polymorphism. Let me tell you why. I've used it in providing an undo/redo feature, as well as in writing an arithmetic parser. It elegantly separates the overall design from the implementation of nodes. So if I have to fix nodal behavior or add new node types over time, I don't have to touch the top level architecture. It. Just. Works. Yes, there is a small performance hit but it's more than worth it to me to have more readable, more maintainable code. That said, one alternative is to use name overloading (what I like to call "static polymorphism"), which is resolved at compile time and solves the performance hit.

  • @enderger5308
    @enderger5308 Před 9 měsíci +5

    C can be written in OO style (see GObject), it just requires you to manually implement the pattern. An interface in C would be a structure containing function pointers constructed by object specific functions.
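
    A minimal sketch of that pattern: each object carries a pointer to its interface (a struct of function pointers), set by a type-specific constructor function. All names here are illustrative; the code is C-style and compiles as both C and C++:

    ```cpp
    #include <cassert>

    struct animal;
    // The "interface": a struct of function pointers taking the object itself.
    struct animal_iface { int (*legs)(const struct animal* self); };
    struct animal { const struct animal_iface* iface; };

    static int dog_legs(const struct animal* self) { (void)self; return 4; }
    static const struct animal_iface dog_iface = { dog_legs };

    // Type-specific constructor wires the object to its interface.
    static struct animal make_dog(void) {
        struct animal a;
        a.iface = &dog_iface;
        return a;
    }

    int main(void) {
        struct animal a = make_dog();
        assert(a.iface->legs(&a) == 4);  // late-bound call through the interface
        return 0;
    }
    ```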

  • @justaboy680
    @justaboy680 Před 9 měsíci +431

    Not many people can make an 8min long video about polymorphism and such low-level details and still keep it interesting. Well done, bro.

    • @LowLevelLearning
      @LowLevelLearning  Před 9 měsíci +33

      Wow, thanks!

    • @valizeth4073
      @valizeth4073 Před 9 měsíci +16

      Just unfortunate that he was wrong regarding pretty much everything.

    • @justaboy680
      @justaboy680 Před 9 měsíci +2

      @@valizeth4073 I found the video very informative and engaging. Can you specifically address where he went wrong?

    • @ccreutzig
      @ccreutzig Před 8 měsíci +2

      ​@@justaboy680 E.g., the claim that because C is not an OO language, you cannot write OO code in it. Of course you can. What do you think fopen/fread/fclose do, if not return and work with an object handle with virtual member functions?

    • @justaboy680
      @justaboy680 Před 8 měsíci

      @@ccreutzig I'm not aware of the implementation details of file utilities. What virtual member functions are you talking about?

  • @drno87
    @drno87 Před 9 měsíci +4

    I used polymorphism in writing simulations where the code paths were things like "run this CFD code that takes 20 min" vs "run this approximation that still takes 30 seconds". It made for convenient organization where the child classes would track whatever extra data they needed. The overhead in using virtual functions maybe added a few millionths of a percent to the overall runtime of that particular feature.

  • @Cjw9000
    @Cjw9000 Před 9 měsíci +34

    It's not always about performance, you have to choose wisely what you want to use for your particular use case. If your use case requires raw performance, OOP may not be the way to go. But if you don't want to spend 5 hours debugging your C code, maybe OOP is the way to go.
    Still, it's very nice to see someone explaining the implementation details of C++. Great work!

    • @filipg4
      @filipg4 Před 9 měsíci +8

      Why would procedural code be a mess? It's usually really flat, as opposed to OO code, which in my experience makes it a lot easier to debug; there's a lot less to worry about overall, and functions are the only abstraction mechanism. Sure, the C++ compiler is better, which is why people write C-style C++ over C.

    • @godnyx117
      @godnyx117 Před 9 měsíci

      Or you choose a language that doesn't suck. OH, WAIT!!!! There isn't any!
      Until I finish and publish Nemesis that is!

    • @Cjw9000
      @Cjw9000 Před 9 měsíci +4

      ​@@filipg4 I didn't mean procedural code in general, but highly optimized code.

    • @empireempire3545
      @empireempire3545 Před 9 měsíci +10

      "if you don't want to spend 5 hours debugging your C code, maybe OOP is the way to go. " - OOP changes 5 into 50-500 hours, which is good for job stability

    • @lazergenix
      @lazergenix Před 9 měsíci +1

      I only use virtual functions for my callback classes; it's nice to have syntax that wraps up functions in a single class and doesn't need C's terrible function pointer syntax. Never heard of someone using it for "debugging" reasons.

  • @ribamarsantarosa4465
    @ribamarsantarosa4465 Před 8 měsíci +2

    Respect for taking the time to make such a critical video. A suggestion I'd make when writing critical things about a programming language is to make clear when the criticism is of the language itself (e.g. syntax), and when it is of implementation aspects. Your criticism is of an implementation aspect that can be optimized by a compiler. So, if you think about it, your criticism isn't really of (C++'s) polymorphism.

  • @on-hv9co
    @on-hv9co Před 8 měsíci +2

    About the v-table: if you use the final specifier on virtual functions, this (amongst other things) gives the compiler an opportunity to "devirtualize" the function.

  • @felipelopes3171
    @felipelopes3171 Před 9 měsíci +9

    Well, this has been discussed dozens of times already. Essentially, the example you constructed is specially crafted to make polymorphism look bad.
    If you want to do a simple calculator, the operation can be done in a single instruction and fits in a register.
    For the types of code OOP was designed for, though, it has to search data structures that use memory allocated on the heap, fetch data from storage, invoke kernel syscalls, etc., and all of this makes the overhead of calling through a vtable negligible.
    And even in the case where you find yourself needing the CPU to call an operation millions of times, you could just dump the class structure with other code, translate it to C-style opcodes, and run that instead.
    This is what any numerical library in a virtualized language does, btw. And if you do your calculation in the methods it provides, instead of using the language directly, it's just as fast as C code.
    At the end of the day, the only performance penalty you cannot get away from when using higher-level abstractions is the start-up time it takes to initialize all the stuff, and a fixed amount of memory to store everything.
    I definitely think you could provide better content to your viewers than something that was already exhaustively discussed in the '80s.

  • @junbird
    @junbird Před 9 měsíci +2

    Fun fact: doing all of that stuff in C is completely feasible, although it's inconvenient (lots of boilerplate code is necessary; C just is not object oriented) and unsafe (you need to do type checking at runtime). I actually discovered these things on my own a few years ago, when I first began studying OOP and tried to replicate it in C, which was the only language I was familiar with at the time. When you actually implement this stuff, you realize how wasteful it is.
    Btw, please note that there are many types of polymorphism. This is specifically what's known as inclusion polymorphism, which as far as I know is the only kind that has to be implemented with these runtime mechanisms. However, polymorphism is not exclusive to OOP, and things like ad-hoc polymorphism (i.e. function overloading, or even traits in Rust) and parametric polymorphism (i.e. templates) are usually implemented statically. I know there are a few other kinds of polymorphism, but I don't know much about those.

  • @stuartlovett
    @stuartlovett Před 9 měsíci +2

    Enjoy your videos, it's great to see videos explaining the inner workings of the stuff that we just take for granted now.
    I think you may have misspoken a couple of times around 3:32. The "=0" is not needed in order for the compiler to create the v-table entry; it just indicates that the class is only to be derived from and that the derived class must implement this pure virtual function. Non-pure virtual functions (without the =0) need not be implemented in the derived class if the base implementation is appropriate.
    You also say that the virtual keyword is not needed and that by simply declaring a function with the same signature in the derived class, the compiler will assume the base method to be virtual. I think this is true in some other OO languages, but not in C++. You cannot make a base class method virtual without specifying the virtual keyword in the declaration in the base class.
    It is true that you can declare a function with the same signature in the derived class - this is known as "hiding" (which is not the same as "overloading" or "overriding") - but it will not get an entry in the v-table and will therefore not get called by a method invocation through a base class pointer.
    The following g++ program demonstrates this:

    #include <cstdio>

    class base
    {
    public:
        void identify(void) { printf("%s\n", __PRETTY_FUNCTION__); }
        virtual void virt_identify(void) { printf("%s\n", __PRETTY_FUNCTION__); }
    };

    class derived : public base
    {
    public:
        void identify(void) { printf("%s\n", __PRETTY_FUNCTION__); }
        virtual void virt_identify(void) { printf("%s\n", __PRETTY_FUNCTION__); }
    };

    int main(int argc, char *argv[])
    {
        derived d;
        base *b = &d;
        b->identify();        // calls base::identify() (not a virtual function)
        b->virt_identify();   // calls derived::virt_identify() (via vtbl)
        d.identify();         // calls derived::identify() (hides base::identify())
        d.base::identify();   // calls base::identify() (explicitly)
        b->base::identify();  // calls base::identify() (explicitly, bypasses the vtbl)
        return 0;
    }

    Program output:
    void base::identify()
    virtual void derived::virt_identify()
    void derived::identify()
    void base::identify()
    void base::identify()

  • @didgerihorn
    @didgerihorn Před 9 měsíci +5

    The basic point you're trying to illustrate about the vtable is correct, but I think we have to clarify a few things. First of all, this is not about C++ but about the code style you're using (runtime polymorphism). If you hadn't used a manual way of creating a function pointer, the compiler would most likely have resolved the polymorphism at compile time. Speaking of compile time, you could also use templates to enforce compile-time polymorphism. And modern CPUs are good at resolving indirections like vtables, too.
    All in all, OOP and polymorphism can be great for structuring code, but I think they're overused.

  • @laenprogrammation
    @laenprogrammation Před 9 měsíci +1

    This is an extreme example: your functions are so small that they hardly perform any work. You could even say "OMG THIS IS SO SLOWWWWW, you actually have to do a context switch to call your functions, that is really slow."
    This "performance drop" is hardly noticeable when your functions actually perform some work.

  • @anacierdem
    @anacierdem Před 9 měsíci +11

    C++ as a language is capable of solving the problem introduced in the video faster than C, contrary to the argument. If you need a "dynamic" operation (like a custom operation for a mapper or a predicate for a filter), in C you are stuck with function pointers (the given enum example is unrelated to the problem space IMO; it is fully static and not bound late at execution time), and they cannot be easily inlined by the compiler as they are only known very late, at execution time. C++, OTOH, is capable of doing the inlining at compile time thanks to the superior type system. Overall I think this problem is a pretty bad example for polymorphism. It is just not the correct tool for this use case. Of course there are many ways of achieving the same thing, some of them will perform slower, and that is something to consider when picking a solution. For the provided example, a compile-time polymorphism strategy (i.e. changing the operation at compile time) would perform the same as the C version provided. Note that the C version is not as versatile, as you cannot simply provide the operation at the call site; without a function pointer you'd need to modify every place you want to use a new operation.

    • @skejeton
      @skejeton Před 8 měsíci

      it can't really inline without LTO; if you define it in the header, it can inline it, but the same would apply to C if you use a custom vtable

  • @rastersoft
    @rastersoft Před 9 měsíci +12

    I think that the concept of this video is wrong from the start. I mean: polymorphism sucks... if you need performance. But when what you need is easily maintainable code, and performance is not a problem, it is a good tool (but don't forget what happens when you only have a hammer...). If performance were the only metric, we would use only assembler/C, and never languages like Python, Perl or Java.

    • @lazyh0rse
      @lazyh0rse Před 9 měsíci

      The C code in the video already looks so much uglier than the C++ version. Just imagine building huge libraries such as image adapters. It would look like spaghetti for sure in the C code.

    • @xKeray
      @xKeray Před 9 měsíci

      lmao, we would absolutely NOT stop using java if performance was the only metric - we would stop with enterprisey patterns though

    • @jonhdoe4119
      @jonhdoe4119 Před 9 měsíci +1

      Meh. Try writing Java on a memory-constrained device or for real-time systems.

  • @ksbs2036
    @ksbs2036 Před 9 měsíci +1

    It's the execution pipeline stall/flush/reload that burns the most CPU, not the vtable lookup. However, polymorphism can eliminate huge swaths of complexity in large programs when used appropriately. It's just a tool. Not a panacea

  • @TheBitKrieger
    @TheBitKrieger Před 9 měsíci +4

    This is a moot point: sure, if your method contains only one line then yeah, it will be slower, but if you write code like that, you either aren't a good coder or you are writing Java ;)

  • @DjoumyDjoums
    @DjoumyDjoums Před 9 měsíci +1

    Hence why devirtualization is a big subject for compilers, making this problem disappear when possible. Using sealed classes at the end of the inheritance chain helps for devirtualization.

  • @mwwhited
    @mwwhited Před 8 měsíci

    In many cases the compiler can resolve this if you turn on devirtualization optimizations. You can even provide better hints by finalizing your implementations. And even in non-optimized code, the flexibility provided to the developer far outweighs the performance loss due to virtual table lookups, even in extremely tight processing.

  • @JubilantJerry
    @JubilantJerry Před 8 měsíci +1

    The real problem is actually the loss of a lot of function inlining opportunities. Even with a switch table, the compiler is aware of the complete set of behaviors and can inline the function calls. The situation is even better with compile-time polymorphism like CRTP, which often produces the most optimized binary. But with a vtable, the compiled binary has to remain compatible even with derived classes written later on that can arbitrarily override the methods. Devirtualization can sometimes overcome this problem if classes are declared final, but it doesn't work that well, and coders often don't bother marking classes final.
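
    A minimal sketch of CRTP (the Curiously Recurring Template Pattern) mentioned above; OperationBase/Add and the method names are illustrative:

    ```cpp
    #include <cassert>

    // The base is templated on its own derived type, so it knows at compile
    // time which implementation to call - no vtable, and the call can inline.
    template <typename Derived>
    struct OperationBase {
        int execute(int a, int b) {
            return static_cast<Derived*>(this)->execute_impl(a, b);
        }
    };

    struct Add : OperationBase<Add> {
        int execute_impl(int a, int b) { return a + b; }
    };

    int main() {
        Add add;
        assert(add.execute(2, 3) == 5);  // statically dispatched
        return 0;
    }
    ```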

  • @anon_y_mousse
    @anon_y_mousse Před 9 měsíci +2

    I've actually written a lot of OO code in C. Depending on how you write it, it can be horrific or reasonably pleasant. When I first wrote my data structures library I implemented it in a somewhat generic way with loads of function pointers and size fields and type ID's in nearly everything. It was really easy to work with, but it kind of looked fugly. At some point I implemented some standard types to use as templates, and it was huge. Not that it ran particularly slow or anything, or produced bloated binaries, but it was a lot to maintain. For the second version I just said "fuck it" and implemented raw data copies and had the user provide a void pointer and a size. Let them figure it out.

  • @TheAudioCGMan
    @TheAudioCGMan 9 months ago +1

    You can see how atoi is called in the loop to parse the second argument again and again ... I assume that's more operations than the virtual function

  • @DogeOfWar
    @DogeOfWar 9 months ago +3

    Is there any overhead to using classes without polymorphism in this case? I.e. if you had separate ADD/MUL classes without an Operation virtual execute function does it still perform much worse than the C code you used? I personally think the class structure looks cleaner and is easier to maintain so if you had to compromise somewhere in the middle between flexibility and performance it would be interesting to see if there is a middle ground.
    Thanks for the vid btw, always enjoy your uploads!

    • @GabrielGonzalez2
      @GabrielGonzalez2 9 months ago +2

      In this specific example, there would still be function call overhead but no virtual function overhead. Functions, Virtual or not, are exactly the same as C functions. You also only have virtual call overhead when you are calling virtually. So like in the C++ example, if you call execute on the Add class directly (without a pointer), there would be no virtual overhead. There's also other cases where the compiler knows the true type of your object at compile time so it can optimize the virtual call away.
      Also unless you're doing something absolutely performance critical or on super tiny embedded devices, you don't need to worry about virtual call overhead. The actual work you do in your functions will almost always eclipse the overhead of a virtual call where you can barely tell it's there. Remember to avoid premature optimizations, and to use benchmarks to see if you actually need to optimize and where.
      There is a way to achieve compile-time polymorphism through a technique called "CRTP", although it's limited to *only* compile-time polymorphism, which this example could work with but I'd have to check and see.

    • @DogeOfWar
      @DogeOfWar 9 months ago

      @@GabrielGonzalez2 Thanks!

  • @karwszpl5117
    @karwszpl5117 9 months ago

    Thank you for awesome video!

  • @kaikalii
    @kaikalii 9 months ago +4

    It would have been nice if you had said something about static dispatch and monomorphization, which languages like Rust do by default. For those that don't know, it creates a copy of your code for each type that you pass, so you get all the benefits of polymorphism without this runtime overhead.

  • @SegamanTV
    @SegamanTV 9 months ago +4

    the vtable after one access will remain in the CPU cache for faster access.
    if the code is not very complex, this will not cause any issues at all.
    but if there are a lot of different polymorphic classes that take a lot of memory and need to be accessed very often - in that case yes, it can cause slowdowns, but usually it is not a big deal.

  • @AleksiGron
    @AleksiGron 9 months ago +1

    Also good to note the performance impact of branch prediction misses. In normal code, the CPU is trying to predict whether a jump is going to happen, but it always knows the destination of the jump. With virtual functions the CPU knows that we will be jumping but it is trying to predict the destination of the jump, because the address is loaded from memory. When a branch destination is mispredicted, the CPU first needs to load the instructions for the correct branch, and only then can it start loading any data the instructions might require. The performance is also dependent on how predictable the function usage pattern is.

  • @EUPThatsMe
    @EUPThatsMe 9 months ago +1

    "C++ is just a bunch of compiler tricks" - old CS prof. There was a time when C++ was just another pre-processor feeding a C compiler. Most of the useful parts of C++ can be had in C by passing an instance of the "object's" data as the first parameter to each of the "object's" methods (functions).

  • @hexadecimalpickle
    @hexadecimalpickle 9 months ago +28

    Nice video! There's a miss at 2:15: C++ classes are not C structures with function pointers. That's true only for polymorphic classes - and not even for all types of polymorphism, if we have to be picky. C++ classes are basically just structures, with member functions being normal functions with an implicit "this" argument. It's also worth mentioning the use of the "final" keyword can help assisting the compiler in optimising away virtual table lookups when virtual functions are called via the final implementation. Still, I prefer the C way as it doesn't add hidden data members and doesn't perform the shenanigans I described in this comment. Much easier to reason about.

  • @janoschreppnow3785
    @janoschreppnow3785 9 months ago +42

    To be fair to all those C++ code bases littered with virtual functions and interfaces: With C++ not having a clean way to define restrictions (SFINAE my ..) for generic/template parameters until C++20 (and I have not actually seen concepts used in any productive code base to this day..), it was often the easiest way to get at least somewhat sane compiler messages, even though static dispatch or an enum style solution would have been perfectly possible for a lot of these usecases. Rust (btw) does it kinda cleverly by combining dynamic and static dispatch into Traits, although using trait objects is somewhat frowned upon at times.

    • @totof2893
      @totof2893 9 months ago

      In C++17 you can use std::variant and std::visit to do static dispatch with clean type like Rust enum.
      C++20 concepts are syntactic sugar from my point of view.
      Almost everything can be written and read easily with constexpr and static_assert.
      And SFINAE can be used to test the existence of an expression and transform it into a trait (a la std::is_arithmetic).

    • @alexsarbu3978
      @alexsarbu3978 9 months ago +3

      To be fair you can't really do generic programming in C ;)

    • @Baptistetriple0
      @Baptistetriple0 9 months ago

      I don't know why you are saying that trait objects are frowned upon; lots and lots of widely used libraries lean on them, tokio for example. Dive into any async library and you will see Pin everywhere (often aliased as BoxedFuture), it is just so much easier to work with.

  • @MrFlugi
    @MrFlugi 9 months ago +1

    this video is an example of "the tool X is bad because I can use it wrong on purpose if I want!" reasoning. No C++ programmer would use polymorphism in a real world scenario like this. The best case scenario I can think of is someone saw an educational code sample which had to fit on a single screen, and thought that the example is the recommended usage, not just the syntax.

  • @falahati
    @falahati 9 months ago +3

    this is purely theoretical. in the majority of real world programs, the actual function is so much more packed with memory lookups and operations that the single additional pointer dereference's effect on the final performance is insignificant. on the other hand you get clean and maintainable code. weighing both sides of this scale, my choice personally would rarely be different. in other words, "polymorphism sucks" is a very big claim to be fully settled with a 25% performance penalty on a function that has no parameters and does a simple mathematical operation.

  • @jaysistar2711
    @jaysistar2711 9 months ago +6

    In modern C++, one could use a concept for polymorphism, but in Rust switching between a v-table based dynamic polymorphic dispatch and a template based static dispatch is much easier, and with no wrappers.

    • @muadrico
      @muadrico 9 months ago

      Yes, I also thought of using CRTP.

  • @nullderef
    @nullderef 9 months ago

    Does the additional store to memory used in the C++ version also not impact the runtime? That's 2 whole additional writes for the caller and 2 more reads for the callee. Especially on x64 where the calling convention uses registers for integers anyway, so if execute() took the two integers instead the difference between these might be even smaller.

  • @velho6298
    @velho6298 9 months ago +2

    I suppose you could always try to emulate how C++ implements vtables in C. The indirection will always be slower compared to the enum.
    edit: try godbolt next time

  • @ngortheone
    @ngortheone 9 months ago +2

    @LowLevelLearning: this is a bit one-sided and misses the point. Polymorphism sucks when you don't know how and when to use it. You have reviewed one of 2 ways of doing polymorphism. This particular approach relies on dynamic dispatch (at runtime), in C++ implemented via class inheritance. This will always be slower for the reasons mentioned in the video, but it has its benefits - your code doesn't have to know the concrete type of the object. Arguably this approach is not well suited for a calculator because you know all supported operations ahead of time.
    There is another approach to polymorphism - static dispatch, or compile-time monomorphisation (aka templates/generics) - which does not have a performance penalty. This video could be greatly enhanced by featuring the second approach and giving viewers the full picture.

  • @cdarklock
    @cdarklock 9 months ago +1

    "I am building a small project by myself that performs a well-defined function. Here's why it's a Bad Idea to use a language designed for large projects with large teams that will be frequently modified to account for changes in the business landscape, using metrics appropriate to small solo projects of course"

  • @mikkelens
    @mikkelens 9 months ago +1

    I think the title is misleading: Polymorphism is more than just inheritance. A different type of polymorphism is using generics/having type arguments. In Rust, this complexity is resolved ("monomorphized") during compile time, removing the overhead that inheritance would have in this instance. Of course generics are pretty different to work with than virtual methods, but "polymorphism" was the chosen word. Another example could be using interfaces (dynamic dispatch) and traits, that can also be considered polymorphism and in certain languages (say, rust) also have no runtime overhead.
    A title that would be less of a problem in my opinion could be "why does inheritance suck for runtime performance?", or "why does inheritance suck?" for short.

  • @Nanagos
    @Nanagos 9 months ago +3

    I'm reading a book about C++ written by Bjarne Stroustrup (the creator of C++), and he says this about virtual functions: "The mechanism of calling virtual functions is almost as efficient as calling 'normal' functions (within 25%), so that efficiency questions shouldn't scare anybody off to use a virtual function, where a normal function call is efficient enough."

    • @zemlidrakona2915
      @zemlidrakona2915 9 months ago +1

      The main problem with virtuals is the compiler generally can't inline. That's often a big part of the speed difference. However virtuals are still a good option in many places.

  • @georgehelyar
    @georgehelyar 9 months ago +1

    I once wrote a small C application for a friend using function pointers in structs just to troll them. Good times.

  • @rumisbadforyou9670
    @rumisbadforyou9670 9 months ago +3

    What about polymorphism without inheritance / virtual functions? Which is how polymorphism is usually used IRL.

  • @welehcool2522
    @welehcool2522 9 months ago +9

    OOP is just a tool. It will serve you great if you know how to use it.
    PBRT uses OOP a lot.
    It was really difficult when I tried to port their code to Vulkan and GLSL, but then I realized why they chose to use OOP.

    • @rursus8354
      @rursus8354 9 months ago

      OOP is the analysis. Using objects is just common sense.

  • @emjizone
    @emjizone 9 months ago

    Assuming the vtable is static, why doesn't the compiler pre-compute the resolved pointers and write them as static pointers at compile time?

  • @TsvetanDimitrov1976
    @TsvetanDimitrov1976 9 months ago +7

    I'd argue that C is the most polymorphic language, since probably the most used type is void* xD

  • @ultimatesoup
    @ultimatesoup 8 months ago

    A single virtual call is essentially free, it's just a pointer indirection. It's when you have many many calls that you start to see the impact. With modern c++ you can use some template tricks to get rid of it at compile time in most cases

  • @nirajandata
    @nirajandata 9 months ago

    just 2 hours before this video was uploaded, I was watching a year-old CppCon video on alternatives to virtual methods

  • @Mitsunee_
    @Mitsunee_ 9 months ago

    I find it very interesting how classes in C++ basically work like classes used to with Babel before JavaScript had its native classes, where classes were transpiled to an implementation using prototypes and a WeakMap.

  • @SimGunther
    @SimGunther 9 months ago +1

    The performance tradeoff for obeying "more readability by polymorphism" is just not worth it and there's a better perspective on the problem that can help you use other design patterns instead of polymorphism or switch cases.

  • @mrx10001
    @mrx10001 9 months ago

    wouldn't it be faster if the compiler just auto converted the function calls into an enum based check that directly calls the function? Or would the casting still be equivalent to a x20 perf loss?

  • @michalwa
    @michalwa 9 months ago +1

    It's unfortunate how a lot of videos/resources treat polymorphism and virtual dispatch as synonymous. Polymorphism is a much broader topic, this reasoning really only applies to dynamic/runtime polymorphism and a poor implementation at that.

  • @Krunklehorn
    @Krunklehorn 8 months ago

    What if a language hardcoded the inherited functions into derived classes during compile time?
    I assume there are clear downsides to this approach, so what if a language at least gave you the option?
    Are there any languages that have tried this?

  • @Mathhead2000
    @Mathhead2000 9 months ago +1

    I don't think overridden functions are virtual by default. Unless the specs have changed, if you override without using the virtual keyword, it only uses static analysis to determine which method to run. I.e. no v-table.

    • @muadrico
      @muadrico 9 months ago

      Yes. You are right, they are not.

  • @WRXDannyW
    @WRXDannyW 9 months ago +1

    This is not saying that C++ is slower than C. It's just that this is a great explanation of why hand coded switch statements can be faster than virtual functions, both of which you can do in C++ also. Virtual calls certainly need to be eliminated from performance critical areas of your code. In this use case I would use templates rather than virtual functions to achieve the same thing with probably better performance than the C code.

  • @GrantCelley
    @GrantCelley 8 months ago

    I am a hobbyist programmer specializing in NLP. Classes and Python are a godsend. The one thing holding me back is usually getting the code down, and polymorphism and computationally heavy approaches (as in needing more compute to compile and interpret) are way better for me, because I am the bottleneck more than the computer. Also, I can't run C code on Google Colab.

  • @phongkyvo4383
    @phongkyvo4383 9 months ago

    1. You can do the same as C in C++: declare a struct and use a switch-case.
    2. Use polymorphism to help the programmer, not to improve performance. When there are new requirements, instead of having to fix all the switch-case code in places only God knows, the programmer simply inherits.

  • @jurekrasovec
    @jurekrasovec 9 months ago

    I am tempted to say write this in C, not C++, as it can be done, but as you pointed out it would be ugly. So maybe you can do a video on how to write a base class, and a class that inherits the base class, in C ... and the code looks (well, it won't look pretty) at least readable? :D

    • @junbird
      @junbird 9 months ago +1

      Doing inheritance is actually very basic. Let
      struct Parent
      {
          int x;
      };
      you'd define an extension of this type as:
      struct Child
      {
          struct Parent p;
          int y;
      };
      It's important that struct Parent is the first field of struct Child: a struct is simply a contiguous buffer (much like an array), so a pointer to a struct Child is also a valid pointer to its leading struct Parent. The whole trick is to cast a struct Child pointer to a struct Parent pointer. For example, given
      void f(struct Parent*);
      struct Child c = {0};
      you could call f on c:
      f((struct Parent*) &c);
      Now, struct Parent could also contain a function pointer, such as
      struct Parent
      {
          void (*print)(struct Parent*);
          int x;
      };
      The print member points to a function which expects a pointer to struct Parent as its only input. If you initialize print differently for struct Parent and struct Child instances, you can then have a dispatch function such as
      void print(struct Parent *self)
      {
          self->print(self);
      }
      print does not provide an actual implementation, which is expected to be determined by whatever self points to (either a struct Parent or a struct Child). So, let
      void printParent(struct Parent *p)
      {
          printf("I'm a parent and %d", p->x);
      }
      void printChild(struct Parent *self)
      {
          struct Child *c = (struct Child*) self;
          printf("I'm a child and %d %d", c->p.x, c->y);
      }
      struct Parent p = {
          .print = printParent,
          .x = 4
      };
      struct Child c = {
          .p = {
              .print = printChild,
              .x = 4
          },
          .y = 3
      };
      The output of
      print(&p);
      would be "I'm a parent and 4", while the output of
      print((struct Parent*) &c);
      would be "I'm a child and 4 3".
      Here you have a function which executes different procedures based on what type of argument you pass to it. This is extremely basic, you can go MUCH deeper, but as you can see this is already ugly enough (and it has no type safety; as I mentioned in a previous comment, you'd need to check for types at runtime, having each instance of each class contain a specific pointer which represents that class - what some refer to as the metaclass - basically rendering C a dynamically typed language). Believe me, you could write fully featured class-based object-oriented code in C (vtables and all), it's just that there's no point to it (if you want an OO C, just go with C++), other than for learning purposes.

  • @axelbagi100
    @axelbagi100 8 months ago +1

    We can just use templates which can be used for compile time inheritance basically
    And with the addition of concepts it can be done without gouging your eyes out 😂

  • @2sourcerer
    @2sourcerer 9 months ago

    How do we set up the tooling to examine code in assembly as it is done at 5:49?

  • @ambuj.k
    @ambuj.k 9 months ago

    Hey, I really love your videos but from one vim user to another, what is your colorscheme?

  • @romsthe
    @romsthe 9 months ago +1

    What happens if you declare the function arguments and also the operations themselves as const, which they should be, what happens then when you compile with optimizations ? Could you compare the assembly of both ? C++ might surprise you here ;)

    • @johnadriaan8561
      @johnadriaan8561 8 months ago

      While you're right that adding `const` in general has the possibility of improving compiled code, in this contrived example there's no benefit to be had. There's no point in passing the arguments as `const`, since they're passed by value. Declaring the functions themselves as `const` also wouldn't help in this example, where OP is complaining about the indirect virtual call, rather than the compiler's selection of which function to use (which is what `const` helps with).
      On a more abstract level, it's also (often) a mistake to declare a virtual base function as `const` unless you're trying to establish a guarantee. A derived implementation (which would also have to be declared `const` to override the base class) might want/need to modify its members for some reason. Sure, it could declare those members as `mutable`, but that has its own philosophical problems.

  • @chinoto1
    @chinoto1 7 months ago

    It's not always a possibility depending on the language and use case, but monomorphization can help a lot.

  • @soumen_pradhan
    @soumen_pradhan 9 months ago

    But consider a container with multiple objects that share some base functionality. Inheritance helps with that.
    All in all: if your types are increasing but the methods remain the same, use inheritance.
    If the methods are increasing but the types remain few, use variant / switch.

  • @BinGanzLieb
    @BinGanzLieb 9 months ago +2

    what about using an array of function pointers in C instead of a switch-case statement?

  • @informagico6331
    @informagico6331 9 months ago

    The end of the video:
    "do you C what I mean?"
    Man 😂😂😂

  • @loc4725
    @loc4725 9 months ago +1

    Not you but there seems to be a few people on the Internet who either hate OOP or are just obsessed with raw performance, and this is usually marked by the extensive use and emphasis of edge cases.
    In the real world getting working, maintainable code out of the door quickly is far, far more important. If it takes 0.2 seconds longer to execute then most of the time it simply doesn't matter.
    Good video though.

  • @qwertyq3889
    @qwertyq3889 9 months ago

    Hmm, but does it always work through these pointers? For example, function templates are split into a set of compile-time generated functions, each treating one specific combination of template parameter types. Why can't we resolve the class hierarchy, find the code for the one and only parent function, and inline some asm at compile time instead of playing around with ptrs at run time, if the pointer/code we'd end up with would be the same every time? I guess that would be an obvious optimisation to implement in the compiler.

    • @qwertyq3889
      @qwertyq3889 9 months ago

      Looks like there are some opt passes, like pass_ipa_devirt which works on call graph in gcc, and DevirtModule in llvm which inserts function as basic block. So, with enough patience and alcohol level, this ptr dereference/call could be optimized into something not so mem op hungry. Some short functions would benefit of this, especially if their text "payload" could fit into cache line.

  • @user-ni9tf5yr6m
    @user-ni9tf5yr6m 9 months ago

    > it's important to note that by default functions with the same signature in a derived class are virtual in the parent by default
    Does this mean the base class becomes polymorphic when a derived class overrides any of its methods (has at least one method with the same signature)?
    OR
    Does the derived class just hide the base class methods, with no vtables created?
    I need confirmation 🙂

    • @tracevandyke2009
      @tracevandyke2009 9 months ago +1

      What he said was actually not true. In C++, a base class method cannot be virtual unless it is explicitly marked virtual. A derived class method will only be virtual if it has the same signature as a base class virtual method.

  • @alexaneals8194
    @alexaneals8194 7 months ago

    I think one misconception is that object-oriented or procedural code is tied to a language. Part of it is the marketing of the languages. Ultimately, all code is compiled to ML (machine language), even if it goes through multiple layers. No one would say that Assembly (which is 1-to-1 with ML) is object-oriented, but object-oriented code is still implemented in Assembly (or ML). I can write code that is completely procedural in Java just by putting the entire contents of the code in static methods called by the static main function. The code is written in an object-oriented language and has at least one class; however, it is not object-oriented code. Same with C: I can package the data and functions together and use function pointers to create object-oriented code, despite the fact that C is a procedural language. Object-oriented, procedural, functional, etc. are programming paradigms, and languages can facilitate the use of one or more of those paradigms; but with the exception of a language that completely enforces one of those paradigms (which C does not), they can be implemented in most of the languages out there.

  • @twochilis6763
    @twochilis6763 9 months ago

    This omits the fundamental advantage of polymorphism, which is that you can call methods on interfaces without knowing all the possible object types, and extend the application without recompiling. For example, you can have code that calls a method, and the object it is called on is provided by a shared library you load at runtime. The implementation may not even be written at the time that you write the code that uses the interface.
    This flexibility comes at a cost. Which is why code that doesn't provide this flexibility can run faster when that flexibility is not required. But try to provide that same flexibility in C, and you end up copying the same vtable mechanism C++ uses, only you have to do it manually instead of the compiler doing it for you.
    Polymorphism does not suck. It is a powerful tool, and as such it must be used with care. Learn when to use it, instead of overusing it or swearing it off completely.

  • @ScorgRus
    @ScorgRus 8 months ago

    Can I compile your C calculator and then extend its operations with my own?

  • @stapler942
    @stapler942 9 months ago

    Small point: Shouldn't there be more trials involved in this kind of test than one, you know, for science and stats and all?
    Unless that shell stuff involved running the program several times, I couldn't quite figure out what was going on.

  • @malakggh
    @malakggh 9 months ago

    does Java also have a vtable? is polymorphism slow in general (also in Java)?

  • @coolbrotherf127
    @coolbrotherf127 8 months ago

    There is a bit of important context for polymorphism: why it is used and what it's bad at. Yes, hand-coding C for simple operations might be faster, but when you're trying to build a large OO project, doing all those manual things in C can take a long time, so it's only worth it if code performance is very important for the project. In many cases, for desktop or web based applications, the extra operations needed for C++ polymorphism are still magnitudes faster than waiting on data transfer from storage or the Internet. It ends up not really mattering at all for the usability of the program.