Memoization: The TRUE Way To Optimize Your Code In Python

  • Published 23 Oct 2022
  • Learn how you can optimize your code using memoization, a form of caching that stores the results of computations already made in recursive functions. Incredibly useful, and it can really speed up slow functions.
    Learn about decorators here: • HOW TO USE DECORATORS ...
    ▶ Become job-ready with Python:
    www.indently.io
    ▶ Follow me on Instagram:
    / indentlyreels

Comments • 149

  • @maroofkhatib3421
    @maroofkhatib3421 1 year ago +342

    It's good that you showed how memoization works, but there are built-in decorators for this exact process: we can use cache or lru_cache from the functools library, so we don't need to write the memoization function every time.
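
    A minimal sketch of that approach (assuming Python 3.9+ for functools.cache; on older versions lru_cache(maxsize=None) behaves the same):

    from functools import cache

    @cache  # unbounded cache keyed on the (hashable) arguments
    def fibonacci(n: int) -> int:
        if n < 2:
            return n
        return fibonacci(n - 1) + fibonacci(n - 2)

    print(fibonacci(100))  # 354224848179261915075, returned almost instantly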

    • @Indently
      @Indently 1 year ago +48

      True

    • @abdelghafourfid8216
      @abdelghafourfid8216 1 year ago +23

      They also have a more robust key mapping than key = str(args) + str(kwargs), which is very risky, and they're more efficient since the standard library implements the caching functions with C-level optimisations.
      So there really aren't many reasons to create your own caching.

    • @d4138
      @d4138 1 year ago +3

      @@abdelghafourfid8216 what would be a more robust mapping? And why is the current one not robust?

    • @abdelghafourfid8216
      @abdelghafourfid8216 1 year ago +11

      @@d4138 imagine a function with two arguments `arg1` and `arg2`.
      The current mapping will confuse
      (arg1="12, 45", arg2="67, 89")
      with (arg1="12", arg2="45, 67, 89"),
      and of course you can find infinitely many other cases like this.
      This behavior is certainly not what you want in your code. You can make it safer by including the argument names and also making sure your mapping doesn't confuse different object types.
      So I'd just recommend using the built-in caching functions, which you can safely trust without worrying about the implementation.

    • @capsey_
      @capsey_ 1 year ago +9

      @@abdelghafourfid8216 I agree with your point that `str(args) + str(kwargs)` isn't great for many reasons, but your example of confusing arguments is not one of them, because the repr of a tuple and a dict (which is what args and kwargs are, respectively) automatically adds parentheses, curly brackets and quotation marks around them:
      def func(*args, **kwargs):
          print(str(args) + str(kwargs))
      func("12, 45", "67, 89") prints ('12, 45', '67, 89'){}
      func("12", "45, 67, 89") prints ('12', '45, 67, 89'){}
      func(arg1="12, 45", arg2="67, 89") prints (){'arg1': '12, 45', 'arg2': '67, 89'}
      func(arg1="12", arg2="45, 67, 89") prints (){'arg1': '12', 'arg2': '45, 67, 89'}

  • @Bananananamann
    @Bananananamann 1 year ago +69

    To add to this nice video: Memoization isn't just some random word, it is an optimization technique from the broader topic of "dynamic programming", where we try to remember steps of a recursive function. Recursive functions can be assholes and turn otherwise linear-time algorithms into exponential beasts. Dynamic programming is there to counter that, because sometimes it may be easier to reason about the recursive solution.

    • @Indently
      @Indently 1 year ago +8

      Very well said!

    • @Bananananamann
      @Bananananamann 1 year ago +2

      @@Indently Great video though, I learned 2 new things! That we can create our own decorators easily and how easy it is to apply memoization. I'm sure I'll use both in the future.

  • @rick-lj9pc
    @rick-lj9pc 1 year ago +70

    Memoization is a very useful technique, but it trades increased memory usage (to hold the cache) for the extra speed. In many cases it is a good tradeoff, but it could also use up all of your memory if overused. For the Fibonacci function, an iterative calculation is very fast and uses a constant amount of memory.
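
    For comparison, a minimal sketch of that iterative version, which keeps only the last two values (constant memory, no cache):

    def fib_iterative(n: int) -> int:
        a, b = 0, 1  # a = F(0), b = F(1)
        for _ in range(n):
            a, b = b, a + b  # slide the two-value window forward one step
        return a

    print(fib_iterative(50))  # 12586269025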

    • @capsey_
      @capsey_ 1 year ago +30

      I remember one time I was tinkering around with memoization of the Fibonacci function, and it was so fast I was almost frustrated by how effective it was. Out of curiosity I went for higher and higher numbers to see if it would ever start slowing down, at least for half a second, and when I went for the billionth Fibonacci number my computer completely froze and I had to physically shut it down 💀

    • @Trizzi2931
      @Trizzi2931 1 year ago +4

      Yes, for Fibonacci the iterative solution is better in terms of space complexity. But generally in dynamic programming both the top-down (memoization) and bottom-up (iterative) solutions have the same time and space complexity, because the height of the recursion tree will be the same as the size of the array you create for the iterative solution, which is better than the brute-force solution or plain recursion.

    • @ResolvesS
      @ResolvesS 1 year ago +10

      Or instead, don't use the recursive or the iterative approach at all: Fibonacci can be calculated by a closed-form formula in constant time and with constant memory.

    • @thisoldproperty
      @thisoldproperty 1 year ago +3

      Let's be honest, there is a Fibonacci formula that can be implemented.
      The point is the idea of caching, which I'd like to see expanded on. Great intro video to this topic.
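
      The closed-form (Binet's) formula mentioned above looks roughly like the sketch below. Note that it relies on floating-point arithmetic, so it drifts from the exact integer result for larger n (roughly n > 70 with 64-bit floats):

      import math

      def fib_binet(n: int) -> int:
          # Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5)
          sqrt5 = math.sqrt(5)
          phi = (1 + sqrt5) / 2
          psi = (1 - sqrt5) / 2
          return round((phi ** n - psi ** n) / sqrt5)

      print(fib_binet(10))  # 55
      print(fib_binet(50))  # 12586269025 (still exact here; precision degrades for much larger n)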

    • @zecuse
      @zecuse 1 year ago +1

      Memory can be less of an issue if the application allows you to eliminate less frequently used cache items. In the fibonacci case, the cache is really just generating an iterative implementation (backwards) and looking up the values.

  • @7dainis777
    @7dainis777 1 year ago +9

    Memoization is a very important concept to understand for code performance improvement. 👍
    I have used a different approach in the past for this exact issue. As a quick way, you can pass a dict as a second argument, which will work as the cache:
    def fib(numb: int, cache: dict = {}) -> int:
        if numb < 2:
            return numb
        else:
            if numb in cache:
                return cache[numb]
            else:
                cache[numb] = fib(numb - 1, cache) + fib(numb - 2, cache)
                return cache[numb]

  • @HexenzirkelZuluhed
    @HexenzirkelZuluhed 1 year ago +22

    You do mention this at the end, but "from functools import lru_cache" is a) in the standard library, b) even less to type and c) optionally able to limit how many entries the memoization cache keeps.
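
    A minimal sketch of that bounded variant (note that lru_cache limits the number of cached entries, not bytes of memory):

    from functools import lru_cache

    @lru_cache(maxsize=1024)  # keep at most 1024 results; least-recently-used entries are evicted
    def fibonacci(n: int) -> int:
        if n < 2:
            return n
        return fibonacci(n - 1) + fibonacci(n - 2)

    print(fibonacci(200))
    print(fibonacci.cache_info())  # hits, misses, maxsize and current cache size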

  • @erin1569
    @erin1569 1 year ago +9

    Maybe some people don't realize why it's so good with Fibonacci and why they aren't getting similar results with their loops inside functions.
    This caches the function's return value (taking args and kwargs into account), which is mega helpful because the Fibonacci function is recursive: it calls itself, so each fibonacci(x) has to be calculated only once. Without caching, the Fibonacci function has to recalculate each previous Fibonacci number from scratch, rerunning the same call a huge number of times.
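
    One way to see that concretely is to count the calls yourself; a small sketch (the counts in the final comment assume the plain recursive definition):

    calls = 0

    def fib(n: int) -> int:
        global calls
        calls += 1
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    fib(30)
    print(calls)  # 2692537 calls without caching; with memoization only 31 distinct values ever need computing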

    • @castlecodersltd
      @castlecodersltd 1 year ago +1

      This helped me have a light bulb moment, thank you

  • @swelanauguste6176
    @swelanauguste6176 1 year ago +5

    Awesome video. This is wonderful to learn. Thanks, I really appreciate your videos.

  • @wtfooqs
    @wtfooqs 1 year ago +5

    used a for loop for my fibonacci function:
    def fib(n):
        fibs = [0, 1]
        for i in range(n - 1):
            fibs.append(fibs[-1] + fibs[-2])
        return fibs[n]
    ran like butter even at 1000+ as an input

    • @skiminechannel
      @skiminechannel 1 year ago +3

      In this case you just implement the memoization directly into your algorithm, which I think is the superior method.

  • @adventuresoftext
    @adventuresoftext 1 year ago +4

    Definitely helping to boost a bit of performance in my massive open world text adventure I'm developing. Thank you for this tip!

    • @nameyname1447
      @nameyname1447 1 year ago

      Drop a link?

    • @adventuresoftext
      @adventuresoftext 1 year ago

      @@nameyname1447 link for what?

    • @nameyname1447
      @nameyname1447 1 year ago

      @@adventuresoftext A link to your project. Do you have a GitHub repository or Replit or something?

    • @adventuresoftext
      @adventuresoftext 1 year ago

      @@nameyname1447 oh no, it's not released yet; it still needs quite a bit of work. There are just a few videos about it on this channel.

    • @nameyname1447
      @nameyname1447 1 year ago

      @@adventuresoftext Alright cool! Good Luck with It!

  • @YDV669
    @YDV669 1 year ago

    That's so neat. Python has a solution for a problem called Cascade Defines in the QTP component of an ancient language, Powerhouse.

  • @IrbisTheCat
    @IrbisTheCat 1 year ago +17

    The key creation here seems risky, as in some odd cases two different (kw)args can end up as the same key. Example: args 1, kwargs "2" vs. args 12, kwargs empty strings. I'd recommend adding a special character between args and kwargs to avoid such a thing.

    • @tucker8676
      @tucker8676 1 year ago +7

      That would also be risky: what about args 1 and 2 vs 12? Or args with the special character? If your args and kwargs are hashable you could always index with the tuple (*args, InternalSeparatorClass, **kwargs as tuple pairs). The most reliable and practical way is really to use functools.cache or a variant, which does what I just described internally.

    • @Bananananamann
      @Bananananamann 1 year ago

      The key creation is very use-case dependent and should be thought about, true. For this case it works well.

  • @xxaqploboxx
    @xxaqploboxx 2 months ago

    Thanks a lot, this content is incredible for junior Python devs like me

  • @williamflores7323
    @williamflores7323 1 year ago +1

    This is SICK

  • @davidl3383
    @davidl3383 1 year ago

    thank you very much !

  • @jcdiezdemedina
    @jcdiezdemedina 1 year ago

    Great video! By the way, which theme are you using?

  • @arpitkumar4525
    @arpitkumar4525 1 month ago +1

    The 1st step is to avoid recursion if you can 😄 but nice to know this in case I must implement recursion.

  • @MrJester831
    @MrJester831 1 year ago +25

    Another way to optimize your Python is to use a systems level language and to add bindings 🦀. This is why polars is so fast

  • @Yotanido
    @Yotanido 1 year ago +11

    I actually think that this isn't that great of an example. This only works because the function recurses twice.
    Memoization is a great tool for pure functions that get frequently called with the same input. The recursive Fibonacci definition happens to do this, but it is still not a great implementation. An iterative approach can be even faster and won't use up memory.
    You could even memoize the iterative implementation, for quick lookups of repeat inputs, but with no wasted memory for all the intermediate values.
    Memoization is a powerful and useful tool, but it should be used where it is appropriate. In this case a better algorithm is all that is needed. (And you don't even need to change the recursion depth!)

    • @Indently
      @Indently 1 year ago +5

      Please add resources to your claims so others can further their understanding as well :)

    • @Yotanido
      @Yotanido 1 year ago +5

      @@Indently Looks like links do indeed get automatically blocked. I'm guessing you can fix that on your end.

    • @Indently
      @Indently 1 year ago +8

      The example I gave might not be the greatest, but it surely was one of the easiest ways to demonstrate it.
      I appreciate your informative comment, it's definitely something interesting to take into account :) Thanks for sharing! (I also unblocked the link)

  • @OBGynKenobi
    @OBGynKenobi 1 year ago

    I used the same technique to minimize AWS lambda calls to other services when it's the same expected value return.

  • @stefanomarchesani7684
    @stefanomarchesani7684 1 month ago

    First of all I want to praise you for your nice videos. I always enjoy them.
    That being said, I would like to point out a bug in your code. Since you are using the string trick to create the key, if you call the function in the two equivalent ways
    fibonacci(50)
    fibonacci(n=50)
    the two inputs are mapped to different strings, so the second function call will not use the previously stored cache.
    I get that in the Fibonacci example this doesn't matter and that you are just giving an example of code that does memoization (not claiming any optimality), but this is a thing that, in my opinion, should have been mentioned in the video.
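
    A quick way to see that mismatch, using the same str-based key scheme from the video (make_key is just an illustrative helper):

    def make_key(args: tuple, kwargs: dict) -> str:
        return str(args) + str(kwargs)  # illustrative helper reproducing the video's key scheme

    print(make_key((50,), {}))       # (50,){}       <- fibonacci(50)
    print(make_key((), {'n': 50}))   # (){'n': 50}   <- fibonacci(n=50): a different key, so a cache miss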

  • @CollinJS
    @CollinJS 1 year ago +4

    It should be noted that generating keys that way can break compatibility with certain classes. A class implementing the __hash__ method will not behave as expected if you use its string representation as the key instead of the object itself. The purpose of __hash__ will be lost and __str__ or __repr__ will be used instead, which are neither reliable nor intended to be used for that purpose. It's generally best to let objects handle their own hashing. I realize you can't cover everything in a video so I wanted to mention it.
    One solution would be to preserve the objects in a tuple: key = (args, tuple(kwargs.items()))
    Similarly, the caching wrapper in Python's functools module uses a _make_key function which essentially returns: (args, kwd_mark, *kwargs.items()) where kwd_mark is a persistent object() which separates args from kwargs in a flattened list. Same idea, slightly more efficient.
    As others have noted, I think you missed a good opportunity to talk about functools, but that may now be a good opportunity for a future video. Thanks for your time and content.
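
    A sketch of a decorator along those lines, using a tuple key with a sentinel object between positional and keyword arguments (like functools, it still assumes all arguments are hashable):

    from functools import wraps

    _KWD_MARK = object()  # sentinel separating args from kwargs inside the key

    def memoize(func):
        cache = {}

        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, _KWD_MARK, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]

        return wrapper

    @memoize
    def fibonacci(n: int) -> int:
        return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

    print(fibonacci(100))  # 354224848179261915075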

    • @Indently
      @Indently 1 year ago +3

      I really appreciate these informative comments, they really make the Python community a better place, thank you for taking your time to write it! I will cover functools in a future lesson, I really wanted to get the basics on memoization out and about so people had an idea where to start, so I thank you once again for your patience, and hope to see you keep up with these informative comments around the internet :)

  • @nextgodlevel4056
    @nextgodlevel4056 9 months ago

    i love your videos

  • @adityahpatel
    @adityahpatel 7 months ago

    how is memoization different from the lru_cache you discussed in another video?

  • @anamoyeee
    @anamoyeee 8 months ago

    Why? The @cache decorator in the functools module does the same thing, and you don't have to write your own implementation; just `from functools import cache`.

  • @issaclifts
    @issaclifts 1 year ago +3

    Could this also be used in a while loop for example?:
    while a != 3000:
        print(a)
        a += 1

    • @IrbisTheCat
      @IrbisTheCat 1 year ago +4

      Doesn't seem like it. It's for memorizing the result of a function that is called over and over with the same arguments, so the same result isn't recalculated.

    • @oskiral320
      @oskiral320 1 year ago

      no

    • @4artificial-love
      @4artificial-love 1 year ago

      # fibonacci
      import time

      time_start = time.time()
      print('st:', time.time())
      a, b = 0, 1
      fb_indx = 36
      ctn = 0
      while ctn != fb_indx:
          print(b, end=' ')
          a, b = b, a + b
          ctn += 1
      print('\n', 'fibonacci', fb_indx, ':', b)
      print('\nended:', time.time(), '\n', 'timed:', time.time() - time_start)
      # timed: 0.0030629634857177734

  • @tobiastriesch3736
    @tobiastriesch3736 1 year ago +2

    For primitive recursive functions, such as Fibonacci's series, tail recursion would also circumvent the issue with max recursion depth, wouldn't it?

    • @neilmehra_
      @neilmehra_ 6 months ago

      Yes, and by definition any tail recursive function can be trivially converted to iterative, so you could go further and just implement an iterative version.
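
      A sketch of that tail-recursive form for reference. Note that CPython does not perform tail-call optimization, so this still consumes one stack frame per step and still hits the recursion limit; the equivalent loop does not:

      def fib_tail(n: int, prev: int = 0, curr: int = 1) -> int:
          # prev = F(i), curr = F(i + 1); each call advances i by one
          if n == 0:
              return prev
          return fib_tail(n - 1, curr, prev + curr)

      print(fib_tail(50))  # 12586269025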

  • @JorgeGonzalez-zs7vj
    @JorgeGonzalez-zs7vj 1 year ago

    Nice!!!

  • @91BJnaruto
    @91BJnaruto 1 year ago +1

    I would like to know how you did that arrow as I have never seen it before?

    • @Indently
      @Indently 1 year ago +1

      It's a setting in PyCharm and other code editors. If you look up "ligatures" you might be able to find it for your IDE.

  • @miguelvasquez9849
    @miguelvasquez9849 1 year ago

    Awesome

  • @velitskylev7068
    @velitskylev7068 1 year ago +1

    The lru_cache decorator is already available in functools

    • @Indently
      @Indently 1 year ago +1

      That would be a great 10 second tutorial

  • @user-vb9mv9xb1x
    @user-vb9mv9xb1x 2 months ago

    could you please explain where it stores the cache?

  • @FTE99699
    @FTE99699 1 year ago

    Thanks

    • @Indently
      @Indently 1 year ago

      Thank you for the generosity! :)

  • @j.r.9966
    @j.r.9966 1 year ago

    Why is there not a new cache defined for each function call?

  • @mithilbhoras5951
    @mithilbhoras5951 4 months ago

    Functools library already has two memoization decorators: cache and lru_cache. So no need to write your own implementation.

  • @SP-db6sh
    @SP-db6sh 1 year ago

    Use just prefect library

  • @miltondias6617
    @miltondias6617 1 year ago

    It seems great to implement. Can I do it with any code?

    • @EvilTaco
      @EvilTaco 1 year ago

      yes, but it's only going to help if that function is called a huge amount of times with the same arguments. The reason the first implementation was so slow is because it never saved the results it returned, so it calculated a shit-ton of values it has already calculated at some point previously

  • @simonwillover4175
    @simonwillover4175 1 year ago +1

    2:40 Should take about 26 hours, looking at the time it took to do fib(30). Fib(40) would have been a better number to make your point.

  • @FalcoGer
    @FalcoGer 1 year ago

    you can easily write fib with loops instead of recursion, saving yourself stack frames, stack memory, loads of time and your sanity. Recursion should be avoided whenever possible. It's slow, eats limited stack space, generates insane, impossible-to-debug stack traces and is generally a pain in the rear end.
    Caching results makes sense in some applications. But fib only needs 2 values to be remembered. Memory access, especially access larger than a page, or larger than processor cache, is slow in its own right. An iterative approach also doesn't require boilerplate code for a caching wrapper.
    And of course you don't get max recursion depth errors from perfectly sane and valid user inputs if you don't use recursion. Which you shouldn't. The naive recursive approach takes exponential time. The iterative approach only takes O(n). Memoization also takes this down to O(n), but you still get overhead from function calls and memory lookups. If you want fast code, don't recurse. If you want readable code, don't recurse. If you want easy-to-debug code, don't recurse. The only reason to recurse is if doing it iteratively hurts readability or performance, whichever is more important to you.
    The max recursion value is there for a reason. Setting an arbitrary new value that's pulled out of your ass isn't fixing any problems, it just kicks the can down the road. What if some user wants the 10001st number? What you want is an arbitrary number. Putting in the user's input also is a really bad idea. Just... don't use recursion unless it can't be avoided.
    Here are my results, calculating fibonacci(40) on my crappy laptop.
    In [27]: measure(fib_naive)
    r=102334155, 44.31601328699617 s
    In [28]: measure(fib_mem)
    r=102334155, 0.00019323197193443775 s
    In [29]: measure(fib_sane)
    r=102334155, 2.738495822995901e-05 s
    As you can see, the non recursive implementation is faster by a factor of 10 again, and it will only get worse with larger values. Of course calling the function again with the same value for testing in the interpreter is a bit of a mess. obviously an O(1) lookup of fib(1e9999) is going to be faster than an O(n) calculation.
    fib_naive and fib_mem are the same except for using your implementation of the cache. fib_sane is
    def fib_sane(n: int) -> int:
        p = 1
        gp = 0
        for _ in range(1, n):
            t = p + gp
            gp = p
            p = t
        return p

    • @supwut7292
      @supwut7292 1 year ago +1

      You make great points through your post, however you missed the fundamental point of the vid. It wasn’t about whether the iterative approach is faster than the recursive approach. But rather the fundamental idea of caching and exploiting memory hierarchy. Furthermore, this is not just a theme in programming. This is a key part of computer architecture, software architecture, and processor design. It’s almost guaranteed for a recursive function to take longer due to its fundamental nature. However by exploiting caching we avoid the expensive costs of exhaustive memory look ups.

  • @QWERTIOX
    @QWERTIOX 1 year ago

    As a C++ dev, using recursion instead of a basic loop to calc Fibonacci looks like overkill. Personally I'd write it as a 2-element table, with one boolean to switch the position where I'm storing the newest calculated value, and do that in a loop n times

  • @Dyras.
    @Dyras. 5 months ago

    This implementation of memoization is very slow compared to using a hashmap or 2D array directly, but it's a nice starter on dynamic programming for beginners

  • @djin81
    @djin81 8 months ago

    fibonacci(43) generates more than a billion recursive calls to fibonacci(n) using that code. It's no wonder fibonacci(50) doesn't complete; think how many times it's generating a fibonacci(43) call, each of which adds a billion more calls. There are only 50 unique return values, so the cached version is about 50 function calls vs literally billions of recursive function calls.

  • @aniketbose4360
    @aniketbose4360 1 year ago

    Dude, I know this video is to show memoization, but in case you don't know, there is a formula for the nth Fibonacci number, and it's very simple too

    • @Indently
      @Indently 1 year ago

      I know :) thank you for bringing it up though!

  • @shivamkumar-qp1jm
    @shivamkumar-qp1jm 4 months ago

    What is the difference between lru_cache and this

  • @gabrote42
    @gabrote42 1 year ago +1

    Whoah

  • @DivyanshuLohani
    @DivyanshuLohani 1 year ago

    at 6:40 on line 28, what is that 10_000 syntax?

    • @Indently
      @Indently 1 year ago

      In Python you can use underscores as separators when writing numbers. Python ignores them, but they make the number easier on the eyes.
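
      For example:

      population = 10_000  # identical to 10000; the underscore is purely visual (PEP 515, Python 3.6+)
      print(population == 10000)  # True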

    • @DivyanshuLohani
      @DivyanshuLohani 1 year ago

      @@Indently Oh ok thanks

  • @revenity7543
    @revenity7543 8 months ago

    Why don't you use DP?

  • @weistrass
    @weistrass 6 months ago

    You reinvented the wheel

  • @frricbc4442
    @frricbc4442 1 year ago

    can someone explain '→' to me? I am not familiar with it in Python

    • @Harveyleegoodie
      @Harveyleegoodie 1 year ago

      pretty sure it's a return type annotation: it documents what the function is expected to return, for example:
      def main(s) -> int: says the function returns an int
      def main(s) -> str: says the function returns a str
      (it doesn't convert or enforce anything; it's just a hint for readers and tools)

  • @gJonii
    @gJonii 1 year ago

    This is, like, the usual talk about memoization, but it's just slightly wrong everywhere. You don't use the standard library to import this functionality, yet you don't write case-specific code for this case either; instead, you try to write generic library code, badly. A fever-dream-ish quality to this video.

  • @aneeshb8087
    @aneeshb8087 17 days ago

    An explanation of how exactly the cache worked in this program would have been more helpful.

  • @bgdgdgdf4488
    @bgdgdgdf4488 1 year ago +2

    Lesson: stop using recursion because it's slow. Use while loops instead.

  • @idk____idk6530
    @idk____idk6530 1 year ago +1

    Man I'm thinking what if we use this function in Cython code 💀.

  • @AcceleratedVelocity
    @AcceleratedVelocity 11 months ago

    memory leak go BRRRRRRRR

  • @robert_nissan
    @robert_nissan 2 months ago

    Powerful script, super 🎉

  • @7DYNAMIN
    @7DYNAMIN 5 months ago

    the better way might be to use generator functions in python

  • @tipoima
    @tipoima 1 year ago

    Wait, so "memoization" is not a typo?

    • @Indently
      @Indently 1 year ago

      A typo for what?

    • @tipoima
      @tipoima 1 year ago

      @@Indently "memorization"

    • @Indently
      @Indently 1 year ago

      @@tipoima oh yeah, ahaha, true, I also thought something similar when I first heard it.

  • @romain.guillaume
    @romain.guillaume 1 year ago

    I know it is for demonstration purposes, but this implementation of the Fibonacci sequence is awful, with or without the decorator. Without the decorator you have an O(exp(n)) program, and on the other hand you have a memory cache which is useless unless you need the whole Fibonacci sequence.
    If you want to keep an O(n) program without memory issues in this case, just do a for loop and update only two variables, a_n and a_n_plus_1. That way it is still an O(n) program but you store only two variables, not n.
    I know that some people will say it is obvious and that the example was chosen for demonstration, but somebody had to say it (if it has not been said already)

    • @Indently
      @Indently 1 year ago

      If you have a better beginner example for memoization, I would love to hear about it so I can improve my lessons for the future.

  • @ThankYouESM
    @ThankYouESM 8 months ago

    lru_cache seems significantly faster and requires less code.

  • @tmahad5447
    @tmahad5447 1 year ago

    Optimizing Python be like: feed a turtle for speed instead of using a fast animal

    • @Indently
      @Indently 1 year ago

      But your slow turtle won't beat my fast turtle in a race then, and people care about the fastest turtle in this scenario

  • @richardbennett4365
    @richardbennett4365 1 year ago +4

    I see his point. If his code cannot add up some numbers to Fibonacci(50) in less than a few seconds, he's got the wrong code, which is what he's demonstrating, or he's using the wrong programming language for the task. Everyone knows scientific problems are best handled in FORTRAN (or at least C or Rust), and this problem is pure arithmetic. Python is not the right language for this problem, unless of course memoization is used.

    • @SourabhBhat
      @SourabhBhat 1 year ago

      That is only partially right. Even though FORTRAN is better suited for scientific computing, efficient algorithms are very important.
      Try computing fib(50) using recursion in FORTRAN for yourself. How about fib(60) after that?

    • @richardbennett4365
      @richardbennett4365 1 year ago +1

      @@SourabhBhat, you are correct 💯. The point is to make the algorithm as efficient as possible given the language with which one is presented.

  • @gregorymartin9091
    @gregorymartin9091 1 year ago +2

    Hello. Could you please explain which is more efficient: @memoization or @lru_cache? Thank you, and congratulations for this really useful channel!

    • @HIMixoid
      @HIMixoid 1 year ago +2

      Never heard of "memoization" before. But during the first seconds of the video I just said "lol, just use the cache decorator". And then he started to implement it.
      A "same thing, different names" situation, I guess.
      My guess is that the main points of the video are:
      - Show that such a thing as caching/memoization exists.
      - Show how to implement it yourself, so you have a deeper understanding of how it works under the hood.
      So after you learn it you can even implement it in other languages where you don't have it "out of the box".

  • @sangchoo1201
    @sangchoo1201 1 year ago

    Fibonacci? Use the O(log N) method

    • @spaghettiking653
      @spaghettiking653 1 year ago

      You mean Binet formula?

    • @sangchoo1201
      @sangchoo1201 1 year ago

      @@spaghettiking653 matrix exponentiation
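
      A sketch of the O(log n) idea via the fast-doubling identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², which is matrix exponentiation in disguise:

      def fib_fast_doubling(n: int) -> int:
          def helper(k: int) -> tuple:
              # returns (F(k), F(k + 1))
              if k == 0:
                  return (0, 1)
              a, b = helper(k // 2)
              c = a * (2 * b - a)   # F(2m), where m = k // 2
              d = a * a + b * b     # F(2m + 1)
              return (c, d) if k % 2 == 0 else (d, c + d)
          return helper(n)[0]

      print(fib_fast_doubling(40))  # 102334155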

    • @JordanMetroidManiac
      @JordanMetroidManiac 1 year ago +2

      Here’s an O(1) function.
      PHI = 5 ** .5 * .5 + .5
      k1 = 1+PHI*PHI
      k2 = 1+1/(PHI*PHI)
      def fib(n): return int(.5 + .2 * (k1 * PHI ** n + k2 * (-1) ** n * PHI ** -n))

    • @sangchoo1201
      @sangchoo1201 1 year ago

      @@JordanMetroidManiac it's not O(1). the ** is O(n)
      and it doesn't work

    • @JordanMetroidManiac
      @JordanMetroidManiac 1 year ago

      @@sangchoo1201 I accidentally gave the formula for Lucas numbers. Also the exponent operator is not O(n) lol

  • @nempk1817
    @nempk1817 1 year ago

    don't use Python = more speed.

  • @overbored1337
    @overbored1337 7 months ago +6

    The best way to optimize python is to use another language

  • @Shubham_Shiromani
    @Shubham_Shiromani 1 year ago

    from functools import wraps
    from time import perf_counter
    import sys

    def memoize(func):
        cache = {}
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = str(args) + str(kwargs)
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]
        return wrapper

    def sum(n):
        s = 0
        for i in range(n):
            if i % 3 == 0 or i % 5 == 0:
                s = s + i
        return s

    t = int(input().strip())
    for a0 in range(t):
        n = int(input().strip())
        start = perf_counter
        print(sum(n))
        end = perf_counter
    # For this code, it is not working-----------

  • @Fortexik
    @Fortexik 1 year ago

    in functools there is a decorator @cache or @lru_cache

  • @CEREBRALWALLSY
    @CEREBRALWALLSY 1 year ago +4

    Can functools.lru_cache be used for this instead?

  • @dcknature
    @dcknature 1 year ago

    This reminds me of learning to multiply at school a long time ago 🧓. Thanks for the tutorial video 😊!
    likes = 57 😉👍

  • @richardboreiko
    @richardboreiko 1 year ago

    That was interesting and effective. I tried using 5000 and got an error Process finished with exit code -1073741571 (0xC00000FD). I started looking for the upper limit on my Windows PC, and it's 2567. At 2568, I start to see the error. It may be because I have too many windows each with too many tabs, so I'll have to try it again after cleaning up my windows/tabs. Or it may just be a hardware limitation on my PC. Still, it's incredibly fast. Thanks!
    Also, I just checked for the error message on openai (since everybody's talking about it lately) and it said this:
    =======================================
    Exit code -1073741571 (0xC00000FD) generally indicates that there was a stack overflow error in your program. This can occur if you have a function that calls itself recursively and doesn't have a proper stopping condition, or if you have a very large number of nested function calls.
    To troubleshoot this error, you will need to examine your code to see where the stack overflow is occurring. One way to do this is to use a debugger to step through your code and see where the error is being thrown. You can also try adding print statements to your code to trace the flow of execution and see where the program is getting stuck.
    It's also possible that the error is being caused by a problem with the environment in which your program is running, such as insufficient stack size or memory. In this case, you may need to modify the environment settings or allocate more resources to the program.
    If you continue to have trouble troubleshooting the error, it may be helpful to post a more detailed description of your code and the steps you have taken so far to debug the issue.
    =======================================

  • @user-tg2gm1ih9g
    @user-tg2gm1ih9g 29 days ago

    Actually, the true way to optimize is to use a good algorithm.
    This requires thinking before you start typing.
    Fibonacci(n):
    your first implementation is crap because it is glacially slow
    your second implementation is crap because it needs a cache of unbounded size
    # calculate the n-th fibonacci number
    # this implementation is fast,
    # it requires no "extra" memory
    # it is stack friendly -- unlike recursion.
    # it does nothing hidden or unnecessary.
    def F(n):
        if n < 2:
            return n
        fpp, fp = 0, 1
        for _ in range(n - 1):
            f = fpp + fp
            fpp = fp
            fp = f
        return f
    Some timing data:
    >>> def foo(n):
    ... start = time.process_time()
    ... F(n)
    ... end = time.process_time()
    ... print(end-start)
    ...
    >>> foo(5)
    3.900000000101045e-05
    >>> foo(50)
    5.699999999997374e-05
    >>> foo(500)
    0.00021399999999971442
    >>> foo(5000)
    0.002712000000000714
    >>> foo(50000)
    0.05559500000000028
    >>> foo(500000)
    2.9575280000000017
    >>> foo(5000000)
    291.43955300000005
    >>>
    by the way F(5000000) ~ 7.108286 × 10^1044937 that's 1,044,938 decimal digits
    I'm running Python 3.9.6 on a MacBook Pro, w/ M1 Pro chip

  • @4artificial-love
    @4artificial-love 1 year ago

    I believe that simple is better... and faster...
    # fibonacci
    import time

    time_start = time.time()
    print('st:', time.time())
    a, b = 0, 1
    fb_indx = 10000 - 1
    ctn = 0
    while ctn != fb_indx:
        # print(b, end=' ')
        a, b = b, a + b
        ctn += 1
    print('\n', 'fibonacci', fb_indx, ':', b)
    print('\nended:', time.time(), '\n', 'timed:', time.time() - time_start)
    # timed: 0.005985736846923828

  • @mx-kd2fl
    @mx-kd2fl 1 year ago +2

    OMG! You can just use functools.cache...

    • @Indently
      @Indently 1 year ago +1

      Right, the perfect way to teach how something works is by using pre-made functions

    • @Armcollector77
      @Armcollector77 1 year ago

      ​@@Indently Thanks for the video, great content. No, you are right that it is not a good way to teach, but mentioning them at the end of your video is probably a good idea.

    • @Indently
      @Indently 1 year ago +1

      @@Armcollector77 That part I can accept, I will try to remember for future lessons :)

  • @sesemuller4086
    @sesemuller4086 1 year ago +11

    In your Fibonacci implementation, f(n-2) + f(n-1) would be more efficient for the recursion because it reaches a lower depth sooner

    • @wishu6553
      @wishu6553 1 year ago

      Yeah, but then caching arg:return like this wouldn't be possible. I guess it's just a basic example