C++ Weekly - Ep 364 - Python-Inspired Function Cache for C++

  • Published 21 Aug 2024
  • ☟☟ Awesome T-Shirts! Sponsors! Books! ☟☟
    Upcoming Workshop: C++ Best Practices, NDC TechTown, Sept 9-10, 2024
    ► ndctechtown.co...
    Upcoming Workshop: Applied constexpr: The Power of Compile-Time Resources, C++ Under The Sea, October 10, 2024
    ► cppunderthesea...
    Notes and code: github.com/lef...
    T-SHIRTS AVAILABLE!
    ► The best C++ T-Shirts anywhere! my-store-d16a2...
    WANT MORE JASON?
    ► My Training Classes: emptycrate.com/...
    ► Follow me on twitter: / lefticus
    SUPPORT THE CHANNEL
    ► Patreon: / lefticus
    ► Github Sponsors: github.com/spo...
    ► Paypal Donation: www.paypal.com...
    GET INVOLVED
    ► Video Idea List: github.com/lef...
    JASON'S BOOKS
    ► C++23 Best Practices
    Leanpub Ebook: leanpub.com/cp...
    ► C++ Best Practices
    Amazon Paperback: amzn.to/3wpAU3Z
    Leanpub Ebook: leanpub.com/cp...
    JASON'S PUZZLE BOOKS
    ► Object Lifetime Puzzlers Book 1
    Amazon Paperback: amzn.to/3g6Ervj
    Leanpub Ebook: leanpub.com/ob...
    ► Object Lifetime Puzzlers Book 2
    Amazon Paperback: amzn.to/3whdUDU
    Leanpub Ebook: leanpub.com/ob...
    ► Object Lifetime Puzzlers Book 3
    Leanpub Ebook: leanpub.com/ob...
    ► Copy and Reference Puzzlers Book 1
    Amazon Paperback: amzn.to/3g7ZVb9
    Leanpub Ebook: leanpub.com/co...
    ► Copy and Reference Puzzlers Book 2
    Amazon Paperback: amzn.to/3X1LOIx
    Leanpub Ebook: leanpub.com/co...
    ► Copy and Reference Puzzlers Book 3
    Leanpub Ebook: leanpub.com/co...
    ► OpCode Puzzlers Book 1
    Amazon Paperback: amzn.to/3KCNJg6
    Leanpub Ebook: leanpub.com/op...
    RECOMMENDED BOOKS
    ► Bjarne Stroustrup's A Tour of C++ (now with C++20/23!): amzn.to/3X4Wypr
    AWESOME PROJECTS
    ► The C++ Starter Project - Gets you started with Best Practices Quickly - github.com/cpp...
    ► C++ Best Practices Forkable Coding Standards - github.com/cpp...
    O'Reilly VIDEOS
    ► Inheritance and Polymorphism in C++ - www.oreilly.co...
    ► Learning C++ Best Practices - www.oreilly.co...

Comments • 39

  • @Possseidon
    @Possseidon a year ago +45

    I just wanna point out one glaring problem with this current solution: if you use this cache function on two different functions that have the same signature, they will share a single cache.
    One way to get around this is to allow only function pointers (which is probably not a bad idea anyway) and use that function pointer as a template parameter instead of a normal function parameter to the cache function. Then every function gets its own separate static store inside the cache function.
    This is a pretty neat way of doing such Python-style decorators, and I've used it to automatically generate stateless Lua-compatible (fixed-signature) wrapper functions for regular C++ functions.
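The fix the commenter describes can be sketched like this (names and details are my own, not from the video): taking the function pointer as a non-type template parameter means each function gets its own instantiation of `cache`, and therefore its own static map.

```cpp
#include <map>
#include <tuple>
#include <type_traits>
#include <utility>

// The function pointer is a non-type template parameter, so cache<add>
// and cache<mul> are distinct instantiations with distinct static maps.
template <auto Func, typename... Params>
auto cache(Params&&... params) {
    using Key = std::tuple<std::decay_t<Params>...>;
    using Result = std::invoke_result_t<decltype(Func), Params...>;
    static std::map<Key, Result> results;  // one map per Func
    auto key = Key{params...};
    if (auto it = results.find(key); it != results.end()) {
        return it->second;  // cache hit
    }
    auto result = Func(std::forward<Params>(params)...);
    results.emplace(std::move(key), result);
    return result;
}

int add(int a, int b) { return a + b; }
int mul(int a, int b) { return a * b; }
// cache<add>(2, 3) and cache<mul>(2, 3) now use separate caches,
// even though both functions have the signature int(int, int).
```

With a normal function parameter, `add` and `mul` would deduce the same `Key`/`Result` types and collide in one map; as a template parameter the pointer becomes part of the type identity.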

    • @oschonrock
      @oschonrock a year ago +4

      Yes, I thought the same, but can you not just store Func func as the first element of the std::tuple and remain generic over all types of function? What does "Lua compatible (fixed signature)" mean? "Not overloaded"?

    • @Possseidon
      @Possseidon a year ago +2

      @@oschonrock If you want a function to be callable from Lua, it has to take a single Lua state pointer parameter and return an int. Arguments and return values are passed and returned on a stack that is accessed through that Lua state parameter. The int return value tells Lua how many result values you pushed (Lua supports multiple return values).

    • @Possseidon
      @Possseidon a year ago +1

      @@oschonrock Ah, you mean for the cache example. Yes, you could store the function as part of the map key, but that also makes the lookup more expensive, as you need to search through cached values for different functions.

    • @oschonrock
      @oschonrock a year ago +1

      @@Possseidon Oh, you meant e.g. a void*'ed FP as an NTTP, which then gives you a new template instantiation, which gives you a new static map? Yeah, that could be faster, depending on the use case (how many functions, how many values, etc.).
      map is terrible for this, but unordered_map requires a hash..

    • @Possseidon
      @Possseidon a year ago +1

      @@oschonrock You can just straight up use function pointers as NTTP; no need to cast them to void*. But yes, that's what I meant.

  • @pmcgee003
    @pmcgee003 a year ago +21

    In the functional world ... I think this is called memoisation ... ?

  • @mehno583
    @mehno583 a year ago +4

    There is a whole chapter about lazy evaluation in "Functional Programming in C++" by Ivan Čukić. I really loved that book. He does something similar.

  • @RoyBellingan
    @RoyBellingan a year ago +4

    There are definitely ways this can be extremely good. In my case I tend to cache quite a lot of SQL, and I have to specify a TTL (time to live), so the entries are stored in a boost multimap etc.
    The massive benefit here is the ability to store arbitrary types; I always "limited" myself to a string key. I will definitely play around with a tuple of elements as a key! Using C++20 defaulted comparisons, that should be quite easy to write for custom types too..
    This is definitely food for thought, thank you!
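A TTL-expiring cache of the kind the commenter describes could be sketched roughly like this (all names here are hypothetical, not from any specific library): entries carry an expiry timestamp, and lookups treat expired entries as misses so the caller recomputes.

```cpp
#include <chrono>
#include <map>
#include <optional>
#include <string>
#include <tuple>
#include <utility>

// Hypothetical sketch: a cache keyed on an arbitrary ordered type (e.g. a
// tuple of query parameters) where each entry has its own time-to-live.
template <typename Key, typename Value>
class TtlCache {
    using Clock = std::chrono::steady_clock;
    struct Entry { Value value; Clock::time_point expires; };
    std::map<Key, Entry> entries;
public:
    // Returns the cached value, or nullopt if missing or expired.
    std::optional<Value> get(const Key& key) {
        auto it = entries.find(key);
        if (it == entries.end() || Clock::now() >= it->second.expires) {
            return std::nullopt;
        }
        return it->second.value;
    }
    void put(Key key, Value value, std::chrono::milliseconds ttl) {
        entries.insert_or_assign(std::move(key),
                                 Entry{std::move(value), Clock::now() + ttl});
    }
};

// Usage: a tuple key works because std::tuple provides operator< memberwise.
// TtlCache<std::tuple<std::string, int>, std::string> query_cache;
```

`std::map` only needs `operator<` on the key, which `std::tuple` supplies memberwise; with C++20 a custom key type can get the same via `auto operator<=>(const Key&) const = default;`.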

    • @ranseus
      @ranseus a year ago

      For your specific case of caching SQL results, my gut says that it would be wiser to let the SQL server manage the caching.

    • @oschonrock
      @oschonrock a year ago +1

      @@ranseus that depends.. a) caching at the SQL server level can become a parallelism bottleneck, and b) hydration of an object graph, with possible processing, may mean a cache on the client side makes more sense.

    • @RoyBellingan
      @RoyBellingan a year ago

      @@ranseus Caching in the application also reduces the RTT, since the SQL server is not local; many times that made it possible to run the application in Canada while having the SQL server in France.

  • @dj-maxus
    @dj-maxus a year ago +4

    Nice video! I was working on a similar caching approach for heavily-computed functions. It would be really nice to see more videos on this topic, especially on how to replace the static storage inside the function with something thread-safe, which is not a rare context for calling cached functions.

    • @12affes
      @12affes a year ago

      You could use thread_local

    • @dj-maxus
      @dj-maxus a year ago +1

      @@12affes that's interesting. But what if I need some shared cache for all the threads?

    • @oschonrock
      @oschonrock a year ago +2

      @@dj-maxus Protect it with a lock ... that's about the only way.
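The lock-based variant being discussed could look roughly like this (my own sketch and naming, not code from the video): one cache shared by all threads, guarded by a mutex, with the actual computation done outside the lock so slow functions don't serialize every caller.

```cpp
#include <map>
#include <mutex>
#include <tuple>
#include <type_traits>
#include <utility>

// Sketch of a thread-safe shared cache. Note the trade-off: because the
// computation runs outside the lock, two threads racing on the same key
// may both compute it; emplace simply keeps whichever result lands first.
template <auto Func, typename... Params>
auto cache_mt(Params&&... params) {
    using Key = std::tuple<std::decay_t<Params>...>;
    using Result = std::invoke_result_t<decltype(Func), Params...>;
    static std::mutex m;
    static std::map<Key, Result> results;
    auto key = Key{params...};
    {
        std::scoped_lock lock(m);
        if (auto it = results.find(key); it != results.end()) {
            return it->second;  // cache hit under the lock
        }
    }
    auto result = Func(std::forward<Params>(params)...);  // compute unlocked
    std::scoped_lock lock(m);
    results.emplace(std::move(key), result);
    return result;
}

long long square(long long x) { return x * x; }
```

If duplicate computation is unacceptable, per-key synchronization (e.g. storing a future or a once-flag per entry) would be needed, at the cost of more machinery.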

  • @djee02
    @djee02 a year ago +3

    Pretty neat generalized dynamic-programming helper. Also, I didn't know tuples implemented operator<

  • @harunbozaci1054
    @harunbozaci1054 a year ago

    Very interesting and good video :) At first glance I didn't understand, but after a few seconds it looks really good.

  • @danielmilyutin9914
    @danielmilyutin9914 a year ago

    I've noticed (and @Possseidon gave a detailed explanation of this) that if you call two functions with the same signature, you'll get a mixed cache.
    I'd go with creating a function object, either a mutable lambda or a functor. There I'd store the cache, and a mutex if needed.

  • @timhaines3877
    @timhaines3877 a year ago +4

    The only way to reasonably use std::pair is with structured bindings. Seeing "->first.second" makes me a sad panda.
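To illustrate the comment's point with a hypothetical lookup map (not code from the video): structured bindings give the pair's members real names instead of chains like `it->first.second`.

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical cache keyed on a (name, n) pair.
using Key = std::pair<std::string, int>;
std::map<Key, int> results{{{"fib", 10}, 55}};

int demo() {
    int found = 0;
    for (const auto& [key, value] : results) {
        const auto& [name, n] = key;  // instead of it->first.first / .second
        if (name == "fib" && n == 10) found = value;
    }
    return found;
}
```

The binding names carry the meaning that `.first`/`.second` hide, which is the "sad panda" complaint above.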

  • @R1D3R2
    @R1D3R2 a year ago +2

    What if, instead of returning the result of invoking the function, cache(..) returned a function that, when you call it, automatically caches the results? Wouldn't that be more transparent? I hope I have explained myself 😅
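That idea, closer to Python's `functools.cache` decorator, could be sketched like this (my own naming; fixed to an `int(int)` signature here for simplicity): the wrapper returns a new callable whose closure owns the memoization map.

```cpp
#include <map>
#include <utility>

// cached() does not call func; it returns a memoizing callable wrapping it.
// Each wrapper owns its own map, so two wrappers never share a cache.
template <typename Func>
auto cached(Func func) {
    return [func = std::move(func),
            results = std::map<int, int>{}](int n) mutable {
        if (auto it = results.find(n); it != results.end()) {
            return it->second;  // cache hit
        }
        int result = func(n);
        results.emplace(n, result);
        return result;
    };
}
```

One caveat for recursive functions like fib: inner recursive calls go to the raw function and bypass the wrapper's cache, unless the function is written to call the wrapper itself.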

  • @pabloariasal
    @pabloariasal a year ago +3

    Jason, why not use emplace instead of insert?
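For readers unfamiliar with the distinction the question is getting at, a quick illustration (hypothetical map, not the video's code):

```cpp
#include <map>
#include <string>

int demo_emplace() {
    std::map<int, std::string> m;
    m.insert({1, "one"});     // builds a pair first, then moves it into the map
    m.emplace(2, "two");      // forwards arguments, constructs the pair in place
    m.try_emplace(2, "TWO");  // key 2 already present: value never constructed
    return static_cast<int>(m.size());
}
```

For the cache in the video the difference is mostly stylistic, since the code only inserts after a failed lookup; `try_emplace` (C++17) is the variant that avoids constructing the mapped value on a duplicate key.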

  • @LogicEu
    @LogicEu a year ago +1

    Very interesting indeed

  • @stevesimpson5994
    @stevesimpson5994 a year ago +1

    A possible limited use case would be performing division as multiplication by 1/divisor, with a cached reciprocal??

  • @JossWhittle
    @JossWhittle a year ago

    Is there a way to template it in a way that you could use "using" to define your decorated function? Does this work with lambdas? How close to the pythonic "decorated at the point of definition" style can we reasonably get before cthulhu rises up in utter dismay?

  • @taw3e8
    @taw3e8 a year ago +1

    Pretty cool, I wonder when compilers will be able to see through all of this junk code that was generated.

  • @Omnifarious0
    @Omnifarious0 a year ago +1

    3:10 - As of this cut, I think there are some huge issues with move only types. They will appear to work, but cause strange things to happen. Maybe you fix it later.

    • @cppweekly
      @cppweekly  a year ago

      I didn't actually write any real tests, so...

  • @ProfessorWaltherKotz
    @ProfessorWaltherKotz 9 months ago

    What about making the map static thread_local? Wouldn't that solve the thread safety concern?

    • @cppweekly
      @cppweekly  9 months ago +1

      Yes, that could be an option. (But just as an aside, "static thread_local" is redundant; "thread_local" alone means the same thing.)
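The thread_local variant under discussion is a one-word change to the cache (my sketch and naming below): each thread gets its own map, so no locking is needed, at the cost of every thread recomputing and storing its own copy of each entry.

```cpp
#include <map>
#include <tuple>
#include <type_traits>
#include <utility>

template <auto Func, typename... Params>
auto cache_tl(Params&&... params) {
    using Key = std::tuple<std::decay_t<Params>...>;
    using Result = std::invoke_result_t<decltype(Func), Params...>;
    // Block-scope thread_local already implies static storage duration,
    // which is why "static thread_local" is redundant here.
    thread_local std::map<Key, Result> results;
    auto key = Key{params...};
    if (auto it = results.find(key); it != results.end()) {
        return it->second;
    }
    auto result = Func(std::forward<Params>(params)...);
    results.emplace(std::move(key), result);
    return result;
}

int twice(int x) { return 2 * x; }
```

Whether this beats a mutex-protected shared map depends on how expensive the function is and how much the threads' key sets overlap.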

  • @DART2WADER
    @DART2WADER a year ago

    4:00 I was racking my brain over how to get the return type of a function, thanks for the example.

  • @TheSulross
    @TheSulross a year ago

    I'd prefer to put the burden on the application to retain results that will be needed more than once.
    I really wonder about the utility of this in Python land - it smacks of an optimization touted on a marketing checklist of all the efforts to make Python more performant

    • @frydac
      @frydac a year ago +2

      In the Fibonacci example there is no way for the user to retain/reuse the intermediate results. The optimization technique is called 'memoization' and has existed far longer than Python has.

    • @martinmckee5333
      @martinmckee5333 a year ago

      I have had a number of Python projects where it has genuinely come in handy.
      It could certainly be argued that all of them could have been designed around state storage and been even faster, but the decorator syntax hides the memoization in such a way that it nicely stays out of the program logic.