C++ Weekly - Ep 364 - Python-Inspired Function Cache for C++
- Published 21 Aug 2024
- ☟☟ Awesome T-Shirts! Sponsors! Books! ☟☟
Upcoming Workshop: C++ Best Practices, NDC TechTown, Sept 9-10, 2024
► ndctechtown.co...
Upcoming Workshop: Applied constexpr: The Power of Compile-Time Resources, C++ Under The Sea, October 10, 2024
► cppunderthesea...
Notes and code: github.com/lef...
T-SHIRTS AVAILABLE!
► The best C++ T-Shirts anywhere! my-store-d16a2...
WANT MORE JASON?
► My Training Classes: emptycrate.com/...
► Follow me on twitter: / lefticus
SUPPORT THE CHANNEL
► Patreon: / lefticus
► Github Sponsors: github.com/spo...
► Paypal Donation: www.paypal.com...
GET INVOLVED
► Video Idea List: github.com/lef...
JASON'S BOOKS
► C++23 Best Practices
Leanpub Ebook: leanpub.com/cp...
► C++ Best Practices
Amazon Paperback: amzn.to/3wpAU3Z
Leanpub Ebook: leanpub.com/cp...
JASON'S PUZZLE BOOKS
► Object Lifetime Puzzlers Book 1
Amazon Paperback: amzn.to/3g6Ervj
Leanpub Ebook: leanpub.com/ob...
► Object Lifetime Puzzlers Book 2
Amazon Paperback: amzn.to/3whdUDU
Leanpub Ebook: leanpub.com/ob...
► Object Lifetime Puzzlers Book 3
Leanpub Ebook: leanpub.com/ob...
► Copy and Reference Puzzlers Book 1
Amazon Paperback: amzn.to/3g7ZVb9
Leanpub Ebook: leanpub.com/co...
► Copy and Reference Puzzlers Book 2
Amazon Paperback: amzn.to/3X1LOIx
Leanpub Ebook: leanpub.com/co...
► Copy and Reference Puzzlers Book 3
Leanpub Ebook: leanpub.com/co...
► OpCode Puzzlers Book 1
Amazon Paperback: amzn.to/3KCNJg6
Leanpub Ebook: leanpub.com/op...
RECOMMENDED BOOKS
► Bjarne Stroustrup's A Tour of C++ (now with C++20/23!): amzn.to/3X4Wypr
AWESOME PROJECTS
► The C++ Starter Project - Gets you started with Best Practices Quickly - github.com/cpp...
► C++ Best Practices Forkable Coding Standards - github.com/cpp...
O'Reilly VIDEOS
► Inheritance and Polymorphism in C++ - www.oreilly.co...
► Learning C++ Best Practices - www.oreilly.co...
I just wanna point out one glaring problem with this current solution. If you use this cache function on two different functions that have the same signature, they will also share a single cache.
One way to get around this is to allow only function pointers (which is probably not a bad idea anyway) and use that function pointer as a template parameter instead of a normal function parameter to the cache function. Then every function gets its own separate static store in the cache function.
This is a pretty neat way of doing such Python-style decorators, and I've used it to automatically generate stateless, Lua-compatible (fixed signature) wrapper functions for regular C++ functions.
Yes, I thought the same, but can you not just store the Func func as the first element of the std::tuple and remain generic over all types of function? And what does "Lua compatible (fixed signature)" mean? "Not overloaded"?
@@oschonrock If you want to have a function be callable from Lua, it has to take a single Lua state pointer parameter and return an int. Arguments and return values are passed and returned on a stack that can be accessed with that Lua state parameter. The int return tells how many result values you pushed (Lua supports multiple return values).
@@oschonrock Ah, you mean for the cache example. Yes, you could store the function as part of the map key, but that also makes the lookup more expensive, as you need to search through cached values for different functions.
@@Possseidon Oh, you meant e.g. a void*'ed FP as an NTTP, which then gives you a new template instantiation, which gives you a new static map? Yeah, that could be faster, depending on the use case (how many functions, how many values, etc.).
map is terrible for this, but unordered_map requires hash..
@@oschonrock You can just straight up use function pointers as NTTP; no need to cast them to void*. But yes, that's what I meant.
In the functional world ... I think this is called memoisation ... ?
There is a whole chapter about lazy evaluation in "Functional Programming in C++" by Ivan Čukić. I really loved that book. He does something similar.
There are definitely ways this can be extremely good. In my case I tend to cache quite a lot of SQL, and I have to specify a TTL (time to live), so results are stored in a Boost multimap etc.
The massive benefit here is the ability to store arbitrary types; I always "limited" myself to a string key. I will definitely play around with a tuple of elements as a key! Using C++20 defaulted comparisons, that should be quite easy to write for custom types too.
This is definitely food for thought, thank you!
For your specific case of caching SQL results, my gut says that it would be wiser to let the SQL server manage the caching.
@@ranseus That depends: a) caching at the SQL server level can become a parallelism bottleneck, and b) hydration of the object graph, with possible processing, may mean a cache on the client side makes more sense.
@@ranseus Caching in the application also reduces the RTT, as the SQL server is not local; many times that made it possible to run the application in Canada while having the SQL server in France.
Nice video! I was working on a similar caching approach for heavily-computed functions. It would be really nice to see more videos on this topic, especially on how to replace the static storage inside functions with something thread-safe, which is not a rare requirement when calling cached functions.
You could use thread_local
@@12affes that's interesting. But what if I need some shared cache for all the threads?
@@dj-maxus Protect it with a lock ... that's about the only way.
Pretty neat generalized dynamic programming helper. Also, I didn't know tuples implemented operator<.
Very interesting and good video :) At first glance I didn't understand, but after a few seconds it looks really good.
I've noticed (and @Possseidon gave a big explanation on this) that if you call two functions with the same signature, you'll get a mixed cache.
I'd go with creating a function object, either a mutable lambda or a functor, and store the cache in it. And a mutex, if needed.
The only way to reasonably use std::pair is with structured bindings. Seeing "->first.second" makes me a sad panda.
What if, instead of returning the result of invoking the function, cache(..) returned a function that, when you call it, automatically caches the results? Wouldn't that be more transparent? I hope I have explained myself 😅
Jason, why not use emplace instead of insert?
Even better, try_emplace
that's a fair point.
Very interesting indeed
A possible (if limited) use case would be performing division as multiplication by 1/divisor, with a cached reciprocal?
Is there a way to template it in a way that you could use "using" to define your decorated function? Does this work with lambdas? How close to the pythonic "decorated at the point of definition" style can we reasonably get before cthulhu rises up in utter dismay?
Pretty cool, I wonder when compilers will be able to see through all of this junk code that was generated.
3:10 - As of this cut, I think there are some huge issues with move only types. They will appear to work, but cause strange things to happen. Maybe you fix it later.
I didn't actually write any real tests, so...
What about making the map static thread_local? Wouldn't that solve the thread safety concern?
Yes, that could be an option. (But just as an aside, "static thread_local" is redundant; "thread_local" alone means the same thing.)
4:00 - I was racking my brain over how to get a function's return type; thanks for the example.
I'd prefer to put the burden on the application to retain results that will be needed more than once.
Really wonder about the utility of this in Python land - smacks of an optimization touted in a marketing checklist of all the efforts to make Python more performant
In the Fibonacci example, there is no way for the user to retain/use the intermediate results. The optimization technique is called 'memoization' and exists far longer than python does.
I have had a number of Python projects where it has genuinely come in handy.
It could certainly be argued that all of them could have been designed around state storage and been even faster, but the decorator syntax hides the memoization in such a way that it nicely stays out of the program logic.