We just discovered faster sorting algorithms!
- published 7 Jun 2023
- DeepMind built AlphaDev, which discovered faster sorting algorithms using Deep Reinforcement Learning.
Link to the article: www.deepmind.com/blog/alphade...
🔔 Subscribe for more stories: www.youtube.com/@underfitted?...
📚 My 3 favorite Machine Learning books:
• Deep Learning With Python, Second Edition - amzn.to/3xA3bVI
• Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - amzn.to/3BOX3LP
• Machine Learning with PyTorch and Scikit-Learn - amzn.to/3f7dAC8
Twitter: / svpino
Disclaimer: Some of the links included in this description are affiliate links; I'll earn a small commission if you purchase something, at no cost to you. - Science & Technology
I wouldn't say it found a new algorithm, it just optimized the existing ones. The AI made the sorting library slightly more efficient (1.7%) by adjusting assembly code instructions. Essentially AlphaDev is like a reinforcement learning compiler AI that found the best way to optimize the sorting library.
I think we can say it found a faster algorithm. That should be fair.
So it did what a genetic programming algorithm can do… did it do it faster, or at less CPU cost?
@@underfitted uh that’s not a fair assessment… it’s still the same algorithm, just a better implementation
The authors of the paper say AlphaDev started from scratch and invented new algorithms. They bear resemblance to existing, less efficient ones, but the process of discovering them is in fact novel. For sorting 3 to 5 elements, the efficiency boost is about 70 per cent. This is so relevant that the standard C++ library has already adopted these improvements. The library hadn't been updated in this area for decades. The improvement falls to 1.7 per cent only when sorting more than 250,000 elements.
czcams.com/video/zFpLFW3lfDs/video.html
Sounds cool, but it would be nice to have a link to the paper, and how much % improvement are they talking about?
I just updated the description with the link.
I would have enjoyed a thorough explanation of the details of the algorithms.
True
IMO there's a deep reason why we have never found a simple optimal sorting algorithm: even in pure mathematics, nature does not have a strongly preferred order. We choose to focus on certain arrangements and the operators which work well with them, but we're cherry-picking a few tiny subsets which happened to be useful and/or easier for our brains to comprehend. In other words, abstract set theory does not have a baked-in preferred order; we pick the orders we find useful out of abstract set theory.
Yes, and then the work done is shared generation to generation through conventions and best practices. It only takes a generation before that arbitrary abstract convention becomes useful language and knowledge.
I think that among formal systems, set theory in its most meta form is the closest you can get to describing philosophy with itself.
My immediate thought: space (ideally O(n) for sorting) and operation time have different weights, and one can even project hardware development into the future, but yeah, hardware instructions are the right thing to focus on for whatever fitness you want.
Was this video generated by VideoGPT? Because it spends 1:32 giving one piece of information: an AI generated a sorting algorithm. Nothing about the algorithm, nor the AI.
I am excited to see you excited ☺️
Lol
Well how about making an outline
Hands waving made me dizzy
I tied them behind my back in the next video and it came out better.
give us source!!!
AlphaDev. Google it. You should find it easily.
Kindly be a sport and, as a service to your viewers, supply it. Thank you.
Beware "breakthroughs", unless Ping..
Let's see the paper, but with comparison sorting, the lowest possible complexity is n log n, and it is impossible to get lower than that.
They aren’t getting lower than that. It’s about optimizing the actual speed of the code. They aren’t changing the runtime complexity of the algorithm.
Dude, O(n)
@Kotov Media O(n) implies that you use something other than comparisons, and your memory usage is higher (I don't recall the exact bound now). This is because of the proof that a comparison sort algorithm must make at least n log n comparisons, which is based on the decision tree model.

Each comparison made by the sorting algorithm can be represented as a node in a binary decision tree, where each internal node corresponds to a comparison and each leaf node corresponds to a permutation of the input sequence. Since there are n! possible permutations of n elements, and the sorting algorithm must correctly sort any input sequence, each permutation must be reachable by some sequence of comparisons. Therefore, the decision tree must have at least n! leaves.

Now consider the height of the decision tree (the length of the longest path from the root to a leaf). Since each comparison can have at most two outcomes, the height of the tree must be at least log2(n!) = Ω(n log n), using Stirling's approximation. Since each comparison corresponds to a node on a root-to-leaf path, the sorting algorithm must make at least Ω(n log n) comparisons. You can search for a more graphic proof.
@@titusfx You're incomprehensible, and you obviously haven't done a year of CS. Just look up O(n) sorts. Counting and radix will come up, I believe.
@@GEMSofGOD_com Those are not comparison based sorts, they're radix/index based. They only work on whole numbers, not arbitrary data, so they're much less useful in practice. It has been proven that comparison sorting cannot run faster than nlogn and sorting in general cannot run faster than n. We're obviously only talking about comparison-based sorting, he even specified so, twice. Sure, you took a year of cs, but you're falling a little behind on common sense. Please don't be rude to others when you're the only one worth laughing at.
Your intense hand movements are making the video harder to watch, visually.
Get gud lol