We just discovered faster sorting algorithms!

  • Added June 7, 2023
  • DeepMind built AlphaDev, which discovered faster sorting algorithms using Deep Reinforcement Learning.
    Link to the article: www.deepmind.com/blog/alphade...
    🔔 Subscribe for more stories: www.youtube.com/@underfitted?...
    📚 My 3 favorite Machine Learning books:
    • Deep Learning With Python, Second Edition - amzn.to/3xA3bVI
    • Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - amzn.to/3BOX3LP
    • Machine Learning with PyTorch and Scikit-Learn - amzn.to/3f7dAC8
    Twitter: / svpino
    Disclaimer: Some of the links included in this description are affiliate links where I'll earn a small commission if you purchase something. There's no cost to you.
  • Science & Technology

Comments • 48

  • @ricosrealm
    @ricosrealm a year ago +51

    I wouldn't say it found a new algorithm; it just optimized the existing ones. The AI made the sorting library slightly more efficient (1.7%) by adjusting assembly code instructions. Essentially, AlphaDev is like a reinforcement-learning compiler AI that found the best way to optimize the sorting library.

    • @underfitted
      @underfitted  a year ago +5

      I think we can say it found a faster algorithm. That should be fair.

    • @j.j.maverick9252
      @j.j.maverick9252 a year ago +1

      So it did what a genetic programming algorithm can do… did it do it faster or at a lower CPU cost?

    • @invinciblemode
      @invinciblemode a year ago +13

      @@underfitted uh that’s not a fair assessment… it’s still the same algorithm, just a better implementation

    • @enantiodromia
      @enantiodromia a year ago +3

      The authors of the paper say AlphaDev started from scratch and invented new algorithms. They bear a resemblance to existing, less efficient ones, but the process of discovering them is in fact novel. For sorting 3 to 5 elements, the efficiency boost is about 70 per cent. This is so relevant that the standard C++ library has already adopted these improvements; the library hadn't been updated in this area for decades. The improvement falls to 1.7 per cent only when sorting more than 250,000 elements. (A sketch of this kind of small fixed-size sorting routine follows at the end of this thread.)

    • @MizanHIT
      @MizanHIT a year ago

      czcams.com/video/zFpLFW3lfDs/video.html
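    For readers following along, here is a minimal C++ sketch of the kind of small, fixed-size sorting routine discussed above: a branchless compare-exchange plus a three-element sorting network built from it. This is illustrative only; it is not AlphaDev's output or the actual libc++ code, and the names compare_exchange and sort3 are made up for the example.

        #include <algorithm>
        #include <cstdio>

        // Branchless compare-exchange: with optimizations enabled, compilers
        // typically lower std::min/std::max on ints to conditional moves rather
        // than branches. AlphaDev's search works at this instruction level
        // (e.g. finding a sequence with one fewer mov), not on asymptotic complexity.
        inline void compare_exchange(int &a, int &b) {
            int lo = std::min(a, b);
            int hi = std::max(a, b);
            a = lo;
            b = hi;
        }

        // Three-element sorting network: a fixed sequence of compare-exchanges,
        // the shape of the small-input routines behind std::sort.
        inline void sort3(int &a, int &b, int &c) {
            compare_exchange(a, b);
            compare_exchange(b, c);
            compare_exchange(a, b);
        }

        int main() {
            int x = 9, y = 1, z = 5;
            sort3(x, y, z);
            std::printf("%d %d %d\n", x, y, z);  // prints "1 5 9"
            return 0;
        }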

  • @smvk666
    @smvk666 a year ago +22

    Sounds cool, but it would be nice to have a link to the paper. How much of an improvement are they talking about?

    • @underfitted
      @underfitted  a year ago +4

      I just updated the description with the link.

  • @enantiodromia
    @enantiodromia a year ago +9

    I would have enjoyed a thorough explanation of the details of the algorithms.

  • @WilliamDye-willdye
    @WilliamDye-willdye a year ago +5

    IMO there's a deep reason why we have never found a simple optimal sorting algorithm: even in pure mathematics, nature does not have a strongly preferred order. We choose to focus on certain arrangements and the operators which work well with them, but we're cherry-picking a few tiny subsets which happened to be useful and/or easier for our brains to comprehend. In other words, abstract set theory does not have a baked-in preferred order; we pick the orders we find useful out of abstract set theory.

    • @Robert_McGarry_Poems
      @Robert_McGarry_Poems a year ago +1

      Yes, and then the work done is shared generation to generation through conventions and best practices. It only takes a generation before that arbitrary abstract convention becomes useful language and knowledge.

    • @Robert_McGarry_Poems
      @Robert_McGarry_Poems a year ago +1

      I think that, in formal conceptions, set theory in its most meta form is the closest thing to describing philosophy with itself that you can get.

  • @GEMSofGOD_com
    @GEMSofGOD_com a year ago +2

    My immediate thought: space (ideally about O(n) for sorting) and operation time have different weights, and one can even project hardware development into the future. But yes, hardware instructions are what to focus on for whatever fitness you're after.

  • @cimmik
    @cimmik 11 months ago +2

    Was this video generated by VideoGPT? Because it spends 1:32 giving a single piece of information: an AI generated a sorting algorithm. Nothing about the algorithm or the AI.

  • @shakhaoathossain5032
    @shakhaoathossain5032 a year ago

    I am excited to see you excited ☺️

  • @GEMSofGOD_com
    @GEMSofGOD_com a year ago

    Well, how about making an outline?

  • @kohait00
    @kohait00 3 months ago

    The hand-waving made me dizzy.

    • @underfitted
      @underfitted  3 months ago

      I tied them behind my back in the next video and it came out better.

  • @TeoDark
    @TeoDark a year ago +3

    Give us the source!!!

    • @underfitted
      @underfitted  a year ago +2

      AlphaDev. Google it. You should find it easily.

    • @enantiodromia
      @enantiodromia a year ago

      Kindly be a sport and, as a service to your viewers, supply it. Thank you.

  • @tim40gabby25
    @tim40gabby25 a year ago

    Beware "breakthroughs", unless Ping..

  • @titusfx
    @titusfx a year ago

    Let's see the paper, but for comparison sorting the lowest possible complexity is n log n, and it is impossible to get lower than that.

    • @underfitted
      @underfitted  a year ago +2

      They aren’t getting lower than that. It’s about optimizing the actual speed of the code. They aren’t changing the runtime complexity of the algorithm.

    • @GEMSofGOD_com
      @GEMSofGOD_com a year ago

      Dude, O(n).

    • @titusfx
      @titusfx a year ago

      @Kotov Media O(n) implies that you use something other than comparisons, and your memory usage is higher (probably more than n^2, I don't recall now). This is because of the proof that a comparison sort algorithm must make at least n log n comparisons. It is based on the decision tree model.

      Each comparison made by the sorting algorithm can be represented as a node in a binary decision tree, where each internal node corresponds to a comparison and each leaf node corresponds to a permutation of the input sequence. Since there are n! possible permutations of n elements, there must be at least n! leaf nodes in the decision tree. Moreover, because the sorting algorithm must correctly sort any input sequence, each permutation must be reachable by some sequence of comparisons, so the decision tree must have at least n! leaves.

      Now consider the height of the decision tree (i.e., the length of the longest path from the root to a leaf). Since each comparison has at most two outcomes, the height of the tree must be at least log2(n!) = Ω(n log n), using Stirling's approximation. Since each comparison corresponds to a node on the decision tree, and the height of the tree is Ω(n log n), the sorting algorithm must make at least Ω(n log n) comparisons. You can search for a more graphic proof. (The bound is restated compactly at the end of this thread.)

    • @GEMSofGOD_com
      @GEMSofGOD_com a year ago

      @@titusfx You're incomprehensible, and you obviously haven't done a year of CS. Just look up O(n) sorts. Counting and Radix will come up, I believe.

    • @paulblart7378
      @paulblart7378 a month ago

      @@GEMSofGOD_com Those are not comparison-based sorts; they're radix/index based. They only work on whole numbers, not arbitrary data, so they're much less useful in practice. It has been proven that comparison sorting cannot run faster than n log n and that sorting in general cannot run faster than n. We're obviously only talking about comparison-based sorting; he even specified so, twice. Sure, you took a year of CS, but you're falling a little behind on common sense. Please don't be rude to others when you're the only one worth laughing at.
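    For reference, the comparison-sort lower bound discussed in this thread, restated as a compact LaTeX derivation, where h is the height of a binary decision tree with at least n! leaves, i.e. the worst-case number of comparisons:

        h \;\ge\; \log_2(n!) \;=\; \sum_{k=1}^{n} \log_2 k \;\ge\; \frac{n}{2}\,\log_2\!\frac{n}{2} \;=\; \Omega(n \log n)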
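    And to make the counting/radix point concrete, here is a rough C++ sketch of counting sort. It is not from the video or the paper, just an illustration of an O(n + k) non-comparison sort for bounded non-negative integers; the function name counting_sort is made up for the example.

        #include <cstdio>
        #include <cstddef>
        #include <vector>

        // Counting sort: O(n + k) for n non-negative integers no larger than max_value.
        // It never compares two elements against each other; it indexes by value,
        // so the n log n lower bound for comparison sorts does not apply. The price
        // is that it only works for integer-like keys with a bounded range.
        std::vector<int> counting_sort(const std::vector<int>& input, int max_value) {
            std::vector<std::size_t> counts(static_cast<std::size_t>(max_value) + 1, 0);
            for (int v : input) {
                ++counts[static_cast<std::size_t>(v)];
            }
            std::vector<int> sorted;
            sorted.reserve(input.size());
            for (int v = 0; v <= max_value; ++v) {
                sorted.insert(sorted.end(), counts[static_cast<std::size_t>(v)], v);
            }
            return sorted;
        }

        int main() {
            std::vector<int> data{5, 3, 9, 3, 0, 7};
            for (int v : counting_sort(data, 9)) std::printf("%d ", v);  // prints "0 3 3 5 7 9"
            std::printf("\n");
            return 0;
        }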

  • @mael1515
    @mael1515 a year ago

    Your intense hand movements are making the video harder to watch, visually.