Adaptable Aviators: future of autonomous navigation Liquid networks | Ramin Hasani | TEDxMIT Salon

  • Published 19 Jul 2023
  • We delve into the realm of Liquid Neural Networks, an innovative approach to artificial intelligence inspired by the humble nematode C. elegans. Learn how these tiny, adaptable networks can understand the task they are given and navigate complex environments robustly when deployed on drones!
    AI, Algorithm, Engineering, Finance, Imagination, Math, Robots
    Ramin Hasani is a Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at the Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology (MIT). Ramin's research focuses on robust deep learning and decision-making in complex dynamical systems. Previously he was a Postdoctoral Associate at MIT CSAIL, leading research on modeling intelligence and sequential decision-making with Prof. Daniela Rus. He received his Ph.D. with distinction in Computer Science from Vienna University of Technology (TU Wien), Austria, in May 2020. His Ph.D. dissertation and continued research on Liquid Neural Networks have been recognized internationally with numerous nominations and awards, such as a TÜV Austria Dissertation Award nomination in 2020 and an HPC Innovation Excellence Award in 2022. He has also been a frequent TEDx speaker. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at www.ted.com/tedx

Comments • 22

  • @nicktasios1862
    @nicktasios1862 10 months ago +10

    One of the reasons they have not been very popular is that, like neural ODEs/Continuous normalising flows, they require backpropagation through an ODE solver, which can be slow and complex to implement. I haven't experimented with LNNs yet, but they look promising.
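
    A minimal sketch of what "backpropagation through an ODE solver" looks like in practice, assuming PyTorch and the torchdiffeq package; the LTC-style dynamics below is a simplified illustration, not the exact published liquid-network equations:

    ```python
    import torch
    import torch.nn as nn
    from torchdiffeq import odeint_adjoint as odeint  # differentiates through the solver


    class LiquidCellSketch(nn.Module):
        """Toy continuous-time hidden state: dx/dt = -x / tau + f([x, u])."""

        def __init__(self, hidden_size, input_size):
            super().__init__()
            self.tau = nn.Parameter(torch.ones(hidden_size))
            self.f = nn.Sequential(nn.Linear(hidden_size + input_size, hidden_size), nn.Tanh())
            self.u = None  # current input, held constant over the integration window

        def forward(self, t, x):
            return -x / self.tau.abs().clamp(min=1e-3) + self.f(torch.cat([x, self.u], dim=-1))


    cell = LiquidCellSketch(hidden_size=32, input_size=8)
    cell.u = torch.randn(4, 8)        # batch of 4 inputs
    x0 = torch.zeros(4, 32)           # initial hidden states
    t = torch.linspace(0.0, 1.0, 20)  # integration time points

    x_traj = odeint(cell, x0, t)      # shape (20, 4, 32)
    loss = x_traj[-1].pow(2).mean()
    loss.backward()                   # the costly step: gradients flow back through the solver
    ```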

    • @revimfadli4666
      @revimfadli4666 6 months ago

      I wonder if gradient-free methods could help, such as evolution strategies
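
      A toy sketch of what a gradient-free alternative could look like, in the spirit of OpenAI-style evolution strategies; the fitness function and parameter count here are placeholders, not anything from the talk:

      ```python
      import numpy as np

      def fitness(params):
          # Placeholder: in practice, an episode return from a liquid-network controller.
          return -np.sum(params ** 2)

      theta = np.zeros(10)                   # flattened network parameters
      sigma, lr, pop = 0.1, 0.02, 50         # noise scale, step size, population size

      for step in range(200):
          noise = np.random.randn(pop, theta.size)
          rewards = np.array([fitness(theta + sigma * eps) for eps in noise])
          rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
          theta += lr / (pop * sigma) * noise.T @ rewards   # ES update, no backprop anywhere
      ```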

    • @pavelnakaznenko9445
      @pavelnakaznenko9445 1 day ago

      From my understanding, there is Closed-form Continuous-time (CfC) work from the same authors, where they find a closed-form solution for the ODE-solving steps, which speeds things up drastically.
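
      Roughly, the CfC idea replaces the numerical ODE solve with a gated closed-form expression. A heavily simplified sketch; the head networks and gating below are assumptions for illustration, not the exact published formulation:

      ```python
      import torch
      import torch.nn as nn


      class CfCCellSketch(nn.Module):
          def __init__(self, input_size, hidden_size):
              super().__init__()
              d = input_size + hidden_size
              self.f = nn.Linear(d, hidden_size)   # controls the time-dependent gate
              self.g = nn.Linear(d, hidden_size)   # branch weighted near t = 0
              self.h = nn.Linear(d, hidden_size)   # branch weighted as t grows

          def forward(self, x, u, dt):
              z = torch.cat([x, u], dim=-1)
              gate = torch.sigmoid(-self.f(z) * dt)                # no ODE solver in the loop
              return gate * torch.tanh(self.g(z)) + (1.0 - gate) * torch.tanh(self.h(z))


      cell = CfCCellSketch(input_size=8, hidden_size=32)
      x = torch.zeros(4, 32)
      for u, dt in [(torch.randn(4, 8), 0.1) for _ in range(5)]:   # irregularly sampled steps
          x = cell(x, u, dt)
      ```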

  • @ps3301
    @ps3301 10 months ago +21

    Liquid neurons have been out for two years and no one is paying much attention to them. Perhaps we need to fine-tune the human brain better (or, in human terms, do more marketing) to notice this innovation.

    • @anhta9001
      @anhta9001 10 months ago +3

      Humans tend to have tunnel vision on the thing called "scale is all you need", you know.

    • @Supreme_Lobster
      @Supreme_Lobster 10 months ago

      We still don't know how liquid networks behave at scale, or whether they are even scalable.

    • @Supreme_Lobster
      @Supreme_Lobster 10 months ago

      They also don't seem to be doing much with it for now, which is kinda sad. This has so much potential. So far I've only been able to get my hands on the CfC version of Liquid Networks and it was really promising, but more testing and engineering is needed.
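
      If that refers to the authors' open-source ncps package, a minimal usage sketch might look like this; the class name, argument order, and shapes are assumptions based on the public repository, so double-check against its docs:

      ```python
      import torch
      from ncps.torch import CfC   # pip install ncps (assumed package layout)

      rnn = CfC(20, 50)            # 20 input features, 50 CfC units
      x = torch.randn(2, 16, 20)   # (batch, time, features)
      h0 = torch.zeros(2, 50)      # initial hidden state

      output, hn = rnn(x, h0)      # output per time step, plus the final hidden state
      ```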

    • @Gabcikovo
      @Gabcikovo 9 months ago

      Good one

    • @Gabcikovo
      @Gabcikovo 9 months ago

      ​@anhta9001 wait, didn't Sam Altman just say he's been praying to the god of scale?

  • @feridbelgaid1946
    @feridbelgaid1946 10 months ago +9

    Dear Raminheimer, please don't put a gun on that thing.

  • @khaledmoulayamar4113
    @khaledmoulayamar4113 8 months ago +1

    Impressive and wonderful, thank you so much!

  • @GarrettMoore-ve7we
    @GarrettMoore-ve7we 10 months ago

    This was refreshing, thank you.

  • @DanielSanchez-jl2vf
    @DanielSanchez-jl2vf 10 months ago +4

    Can you apply self-attention to liquid neurons to improve scalability, or does that sound easier than it actually is?

    • @Supreme_Lobster
      @Supreme_Lobster 10 months ago +1

      My understanding is that liquid neurons are having a tough time scaling up, and adding them to self-attention (which follows quadratic scaling, i.e. O(n^2)) seems a bit too crazy at this point.
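
      The quadratic term is easy to see in a bare-bones attention computation (purely illustrative, unrelated to any liquid-network code):

      ```python
      import torch

      n, d = 1024, 64                          # sequence length, head dimension
      Q, K, V = (torch.randn(n, d) for _ in range(3))

      scores = Q @ K.T / d ** 0.5              # (n, n) matrix: the O(n^2) cost in time and memory
      attn = torch.softmax(scores, dim=-1)
      out = attn @ V                           # (n, d)
      ```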

  • @shawnvandever3917
    @shawnvandever3917 10 months ago +5

    Current deep learning isn't going to get us to AGI. We need a fresh architecture with continuous learning and memory. This seems promising, if it can scale.

  • @Gabcikovo
    @Gabcikovo 9 months ago

    6:20 Liquid neural networks perform significantly better than the other ones because they understand the task.

  • @ivan8960
    @ivan8960 9 months ago +2

    good for robopets and autonomous seeking weapons

  • @erwinzer0
    @erwinzer0 8 months ago

    Zero labeling, that's insane. We are closer to AGI.

  • @CharlesFraser
    @CharlesFraser 9 months ago +1

    While this is like Enescu-level genius, the demos are incredibly boring. Can you make a serious model that demonstrates broad learning and understanding? Perhaps working with the team at Roboat to make a boat that can ferry people about, or a drone that can follow / film an amazing cyclist through various different environments. Train the drones on amazing drone pilots. Do something that shows real value now rather than potential value (which is obviously massive).

    • @StevenAkinyemi
      @StevenAkinyemi 8 months ago +1

      Easier said than done. This is a relatively new architecture, different from the traditional neural-net architectures that have been around for decades.
      These things take time, and boring is good. I would be highly skeptical of a flashy demo.