KAN: Kolmogorov-Arnold Networks | Ziming Liu

  • Added Jun 8, 2024
  • Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg
    Abstract: Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
    Speakers: Ziming Liu
    Twitter Hannes: / hannesstaerk
    Twitter Dominique: / dom_beaini
    ~
    Chapters
    00:00 - Intro + Background
    05:06 - From KART to KAN
    07:56 - MLP vs KAN
    16:05 - Accuracy: Scaling of KANs
    26:35 - Interpretability: KAN for Science
    38:04 - Q+A Break
    57:15 - Strengths and Weaknesses
    59:28 - Philosophy
    1:08:45 - Anecdotes Behind the Scenes
    1:11:49 - Final Thoughts
    1:14:58 - Q+A
  • Science & Technology
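The abstract describes the core architectural change: every scalar weight of an MLP is replaced by a learnable univariate function that lives on the edge, and nodes simply sum their inputs. Below is a minimal illustrative sketch of that idea (my own simplification, not the authors' implementation; it uses piecewise-linear interpolation on a fixed grid in place of the paper's B-splines).

```python
import numpy as np

class KANLayerSketch:
    """Toy KAN layer: one learnable 1-D function per (input, output) edge.

    Each edge function is piecewise-linear on a fixed grid; nodes only sum.
    The real model uses B-spline parametrizations plus a residual base term.
    """

    def __init__(self, in_dim, out_dim, grid_size=8, x_min=-1.0, x_max=1.0):
        self.grid = np.linspace(x_min, x_max, grid_size)           # shared knot grid
        # grid_size learnable values per edge define that edge's 1-D function
        self.coef = np.random.randn(out_dim, in_dim, grid_size) * 0.1

    def forward(self, x):
        # x: (batch, in_dim) -> (batch, out_dim)
        batch, in_dim = x.shape
        out_dim = self.coef.shape[0]
        y = np.zeros((batch, out_dim))
        for j in range(out_dim):
            for i in range(in_dim):
                # evaluate the (i -> j) edge function at x[:, i], then sum over edges
                y[:, j] += np.interp(x[:, i], self.grid, self.coef[j, i])
        return y

# Stacking such layers gives a multi-layer KAN in the spirit of the talk.
layer1 = KANLayerSketch(in_dim=2, out_dim=5)
layer2 = KANLayerSketch(in_dim=5, out_dim=1)
x = np.random.uniform(-1, 1, size=(4, 2))
print(layer2.forward(layer1.forward(x)).shape)   # (4, 1)
```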

Comments • 38

  • @ferencszalma7094
    @ferencszalma7094 25 days ago +15

    0:02:35 Kolmogorov-Arnold Representation Theorem KART
    ~The only true multivariate function is the sum.
    0:03:45 Details of (two layers) KART: 1d edge functions and node sums
    0:05:05 KAN Kolmogorov-Arnold Network (orig 2-layer)
    0:05:55 Multi-layer KAN
    0:07:55 MLP and KAN comparison
    0:09:45 B-splines basics
    0:14:30 B-spline Cox-de Boor recursion formula (inefficient; a rough sketch follows this outline)
    0:14:45 Implementation tricks: residual activations, initialization, grid update
    0:38:05 Q: Expressivity vs generalization, bias-variance tradeoff, U-shape loss as fn of p (number of features)
    0:39:15 Q: What if activation is out of range of the finite spline domain? -> Use the residual activation fn!
    0:40:40 KANs to solve physics problems from raw data or already partially processed data?
    0:43:15 KANs to solve PDEs?
    0:44:35 Grid resolution finetuning is done manually
    0:47:20 Can you replicate KANs by MLPs with the right breadth and depth? Yes. Would be nice to see a unified theory.
    0:51:18 What's the novelty of KANs? At the technical level what makes a KAN a KAN?
    0:58:16 Inductive bias: KAN's or DNN's inductive biases better fit a task: vision, language, science
    0:59:25 History of connectionism vs symbolism
    1957 - Frank Rosenblatt, Invention of perceptron
    1969 - Marvin Minsky & Seymour Papert, Perceptrons: An Introduction to Computational Geometry: "Perceptrons cannot do XOR"
    1974 - Paul Werbos, "A multi-layer perceptron can do XOR"
    1975 - Robert Hecht-Nielsen, Kolmogorov networks (2 layer, width 2n+1)
    1988 - George Cybenko, "2 Layer Kolmogorov networks can do XOR"
    1989 - Tomaso Poggio, "KA is irrelevant for neural networks"
    2012 - year of modern deep learning
    Expert systems/symbolic regression vs KANs vs MLP/Kolmogorov networks
    1:04:20 KAN vs MLP philosophy: High internal degrees of freedom, reductionism, parts are important vs Low internal degrees of freedom, holism, interaction of parts is important
    1:08:45 Intricacies of developing something new: KANs beyond 2 layers
    1:11:45 Github repos
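For readers following the 0:14:30 and 0:14:45 items above: a direct (and, as noted, inefficient) implementation of the Cox-de Boor recursion, plus a spline edge activation with a residual base term, might look like the sketch below. The knot vector, degree, and the SiLU-style residual are illustrative assumptions, not code from the talk.

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: i-th B-spline basis of degree k on knot vector t,
    evaluated at scalar x. Direct recursion, hence exponential in k."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

def edge_activation(x, coeffs, t, k, w_base=1.0):
    """One KAN edge: learnable spline plus a residual base activation (cf. 0:14:45)."""
    spline = sum(c * bspline_basis(i, k, t, x) for i, c in enumerate(coeffs))
    base = x / (1.0 + np.exp(-x))            # SiLU-style residual term (assumption)
    return w_base * base + spline

t = np.linspace(-1, 1, 12)                   # toy knot vector
coeffs = np.random.randn(len(t) - 3 - 1)     # degree-3 spline: len(t) - k - 1 bases
print(edge_activation(0.3, coeffs, t, k=3))
```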

  • @TomHutchinson5
    @TomHutchinson5 24 days ago +5

    Wow, this is blowing up. Most of the journal club videos get hundreds of views. This already has thousands! I look forward to watching the talk and reading the paper.

  • @Pingu_astrocat21
    @Pingu_astrocat21 26 days ago +7

    thank you for uploading this :)

  • @shinkurt
    @shinkurt 24 days ago

    Thanks guys

  • @brian5735
    @brian5735 20 days ago

    I like the 1d showing the integration. Great for PDEs

  • @space-time-somdeep
    @space-time-somdeep 24 days ago

    Thanks

  • @HD-qq3bn
    @HD-qq3bn 19 days ago +1

    I suggest using piecewise functions instead of splines, which show some similarities with FEM and may be easier to train.
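If I read the suggestion right, the FEM-flavoured variant would expand each edge function in piecewise-linear "hat" basis functions on a 1-D grid, with the nodal values as the learnable parameters. A rough sketch of that idea (my own illustration, not from the talk):

```python
import numpy as np

def hat_basis(x, grid):
    """FEM-style piecewise-linear hat functions on a 1-D grid.
    Returns one value per grid node, evaluated at scalar x."""
    phi = np.zeros(len(grid))
    for i, g in enumerate(grid):
        left = grid[i - 1] if i > 0 else g
        right = grid[i + 1] if i < len(grid) - 1 else g
        if left < g and left <= x <= g:
            phi[i] = (x - left) / (g - left)      # rising edge of the hat
        elif right > g and g <= x <= right:
            phi[i] = (right - x) / (right - g)    # falling edge of the hat
    return phi

grid = np.linspace(-1.0, 1.0, 9)
coeffs = np.random.randn(len(grid))               # learnable nodal values
print(float(coeffs @ hat_basis(0.37, grid)))      # piecewise-linear edge activation
```

Because each hat has local support, changing one coefficient only changes the function near that grid node, which may be what would make this variant easy to train; B-splines have the same locality, just with higher-order smoothness.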

  • @automatescellulaires8543
    @automatescellulaires8543 22 days ago +1

    Yes we Kan? I swear I've already heard this somewhere.

  • @hanyanglee9018
    @hanyanglee9018 25 days ago +2

    If you simply make an activation function for each instruction and protect the activation output between layers, it would probably work. Except I don't know how to protect the activations between layers in a graceful way.
    Softmax helps self-attention protect *that*.
    BN seems not to be used anymore, but it actually protects *that*.
    None of them is graceful, since they all distort the forward path in some way.
    Or, if we don't use the B-spline, we can still use a sigmoid (with MIG) to do a similar job.
    Edit: the sigmoid way doesn't provide any known interpretability. It's only about the black-box way.

  • @Kram1032
    @Kram1032 23 days ago +2

    So in principle, clearly you could simply take the functions KAN is built upon to be NNs.
    Furthermore, you could take a KAN of KANs, which strikes me as a second way to "go deep" on KANs.
    It also feels a little bit to me like the connections between objects, functions, functionals, natural transformations... - i.e. you'd essentially be able to encode category theoretical notions in KANs. - Is that a reasonable comparison to make?
    If so I wonder if you could simply take your base objects to be, say, the primitives of your favourite proof assistant plus arbitrarily deep, arbitrarily nested KANs to effectively efficiently find arbitrary functions that well represent whatever relationships you'd throw at them
    It's probably not at all easy to do, but that'd seem to me to be the most powerful version.

  • @deliyomgam7382
    @deliyomgam7382 15 days ago +1

    Since you haven't given up on KANs, you could apply a normalization function to the whole data set. E.g. x = y^2 may go out of bounds for large values of x; you could represent the section of the B-spline where the curve of the derivative would explode with a compact representation, while keeping the curve x = y^2 and showing its multipliers on the side. E.g. you can represent a billion with "b" as on a calculator, which also saves space. Its multipliers would show the difference between x = y^2 and nx = y^2... I don't know if I understood it right; if so, best of luck with your PhD.

  • @deliyomgam7382
    @deliyomgam7382 15 days ago +1

    So n could be represented as a function itself, instead of going to infinity.

  • @gemini_537
    @gemini_537 22 days ago +3

    Gemini 1.5 Pro: This video is about Kolmogorov-Arnold Networks (KANs) presented by Ziming Liu, a PhD student at MIT. KANs are a new type of neural network architecture inspired by the Kolmogorov-Arnold representation theorem. This theorem states that any continuous multivariate function can be represented as a finite sum of compositions of continuous single-variable functions and addition (the formula is written out after this summary).
    The video talks about the following aspects of KANs:
    * Motivation: Why KANs were developed and what problems they address (0:00-2:22)
    * Mathematical foundations: Explanation of the Kolmogorov-Arnold representation theorem (2:22-7:44)
    * Visualization of KANs: How KANs are visualized as networks (7:44-12:12)
    * Training KANs: How to train a KAN to approximate a function (12:12-15:37)
    * Comparison with MLPs: How KANs compare to traditional Multi-Layer Perceptrons (MLPs) (15:37-20:22)
    * Applications of KANs: Examples of using KANs for symbolic and special function approximation (20:22-29:31)
    * Interpretability of KANs: How KANs can be interpreted to reveal the underlying structure of the function they approximate (29:31-41:26)
    * Discovery with KANs: How KANs can be used to discover new relationships between variables (41:26-47:22)
    * Case study: Recovering scientific results with KANs (47:22-58:12)
    * Open questions and future directions: Discussion on limitations and future research areas for KANs (58:12-1:00:00)
    In conclusion, KANs are a promising new direction in neural network research that leverages the Kolmogorov-Arnold representation theorem to achieve interpretable function approximation. They have the potential to be particularly useful in scientific applications where understanding the relationships between variables is important.
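For reference, the representation the summary paraphrases is usually written as follows: any continuous f on a bounded domain (say [0,1]^n) can be expressed with 2n+1 outer functions and n(2n+1) inner univariate functions,

```latex
% Kolmogorov-Arnold representation of a continuous f : [0,1]^n -> R
f(x_1, \dots, x_n) \;=\; \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
\qquad \Phi_q : \mathbb{R} \to \mathbb{R}, \quad \phi_{q,p} : [0,1] \to \mathbb{R} \ \text{continuous.}
```

A KAN layer generalizes this two-level structure: the inner and outer univariate functions become learnable spline-parametrized edge activations, and such layers can be stacked to arbitrary depth.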

  • @PabloHorneman-rd4cq
    @PabloHorneman-rd4cq 25 days ago +1

    Legend!

  • @taraaryal9609
    @taraaryal9609 24 days ago

    Do you also have an example of solving an ODE using a KAN?

  • @spencerfunk6697
    @spencerfunk6697 15 days ago

    exactly 10% of your subs liked

  • @deliyomgam7382
    @deliyomgam7382 15 days ago +1

    Can KAN be extended to a math transformer?

  • @radosawjasiewicz2494
    @radosawjasiewicz2494 22 days ago

    What about vector functions?

  • @TeeTeeNet
    @TeeTeeNet 20 days ago +1

    Hannes, if you say thank you after a speaker has answered your question, you let them know that you're done. Just saying “yup” is kinda rude.

  • @deliyomgam7382
    @deliyomgam7382 15 days ago

    E.g. π is present in a circle, so KAN is good for producing formulas.

  • @mohammedbenaissa1278
    @mohammedbenaissa1278 25 days ago

    Can we make a CNN with KAN layers?

  • @sunghjung45
    @sunghjung45 21 days ago

    The question at 1:18:43 killed me 🤣

    • @sunghjung45
      @sunghjung45 21 days ago

      czcams.com/video/5p4JEXweboE/video.html

  • @darkhydrastar
    @darkhydrastar 23 days ago

    👏😎

  • @deliyomgam7382
    @deliyomgam7382 16 days ago

    So circle × circle = donut, but to define direction you need trigonometry... e.g. circle × sin 2 or something, or sin circle, or circle sin(x) = donut; invite Homer please... 1 hole, then train to find the holes of knots...

  • @araldjean-charles3924
    @araldjean-charles3924 13 days ago

    Are we talking here about a general representation theory? Are B-splines the only basis set that can be used? What about wavelets, Fourier series, etc.?

  • @deliyomgam7382
    @deliyomgam7382 15 days ago

    An LLM to design a physical language representation... a sphere representing nothing, then twist and stretch it to represent some memories... a cluster of neurons might represent memory, but it is still capable of processing... since audio and video are the same zeros and ones.

  • @elirane85
    @elirane85 24 days ago +16

    God, I wish this entire "AI Boom" had happened when I was in college almost 20 years ago. I would have been able to publish so many papers. Now it's a sigmoid, boom, paper; now it's an exponent, paper; now a spline, paper; what's next, directed graph, paper, fully connected graph, paper. When exactly did the level of research papers start to be like my freshman year homework?

  • @tankieslayer6927
    @tankieslayer6927 24 days ago +3

    Tegmark attention-whoring again and giving a bad name to physicists. This is a completely worthless paper. Learning activation functions isn't a new idea; it's just unnecessary.

    • @choi77770
      @choi77770 20 days ago +3

      You should give reasons for this comment

    • @tudoropran1967
      @tudoropran1967 17 days ago +2

      A statement with no arguments is unscientific.