Intuition and Examples for Lagrange Multipliers (Animated)

  • Uploaded 13. 07. 2024
  • We discuss the idea behind Lagrange Multipliers, why they work, as well as why and when they are useful.
    External Images Used:
    1. www.greenbelly.co/pages/conto...
    2. mathoverflow.net/questions/19...
    Further Reading:
    1. The Variational Principles of Mechanics, Cornelius Lanczos, 2.1-2.6
    2. www.math.toronto.edu/courses/m...
    3. abel.math.harvard.edu/~knill/t...
    Timestamps:
    0:00 - Introduction
    0:18 - Free Optimization in 1 Dimension
    1:30 - Constrained Optimization
    2:25 - Auxiliary Equations
    3:05 - Solving with Substitution
    4:25 - Where Substitutions break down
    7:17 - Lagrange Multipliers
    9:23 - Types of Extrema
    12:04 - Lagrange Multiplier Example
    13:42 - Final Notes
  • Entertainment

Comments • 62

  • @poo2uhaha
    @poo2uhaha 2 years ago +52

    Nice video - I appreciate the 3B1B-esque animations. This video was shared on an oxford university undergraduate physics group chat so you are helping a lot of people!

    • @casualscience
      @casualscience  2 years ago +11

      Hey thanks so much, it really makes me motivated to make more videos knowing that I'm helping students. I too was once a physics undergrad!

  • @Kaepsele337
    @Kaepsele337 2 years ago +15

    Nice! I've convinced myself before that Lagrange multipliers work, but only at a symbolic level. The intuition that it just means that the gradients of f and g align was new to me, and gives me a much better understanding of why it works :)
    And should I ever forget how to do it, this will allow me to rederive it quickly, which is always useful. So, thanks! I'll check out your other videos.

  • @fabriai
    @fabriai 1 year ago +1

    Very nice video, thanks a lot. I was struggling to wrap my head around plugging L into the gradient condition. Your explanation is easy to follow and sensible. Brilliant!

  • @HuLi-iota
    @HuLi-iota 4 months ago

    Really helps. Your video not only showed me that it works, but also why it works.

  • @algorithminc.8850
    @algorithminc.8850 1 month ago

    Thanks. Great video. I look forward to scoping your other videos. Subscribed. Cheers

  • @mauritzwiechmamnn7366
    @mauritzwiechmamnn7366 2 months ago

    Thank you, a big help for my bachelors thesis!

  • @cauetrindade1181
    @cauetrindade1181 10 days ago

    amazing video

  • @gijsjespers4868
    @gijsjespers4868 2 years ago

    beautiful video, thank you !

  • @monsieur910
    @monsieur910 2 years ago +2

    Wow, this is a great video! I remember studying this in undergrad (engineering), and when I was writing my PhD (physics) I decided to optimize numerically because I didn't want to be questioned about the method.

    • @casualscience
      @casualscience  2 years ago +1

      Hahaha, I know that feeling man, have to pick your battles! Thank you for the kind words!!

  • @youssefabsi6296
    @youssefabsi6296 1 year ago +1

    Thank you a lot. I wish you had a series solely on optimization.

  • @gabrielpus-perchaud9063
    @gabrielpus-perchaud9063 1 year ago +1

    Thank you, it is very useful

  • @tranhoanglong2000
    @tranhoanglong2000 2 years ago

    This really helped me a lot, thank you for your explanation. I don't know why it doesn't have more views by now; maybe because the video is still new. Great work ✨🇻🇳❤

    • @casualscience
      @casualscience  2 years ago +1

      Thanks for the comment Trần, really makes the work feel worthwhile to know I'm helping people learn!!

  • @kashu7691
    @kashu7691 2 years ago

    this was perfect. thanks for making this

    • @casualscience
      @casualscience  2 years ago

      Thank you so much! I appreciate the kind words, and that you took the time to comment; it means a lot to me!

  • @olivierbegassat851
    @olivierbegassat851 1 year ago

    very nice explanation : )

  • @1Fortnite1
    @1Fortnite1 1 year ago

    phenomenal explanation!

  • @hannahnelson4569
    @hannahnelson4569 1 year ago

    Thank you! This helped me understand what the Lagrange equation means!

  • @Dhruvbala
    @Dhruvbala 5 months ago +1

    Brilliant insight! I wish you hadn't glossed over the part at 9:40 so quickly -- as that's kind of the crux of the entire video. I appreciate the thought you put into the explanation, though

  • @orandanon
    @orandanon 1 year ago +1

    Nice video.
    A few remarks:
    1. The Lagrange multiplier method gives a necessary condition for extrema, not a sufficient one. Indeed, one can construct examples where just running the algorithm yields a wrong answer; see the sketch below. One way around this is to show that the domain is compact (i.e., closed and bounded), so that by Weierstrass the function attains a minimum and a maximum on the domain, and then compare the candidates.
    2. In the final notes section, the generalization to several constraints requires that the gradients of g_1, ..., g_k are linearly independent.
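
    A minimal sketch of remark 1 in Python/SymPy (the objective and constraint here are chosen only for illustration, not taken from the video): the multiplier equations produce stationary candidates, and you still have to classify them, e.g. by comparing f at the candidates on the compact constraint set.

        # Sketch: the Lagrange conditions are necessary, not sufficient.
        # Illustration: extremize f = x*y on the unit circle x^2 + y^2 = 1.
        import sympy as sp

        x, y, lam = sp.symbols('x y lam', real=True)
        f = x * y
        g = x**2 + y**2 - 1                      # constraint g = 0 (the unit circle, compact)

        # Stationarity: grad f = lam * grad g, plus the constraint itself.
        eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
               sp.diff(f, y) - lam * sp.diff(g, y),
               g]
        candidates = sp.solve(eqs, [x, y, lam], dict=True)

        # Each solution is only a candidate; classify by comparing f on the circle.
        values = [f.subs(s) for s in candidates]
        print(sorted(values))                    # [-1/2, -1/2, 1/2, 1/2]: two minima and two maxima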

  • @maxyazhbin826
    @maxyazhbin826 2 years ago +1

    I like your content; it is education, not entertainment with a bunch of music.

    • @casualscience
      @casualscience  2 years ago

      Thank you max!

    • @theairaccumulator7144
      @theairaccumulator7144 2 years ago

      Exactly! 3blue1brown is possibly the least educative math channel. Why show pretty graphics without showing the math used to make them?

  • @francoparnetti
    @francoparnetti 2 years ago +1

    I always wondered how to deduce the Lagrange function. Is there a way to prove (in an "elegant" way) that the function does what it does? Or did Lagrange just say "this just works and that's it"?

    • @casualscience
      @casualscience  2 years ago +2

      Hi Franco, I give a short proof in chapter: 9:23 - Types of Extrema. I also linked a short history in the description that might give some context: abel.math.harvard.edu/~knill/teaching/summer2014/exhibits/lagrange/genesis_lagrangemultpliers.pdf
      But I'll say it's pretty hard to know exactly what the old mathematicians were thinking when they came up with ideas; the culture around mathematics in the past was much more closed off. I will say, however, that Lagrange was an absolute master of finding ways to solve math problems by introducing a function whose derivatives give the solution. He was considered the best mathematician of his time, holding the chair of the Prussian Academy after Euler. The Lagrange multiplier is only one example of these "Lagrange functions". Another famous example is the Lagrangian from classical mechanics: en.wikipedia.org/wiki/Lagrangian_mechanics
      Unfortunately that's the edge of my knowledge on the history/development. Lagrange's work is all in French, and I remember having difficulty finding English translations in grad school. I agree there is still a small leap there that doesn't flow naturally, perhaps that is simply Lagrange's brilliance... or perhaps a better historian will come along and have some more to say on the subject.
      Thanks for the comment!

    • @francoparnetti
      @francoparnetti 2 years ago

      @@casualscience Thank you!
      I asked because I tried to find a proof by integrating, but I am not really sure if that's ok. I kind of hoped there was a smarter way to prove it. Also I'm not sure if I fully understand the paper, but thanks anyway!

  • @EricBrunoTV
    @EricBrunoTV 2 years ago

    Please, can you tell me the names of the software you used to make this video? Thank you

  • @aashsyed1277
    @aashsyed1277 2 years ago +1

    How about maximizing a multivariable function with the constraint x²+y²< 1 ??

    • @casualscience
      @casualscience  2 years ago +2

      This would be similar to the situation at 1:44: here you are optimizing inside a disk (in two dimensions that disk has positive area). You would do a free optimization, then manually check which solutions lie within the unit circle. You might also have a maximum on the boundary, so you'd want to also include any solutions that you get from using a Lagrange multiplier with g(x,y) = x² + y² - 1
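
      A minimal sketch of that recipe in Python/SymPy, with a made-up objective f chosen only for illustration (not from the video or the question): combine the interior critical points with the boundary candidates from the multiplier, then compare f at all of them.

        # Optimize over the closed disk x^2 + y^2 <= 1:
        # interior critical points (free optimization) + boundary candidates (Lagrange multiplier).
        import sympy as sp

        x, y, lam = sp.symbols('x y lam', real=True)
        f = x**2 + y**2 - x                      # hypothetical objective, just for illustration
        g = x**2 + y**2 - 1                      # boundary of the disk: g = 0

        # 1) interior: grad f = 0, keep only solutions strictly inside the disk
        interior = [s for s in sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
                    if s[x]**2 + s[y]**2 < 1]

        # 2) boundary: grad f = lam * grad g together with g = 0
        boundary = sp.solve([sp.diff(f, x) - lam * sp.diff(g, x),
                             sp.diff(f, y) - lam * sp.diff(g, y),
                             g], [x, y, lam], dict=True)

        # compare f at all candidates
        for s in interior + boundary:
            print({x: s[x], y: s[y]}, '->', f.subs({x: s[x], y: s[y]}))
        # interior (1/2, 0) gives the minimum f = -1/4; boundary (-1, 0) gives the maximum f = 2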

    • @SuperMrMuh
      @SuperMrMuh 2 years ago

      You might want to check out the Karush-Kuhn-Tucker conditions, which generalize Lagrange multipliers to inequality constraints: en.m.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions

  • @ranam
    @ranam 2 years ago

    My question may be strange, but I have no one else to ask: can you tell me a Lagrange algorithm to find a minimum arbitrary volume within another volume which can contain it, by maximizing it inside or minimizing it outside? 🙏🙏🙏

    • @casualscience
      @casualscience  2 years ago

      Hi, sorry, I'm not sure I understand the question. Are you asking how to maximize the volume of some shape given that it has to fit within another shape? I don't believe that problem has a simple solution. Also, I would take a look at math.stackexchange.com; that's a good place to post these kinds of questions.

    • @ranam
      @ranam 2 years ago

      @@casualscience Yes, for any arbitrary volume inside another volume, it should tell whether it could fit or be maximized.

  • @hosz5499
    @hosz5499 2 years ago

    nice geometric interpretation of extra dimensions! We live in a 5D space with a constraint x5=0.

    • @casualscience
      @casualscience  2 years ago +3

      My plane of existence is actually x5=42069

    • @hosz5499
      @hosz5499 2 years ago

      @@casualscience Seriously, what's the meaning of the amplitude lambda in the extra dimension? Its thickness or rigidity, e.g.?

    • @casualscience
      @casualscience  2 years ago

      ​@@hosz5499 In Lagrangian mechanics in physics, it has the interpretation as a type of force multiplier for the constraint. physics.stackexchange.com/questions/47651/how-are-constraint-forces-represented-in-lagrangian-mechanics
      In general, I think all you can say is that at the zeros it's the ratio of gradients.

    • @hosz5499
      @hosz5499 2 years ago

      @@casualscience I think it means the largest eigenvalue, if f is locally a quadratic in (x,y) (near the zero). So local eigenvalues such that the f and g gradients align, or the rescaled contours coincide.

    • @hosz5499
      @hosz5499 2 years ago

      See czcams.com/video/6oZT72-nnyI/video.html

  • @yt-1161
    @yt-1161 2 years ago +1

    @2:57 n-1 degrees of freedom or 1 degree of freedom?

    • @casualscience
      @casualscience  2 years ago +2

      Hi, N-1 is correct there. You have one equation which relates a single variable to the N-1 other ones, so once you fix those N-1 numbers you immediately know the Nth, giving you N-1 DOFs. I do have a mistake at 2:48 though, I should say "choose the N-1 parameters", not "choose one of the ..." That might be the source of your confusion.
      thanks for the comment
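
      A tiny sketch of that count in Python/SymPy (the constraint here is chosen arbitrarily, just for illustration): one equation among N = 3 variables lets you solve for one of them, leaving N-1 = 2 free parameters.

        # One constraint g = 0 among N = 3 variables leaves N-1 = 2 degrees of freedom.
        import sympy as sp

        x, y, z = sp.symbols('x y z', real=True)
        g = x + 2*y + 3*z - 6                    # arbitrary example constraint, g = 0

        z_of_xy = sp.solve(g, z)[0]              # z = 2 - x/3 - 2*y/3
        print(z_of_xy)                           # fixing the N-1 parameters x, y determines z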

  • @griffinbur1118
    @griffinbur1118 8 months ago

    Is there a mistake or two around 7:30? I follow most of this, but “there’s a constrained optimization in n dimensions which is the same as a free optimization problem in n-1 dimensions” seems to match my understanding more than what’s said. But I might be missing something.

    • @griffinbur1118
      @griffinbur1118 8 months ago +1

      Oh, OK, I was just listening and not watching. Yes, I’m right on one point (your slides show the dimension difference as being between n and n+1, not n+1 and n-1).

    • @casualscience
      @casualscience  8 months ago +1

      Yes, sorry, the text is correct; I misspoke. I should say N+1 and N. Nice catch!

    • @griffinbur1118
      @griffinbur1118 8 months ago

      @@casualscience Overall, an excellent video! (I think my other point is just empty semantics: in some sense, the lower-dimensional optimization problem is "unconstrained" once the constraint is set, but relative to some hypothetical higher-dimensional problem, it is "constrained"...it seems like both capture the same mathematical meaning).

  • @sfglim5341
    @sfglim5341 2 years ago +1

    Not related but the thumbnail looks like There Existed an Addiction to Blood by Clipping

    • @casualscience
      @casualscience  2 years ago

      haha, well it's the gradient field of circles centered at the origin, so I feel like it existed before Clipping.

  • @muuubiee
    @muuubiee 2 years ago +2

    Eh... I had some slight understanding of Lagrange multipliers (do note, though, that gradients are NOT covered in the suggested course literature; not sure what they were thinking), but none of this really made sense.
    If you want to explain something, you have to explain it at a level lower than the current one. If a person understands concepts like gradients and constraints, they'll probably have no problem understanding Lagrange multipliers. Therefore there's no point in assuming that the people watching understand gradients and constraints. Your video is essentially trying to teach Lagrange multipliers to people who already understand Lagrange multipliers (and may be revisiting this to understand a concept that builds on them).

  • @sridharbajpai2196
    @sridharbajpai2196 1 year ago

    Why can't we go in the direction of grad G? Please explain.

    • @casualscience
      @casualscience  1 year ago

      Hi, this is covered from 7:20-9:40. It's because you need G to have a fixed value (normally 0); where grad(G) is nonzero, moving along it means moving in a direction where G changes, e.g. it goes from 0 to not 0. Since we only want to explore the space where G is 0, we have to move only in directions perpendicular to grad(G), i.e. along directions where G is not changing.
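
      A quick numerical illustration of that reply (the example G and the point are chosen arbitrarily): a small step along grad(G) changes G at first order, while a step perpendicular to grad(G) leaves G unchanged at first order.

        # Stepping along grad(G) leaves the constraint set G = 0;
        # stepping perpendicular to grad(G) keeps G ~ 0 to first order.
        import numpy as np

        def G(p):                                # example constraint G(x, y) = x^2 + y^2 - 1
            x, y = p
            return x**2 + y**2 - 1

        def grad_G(p):
            x, y = p
            return np.array([2.0 * x, 2.0 * y])

        p = np.array([0.6, 0.8])                 # a point on the constraint set, G(p) = 0
        n = grad_G(p) / np.linalg.norm(grad_G(p))
        t = np.array([-n[1], n[0]])              # unit direction perpendicular to grad(G)
        eps = 1e-4

        print(G(p + eps * n))                    # ~ 2e-4: G changes at first order in eps
        print(G(p + eps * t))                    # ~ 1e-8: G changes only at second order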

  • @bocckoka
    @bocckoka 1 year ago

    Worst thing is that I once did, and I no longer do.

    • @casualscience
      @casualscience  1 year ago

      That doesn't bode well for the quality of my video 😨

  • @user-hf5nz3pe9d
    @user-hf5nz3pe9d 1 year ago

    One of the most difficult to understand on this topic. Could have used simpler words.