27. Positive Definite Matrices and Minima

  • Added 3 Aug 2024
  • MIT 18.06 Linear Algebra, Spring 2005
    Instructor: Gilbert Strang
    View the complete course: ocw.mit.edu/18-06S05
    YouTube Playlist: • MIT 18.06 Linear Algeb...
    27. Positive Definite Matrices and Minima
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

Comments • 138

  • @RC-bm9mf
    @RC-bm9mf 4 years ago +62

    This lecture is definitely a positive effect on my grasp of the matrix, and this lecture plays a pivotal role in the whole series. Thank you prof. Strang. Nobody has explained those concepts so clearly and coherently -- a whole new world is ahead of me. This is a must-see. A genuine human heritage.

    • @rolandheinze7182
      @rolandheinze7182 3 years ago +11

      Don't you mean... a positive definite effect? *cymbal crash*

    • @Arycke
      @Arycke 9 months ago +2

      The first two lines are the joke: he said positive definite as "definitely ... positive."
      For those who thought the last comment with the rimshot was explaining the original joke. For completeness' sake, no shade.

  • @rolandheinze7182
    @rolandheinze7182 3 years ago +33

    After viewing the 3blue1brown video on Duality, I am just seeing xT*A*x as the result of applying a linear transformation to a vector x and then projecting that new vector back onto x; if the vectors still point "in the same direction", i.e. the projection is positive, then A is positive definite

    • @dalisabe62
      @dalisabe62 3 years ago +5

      Yes. In fact, there is another way to look at a positive definite matrix: the transformation of a vector x is another vector that lies in the same quadrant as x. The angle between the vector and its transformation should always be acute.

    • @youssefel-mahdy922
      @youssefel-mahdy922 2 years ago

      Thank you!
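
A quick numeric sketch of the idea in this thread: for a positive definite A, the projection of Ax back onto x, i.e. x^T A x, is positive for every nonzero x. The 2x2 matrix below is an illustrative choice (symmetric, eigenvalues 1 and 3), not an example from the lecture.

```python
import random

def quadratic_form(A, x):
    """Compute x^T A x for a 2x2 matrix A and a vector x."""
    Ax = [A[0][0]*x[0] + A[0][1]*x[1],
          A[1][0]*x[0] + A[1][1]*x[1]]
    return x[0]*Ax[0] + x[1]*Ax[1]    # dot product of x with Ax

# illustrative symmetric matrix with eigenvalues 1 and 3 (positive definite)
A = [[2.0, 1.0], [1.0, 2.0]]

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    # x^T A x > 0: Ax never points "away" from x
    assert quadratic_form(A, x) > 0
```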

  • @PedroHernandez-gz4cn
    @PedroHernandez-gz4cn 11 years ago +92

    my calculus 3 professor taught us to think of a saddle point as a point on a "Pringles chip"; a lot of people know exactly what that looks like.

    • @matiasmoanaguerrero8095
      @matiasmoanaguerrero8095 4 years ago +24

      or a saddle ?

    • @DeadPool-jt1ci
      @DeadPool-jt1ci 4 years ago +5

      @@matiasmoanaguerrero8095 lmao

    • @integralboi2900
      @integralboi2900 4 years ago +4

      (S)He’s the smartest person ever.

    • @mississippijohnfahey7175
      @mississippijohnfahey7175 2 years ago +3

      @@matiasmoanaguerrero8095 most American kids have seen way more pringles than they have saddles. Sorry John Wayne

    • @jasoncampbell1464
      @jasoncampbell1464 6 months ago +1

      I think mathematicians should start calling it the Pringles point.
      “The function either curves upwards, downwards, or you have a Pringles point.” Has a nice ring to it

  • @rakolman
    @rakolman 8 years ago +47

    This is maybe the best lecture in the entire course.

    • @thej1091
      @thej1091 8 years ago

      Let's see! Gonna do it! There is also a recent close-up video of Gilbert doing a 20-minute bit on positive definite matrices!

    • @jhabriel
      @jhabriel 7 years ago +3

      I agree with you! What an incredible way to show how different branches of mathematics actually refer to the same thing.

    • @ja-qk4vd
      @ja-qk4vd 6 months ago

      Beautiful how it all comes together.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +15

    I watched this once, taking all notes. Then watched it again (with break) without taking notes. Then I read my notes (after a break). The lecture's quite good! Btw for those that thought the second derivatives thing came out of left field, Du is the directional derivative operator, so you want Du( Du( f(x1,x2,...,xn) ) ), which gives the expressions for x^T*A*x for a connected f and A. This tells us that x^T*A*x is like a second order operation for the derivative of a function when A is the Hessian (matrix of second derivatives) of said function f(). This wasn't clear to me initially so I was kinda lost. Best!
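
The comment above connects the second directional derivative Du(Du f) with the quadratic form x^T A x, where A is the Hessian. A small finite-difference sketch in Python; the function f below is my own illustrative choice (its Hessian is the constant matrix [[2, 3], [3, 10]]), not an example from the lecture.

```python
def f(x, y):
    # illustrative quadratic; Hessian is [[2, 3], [3, 10]] everywhere
    return x*x + 3*x*y + 5*y*y

def second_directional_derivative(u, h=1e-4):
    """Central difference for the second derivative of t -> f(t*u) at t = 0."""
    return (f(h*u[0], h*u[1]) - 2*f(0.0, 0.0) + f(-h*u[0], -h*u[1])) / h**2

def hessian_form(u):
    """u^T A u with A the Hessian of f."""
    A = [[2.0, 3.0], [3.0, 10.0]]
    return (u[0]*(A[0][0]*u[0] + A[0][1]*u[1])
            + u[1]*(A[1][0]*u[0] + A[1][1]*u[1]))

# Du(Du f) agrees with the quadratic form u^T A u in every direction u
for u in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -1.0)]:
    assert abs(second_directional_derivative(u) - hessian_form(u)) < 1e-5
```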

  • @annawilson3824
    @annawilson3824 1 month ago

    31:08 how everything beautifully connects together

  • @iyalovecky
    @iyalovecky 9 years ago +74

    This lecture is especially beautiful..

    • @Nakameguro97
      @Nakameguro97 9 years ago +11

      Positive Definite connecting matrices, algebra, geometry, and calculus: priceless!

    • @sourabhdhere1124
      @sourabhdhere1124 4 years ago +4

      Oh Yeah, It's All Coming Together.

  • @rosh70
    @rosh70 2 years ago +2

    If I had teachers like Gilbert Strang, I likely would've had a Ph.D in Math by now. No kidding! I love this guy! He made me fall in LOVE with Mathematics.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +9

    The funny thing is, I'm also doing a general relativity series and got to covectors. I understand what they are computationally, but couldn't "visualize" them, beyond the typical "stacks". Then I stumbled on the interpretation of covectors as linear maps, which served as a connection to linear algebra. I tried so many things to build a geometric interpretation all night but nothing formed in my head properly, so I was like, "meh, I'm 80% done with Professor Strang's lectures, might as well do another one". He led with x^H*A*x > 0 and for some reason, everything just started to click (x^H*A is an implicit map, so I have ideas about how to analyze things in my other GR class). Funny how his lecture was what I needed to get my head turning again. Thank you, and awesome lecture! ☺️🙌🏽 Stay safe during #COVID19

  • @madsonpena2783
    @madsonpena2783 3 years ago +7

    This lecture is just amazing, what a beautiful thing, all coming together...

  • @georgesadler7830
    @georgesadler7830 3 years ago +1

    From this lecture, I really understand Positive Definite Matrices and Minima thanks to Dr. Gilbert Strang. The examples really help me to fully comprehend this important subject.

  • @straus1482
    @straus1482 8 years ago +15

    Never Seen someone like this.... Amazing!!!!

  • @RC-bm9mf
    @RC-bm9mf 2 years ago

    This lecture is a true masterpiece. I was in awe with the similar feeling of watching suspense thrillers.

  • @jasoncampbell1464
    @jasoncampbell1464 6 months ago

    Finally, lecture 27. Big data, machine learning, blockchain, artificial intelligence, digital manufacturing, big data analysis, quantum communication, and internet of things

  • @bearcharge
    @bearcharge 14 years ago +15

    fancy tie!!!!

  • @SauravKumar-xg4zr
    @SauravKumar-xg4zr 5 years ago +3

    @15:16, "I apologize" ....................... That's how a matchless, renowned and extraordinary person respects everyone, everywhere.
    Love you Prof. Gilbert Strang :)

  • @parkerhyde7514
    @parkerhyde7514 4 years ago +4

    "had better be more than 18" was the correct answer

  • @Eschewy
    @Eschewy 11 years ago +2

    At 41:54, Strang gets the eigenvalues correct from memory. I'm impressed!

  • @dorupanciuc218
    @dorupanciuc218 5 years ago +1

    the explanation starting at 25:10 helped me understand what a positive definite matrix is

  • @brainstormingsharing1309
    @brainstormingsharing1309 3 years ago +1

    Absolutely well done and definitely keep it up!!! 👍👍👍👍👍

  • @yazhouhao7086
    @yazhouhao7086 6 years ago

    Dr. Gilbert Strang is really a master!

  • @kartikarora8877
    @kartikarora8877 3 years ago +1

    It is my dream to meet Prof. Gilbert Strang. His voice, his words, his actions touch my soul. Please, professor, read my comment so that I can be satisfied just by this. And I pray you may live 1000 years.

  • @yazhouhao7086
    @yazhouhao7086 6 years ago +1

    Wow.....how beautiful it is! How beautiful!!!

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 1 year ago

    Love these lectures

  • @Slogan6418
    @Slogan6418 4 years ago

    31:40 In the case of A = [2, 6; 6, 7] for example, when setting z = 0, what we actually get is a cross (an 'X' shape); when z equals other values, we get a hyperbola.
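
The 'X' shape in the comment above is easy to confirm: substituting y = m·x into 2x² + 12xy + 7y² = 0 gives x²(7m² + 12m + 2) = 0, and the positive discriminant yields two real slopes, i.e. two lines through the origin. A short Python check:

```python
import math

# f(x, y) = 2x^2 + 12xy + 7y^2, the saddle example with A = [2 6; 6 7].
# On a line y = m*x, f = x^2 * (7m^2 + 12m + 2), so f vanishes along the
# whole line exactly when m is a root of 7m^2 + 12m + 2 = 0.
disc = 12.0**2 - 4*7.0*2.0           # 88 > 0: two real slopes, hence an 'X'
m1 = (-12.0 + math.sqrt(disc)) / (2*7.0)
m2 = (-12.0 - math.sqrt(disc)) / (2*7.0)

for m in (m1, m2):
    x = 3.0                           # any point on the line y = m*x
    f = 2*x*x + 12*x*(m*x) + 7*(m*x)**2
    assert abs(f) < 1e-9              # the whole line lies in the zero set
```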

  • @rochesterjezini317
    @rochesterjezini317 4 years ago +1

    Very good lecture Professor Strang thank you from Amazon Forest.

  • @shashvatshukla
    @shashvatshukla 4 years ago +1

    My favourite sports team is the Seattle Submatrices

  • @aditiprasad5549
    @aditiprasad5549 2 years ago

    From 1,012,547 views in the first lecture to 203,355 views in 27th...You belong to 20% who made it this far, Congrats!

  • @Ilija_Ilievski
    @Ilija_Ilievski 2 years ago +1

    Like watching someone like Aristotle teach

  • @tiger3023381
    @tiger3023381 10 years ago +3

    thanks prof. now I can imagine what eigenvectors and eigenvalues are like.

  • @ChessMemer69
    @ChessMemer69 2 years ago +1

    Quite possibly the hardest lecture, so far.

  • @DiegoAndrade
    @DiegoAndrade 10 years ago +1

    tremendous explanation thank you for sharing ...

  • @beriteri
    @beriteri 11 years ago +2

    Thanks prof. This is one of the best lectures of the course!

  • @quirkyquester
    @quirkyquester 4 years ago +2

    sorta got the general idea of this whole lecture; I'm still not so good at the details. Reading the book and summary might be a good idea to consolidate the knowledge if needed. Thank you Professor Strang and MIT! Great lecture!

  • @sihanchen1331
    @sihanchen1331 8 years ago

    The last function f(x1,x2,x3)>=0 can be proved by completing squares.

  • @tanya-no4wi
    @tanya-no4wi 3 months ago

    This was amazing!!!!!!!!!!!!!!!!

  • @alijoueizadeh8477
    @alijoueizadeh8477 5 years ago

    Thank you.

  • @carlborgen
    @carlborgen 14 years ago

    damn subtitle, my exam is in a couple of days and I need to watch this seriously, but the subtitle just keeps making me giggle ^^haha

  • @eccesignumrex4482
    @eccesignumrex4482 7 years ago +3

    Saddle Points: sometimes you gotta get up to get down.

  • @jessstuart7495
    @jessstuart7495 1 year ago

    This lecture is a good one to have Matlab or Octave pulled up to look at some of these surfaces.
    x = -10:1:10;
    y = -10:1:10;
    [XX, YY] = meshgrid(x, y);           % grid over the square [-10,10]^2
    Z = 2*XX.^2 + 12*XX.*YY + 7*YY.^2;   % saddle: det([2 6; 6 7]) = -22 < 0
    surf(XX, YY, Z)                      % indefinite form -> saddle surface

  • @darkkalimdor1311
    @darkkalimdor1311 7 years ago +1

    this man just related a rugby ball to the eigenvectors of a matrix. damn!

  • @ravinm100
    @ravinm100 7 years ago

    Thanks!

  • @GiovannaIwishyou
    @GiovannaIwishyou 2 years ago

    The mathematics fills my heart with beauty.

  • @dangidelta
    @dangidelta 4 years ago

    I think my mind just exploded !

  • @dalisabe62
    @dalisabe62 1 year ago

    Intuitively, a positive definite matrix is one that has more stretching weight than rotation weight. If it operates on a vector in the first quadrant, it guarantees that the output vector is also in the first quadrant. This is almost like a closure property under the transformation of a matrix. I need to examine the case when the x vector in Ax is not in the first quadrant. The entire underlying theme is creating something that is very close to an eigenmatrix, with more weight around the diagonal. The importance of eigenmatrices is well known in applications where a matrix power has the same effect as a scalar power function. With big matrices, there is nothing more valuable than decomposing a matrix in terms of simpler matrices, among which the eigenmatrix is one such decomposition. Diagonalizing a matrix and the SVD are closely related to this theme. Any analysis that attempts to transform the effect of a matrix into one that resembles a scalar transformation is the goal.

  • @xiaoweidu4667
    @xiaoweidu4667 4 years ago

    amazing!

  • @dijo1469
    @dijo1469 6 years ago +3

    Dove chocolate is not smooth as silk. This lecture is, indeed.

  • @alkalait
    @alkalait 13 years ago +11

    It's a .... superbowl

  • @iharsh386
    @iharsh386 7 months ago

    prof. nice tie 👍😁

  • @jonahansen
    @jonahansen 5 years ago +1

    Damn he's good at teaching!

  • @MrSyrian123
    @MrSyrian123 5 years ago

    Nice tie prof

  • @camanhbui9655
    @camanhbui9655 8 years ago

    Very nice! :)

  • @adnansomani8218
    @adnansomani8218 13 years ago +4

    you just summarized my 12 hours of studying in 50 mins

  • @martintoilet5887
    @martintoilet5887 4 years ago

    There's one video on Khan Academy multivariable calculus talking about this; I didn't quite get it. Now after his explanation, I finally know what it is. XD

  • @sameersharma4038
    @sameersharma4038 10 months ago

    awesome

  • @starriet
    @starriet 1 year ago

    Question) If we complete the squares of the eq. (at 44:53), I don't think the squares get multiplied by the three pivots(2, 3/2, 4/3)... Can anyone tell me why?
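
Assuming the equation at 44:53 uses the lecture's matrix A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]], the squares do get multiplied by the pivots 2, 3/2, 4/3; the terms inside each square carry the elimination multipliers. A numeric Python check of the identity:

```python
import random

def form(x1, x2, x3):
    # x^T A x for A = [[2,-1,0], [-1,2,-1], [0,-1,2]]
    return 2*x1*x1 + 2*x2*x2 + 2*x3*x3 - 2*x1*x2 - 2*x2*x3

def pivot_squares(x1, x2, x3):
    # pivots 2, 3/2, 4/3 times squares built from the elimination multipliers
    return (2.0 * (x1 - x2/2)**2
            + 1.5 * (x2 - (2/3)*x3)**2
            + (4/3) * x3**2)

random.seed(1)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(3)]
    assert abs(form(*x) - pivot_squares(*x)) < 1e-9
```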

  • @user-wo3km2qe9d
    @user-wo3km2qe9d 1 year ago

    I believe the graph of 2x^{2}+12xy+20y^{2}=0 does not exist (the form is positive definite, so the only real solution is the origin)

  • @jasio83
    @jasio83 14 years ago

    Is that video available or not? I'm following the course on you tube but I can't visualize this particular lecture (n° 27).

  • @imegatrone
    @imegatrone 12 years ago

    I Really Like The Video Positive Definite Matrices and Minima From Your

  • @mohammedtarek9544
    @mohammedtarek9544 3 years ago

    that tie looks cool tho

  • @vslaykovsky
    @vslaykovsky 2 years ago

    47:25 drawing an eye 101

  • @lucasm4299
    @lucasm4299 6 years ago +2

    ?? I understood everything except my pivots are 2, 3, 4 :(
    I love that last bit. Kind of like SVD.

    • @ortollj4591
      @ortollj4591 5 years ago +1

      maybe this (with Sagemath) could help you ?
      sagecell.sagemath.org/?q=dfqzji

    • @aaronpaulhughes
      @aaronpaulhughes 4 years ago

      The Pivots ARE 2,3,4. He just took it one step further by dividing each row by the previous pivot. So the last row was [0,0,4] and he divided by pivot 3 to get [0,0,4/3]. He then took the 2nd row which was [0,3,-2] and divided by pivot 2 to get [0,3/2,-1]. If you take the product of the pivots 2,3,4 you get 24 which is the determinant of this new matrix. But by dividing by the previous pivot the product of the pivots [2, 3/2, 4/3]=4, which was the determinant of the original matrix.
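
The division-by-the-previous-pivot step described in the reply above can be checked with plain elimination. Assuming the lecture's matrix A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]], ordinary row reduction yields the pivots 2, 3/2, 4/3 directly, and their product is det A = 4. A Python sketch:

```python
def pivots(A):
    """Pivots of A via Gaussian elimination without row exchanges."""
    U = [row[:] for row in A]          # work on a copy
    n = len(U)
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]      # elimination multiplier
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return [U[k][k] for k in range(n)]

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
p = pivots(A)
assert all(abs(a - b) < 1e-9 for a, b in zip(p, [2.0, 1.5, 4/3]))
assert abs(p[0] * p[1] * p[2] - 4.0) < 1e-9   # product of pivots = det(A)
```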

  • @feelgood9570
    @feelgood9570 3 years ago

    35:00
    recap

  • @slatz20
    @slatz20 13 years ago +1

    Ellipsoid :D
    Its like the UFO from the Movie Independence Day :D

  • @lobisw
    @lobisw 8 years ago +16

    "What's the long word for bowl?"

    • @krille0o
      @krille0o 8 years ago +8

      +Lobezno Meneses elliptic paraboloid z=(x/a)^2 + (y/b)^2

    • @usmanhassan1887
      @usmanhassan1887 6 years ago

      It should be a function of three variables, not two variables.

    • @dangernoodle2868
      @dangernoodle2868 5 years ago +2

      I would have just called it a hyper bowl.

    • @marcinkovalevskij5820
      @marcinkovalevskij5820 5 years ago

      @@usmanhassan1887 f(x_1,x_2,x_3)

    • @usmanhassan1887
      @usmanhassan1887 5 years ago

      Yes, it is, the long word for bowl is a function of 3 variables. Thank you.. @@marcinkovalevskij5820

  • @azaz868azaz5
    @azaz868azaz5 5 months ago

    It seems that in mathematics everything at some point connects

  • @divykala169
    @divykala169 3 years ago

    Is a hyperbola in high dimensional spaces called a hyperhyperbola?

  • @theflaggeddragon9472
    @theflaggeddragon9472 8 years ago

    This max and min method makes intuitive sense but does anyone have a proof showing that if the gradient is zero and the matrix is positive definite then the function is a local minimum?

    • @schrodingershat3063
      @schrodingershat3063 8 years ago +3

      First of all, in order for a function to have a minimum, the gradient has to be zero. If this were not the case, there would be a direction next to the point in question where the function decreases - the direction opposite to the gradient - and this would contradict the fact that it is a minimum.
      If one takes an arbitrary function and carries out a Taylor expansion around the minimum, all terms of order higher than two can be neglected if we are close enough to the minimum (and we just need to know the function values right next to the minimum point to know that it is a minimum). Since the gradient is zero, all first-order derivatives are zero at the minimum point, and then we just have the constant term - the function value at the minimum - and the second order derivatives. The Taylor approximation around the minimum can thus be written as f(x_0 + x) ≈ f(x_0) + 1/2 (x^T A x), where A is the matrix of second derivatives (the Hessian) and x_0 is the minimum.
      Thus, if x^T A x > 0 for all x - i.e., the matrix A is positive definite - this is in particular valid for small x-values (such that x_0 + x is close to x_0), where the Taylor approximation is a good approximation of f(x_0 + x). Thus, we can conclude that in a neighbourhood of the point x_0 (i.e. if we are close enough so that the Taylor approximation is good), all other points give function values such that f(x_0 + x) > f(x_0), and thus x_0 is a local minimum, since the function has higher values for all points in a neighbourhood around this point.
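
The Taylor argument above can be illustrated numerically: pick an f whose gradient vanishes at the origin and whose Hessian there is positive definite, then sample nearby points and check they all give larger values. The function below is an illustrative choice of mine, not an example from the lecture.

```python
import math
import random

def f(x, y):
    # gradient is zero at (0,0); Hessian there is [[2,1],[1,2]], positive
    # definite (the cubic term does not affect the Hessian at the origin)
    return x*x + x*y + y*y + x**3 * y

random.seed(2)
for _ in range(2000):
    r = random.uniform(1e-6, 0.1)      # stay in a small neighbourhood
    t = random.uniform(0.0, 2*math.pi)
    x, y = r*math.cos(t), r*math.sin(t)
    assert f(x, y) > f(0.0, 0.0)       # strict local minimum at the origin
```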

  • @jiaqigan6398
    @jiaqigan6398 4 years ago

    Thank you Prof.Strang 🇨🇳

  • @Shauracool123
    @Shauracool123 3 years ago

    How would we define negative definite? Would it have everything negative, like:
    1) all pivots negative
    2) all eigenvalues negative?

  • @hamiltonianmarkovchainmc

    It's real algebraist hours, my dudes.

  • @mohammedtarek9544
    @mohammedtarek9544 3 years ago +1

    am i the only one watching the lectures in 1.5 or 1.25 playback speed?

  • @user-kx9ym3dk2e
    @user-kx9ym3dk2e 7 years ago

    Can someone please explain why the condition for pivot test is (ac-b^2)/a > 0? If I just do the elimination to figure out the pivot, shouldn't the second pivot just be (c-b^2)/a?

    • @pavlenikacevic4976
      @pavlenikacevic4976 6 years ago +2

      in order to make b (in the 2,1 position) 0, you have to multiply the first row by -b/a and add it to the second. When you do that with b (in the 1,2 position) and add it to c, you get -b²/a + c, which is the same as (ac-b²)/a

    • @douglasespindola5185
      @douglasespindola5185 2 years ago

      @@pavlenikacevic4976 I know this is 4 years old, but I'd like to thank you, Mr. Pavle. I'm curious: how did you get this insight? Greetings from Brazil!

    • @pavlenikacevic4976
      @pavlenikacevic4976 2 years ago

      @@douglasespindola5185 you're welcome. I don't remember where that is in the video, and I've also been out of maths for some time, so I cannot provide you any useful information at this point 😅

    • @douglasespindola5185
      @douglasespindola5185 2 years ago

      @@pavlenikacevic4976 thanks anyway. I'm studying for a job selection that will apply an exam about data science content. It'll be helpful. Why are you out of maths? Seems to me that you were a nice student.

    • @pavlenikacevic4976
      @pavlenikacevic4976 2 years ago

      @@douglasespindola5185 now I'm doing research in quantum chemistry. Doesn't really require math on a day to day basis. I needed math during my studies, to help me understand physics better
      Good luck with studying for the exam!
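
The elimination step described earlier in this thread, written out in Python. The two example matrices are the ones mentioned in these comments: the positive definite [2 6; 6 20] and the saddle [2 6; 6 7].

```python
def second_pivot(a, b, c):
    """Second pivot of [[a, b], [b, c]] after clearing the 2,1 entry."""
    m = b / a                  # multiplier: subtract m * (row 1) from row 2
    return c - m * b           # = c - b^2/a = (a*c - b^2)/a

# positive definite example: second pivot (2*20 - 36)/2 = 2 > 0
assert second_pivot(2.0, 6.0, 20.0) == 2.0
# saddle example: second pivot (2*7 - 36)/2 = -11 < 0
assert second_pivot(2.0, 6.0, 7.0) == -11.0
```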

  • @quirkyquester
    @quirkyquester 4 years ago

    could someone tell me 6:48 why is this true? how can you calculate lambda with trace? Thank youooouuu!

    • @quirkyquester
      @quirkyquester 4 years ago

      The trace of a matrix is the sum of its (complex) eigenvalues, and it is invariant with respect to a change of basis.
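
For the 2x2 case asked about above, the trace and determinant pin the eigenvalues down completely: they are the roots of λ² − (trace)λ + det = 0. A Python sketch using the saddle matrix [2 6; 6 7] from this lecture:

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from its trace and determinant."""
    tr, det = a + d, a*d - b*c
    disc = math.sqrt(tr*tr - 4*det)   # real whenever the matrix is symmetric
    return (tr + disc) / 2, (tr - disc) / 2

l1, l2 = eigenvalues_2x2(2.0, 6.0, 6.0, 7.0)
assert l1 + l2 == 9.0        # sum of eigenvalues = trace
assert l1 * l2 == -22.0      # product of eigenvalues = determinant
assert l1 > 0 > l2           # one of each sign: a saddle
```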

  • @gowrithampi9751
    @gowrithampi9751 4 years ago

    A rugby ball has two out of three eigenvalues the same.

  • @FirstNameLastName-gf3dy

    Hope we become neighbors in heaven Mr Strang.

  • @adrianmh
    @adrianmh 4 years ago

    We're down from 1.2mill views on the first video :D

  • @cgu001
    @cgu001 15 years ago

    man that was tragic...

  • @des6309
    @des6309 4 years ago

    Strang is so funny

  • @cooking60210
    @cooking60210 6 months ago

    @2:50 Seattle submatrices lol

  • @papapap2
    @papapap2 13 years ago

    how about calling it an egg...

  • @ArslanAlihanafi
    @ArslanAlihanafi 13 years ago

    Where is lecture no 28?

  • @eglintonflats
    @eglintonflats 3 years ago

    Trump learned from Him how to wear a tie.

  • @melissaallinp.e.5209
    @melissaallinp.e.5209 4 years ago

    What's a long word for a bowl? lol

  • @PhuongNam-ys6ru
    @PhuongNam-ys6ru 10 months ago

    .

  • @yifuliu547
    @yifuliu547 7 years ago

    autodidact

  • @lewlafanz6932
    @lewlafanz6932 1 year ago

    Probably not the best instructor. There are a lot of instructors on YouTube who are much, much better teachers than he is. Imagine the students at MIT having to go through this.

  • @tchappyha4034
    @tchappyha4034 5 years ago

    Please prove the facts.

  • @seventyfive7597
    @seventyfive7597 7 years ago +8

    The absolute lack of rigor and the hand-waviness of the lesson makes this video a popular science segment rather than a math lesson. A nice popular science segment, but I certainly hope MIT students of Physics and engineers are directed to take something with a higher level, significantly so. Physics undergrads don't need total rigor, but such a total lack of it doesn't develop mathematical thinking.

    • @jean-fredericfontaine2695
      @jean-fredericfontaine2695 7 years ago +21

      there is a 300+ pages book that come with that course (Strang wrote it) used in the majority of top level schools. I suggest you buy it and/or gtfo.

    • @bboysil
      @bboysil 6 years ago +16

      He struggles to get the idea across, the intuition, which he does so masterfully. Very few teachers have this talent.... most math teachers are too pedantic and they lose their students along that way.

    • @hcgaron
      @hcgaron 6 years ago +1

      You can only write so many proofs in 50 minutes. The proofs are very well laid out in the book. It's a good read (and a required read for students of the class)

    • @hcgaron
      @hcgaron 6 years ago

      I have 4th edition as well, but I believe this was with 2nd edition. I can't recall but I believe I saw that on the OCW course page.

    • @DrTymish
      @DrTymish 6 years ago +1

      This is intended to be an applied Linear Algebra class at MIT, for a more rigorous course on Linear Algebra (with proofs) Axler's or Friedberg are the best out there for the undergraduate level.

  • @Mimi54166
    @Mimi54166 4 years ago

    35:15