Eigenvalues and Eigenvectors of an awkward 3x3 matrix - Full explanation

  • Uploaded 10. 09. 2024
  • Full method for dealing with such an awkward 3x3 matrix to find its eigenvalues and eigenvectors!
    Production equipment
    🛒 Samsung Tablet: amzn.to/3vAIdol (Record and Edit)
    🛒 Portable Keyboard1: amzn.to/3Oaq9Ky (Bluetooth & Tablet)
    🛒 Main Keyboard2: amzn.to/3rJA8iB (Laptop)
    🛒 Current Calculator: amzn.to/44EMmYm
    🛒 Reliable File Storage SSD: amzn.to/43NxfL2
    🛒 Current Amazing Microphone: amzn.to/3q0XOhJ
    These are Amazon affiliate links; I get a small commission at no cost to you, which helps the channel out a lot. Thank you!
    ✅ Amazon Prime FREE TRIAL:
    www.amazon.co....
    ✅ Amazon Prime Student FREE TRIAL:
    www.amazon.co....
    ✅ Amazon Audible FREE TRIAL:
    www.amazon.co....
    🪅 INSTAGRAM: / teaformulamaths
    🪅 TIKTOK: www.tiktok.com...
    🪅 PATREON: patreon.com/TeaFormula
    All tiers get free support and resources. Tier 2+ have amazing free gifts!
    #maths #eigenvalues #Eigenvectors #furthermaths #matrices #Alevel

Comments • 23

  • @somebody7270
    @somebody7270 6 months ago +1

    Thanks!

  • @wikdipr2944
    @wikdipr2944 9 months ago +2

    Why did you use the first row instead of the first column to find the determinant?

    • @teaformulamaths
      @teaformulamaths  9 months ago +1

      Purely convention; either can be used. The key is to apply the chosen expansion consistently, with the correct cofactor signs.
      Pick the row or column with the most zeroes to save work, but it's all the same result either way - see the sketch below.
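
      A minimal sketch of this in Python/NumPy (illustrative, not from the video; the example matrix is the 3x3 quoted in a comment further down, but any square matrix works):

          import numpy as np

          def cofactor_det(A, axis="row", index=0):
              """Determinant by cofactor (Laplace) expansion along a chosen
              row or column; purely illustrative, np.linalg.det is faster."""
              A = np.asarray(A, dtype=float)
              n = A.shape[0]
              if n == 1:
                  return A[0, 0]
              total = 0.0
              for k in range(n):
                  # (i, j) walks along the chosen row or column
                  i, j = (index, k) if axis == "row" else (k, index)
                  minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                  total += (-1) ** (i + j) * A[i, j] * cofactor_det(minor)
              return total

          A = [[2, 0, 0], [1, 2, -4], [-3, 1, -3]]
          print(cofactor_det(A, axis="row"))     # expand along the first row
          print(cofactor_det(A, axis="column"))  # expand along the first column
          print(np.linalg.det(A))                # all three agree (-4 here)

      The first row of this particular matrix has two zeroes, so expanding along it is the least work.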

    • @teaformulamaths
      @teaformulamaths  7 months ago

      @GSSJG Yes, I understood their point; my answer humbly addresses this 🙏 Let's all hope for 0s in our examination matrices 😄👍

  • @Jack_Saint_Archive
    @Jack_Saint_Archive 7 months ago +1

    This is a great video, but I would prefer it if there weren't any background music. The music is too loud. (I am using headphones.)

    • @teaformulamaths
      @teaformulamaths  7 months ago

      It was actually set to just 4% in the editor 😅

  • @GingerMathematician
    @GingerMathematician a year ago +2

    Hi there from a fellow Maths YouTuber 😊

  • @davidmurphy563
    @davidmurphy563 8 months ago

    For an attention neural network I'd like to build a matrix with an eigenvector positioned between any two vectors. It's so I can change the meaning of words depending on context, but my maths is weak.
    How do you solve for an eigenvector which is v2-v1? Basically I want to do the standard EV calc you do in Rn, but backwards: start with an EV and produce a matrix.

    • @teaformulamaths
      @teaformulamaths  8 months ago

      I think I may be following what you are asking - if so, my initial response is: if you only have an eigenvector and not its corresponding eigenvalue, the matrix is not uniquely determined (quick illustration below).
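
      A tiny Python/NumPy illustration of that non-uniqueness (all values below are arbitrary, hypothetical choices): every diagonal matrix diag(a, b) has {0, 1} as an eigenvector, and even fixing the eigenvalue still leaves the other entries free.

          import numpy as np

          # diag(a, b) always has v = {0, 1} as an eigenvector with
          # eigenvalue b; a is completely free, so many different
          # matrices share the same eigenvector (and even eigenpair).
          v = np.array([0.0, 1.0])
          for a, b in [(1.0, 1.0), (5.0, 1.0), (2.0, -7.0)]:
              A = np.diag([a, b])
              print(A @ v, "= b * v with b =", b, "regardless of a =", a)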

    • @davidmurphy563
      @davidmurphy563 8 months ago

      @@teaformulamaths Well yes, you clearly need both the vector and the lambda scalar. But like you do with one of the components of the eigenvector - selecting one from the subspace - surely you can just pick an arbitrary eigenvalue to go with your eigenvector?
      What is the space for eigen matrices? Surely for every eigenvector and eigenvalue pair there exists a corresponding unique matrix which can be found?
      I just want to work backwards, naturally. This should be a two-way operation, right? Are there constraints?
      You get what I'm asking? For eigenvector {0, 1} and eigenvalue 1, what is the matrix?

    • @teaformulamaths
      @teaformulamaths  8 months ago

      @@davidmurphy563 I will humbly answer as best I can, though not from neural-network knowledge/application! Firstly, a fair few matrices have multiple eigenvalues/eigenvectors - are you limited to using those with just one? For matrices with several, the process is very complicated if you only know one of them. When you say "eigen matrices", can you define what you specifically mean? There are a few possible meanings there. You can find the matrix A using the spectral decomposition formula A = PΛP^(-1), where P is a matrix whose columns are the eigenvectors and Λ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues - see the sketch below.
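
      A minimal sketch of that reconstruction in Python/NumPy, using the {0, 1} with eigenvalue 1 example from above (the second eigenpair is an arbitrary choice made just to complete the basis):

          import numpy as np

          # Working "backwards": choose eigenvectors as the columns of P and
          # eigenvalues on the diagonal of L, then rebuild A = P L P^(-1).
          P = np.array([[0.0, 1.0],    # column 0: the requested eigenvector {0, 1}
                        [1.0, 0.0]])   # column 1: any vector completing a basis
          L = np.diag([1.0, 3.0])      # eigenvalue 1 pairs with {0, 1}; 3 is arbitrary

          A = P @ L @ np.linalg.inv(P)
          print(A)                          # [[3. 0.] [0. 1.]] for these choices
          print(A @ np.array([0.0, 1.0]))   # [0. 1.] = 1 * {0, 1}, as required

      Different choices of the second eigenpair give different matrices A, which is exactly the non-uniqueness mentioned earlier.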

    • @davidmurphy563
      @davidmurphy563 8 months ago

      @@teaformulamaths I neglected to thank you btw, you've been super helpful so thanks!
      The exam question would probably read like this:
      "Given eigenvector v1 and eigenvalue lambda, calculate matrix A."
      No, not limited to one, it'll be two for sure, but obviously the space is a line. My "eigenmatrix" term just meant a particular matrix which resulted in a particular vector being an eigenvector with a particular eigenvalue. So, in the example of your video, I think you had a lambda of -2 with a vector of {4, -1, 1}, and 2 with {1, 0, 0} - I want an algorithm that outputs the result: {{2, 0, 0}, {1, 2, -4}, {-3, 1, -3}}. Those values are probably copied wrong, but you get the point I hope.
      Background:
      The point of this is that there's something called an attention matrix - honestly, the name is a bit of a misnomer; it came from debugging images so that you could see what area the NN was focused on. So if an NN was deciding on types of hats in photos, you'd check it was looking at heads and not feet, for example. That helped with training.
      This has moved on to LLMs, where there's a matrix which changes the meaning of words in the context of the words around them. "The chef is cooking" vs "dinner is cooking" vs "my laptop is cooking" etc. This is solved by training a matrix transform which reduces the distance between the tokens by shearing the latent space and so changing the distance. In fact you need two matrices. It's parallelisable, but it's all very expensive computationally (important to me because I don't own a supercomputer!) and you tend to get random side-effect craziness because it introduces undesired proximity that you have to train out, and you're playing whack-a-mole.
      But if the eigenvector lay along the line between the token vectors then it would potentially have inherent stability. Maybe. It's just an idea I'm playing with. There's a "natural tendency" for things to settle on eigenvectors which you see in nature all the time.
      So really I'm looking to orientate the matrix between the token vectors along the eigenvector. So the first step is calculating the matrix.

    • @davidmurphy563
      @davidmurphy563 8 months ago

      Actually, don't worry. Pretty sure I've figured it out. If I rotate about the v2-v1 axis - even by a minuscule amount - then by definition I'll have an eigenvector along that axis. Then I have the option of expressing it in Rn+1, adding a one in the bottom right and using the other right-hand column components to shear the matrix in the higher dimension so that it moves in the lower dimension.
      End result is that I can place an eigenvector at any angle anywhere in latent space at virtually zero cost. I'm limiting myself to square matrices, but that's fine, I need to reduce rank to dimensionality anyway.
      Sorry, couldn't see the wood for the trees there for a while. Do appreciate your help.
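
      For what it's worth, a minimal Python/NumPy sketch of that rotation idea in 3D via Rodrigues' formula (v1, v2 and the angle are arbitrary placeholder values): a rotation about the axis u = v2 - v1 fixes u, so u is automatically an eigenvector with eigenvalue 1.

          import numpy as np

          v1 = np.array([1.0, 0.0, 2.0])
          v2 = np.array([0.0, 3.0, 1.0])
          u = (v2 - v1) / np.linalg.norm(v2 - v1)   # unit rotation axis

          theta = 0.01                        # even a minuscule angle works
          K = np.array([[0.0, -u[2], u[1]],
                        [u[2], 0.0, -u[0]],
                        [-u[1], u[0], 0.0]])  # cross-product matrix of u
          R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

          print(R @ u)                  # equals u: eigenvalue 1 along v2 - v1
          print(np.linalg.eigvals(R))   # one eigenvalue is 1 (up to rounding)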

  • @LilyGazou
    @LilyGazou a year ago +2

    Math😮
    My best subject
    Not

    • @LilyGazou
      @LilyGazou a year ago +1

      I’m going to share your channel. People need help with math.

    • @teaformulamaths
      @teaformulamaths  a year ago +1

      @@LilyGazou That would be super, appreciate that!

    • @teaformulamaths
      @teaformulamaths  a year ago +1

      @@LilyGazou Let me know if you ever have a question in Maths that you can't find the answer to :D