Matrix to a matrix

  • added 10 Sep 2024
  • Matrix to the power of a matrix
    In this video, I do the unbelievable: I calculate a matrix raised to the power of a matrix! Want to know how to do this? Watch this video and find out!
    Check out my linear algebra playlist: • Linear Algebra
    Check out my eigenvalues playlist: • Eigenvalues
    Subscribe to my channel: / drpeyam
    Check out my Teespring merch: teespring.com/...

Comments • 124

  • @xBoBox333
    @xBoBox333 4 years ago +86

    2:20 best dog i've ever seen

  • @blackpenredpen
    @blackpenredpen 4 years ago +46

    Next: do A^A^A

  • @christopherthomas6124
    @christopherthomas6124 4 years ago +21

    Dr Peyam, this video really got my interest and I wanted to work out a few examples for myself. After a couple of hours of reading, it turns out that if you have a matrix X and a matrix Y where X is non-singular and normal and Y is complex, then X^Y is defined as X^Y = e^(log(X)*Y), which is a right exponential because we aren't assuming X and Y commute. This means if you do e^(Y*log(X)) you actually get the left exponential. I have been teaching myself operations in quaternion space and this fits the same pattern as operations in Q!!! Really cool stuff

    • @drpeyam
      @drpeyam 4 years ago +5

      Yes, in general those are 2 different things

    • @christopherthomas6124
      @christopherthomas6124 4 years ago

      @@drpeyam so do you have any idea of any useful applications for this?

    • @axelnils
      @axelnils 4 years ago +7

      Christopher Thomas Applications? What are you, an engineer lol

    • @rodrigorodders7173
      @rodrigorodders7173 4 years ago +2

      Axel, one does not simply ask a pure mathematician for applications
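The right vs. left exponential distinction discussed in this thread can be checked numerically. A minimal sketch, assuming SciPy's `expm`/`logm` matrix functions and two arbitrary non-commuting example matrices (not taken from the video):

```python
import numpy as np
from scipy.linalg import expm, logm

# Arbitrary example matrices: X is non-singular, and X and Y do not commute.
X = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Y = np.array([[1.0, 0.0],
              [1.0, 1.0]])

right = expm(logm(X) @ Y)  # "right" exponential: e^(log(X) Y)
left = expm(Y @ logm(X))   # "left" exponential:  e^(Y log(X))

# Because X and Y don't commute, the two conventions generally disagree.
print(np.allclose(right, left))
```

When X and Y do commute (e.g. Y a polynomial in X), the two results coincide.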

  • @shambosaha9727
    @shambosaha9727 4 years ago +32

    Dr πm: Attempts to draw a dog
    Meme artist: Am I a joke to you?
    Dr πm: Yes, I am a mathematician

  • @blackpenredpen
    @blackpenredpen 4 years ago +16

    Can wolframalpha do this?

  • @Ricocossa1
    @Ricocossa1 4 years ago +4

    I think the notation A^B would be acceptable only if A commutes with B, otherwise you have to give more information and write e^(ln(A)B). Also A has to be strictly positive definite. It's an interesting idea. I wonder if it has some applications like sqrt(A) does.

  • @kingk.crimson6633
    @kingk.crimson6633 4 years ago +6

    Great video. I was caught off guard by the diagonalization to calculate ln(A), I'd only ever seen that in the context of raising A to a power. Really cool!

  • @Pedritox0953
    @Pedritox0953 4 years ago +2

    This kind of video always amazes me!! Thanks, Professor!!

  • @AaronRotenberg
    @AaronRotenberg 4 years ago +9

    Dr. Peyam, what can we do with the ability to raise a matrix to the power of another matrix? There are lots of applications for the e^A matrix exponential (e.g. systems of differential equations), but are there any interesting applications for the A^B matrix-matrix exponential?

    • @drpeyam
      @drpeyam 4 years ago +6

      I’m not quite sure actually!

  • @AirAdventurer194
    @AirAdventurer194 3 years ago +2

    Maybe it should only be defined if ln(A) and B commute?

  • @poutineausyropderable7108
    @poutineausyropderable7108 4 years ago +24

    Here's a nice idea: what about trig identities for matrices, like sin^2(x) + cos^2(x) = 1? Would it give the identity matrix?

    • @anderrafaellinaresrojas3772
      @anderrafaellinaresrojas3772 4 years ago +2

      yes

    • @skylardeslypere9909
      @skylardeslypere9909 3 years ago

      I believe so, since the reason you can apply functions to diagonal matrices is just to do with the fact that we can write these functions as power series.
      The result sin²(x)+cos²(x)=1 follows from these power series definitions.

    • @skylardeslypere9909
      @skylardeslypere9909 3 years ago +1

      In particular, we have:
      sin(A)= [exp(iA)-exp(-iA)] / 2i
      cos(A)=[exp(iA)+exp(-iA)] / 2
      Notice, if you square these, you will eventually get
      sin²A+cos²A = [exp(iA)exp(-iA)+exp(-iA)exp(iA)]/2.
      This will cancel out to the identity matrix.
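The identity claimed in this thread is easy to check numerically; a quick sketch assuming SciPy's `sinm`/`cosm` matrix functions and an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import sinm, cosm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # arbitrary example matrix

S, C = sinm(A), cosm(A)

# sin^2(A) + cos^2(A) should reproduce the identity matrix.
print(np.allclose(S @ S + C @ C, np.eye(2)))  # True
```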

  • @shiina_mahiru_9067
    @shiina_mahiru_9067 4 years ago +37

    Next: do the derivative of a matrix wrt a matrix

  • @Lance.2451
    @Lance.2451 4 years ago +10

    Dr, could you make a video on something like taking the derivative or an integral of a function that is just x^(a matrix)? Really interested to see the result

    • @drpeyam
      @drpeyam 4 years ago +4

      That is interesting!

    • @guitar_jero
      @guitar_jero 4 years ago +2

      Intuition tells me it'd be similar to the real-number case, but replacing the 1 that's added/subtracted with the identity matrix.

    • @Lance.2451
      @Lance.2451 4 years ago +2

      @@guitar_jero I'm trying to figure out: if we treat the subtracted 1 as the determinant, then maybe there could exist an entire space of potential "1" matrices, and I wonder how that could change the derivative, and how something like that could be applied

  • @cbbuntz
    @cbbuntz 3 years ago

    Fun trick about square roots of matrices. Let A be a matrix of column vectors and B be the Gram matrix A'*A. Take the inverse square root of B and then compute C = A*B^(-1/2); then C will be an orthonormal transformation of A, since (A*(A'*A)^(-1/2))' * A*(A'*A)^(-1/2) = A'*A*(A'*A)^(-1) = identity matrix
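A sketch of the trick above, assuming SciPy's `fractional_matrix_power` for the inverse square root (the example matrix is random, not from the comment):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # random columns, full column rank
B = A.T @ A                      # Gram matrix

C = A @ fractional_matrix_power(B, -0.5)  # C = A * B^(-1/2)

# C'C = B^(-1/2) (A'A) B^(-1/2) = B^(-1/2) B B^(-1/2) = I
print(np.allclose(C.T @ C, np.eye(3)))  # True
```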

  • @dr.rahulgupta7573
    @dr.rahulgupta7573 3 years ago

    Excellent presentation of the topics. Thanks. Dr Rahul, Rohtak, Haryana, India

  • @dr.rahulgupta7573
    @dr.rahulgupta7573 3 years ago

    Excellent presentation, Sir. Thanks. Dr Rahul, Rohtak, Haryana, India

  • @abhishekpadhi6014
    @abhishekpadhi6014 4 years ago +2

    Got a test tomorrow and I so don't need to watch this, but I'm gonna

  • @emmepombar3328
    @emmepombar3328 4 years ago +1

    10 seconds into the video. Yepp, that is already great. Take my like.

  • @Royvan7
    @Royvan7 4 years ago +5

    I wonder if you could define differentiation of a variable matrix, such as d[A^B]/dA = ???

    • @drpeyam
      @drpeyam 4 years ago +4

      That would be cool!

    • @thedoublehelix5661
      @thedoublehelix5661 4 years ago +1

      It can be like the directional derivative in multivariable calculus.

    • @112BALAGE112
      @112BALAGE112 4 years ago

      You're gonna love en.wikipedia.org/wiki/Matrix_calculus.

    • @Royvan7
      @Royvan7 4 years ago

      @@112BALAGE112 nice, thx. I actually think I have seen the vector derivative notation before. Didn't think of extending it to matrices though.

  • @noahtaul
    @noahtaul 4 years ago +2

    Hello Dr Peyam, at 11:02 could you explain how that first number in the bottom row becomes negative and the second number becomes positive? Thanks!

  • @smortemm2438
    @smortemm2438 1 year ago +2

    this man is so enthusiastic about this stuff, it truly makes me smile

  • @xzy7196
    @xzy7196 4 years ago +1

    The triple integral of a (matrix^i)/(x!^e).

  • @socraticmathtutor1869
    @socraticmathtutor1869 2 years ago

    Cool video. Good to keep in mind though that matrix exponentiation is non-injective. E.g. If Id = ((1,0), (0,1)) is the identity matrix and J = ((0,-1),(1,0)) is the encoding of the imaginary unit i as a matrix, then exp(0) = Id and exp(2 * pi * J) = Id. This tells us that log(Id) has more than one element. So in general, log(B) should be regarded as a set of matrices; namely, the set of all matrices A for which exp(A) = B. Therefore, to do the math properly, many of these equals signs in the video need to be replaced by set-theoretic comparisons (e.g. "is a subset of", "includes", "shares an element with" etc.) to account for this added complexity. Note also that matrix exponentiation is non-surjective. In particular, it's easy to show that exp(A) is always invertible for any matrix A; this follows from Jacobi's identity; it can also be proved by observing that A and -A commute, and hence that exp(A) * exp(-A) = exp(A - A) = exp(0) = Id. As a consequence, we deduce that for a non-invertible matrix B, the set of matrices denoted by log(B) is always empty. So matrix logarithms are sometimes empty. Anyway....... really enjoyed the video.
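The non-injectivity example in this comment is easy to verify numerically; a minimal sketch with SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0],
              [1.0, 0.0]])  # matrix encoding of the imaginary unit i
I2 = np.eye(2)

# Two different matrices with the same exponential, so log(Id) is not unique.
print(np.allclose(expm(np.zeros((2, 2))), I2))  # True
print(np.allclose(expm(2 * np.pi * J), I2))     # True
```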

  • @remlatzargonix1329
    @remlatzargonix1329 4 years ago +3

    Great intro!

  • @virat.chauhan
    @virat.chauhan 4 years ago +1

    Dr π m ....this is really amazing...

  • @user-gm6sl5mw2o
    @user-gm6sl5mw2o 3 years ago +1

    Hello Dr Peyam
    I have a question
    Can we say that e^(A) = e^(B)
    implies A = B?
    I want to prove this but it is very difficult for me

    • @drpeyam
      @drpeyam 3 years ago +1

      That’s a really interesting question! In the diagonalizable case you can prove it directly. My guess is that it’s still true in general and I think for this you just use the Jordan form

    • @user-gm6sl5mw2o
      @user-gm6sl5mw2o 3 years ago +1

      @@drpeyam I found that e^X = I for
      X = [ 2*pi*i, 0 ;
            0, -2*pi*i ]
      I think if the eigenvalues are real then only X = 0 works

  • @sam-kx3ty
    @sam-kx3ty 4 years ago

    Thanks for this video .

  • @sayanmaji2845
    @sayanmaji2845 4 years ago +1

    Thanks

  • @marcellomarianetti1770

    I think there is something wrong around 11:20 when you calculate with Wolfram Alpha: how can -2 + 2^12 be equal to -4094, and -10 - 2^11 = 2058? It looks like you inverted the signs, or am I missing something?

  • @edwardhuff4727
    @edwardhuff4727 4 years ago

    Finally getting around to watching... At the very start, IIRC bprp would throw that dried-out marker across the room and take a new one. I'd appreciate it. Old eyes and all... Edit: I see you did replace it. Yay!!!

  • @61rmd1
    @61rmd1 3 years ago

    So, what's the correct answer? The computation with e^((ln A)B) or with e^(B ln A)? Or are both incorrect? Sorry, but I don't see it...

    • @drpeyam
      @drpeyam 3 years ago

      Either answer is correct depending on your convention

  • @tracyh5751
    @tracyh5751 4 years ago +1

    You should find conditions where equality occurs. :D At least in small dimensions

  • @foreachepsilon
    @foreachepsilon 4 years ago

    There seems to be some arithmetic error at 11:02 with entry (2,1)… the sign is negative but it looks like it should be a positive number from the right-hand side of the equation. Same with entry (2,2)

  • @user-eh2ec3rn6w
    @user-eh2ec3rn6w 1 year ago

    Dear professor , this topic is very hard for me .

  • @hammadansari6309
    @hammadansari6309 4 years ago

    How can there be two different answers for the same thing?

  • @caldersheagren
    @caldersheagren 4 years ago +1

    Can you do a video on exponential of a direct sum or tensor product of matrices?

    • @miro.s
      @miro.s 3 years ago

      It would be interesting to compute with exponentials of higher-order tensors

  • @nandakumarcheiro
    @nandakumarcheiro 4 years ago

    Matrices form a kind of piezoelectric diagonalisation; with one matrix powered by another, are we on the verge of producing a new exponential crystal?

  • @ankurmazumder5590
    @ankurmazumder5590 4 years ago

    Sir, does the algorithm work only for diagonalizable matrices? Is there a way to go about it for non-diagonalizable matrices?

  • @NAMEhzj
    @NAMEhzj 4 years ago

    Hey Dr. Peyam, nice video!
    At 4:07 you say "there is no good definition of ln here", but there is, right? It's the inverse of the exponential, so e^ln(A) = A. And when e^A = P e^D P^-1, then by that your definition is actually exactly the one you want, isn't it?

    • @drpeyam
      @drpeyam 4 years ago +1

      It’s good if A is diagonalizable, but not every matrix is

    • @NAMEhzj
      @NAMEhzj 4 years ago

      @@drpeyam right okay^^

  • @haoli9220
    @haoli9220 3 years ago

    Great! But is there a way to do it when it is non-diagonalizable?

    • @drpeyam
      @drpeyam 3 years ago +1

      Omg hi Hao! In that case use Jordan form I think!

    • @haoli9220
      @haoli9220 3 years ago

      @@drpeyam thanks! I might explore this

  • @Ensivion
    @Ensivion 4 years ago

    This is a weird mathematical statement, because in both cases I think you're applying a function to B and a separate function to A: exponentiating A, and applying the equivalent anti-exponentiation, the logarithmic function, to B. This second function is equivalent to e^B where instead of a constant you have a matrix. It isn't clear which one happens first.

  • @mosab643
    @mosab643 4 years ago

    Am I losing my sight?

  • @MagicGonads
    @MagicGonads 4 years ago

    If you diagonalise A first and then apply e^ln(lambda)*B for each lambda eigenvalue in the diagonalised matrix, then the ambiguity about the order doesn't appear (as long as the logarithm of the eigenvalues is commutative with the other scalars). Which order ends up being correct? My guess is that this will give yet a third different answer but I'm not sure.
    This means you don't have to apply the logarithm of a matrix, you just have to use some polynomial expansion of e^kB for many values

    • @MagicGonads
      @MagicGonads 4 years ago

      Or you could diagonalise B (if it is diagonalisable) so you don't have to approximate the infinite series and it would become a bunch of e^kj for every k the logarithm of an eigenvalue of A and every j the logarithm of an eigenvalue of B
      I think actually this would output a matrix with matrix values so an nxn matrix of mxm matrices of the original scalars of the problem, which would be isomorphic to an nmxnm matrix (where A is nxn and B is mxm)

    • @MagicGonads
      @MagicGonads 4 years ago

      If A is (nxn) and = PDP' , B is (mxm) and = QEQ'
      then A^B = P(D^B)P' = P(D^(QEQ'))P'
      if we assume that any function of a diagonal matrix is componentwise on the diagonal (this is at least true for matrix valued polynomials and so would it work for exp if we use the infinite series)
      then every diagonal component of D^B is some e^kB = Q(e^kE)Q' where k is the logarithm of a diagonal component of D, and each diagonal component of each e^kE is some e^kj where j is the logarithm of a diagonal component of E.
      Then you can construct an isomorphism between this P(D^(QEQ'))P' matrix and some nmxnm matrix (such as converting it into a block matrix), and this is what you could call A^B

    • @MagicGonads
      @MagicGonads 4 years ago

      So I did the working for my method to see how it behaves when compared to complex exponentiation
      It took me a while haha
      if a and b are complex numbers and a* or b* are the conjugates of said complex numbers (and a^b* is a to the conjugate of b not the conjugate of a to the b, to make what follows easier to type out)
      where A and B are the matrices which represent a and b respectively
      a^b = a^b
      A^B = 1/4 (
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* , ia^b + ia^b* - ia*^b - ia*^b* , - a^b + a^b* + a*^b - a*^b* ;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* , a^b - a^b* - a*^b + a*^b* , ia^b + ia^b* - ia*^b - ia*^b* ;
      - ia^b - ia^b* + ia*^b + ia*^b* , a^b - a^b* - a*^b + a*^b* , a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* ;
      - a^b + a^b* + a*^b - a*^b* , - ia^b - ia^b* + ia*^b + ia*^b* , - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      )
      which certainly doesn't look like it's equal to a^b but perhaps using the same sort of way we classify [x , -y ; y , x] as x + iy, this can be found to be equivalent to a^b ?
      It looks cool I guess
      I also would like to know (but would dread to do the working out) if this matrix is commutative
      the matrix has a kind of symmetry like for each component if you compared it to the transpose of the matrix's component to find the sign difference
      [+ - - +;
      - + + -;
      - + + -;
      + - - +]
      compared to how the complex numbers work
      [+ -;
      - +]
      it has like this cell placed into each quadrant applying this rule to itself

    • @MagicGonads
      @MagicGonads 4 years ago

      YES! as it turns out using that symmetry/equivalence one can reduce it to a 1d number which = a^b
      the equivalence notion is that [x , -y; y, x] = x + yi
      this is the way we will contract the 4x4 matrix into a 2x2 matrix and then the 2x2 matrix into a 1x1 matrix
      A^B = 1/4 (
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* , ia^b + ia^b* - ia*^b - ia*^b* , - a^b + a^b* + a*^b - a*^b* ;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* , a^b - a^b* - a*^b + a*^b* , ia^b + ia^b* - ia*^b - ia*^b* ;
      - ia^b - ia^b* + ia*^b + ia*^b* , a^b - a^b* - a*^b + a*^b* , a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b* ;
      - a^b + a^b* + a*^b - a*^b* , - ia^b - ia^b* + ia*^b + ia*^b* , - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      ) (as given in my previous reply)
      we notice that there are 2x2 matrices in each quadrant.
      If we label the top left quadrant as matrix X and the bottom left quadrant matrix Y
      A^B = 1/4 (
      [ X , -Y ;
      Y , X ] )
      where
      X =
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b*;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      Y =
      [- ia^b - ia^b* + ia*^b + ia*^b* , a^b - a^b* - a*^b + a*^b*;
      - a^b + a^b* + a*^b - a*^b* , - ia^b - ia^b* + ia*^b + ia*^b*]
      then A^B = 1/4( X + Yi )
      -i = 1/i so
      Yi =
      [a^b + a^b* - a*^b - a*^b* , ia^b - ia^b* - ia*^b + ia*^b*;
      - ia^b + ia^b* + ia*^b - ia*^b*, a^b + a^b* - a*^b - a*^b*]
      so A^B = 1/4 (
      [ a^b + a^b* + a*^b + a*^b* , ia^b - ia^b* + ia*^b - ia*^b*;
      - ia^b + ia^b* - ia*^b + ia*^b* , a^b + a^b* + a*^b + a*^b* ]
      +
      [a^b + a^b* - a*^b - a*^b* , ia^b - ia^b* - ia*^b + ia*^b*;
      - ia^b + ia^b* + ia*^b - ia*^b*, a^b + a^b* - a*^b - a*^b*]
      )
      = 1/4 (
      [ 2a^b + 2a^b* , 2ia^b - 2ia^b*;
      -2ia^b +2ia^b* , 2a^b + 2a^b* ]
      )
      then applying that equivalence again
      A^B = 1/4 (2a^b + 2a^b* - (2ia^b - 2ia^b*)i) = 1/4 (2a^b + 2a^b* + 2a^b - 2a^b*)
      A^B = 1/4 (4a^b)
      A^B = a^b

  • @neilgerace355
    @neilgerace355 4 years ago

    The other PDP-1:
    "DEC PDP display" infolab.stanford.edu/pub/voy/museum/pictures/display/3-3.htm

  • @stevenbanton5073
    @stevenbanton5073 4 years ago +1

    It is interesting to research how we can define a matrix to the power of a matrix without losing nice properties (e.g. 14:05). I suppose the method in the video implies that both matrices are diagonalizable; for other matrices we can't define it in such a way (or is there some trick?)
    I have some idea about 2x2 matrices. In fact there is an isomorphism between complex numbers and 2x2 matrices of a certain form (math.stackexchange.com/questions/180849/why-is-the-complex-number-z-abi-equivalent-to-the-matrix-form-left-begins). For only those 2x2 matrices we can easily define the power operation, because we can easily calculate a complex number to the power of a complex number. All the nice properties hold automatically due to the isomorphism. Then maybe we can (or can't; research needed) extend it to a wider class, or to all 2x2 matrices, by some kind of decomposition into the complex-number-isomorphic class.

    • @vicktorioalhakim3666
      @vicktorioalhakim3666 4 years ago

      The nice property you refer to @14:05 is not that nice: it requires that B and C commute. For generalized functions of non-diagonalizable matrices, see en.wikipedia.org/wiki/Jordan_normal_form. For more information about the link between 2x2 matrices and complex numbers, see Lie groups. It's nothing new, and all of the ideas you discuss here have been developed for decades; just open any serious analysis book.

    • @stevenbanton5073
      @stevenbanton5073 4 years ago

      @@vicktorioalhakim3666
      I was referring not to the "nice property" but to "losing the nice property", due to the fact that it requires commuting matrices (a loss of generality).
      I'm not claiming there is something new. I am trying to say there are other ways of defining that operation. How about a trivial one: calculating powers of the corresponding elements? ( C{i,j} = A{i,j}^(B{i,j}) )
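The complex-number isomorphism proposed in this thread can be tested directly: under the encoding a + bi <-> [[a, -b], [b, a]], the principal matrix power e^(log(A)*B) should match the principal complex power a^b. A sketch assuming SciPy's `expm`/`logm` and arbitrary example values:

```python
import numpy as np
from scipy.linalg import expm, logm

def to_matrix(z: complex) -> np.ndarray:
    """Encode a + bi as the 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag, z.real]])

a, b = 2.0 + 1.0j, 0.5 - 0.3j  # arbitrary example values

# Matrix power via the principal matrix logarithm...
P = expm(logm(to_matrix(a)) @ to_matrix(b))

# ...agrees with the principal complex power a**b under the encoding.
print(np.allclose(P, to_matrix(a ** b)))  # True
```

These encoded matrices all commute with each other, which is why the right/left ordering ambiguity disappears on this subclass.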

  • @quantumsoul3495
    @quantumsoul3495 4 years ago +1

    Nice dog !

  • @kma6881
    @kma6881 2 years ago

    A complex matrix raised to a complex matrix, maybe?

    • @drpeyam
      @drpeyam 2 years ago +1

      I mean same strategy :)

  • @scorch25able
    @scorch25able 3 years ago

    Crazy thought: matrix root of a matrix lol

    • @drpeyam
      @drpeyam 3 years ago

      Wow that would be cool

  • @chimetimepaprika
    @chimetimepaprika 1 year ago

    Silly Rabbit! [ma]trix are for kids.

  • @Flanlaina
    @Flanlaina 4 years ago

    Challenge:
    1. Sine of a matrix?
    2. What happens if we take the square root of a matrix with negative eigenvalues?

    • @drpeyam
      @drpeyam 4 years ago +1

      1) Already done (with cosine)
      2) Imaginary numbers

    • @Flanlaina
      @Flanlaina 4 years ago

      What if we take the square root of a matrix with complex eigenvalues?

    • @aaronsmith6632
      @aaronsmith6632 4 years ago

      @@Flanlaina You'll get complex values in the resulting matrix.

  • @paulkohl9267
    @paulkohl9267 4 years ago

    From dimensionality and completeness I would have expected an n x n matrix raised to an m x m matrix to be a set of nm x nm matrices with no ambiguity in definition. Am I expecting too much? The approach in the video seems like a "naive" use of the function-of-a-matrix idea via diagonalization... Seems like there should be another, more high-brow approach, as it were: monodromic evaluation of the natural logarithm and some operator-ordering principle straight from complex analysis (assuming C for algebraic completeness, but any algebraically complete field should also be allowed). Just a thought.

    • @paulkohl9267
      @paulkohl9267 4 years ago +1

      Dr Peyam, thanks for the heart, love your videos! The answer to my own query occurred to me just this morning. (e^A)^B = e^(A * B) where * is a tensor product. Then you would have the requisite dimensions I described. Operator ordering is still an issue. Thanks for the heart.

    • @MagicGonads
      @MagicGonads 4 years ago +1

      @@paulkohl9267 I think if you diagonalise both A and B then you get something isomorphic to an nmxnm matrix where each value is a scalar multiplied by exp of the product of two scalars from the original domains. And this would not depend on operator ordering as long as the scalars are commutative

    • @MagicGonads
      @MagicGonads 4 years ago +1

      If A is (nxn) and = PDP' , B is (mxm) and = QEQ'
      then A^B = P(D^B)P' = P(D^(QEQ'))P'
      if we assume that any function of a diagonal matrix is componentwise on the diagonal (this is at least true for matrix valued polynomials and so it would work for exp if we use the infinite series)
      then every diagonal component of D^B is some e^kB = Q(e^kE)Q' where k is the logarithm of a diagonal component of D, and each diagonal component of each e^kE is some e^kj where j is the logarithm of a diagonal component of E.
      Then you can construct an isomorphism between this P(D^(QEQ'))P' matrix and some nmxnm matrix (such as converting it into a block matrix), and this is what you could call A^B

    • @paulkohl9267
      @paulkohl9267 4 years ago +2

      @@MagicGonads, thanks for the heuristic argument showing why an nxn raised to an mxm is going to be nm x nm, but I think your calculations are off. The idea of just taking the entries in A as a base and raising them to the power of each entry will give an nm x nm matrix, but it does not capture the other structure associated with matrices. For instance, complex numbers raised to other complex numbers are multivalued; why shouldn't matrices be as well, when they can include complex numbers as entries? Otherwise it does not make sense.
      Given that A is nxn, B is mxm and both are diagonalizable,
      A^B = e^(B * ln A) or e^(ln A * B)
      where * is a tensor product, which does not commute. Using your diagonalizations,
      ln A = P (ln D) P',
      but then the natural log of D is multivalued with n monodromic degrees of freedom. If K is a diagonal matrix with integers on the diagonal and zero off-diagonal, then
      ln D = [ ln D_ii ] + 2 pi sqrt(-1) K
      where [ ln D_ii ] is diagonal matrix with zero off-diagonal and ln of D's diagonal entries in the same spot. This completely characterizes the solutions to A^B. What is the meaning or application? Who knows.

    • @MagicGonads
      @MagicGonads 4 years ago +1

      @@paulkohl9267 I don't see why we should assume that any property of e is preserved for matrices aside from it being the inverse of any branch of ln. eg e^(ln(A)) = A yes, so A^B = e^(ln(A))^B but that doesn't mean that we can bring that power down into the power of e, I don't think power rules necessarily apply here, so that's why I take this method. Though are you saying that the new power rule should be that you have to use the tensor product to bring it down?
      Also I prefer mine still since AB is a regular matrix product and that is what we use for the polynomials in the infinite series representation that defines exp(A) not something that isn't closed like the tensor product (consider how would you add two matrices of completely different dimension? that is what you would get if you use the tensor product for the definition of exp(A)). But I don't know much about the tensor product, both methods should reduce to regular exponentiation when both A and B are 1x1.
      And these matrices would still be multivalued, because whenever you take the ln of a complex eigenvalue you can take all the branches of those logarithms into account, which means every complex eigenvalue of either A or B gives an extra degree of countable solutions to A^B
      Recall I said that k and j are *logarithms* of the diagonal components of D and E respectively, which are the eigenvalues of A and B. So in fact for every element on the diagonal in A^B, if all eigenvalues of A and B are complex, gives you a bunch of e to the products of complex logarithms, which makes the results of the branches non-trivial.

  • @rolandd2804
    @rolandd2804 4 years ago

    I like that shit :D

  • @Flanlaina
    @Flanlaina 4 years ago

    Next: do tan(A)

  • @blackpenredpen
    @blackpenredpen 4 years ago

    found it: knowyourmeme.com/memes/if-a-dog-wore-pants

    • @drpeyam
      @drpeyam 4 years ago

      Hahaha, that’s the one!!!

  • @euler7586
    @euler7586 4 years ago

    1/8

  • @allaincumming6313
    @allaincumming6313 4 years ago

    Hell yeah!

  • @renardtahar4432
    @renardtahar4432 3 years ago

    hhhhhhhhhhhhhhhhhhhhhhhhhhh good

  • @KANA-rd8bz
    @KANA-rd8bz 7 months ago

    xd

  • @housamkak646
    @housamkak646 4 years ago +1

    First oneee

  • @stevenwilson5556
    @stevenwilson5556 3 years ago

    badly behaved.. lol.. "Bad Matrix!!" *, *…