Spectral Graph Theory For Dummies

  • Added 18. 06. 2024
  • To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/Ron. You'll also get 20% off an annual premium subscription.
    This video was sponsored by Brilliant.
    ---
    References:
    1. Cornell University Lecture Note: people.orie.cornell.edu/dpw/o...
    2. Spectral Graph Theory Lecture by Steve Butler: www.stevebutler.org/spectral2023
    3. Spectral Graph Theory Lecture by Radu Horaud: csustan.csustan.edu/~tom/Clus...
    4. Tutorial by Daniel A. Spielman: www.cs.cmu.edu/afs/cs/user/gl...
    5. This Stack Exchange question: stats.stackexchange.com/quest...
    6. And this Quora question: www.quora.com/Whats-the-intui...
    ---
    Timestamps:
    0:00 Introduction
    0:30 Outline
    0:57 Review of Graph Definition and Degree Matrix
    3:34 Adjacency Matrix Review
    5:03 Review of Necessary Linear Algebra
    9:09 Introduction of the Laplacian Matrix
    15:36 Why L Is Called the Laplacian Matrix
    18:14 Eigenvalue 0 and Its Eigenvector
    20:27 Fiedler Eigenvalue and Eigenvector
    23:56 Sponsorship Message
    25:02 Spectral Embedding
    25:38 Spectral Embedding Application: Spectral Clustering
    27:51 Outro
    ---
    Big thanks to professor Fan Chung Graham and professor Robert Ellis for the email conversations.

Comments • 106

  • @ron-math
    @ron-math 1 month ago +8

    To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/Ron. You'll also get 20% off an annual premium subscription.


  • @yewdimer1465
    @yewdimer1465 19 days ago +8

    For anyone confused about the plot at 12:20, it displays the component number on the x-axis and the corresponding value on the y-axis. For example, one of the eigenvalues for the Laplacian matrix is 2, which corresponds to the eigenvector (0.5, -0.5, -0.5, 0.5). With this in mind, the meaning of the plot should become self-explanatory.
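
    A minimal numpy sketch of this (an editor's illustration; it assumes the graph at 12:20 is the 4-cycle, which matches the eigenpair quoted above):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed example: Laplacian of the 4-cycle 1-2-3-4-1.
    L = np.array([[ 2, -1,  0, -1],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [-1,  0, -1,  2]])

    f = np.array([0.5, -0.5, -0.5, 0.5])   # eigenvector for eigenvalue 2
    print(L @ f)                            # -> 2 * f, confirming L f = 2 f

    # The plot in question: component index on x, eigenvector entry on y.
    plt.plot(range(1, 5), f, "o-")
    plt.xlabel("component index i")
    plt.ylabel("f(i)")
    plt.show()
    ```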

  • @user-lf3rz4ou5b
    @user-lf3rz4ou5b 3 days ago +1

    I really like it, though some parts after 21:20 (the Fiedler vector) were confusing. Could you please discuss the important lemmas below as well:
    Perron-Frobenius Theorem
    Rayleigh Quotient
    Courant-Fischer Theorem
    Cheeger’s Inequality
    Spectral Radius
    Algebraic Connectivity (Fiedler’s Theorem)
    Weyl’s Inequality
    Eigenvalue Interlacing
    Lovász Local Lemma

  • @gonzacont4975
    @gonzacont4975 1 month ago +14

    Good video! I did miss a discussion of the Perron eigenvector of the adjacency matrix, something essential in graph centrality. Hopefully in the future you can do a continuation of this video!

    • @ron-math
      @ron-math 1 month ago +7

      Thank you!
      I skipped some content (there is just too much) and tried to keep a consistent theme: the meaning of the eigenvalues and eigenvectors of L. I may work on another video focused on random walks in the future :)

  • @chadx8269
    @chadx8269 24 days ago +5

    Please increase the audio volume for the elderly mathematician fans.

    • @ron-math
      @ron-math 24 days ago +1

      Noted! Thank you for your feedback!

  • @jordanmoore7340
    @jordanmoore7340 1 month ago +13

    Love your videos! Wanted to point out a typo at 3:40. Element 4,5 of the adjacency matrix A should be 1, not 0.

    • @ron-math
      @ron-math 1 month ago +3

      You are right! Thank you for pointing this out!

  • @Mr.Nichan
    @Mr.Nichan 1 month ago +5

    10:00 Lf = λf implies λ = f^T L f?
    Why? Shouldn't we also have to divide by the dot product of f with itself (the square of the length/2-norm of f)? Shouldn't f^T λf = λ f^T f = λ ||f||^2?

    • @ron-math
      @ron-math 1 month ago +4

      Great catch! The assumption that f is normalized is missing.
      When writing the draft I went back and forth. Later, when I make the connection with PCA, f is normalized: that is where I started assuming all f in the video are normalized, and I forgot to state this explicitly at 10:00.
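
      A quick numerical check of this point (editor's sketch; the path-graph Laplacian is an assumed example):

      ```python
      import numpy as np

      # For a unit eigenvector f, lambda = f^T L f; otherwise divide by f^T f.
      L = np.array([[ 1, -1,  0],
                    [-1,  2, -1],
                    [ 0, -1,  1]])        # Laplacian of the path 1-2-3

      vals, vecs = np.linalg.eigh(L)       # eigh returns orthonormal eigenvectors
      f, lam = vecs[:, 1], vals[1]

      print(f @ L @ f, lam)                # equal, since ||f|| = 1
      g = 2 * f                            # un-normalized eigenvector
      print(g @ L @ g / (g @ g), lam)      # in general, lambda = f^T L f / f^T f
      ```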

  • @purpleAiPEy
    @purpleAiPEy 1 month ago +17

    I was at a combinatorics conference, and as soon as they went to eigenvalues of graphs I melted into a puddle of shame.
    Well, I felt that for the whole conference, but I appreciate the video.

  • @xbz24
    @xbz24 1 month ago +20

    keep it up with the good stuff ron

  • @blahch11111
    @blahch11111 27 days ago

    this is amazing. I hope there are more of these for different theories!

  • @CaiodeOliveira-pg4du
    @CaiodeOliveira-pg4du 1 month ago

    Absolutely brilliant! Thank you so much for this video!

  • @alextrebek5237
    @alextrebek5237 1 month ago +1

    This is exactly what I needed. Thank you!

  • @Mr.Nichan
    @Mr.Nichan 1 month ago +5

    9:58 I think the f(i) and f(j) (which you accidentally wrote as f(i) in the first form) was not the most confusing part of that notation. Rather, I think the most confusing part was the use of "j, i~j" to mean "all j such that i is connected by an edge to j" (given that "~" often represents a binary relation from a set to itself, which is itself often represented with a directed graph) - that, and the accidental use of f(i) instead of f(j) when it was the only thing in the summation.

    • @hannahnelson4569
      @hannahnelson4569 19 days ago +1

      Thank you so much I had no idea what that meant.

    • @Iamfafafel
      @Iamfafafel 3 days ago

      I think it's standard in graph theory to write i~j to mean that i and j are neighbors; I've certainly seen it more than once.
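
      A small sketch of what the notation unpacks to (an editor's illustration on an assumed toy graph): (Lf)(i) = deg(i) f(i) - sum of f(j) over all j with i ~ j.

      ```python
      import numpy as np

      A = np.array([[0, 1, 1],
                    [1, 0, 0],
                    [1, 0, 0]])        # assumed example: star with center 0
      D = np.diag(A.sum(axis=1))
      L = D - A
      f = np.array([1.0, 2.0, 3.0])

      i = 0
      neighbors = np.nonzero(A[i])[0]                 # all j with i ~ j
      lhs = (L @ f)[i]
      rhs = D[i, i] * f[i] - f[neighbors].sum()
      print(lhs, rhs)                                 # both -3.0
      ```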

  • @hannahnelson4569
    @hannahnelson4569 19 days ago

    This was simultaneously an incredibly useful and informative lesson, and incredibly confusing and difficult to understand.
    Thank you for educating us!

  • @michaeln.8185
    @michaeln.8185 25 days ago

    Loved the video! Keep up the great work Ron!

  • @Mr.Nichan
    @Mr.Nichan 1 month ago +2

    19:45 If anyone's confused like I was, the following discussion is entirely dependent on the fact that these squares must all be non-negative, so each addend must be zero (so each difference between values of f at adjacent vertices must be zero) for the sum to be zero. No "cancelling out" is allowed here.
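
    A compact restatement of the identity in play here (editor's sketch of the standard Laplacian quadratic form):

    $$f^\top L f = \sum_{(i,j) \in E} \big(f(i) - f(j)\big)^2 = 0 \;\Rightarrow\; f(i) = f(j) \text{ for every edge } (i,j),$$

    since a sum of non-negative terms is zero only if every term is zero; hence f is constant on each connected component.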


  • @MDNQ-ud1ty
    @MDNQ-ud1ty 1 month ago +5

    My suggestion is not to interrupt the flow of explaining things with tangential information such as notational conventions, and not to make a big deal out of them unless they are very confusing or very non-standard. Breaking the flow for those things is worse than not mentioning them at all. If you do mention them, keep it short, use a footnote on the screen, or mention it before going into the discussion.
    You treat using function notation rather than subscript notation as a big deal when it isn't: there is literally no difference between f_i and f(i) except symbolically. Another solution would have been to start out with "vector notation" and then switch over to function notation when it became useful. Anyone confused by lack of knowledge will quickly get over it or learn their lesson at some point, and then it won't be a problem.
    It's important to avoid unnecessary things that ultimately undermine the learning process, as it is self-defeating. Also, make sure you do not cut off a section too soon, or else the viewer has to rewind the video to see what just happened. Finally, you might want to run your audio through a sibilance remover, as your recording system/environment/voice is a bit sibilant.

    • @ron-math
      @ron-math 1 month ago

      Thank you for the suggestion.
      I was trying to keep it consistent with the Laplace operator that acts on an ordinary function f(x), using f_i to represent the i-th eigenvector and f(i) to represent an eigenvector's i-th entry. It seems I didn't do a good enough job here.
      I guess I made it too "tedious". Will keep your suggestions in mind!

    • @MDNQ-ud1ty
      @MDNQ-ud1ty 1 month ago +2

      @@ron-math The video was pretty good in terms of explaining some cool things about how graph theory works, and it seems to flow better after that point.
      The problems are minor, and obviously everything can be improved (nothing is perfect). So don't take it personally. I just thought I'd mention it so you are aware of such things for future videos (and ultimately it's your choice whether to listen to me or not).
      I understood what you were doing, but for someone for whom such things are obvious and not bothersome, it becomes a little tedious to be told how the notation works and then to hear you apologize for it (there is no need, and the apology takes up extra space).
      The principle should generally be: explain things as effectively and efficiently as possible. Say as much as necessary, in as little time as possible, and as effectively as possible.
      Also remember the "fan-out principle": if you have a node (you) that reaches a lot of other nodes (your viewers), then any mistakes you make will be amplified across those nodes. What this means is that your effectiveness is amplified by N, where N is the number of viewers. Of course you can't be perfect, so you have to find the balance between polishing enough and not too much, so you can also make the most progress.
      What I mean by this are things like "explaining the basics". Many times people will try to explain things that, for the material, the viewer should already know or be able to deal with on their own. This just takes time away, because most of those viewers will not need it explained, and those that do can deal with it on their own.
      It is an optimization problem in some sense, where one has to find the right combination of "details" to get the biggest result, and it isn't easy.
      Again, these are minor things, but if no one mentioned them you would never think about them.
      If it were me, I would just put a little text block explaining that detail so it could be picked up visually very quickly while also listening to the vocals (done in parallel, it consumes almost no extra time). You can use this idea in the future: if you want to explain extra things you can have "footnotes", and the * is pretty universal. Just keep it non-intrusive and say enough but not too much, so that most people (say 80%) have no problem understanding it, and the rest can put in whatever effort they need to follow what is going on.

    • @Debrugger
      @Debrugger 26 days ago +1

      @@ron-math First of all I loved the video and I'm going to work through it again more slowly :)
      I also agree with this person's comments. A person watching this definitely has some linear algebra background, otherwise they wouldn't understand anything, and you rightly assume that they've heard of stuff like PCA and k-means. But anyone with that background can probably handle indices moving around from subscripts to (parentheses), so no need to get caught up on it.
      Another note on pacing, because it feels a bit uneven. The first part is very slow and basic, spending precious minutes on explaining node degrees and star/line/cycle graphs which I'm going to guess everyone who knows what an eigenvalue is can probably conceptualize. But then for the proofs about multiplicity and number of eigenvectors, things suddenly go really fast comparatively. On their own both of these are ok, but in the same video it feels very weird and will suit nobody (either you need 20 seconds of explanation that an adjacency matrix is symmetric, or you can understand in 5 seconds why the last eigenvector must be linearly dependent, but not both).
      Looking forward to more videos!

    • @ron-math
      @ron-math 26 days ago

      @@Debrugger Your profiling of the audience is amazing. Thank you so much! Will do better in the future.

  • @Infraredchili
    @Infraredchili 1 month ago

    Interesting and clear presentation. There is an error: the matrix shown at 3:45 should be symmetric (as you pointed out), but A(4,5) = 0 while A(5,4) = 1.

    • @ron-math
      @ron-math 1 month ago

      Yes. You are right. Never trust copilot too much!

  • @yandrak6134
    @yandrak6134 1 month ago +1

    That soft music in the background is top notch!

  • @liyi-hua2111
    @liyi-hua2111 1 month ago

    15:00 How do you discuss "smoothness" when the dimension of the eigenspace is greater than 1? Even if you require them to be mutually orthogonal, the eigenvectors in it can mix to create a new set of eigenvectors.

    • @ron-math
      @ron-math 1 month ago

      Good question.
      When the eigenspace has dimension 3, you are right that we can create a new set of eigenvectors, and the inter-connected-component smoothness will be different, since the only requirement is that the entries of the eigenvector corresponding to a connected component are constant.
      The point is that 1) those eigenvectors all have smaller intra-component variance (namely 0) compared to the eigenvectors associated with lambda > 0,
      and 2) if we try to find a basis with as many 0 entries as possible (high sparsity), which is usually a good convention, then our basis choice is good. It also seems that this is how numpy does it.
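
      A minimal sketch of this (editor's illustration; three disjoint edges are an assumed example):

      ```python
      import numpy as np
      from scipy.linalg import block_diag

      # A graph with 3 connected components has eigenvalue 0 with multiplicity 3,
      # and any vector that is constant on each component lies in that eigenspace.
      L_edge = np.array([[ 1, -1],
                         [-1,  1]])
      L = block_diag(L_edge, L_edge, L_edge)   # Laplacian of 3 disjoint edges

      vals, vecs = np.linalg.eigh(L)
      print(np.round(vals, 10))                # 0 appears three times

      f = np.array([5, 5, -2, -2, 7, 7])       # constant per component
      print(L @ f)                             # all zeros: L f = 0 * f
      ```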

  • @hideyoshitheturtle
    @hideyoshitheturtle 28 days ago

    Thanks for this amazing video! 19:23 I didn't quite understand the logic here. How do eigenvectors with nonzero entries on each cluster produce eigenvalue 0?

    • @ron-math
      @ron-math 28 days ago

      Hey!
      Each "block" is like a smaller Laplacian matrix itself. As long as the entries, associated with a block, are constant, the eigenvalue is 0.

  • @LucasSilva-ut7nm
    @LucasSilva-ut7nm 1 month ago

    Pretty good video bro! I'm now excited to see your other videos.
    I would only like to add that, in my opinion, it would have been better if you had given more time to the end part. Your pacing is not constant through the video; you spent more time explaining the easier parts than the harder ones lol.

    • @ron-math
      @ron-math 1 month ago +2

      You are right bro. I should keep the pace more consistent for sure.

  • @drdca8263
    @drdca8263 1 month ago

    This was nice, I certainly hadn't expected the technique at the end.
    Do I understand correctly that you took the 3 eigenvalues with the greatest magnitude and their corresponding eigenvectors, and sent each graph vertex to the triple of that vertex's entries in those 3 eigenvectors?
    If so, how was 3 chosen? Was it chosen based on the knowledge that there should be 3 components, or was there a different reason?

    • @ron-math
      @ron-math 1 month ago +1

      Yes, your understanding is correct.
      For this question, 2 is actually enough (so it doesn't depend on the knowledge that there are 3 components). In the real world it is usually trial and error and depends a lot on experience, like picking the number of clusters in the k-means algorithm.
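
      A minimal sketch of this recipe (an editor's illustration, not the video's code; it follows the common convention of using the eigenvectors of the k smallest Laplacian eigenvalues, and the two-triangle graph is an assumed example):

      ```python
      import numpy as np
      from sklearn.cluster import KMeans

      def spectral_clustering(A: np.ndarray, k: int) -> np.ndarray:
          # Embed each vertex via the k smallest Laplacian eigenvectors,
          # then run k-means on the embedded points.
          D = np.diag(A.sum(axis=1))
          L = D - A
          vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order
          X = vecs[:, :k]                   # one k-dim point per vertex
          return KMeans(n_clusters=k, n_init=10).fit_predict(X)

      # Two disjoint triangles -> two clusters.
      A = np.zeros((6, 6), dtype=int)
      for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
          A[i, j] = A[j, i] = 1
      print(spectral_clustering(A, 2))      # e.g. [0 0 0 1 1 1]
      ```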

  • @baronvonbeandip
    @baronvonbeandip 1 month ago

    Man, what a stellar video. Explained stuff that I didn't even know I wanted from it. Great job, dude

  • @BalthazarMaignan
    @BalthazarMaignan 1 month ago +4

    I love this kind of video, it switches things up a bit from my uni classes haha


  • @gabrielsembenelli7633
    @gabrielsembenelli7633 1 month ago +4

    Did you use Manim for the animations? Really cool video!

    • @ron-math
      @ron-math 1 month ago +1

      Yes. Thank you buddy.

  • @MechanicumMinds
    @MechanicumMinds 26 days ago +1

    I never knew graph theory could be so relatable. I mean, who hasn't had a 'genuine friendship' on social media, only to realize the other person doesn't follow them back? And don't even get me started on the 'most popular guy in your social bubble' - I'm pretty sure that's just my high school ex. Anyway, on to the adjacency matrices...

  • @0MVR_0
    @0MVR_0 1 month ago

    I've been using an old cosine-similarity algorithm to accomplish the metric-distance task,
    yet directly manipulating a matrix seems much more efficient.

  • @blacklistnr1
    @blacklistnr1 1 month ago +10

    Seems like an interesting video, but I had to drop it midway since it's more homework than presentation.
    It watches like a semester's worth of material, chapter after chapter to remember, with no goal in sight.
    I feel like a lot of work is expected of me to join the dots that you present.
    I truly mean this as feedback; it's the best I could do to explain why the video structure feels off. I hope it's useful for future improvements!

    • @ron-math
      @ron-math 1 month ago

      Can't thank you enough for confirming my doubt (along with the folks who liked this comment). I would happily pin this comment if I could.
      Do you think an introduction built around the example currently at the end of the video (25:40) would be better?
      Storytelling is THE most difficult part of making videos for me. I will try a different approach for my next video (and a fun one).
      In this video, I actually like the Laplacian matrix naming justification and the Fiedler examples the most. Sadly they are in the second half of the video...
      Thanks buddy.

    • @blacklistnr1
      @blacklistnr1 1 month ago

      ​@@ron-math That's actually such a cool and simple-to-understand application!
      Yes, I would suggest starting with a more abstract version of that as a motivational problem and building towards it in the video.
      Drop the matrix/Laplacian specifics and keep the

    • @ron-math
      @ron-math 1 month ago +1

      @@blacklistnr1 Thanks!
      "It would also be really helpful if you stick to the concentric circles(or part of them) for visualising the graph things I've seen in the first part half of the video."
      I went through different basic types of graphs in hope that they will be beneficial but I agree that from the sentiment in the comment it seems that I overdid it. Well... I also have comment saying that they like this pace. Guess it will take more videos and trial-and-error to find the sweet spot :)

    • @blacklistnr1
      @blacklistnr1 1 month ago

      @@ron-math The circles suggestion is more to help me have one unifying structure in my head where I can put the ideas you present.
      If you can draw a line or color a dot/group of dots or something for each idea, it becomes much easier to remember and frame them within the context.

    • @radadadadee
      @radadadadee 1 month ago

      Yeah, I stopped at the 11 min mark, I couldn't follow anymore

  • @t.gokalpelacmaz584
    @t.gokalpelacmaz584 1 month ago +1

    At 12:39, I don't understand what the graph is. After checking your sources, I still could not find where it came from. The "Vertices in a Row" headline did not make sense to me. If the graph were better labeled, it would be much better. If anyone could explain it, I would really appreciate it.
    It was a nice introduction otherwise. Keep them coming 😉.

    • @ron-math
      @ron-math 1 month ago +1

      "Vertices in a row" is just a line graph. It was an early draft title that sneaked into production.
      Thanks buddy, will do.

  • @dominiquelaurain6427
    @dominiquelaurain6427 1 month ago

    Very interesting. I have found a motivation for studying the subject, namely clustering in TDA (Topological Data Analysis). Maybe there is work to be done with spectral graph theory on graphs representing simplicial complexes. You talked a little about weighted graphs (the Laplacian changes but the ideas remain, as in the paper you quoted) but not about directed graphs, only undirected graphs, because the adjacency matrix is always symmetric. Why? Does spectral theory exist for these extensions? The motivation in your video is fuzzy: I deduce some applications in clustering. Are there others? In graph theory the adjacency matrix M can have symbolic entries (not real constants), for example for graph paths: M^n represents the sets of paths of at most n steps, and the union M^0 + M^1 + ... + M^n + ... represents all paths between a pair of vertices (i, j). Is there spectral theory there?

    • @ron-math
      @ron-math 1 month ago

      Thank you for commenting.
      There are a lot of motivations. This is a huge topic and this video is just a slice of it. And you are right, M exponents are connected with graph paths.
      I was trying to frame different motivations as different stories. This video focuses on "clustering" as you said.
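
      A quick illustration of the walk-counting fact mentioned above (editor's sketch; the path graph is an assumed example):

      ```python
      import numpy as np

      # (A^n)[i, j] counts walks of length n from i to j.
      A = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]])              # path graph 0-1-2

      print(np.linalg.matrix_power(A, 2))    # e.g. one walk of length 2 from 0 to 2
      print(np.linalg.matrix_power(A, 3))
      ```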

  • @jaf7979
    @jaf7979 20 days ago

    You have a great teaching style.

  • @turnerburger
    @turnerburger 1 month ago

    20:10 This does not make sense to me: why are the entries all nonzero for this eigenvector in particular? Specifically, why can't the "neighbors" be nonzero?

    • @ron-math
      @ron-math 1 month ago

      "why are the entries all nonzero for this eigenvector in particular?"
      In general, c1, c2, c3 can't be all 0: at least one of them must be non-zero. I was trying to make the point of linear dependency so I can write f4=c1f1 + c2f2 + c3f3. But yes, they don't have to be non-zero at the same time.
      At 20:10, c is not 0, so other entries in component teal must also be c. But since you asked this question, I assume you got it.
      Did I answer your question?

  • @RandomBurfness
    @RandomBurfness 1 month ago +1

    Perhaps I didn't quite catch this detail, but why are we even interested in the "smoothness" of eigenvectors to begin with?

    • @ron-math
      @ron-math 1 month ago

      That's my fault in the storytelling. Check the Fiedler eigenvalue example and the spectral clustering example in the later half of the video: the eigenvector signs and magnitudes can be used to perform tasks like graph partitioning and clustering.

  • @IkhukumarHazarika
    @IkhukumarHazarika 28 days ago

    It was a spiritual experience to watch this maths video. Make more videos on ML and optimization, and please make them free and available for everyone.

  • @wargreymon2024
    @wargreymon2024 29 days ago

    Excellent. I don't think we need that BGM though. Keep it up 🔥💯

  • @ZheZhang-yi2qn
    @ZheZhang-yi2qn 1 month ago +2

    Brilliant!

  • @andyhughes8315
    @andyhughes8315 1 month ago

    An arbitrary pair of eigenvectors is not always orthogonal. Only eigenvectors corresponding to the same eigenvalue are orthogonal.

    • @ron-math
      @ron-math 1 month ago +1

      You mean different eigenvalues?

    • @huhneat1076
      @huhneat1076 1 month ago

      Yeah, I was confused on this too. I've rarely seen eigenvectors be perpendicular to each other, unless there's essentially a whole plane of eigenvectors (such as the matrix 2 * Identity)

    • @drdca8263
      @drdca8263 1 month ago

      @@huhneat1076 For a self-adjoint linear operator H, if u, v are two eigenvectors with eigenvalues \lambda and \mu respectively, and \lambda ≠ \mu,
      then ⟨u, Hv⟩ = ⟨u, \mu v⟩ = \mu⟨u, v⟩,
      but also ⟨u, Hv⟩ = ⟨Hu, v⟩ = ⟨\lambda u, v⟩ = (\lambda)^* ⟨u, v⟩,
      and because \lambda is real-valued, \lambda = (\lambda)^*,
      so (\lambda - \mu)⟨u, v⟩ = 0,
      and as \lambda ≠ \mu, therefore ⟨u, v⟩ = 0,
      so eigenvectors with different eigenvalues of a self-adjoint linear operator are orthogonal.
      If all the matrices here are self-adjoint (and they seem to be real and symmetric, so they should be), then the eigenvectors for different eigenvalues should be orthogonal.
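
      A quick numerical check of this (editor's sketch; the triangle-graph Laplacian is an assumed example):

      ```python
      import numpy as np

      # Real symmetric (self-adjoint) matrix: eigh returns an orthonormal basis,
      # and eigenvectors for distinct eigenvalues are necessarily orthogonal.
      L = np.array([[ 2, -1, -1],
                    [-1,  2, -1],
                    [-1, -1,  2]])      # Laplacian of the triangle graph

      vals, vecs = np.linalg.eigh(L)
      print(vals)                         # 0, 3, 3
      print(np.round(vecs.T @ vecs, 10))  # identity: the basis is orthonormal
      ```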

  • @jorgelombardi169
    @jorgelombardi169 9 days ago

    Is a graph matrix always symmetric, regardless of the type of graph?

    • @ron-math
      @ron-math 8 days ago

      The graph has to be undirected in order for the matrices to be symmetric.
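
      A one-line illustration (editor's sketch with an assumed toy matrix):

      ```python
      import numpy as np

      # A directed edge 0 -> 1 with no reverse edge gives an asymmetric A.
      A = np.array([[0, 1],
                    [0, 0]])
      print(np.array_equal(A, A.T))   # False: directed graphs break symmetry
      ```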

  • @happi5159
    @happi5159 25 days ago

    I don't understand what you mean by plotting the eigenvectors. How do you plot them? What do the axes mean? Why do equal eigenvalues produce different eigenvectors on the plot? It seems so hand-wavy to me.

    • @drdca8263
      @drdca8263 25 days ago +1

      I think: "for each vertex of the graph, associate to it a point in n-D space (here n=3 was chosen) where the i-th coordinate is the coefficient of that vertex in the i-th eigenvector (with i ranging from 1 to n)".
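
      A minimal sketch of that recipe in 2-D (editor's illustration; the 6-cycle is an assumed example, whose embedding lands on a circle):

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      # Vertex v is sent to (f2(v), f3(v)), using the 2nd and 3rd eigenvectors.
      n = 6
      A = np.zeros((n, n), dtype=int)
      for i in range(n):
          A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1   # 6-cycle
      L = np.diag(A.sum(axis=1)) - A

      _, vecs = np.linalg.eigh(L)
      pts = vecs[:, 1:3]                # one 2-D point per vertex
      plt.scatter(pts[:, 0], pts[:, 1])
      plt.show()
      ```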

  • @Iamfafafel
    @Iamfafafel 3 days ago

    Eventually, I really want to see someone make a comparison between spectral theory on closed manifolds and on compact graphs. There are too many similarities to be a coincidence. As a baby example, theorem one also holds on manifolds.
    As for the video, I think the strongest block is the one starting at 9:10. The introduction was too fluffy, and the end was too unfocused. Too much fluff risks alienating the audience. It would've been better to split it into different videos.
    Nonetheless, thank you for making a video on a very interesting subject.

  • @math__man
    @math__man 1 month ago +1

    Liked the video :)

  • @howwitty
    @howwitty 1 month ago

    Thanks.

  • @isaac10231
    @isaac10231 14 days ago

    Was this made with Manim?

  • @grayjphys
    @grayjphys 1 month ago

    Nice video :) I would say you need to put a de-esser on your sound though.

    • @ron-math
      @ron-math 1 month ago

      Good point! Thank you!

  • @juandavidrodriguezcastillo9190

    awesome

  • @SoufianeIdrissi123
    @SoufianeIdrissi123 29 days ago +1

    Hhh mines maths 2 2024

  • @ominollo
    @ominollo 1 month ago +1

    Nice 😊
    I didn't understand everything, but it was interesting nevertheless 👍

    • @ron-math
      @ron-math 1 month ago +1

      Happy to help you understand buddy.

  • @siegfriedbarfuss9379
    @siegfriedbarfuss9379 1 month ago

    Great content but not for real dummies. Need to process a lot of info.

  • @fayadkhairallah2760
    @fayadkhairallah2760 28 days ago

    Is it true that when you write a book for dummies you actually mean that they're so? Adieu 😮

  • @VadimChes
    @VadimChes 1 month ago +1

    It was too quick an explanation for me.... I wish I had more time to be able to calculate in my mind what is happening on the screen... it looks like an explanation for somebody who already knows 95% of the material...

    • @ron-math
      @ron-math 28 days ago +1

      Hey, slow it down on your side, pause for as long as you need, and think about it.
      And let me know about any confusion you have ;)

    • @VadimChes
      @VadimChes 28 days ago

      @@ron-math It's the wrong attitude. Why don't you slow down yourself?

    • @ron-math
      @ron-math 28 days ago +1

      I will try next time 👍.
      If you check a few other comments, some people complain it is too "easy". It is always hard to balance but I will try my best.

    • @herrk.2339
      @herrk.2339 4 days ago

      @@VadimChes It's a video; you can pause it, slow it down, and work things out yourself. That's the whole point.

  • @isiisorisiaint
    @isiisorisiaint 29 days ago

    This is ridiculous! If you'd written all this down as a course lecture then maybe it would have some value (I say "maybe" because I can't even follow your vid, let alone appreciate it in any meaningful way), but a video with the density of information you have here is just insane. Who exactly is your supposed target audience with this vid??? At t=11:49 I just gave up.

    • @ron-math
      @ron-math 29 days ago +1

      Hey buddy, you may not know how much I appreciate this comment.
      If you check a few other comments, you may see that people are complaining that I spent too much time on "easy" stuff. The comments will collectively influence how I create content in the future. Your comment gave me an important data point: I also have an audience who finds this video's information density too high.
      What's your suggestion to make this video valuable to you?

  • @eiseks3410
    @eiseks3410 1 month ago +25

    Too easy a subject presented in a very complicated and time-consuming way. The video should have been shorter, in my opinion.

    • @ron-math
      @ron-math 1 month ago +21

      Hey, you are a theoretical physicist.

    • @ron-math
      @ron-math 1 month ago +35

      And I somewhat agree with you about the "shorter" argument. The original draft for this video was very concise and short:
      1. Remove the review part; jump to the L matrix and the Fiedler vector directly.
      2. Talk about spectral clustering directly.
      But I decided to make it way more accessible for folks who have no knowledge of graph definitions and have likely forgotten things like the fact that 0 can be an eigenvalue, etc.
      There is a lot of stuff that an audience like you may enjoy more, like this one: czcams.com/video/uxsDKhZHDcc/video.html

    • @eiseks3410
      @eiseks3410 1 month ago +30

      @@ron-math Hey, thank you very much for your reply... and sorry for the harsh criticism :P You're doing great work with your channel :)

    • @4thpdespanolo
      @4thpdespanolo 1 month ago +11

      LOL not a simple subject if you dig deep enough

    • @umbraemilitos
      @umbraemilitos 1 month ago +4

      ​@ron-math Negative people are everywhere, don't worry about them. Opinions are cheap.