Mindscape 174 | Tai-Danae Bradley on Algebra, Topology, Language, and Entropy

  • Published 27 Jul 2024
  • Patreon: / seanmcarroll
    Blog post with audio player, show notes, and transcript: www.preposterousuniverse.com/...
    Mathematics is often thought of as the pinnacle of crisp precision: the square of the hypotenuse of a right triangle isn’t “roughly” the sum of the squares of the other two sides, it’s exactly that. But we live in a world of messy imprecision, and increasingly we need sophisticated techniques to quantify and deal with approximate statistical relations rather than perfect ones. Modern mathematicians have noticed, and are taking up the challenge. Tai-Danae Bradley is a mathematician who employs very high-level ideas - category theory, topology, quantum probability theory - to analyze real-world phenomena like the structure of natural-language speech. We explore a number of cool ideas and what kinds of places they are leading us to.
    Tai-Danae Bradley received her Ph.D. in mathematics from the CUNY Graduate Center. She is currently a research mathematician at Alphabet, visiting research professor of mathematics at The Master’s University, and executive director of the Math3ma Institute. She hosts an explanatory mathematics blog, Math3ma. She is the co-author of the graduate-level textbook Topology: A Categorical Approach.
    Mindscape Podcast playlist: • Mindscape Podcast
    Sean Carroll channel: / seancarroll
    #podcast #ideas #science #philosophy #culture
  • Science & Technology

Comments • 32

  • @logaandm
    @logaandm 2 years ago +10

    Not often do I listen to a podcast twice. Even rarer that I listen three times. A wonderful example of how mathematics helps in understanding a wide variety of things, from physics to biology, language, evolution, and even improving clarity of thought. Sometime in the future this is going to be seen as a golden age of mathematics. Eye-opening.
    Thank you, Sean and Tai, for the conversation and explanations. The world is a better place for what you do.

  • @thewiseturtle
    @thewiseturtle 2 years ago +8

    YES! Finally! Someone else exploring how entropy relates to everything, by using geometry/topology and language! This has been the focus of my work for about a decade, and, I believe, even inspired Stephen Wolfram to start work on this level of thinking, where you allow for all combinations to happen and be equally probable, and thus generate a model of reality that continually expands while simultaneously contracting. This is what we see as natural selection (gravity, matter, stability, contraction) and random mutation (electromagnetism, energy, change, expansion) which, combined, produce a family tree of all possible paths through space~time.

  • @samig9032
    @samig9032 2 years ago +2

    Dr. Bradley’s enthusiasm for her work is so obvious just from listening to her speak. Great listen!

  • @TJ-hs1qm
    @TJ-hs1qm 2 years ago +3

    lol I wasn't expecting to hear about monoids and category theory just 10 min in. I learned this kind of stuff through Haskell and Scala, so basically functional programming. Awesome :)

  • @alvarorodriguez1592
    @alvarorodriguez1592 2 years ago +2

    Great interview. Not only was the topic very interesting and delivered with joy and charisma, but the hype was kept to a refreshing minimum, acknowledging the limitations of applying an awesome math concept to a domain, language, that is not necessarily described by current math.
    Kudos and thanks to both :-)

  • @user-wu8yq1rb9t
    @user-wu8yq1rb9t 2 years ago +1

    Hello dear Professor Carroll (one of my beloved physicists).
    Thanks a ton for all of your efforts.

  • @robdin81
    @robdin81 2 years ago +3

    This is, for me, probably the most difficult episode to understand so far, and that has nothing to do with the explanations given by Ms. Bradley or Sean Carroll, for that matter. It is just that difficult parts of math are connected here, which makes it even harder to follow. Very interesting episode though, and I think I will listen to it a couple more times so I'll be able to understand everything.

  • @JacobCanote
    @JacobCanote 2 years ago +1

    Wow. A joy to hear. Thanks for walking us through the insights gleaned from the paper.

  • @Xx_Eric_was_Here_xX
    @Xx_Eric_was_Here_xX 2 years ago

    one of my favorite podcast episodes ever, the knowledge and enthusiasm is palpable

  • @paxdriver
    @paxdriver 2 years ago +2

    Omg this has been one of my favorite episodes!

  • @grawl69
    @grawl69 2 years ago

    Fantastic interview, thank you.

  • @Dth091
    @Dth091 2 years ago

    This was fantastic. The relationship between boundaries and entropy makes me think of the holographic principle; that the maximal information contained in a region of space is proportional to its surface area, and entropy could be thought of as a measure of unknown information so it seems pretty connected! Also how the surface measure of an object is the derivative of its volume measure!
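The last observation in this comment checks out numerically: the derivative of a ball's volume with respect to its radius is exactly the bounding sphere's surface area. A minimal sketch (the radius r = 2 is an arbitrary choice for illustration):

```python
import math

def volume(r):
    # Volume of a ball of radius r: (4/3) * pi * r^3
    return 4 / 3 * math.pi * r ** 3

def surface(r):
    # Surface area of the sphere bounding that ball: 4 * pi * r^2
    return 4 * math.pi * r ** 2

# Central-difference estimate of dV/dr at r = 2;
# it should agree with surface(2) up to the step size squared.
r, h = 2.0, 1e-6
dV_dr = (volume(r + h) - volume(r - h)) / (2 * h)
```

The same relationship holds for any radius, which is the "surface measure is the derivative of the volume measure" fact the comment mentions.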

  • @DudokX
    @DudokX 2 years ago +1

    Ohh, coarse-graining! Now I know why Sean is interested in this!

  • @manfredkrifka8400
    @manfredkrifka8400 2 years ago

    Interesting podcast! But just for the record, the algebraic perspective on language has been established for a long time. For example, the Polish logician Kazimierz Ajdukiewicz developed this concept in 1935 in what later became Categorial Grammar. Basically, words are assigned categories that tell you precisely in which neighbourhood they occur. Around 1970, the American logician and philosopher Richard Montague provided a very general mathematical framework in his article "Universal Grammar". Basically, Montague sketched a way to describe syntax in algebraic terms, semantics in algebraic terms (referring to models in intensional logic), and a homomorphic mapping from syntactic structure to semantic interpretation. It describes how we humans can form and understand sentences that have never been uttered before. This became a great research program called "formal semantics" -- Professor Barbara Partee would be an excellent guest to talk about that! I should also mention that grammars with probabilistic rules were introduced several decades ago.
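The categorial-grammar idea in this comment can be sketched in a few lines. The two-word lexicon and the single combination rule below are hypothetical illustrations, not taken from Ajdukiewicz or the podcast:

```python
# Minimal categorial-grammar sketch. "NP" is a noun phrase;
# the category NP\S combines with an NP on its left to yield
# a sentence S. Lexicon entries are invented for illustration.

LEXICON = {
    "Mary": "NP",
    "sleeps": r"NP\S",
}

def combine(left, right):
    """Backward application: a category X followed by X\\S yields S."""
    if right == left + r"\S":
        return "S"
    return None  # the two categories do not fit together

# "Mary sleeps" type-checks as a sentence; "Mary Mary" does not.
result = combine(LEXICON["Mary"], LEXICON["sleeps"])
```

The point of the formalism is exactly what the comment says: a word's category encodes precisely which neighbourhoods it can occur in.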

  • @flexeos
    @flexeos 2 years ago

    A philosopher worth reading on the philosophical side of all this is Alain Badiou. In his book "Being and Event" he redefines Being and explores the parts and the whole using set theory. He also has a strong definition of Event as something that could not have been predicted by analysis of the existing, so in your terms as part of a probability distribution with high entropy.

  • @cloudrouju526
    @cloudrouju526 2 years ago

    You know, what has been ringing in my ears after this were lots of “you know” and “I don’t know”. I don’t know.

  • @LearnedSome
    @LearnedSome 2 years ago

    A pleasant surprise after the clickbaity addition of the word Entropy at the end of the title, however apt. :)

  • @DeclanMBrennan
    @DeclanMBrennan 2 years ago +3

    That was a fascinating chat that also triggered some nostalgia, because Tai-Danae Bradley was a writer/presenter on the great "PBS Infinite Series", which sadly turned out to be all too finite. czcams.com/users/pbsinfiniteseries

  • @fs5775
    @fs5775 2 years ago +1

    In her application to language, she's just talking about corpus linguistics & descriptive linguistics.

    • @thewiseturtle
      @thewiseturtle 2 years ago

      I think the novel understanding here, which she may or may not yet be aware of, is how the topology (squishy geometry) means that language is multidimensional and fractal, such that every single word can "contain" an infinite number of other words within it, as the location on the map is expanded to become its own map that can be described in infinite detail, like zooming into the trees from the forest, and the cells from the trees, and the atoms from the cells, and so on. This is why natural language is so messy and complex and impossible to pin down. But this evolutionary family tree of all possible relationships that the Pascal's triangle of simplices describes allows us to at least make an attempt to quantify a whole language with an impressive level of usefulness.
      I believe that at least one electronic 20 Questions game used this to categorize all nouns so as to seriously limit the number of questions needed to sort out something close to the answer.

    • @fs5775
      @fs5775 2 years ago

      @@thewiseturtle language being fractal, recursive, or existing as a complex adaptive system is not a new idea ...

  • @AaronParks
    @AaronParks 2 years ago +1

    hey it's the PBS Infinite lady!

  • @marianmusic7221
    @marianmusic7221 2 years ago

    @Sean Carroll Hello, Mr. Carroll. Thanks for making YouTube a more interesting place and bringing the beauty of science closer to us. Here is a question regarding gravity and the way it affects everything. It is said that a strong gravitational field slows down the clocks, and even the thoughts, of a person in that field. Very precise clocks were mounted on planes flying at high altitude around the planet. In that experiment the altitude and the speed of the planes were taken into consideration, and it was shown that gravity and speed have an effect on time. I wonder if we could do the following experiment. By my understanding, "the movement" of the electrons in the atoms of our brains and bodies is what gives us, besides the sense of time passing, our thoughts, our ability to think, and the speed at which we think, i.e. our perception of life. In the experiment involving the clocks on airplanes, the behavior of the whole clocks was analyzed. Can we make the same experiments involving electrons only? Is there a property of the electrons that can be measured using today's technology? Can we put some electrons on airplanes and measure their properties while flying at high altitude? I know the physicists would tell me: "We cannot say that the electrons are moving. We can only say what the probability is of finding an electron at a certain position inside the atom." But maybe there is a property of the electron that, when running that experiment, we can notice slowing down. Hint: electricity is also a result of "moving" electrons. Can we make precise measurements of the properties of electricity at high altitude? Maybe by analyzing the properties of electricity at high altitude and comparing the results with those of the same experiment made here on Earth, we can find some differences. And those differences can tell us something about the "movement" of the electrons and how gravity affects it. I am talking about a device consisting of an electricity source, a wire, and a consumer. Thanks!

  • @robhollander1844
    @robhollander1844 2 years ago

    Would Yoneda suffice? Using Quine's example: "renate" and "cordate" refer to the same set of individuals in the actual world, so it's possible that they might just happen to always coincide in actual use, in which case the Yoneda lemma might not distinguish their meaning. The lemma might distinguish them in all possible uses in all possible worlds, but to identify those worlds one already must know the meaning of the words, in which case the lemma would be unnecessary. Granted it's an extreme example, but its point is that meaning must be more than mere actual use. What that more is can be a difficult question, harder than the so-called "hard problem" of consciousness.

  • @tonytanner3048
    @tonytanner3048 2 years ago

    Interesting. Does that mean a Student's t distribution can be seen as an infinite-entropy space?

  • @HarryNicNicholas
    @HarryNicNicholas 2 years ago

    "putting an elephant on a cardboard plane" - I just saw a poem written by AI about this... synchronicity. Damn, can't find it. It was in a tweet.
    "pull the house down" lol.

  • @robhollander9821
    @robhollander9821 2 years ago

    It must be all the *possible* contexts in which a word appears that might render its semantic information. An arcane scientific neologism, for example, found in only one or two or just a handful of occurrences, will be underdetermined by looking at its actual uses (unless the language has a word for every possible meaning, which English demonstrably does not). But to designate what constitutes all possible uses/contexts of a word (beyond the mere actual uses and contexts) requires first already knowing its meaning, so it seems the Yoneda lemma doesn't help. We still need a semantic relation between word forms and their semantic content.
    How do humans figure out this relationship? For one thing, we know more than the local context of a word. We know so much about the larger context of the word's use as well -- does it appear in a scientific journal about math or a personal email about lunch at an exotic restaurant, e.g.? Given enough context, we don't need multiple contexts to figure out the likely meaning of an unfamiliar word.
    Fascinating discussion, engaging, rich ideas. I want to read the paper!

  • @AndreAmorim-AA
    @AndreAmorim-AA 2 years ago

    Elements

  • @aprylvanryn5898
    @aprylvanryn5898 2 years ago

    Bradley is so much smarter than I am

    • @fs5775
      @fs5775 2 years ago

      Who cares? It's not a competition. Focus on the knowledge, not the personal comparison.

  • @manfredullrich483
    @manfredullrich483 2 years ago

    She already lost me when her die rolled a 4 with a probability of more than 20% - and when they talked about the probability matrices, they never mentioned what the axes of these matrices are.
    Plus, are these only 2-dimensional matrices, or do they have more dimensions?
    I would assume for a six-sided die it's just [1 0 0 0 0 0] [0 1 0 0 0 0] [0 0 1 0 0 0] [0 0 0 1 0 0] [0 0 0 0 1 0] [0 0 0 0 0 1], but I do not see more information here than saying the chance for each number is 1/6, assuming the die is fair.
    And I may not even see that at first view.

    • @manfredullrich483
      @manfredullrich483 2 years ago

      But later it gets easier, because she actually can explain abstract concepts in a more relatable way.
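On the question raised in this thread: in Bradley's work on language, the probability matrix is not the identity-like table for a single die but a joint distribution, whose two axes index pairs of co-occurring items (e.g. a word and the word that follows it). A minimal sketch with a made-up toy corpus (all words and counts here are invented for illustration):

```python
import math
from collections import Counter

# Toy corpus of (first word, second word) co-occurrences -- invented data.
pairs = [("red", "wine"), ("red", "car"), ("blue", "car"),
         ("blue", "sky"), ("red", "wine"), ("blue", "sky")]

rows = sorted({a for a, _ in pairs})   # axis 1: the first word
cols = sorted({b for _, b in pairs})   # axis 2: the word that follows
counts = Counter(pairs)
total = sum(counts.values())

# Joint probability matrix: entry (i, j) = P(first = rows[i], second = cols[j]).
P = [[counts[(r, c)] / total for c in cols] for r in rows]

# Marginal distribution over first words: sum each row.
marginal = {r: sum(P[i]) for i, r in enumerate(rows)}

# Shannon entropy of the joint distribution, in bits.
entropy = -sum(p * math.log2(p) for row in P for p in row if p > 0)
```

Unlike the identity matrix in the comment above, this matrix carries real information: which pairs actually occur together and how often, which is what makes the entropy of the distribution meaningful.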