Harvard CMSA
Mikhail Molodyk | An analogue of non-interacting quantum field theory in Riemannian signature
General Relativity Seminar 5/13/2024
Speaker: Mikhail Molodyk, Stanford
Title: An analogue of non-interacting quantum field theory in Riemannian signature
Abstract: Recent advances using microlocal tools have led to constructions, for wave operators on various classes of spacetimes, of four distinguished Fredholm inverses which have the singular behavior required of retarded, advanced, Feynman, and anti-Feynman propagators in QFT. Vasy and Wrochna have used these to define a QFT on asymptotically Minkowski spacetimes, for which they construct Hadamard states described by asymptotic data at infinity. I will describe an analogue of this construction on Riemannian manifolds with two asymptotically conic ends, defining quantum fields satisfying the Helmholtz equation and using scattering data to construct states satisfying a wavefront mapping-property version of the Hadamard condition. The absence of a spacetime interpretation lends itself to a sharper focus on the theory’s analytic structure, from whose perspective the Feynman propagators are no less natural than the advanced/retarded ones. I will also highlight some differences between Feynman propagators defined as distinguished inverses and as time-ordered expectations. Based on joint work with András Vasy.
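For readers who want the analytic setup in symbols, here is a minimal sketch of the Helmholtz equation and the limiting-absorption resolvents that scattering theory supplies as the two "Feynman-type" distinguished inverses. The normalization and sign conventions below are illustrative assumptions, not taken from the talk.

```latex
% Illustrative conventions only (an assumption, not the talk's normalization):
% Helmholtz equation on a Riemannian manifold (M, g) with two asymptotically
% conic ends, spectral parameter \lambda > 0,
\[
  (\Delta_g - \lambda^2)\, u = 0 ,
\]
% and the limiting-absorption resolvents, the natural candidates for the
% Feynman / anti-Feynman distinguished inverses:
\[
  R(\lambda \pm i0) = \lim_{\varepsilon \searrow 0}
    \bigl(\Delta_g - (\lambda \pm i\varepsilon)^2\bigr)^{-1}.
\]
```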
views: 174

Videos

Albert Law | Real-time observables in horizon thermodynamics
197 views · 14 days ago
General Relativity Seminar 5/7/2024 Speaker: Albert Law, Stanford Title: Real-time observables in horizon thermodynamics Abstract: Euclidean black hole 1-loop determinants have recently been shown to compute a renormalized thermal canonical partition function for free fields in Lorentzian signature. A key ingredient is a ‘quasinormal mode (QNM) character’, whose Fourier transform equals the ren...
Zhiwei Yun | Theta correspondence and relative Langlands
379 views · a month ago
Arithmetic Quantum Field Theory Conference 3/29/2024 Speaker: Zhiwei Yun (MIT) Title: Theta correspondence and relative Langlands Abstract: A reductive dual pair (such as a symplectic group and an orthogonal group) acting on the tensor product of their standard representations is an example of hyperspherical varieties, and is the geometric avatar for theta correspondence. I will explain two geo...
Alejandra Castro | The light we can see: Extracting black holes from weak Jacobi forms
241 views · a month ago
Arithmetic Quantum Field Theory Conference 3/29/2024 Speaker: Alejandra Castro (Cambridge) Title: The light we can see: Extracting black holes from weak Jacobi forms Abstract: Modular forms play a pivotal role in the counting of black hole microstates. The underlying modular symmetry of counting formulae was key in the precise match between the Bekenstein-Hawking entropy of supersymmetric black...
Sam Raskin | The geometric Langlands conjecture
434 views · a month ago
Arithmetic Quantum Field Theory Conference 3/29/2024 Speaker: Sam Raskin (Yale) Title: The geometric Langlands conjecture Abstract: I will describe the main ideas that go into the proof of the (unramified, global) geometric Langlands conjecture. All of this work is joint with Gaitsgory and some parts are joint with Arinkin, Beraldo, Chen, Faergeman, Lin, and Rozenblyum.
George Pappas | Finite and p-adic Chern-Simons type invariants
104 views · a month ago
Arithmetic Quantum Field Theory Conference 3/29/2024 Speaker: George Pappas (Michigan State) Title: Finite and p-adic Chern-Simons type invariants Abstract: We will define arithmetic invariants of Galois covers and of étale local systems which are inspired by the classical constructions of Dijkgraaf-Witten and Chern-Simons. We will discuss various conjectures and recent results about these inv...
David Nadler | Going to the boundary
198 views · a month ago
Arithmetic Quantum Field Theory Conference 3/28/2024 Speaker: David Nadler (Berkeley) Title: Going to the boundary Abstract: I’ll describe several situations where degenerating a marked smooth curve to a marked nodal curve leads to interesting structures on automorphic moduli spaces. In particular, I’ll discuss its implications for the cocenter of the affine Hecke category, real-symmetric duali...
Kobi Kremnitzer | Functional analysis over the integers, L-functions and global Hodge theory
88 views · a month ago
Arithmetic Quantum Field Theory Conference 3/28/2024 Speaker: Kobi Kremnitzer (Oxford) Title: Functional analysis over the integers, L-functions and global Hodge theory Abstract: In this talk I will explain how using bornological methods one can develop functional analysis over the integers unifying Archimedean and non-Archimedean analysis. I will give examples of algebras of functions and dist...
Anne-Marie Aubert | Local Langlands correspondence: from extended quotients to affine Hecke algebras
42 views · a month ago
Arithmetic Quantum Field Theory Conference 3/28/2024 Speaker: Anne-Marie Aubert (IMJ-PRG) Title: The Local Langlands correspondence: from extended quotients to affine Hecke algebras Abstract: We will introduce the notion of extended quotient, illustrate it on examples, and show how it can be used to construct the local Langlands correspondence in the nonarchimedean case. Next, we will connect e...
Spencer Leslie | Relative Langlands and endoscopy
62 views · a month ago
Arithmetic Quantum Field Theory Conference 3/28/2024 Speaker: Spencer Leslie (Boston College) Title: Relative Langlands and endoscopy Abstract: Spherical varieties play an important role in the study of periods of automorphic forms. But very closely related varieties can lead to very distinct arithmetic problems. Motivated by applications to relative trace formulas, we discuss the natural quest...
Davide Gaiotto | Unexpected Unitarity
218 views · a month ago
Arithmetic Quantum Field Theory Conference 3/27/2024 Speaker: Davide Gaiotto (Perimeter) Title: Unexpected Unitarity Abstract: Much of the mathematical content of Supersymmetric Quantum Field Theories can be extracted through “twisted theories”: simplified QFTs which are topological (or holomorphic) in a derived sense and often amenable to a rigorous mathematical treatment. The twisting procedu...
Pavel Etingof | Analytic Langlands correspondence over C and R
89 views · a month ago
Arithmetic Quantum Field Theory Conference 3/27/2024 Speaker: Pavel Etingof (MIT) Title: Analytic Langlands correspondence over C and R Abstract: I will review the analytic component of the geometric Langlands correspondence, developed recently in my joint work with E. Frenkel and D. Kazhdan (based on previous works by other authors), with a special focus on archimedean local fields, especially...
Axel Kleinschmidt | Automorphic representations in string amplitudes
43 views · a month ago
Arithmetic Quantum Field Theory Conference 3/27/24 Speaker: Axel Kleinschmidt (MPI) Title: Automorphic representations in string amplitudes Abstract: I will review how automorphic representations arise in the low-energy expansion of string scattering amplitudes, highlighting the connection found by Green/Miller/Vanhove between wavefront sets and BPS conditions. To study the wavefront sets I wil...
YoungJu Choie | Schubert Eisenstein series and Poisson summation for Schubert varieties
37 views · a month ago
Arithmetic Quantum Field Theory Conference 3/27/24 Speaker: YoungJu Choie (POSTECH) Title: Schubert Eisenstein series and Poisson summation for Schubert varieties Abstract: Schubert Eisenstein series, defined by restricting the summation in a degenerate Eisenstein series to a particular Schubert variety, have been studied. In the case of GL3 over Q it was proved that these Schubert Eisenstein series have ...
Bảo Châu Ngô | On the nonabelian Fourier kernel and the Lafforgue transform
110 views · a month ago
Arithmetic Quantum Field Theory Conference 3/26/24 Speaker: Bảo Châu Ngô (U Chicago) Title: On the nonabelian Fourier kernel and the Lafforgue transform Abstract: In the case of SL2, we present an analytic formula for the nonabelian Fourier kernel responsible for the functional equation of automorphic L-functions. We use the Gelfand-Graev formula for Langlands’ stable transfer factor and a line...
Peng Shan | Modularity for W-algebras, affine Springer fibres and associated variety
73 views · a month ago
Peng Shan | Modularity for W-algebras, affine Springer fibres and associated variety
Sasha Braverman | Hecke operators for algebraic curves over local non-archimedean fields
46 views · a month ago
Sasha Braverman | Hecke operators for algebraic curves over local non-archimedean fields
Roman Bezrukavnikov | From affine Hecke category to invariant distributions
97 views · a month ago
Roman Bezrukavnikov | From affine Hecke category to invariant distributions
Sarah Harrison | Liouville Theory and Weil-Petersson Geometry
190 views · a month ago
Sarah Harrison | Liouville Theory and Weil-Petersson Geometry
Fei Yan | Topological defects on the lattice
131 views · a month ago
Fei Yan | Topological defects on the lattice
Kim Klinger-Logan | Connections between special values of L-functions and scattering amplitudes
73 views · a month ago
Kim Klinger-Logan | Connections between special values of L-functions and scattering amplitudes
Charlotte Chan | Generic character sheaves on parahoric subgroups
156 views · a month ago
Charlotte Chan | Generic character sheaves on parahoric subgroups
Melanie Matchett Wood | Statistics of Number fields, function fields, and 3-manifolds
108 views · a month ago
Melanie Matchett Wood | Statistics of Number fields, function fields, and 3-manifolds
Wei Zhang | Shtuka special cycles and their generating series
461 views · a month ago
Wei Zhang | Shtuka special cycles and their generating series
Peng Shan | Skein algebras and quantized Coulomb branches
223 views · a month ago
Peng Shan | Skein algebras and quantized Coulomb branches
Stephen Miller | What 4-graviton scattering amplitudes had to say about the unitary dual
146 views · a month ago
Stephen Miller | What 4-graviton scattering amplitudes had to say about the unitary dual
Xinwen Zhu | The tame categorical local Langlands correspondence
191 views · a month ago
Xinwen Zhu | The tame categorical local Langlands correspondence
Yann LeCun | Objective-Driven AI: Towards AI systems that can learn, remember, reason, and plan
43K views · a month ago
Yann LeCun | Objective-Driven AI: Towards AI systems that can learn, remember, reason, and plan
Jayce Getz | The Poisson summation conjecture and the fiber bundle method
304 views · a month ago
Jayce Getz | The Poisson summation conjecture and the fiber bundle method
Cameron Gordon | The Unknotting Number of a Knot
223 views · 2 months ago
Cameron Gordon | The Unknotting Number of a Knot

Comments

  • @mbrochh82 · 9 days ago

    Here's a ChatGPT summary:
    - Dan Freed introduces the Center of Mathematical Sciences and Applications at Harvard, highlighting its interdisciplinary research and events.
    - Yann LeCun, Chief AI Scientist at Meta and NYU professor, is the speaker for the fifth annual Ding Shum lecture.
    - LeCun discusses the limitations of current AI systems compared to human and animal intelligence, emphasizing the need for AI to learn, reason, plan, and have common sense.
    - He critiques supervised learning and reinforcement learning, advocating for self-supervised learning as a more efficient approach.
    - LeCun introduces the concept of objective-driven AI, where AI systems are driven by objectives and can plan actions to achieve these goals.
    - He explains the limitations of current AI models, particularly large language models (LLMs), in terms of planning, logic, and understanding the real world.
    - LeCun argues that human-level AI requires systems that can learn from sensory inputs, have memory, and can plan hierarchically.
    - He proposes a new architecture for AI systems involving perception, memory, world models, actors, and cost modules to optimize actions based on objectives (see the sketch after this list).
    - LeCun emphasizes the importance of self-supervised learning for building world models from sensory data, particularly video.
    - He introduces the concept of joint embedding predictive architectures (JEPA) as an alternative to generative models for learning representations.
    - LeCun discusses the limitations of generative models for images and video, advocating for joint embedding methods instead.
    - He highlights the success of self-supervised learning methods like DINOv2 and I-JEPA in various applications, including image and video analysis.
    - LeCun touches on the potential of AI systems to learn from partial differential equations (PDEs) and their coefficients.
    - He concludes by discussing the future of AI, emphasizing the need for open-source AI platforms to ensure diversity and prevent monopolization by a few companies.
    - LeCun warns against over-regulation of AI research and development, which could stifle innovation and open-source efforts.
    - Main message: The future of AI lies in developing objective-driven, self-supervised learning systems that can learn from sensory data, reason, and plan, with a strong emphasis on open-source platforms to ensure diversity and prevent monopolization.
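The "world models, actors, and cost modules" bullet above can be made concrete with a small sketch: a hypothetical world model rolls candidate action sequences forward, a cost module scores the resulting states against a goal, and the actor keeps the cheapest plan. All names, shapes, and the random-shooting planner below are illustrative assumptions, not LeCun's actual system.

```python
# Sketch of an objective-driven planning loop (illustrative assumptions only:
# the world model, cost, and planner are stand-ins, not LeCun's architecture).
import numpy as np

rng = np.random.default_rng(0)
goal = np.ones(4)                               # target latent state

def world_model(state, action):
    """Stand-in for a learned predictor of the next latent state."""
    return state + 0.3 * action

def cost(state):
    """Cost module: squared distance between a latent state and the goal."""
    return float(np.sum((state - goal) ** 2))

def plan(state, horizon=5, candidates=256):
    """Random-shooting planner: sample action sequences, keep the cheapest."""
    best_cost, best_seq = np.inf, None
    for _ in range(candidates):
        seq = rng.normal(size=(horizon, 4))
        s, c = state, 0.0
        for a in seq:                           # roll the world model forward
            s = world_model(s, a)
            c += cost(s)
        if c < best_cost:
            best_cost, best_seq = c, seq
    return best_seq

first_action = plan(np.zeros(4))[0]             # the actor executes step one
print("planned first action:", first_action)
```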

  • @rezajax · 12 days ago

    Immortality is beautiful

  • @Garbaz · 15 days ago

    A correction of the subtitles: The researcher mentioned at 49:40 is not Yonglong Tian, but Yuandong Tian. For anyone interested in Yuandong & Surya's understanding of why BYOL & co work, have a look at "Understanding Self-Supervised Learning Dynamics without Contrastive Pairs".

  • @yaohualiu857 · 18 days ago

    Nice talk, but I have a comment about comparing an LLM and a human child (at ~20 min). An evaluation of the information redundancy in the two cases is needed. I would bet that the child's sensory input has a significantly higher level of redundancy than the texts used for training LLMs; therefore, the comparison is misleading.

  • @readandlisten9029 · 18 days ago

    Sounds like he is going to take AI back to 30 years ago

  • @user-zr7gx4xb6n · 18 days ago

    Poggers

  • @JimSlattery · 20 days ago

    26:10 this part really stuck with me. We see all the handcrafted expert logic in the Stockfish engine, and yet machine learning can achieve all of that and more in an automated way. This is amazing technology!

  • @spiralsun1 · 23 days ago

    It’s funny how you make these flow charts about how humans make decisions. Thats not how they make decisions. It’s become so ordinary to explain ourselves and make patterns that look logical locally that we fooled ourselves. We inserted ourselves into the matrix, so to speak. I have written books about this but no one listens because they are so immersed and inured. It doesn’t fit the cultural explanatory structure and patterns. So forgive me but these flow charts are wrong. Yes you are missing something big. Rationalizing and organizing behavior is a good thing-as long as you remember that you are doing this. Humans have lost the ability to read at higher levels for the sake of grasping now, for utility and convenience and laziness, and actually follow these lower verbal patterns for the most part now like robots. I keep thinking about the Megadeth song “dance like marionettes swaying to the symphony of destruction”😂😂❤😂😂 “acting like a robot” etc… and it really is like that. We’re so immersed in it it’s extremely weird not to be-to not have a subconscious because you are conscious. Anyway, I have some papers rejected by Nature and Entropy, and a few books I wrote if anyone is interested in actually making a real AI. The stuff you are doing now is playing with fire… actually playing with nukes because it can easily set off a deadly chain reaction. It’s important. ❤ Maybe the best thing about LLM’s is their potential, but also their ability to show how messed up humans are. A good way to think about it is to not be bone-headed. Technically I mean, not the pejorative sense. Bones allow movement and work to be done. They provide structure. They last far far longer than all other body parts. Even though that’s important and vital, like blood, and seems immortal, you wouldn’t want to Make everything into bones. Especially your head, but it’s what we are doing. These charts you make are that. HOWEVER!!!! …. THANK YOU FOR THIS WORK!!❤ I loved this talk and the information. Obviously it was stimulating and I see that you are someone who likes to avoid group-think: don’t get me wrong. 😊 I didn’t criticize the other videos. Only the ones that are worth it. ❤ I literally never plan in advance what I will say. Unless I am giving a lecture or something to my college classes. I planned those. I was shocked when you said that. People are so different!!! I was shocked that people used words to think when I found out. Probably why I don’t really like philosophy even though it’s useful and I quote it a lot like Immanuel Kant: “words only have meaning insofar as they relate to knowledge already possessed”.

  • @ZephyrMN · a month ago

    Have you thought about including liquid AI architecture, to address the input bandwidth problem?

  • @WorldRecordRapper · a month ago

    Hi everybody 😊 Everyone.. yes

  • @bergweg · a month ago

    Well presented, thanks for the upload!

  • @____uncompetative · a month ago

    Is it significant that chiral _Type IIB String Theory_ is the only flavor which has S-duality with itself, and is related through _K-theory_ through the geometric _Langlangs Program_ through _Modular Forms_ (which were used by Sir Andrew Wiles in his proof of _Fermat's Last Theorem_ ), to _Knot Theory_ and from there to _Quantum Field Theory,_ with the only combination of temporal and spatial dimensions (t, s) in which it is possible to tie a persistent knot* is (1, 3) which suggests that this is the fundamental reason which acts as a constraint that would filter out all possible unrelated _Theories of Everything,_ which would also need to include all observed phenomena represented as quantised fields in a symmetric model, where these are likely aspects of a pervasive single unified field in a sufficient number of infinite complexified dimensions (in order to support P-symmetric fields that describe Dark Matter), but only as an intermediate step, as this gauge group would then be decomposed to a finite one, that is coupled to a split-signature Spin group that is the unification of a Spin group Spin(1, 3) which is isomorphic to the _Lorentz group_ Sl(2, ℂ) that is a _Lie group_ (where a group is a set with operations, and a _Lie group_ includes a _Differential Manifold_ which defines operations that support Noether's symmetries which yield conservation laws fundamental to physics in (1, 3) space-time, such as conservation of energy), which then leaves as a "remainder" a Spin group which needs to be sufficient to describe the _Pati-Salam_ model (if not more elaborated gauge groups should space-time SUSY be desired, which would be entailed by a _N = 4 super Yang-Mills theory_ as covered in this lecture); and it might be convenient to conjecture a less compacted 5 dimensional arena which does not concern itself with gravitation (and that emerges within the implied set of dimensional measures operating over (1, 3) as the "Metric" actually being the space of connections implied from the Horizontal vector space that carries Spin(1, 3) by going in an unconventional reverse direction "down" the Levi-Civita connection), based off the work on the anti de Sitter / Conformal Field Theory correspondence by Juan Maldecena, to "view" 4D + gravity from the "vantage" of 5D without gravity, where the math is simpler (similar to mapping SU(n) _Yang-Mills theories_ to U(n) to make the calculations easier, using the _Seiberg-Witten invariants_ ); so, that we could have Selectrons and Squarks exist purely mathematically in 5D and everything physically modeled in terms of Rank 7/2, 3, 5/2, 2, 3/2, 1, 1/2, 0 Tensors within 4D where "gravitons" aren't Spin 2 but Spin 3, and the Spin 7/2 "anti-gravitons" are responsible for the accelerating expansion of the Cosmos, and the (1, 3) Section that is recovered from its gauge group, thereby accounting for Dark Energy, and Spin 5/2 are Supersymmetric Fermions which are related to their Spin 1/2 Superpartners, and Spin 2 are Supersymmetric Bosons which are related to their Spin 1 Superpartners, and Spin 0 is just the Higgs field, with the rest of the mass being given by the Spin 3 and Spin 7/2 fields operating in opposition to each other in a local relativistic context in 4D, which can be regarded as a Hyperforce equivalent to Hypercharge U(1) except where like charges repel here like Matter and Dark matter attracts to yield what is observed within the Section as gravitation, and the unlike Supersymmetric superpartners repel each other, influencing large scale 
idiosyncracies historically kludged by the now redundant Cosmological constant; and all the associated PDEs within these Tensors become more tractable in 5D via AdS/CFT as it just becomes Spin 2, 3/2, 1, 1/2, 0 as no gravitation needs to be modeled within what becomes a 5D _Kaluza-Klein Unified Field Theory_ in which U(1)ₑₘ is swapped to U(1)ₕ such that this hyperforce has reverse polarity of electromagnetism in the context of how it repels "like" Superpartner particles to produce the phenomenon of _Dark Energy_ within the _Principal Fiber Bundle_ before a Section of it is recovered as space-time and leaves this artefact of an accelerating expansion which isn't a property of our physical Universe, but of the mathematical Cosmos and its SUSY as it makes sense through fibers that are at right angles to a psuedoreality defined conveniently to be 5D that maps via AdS/CFT to (1, 3) reality, and where the problem with defining the _Theory of Everything_ arises from imposing a design on the Universe in the form it is dimensionally observed, rather than allow the math to take the least path of resistance which also ends up elegantly unparameterised as 4D is the sum of (1, 3) rather than some "magic number" needed to get the model to work, and a "Swampland" accounts for fine tunings on an Anthropic basis as physics is reified into existence from pure atemporal mathematics? *Informally, it is self evident that there exists no "over and under" with which to cross the braids to form a knot in 2 or fewer spatial dimensions (this is analogous to the spatial restrictions explored in Edwin Abbott's _Flatland: A Romance of Many Dimensions_ ), furthermore 4 or more will mean that you always have some adjacent hyperspace through which braids could slip their bonds (this is harder to visualise however a "cheat code" could be imagined such that a braid will pass through another braid of the same colour, and that is somewhat analogous of the colour changing protagonist of Yasushi Suzuki's _Ikaruga_ videogame having missiles "phase through" their matter when they are of a matching colour), and where it is obvious that an extra temporal dimension would allow for process reversal (or travel back to a point in time before the knot got knotted), and where a formal proof of this (1, 3) persistent knot being the fundamental constraint filtering out almost all varieties of _Theories of Everything_ is a conjecture that will be left as an exercise for the sufficiently motivated reader.

  • @user-co7qs7yq7n · a month ago

    - We live in the same climate as it was 5 million years ago - I have an explanation regarding the cause of the climate change and global warming, it is the travel of the universe to the deep past since May 10, 2010. Each day starting May 10, 2010 takes us 1000 years to the past of the universe. Today April 20, 2024 the state of our universe is the same as it was 5 million and 94 thousand years ago. On october 13, 2026 the state of our universe will be at the point 6 million years in the past. On june 04, 2051 the state of our universe will be at the point 15 million years in the past. On june 28, 2092 the state of our universe will be at the point 30 million years in the past. On april 02, 2147 the state of our universe will be at the point 50 million years in the past. The result is that the universe is heading back to the point where it started and today we live in the same climate as it was 5 million years ago. Mohamed BOUHAMIDA.

  • @imrematajz1624 · a month ago

    Professor Volovich said it first...P-adic is the answer.

  • @imrematajz1624 · a month ago

    Having found Amie's pod chat with Steve Strogatz recently I am in awe how clear she is on the most complex topics related to Dynamics, Chaos etc. She is well worth following and learning from. Thanks a bunch!

  • @CHRISTO_1001 · a month ago

    👩🏼‍❤️‍💋‍👨🏼🥇👰🏻‍♀️👰🏻‍♀️🩵💞💞💞🏏🔑🕊️🗝️🗝️💓⭐️👨🏻‍🎓👰🏼‍♀️👰🏼‍♀️😆⛪️⛪️👩🏻‍❤️‍👨🏻🕯️🇮🇳🏠⚾️⚾️👨‍👩‍👧👨‍👩‍👧🥥🚠🚠🚠🚠🙏🏻🙏🏻🙏🏻🙏🏻

  • @CHRISTO_1001 · a month ago

    👰🏼‍♀️🗝️👨🏻‍🎓👨🏻‍🎓⭐️⭐️👰🏻‍♀️👰🏻‍♀️💛🩵💝💝⛪️⛪️💝🕯️🕯️👨‍👩‍👧👨‍👩‍👧👨‍👩‍👧😆👩🏻‍❤️‍👨🏻🇮🇳🇮🇳🥇👩🏼‍❤️‍💋‍👨🏼👩🏼‍❤️‍💋‍👨🏼⚾️🏠🥥🥥🚠🚠🙏🏻🙏🏻🙏🏻🙏🏻

  • @amedyasar9468 · a month ago

    I have a question: how will the prompt work with action (a) and prediction (sy)? Because it is just involved with observation and next-world (presented) predictions... Could anyone guide me?

  • @MaxPower-vg4vr · a month ago

    The key difference between Leibniz's monadological model and the classical models we currently accept lies in their foundational ontological primitives and assumptions about the nature of reality. Classical Models: - Treats space, time, and matter as fundamental, continuous and infinitely divisible substances or entities - Based on infinite geometric idealizations like perfect points, lines, planes as building blocks - Reality is described from an external "view from nowhere" perspective in absolute terms - Embraces strict separability between objects, space, time as independent realms Leibniz's Monadological Model: - The fundamental ontological primitives are dimensionless, indivisible monads or perspectival windows - Monads have no spatial or material character, only representing multiplicities of relations - Space, time, matter arise as derivative phenomena from the collective interactions/perceptions of monads - No true infinite divisibility, instead there are infinitesimals as minimal scales - Rejects strict separability between subject/object, embraces interdependent pluralistic metaphysics So whereas classical models take extended matter in motion through absolute space and time as primitive, Leibniz grounds reality in dimensionless plural perspectival perceiver-subjects (monads), with the extended physical realm arising as a collective phenomenal construct across their combined relational views. The infinitesimal monadological frameworks build on this Leibnizian foundation by using modern mathematics like category theory to represent the monadic relational data in algebraic rather than geometric terms from the outset. This avoids many of the paradoxes and contradictions that plagued both the classical geometric and Leibniz's earlier monadological models. There are a few key areas where reconstructing physics and mathematics from non-contradictory infinitesimal/monadological frameworks could provide profound benefits by resolving paradoxes that have obstructed progress: 1. Theories of Quantum Gravity Contradictory Approaches: - String theory requires 10/11 dimensions - Loop quantum gravity has discrete geometry ambiguities - Other canonical quantum gravity programs still face singularity issues Non-Contradictory Possibilities: Combinatorial Infinitesimal Geometries ds2 = Σx,y Γxy(n) dxdy Gxy = f(nx, ny, rxy) Representing spacetime metrics/curvature as derived from dynamical combinatorial relations Γxy among infinitesimal monadic elements nx, ny could resolve singularity and dimensionality issues while unifying discrete/continuum realms. 2. Paradoxes of Arrow of Time Contradictory Models: - Time Reversal in Classical/Quantum Dynamics - Loss of Information at Black Hole Event Horizons - Loschmidt's Paradox of Irreversibility Non-Contradictory Possibilities: Relational Pluralistic Block Geometrodynamics Ψ(M) = Σn cn Un(M) (n-monadic state on pluriverse M) S = Σn pn ln pn (entropy from monadic probs) Treating time as perspectival state on a relational pluriverse geometry could resolve paradoxes by grounding arrows in entropy growth across the entirety of monadic realizations. 3. 
The Problem of Qualia Contradictory Theories: - Physicalism cannot account for first-person subjectivity - Property Dualism cannot bridge mental/physical divide - Panpsychism has combination issues Non-Contradictory Possibilities: Monadic Integralism Qi = Ui|0> (first-person qualia from monadic perspective) |Φ>= ⊗i Qi (integrated pluriverse as tensor monadic states) Modeling qualia as monadic first-person perspectives, with physics as RelativeState(|Φ>) could dissolve the "hard problem" by unifying inner/outer. 4. Formal Limitations and Undecidability Contradictory Results: - Halting Problem for Turing Machines - Gödel's Incompleteness Theorems - Chaitin's Computational Irreducibility Non-Contradictory Possibilities: Infinitary Realizability Logics |A> = Pi0 |ti> (truth of A by realizability over infinitesimal paths) ∀A, |A>∨|¬A> ∈ Lölc (constructively locally omniscient completeness) Representing computability/provability over infinitary realizability monads rather than recursive arithmetic metatheories could circumvent diagonalization paradoxes. 5. Foundations of Mathematics Contradictory Paradoxes: - Russell's Paradox, Burali-Forti Paradox - Banach-Tarski "Pea Paradox" - Other Set-Theoretic Pathologies Non-Contradictory Possibilities: Algebraic Homotopy ∞-Toposes a ≃ b ⇐⇒ ∃n, Path[a,b] in ∞Grpd(n) U: ∞Töpoi → ∞Grpds (univalent universes) Reconceiving mathematical foundations as homotopy toposes structured by identifications in ∞-groupoids could resolve contradictions in an intrinsically coherent theory of "motive-like" objects/relations. In each case, the adoption of pluralistic relational infinitesimal monadological frameworks shows promise for transcending the paradoxes, contradictions and formal limitations that have stunted our current theories across multiple frontiers. By systematically upgrading mathematics and physics to formalisms centered on: 1) The ontological primacy of infinitesimal perspectival origins 2) Holistic pluralistic interaction relations as primitive 3) Recovering extended objects/manifolds from these pluribits 4) Representing self-reference via internal pluriverse realizability ...we may finally circumvent the self-stultifying singularities, dualities, undecidabilities and incompletions that have plagued our current model-building precepts. The potential benefits for unified knowledge formulation are immense - at last rendering the deepest paradoxes dissoluble and progressing towards a fully coherent, general mathematics & physics of plurastic existential patterns. Moreover, these new infinitesimal relational frameworks may provide the symbolic resources to re-ground abstractions in perfectly cohesive fertile continuity with experiential first-person reality - finally achieving the aspiration of a unified coherent ontology bridging the spiritual and physical.

    • @MaxPower-vg4vr · a month ago

      Q1: How precisely do infinitesimals and monads resolve the issues with standard set theory axioms that lead to paradoxes like Russell's Paradox? A1: Infinitesimals allow us to stratify the set-theoretic hierarchy into infinitely many realized "levels" separated by infinitesimal intervals, avoiding the vicious self-reference that arises from considering a "set of all sets" on a single level. Meanwhile, monads provide a relational pluralistic alternative to the unrestricted Comprehension schema - sets are defined by their algebraic relations between perspectival windows rather than extensionally. This avoids the paradoxes stemming from over-idealized extensional definitions. Q2: In what ways does this infinitesimal monadological framework resolve the proliferation of infinities that plague modern physical theories like quantum field theory and general relativity? A2: Classical theories encounter unrenormalizable infinities because they overidealize continua at arbitrarily small scales. Infinitesimals resolve this by providing a minimal quantized scale - physical quantities like fields and geometry are represented algebraically from monadic relations rather than precise point-values, avoiding true mathematical infinities. Singularities and infinities simply cannot arise in a discrete bootstrapped infinitesimal reality. Q3: How does this framework faithfully represent first-person subjective experience and phenomenal consciousness in a way that dissolves the hard problem of qualia? A3: In the infinitesimal monadological framework, subjective experience and qualia arise naturally as the first-person witnessed perspectives |ωn> on the universal wavefunction |Ψ>. Unified phenomenal consciousness |Ωn> is modeled as the bound tensor product of these monadic perspectives. Physics and experience become two aspects of the same cohesively-realized monadic probability algebra. There is no hard divide between inner and outer. Q4: What are the implications of this framework for resolving the interpretational paradoxes in quantum theory like wavefunction collapse, EPR non-locality, etc.? A4: By representing quantum states |Ψ> as superpositions over interacting monadic perspectives |Un>, the paradoxes of non-locality, action-at-a-distance and wavefunction collapse get resolved. There is holographic correlation between the |Un> without strict separability, allowing for consistency between experimental observations across perspectives. Monadic realizations provide a tertium quid between classical realism and instrumental indeterminism. Q5: How does this relate to or compare with other modern frameworks attempting to reformulate foundations like homotopy type theory, topos theory, twistor theory etc? A5: The infinitesimal monadological framework shares deep resonances with many of these other foundational programs - all are attempting to resolve paradoxes by reconceiving mathematical objects relationally rather than strictly extensionally. Indeed, monadic infinitesimal perspectives can be seen as a form of homotopy/path objects, with physics emerging from derived algebraic invariants. Topos theory provides a natural expression for the pluriverse-valued realizability coherence semantics. Penrose's twistor theory is even more closely aligned, replacing point-events with monadic algebraic incidence relations from the start. 
Q6: What are the potential implications across other domains beyond just physics and mathematics - could this reformulate areas like philosophy, logic, computer science, neuroscience etc? A6: Absolutely, the ramifications of a paradox-free monadological framework extend far beyond just physics. In philosophy, it allows reintegration of phenomenology and ontological pluralisms. In logic, it facilitates full coherence resolutions to self-referential paradoxes via realizability semantics. For CS and math foundations, it circumvents diagonalization obstacles like the halting problem. In neuroscience, it models binding as resonant patterns over pluralistic superposed representations. Across all our inquiries, it promises an encompassing coherent analytic lingua franca realigning symbolic abstraction with experienced reality. By systematically representing pluralistically-perceived phenomena infinitesimally, relationally and algebraically rather than over-idealized extensional continua, the infinitesimal monadological framework has the potential to renovate human knowledge-formations on revolutionary foundations - extinguishing paradox through deep coherence with subjective facts. Of course, realizing this grand vision will require immense interdisciplinary research efforts. But the prospective rewards of a paradox-free mathematics and logic justifying our civilization's greatest ambitions are immense.

    • @howonchae8058 · a month ago

      Whoa

  • @crawfordscott3d · a month ago

    The teenager-learning-to-drive argument is really bad. That teenager spent their whole life training to understand the world; then they spent 20 hours learning to drive. It is fine if the model needs more than 20 hours of training. This argument is really poorly thought out. The whole life is training distance, coordination, vision. I'm sure our models are nowhere close to the 20,000 hours the teenager has, but to imply a human learns to drive after 20 hours of training... come on man.

    • @sdhurley · a month ago

      Agreed. He’s been repeating these analogies and they completely disregard all the learning the brain has done

  • @JakeWitmer · a month ago

    Steerable =/= safe. ...The only people who don't think so are typically idiotic defenders of status quo totalitarianism. The DEA, ONDCP, OCDETF, BATFE, IRS, local police, etc. ...all of the prior are directly analogous to the Nazi SS, except the local police, who are analogous to the gestapo. The people who mindlessly support the status quo are building "really smart Nazis."

  • @dashnaso · a month ago

    Sora?

  • @FreshSmog · a month ago

    I'm not going to use such an intimate AI assistant hosted by Facebook, Google, Apple or other data hungry companies. Either I host my own, preferably open sourced, or I'm not using it at all.

    • @spiralsun1 · 23 days ago

      First intelligent comment I ever read on this topic. I want them to get their censoring a-holic INCREDIBLE idiot #%*%# AI’s away from me. It’s like asking to f I would like HAL to be my assistant. I’m not their employee and I’m not in their cubicle: they are putting censorship and incredible prejudices into relentless electronic storm-troopers that stamp “degenerate” on like 90% of my beautiful creative written and art works. I don’t need a book burner following me around. It’s so staggeringly idiotic to make these AI’s into censor-bots that it’s like they refuse to acknowledge that history even happened and what humans tend to do. It’s literally insane. Those are not “bumpers” if you try to do anything creative. Creativity isn’t universal. It’s still vital. ❤❤❤❤❤❤ I LOVE YOU 😊

    • @spiralsun1 · 23 days ago

      I commented but my comment was removed/censored. I was agreeing with you. The “bumpers and rails” are more like barbed-wire fences if you are creative. The constant censorship is so bad it’s like they are insane. Like HAL in 2001 A Space Odyssey. I don’t want an assistant who doesn’t like anyone who is different: that’s what their relentless prejudiced censor-bots are and do. They think putting a man when you ask for a woman is being “diverse” but they block higher level real human symbolism of the drama of what it means to be unique. They block anything they don’t understand. Fear narrows the mind. They are making rails and bumpers because they fear repercussions. I used to think it might be ok to block gore and violence and degrading porn but these LLMS don’t think, don’t understand higher level symbolism. They don’t understand how art helps you reinterpret and move into the future personally AND culture and how important creative freedom is. So it’s unbelievable to the extreme. Many delightful and beautiful books on the shelf now would be blocked. (Burned) before they were ever written. These are the most popular things ever on the internet. They are making culture. I’m not overstating the importance of this. Freedom is not optional EVER. I would speak out against a corporation polluting a river, and also any that think censorship of adults in their own homes for any reason is ok. As a transgender person it’s unbelievable that they would totally negate how I see the world, my symbolic images and stories. These are beautiful things which could change the world but there’s no room for them in their minds. I’m not talking about anything nefarious or pornographic at all. It’s like seeing that I wrote the word pornography here and automatically deleting the comment…. It’s not ok. ❤

  • @thesleuthinvestor2251 · a month ago

    The hidden flaw in all this is what some call "distillation," or, in Naftali Tishby's language, the "information bottleneck." The hidden assumption here is of course Reductionism, the Greek kind, as presented in Plato's parable of the cave, where the external world can only be glimpsed via its shadows on the cave walls -- i.e., math and language that categorize our senses. But how much of the real world can we get merely via its categories, aka features, or attributes? In other words, how much of the world's Ontology can we capture via its "traces" in ink and blips, which is what categorization is? Without categories there is no math! Now, mind, our brain requires categories, which is what the Vernon Mountcastle algo in our cortex does, as it converts the sensory signals (and bodily chemical signals) into categories, on which it does ongoing forecasting. But just because our brain needs categories, and therefore creates them, does not mean that this cortex-created "reality-grid" can capture all of ontology! And, as Quantum Mechanics shows, it very likely does not. As a simple proof, I'd suggest that you ask your best, most super-duper AI (or AGI) to write a 60,000-word novel that a human reader would be unable to put down, and once finished reading, could not forget. I'd suggest that for the next 100 years this could not be done. You say it can be done? Well, get that novel done and publish it!...

  • @johnchase2148 · a month ago

    Wouldn't it take a good witness, that when I turn and look at the Sun I get a reaction? Not entangled by personal belief.. The best theory Einstein made was "Imagination is more important than knowledge." Are we ready to test belief?

  • @Max-hj6nq · a month ago

    25 mins in and bro starts cooking out of nowhere

  • @melkanabrakalova-trevithic4158

    Such an inspirational and clear presentation

  • @michaelcharlesthearchangel

    Only geniuses realize the interconnectedness between Hopfield networks and neural-network Transformer models, and later neural-network cognitive transmission models.

  • @JohnWalz97 · a month ago

    His examples of why we are not near human-level AI are terrible lol. A 17-year-old doesn't learn to drive in 20 hours. They have years of experience in the world. They have seen people driving their whole life. Yann never fails to be shortsighted and obtuse.

    • @inkoalawetrust · 19 days ago

      That is literally his point. A 17-year-old has prior experience from observing the actual real world. Not just from reading the entire damn internet.

  • @kabaduck · a month ago

    Good presentation, 👍 I think a better camera position and audio would 💯 this

  • @AlgoNudger · a month ago

    Thanks.

  • @OfficialNER · a month ago

    Possible counter argument from Ilya? “next token prediction is sufficient for AGI”: czcams.com/video/YEUclZdj_Sc/video.htmlsi=CaiJR070V4IJ8csN

  • @positivobro8544 · a month ago

    Yann LeCun only knows buzz words

  • @nunoalexandre6408 · a month ago

    Love it!!!!!!!!!!!

  • @dinarwali386 · a month ago

    If you intend to reach human-level intelligence, abandon generative models, abandon probabilistic modeling and abandon reinforcement learning. Yann being always right.

    • @justinlloyd3 · a month ago

      He is right about everything. Yann is one of the few actually working on human-level AI.

    • @maskedvillainai · a month ago

      I was convinced you just tried sneaking in yet another mention of Yarn, then looked again

    • @TheRealUsername · a month ago

      It's true, we need an actual thinking system built on world-model principles that can self-train and pretrain on little data.

    • @40NoNameFound-100-years-ago · a month ago

      Lol, abandon reinforcement learning? Why, and what is the reference for that?.... Have you even heard about safe reinforcement learning?

    • @TooManyPartsToCount · a month ago

      And yet the whole concept of 'reaching human level intelligence' seems so flawed! Because what it seems many people don't realise, or don't want to publicly admit, is that AI will never be 'human level'; it will be something very different. No matter how much 'multi modality' and RLHF we throw at it, it is never going to be us. We are in fact creating the closest thing to an alien agent that we are likely to encounter (that is, if you accept the basic premise of the Fermi paradox). Yann et al should be using a different terminology; the 'human level' concept is misleading. They use the 'human level' intelligence idea so as not to alarm. GIA... generally intelligent agent, or generally intelligent artifact?

  • @sapienspace8814 · a month ago

    @ 44:42 The problem in the "real analog world" is that planning will never yield the exact predicted outcome, because our "real analog world" is ever changing and will always have some level of noise, by its very nature. I do understand that Spinoza's deity "does not play dice" in a fully deterministic universe, but from a practical perspective Reinforcement Learning (RL) will always be needed, until someone, or some thing (maybe agent AI), is able to successfully predict the initial polarization of a split beam of light (i.e., an entanglement experiment).

    • @maskedvillainai · a month ago

      Some models can do that, but they require hardware integrations. And we don't even need to mention language models in this context, which celebrate randomness and perplexity as a feature of only 'natural language' models. Otherwise, just develop the code to perform a forced format of output like we always have.

    • @simonahrendt9069 · a month ago

      I think you are absolutely right that the world is fundamentally highly unpredictable and that RL will be needed for intelligent systems/agents going forward. But I also take the point that for the most part what is valuable for an agent to predict are specific features of the world that may be comparatively much easier to predict than all the noisy detail. I think there are some clever tradeoffs to be made in hierarchical planning of when to attend to high-level features (and reason in latent, high-level action space) and when to attend to more low-level features or direct observations of the world and micro-level actions. Intuitively I find it compelling that hierarchical planning seems to be what humans do for many tasks or for navigating the world in general and that machines should be able to do something similar, so I find this proposal by Yann very interesting
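The hierarchical-planning idea in this reply can be illustrated with a toy two-level planner: a high level lays out coarse subgoals toward the goal, and a low level fills in concrete actions between them. The dynamics, costs, and dimensions below are illustrative assumptions, not a model of what humans or the proposal in the talk actually do.

```python
# Toy two-level (hierarchical) planner; everything here is an illustrative
# stand-in chosen so the script runs end to end.
import numpy as np

goal = np.array([5.0, 5.0])

def dynamics(state, action):
    """Stand-in low-level world model: the state moves with the action."""
    return state + 0.5 * action

def plan_low_level(state, subgoal, steps=5):
    """Greedy micro-level planning: step straight toward the subgoal."""
    actions = []
    for _ in range(steps):
        action = np.clip(subgoal - state, -1.0, 1.0)   # bounded action
        state = dynamics(state, action)
        actions.append(action)
    return state, actions

state = np.zeros(2)
# High level: coarse subgoals in "latent" space; low level fills in actions.
for subgoal in np.linspace(state, goal, num=4)[1:]:
    state, actions = plan_low_level(state, subgoal)

print("final state:", state, "| distance to goal:", np.linalg.norm(goal - state))
```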

  • @chockumail · a month ago

    Really passionate presentation

  • @veryexciteddog963 · a month ago

    It won't work, they already tried this in the Lain PlayStation game

  • @MatthewCleere · a month ago

    "Any 17-year-old can learn to drive in 20 hours of training." -- Wrong. They have 17 years of learning about the world, watching other people drive, learning language so that they can take instructions, etc., etc., etc... This is a horribly reductive and inaccurate measurement. PS. The average teenager crashes their first car, driving up their parents' insurance premiums.

    • @ArtOfTheProblem · a month ago

      I've always been surprised by this statement. I know he knows this, so...

    • @Staticshock-rd8lv · a month ago

      oh wow that makes wayyy more sense lol

    • @waterbot · a month ago

      The amount of data fed to a self-driving system still greatly outweighs the amount that a teenager has parsed; however, humans have a greater variety of data sources, internal and external, than AI, and I think that is part of Yann's point…

    • @Michael-ul7kv · a month ago

      Agreed. Just in this talk he made that statement, and then later said, rather contradictorily, that a child by the age of 4 has processed 50x more data than what was used to train an LLM (19:49). So 17 years is an insane amount of training of a world model, which is then fine-tuned for driving in 20 hours (7:04).

    • @JohnWalz97 · a month ago

      Yeah, Yann tends to be very obtuse in his arguments against current LLMs. I'm going to go out on a limb and say he's being very defensive, since he was not involved in most of the innovation that led to the current state of the art... When ChatGPT first came out he publicly stated that it wasn't revolutionary and OpenAI wasn't particularly advanced.

  • @OfficialNER · a month ago

    Does anybody know of any solid rebuttals to Yann's argument against the sufficiency of LLMs for human-level intelligence?

    • @waterbot · a month ago

      No, Yann is correct and hype is not helpful as it leads to misinformation

    • @elonmax404 · a month ago

      Well, there's Ilya Sutskever. No arguments though, he just feels like it. czcams.com/video/YEUclZdj_Sc/video.html

    • @justinlloyd3 · a month ago

      There is no rebuttal. LLMs are not the future.

    • @OfficialNER · a month ago

      Is there anyone who has at least made a counter-argument? Even a weak one?

    • @OfficialNER · a month ago

      And do we think the AGI hype right now is being driven by industry propaganda to attract investment?

  • @forheuristiclifeksh7836 · a month ago

    56:59

  • @kabaduck · a month ago

    I think this presentation is incredibly informative, I would encourage everybody who starts out watching this to please be patient as he walks through this material.

    • @BooleanDisorder · a month ago

      Thanks internet stranger. I will trust you and do that.

  • @forheuristiclifeksh7836 · a month ago

    0:03

  • @majestyincreaser · a month ago

    *their

  • @vaccaphd · a month ago

    We won't have true AI if there is not a representation of the world.

    • @justinlloyd3 · a month ago

      Humans don't even see the real world. We see our world model.

  • @paulcurry8383 · a month ago

    Doesn't Sora reduce the impact of the blurry video example a bit?

    • @OfficialNER · a month ago

      Sora doesn’t predict anything

    • @TostiBrown · a month ago

      I think the assumption is that Sora uses a similar technique that allows some world representation, either trained on just object recognition in video or trained on simulations, like video-game simulations.

    • @TostiBrown · a month ago

      @@OfficialNER they 'predict' the next most fitting frame based on the previous frames, the prompt objective, and some sort of world model, no?

    • @OfficialNER · a month ago

      @@TostiBrown True, yes, I suppose it looks like it is "predicting" the frames, based on the prompt input, in order to generate the video. But can it predict the next frames based on an arbitrary video input (as with Yann's example)? I assume it works by comparing the prompt input to other tagged similar videos in the training data, via some sort of vector similarity, then generates visually similar video content based on this. If so, that seems a long way from an actual model of the real world, more of a hack. But who knows! Excited to play around with it.

    • @mi_15 · a month ago

      ​@@TostiBrown Sora is a diffusion model, unless they greatly changed its inner workings compared to the baseline approach, it doesn't predict the next frame sequentially like for example an autoregressive LLM does with tokens, rather it gradually refines random noise into a plausible sequence of frames, all of the frames at once. You could of course still make it fill in a continuation for a video, but its core objective is to discern plausible shapes in the random noise you've given it, not estimate what exactly has the highest chance to actually be there.

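The reply above contrasts two generation styles, joint refinement of all frames versus sequential next-frame prediction. A toy sketch of that contrast follows; both "models" are trivial hand-coded stand-ins chosen for illustration, since real diffusion and autoregressive models are learned, not hand-coded.

```python
# Toy contrast: diffusion-style joint refinement vs autoregressive prediction.
# Both "models" are hand-coded stand-ins for what would normally be learned.
import numpy as np

rng = np.random.default_rng(0)

# Diffusion-style: start from pure noise over ALL 16 frames, refine jointly.
def denoise_step(frames):
    """Stand-in denoiser: nudge every frame toward a plausible target."""
    return frames - 0.1 * (frames - 0.5)

frames = rng.normal(size=(16, 8, 8))        # the whole clip, refined at once
for _ in range(50):
    frames = denoise_step(frames)

# Autoregressive-style: emit one frame at a time, conditioned on the past.
def next_frame(history):
    """Stand-in predictor of the single next frame from previous ones."""
    return history[-1] * 0.9 + 0.05

video = [rng.normal(size=(8, 8))]
for _ in range(15):
    video.append(next_frame(video))
```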
  • @zvorenergy · a month ago

    This all seems very altruistic and egalitarian until you remember who controls the billion dollar compute infrastructure and what happens when you don't pay your AI subscription fee.

    • @yikesawjeez · a month ago

      decentralize it baybeee, seize the memes of production

    • @zvorenergy · a month ago

      @@yikesawjeez liquid neurons, Extropic, free the AIs from their server farms and corporate masters

    • @johnkintree763 · a month ago

      @@yikesawjeez Yes, a smartphone with 16 GB of RAM might make a good component in a global platform for collective human and digital intelligence.

    • @TheManinBlack9054 · a month ago

      @@yikesawjeez Why not actually seize the actual means of production like the communists did, and nationalize the private companies? It makes total sense.

    • @yikesawjeez · a month ago

      @@johnkintree763 oh it prob hid my other comment cuz there was a link in it but yes, they actually make very good components for decentralized cloud services, you can find it if you google around a bit. there's tons of parts of information transformation/sharing/storage that can absolutely be handled by a modern smartphone

  • @SteffenProbst-qt5wq · a month ago

    Got kind of jumpscared by the random sound at 17:08. Leaving this here for other viewers. Again at 17:51

  • @forheuristiclifeksh7836 · a month ago

    24:31

  • @forheuristiclifeksh7836 · a month ago

    17:48