Dr. JEFF BECK - The probability approach to AI

  • Added 1 Jun 2024
  • Support us! / mlst
    MLST Discord: / discord
    Note: We have had some feedback that the audio is a bit low on this for some folks - we have fixed this in the podcast version here: podcasters.spotify.com/pod/sh...
    Dr. Jeff Beck is a computational neuroscientist studying probabilistic reasoning (decision making under uncertainty) in humans and animals with emphasis on neural representations of uncertainty and cortical implementations of probabilistic inference and learning. His line of research incorporates information theoretic and hierarchical statistical analysis of neural and behavioural data as well as reinforcement learning and active inference.
    / jeff-beck-6b5085196
    scholar.google.com/citations?...
    Interviewer: Dr. Tim Scarfe
    TOC
    00:00:00 Intro
    00:00:51 Bayesian / Knowledge
    00:14:57 Active inference
    00:18:58 Mediation
    00:23:44 Philosophy of mind / science
    00:29:25 Optimisation
    00:42:54 Emergence
    00:56:38 Steering emergent systems
    01:04:31 Work plan
    01:06:06 Representations/Core knowledge
    #activeinference
  • Science & Technology

Comments • 95

  • @jordan13589
    @jordan13589 7 months ago +17

    Wrapping myself in my Markov blanket hoping AGI pursues environmental equilibrium 🤗

  • @SymEof
    @SymEof 7 months ago +15

    One of the most profound discussions about cognition available on YouTube. Truly excellent.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +1

      @@webgpu podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @luke2642
    @luke2642 7 months ago +9

    I really enjoyed this. Great questions, if a little leading, but Beck was just fantastic in answering, thinking on his feet too. The way he framed empiricism, prediction, models... everything, it's just great! And then to top it off he's got the humanity, the self-awareness of his Quaker/Buddhist trousers (gotta respect that maintaining hope and love are axiomatic for sanity during the human condition) without any compromise on the scientific method!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +2

      Epic comment as always, cheers Luke! Dr. Beck was brilliant!

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago +3

      The best thing about the Quaker Buddhist pants is that there is none of this belt and zipper BS. Loose fit, drawstring, modal cotton, and total freedom...

    • @luke2642
      @luke2642 7 months ago

      Indeed! The freedom to pursue those purposes that create meaning and value in our lives, perhaps even more fulfilling than just scratching our paleolithic itches. Strong narratives anchor us, inspire us, yet trap us. It's such a pickle!

  • @BrianMosleyUK
    @BrianMosleyUK 7 months ago +5

    I've been starved of content from this channel, this is so satisfying!

  • @Lolleka
    @Lolleka 7 months ago +2

    I switched to the Bayesian framework of thinking a few years ago. There is no coming back; it is just too good.

  • @Daniel-Six
    @Daniel-Six 4 months ago +2

    Love listening to Beck riff on the hidden melodies of the mind. Dude can really shred the scales from minute to macroscopic in the domain of cognition.

  • @siarez
    @siarez 7 months ago +4

    Great questioning Tim!

  • @Blacky372
    @Blacky372 7 months ago +2

    Thanks! I am grateful that such great content is freely available for everyone to enjoy.

  • @rthegle4432
    @rthegle4432 7 months ago +2

    Very awesome, hope the episodes get even longer ❤

  • @ffedericoni
    @ffedericoni 7 months ago

    Epic episode! I am already expanding my horizons by learning Pyro and Lenia.

  • @kd192
    @kd192 6 months ago

    Incredible discussion... thanks for sharing

  • @dr.mikeybee
    @dr.mikeybee 7 months ago +4

    You've hit another one out of the park. Great episode!

  • @GaHaus
    @GaHaus 7 months ago +2

    Totally epic, a bit out of my depth, but really expanding my horizons. I really liked the answer to the question about believing in things beyond materialism. Non-materialist thinking is incredibly important to many people around the world and can bring us so much meaning, and I appreciated that Dr. Beck didn't instantly jump to supporting only scientific materialism.

    • @stefl14
      @stefl14 7 months ago +2

      Support among scientists for something like objective idealism isn't that rare. It's just that materialism is a shibboleth in science because it's the most useful frame for doing science. It's the shut up and calculate frame. I lean materialist myself, but even that gets fuzzy. If you're a computationalist functionalist, for example, and think the substrate doesn't matter, then it's hard to deny that "thinking" structures could emerge above the human, say by selection on institutional structures by cultural evolution. You then end up at an attenuated version of idealism when the next level of the fractal is structured enough to be a controller. It's not that I believe this is true yet, but certainly the connectivity structures in cities recapitulate those in the neocortex. And certainly, evolutionary processes led cells to lose their independence to a weak "mind-like" controller (as in Michael Levin's work) and continuously to a true mind like our own. There's no principled reason we aren't becoming those cells. All of a sudden, the materialist view starts to sound quite Hegelian. My main point here is that only philosophically uneducated scientists completely dismiss non-materialism.

    • @luke2642
      @luke2642 7 months ago

      @@stefl14 Do you have a specific example in mind? It sounds like you're just saying something along the lines of "predictions of models in the material world are just as important as how we feel about how real the model abstraction is", which makes no sense to me. Material world relevance is the measure of a scientific explanation.

  • @Blacky372
    @Blacky372 7 months ago +3

    Great talk! Thank you very much for doing this interview. One minor thing: I would have preferred to hear Jeff's thoughts flow for longer without interjections in some parts of the video.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +1

      Thanks for the feedback, I think I did get a little bit overexcited due to the stimulating conversation. Cheers 🙏

    • @Blacky372
      @Blacky372 7 months ago

      @@MachineLearningStreetTalk I can't blame you. The ideas discussed left me with a big smile and a list of new topics to read about. You both are absolute legends!

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago

      I thought it was perfectly executed. As someone who has worked with Jeff for many years I can attest that it's quite risky to give him free rein in this setting.

  • @neurosync_research
    @neurosync_research 1 month ago

    🎯 Key Takeaways for quick navigation:
    00:00 *🧠 Brain's Probabilistic Reasoning*
    - The brain's implementation of probabilistic reasoning is a focal point of computational neuroscience.
    - Bayesian brain hypothesis examines how human and animal behaviors align with Bayesian inference.
    - Neural circuits encode and manipulate probability distributions, reflecting the brain's operations.
    02:19 *📊 Bayesian Analysis and Model Selection*
    - Bayesian analysis provides a principled framework for reasoning under uncertainty.
    - Model selection involves choosing the best-fitting model based on empirical data and considered models.
    - Occam's razor effect aids in selecting the most plausible model among alternatives.
    06:13 *🤖 Active Inference Framework*
    - Active inference involves agents dynamically updating models while interacting with the environment.
    - It incorporates optimal experimental design, guiding agents to seek the most informative data.
    - Contrasts traditional machine learning by incorporating continuous model refinement during interaction.
    09:34 *🌐 Universality of Cognitive Priors*
    - Cognitive priors shape cognitive processes, reflecting evolutionary adaptation and cultural influences.
    - The debate on universal versus situated priors explores the extent to which priors transcend specific contexts.
    - Cognitive priors facilitate rapid inference by providing a foundation for reasoning and decision-making.
    14:20 *💭 Epistemological Considerations*
    - Science prioritizes prediction and data compression over absolute truth, acknowledging inherent uncertainty.
    - Models serve as predictive tools rather than absolute representations of reality, subject to continuous refinement.
    - Probabilistic reasoning emphasizes uncertainty and the conditional nature of knowledge, challenging notions of binary truth.
    19:11 *🗣️ Language as Mediation in Communication*
    - Language serves as a mediation pattern for communication.
    - Communicating complex models involves a trade-off between representational fidelity and communication ability.
    - Grounding models in predictions facilitates communication between agents with different internal models.
    22:03 *🌐 Mediation through Prediction*
    - Communication between agents relies on prediction as a common language.
    - Interactions and communication are mediated by the environment.
    - The pragmatic utility of philosophy of mind lies in predicting behavior.
    24:24 *🧠 Materialism, Philosophy, and Predictive Behavior*
    - The pragmatic perspective in science prioritizes prediction over philosophical debates.
    - Compartmentalization of beliefs based on context, such as scientific work versus personal philosophy.
    - Philosophy of mind serves the practical purpose of predicting behavior.
    29:46 *🧭 Tractable Bayesian Inference for Large Models*
    - Exploring tractable Bayesian inference for scaling up large models.
    - Gradient-free learning offers an alternative approach to traditional gradient descent.
    - Transformer models, like the self-attention mechanism, fall within the class amenable to gradient-free learning.
    36:56 *🎓 Encoding representations in vector space*
    - Gradient-free optimization and the trade-off with limited model accessibility.
    - The importance of Autograd in simplifying gradient computations.
    - Accessibility of gradient descent learning for any loss function versus limitations of other learning approaches.
    39:18 *🔄 Time complexity of gradient-free optimization*
    - Comparing the time complexity of gradient-free optimization to algorithms like Kalman filter.
    - Discussion on continual learning mindset and measurement of dynamics over time.
    40:19 *🧠 Markov blanket detection algorithm*
    - Overview of the Markov blanket detection algorithm for identifying agents in dynamic systems.
    - Explanation of how dynamics-based modeling aids in identifying and categorizing objects in simulations.
    - Utilization of dimensionality reduction techniques to cluster particles and identify interacting objects.
    43:10 *🔍 Emergence and self-organization in artificial life systems*
    - Discussion on emergence and self-organization in artificial life systems like Particle Lenia.
    - Exploration of the challenges in modeling complex functional dynamics and the role of emergent phenomena.
    - Comparison of modeling approaches focusing on bottom-up emergence versus top-down abstraction.
    49:02 *🎯 Role of reward functions in active inference*
    - Comparison between active inference and reinforcement learning in defining agents and motivating behavior.
    - Critique of the normative solution to the problem of value function selection and the dangers of specifying reward functions.
    - Emphasis on achieving homeostatic equilibrium as a more stable approach in active inference.
    52:20 *🛠️ Modeling levels of abstraction and overcoming brittleness*
    - Discussion on modeling different levels of abstraction in complex systems and addressing brittleness.
    - Exploration of emergent properties and goals in agent-based modeling.
    - Consideration of the trade-offs in modeling approaches and the role of self-organization in overcoming brittleness.
    55:08 *🏠 Active inference and homeostasis*
    - Active inference involves steering emergent systems towards target macroscopic behaviors, often resembling homeostatic equilibrium.
    - Agents are imbued with a definition of homeostatic equilibrium, leading to stable interactions within a system.
    - Transitioning agents from a state of homeostasis to accomplishing specific tasks poses challenges in maintaining system stability.
    56:34 *🔄 Steerable multi-agent systems*
    - Gradient descent training on CNN weights can produce coherent global outputs, illustrating macroscopic optimization.
    - Outer loops in multi-agent systems steer agents toward fixed objectives without resorting to traditional reward functions.
    - Manipulating agents' internal states or boundaries can guide them to perform specific tasks without disrupting system equilibrium.
    59:00 *🎯 Guiding agents' behaviors*
    - Speculative approaches to guiding agents' behaviors include incorporating desired tasks into their definitions of self.
    - Avoiding brittleness in agent behaviors involves maintaining flexibility and adaptability over time.
    - Alternatives to altering agents' definitions of self include creating specialized agents for specific tasks, akin to natural selection processes.
    Made with HARPA AI
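
    A minimal sketch of the Occam's razor effect flagged in the 02:19 takeaway above, assuming a toy Beta-Bernoulli coin-flip setting (the two models and the data are illustrative, not from the episode):

        from math import lgamma

        def log_beta(a, b):
            # log of the Beta function B(a, b), the conjugate-prior normaliser
            return lgamma(a) + lgamma(b) - lgamma(a + b)

        def log_evidence(heads, tails, a, b):
            # Marginal likelihood p(D | M) of a Beta(a, b)-Bernoulli model:
            # p(D | M) = B(a + heads, b + tails) / B(a, b)
            return log_beta(a + heads, b + tails) - log_beta(a, b)

        heads, tails = 6, 4
        print(log_evidence(heads, tails, 50, 50))  # near-fair-coin model: prior tight around 0.5
        print(log_evidence(heads, tails, 1, 1))    # flexible model: uniform prior

    The flexible model spreads its prior mass over many possible datasets, so on near-fair data the constrained model earns the higher evidence: Bayesian model comparison penalises needless flexibility automatically.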

  • @ChristopherLeeMesser
    @ChristopherLeeMesser 6 months ago +2

    Interesting discussion. Thank you. Does anyone have a reference on the Bayesian interpretation of self-attention in transformers?

  • @35hernandez93
    @35hernandez93 7 months ago +4

    Great video, although the volume was a bit low

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +2

      Thanks for letting me know, I'll dial it up on the audio podcast version

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @ntesla66
    @ntesla66 7 months ago +1

    That was truly eye-opening... the epiphany I had around the 45-minute mark was that there are two schools of approach in training, just like the two schools of physics: the general relativists, whose mathematical foundations are in tensors and linear algebra, and the quantum physicists, founded in statistics. The one is a vector approach needing a coordinate system, the other uses Hamilton's action principle. Tensors or the calculus of variations.

  • @eskelCz
    @eskelCz 6 months ago +2

    What was the name of the cellular automata "toy" he mentioned? Particle Len... ? :)

  • @dr.mikeybee
    @dr.mikeybee 7 months ago

    How would a non-gradient-descent method like a decision tree be used to speed up learning? Is there a way to "jump" from what we learn from a decision tree to updating a neural net? Or is the idea that an agent can use evolutionary algorithms, swarm intelligence, Bayesian methods, reinforcement learning methods like Q-learning, policy gradients, etc., heuristic optimization, and decision tree learning as part of its architecture? And if so, where is the learning taking place? If there is no update to a model, are we learning by storing results in databases? Updating policy networks, etc.?

    • @backslash11
      @backslash11 7 months ago +1

      No easy jump. It's hard to actually update all the weights of a large neural net, but as seen with LLMs, you can teach them quickly to an extent by just adding pretext. Pretext can become huge with techniques such as dilated attention, or it can be compressed and distilled to cram more info in there. This priming can persist as long as needed, basically becoming a form of semi-permanent learning. I'd imagine in the future, the billion dollar training will just form the base predictive abilities of a network, but other non-gradient descent methods will be fed later as priming, and managed as if it were a base of newly learned knowledge. Once in a while the entire network could be updated, incorporating that knowledge into the base model.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago +4

      The relevant Google search terms are 'coordinate ascent' and 'Variational Bayes' and 'conjugate priors'. The trick is extending coordinate updates to work (approximately in some cases) with models that are not just simple mixtures of exponential family distributions.
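
      A gradient-free illustration of those search terms: a minimal coordinate-ascent sketch for a two-component Gaussian mixture, where conjugacy makes every update closed-form (a toy EM-style special case of the idea, not Dr. Beck's algorithm; full Variational Bayes would also maintain posteriors over the parameters):

          import numpy as np

          rng = np.random.default_rng(0)
          x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])

          mu = np.array([-1.0, 1.0])   # component means
          pi = np.array([0.5, 0.5])    # mixing weights

          for _ in range(50):
              # Coordinate update 1: posterior responsibilities (no gradients anywhere)
              logp = -0.5 * (x[:, None] - mu[None, :]) ** 2 + np.log(pi)
              r = np.exp(logp - logp.max(axis=1, keepdims=True))
              r /= r.sum(axis=1, keepdims=True)
              # Coordinate update 2: closed-form parameter updates (conjugacy)
              Nk = r.sum(axis=0)
              mu = (r * x[:, None]).sum(axis=0) / Nk
              pi = Nk / Nk.sum()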

  • @kennethgarcia25
    @kennethgarcia25 4 months ago

    Objectives? Trajectories? How we define things in relation to the aims one senses is important.

  • @Boobkink
    @Boobkink 4 months ago

    Amazing. Simply amazing. VRSSF is at the forefront. Microsoft won't let this go…

  • @kamalsharma3294
    @kamalsharma3294 7 months ago

    Why is this episode not available on Spotify?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago

      It is podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @entropica
    @entropica 4 months ago

    Joscha Bach would call the sum of the models of the different agents (self and others) the "role play" that's running on our brain, which is by construction a simulation.

  • @user-tu1lm3kb7t
    @user-tu1lm3kb7t 6 months ago

    Dr. Beck said that "you can use gradient descent learning for any loss function", which is not right. We can use gradient descent learning only for a loss function that has a derivative.
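
    A tiny sketch of the distinction (toy numbers, nothing from the episode):

        import numpy as np

        w, x, y = 0.0, 1.0, 1.0
        for _ in range(100):
            grad = 2 * (w * x - y) * x   # squared loss: differentiable everywhere
            w -= 0.1 * grad              # converges towards w = 1
        # A 0-1 loss such as float(np.sign(w * x) != np.sign(y)) is piecewise
        # constant: its gradient is zero wherever it exists, so gradient descent
        # cannot move at all.

    In practice the requirement is usually relaxed to "differentiable almost everywhere" (with subgradients at the kinks), which is why ReLU networks still train by gradient descent.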

  • @marcinhou
    @marcinhou 7 months ago +2

    Is it just me, or is the volume really low on this?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +1

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1 sorry about that, fixed on pod

  • @svetlicam
    @svetlicam 6 months ago

    The brain is not Bayesian inference. Electrical potentials in neurons are not excited by the correlational probability of input impulses; it's exactly the opposite: it comes down to very tiny electrical charges, what is excited, what is not, and what is just current dispersal. Maybe this dispersal works on principles of Bayesian inference, which may in the long term add up to some pruning of synapses, but mostly it is a kind of background noise.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 6 months ago +1

      You're thinking hardware. The Bayesian brain hypothesis is a cognitive construct. Humans and animals behave as if they are reasoning probabilistically. My laptop also reasons probabilistically even though it's basically a deterministic calculator; I've just programmed it to do so. Nature has likely done something similar, if for no other reason than that failing to rationally take uncertainty into account can have undesirable consequences. See the Dutch Book theorem.
      That said, one could make an argument that, at biologically relevant spatio-temporal scales, the laws of physics are Bayesian. Niven 2010 and Davis and Gonzalez 2014 have very nice derivations of the equations of statistical physics/thermodynamics from purely information theoretic, i.e. Bayesian, considerations.

    • @svetlicam
      @svetlicam 6 months ago

      @@JeffBeck-po3ss True, it is a cognitive construct built on mathematical principles, but that is not how cognition works. Cognition works, so to speak, on a sort of principle of exclusivity, not probability, if you get the distinction. Only exclusive stimuli enter the cognitive process, allowing faster and much more goal-oriented reactions; a probabilistic process would take too long and be too energy-expensive. With tools like mathematical principles or computational algorithms, though, it becomes achievable faster and, to a point, more precise, because those processes are logically simplified through mathematical calculation or binary probabilistic rationalization.

  • @Isaacmellojr
    @Isaacmellojr 1 month ago

    The lack of "as if I was performing Bayesian inference, of course" was awkward.

  • @ML_Indian001
    @ML_Indian001 7 months ago

    "Gradient Free Learning" 💡

  • @bpath60
    @bpath60 7 months ago

    Thank you! Fodder for the mind... sorry, brain!

  • @Daniel-ih4zh
    @Daniel-ih4zh 7 months ago +1

    Volume needs to be turned up

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @markcounseling
    @markcounseling 7 months ago

    The thought occurred: "My Markov blanket is a Klein bottle." Which I can't explain, but perhaps Diego Rapaport can?

  • @jason-the-fencer
    @jason-the-fencer 7 months ago +2

    "does it matter if the brain is acting AS IF it's using bayesian inference or not..." yes, it does. In engineering a solution to a problem, not, acting 'as if' doesn't matter because the outcome is what's important. But if you're conducing scientific research that is intended to lead you down a path of discovery, taking the 'as if' result as equal to 'is', you're going to risk the possibility of being led to wrong conclusions.
    This seems to be the whole problem with the mechanist view of the brain - it presupposes that our brains are just computers, and then never wants to find out.

    • @Vectorized_mind
      @Vectorized_mind 7 months ago

      Correct, he's delusional. He also claims science is not about seeking truth but about making predictions, which is idiotic, because science was built on the idea of understanding the operations and functions behind the mysteries of the universe. What's the point of making predictions if you don't understand anything?!

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago +2

      See if it makes more sense if you think in terms of an isomorphism which is a fancy word that tells you when two mathematical systems are the same. The standard proof of the inevitability of a free energy principle just shows that any physical system can be interpreted as performing bayesian inference. This is because the mathematics of belief formation, planning, and action selection can be mapped onto the equations of both classical and quantum mechanics. So the 'as if' is ultimately justified by showing that two mathematical systems are equivalent.

  • @dirknbr
    @dirknbr 6 months ago

    It wasn't Einstein who said all models are wrong; it was Box.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 6 months ago

      Omg. Thanks. I have been using that line for like 20 years. How embarrassing...

  • @GodlessPhilosopher
    @GodlessPhilosopher 7 months ago

    RIP I loved your music

  • @tompeters4506
    @tompeters4506 7 months ago

    Sounds fascinating. Wish I understood it, and I ain't a dummy.

    • @tompeters4506
      @tompeters4506 7 months ago

      Sounds like some mechanism for faster AI learning for a certain class of tasks (models)

    • @tompeters4506
      @tompeters4506 7 months ago

      The mechanism being not dependent on gradient descent to reach the optimal solution... it gets to an approximate solution faster?

    • @dr.mikeybee
      @dr.mikeybee 7 months ago

      Ask an LLM all the questions you have. For example, ask why a transformer is like a mixture of experts: A transformer neural network can be viewed as a type of implicit mixture of experts model in the following way:
      - The self-attention heads act as experts - each head is able to focus on and specialize in different aspects of the input sequence.
      - The multi-headed attention mechanism acts as a gating network - it dynamically combines the outputs of the different attention heads, weighting each head differently based on the current inputs.
      - The feedforward layers after the multi-headed attention also help refine and combine the outputs from the different attention heads.
      - The entire model is trained end-to-end, including the self-attention heads and feedforward layers, allowing the heads to specialize while optimizing overall performance.
      So the self-attention heads act as local experts looking at the sequence through different representations. The gating/weighting from the multi-headed attention dynamically chooses how to combine these experts based on the current context.
      This provides some of the benefits of mixture of experts within the transformer architecture itself. Each head can specialize, different combinations of heads get used for different inputs, and the whole model is trained jointly.
      However, it differs from a traditional mixture of experts in that the experts are predefined as the attention heads, rather than being separate networks. But the transformer architecture does achieve a form of expert specialization and gating through its use of multi-headed self-attention.
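
      For anyone who wants the analogy in code, here is a minimal numpy sketch of multi-head self-attention with the heads playing the "experts" and the softmax playing the gate (dimensions and random weights made up for illustration):

          import numpy as np

          def softmax(z, axis=-1):
              z = z - z.max(axis=axis, keepdims=True)
              e = np.exp(z)
              return e / e.sum(axis=axis, keepdims=True)

          rng = np.random.default_rng(0)
          T, d, H = 5, 8, 2                    # sequence length, model dim, heads
          x = rng.normal(size=(T, d))
          Wq, Wk, Wv = (rng.normal(size=(H, d, d // H)) for _ in range(3))
          Wo = rng.normal(size=(d, d))         # output projection combines the heads

          heads = []
          for h in range(H):                   # each head = one "expert" view of x
              q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
              a = softmax(q @ k.T / np.sqrt(d // H))   # gating weights per position
              heads.append(a @ v)
          out = np.concatenate(heads, axis=-1) @ Wo    # combine the expert outputs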

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago

      You can actually show that Bayesian inference on a particular class of mixture models leads to an inference algorithm that is mathematically equivalent to the operations performed by a transformer that skips the add and norm step.
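
      A sketch of that correspondence as I read it (my notation, and only the gist): for a mixture whose unit-variance Gaussian components are centred at the "keys", the posterior responsibilities for a query take a softmax form, and the posterior-mean prediction is a responsibility-weighted sum of "values" - i.e. softmax(qKᵀ)V with no add-and-norm step:

          import numpy as np

          rng = np.random.default_rng(0)
          T, d = 5, 8
          q = rng.normal(size=d)        # query point
          K = rng.normal(size=(T, d))   # component centres ("keys")
          V = rng.normal(size=(T, d))   # per-component predictions ("values")

          # Equal mixing weights: the |q|^2 term is constant across components
          # and cancels in the softmax, leaving logits linear in the query-key
          # dot product.
          logits = K @ q - 0.5 * (K ** 2).sum(axis=1)
          r = np.exp(logits - logits.max())
          r /= r.sum()                  # posterior responsibilities p(j | q)
          prediction = r @ V            # posterior-mean prediction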

  • @jonmichaelgalindo
    @jonmichaelgalindo 7 months ago +1

    I have perfect knowledge of the Truth that I exist. Probability = 1. Maybe I'm an immortal soul, or just a line of code in a simulation, but that soul or that code exists. (I don't know that you exist though. Just me.)
    (Can GPT-4 experience this P=1 knowledge? I doubt that "I" exists in there.)

  • @ArtOfTheProblem
    @ArtOfTheProblem 7 months ago

    would love to collab

    • @andybaldman
      @andybaldman 7 months ago

      You have a truly amazing channel. A collab would benefit both of you.

  • @sapienspace8814
    @sapienspace8814 7 months ago

    This is Fuzzy Logic. The "active inference" is what happens as the result of using Reinforcement Learning.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago

      Half right. Bayesian inference is fuzzy logic constrained by normative principles established by statistics and probability theory. Similarly, the mechanics of active inference are the same as the mechanics of Bayesian RL, with one critical difference: in RL the user selects the reward function; in active inference the reward function is derived from information theoretic principles.

    • @sapienspace8814
      @sapienspace8814 7 months ago

      @@JeffBeck-po3ss RL has an internal reward function and an external one.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago

      Yes. But where does it come from?

    • @sapienspace8814
      @sapienspace8814 7 months ago

      @@JeffBeck-po3ss It comes from a pseudo-random input signal that balances exploration and exploitation using the RL reward/punishment equations (for both internal and external reward/punishment).

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago

      I'm not making myself clear. That's a high-level description of a reward function that includes some sensible things but doesn't tell me how to weigh them. For example: sugar is good, sunlight is good, high-entropy policies that encourage exploration are good, information seeking is good, getting sick is bad. So
      R = a*(grams of sugar) + b*(sun intensity) + c*(policy entropy) + d*(information gain) - f*(sickness).
      How do you determine a, b, c, d, and f? That is, how do you determine the relative values of good and bad things?
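
      For concreteness, a minimal sketch of how discrete active inference sidesteps hand-tuning those weights: policies are scored by a single information-theoretic quantity, the expected free energy, shown here in a common "risk plus ambiguity" style decomposition (toy notation and numbers of my own, not Dr. Beck's code):

          import numpy as np

          def expected_free_energy(qs, A, log_C):
              # qs: predicted state distribution under a policy, shape (S,)
              # A: likelihood p(o|s), shape (O, S); log_C: log preferences over o
              qo = A @ qs                                # predicted observations
              risk = -(qo * log_C).sum()                 # expected preference violation
              H = -(A * np.log(A + 1e-16)).sum(axis=0)   # ambiguity of each state
              return risk + (qs * H).sum()

          A = np.array([[0.9, 0.1],
                        [0.1, 0.9]])                     # two states, two observations
          log_C = np.log(np.array([0.8, 0.2]))           # observation 0 is preferred
          for qs in (np.array([0.9, 0.1]), np.array([0.5, 0.5])):
              print(expected_free_energy(qs, A, log_C))  # lower = better policy

      No a, b, c, d, f appear: preferences enter only as a prior over observations, and the exploration pressure comes from the ambiguity term rather than from hand-chosen weights.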

  • @Gigasharik5
    @Gigasharik5 3 months ago

    Ain't no way that the brain is Bayesian

  • @Achrononmaster
    @Achrononmaster 7 months ago

    @1:20 the question is really: is that *all* the brain does? I'd argue definitely not. Plus, you cannot separate brain from mind (a fool's errand), and they're not the same thing. Bayes/brain cannot generate genuine novel insight nor mental qualia.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago

      I am inclined to agree with you when I am not wearing my scientist pants. But when I am wearing them I get frustrated with terms like mental qualia because it lacks precision. Insight and intuition, on the other hand, do have nice Bayesian descriptions via a kind of shared structure learning that enables the identification of analogies.

  • @u2b83
    @u2b83 7 months ago

    Jeff: "So what do you mean by guardrails?" lol... what he really meant was: what do you mean by social complexification?

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago +2

      Your guess is as good as mine.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +2

      I would have to listen back, what's the timestamp? But speaking broadly - I am interested in emergent social dynamics i.e. culture / language, which is to say - that which is best made sense of when we zoom out quite far from the highly diffused phenomena in the "microscopic" physical substrate and look at the things which emerge higher up in the social plane. For example the concept of a chair exists as a meme in our social plane, even though it's parasitic on a widespread diffusion of physical interactions lower down (between people, and chairs, and lower down, between cells etc!). So the "guardrails" of the dynamics are communication affordances at the respective scale i.e. the words we use, how we can physically interact with our environments, but the interesting thing is the representational fidelity of these affordances, i.e. they can be used, remixed, and overloaded to create a rich tapestry of meaning, both in the moment, and more culturally embedded in our society memetically. The emergence of the social plane from the physical plane is something FEP can teach us a lot about IMO. What's also interesting is that the higher a plane is from the physical, the faster it can evolve - our language evolves a million times faster than our DNA. This evolution-velocity asymmetry is common to many other symbiotic organisms.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago +2

      Our resident expert in this domain is Mahault Albarracin.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 7 months ago +4

      My intuition is that communication must be grounded in common observations and so ultimately communication is about transmitting predictions.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago +1

      @@JeffBeck-po3ss We will release our interview with Mah soon!

  • @dionisienatea3137
    @dionisienatea3137 7 months ago

    Sort out your audio... why do you post such an important discussion with such low audio? I have everything at 100% and can barely understand it...

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  7 months ago

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1 Audio pod has improved audio

  • @Vectorized_mind
    @Vectorized_mind 7 months ago +1

    The most flawed thing I've heard is "science is not about seeking truth but about making predictions" 🤣🤣. This is a very erroneous claim; science is the process of trying to figure out how the world around us works, not just making predictions.
    When Newton developed his laws he didn't merely want to make predictions; he sincerely wanted to understand how the world worked. The accuracy of your prediction tracks how true your understanding of a system is. THE MORE TRUE YOUR UNDERSTANDING, THE MORE ACCURATE YOUR PREDICTION; THE LESS TRUE YOUR UNDERSTANDING, THE LESS ACCURATE YOUR PREDICTION.