Forrest Landry - Is AI Alignable?

  • Published 9 Sep 2024
  • Forrest Landry is a philosopher, writer, researcher, scientist, engineer, craftsman, and teacher focused on metaphysics, on the ways software applications, tools, and techniques influence the design and management of very large-scale complex systems, and on the thriving of all forms of life on this planet. Forrest is also the founder and CEO of Magic Flight, and a third-generation master woodworker who found he had a unique set of skills in large-scale software systems design, governance architecture, and community process.
    Audio here: archive.org/de...
    Many thanks for tuning in!
    Please support SciFuture by subscribing and sharing!
    Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
    Please fill out this form: docs.google.co...
    Kind regards,
    Adam Ford
    - Science, Technology & the Future - #SciFuture - scifuture.org

Comments • 5

  • @NicholasWilliams-uk9xu (3 months ago, +3)

    Dude you guys are smart. People need to hear these things.

  • @NicholasWilliams-uk9xu (3 months ago, +1)

    I would also add that, even though exponentials are important, how the exponential came about epistemologically is an (additive/subtractive) * multiplicative causal structure at its foundation. More robust prediction ultimately deals with these granularities by additive and subtractive means, because for every action there is an equal and opposite reaction; any exponential therefore has a more concrete description in additive and subtractive terms, while an exponent is a generalization of dynamics, not an accounting of dynamics. This is also a machine learning problem (when all activation functions rest on exponential activation): dimensional scaling is not the foundation, additive and subtractive operations provide the foundation, and you then scale foundation * foundation for dimensional scaling. It really depends on how much generality/granularity you need your model to shuffle, which will depend on the ratio of linear to exponential activation functions in the system, in proportion to the needs of the task at hand (a quantitative fact about data processing).
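
    A minimal sketch of the point above (my own illustration, not from the video or the comment): the same growth process written once as a closed-form exponent and once as an explicit per-step additive accounting, so each step's increment stays visible. The rate and step count are arbitrary placeholder values.

    import math

    rate = 0.05      # per-step growth rate (hypothetical value)
    steps = 100
    x0 = 1.0

    # Closed-form "generalization of dynamics": one exponent, no per-step detail.
    closed_form = x0 * math.exp(rate * steps)

    # Step-by-step "accounting of dynamics": each increment is an explicit
    # addition, so gains and losses at every step remain inspectable.
    x = x0
    increments = []
    for _ in range(steps):
        dx = x * (math.exp(rate) - 1)   # this step's additive change
        increments.append(dx)
        x += dx

    print(f"closed form: {closed_form:.4f}")
    print(f"accounted:   {x:.4f}")
    print(f"first/last increments: {increments[0]:.4f}, {increments[-1]:.4f}")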

  • @TheTimecake (3 months ago)

    Can an isomorphism be drawn between the argument for unalignability Landry put forward and the Second Law of Thermodynamics? I've seen a formulation of the Second Law that seems very similar to his argument.
    Namely, that if you have a two-state phase space with a fractal boundary between the two states, then the control over selecting one state and not the other around that boundary goes to zero as the system continues to evolve.
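
    A minimal sketch of that two-state, fractal-boundary picture (my own illustration, not Landry's argument), using the doubling map x -> 2x mod 1 as a stand-in for the chaotic dynamics: the initial-condition precision needed to select the coarse final state shrinks exponentially with the number of steps, so any finite-precision control over which state gets selected effectively goes to zero as the system evolves.

    def doubling_map(x, steps):
        # Iterate the Bernoulli doubling map x -> 2x mod 1.
        for _ in range(steps):
            x = (2.0 * x) % 1.0
        return x

    def final_state(x, steps):
        # Coarse two-state readout: which half of the unit interval we land in.
        return 0 if doubling_map(x, steps) < 0.5 else 1

    x0 = 0.1234567   # arbitrary initial condition
    eps = 1e-9       # finite "control precision" on the initial condition
    for steps in (5, 20, 40):
        # Guaranteeing the outcome requires fixing x0 to within about 2**-(steps + 1);
        # once 2**steps * eps reaches order one, an eps-sized error can flip the state.
        preserved = final_state(x0, steps) == final_state(x0 + eps, steps)
        print(f"steps={steps:3d}  required precision ~{2.0 ** -(steps + 1):.1e}  "
              f"outcome preserved at eps={eps:.0e}: {preserved}")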