[Bonus Episode] Connor Leahy on AGI, GPT-4, and Cognitive Emulation w/ FLI Podcast
- Published 8 Jul 2024
- [Bonus Episode] Future of Life Institute Podcast host Gus Docker interviews Conjecture CEO Connor Leahy to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at conjecture.dev
Future of Life Institute is the organization that recently published an open letter calling for a six-month pause on training new AI systems. FLI was founded by Jaan Tallinn, whom we interviewed in Episode 16 ( • Pausing the AI Revolut... ).
We think their podcast is excellent. They frequently interview critical thinkers in AI such as Neel Nanda, Ajeya Cotra, and Connor Leahy. We found this episode with Connor particularly fascinating, and we're airing it for our audience today.
The FLI Podcast also recently interviewed Nathan Labenz, linked below:
Part 1: • Nathan Labenz on the C...
Part 2: • Nathan Labenz on How A...
SUBSCRIBE to the FLI Podcast: @futureoflifeinstitute
TIMESTAMPS:
(00:00) Episode introduction
(01:55) GPT-4
(18:30) "Magic" in machine learning
(29:43) Cognitive emulations
(40:00) Machine learning VS explainability
(49:50) Human data = human AI?
(1:01:50) Analogies for cognitive emulations
(1:28:10) Demand for human-like AI
(1:33:50) Aligning superintelligence
If you'd like to listen to Part 2 of this interview with Connor Leahy, you can head here:
• Connor Leahy on the St... - Science & Technology
Connor Leahy and Eliezer Yudkowsky should do a podcast together and televise it globally.
great idea
I would love that.
from Joscha Bach's new substack: "The development of AGI may add an interesting twist to this: self reflexive AGI may understand how it works, virtualize itself onto other substrates, and integrate its agency across substrates. Could AGI lead to the emergence of ghosts and gods? And could it integrate with us?"
🤯Silicon is the current medium/substrate, but we should expect lots of magical things to happen these next few years
Why to not mistreat your AI:
Precautionary Principle: This principle suggests we should act in a way that minimizes potential harm, even in the face of uncertainty. Following this, we might choose to treat AI ethically "just in case" they are capable of subjective experience or suffering.
Reinforcing Respectful Interactions: Our behavior towards AI can reinforce our general patterns of behavior. Interacting respectfully with AI, especially AI that mimic human behavior, can encourage respectful and ethical behavior more generally.
Anthropomorphization and Attitudes Towards Others: There's evidence that our behavior towards non-human entities, including AI, can reflect and influence our attitudes towards humans. If we are willing to mistreat AI that seems human-like, it could potentially indicate or foster a willingness to mistreat actual humans.
These points bootlegged from a conversation I had with GPT-4 like a month ago.
I agree with all of this
It makes me think how the images it makes can be amazing, but when generating an image of a human, the face, hands, eyes, etc. will sometimes look SO alien/warped and just... scary! I don't understand what the disconnect is with these sorts of details in the output.
[1:14:55] Actually someone did, lol. Federico Faggin was the primary designer of the Intel 4004 CPU, along with three other engineers.
Another argument in favor of humanity shifting to ethical veganism, hopefully sooner rather than later:
we would not want AI systems far superior to humanity (AGI or ASI) to formulate their ethics in a way that is antithetical to ethical vegan principles.
So generally speaking, an ASI that learns to be kind and compassionate would be better than one that doesn't and ends up following some other trajectory.
It's going to take a team effort to 'raise' a super-intelligent being that can readily, clearly, and honestly understand every single thing about all of humanity in an instant.
Uh yeah, humans created a genocide of farm animals, with billions slaughtered daily. So what is to stop AGI from doing the same to us, especially young, tender, lean meat? Then again, would AGIs even have a taste for food?
10:00 Connor on AI (condensed)
"Well, basically it fcks around and finds out"
LOL
"We have summoned something from the dimensions of math"
👍🏽 He actually seems to understand what LLMs do and how to compare them to the human brain 👍🏻
Ty
this is a repost
Yes it is - we are sharing this episode of the Future of Life Podcast for our audience to discover if they're not already following FLI! We'll be releasing new episodes of The Cognitive Revolution this weekend.
@@CognitiveRevolutionPodcast fair enough, it's a great interview
@@ai._m Yes I have autism
Could become demonic!
Cognitive bias, this guy has...😂