People need to realize that Sam is playing the CEO game. Nothing he says can be trusted, as he has no incentive to be honest. He is solely interested in maximizing what is best for him and his company.
The fact that it can store data in its parameter space - I think we'll look back and say: that was kind of a weird waste of resources -> We will go full GraphRAG middle layer now 😀
Problematic to interview from a place of detached admiration…no real sense of where to push for clarity or even notice such opportunities. Makes it easier for calculated responses and narrative shaping 😊
So many disingenuous answers and swerves, you begin to understand why the board said Sam was not being fully candid last year... A totally specious example talking about the carbon savings of using Google because people weren't travelling to find answers: despite the utility of Google, we have accelerated our global output of carbon, despite knowing we had to rein back and in spite of already being connected enough pre-AI, with pretty much realtime multi-party comms to teleconference, to produce solutions. AI does not solve the lack of collaboration of selfish humans/nations.
If they have found ways to truly build trustworthiness into the "judgement" and "creativity" of the AI models they haven't released yet, then why not tell all other AI researchers, so that all AIs can have this necessary trait baked in now? If OpenAI is still looking out for ALL of humanity, they would be openly discussing and sharing on solving the integrity of AI "thinking"; instead we get this protracted roadshow of Sam appearances talking it up with "Gee, gosh, uh-huh, I'm your friend," when he is plainly guarding the speculative value of OpenAI (and his share of it).
How can someone who is supposedly looking out for the rest of us be unable to truthfully articulate the enormity of the impact that AI-driven automation, immobile or embodied, will bring WITHOUT regulation to ensure the employment prospects of everyone who wants/needs to earn a living? Hypocritical for someone at the cutting edge of thinking about the impact of AI who did NOT sign the petition calling for a pause and is now advocating thoughtful and necessary consideration before AI gets too far out of the gate. Like the guy who gives loaded guns to the gang of psychotic school bullies and drives them to school before calling an out-of-state podcaster to discuss gun control.
Hopeless to solve problems? All we can do is sit in our basement? We'll solve energy in fantastic ways? Great way to dismiss all of the people who are working for true sustainability, where money and abundance are not the main drivers. His thinking is extremely narrow. 30 years ago, Oren Lyons walked out of the UN climate conference with a phrase: Value Change for Survival. These guys think tech can solve all of our problems, when in reality it's our values and motivations. Sam doesn't want clean energy to solve global warming; he only wants energy to create an AI system that will put a personal assistant in all of our pockets. Could you imagine if all of these thinkers tried to tackle true sustainability and their motivation wasn't just wealth? Because they probably can't imagine that.
One: he clearly looks down on these universities and their students. Starts with dissing the P(doom) question. Uses the word "like" a lot. Two: yet he clearly realizes the value of these institutions; he's constantly visiting them!
Not rain money on us. Make money obsolete! We will have Deep Utopia. The genuine utopia most cannot fathom now, but will like when they see and experience it.
Most often I only watch interviews for the information and am happy to speed through them, but the interviewing skills I saw here from Sally Kornbluth @mit are outstanding! Hoping there's more to come.
This is way better than the Stanford interview. That was a disaster.
especially when the guy started singing happy bday to Sam….
Right? I should have clued in by the comments being turned off. I figured the Q&A would be better but ended up being the same dumb questions the media asks over and over.
The Stanford one was a joke
Holy shit yeah, worst interviewer ever and horrible questions
Absolutely, idk why Stanford chose a guy like that to conduct the interview, it was awful.
Chapters (Powered by ChapterMe) -
00:00 - MIT's new president introduction
00:18 - MIT's Sally Kornbluth and OpenAI's Sam Altman discuss AI's benefits
01:03 - MIT's most popular event since arrival
01:30 - P(doom): a badly formed question for smart people
03:08 - Safety concerns for AI development
06:55 - Control over tool use and bias
08:32 - Balance between privacy and AI training
09:14 - Privacy concerns with AI
12:26 - OpenAI: public good, cool, human expectation
17:13 - Anti-progress, anti-privilege, great life streak
17:32 - Fresh from outside, core to MIT's AI research
18:02 - OpenAI's focus on scientific discovery
18:44 - AI's impact on science, creativity, education
20:56 - Launch career with high risk, high impact
24:06 - MIT's entrepreneurship culture
24:31 - New startups thrive at big platform shifts
26:40 - Business advice: build customer relationships, avoid AI threats
30:12 - Jobs affected by AI regulation
34:15 - MIT's bilingual computing training
35:16 - Evolution of programming languages and AI
37:21 - AI's impact on the financial sector
38:36 - Massive potential for AI in education
39:53 - Growing up nerdy in the sci-fi era
40:19 - AI's exciting future, driven by original thinkers
42:53 - AI's transformative potential for humans
46:52 - Building AI and end-user applications with high value
48:03 - OpenAI's role in AI's future
51:37 - Solution: change context, avoid jet lag
52:08 - Thank you, thank you, thank you
How does Sam have so much time to do all of this 🤯🤯
Cause he does not do any work there
@@fideletinosa3716 Got any evidence for that rather ridiculous claim? You know, given that he’s the CEO and all.
There's only 24 hours in the day. Clearly a lot of his job currently is PR
@@therainman7777 The CEO does nothing but spread hype. Sam does not even understand back-propagation, which underpins all deep learning. He is simply a charlatan who spouts nonsense.
Ai
Another question is: "Do we need to study at MIT, Stanford, or any other college in the AI age?"
His Utopia and not dystopia comment was uplifting.
Kind of odd compared to his congressional hearing, which was all about "you better be afraid".
And it's a lie.
Thank you for sharing!
Of course!
Leaving this interview, I think Sam believes the end justifies the means... he sounds willing to do anything to get to AGI... so safety took a backseat... it doesn't seem to be a major concern for him... it's more of "get it out there"... the safety is in the closed-source concept
Very informative. Great job everyone.. thanks 🙏.
Nice Interview...
Sam,the great SAM, has had to clone himself to serve humanity 🎉❤he's in such global demand ❤
GPT-5 + agent model. So thousands of GPT-5s collaborating on a problem and learning will be SHOCKING!
where was this in the video
What's the newest. I want the least amount of reviews.
Sam doesn't answer a single question straight 😢. He answers only what he wants to say rather than addressing the question.
That's because he has answered the same boring questions 1000 times, so he puts a different spin on it to not get friggin mind numbingly bored.
He is a liar. There is something sinister hiding behind the veil. He is up to something and it isn’t good
@@iceshoqer Exactly
~25:00 The amazing fast growth can blind us to downsides. The problem with a dishonest market system (where externalities distort all economic decisions toward more damage, degradation and depletion) is that the skewing makes getting onto a sustainable path an elusive goal. We *could* charge fees proportional to extraction, emissions or habitat destruction, then share fee proceeds equally, to make the policy fair. (A blog explaining this is excluded from web search results, due to crude anti-SEO tactics. And news media don't report systemic solutions.)
My guru, AI genius Ray Kurzweil, says it won't be a competition with AI. Rather, we'll merge with AI. Basically become AI ourselves. Think Neuralink, etc. His sequel, The Singularity Is Nearer: When We Merge with AI, is out at the end of June.
Also, since AI will free us up to do more urgent, meaningful work, my career goal is to end suffering in the multiverse. I think the coming super intelligences will make that feasible.
Anyone else notice the resemblance between Altman and Bryan Kohberger
Sam Altman, CEO of OpenAI, and Sally Kornbluth, President of MIT, have a conversation about AI. Here are the points made by each speaker, attributed to them:
Sam Altman:
* Sam Altman is the CEO of OpenAI, an AI research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity.
* Altman believes that the question of "P Doom," or the probability of doom due to AI eliminating all human life, is a badly formed question. He thinks that the better question is what needs to happen to navigate safety sufficiently well.
* Altman thinks that society always holds space for doomsayers, and while there is value to that, he is happy that they exist and thinks it makes us think harder about what we're doing.
* Altman believes that AI is not yet very good, but that it will become very good in the future. He thinks that 10 years ago, he had a more naive conception of AI as a creature that would be off doing things, but now he thinks of it as a new tool in the tech tree of humanity that people are using to create amazing things.
* Altman thinks that AI will continue to get more capable and autonomous over time, but that it will integrate into society in an important and transformative way.
Sally Kornbluth:
* Sally Kornbluth is the President of MIT and has focused on AI in her first year, testifying to MIT's efforts to make sure that AI is broadly beneficial for society.
* Kornbluth believes that AI has the potential to be the biggest and best technological revolution, with the greatest benefits, but that it is important to navigate the privacy versus utility versus safety tradeoffs that come with it.
* Kornbluth thinks that the question of where we all will individually set the privacy versus utility tradeoffs, and the advantages that will be possible for someone to have if they let an AI train on their entire life, is a new thing for society to navigate.
* Kornbluth believes that the fact that GPT-4 can memorize data or store data in its parameter space is a weird waste of resources, and that at some point we will figure out how to separate the reasoning engine from the need for tons of data or storing the data in there, which will make some of the privacy issues easier.
In the transcript, Sam Altman talks about bias in AI systems like ChatGPT. He mentions that they have made surprisingly good progress in aligning the system to behave according to a certain set of values, but that there is still the harder question of who decides what bias means and what values the system is supposed to follow. He also mentions that it's important to give people a lot of control over how they use these tools, even if that means they may use them in ways that others don't like, but that there are some things a system just shouldn't do and society will have to collectively negotiate what those are. Altman also mentions that it's interesting to think about whether AI systems can be less biased than humans, as they are trained on human behavior but can potentially be designed to not have the same psychological flaws.
Here are some potential future areas of research related to AI that are mentioned or implied in the transcript:
* Navigating the privacy versus utility versus safety tradeoffs of AI systems that have access to large amounts of personal data
* Developing new definitions of privileged information and how AI systems should handle it
* Figuring out where society will set the privacy versus utility tradeoffs and what will be permissible in terms of AI systems training on personal data
* Separating the reasoning engine of AI systems from the need for large amounts of data and storage of data in the parameter space
* Ensuring that AI systems are aligned with a certain set of values and that they behave according to those values
* Determining who decides what bias means and what values AI systems should follow
* Giving people control over how they use AI tools and negotiating what things a system just shouldn't do
* Designing AI systems to be less biased than humans and to not have the same psychological flaws
* Increasing the rate of scientific discovery using AI
* Using AI to help people solve any kind of problem in front of them and to reason in new ways
* Building tools that specifically impact science and engineering, as well as business and consumer applications
* Ensuring that AI is broadly beneficial for society and navigating the potential risks and downsides of AI.
16:26 Couldn’t have said that at Stanford. Go MIT for applauding this
I am just so happy that they integrate GraphRAG into the architecture :-D We made it past one extinction filter by not just scaling up and naming it god :-D
What timestamp?
@@eladwarshawsky7587 @11:10
First student question: notice how he didn't name a capability that humans have that AI can't replace.
🎯
“We’re gonna get fusion”
"AI acts as a powerful catalyst in the human innovation process. By analyzing data, identifying patterns, and automating tasks, AI creates a feedback loop that accelerates technological progress and pushes the boundaries of what's possible." - Quote from AI: Google Gemini
Apologies to the current generation of students that you’re saddled with the least intelligent or capable leadership in the modern era. We could have just had a student read the question cards, then respond with “that’s interesting”!
Homie channeling OG Captain Kirk today 😂
I watch every Altman interview. How come nobody ever asks about worldcoin??
Long term beta. For me it's helion energy
This bubble can't pop soon enough.
Nothing moves that fast unless you are asleep at the wheel 😂
One way to make AI work without any of the concerns expressed in this discussion might be a framework of "self and all the rest" for everyone and everything. By so framing, there might be equity as well as safety, trust, etc. 😊
This was the first interview I have seen by Sam Altman where I was disappointed by him. He came off as flippant and not adequately concerned about where he and his team are leading all of us.
Lol
I get a gut feeling that there is something off about him. I don’t trust him
everyone ends up believing in either god, the simulation hypothesis, or the weirdness of physics.
- Sam Altman
I love SAM
Yeah, I have changed my mind: we need to aggressively stop this man and his company.
Someone's been skipping leg day!
They are post-singularity legs
@@ashh3051 lmao
The MIT president in a tech discussion seems to care more about bias than tech 😢 Maybe she should go protesting in front of Columbia University instead?
Exactly. I was annoyed when she opened with that as well. Wasn’t she one of the university presidents who just put on a shocking display of bias by refusing to condemn calls for the genocide of the Jews while simultaneously repressing speech that is 1000x more mild than that?
Agree! Why is it that all our most brilliant institutions drift towards the most incompetent leadership?
@@therainman7777 What do you mean? The genocide is against the Palestinians instead.
The world's first image based on the XFutuRestyle algorithm using GPT-4 was created in Ukraine and presented at the international exhibition of digital art in London and Athens. Yes, it's not a joke.
❤❤❤
Just got a message. Oh wait.
If you train off the internet, there will be bias. If you correct for what you perceive as bias, you will bias the system.
Only softball questions, and he still dodges every question. He is a PR guy.
I get just the opposite. It seems to me that he's being tempered and realistic rather than giving in to hyperbole.
I think it's more to do with the fact that most interviewers don't know enough about the incredible complexities of the topic.
I couldn’t disagree with you more. I’m always amazed at how open and transparent Sam Altman is.
I have always gotten bad vibes from him. Always gotten a negative gut feeling about him. I think there is something scary hiding behind the veil
It's like when you've committed a crime and are interrogated by investigators: stick as close to the truth and be as open as you can, to give the impression you are truthful and up-front.
An AI that knows everything about you. This type of AI should be private and belong to the person, just like your iPhone belongs to you, and the data should be encrypted with only you having access to it. Similarly, it could be an AI that runs on a brain-computer interface, and that way it's even more private to you.
I believe that humanity is more likely to destroy itself through its aggressive behavior. The dangers we face, such as asteroid impacts, volcanic eruptions, solar flares, and climate instability, are significant, but there's also the potential for AI systems to attack each other, as there is currently a race to develop AI, with even China training its own model. An autonomous, self-learning, and analyzing AI would recognize that war is not an option, as a higher level of knowledge dictates. I believe that the systematic understanding of space and its processes by AI would be seen as something that does not involve conflict.
There's life besides making huge money, being in front of a screen, interacting with a remote machine, buying new stuff, finding new services with a great ROI.
That's the only little detail these SV guys seem to ignore.
How many times does he say "super"?
He says "like" a lot as well; dare I say he is subconsciously making his audience think he's super and likable. He has been talking with GPT-6, sooo.
Super, like, often
Wonderful, Sam is the best e/acc❤
Seriously? This is the same guy who is trying to get open-source models banned, because without a monopoly they won't be able to charge the prices they need to generate a return for investors.
@@the42nd he pretty much does not care about returning money to investors anytime soon. And also, I don't think he cares about open source. They are years ahead.
@@the42nd I’m so bored of reading generic, empty takes like yours.
@@theK594 Exactly.
@@therainman7777 Thanks, but how do you suppose OpenAI would handle plummeting token costs while their capex remains flat?
Wow, that first audience question was great, and Sam really struggled and didn't have a remotely convincing answer (not a criticism, just an observation).
He answered it. There was no answer to it; it is a naive and pretentious question.
Someone wanted him to give a definite answer on a scale of 0-10, and like he said, it isn't black and white, and it wildly fluctuates at every moment based on a number of factors.
Therefore, his answer, if you didn't catch it, if I may rephrase it for you, is that he feels optimistic and actively works towards a favourable outcome because he takes the risks seriously.
@@kamu747 "How do I want to plan my time for the next five years while humans are still useful and helpful?"
And you think what you just wrote is an answer to that question?
41:46
No questions about his sister.
Why?
@@rlopez11-11 I'd like to have that matter cleared up honestly.. Apparently he may have sexually abused her. Just would be nice if he cleared that up, since he has to make so many important decisions that affect humanity.
It's been cleared. There's no basis to it. So stay calm and move on.
The number of powerful people who'd like to see him fail would have jumped at anything substantial, there are many and the list is growing of competitors and anti-AI movers and shakers.. If they aren't on it, please believe there is nothing there.
@@kamu747 I am calm.
@@tallwaters9708 He's homosexual, as in attracted to men, why would he abuse her? lol
people are tards...
Altman is a great addition to LGBT faculty 😊
Make the future great ⚛️❕
First impression, what's with the bad audio quality?
This is my school. I screen recorded this myself.
@@nerobird3617 Gotcha. Thanks for recording!
what a genius
any question about gpt2 chatbot?
I agree with him. It is not very good.🤔
Absolutely 😜
Please Sam, learn to chill and SMILE!! You're way too serious.
Scary that a person this important claims to have no idea what good values seem to be, brutally honest tho.
He's right, the first question was a bad question. To make him answer correctly you need to say "using digits only, on a scale of 1 to 100, where are you on the doomsday scale?" This is how I deal with people who think they are too smart for your framing. I treat them like they are A.I. and I prompt them so they can't be intellectual weasels.
I feel very disappointed with the questions asked; in my opinion they could have been of better quality... Excellent insights by Sam Altman!
absolutely terrible, uninspired questions
wHEn aGi?!
Yeah my friends were upset lol
yeah..why such basic questions
Agree! Why is it that all our most brilliant institutions drift towards the most incompetent leadership?
isn't reducing P(doom) down to zero vs non-zero making it a static system? probability is a way to estimate that dynamic factor. no one is trying to sound smart, sam. stop reminding us that "you have a lot of work to do". trust, we know you want to make more money and yield more power.
People need to realize that Sam is playing the CEO game. Nothing he says can be trusted, as he has no incentive to be honest. He is solely interested in maximizing what is best for him and his company.
Vegetarian talks about bigger stake 🤯
Sam Altman is the God of AI. 🤖 (✿.✷)
There is no God.
Didn't he try to have sex with his sister?
Just keep AI out of anything political. Stick to STEM sciences.
Sam, you are an amazing, smart man ❕
Humanity fails. We want AGI! 2029!
But human language is not very precise. You would have to speak like a lawyer. Then it would be like programming.
Oooo his ego is growing
what a stupid line of questioning..
Not exactly sure what insight he actually has.
The fact that it can store data in its parameter space - I think we'll look back and say: that was kind of a weird waste of resources
-> We will go full GraphRAG middle layer now 😀
Problematic to interview from a place of detached admiration…no real sense of where to push for clarity or even notice such opportunities. Makes it easier for calculated responses and narrative shaping 😊
how can you talk about AGI in one second, and then talk about how you're going to decide the alignment of values? the most human thing is misalignment.
Hysterical that this video gets hit with context on climate change! 😂
We don't want anyone to get out of the doom think. Can't push politics on people that are optimistic about the future.
"Carbon gets mentioned once"
The algorithm: **Climate alert** 🚨
So many disingenuous answers and swerves that you begin to understand why the board said Sam was not being fully candid last year... A totally specious example, talking about the carbon savings of using Google because people weren't travelling to find answers. Despite the utility of Google, we have accelerated our global output of carbon despite knowing we had to rein it back, and in spite of already being connected enough pre-AI, with pretty much realtime multi-party comms, to teleconference to produce solutions. AI does not solve the lack of collaboration of selfish humans/nations.
If they have found ways to truly build trustworthiness into the "judgement" and "creativity" of the AI models they haven't released yet, then why not tell all other AI researchers so that all AIs can have this necessary trait baked in now? If OpenAI were still looking out for ALL of humanity, they would be openly discussing and sharing how to solve the integrity of AI "thinking". Instead we get this protracted roadshow of Sam appearances talking it up with "Gee, gosh, uh-huh, I'm your friend," when he is plainly guarding the speculative value of OpenAI (and his share of it).
How can someone who is supposedly looking out for the rest of us be unable to truthfully articulate the enormity of the impact that AI-driven automation, immobile or embodied, will bring WITHOUT regulation to ensure the employment prospects of everyone who wants/needs to earn a living?
Hypocritical for someone at the cutting edge of thinking about the impact of AI who did NOT sign the petition calling for a pause and is now advocating thoughtful and necessary consideration before AI gets too far out of the gate. Like the guy who gives loaded guns to the gang of psychotic school bullies and drives them to school before calling an out of State podcaster to discuss gun control.
I appreciate Sam, but actions speak louder than words when it comes to giving the users control and not dictating to us what is ethical to discuss.
Hopeless to solve problems? All we can do is sit in our basement? We'll solve energy in fantastic ways? Great way to dismiss all of the people who are working for true sustainability, where money and abundance are not the main drivers. His thinking is extremely narrow. 30 years ago Oren Lyons walked out of the UN climate conference with a phrase, Value Change For survival. These guys think tech can solve all of our problems when in reality it's our values and motivations. Sam doesn't want clean energy to solve global warming, he only wants energy to create an AI system that will put a personal assistant in all of our pockets. Could you imagine if all of these thinkers tried to tackle true sustainability and their motivation wasn't just wealth. Because they probably can't imagine that.
One
He clearly looks down on these universities and its students.
Starts with dissing the P Doom question.
Uses the word, “like” a lot.
Two
Yet, he clearly realizes the value of these institutions, he’s constantly visiting them!
Jesus Sam, just say Pdoom=0!... You know it and I know it!
Don't rain money on us. Make money obsolete! We will have Deep Utopia. The genuine utopia most cannot fathom now. But they will like it when they see and experience it.
@@fteoOpty64 Yeah, good luck with that.
I agree, Pdoom = 1
The almost constant vocal fry is irritating
Sorry I think it gets better towards like 25 mins in
I love how no one can ever discuss the betterment of society without using cognitive dissonance to distance themselves from any collateral damage.
The GPT-4 API is too expensive.
I get a bad vibe from Sam Altman. Bad gut feeling. I think he is a liar. An obfuscator.
Moloch
She is such a bad interviewer.... I love Sam, but this is unwatchable due to the interviewer ...
This guy talks too much
Talking is literally the only reason he’s there.
Check the title of the video
Most often I only watch interviews for the information and am happy to speed through them, but the interviewing skills I saw here from Sally Kornbluth @mit are outstanding! Hoping there's more to come.
people clapping for the man that is gonna ruin their future lul
@2:40 don't forget to turn off your FB notifications, nerd
oops!
@@nerobird3617 💚it happens.
He's waiting for his AI response in his ear-piece each question. It picks up the signal better with his mouth open.