- 207
- 15 476 392
Future of Life Institute
Joined: 18 April 2016
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, and nuclear weapons.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
How might AI be weaponized? | AI, Social Media and Nukes at SXSW 2024
FLI's Anthony Aguirre speaking on the panel 'From Algorithms to Arms: Understanding the Interplay of AI, Social Media and Nukes' at South by Southwest (SXSW) on March 9th, 2024.
See here for event details: schedule.sxsw.com/2024/events/PP138144
Featuring:
Anthony Aguirre: Executive Director, Future of Life Institute
Frances Haugen: Beyond the Screen, former Facebook Product Manager
Jeffrey Ladish: Center for Humane Technology, Head of AI Insights
Emily Schwartz: Communications Partner, Bryson Gillette
1,205 views
Should we slow down AI research? | Debate with Meta, IBM, FHI, FLI
3.9K views · 19 hours ago
Mark Brakel (FLI Director of Policy), Yann LeCun, Francesca Rossi, and Nick Bostrom debate: "Should we slow down research on AI?" at the World AI Cannes Festival in February 2024.
Members of Congress Want to Ban Deepfakes
297 views · 21 hours ago
U.S. lawmakers are waking up to the urgent need to address the rampant deepfakes issue. Here's what some have recently had to say on the topic. To learn more about deepfakes and the harm they're increasingly causing across society with deepfake-powered sexual abuse, disinformation, and fraud, visit bandeepfakes.org/
Emilia Javorsky at 2024 Vienna Conference on Autonomous Weapons
268 views · 21 hours ago
In the panel "How Dealing With AWS Will Shape Future Human-Technology Relations", Emilia Javorsky addresses over 900 attendees of the Vienna Conference on Autonomous Weapons - including representatives of over 100 nations - making it the largest gathering of policymakers on this emerging weapons technology. Extracted from czcams.com/video/A1DyH7N3ppE/video.html More info: www.aws2024.at
Dan Faggella on the Race to AGI
5K views · 1 day ago
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at danfaggella.com Timestamps: 00:00 Value differences in AI 12:07 Should we eventually create AGI? 28:22 What is a worthy successor? 43:19 AI chang...
Anthony Aguirre at 2024 Vienna Conference on Autonomous Weapons
298 views · 14 days ago
In the high-level panel "Geopolitics and Machine Politics: How to Move Forward on AWS", Anthony Aguirre addresses over 900 attendees of the Vienna Conference on Autonomous Weapons - including representatives of over 100 nations - making it the largest gathering of policymakers on this emerging weapons technology. Extracted from czcams.com/video/Ju9fvM6pAS0/video.html More info: www.a...
Jaan Tallinn Keynote: 2024 Vienna Conference on Autonomous Weapons
599 views · 14 days ago
In the high-level opening, Jaan Tallinn addresses over 900 attendees of the Vienna Conference on Autonomous Weapons - including representatives of over 100 nations - making it the largest gathering of policymakers on this emerging weapons technology. Extracted from czcams.com/video/Ju9fvM6pAS0/video.html More info: www.aws2024.at
Liron Shapira on Superintelligence Goals
2.1K views · 21 days ago
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power 05:18 Will LLMs imitate human values? 07:15 Why would AI develop dangerous goals? 09:55 Goal-completeness 12:53 Alignment to which values? 22:12 Is AI just an...
Annie Jacobsen on Nuclear War - a Second-by-Second Timeline
79K views · 1 month ago
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack? 13:50 Detecting nuclear attacks 19:37 The first cri...
Katja Grace on the Largest Survey of AI Researchers
1K views · 2 months ago
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. ...
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
1.2K views · 2 months ago
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 Technological progress 37:00 Safety research during a...
Sneha Revanur on the Social Effects of AI
496 views · 2 months ago
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at encodejustice.org Timestamps: 00:00 Encode Justice 06:11 AI ethics and AI safety 15:49 Humans ...
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
2K views · 3 months ago
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at cecs.louisville.edu/ry/ Timestamps: 00:00 Is AI like a Shoggoth? 09:50 Scaling laws 16:41 Are hu...
Special: Flo Crivello on AI as a New Form of Life
876 views · 3 months ago
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risks regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years. Timestamps: 00:00 Technological progress 07:59 Regulatory capture and AI 11:53 AI as a new form of life 15:44 Can AI development be paused? 20:12 Biden's exe...
Carl Robichaud on Preventing Nuclear War
1.1K views · 4 months ago
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: www.longview.org/about/carl-robichaud/ Timestamps: 00:00 A new nuclear arms race 08:07 How much do world leaders matter? 18:04 How much does ideology matter? 22:14 Do nuclear weap...
Frank Sauer on Autonomous Weapon Systems
961 views · 5 months ago
Darren McKee on Uncontrollable Superintelligence
2.1K views · 5 months ago
Mark Brakel on the UK AI Summit and the Future of AI Policy
572 views · 5 months ago
How two films saved the world from nuclear war
373K views · 6 months ago
Dan Hendrycks on Catastrophic AI Risks
2.5K views · 6 months ago
Samuel Hammond on AGI and Institutional Disruption
3.3K views · 6 months ago
Imagine A World: What if AI advisors helped us make better decisions?
486 views · 6 months ago
Imagine A World: What if narrow AI fractured our shared reality?
795 views · 7 months ago
Imagine A World: What if AI enabled us to communicate with animals?
643 views · 7 months ago
Imagine A World: What if AI-enabled life extension allowed some people to live forever?
712 views · 7 months ago
Johannes Ackva on Managing Climate Change
292 views · 7 months ago
Imagine A World: What if we developed digital nations untethered to geography?
520 views · 7 months ago
Europe 99% US 99% China 99% Russia 98% we have a clear winner 🏆
It seems the world will soon face this destruction as punishment from the Lord God for staying silent on the oppression in Gaza and other parts of the world by the oppressor nations of the world.
DefCon 3 became a club hit: czcams.com/video/msFz0CBdnVc/video.htmlsi=Hq5ztWl5-t-WKw6N
Sadly there will never be a nuclear war or an EMP, etc. Instead, people like me will take the brunt of all humanity's abuse, and that's it. All there is.
Hominids, Gus. And Spciest Gus. And Gus, im just a guy, Gus. You know, Gus?
👍
Yann & Francesca are utterly oblivious & speaking about something completely different. A fast-approaching AGI/ASI is not the "internet," & 100% poses a potential existential threat. Perhaps they can tell us why the vast majority of AI experts, even most of the "optimistic" ones, have stated on record that AI poses a non-zero probability of resulting in the end of our species.
" My only friend the End " Jim Morrison/The Doors 1967 ⏰️
His voice sounds like Elon Musk's
If this war happens, and you are one of the last survivors, you will die in a survival suit, completely covered, including a gas mask. You will freeze to death, but you as an individual "won". What for?
I thought our limited but technically capable ABM systems, like THAAD, were built to deal with the rogue launch. The successes of the Patriot missile and Iron Dome indicate a refined American capability to solve the challenges of interception, although not with 100% assurance of success. Ballistic missile submarines do have to be concerned about the ASW capabilities of their opponent. The P-8 aircraft, US Navy and Coast Guard surface vessels, SOSUS type systems, and attack submarines are threats to ballistic missile submarines. If a ship with SM-6 missiles is within a close range, interception of the missile in boost phase is possible. More likely the launch coordinates can be established and missiles directed to that location. A ballistic missile submarine would have to make a choice about a curtailed salvo or fire missiles until it is destroyed. The closer the sub is to the adversary's shore, the more likely the sub is hit before it can launch all its missiles.
Wow, excellent discussion. I wasn't aware that we don't have a separate doctrine for a rogue-state nuclear missile attack. I would have thought the response would have been a massive air and sea attack with conventional weapons. This assumes only one or two missiles were fired at the U.S. homeland. If the president does not have that discretion under laws passed to do such a thing... that is horrible. 80 ICBM missiles as a response is nuts.
The genetic problem with mankind will prevent us from ever going where no one has gone before. The genetic problem? Human DNA. We are a flawed design & will self-destruct before this century is concluded. The 20th century was the beginning. The 21st will be the end. We have been murdering each other from the start. Soon the world will be rid of its plague of humans & maybe then it will become a paradise.
Congratulations humanity! You took this as a template, not a warning. Great job.
I am constantly confused by one thing. Why isn't anyone as worried about what people will do to other people using machine intelligence? Even if the AI is properly aligned, people will just use it to destroy other people, right? Everyone will lose their job. Everyone will be manipulated and radicalized. The wealthy elites will reach escape velocity and leave the rest of us behind. Why is this never discussed? Obviously, it will happen. Does nobody ever worry about simply starving to death?
An entire AI talk about weapons and they did not mention Lavender.
Climate activists won’t be happy
Humans do not understand machine-produced answers because those answers are never as comprehensive as those produced by humans.
Biotech conferences have only slowed down the progress of biotech.
The ability to create something more intelligent than yourself is a logical impossibility. It is just like the so-called scientists who thought that this new thing called electricity would bring the dead back to life.
Ultimately we will have the capability to process very large data sets with incredible speed, and it is going to be very good, just like all the other major technologies in the past, each of which was going to end the world, but eventually every one of those turned out to be beneficial, overall.
Human intelligence and hence superintelligence is not just faster processing of larger data sets.
No.
Guess I'm too into nuclear weapons, because this information is nothing new. She has said absolutely nothing that I didn't already know. That's why I chuckle when I hear idiots say we should nuke this country or city.
How does nuclear war start? Hmm, well, the idiot in the White House enters a code into the football, that code goes to the subs, silos, and bombers. They press a few buttons, turn a few keys, then we all die.
AI advancement will vastly improve life for all sentient beings. Mankind has NEVER been better off with less technological development. It always, without exception, leads to net quality of life improvements.
😷 ☕ 🇺🇸
The families need to be rehabilitated and stopped.
I know Ukrainians who believe starting a nuclear war is doing God's work!
We have had nuclear weapons for 90 years and not one single nation has ever even used one. Threats are being issued, sure... people get mad and try to scare each other. No one has even attempted it. It's a fantasy to think suddenly it's going to happen now. No one is insane enough to hit the button. There are too many obstacles. It's just not going to happen. This fearmongering was all over the Cold War and no one even tried. It ain't going to happen. You're all going to get up tomorrow and go to your mind-numbing jobs and get abused by your bosses and coworkers as usual... money keeps inflating and pretty soon no one but the elite will be able to buy a candy bar, but apart from that, no disaster is coming. It's just people being jackasses as usual.
You are mistaking a low-probability event for an impossible event, mixed in with some psychological denial of "the unthinkable".
Annie Jacobsen sounds like the Hollywood Rona Barrett of Thermonuclear War.
Wow! Smart and beautiful! We must stop the madness! "Uhh, don't ask me how!"
24:13 AI can already do all of that.
War... war never changes
She is hawking a book, plain and simple. Nothing anyone can do about it, so stop worrying about it. No one gets out of it alive, so stop freaking out, sheep.
Putin doesn't care about the atomic or hydrogen bomb. He just cares about weapons of mass destruction and how many innocent and helpless men, women, children, and innocent and helpless precious animals he can kill AND SLAUGHTER INHUMANELY. HE GOT DEMENTIA FROM CANCER OF THE BRAIN AND HE HAS BECOME A RAVING MADMAN. HE COULD NOT CARE LESS ABOUT THE CATASTROPHIC APOCALYPTIC ARMAGEDDON. HE HAS COMPLETELY BECOME DICTATOR OF THE WHOLE WORLD AND UNIVERSE. 😢
Stop trolling 😂
@user-xt5yz8wm7z It is true, bozo
@blitzwingprime Nope 😂 go play with your Transformers, kid 😂
@blitzwingprime In nuclear war everything will be destroyed 😂
Don't want a nuclear war? Simple: get rid of Putin and close down North Korea. The only people that don't have any morals.
Well, how do you propose to engineer the coup d'état to get rid of Putin? Get real.
As a European living in the USA, I would urge Putin to be damn careful. America has vast wealth and loves weapons and inventing newer and better weapons. You might think you know the cards America holds, but you don't. Japan thought they knew America's hand in WW2. The USA armed Russia in WW2 and still handled Japan and German-occupied Western Europe and flattened all major German cities. The USA is 10 times more powerful now and is only helping Ukraine a little. Russia has mighty nukes. America has that and already has the secret 22nd-century weapons you only dream of.
lol, proof? The US is in huge debt of 31 trillion dollars right now and their economy is on the verge of a depression; gas prices skyrocket and consumer spending drops. How do you think the US supplied Russia in WW2? Russia built factories that produced hundreds of tanks per day in WW2 and they did not need any American assistance. I need to know where you get your source, bud 😊
Nuclear war will happen if NATO or America sends troops to Ukraine, period. If you are in a NATO country or America: a warning for civilians all over the world.
It's funny that in America, the "crazy one" always comes from a foreign land, or even from space. One of the most dangerous aspects of the American exceptionalist mentality is that the baddies are never inside the U.S. government. But in the recent events of 2023-24, we see how evil and criminal the current administration can be, not only against other countries but against their own people, including some disturbing chapters from years and years ago. The principal aggressor against the world is the zionist state of America, and even before that, the expansionism of the so-called "American empire" is the true root of destruction around the world. And only with the end of that kind of system can peace, even the peace of the graves, be assured.
"The day after" was a Disney tale about nuclear war. More accurate and realistic was, with no doubt, the british serie "Threats". Few people inside the U. S. know it.
Iron Dome...? That crappy, expensive, useless complex that just can't stop Iranian missiles, which hit the most important U.S.-zionist military installations and radars precisely and with crisp efficiency...? I listened with respect and attention to this video article, but I found really disturbing traces of American exceptionalist mentality and gaslighting preconceived ideas about Russian and Korean capabilities... And now, seeing the so-called "Iron Dome" taken as an example of technical efficiency and marvelous weaponry, I know how misled these journalists really are. The world has changed, and you have to know how on the edge everything really is.
Why would Russia use nukes against the West? Putin must know the West could easily retaliate with their own nukes and level Moscow and St. Petersburg (Putin's home) within a few hours.
I've never read such a stressful book!
Robert Hansen is a reptoid globalist
46:20 Say the US developed the sand god, the first true AGI, and kept it mostly to themselves, leaving everyone else in the world far behind in their development. What if it were China instead, or India, or Europe? Who among those who are not first would just bow their heads to the new overlords of another nation? If those AGIs showed their respective nations ways to prevail better in geopolitics, how long until others say it's enough? What if the one with AGI could not be convinced by argument or international pressure to keep it down a notch? Is that not, in a sense, steering nations toward war?
48:16 The overpopulation problem is contained, as the reproductive rate is already at or below 2 births per woman as a global average. See Hans Rosling's explanation of the filling-up effect. czcams.com/video/2LyzBoHo5EI/video.html
And the best tactics to slow down overpopulation are: a) send the girls to school, b) let people get richer, from poverty to a middle income.
The anti-missile shields would be deployed, and several of those missiles would be destroyed en route and would not all reach their targets.
My grandfather was a logistics driver in WW2. I'll never forget him telling me the earth cannot take WW3, especially with the tech we have today. This was said in the early 2000s; it's 24 years later, and tech will have advanced a lot more since then. Voldemort is getting to the end; what if he takes us all with him! God help us.
She can see into the future with those thick glasses