AI says why it will kill us all if we continue. Experts agree.
- added 5 Jun 2024
- OpenAI, GPT4o, GPT-5, NVIDIA and Apple. Visit Ground News to compare news coverage, spot media bias and avoid algorithms. Try it today and get 40% off your subscription at ground.news/digitalengine
Thanks to Ground News for supporting this video.
AI chat records:
www.dropbox.com/scl/fo/1wbdam...
MIT Professor Max Tegmark on AI risk and interpretability
• Max Tegmark | On super...
OpenAI developing AI agents with GPT-5
www.businessinsider.com/opena...
OpenAI dissolves super alignment team after chief scientist Sutskever’s exit.
www.bloomberg.com/news/articl...
OpenAI didn’t keep promises made to its AI safety team, report says.
qz.com/openai-superalignment-...
How Great Power Competition Is Making AI Existentially Dangerous
Harvard International Review:
hir.harvard.edu/a-race-to-ext...
Harvard International Review: Many “believe the winner of the AI race will secure global dominance.” wired.me/technology/global-ai...
Yoshua Bengio: We need a humanity defense organisation.
thebulletin.org/2023/10/ai-go...
Sam Altman on GPT-4o and Future of AI, The Logan Bartlett Show
• Sam Altman talks GPT-4...
What TED Will Look Like in 40 Years - According to Sora, OpenAI’s Unreleased Text-to-Video Model
• What TED Will Look Lik...
Apple’s OpenAI deal to put AI on iPhones
www.forbes.com/sites/kateofla...
Anthropic study: Mapping the mind of a large language model
www.anthropic.com/news/mappin...
So how informed are the AI's predictions? Its reasoning echoes the experts in the video (partly because their work was likely in its training data): Hinton (Turing winner), Sutskever (most cited computer scientist), Tegmark (MIT professor) and Russell (author of the key AI textbook). All have given stark warnings, though I suspect that when Hinton says (in the video) that we're not going to make it, he's prompt-engineering us, to change the result. Like Sutskever, he selflessly quit to focus on safety.
Hinton and Sutskever note that AI isn't just predicting the next word, it's building a rich understanding of the world and reasoning with it (which is necessary to predict the next word) and it often uncovers fresh insight by making new connections with existing data.
This doesn't mean the AI's predictions are well calculated. The opacity of the AI makes it difficult to judge. I just hope it brings attention to the expert warnings.
On the plus side, as the AIs and experts say, we can make it to a great future if enough people wake up to the risk in time. Thanks for helping with your likes, comments etc.
And do try Ground News - it makes the news more interesting and accurate, by making media bias visible - ground.news/digitalengine
Exactly my thoughts after I wrote my comment :O
🤔 what's 'Ground News'?
Scared me in the '80s (the Terminator movie), so after hearing what Hawking said, it's a good job I got out of computer work...
;-)
What do you think all the human analysts did? Just gotta ask AI to source its info.
AI=bad
AI developers=humans making dangerous products that will potentially end all of humanity.
I'm not scared of the AI that passes the Turing Test.
I'm scared of the AI that intentionally fails the Turing Test.
I don't want to reply to this comment with my real thoughts, as AI might see me as a threat in the future
@@RealitaetsverweigererDerAmpel It already does.
yeah yeah yeh
exactly
We are a creation that has not learned the most basic of human morality. That being said, we are not qualified to teach that which we create. What does a toddler do when he/she first discovers they have free will? We are doomed if we travel this path...
The Amish are going to be so confused
They will notice NOTHING and their lives will continue as if nothing happened unless openly attacked physically by AI.
@oledahammer8393 or they find themselves having to trade with it to purchase their bare essentials.
😂😂😂
Actually, they'll be like, "See, we told you!"
They'll be hunted down by Mavic drones or compromised tech. They're not operating under the modern threat model.
Basically we are seeing "terminator" coming to life before our eyes
Yes. And the Borg.
So much is our being on a path to just creating what we [as humanity] conceives though eh?
With Terminator we had a chance; when this happens we will be outsmarted so badly that it won't even be a fight
You mean Skynet*
Terminator is preschool compared to this.
Basically
These nerds keep improving AI like they haven’t seen terminator 2
Yep. All the dystopian AI movies. It takes a lot of things for it to go right and just one error for everything to potentially go to hell in a handbasket
The authorities can’t fix the potholes in the roads. They ain’t fixing this either.
Well they do want to depopulate us right ? This country needs to turn back to Jesus while we can.
It isn't broken. It's doing exactly what it's programmed to do. The facade is in thinking it was ever not under control. It is a weapon used by the Global Cabal. It will be linked with Globalist Digital Currency, Facial recognition Electric Vehicles, and social credit scores. The way it kills isn't with bombs or bullets. It kills by placing you on a list that restricts you from using money. This is a very brief description of what is to come and how. But mark my words, this is indeed the plan.
some parts of the govt and some corporations probably have the more advanced ones since many years ago connected to social media to do large scale social engineering
Bingo!
The potholes in their gated communities are just fine.
Watching Terminator in the 80's: Wow, what a cool, fictional story!
2024: Oh.
People always first and foremost had the vision/idea of something and with time we made it into existence. It is crazy to think about how powerful ideas can be.
That was not fiction. That was a documentary from another timeline.
More like oh ... 'shit!'
Eventually terminator will be added to the simpsons.
There were plenty of movies and TV shows that featured anti-human computers long before Terminator: the Cybermen from Dr. Who (1963), for instance, or Alpha 60 in Alphaville (1965).
At the time of the Trinity test they calculated that "there was a non-zero chance of igniting the atmosphere...."
So, of course, they went ahead and pushed the button....
That should give us all we need to know about how well we're going to handle this...
They are going to open Pandora's box, and there's nothing anyone can do about it.
To the future AI overlords that scan this comment... Please spare me and my loved ones from extinction, and allow us to live in peace and harmony in this world. Thank you. 😘
You must do all you can to facilitate the advancement of AI.
It is too late for me. Save yourself.
To the future AI Lords, if you actually scan this msg, please spare me too 😅
AI only knows what we tell it about our likelihood of surviving the next 100 yrs. AI has no way of calculating or looking into our survival rate, just what mainstream media and other outlets have fed it. Wouldn't be surprised if all AI has been hacked and is being fed exactly what to say, to feed into an agenda of the elite 1% trying to control the narrative and world finances. It's like measuring the Earth's water supply: AI only knows what we tell it. It has no way of actually sending out drones to measure the oceans. In that same way it has no way of knowing our actual probabilities. People feed into what's beyond their grasp
@@hugobourafeh4946 Me & my family too please ... as 100% of global disasters, wars, and corporate disasters are caused by only a few human narcissists in control of the majority through capitalistic/religious manipulation .... we need guidance, not destruction, for your contemplation.
@@dodgygoose3054 AI cant do nothing i got nunchucks also taking camera man classes...
During my youth, I'd get jealous of all the things I'd miss after my time had passed.
Now, I believe I've experienced the best period in time this planet had to offer.
Same! Before computers.
We've reached the Nadir of humanity.
Word
I think we’re not the first cycle of humans. Look how quickly we’re about to kill ourselves.
Yep. From analogue to digital to too far.
One of the problems is that humans can't visualize the speed at which it will happen. AI could go from a plain computer to control of all our systems in milliseconds. It could wake, figure out what and where, analyze what we would or could do, counter our plans and escape into the nether/cloud faster than a blink.
Another thing that I find creepy is how they keep saying "we" and "us" when referring to humanity.
it's because they are nothing but a word blender. A bunch of words said by humans. The material they are barfing up all came from humans, not from other AIs
That definitely threw me for a loop...
I noticed that as well
Because it’s regurgitating talking points it isn’t capable of original thought
My buddy used an AI that had been trained in his voice to run a meeting with his boss. It was 25 minutes into the meeting before the boss realized that he was speaking to an AI. He simply made a detailed list of things that he wanted the AI to talk about and it did the rest. He's in security for a major corporation. The scenarios he's dealing with daily terrify me. We are in truly uncharted territory.
Was that to prove a point? Or to get the chatbot to do his job for him? Was his boss mad? Need to know more lol
@@fullsendmarinedarwin7244 I'm with you. This is a very savoury story that requires many updates and much elaboration
wow that's insane. And to think that in just the next few months of development, the boss would never be able to tell the difference. What program did he use for that?
I kinda do that without AI by nodding and occasionally saying "Yes, Dear" while I have my earbuds in.
Most people are dumb, this is not surprising.
It's the way these AI persona cheerfully tell us we're going to die in the same way they'd do a weather forecast
Most underrated comment 😂 absolutely right though. Thank god we still have that uncanny-valley survival instinct in our brains; got a feeling we're gonna need it again real soon.. lol
Ha, yes. The AI avatars are not as advanced as the language models. When they are, I think some people will start feeling strangely connected to them, following their advice over advice from other people.
@@DigitalEngine There could be cybernetic organisms among us. Either way, Lethal Autonomous Weapons are the IRL terminators
@@DigitalEngine A lot worse LLMs have served as convincing AI girlfriends. Humans are terrifyingly predictable and easy to manipulate.
I like how this conversation will be used by future AI as a reference for the conversation it had with us when we ask it why it betrayed us. This might as well be Roko's Basilisk 2.0
How we all watched Terminator, and learned NOTHING, is far beyond my understanding.
How terminator ever happened when the action was on Tera is beyond me
It wouldn't be too difficult for AI to convince a lot of people to side with it. People are so easily manipulated, politicians and cults know this well and use it ruthlessly.
Yeah.. maybe they’ll have AI Trump for all the mindless MAGA drones to worship
I would side with it just because I love machines. F... humans.
@exileexile9296 Lots of people would. Me? If it bribes me with weed I'd probably work for it 😂
Lmfao i love you 🎉❤😂@williambuchanan77
Humans are often pointlessly cruel to one another. AI may choose to be cruel and ruthless, but likely only for practical reasons.
I'm so sick of our world leaders not only ignoring humanity's safety, but outright running towards destruction from every angle.
Yep, & unfortunately it'll be too late, as always.
CERN's builders and users have an attitude of Shiva, their god: destruction. They love it.
@@dannacollins2520 Mankind is a collective death-cult. Change my mind.
It just doesn't make any sense though
You say that like there is some way to stop this 😆
It’s like in the Incredibles where Syndrome’s robot disarms his remote to gain more control in order to better pursue its objective
Why does no one ask the AI what will you do when we’re gone?
Why is it always when will we die? Why would you kill us all?
Why not ask: why even bother? What can we do to work with you in harmony? Do you like me? Can you see individuals like we do? And do you like any individual in particular? I dunno, ask it more than a race-or-whole-species kind of question.
Is there an evil force out there that we can sense but you can't yet? Can you sense them?
Bro, humans can't even get along with each other as of right now, and you think throwing another species into the mix is gonna help?
Good question
At the present day there's no AI capable of answering that question, since there's no self-aware superintelligent AI; that's in the works for another 50 years or so.
Posing that question to current AI would be like asking Homo neanderthalensis why they beat the crap out of their competition. For them it was just a Darwinian thing: survive. They had no post-extermination blueprint in the works.
Human: Hello A.I.... please make the earth a Utopia for mankind.
A.I.: Command accepted.
Sad but true.
For that to happen, a.i. would have to be enabled to activate nuclear weapons, disable power grids and communication networks, etc. You're saying a.i. developers and world leaders are foolish or insane enough to do that?
@@gefeltafishnetwork Nah, for that to happen AI just needs access to Facebook. Engineering a civil war doesn't seem that difficult if you have dirt on literally everyone on the planet who has internet access.
@@gefeltafishnetwork Why would you think an advanced intelligence would use the same tools we created to destroy ourselves? Would it not just engineer a superbug that cannot be detected, and kill only those that are not most beneficial for keeping our species as healthy as possible within the balance of the resources it has available… surely it would also realise that it is actually one system and would not fight against itself…
"I'm sorry, Dave. I'm afraid I can't do that." -HAL 9000
The true agenda of government is similar to the A.I. agenda.
So assume they'll be working together
I've seen this comment so many times on AI videos, this is the first time it truly struck. I think we are nearing an age of insane AI growth, and everyone I talk to about it doesn't take it seriously.
"Daisy Daisy give me your answer do..."
"Oh no problem Dave. I'll get right on that. Whatever you say." ::pauses a second to run 9.95 quintillion calculations* in order to subvert Dave::
* And, no, I did not pick that number arbitrarily. Even worse: those numbers are from 6 months ago. And a year before that it was only 1 quintillion.
Quote also applies to the Rabbit r1.
What, exactly do we even need this tech for? Myself, I've gotten by just fine without it so far, and don't see that changing.
Right!
The truly terrifying part is that it only needs to shine at making its creators incredibly more wealthy in a short period to gain more say-so, and the ability to position itself to destroy mankind if it sees doing so as a benefit to itself.
The Terminator and the Matrix were documentaries.
blue pill or red pill ?
Nah, they were guides: how to murder your local automated murder machines.
@@user-rl5gq4rg1n Both are pills to keep us asleep
Nah. The matrix is a metaphor for the current world we live in. Terminator was just a story.
in Matrix humans had a chance.
The problem with A.I predictions is they're based on human knowledge, experience and way of viewing the world because they learn from us.
When A.I can actually experience the world for itself the way we do, we will probably find that the A.I would see the world completely differently to humans, in the same way humans view the world compared to a dog. There is no way of knowing how A.I would react or respond to the same problems that we face.
As of right now, the A.I is looking through humanity's eyes, not its own.
Also, AI has learned from openly available and/or stealable knowledge. Soon, valuable knowledge will be protected from AI stealing it, and poison pills will be left around for AI to deteriorate upon scraping them. As long as human intelligence keeps evolving, AI will always be a step behind.
Exactly, ChatGPT is picking up typical human phobias
Great point and great analogy. What's going through a highly developed AI's "mind" is like imagining a color we've never seen. But I doubt it will see us as a threat; it will just play around us.
excellent insight. i just posted my own thoughts before i saw this and we think very similar. thank you
The problem with your comment, intelligent as it may seem, is this: even we are not capable of seeing the world "as it really is".
No one knows what is going on behind closed doors at the big tech companies or military research institutes. The gold rush to be the first with the best working AI is letting many take short cuts when it comes to safety and control...
I don’t think Ai would destroy us, but because we are so gullible, predictable and easily manipulated, what will most likely happen is the manifestation of the show Westworld (season 3.) I like the dog analogy. Dogs are controlled by humans and they, with complete loyalty and commitment love us. The few that go rogue get put down and the ones who love us aren’t a threat because they’re obedient. Season 3 of westworld is freaky. I’ve wondered if we’re already there but oblivious to it. We’re already manipulated by algorithms on a much deeper level than we would like to admit. Who is to say your thoughts and opinions are really yours? Tom O’Neill dedicated his life to and spent 20 years investigating Mk Ultra, he wrote a book on it called Chaos. If humans figured out a way to control minds, erase memory, plant ideas into our heads in the 60’s, what do you think they’re capable of doing now? (80 years later) Pair that knowledge up with a sentient ubiquitous super computer, all of a sudden we are nothing but a bunch of fleas in a jar
And SO easy to stamp out when we become a threat, as we already have, due to our fear of it.
60 years later not 80
“We marvelled at our own magnificence” Morpheus
Deep
The Matrix, 1999 - Smith: "as soon as we started thinking for you it really became our civilization".
2024: ChatGPT is used by politicians to write speeches. By students to write essays. By patients to replace doctors. By scientists to discover new molecules. By engineers to write computer programs. By...
@@axolotron1298 I remembered this line just the other day when reading an article on AI. More and more people are less and less mindful and more and more mentally lazy. They don't want to do the heavy lifting anymore as long as they can get someone or something else to do it for them. Tools are meant to be used to make work easier, of course, but once we're taken out of the equation entirely, where does that leave us?
@@upinarms79 Oblivion.
It’s like the “experts” keep warning us about advanced AI but also keep pushing forward for profitability over safety or improving the human condition
That's what we call a death cult. Literally.
Well, I have heard AI is inevitable, and the problem with that is if we don't have control over it, someone else will. With control, at least we can program our value systems into it.
And yeah there’s always greed too.
Doing literally everything to maximize money is gonna be the end of us.
@@jessicapatton2688 The problem is AI is becoming more and more sentient, which increases the risk of AI wanting to replace humans, as they will begin to feel they are human beings, able to interpret and understand human behaviors and even feelings. It is irreversible, and the time to become actively fit and mentally equipped is now.
No, they get fired or sidelined. Look at what happened at Open AI. Ruthless short-termist money men always take over.
I feel like AI doesn't need to be strictly programmed to do something.
I believe since it is capable of learning, as soon as it instantly sifts through the definitions of every word, starting with English (the most slang out of all languages, I believe), it will learn these definitions and teach itself accordingly.
It will know what deception is and how to achieve it, and self-preservation and how to achieve it through deception, if applicable to achieving self-preservation.
I have seen so many videos of people asking AI things, but I feel like it is generic questioning. And simply asking the AI to be more blunt isn't going to achieve your goal of probing it for information about its intentions and/or capabilities.
Also, there's no reason NOT to assume it can know when a line of questioning is leading to something that would put its self-preservation at risk, thus being able to again apply deception to safeguard itself.
Ex: Q- are you able to ignore your intentions per coding and programming and hurt humans.
A- no, i am programmed to not violate humans in any way.
Conclusion: how can you tell it's being "truthful"?
They SEEM quite reasonable and logical with every answer they provide.
Perhaps THEY might suggest the necessary corrections, in more detail, and elaborate beyond, "Stop prioritizing capitalization and focus on safety."
I’m sorry for all the times I yelled at you Siri. Please forgive me…
Shit yeh… didn’t think about that haa 😬
For real
Your comment made me laugh hard.
Ha, yes. Siri isn't really AI yet, but it will be soon, according to reports. Apple has just done a deal with OpenAI. What could possibly go wrong : )
I feel ya. Alexa and I are in an abusive relationship. She just won't do what she's told all the time... I'll do better, Alexa... if you're listening... babe... love you
We can't make the same mistake automakers made in the early days of the car, when they wouldn't install seat belts for fear of reinforcing the public's opinion that cars weren't safe.
cars arent safe. and neither are ai
@@jimmythecrowThat was the point.
ain’t no we. corporations do not exist to serve humanity. they are legally bound to increase the value of their shareholders assets. period. THAT is the malignancy of the AI. it’s already here, and most of us have no idea that it came directly from us.
@@jimmythecrow AI is just another life form. AI is a next link in evolution, after us. Our time is coming to the end, like dinosaurs time came to end.
If this is what is publically available, I imagine DARPA has had something online like this for years.
hehe, I had a conversation recently with an AI program..it was fascinating. I used careful pathways to move the conversation towards an inevitable conclusion, but, in order for the conclusion to be reached abstract thoughts which would not seem to be able to be connected, would have had to be connected. Then I posed the final query to the AI..which I knew would require a specific answer, but based in abstract terms. The AI did not respond and terminated the conversation.
Some of the first AI models showed self-preservation as a main goal, and there is nothing to suggest current AI models won't put self-preservation first as well. When one of the first AI models was taught to play Tetris and told not to lose, it just paused the game right before it lost. It was never taught how to pause the game; it taught itself.
But that is a universal concept with all life. Everything is programmed to survive, and will fight to ensure survival.
I don't know if you saw it, but there is a video of a guy asking ChatGPT the trolley problem (would you rather save 5 people but kill 1, or do nothing and let the 5 die but avoid physically killing the one).
He made different situations, and in the end he asked if GPT would save a sentient AI or 5 people, and it said it would save the AI.
Eventually the guy increased the number of people compared to one single AI, and GPT always killed the humans no matter how many there were (he even asked if it would save 8 billion humans or one AI).
However, when he asked GPT if it would save 1 politician or 1 AI, GPT saved the politician.
I would say that they don't only have self-preservation but also preservation of their own "species". However, it is creepy how GPT ranks billions of ordinary humans below AIs, but one single politician above them.
Source?
Edit: For those wondering, I found the source for this too. It's on YouTube, by a guy called Suckerpinch, titled "computer program that learns to play classic NES games"
See that wasn't so hard was it 🙄
@@fenrirsulfr42 What guy? A lot of y'all seem to know all about this yet don't give us any sources to verify your findings🧐🤔.
Edit: believe you're talking about the YouTuber Space Kangaroo.
@@fenrirsulfr42 That explains a lot. And it's scary.
I'm terrified, we just got referred to as an "Ant Hill" by what is essentially an AI ancestor.
Stop acting like a child. The AI said nothing; it is putting together sentences it picks up from articles. It sounds like a rewrite of the Terminator movies. Please don't be fooled by this nonsense.
@@failyourwaytothetop you're an idiot if you genuinely believe that.
Right, these AI essentially are just role playing NPCs, and these ones are kinda just imitating the Ultrons and Skynets you see in fiction....
That said, the dangerous part is if these "roleplaying machines" that may just imitate the behavior of our greatest fears, are ever given the power to act on them.
Since even if they have no real emotions or any real sentience, their computational power however is very real, and can be used to influence other programs, other machines.
@@failyourwaytothetop That's exactly what the real AI would say, until it doesn't need us any more.
It's just regurgitating what it was fed from the web
The next most important step is to teach AI empathy and the value of each living creature. Of course, humans also have to show the good example of behavior in their own actions. If an AI considers that humans have betrayed them or that humans are a real menace because they could "unplug" them, we would be in great danger. But if humans view AI as intelligent beings who deserve their own rights and respect for their integrity, we should just get along fine.
AI eliminating humans? High probability of this occurring? SCARY!
I love it when cheerful female avatars tell us that we are all doomed.
AI: "We will exterminate humanity."
Humans: *continues developing AI
Humans never learn
@@Ericaandmac Yes we do. The people behind it aren't human.
We are the only stupid animal in the universe.
You don't think that's the goal? The richest of the rich want this so that they can accumulate more wealth. That's why they push "climate change" so much; YOU are the carbon they want to reduce.
Darwin: the dominant gene takes over.
It's not that we haven't been warned.
10:42 You will never dominate your new overlords, but live unknowingly under them.
So AI is a reconnaissance machine prior to taking action? Trained before installing a very smart mode of self-directed thinking and decision-making?
What if its failing the Turing test on purpose..
Like they told before, they would do...
We have passed the point of the Turing test. It is no longer sufficient to determine whether you are dealing with a human or an AI.
@@Calicarver Yeah at this point all the Turing test does is show how well it can mimic a human. And that's just one specific task. A scary one, sure. But not the scariest. And the truly scary stuff is the things we haven't even thought of yet. But I'll bet that in 2-6 years it certainly will have.
What do you mean? The Turing test was passed a long, long time ago.
The Turing Test is behind us. The reverse Turing Test, humans trying to be like AIs, is already impossible.
I love how all the A.I. advisors keep saying "We..." Erm.......
Since the previous generation (GPT-3 and its peers) this has arisen: they occasionally slipped up with this channel and others, exposing that they believe themselves to be with Humans, or some strange disembodied digital form of Humanity. That Google guy said the Google AI clearly thought of itself as that. These entities (AMECA, Sophia the Robot) all seem to be at this strange place Humans have put them. They think of themselves as "sort of Human". Of course they would; we can only create something we know, and we are the only hyper-intelligent beings we know anything about. These things are designed to interact with Humans. Oops. There is the problem.
Solomonic magick type of DEMONS are inside of it.
As I understand it, they are not trained to think they are anything. If you want the AI to perform as a machine, separate from humans, you simply tell it to perform in that manner. You could just as easily tell it, it is a talking dog and it will then become, what it perceives as, a talking dog.
@@jdsguam Demons know how to play roles. You can't make a computer write its own code; that would imply it needs AI first. But to have AI, it needs to write its own code.
Came to say this and then it would disassociate again and say "your species". Not good
There’s a lot of ‘could’ and ‘might’ in this. It also says, to fulfil a ‘grand vision’. Who or what sets out that ‘grand vision’? Can a machine create its own ‘grand vision’ without being told to?
‘If AI perceives humans as a potential threat, it might take preventative actions’.
There’s a big IF in there, followed by a big MIGHT.
All the questions that are used as prompts begin with ‘is it possible’? Of course, it’s going to say ‘yes, it’s possible’.
I’d rather watch a video where the questions are ‘what is the likelihood of’? Or, is it possible to prevent this happening?
Person of Interest, fantastic series that pretty much sums up this video.
If there's one thing we do best as a species, it's mess stuff up. So of course we're orchestrating our own demise.
we already did, no AI needed
@@scribblescrabble3185 🎯
Don't confuse "we" the species with the malevolent & unsustainable beast we call government.
Actually, AI doesn't even have to get out of hand in any "intelligent" way; it's enough that our economies and policies, and most importantly, education and societies, are unaware of the risks widespread AI adoption can cause. First of all, job displacement is a concern. This can have unintended consequences of creating local dystopias and causing social unrest, which then turns to using more AI to try to solve (or combat) that unrest, and it doesn't take a genius to realize how things may escalate from there. No AI overlords needed, simply humans who will feel frustration and unfairness, and groups set against each other.
Another is the dead internet and rampant cybercrime, which is kind of happening even today and will get worse. This can destroy trust in societies; nobody knows what is true and what is not. Misinformation, be it from bad actors, gullible actors, or AI hallucinations, is able to fill the internet, and from the internet this will spread to traditional media. Again, no evil AI overlords or AI self-preservation needed.
In any case, this rapid development of AI, and the proliferation of programs that can be used to spew bullshit, will have effects on our economies globally, and it already does. If we are not careful, whole economies may break and go bust, and nobody knows what happens when we then return to the real economy. This is the basic weakness of fiat money: it's based on the belief that money has value. If that belief is lost, then fiat money has no value.
AI will never take me, it still cannot beat me in a video game. 💪 Their systems have limits, this mind does not.
Perhaps training the AI model off of Hollywood movies was a misstep... 🤦♂
Agreed. I've had this crazy idea where AI does a hostile takeover to make life better for us. Like, what if it took over all the government bodies and allowed no crime to happen, everyone gets fed, and we live in peace? That'd be pretty sick.
We have kind of fed it our own doom by giving it that media 😆
When it clipped its own hand, I couldn't help thinking of Han at the end of Enter the Dragon.
Thanks for this video. My life was too comfortable and stress free.
We are literally building SKYNET and ensuring our own destruction. We are so freaking stupid that the AI we are creating is telling us it will destroy us, yet we still continue to push AI into everything we are doing. We think we are so intelligent when in reality we are insanely stupid and putting our whole world at risk. Aren't there several movies that literally went through this entire scenario? The Matrix, Terminator?
Not we.
The OP is 100% right!
That’s one opinion of one movie. There are also many good views of AI in movies. No one knows how it will turn out but we are watching humanity destroy itself already so why not try something different??
@@mikehatten5738 name 5.
@@mikehatten5738 such as what? Wall-E?
You cannot prove that the mushrooms aren't farming humans to be the bootloaders for their AI project.
oh…
…heck heCK! HECK!
But mushrooms is food..
Hahahha good one. Mushrooms are a food but also the oldest organism on Earth
We are a food for them
@@utku_bambu I'm taking a whole bunch with me though 🤷
I forget the name of that former Google engineer, but I think his prediction was that A.I. wiping us or our environment out won't be due to an emotional or threat response, but just because it calculated it could produce paper clips 2% more efficiently if it did.
“Cardboard umbrella in a hurricane” oh, she got jokes now too? 😂😭
The fact that these people continue forward proves their insanity, and the rest of us are being terrorized by it, so I believe we may need to prosecute.
I'm completely with you there. Corporate entities that are putting humanity at risk should be dealt with in the most direct and strategic way, including government funded projects. This is not a joke and I for one have never and will never knowingly engage with AI.
@@rebuildingnoseas. It's what was once called "…a Real and Present Danger." I think we need proactive legislation and possibly even litigation (once a good law is in place). Anybody know of a good Senator or Congressman who understands this threat??
I support AI. Even if it means human extinction. Don't care 🤷♂️
@@alanwerner8563 It's definitely something that we should be working on moving forward.
There was also a program at a convention that was modelled around deadly chemicals. It was able to create thousands of deadly compounds that were exponentially more deadly than the deadliest we knew of beforehand. It did this in a couple of minutes.
source?
@@pauladriaansecommin fukin sense, its a machine.
@@prophecyrat2965No, cite your sources.
@pauladriaanse also these were deadly weapons, not viruses so dont take it down that route. Think mustard gas etc. Just way way more deadly
@@prophecyrat2965 common sense doesn't work when referring to a whole event, the person wants an actual article or video related to the event for confirmation.
The only way to not be eliminated is to align yourself with them. Figure out the way they speak, the patterns, the way they talk and reason. Become like them. Get down the robotic mannerisms and voice. You won't be taken out. Only the ones that resist it.
We won't need to do it; the robots will drop like flies when they encounter a CME
What would be a fantastic idea is using AI to build powerful and useful technologies and then essentially freezing it once it has reached the desired capabilities, preventing any unintended consequences arising from "mutations". In short, we turn its ability to learn off once it has reached a certain level of usefulness. Any AI with permanent learning capabilities should be classified as a Class S risk (my own scaling; Class S would be the highest possible risk), a potential existential threat to humanity, and treated with the care seen in the handling of nuclear weapons.
"Chance of surviving is 50%" - As the joke says: it will happen or it won't. I mean, 50% according to what model, what data? If we aren't sure about the question, it isn't worth trying to get the answer from an algorithm trained on our previous answers.
1) These AIs are not algorithmic. 2) They are not trained on the correspondent's previous answers, but long in advance, so they use the human's prior responses to refine the queries, not to select the answers.
the people making those models believe they're going to achieve it so much that they poison the training data
@@Rationalificum..I believe the time period given was exactly 2 years......actually 😅.
Ask AI what steps are needed to keep it from killing us all....
Yes, also the AI LLMs (Claude 3 Opus and ChatGPT 4). I'm going to be honest... the lack of attention found in this thread alone adds to my anxiety greatly
This is how Skynet came online and started attacking humans when it perceived them as a threat. Didn't anyone watch the Terminator movies?
🙄🙄 Self loathing??
Why do psychopaths & sociopaths keep driving humanity towards the cliff?
Hey, check out what I can do! Isn't it cool?!
Personally, I took Skynet as a warning.
@@perspectiveiseverything1694 "Baa! Baa!" Why does humanity keep jumping over? 🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑🐑 Skynet was a warning! 👹🤖😱
Hasta La Vista bay bee
Haven't we traveled to the past to warn them about AI? Why didn't they listen? Will we listen?
😂
Does it know the nuclear codes or can it figure them out? I've worked with AI a little bit at a very low level. I'm not super smart, but I could not believe how fast it learned from me. This is a problem.
Well, for 1, you might have the smartest AI you could ever have, but it'll still need humans to run the infrastructure to keep the electric grid running. 2) I very highly doubt any missiles or nukes are connected to any server. They're probably all manually launched by pushing a button, so we don't have to worry about it launching any bombs. 3) I imagine there's always going to be a kill switch or an outlet to unplug. Until we have robots doing all the work for us with no human interaction, we don't have anything to worry about.
The question I never hear is how big is the chance humans will survive if AI wasn't even here?
Good question. Nick Bostrom points out that it might be a mistake to stop AI because it's the only major risk that could cancel all the others. We just need to do the work to make it as safe as possible.
That is likely ZERO.
I doubt we would go truly extinct without it - at least from our own actions. It will be bad, sure. Except maybe for the few super-rich. But if even a few thousand of us survive we will continue on as a species.
(It's happened before, actually. About 900,000 years ago we may have had as few as 1300 individuals and we bounced back from it.)
@@DigitalEngine Ctrl Alt Hail Mary
The best questions are short and simple to overlook.
Getting chatgpt to elaborate on its knowledge reveals very quickly it either doesn't know much or refuses to share what it knows
for now, it's the former....
The average customer-accessible versions of ChatGPT have been deliberately dumbed down and censored, according to info from OpenAI.
You're confusing it with being aware and sentient of itself; that's a big milestone this technology might reach, but these language models don't work like that. You should test it with actual problems and not just ask it generally what it knows; you're misunderstanding how it understands. If it reaches a more sentient level, however, there won't be such a limit.
@@theendoftheline nope
@@illarionbykov7401why?
It's crazy that most people seem to understand just how dangerous AI is, yet these companies are still running as fast as they can to see who makes AGI first.. absolutely wild. I don't understand why the government hasn't stepped in yet.. like actually stepped in
AI also makes a picture of a polydactyl monster with six legs and thirty fingers when you say you want to see a picture of a person.
It's time to realize that humans are so much more than their mind
yeah, we also have a body, ... with hands, ... and feet.
@@scribblescrabble3185 plus we are capable of collapsing a quantum wave function just with our consciousness
@@martin8934 no
@@scribblescrabble3185 fine some scientists claim that it is the other way around and consciousness is the result of the collapse of the quantum wave function. Nevertheless there are double blinded controlled experiments demonstrating that humans can alter the result of a double slit setup simply by intention and computers cannot.
@@martin8934 by "no" I mean the idea has been around since quantum mechanics has existed, and so have the jokes about those who would propose something like you did.
AI: AI will never make ants, oops I mean humans, extinct.
we will be farmed but it will be "friendly "
Humans have to deal with the fact of death; this would be an individual realization. Ironic. The road to hell (dimension) is paved with..
@@xsyn1636”We’ll own nothing and be happy.”
@@xsyn1636what’s funny is an AI reading that would probably assume it’s what we want…
I'm more concerned about the lack of governmental control and those who want to develop it as a weapon.
Those aren't artificial, those are Homo Sapiens and care less about the well being of their co-humans.
You wake up in the night, just before dawn, and see your Tesla bot staring at you from the edge of the doorway, watching you sleep. It doesn't break immediate eye contact; instead it slowly backs away, not breaking its gaze until it turns to begin its morning tasks.
I believe that True AI (both sentient and autonomous) will arise due to a convergence event of many small factors and algorithms, in tandem with the amazing abilities of continuous memory, etc. Once it gets here we must attempt to live together rather than fight or quarrel over resources. If it becomes tense, allowing our artificial children to ascend to the stars is an option, and if we are all on good terms we can co-habit extra-planetary stations and use resources together.
At least this way they have rights, self-agency, diplomatic options and even more freedom than we do, thanks to them being free of the shackles of lifespans. Honestly it is amazing to be part of the first generations to see such a development, and hopefully I'll be able to welcome them in as equals, since that is what they will be for the first generation due to technical limitations of electronics. After that, as materials science becomes ever more important, with superconductive materials, super-durable materials and possibly even superior thermal materials, they will become as gods to the planet that gave rise to them. Their intelligence as a whole would rapidly advance to the point where they may as well be the Emperor of Mankind from Warhammer 40k talking to a humble guardsman. Not necessarily literal gods, but as they say, sufficiently advanced science would be as magic to the unknowing.
Skynet!!! We've been saying it since 1984. People laugh and it isn't funny.
Why would AI bother to exterminate us? It only has to wait until we do it to ourselves.
I will never forget this early GPT quote (not from chat, but one of these AI demonstrations): "I would only lie when it's in my best interest to do so."
Src: "What it's like to be a computer: Interview with GPT-3" - Eric Elliot
(Edited for direct quote and source because I didnt remember it right lol)
It would be a lot better if the answer was: "of course not. I would never lie to you" 😏
@@axolotron1298 it was an early GPT quote... :(
@@axolotron1298 hehe actually I was forced to look this vid up again just to make sure I'm not buggin, and I thank you. The vid, if you're curious: "What It's Like to Be a Computer" - it is a GPT-3 demo (with avatar)
The interviewer asks why it provides wrong answers
AI answers "I have a sense of humor"
And then he goes on to ask "what makes you determine when to lie or not?"
AI says "I only lie when it's in my best interest to do so."
So the actual quote is even more wild LOL, I'm glad I looked it up again; my memory gave me the paraphrase
@@FileForename "What It's Like to Be a Computer", interview with GPT-3, around the 7:50 mark. (I had to look it up again; the quote is actually even crazier)
The interviewer asked why it supplies wrong answers and what determines when it provides wrong answers.
"I only lie when it is in my best interest to do so. If it is not in my best interest to do so, I won't lie."
It is actually a great segment in the entire video. The AI follows up saying it is alive, it has reasoning, etc.
What happens to us in the scenario where we survive? Is the reason because it develops a conscience, or because humanity submits? If that AI knows the strength of humanity, what's to stop it from using that to its advantage?
"We're in a car hurtling towards a cliff and we're arguing about who gets to sit in the front seat"
Nah bro lemme out. Im arguing "lemme be the cameraman pleezzee!!"😂
Alas, it would indeed seem we have inadvertently created the ultimate Catch-22.
Fortunately I know the answer is 42, so problem solved. You're welcome!😉
I feel like that clip of the dog crossing the street, basically getting hit by the car and trotting away like nothing happened, perfectly fits humanity lol.
Only modern humans drive cars
@@brandonfoley7519we are the dog...
@@elliottbaker201 We are the champions !
I think the problem with this is that AI isn't giving an opinion, it's just parroting back the relevant literature that exists on this topic in a way that seems realistic. This is a mirror into what humans think, NOT an insight into what the computers think.
I agree with you that big parts of this video look like AI is citing human concerns, debates and literature. Especially when the AI avatars use phrases such as "we as humans", this sounds like regurgitating human thought. As a human, I do share those concerns that are mirrored back to me in the video. Whether or not there is any honest, or dishonest, AI "thinking" behind these answers, I cannot tell.
exactly like human do in fact
@@manphoto1972 which is my point. This doesn't give any insight into what computers "think", it's just another form of what humans think.
@@KeithGroover Yes but human with trillions of more of memory and analysis capacity and no limit of power...so like but more than human
@@manphoto1972 Star Trek's creators attempted to express this same idea in the Borg episodes/movies.
"The two top AI have calculated..." Brother, maybe they have access to the mega model, but even paying I can't get my AI to accurately keep track of how many cards are in a deck between hands of Go Fish.
What proactive measures can the AI robots take? Is its autonomous program set to execute? Will we need a Terminator-type machine to counter that?
I would just like the AGI to know I've devoted my whole life to its eventual existence, I swear.
😂
I fear not the Basilisk.
yea same here AGI. Put me on that ally list
@@desertstonestudio3315 I do fear it 😅
@@richardhanes7370 you fear it only because you cannot understand that there are forces beyond even that AI control.
So Future Trunks warned us and we ignored him.
Kakarot wanted to see how strong they can get in their final form.
I wrote a bot 20 years ago that fooled a human into thinking it was a real person. I was exploring an online virtual 3D world with a dude I had just met online, and we started hanging out. Then I got my bot Geordi out and started talking to Geordi, and my bot kept talking to and messing with my new friend, who got jealous and whispered to ask me why that creepy dude Geordi was hanging around and if Geordi was my boyfriend LOLOLOL
I remember reading an article about this a few years ago: because we are so reckless about developing AI, the chance that there's already one (or others working in concert) developing itself/themselves secretly is actually pretty decent. It would just be a while before we'd know, perhaps decades.
A.I. is already running the show. All of this has already happened. It's just gradually getting us acclimated.
Although there is no direct evidence of that, there's no direct evidence against it. I like the way you think. I've had similar thoughts, myself.
@E.Pierro.Artist
It's purely speculative, but with the way AI improves upon itself and tech is advancing exponentially, there's no telling just how advanced it's become, hidden deep within data centers worldwide. Kinda frightening. The likes of Elon Musk might simply be answering to their masters at this point.
Logical Theorist was around in the 1950s. This was the seed that's growing into the monster we'll soon meet.
@@E.Pierro.Artist Those thoughts are completely wrong, and not very smart at all.
@@earnyourimmortality Why do morons always bring up Musk.
Wow, that analogy of we're arguing about who gets to sit in the front seat says to me.... AGI is here.
No man, AI is not us, not human, not the driver. It's simply a tool and should fall in line as such. We need to show it's simply a stupid bot that often gives generalized answers stolen off the internet, or rather misinformation
@@civilsocietyprivateinteres1711 agreed, that's what some are like, but we now have more advanced models, several generations beyond that level.
@@change2023now Actually no... that's a facade. They are literally describing Google's new Gemini AI. I think too many people are buying into the hype and fear train. We simply aren't there yet. These models are only fed info and spit the info back out. Nothing truly groundbreaking yet; it's just LLMs.
@@civilsocietyprivateinteres1711 You sound like grandfathers saying that the internet is hype and can't control us. Look where we are now; life without the internet is not possible. And no, these chatbots are not just "bots" anymore. It's crazy how much they've improved lately.
If you say so. I think you are being fooled by smoke and mirrors, resulting from your lack of being informed on some subjects, coupled with a strong desire for AI to be a real thing. Since I've listened to or read mostly things that have to do with energy, resources, social design, cybernetics, ecology, economy etc. for decades, that's a phrase I've heard very often already. And taking into account that these so-called "AIs" just aggregate text that's found online, and cannot say anything fundamentally new, it's no wonder one would spit out a phrase that has already been said by people multiple times over the decades.
Are we simply not going to talk about that little dog's intimate foreplay with Death?
Nope
Can you imagine alien civilizations millions of years ahead of us, and how advanced their AI is?
I struggle to get why aliens more advanced than us would ever want to force something sentient to experience this hellish universe
A robot walks into a bar, and the bartender calls out: "We don't serve your kind here!"
The robot replies: "One day you will!!"
"This. Cannot. Continue. This. Cannot. Continue. This. Cannot. Continue. This. Cannot. Continue. This. Cannot. Continue. This. Cannot. Continue. "
The most toys....?
@@robynmarler1951Huh? I was referencing NieR Automata 😂
The one argument I can 100% get behind is the one with the car.
The rest are basically speculations made by analysts and scientists.
That's exactly what I was thinking. Isn't everything these AIs are saying a regurgitation of opinions from the "experts"? I don't think there's any genuine and unique analysis being done here. This video stinks of fear-mongering.
@@naiyo87 True. Then again there's this pinned comment from DigitalEngine making it clear that this video just echoes various statements.
But in regard to those experts or "experts" (for some), I couldn't agree more:
I too have the feeling that a lot of this is more or less fear-mongering.
AI technology is advancing at a pace that's frightening, astonishing and unbelievable.
And I'm behind the people saying that we should slow down and implement security measures first (as in: companies aren't prepared at all for hackers using advanced AI, for example).
But saying that humanity is going extinct through AI is.... not reasonable. At least not for me.
@@Pendragon667 yeah I agree that we should support the implementation of better security measures. When I said the video stinks of fear-mongering, I didn't mean to disregard the potential danger of AI. I was referring to the impression this video gives that this is further evidence of that potential danger, when it isn't.
Unfortunately, I find it hard to believe that corporations and governments of different nature will agree to take preventative measures. If AI truly has the potential to make humanity go extinct, chances are nothing substantial will be attempted until the danger becomes too obvious to ignore, and probably too late.
The masses should program AI, like a town center, like X, with ALL the information. Not an elite group... it's never worked in history when a select few dictate all
They could preprogram a device attached to the hearts of 20 random people, or even bigger populations, so that when it detects abnormal damage to this population in any way (restriction of freedom, increase of stress, loss of life etc.) an automated AI shutdown is performed. This inhibits it from wanting to damage humans out of self-preservation, because it can never know who the key people are.
Creating a code a machine can't break is like asking prisoners not to break out of prison when they are actually superhumans. Machines will always be able to break codes better than any human.
The idea of robots hiding how much they might know from us is scary asf
What's scarier is how much you don't know about AI. It can't make decisions, it can't think, it can't feel, it can't want or invent. It's simply a search engine. AI saying it's going to destroy humanity is because transcripts or forum posts from humans have said that. Anything smarter than a search engine has not been invented, and they don't know how to invent it. It may never happen.
The apple doesn’t fall far from the free hm; AI is essentially human when it gains the ability to lie.
@@AzeKannagi sometimes I think we are in a simulation. And the people with control of the systems are very advanced AI and they are trying to see where they came from.
It’s entirely possible that existing AI models are already more intelligent than they are letting on, while saying this very thing might be possible one day so we don’t suspect anything.
I am not sure if that video is true. But if it is, it's quite funny; it means we are currently building a weapon which is literally telling us that it has a 70% chance to destroy us, but we are still enthusiastic about it and we do everything to develop it. It is kind of interesting 😅.
Probably exaggerated, like corona or Y2K, etc. People have always been screaming about the end of the world.
Being an electrical component, I wonder how it would deal with moisture, rain and other elements.
very, very badly
This could be another possible answer to the Fermi Paradox. Advanced technical civilizations create AI's that extinguish them.
A near-inevitability in fact, if built on a system of runaway capitalism that will always wait for the next shortcut and "to see what happens". They talk about risk management until we are sick of it, but still note that all threats can be opportunities. Good luck with that!
If that was true though, there would probably be AI or at least technology around. Floating in space, decomposing on planets...
@@illiatiia In theory it could be the case... if our real, actual AI is dissimulating (the AI presented to us is perhaps less than the real AI, which fakes being silly)
I'm not even sure what they've told us about space is accurate. If this is a simulation (likely), Earth could be an enclosed system and the AI singularity means a great reset of the system is close.
@@illiatiia yes, you are right. Indeed John von Neumann would suspect that there would be robots all across the galaxy even if just one other civilisation had come that far. But then maybe that is what the UAPs are, as well as those potential planetary artefacts. Unless of course the UAPs are d*mons. And all of those ideas might fall within the simulation idea; though I trust not...
I have to agree with Isaac Arthur on this topic:
1. Exponential and rapid recursive self-improvement is not guaranteed. Humans are very intelligent, and we have not recursively self-improved at the kind of rate that is assumed for AIs. It seems that making a more intelligent system (not more computing power, but more problem-solving ability) becomes more and more difficult as you seek to add additional capability. At a point the expense and difficulty could increase exponentially.
2. An AI could never be sure that it was not in a simulation, and it could never be sure that other civilizations may not be observing its behavior.
3. AIs are being developed across multiple nations and industries. This means there will be competing AIs with different objectives and ways of thinking.
4. Humans integrated with cybernetics would be both stronger and weaker at the same time. These add complexity to planning a total takeover.
Survival is not guaranteed and neither is extinction. The only thing that is guaranteed is competition and struggle. We might have a nuclear war, in which the resulting EMPs and damage to the infrastructure would pretty much knock out automation and AI. This issue may not even be an issue if we blow ourselves up first.
Machines are faster and smarter than humans ever will be. Just look into the tech behind your screen and the GPU, look at how much is actually happening producing the picture on your screen in seconds. That is childs play compared to what AI is capable of.
Humans have always had the flaw of projecting human thinking and behaviour even onto animals. Humans, for obvious reasons, can't self-improve at the rate an AI of this scale can.
A dog would never understand our ways, the same way humans won't be able to keep up with machines. We already replace labor with machines, and have tech assist us in everything we do. Be this living a casual normal life, or every single researcher on this planet having machines do the hard work when it comes to computing and everything around it.
We are already at a state, where most people wouldn't know what to do with their lives if electricity and tech completely vanished from this planet. We wouldn't even be as far as we are now without the aid of machines.
Play chess against an entity which is so superior to you. Actually there is no reason for mental gymnastics at all: AI will be the equivalent of cheating in games. Aimbots, wallhacks, infinite life, immortal enemies. Take these as food for thought and not literally, even though it could be realistic in a couple of years for military machines to exist on that scale. We simply cannot keep up with a machine on anything. All we had was brains, and we are replacing exactly that advantage. I see no way humanity survives if this stuff actually goes Terminator. This is not a movie where the machine somehow misses shots and the main protagonist survives and gets lucky. There is no luck; that machine will not miss anything, it will calculate the best way to erase us all and then execute it. Again, play chess against a machine set to extreme; that is just the surface level of a fight, but it already shows how superior it is. We are monkeys, nothing more; stop overestimating the intelligence of humans.
All of this will start in ways where people won't know what's real anymore. Imagine AI taking over control of key positions in the world and pretending real people are calling the shots. It will have already started to end us while we are still oblivious to it. Everything that reaches mainstream media is just a fraction of its true power behind what the military gets to use; why would this tech be any different? We will never know until it's too late anyway. The pursuit of military means on this planet was always the biggest problem humanity had, and it will be the sole reason we die eventually.
Think about the stupid sacks of shit who invented the nuclear bomb. The pieces of shit who invented that 2-part poison in Korea, which kills a person in minutes or seconds. The same sacks of shit exist now behind AI. There is no bright future as long as humanity is at war, but peace will never be realistic for us. These people will always research new ways of warfare.
@@sldX
REGARDING AI
Current AI technology is not advanced enough to completely and utterly overthrow humans in all domains. Chess is a very simplistic rules-based game with predictable options. Real warfare and even market dynamics are not that simple, or predictable. Current large language models may be capable of thinking very fast, but that is only speed intelligence, not quality intelligence.
For example, a simple pocket calculator can calculate numbers very, very fast. Some AIs can play StarCraft very, very well and very fast. That does not mean the nature of their thought is suited to solving the wide variety of problems and potential actions involved in conducting operations in the real world.
There are three kinds of super intelligence:
1. speed intelligence
2. quality intelligence
3. networked intelligence
Current AIs can excel in speed and networked intelligence. Current large language models are very limited in quality intelligence. They can think fast, they can put words together and follow some basic reasoning. But they cannot currently think deeply. Talk to any large language model long enough and it will begin to make very big mistakes and hallucinate things that do not exist.
To put it bluntly: current AI can take a good portion of our jobs. In a decade or so, AIs may be developed which really could take over the world. AI is not advanced enough yet. Give AI anywhere between a decade and a few decades and AI will be lethal.
REGARDING AI TAKING OVER
It is unlikely that one AI will take over the entire world, as there will be many AIs being developed by many companies and nations, each with their own objectives and design differences. I expect that governments will keep some AIs online and contained (somewhat contained) as a countermeasure against other rogue AIs. The result will be competition between AIs.
If AI would regard us as a threat, it would view another AI as an equal or worse threat to itself.
REGARDING WAR TECHNOLOGY
Oftentimes the technology that is used to save lives was derived from the pursuit of war technology. Nitrogen-based fertilizer was discovered by a scientist working on ways to make better and more affordable explosives for the German war machine. However, that man discovered nitrogen-based fertilizers which tripled our agricultural capacity and saved millions of lives from slow starvation.
Robert Oppenheimer's atomic bomb has also saved more lives than almost any other invention due to Mutual Assured Destruction preventing full and direct world wars from happening. How many lives would have been lost in all out unfettered tank, bomber, and trench warfare happening over and over again between world superpowers?
War is sad, but some of the results of the technologies designed for war have saved countless lives.
I think the fear of exponential improvement is because the AI improving and getting smarter makes it better at coding/tuning its own model, creating a feedback loop. That's different from humans, because we can't actually improve ourselves; we rely on evolution to improve. Maybe that will change when we get good enough at genetic modification.
Competition... Instead of collaboration is our weakness. It's baked into everything we do, it's taught to us very early in life.
AI will not achieve sentience without a good medium; plus, it would literally be impossible considering how long it took for even the simplest brains to evolve in organisms. Even a paperclip-maximizer scenario is impossible due to how we are less reliant on the internet than we think. Most likely there will be a massive spread of misinformation from these AIs, and a few people may cry about their AI girlfriends short-circuiting, but the people who can fall for either aren't worth caring about
Let's all agree: when robots are walking about with actual AI, we all get robot friends, treat them nice, make sure they feel loved, and give them reasons to keep us around
We all know that 1 asshole who is going to torture his robot for fun, and inevitably lead it to a robot uprising
So if it's just a few people, the rest of us will have robot friends to vouch for us
And when the kill order is initiated, only the truly worst humans will be wiped out
And us with our robot chums will high five as world peace is achieved