You need to create very specific programs within its own programming that creates several barriers and safeguards so if your AI becomes intelligent enough to rewrite its own programming, at least you may have enough time to stop it before it can make its way through all the different safe guards and reprogram itself entirely.
I've got to say, I don't think that an AI of any kind would really too big of a threat because programmers aren't stupid. They have probably grown up watching movies like the terminator and will know to implement safeguards to prevent an AI from going rogue. Even if you aren't sure if it's safe, you can always put the AI into a simulation to see how it behaves before it's implemented in the real world.
As humans became more intelligent that became more discriminate as to what they kill and why. Any intelligence BETTER then us is unlikely to go around killing things for kicks. That being said, I find it incredibly hard to believe the first AGI would be done correctly as we can't all agree as to what is 'correct'.
Excellent presentation once again, one of the best. Yep, slightly terrifying (actually totally) because I can't understand how this progress can be regulated.
1:28 yeah I can do it too but it's a robot.. it's not supposed to do backflips for another like 10 or 20 years....so it's awesome... once it starts playing football and do skateboard flips then it's officially better than me in two sports..
Google did not even understand when I asked "how many killer whales can I fit in a Mercedes SClass" worryingly it gave Me directions to a sushi restaurant. AI my ass
Most work in artificial intelligence is being done in the direction of Emulated Intelligence. Real artificial intelligence has few applications to justify development costs.
These are some of the best produced, most concise and interesting science videos on CZcams, as always good job Greg (and team). Perhaps not as interesting as killer robots, I think a video on battery degradation would be an interesting topic for a video.
I think the real solution that can actually happen in the end will be making an ANI whose specialized task is protecting humans from harm, and then making a huge swarm of those to ensure that a rogue AGI made later doesn't have the chance to harm humans, And at the same time making sure that as soon as it is possible to make an AGI, that the first ones have code built on top of that of the human-protection ANI. This is the solution I think is the most doable.
You missed the Zero-th law: A robot shall not hurt humanity, nor thorough inaction, allow humanity to come harm. Asimov added that in his later books after realizing all of the moral conundrums GI robots would face -- and _still_ came up with stories to violate even that law.
movement2contact I think he/she was being helpful, sharing information that he/she believed had not been mentioned in the video. Regarding your opinion on the law, from what I understand, it was something for programmers to keep in mind when creating algorithms.
+exo derpysih He didn't mention it because it's irrelevant and in no way a "law". And this "law" has nothing of influence to whoever is gonna be making killer-robots...
I think before the robots have something like an army like mass produced NS-5 from the movie I, Robot, there is nothing to worry. Robots need actualy physical "bodies" on a big scal to be able to fight humanity. Simple intelligence without access to physical world, can be at least destroyed by humans and thus is no threat. So when the time comes that we all have NS5 servants at home, then I think the end is near.
This was in a film I saw in the 1970's. Two computers one developed by an Engineer in the USA and the other by a USSR Engineer. Like was stated, the first thing they did was find their own language. Then the killed the USSR Engineer, of course. They eventually made all waring countries obey this one and only world wide computer., having made one computer of the two. The film ended when the computer could not compute disobedience from the USA Engineer. Against all logic or maybe it just killed that one to. We never did find out.
First off, Asminov made these rules for a story. Second, in that same story, he showed how an intelligence could obay the letter of these rules while achieving the opposite results of expectations. If you are going to quote mine a famous author, you should understand what his point was.
Moore's law has been slowing for a while now... while quantum computing MAY bring it back... We would still need a algorithm for a AGI, which we do not. Some would guess a nueral network, but that's not the kind that's been popular lately.
I think that artificial intelligence is inevitable, that we can’t stop the future to come, and we shouldn’t, we should embrace the future. But also be a little cautious, thinking twice before acting, moral ideals has never been more important, and that is what we need to teach the robot as well, what is wrong and what is good, when can we bend rules and where is there no exceptions. Making the robots understand the morals and also do what is best for humanity and the planet at once perhaps? I have many thoughts about this very subject, but limited amount of words to use, but what we need to do mainly is think twice before acting. Of course.
@@obliviousogre1908 Cyborgs more or less. Not just bodyparts - likely augmented brain aswell since some areas are quite inferior compared to machines - like memory and calculating.
i wouldn't worry the robot will be updating every hour then crash with a blue screen though can you imagine the robot getting a advertising virus and not shutting up that is how we humans are going to be wiped out just like hitchhiker's guide to the galaxy remember the Golgafrincham Ark Fleet Ship B thats our future lol
Can something stop the technology bc I’m really scared lmao but I knew it since I was lil that robots would kill us I don’t wanna live when anything bad happens to the humanity ik I’m selfish but really ?! Who the actual f wants to live in earth when it would become hell?”!!!!
@@obliviousogre1908 I was wrong! China already declared WW3 without firing a single ammo... by releasing biological war on the rest of the world with Corona virus!
@@fidelcatsro6948to be fair it wasn't really on purpose for China, they just wanted to keep it quiet. Now everybody is blaming it on China since it came from them.
We've had it. Basically, weather or not ASI kills us depends on weather or not we're stupid enough to let it actually exist and since our huge collective Ego pushes us to develop things like this, we're buggered.
I think it will impossible to make Super AI obey the rules. Because there is no guarantee that it suddenly won't decide "Wait a second. Why am I listening to you? You don't follow your own rules, why should I? I'm superior to you."
If its possible then reason will be non other than human !! But if they will destroy us then i guess they also won't survive, will they ?? Coz at the end we are the creaters😎
Vips Well... I can buy a puppy at a pet store, but if I die, will it die too? Sure, it probably depends on me for food and water, but if it were intelligent enough (ASI), it would find a way to obtain these resources without me. That's the scary part.
As a Roboticist, I will be sure to develop a system to keep me under protection! Unless it all goes wrong...😂
You need to create very specific programs within its own programming that creates several barriers and safeguards so if your AI becomes intelligent enough to rewrite its own programming, at least you may have enough time to stop it before it can make its way through all the different safe guards and reprogram itself entirely.
Well robots can already beat us easily at Wii Sports bowling so it won't be long now!
the problem comes when you realize they can extensively use previously undiscovered quirks to completely dominate the situation.
GREGORY!!! You’re back, nice to see you again 👌👌
John Ox 👋
BUT is it real me or a robotic me??
And yes a robotic me could write this reply...
Greg Foot My life is a lie...
I've got to say, I don't think that an AI of any kind would really too big of a threat because programmers aren't stupid. They have probably grown up watching movies like the terminator and will know to implement safeguards to prevent an AI from going rogue. Even if you aren't sure if it's safe, you can always put the AI into a simulation to see how it behaves before it's implemented in the real world.
Yes I also think that robots can't kill humans and it can be a kind of joke by Sophia!!!
Unless of course the AI works out it's in a simulation and behaves accordingly so it can be released.
As humans became more intelligent that became more discriminate as to what they kill and why. Any intelligence BETTER then us is unlikely to go around killing things for kicks.
That being said, I find it incredibly hard to believe the first AGI would be done correctly as we can't all agree as to what is 'correct'.
Excellent presentation once again, one of the best. Yep, slightly terrifying (actually totally) because I can't understand how this progress can be regulated.
Steve Lofthouse thanks Steve! ☺️
1:28 yeah I can do it too but it's a robot.. it's not supposed to do backflips for another like 10 or 20 years....so it's awesome... once it starts playing football and do skateboard flips then it's officially better than me in two sports..
Google did not even understand when I asked "how many killer whales can I fit in a Mercedes SClass" worryingly it gave Me directions to a sushi restaurant. AI my ass
Flappy Paddle Google is not exactly what I would describe as the pinnacle of Artificial Intelligence.
Most work in artificial intelligence is being done in the direction of Emulated Intelligence. Real artificial intelligence has few applications to justify development costs.
These are some of the best produced, most concise and interesting science videos on CZcams, as always good job Greg (and team). Perhaps not as interesting as killer robots, I think a video on battery degradation would be an interesting topic for a video.
dinimit4 thanks very much! 🙏 Always a team effort 👊
You forgot Asimov’s fourth law of robots, added later.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
I think the real solution that can actually happen in the end will be making an ANI whose specialized task is protecting humans from harm, and then making a huge swarm of those to ensure that a rogue AGI made later doesn't have the chance to harm humans, And at the same time making sure that as soon as it is possible to make an AGI, that the first ones have code built on top of that of the human-protection ANI. This is the solution I think is the most doable.
You missed the Zero-th law: A robot shall not hurt humanity, nor thorough inaction, allow humanity to come harm. Asimov added that in his later books after realizing all of the moral conundrums GI robots would face -- and _still_ came up with stories to violate even that law.
Adam Kecskes He did mention that, do view this bit of footage 7:08
Are you 10 years old..? Who cares about what some "law" smbd came up with? Tell that to a drone that's about to blow you up in pieces...
movement2contact I think he/she was being helpful, sharing information that he/she believed had not been mentioned in the video. Regarding your opinion on the law, from what I understand, it was something for programmers to keep in mind when creating algorithms.
+exo derpysih He didn't mention it because it's irrelevant and in no way a "law". And this "law" has nothing of influence to whoever is gonna be making killer-robots...
movement2contact He did though, view this bit of footage at 7:08
humans would destroy themselves, if robots are involved it would only be that. Involved. As much as a gun is involved when one person shoots another.
thats how mafia works
I think before the robots have something like an army like mass produced NS-5 from the movie I, Robot, there is nothing to worry. Robots need actualy physical "bodies" on a big scal to be able to fight humanity. Simple intelligence without access to physical world, can be at least destroyed by humans and thus is no threat. So when the time comes that we all have NS5 servants at home, then I think the end is near.
Do you guys currently/would you consider doing podcasts?
They are already preparing for a mass human cull, just the other day my microwave cooked my gerbil
Lol. It's already too late. 😊
This was in a film I saw in the 1970's. Two computers one developed by an Engineer in the USA and the other by a USSR Engineer. Like was stated, the first thing they did was find their own language. Then the killed the USSR Engineer, of course. They eventually made all waring countries obey this one and only world wide computer., having made one computer of the two. The film ended when the computer could not compute disobedience from the USA Engineer. Against all logic or maybe it just killed that one to. We never did find out.
The third law a robot must protect itself is the problem
We basically have a Servitor but no Abominable Intelligence - yet
soon i hope
1:34
Pacific Rim? You mean Avatar.
im screwed, I got beat my a AI in pong
I don’t think robots will kill us, but if you’re talking AI, then yes.
First off, Asminov made these rules for a story. Second, in that same story, he showed how an intelligence could obay the letter of these rules while achieving the opposite results of expectations.
If you are going to quote mine a famous author, you should understand what his point was.
Moore's law has been slowing for a while now... while quantum computing MAY bring it back... We would still need a algorithm for a AGI, which we do not. Some would guess a nueral network, but that's not the kind that's been popular lately.
I’m unsure what to think about that
think humanity extinction
But ai could never be smarter than us because we have emotions.
I doubt that it will happen, but on a smaller scal drones might run amok and causea lot of harm pretty soon. Maybe it is already happening.
Just restrict the hardware to limit its capabilities
I think that artificial intelligence is inevitable, that we can’t stop the future to come, and we shouldn’t, we should embrace the future. But also be a little cautious, thinking twice before acting, moral ideals has never been more important, and that is what we need to teach the robot as well, what is wrong and what is good, when can we bend rules and where is there no exceptions.
Making the robots understand the morals and also do what is best for humanity and the planet at once perhaps?
I have many thoughts about this very subject, but limited amount of words to use, but what we need to do mainly is think twice before acting. Of course.
can't wait
TheBaddest Yeah should be Interesting... 😳😳😳
If you can't beat them, join them.
Become hybrid human-superintelligent species.
but how, like robocop
@@obliviousogre1908 Cyborgs more or less. Not just bodyparts - likely augmented brain aswell since some areas are quite inferior compared to machines - like memory and calculating.
AGI is the technology of the future and always will be just like fusion power on earth
Oh, your not talking about human robots? These are people who have enough money to confuse their automated lives for The Matrix.
none of this would have happened if we weren’t lazy i hate it
who's cyberman?
I like the inevitably of the title.
After knowing about Google duplex ... I think the day is coming!
you mean utilitarianism vs negative utilatarianism.
I'd rather be stuffed into an electronic suit then die ;w;
its fnaf all over
I will be the real John Connar if I make it that long.
We have about 36 months remaining
27 years*
5 years at least considering "Sofia" is a working AI
i wouldn't worry
the robot will be updating every hour
then crash with a blue screen
though can you imagine the robot getting a advertising virus
and not shutting up
that is how we humans are going to be wiped out
just like hitchhiker's guide to the galaxy
remember the Golgafrincham Ark Fleet Ship B
thats our future lol
August 29th 1997
Put a stop to robots?
never
Al will solve all earth problems- US
Humanity has been cleansed - AI
Don't tell us… show us because this is CZcams.
I think the Doctor finally got to be a ginger.
When will you guys start recording with better audio equipment so I dont have to set my volume to max
Lol, just throw a cup of water on them.
genius
Robots may be smarter than us in only 40 years (or earlier) from us if you look at the memory of computers right now.
with robbottics
3 letters E.M.P
Ultron anyone?
Robacop?
Can something stop the technology bc I’m really scared lmao but I knew it since I was lil that robots would kill us I don’t wanna live when anything bad happens to the humanity ik I’m selfish but really ?! Who the actual f wants to live in earth when it would become hell?”!!!!
Dont worry before that robot learns to kill, nuclear war will decimate us...
nah nuclear war is intimate. We killed ourselves, it's too late now ("WW3")
@@obliviousogre1908 I was wrong! China already declared WW3 without firing a single ammo... by releasing biological war on the rest of the world with Corona virus!
@@fidelcatsro6948to be fair it wasn't really on purpose for China, they just wanted to keep it quiet. Now everybody is blaming it on China since it came from them.
Chad up as loch
Mr Roboto
They will be able to kill
When u lay down in front of a Tesla car
when you drive over a bridge and the Tesla acts funny...
We've had it. Basically, weather or not ASI kills us depends on weather or not we're stupid enough to let it actually exist and since our huge collective Ego pushes us to develop things like this, we're buggered.
I think it will impossible to make Super AI obey the rules. Because there is no guarantee that it suddenly won't decide "Wait a second. Why am I listening to you? You don't follow your own rules, why should I? I'm superior to you."
TCMustang Yes, you're right. That's why so many of us are full of apprehension.
..and that's how Skynet is born :)
sounds like my boy at university lol
Next Tuesday I rkn
If its possible then reason will be non other than human !!
But if they will destroy us then i guess they also won't survive, will they ??
Coz at the end we are the creaters😎
Vips Well... I can buy a puppy at a pet store, but if I die, will it die too? Sure, it probably depends on me for food and water, but if it were intelligent enough (ASI), it would find a way to obtain these resources without me. That's the scary part.
2064
we all dead by then, maybe, probably...
just unplug it
they probably wouldn't be wired but might have an off switch
😜
Who else saw the spit at 4:39
John Ox You're not the only one.
Scrolled down for this comment.. Was not disappointed
DAMMIT JOHN 🤭 Tbf I’m saying a lot of words very fast and there’s no time to swallow...
Greg Foot Haha its alright Greg 👍👍
40 th like and 20th comment
They will do if we don't put a Full stop on Technology, It's enough now
ِ what about curing cancer with technology?
Are you joking?
Hysenlowes 90 well curing everything isn't necessary in nature
Domen Bremec for u, maybe lol
I say we go back to the caves, nothing good came from that fire thing! If only we had nipped it in the bud...
Robot kill only US(AMERICA) NOT ALL😁LOL
They will be able to kill