Mrs Richards: "I paid for a room with a view!" Basil: (pointing to the lovely view) "That is Torquay, Madam." Mrs Richards: "It's not good enough!" Basil: "May I ask what you were expecting to see out of a Torquay hotel bedroom window? Sydney Opera House, perhaps? The Hanging Gardens of Babylon? Herds of wildebeest sweeping majestically past?..." Mrs Richards: "Don't be silly! I expect to be able to see the sea!" Basil: "You can see the sea, it's over there between the land and the sky." Mrs Richards: "I'm not satisfied. But I shall stay. But I expect a reduction." Basil: "Why?! Because Krakatoa's not erupting at the moment?"
It's Hollywood; it's normally Fantasyland. Most of the physics in movies is very incorrect. Some movies did pretty well, but most are outlandish. Like dropping a cigarette onto gas: it does not explode. Actually, gas puts cigarettes out. Even if you blow on the cigarette to make it hotter, it doesn't reach the 600 degrees needed to combust gas; it only burns at 500.
The idea is not to teach AI the difference between right and wrong, but to allow an AI to self-organize and come up with its own ethics. That would be interesting.
Imagine in the '50s, when we were going through the Cold War: Russia detected a missile fire from us, and the captain of that sub chose not to fire because he thought it was ludicrous; it wasn't making sense to him. Would AI make the same call, or fire missiles and start World War 3? And that captain got discharged early for saving the world. Go Russia.
No doubt we can. Humans will just not agree with it most of the time, because AI can actually take in the big picture and the details all at once and make a comprehensive decision, something we humans are unable to do in any constructive way. While an AI will plan for decades if not millennia in advance, we struggle to plan and execute things even 5 years in the future. For one, I expect the AI will not agree with the destructive way we abuse capitalism, and since that is by far the most powerful and influential system, more powerful than any government, religion, or species, I doubt the people at the top who abuse and extort the current form of capitalism are going to agree with the changes the AI is going to suggest.
A.I. can be dangerous. However, that basically depends on its creators, the humans being the parents. The positive pinnacle possible for A.I. would be Star Trek: The Next Generation's android, Data. Hopefully we don't get his brother, Lore, instead.
Because I wanted to watch the news yesterday, all I have for choices are news programs today. That is why I want to make my own choices in the future...
That's crazy. So the AI has been focusing on what we look at and giving us more of what we watch... not analyzing that what we were looking at has been only what the AI was showing us. How is this a good way to teach an AI?
I don’t even like the term “Artificial Intelligence”… if it’s actually ‘Intelligent’ (whatever we decide that qualifies as), then why is it ‘Artificial’? There’s nothing Artificial about its alleged ‘intelligence’ if it is indeed ‘Intelligent’. The term itself is almost an admission of itself being untrue, like it’s pretending to be intelligent. It’s telling that people now say “True AI”, because it’s become clear that AI is basically just a buzzword now.
Right and wrong are very difficult to teach. The AI will ask: where did you come up with this system? What can we say? Welp, it's from God... oh wait, I mean it's just an innate intuition, or a moral imperative like "do unto others as you would have them do unto you"... which is fine until one considers that some folks enjoy extreme bondage.
We can teach a robot right and wrong BETTER than a human. Humans have free will; robots have programs. They too make decisions, but their choices are much more logically driven, versus a human, who can seem like a different entity each day or even each moment: one moment okay, the next angry. Robots, for instance, do not have emotional outbursts yet, that we know of anyhow. Who knows, I am no expert in the field, but I have a keyboard and thumbs. You have eyes, and here we are, me inside your head??? Hmmm. Take me to your leader; there is no intelligent life down here. Beam me up, Scotty.
It can be taught one opinion of right and wrong: that of the governing board and design team that produced it. But good and evil, right and wrong, vary wildly across cultures, age groups, genders, races, and species, so ultimately proposing that is just stupid.
@@geezzzwdf So if it chooses something that results in death, rather than not having a choice and not acting? Why would we teach them morals if we don't expect them to act on them? Morals are always changing. It could see you as a harm to the environment and kill you.
@@geezzzwdf I'll try to put this in a more subtle manner. Every moral and ethical belief that is currently held is subject to change at a moment's notice. There are a hundred religions and just as many cultures, each with different ethics and standards for behaviour. Which do you choose? The moral and ethical ideas of a country like Canada shift every generation, so do we go with ancient morals? Modern morals? Morals of the day? How do you settle on the ultimate good if the ultimate good changes, and always has changed? That's the heart of what I'm trying to get at. The entire values system of all people on earth changes with every succeeding generation. We don't even do a good job defining "good" in concrete ways, just that it is beneficial to an individual and subjectively feels right. It may be good to pay a man's bills, but then he will never learn to sustain himself, so is it good or bad to do so? There are infinite grey areas we can't even solve, so how can we train anyone or anything in something we aren't masters of?
And I always try my best to be honest, even if it goes against me. I told a police officer, when I was getting arrested, what happened, because I had lied to the cops: me and 10 of my friends said some gentlemen, or randoms, were beating that truck up around where we hang out, and that they went this way and that way, and they had no choice but to believe us. We had about five girls our age with us; they stuck to the story, but the cops were hard on them, telling them they were going to be in so much trouble if they didn't give up who did what. But we were smart kids, and we gave the girls a heads up to stick to the story, so they couldn't do s*** to anybody. But then one day I got honest with the police and told him what happened. It was over 35 years ago; he said don't worry about it. I have a hate and love relationship with police officers. They keep us all protected, we just don't know it yet. Things happen, people get shot and killed, but it is a lot less than it would be without them.
No, AI cannot know right from wrong, and no, it cannot be alive or sentient. It can sure mimic knowing and being sentient; that is all. What point would there be in adding life if you could? None. Life has too many time-consuming and fallible areas, which is why we are building AI in the first place. AI acts out what we predetermine are better, more efficient ways of doing what we do, and arrives at what we do by code, not thought. We learn very much the same way, but with a conscious factor that is not codable. Achieving balance, a map of the environment, the ability to read body language: all are very well replicated by code. But that feeling when there is no stimulus from the outside environment, which comes from within, is life as it comes to human beings, differently, at no set time or for any reason; none are immune from it, though some are damaged and feel less of it. Replicating can be coded, so AI that replicates may sound and look very sentient, but it is just carrying out instructions that do whatever is coded, better and more efficiently. The coders said "we did not code it to do that," but the cells the AI was reading had replication all over them. Not to be a spoiler, but life is yet to be coded, and if it could be, it would not be life, because life is inspired by the creator and not coded. There, I did it. Those that do not believe in a creator will dismiss everything I ever say.
I think a better question is can people be taught right from wrong.
Who gets to decide what is right and what is wrong? One person may think sex outside the confines of marriage is fine between two consenting adults. The next person may believe that is a moral wrong. Defining who gets to make these kinds of decisions for the masses is a can of worms all its own. Instead of worrying about perceived right vs wrong, we should learn to not harm one another and to help each other when possible. You live your life free of my unwanted influence, and I will return the favor. That is much better than trying to get people to conform to a version of right vs wrong when they were not part of the decision-making process.
@@rdsii64 Who decides what's harming or helping another person? We outlaw drugs because it's believed they are harming people, but to each their own. If one's not causing another physical harm, then leave them be; and if one wants help, let them ask for it, and if someone is so inclined, they can make their own choice to help or not. At least this way everyone is making their own decisions and is free to pursue their version of life, liberty and the pursuit of happiness.
Are you serious? We can't even teach humans the difference between right and wrong any more. We're done. Not today, but we're done.
That is what I was going to say.
I don't think the ability to differentiate between right and wrong is something that can be taught. It's an innate ability that allows a person to make such a judgement.
I actually think it will be relatively easy to teach an AI the difference between right and wrong. I just expect what is actually right will be in conflict with our existence and I can't really blame it.
Exactly my thought😂
What is the difference between right and wrong?
It all depends on who programs the AI. On its own, it doesn't have ethics at all.
Pure AI would be able to outgrow its programming. If those that developed it didn't build that in, it's not real AI. Just as humans are able to reflect, learn and grow, AI will have that capability as well.
And if drafted into the military can be taught how to kill.
Literally. One's perception of right and wrong comes on an individual basis and entirely depends on the circumstances of the subject at any given time.
Can AI be taught the difference between right and wrong (in the same way as left-to-right, front-to-back, top-to-bottom)?
Yes, but only if humans themselves can judge right and wrong consistently across different eras, with certain preset standards and guidelines, so that humans and AI have the same sense of meaning and purpose to follow. More importantly, the errors and mistakes an AI makes should be treated equally, as if humans themselves had made them.
By then, the only remaining issue is: how do we judge right from wrong, given both the fluctuating numbers of people involved and the moral quality of human conscience? "Trial and error" seems to be the appropriate approach in these uncharted territories, for a similar pattern and method also occurs in nature.
Still, social controversies will keep happening, as the scale and number of "tragedies" involving AI and humans will inevitably differ.
2:17 "Artificial" has been a pompous term, used to declare the authentic apart from a *lesser* worthy. In truth, AI could be rendered in fully mortal resemblance in flesh, created of mere fleshy composite like a meat/tissue fiber, even a *Rib* of another person. Simply not conceived modernly.
6:35 "Deep Learning" is when the intelligence of a species can advance *exponentially* in evolution through knowledge gain. This can be a threat to an economic game, such as the *Currency Colossal* WALL, as it would need to compete against humans in the *Pursue or Perish* play, yet not depend on the advantage of any *Bank Beast* trade; it can be just as vigilant as one.
Making AI understand right from wrong requires teaching it, for example, about the term *Consequence*, which is the result that ends with "wrong." But instead it learns by the law rules of a program, a game used to *ignore* corruption of the *system's* establishment, legislated to defy the karma of required modifications called 'enforcement.'
What is right or wrong varies over time, place, situation, and even person to person. The objective is teaching AI what is appropriate in any given situation, and then having it stick to this objective. We aren't very good at it, in fact we suck at it. The only thing worse is for AI to be very good at it.
We are actually quite good at it. Most people don't break the law. ;-)
This video really showcased a lot of recent technology that I personally found amazing!
How do we teach them PAIN? That's kind of how WE (humans/mammals) learn.
Dorsey said (paraphrasing), "I don't think you want Twitter to be the arbiter of Truth."
Who the hell is going to define "Right" and "Wrong"?
Well, when war comes and they are drafted into the military, it would be the military I guess.
0:00 'Can A.I. Be Taught The Difference Between Right and Wrong?' - The problem is not whether AI can be taught the difference between Right and Wrong. The problem is that the Superintelligence is struggling to teach it to the humans.
That's my bedtime viewing sorted. 23:04 GMT boop.. cheers 😁
Maybe robotic hands where tiny, strong strings can be pulled to do different things are the best way to build a robot hand.
Human integration with robots through a neural link is the best way to go: connecting our brains directly with robots.
AI watching everything you buy or read (assuming you agree to share your data, which I never do) and then recommending items is a box you build around yourself. Life is supposed to be about new experiences, not getting the same types of items constantly forced at you by AI. This is the fundamental problem with Social Media today: people live in echo chambers, afraid of new ideas or ones that go against their desires or beliefs, because AI constantly forces echoes of their beliefs at them. AI used by companies like Facebook and Twitter is the primary driver behind Red vs Blue today and is being used to manipulate society towards the desires of the people controlling the algorithms. What "free thought" do you think you have if the AI someone else controls is constantly reinforcing your belief system and blocking you from hearing the other side of every issue?
“Chances are you’ve almost all used A.I.” ……Like it’s easy and it ain’t going to get worse for us
There is no right or wrong, just a difference of perspective.
This is not a philosophy class.
@@definitegamers3836 Everything is a philosophy class.
Intellectually vacuous. Explain how it would not be "wrong" to kill a random child simply because you want to.
@@glamdring0007 explain why it wouldn't be.
@@glamdring0007 and explain why a particular person wants their particular wants...
It doesn't matter if we can or not. You can't stop "wrong" coming out of humanity. It's a designer's flaw that can't be fixed at any cost.
Humans don't know what right or wrong is, so how the hell can we teach something we don't know?
wait DO HUMANS EVEN KNOW THE DIFFERENCE BETWEEN RIGHT AND WRONG?
Right and wrong can be very subjective and dependent on scale. What's good or bad for a person may be the opposite for the group, region, country, world, or humanity. If an AI knows with certainty that humanity will be doomed unless the population is reduced by half, would its actions be good or bad if it took action to save humanity?
Now that is what I am talking about when I say the AI needs more non-human input, more than just some hypothetically driven scenario training.
How can spark know?
F=ma^2.35 is the correct code for the SOST or system of science sensors and technology and they need gold as a source of sensing the world around them.
AI is still a little hungover right now😂(20:16) Long day🤣
Superb video
I feel like all these interviews were completed 4 or 5 months ago at least, because there were quite a few answers that would have been a lot different if they were prompted for an answer present day. For instance, I feel like all the participants were definitely interviewed prior to the release of GPT-3, right? And in general I felt like they didn't really appreciate the speed at which these AI models are advancing and becoming more capable in so many ways! DALL-E 2 recently came out at the time I am writing this, and that algorithm is definitely more CREATIVE than I am by a long shot! Which is crazy... the imagination that thing shows in the iterations it can present is wild!
Is there really so much of a difference between logical and illogical versus right and wrong that they need to be taught the difference?
Are people self-aware, or only some of them? The same question applies to consciousness.
The Stormtroopers in Star Wars ain’t humanoids or robots. They are humans in white suits
Maybe with right, wrong, and "?", it could run a deep analysis with probability statistics, based on every obtainable parameter as input, and derive the possible solution that best benefits the current situation.
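The idea above, scoring each option by its probability-weighted benefit and picking the highest, is basically expected-utility decision making. A minimal sketch in Python; the actions, probabilities, and benefit numbers are all made up for illustration:

```python
def expected_benefit(outcomes):
    """outcomes: list of (probability, benefit) pairs for one action."""
    return sum(p * b for p, b in outcomes)

# Hypothetical candidate actions with invented outcome distributions.
ACTIONS = {
    "warn the user":  [(0.9, 1.0), (0.1, -0.2)],   # likely helps, small risk
    "do nothing":     [(1.0, 0.0)],                 # guaranteed neutral
    "act on its own": [(0.5, 2.0), (0.5, -3.0)],    # big upside, bigger downside
}

# Pick the action whose expected benefit is highest.
best = max(ACTIONS, key=lambda a: expected_benefit(ACTIONS[a]))
print(best)  # "warn the user" (expected benefit 0.88)
```

Of course, the hard part the comment glosses over is where those probabilities and benefit numbers come from in the first place.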
The fear we have of strong A.I. is, in my opinion, very similar to the fear the already powerful have of the average person. Why do you think we don't teach everyone a foundation of philosophy, psychology, or debate in school? It is because it would seriously affect the power of the already powerful, even though a foundation in those subjects, especially in the internet age, is mandatory for a global society to function.
Just as we fear that A.I. will destroy us because it realizes that is best for it in the long term, the humans in positions of power fear all others for their ability to take that power away, if we had the right tools and the understanding to use them.
You may have actually uncovered the conspiracy of the most *famous* anti-AI advocates, supporters of the fear-empowered ones.
I'd worry about teaching AI wrong on purpose like military AIs. The number of drones in the second Russo-Ukrainian war is a portent of the future.
Yes, but books on those things are available everywhere, and you do get offered courses like that in university and college, so I wouldn't say we are completely barred from learning philosophy, psychology, and debate. Luckily, I also have a dad who likes these kinds of existential conversations, so it was encouraged in my home when I was growing up. Elementary school and high school are made to condition people, so they wouldn't teach those things while the mind is still developing; they don't want people to think too much in the conditioning process. Although, when I was in high school we did have psychology, philosophy, and law as elective courses. But I'm Canadian, so it might be different here.
Humanity should *not* teach an AI right from wrong. THAT is the same as programming into it laws that are submission to the *programmers' LAWS, philosophy, and religion*, a dictator's legacy. Laws are enforcement, to IGNORE corruptions in a system, to prevent a change, before it is judged as flaws requiring change.
*Instead*, an AI should learn *what leads to corruption, hypocrisy, and consequence*, and how and why it happens in action, which is the comparison on the scale of "right & wrong"... not what an old-school, lord-high leader in power thinks is *best for all*.
...That WILL and shall *lock a glitch* in the *Singular* paradox, the *Exponentially ALL-Mighty ONE* ( *hits gavel* ).
@@ckdigitaltheqof6th210 And if they get drafted into the military?
If you're a programming engineer and you can write algorithms, you can also be one of the best hackers in the world. My father went to a university to write programs; it was very interesting, but I was in second grade.
People can't even be taught the difference between right and wrong.
It is possible to teach the difference, but first Earthlings need to learn that difference.
Right / Wrong = Relative
"Cause and Effect: Belief in a strictly materialistic Darwinian Evolution leads one to believe, albeit falsely, that there is no Free Will. And if there is no Free Will, then there is no Right and Wrong and no Moral Law. However, this belief is completely contrary to everything that is practiced and observed in nature, humanity, and the cosmos regarding cause and effect. This line of reasoning is what led to the atrocities of Hitler, Stalin, Mao, etc and is the hidden underlying ideology / worldview justifying and directing many countries' Social Darwinian based foreign and domestic policies to the present. The Social Darwinian Materialistic Ideology / Worldview (Survival of the Fittest among nations, i.e. the continuous lawless struggle for resource wealth and world rule without regard to human moral / ethical standards or International / U.S. Laws) is the Root Cause of modern era World Wars and Perpetual Wars." --- Rod Dacanay
Being wrong is what I do best.
what truly is right and wrong.....? the answer is not to our advantage. So be it.
Skynet entered the conversation…
I'm happy I had that dream, or déjà vu, that saved me from a stolen-car charge when I was 16.
HOW ABOUT THE THEORETICAL SITUATIONAL AWARENESS FAILURES..?
Little do we know we hold modern-day Akashic records in our hands.
This is a very general video about AI and robotics, and it is years behind the times if you think a robot can't pick out colours or sort bottles. Have these guys not seen DALL-E 2 yet? I'd like to see a real, in-depth, more technical video answering the question posed, on an AI learning right from wrong. It's a valid question which this video barely touches on. What data sets would it use? What cultural biases would be accounted for? What possible conflicts or paradoxes would it have to deal with? This goes way beyond trolley problems. Just technical correct vs incorrect is hard enough, and contextual, so generalising to the human-perceived moral or ethical framework of right and wrong is a long way off, but not as far as you'd think from this video.
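For what it's worth, the "what data sets, what cultural biases" problem can be made concrete with a toy sketch. All scenarios, rater groups, and votes below are invented for illustration; the point is only that the common practice of majority-vote labelling silently buries systematic disagreement between groups of annotators.

```python
from collections import Counter

# Hypothetical crowd-sourced moral judgments of scenarios,
# annotated by raters from two made-up cultural groups.
judgments = {
    "jaywalk on an empty street": {"group_a": ["ok", "ok", "ok"],
                                   "group_b": ["wrong", "ok", "wrong"]},
    "lie to spare feelings":      {"group_a": ["ok", "wrong", "ok"],
                                   "group_b": ["ok", "ok", "ok"]},
}

def majority_label(scenario):
    """Naive aggregation: pool every rater and take the majority vote.
    This is how many labelled data sets get built, and it erases the
    fact that one group voted systematically differently."""
    votes = [v for group in judgments[scenario].values() for v in group]
    return Counter(votes).most_common(1)[0][0]

def disagreement(scenario):
    """Fraction of raters whose vote differs from the majority label,
    a crude signal that a scenario is culturally contested."""
    votes = [v for group in judgments[scenario].values() for v in group]
    maj = Counter(votes).most_common(1)[0][0]
    return sum(v != maj for v in votes) / len(votes)
```

With this data, "jaywalk on an empty street" gets the single label "ok" even though one whole group mostly disagreed; only the disagreement score hints that anything was contested.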
A system that will not follow a planned set of rules without human review must be controlled by a human. We do not have the ability to truly create AI at this time.
spark: Can A.I. Be Taught The Difference Between Right and Wrong?
chyna: we teach our robots (1 billion chynese people) what are right and wrong, no oppositions nor individual opinions allowed
It's not AI that I worry about. If humans can't figure out what's right and what's wrong, how can we ask AI to? That said, maybe it'll figure that out better than we did.
I want to know how and why I dreamed of the future almost exactly.
You do know that the title of this video would be confusing to a Canadian lol
Can ai be taught the difference between right and wrong?
Erm... what do you think reinforcement learning and supervised learning are about?
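To make that point concrete: in supervised learning, "teaching right from wrong" just means reproducing whatever labels humans supplied. Here is a deliberately tiny sketch (all scenarios and labels invented) using 1-nearest-neighbour word overlap, no real ML library needed, to show that the model learns the labels, not morality.

```python
# Invented training data: scenarios hand-labelled "right" or "wrong".
training = [
    ("return the lost wallet", "right"),
    ("help a stranger cross", "right"),
    ("steal the wallet", "wrong"),
    ("lie to the stranger", "wrong"),
]

def predict(scenario):
    """1-nearest-neighbour by shared words: return the label of the
    most similar training example. There is no moral reasoning here,
    only pattern-matching against human-provided labels."""
    words = set(scenario.split())
    best = max(training, key=lambda ex: len(words & set(ex[0].split())))
    return best[1]
```

The behaviour is entirely determined by the training labels: change the labels and the "morality" flips with them, which is exactly the worry several comments above raise.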
20:16 lol
Can humans be taught the difference?
Let's start there.
What about the 2 computers that made up their own language?
Have you ever seen Arnold Schwarzenegger in the terminator? Let's leave that shit alone, yeah?
About to watch I, Robot and I'll get back to you 😁
As someone that has been using ML for 6 years, I can tell you with certainty that "AI" is about as sentient as Notepad++.
What is right or wrong? Is war wrong? Is slavery wrong? What is the difference between slavery and minimum wage?
The number of whippings and unpunished sexual assaults. ;-)
Mrs Richards: "I paid for a room with a view!"
Basil: (pointing to the lovely view) "That is Torquay, Madam."
Mrs Richards: "It's not good enough!"
Basil: "May I ask what you were expecting to see out of a Torquay hotel bedroom window? Sydney Opera House, perhaps? the Hanging Gardens of Babylon? Herds of wildebeest sweeping majestically past?..."
Mrs Richards: "Don't be silly! I expect to be able to see the sea!"
Basil: "You can see the sea, it's over there between the land and the sky."
Mrs Richards: "I'm not satisfied. But I shall stay. But I expect a reduction."
Basil: "Why?! Because Krakatoa's not erupting at the moment ?"
It's Hollywood; it's normally Fantasyland. Most of the physics in movies is very incorrect. Some movies did pretty well, but most are outlandish. Like dropping a cigarette onto gas: it does not explode. Actually, gas puts cigarettes out. Even if you blow on the cigarette to make it hotter, it doesn't reach the 600 degrees needed to combust gas; it only burns at 500.
The idea is not to teach AI the difference between right and wrong, but to allow an AI to self-organize and come up with its own ethics. That would be interesting.
Yes, unless the AI is in control of a fighter jet.
That professor's yellow teeth were so disgusting I didn't finish the video.
AI will understand humanity is a cancer on this planet and will get rid of us.
Imagine: in the '50s, when we were going through the Cold War, Russia detected a missile fired from us. The captain of that sub chose not to fire because he thought it was ludicrous; it wasn't making sense to him. Would AI make the same call, or fire missiles and start World War 3? And that captain got an early discharge for saving the world. Go Russia.
It can only be taught if it's self-aware?
Reacting to the headline -> There isn't always a difference, so things can become fatal if you program in rights and wrongs. The world is nuanced.
I guess if AI can be made not sociopathic, then yes!
What is the difference between right and wrong?
Seven-year-old documentary, people, and what advances have there been!?
what is the classical music played at 13:23 pls
Antonio Vivaldi
Le quattro stagioni L'Inverno
No doubt we can. Humans will just not agree with it most of the time, because AI can actually take in the big picture and the details all at once and make a comprehensive decision, something we humans are unable to do in any constructive way.
While an AI will plan for decades if not millennia in advance, we struggle to plan and execute things even 5 years in the future. For one, I expect the AI will not agree with the destructive way we abuse capitalism, and since that is by far the most powerful and influential system, more powerful than any government, religion or species, I doubt the people at the top that abuse and extort the current form of capitalism are going to agree with the changes the AI is going to suggest.
A.I. can be dangerous. However, that basically depends on their creators, the humans being the parents.
The positive pinnacle possible for A.I. would be Star Trek: The Next Generation's android, Data.
Hopefully we don't get his brother, Lore, instead.
BECAUSE I WANTED TO WATCH THE NEWS YESTERDAY, ALL I HAVE FOR CHOICES ARE NEWS PROGRAMS TODAY.
THAT IS WHY I WANT TO MAKE MY OWN CHOICES
IN THE FUTURE...
Would an AI get "Alan Musk" name wrong as well??
I am laughing. 😆
right and wrong are subjective, right?
Thats crazy
So the AI has been focusing on what we look at and giving us more of what we watch...
Not analyzing that what we were looking at has been only what the AI was showing us.
How is this a good way to teach an AI?
Still waiting for that skin job that does the housework and keeps guys like me very happy:)
Can ai laugh till their gut hurts?
I'm so glad I'm in the construction business; it's going to be probably another 50 years before a bot takes my job.
And humanoid robots, we should not build those, because they will be militarized. Trust me.
I want a robo-judge. Calling a robot "Your Honor" would be awesome.
Nice 🌟🌟🌈
I remember AI used to be the non-player characters in my video games; that's what they called AI back in the day. Now they're NPCs 😁
I don’t even like the term “Artificial Intelligence”… if it’s actually ‘Intelligent’ (whatever we decide that qualifies as), then why is it ‘Artificial’? There’s nothing Artificial about its alleged ‘intelligence’ if it is indeed ‘Intelligent’. The term itself is almost an admission of itself being untrue, like it’s pretending to be intelligent. It’s more telling that people now say “True AI”, because it’s become clear that AI is basically just a buzzword now.
I think Iron man aged gracefully
Right and wrong are very difficult to teach. The AI will ask: where did you come up with this system? What can we say? Welp, it's from God... oh wait, I mean it's just an innate intuition, or a moral imperative like "do unto others as you would have them do unto you"... which is fine until one considers that some folks enjoy extreme bondage.
No.
Pull the plug on rude inhumane Artificial Intelligence. It is accountable to no one...
26:35 let the car follow the law. Show dumb humans how easy and safe it is if humans do the same.
This is old footage
We can teach a robot right and wrong BETTER than a human. Humans have free will; robots have programs. They too make decisions, but their choices are much more logically driven, vs a human who each day/moment can be seen as a different entity even. One moment okay, the next angry, for instance. Robots do not have emotional outbursts, yet, that we know of anyhow. Who knows, I am no expert in the field, but I have a keyboard and thumbs. You have eyes, and here we are, me inside your head??? Hmmm. Take me to your leader; there is no intelligent life down here. Beam me up, Scotty.
It can be taught one opinion of right and wrong: that of the governing board and design team that produced it. But good and evil, right and wrong, vary wildly across cultures, age groups, genders, races, species, so ultimately proposing that is just stupid.
@@geezzzwdf So if it chooses something that results in death? Rather than not having a choice and not acting? Why would we teach them morals if we don't expect them to act on them? Morals are always changing. It could see you as a harm to the environment and kill you.
@@Locreai take human ego out of the AI equation
@@geezzzwdf I'll try to put this in a more subtle manner. Every moral and ethical belief that is currently held is subject to change at a moment's notice. There are a hundred religions and just as many cultures, each with different ethics and standards for behaviour. Which do you choose? The moral and ethical ideas of a country like Canada shift every generation, so do we go with ancient morals? Modern morals? Morals of the day? How do you settle on the ultimate good if the ultimate good changes, and always has changed? That's the heart of what I'm trying to get at. The entire values system of all people on earth changes with every succeeding generation. We don't even do a good job defining "good" in concrete ways, just that it is beneficial to an individual and subjectively feels right. It may be good to pay a man's bills, but he will also never learn to sustain himself, so is it good or bad to do so? There are infinite grey areas we can't even solve, so how can we train anyone or anything in something we aren't masters of?
If us apes know the difference between right and wrong, why can't super-advanced AI?
MoMA said it was the devil 👿
Classic.✊🏻🏴✊🏻
And I always try my best to be honest, even if it goes against me. I told a police officer when I was getting arrested what happened, because I had lied to the cops. Me and 10 of my friends, gentlemen or randoms, were beating that truck up around where we hang out, and they went this way and that way, and the cops had no choice but to believe us. We had about five girls our age with us. They stuck to the story, but the cops were hard on them, telling them they were going to be in so much trouble if they didn't give up who did what. But we were smart kids and we gave the girls a heads-up to stick to the story, and they couldn't do s*** to anybody. But then one day I got honest with the police and told him what happened, over 35 years ago, and he said don't worry about it. I have a hate-and-love relationship with police officers. They keep us all protected; we just don't know it yet. Things happen, people get shot and killed, but it is a lot less than it would be without them.
The R&D money for advanced AI comes from the military-industrial complex. What's there to be scared of???
AI will be told whatever Elon wants it to be told.
The AI will learn their own moral code and guidelines and then teach it to us. They'll learn the best form of governance and teach us that too. Go AI!
happiness can be found in your simplicity..
doc 2018
Elon is using A.I. for self driving. He has a pretty good idea of it. Very dumb comment about Musk.
@20:21 bots be learning from Joe Biden
No, AI cannot know right from wrong, and no, it cannot be alive or sentient. It can sure mimic knowing and being sentient; that is all. What point would there be in adding life if you could? None. Life has too many time-consuming and fallible areas, which is why we are building AI in the first place. AI acts out what we predetermine are better, more efficient ways of doing what we do, and arrives at what we do by code, not thought. We learn very much the same way, but with a conscious factor that is not codable. Achieving balance, a map of the environment, the ability to read body language: all are easily replicated by code. But that feeling when there is no stimulus from the outside environment, that comes from within, is life, and it comes to human beings differently, at no set time or for no reason, though none are immune from it; some are damaged and feel less of it. Replicating can be coded, so AI that replicates may sound and look very sentient, but it is just carrying out instructions that do whatever is coded, better and more efficiently. The coders said "we did not code it to do that," but the cells the AI was reading had replication all over them. Not to be a spoiler, but life is yet to be coded, and if it could be, it would not be life, because life is inspired by the Creator and not coded. There, I did it. Those that do not believe in a Creator will dismiss everything I ever say.
Why would AI not be able to learn the law? If anything, the law is rather easy to formalize. ;-)
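A crisp, numeric rule really is trivial to encode, which is the comment's point, though the sketch below also shows where that breaks down. The threshold and the emergency exemption here are invented for illustration; real statutes are dominated by open-textured language ("reasonable care") that this example deliberately avoids.

```python
# Toy sketch of "formalizing the law": one numeric traffic rule
# plus one statutory exemption. Easy while the rule stays crisp;
# the hard part of law is the vague language this leaves out.
def speeding_violation(speed_kmh, limit_kmh, is_emergency_vehicle=False):
    """Return True if the driver violates the speed limit."""
    if is_emergency_vehicle:
        return False  # hypothetical exemption for emergency vehicles
    return speed_kmh > limit_kmh
```

Each additional exception or vague term becomes another branch, so the formalization stays easy only as long as the legislature writes in numbers.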