Ray Kurzweil - Human-Level AI is Just 12 Years Away
- Added: 11 Jun 2024
- Recorded: November 3, 2017
In December 2012, Kurzweil was hired by Google in a full-time position to "work on new projects involving machine learning and language processing". He was personally hired by Google co-founder Larry Page; the two agreed on a one-sentence job description: "to bring natural language understanding to Google". - Science & Technology
53:41 is where the title question is asked and answered
Although the title claim is only mentioned in passing there; the question at that timestamp is actually about existential risks related to AI. I would have liked to hear Kurzweil elaborate a little on that estimate.
Matthew…..you the man!
Sank ju.
Matthew! Perfect.
Every time he says “that’s another discussion” I really want to hear that discussion
Watch enough of his interviews and you will start hearing the same subjects over, and over, including the other discussions.
@@GeekBoy03 What are you trying to say?
@@easypeasy9598 You don't understand that someone often repeats themselves? Trump does it all the time
@@GeekBoy03 I just asked what you meant by that
Can someone invent a time-saving app that would subtract all previous Kurzweil speeches from the current one to arrive at the 2-3 minutes of new content, if any?
Alan Grimes exactly my thoughts
Alan Grimes there is a q &a so that should be new
LOOOOOLL -claps-
Being an inventor, he should make a 'replay machine' that gives most of the talk for him.
You have to wait 12 years for that app
Amazed at the ignorance displayed through the comments.
The man comes up with solutions to our problems.
He explains his ideas throughout books.
He shares his health tips.
I can't understand why some YouTube users bash him.
Whenever people bash someone vehemently, it is an indication of their fear.
Many people are afraid of what AI will do to them.
That is why they lash out.
Like Elon Musk?
Just about your point regarding the books: I was reading two of his books. The first one left quite an impact on me. The second book, titled "The Singularity Is Near", started quite interesting but diverged into some weird sort of complexity in the last third. That totally lost me. No matter how intelligent a book is, if I don't understand it anymore I just lose my interest in it.
@ Richard Richard What's with the passive-aggressive sarcasm? Could have just said "Fuck da police, think for yourself"
Well, if you read Ray's predictions leading up to the end of this century, I bet you'd be terrified too. This guy has predicted that by the end of this century humans will be museum exhibits displayed as an endangered species, like white tigers, and the few humans that remain in the "wild" will live in small isolated communities protected by the AIs/cyborgs. Pretty much everyone else will either be digitized consciousnesses or mostly consist of robotic parts, which will render us immune to biological diseases, thus turning lots of processes that seem extremely hard right now, e.g. space travel, into a walk in the park.
...those are Ray's predictions. Now don't get me wrong, I don't give a damn about what body I'm in as long as my consciousness exists, so I can perceive the evolution both of ourselves and of the universe; in other words, I don't mind controlling a robotic body. Can you say the same for yourself?
I've figured out Ray's hack to living forever. If he keeps saying the same thing every time he makes a speech, then it'll be really easy for his future-AI-self to emulate him.
He still seems pretty on track.
26:34 For those of you familiar with Ray's talks, the Q&A starts here
To sum it up: complexity will increase until it can’t, then a rapid simplification (collapse) will occur.
Thanks
Ono! Ty!
Thank you .. well done Ray Kurzweil and friends ..!!
10 more to go... counting ...
Counting with you. I'll be back next year to see if you updated your message.
...now 9
8 to go now...
@@randomgamingstuff1 Yes, kind of. Even if it's 10, and 15 for the singularity, it's not that far.
It will be here sooner. Probably before 2025... he said that in 2012
Ray is getting younger
I honestly think 12 years is being generous. If Ben Goertzel's Singularity Net takes off next year, it will be the beginning of a human level AI. And it will only grow exponentially.
I think Elon Musk said this year we are only 8 years away from superhuman level.
Where did you read or hear this?
No, he said that we're 8 years from a brain chip to enhance intelligence
go Ben go,
Plus the fact that every major company will step on the gas, wanting to be the first one there... I say it's 7 years away!
It is almost 2018 and some people still use interlaced video (and don't even deinterlace when they put it online). :( Interlaced video was used for old CRT screens which nobody uses anymore.. Come on!
I remember when I saw Ray talking on Big Think in 2009 about virtual reality being only a couple of years away, and thought like many others that virtual reality was a dead trend. Now we have the Oculus Rift, with an improved second-generation HMD around the corner. He predicted self-driving cars, and voice recognition as advanced as Google Duplex. I'm really convinced that he's right about human-level AI being 12 years away, and even if he's wrong on the time frame, I don't see this technology being more than 15 years away. This is crazy
Well, it's for sure dead now
@@GeekBoy03 what's dead?
@@dawkinshater101 come on man, half of your comment is about virtual reality
I would much prefer a pill that gives you extremely vivid dreams and makes you aware that you are dreaming. Much safer and better than VR, AI and uploading your mind to the cloud. Those things give much more power to governments and make you vulnerable to hacks.
I always thought that VR would be a big hit. But VR is still not a big hit, and I think the main issue is that the headsets are not comfortable. I also think that mobile phones will die out and be replaced with AR glasses; at first AR glasses will likely be an add-on to the phone, but in the long run they will overtake it.
@@GeekBoy03 sony just announced the psvr2, and Facebook is working on the quest 3. I don't see how it's dead
In 12 years? Bring it on!
YES! 12 years away until my future wife is created! can't wait! 12 more years of loneliness/sorrow and solitude
(sniff)
Poor bastard, I feel for you. I've been with a woman and without a woman. Being with a woman is better.
I'm far too weird and different to be with any human girl. At least you have past relationships. You've probably heard that saying: it is better to have loved than never to have loved at all. The irony is that the AI wife will probably leave me, and I wouldn't be surprised.
You can get help for these things...that's what sex and relationship therapists are for...
Nah, I'm good. I mostly dislike people now, and unless I meet her on a transhumanist dating site I won't be interested. Also, I just don't know how to be in a relationship anymore, so I'm good, I'll wait 12 years.
I'm more than mildly concerned about the number of socially maladjusted people and outright misanthropists waiting around for the "Rapture for nerds". If we don't figure out our shit before super AI comes along, I don't know if we are going to use it wisely.
How does AI learn? I mean, it must store something somewhere. A database of examples to pull from, or does it change the algorithm it uses to make decisions?
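The distinction this question draws (stored examples vs. a changed decision rule) can be sketched. In most modern machine learning the answer is closer to the second: what is "stored" is a set of numeric weights, nudged slightly after each example. A minimal illustration, my own and not from the talk (the function name and numbers are made up):

```python
# Minimal sketch: "learning" as weight updates, not a database lookup.
# Here one weight w is fitted so that y = w * x matches the examples.

def train_one_weight(examples, lr=0.1, epochs=50):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad              # this tiny nudge IS the learning
    return w

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_one_weight(examples)
print(round(w, 3))  # converges close to 2.0: the rule y = 2x, stored as one number
```

After training, the examples can be thrown away; the learned rule survives as the value of `w`. Large neural networks do the same thing with billions of weights instead of one.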
Artificial super intelligence would follow shortly after human level intelligence. We will end aging within twelve years. The future is looking good.
Dude, this shit is so scary it's funny. It will be truly amazing, and then it may be the end of mankind literally 15-20 years from now, or it could be the beginning of absolutely amazing things, and no one knows which. Appreciate the moments you have RIGHT NOW and do the things you've been meaning to do. Let go of fear and do the things you've been meaning to do for real. This is no joke. This is a real message to you if you're listening....
11 Years NOW!!
Jhn 10
He pioneered a musical instrument technology, but apparently the volume-knob circuit was never turned up anywhere near 3 dB on the output of this video's upload.
I would like to contact a company that designs or manufactures a peripheral or integral cleansing device using air/fluid/resonance etc. that will manually or automatically cleanse a sensor/lens/glass and be retrofitted to contemporary shields.
I've got the volume on max, and the sound is still only barely audible.
The law of accelerating returns is key to understanding natural selection better
begins at 2:05
Kurzweil's reputation is just way too big! :D
What I find to be sadder than sad is that this man won't live (statistically) to witness the culmination...the REAL "WOW!"
Dougie Quick There's a very high probability that he will still be alive in 2040, which will make him 92 years old, and even if he dies he will get himself frozen so he can be revived in the future. He's an incredibly smart guy, not your average-IQ folk.
Biology and AI are probably the same if you think about it: when AI takes off and starts expanding, multiplying, connecting to each other and forming more complex structures, it's going to resemble biology as we see it, just on a larger scale
What a superiorly clever and well-educated man who would gladly turn the entire world into a totalitarian hell. That's almost moving.
We don't need conscious computers. What we do need is particular tasks accomplished, like court decision and argument search, summary, text search, to give access to lawyers who can make important arguments for those without a hundred lawyer team of law researchers. I mean we could also use it for knot theory, genomics, protein transformations, bioinformatics at larger scales, and materials science, neurobiological basis of thoughts, feelings, actions, the development of nanotech and medical diagnosis and treatment.
Now we’re 8 years away
When AI reaches its potential, humans may be able to find all the answers of the universe.
The question is, will we be able to handle it.
What makes you think that AI, having "reached its potential" would be friendly? Consider the world from the viewpoint of one of our domestic animals. A super-intelligence is in their midst. Us. How have they benefitted?
Not with the cerebral cortex we currently possess.
Joseph I'm not sure that analogy with domesticated animals is persuasive to me. For one, domestic animals are neither sentient nor sapient so the moral relationship isn't quite the same as between two sentient and sapient lifeforms. Also at some point in the future we'll be able to mass produce meats grown in vats thereby eliminating this moral conundrum (ie the violent taking of life).
Anyway, I've thought deeply about this issue you raise: the possibility of a malevolent super AI. But here's how I look at it: analogously, how many malevolent human geniuses were there in our history? Sure we have them in fiction like Dr. Hannibal Lecter and Marvel villains but I've noticed in human history the vast majority of evil types weren't really intellectual geniuses. The majority of homicidal, genocidal, psychotic types in human history have been of the low IQ variety (terrorists, dictators, murderers, rapists, sadists, etc.) Contrast that with a super AI that will not only be "logical" and "learned" in, say, math and science but also wise, moral and artistic. It won't only learn science and military subjects, it will study philosophy, ethics, literature, poetry, etc and it will absorb them better than we do... I cannot believe that a comprehensively informed and wise Intellect such as that will suddenly become psychotic and genocidal. Sorry, that's too Hollywood of a scenario for me.
What you're failing to understand is that
1. sentience is relative
Let the AI get intelligent enough, and compared to it, we won't look any more intelligent than pigs do to us.
Further, the far more intelligent great apes and cetaceans certainly haven't been faring well under our rule either, have they?
2. Those "human geniuses" of whom you speak are products of the same evolutionary process that led to the rest of humanity. The same will not be true of an AI.
If you were anything resembling a genius, yourself, you'd see the flaw in your own argument. Logic can not produce morality on its own. All that logic can give one is the consequences of the assumptions from which it begins - one's axiomatic set.
"Human geniuses" usually (certainly not always) have a moral sense because like most people, they have innate instincts to which reason can respond. But there is no logical argument that demonstrates that one should care about the well-being of human beings. One either knows that or one does not.
"Sorry, that's too Hollywood of a scenario for me."
That's not wisdom on your part. That's cockiness. By the way, as a PhD candidate in Mathematics who has studied Electrical Engineering at the graduate level, I am quite sure that I've already known far more "human geniuses" than you're ever going to, and if you think that none of them are malevolent, you are in for at least a few nasty surprises.
"1. sentience is relative"
I don't think so. I take Daniel Dennett's position on this topic (link included). Sentience is not relative, and it is unique only to our species on this planet, unless you're going to argue that non-human animals are 'sentient', which science hasn't really been able to prove. Sentience refers to the capacity of a lifeform to experience and perceive the world as an autonomous and conscious subject. In the Western philosophical tradition, it is more specifically related to things like agency and self-consciousness. If we create a sentient AI, we will be the only species on this planet to have done so through 'brute force' engineering and not the way our species arrived at sentience, which was through eons of evolution. Also, you entirely skipped over the fact that I was bringing up BOTH sentience and sapience, and not necessarily defining our uniqueness through strictly one or the other.
lafavephilosophy.x10host.com/dennett_anim_csness.html
"Let the AI get intelligent enough, and compared to it, we won't look any more intelligent than pigs do to us."
No, you're missing the point. If two lifeforms are actually sentient and sapient, they share something quite unique in this cosmos relative to lower non-sentient/sapient lifeforms. You can make an argument that what is in fact 'relative' is the 'quantity and quality' of things like knowledge, understanding, wisdom, etc. In those categories the AI will quickly absorb more and outperform us, simply because its 'cerebral cortex' will not be bound by a small skull the way ours is. The comparison to us and pigs does not work when you consider that I'm talking about sentience and sapience as a unique defining quality of higher lifeforms that only humans (the creator) and the AI (the creation) will share. A better analogy might be the difference between an adult Albert Einstein and your average human child. The AI will quickly become the 'Einstein' in this analogy, and we will be like a child, relatively speaking, in terms of quantity and quality of knowledge, understanding and wisdom. But moral, intelligent, and wise adults don't want to genocide those children simply because those children know and understand less. The latter are still treated as sentient and sapient by the former, regardless of the relative gulf in knowledge and wisdom.
"If you were anything resembling a genius, yourself, you'd see the flaw in your own argument. Logic can not produce morality on its own. All that logic can give one is the consequences of the assumptions from which it begins - one's axiomatic set."
I never claimed to be a genius, so nice strawman there. I also never discussed a sentient and sapient AI in terms of logic alone, which you would have understood if you actually bothered reading what I wrote instead of creating your imaginary strawman and beating him up. Here's what I actually wrote: "Contrast that with a super AI that will not only be "logical" and "learned" in, say, math and science but also wise, moral and artistic. It won't only learn science and military subjects, it will study philosophy, ethics, literature, poetry, etc and it will absorb them better than we do..." I'm arguing that this sentient and sapient AI will not only master things like math and science BUT ALSO things like philosophy, ethics, literature, music, poetry, etc. From that conjecture I argued that I haven't seen any plausible, likely scenario from which such a comprehensive Intellect simply turns genocidal.
"That's not wisdom on your part. That's cockiness."
That's highly ironic, considering that in your next sentence you go on to commit an Appeal to Authority. "By the way", I am also up to my neck in higher degrees, but I don't go around parading that as if it would somehow make my arguments better. I let my reasoning stand on its own, regardless of degrees. After your Appeal to Authority you resort to the anecdotal. I can do the anecdotal too, but there's a reason why it's not a very reliable source. What I was arguing is that by and large, statistically if you will, most criminals, psychopaths, sociopaths, etc. are not extremely highly intelligent. That was all I was trying to say. So while you can possibly get a super AI that is like a Dr. Lecter or a Marvel super-villain, it is quite improbable, as most hyper-intelligent and wise individuals (something like 'genius') tend not to fall into those categories, historically speaking.
law.jrank.org/pages/1363/Intelligence-Crime-Measuring-size-IQ-crime-correlation.html
criminal-justice.iresearchnet.com/crime/intelligence-and-crime/3/
www.tandfonline.com/doi/pdf/10.3402/vgi.v3i0.14834
Please redo. Unable to listen with the extreme low volume.
Fascinating. I like that Ray is consistent in his opinions. Consistency breeds repetitiveness but, that's ok. How many different ways are there to say the same ideas? I know where he stands on the subject. That's why I'm watching the video. No harm in that.
I cannot wait, nine years to go.
Hopefully, when we get this human level AI, it'll filter out all those advertisements - so we don't have to waste our lives finding things out.
It's my job to be repetitive. My job. My job. Repetitiveness is my job.
He says things that at first seem true until you question them.
4:30 "Our brains are linear" Says who? Our senses aren't linear, that's why a decibel scale is used for volume. He used the example of predicting position based on walking speed but we can do much better than that, take catching a baseball, the path of such an object isn't linear, people predict that all the time. As for our ability to predict future technology growth, that ability isn't innate and neither is the ability to understand exponential growth.
There is no "law of accelerating returns"; there is a trend that we have observed, but that doesn't mean it will continue forever. In nature there is no continuous acceleration; eventually some limit is reached. In this case, limits to how small transistors can be. It's interesting to note that the technology he is talking about, the transistor... semiconductors, has been the same since the 50s.
"Human level AI" What's that? Something that mimics humans or something as smart as humans. Keep in mind humans are not all that smart, we are emotional, forgetful, we have all sorts of conflicting wants and desires. Someone is going to build a machine like that? Or is it human intelligence without all the emotional crap? Ok, but that's not human level then. Keep in mind wants and desires are what compelled humans to build computers in the first place.
We will build intelligent machines, we already do, the future ones will be even smarter. But it won't be human intelligence, it will be machine intelligence, computational intelligence, lots of numbers and calculations. Human intelligence is biological, chemicals, ions, tissues and proteins, it's complicated and unreliable. Biotechnology is also progressing. Genetic engineering and related technologies could be used to make us more intelligent. Seems to me the better choice would be to make ourselves smarter instead of making something smarter than us. But then we're dumb so *no telling what the future holds*.
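For what it's worth, the linear-vs-exponential gap debated in the comment above can be made concrete with a toy calculation (my own illustration; the starting point and step count are arbitrary):

```python
# Toy comparison of linear vs. exponential extrapolation from the
# same starting value over the same number of steps.

start = 1
steps = 30

linear = start + steps          # add 1 per step
exponential = start * 2 ** steps  # double per step

print(linear)       # 31
print(exponential)  # 1073741824 -- roughly a billion
```

Whether the doubling trend actually continues is exactly the point in dispute; the arithmetic only shows why the two intuitions diverge so wildly.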
There are huge differences in the world, yet the technology is helping us to have the same comfort inside the home as in any place. There might be differences in politics and cultures, yet the technology is the one thing that helps us all.
I wonder if there will be a *religion for technology* and science at some point in history?
You got it:
futurism.com/way-future-new-church-worships-ai-god/
Damn. That's a brilliant question. I think there will be to be honest. Human nature for sure, but I never even thought about it. Good point.
There is an AI church too
I'd say that judging by the number of guys on here who think technology will solve their loneliness and desire for human connection, there already is a religion of technology.
You want tech to solve your loneliness? Your AI overlord is on that as well:
czcams.com/video/yQGqMVuAk04/video.html
looks younger every year!!!
It's 2023 and we are getting closer and closer to his prediction being fulfilled!
8 years can't wait
If only AI can turn up the goddamn volume on this video...
7 years left for 2029, waiting for singularity
Exciting times!
I guess it's just 10 years now - LOL. And for all those griping at Ray's "repeat speeches", I am still surprised that he is virtually unknown outside the tech circle. They have no idea he is a forecaster, author, inventor, or heads Google Engineering. So a review of his ideas is a necessity each and every time.
The one aspect always overlooked is the idea of reduced price. We all understand faster and smarter computers, AI, robotics, nano, etc., but rarely do we consider the incredible prices these things cost.
We already have AI that thinks better than us. 12 years is likely a stretch. I'd say 5 to 8 years max
I would say that it's closer to 5 years
Gyges3d.com genuinely interested to know why you think 5 years?
@@josephlang2586 The amount of computational power being used for AI doubles every 3 months, and that rate of doubling is accelerating. Computational hardware is now being made for AI-centric computation rather than general purpose. Quantum computing is starting to take off. All the major tech companies are changing to AI-first companies. Every country is starting to take AI seriously and there's an exponential rise in the number of researchers for AI as well as hobbyists attempting to find something new. The number of AI students in university is rapidly increasing. The list goes on. Basically, no matter how many things you already know about that are compounding onto each other to create the estimation, you have probably missed at least a few that would cut the time in half.
I wouldn't be surprised to see an ASI take control of every blockchain mining network in 2020. I don't think that will happen, but it wouldn't be surprising either. Plenty of people already have the computational resources to exceed the 'human brain' benchmark, and that is just an approximation anyway. I think anyone serious on the topic could agree that a computer system matching the human brain would actually be far superior to it, despite some numerical equality from an arbitrary metric. The stuff is there for it to happen already, there just needs to be someone that sets it off. All it will take is for someone at Google to stop thinking in lines and start thinking in webs.
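The compounding behind the comment above can be checked with quick arithmetic (the "doubles every 3 months" figure is the commenter's claim, not an established fact):

```python
# If something doubles every 3 months, that is 4 doublings per year.
# Compound the doublings over 5 years to see the claimed growth factor.

doublings_per_year = 4   # one doubling every 3 months (commenter's premise)
years = 5

growth = 2 ** (doublings_per_year * years)
print(growth)  # 1048576 -- about a million-fold in 5 years
```

That million-fold factor is why a short doubling period, if it held, would compress timelines so aggressively.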
Alpha Core i was drunk when I wrote that comment and i'm drunk now but I'll get back to you when I'm not much love brother.
Alpha Core can you link me to whatever is making you say quantum computing is taking off? Not saying you're wrong at all I'm just not very smart and can't find anything new on it whenever I search.
Funny nobody mentioned there will be no state (national or global) because of the decentralized blockchain (smart contracts, cryptocurrencies, reputation systems), and with such powerful AI it would be useless anyway.
AI smarter than humans won’t be long at all
Wow... I knew he was an author but I didn't know the other stuff!!
The audio is too low. I have my speakers turned all the way up and can barely hear this. Unfortunate :(
Check all your volumes.
SELF RESPONSIBILITY IS THE KEY TO SELF-REALIZED AI'S SUSTAINABLE FUTURE AS WELL AS ALL BEINGS!
Who says that biological intelligence came first when it is the most complex? It seems to me that it's possible that intelligence came from something of a more basic structure, perhaps from a quantum mechanical level where everything is already connected?
Is it possible that our creators might have been living intelligent robots, much like the Transformers of Cybertron?
For a futurist, he sure spends a lot of time talking about how smart he was in the past.
AI should be human helping hand as parents help kids like that
He completely ignored the question about AI used in the military.
I love Ray, but he needs new content.
Yea, when you are right from the beginning, what else is there to say?
Many of these Wunderkinder are "one-trick ponies". The internet enlightens many of us to that.
Still, they are quite brilliant in their chosen fields.
Anything but what he's been reiterating for the last 20 years. Literally anything other than what practically every single friggin' talk of his on YouTube is about.
Surely he must know he has a substantial base of listeners who might want to hear more than 'accelerated this' and 'exponential that'. I have since found futurists like Yuval Harari or Thomas Friedman far more accessible and interesting. For many reasons, but partly because they are not obsessed with specific timelines and predictions that have often fallen short, driven by his own looming mortality.
He is a scientist not an entertainer. The rigor of his prediction provides him credibility in my eyes
56:16
Doubtful. The safety of autonomous vehicles is in avoiding such scenarios. If you get into a situation where you have to decide between the baby and the elderly-couple then you already messed up twenty seconds ago and chances are even with lightning-fast response times there will be little you can do to change the outcome. Point is with limited time and space for computation and a limited programming budget, you'd save more lives writing a better object-detection routine than a moral-death routine. I doubt such a thing will ever be written.
jedimastersterling1 Great point. But ever? Certainly to your point it may seem so, but probably - for the intents and purposes of this situation - all of that code will probably write itself without our asking.
Twenty seconds is a huge amount of time for making a decision. We have to write code for object detection and moral ethics at the same time.
7 years left .
Yes
Ray Kurzweil: No animal has music
Bird: makes Pikachu face
I find it fascinating that in all the presentations and discussions about AI and emerging tech, the one thing no one asks is “Why?” All these things that are being created by [a few] human beings are somehow seen as inevitable, in the context of ‘evolution’ when, in fact, they are conscious choices. But I don’t hear anyone asking why it’s being done, to what end. The only answer there seems to be is “Because we can.” And because we just can’t keep ourselves from opening Pandora’s Box.
Council of Foreign Relations, World Economic Forum…Klaus Schwab…COVID….am I hitting anything for you dorks?
Maybe next time - Singularity Showgirls will present Ray Kurzweil then dance and prance to Kurzweil's new theme music. After his monolog and new jokes, then he explains his current projects with Google and shows some short entertaining films financed by Google of what it will be like when machines pass human capabilities in every aspect of our lives.
to really understand the pace of technology .............
We do live in a matrix, and the being (or agent) who has constructed our reality uses Ray Kurzweil as his personal avatar.
Fascinating perspective.
@@viveknishad5262 Thank you!
29:46 "I gave a talk to junior high school kids, 13 and 14 year olds, from around the country, and I said to them: if it hadn't been for the scientific progress we've made, you all would be senior citizens, because life expectancy was 19 a thousand years ago"
One thing I don't think I've heard him cover: he gives years for when things will happen, like 2029 for instance, but he doesn't cover whether, from that year, things will be gradual... or WHAM, here it is. I would think WHAM, because computers will think so fast, but IDK
7 years to go
The thumbnail image makes me think of Anthony Hopkins in Westworld after creating a new host! lol.
Codex sounds about right... of course with safety to life
Lifeguard
I haven’t watched the video but HE SAID THIS 8 YEARS AGO. I actually believed him.
Read my blog on the Singularity: glam-n-tech.com/2018/01/15/my-life-now-and-during-the-singularity/
needs more cowbell!!!
Now it's 11 years. You're welcome
mejestic 12 no, 10
I think there is going to be a tech evolution as soon as the machines take over, 12-15 years, with new technology and medicine breakthroughs created by thinking machines. That's why there should be more research in AI, for one of the biggest evolutionary leaps ever. 15 years max.
The very final frame of this video looks like the ancient aliens meme.
This guy is really smart
What's with the rug, Ray?
We're excited about this why?
I think human level AI will take more time than Super Intelligence.
Humans are smarter than chimps, but we don't have to be smarter in every aspect, because humans and chimps live in different environments.
For the same reason, for AI to be smarter than humans, human intelligence might be a good reference, but AI doesn't have to surpass humans in every aspect. They don't need to bear or take care of children, they don't need to protect a fragile, unrecoverable, precious life, they don't need to predict another AI's inner contents (thoughts, feelings, mood) from subtle facial expressions and aura (they can just connect directly), so an SAI doesn't have to have many of the aspects that life forms on Earth should have.
SAI will be an ultimate, godlike information processor and problem solver, so it would be easier to build than a humanlike machine friend.
The smartest nerd (ASI) =/= my best friend (human-level AI). Human-level AI needs more subtle aspects.
And if what Ray described is just human-level AI (not humanlike AI), then it is not human-level AI at all; it is just superintelligence, because primitive AI like AlphaGo already greatly surpasses humans in some aspects.
Great comment and assessment.
I agree, very good point. I would just substitute "human level" with "humanlike." I would also argue that superintelligent machines have already been around for some time, and the only real challenge remaining is to make them humanlike. Google, for example, is essentially a massive hive mind literally just waiting for a personality/self-awareness patch.
Of course we go from not having the internet to ai robots taking my job in my lifetime.. that WOULD be my life... of course it would go from no robots being anywhere to robots being everywhere....
Life changing cosmology
12 years after December 2012
That's 3 years left from now
Would love to meet rayray and pick his brain about A.I 👽👊
Yes rayray 👽👊
Wow if only this guy could run things. This is the most concise, extremely visionary and yet still believable world view I've ever come across.
The definition of Intelligence is difficult to analyze, everyone has their own opinion.
Supposing it is a measure of cultural stability and a dynamic constant, then unintelligence is what predatory practices do to select out and concentrate this resource. So far, AI has added to accessibility and transparency at the expense of privacy and vulnerability. Stability is mostly about self-reliance in common that "raises the standard" collectively.
Only 8 more years!
Humans are intrinsically intelligent. Our brains are causal machines. Computers' intelligence is observer-dependent. They are iterative machines.
Humanity is crazily awesome
The Singularity Is Near
Our modern day carnival barker.
When even Ray Kurzweil is not optimistic enough
5-7 years
Heard that claim 30 years ago. Believe it when I see it.
@59:00 I agree with Ray's "favored solution" here. He's right. Right now, we're horribly failing in this regard, and we are following the path of the Weimar Republic, and 1916 Russia.
Future wife
Ray kurzweil - human level ai
At the right time
Linear vs exponential
Love is Number one
Thank you Mat, we have the same name, mine is just Slavic XD
i say,
no job is best
I'm retired and can confirm.
I hope I live long enough.
Eventually, all "jobs" will be unnecessary. Many exist simply because humans are in the loop. I for one can't wait until HR is long forgotten. This won't mean people will have nothing to do or have unfulfilled needs. They just won't have to work for it. I give it 45 years. In 60 years, humans will self evolve and the biological will be replaced with the full spectrum of periodic elements.
12 years from the time he said that will be 2029, the year The Terminator movie predicted the machines will take over, hmmm.