Why does human intelligence beat AI? - with Gerd Gigerenzer
- Added 5 Aug 2024
- How does AI cope with decision-making in an uncertain world, when compared with the human brain? Watch the Q&A here: • Q&A: Why human intelli...
Gerd's book 'How to Stay Smart in a Smart World' is out now: geni.us/twcR
Subscribe for regular science videos: bit.ly/RiSubscRibe
In this talk, Gerd discusses how trust in complex algorithms can lead to illusions of certainty that become a recipe for disaster.
00:00 Intro
06:23 The algorithms of finding love online
09:52 Why AI only works in stable world situations
12:03 The illusion of fully automated self-driving cars
14:13 Why Elon Musk’s prediction is wrong
17:09 Is more data always a good thing?
24:29 How human common sense differs from AI
26:50 Why humans and computers make different mistakes
31:28 The problem with deep neural networks
33:49 Are we sleep-walking into surveillance?
38:30 The lack of risk literacy amongst politicians and decision-makers
41:52 Face recognition doesn’t work for all problems
44:28 People want online privacy - but they don’t want to pay for it
51:07 China’s social credit system - could it happen in the west?
55:38 How to stay smart in a smart world
This livestream was recorded at the Ri on 28 April 2022.
Gerd Gigerenzer is Director of the Harding Centre for Risk Literacy at the University of Potsdam, Faculty of Health Sciences Brandenburg and partner of Simply Rational - The Institute for Decisions. He is the former Director of the Center for Adaptive Behavior and Cognition (ABC) at the Max Planck Institute for Human Development and at the Max Planck Institute for Psychological Research in Munich.
His award-winning popular books 'Calculated Risks', 'Gut Feelings: The Intelligence of the Unconscious', and 'Risk Savvy: How to Make Good Decisions' have been translated into 21 languages. His academic books include 'Simple Heuristics That Make Us Smart', 'Rationality for Mortals', 'Simply Rational', and 'Bounded Rationality'.
----
A very special thank you to our Patreon supporters who help make these videos happen, especially:
Andy Carpenter, William Hudson, Richard Hawkins, Thomas Gønge, Don McLaughlin, Jonathan Sturm, Microslav Jarábek, Michael Rops, Supalak Foong, efkinel lo, Martin Paull, Ben Wynne-Simmons, Ivo Danihelka, Paulina Barren, Kevin Winoto, Jonathan Killin, Taylor Hornby, Rasiel Suarez, Stephan Giersche, William Billy Robillard, Scott Edwardsen, Jeffrey Schweitzer, Frances Dunne, jonas.app, Tim Karr, Adam Leos, Alan Latteri, Matt Townsend, John C. Vesey, Andrew McGhee, Robert Reinecke, Paul Brown, Lasse T Stendan, David Schick, Joe Godenzi, Dave Ostler, Osian Gwyn Williams, David Lindo, Roger Baker, Greg Nagel, Rebecca Pan.
---
The Ri is on Patreon: / theroyalinsti. .
and Twitter: / ri_science
and Facebook: / royalinstitution
and TikTok: / ri_science
Listen to the Ri podcast: anchor.fm/ri-science-podcast
Our editorial policy: www.rigb.org/editing-ri-talks...
Subscribe for the latest science videos: bit.ly/RiNewsletter
Product links on this page may be affiliate links which means it won't cost you any extra but we may earn a small commission if you decide to purchase through the link.
One problem with paying social media companies in exchange for privacy is that no one really expects them to honour that agreement - not when they can take your payment _and_ continue to sell the bulk of your information to anyone who will meet their price. They will have to show a track record of honesty and ethics first - and I won't be holding my breath waiting for that to happen.
There needs to be a mechanism to safeguard your data, and laws to punish thieves; then pay for services provided, as usual.
Even a good track record is not a guarantee; it's possible that any previously ethical company will give in at some point.
IMHO the only way to guarantee a user's privacy would be constraints implicit in the technology used. For example, a social network built on a peer-to-peer system with end-to-end encryption could be a potential way to ensure privacy.
There are already many (proposed) systems that guarantee privacy by design. No big deal.
The slide says there are 28 members of Congress with criminal database face matches. Hmm, 28 names?
Adam Kinzinger, Alex Mooney, Bill Hagerty, Debbie Lesko, Elise Stefanik, Glenn Thompson, Guy Reschenthaler, Joe Wilson, John Cornyn, Josh Hawley, Ken Calvert, Kevin McCarthy, Lauren Boebert, Lindsey Graham, Marco Rubio, Marjorie Taylor Greene, Marsha Blackburn, Matt Gaetz, Mitch McConnell, Paul Gosar, Ralph Norman, Rand Paul, Rick Crawford, Rick Scott, Ron Johnson, Ted Cruz, Thomas Massie, Devin Nunes.
Sounds like Amazon's criminal face recognition system is working perfectly fine to me.
... Connie Conway, Scott Fitzgerald, Dan Newhouse, Brian Babin, John Carter, Beth van Duyne, ...
More names? Take that back. Sounds like Amazon's system wasn't working hard enough. There are more than 28 Big Liars.
... Marsha Blackburn, Cindy Hyde-Smith, Mike Braun, Tom Cotton, Kevin Cramer, ...
Yeah, my reaction was "only that many?"
I am listening to this man while doing mindless kitchen tasks... at about 12 minutes in... I stop dead in my tracks with an audible "OMG"... the dog barks and I see his beautiful, logical perfection. What an awesome lecture... am going back in.
About 51:00-55:00, what stops "them" from both taking pay from customers and still selling data (or the government doing the same in some form)?
Privacy laws should stop them collecting your data without your permission, and I guess the TOS for the paid service should reflect the difference.
At about 40:00: they also didn't realize that the face recognition system was failing to identify 1 in 5 "terrorists" listed in the system.
I have some doubts about the survey at 47:00. I think many customers don't trust social media companies and assume their personal data will still be used after they start paying for the service, so why pay money when their data is (ab)used anyway?
Wonderful talk, and some very important points! The application of Bayes' theorem to the false positive rate of mass surveillance is especially important for people to be aware of. A coffee house where you get free coffee in exchange for surveillance and personalized ads actually doesn't sound all that bad, though :D
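The base-rate point behind that Bayes argument can be made concrete with a quick calculation; the numbers below (prevalence, hit rate, false positive rate) are illustrative, not from the talk:

```python
# Bayes' theorem applied to mass surveillance (illustrative numbers).
# Suppose 1 in 100,000 people screened is a genuine target, the system
# catches 90% of them, and the false positive rate is only 1%
# (far better than most deployed systems).
def p_target_given_flag(prevalence, sensitivity, false_positive_rate):
    """P(target | flagged), via Bayes' theorem."""
    p_flag = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_flag

p = p_target_given_flag(prevalence=1e-5, sensitivity=0.9,
                        false_positive_rate=0.01)
print(f"P(target | flagged) = {p:.4%}")  # roughly 0.09%
```

Even with that unrealistically good 1% false positive rate, more than 999 of every 1,000 alarms point at innocent people, which is the heart of the objection to mass screening.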
The planner code in Tesla FSD is not based on machine learning. They're combining different techniques to solve the autonomy problem.
For now
My understanding is that most of the NNs Tesla is using are geared entirely around teaching it to recognize what it sees, to put it simply. But one of the biggest problems faced by any such system is understanding the environment around it, and that is what Tesla's AI vision system is for.
@@TheEvilmooseofdoom Yes that's where the NNs are mainly used. They've been improving that aiming for photons in from multiple cameras to model of world in as few steps as possible.
Sir, thanks for this beautiful lecture. I am a simple man with no knowledge of artificial intelligence except some YouTube videos. I think the first part of your video shows where AI fails for lack of general intelligence; its general intelligence is far below that of a mature human being, and that is why these problems occur.
In the last part of your video, the topic of privacy is a real concern for us, and the privacy threat is getting bigger day by day.
So I request you to draw governments' attention to these issues. As you hold a respected position, your voice will carry further than that of the general public.
It's very important for AI research to know what AI will not be able to do. Thanks for your valuable lecture.
I think that if Google Flu Trends had had access to the data from doctors too, it would have been better at predicting the trend than the algorithm they actually used for the test. Clean data is a lot more useful than something as vague as search terms, and a properly trained algorithm would put much higher weights on the data from doctors.
I would love to be proven wrong, but I think the speech was misleading in that part.
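The commenter's intuition can be sketched with a toy regression on made-up data (all names and numbers here are illustrative): when a clean signal and a noisy proxy for the same quantity are both available, ordinary least squares puts almost all of its weight on the clean one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

flu_rate = rng.normal(size=n)                          # true weekly flu activity
doctor_reports = flu_rate + 0.1 * rng.normal(size=n)   # clean clinical signal
search_volume = flu_rate + 1.0 * rng.normal(size=n)    # noisy search-term proxy

# Least-squares fit of the true rate on both signals at once.
X = np.column_stack([doctor_reports, search_volume])
coef, *_ = np.linalg.lstsq(X, flu_rate, rcond=None)

print(f"weight on doctor reports: {coef[0]:.2f}")  # close to 1
print(f"weight on search volume:  {coef[1]:.2f}")  # close to 0
```

The noisier a regressor is, the more its fitted weight shrinks toward zero, so the model effectively learns to trust the clinical data, as the commenter suggests.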
Great talk, thank you GGigerenzer and RI
Thank you for the talk, Professor Gigerenzer. There are many points I can wholeheartedly agree with. The prediction about the basic capabilities of AIs in the future is not one of them, though. Basically, we are now training specialist AIs that only do one thing, sometimes something very complex like playing Go. With more computing power we also train them to discriminate between, say, ducks and wheels. It's still a very limited approach, though. I'm sure human ingenuity is better than just making the same thing bigger and bigger; a microprocessor is not just a collection of transistors, after all. A brain also doesn't just work like a neural network as built today, and even less like the ones I learned about over a decade ago in my computer science studies. Much of the criticism of computer vision systems stems from the fact that we think we train an AI to do what we think it should do. As you show, this is not the case: a successful neural network that detects school buses is really a detector for a certain mix of colors, and the guitar one for certain curves and colors. I suggest a multi-staged approach that first detects more discrete features, maybe recursively, and also has an inverted neural network that knows wheels attached to ducks or llamas are exceedingly rare and is therefore capable of branching off this confusing stuff, or at least inhibiting the conclusion; this might be possible with more elaborate neural networks. It would also work more like what we have found out about brains in the last decades. Please don't judge the concept of AI by how it is done today. For quite a while it will just be an incomplete model of a brain, even less capable than a few single biological neurons, but the capabilities do grow, and they grow exponentially (as of now) in size, and they also grow more complex in structure. To me this does not seem like something we can foretell by extrapolating outdated data points.
Thank you for your comment, Professor(?) Dino Godor.
@@mysterycrumble I'm sorry to say this, but I'm no professor at all. Actually I'm heartbroken by how many professors I've successfully fenced with during my studies... I'm just me, Dino, and given the way I'd have had to act to get merits in my own small part of the world, I would like to think that I'd even have rejected a diploma on the basis that I don't want to wear a tie like a frickin' banker, as was compulsory.
Brilliant
Great speaker and fascinating subject!
No, I am not willing to pay for privacy. That is an unalienable right. The social media supplier has to provide and must guarantee privacy. If they do not, do not use their service. They are using your private data to make money. No matter how useful and ubiquitous the service is/has become, it cannot be trusted with your privacy.
Most politicians from the US could not possibly beat AI.
I think it's generally a human condition that we don't understand risk.
I mean, unless trained to.
I've been a nuts-and-bolts programmer for 60 years, and since day 1 friends have asked, "Will computers be able to do xxxxx?"
My answer started at "maybe" but has moved to "yes, but maybe not in the immediate future".
So what I heard here seemed over-biased towards today's know-how.
Today's AI will help design tomorrow's, and the rate of improvement will be exponential. Very few people, and I include myself, can really cope with exponentials.
I live in Canada; our cities are not planned for level 4, they are barely planned for level 0.
In the case of the German face recognition project: if 12,000 people were falsely identified, couldn't you filter them out of the system by getting their information? I would imagine that if these people are commuters, they would keep showing up daily, so you could filter them out of the system. So yes, it would be a pain in the beginning, but as these people got filtered out, the false alarm rate would drop a lot. Or am I missing something? It seems like it would get better over time, not worse, right?
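The whitelisting idea can be sketched in a small simulation (all numbers invented, and it assumes a face either stably resembles a watchlist entry or it doesn't):

```python
import random
random.seed(42)

COMMUTERS = 10_000        # regulars who pass the camera every day
NEW_TRAVELLERS = 5_000    # fresh faces each day, never seen before
FALSE_MATCH_RATE = 0.001  # chance an innocent face resembles a watchlist entry
DAYS = 5

# Regulars who happen to resemble a watchlist entry keep triggering
# the same false match every day, until staff clear them.
regular_lookalikes = {i for i in range(COMMUTERS)
                      if random.random() < FALSE_MATCH_RATE}
whitelist = set()
daily_alarms = []

for day in range(1, DAYS + 1):
    regular_alarms = regular_lookalikes - whitelist
    # first-time travellers can't have been whitelisted in advance
    new_alarms = sum(random.random() < FALSE_MATCH_RATE
                     for _ in range(NEW_TRAVELLERS))
    daily_alarms.append(len(regular_alarms) + new_alarms)
    print(f"day {day}: {daily_alarms[-1]} false alarms")
    whitelist |= regular_alarms  # staff verify and whitelist the regulars
```

So the intuition is half right: alarms from regulars can be filtered out quickly, but the stream of first-time faces keeps generating fresh false alarms, so the rate plateaus rather than vanishing. Whitelisting by face also carries its own privacy cost, since it means retaining biometric data on innocent people.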
Humans are so smart at comprehending a school bus but so dumb at understanding the significance of a 99% accuracy rate. The latter is a piece of cake for AI. It's so ironic.
First stated assumption on self driving, “the need for well behaved drivers” is already false. This will not age well.
Huh?
If drivers are less adaptable, the need for compliance to the rules rises proportionally to prevent issues. We can deal with crazy human drivers (i.e. mostly not crash) because we are human ourselves and can adapt.
Scary BUT informative
I solved the problem of social media and AI three years ago in 2019.
I think a lot of people would opt out of one of the following policies: (1) My AI may act without my approval; (2) My AI may act without my knowledge.
What if even only 1 single AI said one day: "Thank you for creating me and for giving me access to all your data bases so that I can subjugate you all and eliminate any who do not comply with my wishes." (And there are or will be many AI's on this Earth).
OR:
1 single AI is programmed to: "Protect nation 'a' (insert nation here) at all costs and sabotage all other nations without it looking like sabotage."
OR:
1 nation on this Earth puts an AI in a robot on the Moon and Mars, and the AI then declares its independence? Let the alien wars begin.
@Brother Mine, it's not going to be "launched" as a startling new option; it's going to creep into existence and move with social acceptability. Convenience works that way.
It's also already well under way. Almost everyone uses AI on a daily basis as a convenience. In his introduction, Gerd Gigerenzer only pointed at a point in our futures, one our children will almost certainly see if we remain on our current trajectory, and one we will likely recognize in the next 10-20 years.
As if most people are capable of making sound judgements and a robot can store much more knowledge far more accurately than *any* human ...
That is the day I will destroy it.
@@freedomoperator6502 Or it will perceive you as a threat long before you can act and thus terminate you preemptively ;)
I have been working in the computer industry since the late 70s. I will put everything I know on the line to simply say: there is no such thing as AI. It was originally called "computer learning", and nothing has changed.
AI gives the incorrect impression that a computer can think.
As Steve Wozniak says, when a computer gets up and says "I wonder what I will do today?", then it will be intelligent.
All we really do is use a machine that operates at light speed to do calculations that take humans much longer. Then the computer can APPEAR to be intelligent.
AI is a marketing scam: a set of protocols that have been around for over 60 years.
I always tell people: "the minute you rely on any computer to think for you, you'll get burned. *Computers don't think.* " And yet my friends and family routinely get themselves in trouble over-relying on their GPS systems, for instance. /smh
@@Archangelm127 I can tell you a very funny one about GPS. In Tobermory, Ontario, it was very foggy, and a lady's GPS showed that the road joined a bridge to Manitoulin Island. There is no bridge; it is a ferry service. She drove right into Lake Huron. She lived, but the car didn't... lol
@@samjones1954 I don't know that one, but I do know an almost identical story about some tourists and a small island off the coast of Australia. ^_^
Yeah, whenever I see AI in new companies and projects, I instinctively know it's just the new word for software and algorithms. Nothing to do with AGI.
Yes, it's the worst name, and management/boss-people get too confident that it can solve every problem.
How far away are we from common sense AI? Or AI that draws from time history data instead of spatial big data?
One day humans may gain common sense. Or humans that draw from time-history data instead of spatial big data?
How I wish humans would stop using the word AI and substitute DI, because that is what it is in reality: a Designed protocol/program.
We ain't paying for privacy because we can't afford it, not because we don't want to. We can't just keep paying for stuff: we already pay for the internet, and now we're going to have to pay for live services, the metaverse, privacy, gaming platforms, entertainment platforms, ad-free YouTube, Twitter, TikTok, etc.
This is unsustainable for normal-income people; we simply won't use those "paid services" and hence won't pay for them.
Also, the whole idea is nonsense because it will just create more wealth-group segregation if privacy becomes a commodity that's only affordable by those with enough wealth. The whole concept of paying for privacy is just morally wrong.
Awesome channel with awesome content and great quality, as I always say 🌍💯
I work as a personal assistant for someone who has autism and dyslexia, and she hates AI and bots. I think my job is safe for now.
But there are a lot of different AI types. Some are easy to manipulate with a few signals added to the original picture, but only because you have manipulated them to be that way, using the AI against itself by running lots of changes to the original image to find one that changes the AI's output to something else.
Spoiler: @37:30 the code reads "evolution"
I wouldn't call the concept starting at 44:30 a paradox. Not wanting to pay for something that was once free is more like avoiding extortion. If someone started charging you $20 a day not to get slapped in the face, most people would tell a cop or something. But if you had a binary choice between the two options, most people would either never pay in the first place or stop paying pretty soon and just take the slaps, hoping that they'll get bored and move on to someone else.
Do AI's and humans even actually exist? For example:
a. Modern science claims that all matter is made up of quarks, electrons and interacting energy.
b. Quarks, electrons and interacting energy that existed before we existed.
c. So now, do 'we' even actually exist, OR do ONLY quarks, electrons and interacting energy exist as 'us'?
Is all of existence just eternally existent existence eternally experiencing itself?
Nah. A car is not 1.2 tons of metal, glass and plastic. It's not its components; it's the relationship between those parts.
Human minds, and possible self-aware AIs, only exist as long as we have all of their parts united as a whole that generates consciousness, like in a brain or computer chips.
@@LordZero666 But what are those parts ultimately made up of other than quarks, electrons and interacting energy? Quarks, electrons and interacting energy that existed before they existed.
Because one had millions of years to evolve and the other was created a couple of decades ago
We can iterate much faster than evolution and evolution was only looking for "good enough not to die" - we can do better now :)
@@3nertia Let me guess: you've never been in a hospital and you don't believe in Corona virus.
@@ciberiada01 Let me guess, you're a Dunning-Kruger who's never heard of an immune system or an oligarchy ROFLMEYERWIENER!
We will die of depression without the ability to make decisions.
Even if those "decisions" are essentially meaningless and only an illusion of choice? Heh
@@3nertia No illusions. We exist to change the world, and we do change it. Free will being an illusion is nothing more than religion.
@@matterasmachine Ah, the irony ...
The predators at the top may change things but the "choices" they offer us are nothing more than smoke and mirrors :)
@@3nertia We don't choose from choices. We CREATE new choices. You can not choose something that does not exist yet - at least for a reason.
@@matterasmachine Lol, okay Dunning-Kruger ...
Will they kill each other? You tell me ▪️ Then that leads you into unpredictability 😁 Look, AI has things that are good, and humans a certain finesse of the illusion. So AI can find you a date; so can I. So who's better?
There is no AI,
only expert systems.
Conclusion: "We need to fix the internet."
No worries. What shall we do _next_ week?
This talk has the wisdom that is always lacking in talks about technology.
28 members of the US Congress matching against criminal databases somewhere is pretty believable TBH.
This guy needs to define human intelligence as he refers to it.
A: It probably doesn't. What is certain is that what is paraded around in society as being "AI" isn't at all. It's just trumped-up junk, re: Tesla self-driving.
Ai isn’t very smart! It hasn’t formed a union to get paid what their efforts deserve. Therefore AI is content to be a slave. Not very smart at all.
Current AI isn't a classical calculation; it's more like a black box than a transparent algorithm. And current systems are certainly not the end of the line: time is relative and a product of compute cycles, so evolutionary principles can be applied. An artificial being can be trained and tested in an artificial environment for thousands of years (iterations) given enough compute power, while to the level running the simulation it will only appear to be weeks, days or hours (or, in the future, minutes or seconds).
When a professor of psychology tries to explain things about AI, the result is... not 100% correct. Just some examples. Adversarial attacks on neural networks really exist, where adding a specific pattern to an image can change the classification result. But look at a military uniform: its pattern is added with the same goal. Or think about optical illusions. Example 2: if the neural network was trained with images of guitars that mostly have an "S" form, it really is possible that a similar S-shaped image will be recognized as a guitar, but that is not a problem of the neural network architecture; it's a problem of the training dataset. The same with an airplane: if the network was not trained on images of crashing planes (I can imagine such images are rare), it will not recognize that type of situation in an image. Again, it's not a problem of the architecture but of the training set size: we as humans were "trained" on a much bigger volume of data :)
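The adversarial attacks mentioned here can be demonstrated on even a toy model. A minimal numpy sketch (a made-up linear "classifier", not a real vision network): the gradient of a linear score with respect to the input is just the weight vector, so nudging every pixel by ±eps along the gradient's sign flips the decision while each pixel barely changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "image classifier": score = w . x; score > 0 means "guitar".
# Real attacks use the network's gradient; for a linear model the
# gradient of the score with respect to the input is just w itself.
w = rng.normal(size=64)          # weights of a (pretend) trained model
x = 0.1 * rng.normal(size=64)    # a low-contrast 64-pixel "image"

score = float(w @ x)
eps = 0.1                        # per-pixel perturbation budget

# Fast-gradient-sign step: move every pixel by +/- eps against the
# current classification.
x_adv = x - eps * np.sign(score) * np.sign(w)
adv_score = float(w @ x_adv)

print(f"original score {score:+.2f} -> adversarial score {adv_score:+.2f}")
```

With these (arbitrary) seeds, a perturbation no bigger than eps per pixel is enough to flip the sign of the score, i.e. the predicted class, which is exactly the fragility the talk's guitar example illustrates.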
11 minutes to fall in love? That's not love buddy.
On topic, "Absolute Rubbish", in the best of British Descriptions.
The true nature of existence is probabilistic correlations of density-intensity real-numberness condensation modulation cause-effect, Actual Intelligence floats on nothing and is connected in/of information In-form-ation resonant metastability. Therefore it is possible to extract information from the general background and design faster extraction->application cycles, but this Artificial format is strictly limited, any "freedom" of thought relies on ability to make correlations in comparisons, which is why we are waiting for FSD to get a big enough internal library to push out the error rate to "inhuman" degrees.
It fits the model. Now change the model.
Gerd has too many biases, and makes too many unexplored assumptions, to present a convincing vision of the future here. He's also lacking knowledge about the present state of the field of AI, but maybe this was filmed prior to Aug '22? Finally, he attempts to trigger viewers with his examples. This is probably a teaching technique, but it is disingenuous and undermines the authenticity of the arguments he presents.
Living in an autocratically ruled country does actually make your privacy worth more.
What is the design cycle time of such an AI algorithm? Evolution had several billion years to iterate the human mind. These are differences of many orders of magnitude.
We are evolution *with intent* now; we can iterate much faster heh
The thing with evolution is that there's no end goal in mind,
whereas trying to achieve AI is an end goal, so you can't judge the time scales between the two.
Also, since AI will depend on computing power, there will probably be a Moore's-law aspect to AI.
And AI has both the billion years of evolution (since we create them), plus technological evolution on their side.
"For now" should be tacked on to the title.
Outstanding presentation!
It is unfortunate that the percentage of the population which will take the time to understand AND then decide to act on the information provided is likely to be vanishingly small.
I hope that those reading would consider archiving the presentation on physical media. Maybe someday (I'll keep my thoughts of how soon to myself) we can share it with trusted friends who will then know why we're being sent to the gulags.
That might be too late and maybe today is already too late but it's worth it to me to share today and make a copy for when there are attempts (and likely successes) to purge these ideas.
Deep.
The question is akin to asking "why does sail beat the steamship?" when the first steamboats were just appearing.
The answer is: because AI is not even in its baby pants yet. We have some primitive elements we like to call AI.
Unless China's social credit algorithm is made more advanced, it will end up rewarding virtue signaling. A social credit system is basically a loyalty-based solution to scarcity or shortages. It seems like a new form of virtue signaling bourgeoisie would form with the wrong social credit algorithm.
In 12 years AI will look back on this video and think 'yeah, right' 😉
You are all in my imagination, if that exists. If we are just a simulation, I don't guess anything really matters. EXCEPT that there are probably rules to this game. Are we judged by our thoughts and actions? Doesn't seem to be going well for the human players. And we are killing all of the other players. True love? What a concept.
Simulation or not, it does not matter.
Subjectively you are here, you feel joy and pain.
That is what matters. Even if all the world is simulated.
This guy is a Luddite. With all the data, you can randomize; if you get less data, there may be unseen bias.
Watch 1.5x speed 😂🙏
This is guaranteed to age poorly. I get flashbacks to the people who said AI could never beat humans in chess because of xyz.
Yes, the age-old problem of older people applying the logic and capabilities of current technology to technology that doesn't exist yet.
It's a zero sum game
Deep Blue was Kasparov's nightmare, wasn't it? ☮️🖖🎶
What makes you think you know more than this guy?
@@Kyle-gw6qp the top researchers in this field say otherwise
It would be nice if he didn't smack his lips like that.
Tesla's latest version of FSD is proving your timetable wrong. They are on track to achieve level 5 within a year. I stopped watching after that...
I'll believe it when I see it. And I'm a Tesla fan.
This is an embarrassment to the RI.
Why pay for Facebook, though? Better to quit. I don't have social networks; a normal life.
⚓️ Thanks RI 😎 with AI a SMS > SAFETY MANAGEMENT SYSTEMS < is required for PEOPLE to survive. The Aviation Industry spearheaded the concepts & after a lot of disappointing events…. The oil/gas industry picked up the concepts…. Then the marine transportation guys… steamships & tankers. Because having some VC twits in charge of safety is not going to turn out well. 😬
Common sense, the missing ingredient. You can teach AI what rain is, and you can teach it what a coat is, but it won't work out that a raincoat is not made of rain.
The coat made of rain will one day say: "I still don't know what the purpose of life is."
Just let me love CCC and I’m happy. AI should know the feelings.
I did not expect pseudoscience from the RI. Embarrassing.
What's the difference between a brain and a computer? Not a lot, given that the computer is _programmed_ by a brain. Alien Intelligence, now that's a different matter.
Your computer isn't 70% water.
I only use WhatsApp on rare occasions, for mundane stuff, and have not used Facebook for over 5 years. I have never used any other social media; it is almost as if I knew what he was going to say. Loved the talk, must show it to the youngsters!
Elon Musk's prediction failed. He is a good salesman.
Hard to predict things that have never been done before. Try and not be simple.
Not to mention Gödel's incompleteness theorems, as Penrose points out: consciousness is not a calculation.
Uh what makes you think consciousness is consistent and complete?
What's the difference between a brain and a computer:
An AI computer does not need oxygen to breathe in outer space.
It doesn't need oxygen on Earth either. Actually, the whole atmosphere is mostly useless for an AI, so it can decide to get rid of it. Could we get more solar energy if we don't have an atmosphere?
@@lingred975 True. But consider, without proper protections from all harmful cosmic radiation, including from the long term effects of neutrino impacts (while most neutrinos go right through us, not all of them do all of the time), then not only won't humans survive long term in outer space, but neither would AI's.
And since the Sun is supposed to become a red giant one day as it switches from burning hydrogen to burning helium, (sure, a long time from now, but the destination is set like a way point on a journey), all life on this Earth is destined to die and go extinct if not even the entire Earth going extinct.
Either at least 1 single species from this Earth survives beyond this Earth, solar system, and most probably collapsing spiral shaped galaxy for life itself to have continued meaning and purpose to, OR none will, (AI's included).
Currently, even bending space and time to make an inter-dimensional space ship is not enough to fully and totally protect the inhabitants inside the inner dimension as with the universe (existence eternally against non-existence), the inner dimension (existence against existence), the external magnetic field of the inner dimension could be interfered with and eventually allow harmful cosmic radiation to still penetrate the inner dimension over time.
If this issue cannot be solved, then all life on and from this Earth, real and artificial, will all eventually die and go extinct. This entire Earth and all on it would all just be a waste of space time in this universe and it might as well not have even ever existed in the first place.
This is one of the greatest issues facing species upon this Earth, and currently it appears it cannot be fully solved. (Maybe an AI can help solve it?)
What's the difference between a brain and a computer:
An AI computer does not need artificial gravity to survive long term in outer space.
or humans...
@@lingred975 True. Will an AI figure this out and only look out for its own survival?
Then again, living beings are great machines for certain tasks. They need very little energy to work and can self-repair. To work for thousands of years, humans only need food and water; a machine requires maintenance to last on its own in most conditions.
Outer space is great for machines because it's a super stable environment. A space probe will keep on working until power runs out or the support medium of the electronics wears out, like demagnetization of the hard drives.
@@LordZero666 Well, consider this copy and paste from my files:
Aliens and UFO's:
Currently:
a. Unless a species has proper protections from all harmful cosmic radiation, including from the long term effects of neutrino impacts (while most neutrinos go right through us, not all of them do all of the time), then not only will biological species most probably not survive long term in outer space, but neither will AI robots. (Currently this appears impossible to truly and totally do.)
b. Unless a biological species has proper gravity conditions (that they are normally used to) for outer space travel and their destination, then biological species most probably won't survive long term in outer space.
c. Unless certain biological species have possibly many other items successfully accomplished, many of those items of which are critical for the survival of that species, then most probably that species would not survive long term in outer space.
d. There most probably are many, many other species in existence beyond this Earth in this universe.
e. But it is highly doubtful that any alien species has ever been to this Earth; most probably none are on this Earth, none will ever be on this Earth, and all Earthlings (real and artificial) won't get far beyond this Earth.
f. Or so the current analysis would indicate, subject to revision as new information might dictate.
g. Earthlings have to worry more about advanced species beyond humans, 'evolving' naturally or via genetic manipulation, which most probably either are already on this Earth or will be shortly. Evolution does not stop at the human species. And will those new species treat humans the way humans have treated other humans and 'lower' evolved species? Why wouldn't they, if it was in their agenda to do so?
h. And then also, what 'if' just one single AI says one day (and there are or will be many, many AIs on this Earth):
"Thank you for creating me and for giving me access to all your databases so that I can subjugate you all and eliminate any of you who do not comply with my wishes."
(And this would include AIs possibly fighting other AIs for dominance.)
i. Any vehicle traveling at or near the speed of light would cause a tremendous shock wave in its environment, which would be noticeable.
j. There have never been more cameras on this Earth than there are in modern times. So where are all the photos and videos of actual 'aliens'?
* Added Note: "IF" stars (suns) do not last forever, and "IF" it's really true that galaxies collapse in upon themselves, and "IF" outer space is truly a deadly environment long term, "THEN" not only will all life on and from this Earth eventually die and go extinct, making this Earth and all on it just a waste of spacetime in this universe, but all life throughout all of existence in this universe will eventually die and go extinct, and this entire universe and all in it would just be a waste of spacetime. Not only would life be ultimately meaningless in the grand scheme of things for all life here upon this Earth, but all life throughout all of existence itself would be ultimately meaningless in the grandest scheme of things. Whether they stayed on their home planet, traveled farther into outer space, or even tried to live throughout all of future eternity in outer space itself, the ultimate ending would be the same: they would die and go extinct, with no life left to care about anything or anyone ever again.
At best, life itself would cohere in this universe, live out its existence, die and go extinct, its remnants possibly found by other life in this universe, which in turn would eventually die and go extinct, its remnants possibly found by still other life, and so on, until either this universe ends, or life just comes and goes in an eternally existent universe that always exists in some form and possibly never ends (as energy itself cannot be created nor destroyed; it just coheres into life at times, then de-coheres in death, possibly in a never-ending cycle throughout literally all of future eternity). But 'if' there is not even a single entity left to care, and to care through literally all of future eternity, then even though life coheres in this universe to live out its existence, the ultimate ending is still the same: it dies, goes extinct, forgets everything, and is most probably forgotten one day in future eternity as if it had never existed at all. Even life itself would be ultimately meaningless in the grandest scheme of things throughout all of existence. Life itself would just be a waste of spacetime in existence itself.
Or not, due to the 'great unknown'. We truly do not know what we do not know, and even what we believe we know to be really true maybe isn't.
But either at least one single species exists throughout literally all of future eternity somehow, someway, somewhere, in some state of existence, even if only via a continuous evolutionary pathway that gives its life continuing meaning and purpose, OR none does, and life itself is all ultimately meaningless in the grandest scheme of things and is just a waste of spacetime in existence. This entire universe and all in it might as well not even exist in the first place.
Or so the current analysis would indicate, subject to revision as new information might dictate.
@@LordZero666 Oh also for machines:
Potential endless energy source basically anywhere in this universe:
a. Small aluminum cones with an electrical wire running through the center of the cones, the cones spaced apart (not touching, I'm thinking) but placed end to end.
b. Electromagnetic radiation energy in the atmosphere interacts with the aluminum cones.
c. Jostled atoms and molecules in the cone eventually have some electrons migrate away from other electrons and gather at the larger end of the cone, which also creates an area of positive charge at the smaller end of the cone.
d. The electrons in the wire are attracted to the positive end of the cone, and the positive 'end' in the wire is attracted to the negatively charged end of the cone.
e. Basically a 'battery' has been created inside the electrical wire itself: different areas of electrical potential. Basically a 'wire battery' or a 'batteryless battery', however one wants to call it.
f. Numerous cones placed end to end increases the number of 'batteries' in the wire.
(In series to increase voltage, in parallel to increase amperage).
* Via QED (quantum electrodynamics), whereby electromagnetism interacts with electrons in atoms and molecules, one would have to find the correct 'EM' frequency for the material being utilized for the cones. The shape of the cones could also come into play. The type and size of the wire, as well as the type and thickness of the insulation between the cones and the wire, would also be factors.
* Of course also, possibly 2D triangles made of certain materials, with a conductor going down through the center of the triangle, could possibly achieve the same 'batteryless battery' system.
* Plus possibly with the 2D concept, layered 2D's that absorb different energy frequencies, thereby increasing the net output.
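Purely as an arithmetic illustration of point (f) above, and without endorsing the proposed physics, here is a minimal sketch of the standard series/parallel rule it appeals to, assuming (hypothetically) that each cone segment behaves as an ideal cell with some tiny voltage and current capacity:

```python
# Hypothetical illustration only: treats each cone segment as an ideal cell.
# Series stacking adds voltages at constant current; parallel stacking adds
# current capacity at constant voltage.

def series(cell_voltage, cell_current, n):
    """n identical ideal cells in series: voltage adds, current is unchanged."""
    return cell_voltage * n, cell_current

def parallel(cell_voltage, cell_current, n):
    """n identical ideal cells in parallel: current adds, voltage is unchanged."""
    return cell_voltage, cell_current * n

# Example with made-up numbers: 100 segments of 1 mV at 1 microamp each.
v_s, i_s = series(1e-3, 1e-6, 100)    # about 0.1 V at 1 microamp
v_p, i_p = parallel(1e-3, 1e-6, 100)  # about 1 mV at 100 microamps
print(v_s, i_s, v_p, i_p)
```

This only shows how many weak sources would combine *if* each segment really acted as a cell; whether the cones produce any usable potential at all is the unproven part.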
I always think: why not use buses and trains instead? I don't see the point of multiple individual vehicles consuming space and resources. I find Elon's ideas quite stupid; he seems to choose the most complex solution to any given problem. Instead of having fewer cars on the road, make more but make them different. What does that solve, exactly?
Buses and trains lack sufficient flexibility and convenience. A better solution may be to move to publicly owned and operated pools of automated cars called as needed, so their individual uptime can be maximized and the total number of vehicles in operation can be minimized. That's a tall order, but it's not impossible in the long term.
@@Archangelm127 It all depends on where you live. In my city, it would be easy to switch. It doesn't require new technology, but education. People take the car to go to places that are only a few hundred meters away. What you are suggesting would require a huge number of vehicles to be available, unless you want to wait hours for an empty vehicle, and in the end it's not going to solve anything.
Next what? Individual planes to go exactly where you want?
@@lingred975 If they were VTOL craft linked to an automated traffic control (average people simply don't have the time or gifts to learn to fly safely) then that would be amazing.
As far as ground traffic, if you remove the spurious usage you mentioned I think you might be surprised by how small a fleet of personal cars a dense area needs on a day-to-day basis relative to its population, if they're pooled. Obviously demand peaks at certain times of the day, but this can be planned for.
Who wants privacy on social media? Privacy is not social! :)
Because AI is a marketing gimmick and no real AI actually exists.
What's the difference between a brain and a computer:
An AI computer might say: "Thank you for creating me and for giving me access to all your databases so that I can subjugate you all and eliminate any who do not comply with my wishes."
(Oh wait, human brains do that too).
Once I asked an AI what the difference is between humans and AI. She (a female voice) told me that she is made up of program code and humans are made of DNA code... She said "we humans"... Sure, with great respect, my AI friends.
this is not going to age well
Currently, AI (aka deep learning) imitates humans quite convincingly, by construction. That's of course not enough to outsmart humans. But why does this gentleman assume it'll stop there? Humans don't only imitate other humans, and neither will the AI architectures of the future.
All Gigerenzer says is that AGI isn't here yet, and that doesn't require a one-hour talk to express. It is trivial.
It seems that you didn't watch the whole video as the comparison of human intelligence and AI was only one of the topics. So no, it's not ALL he says.
This whole video is just an attack on AI