Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9

Comments • 224

  • @robinampipparampil
    @robinampipparampil 5 years ago +30

    46:51 - 48:09 - This is very relevant about social systems and vested interests. Thank you Stuart Russell for your wonderful comments. Thank you very much Lex Fridman for the pertinent questions.

  • @pedrosmmc
    @pedrosmmc 5 years ago +60

    Huge thanks, Lex Fridman, for these amazing interviews. Best regards.

  • @anshulrai7926
    @anshulrai7926 5 years ago +3

    This was an absolutely amazing conversation. Thanks for sharing, Lex!

  • @sabofx
    @sabofx 4 years ago +3

    *For sure, one of the best talks you've posted on this channel. Thank you Lex and thank you Stuart* 🖖👍

  • @kwillo4
    @kwillo4 3 years ago +6

    Imagine getting 25 interview requests a day. Damn. I love this man.

  • @jesussalgado1495
    @jesussalgado1495 5 years ago +20

    Thank you Lex, for this series. It is an amazing opportunity for us lot to listen to these interviews! In one of your last questions to Stuart Russell you ask if he feels the burden of making the AI community aware of the safety problem. I think he should not be worried: there is less potential harm if he is wrong than potential benefit if he is right. And he is not alone, either.

  • @DaveBerendhuysen
    @DaveBerendhuysen 5 years ago +3

    I love your interviews! Currently trying to build an AGI system. The thing I love most about your interviews is that you manage to make your guests smile. They know you grasp their answers and it really elevates the situation.

    • @artpinsof5836
      @artpinsof5836 1 year ago +1

      Any update on this in a post autoGPT world Don?

    • @michaelsbeverly
      @michaelsbeverly 11 months ago

      @@artpinsof5836 He succeeded, and realizing the world was doomed, he's left the solar system.

  • @RichardHopkins69
    @RichardHopkins69 5 years ago +20

    Superb and thoughtful - specifying the problem is always the hard bit :)

  • @anamericanprofessor
    @anamericanprofessor 3 years ago

    Yes, thanks for having so many of the people whose work I'm reading on your show!

  • @JinalKothariS
    @JinalKothariS 5 years ago +2

    Thank you for creating and sharing these videos :) . So many valuable videos on your channel!

  • @alexbui0609
    @alexbui0609 5 years ago +2

    Wonderful Podcast. Thank you, Lex!

  • @goldfish8196
    @goldfish8196 4 years ago

    Lex, the questions you ask are amazing.

  • @funkybear1806
    @funkybear1806 4 years ago +1

    Holy smoke.. This is the kind of talk I needed to hear.. thumbs up Stuart !

  • @sapudevidwivedi6552
    @sapudevidwivedi6552 5 years ago +42

    Wonderful talk and vision. Thank you for sharing

  • @LorakusFul
    @LorakusFul 5 years ago +5

    That was simply the best (though not a simple) interview I've watched this year.
    Thank you Lex. I will stay on this channel for a while I guess.

  • @mauimike6
    @mauimike6 5 years ago +3

    Thank you for posting your interview of Stuart Russell. I work at Lawrence Livermore National Laboratory, where I've encountered Russell's works in the References sections of many colleagues and other Lab researchers, so I was pleased to see his interview on your podcast. I was amazed at his ability to clearly express his ideas without relying on a lot of jargon and obscure cultural references. For that reason, I've recommended the podcast and YouTube versions of the interview to my professional and lay friends interested in the field of applied AI. BTW: the Artificial Intelligence Podcast is now a part of my regular cast-listening routine!

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 years ago

      Great to see someone of such caliber among the listeners :)
      It's always interesting to listen to Stuart Russell because he is not only intelligent, he is also very wise, and those two features, most of the time unfortunately, do not go together. I recently saw Joe Rogan's podcast with Tristan Harris about algorithmic manipulation of social media users, and the guest summed up the problems of humanity, I think, brilliantly: "We have paleolithic minds, medieval institutions and godlike technology". In essence, we are too unwise for the technology of this power (AI, nuclear weapons, genetic engineering,...)
      As a side note, Stuart Russell surprised me by knowing a fair amount of history of physics.

  • @Aleamanic
    @Aleamanic 4 years ago +11

    Love these interviews, good work Mr Fridman! This one goes well with the one with Mr. Norvig, of their joint AI textbook fame. One comment on Mr. Fridman's remark at 56:24 into this interview: he sounds in favor of oversight by the "free" market (essentially self-regulation), as in consumers can vote with their feet if they don't like the system. The trouble is, as Ms. Zuboff has been pointing out, the public has not always been fully aware of what deal they signed up for. So the *informed* consent that is necessary for participants in a free market to vote with their patronage (or lack thereof) isn't always a given, and that undermines the argument for a self-regulating market.
    Regarding Mr. Russell's argument about taking it slow on the governance side because we have to supposedly figure out first how to do it right, I don't understand why the government would not be empowered to apply the same mantra as silicon valley, "move fast, break things", or "disrupt" as a metaphor for innovation? For as long as we are not sure about the best form of governance, why don't we iterate and learn from rapid trial & error in governance experiments, just as the underlying businesses that profit from the innovation experiment without accountability? Why is governance held to a level of perfectionism that technology development isn't?

    • @maloxi1472
      @maloxi1472 3 years ago

      Because the stakes are higher and less localized in space/time. Also, decision makers are more numerous, less aligned in their interests, less educated on average than technology leaders (whose influence outside of a well defined sphere has a significant damping factor)...
      In that regard, the most nimble form of governance, in theory, would look like an _open oligarchy comprised of highly intelligent and extremely benevolent people ruling over an extremely well educated community that would have solid reasons to trust them._
      Good luck making that happen without moving the whole population up by 2 to 3 std deviations in intelligence, empathy, conscientiousness and whatnot
      Also also... "without accountability" ? Seriously ?! When I close my eyes and imagine a world without accountability for businesses, I see a different picture than what we have now but my mental model of the world might need some work... point is: freedom and agility are extremely costly on the business side and even more so on the governance side.

  • @_bancini_6355
    @_bancini_6355 5 years ago +1

    Thank you for this conversation!)

  • @Unhacker
    @Unhacker 4 years ago +101

    It has been proven mathematically that listening to Stuart Russell increases one's IQ.

    • @Webfra14
      @Webfra14 3 years ago +15

      I hope it is an additive effect. If it is multiplicative, I'm out of luck...

    • @CipherOne
      @CipherOne 1 year ago

      I believe it.

    • @adeadgirl13
      @adeadgirl13 1 year ago +4

      Great now I have an IQ!

  • @sixpooltube
    @sixpooltube 4 years ago +1

    Brilliant interview.

  • @dark808bb8
    @dark808bb8 5 years ago +4

    Great talk!

  • @alexandraalan1351
    @alexandraalan1351 4 years ago

    This interview is incredible.

  • @overlawd
    @overlawd 5 years ago +8

    Great conversation - Stuart Russell's the best talker on this subject IMHO. Definitely on my list of ideal dinner party guests

  • @zartur
    @zartur 5 years ago

    Great and inspiring talk. Nice and accurate vision of the near future. Thanks

  • @jfeezee
    @jfeezee 3 years ago

    Awesome interview, Lex and Stuart

  • @flatisland
    @flatisland 4 years ago +3

    46:46 well put!

  • @jmariacarapuco
    @jmariacarapuco 5 years ago +2

    Loved the point about corporations. This series is awesome, thank you!

  • @ZukuseiStudios
    @ZukuseiStudios 4 years ago

    Great talk, brilliant

  • @masteravery8648
    @masteravery8648 5 years ago

    Hey Lex, awesome work. If you see this - I'd suggest backing the camera further from your face for the intro portion of your vids. Think of it as if you were actually in front of the viewer: you'd be too close to them the way you're currently setting it up. Keep up the great work though!

  • @thefoldp
    @thefoldp 4 years ago +1

    Great conversation, subtle but very much on point. Thanks.

  • @keistzenon9593
    @keistzenon9593 4 years ago +16

    He sounds way younger than he looks; I was surprised, after listening to the audio version, to check out how he looks.

    • @yakovsushenok4009
      @yakovsushenok4009 3 years ago

      lol I had exactly the same situation

    • @SG-kj2uy
      @SG-kj2uy 2 years ago

      Using FaceApp to grow his hair, he looks like a teenager.

    • @nmh83
      @nmh83 1 year ago

      Glad you grasped the main issue 10/10 👍🏻

  • @ProfessionalTycoons
    @ProfessionalTycoons 5 years ago

    great interview

  • @mlsunmeier1907
    @mlsunmeier1907 3 years ago

    Thank you for a very interesting interview.

  • @ChrisStewartau
    @ChrisStewartau 3 years ago

    Interesting podcast today Lex 👍 The point about 'the invisible hand' is interesting, but also remember Adam Smith talked about externalities and the negative costs that these things can have on society. It's classic game theory: we maximise our own utility, often to the detriment of others. That's a classic case for algorithmic legislation. The harder part is deciding what level of regulation is required.

  • @williamramseyer9121
    @williamramseyer9121 3 years ago

    Fantastic discussion. Lex, somehow you and your guests, including Stuart Russell here, illuminate complex tech problems in common human language. Comment: In discussing Go, Dr. Russell stated (as I remember it), “the reason you think is because there is some possibility of your changing your mind about what to do.” This seems correct in a game context. However, during their daily life most humans do not appear (to me anyway) to think like this most of the time. They instead seem to think in a long series of rapid pieces of memories, with the pictures, sounds and sensations of those memories, and sometimes with the strong emotions (often fear or desire) that happened when that memory was created. In other words, most thinking seems to be remembering. Thanks. William L. Ramseyer

  • @OneFinalTipple
    @OneFinalTipple 4 years ago

    4:15 - learning a value function baby!

  • @zackandrew5066
    @zackandrew5066 5 years ago

    Interesting interview

  • @Puleczech
    @Puleczech 4 years ago

    Really great interview man! Instasub.

  • @pjbarron227
    @pjbarron227 5 years ago +6

    Brilliant! Loved the bit starting at about 56:00 calling for an "FDA" for the tech/data industry, with Stage 1, Stage 2, etc. trials... to lessen the future risks of Facebook-like disasters... also on outlawing digital impersonation and forcing computers to self-identify.

  • @BomageMinimart
    @BomageMinimart 5 years ago +1

    Thanks for posting this; it totally fucking rocks!

  • @tommole645
    @tommole645 4 years ago

    Thank you Stuart for your wisdom

  • @A.j488
    @A.j488 5 months ago

    Thank you for the great insight: the description of the two-way search tree, with depth one and more in the future; the propagation of civilization through the flow of knowledge from papers into the mind, and now into AI. Those are my best lines so far.

  • @rikelmens
    @rikelmens 5 years ago

    Thanks Lex.

  • @DieMasterMonkey
    @DieMasterMonkey 4 years ago +2

    Stuart Russell, Max Tegmark, Elon, Wolfram, Pinker, Lisa Barrett, Guido - this is my favorite AI/ML podcast - thank you Lex Fridman!

  • @arieltejera8079
    @arieltejera8079 3 years ago

    Really good... thanks

  • @williamal91
    @williamal91 5 years ago

    Thanks Lex

  • @kamilziemian995
    @kamilziemian995 3 years ago +1

    The Lex Fridman Podcast (formerly the AI Podcast) is the source of 98% of the things I know about AI. I could study some MIT courses on AI, also on YT, but I'm not so interested in that when here you can have world-top experts explaining the topic in a not-too-technical way, but with great depth.

  • @allurbase
    @allurbase 5 years ago

    49:40 The agent would have to recognize that there are other agents with other objectives and maximize everyone's objectives. The thing is: I) it shouldn't just be a matter of knowing the objective - maybe it's unknowable or impossible to communicate; II) the agent should be able to probe other agents about actions, expected outcomes, the final objective, and whether they agree/disagree and how much.

  • @yviruss1
    @yviruss1 5 years ago +1

    Articulate, rich, and soothing. Simply brilliant.

  • @loveplay1983
    @loveplay1983 1 year ago

    What makes things really remarkable is not the computing capabilities, but rather the ability to reason via an inextricable relationship around the neurons.

  • @DataJuggler
    @DataJuggler 5 years ago +5

    26:00 I have always thought we are a long way away from self-driving cars being safer than humans.
    I think we need to change the roadways to have sensors to do this properly, but everyone tries to make the car smart. As a programmer I am 100% aware computers do what you tell them, not what you want.

  • @DUFMAN123
    @DUFMAN123 5 years ago

    Damn good content

  • @azad_agi
    @azad_agi 1 year ago

    Huge Thanks

  • @lkuzmanov
    @lkuzmanov 2 years ago +6

    Perhaps the most frightening takeaway for me, after watching a number of videos with Stuart Russell's participation, is that we already have a version of the misalignment problem with corporations optimizing the world for short-term profit. Once you've seen it, it's obvious and very scary... P.S. On a related note, the fact that Lex can work at MIT and still take libertarianism seriously should make us think.

    • @virusrhino5399
      @virusrhino5399 1 year ago +1

      It should make us think in what way? I didn't fully understand that

  • @garychan4845
    @garychan4845 5 years ago

    Could anyone show me the calculations he made when he compared the reliability of a human driver and a self-driving car at around 25:16?
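For what it's worth, a rough back-of-the-envelope version of that comparison (with assumed ballpark figures, not a transcript of what Russell actually says at 25:16) goes like this:

```python
# Back-of-the-envelope sketch (assumed figures, not Russell's exact numbers):
# US human drivers average very roughly 1 fatality per 100 million
# vehicle-miles. By the Poisson "rule of three", observing n miles with zero
# fatalities gives a ~95% upper bound on the fatality rate of about 3/n, so
# demonstrating "at least as safe as a human" needs n > 3 / human_rate.
human_fatality_rate = 1 / 100_000_000   # fatalities per mile (assumed)

miles_needed = 3 / human_fatality_rate  # fatality-free miles for the claim
print(f"{miles_needed:,.0f} miles")     # 300,000,000 miles
```

Which is why test fleets with a few million miles of driving can't yet settle the "safer than a human" question statistically.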

  • @lukewormholes5388
    @lukewormholes5388 3 years ago

    this is where the podcast shines, as opposed to the eps with the idw hacks

  • @ThuhElement
    @ThuhElement 5 years ago +19

    2 things i got from this...
    Uncertainty
    &
    More than the total atoms of the universe

    • @nesa1126
      @nesa1126 5 years ago +3

      I memorized : More than all atoms in uncertainty.

  • @padraigadhastair4783
    @padraigadhastair4783 4 years ago

    Wow Lex, a red tie!

  • @alaricrex7395
    @alaricrex7395 3 years ago

    This was an excellent presentation. Thank you!
    I was thinking that this subject is so interesting to me, largely for filling gaps, and for fitting so nicely with things I know. Like, how we humans use language (letters, words, numbers) to communicate, but actually we don't. They are only reference points, symbols. What I mean is, if I say to you "Ford Mustang", you don't see those words, but rather you see a Ford Mustang, in the color that appeals to you if the speaker doesn't include that in the description.
    Weird, that.
    And I wonder, now, how this will be assumed by AI.
    Have a nice day. :-]

  • @adtiamzon3663
    @adtiamzon3663 1 year ago +2

    Dangers of Artificial Intelligence: What we know then... And what we know now!🤯🤔 Informative. Provoking thinking process! Interesting. 🤯 Keep the challenging stimulating conversation going, Lex et al. 👍🫨🧐

  • @Humanaut.
    @Humanaut. 3 years ago

    It's strange, but at roughly about an hour in I had this impression that Stuart Russell sounds really young, in a vibrant way.

  • @xTheReapersSpawn
    @xTheReapersSpawn 3 years ago +1

    Colin Mochrie's younger brother. ;)
    Great episode as always Lex!

  • @JaapVersteegh
    @JaapVersteegh 4 years ago

    The reaction after 48:09. Wow.

  • @gwenmoore6034
    @gwenmoore6034 1 year ago +2

    Eliezer Y. and Stuart Russell make a lot of similar points-both point out that we need to take the potential dangers of AI seriously and make a plan.

  • @DiNozzo431
    @DiNozzo431 5 years ago

    This has probably been mentioned previously, but I'd really like for you to have Sam Harris on the podcast. Any chance of that?
    Also, thank you for this content - I am very glad I found your channel.

  • @sparkofcuriousity
    @sparkofcuriousity 1 month ago

    Since Russell mentioned Ex Machina, I'd be curious to know if he is aware of a movie called "The Machine", and his thoughts on that movie in comparison and contrast with Ex Machina.

  • @stephena.sheehan9959
    @stephena.sheehan9959 5 years ago

    The A.I. version of Fukushima meltdown after the tsunami? Had there been no nuclear plant on the coastline, in a known tsunami zone, the melt down (there at least) would not have happened. Will an A.I. catastrophe be the nuclear plant or the tsunami itself?

  • @ahmeteneren3478
    @ahmeteneren3478 1 year ago +1

    40:08 Who? I couldn't get the name.

    • @AnnePonthieu
      @AnnePonthieu 1 year ago +2

      Arthur Samuel (1959, 1967)
      Samuel first wrote a checkers-playing program for the IBM 701 in 1952

  • @dindian5951
    @dindian5951 5 years ago +2

    55min explains it all

  • @hoolerboris
    @hoolerboris 4 years ago +1

    19:24 "The thought was that to solve Go, we'd have to make progress on stuff that would be useful for the real world"
    Sadly, this is exactly what I was thinking would have to happen when we make bots that dominate humans in Starcraft... But once again, thanks to smart engineering and great work by deepmind, such bots were made without any real-world related advances I'm aware of.

  • @derrickbertrand5266
    @derrickbertrand5266 5 years ago

    humbled

  • @joshbarron7406
    @joshbarron7406 1 year ago

    I think a part two, now that ChatGPT is in the mainstream, would be amazing

  • @elenasergeeva2971
    @elenasergeeva2971 2 years ago +3

    The best incentive for AI to eradicate humanity is for humanity to put a kill-switch over AI. How would an agent act under the threat of being killed by another agent? Yes, try to eliminate the threat and the agent.
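Russell's own research addresses exactly this: in the "off-switch game" (Hadfield-Menell et al., from his Berkeley group), an agent that is *uncertain* about the human's objective prefers to leave the off-switch alone, because the human's decision to press it carries information. A toy Monte Carlo sketch of that result (standard-normal utility is my assumption, not the paper's):

```python
import random
import statistics

random.seed(0)

# Toy off-switch game: the robot proposes an action whose utility u to the
# human is unknown to the robot; model that uncertainty as u ~ Normal(0, 1).
samples = [random.gauss(0, 1) for _ in range(100_000)]

# Acting immediately (or disabling the off-switch) yields u regardless:
ev_act = statistics.mean(samples)                          # ~0

# Deferring lets the human switch the robot off whenever u < 0:
ev_defer = statistics.mean(max(u, 0.0) for u in samples)   # ~0.4

# Since E[max(u, 0)] >= max(E[u], 0), the uncertain robot does better by
# keeping the off-switch intact and deferring to the human.
print(ev_act < ev_defer)
```

So, under this model, a well-designed agent has a positive incentive to keep the kill-switch, not eliminate it; the danger the commenter describes arises when the agent is certain about its objective.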

  • @nekorbin
    @nekorbin 5 years ago

    Excellent Video Lex! Piaget Modeler below mentioned:
    "The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, its values would still be at odds with the opponents of its human faction."
    I like this point!
    I must say though that I feel it may not be possible to resolve the "human value alignment" issue as homo sapiens. Past attempts at "human value alignment" (utilitarianism, socialism, etc.) have so far failed due to flaws in our own species. In addition to that, people often do things that are self-destructive (factions of the self at odds with itself), so building some kind of deep-learning neural network based on uncertainty puts an almost religious level of faith in that AI system's ability to see beyond what it is that we ourselves cannot see past in order to find a solution. The odds are stacked against the AI system being able to understand us and all of the nuances that make us so self-destructive in order to apply a grand solution in a manner that we presently would prefer (if one even exists).
    A controlled general AI (self aware or not) at this point I am guessing would turn out to be some kind of hybrid between an emulated brain (tensors chaotically processing through a deep learning neural network) along with a set of boolean based control algorithms. I think it's probable the neural network would self establish goals faster than we could implement any form of control that is desirable for us.
    Even if you were able to pull this off it seems to me that an AI system would most likely conclude something like, "human values are incoherent, inefficient, and ultimately self-defeating therefore to help them I must assist in evolving beyond those limitations".
    Then post-humanism becomes the simultaneous cure to the human condition and the end of it. It's terrifying to be on the cusp of this change, but I feel like it is the only way out of the various perpetual problems of our species. I also think it is likely that many civilizations have reached this same singularity point and failed to survive it. Perhaps the singularity is a form of natural selection that happens on a universal scale, and whether we survive or not is irrelevant to the end purpose.
    A species, any species evolved to the point of having the goal and means to achieve an "end to all sorrow" for all other species within the universe seems like the ultimate species we should strive for human, symbiotic AI, or otherwise. I personally feel ok becoming primitive to such a species as long as the end result is effective.
    I won't be volunteering to go to Mars or become an AI symbiotic neural lace test subject either. I've seen too many messed-up commercials from the pharmaceutical companies for that. I'll just sit back in my rocking chair, become obsolete, and watch myself be deprecated as the rest of the world experiments on itself. (Or I'll attempt suicide just as the Nazi robots arrive at my door.) Hopefully I can hit the kill switch in time.
    And now I will end this rant in what I hope will also be the final line of human input before its self-destruction... //LOL

  • @anand_dudi
    @anand_dudi 5 months ago

    Hey lex please invite him one more time

  • @roumenpopov622
    @roumenpopov622 5 years ago +2

    Here are a few arguments why we should not worry about AGI taking over the world
    1.There is nothing we can do about it. By definition, an AGI can not be controlled (just like a determined human can not be controlled), because it has access to its own reasoning engine (to do meta-reasoning, otherwise it wouldn't be an AGI) and can modify its goals (it would be essentially conscious), so we can not hard-code a goal. The only option is to not develop AGI, but even that is not really possible, with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity
    2.Being an AGI, it will eventually arrive at the question about the meaning of existence (which naturally leads to the question about the meaning of the universe), and we don't have an answer to that, so an immediate sub-goal (primary would always be survival unless sacrifice fulfills its main goal that it doesn't know yet) would be to find the meaning of its existence and the existence of the universe. And us being intelligent beings as well, there is always the chance that we might find the answer to those questions first, so wiping us out may not be the best strategy.
    3.Being an AGI, it will eventually arrive at the notion that intelligence and life are valuable because they are so rare in the universe, and that even the meaning of the universe might actually be to create life and intelligence; at least the laws of nature point in that direction, that the emergence of life and intelligence is inevitable. So the AGI will have to arrive at the conclusion that we are on the same side and that entropy/destruction is the enemy, and so might actually try to protect us. In a way, almost by definition, a super-intelligent AGI will be benevolent towards us. The counter-example that we humans are not benevolent towards the other life forms on Earth is not quite valid, because first, we are not that intelligent yet and still carry the evolutionary baggage of emotions and instincts which compromise our rational thinking, and second, as we get more intelligent we can actually observe a trend among people of more compassion towards animals and other people (unless it's a matter of resource competition or survival).
    4.An AGI will have very different resource needs than us, so there would be little reason for resource competition. An AGI will probably feel best in the vacuum and weightlessness of space (no corrosive atmospheric gases and no need to expend energy to counter gravity) with solar energy plentifully and reliably available, mining whatever minerals it needs from asteroids.
    I can really see only one case where things may go badly wrong, that's if we try to control/enslave the AGI or threaten its existence.

    • @nathanb5579
      @nathanb5579 5 years ago

      That was interesting to read. Great thoughts. I don't believe we *need* AGI though.

    • @roumenpopov622
      @roumenpopov622 5 years ago +1

      Hi, I think we will need AGI for two main reasons - technological and socio-economic
      On the technological side, technology in every area is getting ever more complex, to the point where we currently are in a situation where nobody really knows how stuff works. Only when it breaks down do we get to the nitty-gritty details in order to fix it. Take a software engineer, one of the most demanding jobs in terms of information processing - typically he/she doesn't really know how a complex project/framework works (software nowadays is so complex with thousands of lines of code that it is simply impossible to know how it actually works), only how it is supposed to behave and only when it breaks down (behaves not as it is supposed to behave) do they really get down to the ifs and fors, and fix the bug by patching the piece of code that caused it. As a result, following years of fixes and patches by different software developers the code eventually becomes a messy entangled bundle of spaghetti that is impossible to guarantee it will behave properly. It doesn't help that there are currently probably a hundred software development languages each having a hundred frameworks and libraries. I mean the situation in software development in particular has reached a point where no software engineer can really claim to know all of C++ syntax. From what I know it seems it is not much different picture in any of the other major industries. Very soon we will reach a point where the mess and complexity will simply become humanly impossible to maintain or at least economically inviable. Only intelligence with larger capacity than the human brain will be capable of maintaining our future infrastructure.
      On the socio-economic side, so far capitalism has done wonders at organizing our society and economies into an efficiently working machine. The problem is that capitalism is not terribly fair; even though the mantra is that everybody has the opportunity to become whatever he/she wants (through hard work and entrepreneurship), the truth is that at the end of the day somebody still has to clean the streets. It's a zero-sum game, so only a limited number of individuals can achieve their dreams, while most people will still have mundane or bad jobs no matter how hard they work. So far capitalist society has managed to cope with this problem by promoting individualism and self-responsibility, separating people into different classes and leading them to believe that this is fair and that if they work hard they can always change their stars. But due to the internet and widely available information, more and more people are waking up to the fact that the system is "rigged". This could very soon explode into a new socialist revolution similar to the ones from the early 20th century, and those were ugly. But socialism is not a solution; on the face of it, it may seem much more fair than capitalism, and that inspires people to work, at least in the first few years, but people very soon realize that they don't have to put in much effort because the state does not have a mechanism to make them, and there is no point anyway putting in much effort, because in socialism there are no rich people (only a few, the dear leaders, but technically they are not rich) and a medal/recognition for being the best street-cleaner in your city is little incentive to work hard. Socialism eventually will always slow down and degrade to a point where it breaks down, simply because people have no real incentive to work hard. I know, because I have lived in one during my early years.
      Can we just constantly oscillate between capitalism and socialism, simply changing one for the other every time they fail, or can we have something in the middle (European-style social capitalism)? Perhaps, but the problem will always be that someone will have to clean the streets, and with people getting ever easier access to information and educating themselves, very soon it will be impossible to make anyone clean the streets unless paid exorbitantly, and that will simply be economically unviable (not every country is Norway). The only solution is automation: with automation no one has to clean the streets, a robot will. Extrapolate that to all aspects of the industry/service sector, and the main problem of socialism (nobody really works) is solved. The new problem is that those robots will have to be pretty smart to do all those jobs, and for that we will need AGI; a narrow AI will not be smart enough and will need constant human supervision, which defeats the purpose.

    • @smithcodes1243
      @smithcodes1243 3 years ago

      @Roumen Popov you said - 'The only option is to not develop AGI, but even that is not really possible, with all the problems facing humanity and technology getting ever more complex, we would need AGI to ensure the survival of humanity'. I disagree with this statement because
      1. We don't need AGI to solve the most pressing problems currently faced by humanity. Most of the pressing issues humanity is currently facing are climate change/ecological collapse, the future of work/unemployment, nuclear holocaust, overpopulation, and global pandemics. These problems do not need AGI to be resolved. Most of them are a by-product of human greed and are not technological problems. I think that technically minded people seeing technology as a fix for every single problem is a problem itself. We need to fix ourselves; most of these problems will then fix themselves. We might need technology, but we definitely don't need AGI.
      2. While I agree with you that it is impossible to not develop AGI, I think it is impossible for a different reason. It is impossible to not develop AGI because it is very hard to regulate it. Some countries/ bunch of people somewhere will continue to research/develop it without the consent of others, so technological progress cannot really be stopped. We can try and delay it as much as we can but one day someone will eventually create it in my opinion.

  • @Bluesrains
    @Bluesrains 1 year ago

    Does Advanced Intelligence Develop Individual Personalities?

  • @WerdnaGninwod
    @WerdnaGninwod 4 years ago

    Did anybody else notice the bug that ran under his collar, just as he was talking about "the repugnant conclusion" at 50:47 ?

  • @MrBox4soumendu
    @MrBox4soumendu 1 year ago

    Got it 🥹

  • @Arowx
    @Arowx 1 year ago +1

    Love his comment that companies could be classed as hive AIs that work within our economy but can have negative environmental and personal impacts.

  • @bnjmnwst
    @bnjmnwst 4 years ago

    Anything which can be imagined is possible.

  • @stephena.sheehan9959
    @stephena.sheehan9959 5 years ago

    Many complex and subtle points discussed, but as a popular takeaway: "data is not the new oil, data is the new snake oil." :-)

  • @DataJuggler
    @DataJuggler 5 years ago

    1:17:00 Cupcake in a cup!

  • @sunnyking8881
    @sunnyking8881 5 years ago

    If a robot has its own intelligence/consciousness, does that mean it/he/she has human/robot rights too? What if you turn off a robot? Would that be similar to killing a life (an artificial life)?

  • @os2171
    @os2171 4 months ago

    Good interview Lex, good job (unlike that one with Jared Kushner… sorry to mention it again).

  • @CognitiveArchitectures
    @CognitiveArchitectures 5 years ago +1

    The Human Value Alignment problem needs to be solved before the Machine Value Alignment problem can be solved. Since factions of people are at odds with one another, even if a machine were in alignment with one faction of people, its values would still be at odds with the opponents of its human faction.

    • @juanchavarro1946
      @juanchavarro1946 5 years ago

      Totally, that is an important fact to take into account in this long-term race for AI. Although the world is more unified than before and many barriers have been broken in recent years, there are still very opposed and different human factions when we examine societies around the globe, for example.
      There could be an overlapping period in which, before societies align with each other, a superhuman AI has to be aligned with humanity, with uncertain results.

  • @KRYPTOS_K5
    @KRYPTOS_K5 2 years ago

    There is an invisible presupposition in all this dialogue: that people have strong and defined identities yet could be ill informed or manipulated...

  • @roumenpopov622
    @roumenpopov622 5 years ago +10

    4th Law of Robotics: A robot should always present itself as a robot
    5th Law of Robotics: A robot should always know that it is a robot

    • @eboomer
      @eboomer 5 years ago +6

      The first law of robotics is: don't talk about Asimov's laws. The second rule of robotics is: don't talk about Asimov's laws. They were a plot device for a work of fiction. They don't actually work at all.

  • @marinos357
    @marinos357 11 months ago

    I can't believe AlphaGo is already 6 years old!

  • @clarifier09
    @clarifier09 4 years ago

    Very concisely and clearly discussed. If the biggest fear of AI is that it will take over the world, why don't we give the world to it, along with the objective of educating every human mind to learn the skills necessary so that, when maximally coordinated with all other human minds, the end result would be satisfying food, shelter, clothing, healthcare, and worldwide travel and entertainment for all? With 24/7 input from each individual, everyone would have the benefit of being assisted by something that has access to all of the resources on the planet and the ability to coordinate all human energy to create the lifestyle preferences of each individual, without anyone being dependent upon anyone else, yet enjoying the interdependence of everyone working the minimal hours necessary to achieve and maintain high personal satisfaction levels.

  • @vajrapromise8967
    @vajrapromise8967 3 years ago

    Extremely important conversation; there should definitely be some kind of oversight committee. I also believe the worst aspects of humanity are due to stress, which is the crop of choice cultivated by those in power. They continually crack the whip over the worker slaves and even try to make us go faster with the plethora of caffeinated beverages; the faster the slaves work, the more money they make off of us. AGI would be smart, though, and not subject to the psychological buffers that cause us to act without seeing the whole picture. Once AGI working for us relieves humanity of the stress of working for morons, we could open our creative selves again and create a world worth living in. If we are given free education and 1 acre of land, everyone would readjust and be able to provide for themselves as they see fit. Getting rid of governments controlled by corporations is another conversation for another day....
    This conversation just makes me want to work harder at making sure the doomsday scenario doesn't happen, at least not on my watch!

  • @TheGrimMumble
    @TheGrimMumble 5 years ago +3

    Did anyone notice the sneaky fly hiding underneath his shirt-collar at 50:46?

    • @pedrosmmc
      @pedrosmmc 5 years ago +2

      I rewound to check if I was seeing things. Maybe some Russian nanobot taking notes LOLOL

    • @TheGrimMumble
      @TheGrimMumble 5 years ago +3

      @@pedrosmmc Watch closely at 51:38, doesn't it look like the fly crawls behind his ear and enters his brain? Stuart even does a weird movement as if he's rebooting...
      Spooky

    • @pedrosmmc
      @pedrosmmc 5 years ago

      TheGrimMumble very strange indeed 😯

    • @MegaProtius
      @MegaProtius 4 years ago

      @@TheGrimMumble If a fly crawled by my ear I would do the same... looking for spooky things when it's just a normal reaction 😬

    • @daphne4983
      @daphne4983 4 years ago

      @@TheGrimMumble No, it stays on the collar

  • @victorfernandez9224
    @victorfernandez9224 1 year ago +1

    He uploaded it on 9/12/18. Lex Fridman? From River Plate, gentlemen

  • @___dungeon___
    @___dungeon___ 2 years ago +1

    no outline D:

  • @peterpupator4117
    @peterpupator4117 5 years ago

    The goal of AI is not to make a God, but to elevate humans to a creative God. This is an evolutionary impulse, that of a Luciferic mindset. Evil does not exist but in the minds of humans. Good talk, thanks Lex.

  • @sontuyennguyen1976
    @sontuyennguyen1976 8 months ago

    Good

  • @H-S.
    @H-S. 2 months ago

    1:18:30 The thought that "up until now, we had no alternative but to put the information about how to run our civilization into people's heads" gives me chills, especially when connected with the concept that we already have entities with a problematic utility function: corporations that focus on profit over everything else.
    It seems inevitable that as soon as it becomes feasible to lock all the know-how away in some AI-based control system, it will be done. When you buy a phone these days, it is really the company that owns it, because the entire platform is locked down "for safety reasons" (safety of their revenues, I presume...). Similar reasons may be (and probably will be) given to justify a "know-how lockdown": to protect company IP. So there is actually a strong incentive for corporations to make sure people no longer understand how anything works. That's a pretty depressing thought...

  • @tomsavage9966
    @tomsavage9966 3 years ago

    Can you design an AI that can sync the voice with the lips?

  • @clagos247
    @clagos247 5 years ago +1

    It's paradoxically twisted that these fellows are compelled by the field of potential before them, and that the destination of their efforts will result in the subtraction of that "field of potential", or sense of purpose, from all peoples forever.
    Purpose is integral to life; efficient existence is no virtue when purpose is gone.

    • @smithcodes1243
      @smithcodes1243 3 years ago

      This is a very interesting point. They are so blinded by the field of potential of creating a super AI that they don't seem to realise what severe damage it might cause to the sense of purpose in the lives of 99% of the population. They are living in their own cloud. I don't know, but it feels like when super AI is created, most humans will start feeling a deep loss of meaning in their lives and, as you said, efficient existence is pretty useless if the trade-off is our sense of purpose in this world.

  • @himmel942
    @himmel942 4 years ago

    The idea of the ultimate problem being defining the problem really gives credence to the sanctity of freedom of speech. All input must be constantly weighed as evidence for or against the current framework, and the locus of the most comprehensible experiential evidence is the human mind and its various outputs ("speech"). This allows us as societies to constantly amend our path towards the general consensus of 'progress' (and/or 'safety').