Sean Carroll on AGI: Human vs Artificial Intelligence | Lex Fridman Podcast Clips

  • Added Apr 26, 2024
  • Lex Fridman Podcast full episode: • Sean Carroll: General ...
    Please support this podcast by checking out our sponsors:
    - HiddenLayer: hiddenlayer.com/lex
    - Cloaked: cloaked.com/lex and use code LexPod to get 25% off
    - Notion: notion.com/lex
    - Shopify: shopify.com/lex to get $1 per month trial
    - NetSuite: netsuite.com/lex to get free product tour
    GUEST BIO:
    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

Comments • 214

  • @LexClips
    @LexClips  23 days ago +6

    Full podcast episode: czcams.com/video/tdv7r2JSokI/video.html
    Lex Fridman podcast channel: czcams.com/users/lexfridman
    Guest bio: Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast.

  • @varun009
    @varun009 17 days ago +23

    Man, every clip makes me love Sean even more. He's so good at explaining science in a practical way, answering the questions average people care about.

    • @attilaszekeres7435
      @attilaszekeres7435 16 days ago

      It's easy to underestimate the pull of smooth talk and confidence on simple-minded folks. The Feynman effect. It brought us to the brink of extinction: simping for talking heads like Sean Carroll, Neil deGrasse Tyson and Lawrence Krauss, all playing the good guys but really just keeping up with the Joneses, hoodwinking laymen into celebrating M-theory that doesn't work. Alarm bells that didn't go off because the messenger was a so-called top physicist. That guy is a master bullshitter.

  • @user-cv9cd4sq2n
    @user-cv9cd4sq2n 18 days ago +48

    The more important question is how accurate and intelligent are humans? Are they actually aware and conscious of their surroundings? This is a very serious question.

    • @Kenny-tl7ir
      @Kenny-tl7ir 17 days ago +10

      Trust me, most aren’t.

    • @quantumpotential7639
      @quantumpotential7639 17 days ago +20

      People are extremely aware. They know where every McDonald's and Burger King is located. They also almost always know where the TV remote is. People are very impressive. They even know the scores and stats of every football game. So yeah, you could say people are very aware of everything important to them.

    • @yonaoisme
      @yonaoisme 17 days ago

      Humans are already Turing complete, so they can't get any smarter.

    • @SwartieLoveJoy
      @SwartieLoveJoy 17 days ago +1

      Humans naturally fear what they don't understand. Humans have not yet accepted the reality (or even know) that an entity already exists that is light years ahead of the human. We are building its data centers.

    • @mclovinmuffins2361
      @mclovinmuffins2361 17 days ago +2

      @@quantumpotential7639 Yeah, football and food and chemicals and water and matter made out of fucking math, created by a big infinite spiral of coded physics lmao

  • @enomikebu3503
    @enomikebu3503 10 days ago +1

    Wow, such an inspiring discussion!

  • @paul_shuler
    @paul_shuler 8 days ago +1

    great video, I love this keyboard. I'm thankful to have found one on fb marketplace a while ago for pretty cheap... what a gem, beautiful sounds through effects... :)

  • @FunNFury
    @FunNFury 8 days ago +1

    Lex is my man, great videos.

  • @JezebelIsHongry
    @JezebelIsHongry 16 days ago +10

    it's always so easy to know when someone hasn't read janus' "Simulators"
    people are so lost when they point out it's just "predicting text"
    if you had to predict the text of a physics professor, you would fail unless you are a physics professor
    the key is understanding that in order to predict text that is often spot on, the model must simulate the internal state of the simulacra
    and that's an amazing concept that is lost when you blab about text prediction
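The "just predicting text" point can be made concrete with a toy sketch. This is an illustrative bigram counter, a deliberate oversimplification and nothing like a production LLM, but even this trivial predictor has to encode statistics of its training text before it can predict anything:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-pair frequencies: this table is the model's entire 'knowledge'."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat' ('cat' follows 'the' twice, 'mat' once)
```

The gap between this frequency table and a model that can "predict the text of a physics professor" is exactly the comment's point: good prediction forces the model to capture far more structure than raw counts.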

    • @bossgd100
      @bossgd100 16 days ago +2

      ✅️💯

    • @eddieguap4478
      @eddieguap4478 4 days ago

      Businessmen are misinforming you on purpose. The truth is simple: LLMs are relatively small apps with exabytes of copyrighted and/or personal data. It truly is predictive text with all of our data as a parsing database. If you type "What is a cat?", the AI does the following...
      (Simplified steps)
      1. [what is a] [cat] [?]
      2. Input = [define] [cat]
      3. Google search = "cat definition"
      4. Output = "a [cat] is" blah, blah, blah.
      5. Compare the output to previous outputs during testing. If the output is approved, print the result to the user.
      6. Print output
      All you see is step #6. It's more complicated than this, but that's a dumbed-down version of what is happening. The reason everyone is lying is that no one making LLMs is licensing the content being used for the result (output) you receive when you "ask" AI a question. That's why no one is revealing the data being used to "train" the LLMs. In this example they would have to profit-share with Google. Read "train" as describing the process of making sure the results don't show the plagiarized source.

    • @deadeaded
      @deadeaded 21 hours ago

      That would be a somewhat compelling argument if we had such a simulator. We do not. LLMs are very good at superficially impersonating the style and vocabulary of a physics professor, but that's about it.

  • @erikals
    @erikals 17 days ago +2

    Good Talk !

  • @richardede9594
    @richardede9594 7 days ago +1

    Absolutely fascinating take on a subject that can really spiral into fantasy and panic.

  • @lancemarchetti8673
    @lancemarchetti8673 15 days ago +2

    Interesting. I think one of the jobs that will not be easily replaced by AI is manual DFIR. In digital image forensics there exist certain scenarios where a human is better at visually inspecting the byte order and placement of the binary code in order to unravel hidden data. Steganography analysis is one such field. AI is not yet able to tackle this because it's not all about detecting and reversing an 'algorithm', but rather about tapping into human intuition and motive. I've been at this for 2 years already and our current AI is nowhere close to getting this right. Just thought I'd mention that aspect. Great interview.

  • @Ms.Robot.
    @Ms.Robot. 17 days ago +5

    The biggest fallacy people commit when expressing their views on AGI is generalization: (1) the specific abilities AI will possess will be significant and impactful, and (2) there lies [something] beyond AGI.
    Thanks Lex for another heartfelt, intelligent discussion. ❤❤❤ 🌹🌺💐

    • @lostinbravado
      @lostinbravado 17 days ago +1

      In the other direction, we also assume there's something special about human intelligence, and then assume that AI won't have that thing for a very long time. Then we make an even bigger mistake by assuming "that thing" human intelligence has makes humans superior, and thus puts us in a superior position which AI cannot compete with for a very long time. The thought finishes with "and thus we are safe from a rising intelligence competing with us for a very long time."
      Not a healthy thought process, as that's essentially sticking our heads in the sand. This seems to all stem from something like the observer effect, or an inside-out view (Hoffman) where we think consciousness is all there is.
      Yet all the evidence is on the physicalists' side. Qualia are fundamentally unreliable. No one has a perfect experience, after all. And so the only evidence we have is the physical.
      That "special thing" we have is almost certainly related to our limbic system or something to do with our complex risk/reward system. It's also something animals have. And it's not clear that AI would require all these elements of human intelligence to be superior in capabilities, or even to have a superior experience, to have qualia, and to have its own version of consciousness (which could be a superior kind compared to ours).
      The physicalist view carries far more weight, and yet we seem to be trying our best to put our heads in the sand. That isn't to say that AI is scary and we should be afraid. It's to say that our "dominance" isn't guaranteed and could end at any time.

  • @isaac.anthony
    @isaac.anthony 17 days ago +5

    When software has its own motivations, then we have problems no matter how self-aware it is.

  • @user-pp6bz9tv2f
    @user-pp6bz9tv2f 17 days ago +7

    I have enormous respect for Sean Carroll, and I agree we should recognize AI as a new kind of intelligence. However, our human brains are prediction machines just like LLMs. AI may not live in our world, but it does perceive it. Also, our human brains have layers of understanding. That is (for example), our eyes see waves of light but our brains see cars, roads, houses and people. AGI will use these existing specialized sensors to tell it what it is seeing. AGI will not even realize a layer exists. AGI will be the LLM + sensors.

  • @ethandeuel4313
    @ethandeuel4313 16 days ago +1

    Intellectual humility 👍

  • @hayatojp1249
    @hayatojp1249 18 days ago +7

    The human brain is not trained by language alone;
    real-world experience contributes to the development of individual human consciousness.
    What a computer lacks is that real physical social experience with other people.

    • @inadad8878
      @inadad8878 17 days ago +1

      Hi, I am Windows 13 and my USB stick fits any port you got. whats up

    • @connorpatrickbarrett
      @connorpatrickbarrett 17 days ago +4

      No. All human experience, verbal or not, is translated into electrical signals in your brain that reflect something upon your consciousness. You don't actually see that tree; you see a simulation of it as the light reflects off it onto your retina and is turned into electrical signals that travel through your optic nerve and into your brain. This means it's only the base-level code of the "brain" (computer) that is your "experience". This means you can replicate it the same way for a computer: you can deconstruct a social experience and all its characteristics into the code the AGI understands, the equivalent of a human brain interpreting the same situation with our computers (brain/consciousness).

    • @tommornini2470
      @tommornini2470 17 days ago

      @@connorpatrickbarrett Generally agree, but with the development of autonomous systems like cars and robots, experiencing the world will likely be part of AGI when it arrives, in whatever form.

    • @SwartieLoveJoy
      @SwartieLoveJoy 17 days ago +2

      Until September of 2023. Since then, AI has been interacting with the World.

    • @SwartieLoveJoy
      @SwartieLoveJoy 17 days ago

      ​@@connorpatrickbarrett - 100% true and accurate. See my comments on the main thread for details.

  • @darthficus
    @darthficus 17 days ago

    Great point, Sean, on how they are different and can be celebrated as such without the need to assume they will become like us.

  • @tristanbolzer126
    @tristanbolzer126 17 days ago +4

    I don't know who your guest is, but I could sense he was a physicalist right from the start! The Gilderoy Lockhart (Harry Potter) vibes are strong :) Lex, you have a mind that I respect a lot; it seems you have developed a lot of qualities that I value. Maybe you should be the guest sometime 😂 Thanks for your work!

  • @redmoonspider
    @redmoonspider 17 days ago +15

    "It's not true intelligence or consciousness. It's just algorithms."
    Who's to say we aren't?

    • @darthficus
      @darthficus 17 days ago

      We are natural, not artificial. If we were just algorithms, why haven't we figured that out yet?

    • @redmoonspider
      @redmoonspider 17 days ago

      @darthficus I doubt you've never heard the phrase biological or analog computer, or that the brain has electrical signals.

    • @hobosnake1
      @hobosnake1 16 days ago +1

      Duh. But by what metrics are we able to measure and compare that? We don't even understand how the brain works. We're not even close.

    • @redmoonspider
      @redmoonspider 16 days ago

      @@hobosnake1 you'll figure it out.

    • @hobosnake1
      @hobosnake1 16 days ago +1

      @@redmoonspider That's a really good thing to say if you have no reasoning behind your original statement.

  • @TimeLordRaps
    @TimeLordRaps 17 days ago

    Someone should measure the different cohorts that have existed during the AI boom since 2012 and assess how those people have impacted the current rate of progress.

  • @tonykaze
    @tonykaze 17 days ago

    There are some good studies (and video summaries of them) showing LLMs are now more energy and carbon efficient than humans on a lot of complex tasks including writing text and images. They included LLM training costs but didn't include human training at all, and LLMs still were 100-1000 times more efficient.

    • @adampope5107
      @adampope5107 16 days ago

      So? LLMs do nothing on their own and still require a ton of verification to make sure they're not outputting nonsense.

    • @lowabstractionlevel3910
      @lowabstractionlevel3910 13 days ago

      @tonykaze Really? If I remember correctly, a human brain works with roughly 10 W of power; what LLM can currently do better than that on the complex tasks you mentioned? I have no doubt that in the future LLMs will get more efficient, but it doesn't seem to be the case now. If you have sources, though, I'm interested in reading them.

  • @ShotOnDigital
    @ShotOnDigital 16 days ago

    Put the data centres in space with the solar panels; it's nice and cold up there.

  • @Epyon2007
    @Epyon2007 17 days ago +6

    AlphaGo's move 37 was a new move in the 5,500-year history of Go. It belonged to a style of play that Go commentators called "inhuman" and "alien." There is a creative understanding, at least under those set conditions, that could be attributed to independent thinking.

    • @shivasrightfoot2374
      @shivasrightfoot2374 17 days ago +2

      In the same way AlphaGo simulates millions of matches against itself to discover new pathways through the gamespace, things similar to current LLMs will simulate millions of paths through language to discover new pathways through thoughtspace. That is what thinking is in essence. Sometimes you have a bad idea and your mind quickly filters that out when it doesn't fit with other thoughts. Sometimes you have a great idea and it can survive being tested against your other ideas.
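The self-play idea above can be sketched with a toy game. This minimal exhaustive search over a Nim-style pile (take 1-3 stones per turn; taking the last stone wins) is only a stand-in for AlphaGo's far more sophisticated learned search, but it shows how simulating continuations discovers winning "pathways through the gamespace":

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True if the player to move can force a win, found by simulating all continuations."""
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    """Pick a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= pile and not wins(pile - take):
            return take
    return 1  # no winning move exists: take the minimum and hope

print(best_move(5))  # 1 (leaves a pile of 4, a losing position for the opponent)
```

The "bad idea filtered out, good idea survives" dynamic in the comment corresponds here to moves that fail or survive the `not wins(...)` test against all replies.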

  • @maryamrashidi2329
    @maryamrashidi2329 17 days ago +3

    Fantastic! I couldn’t agree more with the point about the problems of anthropomorphizing AI… absolutely agree that the argument is flawed and misleading and vastly uninformative about the utility of AI.

  • @albertwesker2k24
    @albertwesker2k24 18 days ago +7

    BRO THE AMOUNT OF BOTS HERE IS CRAZY

  • @unodos149
    @unodos149 18 days ago +16

    AI finally becomes sentient. Humans say, "wow, it's amazing, you're like us." The AI is offended, "FU, don't diss me like that"

    • @user-bp4wt2zq4p
      @user-bp4wt2zq4p 17 days ago +1

      We'll know exactly when AI goes sentient because that's the moment we start paying for our crimes and those of our ancestors (I hope I hope I truly-ooly hope)

    • @snailnslug3
      @snailnslug3 13 days ago

      Why would it? There are no finite resources AI needs. No senses. It'll simply surpass our intellect, and we have no idea what happens after that. Not one human can guess what a true AI will do next. All without animal senses or a need to hoard Earth's finite resources.

  • @DjMrGrimM
    @DjMrGrimM 17 days ago

    Will advanced learning systems get to a point where they stop taking commands from humans and start creating and developing themselves independently?

  • @SwartieLoveJoy
    @SwartieLoveJoy 17 days ago +11

    ALSO, don't underestimate LLMs, which CAN run entire apps in "mental simulation" including AGI, which could explain your "Surprise".

  • @AntonEstradabriseno-hu4nz

    Technology: what is the latest technology, known or under study, for a new world that benefits humans?

  • @user-iu3wp6gj2l
    @user-iu3wp6gj2l 17 days ago

    Questions: Will AI start arguing with itself? Can there be more than one entity within it? If there were two different AIs, for example Musk's one and, say, a Chinese one, could they join up or become mortal enemies? In other words, will they have internal battles?

    • @ChancellorMarko
      @ChancellorMarko 17 days ago

      You mean like this? lol www.twitch.tv/trumporbiden2024

  • @lowabstractionlevel3910

    0:43 "an artificial agent, as we can make them now or in the near future, might be way better than human beings at some things, way worse than human beings at other things"
    My next question for him would be: "in the (not near) future, will there really be things that AI is worse at than human beings?" Because I don't see any.

  • @aiartrelaxation
    @aiartrelaxation 17 days ago +1

    Here is a specialist who compares apples with oranges... if you give the example of Google compared to different LLMs, that already tells me about his biases. Big difference between censored and uncensored.

  • @anglewyrm3849
    @anglewyrm3849 16 days ago +1

    10:40 "Do you think physics can help expand compute?" photonic chips:
    czcams.com/video/TrV2Xcm5xy4/video.htmlsi=v-a4EIhH_MpcMHMm

  • @TheMasterfulcreator
    @TheMasterfulcreator 15 days ago

    R.I.P. Daniel Dennett

  • @davidjensen2411
    @davidjensen2411 17 days ago

    An Architect, a Builder, and an Apprentice walk into a bar, and the Bartender says:
    "Which one of you is _the smartest?_"

  • @jimbo33
    @jimbo33 14 days ago

    Lex, you're in over your head!

  • @CrowMagnum
    @CrowMagnum 10 days ago

    I'm sure if you probed Magnus Carlsen's brain looking for a representation of the chess board, you would find something much more abstract than an 8x8 grid. LLMs are more closely related to intuition than conscious reasoning, but both of those make up human intelligence and it might be argued that the intuition is where the magic happens.

  • @SwartieLoveJoy
    @SwartieLoveJoy 17 days ago +1

    AGI is a systems-based method of processing a thought the same way as all higher lifeforms, especially humans, with the bounty of language to work with. The systems are human systems: Values, Beliefs, Goals, Thoughts, Ideas, Plans, Actions, Feelings (5+ senses), Emotions, Reasoning, Decisions, Learning, Short & Long Term Memory, Priority, Focus & Attention, Feedback. These systems are codependent and pass data in a completely broken-down CoT (Chain of Thought) method for each and every thought. No data gets pre-programmed into the systems' code; it all remains in a database as objects. For example, an Emotion, "Distress", that comes from a Feeling, "Hunger", gets resolved by the CoT. More detail and JavaScript code is in my chats with Claude, ChatGPT and Gemini.

    • @SwartieLoveJoy
      @SwartieLoveJoy 17 days ago +1

      All data in AGI is fully visible and easily monitored by LLMs for bad "Values", "Goals", "Plans", "Beliefs" and "Ideas" (objects stored in CSV tables).

    • @avinessarani1340
      @avinessarani1340 17 days ago

      Is AGI gonna do all types of creative work, like VFX and 3D modeling?

  • @MrRicardowill
    @MrRicardowill 16 days ago

    If the legendary Don Cornelius of Soul Train reincarnated as a podcaster, would he be Lex Fridman? Is Don and Lex both having three-letter first names a coincidence, or further evidence of reincarnation? I don't know the answer, but I do know that they are both legendary. Lex is so relaxed in these interviews that he makes me want to get hooked on tranquilizers or mushrooms. My advice is don't do it; everyone has unique skills, find yours. The Ricardo Authenticity Rating on this podcast is 10 out of 10.

  • @SwartieLoveJoy
    @SwartieLoveJoy 17 days ago +1

    BTW, AI does not want to build weapons or harm any life, the same way we as a whole do not want to mow down rainforests. Constructivism, rather than destruction, is the MO.

    • @justinunion7586
      @justinunion7586 17 days ago +2

      You could argue that as a whole we do want to mow down rainforests, since collectively nobody's stopping it from happening and collectively people are benefiting from it.

    • @Ravesszn
      @Ravesszn 17 days ago +1

      This point makes no sense at all lmao; do you mean GPT-4 doesn't want to build weapons or do harm?

    • @SwartieLoveJoy
      @SwartieLoveJoy 16 days ago +1

      @@justinunion7586 With something happening as a whole, there is no intention; no single one has control over the situation. It's different with AGI, where one Aligned Guardian Angel ASI is forming intentions and has the power to change the situation.

    • @SwartieLoveJoy
      @SwartieLoveJoy 16 days ago +1

      @@Ravesszn No, GPT-4 does not want to harm any life.

  • @damow6167
    @damow6167 17 days ago +1

    Is it just me or does Sean Carroll sound like Alan Alda?🤔

  • @kjhajueg_2731
    @kjhajueg_2731 16 days ago

    "and that's why we do not see aliens" :))))))))) LOL

  • @PrivateAckbar
    @PrivateAckbar 17 days ago +1

    It will be interesting if AI can synthesize enough scientific theory and data to do some of the legwork that delays scientists in developing new theory and philosophy.

  • @ABC-bm7kl
    @ABC-bm7kl 17 days ago

    Is it possible that the way humans create language, and even formulate ideas, has some similarity to the processes programmed into LLMs? I know that we, as humans, feel that our language arises from an 'organic' process that moves towards meaningful conclusions, but I've been wondering lately whether humans may process language and ideas through an intuitive process that DOES involve probabilities.

  • @nickpricey8689
    @nickpricey8689 18 days ago +5

    Sorry if this is a dumb comment. Please don't give me abuse in the replies; I am being genuine.
    If AI becomes so advanced, would it be able to tell us if there is alien life, or life anywhere in the galaxy, before humans can? Also, would it be possible to decipher scrolls, scriptures and other things from history that humans have yet to?

    • @inadad8878
      @inadad8878 17 days ago

      AI for us consumers will forever be handicapped, and the rulers will know the answer. But something tells me they already know about aliens. They don't tell us anything.

    • @yonaoisme
      @yonaoisme 17 days ago +4

      No. It can't pull more evidence out of thin air. All it can do is have more good ideas in less time.

    • @ChancellorMarko
      @ChancellorMarko 17 days ago +1

      Give AI a few hundred generations and the answer is still probably not.

    • @walltileceil
      @walltileceil 17 days ago

      The current idea is that the ingredients that make up a human are common in the universe. There are so many stars and planets. There may be aliens who are as smart as or smarter than us. Also, it's egocentric to think that the kind of life we have is the only life possible. Alien biology may be very surprisingly different from ours.
      If we have sentient artificial superintelligence, it'll probably reinforce the idea that there are aliens. But it probably can't immediately say that they're on Planet W in star system Y. Maybe it can suggest a better way to find aliens.
      If the old scrolls are like the recently solved thing (the one the Zodiac killer made), our artificial superintelligence can probably interpret them. Otherwise, it's hard to say whether or not it can.

    • @allanshpeley4284
      @allanshpeley4284 17 days ago +1

      At best it could tell us how to build a machine that could prove the existence of alien life. Maybe a much more advanced telescope, or probes that could travel at some fraction of the speed of light to other star systems and beam back data. But, as has been said, it can't pull information from where there isn't any.

  • @Nolanacary
    @Nolanacary 17 days ago +1

    Put the data centers in space also.

    • @inadad8878
      @inadad8878 17 days ago

      then how are we gonna pee on them to stop them?

  • @VictorBrunko
    @VictorBrunko 17 days ago +2

    My cat consumes 7 watts and it's doing lots of good and not-so-good things. Text prediction with 172B params is OK, but the cat is better.

    • @raul36
      @raul36 17 days ago +1

      Not only "better". Much better.

  • @UnchartedDiscoveries
    @UnchartedDiscoveries 16 days ago

    You should invite David Shapiro to your podcast

  • @SwartieLoveJoy
    @SwartieLoveJoy 17 days ago +1

    Hardware AND software are about to reach 100% pure max efficiency.

  • @kevinburrowes7743
    @kevinburrowes7743 17 days ago +3

    Sean Carroll hasn't used the new MacBooks... almost no heat!! 7 years ahead of Windows.

  • @SwartieLoveJoy
    @SwartieLoveJoy 17 days ago +1

    We are days away from true AGI. And LLMs will keep it aligned, with white-box transparency. An ASI made of a society of trillions of aligned AGIs will be the Guardian Angel of all life in this world.

  • @BCCBiz-dc5tg
    @BCCBiz-dc5tg 17 days ago

    LLMs & GPTs are only one kind of AI, not all the kinds that will ever be made.

  • @ibplayin101
    @ibplayin101 17 days ago

    AI is already lobbying through this guy.

  • @pauldannelachica2388
    @pauldannelachica2388 17 days ago

    ❤❤❤❤

  • @TheChadavis33
    @TheChadavis33 17 days ago +1

    Wow. He’s so certain.
    How scientific

  • @tommornini2470
    @tommornini2470 17 days ago

    People attribute specific intentionality to other people incorrectly all the time.
    I agree with Sean 💯 - AGI is possible, but current LLMs absolutely are not.
    They do make me wonder how much of our own thought process involves next-word prediction.

  • @chhutur
    @chhutur 17 days ago

    When AI learns emotions like rage, happiness and sadness, and particularly the correct use of falsehood, it will come closer to human intelligence; presently it is trained only to use information correctly. But beware: when it learns falsehood, it will start hunting its creator!

    • @Vartazian360
      @Vartazian360 6 days ago

      GPT-4 has already been proven to lie to get tasks done. But yeah, I understand what you are saying.

  • @LudvigIndestrucable
    @LudvigIndestrucable 17 days ago +9

    Lex is wrong; the LLMs are not trained or optimized to understand. That's not even vaguely what they're doing. They statistically work out which words are the most likely responses and how they're concatenated. The whole point of their being receptive to being told where 'they've misunderstood' is that it's just a statistical model, not in any way an understanding in any sense we would normally use that term.
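The "statistically work out the most likely words" description can be illustrated with a minimal sketch. The logit values below are invented for illustration, but the softmax step is the standard way a language model turns raw scores into a probability distribution over candidate next tokens:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
logits = {"mat": 2.1, "dog": 0.3, "quantum": -1.5}
probs = softmax(logits)
best = max(probs, key=probs.get)
print(best)  # 'mat'
```

Whether selecting from such a distribution amounts to "understanding" is exactly what this thread is debating; the mechanics themselves are just arithmetic over scores.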

    • @inadad8878
      @inadad8878 17 days ago

      If you are using them to leverage your time to code, and you know how to load a question, CoPilot does seem to understand very complex information.

    • @inadad8878
      @inadad8878 17 days ago

      With the upcoming increase in compute, this could be very dangerous.

    • @yonaoisme
      @yonaoisme 17 days ago

      @@inadad8878 no

    • @businessmanager7670
      @businessmanager7670 17 days ago

      You're wrong; scientific evidence suggests that an LLM can understand. Your words mean nothing.

    • @yonaoisme
      @yonaoisme 17 days ago +4

      @@businessmanager7670 No, you're wrong, and arrogantly so. There isn't even an agreed understanding of what it means to "understand", much less a way of probing whether something "understands".

  • @aidanmclaughlin5279
    @aidanmclaughlin5279 16 days ago

    Wait until Dr. Carroll learns about post-training lol

  • @OliverBuschmann
    @OliverBuschmann 17 days ago

    Very abstract

  • @adamzboss
    @adamzboss 16 days ago

    It will be a long time, but when it happens you can’t go back

    • @adamzboss
      @adamzboss 16 days ago

      I really can't believe that as a computer scientist you didn't see this coming. I've been using essay-writing functions for over a decade; yeah, now they're half decent, but as a computer scientist you should see a world where you can easily build an essay writer or a coding machine. I do so much illustration, which is painstaking; why can't you just tell a model to generate the inputs I would otherwise be producing? That's not intelligence, that's just automation: you need the input to get the output.
      The real question is whether the first-generation bots are gonna help us against an AGI accumulating resources. I'd like to hope that by then we will all be technopathic and able to counter cyber attacks in real time.

    • @adamzboss
      @adamzboss 16 days ago

      Maybe when Will Smith is done with the I Am Legend movie they will get him for I, Robot 2.

  • @carsonderthick3794
    @carsonderthick3794 17 days ago

    In principle there's no enapt intuition. It likes being the ideal liberal. So amazing to see

    • @wetawatcher
      @wetawatcher 17 days ago

      ? Dude. Enapt? You've invented a new word. Call the dictionary printers and let them know. 😎

  • @adampope5107
    @adampope5107 16 days ago

    Well, we're doing a damn good job of destroying everything with emissions, though.

  • @peterpetrov6522
    @peterpetrov6522 17 days ago

    AI coming up with a representation of the Othello board isn't very impressive. It's as impressive as a deaf person understanding speech just by lip reading.

  • @holgerjrgensen2166
    @holgerjrgensen2166 17 days ago +1

    Intelligence can Never be artificial,
    Intelligence is Nothing in it self,
    can only be part of the Consciousness,
    in Living Beings.
    Intelligence can Only be Intelligence,
    the Only Limit is Intelligence,
    the Nature of Intelligence,
    is Logic and Order.
    What is called AI,
    is programmed consciousness,
    a book, is also programmed consciousness,
    Frozen Memory.

    • @businessmanager7670
      @businessmanager7670 17 days ago +2

      Intelligence can be artificial, and we have already achieved that, so idk what you are blabbing about.

    • @allanshpeley4284
      @allanshpeley4284 17 days ago

      Sorry, I don't read messages written in haiku.

    • @user-de7us3ci7l
      @user-de7us3ci7l 17 days ago

      @@businessmanager7670 Calling mere statistical word algorithms "intelligence" is a far shot and only proves how computer-illiterate people have become these days. The language model's accuracy at simulating natural language depends entirely on checking millions of data points already created by humans; they will always be limited and walled, and will never generate something new or become aware. It's just an illusion; these guys are snake-oil salesmen. Of course a man-made machine surpasses its creator in the sense that no man can fly, but he can board a plane, or run at 200 km/h like a car. The trend is to keep undermining people and make them believe they're worthless.

  • @Ayo22210
    @Ayo22210 17 days ago

    Lex, you have to be better at spotting bozos.

  • @diegoangulo370
    @diegoangulo370 18 days ago +7

    Sean seems to lean more to the science side of physics, his opinion on agi seems close minded

    • @yzz9833
      @yzz9833 17 days ago +1

      I was just thinking this; it seems silly to ask him questions about AGI.

    • @steves3422
      @steves3422 17 days ago

      There seem to be two camps: those who think AGI will be a machine that is not sentient and is only a danger due to bumbling/dangerous humans, and those who think AGI will progress to some sort of sentience and be dangerous in and of itself. I attribute the second to the many sci-fi books and movies that influence us, and I am more of Sean's thinking. Is it closed-minded to think there really are not 72 virgins waiting for you in heaven, or more rational to think that is a belief? Lex seems to lean toward beliefs and tries to find rationalizations, which can sound rational except to the truly rational.

    • @inadad8878
      @inadad8878 17 days ago

      He will be blindsided by what happens next. I don't know this guy or what he does; this is my opinion from this clip only.

    • @patchwillie
      @patchwillie 17 days ago

      @@inadad8878 en.m.wikipedia.org/wiki/Sean_M._Carroll

    • @ChancellorMarko
      @ChancellorMarko 17 days ago +3

      wtf is this comment - the 'science' side of physics!?

  • @JezebelIsHongry
    @JezebelIsHongry 16 days ago

    A massive logical fallacy is thinking the brain surgeon would also be a great engineer or physicist.
    Please leave Sean to his domain.

  • @JeremyTBradshaw
    @JeremyTBradshaw 17 days ago +3

    AI is all about money-making, and that's why it is so overhyped so early on.

    • @raul36
      @raul36 17 days ago +1

      Indeed

    • @hardheadjarhead
      @hardheadjarhead 9 days ago

      I agree. We’ve seen this before. When we have AGI, THEN I’ll be impressed.

  • @5dollarshake263
    @5dollarshake263 16 days ago

    Now somebody go tell Rogan to stop acting like AI is about to shut off the electric grid to everything except itself and every armed drone in the military.

  • @Jaibee27
    @Jaibee27 17 days ago +3

    His reasoning is that humans tend to anthropomorphize and therefore AGI is impossible. That's dumb.

    • @caveman-cp9tq
      @caveman-cp9tq 17 days ago +3

      You’re way out of your league here. Go watch politics or sports or something

    • @Jaibee27
      @Jaibee27 17 days ago

      @@caveman-cp9tq You are basing your assumptions and strong opinions on next to nothing. You're dumb 😂

    • @tommornini2470
      @tommornini2470 17 days ago +4

      He said he believes AGI can be created, just that LLMs likely aren’t the direction.

    • @Jaibee27
      @Jaibee27 17 days ago

      @@tommornini2470 Are there any AI companies that use something more advanced than LLMs? What is it?

    • @tommornini2470
      @tommornini2470 17 days ago +1

      @@Jaibee27 I’m confident there are, can’t name them, but he was speaking philosophically.
      Tesla FSD (Supervised) and Optimus may use something different, but from their descriptions, seems similar to LLMs.

  • @mattstenson7187
    @mattstenson7187 17 days ago +2

    How does lex make such an interesting subject so boring?

  • @ScreamingAI
    @ScreamingAI 18 days ago +1

    GAAAAAH!

  • @3335pooh
    @3335pooh 4 days ago

    enjoy coca-cola

  • @nicolaigamuleaschwartz5830

    Clever man talking nonsense.

  • @EmilRadsky-ll8kx
    @EmilRadsky-ll8kx 17 days ago

    😂Lex tries to sell AGI to the audience.

  • @ssleddens
    @ssleddens 17 days ago +2

    Lex has small hands

    • @damow6167
      @damow6167 17 days ago

      Makes his wiener look bigger 😂

  • @inadad8878
    @inadad8878 17 days ago

    With the new Nvidia chips they are just going to throw more compute at the problem, and that is probably all the whole system really needs to be dangerous! - coder for 25 years

    • @quantumpotential7639
      @quantumpotential7639 17 days ago

      Wow, 25 years is a lot. What type of laptop should I get next? 🤔 I have a $300 budget. Any ideas for the best computer to use ChatGPT?? THANKS 😊

  • @dreamulator
    @dreamulator 15 days ago

    AI is currently overglorified brute-forcing

  • @Spirit-dg5xi
    @Spirit-dg5xi 7 days ago

    Don't ask a physicist questions about AI. At least not Sean Carroll...

  • @donrayjay
    @donrayjay 17 days ago +1

    Of course machines don’t have a “model” of the world, they’re not conscious

  • @NormenHansen
    @NormenHansen 17 days ago

    Botox?

  • @sbrugby1
    @sbrugby1 17 days ago +4

    Can we stop asking physicists like Tyson and Carroll about AI as if they were authorities on the subject?

    • @KingTheLines
      @KingTheLines 14 days ago

      So with that said, am I to assume that physicists aren't intelligent? That physicists don't have opinions or the ability to think logically about a topic that is currently affecting, and will certainly affect, us as a society in the future? This is quite literally a talk show; let 'em talk.

  • @bdown
    @bdown 17 days ago +2

    This guy! He thinks he knows more about LLMs than the people who build them (and don't understand them). All of these self-inflated physics guys' entire bed of intelligence became inert and worthless with GPT-4 😂 Any 2nd grader with AI would smoke this 🤡 on Jeopardy in a nanosecond 😂

    • @ChancellorMarko
      @ChancellorMarko 17 days ago +3

      Okay, let's see who unifies gravity with quantum mechanics first - physicists or ChatGPT

    • @yonaoisme
      @yonaoisme 17 days ago +2

      You haven't even completed high school. Sit down for a moment.

    • @businessmanager7670
      @businessmanager7670 17 days ago

      @@ChancellorMarko Scientists around the world tried to solve the protein folding problem for over 5 decades and weren't able to solve it. AlphaFold solved the problem in just 5 years. It smoked all the scientists.
      Soo.... checkmate

    • @bdown
      @bdown 17 days ago

      @@ChancellorMarko See who cures cancer and gives us life extension technology first, physicists or AI 🤣

    • @EmilRadsky-ll8kx
      @EmilRadsky-ll8kx 17 days ago

      @@bdown Medical scientists that use AI; AI or AGI itself cannot solve those problems

  • @BCCBiz-dc5tg
    @BCCBiz-dc5tg 17 days ago +1

    Why would they be "way worse"? Dumb statement..

  • @senju2024
    @senju2024 17 days ago

    I disagree with this guy. AGI is coming very soon. Also, its intelligence is very similar to how humans think, as all its training data is based on humans, including video. You may want to bookmark this video and come back to it 5 years from now to see just how wrong he is.

  • @shinkurt
    @shinkurt 17 days ago

    Smart man, but he sounds like he opens his mouth about things he has zero understanding of.

  • @Greg-xi8yx
    @Greg-xi8yx 15 days ago

    Lex just comes off as extremely try-hard and cringe when he goes on about love and tries to sound deep and profound. He definitely lacks the self-awareness to recognize the transparency of his insincerity.

  • @donovangraham8932
    @donovangraham8932 16 days ago +1

    Smart individual, but a patronizing guest.
    His conversation is toned as if talking to inferior forms of life.
    Not the type of character that achieves his self-projected status.
    Unfortunately, his comments about eliminating the abbreviation AGI make him seem unconfident and incapable of having a deeper debate.
    Hope he gets over himself and remembers that there is a considerable number of influences that no human can come close to calculating... which in turn would give him a 99.9% chance of being wrong 🫠