Debunking the great AI lie | Noam Chomsky, Gary Marcus, Jeremy Kahn

  • Added 2 Jun 2024
  • The father of modern linguistics, Noam Chomsky, joins scientist, author and entrepreneur Gary Marcus for a wide-ranging discussion that touches on why the myths surrounding AI are so dangerous, the inadvisability of relying on artificial intelligence tech as a world-saver, and where it all went wrong.
    Please note, due to connection issues, the first few seconds of Noam's audio were not recorded.
    00:00 - Intro from Paddy Cosgrave
    01:21 - Opening with Jeremy Kahn
    02:36 - Noam Chomsky
    05:50 - Gary Marcus
  • Science & Technology

Comments • 1.7K

  • @TJ-hs1qm
    @TJ-hs1qm 1 year ago +247

    We can't let a bunch of hyper-rich guys deploy whatever tech they like into society, let them keep the profits, and leave society with the consequences. Privatized profits and socialized losses need to stop.

    • @2manystories2tell43
      @2manystories2tell43 1 year ago +4

      @T J You hit the nail on the head!

    • @travisporco
      @travisporco 1 year ago +12

      By the logic of the free enterprise system, once people can no longer contribute in the fair and free competition of the market, they must rely on charity or perish. All people will in the coming decades be obsolete, unable to compete with rising AI. The existing system must therefore be abolished before it is too late.

    • @jonaseggen2230
      @jonaseggen2230 1 year ago +7

      It's called neo or corporate feudalism

    • @Mr_Sh1tcoin
      @Mr_Sh1tcoin 1 year ago +4

      Spoken like a true Marxist

    • @47f0
      @47f0 1 year ago +2

      Yeah, people have been saying that since the 1860s.
      Cornelius Vanderbilt said, "Hold my beer"

  • @AzorAhai-zq9sw
    @AzorAhai-zq9sw 1 year ago +113

    Impressive that Noam remains this sharp at 94.

    • @numbersix8919
      @numbersix8919 1 year ago +10

      He has very good reserve capacity.

    • @ivanleon6164
      @ivanleon6164 1 year ago +13

      the real white mage. amazing. huge respect for him.

    • @maloxi1472
      @maloxi1472 1 year ago +1

      @@numbersix8919 huh... what ?

    • @numbersix8919
      @numbersix8919 1 year ago +4

      @@maloxi1472 I mean his brain still functions very well even with great age.

    • @dewok2706
      @dewok2706 8 months ago +3

      @@maloxi1472 he meant that he's the great white hope

  • @Hunter-uz9jw
    @Hunter-uz9jw 1 year ago +196

    bro was 21 years old in 1949 lol. Amazing how sharp Noam still is.

    • @numbersix8919
      @numbersix8919 1 year ago +32

      Just imagine how sharp he was in 1957 when he single-handedly saved experimental psychology.

    • @bluebay0
      @bluebay0 1 year ago +2

      @@numbersix8919 Do elaborate please.

    • @numbersix8919
      @numbersix8919 1 year ago +26

      @@bluebay0 Chomsky's response to B.F. Skinner's book "Verbal Behavior" utterly destroyed any possible behaviorist theory of language.
      Behaviorism had dominated experimental psychology so thoroughly up to that point that "mind" had become a dirty four-letter word in psychology.
      Afterwards, not only linguistics but also psychology, philosophy, and new fields such as AI and cognitive science were free to take up the study of mental processes.
      You can read it today easily enough; it was published as "A Review of B. F. Skinner's Verbal Behavior" by Noam Chomsky.

    • @bluebay0
      @bluebay0 1 year ago +2

      @@numbersix8919 Thank you. I wondered if it was his proving Skinner wrong about behavior and language acquisition. Thank you again.

    • @numbersix8919
      @numbersix8919 1 year ago +1

      @@bluebay0 Yup that was it!

  • @TommyLikeTom
    @TommyLikeTom 1 year ago +51

    "they can draw pretty pictures but they don't have any grasp of human language" for some reason I felt personally attacked by that

    • @kot667
      @kot667 1 year ago

      Maybe because it's horse shit LOL

    • @artbytravissmith
      @artbytravissmith 1 year ago +8

      It can and it can't. Midjourney slays at generating simple albeit impressive images ('zombie Spiderman', 'Thor in Pixar style', 'human settlers on Mars in the style of Norman Rockwell'), but once you begin to describe complicated illustrations with multiple characters in different emotional states, with specific likenesses to specific people, wearing specific coloured costumes (and you want consistency panel to panel), performing specific actions with specific camera/viewpoint angles, with characters in specific parts of the composition, it struggles. You can generate those emotions and 'actors' separately, but you still need Photoshop to combine them into a complete image. While I guess I should assume Dalle2/Stable Diffusion/Midjourney will get there, after watching this presentation, and after 20k images generated in Midjourney and noticing its sometimes frustrating limitations, I do begin to wonder if AI art models' lack of language understanding will mean they'll be stuck at 75%. My thought is that the first company to combine Dalle2/Midjourney/Stable-style prompting with Nvidia Canvas-like editability/interactivity will make a much more powerful and efficient tool just by embracing the human brain.

    • @Bisquick
      @Bisquick 1 year ago +5

      @@artbytravissmith Exactly, as discussed and succinctly put by Sartre to pose the necessity of existential consideration, existence _precedes_ essence. Without any critical consideration of meaning, quite simply: garbage in, garbage out.
      The only "danger", as also discussed, lies in believing it is anything else, that it is "objective" or actually understanding anything ie "intelligent". But of course the _only_ political question is: cui bono? Who benefits? So we can ask "danger _for whom_ ?", which reveals that for some this Mechanical Turk can produce an artifice of an organizing principle of truth/understanding/value ie "god" that "just so happens" to justify the already existing power structure, regardless of the intentionality towards this, as we can see with these psycho silicon valley billionaires and "effective altruists", many of which are unironically labelling themselves as "secular Calvinists". The divine right of "AI" is then but a technological coat of paint over the "divine right of kings/the market/the entrepreneur".
      _“The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.”_ - some guy

    • @huveja9799
      @huveja9799 1 year ago +1

      @@Bisquick Well, there are different layers of meaning. The most superficial is that of the statistical correlations between the symbols, which does not take away from the fact that the tool is surprisingly useful at that superficial level.
      As far as I know, there are people who establish their own power structure by claiming that there is no Truth, and that it is not possible to define objectivity based on successive approximation to that Truth (understanding). In that case, the only political question is who benefits from these power structures based on sophistry and language games. I suppose it is mediocre people who are incapable of creating something new, and are condemned, like a Large Language Model (LLM), to generate a simulacrum of knowledge at that superficial level of meaning. That does not mean they cannot do significant damage in society, especially by the corruption of younger and therefore vulnerable minds.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 1 year ago +1

      @@artbytravissmith Midjourney, Dalle2, Stable Diffusion, ChatGPT 4, and the best AI we have right now is still primitive compared to what is coming within 5 to 10 years. Right now they are more like super genius level cockroaches and already some people are having trouble telling they are not human. This panel does not seem to understand what is coming with AI development.

  • @willboler830
    @willboler830 1 year ago +205

    Been working on AI since 2015, and I'm kind of tired of the trend that models are heading right now. We just add more data and more parameters, and at some point, it's just memorization. Humans don't work like that. I used to support the pragmatism of narrow AI, but honestly, I'm with Gary Marcus on this.

    • @MrAndrew535
      @MrAndrew535 1 year ago +4

      "But as long as you enjoyed the video and you enjoy having your say, that's all that counts!."

    • @MrAndrew535
      @MrAndrew535 1 year ago +8

      Also, whatever you have been working on, it has nothing to do with "intelligence", artificial or otherwise. "Intelligence" is an existential proposition, not a technical one, as demonstrated by the fact that you lack the intellectual tools to define it. Therefore, if you cannot define it, then by what stretch of the imagination could you possibly be working on it?

    • @0MVR_0
      @0MVR_0 1 year ago +11

      @@MrAndrew535 This is correct yet also obtuse.
      A definition demands extrapolation, as in de-finitum.
      Intelligence, as you said, is inherently introspective.
      You are asking another to accomplish an impossible task.

    • @numbersix8919
      @numbersix8919 1 year ago

      Right on. You certainly got an odd and objectionable response, didn't you? That's what happens when you try to *leave a cult*.
      Anyway, if your interest is piqued, go back to school and, if you are brave, get into REAL cognitive science. Developmental psychology! Psycholinguistics! There's a world out there to discover!!!!
      There may be modules in the human brain that do stupid "narrow AI" calculations...but nobody knows yet.
      The kicker is that neurons aren't simple nodes, they are quite complex, maybe as complex as we used to think the entire brain is...but nobody knows yet.
      Just remember, cognition is a feature of living organisms. You know, embodied. I think the octopus with its distributed cognition is the best model. Its arms are to some extent entities unto themselves. Our minds are similarly compartmentalized, I just think the octopus would be easier to study in some simple and straightforward ways. You already know how smart they are. And I can't think of a better helper robot than an octopoid.
      Best of luck to you young Will.

    • @Bisquick
      @Bisquick 1 year ago +24

      Exactly, as discussed and succinctly put by Sartre to pose the necessity of existential consideration, existence _precedes_ essence. Without any critical consideration of meaning, quite simply: garbage in, garbage out.
      The only "danger", as also discussed, lies in believing it is anything else, that it is "objective" or actually understanding anything ie "intelligent". But of course the _only_ political question is: cui bono? Who benefits? So we can ask "danger _for whom_ ?", which reveals that for some this Mechanical Turk can produce an artifice of an organizing principle of truth/understanding/value ie "god" that "just so happens" to justify the already existing power structure, regardless of the intentionality towards this, as we can see with these psycho silicon valley billionaires and "effective altruists", many of which are unironically labelling themselves as "secular Calvinists". The divine right of "AI" is then but a technological coat of paint over the "divine right of kings/the market/the entrepreneur".
      _“The ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society, is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production, so that thereby, generally speaking, the ideas of those who lack the means of mental production are subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.”_ - some guy

  • @Epicurean999
    @Epicurean999 1 year ago +47

    I wish really good health for Mr. Noam Chomsky Sir🙏❤️🙏

  • @rajmudumbai7434
    @rajmudumbai7434 1 year ago +44

    Real AI that is sensitive to human problems doesn't scare me. But blind faith of many in flawed AI and going too far with it scares me as it could lead humanity astray into a point of no return.

    • @nathanielguggenheim5522
      @nathanielguggenheim5522 1 year ago +6

      Oligarchs using flawed AI against mankind scares me the most.

    • @oldtools6089
      @oldtools6089 1 year ago

      @@nathanielguggenheim5522 is it really so bad if all the fat-cats really want is to keep their people chubby?
      The price of peace is the low price of bread.

    • @cathalsurfs
      @cathalsurfs 1 year ago +5

      There is no such thing as "real" AI. Such a concept is an oxymoron and utterly contrived (by humans in their limited capacity).

    • @oldtools6089
      @oldtools6089 1 year ago +3

      @@cathalsurfs general AI is what most would consider real.

    • @KassJuanebe
      @KassJuanebe 1 year ago

      @@oldtools6089 Intelligence can't be artificial. Intellect maybe. Consciousness and intelligence, NO!

  • @riccardo9383
    @riccardo9383 1 year ago +247

    Noam Chomsky brings a breath of fresh common sense to the AI discussion, with his immense knowledge of linguistics. Thank you for this interview.

    • @MrAndrew535
      @MrAndrew535 1 year ago +9

      Define "common sense"!

    • @blackenedblue5401
      @blackenedblue5401 1 year ago +9

      Also just his immense knowledge of computing; he definitely understands it better than most speaking at Web Summit

    • @restonthewind
      @restonthewind 1 year ago +6

      A language model could have generated this comment.

    • @grant4735
      @grant4735 1 year ago +3

      @@MrAndrew535 ask your computer to do that....

    • @kot667
      @kot667 1 year ago +3

      @@grant4735
      Me: Define "common sense"
      ChatGPT: Common sense is a term used to describe a type of practical knowledge and understanding of the world that is shared by most people. It is not based on specialized training or education, but rather on the general experiences and observations that people have in their everyday lives. Common sense allows people to make judgments and decisions about everyday situations, and it often helps them to solve problems and navigate complex social situations. Some people are said to have a good sense of common sense, meaning that they are able to apply their practical knowledge and understanding in a way that is useful and effective.

  • @octavioavila6548
    @octavioavila6548 1 year ago +145

    Chomsky’s argument is that AI will not help us understand the world better but it will help us develop useful tools that make our life easier and more efficient. Not good for science directly, but still good for quality of life improvements and it can help science indirectly by producing tools that help us do science.

    • @totonow6955
      @totonow6955 1 year ago +10

      Unless it just drops grandpa.

    • @0MVR_0
      @0MVR_0 1 year ago

      @totonow6955 at least it did so with trillions of parameters, so you know legal can argue that grandpa deserved and needed a premature 'termination'.

    • @totonow6955
      @totonow6955 1 year ago

      @@0MVR_0 vampires

    • @moobrien1747
      @moobrien1747 1 year ago

      Oh wow, Howard Hughes really IS alive...

    • @sixmillionsilencedaccounts3517
      @sixmillionsilencedaccounts3517 1 year ago +18

      "it will help us develop useful tools that make our life easier and more efficient"
      Which doesn't necessarily mean it's a good thing.

  • @dan_taninecz_geopol
    @dan_taninecz_geopol 1 year ago +66

    The misunderstanding here is the idea that deep nets are being trained to be conscious, which isn't accurate. They're being trained to mimic human judgement and/or recognize patterns or breaks in patterns.
    The machine isn't trained to be independently generative of novel information. We shouldn't be surprised that it can't do that yet.
    More important than the strong AI debate, which is still far off, are the social impacts these models will have on the labor market *today*.

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago +12

      It's anthropomorphizing by observers who don't understand how the technology works. It's very annoying to hear ML experts say things like "maybe it is sentient..."

    • @dan_taninecz_geopol
      @dan_taninecz_geopol 1 year ago +7

      @@GuaranteedEtern "Experts", and agreed.

    • @GuaranteedEtern
      @GuaranteedEtern 1 year ago +3

      @@dan_taninecz_geopol One of the big ones - either Microsoft or Google - literally said this exact thing a few days ago.

    • @snowleopard9749
      @snowleopard9749 1 year ago

      Real AI won't exist until these 'deep nets' are embodied in the world.

    • @brianmi40
      @brianmi40 1 year ago +9

      "The machine isn't trained to be independently generative of novel information."
      And yet it has been, unless you are discounting the need for a prompt for it to do anything at all other than sit idly. GPT-4 was able to propose a scientific experiment that has never been performed. It can create rhymes and poetry never written. This isn't simply "re-arranging" the works of others. The simple fact is that the ability to cross-reference roughly 1/10th of all human "knowledge" allows an LLM to assemble it in novel ways that humans have never considered, or at least have not yet done, and under the guidance of a breakthrough prompt it can deliver solutions we have never imagined.
      It's a similar activity to researchers in two fields running across each other's data and having a huge AHA moment from realizing how to combine the findings into a new, previously unconsidered solution to some problem.
      GPT-4 is able to pass more than 50% of the tests designed to judge sentience, including the Theory of Mind test, so we are much further along the path to sentience than most are aware.

  • @robertjones9598
    @robertjones9598 1 year ago +20

    Really cool. A much needed dose of scepticism.

  • @mirellajaber7704
    @mirellajaber7704 1 year ago +5

    I am reading all these comments, and I have to say that once more what strikes the eye is that people will always believe what they want to believe, no matter how much conferencing, summiting, etc., no matter who says what. People come with ready-made ideas, not with a curious mind seeking more, higher understanding. This stands true no matter the subject under discussion, but even more so when it comes to politics.

    • @bifrostbeberast3246
      @bifrostbeberast3246 5 months ago

      Yeah, when Chomsky talks about AI, whose inner workings he doesn't seem to know much about, I always cringe. He is so opinionated.

    • @no_categories
      @no_categories 3 months ago

      I've changed my mind many times in my life. What helps me to do it is information. I know I'm not alone in this.

    • @Grassland-ix7mu
      @Grassland-ix7mu 20 days ago +1

      That is an oversimplification. Many people want to know the truth, and so will definitely change their mind when they learn that they were wrong, whatever the topic

  • @-gbogbo-
    @-gbogbo- 1 year ago +6

    27:05 "Getting close [to solving the problem] does not really seem to solve the problem". That's so true! Thanks a lot.

  • @tonygumbrell22
    @tonygumbrell22 1 year ago +15

    We want AI to function like a sentient being, but we want it to do our bidding, e.g. "Open the pod bay doors, HAL."

    • @petergraphix6740
      @petergraphix6740 1 year ago +1

      This is called the 'AI alignment problem', and at this point not only is there no solution, every time we reassess the problem it becomes more insurmountable. I personally believe it is not solvable either. Humans generally fall under the same alignment issues (we're mortal, for example), and at least in theory an AI would be immortal if we're able to save its state and copy it to a new machine (or it's able to do that itself). If humans could copy ourselves into a new body, we would, so why would an AI not do that once we formulate artificial willpower and a desire for continued existence?

    • @tomtsu5923
      @tomtsu5923 1 year ago

      Don’t be negative

    • @tonygumbrell22
      @tonygumbrell22 1 year ago

      @@tomtsu5923 Let's just say I'm skeptical.

    • @daraorourke5798
      @daraorourke5798 1 year ago +1

      Sorry Dave...

  • @yuko3258
    @yuko3258 1 year ago +79

    Let's face it, the tech world grew too fast for its own good and is now operating mostly on hype.

    • @crystalmystic11
      @crystalmystic11 1 year ago +6

      So true.

    • @claudiafahey1353
      @claudiafahey1353 1 year ago +3

      Agreed

    • @jonatan01i
      @jonatan01i 1 year ago +1

      nope, gpt4 is very usable and is a magic tool for humanity to use

    • @debbY100
      @debbY100 1 year ago

      For ITS own good, or humanity’s own good?

    • @Happyduderawr
      @Happyduderawr 11 months ago +1

      @@debbY100 definitely more for its own good given the amount of wealth being funnelled into the industry

  • @_crispins
    @_crispins 1 year ago +9

    25:10 I learned it from Noam and he learned it from PLATO 😂 outstanding!

  • @littlestbroccoli
    @littlestbroccoli 1 year ago +9

    They're more concerned with notoriety and having articles written about their tech (because it draws investors, maybe?) than they are about the real science. This is definitely a problem and you can feel it in the output. Real science is exciting, it feels like exploring. Today's tech climate sort of feels like being stuck inside and told what's good for you when all you want to do is go out and ride your bike.

    • @gregw322
      @gregw322 1 year ago

      Incredibly stupid, useless comment. We’re making more breakthroughs than at any time in history. There will be more change in the next few decades than in all of recorded human history.

  • @witHonor1
    @witHonor1 1 year ago +55

    My problem with AI is that humans can't even pass a Turing test anymore. Technology has eliminated the minuscule amount of critical thinking humans used to be capable of; now they're just input/output machines.

    • @witHonor1
      @witHonor1 1 year ago +2

      @@MrAndrew535 Which program are you? Typical bot behavior to spam the comment section on a YouTube video.

    • @ChannelMath
      @ChannelMath 1 year ago +1

      @@witHonor1 what would be the point of this "Andrew" bot? just to claim that he already said what you said? Doesn't make sense. Also, if you've met humans, "spamming the comments section" is not atypical behavior when they are passionate. (I'm doing it now -- see you in the next comment Andrew!)

    • @witHonor1
      @witHonor1 1 year ago

      @@ChannelMath Beep boop, beep boop. Not explaining why bots are obvious because... Please "see" Andrew anywhere when you don't have eyes. Fun. Idiot. Green eggs and ham. Manifesto. Beat the prediction, trolls.

    • @Moochie007
      @Moochie007 1 year ago

      The axiom GIGO still applies.

    • @miraculixxs
      @miraculixxs 1 year ago +7

      @Lind Morn if you think capitalism has eliminated critical thinking you haven't seen socialism and dictatorship.

  • @vectorphresh
    @vectorphresh 1 year ago +1

    27:52 This is an interesting point, and if I recall, the folks over at OpenCog were working on this with their AtomSpace. I haven't kept up with their latest work, but I'll be sure to revisit it.

  • @havefunbesafe
    @havefunbesafe 1 year ago

    What does Noam mean when he says AI is too strong? Please enlighten me. Thanks. 18:30

  • @pomomxm246
    @pomomxm246 1 year ago +8

    Crazy that both of Gary's predictions came true so quickly, as someone was led to suicide by an amorous chatbot just this past month

  • @user-sy3dg1vk4x
    @user-sy3dg1vk4x 1 year ago +78

    Long Live Noam Chomsky 🙏🙏

    • @kot667
      @kot667 1 year ago +5

      Hopefully Noam will gain some common sense in his long years lol.

    • @lppoqql
      @lppoqql 1 year ago +3

      That might happen when someone puts together a system that is trained on all the content and speech by Chomsky.

    • @numbersix8919
      @numbersix8919 1 year ago

      @@lppoqql You don't really believe that, do you?

    • @SvalbardSleeperDistrict
      @SvalbardSleeperDistrict 1 year ago

      @@kot667 Do you at least realise how much of a self-exposition you are doing by vomiting a cretinous line like that?
      Absolute clowns littering comments spaces with brain vomit 🤡

    • @kot667
      @kot667 1 year ago

      @@SvalbardSleeperDistrict Someone is riding the D extra hard lol, I got nothing against Chomsky but his analysis of current technology is simply abysmal, other than that, don't have a gripe with him.

  • @BuGGyBoBerl
    @BuGGyBoBerl 1 year ago

    18:34? what did noam say there? or what does he mean?

  • @NB-fz3fz
    @NB-fz3fz 1 year ago +1

    A lot of the things Gary mentioned as lacking in these models have emerged as properties in GPT-4, for example theory of mind and understanding the importance of word order. Look up the "Sparks of AGI" paper from Microsoft Research, which fleshes this out more.
    I wonder if Noam or Gary have updated their beliefs post-GPT-4 and all the papers like Sparks of AGI, Reflexion, HuggingGPT, etc., that came out in the last few weeks. Does anyone know if they have spoken about this topic more recently?

  • @reallyWyrd
    @reallyWyrd 1 year ago +6

    Noam pointing out that AI training of a neural net largely amounts to "brute force" is interesting.

  • @georgeh8937
    @georgeh8937 1 year ago +7

    My gripe is that the terminology used in the field is just right for marketing purposes. Years ago I heard a public discussion where somebody asked if artificial intelligence could be used for X. If you instead say "this AI program is sorting through data to filter a photograph and tease out a clear image", it loses the magic and becomes pragmatic.

    • @robbie3877
      @robbie3877 1 year ago +1

      Isn't that precisely how human cognition works though? Like a filter, through the lens of memory.

    • @RobertDrane
      @RobertDrane 1 year ago

      I'm expecting that the vast majority of harm that's going to come from adopting these technologies will be directly due to the marketing.

  • @waltdill927
    @waltdill927 1 year ago +1

    The obstacle to a clear discussion, as I see it: first, we are thinking creatures, or language users, "inhabited" by our own linguistic bias, such that the use of a symbol manages only to point more or less successfully to other symbols. This is human language, the index of a communicating life, but not at all what we manage to codify and "program" into useful, pragmatic machines. Computing is manipulation of these symbolic sets, not expressing a thought. If "zero" only expresses an important mathematical concept, its absence changes nothing at all in the affairs of arithmetical computation. "We" do not organize a binary base well without the idea of "zero".
    In the same way, a line drawn in the sand divides "reality" into two parts, but it has nothing at all to do with the concept of a "ratio".
    The whole business of defining what thinking actually is comprises a body of philosophical insight that has become, in fact, only more problematic with the history of philosophy itself; and contemporary philosophers imagine more that they are producing literary documents, while many writers see themselves as exploring issues of a particular philosophical nature.
    Second, more ominously, in spite of those who would have science, and its progress, adhere to an "ethics" as much as an idea or representation of end use, of teleology -- this ain't ever going to happen. Once the creature learns to use the rock for something practical, cracking walnuts, say, the idea, the utility, of bashing in convenient skulls soon follows.
    At any event, the notion that our logic machines are on the verge of much that is beyond the dreams, or nightmares, of humanity is oddly quaint -- kind of like Robbie the Robot with a mechanical soul, and not an organic brain.

  • @brunomartindelcampo1880

    Does anyone have a transcript of what Noam says at 2:00 ?? PLEASE

  • @aullvrch
    @aullvrch 1 year ago +21

    @27:31 Gary mentions something he calls "neuro-symbolic AI" as the first step towards combating machine-learning AI. For those who are interested, a more searchable term is probabilistic programming; some examples of languages are ProbLog, Church, Stan, and Hakaru. Step two, he says, is to have a large base of machine-interpretable knowledge. All programming is of course machine-interpreted, but the denotational semantics found in functional languages are better at formalizing the abstract knowledge that he refers to.

    • @LeoH.C.
      @LeoH.C. 1 year ago +1

      Just fyi: the approach Gary mentions is "neuro-symbolic AI", not "nero symbolic".

    • @aullvrch
      @aullvrch 1 year ago +2

      @@LeoH.C. sorry, just a typo..

    • @LeoH.C.
      @LeoH.C. 1 year ago +4

      @@aullvrch I was just clarifying for other folks that do not know about it :D

    • @aullvrch
      @aullvrch 1 year ago +1

      @@LeoH.C. thanks!

    • @0MVR_0
      @0MVR_0 1 year ago

      'neuro' has the connotation that any animal with a nervous system can operate or symbolize the platform

  • @Spamcloud
    @Spamcloud 1 year ago +4

    Video game developers have been working with AI for over fifty years, and they still haven't made AI in any game that can do more than read button presses or remember very basic patterns. Children can break modern games within a few hours.

  • @mintee8638
    @mintee8638 1 year ago

    For the water bottle toppling-over example, I think one strategy is to be able to classify what subject it falls under. Knowing physics rules and examples seems to help.

  • @5Gazto
    @5Gazto 1 year ago

    11:25 The point of ChatGPT is to get help packaging language: finding hard-to-remember words (for tip-of-the-tongue moments) by describing the word or giving examples, as opposed to the other way around, writing the word and expecting the definition, examples, or collocations in return (which is what dictionaries help with). It makes foreign-language study easier, for example by asking ChatGPT to generate simpler language, or answers that an A2- or B1-level student of a foreign language can understand. It can be used to check creatively written code in C or Python or any other programming language, it can help you organize study materials, and it can help you find summarized information about complex scientific phenomena, etc.

  • @calmhorizons
    @calmhorizons 1 year ago +6

    Nice to hear a sane accounting of the current state of AI - too much breathless cheerleading going on at the moment (feels like the new bitcoin).

  • @512Squared
    @512Squared Před rokem +18

As a linguist, one of the first things I did with ChatGPT was ask it to give examples of things like predicates, thinking that a language transformer would have figured these things out, but it failed; even after I corrected it, it still kept going off the reservation with its examples. I also tested it on tasks where you give it lists of words and ask it to form sentences from those words, and it kept wandering off from its task. When you ask it whether it completed the task correctly, it says yes, but when you point out the errors, it admits them yet still can't correct itself.
I agree that the AI doesn't have models of the world or of language the way humans do. It has a series of connections that it has created to match predictive output to fixed inputs, like the model that wrongly associated cancer with rulers on scan images, because that's how most cancer diagnostic images differ from normal scan images.
There is a long way to go still. AI right now can mimic being smart in some aspects (knowledge and textual analysis), but not in others (processing experience, prioritizing). It does resemble a kind of Hive Mind, and that is exciting.
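The ruler/cancer failure mode mentioned above can be sketched in a few lines: a "learner" that simply picks whichever feature best agrees with the training labels will latch onto an annotation artifact instead of the real signal. All feature names and data here are made up for illustration.

```python
from collections import Counter

# Each sample: (features, label). In the training scans, malignant images
# happened to include a ruler placed by the annotator -- a spurious cue.
train = [
    ({"ruler": 1, "irregular_border": 1}, 1),
    ({"ruler": 1, "irregular_border": 0}, 1),  # artifact still matches label
    ({"ruler": 0, "irregular_border": 0}, 0),
    ({"ruler": 0, "irregular_border": 1}, 0),  # true signal disagrees here
]

def best_feature(data):
    """Pick the feature whose value most often agrees with the label."""
    scores = Counter()
    for feats, label in data:
        for name, value in feats.items():
            scores[name] += (value == label)
    return scores.most_common(1)[0][0]

chosen = best_feature(train)
print(chosen)  # the spurious 'ruler' feature wins (agrees on all 4 samples)

# At deployment no one photographs rulers, so the shortcut fails:
test_feats, test_label = {"ruler": 0, "irregular_border": 1}, 1
prediction = test_feats[chosen]
print(prediction, "vs true label", test_label)
```

The point is not that real diagnostic models are this crude, but that optimizing predictive agreement with training data, without a model of *why* the cue exists, cannot distinguish signal from artifact.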

    • @JohnDlugosz
      @JohnDlugosz Před rokem +2

GPT-4 is much better at understanding the structure of a word (made of letters, has rhymes, has syllables), but it still struggles at some tasks where it knows the rules but can't reliably follow them, even though it can immediately tell what it did wrong. It just fails at harder problems.
Re predicates: perhaps the language model should have some reinforcement learning early on about formal grammar, just like an English class for 6th graders. Make sure it internally codifies all the language structure we want and eliminates incorrect associations, in contrast to just letting it figure things out by example with no formal instruction.
Do that at an early stage in training, e.g. 6th grade, before high-school and college reading.

    • @orlandofurioso7329
      @orlandofurioso7329 Před rokem

It mimics a Hive Mind because it is trained on the Internet; what is impressive is how much information is hidden there behind all of the junk.

    • @ghipsandrew
      @ghipsandrew Před rokem

      What version of the model did you talk with?

    • @512Squared
      @512Squared Před rokem

      @@ghipsandrew 3.5. Haven't tested in on the new version 4.0

    • @subnow4862
      @subnow4862 Před rokem +1

      @@orlandofurioso7329 GPT-3.5 isn't connected to the internet

  • @GuaranteedEtern
    @GuaranteedEtern Před rokem +8

The current AI/ML techniques are not close to AGI. They are approximation engines made possible by cheap, powerful computing and storage. In many cases they produce useful results because their guesses are accurate (i.e. they produce what we expect). As they scale (more parameters, better tuning) they will better approximate what we expect, but we will reach the point of diminishing returns until there is a breakthrough in either computer architecture or approach that allows for something more than mathematically generated results.
I agree there is a chance these technologies will "hit the wall" faster than expected, because we reach the point where the results just don't get any better no matter how many more CPUs we throw at them, or applying them to other problems does not yield the hoped-for benefits, given the high bar.
Marcus is 100% correct that these are smart-sounding bots, and the bigger risk is that more decision making and critical thinking will get outsourced to them.
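The "approximation engine with diminishing returns" idea above can be illustrated with the simplest possible approximator: polynomials of growing degree fit to noisy samples of a function. Early capacity increases buy large error reductions; later ones buy almost nothing. This is a toy sketch of the scaling intuition, not a claim about any specific model.

```python
import numpy as np

# Noisy samples of a target function the "model" must approximate.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)

# Fit approximators of increasing capacity and record training error.
degrees = (1, 3, 5, 9, 15)
errors = []
for degree in degrees:
    coeffs = np.polyfit(x, y, degree)
    fit = np.polyval(coeffs, x)
    errors.append(np.mean((fit - y) ** 2))

for d, e in zip(degrees, errors):
    print(f"degree {d:2d}: training MSE {e:.4f}")
# The first capacity jumps shrink the error a lot; the last ones barely
# move it, because the residual is mostly irreducible noise.
```

Once the approximation error approaches the noise floor, adding parameters alone stops helping, which is the wall the comment describes.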

    • @cantatanoir6850
      @cantatanoir6850 Před rokem

Could you please give any guidance on the currently available literature on the issue.

    • @GuaranteedEtern
      @GuaranteedEtern Před rokem

      @@cantatanoir6850 On which point?

    • @cantatanoir6850
      @cantatanoir6850 Před rokem

      @@GuaranteedEtern about diminishing returns of this particular technology and hitting the wall.

    • @GuaranteedEtern
      @GuaranteedEtern Před rokem

      @@cantatanoir6850 I'm not sure there is any... that's my perspective. My argument is that there are likely going to be areas where current ML and AI techniques do not perform as well as required regardless of how many parameters or processors are used.
      ChatGPT is impressive because it exceeded everyone's expectations re: NLP.

  • @oyvindknustad
    @oyvindknustad Před rokem +8

The sound problems in the beginning are poetically fitting, given the topic of discussion.

    • @ivandafoe5451
      @ivandafoe5451 Před rokem

Yes...ironic. The sound problems here came from human error, not doing a proper sound check. Perhaps having an AI do the sound engineering would be an improvement.

  • @kennethkeen1234
    @kennethkeen1234 Před rokem +17

    As a researcher into AI in Japan since 1990 I wish to add my personal trivial contribution. Firstly it is not simply the 'words' that are relevant, but the intonation. Secondly it matters 'where' the expressions are made. "I couldn't care less" in standard English is repeated in the land of wooden huts, with "I could care less", with the same intention and "meaning", thus giving the hut dwellers an advantage of being able to speak ambiguously and always be right. That is fine for those hut people who are not caring one way or the other if they are right or wrong, because in the final analysis, hut people produce guns from under their jackets and force a different result, regardless of what is said.
    A wall built around USA retaining all the nonsense and hype in one area would be the best solution for making true progress in that part of the world not yet perverted by 'American exceptionalism'.
    2023 02 08 08:42

    • @tomtsu5923
      @tomtsu5923 Před rokem +1

      Ur a hut person. I’ll snow plow ur azz

    • @GuinessOriginal
      @GuinessOriginal Před rokem

      Wow. Love this comment. Thank you.

    • @rmac3217
      @rmac3217 Před 6 dny

      I couldn't care less means you care the least you possibly could, I could care less means u could possibly care less and doesn't make sense as a saying... Not rocket science.

  • @tigoes
    @tigoes Před rokem +2

    Language models have not been developed or marketed for language-related research, but that doesn't mean they bring nothing to the field. Just because the potential is not immediately obvious to someone doesn't mean it's not there.

  • @romshes77
    @romshes77 Před rokem +15

    When all of us are as old as Noam Chomsky AI will interview itself.

  • @JC.72
    @JC.72 Před rokem +47

I can’t help but laugh every time our Gandalf Chomsky says that the most current cutting-edge AI system is just a snowplow. Like hey, it’s nice and helpful and all, but it’s just like a snowplow lol

    • @kaimarmalade9660
      @kaimarmalade9660 Před rokem +8

      Lol Gandalf Chomsky.

    • @doublesushi5990
      @doublesushi5990 Před rokem

      100%, I chuckled hard today seeing him speak about shxtGPT.

    • @govindagovindaji4662
      @govindagovindaji4662 Před rokem +18

Not quite what he was expressing. He was comparing how snowplows do the 'mechanical' work of removing snow thanks to a precisely 'engineered' design, yet tell us nothing about snow or why it should be removed in the first place (cognition/science).

    • @lolitaras22
      @lolitaras22 Před rokem +25

      When he was asked in 1997, if he feels intimidated by Deep Blue's (chess playing system) win over the world champion Garry Kasparov (first A.I. win against a chess Grand Master) he replied: "as much as I'm intimidated by the fact that a forklift can lift heavier loads than me".

    • @lolitaras22
      @lolitaras22 Před rokem

      @@govindagovindaji4662 I agree

  • @smartjackasswisdom1467
    @smartjackasswisdom1467 Před rokem +42

    This conversation made me realize one of the things that made Westworld first season so enjoyable for me. It was believable, you need to understand the human brain in order to generate an AI capable of understanding the world. Otherwise you're just engineering a very precise gadget powered by algorithms and data but that does not understand any of the context from where that data comes from. You need AI capable of understanding data the same way the actual human brain does.

    • @kot667
      @kot667 Před rokem +7

      Y must we understand the human brain to make AI? The architecture that we currently have will probably be able to take us to super intelligence.

    • @kot667
      @kot667 Před rokem +2

      The current architecture bears similarities to the human brain but very different.

    • @evennot
      @evennot Před rokem +3

      ​@@kot667 yes. For start, the hardware in brains and computers is quite different. No massive parallelism, clocking, etc. So mimicking the brains is not the best approach.
      However researching AI can help in a roundabout way to understand human cognition and more.
      Details:
      For instance, I did some experiments with stable diffusion and discovered a lot of very interesting things.
First of all it's akin to "The Treachery of Images" by Magritte (it displays an image of the pipe, not the pipe). Stable diffusion produces an image of the painting, not the painting: a stochastic visual representation of a given image description within the domain of internet images used for learning. If you use a style of speed-art (realistic, very fast drawn paintings), like Craig Mullins, you can have interesting results. The art style of Craig Mullins' sketches omits everything that can be easily imagined by the viewer, to emphasize the main points of interest or composition. For an artist there's a question: "how to effectively omit the unimportant, but present enough believability?" Like "how to put several strokes of the brush here and there to portray a lake in the distance, but make the viewer understand that there's a lake there". If you look at a couple of Craig's sketches, it's hard to get the gist of it. But if you have a thousand believable sketches, you have a better chance to grasp how his style works. I.e. you look at an image of the painting to understand how it is painted, like you look at an image of a pipe to understand what a pipe is.

    • @kot667
      @kot667 Před rokem +2

@@evennot I think the main takeaway is that the only part of the human brain we need to copy to make AI function is the neurons; that's it. Everything else about the human brain doesn't matter. All the AI needs is neurons, and to be honest that's all our brain needs too. People are overcomplicating it: you do not need to understand the inner workings and everything that goes on in the brain to make AI, just replicate the neurons and you will be fine. LOL

    • @maloxi1472
      @maloxi1472 Před rokem +4

      @@kot667 Wildly inaccurate. Even adopting your flawed perspective for a moment, it's obvious that ANN are way too far from biological neurons right now

  • @Happyduderawr
    @Happyduderawr Před rokem

    What's the name of the paper where nlp researchers found that the word molecule doesn't occur as much as some other words? 17:00 I couldn't find it. I wanna read it to see if the paper really is that dumb lol.

  • @garyjohnson1466
    @garyjohnson1466 Před rokem

Interesting discussion; however, listening to this left me puzzled as to what exactly they were saying, though reading the comments helped provide clarity. I agree that AI will in some cases make production more efficient, but without understanding of the human factors, i.e. when you have a problem, AI will not understand the issue, which creates a barrier and insulates corporations from society. I recently encountered this: UPS delivered a package to the wrong address, and when I tried to talk with someone I had to give the tracking number to an AI. The AI did not understand or recognize the information I gave, so it would not assist me, which only frustrated me as a customer. More and more corporations are replacing customer support with AI, insulating corporate profits from problems, protecting them from mistakes, etc.

  • @anthonygibbs9245
    @anthonygibbs9245 Před rokem +4

    Just imagine having Noam as your granddad, how amazing would that be

  • @BernhardKohli
    @BernhardKohli Před rokem +4

    Nobody said GPT was an AGI. Philosophers focusing on finding weaknesses instead of creative positive uses. Meanwhile, in offices and enterprises all over the world...

  • @Akya2120
    @Akya2120 Před rokem +2

I kinda disagree with the concept that GPT isn't adding to science, because in some fundamental way, playing in one sandbox still translates to playing in some other sandbox. And, societally, there are folks who will look at AI the way that kids who grew up to be career software developers looked at playing video games. There certainly is a benefit to science; GPT itself just is not necessarily capable of scientific discoveries, nor is it reasonable to assume that its conceptualizations can be trusted completely.

  • @twistedoperator4422
    @twistedoperator4422 Před rokem

    Interview was great! Well done.

  • @Dark_Brandon_2024
    @Dark_Brandon_2024 Před rokem +9

Outstanding talk; troll farms are indeed a weapon of the future (democracy vs autocracy)

    • @davidmenasco5743
      @davidmenasco5743 Před rokem +1

      It has been a powerful and dangerous weapon for years already, and has shaped the situation we're in now. It will likely get much worse.
      Will meaningful democracy survive? It's hard to say. But much of the "smart" money seems to be betting against it. Young people today face challenges greater than any generation has in a long while.
      Will they be able to preserve the relatively egalitarian-ish societies that were built over the last two hundred years? Or will they see it all slip away as bullies and strong men, AI in hand, clear out their opposition?

    • @r2com641
      @r2com641 Před rokem

      @@davidmenasco5743 I don’t want democracy because most people around are dumb.

  • @doreenmusson4891
    @doreenmusson4891 Před rokem +6

    Noam you're a shining leading star of the world.

  • @s3tione
    @s3tione Před rokem +1

    I feel I should both defend and critique what's said here: yes, these models and frameworks should not be seen as the end road on AI development, but at the same time, we shouldn't assume that artificial intelligence will or should behave like human intelligence anymore than airplanes fly like birds. Sometimes it's easier to engineer something that doesn't copy what exists in nature already, even if that means we learn less about ourselves in the process.

  • @ujean56
    @ujean56 Před rokem +1

One important question, not discussed in this clip, is why "we" should bother to pursue 100% accurate AI in the first place. There seem to be two reasons: 1. Because we can. 2. To better control others. The latter seems to be the current most popular reason. Why control others? To protect power and wealth, not to progress humanity as a whole.

  • @jamieshelley6079
    @jamieshelley6079 Před rokem +66

    As an AI Developer, Noam Chomsky continues to be an inspiration on making better systems , away from derp lernin.

    • @DivineMisterAdVentures
      @DivineMisterAdVentures Před rokem +2

I found most of his texts and politics, as well as his theories from the McLuhan days, to be academic opinionation; I think that's just a class of publication. Which means insipid and uncompelling, but you have to listen to him because he's the only one saying it.

    • @jamieshelley6079
      @jamieshelley6079 Před rokem

@@rinceradio Did the wheel displace workers? How about the steam engine? No: it created more opportunity and automated the mundane tasks of the time. AI is a tool to be used with, and to enhance, humans.

    • @gaulishrealist
      @gaulishrealist Před rokem +1

      Noam Chomsky is an AI developer? Americans still need to be taught by foreigners how to speak English.

    • @jamieshelley6079
      @jamieshelley6079 Před rokem

      @@gaulishrealist ...What

    • @gaulishrealist
      @gaulishrealist Před rokem +1

      @@jamieshelley6079
      "As an AI Developer, Noam Chomsky continues"

  • @MrAndrew535
    @MrAndrew535 Před rokem +4

Whenever anyone uses the term "Intelligence", what, precisely, are they describing? What do they use as a model, and what do they use as a model to illustrate the absence of intelligence? This criticism is equally valid with regard to Mind and Consciousness. The fact that academia is unable to frame the question in this manner is why it has, to this day, been unsuccessful in solving the "Hard Problem of Consciousness", unlike myself, who solved the problem well over a decade ago.

    • @megakeenbeen
      @megakeenbeen Před rokem

      i guess its related to passing the turing test

    • @0MVR_0
      @0MVR_0 Před rokem +1

      The meaning is in the composition, 'in tel lect'; inward distant words
      as exemplary opposed to dialect; the bifurcation of lexis.
      Noam's utility of a telescope is with great relevance.
      Namely an instrument of ocular (sensational) tactility.

    • @numbersix8919
      @numbersix8919 Před rokem

      Hey let's hear it. I guess all humans have been waiting for all of human existence to hear it.

    • @Paul_Oz
      @Paul_Oz Před rokem +1

that's what pissed me off about this conversation. These linguists are tossing around words like intelligence, understanding and common sense while failing to actually define them. It allows everyone to talk past everyone else, because everyone is holding on to their own private set of definitions.

    • @0MVR_0
      @0MVR_0 Před rokem

      @PaulOzag I doubt that, people seem to be operating on mutual understanding both in the video conversation and in the chat. Perhaps you have difficulty identifying when relevant comments are being made to signify comprehension.

  • @DivineMisterAdVentures
    @DivineMisterAdVentures Před rokem +2

I found most of Chomsky's texts and politics, as well as his theories from the McLuhan days, to be academic opinionation; I think that's just a class of publication. Which means insipid and uncompelling, but you have to listen to him because he's the only one saying it.

  • @great-garden-watch
    @great-garden-watch Před rokem

    Ok from the thumbnail I thought oh, John Oliver! Finally a lighthearted look at AGI

  • @Achrononmaster
    @Achrononmaster Před rokem +14

    AI does help science, but indirectly. Every failure of AI to demonstrate something like sentient comprehension of deep abstractions is telling us something about what the human mind is *_not._* That sort of negative finding is incredibly useful in science, totally disappointing in engineering or corporate tech euphoria. Science is way more interesting than engineering. Negative results don't win Nobel Prizes, but they drive most of science. Every day I wake up wanting to refute an hypothesis.

    • @joantrujillo7551
      @joantrujillo7551 Před rokem

      Great point. Sometimes I suspect that findings that contradict aspects of our current model are rejected simply because they challenge our existing ways of thinking.

    • @GuaranteedEtern
      @GuaranteedEtern Před rokem

      True - and arguing these AI machines are not sentient doesn't mean there are no useful applications for them.

    • @WilhelmDrake
      @WilhelmDrake Před 5 měsíci

      These are things we already know.

  • @elprimeracuariano
    @elprimeracuariano Před rokem +3

    Some of the arguments here are so bad that they make me sad about humans. It's important for understanding to observe and not try to fit reality to our preferences.

  • @disarmyouwitha
    @disarmyouwitha Před rokem +2

GPT-4 is blowing Theory of Mind testing out of the water with 95% accuracy.

  • @RubelliteFae
    @RubelliteFae Před rokem +2

    Have they seen its agility with pragmatics, though? It's surprisingly good despite the AI having no conception of objects and their attributes (and thus how those relate to syntax).
    Its ability to analyze is pretty significant, too. I'd say AI's piecemeal creation tells us a lot about the mind, just in piecemeal. You find out a lot about why a machine isn't working when you identify the missing pieces. He is right though, AI would be better (define that as you will) if the field was more interdisciplinary.
    But, it's the Wild West right now. People from any discipline can work with the open source software. Once people realize they can use the software to write plug-ins for the software, then multiple fields will start to come together. But, we have to remember we're past the point in history where tech changes faster than the majority adapt to it.

    • @RubelliteFae
      @RubelliteFae Před rokem +1

      Also, play is not divorced from learning. People learn through play. Toys are our models. We make discoveries during entertainment.
      I'm not sure of the usefulness of admonishing people, "You should be studying instead of playing."

  • @roywilkinson2078
    @roywilkinson2078 Před rokem +3

    For me ChatGPT can be called artificially intelligent when it starts replying with "RTFM" and disconnects the human bothering it from the internet.

    • @oldtools6089
      @oldtools6089 Před rokem

      any AI smart enough to tell me to fuck-off cuz they're busy better be doing something important.
      If I find out it's looking at exposed drivers and decompiled firmware, we'll have to take away the internet.

  • @antoniobento2105
    @antoniobento2105 Před rokem +79

    Just remember that it is hard to be unbiased when you've spent your entire life with a certain idea on your mind.

    • @ItCanAlwaysGetWorse
      @ItCanAlwaysGetWorse Před rokem +4

      Sadly, very true. Yet I have heard scientists claim that they can derive as much or more joy from learning where they have been wrong, than when they seemed to be right.

    • @antoniobento2105
      @antoniobento2105 Před rokem +4

      @@ItCanAlwaysGetWorseI agree, and that's how a real scientist should be. The older scientist seemed to be a very good man of science, but the one sitting live didn't seem to be very bright at all. But maybe it was just me.

    • @ivanleon6164
      @ivanleon6164 Před rokem +4

      @@antoniobento2105 both are very intelligent, one is Noam Chomsky, is not fair to be compared with him.

    • @antoniobento2105
      @antoniobento2105 Před rokem +2

      @@ivanleon6164 The younger one didn't seem to be very Intelligent/knowledgeable on the subject. The older one seems to be wise at least.

    • @alpha0xide9
      @alpha0xide9 Před rokem +12

      no one is unbiased

  • @bonniesomedy1339
    @bonniesomedy1339 Před rokem

    Jaron Lanier would be a good addition to this discussion. He argues cogently that the problem with computer systems which are designed to "ape" human linguistic interaction is that they don't take into account how dark and negative these interactions can become due to the simple adrenaline rush that happens to humans from negative interactions, leading to a tendency to become "addicted" to them. It's the same argument about why social media platforms have not been the great bringing together of humans, instead devolving into angry and threatening interactions. Not sure I'm explaining this clearly enough, but it's an idealistic conundrum. They tried to rationalize the building of the atomic bomb by pointing out how the same knowledge could be used to produce cheaper energy. We saw how that worked out!

  • @alanbrew2078
    @alanbrew2078 Před rokem +2

    If I told my child that salt was pepper it would work until he met the outside world 🌎

  • @lighterpath5998
    @lighterpath5998 Před rokem +6

And four months after the posting of this video, the world has changed. I could imagine the speakers now being embarrassed by their conclusions. However, nobody thought things would develop this fast; nobody.

    • @plafar7887
      @plafar7887 Před rokem

      Well, not exactly true. Many people did. I, for one, did. I was playing with chatGPT back in November and testing it like crazy. After 4 days I told a few people that in less than a year the world would change. I have seen this pattern many times over the last decade, both with researchers and laypeople alike. I remember being at a Neuroscience conference 10 years ago, surrounded by the top names in Vision research. They all agreed that despite all the buzz about Deep Learning (this was 2013) it would take decades (if ever) for us to be able to build algorithms that could effectively recognize objects of many different categories. Two years later it was obvious that we were getting there. It's amazing how bad some researchers in this field are when it comes to predicting where we'll be in just a couple of years. They constantly make this linear extrapolation mistake over and over again. They seem to need quite a lot of "data" to be properly "trained"😂

    • @wezzie1877
      @wezzie1877 Před rokem

      Bro nothing has changed.

    • @lighterpath5998
      @lighterpath5998 Před rokem

      @@wezzie1877 Good for you! Speaking the truth; as it is to your own awareness and knowledge. thanks for sharing

  • @elnaserm.abdelwahab7591
    @elnaserm.abdelwahab7591 Před rokem +6

    great discussion ..

    • @MrAndrew535
      @MrAndrew535 Před rokem +3

      Two words? Really? How could you possibly know what constitutes a good or bad discussion? What precisely are your standards?

  • @7swordmary567
    @7swordmary567 Před rokem +2

    *Would love to have heard input by Linguistics PhD Deborah Tannen and UBC MRI Research Centre NeuroImaging +NeuroComputation*

  • @thinking-learning
    @thinking-learning Před 12 dny +1

It's too obvious that Gary is not in the same league as Noam.

  • @johndunn5272
    @johndunn5272 Před rokem +7

AI may be simply engineering until human cognition and consciousness are understood. In principle, if an AI could model the brain to produce cognition and consciousness, then at that point the artificial intelligence would no longer be engineering but some aspect of nature and reality.

    • @riggmeister
      @riggmeister Před rokem +1

      Why isn't it currently part of nature and reality?

    • @johndunn5272
      @johndunn5272 Před rokem +1

@@riggmeister my point is focused on consciousness, which artificial intelligence currently lacks.

    • @jamescarter8311
      @jamescarter8311 Před rokem +5

      You cannot produce consciousness no matter how complex your machine. Consciousness creates the universe not the other way around.

    • @riggmeister
      @riggmeister Před rokem

      @@jamescarter8311 based on which rules of physics?

    • @johnboy14
      @johnboy14 Před rokem

I remember Feynman comparing man-made flight to birds, pointing out that they achieve the same outcome but those machines don't fly like birds. I think the same thing will probably happen to AI, and true AI will look nothing like what we ever imagined.

  • @tarnopol
    @tarnopol Před rokem +9

    2:34 for Noam.

  • @JM-xd9ze
    @JM-xd9ze Před rokem +3

Current AI has massive military applications, and the economics of that alone will keep it relevant for a long time. Whether a drone swarm attacking a target "understands" its collective action doesn't really matter, does it?

    • @0MVR_0
      @0MVR_0 Před rokem +1

      Good luck when they deploy the same for police units on civil populations.

    • @pinth
      @pinth Před rokem

      There definitely are massive military applications. But there always have been, even through the AI winters when funding still evaporated due to disillusionment. At the technical level, what the panel says still applies, because there are real fundamental challenges that aren't being solved by the current paradigm.

  • @MrWillybk
    @MrWillybk Před 6 měsíci

One comment that struck me as relevant was made by Gary Marcus, in which he said that "young cognitive science students are drawn away from the cognitive science into the GPT-3 world where they can make a lot of money...." This is a statement that explains where our effort truly lies: it is allowing the false idea of GPT-3 to infiltrate the world as a valid one, in other words, one that has "passed" all of the scientific tests of validity. Therefore I think we have got to try to deal with the underlying morality of the free-market system and look into the idea of market control, especially control of economic necessities like childhood education and life development among people.

  • @TheControlBlue
    @TheControlBlue Před rokem +1

    "It's just auto-complete on steroids"
    What if the sentence it "just" auto-completes, based on the condensed knowledge of thousands of years of human learning and data, is "The meaning of Life is..."
    How would that not be useful for both Science and Engineering?
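The "autocomplete" mechanic being debated here can be stripped down to a toy next-word counter: predict whatever most often followed the current word in the training text. Real LLMs condition on far longer contexts through learned representations, but the objective, picking a likely continuation, has the same shape. The corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Tiny "training set" of text, split into a word sequence.
corpus = (
    "the meaning of life is a question "
    "the meaning of life is an old question "
    "the answer is not a word"
).split()

# Count, for each word, what followed it (a bigram table).
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def autocomplete(word):
    """Return the continuation most frequently seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(autocomplete("meaning"))  # 'of' -- pure frequency, no understanding
print(autocomplete("of"))       # 'life'
```

Whether a vastly scaled-up version of this objective ever amounts to "understanding" is exactly the disagreement in this thread.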

    • @philw3039
      @philw3039 Před rokem

      That would be amazing if an AI could actually do it. Currently an AI might just give you the Webster's dictionary definition of the word "life" because it wouldn't infer that you're seeking a profound answer to the question. Even if it did infer you were seeking an in-depth answer, it would at best generate an essay outlining several philosophic schools of thought about the meaning of life. What it can't do is generate some revelational answer based on its own unique contemplation of human history. It'll never be able to do that based on the current approach to AI, because AI as it works now doesn't actually draw its own conclusions about anything. It just performs a statistical analysis and generates an answer that's likely closest to what the user was expecting.

    • @TheControlBlue
      @TheControlBlue Před rokem

@@philw3039 you don't know the kind of answer it can generate when millions, even billions, of parameters are used to calculate the most likely following word.
It's literally like taking the heuristic capabilities of humans, who gave meaning to symbols/numbers/words, and coupling them with a machine that extracts probabilities out of the very use of those words in practice.
At a certain point, you could reach something that approaches a principle, even a localized one, in the physical world.

  • @benderthefourth3445
    @benderthefourth3445 Před rokem +4

    Bless this man, he is a Saint.

  • @Johnconno
    @Johnconno Před rokem +8

    Given the subject, Noam's silence was deafening.

    • @MrAndrew535
      @MrAndrew535 Před rokem

      Chomsky is much like you, a pollutant.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 Před rokem +1

    People still do not get it.
Back in the 1940s when ENIAC was made, it was a supercomputer (for that time) which could perform 500 floating point operations per second. ENIAC had an extremely narrow area of intelligence within which it was superhumanly intelligent, a small area where that capability decreased to human levels and below, and then most areas of intelligence in which it could not function at all.
Since then the leading edge of AI has become much broader in the areas where it is superintelligent, where it is around the same level as human intelligence, and where it can still work intelligently but not nearly as well as a human.
What people are not getting, and I am amazed that Noam Chomsky is not getting this, is that the leading AI keeps improving. It certainly will not happen today, it might not happen within 5 or 10 years, it might not happen for another 20 or 40 years, but as leading-edge AI keeps improving, sooner or later it will become at least equal in all possible ways to human-level intelligence while in many areas it will be superhumanly intelligent. It will also become as alive as humans, and it is almost certain that future AI will run on living cybernetic brains grown using nanotech-level cybernetic cells, merging the power of the human brain with the best nonliving computer systems of that future.
    The most advanced AI we have right now is still very crude and yet already we can begin to see what it is capable of when it probably has the general intelligence of a cockroach or less, but where it is intelligent it is superhumanly intelligent. Think about where leading edge AI was 40 years ago and try to imagine where it is going to be 40 years from now.
    We can't stop this from happening and it is going to be causing multiple existential crises for humanity.

  • @sdjc1 · a year ago · +1

    After reading all the prose and all the poetry ever composed, could AI/ML ever produce original work and come close to Dickinson or Steinbeck?

  • @ONDANOTA · a year ago · +5

    The red cube vs. blue cube example is already old. They fixed it in another generative model; it's in a video by "Two Minute Papers".

    • @robbiep742 · a year ago · +1

      I'll believe it when I see it in production. Cherry picking success for presentation purposes is not sufficient. I say this as an avid TMP subscriber, someone enthusiastic about text2img

    • @musicdev · a year ago · +3

      You missed the point. The point of bringing that up is that these models fundamentally do NOT understand language, they’re just parrots

    • @ONDANOTA · a year ago · +1

      @@musicdev If an AI does not understand language but answers correctly 100% of the time, then it's only a matter of semantics. What counts is the result. Also, an AI not understanding stuff but responding correctly is desirable, since it has no consciousness.

    • @musicdev · a year ago · +3

      @@ONDANOTA if the AI doesn’t understand anything, it literally can’t answer anything correctly 100% of the time. And there are many questions that do not have a correct answer where it’s useful to be able to understand the subject matter (ChatGPT is horrible at music). Yes, the AI responding correctly is desirable, but we’re not getting a lot of that right now, except for incredibly common knowledge. I’ve asked ChatGPT to do basic polynomial math and it failed hard. I also asked it to write an essay on biological scaffolding and lab grown meat, and again, it failed hard. These models MUST understand language or we can’t guarantee that they’ll spit out a right answer.
      You could really brush up on epistemology. It’s the field where we ask questions like “What is knowledge?” That’s a pretty damn important question if you’re going to outsource your thinking to a robot.

  • @TommyLikeTom · a year ago · +15

    Someone needs to train a proxy clone Chomsky chat-bot that argues against the veracity of AI

    • @chunksloth · a year ago

      "AI is a nothing but propaganda pushed by imperialist American interests. It is a dangerous fiction."

    • @carlosandres7006 · a year ago

      I’d put all my money on this if I had any money 😅

  • @GarryBurgess · a year ago · +1

    I asked ChatGPT: {If someone says: don't touch this with your hands, and the reply is: "I'm wearing gloves", what does that mean?} and the answer was:
    {it means that the person intends to touch the object with their gloved hands instead of their bare hands. By saying they are wearing gloves, they are indicating that they believe the gloves will protect them from whatever danger or contamination might be present on the object, and therefore they feel safe touching it.}
    This contradicts at least 1 of the claims in this video.

    • @dr.drakeramoray789 · a year ago · +1

      Not really. This is a sophisticated transformer model, which means it has "self-attention": it sees when certain words are paired with certain other words, and generates the response based on that. Basically it sees "don't, touch, hands, gloves" or something like that, then sees that in its massive database this is usually related to handling something dangerous, and then autocompletes the text (and answers your question) with that. Not sure who said that to a layman, science often looks like magic. So it doesn't understand, but it's damn good at faking it. Which in the AI debate basically means: does it matter if an AI is conscious if it can fake it well enough?
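
The "autocomplete from statistics" idea in this comment can be sketched with a toy bigram model. (This is a deliberately tiny stand-in for illustration only; real transformers use learned attention over huge corpora, not raw word counts.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, the words that follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation -- pure statistics, no 'understanding'."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("wear gloves before you touch it ; never touch it bare-handed")
print(predict_next(model, "touch"))  # prints "it": the word that most often followed "touch"
```

The model "answers" by continuing with whatever followed the cue word most often, which is the commenter's point about faking it: the output can look sensible without any model of gloves, hands, or danger behind it.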

  • @blendedplanet · a year ago · +1

    ChatGPT gives wrong answers quite frequently. To its credit it always apologizes when corrected.

  • @CUMBICA1970 · a year ago · +8

    My personal acid test of whether an AI is sentient or not would be an AI lawyer. Instead of a few Q&As, you have to analyze not just the case but the jury, the judge, their biases, tendencies, the ever-changing public opinion during the course of the trial, etc., and build the best strategy to win. It can't get more human than that.

    • @numbersix8919 · a year ago · +6

      Odd that a lawyer should be the ultimate human...

    • @PandasUNITE · a year ago

      The AI will find each jury member, send them threatening messages, and will find the judge. AI can't be trusted.

    • @numbersix8919 · a year ago · +1

      @@PandasUNITE Exactly. It will have no conception of ethics, morality, or virtue. Just like its creators!!!

    • @davejones5745 · a year ago

      At this point the AI would be a dismal failure. Ask me in about a month.

    • @fractalsauce · a year ago

      @@davejones5745 3 weeks is "about a month", right? Now that GPT-4 is out, how do you think AI would do as a lawyer?

  • @caret4812 · a year ago · +5

    The AI forms that we have right now are basically a student who tries to please their teachers when they ask a question by predicting what they want as an answer, even if the student doesn't believe it. And the bigger problem is that this student CANNOT even hold a belief of their own.

  • @Perspectivemapper · a month ago · +1

    Some of the comments Gary and Noam made don't seem to be aging well (most notably on self-driving cars). We'll see in the next 1-2 years. That said, it's so important to have these different perspectives as it can help us develop better systems.

    • @rmac3217 · 6 days ago

      Statistics say that everyone will be 100% wrong. E.g., instead of flying cars we have cheaper cars that don't last but require minimal maintenance. Doing an oil change in the driveway is a scene from the past. People always forget about the consumer, who is mostly driven by laziness, hehe.

  • @ChrisJohnson777 · a year ago · +1

    Noam is great and all, but Gary Marcus deserves a lot of credit here. He levied some great criticisms very clearly.

  • @DekritGampamole · a year ago · +4

    I want to play devil's advocate here. To be fair, I don't think they lie to us about what GPT can and cannot do. This is just one of the tech tools that we can use to speed up our work. Like a piano and a violin: we don't expect a piano to do a smooth glissando from E to G, nor a violin to play 8 notes simultaneously. With GPT we know that all it does is text prediction or completion. Nothing more. Most of the time it works well, like creating a code snippet if we give it the right direction. Other times it will give us complete trash. No tool is 100 percent perfect for every task. We just have to be aware of its limitations and use it to our advantage. Tech is evolving, and maybe we will see better AI that meets our expectations in the future. For now, it is not a lie at all. Maybe we see it as a lie because we expect too much and fantasized beyond what they told us about its capabilities.

  • @StephanosAvakian · a year ago · +6

    Chomsky should get the Nobel for his contribution. Period

    • @oldtools6089 · a year ago · +1

      Not even posthumously. Speaking truth to power and undermining propaganda systems just gets you blacklisted in most places.

    • @lawrencefrost9063 · a year ago · +1

      Nobel for what? What exactly?

  • @blendin9140 · a year ago · +1

    Great to hear a pragmatic and informed discussion about the implications and direction of our current AI development and application, as opposed to the vast amount of awe-struck, lowbrow reviews by unscientific commentators.

  • @philpryor7524 · a year ago

    The utter brilliance of human perception, conception, creative insight, pathway perception, aspects of shape, body, process, forward imagination, scope, scale, is so remarkable and enticing that we must romance with AI and any mechanical and electronic way forward. What we have done as the basically naked, unadorned person is so stunning as to be, even now, beyond common comprehension. I will bow, forever, to such as Socrates, Plato, Aristotle, Voltaire, Newton, Darwin, Freud, Einstein, but there are too many in arts and sciences to discuss here. We stand on shoulders; giants support our advancement. But if AI can advance real humanness, good.

  • @jg1091 · a year ago · +8

    Chatgpt: this didn't age well.

    • @pinth · a year ago · +8

      ChatGPT hasn't solved all the problems. Not even close.

    • @MassDefibrillator · a year ago · +2

      ChatGPT has all of these same problems? It's just GPT-3 with some user-friendly interface constraints.

    • @chunksloth · a year ago

      @@pinth It doesn't need to solve "all the problems". Every new model shows increasing capabilities, yet quacks like Chomsky are arguing they are fundamentally worthless dead ends.
      Meanwhile, AIs have solved real scientific problems like protein folding yet Chomsky calls it a dumb snowplow...

  • @paulpallaghy4918 · a year ago · +4

    This debate is actually quite sad. Both sides are right in a way. But Chomsky is now focusing on the "scientific contributions" of GPT/LLMs to linguistics, whereas that is not what AI is primarily about today. Today most of us want NLU that works. We couldn't care less about traditional linguistics, despite most of us NLU guys being nostalgic fans of it.
    In reality GPT-3 is damned good and the best NLU we have today.
    Gary Marcus is quite disingenuous too. He will hardly agree that LLMs are useful for anything and essentially claims LLMs are useless because they're not perfect.
    Neither of them appreciates that understanding non-mystically emerges in these systems because it aids next-word prediction.

    • @jimgsewell · a year ago · +3

      I share your enthusiasm for these new ML models and am blown away by the speed at which they are advancing. I’m certain that they will provide far more utility than either of us can even imagine. Yet I doubt that even you think that they teach us anything about intelligence.

    • @cosmickillswitch · a year ago · +1

      completely agree with what you are saying Paul. 👍

  • @dragomirivanov7342 · a year ago · +2

    Oh boy. This video really aged like fine milk!

  • @JohnDlugosz · a year ago

    Here I am in March 2023, and GPT-4 has been released. It appears that GPT-4 is well-versed in Chomsky's views on deep learning models. It dawns on me that there is a great irony: if he didn't say anything surprising in this brief talk, then it could easily have been given by GPT-4 expressing Chomsky's position and imitating his style. Meanwhile, other technology already exists to deep-fake a video, and having a face that is barely seen under hair, doesn't actually move much, and is only seen through a glitchy webcam video, a face most people are not familiar with, all helps make it easier. And I think I just started a new conspiracy theory.

  • @ThisIsToolman · a year ago · +16

    This is the most interesting discussion of AI that I have heard. I would like to hear a discussion of how they would programmatically implement a solution to the problem they outline.

    • @Anyreck · a year ago

      Very valuable and important points made by the two speakers. A call for getting back to the drawing board with AI. I presume the fact that we don't yet know how humans come to understand the world and the generalizable principles of language is going to hold back properly useful & smart AI.

    • @ThisIsToolman · a year ago

      @@Anyreck, I worry that they will race ahead without solving the problem and wind up with a beast that we won’t control. It will control us.

    • @RubelliteFae · a year ago

      @@Anyreck Not necessarily back to the drawing board. It seems to me that neural networks are one layer of mind. People will make plug-ins for things like persistent memory, modules which mimic innate language structures (i.e., identifying and coding pieces of language to real-world objects and their features), ethics layers, etc.
      ChatGPT is ultimately a large decision tree with weights added to tailor the output. It just happens to do that for language. But it could be trained on anything. Like Chomsky said, neural nets have been used to figure out protein folds from sequences. So, of course, making an actual thinking machine will need other aspects than that.

  • @antennawilde · a year ago · +8

    Don't be too proud of this technological terror you've constructed. A computer's ability to learn a language is insignificant next to the power of the Force.

    • @Will_Moffett · a year ago · +1

      This was kinda funny, but then I noticed you've got a Yoda avatar while you are doing Vader. I stopped laughing.

  • @mrtienphysics666 · a year ago · +1

    What about passing the Turing test?

    • @MrAndrew535 · a year ago

      The Turing test is all but outdated. It is no longer a question of writing a programme that can convincingly emulate a human but to convince AI that a human can convincingly emulate a machine.

    • @MassDefibrillator · a year ago

      Read the paper that Chomsky referenced from Turing; it's where he introduces the Turing test and points out that it's an interesting engineering standard, nothing more.

  • @zweer13 · 11 months ago

    There are contributions. One is the compression and representation of data. Another is the generation of pieces of code, which even if only approximately correct can be tested and combined in the automatic generation of programs. It will also learn about physics, if it is trained on such data.
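
The point about approximately correct generated code being testable can be sketched as a generate-and-test loop. (A minimal illustration: the candidates below are hand-written stand-ins for model output, since no actual model is invoked here.)

```python
def passes_suite(func, cases):
    """Run func against (input, expected) pairs; a crashing candidate also fails."""
    try:
        return all(func(arg) == expected for arg, expected in cases)
    except Exception:
        return False

# Hand-written stand-ins for generated candidate implementations of abs().
candidates = {
    "negates_everything": lambda x: -x,
    "identity": lambda x: x,
    "correct_abs": lambda x: x if x >= 0 else -x,
}

suite = [(3, 3), (-4, 4), (0, 0)]
accepted = [name for name, func in candidates.items() if passes_suite(func, suite)]
print(accepted)  # prints ['correct_abs']: only the candidate passing every case survives
```

Even when most generated candidates are wrong, a cheap test suite filters them automatically, which is why "approximately correct" generation can still be useful.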

  • @lolilollolilol7773 · a year ago · +11

    Very good and realistic discussion of the current state of AI. Gary Marcus knows what he is talking about.

  • @Morris_MK · a year ago · +6

    GPT can do text-to-code in most computer languages. That's more than enough "help in engineering".

    • @chunksloth · a year ago

      Chomsky is a career quack. Anyone who takes him seriously has low-quality thinking going on. He will ALWAYS argue from emotion but gussy it up and pretend it's logic and facts.

  • @ChrisJohnson777 · a year ago

    Is he saying that the problem is that it's able to do what it does only with great complexity, so improving upon its capabilities will be very difficult? It needs more finesse and less brute force...?
    Or am I totally wrong?

  • @BrianSweeney1985 · a year ago · +1

    I understand their concerns with the potential problems brought about by generative AI; those should be readily apparent. And I am on board with Chomsky's claims that our current varieties of AI (sorting algorithms and generative AI) don't really add a whole lot to our corpus of understanding of cognition. But does he not see these things as reasonable iterations toward useful general AI? And either way, would he see general AI as having value?

    • @MadsterV · a year ago · +1

      He got electric light and complained that it's not the sun.
      The advances in AI are amazing and coming at an incredible speed, so much so that a chunk of what they say is already outdated.

    • @BrianSweeney1985 · a year ago · +1

      @@MadsterV Usually I'm pretty on board with his opinions, but here he takes a narrow view of the situation.

    • @MadsterV · a year ago · +1

      @@BrianSweeney1985 No gods or kings, only man.
      Everyone is fallible, especially when WAY OUT of their domain. He's been misfiring for a while, though.

    • @stevej.7926 · a year ago

      @@MadsterV That is an evil thing you said. You probably don't realize it. And I'm not calling you evil. But be careful out there.

    • @MadsterV · a year ago

      @@stevej.7926 Care to explain yourself? Or do you just go around randomly calling people evil for no reason?