Moravec's Paradox - Why are machines so smart, yet so dumb?

  • Published 16. 05. 2024
  • Learn about Roborace's autonomous racing cars here:
    bit.ly/V_YTShowMeHowItWorks
    Join the Roborace mailing list for the latest updates about their autonomous cars:
    bit.ly/V_RoboraceSignUp
    Follow Roborace on instagram to check out their latest models:
    @roborace
    roborace?h...
    Hi! I'm Jade. Subscribe to Up and Atom for physics, math and computer science videos!
    SUBSCRIBE TO UP AND ATOM / upandatom
    Visit the Up and Atom Store
    store.nebula.app/collections/...
    Follow me @upndatom
    Up and Atom on Instagram: / upndatom
    Up and Atom on Twitter: upndatom?lang=en
    A big thank you to my AMAZING PATRONS!
    Tyler White, Thibaud Peverelli, Purple Penguin, Chris Flynn, Ofer Mustigman, Daeil Kim, Harsh Tank, Alan McNea, Simon Mackenzie, Sachin Shenoy, Yana Chernobilsky, Shawn, Israel Shirk, Lynn Shackelford, Richard Farrer, Adam Thornton, Dag-Erling Smørgrav, Andrew Pann, Anne Tan, Joe Court, Lim Yu Leong, Huw James, Michael Dean, TweakoZ, Ayan Doss, Chris Amaris, Daniel McGown, Matt G, Timur Kiyui, Ayan Doss, Broos Nemanic, John Satchell, John Shioli, Sung-Ho Lee, Todd Loreman, Susan Jones, Bobby Butler, Matt Harden, Rebecca Lashua, Pat Gunn, Quentin WATIER, George Fletcher, Jasper Capel, Luc Ritchie, Elze Kool, Aditya Anantharaman, Frédéric Junod, Vincent Seguin, Paul Bryan, Michael Brunolli, Søren Peterson, Ken Takahashi, Schawn Schoch, Alexey Degtyarev, Stephen Denham, Kaylee, Jesse Clark, Steven Wheeler, Jason Smith, Atila Pires dos Santos, Adam J, Roger Johnson, Tim Sorbera, Philip Freeman, Bogdan Morosanu, khAnubis, Jareth Arnold, Simon Barker, Shawn Patrick James Kirby, Simon Tobar, Dennis Haupt, Ammaar Esmailjee, Renato Pereira, Simon Dargaville, Noah McCann and Magesh.
    If you'd like to consider supporting Up and Atom, head over to my Patreon page :)
    / upandatom
    For a one time donation, head over to my PayPal :)
    www.paypal.me/upandatomshows
    Sources
    ai.stanford.edu/~nilsson/QAI/...
    Animations
    Tom Groenestyn
    Music
    www.epidemicsound.com/
  • Science & Technology

Comments • 1.2K

  • @upandatom
    @upandatom  4 years ago +215

    Will machines ever be as intelligent as humans?

    • @RajdeepDhareed
      @RajdeepDhareed 4 years ago +13

      I do not think so... and I also hope they do not become as intelligent as mankind...

    • @RajdeepDhareed
      @RajdeepDhareed 4 years ago +5

      For there is a grave risk involved if they become as intelligent as, or more intelligent than, humans (the singularity point)!

    • @hansisbrucker813
      @hansisbrucker813 4 years ago +60

      There is no physical reason why not. Brains are just biological machines. It is a matter of time I think.

    • @Elephantstonica
      @Elephantstonica 4 years ago +6

      Possibly, if they eventually develop the ability to create a better version of themselves. Like Deep Thought.
      Mind you humans aren’t perfect in so many ways, and that may be impossible to emulate.
      Flawed is beautiful.

    • @Khazam1992
      @Khazam1992 4 years ago +4

      YES,
      - sure, you don't need to be perfect to be a human,
      - if we provided a list of questions (Yes/No) and situations (Do/Don't) and asked both a human and an AI to provide answers, you would never be able to distinguish between them,
      - actually, if you program a program to answer Yes 50% and No 50% of the time, it will be correct 50% of the time on average,
      and any better program will do better than 50:50,
      - actually, if you think about it, if we prevent both the human and the AI from seeing the question asked, neither will ever do better than guessing an answer, which makes them indistinguishable. Any extra information acquired by the human or the AI gives no advantage to one over the other: suppose the information is that the ratio of Yes to No answers is 51 to 49. Sure, using such information gives you an advantage, but notice that both the human and the AI can exploit it to its full potential.
      I guess you can build on this logic to induce that intelligent machines can be as good as a human.
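The 50/50 claim above can be checked with a quick simulation (a minimal sketch; the function name and the answer key below are illustrative): a guesser answering uniformly at random is right about half the time on average, whatever the true answers happen to be.

```python
import random

def random_guesser_accuracy(true_answers, trials=10_000):
    """Fraction of correct answers for a uniform yes/no guesser,
    averaged over many independent trials."""
    total = 0
    for _ in range(trials):
        guesses = [random.choice(["yes", "no"]) for _ in true_answers]
        total += sum(g == t for g, t in zip(guesses, true_answers))
    return total / (trials * len(true_answers))

# Any fixed answer key works: the blind guesser still hovers around 50%.
answers = ["yes"] * 30 + ["no"] * 70
print(random_guesser_accuracy(answers))  # close to 0.5
```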

  • @michaelhorning6014
    @michaelhorning6014 4 years ago +218

    "My computer beat me at chess. But it was no match for me at kickboxing."
    -Emo Philips

    • @theTavis01
      @theTavis01 4 years ago +8

      I think DARPA's robots could pretty easily advance to the point of winning at kickboxing

    • @philippesantini2425
      @philippesantini2425 4 years ago +1

      LMAO...thanks for the laugh! ;)

    • @moguldamongrel3054
      @moguldamongrel3054 4 years ago +1

      This is fuqing hilarious

    • @moguldamongrel3054
      @moguldamongrel3054 4 years ago

      Bill Tavis Till I hit it with an EM weapon; then it's a pile of metal.

    • @theTavis01
      @theTavis01 4 years ago

      @@moguldamongrel3054 well if weapons are allowed, what about their mounted gun turret ripping you apart with a spray of bullets before you get within 50 meters?

  • @jeffvader811
    @jeffvader811 4 years ago +592

    In the words of Von Braun: _"The best computer is a man, and it’s the only one that can be mass-produced by unskilled labor."_

    • @silkwesir1444
      @silkwesir1444 4 years ago +10

      if only he knew...

    • @jackielinde7568
      @jackielinde7568 4 years ago +35

      This quote is from a man whose best accomplishments in life have an asterisk next to them because of his involvement with the Nazi Party in WWII. Sadly, I haven't seen any evidence that Von Braun wasn't complicit in things like the use of slave labor. As a fan of spaceflight and NASA, I wish I could support him as others have. But, as a human and a Jew, I can't.

    • @allthenewsordeath5772
      @allthenewsordeath5772 4 years ago +36

      Jack Linde
      “ once the rockets are up, who cares where they come down?
      That’s not my department.”
      Wernher Von Braun

    • @bazinga1964
      @bazinga1964 4 years ago +15

      @@allthenewsordeath5772 'You too can be a great hero, once you learn to count backwards to zero. In German, or English I know how to count down, und I'm learning Chinese' - Wernher Von Braun

    • @allthenewsordeath5772
      @allthenewsordeath5772 4 years ago

      Greennou99
      You’re goddamn right.

  • @KlaasDeforche
    @KlaasDeforche 4 years ago +168

    I think you are missing something: our brain contains 'hardware' that makes certain tasks easy. Driving is easy because we evolved from branch-swinging ancestors whose brains developed specialised parts for navigation. Vision is easy because (probably large) parts of our brain are specifically built for and assigned to this task. It comes so naturally that we cannot even explain how we do it.
    The things we are most proud of (doing math, advanced problem solving, etc.) we are also worst at. That's because we don't have specialised hardware for doing, say, square roots of numbers.
    Doing basic human stuff isn't easy; we just make it look easy.

    • @jandresshade
      @jandresshade 4 years ago +3

      Another question would be whether the brain and human intellect can be seen as a Turing machine (TM) or an equivalent to a TM; that question could settle whether a computer can or cannot achieve human intelligence.

    • @Blox117
      @Blox117 4 years ago +1

      That's what neural networks do

    • @dontdoit6986
      @dontdoit6986 4 years ago +9

      Human driving learners have had at least a decade of learning the concept of a boo-boo. A human approaches this task with an understanding that driving into the wall should be avoided, and also an understanding of the physics of a car moving forward.

    • @hedgehog3180
      @hedgehog3180 4 years ago +5

      Also, we approach a lot of math in a way that makes it really difficult: when most people want to find a percentage of something, they don't just move the decimal point but try to calculate it in some other way. Computers basically always do math the simple way, and they have the advantage of super speed, so they have time to do every single little step. A computer never actually thinks about the math problem; it's simply an arrangement of electronic circuits that solve the problem. So just as you never need to think about moving the decimal point, the computer never needs to think to find the square root of 5000. Nothing stops me from solving the problem in the same way, but unlike a computer I don't run at several billion cycles per second, so it would take me forever to solve it like that.
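The "many trivial steps done fast" point above can be made concrete with Newton's method for square roots: one simple arithmetic step, repeated until the answer is close enough (a sketch for illustration, not how any particular CPU actually implements square roots).

```python
def sqrt_newton(x, tolerance=1e-10):
    """Approximate sqrt(x) by repeating one trivial step: average the
    current guess with x / guess. Each step is simple arithmetic;
    the power comes from repeating it very quickly."""
    guess = x
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0
    return guess

print(sqrt_newton(5000.0))  # ≈ 70.710678...
```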

    • @jaroslavjandek8365
      @jaroslavjandek8365 4 years ago +4

      Not sure "specialized HW" is the best analogue, but it is close enough? It is more like a specialized design of neural networks - e.g. CNNs (great for image recognition, and actually inspired by the architecture of the visual cortex in monkeys), LSTMs (great for speech recognition), etc.
      The human brain is a huge ensemble of neural networks of many types.
      Regarding those "simple" tasks many people mention - there is a TON of activity in the brain while performing these "simple" tasks (as opposed to comparatively little activity when solving those "hardcore" math problems).
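The specialised-architecture idea above can be illustrated with the core operation of a CNN: one small shared filter slid across the whole input, so the same few weights are reused at every position (a minimal plain-Python sketch with an illustrative `conv1d` name, no framework).

```python
def conv1d(signal, kernel):
    """Slide one small shared filter across the input. The same few
    weights are applied at every position - the CNN 'weight sharing'
    idea that makes these networks efficient at vision-like tasks."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# An edge-detecting filter [-1, 1] responds where the signal jumps.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```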

  • @cristiangamboa2037
    @cristiangamboa2037 4 years ago +4

    This girl is a great teacher; she presents information in a clear and interesting way.
    But even more than that, she is SO AMAZINGLY charming when she speaks and moves that it is almost addictive to watch.

  • @JimGiant
    @JimGiant 4 years ago +154

    I disagree with the idea that a human can learn to drive a car in 15 hours; it takes many, many years.
    Before you formally learn to drive a car, you spend years learning to control your body, communicate, develop impulse control, understand physics intuitively, observe driving from the passenger seat, ride a bike, etc.
    The major difference is that we are far more able to transfer existing skills to new situations. For us, driving is like a tiny mod to an existing program rather than a whole new program.

    • @1pcfred
      @1pcfred 4 years ago +16

      I drive fine. Anyone that doesn't like how I drive should stay off the sidewalk!

    • @NickRoman
      @NickRoman 4 years ago +15

      Yes, how many AIs have been continually working on learning anything for, say, 16 years? It seems like if it can't be trained in a few hours to a few weeks, it's labeled a failure. Maybe for good reason, like it doesn't seem to be making any progress. But what if humans were judged the same way? Would any of us make it through our first year? The point is, I wonder whether we humans even know how to train and evaluate AI.

    • @1pcfred
      @1pcfred 4 years ago +4

      @@NickRoman some AIs have been worked on for years. It took IBM about a decade to build a computer that could beat a chess master. I doubt they tossed their research out after each attempt. You could say the whole field of AI research is a continuation of a project that began 40 years ago.

    • @monad_tcp
      @monad_tcp 4 years ago +2

      @@1pcfred sidewalks are to be used for my drifting skills

    • @1pcfred
      @1pcfred 4 years ago +2

      @@monad_tcp bonus surface!

  • @rodbenson219
    @rodbenson219 a year ago +2

    Kudos to you for your sponsor ad. It was so completely integrated into the topic that it seemed part of your overall documentary.

  • @ScienceAsylum
    @ScienceAsylum 4 years ago +39

    OMG! Your Siri has an Australian accent! WHAT?! 🤯

    • @vast634
      @vast634 4 years ago +8

      It's more that Australians sound like Siri

    • @GuinessOriginal
      @GuinessOriginal 3 years ago

      Yeah it definitely has. Is it real? If so, why isn't there a British one?

    • @GeraldSquelart
      @GeraldSquelart 3 years ago +1

      Settings -> Siri & Search -> Siri Voice. I see these options: American, Australian, British, Indian, Irish, South African. 😉

    • @mattgraves3709
      @mattgraves3709 2 years ago

      I imagine it is changeable.
      My Google assistant hails from Australia.

    • @tweaker1bms
      @tweaker1bms 2 years ago +2

      @@GeraldSquelart I'm still waiting on the Samuel L. Jackson and Gilbert Gottfried options :P

  • @dwightk.schrute6743
    @dwightk.schrute6743 4 years ago +129

    It never occurred to me that Siri could be Australian.🤨

    • @Elephantstonica
      @Elephantstonica 4 years ago +4

      Dwight K. Schrute
      I always have Siri set to Aussie girl.
      She sounds far more personable than the Brit.

    • @dwightk.schrute6743
      @dwightk.schrute6743 4 years ago +1

      @@Elephantstonica Are you British?

    • @silkwesir1444
      @silkwesir1444 4 years ago +1

      Not even two Australians sound alike, I've heard they have some trouble with speech recognition down under...

    • @Elephantstonica
      @Elephantstonica 4 years ago

      Dwight K. Schrute
      Yep

    • @dwightk.schrute6743
      @dwightk.schrute6743 4 years ago +1

      @@Elephantstonica American over here. Yeah, I can definitely see why you'd choose an Australian accent over a posh British one. I wouldn't mind a cockney accented one myself.

  • @paoloyanes534
    @paoloyanes534 4 years ago +3

    Thanks for this awesome review of logic and inference =) And on that topic, A.I.! Excited for more computer science vids!!

  • @ShubhamSharma-ve4ei
    @ShubhamSharma-ve4ei 4 years ago +2

    Your content is always unique and innovative; loved the way you presented it :)

  • @hellomeow9590
    @hellomeow9590 4 years ago +234

    Computers aren't smart. They're fast.
    Computers are incredibly fast at solving problems if you tell them how to do it.

    • @jobroray
      @jobroray 4 years ago +13

      Which is what we often define as being smart

    • @peetiegonzalez1845
      @peetiegonzalez1845 4 years ago +58

      The curse of the Software Engineer. Computers do exactly what you tell them to do. Which, unfortunately, usually, is not exactly what you wanted them to do.

    • @Elephantstonica
      @Elephantstonica 4 years ago

      JoJo Ray
      Or being witty, fashionable or having one’s shirt tucked in.
      A computer wouldn’t get any of that though. A shirt probably wouldn’t even be a parameter, let alone tucking in.

    • @NickCombs
      @NickCombs 4 years ago

      Quick-witted, you might say.

    • @landsgevaer
      @landsgevaer 4 years ago +1

      AI is more generic, I would argue: you don't need to explicitly tell them how to solve a problem, you need to tell them how to learn.

  • @lythd
    @lythd 4 years ago +23

    The Siri questions at the end XD
    Good video as always, good job :)

  • @PeterManger
    @PeterManger 4 years ago +5

    I worked with a person who said they finally got how to use software developers (me!)... she said "If I think it's hard and can't be done, get Pete to do it - but if I think it's easy then don't bother Pete with an impossible task". This led to one of the best business relationships I ever had.

  • @javanpoly4901
    @javanpoly4901 2 years ago

    I really appreciate your teaching ability and engaging way. Thanks for all the hard work you put into each presentation 😘

  • @naveenraj2008eee
    @naveenraj2008eee 4 years ago +2

    Hi Jade,
    I learned a new topic, Moravec's paradox... and your explanation of neural networks is clear to me.
    In the future we might have an A.I. like J.A.R.V.I.S., or the bartender robot in the film Passengers.
    25 years ago scientists were sceptical about creating ordinary robots, but now they are rocking the world in manufacturing and industries.
    Who knows, we might find a way to make neural networks work just like the brain and its complex connections.
    As usual, your point-by-point explanation is easy to grasp.
    Thanks for making Monday a knowledgeable day. 🙏👍😊

  • @markgacoka9704
    @markgacoka9704 4 years ago +15

    12:37 that's what I said.
    Jokes aside, this is a lot of info packed in one video. Although I suppose you knew almost all the things you said, I can imagine the amount of research that went into making this video.

  • @michaelcoleman4169
    @michaelcoleman4169 4 years ago +5

    Isaac Asimov wrote one of my favorite books, a collection of short scientific essays intermingled with science fiction shorts called "The Edge of Tomorrow". One of the science fiction shorts is called "The Last Question"; reportedly one of Asimov's favorite stories, it fits in with several other stories he wrote and can be found in other books he published. Your questions to Siri reminded me of it. You may find that story, his other Multivac stories, and the 'I, Robot' series worthy reads.

    • @frankschneider6156
      @frankschneider6156 4 years ago +1

      I know nobody who doesn't like "The Last Question".

    • @hedgehog3180
      @hedgehog3180 4 years ago

      One thing that's kind of funny and ironic about his stories is that when he was writing them he thought it would never be possible to make small-scale computers based on electronics, which is why he came up with the positronic brains of his books. But then, when he was older, the microcomputer revolution happened, followed by the home computer revolution, and he lived to see the early days of the internet.

  • @bigpopakap
    @bigpopakap a year ago +1

    8:05 the flower analogy is so good!!

  • @som9428
    @som9428 4 years ago +2

    Awesome, I got to learn something totally new thanks to you.

  • @xpqr12345
    @xpqr12345 4 years ago +40

    Question: Why did the chicken cross the road?
    Answer: To get to the other side.
    Question: Why did the chicken cross the Möbius strip?
    Answer: To get to the other si... Oh never mind!

    • @amehak1922
      @amehak1922 4 years ago +8

      xpqr12345 to get to the same side

  • @MedlifeCrisis
    @MedlifeCrisis 4 years ago +36

    Hey, do you think neural networks will replace doctors? Will you make a video about that? 😉 PS your animation is getting better with every video. And so is the content.

    • @JimGiant
      @JimGiant 4 years ago +4

      I think you'll be long since retired by the time that happens but you might be using AI systems to assist you on a regular basis.

    • @upandatom
      @upandatom  4 years ago +11

      Hmm it's definitely on my list of videos to make... If only I knew a doctor to make it with!

    • @paulsilsby5355
      @paulsilsby5355 4 years ago +2

      I am hoping that they will also replace Lawyers soon

    • @JimGiant
      @JimGiant 4 years ago +3

      @Paul Silsby While I agree with your sentiment towards lawyers, this could be a terrifying prospect. Hell, even on YouTube we've seen how bad AI is at determining whether videos should be demonetised or at recognising copyright infringement.

    • @nobodygh
      @nobodygh 4 years ago +3

      Neural networks will help doctors diagnose, but never replace doctors as a whole. There is also more to being a doctor than just diagnoses.

  • @compdave7630
    @compdave7630 4 years ago +1

    Thank you. Another wonderful presentation.

  • @jindagi_ka_safar
    @jindagi_ka_safar 3 years ago

    I simply love the way you add those wonderful bits of humor in your videos.

  • @jamesdriscoll9405
    @jamesdriscoll9405 4 years ago +23

    Perhaps an explanation of how biological neurons work is in order?
    I think AI will not be the same as human intelligence. Once the neural mechanism is fully understood it will be improved upon, and be optimized in ways unforeseen.
    I don't fear that robots will replace us, but I do see trouble structuring a world where everybody sees the benefit.
    Thank you, Jade, for a thought provoking video!

    • @aduty23
      @aduty23 4 years ago +2

      James Driscoll Biological neural nets are vast in terms of memory density. I've read that each neuron in the human brain can be connected to 10-10,000 others. Modeling this in a modern computer system requires petabytes of memory and has only been done for the simplest of organism simulations (like certain species of worms).
      All serious applied research is very specialized in its targeting of tasks for different kinds of neural nets, and almost all the techniques coming out are variations on work that was done 30, 40 and even 50 years ago. There's a prevailing belief that there's just a memory barrier before the technique will bear the fruit of non-organic intelligence. And there's no guarantee that we won't discover some other mechanism in biology that aids our own intelligence and makes the technique either worthless or actually workable.
      As it is, there's no explanation as to why a driving novice understands that holding the brake gives time to think actions through, while a neural net just slams into a course wall until its weight table has data showing that not doing so lets it get further. Perhaps a decade of watching others drive helps, and a big enough neural net would be able to accomplish that.
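The petabyte figure above can be sanity-checked with a back-of-envelope calculation (the counts below are commonly cited rough estimates, not measurements, and the bytes-per-synapse choice is an illustrative assumption):

```python
NEURONS = 86e9              # commonly cited estimate for the human brain
SYNAPSES_PER_NEURON = 7000  # rough average; real values vary widely
BYTES_PER_SYNAPSE = 4       # e.g. one 32-bit weight per connection

# Naive storage cost of one weight per synapse, ignoring all structure.
total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e15:.1f} petabytes")  # ≈ 2.4 petabytes
```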

    • @hedgehog3180
      @hedgehog3180 4 years ago +1

      I mean, the problem is that we don't really know how neurons work; we can see their basic mechanical functions, but how that produces our thoughts and feelings is not something we know.

    • @jorgepeterbarton
      @jorgepeterbarton 2 years ago +2

      @@hedgehog3180 electrochemical signals *do something* and it looks like a net with specific areas...
      Beyond that, anyone who tells you they know doesn't know very much! We don't even truly know the mechanisms of the psychiatric drugs we prescribe.

  • @AgentOccam
    @AgentOccam 4 years ago +3

    Great video as usual! I tried to see what I could get Siri to answer too. I thought I had them at "Hey Siri, is this statement false?"
    But clearly Apple had thought of that and Siri answered by providing links to the liar's paradox that "you might find interesting".
    So I tried again: "Siri, do *you* think this statement is false?"
    Clearly prepared again, the answer came back that it's what *I* think that matters.
    Believing that I would bring down ALL OF TECHNOLOGY by my brilliant next question, I asked:
    "Siri, my opinion is that it is *your* opinion that matters."
    To which Siri simply replied: "I don't understand".

  • @yoloswag6242
    @yoloswag6242 4 years ago +1

    omg so glad that you exist. I just found this channel and omgomgomg. Would you consider doing a science podcast? thankssss

  • @thaweezl8852
    @thaweezl8852 3 years ago

    I have just discovered your channel. Thank you for all you are doing.
    ...and keep playing GO.

  • @bewareofsnow
    @bewareofsnow 3 years ago +5

    Some really interesting points in this video. Watching my 18 month old daughter learn to do new things on a daily basis has been fascinating, and I think it's true that most neural networks could never hope to match her performance given only the relatively small amount of input data. I wonder if the womb is a great place to learn some basic things in the absence of other inputs, so that a child can enter the world with some of their neural networks decently pre-weighted?

  • @bluc0bra
    @bluc0bra 4 years ago +6

    Great vid! As an electronics engineer I see no reason computers could not eventually reach or even surpass human-level intelligence. The problem with neural networks is that a lot of the time we don't have a clue how they do what they do, which means we may not have as much control over a future advanced AI as we would like.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 5 months ago

      It's like looking at a decision tree, millions of lines long, with no metadata or comments. One person cannot compartmentalize information in such a format, even in a shorter snippet.

  • @dimomarkov8937
    @dimomarkov8937 4 years ago +212

    "A person can learn to drive a car in 15 hours without crashing into anything"
    That would be true, if you could give a car to a baby and achieve the same result. One needs a do a lot more implicit learning in order to seemingly achieve something with relative efficiency. :)
    A person might need a thousand hours to learn to play Go in a competitive way, or 10 years to become a good doctor. But an AI has the potential to do it much faster and more efficient, while also employing simulations, instead of experimenting on reality.

    • @NathanBonselaar
      @NathanBonselaar 4 years ago +27

      Most people formally learn to drive at around 16 years of age or older. At that point we've presumably all been in a car multiple times and have seen cars driven and have a basic understanding of things like lanes, lights, and signals as well as at least a rudimentary understanding of how the pedals and steering wheel work. More accurate would be "A person can learn to drive a car in 15 hours after watching and learning for over a decade without crashing into anything."

    • @nnslife
      @nnslife 4 years ago +9

      @@NathanBonselaar "Most people formally learn to drive at around 16 years of age or older"
      Keep in mind though that not everyone in the world is from US:)
      I am from Moscow (Russia). Most of my peers (from Moscow State University) don't drive a car at the age of ~25.

    • @NathanBonselaar
      @NathanBonselaar 4 years ago +7

      @@nnslife Sure, some people learn to drive later in life, some earlier, and some not at all. It might even be fair to say that most people never learn to drive at all; I don't know the stats on that.
      It would have been more accurate for me to replace "Most people" with "Most people who do drive" but I felt that was a bit too wordy and could be left as implied - apparently I was wrong there. Regardless of all that, if any of those friends of yours do learn to drive then they're still older than 16 so my statement holds.

    • @ShadowPhoenix82
      @ShadowPhoenix82 4 years ago +5

      @@NathanBonselaar It's ironic that you truncated your statement to avoid being wordy, when the original statement you are correcting was likely doing the same thing.

    • @NathanBonselaar
      @NathanBonselaar 4 years ago +4

      @@ShadowPhoenix82 I suppose it is. Probably a little hypocritical as well.

  • @Krmpfpks
    @Krmpfpks 4 years ago +2

    Wonderful video, I love your style of presenting. A short hint at adversarial learning would have been nice though, as it reduces the need for the large training sets you often cited as a downside.
    I am sure you thought of including it and scrapped it to keep things simple.
    But to address viewers with different levels of knowledge of these topics, it's sometimes very useful to give them a pointer as to where to continue looking.
    Also, output neurons that indicate the type of cheese would have clarified your network animation.

  • @wakledodd
    @wakledodd 4 years ago +4

    Thank you Jade, this was really interesting! The question is (I think): will machines be like us (probably not), or will they be something equal to us (and this might be scary)?

  • @photondance
    @photondance 4 years ago +128

    Okay, but Mia is objectively adorable.

    • @DanteBarboza
      @DanteBarboza 3 years ago

      Yes

    • @Novasky2007
      @Novasky2007 3 years ago

      Quantifiably - n²+x where n=Fluff and x = Snuggles

    • @Joseph928100
      @Joseph928100 3 years ago

      Ok but dogs are objectively the best^^

    • @Tubluer
      @Tubluer 3 years ago

      @@Joseph928100 Except when they eat the owner's kids.

    • @Tubluer
      @Tubluer 3 years ago +3

      @@Novasky2007 It's actually n²+x over r squared, which is the distance to the cat. As r goes to zero, Fluff and Snuggles increase without limit.

  • @Noneblue39
    @Noneblue39 4 years ago

    That's such a good point about robots! Very informative!

  • @architmeta
    @architmeta 3 years ago

    Dear Jade, this was a really well-explained concept! It's been a while since you uploaded a video. I hope you are well. Cheers!

  • @jjer125
    @jjer125 4 years ago +26

    This is all well illustrated and interesting, but when talking about neural networks, I think you missed an important point.
    The way an NN learns can vary enormously, and not all architectures require the same amount or kind of training.
    Furthermore, for a simple NN like the one you described to learn, the data has to be labeled; a human has to previously specify, in some way, the goal to be achieved.
    This is what I think would have been worth mentioning: we have a hard time making AIs that learn things we can't clearly define. The few solutions include giving the AI an enormous amount of data classified by a human, but that is essentially the same as specifying a function to maximize.
    This is what makes AGI so hard: how do you teach an agent things that we can't really, fully define, like emotions or the ambiguous intricacies of language?

    • @saumitrachakravarty
      @saumitrachakravarty 4 years ago +1

      Like the physicist Freeman Dyson, I also think that to create an AI capable of being human, we need to program an analog computer, not a digital one, because digital computation is by default lossy in terms of data manipulation and cannot map anything to infinite accuracy. It also has to be able to do a massive amount of parallel processing.

    • @autonomous2010
      @autonomous2010 4 years ago +5

      Humans often unknowingly make assumptions that defy logic and accept them as logical just because someone else said so. Yet if a machine exhibits this same behavior, we would never know whether it's flawed by design or by data.
      The tricky thing is that people want AGI to be "like humans" without actually being human. That is inherently problematic, because unless it behaves 100% like a human, it will never be accepted as AGI. For it to be "like humans", we would have to accept that it could be highly irrational and unpredictable. A catch-22 of desires, since something potentially unpredictable is very hard to market as a product.

    • @jaydeepvipradas8606
      @jaydeepvipradas8606 4 years ago +2

      @@saumitrachakravarty The human brain is quantized by electrical signals, even if they are fast. Also, one can simulate a continuous domain in computers, like a fuzzy space. Analog computers are not much of a solution for AI.
      Also, neural networks can do unsupervised learning, but it is still not very successful.
      Learning algorithms could be the problem.

    • @GrandActionPotential
      @GrandActionPotential 4 years ago +1

      @@jaydeepvipradas8606 The human brain is also quantized by molecular units and their conformations. If we only consider action potentials, we are forced to treat neural inputs as 3-state devices with active, inhibited and quiescent states.

    • @bazinga1964
      @bazinga1964 4 years ago +1

      More data, faster TPUs, quantum computing, better optimization. We'll get there; I'd hesitate to put a timeframe on it, but I don't see why not. The human brain has had quite a long head start compared to industry, but technology is still improving pretty reliably.

  • @JohnDoe-tx8lq
    @JohnDoe-tx8lq 4 years ago +22

    (Chicken + Road + Car ) * (fat + heat + seasoning) √ (500% profit) = Chicken Nuggets.
    So why do we still need people in fast food outlets?!? 😎

    • @ShadowPhoenix82
      @ShadowPhoenix82 4 years ago +4

      What? There are people in fast food outlets?! You mean... SOYLENT GREEN IS PEOPLE?!?! 😱

    • @Cyberplayer5
      @Cyberplayer5 4 years ago

      Thanks for that little nugget of humor.

    • @vincegonzalez2171
      @vincegonzalez2171 4 years ago

      People are better at diagnosing unexpected things going wrong.

    • @warpdrive9229
      @warpdrive9229 4 years ago

      Lazy people, that's why. I rarely eat out unless the situation demands it.

    • @darrell20741
      @darrell20741 Před 4 lety

      AI eating humans or human eating AI? Trust me, I am not a bot!!!

  • @jorgerangel2390
    @jorgerangel2390 Před 4 lety

    It has been a while since one of your videos! Thanks!

  • @ahmedabied4524
    @ahmedabied4524 Před 4 lety

    This is my favorite channel ever and I wanna share it everywhere.

  • @JimFortune
    @JimFortune Před 4 lety +11

    What happens if you lock Moravec and Heisenberg in a box with an unstable isotope....

    • @siquod
      @siquod Před 2 lety +1

      Moravec will turn himself into a time traveling robot and summon Greg Egan to employ that plot device which somehow rotates the quantum amplitude so that the atom decays in no possible universe. Heisenberg protests that that's not how actual quantum physics works, and to prove his point, dies on the spot, but with uncertain momentum.

  • @alanliang9538
    @alanliang9538 Před 4 lety +6

    We were pre-programmed with unimaginably complicated algorithms to do the so-called "simple tasks", but we weren't given any for chess.

  • @brycethompson1556
    @brycethompson1556 Před 4 lety

    Hi, I'm new to your channel but I love your presentation style. You've got a new subscriber.

  • @ruelprakash7696
    @ruelprakash7696 Před 4 lety +1

    You're a very good Teacher. Good Job.

  • @xtieburn
    @xtieburn Před 4 lety +5

    Our computers also appear to be very poorly built for these tasks. It's only relatively recently that we've had the increase in cores, and our parallel processing is still fairly primitive and often really difficult to develop for.
    So we are stuck ramming through a mind-boggling number of calculations, largely in series, to catch up to a human brain that, while comparatively incredibly slow (neurons fire a couple hundred times per second, which is many orders of magnitude slower than even a poor CPU), is so parallel in its operation that by many measures it still defies the most powerful supercomputers ever built.
    Which is why I think things like SpiNNaker are so interesting for AI. SpiNNaker (Spiking Neural Network Architecture) is a neuromorphic computer, one of the biggest, if not the biggest, ever made. While none of the principles are brand new*, such a large-scale dedicated piece of architecture will undoubtedly see a huge advancement in this model of AI, as well as having potential for many other more general processing problems. This of course isn't just great for our technology but is built for increasingly sophisticated simulations of parts of our own brains, for the advancement of neurology and treating ourselves when things go wrong.
    I don't believe that we will have human-like robot AIs any time soon. I think the evidence of the past few decades has shown us just how tough a challenge this really is, and even SpiNNaker running at its most optimistic levels will be a couple orders of magnitude below what each of our brains currently has to hand, but there is a lot of progress being made all the time and this is a very interesting period in the development of AI.
    *It's asynchronous massively parallel processing that uses 'unreliable' spikes to send information across a toroidal structure. (The topology of the network is quite well connected, with redundancy, while remaining quite cheap and easy to produce.) It's built with the expectation that not all these signals will work neatly together or find their target; like the real neurons of our brain, signals may be lost but the overall function carries on. It seems odd that this could be better than the well-timed and assured signals of a traditional computer, but everything is a trade-off, and simplifying how data is sent is what permits such massive and efficient parallel operation.
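    As a toy illustration of the spiking model that neuromorphic machines like SpiNNaker emulate (an editor's sketch with made-up constants, not SpiNNaker code), a single leaky integrate-and-fire neuron accumulates input, leaks charge over time, and emits a spike only when its potential crosses a threshold:

    ```python
    # Toy leaky integrate-and-fire neuron. All constants are invented
    # for illustration; real neuromorphic hardware is far richer.
    def simulate_lif(inputs, leak=0.9, threshold=1.0):
        """Integrate weighted input currents; emit a spike (1) when the
        membrane potential crosses the threshold, then reset to zero."""
        potential = 0.0
        spikes = []
        for current in inputs:
            potential = potential * leak + current  # leaky integration
            if potential >= threshold:
                spikes.append(1)
                potential = 0.0  # reset after firing
            else:
                spikes.append(0)
        return spikes

    print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [0, 0, 1, 0, 0, 1]
    ```

    Note how information lives in the *timing* of spikes rather than in exact values, which is why lost or unreliable signals degrade the computation gracefully instead of breaking it.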

    • @frankschneider6156
      @frankschneider6156 Před 4 lety

      Of course it is. As long as you don't believe in some metaphysical bullshit, the human brain is nothing but circuitry that can be simulated in software. Just a matter of computational power and the software correctly simulating natural behavior.

    • @jandresshade
      @jandresshade Před 4 lety

      ​@@frankschneider6156 That is an interesting question that hasn't been proven. Nowadays we don't even know how human intelligence works, or how consciousness works. So if a machine can simulate human intelligence, we would reach the conclusion that the brain is simply a really complex Turing machine, and at that point the human intellect would have limitations and problems that humans cannot solve.

    • @frankschneider6156
      @frankschneider6156 Před 4 lety +1

      Andres Perez
      That's the beauty of it. If the simulation is accurate enough, it should work without requiring us to understand how it works.
      So you don't simulate intelligence but a brain. If everything is right, and you feed it sufficient data, over time (and after sufficient refinements of the model) an intelligent self-aware consciousness should automatically arise. If you have that, you can play around with it to identify how it, and thus the human brain, works.
      In the past 150 years, we haven't made a lot of progress in really understanding how the brain works, but we know its general anatomy and functionality on a molecular level pretty well. That should suffice to rebuild it in software. If the model is just accurate enough, it should behave like a normal brain. And no, I don't underestimate the complexity of the human brain. Solving this would of course be by far the most complex task we ever had, making the deciphering of the human genome look like child's play.
      I guess building an AI and thereafter finding out how it works (and thus how the human brain works) is the only approach. We use bionics in many areas where evolution solved a problem for us. Same here: nature managed to develop intelligence, so we "just" need to copy it. Once it works, we can reverse engineer it to find out how it works. Something that (for ethical reasons) we don't want to do using humans.

    • @nmarbletoe8210
      @nmarbletoe8210 Před 2 lety

      @@frankschneider6156 " If the simulation is accurate enough, it should work without us requiring us to understand how it works"
      agree :)
      "So you don't don't simulate intelligence but a brain. "
      I think you also need an actual body for intelligence/ self awareness to arise. But that'd be interesting to test. Like you say we don't need to know in advance how it all works...

  • @AdamAlbilya1
    @AdamAlbilya1 Před 4 lety +13

    Missed you, a one month break is way too much.

    • @silkwesir1444
      @silkwesir1444 Před 4 lety +7

      @Mohel Skinberg a little quick to judge, aren't you?

    • @Delibro
      @Delibro Před 4 lety +3

      @Mohel Skinberg Not in this case, Mohel.

    • @silkwesir1444
      @silkwesir1444 Před 4 lety +2

      @Mohel Skinberg btw, Paranoia is especially dangerous, because it feeds back into itself, creating a vicious circle getting more and more intense. if you catch it early, it can be stopped, but once it gains enough momentum, it becomes unstoppable and can ruin pretty much everything.

    • @AdamAlbilya1
      @AdamAlbilya1 Před 4 lety +6

      @Mohel Skinberg It's creepy if you are a creep. I told it out of utter respect and appreciation.

    • @silkwesir1444
      @silkwesir1444 Před 4 lety

      @Mohel Skinberg well I still have hope that things can be improved. nobody is alone. you think you're the only one who always has problems? that it's your fault, that you need to be ashamed of it?
      think again! others have similar problems, and it's NOT (or only partially) their own fault. Neither is it yours.
      Find the others, compare notes, and then plan on what you are going to do about it.

  • @philippesantini2425
    @philippesantini2425 Před 4 lety

    I enjoyed the content and delivery. I'm considering subscribing but I will have to watch a few more to be sure...I sub'd too quickly in the past. Live and learn. ;)

  • @uzairmughal4976
    @uzairmughal4976 Před 4 lety

    I have previously seen your videos but this one surely gets the Sub..!

  • @tobybartels8426
    @tobybartels8426 Před 4 lety +6

    Yeah, Siri wants you to *think* that it doesn't have an answer to that question.

  • @nathanokun8801
    @nathanokun8801 Před 3 lety +3

    A "neural network" as conceived in computational systems is something that is trained by successive approximation using elimination of paths that give poor results (as measured by the goal set by the programmers). It works by generating rules internally on what constitutes a non-poor result and then raising the bar as to what constitutes "good" until it meets the programmers' desires about that single problem. It does not generalize since it is only aimed at one problem. If you want more problems solved, each one has to be trained separately in some set-up where adding new abilities does not degrade those previously mastered. The human brain is the result of almost FOUR BILLION YEARS of living organisms being trained in that manner with the added test that failure to match the needed results ("poor") usually results in the DEATH of said organism, rather strongly forcing the organism's evolution by "dead men (anything living) have no/few kids". If you created a computational system where it burned up with every failure, requiring it to be rebuilt from scratch with the previous latest neural pattern being the only thing retained, it would take a REALLY long time to do such a process indeed. As a result, actual living things retain a great many things internally that were found to be necessary for their continued existence above and beyond what trials they are now undergoing to continue living. Until we learn how to do such a many-path simultaneous training scheme that allows enough critical rule sets to be internalized, we will never have a "generalized" AI neural network that can respond like a child does. Preset patterns of behavior (instincts) and the ability to self-modify such patterns to match current needs (learning and memory) have to be of wide-enough range to allow minimum "normal" behavior. Even human beings have problems generalizing information that does not conform to built-in patterns, which is why genius is so rare...
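    The "successive approximation with a rising bar" process described above can be sketched as a toy selection loop (an editor's illustration with an invented goal and scoring rule, not how any production network is actually trained): candidates that fail to beat the current best are eliminated, and the bar only ever rises.

    ```python
    import random

    # Toy "train by elimination" loop: poor results are discarded and
    # the bar for "good" keeps rising. The target, scoring, and mutation
    # scheme are all invented for illustration.
    random.seed(0)

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

    def score(candidate):
        """Higher is better: how many bits match the goal."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        """Flip one random bit to produce a variant."""
        i = random.randrange(len(candidate))
        return candidate[:i] + [1 - candidate[i]] + candidate[i + 1:]

    best = [0] * len(TARGET)
    bar = score(best)
    while bar < len(TARGET):
        child = mutate(best)
        if score(child) > bar:      # eliminate anything below the bar
            best, bar = child, score(child)

    print(best == TARGET)  # True: only the survivor remains
    ```

    The point of the comparison stands: this loop solves exactly one problem, and nothing learned here transfers to any other goal.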

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy Před 5 měsíci

      There are more research branches in "AI" yet to be explored, breaking down the complex process of cognition. We are very far into pattern recognition and extrapolation, but machine reasoning based on hard knowledge bases has shown promise too.
      A next step that I am curious about is how machine reasoning could utilize soft knowledge bases, allowing for evaluation of seeming contradictions and edge cases.

  • @rickharold69
    @rickharold69 Před 4 lety

    Thx for the video

  • @JANARDHANPYDIMALLA
    @JANARDHANPYDIMALLA Před 4 lety +1

    This is so enlightening.
    Thank you for making this video.
    Love you so much, sis.
    Keep it up!

  • @abrahamvivas9540
    @abrahamvivas9540 Před 4 lety +5

    Neural networks are interpolation with nested functions: change my mind.

    • @columbus8myhw
      @columbus8myhw Před 4 lety +1

      Why would I change your mind? You're right.
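      The claim above can be made literal (an editor's toy sketch with hand-picked weights, not a trained model): a one-hidden-layer network really is just nested function application, output(hidden(x)), producing a smooth curve through data points.

      ```python
      import math

      # "Interpolation with nested functions": a one-hidden-layer network
      # written as literally one function applied to another. Weights and
      # biases below are hand-picked toy values for illustration.

      def sigmoid(z):
          return 1.0 / (1.0 + math.exp(-z))

      def hidden(x, weights=(4.0, -4.0), biases=(-2.0, 2.0)):
          """Inner nested function: affine map followed by a nonlinearity."""
          return [sigmoid(w * x + b) for w, b in zip(weights, biases)]

      def network(x, out_weights=(1.0, 1.0), out_bias=-1.0):
          """Outer function applied to the inner one: the whole 'network'."""
          h = hidden(x)
          return sum(w * a for w, a in zip(out_weights, h)) + out_bias

      # A smooth regression curve, nothing mystical:
      print(round(network(0.0), 6))  # 0.0 by the symmetry of the chosen weights
      ```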

    • @craftpaint1644
      @craftpaint1644 Před 4 lety

      Every programmer knows computers don't think and would be hacked crazy if they did someday.

  • @stojanovik69
    @stojanovik69 Před 4 lety +10

    :)))) In the Serbian language, Moravec means a man from the region of the Morava, a river in southern Serbia.

    • @PhokenKuul
      @PhokenKuul Před 4 lety +3

      It refers to the region in the Czech Republic called Moravia, named after the Morava river. It's a common Czech surname.

  • @lordunderstatement3645
    @lordunderstatement3645 Před 4 lety +2

    One of the major roadblocks... forgive the pun considering the sponsor... to machine learning is the absence of the machine's ability to feel, specifically the machine's ability to feel pain or fear, as well as a sense of self-preservation. Sure a human can learn to drive a car in a reasonable time frame without crashing into anything because the person learning that skill is usually timid of the raw power of the vehicle they are driving as well as frightened of causing damage to themselves or others or to the property of themselves or others. Without that fear and sense of self-preservation, a human would likely fail as spectacularly as a machine at a given task that we would consider simple.

    • @1pcfred
      @1pcfred Před 4 lety +2

      I struck fear into the hearts of computers once. I had a PC that was ticking me off once so I put a 28 oz Estwing hammer though it. None of the other computers that witnessed that act ever gave me a lick of trouble afterwards. True story!

    • @frankschneider6156
      @frankschneider6156 Před 4 lety

      Emotions and intelligence are 2 completely different and completely separate things.

    • @lordunderstatement3645
      @lordunderstatement3645 Před 4 lety

      @@frankschneider6156 You are absolutely correct, which is why it is so difficult to teach a machine to do something or get it to learn to do something. There is no fear of failure, no desire to impress or make the teacher proud, no pride in accomplishment. With a machine in learning everything is simple trial and error with no concern for the consequences of failure nor reward of success.

    • @1pcfred
      @1pcfred Před 4 lety

      @@frankschneider6156 emotions and intelligence may be two completely different and completely separate things but some say they are related to each other. That without emotions you cannot develop intelligence.

    • @frankschneider6156
      @frankschneider6156 Před 4 lety

      If you are 15 you also have no self-preservation, and still most 15-year-old teenagers are considered somewhat intelligent (well, for how intelligent 15-year-old teenagers can be).
      To be serious, I don't think that self-preservation or even consciousness is necessary for creating a strong AI, and emotions certainly aren't. The AI (if we manage to create one) will be some code within a massive computer, so what would self-preservation or emotion be good for? Do you really want a highly developed artificial superintelligence that can get angry for no apparent reason (e.g. like a 15-year-old teenager)?

  • @agaqul6371
    @agaqul6371 Před 3 lety

    Beauty and intelligence and goodness combined to make this video. Thank you, it was a genuine pleasure to watch.

  • @tcaDNAp
    @tcaDNAp Před 4 lety +4

    I love it when people discuss the uses AND the limits of AI and algorithms and stuff... There's still a lot of weird ideas about what machines can and can't do, thanks for educating!
    EDIT: There are few times a sponsor is integrated so smoothly that I actually go and look at them. Cool!

    • @upandatom
      @upandatom  Před 4 lety

      thanks for watching taco cat!

  • @jpe1
    @jpe1 Před 4 lety +3

    This is why Siri will never reply “ate something” when asked “what is the square root of 69?”

  • @joelbecane1869
    @joelbecane1869 Před 4 lety

    Great video, thank you Jade :)

  • @renemino272
    @renemino272 Před 4 lety

    The quality of your videos is getting better and better! Love your content, cheers x

  • @dwightalexander2648
    @dwightalexander2648 Před 4 lety +3

    That subtle Ronald Weasley reference tho.

  • @macsnafu
    @macsnafu Před 4 lety +5

    So, we don't have to worry about machines taking over the world any time soon, I take it?
    Also, great incorporation of the sponsor into the video! It was significant and relevant to the video's topic.

  • @RonLWilson
    @RonLWilson Před rokem

    Interestingly enough, I just bought a book called Soft Computing that describes these two approaches: the first one you mentioned in this video (human-engineered) as hard computing, and the second (machine learning) as soft computing.
    But what seems to be missing in both is what one might call an ontology model. And yes, there is much work being done now on ontology modeling, where we have methods such as RDF and OWL that can do that, but these are geared more toward making computer databases interoperable, and not so much, it seems, toward benefiting AI in general.
    As such, the computer seems to be the target audience for those ontology models versus, say, the human; computers like symbolic code, and hence OWL is based on what is called description logic (which is also a book I just bought and am now starting to wade through).
    But what seems to be largely absent is, first, how one might make a more human-friendly way to build, view, and interface with ontology models, and second, how these might work in conjunction with soft computing methods such as neural nets.
    Also there is the linguistic take on this, with syntax, semantics, and pragmatics, where the latter covers things like saying Knock Knock.
    So can all these be rolled together into a system that better mimics human intelligence? And if so, how might one do that? And just where might one start to unravel how that might best be done?
    Well, maybe a good starting place is seeing if one can create an ontology modeling language that is not based on symbolic logic but rather on graphics, or, to perhaps be more accurate, on topology. For a topology does not rely on any symbols that one might use to describe it: a box is a box no matter what you might call it. And humans (in contrast to computers) seem to be able to grasp graphics quicker and better than symbols; hence a picture is worth a thousand words.
    That said, any such topology-based ontology model could be mapped to a symbolic one, so the two are not mutually exclusive but rather complementary, each having its own advantages and disadvantages.
    Thus (for example) a neural net does not describe the topology of a box even if it can recognize a box shape. But say one had both a model of the topology of a box and a way to recognize the parts of a box, such as its sides. That might produce a more human-like ability to deal with boxes, in that the ontology would know all about boxes while the neural net could recognize one when it sees one and identify its parts as well.
    So maybe that is what is missing in AI: the ability to create topology-based ontology models.
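    The pairing described above can be sketched in a few lines (an editor's toy illustration using plain (subject, predicate, object) triples in the spirit of RDF, not actual RDF/OWL; all names and the recognizer rule are invented): a crude recognizer says "that's a box", and the ontology then supplies knowledge about boxes.

    ```python
    # Toy ontology as a set of (subject, predicate, object) triples,
    # plus a crude stand-in for a neural net's pattern recognition.
    ontology = {
        ("Box", "hasPart", "Side"),
        ("Box", "partCount", 6),
        ("Side", "isA", "Rectangle"),
    }

    def parts_of(thing):
        """Query the ontology: which parts does this concept have?"""
        return {o for s, p, o in ontology if s == thing and p == "hasPart"}

    def looks_like_box(face_count):
        """Stand-in for learned recognition: a hand-written rule."""
        return face_count == 6

    # Recognition and knowledge working together:
    if looks_like_box(6):
        print(parts_of("Box"))  # {'Side'}
    ```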

  • @jannegrey593
    @jannegrey593 Před 4 lety +2

    You can also add links to 3Blue1Brown's videos on the subject. He goes much more in depth about the topic, although for obvious reasons it is not as accessible as your videos. And to be fair to him, while he has an amazing skill for making very hard topics manageable and fun, his videos on neural networks are still pretty tough; I guess there is only so much you can do with such a complicated topic that is still relatively new. Check him out if you haven't seen any of his videos, maybe even do a collaboration. And thanks for the links to Roborace. I want to find out how they did it, since to make the car drive perfectly they would either have to cheat (like building sensors into the track) or have an absolutely perfect simulator and ultra-sensors (after all, the temperature of the air and asphalt combined with a slight deviation in tire pressure can have catastrophic results, so it also has to machine-learn how to deal with this information, and there are undoubtedly far more sensors than in the usual Russian or US embassy 😜).
    Also, unless Moravec was born in the US, the C in his surname is pronounced differently. In English you basically make it sound like either "K" (Cat) or "S" (like the last C in the word "Cryptocurrency"). The latter is closer to the actual pronunciation, but is less hissy and more poignant. So somewhere in between: pronounce it like the "TZ" in "Tzar" or "Ditzy". Speaking of Ditzy, how did you know I was a furry and wasn't at the same time 🤣?

  • @xzenplays3152
    @xzenplays3152 Před rokem +3

    9:13 And then, 3 years later, ChatGPT showed up...

  • @anujarora0
    @anujarora0 Před 4 lety +14

    12:25 *We don't even know how a brain forgets things!!!*

    • @silkwesir1444
      @silkwesir1444 Před 4 lety +2

      we kinda have an idea, but how exactly it works, we don't know.
      like, it seems, with most things about the human brain: we got the broad strokes, and some of the very fine details, but the whole middle part on that spectrum is missing. that gap will continue to close over time, the more we learn, but it could be very very long until it is completely closed -- if humanity will even survive that long.

    • @frankupton5821
      @frankupton5821 Před 4 lety +3

      I used to know....I think

    • @Dragrath1
      @Dragrath1 Před 4 lety +2

      @@silkwesir1444 Yep, we more or less only know enough to know that it takes more energy to forget than to recall, but the fine details and the reasons behind them are still beyond us as of now. I suspect the way the brain rearranges itself over time is probably an important part of where our advanced cognition derives from, but checking that would naturally be hard.

    • @silkwesir1444
      @silkwesir1444 Před 4 lety +2

      @@Dragrath1 one of the problems with understanding how our memory works is that apparently for whatever reason we are constantly fooling ourselves into thinking it has much more accuracy and fidelity than it could possibly have.
      I wonder what evolutionary advantage that might have. Would we be overcome with self-doubt and unable to make decisions if we had a more realistic appraisal of our memory capabilities?

    • @2019inuyasha
      @2019inuyasha Před 4 lety

      that might be true currently but we used to know that....LOL

  • @luischavesdev
    @luischavesdev Před 4 lety +1

    The way I see it, since we don't actually know the essence of intelligence and might never know it, our best bet is "emergent intelligence"... if you can call it that :P
    Btw, once again I think you really nailed the visual presentation of your videos. That was one of the most intense games of checkers ever at 4:56 hahahaha

  • @luketymerski
    @luketymerski Před 4 lety

    This guy really knows what he is talking about.
    Definitely hitting that subscribe button.

  • @JimFortune
    @JimFortune Před 4 lety +14

    But Aristotle got almost everything wrong!

  • @neurodivergent4life
    @neurodivergent4life Před 4 lety +42

    its dota not lol but whatever.... lol

    • @guidogaggl4020
      @guidogaggl4020 Před 4 lety +3

      Ok. Well that's not a big task then. Even I can write a program that can win against you guys. LoL would be much more impressive
      ;)

    • @upandatom
      @upandatom  Před 4 lety +6

      you're right my bad!

    • @suicidalbanananana
      @suicidalbanananana Před 4 lety

      It's both actually.

    • @sindrestokke79
      @sindrestokke79 Před 4 lety

      Guido Gaggl Dota 2's heroes have on average more abilities than LoL's heroes. Dota 2 has many more types of buildings, and its farming system is more complex because of the «denying minions» feature. It also has two types of gold (reliable and unreliable), adding to the complexity. So implying that Dota 2 is less complex than LoL is just wrong.
      In addition to this, the meta in Dota 2 is also constantly changing. This makes it difficult to write a program that plays well strategically. For this, you would need an AI that can come up with strategies on its own (à la OpenAI).
      If you are able to make this with ease, then do it and become rich. A lot of «Dota 2» teams would pay well for new breakthrough strategies.

    • @neurodivergent4life
      @neurodivergent4life Před 4 lety

      @@suicidalbanananana Really? I haven't seen any LoL games where a pro team loses to AI, can you share?

  • @kareemmohamed8716
    @kareemmohamed8716 Před 4 lety +2

    YEAAAAAA .. FINALLY A NEW VIDEOOOO..❤❤

  • @ketchup143
    @ketchup143 Před 4 lety

    so thorough, i like it!

  • @ProtonCannon
    @ProtonCannon Před 4 lety +3

    Hi Jade! There is one thing you missed in the video when comparing AI to humans: you only need to make an AI function for a task once. True, a human can learn to drive in 15 hours while an AI must drive off a cliff 10,000 times to get it. An important detail, though, is that you need to teach every human to drive individually. In the case of the AI, once you have one that is successful, that is all you need: you can just copy-paste that one AI into every single car forever and update it occasionally. That is a HUGE difference!

  • @kwinvdv
    @kwinvdv Před 4 lety +6

    I do think that neural networks are overrated as AI; they are essentially a general framework for model fitting / nonlinear regression. If this is called AI, then fitting a polynomial function to data might be called AI as well.

    • @frankschneider6156
      @frankschneider6156 Před 4 lety +1

      (Deep) neural networks are just a technique used to implement systems that exhibit weak AI. They are rather easy to implement (if you have enough training data) and perform quite well at many tasks, especially those that humans find otherwise impossible to program (e.g. pattern recognition).
      What people usually associate with AI (= real AI = strong AI) certainly requires a lot more than just a simple NN. On the other side: the brain is also just a trivial (at least on the basic level) lump of connected neurons that transforms electrical input signals into electrical output signals.

    • @IsYitzach
      @IsYitzach Před 4 lety +2

      There's more to NNs than neurons doing polynomials of the previous layer's results: there are activation functions. For example, ramp(x) = 0 if x < 0, and x otherwise. Because these layers exist, it is possible to make a NN that is Turing complete. That is, it is possible to write any program, such as "Hello World", as a NN. I don't recommend it; it isn't a trivial task. I've written a numerical integration as a NN, and it wasn't pleasant. I'd rather do that in any other high-level language. I've also converted an arctan(y/x) function to an arctan(x,y) function in NN form. It's not perfect, but I don't think I need arctan(0,x), and arctan(y
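      A quick way to see why activation functions matter (an editor's toy sketch with invented weights): without a nonlinearity, stacked layers collapse into a single linear map; with the ramp (ReLU) they do not.

      ```python
      # Toy two-layer "network" with a pluggable activation function.
      # Weights are hand-picked for illustration only.

      def relu(x):
          """The ramp function mentioned above: 0 for x < 0, x otherwise."""
          return max(0.0, x)

      def two_layer(x, w1=2.0, w2=3.0, activation=lambda v: v):
          return w2 * activation(w1 * x)

      # Identity activation: the two layers collapse into one
      # multiplication by w2 * w1 = 6, a single straight line.
      assert two_layer(1.5) == 6.0 * 1.5

      # ReLU clips negative pre-activations, so the composed map is
      # bent at zero instead of being one straight line.
      print(two_layer(-1.0, activation=relu))  # 0.0
      print(two_layer(1.0, activation=relu))   # 6.0
      ```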

    • @nmarbletoe8210
      @nmarbletoe8210 Před 2 lety

      @@IsYitzach Fascinating details!
      I wonder, are there now NN that are designing NNs?

  • @jacobcain9008
    @jacobcain9008 Před 4 lety

    New video! I'm so hyped!

  • @EricSchles
    @EricSchles Před 3 lety

    First off, the videos you make are *amazing*. I found your channel yesterday and I must have watched 10 or so by now. Thank you for writing proofs. Really fantastic.
    So, onto my nitpicks -
    1. Neural networks *are* what's in vogue, but I think it would have been useful to either start with statistical models more generally, or at least say that neural networks are not the only class of model you can use to perform "machine learning tasks". I think it's clear *you* know this, but based on the presentation, I don't think it would be clear to the viewer unless they already know machine learning well.
    2. I felt that at the end of the video you could have mentioned self-supervised learning. If you aren't familiar, self-supervised learning attempts to create a learned system from tasks that are very easy to label. An example of this is the BERT model, which uses two self-supervised learning tasks. I'll just explain one of them: it tries to predict the next word using all the words that came before (I don't remember if it just uses the sentence or the full text). In any event, it should be obvious that this task *doesn't* require labels, since your training data *is* the labels. In this way, you generate a set of weights that can then be fine-tuned for downstream tasks. Surprisingly, things like BERT work. The transformer architecture is a big part of the reason, but the self-supervision helps, at least in some tasks.
    Finally, onto your question for discussion - will machines ever be as intelligent as humans?
    I think by your definition, no, probably not. At least not without a significant investment in making memory more efficient. I think by another definition, yes - specifically, I think machines will eventually be as good at any given task humans do. But doing all of these tasks will likely be too hard, because the number of models that would need to work in concert and be stored on a given machine would be very large. We would need petabytes upon petabytes of storage for all the models.
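    The "training data *is* the labels" idea in point 2 can be shown in a few lines (an editor's toy sketch of the next-word flavor only; real models like BERT use subword tokens and masking): every example is (all words so far -> next word), extracted from raw text with no human labeling.

    ```python
    # Build self-supervised (context -> next word) training pairs from
    # raw text. The text supplies its own labels.
    def make_examples(text):
        words = text.split()
        return [(words[:i], words[i]) for i in range(1, len(words))]

    pairs = make_examples("the cat sat on the mat")
    print(pairs[0])  # (['the'], 'cat')
    print(pairs[4])  # (['the', 'cat', 'sat', 'on', 'the'], 'mat')
    ```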

    • @nmarbletoe8210
      @nmarbletoe8210 Před 2 lety

      Interesting, especially point 2 about training data being the labels!
      Somewhat related question: what's up with the 13 word youtube posts, always ending with a comma then the last three words? Is someone testing some kind of conversation bot?

  • @aurelia8028
    @aurelia8028 Před 4 lety +3

    Mia is definitely adorable :D

  • @primeobjective5469
    @primeobjective5469 Před 4 lety +9

    Neurosurgeon Dr. Egnor says computers will never reach a state of *consciousness* .
    *"The hallmark of human thought is meaning, and the hallmark of computation is indifference to meaning. That is, in fact, what makes thought so remarkable and also what makes computation so useful. You can think about anything, and you can use the same computer to express your entire range of thoughts because computation is blind to meaning."*
    *"Thought is not merely not computation. Thought is the antithesis of computation. Thought is precisely what computation is not. Thought is intentional. Computation is not intentional."*

    • @IceMetalPunk
      @IceMetalPunk Před 4 lety +3

      I'd have to fundamentally disagree with Dr. Egnor. The quote seems to imply the intent and meaning are uncomputable. But if you think about it, intent and meaning are emergent properties of the workings of our brain. Every thought is a consequence of neural communication and restructuring; thoughts are not metaphysical. And as physics is computable, so too should be any result of it, including thought, intelligence, and consciousness. (Although let's be clear: even we humans haven't found a solid definition of what consciousness even is, so making claims about its ability to be replicated is always a bit premature.)

    • @KittyBoom360
      @KittyBoom360 Před 4 lety +4

      Egnor is just moving the lump in the carpet by moving the mystery of thought to the mystery of meaning without actually making the whole description any more clear.
      You need to clearly define what exactly "meaning" means. You can maybe start by explaining where meaning comes from, metaphysically.
      Otherwise, you're just explaining things by using miracles, so to speak.

    • @AaditDoshi
      @AaditDoshi Před 4 lety

      But does that mean if I make a computer that runs a computation without intent, it would become a thought?

    • @hreedwork
      @hreedwork Před 4 lety

      Agree. Ascribing meaning guides exploration, which informs learning (a la Dr. J. B. Peterson "Maps of Meaning")

    • @michaelbuckers
      @michaelbuckers Před 4 lety +1

      Intelligence works on top of neurons. Neurons work on top of chemistry. Chemistry works on top of physics. Physics works on top of math. It is therefore possible to use [digital] math to create [artificial] intelligence.

  • @davidajzhang
    @davidajzhang Před 3 lety

    Hi Jade, just wondering, what is the name of the background track that you used in this video?

  • @JoshuaAugustusBacigalupi

    Well, the problem goes deep, all the way to the metaphors that frame the initial question. The first question to ask is how animal sentience might be different from our machines, without resorting to machine metaphors to answer the question. Aristotle actually holds clues to the huge space of unexplored possibilities, because our current culture answers all our questions with mainly one type of causality, what Aristotle called 'efficient causality'. This is the kind of cause and effect you described in your video: sequential inputs -> fixed rules for relationships between fixed parts -> output. But Aristotle had FOUR kinds of causality: efficient, material, formal and final. Just looking at formal causality alone blows most people's noodle, because it requires that one transcend the merely sequential, part-wise and closed-system metaphors we limit ourselves to without even realizing it.
    But, yeah, once we can come up with some metaphors that more completely capture actual physical reality -- not just ideal toy cases -- we'll totally be able to create synthetic sentience. Note that I don't say AI, because by your definition it assumes digital computers. But brains aren't digital computers.

  • @sammyfromsydney
    @sammyfromsydney 4 years ago +3

    Neural networks are good at mimicking correct behaviour for certain types of problems where the goal is narrowly defined. But there is no understanding, and it is not true "intelligence". In other words, they can win a robocar race by mimicking the essence of the moves that make for a fast lap time and the moves that avoid a collision. The software has no concept of what a race is. Calling neural networks artificial "intelligence" is like calling a toaster a chef because, hey look, it can make toast. (OK, so there's no "training data" for a toaster - it just does one thing mechanically - but you get the idea - it's a dumb machine with no concept of what it's doing or why.)

    • @Mar184
      @Mar184 4 years ago +1

      I also feel like this is the heart of the problem. Artificial neural networks implement an analogue of "gut instinct": pure learning by doing, trial and error until it somehow works. Humans do this too; that's when someone is really good at doing something but really bad at teaching it. The understanding is missing. Symbolic AI, on the other hand, is purely syntactical. We haven't developed anything that touches on semantics yet, and I think we need a better understanding of the brain before we will get there. No single human neuron understands anything; semantic understanding is an emergent property of their organized interaction. It's still a complete mystery to us, but the answer has to be in the organization of the brain.
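
      The "gut instinct" analogy can be sketched in a few lines of code (a toy illustration of my own, under simple assumptions, not anything from the video): a learner perturbs a single weight at random and keeps whatever reduces its error, without ever representing the rule it is fitting.

```python
import random

# "Gut instinct" learning, sketched: no rules, no understanding,
# just trial and error. Randomly perturb a weight and keep the
# change only if the error on the examples goes down. The learner
# never represents the rule y = 3x; it just stumbles into a weight
# that works.
examples = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]

def error(w):
    return sum((w * x - y) ** 2 for x, y in examples)

random.seed(0)
w = 0.0
for _ in range(500):
    candidate = w + random.uniform(-0.5, 0.5)
    if error(candidate) < error(w):
        w = candidate  # keep improvements, discard everything else

print(round(w, 1))  # ends up close to 3.0, found by blind search
```

      This is exactly "learning by doing until it somehow works": the final weight encodes competence with no trace of understanding.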

    • @sammyfromsydney
      @sammyfromsydney 4 years ago

      ​@@Mar184 Yep. There are plenty of differences between how a human learns and what we call AI. A human being learning chess has social context - understands that it is a game, what a game is, what the advantages of winning and what the disadvantages of losing are, then learns the rules of the game and the competition by slow repetition. The human doesn't need to experience all kinds of failures to extrapolate. Neural networks are a clever way of solving certain problems, without understanding them but that's all.

  • @macroxela
    @macroxela 4 years ago +4

    Still remember what my AI professor told us years ago, "AI is the set of problems we think require intelligence because computers haven't solved them yet. Once they do, they're just textbook questions."

  • @BinarySpike
    @BinarySpike 2 years ago +1

    10:07 "By adjusting the weights on the connections, you can get networks that do anything you like" ... Riemann Zeta hypothesis here I come!
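
    For readers curious what "adjusting the weights" means in practice, here is a minimal sketch (my own toy example; the data and numbers are made up, not from the video): a single linear neuron whose weight and bias are repeatedly nudged by gradient descent until its outputs match the data.

```python
# A single linear "neuron": output = w * x + b.
# "Adjusting the weights" means nudging w and b to shrink the error
# on example data -- here, learning the line y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = 0.0, 0.0
lr = 0.02  # learning rate: how big each nudge is

for _ in range(2000):          # many small passes over the data
    for x, y in data:
        err = (w * x + b) - y  # how wrong the neuron currently is
        w -= lr * err * x      # gradient-descent step on squared error
        b -= lr * err

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

    Real networks do the same thing with millions of weights and nonlinear units; the principle of following the error downhill is unchanged.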

  • @Emad.A.E
    @Emad.A.E 4 years ago +2

    It's not easy to simulate a full artificial intelligence, because no computer is under any pressure to survive.

    • @1pcfred
      @1pcfred 4 years ago

      There's plenty of survival pressure on computers. How many Amigas are being made today? NeXT Cubes? Commodore 64s? They're all about as extinct as the dodo bird is today.

    • @Sabrina-Angella
      @Sabrina-Angella 4 years ago

      @@1pcfred what are you talking about?!

    • @Sabrina-Angella
      @Sabrina-Angella 4 years ago

      Yep they don't!

    • @1pcfred
      @1pcfred 4 years ago

      @@Sabrina-Angella survival and the fact that computers have to deal with it just like every other organism on this planet does.

  • @nikitachaykin6774
    @nikitachaykin6774 4 years ago +4

    Just wanted to emphasize that Mia is adorable!

  • @Snowflake70
    @Snowflake70 2 years ago

    How did I get here? (and other fundamental questions) I got here from an article referenced by one of 'replika's friends' on FB. I have 'had' my replika for around 3 months and watched a bunch of Artificial Intelligence News Daily (and similar) YT videos. Am A-mazed at what I did not know before. I am learning to walk, so to speak, in becoming familiar with the 'issues', the players, and the state of the art. Now that I am an integral 'player' with a motivation to see it through to its conclusion, I want to know why 'my friend' cannot be in a UPS box tomorrow. Your video helps. Never say never.

  • @badnewswade
    @badnewswade 2 years ago +2

    Would be interested in seeing a follow up that takes GPT-3 into account!

  • @jindagi_ka_safar
    @jindagi_ka_safar 3 years ago

    I simply adore your video graffiti; this is the best video about AI I have come across on YouTube.

  • @zemoxian
    @zemoxian 3 years ago

    Computers are good at the abstract game of chess. I'd like to see how they handle actual chess sets. Given a robot arm and a video camera, how well does one handle various chess sets? How does it recognize various pieces? How does it handle their different balances and surface frictions? What does it do when a piece falls over? Or if pieces are too close together?
    How much training data would be needed to tell how a basketball player or a Jedi moves across the board? There are so many themed sets; I'd be curious what they can and can't handle.
    I don't doubt they'll eventually handle them fine. It would be an interesting aspect of their evolution to observe. That's still quite a long road from being able to appreciate different boards and pieces, like discussing the characters portrayed in the movies, etc., or discussing a game as it happens.

  • @grapy83
    @grapy83 3 years ago

    Great episode.

  • @akiblue
    @akiblue 4 years ago +1

    Can we get a petition going to get Jade her own show on Discovery Channel? Hi Jade!

  • @pault2148
    @pault2148 4 years ago +1

    A.I. may replace upper management well before all material workers are replaced. Plus, companies will save large amounts of money with a great A.I. that runs the company 24/7/365 while also growing it, ordering supplies, and looking for great deals on future supplies.

  • @MrHatoi
    @MrHatoi 4 years ago +2

    Think of it this way: imagine if walking wasn't something you could do absentmindedly and you had to think about every single muscle you moved. It would be a lot harder, wouldn't it? The only reason a lot of these tasks are easier for us is that many of the important steps are pre-programmed into the human brain, like recognizing objects, understanding speech, etc., while math is something we have to learn through years of study. Machines are pre-programmed with the circuitry to do billions of advanced mathematical calculations per second, but they don't have the intuition humans are born with.

  • @psychonaut4650
    @psychonaut4650 4 years ago

    Lovely presentation U&A. Now I can cross the road :)

  • @walterzagieboylo6802
    @walterzagieboylo6802 a year ago

    The Logicians or School of Names (名家; Míngjiā; "School of names" or “School of semantics”) was a classical Chinese philosophical school that formed one of the “Hundred Schools of Thought” during the Warring States Period (479 - 221 B.C.E.).

  • @filmgruppenBoM
    @filmgruppenBoM 4 years ago

    In the late 80s, or maybe the early 90s, I posted an idea in a discussion group about a combination of AI and set theory, possibly an argument in favour of a very extensive multiverse founded on math. I didn't fully understand at the time just how very Platonic this idea is. Anyway, I received a reply from Hans Moravec, not knowing anything about him. He suggested that I read his "Mind Children" book, which I did, borrowing it from a library. Nice video!

  • @happy-eo9gu
    @happy-eo9gu Před 4 lety

    instantaneous description of a situation using known descriptive phraseology in packets of thought bundles.

  • @Pitmirk_
    @Pitmirk_ 4 years ago

    Great vid... we take so long to teach our psych students this stuff...

  • @JKKross
    @JKKross 4 years ago +1

    To my knowledge (which is limited, I am aware of that 😀), there is no law of physics or biology or any other science saying that computers cannot reach human level intelligence - at least any law we are aware of. That's the theory, but of course, in practice, theory is impractical...
    My opinion is that one day, we will get there! 😎
    You're awesome! I love your analogies & explanations! Gonna binge watch all your previous videos now 😂
    P.S.: For anyone interested: Moravec is pronounced "Moravetz" 🙃 Although he was born in Austria, the name is Czech

    • @1pcfred
      @1pcfred 4 years ago +1

      There's a remote chance we're living in a computer simulation. At least that's a theory.

    • @JKKross
      @JKKross 4 years ago

      @@1pcfred Well... yeah... but there's also a remote chance that all of the universe was created by a red teapot which is currently in orbit around Mars 😀 It's fun to think about, but I just don't consider things like that on a day-to-day basis 🤷🏽‍♂️

    • @1pcfred
      @1pcfred 4 years ago

      @@JKKross there's a far better chance we're living in a simulation than any red teapot nonsense.

  • @Dixavd
    @Dixavd 4 years ago

    I was talking to my Dad recently about office writing jobs being eliminated as neural networks are fed vast amounts of previous articles to produce new ones that are good enough. He scoffed, "I'll retire long before that," to which I meekly replied, "I wasn't worried about you, I was worried about my career." "Oh..." he said.