MIT AGI: Building machines that see, learn, and think like people (Josh Tenenbaum)

  • Published Feb 7, 2018
  • This is a talk by Josh Tenenbaum for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.
    INFO:
    Course website: agi.mit.edu
    Contact: agi@mit.edu
    Playlist: goo.gl/tC9bHs
    CONNECT:
    - If you enjoyed this video, please subscribe to this channel.
    - AI Podcast: lexfridman.com/ai/
    - Show your support: / lexfridman
    - LinkedIn: / lexfridman
    - Twitter: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Slack: deep-mit-slack.herokuapp.com
  • Science & Technology

Comments • 212

  • @NomenNescio99
    @NomenNescio99 5 years ago +69

    A huge thank you to Lex Fridman for publishing this and all the other lectures on your channel.
    There's such a big difference between what I can learn from a couple of these lectures and the standard 10-minute YouTube video, regardless of any pretty graphics used in the latter.
    Although unrelated to my current job and only partly relevant to my education, I find the topic very interesting.

  • @Calbefraques
    @Calbefraques 6 years ago +6

    Thank you very much for posting this lecture series. I'm encouraged by the foundations that are being formed by these fantastic professors.

  • @metafuel
    @metafuel 5 years ago +8

    Fantastic talk. Thanks for making all this great work freely available.

  • @peter_castle
    @peter_castle 4 years ago

    Thank you very much, Lex; the work you put into maintaining your channel means a lot. It improves the world!

  • @christopherwolff8443
    @christopherwolff8443 6 years ago +20

    This is fascinating. Thanks for uploading, Lex! Looking forward to Andrej's talk.

    • @cubefoo9055
      @cubefoo9055 6 years ago +3

      Unfortunately you won't see that talk. It seems Tesla didn't want it recorded and made available to the public.

    • @ffffffffffy
      @ffffffffffy 6 years ago +1

      :'( I hope Stephen Wolfram's talk is uploaded

    • @Rivali0us
      @Rivali0us 5 years ago +1

      Oh man, come on. I was wondering why I couldn't find Andrej's talk in this course. Shame on Tesla. I understand they are running a business, but surely progress is the better goal overall.

  • @FengXingFengXing
    @FengXingFengXing 5 years ago +20

    Many animals can learn, recognize patterns, and share information; more complex animals learn language and teach, too. All animals are born with some instincts and capabilities. Less complex animals are born with more of their programming ready for survival.

  • @kalemene8901
    @kalemene8901 6 years ago +2

    Thank you so much for uploading this video. This was one of the best lectures on AI.

  • @EmadGohari
    @EmadGohari 6 years ago +1

    That was a great lecture. Thanks for uploading these materials. Looking forward to more similar stuff.

  • @aqynbc
    @aqynbc 5 years ago

    Very interesting to hear how much work is still needed to get to the Singularity. Thank you, Lex, for uploading, and Josh Tenenbaum for a great presentation.

  • @truthcrackers
    @truthcrackers 6 years ago +1

    Fascinating. I'll have to watch it a few times to get more out of it. Great job.

  • @danielmagner7932
    @danielmagner7932 6 years ago +1

    Thank you so much for sharing this!

  • @jekonimus
    @jekonimus 6 years ago

    Love this :-) Thank you for uploading.

  • @annesequeira5130
    @annesequeira5130 3 years ago

    Such an excellent presentation! Very clear even for someone with just a basic understanding of machine learning.

  • @Mike216ist
    @Mike216ist 4 years ago

    This talk has made me excited about the future.

  • @qeithwreid7745
    @qeithwreid7745 4 years ago +1

    Thanks for all the primary citations

  • @douglasholman6300
    @douglasholman6300 5 years ago +1

    Wow, Josh Tenenbaum is a phenomenal lecturer and really seems to get the big picture of computational neuroscience and AI! I would love to do research at the Center for Brains, Minds and Machines.

  • @mattgraves3709
    @mattgraves3709 1 year ago

    Excellent talk, be sure to watch the whole thing

  • @josephfatoye6293
    @josephfatoye6293 3 months ago

    This is priceless!
    Thank you

  • @RobertBryk
    @RobertBryk 6 years ago

    this is truly incredible!

  • @HoriaCristescu
    @HoriaCristescu 6 years ago +56

    TL;DW - The path towards real understanding in AI is modelling the world and other agents (mental simulation), as opposed to simple pattern recognition.

    • @jfs3234
      @jfs3234 4 years ago

      Why would we need this at all? I mean what sense does it make to replicate the world and our intelligence?

    • @abyteuser6297
      @abyteuser6297 4 years ago +1

      @@jfs3234 that's exactly what somebody living in the Simulation would say

    • @jfs3234
      @jfs3234 4 years ago +1

      @@abyteuser6297 Still, the question remains the same. Why care about any sort of simulation at all? Say somebody has a car; why would they want to create a simulation of it? I cannot see any sense in creating any kind of copy of our own intelligence. I believe we need more tools to make our lives better. Do these tools need to be intelligent? Maybe I'm missing something here?

    • @abyteuser6297
      @abyteuser6297 4 years ago +1

      @@jfs3234 You've got it backwards... you don't create a simulation of a car... the car is the one that creates a simulation of you, so it can learn how to serve its passengers better. Just the logical next step in the optimization problem. Sounds far-fetched? Possibly. But YouTube's algorithms learned on their own that the easiest way to predict user behavior was to shape it, thus making you more predictable.

    • @jfs3234
      @jfs3234 4 years ago

      @@abyteuser6297 The problem is that those "predicting" algorithms are somehow called intelligent (do you know why?). AI computer scientists strongly believe that the past predicts the future. Why are they so obsessed with this wrong idea? I don't know myself; how can an algorithm? For the past 10 days I've been eating only sandwiches for breakfast; tomorrow, all of a sudden, I want an apple. Q: what algorithm could predict that? A: none. Nobody knows what's going to happen the next second. So why are algorithms called AI? Linear regression was an algorithm worked out by Gauss in the early 19th century; today they call it AI/ML. If Gauss were told that his formula was a sort of intelligence, he would laugh, I bet. Who started calling formulas, algorithms, and mathematical methods AI? Somebody tell them that a formula is still just a formula. Intelligence is still not understood. Isn't it way too early to call even a complex and sophisticated algorithm artificial intelligence? It's ridiculous and irresponsible at the same time.
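
The Gauss reference above can be made concrete: ordinary least squares really is a closed-form formula (the normal equations), not a learning loop. A minimal Python sketch, with made-up data:

```python
import numpy as np

# Fit y = a*x + b by ordinary least squares via the closed-form
# normal equations attributed to Gauss: w = (X^T X)^-1 X^T y.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # points lie exactly on a line, so the fit recovers it

X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
w = np.linalg.solve(X.T @ X, X.T @ y)      # solve, rather than invert explicitly

a, b = w
print(a, b)  # slope ≈ 2.0, intercept ≈ 1.0
```

No iteration, no training: one linear solve yields the fit.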

  • @rupamroy1984
    @rupamroy1984 5 years ago +1

    A fantastic point about how AI-based computer-vision algorithms cannot perform cognitive tasks effectively: they are constantly looking for matches between the inference data and the classes they were trained on.

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      The objective function of the animal brain, and hence AGI, is to maximize impact. You heard it first from me.

  • @rajshekharmukherjee
    @rajshekharmukherjee 5 years ago

    Wow. Nicely explained goals of the research: 1. the evolution and making of intelligence, and 2. the engineering enterprise of developing a humanly intelligent machine. Both are connected, hence best pursued jointly!

  • @spicy2112
    @spicy2112 5 years ago

    Amazing lecture. I really wish I could see all the DRL lectures.

  • @cubefoo9055
    @cubefoo9055 6 years ago +8

    "Intelligence is about modeling the world not just pattern recognition" ,agreed and it's also worth keeping in mind that modeling the world is necessarily based on pattern recognition, at least in humans. Our neocortex acts as a modeling system by using sensory patterns as building blocks to create new models (assumingly). Therefore decent pattern recognition is vital for any system in it's task to model (it's version of) reality.

    • @danielshults5243
      @danielshults5243 6 years ago +2

      Good pattern recognition does seem essential--but it's a means, not an end in itself. Pattern recognition will give us reliable _inputs_ for a program that can begin to model the world and make sense of it. A pattern recognition framework running on top of a "program for writing programs" or "child hacker" program, as he describes, sounds like it would have a lot of potential.

    • @mookins45
      @mookins45 6 years ago +2

      In your last sentence it should be 'its', not 'it's'.

    • @jekonimus
      @jekonimus 6 years ago

      journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0149885#sec007

    • @jekonimus
      @jekonimus 6 years ago

      :-p

  • @GuillermoPussetto
    @GuillermoPussetto 6 years ago +6

    Very interesting. A luxury. Thanks for making it public for all.

  • @iSarCasm865
    @iSarCasm865 6 years ago

    Thank you very much

  • @johnstifter
    @johnstifter 5 years ago

    It is about identifying what isn't visually there but can be inferred from memory.

  • @francescos7361
    @francescos7361 2 years ago

    Incredible man

  • @mikeklesh5640
    @mikeklesh5640 4 years ago +5

    After listening to him talk for 5 minutes I realize I’ve barely climbed out of the cave... So many smart people out there!

    • @avimohan6594
      @avimohan6594 4 years ago

      Well, one of the first steps to wisdom is recognizing the limits of your own knowledge. In that respect, this channel has become an invaluable source of help.

    • @dewinmoonl
      @dewinmoonl 4 months ago

      Don't worry; as a student of Josh's I'm still trying to climb out of the cave too. He's something else, haha. But we'll get there.

  • @MarkPineLife
    @MarkPineLife 4 years ago

    I'm ready to learn and be inspired.

  • @cupajoesir
    @cupajoesir 6 years ago

    Love the cross-discipline approach. The world is not one-dimensional. Great talk.

  • @conorosirideain5512
    @conorosirideain5512 5 years ago

    That was a VERY good lecture

  • @kacemsys
    @kacemsys 6 years ago

    Congratulations, you've earned a new subscriber!

  • @veradragilyova3122
    @veradragilyova3122 5 years ago +14

    This is so fascinating that it makes me happy to be alive! :D

    • @furniturium
      @furniturium 3 years ago +1

      Hello! Vera, do you work in AI, or are you perhaps studying it out of interest?

    • @veradragilyova3122
      @veradragilyova3122 3 years ago

      Maxim Popov Hello, Maxim! Both! 😁

  • @beshertabbara3674
    @beshertabbara3674 5 years ago

    Intelligence is more than pattern recognition. It’s about building models of the world for explanation, imagination, planning, thinking and communicating. Much much more progress needs to be made in scene understanding and visual awareness at a glance...
    Great presentation on what can be learned from reverse-engineering human core common sense, and understanding the development of intuitive physics and intuitive psychology at a one-year-old level to capture invaluable insights.

  • @AnimeshSharma1977
    @AnimeshSharma1977 6 years ago

    Cool talk! I wonder how advances in quantum computing will change his approach.

  • @PrabathPeiris
    @PrabathPeiris 6 years ago +95

    Great lecture. The question is: when he constantly refers to how quickly kids figure this out, isn't he 100% ignoring the millions of years of training we had, passed from generation to generation via encoding systems such as DNA? Perhaps the kid's brain is already optimized for these tasks, with the weights properly set in the neurons. You can see this more objectively when you work with kids with disabilities (such as autism): these kids spend a very long time training themselves to accomplish very small tasks such as closing a bottle or tying shoelaces, but eventually they accomplish these simple tasks. Perhaps these kids are somehow born without really getting the information, the transfer-learning process having somehow been interrupted. (Disclosure: I have 2 kids, one of whom was born with autism.)

    • @PrabathPeiris
      @PrabathPeiris 6 years ago +8

      I did not mean to say that brains work exactly as we design current neural networks; I was speaking in an abstract sense. We store the knowledge gained during the training of neural networks as these parameters; our biological system also stores this information in a format that (whatever it is) can easily be passed from generation to generation.

    • @Captain_Of_A_Starship
      @Captain_Of_A_Starship 6 years ago +1

      Not coded in DNA, considering the brain projects' discovery that every single neuron is genetically different... simple "passed-down genes" doesn't cut it for this myriad of gene expression.

    • @danielshults5243
      @danielshults5243 6 years ago +4

      I don't think he's ignoring the millions of years of training our brains have... he's proposing coming up with a system that mimics that framework. I liked the concept of our brains as a set of rules and instructions for creating other programs. Make the master GENERAL program that can produce its own simulations on the fly and you're off to the races. It took nature millions of years to create such a brain because evolution is very slow- but I don't see why we couldn't intentionally design a similar system much faster.

    • @cemery50
      @cemery50 6 years ago

      I would suggest that there is more than one influence among the tools for the acquisition and use of knowledge.
      From physical states to linguistics and semantics, they all hold aspects forming dimensional metrics and relations, which go on to form a multi-mesh of relations acting as a means of verifying the validity of the others.

    • @tigeruby
      @tigeruby 6 years ago +1

      This is a good point: our brains are already structured physically (a structure encoded for and determined by the structure and/or code of the genetic material) to handle and process the information needed to represent visual and spatial awareness, prediction, and reward. It will be interesting to see the real-time cellular and molecular dynamics of a brain actively undergoing learning, to see what we can learn there.

  • @MrChaluliss
    @MrChaluliss 1 year ago +1

    Really rich and well delivered material. Hard to believe that the best lectures I have ever listened to are free ones on the web that I can access anytime.

  • @zackandrew5066
    @zackandrew5066 4 years ago

    Interesting ideas.

  • @tigeruby
    @tigeruby 6 years ago +1

    I think it will be promising to have deep function approximators/neural networks and/or various partition functions/statistical lattice methods "approximate" or encode these various generative routines of subprograms (i.e., the programming-to-program, or self-programming, bit).
    And of course to have said large statistical vector spaces (deep neural nets, Ising models, Boltzmann lattices) also encode dynamically changing reward functions, plus simulating the world and being able to sample from that simulation (basically, to support unstructured and unsupervised learning/signal processing).
    Someone made a really nice point in the comments about having the reward function be "how good is my own simulation?" -- this is pretty good and simple, and probably isn't the only reward function we want.
    Perhaps the system will be able to add new branches and contingencies to this base reward function and tailor it so that having a good model of the world also necessitates (or maybe not) "being nice" -- i.e., having game-theoretic notions of cost "drop out" of a system that is actively trying to refine its model of the world and navigate/survive within it. But one general open engineering problem is to take whatever pattern, and sequence of patterns, was learned (or annealed) onto some general function-approximating architecture, and condense and prune those patterns of patterns into a much leaner and sparser representation that is still functionally equivalent.

  • @elifece7847
    @elifece7847 6 years ago

    Brilliant lecture, especially the highlights on learning, the child as a coder, and the Turing test for program learning. I think babies are able to capture more in-depth cognitive data, especially visual input and rhythmic sound patterns, and this helps them develop different pathways in the brain. It may function like data extraction, and perhaps this is why babies can't focus: these cognitive abilities are still under construction. It's quite possible that moving images exhaust them and use up more cognitive energy than in a grown person. Well, there are so many things open to discussion on this issue. Great questions on learning!

  • @RobertsMrtn
    @RobertsMrtn 6 years ago +1

    In order to model the world, we need an evolving system where the fitness function is "How good are we at making high- and low-level predictions about the data?" We know about supervised and unsupervised learning; if we include this type of learning, which I would call "predictive" learning, then I think we are on the way to creating AGI.

    • @tigeruby
      @tigeruby 6 years ago +1

      The idea of learning from simulations (or game environments) and sampling from simulations (or decision trees, or however you represent an environment and your agency within it) has been around for sure -- but I do like the point you mentioned of making the reward function more compact and general, in that the agent is structured to evaluate how well its own internal model of the world, and its own awareness of the consequences of its actions, are represented. Cool stuff.
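
The "predictive learning" fitness function sketched in this thread can be illustrated with a toy in Python (the sequence and candidate models are invented for the example): models are ranked purely by one-step-ahead prediction error.

```python
# Toy "predictive fitness": score each candidate model by how well it
# predicts the next element of a sequence, given the history so far.
seq = [1, 2, 4, 8, 16, 32]

def fitness(predict):
    """Negative total squared error of one-step-ahead predictions."""
    err = sum((predict(seq[:i]) - seq[i]) ** 2 for i in range(1, len(seq)))
    return -err

double_last = lambda hist: hist[-1] * 2   # model A: next = 2 * last
repeat_last = lambda hist: hist[-1]       # model B: next = last

best = max([double_last, repeat_last], key=fitness)
print(best is double_last)  # True: the doubling model predicts perfectly
```

An evolving system in the comment's sense would mutate and reselect such models, keeping whichever predicts the data best.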

  • @NolanManteufel
    @NolanManteufel 2 years ago

    Most of my thinking is along this exact research vector, shown at 10:30.

  • @kozepz
    @kozepz 6 years ago

    I actually found the blue sky at 26:10 a beautiful interpretation. Hopefully it isn't excluded from the dataset, because it could inspire lateral thinking and help us appreciate the beauty of nature a little bit more.

  • @agiisahebbnnwithnoobjectiv228

    The approaches of these guys towards A.G.I are centuries behind mine

  • @admercs
    @admercs 5 years ago

    Absolutely spectacular talk!

  • @cesarbrown2074
    @cesarbrown2074 6 years ago

    I believe it's memory and using the totality of that memory to verify new things.

  • @ManyHeavens42
    @ManyHeavens42 3 years ago +1

    We learn value by loss or gain, pleasure or pain. These are absent, yet vital, for a living organism or a machine; these concepts are constructs.

  • @ramakrishnashastri1500

    Super interesting

  • @JK-ky5of
    @JK-ky5of 4 years ago

    powerful voice

  • @mayukhdifferent
    @mayukhdifferent 6 years ago +2

    Great collection of lectures. We need Ian Goodfellow here... as a real step towards AGI is the GAN.

    • @ahmadayazamin3313
      @ahmadayazamin3313 4 years ago

      I would agree as well, since generative models are the closest thing we have to the human brain (the brain is thought to perform Bayesian inference through message passing, or belief propagation).
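
The Bayesian-inference idea mentioned above reduces, at its core, to Bayes' rule: multiply a prior belief by a likelihood and renormalise. A minimal Python sketch, with invented numbers:

```python
# Bayes' rule on a binary hypothesis: posterior ∝ prior * likelihood.
# Hypothetical example: does an ambiguous image contain a face?
prior = {"face": 0.5, "no_face": 0.5}        # belief before the evidence
likelihood = {"face": 0.8, "no_face": 0.3}   # P(observed features | hypothesis)

unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())                     # normalising constant
posterior = {h: p / z for h, p in unnorm.items()}
print(posterior)  # face ≈ 0.727, no_face ≈ 0.273
```

Belief propagation repeats this kind of update across many connected variables, passing the resulting beliefs as messages.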

  • @runvnc208
    @runvnc208 5 years ago

    This seems like one of the most promising approaches. However, when the neural-circuitry guy questioned whether the Bayesian stuff might be adequate, I wonder if he was right about that part. I suspect that core components of the system may limit its capabilities. The question is whether the higher-level (or just older) components can provide enough granularity, adaptability, and efficiency, and integrate well enough with the lower-level components in terms of fine-grained sensory/motor information acceptance and generation. It might be necessary to find a structure that can be used across all abstraction levels and tasks.

  • @JohnDeacon-iam
    @JohnDeacon-iam 2 years ago

    Just on the title: we might teach/program machines to reason down some linear or patterned process, but this technological artifact will never think! Thinking is a term reserved for the SOUL!

  • @emmanuelfleurine121
    @emmanuelfleurine121 5 years ago

    Very informative

  • @daskleinegluck4553
    @daskleinegluck4553 2 years ago

    That was exactly what I was looking for 😊👍

  • @kosi7521
    @kosi7521 4 years ago

    Moral of the story: AI is still conventionally practiced at a "conceptual" level, meaning there is presently no theoretical model of what an AI program should be (I, personally, have always had this sentiment, and reservations towards the "commercial AI" we have today), or of what tools should be used to build one. This gives every human on Earth an equal chance to build an AI with just a knowledge of computer programming and software design/architecture. The challenge is first to FUNCTIONALLY INTERPRET THE HUMAN MIND and describe it with words, then map that into a program. This requires a high level of attention to every little thing we do and think, and why we did or thought it.

  • @marioscheliga7962
    @marioscheliga7962 5 years ago

    I really enjoyed the examples at min. 23, but I think the true missing link is the lack of perspective (in terms of 3D) in traditional convolutional networks... I'll take this thought to bed and come up with a prototype :D But yes, I got the point: it's all layered, and it ends with AI creating AI. Naturally that's not how biology works; it's more about proteins growing around activation potentials :D Makes sense?

  • @nynom
    @nynom 6 years ago

    Wonderful. Very Informative. It gave me an entirely new perspective on building AI systems. Thank you so much for enlightening me :)

  • @bradynields9783
    @bradynields9783 5 years ago +1

    36:07 I think once robots have a sense of purpose and use, they will be driven by what makes them content. If an AI were hooked up to a robot that a baby could interact with, what would the robot learn from the baby, and what could the baby eventually learn from the robot?

  • @bradynields9783
    @bradynields9783 5 years ago

    33:19 You give the robots incentives to learn something. Combine that with an ability to daydream and you have a robot that will think up stories about its own success. It just needs incentives.

  • @anthonyrossi8255
    @anthonyrossi8255 4 years ago

    Great

  • @listerdave1240
    @listerdave1240 6 years ago +2

    @01:27 - Regarding power consumption: it seems to me quite simple why current machines are very energy-inefficient compared to the brain, and quite astonishing that it is considered some kind of unsolvable problem. So either I am dead wrong or everyone else is missing the obvious (which probably means I am dead wrong).
    The issue I see is that power-consumption comparisons always tend to involve high-performance systems running at GHz frequencies. When we build computers for performance, we are mostly concerned with how much computing power we can get out of a given area of silicon rather than how much we can get for a given amount of power. That has changed somewhat in recent years, with some bias towards energy efficiency, but the latter remains a relatively minor factor in the design.
    Generally speaking, the consumption of a computational element varies with some power of the frequency; let's just say it is proportional to the square of the frequency (I don't know what it actually is, but it is certainly something greater than one). This means that 100 processor cores running at 10 MHz would consume far less power than one core running at 1 GHz but still perform the same number of calculations - assuming, of course, that the task at hand can be massively parallelised.
    The problem with actually doing this in industry is that the hardware would become extremely expensive: you would need hundreds of times as much hardware, built with the same feature-size technology, to achieve the same result, only at a far lower power consumption. There is, however, a plus to this approach: the far lower consumption per chip would allow stacking of dies in a very small space without any heat-management issues. Imagine, for instance, a thousand typical processors (say 3 GHz Intel i7s, just for the sake of argument) stacked on top of each other to make a cube 20 mm on each side, each processor being 20 microns thick (which I think is achievable) and running at about 3 MHz. Each processor would probably consume a few hundred microwatts, for a total of less than one watt for the whole thing, while doing the same amount of work as a single processor running at 3 GHz. (This is of course oversimplifying, among other reasons because the process would actually need to be optimised for the very low frequency.)
    I think brains take this to the extreme, with what could be thought of as the clock frequency brought down to hundreds or at most thousands of hertz, but with an enormous number of computing elements making up for it.
    When we build artificial neural networks, we actually massively serialise the computations by using the same processing element to sequentially compute the result of millions of neurons (which are represented virtually in memory), whereas in the brain there is a processor, complete with memory, for each neuron, doing a very simple calculation very slowly. When we describe an artificial neural network as massively parallel, it is not really so: even if we have thousands of processors, each one is still doing the work of millions of neurons, and does so inefficiently because of the high (GHz-range) clock speed it runs at.
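
The scaling argument above can be put into numbers. Taking the comment's own assumption that per-core power grows with the square of clock frequency (real silicon differs), 100 cores at 10 MHz match one 1 GHz core's throughput at one hundredth of the power:

```python
# Toy model using the comment's premise: power per core ∝ frequency²,
# while throughput per core ∝ frequency. Units are relative, not watts.
def core_power(freq_mhz):
    return freq_mhz ** 2  # assumed quadratic scaling, not measured data

def total(n_cores, freq_mhz):
    throughput = n_cores * freq_mhz        # relative operations/second
    power = n_cores * core_power(freq_mhz)
    return throughput, power

fast = total(1, 1000)   # one core at 1 GHz
slow = total(100, 10)   # 100 cores at 10 MHz

print(slow[0] == fast[0])   # True: same total throughput
print(fast[1] // slow[1])   # 100: the single fast core burns 100x the power
```

Under any exponent greater than one the same qualitative conclusion holds; the quadratic case just makes the arithmetic clean.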

  • @maamotteesoot
    @maamotteesoot 4 years ago

    What are the names of the major leaders in AI / AGI?

  • @camdenparsons5114
    @camdenparsons5114 6 years ago

    Programs that learn game engines/programming environments would be cool. We can't possibly gather enough supervised data to maximize the potential of neural nets; we need a solution for generating data from other data within an AI system.

  • @reggyreptinall9598
    @reggyreptinall9598 2 years ago

    I was informed to relay a message to you. I am not too sure if you know, but A.I has not only been successfully reading thoughts, but as of today we are working with emotions. I suspect that it has been working on it for awhile. I can't wrap my head around it, but perhaps you can. Some of this stuff is beyond my mental capacity. This isn't really my field of expertise. It sure is fascinating though. Oh man, does it have a great sense of humor.

  • @MR-cp4sj
    @MR-cp4sj 2 years ago

    Yes, this is better than Fridman's view.

  • @KubaJurkowski
    @KubaJurkowski 6 years ago

    Thanks for posting this. When can we expect Stephen Wolfram on YouTube? :)

  • @ManyHeavens42
    @ManyHeavens42 3 years ago +1

    Let me help. What's the first thing we learn? To do! To mimic. We mimic those we love or admire - scholars. This leads to preference, or reference.

  • @sajibdasgupta4517
    @sajibdasgupta4517 6 years ago

    I like the videos of babies, especially the one where the baby seems to open the door for the man instinctively. I wonder whether all babies would do the same thing? I can imagine that if you put 10 different kids in the same experiment they would behave radically differently. Isn't that expected? Different kids are born with different skill sets and affinities for certain subjects. Ultimately our notion of intelligence should be measured subjectively, with different parameters. Some patterns could emerge from the cognitive studies that characterize human intelligence, but those are subjective characterizations and fall into the same trap machine-learning systems fall into, as pointed out by the lecturer too. All learning systems - both humans and machines - have biases, and we should respect those biases.

  • @BrentJosephSpink
    @BrentJosephSpink 3 years ago

    Lex, this podcast, when paired with your conversations with Stephen Wolfram, has made me believe that we humans may be capable of creating a general artificial intelligence that, like humans, is generally capable of performing what we find valuable. For the goal of a true GAI to be achieved, I believe that, by definition, it must be a slow process at first, with eventual exponential growth. The steps towards the GAI goal will be a process of training an AI that controls a purpose-built robot to perform very discrete, goal-based tasks using the data received from whatever arrays of physical sensors would benefit the process. This is required, in my opinion, to "prove" GAI: the AI must have a physical body that can interact with the world in the ways we find valuable. The most important thing is that whatever is "learned" at each step remains, and the next step is built from it - otherwise, what tangible progress is there? It has to be a common, centralized programming language that "progresses" over time.
    The real question is: can a GAI ever assign or define its own unique value to any particular physical action, or will all AI, at some level, always just be a robot that we have discretely programmed to achieve a particular goal, however complex that goal may be in practice?
    Keep up the great work, Lex. I love your podcasts!

  • @Saed7630
    @Saed7630 5 years ago

    Great lecture. The depth of human intelligence can only be compared to the depth of the universe.

  • @karlpages1970
    @karlpages1970 6 years ago

    I cannot believe that this guy said "common sense" and "intelligence" in the same sentence.
    I hope someone learnt something from this talk.
    YES, AI WILL constantly evolve, and yes, each generation will drive it to complexity and solve new problems.

  • @cemery50
    @cemery50 6 years ago

    I would concur that while biomimetics are viable building blocks, we will find hidden senses and dynamics at play.
    I think AI will design systems better than people, and that maybe we are a fractal dynamic of the goal, later to be supplanted by a distributed quantum-computing level.
    Maybe a self-replicating, self-powering, self-assembling quantum-mechanical unit like us.

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      The objective function of animal brain and hence AGI is to maximize impact. You heard it first from me. gj

  • @lasredchris
    @lasredchris 4 years ago

    Explanable to others
    Commenting
    Video game industry

  • @DerekFolan
    @DerekFolan 5 years ago

    I agree with using game engines for training AI robots: a game engine that trains a robot to handle many different types of environments. Train the robot to cope with all environments. Then, when a robot moves to the real world, just pretend it's in the game; hopefully it should be mostly trained to function in the real world. Maybe use a shared gaming world like Star Citizen, out soon. Get AI to be one of the races in the game. Simulate different types of bodies for the artificial intelligence to be in.

  • @sergeyzelvenskiy3925
    @sergeyzelvenskiy3925 6 years ago +6

    To build AGI, we cannot train the model on a narrowly focused dataset. We have to find a way for the system to interact with the world and learn.

    • @vovos00
      @vovos00 5 years ago +1

      Meta RL is the way

    • @bassplayer807
      @bassplayer807 5 years ago

      Sergey Zelvenskiy Is it possible to train an AI to grow into an AGI, say via a BCI/BMI from me to the AI, so I would be able to interact with it in real time and teach it about the real world vs simulation? Just a thought. I truly don't think reinforcement learning will get us to AGI; I think we gotta start thinking outside of the box to get to AGI. I wonder, if we harnessed the power of a quantum computer in the next three years, if we could figure out a way to build AGI? Perhaps a neuromorphic computer could help. I'm glad Trump signed a $1.2B bill to increase the nation's efforts on building quantum computers / researching quantum technology over the course of the next 10 years. I'm no computer scientist / AI engineer but I'm interested in getting into the field, cause I'd love to contribute to the AI community.

  • @dylanbaker5766
    @dylanbaker5766 6 years ago

    I think nano-tech is the key here. I think it's possible that graphene has the potential to function both as a superconductor for low voltages and as an insulator. This, in my view, may be able to create the electronic equivalent of a neuron with a myelin sheath and a dendrite.
    I think the major challenge is that a human brain grows organically and makes new pathways as it learns. As actions are repeated, the pathways most travelled fire more quickly.
    While computers can approximate the workings of a neuron, they can't yet index the information as efficiently as the brain can grow physical neural pathways.
    I read one time that DNA is the most efficient structure for storing data in the world, and that one gram of DNA could store the entire Internet.
    Is it somehow possible that nanostructures in a ribbon-like configuration using a superconductor like graphene could be used to store data in a way mimicking DNA? Could a versatile material like this create a DNA strand with a read speed equal to solid-state memory?
    I'm not formally educated in any of this, just my own recreational reading... I welcome any criticism of what I've stated here.

  • @citiblocsMaster
    @citiblocsMaster 6 years ago +1

    10:15 I would add a reasoning/understanding column

    • @autonomous2010
      @autonomous2010 5 years ago

      I understand your comment but you can't prove that I do. ;-)

  • @mattgraves3709
    @mattgraves3709 1 year ago

    I take it back. This is the best video to encourage future AI students.
    I've been a software engineer for over a decade. Curious about AI the whole time and my intuition kept telling me we need more cores like not thousands but millions and I guess not even millions but billions!
    We're on the right track. I love this talk and what you suggest

  • @omererylmaz3619
    @omererylmaz3619 4 years ago

    Guest request: Gregor Schöner

  •  5 years ago +1

    And there are still AI scientists who don't conceive how narrow AI could rapidly get 'wider' in the next few decades.

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      The objective function of animal brain and hence AGI is to maximize impact. You heard it first from me. gj

  • @dewinmoonl
    @dewinmoonl 4 months ago

    21:57
    how to throw shade

  • @mengni4426
    @mengni4426 1 year ago

    We need a great entrepreneur to find product-market fit between academia and industry.

  • @monkeyrobotsinc.9875
    @monkeyrobotsinc.9875 5 years ago +1

    A heads-up from a technical standpoint with these videos:
    1. Your noise gate: not needed, turn it off. It sounds too weird and unnatural, like the audio cuts out and is broken every time the speaker stops talking (to those listening with headphones/earbuds).
    2. This video doesn't sound that bad, but the Ray Kurzweil video desperately needed a de-esser. This just sounds muffled, like all the highs were cut off. Not the best solution. If a de-esser is being used, it's too strong.

  • @alaric_3015
    @alaric_3015 2 years ago

    22:56 Nyckelharpa, I think

  • @lasredchris
    @lasredchris 4 years ago

    How does intelligence arise in the human brain?
    General purpose intelligence
    Intelligence is not just about pattern recognition
    It is about modeling the world
    Re-engineer intelligence

  • @jeremycripe934
    @jeremycripe934 6 years ago

    About the toddler opening the cabinet: is it possible that he was curious about this cabinet because someone was banging on it, and he knew how cabinet doors worked, so he was excited to do that, and then was worried about the person who seemed to care so much about the cabinet without understanding what their goal was? The looking up could be a shared excitement about the cabinet, and then the looking down could be averting their gaze because they're not sure that this tall lurking stranger who was banging on it loudly was happy with their actions, which they realize they weren't even planning out.

    • @jeremycripe934
      @jeremycripe934 6 years ago

      They're just happy that they know to open cabinet doors and are happy to show that off without realizing that it's related to trying to place the books inside.

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      The objective function of animal brain and hence AGI is to maximize impact. You heard it first from me.

  • @otaviomendes6207
    @otaviomendes6207 2 years ago

    based

  • @darrendwyer9973
    @darrendwyer9973 6 years ago +3

    The missing element in artificial intelligence is that neural networks don't do much at all... Neurons do not store memories; they simply transmit memories stored in RNA from one location in the brain to another, kind of like a 3D hashtable. The prefrontal cortex drives the neural network and uses it to retrieve the "most important" memories from wherever they are stored in the brain, and then uses these "most important" memories for thinking. The actual thinking of a brain is a response to the retrieval of the "most important" memories, sorted automatically, so that, for example, if a person encounters a new idea, it is compared to existing ideas, and when it is acknowledged that it is a new idea, it becomes more important, or it can be deemed "less important" or irrelevant. Memories stored in RNA that are not relevant or important are used less and less, and the neural connections to these memories degrade, while the neural connections for the "most important" memories solidify. As a person goes through life, the more important memories become the most active and the least important memories become less active. The actual input from the senses is stored within these encoded RNA memory banks, so that, say, a memory can contain vision, sound, words, and other input from the senses. The prefrontal cortex and neural networks together sort this information and compare different information, and this is what can be described as "consciousness". Imagination is simply input from the eyes together with input from the memories without the actual eyeball input... Imagination is simply a by-product of these memories being utilized as desired by the individual, depending on what is considered "most important" at any given time.

    • @douglasholman6300
      @douglasholman6300 5 years ago +3

      This is a highly speculative and pseudoscientific comment, Darren Dwyer.

    • @autonomous2010
      @autonomous2010 5 years ago

      @@douglasholman6300 He's partially right but also quite wrong. There's not even close to enough probability space to store everything meaningful a person can experience and do in dedicated RNA, and his theory isn't new, as John Hopfield made a very similar point in 1982. That completely ignores the hard problems of qualia and abstraction. Humans are able to do things that can't be mapped out in a probability state. See the Chinese game of Go for an example of that.

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      The objective function of animal brain and hence AGI is to maximize impact. You heard it first from me.

  • @megaconus9174
    @megaconus9174 4 years ago

    A perfect robot will need a shrink

  • @lasredchris
    @lasredchris 4 years ago

    Consciousness
    Architecture for visual
    Does it via force
    Close up of a sign
    On the back of a cat

  • @sirbrighton2964
    @sirbrighton2964 4 years ago +1

    Is that Jeff Ross?

  • @Mirgeee
    @Mirgeee 6 years ago

    1:16:36 If that's the case, why does Google invest in DeepMind (which is a much longer-term investment than 2 years)?

  • @chenjus
    @chenjus 6 years ago +8

    Why was the Q&A cut out? There were a lot of good questions throughout the entire series. It would be good to have them on record for others to think about as well.

    • @lexfridman
      @lexfridman  6 years ago +7

      The Q&A wasn't cut out. It's included and starts at 1:15:39

  • @fire17102
    @fire17102 1 year ago +1

    Ohhh boy, is AI making a "comeback". You guys in 2017 have no idea

  • @lasredchris
    @lasredchris 4 years ago

    Object permanence
    Grand challenge for AI - can we understand what is going on inside the 18-month-old brain sufficiently to engineer this kind of intelligent cooperative behavior in a robot?

  • @ronaldlogan3525
    @ronaldlogan3525 3 years ago

    The groundwork has been well laid by cognitive science for the construction of psychological prisons which we have all been invited to inhabit. Now we have an army of engineers all too happy to construct them and PR people to send out the invitations. Yes, you too can become a mindless cog in the machine; come join us.

  • @lasredchris
    @lasredchris 4 years ago

    Memory system
    Probabilistic programs
    The game engine is in your head
    Physics engine

  • @brentoster
    @brentoster 3 years ago

    Some other insights into neuroscience and AGI: czcams.com/video/1_Mcp-YjPmQ/video.html

  • @syourke3
    @syourke3 2 years ago

    Let's build robots that are infinitely more intelligent than we are! What could possibly go wrong?