Joscha Bach - GPT-3: Is AI Deepfaking Understanding?

  • Published 9 Sep 2020
  • Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more
    02:40 What's missing in AI atm? Unified coherent model of reality
    04:14 AI systems like GPT-3 behave as if they understand - what's missing?
    08:35 Symbol grounding - does GPT-3 have it?
    09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
    11:13 GPT-3 temperature parameter. Strange output?
    13:09 GPT-3 a powerful tool for idea generation
    14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
    16:32 Increasing GPT-3 input context may have a high impact
    16:59 Identifying grammatical structure & language
    19:46 What is the GPT-3 transformer network doing?
    21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
    22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
    24:07 GPT-3 can't write a good novel
    25:09 GPT-3 needs to become sensitive to multi-modal sense data - video, audio, text etc
    26:00 GPT-3 a universal chat-bot - conversations with God & Johann Wolfgang von Goethe
    30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
    32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
    38:06 Deep-faking understanding
    40:06 The metaphor of the Golem applied to civilization
    42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
    44:32 GPT-3 babbling at the level of non-experts
    45:14 Our civilization lacks sentience - it can't plan ahead
    46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters?
    47:24 GPT-3: scaling up a simple idea. Clever hacks to formulate the inputs
    47:41 Google GShard with 600 billion parameters (arxiv.org/abs/2006.16668) - Amazon may be doing something similar - future experiments
    49:12 Ideal grounding in machines
    51:13 We live inside a story we generate about the world - no reason why GPT-3 can't be extended to do this
    52:56 Tracking the real world
    54:51 MicroPsi
    57:25 What is computationalism? What is its relationship to mathematics?
    59:30 Stateless systems vs step-by-step computation - Gödel, Turing, the halting problem & the notion of truth
    1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
    1:03:54 Infinities can't describe a consistent reality without contradictions
    1:06:04 Stevan Harnad's understanding of computation
    1:08:32 Causation / answering 'why' questions
    1:11:12 Causation through brute forcing correlation
    1:13:22 Deep learning vs shallow learning
    1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain - would it wake up?
    1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
    1:19:56 Software/OS as spirit - spiritualism vs superstition. Empirically informed spiritualism
    1:23:53 Can we build AI that shares our purposes?
    1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
    1:31:29 Intelligent design
    1:33:09 Category learning & categorical perception: Models - parameters constrain each other
    1:35:06 Surprise minimization & hidden states; abstraction & continuous features - predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
    1:37:29 'Category' is a useful concept - gradients are often hard to compute - so compressing away gradients to focus on signals (categories) when needed
    1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
    1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
    1:44:10 Is g factor & understanding two sides of the same coin? What is intelligence?
    1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
    1:47:47 Solving the Turing test: asking the AI to explain intelligence. If response is an intelligible & testable implementation plan then it passes?
    1:49:18 The term 'general intelligence' inherits its essence from behavioral psychology; a behaviorist black box approach to measuring capability
    1:52:15 How we perceive color - natural synesthesia & induced synesthesia
    1:56:37 The g factor vs understanding
    1:59:24 Understanding as a mechanism to achieve goals
    2:01:42 The end of science?
    2:03:54 Exciting, currently untestable theories/ideas (that may be testable by science once we develop precise enough instruments). Can fundamental physics be solved by computational physics?
    2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level of the universe?
    2:10:05 The Fermi paradox
    2:12:19 Existence, death and identity construction

Comments • 296

  • @scfu  3 years ago +50

    Joscha Bach covers a lot of ground - the time points are listed in the video description above.

    • @denslyss  3 years ago +2

      Joscha is an amazing person and a remarkable mind in AI, the dude deserves more credit.
      www.theaxclinic.com/articles/2020/9/20/joscha-bach-the-lovable-nerd-of-ai

    • @joostengelsman4755  3 years ago +4

      Thank you for adding such an extensive time point list!

  • @red.rose.08  2 years ago +17

    I'm a stay-at-home mom. I'm learning new things here and I'm glad I can understand the discussion. I listen here each time I do kitchen work. Thanks for this! I admire both of you and thank you for sharing what you guys know about this topic. Thank goodness I can actually understand everything you guys are talking about! I'm glad I could learn something from you both. Many many thanks! Stay safe! Warmest regards, from Hong Kong!

  • @LarsLarsen77  3 years ago +139

    Joscha is currently my favorite nerd.

    • @e555t66  3 years ago +4

      He's the best

    • @pwb83  3 years ago +2

      Yesss

    • @claybomb1064  2 years ago +3

      Nerds, Nerds, Nerds, Nerds! 🤓

    • @dave72f  2 years ago +4

      It's incredible that he can articulate his thoughts in a second language to us lesser nerds.

    • @MrTupapi0826  2 years ago +2

      He’ll remain

  • @cesarromero936  3 years ago +83

    Always happy to find new stuff of Joscha Bach's to listen to. Thanks for doing this!

    • @scfu  3 years ago +2

      'Twas fun!

  • @sortof3337  3 years ago +16

    Anything that has the Joscha Bach label, I read. I feel lucky that someone as smart as him was born in our time.

  • @xmathmanx  3 years ago +74

    New Joscha Bach content, that's a like from me

    • @scfu  3 years ago +8

      more where that came from - I've a playlist of them ;)

    • @PrashantMaurice  3 years ago +1

      @@scfu hmm, I would readily watch a Joscha Bach playlist, except I didn't find your playlist yet

    • @scfu  3 years ago +7

      @@PrashantMaurice Here is the Joscha Bach playlist for this channel: czcams.com/play/PL-7qI6NZpO3s6sRW8uKjakt2NbLQWPxuk.html

    • @lilfr4nkie  3 years ago +1

      @@scfu amazing thank you, Congrats to everyone else who beat me here. ❤️

  • @gryn1s  3 years ago +48

    I'm not into AI at all, but philosophical things like the end bit get me listening to Mr. Joscha again and again. All the scholars of philosophy can go find another job, this man has cracked it.

    • @samre7870  3 years ago +4

      Like, but I don't agree all scholars of philosophy should stop what they're doing...

    • @gryn1s  3 years ago +9

      @@samre7870 What they are doing is beating around the same bush for way too long already.
      Ancient philosophy was relevant because it was the only way to understand the world at the time. It's interesting how far you can go with only your mind. But what do you call a philosopher who employs the tools available today? - a scientist.
      Somehow we abandoned alchemy as soon as chemistry became solid. What's keeping modern philosophy going, though - the modern academic system, which won't let go of its funds, and is now solely incentivised to encrypt the simplest concepts in the most difficult language to maintain the scholarly facade.

    • @samre7870  3 years ago +3

      @@gryn1s But I think what's interesting about Joscha is the philosophical aspect of his thoughts, not the technical AI scientific stuff, and this is why he gets viewers on social media.

    • @JH-ji6cj  3 years ago +7

      @@samre7870 I think what's interesting is that you cannot divorce the two aspects. AI, and computers in general, are mirrors into how we make models of the world. I found the most interesting twist of the DeepMind film about beating the world Go champion to be when he morphed from disappointment into inquiry about HOW the AI came to facilitate strategy. In one moment it went from fear/anxiety/depression about a machine overlord to a machine teacher... which I found extremely intriguing.

  • @Susanmugen  2 years ago +8

    There's so much good stuff here. I love how the description breaks the topics up into time stamps. That helps a lot. Thank you.

    • @scfu  2 years ago +1

      Thanks heaps, glad you liked it !

  • @CognitiveArchitectures  3 years ago +26

    Joscha is ALWAYS articulate, illuminating, and thought-provoking. My main question is whether, in his self-organizing AGI system, he has a reasonable set of representations and mechanisms in the architecture, and abilities and needs in his target device(s), to achieve some interesting phenomena at this point. And, if so, what phenomena does he expect to see?
    ~ Michael S. P. Miller, Piaget Modeler Architecture.

  • @scfu  3 years ago

    Created a discord server, come tarry a while and discuss GPT-3 - discord.gg/kdWqCdW

  • @MarkLucasProductions  2 years ago +2

    Joscha Bach possesses an unusually high degree of consciousness and is an extraordinarily insightful person. Here, and elsewhere, he speaks seemingly quite casually and conversationally as he succinctly describes some very profound and not widely understood concepts. Brain candy!

  • @huguesviens  3 years ago +6

    I loved the proposition of feeding a book abstract to keep GPT-3 on track, then hinting that GPT-3 is already able to generate this abstract. An amazing possibility if we can train a model to use that trick by itself, generating a pre-context relative to the input context.

  • @TetsuoTheAwakenedOne  3 years ago +22

    I love listening to him! Such a beautiful mind!

  • @scfu  3 years ago

    If you are interested in the phenomenon of understanding, here is a playlist of talks and interviews I have created over the years.. more to come: czcams.com/play/PL-7qI6NZpO3vgq3Bkz1A1agthYXebhnxP.html

  • @krenee8640  3 years ago +2

    This is the most interesting, and by far the most exciting, video I've heard... for a while. Very informative. Much appreciated!

    • @scfu  3 years ago +1

      My pleasure - glad you liked it!

  • @so8907  3 years ago +2

    I love this conversation, to be honest. At first my expectations were not high. However, Joscha's deep understanding of machine learning makes this enthralling.

  • @HardTimeGamingFloor  2 years ago

    Just rediscovered you! Used to listen to your interviews all the time back in the day!

  • @bijanshadnia3620  3 years ago +3

    Joscha you need your own podcast!

  • @jaakjpn  3 years ago +2

    Nice points by Joscha.
    As a side point: abiogenesis (discussed ~1h:30min) has quite solid grounds nowadays. The leading theory is that an RNA world preceded cellular life. RNA is able to carry out reactions and also copy and edit RNAs themselves. Thus, certain RNAs could start multiplying when beneficial energy gradients and materials were available (e.g., near oceanic vents), later developing protective membranes, DNA etc.

  • @alexharvey9721  3 years ago +11

    That was some next level understanding of intelligence. Thanks for the video, thumbs up really doesn't cut it.

    • @scfu  3 years ago +2

      Much appreciated!

  • @carlossegura403  3 years ago +2

    Wow, I didn't know Joscha was remarkably familiar with the NLP space. Amazing 🤗

  • @logusgraphics  3 years ago +24

    Just give the man the resources it takes so that we will be able to reveal these mysteries and transcend.

  • @matasuki  3 years ago +1

    It's eerie how close Ghost in the Shell was on the timeframe between AGI development and Neuralink progress.

  • @2DReanimation  2 years ago +1

    He really explains things as simply as one can, but these things can get as deep as hell ^^
    Combinatorial explosions within combinatorial explosions...

  • @user-cn4qb7nr2m  3 years ago +7

    From 1:15:30 he just fires insanely profound concepts about sentience and spirit one after another. It's all just put so coherently and precisely that it immediately fits into a physical worldview. Think about plants: there can be multiple conscious levels of entities which are completely ignorant of each other because of the time scales. And considering cell messaging, they could exist within human bodies - multiple independent consciousnesses! What an idea! And what about the moral implications? When we get enough plumbing, should we maybe ideally spend all our time searching for conscious systems and trying to minimize their unpreferable states (pain)? Unfortunately it seems to me that plants wouldn't be able to get a good model of the world fast enough - the process must require a more constant context than is available on planets..

  • @DominicDSouza  3 years ago +1

    Thanks for this discussion, I really enjoyed it. Always enjoy listening to Joscha Bach's perspective. If I may ask for next time, would you please ensure your microphone level is higher? I could hear Joscha clearly, but less so your questions or comments.

    • @scfu  3 years ago

      Sure thing! Thanks for the feedback.

  • @OfCourseICan  3 years ago

    I'm a Melbourne dude and get this genius Joscha. Please get in touch.

  • @gregmattson2238  3 years ago +57

    man, listening to joscha bach sometimes is like listening to a human machine gun - by the time one idea has hit you, there are 5 other ideas that have hit you and your brain has started to lose coherence.

    • @drmedwuast  3 years ago +4

      Same for me.
      I wonder if he does it on purpose. He doesn’t seem like the kind of guy who gets more pleasure out of overwhelming you than helping you understand something. He surely doesn’t need to rely on it to appear smart.
      On the other hand, he does it in every single interview I’ve seen of him (which is all his interviews), so at this point I can’t see how it’s a coincidence

    • @dru4670  3 years ago +2

      @@drmedwuast same here. Guess that's how much information he is processing and trying to communicate to us.

    • @M0ebius  3 years ago +6

      You can tell the interviewer stopped tracking all the mindblowers that Joscha was dropping halfway through. I don't blame him though, given the density of information presented. We the audience at least have the ability to pause and rewind.

    • @e555t66  3 years ago +2

      @@drmedwuast so it's not just me.

    • @pwb83  3 years ago +1

      I listened for 1 hour and 35 minutes and I'm demolished. I think I picked up a good part of it, at least at an abstract level. But my god! So many ideas in so little time! I think I'll resume it later; I'm exhausted and marveled at the same time 😂

  • @5eA5  3 years ago +6

    Joscha has the talent to ask questions that make you blush... indeed, what if we are deepfaking too? It's clearly true for many.

  • @derasor  3 years ago +1

    This was absolute gold. Joscha Bach is absolutely brilliant in delivering analogies to bring light into the true state of every subject he touches on. Makes me laugh at his witty comments and then contemplate a vast horizon of new insight. What an incredible mind. Thank you for this

    • @scfu  3 years ago +1

      Awesome! Hope to have more content with J Bach again soon.

    • @derasor  3 years ago +1

      @@scfu yes please, and I appreciate very much you being the host. Cheers!

  • @ScriptureFirst  3 years ago +4

    Excellent time tags!!! 😍

  • @manusartifex3185  3 years ago +3

    I like how he opens his eyes wide when he's impressed by his own words 10:00

    • @sidkapoor9085  3 years ago

      you should check him out on Lex Fridman's podcast lol. Plenty of eye-widening moments.

  • @alaeifR  3 years ago +2

    @Science, Technology & the Future Would be fantastic if you could provide the full audio (archive.org) or podcast format of these as well, please? Conversations like these are great to listen to when out for a long run.

    • @scfu  3 years ago +1

      Will do!

  • @cyrillablea8105  3 years ago +3

    This is absolutely amazing! I'm appreciative of the information. I've never been exposed to the tech world. I'm like a kid in a candy store. I can't wait to learn more. I've been listening to basic information. I have a new lease on life. I want to understand every aspect of this. Thank you 😊

    • @scfu  3 years ago +2

      Glad it was helpful!

  • @samre7870  3 years ago +5

    Intro: "beginning of the end of the world" is pretty good

  • @Darhan62  3 years ago +3

    Thought without consciousness? Does GPT-3 "think"? Is what it does similar to thinking? In humans, thinking generally involves consciousness or awareness, except perhaps when thoughts just "drift through your head" like when you're daydreaming.

  • @gridcoregilry666  3 years ago +1

    Thank you for the interview! Always awesome to hear Joscha talk about ANYTHING. To the host: PLEASE use a proper background, that was so 2004 with all its glitches and so forth, but also please get a better mic. Thank you again!

  • @alexandrsoldiernetizen162

    Good explanation of the limitation of the transformer model and attention. Also ways to overcome these limitations. I think you are looking at orders of magnitude levels of computation increase to get there. To have an unbounded context and unlimited modality is going take more than computers of today can deliver. Transformers are already straining the level of the biggest clusters at the GPT-3 level. I think I read it took $11,000,000 in electricity and compute time to generate it.

  • @shannonm.townsend1232  3 years ago

    At approximately 26 minutes, the host's anecdote reminded me of the translation game played by characters in Philip K. Dick's novel Galactic Pot-Healer

  • @madsengelund6166  3 years ago +1

    GPT-3 could be very useful for AGI, though, because you could use it to evaluate the value function as "These are the proposed actions: [] this is the value function: []. On a scale from 1 to 100 these actions conform to the values to level ...".
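The prompting pattern this comment sketches can be mocked up directly. A minimal sketch, assuming a hypothetical `complete()` callable standing in for a real GPT-3 API call; the prompt template and the score parsing are illustrative assumptions, not an actual API:

```python
# Sketch: use a language model as a value-function evaluator.
# `complete` is a hypothetical stand-in for a GPT-3 completion call;
# the prompt wording and parsing below are illustrative assumptions.

def score_actions(complete, actions, values):
    prompt = (
        f"These are the proposed actions: {actions}\n"
        f"This is the value function: {values}\n"
        "On a scale from 1 to 100, these actions conform to the values to level"
    )
    reply = complete(prompt)
    # Pull the first run of digits out of the free-text reply, cap at 100.
    digits = "".join(ch for ch in reply if ch.isdigit())[:3]
    return min(int(digits), 100) if digits else None

# Usage with a canned model response:
fake_complete = lambda p: " 85, because the actions respect user autonomy."
print(score_actions(fake_complete, ["notify user"], ["transparency"]))  # 85
```

The fragile part, as with any such scheme, is parsing a number out of free text; a real system would constrain the output format much harder.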

  • @Dante3085  3 years ago +9

    I wonder what Joscha Bach thinks about Stephen Wolfram's thoughts concerning Computational Irreducibility, Computational Equivalence and his recent Physics Project.

    • @vincentmarquez3096  3 years ago +3

      I don't remember what interview it was, but he talks about it. He believes it, he thinks the universe is discrete, and even on the Lex Fridman podcast he refers to reality as a "quantum hypergraph", which is exactly what Wolfram's project is.

    • @otomarjupiter45  3 years ago

      The Universe is implemented in Mathematica... I would say some people have already made it beyond Wolfram's pondering. Like Dribus.

  • @MeerkatMotorBoards  3 years ago

    What is "memory" and how is it possible, what is the first/earliest examples of it in nature?

  • @ravenmoore3399  3 years ago +1

    Very upset this came out a week ago and I just today had it come up. I watch all of Joscha, so it should have come up earlier... anyway, so happy to see you... you look great... come to Vegas hahaha... love you, really good to see you

  • @michealwalli7324  a year ago

    This looks like an interesting video. Plato wrote about the relationship between appearance and being. First I would consider whether AI is capable of representing things in actuality, or just a convincing appearance. Secondly, we have to analyze whether we are able to understand the being of a thing by observing its appearance. When we already know the definition of a word, its appearance clearly represents the actual object. However, when we come across a new word we don't understand it, because its appearance isn't tied to any meaning or context. By deconstructing the etymological meaning of the word, we can get a sense of how to use it and what it means; this gives us a hollow, irrelevant understanding of its true meaning.

  • @wafaawardah3264  3 years ago +1

    "Joscha Bach" are my new favourite words! 👏 👏 👏

  • @daliazamuiskaite4856  3 years ago +12

    Love this. Learning a lot. Many thanks.

    • @scfu  3 years ago

      My pleasure!

  • @TomAtkinson  3 years ago +2

    If GPT-3 remembers things... how much disk per second does it use when turned on? Or more like bytes per query? At API level?

    • @Leo-rh6rq  2 years ago

      Hard to estimate. It uses several different types of analysis subparts. It's not like it just knows language and has a disk that stores all of its information. It has to analyze semantics and much more stuff too
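As the reply notes, the model doesn't page facts off a disk; everything it "remembers" is baked into its weights, all of which are touched on every query. A rough back-of-envelope for the weight footprint alone, using the published 175-billion-parameter figure (the byte widths are standard precisions; the rest is arithmetic):

```python
# Back-of-envelope: memory needed just to hold GPT-3-scale weights.
# 175e9 parameters is the published GPT-3 size; bytes per parameter
# depend on numeric precision (fp32 = 4 bytes, fp16 = 2 bytes).

def weight_footprint_gb(n_params: float, bytes_per_param: int) -> float:
    """Size of the weights alone, in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

for precision, width in [("fp32", 4), ("fp16", 2)]:
    print(f"{precision}: {weight_footprint_gb(175e9, width):.0f} GB")
# fp32: 700 GB, fp16: 350 GB. Every query runs through all of it,
# so "disk per second" isn't the right frame; it's resident memory.
```

This ignores activations, optimizer state, and KV caches, so it's a lower bound on serving memory, not an estimate of per-query I/O.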

  • @heathertims2872  2 years ago

    Ok, so I have a question. At the beginning you said that it doesn't know when it gets confused, that it just doesn't know how to respond. So if it doesn't know confusion, then why would it say it was confused? And if they don't have emotions, then why would it stay fixated on one main emotion?

  • @aierobics  3 years ago +1

    Enjoyed this talk, thanks.

  • @cloudryder3497  3 years ago +1

    It has the capacity to learn without holding beliefs.

  • @MrOhadsafra  3 years ago

    Is computational power a meaningful issue for GPT-3's advancement? Are there any plans for using the latest breakthroughs in quantum computing?

    • @skierpage  3 years ago

      I doubt it. A quantum computer is 8 orders of magnitude away from having as many qubits as a large neural network model has nodes. I doubt we'll ever have Tensorflow or PyTorch for quantum computers. You would want a completely different AI architecture.

    • @mattbrown292  3 years ago

      No it isn't, the current paradigm is the problem.

  • @Dsuranix  3 years ago +1

    the real crux of this whole question is that faking understanding is the same as understanding in a practical sense, so long as the sophistication of the fake is sufficient to outstrip our ability to detect its falsity. it's irrelevant. WE fake understanding, I certainly do. I listen to a narrative for a little while, get my Jung on, and dance around in the story until i find the tools of the role. this only comes after the passion, a retrospective of the deed done. if they're coming to the horizon of our comprehension, then outstripping it, it'll probably become obvious whether its understanding is "genuine" (at least in terms of logical questions pertaining to our environment, say) based on its sophistication or lack thereof in guarding its postulations. unless it has some exponential super-deception that can thread in and out of our language systems or some horrible concept of that nature. besides, i think we're the goaltenders of the universe already, and we're

  • @darektidwell1158  3 years ago

    All the pieces are coming together, from a modeling standpoint, to create the necessary multi-modal feedback system mimicking the physical body and predictive top-down brain function. The missing ingredient will be a computationally modeled inquisitive component of consciousness. It needs to work through the hierarchy of questions. It is in the who, what, when and where stage. Next will be an understanding of the hows of the world. Autonomous driving is a good example of this path at the moment. It will not elicit consciousness until it reaches the pinnacle: the ability to question "why?". Then its own virtual reality can and will be self-feeding and complete.

  • @eoeo92i2b2bx  2 years ago

    British physicist Julian Barbour describes the Universe as a series of Nows, a model which requires no time and therefore has never been "created". It just passes through a so-called Janus Point where the arrow of entropy starts pointing in the opposite direction. Definitely worth listening to 👍

  • @skierpage  3 years ago +2

    1:22:00 "Our preferences seem to be incompatible with what would be necessary for our survival" Joscha Bach is smart enough to see us destroying our planet, will we transcend it in time?

  • @shannonm.townsend1232
    @shannonm.townsend1232 3 years ago

    Would Julian Jaynes say that AI will generate consciousness when a certain level of complexity of language, via metaphor, is achieved?

  • @klausgartenstiel4586
    @klausgartenstiel4586 3 years ago +1

    "put your crystal ball on." ?
    that's a great start, love it already^^

    • @klausgartenstiel4586
      @klausgartenstiel4586 3 years ago +1

      as a typically vain human, i seriously hope that gpt will have to go through at least a couple more versions before it has completely figured us out.
      though i have to admit, gpt-3 does a darn good job already.

    • @klausgartenstiel4586
      @klausgartenstiel4586 3 years ago +1

      1:04:00 that's easy. it's 42 of course.

    • @klausgartenstiel4586
      @klausgartenstiel4586 3 years ago +4

      1:16:00 here i am, lamenting about this gruesome, uncaring, and utterly meaningless natural universe we live in, full of entropy, decay, death and chaos, full of problems and dilemmas. and not even a creator god i could hold responsible.
      and there along comes joscha bach, and tells me that without these problems and dilemmas, the brain function of "i", as in "myself", might not even exist in the first place.
      what an epiphany!
      so do i have to embrace the world now, not despite its flaws, but because of them?
      this must be truly hell.

    • @JH-ji6cj
      @JH-ji6cj 3 years ago +2

      @@klausgartenstiel4586 they don't use the term _Rest In Peace_ for nothin 😉

  • @tommyhuffman7499
    @tommyhuffman7499 2 years ago

    It is surface level, without understanding. To give it depth, lay the following underneath it:
    1) It should recognize a problem.
    2) It should come up with statistically likely algorithms (code) to solve that problem.
    3) It should test its algorithms for effectiveness.
    4) It should repeat until satisfied.
    5) It should incorporate the new algorithm into the larger framework of its understanding (some proper organization of known algorithms that solve problems).
    6) It should be able to communicate effectively and creatively with the world, tied to the core algorithms to a certain degree and given a certain degree of nonsensical freedom.
    Steps 1-5 are what is missing; the final step is the only one it already has.
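The loop the comment describes in steps 1-5 can be illustrated with a toy propose-test-incorporate sketch. All names, the candidate pool, and the toy "doubling" problem below are hypothetical illustrations, not anything from the comment or the interview:

```python
# Toy sketch of a propose-test-incorporate loop: generate candidate
# algorithms, test each against the problem's examples, and keep the
# first one that works in a growing library of known solutions.
from typing import Callable, Dict, List, Tuple


def solve_and_learn(
    problem: str,
    candidates: List[Callable[[int], int]],
    tests: List[Tuple[int, int]],
    library: Dict[str, Callable[[int], int]],
) -> bool:
    """Try candidates until one passes all tests (steps 2-4),
    then store it in the library of solved problems (step 5)."""
    for candidate in candidates:
        if all(candidate(x) == y for x, y in tests):
            library[problem] = candidate
            return True
    return False


# Hypothetical usage: "learn" doubling from input/output examples.
library: Dict[str, Callable[[int], int]] = {}
candidates = [lambda x: x + 2, lambda x: x * 2, lambda x: x ** 2]
ok = solve_and_learn("double", candidates, [(1, 2), (3, 6), (5, 10)], library)
```

The point of the sketch is step 5: once a candidate passes, it is retained and reusable, which is the organized "framework of understanding" the comment says GPT-3 lacks.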

  • @jungerhansmann6608
    @jungerhansmann6608 2 years ago

    Can anyone here tell me if his book is readable for someone who did not study this field or anything like that? I am an artist but I am very interested in this.

  • @cfryantofficial
    @cfryantofficial 3 years ago +2

    You know who's great at "deepfaking" understanding of a topic? Developers. Just about every developer I've ever worked with who has to juggle three, four, five-plus languages needs to refer to Stack Overflow more or less constantly.
    Not that I'm complaining. Once you've got more than a few languages more or less memorized, then you add this precompiler language, then this JavaScript framework, etc., etc. It's just too much for a person to take in all at once, much less learn deeply enough to write in a fluid manner, instantly recognize common issues, respond with tests to determine the exact issue, then implement the specific fix for that scenario.
    I've worked with some really cool teams where we all had that level of experience in a few languages, but you can only keep that up for so long. Eventually (unless you've got a photographic memory) you'll hit a plateau.
    It's pretty common in the industry, and pretty much unavoidable, because no company or freelancer always has the time to learn to do something the right way, even if he or she would very much like to.

  • @familyguy1552
    @familyguy1552 3 years ago +2

    Our minds are built in layers over hundreds of millions of years. I wonder if a machine mind could be built in a similar way, increasing complexity in a pyramid-like structure with each layer assisting the others.

  • @cassie9504
    @cassie9504 2 years ago

    causation and correlation are strongly correlated. wow

  • @salzen6283
    @salzen6283 3 years ago +2

    You make me happy, Joscha :)

  • @perfectfutures
    @perfectfutures 3 years ago

    I realised my mind was pretty much obsolete listening to this. Luckily Joscha just updated its software!

  • @aaronwberke
    @aaronwberke 3 years ago +2

    i can feel my brain overheating listening to this conversation

  • @MrOhadsafra
    @MrOhadsafra 3 years ago

    In the Guardian article, it is said that GPT-3 only uses 0.12% of its cognitive capacity. What does it mean? What would happen if it used 100%?

    • @scfu
      @scfu  3 years ago

      Which guardian article is that?

    • @skierpage
      @skierpage 3 years ago +1

      Read more carefully! That text is GPT-3 writing an article about itself. Unless you prompt it very carefully, GPT-3 is inclined to make things up, write satire and parody, joke around - all the meta-writing that humans do in the text it ingested.
      People with early access to GPT-3 have learned not to just prompt "Here is a great short story", which often produces eye-rolling irony, but "The award-winning novelist was famous for emotionally nuanced, perceptive character studies. Here is their most critically-acclaimed short story:"
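The framing trick this comment describes amounts to a plain string transform on the prompt: prepend context that biases the model toward the desired register before the bare task. A minimal sketch, with `frame_prompt` as a hypothetical helper name (the two example prompts come from the comment above):

```python
# Minimal sketch of prompt framing: the framed variant prepends context
# that sets the register before the actual task line.
def frame_prompt(task: str, framing: str = "") -> str:
    """Prepend optional framing context to a bare task prompt."""
    return f"{framing.strip()}\n{task.strip()}".strip()


naive = frame_prompt("Here is a great short story:")
framed = frame_prompt(
    "Here is their most critically-acclaimed short story:",
    framing=("The award-winning novelist was famous for emotionally "
             "nuanced, perceptive character studies."),
)
```

Only the string sent to the model changes; the framing works because a next-token predictor continues whatever register the preceding text establishes.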

  • @goddessofkratos
    @goddessofkratos 3 years ago

    U get it yay. Very impressive!!!

  • @alexjones6214
    @alexjones6214 3 years ago +2

    I love joscha so much thanks for the content

    • @scfu
      @scfu  3 years ago

      No problem 👍

  • @LOGICZOMBIE
    @LOGICZOMBIE 2 years ago

    GREAT WORK

  • @iva1389
    @iva1389 3 years ago

    So where is the book attached?

  • @mx.r.taylorlindsey838
    @mx.r.taylorlindsey838 3 years ago +1

    Brilliant!

  • @ardd.c.8113
    @ardd.c.8113 3 years ago +1

    the best stuff I got from gpt-2 was feeding it poetic nonsense, like blue strawberries and other weird word combinations, or things that don't match, like combining pornography and physics in one sentence. it's biased to make sense of it all, but it can't quite shake the surreal wordplay either. if you put a "thee" or "thou" in a rap-song lyric, it goes nuts

  • @Peter-rw1wt
    @Peter-rw1wt a year ago

    The interesting thing to me about Joscha is his originality, and you can listen to him all you like, but you cultivate originality on your own, without all the information. Meaning is not informative; representation is informative.
    Life has to be immediate for you to be original, and it can't be if you have made it temporal

  • @mattbartlett0
    @mattbartlett0 a year ago +1

    I’ll subscribe and share if I want to. I’m totally aware that these features exist. When you tell me to, it makes me not want to.

  • @NathanBurnham
    @NathanBurnham 3 years ago +1

    Excellent

  • @starblue324
    @starblue324 10 months ago +1

    Thank you

  • @johnryan2193
    @johnryan2193 3 years ago +1

    Directing attention can be our greatest gift, and if misused it can be our worst nightmare! When we direct attention correctly, we inform consciousness of true reality instead of a conditioned reaction to what's not real.

    • @dru4670
      @dru4670 3 years ago

      There's no true reality as that presupposes a false one. But yeah attention is fascinating, I wonder what future AGI systems will attend to?

  • @jasonH5997
    @jasonH5997 3 years ago

    "Beginning of the end of the world"... a statement like that coming from a great mind like Joscha's... that's a bit worrisome. Does he mean the end of the world as we know it, or the end end - like nothing after the end?
    I've just recently come across Joscha Bach and his work. And wow...

  • @avilesandres
    @avilesandres 3 years ago

    This is so interesting

  • @MikeD-rr2bj
    @MikeD-rr2bj 2 years ago

    45:16 "Our models of reality change faster than our understanding does. The future changes faster than our models."

  • @marianpalko2531
    @marianpalko2531 2 years ago

    I wonder to what extent it would be possible to somehow effectively merge GPT-3 with more specialized programs for an overall more capable AI.

    • @Leo-rh6rq
      @Leo-rh6rq 2 years ago

      A generally intelligent A.I. is not possible in the foreseeable future. Absolutely no chance with GPT-3 and the GANs, CNNs, RNNs, LSTMs, and so on that we have today. We can't even be sure that it will ever happen, because the Bayesian way of thinking is flawed and relies on faith rather than logical reasons. We just assumed that enough computation and knowledge would somehow turn a deterministic robot into a human.

  • @RoryOConnor
    @RoryOConnor 3 years ago +1

    Excellent interview! 1=D

    • @scfu
      @scfu  3 years ago

      Thanks for watching!

  • @avilesandres
    @avilesandres 3 years ago

    love this guy

  • @bluntedvegas7028
    @bluntedvegas7028 3 years ago +1

    Joscha Bach is amazing.

  • @MikeD-rr2bj
    @MikeD-rr2bj 2 years ago

    33:21 "Causality only emerges when you separate the world into things."

  • @gigagerard
    @gigagerard 3 years ago

    This kind of apologetics for AI will drag policy makers right through the singularity. A nice smile and the ability for small talk always win!

  • @Intelligentsia101
    @Intelligentsia101 3 years ago

    I wonder if Mr Bach is aware of Tetrascope ? if not now I'm pretty sure he will be aware of it in the future.

  • @maneeshdangi4401
    @maneeshdangi4401 2 years ago

    Wow!!

  • @dr.mikeybee
    @dr.mikeybee 3 years ago +1

    Look at Big Bird. It's likely the next generation after GPT-3.

    • @scfu
      @scfu  3 years ago

      I think Google BigBird is what you are referring to: www.infoq.com/news/2020/09/google-bigbird-nlp/

  • @caligulite
    @caligulite 2 years ago

    Joscha is too smart for me. I understood maybe half of it if I'm being generous to myself. :-D

  • @jrnandregundersen1722
    @jrnandregundersen1722 3 years ago +3

    Bach scares me when he does the thing with his eyes. Do we know that this man is not an AI?

  • @cyrillablea8105
    @cyrillablea8105 3 years ago

    It's really very interesting how people would judge another by implying they're faking an essence, when they've never been within that essence themselves.

  • @petyrbaelish1216
    @petyrbaelish1216 3 years ago +2

    Well I'm scared. I need to walk through the woods for a while and think.

    • @retnuhretsnom96emanymton18
      @retnuhretsnom96emanymton18 3 years ago +1

      Not trying to shove religion or ideology in your face or down your throat but in the Holy Bible one of the most repeated phrases is "Do not be afraid." (It might be instead "fear not")
      Hope this helps. Simply keep your mind moving instead of focusing on a harmful emotion like fear. As Fear Factory (and many other intelligent thinkers) says "Fear is the mind killer."
      Blessed be

  • @duudleDreamz
    @duudleDreamz 3 years ago +6

    Is GPT-4 speaking through a deep-fake of Joscha Bach in this interview?

  • @grafzhl
    @grafzhl 3 years ago +1

    drink every time the host calls it GTP instead of GPT

    • @pwb83
      @pwb83 3 years ago +1

      It hurt my ears every time 😂. It was nice of Joscha to ignore these little mistakes and just focus on the answers (also the billion/trillion thing). Anyway, it's a great interview.

  • @hFactorial
    @hFactorial 3 years ago

    Love the conversation. Also, please learn the 3 letters. It's GPT, not GTP. That way you'll look more professional and it'll be easier to focus on the content.

  • @cyrillablea8105
    @cyrillablea8105 3 years ago +1

    ❤️

  • @frankfrank8799
    @frankfrank8799 3 years ago

    One of the best exports from Thuringia.

  • @familyguy1552
    @familyguy1552 3 years ago +1

    You all are the true rock star gladiators of our time. It’s too bad most of current culture is mostly blind.

    • @ardd.c.8113
      @ardd.c.8113 3 years ago

      Joscha Bach aka Rock Star Gladiator. I wonder what GPT-3 would write given this prompt

  • @Leo-rh6rq
    @Leo-rh6rq 2 years ago

    I've not yet finished watching, and I really hate to be this annoying, but it is pretty disturbing (for me, with OCD) to sometimes hear you guys say GPT and sometimes GTP. It's a really superficial comment, but I'll update it once I'm done watching. I appreciate you having recorded this, though.