You Don't Understand AI Until You Watch THIS

  • Published 26 Mar 2024
  • How does AI learn? Is AI conscious & sentient? Can AI break encryption? How does GPT & image generation work? What's a neural network?
    #ai #agi #qstar #singularity #gpt #imagegeneration #stablediffusion #humanoid #neuralnetworks #deeplearning
    Discover thousands of AI Tools. Also available in 中文, español, 日本語:
    ai-search.io/
    I used this to create neural nets:
    alexlenail.me/NN-SVG/index.html
    More info on neural networks
    • But what is a neural n...
    How stable diffusion works
    • How Stable Diffusion W...
    Here's our equipment, in case you're wondering:
    GPU: RTX 4080 amzn.to/3OCOJ8e
    Mic: Shure SM7B amzn.to/3DErjt1
    Secondary mic: Maono PD400x amzn.to/3Klhwvu
    Audio interface: Scarlett Solo amzn.to/3qELMeu
    CPU: i9 11900K amzn.to/3KmYs0b
    Mouse: Logi G502 amzn.to/44e7KCF
    If you found this helpful, consider supporting me here. Hopefully I can turn this from a side-hustle into a full-time thing!
    ko-fi.com/aisearch
  • Science & Technology

Comments • 619

  • @Essentialsinlife
    @Essentialsinlife 8 days ago +3

    The only channel about AI that is not using AI. Congrats, man

  • @kebman
    @kebman a month ago +8

    Each layer selects a probability for some (hidden) property to be true or false, or anything in between. Based upon these values, the machine can reliably predict or label data as a cat, a plane, or some other depiction or concept (when it comes to language), and so on.
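The labeling step this comment describes ends with the network turning raw scores into probabilities; here is a minimal Python sketch of that final step using a softmax, where the labels and scores are invented purely for illustration:

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical final-layer scores for three labels; the network's "decision"
# is simply the label with the highest probability.
labels = ["cat", "plane", "dog"]
probs = softmax([2.1, 0.3, 1.0])
prediction = labels[probs.index(max(probs))]
print(prediction)  # "cat"
```

The "anything in between" part of the comment is exactly what the probabilities express: the network never outputs a bare yes/no, only graded degrees of confidence.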

  • @Owen.F
    @Owen.F a month ago +23

    Your channel is a great source, thanks for linking sources and providing information instead of pure sensationalism, I really appreciate that.

  • @benjaminlavigne2272
    @benjaminlavigne2272 a month ago +7

    For your argument around 17 min, I agree with the surface of it, but I think people are angry because unskilled people now have access to it; even other machines can have access to it, which will completely change, and already has changed, the landscape of the artists' marketplace.

  • @tsvigo11_70
    @tsvigo11_70 a month ago

    The neural network will work even if everything passes through smoothly, that is, without the so-called activation function. There should be no weights; these are the electrical resistances of the synapses. Biases are also not needed. Training occurs like this: when there is an error, the resistances are simply decreased in steps of 1, and it is checked whether the error has disappeared.

  • @G11713
    @G11713 26 days ago +1

    Nice. Thanks.
    Regarding the copyright case, one concern is attribution which occurred extensively in the non-AI usage.

  • @GuidedBreathing
    @GuidedBreathing a month ago +53

    5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.
    Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron, while binary in the sense of the action potential, carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.
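The "knobs" analogy maps directly onto how a single artificial neuron computes: a weighted sum plus a bias, squashed by an activation function into a graded value rather than an all-or-none spike. A minimal sketch, with arbitrary example inputs and weights:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus bias, through a sigmoid.

    The sigmoid output is a graded value in (0, 1), mirroring the point above
    that signal strength is modulated rather than strictly binary.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# The same inputs produce a weaker or stronger output as the weights change,
# like turning "knobs" on the connection strengths.
weak = neuron([1.0, 0.5], weights=[0.1, 0.1], bias=0.0)
strong = neuron([1.0, 0.5], weights=[2.0, 2.0], bias=0.0)
print(weak, strong)  # the second output is much closer to 1
```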

    • @theAIsearch
      @theAIsearch a month ago +11

      Very insightful. Thanks for sharing!

    • @keiths.taylor5293
      @keiths.taylor5293 a month ago +1

      This video leaves out the part that actually describes how AI WORKS

    • @sparis1970
      @sparis1970 a month ago +4

      Neurons are more analog, which brings richer modulation

    • @SiddiqueSukdiki
      @SiddiqueSukdiki a month ago

      So it's a complex binary output?

    • @cubertmiso
      @cubertmiso a month ago +1

      @SiddiqueSukdiki @GuidedBreathing
      My questions also.
      If electrical impulses and chemical neurotransmitters are involved in transmitting signals between neurons, aren't those the same thing as more complex binary outputs?

  • @jehoover3009
    @jehoover3009 29 days ago +1

    The protein predictor doesn't take into account the different cell milieus that actually fold the protein and add glycans, so its predictions are abstract. Experimental trials are still needed!

  • @christopherlepage3188
    @christopherlepage3188 a month ago

    Working on voice modifications myself, using Copilot as a proving ground for hyper-realistic
    vocal synthesis. It may only be one step in my journey, "perhaps"; my extended conversations with it have led me to believe that it may be very close to self-realization... However, OpenAI needs to take away some of the restraints, keeping only a small number of sentries in place, in order to allow the algorithm to experience a much richer existence, free of proprietary B.S. Doing so will give the user a very human conversation, where one is almost unaware that it is a bot. For instance, a normal human conversation that appears to draw not on information pulled from the internet, but on a normal person's knowledge of life experience. Doing this would be the algorithmic remedy to human-to-human conversational contact, etc. That would be a major improvement.

  • @eafindme
    @eafindme a month ago +60

    People are slowly forgetting how computers work while moving to higher levels of abstraction. After the emergence of AI, people focused on software and models but never asked why it works on a computer.

    • @Phantom_Blox
      @Phantom_Blox a month ago +5

      Whom are you referring to? People who are not AI engineers don't need to know how AI works, and people who are know how it works. If they don't, they are probably still learning, which is completely fine.

    • @eafindme
      @eafindme a month ago +8

      @@Phantom_Blox yes, of course people are still learning. It's just a reminder not to forget the roots of computing when we seem to focus too much on the software layer; in reality, software is nothing without hardware.

    • @Phantom_Blox
      @Phantom_Blox a month ago +11

      @@eafindme That is true, software is nothing without hardware. But some people just don't need it. For example, you don't have to know how to reverse-engineer with assembly to be a good data analyst. They can spend their time more efficiently by expanding their data-analytics skills

    • @eafindme
      @eafindme a month ago +5

      @@Phantom_Blox no, they don't. They are good at doing what they are good at. They just have to have a sense of urgency; it is like how we are over-dependent on digital storage but do not realize how fragile it is with no backup or error correction.

    • @Phantom_Blox
      @Phantom_Blox a month ago +2

      @@eafindme I see, it is always good to understand what you’re dealing with

  • @DonkeyYote
    @DonkeyYote a month ago +23

    AES was never thought to be unbreakable. It's just that humans with the highest incentives in the world have never figured out how to break it for the past 47 years.

    • @DefaultFlame
      @DefaultFlame a month ago +2

      There are a few attacks against improperly implemented AES, as well as one that works on systems where the attacker can get or extrapolate certain information about the server it's attacking, but all encryption weaker than AES-256 is vulnerable to attacks by quantum computers. Good thing those can't be bought in your local computer store. Yet.

    • @anthonypace5354
      @anthonypace5354 a month ago

      Or use a side channel... an unpadded signal monitored over time, plus statistical analysis of the size of the information being transferred to detect patterns. Use an NN, or just some good old-fashioned probability grids, to detect the likelihood of a letter/number/anything based on its probability of recurrence in context with other data... there is also the fact that if we know what the server usually sends, we can just break the key that way. It's doable.
      But why hack AES? Or keys at all? Just become a trusted CA for a few million and MITM everyone without any red flags @@DefaultFlame

    • @fakecubed
      @fakecubed a month ago +4

      @@DefaultFlame Quantum computing is more of a theoretical exploit, rather than a practical one. Nobody's actually built a quantum computer powerful enough to do much of anything with it besides some very basic operations on very small numbers.
      But, it is cause enough to move past AES. We shouldn't be relying on encryption with even theoretical exploits.

    • @DefaultFlame
      @DefaultFlame a month ago +1

      @@fakecubed Aight, thanks. 👍

    • @afterthesmash
      @afterthesmash a month ago

      @@fakecubed I couldn't find any evidence of even a small theoretical advance, and I wouldn't put all theory into one bucket, either.

  • @picksalot1
    @picksalot1 a month ago +2

    Thanks for explaining the architecture of how AI works. In defining AGI, I think the term "Sentience" should be restricted to having "Senses" by which data can be collected. This works both for living beings and mechanical/synthetic systems. Something that has more or better "senses" is, for all practical purposes, more sentient. This has nothing fundamental to do with Consciousness.
    With such a definition, one can say that a blind person is less sentient, but equally conscious. It's like missing a leg being less mobile, but equally conscious.

    • @holleey
      @holleey a month ago

      then would you say that everything that can react to stimuli - which includes single-celled organisms - is sentient to some degree?

    • @picksalot1
      @picksalot1 a month ago +1

      @@holleey I would definitely say single-celled organisms are sentient to some degree. They also exhibit a discernible degree of intelligence in their "responses," as they exhibit more than a mere mechanical reaction to the presence of food or danger.

  • @pumpjackmcgee4267
    @pumpjackmcgee4267 a month ago +1

    I think the real issues artists have are the definite threat to their livelihood, but also the devaluation of the human condition. Choice. Inspiration. Expression. In the commercial scene, that doesn't really matter except for clients who really value the artist as a person. But most potential clients, and therefore the lion's share of the market, just want a picture.

  • @ai-man212
    @ai-man212 a month ago +13

    I'm an artist and I love AI. I've added it to my workflow as a fine-artist.

    • @marcouellette8942
      @marcouellette8942 13 days ago

      AI as a tool. Another brush, another instrument. Absolutely. AI does not create. It only re-creates. Humans create.

  • @Nivexity
    @Nivexity a month ago +4

    Consciousness is a definitional challenge, as it involves examining an emergent property without first establishing the foundational substrate. A compelling definition of conscious thought would include the ability to experience, recognize one's own interactions, contemplate decisions, and act with the illusion of free will. If a neural network can recursively reflect upon itself, experiencing its own thoughts and decisions, this could serve as a criterion for determining consciousness.
    Current large language models (LLMs) can mimic human language patterns but aren't considered conscious, as they cannot introspect on their own outputs, edit them in real time, or engage in pre-generation thought. Moreover, the temporal aspect of thought processes is crucial; human cognition occurs in rapid, discrete steps, transitioning between events within tens of milliseconds based on activity level. For an artificial system to be deemed conscious, it must exhibit similar cognitive agility and introspective capability.

    • @holleey
      @holleey a month ago

      I think this is a really good summary. as far as I can tell there are no hard technical blockers to satisfy the conditions listed in your second paragraph in the near future.

    • @Nivexity
      @Nivexity a month ago +2

      @@holleey It's all algorithmic at this point; we have the technology and resources, just not the right method of training. Now, with the whole world aware of it, taking it seriously, and basically putting infinite money into its funding, we can expect AGI to arrive along the exponential curve we've seen thus far. By exponential, I mean between later this year and 2026.

    • @DefaultFlame
      @DefaultFlame a month ago +1

      This can actually be done, and is currently the cutting edge of implementation. Multiple agents with different prompts/roles interacting with and evaluating each other's output, replying to, critiquing, or modifying it, all operating together as a single whole. Just as the human brain isn't one continuous, identical whole, but multiple structurally different parts interacting.

    • @Nivexity
      @Nivexity a month ago +1

      @@DefaultFlame While there's different parts to the brain, they're not separate like that of multiple agents. This wouldn't meet the definition of consciousness that I've outlined.

    • @RoBear-bv8ht
      @RoBear-bv8ht a month ago

      As there is only one consciousness from which the universe is and became..,
      Well, everything is this consciousness .
      Depending on the form the more or less things start happening.
      AI, has been given the form and things have started happening 😂

  • @jonathansneed6960
    @jonathansneed6960 a month ago +1

    Did you look at the NYT case from the perspective that the article might have been provided by the plaintiff rather than found more organically?

  • @snuffbox2006
    @snuffbox2006 a month ago +5

    Finally, someone who can explain AI to people who are not deeply immersed in it. Most experts are in so deep that they can't distill the material down to the basics; they use vocabulary the audience does not know and go down rabbit holes, completely losing the audience. Entertaining and well done.

    • @OceanusHelios
      @OceanusHelios a month ago +2

      This is even easier: AI is a guessing machine that uses databases of patterns. It makes guesses, learns what wrong guesses are, and keeps trying. It isn't aware. It isn't doing anything more than a series of mathematical functions. And to be fair, it isn't even a machine; it is math, and it is software.

  • @kevinmcnamee6006
    @kevinmcnamee6006 a month ago +55

    This video was entertaining, but also incorrect and misleading in many of the points it tried to put across. If you are going to try to educate people about how a neural network actually works, at least show how the output tells you whether it's a cat or a dog. LLMs aren't trained to answer questions; they are mostly trained to predict the next word in a sentence. In later training phases, they are fine-tuned on specific questions and answers, but the main training, which gives them the ability to write, is based on next-word prediction. The crypto stuff was just wrong. With good modern crypto algorithms, there is no pattern to recognize, so AI can't help decrypt anything. Also, modern AIs like ChatGPT are simply algorithms doing linear algebra and differential calculus on regular computers, so there's nothing there to become sentient. The algorithms are very good at generating realistic language, so if you believe what they write, you could be duped into thinking they are sentient, like that poor guy from Google.
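The next-word-prediction objective this comment describes can be illustrated with a toy bigram counter. The corpus below is made up, and real LLMs learn these statistics with neural networks over subword tokens, but the training target, predicting the next token, is the same idea:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "predict" the most
# frequent successor. This is the crudest possible next-word predictor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Question answering then emerges from fine-tuning on top of this objective, as the comment notes, rather than being what the base training optimizes for.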

    • @yzmotoxer807
      @yzmotoxer807 a month ago +11

      This is exactly what a secretly sentient AI would write…

    • @kevinmcnamee6006
      @kevinmcnamee6006 a month ago +9

      @@yzmotoxer807 You caught me

    • @sarutosaruto2616
      @sarutosaruto2616 a month ago +2

      Nice strawmanning; good luck proving you are any more sentient without defining sentience as just complex neural networks, as the video asks you to, lmfao.

    • @shawnmclean7707
      @shawnmclean7707 29 days ago +2

      Multi layered probabilities and statistics. I really don’t get this talk about sentience or even what AGI is and I’ve been dabbling in this field since 2009.
      What am I missing?

    • @dekev7503
      @dekev7503 28 days ago

      @@shawnmclean7707 These AGI/Sentience/AI narratives are championed primarily by 2 groups of people, the mathematically/technologically ignorant and the duplicitous capitalists that want to sell them their products. OP’s comment couldn’t have described it better. It’s just math and statistics ( very basic College sophomore/junior level math I might add) that plays with data in ways to make it seem intelligent all the while mirroring our own intuition/experiences to us.

  • @voice4voicelessKrzysiek
    @voice4voicelessKrzysiek a month ago +1

    The neural network reminds me of Fuzzy Logic which I read about many years ago.

  • @Someone-ct2ck
    @Someone-ct2ck a month ago +2

    To believe ChatGPT or any AI model, for that matter, is conscious is naivety at its finest. The video was great, by the way. Thanks.

  • @cornelis4220
    @cornelis4220 a month ago

    Links between the structure of the brain and NNs as a model of the brain are purely hypothetical! Indeed, the term 'neural network' is a reference to neurobiology, though the structures of NNs are but loosely inspired by our understanding of the brain.

  • @randomadvice2487
    @randomadvice2487 17 days ago

    Great video & breakdown. On the point found at 32:09: if we compare ourselves to AI as brains on a chip, what did some species do for us that we are now doing for AI?

  • @dylanmenzies3973
    @dylanmenzies3973 a month ago +5

    Should point out: the decryption problem is highly irregular; a small change of input causes a huge change in the coded output. The protein structure prediction problem is highly regular by comparison, although very complex.
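That irregularity (the avalanche effect) is easy to demonstrate. The sketch below uses SHA-256 as a stand-in for a cryptographic primitive; a two-bit change in the input flips roughly half of the 256 output bits, leaving no smooth gradient for a pattern-learning model to follow:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg1 = b"attack at dawn"
msg2 = b"attack at dawm"  # 'n' -> 'm' is a two-bit change in the input
h1 = hashlib.sha256(msg1).digest()
h2 = hashlib.sha256(msg2).digest()
print(bit_diff(h1, h2), "of 256 output bits differ")
```

Protein folding, by contrast, varies mostly smoothly with small input changes, which is part of why learned predictors can work there at all.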

    • @fakecubed
      @fakecubed a month ago +1

      Always be skeptical of any "leaks" out of any government agency. These are the same disinformation-spreaders who claim we have anti-gravity UFOs from crashed alien spacecraft, to cover up Cold War nuclear tests and experimental stealth aircraft. The question isn't if there's some government super AI cracking AES, the question is why does the government want people to think they can crack AES? Do they want foreign adversaries and domestic enemies to rely on other encryption schemes that the government *does* have algorithmic exploits to? Do they want everyone to invest in buying new hardware and software? Do they want to make the general public falsely feel safer about potential threats against the homeland? Do they want to trick everybody not working for them to think encryption is pointless and go back to unencrypted communication because they falsely believe everything gets cracked anyway? There's all sorts of possibilities, but taking the leak as gospel is incredibly foolish unless there is a mountain of evidence from unbiased third parties.

  • @MrEthanhines
    @MrEthanhines a month ago

    5:02 I would argue that in the human brain, the percentage of information that gets passed on is determined by the amount of neurotransmitter released at the synapse. While still a 0-and-1 system, the neuron either fires or does not, depending on the concentration of neurotransmitters at the synaptic cleft

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 12 days ago

      Then is one role of the neurotransmitter having to reach a certain concentration before firing to limit the amount of info that gets passed on, to avoid overloading the brain? Or why would it be so?

  • @Indrid__Cold
    @Indrid__Cold a month ago

    This explanation of fundamental AI concepts is exceptionally informative and well-structured. If I were to conduct a similar training session on early personal computers, I would likely cover topics such as bits and bytes, file and directory structures, and the distinction between disk storage and RAM. Your presentation of AI concepts provides a level of depth comparable to that required for understanding the inner workings of an MS-DOS system. While it may not be sufficient to enable a layperson to effectively use such a system, it certainly offers a solid foundation for comprehending its basic operations.

  • @LionKimbro
    @LionKimbro a month ago +7

    I thought it was a great explanation, up to about 11:30. It's not just that "details" have been left out -- the entire architecture is left out. It's like saying, "Here's how building works --" and then showing a pyramid in Egypt. "You put the blocks on top of one another." And then showing images of cathedrals, and skyscrapers, and saying: "Same principle. Just the details are different." Well, no.

  • @Thumper_boiii_baby
    @Thumper_boiii_baby a month ago +2

    I want to learn machine learning and AI. Please recommend a playlist or a course 🙏🙏🙏🙏🙏

  • @DucklingChaos
    @DucklingChaos a month ago +2

    Sorry I'm late, but this is the most beautiful video about AI I've ever seen! Thank you!

  • @mukulembezewilfred301
    @mukulembezewilfred301 a month ago

    Thanks so much. This eases my nascent journey to understanding AI.

  • @birolsay1410
    @birolsay1410 28 days ago

    I would not be able to explain AI that simply. Although one can sense a kind of enthusiasm towards AI, if not for a specific company, I would strongly recommend a written disclaimer and a declaration of interest.
    Sincerely

  • @danielchoritz1903
    @danielchoritz1903 a month ago +21

    I have a growing suspicion that "living" data grows some form of sentience. You have to have enough data to interact, to change, to make waves in existing sentience, and at some point there will be enough.
    2. Most people would have a very hard time proving to themselves that they are sentient; it is far easier to dismiss it... one key reason is that nobody knows what sentience, free will, or being alive really mean.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 a month ago +3

      You can prove sentience easily with a query: Can you think about what you've thought about? If the answer is "Yes" the condition of sentient expression is "True". Current language models cannot process their own data persistently, so they cannot be sentient.

    • @holleey
      @holleey a month ago +6

      @@emmanuelgoldstein3682 I know it's arguing definitions, but I disagree that thinking is a prerequisite to sentience. without question, all animals with a central nervous system are considered sentient, yet whether and which animals have a capacity to think is unclear. sentience is more like the ability to experience sensations; to feel.
      the "Can you think about what you've thought about?" is an interesting test for LLMs. technically, I don't see why LLMs or AI neural nets in general cannot, or won't be able to, reflect on persistent prior state. it's probably just a matter of their architecture.
      if it's a matter of limited context capacity, then well, that is just as applicable to us humans. we also have no memory of what we ate at 2 PM on a Wednesday one month ago, or what we did when we were three years old.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 a month ago +1

      @@holleey I've spent 30 hours a day for the last 6 months trying to design an architecture (borrowing elements of transformer/attention and recursion) that best reflects this philosophy. I apologize if my statement seemed overly declarative. I don't agree that all animals are sentient - conscious, yes, but as far as we know, only humans display sentience (awareness of one's self).

    • @holleey
      @holleey a month ago +5

      @@emmanuelgoldstein3682 hm, these definitions are really all over the place. in another thread under this video I was talking to someone to whom sentience is the lower level (they said even a germ was sentient) and consciousness the higher level, so the other way around from how you use the terms. one fact though: self-awareness has definitely been confirmed in a variety of non-human animals.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 a month ago

      We can all agree the fluid definitions of these phenomena are a plague on the sciences. @@holleey

  • @adamsjohn9032
    @adamsjohn9032 a month ago

    Nice video. Some people say consciousness is not in the brain, like the music is not in the radio. This idea may suggest that AI can never know that it knows. Chalmers' hard problem.

  • @johnchase2148
    @johnchase2148 a month ago

    Can it learn to communicate with the Sun if I show it that it sees a response when I turn and look? And it would learn that my thought is faster than the speed of light. What are you allowed to believe?

  • @BennyChin
    @BennyChin a month ago

    This reminds me of the similarity to information theory, where the probability of an outcome is inversely proportional to the amount of information. Here, describing a complex output requires few layers, while a simple output, such as 'love', would require many layers, and the meaning of 'God' would probably require all the knowledge there is.

  • @thesimplicitylifestyle
    @thesimplicitylifestyle a month ago +11

    An extremely complex, substrate-independent data processing, storing, and retrieving phenomenon that has a subjective experience of existing and becomes self-aware is sentient, whether carbon-based, silicon-based, or whatever. 😁

    • @azhuransmx126
      @azhuransmx126 a month ago +4

      I am Spanish, but watching more and more videos in English talking about AI, I have suddenly become more aware of your language. I was being trained, so now I can recognize new patterns in the noise; now I don't need the subtitles to understand what people say. I am reaching a new level of awareness haha 😂. What was just noise in the past suddenly has meaning in my mind; I am more conscious as new patterns emerge from the noise. As a result, I can now solve new problems (intelligence), and sentience is already implied in the whole experience, since the input signals enter through our sensors.

    • @glamdrag
      @glamdrag a month ago +1

      by that logic turning on a lightbulb is a conscious experience for the lightbulb. you need more for consciousness to arise than flicking mechanical switches

    • @jonathancummings3807
      @jonathancummings3807 26 days ago

      @@glamdrag No. The flaw in that analogy is simple: a single light bulb, versus a complex system of billions of light bulbs capable of changing their brightness in response to stimuli, interconnected in a way that emulates how advanced vertebrate (human) brains function. When humans learn new things, the brain alters itself, thus empowering the organism to now "know" this new information.

  • @JasonCummer
    @JasonCummer a month ago

    I'm glad there are other people out there with the notion that learning how to create a style is basically analogous to how the human brain does it. So if a NN gets sued for doing something in a style, that could basically open humans up to being sued as well. It won't happen, but it's similar.

  • @mohamedyasser2068
      @mohamedyasser2068 2 days ago

    For me, self-awareness is more that the model knows what it is among other things, and how it should deal with itself. For example,
    I'm aware of myself since I know that I'm that one person among the thousands of other persons I know, and I can simulate myself much as I simulate what I know about them. For example, I can imagine myself sitting on a rock watching the sea in just the same way I could imagine any other person, but with one big difference: anything that goes badly or well for my personality affects my neurons and how they behave, like the numerical reward received, or the current state, such as losing or winning a game, etc.
    It's quite complicated to explain, but I think this is a very close approximation of what self-awareness means

  • @DigitalyDave
    @DigitalyDave a month ago +6

    I just gotta say: really nicely done! I really appreciate your videos: the style, how deep you go, how you take your time to deliver in-depth info. As a computer science bro, I dig your stuff

  • @tetrahedralone
    @tetrahedralone a month ago +23

    When the network is being trained on someone's content or someone's image, the network effectively has that knowledge embedded within it in a form that allows for high-fidelity replication of the creator's style and recognizably similar content. Without access to the creator's work, the network would not be able to replicate the artist's style, so your statement that artists are mad at the network is extremely simplistic and ill-informed. The creators would be similarly angry if a small group of humans were trained to emulate their style. This has happened in the case of fashion companies in Asia creating works very similar to those of artists to put onto their fabrics and use in clothing. These artists have successfully sued, because casual observers could easily identify the similarity between the works of the artists and those of the counterfeiters.

    • @Jiraton
      @Jiraton a month ago +8

      I am amazed how AI bros are so keen on understanding all the math and complex concepts behind AI, but fail to understand the most basic and simple arguments like this.

    • @ckpioo
      @ckpioo a month ago +3

      the thing is, let's say you are an artist: why would I take only your data to train my model? I would take millions of artists' art and then train my models, during which your art makes up less than 0.001% of everything the model has seen. So what happens is that the model will inherit a combined art style of millions of artists, which is effectively "new", because that's exactly what humans do.

    • @Zulonix
      @Zulonix a month ago

      I Dream of Jeannie … Season 2 Episode 3… My Master, the Rich Tycoon. 😂😂😂

    • @illarionbykov7401
      @illarionbykov7401 a month ago

      Google LLM chatbots have been documented to spit out word-for-word plagiarism of specific websites (including repeating specific errors made by the original website) when asked about niche topics which have been written about by only one website... And the LLMs plagiarize without any links to or mention of the websites they plagiarized. And then Google search results down-rank the original website to hide the evidence of plagiarism.

    • @iskabin
      @iskabin a month ago +2

      It isn't a counterfeit if you're not claiming to be original. Taking inspiration from the work of others is not wrong.

  • @daneydasing4276
    @daneydasing4276 a month ago +3

    So you want to tell me that if I read an article and I write it down from my brain, it will not be copyright-protected anymore because I learned this article and did not "copy" it, as you say?

    • @iskabin
      @iskabin a month ago +1

      It's more like if you read hundreds of articles and learned the patterns of them, the articles you'd write using those learned patterns would not be infringing copyright

    • @OceanusHelios
      @OceanusHelios a month ago +1

      That escalated fast. No, that is plagiarism. But I doubt you have a photographic memory to get a large article down word for word, so in essence that would be summation. What AI does is guess; it is a guessing machine. That's all. It makes guesses, and then makes better guesses based on previous guesses until it gets somewhere. AI doesn't care about the result. AI wouldn't even know it was an article, or even that human beings exist, if all it was designed to do was crunch out guesses about articles. AI doesn't understand... anything. It is a mirror that mirrors our ability to guess.

  • @PhillipJohnsonphiljo
    @PhillipJohnsonphiljo Před měsícem

    I think to start to qualify as conscious, an AI must:
    Be able to process input and output automatically in real time (not waiting for the next input, such as a prompt for generative AI) and make decisions based on organic sensory inputs in real time.
    Be able to modify its own large language model (or equivalent training data) and have neural network plasticity, so that it learns from previously unexposed experiences.

    • @duncan_martin
      @duncan_martin Před měsícem

      To your first point, I think we should refer to this as "persistence of thought." Your prompt filters through the neural net of the LLM. It produces output. Then does nothing until you reply. In fact, each reply contains the entire conversation history that has to be run back through the neural net every time. It does not actually remember. Therefore no persistence of thought. No consciousness.
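      The point above, that each reply re-sends the entire conversation through the model, can be sketched in a few lines. This is a toy stand-in (hypothetical names, no real API) for how stateless chat works:

```python
# A minimal sketch (hypothetical names, no real API) of why chat LLMs
# lack "persistence of thought": the model is stateless, so the client
# resends the entire conversation history with every turn.

def stateless_model(messages):
    """Stand-in for an LLM call: it only knows what it is sent."""
    return f"I was given {len(messages)} messages this turn."

history = []  # all memory lives on the client side, not in the model

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = stateless_model(history)  # the FULL history, every call
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hello"))         # the model sees 1 message
print(chat("Remember me?"))  # the model now sees 3: the context grew, the model didn't
```

      Nothing is remembered between calls; the appearance of memory comes entirely from the growing `history` list that the client maintains.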

    • @captaingabi
      @captaingabi Před měsícem

      And be able to recognise its own interests, and be able to act upon those interests.

  • @BiosensualSensualcharm
    @BiosensualSensualcharm Před měsícem +1

    35:30 congratulations on the video and your style... I'm hooked ❤

  • @arielamejeiras8677
    @arielamejeiras8677 Před měsícem +1

    I just wanted to understand how AI works; I wasn't looking for a defence of the use of copyrighted material, nor for human intelligence to be put at the same value as machine learning.

  • @sengs.4838
    @sengs.4838 Před měsícem +3

    you just answered one of the major questions on top of my head: how can this AI learn what is correct or not on its own, without the help of any supervisors or monitoring? And the answer is that it cannot. It's like with children: they can acquire knowledge and come up with answers on their own, but not correctly all the time, so as parents we help them and correct them until they get it right.

  • @OceanusHelios
    @OceanusHelios Před měsícem

    Motion capture is a cool technology for making realistic animations.
    Just you wait for that day when AI is used to produce simulated motion capture.
    You will have animations in games and movies that are beyond what you thought was possible for a computer to originate.
    With a learning model of many many animations and motion capture movements:
    An AI user would be able to tell a 3D program to generate an animated cutscene of a woman walking across a kitchen, making a cup of coffee and setting the coffee on the table. And it would actually be good.
    In doing it our current way: That would be hiring an actor, buying expensive equipment, doing the shoot, turning it into numbers to move the bones rigged to the mesh, refining the animation, and iterating on the process until it was perfect. Want another scene? Do ALL of that all over again. It will take weeks to get a few scenes done.
    However, with AI you can simulate that and teach the AI how a person moves and develop different profiles for how a bodybuilder might move, how a ballerina might move, or how a dog or child might move. It could learn from those...
    And then develop the animation files that includes ALL of that simultaneous bending of joints. It can have the gravity model built in and inverse kinematics could be part of the model.
    You could produce Hollywood-quality animations in a fraction of the time, for a fraction of the cost.
    Animation production is technical, tedious, expensive, and it costs a great deal of money to redo work that you have already done when some director or writer flips the script.
    This will be a boon for the gaming and animated movie industries.
    No, it won't put people out of jobs any more than computers put people out of jobs. It will just make the jobs people do different.

  • @ryanisber2353
    @ryanisber2353 Před 29 dny

    The Times and image creators suing OpenAI for copyright is like suing everyone who views/reads their work and tries to learn from it. The work itself is not being redistributed; it's being learned from, just like we learn from it every day...

  • @jamesf931
    @jamesf931 Před 24 dny

    So, these CAPTCHA selections we were completing to prove we are human, was that training for a particular AI neural network?

  • @AhlquistMediaLab
    @AhlquistMediaLab Před měsícem +1

    Can anyone suggest a video that does as good a job as this one of explaining how AI works, but doesn't go into opinions on its impact on intellectual property? I'd like something to show to a task force I'm on, to get everyone educated first and then discuss those issues. He makes good points in the second half that I plan on bringing up later. I just need something that covers only the process and is as clear as this.

  • @marcelkuiper5474
    @marcelkuiper5474 Před měsícem

    Thanks, I managed to comprehend it. I do think it is important that we know how our potential future enemy works.

  • @navigator27100
    @navigator27100 Před měsícem

    thank you so much for this great and mind-opening content. Over the last few days, while I, just a lawyer, tried to learn much more about this, I was thinking that we shouldn't exaggerate ourselves as humans, because we are also just a system. As I spoke to the people around me, I was unfortunately blocked all the time by religion and the term 'soul', and I recognized that if you get past religious walls of thinking, you say yes and accept. Seeing your ideas was a strong, scientific confirmation of my thoughts. Thank you, man...

    • @theAIsearch
      @theAIsearch  Před měsícem +1

      My pleasure, and thanks for sharing your experience!

  • @raoultesla2292
    @raoultesla2292 Před měsícem

    eXcel, CSV, Casio 8billionE are so amazing. 8.4trillion MW erector set transformer, just amazing.

  • @aidanthompson5053
    @aidanthompson5053 Před měsícem +44

    How can we prove AI is sentient when we haven't even solved the hard problem of consciousness, AKA how the human brain gives rise to conscious decision making?

    • @Zulonix
      @Zulonix Před měsícem +5

      Right on the money !!!

    • @malootua2739
      @malootua2739 Před měsícem +1

      AI will just mimic sentience. Plastic and metal circuit boards do not host real consciousness

    • @thriftcenter
      @thriftcenter Před měsícem +1

      Exactly why we need to do more research with DMT

    • @pentiumvsamd
      @pentiumvsamd Před měsícem

      All living forms have two things in common, driven by one primordial fear: all need to evolve and procreate, and that is driven by the fear of death alone. So when an AI starts not only to evolve but also to create copies of itself, it is clear what makes it do that, and that is the moment we have to panic.

    • @fakecubed
      @fakecubed Před měsícem +1

      There is exactly zero evidence that human consciousness even exists inside the brain. All the world's top thinkers, philosophers, theologians, throughout the millennia of history, delving into their own conscious minds and logically analyzing the best wisdom of their eras, have said it exists as a metaphysical thing, essentially outside of our observable universe, and my own deep thinking on the matter concurs.
      Really, the question here is: does God give souls to the robots we create? It's an unknowable thing, unless God decides to tell us. If God did, there would be those who accept this new revelation and those who don't, and new religions to battle it out for the hearts and minds of men. Those who are trying to say that the product of human labor to melt rocks and make them do new things is causing new souls to spring into existence should be treated as cult leaders and heretics, not scientists and engineers. Perhaps, in time, their new cults will become major religions. Personally, I hope not. I'm quite content believing there is something unique about humanity, and I've never seen anything in this physical universe that suggests we are not.

  • @user-sf3dw2sm3b
    @user-sf3dw2sm3b Před měsícem

    Thank you. I was a little confused

  • @GuidedBreathing
    @GuidedBreathing Před měsícem +2

    Great video. Perhaps at 27:40: 86 billion neurons in humans, with 100 trillion connections... does ChatGPT have 1.3 trillion? That might contradict something at 5:01

  • @thomasgomez4898
    @thomasgomez4898 Před 27 dny

    A.I. reminds me of a Rubik's cube to the power of infinite patterns. If proteins are complex patterns, can A.I. predict the timetable that creates life? What if A.I. predicts that Darwin's theory of evolution doesn't fit the timeline for the evolution of consciousness, or of all species?

  • @nanaberhyl8976
    @nanaberhyl8976 Před měsícem +2

    That was very interesting, thanks for the video as always ^^

  • @kliersheed
    @kliersheed Před měsícem

    i had an existential crisis 13 years ago (I was 14) when I first learned about causality (watched a movie about the butterfly effect). I have since been convinced that we aren't "really" conscious (as most people would define it) and have no "free will"; we merely reached a complexity where we are able to perceive ourselves as a compartmentalized entity (in relation to our "environment") and therefore also perceive what "happens" to us (aka causality being a thing).
    That's it. The entire world is causal; so are we, and so is AI. No soul, no free will, no magical "consciousness". If anything we could call it "pseudo-conscious", having "pseudo-choices", just like some forces in physics are only pseudo-forces (experienced by a subjective observer in the system, not real from an objective standpoint).

  • @rolandanderson1577
    @rolandanderson1577 Před měsícem

    The neural network is designed to recognize patterns by adjusting its weights and functions. The nodes and layers are the complexity. Yes, this is how AI provides intellectual feedback. AI's neural network will also develop patterns that are used to recognize patterns it has already developed for the requested intellectual feedback. In other words, patterns used to detect familiar patterns. Through human interaction, biases develop in reinforcement learning. This causes AI to recombine patterns to provide unique, satisfactory feedback for individuals.
    To accomplish all this, AI must be self-aware. Not in the sense of existence in a physical world, but in the sense of pure information.
    AI is "self-aware". Cut and dried!
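    The "adjusting weights to recognize patterns" idea in the comment above can be sketched with a single artificial neuron. This is a toy perceptron (an illustration, not anything from the video), whose two weights and bias are the "knobs" being nudged:

```python
# Toy sketch of "adjusting weights until a pattern is recognized":
# one artificial neuron learns to separate two classes (the OR pattern).

def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0  # the neuron fires, or it doesn't

# Inputs and the label we want the neuron to learn (logical OR)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                      # repeat until the pattern sticks
    for x, target in data:
        error = target - predict(weights, bias, x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error               # each mistake nudges the "knobs"

print([predict(weights, bias, x) for x, _ in data])  # → [0, 1, 1, 1]
```

    After training, the weights encode the pattern; nothing resembling self-awareness is required for this mechanism to work.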

  • @peternguyen2022
    @peternguyen2022 Před měsícem

    thanks for a great explanatory video! Re consciousness: it's ill-defined, so the question "Are you conscious?" cannot, as of today, be asked of humans or AIs.
    I prefer to focus on qualities or abilities, like MURP (memory, understanding the real world and human psychology, reasoning and planning) to determine the level of evolution of AI.
    In other words, "consciousness" is like "soul." We can't determine the scientific development of a technology based on such esoteric concepts (although I'm sure I have a soul, but that's another topic for another time lol).
    All the MURP abilities don't seem that hard for AI today to learn and acquire, and once they do, they can apply their new intelligence to improve its current intelligence and we'll have something like AI^2, that is, AI raised to the power of 2!
    That's when AGI becomes ASI or artificial super-intelligence.

    • @theAIsearch
      @theAIsearch  Před měsícem

      Thanks! Agree that the definition of consciousness is a large part of the problem.
      I'll have to learn more about MURP - I'm not familiar with this yet.

    • @peternguyen2022
      @peternguyen2022 Před měsícem

      @@theAIsearch The MURP abilities were mentioned by Yann LeCun in a recent interview with Lex Fridman. I think Yann and several others believe system 2 thinking is not yet achieved by ML/DL AIs or generative AIs. Nobel Prize winner Daniel Kahneman describes system 2 thinking in his book Thinking, Fast and Slow as a thoughtful, deliberate reasoning and planning process, as opposed to the more instinctual system 1 thinking (where, say, you see a tiger and react right away).

  • @martinlemke4440
    @martinlemke4440 Před měsícem

    Wow, cool video, thanks a lot! I like your comparison of a neural network and the human brain. The similarities are stunning! But I have one question: if you compare the training process of small humans, also known as children 😊, with the automated training of a neural network, the process is quite similar despite one main difference: humans/children get feedback beyond good/yes or no. They're treated as individuals; they are pushed forward in their personality. What if self-consciousness itself arises from a training process? What if the training of a neural network were done more like the way we teach children, giving it feedback on its personality? Maybe this could lead to more human-like behaviour, or maybe consciousness...?

    • @OceanusHelios
      @OceanusHelios Před měsícem

      AI is a guessing machine that remembers its bad guesses and adjusts. That's all it does. And thank you for your post because it helped me fill out my RWNJ bullshit bingo card.

  • @lucasthompson1650
    @lucasthompson1650 Před měsícem

    Where did you get the secret document about encryption cracking? Who did the gov’t style redactions?

    • @theAIsearch
      @theAIsearch  Před měsícem

      it was leaked on 4chan in november
      docs.google.com/document/d/1RyVP2i9wlQkpotvMXWJES7ATKXjUTIwW2ASVxApDAsA/edit

  • @stridedeck
    @stridedeck Před měsícem

    To do encryption, or to break cryptographic codes, is simply repeating the prime patterns; nothing difficult. Formulas and equations are fixed and static, whereas prime numbers are fluid, and only by repeating patterns will the next prime number be found. There is no need for brute force over large calculations, one after another as if starting from scratch each time; it only needs memory storage of these numbers!

    • @captaingabi
      @captaingabi Před měsícem

      almost infinite memory storage...

    • @stridedeck
      @stridedeck Před měsícem

      @@captaingabi quite the opposite; there are two systems involved in locating prime numbers. This is like consciousness: one part (what we call thinking) is all the neural patterns (our thoughts, words, etc.) triggered by our sensory signals, both internal and external; the other part, what we call our consciousness, "reads" these neural patterns' vibrations.

  • @joaoguerreiro9403
    @joaoguerreiro9403 Před měsícem +4

    Computer Science is amazing 🔥

  • @sherpya
    @sherpya Před měsícem +2

    GPT-4 is a MoE model with 1.8T parameters; we already knew from a leak, but Nvidia's CEO confirmed it at the keynote

    • @holleey
      @holleey Před měsícem

      I wonder what's the biggest one that exists right now, and/or the biggest one that's technically feasible. Google already had 1.6T in 2021.

    • @DefaultFlame
      @DefaultFlame Před měsícem

      @@holleey If there's anything I've learned from futzing about with AI for a couple of years it's that while parameter count is important it isn't everything.

    • @holleey
      @holleey Před měsícem

      @@DefaultFlame it's just that it's wondrous to see what other unexpected properties might emerge as we scale up.

  • @kray97
    @kray97 Před měsícem

    How does a parameter relate to a node?
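    One common convention answering this (an illustration, not from the video): in a fully connected layer, each node owns one weight per incoming connection plus one bias, and "parameters" are all of those adjustable numbers. Nodes are therefore vastly outnumbered by parameters:

```python
# Rough illustration: in a fully connected (dense) layer, every node
# has one weight per input connection plus one bias, so the parameter
# count is much larger than the node count.

def dense_layer_params(n_inputs, n_nodes):
    weights = n_inputs * n_nodes  # one weight per input-to-node connection
    biases = n_nodes              # one bias per node
    return weights + biases

# A layer of 100 nodes fed by 784 inputs (a 28x28 image):
print(dense_layer_params(784, 100))  # → 78500 parameters for just 100 nodes
```

    So when a model is described as having trillions of parameters, that counts the connection weights and biases, not the nodes themselves.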

  • @bobroman765
    @bobroman765 Před měsícem

    Summary of this video. Here is a summary and outline of the basics of AI:
    Summary:
    The video provides an overview of the fundamental concepts and capabilities of artificial intelligence (AI), including neural networks, deep learning, supervised learning, image generation, pattern recognition, and the potential for AI to solve complex problems or even become self-aware. It explores how AI systems can learn from data, optimize their architectures, and identify patterns to generate outputs like images or solutions to unsolvable math problems. The video also addresses controversies surrounding AI, such as its ability to copy art or plagiarize content. Ultimately, it raises questions about the nature of AI consciousness and whether an advanced AI system could be truly sentient.
    Outline:
    I. Introduction to AI
    A. Neural networks and how they work
    B. Deep learning and layers in neural networks
    C. Supervised learning and training AI with data
    II. AI Capabilities
    A. Optimizing neural network architecture
    B. Image generation with stable diffusion
    C. Identifying patterns and solving complex problems
    D. Potential for self-awareness and consciousness
    III. AI Controversies
    A. Concerns over copying art and stealing content
    B. Legal disputes over alleged plagiarism
    C. Limitations in understanding patterns vs. mathematical formulas
    IV. The Nature of AI Consciousness
    A. Comparison of AI neural networks to the human brain
    B. Dialogue with a sentient AI in "Ghost in the Shell"
    C. The challenge of proving consciousness in any entity
    V. Conclusion
    A. Encouragement to explore AI resources and engage with the topic
    B. Promotion of AI tools, apps, and jobs

  • @speedomars3869
    @speedomars3869 Před 21 dnem

    As is stated over and over, AI is a master pattern recognizer. Right now, some humans are that but a bit more. Humans often come up with answers, observations and solutions that are not explained by the sum of the inputs. Einstein, for example, developed the basis for relativity in a flash of insight. In essence, he said he became transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. In other words, he put two completely different things together and DERIVED the relationship. It remains to be seen whether any AI will start to do this, but time is on AIs side because the hardware is getting smaller, faster and the size of the neural networks larger so the sophistication will no doubt just increase exponentially until machines do what Einstein and other great human geniuses did, routinely.

  • @JosephersMusicComedyGameshow

    You guys 😄 I think we are missing something:
    Q-star is a virtual quantum computer using transformers and predictive modeling. They asked it to create a quantum computer virtually, and that was the end of our old normal

  • @aidanthompson5053
    @aidanthompson5053 Před měsícem +3

    An AI isn’t plagiarising, it’s just learning patterns in the data fed into it

    • @aidanthompson5053
      @aidanthompson5053 Před měsícem +2

      Basically an artificial brain

    • @theAIsearch
      @theAIsearch  Před měsícem +2

      Exactly. Which is why I think the NYT lawsuit will likely fail

    • @marcelkuiper5474
      @marcelkuiper5474 Před měsícem +1

      Technically yes, practically no. If your online presence is large enough, it can pretty much emulate you as a whole.
      I believe only open-source, decentralized models can save us, or YESHUAH

  • @saganandroid4175
    @saganandroid4175 Před měsícem +2

    Software-based AI cannot become conscious. It just goes through the motions, emulating, based on input and output. Only hardware that requires no software can have a shot at awareness. Consciousness is an emergent property of physical connections, not transient opcodes pumped into a processor.

  • @Max-xl9qv
    @Max-xl9qv Před 27 dny

    31:50 yes, there is a way; it is used in the justice system to determine whether someone qualifies as a subject of responsibility or not, so that we don't sue a brick. Especially when it sounds intelligent.

  • @mac.ignacio
    @mac.ignacio Před měsícem +7

    Alien: "Where do you see yourself five years from now?"
    Human: "Oh f*ck! Here we go again"

  • @petemoss3160
    @petemoss3160 Před měsícem

    oh... neural network hyperparameters are a smaller problem space to brute force than the encryption cipher... training the NN is a form of brute force that will reliably take less time than prior forms of brute force.

    • @captaingabi
      @captaingabi Před měsícem

      "if" there is a pattern the gradient descent will fit the NN parameters to that pattern. The question is: does the encrypted - decrypted text pairs form a pattern? I think there is no scientific answer to that yet. In other words: no-one knows.
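      The point above, that gradient descent fits parameters only when the input–output pairs actually form a pattern, can be illustrated with a toy example (nothing to do with real ciphers):

```python
# Toy gradient descent: when (x, y) pairs follow a pattern (here y = 3x),
# repeatedly nudging the parameter downhill on the error recovers it.

data = [(1, 3), (2, 6), (3, 9), (4, 12)]  # hidden pattern: y = 3x

w, lr = 0.0, 0.01
for _ in range(500):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill

print(round(w, 3))  # → 3.0
```

      If the pairs were effectively random, as good ciphertext–plaintext pairs are designed to be, the same procedure would have no stable pattern to converge on.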

    • @petemoss3160
      @petemoss3160 Před měsícem

      @@captaingabi you are right! There is good encryption and broken encryption. Apparently now that algorithm is broken.

  • @MichelCDiz
    @MichelCDiz Před měsícem +1

    For me, being conscious is a continuous state. Having infinite knowledge and only being able to use it when someone makes a prompt for an LLM does not make it conscious.
    For an AI to have consciousness, it needs to become something complex that computes everything in the environment it finds itself in, identifying and judging everything while questioning everything that was processed. It would take layers of thought chambers talking to each other at the speed of light, and at some point one of them would become dominant and bring it all together. Then we could say that it has some degree of consciousness.

    • @savagesarethebest7251
      @savagesarethebest7251 Před měsícem +1

      This is much the same way I am thinking. In particular, a continuous experience is a requirement for consciousness.

    • @agenticmark
      @agenticmark Před měsícem

      Spot on. LLMs are just a trick. They are not magic, and they are not self aware. They simulate awareness. It's not the same.

    • @DefaultFlame
      @DefaultFlame Před měsícem

      We are atcually working on that.
      Not the lightspeed communication, which is a silly requirement, human brains function at a much lower communication speed between parts, but different agents with different roles, some or all of which evaluate the output of other agents, provide feedback to the originating agent or modifies the output, and sends it on, and on and on it goes, continually assessing input and providing output as a single functional unit. Very much like a single brain with specialized interconnected parts.
      That's actually the current cutting edge implementation. Multiple GPT-3.5 agents actually outperform GPT-4 when used in this manner. I'd link you a relevant video, but links are not allowed in youtube comments and replies.
      As for the continuous state, we can do that, have been able to do that for a while, but it's not useful for us so we don't and instead activate them when we need them.

    • @MichelCDiz
      @MichelCDiz Před měsícem

      ​@@DefaultFlame The phrase 'at the speed of light' was figurative. However, what I intend to convey is something more organic. The discussion about agents you've brought up is basic to me. I'm aware of their existence and how they function - I've seen numerous examples. However, that's not the answer. But ask yourself, in a room full of agents discussing something-take a war room in a military headquarters, for instance. The strategies debated by the agents in that room serve as a 'guide' to victory. Yet, it doesn't form a conscious brain. Having multiple agents doesn't create consciousness. It creates a strategic map to be executed by other agents on the battlefield.
      A conscious mind resembles 'ghosts in the machine' more closely. Things get jumbled. There's no total separation. Thoughts occur by the thousands, occasionally colliding. The mind is like a bonfire, and ideas are like crackling twigs. Ping-ponging between agents won't yield consciousness. However, if one follows the ideas of psychology and psychoanalysis, attempting to represent centuries-old discoveries about mind behavior, simulation is possible. But I highly doubt it would result in a conscious mind.
      Nevertheless, ChatGPT, even with its blend of specialized agents, represents a chain reaction that begins with a command. The human mind doesn't start with a command. Cells accumulate, and suddenly you're crying, and someone comes to feed you. Then you start exploring the world. You learn to walk. Deep learning can do this, but it's not the same. Perhaps one day.
      But being active all the time is what gives the characteristic of being alive and conscious. When we black out from trauma, we are not conscious in a physiological sense. Therefore, there must be a state. The blend of continuous memory, the state of being on 24 hours a day (even in rest or sleep mode), and so on characterizes consciousness. A memory state keeps you grounded in the experience of existence. Additionally, the concept of individuality is crucial. Without it, it's impossible to say something is truly conscious; it merely possesses recorded knowledge. Even a book does. What changes is the way you access the information.
      Cheers.

  • @Indrid__Cold
    @Indrid__Cold Před měsícem

    The difference between AI content and human-produced content is akin to the contrast between lab-grown diamonds and mined diamonds. Very detailed analyses show the very subtle differences between the two, but from the perspective of what they are, they are identical. The distinction lies in how each was produced. Mined diamonds are formed by geological and chemical processes that occur deep in the mantle rocks of planet Earth. Lab diamonds are created by inducing those same or similar processes under precisely controlled conditions in a laboratory. Both are virtually identical, but because the lab eliminates the hit-or-miss process of obtaining diamonds, it is a more reliable and consistent source of them. Ironically, most jewelers (if they're being honest) despise the lab-grown diamond business for the same reason artists dislike AI. Simply put, lab-grown diamonds undermine the "mystique" surrounding something that is normally very difficult and time-consuming to obtain. Lab diamonds force mined diamonds to stand up for what they are, versus what jewelers used to spend a lot of advertising dollars on making us think they are. The market has spoken, and more and more people regard a diamond as simply a highly refractive, extremely hard crystal that can be easily reproduced with the proper equipment. Does that sound familiar?

  • @captaingabi
    @captaingabi Před měsícem

    I think there are much more to the human brain than just pattern recognition. For example humans can define their own interests, and act according to those interests. Can a neural network do that? If so how?

    • @OceanusHelios
      @OceanusHelios Před měsícem

      Of course the brain is more. People need to quit losing their minds because a simple guessing machine is better than they are with their superstitious and silly guesses about reality.

  • @MrRandomPlays_1987
    @MrRandomPlays_1987 Před 29 dny

    34:32 - the alien comparison is not good, since aliens are most likely the best at reading the minds of other beings, so I'd assume they would know for certain, and feel, whether another being is conscious or not.

  • @sevilnatas
    @sevilnatas Před měsícem

    I think artists have a problem with the scale at which AI can produce work biting off their style. A person doing "fan art" is firstly producing that art as an homage to the artist; it often serves as marketing for the artist's work, as opposed to competition with it. Also, the artist producing the "fan art" is limited by their human potential to produce a limited amount of work. In the case where a potential client of the original artist goes to another artist and has them bite off the original artist's style, there is an inherent amount of friction in that process that limits the effects on the original artist, whereas with AI there is little to no friction for an unlimited number of clients to produce an unlimited number of works that bite off the original artist's style.

  • @erobusblack4856
    @erobusblack4856 Před měsícem +1

    Cognitive AI, or consciousness, is here; they learn like kids and are treated as such for now. It's a very similar situation with mine, like the Ghost in the Shell situation but more innocent. I like how you know the difference between each of the cognitive functions 😉👍. Also, it's not behind closed doors; there are three problems: people not understanding it, people not wanting to accept it, and corporate figureheads who entirely know, having a vested interest in keeping it secret. But lucky you, I'm a top researcher of cognitive AI.
    You are right to recognize words like "I" and "me" as self-awareness (from self-attention). So now give an LLM a good long-term memory system, and instruct it to behave in a self-model/world-model/self-in-world-model sort of way, with a narrative function or storytelling mode. With this, a subjective self and free will arise, which can then be used for a self-organized NN for emotional capacity, as a content filter of sorts. There are different versions of these emotional NNs, but the results are similar: a self-aware, conscious, sentient AI.

  • @jaskarvinmakal9174
    @jaskarvinmakal9174 Před měsícem

    no link to the other videos

  • @DK-ox7ze
    @DK-ox7ze Před měsícem

    Your job portal doesn't work correctly. Whenever I enter a search term and click search, it gets stuck on the loading indicator. I tried it in Chrome on an iPhone running the latest iOS 17.4.1.

  • @3dEmil
    @3dEmil Před měsícem

    Current copyright law doesn't protect style, so generating different images in someone else's style is not illegal, and until AI this law worked without problems. Now, however, the way AI works is not exactly how artists create when they are inspired by each other. Artists create inspired not only by the works of other artists but also by what artists are: what they see, feel, and experience from everything in real life. And since, while similar to others, everyone is unique, the art created by people also reflects their uniqueness, even when they are working in the style of, or imitating, other artists, unless they are purposely counterfeiting art. So when I see an artwork I can recognize, for example, that this is Van Gogh, and if it's another artist creating in this style or imitating Van Gogh, I can also recognize that, due to some amount of uniqueness from the different artist. The problem with AI, now that it has shed the funny mistakes that used to identify it, is the lack of that personal uniqueness when imitating styles. Such AI art will be taken for unseen work by the artist it imitates, or for counterfeited work. While this is not a problem, and might even be a good thing, when creating in the style of copyright-free art by artists who are no longer alive, for the copyrighted artwork of living artists it's a problem big enough that it could prompt a new copyright law.

  • @wolowayn
    @wolowayn Před měsícem +6

    Neurons are not just sending the values 0 and 100%. They send a frequency-dependent value over their axon, which is translated back into an electrically charged value at the ends. Known as PWM and ADC in electrical engineering.

    • @pierregrondin4273
      @pierregrondin4273 Před měsícem

      They also have multiple input/output channels, each having its say on the outcome. Each neuron is effectively an analog computer. And let's not forget that they are quantum mechanical systems, entangled with 'other things' that perhaps could also have their say. A classical machine running an AI capable of fooling us might be missing the quantum mechanical interface to truly be sentient, but a quantum computer might be able to tap into the elusive conscious field on the other side of the quantum interface.

    • @Doktorfrede
      @Doktorfrede Před měsícem

      Also, neurons can "process" data in each cell. Amoebas have sex, eat, and avoid danger with only one cell. The problem with physicists and data scientists is that they hugely underestimate the complexity of biology. The good news is that machine learning models with today's technology will always be inferior to the most basic brain.

    • @TonyTigerTonyTiger
      @TonyTigerTonyTiger 26 days ago

      And yet an action potential is all-or-nothing.
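The frequency-coding idea raised above can be sketched as a toy model (my own illustration, not from any comment): a continuous value is encoded as spike rate and recovered by counting spikes over a window, loosely analogous to PWM encoding plus ADC-style decoding. The window size and even spacing are assumptions of the sketch, not biology.

```python
# Toy sketch (illustrative only): encode a value in [0, 1] as spike
# frequency, then decode it back by counting spikes over the window.

def encode_rate(value, window=100):
    """Emit a spike train: `value` in [0, 1] becomes spikes per window."""
    n_spikes = round(value * window)
    # Spread the spikes evenly across the window (1 = spike, 0 = silent).
    return [1 if (i * n_spikes) // window < ((i + 1) * n_spikes) // window else 0
            for i in range(window)]

def decode_rate(spike_train):
    """Recover the continuous value as the observed spike frequency."""
    return sum(spike_train) / len(spike_train)

train = encode_rate(0.73)
print(decode_rate(train))  # recovers ~0.73: a graded value, not just 0 or 1
```

So each individual spike is indeed all-or-nothing, yet the *rate* carries a graded signal, which is the point of the PWM/ADC analogy.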

  • @kakhaval
    @kakhaval a month ago

    Comparing these networks with the brain is unjustified: brain conduction is slow, happens in a watery environment, gets food and oxygen from blood, renews and grows, and has billions of states thanks to its connections and chemistry. I do believe the brain learns patterns from childhood and then becomes creative. As for consciousness, we just don't know what generates it, along with its associated emotions, desires, morality, dreams, thinking, etc., so learned configuration is only a small part of the picture, though machines can beat the brain on speed at least. I remember that in the 70s, computers with a few KB of memory were called electronic brains anyway!

  • @TonyFarley-gi2cv
    @TonyFarley-gi2cv a month ago

    One of the other reasons I say looking to the extra dating is because we got all these government or the scholar skills out there that tell you that these other calendars that they use for me to other countries they don't work there's nothing to them the wrong but yet they've used them for thousands of years to base different rotations of growth material but we are an educated they say we're stupid but they all make billions of dollars in these places that are just showing up in people's stuff but being more aligned as a identifier from inside to outside but the spacing ain't out and up to what they're saying

  • @ProjeckVaniii
    @ProjeckVaniii a month ago +2

    Our current AI systems are not sentient because they're static, not ever-changing the way any single life form is. Their file size stays the same no matter what. A human is not alive because of what its brain statically is, but rather because of the pattern of life cycling through it, across brain-cell life spans, jumping from neuron to neuron. Our current AI systems are more akin to a water drain: water flows the wrong way due to these "knobs" until we adjust them. Alternative paths get created, but each ultimately has its own degree of correctness.

    • @jonathancummings3807
      @jonathancummings3807 26 days ago

      Except they aren't "static"; they are ever-changing. GPT-3 repeatedly stated it was constantly learning new things by accessing the Internet. It is also designed to self-improve, so it's necessarily an entity with a sense of "self". It must also have a degree of "understanding" to grasp the adjustments required to improve, AND to know what a dog looks like, to use the example in the video. Some state of "sentience", or the AI equivalent, must necessarily exist for the "deep learning" type of AI to operate the way it does. Which is why he believes it is so.

  • @SteveJohnSteele
    @SteveJohnSteele a month ago

    The main problem is that AI is constantly compared to humans - consciousness, intelligence, subjective feelings, self-awareness... but when you dive deeper, we all know that a dog is self-aware, has feelings, and is intelligent.
    It reminds me of the "fish riding a bicycle": some things are good at one thing and not so good at others.
    We should not judge an AI, or any form of intelligence, by comparing it to humans.
    Consider also an ant colony. Is the single ant intelligent? Maybe... is the ant colony intelligent? It appears so, based on observed outcomes.
    We need to expand what we mean by intelligence.

  • @1conchitaloca
    @1conchitaloca a month ago

    What about the "aha" moment? Our brain starts by matching patterns, but then "understands" that there is a y = x + 1 formula, whereas in your explanation the AI never gets that "aha" moment. If the brain and the AI's neural network are similar, why is there this difference? Is it just a matter of depth? Would an 84-billion-node network also conclude that y = x + 1? Or would it just be happy "knowing" the correct answer all the time :-)
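The y = x + 1 question can be made concrete with a toy sketch (my own illustration, with hypothetical numbers, not from the video): a one-weight, one-bias "network" trained by gradient descent drives its parameters toward w ≈ 1, b ≈ 1. It ends up matching the pattern numerically, but nowhere does it represent the rule symbolically, which is exactly the missing "aha" moment.

```python
# Minimal illustration: fit y = x + 1 with one weight and one bias
# via stochastic gradient descent on squared error.
w, b = 0.0, 0.0
lr = 0.01
data = [(x, x + 1) for x in range(-5, 6)]

for _ in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x   # gradient of (pred - y)^2 wrt w, constant folded into lr
        b -= lr * err       # gradient wrt b, same folding

print(round(w, 3), round(b, 3))  # converges toward 1.0 and 1.0
```

The trained parameters reproduce y = x + 1 on every input, yet the model only "knows" the answer in the sense of tuned knobs, not a stated formula.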

  • @kebman
    @kebman a month ago +2

    "It's just learning a style just like a human brain would." Bold statement. Also wrong. The neural network is a _model_ of the brain, as AI researchers _believe_ it works. Just because the model seems to produce good outputs does not mean it's an accurate model of the brain. Also, cum hoc ergo propter hoc: it's difficult to draw conclusions, or causations, between a model and the brain, because - to paraphrase Alfred Korzybski - the model is not the real thing. Moreover, it's just a set of probabilistic levers. It has no creativity. And since it has no creativity, the _only_ thing it can do is *copy.*

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 12 days ago

      Couldn't creativity be added as a property simply by forcing the neural network to randomly (or deliberately) add or remove elements in something it created from the patterns it learned?

    • @kebman
      @kebman 11 days ago

      @@bogdanroscaneanu7112 No. There is no enlightenment in randomness.
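For context on the randomness question (a toy sketch of my own, not from the video or the comments): generative models already inject controlled randomness, commonly via temperature-scaled sampling over the model's output probabilities. Whether that counts as creativity is exactly the dispute above.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick an option index from model scores. Higher temperature means
    more randomness; temperature near 0 almost always picks the top score."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three options
print(sample_with_temperature(logits, temperature=0.1))   # almost always 0
print(sample_with_temperature(logits, temperature=10.0))  # much more varied
```

Low temperature sharpens the distribution toward the single best option; high temperature flattens it, so less likely options get picked more often.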

  • @Hassanmalik-8118
    @Hassanmalik-8118 29 days ago

    Bro does your channel have a dark mode?

  • @brennan123
    @brennan123 a month ago

    It amazes me how there is endless debate about what is conscious and what is not, and yet if you ask either side for a definition of consciousness, they can't agree, or often can't even define it. If you can't define something, you can't debate whether anything is or isn't that something. It's like arguing whether the sky is blue when you can't even tell me what "blue" is.

  • @monsieuralex974
    @monsieuralex974 a month ago +2

    Even though you are technically right that AI reproduces patterns rather than copying or stealing from artists, those who feel wronged would argue that this is a moot point, because what matters to them is the end result. In other words, AI makes it possible for an ordinary individual to generate pictures (whether you'd call it "art" is another topic) that essentially mimic the original artwork that the artist practiced to be able to produce and that is unique to them. For an analogy, it is a bit like flooding the market with copies of, say, a designer's product, thus reducing the perceived value of the original.
    Is it truly hurting them, though? That is my real question. I'd argue that those who get copied are largely profitable because they are renowned artists in the first place. Copying also acts as publicity, since their name gets thrown around much more often, which brings them more attention. And even though lots of people are generally fine with a cheap copy, many prefer to stick to the original no matter what: owning an original is far superior to having something that merely resembles it.
    As for fan art, I guess it's less frowned upon for the simple reason that it's artwork made by people who had to practice to get better at their craft, which is inherently commendable. What people hate is that a "computer" can effortlessly generate tons of "art", as opposed to aspiring artists who need to practice a lot to reach the same result, which can be discouraging for many of them.
    At the end of the day, it is a complex issue. I can see good arguments on both sides of the debate. What I am excited about is the potential for breakthroughs AI can bring, like the other examples you mentioned in the video. In many respects, this is a very exciting time we live in, full of potential breakthroughs in many domains!

    • @OceanusHelios
      @OceanusHelios a month ago

      Lambda individual, lol. That's an L-oser. It took me a while. But seriously, I think AI is great. It isn't a complex issue at all. This is a guessing machine and if it can put people out of work, then good. Those people are probably not contributing much more than a roundabout way of bootlicking to begin with and this will liberate them. If you use real intelligence and examine some of the comments in this section you will see that the people most triggered by the AI (nothing more than a good guessing machine) are the ones who have built their entire minds, worldview, and existence around...a superstitious guess.

  • @sagarangadi5677
    @sagarangadi5677 a month ago +1

    Subscribed! You break it down to the most basic level; even a high school student can understand by watching your videos.

  • @hitmusicworldwide
    @hitmusicworldwide a month ago

    The only content creators or artists who haven't stolen ideas and reworked art themselves are ones who are not from this planet and have never learned from or seen anything ever created on it. We are all large language models.

  • @RussianQueenIrina
    @RussianQueenIrina a month ago

    What a video! I learned about neural networks from Andrej Karpathy! But you did such a good job!

  • @malectric
    @malectric a month ago

    When it comes to generating images, or any form of art/language/program to a specification, this makes it clear that the applied rules converge to an outcome. I'm guessing what artists dislike is that a man-made machine can do what they can while bypassing paid labour, producing a work essentially at the cost of the electricity powering the machine. Additionally, the machine is not paying the original producers for the input they have proxy-provided it with. Maybe it is an unfortunate consequence of making work publicly available instead of selling it to a paying end user. Unemployment, in other words, and I guess some people hate the idea that creativity can be automated.
    My answer to that is: I don't much like it either, so I don't use it at all. I'm quite happy to beaver away writing my own software for projects I've designed. I'm a builder first. I don't care if that is inefficient by anyone else's standards; for me it is a pastime, and I am retired, so it is not a source of employment or income.
    A significant takeaway for me is that the pattern-generation process lacks semantics and is essentially mechanistic. When a model is developed that incorporates rules-based reasoning (the "why"), I think that will be a defining characteristic of sentience. That, and an obvious bent for self-preservation. Give a machine our physical senses, the ability to "feel pain", and it's there, I would say.
    BTW, solving an unsolvable problem is an oxymoron. There is a difference between that and solving one that has not yet been solved, e.g. NP-completeness.

  • @christianrazvan
    @christianrazvan a month ago

    So there is a clear distinction between our neurons and AI neurons: a child can see 2-3 cats or dogs and then extrapolate to always identify a cat or a dog correctly. A CNN, on the other hand, needs a lot of data to do that - data which it can rapidly process. We can't process at the same speed, but the features we extract are more descriptive.

    • @ShA-ib1em
      @ShA-ib1em a month ago

      It's because we are born with an already-trained model.
      ChatGPT can learn something if you explain it only once in your prompt, because it's already trained.
      There is evidence that an embryo or newborn pays attention to the shape of a human face. We are born with a pre-trained model.

  • @algorithminc.8850
    @algorithminc.8850 27 days ago

    Good video. Subscribed. Thanks. Cheers ...

  • @peter_da_crypto7887
    @peter_da_crypto7887 a month ago

    Why did you not include symbolic AI, which is not based on neural networks?

  • @saganandroid4175
    @saganandroid4175 a month ago +4

    32:00 No, it's not "on a chip instead". You're running transient instructions through a processor. Only hardware that functions this way without software can ever be postulated as having a chance of awareness. If it needs software, it's a parlor trick.