Neuromorphic computing with emerging memory devices

  • Added 17. 05. 2024
  • This plenary speech was delivered by Prof. Daniele Ielmini (Politecnico Di Milano) during the first edition of the Artificial Intelligence International Conference, held in Barcelona on November 21-23, 2018.

Comments • 53

  • @tjeanneret
    @tjeanneret 4 years ago +32

    I can't believe that only a few people were present for this presentation... Thank you for publishing it.

  • @greencoder1594
    @greencoder1594 3 years ago +55

    *Personal Notes*
    [00:00] Introduction of Speaker
    [01:54] START of Content
    [05:13] CMOS transistor frequency scaled up as node size decreased
    [06:54] The von Neumann architecture spends power on constant communication between CPU and memory
    - In contrast, within the brain memory and computation are co-located
    [09:06] Neuromorphic hardware might utilize "in-memory computing" and emerging semiconductor memory
    [09:30] Non-volatile memory (brain-like long-term memory)
    - resistance switching memory
    - phase change memory
    - magnetic memory
    - ferroelectric memory
    [10:24] RRAM (Resistive Random Access Memory)
    - dielectric between two electrodes
    - the device switches to a high-conductance state once the applied voltage exceeds a certain threshold (due to movement of structural defects within the dielectric)
    - can be used to connect neurons with a dynamic weight (high voltages strengthen the synapse, the opposite voltage weakens it); see the crossbar sketch after these notes
    [12:23] STDP (Spike-Timing Dependent Plasticity)
    - relative delay between post-synaptic neuron and pre-synaptic neuron
    - t = t_post - t_pre
    - long-term potentiation when t>0 (the neuromorphic agent infers causality from correlation)
    - long-term depression when t<0 (see the STDP sketch after these notes)
    We simulated an unsupervised spiking neural network with STDP and it performed quite well. It hasn't been built in hardware yet, though.
    [38:08]
    If you say we could port a bee brain to a chip, why not a human brain?
    ->
    Members of the Human Brain Project told me there is a total lack of understanding of how the brain works.
    The human brain appears to be the most complex machine in the world.
    Improvements in lithography might offer chips with a neuronal complexity similar to the human brain's.
    But it would be a waste of time, because we don't know what kind of operating system or mechanism we would have to adopt to make it work.
    We might instead target very small brains and only a few distinct features of a brain
    - like the sensorimotor system of a bee
    - or the object detection and navigation of an ant
    - ...so very simple brains and functions might be feasible within the next decade
    [40:59]
    How do your examples compare to classical examples with respect to savings in time and energy?
    ->
    All currently developed neuromorphic hardware uses CMOS technology for diodes, transistors, capacitors and so on.
    A classical transistor network with similar capabilities would require far more space on the chip.
    Thus these new memory types are essential if you want to save energy and reduce complexity the way the brain does.
    [43:54]
    How can you adapt to changes in the architecture, like when the count or wiring of neurons is supposed to change?
    ->
    You can design your system in a hybrid way to integrate RRAM flexibly into your classical CMOS hardware.
    [46:09]
    Are you trying to develop a device dedicated for AI only or as a (more general?) peripheral device that can replace current GPU acceleration?
    ->
    We are not competing with GPUs; we are targeting a new type of computation. Replacing a GPU with such a network wouldn't make any sense.
    In-memory logic does not seem very interesting, considering its high cycle times and energy consumption.
    But using RRAM (or a similar technology) to emulate neurons can save you a lot of energy and space on the chip.
    [47:53]
    In-memory computing could have a great impact, because you effectively have a filter telling you what really has to be computed when, for example, a value changes in a neuromorphic database.
    The input is the result _and_ the behavior at the same time; that could be the reason for this big change in energy management
    ->
    Yeah, I totally agree.
    If you compute within the memory, you don't have to move the data from the memory to the processor.
    [49:09]
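
    A minimal sketch of the RRAM ideas above, reconstructed from these notes rather than taken from the talk: weights are stored as conductances in a crossbar, a supra-threshold voltage pulse strengthens or weakens a device, and the matrix-vector product is computed in place via Ohm's law and current summation. All device numbers (threshold, conductance bounds, step size) are illustrative assumptions.

    ```python
    import numpy as np

    G_MIN, G_MAX = 1e-6, 1e-4   # conductance bounds in siemens (assumed)
    V_SET = 1.0                 # switching threshold in volts (assumed)

    def apply_pulse(g, v, dg=5e-6):
        """Crude RRAM update: a pulse above the threshold raises the
        conductance (SET / potentiation), a pulse below the negative
        threshold lowers it (RESET / depression), and sub-threshold
        read voltages leave the state unchanged."""
        if v >= V_SET:
            return min(g + dg, G_MAX)
        if v <= -V_SET:
            return max(g - dg, G_MIN)
        return g

    # In-memory matrix-vector multiply: each output current is the dot
    # product of one row of stored conductances with the input voltages
    # (Ohm's law per cell, Kirchhoff current summation per line).
    rng = np.random.default_rng(0)
    G = rng.uniform(G_MIN, G_MAX, size=(4, 3))  # stored weight matrix
    V = np.array([0.1, 0.2, 0.05])              # read-level input voltages
    I = G @ V                                   # output currents (amperes)
    print(I)

    # Potentiate one synapse with a supra-threshold pulse:
    G[0, 0] = apply_pulse(G[0, 0], 1.2)
    ```
    Because the multiply happens where the weights are stored, no data moves between memory and processor, which is the energy argument made at [47:53].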
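
    And a toy version of the STDP rule from [12:23], assuming the common exponential timing window; the amplitudes and time constant are made-up values, not from the talk.

    ```python
    import numpy as np

    A_PLUS, A_MINUS = 0.05, 0.055  # LTP / LTD amplitudes (assumed)
    TAU = 20.0                     # decay time constant in ms (assumed)

    def stdp_delta_w(t_pre, t_post):
        """Weight change for one spike pair, with t = t_post - t_pre:
        t > 0 (pre fires before post) gives long-term potentiation,
        t < 0 gives long-term depression."""
        t = t_post - t_pre
        if t > 0:
            return A_PLUS * np.exp(-t / TAU)    # LTP: causal pairing
        if t < 0:
            return -A_MINUS * np.exp(t / TAU)   # LTD: anti-causal pairing
        return 0.0

    # Pre spike at 10 ms, post at 15 ms: causal pairing, weight grows.
    print(stdp_delta_w(10.0, 15.0))   # ~ +0.039
    # Reversed order: anti-causal pairing, weight shrinks.
    print(stdp_delta_w(15.0, 10.0))   # ~ -0.043
    ```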

    • @paquitagallego6171
      @paquitagallego6171 3 years ago +1

      Thanks...

    • @everettpiper5564
      @everettpiper5564 3 years ago +1

      I really appreciate these notes. And your final remark is spot on. Very intriguing. You might be interested in a channel called Dynamic Field Theory. Anyhoo, appreciate the insights.

    • @ashwanisirohi
      @ashwanisirohi 2 years ago +1

      What more can I say for what you did... Thanks

    • @celestialmedia2280
      @celestialmedia2280 2 years ago +1

      Thanks for your awesome effort 👍

    • @leosmi1
      @leosmi1 2 years ago +1

      Thank you

  • @JoshuaSalazarMejia
    @JoshuaSalazarMejia 3 years ago +9

    You saved the day by recording and uploading the presentation. Amazing topic. Thanks!

  • @totality10001
    @totality10001 4 years ago +4

    Brilliant lecture. Thank you!

  • @HavenInTheWood
    @HavenInTheWood 2 months ago

    This is great, I'll be watching again!

  • @feuxdartificeppp
    @feuxdartificeppp 4 years ago +3

    Great video! Thank you!

  • @Atrocyte
    @Atrocyte 3 years ago +4

    Thank you for this fascinating lecture and sharing it!

  • @holdenmcgroin8917
    @holdenmcgroin8917 5 years ago +4

    Thanks for sharing, very informative presentation

  • @ashwanisirohi
    @ashwanisirohi 2 years ago +2

    Thanks for making and uploading the video in such a nice manner. Very comfortable to follow the contents of the talk.

  • @ashwanisirohi
    @ashwanisirohi 2 years ago

    The talk was good, but the questions were better. I like the Prof.'s honesty and smooth answers.

  • @viswanathgowd4060
    @viswanathgowd4060 2 years ago

    Thanks for sharing this.

  • @pradhumnkanase8381
    @pradhumnkanase8381 3 years ago

    Thank you!

  • @Artpsychee
    @Artpsychee a year ago

    thank you for sharing your insights

  • @entyropy3262
    @entyropy3262 2 years ago

    Thanks, really interesting.

  • @GWAIHIRKV
    @GWAIHIRKV 3 years ago +3

    So are we saying this is another form of memristor?

  • @silberlinie
    @silberlinie 2 years ago

    An absolutely brilliant thing.
    Although this report here is from 2018.
    Has the project made any progress since then?
    What is there to report in the meantime?
    Is Politecnico Di Milano still working on it?

  • @teamsalvation
    @teamsalvation 3 years ago +5

    Although this is well over my head, I am excited by what is being said, or at least what I think is being said and shown.
    The brain is both a memory and a processor. What they've been able to accomplish is to recreate "the brain" (for talking purposes, I know it's not literal).
    Again, keeping this simple for me: if I were using TensorFlow and running the session on a GPU, would I instead run this session on "the brain" created by Prof. Ielmini? Is the initial input data set still gathered in the traditional sense, or would we be moving data directly into "The Brain" from the data capture HW (e.g. a video camera data stream) and then kicking off the session with some HW interrupt once a pre-defined amount of raw data has been transferred?
    This is all really cool stuff!!
    Can't wait to replace my GPUs with NPUs (Neuromorphic Processing Units) :-) with PCI-E 6 x16 (64 GT/s)

    • @jacobscrackers98
      @jacobscrackers98 3 years ago +1

      I would try to email him if I were you. I doubt anyone is looking at YouTube comments.

  • @SaiBekit
    @SaiBekit 3 years ago +1

    Does anyone understand the difference between this and Neurogrid's architecture?

  • @styx1272
    @styx1272 4 years ago +3

    Too complicated for me; glad others found it enlightening.

  • @matthewlove2346
    @matthewlove2346 3 years ago +1

    Is there a paper that goes into more depth that I could read? And if so where can I access it?

    • @cedricvillani8502
      @cedricvillani8502 3 years ago

      IEEE has everything you could ever want, and it's kept updated; become a member

    • @cedricvillani8502
      @cedricvillani8502 3 years ago

      New memory device that just came out! The Nvidia NGX Monkey Brain, comes pretrained with a few muscle memory actions such as, throwing poop at a fan, and getting sexually aroused at the sight of a banana.

  • @moizahmed8053
    @moizahmed8053 4 years ago +4

    I want to try these "toy examples" myself... Is there a way to get my hands on RRAM modules/ICs?

    • @davidtro1186
      @davidtro1186 3 years ago +2

      knowm.org/ offers similar memristor technology, made in the USA

  • @ONDANOTA
    @ONDANOTA 5 years ago +2

    Is this faster than quantum computers? Does it scale exponentially or better?

    • @ONDANOTA
      @ONDANOTA 5 years ago +2

      Auto-answer after googling: yes, it is faster than QCs

    • @mrpr93cool
      @mrpr93cool 4 years ago +2

      @@ONDANOTA faster in what?

    • @anywallsocket
      @anywallsocket 4 years ago +2

      You have to realize what you're asking here. QC is just computing at the nano level, as opposed to micro level, and taking advantage of entanglement / tunneling rather than attempting to avoid it. In principle, one is not "faster" than the other as both operations can unfold at the rate of electromagnetic wave impulses (the fastest you can get). It's just a matter of what physical medium is catalyzing this computational operation. In-Memory computing is a technique for organizing that medium, so as to eliminate the latency between data storage and data manipulation. It's a different ball-game altogether, and in principle, both QC and CC can be organized via this In-Memory technique.

    • @ShakmeteMalik
      @ShakmeteMalik 3 years ago +1

      @@anywallsocket Correct me if I am mistaken, but is it not the case that QC aims to eliminate network latency altogether by utilising Spooky Action at a Distance?

    • @anywallsocket
      @anywallsocket 3 years ago +2

      @@ShakmeteMalik Depends what you mean by "network latency". For the most part QC is employed for processing information, not storing it - since quantized info is usually too delicate to store. The whole point of In Memory computing is combining the processing and storing, which therefore works much better for classical computing.

  • @jaimepatino1645
    @jaimepatino1645 a year ago

    And that... [18] Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.

  • @davids3116
    @davids3116 2 years ago

    Need to create an ego operating system for AI to improve its capabilities

  • @onetruekeeper
    @onetruekeeper 3 years ago +1

    This could be simulated using holographic circuits.

    • @brian5735
      @brian5735 a day ago

      Yeah, I thought of that. Photons would be much more efficient in a quantum computer. Less noise and decoherence

    • @brian5735
      @brian5735 a day ago

      Just etch the gates

  • @demej00
    @demej00 2 years ago

    Tough to pour your soul into research for only 10 people.

  • @Nathouuuutheone
    @Nathouuuutheone 2 years ago

    20:57

  • @Ositos_dad
    @Ositos_dad 3 months ago

    I don't understand a thing.