Tutorial: Integrated Information Theory of Consciousness

  • Uploaded 17. 08. 2016
  • The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, correlates are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need not only more data, but also a theory of consciousness – one that says what experience is and what type of physical systems can have it. Integrated Information Theory (IIT) does so by starting from conscious experience itself via five phenomenological axioms of existence, composition, information, integration, and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience (a quale), and a calculus to evaluate whether or not a particular system of mechanisms is conscious and of what. Moreover, IIT can explain a range of clinical and laboratory findings, makes a number of testable predictions, and extrapolates to a number of unusual conditions. The theory vindicates some intuitions often associated with panpsychism - that consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems may have some of it. However, unlike panpsychism, IIT implies that not everything is conscious, for example aggregates such as heaps of sand, a group of individuals or feed-forward networks, such as deep learning networks. Also, in sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
  • Science & Technology
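A note on the "integration" claim in the abstract above: IIT's quantity Φ is meant to be high only when a system's parts constrain each other in a way that no partition of the system can reproduce, which is why the abstract singles out feed-forward networks (such as deep-learning networks) as experiencing nothing. Below is a minimal, self-contained Python sketch of that idea for a two-node binary system. It uses a simplified "whole minus parts" predictive-information score under a uniform state distribution, not the actual IIT 3.0 Φ calculus; the `toy_integration` helper and the example update rules are illustrative assumptions, not part of the theory's formalism.

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x, y]."""
    px = joint.sum(axis=1, keepdims=True)   # marginal over X
    py = joint.sum(axis=0, keepdims=True)   # marginal over Y
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px * py)[nz])))

def toy_integration(update):
    """Toy 'integration' score for a two-node deterministic binary system.

    `update` maps a state (a, b) to the next state (a', b').  Assuming a
    uniform (maximum-entropy) distribution over current states, return

        I(whole_now; whole_next) - [I(a_now; a_next) + I(b_now; b_next)]

    i.e. how much predictive information the whole carries beyond its
    parts taken in isolation.  This is NOT the IIT 3.0 Phi, only an
    illustration of why integration needs more than feed-forward wiring.
    """
    states = list(product([0, 1], repeat=2))
    joint_whole = np.zeros((4, 4))   # p(current whole state, next whole state)
    joint_a = np.zeros((2, 2))       # p(current a, next a)
    joint_b = np.zeros((2, 2))       # p(current b, next b)
    for i, s in enumerate(states):
        s_next = update(s)
        joint_whole[i, states.index(s_next)] += 0.25
        joint_a[s[0], s_next[0]] += 0.25
        joint_b[s[1], s_next[1]] += 0.25
    whole = mutual_information(joint_whole)
    parts = mutual_information(joint_a) + mutual_information(joint_b)
    return whole - parts

# Feed-forward: A keeps its value and drives B; cutting the A->B link
# costs nothing, so the toy score is 0 bits.
print(toy_integration(lambda s: (s[0], s[0])))   # 0.0

# Recurrent: A and B swap values; neither node predicts its own future
# alone, only the whole does, so the toy score is 2 bits.
print(toy_integration(lambda s: (s[1], s[0])))   # 2.0
```

In this toy, the feed-forward system loses nothing when cut into its parts (0 bits), while the recurrent one carries 2 bits that neither node carries alone. The real Φ is computed over cause-effect repertoires across all partitions and candidate subsystems; the Tononi lab's open-source PyPhi package implements that full calculus.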

Comments • 19

  • @casiandsouza7031
    @casiandsouza7031 5 years ago +3

    I am conscious so that I can continue to be! Emotions motivate me to act on my consciousness. Communication enables communal motivation. I am not conscious of actions that do not require a motivation trigger.

  • @michaelshannon9169
    @michaelshannon9169 2 years ago +1

    Wonderful, to know that whatever I am is a mere passenger of a physical reality that has no interest in whatever entity I am. The word terrifying doesn't even come close.

  • @buckithed
    @buckithed 4 years ago +7

    What if each neuron contained a small local consciousness that was faster than the neuron itself? The brain would still be one consciousness at the level we sense. Dendrites have logic that operates faster than neurons fire. It's the level that matters. Computers can be conscious at the software level, while not at the transistor level.

    • @goldnutter412
      @goldnutter412 2 years ago

      An outer loop.. and you get randomness? Because they never align quite perfectly, no matter what we choose to do. To us, the player, the avatar world moves like a slideshow. It might quantum leap to the next
      sin and cos seem like.. a probability wave pair, an algorithmic uncertainty principle; as time ticks toward you needing data for a sense, like your eyes, these come into phase or something.. way beyond C++ and JS lol.. of course you would need all the data that ever existed. And choices.. which had probabilities on a bell curve, where rare things do happen.. rarely heh.
      Can think forever about the "mechanics" of reality, or use subset/superset logic to delete materialism and level up to meaning.. data vs information! And back to the first question, WHY! To get feedback from our choices, of course.. we learn the obvious even if we don't want to! Arch enemies can become best friends - and both have a better life afterwards. True happiness and meaning all point to evolving yourself.. the best way to enable others to evolve, leading by example.
      Language is just data like everything else.. we have a STANDARD like English, which is patterns.. entropy vs order. But meaning? Information requires a context. To be as human as a human, and write something that could fumble its way around and have wars with other ones, it becomes so far beyond today's complexities.. but as a thought experiment it works the same as our history in reflection. Life is much better but we still have self-serving behaviour, corruption and crime.. and it all comes from fear and ego. Evolution is slow, but relentless.
      Almost no brain matter, yet a normal IQ.. we can't just brush it under the rug. Einstein's "relativity" is actually.. far more fundamental! Maybe he meant something he couldn't explain? :-) Entropy in information systems, elliptic curves.. decoherence when probing the last layer of "matter", where you just lose some of the result data: "you're doing it wrong, you are way past optics, hint hint" lol

  • @raycosmic9019
    @raycosmic9019 2 years ago +1

    Information is abstract. So is Consciousness. The abstract idea of love can be expressed existentially in a hug.

  • @Brainisnotacomputer
    @Brainisnotacomputer 7 years ago +5

    I don't understand why the perfect micro-scale simulation of an awake human brain would have small Phi 1:44:35. At an abstract level, the cause-effect repertoire is there, and at the same time, at the molecular level, the biological brain also has very sparse connectivity (spatially very limited).

    • @ChazyK
      @ChazyK 6 years ago +1

      I think that if this theory says this, the theory is wrong. But maybe it is just Mr. Koch's interpretation. A computer operates completely differently from the human brain, but a computer can behave the same way as any other Turing machine (including the brain), only not as efficiently. The bits that represent neurons in a computer are connected, but the connection is not permanent. The neurons in the brain update simultaneously, while virtual neurons update one by one as the information goes through the CPU, but that doesn't mean that they are disconnected.

    • @ogulcancingiler568
      @ogulcancingiler568 3 years ago

      www.consciousnessrealist.com/physical-vs-functional-states/ - this could be an answer to your question

  • @21stcenturyoptimist
    @21stcenturyoptimist 7 years ago +3

    So what he is saying is that a system with high phi creates a subjective realm of itself?

  • @LouisWaweru
    @LouisWaweru 1 year ago

    How are the boundaries of a system defined? Does the theory consider the possibility of super-systems, perhaps where subsystems have local autonomy with consciousness and the super-system experiences something like consciousness as well?
    Edit: oops, I should have waited until the end before asking; this is mentioned around 1:41:20.

  • @golemtheory2218
    @golemtheory2218 2 years ago

    Koch and Tononi look like the same person - anyone else?

  • @archiewoosung5062
    @archiewoosung5062 4 years ago

    I can hear a vague murmur

  • @dhirendrasinghgehlot4565

    Who is the Professor in the video??? He seems French!!

  • @golemtheory2218
    @golemtheory2218 2 years ago

    He seems like a nice guy, but naive - imagine hitching your boat to a thieving, plagiaristic scoundrel like Crick.

  • @raphaels2103
    @raphaels2103 2 years ago

    Microsoft? Consciousness?