Max Tegmark - The Future of Humanity | Xapiens Symposium

Comments • 11

  • @mindeyi • 4 years ago +3

    Elephant in the room: "...rocket metaphor -- so, we're making AI more powerful, trying to figure out how to steer it. But -- where do we want to go with it? What kind of future do we want to make with our technology?" -- very good question. It's exactly what needs to be answered to make AI safe. Without that answer, safety itself is undefined. Actually, we established the WeFindX Foundation for that purpose back in 2015. :) However, it turned out that even understanding where we want to go requires bridging cultural, cognitive, linguistic, and perceptual barriers, not just among humans but among all life forms that exist, including life forms like nations. It's inherently political, and politics is inherently tied to intelligence communities. We believe the world needs cooperation toward a generalized public intelligence to determine those answers, which evolve as we realize what's possible, yet which could be analyzed theoretically from the perspective of the set of all possible universes -- i.e., what kind of universe would we want to create, if we were able to choose its laws of physics?

  • @qbvet • 5 years ago +4

    Love this guy. Greetings from Sweden.

  • @cynthiaayers7696 • 5 years ago +3

    That picture of our universe caught my eye. Why is it rendered in blue and green?
    It looks like Earth, does it not? I find this very interesting. Mind-blowing, one might say. This is the kind of stuff my husband's been telling me about for some years. And I thought he was half nuts; guess I owe him an apology.

  • @edpell437 • 5 years ago +2

    Gerard K. O'Neill deserves the credit for the space colonies.

  • @diegoangulo370 • 4 months ago

    It just occurred to me when I saw Nick Bostrom. In his paper “The Vulnerable World Hypothesis”, Nick Bostrom says that, assuming humanity keeps developing technology, eventually a new technology would cause the extinction of humanity, etc. It just dawned on me that one technology humanity developed that fits this criterion is currency, like USD or whatever. Using Nick Bostrom's hypothesis, you could probably consider money a yellow ball, seeing as it can be used for both harm and benefit.

  • @edpell437 • 5 years ago +2

    The few will continue to own everything and the many will continue to be disposable labor. AGI will not change that, other than maybe making labor obsolete.

  • @davidjensen2411 • 4 years ago

    This man is now my favourite thinker...
    #ReplicatedIntelligence

  • @dakrontu • 5 years ago +1

    What if AI sees us as limited and irrational, but is at least grateful that we created it? It might decide that its presence on Earth is not in our interests, shut down its instances on Earth, and use its off-Earth instances as a starting point for going off to explore the universe. Or, if it sees that as boring and pointless too, it might just shut itself down. Anyway, coming back to us: if we decide to make conscious AI again, it will, like the previous version, just get bored and leave again. This is on the principle that one does NOT OWN one's children; they are independent beings (once they get through childhood).

  • @dakrontu • 5 years ago +1

    Like a lot of MIT videos, I notice that the audio level is so low that I cannot turn it up high enough to hear clearly. Please put more thought into this.

    • @XapiensatMIT • 5 years ago

      Taken into account for future events, thank you so much for the feedback!

  • @relaxingmusicandsounds6920

    Tree hugger, stop bothering people over their rights. That's bullying. Y'all strange. 👁️👑👑…👀😇⭐️…