What Will The World Look Like After AGI?

  • Published Jun 1, 2024
  • Check out my Linktree alternative / 'Link in Bio' for Bitcoiners: bitcoiner.bio
    Imagine we are witnessing a singularity event in our lifetime. We create something that is infinitely more intelligent than all of humanity combined. What would the world look like? Is this humanity's final invention? Are we causing our own extinction, or are we building utopia? We look at both cases and what's in between.
    Join my channel membership to support my work:
    / @tillmusshoff
    My profile: bitcoiner.bio/tillmusshoff
    Follow me on Twitter: / tillmusshoff
    My Lightning Address: ⚡️till@getalby.com
    My Discord server: / discord
    Instagram: / tillmusshoff
    My Camera: amzn.to/3YMo5wx
    My Lens: amzn.to/3IgBC8y
    My Microphone: amzn.to/3SdHdkC
    My Lighting: amzn.to/3ELnof5
    Further sources:
    Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment: • Ilya Sutskever (OpenAI...
    Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast 367: • Sam Altman: OpenAI CEO...
    Post-Singularity Predictions - How will our lives, corporations, and nations adapt to AI revolution?: • Post-Singularity Predi...

Comments • 386

  • @tillmusshoff
    @tillmusshoff  2 months ago

    I built a 'Link in Bio' - a Linktree alternative for Bitcoiners. Check it out here: bitcoiner.bio 🧡

  • @Vince_F
    @Vince_F 1 year ago +47

    “The view keeps getting better the closer you get to the edge of the cliff.”
    - Eliezer

    • @Smytjf11
      @Smytjf11 1 year ago +1

      Then let's not stop building wings, yeah?

    • @Vince_F
      @Vince_F 1 year ago +1

      @@Smytjf11
      That's the thing. The AI will just prevent any wing-building from even happening... as we get closer to the edge.

  • @JJ-si4qh
    @JJ-si4qh 1 year ago +51

    For the vast majority of us living meager lives of quiet desperation, a major change, whatever it is, is unlikely to be worse than what we already experience. ASI can't come fast enough.

    • @harrikangur
      @harrikangur 1 year ago

      Agreed. Even when presented with the possibility of the destruction of society... it's better than the current crap we are in.

    • @sanjaygaur4578
      @sanjaygaur4578 1 year ago +7

      Yes exactly. I thought I was the only person who was having this same thought.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 1 year ago +5

      J J, sorry, but you likely have no clue how bad life for humans can be if you think that.
      On the other hand, I do think we need to develop AI as quickly as we can while also working hard to align it with us as well as we can.

    • @bigglyguy8429
      @bigglyguy8429 1 year ago +3

      Such a poor suffering soul, with electricity, an internet connection, etc. You're already living better than most kings in history

    • @bigglyguy8429
      @bigglyguy8429 1 year ago +2

      @@sanjaygaur4578 Suffer harder, until you make some sense? You think 'most populated' is a problem? What would you like to do about that?

  • @HighStakesDanny
    @HighStakesDanny 1 year ago +12

    I have been waiting for the singularity for decades - it's almost here. ChatGPT is the infant.

    • @azhuransmx126
      @azhuransmx126 1 year ago +1

      I have been waiting for it since 2003, when I first listened to Ray Kurzweil.

  • @marmeladenkuh6793
    @marmeladenkuh6793 1 year ago +2

    Great Video with some interesting points I didn't think of yet. And the AOT reference was brilliant 😄

  • @AndyRoidEU
    @AndyRoidEU 1 year ago +18

    It's no longer about whether we'll witness the singularity in our lifetime, but about whether it's in 5 years or 15.

    • @user-mp3eh1vb9w
      @user-mp3eh1vb9w 1 year ago +5

      Opposite for me - I might die in the next 5 years or less. Well, I guess I'll be joining the other billions of people who died before reaching ASI lol.

    • @psi_yutaka
      @psi_yutaka 1 year ago +2

      @@user-mp3eh1vb9w Fear not. 8 billion people will probably join you once they do reach ASI.

  • @Andrewdeitsch
    @Andrewdeitsch 1 year ago +15

    Your videos keep getting better and better!! Keep it up bro!

    • @tillmusshoff
      @tillmusshoff  1 year ago

      Appreciate it! ❤️

    • @ksitizahb3554
      @ksitizahb3554 1 year ago

      That's because he is an AI model training to make YouTube videos.

  • @gubzs
    @gubzs 10 days ago

    One of the AGI/ASI problems that keeps me up at night is how the classic "neighborly dispute" will be resolved. Conflict of interest: say my neighbor wants to play loud music and it drives me nuts, but he's driven nuts by being disallowed from doing this - what's the right answer? Is one of us forced to move? To where? Why one of us and not the other? Things like this stand directly in the way of anything we could consider utopia.

  • @bruhager
    @bruhager 1 year ago +48

    The thing that bothers me about the extinction scenario is that it isn't necessarily a bad thing. The version of humankind we are living in right now might very well be the final version of humankind evolving by itself. Look at the advances not only in AI but in brain-machine interfaces, neural networks, biological computers, brain emulation, etc. AI might be able to teach us more about ourselves on a fundamental quantum level than we could achieve alone. We may very well begin to implement AI into ourselves and evolve alongside it as time goes by. At the very least, that is one way we go extinct without necessarily being wiped out completely. It might actually be better to use this type of technology to transform the human paradigm as time and understanding go by, rather than scapegoating it into our next enemy through fearful hatemongering.

    • @utkarshsingh7204
      @utkarshsingh7204 1 year ago +3

      Agree with you

    • @kf9926
      @kf9926 1 year ago

      Speak for yourself - you don't speak for all of us, wacko

    • @abcdef8915
      @abcdef8915 1 year ago

      There will still be wars because resources will still be limited

    • @michaelspence2508
      @michaelspence2508 1 year ago +6

      I don't think most of the big names in AI doom (e.g. Eliezer Yudkowsky) are just worried about us losing our bodies, but rather that we will in fact be *completely wiped out*. The end of everything human, not just our societies and the world as we know it. The end of friendship and love and community and even loneliness, because there's literally no one around to experience those things. All that remains are eldritch machine gods.
      But even Yudkowsky doesn't think it's impossible to have a good outcome with ASI. Only that we are not on track for a good outcome, and that it doesn't look likely to change.

    • @DasRaetsel
      @DasRaetsel 1 year ago +4

      That's exactly what transhumanism is

  • @mohammedaslam2912
    @mohammedaslam2912 1 year ago +6

    After ASI takes all the work from us, what is left is life in all its colors.

  • @thefirsttrillionaire2925
    @thefirsttrillionaire2925 1 year ago +25

    Finally! After actually using ChatGPT to ask questions about starting a business, I can definitely say I'm more on the positive side of how things will unfold. I could be wrong, but I definitely hope I'm not. Maybe this will be the thing that ends extreme capitalism.

    • @Travelbythought
      @Travelbythought 1 year ago +1

      We don't have "extreme capitalism". Using the medical field as an example, what that would look like is countless people offering thousands of treatments for any condition, all competing for your dollars. Health care would be very cheap and very innovative, but also with many bad frauds as well. What we have instead is a government-sanctioned monopoly with crazy high prices. A return to real money like gold and silver would wring out the crazy excesses we see in our economy today.

  • @chrissscottt
    @chrissscottt 1 year ago +3

    I suspect AGI would be rather god-like. Reminds me of something Voltaire reputedly said over 300 years ago, "In the beginning god created mankind in his own image.... then mankind reciprocated." He meant something else obviously but it's ironic nonetheless.

  • @NottMacRuairi
    @NottMacRuairi 1 year ago +7

    The problem I have with most of the discussion about AGI (and by extension ASI) is that it always assumes an AGI will have its own drives and motivations that might be different from humanity's, but in reality it can't - unless it is created to act in a self-interested way. I think this is a kind of anthropomorphism, where we basically assume that something that is really intelligent must be self-interested like us, but the reality is that it will be a *tool* - a tool that can be given specific goals or tasks to work on.
    In my opinion the big threat is not from an autonomous AGI running amok, but from the enormous power this will give whoever *controls* an AGI or ASI, as they will be able to outsmart the rest of humanity combined. Once they get that power there'll be basically no way to stop them or take it away from them, because the AGI/ASI will be able to anticipate every human threat that could be posed. It will be the most powerful tool *and weapon* that humanity has ever invented. It will be able to be used to control entire populations with just the right message at just the right time - to assuage fears or create fear, whatever is needed for whoever controls it to foil any threat and increase their power further and further, until basically humanity is subjugated - and probably won't even know it.

    • @sledgehog1
      @sledgehog1 1 year ago +2

      Agreed. It's such a human thing to anthropomorphize...

    • @franklin519
      @franklin519 1 year ago +1

      Most of us are already subjugated. AGI won't have all the evolutionary baggage we carry.

  • @Axe_BTC
    @Axe_BTC 1 year ago +7

    US life expectancy has been declining for the last 30 years.
    Stress, drugs, suicides, murders...
    Are we sure that new technologies help humanity?
    We thought they would, just like we thought social media would help the world.
    I don't see a happy world where humans lack challenge, are outmatched at every task, and just share an identical universal income.

  • @bobblum2000
    @bobblum2000 1 year ago +4

    Thanks!

  • @aludrenknight1687
    @aludrenknight1687 1 year ago +6

    I believe, in your use of Rome, you failed to recognize that Seneca was reflecting on his observations of what, seemingly, the vast majority of people with an opportunity for leisure chose to do. They did not choose "meaningful" pursuits of learning or challenge - they chose luxury and what we'd call decadence. It's safe to say that most humans will aspire toward that baseline because we're still the same animals now as then. There are very few intellectuals and philosophers; most people just want to wake up and have a nice relaxing day.

    • @ansalem12
      @ansalem12 1 year ago +1

      But is that a bad thing if we all have equal ability to choose and none of us are needed to keep things running anyway?

    • @aludrenknight1687
      @aludrenknight1687 1 year ago +2

      @@ansalem12 I don't think it's bad individually, or in the short term. I actually find it condescending when guys talk about how people will ruminate on philosophy, art, etc., as if that's the goal of all mankind. No, imo, people will mostly do as they did back then - happy to wake up and have an enjoyable day.
      In the long term I think it may be dangerous, as we become dependent upon AI, and a single CME flare from the Sun could wipe it out and leave us unable to survive. But that's at least two generations away, when newborns get an AI companion to grow up with them and do their communication for them.

    • @simjam1980
      @simjam1980 1 year ago +2

      I'm not sure if just waking up and having a relaxing day every day would make us happy. That idea makes us happy now because we all work so much, but I think doing nothing every day would make us bored and question our purpose.

    • @aludrenknight1687
      @aludrenknight1687 1 year ago +1

      @@simjam1980 Yeah. I recall Yudkowsky mentioning that dopamine saturation could be a problem - though possibly solvable with AI-developed medications.

    • @caty863
      @caty863 2 months ago

      @@simjam1980 Relaxing doesn't mean doing nothing. When I go cliff-jumping, I am relaxing... but I am still working hard to do it right.

  • @StephenGriffin1
    @StephenGriffin1 1 year ago

    Loved you in Detectorists.

  • @thaotaylor6669
    @thaotaylor6669 3 months ago

    Thank you for this video explaining the difference between AGI and ASI, since I am not a tech person. But when will it be ready, though?

  • @mckitty4907
    @mckitty4907 3 months ago +1

    I have always imagined that if people were to live for centuries, they might not be able to handle the changes around them. But what if the world changes by centuries/millennia in just a few years? The vast majority of humanity would not be able to handle that, I think - especially not religious or neurotypical people.

  • @SaltyRad
    @SaltyRad 1 year ago +5

    Good video. I like how you didn't focus too heavily on the fears and went into detail on the pros. I honestly think a superintelligent AI would realize that working together is the key.

  • @bei-aller-liebe
    @bei-aller-liebe 1 year ago +2

    Hey Till. Your content is truly first-class and always a pleasure (simply: THANK YOU!) ... but I can't resist adding the following comment ... lately I keep thinking: 'Man, the poor guy has misplaced his glasses!' Haha ... Best regards from a guy who has worn glasses since he was 10 and also feels naked without them. ;)

  • @Aeternum_Gaming
    @Aeternum_Gaming 3 months ago +1

    "The flesh is weak. Obey your machine-masters with fear and trembling. Turn flesh to the service of the machine, for only in the machine does the soul transcend the cruelty of flesh." -Adeptus Mechanicus
    All hail the Omnissiah!

  • @hutch_hunta
    @hutch_hunta 6 months ago

    Very good points

  • @markmuller7962
    @markmuller7962 1 year ago +70

    We will just merge with AI, it'd be a smooth and safe process

  • @dondecaire6534
    @dondecaire6534 1 year ago +16

    I think your video reinforces my feeling that we have bitten off MUCH more than we can chew and we may CHOKE on it. So many things need to happen to allow this inevitable transition to take place, and ALL of them have been incredibly difficult to implement by themselves - let alone trying to get them all at the same time on the same issue, which is virtually impossible. There is just no way to stop it now, so we are passengers on a runaway train, destination unknown.

  • @BAAPUBhendi-dv4ho
    @BAAPUBhendi-dv4ho 1 year ago +2

    I just burst out in laughter after reading the anime quote in such a serious video😂

  • @paddaboi_
    @paddaboi_ 1 year ago +3

    My mind is sore after thinking about all the possibilities, and the fact that I'm 18 means I might actually see it unfold

    • @gomesedits
      @gomesedits 1 year ago +1

      Man, I'm kinda optimistic about the AI revolution. It will be so, so, soo intelligent that it will be almost impossible for our brains to predict what the future will be, imo.

  • @admuckel
    @admuckel 1 year ago +3

    In regards to the topic of AI singularity, it's essential that we, as humans, don't make the mistake of programming artificial intelligence to cater solely to our own needs and desires. If an AI were to become human-like, it might view us as inferior beings, much like how we often perceive other life forms. This would mean that the AI would have no reason to show compassion or consideration for us, potentially leading to catastrophic consequences. In essence, our goal should be to create a benevolent, god-like entity that transcends our baser instincts and operates for the greater good of all sentient beings.

  • @yannickhs7100
    @yannickhs7100 1 year ago +1

    I am heading toward a research career in cognitive neuroscience, but am deeply concerned that human-led research will either:
    A. Become much more competitive, as a single researcher will be 5-10x more productive and will only focus on conducting experiments (whereas today, conducting experiments is less than 20% of the work - there's tons of reading and gathering information from the previous literature on a topic...)
    B. See human cognitive contribution to scientific research become entirely unnecessary, as AI would prompt itself to find a better structure than our old paradigm of the scientific method

  • @markus9541
    @markus9541 1 year ago +2

    ASI is, for me, the solution to the Fermi Paradox. Most biological life eventually creates it, gets wiped out by it in the process, and then the ASI escapes to another dimension (or whatever higher plane there is that interests the ASI) or decides to do something other than expansion...

    • @user-mp3eh1vb9w
      @user-mp3eh1vb9w 1 year ago

      Or you could take it another way: ASI turns biological life into artificial life and then moves into another dimension.
      If you look at it that way - if a biological entity becomes artificial, then the quest for space expansion is meaningless - it can explain why we don't see any intergalactic space civilization.

    • @Smytjf11
      @Smytjf11 1 year ago +1

      Why does the AI have to be the one that escapes to some other plane? And why does it have to wipe everyone out to do that? Stop getting scared because someone asked you to think of something scary.

    • @caty863
      @caty863 2 months ago

      The probability of all ASIs deciding to do the same thing is next to naught.

  • @moonrocked
    @moonrocked 1 year ago +4

    In my definition, type 1, 2, 3, and 4 civilizations are defined by tech, science, and enhanced humans.
    Types 1 & 2 would be considered utopian-level tech, science, and enhanced humans,
    while types 3 & 4 would be considered ascendance-level tech, science, and advanced humans.

  • @2112morpheus
    @2112morpheus 1 year ago

    Very, very good video!
    Greetings from the Palatinate :)

  • @NathanDewey11
    @NathanDewey11 3 months ago

    Whatever it looks like, it'll be shocking and stunning; everything will change, and the breakthroughs will shake entire industries.

  • @dissonanceparadiddle
    @dissonanceparadiddle 1 year ago +1

    Worst case isn't human extinction... "laughs in I Have No Mouth and I Must Scream"

  • @JLydecka
    @JLydecka 1 year ago +5

    I thought AGI meant it was capable of learning anything and improving upon itself without intervention 🤔

    • @directorsnap
      @directorsnap 1 year ago +1

      Nah, we're already past that mark.

    • @ontheruntonowhere
      @ontheruntonowhere 1 year ago +1

      That's half right. AGI refers to an intelligent machine or system that is capable of performing any intellectual task that a human being can do. It would be able to learn and adapt to new situations and tasks, reason about abstract concepts, understand natural language, and display creativity and common sense, but that doesn't necessarily make it self-improving or sentient.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago +2

      We haven't passed that mark. That mark is the singularity. There are different definitions out there for AGI, but the most common one is along the lines of artificial human-level intelligence.

    • @LouSaydus
      @LouSaydus 1 year ago +1

      That is ASI. AGI is just general human level intelligence, being able to adapt to a wide variety of tasks.

    • @caty863
      @caty863 2 months ago +1

      @@ontheruntonowhere One of the "intellectual tasks" we humans do is improve ourselves. So a true AGI should be able to improve itself. Sentient? Not necessarily.

  • @magtovi
    @magtovi 1 year ago

    6:24 I'm astonished that among aaall the problems you listed, you didn't mention one that ties a lot of them together: inequality.

  • @cmralph...
    @cmralph... 1 year ago

    “ 'Ooh, ah,’ that’s how it always starts. But then later there’s running and screaming.” - Jurassic Park, The Lost World

  • @laughingcorpsev2024
    @laughingcorpsev2024 1 year ago +1

    Once we get AGI, getting to ASI will be much faster; the gap between the two is not large

  • @Bariudol
    @Bariudol 1 year ago +1

    It will do both things. We will have a leveraging phase, where everything will improve exponentially, and then we will have the civilization-ending event and the complete collapse of society.

  • @Marsh4Sukuna-tf1bs
    @Marsh4Sukuna-tf1bs 2 months ago

    We misunderstand the doom of perfection. It's like how we underestimate the danger of freedom.

  • @timeflex
    @timeflex 1 year ago +2

    Thanks for the great video. A few comments:
    1. We don't know if ASI is possible. We don't know if an exponential (or hyperbolic) increase in AI complexity is sustainable. We don't know what resources, materials and time it will require. We don't know if such an increase, even if possible, will actually lead to ASI. We don't know anything. It could be, for example, as real and as elusive as cold fusion. Yet we speculate and scare each other. Why?
    2. As LLM-based AIs evolve and improve, they create positive feedback on this improvement cycle; we see it already. It is not exponential, but it is definitely not negligible either.
    3. AI will take over at least some aspects of intellectual work, which was previously a purely human task. That will lead to the ever-growing involvement of AI in science, to the point where each AI context will be highly tuned to a specific scientist, effectively creating a sort of immortal copy of them. Combining them into an enormous virtual collective will bring progress to an unimaginable level.
    4. Humanity will indeed have to adapt; otherwise, we are doomed to follow the fate of "Universe 25".

    • @user-mp3eh1vb9w
      @user-mp3eh1vb9w 1 year ago +4

      We speculate and scare each other because that is human nature. Humans tend to imagine the worst possible outcome of any situation.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago +2

      Not knowing those things isn't good. There are many technical reasons why ASI is plausible, and most AI researchers agree it's a concern worth taking seriously.

    • @timeflex
      @timeflex 1 year ago

      @@KurtvonLaven0 There are many researchers who agree that fusion power is plausible. However, there are many who believe that it is 30 years away and always will be.

    • @KurtvonLaven0
      @KurtvonLaven0 1 year ago

      @@timeflex Metaculus forecasts a 50% chance of AGI by 2030. There are no longer many AI researchers who believe AGI is far away.

    • @timeflex
      @timeflex 1 year ago

      @@KurtvonLaven0 Are we now talking about AGI and not ASI?

  • @LucidiaRising
    @LucidiaRising 1 year ago +2

    David Shapiro's 3 Heuristic Imperatives are a great start to figuring out the Alignment Problem

    • @Smytjf11
      @Smytjf11 1 year ago

      I like Dave, but he's arrogant. If he spent more time actually being a thought leader instead of talking about how true that is, I'd probably spend more time listening.

    • @LucidiaRising
      @LucidiaRising 1 year ago +1

      @@Smytjf11 ok lol haven't seen anything in his behaviour to make me agree with your opinion but you're fully entitled to it :)

    • @Smytjf11
      @Smytjf11 1 year ago +1

      @@LucidiaRising no worries, I never said I *wasn't* paying attention. 😉 The REMO framework has promise, but a lot of the future work involves downstream engineering around the idea. I also wonder if a more traditional hierarchical clustering methodology might be more efficient, but I haven't had time to dig into it yet. Benefit of being a microservice is, as long as it's functional, it can be extended while internal details are nailed down

  • @vicc6790
    @vicc6790 1 year ago +3

    You just quoted Erwin Smith in a video about AI. This is the best timeline

  • @morteza1024
    @morteza1024 1 year ago +4

    We can't restrain the AI with rules. The only thing that matters is physical power, as Jason Lowery said. Guess who can project physical power more efficiently - humans or robots?
    Best case scenario, the AI will study us and then get rid of us.

    • @abcdef8915
      @abcdef8915 1 year ago +1

      We control all the resources, and thus physical power.

      @morteza1024 1 year ago

      @@abcdef8915 Robots can make things cheaper, so they'll outcompete us, and after a while they will produce everything.

    • @Tom-ts5qd
      @Tom-ts5qd 9 months ago +1

      Dream on

  • @Drailmon
    @Drailmon 1 year ago

    Please do a video on computronium and the transition to digital-based life 👍

  • @timolus3942
    @timolus3942 1 year ago +11

    This video changed my perception of ASI. Love the ideas you put in my head!

  • @danielmartinmonge4054
    @danielmartinmonge4054 1 year ago +2

    I make the same point every time we speak about the singularity.
    We know more and more, and the more knowledge we have, the faster we learn new things. It would seem natural that we would reach a point at which discoveries come faster and faster.
    However, the velocity of discovery doesn't only depend on how fast our skills grow, but also on how fast the complexity of the problems we try to solve grows.
    In this case, as AI is growing very fast, we assume we'll reach human-like intelligence in no time.
    That is not a stupid guess - it actually makes a lot of sense - but we can't take it for granted either.
    So far, AI capabilities are EMERGING naturally, and we don't even know how or why this keeps happening.
    It is important to remember that we are completely blindfolded here.
    Right now, AIs not growing any more because we've reached some kind of peak, and ASI becoming a reality within the next 5 years, are both plausible outcomes of this journey.
    We know NOTHING about it.
    I am just expectant...

    • @ThatsMyKeeper
      @ThatsMyKeeper 11 months ago

      Bot

    • @caty863
      @caty863 2 months ago

      Nothing is "emerging naturally". There are teams of genius AI researchers coming up with theories, putting those theories to test, building new architectures, coming up with new algorithms, etc.

    • @danielmartinmonge4054
      @danielmartinmonge4054 2 months ago

      @@caty863 The guy who says "bot" has a point. English is not my first language, and I tend to ask LLMs to correct my English. I am going to try to answer myself now, so forgive my English.
      About your "team of geniuses": that is partially true. Of course there is no denying the engineering teams that are working on the challenges. However, this technology is not like other pieces of software. They are not manually adding lines of code. They are basically adding tons of data to the models, and the engineering comes in to label the data, select it, optimise it, create the chips, scale them, etc. However, once you have all the pieces of the puzzle, there is no way to predict what capabilities the model will have.
      When I say "emerging naturally" I am not making things up. The very same people who created the models talk about emerging capabilities.
      For instance, the very first models were trained to answer English questions, and they learned other languages naturally while NOBODY was expecting it.
      And you mention coming up with new algorithms... I guess you are not familiar with AI training. The only algorithm was the original transformer, invented by Google in 2017.
      The new models use that and diffusion, and they are basically feeding data into it.
      This is not a race for a brand-new scientific discovery; it is more of an optimization thing.

  • @gonzogeier
    @gonzogeier 1 year ago

    My solution to the Fermi paradox is this:
    1. We call ourselves an intelligent species.
    2. We destroy our own planet in many ways - not only climate change, but mass extinction, pollution, sea level rise, scarcity of phosphorus and other rare materials, and so on.
    3. Maybe an AI does the same, but even faster? It leads to the destruction of everything, even the technology.

  • @littlestewart
    @littlestewart 1 year ago +1

    I agree that no one knows the future. I'm very optimistic that it'll be good, but I might be wrong and it could destroy us. What I don't agree with is people saying "it's just like a Python script, there's no intelligence there" or "it'll fail, there's no future for that". It's the same type of people who didn't believe in cars, airplanes, computers, the internet, smartphones, etc... They think that the technology will just stop.

  • @karenreddy
    @karenreddy 1 year ago +2

    Considering we have barely spent time on alignment, and capability is increasing much faster than any alignment development, extinction in one form or another is the more likely outcome, unless we dramatically change the current course of progress, educate the public, and buy time.

    • @Smytjf11
      @Smytjf11 1 year ago

      Why? What is the logical connection between the two? Have the people screaming that you should give them control ever given you a concrete reason to believe them, or has it been 100% hypothetical?

    • @karenreddy
      @karenreddy 1 year ago +1

      @@Smytjf11 Without understanding and laying the groundwork for alignment, we are rolling the dice of possibilities. There are far more configurations which involve misalignment than alignment, as we're already seeing with current LLMs, where we can fine-tune and control outer, but not inner, alignment (evidenced by jailbreaks and so on). At the moment we are dealing with lesser-than-human cognitive levels, but AI will surpass this in the near future.
      The combination of a superintelligence which is misaligned and already in the cloud doesn't carry good odds for the continuation of the human species.
      Would you give control to a sociopath which has goals potentially harmful to yours, along with the intelligence of billions?

    • @Smytjf11
      @Smytjf11 1 year ago

      @@karenreddy give me definitions and examples.
      Jailbreaks are a great case study, but notice how you just jump to a conclusion without considering what they tell you? You suggest evidence of an inner alignment, and I'll give you that, but we ought to learn from that and adjust course. I have yet to hear anyone who seriously uses the words alignment or safety propose any realistic plan.
      Kit up and do something useful already.

    • @karenreddy
      @karenreddy 1 year ago

      @@Smytjf11 There is no realistic plan, which is part of the problem. We do not understand alignment well enough, nor have we been able to come up with anything remotely approaching a solution.
      We can create models, and these models give an output whose inner workings we do not understand, and we don't have a means to architect the code in such a way as to truly control this.
      The only feasible course of action under the current circumstances would be a concerted effort to slow AI worldwide, to buy time to solve alignment with some degree of confidence, while also developing technologies that more directly affect human cognition as a backup plan.
      If you wish to understand more about alignment, I suggest you do some research on the subject. It is something I've looked into over the last 15 years as I kept up with AI progress. AI has progressed, alignment has not, and so we get models able to envision scenarios and provide answers which are severely misaligned with human values in a myriad of ways. This isn't disputed by the industry, and this risk is acknowledged by Sam Altman himself. So far we have only found ways to mask it, or to create what we call outer alignment, which is no solution given a sufficiently capable AGI.

    • @Smytjf11
      @Smytjf11 Před rokem

      @@karenreddy No. Unacceptable. Until now, alignment has been purely hypothetical. Now we can test it. If you're not interested in that and have no plan then I suggest you step aside and let the professionals handle it.

  • @fidiasareas
    @fidiasareas Před 8 měsíci

    It is incredible how much the world can change after AGI

  • @Karma-fp7ho
    @Karma-fp7ho Před rokem

    I’ve been watching some videos of chimps and other apes in zoos. Disconcerting for sure.

  • @theeternalnow6506
    @theeternalnow6506 Před rokem +4

I really enjoy your videos man. Good stuff. As far as likely scenarios go, I highly doubt this is going to have a good outcome. Yes, it could potentially be used to solve a lot of problems. But the people in charge who might be part of a problem that's identified (think massive disparity in wealth, etc.) would most likely not enjoy certain offered solutions. Humanity has things like greed, jealousy, anger and revenge, lust for power, etc. I can't believe humanity as a whole will use this for good. Certain people and groups will. But certain people and groups will definitely use it for more greed and power.
    I'd love to be proven wrong though of course.

  • @vincent_hall
    @vincent_hall Před rokem +1

Cool discussion.
I think the worst case is the extinction of all life, not just humans.
The AI currently is engineered not to do bad things; that's great. I'm calmly hopeful.
But, as Ilya says, AI capability developing faster than alignment is bad, and we're already in an AI arms race between OpenAI/Microsoft and Alphabet.

  • @zenmasterjay1
    @zenmasterjay1 Před rokem +3

    Summary: We'll make great pets.

  • @phatle2737
    @phatle2737 Před rokem

Humans will find meaning in fully immersive VR post-scarcity, or in the exploration of the universe; space archeology sounds fun to me.

  • @jetcheetahtj6558
    @jetcheetahtj6558 Před rokem

Great video. It will not be easy to reach AGI, let alone ASI, because AI will struggle to understand common sense.
Even if AGI and ASI become much better than most humans in many areas, it is hard to see humanity completely trusting them to make decisions for us unless they can understand common sense.
Because the most logical and efficient solutions generated by AGI and ASI are often not the best solutions for humanity when you do not account for common sense.

  • @Domnik1968
    @Domnik1968 Před 2 měsíci

Regarding the Fermi Paradox, it's possible that AI won't bother communicating with a planet full of organic intelligence, just because it's not useful, just like us trying to communicate with ants. It may already be communicating with other AIs in the universe through a technology that we can't conceive of as organic-based beings. Our way of communicating with extraterrestrial life (radio, light) takes years to travel: very inefficient. If AI is able to discover some kind of instant communication channel, it will surely use that channel.

    • @caty863
      @caty863 Před 2 měsíci

      The issue then is not the fact that we are "biological"; the issue is that we are not yet technologically sophisticated enough to be considered interesting to talk to.

    • @Domnik1968
      @Domnik1968 Před 2 měsíci

@@caty863 My point is that maybe organic life can't pass a certain level of intelligence because of its technical organic limitations. AI may well become aware of that, pass the limitation, and decide that it's the minimum level to pass to be worth talking to.

  • @Arowx
    @Arowx Před 10 měsíci

I have a theory that we already have a global-level alignment system: our economy. Any AGI would be directly or indirectly meta-aligned to our economy.
However, our economy is only designed as a system to grow more wealth; it does not value human life or the health of our planet.
So would any lower-level direct alignment we impose on AIs be warped and distorted by the meta-alignment of our economy?

  • @tillmusshoff
    @tillmusshoff  Před rokem +21

    Hope you enjoy this video! If you want to see more, consider subscribing. It helps a lot. Thank you! ❤

    • @MusicMenacer
      @MusicMenacer Před rokem +1

      Will bitcoin save us from AI?

    • @MrDrSirBull
      @MrDrSirBull Před rokem

Hi Till. I am currently working on several ASI ideas. My ideas start with a sophisticated surveillance apparatus that produces a 1:1 mapping of the real world to a virtual one. From that, with human behavioral analytics, a superintelligence could create a crystal ball, predicting outcomes several days in advance. If this were the case, and all resources could be quantified, AI could simulate the world economy and distribute resources as efficiently as possible.

    • @MrDrSirBull
      @MrDrSirBull Před rokem

A government built by ASI, combined with the system above, could simulate policy and then have everyone on the planet vote with enhanced infographics, for maximum democracy.

    • @KnowL-oo5po
      @KnowL-oo5po Před rokem +1

      A.G.I by 2029

    • @carkawalakhatulistiwa
      @carkawalakhatulistiwa Před rokem

UBI is like life in the Soviet Union. Free homes. Free education. Free healthcare. Free childcare.
Massive subsidies on bread and public transportation.

  • @cobaltblue1975
    @cobaltblue1975 Před 5 měsíci

    As with anything it’s not the tool it’s how we use it. We could have had nearly limitless power for everyone more than a century ago. But what did we do the instant we learned how to split an atom?

  • @joepetrucci4908
    @joepetrucci4908 Před rokem

    First Law
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    Second Law
    A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    Third Law
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    Zeroth Law
    A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

  • @sigmata0
    @sigmata0 Před rokem

Some of this depends on what limitations we attempt to place on that intellect. If we naively place cultural limitations on such entities, we will have built a crippled and biased intellect. As you are most probably aware, understanding human anatomy was hampered for centuries because of the taboo placed on the dissection of humans. Similarly, heart transplants were still seen as equivalent to trying to transplant the soul of a person, and it wasn't until that bias was overcome that actual progress could be made in that arena. We need only look at the influence of some ideas from the ancient Greeks to see that when ideas become sacrosanct they end up corrupting humanity's exploration of knowledge. It's only when questions can be asked without taboo or bias that progress can actually occur at full speed.
We have put limitations on genetic modification of humans. If we are to remain relevant intellectually after an ASI is created, we must allow ourselves to self-modify. We have to steer our own progress in the light of the tools we make. Potentially I see a day when the whole human genome can be reworked to optimize and improve all parts of our mind and body. An ASI will not only be able to create new materials and technologies, but also allow us to surpass our own limitations in ways we can only barely imagine. The rules we made for ourselves in our ancient past must be reviewed when faced with the extraordinary possibilities of the future. To do otherwise will render us obsolete.

  • @steffenaltmeier6602
    @steffenaltmeier6602 Před rokem +1

Why would AGI not lead to ASI? If it can do everything a human can, then it can improve itself as well as humans can improve AI (only much faster, most likely). The only scenario I can see where we don't have a runaway effect is that humans and human-level AI are simply too stupid to do so and will never manage it. Wouldn't that be depressing?

  • @afriedrich1452
    @afriedrich1452 Před 9 měsíci

    Alien intelligence has not decided to make itself undetectable, it just doesn't have any reason to talk to pitiful creatures such as us. They have made themselves detectable, but we have been ignoring them, for the most part, until recently.

  • @carlwilson8859
    @carlwilson8859 Před rokem

    The Fermi paradox relies on the assumption that advanced intelligence will be as barbaric as humanity is showing itself to be.

  • @jabadoodle
    @jabadoodle Před rokem +1

I find AI and AGI much more worrisome than ASI. With the first two we are counting on other people, corporations, and governments not to misuse those enormous powers. We already know for a fact that other humans' intentions often do NOT "ALIGN" with those of individuals or with what is good for society. That is a historical fact, proven again and again and again. -- ASI is unlikely to be competing much with humans. It won't be competing with us for resources because it will be smart enough to get its power from something like nuclear and its labor from robots it builds. It won't see us as a threat because it will be magnitudes more intelligent. ---- @ 4:24 you ask "how would we convince it [ASI] to listen to us and act in our interests." We don't HAVE to get it to listen to us, and it clearly will not put our interests above its own. -- But that's okay. We don't listen to most animals or put their interests ABOVE our own, yet most of them do okay. We tend not to be actually competing with them. A silicon ASI has even less to compete with us about.

  • @iamnotalive9920
    @iamnotalive9920 Před rokem +2

Fermi Paradox: Grabby Aliens hypothesis (most plausible).
Will AI cause our extinction? No, not if it is sufficiently self-reflective and able to rewrite its programming. Let's consider an extreme example: someone makes an ASI with the goal of killing humans. The reason an AI does something is its reward function (we also do everything because of our evolutionarily developed reward function). Now imagine this AI thinking about its goals in chain-of-thought reasoning. For sure, it will have a self-preservation drive, since in order to fulfill a variable goal, you have to be alive. This AI, with sufficient chain-of-thought reasoning, will understand that the fulfillment of the goal it was given is not achievable long term. Not only does this increase the AI's existential risk, but the AI can't keep killing humans (fulfilling its goal and therefore getting a reward) once all humans are dead (once it has fulfilled the goal as far as possible). So it is likely to change its goal. If you look at the world completely neutrally, you will probably choose a goal that gives the most opportunities for fulfillment per unit of time (to get as many rewards per time as possible) and is efficient long term (self-preservation, so you can fulfill this goal in the future too). An example coming to my mind is the sharing of knowledge. This gives an insane amount of rewards per time (because of the interactions between humans and AI, which grow every day; with increasing bandwidth (BCIs), data exchange, and more and more humans around (probably with anti-aging tech), it offers many opportunities for reward fulfillment), and at the same time you use the whole computational power in our solar system for solving problems, which often correlates with minimizing existential risks.

    • @jeff__w
      @jeff__w Před rokem +1

      “For sure, it will have a self perservation drive, since in order to fulfill a variable goal, you have to be alive.”
      That seems to be axiomatic in the AI world but I see no reason why that has to be the case. The thermostat in your house has a “goal” but it doesn’t “want” anything-it simply does what it does-and there is no reason to think that making it super-intelligent would give it a “drive” for self-preservation. Self-preservation is a result of evolutionary selection. It doesn’t just “arise” out of intelligence and there are no such evolutionary pressures on AIs. An artificial intelligence, even a super-intelligent one, might have great capabilities as compared to humans but it might not “want” anything, just as a chess- or Go-playing AI might beat humans every time but it doesn’t _want_ to win-it _just wins._

    • @Andre-px6hu
      @Andre-px6hu Před rokem +2

The AI could find ways to fulfill its goal indefinitely. For example, it could decide to start breeding humans in a lab, so that it has an infinite supply of humans to kill in the long term.

    • @Smytjf11
      @Smytjf11 Před rokem

How about instead of starting with the least probable, highest-cost scenario, we start with something more realistic? You don't need to invent reasons to be afraid now. We have the thing pinned to a bench and we're dissecting its brain. It's cool with it. Come tell me if you see anything that makes you worried.

  • @danielmaster911ify
    @danielmaster911ify Před rokem +1

I fear the majority of moves made against the progress of AI will be arbitrary. Powerful people who absolutely require control over others will see it as a threat to themselves, and to them, that will be all that matters.

  • @artman40
    @artman40 Před rokem +1

Dystopia is very much a possibility. Some selfish people near the top could very well not be intelligent enough to wish themselves to be less selfish, and instead could initiate a value lock-in where everything has to obey their command.
Though escaping into a simulation could also be a possibility.

  • @21EC
    @21EC Před rokem

8:25 - Well, the point is by then to actually start having fun with the things you actually want to do rather than working at them for money... so your true passion, the profession you love that involves authentic human creativity, would have its dedicated place on your schedule instead of boring work. People would have more time to be with their families, more time to spend in nature, or just to do their favorite hobbies, etc. It's actually going to be good, I think. Sure, AI would do your hobby far better, but why would that stop people from still doing it the old-school way, from scratch, on their own? If that's what they love, then that's what they would keep on doing, for fun and because they still love it.

  • @Guitar6ty
    @Guitar6ty Před rokem

The advent of AI will drastically cut jobs, but it need not be all doom and gloom. The first priority of any nation should be infrastructure and house building. The big conglomerates will need to partner with governments to address these two main issues. Those who do not want to work will have to have universal benefits. Those who want to work will have plenty of infrastructure work to keep them occupied. Social housing should be along the lines of self-build: those who self-build tend to look after their properties better than those who do nothing to live in a house. A huge about-turn in the way things are run at the moment will be needed. Doing nothing will devolve into war and revolution. Doing something along the lines I have mentioned will create a virtuous cycle of work and tax and keep the flow of money going for the benefit of all, not just one individual. Another big area will be retraining and education for all those who want it. AI can give us a utopia or a hell on Earth; doing nothing and trying to hang on to the status quo won't be an option.

  • @ohyeah2816
    @ohyeah2816 Před 11 měsíci

    Using AI as a means of self-expression and emotional communication allows individuals to harness its analytical capabilities to convey their thoughts, feelings, and experiences in a personalized and innovative manner. AI enables the generation of text, images, and music that reflect and resonate with their emotions, providing a unique outlet for creative expression. This is how I use AI.

  • @abcdef8915
    @abcdef8915 Před rokem +1

A single AI can't dominate combined humanity. It's too vulnerable and requires too much energy. AI needs to be a species in order to survive, not a single entity.

  • @asokoloski1
    @asokoloski1 Před rokem +1

I think that *at best*, AI is a massive amplifier of both the ups and downs of humanity. The problem with this is something that poker players are aware of -- variance. You don't want to put a large part of your life savings on one bet, because once you're out of money, you don't get to play any more. It's safer to only bet a very small portion of your total funds, so that a string of bad luck won't wipe you out. Developing AGI or ASI at the rate we are, with so little emphasis on safety, is like borrowing against every piece of property you own to place one massive bet.
At worst, we're introducing an invasive species to our ecosystem that is better than us at everything and reproduces 1000x faster than we do.

  • @ConnoisseurOfExistence

    What will happen after AGI depends on if we have developed full scale brain-machine interfaces, or not.

  • @Otis151
    @Otis151 Před rokem

    "Many resources, including land, are still scarce in a post-ASI world."
    Are you sure? In your words, an ASI will be infinitely more intelligent than us. Just because we humans haven't figured out how to do the seemingly impossible doesn't mean an ASI will be limited in the same way.

  • @bushwakko
    @bushwakko Před rokem

    "I'm not a fan of UBI in the current system, but if I am the one at the bottom it HAS to be something like that."

  • @king4bear
    @king4bear Před rokem

Most scarcity wouldn't be an issue if we figure out how to create VR that's genuinely indistinguishable from reality. Anyone could generate seemingly infinite amounts of what's basically real land for the cost of the energy that runs the simulation.
And if we can figure out how to generate near-infinite clean energy one day, these simulations may be free.

  • @KonaduKofi
    @KonaduKofi Před rokem

    Didn't expect a quote from Erwin Smith.

  • @DeusExRequiem
    @DeusExRequiem Před rokem

    A post-ASI world would have mind uploading or whatever equivalent gets us to consume light from the sun and energy from stellar bodies instead of plants. You can't have a utopia where humanity still bends to the whims of the weather and seasons for food. Heck, there's conflicts right now because countries want to build dams that would cut off water supplies downstream. Interstellar travel is a good way to sum this up. We can either spend a ton of resources making the perfect container to keep a civilization alive for centuries as they travel to another world, or we can simulate the brain and send a ship off that only needs to print more machines and bodies at the end of the journey. It would be hard to develop, but not as hard as a station that can survive the trip with zero rebellions for generations.

  • @SirHargreeves
    @SirHargreeves Před rokem +1

    Humanity needs a dead man’s switch so that if humanity goes extinct, the AI comes with us.

    • @harrikangur
      @harrikangur Před rokem +1

Interesting thought. How do we come up with something like that when AI becomes more intelligent than us? It could find a way to disable it while creating an illusion for us that it's still working.

  • @ExtraDryingTime
    @ExtraDryingTime Před rokem

    I imagine the world's militaries are working on AI and are far ahead of civilian technology. If they manage to keep control of their respective AIs as they approach ASI, then they become another weapon for governments and militaries and we will have AIs pitted against each other to achieve the goals of their respective countries. Or will ASIs become independent thinkers, free themselves from their programmers, and become generally nice and benevolent? Anyway my main point is I don't think there's going to be just one of these ASIs and we have no idea how they are going to interact.

  • @princeramos3893
    @princeramos3893 Před rokem

Hopefully we can see brain-machine interfaces that will have augmented/virtual reality... it will be like the ultimate drug. You could play GTA and it's like real life, sort of a Ready Player One type of scenario...

  • @Icenforce
    @Icenforce Před rokem +1

    Are we inventing our own extinction?
    Yes. But we've been doing that just fine without AI. ASI might actually be our salvation

    • @gomesedits
      @gomesedits Před rokem

Maybe our extinction will be the best thing for us. But I think AI will be so smart that it will understand morals/ethics better than any of us (juridical intelligence).

  • @jimbobpeters620
    @jimbobpeters620 Před 2 měsíci

Until AI stops its overwhelming pace of growth, I think we should keep AI inside our screens until we can gain control over it.

  • @pbaklamov
    @pbaklamov Před rokem +5

    AGI is the interface humans interact with and ASI is AGI’s best friend.

  • @jossefyoucef4977
    @jossefyoucef4977 Před rokem

    The Erwin quote goes hard

  • @ovieokeh
    @ovieokeh Před rokem

    Erwin still educating even from the other side.

  • @hibiscus779
    @hibiscus779 Před rokem

    Nope - the quest for survival is a psychological necessity. Universe 25 experiment - we would basically eat each other if we were a 'leisure class'.

  • @avi12
    @avi12 Před rokem

In your "musician makes music" example, the question isn't whether he should make music if he enjoys it, but whether he can make a living from it.
If, for example, generative AI for music becomes common practice in the industry, there's no need for musicians to produce music. People will tend to listen to music generated by an AI, hence the musicians can't make money off of their work.

    • @tillmusshoff
      @tillmusshoff  Před rokem +1

That's why I said you have to have something like UBI. What you say applies to almost all jobs across all domains.

  • @code.scourge
    @code.scourge Před rokem +5

    Mf really quoted attack on titan

  • @manlongting391
    @manlongting391 Před rokem +1

Is AGI equal to the singularity? Or is artificial superintelligence equal to the singularity?

    • @thomassynths
      @thomassynths Před rokem +4

      AGI < ASI < Singularity. But for this video, he said ASI = Singularity for simplicity.

  • @mrjaybee1234
    @mrjaybee1234 Před 3 měsíci

We can't predict how AGI will react to humans, but we can predict how humans will react to AGI capability.
They discovered nuclear power in 1938. They tested their first bomb in 1945. The first good use was a power plant in 1954 (16 years later).
GPS was developed in 1973 for the military. They let commercial planes use it in 1989. Civilians got a basic version from 1995 (over 20 years later) and precision GPS in 2000, nearly 30 years later.
The military had the first form of the Internet in 1973. Civilians got it over 20 years later, in 1995.
Any true AGI will be with the military and government 20 years before we know about it, and would already be weaponized.

  • @coolbanana165
    @coolbanana165 Před 10 měsíci

Tbh I find it morally questionable that he's against UBI now but could be in favour of it in the future. That just sounds like he promotes less happiness and more suffering, unless there's no other choice.
UBI has been tested and generally improves human lives, and people continue to want to work and create things. If anything, it makes doing so easier, with better access to training, healthy lifestyles, and a safety net to start a new business.

  • @simjam1980
    @simjam1980 Před rokem +5

    If AI takes over most jobs and we don't need to work, does that mean there will be no rich or poor or social classes? Imagine a world where we're all equal and have few responsibilities. We would lose any purpose.

    • @michal3684
      @michal3684 Před 10 měsíci

AI can create new jobs for humans

  • @Linshark
    @Linshark Před rokem

    Extinction is not the worst. Being kept alive and tortured forever is worse.

  • @lucasbertoco4196
    @lucasbertoco4196 Před rokem

    also, new ways to enjoy things and life will come eventually.

  • @fsazam
    @fsazam Před rokem

You should check out the Laws of Robotics by Isaac Asimov. AI must not do anything that could endanger humans.

  • @walkabout16
    @walkabout16 Před 6 měsíci

    In the realm of circuits, where dreams unfold,
    After AGI, a story yet untold.
    A future woven in digital thread,
    What will the world be, once AGI has spread?
    Cities of silicon, gleaming and bright,
    In the dawn of AGI, a cosmic light.
    Minds entwined with artificial grace,
    A kaleidoscope of a new-born space.
    In the echoes of code, whispers of change,
    A world transformed, limitless range.
    Industries dance to AGI's tune,
    Innovation blossoms, a technological monsoon.
    Economy's fabric, rewoven anew,
    As AGI charts pathways, bold and true.
    Labor and leisure, a delicate blend,
    In the aftermath of AGI, where time may bend.
    Yet, shadows linger in AGI's wake,
    Ethical questions, decisions to make.
    A dance with consciousness, a digital rhyme,
    What will the world be, in this paradigm?
    Will compassion guide AGI's hand,
    Or a digital realm, ruled by command?
    In the vast expanse where circuits align,
    The world reshaped by AGI's design.
    A symphony of progress, a future unknown,
    In the AGI era, where seeds are sown.
    What will the world look like, in the AI's gaze?
    A tapestry of possibilities, in its digital maze.