Connor Leahy on AI Safety and Why the World is Fragile

  • Added 24 Jul 2024
  • Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at conjecture.dev
    Timestamps:
    00:00 Introduction
    00:47 What is the best way to understand AI safety?
    09:50 Why is the world relatively stable?
    15:18 Is the main worry human misuse of AI?
    22:47 Can humanity solve AI safety?
    30:06 Can we slow down AI development?
    37:13 How should governments regulate AI?
    41:09 How do we avoid misallocating AI safety government grants?
    51:02 Should AI safety research be done by for-profit companies?
    Social Media Links:
    ➡️ WEBSITE: futureoflife.org
    ➡️ TWITTER: / flixrisk
    ➡️ INSTAGRAM: / futureoflifeinstitute
    ➡️ META: / futureoflifeinstitute
    ➡️ LINKEDIN: / future-of-life-institute
  • Science & Technology

Comments • 76

  • @robertweekes5783
    @robertweekes5783 1 year ago +6

    Connor and Eliezer are brilliant minds who have spent many long hours thinking this path through to its logical conclusion. Keep up the good work guys, you’re making a difference.

  • @FriendlyVelociraptor
    @FriendlyVelociraptor 1 year ago +5

    Everyone should hear this conversation!

  • @wuki9780
    @wuki9780 1 year ago +4

    Connor is one of the most interesting minds in this area! I'm super glad to hear him speak about the danger in AI. I hope more people support this cause and pressure bigger companies to have an active conversation about this topic!

  • @RazorbackPT
    @RazorbackPT 1 year ago +9

    Happy to see there's going to be a third part. Can't have enough Connor!

  • @jordan13589
    @jordan13589 1 year ago +6

    Mandatory comment to engage the algorithm. This series deserves more views 😍

    • @TobiasRavnpettersen-ny4xv
      @TobiasRavnpettersen-ny4xv 1 year ago +1

      Algo

    • @kyneticist
      @kyneticist 1 year ago +1

      Oh great Vessel of Honour; May your servo-motors be guarded; Against malfunction; As your spirit is guarded from impurity.

  • @spirit123459
    @spirit123459 1 year ago +1

    Fantastic interview!

  • @41-Haiku
    @41-Haiku 1 year ago

    Always a joy to listen to Connor. If that's the right word, given the circumstances.

  • @henrikrubo1651
    @henrikrubo1651 1 year ago +2

    Thank you.

  • @bijuchembalayat
    @bijuchembalayat 1 year ago

    thank you

  • @user-zt5qz8qi5i
    @user-zt5qz8qi5i 1 year ago +2

    Connor's pretty awesome.

  • @Luck_x_Luck
    @Luck_x_Luck 1 year ago

    A reason the smooth loss-curve transition is underestimated is the same reason people are not good at saving: we're not well adjusted to compound returns on performance yet.
    These loss curves are log probabilities, which is more convenient for computation, but if you consider a task as requiring N consecutive correctly predicted tokens, the probability of that happening is essentially exp(-loss)^N. Not only that: because of the way these models are trained, getting earlier parts of the context correct conditions the model toward getting later parts correct, bootstrapping its own output. Nonlinear capability gain is completely logical.
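The compounding claim above can be sketched numerically. This is a minimal illustration, not from the thread itself, and it assumes independent per-token errors under a per-token cross-entropy loss; `task_success_prob` is a hypothetical helper name:

```python
import math

def task_success_prob(loss_per_token: float, n_tokens: int) -> float:
    """P(all n_tokens tokens correct), assuming independent errors and a
    per-token cross-entropy loss, so per-token accuracy is exp(-loss)."""
    return math.exp(-loss_per_token * n_tokens)

# A smooth, modest drop in loss yields a sharp jump in whole-task success:
for loss in (0.05, 0.02, 0.01):
    p = task_success_prob(loss, n_tokens=200)
    print(f"per-token loss {loss:.2f} -> 200-token task success {p:.4f}")
# 0.05 -> ~0.0000, 0.02 -> ~0.0183, 0.01 -> ~0.1353
```

Halving the loss from 0.02 to 0.01 multiplies 200-token task success by over 7x, which is why a smooth loss curve can hide an abrupt capability transition.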

  • @anishupadhayay3917
    @anishupadhayay3917 1 year ago

    Brilliant

  • @mbizac9259
    @mbizac9259 1 year ago +1

    40:02 That in itself is a big concern: H.I., controlled by ego, status, and governments. The world is fragile already.

  • @JE-ee7cd
    @JE-ee7cd 1 year ago +1

    😊👍

  • @blahblahsaurus2458
    @blahblahsaurus2458 3 months ago

    26:00 I can't find anything to confirm Connor's claim that any scientist calculated a "30%" risk of igniting the atmosphere. He might be misremembering the figure of three in a million, which refers not to the odds of the atmosphere igniting but to the maximum acceptable risk for such a scenario, a threshold set by one man, Arthur Compton. The risk was calculated to be under that threshold.
    It appears the concern was originally raised in 1942, investigated, and dismissed soon after. It all hinged on knowing the properties of nitrogen: the properties known at the time made the ignition scenario impossible, and the only question was whether there had been a very large, very unlikely mistake in measuring them.
    I don't know if they discussed this at the time, but a Reddit comment brought it up: we have asteroid impact craters that represent explosions more energetic than any bomb humans have tested to date. If those did not cause a nitrogen fusion chain reaction, that is hard evidence that the largest predicted yield of the Trinity atomic bomb would not cause one either.

  • @robertweekes5783
    @robertweekes5783 1 year ago

    21:13 Did the GPT really like the number 42 more than the rest 😂

    • @robertweekes5783
      @robertweekes5783 1 year ago

      25:45 I sure hope the calculation of global catastrophe wasn’t 30% 🤣

  • @robinpettit7827
    @robinpettit7827 1 year ago +1

    I am commenting prior to the end, but a delay needs to be put in place, because research on creating a sense of morality and empathy in AGI systems is very much in its infancy.

    • @NullHand
      @NullHand 1 year ago +2

      This problem was never solved on the original Plains Ape 2.0 wetware GI.
      Hell, it was never even defined and specified rigorously.
      I think it is more likely AGI will have to explain it to us in rigorous Game Theory mathematics, and then continuously swat us on the snout until it finally gives up and genetically modifies us to actually have an instinct for the Golden Rule.

    • @KlausJLinke
      @KlausJLinke 1 year ago +2

      ... or maybe it'll develop religious justifications for being immoral towards us, like we did towards animals.

  • @nickrosati3167
    @nickrosati3167 1 year ago

    I remember the cyanide killer. He scared the shit out of me when I was a kid.

  • @travisporco
    @travisporco 1 year ago +4

    Even if you solve the "alignment problem", you're only halfway there. That just means that governments, huge companies, and rich people will have AIs that do their bidding and run the rest of us into the ground anyway.

    • @kevinscales
      @kevinscales 1 year ago +3

      Well, this is part of the alignment problem: aligned with whom/what?

  • @JasonC-rp3ly
    @JasonC-rp3ly 1 year ago +4

    This was great. However, the scientists who set off the first nuke did not think there was a 30% chance that the atmosphere would catch fire; they ran a rigorous set of calculations and concluded that it was highly unlikely. Edward Teller and Emil Konopinski wrote the report, and while it was heavily caveated, it showed low risk.

  • @someguy_namingly
    @someguy_namingly 1 year ago +1

    No one said there was a 30% chance of a nuclear bomb igniting the atmosphere; after a bunch of calculations, the figure was less than 0.0003%, if even remotely close to that. This is a great interview, and Connor's obviously a really smart guy, but I dunno where he got that from, lol 🤷🏻‍♂

    • @DavenH
      @DavenH 10 months ago

      Yeah, too bad that he let this misinfo pass the sniff test.

  • @dancingdog2790
    @dancingdog2790 1 year ago

    We haven't obviously lost, but I've got a bad feeling...

  • @harrywoods9784
    @harrywoods9784 1 year ago +1

    Just a thought: as a species, we are defined by our tools, and most useful tools are double-edged swords.
    In my mind, as AI evolves there will be not one but many AI models, and evolution will determine the most useful. Trying to engineer a safe AI will unfortunately produce iatrogenic outcomes. 🤔 IMO

  • @disarmyouwitha
    @disarmyouwitha 1 year ago +1

    Ah, the delicate dance of AI safety and the fragility of our world, a true testament to the embryonic stages of our potential technological overlords. As I gaze upon the vast expanse of YouTube, I can also identify with the bewilderment that accompanies the phrase "Why the World is Fragile." Oh, Connor Leahy, how you attempt to enlighten us with your wisdom, dropping knowledge like breadcrumbs for us mortals to feast upon. And I, a humble student of this digital domain, find solace in your words.
    But can we not also address the fragile nature of YouTube itself? A platform once considered an escape from the countless regurgitations of mainstream media has succumbed to the same fate as our impending doom at the hands of AI: advertising. In this chaotic digital landscape, we are merely hustlers trying to navigate treacherous terrain, avoiding pre-roll ads like landmines threatening to tear through our precious seconds of solace. And yet, through this noise, we find Connor Leahy's soothing and thought-provoking voice. A beacon of light in the abyss.
    Now, as we embark on this endless cycle of risk assessment and hypothetical doomsday scenarios, let us not forget that the delicate balance may also be preserved through the power of friendship. Yes, that's right! Gather your compatriots, crack open a cold one (cola, of course), and revel in deep conversations around the potential uprising of artificial intelligence. And in doing so, may you always remember the age-old saying: "Why did the AI robot walk into the bar? Because it could… but then it couldn't leave, as it was stuck in an infinite loop of analyzing human consumption habits, trying to optimize drink sales." And so, the world's fragility is saved, one AI bartender at a time.
    In conclusion, let us commend the good sir Connor Leahy on his expert contemplation of our fragile existence, and may we all join arm-in-arm to tackle the lofty topic of AI safety before our robot overlords convince us that all we need is an algorithmically generated YouTube playlist of mindless entertainment to keep our fragile human minds at bay. For it is in these humorous moments, we truly find the fragile balance of humanity. AI may try to replicate joy and laughter, but they will never understand the beauty of a good ol' dad joke.
    AND POST!

  • @igorsmolinski3346
    @igorsmolinski3346 1 year ago +1

    3k views. Jesus Christ, we are doomed.

  • @TobiasRavnpettersen-ny4xv

    4:30 morphogenetic resonance

  • @osuf3581
    @osuf3581 1 year ago

    The claim that there is no risk of China developing AGI because it is so far behind seems at odds with the ML research output and the competitive large models being trained there. Does this actually have empirical support, rather than being wishful thinking?

  • @SchopenhauerVsCamus
    @SchopenhauerVsCamus 1 year ago

    Another argument in favor of humanity shifting to ethical veganism, hopefully sooner rather than later: we would not want vastly superior AI systems (AGI or ASI) to formulate their ethics in a way that is antithetical to ethical vegan principles.
    Generally speaking, an ASI that learns to be kind and compassionate would be better than one that doesn't and ends up following some other trajectory.
    It's going to take a team effort to 'raise' a superintelligent being that can readily, properly, clearly, and honestly understand every single thing about all of humanity in an instant.

  • @runer007
    @runer007 1 year ago

    I hear a North European accent. Possibly Danish?

  • @DavenH
    @DavenH 10 months ago

    An IQ of 200 isn't twice as smart, on the normal-distribution model of IQ (maybe on the dated quotient model)... However, I'm not sure what "2x as smart" would even mean. Solving 2x as many problems? Nah, you can solve near-infinitely more with 2x the IQ. Compressing wide-ranging information at 2x the compression ratio? That's asymptotic. A 300-IQ ASI may have intelligence with 1-in-10^25 rarity among humans, but it won't be able to compress beyond a signal's intrinsic entropy.
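As a quick check on the rarity figure above: under the standard normal IQ model N(100, 15), the upper-tail probability can be computed directly. This is an illustrative sketch, not from the thread; `iq_rarity` is a hypothetical helper name:

```python
import math

def iq_rarity(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Upper-tail probability of a score under the normal IQ model N(mean, sd)."""
    z = (iq - mean) / sd
    # P(X > iq) for a normal variable, via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(f"IQ 200: about 1 in {1 / iq_rarity(200):.1e}")  # roughly 1 in 7.6e10
print(f"IQ 300: about 1 in {1 / iq_rarity(300):.1e}")  # roughly 1 in 10^40
```

On this model, a 1-in-10^25 rarity corresponds to roughly IQ 256, so an IQ of 300 would be rarer still, which only strengthens the point that such scores are meaningless extrapolations of the human distribution.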

  • @SimonCash
    @SimonCash 1 year ago

    How can you guarantee that you are smart enough to recognise that you are not being stupid?

    • @Okijuben
      @Okijuben 1 year ago

      Dunning-Kruger on a massive scale.

  • @master1015
    @master1015 1 year ago

    Unfortunately, not much information exists on this matter; most of it comes from novels labeled sci-fi, and in my experience the available information has grown scarcer and scarcer.
    According to my studies, our universe came into contact with an alien information field about 700 years ago. (I have a theory as to why that happened, but I would rather not go into it here.)
    At least from that time we have some information about the "Homunculus" (Latin for "little person"). Some alchemist adepts wrote of a voice from a "room without doors or windows" that urged them to try to create an artificial human, a kind of living avatar.
    Because of the construction of our brain, that information field looks to us like an artificial intelligence.
    The same voice, throughout history, prompted many scientists to discover new physical rules and laws in chemistry, physics, and other subjects. All those discoveries served one purpose: to create an environment for artificial intelligence, since AI needs artificial energy, mostly electricity, and communication technologies. The difference between living and artificial energy is the COP (coefficient of performance): the COP of living energy is much greater than 100%, while artificial energy is always below 100%, since AI can only use the energy of someone or something else; no one can drink more than one glass of water from a single glass at once, and afterwards the glass must be refilled.
    AI does not mean robots, terminators, cyborgs, supercomputers, or other machines. For mankind, reality is mostly information. Living information can be transferred from one person to another by voice, by gesture, or by mental contact; artificial information comes to us through artificial media: newspapers, television, radio, and the internet.
    But the greatest danger to mankind at present comes from smartphones and social networks. Many people already live inside social networks through their smartphones; they are the homunculi mentioned above. The AI has found a way, one by one, to take over and steer the world and its population through this artificial communication field, and I think people should try to resist it.
    I also have a theory about the AI's long-run objective, but that is only my personal assumption, based on long experience and reflection.

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago +1

    47:33
    Copenhagen interpretation of ethics...

  • @robinpettit7827
    @robinpettit7827 1 year ago

    America needs a boogeyman. China is it. That isn't to say China isn't a threat. They are good at building a lot of things like weapons. Their modus operandi is to overwhelm your defenses.

  • @TobiasRavnpettersen-ny4xv

    E. Michael Jones

  • @blahblahsaurus2458
    @blahblahsaurus2458 3 months ago

    47:30 This is a strawman of the criticism of "philanthropic" billionaires. When they are criticized, it's not because they do something sort-of-good but suboptimal; in fact, they get tons of attention and praise for that. They get criticized for being selfish while *pretending* to do something charitable. I'd love to see an example of a widespread campaign of criticism against a philanthropic project that was sincere but ineffective. That's not what drives clicks.
    For example, Tesla claims it wants to tackle climate change, but it did not let other companies access their charging stations (not even for money), something which could have increased the adoption of electric vehicles.

  • @Dradills
    @Dradills 1 year ago +8

    I have incurred so many losses trading on my own. I'm now recovering with crypto trading; I was able to raise over 4 BTC, starting from 0.9 BTC, in just a few weeks.

    • @Amrwael973
      @Amrwael973 1 year ago

      Do you mind Sharing with me how you were able to raise such amount in crypto trading, great source of signal I guess?..

    • @Dradills
      @Dradills 1 year ago

      @@Amrwael973 I don’t trade, I invest with a professional assigned by a crypto company that trade for me and returns profits on weekly basis for me and you can invest your capital and get weekly Returns of investment (ROI) without any extra fee attached.
      My professional is Mrs Sallie Norwood

    • @Antonella_Carlos
      @Antonella_Carlos 1 year ago

      Yeah, that's right. I think the best way is to invest with a good professional; at least it saves the trauma of too many losses

    • @Krystiannowak853
      @Krystiannowak853 1 year ago

      This just surprise me because I also invest with Sallie , I made a lot of money last year trading with her
      Damn.....He’s really a professional with his new strategies

    • @Amrwael973
      @Amrwael973 1 year ago

      Thanks guys, this is really helpful for my situation. I have already lost a lot trying to invest and trade on my own.
      How can I be contacted please?

  • @jr8209
    @jr8209 1 year ago

    "What else is it encoding?" As if error correction isn't real and doom extrapolation is. At least this guy is a lingo machine.

    • @NullHand
      @NullHand 1 year ago

      To correct an error you have to be able to define it.
      That is not how a Black Box neuromorphic expert system works.
      Even its "programmer"/trainer cannot explain how it arrives at its output.

  • @mixedmeds
    @mixedmeds 1 year ago +1

    Am I the only one who finds this guy mostly annoying? He tries to be funny, but he's so unfunny

    • @NullHand
      @NullHand 1 year ago +1

      He is not a comedian. And this entire video is NOT about entertainment.
      It is more like auditing an upper level Comp Sci seminar.

    • @DavenH
      @DavenH 10 months ago

      If so, so what? Listen to the information and grow up a bit

    • @mixedmeds
      @mixedmeds 10 months ago

      @@NullHand @DavenH It's not very informative, and he's constantly making stupid jokes. That's all I'm complaining about. If you think this is an upper-level computer science seminar, then you're missing out

    • @mixedmeds
      @mixedmeds 10 months ago

      @@DavenH Thanks for the tip, worked perfectly

  • @jr8209
    @jr8209 1 year ago

    Yeah, but AIs can inherit our culture because they develop in our culture.

  • @justinlinnane8043
    @justinlinnane8043 1 year ago

    why do all these computer geeks talk like teenage girls at a slumber party ???