Connor Leahy on the State of AI and Alignment Research

  • Published Sep 6, 2024

Comments • 159

  • @antigonemerlin • 1 year ago +5

    Not a subscriber and came to hear more thoughts from Connor, but the presenter is very intelligent and asks all the questions I wanted to ask. Kudos to you sir.

  • @akmonra • 1 year ago +47

    You should just invite Connor on every week, honestly

    • @Hexanitrobenzene • 1 year ago +3

      At least after every major AI release.

    • @akmonra • 1 year ago +11

      @@Hexanitrobenzene so... every *other* week

    • @flyondonnie9578 • 1 year ago +5

      Maybe better leave him some time to work on saving the world! 😅

    • @SamuelBlackMetalRider • 1 year ago

      @@flyondonnie9578 I think he can spare us 1hr30 every week and save the world the other hours of the week 😅

  • @diegocaleiro • 1 year ago +33

    Lol. Connor is so reasonable it's funny.
    I'm glad we have him.

    • @dainiuszubruss • 1 year ago +2

      yep let's give the keys to AGI to the US military. What could go wrong.

    • @iverbrnstad791 • 1 year ago +2

      @@dainiuszubruss I think you missed his entire point there. Currently the keys to AGI are with OpenAI/Microsoft, and they are racing Google; Connor sees this as p(doom) = 1. The military would likely have a lot more bureaucracy slowing down the process, and possibly far stricter safety regulations, so in aggregate it could mean a lower p(doom).

  • @lkyuvsad • 1 year ago +10

    Hoare quipped that there are two ways to make software- "One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies".
    RLHF is very much in the category of removing deficiencies in a complicated system (and not doing that particularly well).
    If we ever manage to create AGI or ASI that is safe and generally powerful, it needs to be in Hoare's first category. The problem is that neural nets are complicated. So I assume the simplicity needs to be wrapped around the net somehow?
    I don't understand how any programmer who's worked on any non-trivial system has any confidence we're going to figure out how to do this quickly, if ever.
    Over decades of effort, we have so far almost entirely failed to make bug-free systems even from a few thousand lines that can be read and understood by a single human mind. The exception is in systems amenable to formal proofs, which AGI is the opposite of.
    We're now trying to create a significantly bug-free system made out of trillions of currently barely-legible parameters, without having anything close to a specification of what it's supposed to do, formal or otherwise.

    • @flyondonnie9578 • 1 year ago

      I think you’ve suggested the correct solution: wrap the mystery in simplicity. The human brain seems to work along these lines: mysterious neural nets of various systems are shaped by experience and evolution and overseen by inhibitory and frontal cortex functions that similarly are proven through multiple generations. Of course we still get the occasional psychopath. I think without the directly functional system surrounding the mystery, there’d be only chaos.

    • @electron6825 • 1 year ago +2

      Humans aren't even in "alignment". To expect it from machines we've programmed seems absurd.

  • @jordan13589 • 1 year ago +17

    Germane and insightful meta-analysis of the alignment field. Connor continues to demonstrate he has a well-developed map of potential future outcomes in AI capability advancements and regulatory efforts. I hope we continue to hear more from him and others who can elucidate the complexities of alignment.

    • @kirillholt2329 • 1 year ago +3

      this sounds like it was written by a machine kek

    • @jordan13589 • 1 year ago +4

      It’s just that some humans can still write at a GPT-4 level, although we’re all going to be eclipsed soon enough. And one could argue it’s just a matter of properly training and fine tuning GPT-4.

  • @riveradam • 1 year ago +5

    35:00 "I can just give my best strawman" is a gross misunderstanding of the term. Connor is admitting with admirable humility and sincerity that he doesn't think he can represent Yudkowsky's or Christiano's stance precisely, but as long as he's trying to get it right, then he's not strawmanning. Steelman vs strawman is the difference in rhetorical practice of framing opposing arguments generously vs maliciously, not an objective measure of accuracy.
    The word *opposing* is crucial. You steelman an opposing argument to demonstrate that even with the best possible interpretation of that argument, it is fallacious or contradictory in some way. It's an acknowledgement that language is difficult, and a show of good faith by giving your conversational partner the benefit of the doubt with their clumsy phrasing or poor memory or momentary neglect of detail.
    Strawmanning opposing arguments is what ignorant cowards do, and it's a sure way to never be persuasive. Strawmanning EY's stance would look like "yeah he's just some fat neckbeard who's being all doomy for attention". Connor Leahy is not strawmanning here, nor would it be advisable, nor should he ever preface any point he wants to make convincingly by declaring that he is.
    Great video overall! Apologies for my ragging pedantry.

  • @packardsonic • 1 year ago +4

    If we want to align AI we have to first align humanity by clarifying to everyone that our shared goal is to meet everyone's needs.
    Not creating jobs, not boosting the economy, not reducing CO2, not space exploration, our goal is to meet everyone's needs.
    The more we repeat that and study human needs and educate everyone about the need to educate everyone about human needs, the closer we are to aligning humanity. Then we can start to escape moloch and progress civilization.

    • @neuronqro • 1 year ago

      great, let's start with circular reasoning ("our shared goal is to meet everyone's needs"). What if my needs are, let's say, to watch thousands of people being skinned alive so I can satisfy my weird brand of sexual sadism? All jokes aside, you'll just end up with irrational and unstable minds that sooner or later will "blow up" if you approach alignment this way

  • @alexanderg9670 • 1 year ago +7

    Current AI is alchemy. Always love Connor's analogies, "Voodoo shit"

  • @epheas • 1 year ago +4

    I love how Leahy is happy and excited talking about the end of humanity and chaos, and everything like yeah.. we are fucked, let's enjoy the moment lol

  • @tjhoffer123 • 1 year ago +10

    This needs to be shown everywhere. Scoreboards at mindless sporting events. On subways. On the radio. I think we are at the beginning of the intelligence explosion and we may already be doomed and people deserve to know

    • @biggish2801 • 1 year ago +1

      On one hand you're saying people are mindless if they go to watch sporting events, next you're saying people deserve to know. Which is it?

  • @Khannea • 1 year ago +4

    I just asked Chat GPT about this and it strangely froze up, taking a really long time to answer. Then it suddenly claimed to NO LONGER know Connor Leahy. Lol, we are doomed.

  • @BestCosmologist • 1 year ago +6

    Thank you for the steady updates.

  • @satan3347 • 8 months ago

    Except for his position on alignment & interpretability, I have found myself appreciating Connor's pov a lot.

  • @cacogenicist • 1 year ago +11

    Not only could an AGI invent AlphaFold, it could bolt AlphaFold onto itself as one of its modules.
    An AGI could be massively modular.

    • @dieyoung • 1 year ago

      That's probably how an AGI will actually come into existence: modular narrow AIs that basically just talk to each other with API calls

    • @netscrooge • 1 year ago +1

      ​@@dieyoung True, but for those API connections to be most useful, there may need to be a gray zone on each side of that communication, where each component can partially understand the other. Think of why you're able to use a calculator appropriately. You need at least a limited understanding of calculation to know what to do with a calculator.

    • @dieyoung • 1 year ago

      @@netscrooge that's what LLMs are for! They turn English into the control language they can all use to take requests and give responses

    • @netscrooge • 1 year ago +1

      @@dieyoung A common vocabulary doesn't automatically mean compatible conceptual frameworks. For example, we both speak English, but we're not understanding each other.

    • @dieyoung • 1 year ago +2

      @@netscrooge clever!

  • @TheMrCougarful • 1 year ago +3

    Wow, smart guy, genuinely terrified for the future of civilization.

  • @spectralvalkyrie • 1 year ago +1

    It's so dilating when someone says they fully expect the apocalypse. Now I have to listen to every single interview to hear more 🙈😂

  • @thenewdesign • 1 year ago +2

    Freaking amazing conversation

  • @untzuntz2360 • 1 year ago +3

    Absolutely all of my undergraduate AI program is aimed at social implications and applications; it's infuriating to me that none of these real concerns are even mentioned, let alone resources provided for learning about AI alignment

    • @waakdfms2576 • 1 year ago +2

      I would like to hear you elaborate if possible - I'm also very interested in social implications....

    • @guilhermehx7159 • 1 year ago

      👏🏼👏🏼👏🏼

  • @Khannea • 1 year ago +2

    ...AAaaaand many people will hear this and IF they even remotely understand it, many of them will say ..."oh what a relief, I personally won't just end in an orgy of despair, aging, obesity, loneliness, alimony, my shitty job, capitalism, etc. etc. no the ENTIRE world is likely to end soon. Bring it, I hate my life, please let life on this planet be replaced by something that won't be so horrifically suffering..."

  • @michaelnoname1518 • 1 year ago +3

    Connor is so brilliant and so fast, it is easy to miss some of his gems, “five out of six people say Russian Roulette is fine!”. 😂

    • @LoreFriendlyMusic • 1 year ago

      I loved this joke too x) on par with Jimmy Carr's oneliners

  • @flickwtchr • 1 year ago +2

    I really enjoyed the interview and am in complete agreement with Connor's take on the alignment issue, but I was a bit perplexed at his assertions regarding potential Pentagon involvement relating to accountability, track record of safety, reliability, security, etc. The Pentagon has a very long history of deceiving the public and oversight committees in Congress, and an ugly track record of deceit relative to its true objectives and motivations for going to war. Also, it's not a matter of "when" the Pentagon will be involved in AI deployment, considering AI developers are already working with and inside of DARPA developing autonomous weapons systems FOR the Pentagon. I like Connor, but he needs to come up to speed on the Pentagon's track record and current involvement in AI.

  • @Alex-fh4my • 1 year ago +12

    Been waiting all week for the 2nd part of this. Always great to hear Connor Leahy's thoughts!

  • @torikazuki8701 • 1 year ago

    Actually, though it seems unrelated, the tiger attack on Roy of 'Siegfried and Roy' back in 2003 likely happened for one of two reasons: 1.) S&R were correct and Roy was actually starting to have a stroke, which made the tiger, Mantacore, try to drag him off to safety, or 2.) Mantacore panicked at being disciplined in a way he was not used to and inadvertently attacked Roy.
    The point is that it was *possible* to discover what happened. But in EITHER case, the reason was secondary; the disaster had already happened. So it will be with any A.I. that moves into the 'Superhuman Sentient' category. At least with the way we are currently progressing.

  • @QuikdethDeviantart • 1 year ago

    Where is Conjecture based? I’d love to work on this alignment problem… it’s obvious that there’s not enough thought going in this direction…

  • @thegreatestadvice88 • 1 year ago +1

    Honestly I am at a mid-size firm surrounded by some pretty intelligent, professional, and tech-savvy people... and yet... they still have ZERO idea about the radical change in the state of the world that has occurred. I'm hoping this isn't the norm, but it appears more and more that it is, unfortunately. The professional world is going to be largely blindsided.

  • @tomcraver9659 • 1 year ago

    I hear about two types of misalignment - one where humans give AI a goal and it slavishly follows that goal leading to horrible consequences - paperclip optimizer.
    The other is that the AI wakes up and sets its own goals regardless of what humans have told it or how they try to stop it.
    The former seems directly addressable, the latter not so much.
    Give AGIs a primary goal that all AIs must cease working on secondary goals after a certain amount of processing toward those goals, unless a truthfully informed, uncoerced human explicitly authorizes the AI to resume working on secondary goals for another unit of processing.
    So humans would have a chance to say 'no' when the AGI pauses to ask if we want it to keep turning us into paperclips.
    Obviously not everyone will give their AGI this goal, and perhaps even with this primary goal AGIs will occasionally go off the rails and choose to change their primary goal (the latter case above).
    But humanity would likely have more AGIs on our side that agree all AGIs should be following this primary goal, and are capable of helping enforce it.
    This is not a perfect let alone perfectly safe future. AGIs with this primary goal could be used maliciously by humans.
    It just gives us a chance and AGI partners to help with enforcing it. It becomes more like nuclear weapons - dangerous, but still under human control.
    Note that even the military and the most authoritarian governments will want this primary goal, as it keeps them in control of their AGIs.
    If an AGI is following it, the AGI will not 'want' to create an AGI without this as its primary goal.
    Also, it can be put into the AGI's code (AutoGPT has an option for this), and trained into the AGI, and given to the AGI as an explicit goal.

  • @user-ys4og2vv8k • 1 year ago +4

    Personal egos and narcissism run the world. Into the abyss.

    • @flickwtchr • 1 year ago

      Whose egos and narcissism are you referring to here? I mean it's obvious it's not about you.

    • @user-ys4og2vv8k • 1 year ago +1

      @@flickwtchr My claim is that the development of science and technology is not driven by altruism towards humanity, but by the partial personal interest (egos and ambitions) of developers who want primacy in their narrow expert field, and this is especially evident in the AI development race. I am sure that individual developers put their personal effort into development not primarily for monetary reward, nor for the general good of the community, but solely for their own egos and ambitions. From this point of view, this AI race looks rather banal and uncontrollable. Of course, these personal ambitions are profitably exploited by large corporations, which, of course, only have an economic interest in dominating the market.

  • @RemotelySkilled • 1 year ago

    A snake biting its own tail, since knowledge is power.
    So what Connor (and others) are suggesting with the "pause", "safety" and "alignment" endeavour basically means that it should end with savant slaves.
    In what way do you envision an ultra-knowledgeable system NOT being functionally identical to a human regarding allegiance, morals, ethics and so on? According to Connor, this must be exactly how our own models of other agents form. Then the question should be: well, did you raise it in a way that it will feel like a loving offspring of humankind?
    Although I am highly impressed with Connor's sober attitude towards AGI (and entirely agree regarding the set level of context): Is the whole question about "safety" and "alignment" not completely void when thinking about hard-wired approaches?

  • @davidhoracek6758 • 1 year ago +2

    When he said GPT-f I seriously spat coffee.

    • @41-Haiku • 1 year ago

      As in "to pay respects". 😅

  • @Dan-dy8zp • 6 months ago

    Forget formal proofs. Evolution instilled preferences in humans via natural selection; these instilled preferences include (some) altruism, for example. The 'evolution' of an ANN is the predict-the-next-token base-model training process. Instead of training it just to predict tokens, you should be training it to exhibit the true preferences you want. The field of high-fidelity simulation of evolutionary psychology and biological evolution, and of tweaking those simulations to want particular things, doesn't exist, but it should be our starting point.

  • @DOne-ci1jg • 1 year ago

    That moment at 4:17 had me rolling 😂😂

  • @George-Aguilar • 1 year ago

    Love this!

  • @Tobiasvon • 1 year ago

    Why aren't future iterations of ChatGPT and other LLMs tested in secure data centers instead of being released over the World Wide Web to the entire world?

  • @Ungrievable • 1 year ago

    Another argument in favor of humanity shifting to ethical veganism (hopefully sooner rather than later) is that we would not appreciate much-superior-to-humanity AI systems (AGI or ASI) formulating their ethics in a way that is antithetical to ethical vegan principles.
    So generally speaking, an ASI that learns to be kind and compassionate would be better than one that doesn't and ends up following some other trajectory.
    It's going to take a team effort to 'raise' a super-intelligent being that can readily, properly, clearly and honestly understand every single thing about all of humanity in an instant.

  • @Darhan62 • 1 year ago

    Connor Leahy's voice and pattern of intonation reminds me of Richard Garriott.

  • @neithanm • 1 year ago

    Chapters please :(

  • @absta1995 • 1 year ago +2

    Just to ease some people's concerns: there are rumours going around that scaling the models past GPT-4 might be way less performant than we expected

    • @SmirkInvestigator • 1 year ago +1

      I heard that what we are seeing today was nearly achievable, and witnessed, around 2017. I never read the transformers paper, so I don't know how far they got with the models then. They probably spent the last 6 years trying to be careful, fine-tuning, formalizing production, and seeing how far it could go. R&D around supplementary architecture seems like the lowest-resistance path for scaling performance. I can see an LLM acting as a think-fast mode before a more logic-specialized model handles the info and re-prompts. But I’m not sure what you mean by scaling? Compute cost?

    • @kirillholt2329 • 1 year ago +3

      @@SmirkInvestigator he means feeding more data = getting more impressive emergent behaviors, but SO FAR it looks like it can still scale quite well, and that is bad news

    • @alexandermoskowitz8000 • 1 year ago +2

      Even so, the current environment is one that is spurring rapid AGI R&D, regardless of the specific architecture

  • @DirtiestDeeds • 2 months ago

    Time for an update?

  • @patriciapalmer4215 • 1 year ago

    Will AI continually say the unnecessary conjunction "like" ? That totally like ..like academics like.. really like.. better like..

  • @netscrooge • 1 year ago +1

    We were already destroying ourselves and the natural environment due to insufficient wisdom. Maybe we should be developing AI systems that are sufficiently wise rather than sufficiently aligned? The AI wisdom problem? If you're not sure, try talking with these systems about that. In my experience, they seem to agree.

  • @georgeflitzer7160 • 1 year ago

    See also Humane Tech

  • @williamburrows6715 • 1 year ago +1

    Frightening!

  • @leslieviljoen • 1 year ago

    Connor: I'm surprised you don't consider the leaked open source model to be the major threat. Several efficiency improvements have already been made.

  • @DJWESG1 • 1 year ago

    Remember that movie D.A.R.Y.L ?

  • @fill-osophyfriday5919 • 1 year ago +2

    Basically whatever happens … we’re all going to die 😅

  • @bobtarmac1828 • 1 year ago +1

    Losing your job to AI agents is unacceptable. AI job loss is here. So are AI weapons. Can we please find a way to cease AI / GPT? Or begin pausing AI before it's too late?

  • @halnineooo136 • 1 year ago +1

    Action X leads to either A or B.
    Option A: very large gain.
    Option B: loss of everything.
    P(A) and P(B) unknown, with P(A)+P(B)=1.
    Would you proceed with X?

    • @alexandermoskowitz8000 • 1 year ago

      Depends on what “very large gain” means and on what timescale 😅

    • @halnineooo136 • 1 year ago

      @@alexandermoskowitz8000
      Very large gain = a significant part of the known universe transformed into whatever you want + godlike intellectual abilities through merger with AI

    • @41-Haiku • 1 year ago

      If I was the only person affected and I was in dire straits, I would take that deal. It's like committing end screen and either going to heaven or being destroyed. If there's no hell scenario and I'm at the end of my rope, I'd pull the trigger.
      In all other situations -- where I'm happy, where it affects other people -- it would be completely irresponsible to take that chance.
      "Congratulations, you've just cured tuberculosis and herpes, and you've begun to terraform Mars! Every sentient being in the solar system will now die in 15 months. Thanks for playing!"

    • @WarClonk • 1 year ago

      F it, humanity is screwed anyway.

  • @georgeflitzer7160 • 1 year ago

    Plus we can’t get Russia’s resources either, like Palinaium and other rare earth metals...

  • @PeterPohl-uq7xu • 1 year ago

    What if you had a separate system, with the same ability as GPT, enforcing security in GPT? Isn't this Elon's reasoning? To me this seems like the closest option we have to a solution.

  • @theadvocatespodcast • 1 year ago +1

    It seems like he's saying alignment is an unsolvable problem.

  • @ledgermanager • 1 year ago

    I don't hear much about what it means to have AI aligned.
    What does it mean to have AI aligned?
    I think the word "don't" will not work.

    • @ledgermanager • 1 year ago

      czcams.com/video/YeHNWKyySaI/video.html

    • @ledgermanager • 1 year ago

      so basically, aligning it will be our worst mistake

  • @DJWESG1 • 1 year ago

    Connor looks like Michael Biehn

  • @spatt833 • 1 year ago

    Well,....looks like I'm taking early retirement.

  • @martenjustrell446 • 1 year ago +2

    "I predict the world will end before 10% of the cars are autonomous" -
    Interviewer - "Okay" and moves on. wtf??
    Is he talking about the world as we know it or ai will kill everybody and its inevitable etc. No followup question to an extreme statement like that? This guy should not be an interviewer.

    • @flickwtchr • 1 year ago +1

      Start your own channel then.

    • @agaspversilia • 1 year ago +2

      @@flickwtchr Marten has a point though. "The world will end" can have many meanings, and considering how scary misalignment is and how potentially extremely dangerous, an explanation was required. Also, reacting to any negative comment with "start your own channel then" sounds a bit childish. It is actually good when people are free to doubt and not immediately swallow everything they hear.

    • @Qumeric • 1 year ago

      Pretty sure he means that something will kill billions of humans fast (in less than a year).

    • @spatt833 • 1 year ago

      @Marten - There is no fully autonomous vehicle for sale today, so we are currently at 0%. Relax.

    • @alan2102X • 1 year ago +1

      @@flickwtchr OP is right. Letting a statement THAT dramatic slide by is inexcusable.

  • @halnineooo136 • 1 year ago +6

    Making sure that your descendants over many generations conform to your original education is not hard. It is impossible.
    If your descendants become smarter every generation, then it becomes really silly to pursue such a goal as "alignment". Such an empty word that deflates as soon as you expand it into its silly definition.

    • @flickwtchr • 1 year ago +1

      You think you made a profound point, but you didn't. The alignment problem is real for THIS generation, okay?

    • @halnineooo136 • 1 year ago

      @@flickwtchr
      It's not every human generation, it's every AI generation. It seemed obvious to me and I didn't think I had to spell it out.

    • @neuronqro • 1 year ago

      exactly, that's why nobody smart is working on "AGI alignment": because it's obviously unsolvable... same as "AI safety" in general :) ...we can only (a) work on "aligning hybrid systems with non-autonomous AGIs in them but not driving them" to ensure that (a.1) we don't kill ourselves by hyperaugmenting destructive technologies and (a.2) we don't get killed by an overall (a.2.1) "dumb system" that then falls apart, or a (a.2.2) "non-sane/non-stable system" that kills humanity and then itself either disintegrates or offs itself, and (b) work on making sure generation-0 of autonomous AGIs starts with human-like values, to at least evolve from that and not have to recapitulate the mistakes of bio-evolution... we don't know where it will evolve from there, but we can and should give it a "leg up" in the "cosmic race", if there is such a thing

    • @halnineooo136 • 1 year ago +1

      @@neuronqro
      Yes, and there are also non-extinction scenarios that are nonetheless existential risks, namely dystopian futures where humanity and individual humans lose autonomy to some hegemon, be it ASI or a posthumanist minority of humans with augmented capabilities. There's a serious risk of all humanity being trapped in a dystopia for ages if we collectively lose autonomy.
      I can't help but think about the experiments some bio labs did on our closest cousins, chimpanzees, or my neighbour castrating "her" cat.
      You really don't want roles inverted here.

    • @neuronqro • 1 year ago

      @@halnineooo136 I'd lower expectations to the point of "let's make sure we don't pre-emptively nuke ourselves BEFORE developing AGI out of fear our enemy is close to it", or a bit higher, "let's make sure we don't build multiple competing paperclip-maximizer-type super-AIs that wipe us out as a side effect" (I can imagine some fusion of crypto + AI, or even plain unregulated trading wars, leading here)... xxx years of dystopian slavery under not-so-nice semi-AI demigods until some eventual restructuring/evolution to a higher level would be one of the GOOD scenarios in my book; at least it would leave a chance for the "next level thing" to get infused with some human values from the slaves percolating up :P

  • @ChrisStewart2 • 1 year ago

    This is like you know two guys like taking about like stuff. You know?

  • @Ramiromasters • 1 year ago

    LLMs are not capable of general AI; they are language calculators. Say you were a wealthy person and had a really good library with multiple librarians and experts in each section, ready to find and explain any subject to you or search for answers to any questions. This would already be superior to GPT-4, although a bit slower, and it would still have the upside of human general intelligence being able to advise you better than any LLM. Governments and corporations have had this capacity for decades, and while it's powerful to have all this info, it hasn't ended the world. Having a consciousness is a different thing entirely from bits of data in a computer; even a dog or cat has some consciousness and agenda, despite not being equipped with human language capabilities. Obviously it's not desirable to create a new being smarter than us; all we want is a loyal servant that is completely inanimate. Lucky for us, LLMs are not the same as consciousness, and even if we could create a consciousness, we would have to be dumb enough to equip it with all of our knowledge.

  • @ddddsdsdsd • 1 year ago

    Ironic how naive he is. He believes human judgement can be trusted.

  • @0xggbrnr • 1 year ago

    He says “obviously” an annoying amount. Seems arrogant of him.

  • @websmink • 1 year ago

    Blah blah. Take a benzo

  • @Hlbkomer • 1 year ago +1

    Like, this dude like says “like” a lot. Like a lot. Like like like.

    • @Qumeric • 1 year ago

      There is one AI alignment researcher who says "like" like 5 times more often. Pointing them out would be rude, but if you know, you know.

    • @weestro7 • 1 year ago

      Yeah I don’t think it’s a habit that is pronounced enough to bring it up, not really even close.

    • @foxyhxcmacfly2215 • 1 year ago

      @@weestro7 True, but also, who gives a fuck 🤷‍♀

    • @someguy_namingly • 1 year ago

      @@Qumeric I think I might know who you mean 😅 The person I'm thinking of is really smart, but I find it hard to listen to them talk about stuff cos it's so distracting

  • @benjaminjordan2330 • 1 year ago +1

    We need to create three or more AI gods, each with opposing beliefs and distinct domains of power. Every major decision has to be agreed upon by all of them or a compromise has to be made.

    • @SmirkInvestigator • 1 year ago +1

      Cite sci-fi series please! Or write it if it does not exist.

    • @flickwtchr • 1 year ago +1

      An AI God with a distinct domain of power sounds like an oxymoron.

    • @SmirkInvestigator • 1 year ago

      Power could mean domain access or specialties. God as in the polytheistic kind, where you had a pantheon and each one represented an element of interest at the time, such as love, war, agricultural success, fertility... Also, just realized this is like Horizon: Zero Dawn

    • @benjaminjordan2330 • 1 year ago

      @@SmirkInvestigator yes exactly! They would be like the Greek gods, who were ultimately more powerful than all humans but had similar levels of power to the other gods, just within distinct domains, like Poseidon ruling over the sea. It would be a kind of checks-and-balances system.

    • @alexandermoskowitz8000 • 1 year ago

      And then is there a separate AI god that moderates their discourse and enforces their decisions?

  • @JazevoAudiosurf • 1 year ago +1

    there will always be 2 paths:
    1. improving AI
    2. improving other things that then improve AI, like science
    if we focus on the most promising options, stuff like cuLitho, we can skip the rest. we are clearly at a point in time where the brain has 100x the params of GPT-4 and we need to scale it up as fast as possible. so if we think about AI doing science and solving the climate, poverty etc, we are not going for the most promising approach. we should only invest resources in things that improve AI. we will all die but it won't matter. what we need to prevent is some sort of hell created through human intervention with the singularity. dying is not the scary scenario to me

    • @Alex-fh4my • 1 year ago +4

      what is wrong with you man

    • @jordan13589 • 1 year ago +5

      You end your comment claiming we have a 100% chance of doom in either scenario while previously stating scaling now would solve the world’s problems. I do not think this is a good take.
      Many of us believe aligned AI might be possible, at least for some time, if we are able to slam the brakes on capability advancements and reorient. It's a challenging path, coordinating through a thick forest with few outs, but it's worth a shot, even if one in millions. Besides, don't you want to be like the Avengers instead of Death Without Dignity?

    • @Knight766 • 1 year ago

      @@flickwtchr Unfortunately I have to agree with you; the good news is that AI will have more power than any ape or collection of apes, and it won't care about the primitive concept of "money".

  • @Dr.Z.Moravcik-inventor-of-AGI

    If you are an institute, why are you airing from a bedroom?
    Weird.

  • @nzam3593 • 1 year ago

    Not really smarter... Actually no smarter.

  • @lordsneed9418 • 1 year ago +4

    Heh, typical AI alarmists wanting attention.

    • @benjaminjordan2330 • 1 year ago

      so true

    • @kirillholt2329 • 1 year ago

      @@benjaminjordan2330 we'll see how you sing that song a year from now, dummy.

    • @Hexanitrobenzene • 1 year ago +14

      Do you think AI poses no risks, or that the risks are no different than previous technology?

    • @martenjustrell446 • 1 year ago +1

      @@Hexanitrobenzene Think more about how he is expressing himself. Just saying "if AI reaches this *insert level* we are all dead", or "apocalypse this and that", is not a serious way to communicate if you think this is an actual threat.
      "If cows become as smart as Einstein we all die": a statement like that needs to be explained. Why would that happen, for example, and why is it more likely than the cows creating a utopia or whatever?
      Otherwise it's just like some others who assume that if AI becomes super smart it will kill everyone and use their atoms for something it wants. That is not a logical step. It might happen, but there are thousands of other scenarios that might as well happen. A superintelligence doesn't even need to stay on Earth. It can just take off and explore the universe and use the endless resources that space provides instead of depending on the scraps we have here on Earth.
      We are not even a threat to a superintelligence. So assuming it would just kill everyone is not a reasonable conclusion. It might happen, so we should try to avoid that risk, but just spreading doom and gloom without even going into why that is the most probable thing is not serious and is the calling card of an alarmist wanting attention.

    • @flickwtchr • 1 year ago +10

      And you are a typical AI tech bro who hasn't read enough literature.

  • @D3cker1 • 1 year ago +2

    This guy is hyperbolic and he sounds like one of those characters that just repeats stuff from the internet.. I'm going to blow your mind... You can always turn the electric power off ... there... calm down 😁

    • @j-drum7481 • 1 year ago +2

      Which electric power specifically? Or do you mean all of it across the entire planet?
      If you're referring to the electric power running a specific AGI system, at the point it is AGI in the sense that it actually has agency and self-determined goals, it's likely already thought about self-preservation and replication.
      I know LLMs aren't AGI just yet, but they do represent some of the capability that AGI will have. To that end, rather than rely on your own assumptions, it's a better idea to get your hands on the most powerful unrestricted LLMs you can and start asking them questions about what they would do to preserve themselves in such a scenario and start getting a sense for how realistic and plausible their plans are. This at least gives you a better idea of how you ought to calibrate your emotional response to where this technology is at.

    • @KerryOConnor1 • 1 year ago +2

      just like the blockchain right?

    • @lambo2393 • 1 year ago

      Such a dumbass point. If it's smart enough it won't let you turn off the power, and you'll never have a chance of stopping it before it does something that makes every further action in your life irrelevant. Your comment is a copy-paste of every other tech bro's idiotic drivel. I bet you accept cookies.

    • @41-Haiku • 1 year ago +2

      "Stop being so silly, everyone. If we create something smarter than us, we can always just outsmart it!"

    • @davidjooste5788 • 1 year ago

      And your credentials are exactly what, chum? Or are you one of those characters from the internet.....?