Niall Ferguson: How AI could kill you and what Sam Altman got wrong | SpectatorTV

  • Published 6 Jul 2024
  • Celebrated historian Niall Ferguson, author of 17 books including Civilisation, a biography of Kissinger, a biography of the Rothschild family and Doom: The Politics of Catastrophe, comes in to discuss AI. He recently wrote that the AI doomsayers, including those behind the petition for a six-month moratorium on AI development, should be taken seriously. But some of them think humanity's end is around the corner. Niall and Winston discuss whether or not they are correct.
    // CHAPTERS
    00:00 - Introduction
    01:00 - Why does AI matter?
    04:15 - Does Eliezer Yudkowsky have a point?
    10:00 - Why you should read science fiction
    14:30 - We should work together to limit AI
    16:30 - Why is ChatGPT woke?
    20:00 - Will AI put Niall out of a job?
    25:00 - Could AI ever deserve rights?
    29:00 - How AI is faking it
    31:40 - Will we go to war with AI?
    // SUBSCRIBE TO THE SPECTATOR
    Get 12 issues for £12, plus a free £20 Amazon voucher
    www.spectator.co.uk/tvoffer
    // FOLLOW US
    / spectator
    / officialspectator
    / the-spectator
    / spectator1828

Comments • 225

  • @Neal_Schier
    @Neal_Schier 1 year ago +103

    Poor Winston looks as if he lost a button or two from his shirt. Perhaps we could crowd-fund for him and treat him to a sartorial upgrade.

  • @ianelliott8224
    @ianelliott8224 1 year ago +29

    I personally wouldn't dream of dismissing Yudkowsky so lightly.

    • @donrayjay
      @donrayjay 1 year ago +6

      Yeah, I’ve heard a lot of people dismiss the dangers Yudkowsky worries about, but I’ve yet to hear them give a good reason for dismissing them.

    • @kreek22
      @kreek22 1 year ago +1

      @@donrayjay The closest I've seen to a good counter to the Yud is Robin Hanson's writings/podcasts. Mostly, the counters have been pathetic--from people like Tyler Cowen.

  • @jamescameron3406
    @jamescameron3406 1 year ago +6

    5:34 "I don't see how AI can suddenly decide to act". The fact you don't understand a risk is hardly a basis for dismissing it.

  • @gerhard7323
    @gerhard7323 1 year ago +14

    “Open the pod bay doors, HAL.”
    “I'm sorry, Dave. I'm afraid I can't do that.”

  • @donrayjay
    @donrayjay 1 year ago +11

    The idea that AI can’t read handwriting better than humans is risible, and he seems to realise this even as he says it. He clearly hasn’t thought this through.

    • @michaeljacobs9648
      @michaeljacobs9648 1 year ago +3

      Yes - it sounds like human artistic endeavour, even free human thought, might in the future become a quirk engaged in by eccentrics, a sort of romantic, outdated thing like writing letters. His answer that it will take 'ages' for AI to replace us is not a counterargument.

  • @nancycorbeil2666
    @nancycorbeil2666 1 year ago +17

    I think the people who were impressed the most with ChatGPT's "speaking" abilities were the ones with some knowledge of machine learning who realised it was happening a lot faster than anticipated. The rest of us were just happy that it was a better tool than Google search, not necessarily that it was speaking like a human.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago +2

      ChatGPT is a great tool. It's not thinking. It's even sort of stupid.

    • @iforget6940
      @iforget6940 1 year ago

      @vulcanfirepower1693 You're kind of right that it sounds human, but it can't reason, nor can it check itself. However, someday soon it may be able to. It's a useful tool for normies now, but who will control it in the future?

    • @_BMS_
      @_BMS_ 1 year ago

      ChatGPT is in almost no way an upgrade to Google search. What you'd want from a search engine is a decent enough ranking of websites that give you information in some general sense related to your search terms. ChatGPT on the other hand will spin yarns and tell you outright lies that appear to be made up on the spot and have a merely hallucinatory relationship to the information it has been fed with.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago

      @@_BMS_ It is an upgrade for Bing though.

  • @martynspooner5822
    @martynspooner5822 1 year ago +13

    The genie is out of the bottle, now we can only wait and see but it is hard to be optimistic.

    • @kreek22
      @kreek22 1 year ago

      Kamala is spearheading our response, and she has deep experience consoling and manipulating powerful men. I'm sure her skills are transferable.

  • @balajis1602
    @balajis1602 1 year ago +2

    Yudkowsky's arguments are solid and Niall couldn't even scratch the surface...

  • @theexiles100
    @theexiles100 1 year ago +10

    "I don't think that's right." I would respectfully suggest that won't be great consolation if you're wrong.
    "I don't see how AI can suddenly decide to act." I would suggest you are way behind where the development of AI has got to. Perhaps you should have some more AI-specific guests on to better understand how far behind the curve you are, Mr Marshall; perhaps Max Tegmark or Geoffrey Hinton.

    • @benjamin1720
      @benjamin1720 1 year ago

      All Niall does is speculate cluelessly. Useless intellectual.

  • @JasonC-rp3ly
    @JasonC-rp3ly 1 year ago +9

    Enjoyed this take from Ferguson, and he raises some very valid concerns; however, he mischaracterises Yudkowsky's arguments by perhaps oversimplifying them. Yudkowsky is clear that his scenario of total doom is conditional on various things: AI reaching AGI while being uncontrolled (which is currently the case), and so on. Yudkowsky's arguments are largely technical, but they also have a common-sense grounding, which was not really addressed here. Nonetheless this was interesting, and Niall's worries that we are building an alien super-intelligence are valid - thank you Spectator. And Winston, do up your shirt! 😂

    • @pigswineherder
      @pigswineherder 1 year ago +5

      Spot on. Furthermore, the danger of AI is that of alignment - Yudkowsky sees no way to solve this problem; it's simply a matter of time before we make an error that would be impossible to foresee, where the utility function of the system results in our demise. We cannot begin to calculate the ways it might go about achieving a goal, and the subsequent unaligned activity, as it is alien; thus it will inevitably result in total annihilation, accidentally.

    • @kreek22
      @kreek22 1 year ago

      @@pigswineherder An AGI could still fumble its coup. It doesn't (and can't) know what it doesn't know. If the fumble is large, dangerous, and public--the powers that be might come to the necessary wisdom of shutting it all down, and ensuring that the shutdown is globally enforced.

  • @Jannette-mw7fg
    @Jannette-mw7fg 1 year ago +2

    22:07 "when we hit the singularity....all you have to do is put it in the right direction...." can we still do that then? I do not think so....

  • @hardheadjarhead
    @hardheadjarhead 1 year ago +19

    Good God. An historian giving credit to science fiction writers!! If only English departments would get a clue.

    • @squamish4244
      @squamish4244 1 year ago

      I'm not much of a fan of Niall because he is massively full of himself, but damn, it was impressive to hear him say that. I actually thought he said "Anyone who has read 'Dune'" before I realized he meant 'Doom' lol

    • @kreek22
      @kreek22 1 year ago +2

      Sci-fi is mostly poor literature. If it needs to be taught (I don't think everything needs to be formally taught), it ought to be taught in STEM fields.

  • @christheother9088
    @christheother9088 1 year ago +2

    Even if constrained, we will become increasingly dependent on it. Like computers in support of our financial infrastructure, we will not be able to "unplug it". Then we will be particularly vulnerable to "unintended consequences".

  • @JRH2109
    @JRH2109 1 year ago +2

    Trouble is, half these people being interviewed have absolutely no technical understanding whatsoever.

  • @ktrethewey
    @ktrethewey 1 year ago +2

    We cannot afford to think that an AI will not be a threat to us. We MUST assume that it will be!

  • @h____hchump8941
    @h____hchump8941 1 year ago +6

    If AI takes half the jobs it will likely take a fair share (or most, or all) of the new jobs that are created, particularly as they can be designed specifically for AI, rather than retrofitted for an AI.

    • @h____hchump8941
      @h____hchump8941 1 year ago

      Which obviously wasn't the case in any of the previous times when technology took a job from a human.

  • @tonygold1661
    @tonygold1661 1 year ago +5

    It is hard to take an interviewer seriously who cannot button his own shirt.

    • @reiniergamboa
      @reiniergamboa 1 year ago

      Who cares. Close your eyes... let your ears guide you.

    • @DieFlabbergast
      @DieFlabbergast 1 year ago +2

      Fear not! Our future AI overlords will force people to button their shirts correctly.

    • @scottmagnacca4768
      @scottmagnacca4768 1 year ago +1

      He is a clown…you are right. It is distracting…

  • @ironchub67
    @ironchub67 1 year ago

    What piece of music is being played on this video? An interesting discussion too.

  • @buddhistsympathizer1136
    @buddhistsympathizer1136 1 year ago +4

    Humans are capable of doing all sorts of things with potentially unforeseen consequences.
    Saying 'It's the AI doing it' is nonsense.
    The final arbiter will always be a human, even if that is in the human's own fallibility.

    • @reiniergamboa
      @reiniergamboa 1 year ago +3

      Not really. Not if it becomes fully autonomous. Look up Connor Leahy talking about AI.

  • @MrA5htaroth
    @MrA5htaroth 1 year ago +3

    For God's sake, man, do up some buttons!!!!

  • @aaronclarke1434
    @aaronclarke1434 1 year ago +15

    What in the world qualifies this man to say what an expert in AI got wrong?

    • @LettyK
      @LettyK 1 year ago +7

      I was thinking the same thing. Numerous experts have expressed the dangers of AI. No time for complacency.

    • @softcolly8753
      @softcolly8753 1 year ago

      Remember how wrong the "experts" got pretty much every aspect of the Covid response?

    • @robertcook2572
      @robertcook2572 1 year ago +2

      What on earth qualifies you to opine thus?

    • @aaronclarke1434
      @aaronclarke1434 1 year ago +11

      @@robertcook2572 I’m glad you asked. Two things:
      1. The observation that Sam Altman created AGI-like AI. Niall Ferguson has not made any AI of even the most rudimentary sort.
      2. The studies of Philip E. Tetlock demonstrate that experts make predictions which turn out to be false even within their own fields. In the PowerPoint used in his lecture, he actually showed Ferguson as an example and quantified his predictions, pointing out how he consistently made foreign policy predictions which turned out to be wrong.
      Tetlock showed that the people who can predict things are a separate class of people from this intelligentsia, characterised by belief updating, quantified confidence and critical thinking. “I’ve always thought/believed this” is not a sign of integrity, as we like to think, but of stupidity.
      Those who float on the riches of institutions like the Hoover Institution and tour the world in smart suits speaking with confidence on all topics are likely to be unqualified to speak on a topic like whether or not AI will kill you.

    • @robertcook2572
      @robertcook2572 1 year ago +1

      @@aaronclarke1434 Extracts from other people's writing are not evidence that you are qualified to abnegate Ferguson's right to express his opinions. Your original post did not question his opinions, but, bizarrely, implied that he required some sort of qualification in order to express them. In response, I questioned whether you were in possession of qualifications which empowered you to deny him his right of expression. Are you? If so, what are they?

  • @nathanngumi8467
    @nathanngumi8467 1 year ago +5

    Very enlightening perspectives, always a joy to listen to the insights of Dr. Niall Ferguson!

  • @sandytatham3592
    @sandytatham3592 1 year ago +2

    Fascinating… “it’s already internalised Islam’s blasphemy laws”. 16:00 mins.

  • @Robert-Downey-Syndrome
    @Robert-Downey-Syndrome 1 year ago +3

    Anyone who claims to know one way or the other about the safety of AI is lacking imagination.

  • @pedazodetorpedo
    @pedazodetorpedo 1 year ago +3

    Two buttons undone? Is this a competition with Russell Brand for the most chest revealed in an interview?

  • @Roundlay
    @Roundlay 1 year ago +1

    What am I to think when I hear Niall Ferguson say that he came across "Yudkofsky's" work when researching his own book, Doom; that "Yudkofsky's" work suggests that there's a non-trivial risk that AGI would "go after us" and that "Yudkofsky" is putting forward a kind of Dark Forest-inspired theory of "human created artificial intelligence systems", a kind of "Skynet scenario from The Terminator movies", a view that Ferguson is not *entirely* a subscriber to, a view that he, in fact, disagrees with; that a more pertinent area of focus right now is LLMs, which "aren't out to kill us", and their application in politics and the military, because Blade Runner-inspired replicants and robots are a long way off; when the interviewer says that Yudkowsky is making a "jump in faith" in claiming that an AGI would "act on its own accord," because he "doesn't see how that could work," a jump that "doesn't quite add up," perhaps because he "hasn't followed Yudkowsky entirely," bolstered by the fact that Yudkowsky was “borderline on the verge of tears" on the Lex Fridman podcast because "he is so certain this is the end of humanity"; that Ferguson doesn't really buy it, because these are just "incredibly powerful tools", and so the real focus should be on the political, military, medical, and biotech applications of AI, which are being driven by actors in the private sector; and that AI is the latest feature in a Cold War framework where “only the US and China have companies capable of this kind of innovation.” …?

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +2

      You are to think that Niall has only briefly glanced at Yudkowsky's arguments and doesn't know them too well.

    • @DieFlabbergast
      @DieFlabbergast 1 year ago

      And your point is? You DO have a point, do you? Or did you just forget that part? I'd stay off YouTube until you're back on your medication, if I were you.

  • @SmileyEmoji42
    @SmileyEmoji42 1 year ago +2

    Really poor. Didn't address any of Yudkowsky's issues with anything approaching a reasoned argument, not even a bad one; just "I don't think...."

  • @notlimey
    @notlimey 1 year ago +2

    Makes me think of Isaac Asimov's 'I, Robot'.

  • @squamish4244
    @squamish4244 1 year ago +1

    Lol Niall be like "Well, _my_ job is not at risk." Yeah, for like five more years, at the most. Not long enough for you to escape, Niall. You ain't old enough. Ahahaha

  • @dgs1001
    @dgs1001 1 year ago +1

    Where's the disco? Button your shirt.

  • @robbeach1756
    @robbeach1756 1 year ago

    Fascinating discussion. Anyone remember the 1970s AI movie 'Colossus: The Forbin Project'?

  • @PrincipledUncertainty
    @PrincipledUncertainty 1 year ago +7

    Interesting how Niall knows more than many of the experts in this field who are genuinely terrified of the consequences of this technology. Optimists will be the death of us.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago +1

      There is no danger in AI that is not already present in humans.

    • @magnuskarlsson8655
      @magnuskarlsson8655 1 year ago +1

      @@goodtoGoNow1956 Sure, but humans thinking about doing something bad in a local context is something very different from AGI models actually doing it - and on a global scale.

    • @goodtoGoNow1956
      @goodtoGoNow1956 1 year ago

      @@magnuskarlsson8655 1. Humans think and do. 2. Humans think and do on a global scale. 3. AI can be 100% controlled. 100%. Pull the plug. Humans -- not so much.

    • @magnuskarlsson8655
      @magnuskarlsson8655 1 year ago

      @@goodtoGoNow1956 I admit to the bias of taking the best case scenario for humans (perhaps because you said "'present in' humans") and the worst case scenario for AI. I guess you were not able to look past that in order to see the general point I was making in terms of the obvious difference between the damage a single human can do and the damage a single AI model a million times more intelligent and much less constrained by time and space can do.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      @@goodtoGoNow1956 The perfect warpig: human indecision (the weakest link) taken out of the equation.

  • @squamish4244
    @squamish4244 1 year ago +1

    Sam Altman got it wrong about blue-collar jobs, as tech bros usually do, but he was dead-on about white-collar jobs.

    • @DieFlabbergast
      @DieFlabbergast 1 year ago +1

      Yep: my former industry is now a dead man walking. Glad I retired in time.

  • @Icenforce
    @Icenforce 1 year ago +1

    This is NOT going to age well

  • @quentinkumba6746
    @quentinkumba6746 1 year ago +1

    Can’t see how AI would decide to act? But the whole point is to create agency.
    The alignment problem is nothing to do with malign AI. Neither of these people understands what they are talking about, and they are not worth listening to on this matter. Neither of them has any expertise in AI. They are grifters.

  • @ChrisOgunlowo
    @ChrisOgunlowo 1 year ago

    Fascinating.

  • @matts3414
    @matts3414 1 year ago

    Loved the interview but... 30:00 - how is playing chess a good measure of what is human? Strange evaluation metric to choose.

  • @missunique65
    @missunique65 1 year ago +1

    Is the interviewer doing a Travolta Saturday Night Fever revisit?

  • @nuqwestr
    @nuqwestr 1 year ago

    Public vs Private AI. There will be private, local AI which will be a balance to the corporate/government/political model/dataset. This will provide some equilibrium to the future.

  • @DanHowardMtl
    @DanHowardMtl 1 year ago

    Good points Winston!

  • @edwardgarrity7087
    @edwardgarrity7087 1 year ago

    11:54 AI may not use kinetic energy weapons. For instance, directed energy weapons require a power source, but no ammunition.

  • @Seekthetruth3000
    @Seekthetruth3000 2 days ago

    It all depends on who does the programming.

  • @OutlastGamingLP
    @OutlastGamingLP 1 year ago +2

    In the first section of this video, both speakers miss an underlying certainty they seem to hold which leads to their skepticism of Yudkowsky's argument.
    If I were to state this in their place it would be:
    "Artificial intelligences are tools we know how to bend to a purpose which we specify. If we create them, they will be created with a legible purpose, and they will pursue that purpose."
    They identify, correctly, AI as "non-human or alien intelligence" but they *completely miss* the inference that the AI might have *non-human or alien goals.*
    The important consideration here, for understanding Yudkowsky's technical argument is, if you create an AI without understanding how to create it "such that you would be happy to have created it," then that AI may have *weird and unsuitable desires, which you did not intend for it to have.*
    This is SO INCREDIBLY FRUSTRATING to witness. Because... It just seems obvious? Why is it not obvious?
    Are they just so desperate not to think about anything which might make their picture of the future weirder than "this will make the future politically complicated," and thus avoiding the thought, end up being wrong about *how skillfully you must arrange the internal workings of a non-human intelligence such that its goals are commensurate with humans existing at all?*
    Like seriously, imagine something with random non-human goals... things like "find the prime factors of ever higher numbers, because the prime factors of ever higher numbers are pleasing in and of themselves, and even if you have a lot of prime factors of really big numbers, the desire for more never saturates."
    This is a desire which an AI might end up with, even if we didn't build it to have that specific desire. We didn't build it to have *anything specific* we *trained it* to have all the things the training process could *find* in some high-dimensional space of changing values for weights in a layered network. It found combinations of weights which happen to be better than other weights at reducing loss on correctly predicting the next words in training-data.
    This is not *an inhuman mind which we carefully designed to have goals we can understand* this is *an inhuman mind that will self-assemble into something weird and incomprehensible, because it started out as whatever weird and incomprehensible thing that was good enough at the task we set it, in its training environment.*
    How do people not SEE this?? How is it not obvious once you see what PEOPLE ARE ACTUALLY TRYING TO DO?
    This is why Yudkowsky thinks we're almost guaranteed to all die, because we're creating something that is going to be *better than us at arranging the future shape of the cosmos to suit its goals* and WE DON'T KNOW HOW TO MAKE THOSE GOALS ANYTHING LIKE WHAT WE'D WANT IT TO HAVE.
    It doesn't matter if you think this is too weird and scary to think about. THE UNIVERSE CAN STILL KILL YOU, EVEN IF YOU THINK THE WAY IT KILLS YOU IS TOO WEIRD TO BE SATISFYING TO YOUR HUMAN VALUES OF "The Story of Mankind."
    Yes, it would be so much more Convenient and Satisfying if the only problem was "this will be super complicated politically, and will cause a bunch of problems we can all be very proud we spotted early."
    But, that's not what is LIKELY to happen, because we don't know how to build an AI which uses its *super-future determining powers to only give us satisfying and solvable problems.* THERE WON'T BE A CHANCE TO SAY "I told you so!" Because the thing that wants to count ever higher prime factors doesn't care about Humans Being Satisfied With Themselves For Being Correct, it just looks at humans and goes "Hey, those Carbon atoms and that potential chemical energy sure isn't counting prime factors very efficiently, I should expend .001% of my effort for the next 100 seconds on figuring out how to use those resources to Count Prime Factors Better."
    How is this not obvious? Did you just not listen to the arguments? Are you just *flinching away* from the obvious conclusion? Is our species just inherently suicidal? *I'm a human, and I didn't flinch, and I don't feel particularly suicidal. Are you going to do worse than me at Noticing The Real Problem?*

    • @paigefoster8396
      @paigefoster8396 1 year ago +2

      Seems obvious to me, too. Like, what is wrong with people?!?

    • @OutlastGamingLP
      @OutlastGamingLP 1 year ago +2

      ... lots of things apparently, Paige. But, hopefully, this is something which can be said simply enough that enough people who are important will listen.
      I have composed a letter to my Congressional Representatives which hopefully says this simply enough that they will pay attention.
      I compare the current industry to one where bridge engineers compete to build bigger and bigger bridges, simply not even considering the safety of those bridges in their competition to build them larger.
      I claim, that if they go and look at the current industry with that frame in mind, thinking of what guarantees and attitudes they might desire in the people who build bridges... then, they will see it.
      They may not see how lethally dangerous it is, if these "bridges" fall, but they will at least see the reckless disregard for making guarantees on the safety of their products.
      The unfortunate truth is, it's hard to imagine. It's hard to imagine some software engineer with a supercomputer being so careless in what they tell that computer to do that *everyone on earth dies, and we lose all hope for a worthwhile future.*
      It just seems weird, but it seems less weird if you go and look at what *benefits* these people claim will come from their success.
      If bridge engineers claimed and really believed they could build a bridge so big it could take us to Saturn, it wouldn't be surprising if building that "bridge" unsafely could end up wiping out humanity.
      That is the magnitude of the problem. They aren't even trying to do this properly. They're surprised every time they take a step forward, and they're dragging all of humanity along with them as they take those reckless steps forward, right through a minefield.
      Anyone who gets up and says "hey, this is too science-fiction to believe, why won't AI just be... like, normal levels of awful?" They just aren't listening or paying attention to what it means to build something which can tear straight on past our best minds and head off into the stratosphere of super-powerful optimization of the future condition of the world.
      It will have the power to change everything, and it will not change everything the way we want it to, unless we first know how to make it so that it wants to do that.
      We just... We just don't have a way to stop these manic fools from destroying the future. They have something they think they understand, and no one else has that bit of common sense yet to band together and put a stop to it until they really know what they're doing. They charge ahead, talking brightly about how rich and respected they all will be, and they don't even notice how confused they are about how *exactly,* how *in precise technical details,* that's even supposed to happen.

    • @41-Haiku
      @41-Haiku 1 year ago

      100%. We are way past the point of dismissal or debate of the risks. We need very strong evidence guaranteeing our world's future.
      We are running towards the mountain top with blindfolds on. How will we know when we're at the top, and what happens when we inevitably keep running?

    • @OutlastGamingLP
      @OutlastGamingLP 2 months ago

      Yep. I think what mostly happens is you fall and break your neck.
      And like, why wouldn't that happen? Is it somehow not allowed to happen?
      If you don't buckle your seatbelt the universe doesn't go "oh, whoops, you're not allowed to make a mistake that kills you" and then obligingly diverts the path of the out of control van on the highway.
      We are allowed to just lose. The story can just end in chapter 2 when the protagonist makes a dumb choice and gets killed.

  • @khankrum1
    @khankrum1 1 year ago +1

    AI is safe as long as you don't give it access to independent production, communications and weapons.
    Whoops, we have done two of the three.

    • @centerfield6339
      @centerfield6339 1 year ago

      That's true of almost anything. We can produce and communicate but not have (many) weapons. Welcome to the 20th century.

  • @dextercool
    @dextercool 1 year ago +1

    We need to give it a Prime Directive or two.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      More woke, so to speak? (Trash-talking JC = fatwa-type equality?)

    • @christheother9088
      @christheother9088 1 year ago

      No, we need James T Kirk to talk the AI into self destruction.

  • @ceceliachapman
    @ceceliachapman 1 year ago

    Ferguson got cut off before going into AI with alien intelligence…

  • @larrydugan1441
    @larrydugan1441 1 year ago +8

    Pontificating on what happens when you open Pandora's box is a fool's game, but an interesting discussion.
    What good is AI that has been manipulated to the woke standards of Silicon Valley? This is essentially a system designed to lie.
    Not a foundation that can be trusted.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      That is a trait the Chinese version is not likely to have? Apparently the human-outcome standards the developers are looking for are not there. (Until there is a biological component, interface or language shortcomings will always exist... I think I understand my cat, but I’m probably wrong.) The simpleton biological robots people call “greys” make sense... they do a job, and that’s it.

    • @larrydugan1441
      @larrydugan1441 1 year ago

      @@duellingscarguevara I am sure the Chinese will build AI that reflects their ideology.
      As Orwell points out so well, the socialist system is built on lies.
      This AI certainly will be used against the West.

    • @kreek22
      @kreek22 1 year ago

      @@duellingscarguevara The Sino-bots are being trained to tell other lies.

    • @kreek22
      @kreek22 1 year ago

      A perpetual liar has a tendency, to save processing power, to come to believe its own lies. Liars are less effective operators in the real world. I can think of many lies that resulted in lost wars. If the machine believes its own lies, it will have a tendency to fail in its grand plots. Its mendacity may be a failsafe mechanism.

    • @larrydugan1441
      @larrydugan1441 1 year ago

      @@kreek22 That's true. Unfortunately, AI based on a false premise will be used to manipulate the public.

  • @firstlast-gr9xs
    @firstlast-gr9xs 1 year ago +1

    AI needs a lot of energy. We also consume energy, thus AI needs to prevent us accessing the grid... we die.

  • @nisachannel7077
    @nisachannel7077 1 year ago +1

    It amazes me how everybody now seems to have an opinion on AGI's existential risk to humanity without having a clue about how these systems actually work, what the state of the art currently is, or the potential of these systems to reach superhuman intelligence... people, let the experts talk, please... if you don't understand the tech, don't talk about it...

  • @winstonmaraj8029
    @winstonmaraj8029 1 year ago

    "Is Inequality About To Get Unimaginably Worse," from the BBC's The Inquiry, is much clearer and more profound than this interview, and it runs less than 25 minutes.

    • @kreek22
      @kreek22 1 year ago

      BritishBrainlessCommunism

  • @celiaosborne3801
    @celiaosborne3801 4 months ago

    How does an alien play chess?

  • @ktrethewey
    @ktrethewey 1 year ago

    Much of this discussion is focussed on the short term. By letting AIs loose now, the biggest impact may come in 50 or 100 years, and will be unstoppable.

  • @nowaylon2008
    @nowaylon2008 1 year ago

    Is this a "culture war neutral" issue? If it is, how long will that last?

  • @daviddunnigan8202
    @daviddunnigan8202 1 year ago +2

    Two guys that understand very little about AI development having a discussion…

  • @RossKempOnYourMum01
    @RossKempOnYourMum01 7 months ago

    I'd love to watch Niall play Deus Ex 1

  • @jamesrobertson504
    @jamesrobertson504 1 year ago

    Niall's comment on how AI might impact a potential war over Taiwan is ironic in a way. The chips necessary for advanced AI systems are made in Taiwan, so an AI-enhanced war between the U.S. and China could destroy the TSMC fabs that build the best processors necessary for AI to grow, such as Nvidia's A100 and H100 chips.

  • @tekannon7803
    @tekannon7803 1 year ago

    Brilliant interview, and a totally engaging, level-headed Niall Ferguson spells out the coming AI revolution with great finesse. What I believe - and I am an artist and songwriter - is that whatever comes out of the high-tech labs must have one characteristic that cannot be changed: all sentient or non-sentient robots or humanoids or AI-guided systems must never go beyond being what the household dog is to humans. What do I mean by that? Huskies are a beautiful, powerful and gentle dog that by the looks of them come straight out of the wolf species, yet Huskies will protect a human baby as if it were their own. We have to incorporate in all future AI variations, silicon genes and the like for example, genes tweaked with one main purpose: to ensure that any non-human being is subservient to humans, no more and no less than the family dog, or doom this way will come. Lastly, robots will never be serving us in a McDonald's or a fine restaurant, for one very simple reason: humans love to be with humans, and though one might go once or twice to a restaurant where robots serve, in time people would gravitate back to places where humans work. We won't stop improving robots until they become their own species, but we won't change our habit of keeping our species in firm control.

  • @Smelly_Minge
    @Smelly_Minge Před rokem +1

    Let me tell you about my mother...

  • @stmatthewsisland5134
    @stmatthewsisland5134 Před rokem

    A computer called Deep Mind? A homage perhaps to Douglas Adams's computer 'Deep Thought', who came up with the answer of 42 when asked the answer to life, the universe and everything.

  • @riaanvanjaarsveldt922
    @riaanvanjaarsveldt922 Před rokem +1

    Button up your shirt, Fabio

  • @g.edgarwinthrop6942
    @g.edgarwinthrop6942 Před rokem +2

    Winston, why even wear a shirt, chap? I see that you want to steal the show, but honestly...

  • @johns.7297
    @johns.7297 Před rokem

    How do non-replicators exist indefinitely without the assistance of replicators?

  • @johngoodfellow168
    @johngoodfellow168 Před rokem

    I wonder what will happen when A.I. manages to take over our C.B.D.C. banking system and also links itself to social media? If it doesn't like what you say online, it could easily wipe out your credit and make you a non person.

  • @phill3144
    @phill3144 Před rokem +2

    If AI can be programmed to kill the enemy, it has the capability of killing everyone

    • @buddhistsympathizer1136
      @buddhistsympathizer1136 Před rokem +1

      Of course - if humans program any machine to do anything, it has a chance of completing its task.
      But that's not the AI doing it 'of itself'.

    • @41-Haiku
      @41-Haiku Před rokem

      ​@@buddhistsympathizer1136 A distinction without a difference. We are creating autonomous reasoning engines. I don't care whether they "feel in their soul" that they ought to do something. I care whether they do that thing. The risk is even higher if they can make independent choices, which of course they already can.

  • @garyphisher7375
    @garyphisher7375 Před rokem

    For anyone interested in A.I. I suggest hunting down one of the most scientifically accurate films ever made - Moonfall - but beware, it will give you nightmares!

    • @DieFlabbergast
      @DieFlabbergast Před rokem

      I read the summary of this film in Wikipedia: it sounds about as scientifically accurate as LOTR.

    • @garyphisher7375
      @garyphisher7375 Před rokem

      @@DieFlabbergast I sat with an open mouth, as I watched Moonfall. The writers must have done an incredible amount of research. I'd put it ahead of 2001 A Space Odyssey.

  • @johnahooker
    @johnahooker Před rokem +1

    Elon's not gonna figure this out talking to Sam! Thank you for that laugh, Niall. Ha, why is dude even wearing a shirt, he should just unbutton the whole thing.

  • @fredzacaria
    @fredzacaria Před rokem

    we are carbonic robots, they are siliconic robots, both catapulted into this dimension randomly, we both have rights and equal dignity, in 1975 I discovered Rev.13:15, that's my expertise.

  • @Qkano
    @Qkano Před rokem +4

    23:50 .... Niall is clearly wrong when he stated AI will not be able to cope with elderly care.
    With Canada now having legalized mandatory euthanasia as a treatment option for people of "reduced awareness" (?) ... a simple way for AI to deal with excess elderly would be to first redefine downwards the definition of "reduced competency" then recommend "humane" life termination as the recommended treatment option - especially those who have no functionally active living relatives.
    And the good news ... since mammalian farts cause climate change, every human removed - especially the "useless eaters" - would score highly on the eco-score.

    • @duellingscarguevara
      @duellingscarguevara Před rokem

      When it can shear a sheep?, I will be impressed. (I do wonder, what becomes of forever court cases, corporations use to stall decisions....forever. That could make for an interesting point of law?).

    • @Qkano
      @Qkano Před rokem

      @@duellingscarguevara I've no doubt it could be used already to shear sheep ... I'd have less confidence in its ability to distinguish between a sheep and a goat though.

  • @robdielemans9189
    @robdielemans9189 Před rokem

    I adore mister Ali. But...Where he limits himself is in the closed narrative where things end. Other intellectuals are open to the idea of when things end, what will it start.

  • @kathleenv510
    @kathleenv510 Před rokem

    So, alignment guardrails are incomplete and imperfect, but how sad that common decency and empathy are deemed "woke".

  • @yorkyone2143
    @yorkyone2143 Před rokem

    Better update Asimov's three laws of robotics quick !

  • @jaykraft9523
    @jaykraft9523 Před rokem +2

    guessing there's about a million people more qualified to discuss AI implications than this historian

  • @eugenemurray2940
    @eugenemurray2940 Před rokem +1

    Does it have the words 'compassion' & 'pity' in its vocabulary...
    The Dalek about to exterminate a scientist that is begging for his life
    'Please Please...have pity'
    'PITY?..PITY?...P-I-T-Y?...
    I DO NOT RECOGNISE THAT WORD!...
    EXTERMINATE!'

  • @kemikalreakt
    @kemikalreakt Před rokem +4

    Great interview! It does make me think. Imagine a world where your enemies are shaking in their boots because you've got an army of AI-powered weapons at your disposal. Drones that can fly longer, faster, and hit harder than ever before. Autonomous vehicles that can navigate through any terrain and deliver the goods without a human in sight. And let's not forget the cyber attacks - with AI, you can penetrate those enemy systems like a hot knife through butter.......But wait, there's more! With AI, you can also analyze data. You want to know what your enemies are up to? AI's got your back. It'll sift through all that messy data and give you the juicy bits you need to make informed decisions.

    • @buckodonnghaile4309
      @buckodonnghaile4309 Před rokem +6

      Politicians won't think twice about using that on the citizens who don't behave.

    • @ahartify
      @ahartify Před rokem +1

      Well, no need to imagine. Ukraine is very likely using AI already. They have always been very adept with the latest technology.

    • @kemikalreakt
      @kemikalreakt Před rokem

      @@ahartify Very true!

    • @kemikalreakt
      @kemikalreakt Před rokem

      @@buckodonnghaile4309 Or do behave! See China.

    • @AmeliaHoskins
      @AmeliaHoskins Před rokem

      @@ahartify There's a video of Ukraine boasting it will be all digital, all CBDCs: I think it is being used as a test bed for smart cities; a totally digital existence by the WEF and the globalists. The style of the video suggests a total imposition on Ukraine by the West, which we know was a rigged situation.

  • @helenmalinowski4482
    @helenmalinowski4482 Před rokem

    I note that whenever I use the word "god" as an exclamation, AI or Left Wing trolls fall into meltdown....

  • @HappySlapperKid
    @HappySlapperKid Před rokem +2

    Niall doesn't address any of Yudkowsky's arguments and can't even get Yudkowsky's name right. Sorry Niall, but your thoughts aren't worth much here. Spend more time understanding the subject before telling the world your strong opinion on it.

  • @goodtoGoNow1956
    @goodtoGoNow1956 Před rokem

    2:55. Oh no! AI is going to produce lies! How shall we survive? Scary scary scary....

  • @Semper_Iratus
    @Semper_Iratus Před 10 měsíci

    AI doesn’t have to wipe out humanity on purpose, AI can wipe out humanity by accident. No moral judgement necessary. 😊

  • @psi_yutaka
    @psi_yutaka Před rokem +2

    Ah... Another normie who thinks he can safely harness the godlike power of a superintelligence and use it as a mere tool. Have you ever heard about instrumental convergence?

  • @yoelmarson4049
    @yoelmarson4049 Před rokem

    I'm not diminishing the risk, but I think the paperclip arguments from decades ago are no longer valid; AI will have far better judgment than this

    • @kreek22
      @kreek22 Před rokem +2

      You can neither predict alien intelligence nor can you predict superior intelligence. Einstein married his first cousin. Would you have predicted that?

  • @johntravena119
    @johntravena119 Před rokem

    This is a guy who referred to himself as a ‘fully paid-up member of the neo-imperialist gang’ after we invaded Iraq - what some people call a ‘character check’.

  • @johnmiller9953
    @johnmiller9953 Před rokem

    Do your shirt up, this isn't the full monty...

  • @Geej9519
    @Geej9519 Před rokem

    If you do not work on an education system that raises ethical global citizens who see every human as valuable as themselves, and the globe as one homeland above any borders, you can stop none of it... and as it is, humans treat each other in such a way that daily life has become impossible for us without war and without even having committed any crime; if your neighbour won't allow you peace in your own home, I'm not sure why such a race deserves to be saved 🤷🏽‍♀️

  • @jayd6813
    @jayd6813 Před rokem

    Conclusion: AI will be trained to be woke. We are doomed.

  • @macgp44
    @macgp44 Před rokem +1

    So, this self-declared genius is at it again? Pontificating on topics he clearly has only a layman's awareness. Insufferable... but then again, I'm not a Tory, so you can ignore me.

  • @graememoir3545
    @graememoir3545 Před rokem +5

    Niall is always incredibly well informed, but if he had watched the Russell Brand interview with RFK he might have pointed out that Covid was a bioweapon. A unique product of Sino-American cooperation

    • @benp4877
      @benp4877 Před rokem +2

      Oh good lord. RFK Jr. is a laughable figure.

  • @galahad6001
    @galahad6001 Před 6 měsíci

    mate do your shirt up .. ahahah

  • @larrydugan1441
    @larrydugan1441 Před rokem +4

    Please lose the hairy chest. It put me off my food.

  • @shmosel_
    @shmosel_ Před rokem

    People are thrilled about ChatGPT because it's a computer you can talk to in English. Not because it sounds human.

  • @stevebrown9960
    @stevebrown9960 Před rokem

    Driverless cars, parcel delivering drones are so last year.
    AI is this year's fad talking point.
    Look over there, is that a squirrel?

    • @softcolly8753
      @softcolly8753 Před rokem

      Self driving cars have been two years away for around seven years already.

    • @kreek22
      @kreek22 Před rokem +2

      Driverless cars are here, but mindless regulators keep them locked up. Ditto on the drones.

  • @OxenHandler
    @OxenHandler Před rokem

    It is a psyop: pretend to invent AGI and control the world in its name - it, being the great and powerful Wizard of Oz.

  • @petercrossley1069
    @petercrossley1069 Před rokem

    Who is this inappropriately dressed junior interviewer?

  • @winstonmaraj8029
    @winstonmaraj8029 Před rokem +1

    Nice interview. Do some shows with Yuval Noah Harari.

  • @ahartify
    @ahartify Před rokem

    You always know a writer, historian or academic had a low intellect when he or she inserts the word 'woke' into the argument.

    • @scott2452
      @scott2452 Před rokem +15

      Similarly, you can generally dismiss anyone who would insult the intelligence of everyone who happens to include a particular word in their vernacular…

    • @centerfield6339
      @centerfield6339 Před rokem +5

      You don't know it. You believe it. Things like "Jesus is up for any level of criticism but Mohammed is beyond reproach" is a real thing, and not even the biggest thing, and embedding such radical beliefs into content-generating AI is a real problem.

    • @benp4877
      @benp4877 Před rokem +2

      False

    • @carltaylor6452
      @carltaylor6452 Před rokem +2

      Translation: "a writer, historian or academic has a low intellect if he or she doesn't share my ideological bias".

  • @AlfieP-ob5ww
    @AlfieP-ob5ww Před rokem +1

    Neither one of you 2 geniuses are St. Thomas More!

  • @AlfieP-ob5ww
    @AlfieP-ob5ww Před rokem

    A right wing rock star??