Q&A: Should Computers Run the World? - with Hannah Fry

  • Added 28 Jul 2024
  • Can the workings of algorithms be made more transparent? Is fake news forcing us to become more discerning about the info we consume? Hannah Fry answers audience questions following her talk.
    Subscribe for regular science videos: bit.ly/RiSubscRibe
    Hannah Fry is an Associate Professor in the mathematics of cities at University College London. In her day job she uses mathematical models to study patterns in human behaviour, and has worked with governments, police forces, health analysts and supermarkets. Her TED talks have amassed millions of views and she has fronted television documentaries for the BBC and PBS; she also hosts the long-running BBC science podcast 'The Curious Cases of Rutherford & Fry'.
    Watch the talk: • Should Computers Run t...
    This talk and Q&A were filmed at the Ri on 30 November 2018.
    Hannah's book "Hello World" is available now: www.penguin.co.uk/books/111/1...
    ---
    A very special thank you to our Patreon supporters who help make these videos happen, especially:
    Dave Ostler, David Lindo, Elizabeth Greasley, Greg Nagel, Ivan Korolev, Lester Su, Osian Gwyn Williams, Radu Tizu, Rebecca Pan, Robert Hillier, Roger Baker, Sergei Solovev and Will Knott.
    ---
    The Ri is on Patreon: / theroyalinstitution
    and Twitter: / ri_science
    and Facebook: / royalinstitution
    and Tumblr: / ri-science
    Our editorial policy: www.rigb.org/home/editorial-po...
    Subscribe for the latest science videos: bit.ly/RiNewsletter
    Product links on this page may be affiliate links which means it won't cost you any extra but we may earn a small commission if you decide to purchase through the link.
  • Science & Technology

Comments • 91

  • @bhatkrishnakishor 4 years ago +8

    When the audience is as educated as the one at this presentation, it is safe to say you are assured an intelligent discussion. Loved it.

  • @LordQueezle 4 years ago +8

    I really like Hannah's lecture style. She knows her stuff and explains it simply and to the point. All, of course, with a touch of humor.

  • @mjswart73 5 years ago +21

    I loved this part:
    "And I think that the more that you do something the better that you become at it, the more confident that you feel in it, the more that you enjoy it, the more that it becomes a playground rather than a chore. And that is a tidal wave that I've been riding ever since."
    What a great way to get to the 10,000 hours it takes to become an expert.

  • @endrankluvsda4loko172 5 years ago +2

    Very interesting! Thank you for the upload!

  • @Fallkhar 5 years ago +5

    Fluid dynamics shot up in my ranking of things I want to study by like 14 ranks.

  • @Sam_on_YouTube 5 years ago +40

    Wait, the trolley problem happened to your husband!? How did nobody follow up on that?

    • @emperorSbraz 5 years ago +17

      She mentioned the whole thing elsewhere. Basically, he found himself facing a police chase coming the wrong way and HAD to choose between a pedestrian/cyclist on the pavement or a head-on collision with a car in the opposite lane... or something like that.

    • @miles4711 4 years ago +8

      @@emperorSbraz Interesting topic. I guess what differentiates a looming traffic accident from the trolley problem is that in the trolley problem the decision maker (you or the AI) is not part of the accident. If the decision-making process of an autonomous car takes its occupants into consideration (I think it should), the problem becomes easier in my opinion. Incorporate self-preservation: minimise danger to the occupants first, and from this solution space decide on minimising outside damage. I presume most humans act like this, instinctively picking the choice that harms them the least.

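      In code, that occupants-first rule could look like the minimal sketch below; the maneuvers and risk scores are invented purely for illustration.

      ```python
      # Sketch of the lexicographic rule described above: first minimise
      # danger to the car's occupants, then, among the maneuvers tied on
      # that score, minimise harm to outsiders. All names and numbers
      # are invented for illustration.
      maneuvers = [
          # (name, occupant_risk, outside_harm) -- lower is better
          ("brake straight",    0.3, 0.6),
          ("swerve to verge",   0.3, 0.2),
          ("head-on collision", 0.9, 0.1),
      ]

      def choose(maneuvers, tolerance=1e-9):
          min_occ = min(m[1] for m in maneuvers)
          # solution space: the maneuvers safest for the occupants
          safest = [m for m in maneuvers if m[1] <= min_occ + tolerance]
          # within that space, minimise damage to everyone else
          return min(safest, key=lambda m: m[2])

      print(choose(maneuvers))  # -> ('swerve to verge', 0.3, 0.2)
      ```
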
    • @raizin4908 4 years ago +11

      Prompted by @@emperorSbraz's clue, I did some googling and found the story. It's mentioned in an episode of the podcast Hidden Forces, starting at almost exactly 30 minutes in. Or at 35 minutes, if you want to skip the intro about the trolley problem in general.
      Spoilers:
      Her husband had to choose between driving head-on into a car being chased by the police, driving head-on into oncoming traffic, or hitting a cyclist. With his daughter on the back seat he prioritized her safety and veered towards the cyclist. Luckily the cyclist had seen the scene unfold and went onto the pavement to preemptively avoid Hannah's husband's car, so no-one was seriously harmed.

  • @Robert399 4 years ago +1

    It's one thing to acknowledge that moral problems aren't mechanical: there's no empirical test, even hypothetically, to determine the correct set of moral beliefs. But I'm really annoyed by the inevitable follow-up, "so there are no right answers", as if moral beliefs can't be critiqued and evaluated. Deductive logic can still be valid or not. It's true that, without empirical evidence as an input, abstract moral systems can never be proven or disproven. But they can still be consistent or not. More importantly, logical debate adds clarity: it lets people see what core assumptions really underlie their beliefs, or conversely where a core set of principles ought to lead. Lay "moral" intuitions are overwhelmingly neither clear nor consistent.

  • @frogz 5 years ago +8

    Hey Hannah, if you read this: fluid dynamics - what are your thoughts on current work in microfluidics? Some pretty amazing things are possible at small scales.

  • @javb222 5 years ago +3

    Engineers work to systematically reduce risk. Risk = Severity × Likelihood. "Solving" the trolley problem yields only a marginal reduction in severity, whereas engineering effort can far more significantly reduce the likelihood. In real life the trolley problem has a third option, which stops the trolley, so it's not really the same problem.
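
    A few toy numbers (invented for illustration) make the point: a marginal severity tweak barely moves the product, while driving the likelihood down by orders of magnitude dominates.

    ```python
    # risk = severity * likelihood, with made-up numbers: a marginal
    # severity reduction ("solving" the trolley problem) vs. engineering
    # that slashes the likelihood of the crash ever happening.
    def risk(severity, likelihood):
        return severity * likelihood

    baseline       = risk(severity=10, likelihood=1e-4)  # 1.0e-3
    trolley_solved = risk(severity=9,  likelihood=1e-4)  # 0.9e-3, ~10% lower
    fewer_crashes  = risk(severity=10, likelihood=1e-6)  # 1.0e-5, ~99% lower

    print(baseline, trolley_solved, fewer_crashes)
    ```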

  • @StreuB1 5 years ago +10

    This was so so SO good!!!!

  • @matyourin 5 years ago +4

    I wondered about one thing when hearing about that study of the nuns... I am not a native speaker, so I will try to make my point with different formulations of the same question:
    - If you can take an essay by a 19-year-old and sort of "predict" the probability of dementia in old age, does that imply they have some kind of illness that shows early signs and finally leads to dementia? Or does it imply that *because* they do not use their language skills, they will develop dementia?
    - So is it more like a handicap (that led to poor language skills and later to dementia), or is it "not enough brain usage / training" that first showed up as poor language skills and later got worse and became dementia? (Comparable to asking: if you can't run fast when you are young and later become movement-impaired, does that mean you had an illness when young, like a handicap, or did the lack of running lead to the impairment?)
    - Do dementia and poor language skills have the same cause, or are poor language skills the cause of dementia?
    Maybe someone can enlighten me...

    • @iUserapp 5 years ago +5

      You can't really answer those questions with the nun study, because there are too many hidden variables and correlations, like intelligence, language use etc. You need a controlled study for that. For example: have half of the people do crosswords and sudokus every day, and then observe after some time whether this influences the frequency of dementia.
      And as far as I know, challenging intellectual tasks, social contact and exercise all decrease the risk of dementia.
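
      The design described above is a randomised controlled trial; the sketch below shows its skeleton, with entirely made-up dementia rates standing in for real follow-up outcomes.

      ```python
      # Skeleton of the controlled study described above: randomise
      # people into a puzzle arm and a control arm, then compare how
      # often dementia occurs in each. All rates are invented.
      import random

      random.seed(0)
      people = list(range(1000))
      random.shuffle(people)                      # random assignment
      puzzles, control = people[:500], people[500:]

      def develops_dementia(base_rate):
          return random.random() < base_rate      # stand-in for follow-up

      puzzle_cases  = sum(develops_dementia(0.10) for _ in puzzles)
      control_cases = sum(develops_dementia(0.15) for _ in control)
      print(puzzle_cases / len(puzzles), control_cases / len(control))
      ```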

    • @agient_io 5 years ago +2

      I have similar thoughts to Wulf, and would add the oft-repeated phrase "correlation is not causation".

    • @donfox1036 5 years ago +2

      matyourin: good question. What you seem to be getting at is the difference between correlation and cause-and-effect. The two can be distinguished using standard techniques, but it isn't easy, either to do or to understand clearly. It is basically a process of elimination.

    • @matyourin 5 years ago

      I knew about the difference between correlation and causation, and that is indeed what my question was aiming at. Afaik a strong correlation can hint at causation, or at a common cause behind the correlated items. In the case of these nuns I just wondered: if the correlation between language skills when young and dementia when old is super strong, that could mean one causes the other, or that there is a common cause of both (though of course not necessarily one of these two possibilities - but it's a strong hint). Isn't there any further research on this? I like the idea of two groups with similar language skill levels, one doing constant language training and the other not; years later you check whether their cognitive capabilities have diverged. If they did, it hints that dementia is avoidable in some cases by "training"; if they didn't diverge, it hints at another hidden cause...
      But that of course could be anything... We are talking about a group of nuns who at age 19 committed to that lifestyle, all of whom agreed to this study and donated their brains - a highly selective group that isn't representative at all. No males, no other ethnicities or professions...

  • @bhatkrishnakishor 4 years ago +1

    My two cents on the trolley problem with regard to autonomous cars:
    in a human-driven car, the decision to choose one of the two (or multiple) outcomes rests with the driver. So in the case of autonomous cars, why not present the owner/driver with profiles that will dictate such future outcomes?
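
    One way to picture those owner-selected profiles is as an ordered list of priorities the car's planner consults; the profile names and parties below are invented for illustration.

    ```python
    # Hypothetical "ethics profile" the owner picks once; the planner
    # then protects parties in the profile's order. Names are invented.
    from enum import Enum

    class EthicsProfile(Enum):
        PROTECT_OCCUPANTS   = ("occupants", "pedestrians", "property")
        PROTECT_PEDESTRIANS = ("pedestrians", "occupants", "property")

    def priority(profile: EthicsProfile, party: str) -> int:
        return profile.value.index(party)   # lower index = protected first

    chosen = EthicsProfile.PROTECT_OCCUPANTS
    parties = ["property", "pedestrians", "occupants"]
    print(sorted(parties, key=lambda p: priority(chosen, p)))
    # -> ['occupants', 'pedestrians', 'property']
    ```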

  • @neoplato7525 4 years ago

    Hi, Hannah. Thanks for taking my question. I'm just wondering what would happen if an algorithm is given too much autonomy and makes a decision that results in a serious adverse event. How do we determine who is responsible?

  • @robertgreen7593 5 years ago

    I think the need for an 'explanation' could possibly be replaced with 'weight'. The algorithm might have a dozen variables (modules/areas) that weigh into a decision. If there is a flaw that causes a bad decision, it could be just one of those variables. All those contributors could have a typical range of 1 to 10, but a flaw could push one of them to 1000 and send someone off to the electric chair. This anomaly would be easy to spot on review if the weights were transparent to the reviewer.
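
    A minimal sketch of that audit, with invented variable names and values: expose each factor's contribution and flag anything outside the typical band.

    ```python
    # Each factor normally contributes a weight in [1, 10]; this check
    # flags contributions outside that band for a human reviewer. The
    # factor names and values are invented for illustration.
    TYPICAL_LO, TYPICAL_HI = 1, 10

    contributions = {
        "prior_offences": 7,
        "age_factor":     3,
        "flight_risk":    1000,   # the flaw: wildly out of range
    }

    def audit(contributions):
        return {name: w for name, w in contributions.items()
                if not TYPICAL_LO <= w <= TYPICAL_HI}

    print(audit(contributions))   # -> {'flight_risk': 1000}
    ```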

  • @JamesPetts 5 years ago

    On deciding the limits: the problem is not specific to AI. It is the general problem of deciding the extent to which any agent (i.e. an individual person) ought to give up power over her/himself to another agent or agents, whether human or machine. In many ways, giving power to other humans is more dangerous than giving it to machines. This includes giving humans the power to prevent people by force from using and developing machines.

  • @Inertia888 5 years ago

    Are we going to be able to define and identify algorithms' "thinking" mistakes, and find similarities among them, as we do with human neurology and psychology? Maybe whole new branches of science will be established to isolate and fix common algorithmic "mental health" problems?

  • @murieren2830 4 years ago

    Brilliant!

  • @tokajileo5928 1 year ago

    The girl at 12:30: is she the musician from another lecture?

  • @donfox1036 5 years ago

    In fact, temporary optimality is the bread and butter of both science and technology.

  • @michaelbayerl1683 5 years ago +1

    BTW, the process of computer screening and human confirmation has been in routine practice for evaluating Pap smears for over a decade. It's already here! It works great!

  • @donfox1036 5 years ago +1

    What Hannah seems to be talking about is an optimal solution, which means choosing among the solutions we know in the best way we can. Engineers would like this because it is doable, but Hannah seems to want to proceed beyond a possibly temporary optimal answer.

  • @Robert399 4 years ago

    I don't think lawyers will ever go away so long as we have an adversarial legal system. You can tell a computer to estimate the likelihood of something or to store a vast number of laws and cases, but you can't tell it to spin a convincing narrative (let alone two opposing ones) to convince a jury.

  • @Krydolph 5 years ago +3

    Hannah, we can agree that it's funny the AI sees pink sheep as flowers and makes up cows when there aren't any.
    But can we agree that this is a matter of the AI not being trained well enough?
    If it tells you it's a lush field with cows, and there are no cows, and you just think "oh, stupid AI" and let it go, it will think it was right; but if you go in and tell it "Well, AI, there are no cows in this image", then it will keep learning.
    What we need is more AI as co-workers, with a real, qualified human judging what the AI says and thinks. At least for stuff like this it's easy: there is a clear right and wrong - either there is a cow, or there isn't.
    For the mammogram stuff, isn't it just a matter of training it for the new machine too? If it can do it for one machine, it should be able to do it for another, and if it's done right it can even be trained to tell, when you put in a picture, whether it came from machine A or machine B, without you having to say so.
    I think there is a big future for algorithms like this, and we might not be all the way there yet, though I think we are very close. We just need to train them right, and that will take a lot of time, and it might not be easy getting people to train their replacement - though for most uses I see them more as a helper. Why should the doctor spend 30 minutes coming up with a diagnosis when the computer can give the 5 most likely ones in 30 seconds? Then he can look at them, see what seems most likely to him, and try it out. As they find out what was wrong and correct it, the AI is told, and it has learned for the next patient that comes in. It sees patterns humans never would. But you still need the doctor to ask the right questions and humanize it all; I think we really need that.

    • @nicholassullivan6105 5 years ago +5

      Well, I agree that more training will make the AI better, but I don't think that was the point Hannah was trying to get across. No amount of training will make an algorithm better at extrapolating its experience into new contexts, which is what was happening with the cows, and also with the mammograms. If we point out a sheep to a baby for the first time, it will recognize the sheep even in a completely different location, like on the stairs, or in a different colour. These algorithms currently aren't capable of really understanding "this is a sheep" and knowing that it will still be a sheep in a different context. You would prefer something that could take its learning from one scenario and apply it to another without having to relearn everything. And for that, you really need to change the way the algorithm is designed.

    • @NipapornP 5 years ago

      Yes, you can train the algorithm on the pink sheep, but next time the sheep are red, blue or some other colour, and the AI would "see" a green field with red or blue flowers on it. So you see that this quickly becomes nearly endless training, even for that single characteristic, colour.

    • @Krydolph 5 years ago

      Then maybe this algorithm is too simple. Maybe we aren't quite where we need to be - I don't know, I am no expert by any means - but I still believe it is very possible to make something that can recognize, let's say, a blue sheep after it has learned about red sheep. Or maybe it has to learn about red, blue and green ones before it can recognize yellow ones... but it's about taking in enough data points, so that it can say: all my data points say this is a sheep, but it's the wrong colour; could it still be a sheep? By then it has already learned that sheep can have other colours.
      Also, the big thing you always hear about all this self-learning AI stuff is: "It does this, it does it right, but we have NO IDEA how." The programmers do not know what the AI looks at to perform its task, and often much of it is things a human would never think of if he had to code it manually.

    • @Sam_on_YouTube 5 years ago

      We probably need people whose job it is to literally raise AI children. Basically, trained nursery school and kindergarten teachers. But I don't think we have AI yet that is smart enough to go to nursery school. It will be a learning problem, probably soon, but so far it is still a technical problem.

    • @NipapornP 5 years ago

      @@Sam_on_YouTube That sounds promising and could be a key to AI.

  • @rapauli 5 years ago

    Oh Hannah, don't you think fluid dynamics must be evaluated in a computational AI world? Weather and climate, perhaps the ultimate in dynamic fluidity, are heavily computed and learned; why not part of a future world?

  • @ecospider5 5 years ago +1

    I think it is funny that people talk about losing jobs to physical automation, like manufacturing, as a new thing. Paper-pushing automation has taken 90% of paperwork jobs over the last 50 years.
    In the '60s you actually had people at companies adding up numbers. In the '70s you had people still taking dictation and manually typing letters. In the '80s you had a mail person hand-delivering printed notices to every employee. In the '90s an admin ran conference meetings. In 2001 people at companies made bank deposits every night. In 2010 you sent a paper bill between two companies for things to get paid.
    These are all jobs that have now been removed due to automation of paperwork. Physical automation is a very small percentage of jobs in the world right now. Automation has already had a massive impact on the job market. This is not something new just because it is easier to see the physical machines doing work.

    • @MrChefjanvier 5 years ago

      So true, I totally agree. Further examples: count the number of employees in bank branches in the eighties versus nowadays. Or in accounting departments.

  • @Wrackey 5 years ago +7

    As for the "trolley problem" with self-driving cars: I think it's easy. If there really is a point where the car has time to decide, and the world doesn't consist purely of grannies and little children, choose a tree, lamppost, or wall instead! The person inside is WAY more protected, and the car is far better prepared to deal with a collision than any pedestrian or cyclist will ever be. I do think, though, that a clear-cut situation - either drive over the child, or swerve and kill the granny - has a very... VERY slim chance of ever happening, and if it does, I really expect the AI to be good enough to think of a third option ;)

    • @motherofallemails 5 years ago +1

      Stop avoiding the question. The situation is the granny or the child; which should be saved?
      I think the answer to who should be saved is pretty clear. The idea that there is "no right answer" is absurd; would you really toss a coin?

    • @Wrackey 5 years ago +2

      @@motherofallemails Regardless of what you think the answer to the trolley problem should be, I was merely arguing that for self-driving cars it is useless to ask the question as a binary one, since reality is much more complex and they don't operate in a vacuum. I never said there was "no right answer".

    • @raykent3211 5 years ago +1

      I agree with you. Philosophical distillations often depart from practical reality. A large part of stopping distance comes from reaction time before action is taken; if the AI has a substantially shorter reaction time, it outperforms a human driver. If we imagine that a human can make a considered moral assessment of the situation in a tenth of a second, I think we're fantasising about our own abilities. Algorithm: slam on the brakes and keep the steering straight for best control; don't try to make a moral judgement between an adult (who may be pregnant and have dependent kids) and a younger person (who may grow up to be a mass murderer). Cherry-picked example, I know! But all valid grist to the mill of philosophy.

    • @motherofallemails 5 years ago +3

      @@raykent3211 STILL ducking the question: what should the AI do if the situation has no other outcome than the death of a child OR a granny? I really HATE it when people talk a whole lot of irrelevant bs and start to think they've answered the question. NO YOU DIDN'T.
      Don't pretend to take the moral high ground with bs like "the child may grow up to be a killer" or "the granny may be pregnant". All the data is GRANNY OR CHILD; WHICH ONE DIES? Would you rather leave it to the AI to answer when the time comes? I don't think so.

    • @agient_io 5 years ago

      @@raykent3211 Are you claiming there are no practically realistic examples where you can be forced into the quandary? Let me help: you're driving down a suburban street at the legal speed limit, with numerous parked cars obscuring your vision beyond the road itself. You notice an elderly person, looking the other way, about to walk into the path of your vehicle. As you go to alter the vehicle's direction, a small child runs into your path from another direction, chasing a ball. The physics of the situation are such that it is impossible, regardless of reaction speed, to stop in time. It is also impossible, given the relative locations of the old person and the child, to avoid both of them. The situation described is not a 'philosophical distillation'; it is entirely plausible. So now we get to the crux of the matter: how do we ethically decide who gets to live?

  • @MrAndrew535 5 years ago

    I ask again: define "world". On this matter, both a high-functioning autodidact and the highest-functioning computer would be infinitely more contemplative than their anthropocentric counterparts. A follow-up question would be: why is this the case?

  • @B20C0 4 years ago

    I don't get why people put so much emphasis on the trolley problem. There's a simple solution: make it a "WarGames" scenario, aka "the only winning move is not to play". Meaning you just hard-code how to react to such an event by making the car keep going straight and brake. That way the outcome of such a scenario is
    a) always predictable
    b) always fair
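
    Hard-coding that policy is a one-liner in spirit; the sketch below (types and field names invented for illustration) shows the shape of it.

    ```python
    # The fixed rule described above: if a collision is unavoidable,
    # never swerve - hold the wheel straight and brake fully. No
    # weighing of who gets hit, so the outcome is uniform and
    # predictable.
    from dataclasses import dataclass

    @dataclass
    class Control:
        steering_angle: float   # radians; 0.0 = straight ahead
        brake: float            # 0.0 (none) .. 1.0 (full)

    def emergency_policy(collision_unavoidable: bool,
                         planned: Control) -> Control:
        if collision_unavoidable:
            return Control(steering_angle=0.0, brake=1.0)
        return planned

    print(emergency_policy(True, Control(0.2, 0.1)))
    ```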

  • @DasPreem 5 years ago +2

    I'm in love

  • @peterfoerderer8224 5 years ago

    I've wondered if politicians should be replaced by a computer.

    • @DamirAsanov 5 years ago

      They should be replaced by pink sheep.

    • @aasid2446 3 years ago

      @@DamirAsanov aren't they already?

  • @MrOldCrow 5 years ago

    An AI in the judiciary could be hacked to get certain individuals out of trouble. And an AI is not yet capable of telling the difference between being taught relevant patterns and being taught misleading information...

  • @vigneshdwarakanathan2980 5 years ago +2

    I'm a very young doctor, and I don't see my or my colleagues' jobs being snatched by AI in my lifetime. I agree completely about specificity and sensitivity.

    • @Krydolph 5 years ago +3

      What I do see, and hope for too, is that you get an AI colleague. The difference between you and the AI is that it can see the whole patient journal, and remember all of it. It can also read all the new studies, and does. And it can often see and remember correlations you might not be aware of. So it will give you a prognosis for patients, and you will look at it and see whether it makes sense or not. Watson has already shown promise here. I don't think we are at the point where a robot simply takes over, but having it as the doctor's co-worker would be so good, and good for all parties.

    • @vigneshdwarakanathan2980 5 years ago

      @@Krydolph Yes, exactly. The difference between me and the AI is that I have a hunch when something is not normal. Like Hannah mentioned, I don't see AI as a threat; rather, I see it as a tool - another tool I can rely on to come up with a decision about the management of a patient. I already use technology more than my patients would like, but it is only making me better. I look things up on Medscape and UpToDate all the time to make sure I'm not missing anything.

    • @vigneshdwarakanathan2980 5 years ago

      @William White I really hope that day comes... Right now, computer vision (imaging and pathology slides) is well developed, to the point where it may be implemented, and maybe a few other things like ECG etc. But beyond those specific areas it really isn't doing the more important things, like decision making (again, maybe for some specific conditions). What humans have is a bias, and a hunch. On any day, I would trust an expert in the field more than an AI algorithm, and I would trust an AI algorithm more than a novice.

  • @abdmuhaimin 4 years ago

    If we are in an autonomous car and have to choose between hitting the parents or the baby, we are still at fault either way: these are living human beings. But I wish we had more options.

  • @TheNoodlyAppendage 5 years ago +1

    No. Computer algorithms are made by people, and I think it's disingenuous to assume that any algorithm is infallible.

  • @robertpage4991 5 years ago +2

    We go from unelected Belgian bureaucrats to unelected Pentium i7 bureaucrats. I want a vote on Compu-texit.

  • @Pate1992 5 years ago

    Finally First!

  • @____________________________.x

    Could have done without the audience bias

  • @tehKap0w 5 years ago +1

    Answer to the trolley problem: apply the brakes. That was easy.

    • @donfox1036 5 years ago

      kwyjibo, sorry, not allowed. Brakes don't work. Good try.

    • @tehKap0w 5 years ago

      @@donfox1036 You don't operate defective equipment. That's why you hire engineers to build these things.

    • @donfox1036 5 years ago +1

      kwyjibo, in reality the trolley problem is solvable, but it requires perfect safety devices and perfect maintenance, which are quite hard to achieve. Are those ideas any more real than a trolley hurtling toward two groups of different sizes on the other side of a single switch? There is a huge literature on this, as it is a standard question for ethicists; I refer you to that. But the engineer needs to admit that freak accidents happen, such as the runaway train at Lac-Mégantic in Quebec.

  • @marcmarc172 5 years ago +1

    I couldn't believe what her "AI future" sounded like.
    This Q&A really left a bad taste in my mouth.
    Love the topic of conversation and the channel though!

  • @p_mouse8676 5 years ago +2

    Scientists and engineers who wave away essential questions with the argument that something is so rare it should never happen should be fired on the spot.
    These people obviously don't understand what science is about. That is very basic statistics, besides the fact that you're missing essential opportunities.

  • @martincotterill823 3 years ago

    Just watched the Berlin talk. My impression is that you don't understand the consequences of the tool you created and the uses to which the police will put it. This is another example of non-ethical science.

  • @fflv_irn 5 years ago

    Definitely NOT reading her book. No answers at all. All the examples in the presentation are ancient. The book would most definitely be a waste of time. Fake news!!! :)

    • @FHidber 5 years ago

      Well, the questions asked are also antique in terms of the moral issues they pose. Viewing the talk from her perspective, I think she does a great job of explaining and creating a coherent picture for kids and for people who come more from a place of worry about the impact AI will have on their lives than from its endless possibilities and advances.

    • @migros8 5 years ago

      Illya Fefelov I would love to hear your opinion about the book!

    • @TabbsMcgee 3 years ago

      I think that, given the nature of the talk and the variety in the audience demographic, there were never going to be long-form, in-depth answers to each question. However, I felt the answers that were given were succinct and more than satisfactory for the purpose of the event, as they were clearly looking to take a range of questions from different audience members, and I'm not sure why you seem so ready to throw her under the bus. I think she did a great job. :)