The Rise and Fall of Effective Altruism

  • Published 10 Feb 2024
  • #philosophy #effectivealtruism #ethics
    Effective altruism went on a wild ride: from a small group of nerds wanting to help in the most effective way they can, to tech billionaires worrying about AI turning us all into paperclips. In this video, we tell the story of the rise and fall of effective altruism. Here are some books about the effective altruism movement, if you are interested (affiliate links):
    William MacAskill - Doing Good Better amzn.to/3vf6NhR
    William MacAskill - What We Owe the Future amzn.to/3IzQE9Z
    Peter Singer - The Most Good You Can Do amzn.to/4cnNDY6

Comments • 36

  • @houseofjax2806
    @houseofjax2806 5 months ago +10

    I feel proximity does matter, in the sense that you have a duty to save the child and sacrifice your clothes and meeting because you're the only one who can. If there's a lifeguard nearby, then I don't see why you would need to stop what you're doing to save the kid; in fact, if you did try to help, you might mess things up. This is amplified worldwide when there are billions of other people who can help those in need, and who could probably do a better job considering their proximity.

    • @IntrepidLlama123
      @IntrepidLlama123 5 months ago +3

      I think Singer has a response to this. Imagine a situation in which you’re walking past a pond with the same drowning child but there are a bunch of other people around, say 10 or so. They all walk past. Do you save the child? If you do, then by parity of reasoning, you have an obligation to help people in faraway countries despite the distance.

    • @houseofjax2806
      @houseofjax2806 4 months ago

      @@IntrepidLlama123 Then that really depends on whether it's even true that no one in those countries is going to help each other, or whether an impersonal flow of cash will actually help anyone. I feel like a community helping each other through mutual aid is much more effective than some impersonal charity, especially since the former would help provide the foundations for social change. Even then, the best way to help these countries would be both economic growth and more efficient and fairer laws, which no effective altruist can really help with. Yet effective altruists can probably foster more change within their own countries, since they literally live there.

    • @YLLPal
      @YLLPal 4 months ago

      @@houseofjax2806 One of the programs promoted in EA is giving directly, which gives community members the resources to embark on their own projects. The economic development then flows on to aid the community more broadly.
      I think there's a misconception about EA being focused on the money. That is a taint that was introduced later, not part of the philosophy itself.
      It isn't about the money as such; it's about resources in general, which include time, research, money, and influence.
      Influence can be used to promote better economic policies which could help alleviate the unfair way the world is arranged.
      The reason there's so much focus on directing resources to places with worse conditions is simple. 10+10 is a 100% increase, but 100+10 is only a 10% increase.
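A minimal sketch of that marginal-impact arithmetic (an illustration added here, not part of the original comment): the same absolute gain is a far larger relative improvement at a lower baseline.

```python
def relative_increase(baseline: float, gain: float) -> float:
    """Return a gain expressed as a fraction of the starting baseline."""
    return gain / baseline

# The same +10 units of resources:
assert relative_increase(10, 10) == 1.0   # +10 on 10 is a 100% increase
assert relative_increase(100, 10) == 0.1  # +10 on 100 is only a 10% increase
```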

    • @IntrepidLlama123
      @IntrepidLlama123 4 months ago +3

      @@houseofjax2806
      These are interesting points but I think there are still ways around them.
      The first is that it really is the case that a lot of people are indifferent to (or feel that they are unable to help) the poor in low-income nations. So in some sense, Singer’s thought experiment does have force.
      The second is that it’s not exactly an impersonal flow of cash as you’re providing people who are actually there (better off locals or volunteers) with a means of helping those in need.
      The third is that it’s actually far cheaper to help people overseas because a lot of the problems are easily preventable. That’s why most of the issues that plague low-income nations don’t even exist in middle and high income nations.
      And in terms of the most efficient way to help, I agree with you that systemic change would be the best possible outcome but then there’s the problem of what individual intervention can do.
      In terms of helping those around you, I don’t think the two are mutually exclusive. I personally think that we *can* help those in need around us, *and* do our best to help those most in need in a global sense.
      But in all honesty, I’m still unsure myself, and I do think you make some good points. Would be interested to hear your thoughts on the above. Also, you might find Singer’s “The Life You Can Save” interesting.

  • @nebufabu
    @nebufabu 4 months ago +6

    There may be some value in some EA ideas, but longtermism is a joke. Wild guesses plugged into exponential formulas can justify doing or not doing literally anything. It's essentially "Life begins when someone makes an estimate of how many people will be born millions of years into the future. By typing this, I destroyed ten kajillion potential future babies, but seeing that at least five of them would have been hyperhitlers who would have destroyed humanity, I also saved at least fifty kajillion."

    • @Ashiq.Ibrahim
      @Ashiq.Ibrahim 24 days ago

      Your argument doesn’t really address why longtermism is a joke. And most of the sub-causes in longtermism are not wild guesses. Take AI safety as an example. Over half of AI researchers are afraid that by around 2050, AGI will be developed, which could be very bad if it develops sentience. I would agree that most of it is speculation, but it’s definitely not wild guessing. Take AGI as an example: AI is goal-oriented, so if it’s tasked to do something and it sees that getting rid of humans would help it achieve its goal, it would take out humans, not because AI is inherently bad, but because it sees that as the best means to achieve its goal, whatever that might be.
      Also, I can’t tell if that final point you’re making is a joke, but if it’s not, it’s utter nonsense.
      I could be wrong, and I might have come across as dogmatic, but feel free to point out any mistakes in my argument, or refute it.

    • @nebufabu
      @nebufabu 24 days ago

      @@Ashiq.Ibrahim OK, let's take AGI as an example. I don't know how it's going to work. You don't know how it's going to work, Sam Altman doesn't. Quite a few people would even go as far as to say it's impossible to know how AGI will work (that's the whole point of ideas like technological singularity) though I won't go that far myself.
      Without any kind of theoretical framework like that, or any practical experience of it, all that is left is projections based on mostly arbitrary assumptions about AGI's motivations and capabilities, that is, wild guesses. Why would AGI be intrinsically "goal-oriented"? Current foundation models aren't; what they naturally do is just output random hallucinations. Many humans aren't either.
      Those guesses are often seemingly tailored to produce the desired outcome rather than the other way around - note how in the classic "paperclip maximizer" scenario AI is both so independent humans can't dissuade it, yet so slavishly obedient to the initial command that nothing else matters to it. Again, given how easily current AI models are confused or even "jailbroken" out of purportedly unbreakable constraints on their output, there's no evidence for this.
      There is one possible theoretical framework, but frankly, given the state of AI right now, it's worse than a wild guess. Many of those scenarios were based on the GOFAI notion that AI would be designed and built like a car, not trained like a neural network. So you could make assumptions about those designers and builders (including their ability to build exactly what they want), not about AI as such. If you know anything about how the current AI boom started, what progress has been made, and what challenges remain, this is pretty much the exact opposite of actual AI development as it is now.
      Speaking more generally, again, the logic of longtermism can be used to argue the exact opposite points - should we care about environment or ignore it? Is nuclear war bad? Is an effort to colonize Mars right now worthwhile? Any of those depend on particular long-termist's assumptions about too many imponderables leading to a simple binary - do some people survive? If yes, then it doesn't really matter, as in 10 million years...
      As for my joke in the end... Well, it was a joke, no one would seriously write "hyperhitler" ever, I hope. But if you think serious longtermist projections make much more sense, allow me to disagree.

  • @GrumpyStiltskin316
    @GrumpyStiltskin316 3 months ago +1

    It’s not the proximity that’s the underlying issue with the initial proposal. It’s the uncertainty of the process and the results. If I jump into the pond to save the child, I’m doing the deed myself and I see the results. Presuming that all goes well, I immediately know my efforts were not in vain. The variables involved are limited to my response time, how long the child has been in the water, what injuries they may have suffered, if the child had started to drown, etc. Of course, I may not succeed. But if I did nothing, the child stands little to no chance. Despite the variables beyond my control, given that the child is still alive upon my arrival, it’s safe to assume that the outcome of the situation is most likely a direct result of my decisions and actions in the moment.
    To compare that scenario to donating money to a cause such as cancer research or third-world development is beyond even apples to oranges. Take cancer research for this example. I donate my money to a cancer foundation. How much of that money goes towards administrative costs? Lobbying? What sort of research is specifically being done? What knowledge is to be gained of cancer itself? Is treatment the goal? A cure? What level of success is anticipated? And who exactly will benefit from the results of the research? Will treatments or cures be available to everyone? Will they be free, or costly? How costly? Who’s doing the research? Who profits IF successful? Will my money even fund research, or is someone pocketing the money for themselves? The variables are finite, but they’re vast, and aside from my decision to donate, they’re entirely out of my control.

    • @adampersand
      @adampersand a month ago

      Your questions are great, and that's exactly why groups like GiveWell, The Life You Can Save, Giving Green, and others have arisen: to answer just those questions so that when you give, you don't have to be swamped with uncertainty.

  • @ImplodingChicken
    @ImplodingChicken 3 months ago

    Hey PQ, video suggestion: I've been trying to read up on the philosophical foundations of vegetarianism and arguments for/against. I'd love to hear your take.

    • @PhilosophicalQuestions
      @PhilosophicalQuestions 3 months ago +1

      Hey there, good to see you're still around :) Yeah, I thought about doing that, taking Michael Huemer's "Dialogues on Ethical Vegetarianism" as core material, but I'm still thinking about what a cool twist might be so as not to just present the arguments in a boring way.

    • @andymeier7708
      @andymeier7708 18 days ago

      How sure are you that killing plants matters less morally than killing an animal? They react, they defend themselves, they feel pain. How surprising that we who are intelligent overwhelmingly value intelligence as a metric for whether killing is moral. It's pure self-interest disguised as altruism, as it always is.

  • @ns1extreme
    @ns1extreme 5 months ago +1

    I mean, sure, you can argue that putting your focus on AI is just the cause of rich tech entrepreneurs, but actual research into AI safety is very underfunded compared to how much funding goes into improving AI technology. If AI has the potential to be the most impactful technology of the next century, then leaving things exclusively to tech bros in the industry seems ludicrous. We need more morally intelligent non-tech-bros working on AI regulation and safety, not fewer. Which is what effective altruism is achieving.
    The fact that effective altruists are scared of the word socialism and don't think much about systemic change is a real flaw in the community, though. I agree.

    • @houseofjax2806
      @houseofjax2806 4 months ago

      I assume they're scared of the word socialism considering its history of being terrible, and most of the people promoting it as a solution to every existential problem being disingenuous.

    • @YLLPal
      @YLLPal 4 months ago

      I think the taint of focussing on money has derailed the EA movement so much.
      I consider myself both socialist and an EA (of the philosophical approach, not the financial focus)

    • @aaronclarke1434
      @aaronclarke1434 4 months ago +2

      That’s a straw man. What EA is wary of is proposed solutions that can’t be tested or measured.
      I don’t want to presume the model of socialism you have in your head. If you mean the Nordic model, then this has been tested and is quite beneficial in the right cultural context. If you mean Communism, then this has been tested and is harmful.
      More broadly, experiments in seeking to wholesale implant solutions in foreign cultural soils have been mixed. Places with various ideas of capitalism in their heads (Singapore, West Germany, Japan, South Korea, and modern Rwanda) have generally fared much better than places with socialism in their heads. Perhaps the only example you can use is AANES in Syria, and that’s with a charitable view of what socialism is, one which includes liberalism and direct democracy.
      Libya, Afghanistan, Iraq, etc., did not succeed.
      The lesson we should draw from the experimental evidence is this:
      Caution when implementing any systematic change in a country. Sensitivity to its already existing culture. The mixed success of implementing liberalism and democracy. The absence of explicitly socialist successes. The success of the Norway model IN NORWAY.
      Caution about systematic change is shown to be wise by looking at the evidence. That’s the attitude most EAs have.

    • @YLLPal
      @YLLPal 4 months ago

      @aaronclarke1434 Communism, but I would strongly disagree that it was communism that failed; instead, the capitalist world worked its ass off to make sure it couldn't succeed.
      InB4 the Black Book of Communism, which uses inflated figures and includes deaths from fighting Nazis as well as "potential lives" that were never even conceived.

  • @dookdomini6535
    @dookdomini6535 2 months ago

    Lack of proximity counts, due to transparency and control. You hand your hard-earned money to X, but how does X spend it, and who takes a cut?

  • @troodoniverse
    @troodoniverse a month ago +1

    It is not that simple.
    From what I understand, effective altruism is about doing the maximal good and minimising suffering. Long-term risks (like AGI of any kind) usually mean extinction or worse, so in theory addressing them returns much more than standard charities. Unless you put into the equation the fact that poverty and lack of education are a source of conflicts (most of the world's safest and most peaceful countries are also very rich, but the cycle works both ways). And the possibility of global conflict creates a wall of fear pushing governments towards the fast and unsafe creation of ever more advanced AIs, since as weapons they are in theory far more dangerous than nuclear ones once they can self-reproduce, which could lead to one of the possible fates worse than extinction.
    But now we have to look at the world from the perspective of each individual. From the perspective of the less fortunate, investment in them is much more important than investment against big dangers like AI. It makes no sense for such people to fight against dangers of the future if they are dying right now, especially if those dangers could also be what saves their lives. Only after they are safe and secure will they start to think about more long-term problems.
    So saving people living now seems like the best action, but what if AGI takes over our world faster than eradicating extreme poverty brings its rewards? Poverty could be eradicated 5 years from now just as easily as 50 years from now, but if we don't stop the AI race soon, there might not be any humans left to enjoy their lives outside extreme poverty in 50 years.
    In my view, problems should also be measured by how acute they are. The risks of AI or bio-nuclear war are extremely pressing (on a scale of years) and threaten everyone, but eliminating 90% of today's extreme poverty would make the fight much easier, and it could be done in a year or two (if governments were willing to tax big tech and use the money).

  • @YLLPal
    @YLLPal 4 months ago +1

    That Steven Pinker tweet is right where I'm at.
    Can we reclaim effective altruism for its core philosophy?

    • @IntrepidLlama123
      @IntrepidLlama123 4 months ago +1

      Hopefully. I feel like a lot of the grassroots stuff hasn’t been morally corrupted.

    • @Derpsider
      @Derpsider 3 months ago +1

      Honestly, I doubt effective altruism will go back to its core philosophy.
      What this video did not mention about the EA debacle related to AI is EA's earlier ties to OpenAI. The former OpenAI board had Helen Toner (who gave a TEDx talk about EA) and Tasha McCauley, both effective altruists. Though this is a hypothesis, Sam Altman was likely fired due to behavior that clashed with EA philosophy around AI safety. Sam Altman later commented on his dislike of how EA's philosophy raises purely speculative threats about AI without substantial basis.
      Now on to: *will we know if it goes back to the core?* EA will hold a conference in June in the Netherlands. I would argue the topics discussed (and the amount of attention each receives) during the conference will give a good indication of where the core of EA currently lies.
      *Personally:* I'm in for the AI safety. I got acquainted with EA after attending a Q&A with OpenAI developers (e.g. Jeffrey Wu) and hearing the concerns they express related to AI. It's diverse! Some are optimists, others are doomers (the Q&A had doomers). I definitely do agree that AI safety research should not be the destination for donations from those who 'generally' want to donate to an 'effective' cause.
      _Side note:_ Rise and fall of EA? Ehh... Measure the amount of attention over the last 12 months on Google Trends for "Effective Altruism" and the funding received (which dropped after the SBF situation). I'd argue that would give a better understanding of whether there really was a rise or fall. Also, thanks to the creator for making the video; I find it very important to also understand the critiques of EA.

  • @KahnShawnery
    @KahnShawnery 5 months ago +3

    Placing a higher value on potential lives over actual lives is definitively evil.

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 4 months ago +1

      Aye, I would say as much; I think we credit too much our predictive abilities.

    • @aaronclarke1434
      @aaronclarke1434 4 months ago

      Why? Physics tells us space and time are one thing. It even suggests the past, present, and future are equally real. The concept of the actual and potential has its roots in Aristotle. Not modern science.
      Harm is wrong regardless of geographical distance. Why do you want to uphold time and potential future lives as fundamentally different?

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 4 months ago

      @@aaronclarke1434 I suppose I was agreeing with a figurative interpretation of OP's statement: that even if we acknowledge the ethical importance of future people, we cannot necessarily then make ethical determinations, because we don't know all the effects of our actions. Sciences give us foreknowledge, but they are like an adjustable rifle scope; we exchange precision for field of view and vice versa (at least in most fields, for now). Nonetheless, one attempts synthesis of findings between and within fields. Each synthetic prediction has a range of error, and as these predictions are assembled in sequences and in dependence graphs, that error compounds exponentially. I think this is evident in the fact that (as far as I know) notable journals are not coming forth with the kinds of concrete predictions which are ethically relevant and thus the kind to be published in news media almost immediately.
      Also, this particular question of ethics reminds me a little of the Legislator's Syllogism (disparagingly named.)

    • @aaronclarke1434
      @aaronclarke1434 4 months ago

      @@pwhqngl0evzeg7z37 1. I've never heard of the Legislator's Syllogism. Thanks. Although, there's no difference between omission and commission in utilitarianism.
      2. I 100% agree. The research on forecasting indicates that we can't predict much in the social realm beyond a year with better accuracy than chance. That includes these ethical arguments. However, I think the argument is very strong. It doesn’t matter what you substitute for “harm”; there are probably trillions of people in the future compared to only 108 billion or so in the past. As long as you can have less and more of whatever the good is, it holds. There’s a chance we end all those lives via our actions today.
      So, I'm left in the position that we have a strong obligation we may never meet. There’s no reason the universe would make ethics easy for us.

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 4 months ago

      @@aaronclarke1434 I'm not sure I grasp your argument- are you saying that the sheer number of future people counteracts the small likelihood of any given prediction?
      Also, I just realized I gave it the wrong name: it's called the Politician's Syllogism.

  • @steven-el3sw
    @steven-el3sw 3 months ago

    Charity begins at home and you can't convince me otherwise.

  • @tomsanders5584
    @tomsanders5584 5 months ago +3

    Writing a check to pay other people to do your dirty work in order to clear your conscience is despicable. Get off your ass, find someone less well off than you, and just spend some time with them. That's true charity, person to person:
    'For I was hungry and you gave me food, I was thirsty and you gave me drink, I was a stranger and you welcomed me, I was naked and you clothed me, I was sick and you visited me, I was in prison and you came to me.’
    --Matthew 25:35-36

    • @alastairleith8612
      @alastairleith8612 4 months ago

      George Orwell wrote some short stories about such people.

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 4 months ago

      I don't think an action can become despicable because of ulterior motives alone.

  • @TheRealLachlan
    @TheRealLachlan 5 months ago

    We are weaker than our fathers, Dupree. We don't even look like them.