“I suppose all that stuff about infinity and eternity means that you think you are justified in doing anything—absolutely anything—here and now, on the off chance that some creatures or other descended from man as we know him may crawl about a few centuries longer in some part of the universe.” “Yes—anything whatever,” returned the scientist sternly, “and all educated opinion—for I do not call classics and history and such trash education—is entirely on my side.” [from C.S. Lewis’ Out of the Silent Planet]
There’s been a decent amount of discussion of Effective Altruism (EA) lately.
Both Robin Hanson and Dylan Matthews have raised some flags.
Robin writes:
Robin doesn’t think that EAs are all that different from the philanthropists of previous eras, and he argues that the movement has flaws common to youth movements.
Dylan writes:
Dylan worries that EAs make impossible-to-verify claims about future threats, claims that give them mathematical cover to avoid addressing pressing current issues. Later in the article, Dylan does note that EAs have a chance to push philanthropy in the right direction.
My thoughts:
1) I think Robin underestimates the potential for EAs to shift philanthropy, a field that does not always have a particularly data-driven bent (in large part due to one of Robin’s favorite issues: status seeking). Of course, I don’t think the EAs are the first on the scene here (nor do I think they claim to be). Clearly, folks like Bill Gates, Peter Singer, and John Stuart Mill have been thinking along similar lines for a long time. But I do think EA has a chance to move the field further in the right direction. This could be a pretty big deal. Given the dollars involved, any movement on the margin will be useful.
2) Both Robin and Dylan point out that the EAs might be thinking about AI incorrectly (both in terms of how it will occur and how to weight its importance in giving). Perhaps. But these stances may be revised over time. A few years ago, I too fell for the “AI is the only thing we should think about” mindset, so I understand its allure. Just because this way of thinking has vocal support within the EA community does not mean that it will carry the day in perpetuity. Also, organizations such as GiveWell are surely not saying “only invest in AI research.” So while there is some risk that EAs will become overly AI-obsessed, I doubt this will ever be the only cause they work on (or say others should work on). And, as of now, the money going into such research does not seem that high, so there very well may be room for significantly more giving.
3) I think there are strong arguments that, as a species, we are underspending on certain existential risks, AI perhaps being amongst them. I’m not an expert here, but I know that in education reform we currently spend hundreds of millions of dollars a year on things that do little, so it’s hard for me to believe that this money wouldn’t be better spent on low-probability, high-risk causes, something humans are often awful at focusing on. EAs can probably help shift resources to these causes.
To summarize: I think there is a lot of low-hanging fruit to be picked here. Philanthropy and government waste a lot of money. The principles the EAs espouse have a chance to shift some of these wasted resources into more productive uses.
Of course, the EAs may succumb to status seeking instead of living by the values they espouse.
Time will tell.
But I bet the EAs will be a positive force in giving.