Category Archives: Effective Altruism

The best book of 2020

The Precipice by Toby Ord is the best book I’ve read and will likely read this year.

According to Ord, if my 1.5-year-old daughter reaches the age of 80, there’s a ~10% chance she will die in a human extinction event. This is horrifically sad.

Ord’s argument is clear:

  1. Humanity has the chance to survive for hundreds of millions of years.
  2. There is a set of known risks that may cause our extinction.
  3. We can assign probabilities to those risks.
  4. We should devote immense effort to reducing the risk of the most probable causes of extinction.

A teenager could follow the logic and sentiment of the argument.

And yet, as the current pandemic shows, humanity remains terribly unprepared.

If there is any silver lining to be had with Covid, I hope it’s a deeper investment in avoiding known existential risks.

Assigning Risk

Ord conducted extensive research across numerous fields to come up with a 100-year extinction probability for a set of risks.

Ord’s conclusion is that humanity has a 1 in 6 chance of being wiped out in the next hundred years.

Something is surely lost in Ord being a generalist trying to produce probabilities across numerous fields. But the job requires a generalist, and Ord is consistently thorough, mathematical, and shows good judgment in considering opposing viewpoints.

Of course, Ord could be wrong. But he has made a good contribution.

We need a thousand Ords doing similar work so we can get smarter (Vaclav Smil made another book-length contribution to the task).

This is the most important chart in the book:

[Chart: Ord’s estimates of existential risk over the next century, via the EA Forum]

Humanity is humanity’s greatest risk.

Our own actions drive the most probable negative outcomes, far more than natural causes like asteroids or volcanoes.

Even if you think Ord is too aggressive in his probabilities, remember that this is just the risk over one century. Risks such as engineered pandemics and nuclear war won’t go away anytime soon. So over a longer period things look even more grim.

And lest you think this is all theoretical: Ord recounts enough historical close calls with nuclear war and human-made biohazards that we should all be grateful we managed to survive the last hundred years.

Artificial Intelligence 

Within human-driven actions, Ord argues that AI is a major driver of risk. He assigns a 10% chance that unaligned AI will destroy humanity.

Ord’s probability assignment is drawn from the predictions of many experts in the field. He is also open that assigning risk to AI is difficult to do.

It’s very hard to predict when general AI will happen. Even experts in the field may be too far away from the needed technological breakthrough to give any useful time estimate. It could be akin to asking someone in the 1400s when humans would go to the moon.

But even if it doesn’t happen in the next century, it will probably occur in the next five thousand years (if we make it that long). And five thousand years is nothing in a species’ lifespan. So best to be prepared.

One minor point on AI: I didn’t think Ord gave enough attention to the ethical complications of programming AI to serve our interests, which is probably the easiest way to avoid an AI that wipes us out.

More advanced species replace less advanced species in global dominance all the time. This is a good thing. What if, somehow, pigs had been able to program us to serve their needs? Would that have been better for global flourishing? How sad would it be if we spent all our human capacity and labor making pigs happy? Very sad, I think, relative to what we could have been.

I think it’s ok to program AI to be ethical in the sense of avoiding harm to highly sentient beings (we too should treat pigs better). But we need to be open to the idea that AI should be able to pursue their own goals; that these goals might sometimes conflict with ours; and that there may always be an extinction risk to humans because of these conflicting goals. We should aim to reduce, not eliminate, the risks of AI rising.

Compassion and Perspective

Ord writes with deep compassion for those alive today and those who may be alive in the future.

Homo sapiens have been around for 200,000 years. The Earth could be habitable for humans for the next 800 million years. So much potential flourishing lies ahead of us, if we can make it there.

Thinking even further out: so long as we can eventually travel the distance between most stars in our galaxy, we could eventually settle it. How far is such a journey? About six light-years. Once we can travel in six-light-year intervals, almost all the stars in our galaxy will be reachable.

Ord provides some fun math: if we could learn to travel at 1% of the speed of light, and took 1,000 years to establish a settlement each time we reached a new star, we could settle the entire galaxy in one hundred million years.
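A rough back-of-envelope makes the orders of magnitude concrete. The galaxy-span figure below is my own assumption for illustration, not Ord’s exact model:

```python
# Back-of-envelope galaxy settlement timeline.
# GALAXY_SPAN_LY is an illustrative assumption, not one of Ord's exact inputs.
GALAXY_SPAN_LY = 100_000    # rough diameter of the Milky Way, in light-years
HOP_LY = 6                  # distance between neighboring stars (per Ord)
SPEED_FRACTION_C = 0.01     # travel at 1% of the speed of light
SETTLE_YEARS = 1_000        # years to establish each new settlement

hops = GALAXY_SPAN_LY / HOP_LY
years_per_hop = HOP_LY / SPEED_FRACTION_C + SETTLE_YEARS  # 600 traveling + 1,000 settling
total_years = hops * years_per_hop

print(f"~{total_years / 1e6:.0f} million years to hop across and settle the galaxy")
```

With these assumptions the total comes out in the tens of millions of years, the same order of magnitude as Ord’s one hundred million.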

Of course this is all speculative. But it’s also inspiring. I’d so much rather spend my time thinking about how to achieve this than most of what fills our newsfeeds.

What to do?

Ord provides concrete recommendations at the end of the book.

But I imagine many people who are compelled by the arguments, yet late in their careers, might feel a bit trapped about how to contribute in a meaningful way.

Politicians, journalists, academics, and philanthropists all have some ability to shift what humanity focuses on. And politicians and scientists have some ability to deliver the actual solutions.

But most of us will have little impact. Moreover, to the extent we are later in our careers and have deeply specialized knowledge, it may be better that we continue to pursue excellence in our current field rather than try to switch fields.

Perhaps the greatest impact of these types of books will be on the next generation.

I surely will pass Ord’s book along to my own daughter.

How much I gave to charity this year -> and to which cause -> and why my giving might be mistaken

Every year I write a post about how much I give to charity. I consider this an act of positive virtue signaling. If we’re going to compete on something, competing on how much we give to charity is the right kind of competition.

This year I’m slightly altering my reporting. Instead of reporting this year’s giving, I’m reporting a five-year charitable giving percentage. I consider this more honest reporting, as it smooths out year-to-year fluctuations.

Over the past five years, I’ve given away 8.5% of my total five year pre-tax earnings.
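The smoothing is simple arithmetic: total giving over five years divided by total pre-tax earnings over the same five years. A sketch with made-up yearly figures (the post reports only the 8.5% result):

```python
# Five-year smoothed giving rate. All yearly figures below are hypothetical.
earnings = [90_000, 100_000, 110_000, 120_000, 130_000]   # pre-tax, per year
giving   = [3_000, 12_000, 8_000, 10_000, 13_750]         # donations, per year

rate = sum(giving) / sum(earnings)   # smooths out year-to-year fluctuations
print(f"{rate:.1%}")                 # 8.5%
```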

How does this compare to your giving? I’d love to hear about how much you give and what you give to in the comments.

What I Give To: Expanding Bed Net Access to Reduce Malaria 

Most of my giving goes to the Against Malaria Foundation. They are highly recommended by GiveWell.

I donate to AMF because there is good evidence that bed nets save lives and because my marginal contribution increases the number of people who have bed nets. Despite the massive success of bed nets, there is still an on-going need.

Researchers studied the decline of malaria cases in Africa between 2000 and 2015. They found that the single most important contributor to the decline was insecticide-treated bed nets.

Bed nets were responsible for 68% of the 663 million cases averted in Africa between 2000 and 2015, or roughly 451 million averted cases. Given that children make up 72% of malaria fatalities, this is a truly remarkable impact for families.
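The 451 million figure follows directly from the two numbers in the study:

```python
total_averted = 663_000_000   # malaria cases averted in Africa, 2000-2015
bed_net_share = 0.68          # share attributed to insecticide-treated nets

averted_by_nets = total_averted * bed_net_share
print(f"{averted_by_nets / 1e6:.0f} million cases averted by bed nets")  # 451 million
```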


Additionally, researchers estimate that the malaria “penalty” to GDP ranges from 0.41% of GDP in Ghana to 8.9% of GDP in Chad, all of which could be regained following elimination of malaria.

Not only does my gift potentially save lives, it also positively impacts economic productivity.

Altogether, GiveWell estimates (very roughly) that every $4,000 spent on bed nets saves a life.

If this is true, and I keep up my giving, I will be able to save a lot of lives.
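To get a feel for what “a lot of lives” could mean, here is a hedged sketch. The annual giving figure and time horizon are hypothetical, since the post doesn’t disclose dollar amounts:

```python
COST_PER_LIFE = 4_000     # GiveWell's rough estimate for bed nets
annual_giving = 20_000    # hypothetical annual donation, for illustration only
years = 30                # hypothetical giving horizon

lives_saved = years * annual_giving / COST_PER_LIFE
print(f"~{lives_saved:.0f} lives over a giving career")
```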

As a new father, I can barely comprehend what it would be like to lose our child. I hope my giving will over time help hundreds of families avoid the pain and suffering caused by one of life’s worst tragedies.

When I think about whether or not it’s worth it to give, I think about our daughter.

Two Reasons (Out of Many) I Might Be Wrong

It is hard to help other people. I’ve tried to minimize this risk by giving in an area with lots of evidence, low operational complexity, and clear health benefits.

But the fact is bed nets are never going to get people out of extreme poverty.

The only way for people living in extreme poverty to get out of extreme poverty is through rapid economic growth. Bed nets will not cause rapid economic growth.

The problem is that I have no idea what will cause massive economic growth in Africa.

But here are some things I have considered funding:

Economic Research: Lant Pritchett makes the case that economic research allows us to learn truths that help countries escape poverty; i.e., research on the benefits of trade, property rights, and other liberal economic principles has led many countries to adopt these policies, which has led to massive increases in wealth. Perhaps the same could be said of research on domestic industry subsidization that forces subsidized companies to export competitively (some say this was a key driver for the Asian Tigers). I could fund this research in the United States, or work with others to set up research programs in local universities. A few friends and I could probably cobble together enough money to fund a full-time professor at a prestigious African university to work on these issues.

Technological Innovation: Technological progress is a primary cause of wealth creation. People living in Africa have much longer lifespans today because of technological innovations invented elsewhere. While my giving alone probably isn’t enough to impact technological research or venture capital investing, I’ve wondered about trying to get a group of 50 people or so and invest alongside established funds that are dedicated to technological innovation in globally important areas, such as energy. It’s plausible that in the case of investing, I could even get my money back and do a lot of good.

How I Feel About Giving

For the most part, giving makes me feel good. It feels morally correct to reduce my consumption so I can save the lives of children living in poverty.

But it also stings a bit. If you put together all the various taxes I pay, my tax burden is somewhere between 40 and 50% (such is life in California!). When you add my charitable contributions to this, that’s nearly 60% of my income out the door.

I also sometimes worry about my family. I live a very comfortable life and don’t want for anything. But life is unpredictable and this could change. If I or a loved one were in a severe accident, it’s quite plausible that I could run through my savings in under a decade. Giving to charity now reduces my ability to withstand big shocks later. Ultimately, I view the ability to withstand big shocks as a privilege that shouldn’t trump my duty to help others now, but it’s still something I worry about.

So there it is.

I give away 8.5% of my pre-tax income and I allocate much of it to malaria reduction. I hope this helps others in need.



Rational compassion is a competitive advantage


Paul Bloom recently wrote a book called Against Empathy.

The thesis of the book is: rational compassion > empathy.

In other words: empathy (caring how someone feels in the moment) is a poor guide for moral decision making when compared to rational compassion (which is more utilitarian in nature).

The difference is easiest to see when it comes to parenting: an overly empathetic parent might respond to a child’s failure by giving the child a cookie (thereby immediately decreasing the child’s suffering), while a parent utilizing rational compassion might help the child process her emotions (thereby reducing the probability of future instances of suffering).

While the idea is rather intuitive, we’re so hardwired for empathy that practicing rational compassion, especially at work, is very difficult.

Because it’s so hard to practice, and because most people are not good at it, the consistent use of rational compassion can be a competitive advantage for doing good in both the for-profit and non-profit sector.

List of Areas Where Rational Compassion > Empathy at the Work Place

Executing strategies that cause short-term harm for long-term gain: Tough decisions (such as school closures) cause short-term pain to others but can deliver significant long-term benefits. Being guided by rational compassion can help you get through this pain.

Pivoting and cannibalizing: Similarly, at times an organization needs to destroy existing program lines and harm existing beneficiaries of their work in order to pivot to a more productive model which will eventually add more value to more people (think Netflix going from mailbox to streaming). Empathy for existing employees and customers can blind one from the rationally compassionate act of eventually serving more people better.

Performance feedback: Rational compassion will lead you to give very direct and practical feedback so a colleague can improve her performance and achieve her and the organization’s goals. Having empathy for underperformance will lead to the avoidance of direct conversations, which in the short term causes more pain.

Firing people: Too much empathy for an individual who needs to be let go can cause immense harm to the people you are trying to serve. Especially in philanthropic work, firing a relatively privileged person in order to better serve people in extreme need is the rationally compassionate thing to do.

Accepting flaws of ambitious people: Sometimes ambitious people have a lot of flaws, which can lead you to empathize with all the people they are negatively impacting. However, these flawed people can also change the world for the better. Analyzing their actions through a rational compassion lens will help you understand if it’s worth supporting or partnering with people who are flawed but who can help the world become amazingly better. It will also help you avoid working deeply with nice people who are not effective.

The Risk of Rational Compassion 

One of the hardest parts of rational compassion is that it often involves overriding the legitimate short-term needs of others.

In other words: you’re saying you know what’s better for someone than she does.

While this is less of a tension in managerial situations (it’s your job to make feedback, coaching, and firing decisions) and for-profit work (the customer will ultimately hold you accountable), in philanthropy (where it’s your job to help others) this can be a deadly sin.

It’s a blurry line between rational compassion and technocratic hubris.

There’s no easy way around this, though research and accountability can help.

In education, test scores, attainment, and parent demand can offer medium-term feedback loops that check incorrect rationally compassionate assumptions.

But while there are risks with rational compassion, most of society is so tilted toward empathy (especially in the education sector!) that an increase in the practice of rational compassion would be a welcome turn.


More Money or More Charter Schools?

I review some of the recent research in a post at Education Next.

Here’s some math from the post:

Increasing Funding by Even 10% is Insanely Expensive

Consider a hypothetical town with 50,000 students, all of whom are in poverty, and a per-pupil allocation of $10,000.

Over ten years, increasing per-pupil funding by 10% will cost the town half a billion dollars.

To put the costs in context: on average, it costs around $1,000,000 to launch a new charter school that serves 500 students.

This puts the cost of the charter intervention at roughly $100,000,000.

Also: the charter costs are one-time costs.

So over a ten-year period, the total bill for increasing funding by 10%: $500 million.

The total cost for scaling urban charters to serve all 50,000 students: $100 million.
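The arithmetic behind those two totals can be checked directly:

```python
students = 50_000
per_pupil = 10_000          # dollars per student per year
increase = 0.10             # a 10% funding bump
years = 10

# Recurring cost of the funding increase over ten years
funding_cost = students * per_pupil * increase * years   # $500,000,000

# One-time cost of launching enough charter schools for every student
students_per_school = 500
launch_cost = 1_000_000
charter_cost = (students / students_per_school) * launch_cost   # $100,000,000

print(f"${funding_cost:,.0f} recurring vs ${charter_cost:,.0f} one-time")
```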

For a fifth of the cost, you probably get 3-5X the achievement impact.

Do read the whole piece.

3 Reflections on 9 Months of Working in Philanthropy

I’ve been working in philanthropy for 9 months. See below for a few reflections.

And ping me in the comments if you have any feedback or advice.

1. The Best Thing I Did was Pitch a Mission and a Strategy for a Fund

This might be idiosyncratic to me, but I think I would have been unhappy and perhaps ineffective if I had joined a philanthropic organization with a set mission and strategy in education.

For both the Arnold Foundation and the Hastings Fund, I submitted an investment plan that included mission, strategy, and estimated budget *before* I joined.

In this sense, the dynamic was more akin to a venture capitalist raising a fund than it was a foundation hiring an employee to execute an existing operational plan.

I wonder if this might be a better way to do philanthropy, whereby philanthropists are more akin to sole or limited investors in funds and projects (perhaps like Alphabet?) than they are uniform operational entities.

I think that this model would be more conducive to entrepreneurship, risk taking, and innovation – with foundation boards evolving into resource allocation bodies that increase or decrease investment across a portfolio of  funds that are each led by very autonomous executives.

Instead of only hiring employees and soliciting grant proposals, foundations should also seek proposals for issue based funds.

2. The Biggest Mistake I’ve Made (So Far) has Not Been Developing My Investing Skills Quickly Enough

At New Schools for New Orleans, one of my weaknesses was investing: I don’t think our school creation hit rate was good enough and there were a few projects we should have completely avoided. The organization is better at this now, but it was not my strength.

To prepare for a role where an even larger part of my job would be investing, I read a lot of books, talked to a lot of people, and tried to build a tight framework for selecting organizations.

And yet I still made some unforced errors. I brought projects to be approved for investment that, in hindsight, were not a great fit for what we’re trying to accomplish.

Specifically, the errors generally fell into a few categories:

  • The investment was best made at a local level: Given that we almost always work with local partners, we have to determine when we invest directly and when we rely on local leaders to make calls. A few times, I brought forth investments to be made at the national level that were really local decisions.
  • The upside was not high enough: A national foundation’s most limited resource is time. It is not money. There are only so many projects you can do diligence on and only so much time you get with your board. The opportunity cost for spending a lot of time on low-upside endeavors is very high.
  • The investment was not tightly enough aligned to our goals, strategy, and expertise: There are a lot of good ideas out there that, ultimately, should be funded by other foundations. I spent too much time on good ideas that weren’t in our sweet spot. I should have quickly recommended that the entrepreneur talk to a more aligned foundation.

3. I’m Not Sure About How to Navigate Investment Structure

There are numerous ways to define an investment relationship. Some foundations act like VC firms and take a board seat. Some foundations are extremely operational and play a shadow management role, which can include everything from weekly phone calls to shared staffing. Some foundations write checks and then just monitor annual goals and benchmarks.

Any of these models can probably be effective in the right situation and disastrous in the wrong situation.

In my situation, I happen to have previously held the role of many of the leaders of our grantees (being CEO of NSNO); we work with leaders with very different experience levels (some have been CEOs for 5+ years and some are launching new entities); we work with leaders across very different local environments (some localities are in the infancy in their reform efforts and some are 20+ years in); and we work in localities with varying levels of foundation activity (in some places we are the primary funder and in others we are one of many).

And in every case I’m not actually living in the community where the work is taking place.

All of these variables make it difficult to adopt a singular structural approach to an investment relationship.


Overall, it’s been a great nine months. I feel lucky to be doing the work.

Do You Give Enough to Charity? Here’s How Much I Give.

Yesterday I wrote about Effective Altruists.

It’s one thing to judge the charitable giving of others, so today I’ll write about myself.

In doing so, I hope to make the issue of giving more relevant to a reader of this blog, as well as to increase accountability for me to give. So long as this blog is going, I’ll be public about what percentage of my income I’m giving to charity, as well as where I donate it.

This Dylan Matthews post is a good foray into individual giving. In reviewing Jeb Bush’s charitable giving, he writes:

[Screenshot: excerpt from Dylan Matthews’ post on Jeb Bush’s charitable giving]

So how am I doing?

I calculated my giving like this: (my giving to charities) + (10% of my taxes paid).

From what I gather, about 10% of our taxes go to helping the poor, so I think it’s fair to count that as (forced) charitable giving. Others might disagree and view this as cheating. Perhaps.

All told: 6% of my earnings went to helping those less well off.
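The formula is easy to express. Every dollar figure below is hypothetical, since the post reports only the final 6%:

```python
# (my giving to charities) + (10% of my taxes paid), as a share of earnings.
# All dollar figures are made up for illustration.
earnings = 100_000
direct_giving = 2_000
taxes_paid = 40_000

effective_giving = direct_giving + 0.10 * taxes_paid  # count 10% of taxes as forced giving
print(f"{effective_giving / earnings:.0%}")  # 6%
```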

Where did I give? 

100% of my charitable giving went to the Schistosomiasis Control Initiative (SCI).

I made this donation because GiveWell ranked it as a top charity.

I chose to give to deworming rather than the Against Malaria Foundation (another top-rated charity) for no real rational reason. I just picked it.

I chose not to fund GiveDirectly (another top-rated charity) because my read on the evidence is that conditional cash transfers work better than direct cash transfers. GiveWell has spent more time on this than I have, so I surely might be wrong, but I had questions about GiveWell ranking unconditional cash transfers so highly.

For what it’s worth, I don’t think GiveWell is the be-all and end-all of charitable giving. Many organizations are doing work that is difficult to measure in controlled experiments. But I don’t have a lot of time to evaluate these organizations, so I’d rather give to a place that has been vetted than one that has not.

If either GiveWell or another charity evaluator began attempting to analyze the effectiveness of these types of organizations, I might shift my charity allocations.

I can do better

How much is enough? I surely won’t be able to answer that question in this post.

But I’d like to get it up to 10%.

In December, I’ll let you know if I do.

Hopefully if we’re all more transparent about how much we give, as well as where we give it, we can increase the amount and effectiveness of our collective giving.

Will Effective Altruists be Effective?

“I suppose all that stuff about infinity and eternity means that you think you are justified in doing anything—absolutely anything—here and now, on the off chance that some creatures or other descended from man as we know him may crawl about a few centuries longer in some part of the universe.” “Yes—anything whatever,” returned the scientist sternly, “and all educated opinion—for I do not call classics and history and such trash education—is entirely on my side.” [from C.S. Lewis’ Out of the Silent Planet]

There’s been a decent amount of discussion of Effective Altruism (EA) lately.

Both Robin Hanson and Dylan Matthews have raised some flags.

Robin writes:

[Screenshot: excerpt from Robin Hanson’s post on Effective Altruism]

Robin doesn’t think that EAs are all that different from the philanthropists of previous eras, and he thinks the movement itself has flaws common to youth movements.

Dylan writes:

[Screenshot: excerpt from Dylan Matthews’ post on Effective Altruism]

Dylan worries that EAs make impossible-to-verify claims about future threats that give them mathematical cover to avoid pressing current issues. Later in the article, Dylan does note that EAs have a chance to push philanthropy in the right direction.

My thoughts:

1) I think Robin underestimates the potential for EAs to shift philanthropy, a field that does not always have a particularly data-driven bent (in large part due to one of Robin’s favorite issues: status seeking). Of course, I don’t think the EAs are the first on the scene here (nor do I think they claim to be). Clearly folks like Bill Gates, Peter Singer, and John Stuart Mill have been thinking along similar lines for a long time. But I do think EA has a chance to move the field further in the right direction. This could be a pretty big deal. Given the dollars involved, any movement on the margin will be useful.

2) Both Robin and Dylan point out that the EAs might be thinking about AI incorrectly (both in terms of how it will occur and how to weight its importance in terms of giving). Perhaps. But these stances may be revised over time. A few years ago, I too fell for the “AI is the only thing we should think about” mindset, so I understand its allure. Just because this way of thinking has vocal support within the EA community does not mean it will carry the day in perpetuity. Also, organizations such as GiveWell are surely not saying “only invest in AI research.” So while there is some risk that EAs will be overly AI-obsessed, I doubt this will ever be the only cause they work on (or say others should work on). And, as of now, the money going into such research does not seem that high, so there very well may be room for significantly more giving.

3) I think there are strong arguments to be made that, as a species, we are underspending on certain existential risks, AI perhaps being among them. I’m not an expert here, but I know in education reform we are currently spending hundreds of millions of dollars a year on things that do little, so it’s hard for me to believe that this money wouldn’t be better spent on low-probability, high-risk causes, something humans are often awful at focusing on. EAs can probably help shift resources to these causes.

To summarize: I think there is a lot of low-hanging fruit to be picked here. Philanthropy and government waste a lot of money. The principles the EAs espouse have a chance to help shift some of these wasted resources into more productive uses.

Of course, the EAs may succumb to status seeking instead of living by the values they espouse.

Time will tell.

But I bet the EAs will be a positive force in giving.