Category Archives: Existential Threats

The best book of 2020

The Precipice by Toby Ord is the best book I’ve read and will likely read this year.

According to Ord, if my 1.5-year-old daughter reaches the age of 80, there’s a ~10% chance she will die in a human extinction event. This is horrifically sad.

Ord’s argument is clear:

  1. Humanity has the chance to survive for hundreds of millions of years.
  2. There are a set of known risks that may cause our extinction.
  3. We can assign probabilities to those risks.
  4. We should devote immense effort to reducing the risk of the most probable causes of extinction.

A teenager could follow the logic and sentiment of the argument.

And yet, as the current pandemic shows, humanity remains terribly unprepared.

If there is any silver lining to be had with Covid, I hope it’s a deeper investment in avoiding known existential risks.

Assigning Risk

Ord conducted extensive research across numerous fields to come up with a 100-year extinction probability for a set of risks.

Ord’s conclusion is that humanity has a 1 in 6 chance of being wiped out in the next hundred years.

Something is surely lost in Ord being a generalist trying to produce probabilities across numerous fields. But the job requires a generalist, and Ord is consistently thorough, mathematical, and fair in considering opposing viewpoints.

Of course, Ord could be wrong. But he has made a good contribution.

We need a thousand Ords doing similar work so we can get smarter (Vaclav Smil made another book-length contribution to the task).

This is the most important chart in the book:

[Chart: Ord’s estimated existential risk per cause over the next century]

Humanity is humanity’s greatest risk.

Our own actions drive the most probable extinction scenarios, far more so than natural causes like asteroids or volcanoes.

Even if you think Ord is too aggressive in his probabilities, remember that this is just the risk over one century. Risks such as engineered pandemics and nuclear war won’t go away anytime soon. So over a longer period things look even more grim.
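To see why a longer horizon looks grimmer, here’s a quick sketch. The arithmetic is mine, not Ord’s, and it assumes a constant, independent 1-in-6 risk per century, which is a simplification:

```python
# Cumulative extinction risk if each century independently carries
# Ord's estimated 1-in-6 risk. (A simplification: real risks are
# neither constant nor independent across centuries.)
def cumulative_risk(per_century_risk, centuries):
    """Probability of at least one extinction event over `centuries`."""
    return 1 - (1 - per_century_risk) ** centuries

print(round(cumulative_risk(1 / 6, 1), 3))   # 0.167 over one century
print(round(cumulative_risk(1 / 6, 10), 3))  # 0.838 over a millennium
```

Under these toy assumptions, a risk that looks survivable over one century becomes near-certain doom over ten.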

And lest you think this is all theoretical: Ord recounts enough historical close calls with nuclear war and human-made biohazards that we should all be grateful that we managed to survive the last hundred years.

Artificial Intelligence 

Among human-driven actions, Ord argues that AI is a major driver of risk. He assigns a 10% chance that unaligned AI will destroy humanity.

Ord’s probability assignment is drawn from the predictions of many experts in the field. He is also open that assigning risk to AI is difficult to do.

It’s very hard to predict when general AI will happen. Even experts in the field may be too far away from the needed technological breakthrough to be able to give any useful time estimate. It could be akin to asking someone in the 1400s when humans will go to the moon.

But even if it doesn’t happen in the next century, it will probably occur in the next five thousand years (if we make it that long). And five thousand years is nothing in a species’ lifespan. So best to be prepared.

One minor point on AI: I didn’t think Ord gave enough attention to the ethical complications of programming AI to serve our interests, which is probably the easiest way to avoid an AI that wipes us out.

More advanced species replace less advanced species in global dominance all the time. This is a good thing. What if, somehow, pigs had been able to program us to serve their needs? Would that have been better for global flourishing? How sad would it be if we spent all our human capacity and labor making pigs happy? Very sad, I think, relative to what we could have been.

I think it’s ok to program AI to be ethical in the sense of avoiding harm to highly sentient beings (we too should treat pigs better). But we need to be open to the idea that AI should be able to pursue its own goals; that these goals might sometimes conflict with ours; and that there may always be an extinction risk to humans because of these conflicting goals. We should aim to reduce, not eliminate, the risks of AI rising.

Compassion and Perspective

Ord writes with deep compassion for those alive today and those who may be alive in the future.

Homo sapiens have been around for 200,000 years. The Earth could be habitable for humans for the next 800 million years. So much potential flourishing lies ahead of us, if we can make it there.

Thinking even further out: so long as we can eventually travel the distance between neighboring stars, we could eventually settle our entire galaxy. How long is such a journey? About six light years. Once we can travel in six-light-year intervals, almost all the stars in our galaxy will be reachable.

Ord provides some fun math: if we learned to travel at 1% of the speed of light, and took 1,000 years to establish a settlement at each new star, we could settle the entire galaxy in one hundred million years.
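Ord’s math can be sanity-checked with a back-of-the-envelope calculation. The hop distance, speed, and settlement time come from the passage above; the ~100,000-light-year galactic diameter is my assumption:

```python
# Back-of-the-envelope check on the settlement math. The 6-light-year
# hops, 1% of light speed, and 1,000-year settlement time come from
# the passage above; the ~100,000-light-year galactic diameter is my
# assumption.
hop_ly = 6            # light-years per hop
speed_c = 0.01        # fraction of light speed
settle_years = 1_000  # years to establish each settlement
galaxy_ly = 100_000   # rough diameter of the Milky Way

travel_years = hop_ly / speed_c               # 600 years per hop
years_per_hop = travel_years + settle_years   # 1,600 years total
wavefront_ly_per_year = hop_ly / years_per_hop

crossing_years = galaxy_ly / wavefront_ly_per_year
print(f"{crossing_years / 1e6:.0f} million years")  # 27 million years
```

This crude expansion-wavefront estimate comes in under Ord’s hundred million years, but it lands within the same order of magnitude, so the spirit of the point survives.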

Of course this is all speculative. But it’s also inspiring. I’d so much rather spend my time thinking about how to achieve this than most of what fills our newsfeeds.

What to do?

Ord provides concrete recommendations at the end of the book.

But I imagine many people who are compelled by the arguments, but late in their careers, might feel a bit trapped about how they can contribute in a meaningful way.

Politicians, journalists, academics, and philanthropists all have some ability to shift what humanity focuses on. And politicians and scientists have some ability to deliver the actual solutions.

But most of us will have little impact. Moreover, to the extent we are later in our careers and have deeply specialized knowledge, it may be better that we continue to pursue excellence in our current field rather than try to switch fields.

Perhaps the greatest impact of these types of books will be on the next generation.

I surely will pass Ord’s book along to my own daughter.

My favorite books of 2017

I felt this was a pretty weak year for books. I don’t know why. I only really loved three books that came out this year.

Especially with regards to work, I found I learned a lot more by doing rather than reading. I’m curious if this trend will continue.

That being said, the best books were incredibly good: because of the first two books below, I’ve tried to up my meditation to 40 minutes a day and reduce social media to under 30 minutes a day. I’ve also done a lot to reduce iPhone screen time, and I’ve spent much more time contemplating the nature of the self. My meditation practice is a little less tactical and includes more philosophical exploration.

I feel more in control of my mind than I have in years.

Why Buddhism is True by Robert Wright

Robert’s thesis is:

  1. Our brains were mostly built during hunter-gatherer times.
  2. The modern world has hijacked useful desires (for food, sex, stimulation, and status) so that they are no longer that useful (we overeat, watch too much porn, constantly check our phones, etc.).
  3. There is no CEO in your brain. Your brain is made up of a bunch of competing desires / modules. And whichever you feed and reward will grow stronger.
  4. Meditation is a technique that can reduce the power of the feeling -> action sequence. Desires need not be orders if they are observed with distance and objectivity.
  5. The idea that there is no CEO of the brain also fits Buddhism’s core philosophical tenet that the self is an illusion.

I think arguments #1 through #4 are correct. Robert surveys a mounting body of scientific evidence that makes this case, from evolutionary psychology to neurobiology.

I think #5 is directionally correct, but that ultimately humans do not have a brain that is powerful enough to make hard claims about these types of metaphysical conditions.

iGen – Jean Twenge

Jean’s thesis is that:

  1. Socio-economic conditions (in part, families having more wealth and fewer children) have led to a lengthening of childhood. High school is the new middle school.
  2. The iPhone has fundamentally altered how teens interact.
  3. Taken together, changing socio-economic conditions and the smartphone have led children to be more tolerant, less risk-taking (sex, alcohol, and driving are down… marijuana is up), more insecure, less happy, less religious, more concerned with wealth, and more politically independent.

I’m not an expert in the field, but I found her argument compelling.

This generational shift is a striking example of how productivity and technology can combine to change societal values.

For you parents in the crowd, she gives thoughtful parenting recommendations at the end of the book.

The Dark Forest Trilogy

I previously reviewed the books here. Some of the best science fiction I have ever read.

The series is premised on this logic path:

  1. The primary goal of each civilization is to survive.
  2. There are finite resources and space in the universe.
  3. Civilizations tend to expand.
  4. Civilizations tend to advance technologically.
  5. You have no way of truly knowing whether an alien species is peaceful or hostile.

If this ends up being true in our reality, we will likely be destroyed by more technologically advanced aliens.

Could the Earth ever become a Dark Forest?

In his Three-Body Problem trilogy, Liu Cixin builds his novels around the idea that the universe is a Dark Forest – i.e., when you’re moving through a dark forest and you hear the rustling of leaves, the optimal reaction is to shoot first.

More fully, the Dark Forest theory of the universe is built upon these first principles:

  1. The primary goal of each civilization is to survive.
  2. There are finite resources and space in the universe.
  3. Civilizations tend to expand.
  4. Civilizations tend to advance technologically.
  5. You have no way of truly knowing whether an alien species is peaceful or hostile.

So, if you detect an alien species – what do you do?

Under the Dark Forest theory, you kill them.

The reason you kill them is that even if they’re not hostile now, at some point they will want to survive, need more resources, and have advanced technology – which means they might just kill you.
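The grim logic above can be sketched as a toy expected-value comparison (my framing, not Liu’s): when the downside of waiting is annihilation, even a small probability of future hostility makes striking first look “rational.”

```python
# Toy expected-value model of the Dark Forest decision (my
# simplification, not from the novels).
def expected_loss_wait(p_hostile, loss_if_destroyed):
    """If you wait, you lose everything whenever the other side turns hostile."""
    return p_hostile * loss_if_destroyed

def expected_loss_strike(strike_cost):
    """If you strike first, you pay only the (finite) cost of the attack."""
    return strike_cost

# Even a 1% chance of future hostility dominates, because the loss
# from annihilation is so much larger than the cost of striking.
print(expected_loss_wait(0.01, 1e12) > expected_loss_strike(1e6))  # True
```

The calculation only tips the other way if the loss from being destroyed is bounded, or if trust can push the probability of hostility close enough to zero.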


I have no idea if the Dark Forest theory accurately describes the first principles of the universe.

But it made me think about something we might be able to understand with greater precision: could the Earth ever become a Dark Forest?


Right now, the Earth is not a Dark Forest largely because of nuclear deterrence.

North Korea might very well recognize that the existence of the United States will likely bring down their regime at some point, but they can’t act on this knowledge because we could respond to any nuclear attack with an attack that wipes them out.

Even for more robust nuclear powers, each side must live with the fact that a massive nuclear war could destroy all of humanity.

Culture also acts against the Earth becoming a Dark Forest. The scaling of large societies has in part been sustained through cultural evolution: we now identify with nation and world instead of just kin, which presumably mitigates the fifth principle above (lack of trust).

But these conditions are not immutable. So it’s worth considering: how could one-sided deterrence, mutual assured destruction, and trust… end?


Unfortunately, it’s not hard to describe a scenario:

  1. There is a shortage of an important resource that is necessary to a nation’s survival, which makes securing that resource more important to a nation’s survival than the benefits of trade.
  2. This shortage, as well as the already significant cultural differences between existing rival nations (such as USA and China), erode trust.
  3. Technology advances in a manner that allows a nation launching a first strike to kill all other humans, prevent a return strike, and preserve itself.

How about this: there’s a water shortage that fuels nationalism, that leads to rising animosity between populous nations, and then one of them develops a synthetic virus that instantly kills all humans that haven’t received the vaccine – a vaccine that the attacking nation released in their own water supply the week before launching the attack.


I’m not an expert in these issues, so maybe I’ve gotten much wrong.

But if the universe can become a Dark Forest, the Earth probably can too.

If this is true, we’ll need a deterrence system for whatever set of weapons come after nuclear warheads.

But what are the odds that for every new weapon we develop we’ll also near simultaneously have an equally strong deterrence system?

They don’t seem high.


Please do let me know where my logic is off.

Here’s a better framework for thinking about Trump

As I’m reading anti-Trump and pro-Trump commentary, I’m finding very few pieces that fully explore the different possibilities of a Trump presidency.

So I tried to create a graph to chart what I think are three dominant considerations we should be using to understand the president elect.

A Framework for Understanding the President Elect 


This framework captures 3 primary spectra:

Social Liberalism: Does a leader have respect for people of all races, genders, sexualities, religions, and places of birth?

Economics: Does a leader lean more toward populist economics (which often involves trade protectionism and anti-immigration stances) or globalist economics (which generally leans towards free trade and more immigration)?

Rule of Law: Does a leader behave within the established norms of domestic democracy and international rule of law, or does she lead by greatly damaging democratic institutions and grossly violating international law?

To chart some historical examples, I spent a few minutes trying to plot the last few American presidents and Hitler. I was just aiming to be directionally correct but am in no way trying to argue that I plotted these perfectly.

Each Variable is Very Important, But I Think Rule of Law is Probably Most Important

You could make reasonable arguments for each variable being the most important consideration.

If I had to argue for social liberalism, I’d say that even someone who works within the rule of law can do terrible harm to minority populations.

If I had to argue for economics, I’d say that someone who wrecks the international economic system could unleash untold suffering on the poor of the world.

In arguing for rule of law, I’m mostly arguing from the recent historical fact that so many of the world’s mass deaths have been caused by dictators, such as Hitler, Mao, and Stalin.

This graph is illustrative:


I’d need to think harder before having stronger opinions on the relative importance of each variable.

The only thing I am confident in is that they’re all important.

When to Build Bridges, When to Join the Resistance 

I think both Trump and Clinton supporters have reasonable grievances about the world.

I don’t think that it’s in our country’s best long-term interest for each side to: (1) argue loudly about their legitimate grievances, (2) not listen to the other side’s legitimate grievances, and (3) not differentiate between policy differences and threats to the survival of the nation.

I think economics and immigration are policy differences.

I think respect for rule of law is an issue that gets at the survival of our nation.

And I think social liberalism sits between the two, in that it determines who receives the full benefit of the rule of law within our country, which in its most severe form can threaten the survival of our nation (slavery) but in other cases can be solved through the political process (gay marriage).

I think it’s worth trying to build bridges around policy and less severe forms of social illiberalism.

I think it’s worth considering more radical forms of resistance in cases of major threats to the rule of law and severe cases of social illiberalism.

In Sum

Our country is deeply divided about many issues.

It’s important to tease out the differences between these issues, both to understand ourselves and to understand the president elect.

I know that this is a rather unemotional way of trying to understand issues riven with deep emotions.

I’ve felt a lot over the past week – it’s been especially hard to hear stories of children in our schools who don’t feel safe – and I’ll continue to listen to these emotions.

But I also want to try and understand the way forward, and, for me, frameworks help.

Book Review: The Wealth of Humans


I just finished Ryan Avent’s The Wealth of Humans: Work, Power, and Status in the Twenty-First Century.

Summary: Economic Disruptions Require New Social Contracts, which can be a Bloody Process 

Ryan’s primary argument is as follows:

1. Periods of rapid technological innovation usually lead to increased prosperity, but the transition can be very disruptive to the existing social and economic order.

2. During these periods of disruption, workers, the economic elite, and those in governmental power have to create what the social contract will be for the new order. This is a very difficult process that involves a lot of trial and error.

3. The last time this happened was after the industrial revolution, when numerous wars and revolutions eventually led to a few dominant orders: capitalism and the welfare state (in the West, South and Central America, and parts of the East), socialist dictatorship (in China), and resource-based dictatorships (primarily in the Middle East). Of these different variations, capitalism + the welfare state have proven most successful.

4. The digital revolution, which is being driven by continuing gains in computing power, will require a new social order, especially if this revolution leads to massive surpluses of labor.

5. Creating a new social contract for this age could be just as bloody as – or bloodier than – the last go-round (WWI, WWII, Mao, the Cold War, etc.).

Reflection #1: Time Between Disruptions is Decreasing, Power of Weapons is Increasing 

I generally agree with Ryan’s argument. One additional issue to consider is that the time between economic singularities is decreasing. It took us a very, very long time to get from hunter-gatherers to farmers, and a very long time to get from farming to the industrial revolution.

It’s barely taken us 150 years to get from the industrial revolution to the computing revolution.

And it’s likely that the computing revolution will seed another revolution (perhaps general artificial intelligence) in another 50-100 years – and who knows what the next economic singularity will spring from superior artificial intelligence…

Additionally, technological advancement increases the power and scope of our weapons. We will likely continue to build new weapons that can wipe out humanity, such as synthetic viruses.

In short, the time between the rolls of the dice will decrease, while our odds of losing any given die roll may increase.

One way to reduce the odds of losing is to disperse ourselves and/or our descendants across the cosmos in order to decrease the fragility of single-planet living.

Reflection #2: A Minor Guess of How to Ease Into the Next Social Order

The more I puzzle over the accelerating impacts of the digital revolution, the more I come back to wage subsidies as the best tool we have for stumbling our way into the next social order.

While a universal basic income might at some point be warranted, it would be incredibly expensive (given current productivity), and we don’t yet know how to structure a modern society where many people simply don’t work.

Wage subsidies, on the other hand: (1) maintain the connection between work and income, (2) lead to less economic distortion, especially compared to minimum wage raises, (3) can be raised over time to maintain a sense of economic progress, and (4) help avoid an economy where purchasing power (and presumably social power) consolidates with the top 10%.
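To make this concrete, here’s a minimal sketch of how a wage subsidy might be structured. The schedule and numbers are hypothetical illustrations of mine, not a proposal from the book:

```python
# Hypothetical wage-subsidy schedule (illustrative numbers only):
# the government pays half the gap between the market wage and a
# target wage, so the subsidy phases out automatically as market
# wages rise.
def subsidized_wage(market_wage, target_wage=15.0, match_rate=0.5):
    subsidy = max(0.0, (target_wage - market_wage) * match_rate)
    return market_wage + subsidy

print(subsidized_wage(9.0))   # 12.0 (9 + half of the $6 gap)
print(subsidized_wage(15.0))  # 15.0 (no subsidy at or above the target)
```

Unlike a minimum-wage increase, a schedule like this doesn’t raise the employer’s cost of hiring, which is part of why wage subsidies cause less economic distortion.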

Reflection #3: What is Inflationary? What is Deflationary?

Over the past few decades, goods have faced deflationary pressures (most things you buy for day-to-day uses are cheaper now).

Education and healthcare, on the other hand, have been subject to inflationary pressures (they cost more than they used to).

From a pure material progress standpoint, a deflationary future means that wage subsidies might not be necessary to keep improving welfare.

However, if healthcare, housing, and education continue to eat up budgets, people will need higher wages to keep up, especially those who don’t receive government subsidies in these areas.

Lastly, it’s possible that even if purchasing power increases, if income inequality is still increasing, social unrest could still be a major issue.

All this is to say: it’s worth looking at both income and expense.

Reflection #4: Consider Yourself, Consider the Monkey, Consider the Dog 

To the extent humans survive the new social order that comes after an artificial intelligence singularity, it’s worth considering what this existence might be like.

Dogs, for example, have done quite well during the era of human dominance. Specifically, they were bred to be happier.

Dogs have also been provided a universal basic income in the form of shelter, food, and treats.

I often struggle with the gap between what I believe to be the best version of myself and the actual reality of the current version of myself. I sometimes get depressed by the lack of progress I’m making.

The fact is that it’s incredibly difficult to become an even better person once you’ve eaten up the low-hanging fruit of adopting classical liberal beliefs and not murdering your fellow humans.

So it’s worth noting that humans (perhaps?) have created the best version of dogs.

Perhaps our descendants will do the same for us, especially if we are able to bring value to whatever it is they are seeking in life. Interestingly enough, more intelligent primates have not fared as well as dogs and cats. So don’t assume that being #2 in the intelligence pecking order means you’ll be ok.

This may all sound crazy, but it seems extremely unlikely that humans are the endpoint of evolution. So it’s worth considering – what comes next?

Elon Musk vs. the Environmentalists – Some Lessons


One of the core values of our team is: we face and solve brutal realities.

Another one of our values is: we ask why. 

Recently, at a team retreat, we read and discussed Musk’s biography. It is well worth reading.

In reading the book – and reflecting on our values – I was struck by how Musk differs from many environmentalists.

Facing the Brutal Reality of Climate Change

Both Musk and the environmentalists care about the future of humanity.

Both Musk and environmentalists believe that humanity is at-risk due to human induced climate change.

In this sense: each has faced the brutal reality of the dangers of climate change.

Because of this brutal reality, environmentalists are doing important policy and conservation work.

Because of this brutal reality, Musk launched Solar City and Tesla.

Facing the Brutal Reality of Single Planetary Existence 

But Musk, in considering the threat of environmental disaster, did not stop asking “why” when it comes to the risk of human extinction.

Rather than being satisfied with the (true) morality tale of humans destroying the planet, he kept asking why humans were so exposed to environmental collapse on Earth in the first place.

The answer is of course obvious: Earth is the only planet we live on. As it goes, so do we.

In terms of human continuity, it is very fragile to only live on one planet. Ultimately, even natural environmental shifts (volcano explosion, meteor, etc.) can destroy humanity. Musk realized this was a major problem that many environmentalists did not seem to be working on.

Yes, slowing human made climate change is important, but it is only a stop-gap solution. Leaving Earth is the more sustainable solution.

Completing this logic pathway (of asking why humanity is truly at risk) only requires the knowledge one might pick up in high school.

Ultimately, getting down to the root solutions is as much about mental habits as it is about knowledge: facing brutal realities, continuing to ask “why,” having the boldness of vision to put forth a solution – this is what is needed… as well as having the operational capacity to make a good attempt at realizing this vision.

It is rare that all these qualities sit in one person. This is what makes Musk so special.

And it is why we have SpaceX.

Our Work 

I’d like to think that some of our greatest successes in New Orleans were because we faced brutal realities and we asked “why” a lot.

Some of our biggest failures likely came from a failure to live out these two values.

When it comes to facing brutal realities, I find the following to be of use: soberly analyzing existing performance data; reading the criticisms of thoughtful people in other tribes; taking the time to quantitatively roll forward your expected impact over 10-20 years.

When it comes to asking “why,” I find the following to be useful: sitting on potential solutions before acting on them; setting up a culture and process for rigorous team questioning; having a board of directors that constantly questions your work; reading broadly to build up false-solution pattern recognition.


Existential Threat Resource Allocation: Are We Doing it Right?

In the last ten years, there has been an uptick in attention paid to existential threats (threats that could wipe out humanity). This is potentially great news.

Last night, I watched an episode of Elementary, which is one of my favorite television shows. The episode’s plot revolved around existential threats, with a focus on artificial intelligence.

That was enough to get me to write this post.


A couple years ago, I read Nick Bostrom’s Global Catastrophic Risks, which catalogues the various threats that might lead to human extinction.

Since then, I’ve maintained a passing interest in the field. I even went to the Singularity Summit.

Over the past few weeks, I’ve been poking around the internet trying to get more caught up on the field.

The good news: there seems to be a lot of talented people working on these issues.

The bad news: I’ve found very little publicly available data analysis on the issue. I was curious which risks were most likely to occur; which risks were most solvable by human intervention; and the amount of resources currently being devoted to each risk.

I found very little of this information. Of course, perhaps this information exists in secret government departments; or perhaps the research exists and I just did a poor job of finding it.

I did see that the Future of Humanity Institute has launched a Global Priorities Project, which aims to answer some of the questions, I think. The Centre for the Study of Existential Risk also seems to be working on the issue. But neither of them have put out reports that I could find.

But, overall, I was pretty surprised at how little easily accessible information was out there.


I’d love to see the data I mentioned above (and that I tried to capture in the below bubble chart).

Note: I spent 30 minutes creating this chart. I don’t think I’m right on any of the values I placed on these threats. I just wanted to try and create an easy way to visualize the problem.

[Bubble chart: estimated likelihood, solvability, and current resources for each existential threat]

Does anyone know if such data exists in an easily digestible format?