The Precipice by Toby Ord is the best book I’ve read and will likely read this year.
According to Ord, if my 1.5-year-old daughter reaches the age of 80, there’s a ~10% chance she will die in a human extinction event. This is horrifically sad.
Ord’s argument is clear:
- Humanity has the chance to survive for hundreds of millions of years.
- There are a set of known risks that may cause our extinction.
- We can assign probabilities to those risks.
- We should devote immense effort to reducing the risk of the most probable causes of extinction.
A teenager could follow the logic and sentiment of the argument.
And yet, as the current pandemic shows, humanity remains terribly unprepared.
If there is any silver lining to be had with Covid, I hope it’s a deeper investment in avoiding known existential risks.
Ord conducted extensive research across numerous fields to come up with a 100-year extinction probability for a set of risks.
Ord’s conclusion is that humanity has a 1 in 6 chance of being wiped out in the next hundred years.
Something is surely lost in Ord being a generalist trying to produce probabilities across numerous fields. But the job requires a generalist, and Ord is consistently thorough, mathematical, and shows good judgment in weighing opposing viewpoints.
Of course, Ord could be wrong. But he has made a good contribution.
We need a thousand Ords doing similar work so we can get smarter (Vaclav Smil made another book-length contribution to the task).
This is the most important chart in the book:
Humanity is humanity’s greatest risk.
Our own actions drive far more of the probable extinction risk than natural causes like asteroids or volcanoes.
Even if you think Ord is too aggressive in his probabilities, remember that this is just the risk over one century. Risks such as engineered pandemics and nuclear war won’t go away anytime soon. So over a longer period things look even more grim.
And lest you think this is all theoretical: Ord recounts enough historical close calls with nuclear war and human-made biohazards that we should all be grateful we managed to survive the last hundred years.
Within human driven actions, Ord argues that AI is a major driver of risk. He assigns a 10% chance that unaligned AI will destroy humanity.
Ord’s probability assignment is drawn from the predictions of many experts in the field. He is also open that assigning risk to AI is difficult to do.
It’s very hard to predict when general AI will happen. Even experts in the field may be too far away from the needed technological breakthrough to be able to give any useful time estimate. It could be akin to asking someone in the 1400s when humans will go to the moon.
But even if it doesn’t happen in the next century, it will probably occur in the next five thousand years (if we make it that long). And five thousand years is nothing in a species lifespan. So best to be prepared.
One minor point on AI: I didn’t think Ord gave enough attention to the ethical complications of programming AI to serve our interests, which is probably the easiest way to avoid an AI that wipes us out.
More advanced species replace less advanced species in global dominance all the time. This is a good thing. What if, somehow, pigs had been able to program us to serve their needs? Would that have been better for global flourishing? How sad would it be if we spent all our human capacity and labor making pigs happy? Very sad, I think, relative to what we could have been.
I think it’s ok to program AI to be ethical in the sense of avoiding harm of highly sentient beings (we too should treat pigs better). But we need to be open to the idea that AI should be able to pursue their own goals; that these goals might sometimes conflict with ours; and that there may always be an extinction risk to humans because of these conflicting goals. We should aim to reduce, not eliminate, the risks of AI rising.
Compassion and Perspective
Ord writes with deep compassion for those alive today and those who may be alive in the future.
Homo sapiens have been around for 200,000 years. The Earth could be habitable for humans for the next 800 million years. So much potential flourishing lies ahead of us, if we can make it there.
Thinking even further out: so long as we can eventually travel the distance between most stars in our galaxy, we could eventually settle it. How long is such a journey? About six light years. Once we can travel in six-light-year intervals, almost all the stars in our galaxy will be reachable.
Ord provides some fun math: if we could learn to travel at 1% of the speed of light, and took 1,000 years to establish a settlement when we reached a new star, we could settle the entire galaxy in one hundred million years.
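The arithmetic behind this is simple enough to sketch. The figures below are rough assumptions of mine, not Ord’s exact numbers: a galaxy roughly 100,000 light years across, hops of about six light years between stars, travel at 1% of light speed, and 1,000 years to establish each settlement before launching the next ship.

```python
# Back-of-envelope sketch of the galactic-settlement timeline.
# All constants are rough assumptions for illustration.

GALAXY_DIAMETER_LY = 100_000  # approximate Milky Way diameter in light years
HOP_DISTANCE_LY = 6           # typical distance between neighboring stars
SPEED_FRACTION_C = 0.01       # travel at 1% of the speed of light
SETTLE_YEARS = 1_000          # years to establish each new settlement

travel_years_per_hop = HOP_DISTANCE_LY / SPEED_FRACTION_C  # 600 years in transit
years_per_hop = travel_years_per_hop + SETTLE_YEARS        # 1,600 years per hop
hops_to_cross = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY       # ~16,667 sequential hops

total_years = hops_to_cross * years_per_hop
print(f"~{total_years / 1e6:.0f} million years to cross the galaxy")
```

Even this pessimistic version, where settlement proceeds as a single chain of hops rather than an expanding frontier, comes out around 27 million years, comfortably inside Ord’s one-hundred-million-year figure.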
Of course this is all speculative. But it’s also inspiring. I’d so much rather spend my time thinking about how to achieve this than most of what fills our newsfeeds.
What to do?
Ord provides concrete recommendations at the end of the book.
But I imagine many people who are compelled by the arguments but are late in their careers might feel a bit trapped about how to contribute in a meaningful way.
Politicians, journalists, academics, and philanthropists all have some ability to shift what humanity focuses on. And politicians and scientists have some ability to deliver the actual solutions.
But most of us will have little impact. Moreover, to the extent we are later in our careers and have deeply specialized knowledge, it may be better that we continue to pursue excellence in our current field rather than try to switch fields.
Perhaps the greatest impact of these types of books will be on the next generation.
I surely will pass Ord’s book along to my own daughter.