A future cancelled: existential risk deserves more of your attention – and our altruism
By James Brady (@james_elicit)
Bring to mind the person you care about most in the world.
Now imagine sitting across from them, watching them play Russian roulette with a loaded revolver.
If we found ourselves in that situation we'd jump out of our seat, rush over to them, grab the gun from their hand, and attempt to unload it. However, in The Precipice, Toby Ord estimates there is a 1-in-6 chance of humans wiping themselves out in the next century: a game of Russian roulette that we're all playing at an existential level.
At the same time, despite the present day being the best time to be alive in our history, there remains abject suffering, wealth inequality, inequity, disease, and death around the world. It is on these immediate, pressing issues that most aid agencies focus.
With such a range of global priorities presented to us, which type of need is greater and more urgent? Where should we place our focus to make things better?
Weighing up existential risk
Existential risks (like nuclear war, a pandemic much worse than Covid-19, or runaway artificial intelligence) are by their nature low-probability but extremely high-consequence.
This makes it tricky to compare them against more familiar tragedies, such as the near-certainty that someone, somewhere, will starve to death tomorrow. Awful as that is, it is an outcome less consequential than the end of humanity.
A fellow human's imminent and preventable death grabs us on a visceral level. But does this mean that we should myopically focus on ending world hunger before we put any thought at all towards avoiding nuclear war? That doesn't seem like the right approach.
To be able to draw a rational comparison between existential risk and conventional causes focussed on concrete short-term goals, we need to answer two questions:
- How do we value the lives of people who don't yet exist?
- To what degree should we prioritise action now to reduce our risk in the future?
There is no consensus on how to do either of these things, but I'll show that with some – I think quite reasonable – assumptions, existential risk reduction is grossly under-resourced at the moment.
1. How should we value future human lives?
Population ethics is reliably confusing, with much of our confusion stemming from the lack of an accepted way to "add up" successive generations of human welfare over time.
Let's simplify the range of different approaches for this addition into three main options[1]: Zero, Some, or Total.
Zero
A strict person-affecting view holds that we should completely disregard the wellbeing of hypothetical future people, as their current non-existence removes them from any sensible conversation about welfare.
For example, in this post on the Effective Altruism (EA) forum, years of life lost for our existing population are factored into thinking about an existential catastrophe – but unrealised future generations aren't considered at all.
Some
An economist might advocate for discounting: a standard accounting method to compare the value of money paid today versus the future. Why not apply this method to the value of future human lives too?
The Precipice gives a few examples of where fixed rate discounting results in unintuitive conclusions:
It implies, for example, that if you can save one person from a headache in a million years’ time, or a billion people from torture in two million years, you should save the one from a headache
Based on the work of Martin Weitzman, a variable discount rate – decreasing over time – is a better choice for long-term situations.
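To see the difference concretely, here is a small sketch comparing a fixed discount rate with a declining, Weitzman-style one. The 3% starting rate, the 1%-per-year decline, and the tiny floor are illustrative values I've picked, not figures from The Precipice:

```python
# Illustrative only: compare how much weight a fixed discount rate and a
# declining (Weitzman-style) rate give to welfare far in the future.
# The rates below are placeholder values chosen for this example.

def fixed_weight(years: int, rate: float = 0.03) -> float:
    """Present value of one unit of welfare `years` from now, fixed rate."""
    return 1.0 / (1.0 + rate) ** years

def declining_weight(years: int, start_rate: float = 0.03,
                     decay: float = 0.99, floor: float = 1e-6) -> float:
    """Present value when the rate falls by 1% a year towards a small floor."""
    weight, rate = 1.0, start_rate
    for _ in range(years):
        weight /= 1.0 + rate
        rate = max(rate * decay, floor)
    return weight

for horizon in (10, 100, 1_000, 10_000):
    print(f"{horizon:>6} years: fixed {fixed_weight(horizon):.3g}, "
          f"declining {declining_weight(horizon):.3g}")
```

With a fixed 3% rate, welfare a thousand years out is already worth essentially nothing today; with the declining rate it retains a meaningful fraction of its value (around 5% in this example) even ten thousand years out, which is what makes long-term comparisons possible at all.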
Total
The Total view states that we should value all future human welfare equally to our own. We should also include other sentient beings in the mix, and many would argue sentient machines too.
There are some infamous counter-intuitive results of this view, but it is a view considered seriously by Nick Bostrom, Toby Ord, and this EA article.
Which method we'll use
There is significant support for the Total view in philosophical circles, but in order to weaken my point let's use the Some view with variable rate discounting. To further declaw my argument, we'll only consider future lives through to the year 2100 on the flimsy bases that:
- That's as far as we have good global population forecasts
- There are likely to be new, unforeseen existential threats facing us by that point
- We all know people who will be alive in 2100 – perhaps even you or I will be alive then: it's a time period close enough for us to easily empathise with its population
Using the assumptions listed below, we find that existential catastrophe tomorrow would be equivalent to the loss of 18 billion lives – made up of today's population and the children of the future.
Assumptions and parameters:
- The risk of existential catastrophe this century is 1-in-6 (derived by Ord in The Precipice)[2]
- The discount rate decreases by 1% (in relative terms) each year, meaning that although we care less about people in the deep future than about people today, their value doesn't plummet to zero as quickly as simple actuarial discounting would prescribe
- The discount rate tends to 0.0001% in the limit: Martin Weitzman's analysis of variable rate discounting shows that the rate should decrease towards the background risk, so we'll use the 1-in-10,000 per century rate derived by Ord for natural extinction risks
You can see my calculations in this spreadsheet.
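For those who prefer code to spreadsheets, here is a rough sketch of the same kind of calculation. The population figure, the birth rate, and the starting discount rate are placeholder assumptions of mine rather than the spreadsheet's exact inputs, so treat the output as a ballpark rather than a reproduction of the 18 billion figure:

```python
# Rough sketch: how many (discounted) lives are at stake between now and 2100?
# Placeholder assumptions, not the spreadsheet's exact inputs:
#   - ~7.9 billion people alive today, counted at full value
#   - an assumed, roughly constant 140 million births per year through 2100
#   - a 1% starting discount rate that declines by 1% (relative) each year
#     towards a tiny floor, per the Weitzman-style schedule described above

CURRENT_POPULATION = 7.9e9
BIRTHS_PER_YEAR = 140e6
YEARS_TO_2100 = 80       # an assumed ~80 years left in the century
START_RATE = 0.01        # assumed starting discount rate
DECAY = 0.99             # the rate falls by 1% (relative) each year
FLOOR = 1e-6             # ~0.0001% per year, the annualised 1-in-10,000-per-century rate

lives_at_stake = CURRENT_POPULATION
weight, rate = 1.0, START_RATE
for _ in range(YEARS_TO_2100):
    weight /= 1.0 + rate
    rate = max(rate * DECAY, FLOOR)
    lives_at_stake += BIRTHS_PER_YEAR * weight  # discounted value of that year's births

print(f"Discounted lives at stake through 2100: {lives_at_stake / 1e9:.1f} billion")
```

With these placeholder inputs the total comes out at around 16 billion – the same ballpark as the 18 billion figure used in the rest of the post, which is based on the spreadsheet's more careful inputs.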
2. Should we prioritise action now to reduce existential risk?
Separate from the question of how we should value future human lives, we can also ask how cost-effective funding existential risk reduction is compared with other causes. Again, opinions differ on this point.
In The moral value of the far future, Holden Karnofsky shares why he would place more emphasis on shorter-term projects:
I have often been challenged to explain how one could possibly reconcile (a) caring a great deal about the far future with (b) donating to one of GiveWell’s top charities. My general response is that in the face of sufficient uncertainty about one’s options, and lack of conviction that there are good (in the sense of high expected value) opportunities to make an enormous difference, it is rational to try to make a smaller but robustly positive difference, whether or not one can trace a specific causal pathway from doing this small amount of good to making a large impact on the far future.
I think there are a couple of flaws in his reasoning[3], and he seems to have changed his views in the intervening years: Open Philanthropy – the organisation of which he is CEO – has recommended grants of more than $250m towards existential risk reduction.
John Halstead's Existential Risk Cause Area Report[4] makes a case for philanthropy focussed on existential risk reduction. Two points stand out:
First, compared to other focus areas, existential risk is sorely underfunded:
For prospective donors, this means that the potential to find “low-hanging fruit” in this cause area is exceptionally high at present. Just as VC investors can make outsized returns in large uncrowded markets, philanthropists can have outsized impact by working on large and uncrowded problems.
Second, the funding amounts sought by the top organisations in the field are relatively small (the Center for Health Security needs $2.2m; the Biosecurity Initiative needs $2m; the Center for Human-Compatible AI needs $3m).
In summary: a little investment in existential risk reduction would go an awfully long way.
Project X
To allow us to make some quantitative comparisons, let's suppose the existence of a hypothetical Project X which costs $1bn annually, and reduces overall extinction risk by 1% in relative terms. Given the points above, in reality I think we would find much better value-for-money initiatives to reduce existential risk, but let's use these numbers as a baseline.
So how cost-effective would Project X be?
To summarise:
- Given the current population, projected birth rates, and the discounting above, we can think of an existential catastrophe as being equivalent to the loss of 18 billion lives
- A good estimate of our existential risk this century is 1-in-6: about 16.67%
- We'll assume Project X costs $1bn per year and reduces existential risk by 1% in relative terms, from 16.67% to 16.5% (16.67% × 0.99 ≈ 16.5%)
I put together this model, which shows the expected cost per life saved by Project X is between $1,100 and $24,000, with a mean of $5,700.
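As a sanity check, here is a point-estimate version of that calculation. The simplifications are mine (the linked model works with ranges rather than single numbers, which is where the $1,100 to $24,000 spread comes from), and the 80-year funding horizon is an assumption:

```python
# Point-estimate sanity check of Project X's cost-effectiveness.
# Simplifying assumptions (mine, not the linked model's):
#   - Project X runs at $1bn per year for the ~80 years to 2100
#   - the 1% relative risk reduction applies to this century's 1-in-6 risk
#   - an existential catastrophe costs 18 billion (discounted) lives

total_cost = 1e9 * 80                     # ~$80bn of funding through 2100
baseline_risk = 1 / 6                     # ~16.67% chance this century
reduced_risk = baseline_risk * 0.99       # ~16.5% after Project X
lives_at_stake = 18e9

expected_lives_saved = (baseline_risk - reduced_risk) * lives_at_stake
cost_per_life_saved = total_cost / expected_lives_saved

print(f"Expected lives saved: {expected_lives_saved / 1e6:.0f} million")
print(f"Cost per life saved:  ${cost_per_life_saved:,.0f}")
```

The point estimate works out at roughly 30 million expected lives saved and a few thousand dollars per life, comfortably inside the range the model produces.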
This compares favourably with the most cost-effective charities in the world. Every year, GiveWell publishes its list of the most effective charities. At the moment the front-runner is Malaria Consortium, which spends $3,373 to save a life.
Given the generous concessions made in the model, existential risk reduction is extremely competitive with other causes from a cost-effectiveness perspective.
Summary
Owen Cotton-Barratt showed that the case for focusing on a particular problem rests on its importance, its neglectedness, and its tractability:
It's difficult to imagine a more important priority than the ongoing flourishing of sentient beings. It's clear that we neglect safeguarding our existence to enable this flourishing. It's our moral duty to engage with this problem and make it more tractable.
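Those three sentences track the importance, neglectedness, and tractability factors in Cotton-Barratt's framework. One common way of writing the decomposition (my paraphrase, not a quote from his work) is:

```latex
% Cost-effectiveness of working on a problem, factored into three terms
\[
\frac{\text{good done}}{\text{extra dollar}}
  \;=\;
  \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
\]
```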
Thanks to Odette and Jesse for feedback and suggestions on this post.
Footnotes
1. Other viewpoints exist, like David Benatar's antinatalism or even promortalism.
2. Table 6.1 in The Precipice gives a full list of natural and anthropogenic risks. Some headline estimates of the chance of existential catastrophe within 100 years:
   - total natural risk: 1-in-10,000
   - nuclear war: 1-in-1,000
   - climate change: 1-in-1,000 (although Ord stresses that extreme suffering – stopping short of existential catastrophe – is much more likely)
   - engineered pandemics: 1-in-30
   - unaligned artificial intelligence: 1-in-10
The "track record" argument is difficult to make in this case, because a) most of our anthropogenic existential risks are very new and b) we can only go extinct once. ↩
4. Originally from https://founderspledge.com/research/fp-existential-risk