Longtermism: How Much Should We Care About the Far Future?

Author: Dylan Balfour
Category: Ethics
Word count: 1000

Imagine you’ve been given a million dollars to donate to charitable causes of your choice. How would you spend the money?

Perhaps you’d donate it to a cause close to your heart, like a local community project. Or perhaps you’d try to help the global poor by funding disaster relief, or the distribution of antimalarial bednets.

Advocates of a growing philosophical movement known as longtermism argue, surprisingly, that spending the money in these ways would be wrong. Longtermists believe that you should almost entirely ignore the concerns of people in need today and instead use the money to try to help secure and improve humanity’s long-term future: their goal is to benefit people who will exist in the thousands, millions, and billions of years to come.[1]

This essay introduces longtermism.

The Milky Way galaxy.

1. What is Longtermism?

Longtermism is the view that we should prioritize the far future of humanity, primarily through preventing human extinction and improving the lives of our distant descendants.

Some longtermists go further, arguing that we should aim to fulfill humanity’s cosmic potential by accelerating technology to colonize the stars and enable the existence of astronomical numbers of future people. Some even propose that we should transform humanity into a digital civilization of computerized minds, giving people practically unlimited lifespans.[2]

Because humanity’s resources are limited, these efforts would require paying less attention to those in need today. So longtermists believe that it is almost always preferable to allocate resources towards producing a better long-term future for humanity than helping present generations.[3]

2. Why Longtermism?

Longtermists often motivate their view by pointing to the sheer size of the future human population. On a conservative estimate, the Earth might be able to sustain around ten quadrillion people in total before it becomes uninhabitable (that’s a ten followed by fifteen zeros!).[4] And if humanity successfully spreads across the Milky Way, then this number may grow by many orders of magnitude.[5]

Longtermists argue that if human lives are valuable, then it follows that these oceans of future people are overwhelmingly more important than the people living today: there could be so many people in the future that, cumulatively, their interests outweigh our own.

This is not to say that future people are individually more valuable than we are, but that as a whole they deserve more attention than present generations.[6] Because of this, longtermists argue that actions that aim to improve the far future can achieve far more good than actions that produce short-term benefits.

Longtermism is also supported by the suggestion that we have an obligation to steward forthcoming generations just as our ancestors did for us.[7] Another argument is that we have a duty to preserve humanity because of our cosmic significance as possibly the only intelligent life in the universe.[8]

3. Longtermist Priorities

Longtermists typically highlight two kinds of interventions that they think should be prioritized.

The first is preventing existential catastrophes—events that could stunt civilization or cause human extinction—such as nuclear war, asteroid impacts, and extreme climate change.[9] Many researchers are also concerned about the risks posed by advanced artificial intelligence, which could become dangerous to humanity if its goals were misaligned with our interests.[10]

The second intervention is to enact positive trajectory change or, in other words, to try to improve the long-term course of civilization:[11] e.g., improving the rate of annual economic growth by even a fraction of a percentage point would generate vast amounts of wealth in just a few centuries’ time.[12] From a long-term perspective, then, economic growth matters far more than it appears to on short- or medium-term timescales, and is thus a higher priority for individuals and governments.

4. Objections

Longtermism is a controversial view; there are, of course, many objections to it.

One objection is that longtermism depends on predictions that we cannot confidently make: we don’t know how long humanity will last, and it’s very difficult to know exactly which actions will improve the far future. By contrast, we can be very confident of our ability to benefit existing people.[13]

A related objection is that, by asking us to pay less attention to the interests of those alive today, many of whom suffer greatly, longtermism asks us to be unacceptably callous.[14] Surely we shouldn’t turn a blind eye to those in need today, like the millions of people currently living in absolute poverty, in favor of people who only might come to exist.

One response is to concede that longtermism does seem callous, but to argue that we should accept it anyway. The needs of the present generation may exert a greater emotional pull on us, but this does not mean they matter more than the swathes of future generations to come.

Longtermists may even flip the accusation: it would be at least as callous to neglect the interests of future people, who vastly outnumber us, and yet have no social or political power. Privileging our present generation might mean neglecting the trillions of people yet to come who cannot advocate for their own interests.

5. Implementing Longtermism

If longtermism sounds plausible, what can we do now to help?

Many longtermists argue that we should donate money to research organizations working on longtermist issues, such as asteroid detection and artificial intelligence safety.[15]

Other longtermists argue that we should abstain from donating money within our own lifetimes, and arrange to have our personal wealth donated in the future to maximize our philanthropic impact.[16]

We may also devote our careers to the long-term future. The organization 80,000 Hours, which gives altruistic career advice, recommends that, to have the biggest impact, we work on longtermist causes like pandemic preparedness and nuclear security, rather than intuitively worthwhile near-term causes like global poverty and inequality.[17]

Longtermists also urge governments to devote far more resources towards securing and improving humanity’s long-term future.

6. Conclusion

Longtermism implies that we should spend fewer resources combatting the problems of today and instead use them to assist future generations. For the sake of the many people who may come to exist, we should evaluate longtermism’s proposals carefully, even if they are controversial.

Notes

[1] Longtermism is an offshoot of the “effective altruism” movement, which aims to identify and resource the most important and effective altruistic causes. For an introduction to effective altruism, see Ethics and Absolute Poverty: Peter Singer and Effective Altruism by Brandon Boesch.

[2] See Bostrom (2003, 2013).

[3] This view has been termed ‘axiological strong longtermism’ in an authoritative argument for longtermism made by Greaves and MacAskill (2021).

[4] This is Nick Bostrom’s calculation (2013, 18), which assumes that the Earth can carry an average of one billion inhabitants at any given time, for a period of one billion years.
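As a rough sketch of the arithmetic behind this figure (assuming, additionally, average lifespans of roughly a century, a detail not stated in the note above):

$$\frac{10^{9} \text{ people} \times 10^{9} \text{ years}}{10^{2} \text{ years per life}} = 10^{16} \text{ lives} \approx \text{ten quadrillion}.$$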

[5] Newberry (2021) estimates that up to 10³⁵ people could come to exist if humanity spreads across the Milky Way.

[6] Such thinking implicitly assumes that value can be “aggregated” or, in simpler terminology, added up. Aggregation is often associated with the moral theory of consequentialism (see Consequentialism by Shane Gronholz).

Aggregation is a controversial idea. Parfit (1987) pointed out that aggregation may lead to the Repugnant Conclusion, in which a large enough population of people with lives barely worth living is considered more valuable than a smaller population of blissful lives (see The Repugnant Conclusion by Jonathan Spelman). Others think that, although we should aggregate value to some degree, there are limits. So-called “partially aggregative” theories of value hold that some goods or experiences can never add up to a value higher than some other goods or experiences—e.g., that no number of prevented headaches could ever be as morally important as saving a human life. For discussion of this example, see Norcross (1997).

[7] This is a point made by Ord (2020, 49-51).

[8] This is also a point made by Ord (2020, 53-6).

[9] See Ord (2020) for a comprehensive description and evaluation of many existential catastrophes.

[10] For just a few examples, see Bostrom (2014), Russell (2019), and Ord (2020), who estimates, frighteningly, that there is a 1 in 10 chance of an existential catastrophe caused by misaligned artificial intelligence within the next century.

[11] The term ‘trajectory change’ is owed to Nick Beckstead (2013).

[12] See Cowen (2018). Conversely, lowering the annual rate of growth could be an enormously regrettable trajectory change. As MacAskill (2020) explains, lowering the rate of annual growth from 0.8% to 0.2%, ‘after a couple of centuries […] becomes the equivalent of a catastrophe that wipes out half the world’s wealth’.
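As a rough illustrative calculation (not MacAskill’s own): assuming those two growth rates compound steadily, the ratio of wealth under 0.8% versus 0.2% annual growth after t years is (1.008/1.002)^t, which reaches a factor of two after roughly

$$t = \frac{\ln 2}{\ln(1.008/1.002)} \approx 116 \text{ years},$$

and grows to a factor of about 3.3 after two centuries, in line with the scale of loss MacAskill describes.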

[13] Peter Singer is a philosopher who, in 2021, received a million-dollar prize and pledged to give it all to effective charities. He explains why he did not decide to donate to longtermist efforts: “Some thoughtful effective altruists urge us to focus on reducing the risk of extinction. But the uncertainties about how to achieve that objective are so great that I prefer to donate to projects for which the odds of accomplishing something positive are vastly higher.” See Singer (2021).

[14] See Torres (2021) for an argument to this effect.

[15] See Greaves and MacAskill (2021, 16-7).

[16] See Trammell (2020).

[17] https://80000hours.org/key-ideas/#longtermism. Note that 80,000 Hours also recommends careers in “short-termist” cause areas, but they think the best opportunities are typically in the longtermist space.

References

Ahmed, A (2018) ‘Rationality and Future Discounting’ Topoi 39(2): 245-56

Beckstead, N (2013) ‘On the Overwhelming Importance of Shaping the Far Future’ (PhD thesis). Department of Philosophy, Rutgers University

Beckstead, N (2014) ‘Will we eventually be able to colonize other stars? Notes from a preliminary review’ Future of Humanity Institute.

Bostrom, N (2003) ‘Astronomical Waste: The Opportunity Cost of Delayed Technological Development’ Utilitas 15(3): 308-14

Bostrom, N (2013) ‘Existential Risk Prevention as Global Priority’ Global Policy 4(1): 15-31

Bostrom, N (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: OUP

Bostrom, N (2019) ‘The Vulnerable World Hypothesis’ Global Policy 10(4): 455-76

Broome, J (2005) ‘Should we Value Population?’ Journal of Political Philosophy 13(4): 399-413

Cowen, T (2018) Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals. Stripe Press

Greaves, H; MacAskill, W (2021) ‘The Case for Strong Longtermism’ Global Priorities Institute Working Paper 5-2021

John, T.M; MacAskill, W (2020) ‘Longtermist Institutional Reform’ Global Priorities Institute Working Paper 14-2020

MacAskill (2020) ‘What We Owe the Future’ (Lecture)

Newberry, T (2021) ‘How Many Lives Does the Future Hold?’ GPI Technical Report no.T2 – 2021

Norcross, A (1997) ‘Comparing Harms: Headaches and Human Lives’ Philosophy and Public Affairs 26(2): 135-67

Ord, T (2020) The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury

Parfit, D (1987) Reasons and Persons, 2nd edition. Oxford: OUP

Russell, S (2019) Human Compatible: AI and the Problem of Control. London: Penguin

Singer, P (1972) ‘Famine, Affluence, and Morality’ Philosophy and Public Affairs 1(3): 229-43

Singer, P. (2021) ‘How to Give Away a Million Dollars’ Project Syndicate

Tarsney, C (2017) ‘Does a Discount Rate Measure the Costs of Climate Change?’ Economics and Philosophy 33: 337-65

Torres, P (2021) ‘The Dangerous Ideas of “Longtermism” and “Existential Risk”’ Current Affairs

Trammell, P (2020) ‘How Becoming a “Patient Philanthropist” Can Allow You to Do Far More Good’ The 80,000 Hours Podcast

For Further Reading

Cargill, N; John, T (2021) The Long View. London: First

Moorhouse, F (2021) ‘Introduction to Longtermism’ Effective Altruism

Related Essays

Consequentialism by Shane Gronholz

Ethics and Absolute Poverty: Peter Singer and Effective Altruism by Brandon Boesch

Saving the Many or the Few: The Moral Relevance of Numbers by Theron Pummer

Ethics and the Expected Consequences of Voting by Thomas Metcalf

Pascal’s Wager: A Pragmatic Argument for Belief in God by Liz Jackson

The Repugnant Conclusion by Jonathan Spelman

Translations

Turkish, Arabic

About the Author

Dylan Balfour is a philosophy PhD student at the University of Edinburgh. He works on topics related to ethics, decision theory, and longtermism. www.ed.ac.uk/profile/dylan-balfour
