Author: Dylan Balfour
Word count: 1000
Imagine you’ve been given a million dollars to donate to charitable causes of your choice. How would you spend the money?
Perhaps you’d donate it to a cause close to your heart, like a local community project. Or perhaps you’d try to help the global poor by funding disaster relief, or the distribution of antimalarial bednets.
Advocates of a growing philosophical movement known as longtermism argue, surprisingly, that spending the money in these ways would be wrong. Longtermists believe that you should almost entirely ignore the concerns of people in need today and instead use the money to try to help secure and improve humanity’s long-term future: their goal is to benefit people who will exist in the thousands, millions, and billions of years to come.
This essay introduces longtermism.
1. What is Longtermism?
Longtermism is the view that we should prioritize the far future of humanity, primarily through preventing human extinction and improving the lives of our distant descendants.
Some longtermists go further, arguing that we should aim to fulfill humanity’s cosmic potential by accelerating technology to colonize the stars and enable the existence of astronomical numbers of future people. Some even propose that we should transform humanity into a digital civilization of computerized minds, giving people practically unlimited lifespans.
Because humanity’s resources are limited, these efforts would require paying less attention to those in need today. Longtermists therefore believe that it is almost always preferable to allocate resources towards producing a better long-term future for humanity than towards helping present generations.
2. Why Longtermism?
Longtermists often motivate their view by pointing to the sheer size of the future human population. On a conservative estimate, the Earth might be able to sustain around ten quadrillion people in total before it becomes uninhabitable (that’s a 1 followed by 16 zeros!). And if humanity successfully spreads across the Milky Way, then this number may grow by many orders of magnitude.
Longtermists argue that if human lives are valuable, then it follows that these oceans of future people are overwhelmingly more important than the people living today: there could be so many people in the future that, cumulatively, their interests outweigh our own.
This is not to say that future people are individually more valuable than we are, but that as a whole they deserve more attention than present generations. Because of this, longtermists argue that actions that aim to improve the far future can achieve far more good than actions that produce short-term benefits.
Longtermism is also supported by the suggestion that we have an obligation to steward forthcoming generations just as our ancestors did for us. Another argument is that we have a duty to preserve humanity because of our cosmic significance as possibly the only intelligent life in the universe.
3. Longtermist Priorities
Longtermists typically highlight two kinds of interventions that they think should be prioritized.
The first is preventing existential catastrophes—events that could stunt civilization or cause human extinction—such as nuclear war, asteroid impacts, and extreme climate change. Many researchers are also concerned about the risks posed by advanced artificial intelligence which could become dangerous to humanity if its goals were misaligned with our interests.
The second intervention is to enact positive trajectory change or, in other words, to try to improve the long-term course of civilization: e.g., improving the rate of annual economic growth by even a fraction of a percentage point would generate vast amounts of wealth in just a few centuries’ time. From a long-term perspective, economic growth is far more important than it is on short- or medium-term timescales, and thus a higher priority for individuals and governments.
4. Objections to Longtermism
Longtermism is a controversial view; there are, of course, many objections to it.
One objection is that longtermism depends on predictions that we cannot confidently make: we don’t know how long humanity will last, and it’s very difficult to know exactly which actions will improve the far future. By contrast, we can be very confident of our ability to benefit existing people.
A related objection is that, by asking us to pay less attention to the interests of those alive today, many of whom suffer greatly, longtermism asks us to be unacceptably callous. Surely we shouldn’t neglect those in need today, like the millions of people currently living in absolute poverty, in favor of people who only might come to exist.
One response is to concede that longtermism does seem callous, but to argue that we should accept it anyway. The needs of the present generation may elicit a greater emotional pull on us, but this does not mean they matter more than the swathes of future generations to come.
Longtermists may even flip the accusation: it would be at least as callous to neglect the interests of future people, who vastly outnumber us, and yet have no social or political power. Privileging our present generation might mean neglecting the trillions of people yet to come who cannot advocate for their own interests.
5. Implementing Longtermism
If longtermism sounds plausible, what can we do now to help?
Many longtermists argue that we should donate money to research organizations working on longtermist issues, such as asteroid detection and artificial intelligence safety.
Other longtermists argue that we should abstain from donating money within our own lifetimes, and arrange to have our personal wealth donated in the future to maximize our philanthropic impact.
We may also devote our careers to the long-term future. The organization 80,000 Hours, which gives altruistic career advice, recommends that, to have the biggest impact with our careers, we work on longtermist causes like pandemic preparedness and nuclear security, rather than intuitively worthwhile near-term causes like global poverty and inequality.
Longtermists also urge governments to devote far more resources towards securing and improving humanity’s long-term future.
Longtermism implies that we should spend fewer resources combatting the problems of today, instead using them to assist future generations. For the sake of the many people who may come to exist, we should evaluate longtermism’s proposals carefully, even if they are controversial.
 Longtermism is an offshoot of the “effective altruism” movement, which aims to identify and resource the most important and effective altruistic causes. For an introduction to effective altruism, see Ethics and Absolute Poverty: Peter Singer and Effective Altruism by Brandon Boesch.
 See Bostrom (2003, 2013).
 This view has been termed ‘axiological strong longtermism’ in an authoritative argument for longtermism made by Greaves and MacAskill (2021).
 This is Nick Bostrom’s calculation (2013, 18), which assumes that the Earth can carry an average of one billion inhabitants at any given time, for a period of one billion years.
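 To make the arithmetic behind this estimate explicit (a rough illustration added here; the assumption of century-long average lifespans is Bostrom’s, though not stated above), the total number of lives works out as:
$$\frac{10^9 \text{ people at a time} \times 10^9 \text{ years}}{10^2 \text{ years per life}} = 10^{16} \text{ lives} = \text{ten quadrillion.}$$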
 Newberry (2021) estimates that up to 10³⁵ people could come to exist if humanity spreads across the Milky Way.
 Such thinking implicitly assumes that value can be “aggregated” or, in simpler terminology, added up. Aggregation is often associated with the moral theory of consequentialism (see Consequentialism by Shane Gronholz).
Aggregation is a controversial idea. Parfit (1987) pointed out that aggregation may lead to the Repugnant Conclusion, in which a large enough population of people with lives barely worth living is considered more valuable than a smaller population of blissful lives (see The Repugnant Conclusion by Jonathan Spelman). Others think that, although we should aggregate value to some degree, there are limits. So-called “partially aggregative” theories of value hold that some goods or experiences can never add up to a value higher than some other goods or experiences—e.g., that no number of prevented headaches could ever be as morally important as saving a human life: see Norcross (1997).
 This is a point made by Ord (2020, 49-51).
 This is also a point made by Ord (2020, 53-6).
 See Ord (2020) for a comprehensive description and evaluation of many existential catastrophes.
 For just a few examples, see Bostrom (2014), Russell (2019), and Ord (2020) who estimates, frighteningly, that there is a 1 in 10 chance that humanity will go extinct in the next century due to misaligned artificial intelligence.
 The term ‘trajectory change’ is owed to Nick Beckstead (2013).
 See Cowen (2018). Additionally, lowering the annual rate of growth could be an enormously regrettable trajectory change. As MacAskill (2020) explains, lowering the rate of annual growth from 0.8% to 0.2% ‘after a couple of centuries […] becomes the equivalent of a catastrophe that wipes out half the world’s wealth’.
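 To see how this compounding works (a rough back-of-the-envelope illustration, not MacAskill’s own calculation): after $t$ years, an economy growing at 0.2% annually holds a fraction $(1.002/1.008)^t$ of the wealth it would have held growing at 0.8%. Over two centuries:
$$\left(\frac{1.002}{1.008}\right)^{200} \approx e^{-200 \times 0.006} \approx 0.30,$$
so the slower-growing economy ends up with only about 30% of the wealth of the faster one, a shortfall even larger than losing half.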
 Peter Singer is a philosopher who, in 2021, received a million-dollar prize and pledged to give it all to effective charities. He explains why he did not decide to donate to longtermist efforts: “Some thoughtful effective altruists urge us to focus on reducing the risk of extinction. But the uncertainties about how to achieve that objective are so great that I prefer to donate to projects for which the odds of accomplishing something positive are vastly higher.” See Singer (2021).
 See Torres (2021) for an argument to this effect.
 See Greaves and MacAskill (2021, 16-7).
 See Trammell (2020).
 https://80000hours.org/key-ideas/#longtermism. Note that 80,000 Hours does also recommend careers in “short-termist” cause areas, but they think the best opportunities are typically in the longtermist space.
For Further Reading
Consequentialism by Shane Gronholz
Ethics and Absolute Poverty: Peter Singer and Effective Altruism by Brandon Boesch
Ethics and the Expected Consequences of Voting by Thomas Metcalf
Pascal’s Wager: A Pragmatic Argument for Belief in God by Liz Jackson
The Repugnant Conclusion by Jonathan Spelman
About the Author
Dylan Balfour is a philosophy PhD student at the University of Edinburgh. He works on topics related to ethics, decision theory, and longtermism. www.ed.ac.uk/profile/dylan-balfour