Ethics and the Expected Consequences of Voting

Author: Thomas Metcalf
Categories: Ethics, Social and Political Philosophy
Word count: 995

Your vote normally only has a very small chance of changing the outcome of any election for a public office.[1] For your vote to make that difference, thousands or millions of other votes would need to end up in a tie, which is extremely improbable.[2]

Given this, could you still be morally required to vote, because of the consequences that would occur if your vote did—somehow—change the outcome? You might be thus obligated even though your vote almost certainly wouldn’t cause such a change.

To understand this, let’s think about voting from an expected-value perspective on decision-making.

An “I Voted” sticker.

1. Expected Ethical Value

Decisions in which your action will almost certainly create some effect are relatively easy to think about. For example, throwing a bomb through someone’s window has a very high probability of causing major damage. Unless there is some extraordinarily compelling reason to do so, it’s easy to see we shouldn’t do that.

But what about decisions in which your action only has a small probability of changing the outcome, such as voting?

You often have a very good reason to act even when that action has only a very small chance of creating a benefit, provided the benefit is large enough.[3] For example, in any particular car trip, a serious collision is unlikely, so fastening your child’s seatbelt has only a very small chance of making the difference to whether your child survives the trip, since there probably won’t be a collision at all. Still, you ought to fasten the seatbelt, because of the small chance of major harm.

We can make this point in terms of expected value.[4] One expected-value calculation method involves multiplying the net benefits of your action’s success by the probability that those benefits will occur, and then subtracting the harms of making the attempt.[5]

If we’re talking about morality, then the “value” we’re talking about is ethical value: the sort of thing that we have moral reasons to produce.[6] Here, we assume that we can approximate ethical value with some unit of measurement. Let’s use a hypothetical unit of goodness called ‘utils.’[7]

For example, if there’s a 1% chance that buckling a seatbelt will save your child’s life, and the value of saving the child’s life is equal to ten million utils,[8] and it costs you one util (say, in lost time) to buckle your child’s seatbelt,[9] then the expected value of buckling your child’s seatbelt is equal to (1% of 10,000,000 utils) – 1 util, i.e., 99,999 utils. Even though there is only a small probability of having an effect, the value of the effect is so high that it’s worth doing.
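To make the arithmetic explicit, here is a minimal sketch in Python of the calculation just described; the function name and variable names are only illustrative labels for the quantities in the text.

```python
def expected_value(probability, benefit, cost):
    """Multiply the benefit of success by the probability of success,
    then subtract the cost of making the attempt."""
    return probability * benefit - cost

# Seatbelt example from the text: a 1% chance of saving a life worth
# 10,000,000 utils, at a cost of 1 util to buckle the belt.
print(expected_value(0.01, 10_000_000, 1))  # 99999.0
```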

2. Expected Value in Voting

Let’s apply expected value to voting. There might be only a tiny chance that your vote will change who gets elected, but the net benefit of one candidate’s getting elected might be huge, for example in the billions of utils. So to decide whether you ought to vote, you must take into account the value of the result if your vote did change the outcome, the probability that it will, and any other possible harms or benefits.

Start with the value of the result. Suppose that Jane would, if elected, confer an average net benefit of 1,000 utils on each of 325 million people in Jane’s country.[10] Then the value of Jane’s winning is 325 billion utils. If your vote had a one-in-ten-million chance of changing the outcome, and it harmed you by only a net 100 utils to vote for Jane (you have to spend time, and it’s not much fun), and there were no other benefits or harms, then voting for Jane would have the following expected value:

(probability of making a difference × total benefit if you make a difference) – harms = expected value.

(1/10,000,000 × 325,000,000,000) – 100 = 32,400.

Thus, from an expected-value perspective, voting for Jane is just like acting in some way that has a 100% chance of creating a net 32,400 utils of goodness.[11]
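The same sort of sketch reproduces the voting example; again, the variable names are merely illustrative labels for the quantities above.

```python
# Jane example from the text: a one-in-ten-million chance of changing the
# outcome, a 325-billion-util benefit if you do, and a 100-util cost of voting.
probability_of_deciding = 1 / 10_000_000
benefit_if_deciding = 325_000_000_000
cost_of_voting = 100

expected_value = probability_of_deciding * benefit_if_deciding - cost_of_voting
print(round(expected_value))  # 32400
```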

What’s the probability of affecting the outcome of the election? In presidential elections in the United States, a swing-state voter might have a one-in-ten-million chance of determining who wins, and a non-swing-state voter might have a one-in-one-billion chance.[12]

So in the above example, many voters might be obligated to vote for Jane.[13] A similar calculation explains why you’re obligated to fasten your child’s seatbelt.

3. Other Benefits and Harms

Of course, there can also be other good results of your voting. Maybe you love wearing an “I Voted” sticker. Add those into your calculation the same way: multiply the probability that you’ll get each benefit by the value of that benefit.

At the same time, there can also be harms of your voting. Normally, setting aside the outcome of the election, the effects of your voting a certain way only really happen to you.[14] But voting can be harmful to the voter, physically or psychologically. For example, if you are a member of an oppressed group, a dominant group may attack you for trying to vote: include these harms in your calculation too.

4. Political Knowledge, Morality, and Blame

Above, we estimated the values of voting a certain way and the probabilities of changing the outcome. But what if we were mistaken in our estimates? We could also be mistaken in our values: what we think is morally important.

Most Americans have relatively little politically relevant knowledge.[15] It might be obligatory to abstain from voting if you don’t know enough about the election.[16] By analogy, if you walk into a complicated factory and see a big, red, unlabeled button, and you don’t know what it does, don’t push the button.

5. Conclusion

Expected value is normally not the only morally relevant consideration. One might feel a sense of civic obligation.[17] One might feel an obligation to symbolically express approval or disapproval of some candidate or law, or of democracy itself.[18] One might also simply enjoy voting: it’s kinda fun. And deontologists believe that actions can be morally wrong even if those actions maximize expected ethical value.[19] Here, as with every moral question, we must take seriously the possibility that expected consequences are not the only relevant moral consideration.

Notes

[1] Gelman et al. 2012.

[2] In that case, every vote would be the “deciding” vote, because every vote would be such that if it hadn’t gone that way, the result would have been different. Strictly, this tie would have to hold after the last step in the process: the final tally, after any recounts and re-votes, would have to be such that, setting your vote aside, everyone else ended up exactly even. You might wonder how probable such a result is in general. Consider an extremely favorable case: there are only 100 votes cast other than yours, and every voter, going into the vote, is exactly 50% likely to vote one way and exactly 50% likely to vote the other. Even then, there’s only about an 8% chance of a tie. With 10,000 votes cast, there’s only about a 1% chance. You can run the relevant calculations by changing the inputs here: Wolfram Alpha.
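As a rough check on those figures, here is a minimal Python sketch (standard library only) of the tie probability for an electorate of independent 50/50 voters; it is just the binomial calculation, not anything from Gelman et al.

```python
from math import comb

def tie_probability(n_other_voters):
    """Probability that n other voters, each independently voting 50/50
    at random, split exactly evenly (n must be even for a tie)."""
    return comb(n_other_voters, n_other_voters // 2) / 2 ** n_other_voters

print(tie_probability(100))     # ~0.0796, i.e. about 8%
print(tie_probability(10_000))  # ~0.0080, i.e. about 1%
```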

[3] Cf. Singer 1972: 231. Most philosophers believe there is some obligation of beneficence: to make the world better. Arguably, nearly all utilitarians (cf. Sinnott-Armstrong 2020: § 1 and Gronholz 2014 [“Consequentialism” in 1000-Word Philosophy]) believe something like this, as do most deontologists, for example Kant (Beauchamp 2020: § 2.3; cf. Chapman 2014 [Kantian Ethics in 1000-Word Philosophy]) and Ross 2002 [1930]: 21. For more, see Beauchamp 2020.

[4] Cf. Briggs 2020: § 1 and Brennan and Lomasky 2000: § IV.

[5] We are here assuming that we can measure different things against each other in terms of value, for example, that the extra time spent fastening a seatbelt can be compared against the benefit of a child’s surviving a collision. We can’t defend this assumption here, but of course, other philosophers have written about it (Hsieh 2019).

[6] Of course, there’s debate about what exactly counts as ethically valuable: is it pleasure and the absence of pain, or well-being more generally, or some other set of phenomena? Cf. Crisp 2020. Note that this is a separate question from, though related to, the question of how we should act. But if consequentialism is true (cf. Gronholz 2014 [Consequentialism in 1000-Word Philosophy]), we need some way of deciding what matters morally.

[7] ‘Utility’ in philosophy usually refers to benefit or goodness (it’s the root of the name ‘utilitarianism,’ for example), so we can use ‘utils’ to refer to hypothetical units of value.

[8] It’s sometimes convenient to talk about ethical value in terms of dollars. We normally think there can be moral reasons to spend wealth on achieving certain ends, for example, to spend $500 to save a human life. Cf. Singer 1972. Or we might say a benefit worth $100 is a benefit that it would be morally permissible or obligatory to spend $100 on. But we can imagine these as units of happiness, or benefit, or whatever; cf. Briggs 2020.

[9] Strictly speaking, of course, we have to think about the harm to the rest of society of buckling the child in, but I don’t know what that would be. Maybe the child would grow up to be the next Hitler, but let’s set aside that possibility.

[10] Perhaps this estimate seems high. But note that presidents can serve for four or even eight years, and have many effects on many different people, in the present and the future. See n. 12 below.

[11] Compare: Suppose there are two buttons. One has a 50% chance of killing two innocent people (and a 50% chance of doing nothing), and one has a 25% chance of killing four innocent people (and a 75% chance of doing nothing). If you consider pushing one button to be morally equal to pushing the other, then you’re reasoning in this way.

[12] Gelman et al. 2012. The expected value of swing-state voting would be comparatively high. But if the stakes are high enough, non-swing-state votes may also have extremely high expected value. Given the length of presidential terms and incumbency effects, the effects on the entire human race (for example, on the potential for catastrophic climate change and war), and the effects on future generations (including not only the laws passed but also the executive orders issued, the Supreme Court justices appointed, and the health of democracy itself), the value of a certain candidate’s being elected president might sometimes be in the tens of trillions of utils or more. Given a 10-trillion-util value, even a one-in-a-billion chance of tipping the outcome yields an expected value of 10,000 utils before the cost of your voting. In practice, voting for a presidential candidate will cost very few voters anything close to 10,000 utils.
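As a quick check on that back-of-the-envelope figure, under these purely hypothetical stakes:

```python
# Hypothetical non-swing-state case from this note: a 10-trillion-util
# outcome and a one-in-a-billion chance of tipping it.
value_of_outcome = 10_000_000_000_000
probability_of_tipping = 1 / 1_000_000_000

print(value_of_outcome * probability_of_tipping)  # 10000.0
```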

[13] There could be extreme cases, though, in which the standard calculation yields implausible results (cf. Bostrom 2009).

[14] For example, if you can either vote or take an injured person to the hospital, then perhaps you ought to do the latter. But that’s a rare sort of case.

[15] See e.g. Annenberg Public Policy Center 2014 and the research cited in Caplan 2008 and Huemer 2016. But see e.g. Schleicher 2008 and Colander 2008 for critical perspectives on the effects and implications of this ignorance.

[16] At the very least, they may need to engage in epistemic discounting: reduce the expected net value of their candidate’s winning proportionally to their own lack of knowledge. Cf. Brennan and Lomasky 2000: § IV. Equally importantly, most voters may simply have false beliefs about the value of their candidates’ winning an election. If we’re making a moral assessment of the voter (rather than a value-based assessment of a particular act of voting, i.e. how much goodness or badness the act generates), it may be incorrect to blame the voter for their action. However, we could still ask whether the voter should have done a better job seeking out knowledge. Cf. Chignell 2020.

[17] Indeed, in some places, it’s legally required (Moyo 2019).

[18] See e.g. Brennan and Lomasky 2000: § VI; Brennan 2020: § 3.1-3.2; and Sinnott-Armstrong 2005.

[19] For example, maybe you promised someone that you wouldn’t vote for Jane. See Chapman 2014 (Kantian Ethics in 1000-Word Philosophy) for an introduction to a version of deontology.

References

Annenberg Public Policy Center. 2014. “Americans Know Surprisingly Little About Their Government, Survey Finds.” Annenberg Civics Knowledge Survey.

Beauchamp, Tom. 2020. “The Principle of Beneficence in Applied Ethics.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Spring 2020 Edition.

Bostrom, Nick. 2009. “Pascal’s Mugging.” Analysis 69(3): 443-45.

Brennan, Geoffrey and Loren Lomasky. 2000. “Is There a Duty to Vote?” Social Philosophy and Policy 17(1): 62-86.

Brennan, Jason. 2020. “The Ethics and Rationality of Voting.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Spring 2020 Edition.

Briggs, R. A. 2020. “Normative Theories of Rational Choice: Expected Utility.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Spring 2020 Edition.

Caplan, Bryan. 2008. The Myth of the Rational Voter: Why Democracies Choose Bad Policies. Princeton, NJ: Princeton University Press.

Chapman, Andrew. 2014. “Deontology: Kantian Ethics.” 1000-Word Philosophy: An Introductory Anthology.

Chignell, Andrew. 2020. “The Ethics of Belief.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Spring 2020 Edition.

Colander, David. 2008. “The Myth of the Myth of the Rational Voter.” Critical Review 20(3): 259-71.

Crisp, Roger. 2020. “Well-Being.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Spring 2020 Edition.

Gelman, Andrew et al. 2012. “What is the Probability Your Vote Will Make a Difference?” Economic Inquiry 50(2): 321-26.

Gronholz, Shane. 2014. “Consequentialism.” 1000-Word Philosophy: An Introductory Anthology.

Hsieh, Nien-hê. 2019. “Incommensurable Values.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Winter 2019 Edition.

Huemer, Michael. 2016. “Why People are Irrational About Politics.” In Jonathan Anomaly et al. (eds.), Philosophy, Politics, and Economics: An Anthology (Oxford, UK and New York, NY: Oxford University Press), pp. 456-67.

Moyo, Dambisa. 2019. “Make Voting Mandatory in the U.S.” The New York Times, October 15, 2019.

Ross, W. D. 2002 [1930]. The Right and the Good. Oxford, UK: Clarendon Press.

Schleicher, David. 2008. “Irrational Voters, Rational Voting.” Election Law Journal 7(2): 149-58.

Singer, Peter. 1972. “Famine, Affluence, and Morality.” Philosophy and Public Affairs 1(3): 229-43.

Sinnott-Armstrong, Walter. 2005. “It’s Not My Fault: Global Warming and Individual Moral Obligations.” In Sinnott-Armstrong, Walter and Richard B. Howarth (eds.), Perspectives on Climate Change (Amsterdam, Netherlands: Elsevier), pp. 293-315.

Sinnott-Armstrong, Walter. 2020. “Consequentialism.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Spring 2020 Edition.

Related Essays

Condorcet’s Jury Theorem and Democracy by Robert Weston Siscoe

Consequentialism by Shane Gronholz

Deontology: Kantian Ethics by Andrew Chapman

Practical Reasons by Shane Gronholz

Interpretations of Probability by Thomas Metcalf

The Prisoner’s Dilemma by Jason Wyckoff

About the Author

Tom Metcalf is an associate professor at Spring Hill College in Mobile, AL. He received his PhD in philosophy from the University of Colorado, Boulder. He specializes in ethics, metaethics, epistemology, and the philosophy of religion. Tom has two cats whose names are Hesperus and Phosphorus. http://shc.academia.edu/ThomasMetcalf
