The Prisoner’s Dilemma

Author: Jason Wyckoff
Category: Social and Political Philosophy
Word Count: 1000

What is the relationship between being rational and producing good results? One might think that rational people, acting rationally on good information, will always produce good outcomes for themselves, and maybe for others too. But consider William Poundstone’s description of a widely discussed case:

“Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don’t have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch…If both prisoners testify against each other, both will be sentenced to two years in jail.”1

We’ll call our prisoners Orange and Black. This matrix illustrates the possible outcomes, with Orange’s sentence listed first and Black’s listed second:

Prisoner’s Dilemma Matrix

                         Black stays silent      Black testifies
Orange stays silent      1 year, 1 year          3 years, 0 years
Orange testifies         0 years, 3 years        2 years, 2 years

If we assume the only thing each prisoner cares about is minimizing her own sentence, and that the two prisoners can’t communicate with each other, it’s rational for each prisoner to testify—even though this produces the second-worst outcome for each prisoner. No matter what Black does, Orange gets a lighter sentence by testifying, and vice versa. “Testify” is therefore the dominant strategy for both players, and Testify/Testify is a dominant-strategy equilibrium (the outcome produced by everyone’s adoption of their dominant strategy).
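For readers who like to see the reasoning checked mechanically, here is a minimal sketch in Python (the encoding of the sentences and the variable names are ours, not part of the original case) confirming that testifying gives Orange a shorter sentence no matter what Black does:

# Orange's sentence in years, indexed by (Orange's choice, Black's choice),
# taken from Poundstone's description above.
SENTENCE = {
    ("silent", "silent"): 1,    # both convicted on the lesser charge
    ("silent", "testify"): 3,   # Orange stays silent, Black goes free
    ("testify", "silent"): 0,   # Orange goes free
    ("testify", "testify"): 2,  # both convicted on the main charge
}

# "Testify" strictly dominates "silent" if it yields a shorter sentence
# whatever Black does; the game is symmetric, so the same holds for Black.
print(all(
    SENTENCE[("testify", black)] < SENTENCE[("silent", black)]
    for black in ("silent", "testify")
))  # True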

1. Significance

We’ll use the language of game theory—the study of strategic decision-making—and call this scenario a game, with players. Let’s introduce some generic terms, and call choices like remaining silent “cooperation” (with the other player) and choices like testifying “defection.” In a Prisoner’s Dilemma (PD), mutual cooperation produces a pretty good outcome (second-best from the perspective of each player, best overall in terms of total preference-satisfaction), but mutual cooperation is rationally precluded because each player can see that defection is the better option for her, no matter what the other player does. So a game has a PD structure when mutual cooperation is second-best for everyone, but the players’ attempts to get the best results for themselves end up producing the outcome they all regard as second-worst. The bottom line is that rational people, behaving rationally, can produce very bad outcomes.
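This structure can be stated precisely using the standard game-theoretic shorthand (not used in the essay itself): T is the temptation payoff for defecting against a cooperator, R the reward for mutual cooperation, P the punishment for mutual defection, and S the sucker’s payoff. A sketch:

def is_prisoners_dilemma(T, R, P, S):
    """A symmetric 2x2 game has a PD structure when defection strictly
    dominates (T > R and P > S) yet mutual cooperation beats mutual
    defection (R > P): rational play then yields the outcome everyone
    ranks second-worst."""
    return T > R > P > S

# The prisoners' case, treating payoff as -(years in prison):
print(is_prisoners_dilemma(T=0, R=-1, P=-2, S=-3))  # True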

2. Iterations

So far, we’ve seen what happens when the PD is played once. But what if it’s played multiple times by the same players? If the players know exactly how many rounds will be played, mutual cooperation may be unstable: defection is rational in the known final round, which makes it rational in the round before that, and so on back to the first round.2 Things are different, however, if it’s unknown when the last round will be (an indefinitely iterated game). Interestingly, under such conditions cooperative strategies work well.

A particularly effective (and simple) strategy is tit-for-tat, in which the player cooperates in the first round, and from then on the player does what the other player did in the previous round. This punishes defection with immediate but short-lived consequences—“you just defected, so now I’m denying you the benefits of my cooperation. But I’ll cooperate if you cooperate.” In The Evolution of Cooperation, Robert Axelrod says of tit-for-tat, “[w]hat accounts for [its] robust success is its combination of being nice, retaliatory, forgiving, and clear.”3 It’s nice because it’s initially cooperative, avoiding trouble. Though it punishes defection, it forgives by rewarding a return to cooperative behavior. And it’s easy to recognize and understand.
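As a sketch, tit-for-tat fits in a few lines (assuming each player can see the other’s past moves; the function name is ours):

def tit_for_tat(opponent_history):
    """Cooperate in round one; afterwards, copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "cooperate"

# Against a player who defects once and then returns to cooperating,
# tit-for-tat retaliates exactly once and then forgives:
opponent_moves = ["cooperate", "defect", "cooperate", "cooperate"]
for rnd in range(len(opponent_moves)):
    print(tit_for_tat(opponent_moves[:rnd]), "vs", opponent_moves[rnd])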

It’s worth noting that tit-for-tat is just one among many possible “nice strategies.” In iterated PDs, nice strategies tend overwhelmingly to deliver better results than strategies that are not nice.4 The lesson: be cooperative until the other player defects, and then react but forgive. Think long-term and gain trust.
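That tendency is easy to reproduce in a toy round-robin loosely modeled on Axelrod’s computer tournament. The payoff numbers (5, 3, 1, 0) are the values standardly used in the literature; the small roster of strategies is our illustration, not Axelrod’s actual field of entries:

import itertools

# Payoff for (my move, opponent's move): temptation 5, reward 3,
# punishment 1, sucker's payoff 0.
PAYOFF = {("D", "C"): 5, ("C", "C"): 3, ("D", "D"): 1, ("C", "D"): 0}

def always_cooperate(opp_history): return "C"
def always_defect(opp_history): return "D"
def tit_for_tat(opp_history): return opp_history[-1] if opp_history else "C"
def grudger(opp_history):  # nice, but never forgives a defection
    return "D" if "D" in opp_history else "C"

def play(a, b, rounds=200):
    """Total scores for strategies a and b over an iterated PD."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit-for-tat (nice)": tit_for_tat,
              "grudger (nice)": grudger,
              "always cooperate (nice)": always_cooperate,
              "always defect": always_defect}
totals = dict.fromkeys(strategies, 0)
# Every strategy meets every other, and itself (as in Axelrod's tournament).
for (n1, s1), (n2, s2) in itertools.combinations_with_replacement(
        strategies.items(), 2):
    sc1, sc2 = play(s1, s2)
    totals[n1] += sc1
    if n1 != n2:
        totals[n2] += sc2
for name in sorted(totals, key=totals.get, reverse=True):
    print(name, totals[name])  # the nice strategies take the top three places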

3. Applications

So then, where’s the philosophy? Consider one domain in which the Prisoner’s Dilemma has been influential: ethics. Some philosophers have argued (controversially) that we might develop moral principles from the self-interested reasoning of participants in a PD. They hold that by accepting ethical constraints on behavior, we all achieve better results in the long term.5 Rather than simply try to maximize our welfare, we should be “constrained maximizers.” Here, we might interrogate the use of the PD model. Can it supply moral reasons to constrain behavior toward the disadvantaged and those with less power, with whom the powerful don’t interact on the equal footing of players in a PD? If so, how?

The PD has also influenced political philosophy. Thomas Hobbes famously argued that in the state of nature, without a government, people’s attempts to maximize their own gains lead to violence, constant fear, and lives that are “solitary, poor, nasty, brutish, and short.”6 Some contemporary social contract theorists have suggested that Hobbes was thinking of the state of nature as a PD—a situation in which universal cooperation can’t be secured because each person has decisive reasons to be non-cooperative, so the result is universal serial defection—a state of war. By establishing a state, we change the payoff structure by making defection more costly, thereby avoiding the PD.
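A minimal sketch of that last point, using the conventional payoff numbers and a state-imposed fine that are our assumptions rather than Hobbes’s: once the fine for defecting exceeds the gap between temptation and reward, cooperation becomes the dominant strategy and the dilemma dissolves.

def cooperation_dominates(fine, T=5, R=3, P=1, S=0):
    """After the state fines each act of defection, a defector earns
    T - fine against a cooperator and P - fine against a defector.
    Cooperation is dominant once it does better in both cases."""
    return R > T - fine and S > P - fine

for fine in (0, 1, 2, 3):
    print(fine, cooperation_dominates(fine))
# Cooperation does not dominate until the fine exceeds T - R = 2; at 3,
# cooperating is the better move no matter what the other player does.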

But anarchists can also help themselves to the lessons we’ve learned here. They might argue that a condition of statelessness is more like an iterated PD since people are looking to do well in the long run, not just immediately. Under such conditions, perhaps cooperation evolves without the establishment of a state.

This may be a fruitful way to frame the debate between those who think a stateless world would be best and those who think government is necessary. Or perhaps it simply invites more questions: Is the state of nature really a PD? Is there such a thing as a “state of nature” at all? Is government—particularly democracy—about adjudicating competing interests, or rather about solving shared problems collectively? These are difficult questions, but there is little doubt that the Prisoner’s Dilemma has played a significant role in shaping recent debates on important problems in philosophy.

Notes

1Poundstone, 1993, 118.

2Sorensen (2004) explains the matter quite well.

3Axelrod, 1984, 54.

4See Davis, 1997, 147. Jean Hampton (1997, 46) claims that cooperation in an iterated PD is rational only if one has assurance that one’s opponent will cooperate as well, but this is not true, as the tournament of computer programs discussed in Axelrod (1984) demonstrates.

5See, e.g., Gauthier, 1986.

6See Hobbes 1994 [1668], ch. 13.

References

Axelrod, Robert (1984). The Evolution of Cooperation. Basic Books.

Davis, Morton D. (1997). Game Theory: A Nontechnical Introduction. Dover Publications.

Gauthier, David (1986). Morals by Agreement. Oxford University Press.

Hampton, Jean (1997). Political Philosophy. Westview Press.

Hobbes, Thomas (1994 [1668]). Leviathan. Hackett Publishing Company.

Poundstone, William (1993). Prisoner’s Dilemma. Anchor.

Sorensen, Roy (2004). “Paradoxes of Rationality,” in The Oxford Handbook of Rationality (Alfred Mele and Piers Rawling, eds.). Oxford University Press.

About the Author

Jason earned a PhD in philosophy at the University of Colorado Boulder, a JD at Georgetown University Law Center, and a BA from the University of Illinois at Urbana-Champaign. Over the past several years he has published in the areas of ethics, social and political philosophy, feminist philosophy, and philosophy of religion. He has a taste for Scotch and a budget for Canadian whisky. http://jasonwyckoffauthor.com 
