The Probability Calculus

Author: Thomas Metcalf
Categories: Epistemology, Philosophy of Science, Logic and Reasoning
Word Count: 1000

Suppose that Lemmy is playing poker, and the only card he needs in order to win is the Ace of Spades. If he’s drawing randomly from a standard deck, it’s easy to figure out how likely he is to draw the Ace of Spades: one in fifty-two.[1] But we might ask other questions: how probable is it that he would, on two draws, draw the Ace of Spades and the Two of Spades? Or, if he were rolling two dice, how probable is it that he might roll a seven or an eleven? How probable is it that he will at least not roll snake eyes? We can use the probability calculus to answer these questions.

The probability calculus has a wide variety of applications in natural science,[2] social science,[3] and philosophy.[4]


1. Vocabulary

We generally use decimal representations, where a probability of ‘0’ is the minimum probability and a probability of ‘1’ is the maximum. A fair coin might be 0.5-probable to come up Heads, but 0-probable to transform into a square circle. Equivalently, we can use fractions.[5]

We use the capital letter ‘P’ followed by parentheses to express the probability that some sentence is true or that some event occurred or will occur. We use sentence-letters such as ‘a’ and ‘b’ to represent those sentences or occurrences, as in ‘P(a).’[6] We’ll assume that whatever event we’re interested in only has finitely many possible outcomes, or that there are only finitely many possibly true sentences in question. And we’ll assume elementary rules of logic.

We use the symbols from sentential logic[7] to make more-complex formulas. Examples:

  • ‘P(¬a)’: the probability that not-a is true.
  • ‘P(a ∧ b)’: the probability that both a and b are true.
  • ‘P(a ∨ b)’: the probability that a or b (or both) are true.[8]

Assumptions:[9]

  • All probabilities are real numbers that range from 0 to 1. I.e.: For all sentences a, 0 ≤ P(a) ≤ 1.
  • A sentence that is necessarily true has a probability of 1. I.e.: For all necessarily true sentences a, P(a) = 1.[10]
  • If two sentences a and b can’t both be true, then the probability that a or b is true is equal to the probability that a is true plus the probability that b is true. I.e.: For any two mutually-exclusive sentences a and b, P(a ∨ b) = P(a) + P(b).
  • All logically-equivalent sentences have the same probability. I.e.: If a ≡ b, then P(a) = P(b).
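These assumptions can be checked mechanically for a simple finite model, such as a fair six-sided die. Here is a minimal Python sketch; the die model and helper names (`P`, `is_even`, and so on) are my own illustration, not part of the essay:

```python
from fractions import Fraction

# A fair six-sided die: each of the six outcomes is equally probable.
OUTCOMES = range(1, 7)

def P(sentence):
    """Probability that `sentence` (a predicate on outcomes) is true."""
    favorable = sum(1 for o in OUTCOMES if sentence(o))
    return Fraction(favorable, len(OUTCOMES))

necessary = lambda o: True        # true of every outcome
is_even = lambda o: o % 2 == 0    # the die shows an even number
is_one = lambda o: o == 1         # the die shows a one

# Assumption 1: every probability lies between 0 and 1.
assert 0 <= P(is_even) <= 1
# Assumption 2: a necessary truth has probability 1.
assert P(necessary) == 1
# Assumption 3: mutually exclusive sentences add.
assert P(lambda o: is_even(o) or is_one(o)) == P(is_even) + P(is_one)
```

Enumerating outcomes like this only works because we assumed finitely many possible outcomes, as the essay notes above.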

2. Definitions

A. Conditional Probability

What is the probability of randomly drawing the Ace of Spades in one draw from a standard, shuffled deck? Well, there are fifty-two cards, and only one is the Ace of Spades, so the probability is 1/52. But suppose you knew that the card you would draw was black. Then the probability isn’t 1/52 anymore. There are only twenty-six black cards, and one is the Ace of Spades. So the conditional probability of drawing the Ace of Spades given that you’re drawing a black card is 1/26, not 1/52. We can represent or define ‘you drew the Ace of Spades’ as ‘a’ and ‘you drew a black card’ as ‘b,’ as follows:

  • ‘P(a|b)’ represents the conditional probability of a, given b.
  • P(a|b) = P(a ∧ b) / P(b), when P(b) > 0.[11]
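As a check on this definition, a short Python sketch can enumerate a 52-card deck and recover the essay's 1/26 answer; the deck encoding and names here are my own illustration:

```python
from fractions import Fraction

# Model a standard deck as (rank, suit) pairs.
RANKS = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
SUITS = ['Spades', 'Hearts', 'Diamonds', 'Clubs']
DECK = [(r, s) for r in RANKS for s in SUITS]

def P(event):
    """Probability of `event` on one random draw from the full deck."""
    return Fraction(sum(1 for c in DECK if event(c)), len(DECK))

a = lambda c: c == ('A', 'Spades')           # you drew the Ace of Spades
b = lambda c: c[1] in ('Spades', 'Clubs')    # you drew a black card

# P(a|b) = P(a and b) / P(b) = (1/52) / (26/52) = 1/26
p_a_given_b = P(lambda c: a(c) and b(c)) / P(b)
assert p_a_given_b == Fraction(1, 26)
```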

B. Statistical Independence

Possibilities are statistically independent of each other when one’s being true or false doesn’t affect the probability that the other is true or false. There’s no reason that the result of one die-roll should affect the result of a distinct die-roll. But the probability of drawing the Ace of Spades from some deck of cards is dependent on the probability of having already drawn all the Hearts out of that deck. If you’ve already eliminated one-quarter of the cards without eliminating the Ace of Spades, then the probability is now 1/39, not 1/52.

Thus the more likely it is that all the Hearts are gone (but the other cards remain), the more likely it is that your next draw will be the Ace of Spades—so the two sentences aren’t independent.

  • Two sentences a and b (such that P(b) > 0) are independent when and only when P(a|b) = P(a).
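The die-roll example above can be checked against this definition by enumerating all thirty-six equally likely results of two rolls. A Python sketch, with illustrative names of my own:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely results of rolling two fair dice.
ROLLS = list(product(range(1, 7), repeat=2))

def P(event):
    """Probability of `event` over the 36 equally likely rolls."""
    return Fraction(sum(1 for r in ROLLS if event(r)), len(ROLLS))

a = lambda r: r[0] == 6    # the first die shows a six
b = lambda r: r[1] == 6    # the second die shows a six

# Independence: conditioning on b leaves the probability of a unchanged.
p_a_given_b = P(lambda r: a(r) and b(r)) / P(b)
assert p_a_given_b == P(a) == Fraction(1, 6)
```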

3. Rules

These rules follow from our assumptions and definitions.[12]

Negation (‘Not’ Sentences):[13]

P(¬a) = 1 – P(a).

Illustration:

If 0.25 of the cards in a deck are Hearts, then the probability of drawing something not a Heart in one random draw is 0.75.

Conjunction (‘And’ Sentences):[14]

General rule for two[15] possibilities:

P(a ∧ b) = P(a) × P(b|a), when P(a) > 0.

Illustration:

Suppose there are four marbles in a jar: two blue and two red. The probability that you will randomly draw a red marble and then (if you don’t replace the marble you drew) randomly draw a blue marble is equal to 1/2 × 2/3. After you’ve drawn one red marble (itself a 0.5 chance), the probability that you’ll draw one of the two remaining blue marbles (instead of the one remaining red marble) is 2/3.
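The marble illustration can be verified by listing every equally likely ordered pair of draws without replacement; this Python sketch, with names of my own choosing, does exactly that:

```python
from fractions import Fraction
from itertools import permutations

# Two red and two blue marbles; every ordered two-marble draw
# (without replacement) is equally likely.
MARBLES = ['red1', 'red2', 'blue1', 'blue2']
DRAWS = list(permutations(MARBLES, 2))

def P(event):
    """Probability of `event` over the 12 equally likely ordered draws."""
    return Fraction(sum(1 for d in DRAWS if event(d)), len(DRAWS))

first_red = lambda d: d[0].startswith('red')
then_blue = lambda d: d[1].startswith('blue')

p_both = P(lambda d: first_red(d) and then_blue(d))
# Matches the conjunction rule: P(a) × P(b|a) = 1/2 × 2/3 = 1/3.
assert p_both == Fraction(1, 2) * Fraction(2, 3) == Fraction(1, 3)
```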

Rule for two independent possibilities:

P(a ∧ b) = P(a) × P(b), when a and b are independent.

Disjunction (‘Or’ Sentences):[16]

Rule for two possibilities:

P(a ∨ b) = P(a) + P(b) – P(a ∧ b).

Illustration:

In a standard, shuffled deck, the probability that you’ll draw an ace or a black card is equal to 4/52 + 26/52 – 2/52. Four out of fifty-two cards are aces; twenty-six out of fifty-two are black; and two out of fifty-two are both black and aces.

Rule for two[17] mutually exclusive possibilities:

P(a ∨ b) = P(a) + P(b), when a and b are mutually exclusive.
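This rule answers the seven-or-eleven question from the introduction: the two sums can't both occur on one roll, so their probabilities simply add. A Python sketch, with names of my own:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely results of rolling two fair dice.
ROLLS = list(product(range(1, 7), repeat=2))

def P(event):
    """Probability of `event` over the 36 equally likely rolls."""
    return Fraction(sum(1 for r in ROLLS if event(r)), len(ROLLS))

seven = lambda r: sum(r) == 7      # 6 of the 36 rolls sum to seven
eleven = lambda r: sum(r) == 11    # 2 of the 36 rolls sum to eleven

# Mutually exclusive, so the probabilities add: 6/36 + 2/36 = 2/9.
assert P(lambda r: seven(r) or eleven(r)) == P(seven) + P(eleven) == Fraction(2, 9)
```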

4. Conclusion

There are other useful rules and theorems in the probability calculus, but now we know the fundamental rules and definitions.

Notes

[1] We use the Principle of Indifference: if there are n possibilities, and no reason to expect any particular outcome over any other, then the chance of any particular outcome is one-in-n.

[2] Fisher 1959.

[3] Courgeau 2012.

[4] Collins 2012.

[5] Depending on our interpretation of probability, these numbers might represent something impossible (‘0’) or something necessarily true (‘1’), or something certainly false (‘0’) or certainly true (‘1’), or a result that never occurred (‘0’) or occurred in every trial (‘1’).

[6] I’ll refer mostly to sentences, but the sentence in question may assert that a certain event has occurred or will occur. That way, our probability calculus can apply to sentences and to events.

[7] Here I’ve chosen some standard symbols, but you might learn different symbols depending on the textbook you use. Some systems use ‘&’ or ‘·’ to mean ‘and.’ Some systems use ‘||’ to mean ‘or.’ Some systems use ‘~’ or ‘!’ to mean ‘not.’

[8] We might ask about the probability of a material conditional’s being true, as in ‘P(a → b).’ Since ‘a → b’ is logically equivalent to ‘¬(a ∧ ¬b),’ by our rules below, P(a → b) = P(¬(a ∧ ¬b)) = 1 – P(a ∧ ¬b) = 1 – [P(a) × P(¬b|a)].

[9] These assumptions are based on Kolmogorov’s (1956 [1933]: § I.1) axioms, but these (or similar) axioms are now standard in any elementary presentation of the probability calculus. See also Hacking (2001: ch. 6) and Hájek (2018: § 1).

[10] As before, this actually depends upon one interpretation of probability. Other interpretations will replace “necessarily true” with, for example, ‘certainly true.’ See n. 5.

[11] We need to make this specification because division by zero is undefined, and conditional probability is defined in terms of division.

[12] We don’t need to prove that here, but you can read proofs in a probability textbook. See e.g. Hacking (op. cit.) for these rules and some proofs. You may also be able to figure them out yourself.

[13] This follows from the third assumption. See n. 9.

[14] These follow from the definition of conditional probability, plus basic sentential-logic axioms, for example that conjunction is associative. See n. 9.

[15] General rule (including for more than two possibilities): For some set of sentences { a, b, c, … y, z }, the probability that a and b and … y and z will be true is equal to the probability that a will be true, times the probability that b will be true given that a is true, times the probability that c will be true given that a and b are true, … times the probability that z will be true given that a and b and c and … y are true.
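This chain of conditional probabilities can be checked by enumeration on a tiny deck; the four-card model below is my own illustration, not from the essay:

```python
from fractions import Fraction
from itertools import permutations

# Draw three cards in order from a tiny 4-card deck; every ordered
# three-card draw is equally likely (24 in all).
CARDS = ['AS', '2S', 'AH', '2H']
DRAWS = list(permutations(CARDS, 3))

def P(event):
    """Probability of `event` over the 24 equally likely ordered draws."""
    return Fraction(sum(1 for d in DRAWS if event(d)), len(DRAWS))

a = lambda d: d[0] == 'AS'    # first draw is the Ace of Spades
b = lambda d: d[1] == '2S'    # second draw is the Two of Spades
c = lambda d: d[2] == 'AH'    # third draw is the Ace of Hearts

# Chain rule: P(a) × P(b|a) × P(c|a and b) equals P(a and b and c).
chain = (P(a)
         * (P(lambda d: a(d) and b(d)) / P(a))
         * (P(lambda d: a(d) and b(d) and c(d)) / P(lambda d: a(d) and b(d))))
assert chain == P(lambda d: a(d) and b(d) and c(d)) == Fraction(1, 24)
```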

[16] These follow from the four assumptions. See n. 9.

[17] The general rule for an arbitrary number n of possibilities is difficult to explain in simple terms, but it follows the pattern for two disjuncts: add all the individual probabilities, then subtract the two-sentence conjunctions’ probabilities, then add in the three-sentence conjunctions’ probabilities, then subtract the four-sentence conjunctions’ probabilities, and so on, up to n.
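The alternating add-and-subtract pattern described here (often called inclusion–exclusion) is easy to state in code. This Python sketch implements it for any finite list of events and checks it against direct enumeration; the two-dice model and names are my own illustration:

```python
from fractions import Fraction
from itertools import combinations, product

# All 36 equally likely results of rolling two fair dice.
ROLLS = list(product(range(1, 7), repeat=2))

def P(event):
    """Probability of `event` over the 36 equally likely rolls."""
    return Fraction(sum(1 for r in ROLLS if event(r)), len(ROLLS))

def P_any(events):
    """P(at least one event): alternately add and subtract the
    probabilities of the k-wise conjunctions, for k = 1 up to n."""
    total = Fraction(0)
    for k in range(1, len(events) + 1):
        sign = 1 if k % 2 == 1 else -1
        for combo in combinations(events, k):
            total += sign * P(lambda r, c=combo: all(e(r) for e in c))
    return total

first_six = lambda r: r[0] == 6    # first die shows a six
second_six = lambda r: r[1] == 6   # second die shows a six
doubles = lambda r: r[0] == r[1]   # both dice match

events = [first_six, second_six, doubles]
# The general rule agrees with brute-force enumeration of the disjunction.
assert P_any(events) == P(lambda r: any(e(r) for e in events))
```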

References

Collins, Robin. 2012. “The Teleological Argument: An Exploration of the Fine-Tuning of the Universe,” in The Blackwell Companion to Natural Theology, edited by William Lane Craig and J. P. Moreland (Oxford: Blackwell), 202-281.

Courgeau, Daniel. 2012. Probability and Social Science: Methodological Relationships Between the Two Approaches. Dordrecht, Heidelberg, London, and New York: Springer.

Fisher, Ronald A. 1959. “Mathematical Probability in the Natural Sciences.” Technometrics 1 (1): 21-29.

Hacking, Ian. 2001. An Introduction to Probability and Inductive Logic. Cambridge, UK: Cambridge University Press.

Hájek, Alan. 2018. “Interpretations of Probability.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, Summer 2018 Edition, URL = <https://plato.stanford.edu/archives/sum2018/entries/probability-interpret/>.

Kolmogorov, A. N. 1956 [1933]. Foundations of the Theory of Probability, Second Edition. Tr. Nathan Morrison. New York, NY: Chelsea Publishing Company.

Acknowledgments

I thank David Builes and the editors for helpful comments on this essay.

Related Essays

Arguments: Why Do You Believe What You Believe? by Thomas Metcalf

Interpretations of Probability by Thomas Metcalf

Epistemology, or Theory of Knowledge by Thomas Metcalf

Epistemic Justification: What is Rational Belief? by Todd R. Long

The Problem of Induction by Kenneth Blake Vernon

Pascal’s Wager: A Pragmatic Argument for Belief in God by Liz Jackson

About the Author

Tom Metcalf is an associate professor at Spring Hill College in Mobile, AL. He received his PhD in philosophy from the University of Colorado, Boulder. He specializes in ethics, metaethics, epistemology, and the philosophy of religion. Tom has two cats whose names are Hesperus and Phosphorus. shc.academia.edu/ThomasMetcalf
