Author: Jon Charry
Category: Logic and Reasoning
Word count: 1,000
Gödel’s[1] Incompleteness Theorems—discovered by Austrian logician, mathematician, and philosopher Kurt Gödel (1906-1978)—are central to many philosophical debates about the limits of logical and mathematical reasoning.[2]
This essay introduces the Theorems and explains their importance.

1. Theories and Completeness
For logicians, a theory is a set of axioms, or basic statements, used to deduce things about a specific domain. For example, theories of chemistry use axioms—“laws of chemistry”—to predict how chemicals might react when mixed. The Incompleteness Theorems reveal limitations of a different kind of theory: arithmetical theories. These theories use axioms to infer things about numbers, like that 1+1 equals 2.
Ideally, a theory will be both consistent and complete. It’s consistent if its axioms can’t be used to prove a contradiction,[3] and it’s complete if every true statement about its domain can be proven using the theory.[4]
The First Theorem reads:
(1st) Any sufficiently powerful and consistent theory is not complete (it’s incomplete).[5]
Gödel’s insight is that “sufficiently powerful”[6] arithmetical theories—theories able to prove a lot of facts about numbers—are also able to prove facts about themselves. He showed that the concept of provability—whether a statement can be deduced from some axioms or not—can be represented by arithmetical statements like “1+1=2.” To do this, we need a Gödel code.
2. Gödel coding
Since arithmetical theories are intended to prove statements like “1+1=2”, we don’t expect them to prove things about themselves, like “I can prove ‘1+1=2’”. The latter statement doesn’t seem to be about just numbers, but also about the theory itself.
We can, however, use a clever trick to interpret certain arithmetical statements as statements about the theory itself. We do this by first stipulating a Gödel code: to each basic symbol of the theory, we assign a different natural number (its Gödel number). Let’s illustrate this by encoding the formula 1+1=2. There are four kinds of symbols in this formula: “1” (repeated twice), “+”, “=”, and “2”. We assign each of them a different number as follows:
| Symbol: | 1 | 2 | + (plus) | = (equals) |
| Gödel number: | 31 | 33 | 35 | 37 |
The numbers we picked in the second row are pretty arbitrary.[7] What matters is that different symbols get different Gödel numbers.
Now that each symbol in “1+1=2” has a Gödel number, we can encode the whole formula. There are five symbols (with repeats) in this formula, so we’ll write down five empty slots:
__ __ __ __ __
Now we fill in the slots with the first five prime numbers:
2 3 5 7 11
Then we raise each prime to the power of the Gödel number of the symbol in that slot, and multiply the results together:
2³¹ × 3³⁵ × 5³¹ × 7³⁷ × 11³³
This number is gigantic, but it uniquely labels the formula 1+1=2.[8] We can think of this number as code for the formula 1+1=2. Gödel also extended the code so that any collection of formulas, such as a proof of some statement, can be given a unique Gödel number.
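To make the recipe concrete, here is a minimal sketch in Python. The four-symbol table and the function names are just the illustrative choices from above, not part of Gödel’s formal apparatus:

```python
# A minimal sketch of the coding recipe above, for formulas built from the
# four illustrative symbols in the table (31, 33, 35, 37 are the arbitrary
# Gödel numbers we assigned to them).

godel_number_of_symbol = {"1": 31, "2": 33, "+": 35, "=": 37}

def first_primes(n):
    """Return the first n prime numbers by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def godel_code(formula):
    """Raise the i-th prime to the Gödel number of the i-th symbol and multiply."""
    code = 1
    for prime, symbol in zip(first_primes(len(formula)), formula):
        code *= prime ** godel_number_of_symbol[symbol]
    return code

# The code of "1+1=2" is exactly the giant product described above.
assert godel_code("1+1=2") == 2**31 * 3**35 * 5**31 * 7**37 * 11**33
print(godel_code("1+1=2"))
```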
3. The provability predicate
Where p is any formula, we’ll denote p’s Gödel number by G(p). We’ll now see how our theory is able to talk about itself. We define a relationship between numbers—the provability predicate, Proves(x,y)—as follows:
Proves(x,y) =df x is the Gödel number of a proof of the formula whose Gödel number is y.
Our theory, if it’s any good, could already prove basic things like “1+1=2”. With the introduction of the provability predicate, it can now also talk about what it can prove: The proof of “1+1=2” will be encoded as an enormously large number, say n. We showed already how to encode the formula “1+1=2” as the number 2³¹ × 3³⁵ × 5³¹ × 7³⁷ × 11³³, or m for short. Our theory will now be able to prove the statement “Proves(n, m)”, which really just means “I can prove ‘1+1=2’”.[9]
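To get a feel for how facts about formulas become facts about numbers, here is a minimal decoding sketch in Python. It assumes the illustrative four-symbol table from section 2, and that the input really is the code of a formula over those symbols; the real provability predicate rests on exactly this kind of arithmetical bookkeeping, carried out entirely with numbers:

```python
# Decoding: recover a formula from its Gödel number by reading off the
# exponents in the number's prime factorization.

symbol_of_godel_number = {31: "1", 33: "2", 35: "+", 37: "="}

def decode(code):
    """Strip out each prime factor in turn; its exponent names a symbol."""
    symbols = []
    divisor = 2
    while code > 1:
        exponent = 0
        while code % divisor == 0:
            code //= divisor
            exponent += 1
        if exponent:  # composite divisors never divide what's left, so they get skipped
            symbols.append(symbol_of_godel_number[exponent])
        divisor += 1
    return "".join(symbols)

print(decode(2**31 * 3**35 * 5**31 * 7**37 * 11**33))  # prints: 1+1=2
```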
4. The Gödel sentence
We can finally build a sentence, a Gödel sentence we’ll call φ, that is not provable. The idea is to adapt the famous liar sentence: instead of “This sentence is false,”[10] our Gödel sentence will say “This sentence is unprovable”. We define φ as:[11]
φ =df ~∃x [ Proves(x, G(φ)) ].
That is, φ simply states that φ (itself) is unprovable. Suppose φ is provable. Then, since there is a proof of φ, there exists a (huge!) number x which is the Gödel code of this proof of φ. In other words, “∃x [Proves(x, G(φ))]” is true. Moreover, since our theory is powerful enough to verify this fact about that particular number, it can prove this statement too. But this statement is just the opposite of φ, so the theory would end up proving both φ and its negation. If our theory really is consistent, then, it must be that we can’t prove φ to begin with. And since we can’t prove φ, what φ says is true.[12] [13]
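Self-reference of this sort can seem like sleight of hand, but it has a down-to-earth computational cousin: a short program that prints its own source code, often called a quine. The Python sketch below is only an analogy (printing is not proving, and this is not Gödel’s construction), but it shows that a finite piece of syntax can describe itself without paradox:

```python
# A classic two-line "quine" (ignoring this comment block): running it prints
# exactly the two code lines below. A program that reproduces its own text is
# a loose computational analogue of a sentence that refers to itself.
source = 'source = {!r}\nprint(source.format(source))'
print(source.format(source))
```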
5. The Second Incompleteness Theorem
We can now also convince ourselves of the Second Incompleteness Theorem, which reads:
(2nd) Any sufficiently powerful and consistent theory cannot prove its own consistency.
In the last section, we used the assumption that our theory is consistent to show that φ is unprovable. If our theory could prove that it is consistent, it would be able to encode our line of reasoning in the last section—once again, using Gödel coding—and would, just as we did, come to the conclusion that φ is not provable. But this means it would be able to prove φ, since this is just what φ says! This would violate the First Incompleteness Theorem. So, our theory cannot prove its own consistency.[14]
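Put a little more compactly (and still only as a sketch): write “Con” for the arithmetical sentence, available thanks to Gödel coding, that expresses “this theory is consistent.” By encoding the reasoning of section 4, the theory itself can prove the conditional “if Con, then φ.” So if it could also prove Con, it could prove φ; but the First Incompleteness Theorem tells us a consistent theory cannot prove φ, so it cannot prove Con either.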
6. Conclusion
We’ve learned that no matter how good a given mathematical theory is—no matter how many facts it is able to spit out—there will always be mathematical facts it cannot prove. The Gödel sentence is an example. Among these will also be the theory’s own consistency.
Gödel’s results dashed the hopes of three towering giants of early 20th-century philosophy and mathematics: David Hilbert, Gottlob Frege, and Bertrand Russell.[15] They wanted to “automate” math so that all mathematical facts could be deduced from a carefully built, consistent theory. The Theorems, however, seem to crush these hopes.
More recent discussions apply Gödel’s results to the question of whether human minds are essentially just complex machines. Some, for instance, have argued that Gödel’s Theorems suggest a negative answer, and that human reasoning will always surpass computers. Others have countered this view, and the debate continues.[16]
Notes
[1] In English, “Gödel” is pronounced like “Grr-dle”.
[2] This article includes logical notation that some readers may be unfamiliar with. Readers might benefit from first reading Formal Logic: Symbolizing Arguments in Quantificational or Predicate Logic by Timothy Eshing and Modal Logic: Axioms and Systems for Alethic Modal Logic by Thomas Metcalf. These articles provide a good introduction to symbolizing English statements using logical symbols. The latter article especially is a good place to read about theories (there called systems), even if what we are talking about here has little to do with modality (the logic of possibility and necessity). Readers are also encouraged to peruse Rebecca Goldstein’s accessible biography of Gödel, Incompleteness: The Proof and Paradox of Kurt Gödel.
[3] A contradiction is a statement of the form “P and ~P”. For example, “It is raining and it is not raining”. In classical logic, inconsistent theories are bad—in the sense of being undesirable—because of the principle of explosion: any inconsistent theory, using usual inference rules afforded to us by classical logic, will be able to prove any statement whatsoever. This is undesirable because it will mean the theory is able to prove a bunch of falsehoods in addition to a bunch of statements that may happen to be true. We won’t be able to sift through the falsehoods and true statements ourselves . . . that’s what we wanted the theory for in the first place, to tell us which statements are true! It is possible to move to certain non-classical logics for which the principle of explosion doesn’t hold. See (Priest, 2022) for more on this.
[4] A different definition of complete is that a theory is complete when, for any sentence, either it or its negation can be proven from the theory. Nothing turns on this discrepancy: Gödel’s Theorems imply that any sufficiently powerful theory of arithmetic will not be complete in either sense. See also footnote 12 for a more technical clarification of the notion of “true” in the given definition of complete.
[5] Actually, Gödel’s original statement of the theorem assumed that the theory in question is omega-consistent, which is a more stringent requirement on theories than mere consistency. But the way we’ve stated the result here is also correct. J. Barkley Rosser is credited with improving Gödel’s result by showing that the assumption of omega-consistency can be relaxed to just consistency. See Rosser’s original article (Rosser, 1936). See also (Smith, 2013) and (Smullyan, 1992) for more details.
[6] We won’t specify here exactly what is meant by “sufficiently powerful”. The goal is for this article to explain how the Incompleteness Theorems work for theories that are powerful enough to prove “a lot” about arithmetic. We’d certainly expect the statement “1 + 1 = 2” to be provable in any such sufficiently powerful theory. There are various ways a logician might define what “sufficiently powerful” means, but the least technical way is perhaps the following: a theory is sufficiently powerful if it can prove a set of seven fairly straightforward axioms known as Robinson arithmetic. These axioms establish very basic properties of the ordinary counting numbers (0,1,2,…) and the successor function s. For example, one of the axioms states that 0 is not the successor of any number (after all, we start counting at 0!). This definition is still not entirely satisfactory, but for our purposes this is what a sufficiently powerful arithmetical theory should look like.
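For the curious, here is one standard way of writing these seven axioms, using s for the successor function and with the variables understood to range over all the counting numbers: (i) ~(s(x) = 0); (ii) if s(x) = s(y), then x = y; (iii) if ~(x = 0), then ∃y (x = s(y)); (iv) x + 0 = x; (v) x + s(y) = s(x + y); (vi) x × 0 = 0; (vii) x × s(y) = (x × y) + x.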
[7] If we had more space, we would also provide a Gödel code for the other more traditional symbols used in logic, like ∀ meaning “for all” and ~ meaning “not”. Formal Logic: Symbolizing Arguments in Quantificational or Predicate Logic by Timothy Eshing reviews many of these symbols.
[8] Notice a subtlety we’re not addressing here: with this Gödel code, we can assign to any formula of our theory of arithmetic a unique Gödel number, but it may be that there are numbers which do not encode a corresponding formula. It’s possible to specify a different code for which every number is the code of some formula. See (Smullyan, 1992, p. 6) for some details on this. On that note, it is easy to verify that a Gödel number pins down its formula uniquely. That is, whenever a number is the Gödel number of some formula, that formula is uniquely determined (i.e., no other formula has that same Gödel number). This can be seen by noticing that every number can be factored into primes in exactly one way. For example, the number 12 can be factored into 4 and 3 (since 4 times 3 is 12); and 4 can be further factored into 2 and 2. Now, 2 and 3 cannot be further factored; they are prime numbers. No collection of primes other than these three (2, another 2, and a 3) multiplies out to 12, and no other number multiplies out from them. So these three primes and the number 12 pick each other out uniquely, in much the same way that the big number from above uniquely labels the formula “1+1=2” and can be unpacked to recover it.
[9] We’re unrealistically personifying the theory here, but no harm done. It can be useful to think of a theory as a machine that spits out facts about some “target system” (if it’s any good). A sufficiently strong arithmetical theory is a machine that is also “aware” of what it can and can’t prove by being able to Gödel-encode statements and proofs of statements.
[10] The liar sentence states “This sentence is false.” If true, then the sentence is false. And if false, then it must be true. This is a paradox. See (Beall, 2020) for much more on this. Although it may be helpful to think of the Gödel sentence as related to the liar sentence, this is not an exact connection; the Gödel sentence talks about provability—whether or not something can be proven or demonstrated using a theory—whereas the liar is related to truth and falsity.
[11] Strictly speaking, we do not simply “define” this sentence to be this way. Similarly, we do not simply “define” the provability predicate Proves(x,y) the way we did above. Rather, what’s important is making sure that, once all the formulas have been given a label according to some Gödel code, formulas with these properties actually exist: that there really is a formula like Proves(x,y) which is true just in case x is the Gödel number of a proof of the formula whose Gödel number is y, and that there really is a sentence φ which is true just in case there is no number coding a proof of φ itself. Here we’ve simply defined these to be the way they are, without rigorously demonstrating that such formulas always exist for any sufficiently powerful arithmetical theory.
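The general result that does this work is usually called the diagonal (or fixed-point) lemma: for any formula ψ(x) with one free variable, there is a sentence δ such that a sufficiently powerful theory proves that δ is equivalent to ψ(G(δ)), i.e., to ψ applied to δ’s own Gödel number. Taking ψ(x) to be ~∃y [ Proves(y, x) ] then yields a sentence with exactly the property we wanted φ to have. See (Smith, 2013) for a careful statement and proof.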
[12] There’s a subtle point here. When we say that the Gödel sentence is true, we don’t mean that it is a tautology or that it is valid (i.e., always true). Since the Gödel sentence (and its negation) is unprovable, there will be some interpretations of the theory in which it is true and some in which it is false. However, in the “target system” of arithmetic (the standard counting numbers 0, 1, 2,…), it is true.
[13] Similarly, ~φ is also unprovable, and we could offer a very similar sketch of a proof of this. Doing so, however, requires assuming that the theory in question satisfies a condition stricter than mere consistency, called omega-consistency. The reader is encouraged to consult (Smith, 2013) or (Smullyan, 1992) for details.
[14] Here are two more details we’ve omitted, though they are treated in more (even if just a little more) advanced sources. First, notice that self-reference is important to the First Theorem. The Gödel sentence references its own provability, much like the liar sentence (see footnote 10) references its own truth. In order to be sure that theories of arithmetic are able to express statements that are self-referential in this way, we need to do more work. Second, it is actually possible to prove that theories of arithmetic can define a wide variety of provability predicates. For some of these, the Second Incompleteness Theorem will not hold. Fortunately, we need not worry about them: if we impose very reasonable rules (called the Hilbert-Bernays conditions) on how exactly a provability predicate should behave, the Second Incompleteness Theorem does hold for provability predicates of this kind. These nuances can be found in (Smith, 2013) and (Smullyan, 1992).
[15] In a little more detail: in 1900 the mathematician David Hilbert issued a set of twenty-three unsolved math problems he felt were central to mathematical thinking at the time. The second problem on the list challenged mathematicians to prove the consistency of the most powerful theories of arithmetic of the time. Similarly, the logician-philosophers Gottlob Frege and Bertrand Russell were early advocates of logicism, which roughly states that all mathematical truths are reducible to statements which can be proved by an appropriate theory. These two projects are often cited as having taken the biggest hit when Gödel published his results. See (Raatikainen, 2022) for much more on the philosophical upshots of the Theorems.
[16] For much more on this debate, see David Chalmers’ article “Minds, Machines, And Mathematics: A Review of Shadows of the Mind by Roger Penrose”. Physics Nobel laureate Roger Penrose’s argument is detailed in his book, Shadows of the Mind. A similar argument was given much earlier by philosopher J.R. Lucas in his paper “Minds, Machines and Gödel.” Since then, there have been a host of philosophical arguments inspired by Gödel’s results.
References
Beall, J. (2020). “Liar Paradox.” In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 ed.).
Chalmers, D. (1995). “Minds, Machines, And Mathematics: A Review of Shadows of the Mind by Roger Penrose”. PSYCHE: An Interdisciplinary Journal of Research On Consciousness, 2, 11-20.
Gödel, K. (1992). On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Dover.
Goldstein, R. (2005). Incompleteness: The Proof and Paradox of Kurt Gödel. W.W. Norton & Company.
Lucas, J.R. (1961). “Minds, Machines and Gödel”. Philosophy, 36(137), 112-127.
Penrose, R. (1994). Shadows of the Mind. Oxford University Press.
Priest, G. (2022). “Paraconsistent Logic.” In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2022 ed.).
Raatikainen, P. (2022). “Gödel’s Incompleteness Theorems.” In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2022 ed.).
Rosser, J. B. (1936). “Extensions of Some Theorems of Gödel and Church”. Journal of Symbolic Logic, 1(3), 87–91.
Smith, P. (2013). An Introduction to Gödel’s Theorems. Cambridge University Press.
Smullyan, R. (1992). Gödel’s Incompleteness Theorems. Oxford University Press.
Related Essays
Formal Logic: Symbolizing Arguments in Sentential Logic by Thomas Metcalf
Formal Logic: Symbolizing Arguments in Quantificational or Predicate Logic by Timothy Eshing
Modal Logic: Axioms and Systems for Alethic Modal Logic by Thomas Metcalf
About the Author
Jon Charry is a PhD candidate at the University of California, Santa Barbara specializing in philosophy of physics and logic. philosophy.ucsb.edu/people/jon-charry