The Inconsistency of Arithmetic
Posted by John Baez
Faster-than-light neutrinos? Boring… let’s see something really revolutionary.
Edward Nelson, a math professor at Princeton, is writing a book called Elements in which he claims to prove the inconsistency of Peano arithmetic.
It’s a long shot, but I can’t resist saying a bit about it.
On the Foundations of Mathematics mailing list, he wrote:
I am writing up a proof that Peano arithmetic (P), and even a small fragment of primitive-recursive arithmetic (PRA), are inconsistent. This is posted as a Work in Progress at http://www.math.princeton.edu/~nelson/books.html
A short outline of the book is at:
http://www.math.princeton.edu/~nelson/papers/outline.pdf
The outline begins with a formalist critique of finitism, making the case that there are tacit infinitary assumptions underlying finitism. Then the outline describes how inconsistency will be proved. It concludes with remarks on how to do modern mathematics within a consistent theory.
Thanks to David Roberts and Andres Caicedo for pointing this out on Google+.
I have no idea if Nelson’s proof is correct! He has, however, done good mathematics in the past. For example, in his Ph.D. thesis he made major progress in constructive quantum field theory, by proving the self-adjointness of Hamiltonians for the theory in 1+1 dimensions (with a spatial cutoff, I believe).
Over on Google+, Kevin Kelly asked:
His first sentence: “The aim of this work is to show that contemporary mathematics, including Peano arithmetic, is inconsistent, to construct firm foundations for mathematics, and to begin building on those foundations” reminds me of Gödel’s Theorem. Is it a fool’s quest to try to make a consistent arithmetic?
I replied, trying to avoid nuances that would confuse nonmathematicians (so give me a break):
Most logicians don’t think the problem is “making a consistent arithmetic”: unlike Nelson, they believe the arithmetic we have now is already consistent. The problem is making a consistent system of arithmetic that can prove itself consistent.
Gödel’s theorem says that given certain technical conditions, any system of arithmetic that can prove itself consistent must be inconsistent.
So, the only way out is to develop a system of arithmetic that doesn’t obey those ‘certain technical conditions’. And since Nelson is no fool, this is what he’s trying to do.
One of those ‘certain technical conditions’ is the idea that the numbers 0, 1, 2, 3, 4, … obey the principle of mathematical induction. Namely, if
A) some property holds for the number 0,
and
B) given that this property holds for some number n, it holds for the next number n+1,
then it holds for all numbers. Nelson doubts the principle of mathematical induction, for reasons he explains in his book, so I’m sure his new system will eliminate or modify this principle.
Needless to say, this is a radical step. But vastly more radical is his claim that he can prove ordinary arithmetic is inconsistent. Almost no mathematicians believe that. I bet he’s making a mistake somewhere, but if he’s right he’ll achieve eternal glory.
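Just so the formal statements are on the table (these are the standard textbook versions, nothing specific to Nelson’s book): in first-order Peano arithmetic, induction is not a single axiom but a schema, with one instance

$$ \bigl(A(0) \wedge \forall n\,(A(n) \rightarrow A(n+1))\bigr) \rightarrow \forall n\, A(n) $$

for each formula $A(n)$ in the language of arithmetic. And Gödel’s second incompleteness theorem, the result behind those ‘certain technical conditions’, says that a consistent, recursively axiomatizable theory $T$ containing enough arithmetic cannot prove the arithmetized sentence $\mathrm{Con}(T)$ expressing its own consistency.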
In his book Elements, in the chapter Against Finitism, Nelson explains his doubts about the principle of mathematical induction:
Induction is justified by appeal to the finitary credo: for every number $x$ there exists a numeral $d$ such that $x$ is $d$. It is necessary to make this precise; as someone once said, it depends on what you mean by “is”. We cannot express it as a formula of arithmetic because “there exists” in “there exists a numeral $d$” is a metamathematical existence assertion, not an arithmetical formula beginning with $\exists$.
Let $A$ be a formula of $P$ with just one free variable $x$ such that $\vdash_P \exists x\, A$. By the least number principle (which is a form of induction), there is a least number $x$ satisfying $A$. One might, thinking that every number is a numeral, assert that there exists a numeral $d$ such that $\vdash_P A_x(d)$.
[Here $A_x(d)$ is $A$ with the free variable $x$ replaced by the numeral $d$.]
But, as I learned from Simon Kochen, this does not work. The rank of a formula is the number of occurrences of $\exists$ in it. Let $\mathrm{Con}_r$ be the formula asserting that there is no arithmetized proof of a contradiction all of whose formulas are of rank at most $r$. Then each $\mathrm{Con}_r$ can be proved in $P$ by introducing a truth definition for formulas of rank at most $r$. But if $P$ is consistent then $\forall r\,\mathrm{Con}_r$ is not a theorem of $P$, by Gödel’s second theorem. Now let $A$ be $\mathrm{Con}_x \rightarrow \forall r\,\mathrm{Con}_r$. Then $\exists x\, A$ is a theorem of $P$ since it is equivalent to the tautology $\forall r\,\mathrm{Con}_r \rightarrow \forall r\,\mathrm{Con}_r$, but (if $P$ is consistent) there is no numeral $d$ such that $\vdash_P A_x(d)$.
The finitary credo can be formulated precisely using the concept of the standard model of arithmetic: for every element $\xi$ of $\mathbb{N}$ there exists a numeral $d$ such that it can be proved that $d$ is equal to the name of $\xi$; but this brings us back to set theory. The finitary credo has an infinitary foundation.
The use of induction goes far beyond the application to numerals. It is used to create new kinds of numbers (exponential, superexponential, and so forth) in the belief that they already exist in a completed infinity. If there were a completed infinity consisting of all numbers, then the axioms of $P$ would be true assertions about numbers and $P$ would be consistent.
It is not a priori obvious that $P$ can express combinatorics, but this is well known thanks to Gödel’s great paper on incompleteness. As a consequence, exponentiation $\uparrow$ and superexponentiation $\Uparrow$ can be defined in $P$ so that we have
(20) $x \uparrow 0 = S0$
(21) $x \uparrow Sy = (x \uparrow y) \cdot x$
(22) $x \Uparrow 0 = S0$
(23) $x \Uparrow Sy = x \uparrow (x \Uparrow y)$
and similarly for primitive-recursive functions in general. For the schema of primitive recursion to be coherent, it is necessary that the values of the functions always reduce to a numeral, since they are defined only for 0 and iterated successors of numbers for which they have been previously defined. Finitists believe that primitive recursions always terminate; for example, that applying (15)-(18)
[namely, the axioms of Peano arithmetic, less mathematical induction]
and (20)-(23) a sufficient number of times, a superexponential term such as $SS0 \Uparrow SS0 \Uparrow SS0$ reduces to a numeral. But the putative number of times these rules must be applied can only be expressed by means of a superexponential expression; the argument is circular.
The problem is not simply that some primitive recursions are too long. The problem is structural: there is a sharp division between two classes of primitive recursions. This results from work of Bellantoni and Cook; see also Bellantoni’s thesis. They indicate that their work was strongly influenced by previous work of Leivant, who subsequently gave a more elegant form to the characterization, and it is appropriate to call the result the BCL theorem.
While a number is being constructed by recursion, it is only potential, and when the recursion is complete, it is actual. What Bellantoni and Cook, and Leivant, do is restrict recursions so that they occur only on actual numbers. Then the theorem is that the class of functions computable in this way is precisely the class of polynomial-time functions. This is an astonishing result, since the characterization is qualitative and conceptually motivated, saying nothing whatever about polynomials. Bellantoni, Cook, and Leivant have revealed a profound difference between polynomial-time recursions and all other recursions. The recursions constructed by the BCL schema enjoy a different ontological status from recursions in general. In the former, recursions are performed only on objects that have already been constructed. In the latter, for example in a superexponential recursion, one counts chickens before they are hatched (and the chicks that they produce as well), all in the fond hope that this will eventually take place in the completed infinity by and by.
Not only is induction as a general axiom schema lacking any justification other than an appeal to $\mathbb{N}$ as a completed infinity, but its application to specific variable-free primitive-recursive terms lacks a cogent justification. We shall exhibit a superexponential recursion and prove that it does not terminate, thus disproving Church’s Thesis from below and demonstrating that finitism is untenable.
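Here is one way to see the shape of Kochen’s observation, in generic notation (my paraphrase, not a quotation from Nelson’s book). For any formula $B(x)$, the sentence

$$ \exists x\, \bigl(B(x) \rightarrow \forall y\, B(y)\bigr) $$

is logically valid: either everything satisfies $B$, or some $x$ fails $B$ and makes the implication hold vacuously. So $P$ proves the existential statement. But if some numeral $d$ gave a provable instance $\vdash_P B(d) \rightarrow \forall y\, B(y)$, and $P$ also proves $B(d)$, then $P$ would prove $\forall y\, B(y)$. Taking $B$ to be $\mathrm{Con}$, the theory $P$ proves $\mathrm{Con}_d$ for each numeral $d$, while $\forall r\, \mathrm{Con}_r$ amounts to the consistency of $P$ (every proof only involves formulas of some bounded rank), so a provable instance would contradict Gödel’s second theorem. The existential statement is a theorem, yet no numeral witnesses it provably.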
Note, in case it’s not obvious, that nothing in Nelson’s passage above is a proof of the inconsistency of Peano arithmetic. Nonetheless, exhibiting a recursion that doesn’t terminate sounds fascinating in its own right! I hadn’t been aware of this work of Bellantoni, Cook, and Leivant.
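To get a concrete feel for the recursions (20)-(23), here is a little Python sketch (my own illustration, not anything from Nelson’s book) that builds exponentiation and superexponentiation out of nothing but the successor operation, and shows how quickly superexponential terms outrun anything one could actually reduce to a numeral:

```python
# Each function below is defined by iterating the previous one, mirroring
# the primitive recursions (20)-(23).  Plain Python ints stand in for numerals.

def suc(x):
    """Successor: S x."""
    return x + 1

def add(x, y):
    """x + y: apply successor to x, y times."""
    for _ in range(y):
        x = suc(x)
    return x

def mul(x, y):
    """x * y: add x to 0, y times."""
    r = 0
    for _ in range(y):
        r = add(r, x)
    return r

def up(x, y):
    """Exponentiation x ^ y: start from S0 and multiply by x, y times."""
    r = suc(0)
    for _ in range(y):
        r = mul(r, x)
    return r

def upup(x, y):
    """Superexponentiation x ^^ y: start from S0 and exponentiate y times."""
    r = suc(0)
    for _ in range(y):
        r = up(x, r)
    return r

if __name__ == "__main__":
    print(up(2, 10))   # 1024: reduces to a numeral almost instantly
    print(upup(2, 3))  # 2 ^ (2 ^ 2) = 16
    print(upup(2, 4))  # 2 ^ 16 = 65536: already tens of thousands of successor steps
    # upup(2, 5) would be 2 ^ 65536, far beyond any feasible reduction to a numeral.
```

Note that in `upup` the number of times the inner `up` recursion runs is itself a previously computed (and rapidly astronomical) value, rather than one of the original inputs. That is the sense in which a superexponential recursion “counts chickens before they are hatched”.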
He outlines his claimed inconsistency proof later, in Section 18, but alas, it’s too technical for me to follow. It uses the fact (or claim) that Robinson arithmetic “proves the quasitautological consistency of its relativization”. It uses the “Hilbert–Ackermann theorem” stating that “a quasitautologically consistent open theory is consistent”. It also uses Chaitin’s incompleteness theorem, and Kritchman and Raz’s “stunning proof without diagonalization or self-reference of Gödel’s second incompleteness theorem”. The only bit of this I understand is Chaitin’s incompleteness theorem.
Can anyone take a stab at explaining some of these ideas? You can see most of the proof sketch in his outline.
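For what it’s worth, here is the one ingredient I can state, in one common formulation (the standard statement, not Nelson’s wording). Chaitin’s incompleteness theorem: if $T$ is a consistent, recursively axiomatizable theory containing enough arithmetic, there is a constant $L$, depending on $T$, such that

$$ T \nvdash \; K(s) > L \quad \text{for every particular string } s, $$

where $K(s)$ is the Kolmogorov complexity of $s$: the length of the shortest program that outputs $s$. All but finitely many strings really do have complexity above $L$, but $T$ can never certify this about any specific string. Kritchman and Raz’s proof of the second incompleteness theorem also works with Kolmogorov complexity, which is presumably the connection.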
I should admit that Nelson and I had the same Ph.D. thesis advisor, so I probably take his ideas more seriously than if he were some random unknown guy. On the other hand, he turned me down when I asked him to supervise my undergraduate thesis, so it’s not like we’re best buddies or anything.
Re: The Inconsistency of Arithmetic
Over on Google+, Terry Tao writes: