When Tarski moved the concept of truth to the meta-logic to avoid the simple paradoxes, it might have been assumed that the daunting foundational difficulties in mathematics had been solved. These hopes were dashed in 1931 by Kurt Gödel with the publication of his paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Though we could no longer discuss truth directly in the logic, Gödel had the revolutionary idea to combine a system of formula numbering with the first-order logic and the axioms of arithmetic, the basis of Russell and Whitehead's Principia Mathematica, and examine the implications.
Gödel developed a method, using the prime numbers, to associate a number with any logical formula and, by extension, with any string of formulas. The details of this Gödel numbering are unimportant, since the idea, though brilliant at the time, is now unremarkable due to the computer revolution -- any string of symbols is routinely represented in computers by a string of binary bytes which can be interpreted as a large, unique binary number representing the symbol string, and we can freely translate back and forth between the two. Let us represent some formula numbering for the formula F by the computable functor [1] σ('F') having a computable unique inverse σ⁻¹(σ('F')) = 'F'. Since any mathematical proof can also be represented by a string of formulas, proofs can likewise be numbered. Now consider the two-place functor dem(x,σ('F')), which is true if the number x is a formula number of a proof of the formula F (i.e., x is a demonstration of F). A large part of Gödel's paper was devoted to proving that σ(...), σ⁻¹(...), and dem(...,...) are all computable functions in the Principia formalism. He then considered the quantified expression:
(∃x)dem(x,σ('F')) | (1) |
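The numbering step itself is easy to make concrete. The following is a minimal sketch in Python of the byte-encoding version described above; sigma and sigma_inv are illustrative stand-ins for σ and σ⁻¹, this is not Gödel's prime-power scheme, and the proof-checking functor dem is not implemented here, only assumed computable as in the text.

    # Toy formula numbering: a formula string <-> a unique natural number.
    # (Illustrative only; not Goedel's prime-power coding.)

    def sigma(formula: str) -> int:
        """Read the UTF-8 bytes of the formula as one big binary number."""
        return int.from_bytes(formula.encode("utf-8"), "big")

    def sigma_inv(n: int) -> str:
        """Recover the formula string from its number.
        Works whenever the formula's first byte is nonzero,
        which holds for ordinary formula text."""
        length = (n.bit_length() + 7) // 8
        return n.to_bytes(length, "big").decode("utf-8")

    f = "(∀x)¬dem(x,y)"
    assert sigma_inv(sigma(f)) == f
    print(f, "->", sigma(f))

Any injective, computably invertible coding would serve equally well; the substance of Gödel's work lay in showing that dem is computable within the Principia formalism, not in the coding itself.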
Expression (1) represents the statement that the formula F has a proof, i.e., it is demonstrable. Gödel then used a diagonalization procedure to construct his famous statement G -- we will take a more elegant path using a remarkable theorem later proved by Church:
The Self-Reference Lemma
If any PA-formula C(x) contains exactly one free variable, then it is possible to construct a closed PA-formula B such that PA proves: B ≡ C(σ('B'))
where PA refers to the system consisting of the Peano Axioms and the propositional logic (i.e., the Principia formalism), and σ('B') is the Gödel number of the formula B.
Taking C(y) ≡ ¬(∃x)dem(x,y), a PA formula containing exactly one free variable y, then there is some formula G such that:
G ≡ C(σ('G')) ≡ ¬(∃x)dem(x,σ('G')) ≡ (∀x)¬dem(x,σ('G')) | (2a) |
This is Gödel's famous formula asserting of itself that it is not demonstrable. If we abbreviate (∃x)dem(x,σ('G')) as Dem(G), then the formula can be written as:
G ≡ ¬Dem(G) | (2b) |
A common argument at this point [see for instance Nagel and Newman] is that if the formalism is consistent, neither G nor its negation ¬G is demonstrable -- they are both formally undecidable. Then "G is not demonstrable" is a true meta-statement. G, being equivalent to this true meta-statement, must then also be true -- hence there is a true statement which is not demonstrable, and the formalism is incomplete. We will take a simpler course. The formalism is consistent if it is not possible to demonstrate a false statement, i.e.:
(∀F)¬(Dem(F)⋅¬F) ≡ (∀F) Dem(F) ⇒ F | (3) |
In particular, Dem(G) ⇒ G. Negating both sides of equivalence (2b) produces an equivalent equivalence:
(G ≡ ¬Dem(G)) ≡ (Dem(G) ≡ ¬G) | (4a) |
Since material equivalence allows substitution of equivalent expressions in other formulas:
(G ≡ ¬Dem(G))⋅(Dem(G) ⇒ G) ⇒ (¬G ⇒ G) | (4b) |
From Reductio ad Absurdum, (F ⇒ ¬F) ⇒ ¬F, [2] taking F to be ¬G:
(¬G ⇒ G) ⇒ G | (4c) |
Finally, from the transitivity of implication applied to (4b) and (4c):
(G ≡ ¬Dem(G))⋅(Dem(G) ⇒ G) ⇒ G | (4d) |
Hence, given the definition of G and the assumption of consistency, we conclude that G is true and Dem(G) is therefore false by (4a), so the formalism is incomplete. There is no need for any meta-argument or interpretation of G to arrive at the final conclusion:
Gödel's First Theorem
The axioms of arithmetic are incomplete, assuming such axioms are consistent.
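Since steps (4a)-(4d) use only propositional reasoning, they can be checked mechanically by treating G and Dem(G) as unanalyzed classical truth values. The following brute-force check is a sketch in Python, not part of the formal argument, confirming that (4b), (4c), and (4d) hold under every classical assignment:

    from itertools import product

    imp = lambda a, b: (not a) or b   # material implication
    iff = lambda a, b: a == b         # material equivalence

    for G, D in product([True, False], repeat=2):    # D stands for Dem(G)
        premise = iff(G, not D) and imp(D, G)        # (G ≡ ¬Dem(G)) ⋅ (Dem(G) ⇒ G)
        assert imp(premise, imp(not G, G))           # (4b)
        assert imp(imp(not G, G), G)                 # (4c)
        assert imp(premise, G)                       # (4d)
    print("(4b)-(4d) hold for every classical assignment of G and Dem(G)")

As discussed below, it is exactly step (4c) that fails once undecidable values are admitted.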
Gödel did not stop with this result. He went on to show that even if an axiom were added to the formalism to make G demonstrable, it was possible to construct, using the same methods, other formulas which were also true but formally undecidable, ad infinitum, so the Principia formalism could never be made complete. Then he delivered the coup de grâce by proving his second theorem:
Gödel's Second Theorem
The consistency of the arithmetic can not be established by any reasoning that can be represented in the formalism of the arithmetic.
Hence, the consistency of the formalism could not be proven within the formalism, and even if consistent, it was terminally incomplete. This was the death knell of the Principia program, viz., to establish the foundations of all mathematics on the consistent and complete formalism of logic and the Peano arithmetical axioms, a blow from which it has never recovered.
The perspective is different in a complex logic. We can simply posit the relation:
G ≡ ¬Dem(G) | (2c) |
as an implicit definition of G, where the propositional expressions G and Dem(G) range over the complex formal constants {U, U~, U~V, V}. Generalizing the definitions of consistency and completeness produces the relation: [3]
Dem(G) = U~G | (5a) |
Substituting, the implicit definition (2c) becomes the equation:
G = Dem(G)~ = (U~G)~ = U~G~ | (5b) |
[Note the similarity to the Liar paradox.] This equation has the solution G = U~V, in agreement that G is undecidable, but its demonstrability Dem(G) = U~(U~V) = U~V is also undecidable rather than false. Any meta-statement concerning G's demonstrability or non-demonstrability is then undecidable, rather than true. The reasoning employed above to show that G is true now fails because the Reductio (4c) is no longer a tautology:
[(¬G ⇒ G) ⇒ G] = (GG)~G = G~G = (U~V)~U~V = U~V | (5c) |
There is no longer any meta-logical or formal way to conclude that G is true or its demonstrability false.
Lacking any analytic means of determination, there only remains the implied computation of Dem(G):
Dem(G) = (∃x) dem(x,σ('G')) | (5d) |
a brute-force, infinite calculation on the integers. At any point in this computation, if we find a demonstration we may conclude that Dem(G) is true and stop. Lacking this, we must continue ad infinitum. We can never conclude that Dem(G) is false -- only that it is true or undecided. Hence, a priori, we can never conclude that G is true -- only that it is false or undecidable. The nature of the infinite calculation in the definition of Dem(G) restricts its possible logical value, and thus the value of G, specifically excluding the values necessary to prove incompleteness. On the other hand, the complex logic allows the analytic determination of G and Dem(G) as both undecidable, in complete agreement with these computational limitations.
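The computational situation can be sketched directly. Assuming some computable proof-checking predicate dem(x, g) (a hypothetical stand-in, as in the text), the search implied by (5d) is a semi-decision procedure in Python terms -- it can halt with "true", but it can never return "false":

    from itertools import count

    def search_dem(g, dem, give_up_after=None):
        """Enumerate candidate proof numbers x = 0, 1, 2, ...
        Halts with True if some x demonstrates the formula numbered g.
        Never returns False: it returns None ("undecided so far") only if
        an arbitrary step bound is supplied, and otherwise runs forever."""
        for x in count():
            if give_up_after is not None and x >= give_up_after:
                return None
            if dem(x, g):
                return True

    # Example with a trivial stand-in checker that never finds a proof:
    print(search_dem(g=0, dem=lambda x, g: False, give_up_after=10**6))  # None

This asymmetry -- confirmation is possible, refutation is not -- is precisely the restriction on the possible value of Dem(G) described above.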
It would seem that the consistency and completeness of the complex logic and PA are undecidable, since we have for the consistency relation:
[Dem(G) ⇒ G] = Dem(G)~G = (U~V)~U~V = U~V | (6a) |
and similarly for the completeness relation. However, it seems unremarkable and only reasonable that an undecidable expression could be derived from another undecidable expression, and such undecidable formulas should not be relevant to the consistency or completeness of the formalism since they have no possible real content or import. As Reichenbach noted:
For all practical purposes, these limitations of derivability [i.e., G's underivability from the axioms] can be neglected since the underivable formulas do not possess a practical importance. [4]
The solution is to redefine the consistency and completeness relations so that formally undecidable formulas are not relevant. Introduce the primitive formal relation derivability, indicated by "|–", defined as:
[X |– Y] = τ(X)~Y | (6b) |
and then define consistency as (∀X) Dem(X) |– X and completeness as (∀X) X |– Dem(X), both of which reduce to the previous implicational forms for real distinctions, but for undecidable ones:
Dem(U~V) = U~U~V = U~V | (6c) |
[Dem(U~V) |– U~V] = τ(U~V)~U~V = (U~)~U~V = U | (6d) |
[U~V |– Dem(U~V)] = τ(U~V)~U~V = (U~)~U~V = U | (6e) |
so undecidable formulas have no effect on the consistency or completeness of the formal system, since they can not be proved or disproved.
This does not disprove Gödel's Theorems; it simply shows that their consequences can be avoided by changing the logical system assumed. As in most things, there is a choice. If we accept the standard propositional logic and, following Tertium non Datur, assume that any closed formula must be either true or false, then Gödel's Theorems inexorably follow, and at best the arithmetic is incomplete and at worst both incomplete and inconsistent. Or we can instead assume a logic which explicitly symbolizes undecidable propositions, eschews Tertium non Datur in their case, and preserves the consistency and completeness of the arithmetic. Such formulas are then regarded simply as defective definitions, insufficient to determine whether the formula is true or false even though it is closed. I find the latter choice more hospitable.
[1] Carnap's term for a function having arguments of one type returning a value of a different type -- in this case a function having numeric arguments returning a logical value.
[2] Reductio ad Absurdum is easily demonstrated to be a tautology for real distinctions by the calculation:
[(F ⇒ ¬F) ⇒ ¬F] = (F~F~)~F~ = (F~)~F~ = FF~ = U |
[3] The requirement for consistency, (∀F) Dem(F) ⇒ F, and a similar relation for completeness, (∀F) F ⇒ Dem(F), when particularized for G give:
(Dem(G) ⇒ G)⋅(G ⇒ Dem(G)) ≡ (Dem(G) ≡ G) |
[4] Reichenbach, Hans, Elements of Symbolic Logic, p. 234.