## In the Light of Logic

Do numbers exist?  What a crazy question for a computer scientist to consider!  I admit that my curiosity on this topic is not due to any application in Computer Science that I can see.  Reading Feferman’s In the Light of Logic, though, has gotten me interested in some of the philosophical questions that some logicians — Feferman and Goedel are two notable examples — have devoted considerable attention to.  This question regarding the ontological status of the objects of mathematics is considered by many very serious philosophers, certainly in the 20th century and no doubt before (though I know little about the history of the question).  I think it is quite interesting to consider, since it sheds light on exactly what is happening when we do theoretical work.

The most straightforward answer to the question “Do numbers exist?” is simply, “yes (of course!)”.  Numbers and more complex mathematical objects are indispensable to scientific practice, and W. V. Quine famously argued that this indispensability justifies our acceptance (at least in some sense) of the existence of whatever objects mathematics requires, at least the mathematics that is needed for scientific practice.  But as Feferman argues in Chapter 14 of In the Light of Logic (and also in earlier chapters), the logical theories required in principle for the development of all the mathematics needed for modern science are quite weak in their ontological commitments.  Indeed, Feferman puts forward a theory W, a weak system of recursive operations on the natural numbers together with predicatively defined classes of such entities, and argues that it is sufficient for the needs of all of modern science.  Moreover, theory W is a conservative extension of Peano Arithmetic, and proof-theoretically reducible to it.  So enormous cardinal numbers, for example, need not be assumed to exist in order to carry out scientifically relevant mathematics.  Thus if we follow Quine’s idea that we should accept the mathematical objects that are needed for science, we need not accept such things.  In this way Feferman rejects a strongly Platonist conception of mathematics, on which the entire menagerie of higher set theory is accepted as actually existing.

But all this proof-theoretic fanciness is, in a sense, a distraction.  After all, on Feferman’s approach we are still left with Peano Arithmetic at least.  So just seeing that we do not need the more esoteric mathematical objects to do modern science does not let us avoid the question of whether or not (or how) numbers exist.  And indeed, I personally am not motivated by Quinean concerns, which are ultimately rooted in naturalism.  And naturalism, I suspect, ultimately represents a desire to banish the concept of God or other religious beliefs from intellectual discourse and indeed, from human life.  There are very weighty reasons for thinking that that is a bad idea, which I will not discuss here: that topic is, of course, even more controversial than the existence of mathematical objects!

So: do numbers exist?  Really I think we should consider two separate questions: do theoretical objects exist, and do abstract objects exist?  While I have my doubts, it seems quite reasonable to believe that, at least in some sense, at least rather small finite numbers could be said to exist.  I am actually open to persuasion on that point, but when I say that the number of books on my desk is currently 4, maybe that is not just a figure of speech, but an actual statement about this thing called a number.  I don’t know.  But the question of whether or not theoretical objects exist — that is, objects that a particular theory is committed to — seems clearer.  Why on earth should we believe that just because a particular mathematical theory speaks of certain objects, they definitely exist?  I can easily write down a set of first-order formulas giving certain characteristics of unicorns, or ghosts, or whatever else you like.  We would certainly not wish to say that because such a theory is committed to the existence of those things, they really exist.  They might exist, of course: I could have a theory of cars, or of other generally noncontroversially existing things.  But just having a theory about them does not tell us anything about whether or not they exist.  We would need some independent reason for believing in their existence.
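To make this concrete, here is such a toy theory, axiomatized in first-order logic (my own illustrative example, not anything from Feferman).  The theory quantifies over unicorns, but writing it down settles nothing about whether unicorns exist:

```latex
% Toy "unicorn theory": two first-order axioms.  The theory is committed
% to unicorns as values of its bound variables; their existence is a
% separate question entirely.
\forall x.\ \mathit{Unicorn}(x) \rightarrow \mathit{Horse}(x) \wedge \mathit{HasHorn}(x)
\qquad
\exists x.\ \mathit{Unicorn}(x)
```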

Of course, in mathematics or other formal disciplines (Computer Science, too), people spend a lot of time working with the particular entities the theory considers.  In Computer Science, one might work with Turing Machines, which are, of course, idealized computers with an infinite amount of memory.  No computer has or perhaps physically could have an infinite amount of memory, so in that sense, Turing Machines do not exist.  Working within a theory, it is natural to talk and act as though the theory were definitely true.

Perhaps psychologically, one must, in a sense, enter the world of the theory to be successful at deriving new theorems or reaching new insights into the theory.  Indeed, this idea of entering the world of the theory suggests a modal interpretation of theoretical truth.  A version of this interpretation is proposed by Hilary Putnam in “Mathematics without Foundations”, which can be found in his book Mathematics, Matter and Method.  The idea, roughly, is that when we assert a theorem of arithmetic, for example, as true, what we are really asserting is the necessary truth of the statement that the axioms of arithmetic imply the theorem.  So any structure which satisfies the axioms of arithmetic will also satisfy the theorem.  On this kind of perspective, “modalism” (which, amusingly, Putnam explicitly says he is not trying to launch as an alternative foundational philosophy, though by now it certainly is one) is a kind of structuralism: it asserts that structures exist, even if mathematical objects are identified only by their roles in such structures.

To me, structuralism’s structures are no better than the original mathematical objects.  I would take a more proof-oriented version of modalism: to say that a theorem of a particular theory is true is just to assert that it is provable from the axioms of the theory, using whatever logical axioms and rules one accepts.  There is no ontological commitment at all there.  Yes, we could understand this deductivist interpretation modally: if a structure exists that satisfies the axioms of the theory, then that structure will satisfy the theorem.  But if we look at it that way, I think we must avoid phrasing this as above (“it is necessary that the axioms imply the theorem”), for this interpretation requires us to assume also that the axioms could be satisfied.  If they absolutely could not be satisfied, then all formulas become theorems: if no structure models the axioms, then the axioms are false in every structure, and the implication “axioms imply theorem” is vacuously true.

I would not want to ground that modality in possibly existing structures — we’d just be back where we started, because we would need a theory for those, and we’d be unsure of their status.  It would be more acceptable to ground the modality in actually existing structures.  The modal interpretation would then amount to saying that for any structure that exists, if it satisfies the axioms, then it satisfies the theorem.  But then arithmetic would become inconsistent if the universe is finite, for example (with no actual structure satisfying the axioms, every statement and its negation would count as vacuously true).  So we would not want to take this kind of more semantic modalism.  What we need instead is a more modal notion of implication, which we can find in constructive logic: if we were to enter a world where the axioms are true (regardless of whether or not there is such a world), then the theorem would be true.  This has a bit of the air of the fictionalism I have seen attributed to Hartry Field.  More simply: the theorem follows by the rules of logic from the axioms of the theory, and that is the end of the story from an ontological perspective.
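This deductivist reading can be sketched in Lean (a hypothetical illustration of my own; the names `U`, `P`, `ax1`, `ax2` are invented).  A “theorem of a theory” is just a derivation whose hypotheses are the axioms; nothing anywhere asserts that those hypotheses hold, and from unsatisfiable axioms everything follows vacuously:

```lean
-- A deductivist sketch: `U` is an abstract domain, `P` a predicate on it,
-- and `ax1`, `ax2` two hypothetical axioms.  The theorem is just the
-- derivation from the axioms; nothing asserts the axioms are satisfied.
theorem from_axioms (U : Type) (P : U → Prop)
    (ax1 : ∀ x, P x)          -- hypothetical axiom 1: P holds everywhere
    (ax2 : ∃ _ : U, True) :   -- hypothetical axiom 2: the domain is inhabited
    ∃ x, P x :=
  ax2.elim (fun x _ => ⟨x, ax1 x⟩)

-- If the axioms absolutely could not be satisfied, every formula "follows":
example (P : Prop) (bad_axiom : False) : P := bad_axiom.elim
```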

Why one theory is of more use or interest than another is then a wholly separate question from whether or not use of the theory requires acceptance of the existence of the theory’s objects.  Asserting a theorem, no matter how vivid the intuition for a world satisfying the axioms of a theory, is ultimately no more than asserting that the theorem follows deductively from those axioms.

Ok, after getting my attempts to learn some of this philosophy out of my system, I’ll be back to more technical topics next post.

I have almost finished reading Sol Feferman’s excellent book, “In the Light of Logic”, which surveys a number of historical and recent results in Mathematical Logic, particularly Proof Theory, with an eye towards their possible philosophical consequences.  I want to skip ahead to talk about Chapter 13 of the book, “Weyl Vindicated: Das Kontinuum Seventy Years Later”.  In this chapter, Feferman surveys, reconstructs, and reports on his results building on the work of the early-20th-century mathematician Hermann Weyl.  It seems from what Feferman reports that Weyl, while influenced strongly by his doctoral advisor David Hilbert, was nevertheless drawn to the intuitionism of L. E. J. Brouwer as providing a better foundation for mathematics than various schemes of axiomatization.  As a prominent example, Russell’s Ramified Theory of Types seemed to require at least one axiom which Computer Scientists would probably describe as a hack: the Axiom of Reducibility was needed to make some things work out, but lacked principled justification.

Weyl set out in his book Das Kontinuum to develop a theory of real-valued functions, including differentiation and integration of continuous functions, on a foundation he found more defensible: a theory of predicatively defined first-order relations on natural numbers.  Here, “predicative” means that the definition of a set S (say) is not allowed to refer to a set of sets which contains S.  That is, we cannot base our definition of S on a set of sets containing S.  A simple case of an impredicative definition is defining the set of natural numbers to be the intersection of all sets containing 0 and closed under the successor (“plus 1”) operation.  This definition presupposes the existence of a set U of all sets containing 0 and closed under successor, in order to define the set of natural numbers.  But the set U contains the set of natural numbers — so we are using a set of sets containing the set of natural numbers, in order to define the set of natural numbers.  As Feferman points out, impredicative definitions need not trouble us if our philosophy of mathematics is platonistic: sets of sets really do exist independent of our constructions, so definitions like the one above for the set of natural numbers are simply identifying a set which already exists.  A more constructivist philosophy of mathematics might not allow such a definition, on the grounds that there is no preexisting mathematical reality independent of our constructions, so it is not justified to refer to a set of sets containing N as part of the definition of N.  It is important to note that despite the appeal of Brouwer’s work, Weyl based his system on classical (2-valued) logic, instead of the intuitionistic logic that came to be identified with Brouwer’s program.
(As an aside, type theorists Zhaohui Luo and Robin Adams have recently been developing a formalization of Weyl’s project from a type-theoretic perspective; see their “Weyl’s predicative classical mathematics as a logic-enriched type theory”.)
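Written out in standard notation, the impredicative definition discussed above is:

```latex
% N defined as the intersection of all inductive sets.  The intersection
% ranges over a collection of sets that itself contains N -- which is
% exactly what makes the definition impredicative.
\mathbb{N} \;=\; \bigcap \,\{\, S \mid 0 \in S \ \wedge\ \forall n\,(n \in S \rightarrow n + 1 \in S) \,\}
```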

Section 8 of Feferman’s chapter on Weyl’s Das Kontinuum is quite interesting for termination analysis.  Feferman presents a theory W, in the spirit of Weyl’s work, in which Weyl’s program (of reconstructing the mathematics of the reals on predicative foundations) can be carried out.  This theory W is essentially a theory of general recursive functions, described by individual terms, and classified by what a Computer Scientist would see as simple types (base type N for natural numbers, function types $S\stackrel{\sim}{\rightarrow} T$ for partial functions from $S$ to $T$, and cartesian product types).  Classification is done with an explicit membership predicate $\in$, and the simple types are expressed as class terms of the language. (So types are not a distinct syntactic category; rather, they are expressed as terms of the language, and the connection with individual terms is given by an atomic formula of the language, using the $\in$ predicate.)

Theory W is based on Beeson’s Logic of Partial Terms (LPT), which has an explicit predicate $\downarrow$ for definedness.  In W, $\downarrow$ is used for termination of general recursive functions.  There are axioms for definition by primitive recursion and search, as well as induction, in the form that if $a\in\mathcal{P}(N)$, and $0\in a$ and $a$ is closed under successor, then every $n\in N$ is in $a$.  Here, $\mathcal{P}(N)$ is defined as $N \stackrel{\sim}{\rightarrow} \{ 0, 1 \}$ (that is, as the class of characteristic functions for subsets of N).  This theory does not allow us to state that a class $T_1$ is a member of another class $T_2$, unless of course the former is representable by an individual term.
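To give a concrete feel for the kind of partiality that the definedness predicate $\downarrow$ tracks, here is a sketch of the unbounded search (minimization) operator in Python.  The name `mu` and this rendering are my own illustration, not notation from theory W:

```python
def mu(p):
    """Unbounded search: return the least natural number n with p(n) true.

    If no such n exists, the loop runs forever: mu(p) is a partial
    operation, the kind of term whose definedness (theory W's predicate,
    via Beeson's Logic of Partial Terms) must be proved, not assumed.
    """
    n = 0
    while not p(n):
        n += 1
    return n

# Defined here: the least n with n*n > 50 is 8 (since 7*7 = 49, 8*8 = 64).
print(mu(lambda n: n * n > 50))  # prints 8
# But mu(lambda n: False) would diverge: that term is undefined.
```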

Despite its expressivity, W is a conservative extension of Peano Arithmetic.  This is, in a sense, not too surprising (but perhaps I am being naive), since the form of class terms is quite restricted, and induction is just the usual natural-number induction.  Feferman goes on to outline how the theory of analysis of real-valued functions can be developed “at least for separable Banach and Hilbert spaces” in a straightforward way.  On the basis of this, and with some discussion of potential counterexamples, Feferman claims that all of scientifically applicable mathematics (certainly a very large portion of it) can be developed in theory W.

In the end, what is the possible application for Computer Science here?  Theory W is an example of an expressive but proof-theoretically relatively weak theory, in which it is possible to reason about termination of general recursive functions, based on a notion of type (the class terms of W).  So it is an example of the combination of total and partial functions that is of interest for developing dependently typed programming languages with a logically sound notion of proof, or alternatively, for extending total type theories to allow general recursion.  This is the theme, by the way, of a FLoC workshop which I am hoping to submit a paper to.  The workshop is called “Partiality and Recursion in Interactive Theorem Provers”.

Happy New Year 2010 to anyone who happens to be reading this around that time.  While still reading and hopefully soon writing more about termination analysis, I have decided to write a little about Solomon Feferman’s lucid and readable book, “In the Light of Logic”, Oxford University Press, 1998, which I’ve been reading for a month or so now.  The call number is QA9.2.F44 1998  (apologies to interested readers at U. Iowa, since I have the book checked out at the moment — I’m planning to buy a copy, so this one will be checked back in shortly).

Today I’ll just write a little bit about the book’s Chapter 1, “Deciding the Undecidable”.  Feferman does a really wonderful job, in my opinion, surveying important results and ideas, both technical and philosophical, in logic and foundations of mathematics in the late 19th (with Cantor) and 20th centuries.  Feferman combines pre-eminent technical knowledge with broad historical and philosophical considerations in clear and accessible prose.  It is certainly very educational reading for both technically-oriented and (I suppose) philosophically minded students and researchers.  (Although Chapter 1 is for a less technical audience than some later chapters of the book.)

One of the central historical and foundational issues considered in Chapters 1 and 2, constituting the book’s Part I, “Foundational Problems”, is Hilbert’s program.  As Feferman explains (and as readers probably already know a bit about, at least), the great German mathematician David Hilbert in 1900 posed 23 mathematical problems, which became greatly influential for much research in mathematics and logic to the present day.  The ones that I am most interested in for purposes of this post are the 1st and 2nd: prove the Continuum Hypothesis (problem 1), and give a “finitary” consistency proof for a formal theory of arithmetic (problem 2).  As Feferman tells us, Hilbert’s optimism that these problems could be solved was not dampened by dramatic and — as most experts see it today — devastating results by Goedel, who showed that Peano Arithmetic (PA), and indeed any consistent, effectively axiomatized extension of it, is incomplete.  In particular, such a theory cannot prove its own consistency.  The conclusion, as Feferman says, is that “any system which incorporates all finitary methods cannot be proved consistent by those methods” (page 14).  So no finitary consistency proof is possible.  For problem 1, Goedel proved in 1938 that Zermelo-Fraenkel (ZF) set theory cannot disprove the Continuum Hypothesis (CH), and a number of years later (1963), Paul Cohen proved that ZF cannot prove CH (in both cases assuming ZF is consistent).  So CH is neither implied nor contradicted by the theory, which is again an example of incompleteness (completeness for a theory is the property that for every sentence $\phi$, the theory entails either $\phi$ or $\neg\phi$).

The fact that all consistent suitably strong theories are incomplete is philosophically interesting.  Indeed, Feferman uses it (in Chapter 1 and also Chapter 2) to cast doubt on platonism, the idea that there exists (in some sense) an abstract realm of mathematical objects, which are then considered by mathematical discourse. Feferman (very briefly) contrasts platonism with the idea that mathematics “is somehow the objective part of human conceptions and constructions” (page 7).  To me the situation seems a little different, as I’ll now explain.

When we consider the consequences of certain ideas — which when rigorously formulated are described via axioms or other formal expressions of our assumptions — we are working within the theory determined by those ideas.  The incompleteness results mentioned above show definitively that for any suitably strong theory, there will always be statements $S$ which the axioms do not determine.  So our axioms could be further developed in such a way as either to make $S$ a consequence, or $\neg S$ a consequence.  But the theory as it stands just does not cover completely the situation expressed by $S$.

The platonist position, at least as attributed by Feferman to Goedel, is that the existing mathematical universe simply has not been adequately described by our axioms, and we must search for new ones that express enough of its truths to determine either $S$ or $\neg S$.  I agree with Feferman in rejecting this position: the fact that the axioms can be consistently (or relatively consistently) expanded to accommodate $S$, or differently expanded to accommodate $\neg S$, suggests to me that only someone operating on a powerful sense of a particular mathematical universe could believe that one of these expansions is the right one, and the other is wrong.  That is, if you already have some pre-axiomatic mathematical ideas that go beyond your axioms, then of course you will simply seek to formalize more of those ideas to describe the mathematical universe you imagine.  But the results above imply that you could expand your axioms a different way, thus capturing a different set of pre-axiomatic ideas, and still be no worse off, from the point of view of consistency.

So it seems to me that a better way of thinking about the philosophical consequences of the incompleteness results is what we might call hypotheticalism.  Mathematical theories say what would be true, hypothetically, of any world satisfying the axioms.  There is no existing mathematical universe that all axiom systems must describe.  The axioms just speak hypothetically: if these things were true, then these consequences would follow.  This does not reduce mathematics to a merely human construction.  Mathematics, in various axiom systems that people have found useful or interesting for whatever practical or aesthetic reasons, is about truths, namely hypothetical truths.  It seems to me that this perspective helps preserve the sense that mathematics is not an arbitrary human construction and is indeed about potentially important truths, without requiring acceptance of the idea of a unique existing mathematical universe.  No doubt my expression of this hypotheticalism is naive from a philosophical perspective, and it leaves open the question of why one axiom system should be preferred (if at all) to another, but I hope it accommodates the incompleteness results in a more plausible way than Goedel’s platonism (as reported by Feferman).