

I recently (May 1) gave a short presentation touching on ethics and technology, as well as the ethics of Internet pornography.  I am sharing the slides here.  Sadly, most of the references on the seemingly unending list of harms and evils associated with pornography are behind paywalls, although the article by Max Waltman is freely available.  As always when reading about this topic, please be aware that some of the material can be quite upsetting for people who have experienced sexual assault, abuse, childhood assault, or other traumatic experiences, as well as for young readers.

I have wanted to post for a month or so on some intense reading I did this spring on infinitary logics.  I was trying to revisit the semantics of impredicative type theories with type-level computation, like System F^{\omega}, in the hopes of finding a new approach that would satisfy the following conditions:

  •  only closed value types are interpreted.  In contrast, most semantics I have seen are forced to interpret open non-value type expressions, like a type-level application with free variables, or a type-level lambda abstraction.  Interpreting these type-level non-values complicates the semantics.
  • the interpretation of a universal type \forall X:K.T (where K is the kind of the variable X) is defined in terms of the meanings of its substitution instances.  In contrast, in proofs based on Girard’s famous proof for System F, one must pass to the semantic domain at this point, and define the interpretation as the intersection, over all reducibility candidates R (the possible interpretations of a variable X of kind K), of the interpretation of the body T with X interpreted as R.
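To make the contrast concrete, here is a rough sketch of the two styles of defining clause for universal types (the notation, with [[ \cdot ]] for the interpretation function, is mine and only meant to be suggestive):

substitutive clause: t \in [[ \forall X{:}K.T ]] iff for every closed type T' of kind K, t \in [[ [T'/X]T ]];

Girard-style clause: [[ \forall X{:}K.T ]]_\rho = \bigcap_{R \in Cand(K)} [[ T ]]_{\rho[X \mapsto R]}, where Cand(K) is the collection of reducibility candidates for kind K and \rho assigns candidates to the free type variables of the type being interpreted.

The first clause never leaves the world of types and terms; the second must pass to the semantic domain, and the environment \rho is exactly what forces open types to be interpreted.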

My attempts failed — perhaps not surprising, as excessively smart people have thought about this for 40 years without coming up with anything essentially better than Girard’s approach.  There is one exception, which Harley pointed out to me: step-indexed logical relations satisfy the above two properties.  While their use for normalization proofs is, I think, limited, the fact that they meet both criteria is something I find really fantastic.

Anyhow, while on the quixotic quest of trying to devise a new semantics for impredicative type theories, I took a look at some works on infinitary logics.  My idea was: imagine that we are defining the semantics of System F, and we want to say that a term t is in the meaning of a universal type \forall X.T iff for every closed type T', we have t in the meaning of [T'/X]T.  As is well known, this cannot be allowed as part of a well-founded definition by recursion on the structure of the type which we are trying to interpret, because the type expression could very easily have increased when we did the substitution of T' for X in T.  For example, T' could have been \forall X.T itself (and T could have contained a free occurrence of X)!
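To see the blow-up concretely (a standard sort of example, not taken from any particular source): take T = X \to X, so the universal type is \forall X. X \to X.  Choosing T' = \forall X. X \to X itself, the substitution instance [T'/X]T is (\forall X. X \to X) \to (\forall X. X \to X), which contains the very type we set out to interpret, twice.  So a definition by recursion on the structure of the type being interpreted cannot justify this clause.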

But what if we viewed a semantics for System F types including this substitutive defining clause for \forall-types as a co-recursive definition of an infinite formula?  That is, we imagine that rather than recursively describing the meaning via a formula of our meta-logic (which speaks about quantifying over closed types or over reducibility candidates, or whatever), we imagine that we are just generating, co-recursively, an infinite formula, and that defines our semantics.  The definition will indeed be a productive co-recursion.  So we really can consider it a definition of our semantics.  The question is, what kind of logic allows infinite formulas?
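To make the idea a bit more concrete, here is a minimal sketch in Haskell (all names, and the toy enumeration of closed types, are my own inventions; this is only meant to illustrate the shape of the productive corecursion, not to be a serious semantics):

```haskell
{-# LANGUAGE LambdaCase #-}
module InfSem where

-- System F types; only closed types will be interpreted.
data Ty = TVar String | Arrow Ty Ty | All String Ty
  deriving (Eq, Show)

-- Terms are left abstract; we only need them to appear in atomic formulas.
type Tm = String

-- Infinitely deep formulas: a conjunction may have infinitely many branches,
-- and, read coinductively, a formula may also contain infinite paths.
data Formula
  = Atom Tm Ty      -- "term t is in the meaning of closed type ty"
  | And [Formula]   -- possibly infinite conjunction

-- Substitution of a closed type for a type variable (capture is not an issue
-- because the substituted type is closed).
subst :: String -> Ty -> Ty -> Ty
subst x s = \case
  TVar y    -> if x == y then s else TVar y
  Arrow a b -> Arrow (subst x s a) (subst x s b)
  All y b   -> if x == y then All y b else All y (subst x s b)

-- A productive enumeration of some closed types (a toy one, for illustration).
closedTypes :: [Ty]
closedTypes = idTy : [ Arrow a idTy | a <- closedTypes ]
  where idTy = All "X" (Arrow (TVar "X") (TVar "X"))

-- The corecursively generated "semantics": the clause for All substitutes
-- every closed type into the body, so this is not a recursion on the
-- structure of the type.
means :: Tm -> Ty -> Formula
means t = \case
  All x body -> And [ means t (subst x s body) | s <- closedTypes ]
  ty         -> Atom t ty   -- the clause for Arrow is elided in this sketch
```

The point is that means is not a recursion on the structure of its type argument (the substitution in the All clause can grow the type), but each unfolding emits a constructor of Formula, so the infinite formula is generated productively.  The question remains, though, what kind of logic can assign meaning to the infinite formulas generated this way.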

That is where my trip(s) to the QA9 section of our library began.  Because I invested so much effort into this, I want to share with whoever is interested a summary of what I found.  First, a wonderful starting point, and indeed, just a fun and interesting article to read, is (the late) Jon Barwise’s article “Infinitary Logics” in a rather peculiar book titled “Modern Logic — A Survey”, edited by Evandro Agazzi (Reidel, 1981).  The book is strange because it seeks to survey so much material, with the contributions of so many famous logicians, in such a short space (475 pages).  Schütte has a 6-page article covering Proof Theory, for example.  In contrast, the Handbook of Proof Theory (whose coverage of the topics I was most interested in learning more about is almost unreadable for me, sadly) is 811 pages long.

Anyhow, Barwise explains that there are two main lines of work on logics with infinite formulas.  By far the most studied such logics allow inductively defined formulas including infinitary conjunctions and disjunctions.  Since the definition of the syntax of formulas is an inductive one, all paths through a formula are finite.  The formula can be infinite, because a conjunction may conjoin infinitely many distinct finite subformulas, for example.  Many works have studied logics with such infinitary conjunctions and disjunctions.  The other line of work, Barwise explains, is much smaller and less developed.  It concerns infinitely long formulas.  That is, formulas are allowed to contain infinite paths; or one can say, the subformula relation need not be well-founded.  Yet another way to put this, which I find very appealing, is to imagine that the usual syntax of formulas (of first-order logic, say) is interpreted as a co-inductive definition (instead of an inductive one).  Here, I want to quickly mention works I found helpful on this topic:

  • “Some remarks on infinitely long formulas” by L. Henkin, in Infinitistic Methods, Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 1959, published 1961 by Pergamon Press.  Section C (page 179 and on) is the most interesting, as it talks about giving a semantics for formulas with infinitely many quantifier alternations (in prenex form) via a game semantics: if existential and universal quantifiers strictly alternate, then two players take turns choosing elements.  The value of the matrix of the formula (which Henkin allows to mention infinitely many variables, by allowing infinitary predicate symbols) is then determined by the resulting play.  The formula’s truth value is determined if one player has a winning strategy for this game; a rough sketch of the game appears after this list.  Henkin also proposes that one can use infinitary skolemization to pull out the existential first-order variables into an infinitary second-order existential prefix.
  • “A New Approach to Infinitary Languages”, by J. Hintikka and V. Rantala.  This very readable and stimulating, though somewhat sketchy, paper has an interesting further discussion of game semantics for infinitely deep formulas.  They discuss, in particular, ways to try to mitigate the problem that even if one limits oneself to infinite formulas built only from, say, conjunction, negation, and closed finite atomic formulas in some fixed interpretation — which would all have a definite truth value, either True or False, if the formulas were finite — game semantics will still report some formulas as having undetermined truth value.  There are ties that game semantics, even the modified ones they propose, cannot break.  Say one player has to falsify a conjunction.  It is enough to falsify one of the branches.  But what if the other branch were a true finite formula?  Suppose one player is avoiding such counter-evidence infinitely often, while the other player is not.  Then Hintikka and Rantala consider a possible winning condition in which the player who is infinitely avoiding counter-evidence loses.  Very interesting stuff, but still some formulas are left undetermined.
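As a rough sketch of the kind of game Henkin considers (my own paraphrase, not his exact formulation): take a prenex formula with infinitely many strictly alternating quantifiers, \forall x_1 \exists y_1 \forall x_2 \exists y_2 \ldots \; \phi(x_1, y_1, x_2, y_2, \ldots).  The two players alternate moves: the universal player picks an element a_1 of the structure, the existential player answers with b_1, and so on forever; the resulting infinite play a_1, b_1, a_2, b_2, \ldots determines the value of the matrix \phi.  The formula counts as true if the existential player has a winning strategy, and false if the universal player does; the infinitary skolemization mentioned above packages such a strategy as a sequence of functions f_1, f_2, \ldots with b_i = f_i(a_1, \ldots, a_i).  Since nothing guarantees that one of the two players has a winning strategy, some formulas can be left without a truth value, which is related to the indeterminacy Hintikka and Rantala grapple with.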

Then there are a number of Finnish PhD theses, like “Definability and Infinitely Deep Languages” by H. Heikkilä, which I found to be completely impenetrable, and mostly concerned with model-theoretic questions.  So those were not much help.  But the three articles I mention above were very readable and thought-provoking.  In the end, I was not able to find (or devise) a logic of infinitely deep formulas where every formula has exactly one truth value, either True or False.  Either I was forced to say that some formulas were both True and False, or else to omit a truth value for some formula.
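Returning for a moment to Barwise’s distinction, the difference between the two lines of work is easy to picture in a lazy language (a toy datatype of my own, purely for illustration):

```haskell
-- Toy formulas: conjunction over a (possibly infinite) list of subformulas.
data F = Atom Bool | And [F]

-- First line of work: infinitely many conjuncts, but every path through the
-- formula is finite (a well-founded, infinitely branching tree).
wide :: F
wide = And [ Atom (even n) | n <- [0 :: Integer ..] ]

-- Second line of work: an infinitely deep formula, where the subformula
-- relation is not well-founded (this value contains itself as a subformula).
deep :: F
deep = And [Atom True, deep]
```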

One can give a semantics for System F types using a logic of infinitely deep formulas with only conjunction, finite closed determined atomic formulas, and the negations of such atomic formulas.  I found I could write mutually co-recursive definitions for when a term is in the meaning of a type and when it is not in the meaning of a type.  But again, I do not know of an adequate semantics for such a logic, so this interpretation was useless for reasoning about System F (e.g., for proving soundness of the typing rules with respect to the semantics).
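I will not try to reconstruct those definitions here, but the shape is easy to convey.  The following toy Haskell fragment (entirely my own, with a placeholder index standing in for a term and a type) shows only the shape: two mutually corecursive, productive definitions, each free to call the other:

```haskell
-- Infinitely deep formulas built from closed, determined atoms, their
-- negations, and (possibly infinite) conjunction, as described above.
data Formula
  = AtomIn String      -- an atomic formula
  | AtomNotIn String   -- the negation of such an atom
  | And [Formula]      -- possibly infinite conjunction

-- The Integer is just a placeholder standing in for a (term, type) pair.
inF, notInF :: Integer -> Formula
inF    n = And [ AtomIn    (show n), notInF (n + 1) ]  -- each unfolding emits
notInF n = And [ AtomNotIn (show n), inF    (n + 1) ]  -- an And: productive
```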

An alternative would be to go to a richer infinitary logic, where we truly have a notion of hypothetical reasoning.  In the logic just sketched, we have no ability to assume that something is true, and then operate under that assumption.  The logic is Platonistic, in the sense that it assumes a fixed mathematical universe (the interpretation of the atomic formulas), and tries to tell you whether or not a formula is true in that universe.  In contrast, implication, at least in constructive logic, is well-known to have a modal character: when proving A\to B, we pretend we are in some arbitrary world where A is true, and then try to argue that in such a world B must be true, too.  This could be helpful for the case of System F semantics, since we would be allowed simply to assume that an argument to a function is in the meaning of the function’s domain type.  We would not be forced to consider whether that was “really” true in a fixed mathematical universe; we could just assume it and then carry on with our reasoning.
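To recall how this modal character is usually made precise (this is just the standard Kripke semantics of intuitionistic implication, not anything specific to the infinitary setting): a world w forces A \to B exactly when every later world that forces A also forces B, i.e. w \Vdash A \to B iff for all w' \geq w, if w' \Vdash A then w' \Vdash B.  Proving A \to B thus means passing to an arbitrary world where A holds and showing B there, which is the “pretend we are in some world where A is true” reading mentioned above.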

From a game-theoretic perspective, this seems somewhat reminiscent of what I learned is called dialogical logic, where we think game-theoretically as in the semantics mentioned above, but the game includes commitments of the players: facts that they are committed to defending (at least to some degree).  This again has that modal or hypothetical character: I play out my moves as if something is true, pretending I am in a world where a certain fact holds.

To consider logics of infinitely deep formulas that include a modal form of implication (the only one worth the name?), we will be forced to carry out the proof theory — in particular, cut elimination — for languages with infinitely deep proofs (corresponding to the infinitely deep formulas).  This sounds very exotic, but there might be some hope for it, generally speaking, as researchers working on infinitary lambda-calculus (like Fer-Jan de Vries; see his publications for many interesting papers on this topic) are likely to have the right technical tools for considering infinitary reductions of infinitary lambda-terms, which correspond, under an infinitary version of the Curry-Howard isomorphism, to infinitely deep proofs.  Unfortunately, a cut-elimination argument for a language strong enough to give the semantics of System F would, of necessity, be quite complex.  If one wished to argue that the reduction process is normalizing, one would need ordinal notations stronger than any yet devised (I’m told), since no ordinal analysis of System F is currently known (again, if anyone knows differently, I’d love to hear about it).

In the end, what can we learn from all this?  Well, I certainly have a renewed respect both for Girard’s traditional approach to the semantics of System F and other impredicative type theories, and for step-indexed logical relations.  At the same time, from a Computer Science perspective, a coinductive view of formulas is rather appealing.  It would be nice to know what proof-theoretic results could be obtained for such a system, and what the connections to infinitary lambda calculus might be.

Hope everyone’s summer is going great!

Do numbers exist?  What a crazy question for a computer scientist to consider!  I admit that my curiosity on this topic is not due to any application in Computer Science that I can see.  Reading Feferman’s In the Light of Logic, though, has gotten me interested in some of the philosophical questions that some logicians — Feferman and Gödel are two notable examples — have devoted considerable attention to.  This question regarding the ontological status of the objects of mathematics is considered by many very serious philosophers, certainly in the 20th century and no doubt before (though I know little about the history of the question).  I think it is quite interesting to consider, since it sheds light on exactly what is happening when we do theoretical work.

The most straightforward answer to the question “Do numbers exist?” is simply, “yes (of course!)”.  Numbers and more complex mathematical objects are indispensable to scientific practice, which W. V. Quine famously argued justifies our accepting (at least in some sense) the existence of whatever objects are required by the mathematics needed for science.  But as Feferman argues in Chapter 14 of In the Light of Logic (and also in earlier chapters), the logical theories required in principle for the development of all the mathematics needed for modern science are quite weak in their ontological commitments.  Indeed, Feferman puts forward theory W, a weak system of recursive operations on the natural numbers and predicatively defined classes of such entities, and argues that it is sufficient for the needs of all of modern science.  And theory W is a conservative extension of Peano Arithmetic, and proof-theoretically reducible to it.  So enormously complex cardinal numbers, for example, need not be assumed to exist in order to carry out scientifically relevant mathematics.  Thus if we follow Quine’s idea that we should accept the mathematical objects that are needed for science, we need not accept such things.  In this way Feferman rejects a strongly Platonist conception of mathematics, on which the entire menagerie of higher set theory is accepted as actually existing.

But all this proof-theoretic fanciness is, in a sense, a distraction.  After all, on Feferman’s approach we are still left with Peano Arithmetic at least.  So just seeing that we do not need the more esoteric mathematical objects to do modern science does not let us avoid the question of whether or not (or how) numbers exist.  And indeed, I personally am not motivated by Quinean concerns, which are ultimately rooted in naturalism.  And naturalism, I suspect, ultimately represents a desire to banish the concept of God or other religious beliefs from intellectual discourse and indeed, from human life.  There are very weighty reasons for thinking that that is a bad idea, which I will not discuss here: that topic is, of course, even more controversial than the existence of mathematical objects!

So: do numbers exist?  Really I think we should consider two separate questions: do theoretical objects exist, and do abstract objects exist?  While I have my doubts, it seems quite reasonable to believe that, in some sense at least, rather small finite numbers could be said to exist.  I am actually open to persuasion on that point, but when I say the number of books on my desk is currently 4, maybe it is not just a figure of speech, but an actual statement about this thing called a number.  I don’t know.  But the question of whether or not theoretical objects exist — that is, objects that a particular theory is committed to — seems clearer.  Why on earth should we believe that, just because a particular mathematical theory speaks of certain objects, they definitely exist?  I can easily write down a set of first-order formulas giving certain characteristics of unicorns, or ghosts, or whatever else you like.  We would certainly not wish to say that because such a theory is committed to the existence of those things, they really exist.  They might exist, of course.  I could have a theory of cars, or of other generally uncontroversially existing things.  But just having a theory about them does not tell us anything about whether or not they exist.  We would need some independent reason for believing in their existence.

Of course, in mathematics or other formal disciplines (Computer Science, too), people spend a lot of time working with the particular entities the theory considers.  In Computer Science, one might work with Turing Machines, which are, of course, idealized computers with an infinite amount of memory.  No computer has or perhaps physically could have an infinite amount of memory, so in that sense, Turing Machines do not exist.  Working within a theory, it is natural to talk and act as though the theory were definitely true.

Perhaps psychologically, one must, in a sense, enter the world of the theory to be successful at deriving new theorems or reaching new insights into the theory.  Indeed, this idea of entering the world of the theory suggests a modal interpretation of theoretical truth.  A version of this interpretation is proposed by Hilary Putnam in “Mathematics without Foundations”, which can be found in his book Mathematics, Matter, and Method.  The idea, roughly, is that when we assert a theorem of arithmetic, for example, as true, what we are really asserting is the necessary truth of the statement that the axioms of arithmetic imply the theorem.  So any structure which satisfies the axioms of arithmetic will also satisfy the theorem.  On this kind of perspective, “modalism” (which, amusingly, Putnam says explicitly he is not trying to launch as an alternative foundational philosophy, though by now it certainly is one) is a kind of structuralism, which asserts that structures exist, even if mathematical objects are identified only by their roles in such structures.

To me, structuralism’s structures are no better than the original mathematical objects.  I would take a more proof-oriented version of modalism: to say that a theorem of a particular theory is true is just to assert that it is provable from the axioms of the theory, using whatever logical axioms and rules one accepts.  There is no ontological commitment at all there.  Yes, we could understand this deductivist interpretation modally: if a structure exists that satisfies the axioms of the theory, then that structure will satisfy the theorem.  But if we look at it that way, I think we must avoid phrasing it as above: “it is necessary that the axioms imply the theorem”.  For that phrasing requires us to assume also that the axioms could be satisfied.  If they absolutely could not be satisfied, then all formulas become theorems: if no structure models the axioms, then in every structure the axioms are false, and the implication “axioms imply theorem” holds vacuously.  I would not want to ground that modality in possibly existing structures — we’d just be back to where we started, because we would need a theory for those, and we’d be unsure of their status.  It would be more acceptable to ground the modality in actually existing structures.  So then the modal interpretation would amount to saying that for any structure that exists, if it satisfies the axioms, then it satisfies the theorem.  But then, if the universe is finite, for example, no existing structure would satisfy the axioms of arithmetic, and arithmetic would become inconsistent in the sense that every arithmetical statement, along with its negation, would count as true.  So we would not want to take this kind of more semantic modalism.  We just need a more modal notion of implication, which we can find in constructive logic: if we were to enter a world where the axioms are true (regardless of whether or not there is such a world), then the theorem would be true.  This has a bit of the air of the fictionalism I have read attributed to Hartry Field.  More simply, the theorem follows by the rules of logic from the axioms of the theory, and that is the end of the story from an ontological perspective.

Why one theory is of more use or interest than another is then a wholly separate question from whether or not use of the theory requires acceptance of the existence of the theory’s objects.  Asserting a theorem, no matter how vivid the intuition for a world satisfying the axioms of a theory, is ultimately no more than asserting that the theorem follows deductively from those axioms.

Ok, after getting my attempts to learn some of this philosophy out of my system, I’ll be back to more technical topics next post.