I have wanted to post for a month or so on some intense reading I did this spring on infinitary logics.  I was trying to revisit the semantics of impredicative type theories with type-level computation, like System F^{\omega}, in the hopes of finding a new approach that would satisfy the following conditions:

  • Only closed value types are interpreted.  In contrast, most semantics I have seen are forced to interpret open non-value type expressions, like a type-level application with free variables, or a type-level lambda abstraction.  Interpreting these type-level non-values complicates the semantics.
  • The interpretation of a universal type \forall X:K.T (where K is the kind of the variable X) is defined in terms of the meanings of its substitution instances.  In contrast, in proofs based on Girard’s famous proof for System F, one must pass to the semantic domain at this point, and define the interpretation as the intersection, over all reducibility candidates R (the possible interpretations of X at kind K), of the meaning of the body T, where X is interpreted as R.  (I write out both styles of clause just after this list.)
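
Written out side by side, in the informal notation of this post (double brackets are just my shorthand for “the meaning of”, and \rho is an assignment of candidates to type variables), the two styles of clause look like this:

    substitutive clause:   t \in [[\forall X:K.T]]   iff   for every closed type T' of kind K,   t \in [[ [T'/X]T ]]

    Girard-style clause:   [[\forall X:K.T]]_\rho  =  the intersection, over all reducibility candidates R for kind K, of [[T]]_{\rho[X:=R]}

The Girard-style clause is perfectly well-founded (it recurses on the body T), but it has to be stated in the semantic domain, with an environment \rho; the substitutive clause stays at the level of closed types, which is what the two conditions above ask for.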

My attempts failed — perhaps not surprising, as excessively smart people have thought about this for 40 years and not come up with anything essentially better than Girard’s approach.  There is one exception to this, which Harley pointed out to me as satisfying the above two properties: step-indexed logical relations.  While the use of these for normalization proofs is limited, I think, they do satisfy the above two criteria, which I find really fantastic.

Anyhow, while on the quixotic quest of trying to devise a new semantics for impredicative type theories, I took a look at some works on infinitary logics.  My idea was: imagine that we are defining the semantics of System F, and we want to say that a term t is in the meaning of a universal type \forall X.T iff for every closed type T', we have t in the meaning of [T'/X]T.  As is well known, this cannot be allowed as part of a well-founded definition by recursion on the structure of the type which we are trying to interpret, because the type expression could very easily have increased when we did the substitution of T' for X in T.  For example, T' could have been \forall X.T itself (and T could have contained a free occurrence of X)!
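
To see the size increase concretely (this is just the standard example, spelled out): take T = X \to X, so that the universal type being interpreted is \forall X. X \to X.  Choosing T' = \forall X. X \to X itself, the substitution instance is

    [T'/X](X \to X)  =  (\forall X. X \to X) \to (\forall X. X \to X),

which contains the original universal type as a subexpression (twice), so it is certainly not structurally smaller than the type we set out to interpret.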

But what if we viewed a semantics for System F types including this substitutive defining clause for \forall-types as a co-recursive definition of an infinite formula?  That is, rather than recursively describing the meaning via a formula of our meta-logic (which speaks about quantifying over closed types or over reducibility candidates, or whatever), we imagine that we are just generating, co-recursively, an infinite formula, and that infinite formula defines our semantics.  The definition will indeed be a productive co-recursion, so we really can consider it a definition of our semantics.  The question is, what kind of logic allows infinite formulas?
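
Since I keep appealing to productive co-recursion, here is a small Haskell sketch of the idea (Haskell only because laziness makes infinite structures easy to write down).  To be clear, none of this code comes from my actual attempts: the names are mine, the enumerations closedTypes and closedTerms are just illustrative stand-ins for genuine enumerations of all closed types and terms, and the function-type clause helps itself to an implication connective that the conjunction-only logic discussed below does not have.  The only point is that the substitutive clause for \forall-types really is productive: each call immediately produces a conjunction node, whose infinitely many conjuncts can then be explored lazily.

    -- A sketch only: coinductively-read formulas and System F types, with the
    -- substitutive clause for universal types written as a corecursive function.
    module InfiniteFormulaSketch where

    -- System F types; type variables are named so that substitution can be written.
    data Ty = TVar String | Arrow Ty Ty | Forall String Ty
      deriving Show

    -- Just enough term syntax for the sketch.
    data Tm = Var String | Lam String Ty Tm | App Tm Tm | TLam String Tm | TApp Tm Ty
      deriving Show

    -- "Infinitely deep" formulas: laziness lets a Conj node carry infinitely many
    -- conjuncts, and lets the formula tree have infinite paths.
    data Formula
      = Conj [Formula]          -- possibly infinitary conjunction
      | Impl Formula Formula    -- implication (not available in the conjunction-only logic below)
      | Atom String             -- a closed, determined atomic statement, left abstract here
      deriving Show

    -- Substitution of a *closed* type for a variable; closedness is what makes
    -- this naive version capture-safe.
    substTy :: String -> Ty -> Ty -> Ty
    substTy x s (TVar y)     = if x == y then s else TVar y
    substTy x s (Arrow a b)  = Arrow (substTy x s a) (substTy x s b)
    substTy x s (Forall y b) = if x == y then Forall y b else Forall y (substTy x s b)

    -- Stand-ins for enumerations of all closed types and all closed terms.
    -- Any infinite stream suffices to see that inMeaning below is productive.
    closedTypes :: [Ty]
    closedTypes = iterate (\a -> Forall "X" (Arrow a (TVar "X"))) polyIdTy
      where polyIdTy = Forall "X" (Arrow (TVar "X") (TVar "X"))

    closedTerms :: [Tm]
    closedTerms = iterate (TLam "Y") (TLam "X" (Lam "x" (TVar "X") (Var "x")))

    -- The corecursively generated "meaning": at a universal type, the infinite
    -- conjunction of the meanings at all (here: some) substitution instances.
    inMeaning :: Tm -> Ty -> Formula
    inMeaning t (Forall x body) =
      Conj [ inMeaning t (substTy x ty' body) | ty' <- closedTypes ]
    inMeaning t (Arrow a b) =
      Conj [ Impl (inMeaning s a) (inMeaning (App t s) b) | s <- closedTerms ]
    inMeaning _ (TVar x) =
      Atom ("free type variable " ++ x)   -- unreachable when only closed types are interpreted

    -- Truncate the infinite formula to a finite prefix, as evidence of productivity.
    truncF :: Int -> Formula -> Formula
    truncF 0 _          = Atom "..."
    truncF n (Conj fs)  = Conj (map (truncF (n - 1)) (take 2 fs))
    truncF n (Impl a b) = Impl (truncF (n - 1) a) (truncF (n - 1) b)
    truncF _ (Atom s)   = Atom s

    -- Print a depth-3 approximation of the infinite formula for the polymorphic
    -- identity at type \forall X. X -> X.
    main :: IO ()
    main = print (truncF 3 (inMeaning polyId (Forall "X" (Arrow (TVar "X") (TVar "X")))))
      where polyId = TLam "X" (Lam "x" (TVar "X") (Var "x"))

Running main just prints a finite approximation of the infinite formula; it proves nothing about System F, but it does show that the substitutive clause makes sense as a productive co-recursive definition.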

That is where my trip(s) to the QA9 section of our library began.  Because I invested so much effort into this, I want to share with whoever is interested a summary of what I found.  First, a wonderful starting point, and indeed, just a fun and interesting article to read, is (the late) Jon Barwise’s article “Infinitary Logics” in a rather peculiar book titled “Modern Logic — A Survey”, edited by Evandro Agazzi (Reidel, 1981).  The book is strange because it seeks to survey so much material, with contributions from so many famous logicians, in such a short space (475 pages).  Schütte has a six-page article covering proof theory, for example.  In contrast, the Handbook of Proof Theory (whose coverage of the topics I most wanted to learn about is, sadly, almost unreadable for me) is 811 pages long.

Anyhow, Barwise explains that there are two main lines of work on logics with infinite formulas.  By far the most studied such logics allow inductively defined formulas including infinitary conjunctions and disjunctions.  Since the definition of the syntax of formulas is an inductive one, all paths through a formula are finite.  The formula can be infinite, because a conjunction may conjoin infinitely many distinct finite subformulas, for example.  Many works have studied logics with such infinitary conjunctions and disjunctions.  The other line of work, Barwise explains, is much smaller and less developed.  It concerns infinitely long formulas.  That is, formulas are allowed to contain infinite paths; or one can say, the subformula relation need not be well-founded.  Yet another way to put this, which I find very appealing, is to imagine that the usual syntax of formulas (of first-order logic, say) is interpreted as a co-inductive definition (instead of an inductive one).  Here, I want to quickly mention works I found helpful on this topic:

  • “Some remarks on infinitely long formulas” by L. Henkin, in Infinitistic Methods, Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 1959, published 1961 by Pergamon Press.  Section C (page 179 and on) is the most interesting, as it talks about giving a semantics for formulas with infinitely many quantifier alternations (in prenex form) by a game semantics: if existential and universal quantifiers strictly alternate, then two players take turns choosing elements.  The value of the matrix of the formula (which Henkin allows to mention infinitely many variables by allowing infinitary predicate symbols) is then determined.  The truth value is determined if one player has a winning strategy for this game.  Henkin also proposes that one can use infinitary skolemization to pull out the existential first-order variables into an infinitary second-order existential prefix.
  • “A New Approach to Infinitary Languages”, by J. Hintikka and V. Rantala.  This very readable and stimulating, though somewhat sketchy, paper has an interesting further discussion of game semantics for infinitely deep formulas.  They discuss, in particular, ways to try to mitigate the following problem: even if one limits oneself to infinite formulas built only from, say, conjunction, negation, and closed finite atomic formulas in some fixed interpretation — formulas which would all have a definite truth value, either True or False, if they were finite — game semantics will still report some formulas as having undetermined truth value.  There are ties that game semantics, even the modified ones they propose, cannot break.  Say one player has to falsify a conjunction.  It is enough to falsify one of the branches.  But what if the other branch were a true finite formula?  Suppose one player is avoiding such counter-evidence infinitely often, while the other player is not.  Then Hintikka and Rantala consider a possible winning condition in which the player who is infinitely avoiding counter-evidence loses.  Very interesting stuff, but still some formulas are left undetermined.  (The sketch just after this list plays out exactly this dodging scenario.)
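
To make that “infinitely avoiding counter-evidence” situation concrete, here is another small Haskell sketch, in the same style as the one above (again, the code and all its names are mine, not from either paper): formulas built from conjunction, negation, and already-determined atoms, and a single play of the falsification game, in which the current falsifier chooses which conjunct to attack and negation swaps the roles.  With the players’ choices supplied as streams, a play either reaches an atom, which settles that play, or dodges the atoms forever, in which case the play settles nothing.

    -- A single play of the falsification game on infinitely deep formulas built
    -- from conjunction, negation, and determined atoms.  A sketch of the shape of
    -- the problem only, not of Hintikka and Rantala's actual semantics.
    module GamePlaySketch where

    -- Read coinductively: GConj may have infinitely many conjuncts, and the
    -- formula tree may have infinite paths.
    data GFormula = GConj [GFormula] | GNeg GFormula | GAtom Bool

    data Outcome = VerifierWins | FalsifierWins | Undetermined
      deriving Show

    -- At a conjunction the current falsifier picks one conjunct to attack (it is
    -- enough to falsify one); negation swaps the roles; an atom ends the play in
    -- favor of the current verifier exactly when it is true.  Each player's
    -- choices are given as a stream of indices (trusted to be in range), and
    -- `fuel` bounds how long we watch a play that never reaches an atom.
    play :: Int -> ([Int], [Int]) -> Bool -> GFormula -> Outcome
    play fuel (vChoices, fChoices) swapped f
      | fuel <= 0 = Undetermined
      | otherwise = case f of
          GAtom b -> if b /= swapped then VerifierWins else FalsifierWins
          GNeg g  -> play (fuel - 1) (vChoices, fChoices) (not swapped) g
          GConj gs
            | swapped   -> let (i : rest) = vChoices   -- original verifier is the attacker here
                           in play (fuel - 1) (rest, fChoices) swapped (gs !! i)
            | otherwise -> let (i : rest) = fChoices   -- original falsifier is the attacker
                           in play (fuel - 1) (vChoices, rest) swapped (gs !! i)

    -- An infinitely deep formula: a conjunction of a true atom with itself, forever.
    -- Every atom in it is true, yet a falsifier who always attacks the second
    -- conjunct avoids that counter-evidence for the whole (infinite) play.
    spiral :: GFormula
    spiral = GConj [GAtom True, spiral]

    main :: IO ()
    main = do
      print (play 1000 (repeat 0, repeat 0) False spiral)  -- falsifier attacks the atom: VerifierWins
      print (play 1000 (repeat 0, repeat 1) False spiral)  -- falsifier dodges forever: Undetermined

Of course a single bounded play is not the game semantics: truth is supposed to be a matter of winning strategies, and the modified winning condition Hintikka and Rantala consider would declare the forever-dodging falsifier the loser of such an infinite play.  The sketch is only meant to show the shape of the tie they are trying to break.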

Then there are a number of Finnish PhD theses, like “Definability and Infinitely Deep Languages” by H. Heikkilä, which I found to be completely impenetrable, and mostly concerned with model-theoretic questions.  So those were not much help.  But the three articles I mention above (Barwise’s survey and the two papers just listed) were very readable and thought-provoking.  In the end, I was not able to find (or devise) a logic of infinitely deep formulas where every formula has exactly one truth value, either True or False.  Either I was forced to say that some formulas were both True and False, or else to omit a truth value for some formulas.

One can give a semantics for System F types using a logic of infinitely deep formulas with only conjunction, finite closed determined atomic formulas, and the negations of such atomic formulas.  I found I could write mutually co-recursive definitions for when a term is in the meaning of a type and when it is not in the meaning of a type.  But again, I do not know of an adequate semantics for such a logic, so this interpretation was useless for reasoning about System F (e.g., for proving soundness of the typing rules with respect to the semantics).

An alternative would be to go to a richer infinitary logic, where we truly have a notion of hypothetical reasoning.  In the logic just sketched, we have no ability to assume that something is true, and then operate under that assumption.  The logic is Platonistic, in the sense that it assumes a fixed mathematical universe (the interpretation of the atomic formulas), and tries to tell you whether or not a formula is true in that universe.  In contrast, implication, at least in constructive logic, is well-known to have a modal character: when proving A\to B, we pretend we are in some arbitrary world where A is true, and then try to argue that in such a world B must be true, too.  This could be helpful for the case of System F semantics, since we would be allowed simply to assume that an argument to a function is in the meaning of the function’s domain type.  We would not be forced to consider whether that was “really” true in a fixed mathematical universe; we could just assume it and then carry on with our reasoning.
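
Concretely, and again just as a sketch of the kind of clause I have in mind rather than a worked-out proposal, implication would let the function-type clause be written in the same substitutive style as the \forall-clause:

    t \in [[A \to B]]   iff   for every closed term s,   (s \in [[A]])  implies  (t s \in [[B]])

The hypothetical reading of that implication is exactly what would let us just assume the antecedent s \in [[A]] and carry on, rather than having to settle whether it “really” holds in a fixed universe.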

From a game-theoretic perspective, this seems somewhat reminiscent of what I learned is called dialogical logic, where we think game-theoretically as in the semantics mentioned above, but the game includes commitments of the players: facts that they are committed to defending (at least to some degree).  This has again that modal or hypothetical character: I play out my move as if something were true, pretending I am in a world where a certain fact holds.

For logics of infinitely deep formulas that include a modal form of implication (the only one worth the name?), we will be forced to carry out the proof theory — in particular, cut elimination — for languages with infinitely deep proofs (corresponding to the infinitely deep formulas).  This sounds very exotic, but there might be some hope for it, generally speaking, as researchers working on infinitary lambda-calculus (like Fer-Jan de Vries; see his publications for many interesting papers on this topic) are likely to have the right technical tools for considering infinitary reductions of infinitary lambda-terms corresponding, under an infinitary version of the Curry-Howard isomorphism, to infinitely deep proofs.  Unfortunately, a cut-elimination argument for a language strong enough to give the semantics of System F would, of necessity, have to be quite complex.  If one wished to argue that the reduction process is normalizing, one would need ordinal notations stronger than any yet devised (I’m told), since no ordinal analysis of System F is currently known (again, if anyone knows differently, I’d love to hear about it).

In the end, what can we learn from all this?  Well, I certainly have a renewed respect both for the traditional, Girard-style approach to the semantics of System F and other impredicative type theories, and for step-indexed logical relations.  At the same time, from a Computer Science perspective, a coinductive view of formulas is rather appealing.  It would be nice to know what proof-theoretic results could be obtained for such a system, and what the connections to infinitary lambda-calculus might be.

Hope everyone’s summer is going great!

4 Comments

  1. I was wondering how related the two following paragraphs are, at the end of the 2007 paper “Finally tagless, partially evaluated” by Jacques Carette, Oleg Kiselyov and Chung-chieh Shan:

    > We encode terms in elimination form, as a coalgebraic
    > structure. Pfenning and Lee (1991) first described this basic idea and
    > applied it to metacircular interpretation. Our approach, however, can
    > be implemented in mainstream ML and supports type inference, typed CPS
    > transformation and partial evaluation. In contrast, Pfenning and Lee
    > conclude that partial evaluation and program transformations “do not
    > seem to be expressible” even using their extension to Fω, perhaps
    > because their avoidance of general recursive types compels them to
    > include the polymorphic lift that we avoid in §4.1.

    > We could not find work that establishes that the typed λ-calculus has
    > a final coalgebra structure. Honsell and Lenisa (1995, 1999)
    > investigate the untyped λ-calculus along this line. Honsell and
    > Lenisa’s bibliography (1999) refers to the foundational work in this
    > important area. Particularly intriguing is the link to the coinductive
    > aspects of Böhm trees, as pointed out by Berarducci (1996) and Jacobs
    > (2007; Example 4.3.4).

  2. Hi Aaron,

    I’ve mentioned this to Stephanie and Chris C. before, but haven’t told you for some reason: step-indexing was actually originally invented to prove normalization results! People (starting with Appel and McAllester) have just mostly applied it to proving safety properties. See Nakano’s 2000 LICS paper, “A Modality for Recursion”, where he uses a step-indexed model to prove normalization for a calculus with *unrestricted* recursive types and guarded recursors. It’s not the best-written paper in the world, but IMO it’s really prescient and idea-dense.

    • Hi, Neel. Thanks a lot for this reference. I did not realize the history of this idea, and will take a look at the Nakano paper.
      Cheers,
      Aaron

