
Category Archives: Type Theory

I have been interested, for many years, in what is generally called computational classical type theory (CCTT).  The idea is to take some proof system for classical logic and turn it into a programming language via the Curry-Howard isomorphism.  Cut elimination or normalization for the proofs turns into reduction for the corresponding program terms.  There is nothing at all objectionable about this, and indeed it is very interesting to see term notations for proofs of classical theorems like ((A \to B) \to A) \to A (Peirce’s Law) or A \vee \neg A.  It was the brilliant insight of Griffin (my, that POPL ’90 was a good one: Lamping’s paper on optimal beta-reduction appeared that year, too) that proofs of such classical theorems essentially involve control operators like call-cc.
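
As a small aside, Griffin’s observation can be transcribed into Haskell’s continuation monad (a sketch of mine, not Griffin’s original setting): callCC itself has precisely the type of Peirce’s Law.

  import Control.Monad.Cont (MonadCont, callCC)

  -- Peirce's Law ((A -> B) -> A) -> A, with the classical "escape"
  -- provided by call/cc in any continuation-capable monad.
  peirce :: MonadCont m => ((a -> m b) -> m a) -> m a
  peirce = callCC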

And I accept what I understand to have been David Hilbert’s position that non-constructive reasoning is an essential tool for mathematics, and that rigid constructivism (i.e., insisting on constructive proofs for every theorem) is thus disastrous for the advancement of mathematics.  In Computer Science at least, I have sometimes thought in the past that the ability to do things like case-split on whether a term in some unrestricted programming language terminates or not could be very helpful, if not critical, for verification of general programs.

Since Curry-Howard is great and applies very nicely to classical logic, and since classical logic is useful if not essential for certain kinds of verification, CCTT should be fantastic in my book, right?

No — and of course I will tell you why.

The problem is that the Curry-Howard isomorphism a reasonable programmer expects to hold is broken in CCTT.  The problem can be seen clearly with disjunctions.  Under Curry-Howard, a disjunction A \vee B corresponds to a sum type A + B.  Given an element of such a sum, we can case split on whether that element is the injection of an element in A or else in B.  And in doing so, we gain some actionable information.  But this does not happen when we split on a use of the law of excluded middle A \vee \neg A (LEM).  We gain no information at all in the two branches of the split.  And indeed, the usual way CCTT proves LEM is to say (so to speak): it is \neg A which is true.  Then if the branch of the split for the right disjunct ever tries to make use of that information — wait, how can it do that?  The only thing you can do with the negative fact that \neg A is to apply it to a proof of A to obtain a contradiction.  And if you ever do that, the machinery of CCTT will take your proof of A, backtrack to the point where it originally told you \neg A is true, and then invoke the branch of the split for the left disjunct, with that proof of A.  Super clever!  (This is dramatized by a famous story about the devil told by Phil Wadler; see my old post about this.)  But also, note that it means you can never take action based on the result of such a case split.  Everything must be indefinitely revisable, due to the possible need to backtrack.  So you cannot split on an instance of LEM and print 1 in the first branch and 2 in the second.  Or rather, you can, but the meaning is not at all what you expect.  “This Turing machine either halts or it doesn’t.”  We case split on this, and could easily think that we are gaining some information by doing so.  But we are not.  Your code will just always print 2.  This is not what you reasonably expect under the Curry-Howard interpretation of disjunctions.  And thus using CCTT for programming is not a good idea.
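
To make the “print 1 or print 2” point concrete, here is a small Haskell sketch of my own (using the continuation monad from mtl, not any actual CCTT implementation; lem, demo, and the messages are made up for the illustration).  The classical proof of LEM answers “\neg A” first; the moment that refutation is fed real evidence for A, control jumps back to the case split, which is rerun in the left branch:

  import Control.Monad.Cont (MonadCont, callCC, evalContT)
  import Control.Monad.IO.Class (liftIO)

  -- A classical "proof" of A ∨ ¬A: initially claim ¬A (Right); if that
  -- refutation is ever applied to evidence for A, backtrack to the case
  -- split and rerun it with that evidence (Left).
  lem :: MonadCont m => m (Either a (a -> m b))
  lem = callCC $ \k -> return (Right (\a -> k (Left a)))

  demo :: IO ()
  demo = evalContT $ do
    ea <- lem
    case ea of
      Left n ->
        liftIO $ putStrLn ("1: got evidence for A, namely " ++ show (n :: Int))
      Right notA -> do
        liftIO $ putStrLn "2: proceeding under the assumption that A is false"
        _ <- notA 42  -- using the refutation backtracks into the Left branch
        return ()

Running demo prints the “2” message and then, after backtracking, the “1” message with the evidence 42: the split is silently revised after the fact, which is exactly the indefinite revisability described above.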

It would be fine to use classical reasoning for reasoning about programs.  For the justification of some property of a program might require (or be easier with) some non-constructive case split.  But the program itself should not do such things, or else the abstraction Curry-Howard seems to offer us is violated.

Is this an argument against sum types in the presence of control operators?  (Genuine question)

 

 


As you may know (for example, from my interview a little while back on the fantastic Type Theory Podcast), we are working here at U. Iowa on a new dependent type theory implementation called Cedille.  It is based on a new extrinsic (aka Curry-style) type theory, and aims to be a very compact theory in which standard parts of dependent type theory like inductive (and hopefully coinductive) datatypes can be defined from the more primitive constructs of the theory.

At our first Cedille meeting of the fall semester — also in attendance were Larry Diehl, Richard Blair, Chris Jenkins, Tony Cantor, Colin McDonald, and Nadav Kohen — I gave a rather long overview of the current state of Cedille, which I recorded as a screencast, along with the notes I was drawing (.ora and .jpg formats) while talking.  While we are (still! [sigh]) not quite ready to make a public release, I think we will get there this fall, as I mention in the screencast.  A few quick highlights: we have a derivation of induction that is parametrized by a functor (actually, we have two: one for Church encoding and one for Mendler encoding); we can do some surprising and cool things with casts, which are derivable in the theory, including defining monotone recursive types within Cedille and getting proof reuse between types that can be cast to each other, like lists and vectors; and once we complete a (very basic) module system and solve an unpleasant performance issue with our parser, we should be ready to make a release.

I have been blessed the past few days to have some amazing conversations with people working with me here at Iowa.  For one of these, I talked yesterday with Richard Blair and Ananda Guneratne about semantics of recursive types, as this is something Richard is currently working on.  The conversation was so inspiring I wrote up a short note summarizing it.  You can find this note here.

You find yourself, like Jacob, gazing up the rungs (for me, ropes)

of a ladder that seems to have constructed itself

to avoid leaving you flat on the ground.

But what, you ask, in the arching presence of the angels,

do you do at the top?

 

Type theories, to be consistent, cannot be uniform.  Somewhere, your power must ebb, abstractions diminish, until finally you are left with the beautiful silence of a single symbol (maybe \Box).  Type : Type is also beautiful, but long known to be inconsistent: collapsing everything to one level ruins your type theory as a logic.  There are more subtle examples of the need to give up your power as you ascend (this starts to sound like John of the Cross).  You cannot repeat impredicative quantification one level up, for example: Girard proved this is paradoxical in his dissertation.  So if you are an impredicativist, as I am, your kind level must be strictly weaker than your type level.

But what if you are a predicativist, perhaps a devotee (as I also am) of Agda?  Agda, and also Coq via its ECC (Extended Calculus of Constructions, due to Zhaohui Luo) structure, seem to avoid this quietist imperative I am mentioning.  Above the current level, there is another, with essentially the same power.  And beyond that, another, ad infinitum.  The ladder goes up forever!  We surpass the heavens!

But no, experienced Agda and Coq users know that is sadly not what happens.  Much like the tower of Babel, the predicative universe hierarchy does not satisfy the desires of abstraction.  For some constructions make sense at all levels, and hence we introduce the seductive ideas of typical ambiguity (considered by Harper and Pollack for a system \textit{CC}^\omega of Coquand’s which is a predecessor of ECC) and universe polymorphism (also considered by Harper and Pollack).  Universe polymorphism as I understood it from Agda seems a bit different from what Harper and Pollack proposed.  And it seems that typical ambiguity is found more in Coq than in Agda.  (I could be wrong on these points!)  In Agda, there seems to be the curious idea that we can quantify over all levels of the hierarchy explicitly.  And we are then left with the conundrum of what level such a quantification itself has.  Of course, for soundness reasons, it cannot have one of the levels covered by the level quantification.  So we have to make up a new level, namely \omega.  And we see that level quantification was a charade to begin with: we are definitely not quantifying over all the levels, just some prefix of them.  And it seems Agda does not make \omega available to you to use in your code.  So this just seems very hacky.
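
To make the conundrum concrete (this is my gloss, roughly in Agda notation): the level-polymorphic identity function has type (\ell : \textit{Level}) \to (A : \textit{Set}\ \ell) \to A \to A, and that type cannot itself live in \textit{Set}\ \ell_0 for any particular level \ell_0, since it quantifies over all levels including \ell_0; so it has to be assigned the made-up sort \textit{Set}\ \omega.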

More principled is something that I think is closer to the kind of universe polymorphism that Harper and Pollack were proposing, which in their work is located at global definitions (not arbitrary terms).  And this is the answer to my riddle at the beginning: when you reach the top of your ladder, you have to “go meta”.  It is a beautiful insight that we can internalize the induction scheme one finds in Peano Arithmetic as a single second-order axiom.  We have passed from the meta-level, with a family of formulas, to a single object-language formula.  I propose that when we reach the top layer of our layered type theory, we do the reverse.  We add schematic definitions.
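
To recall that example: first-order Peano Arithmetic has one induction axiom per formula \varphi, namely (\varphi(0) \wedge \forall n.\ (\varphi(n) \to \varphi(n+1))) \to \forall n.\ \varphi(n), while the second-order version internalizes the whole family as the single axiom \forall P.\ ((P(0) \wedge \forall n.\ (P(n) \to P(n+1))) \to \forall n.\ P(n)).  Schematic definitions go the other way: one definition at the meta-level standing for a family of object-level instances.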

Here at Iowa, we are working on this new Cedille type theory and system.  Some of my recent papers are about this.  There, we have a predicative hierarchy truncated at level 2.  Ha!  There are just terms, types, and kinds.  It is actually more convenient to reach the end of the ladder faster!  And going schematic means that while we do not support kind-level functions, we will allow you to define parametrized kinds.  So you can give a top-level definition of some kind parametrized by terms or types.  The definition itself does not need some funny type with \omega level or some additional superkind structure.  Of course, one can only use the definition with all parameters instantiated.
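
For instance (a hypothetical illustration in ad hoc notation, not actual Cedille syntax): a top-level schematic definition \textit{Rel}\ (A : \star) := A \to A \to \star introduces a kind parametrized by a type A; one may then declare a type constructor \textit{Eq} : \textit{Rel}\ \textit{Nat}, but \textit{Rel} itself is never assigned a classifier like \star \to \Box, and it can never appear partially applied.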

Simple, but it extends expressivity just a bit beyond what can be internalized in the theory.

This post is about double negation in constructive type theory.  Suppose you know that \neg(\neg A) is true; i.e., (A \to \perp)\to \perp (writing \perp for false).  In classical logic, you then know that A is true.  But of course, you are allowed no such conclusion in intuitionistic logic.  So far, so familiar.  But let us look at this now from the perspective of realizability.  Realizability semantics gives the meaning of a type as a set of some kind of realizers.  Let us consider untyped lambda calculus terms (or combinators) as realizers for purposes of this discussion.  Then the crucial clauses of the semantics for our purposes are

  • t \in [\negthinspace[ A \to B ]\negthinspace ]\ \Leftrightarrow\ \forall t' \in[\negthinspace[A]\negthinspace].\ t\ t'\in[\negthinspace[B]\negthinspace], and
  • \neg (t \in [\negthinspace[\perp]\negthinspace]), for every t (that is, [\negthinspace[\perp]\negthinspace] is empty).

Let us work through, then, when t\in[\negthinspace[(A\to\perp)\to\perp]\negthinspace] according to this definition:

\begin{array}{ll} t\in[\negthinspace[(A\to\perp)\to\perp]\negthinspace] & \Leftrightarrow \\ \forall t'\in[\negthinspace[A\to\perp]\negthinspace].\ t\ t'\in[\negthinspace[\perp]\negthinspace] & \Leftrightarrow \\ \forall t'.\ \neg(t'\in[\negthinspace[A\to\perp]\negthinspace]) & \Leftrightarrow \\ \forall t'.\ \neg(\forall t''\in[\negthinspace[A]\negthinspace].\ t'\ t''\in[\negthinspace[\perp]\negthinspace]) & \Leftrightarrow \\ \forall t'.\ \neg(\forall t''.\ \neg(t''\in[\negthinspace[A]\negthinspace])) & \Leftrightarrow \\ \forall t'.\ \exists t''.\ t''\in[\negthinspace[A]\negthinspace] & \Leftrightarrow \\ \exists t''.\ t''\in[\negthinspace[A]\negthinspace] & \ \end{array}

So this is saying that if [\negthinspace[A]\negthinspace] is nonempty, then [\negthinspace[(A\to\perp)\to\perp]\negthinspace] is universal (contains all terms); and otherwise (if [\negthinspace[A]\negthinspace] is empty), [\negthinspace[(A\to\perp)\to\perp]\negthinspace] is empty.

So if you have a function F, say, realizing (\neg\neg A)\to B for some B, what this means is that F can make use of the fact that A is true, but not how it is true.  If F were simply given a proof of A, then it could compute with this proof.  But here, with \neg\neg A, the proof of A is hidden behind an existential quantifier (as we deduced above), and so it cannot be used.  Only someone reasoning at the metalevel about F can make use of the fact that A is true, when checking that the body of F realizes B.
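
This “you may use that A is true, but not how” reading has a rough Haskell analogue (a sketch of mine; Not, DNeg, and the function names are made up for the illustration):

  import Data.Void (Void)

  type Not a  = a -> Void
  type DNeg a = Not (Not a)   -- (A -> ⊥) -> ⊥

  -- Introducing a double negation is perfectly constructive:
  dnegIntro :: a -> DNeg a
  dnegIntro a k = k a

  -- And we can keep computing "under" the double negation:
  dnegMap :: (a -> b) -> DNeg a -> DNeg b
  dnegMap f dna kb = dna (kb . f)

  -- But there is no total, pure term of type DNeg a -> a: such a function
  -- would have to produce an inhabitant of a from something that only ever
  -- consumes refutations, matching the reading that ¬¬A tells us A is
  -- inhabited without handing over an inhabitant.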

So semantically, \neg\neg A is news you can’t use: we know A is true, but we have no access to how it is true.  For example, if A is a disjunction, you cannot case split, within the type theory, on the disjuncts.  Only at the metalevel are you entitled to consider the behavior of a term in light of the fact that one or the other of the disjuncts must hold.