
You could be forgiven for not knowing it, but we — at U. Iowa, U. Penn, and Portland State — have been hard at work now for almost two years on the language design for Trellys.  Two years seems like a long time — and I very much hope that we’re nearing completion of this language — but when you think about how long it has taken the Type Theory community to make each of its various advances, two years isn’t really much.  One can look at the challenge we are trying to meet in two ways:

  1. extending the Curry-Howard isomorphism to support general recursion in a convenient way, while retaining a logically sound fragment of the language (a rough Lean analogy appears just after this list).
  2. extending a logical theory of evaluation for a richly typed programming language so that programs can contain proofs from the logic.
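
To give a concrete feel for (1): Lean 4, for instance, keeps its logic sound while admitting general recursion, by requiring general-recursive definitions to be marked `partial`, which makes them opaque to proofs. The Trellys designs aim to do better than that opacity, but the split gives a feel for the goal. A minimal sketch of mine (not Trellys syntax):

-- Total function: passes Lean's termination checker, so it lives in
-- the sound logical world and proofs may unfold its equations.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

-- General recursion: no termination argument is known, so Lean only
-- accepts this as `partial`, and the logic treats it as opaque.
partial def collatzSteps (n : Nat) : Nat :=
  if n ≤ 1 then 0
  else if n % 2 == 0 then 1 + collatzSteps (n / 2)
  else 1 + collatzSteps (3 * n + 1)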

We are concurrently working on two language designs, one of which is primarily aimed at (1), and the other at (2).  The U. Penn. team is primarily trying to extend the Curry-Howard isomorphism to accommodate general recursion, using a design internally called NoLaw (as in “Congress shall make no law … abridging the freedom of speech”); while we at Iowa are focused on combining a separate logical theory of program evaluation with dependency of programs on proofs, in a language we rather infelicitously call Sep-pp (or Sep3, for separation of proofs and programs).

Both the language designs we are exploring right now distinguish logical and programmatic fragments of the language.  The goal is to be able to prove normalization for terms in the logical fragment, while allowing the possibility of diverging computations in the programmatic fragment.  In Sep3, the big challenge (recently picked up also for the NoLaw design) is to allow programs to contain proofs.  This might sound innocuous, but think about what this means from the point of view of normalization.  Suppose you have a proof about a program that contains a proof.  For a small artificial example: maybe you are doing a case split in a proof on the form of the value of some programmatic term t of the following type:

data pf_holder : Type where
  mk_pf_holder : (F:Formula) -> (p:F) -> pf_holder

This datatype is maybe not that useful in practice, but we certainly want to allow datatypes like it, where term constructors can accept proofs of unknown formulas F as arguments.
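
For readers who think in Coq or Lean, the closest analogue I can offer is the following Lean 4 sketch (the names PfHolder, F, and p are mine). Note that Lean only accepts this because its Prop is impredicative; in Sep3's predicative setting, datatypes like this are exactly what force the level bookkeeping discussed below.

-- A formula packaged together with a proof of it
-- (a rough Lean analogue of pf_holder).
structure PfHolder where
  F : Prop
  p : F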

So imagine you are doing a case analysis, inside a bigger proof P, on the value of a programmatic term t of type pf_holder. Now, since some programmatic terms fail to have a value, in Sep3 we insist that to do a case analysis in a proof, you must supply a proof that the scrutinee (t in this case) is terminating. In Sep3, we have a primitive atomic formula t ! to express that t terminates. So suppose we have a proof that t terminates, and we do this case analysis. Then what is really happening is this: from the land of safety (the logical fragment, where all terms are safe in the sense that they are guaranteed to normalize; that is the goal, anyway), we reach into the land of danger and potential divergence (the programmatic fragment, where this term t lives) and pull out of it a safe term again, in this case the subproof p for which we have t = mk_pf_holder F p. And now we can potentially use this safe term p inside our bigger proof P.
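
In the Lean analogy (reusing the PfHolder sketch above; Lean has no counterpart of the termination witness t !, since its logical fragment is total by construction), the step of pulling the proof back out is just projection:

-- Reaching into a pf_holder-like value t and recovering the
-- packaged subproof, usable as a proof of the packaged formula t.F.
example (t : PfHolder) : t.F := t.p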

It is this back and forth between programs and proofs, and particularly this scenario where a proof can reach into a program, pull out another proof, and use it, that makes this language design quite challenging. Guru also separates proofs and programs in a way similar to Sep3, but does not provide general support for this feature of taking proofs out of programs and using them in proofs. So I think this is something really new.

I’ve taken a bit longer than I had hoped to set the stage for what I want to write about today, which is how to accommodate a predicative universe hierarchy in this setup. I’ve generally wanted to avoid universe hierarchies, because, as Conor McBride wrote about a little while back, they do seem to involve (at least with our current knowledge) some irritating, rather inelegant mechanisms. I definitely don’t have anything to contribute at the moment toward improving matters there. But consider this situation. In Sep3, we are proposing to have the logical fragment be a predicative higher-order logic with a form of structural induction. Why take a predicative logic? Because the meta-theory will be tremendously easier, and because, since our intended application is practical program verification, the profound proof-theoretic strength that comes from an impredicative logic is quite unnecessary. We want to use this language to prove algebraic properties of list combinators, not consistency of strong logical theories.

So if we’re adopting a predicative higher-order logic, that means, briefly, that we stratify formulas (in our logical fragment) into levels, where a quantification over formulas of level i is forced to be itself at level at least i+1.  So a formula like (F:Formula_0) -> F, for a small example, is itself classified by Formula_1, because we need it to be up a level from the level of the type of its quantified variable. This does not require any great innovation, as this sort of idea is relatively well understood (at least enough to get something reasonably workable, as Conor was posting about).
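
Lean’s universes exhibit the same bump, with its impredicative Prop alongside for contrast (a small sketch; #check just reports the classifier):

-- Quantifying over all of level 0 lands one level up (predicative),
-- just as (F:Formula_0) -> F is classified by Formula_1:
#check ∀ α : Type 0, α     -- : Type 1
-- Lean's Prop, by contrast, is impredicative:
#check ∀ F : Prop, F       -- : Prop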

What becomes tricky in our setting is how this universe hierarchy in the logical fragment affects the typing of the programmatic fragment. Because we have the possibility in Sep3 of pulling proofs out of data inside bigger proofs, when we go to do our normalization proof for the logical fragment (to prove it is consistent), we need a deep interpretation of programmatic datatypes. In the middle of defining our interpretation of types using reducibility, we will need to state that the interpretation of the type pf_holder is, roughly, the set of terms of the form mk_pf_holder F p, where p is in the interpretation of F. At this point, if we are not careful, we will lose well-foundedness of the definition of the interpretation of types: there is no syntactic relationship whatsoever between F and pf_holder, so we cannot see that anything is decreasing when we appeal to the interpretation of F while defining the interpretation of pf_holder.
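
In rough notation (my paraphrase in the usual reducibility style, not an official Sep3 definition), the clause in question looks like this:

[[ pf_holder ]] = { mk_pf_holder F p | F a formula, p in [[ F ]] }

The appeal to [[ F ]] on the right-hand side is the unbounded step: nothing ties the size or level of F to that of pf_holder.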

There are a number of technical issues here, and we certainly haven’t worked this all out yet. But part of our current idea is that we are going to have to stratify Type, on the programmatic side, as well. So we will write:

data pf_holder : Type_1 where
  mk_pf_holder : (F:Formula_0) -> (p:F) -> pf_holder

We will put pf_holder up one level, so that when we need to appeal to the interpretation of F, we can go from level 1 to level 0. This means that the interpretation of formulas and types will be indexed also by the level of the formula or type in question.
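
Lean’s predicative Type hierarchy enforces the same bump. A sketch (PfHolder' is my name; Lean would reject the very same declaration at Type 0):

-- Quantifying over Type 0 in the constructor forces the inductive
-- itself up to Type 1, mirroring pf_holder : Type_1 above.
inductive PfHolder' : Type 1 where
  | mk : (F : Type 0) → (p : F) → PfHolder'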

This leads finally to where I wanted to get to for this post. We have a predicative hierarchy of formulas Formula_0, Formula_1, ... on the logical side, and also on the programmatic side: Type_0 : Type_1 : Type_2 .... (Note that we are not making Formula_0 : Formula_1, since intuitively, Formula_i does not assert anything, and hence should not itself be considered a formula; we have other technical mechanisms for enforcing the hierarchy besides just making Formula_i : Formula_(i+1).)

But the really interesting thing is that for Type_0, we can allow very dangerous principles. In particular, we are going to make Type_0 : Type_0. It’s been one of our goals for Trellys all along to have Type : Type for the programmatic fragment, since we do not need that fragment to be sound anyway (and it is known that adding Type : Type can lead to diverging terms). It is ok to put that principle at the lowest level of our programmatic hierarchy, because we do not need a deep interpretation at level 0 for programmatic types. This is because, by the way we have constructed the language, programmatic datatypes at level 0 cannot contain proofs of quantified or unknown formulas. They could contain proofs of known unquantified formulas, but that poses no problem for well-foundedness of our interpretation of types: we just make datatypes bigger than unquantified formulas in our well-founded ordering, and there is no circularity. The interpretation of unquantified formulas cannot possibly depend back on the interpretation of datatypes.
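
For contrast, Lean, whose whole language is a single sound logic, cannot afford this collapse at the bottom; its Type hierarchy is strictly predicative all the way down:

#check (Type 0 : Type 1)   -- the hierarchy Sep3 also has above level 0
#check (Type 1 : Type 2)
-- `(Type 0 : Type 0)` is rejected by Lean, on pain of Girard's paradox;
-- Sep3 can allow it because its programmatic side need not be sound.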

So that’s the picture: predicative hierarchies for Formula and Type, where we have the highly impredicative principle Type_0 : Type_0 at the lowest level. This is pleasantly and unexpectedly similar to the situation in Coq, say, where we have impredicative Prop at the lowest level, and then a predicative hierarchy Type_0 : Type_1 : ... above Prop.

As a final note, just to attempt to ward off misconceptions: this similarity does not at all mean that we are inheriting Coq’s artificial distinction between Set and Prop. In Sep3 our distinction between Formula and Type is based on the profound difference between a guaranteed terminating proof — indeed, possibly a nonconstructive one, since we are including the ability to case-split on whether or not a programmatic term terminates — and a general-recursive program.
