
For many type theories, types enforce termination: typable terms terminate, while terms which are not typable may diverge.  Consider, for example, the (pure) simply typed lambda calculus.  The only role of its type system is to ensure that every function terminates.  For programming languages, type systems do not usually enforce termination.  Rather, they are there to ensure that evaluation of a program cannot violate some other property, like memory safety (only currently live memory can be accessed, thus ruling out accesses via dangling pointers that could lead to crashes or security violations).  But for many type theories, like the simply typed lambda calculus, the role of types is precisely to enforce termination.

So in this sense, termination is based on types.  But “type-based termination” has its own special meaning, where the types actually track information about the sizes of the data manipulated by the program.  This approach can make it possible to verify termination of programs that are not accepted by other type systems.  A very nicely written introduction to these matters can be found in Andreas Abel’s paper “Termination Checking with Types”.

But isn’t it a bit strange that there should be a separate line of work developing systems where the types enforce termination?  Didn’t we just observe that, at least for the simply typed lambda calculus, the types do enforce termination?  The answer to this conundrum is that while types do enforce termination for pure lambda calculi without inductive types, for systems with inductive types, like Goedel’s System T (which, as Sol Feferman mentions in “In the Light of Logic”, Goedel called \Sigma in earlier work), the situation is a little less clear.  The types certainly do play a critical role in enforcing termination (since the simply typed lambda calculus is a sub-language of System T), but the form of the recursors is also very important.  The reduction rules for R are:

\begin{array}{lll} (R\ t_1\ t_2\ 0) & \to & t_1 \\ (R\ t_1\ t_2\ (S n)) & \to & (t_2\ n\ (R\ t_1\ t_2\ n))\end{array}
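To make the behavior of R concrete, here is a minimal sketch in Haskell of the recursor over unary natural numbers (the names Nat, recNat, and pred' are mine, purely for illustration):

data Nat = Z | S Nat

-- recNat t1 t2 n mirrors the reduction rules for R:
--   (R t1 t2 0)     -> t1
--   (R t1 t2 (S n)) -> (t2 n (R t1 t2 n))
recNat :: a -> (Nat -> a -> a) -> Nat -> a
recNat t1 _  Z     = t1
recNat t1 t2 (S n) = t2 n (recNat t1 t2 n)

-- With primitive recursion, predecessor is immediate,
-- since t2 receives the predecessor n directly:
pred' :: Nat -> Nat
pred' = recNat Z (\n _ -> n)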

By the way, as Altenkirch and Coquand note in a paper called “A Finitary Subsystem of the Polymorphic lambda-calculus”, we can adopt iteration, with the alternative reduction rules below, instead of primitive recursion, since we can implement predecessor as soon as we have iteration and pairs of natural numbers, and the latter can be implemented using the monomorphic Church encoding \lambda x. \lambda y. \lambda f. (f\ x\ y):

\begin{array}{lll} (R\ t_1\ t_2\ 0) & \to & t_1 \\ (R\ t_1\ t_2\ (S n)) & \to & (t_2\ (R\ t_1\ t_2\ n))\end{array}

Notice that n is no longer an argument to t_2.
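Continuing the illustrative Haskell sketch (reusing Nat from above; iterNat, pair, and predIter are again my own names), here is iteration, the monomorphic Church encoding of pairs of naturals, and the recovery of predecessor that Altenkirch and Coquand point out:

-- Iteration: t2 no longer receives n.
iterNat :: a -> (a -> a) -> Nat -> a
iterNat t1 _  Z     = t1
iterNat t1 t2 (S n) = t2 (iterNat t1 t2 n)

-- Monomorphic Church pairs of naturals: \x. \y. \f. (f x y)
type NatPair = (Nat -> Nat -> Nat) -> Nat
pair :: Nat -> Nat -> NatPair
pair x y = \f -> f x y
fstN, sndN :: NatPair -> Nat
fstN p = p (\x _ -> x)
sndN p = p (\_ y -> y)

-- Predecessor by iterating (m, _) -> (m+1, m) from (0, 0)
-- and projecting the second component:
predIter :: Nat -> Nat
predIter n = sndN (iterNat (pair Z Z) step n)
  where step p = pair (S (fstN p)) (fstN p)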

Returning to types and termination: here, the form of the recursors does play a significant role in enforcing termination.  Indeed, without lambda-abstractions, we could easily have systems where termination is established by a (possibly fairly sophisticated) structural decrease alone, as for certain term rewriting systems.  So it does seem that in such systems, it is not just the types that enforce termination; the way the recursors work also plays a crucial role.
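For instance, the following first-order rewrite system for addition (a standard example, not taken from any of the papers cited here) terminates by structural decrease in the first argument alone, with no types in sight:

\begin{array}{lll} \mathit{add}(0, y) & \to & y \\ \mathit{add}(S(n), y) & \to & S(\mathit{add}(n, y)) \end{array}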

Unfortunately, this complicates the proof methods for showing termination.  For System T, for example, the most elegant proofs use the method of reducibility.  We can’t go into that method in detail now, except to note that it is a form of realizability (due to S. C. Kleene), as explained also by Jean Gallier in “Proving Properties of Typed lambda-Terms Using Realizability, Covers, and Sheaves”; programming languages researchers have adopted the idea under the name “logical relations”.  The types are certainly critical to the method, but it is well known not to be the most perspicuous proof technique: indeed, it is very tricky, and I have heard even very experienced practitioners confess to lacking intuition for it.
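For a rough flavor of the method (this is a standard formulation, not tied to any of the papers above), reducibility is defined by recursion on types:

\begin{array}{lll} \mathit{Red}_{\mathit{Nat}}(t) & \mbox{iff} & t \mbox{ is strongly normalizing} \\ \mathit{Red}_{T_1 \to T_2}(t) & \mbox{iff} & \mbox{for all } t' \mbox{ with } \mathit{Red}_{T_1}(t'),\ \mathit{Red}_{T_2}(t\ t') \end{array}

One then proves that every typable term is reducible, by induction on typing derivations; the trickiness lies in working with this strengthened induction hypothesis.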

There is an alternative method in which it is much clearer how types influence termination, and that is the method of hereditary substitutions.  Computationally speaking, we define a notion of capture-avoiding substitution [t/x]^T t'.  Here, t is being substituted for variable x in t', but wherever substitution creates new redexes, we reduce those as well, by further hereditary substitutions at a strictly smaller type (T_1 or T_2, where T \equiv T_1 \to T_2).  So if T is a base type, then hereditary substitution is the same as the usual substitution.  This method was brought to the attention of computational logic researchers by the paper “A concurrent logical framework I: Judgments and properties”, by K. Watkins et al.  We can prove that hereditary substitution terminates by well-founded induction with the lexicographic ordering on (T, t').  Just at the point where t' gets bigger (when a redex has been created and is now being reduced by recursively applying hereditary substitution), T gets smaller, and the proof goes through.  It is very clear that the type is enforcing termination.  The problem is that this method has so far been considered only for proof-theoretically very weak type systems.  I’ll say more about the possibility for extension in a later post.
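To see the mechanism concretely, here is a minimal Haskell sketch of hereditary substitution for the simply typed lambda calculus, assuming named variables, a single base type, terms already in normal form (so the only redexes that arise are created by the substitution itself), and the Barendregt convention for bound names; the datatypes and names are my own simplifications, not the formulation of Watkins et al.:

data Ty = Base | Arr Ty Ty deriving Eq
data Tm = Var String | Lam String Ty Tm | App Tm Tm

-- hsub t x ty b computes [t/x]^ty b.  We assume the bound names of b
-- are distinct from x and from the free variables of t, so no
-- renaming is needed in this sketch.
hsub :: Tm -> String -> Ty -> Tm -> Tm
hsub t x ty b = case b of
  Var y
    | y == x    -> t
    | otherwise -> Var y
  Lam y a body  -> Lam y a (hsub t x ty body)
  App f u ->
    let u' = hsub t x ty u in
    case hsub t x ty f of
      -- substitution created a redex: contract it by a further
      -- hereditary substitution, at the strictly smaller type a
      Lam y a body -> hsub u' y a body
      f'           -> App f' u'

On well-typed normal inputs, each recursive call either keeps ty the same and moves to a subterm of b, or (in the App case that contracts a redex) moves to a strictly smaller type; this is exactly the lexicographic measure (T, t') from the termination argument above.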
