
I recently read this very nice paper by Venanzio Capretta, “General Recursion via Coinductive Types” (Logical Methods in Computer Science, Volume 1, 2005, pages 1-28).  To summarize very crudely: if your termination analysis supports lazy computations (“corecursive functions”) on possibly infinite data structures (“coinductive data”), then you can implement general recursive functions as certain such computations on certain such data structures.  This is important, because the idea works even in a type theory where all functions are required to terminate (and all co-recursive functions are required to be productive; that is, in finite time one must always be able to unfold the infinite data structure the next finite amount).

The remarkable thing Capretta realized is that the only possibly infinite data structure you need is one we can think of as a list beginning with possibly infinitely many copies of a single symbol \triangleright.  This symbol means “the computation is still running”.  If there are only finitely many such symbols, the computation is finite, and the list ends with the result of the computation.  If there are infinitely many \triangleright symbols, the computation runs forever (without producing a value).  For example, suppose we are trying to represent the (terminating) computation of 2^4 this way.  We will have something of the form:

\triangleright \cdots \triangleright 16

Actually, Capretta will write

\triangleright \cdots \triangleright \ulcorner 16 \urcorner

This is because the coinductive type for computations that either diverge or converge with a result of type A is A^\nu, with (co-inductive) constructors:

\begin{array}{lll} \triangleright & : & A^\nu \rightarrow A^\nu \\ \ulcorner \cdot \urcorner & : & A \rightarrow A^\nu\end{array}
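To make this concrete, here is a sketch of A^\nu in Haskell, where all datatypes are lazy and so behave coinductively.  The names Delay, Later, and Now are my own choices, standing in for A^\nu, \triangleright, and \ulcorner \cdot \urcorner; this is not code from the paper.

```haskell
-- A sketch of A^nu as a lazy Haskell datatype.  Laziness makes it
-- behave coinductively: a value may consist of infinitely many Laters.
data Delay a
  = Later (Delay a)  -- "still running" (Capretta's triangle)
  | Now a            -- finished, with a result (the corner brackets)

-- The terminating computation of 2^4, with some "still running" steps:
sixteen :: Delay Int
sixteen = Later (Later (Later (Later (Now 16))))

-- A computation that runs forever: infinitely many Laters.
never :: Delay a
never = Later never
```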

Of course, if your computation is (for example) searching for counter-examples to Fermat’s Last Theorem, it will look like the following (with an infinite number of \triangleright symbols):

\triangleright \cdots

This A^\nu is a rather spartan data structure for computations: all the information about what the computation is doing has been abstracted away, so that we see only either “still thinking” or “done, and here is the answer”.  Why is this enough for encoding general recursive functions in a terminating type theory?  The answer (as I believe I understand it) is that the type A^\nu is the appropriate constructive analog of the more obvious but constructively inappropriate inductive (as opposed to co-inductive) type where you either have a result of type A, or something indicating divergence.  This type might be written as a sum-type A + \uparrow.  It is inappropriate for modeling general computation in a terminating type theory because, with a sum-type, we can always case-split on whether some element a of type A+\uparrow is in A or is the entity representing divergence.  In a constructive logic such as a terminating type theory, this implies we can decide termination of this element a, which is not possible in general.  The coinductive type A^\nu is the perfect stand-in for A + \uparrow: while we cannot case-split on its values, we can co-recursively analyze them, so if a computation does terminate, we can make use of the result.  If it does not, we will co-recursively analyze it forever.
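As a sketch of how a general recursive definition becomes a productive co-recursive one, consider the Collatz iteration, whose termination is a famous open problem.  The Delay type and all names below are my own Haskell rendering of A^\nu, not code from the paper.

```haskell
-- A^nu rendered as a lazy Haskell datatype (names my own).
data Delay a = Later (Delay a) | Now a

-- The Collatz iteration as a Delay computation.  Every recursive
-- call is guarded by a Later constructor, so the definition is
-- productive (we can always unfold one more step), regardless of
-- whether the iteration ever reaches Now.
collatz :: Integer -> Delay Integer
collatz 1 = Now 1
collatz n
  | even n    = Later (collatz (n `div` 2))
  | otherwise = Later (collatz (3 * n + 1))
```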

Capretta develops and applies these ideas in a number of interesting ways.  But you’ll have to read his paper to learn more about that, as this post (delayed as it was by the end of the semester) is getting too long already.



  1. This technique might be theoretically complete (I don’t know the details), but it’s inconvenient to use in practice. With Coq’s well-formedness conditions on co-fixpoints, many common abstractions aren’t possible. For instance, you can’t write one function in this style whose body calls a bunch of other such functions and then combines their results. Maybe you can get around this by using CPS or some other trick, but it probably wouldn’t be much fun.

  2. It should probably also be noted that it’s a rather weak sense in which this gives you general recursion. You might be able to, say, implement mergesort in the typical style using this, but then your mergesort will have a type something like:

    sort : {A : Set} -> List A -> D (List A)

    So you can give it a list, and it will give you back an answer, but it is not a useful answer unless you can prove that the general recursive algorithm terminates, because ‘unwrap my D (List A) until you find a real answer’ is not, on its own, a total function. You could also, I suppose, implement an ‘unwrap at most N times to see if there’s an answer’ function, but that isn’t particularly satisfying.

    • Dan,

      Well, it seems we could prove a theorem like this, expressing that the function is terminating:

      \forall A:Set.\ \forall l:List A.\ \exists l':List A. (\textit{sort}\ A\ l) \downarrow l'

      (where x \downarrow a means that x converges to value a, Definition 3.2 of the paper).
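      (If I am remembering Definition 3.2 correctly, x \downarrow a is defined inductively rather than co-inductively, as the least relation closed under:

      \begin{array}{l} \ulcorner a \urcorner \downarrow a \\ x \downarrow a \;\Rightarrow\; \triangleright x \downarrow a \end{array}

      so a proof of convergence is a finite object, even though the computation itself is a possibly infinite structure.)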

      So once we write the general recursive function, we can prove it terminates. I agree with you, though, that it is still not entirely clear to me (probably because I have not worked through examples in this co-recursive style) how effectively we could program using such functions, whether proved total or not.

        • Dan Doel
        • Posted March 4, 2010 at 8:10 am
        • Permalink

        Yes, that is what you’d have to do. It’s just that it (probably) isn’t easy, and may well not be any easier than using other tricks to encode the algorithm as a recognizably total function (for sorting, using well-founded recursion on the length of the list seems like it’d be straightforward for quite a few algorithms).

        Not that it isn’t a cool trick, but it’s one of several for encoding general recursion in a total language through fancier proof obligations. I suppose it does have the advantage that you can write down the algorithm and prove that it’s terminating afterward, instead of the algorithm and proof obligations being intertwined, like they are in some other solutions.

        Of course, there’s also the option of adding general recursion back into the language in a controlled way, similar to how Haskell has main :: IO a as a top-level hook. Write what you can as total functions, and write what you can’t using general recursion in a monad that can’t infect the total functions.
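Dan’s ‘unwrap at most N times’ function from this thread is easy to sketch; the Delay type and all names below are my own Haskell stand-in for A^\nu (the D in his sort type), not anything from the paper.

```haskell
data Delay a = Later (Delay a) | Now a   -- my stand-in for Dan's D

-- Unwrap at most n times.  This is total, since the fuel argument
-- strictly decreases; it returns Nothing if the computation has not
-- finished within n steps.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a)   = Just a
runFor n (Later d)
  | n <= 0    = Nothing
  | otherwise = runFor (n - 1) d
```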

  3. Just use it as a monad and prove partial correctness (if it terminates, it gives the right answer).

    • You can’t generally use this scheme as a monad, because the productivity restriction on co-recursive definitions won’t let you make a co-recursive call within one of the arguments to the “bind” operator.
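For what it’s worth, in a language like Haskell, which imposes no productivity check, the monad structure itself is directly definable; the restriction described above bites in Coq when a later co-fixpoint makes its co-recursive call underneath this bind.  A sketch, with names of my own choosing:

```haskell
data Delay a = Later (Delay a) | Now a   -- my rendering of A^nu

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

-- Bind runs the first computation to completion (if ever), then
-- feeds the result to the continuation, preserving each Later step.
instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)
```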
