I would like to collect a list of some examples where non-constructive reasoning is either essential or makes something significantly more convenient. I am still digesting the work of Russell O’Connor on “Classical Mathematics for a Constructive World” — Russell has many interesting examples there, including using classical reasoning for the constructive reals (for which basic relations like equality are undecidable). One is for verification of hybrid systems (that paper will be appearing here shortly), which is certainly an eminently practical and computational application. The others are more mathematical in nature, as the title of the paper suggests. This leads me to refine my quest a little: I’d like to see examples of computational problems where non-constructive reasoning is either essential or very useful. Here are a few examples I know of so far. I’d be very grateful for any others you know of or think of that you’d be willing to share.
Termination case. In the Trellys project (particularly “Iowa Trellys”), we have made use of the ability to case split on whether or not a term terminates under the call-by-value operational semantics of the language. One great use for this feature is to provide simpler theorems about functions which have not been proven total (either because they are truly partial, or because we have just decided not to invest the verification effort to do so). For example, in Iowa Trellys, quantifiers range over terms of the stated type, not values. To prove a theorem like

forall x . plus x Z = x

(assuming plus recurses on its first argument, so the theorem is not immediate), it is very handy to be able to case split on whether or not x has a value. If it does not, then both sides of the equation diverge and are hence equal (since we equate all diverging terms). If x does have a value, then we can do induction on it to prove the theorem in a standard way.
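For concreteness, here is a minimal Haskell sketch (the Nat encoding is mine, not Trellys notation) of a plus that recurses on its first argument, so that plus x Z = x really does require induction on x rather than mere unfolding:

data Nat = Z | Suc Nat

-- plus recurses on its first argument, so plus x Z = x is not
-- provable by definitional unfolding alone; it needs induction on x
plus :: Nat -> Nat -> Nat
plus Z       y = y
plus (Suc x) y = Suc (plus x y)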
Similar examples arise even if the quantifiers of the language range only over values. Suppose we have defined a tail-recursive reverse-map function (our implementation’s notation is not this nice :-():
rev_map f [] l' = l'
rev_map f (x:l) l' = rev_map f l (f x : l')
Suppose we want to prove the following basic theorem about this function:
forall f, l1, l2, l' . rev_map f (l1 ++ l2) l' = rev_map f l2 (rev_map f l1 l')
The proof is by induction on l1. In the inductive case, where l1 is x:l1' (say), we need to instantiate the induction hypothesis with f, l1', l2, and f x : l'. But the latter is not known to terminate, and in general might not, since f x might very well diverge. Of course we could choose to prove the theorem only for total functions f, but that is strictly weaker than what we could hope to achieve: the theorem is also true if f is partial. For if f x diverges, then so will both sides of the above equation. With termination case, one can prove desirable theorems about higher-order functions in a general-recursive setting, where a purely constructive approach would support only strictly weaker theorems.
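Here is a quick Haskell transcription with a sample instance of the theorem (this only exercises a total f, of course; the point of termination case is precisely to cover partial f as well):

-- Tail-recursive reverse-map with accumulator l'
revMap :: (a -> b) -> [a] -> [b] -> [b]
revMap _ []    l' = l'
revMap f (x:l) l' = revMap f l (f x : l')

-- One instance of: rev_map f (l1 ++ l2) l' = rev_map f l2 (rev_map f l1 l')
checkRevMap :: Bool
checkRevMap = revMap (*2) ([1,2] ++ [3,4] :: [Int]) [0]
           == revMap (*2) [3,4] (revMap (*2) [1,2] [0])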
Computational induction. For quite some time in our work on Iowa Trellys, we have been vexed by the lack (in our current system) of a principle of induction on the length of a terminating computation. For example, suppose you have defined an interpreter for SK combinators (where I am writing embedded combinator terms like App(S,K) as just S K):
interp S = S
interp (S x) = S (interp x)
interp (S x y) = S (interp x) (interp y)
interp K = K
interp (K x) = K (interp x)
interp (x y) = app (interp x) (interp y)
app S x = S x
app (S x) y = S x y
app (S x y) z = interp (x z (y z))
app K x = K x
app (K x) y = x
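To make the example concrete, here is one way to transcribe the interpreter into Haskell (the Term datatype and explicit App constructor are my own encoding; note that Haskell is lazy, so divergence behaves differently than under the call-by-value semantics discussed here, but terminating runs agree):

data Term = S | K | App Term Term
  deriving (Eq, Show)

-- interp evaluates a term; the clauses mirror the equations above,
-- with application written explicitly via App
interp :: Term -> Term
interp S                 = S
interp (App S x)         = App S (interp x)
interp (App (App S x) y) = App (App S (interp x)) (interp y)
interp K                 = K
interp (App K x)         = App K (interp x)
interp (App x y)         = app (interp x) (interp y)

-- app applies one value to another
app :: Term -> Term -> Term
app S x                 = App S x
app (App S x) y         = App (App S x) y
app (App (App S x) y) z = interp (App (App x z) (App y z))
app K x                 = App K x
app (App K x) y         = x
app v w                 = error ("app: unexpected " ++ show (App v w))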
Suppose you wish to prove that it is idempotent:

forall x . interp (interp x) = interp x
A natural way to prove this is first to case split on whether or not interp x terminates. If it does not, then both sides of the above equation diverge under call-by-value semantics, and so we equate them. If interp x does terminate, then we would like to reason by induction on some measure of the terminating computation; here I will use the nesting depth of recursive calls. For example, if we use the second equation above for interp, then by the IH we know that interp (interp x) equals interp x, which shows that interp (interp (S x)) equals interp (S x). And in the case for the third equation for app, by the IH we know that interp (interp (x z (y z))) equals interp (x z (y z)), and hence (since interp (x z (y z)) is the value returned by interp in the situation that led to this call to app), we also know interp is idempotent on the original term.
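Given the Haskell transcription above, idempotence can at least be spot-checked on terminating inputs:

-- Spot-check idempotence; e.g. checkIdem (App (App (App S K) K) S) is True
checkIdem :: Term -> Bool
checkIdem t = interp (interp t) == interp t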
Here is a way (that I just realized in the middle of the night yesterday) that we can use classical reasoning to derive a principle of computational induction for functions like interp above. First, we have to define a counter-indexed version of the function. This is in addition to the regular version of the function without counters. For interp, a counter-indexed version could be (I will just call both this and the original function "interp"):
interp n S = S
interp (n+1) (S x) = S (interp n x)
interp (n+1) (S x y) = S (interp n x) (interp n y)
interp n K = K
interp (n+1) (K x) = K (interp n x)
interp (n+1) (x y) = app n (interp n x) (interp n y)
interp 0 x = abort
app n S x = S x
app n (S x) y = S x y
app n (S x y) z = interp n (x z (y z))
app n K x = K x
app n (K x) y = x
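For reference, here is one way to render this counter-indexed version in Haskell, modeling abort with Maybe (Nothing plays the role of abort; the names interpN and appN are mine):

-- Counter-indexed interpreter: Nothing models abort (counter ran out)
interpN :: Int -> Term -> Maybe Term
interpN _ S                         = Just S
interpN n (App S x)         | n > 0 = App S <$> interpN (n-1) x
interpN n (App (App S x) y) | n > 0 =
  App <$> (App S <$> interpN (n-1) x) <*> interpN (n-1) y
interpN _ K                         = Just K
interpN n (App K x)         | n > 0 = App K <$> interpN (n-1) x
interpN n (App x y)         | n > 0 = do
  vx <- interpN (n-1) x
  vy <- interpN (n-1) y
  appN (n-1) vx vy
interpN _ _                         = Nothing   -- counter is 0: abort

-- appN passes the counter through unchanged, as in the equations above
appN :: Int -> Term -> Term -> Maybe Term
appN _ S x                 = Just (App S x)
appN _ (App S x) y         = Just (App (App S x) y)
appN n (App (App S x) y) z = interpN n (App (App x z) (App y z))
appN _ K x                 = Just (App K x)
appN _ (App K x) y         = Just x
appN _ _ _                 = Nothing  -- unreachable on values interpN produces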
Now our goal is to prove exists n . (interp n x)↓, from our assumption (interp x)↓. Here I am writing t↓ to mean that t terminates. Let us case split on whether or not exists n . (interp n x)↓. Such a case split is non-constructive, since if it were constructive we could effectively decide halting for SK combinators, which is impossible. In the case where that formula holds, we are obviously immediately done. So suppose it does not hold. Then (intuitionistically) we have forall n . not ((interp n x)↓). Let us imagine that we have a coinductive predicate t↑ for divergence, and suppose we know (presumably by an axiom) that forall t . t↑ -> not (t↓). Let us now try to prove the following formula:

forall x . (forall n . not ((interp n x)↓)) -> (interp x)↑
The proof is by coinduction: we must produce a proof of (interp x)↑, where we use the co-IH only where it is guaranteed to be guarded by constructors for our coinductive type of divergence in the final return value of the coinductive proof. How does this proof work? We first case split on x, to enable some partial evaluation of interp x. For example, suppose x is (S x'), and so interp x = S (interp x'). We can prove forall n . not ((interp n x')↓) by contradiction: if there exists n where interp n x' terminates, then interp (n+1) x terminates, contradicting our assumption forall n . not ((interp n x)↓). So we can use our co-IH to obtain a proof that interp x' diverges, which we can then wrap in a constructor for the divergence type to show (interp x)↑ in a guarded way. The other cases proceed similarly (I believe: I did not work through them all for interp). Since we have derived (interp x)↑ from our assumption forall n . not ((interp n x)↓), and since (interp x)↑ contradicts (interp x)↓ (by the principle stated above), which we are assuming, we have derived a contradiction and the proof is completed.
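Using the Haskell fuel-indexed sketch above, the key step in the (S x') case can at least be spot-checked: whenever interp n x' terminates, interp (n+1) (S x') terminates too (the proof uses the contrapositive of this fact):

-- Checkable instance of: (interp n x')↓ implies (interp (n+1) (S x'))↓
stepS :: Int -> Term -> Bool
stepS n _ | n < 0 = True   -- only non-negative counters are meaningful
stepS n x' = case interpN n x' of
  Nothing -> True          -- vacuously true when interpN n x' aborts
  Just v  -> interpN (n+1) (App S x') == Just (App S v)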
I'd be delighted to hear of other uses of classical reasoning for computational problems you know of!