I have recently become fascinated by, not to say obsessively fixated on, what some call computational classical type theory. This is the kind of type theory one gets by applying the Curry-Howard isomorphism to proof systems for classical logic. There are many variants, and Harley Eades, who introduced me to this topic, has written a lot about them on his blog. These systems extend lambda calculus with some control operators (like try and catch for exceptions, or generalizations of these). For example, a proof of A ∨ ¬A works by providing a function from A to ⊥ (that is, a proof of ¬A), where if that function is ever called with a proof of A, we go back in time (by raising an exception) to the point where the proof of A ∨ ¬A was consulted, and at that point return that proof of A, instead of the function from A to ⊥. Sound evil? It is. In fact, in this post I want to highlight just how bad the situation is. But this is not a rant: I am firmly convinced that classical reasoning is needed for reasoning about general-recursive functions (think case-splitting on whether or not a given general-recursive term terminates). But we are not going to be able to make use of these powerful and exciting ideas if we cannot tame the *non-canonicity* of these theories, which I will discuss next.
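To make the mechanism concrete, here is a sketch in Coq-like syntax of how such a control operator yields the excluded middle. The operator `callcc` is postulated as an axiom here (standard Coq has no such operator, and an axiom does not compute); in a genuine computational classical type theory it would reduce by capturing the current continuation, like a generalized "catch":

```coq
(* Hypothetical control operator, postulated as an axiom. *)
Axiom callcc : forall A B : Prop, ((A -> B) -> A) -> A.

(* Excluded middle from callcc: we first answer with the right
   disjunct ~P, i.e. a function P -> False; if that function is ever
   applied to a proof p : P, the captured continuation k "throws" us
   back, and the answer becomes the left disjunct instead. *)
Definition em (P : Prop) : P \/ ~ P :=
  callcc (P \/ ~ P) False
    (fun k => or_intror (fun p : P => k (or_introl p))).
```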

When we program in a pure functional language, say, we usually intend that at some point our program will produce some observable effect. We are not writing programs just for the sake of knowing they are running. We want them to compute answers to some computational question. So we need to take some kind of observation of the results (assuming convergence) of our program.

Now, we know that in classical logic the so-called disjunction property fails. That is, we can prove, from no assumptions, a formula A ∨ B without in general having a proof of A independent of B, or else of B independent of A. For a “classic” example, we can prove A ∨ ¬A without thereby having a proof of A by itself or else of ¬A. So we cannot observe a canonical result of type A ∨ B in general, where a canonical result of type A ∨ B is either inl(a) or inr(b), with a a canonical result of type A and b a canonical result of type B.

What is the big deal with that? Probably we were not expecting to observe results at ∨-type anyway. But of course, we could compute some value at a type we do want to observe, like nat (the natural numbers), based on a case split on a non-canonical result of type A ∨ B. These type theories for classical logic are quite amazing: they will carefully track and propagate that non-canonicity throughout computations. So if we perform such a case split in computing a nat, we would expect to end up with another non-canonical result, which would then not be suitable for observation.

Well darn, you might say. Sometimes we might try to compute a number and end up with a non-canonical result which we cannot observe. Perhaps we could just report an error to the user in that case: “non-canonical result encountered, cannot observe”. That would be unworkably bad, in my opinion. But it gets worse.

It can actually happen (I believe: any experts reading, please check me on this) that we can compute an **incorrect** canonical result, based on case-splitting with a non-canonical term of type A ∨ B. This is really shocking, and it renders the idea of computational classical type theory unusable in practice without major modifications (some of which have been undertaken by Herbelin, Krebbers, and others in work to preserve canonicity at various types while still allowing classical reasoning). Here is a sketched example.

I am assuming you are familiar with the non-constructive proof of the fact that there are two irrationals a and b such that a^b is rational (you can read this proof on Wikipedia here). This proof does a case-split on whether √2^√2 is rational or not. Let us recast this proof ever so slightly, so that the case-split is on whether that quantity is irrational or not. As I described at the start of this post, the proof of A ∨ ¬A in classical logic returns a proof of ¬A (the second branch of the disjunction) which, if it is ever used to derive a contradiction with a proof of A, will throw that proof of A back to the original point where the proof of A ∨ ¬A was used, to serve instead as a proof of the first branch of the disjunction. Ok. But what this means is that when you case-split on this proof of A ∨ ¬A, that proof essentially tells you that the second disjunct is the true one. So if we case-split on whether or not √2^√2 is irrational, computationally we will enter the second branch of our proof. And in that branch, we know that √2^√2 is rational, so a = b = √2 is what (computationally) we will return from our proof, together with the proof that a^b is rational.
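Here is a sketch of this recast proof in Coq-like notation. Everything about the reals (`R`, `sqrt2`, `rpow`, `irrational`, and the two facts about √2) is postulated purely for illustration, and `em` stands in for the classical case-split, which as an axiom does not compute in ordinary Coq:

```coq
Parameter R : Type.
Parameter sqrt2 : R.                    (* √2 *)
Parameter rpow : R -> R -> R.           (* a^b *)
Parameter irrational : R -> Prop.

Axiom em : forall P : Prop, P \/ ~ P.   (* the classical case-split *)
Axiom sqrt2_irr : irrational sqrt2.
(* If √2^√2 is irrational, then (√2^√2)^√2 = 2 is rational. *)
Axiom pow_pow_rat : irrational (rpow sqrt2 sqrt2) ->
                    ~ irrational (rpow (rpow sqrt2 sqrt2) sqrt2).

(* Computationally, a classical em would first take the second branch
   (a = b = √2), and only "repair" itself if that choice is later
   contradicted. *)
Theorem two_irrationals :
  exists a b, irrational a /\ irrational b /\ ~ irrational (rpow a b).
Proof.
  destruct (em (irrational (rpow sqrt2 sqrt2))) as [Hirr | Hrat].
  - (* √2^√2 is irrational: take a = √2^√2 and b = √2. *)
    exists (rpow sqrt2 sqrt2), sqrt2.
    repeat split.
    + exact Hirr.
    + exact sqrt2_irr.
    + exact (pow_pow_rat Hirr).
  - (* √2^√2 is rational: take a = b = √2. *)
    exists sqrt2, sqrt2.
    repeat split.
    + exact sqrt2_irr.
    + exact sqrt2_irr.
    + exact Hrat.
Qed.
```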

There is only one problem with this. As stated on the above-linked Wikipedia page, it turns out that √2^√2 is actually irrational. Now, the proof of rationality that we returned along with this value is such that if we ever contradict it using a proof that the quantity in question is irrational, we will go back in time to our original case-split, and produce instead the answer that it is √2^√2 and √2 that are the irrationals a and b such that a^b is rational. So again, the logic very carefully tracks the non-canonicity of our proof, and will go back and repair the situation if choosing the second branch of A ∨ ¬A turns out to have been the wrong choice.

But this is small consolation to the programmer who is given a = √2 and b = √2 together with the non-canonical proof that √2^√2 is rational, and takes an observation of those values (prints them out, say). As soon as we need to observe our answers, we depart from the magical world of classical logic where everything is revisable, and commit ourselves to a particular value. It is well known that there may be no value to commit to, due to non-canonicity, when taking such a measurement. What the above example shows (assuming I am right about this) is that one can actually commit to the wrong value. This is worse than aborting with the “non-canonical result” error message. The source of the problem is that we have a non-canonical constraint on our value which needs to be maintained, but taking an observation severs the value from that constraint.

To conclude: we absolutely have to be able to carve out a safe fragment of a computational classical type theory, where what you see (via typing) is truly what you get, in order for such systems to be even remotely usable in practice. Some attempts (briefly mentioned above) are being made, but there is no fully satisfactory solution yet available, in my opinion. Such a solution would use the type system to track canonicity, so one would know whether it is legal to observe a particular value or not. Values that cannot be observed can still be used for formal reasoning. But as we have seen, they lead to disaster if allowed out of the mathematics school room and onto the computational playground.

## 6 Comments

Wow, that’s a really cool observation!

Regarding your proposed type system, could you contrast this proposal with what is the case in Coq now? In Coq, if you postulate the usual “classic” axiom, then because the axiom is in Prop (and therefore can only be eliminated to construct other Props), you know that every expression in Set still has a canonical form (so in particular, extracted programs will work as expected). Would your proposed system be more expressive?

This is not entirely true: Coq supports singleton elimination, which allows one to eliminate a singleton type in Prop to construct something in Type. The following example shows that assuming classical logic breaks computation.

```coq
Parameter (dn : forall P, P \/ ~P).

Definition silly_True : True :=
  match dn True with
  | or_introl p => p
  | or_intror q => False_rect _ (q I)
  end.

Definition silly_nat : nat := match silly_True with I => 10 end.

Eval compute in silly_nat.
```

Here, the result of compute is stuck on a match on dn. Of course, this example is rather contrived; a more interesting use case would be to use classical logic to construct an inhabitant of the Acc type

```coq
Inductive Acc (A : Type) (R : A -> A -> Prop) (x : A) : Prop :=
  Acc_intro : (forall y : A, R y x -> Acc R y) -> Acc R x.
```

and then to use this inhabitant of Acc for a termination proof.

More concretely, assuming classical logic in Prop, one can prove that negated equality of real numbers x and y implies apartness (that is, there is a rational witness separating x and y). This is done by using classical logic to prove that a bounded search for a witness will terminate. Unfortunately, in Coq, this will not compute.

You are, however, right that extracted programs will work as expected; that is because proofs (and in particular these singleton eliminations over Props) are removed. For example:

```coq
Extraction silly_nat.
```

yields

```ocaml
let silly_nat = S (S (S (S (S (S (S (S (S (S O)))))))))
```

An interesting observation about these “incorrect results” is that they show that the well-known dependency-erasure extraction (as we have in, for example, Coq) does not work for classical logic. Consider

```coq
Parameter (callcc : forall A B, ((A -> B) -> A) -> A).

Definition wrong_nat : { x | x = 1 } :=
  callcc _ (0 = 1) (fun k => exist _ 0 (k (exist _ 1 eq_refl))).

Extraction Language Scheme.
Extraction wrong_nat.
```

This yields

```scheme
(define wrong_nat (callcc (lambda (_) `(O))))
```

So, what is going on here? In wrong_nat, we are supposed to construct a nat equal to 1. However, instead of doing that, we use callcc, give a wrong nat, and jump back. Unfortunately, extraction removes the jumping-back part, as it is hidden in a proposition.

I know there is a classical extraction variant by Alexandre Miquel, http://perso.ens-lyon.fr/alexandre.miquel/kextraction/, but I do not really understand how it deals with the above.

Hi, Vilhelm and Robbert. Many thanks for your comments. It is very interesting to know, Robbert, that the phenomenon I was describing can actually occur in Coq extended with callcc. That is fascinating, and confirms my fear that this alarming behavior could happen in practice. So we really do need to come up with type theories that can ensure canonicity, either for certain types, or else for all types with some modifier applied. For the latter idea, I am thinking of a type theory with types “C T” and “U T” for “canonical T” and “unknown (possibly non-canonical) T”, respectively.

Cheers,

Aaron

I would advocate “tracking” the use of classical reasoning by relegating its “effects” to inside a double-negation monad. I give a few examples of where I’ve used classical reasoning safely in constructive proofs in “Classical Mathematics for a Constructive World”: http://arxiv.org/abs/1008.1213

After all, since constructive logic is an *extension* of classical logic, how can we lose anything by using constructive logic?
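For what it is worth, here is a minimal, axiom-free Coq sketch of the double-negation-monad idea (the names `NN`, `ret`, `bind`, and `em_nn` are mine, not taken from the paper):

```coq
(* The double-negation monad on Prop. *)
Definition NN (P : Prop) : Prop := ~ ~ P.

Definition ret {P : Prop} (p : P) : NN P :=
  fun k => k p.

Definition bind {P Q : Prop} (m : NN P) (f : P -> NN Q) : NN Q :=
  fun k => m (fun p => f p k).

(* The excluded middle holds *inside* the monad, with no axioms:
   classical reasoning is available under NN, but a proof under NN
   can never leak out to produce a canonical witness. *)
Definition em_nn (P : Prop) : NN (P \/ ~ P) :=
  fun k => k (or_intror (fun p : P => k (or_introl p))).
```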

Thanks for this reference to your work, Russell. The paper looks very relevant; it’s printing here now…