I have been interested, for many years, in what is generally called computational classical type theory (CCTT). The idea is to take some proof system for classical logic and turn it into a programming language via the Curry-Howard isomorphism. Cut elimination or normalization for the proofs turns into reduction for the corresponding program terms. There is nothing at all objectionable about this, and indeed it is very interesting to see term notations for proofs of classical theorems like ((A → B) → A) → A (Peirce’s Law) or ¬¬A → A. It was the brilliant insight of Griffin (my, that POPL ’90 was a good one, as Lamping’s paper on optimal beta-reduction was that year, too) that proofs of such classical theorems essentially involve control operators like call-cc. This is all super cool and very interesting.
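Griffin’s insight can be seen directly in Haskell’s continuation monad (a sketch of my own, not from Griffin’s paper; the name `peirce` and the use of `Cont` are my illustration): the type of `callCC` is exactly Peirce’s Law, read through Curry-Howard.

```haskell
import Control.Monad.Cont

-- The type of callCC is Peirce's Law ((A -> B) -> A) -> A,
-- with propositions read as types under the Cont monad:
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC

-- Escaping with 42: the continuation k is invoked, so the
-- "impossible" b is never actually produced.
example :: Int
example = runCont (peirce (\k -> k 42)) id
```

Here `example` evaluates to 42: invoking the captured continuation aborts the rest of the computation, which is the computational content of the classical proof.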

And I accept what I understand to have been David Hilbert’s position that non-constructive reasoning is an essential tool for mathematics, and that rigid constructivism (i.e., insisting on constructive proofs for every theorem) is thus disastrous for the advancement of mathematics. In Computer Science at least, I have sometimes thought that the ability to do things like case-split on whether a term in some unrestricted programming language terminates or not could be very helpful, if not critical, for verification of general programs.

Since Curry-Howard is great and applies very nicely to classical logic, and since classical logic is useful if not essential for certain kinds of verification, CCTT should be fantastic in my book, right?

No — and of course I will tell you why.

The problem is that the Curry-Howard isomorphism a reasonable programmer expects to hold is broken in CCTT. The problem can be seen clearly with disjunctions. Under Curry-Howard, a disjunction A ∨ B corresponds to a sum type A + B. Given an element of such a sum, we can case split on whether that element is the injection of an element of A or else of B. And in doing so, we gain some actionable **information**. But this does not happen when we split on a use of the law of excluded middle (LEM). We gain no information at all in the two branches of the split. And indeed, the usual way CCTT proves LEM for a proposition A is to say (so to speak): it is ¬A which is true. Then if the branch of the split for the right disjunct ever tries to make use of that information — wait, how can it do that? The only thing you can do with the negative fact ¬A is to apply it to a proof of A to obtain a contradiction. And if you ever do that, the machinery of CCTT will take your proof of A, backtrack to the point where it originally told you ¬A is true, and then invoke the branch of the split for the left disjunct, with that proof of A. Super clever! (This is dramatized by a famous story about the devil told by Phil Wadler; see my old post about this.)

But also, note that it means you can never take action based on the result of such a case split. Everything must be indefinitely revisable, due to the possible need to backtrack. So you cannot split on an instance of LEM and print 1 in the first branch and 2 in the second. Or rather, you can, but the meaning is not at all what you expect. “This Turing machine either halts or it doesn’t.” We case split on this, and could easily think that we are gaining some information by doing so. But we are not. Your code will just always print 2. This is not what you reasonably expect under the Curry-Howard interpretation of disjunctions. And thus using CCTT for programming is not a good idea.
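The backtracking behaviour described above can be sketched concretely with Haskell’s continuation monad (my own illustration; the names `lem` and the demo `main` are assumptions, not anything from a particular CCTT system). `lem` first answers with the right disjunct, a refutation; if that refutation is ever applied to an actual proof, control jumps back and the split is re-run with the left disjunct:

```haskell
import Control.Monad.Trans.Cont
import Control.Monad.IO.Class (liftIO)

-- "Proof" of A ∨ ¬A via call/cc: answer Right (a refutation of A);
-- if that refutation is ever fed a proof of A, backtrack and
-- re-run the split with Left (that very proof).
lem :: ContT r m (Either a (a -> ContT r m b))
lem = callCC $ \k -> return (Right (\a -> k (Left a)))

main :: IO ()
main = evalContT $ do
  d <- lem
  case d of
    Left n -> liftIO (putStrLn ("branch 1, with proof " ++ show n))
    Right refute -> do
      liftIO (putStrLn "branch 2")
      _ <- refute (42 :: Int)
      return ()
```

Running this prints `branch 2` first and then, once the refutation is actually used, `branch 1, with proof 42`: the result of the split was revised after the fact, which is exactly the indefinite revisability described above.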

It would be fine to use classical reasoning for *reasoning* about programs. For the justification of some property of a program might require (or be easier with) some non-constructive case split. But the program itself should not do such things, or else the abstraction Curry-Howard seems to offer us is violated.

Is this an argument against sum types in the presence of control operators? (Genuine question)

## 4 Comments

I think the answer is yes, and the argument is clarified when formalized in dependent type theory. Sum types + control operators result in an inconsistent type theory once we have dependent elimination of the sum type. This dependent elimination is a nice formalization of “learning something” by branching, and the fact that you “learn” something inconsistent when you have control operators is concerning.
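For concreteness, the dependent eliminator in question has a shape like the following (a standard presentation in my own notation, not quoted from any particular system):

```
split : Π p : A + B.
        (Π a : A. C (inl a)) →
        (Π b : B. C (inr b)) →
        C p
```

The motive C may mention p itself, so each branch genuinely learns which injection p is; it is exactly this learning that becomes inconsistent in the presence of control operators.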

I’m not fully convinced by this line of argument: it might rather be an indication that in a dependent type theory with control, we need to distinguish between additive (A ⊕ B) and multiplicative (A ⅋ B) sums just like we do in classical linear logic. Indeed, it’s already the case that sigma types with unpack-style eliminators and projective (fst/snd) eliminators are not interderivable.
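For reference, the two sigma eliminators mentioned here can be written out as follows (a standard presentation in my notation, not quoted from the comment):

```
unpack : (Σ x : A. B x) → (Π x : A. B x → C) → C    -- "let (x, y) = p in …"
fst    : (Σ x : A. B x) → A
snd    : Π p : (Σ x : A. B x). B (fst p)             -- projective
```

In a substructural setting, consuming the pair exactly once (unpack) and projecting from it twice (fst/snd) are genuinely different operations, which is why the two presentations come apart.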

After reading some nLab and Stack Exchange articles, I could be convinced by your argument. When I saw “sum types”, I thought “additive disjunction”, because I consistently fail to build an intuition for multiplicative disjunction and then forget it exists.

Now I wonder how Aaron’s example applies. Supposing I have Halts ⅋ ¬ Halts, and I want to use this information. The intuition is that I must have already prepared for both eventualities; I do not “branch” in the traditional sense, it is more like I fork. In one thread, I know Halts, while in the other I know ¬ Halts. But then why can I not print 1 in the first thread and 2 in the second?

(for reference: https://ncatlab.org/nlab/show/multiplicative+disjunction, https://math.stackexchange.com/questions/50340/what-is-the-intuition-behind-the-par-operator-in-linear-logic)

Hi William,

I have to admit that I don’t have any kind of computational intuition for par — I think of par in terms of translations into calculi of proofs and refutations.

So a proof of A ⅋ B is a pair of operations, one of which builds a proof of A from a refutation of B, and the other of which builds a proof of B from a refutation of A. (A refutation of A ⅋ B is a pair of a refutation of A and a refutation of B.) Negation simply interchanges proofs and refutations — a proof of ¬ C is a refutation of C, and a refutation of ¬ C is a proof of C. So Halts ⅋ ¬ Halts is trivially provable: given a refutation of Halts, that’s a proof of ¬ Halts. And given a refutation of ¬ Halts, that’s a proof of Halts. (Obviously this works for any A, so A ⅋ ¬ A holds generically.)
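This reading can be sketched in Haskell with a CPS encoding (my own illustrative encoding, not from the comment: the names `Not`, `Par`, and `lemPar` are assumptions, and refutations are modelled as continuations into a fixed answer type `r`). A refutation of A ⅋ B is a pair of refutations, so a proof of A ⅋ B is something that consumes such a pair; A ⅋ ¬A then holds trivially, by handing the refutation of A to the refutation of ¬A:

```haskell
-- Refutations modelled as continuations into an answer type r.
type Not r a = a -> r

-- A refutation of A ⅋ B is a pair (refutation of A, refutation of B);
-- a proof of A ⅋ B is a consumer of such a pair.
type Par r a b = (Not r a, Not r b) -> r

-- A ⅋ ¬A: a refutation of A *is* a proof of ¬A, so hand it over.
lemPar :: Par r a (Not r a)
lemPar (refuteA, refuteNotA) = refuteNotA refuteA
```

Note there is no branching here at all: both refutations must be supplied up front, which matches the “fork, don’t branch” intuition.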

Two comments about this, though:

1. Making substructural logic play nicely with dependent types is really, really hard. Dependency really wants there to be a notion of equality present, and since equality is left adjoint to contraction, this means that equality and substructural logic are incompatible at a really deep level. So no simple-minded approach to this question will work out. (My work in this space works by only allowing equality at types satisfying contraction, and Conor McBride and Bob Atkey’s work on this requires each linear type to have an intuitionistic “skeleton”, which is what dependency cares about. Neither one of these approaches can be said to support a “real” combination of linearity and dependency.)

2. OTOH, it’s an idea worth pursuing! Mike Shulman argues in his paper “Linear Logic for Constructive Mathematics” that many ideas in constructive mathematics can be seen as secretly arising from the “compilation” of proofs in linear logic into constructive logic. He blogged about this here:

https://golem.ph.utexas.edu/category/2018/05/linear_logic_for_constructive.html

The tripos construction he uses can be thought of as a “linear higher-order logic”, where sets/sorts are strongly segregated from formulas/proofs. So sets can have a sensible equality even though equality of proofs is not an idea that works out easily. (I think this is a really amazing paper, and that both you and Aaron would really like it.)