Everyone knows classical logic is nonconstructive. If a classical principle like ∀x. F(x) ∨ ¬F(x) were true constructively, it would mean we had an effective (i.e., computable) way to tell, for any value n, whether F(n) is true or false. When F(x) expresses a property like “the Turing machine coded by natural number x halts”, then we can see that ∀x. F(x) ∨ ¬F(x) cannot be constructively true in general. Case closed.

But a funny thing happened on the way from Amsterdam: computational interpretations of classical type theory were devised, like the λμ-calculus of Parigot and quite a few others. These type theories have term notations for proofs of classical logic, and give a reduction semantics showing how to simplify proofs to normal forms, similar to the standard reduction semantics for good old λ-calculus. The reduction semantics is perfectly effective, so one really can compute — or so it would seem — with terms in these type theories. If the essence of constructivism is computation, then it would seem that classical logic is constructive. Indeed, titles like that of Girard’s 1991 article “A new constructive logic: classic logic” would suggest this.

But you know, maybe it is intuitionistic logic which is nonconstructive. Come again? Well, for the past half year or so several of us here at Iowa have been studying so-called bi-intuitionistic logic. This is a logic that extends standard propositional intuitionistic logic with an operator called subtraction, which is dual to implication. There are a number of papers about this topic I could recommend. Maybe the easiest introduction is this one by Goré, although we ultimately followed the approach in this paper by Pinto and Uustalu.

Recall that in the Kripke semantics for propositional intuitionistic logic, the implication A → B is true in a world w iff in all future worlds where A is true, B is also true. The subtraction A − B has a dual semantics: it is true in w iff there is an earlier world where A is true and B is false. In intuitionistic logic, one usually defines ¬A as A → ⊥. This negation is true iff A is false in all future worlds. In bi-intuitionistic logic, one can also define a second kind of negation, using subtraction: ~A means ⊤ − A, and it is true iff there exists an earlier world where A is false. With this definition, we start to see classical principles re-emerging. For example, A ∨ ~A holds, because in any world w of any Kripke model, either A is true in w, or there is some earlier world (possibly w itself) where A is false. Valid formulas like A ∨ ~A violate the disjunction property of intuitionistic logic, since neither A nor ~A holds in general, despite the fact that the disjunction does hold. The disjunction property, or in programming-languages terms, canonicity, would seem to be as much a hallmark of constructive reasoning as the law of excluded middle is of classical reasoning. Yet here we have a conservative extension of intuitionistic logic, based on exactly the same semantic structures, where a form of excluded middle holds and canonicity fails.
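To make the semantics concrete, here is a small sketch in Haskell (the datatypes and names are my own invention, purely for illustration) of a truth checker for bi-intuitionistic formulas over finite Kripke models, with a tiny two-world model witnessing that p ∨ ~p holds at every world while neither disjunct is valid:

```haskell
-- A sketch of Kripke semantics for bi-intuitionistic logic over finite
-- models. All names here are invented for illustration.
data Form = Var String | Form :->: Form | Form :-: Form
          | Form :\/: Form | Form :/\: Form | Top | Bot

data Model = Model
  { worlds :: [Int]
  , leq    :: Int -> Int -> Bool   -- leq w v: v is a future world of w
  , val    :: Int -> String -> Bool
  }

-- Truth of a formula at a world.
holds :: Model -> Int -> Form -> Bool
holds m w f = case f of
  Top      -> True
  Bot      -> False
  Var x    -> val m w x
  a :/\: b -> holds m w a && holds m w b
  a :\/: b -> holds m w a || holds m w b
  -- implication: in all future worlds where a is true, b is true
  a :->: b -> and [ holds m v b | v <- worlds m, leq m w v, holds m v a ]
  -- subtraction: some earlier world where a is true and b is false
  a :-: b  -> or  [ holds m v a && not (holds m v b)
                  | v <- worlds m, leq m v w ]

negI, negC :: Form -> Form
negI a = a :->: Bot   -- intuitionistic negation
negC a = Top :-: a    -- the dual negation, via subtraction

-- Two incomparable worlds (the order here is just reflexive):
-- p is true at world 0 and false at world 1.
m2 :: Model
m2 = Model [0,1] (==) (\w x -> x == "p" && w == 0)

main :: IO ()
main = do
  let p = Var "p"
  -- p \/ ~p holds at every world of the model ...
  print [ holds m2 w (p :\/: negC p) | w <- worlds m2 ]
  -- ... yet neither disjunct is valid: p fails at 1, ~p fails at 0
  print (holds m2 1 p, holds m2 0 (negC p))
```

The model is deliberately branching (two incomparable worlds); on a linear model with a least world and a monotone valuation, one of the two disjuncts would itself end up valid, hiding the failure of the disjunction property.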

What are we to make of this? Let us think more about our Kripke semantics for (bi-)intuitionistic logic. In this semantics, we have a graph of possible worlds, and we define when a formula is true in a world. Formulas that are true stay true (the semantics ensures this), but formulas which are false in one world can become true in a later world. The critical point is that we have, in a sense, two notions of truth: truth in a world, and truth across time (or across possible worlds). Truth in a world is usually thought of as being defined classically. Maybe it is better to say that the logic with which it is defined is not made explicit. A textbook definition of Kripke semantics makes use of some background meta-language, and I think it is fair to say that without further explicit stipulation, we may assume that background meta-language is the language of informal classical set theory. So for any particular formula A, either A holds in world w, or else it does not. This is, in fact, what justifies the validity of A ∨ ~A. Indeed, to construct a version of Kripke semantics where this is not valid would seem to require something like an infinitely iterated Kripke semantics, where truth in a world is constructively defined (the infinite iteration would be to ensure that the notion of constructive definition being used for truth in a world is the same as the one for truth across worlds). An interesting exercise, perhaps! But not one that will validate the laws of bi-intuitionistic logic, where, for example, A ∨ ~A is indeed derivable.

So what is it that makes a logic constructive? Let’s come back to the case of computational classical type theory. The term constructs that were discovered to correspond to classical principles were not so exotic: they were actually forms of familiar control operators like call-cc from functional programming! So these constructs were already in use in the wild; only their connection to logic was a (remarkable!) discovery. Every day, the working Scheme programmer, or Standard ML programmer, or Haskell programmer has at their disposal these constructs which correspond to nonconstructive reasoning. But they are writing code! It is computing something! What is going on?

Perhaps it is in Haskell where this is clearest. To use call-cc, one must import Control.Monad.Cont. And there we see it already: the control operator is only available in a monad. The sequencing of operations which a monad is designed to provide is critical for taming call-cc from the point of view of lazy functional programming. In the words of Simon Peyton Jones (from these notes):
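For concreteness, here is a minimal example of callCC from Control.Monad.Cont (the particular function is my own toy, not from any library): the captured continuation is used to escape a whole computation early, and the control effect is fenced inside the monad.

```haskell
import Control.Monad (foldM)
import Control.Monad.Cont   -- from the mtl package

-- Multiply a list of numbers, using the continuation captured by callCC
-- to escape the entire fold the moment a zero is encountered.
productC :: [Int] -> Int
productC xs = runCont (callCC go) id
  where
    go exit = foldM step 1 xs
      where step _   0 = exit 0          -- jump out, discarding the rest
            step acc n = return (acc * n)

main :: IO ()
main = print (productC [1, 2, 0, 5])     -- prints 0
```

Note that `exit` is only usable inside the Cont monad: the sequencing the monad imposes is exactly what keeps this jump from interacting badly with lazy, pure code outside it.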

Indeed it’s a bit cheeky to call input/output “awkward” at all. I/O is the raison d’être of every program — a program that had no observable effect whatsoever (no input, no output) would not be very useful.

And that’s just where the interaction of control operators with pure functional programming seems strange. Phil Wadler, in his well-known dramatization of the computational content of the law of the excluded middle, has the devil give a man a choice: either the devil pays him one billion dollars, or if the man can pay the devil one billion dollars, the devil will grant him one wish of whatever he wants. The man accepts the choice, and the devil goes with option 2. After a life of greed and antisocial accumulation, the man presents the devil with one billion dollars, hoping for his wish. The devil (using a control operator, not that they are intrinsically evil) goes back in time, so to speak, to change the offer to option 1, returning immediately to the man his one billion dollars.
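Wadler’s story can be rendered directly with callCC. What follows is my own sketch of a standard encoding: the devil first hands over option 2, the wish-granting function; if that function is ever applied, the captured continuation rewinds to the original offer and delivers option 1 instead.

```haskell
import Control.Monad.Cont   -- from the mtl package

-- The law of excluded middle in the continuation monad: return the
-- "wish" function (option 2); if it is ever applied, jump back to the
-- original offer and return the money (option 1) instead.
excludedMiddle :: Cont r (Either a (a -> Cont r b))
excludedMiddle = callCC $ \k -> return (Right (\a -> k (Left a)))

story :: Int
story = flip runCont id $ do
  offer <- excludedMiddle :: Cont Int (Either Int (Int -> Cont Int ()))
  case offer of
    Left billion -> return billion         -- option 1: the devil pays
    Right wish   -> do wish 1000000000     -- the man pays; time rewinds
                       return 0            -- never reached

main :: IO ()
main = print story   -- prints 1000000000
```

Running `story` takes the Right branch first; the call to `wish` invokes the continuation, so the case is re-entered with `Left 1000000000` and the man gets his money back, just as in the story.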

I am reaching the conclusion that it is not the non-canonicity of the law of excluded middle that is the locus of non-constructivity. It is the lack of a notion of the progression of time. The devil (indeed, even God Himself) cannot change the past — despite numerous Hollywood offerings (Back to the Future, anyone?) based on the idea of changing the past — but without some restrictions, control operators can revert to an earlier time, thus potentially undoing some of the effects that have already been carried out. As Peyton Jones is saying, if the computation just spins its wheels and does not interact at all with the outside world, then there is not really any notion of time or cause and effect which could be violated by the use of a control operator.

Another way to think of it: classical logic is the language of mathematics, where everything is considered as a static entity (notwithstanding the fact that dynamic entities like computations can be statically described). Constructive logic is supposed to have a dynamic character: it should capture the idea, somehow, that the state of information can change over time.

So where I am thinking to go with all this on the language-design side is, rather than developing a type theory based on bi-intuitionistic logic (which we have been hard at work on for many months now, and which is likely a good research project anyway), to develop instead a type theory based on classical modal logic, where classical reasoning, and its corresponding type theory with full control operators, can be used at any given time, but not across time. Reasoning (or computation) across time will be governed by the rules of a modal logic, likely S4 for reflexivity and transitivity of the modality. Observable operations like input and output will require stepping into the future, from which no control operator can return the flow of control. Such a type theory would provide a new approach to pure functional programming with effects: at any given instant of time, one could perform reads and writes (for example), possibly backtracking with control operators to cancel some of them out. But when one wants to step forward in time, the most recent operations (particularly writes) are committed, and become irrevocable. Such a type theory would hold out the promise of combining classical reasoning (at a particular time) with constructive computation (across time). If we think of general recursion as computation over time — with each general recursive call requiring a step into the future — we can even combine Turing-complete computation with logically sound classical reasoning in this way. That would be almost the holy grail of verified programming I have been working towards for the past 7 years or so: a single language supporting sound classical reasoning and general-purpose programming.
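I do not have this type theory yet, but the flavor of “commit on stepping forward” can be sketched today with monad transformers. Everything below is an invented toy of my own, not the proposed system: writes go into a buffer, a snapshot-restoring variant of callCC can cancel buffered writes within the current instant, and a step operation commits the buffer irrevocably.

```haskell
import Control.Monad.Cont    -- from the mtl package
import Control.Monad.State

-- A toy model of classical control within an instant: writes are
-- buffered, backtracking can cancel the buffer, stepping forward
-- commits it. All names are invented for illustration.
data W = W { committedLog :: [String], buffer :: [String] }

type Instant r = ContT r (State W)

write :: String -> Instant r ()
write s = lift (modify (\w -> w { buffer = buffer w ++ [s] }))

-- A callCC that snapshots the buffer and restores it when the escape
-- continuation is invoked: control can cancel writes from the current
-- instant, but never touches the committed log.
callCC' :: ((a -> Instant r b) -> Instant r a) -> Instant r a
callCC' f = do
  saved <- lift (gets buffer)
  callCC $ \k ->
    f (\a -> lift (modify (\w -> w { buffer = saved })) >> k a)

-- Step into the future: commit the buffer. No control operator built
-- from callCC' can reach back across this point.
step :: Instant r ()
step = lift (modify (\w -> W (committedLog w ++ buffer w) []))

run :: Instant () a -> [String]
run m = committedLog (execState (runContT (m >> step) (\_ -> return ()))
                                (W [] []))

demo :: [String]
demo = run $ do
  write "hello"
  step                          -- "hello" is now irrevocable
  callCC' $ \undo -> do
    write "oops"
    _ <- undo () :: Instant () ()   -- backtrack: cancels buffered "oops"
    return ()

main :: IO ()
main = print demo               -- prints ["hello"]
```

In `demo`, the backtracking inside the second instant erases “oops” from the buffer, but the “hello” committed by the earlier `step` survives: the control operator can revise the present, not the past.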

So to conclude: what is so nonconstructive about classical logic? I believe it is not the failure of canonicity, but rather the lack of a notion of progression of time. Devising type theories with this perspective in mind may bring new solutions to some old conundrums of both type theory and functional programming. We will see.