
Everyone knows classical logic is nonconstructive. If a classical principle like \forall x. \phi(x) \vee \neg \phi(x) were true constructively, we would have an effective (i.e., computable) way to tell, for any value x, whether \phi(x) is true or false. When \phi(x) expresses a property like “the Turing machine coded by natural number x halts”, we can see that \forall x.\phi(x) \vee \neg \phi(x) cannot be constructively true in general. Case closed.

But a funny thing happened on the way from Amsterdam: computational interpretations of classical type theory were devised, like the \lambda\mu-calculus of Parigot, among quite a few others.  These type theories have term notations for proofs of classical logic, and give a reduction semantics showing how to simplify proofs to normal forms, similar to the standard reduction semantics for good old \lambda-calculus.  The reduction semantics is perfectly effective, so one really can compute — or so it would seem — with terms in these type theories.  If the essence of constructivism is computation, then it would seem that classical logic is constructive.  Indeed, titles like that of Girard’s 1991 article “A new constructive logic: classical logic” would suggest this.

But you know, maybe it is intuitionistic logic which is nonconstructive.  Come again?  Well, for the past half year or so several of us here at Iowa have been studying so-called bi-intuitionistic logic.  This is a logic that extends standard propositional intuitionistic logic with an operator called subtraction, which is dual to implication.  There are a number of papers about this topic I could recommend.  Maybe the easiest introduction is this one by Goré, although we ultimately followed the approach in this paper by Pinto and Uustalu.

Recall that in the Kripke semantics for propositional intuitionistic logic, the implication A \to B is true iff in all future worlds where A is true, B is also true.  The subtraction A - B has a dual semantics: it is true iff there is an earlier world where A is true and B is false.  In intuitionistic logic, one usually defines \neg A as A \to \perp; this negation is true iff A is false in all future worlds.  In bi-intuitionistic logic, one can also define a second kind of negation, using subtraction: \sim A means \top - A, and it is true iff there exists an earlier world where A is false.

With this definition, we start to see classical principles re-emerging.  For example, A \vee \sim A is valid, because in any world w of any Kripke model, either A is true in w, or there is some earlier world (possibly w itself) where A is false.  Valid formulas like A \vee \sim A violate the disjunction property of intuitionistic logic: the disjunction is valid, yet neither disjunct A nor \sim A is valid on its own.  The disjunction property, or in programming-languages terms, canonicity, would seem to be as much a hallmark of constructive reasoning as the law of excluded middle is of classical reasoning.  Yet here we have a conservative extension of intuitionistic logic, based on exactly the same semantic structures, where a form of excluded middle holds and canonicity fails.
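To summarize the semantic clauses just described, write w \models A for “A is true in world w”, and w \le v for “v is a future world of w”:

    w \models A \to B iff for all v with w \le v, if A is true in v then B is true in v
    w \models A - B iff there is some v with v \le w where A is true and B is false
    w \models \neg A iff for all v with w \le v, A is false in v (taking \neg A to be A \to \perp)
    w \models \sim A iff there is some v with v \le w where A is false (taking \sim A to be \top - A)

Implication and the usual negation quantify over the future; subtraction and the second negation quantify over the past.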

What are we to make of this?  Let us think more about our Kripke semantics for (bi-)intuitionistic logic.  In this semantics, we have a graph of possible worlds, and we define when a formula is true in a world.  Formulas that are true stay true (the semantics ensures this), but formulas which are false in one world can become true in a later world.  The critical point is that we have, in a sense, two notions of truth: truth in a world, and truth across time (or across possible worlds).  Truth in a world is usually thought of as being defined classically.  Maybe it is better to say that the modality with which it is defined is not made explicit.  A textbook definition of Kripke semantics is making use of some background meta-language, and I think it is fair to say that without further explicit stipulation, we may assume that background meta-language is the language of informal classical set theory.  So for any particular formula \phi, either \phi holds in world w, or else it does not.  This is, in fact, what justifies validity of A \vee \sim A.  Indeed, to construct a version of Kripke semantics where this is not valid would seem to require something like an infinitely iterated Kripke semantics, where truth in a world is constructively defined (the infinite iteration would be to ensure that the notion of constructive definition being used for truth in a world is the same as the one for truth across worlds).  An interesting exercise, perhaps!  But not one that will validate the laws of bi-intuitionistic logic, for example, where A \vee \sim A is indeed derivable.

So what is it that makes a logic constructive?  Let’s come back to the case of computational classical type theory.  The term constructs that were discovered to correspond to classical principles were not so exotic: they were actually forms of familiar control operators, like call-cc from functional programming!  So these constructs were already in use in the wild; only the connection to logic was a (remarkable!) discovery.  Every day, the working Scheme programmer, or Standard ML programmer, or Haskell programmer has at their disposal constructs which correspond to nonconstructive reasoning.  But they are writing code!  It is computing something!  What is going on?
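The best-known instance of this correspondence, due to Griffin, is that call-cc itself can be typed by Peirce’s law, ((A \to B) \to A) \to A, a classical tautology with no intuitionistic proof.  In Haskell, this is visible in the type of callCC; here is a minimal sketch (safeDiv is my own toy example, not drawn from any of the works cited here):

    import Control.Monad (when)
    import Control.Monad.Cont

    -- callCC's type is Peirce's law, with both sides wrapped in the Cont monad:
    --   callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a

    -- 'escape' is the captured continuation; invoking it abandons the rest
    -- of the computation, returning to the point where it was captured.
    safeDiv :: Int -> Int -> Cont r (Maybe Int)
    safeDiv x y = callCC $ \escape -> do
      when (y == 0) (escape Nothing)   -- non-local exit
      return (Just (x `div` y))

    main :: IO ()
    main = do
      print (runCont (safeDiv 10 2) id)   -- prints Just 5
      print (runCont (safeDiv 10 0) id)   -- prints Nothing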

Perhaps it is in Haskell where this is clearest.  To use call-cc, one must import Control.Monad.Cont, as in the sketch above.  And there we see it already: the control operator is only available in a monad.  The sequencing of operations which a monad is designed to provide is critical for taming call-cc from the point of view of lazy functional programming.  In the words of Simon Peyton Jones (from these notes):

Indeed it’s a bit cheeky to call input/output “awkward” at all. I/O is the raison d’être of every program — a program that had no observable effect whatsoever (no input, no output) would not be very useful.

And that’s just where the interaction of control operators with pure functional programming seems strange.  Phil Wadler, in his well-known dramatization of the computational content of the law of the excluded middle, has the devil give a man a choice: either the devil pays him one billion dollars, or, if the man can pay the devil one billion dollars, the devil will grant him one wish of whatever he wants.  The man accepts the choice, and the devil goes with option 2.  After a life of greed and antisocial accumulation, the man presents the devil with one billion dollars, hoping for his wish.  The devil (using a control operator; not that control operators are intrinsically evil) goes back in time, so to speak, to change the offer to option 1, returning immediately to the man his one billion dollars.

I am reaching the conclusion that it is not the non-canonicity of the law of excluded middle that is the locus of non-constructivity.  It is the lack of a notion of the progression of time.  The devil (indeed, even God Himself) cannot change the past — despite numerous Hollywood offerings (Back to the Future, anyone?) based on the idea of changing the past — but without some restrictions, control operators can revert to an earlier time, thus potentially undoing some of the effects that have already been carried out.  As Peyton Jones is saying, if the computation just spins its wheels and does not interact at all with the outside world, then there is not really any notion of time or cause and effect which could be violated by the use of a control operator.

Another way to think of it: classical logic is the language of mathematics, where everything is considered as a static entity (notwithstanding the fact that dynamic entities like computations can be statically described).  Constructive logic is supposed to have a dynamic character: it should capture the idea, somehow, that the state of information can change over time.

So here is where I am thinking to go with all this on the language-design side: rather than developing a type theory based on bi-intuitionistic logic (which we have been hard at work on doing for many months now, and which is likely a good research project anyway), to develop instead a type theory based on classical modal logic, where classical reasoning, and its corresponding type theory with full control operators, can be used at any given time, but not across time.  Reasoning (or computation) across time will be governed by the rules of a modal logic, likely S4 for reflexivity and transitivity of the modality.  Observable operations like input and output will require stepping into the future, from which no control operator can return the flow of control.

Such a type theory would provide a new approach to pure functional programming with effects: at any given instant of time, one could perform reads and writes (for example), possibly backtracking with control operators to cancel some of them out.  But when one wants to step forward in time, the most recent operations (particularly writes) are committed, and become irrevocable.  Such a type theory would hold out the promise of combining classical reasoning (at a particular time) with constructive computation (across time).  If we think of general recursion as computation over time — with each general recursive call requiring a step into the future — we can even combine Turing-complete computation with logically sound classical reasoning in this way.  That would be almost the holy grail of verified programming I have been working towards for the past 7 years or so: a single language supporting sound classical reasoning and general-purpose programming.
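To give a flavor of the intended programming model, here is a deliberately tiny Haskell approximation (the names instant and step are my own, and plain Cont and IO are only crude stand-ins for the modal type theory being imagined): within an instant, a control operator can still revise pending choices; once step performs observable output, nothing can jump back before it.

    import Control.Monad.Cont

    -- Within one instant: full control operators are available, so a
    -- pending choice can still be revised by invoking a continuation.
    instant :: Cont r Int
    instant = callCC $ \revise -> do
      let attempt = -3
      -- Backtrack: reject the attempt and answer with a default instead.
      if attempt < 0 then revise 0 else return attempt

    -- Stepping into the future: the result is committed observably.
    -- After this print, no control operator can revoke what was written.
    step :: IO ()
    step = print (runCont instant id)   -- prints 0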

So to conclude: what is so nonconstructive about classical logic?  I believe it is not the failure of canonicity, but rather the lack of a notion of progression of time.  Devising type theories with this perspective in mind may bring new solutions to some old conundrums of both type theory and functional programming.  We will see.

3 Comments

  1. It is quite interesting to think about what the “computational” meaning of negation would be. Indeed, classical logic lives inside intuitionistic logic, in the land of double negation. For example, the law of excluded middle is valid under double negation: ~~ (p \/ ~p). Double negation can be read as “undeniable”, thus consistent; we just don’t know how to realize it. It’s a fantasy land where all consistent things are realized. Still, double negation might be understood and computationally interpreted in a useful way. Interpreting negation as a continuation looks interesting. The set-theoretic interpretation of False would be the empty set, or zero computation. At the same time, False can be interpreted as the generator of everything (from it, anything can be proved). So it seems suitable to interpret False as some kind of infinite computation. Also, the LEM can be freely used when the goal is to prove False in constructive logic (since its double negation is provable). I wonder what the computational analogy for that would be. I’ll look forward to hearing more about computing with negations!
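    As a quick check, the intuitionistic proof of ~~ (p \/ ~p) is a two-line Haskell program, with Void playing the role of False (a standard encoding; the name lemDoubleNeg is mine):

        import Data.Void (Void)

        type Not a = a -> Void

        -- Given a refutation k of (p \/ ~p), feed it the right injection;
        -- if k is ever handed an actual p, feed it the left injection instead.
        lemDoubleNeg :: Not (Not (Either p (Not p)))
        lemDoubleNeg k = k (Right (\p -> k (Left p)))

    Note that k is used twice, which is exactly the backtracking flavor of the control-operator reading of classical logic.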

  2. As you might recall, I once said (though I wasn’t the first to say it) that constructive logic is an extension of classical logic, so it is not surprising to me that there are terms with normal forms corresponding to classical proofs, since every classical theorem is a perfectly valid constructive theorem. What makes constructive logic constructive is the two operators that are not in classical logic: the constructive disjunction and the constructive existential. Proofs of theorems with these operators contain a witness, in the constructive-existential case, and a tag determining which branch of the disjunction holds, in the constructive-disjunction case. Really, these two constructive-logic constructs are honest-to-goodness data structures for programming with, and if you are going to add these operations you might as well also add the rest of the inductive data types, constructive natural numbers, constructive reals, etc., and get a dependently typed programming language.
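    As a toy illustration of the witness and the tag (my own example, with an assumed encoding of the witness): a constructive proof that every integer is even or odd must say which case holds, and must produce the witness k:

        -- Left k asserts n = 2*k; Right k asserts n = 2*k + 1.
        evenOrOdd :: Int -> Either Int Int
        evenOrOdd n
          | even n    = Left  (n `div` 2)
          | otherwise = Right (n `div` 2)

    A classical proof of the same disjunction need not tell us which branch holds, let alone produce k.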

  3. Hi, Russell. If one takes the perspective that classical logic is a fragment of constructive logic — and I think we have to agree that, like what I am advocating, this is contrary to received understandings of these systems (not many will agree with the statement that every classical theorem is a constructive theorem, for example) — then I agree that the classical type theories I am discussing omit these two new operators (of constructive disjunction and constructive existential quantification).

    The main realization I have been coming to is that one does not need those two operators for effective programming. If you want to see terms in normal form as the final results of computation, then my position is pretty untenable. But it is more realistic to think about programs interacting with their environment via input and output channels. Imagine these channels allow one to send boolean values only, drawn from some primitive type of booleans. This is, of course, not so unrealistic for computing with binary encodings. So we have a type bool, and primitive (not encoded) values tt and ff of that type. The only closed terms of type bool, in the computational classical type theories I have in mind, are tt and ff. So we can never send (or receive, though that seems less important) a non-canonical bool on a channel. There is thus no need to ensure canonicity generally, since the motivation for canonicity is just to ensure that normal terms are values of the expected forms — and we don’t need that if our only observable type is this primitive bool type. So we can dispense with the machinery required to provide those two additional constructive operators. There are some advantages, for the design of the language, to doing so.

    One final point related to your work: you (and others, like Geuvers) have advocated using a double-negation monad to embed classical logic within constructive logic. There is likely some connection with what I am proposing, which could be seen as embedding classical logic within a modal logic. Indeed, it seems likely we have a monadic structure from the modalities. For example, we have a return of type A \to \diamond A, and we have a bind of type

    \diamond A \to \Box (A \to \diamond B) \to \diamond B

    (note the use of \Box for the second premise; this formula would not be valid without it).
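    In Haskell types, in a deliberately degenerate model where both modalities are mere wrappers (my own sketch, purely to check that the types line up; a real model would index types by times or worlds):

        -- Degenerate stand-ins for the modalities:
        newtype Box a = Box a   -- \Box A: A holds at all future times
        newtype Dia a = Dia a   -- \diamond A: A holds at some future time

        ret :: a -> Dia a
        ret = Dia

        bind :: Dia a -> Box (a -> Dia b) -> Dia b
        bind (Dia a) (Box f) = f a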

    Aaron

