I was recently talking over lunch with my colleague Cesare Tinelli, with whom I run the Computational Logic Center here at U. Iowa, and was very surprised to find that we had different intuitions about whether quantification over sort bool, where formulas are of sort bool, constitutes higher-order quantification.  On the one hand, it is certainly hard to dispute the idea that higher-order quantification essentially involves quantification over sets.  In higher-order logic, for example as developed in the Q0 system of Peter Andrews (see his enlightening book “An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof”), sets are modeled by their membership predicates.  Quantification over sets (or functions, or predicates) thus involves quantifying over entities of type $A \rightarrow B$, for some types $A$ and $B$.  Merely quantifying over type $o$ (the primitive type of formulas in Q0) is at best a degenerate form of such quantification, and hardly worthy of the term “higher-order”.

Furthermore, we know that validity in higher-order logic, as for first-order logic, is undecidable.  But just adding quantification over sort bool is not enough to obtain undecidability.  The logic of Quantified Boolean Formulas (QBF), for example, extends classical propositional logic with quantification over bool.  The problem of determining truth for a QBF formula is PSPACE-complete (indeed, it is considered the paradigmatic PSPACE-complete problem).  This means that the best known algorithm (and the best most people believe we will ever find) has the same worst-case complexity as the one for SAT, namely exponential time.  In practice, though, QBF problems really do seem to be harder to solve than SAT problems.  In any event, adding quantification over bool to a logic whose validity problem is decidable preserves decidability, and this is no big deal: one can always just try both boolean values — for validity of a universal quantification over bool, both instances must be valid, and for unsatisfiability of an existential, both instances must be unsatisfiable.
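The “try both boolean values” observation can be sketched as a naive QBF evaluator.  The tuple encoding of formulas here is illustrative, not a standard one; the point is just that each quantifier doubles the work, giving a terminating (if exponential-time) decision procedure.

```python
# Naive evaluator for closed quantified boolean formulas.
# Formulas are nested tuples (an illustrative encoding):
#   ("var", name), ("not", f), ("and", f, g), ("or", f, g),
#   ("forall", name, f), ("exists", name, f)

def qbf_eval(f, env=None):
    env = env or {}
    tag = f[0]
    if tag == "var":
        return env[f[1]]
    if tag == "not":
        return not qbf_eval(f[1], env)
    if tag == "and":
        return qbf_eval(f[1], env) and qbf_eval(f[2], env)
    if tag == "or":
        return qbf_eval(f[1], env) or qbf_eval(f[2], env)
    if tag == "forall":  # try both boolean values; both must hold
        return all(qbf_eval(f[2], {**env, f[1]: b}) for b in (False, True))
    if tag == "exists":  # try both boolean values; one must hold
        return any(qbf_eval(f[2], {**env, f[1]: b}) for b in (False, True))
    raise ValueError(f"unknown tag {tag}")

# Example: forall A. exists B. (A or B) and not (A and B)  -- true
phi = ("forall", "A", ("exists", "B",
        ("and", ("or", ("var", "A"), ("var", "B")),
                ("not", ("and", ("var", "A"), ("var", "B"))))))
```

Each quantifier alternation is handled by brute enumeration, so the running time is exponential in the number of quantifiers — consistent with PSPACE-completeness, and clearly decidable.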

So that seems like a pretty good reason for thinking that quantification over bool does not count as higher-order quantification (except maybe in a degenerate sense).  Now here’s why one might be less sure.  Consider an expression like $\forall A:o.\, (A \to A) \to A \to A$.  From the perspective of QBF, this is an uninteresting valid formula.  But let’s take a trip to the Gamma quadrant through that seductive interdimensional portal known as the Curry-Howard isomorphism.  Then this expression is actually a polymorphic type in the type theory System F (second-order lambda-calculus, also sometimes denoted $\lambda 2$).  In System F, this expression is the type of all functions which, given any type $A$, return a function from $A \to A$ to $A \to A$.  In fact, under the Church encoding, this is the type of the natural numbers represented as lambda terms in System F.
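To make the Church encoding concrete, here are the first few Church numerals with their types erased.  (Python is untyped, so this is only an approximation — it cannot express the System F quantifier $\forall A$ — but the underlying lambda terms are the same.)

```python
# Church numerals: the numeral n takes a function f and a start value x,
# and returns f applied n times to x.  In System F these terms all have
# the type  forall A. (A -> A) -> A -> A.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

# Addition: (m + n) applies f m times on top of n applications.
add = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))
```

So an inhabitant of the type $\forall A.\,(A \to A) \to A \to A$ is exactly a natural number in this representation, which is why the “uninteresting valid formula” becomes interesting under Curry-Howard.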

What is the significance of this type-theoretic view?  Well, the results which I noted above for QBF change completely for System F.  Instead of validity being PSPACE-complete, it becomes undecidable!  This is really striking, and not a connection I had ever noticed.  Truth for (classical) quantified boolean formulas is decidable and in PSPACE, but the inhabitation problem for System F is undecidable (see Section 4.4 of Barendregt’s amazingly great “Lambda Calculi with Types” [Citeseer]).  The inhabitation problem is the problem of finding a lambda term that has a given type, and can be thought of as the question of provability of the corresponding quantified boolean formula in constructive logic.  The lambda term corresponds to the proof, and the System F type to the formula it proves.  So constructive provability is undecidable, where classical provability is decidable.  That is quite remarkable.
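A small proof-assistant sketch (in Lean, as one way to make inhabitation tangible): exhibiting a term of a type is exactly giving a constructive proof of the corresponding formula, and some classically valid formulas have no such term.

```lean
-- An inhabitant of ∀ A, (A → A) → A → A: the Church numeral 2 as a proof term.
example : ∀ (A : Prop), (A → A) → A → A :=
  fun A f x => f (f x)

-- By contrast, Peirce's law is a valid QBF (true for all boolean values of
-- A and B), yet it has no inhabitant in pure constructive logic / System F:
--   ∀ (A B : Prop), ((A → B) → A) → A
-- Proving it in Lean requires invoking a classical axiom.
```

This is the decidable/undecidable gap in miniature: classically, truth of such a formula is settled by trying boolean values; constructively, one must search for a proof term, and that search cannot be bounded in general.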

Furthermore, another reason for viewing boolean quantification as higher-order comes from the complexity of normalizing lambda terms which are typable in System F.  The complexity is quite spectacular.  To get a rough feel for it (and my knowledge of the theory of subrecursion is only enough for this, I am afraid): we know that Ackermann’s function, which even for very small input values takes astronomically long to compute an output, can be typed in Goedel’s System T.  In fact, Ackermann’s function does not begin to push the limits of System T, since it can be written using only primitive recursion at order 1.  Let’s say that a primitive recursive term is at order $n$ if all the actual recursions in it compute something (by primitive recursion) of a type whose order is $n$, where base types are order 0, and the order of $A \to B$ is the maximum of the order of $B$ and one plus the order of $A$.  System T supports primitive recursion at any finite order.  So Ackermann’s function, which is completely infeasible to compute in practice for all but a few small inputs, is a relatively easy function for System T.  Now, with that understood, we need only note that System F allows typing vastly more complex functions than are typable in System T.  Since System T can be seen as giving the proof terms for first-order arithmetic, in a similar way as System F does for constructive second-order logic, we see that constructive second-order logic is a more complex theory (in this quantitative sense) than first-order arithmetic.  This again adds support for viewing boolean quantification as higher-order.
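The “primitive recursion at order 1” claim can be sketched as follows (Python standing in for System T terms; the names are mine).  The only recursion used is `iterate`, a primitive recursor, and in the definition of `ackermann` it recurses at the function type $\mathbb{N} \to \mathbb{N}$ — an order-1 type — building each Ackermann level from the previous one.

```python
def iterate(f, n, x):
    """Primitive recursion in its simplest form: compute f^n(x).
    When x and the result are themselves functions, this is
    primitive recursion at order 1."""
    for _ in range(n):
        x = f(x)
    return x

def ackermann(m):
    """Ackermann's function via order-1 primitive recursion:
       A(0)     = successor
       A(m+1)(n) = A(m)^(n+1)(1)
    The recursion on m computes a *function* of type N -> N at each step."""
    def step(g):  # g : N -> N is the previous Ackermann level A(m)
        return lambda n: iterate(g, n + 1, 1)
    return iterate(step, m, lambda n: n + 1)
```

For example, `ackermann(2)` is the function $n \mapsto 2n+3$ and `ackermann(3)` is $n \mapsto 2^{n+3}-3$; already `ackermann(4)(2)` is far beyond what this sketch could ever compute, which is the point about infeasibility above.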

So which is it?  Is boolean quantification first-order (or weaker) or higher-order?  It seems the difference hinges on whether one is considering a constructive or a classical version of the system.  This may seem puzzling, but there are philosophical reasons why it is not surprising: constructive implication is modal in a way that classical implication is not, and hence might be expected to give rise to a qualitatively (if not quantitatively) more complex theory.  But this has to be taken up in another post.