Last week I had the great pleasure of visiting the amazing PL group at Indiana University. They are doing inspiring research across many different areas of programming languages. It was great. The collective knowledge there was very impressive, and I got a lot of great references to follow up on now that I am back.
I gave two talks: one in the logic seminar, and one for the PL group. In the first talk, one issue I discussed was an argument, originally sketched by Michael Dummett and developed more fully by Charles Parsons in a paper called “The Impredicativity of Induction” (a revised version of this paper appears in the book “Proof, Logic, and Formalization”, edited by Michael Detlefsen). This argument claims to show that induction is impredicative. Impredicativity, as many QA9 readers well know, is the property of a definition D of some mathematical object x that D refers to some collection or totality of which x itself is a member. Philosophical constructivists must reject impredicative definitions, because they believe that definitions construct or create the entities they define, and hence cannot presuppose that those entities already exist in the course of defining them. This kind of philosophical constructivism must be distinguished from a kind of formal constructivism a type theorist is likely to care about, which I identify with a canonical forms property: every inhabitant of a type A should have a canonical form which is part of the essential nature of type A. Computational classical type theory violates this formal canonicity property, because sum types like A + B are inhabited by terms which are neither left nor right injections.
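To make the canonicity failure concrete, here is a small sketch of my own (not from the talk), in Haskell, using `Control.Monad.Cont` to encode a classical “proof” of A ∨ ¬A in the style of call/cc, with ¬A represented as a continuation-accepting function. The inhabitant `em` first presents itself as a left injection, but if the supplied refutation is ever applied, control jumps back and the same inhabitant now presents a right injection, so it has no stable canonical form:

```haskell
import Control.Monad.Cont

-- A classical inhabitant of A \/ not A, via call/cc.
-- "not A" is encoded as a -> Cont r b: a function that never returns normally.
em :: Cont r (Either (a -> Cont r b) a)
em = callCC $ \k -> return (Left (\a -> k (Right a)))

-- On the first pass we receive the Left (refutation) branch; applying the
-- refutation to a witness jumps back to the case split with the Right branch.
demo :: Int
demo = flip runCont id $ do
  e <- em :: Cont Int (Either (Int -> Cont Int ()) Int)
  case e of
    Right n     -> return (n + 1)         -- second pass: we now "have" an Int
    Left refute -> refute 41 >> return 0  -- first pass: backtracks with 41
```

Evaluating `demo` takes the `Left` branch first, then re-enters with `Right 41` and returns 42; the “disjunction” changes which injection it is depending on how it is observed, which is exactly the failure of canonicity in the text above.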
Here is Parsons’s argument as I understand (or maybe extrapolate?) it. A philosophical constructivist must reject impredicative definitions, such as the kind one gives for the natural numbers in second-order logic (see my talk slides for this example definition). As an alternative, one may try to say that the meaning of the natural number type is determined by introduction and elimination rules (similarly to the meanings of logical connectives). The introduction rules are about the constructors of the type. And the elimination rule is an induction principle for the type. But then, Parsons and Dummett point out, this explanation of the natural number (or other) inductive type is not any better off with respect to impredicativity than the second-order definition, because the induction principle allows us to prove any property P of n, assuming as premises that Nat n holds and that the base and step cases of induction are satisfied for P. But the predicate P could just as well be Nat itself, or some formula which quantifies over Nat. The latter seems to provoke worries, which I do not understand. It seems already bad enough that P could be Nat. So just moving from a single formula defining Nat to a schematic rule defining Nat cannot save one from the charge of impredicativity. Hence, for avoiding impredicativity, there is no reason to prefer one approach (specifically, the rule-based approach) over the other.
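For concreteness, the standard impredicative second-order definition of the natural numbers (the kind referenced above) and the rule-based induction principle can be sketched side by side; the comment notes where Parsons's instantiation bites:

```latex
% Second-order definition: x is a natural number iff x falls under every
% property P that holds of 0 and is closed under successor.
\[
\mathsf{Nat}(x) \;:=\; \forall P.\;
  \bigl(P(0) \wedge \forall y.\,(P(y) \rightarrow P(\mathsf{s}(y)))\bigr)
  \rightarrow P(x)
\]

% Rule-based alternative: induction as the elimination rule for Nat.
\[
\frac{\mathsf{Nat}(n) \qquad P(0) \qquad \forall y.\,(P(y) \rightarrow P(\mathsf{s}(y)))}
     {P(n)}
\]

% Parsons's point: nothing prevents instantiating P with Nat itself (or with
% a formula quantifying over Nat), so the schematic rule ranges over a
% totality that includes Nat -- the same impredicativity as the definition.
```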
The alternative to this is to reject the idea that we must give a foundational explanation of the natural-number type (or predicate). We just accept the idea of numbers and other datatypes as given, and then induction is a consequence — not part of some definition of numbers, which we are refusing to give. This is a coherent way to avoid impredicativity, but at the cost of having to accept numbers with no deeper analysis — something that a logicist might not like, for example (though I do not know for sure how a logicist would likely view this situation).