
Yours truly is interviewed now on the wonderful Type Theory Podcast.  Very exciting!

This post is about double negation in constructive type theory.  Suppose you know that \neg(\neg A) is true; i.e., (A \to \perp)\to \perp (writing \perp for false).  In classical logic, you then know that A is true.  But of course, you are allowed no such conclusion in intuitionistic logic.  So far, so familiar.  But let us look at this now from the perspective of realizability.  Realizability semantics gives the meaning of a type as a set of some kind of realizers.  Let us consider untyped lambda calculus terms (or combinators) as realizers for purposes of this discussion.  Then the crucial clauses of the semantics for our purposes are

  • t \in [\![ A \to B ]\!]\ \Leftrightarrow\ \forall t' \in [\![A]\!].\ t\ t' \in [\![B]\!], and
  • \neg(t \in [\![\perp]\!]) for every term t; that is, [\![\perp]\!] is empty.

Let us work through, then, when t\in[\![(A\to\perp)\to\perp]\!] according to this definition:

\begin{array}{ll} t\in[\![(A\to\perp)\to\perp]\!] & \Leftrightarrow \\ \forall t'\in[\![A\to\perp]\!].\ t\ t'\in[\![\perp]\!] & \Leftrightarrow \\ \forall t'.\ \neg(t'\in[\![A\to\perp]\!]) & \Leftrightarrow \\ \forall t'.\ \neg(\forall t''\in[\![A]\!].\ t'\ t''\in[\![\perp]\!]) & \Leftrightarrow \\ \forall t'.\ \neg(\forall t''.\ \neg(t''\in[\![A]\!])) & \Leftrightarrow \\ \exists t''.\ t''\in[\![A]\!] & \ \end{array}

(The third and fifth lines use the fact that no term realizes \perp; the final step is classical reasoning at the metalevel, noting that the body no longer depends on t'.)

So this is saying that if [\![A]\!] is nonempty, then [\![(A\to\perp)\to\perp]\!] is universal (contains all terms); and otherwise (if [\![A]\!] is empty), [\![(A\to\perp)\to\perp]\!] is empty.
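
In symbols, writing \Lambda for the set of all terms:

[\![(A\to\perp)\to\perp]\!] \;=\; \left\{\begin{array}{ll} \Lambda & \mbox{if } [\![A]\!]\neq\emptyset \\ \emptyset & \mbox{otherwise} \end{array}\right.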

So if you have a function F, say, realizing (\neg\neg A)\to B for some B, what this means is that F can make use of the fact that A is true, but not of how it is true.  If F were simply given a proof of A, then it could compute with this proof.  But here, with \neg\neg A, the proof of A is hidden behind an existential quantifier (as we deduced above), and so it cannot be used.  Only someone reasoning at the metalevel about F can make use of the fact that A is true, when checking that the body of F realizes B.

So semantically, \neg\neg A is news you can’t use: we know A is true, but we have no access to how it is true.  For example, if A is a disjunction, you cannot case split, within the type theory, on the disjuncts.  Only at the metalevel are you entitled to consider the behavior of a term in light of the fact that one or the other of the disjuncts must hold.
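
To see this one-way traffic concretely, here is a minimal Haskell sketch (the names dni and dne are mine, for illustration), using Void from Data.Void for \perp:

import Data.Void (Void)

-- Double-negation introduction: given a proof of A, we can refute
-- any refutation of A.  This direction is constructive.
dni :: a -> ((a -> Void) -> Void)
dni a k = k a

-- Double-negation elimination has no total implementation: all we
-- can do with f is apply it to a refutation of a, which is exactly
-- what we do not have.  The proof of a is news we can't use.
-- dne :: ((a -> Void) -> Void) -> a
-- dne f = ???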

Greetings, QA9 reader(s).  It has been a while since I posted, for which I offer the usual excuses.  I have a backlog of interesting technical things I’m hoping to write about sometime pretty soon: failure of subject reduction in lambda2 with eta-contraction, which I learned about maybe a year ago but did not record in a post; also intriguing things I am reading about deep inference, although there the problem is the stuff is, well, deep and I haven’t understood it well enough yet to have too much to say.  I also had high hopes of posting about the super fun summer I had here at U. Iowa working on Cedille with great new U. Iowa graduate students Chad Reynolds, Ananda Guneratne, and Richard Blair; great visiting Indiana doctoral student Matthew Heimerdinger; and inspiring visitors Stéphane Graham-Lengrand, Olivier Hermant, and Vincent St-Amour.  In a perfect world I would have some pictures to post of all this stimulating interaction, but here I am photo-less.

But what impels me to write today is getting bitten again by performance problems with well-founded recursion in Agda.  Just for fun, I will tell you the story.  We are working on this new interactive theorem prover called Cedille based on dependently typed lambda encodings.  You can read about this in the project report we just revised to submit for the TFP 2016 post-proceedings.  Thanks to the determination of Matthew (Heimerdinger), we have a highlighting mode — actually several — for Cedille in our emacs interface.  One funny wrinkle was that we could not highlight comments, because our parser was just dropping them, and we did not want to store information about them in our abstract syntax trees (which would then become very cluttered).  We hit upon a daring (for us, anyway) solution: parse input files a second time, with a different grammar that recognizes just whitespace and comments.  It is nice to find the whitespace, too, in addition to the comments, since then we can set highlighting to the default for those spans of the text in the emacs mode.  Otherwise, with the way the interaction between our emacs frontend and Agda backend works (see that revised project report), if you type in whitespace you will likely get a non-default highlighting color, from the span containing that whitespace.
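
To give the flavor of that second pass, here is a hypothetical Haskell sketch (not Cedille's actual grammar or span format, and assuming Agda-style "--" line comments): it scans the source and emits (start, end, kind) character spans for whitespace and comments only, skipping everything else.

import Data.Char (isSpace)

-- The only two kinds of span the second pass reports.
data SpanKind = Whitespace | Comment deriving Show

-- Scan the input, emitting (start, end, kind) character spans.
scanSpans :: String -> [(Int, Int, SpanKind)]
scanSpans = go 0
  where
    go _ [] = []
    go i s@(c : _)
      | isSpace c =
          let (ws, rest) = span isSpace s
              j = i + length ws
          in (i, j, Whitespace) : go j rest
    go i ('-' : '-' : s) =
          let (cm, rest) = break (== '\n') s
              j = i + 2 + length cm
          in (i, j, Comment) : go j rest
    go i (_ : s) = go (i + 1) s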

Ok, so we (Richard) wrote the grammar for this, and we started producing comment and whitespace spans to send to the frontend, only to see some very sluggish performance on a couple of longer files.  We misdiagnosed this as coming from doing lots of string appends (on Haskell's slow strings, i.e., lists of characters), and I dutifully changed the code to print the spans to a Haskell Handle as they were produced, rather than computing one big string for all the spans and then printing that.  Distressingly, this optimization, which took maybe an hour or so to implement, including finding the right Haskell plumbing to hook up to, did not improve performance.  Well shucks!  Let's get out our handy Haskell profiler and see what is going wrong.  Now sadly, I am still on Agda 2.4.2.4, which lacks the nifty agda-ghc-names tool that is part of Agda 2.5.1.  That tool translates the Haskell names generated by the Agda compiler back to names from the Agda source.
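
The Handle change just described amounted to something like the following sketch (Span and the output format here are made up for illustration; the real Cedille code differs):

import System.IO (Handle, hPutStrLn)

-- A made-up span type: start and end character positions plus a label.
type Span = (Int, Int, String)

-- Print one span straight to the output handle ...
hPutSpan :: Handle -> Span -> IO ()
hPutSpan h (start, end, label) =
  hPutStrLn h ("[" ++ label ++ "," ++ show start ++ "," ++ show end ++ "]")

-- ... and stream a whole list of spans, never building one big String.
hPutSpans :: Handle -> [Span] -> IO ()
hPutSpans h = mapM_ (hPutSpan h)

But, as it turned out, agda-ghc-names was not needed to see the following: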

COST CENTRE   MODULE                      %time  %alloc

d179.\.\.\    MAlonzo.Code.QnatZ45Zthms    35.9    18.8
d179.\        MAlonzo.Code.QnatZ45Zthms    11.6    48.8
d191.\        MAlonzo.Code.QnatZ45Zthms     9.6     7.5
d179          MAlonzo.Code.QnatZ45Zthms     5.7     0.0
d191          MAlonzo.Code.QnatZ45Zthms     5.2     0.0
d179.\.\      MAlonzo.Code.QnatZ45Zthms     4.7    11.3

Oh pondwater.  Over half the time is being spent in functions in nat-thms.agda in the IAL (the Iowa Agda Library).  That file contains, as the name suggests, just theorems about the natural numbers — not code I expect to be running intensively during execution of the Cedille backend.  But I have seen this before, as I noted.  These are lemmas getting called again and again during a well-founded recursion, to show that certain less-than facts hold.  Which well-founded recursion was it?  It turns out it was in the function that converts a natural number to a string.  We are doing that operation many times, to print character positions in the input Cedille source file.  Just removing the well-founded recursion and marking the nat-to-string function with NO_TERMINATION_CHECK reduced the running time to less than half what it was.
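
For reference, the recursion at issue has the following shape; this is a plain Haskell rendering of a digit-by-digit conversion (an assumption about the shape, not the IAL's actual code).  The recursive call is on n `div` 10, which is smaller than n but not a structural subterm of it, so Agda's termination checker rejects the direct definition; the well-founded version must establish n `div` 10 < n at every call, and in compiled code those less-than lemmas actually run.

-- Convert a natural number (n >= 0 assumed) to its decimal digits.
-- The recursion is on n `div` 10: smaller than n, but not a
-- structural subterm of it.
natToString :: Integer -> String
natToString n
  | n < 10    = [digit n]
  | otherwise = natToString (n `div` 10) ++ [digit (n `mod` 10)]
  where
    digit d = toEnum (fromEnum '0' + fromIntegral d)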

The moral of this story is the same as the moral of the previous story: be wary of well-founded recursion in Agda for code you intend to run (as opposed to just reason about).

Have a blessed start of fall!

Last week I had the great pleasure of visiting the amazing PL group at Indiana University.  They are doing so much inspiring work in many different areas of programming languages research.  The collective knowledge there was very impressive, and I came away with a lot of great references to follow up on now that I am back.

I gave two talks: one in the logic seminar, and one for the PL group.  In the first talk, one issue I was discussing was an argument, originally sketched by Michael Dummett and developed more fully by Charles Parsons in a paper called “The Impredicativity of Induction” (you can find a revised version of this paper in the book “Proof, Logic, and Formalization”, edited by Michael Detlefsen).  This argument claims to show that induction is impredicative.  Impredicativity, as many QA9 readers well know, is the property of a definition D of some mathematical object x whereby D refers to some collection or totality of which x is a member.  Philosophical constructivists must reject impredicative definitions, because they believe that definitions construct or create the entities they define, and hence cannot presuppose that those entities already exist in the course of defining them.  This kind of philosophical constructivism must be distinguished from the kind of formal constructivism a type theorist is likely to care about, which I identify with a canonical-forms property: every inhabitant of a type A should have a canonical form which is part of the essential nature of the type A.  Computational classical type theory violates this formal canonicity property, because types like A \vee \neg A are inhabited by terms which are neither left nor right injections.

Here is Parsons’s argument as I understand (or maybe extrapolate?) it.  A philosophical constructivist must reject impredicative definitions, such as the kind one gives for the natural numbers in second-order logic (see my talk slides for this example definition).  As an alternative, one may try to say that the meaning of the natural number type is determined by introduction and elimination rules (similarly to the meanings of logical connectives).  The introduction rules are about the constructors of the type.  And the elimination rule is an induction principle for the type.  But then, Parsons and Dummett point out, this explanation of the natural number (or other) inductive type is not any better off with respect to impredicativity than the second-order definition, because the induction principle allows us to prove any property P of n, assuming as premises that Nat n holds and that the base and step cases of induction are satisfied for P.  But the predicate P could just as well be Nat itself, or some formula which quantifies over Nat.  The latter seems to provoke worries, which I do not understand.  It seems already bad enough that P could be Nat.  So just moving from a single formula defining Nat to a schematic rule defining Nat cannot save one from the charge of impredicativity.  Hence, for avoiding impredicativity, there is no reason to prefer one approach (specifically, the rule-based approach) over the other.
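
For concreteness, the second-order definition alluded to above is presumably the standard one (my rendering; the version on the slides may differ):

\mathit{Nat}(n) \;:=\; \forall P.\ \big( P\,0 \,\wedge\, \forall m.\ (P\,m \to P\,(s\,m)) \big) \to P\,n

Here \forall P ranges over all predicates on the domain, including \mathit{Nat} itself and predicates that quantify over \mathit{Nat}, which is exactly the impredicativity at issue.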

The alternative to this is to reject the idea that we must give a foundational explanation of the natural-number type (or predicate).  We just accept the idea of numbers and other datatypes as given, and then induction is a consequence — not part of some definition of numbers, which we are refusing to give.  This is a coherent way to avoid impredicativity, but at the cost of having to accept numbers with no deeper analysis — something that a logicist might not like, for example (though I do not know for sure how a logicist would likely view this situation).

I am giving a talk this afternoon titled “From Logic with Love” as part of a special class organized by U. Iowa English Prof. Judith Pascoe, where professors from different departments give talks from the perspective of their discipline on the topic “What We Talk About When We Talk About Love”.  The slides are here.  I am also including some links and notes below, for students interested in further reading.

The short story Beginners, by Raymond Carver, can be found here.  This is the original version of the story that was published as “What We Talk About When We Talk About Love”.

Ray Monk’s biography of Bertrand Russell, while loathed by admirers of Russell, is extremely interesting and revealing reading.

Constance Reid’s biography of David Hilbert is also very interesting and enjoyable reading.  It includes a reprint of much of the text of his famous 1900 Paris lecture on future problems of mathematics.

Some Computer Science news articles related to the talk:

Other related articles:

A Man For Others by Patricia Treece is an excellent read about Maximilian Kolbe.  It contains first-hand reports and recollections from many different people who knew the saint.