

I haven’t yet started repeating myself — though there’s every chance you’ll hear it here twice — but iteration is the sort of thing one can find just one use after another for. I mean, if you’ve seen it once, you’ve seen it a thousand times: iteration delivers repeatedly. How many times have you iterated to good effect? I say again: is iteration great or what?

OK, got that out of my system. 🙂 I am working on lambda encodings right now, and with Church-encoded data, every piece of data is its own iterator. So the encoding tends to make one think of algorithms in terms of iteration. We have a function f, and a starting point a, and we wish to apply f to a in a nested fashion, n times: f^0(a) = a and f^{n+1}(a) = f(f^n(a)). To multiply numbers N and M, for example, we can iterate the function “add M” on starting point 0, N times. And other natural algorithms have iterative formulations.
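To make the iteration pattern concrete, here is a minimal sketch in Agda (the names iter and mult are mine, and I use the standard library's ℕ rather than a Church encoding, where the numeral itself would play the role of iter):

open import Data.Nat using (ℕ; zero; suc; _+_)

-- iter n f a computes f^n(a): apply f to a, n times
iter : {A : Set} → ℕ → (A → A) → A → A
iter zero    f a = a
iter (suc n) f a = f (iter n f a)

-- multiplication as iteration: mult n m iterates “add m”, n times, from 0
mult : ℕ → ℕ → ℕ
mult n m = iter n (λ k → m + k) 0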

What about division? Usually in total type theories (where uniform termination of every function — that is, termination on all inputs — must be confirmed statically by some termination-checking algorithm or technique), natural-number division is implemented using well-founded recursion. The idea is that to divide x by y, we basically want to see how many times we can subtract y from x until x becomes smaller than y (at which point x is the remainder of the division). So one wants to make a recursive call to division on x – y, and since that quantity is not the predecessor of x (or y), the usual structural decrease demanded by the termination checker is not satisfied. So the usual simple schemes for observing termination statically cannot confirm that division is terminating. And indeed, if y were 0, there would be no structural decrease. So it is not a completely trivial matter. The solution one finds in Coq (Arith/Euclid.v in the standard library for Coq version 8.4) and Agda (Data/Nat/DivMod.agda in the standard library version 0.8) is to use well-founded recursion. This is a somewhat advanced method that uses a generalized inductive type to encode, effectively, all the legal terminating call sequences one could make using a given well-founded ordering. Then we can do structural recursion on an extra argument of this generalized inductive type.
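To give a feel for the encoding, here is a minimal sketch of the standard accessibility predicate (the WfStructBool type appearing in the code below plays this role for the ial's boolean-valued order): a value of type Acc _<_ x packages proofs that everything below x is accessible, so descending call sequences from x become ordinary structural recursions on the Acc value.

-- the generalized inductive type for well-founded recursion: to show x
-- accessible, show every y strictly below x accessible
data Acc {A : Set} (_<_ : A → A → Set) (x : A) : Set where
  acc : (∀ {y} → y < x → Acc _<_ y) → Acc _<_ x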

Well-founded recursion is really quite cool, and it’s amazing to see the power of the type theory in the fact that well-founded recursion is derivable in, not primitive to, the language. Every student of type theory should try walking through the definitions needed for well-founded recursion over, say, the natural-number ordering <. But as elegant and impressive as it is, it’s a pretty heavy hammer to have to get out. For starters, if you want to reason later about the function you defined by well-founded recursion, you are most likely going to have to use well-founded induction in that reasoning. So you find yourself continually setting up these somewhat complicated inductions to prove simple lemmas. A second issue is that at least in Agda, because there is no term erasure explicit in the language, if you write a function by well-founded recursion, you are going to be manipulating these values of the generalized inductive datatype at runtime. I reported earlier on this blog that in my experience this led to a major, major slowdown for running code extracted from Agda. So if you are just doing some formal development to prove a theorem, then well-founded recursion won’t cause you serious problems in Agda. But if you want to extract and run code that uses well-founded recursion, you likely will see major performance issues.
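For the curious, here is roughly how that walk-through goes for the standard library's order, where m < n is defined as suc m ≤ n (a sketch, with names mine; the ial's wf-< used below is the analogous fact for its boolean order):

open import Data.Nat using (ℕ; zero; suc; _≤_; _<_; s≤s)
open import Data.Nat.Properties using (≤-trans)

-- every natural number is accessible under <; the helper descends
-- structurally on the bound n while the measured value m varies
<-wf : ∀ n → Acc _<_ n
<-wf n = acc (go n)
  where
  go : ∀ n {m} → m < n → Acc _<_ m
  go zero    ()
  go (suc n) (s≤s m≤n) = acc (λ k<m → go n (≤-trans k<m m≤n))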

In my standard library for Agda, the version of natural-number division defined by well-founded recursion is in nat-division.agda:


{- a div-result for dividend x and divisor d consists of the quotient q, remainder r, and a proof that q * d + r = x -}
div-result : ℕ → ℕ → Set
div-result x d = Σ ℕ (λ q → Σ ℕ (λ r → q * d + r ≡ x))

div-helper : ∀ (x : ℕ) → WfStructBool _<_ x → (y : ℕ) → y =ℕ 0 ≡ ff → div-result x y
div-helper x wfx 0 ()
div-helper x (WfStep fx) (suc y) _ with 𝔹-dec (x =ℕ 0)
... | inj₁ u = 0 , 0 , sym (=ℕ-to-≡ u)
... | inj₂ u with 𝔹-dec (x < (suc y))
... | inj₁ v = 0 , (x , refl)
... | inj₂ v with (div-helper (x ∸ (suc y)) (fx (∸< {x} u)) (suc y) refl)
... | q , r , p with <ff {x} v
... | p' with ∸eq-swap{x}{suc y}{q * (suc y) + r} p' p
... | p'' = (suc q) , (r , lem p'')
  where lem : q * (suc y) + r + suc y ≡ x → suc (y + q * suc y + r) ≡ x
        lem p''' rewrite
            +suc (q * (suc y) + r) y
          | +comm y (q * (suc y))
          | +perm2 (q * (suc y)) r y = p'''

_÷_!_ : (x : ℕ) → (y : ℕ) → y =ℕ 0 ≡ ff → div-result x y
x ÷ y ! p = div-helper x (wf-< x) y p

This code returns a value of type div-result x y, which contains the quotient q, the remainder r, and a proof that q * y + r equals x. It is not as simple as one would like, due to the use of well-founded recursion.

But we can avoid well-founded recursion for defining division, if we go back to our old friend iteration (“There he is again!” — sorry, I said I had that out of my system, but apparently not quite). We know that we cannot possibly subtract y from x more than x times, if y is not 0. So we can pass division an extra counter argument, which we start out at x. Again we use the div-result type, but this time there is no need for well-founded recursion:


-- divh n x y p1 p2 divides x by y, using the counter n (with x ≤ n)
-- as the structural-recursion budget
divh : (n : ℕ) → (x : ℕ) → (y : ℕ) → x ≤ n ≡ tt → y =ℕ 0 ≡ ff → div-result x y
divh 0 0 y p1 p2 = 0 , 0 , refl
divh 0 (suc x) y () p2
divh (suc n) x y p1 p2 with keep (x < y)
divh (suc n) x y p1 p2 | tt , pl = 0 , x , refl    -- x < y: quotient 0, remainder x
divh (suc n) x y p1 p2 | ff , pl with divh n (x ∸ y) y (∸≤2 n x y p1 p2) p2
divh (suc n) x y p1 p2 | ff , pl | q , r , p = suc q , r , lem
  where lem : y + q * y + r ≡ x
        lem rewrite sym (+assoc y (q * y) r) | p | +comm y (x ∸ y) = ∸+2{x}{y} (<ff{x}{y} pl)

_÷_!_ : (x : ℕ) → (y : ℕ) → y =ℕ 0 ≡ ff → div-result x y
x ÷ y ! p = divh x x y (≤-refl x) p

You can find this in nat-division2.agda. The code is also a bit less cluttered with helper lemmas, although we do still need to require that x is less than or equal to n, in order to rule out the case where we run out of counter budget (n) before we are done dividing x.
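As a quick usage sketch (hypothetical, assuming the ial's fst projection for Σ-types): since =ℕ computes, the side condition 2 =ℕ 0 ≡ ff is discharged by refl, and the quotient is the first component of the result.

-- hypothetical usage; fst is assumed to project the first Σ component
quot : ℕ
quot = fst (7 ÷ 2 ! refl)   -- should normalize to 3 (the remainder, 1, sits in the second component)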

This example shows that sometimes iteration is sufficient for defining functions like division whose natural definition is not structurally recursive. The moral of the story is that we should not forget about iteration. And that is a lesson worth repeating!


Well, I am embarrassed at how late I am in posting the solution to the puzzle I mentioned in my last post. It has been a busy summer with taking care of our sweet new baby at home, and running StarExec development at work. Anyhow, below is a graph with the minimum number of nodes containing every possible legal combination of properties from termination (aka strong normalization), normalization (aka weak normalization), confluence (aka Church-Rosser), and local confluence (aka weak Church-Rosser), and their negations. This graph was found by Hans Zantema, whom I asked about this puzzle by email (he agreed to let me share his solution here). Furthermore, he argues that 11 is the minimal number of nodes, as follows. Out of the 16 possible combinations of the properties SN, WN, CR, and WCR and their negations, we immediately exclude the combinations with SN and ~WN (since SN implies WN) and CR and ~WCR (since CR implies WCR). So there are three legal possibilities for the values of CR and WCR, and three for the values of SN and WN. These are independent, so there are 9 legal combinations of properties. Now, Hans argues, since there is a node X which is SN and ~WCR, there must be two nodes which are SN and CR. For since X is SN but not WCR, it has two children (which are still SN) which cannot be joined. We may assume these children are themselves CR, since otherwise we could repeat this observation and the graph would not be minimal. Similarly, since there is a node which is ~WN and ~WCR, there must be two nodes which are ~WN and CR. Each of these forced pairs contributes one node beyond the 9, so there must be at least 9 + 2 = 11 nodes. And the graph below has 11 nodes. To test your knowledge, you can try to identify which combination of properties each node has! Fun!

[Figure arsall-hans: Hans Zantema’s 11-node graph realizing all 9 legal property combinations]
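As a sanity check on the counting argument, here is a small sketch (names mine, using the Agda standard library) that enumerates the legal combinations:

open import Data.Bool using (Bool; true; false; _∧_; _∨_; not; if_then_else_)
open import Data.List using (List; _∷_; []; concatMap; [_]; length)
open import Data.Product using (_×_; _,_)

bools : List Bool
bools = true ∷ false ∷ []

-- a combination assigns a Bool to each of (SN , WN , CR , WCR);
-- legality says SN implies WN and CR implies WCR
legal : Bool × Bool × Bool × Bool → Bool
legal (sn , wn , cr , wcr) = (not sn ∨ wn) ∧ (not cr ∨ wcr)

combos : List (Bool × Bool × Bool × Bool)
combos = concatMap (λ sn → concatMap (λ wn → concatMap (λ cr → concatMap (λ wcr →
           if legal (sn , wn , cr , wcr) then [ (sn , wn , cr , wcr) ] else [])
           bools) bools) bools) bools

-- length combos indeed normalizes to 9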

Suppose we have a graph (A,->) consisting of a set of objects A and a binary relation -> on A. This is a simple case of an abstract reduction system, as defined in the Terese book (in the more general case, we have not just one relation ->, but an indexed set of relations). In the theory of abstract reduction systems, an element x is confluent iff whenever there is a path from x to y and a path from x to z, there exists some element q which is reachable from both y and z. An element x is locally confluent iff whenever there is an edge (not an arbitrary path) from x to y and an edge from x to z, there is some element q reachable from both y and z. So confluence implies local confluence, but (rather famously) the reverse implication holds in general only for terminating systems — that is Newman’s Lemma. An element is terminating iff there are no infinite paths from that element. An element is normalizing iff there exists a path from that element to a normal form, which is an element that has no outgoing edges. So terminating implies normalizing.
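These definitions transcribe almost verbatim into Agda; here is a minimal sketch (all names mine), parametrized by the edge relation:

open import Data.Product using (Σ; _×_; _,_)

module ARS {A : Set} (_⟶_ : A → A → Set) where

  -- paths: the reflexive-transitive closure of the edge relation
  data _⟶*_ : A → A → Set where
    done : ∀ {x} → x ⟶* x
    step : ∀ {x y z} → x ⟶ y → y ⟶* z → x ⟶* z

  -- y and z are joinable if some q is reachable from both
  joinable : A → A → Set
  joinable y z = Σ A (λ q → (y ⟶* q) × (z ⟶* q))

  confluent : A → Set
  confluent x = ∀ {y z} → x ⟶* y → x ⟶* z → joinable y z

  locally-confluent : A → Set
  locally-confluent x = ∀ {y z} → x ⟶ y → x ⟶ z → joinable y z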

We have these four properties: confluence, local confluence, termination (sometimes also called strong normalization), and normalization (sometimes called weak normalization).  What is the smallest graph that is property diverse, in the sense that for every consistent combination of properties, the graph contains an element with that combination of properties?  (The consistency requirement for the set of properties for an element arises because confluence implies local confluence and termination implies normalization).

I will post the answer to this (with a nice picture) Monday…

Just a quick note: the PLPV 2012 PC chairs have posted a call for papers.  The submission deadline is Oct. 11th.  This is a great opportunity to give a talk to an audience likely to include many well-known researchers, since PLPV is affiliated with POPL and has had quite significant registration numbers (among the highest of all affiliated workshops) in the past couple years.

I was recently talking over lunch with my colleague Cesare Tinelli, with whom I run the Computational Logic Center here at U. Iowa, and was very surprised to find that we had different intuitions about whether or not quantification over sort bool, where formulas are of sort bool, constituted higher-order quantification. On the one hand, it is certainly hard to dispute the idea that higher-order quantification essentially involves quantification over sets. In higher-order logic, for example as developed in the Q0 system of Peter Andrews (see his enlightening book “An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof”), sets are modeled by their membership predicates. Quantification over sets (or functions, or predicates) thus involves quantifying over entities of type A \rightarrow B, for some types A and B. Just quantifying over type o (the primitive type for formulas in Q0) is at best a degenerate form of such quantification, and hardly worthy of the term “higher-order”.

Furthermore, we know that validity in higher-order logic, as for first-order logic, is undecidable. But just adding quantification over sort bool is not enough to obtain undecidability. The logic of Quantified Boolean Formulas (QBF), for example, extends classical propositional logic with quantification over bool. The problem of determining truth for a QBF formula is PSPACE-complete (indeed, it is considered the paradigmatic PSPACE-complete problem). This means that the best known algorithm (and the best most people believe we will ever find) has the same complexity as the one for SAT, namely exponential time. In practice, though, QBF problems really do seem to be harder to solve than SAT problems. In any event, adding quantification over bool to an otherwise decidable logic is no threat to decidability: one can always just try both boolean values — both must hold for the validity of a universal quantification over bool, and both must fail for the unsatisfiability of an existential.
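To make “just try both boolean values” concrete, here is a minimal sketch of a QBF evaluator in Agda (all names mine; quantifier binding is handled by Agda functions, so formulas are closed by construction):

open import Data.Bool using (Bool; true; false; _∧_; _∨_; not)

data QBF : Set where
  lit       : Bool → QBF
  conj disj : QBF → QBF → QBF
  neg       : QBF → QBF
  all ex    : (Bool → QBF) → QBF

-- deciding truth by trying both boolean values at each quantifier:
-- exponential time, but clearly decidable
eval : QBF → Bool
eval (lit b)    = b
eval (conj p q) = eval p ∧ eval q
eval (disj p q) = eval p ∨ eval q
eval (neg p)    = not (eval p)
eval (all f)    = eval (f true) ∧ eval (f false)
eval (ex f)     = eval (f true) ∨ eval (f false)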

So that seems like a pretty good reason for thinking that quantification over bool does not count as higher-order quantification (except maybe in a degenerate sense). Now here’s why one might be less sure. Consider an expression like \forall A:o.\, (A \to A) \to A \to A. From the perspective of QBF, this is an uninteresting valid formula. But let’s take a trip to the Gamma quadrant through that seductive interdimensional portal known as the Curry-Howard isomorphism. Then this expression is actually a polymorphic type in the type theory System F (second-order lambda calculus, also sometimes denoted \lambda 2). In System F, this expression is the type of all functions which, given any type A, return a function from A \to A to A \to A. In fact, using Church encodings, this expression is the type of natural numbers encoded as lambda terms in System F.
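This type transcribes directly into Agda, where it lands in Set₁ because it quantifies over Set (a sketch; names mine):

-- the System F type ∀A. (A → A) → A → A, with its first two inhabitants
CNat : Set₁
CNat = (A : Set) → (A → A) → A → A

czero : CNat
czero A f a = a

csuc : CNat → CNat
csuc n A f a = f (n A f a)

-- csuc (csuc czero) A f a normalizes to f (f a): the Church numeral 2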

What is the significance of this type-theoretic view? Well, the results which I noted above for QBF change completely for System F. Instead of validity being PSPACE-complete, it becomes undecidable! This is really striking, and not a connection I ever noticed. Truth for (classical) quantified boolean formulas is decidable and in PSPACE, but the problem of inhabitation for System F is undecidable (see Section 4.4 of Barendregt’s amazingly great “Lambda Calculi with Types” [Citeseer]). The inhabitation problem is the problem of determining whether there is a lambda term with a given type, and can be thought of as the question of provability of the given quantified boolean formula in constructive logic. The lambda term corresponds to the proof, and the System F type to the formula it proves. So constructive provability is undecidable, whereas classical provability is decidable. That is quite remarkable.

Furthermore, another reason for viewing boolean quantification as higher-order comes from the complexity of normalizing lambda terms which are typable in System F. The complexity is quite spectacular. To get a rough feel for it (and my knowledge of the theory of subrecursion is only enough for this, I am afraid), we know that Ackermann’s function, which even for very small input values takes astronomically long to compute an output, can be typed in Goedel’s System T. In fact, Ackermann’s function does not begin to push the limits of System T, since it can be written using only primitive recursion at order 1. Let’s say that a primitive recursive term is at order n if all the actual recursions in it compute something (by primitive recursion) of a type whose order is n, where base types are order 0, and the order of A \to B is the maximum of the order of B and one plus the order of A. System T supports primitive recursion of any finite order. So Ackermann’s function, which is completely infeasible to compute in practice for all but a few inputs, is a relatively easy function for System T. Now, understanding that, we need only note that System F allows typing vastly more complex functions than are typable in System T. Since System T can be seen as giving the proof terms for first-order arithmetic, just as System F does for constructive second-order logic, we see that constructive second-order logic is a more complex theory (in this quantitative sense) than first-order arithmetic. This again adds support for viewing boolean quantification as higher-order.
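Here, for instance, is a sketch of Ackermann’s function via iteration at order 1, in Agda (names mine; iter is the same combinator as in the first sketch above):

open import Data.Nat using (ℕ; zero; suc)

iter : {A : Set} → ℕ → (A → A) → A → A
iter zero    f a = a
iter (suc n) f a = f (iter n f a)

-- the only recursion is on the first argument; each step iterates a
-- function of type ℕ → ℕ, i.e., primitive recursion at order 1
ack : ℕ → ℕ → ℕ
ack zero    n = suc n
ack (suc m) n = iter (suc n) (ack m) 1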

So which is it?  Is boolean quantification first-order (or weaker) or higher-order?  It seems the difference hinges on whether one is considering a constructive or a classical version of the system.  This may seem puzzling, but there are philosophical reasons why it is not surprising: constructive implication is modal in a way that classical implication is not, and hence might be expected to give rise to a qualitatively (if not quantitatively) more complex theory.  But this has to be taken up in another post.