
Category Archives: Type Theory

You find yourself, like Jacob, gazing up the rungs (for me, ropes)

of a ladder that seems to have constructed itself

to avoid leaving you flat on the ground.

But what, you ask, in the arching presence of the angels,

do you do at the top?

 

Type theories, to be consistent, cannot be uniform.  Somewhere, your power must ebb and your abstractions diminish, until finally you are left with the beautiful silence of a single symbol (maybe \Box).  Type : Type is also beautiful, but long known to be inconsistent: collapsing everything to one level ruins your type theory as a logic.  There are more subtle examples of the need to give up your power as you ascend (this starts to sound like John of the Cross).  You cannot repeat impredicative quantification one level up, for example: Girard proved in his dissertation that this is paradoxical.  So if you are an impredicativist, as I am, your kind level must be strictly weaker than your type level.

But what if you are a predicativist, perhaps a devotee (as I also am) of Agda?  Agda, and also Coq via its ECC structure (Zhaohui Luo’s Extended Calculus of Constructions), seem to avoid the quietist imperative I am describing.  Above the current level, there is another, with essentially the same power.  And beyond that, another, ad infinitum.  The ladder goes up forever!  We surpass the heavens!

But no, experienced Agda and Coq users know that is sadly not what happens.  Much like the tower of Babel, the predicative universe hierarchy does not satisfy the desires of abstraction.  For some constructions make sense at all levels, and hence we introduce the seductive ideas of typical ambiguity (considered by Harper and Pollack for Coquand’s system CC^\omega, a predecessor of ECC) and universe polymorphism (also considered by Harper and Pollack).  Universe polymorphism as I understood it from Agda seems a bit different from what Harper and Pollack propose.  And it seems that typical ambiguity is found more in Coq than Agda.  (I could be wrong on these points!)  In Agda, there is the curious idea that we can quantify over all levels of the hierarchy explicitly.  We are then left with the conundrum of what level such a quantification itself has.  Of course, for soundness reasons, it cannot have one of the levels covered by the level quantification.  So we have to make up a new level, namely \omega.  And we see that level quantification was a charade to begin with: we are definitely not quantifying over all the levels, just some prefix of them.  And it seems Agda does not make \omega available for you to use in your code.  So this just seems very hacky.
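
To make this concrete, here is a minimal Agda sketch of the situation (the function name is just for illustration):

    open import Agda.Primitive using (Level)

    -- A level-polymorphic identity function.  Because its type quantifies
    -- over the level ℓ, that type cannot live at any of the levels being
    -- quantified over; Agda places it in the extra sort Setω instead.
    id : {ℓ : Level} (A : Set ℓ) → A → A
    id A x = x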

More principled is something that I think is closer to the kind of universe polymorphism that Harper and Pollack were proposing, which in their work is located at global definitions (not arbitrary terms).  And this is the answer to my riddle at the beginning: when you reach the top of your ladder, you have to “go meta”.  It is a beautiful insight that we can internalize the induction scheme one finds in Peano Arithmetic as a single second-order axiom.  We have passed from the meta-level, with a family of formulas, to a single object-language formula.  I propose that when we reach the top layer of our layered type theory, we do the reverse.  We add schematic definitions.
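
For concreteness, recall the familiar contrast.  In Peano Arithmetic, induction is a scheme, one axiom per formula \phi:

\phi(0) \to (\forall n.\ \phi(n) \to \phi(n+1)) \to \forall n.\ \phi(n)

Second-order logic internalizes the whole family as a single formula:

\forall P.\ P(0) \to (\forall n.\ P(n) \to P(n+1)) \to \forall n.\ P(n)

Schematic definitions at the top of the hierarchy go the other way: rather than internalizing the family, we are content to leave it at the meta-level.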

Here at Iowa, we are working on a new type theory and system called Cedille.  Some of my recent papers are about this.  There, we have a predicative hierarchy truncated at level 2.  Ha!  There are just terms, types, and kinds.  It is actually more convenient to reach the end of the ladder faster!  And going schematic means that while we do not support kind-level functions, we will allow you to define parametrized kinds.  So you can give a top-level definition of some kind parametrized by terms or types.  The definition itself does not need some funny type at level \omega or some additional superkind structure.  Of course, one can only use the definition with all parameters instantiated.
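
For instance (in a made-up notation, not actual Cedille syntax), one might give the top-level schematic definition

Rel\ (A : \star) \;=\; A \to A \to \star

and then write Rel\ Nat, which is an ordinary kind.  Rel itself, unapplied, is not a term, type, or kind of the theory, so nothing at level \omega is needed to classify it.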

Simple, but it extends expressivity a bit, just beyond the reach of what can be internalized in the theory.

This post is about double negation in constructive type theory.  Suppose you know that \neg(\neg A) is true; i.e., (A \to \perp)\to \perp (writing \perp for false).  In classical logic, you then know that A is true.  But of course, you are allowed no such conclusion in intuitionistic logic.  So far, so familiar.  But let us look at this now from the perspective of realizability.  Realizability semantics gives the meaning of a type as a set of some kind of realizers.  Let us consider untyped lambda calculus terms (or combinators) as realizers for purposes of this discussion.  Then the crucial clauses of the semantics for our purposes are

  • t \in [\negthinspace[ A \to B ]\negthinspace]\ \Leftrightarrow\ \forall t' \in [\negthinspace[A]\negthinspace].\ t\ t'\in[\negthinspace[B]\negthinspace], and
  • \neg(t \in [\negthinspace[\perp]\negthinspace]) for every t; that is, [\negthinspace[\perp]\negthinspace] is empty.

Let us work through, then, when t\in[\negthinspace[(A\to\perp)\to\perp]\negthinspace] according to this definition:

\begin{array}{ll} t\in[\negthinspace[(A\to\perp)\to\perp]\negthinspace] & \Leftrightarrow \\ \forall t'\in[\negthinspace[A\to\perp]\negthinspace].\ t\ t'\in[\negthinspace[\perp]\negthinspace] & \Leftrightarrow \\ \forall t'.\ \neg(t'\in[\negthinspace[A\to\perp]\negthinspace]) & \Leftrightarrow \\ \forall t'.\ \neg(\forall t''\in[\negthinspace[A]\negthinspace].\ t'\ t''\in[\negthinspace[\perp]\negthinspace]) & \Leftrightarrow \\ \forall t'.\ \neg(\forall t''.\ \neg(t''\in[\negthinspace[A]\negthinspace])) & \Leftrightarrow \\ \neg(\forall t''.\ \neg(t''\in[\negthinspace[A]\negthinspace])) & \Leftrightarrow \\ \exists t''.\ t''\in[\negthinspace[A]\negthinspace] &\  \end{array}

(The next-to-last step drops the quantifier over t', since t' no longer occurs in its scope; the last step uses classical reasoning in the meta-language.)

So this is saying that if [\negthinspace[A]\negthinspace] is nonempty, then [\negthinspace[(A\to\perp)\to\perp]\negthinspace] is universal (contains all terms); and otherwise (if [\negthinspace[A]\negthinspace] is empty), [\negthinspace[(A\to\perp)\to\perp]\negthinspace] is empty.

So if you have a function F, say, realizing (\neg\neg A)\to B for some B, what this means is that F can make use of the fact that A is true, but not how it is true.  If F were simply given a proof of A, then it could compute with this proof.  But here, with \neg\neg A, the proof of A is hidden behind an existential quantifier (as we deduced above), and so it cannot be used.  Only someone reasoning at the metalevel about F can make use of the fact that A is true, when checking that the body of F realizes B.

So semantically, \neg\neg A is news you can’t use: we know A is true, but we have no access to how it is true.  For example, if A is a disjunction, you cannot case split, within the type theory, on the disjuncts.  Only at the metalevel are you entitled to consider the behavior of a term in light of the fact that one or the other of the disjuncts must hold.
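
A small Agda illustration of this point (a sketch using the standard library): the double negation of the law of excluded middle is provable constructively, but the proof gives no way to case split on which disjunct holds.

    open import Data.Sum using (_⊎_; inj₁; inj₂)
    open import Relation.Nullary using (¬_)

    -- Provable, yet it never reveals whether P or ¬ P is the true disjunct.
    ¬¬em : {P : Set} → ¬ ¬ (P ⊎ ¬ P)
    ¬¬em k = k (inj₂ (λ p → k (inj₁ p)))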

In type theory and logic, predicativity is the property that quantifications do not create objects of the same class as the variable being quantified over.  So one cannot quantify over formulas and get a formula.  Instead, one gets a formula′ — some class distinct from formulas.  In type theory, predicative quantification means that an expression quantifying over types is not itself a type (Haskell works like this, which surprised me).  Often (for example, in both Coq and Agda), one sees a countable hierarchy of type universes: a quantification over Type(x) lives in universe Type(x+1).  Impredicativity allows one to stay at the same level.  So in Coq, which has an impredicative universe called Prop at the bottom of the universe hierarchy, quantifications over Prop are still in Prop.  From a type theory perspective, impredicative polymorphism is a quantification over types which still gives you a type, of the same kind that can be used for instantiating such quantifications.  Indeed, it is the possibility of instantiating a quantifier with the entire quantified formula itself that makes analysis of impredicativity challenging.  Challenging, but well understood: Girard discovered the way to prove normalization in the presence of impredicative polymorphism for his System F (see his co-authored book Proofs and Types).
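
In Agda, for example, the jump in levels is easy to see (a minimal sketch; the name PolyId is just for illustration):

    -- Quantifying over Set yields something in Set₁, not in Set itself.
    PolyId : Set₁
    PolyId = (A : Set) → A → A

In Coq, by contrast, the analogous quantification over Prop, forall P:Prop, P -> P, is itself again a Prop.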

Forget squabbles over comment characters (I do think I prefer Haskell/Agda’s now over OCaml’s — sorry!): people can really tangle over predicativity versus impredicativity.  I have heard second-hand that a well-known type theorist is against impredicativity because he distrusts a trick that can only be used once.  This refers to the fact that we cannot have a layered language with impredicativity in two layers: Girard proved this leads to inconsistency, with his analysis of System U; see the discussion by Coquand here.  This is why in Luo’s Extended Calculus of Constructions (ECC), which is part of the type theory of Coq, only the base level of the universe hierarchy is impredicative, and the other levels are predicative.

I have to say I find this criticism, that we should distrust tricks that can only be used once, unconvincing.  For impredicativity is a heck of a trick.  The best part is that it enables lambda encodings.  I am on a personal quest to rehabilitate lambda encodings, and I am working on new type theories that support dependently typed programming and proving with lambda encodings.  A first step along these lines is my work with Peng Fu on self types, which you can read about here.  This is not the place to go into all the details, but impredicativity is absolutely essential.  With only predicative quantification, there really is no way to make lambda encodings practical.
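
To see why, consider the Church encoding of the natural numbers, written here in Agda, where Set is predicative (a minimal sketch):

    -- Church naturals: predicativity pushes the type of the encoding up to Set₁.
    CNat : Set₁
    CNat = (X : Set) → X → (X → X) → X

    czero : CNat
    czero X z s = z

    csuc : CNat → CNat
    csuc n X z s = s (n X z s)

With an impredicative quantifier (System F’s \forall, or Coq’s Prop), CNat would live in the very universe it quantifies over, so Church-encoded data could be used at a single level rather than escaping up the hierarchy.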

Furthermore, if impredicativity is the trick that can only be used once (and hey, let’s use it), predicativity is the trick you have to keep using again and again and again.  To avoid repeating code and datatype definitions at each level of the hierarchy (which one quickly finds will be needed for a lot of practical examples), we have to resort to level polymorphism.  Now a level-polymorphic type is not in any level we can access.  Why not just extend to higher ordinals?  Oh boy.  And of course, we have to solve level constraints that the type system imposes.  This complicates type checking.

So predicativity is out for me, including ECC-style predicative hierarchies over impredicative base levels.  But impredicativity is not getting off so easily.  Not for funny philosophical reasons or because we have to use it carefully — and we are treading the line here, as the more expressive we make our logic the closer we skirt the edge of inconsistency — but because it too has its own peculiar restrictions.  I have just been learning about these, and indeed, they are the impetus for writing this post.  Did you know that you cannot perform large eliminations on so-called impredicative datatypes in Coq?  And that these datatypes are the reason that Coq enforces strict positivity for datatype declarations (the datatype being defined cannot appear to the left of an arrow in the type of a constructor argument, even nested to the left of an even number of arrows)?  Well, if you are reading this post you probably did.  But I did not.  The counterexample is from a 1988 conference paper by Coquand and Paulin-Mohring.  Their presentation is a bit hard for me to read through, anyway, but thankfully Vilhelm Sjöberg transcribed it into Coq, with helpful comments, here.

Reading through this example has been challenging, not just because it is tricky and I still did not manage to get a great intuitive grasp of how it works, but also because I have been worried it might apply to the type theory I am developing as the next step in the work on dependently typed lambda encodings.  And that would be very troubling, as I have a 20-page proof of consistency of that system!  Did I mess up my proof?  What features of ECC + inductive types are actually needed for that example of Coquand and Paulin-Mohring’s?  As Vilhelm wrote it, the example does use the predicative hierarchy of ECC, not just Prop.  I am not 100% convinced that it could not be carried out in Prop alone, though.

After much puzzling, I think I understand why this example would not affect the system I am working on, which allows impredicative nonstrictly positive datatypes, and for which I believe I will soon have an extension with large eliminations.  This sounds miraculous, because Coq forbids nonstrictly positive datatypes, and also forbids large eliminations of impredicative datatypes.  I am saying you could have both simultaneously and be consistent.  That must be wrong, right?  Well, wrong (maybe).  The difference is that I am working in a Curry-style type theory, similar to the Implicit Calculus of Constructions (ICC) of Miquel (with several additional features for dependently typed lambda encodings).  As an aside, while I have the highest admiration for Miquel’s work, his system adds several strange typing rules that prevent a reasonable metatheory and violate a number of aesthetic guidelines for type theory.  My system does not follow Miquel on those points — though we do not yet have the definable subtyping which is the most intriguing practical point of ICC, albeit to my knowledge not yet capitalized on in any system or even other papers.  In the system I am working on, terms are simply the terms of pure untyped lambda calculus.  All the action is in the types we can assign to such terms, and the kinding of those types.  For practical purposes, one will eventually need to design an annotated version of such a language, since inferring a type for a pure term of lambda calculus, even just in System F (a subsystem of my system), is undecidable.  But for metatheoretic analysis, this Curry-style development, where terms are completely unannotated, is fine.  In fact, it is more than fine; it is enlightening.  ECC is based on a Church-style approach to the syntax of terms.  Types are really legitimate parts of terms, which could be computed at run time (though nothing interesting could be done with them at run time).  This, by the way, is another point of ICC that I do not agree with: types can be parts of terms “really”, even though they do not have to be.

The central issue with the Coquand/Paulin-Mohring counterexample, as I see it, is that an element of the inductive type in question can store a type (actually, something of type (A \rightarrow \star) \rightarrow \star).  This type is really like a piece of data inside the element of the inductive type.  It can be retrieved and processed, and this leads to the paradox.  Rather than blocking large eliminations with such inductive types, I propose to shift the ground completely by using a Curry-style theory.  Then types cannot be stored inside elements of datatypes, and thus cannot be retrieved and manipulated.  I believe this blocks the counterexample, while still allowing both nonstrictly positive inductive types and large eliminations over impredicative inductive types.  I have a proof that this works for the first of these features, and I believe I shall have a proof for the second (large eliminations) in the next few months.
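
(For reference, the shape of the offending declaration, in a Coq-like notation with \star for Prop, is roughly an inductive type A with a single constructor intro : ((A \to \star) \to \star) \to A.  The occurrence of A to the left of two arrows makes it positive but not strictly positive, and the argument of intro is exactly the type-like datum described above.)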

Hoping to have stirred the pot, I wish anyone reading to this point all the best in their investigations of type theory, or whatever other subjects deprive them of sleep and fire their minds.

Everyone knows classical logic is nonconstructive. If a classical principle like \forall x. \phi(x) \vee \neg \phi(x) were true constructively, it would mean we would have an effective (i.e., computable) way to tell, for any value x, whether \phi(x) is true or false. When \phi(x) expresses a property like “the Turing machine coded by natural number x halts”, then we can see that \forall x.\phi(x) \vee \neg \phi(x) cannot be constructively true in general. Case closed.

But a funny thing happened on the way from Amsterdam: computational interpretations of classical type theory were devised, like the \lambda\mu-calculus of Parigot and quite a few others.  These type theories have term notations for proofs of classical logic, and give a reduction semantics showing how to simplify proofs to normal forms, similar to standard reduction semantics for good old \lambda-calculus.  The reduction semantics is perfectly effective, so one really can compute — or so it would seem — with terms in these type theories.  If the essence of constructivism is computation, then it would seem that classical logic is constructive.  Indeed, titles like that of Girard’s 1991 article “A new constructive logic: classical logic” would suggest this.

But you know, maybe it is intuitionistic logic which is nonconstructive.  Come again?  Well, for the past half year or so several of us here at Iowa have been studying so-called bi-intuitionistic logic.  This is a logic that extends standard propositional intuitionistic logic with an operator called subtraction, which is dual to implication.  There are a number of papers about this topic I could recommend.  Maybe the easiest introduction is this one by Goré, although we ultimately followed the approach in this paper by Pinto and Uustalu.  Recall that in the Kripke semantics for propositional intuitionistic logic, the implication A \to B is true in a world iff in all future worlds where A is true, B is also true.  The subtraction A - B has a dual semantics: it is true in a world iff there is an earlier world where A is true and B is false.  In intuitionistic logic, one usually defines \neg A as A \to \perp.  This negation is true iff A is false in all future worlds.  In bi-intuitionistic logic, one can also define a second kind of negation, using subtraction: \sim A means \top - A, and it is true iff there exists an earlier world where A is false.  With this definition, we start to see classical principles re-emerging.  For example, A \vee \sim A holds, because in any world w of any Kripke model, either A is true in w, or there is some earlier world (possibly w itself) where A is false.  Valid formulas like A \vee \sim A violate the disjunction property of intuitionistic logic: the disjunction is valid, yet in general neither A nor \sim A is.  The disjunction property, or in programming-languages terms, canonicity, would seem to be as much a hallmark of constructive reasoning as the law of excluded middle is of classical reasoning.  Yet here we have a conservative extension of intuitionistic logic, based on exactly the same semantic structures, where a form of excluded middle holds and canonicity fails.
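
Spelling out the Kripke clauses just described, where \le is the ordering on worlds and w \le w' means w' is a later world than w:

\begin{array}{lcl} w \Vdash A \to B & \Leftrightarrow & \forall w' \ge w.\ (w' \Vdash A \Rightarrow w' \Vdash B) \\ w \Vdash A - B & \Leftrightarrow & \exists w' \le w.\ (w' \Vdash A \wedge w' \not\Vdash B) \\ w \Vdash \sim A & \Leftrightarrow & \exists w' \le w.\ w' \not\Vdash A \end{array}

The third clause is just the second with \top for A, and the classical flavor of \sim is visible in the existential over earlier worlds.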

What are we to make of this?  Let us think more about our Kripke semantics for (bi-)intuitionistic logic.  In this semantics, we have a graph of possible worlds, and we define when a formula is true in a world.  Formulas that are true stay true (the semantics ensures this), but formulas which are false in one world can become true in a later world.  The critical point is that we have, in a sense, two notions of truth: truth in a world, and truth across time (or across possible worlds).  Truth in a world is usually thought of as being defined classically.  Maybe it is better to say that the modality with which it is defined is not made explicit.  A textbook definition of Kripke semantics is making use of some background meta-language, and I think it is fair to say that without further explicit stipulation, we may assume that background meta-language is the language of informal classical set theory.  So for any particular formula \phi, either \phi holds in world w, or else it does not.  This is, in fact, what justifies validity of A \vee \sim A.  Indeed, to construct a version of Kripke semantics where this is not valid would seem to require something like an infinitely iterated Kripke semantics, where truth in a world is constructively defined (the infinite iteration would be to ensure that the notion of constructive definition being used for truth in a world is the same as the one for truth across worlds).  An interesting exercise, perhaps!  But not one that will validate the laws of bi-intuitionistic logic, for example, where A \vee \sim A is indeed derivable.

So what is it that makes a logic constructive?  Let’s come back to the case of computational classical type theory.  The term constructs that were discovered to correspond to classical principles were not so exotic: they were actually forms of familiar control operators like call-cc from functional programming!  So, these constructs were already in use in the wild; only the connection to logic was a (remarkable!) discovery.  So every day, the working Scheme programmer, or Standard ML programmer, or Haskell programmer has at their disposal these constructs which correspond to nonconstructive reasoning.  But they are writing code!  It is computing something!  What is going on?

Perhaps it is in Haskell where this is clearest.  To use call-cc, one must import Control.Monad.Cont.  And there we see it already: the control operator is only available in a monad.  The sequencing of operations which a monad is designed to provide is critical for taming call-cc from the point of view of lazy functional programming.  In the words of Simon Peyton Jones (from these notes):

Indeed it’s a bit cheeky to call input/output “awkward” at all. I/O is the raison d’etre of every program — a program that had no observable effect whatsoever (no input, no output) would not be very useful.

And that’s just where the interaction of control operators with pure functional programming seems strange.  Phil Wadler, in his well-known dramatization of the computational content of the law of the excluded middle, has the devil give a man a choice: either the devil pays him one billion dollars, or, if the man can pay the devil one billion dollars, the devil will grant him one wish of whatever he wants.  The man accepts the choice, and the devil goes with option 2.  After a life of greed and antisocial accumulation, the man presents the devil with one billion dollars, hoping for his wish.  The devil (using a control operator, not that they are intrinsically evil) goes back in time, so to speak, to change the offer to option 1, returning immediately to the man his one billion dollars.

I am reaching the conclusion that it is not the non-canonicity of the law of excluded middle that is the locus of non-constructivity.  It is the lack of a notion of the progression of time.  The devil (indeed, even God Himself) cannot change the past — despite numerous Hollywood offerings (Back to the Future, anyone?) based on the idea of changing the past — but without some restrictions, control operators can revert to an earlier time, thus potentially undoing some of the effects that have already been carried out.  As Peyton Jones is saying, if the computation just spins its wheels and does not interact at all with the outside world, then there is not really any notion of time or cause and effect which could be violated by the use of a control operator.

Another way to think of it: classical logic is the language of mathematics, where everything is considered as a static entity (notwithstanding the fact that dynamic entities like computations can be statically described).  Constructive logic is supposed to have a dynamic character: it should capture the idea, somehow, that the state of information can change over time.

So where I am thinking to go with all this on the language-design side is, rather than developing a type theory based on bi-intuitionistic logic (which we have been hard at work on doing for many months now, and which is likely a good research project anyway), to develop instead a type theory based on classical modal logic, where classical reasoning, and its corresponding type theory with full control operators, can be used at any given time, but not across time.  Reasoning (or computation) across time will be governed by the rules of a modal logic, likely S4 for reflexivity and transitivity of the modality.  Observable operations like input and output will require stepping into the future, from which no control operator can return the flow of control.  Such a type theory would provide a new approach to pure functional programming with effects: at any given instant of time, one could perform reads and writes (for example), possibly backtracking with control operators to cancel some of them out.  But when one wants to step forward in time, the most recent operations (particularly writes) are committed, and become irrevocable.  Such a type theory would hold out the promise of combining classical reasoning (at a particular time) with constructive computation (across time).  If we think of general recursion as computation over time — with each general recursive call requiring a step into the future — we can even combine Turing-complete computation with logically sound classical reasoning in this way.  That would be almost the holy grail of verified programming I have been working towards for the past 7 years or so: a single language supporting sound classical reasoning and general-purpose programming.

So to conclude: what is so nonconstructive about classical logic?  I believe it is not the failure of canonicity, but rather the lack of a notion of progression of time.  Devising type theories with this perspective in mind may bring new solutions to some old conundrums of both type theory and functional programming.  We will see.

This summer, a bunch of us here at U. Iowa were learning Agda, and I thought I’d write up some of my thoughts on the system, now that we have used it a little.  Overall, I found it to be a great tool, with numerous very nice design ideas.  The positives:

  • The latex-style syntax for unicode, which I gather is supported in some other tools like Isabelle, is fantastic.  One can type \forall and get ∀.  I thought I would not care much and not use it, and was surprised to find how much I like having these notations available in code.  It is just the best, particularly when it comes to formalizing mathematics of whatever kind.  I only wish I could define my own shortcuts for specific symbols.  If I am typing \cdot a lot, maybe I want to type just \. or something.  For all I know, one can define such shortcuts, but the documentation (one of the few negatives) does not seem to say.  Using unicode notations in Agda has got me thinking about more general notions of syntax.  For example, what about 2-dimensional syntax, where I can supply any scalable vector graphics image as the definition of a symbol?  Then I could design my own notations, and have inputs and outputs coming in and out of different ports on a square displaying that image.
  • The support for implicit arguments to functions is very good.  With dependent types, you often need lots of specificational data as arguments to functions, just so that later types come out correctly.  For example, to append vectors of length N and M, you need to make N and M extra arguments to the function.  In most cases, Agda can automatically infer these, and you can omit them where you call the function.  Again, I expected I would not care and would be happy to write out lots of indices to functions explicitly.  Wrong again!  It qualitatively changes the experience to be able to leave those out most of the time.  Code becomes much, much shorter, and when you get lucky and everything lines up correctly, and Agda can figure out instantiations, then it is quicker to write, with less cognitive burden.  This is really a great feature (see the vector-append sketch just after this list).
  • Equational definitions are fantastic.  I had already overcome my foolish bias against these from previous exposure to equational definitions in Haskell.  It is just so nice to be able to write something like


    [] ++ ys = ys
    (x ∷ xs) ++ ys = x ∷ (xs ++ ys)

    for append, rather than first abstracting the arguments and then doing the pattern matching and recursion, as one would in OCaml (which is still my go-to programming language, despite all this coolness in these other languages).

  • The type refinements in pattern matching (incorporating axiom K, I understand) work simply and naturally.  You match on something of type a ≡ b, you cover that case with just refl, and the type checker knows a must be definitionally equal to b.  That’s exactly what you want in practice, and it’s a pleasure to use.
  • The overloading of constructors is outstanding, for the simple reason that it helps you avoid introducing extra names for similar concepts.  Part of what makes theorem proving and dependently typed programming tough is the cognitive burden to keep track of lots of interdependent information.  Just remembering the names for everything is a big part of that.  Fewer names, less cognitive load.  It is easy to greet you if we are all named Tom or Tammy.
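
To illustrate the implicit-argument and equational-definition points above, here is a small sketch (using the standard library) of vector append with the element type and both lengths implicit, written equationally, plus a call site where Agda infers all of the implicit arguments:

    open import Data.Nat using (ℕ; _+_)
    open import Data.Vec using (Vec; []; _∷_)

    -- The element type A and the lengths n and m are implicit arguments.
    append : {A : Set} {n m : ℕ} → Vec A n → Vec A m → Vec A (n + m)
    append []       ys = ys
    append (x ∷ xs) ys = x ∷ append xs ys

    -- At the call site, A, n, and m are all inferred and can be omitted.
    example : Vec ℕ 3
    example = append (1 ∷ 2 ∷ []) (3 ∷ [])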

Overall, this made for a great experience using Agda.  For example, I formalized Kripke semantics for intuitionistic propositional logic (actually, bi-intuitionistic logic, but that is another post), and proved monotonicity of the semantics, a standard property.  The proof in Agda is significantly shorter than the paper proof would be.  It is also more readable (I think) by a human than a tactic-based proof in Coq, say, would be.  This is really the first time I felt as though some result was easier to prove in a theorem prover than slogging it out in LaTeX.  I am not saying Agda is unique in enabling this, but it is interesting to me that the first time I have had this feeling (having formalized a number of other things in Coq and other provers, including a couple of my own) is in Agda.

Ok,  now quickly for some constructive criticism:

  • The documentation needs to be improved.  I know it is a nuisance to document things, and it seems, from the Agda wiki,  that maybe the documentation has been getting updated or migrated or something.  But it’s not too great right now.  There are many little sources to consult (this tutorial for how to use the Agda emacs mode, these lecture notes for how this or that feature of the language works, this old reference manual for this, a new reference manual for that).  It is irritating.  Agda needs a single consolidated reference manual with all features covered with examples.
  • The standard library needs better documentation, too.  Mostly the intended use model seems to be to browse the standard library.  This is not too fun.  The organization of the standard library has some annoyances, like central definitions put in Core.agda files in subdirectories, rather than in the expected file.  Either some kind of agdadoc tool that could generate just type signatures and comments in HTML from the library code, or else a real reference manual for the standard library, would improve usability a lot.  Just look at OCaml’s standard library documentation for a very nice example.
  • Error messages involving meta-variables and implicit arguments can be hard to decode.  Giving good error feedback with type inference is known to be a tough problem, and it is exacerbated in the dependently typed setting.  I think we need more research (or if it’s already been done, more implemented research) on how to let the programmer interact with a type checker to explore the state the type checker has gotten into when it found a type error.  Mostly Agda’s error messages are reasonable, but once higher-order meta-variables and implicit arguments get involved, it can be pretty confusing to figure out why it’s having trouble typing my term.
  • Compilation is a bit funny right now.  Compiling hello-world in Agda took a couple minutes, and generated a 10 megabyte executable.  Wow!  It’s because it is first compiling a big chunk of the standard library to Haskell, and then calling the Haskell compiler.  It still seems too slow, and generating too big an executable.  I asked one major Agda user this summer about this, and he said he had never tried to compile a program before!  So many people are just running the type checker, and using Agda as a theorem prover.  But it does compile, and it can do rudimentary IO with Haskell’s IO monad (this is a plus I did not list above).

Ok, so that’s my feedback from learning Agda this summer.  I’d love to hear others’ experiences or thoughts in the comments!