

By Anthony Cantor (University of Iowa PhD student)

Thanks to travel funds generously provided by my advisor, Aaron Stump, as well as the FLOC student travel grant, I was recently fortunate enough to spend a week in Oxford at FLOC 2018. Not only was it an enriching experience, but it was also my first time presenting research. Over the course of the pre-FLOC workshop weekend and the block 1 conferences I saw numerous thought-provoking and enlightening presentations, so I’d like to share some of the highlights here.

Term Assignment for Admissible Rules in Intuitionistic Logic
One of my favorites from my day at the Classical Logic and Computation workshop was Matteo Manighetti’s presentation of an extension of intuitionistic logic that supports admissible intuitionistic rules (with co-author Andrea Condoluci). He explained that if the provability of a formula A implies the provability of a formula B, then the rule A/B is admissible, and if the formula A→B is provable then the rule A/B is derivable. As far as I can tell, these definitions express the difference between meta-theoretic implication and object implication. In classical logic every admissible rule is derivable, but this is not the case in intuitionistic logic: apparently the rule ¬A→(B∨C) / (¬A→B)∨(¬A→C) is admissible but not derivable (this is called “Harrop’s rule”). Manighetti continued by describing an extension of intuitionistic logic obtained by adding axioms corresponding to the admissible rules, and a corresponding Curry-Howard term assignment. I’m really looking forward to reading this paper when it appears in the workshop proceedings because I’m very curious about the details of some of the proofs of theorems that Manighetti claimed during the presentation: one was the disjunction property, and the other he called a “classification” lemma. At the moment I’m quite interested in these sorts of proofs because I’m currently working with Aaron to prove a logical constructiveness result.
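To keep the distinction straight in my own notes, here it is in symbols (my notation, which may differ from the authors'):

```latex
% A/B admissible in a logic L:  if \vdash_L A, then \vdash_L B
% A/B derivable in L:           \vdash_L A \to B
% Harrop's rule: admissible, but not derivable, in intuitionistic logic
\frac{\neg A \to (B \vee C)}{(\neg A \to B) \vee (\neg A \to C)}
```

So derivability internalizes the rule as a single implication, while admissibility is only a closure property of the set of provable formulas.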

Proof Nets and Linear Logic
While at FLOC I attended two great presentations on the subject of proof nets: a presentation on proof nets for bi-intuitionistic linear logic by Willem Heijltjes, and a presentation on a new type of proof nets for multiplicative linear logic by Dominic Hughes. These two presentations caught my interest because of the logic under consideration in the former (bi-intuitionistic logic), and the concept of canonicity in the latter (these two topics relate to research I’ve been working on with Aaron).

Regardless of their potential relevance to my research interests, I’m happy to have attended these presentations because they both had a common property that taught me a lesson about designing slides: when possible, omit words (especially sentences) from a slide.  Both of these presentations did a good job of focusing my attention on a particular point of the slide (usually some part of a proof derivation).  Throughout the conference I often got lost because I was trying to read sentences on slides instead of focusing on the speaker.  By omitting unnecessary words, these presentations kept my eyes on the right part of the slide, and my ears on the speaker.  Interestingly, the two presentations differed greatly in their depth: Heijltjes’ presentation contained a lot of details and examples, while Hughes’ stayed at a very high level.

Inspired by Heijltjes and Hughes, I’ve begun exploring linear logic and proof nets via Girard’s “Linear Logic”, a reference cited in both of their papers [1][2]. So far it’s been very rewarding. In particular, I quite liked the following observation made by Girard regarding the connection between the ⊢ relation and constructiveness:

Now, what is the meaning of the separation ⊢? The classical answer is “to separate positive and negative occurrences”. This is factually true but shallow; we shall get a better answer by asking a better question: what in the essence of ⊢ makes the two latter logics more constructive than the classical one? For this the answer is simple: take a proof of the existence or the disjunction property; we use the fact that the last rule used is an introduction, which we cannot do classically because of a possible contraction. Therefore, in the minimal and intuitionistic cases, ⊢ serves to mark a place where contraction (and maybe weakening too) is forbidden; classically speaking, the ⊢ does not have such a meaning, and this is why lazy people very often only keep the right-hand side of classical sequents. Once we have recognized that the constructive features of intuitionistic logic come from the dumping of structural rules on a specific place in the sequents, we are ready to face the consequences of this remark: the limitation should be generalized to the other rooms, i.e., weakening and contraction disappear. As soon as weakening and contraction have been forbidden, we are in linear logic.

Blockchain Verification
Grigore Rosu’s presentation on formally verifying blockchain contracts and virtual machines also stood out. In his talk Rosu advocated the use of a single framework called “K” to generate a suite of language and runtime tools related to a blockchain specification, as opposed to constructing the components first and attempting formal verification as an afterthought. The system uses a logic called “Matching Logic” to generate the components based on configurations containing semantic and syntactic rules. Rosu claimed that many previous methods for defining computational semantics have drawbacks, and that the “K” framework “keeps the advantages of those methods, but without the drawbacks”. Unfortunately he didn’t explain how exactly the “K” framework achieves this, but he did enumerate a rather impressive list of languages that are currently supported by the “K” framework, which included C, Java, and the Ethereum VM. I had a hard time following a lot of the detail about how matching logic enables auto-generation of effective language tools, but I’m at least convinced that it would probably be a lot of fun to try out his framework on a toy language.

Corrado Böhm Memorial
Finally, another standout event was most certainly the pair of talks given by Silvio Micali and Henk Barendregt in memory of Corrado Böhm. After relaying some personal memories, Barendregt presented some highlights of Böhm’s career. He first discussed some of Böhm’s early work on self-compilation and bootstrapping. In particular, Böhm proved that one could start with a handwritten slow compiler, and then use that compiler to compile itself to obtain faster and faster compilers. Barendregt also touched on a few other important results achieved by Böhm, and ended by discussing Böhm’s passion for research: apparently Böhm was still stating open problems even while in his 90s. Barendregt’s conclusion was memorable: “keep asking questions, it keeps you young.”



Yours truly is interviewed now on the wonderful Type Theory Podcast.  Very exciting!

In our Computational Logic seminar here at The University of Iowa, we are studying logic programming this semester.  We are using the very nice book “Logic, Programming, and Prolog”, freely available online.  We were talking today about the existence of a least Herbrand model for a definite program.  A definite program is just a set of clauses of the form A_0 \leftarrow A_1,\ldots,A_m, where each A_i is an atomic formula (predicate applied to terms).  (Free variables in clauses are interpreted universally.)  If m = 0, then we just have an atomic fact A_0 in the definite program.  A Herbrand interpretation is a first-order structure where each function symbol f of arity k is interpreted as \lambda x_1,\ldots,x_k. f(x_1,\ldots,x_k), and each predicate is interpreted as a subset of the set of ground (i.e., variable-free) atomic formulas.  A Herbrand model of a definite program P is then just a Herbrand interpretation which satisfies every clause in P.  It will be convenient below to identify a Herbrand interpretation with a subset of the set of all ground atomic formulas.  Such a subset determines the meanings of the predicate symbols by showing for which tuples of ground terms they hold.  We will pass tacitly between the view of a Herbrand interpretation as a first-order structure and the view of it as a set of ground atomic formulas.  The Herbrand base is the Herbrand interpretation corresponding to the set of all ground atomic formulas.  It says that everything is true.
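To make these definitions concrete, here is a small Python sketch (my own illustration, not from the book). It uses a function-free transitive-closure program, so the Herbrand base is finite, and computes the least Herbrand model by iterating the immediate-consequence operator to a fixed point.

```python
# Illustrative function-free definite program (names are mine):
#   edge(a, b).  edge(b, c).
#   path(x, y) <- edge(x, y)
#   path(x, z) <- edge(x, y), path(y, z)
# Ground atoms are represented as tuples (predicate, arg, arg).
facts = {("edge", "a", "b"), ("edge", "b", "c")}

def tp(interp):
    """One application of the immediate-consequence operator T_P."""
    new = set(facts)
    # path(x, y) <- edge(x, y)
    for (p, x, y) in interp:
        if p == "edge":
            new.add(("path", x, y))
            # path(x, z) <- edge(x, y), path(y, z)
            for (q, y2, z) in interp:
                if q == "path" and y2 == y:
                    new.add(("path", x, z))
    return new | interp

def least_herbrand_model():
    """Iterate T_P from the empty interpretation up to a fixed point."""
    interp = set()
    while True:
        nxt = tp(interp)
        if nxt == interp:
            return interp
        interp = nxt

model = least_herbrand_model()
# The least model contains the two facts, path(a,b), path(b,c), and path(a,c).
```

Every Herbrand model of the program must contain these atoms, and this set is itself a model, so it is the least one in the subset ordering.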

What I want to talk about briefly in this post is the fact that the set of Herbrand models of a definite program P forms a complete partial order, where the ordering is the subset relation, the greatest element is the Herbrand base, and the greatest lower bound of a non-empty subset S of Herbrand models of P is the intersection of all the models in S.  In a complete partial order, every subset S of elements should have a greatest lower bound (though it need not lie in S).  Alternatively — and what I am interested in for this post — we can stipulate that every subset S should have a least upper bound.  The two formulations are equivalent, and the proof is written out below.  “Logic, Programming, and Prolog” contains a simple, elegant proof of the fact that the intersection of a non-empty set of Herbrand models is itself a Herbrand model.

What I want to record here is the proof that in general, if in a partial order (X,\sqsubseteq) every subset S\subseteq X (including the empty set) has a greatest lower bound, then every such S also has a least upper bound.  The proof I have seen for this is a one-liner in Crole’s “Categories for Types”.  It took me some puzzling to understand, so I am writing it here as much for my own memory as for the possible interest of others, including others from the seminar who watched me fumble with the proof today!

Let S be a subset of X.  Let \textit{ub}(S) be the set of elements which are upper bounds of S (that is, the set of elements u which are greater than or equal to every element of S).  The claim is that the greatest lower bound of \textit{ub}(S) is the least upper bound of S.  By the assumption that every subset of X has a greatest lower bound, we know that there really is some element q which is the greatest lower bound of \textit{ub}(S).  As such, q is greater than or equal to every other lower bound of \textit{ub}(S).  Now here is a funny thing.  Every element x of S is a lower bound of \textit{ub}(S).  Because if y\in \textit{ub}(S), this means that y is greater than or equal to every element in S.  In particular, it is greater than or equal to x.  Since this is true for every y\in \textit{ub}(S), we see that x is a lower bound of \textit{ub}(S).  But q is the greatest of all such lower bounds by construction, so it is greater than or equal to the lower bound x.   And since this is true for all x\in S, we see that q is an upper bound of all those elements, and hence an upper bound of S.  We just have to prove now that it is the least of all the upper bounds of S.  Suppose u' is another upper bound of S.  This means u'\in\textit{ub}(S).  Since by construction q is a lower bound of \textit{ub}(S), this means that q \sqsubseteq u', as required to show that q is the least of all the upper bounds of S.
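Since the proof is abstract, here is a quick sanity check of the construction in Python on a small finite example (my own illustration): the powerset of {0, 1, 2} ordered by inclusion. There the glb of a family is its intersection (with the whole set as glb of the empty family), and the least upper bound of S should come out to the union of S.

```python
from itertools import chain, combinations

# Finite lattice: all subsets of {0, 1, 2} ordered by inclusion.
base = frozenset({0, 1, 2})
X = [frozenset(s) for s in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

def glb(family):
    # Greatest lower bound = intersection; for the empty family
    # it is the top element (the whole base set).
    result = base
    for x in family:
        result = result & x
    return result

def ub(S):
    # All elements of X that are >= (supersets of) every element of S.
    return [u for u in X if all(s <= u for s in S)]

# Crole's construction: lub(S) = glb(ub(S)).  In this lattice the
# least upper bound of S is just the union of S, so check that.
for S in chain.from_iterable(combinations(X, r) for r in range(4)):
    expected = frozenset().union(*S) if S else frozenset()
    assert glb(ub(list(S))) == expected
```

Note the empty-set case: ub(∅) is all of X, whose glb is the bottom element — exactly the least upper bound of the empty set, which is why the hypothesis must include the empty subset.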

The final interesting thing to note about the complete partial order of Herbrand models of a definite program P is that while the greatest lower bound of a non-empty set S of models is their intersection, and while the greatest element is the Herbrand base (a universal Herbrand model), the intuitive duals of these operations are neither the least element nor the least upper bound operation.  The intuitive dual of a universal Herbrand model would be, presumably, the empty Herbrand interpretation.  But this need not be a model at all.  For example, the definite program P could contain an atomic fact like p(a), and then the empty Herbrand interpretation would not satisfy that fact.  Furthermore, if S is a non-empty set of Herbrand models, \bigcup S is not the least upper bound of S.  That is because \bigcup S need not be a Herbrand model of P at all.  Here is a simple example.  Suppose P is the definite program consisting of clauses \textit{ok}(h(a,b)) and \textit{ok}(h(x,y)) \leftarrow \textit{ok}(x),\textit{ok}(y).  Consider the following two Herbrand models H_1 and H_2 of this program P. In H_1 the interpretation of \textit{ok} contains all the terms built using h from a and h(a,b).  In H_2, the interpretation of \textit{ok} contains all the terms built using h from b and h(a,b).  If we take the intersection of H_1 and H_2, then it is a Herbrand model, in fact the minimal one: it says that \textit{ok}(h(a,b)) is true, as required by the first clause in P; and if two terms t_1 and t_2 are in the interpretation of \textit{ok}, then so is h(t_1,t_2).  But if we take the union of H_1 and H_2, what we get is not a Herbrand model of P at all.  Because H_1 \cup H_2 contains \textit{ok}(h(a,a)) and \textit{ok}(h(b,b)), for example, but not \textit{ok}(h(h(a,a),h(b,b))).  To get an upper bound of H_1 and H_2, it is not enough to take their union.  One must take the union and then close it under the deductive consequences of the program P.
That’s the intuition, though we would need to formally define closure under deductive consequences — and it would be a bit nicer to be able to apply a model-theoretic notion (since we are working model-theoretically here) rather than a proof-theoretic one.   Declaratively, we know we can get the least upper bound of a set S of Herbrand models as the intersection of the set of all Herbrand models which are supersets of every model in S.  But this is rather a hard definition to work with.
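The failing instance for H_1 \cup H_2 can also be checked mechanically on a finite, depth-bounded slice of the two (infinite) models.  The following Python sketch is my own illustration of the counterexample, with terms represented as nested tuples:

```python
def close_under_h(generators, depth):
    """All terms built from the generators using the binary symbol h,
    up to the given nesting depth (a finite slice of an infinite model)."""
    terms = set(generators)
    for _ in range(depth):
        terms |= {("h", t1, t2) for t1 in terms for t2 in terms}
    return terms

a, b = "a", "b"
hab = ("h", a, b)

# H1: ok holds of all terms built with h from a and h(a,b);
# H2: ok holds of all terms built with h from b and h(a,b).
H1 = close_under_h({a, hab}, 3)
H2 = close_under_h({b, hab}, 3)
union = H1 | H2

t1 = ("h", a, a)   # in H1 (built from a), hence in the union
t2 = ("h", b, b)   # in H2 (built from b), hence in the union
assert t1 in union and t2 in union

# The clause ok(h(x,y)) <- ok(x), ok(y) demands ok(h(t1, t2)), but
# h(t1, t2) lies in neither H1 (which never contains b alone) nor H2
# (which never contains a alone) — at any depth, not just this bound.
assert ("h", t1, t2) not in union
```

So the union satisfies both premises of the second clause while missing its conclusion, confirming that it is not a model of P.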

Anyhow, this is a nice example of finding an interesting abstract structure in semantics, as well as a good exercise in reasoning about such structures.

It’s been a pretty busy summer for our group, and I want to give an update on some of the things we have been up to.  Our biggest news is probably the great opportunities some group members have recently left to pursue:

  • Duckki Oe accepted a postdoc at MIT, working with Adam Chlipala.  The project is concerned with verification of system software, so that should be quite challenging and exciting.
  • Garrin Kimmell accepted a position at Kestrel Institute.  Kestrel is a well-known research institute that applies formal methods for developing reliable software systems, largely (as I understand) for government contracts.

Both of these positions will provide fantastic environments for Garrin and Duckki to continue developing their substantial formal-methods and programming-languages skills, with some of the top formal-methods researchers in the field.  So, I am very, very happy about this (and so are they!).

Frank Fu and Harley Eades attended several interesting research events this summer — but I don’t want to steal their thunder: you can check out their reports on NASSLLI and ICALP on their blogs (see my blogroll).

We had an undergraduate researcher visiting here from Ohio State University, Angello Astorga.  Angello was here through the Summer Research Opportunities Program (SROP), run by the CIC (Big Ten Schools + U. Chicago).  This is a really great program that gives students from underrepresented populations a research experience, and a lot of mentoring and professional development activities, too.  I know Angello was especially excited about the multiple practice GREs.  Just kidding.  It was truly a very good experience for the attendees, from what I saw.  Students really did an impressive job mastering new material and presenting it at an end-of-the-summer undergrad research conference.  They really knew their stuff, and did a very nice job explaining it.  Angello was working here on verifying the CK machine for untyped lambda calculus using Trellys.  He picked up lambda calculus and Trellys and completed (almost!) the verification in just 8 weeks, after just his sophomore year as a CS major at OSU. Wow!  I was impressed.  Here’s a picture of Angello with his poster.  We are hoping he will present it at the upcoming Midwest Verification Days (MVD), hosted at Kansas U. September 20-21.

On the research side, I am very fired up about the Dual Calculus (due to Wadler).  It is a really elegant and intriguing formalism.  Hopefully I’ll manage to post more about it sometime soon.  The most helpful source I have found so far is “Dual Calculus with Inductive and Coinductive Types” by Kimura and Tatsuta (sorry, no free PDF seems to be available, only at SpringerLink).

Hope your summer is ending well.