
Last week I attended RDP 2013 in Eindhoven, the Netherlands.  RDP 2013 consisted of two conferences, RTA (Rewriting Techniques and Applications) and TLCA (Typed Lambda Calculi and Applications), plus a number of workshops.  I attended most of COS (Control Operators and their Semantics), which ran on the first and second days of RTA, as well as the meeting of IFIP Working Group 1.6 on Term Rewriting.  My doctoral student Harley Eades and I each gave talks in COS, and I gave a talk in the working group meeting.

For me, attending this RDP was a really great experience.  For starters, travel went smoothly: flying from Cedar Rapids to Chicago to Amsterdam, and then taking a 1.5-hour train ride to Eindhoven, was easy and expeditious.  The conference venue and hotel were just a 5-10 minute walk from the Eindhoven train station, which was very convenient, too.  The hotel was really quite nice, although like so many hotels I’ve stayed at, the wireless network was a bit flaky.  Also, one had to know to request the remote control for the thermostat from the front desk, which is not something American travelers expect to have to do to operate the air conditioning.  The local organization of RDP ’13 was also very good.  There was a nice big building in which the meetings took place.  The key organizational aspects, in my opinion, were that lunch was hosted on site (so no time was wasted wandering around for a restaurant in the middle of the day), and that there were lots of places to sit and do some work or talk with other participants.  The local organizers did a very good job with the meeting.

Anyhow, on to the technical side. COS was a great meeting to attend. There were invited talks by Matthew Flatt and Thomas Streicher.  It is a testimony, perhaps, to the inherent interest of control operators that a single workshop could accommodate talks from such different parts of Computer Science.  Matthew’s talk did a wonderful job explaining how control operators like call/cc, along with an impressive catalog of related programming-language constructs, are useful in practice and complex to implement in combination with each other.  I finally learned from Matthew’s talk what dynamic-wind is, which I had seen referenced in a couple of papers.  It is similar to try-catch-finally in Java, except that the semantics of dynamic-wind e1 e2 e3 is that we will execute e2, but whenever we do so, we will execute e1 before e2 and e3 after e2.  The key point is that if an exception is raised (or, more generally, a continuation is called that takes us out of the context in which we are evaluating e2), then we will make sure to execute e3 before we execute the next thing after e2.  So this is like the finally block of a try-catch-finally statement in Java.  But intriguingly, it is also necessary to execute e1 whenever we re-enter the computation of e2, which could happen by calling a continuation.  This has no obvious counterpart in the Java construct.  Thomas’s talk was about denotational categorical semantics related to control operators, using ideas from Krivine’s classical realizability (a form of compositional semantics for types which works for classical logic, not just intuitionistic logic).  It was a fun talk, because Streicher is a very engaging speaker.  I am afraid my category theory was not up to following what he had to say in detail, but I still enjoyed the talk, as I caught glimmers of different interesting technical ideas in the categorical setting he is using.
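
To make the comparison with Java concrete, here is a minimal Haskell sketch of my own (the choice of Haskell over Scheme is mine) of the finally-like half of this protocol, using bracket_ from Control.Exception.  Note that this captures only the exit behavior: bracket_ has no way to express dynamic-wind’s other half, namely re-running the entry action when a captured continuation jumps back into the body.

    import Control.Exception (bracket_)

    -- bracket_ before after body: 'before' runs on entry to 'body', and
    -- 'after' runs on every exit, whether 'body' returns normally or
    -- throws an exception (like Java's finally block).
    main :: IO ()
    main =
      bracket_ (putStrLn "winding in")
               (putStrLn "winding out")
               (putStrLn "body")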

And there were several other talks at COS ’13 quite relevant to our current work, in particular one by Kakutani and Kimura on the Dual Calculus with inductive and coinductive types.  It seems that due to some fundamental issues which show up in the collapse of control categories for interpreting computational classical type theory (CCTT), it is difficult if not impossible to have inductive and coinductive types coexist in the same CCTT.  I can’t say I fully understand how the categorical collapse (where the relevant category becomes just a pre-order, and there is at most one morphism from any source object to any target object) leads to problems in the definition of the language.  But it is a bit disturbing.  Kakutani and Kimura proposed to tackle this by building up a restricted form of induction from coinduction, within the language.  The treatment was again quite categorical, and I would have to beef up my understanding of categorical semantics to follow what is happening here.  We are working here at Iowa towards a similar goal, but using a type theory based on bi-intuitionistic logic instead of classical logic.  A real introduction to bi-intuitionistic logic is a matter for another post, but the brief summary is that it supports an operator dual to implication, which we call simply negative implication (the usual term is subtraction): A negatively implies B is true in the current world (of a Kripke model) if there is an earlier world where A is false and B is true.  The logic exhibits some classical features, but retains (we conjecture) canonicity for certain types.
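
To state that Kripke clause precisely (the notation here is mine: I write A ⇝ B for negative implication, and w' ≤ w to mean that w' is an earlier world than w):

    w \Vdash A \rightsquigarrow B \iff \exists w' \le w.\; (w' \nVdash A \text{ and } w' \Vdash B)

Compare the usual clause for intuitionistic implication, which looks at later worlds and uses a universal quantifier:

    w \Vdash A \to B \iff \forall w' \ge w.\; (w' \Vdash A \Rightarrow w' \Vdash B)

Flipping both the quantifier and the direction along the accessibility relation is exactly what makes negative implication dual to implication.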

At RTA and TLCA, I saw a number of really cool talks.  Simon Peyton Jones gave a joint RTA/TLCA invited talk about the “Core” intermediate language of Haskell, and how it uses explicit evidence expressions to show a simple type checker why the expressions elaborated from Haskell input are indeed typable, without needing to invoke a complex type inference algorithm.  The inference algorithm runs earlier in the compiler pipeline and generates these intermediate expressions with their evidence annotations.  The evidence expressions show why one type equals another, and together with explicit type annotations on lambda-bound variables, they are sufficient to encode all the fancy typing features that Haskell programs use.  And of course, Simon is a fantastic speaker, and his talk was, I am sure, a real highlight of the whole RDP.
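
To give a rough feel for what such evidence looks like, here is a tiny source-level Haskell sketch of my own (the names Equal and castWith are mine, and this is an analogue in source Haskell, not GHC’s actual Core syntax).  Pattern-matching on an equality witness brings a local proof that two types are equal into scope, so the type checker only has to check the proof, not reconstruct it:

    {-# LANGUAGE GADTs #-}

    -- A witness that type a equals type b; Refl can only be built
    -- when the two types really are the same.
    data Equal a b where
      Refl :: Equal a a

    -- Matching on Refl brings the equality of a and b into scope, so
    -- returning a value of type a at type b checks with no inference.
    castWith :: Equal a b -> a -> b
    castWith Refl x = x

    main :: IO ()
    main = print (castWith Refl (3 :: Int))

Core’s coercions play the role of Refl here: the elaborator produces them, and the Core type checker merely verifies them.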

I could go on and mention quite a few other talks I really enjoyed, on decreasing diagrams, strong normalization of System T using non-deterministic reduction, graph-isomorphism hardness of extended alpha-equivalence checking for letrec expressions, and more.  These were all really stimulating.  I want to mention a few other things that happened at the meeting, though.  One that is very important for us here is that the CoCo ’13 confluence-checking competition successfully ran on StarExec on Friday, June 28th.  This was the first competition to use StarExec, and we were delighted (and relieved!) that CoCo ran smoothly on it, in real time during a workshop at RDP.  The organizers, Harald Zankl and Nao Hirokawa, had a backup run they had already computed offline (with StarExec), but fortunately they did not need to use it.  So this was great for them, and definitely great for StarExec.

One other thing that happened is that I was elected to the RTA Steering Committee (SC) during the business meeting for the conference.  I am really appreciative of the warm reception the rewriting community has given me, and I feel a big responsibility to try to help guide the conference as a member of the SC.

One immediate issue before the community is the question of pursuing a tighter connection with TLCA.  Every two years the two conferences meet together as RDP (Rewriting, Deduction, and Programming).  Next year, at the mighty FLoC ’14 in Vienna (itself just one component of the Vienna Summer of Logic, wow), the conferences will be combined: the name will be “RTA/TLCA”, but there will be a single program committee and a single track for submissions.

I have to say that based on what I have seen so far, and on talking with others in the rewriting community at RTA, I am not sure a tighter merger (as in just a single conference) is a good idea.  The problem is that while the two conferences share deep roots in lambda calculus and reduction semantics, they have gone in quite different directions.  Most of the TLCA talks are about denotational semantics of typed languages, and many rely heavily on categorical constructions.  This time there were only a couple of TLCA papers, in my opinion, that were likely to be accessible to RTA attendees.  Maybe I am wrong and more RTA folks are steeped in categorical semantics than I am guessing, but it seems a bit unlikely, at least judging by the typical RTA papers, which are much more focused on reduction semantics.  RTA concentrates on the traditional problems of confluence and termination, as well as other algorithmic questions about term-rewriting systems, like approximating the set of terms reachable by rewriting from an initial set of terms using a set of rules, or unification algorithms for various theories.  Denotational semantics does not play much of a role.

So I am a bit worried that a combined conference would not be too successful.  At FLoC it won’t matter much, I think, because people can come and go from one meeting to another, so the audience will be quite dynamic.  But if an RTA/TLCA met by itself, I think half the talks would be inaccessible or uninteresting to half the audience, all the time.  So I am not in favor of combining more tightly than the RDP joint conference.  But we will see how the discussions go.

Alright, I hope all of you reading this are having a good summer so far.  Till next time.


One Comment

  1. I know I was there, but you raised an interesting point, one that I have been pondering a bit myself.

    Rewriting research does not use categorical approaches very often. I think it would be cool to try this out. It seems it might be possible to get termination proofs using presheaves, but I am not sure about other properties. For example, what would a categorical proof of confluence look like? An interesting question. There is some work along these lines, however:

    http://link.springer.com/chapter/10.1007%2F3-540-61697-7_8

    http://www.lfcs.inf.ed.ac.uk/reports/98/ECS-LFCS-98-396/

    http://citeseer.uark.edu:8080/citeseerx/viewdoc/summary?doi=10.1.1.37.5374

    http://www.cs.man.ac.uk/~banach/some.pubs/Fib.Sem.ETGRS.ps.gz

    http://hal.archives-ouvertes.fr/docs/00/56/07/48/PDF/macy.pdf

    Those are the papers I have. I am sure there are more.

    This might be an interesting application, and it may offer insight into higher-order rewriting.

    Cheers,
    Harley

