

The following is a report from CS doctoral student Andrew Marmaduke, working with me here at U. Iowa, on his attendance at Devcon4:

Devcon4 is the Ethereum developer conference that was hosted in Prague. Thanks to my adviser Aaron Stump and the grant awarded to him by the Ethereum Foundation, I was able to attend. Moreover, because I was representing The University of Iowa as a grantee, I also got to sit down in the Grantee Showcase and talk a little bit about Cedille with those who were interested. To start off, I’ll share my first impressions of Devcon4.

It would be a mistake, I think, not to mention the city of Prague when talking about first impressions. Prague is a very beautiful city, and the conference (held at the Prague Conference Centre) was outside of Prague 1, the tourist hotspot. I spent every day walking to and from the conference. Honestly, it felt like any other major city, but the architecture was quite beautiful even outside of Prague 1.

Prague is known for its beer, and I heard there was no shortage of parties and drinking after each conference day ended. It felt informal in a good way. That’s my first standout impression of Devcon4. With idle chatter, friends reuniting, and friendships sprouting, it wasn’t all business or academics. And have I mentioned the light show performance room? Every day during the conference there was a decompression room where a performer sitting in the center played several ethereal-sounding instruments (chimes, low-frequency drums, etc.) with a column of lights hanging above her and pillows and comfy carpets surrounding her. I know the carpets were comfy because I spent an hour or two decompressing on the last day!

However, the conference also had a distinctly business feel. I’ve been to my fair share of academic conferences and had attended a developer conference or two before Devcon4, but I’ve never had an investor ask me for an elevator pitch. Needless to say, if you’re a researcher keen to network inside the blockchain business world, then I would wager it’s worth your time to attend Devcon5.

Aside from first impressions, I arrived at the conference with a particular skepticism about blockchain and the vision of Ethereum. I set a goal for myself to try to resolve that skepticism and to cut through the hype. I feel like I succeeded. Indeed, I now believe the future is bright for some kind of cryptographically foundational technology that other solutions are built on top of; let’s talk briefly about why. Before we begin, I want to make it clear that I am not a blockchain researcher, and any speculation I offer should be taken with a grain of salt.

First, there seem to be some significant problems inherent to the fully distributed model used in blockchain. Obviously, the Byzantine Generals Problem is the principal issue addressed by Bitcoin and other blockchain solutions, but Phil Daian gave a talk critical of Ethereum, highlighting that there is more to the story. Phil wrote up his thoughts in a blog post, which may be an easier way to consume his ideas until the video recordings of the talks at Devcon4 are live [1]. To summarize, on-chain voting is a difficult, possibly impossible, problem to solve without allowing for effective vote buying. Moreover, nefarious decentralized entities (named Dark DAOs by Phil) could pool resources and damage the stability of a blockchain.

The situation does not seem hopeful after reading Phil’s opinion on Ethereum! However, Vitalik Buterin, a co-founder of Ethereum, gave a roadmap talk about the future of Ethereum that alleviated some concerns. In his talk he discussed proof of stake, rent for long-term storage, layer-two solutions, and many other things, but the key takeaway for me was that Ethereum is willing to acknowledge and solve problems illuminated by people like Phil. For now, the problems the developers at Ethereum have been trying to solve are scalability-focused, and the story there is promising. In particular, I would like to highlight the notion of “Layer 2”.

The idea behind Layer 2 is that you can sacrifice some properties given to you by the blockchain (by moving some operations off the chain) in order to gain others. For instance, it is generally accepted that state transitions on-chain are expensive and impractical, but two parties could agree to compute these state transitions off-chain while maintaining hashes of the state to keep each other honest. This means that if one party decides to lie, the full weight of the blockchain will be needed to resolve the dispute; but if both parties stay in agreement, the entire exchange might amount to just a few transactions.
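
To make the hash-commitment idea a little more concrete, here is a toy OCaml sketch entirely of my own invention (not any real protocol); the stdlib’s Digest module (MD5) stands in for a proper cryptographic hash:

  (* Commit to a state by publishing only its hash. *)
  let commit (state : int) : string =
    Digest.to_hex (Digest.string (string_of_int state))

  (* One off-chain step: apply the agreed transition and exchange
     commitments, never the raw state. *)
  let step transition state =
    let state' = transition state in
    (state', commit state')

  let () =
    let s0 = 100 in
    (* Both parties independently apply the same agreed transition. *)
    let a, ha = step (fun s -> s - 30) s0 in
    let b, hb = step (fun s -> s - 30) s0 in
    if ha = hb then
      Printf.printf "agreed off-chain on state %d (= %d)\n" a b
    else
      (* A mismatch is the signal to escalate the dispute on-chain. *)
      print_endline "dispute: settle on the chain"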

As far as the practicality of blockchain and Ethereum goes, I’m not sold that the technology is quite there yet. In my opinion, a successful approach would admit trusted parties while trying to balance the scales between those trusted parties and the greater decentralized network. However, as previously stated, I am not a blockchain researcher, and it is possible that there is still a satisfactory solution that is completely decentralized.

The state of Ethereum notwithstanding, there are some cool projects and technologies that I also learned about at Devcon4. As one might expect from cryptography, mathematical results once cast as having essentially no practical applications turned out to have practical applications. In this case, that application is zkSNARKs and zkSTARKs. The acronym zkSNARK stands for zero-knowledge succinct non-interactive argument of knowledge. These mechanisms for automated proof verification are being applied to blockchain protocols like Zcash and potentially other proof-of-stake protocols. A very broad and quick overview of zkSNARKs is the “Where’s Waldo?” example.
Imagine that a child wants their parent to prove to them that they have found Waldo. The parent, of course, doesn’t want to ruin the puzzle for the child by revealing where Waldo is, but also desires to demonstrate that the puzzle is solvable. To solve the problem, the parent cuts a Waldo-sized hole out of a large piece of cardboard and places the puzzle behind it. Now the child can see only Waldo, and not the relative position of the puzzle behind the board. The child has verified the knowledge of the parent without the parent giving up the construction of that knowledge, so to speak. It’s a fascinating field, and one wonders whether there is anything here for formal verification tools to learn from.

Another interesting project, which also happens to be a grantee of the Ethereum Foundation, is Gitcoin. Gitcoin is a Dapp (a decentralized application built from smart contracts on the Ethereum blockchain) that allows users to post bounties on GitHub issues for other users to work on and then claim. In my opinion, Gitcoin is an admirable effort to energize the open source community by changing the story from volunteer passion work to actual work! Ideally, this project could help those who simply can’t volunteer their time find a foothold in the open source community by financing their contributions.

Finally, the Solidity team at Ethereum showcased the beginnings of their SMT integration. It wasn’t explicitly stated in the talk which SMT solver was being used, but based on some file names shown in the source code during the demo, I can confidently say they’re exploring Z3 and CVC4. From my understanding, smart contracts are modeled in functional units, so the pre- and postconditions of one function aren’t visible to the SMT solver from inside a separate function. It seemed to me like a good start. All the same, it was encouraging to see a team greasing the gears of automated formal verification in a programming language.

And that’s all! Devcon4 was a lot of fun, and there was a lot to discover. The video recordings of talks are supposed to go up around two to four weeks after the final day of Devcon4. I recommend giving the talks a glance and watching anything you find interesting; there was a lot going on that I missed!

[1] http://hackingdistributed.com/2018/07/02/on-chain-vote-buying/


The following is a report from CS doctoral student Chris Jenkins, working with me here at U. Iowa, on his attendance at IFL ’18.

 

IFL 2018 Highlights

A couple of weeks ago I had the pleasure of attending IFL 2018, the 30th Symposium on the Implementation and Application of Functional Programming Languages, where I would be presenting the type inference system Aaron and I developed for Cedille. I found the talks given over the course of the symposium to be for the most part quite interesting, and even more so the other attendees and the dinner-time discussion, and so I thought I might share some of the highlights with QA9’s readership.

First I would like to mention that the venue this year was quite lovely: the UMass Lowell Inn and Conference Center. Lowell is a quiet little town 30 miles outside of Boston, where the streets curve through hills and pass under the over-hanging branches of tall trees. The inn itself was quite nice, situated as it was just south of a canal that feeds into the Concord River. It was very lovely!

9/5: Haskell GADT Tutorial

The first day was a rather friendly and informal tutorial on the advanced usage of GADTs in Haskell, taught by José and David from Galois. The first part of the session laid the foundation for developing a balanced two-three tree whose depth was encoded at the type level, and for writing an insert function for such a type (after all, the depth of the resulting tree might change!). Being more familiar with Agda than Haskell, I was more curious about (and in some cases, amused by) the bag of tricks Haskell requires to simulate programming with dependent types (here, unary numbers to represent the tree depth). There was also some healthy skepticism in the audience about whether the invariant we were trying to preserve (that the depth remains the same, or grows by one, after an insert) was worth the extra complexity required. The topic continued to be discussed during lunch at my table – which is perhaps one of the greatest pluses of attending such conferences in the first place! The latter half of the tutorial ramped up to the more involved example of developing a verified type-checker for a toy language of booleans and integers. Both repositories used in the tutorial can be found here and here, for anyone interested.
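
For a flavor of the depth-indexing trick, here is a minimal reconstruction of my own in OCaml (which also has GADTs), simplified from two-three trees to perfect binary trees; it is not code from the tutorial:

  (* Type-level unary naturals: never inhabited, used only as indices. *)
  type zero
  type 'n succ

  (* A tree whose depth is part of its type: both subtrees of a node
     are forced to have the same depth index. *)
  type _ tree =
    | Leaf : zero tree
    | Node : 'n tree * int * 'n tree -> 'n succ tree

  (* Insertion may keep the depth or grow it by one, so its result type
     must hide which happened -- the wrinkle the tutorial wrestled with. *)
  type 'n grown =
    | Same : 'n tree -> 'n grown
    | Grew : 'n succ tree -> 'n grown

  (* The index really does track the depth: *)
  let rec depth : type n. n tree -> int = function
    | Leaf -> 0
    | Node (l, _, _) -> 1 + depth l

  let depth_after_insert : type n. n grown -> int = function
    | Same t -> depth t
    | Grew t -> depth t

  let () =
    let t = Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf)) in
    Printf.printf "depth = %d, grown = %d\n"
      (depth t) (depth_after_insert (Grew (Node (t, 4, t))))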

9/6: First Day of Talks

The keynote for the first day was given by Adam Chlipala. In his talk, he laid out what (to me, at least) seemed like a radical call to re-evaluate the role of the “standard computational model” of functional programming languages. The main idea was that most programmers are working in a domain where the products being produced are variations on a theme, rather than wholly new works, and specifically variations within a well-defined domain. We look to FP as a way to provide certain correctness guarantees, but the general-purpose run-time efficiency leaves something to be desired. Using SQL as an example, Adam then proposed that not only would domain programmers be happier with domain-specific languages, but compiler writers would be, too! If the compiler could be provided with rewrite rules (proven to be sound) that hold for the domain, this could allow us to more easily (and safely) explore different approaches to program optimization, and in the process leave the “standard cost model” of functional programming behind. Quite the talk to mull over!
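
Map fusion is a familiar small instance of the kind of sound, domain-specific rewrite rule Adam had in mind (my example, not one from the talk). In OCaml, the rule and a quick sanity check:

  (* The rewrite: map f (map g xs) ==> map (f . g) xs,
     eliminating the intermediate list. *)
  let fused f g xs = List.map (fun x -> f (g x)) xs

  let () =
    let xs = [1; 2; 3] in
    let lhs = List.map succ (List.map (fun x -> 2 * x) xs) in
    let rhs = fused succ (fun x -> 2 * x) xs in
    assert (lhs = rhs)  (* both are [3; 5; 7] *)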

In addition to the keynote, I enjoyed almost all of the talks given. To single out just one of the many, the talk “Continuation Passing Style and Primitive Recursive Functions” by Ian Mackie came from quite an interesting observation. This appears to be one of those cases where one reads something in a textbook and asks “But wait, why is that the case?” – and maybe a decade or so later, you start to have an answer! To start, we know that any program you want to write can be given in continuation-passing style; for simple programs like multiplication (through repeated addition), the CPS version can be further simplified into an accumulator style. The unanswered question was “which CPS programs can be ‘optimized’ into an accumulator style?” In this talk, Ian gave the sound (though not necessarily complete) answer that all primitive recursive functions can be written this way, taking a detour into linear type theory along the way. Very interesting!
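
A small OCaml rendering of the multiplication example (my own sketch, not Ian’s code):

  (* Multiplication as repeated addition, in CPS: the continuation
     stacks up the additions still to be performed. *)
  let rec mult_cps m n k =
    if m = 0 then k 0 else mult_cps (m - 1) n (fun r -> k (n + r))

  (* The accumulator version: pending additions are performed eagerly
     instead of being deferred in closures. *)
  let rec mult_acc m n acc =
    if m = 0 then acc else mult_acc (m - 1) n (acc + n)

  let () = assert (mult_cps 6 7 (fun x -> x) = mult_acc 6 7 0)  (* both 42 *)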

9/7: Second Day of Talks

The keynote of the second day was given by Arjun Guha, and I actually noticed some broad similarity with Adam’s talk. Arjun presented work on verifying different network layers, starting with the network policy layer and NetKAT, a high-level (and principled) language for describing policies for accepting, forwarding, and filtering packets. He also talked about PANE (short for PArticipatory NEtworks), a paradigm for the configuration of Software Defined Networks (SDNs) that is able to handle live changes to network policies and dynamic resource usage. There was also work on verifying that Puppet configurations (Puppet being a language for declarative specifications of a system’s configuration) have such desirable properties as determinacy (the configuration is always executed in the same way) and idempotency (a second execution of a configuration produces no change on the system). There was much more, of course, but my big take-away from this and from Adam’s talk was that there is a great opportunity for formal methods to really shine in restricted domains.
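
To pin down the idempotency property, here is a toy OCaml rendering of my own (nothing to do with the actual verification machinery): a configuration step is idempotent when running it twice leaves the mock system state exactly as running it once does.

  module FileSet = Set.Make (String)

  (* Ensure a file exists: a classic idempotent resource. *)
  let ensure_file name fs = FileSet.add name fs

  (* Append a line to a log, modeled as a counter: not idempotent. *)
  let append_line (fs, count) = (fs, count + 1)

  let () =
    let fs = FileSet.singleton "/etc/hosts" in
    let once = ensure_file "/etc/motd" fs in
    let twice = ensure_file "/etc/motd" once in
    assert (FileSet.equal once twice);
    assert (append_line (append_line (fs, 0)) <> append_line (fs, 0))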

If I must arbitrarily pick one of the many good talks given today, the one closest to my own interests would have to be “Delta Debugging Type Errors with a Blackbox Compiler” by Joanna Sharrad et al. The premise is intriguing – how much can we improve type errors by treating the compiler as a black box? Using Haskell and GHC, they showed an interesting approach of bisecting programs with type errors to find the smallest type-correct program, and the smallest change on top of this that produces a type error, using this more minimal example to help inform the programmer. Now, I do not think that language-agnostic tools will ever beat gnostic ones as far as providing useful feedback – but blackbox techniques open up the possibility of experimenting with different error reporting across many different languages, which, as someone interested in improving type error messages (and the usability of languages with advanced type features, more generally), certainly caught my interest!

This past Thursday and Friday (Oct. 10th and 11th), I organized a mini-symposium on Programming Languages here at U. Iowa.  I invited doctoral students from Indiana, U. Penn., Portland State, and Northwestern to come down and give talks on their research in PL.  Robby Findler from Northwestern also brought his fall PL seminar down — so we had a big group of Racketeers here!  Sam Tobin-Hochstadt gave the keynote, which was simultaneously our departmental colloquium.  Why did I call this a mini-symposium?  Isn’t a “symposium” in the ACM taxonomy just a meeting that is between the size of a workshop and a conference?  I was thinking of Plato’s symposium here, where the emphasis is on lively discourse in pursuit of truth (about love, in Plato’s case, and involving quite a bit of drinking — while our focus was more technical, we did go for local beer).  So we had 45 minutes blocked out for each talk, and lots of breaks and long lunches.  I was really happy with the results, as we did ask a lot of questions, ranging from basic-knowledge questions from folks newer to a particular topic, to quite advanced reflections.  The talks were all really great, too, with the possible exception of my own, which was too jammed with rules and derivations.  Anyhow, it was a very fun event, and I plan to ask for participant travel funds from NSF — which paid for this edition — in my future grant proposals.  You can find the program, with titles and abstracts, and even a group picture, here.

Last week I attended RDP 2013 in Eindhoven, the Netherlands.  RDP 2013 consisted of two conferences, RTA (Rewriting Techniques and Applications) and TLCA (Typed Lambda Calculi and Applications), and a number of workshops; I attended most of COS (Control Operators and their Semantics), which ran on the first and second days of RTA, and also the meeting of IFIP Working Group 1.6 on rewriting.  My doctoral student Harley Eades and I each gave talks in COS, and I gave a talk in the working group meeting.

For me, attending this RDP was a really great experience.  Everything ran really smoothly with travel, for starters.  Flying from Cedar Rapids to Chicago to Amsterdam, and then taking a 1.5-hour train ride to Eindhoven, was easy and expeditious.  The conference and hotel were just a 5-10 minute walk from the train station in Eindhoven, which was very convenient, too.  The hotel was really quite nice, although like so many hotels I’ve stayed at, the wireless network was a bit flaky.  Also, one had to know to request the remote control for the thermostat from the front desk — not something American travelers expect to have to do to operate the air conditioning unit.  The local organization of RDP ’13 was also very good.  There was a nice big building in which the meetings took place.  The key organizational aspects, in my opinion, were that lunch was hosted on site (so time was not wasted wandering around for a restaurant in the middle of the day), and that there were lots of places to sit and do some work or talk with other participants.  All in all, the local organizers did a very good job with the meeting.

Anyhow, on to the technical side. COS was a great meeting to attend. There were invited talks by Matthew Flatt and Thomas Streicher.  It is a testimony, perhaps, to the inherent interest of control operators that a single workshop could accommodate talks from such different parts of Computer Science.  Matthew’s talk did a wonderful job explaining how control operators like call/cc, and an impressive catalog of related programming-language constructs, were useful in practice and complex to implement in combination with each other.  I finally learned from Matthew’s talk what dynamic-wind is, which I had seen referenced in a couple papers.  It is similar to try-catch-finally in Java, except that the semantics of dynamic-wind e1 e2 e3 is that we will execute e2, but whenever we do so, we will execute e1 before e2, and e3 after e2.  The key point is that if exceptions are raised (or more generally, a continuation is called that takes us out of the context in which we are evaluating e2), then we will make sure to execute e3 before we execute the next thing after e2.  So this is like the finally block of a try-catch-finally statement in Java.  But intriguingly, it is also necessary to execute e1 whenever we re-enter computation of e2, which could happen by calling a continuation.  This has no obvious counterpart in the Java construct.

Thomas’s talk was about denotational categorical semantics related to control operators, using ideas from Krivine’s classical realizability (a form of compositional semantics for types which works for classical logic, and not just intuitionistic logic).  It was a fun talk, because Streicher is a very engaging speaker.  I am afraid that my category theory was not up to following what he had to say in detail, but I still enjoyed the talk as I caught glimmers of different interesting technical ideas in the categorical setting he is using.
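
Returning to dynamic-wind for a moment: the exception half of its semantics is easy to sketch in OCaml (my own approximation; the continuation re-entry behavior is exactly the part it cannot express):

  (* Run e1, then e2; run e3 whether e2 returns normally or raises.
     Re-running e1 when a continuation jumps back into e2 is the part
     this sketch cannot capture. *)
  let dynamic_wind e1 e2 e3 =
    e1 ();
    match e2 () with
    | v -> e3 (); v
    | exception e -> e3 (); raise e

  let () =
    dynamic_wind
      (fun () -> print_endline "enter")
      (fun () -> print_endline "body")
      (fun () -> print_endline "exit")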

And there were several other talks at COS ’13 quite relevant for our current work, in particular one by Kakutani and Kimura on the Dual Calculus with inductive and coinductive types.  It seems that due to some fundamental issues which show up in the collapse of control categories for interpreting computational classical type theory (CCTT), it is difficult if not impossible to have inductive and coinductive types coexisting in the same CCTT.  I can’t say I fully understand even how the categorical collapse (where the relevant category becomes just a pre-order, and there is at most one morphism from any source to any target object) leads to problems in the definition of the language.  But it is a bit disturbing.  Kakutani and Kimura proposed to tackle this by building up a restricted form of induction from coinduction, within the language.  The treatment was again quite categorical.  I would have to beef up my understanding of categorical semantics to understand what is happening here.

We are working here at Iowa towards a similar goal, but using a type theory based on bi-intuitionistic logic instead of classical logic.  It’s a matter for another post to give a real introduction to bi-intuitionistic logic, but the brief summary might be that it supports an operator dual to implication, which we call simply negative implication (the usual term is subtraction), where A negatively implies B is true in the current world (of a Kripke model) if there is an earlier world where A is false and B is true.  The logic exhibits some classical features, but retains (we conjecture) canonicity for certain types.
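
In Kripke-model notation, that clause for negative implication might be written as follows (my transcription of the sentence above, not necessarily the notation we will settle on; “earlier” means lower in the ordering of worlds):

  w \Vdash A \rightarrow^{-} B \quad\iff\quad \exists w' \le w.\; (w' \nVdash A) \;\text{and}\; (w' \Vdash B)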

At RTA and TLCA, I saw a number of really cool talks.  Simon Peyton Jones gave a joint (between RTA and TLCA) invited talk about the “Core” intermediate language of Haskell, and how it uses explicit evidential expressions to show a simple type checker why expressions elaborated from Haskell input are indeed typable, without needing to invoke a complex type inference algorithm.  The inference algorithm is earlier in the compiler pipeline, and generates these intermediate expressions with their evidential annotations.  The evidence expressions are there to show why one type equals another, and they, with explicit annotations on lambda-bound variables, are sufficient to encode all the fancy typing features that Haskell programs use.  And of course, Simon is a fantastic speaker, and his talk was, I am sure, a real highlight of the whole RDP.
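
A toy OCaml analogue of evidence-passing may help make the idea concrete (my own illustration; GHC’s actual Core coercions are far richer):

  (* Refl is the only constructor of (a, b) eq, so possessing a value
     of that type is evidence that a and b are the same type. *)
  type (_, _) eq = Refl : ('a, 'a) eq

  (* A checker holding such evidence may coerce without re-running any
     inference -- loosely the role coercion terms play in Core. *)
  let cast : type a b. (a, b) eq -> a -> b =
    fun ev x -> match ev with Refl -> x

  let () = Printf.printf "%d\n" (cast Refl 42)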

I could go on and mention quite a few other talks I really enjoyed, on decreasing diagrams, strong normalization of System T using non-deterministic reduction, graph-isomorphism hardness of extended alpha-equivalence checking for letrec expressions, and more.  These were all really stimulating.  I want to mention a few other things that happened at the meeting, though.  One that’s very important for us here is that the CoCo ’13 confluence-checking competition successfully ran using StarExec on Friday, June 28th.  This is the first competition to use StarExec, and we were delighted (and relieved!) that CoCo ran smoothly on it, in real time during a workshop at RDP.  The organizers, Harald Zankl and Nao Hirokawa, had a backup run they had already computed (with StarExec) offline, but fortunately they did not need to use it.  So, this was great for them, and definitely great for StarExec.

One other thing that happened is that I was elected to the RTA Steering Committee (SC), during the business meeting for the conference.  I am really appreciative of the warm reception the rewriting community has given me, and I feel a big responsibility to try to help guide the conference as a member of the SC.  One immediate issue that is before the community is the question of pursuing a tighter connection with TLCA.  Every two years the two conferences meet together as RDP (Rewriting Deduction and Programming).  Next year, at the mighty FLoC ’14 in Vienna — itself just one component of the Vienna Summer of Logic (wow) — the conferences will be combined.  The name will be “RTA/TLCA”, but there will be a single program committee, and a single track for submissions.  I have to say that based on what I have seen so far, and talking also with others in the rewriting community at RTA, I am not sure a tighter merger — as in just a single conference — is a good idea.  The problem is that while the two conferences share deep roots in lambda calculus and reduction semantics, they have gone in quite different directions.  Most of the TLCA talks are about denotational semantics of typed languages, and many rely heavily on categorical constructions.  This time there were only a couple TLCA papers, in my opinion, that were likely to be accessible to RTA attendees.  Maybe I am wrong and more RTA folks are steeped in categorical semantics than I am guessing — but it seems a bit unlikely, at least judging by the typical RTA papers, which are much more focused on reduction semantics.  The traditional problems of confluence and termination, as well as other algorithmic questions related to term-rewriting systems — like approximating the set of terms reachable by rewriting from an initial set of terms using a set of rules, or unification algorithms for various theories — are more the focus of RTA.  Denotational semantics does not play much of a role.  So, I am a bit worried that a combined conference will not be too successful.  At FLoC it won’t matter, I think, because people can come and go from one meeting to another, so the audience will be quite dynamic.  But if there were only an RTA/TLCA by itself, I think half the talks would not be accessible or interesting to half the audience, all the time.  So I think I’m not in favor of combining more tightly than the RDP joint conference.  But we will see how the discussions go.

Alright, hope all you who are reading this are having a good summer so far.  Till next time.

Well, it’s been quite some time since I posted anything, because this was one busy semester (even without coffee spoons).  A lot has been going on here at U. Iowa to keep me from finding time for blogging.  One of the biggest things, which might be of interest to some, is the big bump in enrollment, which expanded my undergraduate Programming Languages class by 50%, from around 60 registered students to 90.  This is because it is a required class for majors, and our enrollments are up a lot, as at a lot of other places.  The shocker to me was just how much email the class generated.  There were 4 programming assignments, 12 short assignments, a midterm, and a final.  Most of the emails for the class were, I believe (but have not tried to confirm), related to the programming assignments.  Since I keep all my email, I can tell you that the course generated around 1,075 emails, of which I personally wrote around 430.  For a class that runs 15 weeks (about 105 days), that comes out to me writing a little over 4 a day on average.  Seems like a lot.

Is it worth it?  Yes.  It is a very fun class to teach.  I try to cover some basic informal ideas from Programming Languages from an implementor’s point of view, using a functional programming language.  This has been OCaml for me for the past 4-5 years.  I did introduce Haskell and Coq, too, at the very end of the class.  I am actually toying with the idea of moving to Agda for Spring 2014, since Agda promises to give me the coding elegance of Haskell, in particular definition by recursive equations, as well as verification power as in Coq.  Many students, who had learned programming through Java and Python, found programming in OCaml to be a mind-blowing experience.  Imagine what it would be like in Agda!  Maybe even too mind-blowing?  We will see if I am brave enough to try the experiment.  I have to learn Agda first, which hopefully I’ll manage this summer.

One example assignment we did was to write a compiler to Processing from a high-level language (“rsketch”) similar in spirit to Logo, but using notation from regular expressions.  This makes a nice connection back to the class, since I learned about Processing from a former student, JJ Meyer.  I quite liked these two rsketch programs, as well as many others (students had to compete for “audience favorite” for 5 points).  In the notation, f is for forward; r n for rotation by n degrees; cr+ n for increasing the red component of the drawing color by n; l+ n for increasing the length of the segment to draw by n; etc.:

  ( f r 20 cr+ 4 w+ 1 l+ 1 cg+ 5 cb+ 3 f r 255 ) * 1200

and

  ( h ( f r 30 l+ 1 cb+ 1 ) * 80 l- 80 r 5 cg+ 30 ) * 120
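
For the curious, here is a guess at what the core of such a compiler might look like in OCaml.  The AST covers only the commands glossed above, and the shape of the emitted Processing code (mutable r, g, b, x, y, a, len variables established by a preamble) is hypothetical:

  type cmd =
    | Forward                   (* f    : draw a segment and advance  *)
    | Rotate of int             (* r n  : rotate heading by n degrees *)
    | RedPlus of int            (* cr+ n : bump the red component     *)
    | LenPlus of int            (* l+ n : bump the segment length     *)
    | Repeat of cmd list * int  (* ( ... ) * n : repetition           *)

  (* Emit straight-line Processing code, unrolling repetitions. *)
  let rec emit (c : cmd) : string list =
    match c with
    | Forward ->
      [ "stroke(r, g, b); line(x, y, x + len*cos(a), y + len*sin(a));";
        "x = x + len*cos(a); y = y + len*sin(a);" ]
    | Rotate n -> [Printf.sprintf "a = a + radians(%d);" n]
    | RedPlus n -> [Printf.sprintf "r = r + %d;" n]
    | LenPlus n -> [Printf.sprintf "len = len + %d;" n]
    | Repeat (cs, n) ->
      let body = List.concat (List.map emit cs) in
      List.concat (List.init n (fun _ -> body))

  let () =
    (* Roughly the skeleton of the first program above. *)
    List.iter print_endline (emit (Repeat ([Forward; Rotate 20; RedPlus 4], 3)))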

This past week we held a mini-symposium here at U. Iowa.  Cody Roux, postdoc’ing with Jeremy Avigad at CMU, and Muhammad Nassar, a doctoral student working with Geoff Sutcliffe at U. Miami, were here for the week.  We also had talks from a number of us here, as well as from John Altidor, a doctoral student from U. Mass. Amherst who happened to be in Iowa City for the semester and joined our group for that time.  I talked a lot of type theory with Cody, and worked on StarExec in support of TPTP with Muhammad.  It was quite productive.  I really liked hearing about Cody’s research agenda on the structural theory of PTSs, where he looks at which operations on PTSs preserve normalization.  This was very cool, and Cody had some great technical results and insights about things like why adding individual quantification to F_omega to formulate the Calculus of Constructions does not increase the proof-theoretic strength of the system.  This is work in progress, it seems — hopefully a paper will appear soon.

So, that’s a short update.  I have something more substantial to blog about soon, namely dualized type theory, which we — Harley Eades III, Ryan McCleeary, and I — are hard at work developing.  But that will have to wait till later in the week or next week.