28 January 2014

Awards and Invited Talks at POPL 2014

These are my notes from attending presentations at POPL and associated events. This post is about the non-regular talks I attended.

The Essence of John Reynolds. This session was a series of three talks in the memory of John Reynolds. The presenters were Stephen Brookes, Peter O'Hearn, and Uday Reddy.

Stephen Brookes told us a nice analogy that Reynolds used to make. Research is like picking apples. Some researchers reach for the low hanging fruit. Other researchers work hard to get the apples at the top. Great researchers spot new trees.

Peter O'Hearn gave one of those talks where I can't take notes. More precisely, one of those talks where I forget to take notes, because I'm busy watching the performance. Anyway, what I learned from the talk is that Reynolds didn't stop working to get coffee. I should revise my habits. Also, I learned that the papers Reynolds published in 1970–1985 are classics, but the period 1985–2000 was ‘slower’.

Uday Reddy recommended that we spend some quality time with Reynolds' papers. Then he speculated that the logical relations of Reynolds will find other applications soon. In mathematics, we need to clarify how they relate to Galois connections. But other areas of science might make use of them as well. (There was a paper at this POPL that used logical relations as symmetries in physics.)

Milner Award: Lars Birkedal presented Modular Reasoning about Concurrent Higher-Order Imperative Programs. The talk summarized several techniques for reasoning about programs. The techniques support various programming language features, in various combinations. The targeted features are: higher-order functions, mutability, and concurrency. Higher-order types enable reasoning in the presence of unknown code. For example, the map function, which applies a function f to all the elements of a list xs, is higher-order, and you need to reason about its implementation without looking at the body of f. Mutability is present in almost all programs. Concurrency is important for performance, and because people live on different continents.
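
For concreteness, here is the usual definition of map, in OCaml. (The code is my illustration, not from the talk.) Whatever we prove about map must hold for every f, so the proof cannot peek inside f; that is what reasoning in the presence of unknown code means here.

    (* Apply f to each element of the list xs, keeping the order. *)
    let rec map f xs =
      match xs with
      | [] -> []
      | x :: rest -> f x :: map f rest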

It seems to me that the basic idea is to build what Lars Birkedal called a space of semantic types. If I understood correctly, this is a way to systematically organize all the values that could be constructed. Of course, in a higher-order setting, values include programs. These values are slotted first by their type, which is the type of their result. But, second, they are also slotted by a bound on computation steps. The bound does not mean that the execution will terminate within that bound. Instead, it means that the execution will not fault within that many steps. This way of organizing values is called step indexing, because the bound is a number of execution steps.
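
As a rough sketch of the function-type case (the notation is mine, and surely oversimplifies): $f \in [\![\tau_1 \to \tau_2]\!]_n$ iff for all $k < n$ and all arguments $v \in [\![\tau_1]\!]_k$, running $f\,v$ for up to $k$ steps does not fault. Because the index strictly decreases, such definitions stay well-founded even for recursive and higher-order types.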

What can you do with these reasoning techniques, once you establish them? One example that was given is the following. In 1988, Lucassen and Gifford proposed a type system with effects that checks whether a piece of code is safe to parallelize. The paper describes how it works. That the effects represent regions of memory that can affect a piece of code is ‘obvious’. But how do you prove it? That's where the kind of techniques that Lars and his collaborators developed come in.
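
As a hedged sketch of the idea (my notation, not necessarily the paper's): judgments have the shape $\Gamma \vdash e : \tau \,!\, \varphi$, where the effect $\varphi$ records which regions of memory $e$ may read or write. Two expressions are safe to run in parallel when their effects do not interfere, for example when neither writes a region that the other reads or writes. Stating the check is easy; proving it sound with respect to the semantics is where the semantic machinery earns its keep.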

At the end, Peter O'Hearn asked how these techniques relate to what Dana Scott does. If I understood the answer, the techniques are rather incomparable, because one works in the context of operational semantics, while the other works in the context of denotational semantics.

SIGPLAN Achievement Award: Patrick and Radhia Cousot; Patrick Cousot presented ‘A Galois Connection Calculus for Abstract Interpretation’. The goal of a static analysis is to explore all possible executions. The abstract interpretation approach, which is standard, says that you need to run the program over abstract states, rather than concrete ones. These abstract states must have certain properties (lattice structure, a widening operator, connection with the concrete states, and so on). But, abstract interpretation doesn't say how to choose these domains of abstract states. There are, however, several basic examples, such as interval domains for integers. In this talk, Cousot emphasized that there are also several ways to combine abstract domains to obtain other abstract domains. I don't remember if Cousot used this example in the talk, but there's certainly a paper about it: suppose you have an abstract domain for values of type $\alpha$; then, there's a way to automatically construct an abstract domain for values of type $\alpha\;{\rm array}$.
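
For the record, the standard definitions. A Galois connection between a concrete domain $(C, \subseteq)$ and an abstract domain $(A, \sqsubseteq)$ is a pair of monotone maps $\alpha : C \to A$ (abstraction) and $\gamma : A \to C$ (concretization) such that $\alpha(c) \sqsubseteq a$ iff $c \subseteq \gamma(a)$. The interval domain is an instance: for a set $S$ of integers, $\alpha(S) = [\min S, \max S]$, and $\gamma([l, u]) = \{\, n \mid l \le n \le u \,\}$.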

This approach reminds me of what people do in analytic combinatorics: You know the generating function for some basic families of objects. You also know how a certain operation that generates new objects (say, the cartesian product operation) corresponds to an operation on generating functions. Thus, you have a recipe for obtaining generating functions that describe families of objects generated by some process.
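
A standard instance of the recipe: if $A(z) = \sum_n a_n z^n$ and $B(z) = \sum_n b_n z^n$ count the objects of each size in families $\mathcal{A}$ and $\mathcal{B}$, then the cartesian product $\mathcal{A} \times \mathcal{B}$ is counted by the product $A(z)\,B(z)$, because an object of size $n$ splits into a pair of sizes $k$ and $n - k$, giving $c_n = \sum_{k=0}^{n} a_k\, b_{n-k}$.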

In abstract interpretation, there is a way to obtain a static analyzer for a more complicated domain out of static analyzers for simpler domains.
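
Here is a minimal OCaml sketch of what such a combination can look like. (The module names and the choice of operations are mine; real abstract domains also carry widening, transfer functions, and so on.)

    (* An abstract domain: a partial order with joins. *)
    module type Domain = sig
      type t
      val leq : t -> t -> bool   (* ordering on abstract states *)
      val join : t -> t -> t     (* least upper bound *)
    end

    (* Build the domain of pairs from two component domains. *)
    module Pair (A : Domain) (B : Domain)
      : Domain with type t = A.t * B.t = struct
      type t = A.t * B.t
      let leq (a1, b1) (a2, b2) = A.leq a1 a2 && B.leq b1 b2
      let join (a1, b1) (a2, b2) = (A.join a1 a2, B.join b1 b2)
    end

The array construction mentioned above is in the same spirit: a functor from the domain for $\alpha$ to a domain for $\alpha\;{\rm array}$, with the Galois connections composing along the way.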

At the end, as above, a question from Peter O'Hearn: ‘Is everything a Galois connection?’ By this, I suppose, he meant to ask whether it's sometimes useful to have an abstract domain even if you can't find a Galois connection between it and the concrete domain. (This happens when there exist concrete states that do not have a ‘best’ abstraction.) Someone on Cousot's team who works on the Astree tool answered that indeed in several places they don't have a Galois connection. (I couldn't see who answered.)

The Microsoft Verified Software Award went to the Intel team that verified the Core i7. The engineering effort involved was reported as hundreds of person-years.

The Most Influential Paper of POPL 2004 had the funniest and shortest speech. The academy was thanked, and one of the authors said that ‘in those days we had a blast’. The paper that received the award is ‘Abstractions from Proofs’. After the ‘speech’, which was supposed to fill half an hour, the organizers were left wondering: ‘So … what are we going to do?’ ‘I don't know. What do you want to do?’ ‘Let's do something!’ ‘So … what are we going to do?’ (The dialogue is imaginary, but not too far from what happened.)

Coq got the SIGPLAN award for systems. Gerard Huet told us the history of Coq's early years. He did it in a way that was captivating and slide-less.

The initial goal was to develop a proof system that would let you do real mathematics. This means that things like requiring normal forms are out of the question. It also means that automation is necessary. But, since automation sometimes runs amok, you'd want an interactive way to provide hints. And finally, it must support definitions.

One question was: which programming language to use? The proof system used as a model was LCF, which used ML as the tactic language. So, someone (I forgot who) suggested that Huet and his team use ML for the whole thing. At the time, ML wasn't regarded as a general-purpose programming language. This led to the development of the CAML compiler, in parallel with the development of Coq.

Another difference between Coq and LCF is that Coq produces proof objects, inspired by the Automath project.

An anecdote: The first program that Thierry Coquand wrote was the tactic engine of Coq.

The first usable Coq was released in 1989. Some developments since then include code extraction, well-founded higher-level functions, and better support for introducing notation.

At the end, we were told that the foundations of Coq are unclear, and that the code itself is not certified. (Personally, I think we get carried away with this certification thing: the more important thing is that at some point we end up with a small turtle that we can analyze completely, even if we are humans.)

Formal Methods in Industry. The CTO of Coverity, Andy Chou, gave us his perspective on formal methods. The main message was that finding bugs is not enough — they must be fixed. He told us a story about how they were about to lose a contract because their tool uncovered bugs, but the client's developers didn't fix them. To get bugs fixed, they used a combination of technology and social engineering. First, they introduced a policy of no new bugs — or, rather, no new warnings. Second, they made the tool automatically file bug reports for each warning and assign them to whoever wrote the code. Third, they sent around weekly emails with the top-10 worst developers. Well, the top-10 developers with the most unfixed bugs. (Apparently the top developer was beating everyone else by a huge margin.)

Coverity's static analyzer finds about one issue every 1000 lines of code. Open source projects have slightly fewer issues than commercial code, but not by much. (If you have an open source project in C++ or Java, then you can use Coverity Scan for free.) Some of the reported issues are false positives. You can ask Coverity to try to reduce the rate of false positives at the expense of speed. Apparently, this makes a SAT solver kick in. Since the expensive mode uses a SAT solver, this makes me think that most checks are rather lightweight, probably similar to those in FindBugs.

At the end, one question was what Coverity doesn't do. The answer: dynamic languages, like JavaScript. Another question, from Patrick Cousot: Isn't it bothersome that the tool is unsound? The answer: If you want to be really sure that your code is OK, then you should be using many tools, including things like model checking. In this scenario, Coverity is useful before the last stages of development, to root out bugs quickly and efficiently.

Direct-Manipulation Programming. Bret Victor identified a problem and proposed a solution. The problem: Only people adept at symbol manipulation can create things like Angry Birds. The solution: Invent a way of creating Angry Birds that does not involve symbol manipulation. There exist tools for drawing without having to write code, but not many for specifying behavior without writing code.

You didn't ask, but here's my opinion. I'll explain with an analogy. The problem: Only people who read books are educated. The solution: Invent other ways of transmitting information, such as theater and cinema. My opinion is that what he's proposing is part of the solution. There's nothing wrong with being adept at symbol manipulation, and more people should be. But doing stuff without manipulating symbols has value as well. I like to watch movies, but they should not substitute for books. I like doing MOOCs, but they shouldn't substitute for a real apprenticeship. I like being able to specify the behavior of Angry Birds by moving objects on the screen, but this shouldn't substitute for programming with languages. I think Bret Victor ended up with some Angry Men in the audience (judging by the questions), because he seemed to suggest he wants to get rid of languages.
