03 February 2014

Some Talks from POPL 2014

These are my notes from some of the most memorable talks at POPL. What I found memorable is heavily biased. First, I attended less than half of the talks. (Attending more than half is physically impossible.) Second, I did not read the papers. Third, I found some presentations memorable because their content matched my background, and others because of the presentation style. Fourth, it's likely I paid more attention to talks given by people I know.

Before you read on, I should warn you: most of these notes are based on what I was able to understand during the presentations. It is likely I had many opportunities to misunderstand what was being said, and I'm sure I didn't miss too many of those opportunities. You may wonder why I bother writing these things down, given that they probably contain mistakes. I have a few reasons:

  1. For me, writing these notes forces me to think about the presentations.
  2. For blog visitors, it is an easy way to get a taste of a slice of POPL.
  3. For authors of the papers, it is an opportunity to see how some in the audience understood and misunderstood their work. Feel free to chime in and correct the latter. ☺

Parametric Completeness for Separation Theories. BBI (Boolean bunched implications) is a logic that combines multiplicative operators, as in linear logic, with classical (Boolean) connectives. Its models are commutative idempotent semirings $\langle 2^W,\circ,\cup,\emptyset,E\rangle$. According to Wikipedia there's some variation in what people mean by semirings, so here's more detail:

  • $\langle 2^W,\cup,\emptyset \rangle$ is a commutative monoid (the additive one); it is also idempotent: $w\cup w=w$ for $w\in 2^W$
  • $\langle 2^W,\circ,E \rangle$ is a commutative monoid (the multiplicative one)
  • distributivity: $v\circ(w_1\cup w_2)=(v\circ w_1) \cup (v\circ w_2)$ for $v,w_1,w_2 \in 2^W$
  • zero: $w\circ\emptyset=\emptyset$
Heap models are a special case. Take $W$ to be the set $V \rightharpoonup V$ of finite partial maps from values to values. (From memory address to memory content.) For $h_1,h_2\in W$, define $\{h_1\}\circ\{h_2\}$ to be the empty set when $h_1$ and $h_2$ disagree, and the singleton set $\{h_1 \cup h_2\}$ otherwise. (We say that two heaps disagree if there is an address $x$ such that $h_1(x)$ and $h_2(x)$ are defined and distinct.) Because of distributivity, this definition of $\circ$ extends from $W$ to $2^W$.
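
To make this concrete, here is a tiny Python sketch of the heap model (my own illustration, nothing from the paper): heaps are dicts from addresses to values, sets of heaps are lists of dicts, and circ is the multiplication just described.

    # A tiny sketch (mine, not from the paper): heaps as dicts, sets of heaps as lists.
    def agree(h1, h2):
        """h1 and h2 agree if they coincide on every address they share."""
        return all(h1[x] == h2[x] for x in h1.keys() & h2.keys())

    def circ(H1, H2):
        """The multiplication of the heap model, lifted to sets of heaps by distributivity."""
        return [{**h1, **h2} for h1 in H1 for h2 in H2 if agree(h1, h2)]

    E = [{}]  # the multiplicative unit: the singleton containing the empty heap

    print(circ([{1: 10}], [{2: 20}]))  # [{1: 10, 2: 20}]  the heaps agree
    print(circ([{1: 10}], [{1: 99}]))  # []                they disagree at address 1
    print(circ([{1: 10}], E))          # [{1: 10}]         E behaves as a unit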

If I understood correctly, one result of the paper is that BBI cannot define heap models; that is, there is no BBI formula that holds exactly in heap models. To be able to do this, the authors extended BBI with labels (an idea from hybrid logic). The resulting logic is called HyBBI. Here is an example of what you can do with such labels: the Hoare triple $\{\ell \land P\}\,C\,\{\ell\}$ means that the command $C$ does not change (at all!) those parts of the heap that satisfy the predicate $P$. Here $\ell$ is a label that ranges over heaps. While listening to this I was wondering if there's some relation to the freeze quantifier in LTL. Somehow, I forgot to ask the authors. So, James and Jules: consider yourselves asked. :) It's a vague question, so no need to reply; but perhaps you might want to think about it.

(Just assume that each paragraph starts with ‘If I understood correctly’ so I don't repeat myself.) The main result of the paper is a method that takes a theory in HyBBI and produces a corresponding proof system that is sound and complete.

Modular Reasoning about Heap Paths via Effectively Propositional Formulas. This paper is about how to reason about reachability in a weak logic. You'd want to do this because weak logics have efficient solvers. The logic is first-order with a restricted use of quantifiers (universals come before existentials, $\forall{\bf x}\exists{\bf y}\,\varphi$) and no function symbols other than constants. The encoding uses several tricks. But first, a bit more about the setting. The heap is viewed as a digraph in which the out-degrees are 0 or 1. In other words, the heap is modeled as a function $h : V \to V_\bot$. (I think the domain is not required to be finite.) Thus, it is a restricted form of reachability. In particular, we can talk about the reachability function $h^* : 2^V\to 2^V$, where $h^*(U)$ is the set of locations reachable from the set $U\subseteq V$ of memory addresses; this function is idempotent. The first trick is that idempotent functions can be encoded in a logic that has constants but no other function symbols: whenever you want to write $f^*(x)$, just introduce a fresh constant for it.
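
To give a feel for the kind of encoding involved, here is a sketch in Z3's Python API (my reconstruction of the general flavour, not the paper's actual axioms): reachability along the heap's pointers is an uninterpreted relation constrained by universally quantified, function-free axioms, and queries mention only constants.

    # A sketch of the flavour of such encodings (my reconstruction, not the
    # paper's axioms): reachability as an uninterpreted relation with
    # universally quantified, function-free axioms.
    from z3 import *

    Loc = DeclareSort('Loc')
    reach = Function('reach', Loc, Loc, BoolSort())  # reach(x, y): y is reachable from x
    x, y, z = Consts('x y z', Loc)

    s = Solver()
    s.add(ForAll([x], reach(x, x)))                                  # reflexivity
    s.add(ForAll([x, y, z],
          Implies(And(reach(x, y), reach(y, z)), reach(x, z))))      # transitivity
    s.add(ForAll([x, y],
          Implies(And(reach(x, y), reach(y, x)), x == y)))           # antisymmetry (acyclic heaps)
    s.add(ForAll([x, y, z],
          Implies(And(reach(x, y), reach(x, z)),
                  Or(reach(y, z), reach(z, y)))))                    # linearity: out-degree at most 1

    # A toy query over named locations (constants only, no function symbols).
    a, b, c = Consts('a b c', Loc)
    s.add(reach(a, b), reach(b, c))
    print(s.check())               # sat
    s.add(Not(reach(a, c)))        # contradicts transitivity
    print(s.check())               # unsat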

The second trick enables local reasoning. When you specify (with a triple) what a procedure does, the heap is naturally split into two parts: the part that is potentially mutated, and the part that is not mutated. If you follow paths in the heap, then you end up moving in and out of these two regions. The idea is to give special status to the points that are at the boundary of these two regions; that is, give them names/constants. This lets you split any path into inside/outside segments, and this in turn enables local reasoning.

The presenter mentioned two limitations: (1) the out-degree is at most 1, and (2) the number of nodes at the boundary between the two regions mentioned above must be finite. (More precisely, at the boundary from the possibly mutated region to the definitely not mutated region; in the other direction, there's no restriction.)

The running example was union-find with path compression. It's possible to specify it and verify it. The verification works via VC-generation and Z3.
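
For reference, here is what the running example looks like as ordinary code: a plain Python union-find with path compression, written by me; the verified version in the paper of course comes with its own specification.

    # Textbook union-find with path compression (my own version, not the
    # verified code from the paper).
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            # Find the root, then point every node on the path directly at it.
            root = x
            while self.parent[root] != root:
                root = self.parent[root]
            while self.parent[x] != root:
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, x, y):
            self.parent[self.find(x)] = self.find(y)

    uf = UnionFind(5)
    uf.union(0, 1); uf.union(1, 2)
    print(uf.find(0) == uf.find(2))  # True
    print(uf.find(0) == uf.find(4))  # False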

Optimal Dynamic Partial Order Reduction. This paper presented an improved algorithm to do partial order reduction. What is partial order reduction? If you observe the execution of a concurrent program, you observe $n$ statements and some dependencies between them. For example, a read from memory depends on the last write to the same location. We record that the writing happened before the reading. Similarly, for two consecutive instructions in the same thread, we record that the first happened before the second. The dag (directed acyclic graph) made by the arcs just described defines an equivalence class of concurrent interleavings: Any interleaving of the instructions that is consistent with the dag will have the same effect. Thus, if you want to exhaustively test a concurrent program by looking at different schedules, then you should try to schedule threads such that the next dag you observe is not one of those you already saw.
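
Here is a small sketch (again mine, not the paper's algorithm) of how one could extract the happens-before dag from an observed interleaving, recording exactly the two kinds of arcs mentioned above.

    # A small sketch (mine, not the paper's algorithm): extract happens-before
    # edges from an observed interleaving. Events are (thread, op, location);
    # arcs connect consecutive events of a thread, and each read to the last
    # write to the same location.
    def happens_before(trace):
        edges = set()
        last_in_thread = {}  # thread   -> index of its previous event
        last_write = {}      # location -> index of the last write to it
        for i, (thread, op, loc) in enumerate(trace):
            if thread in last_in_thread:
                edges.add((last_in_thread[thread], i))  # program order
            if op == 'read' and loc in last_write:
                edges.add((last_write[loc], i))         # read-from dependence
            if op == 'write':
                last_write[loc] = i
            last_in_thread[thread] = i
        return edges

    trace = [('T1', 'write', 'x'), ('T2', 'read', 'x'), ('T1', 'write', 'y')]
    print(happens_before(trace))  # {(0, 1), (0, 2)}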

The algorithm in this paper is supposed to explore each dag exactly once. I didn't follow the details, so I don't know how it works, but it seems like a cute problem. More importantly, I'm worried that I don't understand in what sense testing can be 'complete' in the presence of loops and recursion. Is it perhaps a standard assumption in partial-order reduction that you don't consider loops? I should perhaps eventually learn how partial-order reduction works.

Others. If I have time (or someone expresses interest), I also want to transcribe my notes on the following articles, which I found interesting:

  • Bridging Boolean and Quantitative Synthesis using Smoothed Proof Search
  • Sound Compilation of Reals
  • A Constraint Based Approach to Solving Games on Infinite Graphs
  • From Parametricity to Conservation Laws, via Noether's Theorem

1 comment:

Sylvain said...

Thanks for the notes! About freeze LTL vs. HyBBI: yes, there is a connection, in the sense that freeze LTL itself is inspired by hybrid modal logic. The difference though is that the freeze operator binds the value at the current point, while the hybrid binder binds the current point.
