Program Analysis, from 2000 to 2009

Very roughly, static program analysis is the area concerned with looking at the text of computer programs and deciding, based on that, whether they are crap or not. There is, of course, a very easy way to write a sound program analyzer. In Python3 it would look like this: print('crap'). It's sound because it never lets bad programs go without calling them out. But you often want something a bit more friendly (aka complete, aka precise). Being precise is difficult, and that's why this is still an area of active research. Here are a few papers from the decade 2000–2009 that are influential, and pointers to associated tools.
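To make the joke concrete: here is a minimal sketch (my own, not from any of the papers) of the one-liner analyzer next to a toy "precise" one that only flags a literal division by zero. The second is nicer to users but unsound, since it misses divisions whose denominator only becomes zero at runtime; that tension is the whole point.

```python
import ast

def sound_but_useless(source):
    # The analyzer from the joke: call every program crap.
    # Sound (no bad program escapes) but maximally imprecise.
    return 'crap'

def precise_but_unsound(source):
    # Toy checker: flag only a syntactic division by the literal 0.
    # No false alarms on this pattern, but it misses x / y where
    # y happens to be zero at runtime, so it is not sound.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.BinOp)
                and isinstance(node.op, ast.Div)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            return 'crap'
    return 'fine'
```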

Some Talks from POPL 2014

These are my notes from some of the most memorable talks at POPL. What I found memorable is heavily biased. First, I attended fewer than half of the talks. (More than half is physically impossible.) Second, I did not read the papers. Third, I found some presentations memorable because their content matched my background, and others because of the presentation style. Fourth, it's likely I paid more attention to talks given by people I know.

Awards and Invited Talks at POPL 2014

These are my notes from attending presentations at POPL and associated events. This post is about the non-regular talks I attended.

Edit Distance, Benchmarks

You could compute the edit distance of two strings top-down, by memoized recursion, or bottom-up, by dynamic programming. The latter should be more efficient, at the expense of clarity. But, how much more efficient? This post shows some (micro) benchmarks.
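The two variants being benchmarked can be sketched like this (my own minimal versions, not the post's benchmarked code): a memoized recursion that only fills the table entries it needs, and a bottom-up loop that fills the whole table but avoids recursion overhead.

```python
from functools import lru_cache

def edit_distance_memo(a, b):
    # Top-down: recursion with memoization via lru_cache.
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        cost = 0 if a[i - 1] == b[j - 1] else 1
        return min(d(i - 1, j) + 1,         # delete a[i-1]
                   d(i, j - 1) + 1,         # insert b[j-1]
                   d(i - 1, j - 1) + cost)  # match or substitute
    return d(len(a), len(b))

def edit_distance_dp(a, b):
    # Bottom-up: fill the table row by row, keeping only two rows.
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        curr = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost)
        prev = curr
    return prev[len(b)]
```

Both run in O(|a|·|b|) time; the constant factors (function-call overhead, cache lookups versus plain list indexing) are what the micro-benchmarks measure.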

Certificates and Simplifications for Quantified Boolean Formulas

It starts poetically, with a story involving parallel universes, and ends prosaically, with a link to a paper to appear in LPAR 2013.

Nature and Computation

Physicists look for the simple rules of our world; computer scientists look for the possible worlds that come out of simple rules.