*It starts poetically, with a story involving parallel universes, and ends prosaically, with a link to a paper to appear in LPAR 2013.*

Everybody knows about the success story of SAT solvers, but no one knows about the success story of QBF solvers. Because there isn't such a story. Yet.

To remind you, the success story of SAT solvers goes like this.
The P-versus-NP problem is frequently advertised as the most important problem in computer science.
It certainly is, when measured by the amount of interest it generates.
In any case, P is the class of problems that we can solve efficiently with our computers, while NP is the class of problems that can be solved efficiently with mythical-computers that make use of parallel universes.
SAT is a problem in NP (solvable by mythical computers) that we don't know to be in P (we don't know whether it can be solved efficiently with our computers).
It goes like this:
I give you a boolean formula, something like
$$\begin{align*}
&(x_1 \lor x_2 \lor x_3) \\
\land &(x_1 \lor \lnot x_2 \lor x_4) \\
\land &(x_1 \lor \lnot x_3 \lor \lnot x_4) \\
\land &(x_2 \lor \lnot x_3 \lor x_4) \\
\land &(\lnot x_1 \lor x_2 \lor \lnot x_4) \\
\land &(\lnot x_1 \lor x_3 \lor x_4) \\
\land &(\lnot x_1 \lor \lnot x_2 \lor \lnot x_3) \\
\land &(\lnot x_2 \lor x_3 \lor \lnot x_4)
\end{align*}$$
and ask whether you can give true/false values to variables such that the whole formula evaluates to true.
(Try it.)
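If pencil and paper get tedious, the exhaustive check fits in a few lines of Python (a brute-force sketch, not a SAT solver; encoding a literal as $\pm i$ for $x_i$ and $\lnot x_i$ is a common convention, and the names are mine):

```python
from itertools import product

# The eight clauses above; literal +i stands for x_i, and -i for "not x_i".
CLAUSES = [
    [1, 2, 3], [1, -2, 4], [1, -3, -4], [2, -3, 4],
    [-1, 2, -4], [-1, 3, 4], [-1, -2, -3], [-2, 3, -4],
]

def satisfying_assignments(clauses, n_vars=4):
    """Try all 2^n truth assignments; yield those satisfying every clause."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield assignment

print(list(satisfying_assignments(CLAUSES)))
```

Running it spoils the puzzle, so I won't print the answer here.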
Even better, we know that if we ever find out how to solve SAT efficiently with real computers, then real computers are just as powerful as mythical computers.
How could we possibly know this?
Well, we know that we can *reduce* all problems in NP to SAT.
That is, for any problem in NP we know how to write a program that solves it efficiently, *if* we are given a subroutine `sat` that magically solves any instance of the SAT problem in one second.
Essentially, we have a way to turn any efficient program for mythical-computers into an efficient program for real-computers, modulo the pesky `sat` subroutine, which we don't know how to implement efficiently.
I mean, *in theory* we don't know how to implement it efficiently.
*In practice*, for about ten years now, we have had pretty good implementations, known as SAT solvers.
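To make the "pesky subroutine" idea concrete, here is a sketch of how a yes/no `sat` oracle can do more than answer yes/no: it can recover an actual satisfying assignment by fixing variables one at a time. (The function names are mine, and `brute_sat` is only a toy stand-in for the magical subroutine.)

```python
from itertools import product

def brute_sat(clauses):
    """Stand-in for the magical one-second `sat` subroutine: try everything."""
    n = max((abs(l) for c in clauses for l in c), default=0)
    return any(
        all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

def extract_model(clauses, n_vars, sat=brute_sat):
    """Turn a yes/no oracle into a model finder: fix variables one by one."""
    if not sat(clauses):
        return None
    model = []
    for v in range(1, n_vars + 1):
        lit = v if sat(clauses + [[v]]) else -v   # can v still be true?
        clauses = clauses + [[lit]]               # commit to that choice
        model.append(lit)
    return model

print(extract_model([[1, 2], [-1]], 2))  # [-1, 2]: x1 false, x2 true
```

With a fast `sat`, the loop makes only one oracle call per variable, so the whole extraction stays efficient.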
To summarize, theoreticians came up with a model of computation (which I called, somewhat eccentrically, the mythical computer), wondered whether it is strictly more powerful than a real computer, and in doing so identified a key problem on which to focus their attention.
Then practitioners developed solvers for this particular problem (SAT) that work fast on almost all instances, although we know there are cases in which they crawl to an answer.
When I say theoreticians and practitioners above, I mean theoreticians and practitioners *in research*.
But, the story doesn't end with research:
These efficient solvers are now part of programs like Eclipse and OPAM.

So, that's the success story of SAT solvers. What about QBF solvers? They solve a very similar problem, known by different names: QSAT (satisfiability with quantifiers) or TQBF (true QBF formulas). This time, all variables are quantified explicitly, as in $\forall x \exists y\; x \leftrightarrow y$, and we must figure out if the whole formula is true or false. A trivial but inefficient solution is to unpack $\forall$-quantifiers into logical-ands, and unpack $\exists$-quantifiers into logical-ors. This solution, however, may use a lot of space — just think how much paper you'd waste writing down the formulas. Another trivial solution is to simply evaluate the formula recursing on its structure. If the formula starts with $\forall x$, set $x:=0$ and evaluate the rest, then set $x:=1$ and evaluate the rest, and then compute the logical-and of the two results. This method still takes a lot of time, but it uses much less space, essentially as much space as it takes to write the given formula.
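The second, space-frugal algorithm is short enough to write down (a sketch; the prefix/matrix representation and the names are mine):

```python
def evaluate(prefix, matrix, assignment=None):
    """Recursively evaluate a prenex QBF whose matrix is a CNF clause list.

    prefix: list of ('forall' | 'exists', var) pairs, outermost first.
    matrix: clauses over integer literals (+v means v, -v means not v).
    Uses space linear in the prefix, but time exponential in it.
    """
    if assignment is None:
        assignment = {}
    if not prefix:
        # No quantifiers left: evaluate the matrix under the assignment.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in matrix
        )
    quantifier, var = prefix[0]
    results = []
    for value in (False, True):
        assignment[var] = value
        results.append(evaluate(prefix[1:], matrix, assignment))
    del assignment[var]
    return all(results) if quantifier == 'forall' else any(results)

# forall x exists y. (x <-> y), with the matrix written in CNF:
print(evaluate([('forall', 1), ('exists', 2)], [[-1, 2], [1, -2]]))  # True
```

Swapping the quantifiers to `exists x forall y` makes the same matrix false, which is a quick sanity check that the order of the prefix matters.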

So far we saw that QSAT is a generalization of SAT.
Are there other similarities?
SAT is interesting because it is a (non-trivial) representative of an interesting class of problems.
What about QSAT?
It turns out that QSAT has appeared in theoreticians' textbooks for decades because it too is a (non-trivial) representative of an interesting class of problems, called PSPACE — the class of problems that computers (real or mythical) can solve with a reasonable amount of ~~paper~~ space, but perhaps lots of time.

But now the analogy breaks down: Practical algorithms for QSAT are not nearly as good as practical algorithms for SAT.

In the case of SAT solvers, the breakthroughs came about a decade ago.
The basic algorithm on which SAT solvers rely, the so-called DPLL procedure, is essentially the second algorithm I gave for QSAT.
Of course, you need to first add $\exists$-quantifiers for all variables to turn the SAT instance into a QSAT instance.
DPLL are the initials of four people.
The first two (Martin Davis and Hilary Putnam) published the first algorithm I gave for QSAT (the one that grows the formula) while scrambling to get something published because they needed something concrete to point to at the end of a grant.
The last two (Logemann and Loveland) are the programmers who realized that the memory usage of the first algorithm is horrible, so they implemented the second.
In any case, the basic SAT algorithm is a backtracking search: first try $x=0$, then try $x=1$.
The most important breakthroughs were heuristics to limit this exhaustive search.
In a sense, these heuristics are similar to alpha-beta pruning for games: The search remains exhaustive, but you use certain tricks to cut off some branches of the search tree.
One such heuristic is *clause learning*.
The idea is that if setting $x=0$ and $y=1$ will always lead, perhaps after a complicated sequence of operations, to a contradiction, then it's beneficial to learn the clause $x \lor \lnot y$: it must be that either $x$ is true or $y$ is false.
This increases the size of the formula, which would seem to make the problem more difficult, but it also avoids the long sequence of operations that were necessary to discover the contradiction.
Thus, there's an art (aka heuristic) involved in choosing what to learn.
Another big breakthrough was non-chronological backtracking.
Roughly, instead of returning from one recursive call, you return from several, if you know that this will only skip dead branches of the search tree.
In any case, the big breakthroughs of SAT solving are about pruning the search tree while leaving it complete.
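A minimal version of that backtracking search, with unit propagation but without learning or non-chronological backtracking, might look like this (a sketch in the same integer-literal encoding as before; real solvers use far cleverer data structures):

```python
def assign(clauses, lit):
    """Make lit true: drop satisfied clauses, shrink the rest.

    Returns None if some clause becomes empty (a conflict)."""
    result = []
    for clause in clauses:
        if lit in clause:
            continue                       # clause already satisfied
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                    # empty clause: contradiction
        result.append(reduced)
    return result

def dpll(clauses):
    """Backtracking search: unit propagation, then branch on a variable."""
    if clauses is None:
        return False                       # a conflict was reached
    if not clauses:
        return True                        # every clause satisfied
    for clause in clauses:
        if len(clause) == 1:               # a unit clause forces its literal
            return dpll(assign(clauses, clause[0]))
    x = abs(clauses[0][0])                 # branch: first x false, then x true
    return dpll(assign(clauses, -x)) or dpll(assign(clauses, x))
```

All the heuristics mentioned above (learning, non-chronological backtracking) are refinements of this skeleton that cut off branches before the recursion explores them.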

In the case of QBF solving, the breakthroughs are coming *just now*.
One recent discovery was that it helps to simplify formulas before starting to explore the search tree.
In a sense, this is a combination of the two algorithms mentioned earlier:
In the first phase, the formula is modified while preserving its satisfiability;
in the second phase, truth assignments are explored.
Except that the first phase is not exactly just expanding quantifiers.
The trick is to choose simplifications carefully.
Yes, this idea was also applied to SAT solvers, but for QBF solvers this division into two phases seems to be even more important in practice.
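One example of a simplification that is sound for QBF (and has no counterpart in plain SAT) is *universal reduction*: a universal literal may be deleted from a clause if no existential literal in that clause is quantified inside it. A sketch, with quantification depth given by a `level` map (the representation and names are mine):

```python
def universal_reduction(clause, level, is_universal):
    """Drop universal literals quantified inside every existential in the clause.

    level[v] is the position of v in the prefix (0 = outermost);
    is_universal[v] says whether v is forall-quantified.
    """
    innermost_exist = max(
        (level[abs(l)] for l in clause if not is_universal[abs(l)]),
        default=-1,
    )
    return [l for l in clause
            if not is_universal[abs(l)] or level[abs(l)] < innermost_exist]

# In "exists x forall u. (x or u)", u can be dropped: the clause becomes (x).
print(universal_reduction([1, 2], {1: 0, 2: 1}, {1: False, 2: True}))  # [1]
```

Intuitively, the universal player moves after the existential player has already committed, so a universal literal that deep in the prefix cannot help satisfy the clause.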

People who work on QBF solving are concerned not only with speed, but also with correctness.
This is perhaps an instance of a more general trend in solvers.
SAT solvers and SMT solvers now have many users, and those users are a bit worried about the correctness of the black boxes they use.
There are (at least) two big approaches to increasing one's confidence in a piece of software.
One way is to look very carefully at the source code, perhaps with the help of a tool, and make sure it does what it is supposed to do.
As an extreme example of this approach, some crazy people wrote a C compiler in the language of the theorem prover Coq, which means they were able to use Coq to prove theorems about the source code of the compiler.
Another way is to forget about analyzing the code carefully and instead require the program to always produce a certificate for its output.
What the certificate would contain would depend on the problem at hand, but the general principle is that it should be possible to write a *simple checker* that takes some output and its associated certificate and says if things look alright.
This second idea, when applied to compilers, is called ‘translation validation’, but I know of no reasonably usable compiler that actually emits certificates.
In the case of QBF solvers such certificates are either refutations or models of the input formula.
There exist several formats for such certificates and several tools to check their correctness.
In other words, the jury is still out on which is the best proof system for representing refutations, which is the best way to represent models, and so on.
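For plain SAT the principle is easy to see: a model is itself a certificate, and the checker is a one-liner. (For QBF, where a model is a set of Skolem functions rather than a single assignment, the checker is more work, but the principle is the same.) A sketch:

```python
def check_model(clauses, model):
    """Simple checker: does the claimed model satisfy every clause?

    model is a set of integer literals, one per variable (+v or -v)."""
    return all(any(lit in model for lit in clause) for clause in clauses)

print(check_model([[1, 2], [-1]], {-1, 2}))  # True
```

The point of the certificate approach is exactly this asymmetry: the solver may be a hundred thousand lines of heavily optimized C, while the checker stays small enough to trust by inspection.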

Until now, simplification was done by preprocessing tools that are completely independent of solvers. As a result, if you preprocessed a formula, there was no way to obtain a certificate. And if the formula was difficult enough that it could not be solved without preprocessing, there was no way to obtain a certificate, period. The following paper (to appear in LPAR 2013) explains how to fix this: Janota, Grigore, Marques-Silva, *On QBF Proofs and Preprocessing*. We also provide an efficient implementation (obtained by modifying the tool bloqqer by Armin Biere) and make some observations on certificate formats, from the point of view of supporting formula simplifications.

Have fun reading it.
