*Vector addition systems are a model of computation.
For this model, coverability is one of the easiest decision problems.
This post presents an old (1978) upper bound, with accompanying code.*

A VAS (vector addition system) is a model of computation that pops up occasionally. In this model, the state is a vector of $d$ nonnegative integers. At each time step, the state changes from $v$ to $v+\delta$. The shift $\delta$ is chosen nondeterministically from some given finite set $\Delta$. If states $V$ are active now, then states $V'$ are active next, where $$ V' \;=\; \{\,v+\delta\mid\text{$v \in V$ and $\delta \in \Delta$ and $v+\delta \ge 0$}\,\} $$ Here's how you compute $V'$ from $V$ in Python:

```python
def check_many(d, vs):
    for v in vs:
        assert d == len(v)

def check_one(d, *vs):
    check_many(d, vs)

def vas_step(d, Delta, vs):
    check_many(d, Delta)
    check_many(d, vs)
    ws = set()
    for v in vs:
        for delta in Delta:
            w = tuple(v[i] + delta[i] for i in range(d))
            if all(wi >= 0 for wi in w):
                ws.add(w)
    return ws
```

The reachability problem asks whether there is some run going from $u$ to $w$. The coverability problem asks whether there is some run going from $u$ to $\ge w$. Here's how coverability looks in Python:

```python
def LE(u, v):
    assert len(u) == len(v)
    return all(u[i] <= v[i] for i in range(len(u)))

def vas_cover(d, Delta, u, w):
    check_many(d, Delta)
    check_one(d, u, w)
    vs = [u]
    while len(vs) > 0 and not any(LE(w, v) for v in vs):
        vs = vas_step(d, Delta, vs)
    return len(vs) > 0
```

For example, the code

```python
print(vas_cover(2, [[1,-1], [-2,1]], [3,2], [10,10]))
print(vas_cover(2, [[1,-1], [-1,2]], [3,2], [10,10]))
```

gives

```
False
True
```
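Why these answers? A quick sanity check (my own, not part of the original argument): a move $\delta$ changes the coordinate sum by $\delta_0 + \delta_1$. In the first instance the moves change the sum by
$$1 + (-1) = 0 \qquad\text{and}\qquad -2 + 1 = -1,$$
so from $u = (3,2)$ the coordinate sum never exceeds $5$, while any $w' \ge (10,10)$ has sum $\ge 20$: coverability fails. In the second instance the move $(-1,2)$ increases the sum, which is why covering $(10,10)$ becomes possible.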

The code I gave has a problem, though. This doesn't terminate:

```python
print(vas_cover(2, [[1,0]], [0,0], [0,1]))
```

Can we fix it?
One way to fix it is to stop after some time if we still didn't find a vector $\ge w$.
But, after how much time?
We can find one answer in
[Rackoff,
*The Covering and Boundedness Problems for Vector Addition Systems*,
1978].
The bound given by Rackoff depends on $\Delta$ and $w$ but not on $u$.
Let $N$ be the biggest absolute value of a number occurring in $\Delta$ or $w$.
Then it is sufficient to try $L_d$ steps, where
$$\begin{align}
L_0 &= 1 \\
L_k &= (N \cdot L_{k-1})^k + L_{k-1} &&\text{for $k \gt 0$}
\end{align}$$
The bounds $L_0, L_1, L_2, \ldots$
are of order $N^0, N^1, N^{1 \cdot 2 + 2}, N^{1\cdot 2\cdot 3 + 2\cdot 3+3}, \ldots$;
very roughly, $L_d \in O(N^{d\cdot d!})$.
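For a concrete feel (my own arithmetic, just instantiating the recurrence): with $N = 1$ we get $L_0 = 1$, $L_1 = (1 \cdot 1)^1 + 1 = 2$, $L_2 = (1 \cdot 2)^2 + 2 = 6$; with $N = 10$ we get $L_0 = 1$, $L_1 = 11$, $L_2 = (10 \cdot 11)^2 + 11 = 12111$.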
Python code again:

```python
def rackoff_bound(d, Delta, w):
    check_many(d, Delta)
    check_one(d, w)
    # N is the biggest absolute value occurring in Delta or w.
    N = max(abs(x) for xs in list(Delta) + [list(w)] for x in xs)
    bound = 1
    for k in range(1, d + 1):
        bound = (N * bound) ** k + bound
    print('bound', bound)
    return bound

def rackoff_vas_cover(d, Delta, u, w):
    check_many(d, Delta)
    check_one(d, u, w)
    vs = [u]
    for i in range(rackoff_bound(d, Delta, w)):
        if any(LE(w, v) for v in vs):
            return True
        vs = vas_step(d, Delta, vs)
    return False

print(rackoff_vas_cover(2, [[1,-1], [-2,1]], [3,2], [10,10]))
print(rackoff_vas_cover(2, [[1,-1], [-1,2]], [3,2], [10,10]))
print(rackoff_vas_cover(2, [[1,0]], [0,0], [0,1]))
```

The output:

```
bound 12111
False
bound 12111
True
bound 6
False
```

We can solve the case that previously didn't terminate, using a bound of 6.
But, *why* is this bound sufficient?
Rackoff's proof uses two tricks: induction on coordinates, and a bounding box.
Let $I \subseteq \{0,1,\ldots,d-1\}$ be a set of coordinate indices.
For a vector $v \in \mathbb{Z}^d$,
let $v[I] \in \mathbb{Z}^{|I|}$ be its restriction to the coordinates indicated by $I$.
We can lift the same notation to sets of vectors;
for example, if $\Delta \subset \mathbb{Z}^d$,
then $\Delta[I] \subset \mathbb{Z}^{|I|}$ and $|\Delta[I]| \le |\Delta|$.
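To make the notation concrete, here is a small helper of my own (not from Rackoff or the code above) implementing restriction:

```python
def restrict(v, I):
    # v[I]: the restriction of vector v to the coordinates indexed by I,
    # listed in increasing index order.
    return tuple(v[i] for i in sorted(I))

def restrict_set(vs, I):
    # Lift restriction to a set of vectors. Distinct vectors may collapse
    # to the same restriction, which is why |Delta[I]| <= |Delta|.
    return {restrict(v, I) for v in vs}
```

For example, `restrict((3, 2, 7), {0, 2})` is `(3, 7)`.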
We will prove by induction the following stronger statement:

Fix the target $w$ and the set of moves $\Delta$. For all $u$ and $I$: if there exists a run from $u[I]$ to some $w'[I]$ with $w'[I] \ge w[I]$, using moves from $\Delta[I]$, then there exists such a run of length $\lt L_{|I|}$.

We will assume there exists a run, and we will show that there exists one of length $\lt L_{|I|}$, by induction on the size of $I$.

The base case $|I|=0$ holds because $w'[\emptyset] \ge w[\emptyset]$ always holds.

For the inductive case, we use a bounding box of size $S$: that is, we split runs into those whose coordinates all stay $\lt S$, and the others. If all coordinates stay $\lt S$, then interesting runs have length $\lt S^{|I|}$: there are at most $S^{|I|}$ distinct restricted states, and if $v_1[I] = v_2[I]$ for two states of the run, we simply cut out the subrun $v_1[I] \leadsto v_2[I]$. But, maybe it's not possible to stay within this box. Then, there is some state $v$ that is the first one outside the box. We cut our run into two pieces, $u[I] \leadsto v[I]$ and $v[I] \leadsto w'[I]$, which we analyze separately. The first piece has length $\lt S^{|I|}$, for the same reason as before. Let's move to the second piece, $v[I] \leadsto w'[I]$. Let $I' \subset I$ be those coordinates of $v$ that are still within the bounding box. Then, by the induction hypothesis, there is a run from $v[I']$ to some vector $\ge w[I']$, using moves from $\Delta[I']$, of length $\lt L_{|I'|}$. What happens with the coordinates in $I \setminus I'$? In $v$ they are $\ge S$, so along this run they can become no smaller than $S-N(L_{|I'|}-1)$. We would like $w'[I \setminus I'] \ge w[I \setminus I']$, and every coordinate of $w$ is at most $N$, so it suffices to pick $S$ with $S-N(L_{|I'|}-1) \ge N$; that is, $S \ge N \cdot L_{|I'|}$. Since $L_{|I'|} \le L_{|I|-1}$, we pick $S \stackrel{{\rm def}}{=} N \cdot L_{|I|-1}$. With this choice, we get exactly the inductive case from the definition of $L_k$.
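Spelling out the arithmetic (my own bookkeeping, just instantiating the recurrence): the two pieces together give a run of length
$$\lt S^{|I|} + L_{|I'|} \;\le\; (N \cdot L_{|I|-1})^{|I|} + L_{|I|-1} \;=\; L_{|I|},$$
where $L_{|I'|} \le L_{|I|-1}$ holds because $|I'| \lt |I|$ and the sequence $L_0, L_1, L_2, \ldots$ is nondecreasing.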

That concludes the proof.

*Remark*:
Apart from presentation issues,
there are two minor differences between what I said above
and what you will find in Rackoff's paper.
He does the induction on sets of indices of the form $\{0,1,\ldots,k-1\}$
and justifies this by saying at some point ‘without loss of generality’.
That's perfectly fine once you understand the proof, but it confused me for a while,
although I can't say why.
I decided to just go through all subsets of $\{0,1,\ldots,d-1\}$.
The second is that I measure the length of runs by the number of moves,
while Rackoff's paper uses the number of states.
I may have off-by-one errors, so I wasn't too explicit about this above. :p
(But, if some such errors slipped through, please let me know
so I can fix the text and the code.)

What does this give us? A coverability algorithm that works in $O(d \cdot |\Delta|^{N^{d\cdot d!}})$ time. Not terribly fast, but still better than Ackermannian or non-primitive recursive.

I should say that there are some ways to speed up the algorithm,
although it probably won't help with the asymptotics.
One obvious change is to add a test in the loop of `rackoff_vas_cover`: if `vs` is empty, then answer `False`.
A less obvious change is to run the whole thing backward.
[**update 20150731:**
Actually, there *is* a better upper bound for the backward algorithm.
[Bozzelli, Ganty, *Complexity Analysis for the Backward Coverability Algorithm for VASS*, 2011]
shows that the runtime of the algorithm below is of the same order of magnitude as $L_d$: doubly-exponential.
Compare their Theorem 1 with their Theorem 2.
Their Lemma 5 describes the algorithm below, with minor differences.]

```python
def vas_cover_backward(d, Delta, u, w):
    check_many(d, Delta)
    check_one(d, u, w)
    vs = [tuple(w)]
    while True:
        if any(LE(v, u) for v in vs):
            return True
        us = set()
        for v1 in vs:
            for delta in Delta:
                # Predecessor of v1 under delta, clamped at 0: any value
                # >= v1[i] - delta[i] works, so the least one is enough.
                u1 = tuple(max(0, v1[i] - delta[i]) for i in range(d))
                if all(not LE(v2, u1) for v2 in vs):
                    us.add(u1)
        if len(us) == 0:
            return False
        # Drop old vectors dominated from below by a new predecessor,
        # so that vs stays a set of minimal vectors.
        vs = set(v for v in vs if all(not LE(u1, v) for u1 in us))
        vs |= us

print(vas_cover_backward(2, [[1,-1], [-2,1]], [3,2], [10,10]))
print(vas_cover_backward(2, [[1,-1], [-1,2]], [3,2], [10,10]))
print(vas_cover_backward(2, [[1,0]], [0,0], [0,1]))
```

After the $k$th iteration, the set `vs` contains the minimal vectors that can reach $\ge w$ in $\le k$ steps.
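As a small illustration (my own check, reusing the data from the first example), one backward step from $w = (10,10)$ computes the clamped predecessors under each move:

```python
# Predecessors of w = (10, 10) under the moves [1, -1] and [-2, 1]:
# subtract the move and clamp negative coordinates at 0.
w = (10, 10)
Delta = [[1, -1], [-2, 1]]
preds = {tuple(max(0, w[i] - delta[i]) for i in range(2)) for delta in Delta}
print(sorted(preds))  # [(9, 11), (12, 9)]
```

Neither predecessor dominates $(10,10)$ or the other, so after one iteration the basis is the antichain $\{(9,11),\, (10,10),\, (12,9)\}$: the minimal vectors covering $(10,10)$ in at most one step.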
I know this algorithm from Sylvain Schmitz
and from Section 2.2.2 of the
algo-wqo lecture notes.
[**update**: Sylvain also pointed out a mistake in a previous version of the code.]
The algorithm is presented there in the more general setting of WSTSs
(well-structured transition systems).
(A WSTS is essentially a transition system for which $a \le b$ encodes
what we'd intuitively describe as ‘$b$ is an abstraction of $a$’
in program analysis.)
Since abstraction came into discussion,
I should say that midway writing this post
I googled for more recent presentations of Rackoff's proof.
I found an article that presents the proof in a more general (and abstract!) setting:
[Lazic, Schmitz, *The Ideal View of Rackoff's Coverability Technique*, 2015].
I didn't read it, so I don't know what it says.

Let's move to a slightly higher vantage point.
The more complicated the model of computation, the harder its decision problems.
The more precise the question being asked by a decision problem, the harder to answer it.
VASs are more complicated than finite automata, and not too far behind Turing machines.
Coverability is one of the least precise questions one could ask of them,
and it is doable in exponential space using the algorithms from above.
Reachability is a more precise question, so solving it seems harder.
Its story is in this post by Lipton:
An EXPSPACE lower bound.
So, one way to make life harder is by asking more precise questions.
Another way is to make the model of computation more complicated:
coverability is Ackermann-hard when you add reset
[Schnoebelen, *Revisiting Ackermann-Hardness for Lossy Counter Machines
and Reset Petri Nets*, 2010].
