*There are several ways of extending the standard definitions of automata to infinite alphabets. Here I try to explain why the trivial extension (just use the same definitions) is not satisfactory and present a couple of alternatives.*

The usual definitions go like this. Given is a finite set $\Sigma$. We say that $\Sigma$ is an *alphabet* and its elements are *letters*. *Words* are strings of letters, *languages* are sets of words. A *finite automaton* is a finite digraph whose arcs are labelled by letters, together with a distinguished *initial* vertex and some vertices marked as *accepting*. Each (finite) automaton determines a language, namely the set of those words obtained by concatenating the labels along a walk from the initial vertex to some accepting vertex. A language that can be described by a finite automaton is said to be *regular*.
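These definitions are small enough to sketch directly. Below is a minimal (nondeterministic) automaton as a labelled digraph, with acceptance by walks from the initial vertex; the names `arcs` and `accepts` are mine, not from any standard library.

```python
# A finite automaton as a labelled digraph: arcs maps (vertex, letter) to
# the set of vertices reachable by an arc with that label.
def accepts(arcs, initial, accepting, word):
    current = {initial}  # all vertices reachable by walks spelling the prefix read so far
    for letter in word:
        current = {v for u in current for v in arcs.get((u, letter), set())}
    return bool(current & accepting)

# An automaton over Sigma = {'a', 'b'} recognizing words that end in 'a'.
arcs = {
    (0, 'a'): {0, 1},
    (0, 'b'): {0},
}
print(accepts(arcs, 0, {1}, "bba"))  # True
print(accepts(arcs, 0, {1}, "ab"))   # False
```

Nothing in this code forces the alphabet to be finite, which is exactly the point of the question that follows: the restriction lives in the definitions, not in the machinery.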

Why restrict $\Sigma$ to be finite? What could possibly cause problems if we allow it to be bigger? Isn't the generalization straightforward?

Yes and no.

Yes, we could simply remove the restriction on $\Sigma$ and keep the same definitions. However, that would lead to situations that intuitively just don't feel right. For example, is the language $L=\{\,abc \mid a,b,c\in\Sigma,\ c\in\{a,b\}\,\}$ regular? With the definitions above, $L$ is regular exactly when $\Sigma$ is finite. There is something unpleasant about a language suddenly becoming non-regular even though its (declarative) definition does not change.
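To see the finite half of that claim concretely, here is a sketch of an automaton for $L$ that simply remembers the first two letters in its state. The helper names (`make_automaton`, `in_L`) are mine; the construction needs one state per pair of letters, so it is a finite automaton exactly when $\Sigma$ is finite.

```python
# Declarative definition of L: words abc (length 3) with c in {a, b}.
def in_L(word):
    return len(word) == 3 and word[2] in (word[0], word[1])

def make_automaton(sigma):
    """States: (), (a,), (a, b), 'accept', 'reject'.
    That is 1 + |sigma| + |sigma|**2 + 2 states: finite iff sigma is."""
    def step(state, letter):
        if state == ():
            return (letter,)          # remember the first letter
        if len(state) == 1:
            return (state[0], letter) # remember the second letter
        if state == 'accept' or state == 'reject':
            return 'reject'           # words longer than 3 are rejected
        return 'accept' if letter in state else 'reject'
    return step

step = make_automaton({'x', 'y', 'z'})

def run(word):
    state = ()
    for letter in word:
        state = step(state, letter)
    return state == 'accept'

for w in ['xyx', 'xyy', 'xyz', 'xy', 'xyxy']:
    assert run(w) == in_L(w)
```

Over an infinite $\Sigma$, the same construction needs infinitely many states, and no finite automaton can do better: it must distinguish inputs $ab$ with different sets $\{a,b\}$.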

Recall the standard model for (finite) automata: a transition function $\delta:X\times\Sigma\to X$, where $X=V$ is the (finite) set of vertices, an accepting function $\alpha:X\to2$, and the initial function $\iota:1\to X$. There are at least three ways of extending the standard model: finite-memory automata aka register automata (Kaminski and Francez, 1994), pebble automata, and data automata. All three are surveyed by Segoufin (2006).

Thinking back to the language $L$, the intuitive reason why it cannot be recognized by a finite automaton is that we cannot easily remember the letters we saw. So the natural thing to try is to add a (finite) memory to the automaton. This suggests distinguishing between the (not necessarily finite) set of states $X$ and the finite set of vertices $V$. For example, we could take $X=V\times\Sigma^k$ for some $k$. The automaton has $k$ *registers*. However, now the automaton is too powerful. For example, take $V$ to be the singleton set $1$, and $\Sigma$ to be the set of natural numbers $\mathbb{N}$, and $k$ to be $1$. Then $\delta$ could be essentially any function $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$. We did not even require it to be computable! The class of languages recognized by such a device is clearly too big to be called "regular". This is why extra restrictions on $\delta$ are needed. Kaminski and Francez (1994) essentially required that all $\delta$ can do with letters is (1) store them in registers and (2) compare them for equality.
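The store-and-compare restriction can be sketched as follows. This is my own illustrative one-register automaton (names hypothetical), over the infinite alphabet $\mathbb{N}$, recognizing the words whose last letter equals the first: $\delta$ never computes with letters, it only writes one into the register or tests equality against it.

```python
# A one-register automaton in the Kaminski-Francez style.
# Vertices: 'start', 'other', 'match'. States: vertex x register contents.
def run_register_automaton(word):
    vertex, register = 'start', None
    for letter in word:
        if vertex == 'start':
            register = letter  # store: the only write delta may perform
            vertex = 'match'   # a one-letter word trivially matches itself
        else:
            # compare for equality: the only test delta may perform on letters
            vertex = 'match' if letter == register else 'other'
    return vertex == 'match'

print(run_register_automaton([7, 3, 7]))  # True
print(run_register_automaton([7, 3, 5]))  # False
```

Note that two registers would be needed for the language $L$ itself, since both of the first two letters must be remembered; one register suffices for this simpler cousin.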

Finally, Bojańczyk et al. (2011) give what I think is a very cute extension of the standard model of (finite) automata.

Their model is still a triple $(\delta, \alpha, \iota)$ of functions with the types as given above. However, the extra restrictions are much more principled. Now $X$ and $\Sigma$ are required to be $G$-sets (for some group $G$), and all three functions are required to be invariant. (These concepts are explained in my previous post on group actions.) A language is now *$G$-regular* when it is recognized by an automaton for which both $X$ and $\Sigma$ have a finite number of orbits.
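One consequence of invariance is easy to check empirically: if all three functions are invariant, then the recognized language is closed under the action of $G$ on words. Below is my own sanity check (hypothetical names) under the "equality symmetry", where $G$ is the group of permutations of $\mathbb{N}$ acting on letters: permuting the letters of a word must not change whether it is accepted.

```python
# The one-register automaton for "last letter equals the first", over N.
def accepts_last_equals_first(word):
    vertex, register = 'start', None
    for letter in word:
        if vertex == 'start':
            register, vertex = letter, 'match'
        else:
            vertex = 'match' if letter == register else 'other'
    return vertex == 'match'

# A permutation of N, given on the letters that occur and extended by the
# identity elsewhere. It renames letters but preserves equality.
def apply(perm, word):
    return [perm.get(letter, letter) for letter in word]

pi = {7: 100, 3: 8, 100: 7, 8: 3}
for word in ([7, 3, 7], [7, 3, 100], [3], []):
    assert accepts_last_equals_first(word) == \
           accepts_last_equals_first(apply(pi, word))
```

Intuitively, the state set $X = \{\mathrm{start}\} \cup \{\mathrm{match},\mathrm{other}\}\times\mathbb{N}$ is infinite but has only three orbits under this action, which is why the automaton still deserves to be called finite in the orbit-counting sense.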
