


Stochastic cellular automata
What it's about
Cellular automata, introduced by von Neumann, are locally connected finite
automata whose dynamics, in spite of the limited power of their components
and the locality of their interactions, can be rich and arbitrarily complex. For example,
when viewed as an abstract computing machine, a cellular automaton (CA)
is capable of universal computation and represents a prototypical parallel
computer. When viewed as a physical system, a CA can be used to model
collective behavior of particle dynamics in physics, cellular interactions
in biology, and information processing in distributed computing systems.
Philosophically, CAs can embody a mindset in which one views the world as
a giant CA, or an inclusive family of CAs, whose rules and secrets, through
scientific endeavor, one seeks to uncover.
Issues and challenges
My research focus in cellular automata is the study of their dynamics
when they are treated as abstract computing machines, i.e., as a mathematical discipline.
Physics, biology, and distributed computing systems in computer science
act as proving grounds where additional structure and constraints can
be injected to answer questions arising in the various domains and to test
the theory's predictive power. From a research perspective, CAs can quickly
become "uninteresting" due to their universality: many questions
involving their dynamics are undecidable. Those questions that have been
heretofore been answered for deterministic CAs can be highly nontrivial, but even
so, perhaps not impressively deep. Potentially interesting ideas, such
as Wolfram's four-fold classification of CAs, have been studied, but little
rigorous underpinning exists to give them technical depth.
Overcoming this barrier looms as an important challenge.
The state of affairs for stochastic cellular automata is a little brighter:
the stochastic setting has enabled the posing of fundamental questions that
have been solved, some only partially, using novel ideas and mathematical
machinery in innovative ways that have advanced the state of the art. A
canonical problem concerns the reliability or (non)ergodicity of cellular
automata, whose roots can be traced to von Neumann's study of fault-tolerant
computation in Boolean
circuits. In its simplest form, the CA reliability problem can be stated
as a "memory" problem: can a deterministic CA remember its past
when continually subject to noise?
Example: "Can you remember a single bit?" Consider a
2-state one-dimensional CA implementing majority voting — a cell
inspects the states of its two nearest neighbors and itself, and chooses
the majority as its next state — where at each transition point a
fault can occur with probability p. If a CA with an infinite
number of cells (for finite CAs the problem is trivial, although the convergence
rate is not) is started in a configuration where all cells are in state
1, will it continue to remember something about its initial configuration,
or will faults collude and erase all memory of its beginning with the
hands of time? Since the same CA can be started in a configuration where
all cells are in state 0, one can advance the interpretation: can an infinite
1D CA remember a single bit when immersed in a noisy environment? Here
the meaning of remembering/storing is in its weakest form: we grant
retrieval of a single bit if the CA can correctly guess whence it
came with probability greater than 1/2.
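As a toy illustration, the noisy majority rule can be simulated on a finite ring (where, as noted above, the question itself is trivial, though the convergence rate is not); the function name and parameters below are our own, not from the literature:

```python
import random

def noisy_majority_step(cfg, p, rng):
    """One synchronous update of the 3-cell majority rule on a ring;
    each cell's transition errs (is flipped) with fault probability p."""
    n = len(cfg)
    nxt = []
    for i in range(n):
        votes = cfg[i - 1] + cfg[i] + cfg[(i + 1) % n]
        s = 1 if votes >= 2 else 0      # majority of self and two neighbors
        if rng.random() < p:            # a fault occurs at this transition
            s = 1 - s
        nxt.append(s)
    return nxt

rng = random.Random(0)
cfg = [1] * 200                         # started from the all-1 configuration
for _ in range(500):
    cfg = noisy_majority_step(cfg, p=0.05, rng=rng)
print(sum(cfg) / len(cfg))              # surviving fraction of 1's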
Using an infinite 1D CA to remember a single bit is a somewhat expensive
proposition. The question addressed therein, however, is of fundamental
value, more basic, in some respects, than applied questions such as whether
the universe is expanding/shrinking, or how cells organize themselves
into a multitude of organs and body parts during development. Understanding
developmental biology and the cellular dynamics underlying cancer is clearly
of practical relevance and in many ways more complicated; however, one suspects
that a necessary component of effectively combating disease and other
complex phenomena with a cellular basis is a better understanding of the
fundamentals, which is what our effort is aimed at.
Contribution
Toom's seminal work in the early '70s showed the existence of nonergodic
cellular automata in dimension 2 and higher. Gray (1982) proved that continuous-time
1D CAs implementing majority voting are forgetful. Gacs (1986) constructed
discrete-time 1D CAs that are nonforgetful, refuting the positive rates
conjecture, which had been accepted by many in the community. My contribution
(1996) showed that certain 1D cellular automata proposed in the '70s
and '80s as candidates of nonergodic CA are forgetful if there is a bias
in the error model. This work also showed that convergence, as measured
by relaxation time, is significantly faster, namely exp[O(log^2(1/p))],
than the exponential rate conjectured by some in the community. Relaxation
time has implications for finitary systems and their resilience to noise.
Eroder property: "Contain the enemy!" Gacs, Kurdyumov
and Levin (1978) proposed a 2-state CA, also called the GKL, soldiers,
or asymmetric voter rule, which has the property that any finite island
of non-1's within a sea of 1's is erased in finite time. This deterministic
error-correcting property, called the eroder property, is illustrated in Fig.
1 (a) where a finite island of 0's (black cells) is eventually erased
when placed amidst a sea of 1's (white cells). The "grey" colored
spacetime triangle is actually a pattern of alternating 0's and 1's which
comes across visually as grey. The GKL rule is defined as follows: if
the state of a cell is 0, then it takes the majority vote of the first
neighbor to its right, the third neighbor to its right, and itself. If
the state of the cell is 1, it does so in the opposite direction. Toom
introduced a 4-state CA possessing the eroder property, called two-line
voting, in an attempt to have a technically more amenable symmetric rule
(Toom's proof of nonergodicity in dimension 2 or higher relied heavily
on symmetric rules with certain basins of attraction). The dynamics of
deterministic two-line voting is shown in Fig. 1 (b). An island of black
is erased within a sea of white; the red and green spacetime triangles
denote the other two states of the CA.
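The GKL rule as just described is short to implement. The sketch below (our own illustration, with a finite ring standing in for the infinite line) exhibits the eroder property: a finite island of 0's placed in a sea of 1's is erased.

```python
def gkl_step(cfg):
    """One synchronous update of the deterministic GKL rule on a ring:
    a cell in state 0 takes the majority of itself and its first and
    third neighbors to the right; a cell in state 1 looks to the left."""
    n = len(cfg)
    nxt = []
    for i in range(n):
        if cfg[i] == 0:
            votes = cfg[i] + cfg[(i + 1) % n] + cfg[(i + 3) % n]
        else:
            votes = cfg[i] + cfg[(i - 1) % n] + cfg[(i - 3) % n]
        nxt.append(1 if votes >= 2 else 0)
    return nxt

cfg = [1] * 60
for i in range(20, 30):
    cfg[i] = 0                   # a finite island of 0's amidst a sea of 1's
for _ in range(200):
    cfg = gkl_step(cfg)
print(cfg == [1] * 60)           # island erased in finite time
```

Tracing a few steps by hand shows the alternating 0/1 pattern forming at the island's right edge before the island is consumed, which is the "grey" triangle visible in Fig. 1 (a).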
Impact of noise: "Tale of dancing snakes" When the deterministic
CA dynamics is subject to noise, the "signalling" underlying
error correction is disrupted, resulting in a complex boundary or interface
at the junction where a largely "blackish" region meets a largely
"whitish" region. Fig. 2 (a) and (b) show the stochastic dynamics
of the GKL and two-line voting rules under the same initial condition
as Fig. 1 when transitions are subject to noise. Understanding the evolution
of the stochastic CAs is tantamount to understanding the behavior of the
fluctuating boundaries, in relation to the time scale at which black (or
white) islands of size m get spontaneously born. The error model
is symmetric if the probability of erring toward any one of the states
during transition is uniform; it is asymmetric, i.e., possesses a bias,
if the transition probability is nonuniform.
Intuitively, if the error model is symmetric, one might expect that boundaries
behave akin to a symmetric random walk (assuming the width of the interface
remains wellbehaved), with adjacent boundaries annihilating each other
when they meet: i.e., the island wedged between two boundaries is erased.
If the error model is asymmetric — e.g., there is a bias toward black
when errors occur in a transition — then one may conjecture that
the boundary behaves like a random walk with drift, which may result in
blackish islands tending to expand and whitish islands tending to contract.
Hence, even if a CA is started from an initial configuration of all white,
black islands, upon spontaneously arising, will tend to survive and merge,
leading to an equilibrium in the limit where the probability measure in
the space of all configurations is concentrated on blackish configurations.
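A back-of-the-envelope caricature of this drift intuition (our own toy model, not part of the proof) treats the two boundaries of a whitish island in a blackish sea as independent biased random walkers that annihilate when they meet; all names and parameters below are invented for illustration:

```python
import random

def island_lifetime(width, bias, rng, max_t=10_000):
    """The left/right boundaries of a white island in a black sea walk
    randomly; a bias toward black gives each boundary a drift that
    shrinks the island.  Returns the time until the boundaries meet
    (island erased), or None if they never meet within max_t steps."""
    left, right = 0, width
    for t in range(max_t):
        left += 1 if rng.random() < 0.5 + bias else -1    # drifts right
        right += -1 if rng.random() < 0.5 + bias else 1   # drifts left
        if left >= right:
            return t + 1
    return None

rng = random.Random(1)
lifetimes = [island_lifetime(30, bias=0.1, rng=rng) for _ in range(100)]
print(sum(t is not None for t in lifetimes))   # with drift, erasure is typical
```

With bias = 0, the gap performs an unbiased random walk and island lifetimes have a heavy-tailed distribution; any positive bias makes erasure happen in roughly linear time, which is the mechanism behind whitish islands tending to contract.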
Proof: "Talk the talk, walk the walk" The random walk
intuition and the resemblance of boundary behavior to contact and branching
processes sound plausible, but how does one go about proving ergodicity?
The boundary looks unwieldy and saying anything rigorous about its evolution
appears daunting. It turns out that a multiresolution analysis framework
— related to renormalization in statistical mechanics — pioneered
by Gacs in the context of proving nonergodicity of 1D cellular automata
can be harnessed to prove ergodicity of GKL and two-line voting in the
opposite context of showing forgetfulness of 1D CAs that possess the eroder
property. The proof structure has two key components: one combinatorial
and the other probabilistic (in a very broad sense, it can be viewed as
a sample path method). On the combinatorial side, one wishes to characterize
spacetime rectangles for both the CA evolution as well as the error pattern,
with which one can cover, or patch together, the entire spacetime evolution
such that "neighboring" spacetime rectangles obey the laws
of motion dictated by the CA rule. Keeping track of the future, given
the complicated nature of the boundary, becomes progressively harder with
time. The trick is to compensate for this loss of resolution by adopting
a coarser, yet rigorous, viewpoint as time increases, such that the
dynamics, at the coarser scale, remains exact.
The aforementioned spacetime rectangles, therefore, increase in size,
and to enable causally consistent assembly of these rectangles to cover
all of spacetime, their internal shape/pattern, as a function of size,
is defined recursively. The key notion for defining such causally patchable
rectangles is that of (recursive) sparsity. For example, in the case of
spacetime error patterns under a given error model, we would call a larger
spacetime rectangle k-sparse if the rectangle, modulo a possible
"glitch" that can be covered/excluded by a smaller rectangle,
is (k-1)-sparse. For a biased error model, the base-level (k
= 0) definition would assert that "good" errors (e.g., those
helping black) are in simple dominance over "bad" errors (e.g.,
those helping white). At level k, this dominance holds recursively,
where the coarser view admits allowances for self-similar/fractal violations.
In the case of spacetime evolution patterns, a similar recursive sparsity
notion applies, where k-blackness captures dominance of black
cells over white cells in a spacetime rectangle with allowances for fractally
distributed holes (think of Swiss cheese). This undertaking, albeit in
a more combinatorial sense, is akin to setting up multiscale differential
equations and integrating across both scale and time to find a solution
of the behavior of the system in the long run at ever coarser scale.
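A drastically simplified, purely illustrative version of the recursion (in one dimension, with made-up length scales; this is our own toy, not Gacs's actual definition) might read:

```python
def is_sparse(points, k, base=4):
    """Toy recursive sparsity of 1D 'error' positions.
    Level 0: all errors are isolated (pairwise farther apart than base).
    Level k: after excluding one 'glitch' covered by a window of length
    base**k, the remaining errors are (k-1)-sparse."""
    pts = sorted(set(points))
    if k == 0:
        return all(b - a > base for a, b in zip(pts, pts[1:]))
    # Try every candidate glitch window (and the empty window (0, 0),
    # i.e., no glitch at all) and recurse on what remains.
    windows = [(p, p + base ** k) for p in pts] + [(0, 0)]
    return any(
        is_sparse([p for p in pts if not (lo <= p < hi)], k - 1, base)
        for lo, hi in windows
    )

print(is_sparse([10, 11, 12, 100, 300], 1))  # one glitch, rest isolated
print(is_sparse([10, 11, 100, 101], 1))      # two glitches: not 1-sparse
print(is_sparse([10, 11, 100, 101], 2))      # but admissible at level 2
```

The point of the toy is the shape of the definition: violations at one scale are tolerated so long as they can be quarantined inside a window that is itself well-behaved at the next-lower scale, mirroring the self-similar allowances described above.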
The probabilistic component of the proof is twofold. First, one must
show that k-sparse error patterns are typical: for independent
errors the probability of not being k-sparse is superexponentially
small. This property, called the sparsity lemma, is then combined with
the deterministic dynamics of the combinatorial part to conclude that
the probability that a spacetime rectangle in the stochastic CA evolution
is not k-black is small. The second part of the probabilistic
component concerns the stochastic framework for proving ergodicity. We use
coupling: considering the relative magnitudes of the probability
of k-blackness and the birth probability of black islands, we
show that the region of agreement of the coupled system grows. An innovation
of the stochastic framework is that coupling can be made to work for the
asymmetric GKL rule. For two-line voting, which is symmetric, an FKG-type
inequality that couples the all-0 configuration with the all-1 configuration,
"sandwiching" all other trajectories in between, facilitates
the proof of ergodicity. In the case of the GKL rule, we define a coupling
that simultaneously glues all trajectories over a common probability space
and show that agreement spreads for any pair of trajectories.
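To make the coupling idea concrete in a toy setting (our own illustration, not the proof's construction: a noisy majority rule whose faults reset a cell to a shared random bit, so both coupled copies are driven by identical randomness), one can watch the region of agreement grow between the two extreme trajectories:

```python
import random

def coupled_step(x, y, p, rng):
    """Advance two coupled trajectories of a noisy 3-cell majority rule
    on a ring.  Both copies see the SAME randomness: with probability p
    a fault resets a cell, in both copies, to one shared random bit,
    so every fault is a seed of agreement."""
    n = len(x)
    nx, ny = [], []
    for i in range(n):
        mx = 1 if x[i - 1] + x[i] + x[(i + 1) % n] >= 2 else 0
        my = 1 if y[i - 1] + y[i] + y[(i + 1) % n] >= 2 else 0
        if rng.random() < p:         # shared fault location...
            mx = my = rng.randint(0, 1)   # ...and shared fault value
        nx.append(mx)
        ny.append(my)
    return nx, ny

rng = random.Random(2)
x, y = [0] * 100, [1] * 100          # the two extreme configurations
d0 = sum(a != b for a, b in zip(x, y))
for _ in range(500):
    x, y = coupled_step(x, y, p=0.1, rng=rng)
d1 = sum(a != b for a, b in zip(x, y))
print(d0, d1)                        # disagreements before and after
```

Once a cell and its whole neighborhood agree across the two copies, they update identically, so agreement persists except at its boundary; faults keep planting fresh agreement, and the disagreement count shrinks. The actual proofs must work much harder, since the noise there does not hand both copies the same replacement value for free.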
Open problems
Is the GKL rule ergodic under symmetric errors?
The conjecture is, "yes." The unique invariant measure may consist of equally
blackish-whitish flavored spacetime patterns to which both the all-black
and all-white configurations (and all other configurations) converge.
Proving that this is so presents interesting challenges.
Considering other fault models, establishing
a rigorous correspondence between relaxation time of infinite systems
and the rate of information loss in finite systems, and constructing simpler
nonergodic 1D CAs (if they exist) are avenues that may be explored
further.
Acknowledgments
This is joint work with Peter Gacs, my Ph.D. thesis advisor at BU.
The research was supported in part by NSF grant CCR-9002614.
References
Peter Gacs. Reliable computation with cellular automata. Journal
of Computer and System Sciences, 32(1):15-78, 1986.
P. Gacs, G. Kurdyumov, and L. Levin. One-dimensional homogeneous
media dissolving finite islands. Problems of Information Transmission,
14:92-96, 1978.
Lawrence Gray. The positive rates problem for attractive nearest
neighbor spin systems on Z. Z. Wahrscheinlichkeitstheorie
verw. Gebiete, 61:389-404, 1982.
Kihong Park. Ergodicity and mixing rate of one-dimensional cellular
automata. Ph.D. Thesis, Boston University, 1996.
Andrei Toom. Nonergodic multidimensional systems of automata. Problems
of Information Transmission, 10:239-246, 1974.
John von Neumann. Theory of Self-Reproducing Automata (edited
and completed by A. Burks). University of Illinois Press, 1966.
