From: acd
Hi,

(I do not know whether comp.theory or comp.theory.cell-automata is
better suited here, so sorry for the crosspost).

When we consider "intrinsically universal cellular automata", as in
(Ollinger: The Quest for Small Universal Cellular Automata,
ICALP 2002), the emulation comes with a constant factor: the size of
the "processor" that simulates each emulated cell.

So if we have a cellular automaton A and run on it a problem meant
for a cellular automaton B, each cell of B is mapped to k cells of A.
(I am simplifying a bit; on a 2D CA one can imagine the k cells
forming a rectangle or a similarly compact shape.)
One part of those k cells is needed to "encode the program", i.e. it
is independent of the data and remains static.
But when exchanging states with the neighbors, the time needed to do
so depends on k.
So the speed ratio of B on A (how many steps of A are needed to run
one step of B) depends on how well we encode the state transition
function of B on A.
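
To make the bookkeeping concrete, here is a toy Python sketch. It is
my own illustration, not Ollinger's or any other concrete
construction; the block layout, the choice of rule 110 for B, and the
charge of k host steps per emulated step are just assumptions for the
example.

# Toy sketch (not any concrete construction from the literature):
# model the cost of running a 1-D, 2-state, radius-1 CA "B" (here
# elementary rule 110) on a host CA "A" block by block.  In a real
# emulation each B-cell would occupy K cells of A; here a block just
# stores a static copy of B's rule table ("the program") plus one
# data cell, and each emulated step is charged K host steps, standing
# in for the time neighbour information needs to cross a block.

RULE = 110  # B's local rule (Wolfram numbering)
K = 8       # hypothetical number of A-cells per B-cell

def rule_table(rule):
    # B's transition function as a lookup on 3-cell neighbourhoods
    return {(l, c, r): (rule >> (l * 4 + c * 2 + r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def encode(b_config, table):
    # one block per B-cell: static "program" part + one data cell
    program = tuple(sorted(table.items()))
    return [{"program": program, "data": s} for s in b_config]

def emulate_step(blocks):
    # one step of B on periodic boundaries, charged K host steps
    table = dict(blocks[0]["program"])
    n = len(blocks)
    new_data = [table[(blocks[(i - 1) % n]["data"],
                       blocks[i]["data"],
                       blocks[(i + 1) % n]["data"])]
                for i in range(n)]
    for blk, d in zip(blocks, new_data):
        blk["data"] = d
    return K

if __name__ == "__main__":
    table = rule_table(RULE)
    blocks = encode([0, 0, 0, 1, 0, 0, 0, 0], table)
    host_steps = sum(emulate_step(blocks) for _ in range(5))
    print("B after 5 steps:", [b["data"] for b in blocks])
    print("space blow-up (cells of A per cell of B):", K)
    print("host steps for 5 B-steps:", host_steps)

The point of the sketch is only that the static "program" part fixes
the space blow-up k, while the time per emulated step grows with the
distance information has to travel inside a block.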

This property is not visible in all computational models: in the
PRAM model or the von Neumann model, for example, the size of the
program is irrelevant to the running time. For Turing machines,
however, and in this case for cellular automata, it does matter.

Is anything known, in the case of Turing machines, about how the
size of a Turing machine (its state count) relates to the efficiency
with which it can emulate other TMs?

For instance, the paper by Ollinger uses the evaluation of circuits
made of NAND gates to simulate an arbitrary (1D) CA. Similar examples
are the universality of the Game of Life (GoL), which communicates
through "gliders", or WireWorld.
Comparing the last two: GoL has 2 states and WireWorld has 4, but
with the standard universality construction via circuits, WireWorld
will be much more efficient in terms of both size (the number of
cells needed, the k from above) and speed.

Andreas