
Quoting Wikipedia, "[Conway's Game of Life] has the power of a universal Turing machine: that is, anything that can be computed algorithmically can be computed within Conway's Game of Life."

Do such results extend to noisy versions of Conway's Game of Life? The simplest version is that after every round every live cell dies with a small probability $t$ and every dead cell becomes alive with a small probability $s$ (independently).

Another possibility is to consider the following probabilistic variant of the rules of the game itself; a small simulation sketch of both noise models follows the list.

  • Any live cell with fewer than two live neighbours dies with probability $1-t$.
  • Any live cell with two or three live neighbours lives with probability $1-t$ on to the next generation.
  • Any live cell with more than three live neighbours dies with probability $1-t$.
  • Any dead cell with exactly three live neighbours becomes a live cell with probability $1-t$.
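
For concreteness, here is a minimal simulation sketch (Python/NumPy) of both noise models. The function names and the handling of cases not covered by the listed rules (in the second model, a dead cell without exactly three live neighbours is left dead) reflect one possible reading of the rules above, not a definitive specification.

```python
import numpy as np

def neighbour_count(grid):
    """Number of live neighbours of each cell (toroidal boundary conditions)."""
    return sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def life_step(grid):
    """One deterministic Game of Life step on a 0/1 array."""
    n = neighbour_count(grid)
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

def step_model1(grid, t, s, rng):
    """First model: deterministic step, then each live cell dies with
    probability t and each dead cell becomes alive with probability s."""
    new = life_step(grid)
    u = rng.random(new.shape)
    out = new.copy()
    out[(new == 1) & (u < t)] = 0
    out[(new == 0) & (u < s)] = 1
    return out

def step_model2(grid, t, rng):
    """Second model: each listed rule fires with probability 1 - t,
    independently per cell; dead cells with a neighbour count other
    than three are left dead (one reading of the rules)."""
    n = neighbour_count(grid)
    fires = rng.random(grid.shape) < 1 - t
    live, dead = grid == 1, grid == 0
    out = grid.copy()
    out[live & ((n < 2) | (n > 3)) & fires] = 0     # under/over-population
    out[live & ((n == 2) | (n == 3)) & ~fires] = 0  # survival rule disobeyed
    out[dead & (n == 3) & fires] = 1                # birth
    return out

# Example: a 64x64 torus with t = s = 1e-3
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))
grid = step_model1(grid, t=1e-3, s=1e-3, rng=rng)
```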

Question: Do these noisy versions of the Game of Life still support universal computation? If not, what can be said about their "computational power"?

Related information on the computational power of cellular automata and noisy versions of cellular automata would also be much appreciated.

(This question developed from this question on MathOverflow. Vincent Beffara's answer on MO gave interesting references for related results on computational aspects of noisy cellular automata.)

Gil Kalai
  • Relevant (if only for the lack of an answer): http://math.stackexchange.com/questions/393229/does-conways-game-of-life-have-attracting-cycles-or-equilibria –  Jun 06 '13 at 08:32
  • Seems the real/more general question here is whether a probabilistic Turing machine is equivalent to a deterministic Turing machine, and then maybe the probabilistic version of Life could be converted to a probabilistic TM. I have not encountered the concept of Turing equivalence wrt probabilistic machines; has anyone else? A similar question also arises in a QM context... maybe this is similar to BQP =? P etc.? – vzn Jun 06 '13 at 15:31
  • @vzn 1) no, this is not the "real question," it's an altogether different question; Gil's question is about robustness of a simple computational model to noise, not about the power of randomness; 2) TMs with a random tape are no more powerful than deterministic TMs, see this answer: http://cstheory.stackexchange.com/a/1415/4896 – Sasho Nikolov Jun 07 '13 at 06:26
  • Intuitively, it seems that one can never 100% rely on any output from your noisy system. If t > 0 there is a non-zero probability that the answer will be different from what a deterministic Turing machine would come up with. --- The only thing that comes to mind is a Game of Life configuration that grows its own redundancy -to infinity- so the reliability of the output would asymptotically approach 100%. But that takes infinite time. There is also a non-zero probability that the duplication process is compromised at an early stage. --- (just my thoughts, not backed by any citations or proofs) – mhelvens Jun 07 '13 at 10:22
  • @sasho You seem to make some distinction between "noise" and "randomness", but aren't they the same thing? As for (2), a TM that starts with a random tape is different from one that starts with a configuration and then is subject to noise during the computation; I don't immediately see an equivalence there. Further thoughts: I think the theory of QM noise reduction is relevant here. Basically the idea is to create ensembles of gates that are resistant to noise. I think the answer may be the same as in QM: if the noise is below some threshold, it can be reliably "corrected". Also, Hamming codes...? – vzn Jun 07 '13 at 15:11
  • "a TM that starts with a random tape is different than one that starts with a configuration and then is subject to noise during the computation", which is exactly the distinction between "noise" and "randomness" that sasho is making. –  Jun 07 '13 at 19:23
  • The real question here is whether the stochastic/noisy versions of the "Game of Life" still support computation. (If these versions support computations in P then their power may go all the way to BPP.) It is possible that the computational power of these stochastic versions of the game of life is much lower. – Gil Kalai Jun 08 '13 at 18:25
  • Perhaps I'm stating the obvious, but you can just duplicate a configuration enough times so as to guarantee with high probability that a version of the configuration doesn't even have one cell flipped. My personal belief is that we can do much, much better, but at least it's a simple lower bound. – user834 Jun 12 '13 at 16:33
  • I'm not sure the question is well-defined. Suppose $t=10^{-9}$. It seems to me that you might be able to find a computer that deals with all one-bit errors in the "Game of Life", giving you fault-tolerant computation unless you spontaneously get a large block of errors all at once. But I don't think anything can be robust against all errors. For example, suppose the errors spontaneously create a malevolent adversary determined to disrupt the computation. You might be able to show your computation succeeds with probability $>1-10^{-9}$ but fails with probability $>10^{-10000}$. Does this count? – Peter Shor Jun 13 '13 at 13:20
  • Re @user834's idea: interesting, but a computation has a single result, so it is "outside of the box" for some entity to choose which of the many repeated configurations is "correct". But this does suggest a construction: if there exists a mechanism within the machine that can detect/choose between correct and incorrect computations (similar to a multiplexer), then that solves the problem. By the way, the way that Life is proven Turing complete is generally by "gates" made out of "gliders" (the glider patterns "carry" the 0s & 1s), so that is the key system to study. – vzn Jun 13 '13 at 15:38
  • Peter, if your computation succeeds with probability 2/3, I am happy. – Gil Kalai Jun 15 '13 at 11:57
  • I still don't know if the question is well-defined. If there is some fixed-sized configuration which destroys all computation, then the computation will fail with probability something like $TAp$, where $T$ is the length of the computation, $A$ is the area of the computation, and $p$ is some constant (say $10^{-100}$). You can clearly design rules where this happens (with probability $p$, a cell turns into a "plague cell", which spreads with the speed of light, turning all cells into "plague cells"). Something like this could conceivably happen in the real "Game of Life". – Peter Shor Jun 30 '13 at 12:27
  • Continuing my previous comment, you would only be able to do fixed-sized computation, but the size might be enormous. Would this count? – Peter Shor Jun 30 '13 at 12:30
  • Lacking strict definitions in the question, I think it's reasonable to formulate them ad hoc in the answer. @shor Your ideas seem to imagine there is some lack of independence in the errors, whereas the question clearly seems to suppose independent errors. But agreed, probabilistic definitions can have big subtleties... – vzn Jul 01 '13 at 16:11
  • Dear Peter, the problem is indeed a little open ended, as there are a few possible noisy variants of the game of life and perhaps a few more choices to make. Still, I do not see a reason to believe that noisy versions of the game of life manifest the "plague cell" phenomenon you described, and on the other hand, I do not see how such noisy variations allow fault-tolerant computation. There are also several possible answers: you may be able to compute indefinitely, or perhaps only for a number of steps which is a function of 1/t (logarithmic? polynomial? exponential?). – Gil Kalai Jul 01 '13 at 20:55
  • Peter: "Continuing my previous comment, you would only be able to do fixed-sized computation, but the size might be enormous. Would this count?" --- YES – Gil Kalai Jul 01 '13 at 20:56
  • In questions like this it often happens that up to some $p$ it will always die, regardless of the finite initial configuration.

    Can you give an example of the game where you can prove that the result depends on the initial configuration?

    – Klim Jul 02 '13 at 21:43
  • Is it a requirement of a game of life configuration that only a finite number of cells is alive? If not, then we can emulate a finite configuration and its non-noisy evolution correctly in an infinite configuration, if we can find an upper bound for how far the furthest affected cell is... – Erik P. Feb 21 '14 at 06:15

4 Answers


Here are some "nearby best" references, for what it's worth. It would seem the way to go on this question is to reduce it to a question on "noisy Turing machines", which have been studied (somewhat recently), and which are apparently the nearest relevant area of the literature. The basic/general/reasonable answer seems to be that if the TM can resist/correct for noise (as is demonstrated in these references), it's quite likely the CA can also, within some boundaries/thresholds.

The question of how to reduce a "noisy CA" to a "noisy TM" (and vice versa) is more open. It may not be hard, but there does not appear to be published research in the area. Another issue is that the noisy TM is a new model, and therefore there may be multiple (natural?) ways to represent it. For example, the following papers look at disruptions in the state transition function, but another natural model is disruptions in the tape symbols (the latter being more connected to noisy CAs?). There may be some relation between the two.
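
For concreteness, here is a minimal sketch of how the two fault types distinguished above might be injected into a simulated TM: faults in the transition function versus faults in the tape symbols. Everything in it (the function name, the exact fault mechanics) is an illustrative assumption, not the model used in the papers below.

```python
import random

def noisy_tm_run(delta, tape, state, steps, p_trans, p_tape, rng=random):
    """Run a one-tape TM with binary alphabet for `steps` steps under two
    kinds of faults: with probability p_trans a step applies a random
    transition instead of delta, and after every step each written tape
    cell is flipped independently with probability p_tape.
    delta maps (state, symbol) -> (new_state, written_symbol, move in {-1, +1})."""
    states = sorted({s for (s, _) in delta} | {s for (s, _, _) in delta.values()})
    cells = dict(enumerate(tape))          # sparse tape; unwritten cells read as 0
    head = 0
    for _ in range(steps):
        trans = delta.get((state, cells.get(head, 0)))
        if trans is None:                  # no transition defined: halt
            break
        new_state, write, move = trans
        if rng.random() < p_trans:         # fault in the transition function
            new_state = rng.choice(states)
            write = rng.choice((0, 1))
            move = rng.choice((-1, 1))
        cells[head] = write
        state, head = new_state, head + move
        for pos in cells:                  # faults in the tape symbols
            if rng.random() < p_tape:
                cells[pos] ^= 1
    return state, cells
```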

  • Fault-tolerant Turing Machine by Ilir Capuni, 2012 (PhD thesis)

    The Turing machine is the most studied universal model of computation. This thesis studies the question if there is a Turing machine that can compute reliably even when violations of its transition function occur independently of each other with some small probability.

    In this thesis, we prove the existence of a Turing machine that with a polynomial overhead can simulate any other Turing machine, even when it is subject to faults of the above type, thereby answering the question that was open for 25 years.

  • A Turing Machine Resisting Isolated Bursts Of Faults by Ilir Capuni and Peter Gacs, 2012
  • Noisy Turing Machines by Eugene Asarin and Pieter Collins, 2005
(Another question: could there be some connection between noisy TMs and probabilistic Turing Machines?)
argentpepper
vzn

Gil is asking whether the noisy GL forgets everything about its initial configuration in time independent of the size of the configuration, when each cell "disobeys" the transition function independently of the other cells with some small probability.

To the best of my knowledge, this is not known for the GL. It is a very interesting question though. If it can withstand the noise, then it should preserve its universality.

A quick overview of the state of the art is as follows.

  1. Toom's rule can save one bit forever against faults that occur independently of each other with some small probability (a small simulation sketch of this rule follows the list).
  2. It was widely believed (the positive rates conjecture) that all one-dimensional CA are ergodic, until P. Gacs constructed his multi-scale CA that can simulate any other CA with moderate overhead, even when subjected to the aforementioned noise.
  3. The question whether the G(acs)K(urdyumov)L(evin) rule can save one bit forever in the presence of the above noise is still open. Kihong Park, a student of Gacs, showed that it won't when the noise is biased.
  4. When the work in 2 was published, M. Blum asked whether a TM can carry on its computation if, at each step, the transition is not done according to the transition function with some small probability, independently of other steps, assuming that the information stored on the tape far from the head does not decay. A positive answer was given by I. Capuni (another student of Gacs) in 2012.
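
To illustrate item 1, here is a minimal sketch of Toom's rule with per-cell noise, assuming the standard north-east-center majority formulation; the noise model (flip each cell's result independently with probability $p$) and the parameter values are illustrative choices, not taken from the papers.

```python
import numpy as np

def toom_step(grid, p, rng):
    """One step of Toom's NEC rule with noise: each cell becomes the majority
    of itself, its northern neighbour and its eastern neighbour, and the
    result is then flipped independently with probability p (torus)."""
    north = np.roll(grid, 1, axis=0)    # row above (row 0 taken as the top)
    east = np.roll(grid, -1, axis=1)    # column to the right
    maj = (grid + north + east) >= 2
    flip = rng.random(grid.shape) < p
    return (maj ^ flip).astype(int)

# Remembering one bit: start from the all-ones grid and watch the density of ones.
rng = np.random.default_rng(0)
grid = np.ones((100, 100), dtype=int)
for _ in range(1000):
    grid = toom_step(grid, p=0.01, rng=rng)
print(grid.mean())   # stays close to 1 for small p; from all-zeros it stays close to 0
```
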
user8719
  • "If it is not ergodic, then it will preserve its universality" ... do you have any evidence for this statement? Is this a theorem? Where is it proved? I believe that Gacs's work shows that this is true in at least one case, but I don't see how that proves it holds for Conway's game of Life. – Peter Shor Aug 04 '13 at 12:15
  • Thanks for pointing that out. It is not a theorem but an interesting open question. Not being ergodic seems too little to ask for such a strong statement. – user8719 Aug 04 '13 at 13:45

For starters, keep in mind that research in Conway's Game of Life is still ongoing and future developments may present a far less complicated solution.

Now then. Interestingly enough, this is a topic that is as much in line with biology and quantum physics as with traditional computer science. The question at the root of the matter is whether any device can effectively resist random alterations to its state. The plain and simple answer is that it is impossible to make such a machine perfectly resistant to such random changes. Of course, this is true in much the same way that quantum mechanics could cause seemingly impossible events: what prevents these events from occurring (leading most people to declare them strictly impossible) is the stupendously small probability of such an event happening, a probability made so small by the large scale difference between the quantum level and the human level. It is similarly possible to make a state machine that is resistant to small degrees of random change by simply making it so large and redundant that any "change" noticed is effectively zero, but the assumption is that this is not the goal. Assuming that, this can be accomplished in the same way that animals and plants are resistant to radiation or physical damage.
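
As a rough, back-of-the-envelope way to quantify the redundancy idea (not specific to the Game of Life): if a bit is stored in $k$ independent copies and each copy is corrupted with probability $q$, a majority vote over the copies is wrong only when at least half of them are corrupted, so by a union bound over the at most $2^k$ possible corrupted halves,

$$\Pr[\text{majority wrong}] \le \binom{k}{\lceil k/2 \rceil} q^{\lceil k/2 \rceil} \le 2^k q^{k/2} = (4q)^{k/2},$$

which decays exponentially in $k$ whenever $q < 1/4$.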

The question then may not be how to prevent low-level disturbances from doing too much damage, but rather how to recover from as much damage as possible. This is where biology becomes relevant. Animals and plants actually have this very ability at the cellular level. (Please note: I am speaking of cells in the biological sense in this answer.) Now, in Conway's Game of Life the notion of building a computing device at the scale of single cells is appealing (it does, after all, make such creations much smaller and more efficient), and while we can build self-reproducing computers (see Gemini), this ignores the fact that the constructor object itself may become damaged by disturbances.

Another, more resilient, way I can see to solve this is to build computers out of self-reproducing redundant parts (think biological cells) that perform their operations, reproduce, and are replaced.

At this point we can see another interesting real-world parallel. These low-level disturbances are akin to the effects of radiation. This is most appreciable when you consider the type of damage that can be done to your cellular automata. It is easy to trigger the cascade failure or "death" of a cell in Conway's Game of Life, much the same as what happens to many cells exposed to radiation. But there exists the worst-case possibility of mutation, creating a "cancerous" cell that continues to reproduce faulty copies of itself that do not aid in the computational process, or produce incorrect results.

As I've said, it's impossible to build a system that is entirely foolproof; you can only make it less and less likely for a fault to compromise the entire system. Of course, the fundamental question here is really "are probabilistic simulations themselves Turing complete", which has already been decided to be true. I would have answered that fundamental question initially, save that it wasn't what you asked.

Hawkwing
  • Wow! Thanks for the drive-by-downvote! At any rate, I've revised my post, adding some information and sources. Sorry I didn't have the time to do that when I first posted this. I could modify this answer even further to fit community standards, if it wasn't for the fact that no reason was given for the downvote. – Hawkwing Jul 05 '13 at 20:23
  • As a non-voter, I don't see how this answers Gil's question. You address the question of whether "any device can effectively resist random alterations to its state", which is not what Gil asked. – András Salamon Jul 06 '13 at 11:39
  • Thanks (non-sarcastically this time) for the comment, András Salamon. I'd vote it useful myself, but I'm still a new user on this overflow site. Anyways, I'm sorry my answer seems off-topic. I did perhaps address the question more loosely than I'd intended, but I feel my answer does respond to the original question by answering a similar question and then drawing parallels between the two. Is this perhaps too roundabout a way of answering? – Hawkwing Jul 08 '13 at 12:16

I am reminded of xkcd 505: A Bunch of Rocks.

Any real-world computer is subject to some level of noise. A simulation of a universal computer in the ideal infinite Conway's Life universe will have a mean time between failures dependent on the engineering details of its design. It will compute reliably for a probabilistically quantifiable period, unreliably for a period of accumulating errors, and then not at all.
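
To put a rough number on that "probabilistically quantifiable period" (a back-of-the-envelope estimate, assuming the first noise model from the question with per-cell fault probability $t$ per generation, a construction occupying $A$ cells, and no error correction built in): the probability that $T$ generations pass without a single fault anywhere in the construction is

$$(1-t)^{AT} \approx e^{-tAT},$$

so the expected time to the first fault is roughly $1/(tA)$ generations; anything beyond that has to come from fault-tolerant design.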

I would expect a fuzzy logic or quantum superposition model to demonstrate clearly what reliability should be expected of a particular construction. One may want to simulate the expected outputs of various components, rather than iterating over all of their cells, to whatever degree they can be isolated from each other. One might be able to quantify expected interference from failing components. A genetic algorithm should be the best way to develop fault-{tolerating,resisting,correcting} components with MTBFs as large as desired for a given noise distribution.

user130144
  • (mysterious voting here) A quantitative answer would be very speculative. There can't be a more precise answer than "yes, conditionally" without extensive experimentation on some chosen implementation of a UTM. A normal computer in a high-radiation environment is still practically a UTM, if only briefly. – user130144 Mar 04 '14 at 01:50