Does a noisy version of Conway's Game of Life support universal computation?


30

To quote Wikipedia, "[Conway's Game of Life] has the power of a universal Turing machine: that is, anything that can be computed algorithmically can be computed within Conway's Game of Life."

Does such a result extend to noisy versions of Conway's Game of Life? The simplest version is this: after every round, every live cell dies with a small probability t and every dead cell becomes alive with a small probability s (independently).

Another possibility is to consider the following probabilistic variation of the rules of the game itself (a short simulation sketch of both noise models follows the list):

  • Any live cell with fewer than two live neighbours dies with probability 1 − t.
  • Any live cell with two or three live neighbours lives on to the next generation with probability 1 − t.
  • Any live cell with more than three live neighbours dies with probability 1 − t.
  • Any dead cell with exactly three live neighbours becomes a live cell with probability 1 − t.
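For concreteness, here is a minimal Python/NumPy sketch of the two noise models described above. The torus boundary, grid size, starting pattern and noise levels are arbitrary illustrative choices, not part of the question; the reading of the probabilistic rules as "invert the deterministic outcome with probability t" is one natural interpretation.

    import numpy as np

    def life_step(grid):
        """One synchronous step of standard Conway's Game of Life
        (toroidal boundary, purely for illustration)."""
        # Count the eight neighbours of each cell using array rolls.
        nbrs = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
        birth = (grid == 0) & (nbrs == 3)
        survive = (grid == 1) & ((nbrs == 2) | (nbrs == 3))
        return (birth | survive).astype(np.uint8)

    def noisy_step_state_noise(grid, t, s, rng):
        """Model 1: a normal Life step, after which every live cell dies with
        probability t and every dead cell becomes alive with probability s,
        independently."""
        nxt = life_step(grid)
        u = rng.random(nxt.shape)
        die = (nxt == 1) & (u < t)
        born = (nxt == 0) & (u < s)
        nxt[die] = 0
        nxt[born] = 1
        return nxt

    def noisy_step_rule_noise(grid, t, rng):
        """Model 2 (one natural reading of the probabilistic rules): each cell
        follows the usual rule with probability 1 - t, and with probability t
        its outcome is inverted."""
        nxt = life_step(grid)
        flip = rng.random(grid.shape) < t
        return np.where(flip, 1 - nxt, nxt).astype(np.uint8)

    # Example: a glider on a 32 x 32 torus, noise level 1e-3.
    rng = np.random.default_rng(0)
    grid = np.zeros((32, 32), dtype=np.uint8)
    for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
        grid[r, c] = 1
    for _ in range(100):
        grid = noisy_step_state_noise(grid, t=1e-3, s=1e-3, rng=rng)
    print("live cells after 100 noisy steps:", int(grid.sum()))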

Question: Do these noisy versions of the Game of Life still support universal computation? If not, what can be said about their "computational power"?

Related information about the computational power of cellular automata, and about noisy versions of cellular automata, would also be much appreciated.

(This question developed from this question on MathOverflow. Vincent Beffara's answer on MO provided interesting references on related results about the computational aspects of noisy cellular automata.)


2
@vzn 1) No, this is not "the real question" at all; it's an entirely different question. Gil's question is about the robustness of a simple computational model to noise, not about the power of randomness. 2) A TM with a random tape is no more powerful than a deterministic TM.
Sasho Nikolov

2
The real question here is whether probabilistic/noisy versions of the Game of Life still support computation. (If these versions support computation in P, their power could go up to BPP.) It is also possible that the computational power of these probabilistic versions of the Game of Life is much lower.
Gil Kalai

3
Perhaps I am stating the obvious, but one can replicate the configuration enough times to guarantee with high probability that no copy of the configuration has even a single cell flipped. My personal belief is that we can do much better, but at least it gives a simple lower bound (a rough quantitative sketch follows this comment).
user834
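One standard way to quantify a replication idea of this kind is a majority vote over independent copies; the following is a small sketch using the Hoeffding bound. The majority-vote framing and the numbers are my own illustration, not part of the comment above.

    import math

    def majority_error_bound(p, k):
        """Hoeffding bound: if each of k independent copies of a cell is wrong
        with probability p < 1/2, then a majority vote over the copies is
        wrong with probability at most exp(-2 * k * (1/2 - p)**2)."""
        return math.exp(-2 * k * (0.5 - p) ** 2)

    # e.g. per-copy error 1e-3 and 100 copies:
    print(majority_error_bound(1e-3, 100))   # about 2.4e-22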

4
I'm not sure the question is well-defined. Suppose t = 10^-9. It seems to me that you might be able to find a computer that deals with all one-bit errors in the "Game of Life", giving you fault-tolerant computation unless you spontaneously get a large block of errors all at once. But I don't think anything can be robust against all errors. For example, suppose the errors spontaneously create a malevolent adversary determined to disrupt the computation. You might be able to show your computation succeeds with probability > 1 - 10^-9 but fails with probability > 10^-10000. Does this count?
Peter Shor

2
Peter, I'm happy if your computation succeeds with probability 2/3.
Gil Kalai

Answers:


8

Here are some "nearby best" references, for what it's worth. It would seem the way to go on this question is to reduce it to a question on "noisy Turing machines", which have been studied (somewhat recently), and which are apparently the nearest relevant area of the literature. The basic/general/reasonable answer seems to be that if the TM can resist/correct for noise (as is demonstrated in these references), it's quite likely the CA can also, within some boundaries/thresholds.

The question of how to reduce a "noisy CA" to a "noisy TM" (and vice versa) is more open. It may not be hard but there does not appear to be published research in the area. Another issue is that the noisy TM is a new model and therefore there may be multiple (natural?) ways to represent a noisy TM. For example, the following papers look at disruptions in the state transition function, but another natural model is disruptions in the tape symbols (the latter being more connected to noisy CAs?). There may be some relation between the two.
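As a loose illustration of the first fault model (violations of the state transition function), here is a sketch of a single noisy TM step. The dictionary-based tape representation and the choice of a uniformly random faulty transition are simplifying assumptions of mine, not taken from the papers below.

    import random

    def noisy_tm_step(state, tape, head, delta, states, alphabet, t):
        """One step of a TM whose transition function is violated with
        probability t: with probability 1 - t the intended transition
        delta[(state, symbol)] is applied, and with probability t an
        arbitrary (here: uniformly random) transition happens instead.
        `tape` is a dict from positions to symbols, '_' meaning blank."""
        symbol = tape.get(head, '_')
        if random.random() < t:
            write = random.choice(alphabet)      # faulty write
            move = random.choice((-1, +1))       # faulty head move
            new_state = random.choice(states)    # faulty next state
        else:
            write, move, new_state = delta[(state, symbol)]
        tape[head] = write
        return new_state, tape, head + move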

  • Fault-tolerant Turing Machine by Ilir Capuni, 2012 (PhD thesis)

    The Turing machine is the most studied universal model of computation. This thesis studies the question if there is a Turing machine that can compute reliably even when violations of its transition function occur independently of each other with some small probability.

    In this thesis, we prove the existence of a Turing machine that, with a polynomial overhead, can simulate any other Turing machine even when it is subject to faults of the above type, thereby answering a question that was open for 25 years.

  • A Turing Machine Resisting Isolated Bursts Of Faults by Ilir Capuni and Peter Gacs, 2012
  • Noisy Turing Machines by Eugene Asarin and Pieter Collins, 2005
(Another question: could there be some connection between noisy TMs and probabilistic Turing Machines?)


7

Gil is asking whether the GL forgets everything about its initial configuration, in time independent of its size, when each cell "disobeys" the transition function independently of the other cells with some small probability.

To the best of my knowledge, this is not known for the GL. It is a very interesting question though. If it can withstand the noise, then it should preserve its universality.

A quick overview of the state of the art is as follows.

  1. Toom's rule can save one bit forever against faults that occur independently of each other with some small probability (a small simulation sketch follows this list).
  2. It was widely believed (the positive rates conjecture) that all one-dimensional CA are ergodic, until P. Gacs constructed his multi-scale CA that can simulate any other CA with moderate overhead even when subjected to the aforementioned noise.
  3. The question of whether the G(acs)K(urdiumov)L(evin) rule can save one bit forever in the presence of the above noise is still open. Kihong Park (a student of Gacs) showed that it won't when the noise is biased.
  4. When the work in item 2 was published, M. Blum asked whether a TM can carry on its computation if, at each step, the transition is not done according to the transition function with some small probability, independently of other steps, assuming that the information stored on the tape far from the head does not decay. A positive answer was given by I. Capuni (another student of Gacs) in 2012.
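Toom's rule from item 1 is simple enough to simulate directly. Below is a minimal NumPy sketch of the north-east-center majority rule with independent per-cell noise, starting from the all-ones configuration that encodes the stored bit; the torus size, number of steps and noise level are arbitrary illustrative choices.

    import numpy as np

    def toom_nec_step(grid, t, rng):
        """Toom's north-east-center majority rule, followed by flipping each
        cell independently with probability t (toroidal boundary for
        simplicity)."""
        north = np.roll(grid, -1, axis=0)
        east = np.roll(grid, -1, axis=1)
        maj = ((grid + north + east) >= 2).astype(np.uint8)
        flip = rng.random(grid.shape) < t
        return np.where(flip, 1 - maj, maj).astype(np.uint8)

    # Store the bit "1" as the all-ones configuration and watch it persist
    # (for small enough t) despite the noise.
    rng = np.random.default_rng(1)
    grid = np.ones((200, 200), dtype=np.uint8)
    for _ in range(1000):
        grid = toom_nec_step(grid, t=0.01, rng=rng)
    print("fraction of ones after 1000 noisy steps:", grid.mean())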

"If it is not ergodic, then it will preserve its universality" ... do you have any evidence for this statement? Is this a theorem? Where is it proved? I believe that Gacs's work shows that this is true in at least one case, but I don't see how that proves it holds for Conway's game of Life.
Peter Shor

Thanks for pointing that out. It is not a theorem but an interesting open question. Not being ergodic seems too little to ask for such a strong statement.
user8719

3

For starters, keep in mind that research in Conway's Game of Life is still ongoing and future developments may present a far less complicated solution.

Now then. Interestingly enough, this is a topic that is actually as much in line with biology and quantum physics as with traditional computer science. The question at the root of the matter is whether any device can effectively resist random alterations to its state. The plain and simple answer is that it is impossible to make such a machine perfectly resistant to such random changes. Of course, this is true in much the same way that quantum mechanics could cause seemingly impossible events. What prevents these events from occurring (leading most people to declare them strictly impossible) is the stupendously small probability such an event has of happening, a probability made so small by the large scale difference between the quantum level and the human level. It is similarly possible to make a state machine that is resistant to small degrees of random change by simply making it so large and redundant that any "change" noticed is effectively zero, but the assumption is that this is not the goal. Assuming that, resistance can be accomplished in the same way that animals and plants are resistant to radiation or physical damage.

The question then may not be how to prevent low-level disturbances from doing too much damage, but rather how to recover from as much damage as possible. This is where biology becomes relevant. Animals and plants actually have this very ability at the cellular level. (Please note: I am speaking of cells in the biological sense in this answer.) Now, in Conway's Game of Life the notion of building a computing device at the scale of single cells is appealing (it does, after all, make such creations much smaller and more efficient), but while we can build self-reproducing computers (see Gemini), this ignores the fact that the constructor object itself may become damaged by disturbances.

Another, more resilient, way I can see to solve this is to build computers out of self-reproducing redundant parts (think biological cells) that perform their operations, reproduce, and are replaced.

At this point we can see another interesting real-world parallel. These low-level disturbances are akin to the effects of radiation. This is most appreciable when you consider the type of damage that can be done to your cellular automaton. It is easy to trigger a cascade failure or "death" of a cell in Conway's Game of Life, much as happens to many cells exposed to radiation. But there is also the worst-case possibility of mutation: a "cancerous" cell that continues to reproduce faulty copies of itself that do not aid in the computational process, or that produce incorrect results.

As I've said, it's impossible to build a system that is entirely foolproof; you can only make it less and less likely for a fault to compromise the entire system. Of course, the fundamental question here is really "are probabilistic simulations themselves Turing complete", which has already been decided to be true. I would have answered that fundamental question initially, except that it wasn't what you asked.


Wow! Thanks for the drive-by-downvote! At any rate, I've revised my post, adding some information and sources. Sorry I didn't have the time to do that when I first posted this. I could modify this answer even further to fit community standards, if it wasn't for the fact that no reason was given for the downvote.
Hawkwing

5
As a non-voter, I don't see how this answers Gil's question. You address the question of whether "any device can effectively resist random alterations to its state", which is not what Gil asked.
András Salamon

Thanks (non-sarcastically this time) for the comment, András Salamon. I'd vote it useful myself, but I'm still a new user on this overflow site. Anyways, I'm sorry my answer seems off-topic. I did perhaps address the question more loosely than I'd intended, but I feel my answer does respond to the original question by answering a similar question and then drawing parallels between the two. Is this perhaps too roundabout a way of answering?
Hawkwing

0

I am reminded of xkcd 505: A Bunch of Rocks.

Any real-world computer is subject to some level of noise. A simulation of a universal computer in the ideal infinite Conway's Life universe will have a mean time between failures dependent on the engineering details of its design. It will compute reliably for a probabilistically quantifiable period, unreliably for a period of accumulating errors, and then not at all.

I would expect a fuzzy logic or quantum superposition model to demonstrate clearly what reliability should be expected of a particular construction. One may want to simulate the expected outputs of various components, rather than iterating over all of their cells, to whatever degree they can be isolated from each other. One might be able to quantify expected interference from failing components. A genetic algorithm should be the best way to develop fault-{tolerating,resisting,correcting} components with MTBFs as large as desired for a given noise distribution.


(mysterious voting here) A quantitative answer would be very speculative. There can't be a more precise answer than "yes, conditionally" without extensive experimentation on some chosen implementation of a UTM. A normal computer in a high-radiation environment is still practically a UTM, if only briefly.
user130144