40

Peter Wegner and Dina Goldin have for over a decade been publishing papers and books arguing primarily that the Church-Turing thesis is often misrepresented in the CS theory community and elsewhere: it is presented as encompassing all computation, when in fact it applies only to the computation of functions, which is a very small subset of all computation. Instead, they suggest we should seek to model interactive computation, where communication with the outside world happens during the computation.

The only critique I have seen of this work is in the Lambda the Ultimate forum, where somebody criticized these authors for continually publishing what is already well known. My question, then: is there any further critique of this line of thinking, and in particular of their Persistent Turing Machines? If not, why does it seem to be studied so little (I could be mistaken)? Lastly, how does the notion of universality translate to an interactive domain?

Kaveh
  • 21,577
  • 8
  • 82
  • 183
zenna
  • 835
  • 6
  • 11
  • ps: you may also want to have a look at this question about hyper-computation. – Kaveh Aug 22 '12 at 06:13
  • 6
    Here's another similar question. – Dave Clarke Aug 22 '12 at 10:52
  • 7
    I think Andrej and Neel have explained here that the answer is negative for the higher-type function computation problems. So essentially the Church-Turing Thesis is about number-function computation problems. The usual equivalences between models of computation do not hold over higher types. (However, as I understand it, this is more about the interaction mechanisms and how higher-type objects are represented than about the computational power of models.) (reposting to fix a few typos) – Kaveh Aug 23 '12 at 19:20
  • 7
    I agree with Kaveh. – Andrej Bauer Aug 23 '12 at 21:19
  • actually Wegner's 1st paper along these lines seems to date to 1996-1997, "Why interaction is more important than algorithms" or "The Paradigm Shift from Algorithms to Interaction". later in the paper there is a ref to Plato's cave, "the Turing tarpit" (?), Kant's Critique of Pure Reason, Marx's dialectical logic, Descartes, Penrose, Searle. so maybe it should be seen as bordering on the philosophical and not so much in the vein of technical/math TCS. no math, no lemmas or proofs or thms. while maybe a bit grandiose, he earnestly seeks to understand "the big picture" of the CT thesis wrt history etc... – vzn Aug 31 '12 at 01:26
  • "All interactive formalisms can be simulated by Turing machines" iff they are considered in an a posteriori sense. –  Sep 03 '12 at 05:41

7 Answers

76

Here's my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science's dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can't toast bread. Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a "point", but it's a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a TM to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster's internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn't have been obvious to Turing himself.

So, I'd say the reason why there's been no "followup" to Wegner and Goldin's diatribes is that TCS has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.

Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing's notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed?

Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be ... err ... THE CHURCH-TURING THESIS, the very same one you've been ragging on? To clarify, the Church-Turing Thesis is not the claim that "TURING MACHINEZ RULE!!" Rather, it's the claim that any reasonable programming language will be equivalent in expressive power to Turing machines -- and as a consequence, that you might as well think in terms of the higher-level languages if it's more convenient to do so. This, of course, was a radical new insight 60-75 years ago.

Final Update: I've created a blog post for further discussion of this answer.

Scott Aaronson
  • 13,733
  • 2
  • 62
  • 68
  • 8
    There is a substantial difference between toasters and interaction: every model of computation has some IO mechanism. Toasters show up only rarely. Some models of computation model IO naively: for example, Turing machines deal with IO only informally. This is not problematic where computation is understood to be functional, i.e. starting with an input and ending with an output, as it is with Turing machines. However, this naive treatment becomes burdensome when you want to deal with genuinely concurrent phenomena, e.g. when are two interactive computations equal? (Continued below.) – Martin Berger Aug 29 '12 at 09:50
  • 4
    (Continued.) How does this equality fare under parallel composition, etc.? In order to put such questions on a firm scientific basis, IO must be mathematised. TMs are mostly an inappropriate vehicle for such mathematisation. Whence concurrency theory. – Martin Berger Aug 29 '12 at 09:51
  • 6
    I would argue that interaction is more fundamental than functional computation on binary strings (as e.g. modelled by Turing machines), and that computation should be seen as finitary, rule guided interaction. – Martin Berger Aug 29 '12 at 10:03
  • 4
    I think this answer is overly dismissive. As Martin said, there is a difference between interaction and toast, and it can be argued that 1. interaction is fundamental to the mathematical structure of higher-order computation, and 2. higher-order structures arise naturally in the study of first-order computation. So, the notion that any "internal logic" can be easily handled by TMs is begging the question: the point is that it may be easier to express (and reason mathematically about) this internal logic in higher-order/interactive terms. – Noam Zeilberger Aug 29 '12 at 13:25
  • 5
    Martin and Noam: If your claim is that the notion of a Turing machine did not, by itself, answer all interesting questions about computation, then you're preaching to the converted and beating down an open door! Concurrency theory and higher-order programming languages are just two of the many wonderful things Turing left for the rest of us to figure out when he bit the apple -- others include complexity theory, graphics, operating systems, quantum computing, and more. Crucially, though, none of those things challenge the original Church-Turing Thesis, contrary to Wegner and Goldin's claim. – Scott Aaronson Aug 29 '12 at 19:12
  • 1
    Scott, that is not their claim: "...the [CT] thesis is commonly interpreted to imply that Turing machines model all computation. It is a myth that the original thesis is equivalent to this interpretation of it. In fact, the [CT] thesis only refers to the computation of functions, and specifically excludes interactive computation. The original contribution of this paper is to identify and analyze the historical reasons for this myth." Whether you buy their argument is a separate question, but precisely they are not trying to refute the CT thesis, rather argue that its scope has been overstated. – Noam Zeilberger Aug 29 '12 at 21:20
  • 2
    Noam: Yes, OK, looking again at their stuff ("Refuting the Church-Turing Thesis," "The Church-Turing Thesis: Breaking the Myth," etc. etc.), I stand corrected. Their line is not that they refute the "original" Church-Turing Thesis, but something they call the "strong" Church-Turing Thesis: the "myth that TMs capture all of computation." But I claim that their "Strong Church-Turing Thesis" is itself a myth. The ideas they "refute" are ones that no sane person ever defended, exactly like the idea that computational universality implies that Turing machines can toast bread. – Scott Aaronson Aug 29 '12 at 21:57
  • 13
    In case my views aren't clear enough yet, I should add that I find the whole "myth of the Church-Turing Thesis" literature not merely hectoring, but (more to the point) depressingly barren of ideas. Reading it brings all the joy of reading someone claiming to refute Newtonian physics, not because of something cool like quantum mechanics or relativity, but because "Newton's laws ignore friction". Or listening to a child explain why she technically won a board game because she moved the pieces while you left to go to the bathroom. – Scott Aaronson Aug 29 '12 at 22:04
  • 7
    I think the Lance Fortnow quote extracted below in vzn's answer (original article here: http://ubiquity.acm.org/article.cfm?id=1921573) demonstrates that at least a few sane people do hold the "Strong" thesis. Fortnow claims that the CT thesis can be "simply stated" as "everything computable is computable by a Turing machine", writing "everything" where he should have really written "every $f : \mathbb{N} \to \mathbb{N}$". – Noam Zeilberger Aug 29 '12 at 22:56
  • 2
    (...and probably he was just being loose and writing that for rhetorical purposes, but I think it's indicative of the sort of ingrained cognitive dissonance Wegner and Goldin were arguing is harmful.) – Noam Zeilberger Aug 29 '12 at 23:26
  • 4
    Noam: I'd say Lance's comment is like an 18th-century physicist's triumphant claim that Newtonian principles capture all currently-known phenomena. Other people might then object: what about the knowledge of glassblowing, lens-grinding, woodworking, etc. needed for us to "communicate" with these Newtonian principles via experiment? But in the sense he intended, the triumphant physicist would clearly have been right at the time he was speaking. All that other stuff fits snugly within the paradigm -- it doesn't challenge the paradigm on its own territory, as relativity and QM would do later. – Scott Aaronson Aug 30 '12 at 00:14
  • 3
    One can get into endless debates about what exactly the Church-Turing Thesis means. Personally, though, I've always preferred the version that takes it to be a falsifiable empirical claim, about what sorts of computational problems can be solved in the physical world. For that version has the enormous advantage of clarity: it makes it clear exactly what it would take to falsify the CT (namely, a revolution in physics), and why none of the stuff Wegner and Goldin talk about comes anywhere close to qualifying. – Scott Aaronson Aug 30 '12 at 00:37
  • @scott, find it pretty wild if you are comparing the current assertion of the supremacy/transcendance (in the sense that it transcends all other possibilities, past, present, future) of the TM model with newtonian physics.. do you want to think that out a little further? think it actually undermines your initially stated position seriously.... – vzn Aug 30 '12 at 01:24
  • 3
    vzn: I chose my analogy very carefully. I was trying to show you how open-minded I was, which is why you might have found it "wild"! :) I'd be thrilled to be able to abandon the Church-Turing Thesis, just as quantum mechanics already caused most of us to abandon its polynomial-time version. My point was that that would require another scientific revolution on the same scale as QM. If people cry wolf (or cry Wegner?) at every trivial observation about TMs not being able to "interact" or toast bread, then how will they even recognize a real challenge to the CT if and when it ever comes? – Scott Aaronson Aug 30 '12 at 02:03
  • newsflash, the new paradigm has arrived! in fact there are multiple new paradigms. the good news: it seems to me that QM computing is one of them! and it uses a model with qubits & their interactions. it's modelled by a TM only in a very awkward way. the bad news: respectable academic work is being semi-trashed by the TCS ivory tower & its self-appointed guardians of the status quo. the toaster analogy is an obvious, embarrassing (but admittedly colorful!) straw man, and imho it's a bit shameful that it's currently the highest-rated answer here. guess the old paradigm is quite safe for now! – vzn Aug 30 '12 at 02:29
  • 2
    @vzn: CT pertains to how real-world computations can be simulated by TMs, and says nothing about how elegantly/awkwardly this is accomplished. If you meant what you said with "modelled by a TM only in a very awkward way", then I see nothing to challenge CT. Did you mean something else? – mikero Aug 30 '12 at 03:15
  • how can we debate about a so-called Thesis named after both Turing and Church, neither of whom actually stated in their own writing the thesis as it has later been interpreted & evolved? Turing and Church made assertions about computation that were correct in their time and even a few decades later, but life goes on! science evolves... Turing invented the idea before electric typewriters. the Turing machine is an analogy with one of the most sophisticated machines of Turing's era-- a manual typewriter.... arguably we should update our models as our own understanding progresses... – vzn Aug 30 '12 at 03:29
  • 10
    how can we debate about a so-called Thesis named after both Turing and Church, neither of whom actually stated in their own writing the thesis as it has later been interpreted & evolved? — See also: Euler's formula, Gaussian elimination, Euclid's algorithm, the Pythagorean theorem. – Jeffε Aug 30 '12 at 11:44
  • further thought-- am thinking of writing this up as a question someday. has anyone noticed that the QM computer model seems to assume infinite precision in the position/orientation/coordinates of the qubits? what is the proof that the QM computer model is not more powerful than TMs? moreover, only ~1.5 decades ago, pre Shor factoring, a complexity version variant of the CT thesis was held by a significant TCS segment, namely that QM computers are not significantly faster than TMs.... as my TCS prof(s) taught me, its called a thesis and not a law for a reason. "it cannot be proven" – vzn Aug 30 '12 at 14:47
  • 4
    vzn: Has anyone noticed that QCs seem to assume infinite precision? ROTFL ... please read the Bernstein-Vazirani and BBBV papers from the early 90s, or learn about Holevo's Theorem, quantum fault-tolerance, etc. The point of all that seminal work was to explain why the linearity of QM makes the continuity of amplitudes essentially as "benign" as the continuity of classical probabilities: for example, both can be rounded to finite precision without causing problems. What's the proof that QCs can be simulated on a classical TM? You'll find it in Bernstein-Vazirani, if you care to look. – Scott Aaronson Aug 30 '12 at 15:02
  • 3
    (cont'd) Also, the polynomial-time Church-Turing thesis was never as widely accepted as the original computability thesis, for excellent reason (though a tiny, wrong minority still clings to the classical polynomial-time CT even today). I do, however, completely agree with you about even the original CT being an empirical thesis subject to falsification ... indeed, that was the entire point of my comment above explaining the Newtonian physics analogy, which you seem not to have read or understood. – Scott Aaronson Aug 30 '12 at 15:11
  • BQP==P is open last I heard! arguably this is closely connected to CT thesis. (since there are so many variants/versions, maybe someone can provide a link to the one actually under discussion...?) the basic theoretical QM model is noiseless, no? it can be amended to consider noise. agreed the work on fault tolerance & the noise is crucial to a real-world implementation of the model. ("application-of-theory") – vzn Aug 30 '12 at 15:26
  • I didn't say there's currently a mathematical proof that the holdouts for the classical polynomial-time CT are wrong -- I just said that they are wrong. :) And BQP is contained in PSPACE; the point is that you can discretize without much affecting the acceptance probability. You don't need any fancy fault-tolerance theorems to prove that result. Again, please see Bernstein-Vazirani for the proof. – Scott Aaronson Aug 30 '12 at 15:32
  • are you talking about Bernstein Vazirani 1993? consider Deutsch 1985 "Quantum theory, the Church-Turing principle and the universal quantum computer" who states physics is continuous & TMs are discrete and that therefore QM computers can compute functions not computable by TMs (including perfectly random values etc). but later research seemed to go in a different direction.... anyway complexity theory has definitively shown over the years (& will continue to do so) that the CT thesis is just not so clearcut or cut-and-dried as was historically thought... it has subtlety & nuance – vzn Aug 30 '12 at 16:11
  • 2
    vzn: Deutsch was simply confused about this issue in the 80s. In his more recent writings, like "The Beginning of Infinity," he talks about the CT thesis more carefully (though still very differently than I would). Bernstein-Vazirani marked the first time anyone thought about quantum vs. classical computing in the "modern" way. As for "subtlety & nuance," those are precisely what I find missing in Wegner and Goldin's diatribes, which set up a simplistic "Church-Turing dogma" no sane person actually believes in order to knock it down! – Scott Aaronson Aug 30 '12 at 17:13
  • You want "subtlety & nuance" about what the (Extended) Church-Turing Thesis actually says? Try, I dunno, http://www.scottaaronson.com/papers/samprel.pdf http://www.scottaaronson.com/papers/npcomplete.pdf http://www.scottaaronson.com/papers/optics.pdf http://www.scottaaronson.com/papers/qchvpra.pdf :-) – Scott Aaronson Aug 30 '12 at 17:16
  • 14
    twenty comments! scott successfully turned a cstheory answer to a Shtetl Optimized blogpost... – Sasho Nikolov Aug 30 '12 at 17:18
  • 2
    @ScottAaronson, the applications you list generally feature terminating interactive computation with simple communication topologies. Things become more interesting and complicated when talking about non-terminating interactive computation. Consider for example the question: when are two non-terminating interactive computations equal (which has deep connections with proof theory)? Hundreds of answers have been given. Moreover, making the communication topology dynamic also complicates things. (Continued below.) – Martin Berger Aug 30 '12 at 17:31
  • 2
    (Continued.) Moreover, looking at e.g. papers on multi-party cryptographic protocols, the treatment of interaction is ad hoc and not fully mathematised (e.g. the original Micali, Goldreich, Wigderson paper). A substantial problem in cryptographic protocols is compositionality. The original papers were super informal. Modern tools for the (semi-)automatic analysis of cryptographic protocols (e.g. CryptoVerif) are based on the pi-calculus and not interactive TMs. For good reason. (Continued below.) – Martin Berger Aug 30 '12 at 17:32
  • 1
    (Continued.) Finally, programming language researchers are primarily interested in developing better languages and making verification easier. Since there is no good algebra of TMs, let alone ITMs, basing this work on (I)TMs is hopeless. – Martin Berger Aug 30 '12 at 17:34
  • 3
    I also don't believe that researchers in the fields you list use TMs to form their intuitions about interaction. Instead they form their intuitions in other contexts, and then, painfully force their intuitions into the straightjacket of ITMs when writing papers, because in some communities that is the go-to model for formalisation. Partly this is for good reason, because TMs shine when thinking about complexity of computation on integers, which is often a concern. – Martin Berger Aug 30 '12 at 17:41
  • 4
    it is depressing that someone built their reputation on a strawman argument. sure, there are models that capture some aspects of computation more elegantly than TMs. for what it's worth, word RAM machines are better for algorithm analysis; at our current level of understanding, the cell probe model is better for data structure lower bounds. the CT thesis (and yes, it is irrelevant whether Church and Turing actually thought that) states that all function-computing procedures inside a computational system are computable by a TM if and only if they are computable in the physical world. – Sasho Nikolov Aug 30 '12 at 17:46
  • 3
    @MartinBerger: OK, but I see not a shadow of a hint of a challenge to the "Church-Turing paradigm" from that stuff---any more than the CT is challenged by the fact that Python is more fun to program in than assembly language. They're still friggin' Turing-equivalent! (And of course, the pioneer of the "algebraic" approach to computability was none other than Church!) Also, hardly anyone in TCS wastes their time anymore "painfully translating their intuitions into Turing-machine notation"---that went out of fashion in the 80s. Which, again, has nothing to do with the CT's intellectual status. – Scott Aaronson Aug 30 '12 at 17:51
  • 2
    @ScottAaronson, I agree with you, none of this challenges in any way the CT thesis. I've said that much in my old reply to the original post below. I find the Wegner/Goldin rhetoric unhelpful. Maybe one way of looking at the issue is asking: is Turing equivalence a good way of thinking about programming languages or computing paradigms? – Martin Berger Aug 30 '12 at 18:05
  • 3
    @SashoNikolov: careful, you must say procedures computing numeric functions. At higher types, the Church-Turing thesis is false. This is why Martin and Noam are banging the drum on compositionality -- looking at the algebraic structure of programs leads to the question of how to compose higher-type objects, and this is a different world. – Neel Krishnaswami Aug 30 '12 at 19:39
  • 1
    @NeelKrishnaswami this is a valid point but framing it as a statement about CTT obscures the real point imo. what happens physically in a computer when a function calls another function is perfectly well modeled by a TM receiving an encoding of another TM. i understand that this is not what you'd like to do in PL theory. but it does not mean that a TM cannot model the physical reality of what your computer does when say a system call is made. so I still think "CTT is false for higher order functions" is a confusing way to say "we want a more elegant abstraction for these questions" – Sasho Nikolov Aug 30 '12 at 20:10
  • 1
    Could you people move over to some meta-thread somewhere? – Andrej Bauer Aug 30 '12 at 20:20
  • 4
    Tell you what, I'll create a post on my blog, so people can continue the discussion there. – Scott Aaronson Aug 30 '12 at 20:25
  • "depressingly barren of ideas" & like "listening to a child explain why she technically won a board game because she moved the pieces while you left to go to the bathroom." wow! anyway thx for all the refs to your own papers. actually one of my faves is why philosophers should care about computational complexity. from abstract: ...I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis. — as long as its not 3rd rail/sacred cow topics like CT thesis, alternative models of computation etc! =) – vzn Aug 30 '12 at 21:22
  • 1
    vzn: I think we should respect Andrej's request to move this discussion elsewhere. But briefly, if alternative models of computation are "third rails" to me, then I suppose I've spent the last 13 years of my career electrocuting myself! :) The reason I found quantum computing so exciting is that it really challenged the Polynomial-Time Church-Turing Thesis, in a way that no other realistic computing proposal yet has. QC set a standard, compared to which most other challenges to the CT have all the advantages of theft over honest toil! Please take further discussion to my blog. – Scott Aaronson Aug 30 '12 at 21:53
  • scott, all due respect, but andrej is not a moderator. moderators have no comment/involvement so far except dave clark who posted an almost identical question on cs stackexchange & maybe some voting on this. comments are hidden & post does not go to top unless someone edits Q/A. just want to say, feel your position on this subj amts to a double bind. think this work is harmless/innocuous taken as bordering on philosophy as it is partly intended. you say you encourage CS philosophy but then turn around & squish it like a bug on the windshield. =( – vzn Sep 01 '12 at 03:30
  • 4
    vzn, I love "philosophy of science," but never when people interpret it to mean (as they sometimes do), "the use of philosophy as an excuse to avoid engaging with scientific discoveries that are well-established in the relevant field but that the speaker doesn't understand or doesn't like." If you reread my CS&philosophy essay, I think (hope!) you'll find that I never use the word "philosophy" in that sense. – Scott Aaronson Sep 01 '12 at 15:02
37

I think the issue is quite simple.

  1. All interactive formalisms can be simulated by Turing machines.

  2. TMs are an inconvenient formalism for research on interactive computation (in most cases), because the interesting issues get drowned out in the noise of encodings.

  3. Everybody working on the mathematisation of interaction knows this.

Let me explain this in more detail.

Turing machines can obviously model all existing interactive models of computing in the following sense: choose some encoding of the relevant syntax as binary strings, and write a TM that takes as input two encoded interactive programs P, Q (in a chosen model of interactive computation) and returns true exactly when there is a one-step reduction from P to Q in the relevant term rewriting system (if your calculus has a ternary transition relation, proceed mutatis mutandis). So you get a TM that does a step-by-step simulation of computation in the interactive calculus. Clearly the pi-calculus, the ambient calculus, CCS, CSP, Petri nets, the timed pi-calculus, and any other interactive model of computation that has been studied can be expressed in this sense. This is what people mean when they say interaction does not go beyond TMs. If you can come up with an interactive formalism that is physically implementable but not expressible in this sense, please apply for your Turing Award.
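
To make "step-by-step simulation" concrete, here is a minimal sketch in Haskell of the TM's job, for a toy CCS-like calculus of my own devising (the datatype and rules below are illustrative assumptions, not anyone's published formalism): the one-step reduction relation is a decidable predicate on pairs of terms, which is all the simulating TM needs to compute on encodings.

```haskell
data Proc = Nil               -- inert process
          | Out String Proc   -- output on a channel, then continue
          | In  String Proc   -- input on a channel, then continue
          | Par Proc Proc     -- parallel composition
          deriving (Eq, Show)

-- All processes reachable in exactly one reduction step. The only
-- axiom: an output and a matching input synchronise. Structural
-- congruence is omitted to keep the sketch short.
successors :: Proc -> [Proc]
successors (Par (Out a p) (In b q)) | a == b = [Par p q]
successors (Par (In a p) (Out b q)) | a == b = [Par p q]
successors (Par p q) =
     [Par p' q | p' <- successors p]
  ++ [Par p q' | q' <- successors q]
successors _ = []

-- The decidable one-step relation "P reduces to Q in one step":
-- exactly what the simulating TM computes on encoded terms.
oneStep :: Proc -> Proc -> Bool
oneStep p q = q `elem` successors p

main :: IO ()
main = print (oneStep (Par (Out "x" Nil) (In "x" Nil)) (Par Nil Nil)) -- True
```

Note how the communication itself becomes invisible once everything is flattened into a predicate on encoded terms; that is precisely the complaint developed below.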

N. Krishnaswami refers to a second approach to modelling interactivity using oracle tapes. This approach is different from the interpretation of the reduction/transition relation above, because the notion of TM is changed: we move from plain TMs to TMs with oracle tapes. This approach is popular in complexity theory and cryptography, mostly because it enables researchers in these fields to transfer their tools and results from the sequential to the concurrent world.

The problem with both approaches is that the genuinely concurrency-theoretic issues are obscured. Concurrency theory seeks to understand interaction as a phenomenon sui generis. Both approaches via TMs simply replace a convenient formalism for expressing an interactive programming language with a less convenient formalism.

In neither approach do the genuinely concurrency-theoretic issues, i.e. communication and its supporting infrastructure, have a direct representation. They are there, visible to the trained eye, but encoded, hidden in the impenetrable fog of encoding complexity. So both approaches are bad at mathematising the key concerns of interactive computation. Take for example what might be the best idea in the theory of programming languages in the last half century, Milner et al.'s axiomatisation of scope extrusion (which is a key step in a general theory of compositionality):

$$P|(\nu x)Q \ \equiv\ (\nu x)(P|Q) \quad\text{provided}\ x \notin fv(P)$$

How beautifully simple this idea is when expressed in a tailor-made language like the pi-calculus. Doing the same using an encoding of the pi-calculus into TMs would probably fill 20 pages.
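
For readers who have not seen the law in action, here is a minimal sketch in Haskell of what it says at the level of syntax; the AST and function names are my own toy rendering (illustrative assumptions, not a standard pi-calculus library):

```haskell
import qualified Data.Set as Set

data Pi = Zero                   -- inert process
        | Send String String Pi  -- send name y on channel x, then continue
        | Recv String String Pi  -- receive into y on channel x (binds y)
        | Par Pi Pi              -- parallel composition
        | Nu String Pi           -- restriction (nu x)P, binds x
        deriving (Eq, Show)

-- Free names of a process; Recv and Nu are the binders.
fv :: Pi -> Set.Set String
fv Zero         = Set.empty
fv (Send x y p) = Set.insert x (Set.insert y (fv p))
fv (Recv x y p) = Set.insert x (Set.delete y (fv p))
fv (Par p q)    = fv p `Set.union` fv q
fv (Nu x p)     = Set.delete x (fv p)

-- One direction of scope extrusion: P | (nu x)Q  ==>  (nu x)(P | Q),
-- permitted only under the side condition that x is not free in P.
extrude :: Pi -> Maybe Pi
extrude (Par p (Nu x q))
  | x `Set.notMember` fv p = Just (Nu x (Par p q))
extrude _ = Nothing
```

The side condition is a one-line check here; under an encoding of the pi-calculus into TMs, even stating it means unpacking the representation of binders by hand.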

In other words, the invention of explicit formalisms for interaction has made the following contribution to computer science: the direct axiomatisation of the key primitives for communication (e.g. input and output operators) and the supporting mechanisms (e.g. new name generation, parallel composition etc). This axiomatisation has grown into a veritable research tradition with its own conferences, schools, terminology.

A similar situation obtains in mathematics: most concepts could be written down using the language of set theory (or topos theory), but we mostly prefer higher level concepts like groups, rings, topological spaces and so on.

Martin Berger
  • 11,420
  • 3
  • 39
  • 71
28

In terms of number computability (i.e., computing functions from $\mathbb{N} \to \mathbb{N}$), all known models of computation are equivalent.

However, it's still true that Turing machines are fairly painful for modelling properties like interactivity. The reason is a little bit subtle, and has to do with the kinds of questions that we want to ask about interactive computations.

The usual first pass at modelling interaction with TMs is with oracle tapes. Intuitively, you can think of the string printed on the oracle tape as being a "prediction" of the Turing machine's I/O interaction with the environment. However, consider the sorts of questions we want to ask about interactive programs: for example, we might want to know that a computer program will not output your financial data unless it receives your username and password as input, and furthermore that programs do not leak information about passwords. Talking about this kind of constraint is very painful with oracle strings, since it reflects a temporal, epistemic constraint on the trace of the interaction, and the definition of oracle tapes asks you to supply the whole string up front.

I suspect getting this right is doable, and essentially amounts to (1) considering oracle strings not as a set, but as a topological space whose open sets encode the modal logic of time and knowledge that you want to model, and (2) ensuring that the theorems you prove are all continuous with respect to this topology, viewing predicates as continuous functions from oracle strings to truth values, with the truth values viewed as the Sierpinski space. I should emphasize that this is a guess, based on the analogy with domain theory. You'd need to work out the details (and probably submit them to LICS or something) to be sure.
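
Here is a minimal sketch in Haskell of the continuity intuition (my own gloss on the analogy, not Neel's formal proposal): model an interactive computation as a function on infinite input streams. Causality, the stream-space analogue of continuity, means that the output up to time t depends only on the inputs up to time t, so the computation can run against an environment that reveals the "oracle string" one symbol at a time instead of all up front.

```haskell
import Data.Char (toUpper)

-- Infinite streams: one input symbol per time step.
data Stream a = Cons a (Stream a)

fromList :: [a] -> Stream a   -- partial: assumes an infinite list
fromList (x:xs) = Cons x (fromList xs)
fromList []     = error "fromList: finite list"

-- A causal transformer: each output depends only on inputs seen so far.
echoUpper :: Stream Char -> Stream Char
echoUpper (Cons c rest) = Cons (toUpper c) (echoUpper rest)

takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons x xs) = x : takeS (n - 1) xs

-- Laziness makes the continuity visible: only the first three inputs
-- are ever demanded in order to produce the first three outputs.
main :: IO ()
main = print (takeS 3 (echoUpper (fromList (cycle "abc")))) -- "ABC"
```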

As a result, people prefer to model interaction using things like the Dolev-Yao model, where the interaction between computers and the environment is modelled explicitly, so that you can characterize exactly what the attacker knows. This makes it a lot easier to formulate appropriate modal logics for reasoning about security, since the state of the system plus the state of the environment are represented explicitly.
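
To illustrate, here is a minimal sketch in Haskell of the Dolev-Yao idea, using only the standard textbook decomposition rules (an illustrative assumption, not any particular paper's formalisation): the attacker's knowledge is a set of terms closed under projection and under decryption with keys it already holds, and making that set explicit lets a property like "the attacker never learns the password" be stated as a plain predicate on system states.

```haskell
import qualified Data.Set as Set

data Term = Atom String    -- names, nonces, keys
          | Pair Term Term -- concatenation of two terms
          | Enc Term Term  -- Enc key payload (symmetric encryption)
          deriving (Eq, Ord, Show)

-- One round of closure: everything derivable by one decomposition rule.
derive :: Set.Set Term -> Set.Set Term
derive k = k `Set.union` Set.fromList
  (  [ p | Pair p _ <- ts ] ++ [ q | Pair _ q <- ts ]  -- projections
  ++ [ m | Enc key m <- ts, key `Set.member` k ]       -- decrypt with known key
  )
  where ts = Set.toList k

-- Iterate to a fixed point. This terminates because the rules above only
-- decompose terms; composition rules (pairing, encrypting) would need a
-- bounded strategy to stay finite.
knows :: Set.Set Term -> Set.Set Term
knows k = let k' = derive k in if k' == k then k else knows k'

-- An attacker who observed the key "k" and Enc k password learns the password.
main :: IO ()
main = print (Atom "password" `Set.member`
              knows (Set.fromList [Atom "k", Enc (Atom "k") (Atom "password")]))
```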

Neel Krishnaswami
  • 32,528
  • 1
  • 100
  • 165
1

reading Lance Fortnow's blog, just ran across this recent/nice/long survey article on the subj with many perspectives & refs [1] (which has not been cited so far), which includes Wegner/Goldin's perspective (among many others). I'll just quote Fortnow's excellent/emphatic summary/declaration/assertion of the near-official/uniform/unanimous TCS party line:

"A few computer scientists nevertheless try to argue that the [Church-Turing] thesis fails to capture some aspects of computation. Some of these have been published in prestigious venues such as Science, Communications of the ACM, and now as a whole series of papers in ACM Ubiquity. Some people outside of computer science might think that there is a serious debate about the nature of computation. There isn't."

[1] Turing's Titanic Machine, by S. Barry Cooper, CACM Vol. 55

vzn
  • 11,014
  • 2
  • 31
  • 64
-4

I am very much in agreement with Aaronson's comments.

I don't understand Milner's work (e.g. the pi calculus, which Milner invented to describe communicating processes). It is quite unreadable to me, as are nearly all papers on maths and logic, such as Lambek's theories. I have no doubt that Lambek's ideas are very good, but I would like to see them translated into some kind of pidgin English that I can read.

I am thrown by Milner's comment that lambda calculus is fine for "sequential processes" but that something more is needed for communicating processes.

My (perhaps naïve) viewpoint was that this cannot be so, because pi-calculus is Turing complete, and therefore can be converted mechanically to another Turing-complete notation, i.e. lambda calculus. Therefore Milner's pi-calculus notation can be converted automatically to lambda calculus.

It seems that I have identified a project: intuitively, it should be possible to mechanically convert from one Turing-complete language to another. Is there an algorithm to do this? I will have to look on Google. Maybe this is incredibly hard to do, and as hard as the halting problem.

I looked yesterday on the net, and found papers on models of lambda calculus. I was surprised to find that this seems to be a very deep rabbit hole.

Richard Mullins

Or Meir
  • 5,380
  • 22
  • 33
-8

skimming Wegner's paper, it's clear he's being a bit melodramatic and contrarian, but he has a point. the future of computing is arguably much more significantly centered in robotics, AI, or data mining (of vast real-world "big data"), which he doesn't seem to mention much by name, but which he is clearly alluding to with his model. and these areas largely focus on the universe outside of a TM's inputs and outputs.

historically this area also went by the name cybernetics, as invented/formulated by Wiener. the point of robotics is that inputs and outputs are not merely digital and without meaning, as one might conclude looking at a TM; they are digital, but they have real-world implications/effects/causes etc., and the machine forms a feedback loop with the environment.

so I would argue that TMs and robotics form a very natural synergy, or symbiotic relationship so to speak. but this is not a radical assertion, and what Wegner announces with great fanfare is, phrased in different terms, not very controversial or novel. in other words, Wegner seems to be setting himself up as an intellectual or academic iconoclast in his style, on purpose... and so who is the TCS community to deny him that melodramatic framing? nevertheless, see [2] for a serious rebuttal.

Wegner's example of driving a car is very relevant, & other key recent breakthroughs in computing can be cited:

  • the DARPA road race challenge, and also Google closing in on the technology of a self-driving car.[3]
  • the case of the Deep Blue AI chess victory over Kasparov
  • the recent Watson Jeopardy! challenge victory
  • the increasingly autonomous Mars rover
  • a recently announced breakthrough in unsupervised object recognition by Google.[4]
  • commercialized speech recognition

but it is true: what started out decades ago as mere theory with TMs is now a very real-world phenomenon, and segments of the ivory-tower TCS community might be in some resistance, or even denial, of that fact and the associated fundamental [near-Kuhnian] transformation and shift "currently in play". this is somewhat ironic, because Turing was very applied in many of his perspectives & studies, such as his interest in an operational AI test (the Turing test), chemical dynamics, chess-solving computation, etc.[5]

you can even see this in microcosm on this site, in clashes over how to define the scope, and heated arguments over whether a specific, seemingly innocuous tag called application-of-theory is legitimate.[7]

and let's note that TCS is in fact studying many interactive models of computation, and much key research is going on in that area... particularly interactive proof systems, in terms of which many of the important complexity classes can be defined.[6]

[1] The Church-Turing Thesis: Breaking the Myth, by Goldin & Wegner

[2] Are there new models of computation? A reply to Goldin & Wegner, by Cockshott & Michaelson

[3] Google's self-driving cars: 300K miles logged, not a single accident under computer control, The Atlantic

[4] Google unsupervised object recognition of YouTube images

[5] Alan Turing's contributions to CS

[6] Landscape of interactive proof systems

[7] On modifying our scope: a proposal

vzn
  • 11,014
  • 2
  • 31
  • 64
  • 9
    what started out decades ago as mere theory with TMs is now a very real-world phenomenon — Of course, we know that. We call it "computer science". – Jeffε Aug 26 '12 at 01:29
  • an analogy that was at the edge of my thoughts while writing this, but finally figured out later: think the distinction of in vivo vs in vitro biology is relevant. the TM is analogous to the latter. other (emerging) models are analogous to the former. =) – vzn Aug 29 '12 at 22:35
  • anyway the 2006 volume shows many prestigious computer scientists agreeing with the new paradigm. note also the final essay in the collection: Lynn Stein, Interaction, Computation, and Education — This volume as a whole documents a fundamental shift in the culture of computation from a focus on algorithmic problem solving to a perspective in which interaction plays a central role. In this chapter, Stein points out that such a shift must be accompanied by a corresponding shift in computer science education, in the fundamental "story" we tell our students in their introductory courses. – vzn Aug 30 '12 at 01:46
  • 4
  • agreed! JeffE=authority =) – vzn Aug 30 '12 at 15:11
-8

Here's the thing: once you add (pure) interactivity, formality goes out the window. It's no longer a "closed" system. The question then is, what is the notion of computation once interactivity enters in? The answer: either the other user/machine is substituting for some of your computation (which can be described by just another, bigger state machine), or you're no longer in a formally-definable system and you're now playing a game, in which case the Church-Turing thesis does not apply.

TheDoctor
  • 159
  • 7
  • 2
    Interactive models of computation like process calculi are games in the sense of game semantics. – Martin Berger Aug 30 '12 at 06:22
  • @Martin: No, you cannot "model" (human) interactivity. What you are talking about are formal rule behaviors (as in a game), but then as soon as you've codified such, you're no longer in an interactive model. – TheDoctor Aug 30 '12 at 18:13
  • 1
    Human behaviour is irrelevant. What matters is that computable interactive devices react to their inputs in an algorithmic, mechanical manner. – Martin Berger Sep 03 '12 at 13:05
  • Aye, there's the rub, i'nnit? "...computable interactive devices". But is it computable? – TheDoctor Sep 03 '12 at 16:46
  • @Martin: I want to further elucidate the issue, because it's important. The situation is analogous to an experiment measuring the veracity of the 2nd Law of Thermodynamics: yes, one may notice a local increase in order (computability), seemingly at odds with the 2nd Law, but it comes at an expense at a larger scale, where it again returns to disorder (incomputable). The error is where (i.e. at what level) is the system closed enough to be considered a "process calculus"? – TheDoctor Sep 04 '12 at 00:06
  • 1
    @ Mark J, I don't understand what you are saying. The interactive approach simply says that a device is computable if it reacts to its inputs in a mechanical way, using finite resources. Yes, if the other part of the interaction does something crazy, like inputting Chaitin's Omega, then the mechanical device can do something crazy, like computing the halting problem. So what? – Martin Berger Sep 04 '12 at 09:58
  • 1
    In my opinion the CTT is not about what is physically implementable. Instead, it's a crude test that rules out certain clearly not implementable things: If the CTT says something is not computable, then it is not physically implementable, but I don't think the reverse implication holds. – Martin Berger Sep 04 '12 at 10:00
  • @MartinBerger: "a device is computable if it reacts to its inputs ina mechanical way, using finite resources" -- but then it shouldn't be considered "interactive" should it? It's more a double-sized Turing machine with the other machine at the other end of the tape and the translation table having a code to distinguish between one FSM and the other (i.e. still completely implementable in a single Turing machine). – TheDoctor Sep 04 '12 at 21:20
  • 1
    @ Mark J, the requirement "a device is computable if it reacts to its inputs in a mechanical way, using finite resources" does not require the inputs to be generated mechanically. Certainly Chaitin's Omega cannot be mechanically generated. – Martin Berger Sep 05 '12 at 09:31
  • 1
    And yes, as I said in my comment below, interactive formalisms can be step-by-step simulated by TMs. However, step-by-step simulation is not the same thing as TM-simulatability in the sense of the strong CT thesis. – Martin Berger Sep 05 '12 at 09:33