Computation in Physical Systems
In our ordinary discourse, we distinguish between physical systems that perform computations, such as computers and calculators, and physical systems that don't, such as rocks. Among computing devices, we distinguish between more and less powerful ones. These distinctions affect our behavior: if a device is computationally more powerful than another, we pay more money for it. What grounds these distinctions? What is the principled difference, if there is one, between a rock and a calculator, or between a calculator and a computer? Answering these questions is more difficult than it may seem.
In addition to our ordinary discourse, computation is central to many sciences. Computer scientists design, build, and program computers. But again, what counts as a computer? If a salesperson sold you an ordinary rock as a computer, you should probably get your money back. Again, what does the rock lack that a genuine computer has?
How powerful a computer can you build? Can you build a machine that computes anything you wish? Although it is often said that modern computers can compute anything (i.e., any function of natural numbers, or equivalently, any function of strings of letters from a finite alphabet), this is not correct. Ordinary computers can compute only a tiny subset of all functions. Is it physically possible to do better? Which functions are physically computable? These questions are bound up with the foundations of physics.
Computation is also central to psychology and neuroscience (and perhaps other areas of biology). According to the computational theory of cognition, cognition is a kind of computation: the behavior of cognitive systems is causally explained by the computations they perform. In order to test a computational theory of something, we need to know what counts as a computation in a physical system. Once again, the nature of computation lies at the foundation of empirical science.
- 1. Abstract Computation and Concrete Computation
- 2. Accounts of Concrete Computation
- 3. Is Every Physical System Computational?
- 4. Physical Computability
- Academic Tools
- Other Internet Resources
- Related Entries
Computation may be studied mathematically by formally defining computational objects, such as algorithms and Turing machines, and proving theorems about their properties. The mathematical theory of computation is a well-established branch of mathematics. It deals with computation in the abstract, without worrying much about physical implementation.
By contrast, most uses of computation in science and ordinary practice deal with concrete computation: computation in concrete physical systems such as computers and brains. Concrete computation is closely related to abstract computation: we speak of physical systems as running an algorithm or as implementing a Turing machine, for example. But the relationship between concrete computation and abstract computation is not part of the mathematical theory of computation per se and requires further investigation. Questions about concrete computation are the main subject of this entry. Nevertheless, it is important to bear in mind some basic mathematical results.
The most important notion of computation is that of digital computation, which Alan Turing, Kurt Gödel, Alonzo Church, Emil Post, and Stephen Kleene formalized in the 1930s. Their work arose from investigations into the foundations of mathematics. One crucial question was whether first-order logic is decidable — whether there is an algorithm that determines whether any given first-order formula is a theorem.
Turing (1936–7) and Church (1936) proved that the answer is negative: there is no such algorithm. To show this, they offered precise characterizations of the informal notion of algorithmically computable function. Turing did so in terms of so-called Turing machines — devices that manipulate discrete symbols written on a tape in accordance with finitely many instructions. Other logicians did the same thing — they formalized the notion of algorithmically computable function — in terms of other notions, such as λ-definable functions and general recursive functions.
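Turing's formalism is concrete enough to sketch in a few lines of code. The following is a minimal, illustrative simulator — the names and the example machine are mine, not anything from the entry: a machine table maps (state, symbol) pairs to (symbol to write, head move, next state) triples, and the hypothetical example machine computes the unary successor, skipping a block of 1s and appending one more.

```python
# A minimal Turing machine simulator (an illustrative sketch; the machine
# below is a hypothetical example, not one discussed in the text).
# A machine table maps (state, symbol) pairs to (write, move, next state)
# triples; "H" is a designated halting state and "_" is the blank symbol.

def run(table, tape, state="A", head=0, max_steps=1000):
    cells = dict(enumerate(tape))          # sparse representation of the tape
    for _ in range(max_steps):
        if state == "H":
            break
        symbol = cells.get(head, "_")      # unvisited cells read as blank
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, "_") for i in span).strip("_")

# Unary successor: scan right past a block of 1s, write one more 1, halt.
successor = {
    ("A", "1"): ("1", "R", "A"),
    ("A", "_"): ("1", "R", "H"),
}
```

For instance, `run(successor, "111")` yields `"1111"`. The point of the sketch is only that a Turing machine is exhaustively specified by a finite table of instructions — the fact the formalisms above make precise.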
To their surprise, all such notions turned out to be extensionally equivalent, that is, any function computable within any of these formalisms is computable within any of the others. They took this as evidence that their quest for a precise definition of “algorithm” or “algorithmically computable function” had been successful. The resulting view — that Turing machines and other equivalent formalisms capture the informal notion of algorithm — is now known as the Church-Turing thesis (more on this in Section 4). The study of computable functions, made possible by the work of Turing et al., is part of the mathematical theory of computation.
The theoretical significance of Turing et al.'s notion of computation can hardly be overstated. As Gödel pointed out (in a lecture following one by Tarski):
Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen. (Gödel 1946, 84)
Turing also showed that there are universal Turing machines — machines that can compute any function computable by any other Turing machine. Universal machines do this by executing instructions that encode the behavior of the machine they simulate. Assuming the Church-Turing thesis, universal Turing machines can compute any function computable by algorithm. This result is significant for computer science: you don't need to build different computers for different functions; one universal computer will suffice to compute any computable function. Modern digital computers approximate universal machines in Turing's sense: digital computers can compute any function computable by algorithm, as long as they have enough time and memory. (Strictly speaking, a universal machine has an unbounded memory, whereas digital computer memories can be extended but not indefinitely, so they are not unbounded.)
The above result should not be confused with the common claim that computers can compute anything. This claim is false: another important result of computability theory is that most functions are not computable by Turing machines (and hence, by digital computers). Turing machines compute functions defined over denumerable domains, such as strings of letters from a finite alphabet. There are uncountably many such functions. But there are only countably many Turing machines; you can enumerate Turing machines by enumerating all lists of Turing machine instructions. Since an uncountable infinity is much larger than a countable one, it follows that Turing machines (and hence digital computers) can compute only a tiny portion of all functions (over denumerable domains, such as natural numbers or strings of letters).
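The enumeration step in this argument can be made concrete. In the illustrative sketch below (the function names are mine), a machine table with n states over k symbols has finitely many entries, each admitting finitely many instructions, so the tables of each size can be listed exhaustively; ordering all tables by size then yields a single countable sequence containing every Turing machine.

```python
from itertools import product

# Countability made concrete (an illustrative sketch): for fixed numbers
# of states and symbols there are only finitely many machine tables, so
# all Turing machines can be listed in one sequence by increasing size.

def count_tables(n_states, n_symbols):
    # Each of the n_states * n_symbols table entries independently picks
    # a symbol to write, a direction (L or R), and a next state (counting
    # one extra halting state): n_symbols * 2 * (n_states + 1) options.
    options_per_entry = n_symbols * 2 * (n_states + 1)
    return options_per_entry ** (n_states * n_symbols)

def enumerate_tables(n_states, n_symbols):
    # Explicitly generate every machine table of the given size.
    entries = list(product(range(n_states), range(n_symbols)))
    choices = list(product(range(n_symbols), "LR", range(n_states + 1)))
    for assignment in product(choices, repeat=len(entries)):
        yield dict(zip(entries, assignment))
```

Since every finite instruction list appears at some finite position in this ordering, there are only countably many Turing machines — whereas, by the diagonal argument, the functions over strings are uncountably many.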
Turing machines and most modern computers are known as (classical) digital computers, that is, computers that manipulate strings of discrete, unambiguously distinguishable states. Digital computers are sometimes contrasted with analog computers, that is, machines that manipulate continuous variables. Continuous variables are variables that can change their value continuously over time while taking any value within a certain interval. Analog computers are used primarily to solve certain systems of differential equations (Pour-El 1974, Rubel 1993).
Classical digital computers may also be contrasted with quantum computers. Quantum computers manipulate quantum states called qubits. Unlike the computational states of digital computers, qubits are not unambiguously distinguishable from one another. This entry will focus primarily on classical digital computation. For more on quantum computation, see the entry on quantum computing.
The same objects studied in the mathematical theory of computation — Turing machines, algorithms, and so on — are typically said to be implemented by concrete physical systems. This poses a problem: how can a concrete, physical system perform a computation when computation is defined by an abstract mathematical formalism? This may be called the problem of computational implementation.
The problem of computational implementation may be formulated in a couple of different ways. Some people interpret the formalisms of computability theory as defining abstract objects. According to this interpretation, Turing machines, algorithms, and the like are abstract objects. But how can a concrete physical system implement an abstract object? Other people treat the formalisms of computability theory simply as abstract computational descriptions. But how can a concrete physical system satisfy an abstract computational description? Regardless of how the problem of computational implementation is formulated, solving it requires an account of concrete computation — an account of what it takes for a physical system to perform a given computation.
A closely related problem is that of distinguishing between physical systems such as digital computers, which appear to compute, and physical systems such as rocks, which appear not to compute. Unlike computers, ordinary rocks are not sold in computer stores and are usually not considered computers. Why? What do computers have that rocks lack, such that computers compute and rocks don't? (If indeed they don't?) In other words, what does it take for a computation to be implemented in a concrete physical system? Different answers to these questions give rise to different accounts of concrete computation.
Questions about the nature of concrete computation should not be confused with questions about computational modeling. The dynamical evolution of many physical systems may be described by computational models. A computational model is a description of a system's dynamics that is written into, and run by, a computer. The behavior of rocks — as well as rivers, ecosystems, and planetary systems, among many others — may well be modeled computationally. From this, it doesn't follow that the modeled systems are computing devices — that they themselves perform computations. Prima facie, only relatively few and quite special systems compute. Explaining what makes them special — or explaining away our feeling that they are special — is the job of an account of concrete computation.
One of the earliest and most influential accounts of computation is due to Hilary Putnam. To a first approximation, the account says that anything that is accurately described by a computational description C is a computing system implementing C.
More precisely, Putnam sketches his earliest account in terms of Turing machines only, appealing to the “machine tables” that are a standard way of defining specific Turing machines. A machine table consists of one column for each of the (finitely many) internal states of the Turing machine and one row for each of the machine's symbol types. Each entry in the machine table specifies what the machine does given the pertinent symbol and internal state. Here is how Putnam explains what it takes for a physical system to be a Turing machine:
A ‘machine table’ describes a machine if the machine has internal states corresponding to the columns of the table, and if it ‘obeys’ the instruction in the table in the following sense: when it is scanning a square on which a symbol s1 appears and it is in, say, state B, that it carries out the ‘instruction’ in the appropriate row and column of the table (in this case, column B and row s1). Any machine that is described by a machine table of the sort just exemplified is a Turing machine. (Putnam 1960/1975a, 365; cf. also Putnam 1967/1975a, 433–4)
This account relies on several unexplained notions, such as square (of tape), symbol, scanning, and carrying out an instruction. Furthermore, the account is specified in terms of Turing machine tables, but there are other kinds of computational description. A general account of concrete computation should cover other computational descriptions besides Turing machine tables. Perhaps for these reasons, Putnam — soon followed by many others — abandoned reference to squares, symbols, etc.; he substituted them with an appeal to a physical description of the system. The result of that substitution is what Godfrey-Smith (2009) dubs the “simple mapping account” of computation.
According to the simple mapping account, a physical system S performs computation C just in case (i) there is a mapping from the states ascribed to S by a physical description to the states defined by computational description C, such that (ii) the state transitions between the physical states mirror the state transitions between the computational states. Clause (ii) requires that for any computational state transition of the form s1 → s2 (specified by the computational description C), if the system is in the physical state that maps onto s1, it then goes into the physical state that maps onto s2.
One difficulty with the formulation above is that ordinary physical descriptions, such as systems of differential equations, generally ascribe uncountably many states to physical systems, whereas ordinary computational descriptions, such as Turing machine tables, ascribe at most countably many states. Thus, there are not enough computational states for the physical states to map onto. One solution to this problem is to reverse the direction of the mapping, requiring a mapping of the computational states onto (a subset of) the physical states. Another, more common solution to this problem — often left implicit — is to select either a subset of the physical states or equivalence classes of the physical states and map those onto the computational states. When this is done, clause (i) is replaced by the following: (i′) there is a mapping from a subset of (or equivalence classes of) the states ascribed to S by a physical description to the states defined by computational description C.
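Clauses (i′) and (ii) lend themselves to a direct procedural rendering. The sketch below is illustrative (the function and example names are mine, not from the entry): physical states are grouped by a mapping onto computational states, and clause (ii) is checked as a bare material conditional over an observed trajectory.

```python
# A sketch of the simple mapping account's clauses (i') and (ii): map
# (classes of) physical states onto computational states and check that
# every observed physical transition mirrors a computational one. All
# names here are illustrative.

def implements(physical_trajectory, mapping, comp_transitions):
    """physical_trajectory: sequence of physical states over time;
    mapping: dict from physical state (or its equivalence class) to a
    computational state; comp_transitions: dict from state s1 to s2."""
    for p1, p2 in zip(physical_trajectory, physical_trajectory[1:]):
        c1, c2 = mapping[p1], mapping[p2]
        # Clause (ii) as a material conditional: whenever the system is
        # in the physical state mapped onto c1, it goes into the physical
        # state mapped onto c1's computational successor.
        if comp_transitions.get(c1) != c2:
            return False
    return True

# Hypothetical usage: a system oscillating between two voltage classes
# counts as implementing a one-bit toggle.
trajectory = ["low", "high", "low", "high"]
toggle = {"0": "1", "1": "0"}
print(implements(trajectory, {"low": "0", "high": "1"}, toggle))  # True
```

Note that the check is satisfied by any trajectory whose transitions happen to line up with the computational ones — which is precisely why, as discussed next, the simple mapping account turns out to be so liberal.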
The simple mapping account turns out to be very liberal: it attributes many computations to many systems. In the absence of restrictions on which mappings are acceptable, such mappings are relatively easy to come by. As a consequence, some have argued that every physical system implements every computation (Putnam 1988, Searle 1992). This thesis, which trivializes the claim that something is a computing system, will be discussed in Section 3.1. Meanwhile, the desire to avoid this trivialization result is one motivation behind other accounts of concrete computation.
One way to construct accounts of computation that are more restrictive than the simple mapping account is to impose a constraint on acceptable mappings. Specifically, clause (ii) may be modified so as to require that the conditional that specifies the relevant physical state transitions be logically stronger than a material conditional.
As the simple mapping account has it, clause (ii) requires that for any computational state transition of the form s1 → s2 (specified by a computational description), if the system is in the physical state that maps onto s1, it then goes into the physical state that maps onto s2. The second part of (ii) is a material conditional. It may be strengthened by turning it into a logically stronger conditional — specifically, a conditional expressing a relation that supports counterfactuals.
In a pure counterfactual account, clause (ii) is strengthened simply by requiring that the physical state transitions support certain counterfactuals (Maudlin 1989, Copeland 1996). In other words, the pure counterfactual account requires the mapping between computational and physical descriptions to be such that the counterfactual relations between the physical states are isomorphic to the counterfactual relations between the computational states.
Different authors formulate the relevant counterfactuals in slightly different ways: (a) if the system had been in a physical state that maps onto an arbitrary computational state (specified by the relevant computational description), it would then have gone into a physical state that maps onto the relevant subsequent computational state (as specified by the computational description) (Maudlin 1989, 415), (b) if the system had been in a physical state that maps onto s1, it would have gone into a physical state that maps onto s2 (Copeland 1996, 341), (c) if the system were in a physical state that maps onto s1, it would go into a physical state that maps onto s2 (Chalmers 1996, 312). Regardless of the exact formulation, none of these counterfactuals are satisfied by the material conditional of clause (ii) as it appears in the simple mapping account of computation. Thus, counterfactual accounts are stronger than the simple mapping account.
An account of concrete computation in which the physical state transitions support counterfactuals may also be generated by appealing to causal or dispositional relations, assuming (as most people do) that causal or dispositional relations support counterfactuals. Appealing to causation or dispositions may also have advantages over pure counterfactual accounts in blocking unwanted computational implementations (Klein 2008, 145, makes the case for dispositional versus counterfactual accounts).
In a causal account, clause (ii) is strengthened by requiring a causal relation between the physical states: for any computational state transition of the form s1 → s2 (specified by a computational description), if the system is in the physical state that maps onto s1, its physical state causes it to go into the physical state that maps onto s2 (Chrisley 1995, Chalmers 1995, 1996, Scheutz 1999, 2001).
To this causal constraint on acceptable mappings, David Chalmers (1995, 1996) adds a further restriction (in order to avoid pancomputationalism, which is discussed in Section 3): a genuine physical implementation of a computational system must divide into separate physical components, each of which maps onto the components specified by the computational formalism. As Godfrey-Smith (2009, 293) notes, this combination of a causal and a localizational constraint goes in the direction of mechanistic explanation (Machamer, Darden, and Craver 2000). An account of computation that is explicitly based on mechanistic explanation will be discussed in Section 2.5. For now, the causal account simpliciter requires only that the mappings between computational and physical descriptions be such that the causal relations between the physical states are isomorphic to the relations between state transitions specified by the computational description. Thus, according to the causal account, concrete computation is the causal structure of a physical process.
In a dispositional account, clause (ii) is strengthened by requiring a dispositional relation between the physical states: for any computational state transition of the form s1 → s2 (specified by a computational description), if the system is in the physical state that maps onto s1, the system manifests a disposition whose manifestation is the transition from the physical state that maps onto s1 to the physical state that maps onto s2 (Klein 2008). In other words, the dispositional account requires the mapping between computational and physical descriptions to be such that the dispositional relations between the physical states are isomorphic to the relations between state transitions specified by the computational description. Thus, according to the dispositional account, concrete computation is the dispositional structure of a physical process.
The difference between the simple mapping account on the one hand and counterfactual, causal, and dispositional accounts on the other may be seen by examining a simple example.
Consider a rock under the sun, early in the morning. During any time interval, the rock's temperature rises. The rock goes from temperature T to temperature T+1, to T+2, to T+3. Now consider a NOT gate that feeds its output back to itself. At first, suppose the NOT gate receives ‘0’ as an input; it then returns a ‘1’. After the ‘1’ is fed back to the NOT gate, the gate returns a ‘0’ again, and so on. The NOT gate goes back and forth between outputting a ‘0’ and outputting a ‘1’. Now map physical states T and T+2 onto ‘0’; then map T+1 and T+3 onto ‘1’.
According to the simple mapping account, the rock implements a NOT gate undergoing the computation represented by ‘0101’.
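The mapping just described can be written out explicitly. In this illustrative sketch, the rock's temperature sequence and the mapping from the text are checked against the NOT gate's state transitions, with clause (ii) read as a material conditional:

```python
# The rock/NOT-gate mapping made explicit (an illustrative sketch).
# The trajectory and mapping are the ones described in the text.

rock_trajectory = ["T", "T+1", "T+2", "T+3"]
mapping = {"T": "0", "T+1": "1", "T+2": "0", "T+3": "1"}
not_gate = {"0": "1", "1": "0"}        # a NOT gate fed back to itself

mirrors = all(
    not_gate[mapping[p1]] == mapping[p2]
    for p1, p2 in zip(rock_trajectory, rock_trajectory[1:])
)
# mirrors is True: every transition in the rock's trajectory maps onto a
# NOT-gate transition, so the simple mapping account counts the rock as
# implementing the computation represented by '0101'.
```

On the simple mapping account, satisfying this material conditional suffices; the stronger accounts deny that it does.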
By contrast, according to the counterfactual account, the rock's putative computational implementation is spurious, because the physical state transitions do not support counterfactuals. If the rock were put in state T, it might or might not transition into T+1, depending on whether it is morning or evening and on other extraneous factors. Since the rock's physical state transitions that map onto the NOT gate's computational state transitions do not support counterfactuals, the rock does not implement the NOT gate according to the counterfactual account.
According to the causal and dispositional accounts too, this putative computational implementation is spurious, because the physical state transitions are not due to causal or dispositional properties of the rock and its states. T does not cause T+1, nor does the rock have a disposition to go into T+1 when it is in T. Rather, the rock changes its state due to the action of the sun. Since the rock's physical state transitions that map onto the NOT gate's computational state transitions are not grounded in either the causal or dispositional properties of the rock and its states, the rock does not implement the NOT gate according to the causal and dispositional accounts.
It is important to note that under the present family of accounts, there are mappings between any physical system and at least some computational descriptions. Thus, according to the present accounts, everything performs at least some computations (cf. Section 3.2). This still strikes some as overly inclusive. In computer science and cognitive science, there seems to be a distinction between systems that compute and systems that do not. To account for this distinction, one option is to retain the current account of computational implementation while restricting the class of descriptions that count as computational descriptions. Another option is to move beyond this account of implementation.
In our everyday life, we usually employ computations to process meaningful symbols, in order to extract information from them. The semantic account of computation turns this practice into a metaphysical doctrine: computation is the processing of representations — or at least, the processing of appropriate representations in appropriate ways. Opinions as to which representational manipulations constitute computations vary a great deal (Fodor 1975, Cummins 1983, Pylyshyn 1984, Churchland and Sejnowski 1992, Shagrir 2006). What all versions of the semantic account have in common is that they take seriously the reference to symbols in Putnam's original account of computation: there is “no computation without representation” (Fodor 1981, 180).
The semantic account may be seen as imposing a further restriction on acceptable mappings. In addition to the causal restriction imposed by the causal account (mutatis mutandis for the counterfactual and dispositional accounts), the semantic account imposes a semantic restriction. Only physical states that qualify as representations may be mapped onto computational descriptions, thereby qualifying as computational states. If a state is not representational, it is not computational either.
The semantic account is probably the most popular in the philosophy of mind, because it appears to fit its specific needs better than other accounts. Since minds and digital computers are generally assumed to manipulate (the right kind of) representations, they turn out to compute. Since most other systems are generally assumed not to manipulate (the relevant kind of) representations, they do not compute. Thus, the semantic account appears to accommodate some common intuitions about what does and does not count as a computing system. It keeps minds and computers in while leaving most everything else out, thereby vindicating the computational theory of cognition as a strong and nontrivial theory.
The semantic account raises three important questions: how representations are to be individuated, what counts as a representation of the relevant kind, and what gives representations their semantic content.
On the individuation of computational states, the main debate divides internalists from externalists. According to externalists, computational vehicles are symbols individuated by their wide cognitive contents — paradigmatically, the things that the symbols stand for (Burge 1986, Shapiro 1997, Shagrir 2001). By contrast, most internalists maintain that computational vehicles are symbols individuated by narrow cognitive contents (Segal 1991). Narrow contents are, roughly speaking, semantic contents defined in terms of intrinsic properties of the system. Cognitive contents, in turn, are contents ascribed to a system by a cognitive psychological theory. For instance, the cognitive contents of the visual system are visual contents, whereas the cognitive contents of the auditory system are auditory contents.
To illustrate the dispute, consider two physically identical cognitive systems A and B. Among the symbols processed by A is symbol S. A produces instances of S whenever A is in front of bodies of water, when A is thinking of water, and when A is forming plans to interact with water. In short, symbol S appears to stand for water. Every time A processes S, system B processes symbol S′, which is physically identical to S. But system B lives in an environment different from A's environment. Whenever A is surrounded by water, B is surrounded by twater. Twater is a substance superficially indistinguishable from water but in fact physically different from it. Thus, symbol S′ appears to stand for twater (cf. Putnam 1975b). So, we are assuming that A and B live in relevantly different environments, such that S appears to stand for water while S′ appears to stand for twater. We are also assuming that A is processing S in the same way that B is processing S′. There is no intrinsic physical difference between A and B.
According to externalists, when A is processing S and B is processing S′ they are in computational states of different types. According to internalists, A and B are in computational states of the same type. In other words, externalists maintain that computational states are individuated in part by their reference, which is determined at least in part independently of the intrinsic physical properties of cognitive systems. By contrast, internalists maintain that computational states are individuated in a way that supervenes solely on the intrinsic physical properties of cognitive systems.
So far, externalists and internalists agree on one thing: computational states are individuated by cognitive contents. This assumption can be resisted without abandoning the semantic account of computation. According to Egan (1999), computational vehicles are not individuated by cognitive contents of any kind, whether wide or narrow. Rather, they are individuated by their mathematical contents — that is, mathematical functions and objects ascribed as semantic contents to the computational vehicles by a computational theory of the system. Since mathematical contents are the same across physical duplicates, Egan maintains that her mathematical contents are a kind of narrow content — she is a kind of internalist.
Let us now turn to what counts as a representation. This debate is less clearly delineated. According to some authors, only structures that have a language-like combinatorial syntax, which supports a compositional semantics, count as computational vehicles, and only manipulations that respect the semantic properties of such structures count as computations (Fodor 1975, Pylyshyn 1984). This suggestion flies in the face of computability theory, which imposes no such requirement on what counts as a computational vehicle. Other authors are more inclusive on what representational manipulations count as computations, but they have not been especially successful in drawing the line between computational and non-computational processes. Few people would include all manipulations of representations — including, say, painting a picture and recording a speech — as computations, but there is no consensus on where to draw the boundary between representational manipulations that count as computations and representational manipulations that do not.
A third question is what gives representations their semantic content. There are three families of views. Instrumentalists believe that ascribing semantic content to things is just heuristically useful for prediction and explanation; semantic properties are not real properties of computational states (e.g., Dennett 1987, Egan forthcoming). Realists who are not naturalists believe semantic properties are real properties of computational states, but they are irreducible to non-semantic properties. Finally, realists who are also naturalists believe semantic properties are both real and reducible to non-semantic properties, though they disagree on exactly how to reduce them (e.g., Fodor 2008, Harman 1987).
The semantic account of computation is closely related to the common view that computation is information processing. This idea is less clear than it may seem, because there are several notions of information. The connection between information processing and computation is different depending on which notion of information is at stake. What follows is a brief disambiguation of the view that computation is information processing based on four important notions of information (cf. Piccinini and Scarantino forthcoming).
Information in the sense of thermodynamics is closely related to thermodynamic entropy. Entropy is a property of every physical system. Thermodynamic entropy is, roughly, a measure of an observer's uncertainty about the microscopic state of a system after she considers the observable macroscopic properties of the system. The study of the thermodynamics of computation is a lively field with many implications in the foundations of physics (Leff and Rex 2003). In this thermodynamic sense of “information,” any difference between two distinguishable states of a system may be said to carry information. Computation may well be said to be information processing in this sense, but this has little to do with semantics properly so called. However, the connections between thermodynamics, computation, and information theory are one possible source of inspiration for the view that every physical system is a computing system (see Section 3.4).
Information in the sense of communication theory is a measure of the average likelihood that a given message is transmitted between a source and a receiver (Shannon and Weaver 1949). This, too, has little to do with semantics.
Information in one semantic sense is approximately the same as “natural meaning” (Grice 1957). A signal carries information in this sense just in case it reliably correlates with a source (Dretske 1981). The view that computation is information processing in this sense is prima facie implausible, because many computations — such as arithmetical calculations carried out on digital computers — do not seem to carry any natural meaning. Nevertheless, this notion of semantic information is relevant here because it has been used by some theorists to ground an account of representation (Dretske 1981, Fodor 2008).
Information in another semantic sense is just ordinary semantic content or “non-natural meaning” (Grice 1957). This is the kind of semantic content that most philosophers discuss. The view that computation is information processing in this sense is similar to a generic semantic account of computation.
Although the semantic account of computation appears to fit the needs of philosophers of mind, it is less suited to making sense of other sciences. Most pertinently, representation does not seem to be presupposed by the notion of computation employed in at least some areas of cognitive science as well as in computability theory and computer science — the very sciences that gave rise to the notion of computation at the origin of the computational theory of cognition (Piccinini 2008a, Fresco 2010). If this is correct, the semantic account may not even be adequate to the needs of philosophers of mind — at least those philosophers of mind who wish to make sense of the analogy between minds and the systems designed and studied by computer scientists and computability theorists. Another criticism of the semantic account is that specifying the kind of representation and representational manipulation that is relevant to computation may require a non-semantic way of individuating computations (Piccinini 2004). These concerns motivate efforts to account for computation in non-semantic terms.
As we saw, the semantic account needs to specify which representations are relevant to computation. One view is that the relevant representations are language-like, that is, they have the kind of syntactic structure exhibited by sentences in a language. Computation, then, is the manipulation of language-like representations in a way that is sensitive to their syntactic structure and preserves their semantic properties (Fodor 1975).
As discussed in the previous section, however, using the notion of representation in an account of computation involves some difficulties. If computation could be accounted for without appealing to representation, those difficulties would be avoided. One way to do so is to maintain that computation simply is the manipulation of language-like structures in accordance with their syntactic properties, leaving semantics by the wayside. The structures being manipulated are assumed to be language-like only in that they have syntactic properties — they need not have any semantics. In this syntactic account of computation, the notion of representation is not used at all.
The syntactic account may be seen as adding a restriction on acceptable mappings that replaces the semantic restriction proposed by the semantic account. Instead of a semantic restriction, the syntactic account imposes a syntactic restriction: only physical states that qualify as syntactic may be mapped onto computational descriptions, thereby qualifying as computational states. If a state lacks syntactic structure, it is not computational.
What remains to be seen is what counts as a syntactic state. An important account of syntax in the physical world is due to Stephen Stich (1983, 150–157). Although Stich does not use the term “computation,” his account of syntax is aimed at grounding a syntactic account of mental states and processes. Stich's syntactic theory of mind is, in turn, his interpretation of the computational theories proposed by cognitive scientists — in competition with Fodor's semantic interpretation. Since Stich's account of syntax is ultimately aimed at grounding computational theories of cognition, Stich's account of syntax also provides an (implicit) syntactic account of computation.
According to Stich, roughly speaking, a physical system contains syntactically structured objects when two conditions are satisfied. First, there is a mapping between the behaviorally relevant physical states of the system and a class of syntactic types, which are specified by a grammar that defines how complex syntactic types can be formed out of (finitely many) primitive syntactic types. Second, the behavior of the system is explained by a theory whose generalizations are formulated in terms of formal relations between the syntactic types that map onto the physical states of the system.
The syntactic account of computation is not very popular. A common objection is that it seems difficult to give an account of primitive syntactic types that does not presuppose a prior semantic individuation of the types (Crane 1990, Jacquette 1991, Bontly 1998). In fact, it is common to make sense of syntax by construing it as a way to combine symbols, that is, semantically interpreted constituents. If syntax is construed in this way, it presupposes semantics. And if so, the syntactic account of computation collapses into the semantic account.
Another objection is that language-like syntactic structure is not necessary for computation as it is understood in computer science and computability theory. Although computing systems surely can manipulate linguistic structures, they don't have to. They can also manipulate simple sequences of letters, without losing their identity as computers. (Computability theorists call any set of words from a finite alphabet a language, but that broad notion of language should not be confused with the narrower notion — inspired by grammars in logic and linguistics — that Stich employs in his syntactic account of computation.)
The mechanistic account (Piccinini 2007b, Piccinini and Scarantino forthcoming, Section 3) avoids appealing to both syntax and semantics. Instead, it accounts for concrete computation in terms of the mechanistic properties of a system. According to the mechanistic account, concrete computing systems are functional mechanisms of a special kind — mechanisms that perform concrete computations.
A functional mechanism is a system of organized components, each of which has functions to perform (cf. Craver 2007, Wimsatt 2002). When appropriate components and their functions are appropriately organized and functioning properly, their combined activities constitute the capacities of the mechanism. Conversely, when we look for an explanation of the capacities of a mechanism, we decompose the mechanism into its components and look for their functions and organization. The result is a mechanistic explanation of the mechanism's capacities.
This notion of mechanism is familiar to biologists and engineers. For example, biologists explain physiological capacities (digestion, respiration, etc.) in terms of the functions performed by systems of organized components (the digestive system, the respiratory system, etc.).
According to the mechanistic account, a computation in the generic sense is the processing of vehicles according to rules that are sensitive to certain vehicle properties, and specifically, to differences between different portions of the vehicles. The processing is performed by a functional mechanism, that is, a mechanism whose components are functionally organized to perform the computation. Thus, if the mechanism malfunctions, a miscomputation occurs.
Digital computation, analog computation, etc. turn out to be species of generic computation. They are differentiated by more specific properties of the vehicles being processed. If a computing system processes strings of discrete states, then it performs a digital computation. If a computing system processes continuous variables, then it performs an analog computation. If a computing system processes qubits, then it performs a quantum computation.
When we define concrete computations and the vehicles that they manipulate, we need not consider all of their specific physical properties. We may consider only the properties that are relevant to the computation, according to the rules that define the computation. A physical system can be described more or less abstractly. According to the mechanistic account, an abstract description of a physical system is not a description of an abstract object but rather a description of a concrete system that omits certain details. Descriptions of concrete computations and their vehicles are sufficiently abstract as to be defined independently of the physical media that implement them in particular cases. Because of this, the mechanistic account calls concrete computations and their vehicles ‘medium-independent’.
In other words, a vehicle is medium-independent just in case the rules (i.e., the input-output maps) that define a computation are sensitive only to differences between portions of the vehicles along specific dimensions of variation — they are insensitive to any more concrete physical properties of the vehicles. Put yet another way, the rules are functions of state variables associated with a set of functionally relevant degrees of freedom, which can be implemented differently in different physical media. Thus, a given computation can be implemented in multiple physical media (e.g., mechanical, electro-mechanical, electronic, magnetic, etc.), provided that the media possess a sufficient number of dimensions of variation (or degrees of freedom) that can be appropriately accessed and manipulated and that the components of the mechanism are functionally organized in the appropriate way.
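To make the idea of medium-independence more concrete, here is a minimal illustrative sketch (not drawn from the source): the same abstract rule realized in two hypothetical physical media. The voltage and pressure thresholds are invented for illustration; the point is only that the rule is sensitive to the abstract dimension of variation (high vs. low), not to the concrete magnitudes that realize it.

```python
def and_rule(bit_a, bit_b):
    """Abstract computational rule, defined over binary digits only."""
    return bit_a & bit_b

# Medium 1: electronic. Voltages above a (hypothetical) 2.5 V threshold count as '1'.
def voltage_to_bit(volts):
    return 1 if volts > 2.5 else 0

# Medium 2: hydraulic. Pressures above a (hypothetical) 100 kPa threshold count as '1'.
def pressure_to_bit(kilopascals):
    return 1 if kilopascals > 100.0 else 0

# Both implementations realize the same computation, because the rule
# depends only on which of two abstract states each vehicle is in,
# not on whether those states are realized by voltage or by pressure.
electronic_result = and_rule(voltage_to_bit(5.0), voltage_to_bit(0.2))       # 0
hydraulic_result = and_rule(pressure_to_bit(150.0), pressure_to_bit(180.0))  # 1
```

Any medium with two reliably distinguishable states that can be accessed and manipulated in the right way could play the same role, which is what the mechanistic account means by calling computational vehicles medium-independent.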
Notice that the mechanistic account avoids pancomputationalism. First, physical systems that are not functional mechanisms are ruled out. Functional mechanisms are complex systems of components that are organized to perform functions. Any system whose components are not organized to perform functions is not a computing system because it is not a functional mechanism. Second, mechanisms that lack the function of manipulating medium-independent vehicles are ruled out. Finally, medium-independent vehicle manipulators whose manipulations fail to accord with appropriate rules are ruled out. The second and third constraints appeal to special functional properties — manipulating medium-independent vehicles, doing so in accordance with rules defined over the vehicles — that are possessed only by relatively few physical systems. According to the mechanistic account, those few systems are the genuine computing systems.
Another feature of the mechanistic account is that it accounts for the possibility of miscomputation — a possibility difficult to make sense of under other accounts. To illustrate the point, consider an ordinary computer programmed to compute function f on input i. Suppose that the computer malfunctions and produces an output different from f(i). According to the causal (semantic) account, the computer just underwent a causal process (a manipulation of representations), which may be given a computational description and hence counts as computing some function g(i), where g≠f. By contrast, according to the mechanistic account, the computer simply failed to compute, or at least it failed to complete its computation correctly. Given the importance of avoiding miscomputations in the design and use of computers, the ability of the mechanistic account to make sense of miscomputation may be an advantage over rival accounts.
A final feature of the mechanistic account is that it distinguishes and characterizes precisely many different kinds of computing systems based on the specific vehicles they manipulate and their specific mechanistic properties. The mechanistic account has been used to explicate digital computation (Piccinini 2007b), analog computation (Piccinini 2008b, Section 3.5), computation by neural networks (Piccinini 2008c), and other important distinctions such as hardwired vs. programmable and serial vs. parallel computation (Piccinini 2008b).
Which physical systems perform computations? According to pancomputationalism, they all do. Even rocks, hurricanes, and planetary systems — contrary to appearances — are computing systems. Pancomputationalism is popular among some philosophers and physicists.
Varieties of pancomputationalism vary with respect to how many computations — all, many, a few, or just one — they attribute to each system.
The strongest version of pancomputationalism is that every physical system performs every computation — or at least, every sufficiently complex system implements a large number of non-equivalent computations (Putnam 1988, Searle 1992). This may be called unlimited pancomputationalism.
The weakest version of pancomputationalism is that every physical system performs some (as opposed to every) computation. A slightly stronger version maintains that everything performs a few computations, some of which encode the others in some relatively unproblematic way (Scheutz 2001). These versions may be called limited pancomputationalism.
Varieties of pancomputationalism also vary with respect to why everything performs computations — the source of pancomputationalism.
One alleged source of pancomputationalism is that which computation a system performs is a matter of relatively free interpretation. If whether a system performs a given computation depends solely or primarily on how the system is perceived, as opposed to objective fact, then it seems that everything computes because everything may be seen as computing (Searle 1992). This may be called interpretivist pancomputationalism.
Another alleged source of pancomputationalism is that everything has causal structure. According to the causal account, computation is the causal structure of physical processes (Chrisley 1995, Chalmers 1995, 1996, Scheutz 1999, 2001). Assuming that everything has causal structure, it follows that everything performs the computation constituted by its causal structure. This may be called causal pancomputationalism.
Not everyone will agree that everything has causal structure. Some processes may be non-causal, or causation may be just a façon de parler that does not capture anything fundamental about the world (e.g., Norton 2003). But those who have qualms about causation can recover a view similar to causal pancomputationalism by reformulating the causal account of computation and consequent version of pancomputationalism in terms they like — e.g., in terms of the dynamical properties of physical systems.
A third alleged source of pancomputationalism is the claim that every physical state carries information, combined with an information-based semantics and a liberal version of the semantic view of computation. According to the semantic view of computation, computation is the manipulation of representations. According to information-based semantics, a representation is anything that carries information. Assuming that every physical state carries information, it follows that every physical system performs the computations constituted by the manipulation of its information-carrying states (cf. Shagrir 2006). Both information-based semantics and the assumption that every physical state carries information (in the relevant sense) remain controversial.
Yet another alleged source of pancomputationalism is that computation is the nature of the physical universe. According to some physicists, the physical world is computational at its most fundamental level. This view, which is a special version of limited pancomputationalism, will be discussed in Section 3.4.
Arguments for unlimited pancomputationalism go back to Hinckfuss's pail, a putative counterexample to computational functionalism — the view that the mind is the software of the brain. Hinckfuss's pail is named after its proponent, Ian Hinckfuss, but was first discussed in print by William Lycan. A pail of water contains a huge number of microscopic processes:
Now is all this activity not complex enough that, simply by chance, it might realize a human program for a brief period (given suitable correlations between certain micro-events and the requisite input-, output-, and state-symbols of the program)? (Lycan 1981, 39)
Hinckfuss's implied answer to this question is that yes, a pail of water might implement a human program, and by the same reasoning any arbitrary computation, at least for a short time.
Other authors developed more detailed arguments along the lines of Hinckfuss's pail. John Searle (1992) explicitly argues that whether a physical system implements a computation depends on how an observer interprets the system; therefore, for any sufficiently complex object and for any computation, the object can be described as implementing the computation. The first rigorous argument for unlimited pancomputationalism is due to Hilary Putnam (1988), who argues that every ordinary open system implements every abstract finite automaton (without inputs and outputs).
Putnam assumes that electromagnetic and gravitational fields are continuous and that physical systems are in different maximal states at different times. He considers an arbitrarily chosen finite automaton whose table calls for the sequence of states ABABABA. He then considers an arbitrary physical system S over the arbitrarily chosen time interval from 12:00 to 12:07 and argues that S implements the sequence ABABABA. Since both the automaton and the physical system are arbitrary, the argument generalizes to any automaton and any physical state. Here is the core of Putnam's argument:
Let the beginnings of the intervals during which S is to be in one of its stages A or B be t1, t2, … tn (in the example given, n = 7, and the times in question are t1 = 12:00, t2 = 12:01, t3 = 12:02, t4 = 12:03, t5 = 12:04, t6 = 12:05, t7 = 12:06). The end of the real-time interval during which we wish S to “obey” this table we call tn+1 (= t8 = 12:07, in our example). For each of the intervals ti to ti+1, i = 1, 2, …, n, define a (nonmaximal) interval state si which is the “region” in phase space consisting of all the maximal states … with ti ≤ t < ti+1. (I.e., S is in si just in case S is in one of the maximal states in this “region.”) Note that the system S is in s1 from t1 to t2, in s2 from t2 to t3, …, in sn from tn to tn+1. (Left endpoint included in all cases but not the right — this is a convention to ensure the “machine” is in exactly one of the si at a given time.) …
Define A = s1 ∨ s3 ∨ s5 ∨ s7; B = s2 ∨ s4 ∨ s6.
Then, as is easily checked, S is in state A from t1 to t2, from t3 to t4, and from t5 to t6, and from t7 to t8, and in state B at all other times between t1 and t8. So S “has” the table we specified, with the states A,B we just defined as the “realizations” of the states A,B described by the table. (Putnam 1988, 122–3, emphasis original)
In summary, Putnam picks an arbitrary physical system with continuous dynamics, slices up its dynamics into discrete time intervals, and then aggregates the slices so that they correspond to an arbitrary sequence of computational states. He concludes that every physical system implements every finite automaton.
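The slice-and-aggregate construction is mechanical enough to be sketched in a few lines of code. The following illustrative sketch (not from the source) takes an arbitrary target sequence of computational states and builds Putnam-style disjunctive states ex post facto: each computational state is just the disjunction (here, the set) of whichever interval-states si the construction assigns to it.

```python
def putnam_mapping(target_sequence):
    """Given an arbitrary target sequence of computational states
    (e.g., 'ABABABA', one state per time interval), return the
    disjunctive physical states: each computational state is mapped
    to the set of interval-states s_i that realize it."""
    mapping = {}
    for i, comp_state in enumerate(target_sequence, start=1):
        mapping.setdefault(comp_state, set()).add(f"s{i}")
    return mapping

# The automaton's table calls for the sequence ABABABA over seven intervals.
states = putnam_mapping("ABABABA")
print(sorted(states["A"]))  # ['s1', 's3', 's5', 's7']
print(sorted(states["B"]))  # ['s2', 's4', 's6']
```

Nothing about the physical system constrains the construction: any sequence of distinct interval-states can be aggregated to match any target sequence, which is exactly why critics charge that such ex post facto mappings are illegitimate.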
Putnam points out that his argument does not apply directly to computational theories of cognition, because cognitive systems receive specific physical inputs through their sensory organs and yield specific physical outputs through their motor organs. To determine which computations are implemented by a system with physical inputs and outputs, the inputs and outputs must be taken into account:
Imagine … that an object S which takes strings of “1”s as inputs and prints such strings as outputs behaves from 12:00 to 12:07 exactly as if it had a certain [computational] description D. That is, S receives a certain string, say “111111,” at 12:00 and prints a certain string, say “11,” at 12:07, and there “exists” (mathematically speaking) a machine with description D which does this (by being in the appropriate state at each of the specified intervals, say 12:00 to 12:01, 12:01 to 12:02, …, and printing or erasing what it is supposed to print or erase when it is in a given state and scanning a given symbol). In this case, S too can be interpreted as being in these same logical states A,B,C, … at the very same times and following the very same transition rules; that is to say, we can find physical states A,B,C … which S possesses at the appropriate times and which stand in the appropriate causal relations to one another and to the inputs and the outputs. The method of proof is exactly the same… Thus we obtain that the assumption that something is a “realization” of a given automaton description … is equivalent to the statement that it behaves as if it had that description (Putnam 1988, 124, emphasis original).
In summary, Putnam picks an arbitrary physical system with physically specified inputs and outputs and then matches it to an arbitrary finite automaton whose abstractly specified inputs and outputs map onto the physically specified inputs and outputs. He then slices up the physical system's internal dynamics as before, and then aggregates the slices so that they correspond to the sequence of computational states of the finite automaton. It follows that given any physical system and any finite automaton with isomorphic inputs and outputs, the physical system implements that automaton.
Although this result is weaker than the result for systems without inputs and outputs, it is still striking because for any abstract input-output pair <i, o>, there are infinitely many automata that yield output o given input i. Given Putnam's conclusion, any physical system with inputs and outputs isomorphic to i and o implements all of the infinitely many automata with input i and output o.
If unlimited pancomputationalism is correct, then the claim that a system S performs a certain computation becomes trivially true and vacuous or nearly so; it fails to distinguish S from anything else (or perhaps from anything else with the same inputs and outputs). Thus, unlimited pancomputationalism threatens the computational theory of cognition. If cognition is computation simply because cognitive systems, like everything else, may be seen as performing computations, then it appears that the computational theory of cognition is both trivial and vacuous. By the same token, unlimited pancomputationalism threatens the foundations of computer science, where the objective computational power of different systems is paramount. The threat of trivialization is a major motivation behind responses to the arguments for unlimited pancomputationalism.
The first thing to notice is that arguments for unlimited pancomputationalism rely either implicitly or explicitly on the simple mapping account of computation. They assume that an arbitrary mapping from a computational description C to a physical description of a system is sufficient to conclude that the system implements C. In fact, avoiding unlimited pancomputationalism is a major motivation for rejecting the simple mapping account of computation. By imposing restrictions on which mappings are legitimate, other accounts of computation aim to avoid unlimited pancomputationalism.
In one response to unlimited pancomputationalism, Jack Copeland (1996) argues that the mappings it relies on are illegitimate because they are constructed ex post facto — after the computation is already given. In the case of kosher computational descriptions — the kind normally used in scientific modeling — the work of generating successive descriptions of a system's physical dynamics is done by a computer running an appropriate program (e.g., a weather forecasting program), not by the mapping relation. In the sort of descriptions employed in arguments for unlimited pancomputationalism, instead, the descriptive work is done by the mapping relation.
An arbitrarily chosen computational description, such as those employed in arguments for unlimited pancomputationalism, does not generate successive descriptions of the state of an arbitrary system. If someone wants a genuine computational description of a physical system, she must first identify physical states and state transitions of the system, then represent them by a computational description (thereby fixing the mapping relation between the computational description and the system), and finally use a computer to generate subsequent representations of the state of the system, while the mapping relation stays fixed. By contrast, the arguments for unlimited pancomputationalism pick a computation first, then slice and aggregate the physical system to fit the computational description, and finally generate the mapping between the two. The work of describing the physical system is not done by the computational description but by whoever constructs the mapping. Copeland concludes that such ex post facto mappings are illegitimate.
In addition, both Chalmers (1995, 1996) and Copeland (1996) argue that the mappings invoked by unlimited pancomputationalism violate the counterfactual relations between the computational states. Consider again Putnam's slice-and-aggregate strategy for generating mappings. The mappings are constructed based on an arbitrary dynamical evolution of an arbitrary physical system. No attempt is made to establish what would happen to the physical system had conditions been different. Chalmers and Copeland argue that this is illegitimate, as a genuine implementation must exhibit the same counterfactual relations that obtain between the computational states. This response leads to the counterfactual account of computation, according to which the counterfactual relations between the physical states must be isomorphic to the counterfactual relations between the computational states.
Another possible response to unlimited pancomputationalism is that its mappings fail to construct an isomorphism between the causal structure of the physical system and the state transitions specified by the computational description. Consider Putnam's argument again. The mapping from the computational description to the physical description is chosen with no regard to the causal relations that obtain between the physical states of the system. Thus, after a computational description is mapped onto a physical description in that way, the computational description does not describe the causal structure of the physical system. According to several authors, non-causal mappings are illegitimate (Chrisley 1995, Chalmers 1995, 1996, Scheutz 1999, 2001). Naturally, these authors defend the causal account of computation, according to which acceptable mappings must respect the causal structure of a system.
Yet another response to unlimited pancomputationalism is implicitly given by Godfrey-Smith (2009). Although Godfrey-Smith is primarily concerned with functionalism as opposed to computation per se, his argument is still relevant here. Godfrey-Smith argues that for a mapping to constitute a genuine implementation, the microscopic physical states that are clustered together (to correspond to a given computational state) must be physically similar to one another — there cannot be arbitrary groupings of arbitrarily different physical states, as in the arguments for unlimited pancomputationalism. Godfrey-Smith suggests that his similarity restriction on legitimate mappings may be complemented by the kind of causal and localizational restrictions proposed by Chalmers (1996).
The remaining accounts of computation — the semantic, syntactic, and mechanistic accounts — are even more restrictive than the causal and counterfactual accounts; they impose further constraints on acceptable mappings. Therefore, like the causal and counterfactual accounts, they have resources for avoiding unlimited pancomputationalism.
Such resources are not always straightforward to deploy. For example, consider the semantic account, according to which computation requires representation. If being a representation of something is an objective property possessed by relatively few things, then unlimited pancomputationalism is ruled out on the grounds that only the few items that constitute representations are genuine computational states. If, however, everything is representational in the relevant way, then everything is computational (cf. Churchland and Sejnowski 1992, Shagrir 2006). If, in addition, whether something represents something else is just a matter of free interpretation, then the semantic account of computation gives rise to unlimited pancomputationalism all over again. Similar considerations apply to the syntactic and mechanistic accounts. For such accounts to truly avoid unlimited pancomputationalism, they must not rely on free interpretation.
Limited pancomputationalism is much weaker than its unlimited cousin. It holds that every physical system performs one (or relatively few) computations. Which computations are performed by which system is deemed to be a matter of fact, depending on objective properties of the system. In fact, several authors who have mounted detailed responses to unlimited pancomputationalism explicitly endorse limited pancomputationalism (Chalmers 1996, 331, Scheutz 1999, 191).
Unlike unlimited pancomputationalism, limited pancomputationalism does not turn the claim that something is computational into a vacuous claim. Different systems generally have different objective properties; thus, according to limited pancomputationalism, different systems generally perform different computations. Nevertheless, it may seem that limited pancomputationalism still trivializes the claim that a system is computational. For according to limited pancomputationalism, digital computers perform computations in the same sense in which rocks, hurricanes, and planetary systems do. This may seem to do an injustice to computer science — in computer science, only relatively few systems count as performing computations and it takes a lot of difficult technical work to design and build systems that perform computations reliably. Or consider the claim that cognition is computation. This computational theory of cognition was introduced to shed new and explanatory light on cognition. But if every physical process is a computation, the computational theory of cognition seems to lose much of its explanatory force (Piccinini 2007b).
Another objection to limited pancomputationalism begins with the observation that any moderately complex system satisfies indefinitely many objective computational descriptions (Piccinini 2010). This may be seen by considering computational modeling. A computational model of a system may be pitched at different levels of granularity. For example, consider cellular automata models of the dynamics of a galaxy or a brain. The dynamics of a galaxy or a brain may be described using an indefinite number of cellular automata — using different state transition rules, different time steps, or cells that represent spatial regions of different sizes. Furthermore, an indefinite number of formalisms different from cellular automata, such as Turing machines, can be used to compute the same functions computed by cellular automata. It appears that limited pancomputationalists are committed to the galaxy or the brain performing all these computations at once. But that does not appear to be the sense in which computers (or brains) perform computations.
In the face of these objections, limited pancomputationalists are likely to maintain that the explanatory force of computational explanations does not come from the claim that a system is computational simpliciter. Rather, explanatory force comes from the specific computations that a system is said to perform. Thus, a rock and a digital computer perform computations in the same sense. But they perform radically different computations, and it is the difference between their computations that explains the difference between them. As to the objection that there are still too many computations performed by each system, limited pancomputationalists have two main options: either to bite the bullet and accept that every system implements indefinitely many computations, or to find a way to single out, among the many computational descriptions satisfied by each system, the one that is ontologically privileged — the one that captures the computation performed by the system. One way to do this is to postulate a fundamental physical level, whose most accurate computational description identifies the (most fundamental) computation performed by the system. This response is built into the view that the physical world is fundamentally computational (next section).
For those who remain unsatisfied with limited pancomputationalism, the desire to avoid it motivates a shift to more restrictive accounts of computation, analogous to the way the desire to avoid unlimited pancomputationalism motivates the shift from the simple mapping account to more restrictive accounts such as the causal account. The semantic account may be able to restrict genuine computational descriptions to fewer systems than the causal account, provided that representations — which are needed for computation according to the semantic account — are hard to come by. Mutatis mutandis, the same is true of the syntactic and mechanistic accounts.
Some authors argue that the physical universe is fundamentally computational. The universe itself is a computing system, and everything in it is a computing system too (or part thereof). Unlike the previous versions of pancomputationalism, which originate in philosophy, this ontic pancomputationalism originates in physics. It includes both an empirical claim and a metaphysical one. Although the two claims are logically independent, supporters of ontic pancomputationalism tend to make them both.
The empirical claim is that all fundamental physical magnitudes and their state transitions are such as to be exactly described by an appropriate computational formalism — without resorting to the approximations that are a staple of standard computational modeling. This claim takes different forms depending on which computational formalism is taken to describe the universe exactly. The two main options are cellular automata, which are a classical computational formalism, and quantum computing, which is non-classical.
The earliest and best known version of ontic pancomputationalism is due to Konrad Zuse (1970, 1982) and Edward Fredkin, whose unpublished ideas on the subject influenced a number of American physicists (e.g., Feynman 1982, Toffoli 1982, Wolfram 2002; see also Wheeler 1982, Fredkin 1990). According to some of these physicists, the universe is a giant cellular automaton. A cellular automaton is a lattice of cells; each cell can take one out of finitely many states and updates its state in discrete steps depending on the state of its neighboring cells. For the universe to be a cellular automaton, all fundamental physical magnitudes must be discrete, i.e., they must take at most finitely many values. In addition, time and space must be fundamentally discrete or must emerge from the discrete processing of the cellular automaton. At a fundamental level, continuity is not a real feature of the world — there are no truly real-valued physical quantities. This flies in the face of most mainstream physics, but it is not an obviously false hypothesis. The hypothesis is that at a sufficiently small scale, which is currently beyond our observational and experimental reach, (apparent) continuity gives way to discreteness. Thus, all values of all fundamental variables, and all state transitions, can be fully and exactly captured by the states and state transitions of a cellular automaton.
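The discreteness at issue can be made concrete with a minimal one-dimensional cellular automaton. The sketch below is purely illustrative (the rule number and lattice size are arbitrary choices, not drawn from the works cited): each cell takes one of two states and updates in discrete time steps as a function of its local neighborhood.

```python
# A minimal one-dimensional (elementary) cellular automaton.
# Each cell holds one of finitely many states (here 0 or 1) and
# updates in discrete steps as a function of its local neighborhood.

def step(cells, rule=110):
    """Apply an elementary CA rule to a row of cells (boundary cells read as 0)."""
    new = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right  # encode as 0..7
        new.append((rule >> neighborhood) & 1)                # look up rule bit
    return new

row = [0] * 10 + [1] + [0] * 10   # a single "on" cell
history = [row]
for _ in range(5):
    row = step(row)
    history.append(row)
```

Every value of every variable here is drawn from a finite set, and every state transition is a finite table lookup — exactly the situation classical ontic pancomputationalism posits at the fundamental level.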
Although cellular automata have been shown to describe many aspects of fundamental physics, it is difficult to see how to simulate the quantum mechanical features of the universe using a classical formalism such as cellular automata (Feynman 1982). This concern motivated the development of quantum computing formalisms (Deutsch 1985, Nielsen and Chuang 2000). Instead of relying on digits — most commonly, binary digits or bits — quantum computation relies on qudits — most commonly, two-state qudits or qubits. The main difference between a digit and a qudit is that whereas a digit can take only one out of finitely many states, such as 0 and 1 (in the case of a bit), a qudit can also take an uncountable number of states that are a superposition of the basis states in varying degrees, such as superpositions of 0 and 1 (in the case of a qubit). Furthermore, unlike a collection of digits, a collection of qudits can exhibit quantum entanglement. According to the quantum version of ontic pancomputationalism, the universe is not a classical computer but a quantum computer, that is, not a computer that manipulates digits but a computer that manipulates qubits (Lloyd 2006) — or, more generally, qudits.
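The digit/qudit contrast can be sketched with elementary arithmetic on amplitudes. In this toy illustration (single pure states only, no entanglement), a qubit is a normalized pair of complex amplitudes, and the Hadamard gate sends a basis state into an equal superposition:

```python
import math

# A bit takes exactly one of two states; a (pure) qubit state is a
# normalized pair of complex amplitudes (alpha, beta), yielding
# measurement probabilities |alpha|^2 and |beta|^2.

def hadamard(alpha, beta):
    """Apply the Hadamard gate to a single qubit's amplitudes."""
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

alpha, beta = hadamard(1 + 0j, 0 + 0j)      # start in the basis state |0>
probs = (abs(alpha) ** 2, abs(beta) ** 2)   # approximately (0.5, 0.5)
```

Because (alpha, beta) ranges over a continuum of normalized pairs, a single qubit already has uncountably many possible states — the point of contrast with a bit noted above.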
The quantum version of ontic pancomputationalism is less radical than the classical version. The classical version eliminates continuity from the universe, primarily on the grounds that eliminating continuity allows classical computers to describe the universe exactly rather than approximately. Thus, the classical version appears to be motivated not by empirical evidence but by epistemological concerns. Although there is no direct evidence for classical ontic pancomputationalism, in principle it is a testable hypothesis (Fredkin 1990). By contrast, quantum ontic pancomputationalism may be seen as a reformulation of quantum mechanics in the language of quantum computation and quantum information theory (qubits), without changes in the empirical content of the theory (e.g., Fuchs 2004, Bub 2005).
But ontic pancomputationalists do not limit themselves to making empirical claims. They often make an additional metaphysical claim. They claim that computation (or information, in the physical sense described in Section 2.3) is what makes up the physical universe. This point is sometimes made by saying that at the most fundamental physical level, there are brute differences between states — nothing more need or can be said about the nature of the states. This view reverses the traditional conception of the relation between computation and the physical world.
According to the traditional conception, which is presupposed by all accounts of computation discussed above, physical computation requires a physical substratum that implements it. Computation is an aspect of the organization and behavior of a physical system; there is no software without hardware. Thus, according to the traditional conception, if the universe is a cellular automaton, the ultimate constituents of the universe are the physical cells of the cellular automaton. It is legitimate to ask what kind of physical entity such cells are and how they interact with one another so as to satisfy their cellular automata rules.
By contrast, according to the metaphysical claim of ontic pancomputationalism, a physical system is just a system of computational states. Computation is ontologically prior to physical processes, as it were. “‘Hardware’ [is] made of ‘software’” (Kantor 1982, 526, 534). According to this non-traditional conception, if the universe is a cellular automaton, the cells of the automaton are not concrete, physical structures that causally interact with one another. Rather, they are software — purely “computational” entities.
Such a metaphysical claim requires an account of what computation, or software, or physical information, is. If computations are not configurations of physical entities, the most obvious alternative is that computations are abstract, mathematical entities, like numbers and sets. As Wheeler (1982, 570) puts it, “the building element [of the universe] is the elementary ‘yes, no’ quantum phenomenon. It is an abstract entity. It is not localized in space and time”. Under this account of computation, the ontological claim of ontic pancomputationalism is a version of Pythagoreanism. All is computation in the same sense in which more traditional versions of Pythagoreanism maintain that all is number or that all is sets (Quine 1976).
Ontic pancomputationalism may be attacked on both the empirical and the ontological fronts. On the empirical front, there is little positive evidence to support ontic pancomputationalism. Supporters appear to be motivated by the desire for exact computational models of the world rather than empirical evidence that the models are correct. Even someone who shares this desire may well question why we should expect nature to fulfill it. On the metaphysical front, Pythagoreanism faces the objection that the abstract entities it puts at the fundamental physical level lack the causal and qualitative properties that we observe in the physical world — or at least, it is difficult to understand how abstract entities could give rise to physical qualities and their causal powers (e.g., Martin 1997).
According to the Church-Turing thesis (CTT), any function that is intuitively computable is computable by some Turing machine (i.e., Turing-computable). Alternatively, CTT may be formulated as follows: any function that is “naturally regarded as computable” (Turing 1936–7, 135) is Turing-computable. The phrases “intuitively computable” and “naturally regarded as computable” are somewhat ambiguous. When they are disambiguated, CTT takes different forms.
In one sense, “intuitively computable” means computable by following an algorithm or effective procedure. An effective procedure is a finite list of clear instructions for generating new symbolic structures out of old symbolic structures. When CTT is interpreted in terms of effective procedures, it may be called Mathematical CTT, because the relevant evidence is more logical or mathematical than physical. Mathematical CTT says that any function computable by an effective procedure is Turing-computable.
There is compelling evidence that Mathematical CTT is true (Kleene 1952, §62, §67; cf. also Sieg 2006):
- There are no known counterexamples.
- Diagonalization over Turing machines, contrary to what may be expected, does not yield a function that is not Turing-computable.
- Argument from confluence: all the formalisms proposed to capture the intuitive notion of computability by effective procedure — formalisms such as general recursiveness (Gödel 1934), λ-definability (Church 1932, Kleene 1935), Turing-computability (Turing 1936–7), and reckonability (Gödel 1936) — turn out to capture the same class of functions.
- A Turing machine seems capable of reproducing any operation that a human being can perform while following an effective procedure (Turing 1936–7's main argument for CTT).
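To make the formalism behind these arguments concrete, here is a minimal Turing machine simulator; the example machine, which flips every bit of its input, is a hypothetical illustration, not one drawn from the sources cited:

```python
# A minimal Turing machine simulator. The example machine below is a
# hypothetical illustration: it flips every bit of its input, moving
# right until it reaches a blank cell, then halts.

def run(program, tape, state="start", blank="_", max_steps=10_000):
    """program maps (state, symbol) to (new_state, new_symbol, head_move)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):          # guard: a machine may never halt
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = program[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

For instance, `run(flip, "0110")` yields `"1001"`. Each ingredient of an effective procedure is visible here: finitely many states and symbols, a finite instruction table, and one clearly specified operation per step.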
In another sense, “intuitively computable” means computable by physical means. When CTT is so interpreted, it may be called Physical CTT (following Pitowsky 1990), because the relevant evidence is more physical than logical or mathematical.
Physical CTT is often formulated in very strong forms. To a first approximation, Bold Physical CTT holds that any physical process — anything doable by a physical system — is computable by some Turing machine.
Bold Physical CTT can be made more precise in a number of ways. Here is a representative sample, followed by references to where they are discussed:
- (A) Any physical process can be simulated by some Turing machine (e.g., Deutsch 1985, Wolfram 1985, Pitowsky 2002).
- (B) Any function over denumerable domains (such as natural numbers) that is computable by an idealized computing machine that manipulates arbitrary real-valued quantities (as defined by Blum et al. 1998) is Turing-computable.
- (C) Any system of equations describing a physical system gives rise to computable solutions (cf. Earman 1986, Pour-El 1999). A solution is said to be computable just in case, given computable real numbers as initial conditions, it returns computable real numbers as values. A real number is said to be computable just in case there is a Turing machine whose output effectively approximates it.
- (D) For any physical system S and observable W, there is a Turing-computable function f: N → N such that for all times t∈N, f(t)=W(t) (Pitowsky 1990).
Thesis (A) is ambiguous between two notions of simulation. In one sense, simulation is the process by which a digital computing system (such as a Turing machine) computes the same function as another digital computing system. This is the sense in which universal Turing machines can simulate any other Turing machine. If (A) is interpreted using this first notion of simulation, it entails that everything in the universe is a digital computing system. This is (a variant of) ontic pancomputationalism (Section 3.4).
In another sense, simulation is the process by which the output of a digital computing system represents an approximate description of the dynamical evolution of another system. This is the sense in which computational models of the weather simulate the weather. If (A) is interpreted using this second notion of simulation, then (A) is true only if we do not care how close our computational approximations are. If we want close computational approximations — as we usually do — then (A) turns into the claim that any physical process can be computationally approximated to the degree of accuracy that is desired in any given case. Whether that is true varies from case to case depending on the dynamical properties of the system, how much is known about them, what idealizations and simplifications are adopted in the model, what numerical methods are used in the computation, and how many computational resources (such as time, processing speed, and memory) are available (Piccinini 2007b).
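The resource-dependence of computational approximation can be illustrated with a toy numerical simulation. In this sketch (Euler's method applied to an arbitrary example equation, dx/dt = x, chosen for illustration), accuracy improves only as more computational steps are spent:

```python
import math

# Euler's method for dx/dt = x on [0, 1], starting from x(0) = 1.
# The exact value at t = 1 is e; the digital approximation gets
# closer only as more steps (more computational resources) are spent.

def euler(steps):
    x, dt = 1.0, 1.0 / steps
    for _ in range(steps):
        x += x * dt        # one discrete update per time step
    return x

errors = [abs(euler(n) - math.e) for n in (10, 100, 1000)]
# each tenfold increase in steps cuts the error roughly tenfold
```

How accurate the simulation is depends on the numerical method, the step size, and the resources available — in line with the second notion of simulation just described.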
Thesis (B) is straightforwardly and radically false. Blum et al. (1989) set up a mathematical theory of computation over real-valued quantities, which they see as a fruitful extension of ordinary computability theory. Within such a theory, Blum et al. define idealized “computing” machines that perform addition, subtraction, multiplication, division, and equality testing as primitive operations on arbitrary real-valued quantities. They easily prove that such machines can compute all sets defined over denumerable domains by encoding their characteristic function as a real-valued constant (ibid., 405). Although they do not discuss this result as a refutation of Physical CTT, their work is often cited in discussions of physical computability and Physical CTT.
Theses (C) and (D) have interesting counterexamples that are consistent with some physical theories (cf. below and Pour-El 1999). These theoretical counterexamples may or may not occur in our concrete physical universe.
Each of (A)–(D) raises important questions pertaining to the foundations of computer science, physics, and mathematics. It is not clear, however, that any of these theses bears an interesting analogy to Mathematical CTT. Below are two reasons why.
First, (A)–(D) are falsified by processes that cannot be built and used as computing devices. The most obvious example is (B). Blum et al.'s result is equivalent to demonstrating that all functions over denumerable domains — including the uncountably many functions that are not Turing-computable — are computable by Blum et al.'s “computing” systems, which are allowed to manipulate the exact values of arbitrary real numbers. Hence, (B) is radically false. But at the same time, this result has no direct practical applications, as there is no evidence that concrete physical systems can manipulate arbitrary real-valued quantities in the way that Blum et al.'s systems do.
More generally, formulations (A)–(D) would be falsified by a sequence generated by a random (i.e., nondeterministic) physical process. According to quantum mechanics, some physical processes — such as atom decay — contain an objectively random element. Hidden variable interpretations dispute this, but the possibility of genuine randomness is sufficiently plausible that it should be taken into account.
Consider a random process that produces discrete outputs over an infinite period of time, e.g., the decay of atoms from a radioactive sample. Its output at regular time intervals is a string of digits — ‘0’ if no atoms decay during a time interval, ‘1’ if one or more atoms decay during a time interval. A simple consideration shows that, with probability one, the sequence produced by our random process is not Turing-computable. There are uncountably many infinite strings of digits (even more strongly, there are uncountably many infinite strings of digits with any given limiting frequency of ‘0's and ‘1's). But there are only countably many Turing-computable infinite strings. Therefore, assuming that each infinite string (or each infinite string with a certain limiting frequency) has the same probability of occurring as a result of a random process, the probability that a random process would generate a Turing-computable string of digits is zero, whereas the probability that the string of digits is not Turing-computable is one (for the stronger conclusion that no such string of digits is Turing-computable, see Calude & Svozil 2008). Thus, simply by using a random process to generate a string, there is a sense in which we would have physical means that go beyond what is Turing-computable. As Alan Turing (1950, 438–439) pointed out, a machine with a “random element” can do “more” than a Turing machine. But doing more is not the same as computing more. Contrary to what is sometimes suggested (e.g., Calude 2005, 10), the combination of a Turing machine and a random process does not threaten Physical CTT.
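The counting-and-measure step of this argument can be stated compactly. Write μ for the uniform (fair-coin) product measure on infinite binary strings and C for the set of Turing-computable ones:

```latex
\begin{aligned}
|C| &\le \aleph_0 && \text{(there are only countably many Turing machines)}\\
\mu(\{s\}) &= \lim_{n\to\infty} 2^{-n} = 0 && \text{(each individual string has probability } 0\text{)}\\
\mu(C) &= \sum_{s\in C}\mu(\{s\}) = 0, && \text{so}\quad \mu\bigl(\{0,1\}^{\mathbb{N}}\setminus C\bigr) = 1 .
\end{aligned}
```

The last line uses countable additivity: a countable union of probability-zero events has probability zero, so with probability one the random process produces a string outside C.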
Random processes should not count as computations: unlike computations properly so called, random processes cannot be used to generate the desired values of a function or solve the desired instances of a general problem. Random processes can be exploited by a computation, of course — there are important computational techniques that rely on random or pseudo-random choices at some stages of a computation. If some quantum random sequences were random in the sense of algorithmic information theory, they may even raise the probability of obtaining correct solutions from computational techniques that rely on random choices (Calude 2005, 10). But no computational technique can amount to a mere sequence of random choices. So any thesis, such as Bold Physical CTT, that would be falsified by a sequence generated by a random process is too strong to capture the notion of physical computability — the physical analogue of algorithmic computability. Thus, contrary to what some authors seem to assume, Bold Physical CTT is too strong to be a physical analogue of Mathematical CTT.
Another feature that theses (A)–(D) have in common is that they appeal quite freely to arbitrary real-valued quantities. This is explicit in (B). To see why they all appeal to arbitrary real-valued quantities, consider that most physical theories assume that many physical magnitudes, including time, can take arbitrary real numbers as values. Hence, systems of physical equations, whose simulations, solutions, or observables are appealed to, respectively, by (A), (C), and (D), involve arbitrary real numbers. Therefore, all formulations of Bold Physical CTT involve, explicitly or implicitly, arbitrary real numbers.
The expressive power of real numbers may be used to generate a simple recipe to obtain the values of a Turing-uncomputable function. Consider that the digital expansion of a real number contains countably many digits. Hence, for any characteristic function (i.e., a function whose values are ‘0’ or ‘1’) defined over a countable domain, including all Turing-uncomputable such functions, there is a real number whose binary expansion encodes its values. This is because, given any such characteristic function, one can define a real number whose nth binary digit equals the function's nth value.
Suppose you wish to know the value of a specific Turing-uncomputable characteristic function, such as the halting function for Turing machines, for its nth argument. Take the real number r whose digital expansion encodes the values of the halting function. All you need to do is obtain the value of the nth digit in the binary expansion of r and you have the result you desire. If you can do this, you may obtain any value of any characteristic function defined over strings, including all Turing-uncomputable such functions.
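The recipe can be sketched in code, with one crucial hedge: since no program can supply the halting function, a computable stand-in (here, the parity function) plays the role of the encoded characteristic function. The encoding and digit-extraction arithmetic is the same either way; what cannot be had is the uncomputable real itself.

```python
from fractions import Fraction

def f(n):
    """Computable stand-in for the encoded characteristic function."""
    return 1 if n % 2 == 0 else 0

def encode(f, bits):
    """Pack f(0), f(1), ... into the binary expansion of a rational r."""
    return sum(Fraction(f(n), 2 ** (n + 1)) for n in range(bits))

def nth_digit(r, n):
    """Read the nth binary digit of r, i.e., recover f(n)."""
    return int(r * 2 ** (n + 1)) % 2

r = encode(f, 16)
recovered = [nth_digit(r, n) for n in range(16)]
```

If r encoded the halting function instead, `nth_digit` would solve the halting problem — but only given r with unbounded precision, which is precisely what there is no evidence we can prepare or measure.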
The above recipe, if feasible, is a trivial consequence of the expressive power of real numbers. Yet it is discussed in the literature as an example of perhaps possible physical computation beyond the power of Turing machines (e.g., Copeland 2000) — something that would falsify Physical CTT. But there is no reason to believe such a recipe can be implemented, because it requires either the measurement or the preparation of a Turing-uncomputable real-valued quantity with unbounded precision. There is no evidence that either is doable.
By relying quite freely on arbitrary real-valued quantities, many versions of Bold Physical CTT make themselves vulnerable to falsification by relatively trivial (though presumably unfeasible) recipes such as the one just mentioned. This casts doubt on the assumption — made by some who seek ways to falsify Bold Physical CTT — that Bold Physical CTT is actually an interesting physical analogue of Mathematical CTT.
(The point just made does not impugn analog computation in the standard sense of Pour-El (1974). Analog computation does not manipulate exact values of arbitrary real-valued quantities but rather continuous variables. Although a continuous variable may be assumed to take any real value within a relevant interval, a successful concrete analog computation requires only the measurement of real variables with some degree of approximation. No exact values of arbitrary real-valued quantities are exploited by an analog computation, so analog computation does not falsify Bold Physical CTT (cf. Rubel 1989).)
Similar doubts about the alleged analogy between Mathematical CTT and Bold Physical CTT may be generated by a related observation. Many current physical theories assume that nature contains real-valued quantities that vary along a continuum. These may include the velocity of objects, the coordinates that define the position of their center of gravity in the spacetime manifold, and more. If these physical theories are correct, then many properties of many entities take arbitrary real numbers as their values. And most real numbers in any continuous interval are Turing-uncomputable. In fact, there are only countably many computable numbers, but any continuous interval contains uncountably many real numbers. Thus, the probability that a randomly picked real-valued quantity is computable is zero. Hence, if our physical theories are correct, most transformations of the relevant physical properties are transformations of Turing-uncomputable quantities into one another.
For instance, an object's change of speed, or even its simple change of spatial location may be transformations of one Turing-uncomputable real-valued quantity into another. A transformation of one Turing-uncomputable value into another Turing-uncomputable value is certainly a Turing-uncomputable operation. Hence, it would seem that given many of our physical theories, the physical world is chock-full of operations that outstrip the power of Turing machines. If this is correct, it falsifies Bold Physical CTT.
But, as before, there is no reason to believe that we can use the Turing-uncomputable operations just mentioned to compute in the epistemological sense that motivates CTT in the first place — to solve problems, to generate the values of desired functions for desired arguments. In other words, it is implausible that the operations in question should count as computations. Bold Physical CTT, which is threatened by such operations, is not interestingly analogous to Mathematical CTT.
To conclude our discussion of Bold Physical CTT, it may be helpful to distinguish the issue of physical computability proper — the issue that pertains to the physical analogue of Mathematical CTT — from other issues that connect computability and physics. Many questions about the relationship between physical processes and computability deserve to be asked. What are the physically computable functions? This is the question that should motivate Physical CTT. What can be computationally approximated to what degree under what circumstances? This is presumably what (A) is after. As important and interesting as this question is, it is different from the question of what can be physically computed. Yet other questions motivate theses (B)–(D) above as well as other versions of Bold Physical CTT to be found in the literature. Many of these questions are interesting and deserve to be investigated. Nevertheless, they do not properly belong in discussions of CTT, because they are different from the computability question that motivates CTT in the first place.
In the literature on computation in physical systems, there is growing concern that a physical analogue of Mathematical CTT should include only usable physical processes (e.g., Németi and Dávid 2006; Ord 2006, Smith 2006a, Beggs and Tucker 2007). Given this concern, an adequate version of Physical CTT ought to be more modest than Bold Physical CTT.
Modest Physical CTT ought to be formulated in terms of what can be physically computed. Prototypical examples of physical computation are the processes of ordinary digital computers and their components, including digital circuits. Such processes can be exhaustively described by effective procedures, which are already covered by Mathematical CTT. Mathematical CTT says that any function computable by an effective procedure is computable by a Turing machine. As Turing machines can be physically implemented (or replaced by a computing human), any process that follows an effective procedure is physically computable.
But physical computation in the present sense is a broader notion than computation by effective procedure. A process may count as a physical computation even if there is no effective procedure for describing the process, perhaps because there are no finite instantaneous descriptions of the internal states that constitute the process or no way to finitely and exactly specify the transition from one instantaneous description to the next. Thus, Modest Physical CTT is stronger than Mathematical CTT. In addition to physical processes that follow effective procedures, Modest Physical CTT may cover continuous dynamical processes (as in certain kinds of connectionist computing), processes that span large portions of spacetime, and quantum processes (as in quantum computing). But physical computation does not include all physical processes.
In accordance with this proposal, Modest Physical CTT holds that any function that is physically computable is Turing-computable.
For a process to count as a physical computation, and hence for it to be relevant to Modest Physical CTT, it must be usable by an observer to generate the desired values of an independently specified function. This requirement may be spelled out in terms of a number of constraints (Piccinini forthcoming). This list is not intended to be exhaustive:
Constraint 1: Readable inputs and outputs. The process must take inputs and yield outputs that observers can read without error, so that observers can use the outputs as solutions to problems or values of functions defined over the inputs. For that to be possible, presumably the inputs and outputs need to be strings of discrete states, like the inputs and outputs of ordinary digital computers.
Constraint 2: Process-independent rule. There must be a fixed rule or map — specifiable independently of the physical process — that links the outputs to the inputs. By defining the problem to be solved by the process, this rule tells the user what she is going to learn by running the process. Since the rule defines a physical computation in general, the rule need not be recursive. For instance, it may be the rule that defines the halting problem for Turing machines. But like all recursive rules, the rule must be the same for all inputs that belong in the same problem; it cannot change from one input to the next.
Constraint 3: Repeatability. The process must be repeatable, so as to allow users to obtain the same results multiple times and to check a computation by repeating it.
Constraint 4: Resettability. The system undergoing the process must be resettable, so that a user can perform multiple computations on the same system.
Constraint 5: Physical constructability. The system must be physically constructible.
Constraint 6: Reliability. The system must not break down before the process is completed.
In summary, Modest Physical CTT asserts that every function that can be physically computed, i.e., every usable transformation of input strings into output strings in accordance with a process-independent rule defined over the strings, is Turing-computable.
Since Modest Physical CTT is restricted by epistemologically relevant criteria, it doesn't raise the worries associated with Bold Physical CTT — namely, that it's too easy to falsify and irrelevant to the notion of computability that motivates CTT. And there are good reasons to believe Modest Physical CTT. All computing mechanisms that have been physically built or are in the process of being built compute only functions that are Turing-computable.
It is important to understand the exact scope of Modest Physical CTT. Modest Physical CTT does not entail that every physical process is a computation, or that every physical system is a computing system. It only says that if something physical is a computing system, then the functions it computes are Turing-computable.
To fully assess Modest Physical CTT, we should consider whether it is possible to build a machine that, like an ordinary digital computer, can be used by human observers, but, unlike an ordinary digital computer, generates the values of functions that are Turing-uncomputable. In recent years, several designs for hypercomputation have been proposed. Hypercomputation is the computation of Turing-uncomputable functions. If hypercomputation turns out to be physically possible, it will refute Modest Physical CTT.
To a first approximation, a hypercomputer is a system that yields the values of a function that is not Turing-computable. If what counts as yielding the values of a function is left unspecified, any of the systems discussed in Section 4.1, such as genuinely random processes and systems that manipulate arbitrary real-valued quantities, would count as hypercomputers. But in discussing Bold Physical CTT, we saw that yielding the values of a function that is Turing-uncomputable, without further constraints, is not enough for genuine physical computation.
By analogy with the distinction between Bold Physical CTT and Modest Physical CTT, let us distinguish between a weak and a strong notion of hypercomputation by distinguishing usable hypercomputers from unusable hypercomputational processes.
An unusable hypercomputational process is a physical process that fails to satisfy at least one of the first four constraints on physical computation. Examples include processes whose inputs or outputs are arbitrary real-valued quantities (which are not readable without error) and genuinely random processes (which have no rule characterizing the inputs and outputs independently of the process, and are neither repeatable nor resettable). These processes are not usable because an observer cannot obtain from them arbitrary values of an independently defined function on a chosen input, as ordinary computing systems can (given enough time and space). Since they are not usable, unusable hypercomputational processes are irrelevant to Modest Physical CTT.
A usable hypercomputer is a physical system that satisfies at least the first four constraints on physical computation. It has readable inputs and outputs, there is a rule characterizing its input-output properties that may be defined independently of the process itself, and its processes are repeatable and resettable. If a system does not satisfy one of these conditions, it does not count as computing in the sense relevant to Modest Physical CTT.
A system that satisfies these conditions — a usable hypercomputer — may be purely notional. For instance, infinitely accelerating Turing machines (Copeland 2002) are Turing machines that perform each computational operation in half the time of their previous operation. As a consequence, infinitely accelerating Turing machines complete an infinite number of operations (a supertask) within twice the time it takes them to perform their first operation. This allows infinitely accelerating Turing machines to compute functions, such as the halting function, that are Turing-uncomputable. But infinitely accelerating Turing machines are usually discussed as notional entities, without evidence that they can be constructed. Purely notional systems, of course, do not falsify Modest Physical CTT. To do that, a system must satisfy at least the fifth and sixth constraints on physical computation: it must be physically constructible and it must operate reliably.
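The supertask timing rests on a convergent geometric series: if the first operation takes time t and each subsequent operation takes half the time of the one before, the total time for infinitely many operations is finite:

```latex
\sum_{n=0}^{\infty} \frac{t}{2^{n}} \;=\; t \cdot \frac{1}{1 - \tfrac{1}{2}} \;=\; 2t .
```

This is why the machine completes its supertask within twice the time of its first operation.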
One way to construct something like an infinitely accelerating Turing machine would be to make a computing machine that, after performing some computational operations, builds a smaller and faster copy of itself (Davies 2001). The smaller and faster copy will also perform some operations and then build a faster and smaller copy, and so on. Given appropriate assumptions, the resulting series of infinitely shrinking machines will be able to complete an infinite number of computational operations within a finite time, thereby surpassing the power of Turing machines. While infinitely shrinking machines appear to be consistent with Newtonian mechanics, Davies (2001, 672) points out that the atomic and quantum mechanical nature of matter in our universe makes infinitely shrinking machines physically impossible.
Neural networks have sometimes been discussed as computing systems that may go beyond Turing-computability (e.g., Smolensky 1988, 3). If we restrict our attention to classes of neural networks that contain all systems with current or foreseeable practical applications, this opinion is unwarranted. There is now a considerable literature on the computational and complexity properties of large classes of neural networks. The most relevant systems have digital inputs and outputs (so as to satisfy our first constraint on physical computation) but may have, and typically do have, non-digital internal processes (i.e., their internal processes are not discrete step-by-step manipulations of strings of digits). The main results are the following: (i) feedforward networks with finitely many processing units are computationally equivalent to Boolean circuits with finitely many gates; (ii) recurrent networks with finitely many units are equivalent to finite state automata; (iii) networks with unbounded tapes or an unbounded number of units are equivalent to Turing machines (Šíma & Orponen 2003).
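Result (ii) can be illustrated with a toy example (my own, not drawn from Šíma & Orponen): a recurrent network of three binary threshold units can implement a two-state finite automaton that tracks the parity of the 1s it has seen, by computing XOR of the state bit and the input bit on each tick.

```python
# Hedged illustration of result (ii): a recurrent network of binary
# threshold units simulating a finite state automaton. Three threshold
# units update a single state bit to track the parity of the 1s seen
# so far (a two-state automaton).

def step(x):
    """Heaviside threshold activation."""
    return 1 if x >= 0 else 0

def update(state, inp):
    """One recurrent tick: new state = XOR(state, inp) via thresholds."""
    h_or = step(state + inp - 0.5)    # fires iff state OR inp
    h_and = step(state + inp - 1.5)   # fires iff state AND inp
    return step(h_or - h_and - 0.5)   # fires iff OR but not AND: XOR

state = 0
for bit in [1, 0, 1, 1]:
    state = update(state, bit)
print(state)  # → 1 (an odd number of 1s was seen)
```

With only finitely many such units, the network has finitely many global states, which is why it cannot exceed finite state automata; unbounded memory is what pushes class (iii) up to Turing machine power.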
Neural networks more powerful than Turing machines may be defined, however, by exploiting once again the expressive power of real numbers. The best known networks of this kind are Analog Recurrent Neural Networks (ARNNs) (Siegelmann 1999). ARNNs should not be confused with analog computers in the traditional sense (Pour-El 1974, Rubel 1993, Mills 2008). Whereas analog computers manipulate real variables without relying on the exact value of arbitrary real-valued quantities, ARNNs manipulate strings of digits by (possibly) relying on the exact value of arbitrary real-valued quantities. Specifically, the weights connecting individual processing units within ARNNs can take exact values of arbitrary real numbers, including values that are Turing-uncomputable. When their weights are Turing-uncomputable, ARNNs can go beyond the power of Turing machines: they can compute any function over binary strings. ARNNs more powerful than Turing machines cannot be built and operated reliably, however, for at least two reasons: first, they require unboundedly precise weights, and second, such weights are not Turing-computable (Davis 2004a, Schonbein 2005, Siegelmann 1999, 148).
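The source of the extra power is easy to isolate. The following sketch (a simplification of mine, not Siegelmann's actual construction) shows how a single real-valued "weight" can encode an arbitrary infinite binary sequence — for instance, the values of the halting function — so that a system with access to the weight's exact value could read off any bit on demand. Base 4 is used so that digit extraction is exact.

```python
from fractions import Fraction

# Hypothetical illustration (not Siegelmann's construction): pack a
# binary sequence b_0, b_1, ... into one "weight" w = sum b_k / 4**(k+1),
# then recover any bit by exact arithmetic on w. Exact rationals stand
# in for the unboundedly precise real weight an ARNN would need.

def encode(bits):
    """Pack a finite list of bits into a single exact rational weight."""
    return sum(Fraction(b, 4 ** (k + 1)) for k, b in enumerate(bits))

def extract(w, k):
    """Read the k-th encoded bit from the exact value of w."""
    return int(w * 4 ** (k + 1)) % 4

bits = [1, 0, 1, 1, 0, 0, 1]   # stand-in for, e.g., halting-function values
w = encode(bits)
assert [extract(w, k) for k in range(len(bits))] == bits
```

The catch is exactly the one noted above: reading the k-th bit requires distinguishing w to precision 4^-(k+1), so exploiting an uncomputable weight demands unbounded physical precision.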
Perhaps the best known proposal for a hypercomputer is due to Mark Hogarth (1994, 2004), who developed an idea of Itamar Pitowsky (1990). Relativistic hypercomputers exploit the properties of a special kind of spacetime called Malament-Hogarth spacetime, which is physically possible in the sense of constituting a solution to Einstein's field equations for General Relativity. Malament-Hogarth spacetimes contain regions with an infinite time-like trajectory λ that can be circumvented by a finite time-like trajectory γ. In other words, λ and γ have a common origin, and there is a spacetime point p on γ such that λ, even though it is infinite, lies entirely in p's chronological past. If an observer launches a Turing machine along λ and then travels along γ she may, within finite time, find herself in the future of an infinitely long computation performed by the Turing machine. If the Turing machine is able to send signals to the observer, the observer would be able to know the outcome of a potentially infinitely long computation, thereby having computational means more powerful than (ordinary) Turing machines. For instance, an observer may be able to obtain the results of an arbitrary instance of the halting function for Turing machines.
Constructing and using a relativistic hypercomputer is a nontrivial affair. The first question is whether our spacetime is Malament-Hogarth. The answer is currently unknown. Even if our spacetime is not Malament-Hogarth globally, it might still contain regions that have the Malament-Hogarth property locally. An example of such a region is a huge, rotating black hole; there is evidence that our universe contains such black holes (Etesi and Németi 2002). But even if there are Malament-Hogarth regions in our universe, there remain considerable obstacles. For starters, (i) a machine that requires an unbounded amount of matter running for an infinite amount of time will malfunction with probability 1, rendering it useless (Button 2009, 779), and (ii) the huge, rotating black hole that is closest to us is probably out of our reach as well as our descendants' reach (see also Németi and Dávid 2006, Andréka, Németi, and Németi 2009).
Quantum computing has also been invoked as a possible source of hypercomputation. Quantum computing is the manipulation of qubits (and more generally, qudits) in accordance with the laws of quantum mechanics. Qubits are variables that, like bits, can be prepared or measured in one of two states, 0 and 1. Unlike bits, qubits can (i) take states that are a superposition of 0 and 1 and (ii) become entangled with each other during a computation. A surprising feature of quantum computing is that it allows computing certain functions much faster than any known classical computation (Shor 1994). But while mainstream quantum computing may be more efficient than classical computing, it does not allow computing any functions beyond those computable by Turing machines.
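The standard textbook model of a single qubit can be sketched in a few lines: a state is a pair of complex amplitudes (a, b) with |a|² + |b|² = 1, measurement yields 0 with probability |a|² and 1 with probability |b|², and gates such as the Hadamard transform create superpositions. This is a simulation of my own for illustration, not any particular quantum computing library.

```python
import random

# Minimal single-qubit sketch under the standard model: a state is a
# pair of complex amplitudes (a, b) with |a|**2 + |b|**2 = 1; measuring
# yields 0 with probability |a|**2 and 1 with probability |b|**2.

def hadamard(state):
    """The Hadamard gate: maps |0> to an equal superposition of 0 and 1."""
    a, b = state
    s = 2 ** -0.5
    return (s * (a + b), s * (a - b))

def measure(state, rng=random.random):
    """Sample a measurement outcome from the state's probabilities."""
    a, b = state
    return 0 if rng() < abs(a) ** 2 else 1

zero = (1 + 0j, 0 + 0j)   # the basis state |0>
plus = hadamard(zero)     # equal superposition of 0 and 1

probs = [round(abs(amp) ** 2, 3) for amp in plus]
print(probs)  # → [0.5, 0.5]
```

Note that such a simulation is itself an ordinary Turing-computable process, which is one informal way to see why mainstream quantum computing yields speed, not functions beyond Turing-computability.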
Some authors have questioned whether the mainstream quantum computing paradigm is general enough and, if not, whether some aspects of quantum mechanics may be exploited to design a quantum hypercomputer (Nielsen 1997, Calude and Pavlov 2002). The most prominent proposal for a quantum hypercomputer is due to Tien Kieu (2002, 2003, 2004, 2005). He argues that an appropriately constructed quantum system can decide whether an arbitrary Diophantine equation has an integral solution — a problem which is known to be unsolvable by Turing machines. Kieu's method involves encoding a specific instance of the problem in an appropriate Hamiltonian, which represents the total energy of a quantum system. Kieu shows that such a system can dynamically evolve over time into an energy ground state that encodes the desired solution. Unfortunately, Kieu's scheme does not appear to be workable. For one thing, it requires infinite precision in setting up and maintaining the system (Hodges 2005). For another thing, Kieu does not provide a successful criterion for knowing when the system has evolved into the solution state, and the problem of determining when the solution state is reached is unsolvable by Turing machines (Smith 2006b, Hagar and Korolev 2007). Thus, operating Kieu's proposed hypercomputer would require already possessing hypercomputational powers.
In conclusion, the candidate hypercomputers proposed so far have not been shown to be physically constructible and reliable. For the time being, Modest Physical CTT remains plausible. It may well be that for all practical purposes, any function that is physically computable is Turing-computable.
- Andréka, H., Németi, I., and P. Németi, 2009, “General Relativistic Hypercomputing and Foundation of Mathematics,” Natural Computing, 8(3): 499–516.
- Beggs, E. J., and J.V. Tucker, 2007, “Can Newtonian Systems, Bounded in Space, Time, Mass and Energy Compute all Functions?” Theoretical Computer Science, 371(1–2): 4–19.
- Blum, L., Cucker F., Shub M., and S. Smale, 1998, Complexity and Real Computation, New York: Springer.
- Bontly, T., 1998, “Individualism and the Nature of Syntactic States,” British Journal for the Philosophy of Science, 49(4): 557–574.
- Bub, J., 2005, “Quantum Mechanics is About Quantum Information,” Foundations of Physics, 35(4): 541–560.
- Burge, T., 1986, “Individualism and Psychology,” Philosophical Review, 95(1): 3–45.
- Button, T., 2009, “SAD Computers and Two Versions of the Church-Turing Thesis,” British Journal for the Philosophy of Science, 60(4): 765–792.
- Calude, C. S., 2005, “Algorithmic Randomness, Quantum Physics, and Incompleteness,” in Proceedings of the Conference “Machines, Computations and Universality” (MCU 2004), M. Margenstern (ed.), Berlin: Springer, pp. 1–17.
- Calude, C. S., and B. Pavlov, 2002, “Coins, Quantum Measurements, and Turing's Barrier,” Quantum Information Processing, 1(1–2): 107–127.
- Calude, C. S., and K. Svozil, 2008, “Quantum Randomness and Value Indefiniteness,” Advanced Science Letters, 1(2): 165–168.
- Chalmers, D. J., 1995, “On Implementing a Computation,” Minds and Machines, 4: 391–402.
- Chalmers, D. J., 1996, “Does a Rock Implement Every Finite-State Automaton?” Synthese, 108: 310–333.
- Chrisley, R. L., 1995, “Why Everything Doesn't Realize Every Computation,” Minds and Machines, 4: 403–430.
- Church, A., 1932, “A Set of Postulates for the Foundation of Logic,” Annals of Mathematics, 33: 346–366.
- Church, A., 1936, “An Unsolvable Problem in Elementary Number Theory,” The American Journal of Mathematics, 58: 345–363.
- Churchland, P. S., and T. J. Sejnowski, 1992, The Computational Brain, Cambridge, MA: MIT Press.
- Copeland, B. J., 1996, “What is Computation?” Synthese, 108(3): 335–359.
- Copeland, B. J., 2000, “Narrow Versus Wide Mechanism: Including a Re-Examination of Turing's Views on the Mind-Machine Issue.” The Journal of Philosophy, XCVI(1): 5–33.
- Copeland, B. J., 2002, “Accelerating Turing Machines,” Minds and Machines, 12(2): 281–301.
- Crane, T., 1990, “The Language of Thought: No Syntax Without Semantics,” Mind and Language, 5(3): 187–212.
- Craver, C. F., 2007, Explaining the Brain, Oxford: Oxford University Press.
- Cummins, R., 1983, The Nature of Psychological Explanation, Cambridge, MA: MIT Press.
- Davies, E. B., 2001, “Building Infinite Machines,” British Journal for the Philosophy of Science, 52(4): 671–682.
- Davis, M., 2004a, “The Myth of Hypercomputation,” in Alan Turing: Life and Legacy of a Great Thinker, C. Teuscher (ed.), Berlin: Springer, pp. 195–211.
- ––– (ed.), 2004b, The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Mineola, NY: Dover.
- Dennett, D. C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.
- Deutsch, D., 1985, “Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer,” Proceedings of the Royal Society of London A, 400: 97–117.
- Dretske, F. I., 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
- Earman, J., 1986, A Primer on Determinism, Dordrecht: D. Reidel.
- Earman, J., and J. Norton, 1993, “Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes,” Philosophy of Science, 60: 22–42.
- Egan, F., 1999, “In Defence of Narrow Mindedness.” Mind and Language, 14(2): 177–194.
- Egan, F., forthcoming, “A Modest Role For Content,” Studies in the History and Philosophy of Science.
- Etesi, G., and I. Németi, 2002, “Non-Turing Computations via Malament-Hogarth Spacetimes,” International Journal of Theoretical Physics, 41: 342–370.
- Feynman, R. P., 1982, “Simulating Physics with Computers,” International Journal of Theoretical Physics, 21(6–7): 467–488.
- Fodor, J. A., 1975, The Language of Thought, Cambridge, MA: Harvard University Press.
- Fodor, J. A., 1981, “The Mind-Body Problem,” Scientific American, 244: 114–125.
- Fodor, J. A., 2008, LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press.
- Fredkin, E., 1990, “Digital Mechanics: An Information Process Based on Reversible Universal Cellular Automata,” Physica D, 45: 254–270.
- Fresco, N., 2010, “Explaining Computation Without Semantics: Keeping It Simple,” Minds and Machines, 20: 165–181.
- Fuchs, C. A., 2004, “Quantum Mechanics as Quantum Information (and only a little more),” in Quantum Theory: Reconsiderations of Foundations, A. Khrennikov (ed.), Växjö, Sweden: Växjö University Press, pp. 463–543.
- Gödel, K., 1934, “On Undecidable Propositions of Formal Mathematical Systems,” in The Undecidable, M. Davis (ed.), Hewlett, NY: Raven, pp. 41–71.
- Gödel, K., 1936, “Über die Länge von Beweisen,” Ergebnisse eines mathematischen Kolloquiums, 7: 23–24.
- Gödel, K., 1946, “Remarks Before the Princeton Bicentennial Conference on Problems in Mathematics,” reprinted in Davis 2004b, pp. 84–88.
- Godfrey-Smith, P., 2009, “Triviality Arguments Against Functionalism,” Philosophical Studies, 145(2): 273–295.
- Grice, H. P., 1957, “Meaning,” The Philosophical Review, 66(3): 377–388.
- Hagar, A., and A. Korolev, 2007, “Quantum Hypercomputation – Hyper or Computation?,” Philosophy of Science, 74(3): 347–363.
- Hamkins, J. D., 2002, “Infinite Time Turing Machines,” Minds and Machines, 12: 521–539.
- Harman, G., 1987, “(Nonsolipsistic) Conceptual Role Semantics,” in New Directions in Semantics, E. Lepore (ed.), London: Academic Press, pp. 55–81.
- Hogarth, M. L., 1994, “Non-Turing Computers and Non-Turing Computability,” PSA 1994(1): 126–138.
- Hogarth, M. L., 2004, “Deciding Arithmetic Using SAD Computers,” British Journal for the Philosophy of Science, 55: 681–691.
- Jacquette, D., 1991, “The Myth of Pure Syntax,” in Topics in Philosophy and Artificial Intelligence, L. Albertazzi and R. Poli (eds.), Bozen: Istituto Mitteleuropeo di Cultura, pp. 1–14.
- Kieu, T. D., 2002, “Quantum Hypercomputation,” Minds and Machines, 12(4): 541–561.
- Kieu, T. D., 2003, “Computing the Noncomputable,” Contemporary Physics, 44: 51–71.
- Kieu, T. D., 2004, “A Reformulation of Hilbert's Tenth Problem through Quantum Mechanics,” Proceedings of the Royal Society A, 460(2045): 1535–1545.
- Kieu, T. D., 2005, “An Anatomy of a Quantum Adiabatic Algorithm that Transcends the Turing Computability,” International Journal of Quantum Information, 3(1): 177–183.
- Kleene, S. C., 1935, “A Theory of Positive Integers in Formal Logic,” American Journal of Mathematics, 57: 153–173 and 219–244.
- Kleene, S. C., 1952, Introduction to Metamathematics, Princeton: Van Nostrand.
- Klein, C., 2008, “Dispositional Implementation Solves the Superfluous Structure Problem,” Synthese, 165: 141–153.
- Leff, H. S. and A.F. Rex, (eds.), 2003, Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing. Bristol: Institute of Physics Publishing.
- Lloyd, S., 2006, Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. New York: Knopf.
- Lycan, W., 1981, “Form, Function, and Feel,” Journal of Philosophy, 78(1): 24–50.
- Machamer, P. K., Darden, L., and C. Craver, 2000, “Thinking About Mechanisms,” Philosophy of Science, 67: 1–25.
- Martin, C. B., 1997, “On the Need for Properties: The Road to Pythagoreanism and Back,” Synthese, 112(2): 193–231.
- Maudlin, T., 1989, “Computation and Consciousness,” Journal of Philosophy, 86(8): 407–432.
- Mills, J. W., 2008, “The Nature of the Extended Analog Computer,” Physica D: Nonlinear Phenomena, 237(9): 1235–1256.
- Németi, I., and G. Dávid, 2006, “Relativistic Computers and the Turing Barrier,” Journal of Applied Mathematics and Computation, 178(1): 118–142.
- Nielsen, M. A., 1997, “Computable Functions, Quantum Measurements, and Quantum Dynamics,” Physical Review Letters, 79(15): 2915–2918.
- Nielsen, M. A., and I. L. Chuang, 2000, Quantum Computation and Quantum Information. New York: Cambridge University Press.
- Norton, J. D., 2003, “Causation as Folk Science,” Philosophers' Imprint, 3(4): 1–22.
- Ord, T., 2006, “The Many Forms of Hypercomputation,” Applied Mathematics and Computation, 178(1): 143–153.
- Piccinini, G., 2004, “Functionalism, Computationalism, and Mental Contents,” Canadian Journal of Philosophy, 34(3): 375–410.
- –––, 2007a, “Computing Mechanisms,” Philosophy of Science, 74(4): 501–526.
- –––, 2007b, “Computational Modeling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?” Australasian Journal of Philosophy 85(1): 93–115.
- –––, 2008a, “Computation without Representation,” Philosophical Studies, 137(2): 205–241.
- –––, 2008b, “Computers,” Pacific Philosophical Quarterly, 89(1): 32–73.
- –––, 2008c, “Some Neural Networks Compute, Others Don't,” Neural Networks, 21(2–3): 311–321.
- –––, 2010, “The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism,” forthcoming in Philosophy and Phenomenological Research.
- –––, forthcoming, “The Physical Church-Turing Thesis: Modest or Bold?,” The British Journal for the Philosophy of Science.
- Piccinini, G. and A. Scarantino, forthcoming, “Information Processing, Computation, and Cognition,” Journal of Biological Physics.
- Pitowsky, I., 1990, “The Physical Church Thesis and Physical Computational Complexity,” Iyyun, 39: 81–99.
- Pitowsky, I., 2002, “Quantum Speed-Up of Computations,” Philosophy of Science, 69: S168–S177.
- Pour-El, M. B., 1974, “Abstract Computability and Its Relation to the General Purpose Analog Computer (Some Connections Between Logic, Differential Equations and Analog Computers),” Transactions of the American Mathematical Society, 199: 1–28.
- Pour-El, M. B., 1999, “The Structure of Computability in Analysis and Physical Theory: An Extension of Church's Thesis,” in Handbook of Computability Theory, E.R. Griffor (ed.), New York: Elsevier, pp. 449–471.
- Putnam, H., 1960, “Minds and Machines,” in Dimensions of Mind: A Symposium, S. Hook (ed.), New York: Collier, pp. 138–164; Reprinted in Putnam 1975a, pp. 362–386.
- Putnam, H., 1967, “Psychological Predicates,” in Art, Mind, and Religion, W.H. Capitan & D.D. Merrill (eds.), Pittsburgh, PA: University of Pittsburgh Press, pp. 37–48. Reprinted in Putnam 1975a as “The Nature of Mental States,” pp. 150–161.
- Putnam, H., 1975a, Philosophical Papers: Volume 2, Mind, Language and Reality, Cambridge: Cambridge University Press.
- –––, 1975b, “The Meaning of ‘Meaning’,” in Language, Mind and Knowledge. Minnesota Studies in the Philosophy of Science, vol. 7, K. Gunderson (ed.), Minneapolis: University of Minnesota Press, pp. 131–193. Reprinted in Putnam 1975a, pp. 215–271.
- Putnam, H., 1988, Representation and Reality, Cambridge, MA: MIT Press.
- Pylyshyn, Z. W., 1984, Computation and Cognition, Cambridge, MA: MIT Press.
- Quine, W. V. O., 1976, “Whither Physical Objects?” in Essays in Memory of Imre Lakatos, (Series: Boston Studies in the Philosophy of Science), R.S. Cohen, P.K. Feyerabend, & M.W. Wartofsky (eds.), Dordrecht: Reidel, pp. 497–504.
- Rubel, L. A., 1989, “Digital Simulation of Analog Computation and Church's Thesis,” Journal of Symbolic Logic, 54(3): 1011–1017.
- Rubel, L. A., 1993, “The Extended Analog Computer,” Advances in Applied Mathematics, 14(1): 39–50.
- Scheutz, M., 1999, “When Physical Systems Realize Functions …,” Minds and Machines, 9(2): 161–196.
- Scheutz, M., 2001, “Causal versus Computational Complexity,” Minds and Machines, 11(4): 534–566.
- Schonbein, W., 2005, “Cognition and the Power of Continuous Dynamical Systems,” Minds and Machines, 15(1): 57–71.
- Searle, J. R., 1992, The Rediscovery of the Mind, Cambridge, MA: MIT Press.
- Segal, G., 1991, “Defence of a Reasonable Individualism,” Mind, 100(400): 485–493.
- Shagrir, O., 2001, “Content, Computation and Externalism,” Mind, 110(438): 369–400.
- Shagrir, O., 2006, “Why We View the Brain as a Computer,” Synthese, 153(3): 393–416.
- Shagrir, O., and I. Pitowsky, 2003, “Physical Hypercomputation and the Church-Turing Thesis,” Minds and Machines, 13(1): 87–101.
- Shannon, C. E., and W. Weaver, 1949, The Mathematical Theory of Communication, Urbana: University of Illinois Press.
- Shapiro, L. A., 1997, “A Clearer Vision,” Philosophy of Science, 64(1): 131–153.
- Shor, P. W., 1994, “Algorithms for Quantum Computation: Discrete Logarithms and Factoring,” in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Los Alamitos, CA: IEEE Computer Society Press, pp. 124–134.
- Sieg, W., 2006, “On Computability,” in Philosophy of Mathematics (Handbook of the Philosophy of Science), A. Irvine (ed.) Amsterdam: Elsevier, pp. 535–630.
- Siegelmann, H. T., 1999, Neural Networks and Analog Computation: Beyond the Turing Limit, Boston: Birkhäuser.
- Smith, B. C., 2002, “The Foundations of Computing,” in Computationalism: New Directions, M. Scheutz (ed.), Cambridge, MA: MIT Press, pp. 23–58.
- Smith, W. D., 2006a, “Church's Thesis Meets the N-body Problem,” Applied Mathematics and Computation, 178(1): 154–183.
- Smith, W. D., 2006b, “Three Counterexamples Refuting Kieu's Plan for Quantum Adiabatic Hypercomputation; and Some Uncomputable Quantum Mechanical Tasks,” Applied Mathematics and Computation, 178(1): 184–193.
- Stannett, M., 1990, “X-Machines and the Halting Problem: Building a Super-Turing Machine,” Formal Aspects of Computing, 2(1): 331–341.
- Stich, S., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
- Toffoli, T., 1982, “Physics and Computation,” International Journal of Theoretical Physics, 21(3–4): 165–175.
- Turing, A. M., 1936–7, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 42(1): 230–265, reprinted in Davis 2004b, pp. 116–154.
- Turing, A. M., 1950, “Computing Machinery and Intelligence,” Mind, LIX(236): 433–460.
- Wheeler, J. A., 1982, “The Computer and the Universe,” International Journal of Theoretical Physics, 21(6–7): 557–572.
- Wimsatt, W. C., 2002, “Functional Organization, Analogy, and Inference”, in Functions: New Essays in the Philosophy of Psychology and Biology, A. Ariew, R. Cummins, and M. Perlman (eds.), Oxford: Oxford University Press, pp. 173–221.
- Wolfram, S., 2002, A New Kind of Science, Champaign, IL: Wolfram Media.
- Zuse, K., 1970, Calculating Space. Cambridge, MA: MIT Press.
- Zuse, K., 1982, “The Computing Universe,” International Journal of Theoretical Physics, 21(6–7): 589–600.
- Hodges, A., 2005, “Can quantum computing solve classically unsolvable problems?”
- Bibliography on Computation and Physical Systems, at the PhilPapers website.
- Hypercomputation Research Network