Computation in Physical Systems

First published Wed Jul 21, 2010; substantive revision Wed Jun 16, 2021

In our ordinary discourse, we distinguish between physical systems that perform computations, such as computers and calculators, and physical systems that don’t, such as rocks and raindrops. Among computing devices, we distinguish between more and less powerful ones. These distinctions are of both practical and theoretical importance: a supercomputer can compute in minutes what would take decades on a laptop, while a calculator cannot do certain things that a laptop can do, even in principle. What grounds these distinctions? What is the principled difference, if there is one, between a rock and a calculator, or between a calculator and a computer? Answering these questions is more difficult than it may seem.

In addition to our ordinary discourse, computation is central to many sciences. Computer scientists design, build, and program computers. But again, what counts as a computer? If a salesperson sold you an ordinary rock as a computer, you should probably get your money back. Again, what does the rock lack that a genuine computer has?

How powerful a computer can you build? Can you build a machine that computes anything you wish? Although it is often said that modern computers can compute anything (i.e., any function of natural numbers, or equivalently, any function of strings of letters from a finite alphabet), this is incorrect. Ordinary computers can compute only a tiny subset of these functions. Is it physically possible to do better? Which functions are physically computable? These questions are bound up with the foundations of physics.

Computation is also central to psychology and neuroscience, and perhaps other areas of biology. According to the computational theory of cognition, cognition is a kind of computation: the behavior of cognitive systems is causally explained by the computations they perform. In order to test a computational theory of something, we need to know what counts as a computation in a physical system. Once again, the nature of computation lies at the foundation of empirical science.

1. Abstract Computation and Concrete Computation

Computation may be studied mathematically by formally defining computational objects, such as algorithms and Turing machines, and proving theorems about their properties. The mathematical theory of computation is a well-established branch of mathematics. It deals with computation in the abstract, without regard to physical implementation.

By contrast, most uses of computation in science and ordinary practice deal with concrete computation: computation in physical systems such as computers and brains. Concrete computation is closely related to abstract computation: we speak of physical systems as running an algorithm or as implementing a Turing machine, for example. But the relationship between concrete computation and abstract computation is not part of the mathematical theory of computation and requires further investigation (cf. Curtis-Trudel forthcoming-a, for an argument that abstract and concrete computation cannot be given a unified account). Questions about concrete computation are the main subject of this entry. Nevertheless, it is important to bear in mind some basic mathematical results.

The most important notion of computation is that of digital computation, which Alan Turing, Kurt Gödel, Alonzo Church, Emil Post, and Stephen Kleene formalized in the 1930s. Their work arose from investigations into the foundations of mathematics. One crucial question was whether first-order logic is decidable—whether there is an algorithm that determines, for any given first-order formula, whether it is a theorem.

Turing (1936–7) and, separately, Church (1936) proved that the answer is negative: there is no such algorithm. To show this, they offered precise characterizations of the informal notion of an effectively computable function. Turing did so in terms of so-called Turing machines (TMs)—devices that manipulate discrete symbols written on a tape in accordance with finitely many instructions. Other logicians did the same thing—they formalized the notion of effectively computable function—in terms of other notions, such as λ-definable functions and general recursive functions.
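
To make the formalism concrete, here is a minimal sketch of a Turing machine interpreter in Python. The machine table below (a bit-flipping machine) and all names are invented for illustration; real TM definitions also specify conventions for the start state, blanks, and halting that are simplified here.

    # A minimal Turing machine sketch (illustrative only): run_tm executes
    # any instruction table; the 'invert' table flips every bit on the
    # tape and halts at the first blank.
    def run_tm(table, tape, state='q0', halt='halt'):
        cells = dict(enumerate(tape))          # sparse tape: position -> symbol
        pos = 0
        while state != halt:
            scanned = cells.get(pos, '_')      # '_' is the blank symbol
            write, move, state = table[(state, scanned)]
            cells[pos] = write
            pos += 1 if move == 'R' else -1
        return ''.join(cells[i] for i in sorted(cells))

    # Instruction table: (state, scanned symbol) -> (write, move, next state)
    invert = {
        ('q0', '0'): ('1', 'R', 'q0'),
        ('q0', '1'): ('0', 'R', 'q0'),
        ('q0', '_'): ('_', 'R', 'halt'),
    }
    print(run_tm(invert, '0110'))              # prints 1001_

Note that the interpreter itself takes the machine table as data: one fixed procedure executes any table it is handed, which foreshadows the universal machines discussed below.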

To their surprise, all such notions turned out to be extensionally equivalent: any function computable within any of these formalisms is computable within any of the others. They took this as evidence that their quest for a precise definition of “algorithm” or “effectively computable function” had been successful. The resulting view—that TMs and other equivalent formalisms capture the informal notion of algorithm—is now known as the Church-Turing thesis (more on this in Section 4). The study of computable functions, made possible by the work of Turing et al., is part of the mathematical theory of computation.

The theoretical significance of Turing et al.’s notion of computation can hardly be overstated. As Gödel pointed out (in a lecture following one by Tarski):

Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing’s computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen. (Gödel 1946, 84)

Turing also showed that there are universal TMs—machines that can compute any function computable by any other TM. Universal machines do this by executing instructions that encode the behavior of the machine they simulate. Assuming the Church-Turing thesis, universal TMs can compute any function computable by algorithm. This result is significant for computer science: you don’t need to build different computers for different functions; one universal computer suffices to compute any computable function. Modern digital computers are universal in this sense: they can compute any function computable by algorithm, so long as they have enough time and memory. (Strictly speaking, a universal machine has an unbounded memory, whereas digital computer memories can be extended but not indefinitely, so they are not unbounded.)

The above result should not be confused with the common claim that computers can compute anything. This claim is false: another important result of computability theory is that most functions are not computable by TMs (and hence, by digital computers). TMs compute functions defined over denumerable domains, such as strings of letters from a finite alphabet. There are uncountably many such functions. But there are only countably many TMs; you can enumerate TMs by listing each TM specification (which is some finite string). Since an uncountable infinity is much larger than a countable one, it follows that TMs (and hence digital computers) can compute only a tiny portion of all functions (over denumerable domains, such as natural numbers or strings of letters).
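
The cardinality gap can be made vivid with Cantor's diagonal construction; the following standard argument is not specific to any of the sources cited here.

    Let $f_0, f_1, f_2, \dots$ be any enumeration of functions
    $f_i\colon \mathbb{N} \to \{0,1\}$, and define $g(n) = 1 - f_n(n)$.
    Then $g(n) \neq f_n(n)$ for every $n$, so $g$ appears nowhere in
    the enumeration.

Since no countable list exhausts these functions, while the TMs can be listed by their finite specifications, some functions—indeed, all but countably many—have no TM that computes them.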

TMs and most modern computers are known as (classical) digital computers, that is, computers that manipulate strings of discrete, unambiguously distinguishable states. Digital computers are sometimes contrasted with analog computers, which manipulate variables that can be continuous. Continuous variables are variables that can change their value continuously over time while taking any value within a certain interval. Analog computers are used primarily to solve certain systems of differential equations (Pour-El 1974, Rubel 1993).

Classical digital computers may also be contrasted with quantum computers. Quantum computers manipulate quantum states called qudits (usually binary qudits, known as qubits). Unlike the computational states of digital computers, qudits are not unambiguously distinguishable from one another in certain important respects. This entry will focus primarily on classical digital computation. For more on quantum computation, see the entry on quantum computing.

The same objects studied in the mathematical theory of computation—TMs, algorithms, and so on—are typically said to be implemented by concrete physical systems. This poses a problem: how can a concrete, physical system perform a computation when computation is defined by an abstract mathematical formalism? This may be called the problem of computational implementation.

The problem of computational implementation may be formulated in a couple of different ways. Platonists interpret the formalisms of computability theory as defining abstract objects. According to this interpretation, TMs, algorithms, and the like are abstract objects. But how can a concrete physical system implement an abstract object, and what precisely is this implementation relation? Anti-platonists treat the formalisms of computability theory simply as abstract computational descriptions. But how can a concrete physical system satisfy an abstract computational description? Regardless of how the problem of computational implementation is formulated, solving it requires an account of concrete computation—an account of what it takes for a physical system to perform a given computation.

A closely related problem is that of distinguishing between physical systems such as digital computers, which do compute (if anything does), and physical systems such as rocks, which do not (or so it seems). Unlike computers, ordinary rocks are not sold in computer stores and are usually not considered computers. Why? What do computers have that rocks lack such that computers compute but rocks don’t? (If indeed they don’t?) In other words, what kinds of concrete systems can compute at all? Different answers to these questions give rise to different accounts of concrete computation.

Questions about the nature of concrete computation should not be confused with questions about computational modeling. The dynamical evolution of many physical systems may be approximated by computational models—computer programs that simulate or predict the dynamics or kinematics of some system of interest. The behavior of rocks—as well as rivers, ecosystems, and planetary systems, among many others—may well be modeled computationally. It does not follow that the modeled systems are computing devices—that they themselves perform computations. Prima facie, only relatively few and quite special systems compute. Explaining what makes them special—or explaining away our feeling that they are special—is the job of an account of concrete computation.

2. Accounts of Concrete Computation

The basic problem that any account of concrete computation must solve can be seen via the following diagram. Consider some computational system C (e.g., an abstract automaton) and a physical system P. System C has computational states s1 and s2 such that C transitions from s1 to s2 according to the specification of C. System P has physical states p1 and p2 such that P transitions from p1 to p2 according to the dynamics of P. The transitions among states of the same kind—the arrows in the diagram—are not in need of explanation: they are either stipulated, in the case of C, or well-understood physical dynamics, in the case of P. However, in order for P to count as an implementation of C, the putative connection between the states of the two systems (the dotted lines) must be explained. The following accounts offer different ways of characterizing this connection, either by making precise what the dotted lines represent, by narrowing the physical states or transitions that may constitute P, or both.

[Figure: a commutation diagram with four nodes, s1, s2, p1, p2. s1 sits above p1 and s2 above p2, each pair joined by a dotted line; an arrow points from s1 to s2, and another arrow points from p1 to p2.]

2.1 The Simple Mapping Account

One of the earliest and most influential accounts of computation is due to Hilary Putnam. To a first approximation, this account says that anything that is accurately described by a computational description C is a computing system implementing C.

More precisely, Putnam sketches his earliest account in terms of Turing machines only, appealing to the “machine tables” that are a standard way of defining specific TMs. A machine table consists of one column for each of the (finitely many) internal states of the TM and one row for each of the machine’s symbol types. Each entry in the machine table specifies what the machine does given the pertinent symbol and internal state. Here is how Putnam explains what it takes for a physical system to be a TM:

A ‘machine table’ describes a machine if the machine has internal states corresponding to the columns of the table, and if it ‘obeys’ the instruction in the table in the following sense: when it is scanning a square on which a symbol s1 appears and it is in, say, state B, that it carries out the ‘instruction’ in the appropriate row and column of the table (in this case, column B and row s1). Any machine that is described by a machine table of the sort just exemplified is a Turing machine. (Putnam 1960/1975a, 365; cf. also Putnam 1967/1975a, 433–4)

This account relies on several unexplained notions, such as square (of tape), symbol, scanning, and carrying out an instruction. Furthermore, the account is specified in terms of TM tables, but there are other kinds of computational description. A general account of concrete computation should cover other computational descriptions besides TM tables. Perhaps for these reasons, Putnam—soon followed by many others—abandoned reference to squares, symbols, etc., replacing them with an appeal to a physical description of the system. The result of that substitution is what Godfrey-Smith (2009) dubs the “simple mapping account” of computation.

According to the simple mapping account, a physical system S performs a computation defined by description C just in case (i) there is a mapping from the states ascribed to S by a physical description to the states defined by computational description C, such that (ii) the state transitions between the physical states mirror the state transitions between the computational states. Thus, suppose S has states p1 and p2 that map onto states s1 and s2 of C (respectively). Clause (ii) requires that, for any computational state transition of the form s1 → s2, whenever S goes into p1, it then transitions into p2.

One difficulty with the formulation above is that ordinary physical systems admit of uncountably many physical states, whereas ordinary computational descriptions, such as TM tables, consist of countably many states. Thus, there are not enough computational states for the physical states to map onto. One solution to this problem is to reverse the direction of the mapping, requiring a mapping of the computational states onto (a subset of) the physical states. Another, more common solution to this problem—often implicit—is to select either a subset of the physical states or equivalence classes of the physical states and map those onto the computational states. When this is done, clause (i) is replaced by the following: (i′) there is a mapping from a subset of (or equivalence classes of) the physical states of S to the states of computational description C.
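
As a toy illustration, the two clauses can be rendered as a check on an observed trace of physical states; the state names, the mapping, and the transition rule delta below are all invented for this sketch.

    # Toy rendering of the simple mapping account: 'mapping' plays the
    # role of clause (i'), sending (representatives of equivalence
    # classes of) physical states to computational states; the final
    # check is clause (ii).
    def implements(trace, mapping, delta):
        comp = [mapping[p] for p in trace]               # clause (i')
        return all(delta[s] == s_next                    # clause (ii)
                   for s, s_next in zip(comp, comp[1:]))

    delta = {'s1': 's2', 's2': 's1'}        # a two-state automaton
    mapping = {'p1': 's1', 'p2': 's2'}      # physical -> computational
    print(implements(['p1', 'p2', 'p1', 'p2'], mapping, delta))   # True

Note that the check inspects a single actual trace—a material conditional—which is precisely the weakness that the counterfactual, causal, and dispositional accounts of the next section aim to repair.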

The simple mapping account turns out to be very liberal: it attributes many computations to many systems. In the absence of restrictions on which mappings are acceptable, such mappings are relatively easy to come by. As a consequence, some have argued that every physical system implements every computation (Putnam 1988, Searle 1992). This thesis, which trivializes the claim that something is a computing system, will be discussed in Section 3.1. Meanwhile, the desire to avoid this trivialization result is one motivation behind other accounts of concrete computation.

2.2 Causal, Counterfactual, and Dispositional Accounts

One way to construct more restrictive accounts of computation is to further constrain acceptable mappings. As a reminder, the simple mapping account has it that, for any computational state transition of the form s1 → s2, if the system is in the physical state p1 that maps onto s1, it then goes into the physical state p2 that maps onto s2. This conditional in clause (ii) is a material conditional. The account may be strengthened by replacing it with a conditional expressing a relation that supports counterfactuals.

In a pure counterfactual account, clause (ii) is strengthened by requiring that the physical state transitions support certain counterfactuals (Block 1978, Maudlin 1989, Copeland 1996, Rescorla 2014, Campbell and Yang 2021). In other words, this account requires the mapping between computational and physical descriptions to be such that the counterfactual relations between the physical states are isomorphic to the counterfactual relations between the computational states.

Different authors formulate the relevant counterfactuals in slightly different ways: (a) if the system had been in a physical state that maps onto an arbitrary computational state (specified by the relevant computational description), it would have gone into a physical state that maps onto the relevant subsequent computational state (Maudlin 1989, 415); (b) if the system had been in a physical state p1 that maps onto s1, it would have gone into a physical state p2 that maps onto s2 (Copeland 1996, 341); (c) if the system were in a physical state p1 that maps onto s1, it would go into a physical state p2 that maps onto s2 (Chalmers 1996, 312). Regardless of the exact formulation, none of these counterfactuals follow from the merely material conditional of clause (ii) as it appears in the simple mapping account. Thus, counterfactual accounts are stronger than the simple mapping account.
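
One way to picture the strengthened requirement—under the simplifying assumption that counterfactual support comes from lawlike dynamics—is to demand that the mapping commute with the system's full dynamics rather than with one observed trace; in effect, the dotted lines of the diagram in Section 2 must commute with the arrows. The sketch below, with invented names, builds on the toy checker above.

    # Strengthened clause (ii), sketched: for every physical state the
    # system could occupy, the state it would transition into must map
    # onto the computational state required by the transition rule delta.
    def implements_robustly(physical_states, dynamics, mapping, delta):
        return all(mapping[dynamics[p]] == delta[mapping[p]]
                   for p in physical_states)

    dynamics = {'p1': 'p2', 'p2': 'p1'}   # what the system would do from each state
    delta    = {'s1': 's2', 's2': 's1'}
    mapping  = {'p1': 's1', 'p2': 's2'}
    print(implements_robustly(['p1', 'p2'], dynamics, mapping, delta))   # True

This is only one way of cashing out the isomorphism of counterfactual relations; the causal and dispositional accounts below would further require that the dynamics table reflect genuine causal or dispositional structure rather than mere regularity.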

In causal accounts, clause (ii) is strengthened by requiring a causal relation between the physical states: for any computational state transition of the form s1 → s2, the physical state p1 (which maps onto computational state s1) causes the system to go into physical state p2 (which maps onto computational state s2) (Chrisley 1995, Chalmers 1995, 1996, 2011, Scheutz 1999, 2001).

In order to avoid unlimited pancomputationalism (discussed in Section 3), Chalmers (1995, 1996, 2011) adds the further restriction that a physical implementation of a computational system must divide into separate physical components, each of which maps onto the components specified by the computational formalism. As Godfrey-Smith (2009, 293) notes, this combination of a causal and a localizational constraint goes in the direction of mechanistic explanation (Machamer, Darden, and Craver 2000). An account of computation that is explicitly based on mechanistic explanation will be discussed in Section 2.5. For now, the causal account requires only that the mappings between computational and physical descriptions be such that the causal relations between the physical states are isomorphic to the state transitions specified by the computational description. Thus, according to the causal account, concrete computation is the causal structure of a physical process.

Another account of concrete computation appeals to dispositional relations, assuming (as most people do) that dispositional relations support counterfactuals. Appealing to dispositions may have advantages over pure counterfactual accounts in blocking unwanted computational implementations (Klein 2008, 145, makes the case for dispositional versus counterfactual accounts).

In a dispositional account, clause (ii) is strengthened by requiring a dispositional relation between the physical states: for any computational state transition of the form s1 → s2 (specified by a computational description), the system is so constituted that, when it is in the physical state p1 that maps onto s1, it manifests a disposition whose manifestation is the transition from p1 to the physical state p2 that maps onto s2 (Klein 2008). In other words, the dispositional account requires the mapping between computational and physical descriptions to be such that the dispositional relations between the physical states are isomorphic to the state transitions specified by the computational description. Thus, according to the dispositional account, concrete computation is the dispositional structure of a physical process.

The difference between the simple mapping account and counterfactual, causal, and dispositional accounts may be seen by looking at an example.

Consider a rock under the sun, early in the morning. Over a given time interval, the rock’s temperature rises from T to T+1, then T+2, then T+3. Now consider a NOT gate that feeds its output back to itself. At first, suppose the NOT gate receives ‘0’ as an input; it then returns a ‘1’. After the ‘1’ is fed back to the NOT gate, the gate returns a ‘0’ again, and so on. The NOT gate goes back and forth between outputting a ‘0’ and outputting a ‘1’. Now map physical states T and T+2 onto ‘0’, and map T+1 and T+3 onto ‘1’.

According to the simple mapping account, the rock implements a NOT gate undergoing the computation represented by ‘0101’.
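
Replayed as a toy computation (temperatures and names invented), the example shows why the simple mapping account licenses this verdict:

    # The rock example replayed: map temperatures T, T+2 to '0' and
    # T+1, T+3 to '1', then check that the observed warming trace
    # mirrors the NOT gate's 0-1-0-1 cycle.
    T = 20                                          # invented morning temperature
    rock_trace = [T, T + 1, T + 2, T + 3]           # the rock warming up
    mapping = {T: '0', T + 1: '1', T + 2: '0', T + 3: '1'}
    not_gate = {'0': '1', '1': '0'}                 # NOT gate with output fed back

    comp_trace = [mapping[t] for t in rock_trace]   # ['0', '1', '0', '1']
    print(all(not_gate[s] == s_next
              for s, s_next in zip(comp_trace, comp_trace[1:])))   # True

The check passes on the one actual trace; the next paragraphs turn on the fact that nothing about the rock guarantees it would pass under different conditions.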

By contrast, according to the counterfactual account, the rock’s putative computational implementation is spurious, because the physical state transitions do not support counterfactuals. If the rock were put in state T, it might or might not transition into T+1, depending on whether it is morning or evening and on other extraneous factors. Since the rock’s physical state transitions that map onto the NOT gate’s computational state transitions do not support counterfactuals, the rock does not implement the NOT gate according to the counterfactual account.

According to the causal and dispositional accounts too, this putative computational implementation is spurious, because the physical state transitions are not due to causal or dispositional properties of the rock and its states. T does not cause T+1, nor does the rock have a disposition to go into T+1 when it is in T. Rather, the rock changes its state due to the action of the sun. Since the rock’s physical state transitions that map onto the NOT gate’s computational state transitions are not grounded in either the causal or dispositional properties of the rock and its states, the rock does not implement the NOT gate according to the causal and dispositional accounts.

It is important to note that under all of these mapping accounts, there are mappings between any physical system and at least some computational descriptions. Thus, according to mapping accounts, everything performs at least some computations (cf. Section 3.2). This strikes some as overly inclusive. In computer science and cognitive science, there seems to be a distinction between systems that compute and systems that do not. To preserve this distinction, one option is to move beyond mapping accounts of implementation.

2.3 The Semantic Account

In our everyday life, we usually employ computations to process meaningful symbols in order to extract information from them. The semantic account of computation turns this practice into a metaphysical doctrine: computation is the processing of representations—or at least, the processing of appropriate representations in appropriate ways. Opinions as to which representational manipulations constitute computation vary a great deal (Fodor 1975; Cummins 1983; Pylyshyn 1984; Churchland and Sejnowski 1992; Shagrir 2006, forthcoming). What all versions of the semantic account have in common is that they take seriously the reference to symbols in Putnam’s original account of computation: there is “no computation without representation” (Fodor 1981, 180).

The semantic account may be seen as imposing a further restriction on acceptable mappings. In addition to the causal restriction imposed by the causal account (mutatis mutandis for the counterfactual and dispositional accounts), the semantic account imposes a semantic restriction. Only physical states that qualify as representations may be mapped onto computational descriptions, thereby qualifying as computational states. If a state is not representational, it is not computational either.

The semantic account is probably the most popular in the philosophy of mind, because it appears to best fit what is required of a philosophical account of cognition. Since minds and digital computers are generally assumed to manipulate (the right kind of) representations, they turn out to compute. Since most other systems are generally assumed not to manipulate (the relevant kind of) representations, they do not compute. Thus, the semantic account appears to accommodate some common intuitions about what does and does not count as a computing system. It keeps minds and computers in while leaving almost everything else out, thereby vindicating the computational theory of cognition as a strong and nontrivial theory.

The semantic account raises three important questions. First, how are representations to be individuated? Second, what counts as a representation of the relevant kind? Finally, what gives representations their semantic content?

The main debate regarding the individuation of computational states divides internalists from externalists. According to externalists, computational vehicles are symbols individuated by their wide cognitive contents—paradigmatically, the things that the symbols stand for (Burge 1986, Shapiro 1997, Shagrir 2001). By contrast, most internalists maintain that computational vehicles are symbols individuated by narrow cognitive contents (Segal 1991). Narrow contents are, roughly speaking, semantic contents defined in terms of intrinsic properties of the system. Cognitive contents, in turn, are contents ascribed to a system by a cognitive psychological theory. For instance, the cognitive contents of the visual system are visual contents, whereas the cognitive contents of the auditory system are auditory contents.

To illustrate the dispute, consider two physically identical cognitive systems A and B. Among the symbols processed by A is symbol S. A produces instances of S whenever A is in front of bodies of water, when A is thinking of water, when A is forming plans to interact with water, and so on. In short, symbol S appears to stand for water. Every time A processes S, system B processes symbol S′, which is physically identical to S. But system B lives in an environment different from A’s environment. Whenever A is surrounded by water, B is surrounded by twin-water, which is a substance superficially indistinguishable, yet physically different, from water. Thus, symbol S′ appears to stand for twin-water (cf. Putnam 1975b). In summary, A and B live in relevantly different environments, such that S appears to stand for water while S′ appears to stand for twin-water; A processes S in the same way that B processes S′; and there is no intrinsic physical difference between A and B or between S and S′.

According to externalists, when A is processing S and B is processing S′, they are in computational states of different types. According to internalists, A and B are in computational states of the same type. In other words, externalists maintain that computational states are individuated in part by their reference, which is determined at least in part by physical properties external to the system. By contrast, internalists maintain that computational states are individuated in a way that depends solely on the intrinsic physical properties of cognitive systems.

So far, externalists and internalists agree on one thing: computational states are individuated by cognitive contents. This requirement can be relaxed without abandoning the semantic account of computation. According to Egan (1999), computational vehicles are not individuated by cognitive contents of any kind, whether wide or narrow. Rather, they are individuated by their mathematical contents—that is, mathematical functions and objects ascribed as semantic contents to the computational vehicles by a computational theory of the system. Since mathematical contents are the same across physical duplicates, Egan maintains that her mathematical contents are a kind of narrow content—she is a kind of internalist.

Let us now turn to what counts as a representation. This debate is less clearly delineated. According to some authors, only structures that have a language-like combinatorial syntax, which supports a compositional semantics, count as computational vehicles, and only manipulations that respect the semantic properties of such structures count as computations (Fodor 1975, Pylyshyn 1984). This suggestion flies in the face of both computability theory and analog computation (Maley forthcoming), neither of which imposes any such requirement on what counts as a computational vehicle.

Other authors are more inclusive about which manipulations of representations count as computations, but they have not been especially successful in drawing the line between computational and non-computational processes. Few would count all manipulations of representations—including, say, painting a picture or recording a speech—as computations, but there is no consensus on where to draw the boundary between representational manipulations that count as computations and those that do not.

A third question is what gives representations their semantic content. There are three families of views. Instrumentalists believe that ascribing semantic content to things is just heuristically useful for prediction and explanation; semantic properties are not real properties of computational states (e.g., Dennett 1987, Egan 2010). Realists who are not naturalists believe semantic properties are real properties of computational states, but they are irreducible to non-semantic properties. Finally, realists who are also naturalists believe semantic properties are both real and reducible to non-semantic properties, though they disagree on exactly how to reduce them (e.g., Fodor 2008, Harman 1987).

The semantic account of computation is closely related to the common view that computation is information processing. This idea is less clear than it may seem, because there are several relevant notions of information. The connection between information processing and computation is different depending on which notion of information is at stake. What follows is a brief disambiguation of the view that computation is information processing based on four relevant notions of information (cf. Piccinini and Scarantino 2011).

  1. Information in the sense of thermodynamics is closely related to thermodynamic entropy. Entropy is a property of every physical system. Thermodynamic entropy is, roughly, a measure of an observer’s uncertainty about the microscopic state of a system after she considers the observable macroscopic properties of the system. The study of the thermodynamics of computation is a lively field with many implications in the foundations of physics (Leff and Rex 2003). In this thermodynamic sense of “information,” any difference between two distinguishable states of a system may be said to carry information. Computation may well be said to be information processing in this sense, but this has little to do with semantics properly so called. However, the connections between thermodynamics, computation, and information theory are one possible source of inspiration for the view that every physical system is a computing system (see Section 3.4).
  2. Information in the sense of information theory is a measure of how improbable a given event is—the less likely the event, the more information its occurrence carries—and the mutual information of two random variables is a measure of the mutual dependence between the variables (Shannon and Weaver 1949). These measures can then be used to analyze and design robust communication systems that reliably transfer signals in the presence of noise. Computation may well be said to be information processing in this sense as well. Again, this is not explicitly about semantics. But mutual information can tell us how much information a variable carries about another, so mutual information may be the basis for a notion of semantic information carried by a variable about another (cf. Isaac 2019).
  3. Information in one semantic sense is approximately the same as “natural meaning” (Grice 1957). A signal carries information in this sense just in case it reliably correlates with a source (Dretske 1981). The view that computation is information processing in this sense is prima facie implausible, because many computations—such as arithmetical calculations carried out on digital computers—do not seem to carry any natural meaning. Nevertheless, this notion of semantic information is relevant here because it has been used by some theorists to ground an account of representation (Dretske 1981, Fodor 2008).
  4. Information in another semantic sense is just ordinary semantic content or “non-natural meaning” (Grice 1957). This is the kind of semantic content that most philosophers discuss. The view that computation is information processing in this sense is similar to a generic semantic account of computation.

Although the semantic account of computation appears to fit the needs of philosophers of mind, it appears less suited to making sense of other sciences. Most pertinently, representation does not seem to be presupposed by the notion of computation employed in at least some areas of cognitive science, as well as in computability theory and computer science—the very sciences that gave rise to the notion of computation and to the computational theory of cognition (Fresco 2010). If this is correct, the semantic account may not even be adequate to the needs of philosophers of mind—at least those philosophers of mind who wish to make sense of the analogy between minds and the systems designed and studied by computer scientists and computability theorists. Another criticism of the semantic account is that specifying the kind of representation and representational manipulation that is relevant to computation may require a non-semantic way of individuating computations (Piccinini 2004). These concerns motivate efforts to account for computation in non-semantic terms.

2.4 The Syntactic Account

As we saw, the semantic account needs to specify which representations are relevant to computation. One view is that the relevant representations are language-like, that is, they have the kind of syntactic structure exhibited by sentences in a language. Computation, then, is the manipulation of language-like representations in a way that is sensitive to their syntactic structure and preserves their semantic properties (Fodor 1975).

As discussed in the previous section, however, using the notion of representation in an account of computation involves some difficulties. If computation could be accounted for without appealing to representation, those difficulties would be avoided. One way to do so is to maintain that computation simply is the manipulation of language-like structures via the manipulation of their syntactic properties, leaving semantics by the wayside. The structures being manipulated are assumed to be language-like only in that they have syntactic properties—they need not have any semantics. In this syntactic account of computation, the notion of representation is not used at all.

The semantic account attempts to solve the problem of implementation by imposing a semantic restriction on acceptable mappings between physical and computational states; only physical states that count as (the right sort of) representations may be mapped onto computational states. The syntactic account may be seen as replacing this semantic restriction with a syntactic one: only physical states that qualify as syntactic may be mapped onto computational states, thereby qualifying as computational states. If a state lacks syntactic structure, it is not computational.

What remains to be seen is what counts as a syntactic state. An important account of syntax in the physical world is due to Stephen Stich (1983, 150–157). Although Stich does not use the term “computation,” his account of syntax is aimed at grounding a syntactic account of mental states and processes. Stich’s syntactic theory of mind is, in turn, his interpretation of the computational theories proposed by cognitive scientists—in competition with Fodor’s semantic interpretation. Since Stich’s account of syntax is ultimately aimed at grounding computational theories of cognition, Stich’s account of syntax also provides an (implicit) syntactic account of computation.

According to Stich, roughly speaking, a physical system contains syntactically structured objects when two conditions are satisfied. First, there is a mapping between the behaviorally relevant physical states of the system and a class of syntactic types, which are specified by a grammar that defines how complex syntactic types can be formed out of (finitely many) primitive syntactic types. Second, the behavior of the system is explained by a theory whose generalizations are formulated in terms of formal relations between the syntactic types that map onto the physical states of the system.
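
A schematic rendering of the first condition, with an invented grammar and invented physical-state names (Stich's own examples are linguistic, not electronic):

    # Stich's conditions, toy sketch. Condition 1: behaviorally relevant
    # physical states map onto syntactic types generated from finitely
    # many primitives. (Condition 2, that the theory's generalizations
    # are stated over these types, is not representable in a few lines.)
    PRIMITIVES = {'A', 'B'}

    def well_formed(t):           # grammar: any nonempty string of primitives
        return len(t) > 0 and set(t) <= PRIMITIVES

    state_to_type = {'volt-pattern-1': 'A', 'volt-pattern-2': 'AB'}
    print(all(well_formed(t) for t in state_to_type.values()))    # True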

The syntactic account of computation is not very popular. A common objection is that it seems difficult to give an account of primitive syntactic types that does not presuppose a prior semantic individuation of the types (Crane 1990, Jacquette 1991, Bontly 1998). In fact, it is common to make sense of syntax by construing it as a way to combine symbols, that is, semantically interpreted constituents. If syntax is construed in this way, it presupposes semantics. If so, the syntactic account of computation collapses into the semantic account.

Another objection is that language-like syntactic structure is not necessary for computation as it is understood in computer science and computability theory. Although computing systems surely can manipulate language-like structures, they don’t have to. They can also manipulate simple sequences of letters, without losing their identity as computers. (Computability theorists call any set of words from a finite alphabet a language, but that broad notion of language should not be confused with the narrower notion—inspired by grammars in logic and linguistics—that Stich employs in his syntactic account of computation.)

2.5 The Mechanistic Account

The mechanistic account (Piccinini 2007, 2015; cf. Kaplan 2011, Milkowski 2013, Fresco 2014, Duwell 2017, Coelho Mollo 2018, Curtis-Trudel forthcoming-b) avoids appealing to both syntax and semantics. Instead, it accounts for concrete computation in terms of the mechanistic properties of a system. According to the mechanistic account, concrete computing systems are functional mechanisms of a special kind—mechanisms that perform concrete computations.

A functional mechanism is a system of organized components, each of which has functions to perform (cf. Craver 2012, Garson 2013). When appropriate components and their functions are appropriately organized and functioning properly, their combined activities constitute the capacities of the mechanism. Conversely, when we look for an explanation of the capacities of a mechanism, we decompose the mechanism into its components and look for their functions and organization. The result is a mechanistic explanation of the mechanism’s capacities.

This notion of mechanism is familiar to biologists and engineers. Both explain the capacities of the systems in question (e.g., digestion or respiration for the biologist, memory access or multiplication for the computer engineer) in terms of the functions performed by systems of organized components (e.g., the digestive system and the respiratory system, or the memory system and the processor).

According to the mechanistic account, a computation in the generic sense is the processing of vehicles according to rules that are sensitive to certain vehicle properties and, specifically, to differences between different portions of the vehicles. The processing is performed by a functional mechanism, that is, a mechanism whose components are functionally organized to perform the computation. Thus, if the mechanism malfunctions, a miscomputation occurs.

Digital computation, analog computation, etc. turn out to be species of generic computation. They are differentiated by more specific properties of the vehicles being processed. If a computing system processes strings of discrete states, then it performs a digital computation. If a computing system processes variables that can be continuous, then it performs an analog computation (but see Maley, forthcoming, for an alternative account). If a computing system processes qubits, then it performs a quantum computation.

When we define concrete computations and the vehicles that they manipulate, we need not consider all of their specific physical properties, but only those properties that are relevant to the computation, according to the rules that define the computation. A physical system can be described more or less abstractly. According to the mechanistic account, an abstract description of a physical system is not a description of an abstract object but rather a description of a concrete system that omits certain details. Descriptions of concrete computations and their vehicles are sufficiently abstract as to be defined independently of the physical media that implement them in particular cases. Because of this, concrete computations and their vehicles are sometimes said to be ‘medium-independent’ or ‘substrate neutral’.

In other words, a vehicle is medium-independent just in case the rules (i.e., the input-output maps) that define the relevant computation are sensitive only to differences between portions of the vehicles along specific dimensions of variation—they are insensitive to any more concrete physical properties of the vehicles. Put yet another way, the rules are functions of state variables associated with a set of functionally relevant degrees of freedom, which can be implemented differently in different physical media. Thus, a given computation can be implemented in multiple physical media (e.g., mechanical, electro-mechanical, electronic, magnetic, etc.), provided that the media possess a sufficient number of dimensions of variation (or degrees of freedom) that can be appropriately accessed and manipulated and that the components of the mechanism are functionally organized in the appropriate way.
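
A toy illustration of medium-independence (everything here is invented): the same abstract rule—exclusive-or on two binary degrees of freedom—realized in two different media.

    # Medium-independence, sketched: the abstract rule cares only about
    # which of two values each input takes along one dimension of
    # variation, not about the concrete physics realizing that difference.
    def xor_rule(a, b):               # the abstract rule over two bits
        return a != b

    def volts_to_bit(v):              # medium 1: voltages (above 2.5 V counts as 1)
        return v > 2.5

    def mark_to_bit(m):               # medium 2: ink marks ('x' counts as 1)
        return m == 'x'

    print(xor_rule(volts_to_bit(0.0), volts_to_bit(5.0)))   # True
    print(xor_rule(mark_to_bit('o'), mark_to_bit('x')))     # True

Whether the relevant difference is realized by voltage levels or by ink marks is irrelevant to the rule, so long as the medium provides the required degrees of freedom and they can be appropriately accessed and manipulated.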

Like the semantic and syntactic accounts, the mechanistic account aims to avoid pancomputationalism. First, physical systems that are not functional mechanisms are ruled out. Functional mechanisms are complex systems of components that are organized to perform functions. Any system whose components are not organized to perform functions is not a computing system because it is not a functional mechanism. Second, mechanisms that lack the function of manipulating medium-independent vehicles are ruled out. Finally, medium-independent vehicle manipulators whose manipulations fail to accord with appropriate rules are ruled out. The second and third constraints appeal to special functional properties—manipulating medium-independent vehicles, doing so in accordance with rules defined over the vehicles—that are possessed only by relatively few physical systems. According to the mechanistic account, those few systems are the genuine computing systems.

Another feature of the mechanistic account is that it accounts for the possibility of miscomputation—a possibility difficult to make sense of under other accounts. To illustrate the point, consider an ordinary computer programmed to compute function f on input i. Suppose that the computer malfunctions and produces an output different from f(i). According to the causal (semantic) account, the computer just underwent a causal process (a manipulation of representations), which may be given a computational description and hence counts as computing some function g(i), where g ≠ f. By contrast, according to the mechanistic account, the computer simply failed to compute, or at least it failed to complete its computation correctly. Given the importance of avoiding miscomputations in the design and use of computers, the ability of the mechanistic account to make sense of miscomputation may be an advantage over rival accounts (Dewhurst 2018, Tucker 2018).

A final feature of the mechanistic account is that it precisely distinguishes and characterizes many different kinds of computing systems based on the specific vehicles they manipulate and their specific mechanistic properties. The mechanistic account has been used to explicate digital computation, analog computation, computation by neural networks, and other important distinctions such as hardwired vs. programmable and serial vs. parallel computation (Piccinini 2015).

3. Is Every Physical System Computational?

Which physical systems perform computations? According to pancomputationalism, they all do. Even rocks, hurricanes, and planetary systems—contrary to appearances—are computing systems. Counterintuitive though it may seem, this view has been endorsed by a number of philosophers and physicists.

3.1 Varieties of Pancomputationalism

Varieties of pancomputationalism vary along two dimensions. First, they vary with respect to how many computations—all, many, a few, or just one—they attribute to each system. We will call this the quantitative dimension. Second, they vary with respect to why these computations are attributable to physical systems. We will call this the source dimension.

The weakest version of pancomputationalism along the quantitative dimension is that every physical system performs at least one computation (Scheutz 2001). This version may be called limited pancomputationalism.

The strongest version of pancomputationalism is that every physical system performs every computation—or at least, every sufficiently complex system implements a very large number of non-equivalent computations (Putnam 1988, Searle 1992). This may be called unlimited pancomputationalism.

One purported source of pancomputationalism is that the computation(s) a system performs is a matter of relatively free interpretation. If the computation a system performs depends solely or primarily on how the system is interpreted, as opposed to objective fact, then it seems that everything computes because everything may be seen as computing (Searle 1992). This may be called interpretivist pancomputationalism.

Another purported source of pancomputationalism is that computation is simply the causal or dynamical structure of physical processes, as the causal account of computation (Section 2.2) maintains (Chrisley 1995, Chalmers 1995, 1996, Scheutz 1999, 2001). Assuming that everything has causal structure, it follows that everything performs the computation constituted by its causal structure. This may be called causal pancomputationalism.

A third purported source of pancomputationalism results from the idea that every physical state carries information, coupled with an information-based semantics and a liberal version of the semantic view of computation. According to the semantic view of computation, computation is the manipulation of representations. According to information-based semantics, a representation is anything that carries information. Assuming that every physical state carries information, it follows that every physical system performs the computations constituted by the manipulation of its information-carrying states (cf. Shagrir 2006).

The final purported source of pancomputationalism is that computation itself is the nature of the physical universe. According to some physicists, the physical world is computational at its most fundamental level; this is sometimes put in slogan form as the “it from bit” view (Wheeler 1989). We will discuss this view further in Section 3.4.

3.2 Unlimited Pancomputationalism

Arguments for unlimited pancomputationalism go back to Hinckfuss’s pail, a putative counterexample to computational functionalism—the view that the mind is the software of the brain. Hinckfuss’s pail is named after its proponent, Ian Hinckfuss, but was first discussed in print by William Lycan. A pail of water contains a huge number of microscopic processes:

Now is all this activity not complex enough that, simply by chance, it might realize a human program for a brief period (given suitable correlations between certain micro-events and the requisite input-, output-, and state-symbols of the program)? (Lycan 1981, 39)

Hinckfuss’s implied answer to this question is that yes, a pail of water might implement a human program (or any arbitrary computation), at least for a short time.

Other authors developed more detailed arguments along the lines of Hinckfuss’s pail. Searle (1992) explicitly argues that whether a physical system implements a computation depends on how an observer interprets the system; therefore, for any sufficiently complex object and for any computation, the object can be described as implementing the computation. The first rigorous argument for unlimited pancomputationalism is due to Putnam (1988), who argues that every ordinary open system implements every abstract finite automaton (without inputs and outputs).

Putnam considers what is essentially a generalization of the example of the rock from Section 2.2, which was an example of a simple finite automaton transitioning through a sequence of computational states without input or output. An arbitrary physical system that undergoes (possibly continuous) physical transitions can be discretized—its history sliced into time intervals and the resulting physical states aggregated into classes—such that its physical states map onto the computational states of an arbitrary automaton, thus mapping physical state transitions onto computational state transitions. Because the physical system and the automaton are arbitrary, every physical system implements every finite automaton.
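
The combinatorial core of the construction can be sketched in a few lines (all names invented); Putnam's actual argument works with disjunctions of a system's maximal physical states, but the trick is the same.

    # Putnam-style slice-and-aggregate, sketched: assign to each
    # (distinct) physical state whatever computational state the
    # automaton occupies at the same step. Nothing about the physics
    # constrains the assignment.
    def putnam_mapping(physical_trace, automaton_run):
        return dict(zip(physical_trace, automaton_run))

    physical_trace = ['rock@9am', 'rock@10am', 'rock@11am']   # any open system
    automaton_run  = ['s1', 's2', 's1']                       # any automaton's run
    print(putnam_mapping(physical_trace, automaton_run))
    # {'rock@9am': 's1', 'rock@10am': 's2', 'rock@11am': 's1'}

Because the physical states at different times are distinct (Putnam argues that ordinary open systems guarantee this), the mapping is well defined no matter which automaton run is chosen.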

This example does not include inputs and outputs (which are presumably necessary for computational explanations), so Putnam introduces a small change to allow for automata with inputs and outputs. The idea is similar, except that one must take into account an isomorphic mapping between physical states (considered as inputs and outputs) and the inputs and outputs of the relevant automaton. This restriction results in a weaker conclusion: instead of every physical system implementing every finite automaton, any given physical system implements a countably infinite number of finite automata (namely, those that produce the correct input/output pairs). This is because, for any arbitrary input/output pair <i, o>, there are countably infinitely many automata that produce output o when given input i (see the Appendix of Putnam 1988 for details).

If unlimited pancomputationalism is correct, then the claim that a system S performs a certain computation becomes trivially true and thus vacuous or nearly so; it fails to distinguish S from anything else (or perhaps from anything else with the same inputs and outputs). Thus, unlimited pancomputationalism threatens the utility of the computational theory of cognition. If cognition is computation simply because cognitive systems, like everything else, may be seen as performing more or less arbitrary computations over the relevant inputs, then it appears that the computational theory of cognition is both trivial and vacuous (though see Schweizer 2019 for an attempt to resist this conclusion). By the same token, unlimited pancomputationalism threatens some of the relevance of theoretical computer science to physical computation. For example, computational complexity theory classifies computational systems based on the complexity of their computations; if any physical system can implement any computational system regardless of its computational complexity, those results are immaterial to concrete computations. This threat of trivialization is a major motivation behind responses to the arguments for unlimited pancomputationalism.

Notably, arguments for unlimited pancomputationalism rely either implicitly or explicitly on the simple mapping account of computation. They assume that an arbitrary mapping from a computational description C to a physical description of a system is sufficient to conclude that the system implements C. In fact, avoiding unlimited pancomputationalism is a major motivation for rejecting the simple mapping account of computation. By imposing restrictions on which mappings are legitimate, other accounts of computation aim to avoid unlimited pancomputationalism.

In one response to unlimited pancomputationalism, Copeland (1996) argues that the mappings the thesis relies on are illegitimate because they are constructed ex post facto—after the computation is already given. In the case of genuine computation—the kind normally used in science—the work of generating successive descriptions of a system’s physical dynamics is done by a computer running an appropriate program (e.g., a weather forecasting program); this is the point of using a computer in the first place. If the physical dynamics were already known—which is required for an ex post facto computational description—then using a computer would be unnecessary.

In addition, both Chalmers (1995, 1996) and Copeland (1996) argue that the mappings invoked by unlimited pancomputationalism violate the counterfactual relations between the computational states, which are required by the counterfactual mapping account, mentioned above. Consider again Putnam’s slice-and-aggregate strategy for generating mappings. The mappings are constructed based on an arbitrary dynamical evolution of an arbitrary physical system. No attempt is made to establish what would happen to the physical system had conditions been different. Chalmers and Copeland argue that this is illegitimate, as a genuine implementation must exhibit the same counterfactual relations that obtain between the computational states. This response lends support to the counterfactual mapping account of computation.

Other responses to unlimited pancomputationalism point out that the state transitions invoked by Putnam-style mappings need not be causal in nature. The computational description Putnam relies on is mapped onto a physical description in such a way that the computational description does not capture the causal structure of the physical system. According to several authors, such non-causal mappings are illegitimate (Chrisley 1995, Chalmers 1995, 1996, 2011, Scheutz 1999, 2001). This type of response lends support to the causal mapping account of computation.

Yet another response to unlimited pancomputationalism is implicitly given by Godfrey-Smith (2009). Although Godfrey-Smith is primarily concerned with functionalism rather than computation, his argument is still relevant here. Godfrey-Smith argues that for a mapping to constitute a genuine implementation, the microscopic physical states that are clustered together (to correspond to a given computational state) must be physically similar to one another—there cannot be arbitrary groupings of arbitrarily different physical states, as in the arguments for unlimited pancomputationalism. Godfrey-Smith suggests that his similarity restriction on legitimate mappings may be complemented by the kind of causal and localizational restrictions proposed by Chalmers (1996).

The remaining accounts of computation—the semantic, syntactic, and mechanistic accounts—are even more restrictive than the causal and counterfactual accounts; they impose further constraints on acceptable mappings. Therefore, like the causal and counterfactual accounts, they have resources for avoiding unlimited pancomputationalism.

For example, consider the semantic account, according to which computation requires representation. If being a representation is an objective property possessed by relatively few things, then unlimited pancomputationalism is ruled out on the grounds that only the few items that constitute representations are genuine computational states. If everything is representational in the relevant way, however, then everything is computational (cf. Churchland and Sejnowski 1992, Shagrir 2006). If, in addition, whether something is a representation is a matter of interpretation, then the semantic account of computation gives rise to pancomputationalism insofar as the interpretation of those representations is unrestricted. Similar considerations apply to the syntactic and mechanistic accounts. For such accounts to avoid unlimited pancomputationalism, they must not rely on unrestricted interpretation.

3.3 Limited Pancomputationalism

Limited pancomputationalism is much weaker than its unlimited cousin. It holds that every physical system performs one computation (or at most relatively few), and that which computations a given system performs depends on objective properties of that system. In fact, several authors who have mounted detailed responses to unlimited pancomputationalism explicitly endorse limited pancomputationalism (Chalmers 1996, 331, Scheutz 1999, 191).

Unlike unlimited pancomputationalism, limited pancomputationalism does not turn the claim that something is computational into a vacuous one. Different systems generally have different objective properties; thus, according to limited pancomputationalism, different systems generally perform different computations. Nevertheless, it may seem that limited pancomputationalism still trivializes the claim that a system is computational. For according to limited pancomputationalism, digital computers perform computations in the same sense in which rocks, hurricanes, and planetary systems do. This seems at odds with computer science and engineering, as well as the computational theory of cognition. If every physical process is a computation, computer science must include every physical process in its domain of inquiry, and the computational theory of cognition—which was motivated in part by the idea that cognition is distinctively explained by computation—loses much of its explanatory force.

Another objection to limited pancomputationalism begins with the observation that any moderately complex system satisfies indefinitely many objective computational descriptions (Piccinini 2010). This may be seen by considering computational modeling. A computational model of a system may be constructed at higher or lower resolutions. For example, cellular automata models of the dynamics of a galaxy or a brain may be described using different state transition rules, different time steps, different numbers of states per cell, cells that represent spatial regions of different sizes, etc. Furthermore, an indefinite number of other formalisms, such as TMs, can be used to compute the same functions computed by cellular automata. It appears that limited pancomputationalists are committed to the claim that galaxies or brains perform all these computations at once. But that does not appear to be the sense in which computers (or brains) perform computations.

In the face of these objections, limited pancomputationalists are likely to maintain that the explanatory force of computational explanations does not come from the claim that a system is computational simpliciter. Rather, explanatory force comes from the specific computations that a system is said to perform. Thus, while a rock and a digital computer both perform computations, the fact that they perform radically different computations explains the difference between them. As to the objection that there are still too many computations performed by each system, limited pancomputationalists have two main options: either to bite the bullet and accept that every system implements indefinitely many computations, or to find a way to single out, among the many computational descriptions satisfied by each system, the one that is ontologically privileged—the one that captures the computation performed by the system. One way to do this is to postulate a fundamental physical level, whose most accurate computational description identifies the (most fundamental) computation performed by the system. This response is built into the view that the physical world is fundamentally computational (explored in the next section).

Those who wish to avoid limited pancomputationalism may shift to more restrictive accounts of computation, just as the desire to avoid unlimited pancomputationalism motivates the shift from the simple mapping account to more restrictive accounts such as the causal account. The semantic account may be able to restrict genuine computational descriptions to fewer systems than the causal account, provided that representations—which are needed for computation according to the semantic account—are hard to come by. Mutatis mutandis, the same is true of the syntactic and mechanistic accounts.

3.4 The Universe as a Computing System

Some authors argue that the physical universe is fundamentally computational. The universe itself is a computing system; therefore, everything in it is a computing system too (or a part thereof). Unlike the previous versions of pancomputationalism, which originate in philosophy, this ontic pancomputationalism originates in physics. It includes both an empirical claim and a metaphysical one. Although the two claims are logically independent, supporters of ontic pancomputationalism tend to make them both.

The empirical claim is that all fundamental physical magnitudes and their state transitions are such as to be exactly described by an appropriate computational formalism—without resorting to the approximations that are a staple of standard computational modeling. This claim takes different forms depending on which computational formalism is taken to describe the universe exactly. The two main options are cellular automata, which are a classical computational formalism, and quantum computing, which is non-classical.

The earliest and best-known version of ontic pancomputationalism is due to Konrad Zuse (1970, 1982) and Edward Fredkin, whose unpublished ideas on the subject influenced a number of American physicists (e.g., Feynman 1982, Toffoli 1982, Wolfram 2002; see also Wheeler 1982, Fredkin 1990). According to some of these physicists, the universe literally is a giant cellular automaton. A cellular automaton is a lattice of discrete cells; each cell can be in one of finitely many states, and each cell updates its state in discrete steps depending on the state of its neighboring cells. For the universe to be a cellular automaton, all fundamental physical magnitudes must be discrete, i.e., they must take at most finitely many values. In addition, time and space must be fundamentally discrete or must emerge from the discrete processing of the cellular automaton. At a fundamental level, continuity is not a real feature of the world—there are no truly real-valued physical quantities. This flies in the face of most mainstream physics, but it is not an obviously false hypothesis. The hypothesis is that at a sufficiently small scale, which is currently beyond our observational and experimental reach, (apparent) continuity gives way to discreteness. Thus, all values of all fundamental variables, and all state transitions, can be fully and exactly captured by the states and state transitions of a cellular automaton.
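For readers unfamiliar with the formalism, here is a minimal sketch of a one-dimensional cellular automaton in Python. It illustrates the definition just given, not the ontic thesis itself; the rule number (110), lattice size, and initial condition are arbitrary choices for the example.

    # A one-dimensional cellular automaton: a row of cells, each holding
    # 0 or 1; every cell updates simultaneously as a function of its own
    # state and its two neighbors' states (periodic boundary conditions).
    RULE = 110  # the 8-entry lookup table for a 3-cell neighborhood,
                # encoded as the bits of an integer

    def step(cells):
        """One synchronous update of the whole lattice."""
        n = len(cells)
        return [
            (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 15 + [1] + [0] * 15   # a single "live" cell
    for _ in range(10):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)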

Although cellular automata have been shown to describe many aspects of fundamental physics, it is difficult to see how to simulate the quantum mechanical features of the universe using a classical formalism such as cellular automata (Feynman 1982). This concern motivated the development of quantum computing formalisms (Deutsch 1985, Nielsen and Chuang 2000). Instead of relying on digits—most commonly, binary digits or bits—quantum computation relies on qudits—most commonly, binary qudits or qubits. The main difference between a digit and a qudit is that whereas a digit can take only one out of finitely many states, such as 0 and 1 (in the case of a bit), a qudit can also take an uncountable number of states that are a superposition of the basis states in varying degrees, such as superpositions of 0 and 1 (in the case of a qubit). Furthermore, unlike a collection of digits, a collection of qudits can exhibit quantum entanglement. According to the quantum version of ontic pancomputationalism, the universe is not a classical computer but a quantum computer—a computer that manipulates qubits (Lloyd 2006) or, more generally, qudits.
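To make the contrast between a bit and a qubit concrete, here is a small NumPy sketch (purely illustrative; the amplitudes are arbitrary example values):

    import numpy as np

    # A bit is in exactly one of two states. A qubit's state is a
    # normalized complex vector a|0> + b|1> with |a|^2 + |b|^2 = 1;
    # uncountably many such superpositions exist.
    a, b = 1 / np.sqrt(3), 1j * np.sqrt(2 / 3)
    qubit = np.array([a, b])
    assert np.isclose(np.vdot(qubit, qubit).real, 1.0)  # normalization

    # Measuring in the {|0>, |1>} basis yields 0 or 1 with probabilities:
    p0, p1 = abs(a) ** 2, abs(b) ** 2
    print(p0, p1)  # approximately 1/3 and 2/3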

The quantum version of ontic pancomputationalism is less radical than the classical version. The classical version eliminates continuity from the universe, primarily on the grounds that eliminating continuity allows classical computers to describe the universe exactly rather than approximately. Thus, the classical version appears to be motivated not by empirical evidence but by epistemological concerns. Although there is no direct evidence for classical ontic pancomputationalism, in principle it is a testable hypothesis (Fredkin 1990). By contrast, quantum ontic pancomputationalism may be seen as a reformulation of quantum mechanics in the language of quantum computation and quantum information theory (qubits), without changes to the empirical content of the theory (e.g., Fuchs 2004, Bub 2005).

But ontic pancomputationalists do not limit themselves to making empirical claims. They often make an additional metaphysical claim. They claim that computation (or information, in the physical sense described in Section 2.3) is what makes up the physical universe. This point is sometimes made by saying that, at the most fundamental physical level, there are brute differences between states—nothing more need or can be said about the nature of the states. This view reverses the traditional conception of the relation between computation and the physical world.

According to the traditional conception, which is presupposed by all accounts of computation discussed above, physical computation requires a physical medium, or substrate. Computation is an aspect of the organization and behavior of a physical system; there is no software without hardware. Thus, according to the traditional conception, if the universe is a cellular automaton, the ultimate constituents of the universe are the physical cells of the cellular automaton. It is legitimate to ask what kind of physical entity such cells are and how they interact with one another so as to satisfy their cellular automata rules.

By contrast, according to the metaphysical claim of ontic pancomputationalism, a physical system is just a system of computational states. Computation is ontologically prior to physical processes, as it were. “‘Hardware’ [is] made of ‘software’” (Kantor 1982, 526, 534), or “it” (the physical) comes from “bit” (the computational) (Wheeler 1989). According to this non-traditional conception, if the universe is a cellular automaton, the cells of the automaton are not concrete, physical structures that causally interact with one another. Rather, they are software—purely “computational” entities.

Such a metaphysical claim requires an account of what computation, or software, or physical information, is. If computations are not configurations of physical entities, the most obvious alternative is that computations are abstract, mathematical entities, like numbers and sets. As Wheeler (1982, 570) puts it, “the building element [of the universe] is the elementary ‘yes, no’ quantum phenomenon. It is an abstract entity. It is not localized in space and time”. Under this account of computation, the ontological claim of ontic pancomputationalism is a version of Pythagoreanism. All is computation in the same sense in which more traditional versions of Pythagoreanism maintain that all is number or that all is sets (Quine 1976).

Ontic pancomputationalism may be questioned on both the empirical and the ontological fronts. On the empirical front, there is little positive evidence to support ontic pancomputationalism. Supporters appear to be motivated by the desire for exact computational models of the world rather than empirical evidence that the models are correct. Even someone who shares this desire may well question why we should expect nature to fulfill it. On the metaphysical front, Pythagoreanism faces the objection that the abstract entities it puts at the fundamental physical level lack the causal and qualitative properties that we observe in the physical world—or at least, it is difficult to understand how abstract entities could give rise to physical qualities and their causal powers (e.g., Martin 1997).

4. Physical Computability

According to the Church-Turing thesis (CTT), any function that is intuitively computable is computable by some TM (i.e., Turing-computable). Alternatively, CTT may be formulated as the thesis that any function “naturally regarded as computable” (Turing 1936–7, 135) is Turing-computable. The phrases “intuitively computable” and “naturally regarded as computable” are somewhat ambiguous. When they are disambiguated, CTT takes different forms.

In one sense, “intuitively computable” means computable by following an algorithm or effective procedure. An effective procedure is a finite list of clear instructions for generating new symbolic structures out of old symbolic structures; an example would be dividing two numbers according to the long division algorithm, using only paper and pencil. When CTT is interpreted in terms of effective procedures, it may be called Mathematical CTT, because the relevant evidence is more logical or mathematical than physical. Mathematical CTT says that any function computable by an effective procedure is Turing-computable.
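For concreteness, here is a minimal Turing machine simulator sketched in Python; the transition table (a toy machine that appends a ‘1’ to a unary numeral, i.e., computes the successor function) is an arbitrary illustrative choice:

    # A Turing machine as a finite table: (state, symbol) ->
    # (symbol to write, head movement, next state).
    TABLE = {
        ("scan", "1"): ("1", +1, "scan"),   # move right across the input
        ("scan", "_"): ("1", +1, "halt"),   # write one more '1' and halt
    }

    def run(tape_string, state="scan", head=0):
        tape = dict(enumerate(tape_string))     # sparse tape; blank is "_"
        while state != "halt":
            symbol = tape.get(head, "_")
            write, move, state = TABLE[(state, symbol)]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape))

    print(run("111"))  # unary 3 in, unary 4 out: "1111"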

There is compelling evidence for Mathematical CTT (Kleene 1952, §62, §67; cf. also Sieg 2006):

  • There are no known counterexamples.
  • Diagonalization over Turing machines, contrary to what may be expected, does not yield an effectively computable function that is not Turing-computable.
  • The argument from confluence: all the formalisms proposed to capture the intuitive notion of computability by effective procedure—formalisms such as general recursiveness (Gödel 1934), λ-definability (Church 1932, Kleene 1935), Turing-computability (Turing 1936–7), and reckonability (Gödel 1936)—turn out to capture the same class of functions.
  • A Turing machine seems capable of reproducing any operation that a human being can perform while following an effective procedure (Turing 1936–7’s main argument for CTT).

In another sense, “intuitively computable” means computable by physical means. When CTT is so interpreted, it may be called Physical CTT (following Pitowsky 1990), because the relevant evidence is more physical than logical or mathematical.

4.1 The Physical Church-Turing Thesis: Bold

Physical CTT is often formulated in very strong forms. To a first approximation, Bold Physical CTT holds that any physical process—anything doable by a physical system—is computable by some TM.

Bold Physical CTT can be made more precise in a number of ways. Here is a representative sample, followed by references to where they are discussed:

  (A) Any physical process can be simulated by some TM (e.g., Deutsch 1985, Wolfram 1985, Pitowsky 2002).
  (B) If a physical system can be modeled by a certain kind of idealized computing machine that manipulates real-valued quantities, then that physical system can only compute Turing-computable functions on denumerable domains (Blum et al. 1998 show this to be false).
  (C) Any system of equations describing a physical system gives rise to computable solutions (cf. Earman 1986, Pour-El 1999). A solution is said to be computable just in case, given computable real numbers as initial conditions, it returns computable real numbers as values (where, following Turing, a real number is said to be computable just in case there is a TM whose output produces any desired number of digits of that number).
  (D) For any physical system S and observable W, there is a Turing-computable function f: N → N such that for all times t ∈ N, f(t) = W(t) (Pitowsky 1990).

Thesis (A) is ambiguous between two notions of simulation. In one sense, simulation is the process by which a digital computing system (such as a TM) computes the same function as another digital computing system. This is the sense in which universal TMs can simulate any other TM. If (A) is interpreted using this first notion of simulation, it entails that everything in the universe is a digital computing system. This is (a variant of) ontic pancomputationalism (Section 3.4).

In another sense, simulation is the process by which the output of a digital computing system represents an approximate description of the dynamical evolution of another system. This is the sense in which computational models of the weather simulate the weather. If (A) is interpreted using this second notion of simulation, then (A) is true only if we do not care how close our computational approximations are. If we want close computational approximations—as we usually do—then (A) turns into the claim that any physical process can be computationally approximated to the degree of accuracy that is desired in any given case. Whether that is true varies from case to case depending on the dynamical properties of the system, how much is known about them, what idealizations and simplifications are adopted in the model, what numerical methods are used in the computation, and how many computational resources (such as time, processing speed, and memory) are available (Piccinini 2015, Chap 4).
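As a toy illustration of this second notion of simulation, here is a Python sketch in which a digital computer approximates a continuous dynamical process, with accuracy that depends on the computational resources spent; the system (exponential decay) and the step sizes are arbitrary choices:

    import math

    # Euler's method approximating dx/dt = -x with x(0) = 1, whose exact
    # solution is x(t) = exp(-t). Smaller steps cost more computation and
    # buy a closer approximation.
    def euler(step, t_end=1.0):
        x = 1.0
        for _ in range(round(t_end / step)):
            x += step * (-x)
        return x

    exact = math.exp(-1.0)
    for step in (0.1, 0.01, 0.001):
        print(step, abs(euler(step) - exact))  # the error shrinks with the step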

Thesis (B) is straightforwardly and radically false, as shown by Blum et al. (1998). Blum et al. set up a mathematical theory of computation over real-valued quantities, which they see as a fruitful extension of ordinary computability theory. Within such a theory, they define idealized “computing” machines that perform addition, subtraction, multiplication, division, and equality testing as primitive operations on arbitrary real-valued quantities. They easily prove that such machines can decide all sets defined over denumerable domains, by encoding each set’s characteristic function as a real-valued constant (ibid., 405). Although they do not discuss this result as a refutation of Physical CTT, their work is often cited in discussions of physical computability and Physical CTT.

Theses (C) and (D) have interesting counterexamples that are consistent with some physical theories (cf. below and Pour-El 1999). These theoretical counterexamples may or may not occur in our concrete physical universe.

Each of (A)–(D) raises important questions pertaining to the foundations of computer science, physics, and mathematics. It is not clear, however, that any of these theses bears an interesting analogy to Mathematical CTT. Below are two reasons why.

First, (A)–(D) are falsified by processes that cannot be built and used as computing devices. The most obvious example is (B). Blum et al.’s result is equivalent to demonstrating that all functions over denumerable domains—including the uncountably many functions that are not Turing-computable—are computable by Blum et al.’s “computing” systems, which are allowed to manipulate the exact values of arbitrary real numbers. Hence, (B) is radically false. But at the same time, this result has no direct practical applications, as there is no evidence that concrete physical systems can manipulate arbitrary real-valued quantities in the way that Blum et al.’s systems do.

More generally, formulations (A)–(D) would be falsified by a sequence generated by a random (i.e., nondeterministic) physical process. According to quantum mechanics, some physical processes—such as atom decay—contain an objectively random element. Hidden variable interpretations dispute this, but the possibility of genuine randomness is sufficiently plausible that it should be taken into account.

Consider a random process that produces discrete outputs over an infinite period of time, e.g., the decay of atoms from a radioactive sample. Its output at regular time intervals is a string of digits—‘0’ if no atoms decay during a time interval, ‘1’ if one or more atoms decay during a time interval. A simple consideration shows that, with probability one, the sequence produced by our random process is not Turing-computable. There are uncountably many infinite strings of digits (even more strongly, there are uncountably many infinite strings of digits with any given limiting frequency of ‘0’s and ‘1’s). But there are only countably many Turing-computable infinite strings. Therefore, assuming that each infinite string has the same probability of occurring as a result of a random process, the probability that a random process would generate a Turing-computable string of digits is zero, whereas the probability that the string of digits is not Turing-computable is one (for the stronger conclusion that no such string of digits is Turing-computable, see Calude & Svozil 2008). Thus, simply by using a truly random process to generate a string, we would have the physical means to go beyond what is Turing-computable. As Turing (1950, 438–439) pointed out, a machine with a “random element” can do “more” than a TM. But doing more is not the same as computing more. Contrary to what is sometimes suggested (e.g., Calude 2005, 10), the combination of a TM and a random process does not threaten Physical CTT.
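The probabilistic claim is a standard measure-theoretic observation, written out here for definiteness. Under the uniform (fair-coin) measure \mu on infinite binary sequences, every singleton has measure zero; since the Turing-computable sequences form a countable set that can be enumerated as s_1, s_2, ...,

    \mu(\{s_1, s_2, \dots\}) \;\le\; \sum_{i=1}^{\infty} \mu(\{s_i\}) \;=\; 0,

so the complementary set of Turing-uncomputable sequences has measure one.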

Random processes should not count as computations: unlike computations properly so called, the outputs of a random process cannot, by definition, be the values of a function defined over its inputs. Random processes can be exploited by a computation, of course—there are important computational techniques that rely on random or pseudo-random choices at some stages of a computation. If some quantum random sequences are random in the sense of algorithmic information theory, they may even raise the probability of obtaining correct solutions from computational techniques that rely on random choices (Calude 2005, 10). But no computational technique can amount to a mere sequence of random choices. So, any thesis, such as Bold Physical CTT, that would be falsified by a sequence generated by a random process is too strong to capture the notion of physical computability—the physical analogue of algorithmic computability. Thus, contrary to what some authors seem to assume, Bold Physical CTT is too strong to be a physical analogue of Mathematical CTT.

Another feature that theses (A)–(D) have in common is that they appeal quite freely to arbitrary real-valued quantities. This is explicit in (B). To see why they all appeal to arbitrary real-valued quantities, consider that most physical theories assume that many physical magnitudes, including time, can take arbitrary real numbers as values. Hence, systems of physical equations, whose simulations, solutions, or observables are appealed to, respectively, by (A), (C), and (D), involve arbitrary real numbers. Therefore, all formulations of Bold Physical CTT involve, explicitly or implicitly, arbitrary real numbers.

The expressive power of real numbers may be used to generate a simple recipe for obtaining the values of a Turing-uncomputable function. Consider that the binary expansion of a real number contains countably many digits. Hence, for any characteristic function (i.e., a function whose values are ‘0’ or ‘1’) defined over a countable domain, including every Turing-uncomputable such function, there is a real number whose binary expansion encodes its values: for each n, the nth digit of the binary expansion may simply be defined to be the nth value of the function.
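In symbols, the construction is this: given a characteristic function h, define

    r_h \;=\; \sum_{n=1}^{\infty} h(n)\, 2^{-n},

so that the nth digit of the binary expansion of r_h is exactly h(n); recovering a value of h is just reading off one digit of r_h.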

Suppose you wish to know the value of a specific Turing-uncomputable characteristic function, such as the halting function for TMs, for its nth argument. Take the real number r whose binary expansion encodes the values of the halting function. Then, obtain the value of the nth digit in the binary expansion of r and you have the result you desire. If you can do this, you may obtain any value of any characteristic function defined over strings, including all Turing-uncomputable such functions.

The above recipe, if feasible, is a trivial consequence of the expressive power of real numbers. Yet it is discussed in the literature as an example of (perhaps) physically possible computation beyond the power of TMs (e.g., Copeland 2000)—something that would falsify Physical CTT. There is no reason to believe such a recipe can be implemented, because it requires either the measurement or the preparation of Turing-uncomputable real-valued quantities with unbounded precision. There is no evidence that either is doable.

By relying quite freely on arbitrary real-valued quantities, many versions of Bold Physical CTT make themselves vulnerable to falsification by relatively trivial (though presumably unfeasible) recipes such as the one just mentioned. This casts doubt on the assumption—made by some who seek ways to falsify Bold Physical CTT—that Bold Physical CTT is actually an interesting physical analogue of Mathematical CTT.

(The point just made does not impugn analog computation in the standard sense of Pour-El (1974). Analog computation does not manipulate exact values of arbitrary real-valued quantities but rather continuous variables. Although the continuous variables used in most analog computations may be assumed to take any real value within a relevant interval, concrete analog computation requires only the manipulation and measurement of real variables within some degree of approximation. So, analog computation does not falsify Bold Physical CTT (cf. Rubel 1989).)

Similar doubts about the alleged analogy between Mathematical CTT and Bold Physical CTT may be generated by a related observation. Many current physical theories assume that nature contains real-valued quantities that vary along a continuum. These may include the velocity of objects, the coordinates that define the position of their center of gravity in the spacetime manifold, and more. If these physical theories are correct, then many properties of many entities take arbitrary real numbers as their values. And most real numbers in any continuous interval are Turing-uncomputable (i.e. there are only countably many computable numbers, but any continuous interval contains uncountably many real numbers). Thus, the probability is zero that an arbitrary real-valued quantity is computable. Hence, if our physical theories are correct, most transformations of the relevant physical properties are transformations of Turing-uncomputable quantities into one another.

For instance, an object’s change of speed, or even its simple change of spatial location, may be transformations of one Turing-uncomputable real-valued quantity into another. A transformation of one Turing-uncomputable value into another Turing-uncomputable value is a Turing-uncomputable operation—if nothing else, because no TM can write down the input and output of such an operation. On this picture, the physical world is chock-full of operations that outstrip the power of TMs, thus Bold Physical CTT is falsified.

There is no reason to believe that we can use the Turing-uncomputable operations just mentioned to compute in the epistemological sense that motivates our interest in computation in the first place—to solve problems, to generate the values of desired functions for desired arguments, to understand certain physical systems, and so on. In other words, the operations in question should not count as computations. Bold Physical CTT, which is threatened by such operations, is not interestingly analogous to Mathematical CTT.

To conclude our discussion of Bold Physical CTT, it may be helpful to distinguish the issue of physical computability proper—the issue that pertains to the physical analogue of Mathematical CTT—from other issues that connect computability and physics. Many questions about the relationship between physical processes and computability deserve to be asked. What are the physically computable functions? This is the question that should motivate Physical CTT. What can be computationally approximated to what degree under what circumstances? This is presumably what (A) is after. As important and interesting as this question is, it is different from the question of what can be physically computed. Yet other questions motivate theses (B)-(D) above as well as other versions of Bold Physical CTT to be found in the literature. Many of these questions are interesting and deserve to be investigated. Nevertheless, they do not properly belong in discussions of CTT, because they are different from the computability question that motivates CTT in the first place.

4.2 The Physical Church-Turing Thesis: Modest

In the literature on computation in physical systems, there is growing concern that a physical analogue of Mathematical CTT should include only usable physical processes (e.g., Németi and Dávid 2006; Ord 2006; Smith 2006a; Beggs and Tucker 2007). Given this concern, an adequate version of Physical CTT ought to be more modest than Bold Physical CTT.

Modest Physical CTT is thus formulated in terms of what can be physically computed—more precisely, which functions over a denumerable domain can be physically computed. Prototypical examples of physical computation are the processes of ordinary digital computers and their components, including digital circuits. These processes can be exhaustively described by effective procedures, which are already covered by Mathematical CTT, plus relatively uncontroversial mappings between physical states and computational states. Mathematical CTT says that any function computable by an effective procedure is computable by a TM. As TMs can be physically implemented (or replaced by a computing human), any process that follows an effective procedure is physically computable.

But physical computation in the present sense is a broader notion than computation by effective procedure. A process may count as a physical computation even if there is no effective procedure for describing the process, perhaps because there are no finite instantaneous descriptions of the internal states that constitute the process or no way to finitely and exactly specify the transition from one instantaneous description to the next. Thus, Modest Physical CTT is stronger than Mathematical CTT.

Gandy (1980) was one of the first to discuss a version of Modest Physical CTT, arguing that mechanisms with discrete components and reasonable physical limitations (a lower bound on the size of components, and an upper bound on the speed of their kinematics) can only physically compute Turing-computable functions. However, Modest Physical CTT need not be limited in these ways. In addition to physical processes that follow effective procedures, Modest Physical CTT may cover continuous dynamical processes (as in certain kinds of neural networks), processes that span large portions of spacetime, and quantum processes (as in quantum computing). But physical computation does not include all physical processes.

In accordance with this proposal, Modest Physical CTT holds that any function (of a denumerable domain) that is physically computable is Turing-computable.

For a process to count as a physical computation, and hence for it to be relevant to Modest Physical CTT, it must be usable by an observer to generate the desired values of an independently specified function. This requirement may be spelled out in terms of a number of constraints (Piccinini 2015, Chaps 15 and 16). This list is not intended to be exhaustive:

  1. Readable inputs and outputs. The process must take inputs and yield outputs that observers can read without error, so that observers can use the outputs as solutions to problems or values of functions defined over the inputs. For that to be possible, presumably the inputs and outputs need to be reducible to strings of discrete states, like the inputs and outputs of ordinary digital computers.
  2. Process-independent rule. There must be a fixed rule or map—specifiable independently of the physical process—that links the outputs to the inputs. By defining the problem to be solved by the process, this rule tells the user what she is going to learn by running the process. Since this is a characterization of physical computation in general, the rule need not be recursive. For instance, it may be the rule that defines the halting problem for TMs. But, like any recursive rule, the rule must be the same for all inputs that belong to the same problem; it cannot change from one input to the next.
  3. Repeatability. The process must be repeatable, at least in principle, so as to allow users to obtain the same results multiple times and to check a computation by repeating it.
  4. Settability. The system undergoing the process must be settable, so that a user can choose which argument of a function the system computes and set the system to compute the relevant value of the function.
  5. Physical constructibility. The system must be physically constructible.
  6. Reliability. The system must not break down before the process is completed.

In summary, Modest Physical CTT asserts that every function that can be physically computed, i.e., every usable transformation of input strings into output strings in accordance with a process-independent rule defined over the strings, is Turing-computable.

Since Modest Physical CTT is restricted by epistemologically relevant criteria, it doesn’t raise the worries associated with Bold Physical CTT—namely, that it’s too easy to falsify and irrelevant to the notion of computability that motivates CTT. And there are good reasons to believe Modest Physical CTT. All computing mechanisms that have been physically built or are in the process of being built to compute functions of a denumerable domain compute only functions that are Turing-computable.

It is important to understand the exact scope of Modest Physical CTT. Modest Physical CTT does not entail that every physical process is a computation, or that every physical system is a computing system. It only says that if something physical computes functions of a denumerable domain, then the functions it computes are Turing-computable.

To fully assess Modest Physical CTT, we should consider whether it is possible to build a machine that, like an ordinary digital computer, can be used by human observers, but, unlike an ordinary digital computer, generates the values of functions that are Turing-uncomputable. In recent years, several designs for hypercomputation have been proposed. Hypercomputation is the computation of Turing-uncomputable functions. If hypercomputation turned out to be physically possible, it would refute Modest Physical CTT.

4.3 Hypercomputation

To a first approximation, a hypercomputer is a system that yields the values of a function that is not Turing-computable. If what counts as yielding the values of a function is left unspecified, any of the systems discussed in Section 4.1, such as genuinely random processes and systems that manipulate arbitrary real-valued quantities, would count as hypercomputers. But in discussing Bold Physical CTT, we saw that yielding the values of a function that is Turing-uncomputable, without further constraints, is not enough for genuine physical computation.

By analogy with the distinction between Bold Physical CTT and Modest Physical CTT, let us distinguish between a weak and a strong notion of hypercomputation by distinguishing usable hypercomputers from unusable hypercomputational processes.

An unusable hypercomputational process is a physical process that fails to satisfy at least one of the first four constraints on physical computation. Examples include processes whose inputs or outputs are arbitrary real-valued quantities (which cannot be read with infinite precision) and genuinely random processes (which have no rule characterizing the inputs and outputs independently of the process and are neither repeatable nor settable). These processes are not usable because an observer cannot obtain from them arbitrary values of an independently defined function on a chosen input, as ordinary computing systems can (given enough time and space). Since they are not usable, unusable hypercomputational processes are irrelevant to Modest Physical CTT.

A usable hypercomputer is a physical system that satisfies at least the first four constraints on physical computation. It has readable inputs and outputs, there is a rule characterizing its input-output properties that may be defined independently of the process itself, and its processes are repeatable and settable. If a system does not satisfy one of these conditions, it does not count as computing in the sense relevant to Modest Physical CTT.

A system that satisfies these conditions—a usable hypercomputer—may be purely notional. For instance, infinitely accelerating TMs (Copeland 2002) are TMs that perform each computational operation in half the time of the previous operation. As a consequence, infinitely accelerating TMs complete an infinite number of operations (a supertask) within twice the time it takes them to perform their first operation. This allows infinitely accelerating TMs to compute functions, such as the halting function, that are Turing-uncomputable. But infinitely accelerating TMs are usually discussed as notional entities, without evidence that they can be constructed. Purely notional systems, of course, do not falsify Modest Physical CTT. To do that, a system must satisfy at least the fifth and sixth constraints on physical computation: it must be physically constructible, and it must operate reliably.
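The timing claim is a geometric series: if the first operation takes time t and each subsequent operation takes half as long as its predecessor, then all infinitely many operations complete within

    \sum_{n=0}^{\infty} \frac{t}{2^{n}} \;=\; 2t.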

One way to construct something like an infinitely accelerating TM would be to make a computing machine that, after performing some computational operations, builds a smaller and faster copy of itself (Davies 2001). The smaller and faster copy will also perform some operations and then build a faster and smaller copy, and so on. Given appropriate assumptions, the resulting series of infinitely shrinking machines will be able to complete an infinite number of computational operations within a finite time, thereby surpassing the power of TMs. While infinitely shrinking machines appear to be consistent with Newtonian mechanics, Davies (2001, 672) points out that the atomic and quantum mechanical nature of matter in our universe makes infinitely shrinking machines physically impossible.

Neural networks have sometimes been discussed as computing systems that may go beyond Turing-computability (e.g., Smolensky 1988). If we restrict our attention to classes of neural networks that contain all systems with current or foreseeable practical applications, this opinion is unwarranted. There is now a considerable literature on the computational and complexity properties of large classes of neural networks. The most relevant systems have digital inputs and outputs (so as to satisfy our first constraint on physical computation) but may have, and typically do have, non-digital internal processes (i.e., their internal processes are not discrete step-by-step manipulations of strings of digits). The main results are the following: (i) feedforward networks with finitely many processing units are computationally equivalent to Boolean circuits with finitely many gates; (ii) recurrent networks with finitely many units are equivalent to finite state automata; (iii) networks with unbounded tapes or an unbounded number of units are equivalent to TMs (Šíma & Orponen 2003).
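As a toy illustration of result (i), here is a single threshold (“McCulloch-Pitts”) unit in Python computing a Boolean gate; the weights and threshold are arbitrary example values, and networks of such units compute exactly what circuits of the corresponding Boolean gates compute:

    # A threshold unit outputs 1 iff the weighted sum of its binary
    # inputs reaches the threshold. Weights (1, 1) with threshold 2
    # compute AND; threshold 1 would compute OR instead.
    def threshold_unit(inputs, weights, theta):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, threshold_unit(x, weights=(1, 1), theta=2))  # AND gate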

Neural networks more powerful than TMs may be defined, however, by exploiting once again the expressive power of real numbers. The best known networks of this kind are Analog Recurrent Neural Networks (ARNNs) (Siegelmann 1999). ARNNs should not be confused with analog computers in the traditional sense (Pour-El 1974, Rubel 1993, Mills 2008). Whereas analog computers manipulate real variables without relying on the exact value of arbitrary real-valued quantities, ARNNs manipulate strings of digits by (possibly) relying on the exact value of arbitrary real-valued quantities. Specifically, the weights connecting individual processing units within ARNNs can take exact values of arbitrary real numbers, including values that are Turing-uncomputable, and such exact values are exploited in the computation. When their weights are Turing-uncomputable, ARNNs can go beyond the power of Turing machines: they can compute any function over binary strings. But ARNNs more powerful than TMs cannot operate reliably, because they require unboundedly precise weights. In addition, the required weights are Turing-uncomputable, so constructing ARNNs more powerful than TMs requires already possessing the ability to perform hypercomputations (Davis 2004a, Schonbein 2005, Siegelmann 1999, 148).
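To see why unbounded precision matters, consider a toy sketch (not Siegelmann’s actual construction): a single real-valued “weight” encodes a characteristic function h in its binary expansion, so reading h(n) requires resolving the nth bit of the weight exactly, and any finite-precision perturbation of the weight corrupts all sufficiently late values of h.

    from fractions import Fraction

    # Toy illustration only: the n-th bit of the weight w encodes h(n).
    # Here w is rational (hence computable); in the hypercomputational
    # case, w would be a Turing-uncomputable real, and extracting its
    # bits would require unbounded physical precision.
    def nth_bit(w, n):
        return int(w * 2**n) % 2

    w = Fraction(5, 8)  # binary 0.101, so h(1)=1, h(2)=0, h(3)=1
    print([nth_bit(w, n) for n in (1, 2, 3)])  # [1, 0, 1]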

Perhaps the best-known proposal for a hypercomputer is due to Mark Hogarth (1994, 2004), who developed an idea of Itamar Pitowsky (1990). Relativistic hypercomputers exploit the properties of a special kind of spacetime called Malament-Hogarth spacetime, which is physically possible in the sense of constituting a solution to Einstein’s field equations for General Relativity. Malament-Hogarth spacetimes contain regions with an infinite time-like trajectory λ that can be circumvented by a finite time-like trajectory γ. In other words, λ and γ have a common origin, and there is a spacetime point p on γ such that λ, even though it is infinite, lies entirely in p’s chronological past. If an observer launches a TM along λ and then travels along γ, she may, within finite time, find herself in the future of an infinitely long computation performed by the TM. If the TM is able to send signals to the observer, the observer would be able to know the outcome of a potentially infinitely long computation, thereby having computational means more powerful than (ordinary) TMs. For instance, an observer may be able to obtain the result of an arbitrary instance of the halting function for TMs.

Constructing and using a relativistic hypercomputer is challenging. The first question is whether our spacetime is Malament-Hogarth. The answer is currently unknown. Even if our spacetime is not Malament-Hogarth globally, it might still contain regions that have the Malament-Hogarth property locally. An example of such a region is a huge, rotating black hole; there is evidence that our universe contains such black holes (Etesi and Németi 2002). But even if there are Malament-Hogarth regions in our universe, there remain considerable obstacles. For starters, (i) the huge, rotating black hole that is closest to us is probably out of our reach as well as our descendants’ reach, (ii) completing an infinite computation requires an unbounded memory, which might require an unbounded amount of matter, and (iii) a machine that requires an unbounded amount of matter running for an infinite amount of time will malfunction with probability 1, rendering it useless (Button 2009, 779). The latter challenge might be addressable by employing multiple self-reproducing and self-correcting machines that check one another’s computations instead of a single machine (Andréka et al. 2018), although that seems to exacerbate the need for unbounded resources (see also Németi and Dávid 2006, Andréka, Németi, and Németi 2009).

Quantum computing has also been invoked as a possible source of hypercomputation. Quantum computing is the manipulation of qubits (and more generally, qudits) in accordance with the laws of quantum mechanics. Qubits are variables that, like bits, can be prepared or measured in one of two states, 0 and 1. Unlike bits, qubits can (i) take states that are a superposition of 0 and 1 and (ii) become entangled with each other during a computation. A surprising feature of quantum computing is that it allows computing certain functions much faster than any known classical computation (Shor 1994). But while mainstream quantum computing may be more efficient than classical computing, it does not allow computing any functions beyond those computable by TMs.

Some authors have questioned whether the mainstream quantum computing paradigm is general enough and, if not, whether some aspects of quantum mechanics may be exploited to design a quantum hypercomputer (Nielsen 1997, Calude and Pavlov 2002). The most prominent proposal for a quantum hypercomputer is due to Tien Kieu (2002, 2003, 2004, 2005). He argues that an appropriately constructed quantum system can decide whether an arbitrary Diophantine equation has an integral solution—a problem which is known to be unsolvable by TMs. Kieu’s method involves encoding a specific instance of the problem in an appropriate Hamiltonian, which represents the total energy of a quantum system. Kieu shows that such a system can dynamically evolve over time into an energy ground state that encodes the desired solution. Unfortunately, Kieu’s scheme does not appear to be workable. For one thing, it requires infinite precision in setting up and maintaining the system (Hodges 2005). For another, Kieu does not provide a successful criterion for knowing when the system has evolved into the solution state, and the problem of determining when the solution state is reached is Turing-uncomputable (Smith 2006b, Hagar and Korolev 2007). Thus, operating Kieu’s proposed hypercomputer would require already possessing hypercomputational powers.

In conclusion, the candidate hypercomputers proposed so far have not been shown to be physically constructible and reliable. For the time being, Modest Physical CTT remains plausible. It may well be that, for all practical purposes, any function that is physically computable is Turing-computable.

Bibliography

  • Andréka, H., Németi, I., and P. Németi, 2009, “General Relativistic Hypercomputing and Foundation of Mathematics,” Natural Computing, 8(3): 499–516.
  • Andréka, H., Madarász, J. X., Németi, I., Németi, P. and Székely, G., 2018, “Relativistic computation,” in Cuffaro and Fletcher (eds.) 2018.
  • Beggs, E. J., and J.V. Tucker, 2007, “Can Newtonian Systems, Bounded in Space, Time, Mass and Energy Compute all Functions?” Theoretical Computer Science, 371(1–2): 4–19.
  • Block, N., 1978, “Troubles with Functionalism,” in Perception and Cognition: Issues in the Foundations of Psychology, C. W. Savage (ed.), Minneapolis, University of Minnesota Press, pp. 261–325.
  • Blum, L., Cucker F., Shub M., and S. Smale, 1998, Complexity and Real Computation, New York: Springer.
  • Bontly, T., 1998, “Individualism and the Nature of Syntactic States,” British Journal for the Philosophy of Science, 49(4): 557–574.
  • Bub, J., 2005, “Quantum Mechanics is About Quantum Information,” Foundations of Physics, 35(4): 541–560.
  • Burge, T., 1986, “Individualism and Psychology,” Philosophical Review, 95: 3–45.
  • Button, T., 2009, “SAD Computers and Two Versions of the Church-Turing Thesis,” British Journal for the Philosophy of Science, 60(4): 765–792.
  • Calude, C. S., 2005, “Algorithmic Randomness, Quantum Physics, and Incompleteness,” in Proceedings of the Conference “Machines, Computations and Universality” (MCU 2004), M. Margenstern (ed.), Berlin: Springer, pp. 1–17.
  • Calude, C. S., and B. Pavlov, 2002, “Coins, Quantum Measurements, and Turing’s Barrier,” Quantum Information Processing, 1(1–2): 107–127.
  • Calude, C. S., and K. Svozil, 2008, “Quantum Randomness and Value Indefiniteness,” Advanced Science Letters, 1(2): 165–168.
  • Campbell, D. I., and Yang, Y., 2021, “Does the solar system compute the laws of motion?” Synthese, 198: 3203–3220.
  • Chalmers, D. J., 1995, “On Implementing a Computation,” Minds and Machines, 4: 391–402.
  • –––, 1996, “Does a Rock Implement Every Finite-State Automaton?” Synthese, 108: 310–333.
  • –––, 2011, “A Computational Foundation for the Study of Cognition,” Journal of Cognitive Science, 12(4): 323–57.
  • Chrisley, R. L., 1995, “Why Everything Doesn’t Realize Every Computation,” Minds and Machines, 4: 403–430.
  • Church, A., 1932, “A Set of Postulates for the Foundation of Logic,” Annals of Mathematics, 33: 346–366.
  • –––, 1936, “An Unsolvable Problem in Elementary Number Theory,” The American Journal of Mathematics, 58: 345–363.
  • Churchland, P. S., and T. J. Sejnowski, 1992, The Computational Brain, Cambridge, MA: MIT Press.
  • Coelho Mollo, D., 2018, “Functional Individuation, Mechanistic Implementation: The Proper Way of Seeing the Mechanistic View of Concrete Computation,” Synthese, 195(8): 3477–97.
  • Copeland, B. J., 1996, “What is Computation?” Synthese, 108(3): 335–359.
  • –––, 2000, “Narrow Versus Wide Mechanism: Including a Re-Examination of Turing’s Views on the Mind-Machine Issue.” The Journal of Philosophy, XCVI(1): 5–33.
  • –––, 2002, “Accelerating Turing Machines,” Minds and Machines, 12(2): 281–301.
  • Crane, T., 1990, “The Language of Thought: No Syntax Without Semantics,” Mind and Language, 5(3): 187–212.
  • Craver, C. F., 2012, “Functions and Mechanisms: A Perspectivalist Account,” in P. Huneman (ed.), Functions, Dordrecht: Springer.
  • Cuffaro, Michael E., and Samuel C. Fletcher (eds.), 2018, Physical Perspectives on Computation, Computational Perspectives on Physics, Cambridge: Cambridge University Press.
  • Cummins, R., 1983, The Nature of Psychological Explanation, Cambridge, MA: MIT Press.
  • Curtis-Trudel, A., forthcoming-a, “Why Do We Need A Theory of Implementation?” British Journal for the Philosophy of Science. doi:10.1086/714791
  • Curtis-Trudel, A., forthcoming-b, “Implementation as Resemblance,” Philosophy of Science. doi:10.1086/714872
  • Davies, E. B., 2001, “Building Infinite Machines,” British Journal for the Philosophy of Science, 52(4): 671–682.
  • Davis, M., 2004a, “The Myth of Hypercomputation,” in Alan Turing: Life and Legacy of a Great Thinker, C. Teuscher (ed.), Berlin: Springer, pp. 195–211.
  • ––– (ed.), 2004b, The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Mineola, NY: Dover.
  • Dennett, D. C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.
  • Deutsch, D., 1985, “Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer,” Proceedings of the Royal Society of London A, 400: 97–117.
  • Dewhurst, J., 2018, “Computing Mechanisms without Proper Functions,” Minds and Machines, 28: 569–88.
  • Dretske, F. I., 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
  • Duwell, A., 2017, “Exploring the Frontiers of Computation: Measurement Based Quantum Computers and the Mechanistic View of Computation,” in A. Bokulich and J. Floyd (eds.), Turing 100: Philosophical Explorations of the Legacy of Alan Turing (Boston Studies in the Philosophy and History of Science: Volume 324), New York: Springer, 219–32.
  • Earman, J., 1986, A Primer on Determinism, Dordrecht: D. Reidel.
  • Earman, J., and J. Norton, 1993, “Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes,” Philosophy of Science, 60: 22–42.
  • Egan, F., 1999, “In Defence of Narrow Mindedness.” Mind and Language, 14(2): 177–194.
  • –––, 2010, “Computational Models: A Modest Role For Content,” Studies in the History and Philosophy of Science, 41(3): 253–259.
  • Etesi, G., and I. Németi, 2002, “Non-Turing Computations via Malament-Hogarth Spacetimes,” International Journal of Theoretical Physics, 41: 342–370.
  • Feynman, R. P., 1982, “Simulating Physics with Computers,” International Journal of Theoretical Physics, 21(6–7): 467–488.
  • Fodor, J. A., 1975, The Language of Thought, Cambridge, MA: Harvard University Press.
  • –––, 1981, “The Mind-Body Problem,” Scientific American, 244: 114–125.
  • –––, 2008, LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press.
  • Fredkin, E., 1990, “Digital Mechanics: An Information Process Based on Reversible Universal Cellular Automata,” Physica D, 45: 254–270.
  • Fresco, N., 2010, “Explaining Computation Without Semantics: Keeping It Simple,” Minds and Machines, 20: 165–181.
  • –––, 2014, Physical Computation and Cognitive Science, New York: Springer.
  • Fuchs, C. A., 2004, “Quantum Mechanics as Quantum Information (and only a little more),” in Quantum Theory: Reconsiderations of Foundations, A. Khrennikov (ed.), Växjö, Sweden: Växjö University Press, pp. 463–543.
  • Gandy, R., 1980, “Church’s Thesis and Principles for Mechanism,” in J. Barwise, H. J. Keisler, K. Kunen (eds.), The Kleene Symposium: Studies in Logic and the Foundations of Mathematics, Amsterdam: North-Holland Publishing.
  • Garson, J., 2013, “The Functional Sense of Mechanism,” Philosophy of Science, 80: 317–333.
  • Gödel, K., 1934, “On Undecidable Propositions of Formal Mathematical Systems,” in The Undecidable, M. Davis (ed.), Hewlett, NY: Raven, pp. 41–71.
  • –––, 1936, “Über die Länge von Beweisen,” Ergebnisse eines mathematischen Kolloquiums, 7: 23–24.
  • –––, 1946, “Remarks Before the Princeton Bicentennial Conference on Problems in Mathematics,” reprinted in Davis 2004b, pp. 84–88.
  • Godfrey-Smith, P., 2009, “Triviality Arguments Against Functionalism,” Philosophical Studies, 145(2): 273–295.
  • Grice, H. P., 1957, “Meaning,” The Philosophical Review, 66(3): 377–388.
  • Hagar, A., and A. Korolev, 2007, “Quantum Hypercomputation—Hyper or Computation?” Philosophy of Science, 74(3): 347–363.
  • Hamkins, J. D., 2002, “Infinite Time Turing Machines,” Minds and Machines, 12: 521–539.
  • Harman, G., 1987, “(Nonsolipsistic) Conceptual Role Semantics,” in New Directions in Semantics, E. Lepore (ed.), London: Academic Press, pp. 55–81.
  • Hogarth, M. L., 1994, “Non-Turing Computers and Non-Turing Computability,” PSA 1994(1): 126–138.
  • –––, 2004, “Deciding Arithmetic Using SAD Computers,” British Journal for the Philosophy of Science, 55: 681–691.
  • Isaac, A., 2019, “The Semantics Latent in Shannon Information,” British Journal for the Philosophy of Science, 70(1): 103–125.
  • Jacquette, D., 1991, “The Myth of Pure Syntax,” in Topics in Philosophy and Artificial Intelligence, L. Albertazzi and R. Rolli (eds.), Bozen: Istituto Mitteleuropeo di Cultura, pp. 1–14.
  • Kantor, F. W., 1982, “An Informal Partial Overview of Information Mechanics,” International Journal of Theoretical Physics, 21(6–7): 525–35.
  • Kaplan, D. M., 2011, “Explanation and Description in Computational Neuroscience,” Synthese, 183(3): 339–373.
  • Kieu, T. D., 2002, “Quantum Hypercomputation,” Minds and Machines, 12(4): 541–561.
  • –––, 2003, “Computing the Noncomputable,” Contemporary Physics, 44: 51–71.
  • –––, 2004, “A Reformulation of Hilbert’s Tenth Problem through Quantum Mechanics,” Proceedings of the Royal Society A, 460(2045): 1535–1545.
  • –––, 2005, “An Anatomy of a Quantum Adiabatic Algorithm that Transcends the Turing Computability,” International Journal of Quantum Information, 3(1): 177–183.
  • Kleene, S. C., 1935, “A Theory of Positive Integers in Formal Logic,” American Journal of Mathematics, 57: 153–173 and 219–244.
  • –––, 1952, Introduction to Metamathematics, Princeton: Van Nostrand.
  • Klein, C., 2008, “Dispositional Implementation Solves the Superfluous Structure Problem,” Synthese, 165: 141–153.
  • Leff, H. S. and A.F. Rex, (eds.), 2003, Maxwell’s Demon 2: Entropy, Classical and Quantum Information, Computing, Bristol: Institute of Physics Publishing.
  • Lloyd, S., 2006, Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos, New York: Knopf.
  • Lycan, W., 1981, “Form, Function, and Feel,” Journal of Philosophy, 78(1): 24–50.
  • Machamer, P. K., Darden, L., and C. Craver, 2000, “Thinking About Mechanisms,” Philosophy of Science, 67: 1–25.
  • Maley, C. J., forthcoming, “Analog Computation and Representation,” British Journal for the Philosophy of Science. doi:10.1086/715031
  • Martin, C. B., 1997, “On the Need for Properties: The Road to Pythagoreanism and Back,” Synthese, 112(2): 193–231.
  • Maudlin, T., 1989, “Computation and Consciousness,” Journal of Philosophy, 86(8): 407–432.
  • Milkowski, M., 2013, Explaining the Computational Mind, Cambridge, MA: MIT Press.
  • Mills, J. W., 2008, “The Nature of the Extended Analog Computer,” Physica D: Nonlinear Phenomena, 237(9): 1235–1256.
  • Németi, I., and G. Dávid, 2006, “Relativistic Computers and the Turing Barrier,” Journal of Applied Mathematics and Computation, 178(1): 118–142.
  • Nielsen, M. A., 1997, “Computable Functions, Quantum Measurements, and Quantum Dynamics,” Physical Review Letters, 79(15): 2915–2918.
  • Nielsen, M. A., and I. L. Chuang, 2000, Quantum Computation and Quantum Information, New York: Cambridge University Press.
  • Ord, T., 2006, “The Many Forms of Hypercomputation,” Applied Mathematics and Computation, 178(1): 143–153.
  • Papayannopoulos, P., 2020, “Computing and modelling: Analog vs. Analogue,” Studies in History and Philosophy of Science, 83: 103–120.
  • Piccinini, G., 2004, “Functionalism, Computationalism, and Mental Contents,” Canadian Journal of Philosophy, 34(3): 375–410.
  • –––, 2007, “Computing Mechanisms,” Philosophy of Science, 74(4): 501–526.
  • –––, 2010, “The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism,” Philosophy and Phenomenological Research, 81(2): 269–311.
  • –––, 2015, Physical Computation: A Mechanistic Account, Oxford: Oxford University Press.
  • Piccinini, G. and A. Scarantino, 2011, “Information Processing, Computation, and Cognition,” Journal of Biological Physics 37(1): 1–38.
  • Pitowsky, I., 1990, “The Physical Church Thesis and Physical Computational Complexity,” Iyyun, 39: 81–99.
  • –––, 2002, “Quantum Speed-Up of Computations,” Philosophy of Science, 69: S168–S177.
  • Pour-El, M. B., 1974, “Abstract Computability and Its Relation to the General Purpose Analog Computer (Some Connections Between Logic, Differential Equations and Analog Computers),” Transactions of the American Mathematical Society, 199: 1–28.
  • –––, 1999, “The Structure of Computability in Analysis and Physical Theory: An Extension of Church’s Thesis,” in Handbook of Computability Theory, E. R. Griffor (ed.), New York: Elsevier, pp. 449–471.
  • Putnam, H., 1960, “Minds and Machines,” in Dimensions of Mind: A Symposium, S. Hook (ed.), New York: Collier, pp. 138–164; Reprinted in Putnam 1975a, pp. 362–386.
  • –––, 1967, “Psychological Predicates,” in Art, Mind, and Religion, W.H. Capitan & D.D. Merrill (eds.), Pittsburgh, PA: University of Pittsburgh Press, pp. 37–48. Reprinted in Putnam 1975a as “The Nature of Mental States,” pp. 150–161.
  • –––, 1975a, Philosophical Papers: Volume 2, Mind, Language and Reality, Cambridge: Cambridge University Press.
  • –––, 1975b, “The Meaning of ‘Meaning’,” in Language, Mind and Knowledge. Minnesota Studies in the Philosophy of Science, vol. 7, K. Gunderson (ed.), Minneapolis: University of Minnesota Press, pp. 131–193. Reprinted in Putnam 1975a, pp. 215–271.
  • –––, 1988, Representation and Reality, Cambridge, MA: MIT Press.
  • Pylyshyn, Z. W., 1984, Computation and Cognition, Cambridge, MA: MIT Press.
  • Quine, W. V. O., 1976, “Whither Physical Objects?” in Essays in Memory of Imre Lakatos (Series: Boston Studies in the Philosophy of Science), R. S. Cohen, P. K. Feyerabend, & M. W. Wartofsky (eds.), Dordrecht: Reidel, pp. 497–504.
  • Rescorla, M., 2014, “A Theory of Computational Implementation,” Synthese, 191: 1277–1307.
  • Rubel, L. A., 1989, “Digital Simulation of Analog Computation and Church’s Thesis,” Journal of Symbolic Logic, 54(3): 1011–1017.
  • –––, 1993, “The Extended Analog Computer,” Advances in Applied Mathematics, 14(1): 39–50.
  • Scheutz, M., 1999, “When Physical Systems Realize Functions …,” Minds and Machines, 9(2): 161–196.
  • –––, 2001, “Causal versus Computational Complexity,” Minds and Machines, 11(4): 534–566.
  • Schonbein, W., 2005, “Cognition and the Power of Continuous Dynamical Systems,” Minds and Machines, 15(1): 57–71.
  • Searle, J. R., 1992, The Rediscovery of the Mind, Cambridge, MA: MIT Press.
  • Segal, G., 1991, “Defence of a Reasonable Individualism,” Mind, 100(400): 485–493.
  • Shagrir, O., 2001, “Content, Computation and Externalism,” Mind, 110(438): 369–400.
  • –––, 2006, “Why We View the Brain as a Computer,” Synthese, 153(3): 393–416.
  • –––, forthcoming, The Nature of Physical Computation, Oxford: Oxford University Press.
  • Shagrir, O., and I. Pitowsky, 2003, “Physical Hypercomputation and the Church-Turing Thesis,” Minds and Machines, 13(1): 87–101.
  • Shannon, C. E., and W. Weaver, 1949, The Mathematical Theory of Communication, Urbana: University of Illinois Press.
  • Shapiro, L. A., 1997, “A Clearer Vision,” Philosophy of Science, 64(1): 131–153.
  • Shor, P. W., 1994, “Algorithms for Quantum Computation: Discrete Logarithms and Factoring,” in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Los Alamitos, CA: IEEE Computer Society Press, pp. 124–134.
  • Sieg, W., 2006, “On Computability,” in Philosophy of Mathematics (Handbook of the Philosophy of Science), A. Irvine (ed.), Amsterdam: Elsevier, pp. 535–630.
  • Siegelmann, H. T., 1999, Neural Networks and Analog Computation: Beyond the Turing Limit, Boston: Birkhäuser.
  • Šíma, J., and P. Orponen, 2003, “General-purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results,” Neural Computation, 15: 2727–2778.
  • Smith, B. C., 2002, “The Foundations of Computing,” in Computationalism: New Directions, M. Scheutz (ed.), Cambridge, MA: MIT Press, pp. 23–58.
  • Smith, W. D., 2006a, “Church’s Thesis Meets the N-body Problem,” Applied Mathematics and Computation, 178(1): 154–183.
  • –––, 2006b, “Three Counterexamples Refuting Kieu’s Plan for Quantum Adiabatic Hypercomputation; and Some Uncomputable Quantum Mechanical Tasks,” Applied Mathematics and Computation, 178(1): 184–193.
  • Stannett, M., 1990, “X-Machines and the Halting Problem: Building a Super-Turing Machine,” Formal Aspects of Computing, 2(1): 331–341.
  • Stich, S., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
  • Toffoli, T., 1982, “Physics and Computation,” International Journal of Theoretical Physics, 21(3–4): 165–175.
  • Tucker, C., 2018, “How to Explain Miscomputation,” Philosophers’ Imprint, 18(24).
  • Turing, A. M., 1936–7, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 42(1): 230–265; reprinted in Davis 2004b, pp. 116–154.
  • –––, 1950, “Computing Machinery and Intelligence,” Mind, LIX(236): 433–460.
  • Wheeler, J. A., 1982, “The Computer and the Universe,” International Journal of Theoretical Physics, 21(6–7): 557–572.
  • –––, 1989, “Information, Physics, Quantum: The Search for Links,” in Proceedings III International Symposium on Foundations of Quantum Mechanics, Tokyo, pp. 354–358.
  • Wolfram, S., 2002, A New Kind of Science, Champaign, IL: Wolfram Media.
  • Zuse, K., 1970, Calculating Space, Cambridge, MA: MIT Press.
  • –––, 1982, “The Computing Universe,” International Journal of Theoretical Physics, 21(6–7): 589–600.

Acknowledgments

Many thanks to Neal Anderson, David Chalmers, Nir Fresco, Mark Sprevak, Fred Kroon, Ray Turner, and several anonymous referees for helpful comments on previous drafts. Thanks to James Virtel for editorial assistance. This material is based in part upon work supported by the National Science Foundation under Grant No. SES-0924527.

Copyright © 2021 by
Gualtiero Piccinini <piccininig@umsl.edu>
Corey Maley <cmaley@ku.edu>
