The Philosophy of Computer Science
For nearly every field of study, there is a branch of philosophy, called the philosophy of that field. …Since the main purpose of a given field of study is to contribute to knowledge, the philosophy of X is, at least in part, a branch of epistemology. Its purpose is to provide an account of the goals, methodology, and subject matter of X. (Shapiro 1983: 525)
The philosophy of computer science is concerned with those philosophical issues that arise from within the academic discipline of computer science. It is intended to be the philosophical endeavor that stands to computer science as philosophy of mathematics does to mathematics and philosophy of technology does to technology. Indeed, the abstract nature of computer science, coupled with its technological ambitions, ensures that many of the conceptual questions that arise in the philosophies of mathematics and technology have computational analogues. In addition, the subject will draw in variants of some of the central questions in the philosophies of mind, language and science. We shall concentrate on a tightly related group of topics which form the spine of the subject. These include specification, implementation, semantics, programs, programming, correctness, abstraction and computation.
- 1. Computational Artifacts
- 2. Specification and Function
- 3. Implementation
- 4. Semantics
- 5. The Ontology of Programs
- 6. Correctness
- 7. Programming
- 8. Abstraction
- 9. Computation
- 10. Other Topics
- Academic Tools
- Other Internet Resources
- Related Entries
Computer science underpins our Facebook pages, controls air traffic around the world, and ensures that we will not be too surprised when it snows. It has been applied in algebra, car manufacturing, laser surgery, banking, gastronomy, astronomy and astrology. Indeed, it is hard to find an area of life that has not been fundamentally changed and enhanced by its application. But what is it that is applied? What are the things that give substance to such applications? The trite answer is the entities that computer scientists construct, the artifacts of computer science, computational artifacts, if you will. Much of the philosophy of the subject is concerned with their nature, specification, design and construction.
Folklore has it that computational artifacts fall into two camps: hardware and software. Presumably, software includes compilers and natural language understanding systems whereas laptops and tablets are hardware. But how is this distinction drawn: how do we delineate what we take to be software and what we take to be hardware?
A standard way identifies the distinction with the abstract/physical one (see the entry on abstract objects) where hardware is taken to be physical and software to be abstract. Unfortunately, this does not seem quite right. As Moor (1978) points out, programs, which are normally seen as software, and therefore under this characterization abstract, may also be physical devices. In particular, programs were once identified with sequences of physical lever pulls and pushes. There are different reactions to this observation. Some have suggested there is no distinction. In particular, Suber (1988) argues that hardware is a special case of software, and Moor (1978) that the distinction is ontologically insignificant. On the other hand, Duncan (2009—see Other Internet Resources) insists that there is an important difference but it is one that can only be made within an ontological framework that supports finer distinctions than the simple abstract/physical one (e.g., B. Smith 2012). Irmak (2012) also thinks that software and hardware are different: software is an abstract artifact, but apparently not a standard one since it has temporal properties.
Whether or not the software/hardware distinction can be made substantial, most writers agree that, while a program can be taken as an abstract thing, it may also be cashed out as a sequence of physical operations. Consequently, they (e.g., Colburn 2000; Moor 1978) insist that programs have a dual nature: they have both an abstract guise and a physical one. Indeed, once this is conceded, it would seem to apply to the majority of computational artifacts. On the one hand they seem to have an abstract guise which enables us to reflect and reason about them independently of any physical manifestation. This certainly applies to abstract data types (Cardelli and Wegner 1985). For example, the list abstract data type consists of the carrier type together with operations that support the formation and manipulation of lists. Even if not made explicit, these are determined by several axioms that fix their properties, e.g., if one adds an element to the head of a list to form a new list, and then removes the head, the old list is returned. Similarly, an abstract stack is determined by axioms that govern push and pop operations. Using such properties one may reason about lists and stacks in a mathematical way, independently of any concrete implementation. And one needs to: one can neither design nor program without such reasoning, nor construct correct programs without reasoning about what the programs are intended to do. If this is right, computational artifacts have an abstract guise that is separable from their physical realization or implementation. Indeed, this requirement to entertain abstract devices to support reasoning about physical ones is not unique to computer science. The necessity of abstraction is clearly articulated by the physicist Duhem.
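The stack axioms mentioned above can be illustrated with a small sketch. The function names (`push`, `pop`, `top`) and the choice of a tuple as one concrete realization are illustrative assumptions, not taken from any particular formalism.

```python
# A minimal sketch of the abstract stack: one concrete realization
# (a Python tuple) together with the axioms any realization must satisfy.
# The names push/pop/top are illustrative assumptions.

def empty():
    return ()

def push(x, s):
    return (x,) + s          # new element at the head

def pop(s):
    return s[1:]             # remove the head

def top(s):
    return s[0]              # inspect the head

# The governing axioms, stated as checkable properties:
#   pop(push(x, s)) == s   -- removing a just-pushed element restores s
#   top(push(x, s)) == x   -- the head of push(x, s) is x
def satisfies_stack_axioms(x, s):
    return pop(push(x, s)) == s and top(push(x, s)) == x
```

Reasoning with the axioms alone, rather than with any particular realization, is what allows the same arguments to apply to every correct implementation.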
When a physicist does an experiment, two very distinct representations of the instrument on which he is working fill his mind: one is the image of the concrete instrument that he manipulates in reality; the other is a schematic model of the same instrument, constructed with the aid of symbols supplied by theories; and it is on this ideal and symbolic instrument that he does his reasoning, and it is to it that he applies the laws and formulas of physics. A manometer, for example, is on the one hand, a series of glass tubes, solidly connected to one another filled with a very heavy metallic liquid called mercury and on the other by the perfect fluid in mechanics, and having at each point a certain density and temperature defined by a certain equation of compressibility and expansion. (Duhem 1954: 155–156)
Wittgenstein talks about a similar notion of abstraction when he argues that in kinematics one abstracts away from actual physical properties.
In kinematics we talk of a connecting rod—not meaning a rod made of brass or steel or what-not. We use the word ‘connecting rod’ in ordinary life, but in kinematics we use it in a quite different way, although we say roughly the same things about it as we say about the real rod; that it goes forward and back, rotates, etc. But then the real rod contracts and expands, we say. What are we to say of this rod? Does it contract and expand?—And so we say it can't. But the truth is that there is no question of it contracting or expanding. It is a picture of a connecting rod, a symbol used in this symbolism for a connecting rod. And in this symbolism there is nothing which corresponds to a contraction or expansion of the connecting rod. (Wittgenstein 1975: 98)
Much the same could be said about computers: reasoning about them requires the employment of an abstract machine that captures the computer's function. Moreover, there is no question of abstract machines overheating or exploding.
Seemingly, computational artifacts have an abstract guise that supports our reasoning about them. On the other hand, somewhere down the line, they must have a physical implementation that enables them to be used as things in the physical world. This is obviously true of machines, but it is equally so for programs: programmers write programs to control physical devices. A program or abstract machine that has no physical realization is of little use as a practical device for performing humanly intractable computations. For instance, a program that monitors heart rate must be underpinned by a physical device that actually performs the task. The computer scientist Dijkstra puts it as follows.
A programmer designs algorithms, intended for mechanical execution, intended to control existing or conceivable computer equipment. (Dijkstra 1974: 1)
On the duality view, computer science is not an abstract mathematical discipline that is independent of the physical world. To be used these things must have physical substance. And once this observation is made there is a clear link with a central notion in the philosophy of technology (Kroes 2010; Franssen et al. 2010).
Technical artifacts include all the common objects of everyday life such as toilets, paper clips, tablets and dog collars. They are intentionally produced things. This is an essential part of being a technical artifact. For example, a physical object that accidentally carries out arithmetic is not by itself a calculator. This teleological aspect distinguishes them from other physical objects, and has led philosophers to argue that technical artifacts have a dual nature fixed by two sets of properties (e.g., Kroes 2010; Meijers 2001; Thomasson 2007; Vermaas and Houkes 2003).
- Functional Properties
- Structural Properties
Functional properties say what the artifact does. For example, a kettle is for boiling water and a car is for transportation. On the other hand, structural properties pertain to its physical makeup. They include its weight, color, size, shape, its chemical constitution etc. In particular, we might insist that our car is red and has white seats.
The notion of a technical artifact will help to conceptualize and organize some of the central questions and issues in the philosophy of computer science. We begin with a concept that underpins much of the activity of the subject. Indeed, it is the initial expression of functional properties.
In computer science, the function of an artifact is initially laid out in a (functional) specification (Sommerville 2012; Vliet 2008). Indeed, on the way to a final device, a whole series of specification/artifact pairs of varying degrees of abstractness come into existence. The activities of specification, implementation and correctness raise a collection of overlapping conceptual questions and problems (B. C. Smith 1985; Turner 2011; Franssen et al. 2010).
Specifications are expressed in a variety of ways, including ordinary vernacular. But the trend in computer science has been towards more formal and precise forms of expression. Indeed, specialized languages have been developed that range from those designed primarily for program specification (e.g., VDM, Jones 1990; Z, Woodcock and Davies 1996; B, Abrial 1996), wide-spectrum languages such as UML (Fowler 2003), through to specialized ones that are aimed at architectural description (e.g., Rapide, Luckham 1998; Darwin, 1997; Wright, Allen 1997). They differ with respect to the underlying ontology and their means of articulating requirements.
Z is based upon predicate logic and set theory. It is largely employed for the specification of suites of individual program modules or simple devices. For example, consider the following definition of a machine. The machine is to hold numerical values in locations. There are three finite sets: Store, Location and Numerals that represent the store itself, the locations of the store, and the values in the locations. In addition, there are two operations (given in the declaration part of the schema): Lookup, which is a (partial) function from the Cartesian product of Store and Location to Numerals, and Update, which is a total function from the Cartesian product of Store, Location and Numerals to Store. These operations are governed by formal conditions that dictate the relationship between them (the predicate part of the Z schema). More specifically, they insist that if a location is updated with a new value, Lookup returns that value. Of course, any such set theoretic definition does not automatically refer to a specific physical device. That is its virtue. In order to provide its function it must abstract away from physical properties. More explicitly, it employs the abstract notion of set to introduce the update and the lookup operations as abstract set theoretic functions. As such it defines an abstract machine.
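The store described by this Z schema can be sketched concretely. Modeling a Store as a finite map (a Python dictionary) is an illustrative choice and no part of the schema itself; only the names Lookup and Update come from the text.

```python
# A sketch of the store machine from the Z schema discussed above.
# A Store is modeled here as a finite map from locations to numerals;
# the dictionary representation is an illustrative assumption.

def update(store, loc, n):
    """Total: always yields a new store with loc bound to n."""
    new = dict(store)
    new[loc] = n
    return new

def lookup(store, loc):
    """Partial: defined only for locations present in the store."""
    return store[loc]        # raises KeyError outside its domain

# The predicate part of the schema: updating a location and then looking
# it up returns the new value, and other locations are untouched.
s0 = {"a": 1, "b": 2}
s1 = update(s0, "a", 7)
assert lookup(s1, "a") == 7
assert lookup(s1, "b") == lookup(s0, "b")
```

Note that nothing here mentions voltages, transistors, or any other physical property: the definition fixes an abstract machine, exactly as the text describes.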
UML (Fowler 2003) has a very rich ontology and a wide variety of expression mechanisms. For example, its class language allows the specification of software patterns (Gamma et al. 1994). In general, an architectural description language (ADL) is used to precisely specify the architecture of a software system (Bass 2003). Typically these languages employ an ontology that includes notions such as components, connectors, interfaces and configurations. In particular, architectural descriptions written in Rapide, Darwin or Wright are precise expressions in formalisms which are defined using an underlying mathematical semantics. For example, the Rapide model of computation is based on partially ordered sets of events, and its semantics is given in the Pi-calculus (Magee et al. 1995). Wright has an underlying semantics based upon the formalism known as communicating sequential processes (Hoare 1985). A specification in Rapide defines a sequence of events, and one in Wright describes an abstract process. Such expressions enable the properties of any program or system to be explored in a way that is independent of any particular implementation.
But what is the logical function of the expressions of these languages? On the face of it, they are just expressions in a formal language. However, when the underlying ontology is made explicit, each of these languages reveals itself to be a formal ontology which may be naturally cast as a type theory (Turner 2009a). And under this interpretation these expressions are stipulative definitions (Gupta 2012). As such, each defines a new abstract object within the formal ontology of its system.
However, taken by itself a definition need not be a specification of anything; it may just form part of a mathematical exploration. So when does a definition act as a specification? Presumably, just in case the definition is taken to point beyond itself to the construction of an artifact. It is the intentional act of giving governance of the definition over the properties of a device or system that turns a mere definition into a specification. The definition then determines whether or not the device or system has been correctly built. It provides the criteria of correctness and malfunction. From this perspective, the role of specification is a normative one. If one asks whether the device works, it is the definition, functioning as a specification, that tells us whether it does. Indeed, without it the question would be moot. At all levels of abstraction, the logical role of specification is always the same: it provides a criterion for correctness and malfunction. This is the perspective argued for by Turner (2011). Indeed, this normative role is taken to be part of any general theory of function (Kroes 2012).
It should go without saying that this is an idealization. A specification is not fixed throughout the design and construction process. It may have to be changed because a client changes her mind about the requirements. Furthermore, it may turn out for a variety of reasons that the artifact is impossible to build. The underlying physical laws may prohibit matters. There may also be cost limitations that prevent construction. Indeed, the underlying definition may be logically absurd. In these cases, the current specification will have to be given up. But the central normative role of specification remains intact.
In this regard, Turner (2011: 57) observes that the logic of specification is not that of scientific theorizing.
…specifications are not intended to be a scientific theory of anything. If my computer does not meet its specification, I do not expect the manufacturer to supply me with a revised specification, which is what I would expect if it were functioning as a theory of the device. I expect to be given a computer that actually meets the original specification. With specifications it is the artefact that is correct or not; with theories it is the theory.
Observe that, unlike functional descriptions, specifications are taken to be prescribed in advance of the artifact construction; they guide the implementer. This might be taken to suggest a more substantive role for specification, i.e., to provide a methodology for the construction of the artifact. However, the method by which we arrive at the artifact is a separate issue from its specification. The latter dictates no such methodology. There is no logical difference between a functional specification and a functional description; logically they both provide a criterion of correctness.
Software is produced in a series of layers of decreasing levels of abstraction, where in the early layers both specification and artifact are abstract (Brooks 1995; Sommerville 2012; Irmak 2012). For example, a specification written in logical notation might be taken to be a specification of a linguistic program. In turn, the linguistic program, with its associated semantics, might be taken as the specification of a physical device. In other words, we admit abstract entities as artifacts. This is a characteristic feature of software development (Vliet 2008). It distinguishes it from technology in general. The introduction of abstract intermediate artifacts is essential (Brooks 1995; Sommerville 2012). Without them logically complex computational artifacts would be impossible to construct.
So what happens to the duality thesis? It still holds good, but now the structural description does not necessarily provide physical properties but another abstract device. For example, an abstract stack can act as the specification of a more concrete one that is now given a structural description in a programming language as an array. But the array is itself not a physical thing; it is an abstract one. Its structural description does not use physical properties but abstract ones, i.e., axioms. Of course, eventually, the array will get implemented in a physical store. However, from the perspective of the implementer who is attempting to implement stacks in a programming language with arrays as a data type, the artifact is the abstract array of the programming language. Consequently, the duality thesis must be generalized to allow for abstract artifacts.
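The stack-to-array step can be sketched as follows. The class name `ArrayStack` and the fixed-capacity representation are illustrative assumptions; the point is only that the abstract stack axioms act as the specification against which the array realization is checked.

```python
# A sketch of the stack-to-array implementation discussed above: the abstract
# stack axioms are the specification; the array (a fixed-size list plus a
# top index) is the more concrete artifact checked against them.
# The names and representation are illustrative assumptions.

class ArrayStack:
    def __init__(self, capacity=16):
        self.cells = [None] * capacity   # the underlying array
        self.n = 0                       # index of the next free cell

    def push(self, x):
        self.cells[self.n] = x
        self.n += 1

    def pop(self):
        self.n -= 1
        return self.cells[self.n]

# Correctness runs from specification to artifact: we check that the array
# realization satisfies the stack axiom pop(push(x, s)) == s, not vice versa.
s = ArrayStack()
s.push(1); s.push(2)
before = s.cells[:s.n]                   # the stack's contents as observed
s.push(99)
assert s.pop() == 99                     # top(push(x, s)) == x
assert s.cells[:s.n] == before           # pop(push(x, s)) == s
```

That the array is itself later realized in a physical store does not change the direction of the check: at each layer the more abstract description supplies the criterion of correctness for the more concrete one.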
Exactly how the physical and intentional conceptualisations of our world are related remains a vexing problem to which the long history of the mind–body problem in philosophy testifies. This situation also affects our understanding of technical artefacts: a conceptual framework that combines the physical and intentional (functional) aspects of technical artefacts is still lacking. (Kroes and Meijers 2006: 2)
The literature on technical artefacts (e.g., Kroes 2010; Meijers 2001; Thomasson 2007; Vermaas and Houkes 2003) contains two main theories about how the two conceptualizations are related: causal role theories and intentional ones.
Causal role theories insist that actual physical capacities determine function. Cummins's theory of functional analysis (Cummins 1975) is an influential example of such a theory. The underlying intuition is that, without the physical thing, and its actual properties, there can be no artifact. The main criticism of these theories concerns the location of any correctness criteria. If all we have is the physical device, we have no independent measure of correctness (Kroes 2010): the function is fixed by what the device actually does.
Causal role theories…..have the tendency to let functions coincide with actual physical capacities: structure and function become almost identical. The main drawback of this approach is that it cannot account for the malfunctioning of technical artefacts: an artefact that lacks the actual capacity for performing its intended function by definition does not have that function. The intentions associated with the artefact have become irrelevant for attributing a function. (Kroes 2010: 3)
This criticism has the same flavor as that made by Kripke in his discussion of rule following.
Actual machines can malfunction: through melting wires or slipping gears they may give the wrong answer. How is it determined when a malfunction occurs? By reference to the program of the machine, as intended by its designer, not simply by reference to the machine itself. Depending on the intent of the designer, any particular phenomenon may or may not count as a machine malfunction. A programmer with suitable intentions might even have intended to make use of the fact that wires melt or gears slip, so that a machine that is malfunctioning for me is behaving perfectly for him. Whether a machine ever malfunctions and, if so, when, is not a property of the machine itself as a physical object, but is well defined only in terms of its program, stipulated by its designer. Given the program, once again, the physical object is superfluous for the purpose of determining what function is meant. (Kripke 1982: 34)
The abstract machine provides the notion of correctness for the physical one. Without some independent expression of the function there are no notions of correctness and malfunction. In specification terms, the (physical) artifact cannot fix its own criterion of correctness. This argues against such causal theories of function and specification.
Intentional theories insist that it is agents who ascribe functions to artefacts. Objects and their components possess functions only insofar as they contribute to the realization of a goal. Good examples of this approach are McLaughlin (2001) and Searle (1995).
[t]he function of an artifact is derivative from the purpose of some agent in making or appropriating the object; it is conferred on the object by the desires and beliefs of an agent. No agent, no purpose, no function. (McLaughlin 2001: 60)
But how exactly does the function get fixed by the desires of an agent? One interpretation has it that the function is determined by the mental states of the agents i.e., the designers and users of technical artifacts. In their crude form such theories have difficulty accounting for how they impose any constraints upon the actual thing that is the artifact.
If functions are seen primarily as patterns of mental states, on the other hand, and exist, so to speak, in the heads of the designers and users of artifacts only, then it becomes somewhat mysterious how a function relates to the physical substrate in a particular artifact. (Kroes 2010: 2)
For example, how can the mental states of an agent fix the function of a device that is intended to perform addition? This question is posed in a rather different context by Kripke.
Given … that everything in my mental history is compatible both with the conclusion that I meant plus and with the conclusion that I meant quus, it is clear that the skeptical challenge is not really an epistemological one. It purports to show that nothing in my mental history of past behavior—not even what an omniscient God would know—could establish whether I meant plus or quus. But then it appears to follow that there was no fact about me that constituted my having meant plus rather than quus. (Kripke 1982: 21)
Of course, one might also insist that the artifact is actually in accord with the specification, but this does not help if the expression of the function is only located in the mental states of an agent. This version of the intentional theory is really a special case of a causal theory where the agent's head is the physical device in which the function is located.
However, there is an alternative interpretation of the intentional approach. In his commentary on Wittgenstein's notion of acting intentionally (Wittgenstein 1953), David Pears suggests that anyone who acts intentionally must know two things. Firstly, she must know what activity she is engaged in. Secondly, she must know when she has succeeded (Pears 2006). According to this perspective, establishing correctness is an externally observable, rule-based activity. The relation between the definition and the artifact is manifest in using the definition as a canon of correctness for the device. I must be able to justify my reasons for thinking that it works: if I am asked whether it works I must be able to justify that it does with reference to the abstract definition. The content of the function is laid out in the abstract definition, but the intention to take it as a specification is manifest in using it as one (§2.2).
Broadly speaking, an implementation is a realization of a specification. Examples include the implementation of a UML specification in Java, the implementation of an abstract algorithm as a program in C, the implementation of an abstract data type in Miranda or the implementation of a whole programming language. Moreover, implementation is often an indirect process that involves many stages before physical bedrock, and each level involves a specification/artifact pairing, and each involves a notion of implementation. But what is an implementation? Is there just one notion or many?
The most detailed philosophical study of implementation is given by Rapaport (1999, 2005). He argues that implementation involves two domains: a syntactic one (the abstraction) and a semantic one (the implementation). Indeed, he suggests that a full explication of the notion requires a third hidden term, a medium of implementation: I is an Implementation of A in medium M. Here I is the semantic component, A is the abstraction and M the medium of implementation. He allows for the target medium to be abstract or physical. This is in line with the claim that artifacts may be abstract or concrete.
Superficially, this seems right. In all the examples cited there is a medium of implementation in which the actual thing that is the implementation is carved out. Perhaps the clearest example is the implementation of a programming language. Here the syntactic domain is the actual language and the semantic one its interpretation on an abstract machine: the medium of interpretation. He suggests that we implement an algorithm when we express it in a computer programming language, and we implement an abstract data type when we express it as a concrete one. Examples that he does not mention might include the UML definition of design patterns implemented in Java (Gamma et al. 1994).
He further argues that there is no intrinsic difference between which of the domains is semantic and which is syntactic. This is determined by the asymmetry of the implementation mapping. For example, a physical computer process that implements a program plays the role of the semantics to the linguistic program, while the same linguistic program can play the role of semantic domain to an algorithm. This asymmetry is parallel to that of the specification/artifact connection. On the face of it there is little to cause any dissension. It is a straightforward description of the actual use of the term implementation. However, there is an additional conceptual claim that is less clear.
Apparently, the semantic domain, as its name suggests, is always taken to be a semantic representation of the syntactic one; it closes a semantic gap between the abstraction and the implementation in that the implementation fills in details. This is a referential view of semantics in that the syntactic domain refers to another domain that provides its meaning. Indeed, there is a strong tradition in computer science that takes referential or denotational semantics as fundamental (Stoy 1977; Milne and Strachey 1976; Gordon 1979). We shall examine this claim later when we consider the semantics of programming languages in more detail (§4). For the moment we are only concerned with the central role of any kind of semantics.
One view of semantics insists that it must be normative. Although the exact form of the normative constraint (Glüer and Wikforss 2009; Miller and Wright 2002) is debated, there is a good deal of agreement on a minimal requirement: a semantic account must fix what it is to use an expression correctly.
The fact that the expression means something implies that there is a whole set of normative truths about my behavior with that expression; namely, that my use of it is correct in application to certain objects and not in application to others…. The normativity of meaning turns out to be, in other words, simply a new name for the familiar fact that, regardless of whether one thinks of meaning in truth-theoretic or assertion-theoretic terms, meaningful expressions possess conditions of correct use. Kripke's insight was to realize that this observation may be converted into a condition of adequacy on theories of the determination of meaning: any proposed candidate for the property in virtue of which an expression has meaning, must be such as to ground the ‘normativity’ of meaning: it ought to be possible to read off from any alleged meaning constituting property of a word, what is the correct use of that word. (Boghossian 1989: 513)
On the assumption that this minimal requirement has to be satisfied by any adequate semantic theory, is implementation always, or even ever, semantic interpretation? Are these two notions at odds with each other?
One standard instance of implementation concerns the interpretation of one language in another. Here the abstraction and the semantic domain are both languages. Unfortunately, this does not provide a criterion of correctness unless we have already fixed the semantics of the target language. While translating between languages is taken to be implementation, indeed a paradigm case, it is not, on the present criterion, semantic interpretation. It only satisfies the correctness criterion when the target language has an independently given notion of correctness. This may be achieved in an informal or in a mathematical way. But it must not end in another uninterpreted language. So this paradigm case of implementation does not appear to satisfy the normative constraints required for semantic interpretation.
Next consider the case where the abstraction is a language and the semantic medium is set theory. This would be the case with denotational semantics (Stoy 1977). This does provide a notion of correctness. Our shared and agreed understanding of set theory provides this. Unfortunately, it would not normally be taken as an implementation. Certainly, it would not, if an implementation is something that is eventually physically realizable. Presumably, this is a necessary condition for being an implementation.
Now consider the case where the syntactic component is an abstract stack and the semantic one is an array. Here we must ask what it means to say that the implementation is correct. Does the medium of the array fix the correct use of stacks? It would seem not: the array does not provide the criteria for deciding whether we have the correct axioms for stacks or whether we have used them correctly in a particular application. Rather, the stack provides the correctness criteria for the implementation that is the array. Instead, the axioms provide the fundamental meanings of the constructs. While the array is an implementation of the stack, it does not provide it with a notion of correctness: the cart and the horse have been interchanged.
Finally, suppose the semantic domain is a physical machine and the syntactic one is an abstract one. The suggestion is that the physical machine provides a semantic interpretation of the abstract one. But again, a semantic interpretation must provide us with a notion of correctness and malfunction, and there are compelling arguments against this that are closely related to the causal theories of function (§2.4). This issue will be more carefully examined in section (§4) where we consider programming language semantics.
If a semantic account of a language must supply correctness criteria, and the term semantics is to have any bite, these cases present serious obstacles for the view that implementation is semantic interpretation: several distinct phenomena have been rolled into one. If these objections are along the right lines, then the relationship between the source and the target is not semantic interpretation. Of course, one may counter all this by arguing against the correctness requirement for semantic theory.
An alternative analysis of implementation is implicit in Turner (2013, 2012). Consider the case where the data type of finite sets is implemented in the data type of lists. Each of these structures is governed by a few simple axioms. The implementation represents finite sets as lists, the union operation on sets as list concatenation, and equality between sets as extensional equivalence of lists, etc. This is a mathematical relationship in which the axioms for sets act as a specification of the artifact, which in this case is implemented in the medium of lists. It would appear that the logical connection between the two is that of specification and artifact. The mapping does not have to be direct, i.e., there does not have to be a simple operation-to-operation correspondence, but the list properties of the implemented operations must satisfy the given set axioms. In standard mathematical terms, the list medium must provide a mathematical model (in the sense of model theory, W. Hodges 2013) of the set axioms. The case where one language is implemented in another is similar, and is fleshed out by the semantic definitions of the two languages.
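As an illustration (a minimal sketch, not drawn from Turner's text), finite sets might be represented as lists, with union as concatenation and set equality as extensional equivalence of lists; the set axioms then act as correctness criteria for the list operations:

```python
# A sketch of implementing finite sets in the medium of lists.
# Sets are represented as lists; the set axioms act as the
# specification that the list operations must satisfy.

def empty():
    return []

def insert(x, s):
    return [x] + s

def union(s, t):
    # Set union is represented by list concatenation.
    return s + t

def member(x, s):
    return x in s

def set_eq(s, t):
    # Extensional equality: same members, ignoring order and repetition.
    return all(member(x, t) for x in s) and all(member(x, s) for x in t)

# The mapping is not operation-to-operation in a naive sense: union
# produces duplicate entries, yet the set axioms still hold up to
# set_eq, e.g. idempotence and the identity of the empty set.
s = insert(1, insert(2, empty()))
assert set_eq(union(s, s), s)          # A ∪ A = A
assert set_eq(union(s, empty()), s)    # A ∪ ∅ = A
```

The point of the sketch is that the list medium models the set axioms without mirroring them operation for operation: concatenation is not idempotent on lists, but it is idempotent up to extensional equivalence.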
Finally, consider the case where the medium of implementation is a physical device, e.g., an abstract stack is implemented as a physical one. Once again the abstract stack must provide the correctness criteria for the physical device. This is what happens in practice. We check that the physical operations satisfy the abstract demands given by the axioms for stacks. There are issues here that have to do with the adequacy of this notion of correctness. We shall discuss these when we more carefully consider the computer science notion of correctness (§6.4).
If this analysis is along the right lines, implementation is best described as a relation between specification and artifact. Implementation is not semantic interpretation; indeed, it requires an independent semantic account in order to formulate a notion of implementation correctness. So what is taken to be semantic interpretation in computer science?
How is a semantic account of a language to be given? What are the main conceptual issues that surround the semantic enterprise? There are many different semantic candidates in the literature (Gordon 1979; Gunter 1992; Fernandez 2004; Milne and Strachey 1976). One of the most important distinctions centers upon the difference between operational and denotational semantics (Turner 2007; White 2003).
Operational semantics began life with Landin (1964). In its logical guise (Fernandez 2004) it provides a mechanism of evaluation where, in its simplest form, the evaluation relation is represented as follows.
P ⇓ c
This expresses the idea that the program P converges to the canonical form given by c. This is usually called big step semantics. It is normally given in terms of rules that provide the evaluation of a complex program in terms of the evaluation of its parts. For example, a simple rule for sequencing (°) would take the form
P ⇓ c    Q ⇓ d
──────────────
P°Q ⇓ c°d
These canonical or normal forms are other terms in the programming language which cannot be further reduced by the given rules. But they are still terms of the language, and for this reason the operational approach is often said to be unsatisfactory. According to this criticism, at some point in the interpretation process, the semantics for a formal language must be mathematical.
We can apparently get quite a long way expounding the properties of a language with purely syntactic rules and transformations… One such language is the Lambda Calculus and, as we shall see, it can be presented solely as a formal system with syntactic conversion rules … But we must remember that when working like this all we are doing is manipulating symbols-we have no idea at all of what we are talking about. To solve any real problem, we must give some semantic interpretation. We must say, for example, “these symbols represent the integers”. (Stoy 1977: 9)
In contrast, operational semantics is taken to be syntactic. In particular, the relation P ⇓ c relates syntactic objects, even if one of them is in canonical form. This does not get at what we are talking about. Unless the constants of the language themselves have an independently given mathematical meaning, at no point in this process do we reach semantic bedrock: we are just reducing one syntactic object to another, and this does not yield a normative semantics. This leads to the demand for a more mathematical approach.
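The big-step relation can be made concrete with a toy evaluator (an illustrative sketch; the miniature expression language is invented for the example). Note that its results are canonical terms, i.e., syntactic objects of the same language:

```python
# A toy big-step ("P ⇓ c") evaluator for a tiny expression language.
# Canonical forms are numerals, which are themselves terms of the
# language: evaluation relates syntactic objects to syntactic objects.

def evaluate(term):
    tag = term[0]
    if tag == "num":                      # numerals are canonical: n ⇓ n
        return term
    if tag == "plus":                     # if P ⇓ m and Q ⇓ n, then P+Q ⇓ m+n
        (_, m) = evaluate(term[1])
        (_, n) = evaluate(term[2])
        return ("num", m + n)
    raise ValueError(f"no rule for {tag}")

# (2 + 3) + 4 evaluates to the canonical form 9, still a term.
prog = ("plus", ("plus", ("num", 2), ("num", 3)), ("num", 4))
assert evaluate(prog) == ("num", 9)
```

The result `("num", 9)` is not the number nine but a term of the language, which is exactly the feature the criticism above seizes upon.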
Apparently, programming languages refer to (or are notations for) abstract mathematical objects not syntactic ones (Strachey 2000; McGettrick 1980; Stoy 1977). In particular, denotational semantics provides, for each syntactic object P, a mathematical one. Moreover, it generally does this in a compositional way: complex programs have their denotations fixed in terms of the denotations of their syntactic parts. These mathematical objects might be set theoretic, category theoretic or type theoretic. But whichever method is chosen, programs are taken to refer to abstract mathematical things. However, this position relies on a clear distinction between syntactic and mathematical objects.
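As a sketch (again with an invented miniature expression language), a denotational definition maps each phrase compositionally to a mathematical object rather than to another term:

```python
# A sketch of a compositional denotational semantics: each program
# phrase is mapped to a mathematical object (here, an integer), and
# the denotation of a complex phrase is fixed by the denotations
# of its syntactic parts.

def denote(term):
    tag = term[0]
    if tag == "num":
        return term[1]                                # [[n]] = the number n
    if tag == "plus":
        return denote(term[1]) + denote(term[2])      # [[P+Q]] = [[P]] + [[Q]]
    raise ValueError(f"no denotation for {tag}")

prog = ("plus", ("num", 2), ("num", 3))
assert denote(prog) == 5    # the denotation is a number, not a term
```

Here the result is an element of the semantic domain, not a canonical term, which is the sense in which denotational semantics claims to reach mathematical bedrock.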
One way of addressing this is via the observation that mathematical theories such as set theory and category theory are axiomatic theories. And it is this that makes them mathematical. This is implicit in the modern axiomatic treatment of mathematics encouraged by Bourbaki (1968) and championed by Hilbert. The following is from Hilbert's letter to Frege of December 29, 1899 (Frege 1980: 40).
In my opinion, a concept can be fixed logically only by its relations to other concepts. These relations, formulated in certain statements, I call axioms, thus arriving at the view that axioms (perhaps together with propositions assigning names to concepts) are the definitions of the concepts. I did not think up this view because I had nothing better to do, but I found myself forced into it by the requirements of strictness in logical inference and in the logical construction of a theory. I have become convinced that the more subtle parts of mathematics … can be treated with certainty only in this way; otherwise one is going around in a circle….. [I]t is surely obvious that every theory is only a scaffolding or schema of concepts together with their necessary relations to one another, and that the basic elements can be thought of in any way one likes. If in speaking of my points I think of some system of things, e.g., the system: love, law, chimney-sweep … and then assume all my axioms as relations between these things, then my propositions, e.g., Pythagoras' theorem, are also valid for these things. In other words: any theory can always be applied to infinitely many systems of basic elements.
It is worth pointing out that the axiomatic account, as long as it is precise and supports mathematical reasoning, does not need to be formal. If one accepts this as a necessary condition for mathematical status, does it rule out operational accounts? Prima facie it would seem so: programs are apparently reduced to canonical constants with no axiomatic definitions. But Turner (2009b, 2010) argues that this is to look in the wrong place for the axiomatization: it resides not in the interpreting constants but in the rules of evaluation, i.e., in the theory of reduction given by the axiomatic relation ⇓.
If this is right, then both accounts define matters axiomatically, and, as long as they agree, it should not matter which we take to define the language as a formal mathematical theory. Unfortunately, they do not always agree: the notion of equality provided by the operational account, although preserved by the denotational one, is often finer grained. This has led to very special forms of denotational semantics based upon games (Abramsky and McCusker 1995; Abramsky et al. 1994). However, it is clear that practitioners take the operational account as fundamental, and this is witnessed by the fact that they seek to devise denotational accounts that are in agreement with the operational ones.
Not only is there no metaphysical difference between the set theoretic account and the operational one, but the latter is taken to be the definitive one. This view of programming languages is the perspective of theoretical computer science: programming languages, via their operational definitions, are mathematical theories of computation.
However, programming languages are very combinatorial in nature. They are working tools, not elegant mathematical theories, and it is very hard to explore them mathematically. Does this prevent them from being mathematical theories? There has been very little discussion of this issue in the literature; Turner (2010) and Strachey (2000) are exceptions. On the face of it, Strachey sees them as mathematical objects pure and simple. Turner is a little more cautious and argues that actual programming languages, while often too complex to be explored as mathematical theories, contain a core theory of computation which may be conservatively extended to the full language.
However, Turner (2013) further argues that programming languages, even at their core, are not just mathematical objects. He argues that they are best conceptualized as technical artifacts. While their axiomatic definition provides their function, they also require an implementation. In the language of technical artifacts, a structural description of the language must say how this is to be achieved: it must spell out how the constructs of the language are to be implemented. To illustrate the simplest case, consider the assignment instruction.
x := E
A physical implementation might take the following form.
- Physically compute the value of E.
- Place the physical token for the value of E in the physical location named x, replacing any existing token of a value.
This is a description of how assignment is to be physically realized: a physical description of the process of evaluation. Of course, a complete description will spell out more, but presumably not what the actual machine is made of; one assumes that this would be part of the structural description of the underlying computer, the medium of implementation. The task of the structural description is only to describe the process of implementation on a family of similarly structured physical machines. Building on this, we stipulate how the complex constructs of the language are to be implemented. For example, to execute commands in sequence we could add a physical stack that arranges them for processing in sequence. Of course, matters are seldom this straightforward. Constructs such as iteration and recursion require more sophisticated treatment. Indeed, interpretation and compilation may involve many layers and processes. However, in the end there must be some interpretation into the medium of a physical machine. Turner (2013) concludes that the thing that is a programming language is a complex package of syntax/semantics (function) together with an implementation (structure).
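The two steps for assignment can be mirrored at the abstract level by a simple store model (a sketch with invented names; the structural description would say how such a store is realized in hardware):

```python
# A sketch of the functional (abstract) side of assignment "x := E":
# a store maps locations, here named by strings, to values; executing
# the assignment replaces whatever value the location currently holds.

def eval_expr(expr, store):
    # E is either a literal value or the name of a location to read.
    return store[expr] if isinstance(expr, str) else expr

def assign(name, expr, store):
    new_store = dict(store)               # copy, then replace the token
    new_store[name] = eval_expr(expr, store)
    return new_store

store = {"y": 7}
store = assign("x", "y", store)           # x := y
store = assign("y", 3, store)             # y := 3
assert store == {"x": 7, "y": 3}
```

The sketch captures only the function of assignment; on the artifact view, a programming language is this kind of abstract account packaged together with a structural account of how locations and tokens are physically realized.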
Some have suggested that a physical implementation actually defines the semantics of the language. Indeed, this is a common perspective in the philosophy of computer science literature. We have already seen that Rapaport (1999) sees implementation as a semantic interpretation. Fetzer (1988) observes that programs have a different semantic significance to theorems. In particular, he asserts:
…programs are supposed to possess a semantic significance that theorems seem to lack. For the sequences of lines that compose a program are intended to stand for operations and procedures that can be performed by a machine, whereas the sequences of lines that constitute a proof do not. (Fetzer 1988: 1059)
This seems to say that the physical properties of the implementation contribute to the meaning of programs written in the language. Colburn (2000) is more explicit when he writes that the simple assignment statement A := 13×74 is semantically ambiguous between something like the abstract account we have given, and the physical one given as:
physical memory location A receives the value of physically computing 13 times 74. (Colburn 2000: 134)
The phrase physically computing seems to imply that what the physical machine actually does is semantically significant, i.e., what it actually does determines or contributes to the meaning of assignment. Is this to be taken to imply that to fix what assignment means we have to carry out a physical computation? However, if an actual physical machine is taken to contribute in any way to the meaning of the constructs of the language, then their meaning is dependent upon the contingencies of the physical device. In particular, the meaning of the simple assignment statement may well vary with the physical state of the device and with contingencies that have nothing to do with the semantics of the language, e.g., power cuts. Under this interpretation, multiplication does not mean multiplication but rather whatever the physical machine actually does when it simulates multiplication. This criticism parallels that for causal theories of function (§2.4).
The nature of programs has been the subject of a good amount of philosophical and legal reflection. What kinds of things are they? Are they mathematical/symbolic objects or concrete physical things? Indeed, the legal literature even contains a suggestion that programs constitute a new kind of (legal) entity.
The exact nature of computer programs is difficult to determine. On the one hand, they are related to technological matters. On the other hand, they can hardly be compared to the usual type of inventions. They involve neither processes of a physical nature, nor physical products, but rather methods of organization and administration. They are thus reminiscent of literary works even though they are addressed to machines. Neither industrial property law nor copyright law in their traditional roles seems to be the appropriate instrument for the protection of programs, because both protections were designed for and used to protect very different types of creations. The unique nature of the computer program has led to broad support for the creation of sui generis legislation. (Loewenheim 1989: 1)
This highlights the curious legal status of programs. Indeed, it raises tricky ontological questions about the nature of programs and software: they appear to be abstract, even mathematical objects with a complex structure, and yet they are aimed at physical devices. In this section we examine some of the philosophical issues that have arisen regarding the nature of programs and software. Some of them have legal origins and potentially legal consequences.
What is the content of the claim that programs are mathematical objects? In the legal literature the debate seems to centre on the notion that programs are symbolic objects that can be formally manipulated (Groklaw 2011, 2012—see Other Internet Resources). Indeed, there is a branch of theoretical computer science called formal language theory that treats grammars as objects of mathematical study (Hopcroft and Ullman 1969). While this does give some substance to the claim, this is not the most important sense in which programs are mathematical. This pertains to their semantics, where programming languages are taken to be axiomatic theories (§4.2). This perspective locates programs as elements in a theory of computation (Turner 2007, 2010).
But no matter in what sense programs are taken to be mathematical objects, there are legal ramifications. If programs and software are taken to be abstract mathematical things, then they cannot easily be governed by copyright. Normally, copyright law excludes pure abstract ideas. Mooers (1975) attempts to meet this objection by drawing a distinction between an idea that is not copyrightable, and a copyrightable expression of an idea.
Where does the “expression” leave off, and the “idea” take over? The best insight into this matter comes from discussions of copyright as applied to novels and dramatic productions. In these, “expression” is considered to include the choice of incident, the personalities and development of character, the choice of names, the elaboration of the plot, the choice of locale, and the many other minor details and gimmicks used to build the story. In other words, “expression” is considered to include not only the marks, words, sentences, and so on in the work, but also all these other details or structures as they aggregate into larger and larger units to make up the expression of the entire story … In other words, after the bare abstract “idea” has been chosen (e.g., boy meets girl, boy loses girl, boy wins girl), the “expression” to which copyright applies covers the remaining elements of original choice and artistry which are supplied by the author in order for him to develop, express, and convey his version of the bare idea. (Mooers 1975: 50)
However, in the case of programs, is it clear how to mark the boundary between an idea and an expression of it? Is the idea of a program contained in its semantics or its specification? How is it related to the algorithm/program distinction?
While agreeing that programs have an abstract guise, much of the philosophical literature (e.g., Colburn 2000; Moor 1978) has it that they also possess a concrete physical manifestation that facilitates their use as the cause of computations in physical machines. For example, Moor observes:
It is important to remember that computer programs can be understood on the physical level as well as the symbolic level. The programming of early digital computers was commonly done by plugging in wires and throwing switches. Some analogue computers are still programmed in this way. The resulting programs are clearly as physical and as much a part of the computer system as any other part. Today digital machines usually store a program internally to speed up the execution of the program. A program in such a form is certainly physical and part of the computer system. (Moor 1978: 215)
The following is of more recent origin, and more explicitly articulates the duality thesis in its claim that software has both abstract and physical guises.
Many philosophers and computer scientists share the intuition that software has a dual nature (Moor 1978, Colburn 2000). It appears that software is both an algorithm, a set of instructions, and a concrete object or a physical causal process. (Irmak 2012: 3)
Some shadow of this duality emerges in the legal literature: to be subject to patent, programs must have some technical effect.
In this context, the Court described a “technical nature” as an “instruction to a systematic acting by utilizing controllable natural forces to achieve a causally predictable result”. In other words, the key element which characterizes the technical nature of an invention is the control of natural forces to achieve a predicted result. (Loewenheim 1989: 2)
Apparently, to qualify for a patent, a computer program cannot be simply an abstract thing; it must also utilize controllable forces of nature to achieve predictable results. It must cause a computer to do something like display items on screen, store a particular pattern in a memory, or activate a peripheral device: it must provide a technical effect. This is in line with the notion that programs are technical artifacts. However, the legal situation is very far from being all moonlight and roses. The debate is ongoing (Rapaport 2010; Groklaw 2011, 2012—see Other Internet Resources).
Anyone persuaded by the abstract/physical duality for programs is under an obligation to say something about the relationship between these two forms of existence. This is the major philosophical concern and parallels the question for technical artifacts in general.
One immediate suggestion is that programs, as textual objects, cause mechanical processes. The idea seems to be that somehow the textual object physically causes the mechanical process. Colburn (2000, 1999) denies that the symbolic text itself has any causal effect; it is its physical manifestation, the thing on the disk, which has such an effect. For him, software is a concrete abstraction that has a medium of description (the text, the abstraction) and a medium of execution (e.g., a concrete implementation in semiconductors). The duality is unpacked in a way that is parallel to that found in the philosophy of mind (see the entry on dualism), where the physical device is taken as a semantic interpretation of the abstract one. This is close to the perspective of Rapaport (1999). However, we have already alluded to problems with this approach (§3.3).
A slightly different account can be found in Fetzer (1988). He suggests that abstract programs are something like scientific theories: a program is to be seen as a theory of its physical implementation—programs as causal models. In particular, the simple assignment statement and its semantics is a theory about a physical store and how it behaves. If this is right, and a program turns out not to be an accurate description of the physical device that is its implementation, the program must be changed: if the theory which is enshrined in the program does not fit the physical device, being a theory, it should be changed. But this does not seem to be what happens in practice. While the program may have to be changed, this is not instigated by any lack of accord with its physical realization, but by an independent abstract semantics for assignment. If this is correct, the abstract semantics appears not to be a theory of its concrete implementation.
The alternative picture has it that the abstract program (determined by its semantics) provides the function of the artifact, and the physical artifact, or rather its description, provides its structure. It is the function of the program, expressed in its semantics, that fixes the physical implementation and provides the criteria of correctness and malfunction. Programs as computational artifacts have both an abstract aspect that somehow fixes what they do, and a physical aspect that enables them to cause physical things to happen.
What is the difference between programming and specification? One suggestion is that a specification tells us what it is to do without actually saying how to do it. For instance, the following is a specification written in VDM (Jones 1990).
SQRTP (x:real, y:real)
Pre: x ≧ 0
Post: y∗y = x and y ≧ 0
This is a specification of a square root function with the precondition that the input is non-negative. It is a functional description in that it says what the program must do without saying how it is to be achieved. One way to unpack this what/how difference is in terms of the descriptive/imperative distinction. Programs are imperative and say how to achieve the goal, whereas specifications are declarative and only describe the input/output behavior of the intended program. Certainly, in the imperative programming paradigm, this seems to capture a substantive difference. But it is not appropriate for all paradigms. For example, logic and functional programming languages (Thompson 2011) are not obviously governed by it. The problem is that programming languages have evolved to a point where this way of describing the distinction is not marked by the style or paradigm of the programming language. Indeed, in practice, a program written in Haskell (Thompson 2011) could act as a specification for a program written in C (Huss 1997, Other Internet Resources).
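The VDM specification above can be rendered as a runtime check (a sketch; since exact equality y∗y = x generally fails in floating-point arithmetic, the postcondition below is checked up to a tolerance, an implementation choice not present in the VDM text):

```python
import math

def pre(x):
    # Pre: x ≥ 0
    return x >= 0

def post(x, y, eps=1e-9):
    # Post: y*y = x and y ≥ 0, checked up to a floating-point tolerance.
    return abs(y * y - x) <= eps * max(1.0, x) and y >= 0

def sqrtp(x):
    # One candidate program claimed to satisfy the specification; the
    # spec says nothing about *how* the root is computed.
    assert pre(x), "precondition violated"
    y = math.sqrt(x)
    assert post(x, y), "postcondition violated"
    return y

assert post(2.0, sqrtp(2.0))
assert post(0.0, sqrtp(0.0))
```

Any program whose input/output behavior satisfies `pre`/`post` counts as meeting the specification, which is what makes the specification declarative rather than imperative.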
A more fundamental difference concerns the direction of governance i.e., which is the normative partner in the relationship, and which is the submissive one. In the case of the specification of the square root function, the artifact is the linguistic program. When the program is taken as the specification, the artifact is the next level of code, and so on down to a concrete implementation. This is in accord with Rapaport (2005) and his notion of the asymmetry of implementation.
One of the earliest philosophical disputes in computer science centers upon the nature of program correctness. The overall dispute was set in motion by two papers (De Millo et al. 1979; Fetzer 1988) and was carried on in the discussion forum of the ACM (e.g., Ashenhurst 1989; Technical Correspondence 1989). The pivotal issue derives from the duality of programs, and what exactly is being claimed to be correct relative to what. Presumably, if a program is taken to be a mathematical thing, then it has only mathematical properties. But seen as a technical artifact it has physical ones.
On the face of it, Tony Hoare seems to be committed to what we shall call the mathematical perspective, i.e., that correctness is a mathematical affair: establishing that a program is correct relative to a specification involves only a mathematical proof.
Computer programming is an exact science in that all the properties of a program and all the consequences of executing it in any given environment can, in principle, be found out from the text of the program itself by means of purely deductive reasoning. (Hoare 1969: 576)
Consider our specification of a square root function. What does it mean for a program P to satisfy it? Presumably, relative to its abstract semantics, every program P carves out a relation RP between its input and output: its extension. The correctness condition insists that this relation satisfies the above specification, i.e.,
(C) ∀x:Real. ∀y:Real. x ≧ 0 → (RP(x, y) → y∗y = x and y ≧ 0)
This demands that the abstract program, determined by the semantic interpretation of its language, satisfies the specification. The statement (C) is a mathematical assertion between two abstract objects and so, in principle, its correctness may be established mathematically. A mathematical relationship of this kind is surely what Tony Hoare has in mind, and in terms of the abstract guise of the program, there is little to disagree with. However, there are several concerns here. One has to do with the complexity of modern software (the complexity challenge), and the other with the nature of physical correctness (the empirical challenge).
Programmers are always surrounded by complexity; we cannot avoid it. Our applications are complex because we are ambitious to use our computers in ever more sophisticated ways. Programming is complex because of the large number of conflicting objectives for each of our programming projects. If our basic tool, the language in which we design and code our programs, is also complicated, the language itself becomes part of the problem rather than part of its solution. (Hoare 1981: 10)
Within the appropriate mathematical framework, proving the correctness of any linguistic program, relative to its specification, is theoretically possible. However, real software is complex, and in such cases proving correctness may be practically infeasible. One might attempt to gain some ground by advocating that classical correctness proofs be carried out by a theorem prover, or at least that one be employed somewhere in the process. However, the theorem prover must itself be proven correct. While this may reduce the correctness problem to that of a single program, it still means that we are left with the correctness problem for a large program. Moreover, in itself this does not completely solve the problem: for both theoretical and practical reasons, human involvement is not completely eliminated. In most cases proofs are constructed by hand with the aid of interactive proof systems. Even so, a rigorous proof of correctness is rarely forthcoming. One might only require that individual correctness proofs be checked by a computer rather than a human. But of course the proof checker is itself in need of checking. Arkoudas and Bringsjord (2007) argue that since there is only one correctness proof that needs to be checked, namely that of the proof checker itself, the possibility of mistakes is significantly reduced.
This is very much a practical issue. However, there is a deeper conceptual one. Are proofs of program correctness genuine mathematical proofs, i.e., are such proofs on a par with standard mathematical ones? De Millo et al. (1979) claim that correctness proofs are unlike proofs in mathematics: the latter are conceptually interesting, compelling, and attract the attention of other mathematicians who want to study and build upon them. This argument parallels the graspability arguments made in the philosophy of mathematics. Proofs that are long, cumbersome, and uninteresting cannot be the bearers of the kind of certainty that is attributed to standard mathematical proofs. The knowledge obtained from correctness proofs is said to be different from the knowledge that may be gleaned from standard proofs in mathematics. In order to be taken in, proofs must be graspable. Indeed, Wittgenstein would have it that proofs that are not graspable cannot act as norms, and so are not mathematical proofs (Wittgenstein 1956).
Mathematical proofs such as the proof of Gödel's incompleteness theorem are also long and complicated. But they can be grasped. What renders such complicated proofs transparent, interesting and graspable involves the use of modularity techniques (e.g., lemmas), and the use of abstraction in the act of mathematical creation. The introduction of new concepts enables a proof to be constructed gradually, thereby making the proofs surveyable. Mathematics progresses by inventing new mathematical concepts that facilitate the construction of proofs that would be far more complex and even impossible without them. A very elementary example concerns the exponent notation. This makes it possible to extend computation beyond the complexity of multiplication, and to argue about the results. At the other extreme, the invention of category theory facilitated the statement and proof of very general results about algebraic structures that automatically apply to a whole range of such structures. Mathematics is not just about proof; it also involves the abstraction and creation of new concepts and notation. In contrast, formal correctness proofs do not seem to involve the creation of new concepts and notations. While computer science does involve abstraction, it is not quite in the same way.
One way of addressing the complexity problem is to change the nature of the game. The classical notion of correctness links the formal specification of a program to its formal semantic representation. It sits at one end of the mathematical spectrum. However, chains of specification/artifact pairings, positioned at varying degrees of abstraction, are governed by different notions of correctness. For example, in the object-oriented approach, the connection between a UML specification and a Java program is little more than type checking. The correctness criteria involve structural similarities and identities (Gamma et al. 1994). Here we do not demand that one infinite mathematical relation is extensionally governed by another. At higher levels of abstraction, we may have only connections of structure. These are still mathematical relationships. However, such methods, while they involve less work, and may even be automatically verified, establish much less.
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility. (Fetzer 1988: 1)
In fact this issue is alluded to by Hoare in the very text that Fetzer employs to characterize Hoare's mathematical stance on correctness.
When the correctness of a program, its compiler, and the hardware of the computer have all been established with mathematical certainty, it will be possible to place great reliance on the results of the program, and predict their properties with a confidence limited only by the reliability of the electronics. (Hoare 1969: 579)
All parties seem to agree that computational systems are at bottom physical systems, and that some unpredictable behavior may arise because of the causal connections. Indeed, even when theorem provers and proof checkers are used, the results still only yield empirical knowledge. A proof checker is a program running on a physical machine. It is a program that has been implemented, and its results depend upon a physical computation. Consequently, at some level, we shall need to show that some physical machine operations meet their specification. Testing and verification seem only to yield empirical evidence. Indeed, the complexity of program proving has led programmers to take physical testing to be evidence that the abstract program meets its specification. Here the assumption is that the underlying implementation is correct. But prima facie, it is only empirical evidence.
In apparent contrast, Burge (1988) argues that knowledge of such computer proofs can be taken as a priori knowledge. According to Burge, a priori knowledge does not depend for its justification on any sensory experience. However, he allows that a priori knowledge may depend for its possibility on sensory experience; e.g., knowledge that red is a color may be a priori even though having this knowledge requires sensory experience of red in order to acquire the concepts needed to formulate it. If correct, this closes the gap between a priori and a posteriori claims about computer-assisted correctness proofs, but only by redrawing the boundary between a priori and a posteriori knowledge so that some empirical assertions can fall into the former category. For more discussion of the use of computers in mathematical proofs, see Hales 2008; Harrison 2008; Tymoczko 1979, 1980.
Unfortunately, practice often does not even get this far. Generally, software engineers do not construct classical correctness proofs by hand or even automatically. Testing of software against its specification on suites of test cases is the best that is normally achieved. Of course this never yields correctness in the mathematical sense. Test cases can never be exhaustive (Dijkstra 1974). Furthermore, there is a hidden assumption that the underlying implementation is correct: at best, these empirical methods tell us something about the whole system. Indeed, the size of the state space of a system may be so large and complex that even direct testing is infeasible. In practice, the construction of mathematical models that approximate the behavior of complex systems is the best we can do.
The whole correctness debate carried out in the forum of the ACM (e.g., Ashenhurst 1989; Technical Correspondence 1989) is put into some perspective when programs are considered as technical artifacts. But this leaves one final topic: when we have reached physical structure, what notion of correctness operates?
What is it for a physical device to meet its specification? What is it for it to be a correct physical implementation? The starting point for much contemporary analysis is often referred to as the simple mapping account.
According to the simple mapping account, a physical system S performs as a correct implementation of an abstract specification C just in case (i) there is a mapping from the states ascribed to S by a physical description to the states defined by the abstract specification C, such that (ii) the state transitions between the physical states mirror the state transitions between the abstract states. Clause (ii) requires that for any abstract state transition of the form s1 → s2, if the system is in the physical state that maps onto s1, it then goes into the physical state that maps onto s2.
To illustrate what the simple mapping account amounts to, we consider the example of our abstract machine (§2.1), where we employ an instance of the machine that has only two locations l and r, and two possible values 0 and 1. Consequently, we have only four possible states: (0,0), (0,1), (1,0) and (1,1). The computation table for the update operation may be easily computed by hand, and takes the form of a table with input/output pairings. For example, Update(r,1) sends the state (0,0) to the state (0,1). The simple mapping account only demands that the physical system can be mapped onto the abstract one in such a way that the abstract state transitions are duplicated in the physical version.
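The mirroring condition can be made concrete with a small sketch (a minimal Python illustration; the names and the stone example's details are ours). We enumerate the computation table for Update(r,1) by hand, map four arbitrary physical tokens onto the abstract states, and check that clause (ii) holds by construction:

```python
from itertools import product

# Abstract machine from the text: two locations 'l' and 'r', values 0 and 1.
# A state is a pair (value at l, value at r).
STATES = list(product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)

def update(state, loc, val):
    """Update(loc, val): write val into loc, leaving the other location alone."""
    l, r = state
    return (val, r) if loc == "l" else (l, val)

# The computation table for Update(r,1): input/output pairings.
table = {s: update(s, "r", 1) for s in STATES}
assert table[(0, 0)] == (0, 1)  # Update(r,1) sends (0,0) to (0,1)

# A hypothetical physical system: four colored stones, each mapped onto an
# abstract state by an arbitrary interpretation.
stones = ["red", "green", "blue", "yellow"]
interpretation = dict(zip(stones, STATES))
inverse = {state: stone for stone, state in interpretation.items()}

# The "physical" transition function is simply chosen so that it agrees with
# the table under the interpretation, which is all that SMA demands.
physical_transition = {s: inverse[table[interpretation[s]]] for s in stones}

def implements(phys_trans, interp, abstract_table):
    """Clause (ii): every physical transition mirrors an abstract one."""
    return all(interp[phys_trans[p]] == abstract_table[interp[p]]
               for p in phys_trans)

print(implements(physical_transition, interpretation, table))  # True
```

That the check succeeds for any such arrangement of tokens is precisely the weakness the next paragraph complains about: extensional agreement is trivially easy to manufacture.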
Unfortunately, such a device is easy to come by: almost anything with enough things to play the role of the physical states will satisfy this quite weak demand of what it is to be an implementation. For example, any collection of colored stones arranged as the update table will be taken to implement the table. The simple mapping account (SMA) only demands extensional agreement. It is a de facto demand. This leads to a form of pancomputationalism where almost any physical system implements any computation.
The danger of pancomputationalism has driven some authors (D. J. Chalmers 1996; Egan 1992; Sprevak 2012) to attempt to provide an account of implementation that somehow restricts the class of possible interpretations. In particular, certain authors (D. J. Chalmers 1996; Copeland 1996) seek to impose causal constraints on such interpretations. One suggestion is that we replace the material conditional (if the system is in the physical state that maps onto s1 …) by a counterfactual one. In contrast, the semantic account insists that a computation must be associated with a semantic aspect which specifies what the computation is to achieve (Sprevak 2012). For example, a physical device could be interpreted as an and gate or an or gate. It would seem to depend upon what we take to be the definition of the device. Without such a definition there is no way of fixing what the artifact is. The syntactic account demands that only physical states that qualify as syntactic may be mapped onto computational descriptions, thereby qualifying as computational states. If a state lacks syntactic structure, it is not computational. Of course, what remains to be seen is what counts as a syntactic state. A good overview can be found in the entry on computation in physical systems.
Turner (2012) argues that abstract structure and physical structure are linked, not just by being in agreement, but also by the intention to take the former as having normative governance over the latter. On this account, computations are technical artifacts whose function is fixed by an abstract specification. This relationship is neither that of theory to physical object nor that of syntactic thing to semantic interpretation.
But there is an ambiguity here that is reflected in the debate between those who argue for semantic interpretation (Sprevak 2012), and those who argue against it (Piccinini 2008). Consider programs. What is the function of a program? Is it fixed by its semantic interpretation or is it fixed by its specification? The ambiguity here concerns the function of a program as part of a programming language or its role as part of a larger system. As a program in a language, it is fixed by the semantics of the language as a whole. However, to use a program as part of a larger system, one needs only know what it does. The function of the program, as part of a larger system, is given by its specification. When a computation is picked out by a specification, exactly how the program achieves its specification is irrelevant to the system designer. The specification acts as an interface, and the level of abstraction employed by the system designer is central.
Although programming is one of the core activities of computer science, there has been little sustained philosophical debate about its nature. What there has been revolves around the following question: what kind of activity is programming? Is it mathematics, engineering, science or art (Knuth 1974)? This debate is about the methodology of computer science rather than the ontological status of programs, but inevitably the latter will leak into the discussion.
One very influential computer scientist claimed that programming is, or should be, methodologically a mathematical activity. This is aimed at the actual process of program construction. Hoare suggests that programs can be derived from their specifications in a mathematically precise way using program transformations that are based upon algebraic laws.
I hold the opinion that the construction of computer programs is a mathematical activity like the solution of differential equations, that programs can be derived from their specifications through mathematical insight, calculation, and proof, using algebraic laws as simple and elegant as those of elementary arithmetic (Hoare 1969: 576).
This can also be seen as an alternative approach to the correctness problem: programs are derived from their specifications by laws that preserve correctness. Morgan (1994) develops the transformational approach for imperative-style languages. The functional programming community has also embraced this style of transformational programming (Henson 1987). Martin-Löf (1982) advocated the use of constructive mathematics as a means of deriving programs from (constructive) existence proofs. However, in the contemporary scene, there is little evidence that real software is developed in this way. Systematic program development from specifications is still an active research topic, but the complexity issue is still with us.
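A toy illustration of the transformational idea (our own example, not drawn from Morgan or Martin-Löf): a specification-like definition is replaced by an efficient program via an algebraic law, and the law is what guarantees that the two agree extensionally.

```python
# Specification: the sum of the first n naturals, written as directly as
# the informal specification would state it.
def sum_spec(n: int) -> int:
    return sum(range(n + 1))

# Transformation step: replace the definition by a closed form, justified by
# the algebraic law  0 + 1 + ... + n = n(n+1)/2.  The law, not testing,
# is what licenses the replacement.
def sum_prog(n: int) -> int:
    return n * (n + 1) // 2

# The derived program agrees with the specification; we can spot-check:
assert all(sum_spec(k) == sum_prog(k) for k in range(100))
print(sum_prog(10))  # 55
```

In the transformational picture, each such step is an application of a correctness-preserving law, so the final program is correct by construction; the difficulty, as the text notes, is scaling this discipline to real software.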
The position of Dijkstra (1974) is somewhat softer. He only seems to be suggesting that programming involves rigorous reasoning. Here at least, he is not advocating that programs can be mathematically developed from specifications.
With respect to mathematics I believe, however, that most of us can heartily agree upon the following characteristics of most mathematical work: 1. compared with other fields of intellectual activity, mathematical assertions tend to be unusually precise. 2. Mathematical assertions tend to be general in the sense that they are applicable to a large (often infinite) class of instances. 3. Mathematics embodies a discipline of reasoning allowing such assertions to be made with an unusually high confidence level.
The mathematical method derives its power from the combination of all these characteristics; conversely, when an intellectual activity displays these three characteristics to a strong degree, I feel justified in calling it “an activity of mathematical nature”, independent of the question whether its subject matter is familiar to most mathematicians. (Dijkstra 1974: 2)
This claims that the development of programs and software involves precise reasoning, presumably about the underlying type structure and control mechanisms. There is little doubt that something like this is true. While programmers seldom prove rigorous theorems about software, this obviously does not mean that they do not reason when designing and testing programs. Programming is a precise intellectual activity that involves careful reasoning about the effects of programming constructs on some underlying machine. This is a fairly minimalistic claim that is hard to dispute.
But even if we accept this, we are not told what kind of mathematical activity programming is. We are only told it involves rigorous reasoning about some kind of formal objects. However, on the assumption that programming languages are axiomatic theories of computation, there is a simple characterization. When glossed with the perspective of (§4.2), programs are definitions within the mathematical theory of the programming language. This does not argue for any kind of methodological stance, only that programs are definitions and programming is the activity of creating them; programming is a definitional enterprise. If one accepts this, there is a logical uniformity about specification and programming: both specifications and programs are definitions inside their containing mathematical theories.
However, the invention of software engineering (Summerville 2012; Vliet 2008) moved the goal posts. While formal methods form a part of many methodologies, modern software development uses a rich mixture of methods and techniques. Programming in the raw involves the construction of technical artifacts, not mathematical objects. In this more complete picture, programming is an engineering activity.
There are some general design principles that do have a conceptual ring about them. The structured programming movement (Dijkstra 1970) argued that programs should be constructed in a transparent way that is in some sense compositional. On this basis programmers were urged to avoid the use of goto statements and unstructured jumps. This had some impact upon programming language design. The Turing award lecture (Floyd 1979) contains some further reflections on structured programming and its impact upon software development. In a more modern form, encapsulation (Mitchell 2003) tries to ensure that programs are self-contained units with transparent and self-contained functional specifications—avoiding leaky modules that make reference to lower levels of abstraction.
These are principles of design. But these notions have yet to be subject to much philosophical analysis. Indeed, the whole of the philosophy of engineering design, with all its conceptual questions and apparatus, for example as outlined in Kroes (2012), has yet to be applied to computational artifacts.
Some claim that software development is a scientific activity. Indeed, there is a hint of such a perspective in many of the Turing award lectures (ACM 2013—see Other Internet Resources), where one can find many variations on the more general claim that computer science is a scientific activity. One folklore claim has it that programs act as scientific theories or models. However, such theories are always up for revision; they provide defeasible knowledge (Rosenberg 2000; A. F. Chalmers 1999). More importantly, when there is a mismatch between the theory and the natural world, the theories may have to be given up in the light of experimentation. Program verification certainly fits the testing and verifying methodology. But it is not clear that its purpose is the verification of a model or theory. When a program is tested, it is tested against a specification, not for its agreement with the physical world. It would seem that programs, in their primary role, are not intended to be scientific theories of anything; they function as artifacts.
Moreover, while programs may themselves function as specifications, scientific theories are concerned with natural objects and phenomena, whereas specifications are concerned with intentionally produced things. And, as we have already observed (§4.2), in regard to their underlying logic, there appears to be a fundamental difference between specifications and theories.
However, there is a very specific argument that needs consideration. Angius (2013) contains an argument for the claim that modern software development is partly a scientific activity. This argument is based on the observation that software testing and verification employ model-based techniques; the software system is modeled because of the complexity of the real thing. Here there is a direct analogy with scientific models. If the model gets things wrong, i.e., does not capture the system, then it will be modified or replaced. This process resembles standard model construction in science (Frigg and Hartmann 2012). However, it would seem that the purpose here is different. The function of the model is to stand proxy for the system in the process of verification. We are not trying to model the system as a natural thing but as a technical artifact. The methodology may be similar, but we are dealing with a designed artifact. The normative role of the specification is still in force, except that now the specification has normative force over the system indirectly, via the model. On the face of it, the fact that computer scientists build mathematical models of complex computational systems does not by itself mean that they are engaged in scientific activity in the same way that physicists are.
Computer science… differs from physics in that it is not actually a science. It does not study natural objects. Neither is it, as you might think, mathematics; although it does use mathematical reasoning pretty extensively. Rather, computer science is like engineering; it is all about getting something to do something, rather than just dealing with abstractions, as in the pre-Smith geology. (Feynman 2000 [1984–1986]: 8)
Abstraction facilitates computer science. Without it we would not have progressed from the programming of numerical algorithms to the software sophistication of air traffic control systems, interactive proof development frameworks, and computer games. It is manifested in the rich type structure of contemporary programming and specification languages, and underpins the design of these languages with their built-in mechanisms of abstraction. It has driven the invention of notions such as polymorphism, data abstraction, classes, schema, design patterns, and inheritance. But what is the nature of abstraction in computer science? Is there just one form of it? Is it the same notion that we find in mathematics?
This is how Skemp (1987) describes abstraction. His conception begins with similarity recognition and results in the embodiment of this similarity in a new concept.
Abstracting is an activity by which we become aware of similarities … among our experiences. Classifying means collecting together our experiences on the basis of these similarities. An abstraction is some kind of lasting change, the result of abstracting, which enables us to recognize new experiences as having the similarities of an already formed class…. To distinguish between abstracting as an activity and abstraction as its end-product, we shall call the latter a concept. (Skemp 1987: 5)
In the opening paragraph of his lectures on data structuring (Hoare 1973: 83), the computer scientist Tony Hoare describes abstraction in much the same way.
Abstraction arises from recognition of similarities between certain objects, situations, or processes in the real world, and the decision to concentrate upon those similarities and to ignore for the time being the differences.
Such abstraction involves ignoring concrete features, the structural aspects of the device or process. From this voluntary ignorance a new concept emerges. This has it that abstraction is a mental process. Unfortunately, so conceived, it employs a much criticized perspective in the philosophy of mind (see the entry on abstract objects).
A related approach is more logical in nature. Crispin Wright (1983) and Bob Hale (1987) have developed an account of abstraction that emanates from Frege. Consider the following equations.
- The direction of a = the direction of b if and only if a is parallel to b.
- The number of Fs = the number of Gs if and only if there are just as many Fs as Gs.
Expressions such as "the direction of a" and "the number of Fs" form the basis of abstraction; such equivalences are abstraction principles. They appear to hold in virtue of the meaning of the expressions on the right-hand sides: on the face of it, mastery of the concept of a direction presupposes mastery of the concept of parallelism, but not vice versa. This is discussed in more detail in the entry on abstract objects. However, the adequacy of this account to deal with the abstraction mechanisms of computer science has not been investigated.
Computer science abstraction takes many different forms. We shall not attempt to describe these in any systematic way here. However, Goguen (Goguen & Burstall 1985) describes some of this variety of which the following examples are instances.
One kind involves the idea of repeated code: a program text, possibly with a parameter, is given a name (procedural abstraction). In Skemp's terms, the procedure brings a new concept into existence, where the similarity of structure is the common code. Formally, this is the abstraction of the Lambda calculus (see the entry on the lambda calculus). The parameter might even be a type, and this leads to the various mechanisms of polymorphism, which may be formalized in mathematical theories such as the second order Lambda calculus (Hankin 2004).
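Procedural abstraction and its type-parameterized extension can be sketched as follows (a minimal illustration in Python; the function names are ours). Naming a parameterized piece of code is the abstraction of the lambda calculus; making the parameter a type gives the flavor of polymorphism in the second-order lambda calculus:

```python
from typing import Callable, TypeVar

# Procedural abstraction: a repeated piece of code is named and given a
# parameter. Instead of writing x*x + 1, y*y + 1, ... everywhere, we
# abstract once; in lambda-calculus terms this is  λx. x*x + 1.
def square_plus_one(x: float) -> float:
    return x * x + 1

# When the parameter is itself a *type*, we obtain parametric polymorphism.
# `twice` works uniformly at every type T: it is one definition, applicable
# to floats, strings, lists, and so on.
T = TypeVar("T")

def twice(f: Callable[[T], T], x: T) -> T:
    """Apply f two times, whatever the type of x."""
    return f(f(x))

print(twice(square_plus_one, 2.0))    # 26.0  (2*2+1 = 5, 5*5+1 = 26)
print(twice(lambda s: s + "!", "a"))  # a!!
```

In Skemp's terms, `twice` embodies a similarity, applying an operation two times, that is common to computations over quite different kinds of entity.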
Recursion is an early example of operation or mechanism abstraction: it abstracts away from the mechanisms of the underlying machine. In turn, this facilitates the solution of complex problems without having to be aware of the operations of the machine. For example, recursion is implemented in devices such as stacks, but in principle the user of recursion does not need to know this.
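The point can be illustrated with a sketch (our own example, in Python): a recursive definition and a hand-compiled stack version compute the same function, but only the latter exposes anything like the machine-level mechanism.

```python
# Recursion as mechanism abstraction: the recursive definition says *what*
# the function is, not how the machine evaluates it.
def fact(n: int) -> int:
    return 1 if n == 0 else n * fact(n - 1)

# Underneath, implementations typically realize recursion with a stack of
# pending operations. The user of `fact` need never know this, but the two
# versions compute the same function:
def fact_with_stack(n: int) -> int:
    stack, result = [], 1
    while n > 0:            # push the pending multiplications
        stack.append(n)
        n -= 1
    while stack:            # unwind them, as each call's return would
        result *= stack.pop()
    return result

assert all(fact(k) == fact_with_stack(k) for k in range(10))
print(fact(5))  # 120
```

The first definition lives entirely at the level of the problem; the stack only appears when the mechanism is made explicit.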
The type structure of a programming or specification language determines the ontology of the language: the kinds of entity that we have at our disposal for representation and problem solving. To a large extent, types determine the level of abstraction of the language. A rich set of type constructors provides an expressive system of representation. Abstract and recursive types are common examples.
In object-oriented design, patterns (Gamma et al. 1994) are abstracted from the common structures that are found in software systems. Here, abstraction is the means of interfacing: it dissociates the implementation of an object from its specification. For example, abstract classes act as interfaces by providing nothing but the type structure of their methods.
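A minimal sketch of the idea (our own hypothetical `Stack` example, in Python): the abstract class supplies only the type structure of its methods, and clients written against it are indifferent to which concrete implementation is supplied.

```python
from abc import ABC, abstractmethod

# An abstract class provides nothing but the type structure of its methods:
# it is an interface, dissociating an object's specification from its
# implementation.
class Stack(ABC):
    @abstractmethod
    def push(self, x): ...

    @abstractmethod
    def pop(self): ...

# One concrete implementation among many possible ones. Any other class
# satisfying the interface could be substituted without changing clients.
class ListStack(Stack):
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

s: Stack = ListStack()   # clients see only the Stack interface
s.push(1)
s.push(2)
print(s.pop())  # 2
```

The interface is the level of abstraction at which the client reasons; the implementation details are hidden below it, though, as the next paragraphs stress, some implementation must exist.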
In addition, in mathematics (Mitchelmore and White 2004), computer science and philosophy (Floridi 2008) there are levels of abstraction. Abstractions in mathematics are piled upon each other in a never ending search for more and more abstract concepts. Likewise, computer science deals with the design and construction of artifacts through a complex process involving sequences of artifacts of decreasing levels of abstractness, until one arrives at the actual physical device.
In mathematics, once the abstraction is established, the physical device is left behind. On this account, the abstraction is self-contained: an abstract mathematical object takes its meaning only from the system within which it is defined. The only constraint is that the new objects be related to each other in a consistent system which can be operated on without reference to their previous meaning. Self-containment is paramount. There are no leaks. This is clearly expressed in Hilbert's view of axiomatization (§5.1).
Some argue that, in this respect at least, abstraction in computer science is fundamentally different to abstraction in mathematics (Colburn & Shute 2007). They claim that computational abstraction must leave behind an implementation trace. Information is hidden but not destroyed. Any details that are ignored at one level of abstraction (e.g., programmers need not worry about the precise location in memory associated with a particular variable) must not be ignored by one of the lower levels (e.g., the virtual machine handles all memory allocations). At all levels, computational artifacts crucially depend upon the existence of an implementation. For example, even though classes hide the implementation details of their methods, except for abstract ones, they must have implementations. This is in keeping with the view that computational artifacts have both function and structure: computational abstractions have both an abstract guise and an implementation.
However, matters are not quite so clean cut. While it is true that abstraction in mathematics generates objects whose meaning is defined by their relationships, the same is so in computer science. Abstract notions could not have a normative function unless they had such independent meanings. Moreover, certain forms of constructive mathematics resemble computer science in that there has to be an implementation trace: one must always be able to recover implementation information from proofs by reading between the lines. Of course, this is not the case for classical mathematics.
Moreover, many would argue that mathematical abstractions do not completely leave behind their physical roots.
One aspect of the usefulness of mathematics is the facility with which calculations can be made: You do not need to exchange coins to calculate your shopping bill, and you can simulate a rocket journey without ever firing one. Increasingly powerful mathematical theories (not to mention the computer) have led to steady gains in efficiency and reliability. But a calculational facility would be useless if the results did not predict reality. Predictions are successful to the extent that mathematical models appropriate aspects of reality and whether they are appropriate can be validated by experience. (Mitchelmore and White 2004: 330)
How is it that the axiomatic method has been so successful in this way? The answer is, in large part, because the axioms do indeed capture meaningful and correct patterns. … There is nothing to prevent anyone from writing down some arbitrary list of postulates and proceeding to prove theorems from them. But the chance of those theorems having any practical application [is] slim indeed. … Many fundamental mathematical objects (especially the more elementary ones, such as numbers and their operations) clearly model reality. Later developments (such as combinatorics and differential equations) are built on these fundamental ideas and so also reflect reality even if indirectly. Hence all mathematics has some link back to reality. (Devlin 1994: 54–55)
I want to say: it is essential to mathematics that its signs are also employed in mufti. It is the use outside mathematics, and so the meaning of the signs, that makes the sign-game into mathematics. Just as it is not logical in formation either, for me to make a change from one formation to another (say from one arrangement of chairs to another) if these arrangements have not a linguistic function apart from this transformation. (Wittgenstein 1956: 268)
It would appear that the difference between abstraction in computer science and abstraction in mathematics is not so sharp. However, there appears to be an important conceptual difference. If Turner (2011) is right, in computer science, the abstract partner is the dominant one in the relationship: it determines correctness. In the case of (applied) mathematics things are reversed: the mathematics is there to model the world, and it must model it accurately. In computer science, the relationship between the abstraction and its source is the specification/artifact relationship; in mathematics it is the model/theory to reality relationship. When things go wrong the blame is laid at a different place: with the artifact in computer science and with the model in mathematics.
The original motivation for a mathematical analysis of computation came from mathematical logic. Its origins are to be found in Hilbert's question concerning the decidability of predicate calculus: could there be an algorithm, a procedure, for deciding of an arbitrary sentence of the logic whether it was provable (the Entscheidungsproblem)? In order to address this question, a rigorous model of the informal concept of an effective or mechanical method in logic and mathematics was required. Providing this is first and foremost a mathematical endeavor: one has to develop a mathematical analogue of the informal notion. What is now taken to be the definitive account was given by Turing (1936).
However, Turing was not the first to attempt such a characterization. Historically, the first notions of computability were suggested in the early nineteen thirties.
- Gödel-Herbrand (Spring, 1934): General Recursive Functions. The computable functions are the general recursive functions.
- Church-Kleene (1931–1934): Lambda Calculus. The computable functions are those computable by Lambda terms.
However, Gödel was not convinced; he rejected both versions. He did not accept that they express an intensionally adequate notion of a mechanical method. They were extensional characterizations with little motivational justification. In contrast, Turing's proposed class of devices, now known as Turing machines, appeared to capture the informal notion. Furthermore, Turing proved that his machines and the Lambda calculus are extensionally equivalent. This led to the Turing version of the thesis, one formulation of which is the following.
- Turing's thesis:
- Logical computing machines (Turing's name for his machines) can do anything that could be described as “rule of thumb” or “purely mechanical”.
Even today, every program written in an existing implemented programming language is Turing computable and conversely, all general-purpose programming languages are Turing complete, i.e., they contain all the control constructs necessary to simulate a universal Turing machine. But what is so special about the Turing proposal?
Turing placed great emphasis on the nature of the basic instructions of a Turing machine. The following prose suggests the instructions can be performed without thought: atomic operations can be implemented in machines that do not and cannot know the meaning of their instructions. This is emphasized by many commentators.
Instructions given the computer must be complete and explicit, and they must enable it to proceed step by step without requiring that it comprehend the result of any part of the operations it performs. Such a program of instructions is an algorithm. It can demand any finite number of mechanical manipulations of numbers, but it cannot ask for judgments about their meaning … An algorithm is a set of rules or directions for getting a specific output from a specific input. The distinguishing feature of an algorithm is that all vagueness must be eliminated; the rules must describe operations that are so simple and well defined they can be executed by a machine. (Knuth 1977: 63)
The instructions must be complete and explicit, and the human computer is not required to comprehend any part of the basic operations: they must proceed without any appeal to the semantics of the operations of the machine.
The intuitive notion of an algorithm is rather vague. For example, what is a rule? We would like the rules to be mechanically interpretable, i.e., such that a machine can understand the rule (instruction) and carry it out. In other words, we need to specify a language for describing algorithms which is general enough to describe all mechanical procedures and yet simple enough to be interpreted by a machine…. What Turing did was to analyse the human calculating act and arrive at a number of simple operations which are obviously mechanical in nature and yet can be shown to be capable of being combined to perform arbitrarily complex mechanical operations. (Wang 1974: 91)
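The flavor of such simple, mechanically interpretable operations can be conveyed with a toy simulator (a sketch in Python; the encoding is ours, not Turing's original notation). The machine's "instructions" are bare table entries; nothing in the table refers to what the symbols mean:

```python
# A minimal Turing machine simulator. A machine is a transition table
# mapping (state, scanned symbol) to (symbol to write, head move, new state).
def run(delta, tape, state="q0", halt="halt", blank="_"):
    cells, head = dict(enumerate(tape)), 0
    while state != halt:
        scanned = cells.get(head, blank)
        write, move, state = delta[(state, scanned)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Unary successor: scan right past the 1s, write one more 1, then halt.
# Each entry is an atomic, thought-free step: look, write, move, change state.
succ = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("1", "R", "halt"),
}
print(run(succ, "111"))  # 1111
```

Whether following even such a table really involves no judgments about meaning is precisely what the next paragraphs question.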
But is there such a thing as an instruction or operation in a programming language that has no meaning? Is it possible that we can follow an instruction without making judgments about its meaning? Are these instructions really meaningless?
But given that this is a genuine rule it is anything but meaningless. Granted, it is so simple that one can envisage an agent doing it mechanically after a short while, and that is the problem that will be looked at last. For the moment we need only see that, however simple this rule might be, it does indeed tell one what to do. (Shanker 1987: 637)
The point here seems to be that rule following involves judgments about meaning. While eventually one may be able to perform them without thought, initially the performer must have taken in the meaning. They could not have been followed without thought. This is at the heart of the Wittgenstein-Turing debate on the nature of computation. In Turing's account the normative aspect of meaning (Glüer and Wikforss 2009) is replaced by what appears to be a causal interpretation of the basic operations. This is made explicit in the following analysis given by Shanker.
The only way to remove its normative content is to treat it as a description of the causal events that occur in the mind/program of the computer which trigger off certain reactions. Thus Turing introduced the premise that ‘the behaviour of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment’ (Turing 1936: 136). On this picture the computer's ‘state of mind’ is the causal intermediary between ‘observed symbols’ and subsequent action. (Shanker 1987: 637)
If this is correct, the simple instructions get their content from the impact they have on the physical device, and on this view they act as theories of the underlying physical device. But then our instructions are not rules with normative force but descriptions of the physical device. This seems to be in line with Colburn (2000). However, we have now crossed the normative-descriptive divide: our basic operations are no longer rules in any normative sense.
Turing Machines are Humans who Calculate (Wittgenstein 1975: 1096)
Is calculation/computation something that humans do, something that machines do, or both? Is it a rule-governed mathematical activity or an empirical one? Is it the rules of addition that provide the correctness criteria for any physical activity we might wish to call addition, or is addition itself an empirical activity? If the latter, the evaluation of arithmetic expressions is subject to physical interference: physical multiplication would not mean multiplication but rather whatever the physical machine actually does when it multiplies. Consequently, 13×74 might turn out to be 16.
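The contrast can be sketched in code. In this toy illustration (our own, with hypothetical names), the arithmetic rule fixes what 13×74 *ought* to be, while a "physical" device simply produces whatever its causal structure makes it produce, fault included.

```python
# Normative rule vs. causal device: a toy contrast (illustrative only).

def rule_multiply(a: int, b: int) -> int:
    """Multiplication as a normative rule: repeated addition.
    This fixes the correctness criterion for any device said to multiply."""
    total = 0
    for _ in range(b):
        total += a
    return total

def physical_multiply(a: int, b: int, fault: bool = False) -> int:
    """A hypothetical physical multiplier subject to interference.
    With fault=True it models a device perturbed by its environment."""
    result = a * b
    if fault:
        result = 16  # the device outputs whatever physics makes it output
    return result

print(rule_multiply(13, 74))                  # → 962 (what the rule demands)
print(physical_multiply(13, 74))              # → 962 (a healthy device agrees)
print(physical_multiply(13, 74, fault=True))  # → 16  (a perturbed one does not)
```

On the normative reading, the faulty device has *miscalculated*; on the purely empirical reading there is no miscalculation, only a different physical outcome.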
This position is close to the view that the semantics of a programming language is fixed by its physical implementation (§4.3). It treats 13×74 as a description of the causal events that occur in the mind/program of the computer and that trigger certain physical reactions. Thus Turing introduced the premise that
the behaviour of the computer at any moment is determined by the symbols which he is observing, and his “state of mind” at that moment. (Turing 1936: 136)
On this picture the computer's state of mind is the causal intermediary between the observed symbols and subsequent action. Wittgenstein opposes this view.
Does a calculating machine calculate? Imagine that a calculating machine had come into existence by accident; now someone accidentally presses its knobs (or an animal walks over it) and it calculates the product 25×20… Imagine that calculating machines occurred in nature, but that people could not pierce their cases. And now suppose that these people use these appliances, say as we use calculation, though of that they know nothing. Thus, e.g., they make predictions with the aid of calculating machines, but for them manipulating these queer objects is experimenting. These people lack concepts which we have; but what takes their place? Think of the mechanism whose movement we saw as a geometrical (kinematic) proof: clearly it would not normally be said of someone turning the wheel that he was proving something. Isn't it the same with someone who makes and changes arrangements of signs as [an experiment]; even when what he produces could be seen as a proof? (Wittgenstein 1956: V, §2)
There are no causal connections in a calculation, only the connections of the pattern. And it makes no difference to this that we work over the proof in order to accept it. That we are therefore tempted to say that it arose as the result of a psychological experiment. For the psychical course of events is not psychologically investigated when we calculate… You aren't calculating if, when you get now this, now that result, and cannot find a mistake, you accept this and say: this simply shows that certain circumstances which are still unknown have an influence on the result. This might be expressed: if calculation reveals a causal connection to you, then you are not calculating… What I am saying comes to this, that mathematics is normative (Wittgenstein 1975 [1939]: VII, §18).
On this view, calculation and computing are mathematical activities, and Wittgenstein further claims that what we do when we run programs on physical machines is an experimental activity. But is this right? Are we experimenting when we run programs on physical machines, or are we using them to perform physical calculations?
There is also an apparent conflict between the insistence that atomic instructions are meaningless and the principle of compositionality for semantic theories (see the entry on compositionality). In its modern form, the supporting argument for compositionality can be found in Frege.
… the possibility of our understanding sentences which we have never heard before rests evidently on this, that we can construct the sense of a sentence out of parts that correspond to words. (Frege 1914)
Competent speakers can understand complex expressions they have never encountered before, and, so the argument goes, the only plausible explanation is that they employ their knowledge of the structure of the expression together with knowledge of the meanings of its simple constituents. This is an inference to the best explanation. In the case of programming languages, without some account of the meanings of the atomic constructs of the language, and of how the constructors fix the meanings of complex expressions, it is hard to see how one could understand and produce its programs. Unfortunately, under the assumption that all basic instructions are meaningless, no compositional theory of meaning can be maintained: compositionality would then lead us to conclude that all programs in the language are meaningless.
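The compositional picture can be made concrete with a toy denotational semantics (our illustration; the constructs and names are hypothetical). The meaning of a compound program is computed from the meanings of its parts, which is only possible because the atomic constructs have meanings to begin with.

```python
# A toy compositional semantics for a two-construct imperative language.
# Each construct denotes a state-transformer: a function from states
# (dictionaries of variable bindings) to states.

def meaning(prog):
    op = prog[0]
    if op == "assign":                     # atomic construct: x := n
        _, var, val = prog
        return lambda s: {**s, var: val}
    if op == "seq":                        # compound construct: P ; Q
        _, p, q = prog
        mp, mq = meaning(p), meaning(q)    # meanings of the parts...
        return lambda s: mq(mp(s))         # ...determine the meaning of the whole
    raise ValueError(f"unknown construct: {op}")

prog = ("seq", ("assign", "x", 1), ("assign", "y", 2))
print(meaning(prog)({}))  # → {'x': 1, 'y': 2}
```

If `assign` denoted nothing, there would be nothing for the `seq` clause to compose, and the whole program would be semantically empty: precisely the conclusion the compositionality argument warns against.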
Many take it for granted that the Church-Turing thesis characterizes and prescribes actual physical computation. Some claim (see the entry on the Church-Turing thesis, Copeland 2008) that the thesis as proposed by Church and Turing concerns only mathematical computation. Andrew Hodges (2011) disagrees, arguing that Church and Turing did not distinguish between the two interpretations. That is the historical dispute; the more important one concerns the capabilities of actual machines: does the physical thesis (Thesis M) hold good? In the following formulation, taken from Copeland (2008), the phrase “calculated by a machine” refers to a machine that conforms to the laws of physics.
- Thesis M:
- Whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable.
Gandy proposed four principles intended to characterize computation by a physical machine, and showed that machines satisfying them exactly agree with Turing's characterization (Gandy's Theorem). Copeland and Shagrir (2007, 2011) suggest that Gandy's characterization of a discrete deterministic mechanical device is too narrow, and that consequently there are examples of possible physical machines whose capabilities go beyond the class of Turing-computable functions. Many of these require infinite speedup, whereby an infinite number of computations is physically carried out in a finite time. Quantum computing has been cited as a possible example of such machines, but this has been disputed (Hagar 2007). In a footnote on page 71, Dummett (2006) expresses the view that the very notion of performing an infinite number of physical operations in a finite time is simply confused.
One question seems quite puzzling: imagine we have constructed a physical machine that is claimed to decide the halting problem. How would one verify its correctness? A plausible way is to construct a semi-decision procedure for the halting problem and check that the two machines agree on the programs that terminate. But on those that do not terminate, how would we determine whether the claimed machine produced the right results? For further discussion see Copeland and Shagrir (2007).
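The asymmetry behind this puzzle can be sketched in a few lines (hypothetical code, with Python callables standing in for programs): a semi-decision procedure can confirm halting by simulation, but it can never confirm non-halting, so the claimed halting-decider can only ever be checked on one half of its answers.

```python
# A sketch of a semi-decision procedure for halting: simulate the program
# step by step; answer "halts" if it ever stops, but never answer
# "doesn't halt". An optional step bound only yields "unknown".

def semi_decide_halting(step, max_steps=None):
    """step() performs one step of the program and returns True once it
    has halted. Returns True if halting is observed within the bound;
    None means the outcome is simply unknown."""
    n = 0
    while max_steps is None or n < max_steps:
        if step():
            return True        # termination is verifiable by simulation...
        n += 1
    return None                # ...but non-termination never is

# A toy program that halts after three steps:
counter = {"n": 0}
def prog_step():
    counter["n"] += 1
    return counter["n"] >= 3

print(semi_decide_halting(prog_step, max_steps=10))  # → True
```

For a program that never halts, no finite amount of simulation distinguishes "runs forever" from "hasn't halted yet", which is why the proposed verification strategy can only ever be partial.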
Many of the topics we have not covered have existing or forthcoming entries. Computational complexity theory is covered in the entry on computability and complexity, and will be the subject of a forthcoming entry devoted to it by Walter Dean. Information in its various guises is covered in two entries: semantic conceptions of information and information. Computer ethics is covered in computer and information ethics. The logical aspects of artificial intelligence are covered in logic and artificial intelligence. There is also a general entry on emergent properties.
- Abramsky, S. and G. McCusker, 1995, “Games and Full Abstraction for the Lazy Lambda-Calculus,” In D. Kozen (ed.) Tenth Annual Symposium on Logic in Computer Science, IEEE Computer Society Press. pp. 234–43.
- Abramsky, S., P. Malacaria, and R. Jagadeesan, 1994, “Full Abstraction for PCF,” In M. Hagiya and J. C. Mitchell (eds), Theoretical Aspects of Computer Software: International Symposium TACS '94, Sendai, Japan, April 19–22, 1994, Springer-Verlag, 1–15.
- Abrial, J., 1996, The B-Book, Cambridge: Cambridge University Press.
- Alama, J., 2013, “The Lambda Calculus”, The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), forthcoming URL = <http://plato.stanford.edu/archives/fall2013/entries/lambda-calculus/>.
- Allen, R. J., 1997, A Formal Approach to Software Architecture, Ph.D. Thesis, Computer Science, Carnegie Mellon University. Issued as CMU Technical Report CMU-CS-97-144. Allen 1997 available on line
- Angius, N., 2013, “Abstraction and Idealization in the Formal Verification of Software,” Minds and Machines. 23(2): 211–226.
- Arkoudas, K. and S. Bringsjord, 2007, “Computers, Justification, and Mathematical Knowledge,” Minds and Machines, 17(2): 185–202.
- Ashenhurst, R. L. (ed.), 1989, “Letters in the ACM Forum,” Communications of the ACM, 32(3): 287. [Ashenhurst 1989 available online]
- Bass, L. C., P. Clements, and R. Kazman, 2003, Software Architecture in Practice, (2nd edition), Reading, MA: Addison-Wesley.
- Boghossian, P. A., 1989, “The Rule-following Considerations,” Mind, 98(392): 507–549.
- Bourbaki, N., 1968, Theory of Sets, Ettore Majorana International Science Series.
- Bridges, D., 2013, “Constructive Mathematics”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2013/entries/mathematics-constructive/>.
- Brooks, F. P., 1995, The Mythical Man Month: Essays on Software Engineering, Anniversary Edition, Reading, MA: Addison-Wesley.
- Burge, T., 1988, “Computer Proof, Apriori Knowledge, and Other Minds,” Noûs, 32: 1–37.
- Cardelli, L. and P. Wegner, 1985, “On Understanding Types, Data Abstraction, and Polymorphism,” Computing Surveys, 17(4): 471–522. [Cardelli and Wegner 1985 available online (pdf)]
- Chalmers, A. F., 1999, What is this thing called Science, Maidenhead: Open University Press.
- Chalmers, D. J., 1996, “Does a Rock Implement Every Finite-State Automaton?” Synthese, 108: 309–33. [D. J. Chalmers 1996 available online]
- Colburn, T. R., 1999, “Software, Abstraction, and Ontology,” The Monist, 82(1): 3–19.
- –––, 2000, Philosophy and Computer Science, Armonk, N.Y.: M.E. Sharp.
- Colburn, T. and G. Shute, 2007, “Abstraction in Computer Science,” Minds and Machines, 17(2): 169–184.
- Copeland, B. J., 1996, “What is Computation?” Synthese, 108(3): 335–359.
- –––, 2008, “The Church-Turing Thesis”, The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2008/entries/church-turing/>.
- Copeland, B. J. and O. Shagrir, 2007, “Physical Computation: How General are Gandy's Principles for Mechanisms?” Minds and Machines, 17(2): 217–231.
- –––, 2011, “Do Accelerating Turing Machines Compute the Uncomputable?” Minds and Machines, 21(2): 221–239.
- Cummins, R., 1975, “Functional Analysis,” The Journal of Philosophy, 72(20): 741–765.
- De Millo, R. L., R. J. Lipton, and A. J. Perlis, 1979, “Social Processes and Proofs of Theorems and programs,” Communications of the ACM, 22(5): 271–281.
- Devlin, K., 1994, Mathematics: The Science of Patterns: The Search for Order in Life, Mind, and the Universe, New York: Henry Holt.
- Dijkstra, E. W., 1970, Notes on Structured Programming, T.H.-Report 70-WSK-03, Mathematics Technological University Eindhoven, The Netherlands. [Dijkstra 1970 available online (pdf)]
- –––, 1974, “Programming as a Discipline of Mathematical Nature,” American Mathematical Monthly, 81(6): 608–612. [Dijkstra 1974 available online]
- Distributed Software Engineering, 1997, The Darwin Language, Dept. of Computing, Imperial College of Science, Technology and Medicine, London. [Darwin language 1997 available online (pdf)]
- Duhem, P., 1954, The Aim and Structure of Physical Theory, Princeton: Princeton University Press.
- Dummett, M., 2006, Thought and Reality, Oxford: Oxford University Press.
- Egan, F., 1992, “Individualism, Computation, and Perceptual Content,” Mind, 101(403): 443–59.
- Fernandez, M., 2004, Programming Languages and Operational Semantics: An Introduction, London: King's College Publications.
- Fetzer, J. H., 1988, “Program Verification: The Very Idea,” Communications of the ACM, 31(9): 1048–1063.
- Feynman, R. P., 2000 [1984–1986], Feynman Lectures on Computation, Cambridge, MA: Westview Press. Based on lectures given in 1984–86.
- Floridi, L., 2008, “The Methods of Levels of Abstraction,” Minds and Machines, 18(3): 303–329.
- Floyd, R. W., 1979, “The Paradigms of Programming,” Communications of the ACM, 22(8): 455–460.
- Fowler, M., 2003, UML Distilled: A Brief Guide to the Standard Object Modeling Language, Reading, MA: Addison-Wesley. [Fowler 2003 available online]
- Franssen, M., G-J. Lokhorst, and I. van de Poel, 2010, “Philosophy of Technology”, The Stanford Encyclopedia of Philosophy (Spring 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2010/entries/technology/>.
- Frege, G., 1914, “Letter to Jourdain,” in Frege 1980 pp. 78–80.
- –––, 1980, Gottlob Frege: Philosophical and Mathematical Correspondence, edited by Gabriel, G., H. Hermes, F. Kambartel, C. Thiel, and A. Veraart, Oxford: Blackwell Publishers.
- Frigg, R. and S. Hartmann, 2012, “Models in Science”, The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2012/entries/models-science>.
- Gamma, E., R. Helm, R. Johnson, and J. Vlissides, 1994, Design Patterns: Elements of Reusable Object-Oriented Software, Reading, MA: Addison-Wesley.
- Glüer, K. and Å. Wikforss, 2010, “The Normativity of Meaning and Content”, The Stanford Encyclopedia of Philosophy (Winter 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2010/entries/meaning-normativity/>.
- Goguen, J. A. and R. M. Burstall, 1985, “Institutions: Abstract Model Theory for Computer Science”, Journal of the ACM, 39(1): 95–146.
- Gordon, M. J. C., 1979, The Denotational Description of Programming Languages, Berlin: Springer-Verlag.
- Gunter, C. A., 1992, Semantics of Programming Languages: Structures and Techniques, Cambridge, MA: MIT Press.
- Gupta, A., 2012, “Definitions”, The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2012/entries/definitions/>.
- Hagar, A., 2007, “Quantum Algorithms: Philosophical Lessons,” Minds and Machines, 17(2): 233–247.
- Hale, B., 1987, Abstract Objects, Oxford: Basil Blackwell.
- Hales, T. C., 2008, “Formal Proof,” Notices of the American Mathematical Society, 55(11): 1370–1380.
- Hankin, C., 2004, An Introduction to Lambda Calculi for Computer Scientists, London: King's College Publications.
- Harrison, J., 2008, “Formal Proof—Theory and Practice,” Notices of the American Mathematical Society, 55(11): 1395–1406.
- Henson, M. C., 1987, Elements of Functional Programming, Oxford: Blackwell.
- Hoare, C. A. R., 1969, “An Axiomatic Basis for Computer Programming,” Communications of the ACM, 12(10): 576–580.
- –––, 1973, “Notes on Data Structuring,” in O.-J. Dahl, E.W. Dijkstra, and C.A.R. Hoare (eds.), Structured Programming, London: Academic Press, pp. 83–174.
- –––, 1981, “The Emperor's Old Clothes,” Communications of the ACM, 24(2): 75–83.
- –––, 1985, Communicating Sequential Processes, Englewood Cliffs: Prentice Hall. [Hoare 1985 available online (pdf)]
- Hodges, A., 2011, “Alan Turing”, The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2011/entries/turing/>.
- Hodges, W., 2013, “Model Theory”, The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), forthcoming URL = <http://plato.stanford.edu/archives/fall2013/entries/model-theory/>.
- Hopcroft, J. E. and J. D. Ullman, 1969, Formal Languages and their Relation to Automata, Reading, MA: Addison-Wesley.
- Irmak, N., 2012, “Software is an Abstract Artifact,” Grazer Philosophische Studien, 86: 55–72. [Irmak 2012 available online]
- Johnson, C. W., 2006, “What are Emergent Properties and How Do They Affect the Engineering of Complex Systems,” Reliability Engineering and System Safety, 91(12): 1475–1481. [Johnson 2006 available online (pdf)]
- Jones, C. B., 1990, Systematic Software Development Using VDM, Englewood Cliffs: Prentice Hall.
- Knuth, D. E., 1974, “Computer Programming as an Art,” Communications of the ACM, 17(12): 667–673.
- –––, 1977, “Algorithms,” Scientific American, 236(4): 63–80.
- Kripke, S., 1982, Wittgenstein on Rules and Private Language, Cambridge, MA: Harvard University Press.
- Kroes, P., 2010, “Engineering and the Dual Nature of Technical Artefacts,” Cambridge Journal of Economics, 34(1): 51–62.
- –––, 2012, Technical Artefacts: Creations of Mind and Matter: A Philosophy of Engineering Design, Dordrecht: Springer.
- Kroes, P. and A. Meijers, 2006, “The dual nature of technical artefacts,” Studies in History and Philosophy of Science, 37: 1–4.
- Landin, P. J., 1964, “The mechanical evaluation of expressions,” The Computer Journal, 6(4): 308–320.
- Loewenheim, U., 1989, “Legal Protection for Computer Programs in West Germany,” Berkeley Technology Law Journal, 4(2). [Loewenheim 1989 available online]
- Luckham, D. C., 1998, “Rapide: A Language and Toolset for Causal Event Modeling of Distributed System Architectures,” in Y. Masunaga, T. Katayama, and M. Tsukamoto (eds.), Worldwide Computing and its Applications, WWCA'98, Berlin: Springer, pp. 88–96. [Luckham 1998 available online]
- Magee, J., N. Dulay, S. Eisenbach, and J. Kramer, 1995, “Specifying Distributed Software Architectures,” Proceedings of 5th European Software Engineering Conference (ESEC 95), Berlin: Springer-Verlag, pp. 137–153.
- Martin-Löf, P., 1982, “Constructive Mathematics and Computer Programming,” in Logic, Methodology and Philosophy of Science (Volume VI: 1979), Amsterdam: North-Holland, pp. 153–175.
- McGettrick, A., 1980, The Definition of Programming Languages, Cambridge: Cambridge University Press.
- McLaughlin, P., 2001, What functions explain: Functional explanation and self-reproducing systems, Cambridge: Cambridge University Press.
- Meijers, A. W. M., 2001, “The relational ontology of technical artifacts,” in P. A. Kroes and A. W. M. Meijers (eds), The Empirical Turn in the Philosophy of Technology, Amsterdam: Elsevier.
- Mitchelmore, M. and P. White, 2004, “Abstraction in Mathematics and Mathematics Learning,” in M.J. Høines and A.B. Fuglestad (eds.), Proceedings of the 28th Conference of the International Group for the Psychology of Mathematics Education (Volume 3), Bergen: Programm Committee, pp. 329–336. [Mitchelmore and White 2004 available online (pdf)]
- Miller, A. and C. Wright (eds), 2002, Rule Following and Meaning, Durham, England: Acumen Publishing Ltd.
- Milne, R. and C. Strachey, 1976, A Theory of Programming Language Semantics, London: Chapman and Hall.
- Mitchell, J. C., 2003, Concepts in Programming Languages, Cambridge: Cambridge University Press.
- Mooers, C. N., 1975, “Computer Software and Copyright,” ACM Computing Surveys, 7(1): 45–72.
- Moor, J. H., 1978, “Three Myths of Computer Science,” The British Journal for the Philosophy of Science, 29(3): 213–222.
- Morgan, C., 1994, Programming From Specifications, Englewood Cliffs: Prentice Hall. [Morgan 1994 available online]
- Pears, D., 2006, Paradox and Platitude in Wittgenstein's Philosophy, Oxford: Oxford University Press.
- Piccinini, G., 2008, “Computation without Representation,” Philosophical Studies, 137(2): 206–241. [Piccinini 2008 available online (this version lists 2006 as publication date but that seems to be the online publication date)]
- Rapaport, W. J., 1999, “Implementation Is Semantic Interpretation,” The Monist, 82(1): 109–30. [Rapaport 1999 available online (pdf)]
- –––, 2005, “Implementation as Semantic Interpretation: Further Thoughts,” Journal of Experimental & Theoretical Artificial Intelligence, 17(4): 385–417. [Rapaport 2005 available online (pdf)]
- Rosenberg, A., 2012, The Philosophy of Science, London: Routledge.
- Searle, J. R., 1995, The Construction of Social Reality, London: Penguin.
- Shanker, S. G., 1987, “Wittgenstein versus Turing on the nature of Church's thesis,” Notre Dame Journal of Formal Logic, 28(4): 615–649. [Shanker 1987 available online]
- Shapiro, S. 1983, “Mathematics and Reality,” Philosophy of Science, 50(4): 523–548.
- Skemp, R. R., 1987, The Psychology of Learning Mathematics, Hillsdale, NJ: Lawrence Erlbaum Associates.
- Smith, B. C., 1985, “The Limits of Correctness in Computers,” ACM SIGCAS Computers and Society, 14–15(1–4): 18–26.
- Sprevak, M., 2012, “Three challenges to Chalmers on computational implementation,” Journal of Cognitive Science, 13(2): 107–143.
- Stoy, J. E., 1977, Denotational Semantics: The Scott-Strachey Approach to Programming Language Semantics, Cambridge, MA: MIT Press.
- Strachey, C., 2000, “Fundamental Concepts in Programming Languages,” Higher-Order and Symbolic Computation, 13: 11–49.
- Suber, P. 1988, “What Is Software?” Journal of Speculative Philosophy, 2(2): 89–119. [Suber 1988 available online]
- Sommerville, I., 2012, Software Engineering, Reading, MA: Addison-Wesley. (First edition, 1982.)
- Technical Correspondence, 1989, ACM Digital Library, Communications of the ACM 32(3): 374–81. Letters from James C. Pleasant, Lawrence Paulson/Avra Cohen/Michael Gordon, William Bevier/Michael Smith/William Young, Thomas Clune, Stephen Savitzky, James Fetzer. Retrieved from Technical correspondence: http://dl.acm.org/citation.cfm?doid=62065.315927
- Thomasson, A., 2007, “Artifacts and human concepts,” in S. Laurence and E. Margolis, Creations of the Mind: Essays on Artifacts and Their Representations, Oxford: Oxford University Press.
- Thompson, S., 2011, Haskell: The Craft of Functional Programming, Reading, MA: Addison-Wesley. (First edition, 1999.)
- Turing, A. M., 1936, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society (Series 2), 42: 230–65.
- Turner, R. 2007, “Understanding Programming Languages,” Minds and Machines, 17(2): 203–216.
- –––, 2009a, Computable Models, Dordrecht: Springer.
- –––, 2009b, “The Meaning of Programming Languages,” APA Newsletters, 9(1): 2–7. [Turner 2009b available online (pdf)]
- –––, 2010, “Programming Languages as Mathematical Theories,” in J. Vallverdú (ed.), Thinking Machines and the Philosophy of Computer Science: Concepts and Principles, Hershey, PA: IGI Global, pp. 66–82.
- –––, 2011, “Specification,” Minds and Machines, 21(2): 135–152.
- –––, 2012, “Machines,” in H. Zenil (ed.), A Computable Universe: Understanding and Exploring Nature as Computation, London: World Scientific Publishing Company/Imperial College Press, pp. 63–76.
- –––, 2013, “Programming Languages as Technical Artefacts,” Philosophy and Technology, doi:10.1007/s13347-012-0098-z.
- Tymoczko, T., 1979, “The Four Color Problem and Its Philosophical Significance,” The Journal of Philosophy, 76(2): 57–83.
- –––, 1980, “Computers, Proofs and Mathematicians: A Philosophical Investigation of the Four-Color Proof,” Mathematics Magazine, 53(3): 131–138.
- Vermaas, P. E. and W. Houkes, 2003, “Ascribing functions to technical artifacts: A challenge to etiological accounts of function,” British Journal of the Philosophy of Science, 54: 261–289. [Vermaas and Houkes 2003 available online]
- Vliet, H. v., 2008, Software Engineering: Principles and Practice, 3rd edition, New York: Wiley. (First edition, 1993.)
- Wang, H., 1974, From Mathematics to Philosophy, London: Routledge, Kegan & Paul.
- White, G., 2003, “The Philosophy of Computer Languages,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Malden: Wiley-Blackwell, pp. 318–326.
- Wittgenstein, L., 1953, Philosophical Investigations, 3rd edition, Oxford: Blackwell Publishing; translated by G. E. M. Anscombe.
- –––, 1956, Remarks on the Foundations of Mathematics, revised edition, Oxford: Basil Blackwell, G.H. von Wright, R. Rhees, and G.E.M. Anscombe (eds.); translated by G.E.M. Anscombe.
- –––, 1975, Wittgenstein's Lectures on the Foundations of Mathematics, Cambridge 1939, C. Diamond (ed.), Cambridge.
- Woodcock, J. and J. Davies, 1996, Using Z: Specification, Refinement, and Proof, Englewood Cliffs: Prentice Hall.
- Wright, C. 1983, Frege's Conception of Numbers as Objects, Aberdeen: Aberdeen University Press.
- ACM (ed.), 2013, ACM Turing Award Lectures.
- Duncan, W., 2009, “Making Ontological Sense of Hardware and Software”. Unpublished manuscript. University of Buffalo.
- Groklaw, 2011, “Software is Mathematics—The Need for Due Diligence”, by PoIR.
- Groklaw, 2012, “What Does ‘Software is Mathematics’ Mean?” Part 1 and Part 2, by PoIR.
- Huss, E., 1997, The C Library Reference Guide, Web Monkeys, University of Illinois at Urbana-Champaign.
- Rapaport, W. J. (ed.), 2010, “Can Programs be Copyrighted or Patented?”. Web site (last updated 21 March 2010), Philosophy of Computer Science, University at Buffalo, State University of New York.
- Smith, B., 2012, “Logic and Formal Ontology”. A revised version of the paper which appeared in J. N. Mohanty and W. McKenna (eds), 1989, Husserl's Phenomenology: A Textbook, Lanham: University Press of America.
- Turner, Ray and Amon Eden, 2011, “The Philosophy of Computer Science”, Stanford Encyclopedia of Philosophy (Winter 2011 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2011/entries/computer-science/>. [This was the previous entry on the philosophy of computer science in the Stanford Encyclopedia of Philosophy — see the version history.]
- Philosophy of Computer Science at Buffalo
- Center for Philosophy of Computer Science
artificial intelligence: logic and | computability and complexity | computational complexity theory | computer and information ethics | function: recursive | information | information: semantic conceptions of | mathematics, philosophy of | technology, philosophy of
This is a completely rewritten and updated version of the entry on the philosophy of computer science, written by one of the authors of the previous version. See the reference to Turner and Eden 2011 in the Other Internet Resources for the link to the previous, coauthored version.