Computer Simulations in Science

First published Mon May 6, 2013; substantive revision Thu Sep 26, 2019

Computer simulation was pioneered as a scientific tool in meteorology and nuclear physics in the period directly following World War II, and since then has become indispensable in a growing number of disciplines. The list of sciences that make extensive use of computer simulation has grown to include astrophysics, particle physics, materials science, engineering, fluid mechanics, climate science, evolutionary biology, ecology, economics, decision theory, medicine, sociology, epidemiology, and many others. There are even a few disciplines, such as chaos theory and complexity theory, whose very existence has emerged alongside the development of the computational models they study.

After a slow start, philosophers of science have begun to devote more attention to the role of computer simulation in science. Several areas of philosophical interest in computer simulation have emerged: What is the structure of the epistemology of computer simulation? What is the relationship between computer simulation and experiment? Does computer simulation raise issues for the philosophy of science that are not fully covered by recent work on models more generally? What does computer simulation teach us about emergence? About the structure of scientific theories? About the role (if any) of fictions in scientific modeling?

1. What is Computer Simulation?

No single definition of computer simulation is appropriate. In the first place, the term is used in both a narrow and a broad sense. In the second place, one might want to understand the term from more than one point of view.

1.1 A Narrow Definition

In its narrowest sense, a computer simulation is a program that is run on a computer and that uses step-by-step methods to explore the approximate behavior of a mathematical model. Usually this is a model of a real-world system (although the system in question might be an imaginary or hypothetical one). Such a computer program is a computer simulation model. One run of the program on the computer is a computer simulation of the system. The algorithm takes as its input a specification of the system’s state (the value of all of its variables) at some time t. It then calculates the system’s state at time t+1. From the values characterizing that second state, it then calculates the system’s state at time t+2, and so on. When run on a computer, the algorithm thus produces a numerical picture of the evolution of the system’s state, as it is conceptualized in the model.
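To make the step-by-step character of this idea concrete, the following is a minimal sketch in Python; the damped-pendulum model, the explicit Euler update, and all parameter values are merely illustrative choices, not drawn from any particular scientific simulation.

```python
# A minimal, illustrative time-stepping simulation: the state at time t + dt is
# computed from the state at time t, over and over, producing a numerical
# picture of the system's evolution.  The model here is a damped pendulum.
import math

def step(state, dt, g=9.81, length=1.0, damping=0.1):
    """Advance the pendulum state (angle, angular velocity) by one time step."""
    theta, omega = state
    dtheta = omega
    domega = -(g / length) * math.sin(theta) - damping * omega
    return (theta + dt * dtheta, omega + dt * domega)

state = (0.5, 0.0)          # initial angle (radians) and angular velocity
trajectory = [state]
for _ in range(1000):       # 1000 steps of size dt = 0.01
    state = step(state, dt=0.01)
    trajectory.append(state)
```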

This sequence of values for the model variables can be saved as a large collection of “data” and is often viewed on a computer screen using methods of visualization. Often, but certainly not always, the methods of visualization are designed to mimic the output of some scientific instrument—so that the simulation appears to be measuring a system of interest.

Sometimes the step-by-step methods of computer simulation are used because the model of interest contains continuous (differential) equations (which specify continuous rates of change in time) that cannot be solved analytically—either in principle or perhaps only in practice. This underwrites the spirit of the following definition given by Paul Humphreys: “any computer-implemented method for exploring the properties of mathematical models where analytic methods are not available” (1991, 500). But even as a narrow definition, this one should be read carefully, and not be taken to suggest that simulations are only used when there are analytically unsolvable equations in the model. Computer simulations are often used either because the original model itself contains discrete equations—which can be directly implemented in an algorithm suitable for simulation—or because the original model consists of something better described as rules of evolution than as equations.

In the former case, when equations are being “discretized” (the turning of equations that describe continuous rates of change into discrete equations), it should be emphasized that, although it is common to speak of simulations “solving” those equations, a discretization can at best only find something which approximates the solution of continuous equations, to some desired degree of accuracy. Finally, when speaking of “a computer simulation” in the narrowest sense, we should be speaking of a particular implementation of the algorithm on a particular digital computer, written in a particular language, using a particular compiler, etc. There are cases in which different results can be obtained as a result of variations in any of these particulars.
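The point about discretization can be illustrated with a toy example. The sketch below (the equation du/dt = -ku, the forward-difference scheme, and the step sizes are all invented for illustration) shows that the computed output only approximates the solution of the continuous equation, with an error that shrinks as the step size shrinks.

```python
# Illustrative sketch: discretizing the continuous equation du/dt = -k*u with a
# forward-difference scheme, and comparing the discrete output with the exact
# analytic solution u(t) = u0 * exp(-k*t).
import math

def simulate(u0, k, dt, n_steps):
    """Forward-Euler discretization: u_{n+1} = u_n + dt * (-k * u_n)."""
    u = u0
    for _ in range(n_steps):
        u = u + dt * (-k * u)
    return u

exact = math.exp(-2.0)                      # u(1) for u0 = 1, k = 2
for n in (10, 100, 1000):                   # step size dt = 1/n
    approx = simulate(u0=1.0, k=2.0, dt=1.0 / n, n_steps=n)
    print(f"dt={1.0 / n}: |error| = {abs(approx - exact):.2e}")
# The error shrinks as dt shrinks: the discretization approximates, but never
# exactly reproduces, the solution of the continuous equation.
```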

1.2 A Broad Definition

More broadly, we can think of computer simulation as a comprehensive method for studying systems. In this broader sense of the term, it refers to an entire process. This process includes choosing a model; finding a way of implementing that model in a form that can be run on a computer; calculating the output of the algorithm; and visualizing and studying the resultant data. The method includes this entire process—used to make inferences about the target system that one tries to model—as well as the procedures used to sanction those inferences. This is more or less the definition of computer simulation studies in Winsberg 2003 (111). “Successful simulation studies do more than compute numbers. They make use of a variety of techniques to draw inferences from these numbers. Simulations make creative use of calculational techniques that can only be motivated extra-mathematically and extra-theoretically. As such, unlike simple computations that can be carried out on a computer, the results of simulations are not automatically reliable. Much effort and expertise goes into deciding which simulation results are reliable and which are not.” When philosophers of science write about computer simulation, and make claims about what epistemological or methodological properties “computer simulations” have, they usually mean the term to be understood in this broad sense of a computer simulation study.

1.3 An Alternative Point of View

Both of the above definitions take computer simulation to be fundamentally about using a computer to solve, or to approximately solve, the mathematical equations of a model that is meant to represent some system—either real or hypothetical. Another approach is to try to define “simulation” independently of the notion of computer simulation, and then to define “computer simulation” compositionally: as a simulation that is carried out by a programmed digital computer. On this approach, a simulation is any system that is believed, or hoped, to have dynamical behavior that is similar enough to some other system such that the former can be studied to learn about the latter.

For example, if we study some object because we believe it is sufficiently dynamically similar to a basin of fluid for us to learn about basins of fluid by studying it, then it provides a simulation of basins of fluid. This is in line with the definition of simulation we find in Hartmann: it is something that “imitates one process by another process. In this definition the term ‘process’ refers solely to some object or system whose state changes in time” (1996, 83). Hughes (1999) objected that Hartmann’s definition ruled out simulations that imitate a system’s structure rather than its dynamics. Humphreys revised his definition of simulation to accord with the remarks of Hartmann and Hughes as follows:

System S provides a core simulation of an object or process B just in case S is a concrete computational device that produces, via a temporal process, solutions to a computational model … that correctly represents B, either dynamically or statically. If in addition the computational model used by S correctly represents the structure of the real system R, then S provides a core simulation of system R with respect to B. (2004, p. 110)

(Note that Humphreys is here defining computer simulation, not simulation generally, but he is doing it in the spirit of defining a compositional term.) It should be noted that Humphreys’ definitions make simulation out to be a success term, and that seems unfortunate. A better definition would be one that, like the one in the last section, included a word like “believed” or “hoped” to address this issue.

In most philosophical discussions of computer simulation, the more useful concept is the one defined in 1.2. The exception is when it is explicitly the goal of the discussion to understand computer simulation as an example of simulation more generally (see section 5). Examples of simulations that are not computer simulations include the famous physical model of the San Francisco Bay (Huggins & Schultz 1973). This is a working hydraulic scale model of the San Francisco Bay and Sacramento-San Joaquin River Delta System built in the 1950s by the Army Corps of Engineers to study possible engineering interventions in the Bay. Another nice example, which is discussed extensively in Dardashti et al. (2015, 2019), is the use of acoustic “dumb holes” made out of Bose-Einstein condensates to study the behavior of black holes. Physicist Bill Unruh noted that in certain fluids, something akin to a black hole would arise if there were regions of the fluid moving so fast that waves would have to move faster than the speed of sound (something they cannot do) in order to escape from them (Unruh 1981). Such regions would in effect have sonic event horizons. Unruh called such a physical setup a “dumb hole” (“dumb” as in “mute”) and proposed that it could be studied in order to learn things we do not know about black holes. For some time, this proposal was viewed as nothing more than a clever idea, but physicists have recently come to realize that, using Bose-Einstein condensates, they can actually build and study dumb holes in the laboratory. It is clear why we should think of such a setup as a simulation: the dumb hole simulates the black hole. Instead of finding a computer program to simulate black holes, physicists find a fluid-dynamical setup for which they believe they have a good model and for which that model has fundamental mathematical similarities to the model of the systems of interest. They observe the behavior of the fluid setup in the laboratory in order to make inferences about black holes. The point, then, of the definitions of simulation in this section is to try to understand in what sense computer simulation and these sorts of activities are species of the same genus. We might then be in a better situation to understand why a simulation in the sense of 1.3 that happens to be run on a computer overlaps with a simulation in the sense of 1.2. We will come back to this in section 5.

Barberousse et al. (2009), however, have been critical of this analogy. They point out that computer simulations do not work the way Unruh’s simulation works. It is not the case that the computer as a material object and the target system follow the same differential equations. A good reference about simulations that are not computer simulations is Trenholme 1994.

2. Types of Computer Simulations

Two types of computer simulation are often distinguished: equation-based simulations and agent-based (or individual-based) simulations. Computer simulations of both types are used for three different general sorts of purposes: prediction (both pointwise and global/qualitative), understanding, and exploratory or heuristic purposes.

2.1 Equation-based Simulations

Equation-based simulations are most commonly used in the physical sciences and other sciences where there is a governing theory that can guide the construction of mathematical models based on differential equations. I use the term “equation-based” here to refer to simulations based on the kinds of global equations we associate with physical theories—as opposed to “rules of evolution” (which are discussed in the next section). Equation-based simulations can either be particle-based, where there are n many discrete bodies and a set of differential equations governing their interaction, or they can be field-based, where there is a set of equations governing the time evolution of a continuous medium or field. An example of the former is a simulation of galaxy formation, in which the gravitational interaction between a finite collection of discrete bodies is discretized in time and space. An example of the latter is the simulation of a fluid, such as a meteorological system like a severe storm. Here the system is treated as a continuous medium—a fluid—and a field representing the distribution of the relevant variables in space is discretized in space and then updated in discrete intervals of time.
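As a toy illustration of the field-based case (the particular equation, grid, and parameter values are invented for illustration), the following sketch discretizes a one-dimensional temperature field in space and updates it in discrete time steps according to the diffusion equation.

```python
# Illustrative field-based simulation: a continuous temperature field is
# represented on a discrete spatial grid and updated in discrete time steps
# using the diffusion equation dT/dt = alpha * d2T/dx2.
def diffuse(T, alpha, dx, dt):
    """One explicit time step of 1-D diffusion on a periodic grid."""
    n = len(T)
    return [
        T[i] + alpha * dt / dx**2 * (T[(i + 1) % n] - 2 * T[i] + T[(i - 1) % n])
        for i in range(n)
    ]

# A hot spot in the middle of an otherwise cold domain.
T = [0.0] * 50
T[25] = 100.0
for _ in range(200):
    T = diffuse(T, alpha=0.1, dx=1.0, dt=1.0)
# After 200 steps the initially sharp hot spot has spread out across the grid.
```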

2.2 Agent-based Simulations

Agent-based simulations are most common in the social and behavioral sciences, though we also find them in such disciplines as artificial life, epidemiology, ecology, and any discipline in which the networked interaction of many individuals is being studied. Agent-based simulations are similar to particle-based simulations in that they represent the behavior of n-many discrete individuals. But unlike particle-based equation simulations, there are no global differential equations that govern the motions of the individuals. Rather, in agent-based simulations, the behavior of the individuals is dictated by their own local rules.

To give one example: a famous and groundbreaking agent-based simulation was Thomas Schelling’s (1971) model of “segregation.” The agents in his simulation were individuals who “lived” on a chessboard. The individuals were divided into two groups in the society (e.g. two different races, boys and girls, smokers and non-smokers, etc.) Each square on the board represented a house, with at most one person per house. An individual is happy if he/she has a certain percent of neighbors of his/her own group. Happy agents stay where they are, unhappy agents move to free locations. Schelling found that the board quickly evolved into a strongly segregated location pattern if the agents’ “happiness rules” were specified so that segregation was heavily favored. Surprisingly, however, he also found that initially integrated boards tipped into full segregation even if the agents’ happiness rules expressed only a mild preference for having neighbors of their own type.
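The following is a compact, illustrative sketch of a Schelling-style model in Python; the board size, vacancy rate, and “happiness” threshold are arbitrary choices, and the details differ from Schelling’s original set-up.

```python
# A toy Schelling-style segregation model: agents follow only a local rule
# (move if too few neighbors are of my own group); there are no global
# equations governing the dynamics.
import random

SIZE, THRESHOLD = 20, 0.3        # 20x20 board; agents want >= 30% like neighbors

def make_board():
    cells = ["A"] * 180 + ["B"] * 180 + [None] * 40   # two groups plus empty houses
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(board, r, c):
    """An agent is unhappy if too few of its occupied neighbors share its group."""
    group = board[r][c]
    neighbors = [board[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    if not occupied:
        return False
    return sum(n == group for n in occupied) / len(occupied) < THRESHOLD

def step(board):
    """Move every unhappy agent to a randomly chosen empty cell."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if board[r][c] is not None and unhappy(board, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if board[r][c] is None]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        board[er][ec], board[r][c] = board[r][c], None

board = make_board()
for _ in range(50):
    step(board)
# Even with this mild 30% preference, the board typically ends up highly segregated.
```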

2.3 Multiscale Simulations

In section 2.1 we discussed equation-based models that are based on particle methods and those that are based on field methods. But some simulation models are hybrids of different kinds of modeling methods. Multiscale simulation models, in particular, couple together modeling elements from different scales of description. A good example of this would be a model that simulates the dynamics of bulk matter by treating the material as a field undergoing stress and strain at a relatively coarse level of description, but which zooms into particular regions of the material where important small-scale effects are taking place, and models those smaller regions with relatively more fine-grained modeling methods. Such methods might rely on molecular dynamics, or quantum mechanics, or both—each of which is a more fine-grained description of matter than is offered by treating the material as a field. Multiscale simulation methods can be further broken down into serial multiscale and parallel multiscale methods. The more traditional method is serial multiscale modeling. The idea here is to choose a region, simulate it at the lower level of description, summarize the results into a set of parameters digestible by the higher-level model, and pass them up into the part of the algorithm calculating at the higher level.

Serial multiscale methods are not effective when the different scales are strongly coupled together. When the different scales interact strongly to produce the observed behavior, what is required is an approach that simulates each region simultaneously. This is called parallel multiscale modeling. Parallel multiscale modeling is the foundation of a nearly ubiquitous simulation method: so-called “sub-grid” modeling. Sub-grid modeling refers to the representation of important small-scale physical processes that occur at length-scales that cannot be adequately resolved on the grid size of a particular simulation. (Remember that many simulations discretize continuous equations, so they have a relatively arbitrary finite “grid size.”) In the study of turbulence in fluids, for example, a common practical strategy for calculation is to account for the missing small-scale vortices (or eddies) that fall inside the grid cells. This is done by adding to the large-scale motion an eddy viscosity that characterizes the transport and dissipation of energy in the smaller-scale flow—or any such feature that occurs at too small a scale to be captured by the grid.

In climate science and kindred disciplines, sub-grid modeling is called “parameterization.” This, again, refers to the method of replacing processes—ones that are too small-scale or complex to be physically represented in the model—by a simpler mathematical description. This is as opposed to other processes—e.g., large-scale flow of the atmosphere—that are calculated at the grid level in accordance with the basic theory. It is called “parameterization” because various non-physical parameters are needed to drive the highly approximative algorithms that compute the sub-grid values. Examples of parameterization in climate simulations include the descent rate of raindrops, the rate of atmospheric radiative transfer, and the rate of cloud formation. For example, the average cloudiness over a 100 km² grid box is not cleanly related to the average humidity over the box. Nonetheless, as the average humidity increases, average cloudiness will also increase—hence there could be a parameter linking average cloudiness to average humidity inside a grid box. Even though modern-day parameterizations of cloud formation are more sophisticated than this, the basic idea is well illustrated by the example. The use of sub-grid modeling methods in simulation has important consequences for understanding the structure of the epistemology of simulation. This will be discussed in greater detail in section 4.
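To make the cloudiness example above concrete, here is a deliberately toy sketch of a parameterization; the linear functional form and the critical-humidity threshold are invented for illustration and are far simpler than anything used in a real climate model.

```python
# Toy parameterization: cloud cover inside a grid box is not resolved
# explicitly, but is diagnosed from the resolved grid-box mean relative
# humidity via a tunable, non-physical parameter (rh_critical).
def cloud_fraction(mean_relative_humidity, rh_critical=0.8):
    """Estimate sub-grid cloud fraction from grid-box mean relative humidity."""
    if mean_relative_humidity <= rh_critical:
        return 0.0
    # Fraction rises linearly from 0 at the critical humidity to 1 at saturation.
    return min(1.0, (mean_relative_humidity - rh_critical) / (1.0 - rh_critical))

print(cloud_fraction(0.85))   # 0.25: partially cloudy grid box
print(cloud_fraction(0.95))   # 0.75: mostly cloudy grid box
```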

Sub-grid modeling methods can be contrasted with another kind of parallel multiscale model in which the sub-grid algorithms are more theoretically principled, but are motivated by a theory at a different level of description. In the example of the simulation of bulk matter mentioned above, the algorithm driving the smaller level of description is not built by the seat of the pants. The algorithm driving the smaller level is actually more theoretically principled than the higher level in the sense that the physics is more fundamental: quantum mechanics or molecular dynamics vs. continuum mechanics. These kinds of multiscale models, in other words, cobble together the resources of theories at different levels of description. So they provide interesting examples that provoke our thinking about intertheoretic relationships, and that challenge the widely-held view that an inconsistent set of laws can have no models.

2.4 Monte Carlo Simulations

In the scientific literature, there is another large class of computer simulations called Monte Carlo (MC) Simulations. MC simulations are computer algorithms that use randomness to calculate the properties of a mathematical model and where the randomness of the algorithm is not a feature of the target model. A nice example is the use of a random algorithm to calculate the value of π. If you draw a unit square on a piece of paper and inscribe a circle in it, and then randomly drop a collection of objects inside the square, the proportion of objects that land in the circle would be roughly equal to π/4. A computer simulation that simulated a procedure like that would be called a MC simulation for calculating π.
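A minimal sketch of such an MC simulation follows; the sample size is an arbitrary choice.

```python
# Monte Carlo estimate of pi: drop random points into the unit square and count
# how many land inside the inscribed circle of radius 1/2.  That fraction
# approaches pi/4, so multiplying by 4 gives an estimate of pi.
import random

def estimate_pi(n_samples=1_000_000):
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()       # a point in the unit square
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:   # inside the inscribed circle
            inside += 1
    return 4 * inside / n_samples

print(estimate_pi())   # approximately 3.14, with some random fluctuation
```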

Many philosophers of science have deviated from ordinary scientific language here and have shied away from thinking of MC simulations as genuine simulations. Grüne-Yanoff and Weirich (2010) offer the following reasoning: “The Monte Carlo approach does not have a mimetic purpose: It imitates the deterministic system not in order to serve as a surrogate that is investigated in its stead but only in order to offer an alternative computation of the deterministic system’s properties” (p. 30). This shows that MC simulations do not fit any of the above definitions aptly. On the other hand, the divide between philosophers and ordinary scientific language can perhaps be bridged by noting that MC simulations simulate an imaginary process that might be used for calculating something relevant to studying some other process. Suppose I am modeling a planetary orbit and for my calculation I need to know the value of π. If I do the MC simulation mentioned in the last paragraph, I am simulating the process of randomly dropping objects into a square, but what I am modeling is a planetary orbit. This is the sense in which MC simulations are simulations, but they are not simulations of the systems they are being used to study. However, as Beisbart and Norton (2012) point out, some MC simulations (viz. those that use MC techniques to solve stochastic dynamical equations referring to a physical system) are in fact simulations of the systems they study.

3. Purposes of Simulation

There are three general categories of purposes to which computer simulations can be put. Simulations can be used for heuristic purposes, for the purpose of predicting data that we do not have, and for generating understanding of data that we already have.

Under the category of heuristic models, simulations can be further subdivided into those used to communicate knowledge to others, and those used to represent information to ourselves. When Watson and Crick played with tin plates and wire, they were doing the latter at first, and the former when they showed the results to others. When the Army Corps built the model of the San Francisco Bay to convince the voting population that a particular intervention was dangerous, they were using it for this kind of heuristic purpose. Computer simulations can be used for both of these kinds of purposes—to explore features of possible representational structures, or to communicate knowledge to others. For example: computer simulations of natural processes, such as bacterial reproduction, tectonic shifting, chemical reactions, and evolution, have all been used in classroom settings to help students visualize hidden structure in phenomena and processes that are impractical, impossible, or costly to illustrate in a “wet” laboratory setting.

Another broad class of purposes to which computer simulations can be put is in telling us about how we should expect some system in the real world to behave under a particular set of circumstances. Loosely speaking: computer simulation can be used for prediction. We can use models to predict the future, or to retrodict the past; we can use them to make precise predictions or loose and general ones. With regard to the relative precision of the predictions we make with simulations, we can be slightly more fine-grained in our taxonomy. There are a) point predictions: Where will the planet Mars be on October 21st, 2300? b) “qualitative,” global, or systemic predictions: Is the orbit of this planet stable? What scaling law emerges in these kinds of systems? What is the fractal dimension of the attractor for systems of this kind? and c) range predictions: It is 66% likely that the global mean surface temperature will increase by between 2–5 degrees C by the year 2100; it is “highly likely” that sea level will rise by at least two feet; it is “implausible” that the thermohaline circulation will shut down in the next 50 years.

Finally, simulations can be used to understand systems and their behavior. If we already have data telling us how some system behaves, we can use computer simulation to answer questions about how these events could possibly have occurred; or about how those events actually did occur.

When thinking about the topic of the next section, the epistemology of computer simulations, we should also keep in mind that the procedures needed to sanction the results of simulations will often depend, in large part, on which of the above kinds of purposes the simulation is being put to.

4. The Epistemology of Computer Simulations

As computer simulation methods have gained importance in more and more disciplines, the issue of their trustworthiness for generating new knowledge has grown, especially when simulations are expected to be counted as epistemic peers with experiments and traditional analytic theoretical methods. The relevant question is always whether or not the results of a particular computer simulation are accurate enough for their intended purpose. If a simulation is being used to forecast weather, does it predict the variables we are interested in to a degree of accuracy that is sufficient to meet the needs of its consumers? If a simulation of the atmosphere above a Midwestern plain is being used to understand the structure of a severe thunderstorm, do we have confidence that the structures in the flow—the ones that will play an explanatory role in our account of why the storm sometimes splits in two, or why it sometimes forms tornados—are being depicted accurately enough to support our confidence in the explanation? If a simulation is being used in engineering and design, are the predictions made by the simulation reliable enough to sanction a particular choice of design parameters, or to sanction our belief that a particular design of airplane wing will function? Assuming that the answer to these questions is sometimes “yes”, i.e. that these kinds of inferences are at least sometimes justified, the central philosophical question is: what justifies them? More generally, how can the claim that a simulation is good enough for its intended purpose be evaluated? These are the central questions of the epistemology of computer simulation (EOCS).

Given that confirmation theory is one of the traditional topics in philosophy of science, it might seem obvious that the latter would have the resources to begin to approach these questions. Winsberg (1999), however, argued that when it comes to topics related to the credentialing of knowledge claims, philosophy of science has traditionally concerned itself with the justification of theories, not their application. Most simulation, on the other hand, to the extent that it makes use of theory, tends to make use of well-established theory. EOCS, in other words, is rarely about testing the basic theories that may go into the simulation, and is most often about establishing the credibility of the hypotheses that are, in part, the result of applications of those theories.

4.1 Novel Features of EOCS

Winsberg (2001) argued that, unlike the epistemological issues that take center stage in traditional confirmation theory, an adequate EOCS must meet three conditions. In particular it must take account of the fact that the knowledge produced by computer simulations is the result of inferences that are downward, motley, and autonomous.

Downward. EOCS must reflect the fact that in a large number of cases, accepted scientific theories are the starting point for the construction of computer simulation models and play an important role in the justification of inferences from simulation results to conclusions about real-world target systems. The word “downward” was meant to signal the fact that, unlike most scientific inferences that have traditionally interested philosophers, which move up from observation instances to theories, here we have inferences that are drawn (in part) from high theory, down to particular features of phenomena.

Motley. EOCS must take into account that simulation results nevertheless typically depend not just on theory but on many other model ingredients and resources as well, including parameterizations (discussed above), numerical solution methods, mathematical tricks, approximations and idealizations, outright fictions, ad hoc assumptions, function libraries, compilers and computer hardware, and perhaps most importantly, the blood, sweat, and tears of much trial and error.

Autonomous. EOCS must take into account the autonomy of the knowledge produced by simulation in the sense that the knowledge produced by simulation cannot be sanctioned entirely by comparison with observation. Simulations are usually employed to study phenomena where data are sparse. In these circumstances, simulations are meant to replace experiments and observations as sources of data about the world because the relevant experiments or observations are out of reach, for principled, practical, or ethical reasons.

Parker (2013) has made the point that the usefulness of these conditions is somewhat compromised by the fact that they are overly focused on simulation in the physical sciences and other disciplines where simulation is theory-driven and equation-based. This seems correct. In the social and behavioral sciences, and other disciplines where agent-based simulations (see 2.2) are more the norm, and where models are built in the absence of established and quantitative theories, EOCS probably ought to be characterized in other terms.

For instance, some social scientists who use agent-based simulation pursue a methodology in which social phenomena (for example an observed pattern like segregation) are explained, or accounted for, by generating similar-looking phenomena in their simulations (Epstein and Axtell 1996; Epstein 1999). But this raises its own sorts of epistemological questions. What exactly has been accomplished, what kind of knowledge has been acquired, when an observed social phenomenon is more or less reproduced by an agent-based simulation? Does this count as an explanation of the phenomenon? A possible explanation? (see e.g., Grüne-Yanoff 2007). Giuseppe Primiero (2019) argues that there is a whole domain of “artificial sciences” built around agent-based and multi-agent-system-based simulations, and that it requires its own epistemology: one where validation cannot be defined by comparison with an existing real-world system, but must be defined vis-à-vis an intended system.

It is also fair to say, as Parker (2013) does, that the conditions outlined above pay insufficient attention to the various and differing purposes for which simulations are used (as discussed in section 3). If we are using a simulation to make detailed quantitative predictions about the future behavior of a target system, the epistemology of such inferences might require more stringent standards than those that are involved when the inferences being made are about the general, qualitative behavior of a whole class of systems. Indeed, it is also fair to say that much more work could be done in classifying the kinds of purposes to which computer simulations are put and the constraints those purposes place on the structure of their epistemology.

Frigg and Reiss (2009) argued that none of these three conditions is new to computer simulation. They argued that ordinary ‘paper and pencil’ modeling incorporates these features. Indeed, they argued that computer simulation could not possibly raise new epistemological issues, because those issues can be cleanly divided into the question of the appropriateness of the model underlying the simulation, which is identical to the epistemological question that arises in ordinary modeling, and the question of the correctness of the solution to the model equations delivered by the simulation, which is a mathematical question, and not one related to the epistemology of science. On the first point, Winsberg (2009b) replied that it was the simultaneous confluence of all three features that was new to simulation. We will return to the second point in section 4.3.

4.2 EOCS and the Epistemology of Experiment

Some of the work on EOCS has developed analogies between computer simulation and experiment in order to draw on recent work in the epistemology of experiment, particularly the work of Allan Franklin; see the entry on experiments in physics.

In his work on the epistemology of experiment, Franklin (1986, 1989) identified a number of strategies that experimenters use to increase rational confidence in their results. Weissert (1997) and Parker (2008a) argued for various forms of analogy between these strategies and a number of strategies available to simulationists to sanction their results. The most detailed analysis of these relationships is to be found in Parker 2008a, where she also uses these analogies to highlight weaknesses in current approaches to simulation model evaluation.

Winsberg (2003) also makes use of Ian Hacking’s (1983, 1988, 1992) work on the philosophy of experiment. One of Hacking’s central insights about experiment is captured in his slogan that experiments ‘have a life of their own’ (1992: 306). Hacking intended to convey two things with this slogan. The first was a reaction against the unstable picture of science that comes, for example, from Kuhn. Hacking (1992) suggests that experimental results can remain stable even in the face of dramatic changes in the other parts of science. The second, related, point he intended to convey was that ‘experiments are organic, develop, change, and yet retain a certain long-term development which makes us talk about repeating and replicating experiments’ (1992: 307). Some of the techniques that simulationists use to construct their models get credentialed in much the same way that Hacking says that instruments and experimental procedures and methods do; the credentials develop over an extended period of time and become deeply tradition-bound. In Hacking’s language, the techniques and sets of assumptions that simulationists use become ‘self-vindicating’. Perhaps a better expression would be that they carry their own credentials. This provides a response to the problem posed in 4.1, of understanding how simulation could have a viable epistemology despite the motley and autonomous nature of its inferences.

Drawing inspiration from another philosopher of experiment (Mayo 1996), Parker (2008b) suggests a remedy to some of the shortcomings in current approaches to simulation model evaluation. In this work, Parker suggests that Mayo’s error-statistical approach for understanding the traditional experiment—which makes use of the notion of a “severe test”—could shed light on the epistemology of simulation. The central question of the epistemology of simulation from an error-statistical perspective becomes, ‘What warrants our taking a computer simulation to be a severe test of some hypothesis about the natural world? That is, what warrants our concluding that the simulation would be unlikely to give the results that it in fact gave, if the hypothesis of interest were false?’ (2008b, 380). Parker believes that too much of what passes for simulation model evaluation lacks rigor and structure because it:

consists in little more than side-by-side comparisons of simulation output and observational data, with little or no explicit argumentation concerning what, if anything, these comparisons indicate about the capacity of the model to provide evidence for specific scientific hypotheses of interest. (2008b, 381)

Drawing explicitly upon Mayo’s (1996) work, she argues that what the epistemology of simulation ought to be doing, instead, is offering some account of the ‘canonical errors’ that can arise, as well as strategies for probing for their presence.

4.3 Verification and Validation

Practitioners of simulation, particularly in engineering contexts, in weapons testing, and in climate science, tend to conceptualize the EOCS in terms of verification and validation. Verification is said to be the process of determining whether the output of the simulation approximates the true solutions to the differential equations of the original model. Validation, on the other hand, is said to be the process of determining whether the chosen model is a good enough representation of the real-world system for the purpose of the simulation. The literature on verification and validation from engineers and scientists is enormous and it is beginning to receive some attention from philosophers.

Verification can be divided into solution verification and code verification. The former verifies that the output of the intended algorithm approximates the true solutions to the differential equations of the original model. The latter verifies that the code, as written, carries out the intended algorithm. Code verification has been mostly ignored by philosophers of science, probably because it has been seen as more of a problem in computer science than in empirical science—perhaps a mistake. Part of solution verification consists in comparing computed output with analytic solutions (so-called “benchmark solutions”). Though this method can of course help to make a case for the results of a computer simulation, it is by itself inadequate, since simulations are often used precisely because analytic solutions are unavailable for the regions of solution space that are of interest. Other indirect techniques are available, the most important of which is probably checking whether, and at what rate, computed output converges to a stable solution as the time and spatial resolution of the discretization grid gets finer.
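As a toy illustration of such a convergence check (the decaying-exponential model and the step sizes are invented stand-ins for whatever equations a real simulation discretizes), one can run the same simulation at successively finer resolutions and estimate the observed order of convergence from the differences between successive outputs.

```python
# Illustrative solution-verification check: estimate the observed order of
# convergence by comparing outputs at successively halved step sizes, without
# relying on an analytic benchmark solution.
import math

def simulate(dt, k=2.0, u0=1.0, t_end=1.0):
    """Forward-Euler integration of du/dt = -k*u up to t_end."""
    u = u0
    for _ in range(int(round(t_end / dt))):
        u += dt * (-k * u)
    return u

coarse, medium, fine = (simulate(dt) for dt in (0.04, 0.02, 0.01))
# If the scheme converges at order p, successive differences shrink by a factor
# of 2**p when the step size is halved; for forward Euler, p should come out
# close to 1.
observed_order = math.log2(abs(coarse - medium) / abs(medium - fine))
print(f"observed order of convergence ~ {observed_order:.2f}")
```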

The principal strategy of validation involves comparing model output with observable data. Again, of course, this strategy is limited in most cases, where simulations are being run because observable data are sparse. But complex strategies can be employed, including comparing the output of subsystems of a simulation to relevant experiments (Parker, 2013; Oberkampf and Roy 2010).

The concepts of verification and validation have drawn some criticism from philosophers. Oreskes et al. 1994, a very widely-cited article, was mostly critical of the terminology, arguing that “validity,” in particular, is a property that only applies to logical arguments, and that hence the term, when applied to models, might lead to overconfidence.

Winsberg (2010, 2018, p.155) has argued that the conceptual division between verification and validation can be misleading, if it is taken to suggest that there is one set of methods which can, by itself, show that we have solved the equations right, and another set of methods, which can, by itself, show that we’ve got the right equations. He also argued that it is misleading to think that the epistemology of simulation is cleanly divided into a mathematical (and computer science) part (verification) and an empirical part (validation). But this misleading idea often follows discussion of verification and validation. We find it both in the work of practitioners and philosophers.

Here is the standard line from a practitioner, Roy: “Verification deals with mathematics and addresses the correctness of the numerical solution to a given model. Validation, on the other hand, deals with physics and addresses the appropriateness of the model in reproducing experimental data. Verification can be thought of as solving the chosen equations correctly, while validation is choosing the correct equations in the first place” (Roy 2005).

Some philosophers have put this distinction to work in arguments about the philosophical novelty of simulation. We first raised this issue in section 4.1, where Frigg and Reiss argued that simulation could have no epistemologically novel features, since it contained two distinct components: a component that is identical to the epistemology of ordinary modeling, and a component that is entirely mathematical. “We should distinguish two different notions of reliability here, answering two different questions. First, are the solutions that the computer provides close enough to the actual (but unavailable) solutions to be useful?…this is a purely mathematical question and falls within the class of problems we have just mentioned. So, there is nothing new here from a philosophical point of view and the question is indeed one of number crunching. Second, do the computational models that are the basis of the simulations represent the target system correctly? That is, are the simulation results externally valid? This is a serious question, but one that is independent of the first problem, and one that equally arises in connection with models that do not involve intractable mathematics and ordinary experiments” (Frigg and Reiss 2009).

But verification and validation are not, strictly speaking, so cleanly separable. That is because most methods of validation, by themselves, are much too weak to establish the validity of a simulation. And most model equations chosen for simulation are not in any straightforward sense “the right equations”; they are not the model equations we would choose in an ideal world. We have good reason to think, in other words, that there are model equations out there that enjoy better empirical support, in the abstract. The equations we choose often reflect a compromise between what we think best describes the phenomena and computational tractability. So the equations that are chosen are rarely well “validated” on their own. If we want to understand why simulation results are taken to be credible, we have to look at the epistemology of simulation as an integrated whole, not as cleanly divided into verification and validation—each of which, on its own, would look inadequate to the task.

So one point is that verification and validation are not independently-successful and separable activities. But the other point is that there are not two independent entities onto which these activities can be directed: a model chosen to be discretized, and a method for discretizing it. Once one recognizes that the equations to be “solved” are sometimes chosen so as to cancel out discretization errors, etc. (Lenhard 2007 has a very nice example of this involving the Arakawa operator), this latter distinction gets harder to maintain. So success is achieved in simulation with a kind of back-and-forth, trial-and-error, piecemeal adjustment between model and method of calculation. And when this is the case, it is hard even to know what it means to say that a simulation is separately verified and validated.

The point of these criticisms is not that verification and validation is a useless distinction, but rather that scientists should not inflate a pragmatically useful distinction into a clean methodological dictate that misrepresents the messiness of their own practice. Collaterally, Frigg and Reiss’s argument for the absence of epistemological novelty in simulation fails for just this reason. It is not “a purely mathematical question” whether the solutions that the computer provides are close enough to the actual (but unavailable) solutions to be useful. At least not in this respect: it is not a question that can be answered, as a pragmatic matter, entirely using mathematical methods. And hence it is an empirical/epistemological issue that does not arise in ordinary modeling.

4.4 EOCS and Epistemic Entitlement

A major strand of ordinary epistemology (that is, epistemology outside of the philosophy of science) emphasizes the degree to which it is a condition for the possibility of knowledge that we rely on our senses and the testimony of other people in ways that we cannot ourselves justify. According to Tyler Burge (1993, 1998), beliefs in the results of these two processes are warranted but not justified. Rather, according to Burge, we are entitled to these beliefs. “[w]e are entitled to rely, other things equal, on perception, memory, deductive and inductive reasoning, and on…the word of others” (1993, p. 458). Beliefs to which a believer is entitled are those that are unsupported by evidence available to the believer, but which the believer is nevertheless warranted in believing.

Some work in EOCS has developed analogies between computer simulation and the kinds of knowledge producing practices Burge associates with entitlement. (See especially Barberousse and Vorms, 2014, and Beisbart, 2017.) This is, in some ways, a natural outgrowth of Burge’s arguments that we view computer assisted proofs in this way (1998). Computer simulations are extremely complex, often the result of the epistemic labor of a diverse set of scientists and other experts, and perhaps most importantly, epistemically opaque (Humphreys, 2004). Because of these features, Beisbart argues that it is reasonable to treat computer simulations in the same way that we treat our senses and the testimony of others: simply as things that can be trusted on the assumption that everything is working smoothly. (Beisbart, 2017).

Symons and Alvarado (2019) argue that there is a fundamental problem with this approach to EOCS, and it has to do with a feature of computer-aided proof that was crucial to Burge’s original account: that of being a ‘transparent conveyor’. “It is very important to note, for example, that Burge’s account of content preservation and transparent conveying requires that the recipient already has reason not to doubt the source” (p. 13). But Symons and Alvarado point to many of the properties of computer simulations (drawing from Winsberg 2010 and Ruphy 2015) in virtue of which they fail to have these properties. Lenhard and Küster 2019 is also relevant here, as they argue that there are many features of computer simulations that make them difficult to reproduce and that therefore undermine some of the stability that would be required for them to be transparent conveyors. For these reasons and others having to do with many of the features discussed in 4.2 and 4.3, Symons and Alvarado argue that it is implausible that we should view computer simulation as a basic epistemic practice on a par with sense perception, memory, testimony, or the like.

4.5 Pragmatic Approaches to EOCS

Another approach to EOCS is to ground it in the practical aspects of the craft of modeling and simulation. According to this view, in other words, the best account we can give of the reasons we have for believing the results of computer simulation studies is to have trust in the practical skills and craft of the modelers that use them. A good example of this kind of account is Hubig and Kaminski (2017). The epistemological goal of this kind of work is to identify the locus of our trust in simulations in practical aspects of the craft of modeling and simulation, rather than in any features of the models themselves. Resch et al. (2017) argue that a good part of the reason we should trust simulations is not because of the simulations themselves, but because of the interpretive artistry of those who employ their art and skill to interpret simulation outputs. Symons and Alvarado (2019) are also critical of this approach, arguing that “Part of the task of the epistemology of computer simulation is to explain the difference between the contemporary scientist’s position in relation to epistemically opaque computer simulations…” (p. 7) and the position of believers in a mechanical oracle in relation to their oracle. Pragmatic and epistemic considerations, according to Symons and Alvarado, co-exist, and they are not possible competitors for the correct explanation of our trust in simulations: the epistemic reasons are ultimately what explain and ground the pragmatic ones.

5. Simulation and Experiment

Working scientists sometimes describe simulation studies in experimental terms. The connection between simulation and experiment probably goes back as far as von Neumann, who, when advocating very early on for the use of computers in physics, noted that many difficult experiments had to be conducted merely to determine facts that ought, in principle, to be derivable from theory. Once von Neumann’s vision became a reality, and some of these experiments began to be replaced by simulations, it became somewhat natural to view them as versions of experiment. A representative passage can be found in a popular book on simulation:

A simulation that accurately mimics a complex phenomenon contains a wealth of information about that phenomenon. Variables such as temperature, pressure, humidity, and wind velocity are evaluated at thousands of points by the supercomputer as it simulates the development of a storm, for example. Such data, which far exceed anything that could be gained from launching a fleet of weather balloons, reveals intimate details of what is going on in the storm cloud. (Kaufmann and Smarr 1993, 4)

The idea of “in silico” experiments becomes even more plausible when a simulation study is designed to learn what happens to a system as a result of various possible interventions: What would happen to the global climate if x amount of carbon were added to the atmosphere? What will happen to this airplane wing if it is subjected to such-and-such strain? How would traffic patterns change if an onramp is added at this location?

Philosophers, consequently, have begun to consider in what sense, if any, computer simulations are like experiments and in what sense they differ. A related issue is the question of when a process that fundamentally involves computer simulation can count as a measurement (Parker, 2017). A number of views have emerged in the literature centered around defending and criticizing two theses:

The identity thesis. Computer simulation studies are literally instances of experiments.

The epistemological dependence thesis. The identity thesis would (if it were true) be a good reason (weak version), or the best reason (stronger version), or the only reason (strongest version; it is a necessary condition) to believe that simulations can provide warrants for belief in the hypotheses that they support. A consequence of the strongest version is that only if the identity thesis is true is there reason to believe that simulations can confer warrant for believing in hypotheses.

The central idea behind the epistemological dependence thesis is that experiments are the canonical entities that play a central role in warranting our belief in scientific hypotheses, and that therefore the degree to which we ought to think that simulations can also play a role in warranting such beliefs depends on the extent to which they can be identified as a kind of experiment.

One can find philosophers arguing for the identity thesis as early as Humphreys 1995 and Hughes 1999. And there is at least implicit support for the (stronger) version of the epistemological dependence thesis in Hughes. The earliest explicit argument in favor of the epistemological dependence thesis, however, is in Norton and Suppe 2001. According to Norton and Suppe, simulations can warrant belief precisely because they literally are experiments. They have a detailed story to tell about in what sense they are experiments, and how this is all supposed to work. According to Norton and Suppe, a valid simulation is one in which certain formal relations (what they call ‘realization’) hold between a base model, the modeled physical system itself, and the computer running the algorithm. When the proper conditions are met, ‘a simulation can be used as an instrument for probing or detecting real world phenomena. Empirical data about real phenomena are produced under conditions of experimental control’ (p. 73).

One problem with this story is that the formal conditions that they set out are much too strict. It is unlikely that there are very many real examples of computer simulations that meet their strict standards. Simulation is almost always a far more idealizing and approximating enterprise. So, if simulations are experiments, it is probably not in the way that Norton and Suppe imagined.

More generally, the identity thesis has drawn fire from other quarters.

Gilbert and Troitzsch argued that “[t]he major difference is that while in an experiment, one is controlling the actual object of interest (for example, in a chemistry experiment, the chemicals under investigation), in a simulation one is experimenting with a model rather than the phenomenon itself.” (Gilbert and Troitzsch 1999, 13). But this doesn’t seem right. Many (Guala 2002, 2008, Morgan 2003, Parker 2009a, Winsberg 2009a) have pointed to problems with the claim. If Gilbert and Troitzsch mean that simulationists manipulate models in the sense of abstract objects, then the claim is difficult to understand—how do we manipulate an abstract entity? If, on the other hand, they simply mean to point to the fact that the physical object that simulationists manipulate—a digital computer—is not the actual object of interest, then it is not clear why this differs from ordinary experiments.

It is false that real experiments always manipulate exactly their targets of interest. In fact, in both real experiments and simulations, there is a complex relationship between what is manipulated in the investigation on the one hand, and the real-world systems that are the targets of the investigation on the other. In cases of both experiment and simulation, therefore, it takes an argument of some substance to establish the ‘external validity’ of the investigation – to establish that what is learned about the system being manipulated is applicable to the system of interest. Mendel, for example, manipulated pea plants, but he was interested in learning about the phenomenon of heritability generally. The idea of a model organism in biology makes this point perspicuous. We experiment on Caenorhabditis elegans because we are interested in understanding how organisms in general use genes to control development. We experiment on Drosophila melanogaster because it provides a useful model of mutations and genetic inheritance. But the idea is not limited to biology. Galileo experimented with inclined planes because he was interested in how objects fall and how they would behave in the absence of interfering forces—phenomena that the inclined plane experiments did not even actually instantiate.

Of course, this view about experiments is not uncontested. It is true that, quite often, experimentalists infer something about a system distinct from the system they interfere with. However, it is not clear whether this inference is a proper part of the original experiment. Peschard (2010) mounts a criticism along these lines, and hence can be seen as a defender of Gilbert and Troitzsch. Peschard argues that the fundamental assumption of their critics—that in experimentation, just as in simulation, what is manipulated is a system standing in for a target system—is confused. It confuses, Peschard argues, the epistemic target of an experiment with its epistemic motivation. She argues that while the epistemic motivation for doing experiments on C. elegans might be quite far-reaching, the proper epistemic target of any such experiment is the worm itself. In a simulation, according to Peschard, however, the epistemic target is never the digital computer itself. Thus, simulation is distinct from experiment, according to her, in that its epistemic target (as opposed to merely its epistemic motivation) is distinct from the object being manipulated. Roush (2017) can also be seen as a defender of the Gilbert and Troitzsch line, but Roush appeals to sameness of natural kinds as the crucial feature that separates experiments and simulations. Other opponents of the identity thesis include Giere (2009) and Beisbart and Norton (2012, Other Internet Resources).

It is not clear how to adjudicate this dispute, and it seems to revolve primarily around a difference of emphasis. One can emphasize the difference between experiment and simulation, following Gilbert and Troitzsch and Peschard, by insisting that experiments teach us first about their epistemic targets and only secondarily allow inferences to the behavior of other systems. (I.e., experiments on worms teach us, in the first instance, about worms, and only secondarily allow us to make inferences about genetic control more generally.) This would make them conceptually different from computer simulations, which are not thought to teach us, in the first instance, about the behavior of computers, and only in the second instance about storms, or galaxies, or whatever.

Or one can emphasize similarity in the opposite way. One can emphasize the degree to which experimental targets are always chosen as surrogates for what is really of interest. Morrison (2009) is probably the most forceful defender of emphasizing this aspect of the similarity of experiment and simulation. She argues that most experimental practice, and indeed most measurement practice, involves the same kinds of modeling practices as simulations. In any case, pace Peschard, nothing but a debate about nomenclature—and maybe an appeal to the ordinary language use of scientists, not always the most compelling kind of argument—would prevent us from saying that the epistemic target of a storm simulation is the computer, and that the storm is merely the epistemic motivation for studying the computer.

Be that as it may, many philosophers of simulation, including those discussed in this section, have chosen the latter path—partly as a way of drawing attention to the ways in which the message lurking behind Gilbert and Troitzsch’s quoted claim paints an overly simplistic picture of experiment. It does seem overly simplistic to suggest that experiment gets a direct grip on the world while simulation’s situation is exactly the opposite, and this is the picture one seems to get from the Gilbert and Troitzsch quotation. Peschard’s more sophisticated picture, involving a distinction between epistemic targets and epistemic motivations, goes a long way towards smoothing over those concerns without pushing us into the territory of thinking that simulation and experiment are, in this regard, exactly the same.

Still, despite rejecting Gilbert and Troitzsch’s characterization of the difference between simulation and experiment, Guala and Morgan both reject the identity thesis. Drawing on the work of Simon (1969), Guala argues that simulations differ fundamentally from experiments in that the object of manipulation in an experiment bears a material similarity to the target of interest, but in a simulation, the similarity between object and target is merely formal. Interestingly, while Morgan accepts this argument against the identity thesis, she seems to hold to a version of the epistemological dependency thesis. She argues, in other words, that the difference between experiments and simulations identified by Guala implies that simulations are epistemologically inferior to real experiments – that they have intrinsically less power to warrant belief in hypotheses about the real world because they are not experiments.

A defense of the epistemic power of simulations against Morgan’s (2002) argument could come in the form of a defense of the identity thesis, or in the form of a rejection of the epistemological dependency thesis. On the former front, there seem to be two problems with Guala’s (2002) argument against the identity thesis: the notion of material similarity it invokes is too weak, and the notion of mere formal similarity is too vague, for either to do the required work. Consider, for example, the fact that it is not uncommon, in the engineering sciences, to use simulation methods to study the behavior of systems fabricated out of silicon. The engineer wants to learn about the properties of different design possibilities for a silicon device, so she develops a computational model of the device and runs a simulation of its behavior on a digital computer. There are deep material similarities between, and some of the same material causes are at work in, the central processor of the computer and the silicon device being studied. On Guala’s line of reasoning, this should count as a real experiment, but that seems wrong. The peculiarities of this example illustrate the problem rather starkly, but the problem is in fact quite general: any two systems bear some material similarities to each other and some differences.

On the flip side, the idea that the existence of a formal similarity between two material entities could mark anything interesting is conceptually confused. Given any two sufficiently complex entities, there are many ways in which they are formally identical, not to mention similar. There are also ways in which they are formally completely different. We can, of course, speak loosely and say that two things bear a formal similarity, but what we really mean is that our best formal representations of the two entities have formal similarities. In any case, there appear to be good grounds for rejecting both the Gilbert and Troitzsch way and the Morgan and Guala way of distinguishing experiments from simulations.

Returning to the defense of the epistemic power of simulations, there are also grounds for rejecting the epistemological dependency thesis. As Parker (2009a) points out, what matters, in both experiment and simulation, is that there be relevant similarities between the system being manipulated (whether a programmed digital computer or some material system) and the target of the investigation. When the relevant background knowledge is in place, a simulation can provide more reliable knowledge of a system than an experiment can. A computer simulation of the solar system, based on our most sophisticated models of celestial dynamics, will produce better representations of the planets’ orbits than any experiment could.

Parke (2014) argues against the epistemological dependency thesis by undermining two premises that she believes support it: first, that experiments generate greater inferential power than simulations, and second, that simulations cannot surprise us in the way that experiments can. The argument that simulations cannot surprise us comes from Morgan (2005). Pace Morgan, Parke argues that simulationists are often surprised by their simulations, both because they are not computationally omniscient and because they are not always the sole creators of the models and code they use. She argues, moreover, that ‘[d]ifferences in researcher’s epistemic states, alone, seem like the wrong grounds for tracking a distinction between experiment and simulation’ (258). Adrian Currie (2018) defends Morgan’s original intuition by making two friendly amendments. He argues that the distinction Morgan was really after concerns two different kinds of surprise, distinguished by their source: surprises that arise from bringing theoretical knowledge into contact with the world are distinctive of experiment. He also more carefully defines surprise in a non-psychological way, as a “quality the attainment of which constitutes genuine epistemic progress” (p. 640).

6. Computer Simulation and the Structure of Scientific Theories

Paul Humphreys (2004) has argued that computer simulations have profound implications for our understanding of the structure of theories; they reveal, he argues, inadequacies in both the semantic and syntactic views of scientific theories. This claim has drawn sharp fire from Roman Frigg and Julian Reiss (2009). Frigg and Reiss argue that whether or not a model admits of analytic solution has no bearing on how it relates to the world, and they use the example of the double pendulum to make the point: whether the pendulum’s inner fulcrum is held fixed (a fact that determines whether the relevant model is analytically solvable) has no bearing on the semantics of the elements of the model. The semantics of a model, they conclude, is unaffected by whether the model is analytically solvable.

This was not responsive, however, to the most charitable reading of what Humphreys was pointing at. The syntactic and semantic views of theories, after all, were not just accounts of how our abstract scientific representations relate to the world. More particularly, they were not stories about the relation between particular models and the world, but rather about the relation between theories and the world, and the role, if any, that models played in that relation.

They were also stories that had a lot to say about where the philosophically interesting action is when it comes to scientific theorizing. The syntactic view suggested that scientific practice could be adequately rationally reconstructed by thinking of theories as axiomatic systems, and, more importantly, that logical deduction was a useful regulative ideal for thinking about how inferences from theory to the world are drawn. The syntactic view also, by omission, made it fairly clear that modeling played, if anything, only a heuristic role in science. (This was a feature of the syntactic view of theories that Frederick Suppe, one of its most ardent critics, often railed against.) On this view, theories themselves had nothing to do with models, and theories could be compared directly to the world, without any important role for modeling to play.

The semantic view of theories, on the other hand, did emphasize an important role for models, but it also urged that theories were non-linguistic entities. It urged philosophers not to be distracted by the contingencies of the particular linguistic form in which a theory might be expressed in, say, a particular textbook.

Computer simulations, however, do seem to illustrate that both of these themes were misguided. It was profoundly wrong to think that logical deduction was the right tool for rationally reconstructing the process of theory application. Computer simulations show that there are methods of theory application that vastly outstrip the inferential power of logical deduction. The space of solutions, for example, that is available via logical deduction from the theory of fluids is microscopic compared with the space of applications that can be explored via computer simulation. On the flip side, computer simulations seem to reveal that, as Humphreys (2004) has urged, syntax matters. It was wrong, it turns out, to suggest, as the semantic view did, that the particular linguistic form in which a scientific theory is expressed is philosophically uninteresting. The syntax of the theory’s expression will have a deep effect on what inferences can be drawn from it, what kinds of idealizations will work well with it, and so on. Humphreys put the point as follows: “the specific syntactic representation used is often crucial to the solvability of the theory’s equations” (Humphreys 2009, p. 620). The theory of fluids can be used to emphasize this point: whether we express that theory in Eulerian or Lagrangian form will deeply affect what, in practice, we can calculate and how; it will affect which idealizations, approximations, and calculational techniques will be effective and reliable in which circumstances. So the epistemology of computer simulation needs to be sensitive to the particular syntactic formulation of a theory, and to how well that particular formulation has been credentialed. Hence, it does seem right to emphasize, as Humphreys (2004) did, that computer simulations have revealed inadequacies in both the syntactic and semantic views of theories.
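To make the point about syntactic formulation concrete, the following is a minimal Python sketch, not drawn from Humphreys or the other literature discussed here, of the same physical content (one-dimensional advection of a quantity by a constant flow) treated in Eulerian and in Lagrangian form; the grid spacing, time step, and velocity are arbitrary illustrative choices.

    import numpy as np

    # 1D advection of a scalar field by a constant velocity c.
    # Eulerian form: track field values at fixed grid points (du/dt + c du/dx = 0).
    # Lagrangian form: track moving parcels, along which the value is constant.
    c, dx, dt, n = 1.0, 0.01, 0.005, 100
    x = np.arange(n) * dx
    u = np.exp(-((x - 0.3) ** 2) / 0.002)        # initial profile

    def eulerian_step(u):
        # First-order upwind finite difference on the fixed grid (periodic boundary).
        return u - c * dt / dx * (u - np.roll(u, 1))

    def lagrangian_step(positions):
        # Parcels keep their values; only their positions are advanced.
        return positions + c * dt

    u_grid, parcels = u.copy(), x.copy()
    for _ in range(60):
        u_grid = eulerian_step(u_grid)
        parcels = lagrangian_step(parcels)
    # The Eulerian scheme smears the profile (numerical diffusion); the Lagrangian
    # scheme advects it exactly. Same physics, different syntax, different behavior.

Nothing philosophical hangs on the details; the sketch only illustrates that the two formulations, though expressing the same physics, call for different discretizations with different practical strengths and failure modes.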

7. Emergence

Paul Humphreys (2004) and Mark Bedau (1997, 2011) have argued that philosophers interested in the topic of emergence can learn a great deal by looking at computer simulation. Philosophers interested in this topic should consult the entry on emergent properties, where the contributions of all these philosophers have been discussed.

The connection between emergence and simulation was perhaps best articulated by Bedau in his (2011). Bedau argued that any conception of emergence must meet the twin hallmarks of explaining how the whole depends on its parts and how the whole is independent of its parts. He argues that philosophers often focus on what he calls “strong” emergence, which posits brute downward causation that is irreducible in principle. But he argues that this is a mistake. He focuses instead on what he calls “weak” emergence, which allows for reducibility of wholes to parts in principle but not in practice. Systems that produce emergent properties are mere mechanisms, but the mechanisms are very complex (they have very many independently interacting parts). As a result, there is no way to figure out exactly what will happen given a specific set of initial and boundary conditions, except to “crawl the causal web”. It is here that the connection to computer simulation arises. Weakly emergent properties are characteristic of complex systems in nature. And it is also characteristic of complex computer simulations that there is no way to predict what they will do except to let them run. Weak emergence explains, according to Bedau, why computer simulations play a central role in the science of complex systems. The best way to understand and predict how real complex systems behave is to simulate them by crawling the micro-causal web, and see what happens.
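A standard toy illustration of this point, not one of Bedau’s own examples but in the same spirit, is a cellular automaton such as Conway’s Game of Life: the update rule for each cell is trivial, yet for most initial configurations the only practical way to find out what the system does is to run it and watch. A minimal Python sketch (the array size, random seed, and number of steps are arbitrary):

    import numpy as np

    def step(grid):
        # Sum the eight neighbors of every cell (periodic boundaries).
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Conway's rules: a live cell survives with 2 or 3 neighbors;
        # a dead cell becomes live with exactly 3.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 2, size=(50, 50))
    for _ in range(200):              # "crawl the causal web": just run it
        grid = step(grid)
    print(grid.sum(), "cells alive after 200 steps")

The state after 200 steps is fully fixed by the rule and the initial grid, so it is reducible in principle; but, in Bedau’s sense, the only feasible way to derive it is to simulate each intervening step.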

8. Fictions

Models of course involve idealizations. But it has been argued that some kinds of idealization, which play an especially prominent role in the kinds of modeling involved in computer simulation, are special—to the point that they deserve the title of “fiction.” This section will discuss attempts to define fictions and explore their role in computer simulation.

There are two different lines of thinking about the role of fictions in science. According to one, all models are fictions. This line of thinking is motivated by considering the role, for example, of “the ideal pendulum” in science. Scientists, it is argued, often make claims about these sorts of entities (e.g., “the ideal pendulum has a period proportional to the square-root of its length”) but they are nowhere to be found in the real world; hence they must be fictional entities. This line of argument about fictional entities in science does not connect up in any special way with computer simulation—readers interested in this topic should consult the entry on scientific representation [forthcoming].

Another line of thinking about fictions is concerned with the question of what sorts of representations in science ought to be regarded as fictional. Here, the concern is not so much about the ontology of scientific model entities, but about the representational character of various postulated model entities. Here, Winsberg (2009c) has argued that fictions do have a special connection to computer simulations. Or rather, that some computer simulations contain elements that best typify what we might call fictional representations in science, even if those representations are not uniquely present in simulations.

He notes that the first conception of a fiction—mentioned above—which makes “any representation that contradicts reality a fiction” (p. 179), doesn’t correspond to our ordinary use of the term: a rough map is not a fiction. He then proposes an alternative definition: a nonfiction is offered as a “good enough” guide to some part of the world (p. 181); a fiction is not. But the definition needs to be refined. Take the fable of the grasshopper and the ant. Although the fable offers lessons about how the world is, it is still a fiction, because it is “a useful guide to the way the world is in some general sense” rather than a specific guide to the way some particular part of the world is: its “prima facie representational target”, a singing grasshopper and a toiling ant. Nonfictions, on the other hand, “point to a certain part of the world” and are a guide to that part of the world (p. 181).

These kinds of fictional components of models are paradigmatically exemplified in certain computer simulations. Two of his examples are the “silogen atom” and “artificial viscosity.” Silogen atoms appear in certain nanomechanical models of cracks in silicon—a species of the kind of multiscale models that blend quantum mechanics and molecular mechanics mentioned in section 2.3. The silogen-containing models of crack propagation in silicon work by describing the crack itself using quantum mechanics and the region immediately surrounding the crack using classical molecular dynamics. To bring together the modeling frameworks in the two regions, the boundary gets treated as if it contains ‘silogen’ atoms, which have a mixture of the properties of silicon and those of hydrogen. Silogen atoms are fictions. They are not offered as even a ‘good enough’ description of the atoms at the boundary—their prima facie representational targets. But they are used in the hope that the overall model gets things right. Thus the overall model is not a fiction, but one of its components is. Artificial viscosity is a similar sort of example. Fluids with abrupt shocks are difficult to model on a computational grid because the abrupt shock hides inside a single grid cell and cannot be resolved by such an algorithm. Artificial viscosity is a technique that pretends the fluid is highly viscous—a fiction—right where the shock is, so that the shock becomes less abrupt and blurs over several grid cells. Getting the viscosity, and hence the thickness of the shock, wrong helps to get the overall model to work “well enough.” Again, the overall model of the fluid is not a fiction; it is a reliable enough guide to the behavior of the fluid. But the component called artificial viscosity is a fiction: it is not being used to reliably model the shock, but is incorporated into a larger modeling framework so as to make that larger framework “reliable enough.”
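For readers who want to see what the artificial-viscosity ‘fiction’ looks like in code, the following is a minimal sketch of the classic von Neumann–Richtmyer-style term used in many shock-capturing schemes; the coefficient, array shapes, and surrounding solver are illustrative assumptions rather than details taken from Winsberg’s discussion.

    import numpy as np

    def artificial_viscosity(rho, u, c_q=2.0):
        # Von Neumann-Richtmyer-style artificial viscosity on a 1D grid.
        # rho: density in each cell; u: velocity at cell edges (one more entry).
        # Returns a pseudo-pressure q that is nonzero only where the fluid is
        # being compressed, which smears a shock over a few grid cells.
        du = np.diff(u)                   # velocity jump across each cell
        q = (c_q ** 2) * rho * du ** 2    # quadratic in the jump
        q[du >= 0] = 0.0                  # no extra viscosity in expansion
        return q

    rho = np.ones(5)
    u = np.array([1.0, 0.8, 0.2, 0.2, 0.2, 0.2])   # a compression near the left edge
    print(artificial_viscosity(rho, u))
    # In a full scheme, q would simply be added to the physical pressure,
    # giving the fluid a viscosity it does not in fact have, exactly where the shock is.

The term q is deliberately unphysical: it thickens the shock so that the surrounding, non-fictional parts of the model can remain “reliable enough.”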

This account has drawn two sorts of criticism. Toon (2010) has argued that this definition of a fiction is too narrow. He gives examples of historical fictions such as I, Claudius and Schindler’s Ark, which he argues are fictions despite the fact that “they are offered as ‘good enough’ guides to those people, places and events in certain respects and we are entitled to take them as such” (pp. 286–7). Toon, then, presumably supports a broader conception of the role of fictions in science, according to which they do not play a particularly prominent or heightened role in computer simulation.

Gordon Purves (forthcoming) argues that there are examples of fictions, both in computational models (his example is so-called “imaginary cracks”) and elsewhere, that do not meet the strict requirements discussed above. Unlike Toon, however, he also wants to delineate fictional modeling elements from non-fictional ones. His principal criticism is of the criterion of fictionhood couched in terms of social norms of use; Purves argues that we ought to be able to settle whether or not some piece of modeling is a fiction in the absence of such norms. Thus, he wants to find an intrinsic characterization of a scientific fiction. His proposal takes as constitutive of model fictions that they fail to have the characteristic that Laymon (1985) called “piecewise improvability” (PI). PI is a characteristic of many models that are idealizations: as you de-idealize such a model, it becomes more and more accurate. But as you de-idealize a silogen atom, you do not get a more and more accurate simulation of a silicon crack. Purves takes this failure of PI to be constitutive of fictions, rather than merely symptomatic of them.

Bibliography

  • Barberousse, A., and P. Ludwig, 2009. “Models as Fictions,” in M. Suarez (ed.), Fictions in Science: Philosophical Essays on Modeling and Idealization, London: Routledge, 56–73.
  • Barberousse, A., and Vorms, M. 2014. “About the warrants of computer-based empirical knowledge,” Synthese, 191(15): 3595–3620.
  • Bedau, M.A., 2011. “Weak emergence and computer simulation,” in P. Humphreys and C. Imbert (eds.), Models, Simulations, and Representations, New York: Routledge, 91–114.
  • –––, 1997. “Weak Emergence,” Noûs (Supplement 11), 31: 375–399.
  • Beisbart, C. and J. Norton, 2012. “Why Monte Carlo Simulations are Inferences and not Experiments,” International Studies in Philosophy of Science, 26: 403–422.
  • Beisbart, C., 2017. “Advancing knowledge through computer simulations? A Socratic exercise,” in M. Resch, A. Kaminski, & P. Gehring (eds.), The Science and Art of Simulation (Volume I), Cham: Springer, pp. 153–174.
  • Burge, T., 1993. “Content preservation,” The Philosophical Review, 102(4): 457–488.
  • –––, 1998. “Computer proof, apriori knowledge, and other minds: The sixth philosophical perspectives lecture,” Noûs, 32(S12): 1–37.
  • Currie, A., 2018. “The argument from surprise,” Canadian Journal of Philosophy, 48(5): 639–661.
  • Dardashti, R., Thebault, K., and Winsberg, E., 2015. “Confirmation via analogue simulation: what dumb holes could tell us about gravity,” British Journal for the Philosophy of Science, 68(1): 55–89.
  • Dardashti, R., Hartmann, S., Thebault, K., and Winsberg, E., 2019. “Hawking radiation and analogue experiments: A Bayesian analysis,” Studies in History and Philosophy of Modern Physics, 67: 1–11.
  • Epstein, J., and R. Axtell, 1996. Growing artificial societies: Social science from the bottom-up, Cambridge, MA: MIT Press.
  • Epstein, J., 1999. “Agent-based computational models and generative social science,” Complexity, 4(5): 41–57.
  • Franklin, A., 1996. The Neglect of Experiment, Cambridge: Cambridge University Press.
  • –––, 1989. “The Epistemology of Experiment,” The Uses of Experiment, D. Gooding, T. Pinch and S. Schaffer (eds.), Cambridge: Cambridge University Press, 437–60.
  • Frigg, R., and J. Reiss, 2009. “The philosophy of simulation: Hot new issues or same old stew?” Synthese, 169: 593–613.
  • Giere, R. N., 2009. “Is Computer Simulation Changing the Face of Experimentation?” Philosophical Studies, 143: 59–62.
  • Gilbert, N., and K. Troitzsch, 1999. Simulation for the Social Scientist, Philadelphia, PA: Open University Press.
  • Grüne-Yanoff, T., 2007. “Bounded Rationality,” Philosophy Compass, 2(3): 534–563.
  • Grüne-Yanoff, T. and Weirich, P., 2010. “Philosophy of Simulation,” Simulation and Gaming: An Interdisciplinary Journal, 41(1): 1–31.
  • Guala, F., 2002. “Models, Simulations, and Experiments,” Model-Based Reasoning: Science, Technology, Values, L. Magnani and N. Nersessian (eds.), New York: Kluwer, 59–74.
  • –––, 2008. “Paradigmatic Experiments: The Ultimatum Game from Testing to Measurement Device,” Philosophy of Science, 75: 658–669.
  • Hacking, I., 1983. Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
  • –––, 1988. “On the Stability of the Laboratory Sciences,” The Journal of Philosophy, 85: 507–15.
  • –––, 1992. “Do Thought Experiments have a Life of Their Own?” PSA (Volume 2), A. Fine, M. Forbes and K. Okruhlik (eds.), East Lansing: The Philosophy of Science Association, 302–10.
  • Hartmann, S., 1996. “The World as a Process: Simulations in the Natural and Social Sciences,” in R. Hegselmann, et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Dordrecht: Kluwer, 77–100.
  • Hubig, C., and A. Kaminski, 2017. “Outlines of a pragmatic theory of truth and error in computer simulation,” in M. Resch, A. Kaminski, & P. Gehring (eds.), The Science and Art of Simulation (Volume I), Cham: Springer, pp. 121–136.
  • Hughes, R., 1999. “The Ising Model, Computer Simulation, and Universal Physics,” in M. Morgan and M. Morrison (eds.), Models as Mediators, Cambridge: Cambridge University Press.
  • Huggins, E. M., and E. A. Schultz, 1967. “San Francisco bay in a warehouse,” Journal of the Institute of Environmental Sciences and Technology, 10(5): 9–16.
  • Humphreys, P., 1990. “Computer Simulation,” in A. Fine, M. Forbes, and L. Wessels (eds.), PSA 1990 (Volume 2), East Lansing, MI: The Philosophy of Science Association, 497–506.
  • –––, 1995. “Computational science and scientific method,” Minds and Machines, 5(1): 499–512.
  • –––, 2004. Extending ourselves: Computational science, empiricism, and scientific method, New York: Oxford University Press.
  • –––, 2009. “The philosophical novelty of computer simulation methods,” Synthese, 169: 615–626.
  • Kaufmann, W. J., and L. L. Smarr, 1993. Supercomputing and the Transformation of Science, New York: Scientific American Library.
  • Laymon, R., 1985. “Idealizations and the testing of theories by experimentation,” in Observation, Experiment and Hypothesis in Modern Physical Science, P. Achinstein and O. Hannaway (eds.), Cambridge, MA: MIT Press, 147–73.
  • Lenhard, J., 2007. “Computer simulation: The cooperation between experimenting and modeling,” Philosophy of Science, 74: 176–94.
  • –––, 2019. Calculated Surprises: A Philosophy of Computer Simulation, Oxford: Oxford University Press.
  • Lenhard, J., and U. Küster, 2019. Minds and Machines, 29(1): 19.
  • Morgan, M., 2003. “Experiments without material intervention: Model experiments, virtual experiments and virtually experiments,” in The Philosophy of Scientific Experimentation, H. Radder (ed.), Pittsburgh, PA: University of Pittsburgh Press, 216–35.
  • Morrison, M., 2009. “Models, measurement and computer simulation: The changing face of experimentation,” Philosophical Studies, 143: 33–57.
  • Norton, S., and F. Suppe, 2001. “Why atmospheric modeling is good science,” in Changing the Atmosphere: Expert Knowledge and Environmental Governance, C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • Oberkampf, W. and C. Roy, 2010. Verification and Validation in Scientific Computing, Cambridge: Cambridge University Press.
  • Oreskes, N., with K. Shrader-Frechette and K. Belitz, 1994. “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences,” Science, 263(5147): 641–646.
  • Parke, E., 2014. “Experiments, Simulations, and Epistemic Privilege,” Philosophy of Science, 81(4): 516–36.
  • Parker, W., 2008a. “Franklin, Holmes and the Epistemology of Computer Simulation,” International Studies in the Philosophy of Science, 22(2): 165–83.
  • –––, 2008b. “Computer Simulation through an Error-Statistical Lens,” Synthese, 163(3): 371–84.
  • –––, 2009a. “Does Matter Really Matter? Computer Simulations, Experiments and Materiality,” Synthese, 169(3): 483–96.
  • –––, 2013. “Computer Simulation,” in S. Psillos and M. Curd (eds.), The Routledge Companion to Philosophy of Science, 2nd Edition, London: Routledge.
  • –––, 2017. “Computer Simulation, Measurement, and Data Assimilation,” British Journal for the Philosophy of Science, 68(1): 273–304.
  • Peschard, I., 2010. “Modeling and Experimenting,” in P. Humphreys and C. Imbert (eds), Models, Simulations, and Representations, London: Routledge, 42–61.
  • Primiero, G., 2019. “A Minimalist Epistemology for Agent-Based Simulations in the Artificial Sciences,” Minds and Machines, 29(1): 127–148.
  • Purves, G.M., forthcoming. “Finding truth in fictions: identifying non-fictions in imaginary cracks,” Synthese.
  • Resch, M. M., Kaminski, A., & Gehring, P. (eds.), 2017. The science and art of simulation I: Exploring-understanding-knowing, Berlin: Springer.
  • Roush, S., 2015. “The epistemic superiority of experiment to simulation,” Synthese, 169: 1–24.
  • Roy, S., 2005. “Recent advances in numerical methods for fluid dynamics and heat transfer,” Journal of Fluids Engineering, 127(4): 629–30.
  • Ruphy, S., 2015. “Computer simulations: A new mode of scientific inquiry?” in S. O. Hansen (ed.), The Role of Technology in Science: Philosophical Perspectives, Dordrecht: Springer, pp. 131–149.
  • Schelling, T. C., 1971. “Dynamic Models of Segregation,” Journal of Mathematical Sociology, 1: 143–186.
  • Simon, H., 1969. The Sciences of the Artificial, Cambridge, MA: MIT Press.
  • Symons, J., & Alvarado, R., 2019. “Epistemic Entitlements and the Practice of Computer Simulation,” Minds and Machines, 29(1): 37–60.
  • Toon, A., 2010. “Novel Approaches to Models,” Metascience, 19(2): 285–288.
  • Trenholme R., 1994. “Analog Simulation,” Philosophy of Science, 61: 115–131.
  • Unruh, W. G., 1981. “Experimental black-hole evaporation?” Physical Review Letters, 46(21): 1351–53.
  • Winsberg, E., 2018. Philosophy and Climate Science, Cambridge: Cambridge University Press.
  • –––, 2010. Science in the Age of Computer Simulation, Chicago: The University of Chicago Press.
  • –––, 2009a. “A Tale of Two Methods,” Synthese, 169(3): 575–92.
  • –––, 2009b. “Computer Simulation and the Philosophy of Science,” Philosophy Compass, 4/5: 835–845.
  • –––, 2009c. “A Function for Fictions: Expanding the scope of science,” in Fictions in Science: Philosophical Essays on Modeling and Idealization, M. Suarez (ed.), London: Routledge.
  • –––, 2006. “Handshaking Your Way to the Top: Inconsistency and falsification in intertheoretic reduction,” Philosophy of Science, 73: 582–594.
  • –––, 2003. “Simulated Experiments: Methodology for a Virtual World,” Philosophy of Science, 70: 105–125.
  • –––, 2001. “Simulations, Models, and Theories: Complex Physical Systems and their Representations,” Philosophy of Science, 68: S442–S454.
  • –––, 1999. “Sanctioning Models: The Epistemology of Simulation,” Science in Context, 12(3): 275–92.

Copyright © 2019 by
Eric Winsberg <winsberg@usf.edu>
