Computer Simulations in Science

First published Mon May 6, 2013; substantive revision Thu Feb 19, 2026

Computer simulations were first introduced to scientific inquiry in meteorology and nuclear physics in the period directly following World War II. Since then, they have become ubiquitous – and in some cases indispensable – in a growing number of disciplines in both the natural and social sciences as well as in engineering. In the 21st century, the list of sciences that make use of computer simulations has grown to include astrophysics, particle physics, materials science, engineering, fluid mechanics, climate science, evolutionary biology, ecology, economics, decision theory, sociology, epidemiology, and even medicine. If we understand computer simulations as scientific instruments, as some philosophers of science have recently suggested, they can be said to be one of the most widely used and versatile instruments in science.

Philosophical debates about computer simulations have, for several decades, oscillated between viewing them as extensions of modeling practices and as analogues to experimentation. While early accounts emphasized their continuity with mathematical modeling (Frigg and Reiss 2009), others highlighted their experimental, empirical aspects. On the one hand, given the similarity of computer simulations to theoretical and abstract elements of scientific inquiry, some philosophers initially sought to understand computer simulations from the perspective of the philosophy of modeling. Under views like these, the consensus was that there is little philosophical novelty to their nature, their role, or their use (Frigg and Reiss 2009). On the other hand, as a direct response to the perceived limitations and inadequacies of this paradigm – and mirroring an entrenched dichotomy in philosophy of science concerning theory and experiment – views emerged that sought to understand computer simulations as more closely related to empirical practices found in experimentation. Thus, a pendulum-like debate dynamic between these two camps dominated philosophical discourse on the topic for the better part of a quarter century.

More recently, alternative views have emerged that treat computer simulations as distinct from both models and experiments. While some of these alternative views compare them to broader elements of scientific inquiry such as methods or practices, others suggest that computer simulations should be philosophically understood and treated more like the things – namely scientific instruments – that are used either to substantiate theoretical elements of inquiry or to carry out experimental practices.

1. Introduction

1.1 What is a computer simulation?

Because the role of computer simulations varies across disciplines and experimental aims, a single definition to capture their use and import may prove inadequate. Nevertheless, understanding the different senses in which one can recognize what a computer simulation is and does can elucidate the philosophical questions at play as well as the implications of their possible answers. In this section, a few definitions are provided and surveyed.

1.1.1 A Narrow Definition

In its narrowest sense, a computer simulation is a program that is run on a computer and that uses step-by-step methods to explore the approximate behavior of a mathematical model. This simulation model is a discretized approximation, coded as an algorithm, of a mathematical model meant to capture numerical values associated with the dynamic behavior of a real-world system. When run on a computer, the algorithm thus produces a numerical picture of the evolution of the system’s state. Under this narrow understanding, computer simulations are simply equation solvers: they simulate the steps involved in solving the model’s equations, and those equations, in turn, are supposed to capture the dynamics of a system.
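To make the narrow sense concrete, the sketch below discretizes a simple continuous model and approximates its behavior step by step. The model, parameter values, and function names are illustrative assumptions, not drawn from any of the works cited above.

```python
# A minimal sketch of the "narrow" sense of computer simulation: a continuous
# model, dx/dt = -k * x, is discretized and its behavior is approximated step
# by step rather than solved analytically. All names and values are illustrative.

def simulate_decay(x0=1.0, k=0.5, dt=0.01, steps=1000):
    """Forward-Euler approximation of dx/dt = -k * x."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * (-k * x)   # step-by-step update of the system's state
        trajectory.append(x)
    return trajectory

# The resulting sequence of values is the "numerical picture" of the system's
# evolution. Shrinking dt typically brings it closer to the exact solution
# x(t) = x0 * exp(-k * t), but the original equation is never solved exactly.
values = simulate_decay()
```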

The sequence of values generated by each running iteration can be saved as a large “data” collection and visualized on a computer screen. Often, although not always, the methods of visualization are designed to mimic the output of a scientific instrument. This gives the appearance that the simulation is measuring a system, say atmospheric pressure or ocean currents.

Sometimes, the step-by-step methods of computer simulations are used because the model of interest involves continuous (differential) equations that cannot be solved analytically, whether in practice or in principle. This is what Paul Humphreys meant to capture when he defined computer simulation as “any computer-implemented method for exploring the properties of mathematical models where analytic methods are not available” (1990, 500). However, computer simulations are also used when this is not the case.

Importantly, discretizing continuous equations means that the original equations of a mathematical model are not, technically speaking, solved. Rather, their output values are merely hoped to be approximated to some desired or acceptable degree of accuracy. Furthermore, when referring to “a computer simulation” in this narrow sense, we should be speaking of a particular implementation of the algorithm on a particular digital computer, written in a particular language, using a particular compiler, etc., since variations in any of these elements can mean that different results are obtained.

1.1.2 A Broad Definition

In a broader sense, we can think of a computer simulation not as a particular algorithm, but rather as a comprehensive method for studying systems that includes model selection, computational implementation, visualization, interpretation, and the justification of inferences, often involving choices and techniques that go beyond pure mathematics or theory.

Winsberg (2003, 111) refers to this broader sense of computer simulation as computer simulation studies and adds: “Successful simulation studies do more than compute numbers. They make use of a variety of techniques to draw inferences from these numbers. Simulations make creative use of calculational techniques that can only be motivated extra-mathematically and extra-theoretically.” That is, some of the techniques involved in computer simulations are not necessarily guided by purely mathematical or theoretical considerations. Rather, engineering considerations such as computing costs, representational intelligibility, coding language constraints, etc., will determine the processes involved. Because of this, Winsberg further warns that “unlike simple computations that can be carried out on a computer, the results of simulations are not automatically reliable” (2010, Ch. 3). Rather, much effort and expertise from a motley set of domains is required to sanction this process.

1.1.3 Alternative views

Both the narrow and broad definitions above treat computer simulations as fundamentally about solving, or approximating, the equations of a model representing a target system. An alternative, compositional approach defines a computer simulation as a simulation – i.e., any system believed or intended to mimic another – implemented on a digital computer (Weisberg 2013). This latter point is in line with the definition of simulation we find in Hartmann: i.e., “a process that imitates one process by another process” where the term ‘process’ refers “solely to some object or system whose state changes in time” (1996, 83). A shortcoming of this definition, however, according to some, is that it leaves out simulations that mimic a system’s structure rather than its dynamics (Hughes 1999). Considering this definition and this criticism, Humphreys revised his definition as follows:

System S provides a core simulation of an object or process B just in case S is a concrete computational device that produces, via a temporal process, solutions to a computational model […] that correctly represents B, either dynamically or statically. If in addition the computational model used by S correctly represents the structure of the real system R, then S provides a core simulation of system R with respect to B. (2004, 110)

This definition incorporates some of the elements discussed above: it is a general definition of computer simulations as a solver, but there is a compositional element that distinguishes the computer simulation as a computational device from just a computational model. This definition accommodates the possibility that a computer simulation could simulate either the structural or dynamic properties of a system.

Furthermore, there is a conceptual novelty to centering a “concrete computational device” that proved highly influential in the literature (Keller 2003). Wendy Parker (2009), for example, argues that it is this material element of computer simulations that puts simulation epistemologically on a par with conventional experimentation. While conventional views suggested that computer simulations are significantly distinct from, and epistemically inferior to, established empirical methods because they lack materiality and can only deal with conceptual abstractions, others argued that the materiality suggested by Humphreys implies that a simulation must share some material, structural, or procedural properties either with the dynamic properties of the target system’s behavior or with the entirety of an experimental setup (for an early overview of this latter debate see Keller (2003), Winsberg (2010), or Durán and Arnold (2013); for a more formal response to the former issue see Primiero’s (2020, Ch. 14) chapter on simulationism).

Some philosophers suggested that these similarities could be found in the entirety of the implementation of a simulation process, which includes data curation, architecture specification, scientific theory and expertise, etc. This is particularly the case if one thinks of computer simulations as simulation studies, where computer simulations are neither the experiment itself (e.g., merely the controlled interventions on numerical values) nor just a theoretical representation (e.g., an equation, model, etc.).

More recently, Alvarado (2023) has suggested that the material element in Humphreys’ definition be brought to the fore in a different way. Alvarado proposes that we take seriously the concrete computational devices at the core of computer simulations. Thus, Alvarado offers the following definition of a computer simulation as a complex technical artifact:

A computer simulation is a procedurally arranged assemblage of concrete computational devices that produces intelligible solutions, via temporal processes, to a representational abstraction designed or hypothesized to mimic another system or process. (2023, 90)

This definition is distinct from the alternatives surveyed above in that it leaves the concept of representational abstraction maximally open: a computer simulation could simulate a theoretically informed model, a set of behavioral rules, or even just a hypothesized causal sketch. Furthermore, this definition captures other intuitive desiderata missing from Humphreys’ definition. For example, it captures the possibility that a computer simulation can remain a computer simulation even if it fails to “correctly represent” the system being simulated. Technical artifacts, unlike abstract formal methods, can fail to do what they are designed to do while persisting as the thing they were meant to be. Alvarado suggests that this conceptual strategy of understanding computer simulations as instruments brings questions about their epistemic status back to the realm of engineered and programmed devices and away from abstract objects such as models and equations (Schiaffonati and Verdicchio 2013). The strategy focuses on the properties of the simulation itself, much as we do with other instruments whose reliability is in question. At the same time, Alvarado argues, this definition can make sense of the fact that simulations, like other instruments deployed in science, go through a long and arduous process of engineering refinement and calibration during which their success and fitness are temporarily unclear. This has, as we will see in section 2.4 below, important ramifications for questions about the epistemic status of computer simulations in science.

The instrument view of computer simulation is certainly neither the first nor the only view pushing against the conventional dichotomy in the literature. Alvarado’s instrument view is in fact an outlier in its rather narrow approach to the issue, focusing single-mindedly on the artifactual nature of computer simulations. However, we can find insightful responses to the dichotomous pendulum described above as early as two decades ago (Winsberg 2003), responses that see computer simulations as part of a broader set of practices and expertise. Unfortunately, these broader alternative accounts of the nature of computer simulations have emerged only intermittently, punctuating the debate with corrections and foregrounding the importance of sociological details of the practice of simulating. Many others continue to anchor their analyses of the epistemic role, import, and nature of computer simulations on issues of representation, borrowing almost exclusively from conceptual frameworks in the philosophical literature on models and modeling (Durán 2020; Tolk et al. 2023; Páez 2024).

A more recent interpretation, one that moves away from the pendulum between theory and experimentation described above, is provided by Durán (2024). According to Durán, computer simulations are also not just intermediaries between theory and experimentation, as Rohrlich (1990) suggested. The prevailing view, Durán argues, is rather that computer-based methodologies “extend the class of tractable mathematics and representation and thereby broadening the ranges of modeling (Morgan 2003), observations (Beisbart 2017), predictions (Parker 2014), measurements (Morrison 2009; Tal 2011), and explanation of phenomena (Durán 2017), among several other scientific endeavors” (Durán 2024). When it comes to computer simulations, this means that they are “not just an intermediate between two familiar ends, but rather a scientific methodology in its own right” (ibid.). Notice here that the emphasis on the term ‘methodology’ is meant to draw a distinction between those who see computer simulations as a motley set of experimental practices, aided in part by computational methods, and those who think the modeling aspect of computer simulations is in itself the method, and a new one at that (Lenhard 2019).

This perspective echoes Winsberg’s notion of simulation studies, which emphasizes the heterogeneous expertise required for exploring models, computational architectures, and implementation strategies (Winsberg 2010). Similarly, according to Primiero (2020, 235), a “simulation in this sense, is meant not only as the artefact which implements a model, but it indicates the whole process of experimentation.” Accordingly, and closely following Winsberg’s simulation pipeline, Primiero (2020) explains that the methods and processes for studying systems include things like “designing the formal model, translating that model to an algorithm, implementing the algorithm in a language, devising an experiment based on running the algorithm on a computer, calculating the output of an algorithm, and studying the resultant data (possibly through visualization)” (p. 237). Furthermore, drawing from Morrison (2015, 248) Primiero adds, “simulations combine the very core of the three levels of abstraction of computing we are investigating, namely the formal, the linguistic, and the physical […] As such, the epistemic status of computer simulations reflect the whole complexity of experimental design” (p.238).

1.2 Types and Purposes of Computer Simulations

Two types of computer simulation are often distinguished: equation-based simulations and agent-based simulations. Computer simulations of both types are used for three general sorts of purposes: prediction, understanding, and exploratory or heuristic purposes. Often, however, they are mainly used for prediction, helping scientists gauge their expectations about a system of interest under a particular set of circumstances. This predictive power can also be run backwards to try to understand, in a retrodictive way, the possible paths that gave rise to a given state of affairs in a system. These predictions and retrodictions can be further split into three categories: point predictions, qualitative/global/systemic predictions, and range predictions. Point predictions are about the possible future state of a point of interest, say the position of a planet on a given date. Global predictions are about elucidating guiding principles, emerging rules, and/or tendencies of a whole system. Range predictions give us information about the expected ranges in which we may find a target variable or state of a system. Understanding these uses goes hand in hand with understanding distinct kinds of computer simulations. Elucidating the latter is the aim of this section.

1.2.1 Equation-Based Simulations

Equation-based simulations are most commonly used in sciences where there is an established governing theory or a set of global theoretical principles explaining the relationship between the mathematical equations of a model and the dynamic behavior of a target system. The term ‘equation’ here thus refers to simulations whose operations are based on the kinds of continuous equations found in scientific theories, and not just on rules of behavior or rules of evolution. These equations can be of two kinds. They can be particle-based, where the behavior and interactions of many discrete bodies are regulated by a set of differential equations; or they can be field-based, where a set of equations governs the time evolution of a continuous medium or field. An example of the former is a simulation of galaxy formation, in which the gravitational interaction between a finite collection of discrete bodies is discretized in time and space. An example of the latter is the simulation of a fluid, such as a meteorological system like a severe storm. In this latter example the system is treated as a continuous medium – a fluid – and a field representing the distribution of the relevant variables is discretized in space and then updated in discrete intervals of time.
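As a rough illustration of the field-based case, the sketch below discretizes a one-dimensional “field” in space and updates it in discrete time steps. The diffusion model, boundary choice, and parameter values are simplified assumptions for illustration; they are not taken from any meteorological or climate model.

```python
# An illustrative sketch of a field-based simulation: a continuous field
# (here a 1D quantity u, e.g., "temperature") is discretized on a spatial grid
# and updated in discrete time steps. Model and parameters are toy choices.
import numpy as np

def diffuse(field, alpha=0.1, dt=0.1, dx=1.0, steps=100):
    """Explicit finite-difference update for du/dt = alpha * d2u/dx2."""
    u = field.copy()
    for _ in range(steps):
        laplacian = np.roll(u, -1) - 2 * u + np.roll(u, 1)   # periodic boundaries
        u = u + dt * alpha * laplacian / dx**2               # one discrete time step
    return u

initial = np.zeros(50)
initial[25] = 1.0          # a localized disturbance in the field
final = diffuse(initial)   # the field's state after many discrete updates
```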

As we will see in detail in other sections below, this discretization process will prove to be a non-trivial epistemological consideration.

1.2.2 Agent-based Simulations

Agent-based simulations are most common in the social and behavioral sciences, though we also find them in disciplines such as epidemiology and ecology, and in any discipline in which the networked interaction of many individuals is being studied. Agent-based simulations are like particle-based simulations in that they represent the behavior of many discrete individuals. Unlike particle-based equation-based simulations, however, there are no global differential equations that govern the motions of the individuals. Rather, in agent-based simulations the behavior of each individual is specified by its own local rules.

An example of this kind of simulation is Schelling’s ‘segregation model’ (1971). The agents in this simulation are squares of two different colors on a checkerboard grid. Each square is coded with preferences about the vicinity of similar or different squares. Schelling found that both strong and mild threshold preferences for similar neighbors yielded fully ‘segregated’ grids after several iterations of applying the rules.
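The sketch below gives a rough sense of how such local rules, rather than global equations, drive an agent-based simulation. The grid size, satisfaction threshold, and movement rule are simplified stand-ins chosen for illustration; they do not reproduce Schelling’s (1971) exact setup.

```python
# A rough sketch of a Schelling-style agent-based simulation: each agent follows
# a local rule (move if too few neighbors are of the same color). All parameter
# values are illustrative, not Schelling's original ones.
import random

SIZE, THRESHOLD = 20, 0.3   # an agent wants at least 30% similar neighbors

def make_grid():
    cells = ['A', 'B', None]   # two colors plus empty cells
    return [[random.choices(cells, weights=[0.45, 0.45, 0.10])[0]
             for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(grid, r, c):
    """An agent is unhappy if too few of its occupied neighbors share its color."""
    me = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

def step(grid):
    """Local rule: move one randomly chosen unhappy agent to a random empty cell."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    if movers and empties:
        (r, c), (er, ec) = random.choice(movers), random.choice(empties)
        grid[er][ec], grid[r][c] = grid[r][c], None

grid = make_grid()
for _ in range(5000):       # repeated application of the local rules
    step(grid)              # clusters of like-colored agents tend to emerge
```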

1.2.3 Multiscale Simulations

Some simulations are hybrids of different kinds of modeling methods and of different types of implementation and engineering strategies. Often, these simulations involve coupling simulations at different scales of description. Carrying out simulations of weather prediction systems, for example, often involves ‘stitching’ together simulations of specific atmospheric phenomena at different levels, such as cloud formation, water current patterns, temperature, etc. (Winsberg 2010; Gehring 2017; Gransche 2017).

Multiscale simulation methods can be further broken down into two kinds: parallel multiscale and serial multiscale methods. The latter is the more traditional method, and it involves choosing a region, running simulations at the lower level of description, summarizing the results into a set of parameters/values that can be processed by the higher-level model, and passing them up into the part of the algorithm calculating at the higher level. One can visualize such a process by thinking of a divisible grid in which the states of higher-level square regions are calculated on the basis of operations and calculations done in finer-grained squares and then aggregated into a value recognized at the higher level of abstraction. These kinds of multiscale simulation methods are limited and do not work when the different scales at play are strongly codependent. In such cases, where the different scales interact strongly to produce the observed behavior, parallel multiscale simulations – in which each region is simulated simultaneously – are used.

This latter technique is the foundation of a widespread method called ‘sub-grid modeling’, which is used in climate science and kindred disciplines and refers to the procedures used in transforming values to be processed across scales. By way of analogy, consider a calculation in which operations at the decimal level (say, division) fail to yield definite whole numbers. The calculations at the decimal level, however, may be part of a larger process that requires, or is only able to process, discrete whole numbers. In such a case, we may adopt a rule such as ‘if the output of the decimal operations yields >.5, round up; if <.5, round down’, which yields an output value that can be processed at the higher-level resolution. Similarly, in parallel multiscale simulation, sub-grid modeling refers to the method of replacing processes – ones that are too small-scale or complex to be physically represented in the model – by a simpler mathematical description. This is called parametrization, and it involves the consideration of non-physical parameters to drive the highly approximative algorithms that compute the sub-grid values. As such, these parameters may be determined by extra-mathematical and extra-theoretical considerations and constraints, a detail which will become important in our discussion of the epistemology of computer simulations and whether these methods are similar to ordinary mathematical procedures.
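A toy sketch of the serial multiscale idea follows: values computed within each coarse cell at a finer scale are summarized into a single parameter that the higher-level model can process, in the spirit of the rounding rule just described. The fine-scale function, grid sizes, and summarizing rule are hypothetical stand-ins, not taken from any actual climate code.

```python
# An illustrative sketch of sub-grid parametrization: fine-grid computations are
# summarized into one value per coarse cell for the higher-level model.
# The fine-scale process and all values are made up for illustration.
import numpy as np

def fine_scale_process(cell_values):
    """Stand-in for a small-scale computation inside one coarse cell."""
    return np.sin(cell_values) ** 2            # hypothetical fine-grained dynamics

def parametrize(cell_values):
    """Summarize fine-grid output into a single number the coarse model accepts."""
    mean = fine_scale_process(cell_values).mean()
    return round(mean, 1)                       # cf. the 'round up / round down' rule

fine_grid = np.random.rand(8, 8, 16)            # 8x8 coarse cells, 16 fine points each
coarse_grid = np.array([[parametrize(fine_grid[i, j])
                         for j in range(8)] for i in range(8)])
# coarse_grid now feeds the part of the algorithm calculating at the higher level
```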

Sub-grid simulation processes of this kind can also be contrasted with another kind of parallel multiscale model where the sub-grid algorithms are more theoretically principled but are motivated by a theory at a different level of description. For example, we may observe a behavior in a system for which we have no theoretical framework, and we may try to simulate it by appealing to smaller-scale phenomena for which the computations and models are more theoretically grounded. Hence, the sub-grid calculations are more theoretically supported than the calculations for the phenomena we are trying to simulate at the higher level. These kinds of multiscale models, in other words, cobble together the resources of theories at different levels of description.

1.2.4 Monte Carlo Simulations

There is another large class of computer simulations called Monte Carlo Simulations. These simulations are computer algorithms that use randomness to calculate the properties of a mathematical model and where the randomness of the algorithm is not a feature of the original target model. A good example of this is the use of a random algorithm to calculate the value of Pi. If you draw a unit square on a piece of paper and draw a circle in it, and then randomly drop a collection of objects inside the square, the proportion of objects that land in the circle would be roughly equal to pi divided by 4. Simulating such a process on a computer can be called a Monte Carlo Simulation of Pi.
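The sketch below implements the pi example just described in the simplest way. It uses a quarter circle of radius one inside the unit square, which yields the same ratio of areas (pi/4); the function name and sample size are arbitrary.

```python
# A minimal sketch of a Monte Carlo "simulation" of pi: random points are
# dropped into the unit square, and the fraction landing inside the (quarter)
# circle approximates pi/4. Randomness is part of the algorithm, not the target.
import random

def monte_carlo_pi(n=1_000_000):
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()   # a random point in the unit square
        if x * x + y * y <= 1.0:                   # does it land inside the circle?
            inside += 1
    return 4 * inside / n                          # proportion inside is roughly pi/4

print(monte_carlo_pi())   # approaches 3.14159... as n grows
```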

Some philosophers do not see Monte Carlo Simulations as genuine simulations, since they do not seem to have a “mimetic purpose” in relation to a target system and hence do not represent, stand for, or simulate anything. Rather, it is argued, the Monte Carlo method “imitates the deterministic system not in order to serve as a surrogate that is investigated in its stead but only in order to offer an alternative computation of the determinist system’s properties” (Grüne-Yanoff and Weirich 2010, 30). This shows that Monte Carlo simulations do not fit any of the above definitions aptly. On the other hand, this tension can be resolved by noting that Monte Carlo simulations simulate an imaginary process in order to calculate something relevant to studying some other process. This is the sense in which Monte Carlo simulations are simulations, i.e., they are not simulations of a target system but of some other necessary procedure. Only when those simulations are in fact of systems that happen to be stochastic dynamic systems can they be said to be simulations of those systems (Beisbart and Norton 2012; Durán 2024, 152).

1.2.5 Computer Simulations, Machine Learning and Generative AI

Computer simulations and artificial intelligence methods are quite distinct: they function according to different principles and are often used to solve different problems. Computer simulations are, for the most part, used in contexts in which data has been previously gathered and curated with some level of scientific rigor and where established theoretical principles and causal connections can be assumed to a certain extent. On the other hand, machine learning methods are often deployed for problems in which such causal relationships are not known or are simply not present. Nevertheless, in certain data-intensive scientific contexts, say particle physics and astrophysics, the distinct enhancing and analyzing capacities of these two methods are regularly used together, even if they are devoted to separate subtasks.

While both machine learning techniques and computer simulations can be used to explore possible modeling approaches, the computing resources needed to build and run computer simulations mean that machine learning techniques often prove – in certain experimental contexts – to be much more efficient tools for exploring plausible models. While parameters and considerations can be adjusted in a computer simulation to model variations in a given model, a whole range of plausible models can be explored with machine learning. Hence, recent developments in computer simulation, machine learning techniques, and generative AI have led to the development of systems that integrate these distinct technologies. After all, at a high enough level of abstraction, both methods can be said to have a similar aim: to predict the possible states of a system (Johnson and Lenhard 2024; Symons and Boschetti 2013; Kaminski et al. 2023; Ferrario et al. 2024). These integrations can take at least two different modalities: machine learning-assisted computer simulation and computer simulation-assisted machine learning.

Machine learning-assisted computer simulation can happen at different stages of the construction of computer simulations. It can help scientists who are still working out the details of a possible model by supporting the search for more efficient and cost-effective solutions, or by detecting patterns in the data that need to be accounted for in the simulation process. Interestingly, machine learning techniques have been successfully deployed to help with the kind of sub-grid parametrization discussed in the subsection above. Kawamleh (2021), for example, provides a few case studies of the use of artificial neural networks to help with parametrization and argues that, due to the scale of climate simulations and the limitations of conventional computing power, the non-linear functions that describe climate processes must be approximated. According to Kawamleh, artificial neural networks have a specific function – the universal approximation function – “that enables [them] to approximate any nonlinear deterministic function.” Hence, she continues, artificial neural networks represent “a promising method by which scientists can better incorporate cloud-related processes, like convection, which cannot be adequately represented by physical equations in GCM” (Kawamleh 2021, 1012). Ultimately, however, Kawamleh is skeptical that machine learning can be used reliably, particularly when compared to models based on physical/causal principles.
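As a loose illustration of this idea, the sketch below trains a small neural network to emulate a costly sub-grid computation so that a simulation could call the cheap surrogate instead. The “true” sub-grid function, the use of scikit-learn’s MLPRegressor, and all parameter choices are hypothetical stand-ins; they are not Kawamleh’s cases or any real convection scheme.

```python
# A hedged sketch of machine-learning-assisted parametrization: a neural network
# learns to approximate an (invented) expensive, nonlinear sub-grid process.
import numpy as np
from sklearn.neural_network import MLPRegressor

def true_subgrid(x):
    """Hypothetical expensive, nonlinear sub-grid process (a stand-in)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(5000, 2))    # e.g., coarse-grid state variables
y_train = true_subgrid(X_train)                # targets from the costly computation

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)                # learn an approximation of the mapping

X_new = rng.uniform(0, 1, size=(5, 2))
print(surrogate.predict(X_new))                # cheap estimates the simulation could use
```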

Computer simulation-assisted machine learning, on the other hand, often uses a simulation to enhance an algorithm’s learning processes by providing more and often better data: e.g., a machine-learning algorithm designed for autonomous vehicles can benefit from a physics engine or a driving simulator’s data. Computer simulations can also be used to test or corroborate machine learning results: e.g., when a generative AI model suggests a material for an engineering project, or a novel way of folding a protein.

These integration efforts are relatively new, and the role of each method, as well as the epistemological implications of their combined use, have only recently begun to be explored by philosophers of science.

2. The Epistemology of Computer Simulations

As can be gathered by now, a central concern in the dichotomous debates mentioned above about the nature of computer simulations – between those that understand computer simulations as more closely related to formal methods (e.g., modeling), and those that compare them to empirical practices (e.g., experiments) – has been their epistemic import and status. However, as we will see in this section, simply importing considerations from existing debates in the epistemology of science into the philosophy of computer simulations has proven to be a non-trivial task. For example, as Winsberg (1999) noted a quarter century ago, when it comes to sanctioning knowledge claims, most philosophy of science has focused on the justification of theories, while the epistemology of computer simulations is mainly about justifying the results of simulations that already rely on an established theoretical basis for their functioning. Others have appealed to the novelty of computer simulations to suggest that a new epistemological framework is required to address their epistemological import. More recently, some of the conceptual differences and disputes in these conversations have been attended to by philosophers (see Alvarado 2022) who suggest that computer simulations and their epistemic status should be understood by revisiting insights, often overlooked by philosophers of science, from the epistemology of instruments (Baird 2004; Humphreys 2004). The relevant question in all these approaches is whether or not the processes and results of a particular computer simulation can be expected to be reliable for their intended purpose.

In what follows we will introduce some of the key questions surrounding the conventional debates in the epistemology of computer simulations as well as some of the implications of more recent contributions.

2.1 Models, Experiment, and Novelty

As briefly stated above, a natural philosophical position to take vis-à-vis the epistemological role of computer simulations is to see them as an extension of formal mathematical methods. After all, early computer simulations were simply designed as solvers: what they simulated was the solving of a mathematical problem. Since the problems these early simulations were solving involved complex-enough equations – and some of these equations were designed to represent the complex dynamics of a target system – the conceptual move from interpreting them as mere solvers to interpreting them as richer representational devices such as models was not too far-fetched. If computer simulations are anything special, so the argument would go, they are special kinds of mathematical models or simple extensions of such models (see Weisberg 2013).

One of the assumed implications of this position was that there was nothing philosophically or epistemologically new to computer simulations and that most of the conceptual work required to understand them could be drawn from existing philosophical literature on models in scientific inquiry. Questions regarding the novelty, or lack thereof, of computer simulations sparked a lively decade-long debate about the epistemic status of computer simulations and what philosophers of science could say about them (see Humphreys 2009; Frigg and Reiss 2009).

Settling the novelty question in a definite manner may, prima facie, appear to be, at best, an unnecessary philosophical difficulty, and, at worst, a hindering distraction. Nevertheless, there are non-trivial distinctions and implications underlying this debate that one must consider before dismissing this dilemma.

For example, as we briefly mentioned in the introduction to this section, traditional philosophy of science is often concerned with issues related to theory confirmation. In contrast, epistemological debates related to computer simulations have been more attentive to the adequate application of established theoretical frameworks, when available, and to the comparative status of results in simulation studies. More particularly, according to Winsberg (1999), an adequate epistemological framework must account for the fact that knowledge produced by computer simulations is the result of inferences that are downward, motley, and autonomous. This means the following:

Downward: in a large number of cases, accepted scientific theories are the starting point for the construction of computer simulations. These theories play an important role in justifying inferences from simulation results to conclusions about target systems.

Motley: results of simulations depend not just on theory but also on many other extra-theoretical considerations, including parametrizations, numerical solution methods, mathematical and coding tricks, approximations, fictions, and hardware, as well as on laborious empirical calibration efforts.

Autonomous: the knowledge produced by simulation cannot be sanctioned entirely by comparison with observation. Simulations are usually employed to study phenomena where data are sparse and conventional empirical experimentation may be beyond reach for principled, practical or ethical reasons.

And yet, while these characteristics could be taken as distinguishing features between computer simulations and other methods, practices, or even instruments in scientific inquiry, some philosophers (e.g., Parker 2013) have made the point that the usefulness of these conditions is somewhat compromised by the fact that they are overly focused on simulation in the physical sciences and other disciplines where simulation is theory-driven and equation-based. This seems correct. In the social and behavioral sciences, and in other disciplines where agent-based simulations are more the norm, and where models are built in the absence of established quantitative theories, epistemological frameworks should probably be characterized in different terms.

For instance, some scientists who use agent-based simulation pursue a methodology in which a social phenomenon (for example, an observed pattern like segregation) is accounted for merely by generating similar-looking phenomena in their simulations (Epstein and Axtell 1996; Epstein 1999). This raises its own epistemological questions about the adequacy of these computational methods and their dynamic features, about their truth, and about their explanatory power, if any (see Grüne-Yanoff 2007; Páez 2009). Giuseppe Primiero (2020) argues that there is a whole domain of “artificial sciences” built around agent-based and multi-agent system-based simulations that requires its own epistemology – one where validation cannot be defined by comparison with an existing real-world system but rather must be defined vis-à-vis an intended system.

It is also fair to say, as Parker does (2013), that the conditions outlined above pay insufficient attention to the various and differing purposes for which simulations are used. If we are using a simulation to make detailed quantitative predictions about the future behavior of a target system, the epistemology of such inferences might require more stringent standards than those that are involved when the inferences being made are about the general, qualitative behavior of a whole class of systems.

Adding to this criticism, Frigg and Reiss (2009) also argued that none of these three conditions is new to computer simulation. They argued that ordinary ‘paper and pencil’ modeling incorporates these features. Indeed, they argued that computer simulation could not possibly raise new epistemological issues. This is because such epistemological issues could be cleanly divided into existing questions: first, the question of the appropriateness of the model underlying the simulation, which is identical to the epistemological issues that arise in ordinary modeling; and second, the question of the correctness of the solutions to the model equations delivered by the simulation, which is a mathematical question, not one related to the epistemology of science. On the first point, Winsberg (2009b) replied that it was the simultaneous confluence of all these features that was new to simulation.

As we will see, these issues become even more complex as some philosophers suggest that computer simulations – while indeed sharing some of their properties with existing methods and practices – are nonetheless special cases of these methods.

In the debates regarding the novelty of computer simulations, a distinction can be drawn between those who understand computer simulations as new, special kinds of models or modeling techniques – e.g., Lenhard (2019) and Morrison (2015) – and those who understand them more broadly as new methodologies and practices involving modeling, engineering, and experimentation considerations. Within this latter paradigm, computer simulations are not just new kinds of models, but rather a whole new methodology for inquiry (Durán 2024). Galison (1997), for example, argues that, while earlier analog computer simulations of the kind used during the Second World War can easily be understood in the same way we understand other physical models, the novel practice of computer simulation brought about a radical epistemological transformation in physics, pushing “physics into a place paradoxically dislocated from the traditional reality that borrowed from both experimental and theoretical domains” and hence creating a sort of “netherland that was at once nowhere and everywhere on the methodological map.” For some, Galison’s point means that computer simulations are a special case of modeling (Lenhard 2019); for others, it points to a new kind of experimentation practice (Morrison 2015; Ruphy 2015). Furthermore, some of these issues appear in a different light when considered from the perspective of the epistemology of instruments in science. For Alvarado, for example, computer simulations are neither a new kind of model nor a new kind of experimental practice, but rather a new kind of instrument.

As we will see, many of the issues surrounding the nature and epistemological status of computer simulations revolve around three main questions: What are they? What do they do? And how do they do it? In the latter category one can find questions concerning the relation between simulations and their targets. Accounting for this relation is a non-trivial endeavor that has motivated several book-length projects in the last two decades (see Humphreys 2004; Winsberg 2010; Weisberg 2013; Morrison 2015; Lenhard 2019). In a sense, it is true that some of the questions arising for simulation will not be very different from some of the questions asked of modeling in general (Humphreys and Imbert 2013). One can say, for instance, that simulations and models share some of the same problems concerning their representational status vis-à-vis target phenomena, i.e., the things they are supposed to represent. And yet, they only share these problems in virtue of being representational devices. These are questions that both models and simulations would also share with moving cartoons, data visualization techniques, a child’s drawing, a thermometer, or an equation. This is simply too broad a problem to license the inference that, because of this shared philosophical issue, computer simulations and models must be similar in other substantial ways.

2.1.1 Epistemology of experimentation and computer simulations

Many of the important contributions to the understanding of computer simulations have come from views that draw analogies between the practice of computer simulation and the motley set of skills accompanying the empirical methods and practices of experimentation. Developments in the epistemology of experiments in physics have been of particular influence. In his work on the epistemology of experiment, for example, Franklin (1986; 1989) identified several strategies that experimenters use to increase rational confidence in their results, and philosophers such as Wendy Parker (2008a) argued for various forms of analogy between these successful strategies and strategies available to simulationists to sanction their results.

Drawing inspiration from another philosopher of experiment (Mayo 1996), Parker (2008b) suggests that an error-statistical approach to understanding traditional experiment – one which makes use of Mayo’s notion of a “severe test” – could shed light on the epistemology of simulation. The central question under this framework becomes whether we are warranted in concluding that the simulation would have been unlikely to give the results that it in fact gave if the hypothesis of interest were false (2008b, 380). Parker further believes that too much of what passes for simulation model evaluation lacks this kind of questioning, since it mainly consists of “little more than side-by-side comparison of simulation output and observational data” (2008b, 381). Drawing explicitly from Mayo’s (1996) work, Parker argues that what the epistemology of simulation ought to be doing is offering some account of the ‘canonical errors’ that can arise, as well as strategies for probing for their presence.

In a similar move towards the epistemology of experiment, but drawing from a distinct approach in the philosophy of science, Winsberg (2003) makes use of Ian Hacking’s (1983; 1988; 1992) work. In particular, Winsberg takes note of one of Hacking’s central insights: that ‘experiments have a life of their own’ (1992, 306). According to Winsberg, Hacking intended to convey two things: first – as a direct reaction against Kuhn’s unstable picture of science – Hacking suggests that experimental results can remain stable even in the face of dramatic changes in other parts of science; and second, that “experiments are organic, advance, change and yet retain certain long-term developments which makes us talk about repeating and replicating” them (1992, 307). According to Winsberg, some of the techniques that simulationists use to construct their models get credentialed in much the same way that Hacking says instruments and experimental procedures and methods do; the credentials develop over an extended period of time and become deeply tradition-bound. In Hacking’s language, the techniques and sets of assumptions that simulationists use become ‘self-vindicated’. This move seems to provide a plausible response to the problem of understanding how simulations could have a viable epistemology despite the motley and autonomous nature of the inferences required to validate their results. And it may very well apply to ongoing practices and methodology in inquiry. If one understands, as Winsberg does, computer simulations as encompassing the practices and methodologies surrounding their construction, as simulation studies (2010), then it is a step forward towards a plausible epistemology of computer simulations.

Nevertheless, it is worth noting that a confusion may arise here with the assertions that computer simulations, or any other kind of instrument and method, “carry their own credentials”, “become self-vindicated”, or “are self-sanctioning.” This is particularly the case if what is meant is that these instruments, simply in doing what they do, when and how they do it, become sanctioned, warranted, and vindicated, as if automatically. Whether computer simulations – either as methods or as devices – can be self-sanctioning is not immediately obvious. (See section 2.3.)

2.1.2 Simulation and Experiment

Considering the extra-mathematical elements in computer simulations, some philosophers have more recently argued that there must be more to computer simulations than just their formal elements. Yet this connection between simulation and experiment can be traced back as far as von Neumann, who, when advocating very early on for the use of computers in physics, noted that many difficult experiments had to be conducted merely to determine facts that ought, in principle, to be derivable from theory. Once von Neumann’s vision became a reality, and some of these experiments began to be replaced by simulations, it became somewhat natural to view simulations as versions of experiments.

The idea of “in silico” experiments becomes even more plausible when a simulation is designed to learn what happens to a system as a result of various possible interventions. Philosophers, consequently, began to consider in what sense, if any, computer simulations are like experiments and in what sense they differ. A number of views have emerged in the literature centered around defending and criticizing two main theses:

The identity thesis: computer simulations are literally instances of experiments.

The epistemological dependence thesis: The identity thesis would (if true) be a good reason (weak version), or the best reason (strong version), or the only reason (strongest version) to believe that simulations can provide warrants for belief in the hypotheses they support.

The central idea behind the epistemological dependence thesis is that experiments are the canonical entities that play a central role in warranting our belief in scientific hypotheses, and that therefore the degree to which we ought to think that simulations can also play a role in warranting such beliefs depends on the extent to which they can be identified as a kind of experiment. One can find philosophers of science arguing for the identity thesis as early as Humphreys (1995) and Hughes (1999). And there is at least implicit support for the stronger version of the epistemological dependence thesis in Hughes. The earliest explicit argument in favor of the epistemological dependence thesis, however, is in Norton and Suppe (2001). According to them, simulations can warrant belief precisely because they literally are experiments, and they have a detailed story to tell about how this is the case and how it works. According to them, a valid simulation is one in which certain formal relations hold between a base model, the modeled physical system itself, and the computer running the algorithms. When the proper conditions are met, “a simulation can be used as an instrument for probing or detecting real world phenomena. Empirical data about real phenomena are produced under conditions of experimental controls” (p. 73), just as in experiments.

One problem with this story is that the formal conditions they set out (see Norton and Suppe 2001) are much too strict and are seldom met by the model, the modeled physical system, the computer machinery, or the algorithms involved. It is unlikely that there are very many real examples of computer simulations that meet their strict standards. Simulation is almost always a far more idealizing and approximating enterprise. So, if simulations are experiments, it is probably not in the way Norton and Suppe imagined.

Parke (2014) argues against the epistemological dependence thesis by undermining two premises that she believes support it: first, that experiments generate greater inferential power than simulations, and second, that simulations cannot surprise us in the same way that experiments can. The argument that simulations cannot surprise us comes from Morgan (2005). Responding to Morgan, Parke argues that practitioners are indeed often surprised by their simulations, both because they are not computationally omniscient, and because they are not always the sole creators of the models and code they use. She argues, moreover, that differences “in researcher’s epistemic states, alone, seem like the wrong grounds for tracking a distinction between experiment and simulation” (p. 258). Adrian Curry (2017) defends Morgan’s original intuition by making two friendly amendments. He argues that the distinction Morgan was really after was between two different kinds of surprise: when surprise is due to bringing our theoretical knowledge into contact with the world, simulations are distinct from experiments; he also more carefully defines surprise in non-psychological terms, such that it is a “quality the attainment of which constitutes genuine epistemic progress.” Both intuitions are precisely what is at the heart of Johannes Lenhard’s (2019) book-length treatise on the subject, Calculated Surprises.

More generally, the identity thesis has drawn fire from other quarters. Gilbert and Troitzsch (1999) argued that “[t]he major difference is that while in an experiment, one is controlling the actual object of interest (for example, in a chemistry experiment, the chemicals under investigation), in a simulation one is experimenting with a model rather than the phenomenon itself” (1999, 13). This position, however, faces many issues (see Guala 2002; 2008; Morgan 2003; Parker 2009; Winsberg 2009a). For example, it is unclear what “manipulating” abstract entities means. Furthermore, it is false that real experiments always manipulate exactly their targets of interest. In fact, in both real experiments and simulations, there is a complex relationship between what is manipulated in the investigation on the one hand, and the real-world systems that are the targets of the investigation on the other. In cases of both experiment and simulation, therefore, it takes an argument of some substance to establish the ‘external validity’ of the investigation – i.e., to establish that what is learned about the system being manipulated is applicable to the system of interest.

One way to address this latter issue is to suggest, as Morrison (2015) does, that in fact several well-regarded experimental practices in well-established sciences such as particle physics are already conducted in a way that shares many epistemically relevant elements with features of simulation practice. In particular, Morrison suggests that much of what goes on in particle physics (or astrophysics), in terms of experimentation, after certain empirical measurements provide the necessary data, is mainly an exercise in the manipulation of parameters and input values aimed at understanding possible or plausible dynamic behaviors of the target system. Alvarado characterizes Morrison’s views on these kinds of experiments in the following way:

while there may be a direct detection exercise through the means of an apparatus, much of the experimental part is carried out when values and parameters are examined formally […] While observations about a star’s behavior may come directly from an event perceived through a telescope, for example, much of the inferences and explanations related to such an event happen once these observations have been […] integrated into large-scale theoretical frameworks. Once these values are integrated and both the assumptions and the dynamics specified, they can be changed, played with – in short, experimented upon. In this sense, the experiments are value comparisons. (2023, p.40)

Hence, according to Morrison, insofar as the main function of computer simulations consists in computing and elucidating the relationship between theoretical values and their evolution, they are very similar to the kinds of experimental practices just described. That is, both procedures involve the testing of parameters, the fine-tuning of input values, the assessment of changes and effects in the dynamic behavior of a system, and lastly the drawing of inferences related to such processes. In this sense, computer simulations are indeed on a par with experimental practices.

This is indeed an innovative approach to the debate. By focusing on parameters and values, Morrison can overcome the objection to Gilbert and Troitzsch’s idea that what we are manipulating are abstract entities. It is evident that one can control and change the input values and measurement parameters of a formal system. More importantly, this approach changes the directionality of the debate. Rather than trying to argue that computer simulation practices can rise to the status of empirical experimentation, and thus inflate the epistemic status of computer simulations, Morrison establishes that important experimentation practices are already at the level of computer simulations. According to Alvarado, this amounts to an empirically deflationary position that, although thought-provoking within the context of the debate, risks simply moving the goalposts vis-à-vis important questions about the relationship between this type of experimentation method, including computer simulations, and real-world target systems. The main strength of Morrison’s deflationary argumentative strategy is to provide a cornering argument in which opponents risk having to admit that the well-established practices in physics she describes do not in fact provide novel and genuine knowledge about the world, a position that is not just hard to defend but seems obviously misguided. If we accept that these well-established experimentation practices are in fact just value comparisons, and we accept that computer simulations are pretty much the same, then we must accept that computer simulations also constitute a reliable method for learning about the world. On the other side of the coin, if we reject the claim that computer simulations can be reliable experimental practices because they are just value and measurement comparisons, then we must also reject many of the experimental practices in particle physics or astrophysics. In any case, this argument, successful or not, may be limited to the kinds of experiments in which the main phenomenon of interest – say stars, social behaviors, or subatomic particles – is simply beyond the reach of conventional empirical experimentation methods.

Nevertheless, as Parker (2009) points out – and as the main gist of Morrison’s argument above suggests – in both experiment and simulation we can have relevant similarities between the methods and their target systems, and that is what matters. Furthermore, a computer simulation of the solar system based on our most sophisticated models of celestial dynamics will provide better representations of the planets’ orbits than any experiment may. Some philosophers have made similar claims concerning computer simulations in climate science. Petersen (2012), for example, suggests that “the most complex simulations […] done using the national supercomputer of the Netherlands were more reliable for […] research questions than were any of the sparse experimental or observational results in the literature” (p. 3).

2.1.2.1 Simulating an experiment versus experimenting on a simulation

Note that there is a related but separate issue concerning computational systems as experimental mediators of their own properties, and here too distinctions ought to be treated cautiously. Herbert Simon (1969) anticipated, in the early days of computer simulations, that a simulation could be run to experiment with the limits and properties of the simulation process itself. Hence, the computer simulation becomes both the instrument with which we investigate and the subject of inquiry at the same time (see Schiaffonati 2017 for a thorough review of how experiments can be done both with computing and in computing). This is anticipated by Humphreys (2004) and is something that Lenhard (2019) touches upon with his notion of calculated surprises, particularly in his acknowledgment that the computer simulation is a tool for modeling models and in his assertion that, through their use, we can learn details about computer simulations’ own properties.

These latter points echo Schiaffonati’s notion of computer simulations as exploratory experiments (Schiaffonati 2016; Primiero 2020), which signals novel epistemic implications. According to Schiaffonati (2016) and Primiero (2020), exploratory experiments “are characterized by a lower degree of constraining from the control part of the activity, often without the guide of a formal model, and driven by the interest in verifying the possibilities of an artefact” (idem., 240). In this sense, Primiero adds:

It is the experiment that guides the construction of the computational model, or at least contributes to it. In this process, a computational hypothesis is not conceptually prior to the design and execution of the computational experiment: accordingly, control is a posteriori, i.e. it is performed only at a much later stage and often contextually to the artifact’s use. (p. 240)

These computer simulations are used, in Datteri and Schiaffonati’s (2019; 2023) words, for surrogative reasoning. Further, Primiero adds, in this sense the purpose and context of the computer simulation changes and hence so does its epistemic role, since such “exploration contributes to problem solving rather than explanation or prediction” (2020).

2.2 Verification and Validation

Practitioners of simulation, particularly in engineering contexts, in weapons testing, and in climate science, tend to conceptualize the epistemology of computer simulations in terms of verification and validation. Verification is said to be the process of determining whether the output of the simulation approximates the true solutions to the differential equations of the original model. Validation, on the other hand, is said to be the process of determining whether the chosen model is a good-enough representation of the target system for the purpose of the simulation. The literature on verification and validation from engineers and scientists is enormous, and it has begun to receive some attention from philosophers (see Durán 2018).

Verification can be divided into solution verification and code verification. The former verifies that the output of the intended algorithm approximates the true solutions to the differential equations of the original model. The latter verifies that the code, as written, carries out the intended algorithm. Code verification has been mostly ignored by philosophers of science – probably because it has been seen as more of a problem in computer science than in empirical science – yet attention to these important issues, particularly as they pertain to computational methods in scientific inquiry, has changed in the last decade (see Symons and Horner 2014; Horner and Symons 2020; Floridi et al. 2014; Primiero 2020). Although part of solution verification consists in comparing computed output with analytic solutions (so-called “benchmark solutions”), simulations are often used precisely because analytic solutions are unavailable for the regions of solution space that are of interest (Humphreys 2004). Other indirect techniques are available, the most important of which is probably checking whether, and at what rate, computed output converges to a stable solution as the time and spatial resolution of the discretization grid gets finer.
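
The logic of such a grid-refinement check can be illustrated with a minimal, hypothetical sketch in Python. It assumes a toy model (du/dt = −u with u(0) = 1) whose analytic solution serves as a benchmark; in realistic cases no such benchmark is available and only the observed convergence rate across refinements can be checked.

```python
# A minimal sketch of solution verification by grid refinement, assuming a toy
# model (du/dt = -u, u(0) = 1) whose analytic solution exp(-t) acts as a
# "benchmark solution".
import math

def euler_solve(dt, t_end=1.0):
    """Forward-Euler integration of du/dt = -u on [0, t_end]."""
    u, t = 1.0, 0.0
    while t < t_end - 1e-12:
        u += dt * (-u)
        t += dt
    return u

exact = math.exp(-1.0)
errors = []
for dt in [0.1, 0.05, 0.025, 0.0125]:
    errors.append((dt, abs(euler_solve(dt) - exact)))

# The observed convergence order between successive refinements should
# approach 1 (forward Euler is first-order accurate); a rate far from the
# theoretical order signals a coding or discretization error.
for (dt1, e1), (dt2, e2) in zip(errors, errors[1:]):
    rate = math.log(e1 / e2) / math.log(dt1 / dt2)
    print(f"dt={dt2:.4f}  error={e2:.2e}  observed order = {rate:.2f}")
```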

The principal strategy for validation, on the other hand, involves comparing model output to observable data. Again, this strategy is limited in most cases where simulations are being run because observable data are sparse. But complex strategies can be employed, including comparing the output of subsystems of a simulation to relevant experiments (Parker 2013; Oberkampf and Roy 2010).
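
The basic validation move can likewise be illustrated with a minimal, hypothetical sketch: dense simulation output is interpolated to the few points where observations exist and a simple misfit measure is computed. All values below are illustrative, and the root-mean-square error is just one simple metric, not a standard prescribed by the verification and validation literature.

```python
# Compare (stand-in) simulation output against sparse observations by
# interpolating the simulation onto the observed time points and computing
# a root-mean-square error. All numbers are illustrative placeholders.
import numpy as np

sim_times = np.linspace(0.0, 10.0, 101)      # dense simulation output times
sim_values = np.exp(-0.3 * sim_times)        # stand-in for model output

obs_times = np.array([1.0, 4.0, 9.0])        # sparse observations
obs_values = np.array([0.72, 0.31, 0.08])

sim_at_obs = np.interp(obs_times, sim_times, sim_values)
rmse = np.sqrt(np.mean((sim_at_obs - obs_values) ** 2))
print(f"RMSE against observations: {rmse:.3f}")
```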

The concepts of verification and validation have drawn some criticism from philosophers. Oreskes et al. (1994), in a widely cited article, argued that “validity” is a property that strictly applies only to logical arguments, and hence that the term, when applied to simulations, might lead to overconfidence. Similarly, although for distinct reasons and motivations, Winsberg (2018, 155) has argued that the conceptual division between verification and validation can be misleading if it is taken to suggest that there is one set of methods which can, by itself, show that we have got the ‘right’ equations. He also argued that it is misleading to think that the epistemology of simulation is cleanly divided into an empirical part – validation – and a mathematical (and computer science) part – namely, verification. But this misleading idea often follows discussion of the two concepts in the work of both philosophers and practitioners.

Philosophers have put this strong distinction between verification and validation to work in arguments about the philosophical novelty of computer simulations. As we saw in a previous section, Frigg and Reiss (2009) argued that computer simulations could have no epistemologically novel features, since they contained two distinct components, both of which had established epistemological frameworks attached to them: one with questions related to conventional modeling – e.g., do the computational models represent the target system correctly? – and another in the realm of mathematics – e.g., are the approximate solutions of computer simulations close-enough to the actual (yet unavailable) solutions to be useful? (ibid.)

Winsberg, however, argues that verification and validation are not so cleanly separable. Most model equations chosen for simulation, for example, are not in any straightforward sense “the right equations”. They often reflect a compromise between what we think best describes the phenomena and what is computationally tractable. So the equations that are chosen are rarely well “validated” on their own. Furthermore, most methods of validation by themselves are much too weak to establish the validity of a simulation. Consequently, one point is that verification and validation are not independent and separable activities. Another point is that there are not two independent entities – one mathematical and the other one empirical – onto which these activities can be directed. Once one recognizes that the equations to be “solved” are sometimes chosen so as to cancel out discretization errors and other practical constraints (see Lenhard 2007 for an example involving the Arakawa operator), the distinction between verification and validation gets harder to maintain. In such instances, success is achieved in simulation through piecemeal, trial-and-error adjustment between model and method of calculation, much as one calibrates a newly designed instrument. When this happens, it is hard even to know what it means to say that a simulation is separately verified and validated. Frigg and Reiss’ argument against the novelty of computer simulations, mentioned in section 2.1, fails in this very respect. Whether a solution provided by a computer simulation is close-enough to actual but unavailable solutions to be useful is not a “purely mathematical question” as they suggest. Rather, it is a practical question and hence distinct from the questions that arise in conventional modeling.

2.3 Epistemic entitlements and other pragmatic approaches to the epistemology of computer simulations

A major strand of epistemology emphasizes that a proper account of knowledge can accommodate the fact that we often rely, without evidentiary justification, on our senses or the testimony of others. According to philosophers such as Tyler Burge (1993; 1998), we have an epistemic entitlement – i.e., a non-evidentiary, non-justificatory epistemic warrant – to “rely, other things equal” on these sources given that they underlie the possibility of most of our conventional knowledge. Burge applies this to his arguments on the epistemological foundations of computer-assisted mathematical proofs (1998). Drawing partly from this work, Barberousse and Vorms (2014) and later Beisbart (2017) deploy a similar argumentative strategy to justify the use of computer simulations and the trust we confer on their results. Computer simulations are extremely complex – so some of their arguments go – and often the result of the epistemic labor of a diverse set of scientists and other experts. As extensively documented, they are also severely epistemically opaque (Humphreys 2004; 2009). Because of these features, Beisbart argues, it is reasonable to treat computer simulations in the same way we treat our senses or the testimony of others: simply as things that can be trusted on the assumption that they work (2017). Barberousse and Vorms, for their part, argue that we can rely on the outputs of computer simulations the same way we can – according to Burge-style arguments – rely on those of computer-assisted mathematical proofs: i.e., because there is a chain of epistemic warrants, including entitlements, flowing from the experts involved in their construction, the methods with which simulations operate, and the fundamental mathematical principles that guide their functioning.

Symons and Alvarado (2019), however, argue that there are a couple of problems with simply importing Burge’s arguments into the epistemology of computer simulations. The first is that, according to Burge, computer-assisted proofs are built in such a way that they can be assumed to be transparent conveyors of epistemic warrants. That is, when we ask what warrants justify our reliance on the results of these computer-assisted proofs, we can answer that they are the same kind of warrant that justifies our reliance on the output of conventional mathematical operations. More precisely, if the warrants that justify the outputs of mathematical operations are a priori epistemic warrants – warrants whose source is prior to experience or empirical examination – then the warrants that justify the outputs of the operations carried out by the computer-assisted proofs are the same or of the same kind, namely a priori warrants. By contrast, a method or process is said to be a non-transparent conveyor if the methods or processes by which we arrive at a result require or allow for a different kind of warrant – in this case, an a posteriori kind of warrant – to enter the pipeline.

Even accepting that this is indeed the case for computer-assisted proofs, Symons and Alvarado, drawing in part from Ruphy (2015), point to many of the properties of computer simulations in virtue of which they fail to be transparent in Burge’s sense. This is something also suggested by Lenhard and Küster (2019), who argue that there are many features of computer simulations that make them difficult to reproduce and that therefore undermine some of the stability that would be required for them to be transparent conveyors. Vis-à-vis Beisbart’s position that we can assume, for the sake of argument, that computer simulations work well, Symons and Alvarado suggest that there is compelling evidence to assume the contrary and hence good reason to doubt that the processes involved are reliable to begin with. Complex computer simulations, Alvarado adds, “are novel, their aggregation is novel, or their integration is novel”, and hence untested (2023, 26).

A second, related issue is that it remains unclear to what extent technical artifacts can be said to inherit or preserve the epistemic warrants behind the credentialed expertise of those involved in their construction, or the methods underlying their functionalities. The issue is even more pronounced when one considers a priori epistemic warrants – such as the ones underlying formal methods or abstract operations (e.g., in logic or mathematics) – vis-à-vis a technical artifact. Trusting a technical artifact that is designed to carry out mathematical operations, for example, simply in virtue of the warrants that justify our reliance on mathematics per se seems epistemically suspect. There are too many potentially defeating layers in between to simply trust one in virtue of the other. We do not even do that with human mathematicians. That is, we do not trust someone to do mathematics well simply because we trust in the rules and principles of mathematics. Rather, we trust them on their own independent and proven merits, i.e., through their own warrants, which are often of the a posteriori kind: i.e., empirical evidence of their abilities. When it comes to technical artifacts such as computer simulations, this is more evidently the case, as Alvarado notes: “computer simulations cannot count as transparent conveyers given Burge’s characterization because justificatory elements distinct from the ones warranting the original [mathematical] content do in fact enhance, decrease or constitute the epistemic warrants of the manipulated content [in their output].” (idem., 124)

Hence, according to Symons and Alvarado, important epistemological work must be done for the warrants that justify certain methodologies to transfer as warrants that justify the use of, and reliance on, the instruments these methodologies help build. In fact, it is not clear that epistemic warrants can ever be transferred from one to the other. Rather, computer simulations must have their own warrants and their own sanctioning process, and the justification of our reliance on a given simulation must be pursued independently from the sanctioning of the methods that enable their construction. In short, just because we are justified in trusting the experts that put simulations together, and the experts are justified in trusting the methods that enable them to put these instruments together, it does not follow that we are automatically justified in trusting the instruments themselves or their results. Much empirical work remains to be done to establish such trust. For these reasons, and others having to do with many of the features discussed in previous sections, Symons and Alvarado argue that it is implausible that we should appeal to epistemic entitlements as the epistemological foundation behind our use of computer simulations.

Another approach to the epistemology of computer simulations – which is not entirely based on epistemic entitlements but can also be said to rely on a similar chain of trust, albeit with non-trivial differences – is to ground their status in the practical aspects of the craft of modeling and simulation. According to this view, the best reasons we have for believing the results of computer simulations have to do with our trust in the practical skills and craft of the modelers that use them. A good example of this kind of account is that of Hubig and Kaminski (2017). The epistemological goal of this kind of work is to identify the locus of our trust in simulations in practical aspects of the craft of modeling and simulation, rather than in any features of the models themselves. Under this view it is not the models or the simulations that we trust, but the people and practices behind them. Resch et al. (2017), for example, argue that a good part of the reason we should trust simulations is not the simulations themselves, but the interpretive artistry and skill of those who employ them. Symons and Alvarado (2019) are also critical of this approach, arguing that “part of the task of the epistemology of computer simulation is to explain the difference between the contemporary scientist’s position in relation to epistemically opaque computer simulations” (p. 7) and the relation of oracle believers to their oracles. Pragmatic and epistemic considerations, according to Symons and Alvarado, co-exist and are not genuine competitors for the correct explanation of our trust in simulations – epistemic reasons are ultimately what explain and ground the pragmatic ones.

2.4 Computer simulations and the epistemology of scientific instruments

Recently, Alvarado (2023) has suggested that many of the challenges and questions regarding the epistemic role and import of computer simulations in science so far examined could be best tackled if we understood computer simulations as more closely related to scientific instruments. Alvarado offers several arguments as to why the views that understand computer simulations as either models or experiments prove at best inadequate. He argues, for example, that computer simulations are conceptually distinct from – i.e., non-identical to – either the experiments for which they are deployed or the models that they are designed to run. Computer simulations, Alvarado argues, have properties distinct from those of models because they have to be implemented and run – as Parker (2003), Keller (2003), and others suggest. In contrast, a model specified in a blueprint and in-text specifications can be a model without having to be run or implemented. According to Alvarado, computer simulations are also distinct from the contexts in which they are deployed (p. 58). While it is true that, as Barberousse and Jebeile (2019) suggest, computer simulations can – through software/hardware implementation – simulate full experimental specifications, the fact that they can simulate such an endeavor already speaks to their distinct functional character. In short, the fact that a simulation can simulate such an experimental set-up speaks to the fact that it does something distinct from both the set-up specifications and from what the experiment per se is trying to achieve. In this sense, Alvarado argues, a computer simulation plays a role more similar to that of a microscope or a telescope than to a complete experiment or a complete experimental design. Yet, unlike the latter, computer simulations are also a hybrid instrument – one that must perform in order to represent – much more like a measuring instrument such as an analog thermometer.

Under this view, calling computer simulations instruments is neither a naming strategy nor a mere metaphorical device. Although a central target of Alvarado’s work is related to the epistemological issues discussed above, he sees these epistemological repercussions as a product of an ontological commitment to the nature of computer simulations (p. 148). Although important work has been done in the philosophy of technology regarding both the nature and the role of instruments in science (see, for example, Koyré 1957; Kroes and Meijer 2002b; Heidelberger 2003), the epistemology of scientific instruments has been, for the most part, ignored by philosophers of science – with only a few notable exceptions (see Hacking 1983, Baird 2004, Fox-Keller 2003, or, more recently, Russo 2023). Given this fact and the recency of this approach, objections and additions to this strand of thought on computer simulations have not yet fully played out.

2.5 Epistemic Opacity and Computational Reliabilism

It is now accepted by both practitioners and philosophers that computer simulations are, as Humphreys (2004) suggests, epistemically opaque. That is, they are not immediately amenable to thorough inspection, error correction and/or even conventional notions of understanding and explanation (Kaminski et al. 2018; Symons and Alvarado 2016).

Broadly, the term epistemic opacity refers to the lack of access that an inquiring agent may have to the relevant aspects by which a process achieves whatever task it is supposed to achieve. In other words, an epistemically opaque process is one whose significant details about its functioning – or, importantly, its failing to do so (Alvarado 2025) – are not available to someone seeking to understand it, make use of it, or to someone who is subject to the results emerging from the fulfillment of its task.

Humphreys succinctly defines epistemic opacity the following way:

A process is epistemically opaque relative to a cognitive agent X at time t just in case X does not know at t all of the epistemically relevant elements of the process. (2009, 618)

As can be noted, this is a general and relative account of epistemic opacity, as it is relative to a specific agent and the agent’s circumstances at a specific time (Humphreys 2004; Alvarado 2020). We can, for example, imagine many processes to be epistemically opaque in this way to many people at many times: the processes in your computer, the processes by which books are bound, etc., can all be opaque to some of us at any given point. Furthermore, as Alvarado (2021) suggests, “pointing to something as being epistemically opaque in the manner defined above does not provide any interesting and/or peculiar information about the system in relation to anything else. This is particularly the case if what one is trying to understand and say something about is a specific piece of technology.”

Anticipating the latter objection, however, Humphreys suggested that computer simulations are not just epistemically opaque; they are essentially so (2009). A process (system, device, or method), according to Humphreys, is essentially opaque to an epistemic agent if it is impossible, in virtue of that agent’s own nature, for the agent to know the epistemically relevant elements of the system or process. Importantly, essential opacity is not just a stronger version of the general opacity described above. Rather, it is a different kind of opacity. According to Alvarado, this is made clear by two main factors: the first is that it is the kind of opacity that arises in virtue of the agent’s nature and not in virtue of something else, say an agent’s environmental circumstances or a specific time t. The second is that, given that it is relative to an agent’s nature, it is the kind of epistemic obstacle that is insurmountable not just for an agent A but for any agent of the same nature. This fact often, though certainly not always, suggests that many of the technical aspects and processes involved in computer simulation may not be opaque because of “sloppy modeling” or because these processes are “poorly understood”; rather, they can be opaque simply “because [the processes themselves] are complex” (Saam 2017, 80). Both factors are often neglected in the epistemology of computer simulations.

Sophisticated views that appreciate the severity of Humphreys’ challenge and the implications of this obstacle for an epistemology of computer simulations (or computational methods in general) nevertheless still need to make sense of our undeniable reliance on, and our successful deployment of, computer simulations in scientific inquiry. Yet, if computer simulations are indeed black boxes in this severe sense, i.e., if their relevant epistemic details are inaccessible in an insurmountable manner, then it is unclear how one can go about sanctioning them as reliable. The situation is made even worse if one seriously considers the challenges to widespread verification and validation techniques posed by Symons and Horner (2014). Symons and Horner suggest that conventional error assessment techniques used in science to characterize the reliability of a system or procedure are inadequate to deal with what they call software-intensive science – scientific inquiry in which computational methods play an indispensable role. The main challenge, they write, is that:

the kinds of errors associated with contemporary scientific practice stand in sharp contrast with those found in traditional science. In a scientific domain that contains no software, we can at least partially characterize the distribution of errors in terms of [conventional statistical inference theory]. CSIT, of course, has long been a concern of epistemology of science. To the extent that software systems play an essential role in contemporary science, we cannot be assured that CSIT can be used to characterize the distribution of error. This is because the languages used in software systems that are essential to the practice of science admit a kind of conditionality that cannot be captured using CSIT. Without CSIT to help characterize the distribution of errors in software systems used in inquiry, there is no effective procedure for characterizing those distributions except by testing every path. Testing all paths in a typical software system containing more than 10^8 paths [as most software systems used in science do] is intractable. (2014, 474)
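
The combinatorial point behind this intractability claim can be illustrated with a short, hypothetical sketch (not Symons and Horner’s own example): with k independent branch points, the number of execution paths grows as 2^k, so even modest programs exceed the figure cited above.

```python
# Toy illustration of path explosion: a program with k independent if/else
# branch points has 2**k distinct execution paths, so exhaustive path testing
# quickly becomes intractable.
def count_paths(num_branches: int) -> int:
    """Number of paths through straight-line code with independent branches."""
    return 2 ** num_branches

for k in (10, 20, 27, 40):
    print(f"{k:2d} independent branches -> {count_paths(k):,} paths")

# 27 branches already yield 134,217,728 paths (more than 10^8);
# 40 branches yield roughly 10^12.
```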

Although it is certainly unclear whether the indispensability of computer simulations is as severe as that of more fundamental software (Krämer et al. 2024), one possible strategy to address the challenge of epistemic opacity, at whatever level it occurs, is to go back to Humphreys’ original definition and attempt to clarify what the “epistemically relevant elements” of a process are and exactly what it is that we need to know about a system or process in order to deem it reliable. One possible answer is that we may not need to know that much. As we will see in the next section, Durán and Formanek (2018) – in a view they call computational reliabilism – suggest that we may not need to know any internal features of a system to assess its reliability.

2.5.1 Computational reliabilism

Roughly, computational reliabilism is the view that in the context of computational methods researchers are justified in believing or trusting the results yielded by such methods “because there is a reliable process (i.e. the algorithm) that yields, most of the time, trustworthy results.” (Durán and Jongsma 2021, 332)

According to Durán and Formanek, computational reliabilism stems from key concepts in Alvin Goldman’s process reliabilism, which suggests that an inference/assertion can be accepted if it is the product of a reliable process. According to these views, however, a reliable assertion need not include details about the reliable processes that produced it. The reliability in question is that of the assertion, or the output of a process, and not that of the processes by which it was produced, or so it is argued. Following from this, a process can be reliable even if the reasons why it is reliable are not accessible to an agent (Comesaña 2010, 571).

In particular, those championing such a framework suggest that computational reliabilism can circumvent the challenges related to epistemic opacity in computational methods, can warrant or justify our beliefs regarding the reliability of computational processes and their results, and can also reassure us of the possibility of trust in computational methods, practices and artifacts even if these are insurmountably opaque.

Durán and Formanek accept views of reliability that suggest that users and practitioners of computer simulations are justified in trusting their results because they can “trust the assumptions upon which they are built” (2018, 652), as suggested by Beisbart (2017) and others. Nevertheless, they also suggest that computational reliabilism does not need these assumptions. Rather, they argue, their view takes into consideration ‘reliability indicators’ that are “markers of methodological and epistemological competence of the computer, algorithms and social processes involved in the formation of beliefs.” (Durán 2024). Because of this, Durán and Formanek suggest that, unlike conventional reliabilism theories in epistemology, computational reliabilism involves a retrospective reliability chain, which, according to them, “conditions the sources that attribute reliability to [computational methods] to be reliable in and by themselves.” Furthermore, such sources, they accept, “must be shown to be reliable.” (2018, 656 italics mine). Computer simulations, so their argument goes, can be deemed reliable via factors – external to the algorithms themselves – that function as reliability indicators. These include “identifying methods (formal or otherwise), metrics, expert competencies, cultures of research, and the like that make up for our best epistemic and normative efforts that might increase the degree of warrant we have to believe the outputs” of these systems.

The important argumentative strategy here is to emphasize that the factors under consideration can be external to the process in question, without any internal factors being considered. Thus, computational reliabilism can overcome the challenge posed by epistemic opacity, even when accepting the severe version suggested by Humphreys (2009) and others. In this sense, Durán and Formanek argue, the reliability of computer simulations can be understood by considering their tendency to produce a “high proportion of true beliefs relative to false ones” (ibid., 653). Trusting the results of computer simulations, therefore, according to this view, “depends on a chain of reliable processes that, in the end, allow researchers to be justified in believing the results” (p. 655). This is something echoed in Ferrario’s (2023) approach to their own, highly formalized, version of reliabilism. Where this chain ends, however, is simply left unanswered (Durán and Formanek 2018, fn. 6, p. 655). This last point seeks to make the reliability chain referred to by Durán and Formanek somewhat distinct from the simple appeal to a chain of epistemic entitlements criticized by Symons and Alvarado (2019). Whether it is in fact distinct, however, remains unclear.

It is worth noting, at this point, that if the argument from computational reliabilism asks us to consider only the rate of success of a process, this must be supported by an error assessment that deploys CSIT – e.g., an assessment of the error rate, or of the distribution of error, through sampling code lines. If we take Symons and Horner’s point seriously – that error in software is not randomly distributed – then, as they argue, CSIT may prove inadequate for error assessment to begin with. If so, it is unclear that computational reliabilism can even count on the exogenous features of a process that conventional reliabilism opens the door to. Along the same lines, Alvarado (2025) further suggests that when it comes to technical artifacts and their reliability, endogenous features such as the nature and source of error are an indispensable element of reliability assessments. Furthermore, opacity in these contexts, he argues, matters most when an agent is attempting to know how something fails and not just how something works. Together, these two considerations, along with the obstacles elucidated by Humphreys and by Symons and Horner, suggest that the viability of computational reliabilism as an epistemological framework for computer simulations in science may need further defense than what has been offered so far by its proponents.

3. Other issues related to computer simulations in science

The advent and now ubiquity of computer simulations in scientific inquiry has ramifications across multiple dimensions in the philosophy of science. The following section offers a brief survey of some of these considerations.

3.1 Scientific theories and computer simulations

In his book Extending Ourselves (2004), Paul Humphreys argued that computer simulations have profound implications for our understanding of the structure of theories. He argued that computer simulations reveal inadequacies in both the semantic and the syntactic views of scientific theories. On the syntactic view, modeling played, if anything, only a heuristic role in science and had nothing to do with theory, and logical deduction was a useful regulative ideal for thinking about how inferences from theory to the world are drawn. On the semantic view, models played an important role, but theories were taken to be non-linguistic entities, and philosophers were urged not to be distracted by the contingencies of the linguistic expressions associated with theorizing. Computer simulations suggest that both views were misguided. They seem to show, for example, that there are methods of theory application that vastly outstrip the inferential power of logical deduction, and hence that it was profoundly wrong to think that logical deduction is the right tool for rationally reconstructing the process of theory application. On the flip side, computer simulations also seem to reveal that, as Humphreys urged, syntax matters. It was wrong, it turns out, to suggest, as the semantic view did, that the particular linguistic form in which a scientific theory is expressed is philosophically uninteresting. As Humphreys put it: “the specific syntactic representation used is often crucial to the solvability of the theory’s equations” (2009, 620).

Another issue of philosophical interest in discussions about the epistemic import of computer simulations is that of emergence in complex systems. Humphreys (2004) and Bedau (2011) have argued that philosophers interested in the topic of emergence can learn a great deal by looking at computer simulation. In particular, Bedau argues that even if simulation does not say much about strong emergence – which has been the main focus of philosophers and which posits brute downward causation that is irreducible in principle – it can nevertheless elucidate features of weak emergence, which allows for the reducibility of wholes to parts in principle, but not in practice. Systems that produce weakly emergent properties are mere mechanisms, but the mechanisms are very complex (they have many independently interacting parts). As a result, there is no way to figure out exactly what will happen given a specific set of initial and boundary conditions, except to “crawl the causal web”. Weakly emergent properties are characteristic of complex systems in nature. It is also characteristic of complex computer simulations that there is no way to predict what they will do except to let them run. It is here that the connection to computer simulation arises. Weak emergence explains, according to Bedau, why computer simulations play a central role in the science of complex systems. The best way to understand and predict how real complex systems behave is to simulate them by crawling the micro-causal web and seeing what happens.

More recently, there has been a resurgence of interest in the representational aspects of computer simulations and their relation to modeling. Models of course involve idealization. But it has been argued that some kinds of idealization, which play an especially prominent role in the kinds of modeling involved in computer simulation, are special – to the point that they deserve the title of fiction. Winsberg (2009c) has argued that fiction does have a special connection to computer simulations. Or rather, that some computer simulations contain elements that best typify what we might call fictional representations in science, even if those representations are not uniquely present in simulations. These kinds of fictional components of models – useful guides to the way the world is in some general sense, without necessarily pointing to a certain part of the world in particular – are paradigmatically exemplified in certain computer simulations. Two of his examples are the “silogen atom” and “artificial viscosity”. Silogen atoms are a fiction in Winsberg’s sense, but they are used so that the overall model can be hoped to get things right. Thus, the overall model is not a fiction, but one of its components is. Similarly, artificial viscosity is a technique that pretends that a fluid is highly viscous – a fiction. Again, the overall model involved in these practices is not a fiction. But the component called artificial viscosity is.
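
To give a sense of the kind of technique at issue, the following is a minimal, hypothetical sketch in Python (not Winsberg’s own example) of artificial viscosity: a centered finite-difference scheme for the inviscid Burgers equation is kept stable near a shock by adding a diffusion term that treats the fluid as more viscous than it really is. The value of the viscosity parameter is chosen for numerical convenience, not physical fidelity.

```python
# Inviscid Burgers equation u_t + u*u_x = 0 with a step initial condition.
# The flow steepens into a shock; the added term nu*u_xx ("artificial
# viscosity") smears the shock over a few grid cells and keeps the explicit
# centered scheme stable. nu is a numerical knob, not a measured fluid property.
import numpy as np

nx = 200
dx = 1.0 / nx
dt = 0.001
nu = 0.005                                   # artificial viscosity coefficient

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)              # step profile that develops a shock

for _ in range(300):
    up = np.roll(u, -1)                      # u[i+1], periodic boundaries
    um = np.roll(u, 1)                       # u[i-1]
    advection = u * (up - um) / (2.0 * dx)   # centered difference for u*u_x
    diffusion = nu * (up - 2.0 * u + um) / dx**2
    u = u - dt * advection + dt * diffusion  # forward-Euler update
```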

Winsberg’s account of fictions and their role in computer simulations has drawn some criticism. Toon (2010) has argued that this definition of a fiction is too narrow. Toon, presumably, supports a broader conception of the role of fictions in science according to which they do not play a particularly heightened role in computer simulations. (For a thorough analysis of the role of idealization and fiction in modeling and simulation, see Pincock 2011; for a thorough argument about the diminished epistemic role that fictions play in genuine scientific explanations, see Sullivan and Khalifa 2019.)

3.2 The Ontology of Simulation Content

Permeating the many discussions above, a question persists regarding the content of the inferences and assertions we derive from computer simulations. What exactly are we referring to when we say or infer propositions about entities in a simulation, such as “the particles attract” or “galaxy clusters swerve”, etc.? Most of these entities – particles, predators, agents, etc. – are simply numerical constructs coded so that their dynamic transformations are visualized in a display. Although one could understand the propositions above as mere manners of speaking, there is a sense in which we do mean what we say about these representational entities. That is, it is either true or false of the simulated particles that they attract. Note that this is the case whether or not these entities stand in for some real-world target. We may, for example, say that “these particles attract” when the simulated particles either do not represent actual physical particles, or when physical particles do not show the same behavior. If so, what exactly are we talking about in these instances? For these sentences to make sense, perhaps we must articulate, in ontological terms, what these entities are.
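
A minimal, hypothetical sketch in Python of what such talk describes may help fix ideas: the “particles” below are nothing but rows of an array updated by an inverse-square rule (all values are illustrative), and yet the sentence “the particles attract” seems straightforwardly true of them.

```python
# Two "particles" as rows of a position array; an inverse-square rule (with a
# small softening term to avoid division by zero) pulls them together. Saying
# "the particles attract" reports a pattern in these numbers; whether it
# reports anything more is the ontological question at issue.
import numpy as np

pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # positions of particles 0 and 1
vel = np.zeros_like(pos)
G, dt, soft = 1.0, 0.01, 1e-3              # illustrative constants, unit masses

for _ in range(50):
    r = pos[1] - pos[0]                    # vector from particle 0 to particle 1
    dist = np.linalg.norm(r) + soft
    force = G * r / dist**3                # attraction felt by particle 0
    vel[0] += dt * force
    vel[1] -= dt * force                   # equal and opposite on particle 1
    pos += dt * vel

# the separation between the two rows of `pos` has shrunk: "the particles attract"
print(np.linalg.norm(pos[1] - pos[0]))
```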

As an answer to these challenges, some philosophers have suggested that in such instances we are dealing with digital artifacts (Beisbart 2019; Chalmers 2019) – computerized objects designed, developed, and deployed for a specific function. Beisbart (2024), however, contests this view. In particular, Beisbart suggests that there are two senses in which there is an under-determination problem in simulation ontologies. The first is that the choice of entities can differ arbitrarily in a simulation. The second is that the same mathematical structures are used for distinct phenomena in different simulations. In his words, “computer simulations can be described at different levels, and the highest level, where the simulation is described using representations, is not completely fixed by the underlying computational layer.” Furthermore, he adds, “the same computations may be used for different purposes, and various objects can be read into them. Further, although the purposes or intentions of users can, in principle, help to determine the simulated objects, they often do not suffice to fix objects uniquely.” (Beisbart 2024)

According to this view, we need not be committed to any digital entity. Rather, we can deal with these assertions in the same way we deal with other fictional constructions. If we do, he argues, “we do not need any simulated objects in our ontology to make sense of related talk. Under this account, simulated objects are superfluous. Given that the postulation of simulated objects led to problems, it seems better to do without them.” (Beisbart 2024)

Although this deflationary view may be appealing for pragmatic concerns in the philosophy of science, the ontological nature of simulated phenomena is a serious and interesting philosophical problem in and of itself. It is a problem that relates to issues in the emergence of phenomena in both simple and complex simulations such as cellular automata. One example is in trying to make sense of gliders – spatiotemporal cluster dynamics that appear to advance through a grid in a cohesive manner, giving the impression of coordinated, continuous, or interdependent movement from one step, or iteration, to the next – in computational experiments such as Conway’s “game of life” (Seager 2012; Symons 2015).
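
For readers unfamiliar with the example, the following is a minimal sketch of such a computational experiment in Python: the update rules refer only to individual cells and their neighbors, yet a five-cell pattern reappears displaced one cell diagonally every four steps, and it is this travelling pattern, not anything in the code, that we call a “glider”.

```python
# Conway's Game of Life with a single glider. The rules act cell by cell, but
# the five-cell cluster reappears, shifted one cell down and one to the right,
# every four steps: the "glider" exists at the level of the pattern.
import numpy as np

def step(grid):
    """One synchronous update of Conway's rules on a toroidal grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((12, 12), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # place a glider
    grid[r, c] = 1

for _ in range(4):
    grid = step(grid)
# after 4 steps the same five-cell shape sits one cell down and one to the right
print(np.argwhere(grid == 1))
```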

3.3 Formal Relations

While the issues surrounding the relationship between representational devices and their targets, which also affect computer simulations, are many and varied, there are still attempts at formulating an account of such relationships that captures some sort of analogy or similarity between the models involved in simulation and their target systems.

Winsberg, for example, writes that a “simulation is any system that is believed, or hoped, to have dynamical behavior that is similar enough to some other system such that the former can be studied to learn about the other.” (2020, italics ours) What this similarity entails, however, is a topic of profound philosophical debate, since the issue affects not just questions about computer simulations but any representational device (See Weisberg 2013; Pincock 2011).

These relations can be formulated in terms of identity and dependence (as we saw in our discussion of experiments and computer simulations), isomorphisms of various sorts and at various levels, analogy and similarity, etc. When it comes to computer simulations, however, Primiero (2020) suggests that a formal description of this relationship that incorporates all these considerations (e.g., similarity and dependence relationships) can nevertheless be achieved. This is because, if we assume that a relation indeed exists between the formal characterization of target phenomena and the algorithmic processes engineered to simulate them, this relationship can be found by looking at the mathematical elements, operations, and structures therein. He identifies two candidate principles to guide the formulation of such a formal relationship:

Weak simulationist principle: if a physical system is exactly simulated by a computation, then that type of computation is all there is to the nature of the physical system.

Strong simulationist principle: every physical system can be simulated by a physical computation.

For a full overview of how these principles play out in terms of the formal mathematical structures involved in algorithmic simulation processes, see Primiero (2020, 241–254).

3.4 Computer simulations and policy

In the immediate aftermath of the COVID-19 pandemic response, the unprecedented use of computer simulations for decision-making in policy became evident. In a video titled “Moral Models”, Winsberg characterizes the magnitude and the unprecedented nature of this reliance on computer simulations by policy-makers in the following way:

This is probably the episode in human history in which computer modeling has affected the course of human events more than ever before […] The degree to which this [technology] took center stage in decision-making was the main way in which [usage of computer models] was unprecedented. (Harvard et al. 2022, minute: 02:00)

Issues surrounding the epistemic challenges, and their associated social risks, in this context have been discussed at length by Harvard and Winsberg (2021), Harvard et al. (2021), as well as by Winsberg and Harvard (2022) and Harvard and Winsberg (2022; 2024). In short, Harvard et al. have brought to the fore the social, ethical, and normative challenges posed by the deployment of and reliance on computer simulation models by policy-makers, given that simulations are not always adequate for the purposes to which policy-makers want to put them. In particular, they highlight the degree to which assumptions, idealizations, and choices regarding what to represent in a model and what to ignore make the building of simulations that guide policy a highly value-laden enterprise. They also discuss the degree to which builders of models and simulations that guide policy have a “moral-epistemic duty” to give consideration to the values held by relevant stakeholders and the public generally.

Some of the considerations brought forth by Winsberg, Harvard, and colleagues about modeling in epidemiology and climate science have been echoed in relation to other disciplines. An important part of the framework surrounding the discourse on moral models – besides highlighting the unavoidable moral dimension embedded in these seemingly neutral formal methods – is to highlight the role of the interests of those doing the modeling, and their corresponding limitations regarding the interests of those unlike them (vis-à-vis socioeconomic class, inalienability of labor contexts, etc.). For a brief overview of some of these considerations in contexts such as healthcare, see Durán and van den Hoven (eds.) (2022), The Societal and Ethical Dimensions of Computer Simulations. In this volume, Bak (2022), for example, suggests that there are serious considerations regarding fairness when it comes to modeling the distribution of healthcare resources. In particular, Bak argues that there is no consensus on which theory of justice should underlie choices in the models simulating the distribution of things like cardiac defibrillators and other public health resources, and that modelers lack the heuristic resources to build equity considerations into their simulation models. A more thorough overview of the epistemic and ethical considerations of modeling, particularly in the context of climate simulations, comes from the work of Kawamleh (2022; 2023).

This conversation regarding the over-reliance on simulation models for policy-making in the second decade of the 21st century mirrors a similar debate that arose in the Netherlands at the start of the century, when the epistemic status of computer simulations in the context of climate change was debated by practitioners, philosophers, and policy-makers (Petersen 2012). In this context, the use of computer simulations was publicly questioned by a whistleblower researcher who argued that the government’s annual report on the state of the environment had no realistic value, since most of the results used for policymaking had been derived from computer simulations alone (Petersen 2012). Others, such as Petersen (2012), however, considered that in this context, as discussed in previous sections, computer simulations proved to be more reliable than the conventional evidence available. This last point was anticipated by Paul Humphreys (2004) when he said that machines, and by extension computational methods, do whatever they are deployed to do better than we could, and that this is why we use them in the first place. Nevertheless, as elucidated by Winsberg and others, that this is the case is not immediately obvious, particularly when we consider a complex technical artifact that involves input and maintenance from a motley set of practitioners whose methods and norms are either not yet established or, as Symons and Alvarado (2022) argue, are not transparent conveyers.

3.5 Trust and Computer Simulations

As can be gathered, a looming overarching issue in much of the literature has to do with our ability to trust computer simulations. When, how, and why we trust them, could trust them, or should trust them are questions that continue to this day. Some take it that the role of the epistemologist of computer simulations in science is to make sense of the fact that engineers, practitioners, and scientists do trust them (Durán and Formanek 2018). Others think that the responsibility of the philosopher is to provide an appropriate framework to underlie this trust. And yet other philosophers suggest that – given either the state of the art or the intrinsic properties of simulations – we cannot appropriately trust them and, hence, we simply should not trust them (Symons and Alvarado 2019).

More recently, however, literature has emerged suggesting that we ought to pay attention to the fact that there are distinct kinds of trust and distinct ways in which they can be appropriately conferred. Alvarado (2023b) suggests, for example, that epistemic technologies such as computer simulations should only be allocated epistemic trust – a trust that is allocated exclusively in virtue of epistemic reasons, as opposed to practical or axiological ones – if and when they are proven trustworthy. Boge (2024), on the other hand, provides a more nuanced set of distinctions by distinguishing between the expected epistemic roles that computer simulations play in different experimental settings. Like Alvarado, Boge suggests this allocation of trust is best understood by “distinguishing different epistemic capacities of simulations and different senses of trust”. The result is that “trusting simulations in their capacity to facilitate credible experimental results can mean accepting them as means for generating belief in these results, while this need not imply believing the models themselves in their capacity to represent an underlying reality.” (ibid.)

Whether Boge’s last point holds, and whether his distinction can overcome either the challenges posed to the reliabilist accounts surveyed here or the challenges Symons and Alvarado level against epistemic entitlements, is not yet clear.

Bibliography

  • Alvarado, Ramon, 2022, “Computer Simulations as Scientific Instruments”, Foundations of Science, 27(3): 1183–1205.
  • –––, 2023, Simulating Science: Computer Simulations as Scientific Instruments, (vol. 479), Cham: Springer Nature.
  • –––, 2023b, “What Kind of Trust Does AI Deserve, If Any?” AI and Ethics, 3(4), 1169–1183.
  • –––, 2025, “Challenges for Computational Reliabilism in AI and Other Computational Methods”, in Juan M. Durán and Giorgia Pozzi (eds.), Philosophy of Science for Machine Learning, pp. 81–106.
  • Angius, Nicola, 2019, “Qualitative Models in Computational Simulative Sciences: Representation, Confirmation, Experimentation”, Minds and Machines, 29(3): 397–416.
  • Arnold, E. and Durán, J. M. (Eds.), 2013, Computer Simulations and the Changing Face of Scientific Experimentation, Cambridge Scholars Publishing.
  • Baird, Davis, 2004, Thing Knowledge: A Philosophy of Scientific Instruments, University of California Press.
  • Bak, Marieke A., 2022, “Computing Fairness: Ethics of Modeling and Simulation in Public Health”, Simulation, 98(2): 103–111.
  • Barberousse, A. and Ludwig, P., 2009, “Models as Fictions”, in Fictions in Science: Philosophical Essays in Modeling and Idealizations, London: Routledge, pp. 56–73.
  • Barberousse, A. and Vorms, M., 2014, “About the Warrants of Computer-based Empirical Knowledge”, Synthese, 191(15): 3595–3620.
  • Barberousse, A. and Jebeile, J., 2019, “How Do the Validations of Simulations and Experiments Compare?”, in Computer Simulation Validation: Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives, Cham: Springer International Publishing, pp. 925–942.
  • Bedau, Mark A., 1997, “Weak Emergence”, Noûs (Supplement 11), 31: 375–399.
  • –––, 2011, “Weak Emergence and Computer Simulation”, in P. Humphreys and C. Imbert (eds.), Models, Simulations, and Representations, New York: Routledge, 91–114.
  • Beisbart, Claus, 2017, “Advancing Knowledge Through Computer Simulations? A Socratic Exercise”, in M. Resch, A. Kaminski, and P. Gehring (eds.), The Science and Art of Simulation (Volume I), Cham: Springer, pp. 153–174.
  • –––, 2019, “Virtual Realism: Really Realism or Only Virtually So? A Comment on DJ Chalmers’s Lectures”, Disputatio, 11(55): 297–331.
  • –––, 2024, “Do Computer Simulations Include Digital Artifacts?” Metaphysics, 7(1): 37–50.
  • Beisbart, C. and Norton, J. D., 2012, “Why Monte Carlo Simulations are Inferences and not Experiments”, in International Studies in Philosophy of Science, 26: 403–422.
  • Boge, Florian J., 2024, “Why Trust a Simulation? Models, Parameters, and Robustness in Simulation-Infected Experiments”, The British Journal for the Philosophy of Science, 75(4). doi:10.1086/716542
  • Burge, Tyler, 1993, “Content Preservation”, The Philosophical Review, 102(4): 457–488.
  • –––, 1998, “Computer Proof, Apriori Knowledge, and Other Minds: The Sixth Philosophical Perspectives Lecture”, Noûs, 32(S12): 1–37.
  • Chalmers, David J., 2019, “The Virtual as the Digital”, Disputatio, 11(55): 453–486.
  • Comesaña, Juan, 2010, “Evidentialist Reliabilism”, Noûs, 44(4): 571–600.
  • Currie, Adrian, 2017, “From Models-as-Fictions to Models-as-Tools”, Ergo, 4
  • –––, 2018, “The Argument From Surprise”, Canadian Journal of Philosophy, 48(5): 639–661.
  • Dardashti, R., K. Thebault, and E. Winsberg, 2015, “Confirmation Via Analogue Simulation: What Dumb Holes Could Tell Us About Gravity”, British Journal for the Philosophy of Science, 68(1): 55–89.
  • Dardashti, R., S. Hartmann, K. Thebault, and E. Winsberg, 2019, “Hawking Radiation and Analogue Experiments: A Bayesian Analysis”, Studies in History and Philosophy of Modern Physics, 67: 1–11.
  • Datteri, E. and Schiaffonati, V., 2019, “Robotic Simulations, Simulations of Robots”, Minds and Machines, 29: 109–125.
  • –––, 2023, “Computer Simulations and Surrogative Reasoning for the Design of New Robots”, Synthese, 202(1): 5.
  • Durán, Juan, M., 2018, Computer Simulations in Science and Engineering, Cham: Springer.
  • –––, 2017, “Varieties of Simulations: From the Analogue to the Digital”, in The Science and Art of Simulation I: Exploring-Understanding-Knowing, Cham: Springer International Publishing, pp. 175–192.
  • –––, 2020, “What is a Simulation Model?” Minds and Machines, 30(3): 301–323.
  • –––, 2024, “Computer Simulations”, in The Routledge Handbook of Philosophy of Scientific Modeling, Routledge, pp. 149–163.
  • Durán, J. M., and Arnold, E., 2013, “The Use of the ‘Materiality Argument’ in The Literature on Computer Simulations”, in Computer Simulations and the Changing Face of Scientific Experimentation, edited by Juan M. Durán and Eckhart Arnold, Cambridge Scholars Publishing, pp. 76–98.
  • Durán, J.M. and Formanek, N., 2018, “Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism”, Minds and Machines, 28: 645–666.
  • Durán, J.M. and van den Hoven, J., 2022, Introduction to the Special Issue “The Societal and Ethical Dimensions of Computer Simulations”, SIMULATION, 98(2): 85–86.
  • Durán, J.M. and Jongsma, K.R., 2021, “Who is Afraid of Black Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI”, Journal of Medical Ethics, 47(5): 329–335.
  • Epstein, J. and R. Axtell, 1996, Growing Artificial Societies: Social Science from the Bottom-up, Cambridge, MA: MIT Press.
  • Epstein, Joshua, 1999, “Agent-Based Computational Models and Generative Social Science”, Complexity, 4(5): 41–57.
  • Ferrario, Andrea, 2024, “Justifying our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach”, Science and Engineering Ethics, 30(6), 55.
  • Floridi, L., Fresco, N. and Primiero, G., 2015, “On Malfunctioning Software”, Synthese, 192(4): 1199–1220.
  • Franklin, Alan, 1996, The Neglect of Experiment, Cambridge: Cambridge University Press.
  • –––, 1989, “The Epistemology of Experiment”, The Uses of Experiment, D. Gooding, T. Pinch and S. Schaffer (eds.), Cambridge: Cambridge University Press, pp. 437–60.
  • –––, 1986, “Experiment and the Development of the Theory of Weak Interactions: Fermi’s Theory”, in PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association (Vol. 1986, No. 2), Cambridge University Press, pp. 163–179.
  • Frigg, R. and Reiss, J., 2009, “The Philosophy of Simulation: Hot New Issues or Same Old Stew”, Synthese, 169: 593–613.
  • Galison, Peter, 1997, “Three Laboratories”, Social Research, 64: 1127–1155.
  • Gehring, Petra, 2017, “Doing Research on Simulation Sciences? Questioning Methodologies and Disciplinarities”, in The Science and Art of Simulation I: Exploring-Understanding-Knowing, Cham: Springer International Publishing, pp. 9–21.
  • Giere, Ronald N., 2009, “Is Computer Simulation Changing the Face of Experimentation?” Philosophical Studies, 143: 59–62.
  • Gilbert, N. and K. G. Troitzsch, 1999, Simulation For the Social Scientist, Philadelphia, PA: Open University Press.
  • Gransche, Bruno, 2017, “The Art of Staging Simulations: Mise-en-scène, Social Impact, and Simulation Literacy”, in The Science and Art of Simulation I: Exploring-understanding-knowing, Cham: Springer International Publishing, pp. 33–50.
  • Grüne-Yanoff, Till, 2007, “Bounded Rationality”, Philosophy Compass, 2(3): 534–563.
  • Grüne-Yanoff, T. and Weirich, P., 2010, “Philosophy of Simulation”, Simulation and Gaming: An Interdisciplinary Journal, 41(1): 1–31.
  • Guala, Francesco, 2002, “Models, Simulations, and Experiments”, Model-Based Reasoning: Science, Technology, Values, L. Magnani and N. Nersessian (eds.), New York: Kluwer, 59–74.
  • –––, 2008, “Paradigmatic Experiments: The Ultimatum Game from Testing to Measurement Device”, Philosophy of Science, 75: 658–669.
  • Hacking, Ian, 1983, Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
  • –––, 1988, “On the Stability of the Laboratory Sciences”, The Journal of Philosophy, 85: 507–15.
  • –––, 1992, “Do Thought Experiments have a Life of Their Own?” PSA (Volume 2), A. Fine, M. Forbes and K. Okruhlik (eds.), East Lansing: The Philosophy of Science Association, 302–10.
  • Hartmann, Stephan, 1996, “The World as a Process: Simulations in the Natural and Social Sciences”, in R. Hegselmann, et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Dordrecht: Kluwer, 77–100.
  • Harvard, S. and E. Winsberg, 2021, “Causal Inference, Moral Intuition, and Modeling in a Pandemic”, Philosophy of Medicine, 2(2). doi:10.5195/pom.2021.70
  • –––, 2022, “The Epistemic Risk in Representation”, Kennedy Institute of Ethics Journal, 32(1): 1–31.
  • –––, 2024, “‘Managing Values’ in Health Economics Modelling: Philosophical and Practical Considerations”, Social Science & Medicine, 358: 117256.
  • Harvard, S., E. Winsberg, J. Symons, and A. Adibi, 2021, “Value Judgments in a COVID-19 Vaccination Model: A Case Study in the Need for Public Involvement in Health-Oriented Modelling”, Social Science & Medicine, 286: 114323.
  • Harvard, Stephanie, Jeff Lee Petry, and Nathan Drillot (directors), 2022, Moral Models: Crucial Decisions in the Age of Computer Simulation [film featuring Eric Winsberg], Peer Models Network, Salazar Films. [Harvard et al. 2022 available online]
  • Heidelberger, Michael, 2003, “Theory-ladenness and Scientific Instruments in Experimentation”, in H. Radder (ed.), The Philosophy of Scientific Experimentation, Pittsburgh, PA: University of Pittsburgh Press, pp. 138–151.
  • Horner, J. K. and Symons, J. F., 2020, “Software Engineering Standards for Epidemiological Models”, History and Philosophy of the Life Sciences, 42(4): 54.
  • Hubig, C., and Kaminski, A., 2017, “Outlines of a Pragmatic Theory of Truth and Error in Computer Simulation”, in M. Resch, A. Kaminski, and P. Gehring (eds.), The Science and Art of Simulation (Volume I), Cham: Springer, pp. 121–136.
  • Huggins, E. M. and Schultz, E. A., 1967, “San Francisco Bay in a Warehouse”, Journal of the Institute of Environmental Sciences and Technology, 10(5): 9–16.
  • Hughes, Richard, 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Mary S. Morgan and Margaret Morrison (eds.), Models as Mediators: Perspectives on Natural and Social Science (Ideas in Context), Cambridge: Cambridge University Press, pp. 97–145.
  • Humphreys, Paul, 1990, “Computer Simulation”, in A. Fine, M. Forbes, and L. Wessels (eds.), PSA 1990 (Volume 2), East Lansing, MI: The Philosophy of Science Association, pp. 497–506.
  • –––, 1995, “Computational Science and Scientific Method”, Minds and Machines, 5(1): 499–512.
  • –––, 2004, Extending Ourselves: Computational Science, Empiricism, and Scientific Method, New York: Oxford University Press.
  • –––, 2009, “The Philosophical Novelty of Computer Simulation Methods”, Synthese, 169: 615–626.
  • Humphreys, P. and Imbert, C. (Eds.), 2013, Models, Simulations, and Representations, New York: Routledge.
  • Johnson, A., and Lenhard, J., 2024, Cultures of Prediction: How Engineering and Science Evolve with Mathematical Tools, Cambridge, MA: MIT Press.
  • Kaminski, A., Gramelsberger, G., and Scheer, D., 2023, “Modeling for Policy and Technology Assessment: Challenges from Computer-based Simulations and Artificial Intelligence”, TATuP-Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 32(1): 11–17.
  • Kaminski, A., Resch, M., and Küster, U., 2018, “Mathematische Opazität. Über Rechtfertigung und Reproduzierbarkeit in der Computersimulation” [“Mathematical Opacity: On Justification and Reproducibility in Computer Simulation”], in Arbeit und Spiel, Baden-Baden: Nomos, pp. 253–278.
  • Kaufmann, W. J. and Smarr, L.L., 1993, Supercomputing and the Transformation of Science, New York: Scientific American Library.
  • Kawamleh, Suzanne, 2021, “Can Machines Learn How Clouds Work? The Epistemic Implications of Machine Learning Methods in Climate Science”, Philosophy of Science, 88(5): 1008–1020.
  • –––, 2022, “Confirming (Climate) Change: A Dynamical Account of Model Evaluation”, Synthese, 200(2): 122.
  • –––, 2023, The Epistemology and Ethics of Computational Science for Decision Making, Bloomington, IN: Indiana University.
  • Keller, Evelyn Fox, 2003, “Models, Simulation and Computer Experiments”, in H. Radder (ed.), The Philosophy of Scientific Experimentation, Pittsburgh, PA: University of Pittsburgh Press, pp. 198–215.
  • Koyré, Alexandre, 1957, From the Closed World to the Infinite Universe, Baltimore, MD: Johns Hopkins Press.
  • Krämer, M., Schiemann, G., and Zeitnitz, C., 2024, “Experimental High-Energy Physics Without Computer Simulations”, Studies in History and Philosophy of Science, 106: 37–42.
  • Kroes, P. and Meijers, A., 2002a, “The Dual Nature of Technical Artifacts: Presentation of a New Research Programme”, Techné: Research in Philosophy and Technology, 6(2): 4–8.
  • –––, 2002b, “Reply to Critics: The Dual Nature of Technical Artifacts”, Techné: Research in Philosophy and Technology, 6(2): 110–116.
  • Laymon, Ronald, 1985, “Idealizations and the Testing of Theories by Experimentation”, in Observation, Experiment and Hypothesis in Modern Physical Science, P. Achinstein and O. Hannaway (eds.), Cambridge, MA: MIT Press, pp. 147–73.
  • Lenhard, Johannes, 2007, “Computer Simulation: The Cooperation Between Experimenting and Modeling”, Philosophy of Science, 74: 176–94.
  • –––, 2019, Calculated Surprises: A Philosophy of Computer Simulation, Oxford: Oxford University Press.
  • Lenhard, J. and Küster, U., 2019, “Reproducibility and the Concept of Numerical Solution”, Minds and Machines, 29: 19–36.
  • Mayo, Deborah G., 1996, Error and the Growth of Experimental Knowledge, Chicago: University of Chicago Press.
  • Morgan, Mary, 2003, “Experiments Without Material Intervention: Model Experiments, Virtual Experiments and Virtually Experiments”, in The Philosophy of Scientific Experimentation, H. Radder (ed.), Pittsburgh, PA: University of Pittsburgh Press, 216–35.
  • –––, 2005, “Experiments Versus Models: New Phenomena, Inference and Surprise”, Journal of Economic Methodology, 12(2): 317–329.
  • Morgan, M. and Morrison, M. (Eds.), 2010, Models as Mediators, Cambridge: Cambridge University Press.
  • Morrison, Margaret, 2009, “Models, Measurement and Computer Simulation: The Changing Face of Experimentation”, Philosophical Studies, 143: 33–57.
  • –––, 2015, Reconstructing Reality: Models, Mathematics, and Simulations, Oxford: Oxford University Press.
  • Norton, S. and F. Suppe, 2001, “Why Atmospheric Modeling is Good Science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance, C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, pp. 88–133.
  • Oberkampf, W. and Roy, C., 2010, Verification and Validation in Scientific Computing, Cambridge: Cambridge University Press.
  • Oreskes, N., Shrader-Frechette, K., and Belitz, K., 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science, 263(5147): 641–646.
  • Páez, Andrés, 2024, “Understanding With Toy Surrogate Models in Machine Learning”, Minds and Machines, 34: 45. doi:10.1007/s11023-024-09700-1
  • Parke, Emily C., 2014, “Experiments, Simulations, and Epistemic Privilege”, Philosophy of Science, 81(4): 516–36.
  • Parker, Wendy, 2003, Computer Modeling in Climate Science: Experiment, Explanation, Pluralism (Doctoral dissertation, University of Pittsburgh).
  • –––, 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science, 22(2): 165–83.
  • –––, 2008b, “Computer Simulation Through an Error-Statistical Lens”, Synthese, 163(3): 371–84.
  • –––, 2009a, “Does Matter Really Matter? Computer Simulations, Experiments and Materiality”, Synthese, 169(3): 483–96.
  • –––, 2009b, “Confirmation and Adequacy-for-Purpose in Climate Modelling”, Aristotelian Society Supplementary Volume, 83(1): 233–249.
  • –––, 2013, “Computer Simulation”, in S. Psillos and M. Curd (eds.), The Routledge Companion to Philosophy of Science, 2nd Edition, London: Routledge, pp. 135–145.
  • –––, 2014, “Simulation and Understanding in the Study of Weather and Climate”, Perspectives on Science, 22(3): 336–356.
  • –––, 2017, “Computer Simulation, Measurement, and Data Assimilation”, British Journal for the Philosophy of Science, 68(1): 273–304.
  • Peschard, Isabelle, 2010, “Modeling and Experimenting”, in P. Humphreys and C. Imbert (eds), Models, Simulations, and Representations, London: Routledge, pp. 42–61.
  • Petersen, Arthur C., 2012, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, Boca Raton, FL: CRC Press.
  • Pincock, Christopher, 2011, Mathematics and Scientific Representation, Oxford: Oxford University Press.
  • Primiero, Giuseppe, 2019, “A Minimalist Epistemology for Agent-Based Simulations in the Artificial Sciences”, Minds and Machines, 29(1): 127–148.
  • –––, 2020, On the Foundations of Computing, Oxford: Oxford University Press.
  • Purves, Gordon Michael, 2012, “Finding Truth in Fictions: Identifying Non-Fictions in Imaginary Cracks”, Synthese, 190: 235–251.
  • Resch, M. M., Kaminski, A., and Gehring, P. (eds.), 2017, The Science and Art of Simulation I: Exploring-Understanding-Knowing, Cham: Springer International Publishing.
  • Rohrlich, Fritz, 1990, “Computer Simulation in the Physical Sciences”, in PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1990(2): 507–518.
  • Roush, Sherrilyn, 2015, “The Epistemic Superiority of Experiment to Simulation”, Synthese, 169: 1–24.
  • Roy, Subrata, 2005, “Recent Advances in Numerical Methods for Fluid Dynamics and Heat Transfer”, Journal of Fluids Engineering, 127(4): 629–30.
  • Ruphy, Stéphanie, 2015, “Computer Simulations: A New Mode of Scientific Inquiry?” in S. O. Hansson (ed.), The Role of Technology in Science: Philosophical Perspectives, Dordrecht: Springer, pp. 131–149.
  • Russo, Federica, 2022, Techno-Scientific Practices: An Informational Approach, Rowman & Littlefield.
  • Saam, Nicole J., 2017, “Understanding Social Science Simulations: Distinguishing Two Categories of Simulations”, in The Science and Art of Simulation I: Exploring-Understanding-Knowing, Cham: Springer International Publishing, pp. 67–84.
  • Schelling, Thomas C., 1971, “Dynamic Models of Segregation”, Journal of Mathematical Sociology, 1: 143–186.
  • Schiaffonati, Viola, 2016, “Stretching the Traditional Notion of Experiment in Computing: Explorative Experiments”, Science and Engineering Ethics, 22: 647–665.
  • –––, 2017, “Experimenting with Computing and in Computing: Stretching the Traditional Notion of Experimentation”, in Artificial Life Conference Proceedings, pp. 2–3.
  • Schiaffonati, V. and Verdicchio, M., 2013, “The Influence of Engineering Theory and Practice on Philosophy of AI”, in Philosophy and Theory of Artificial Intelligence, Berlin, Heidelberg: Springer, pp. 375–388.
  • Seager, William, 2012, “Emergence and Cellular Automata”, in Natural Fabrications: Science, Emergence and Consciousness, Cham: Springer Nature, pp. 65–83.
  • Simon, Herbert, 1969, The Sciences of the Artificial, Cambridge, MA: MIT Press.
  • Sullivan, Emily and Khalifa, K., 2019, “Idealizations and Understanding: Much Ado About Nothing?” Australasian Journal of Philosophy, 97(4): 673–689.
  • Symons, John, 2015, “Physicalism, Scientific Respectability, and Strongly Emergent Properties”, Cognitive Sciences: an Interdisciplinary Approach, 14–37.
  • Symons, J. and Alvarado, R., 2016, “Can We Trust Big Data? Applying Philosophy of Science to Software”, Big Data & Society, 3(2). doi:10.1177/2053951716664747
  • –––, 2019, “Epistemic Entitlements and the Practice of Computer Simulation”, Minds and Machines, 29(1): 37–60.
  • Symons, J. and Horner, J., 2014, “Software Intensive Science”, Philosophy & Technology, 27: 461–477.
  • Symons, J. and Boschetti, F., 2013, “How Computational Models Predict the Behavior of Complex Systems”, Foundations of Science, 18: 809–821.
  • Tal, Eran, 2011, “From Data to Phenomena and Back Again: Computer-Simulated Signatures”, Synthese, 182(1): 117–129.
  • Tolk, A., E.H. Page, V.V. Graciano Neto, P. Weirich, N. Formanek, J.M. Durán, J.F. Santucci, and S. Mittal, 2023, “Philosophy and Modeling and Simulation”, in Body of Knowledge for Modeling and Simulation: A Handbook by the Society for Modeling and Simulation International, Cham: Springer International Publishing, pp. 383–412.
  • Toon, Adam, 2010, “Novel Approaches to Models”, Metascience, 19(2): 285–288.
  • Trenholme, Russell, 1994, “Analog Simulation”, Philosophy of Science, 61: 115–131.
  • Unruh, W. G., 1981, “Experimental Black-Hole Evaporation?” Physical Review Letters, 46(21): 1351–53.
  • Weisberg, Michael, 2013, Simulation and Similarity: Using Models to Understand the World, New York: Oxford University Press.
  • Winsberg, Eric, 1999, “Sanctioning Models: The Epistemology of Simulation”, Science in Context, 12(3): 275–92.
  • –––, 2001, “Simulations, Models, and Theories: Complex Physical Systems and their Representations”, Philosophy of Science, 68: S442–S454.
  • –––, 2003, “Simulated Experiments: Methodology for a Virtual World”, Philosophy of Science, 70: 105–125.
  • –––, 2006, “Handshaking Your Way to the Top: Inconsistency and Falsification in Intertheoretic Reduction”, Philosophy of Science, 73: 582–594.
  • –––, 2009a, “A Tale of Two Methods”, Synthese, 169(3): 575–92.
  • –––, 2009b, “Computer Simulation and the Philosophy of Science”, Philosophy Compass, 4/5: 835–845.
  • –––, 2009c, “A Function for Fictions: Expanding the Scope of Science”, in Fictions in Science: Philosophical Essays on Modeling and Idealization, M. Suarez (ed.), London: Routledge.
  • –––, 2010, Science in the Age of Computer Simulation, Chicago: The University of Chicago Press.
  • –––, 2018, Philosophy and Climate Science, Cambridge: Cambridge University Press.
  • –––, 2020, “Can Models Have Skill?” in A Critical Reflection on Automated Science: Will Science Remain Human?, Cham: Springer International Publishing, pp. 217–234.
  • –––, 2022, “Who is Responsible for Global Health Inequalities After Covid-19?” Global Epidemiology, 4: 100081.
  • Winsberg, E. and Harvard, S., 2022, “Purposes and Duties in Scientific Modelling”, Journal of Epidemiology and Community Health, 76(5): 512–517.
  • –––, 2024, Scientific Models and Decision Making, Cambridge: Cambridge University Press.

Copyright © 2026 by
Eric Winsberg <winsberg@usf.edu>
Ramon Alvarado <ralvarad@uoregon.edu>
