Climate Science
Climate science investigates the structure and dynamics of earth’s climate system. It seeks to understand how global, regional and local climates are maintained as well as the processes by which they change over time. In doing so, it employs observations and theory from a variety of domains, including meteorology, oceanography, physics, chemistry and more. This entry provides an overview of some of the core concepts and practices of contemporary climate science as well as philosophical work that engages with them.
- 1. Introduction
- 2. Basic Concepts
- 3. Observational Data
- 4. Climate Models
- 5. Anthropogenic Climate Change
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Introduction
The field of climate science emerged in the second half of the twentieth century. Though it is sometimes also referred to as “climatology”, it differs markedly from the field of climatology that came before. That field, which existed from the late nineteenth century (if not earlier), was an inductive science, in many ways more akin to geography than to physics; it developed systems for classifying climates based on empirical criteria and, by the mid-twentieth century, was increasingly focused on the calculation of statistics from weather observations (Weart 2008; Edwards 2010; Heymann & Achermann 2018).
Climate science, by contrast, aims to explain and predict the workings of a global climate system—encompassing the atmosphere, oceans, land surface, ice sheets and more—and it makes extensive use of both theoretical knowledge and mathematical modeling. In fact, the emergence of climate science is closely linked to the rise of digital computing, which made it possible to simulate the large-scale motions of the atmosphere and oceans using fluid dynamical equations that were otherwise intractable; these motions transport heat, moisture and other quantities that shape paradigmatic climate variables, such as average surface temperature and rainfall. Today, complex computer models that incorporate representations of a wide range of climate system processes are a mainstay of climate research.
The emergence of climate science is also linked to the issue of anthropogenic climate change. In recent decades, growing concern about climate change has brought a substantial influx of funding for climate research. It is a misconception, however, that climate science just is the study of anthropogenic climate change. On the contrary, there have been, and continue to be, many lines of research within climate science that address other questions about the workings of the climate system.
2. Basic Concepts
On a standard characterization, Earth’s climate system is the complex, interactive system consisting of the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere. Some definitions appear to be narrower, limiting the climate system to those aspects of the atmosphere, hydrosphere, etc. that jointly determine the values of paradigmatic climate variables, such as average surface temperature and rainfall. In either case, it would seem that some human activities, including those that release greenhouse gases into the atmosphere, are part of the climate system. In practice, however, climate scientists often classify human activities as external influences, along with volcanic eruptions and solar output (see IPCC-Glossary: 2222). This seems to be a pragmatic choice: it is easier, and a reasonable first approximation, to represent anthropogenic greenhouse gas emissions as exogenous variables in climate models. Yet it may also reflect a deeper ambivalence about whether humans are part of nature.
A climate is a property of a climate system, but there are different views on what sort of property it is. According to what might be called actualist views, a climate is defined by actual conditions in the climate system. Narrower actualist definitions reference weather conditions only. For example, the climate of a locale is often defined as its average weather conditions, or the statistical distribution of its weather conditions, when long time periods are considered. Broader actualist definitions reference conditions throughout the climate system, such as: “the state, including a statistical description, of the climate system” (IPCC-Glossary: 2222). These broader definitions emerged in the second half of the twentieth century in connection with increased efforts to understand, with the help of physical theory, how climates in the narrower sense are maintained and changed via atmospheric, oceanic and other processes.
A number of other definitions of climate are what Werndl (2016) calls model-immanent. For example, taking a dynamical systems perspective, climate is often identified with an attractor of the climate system, that is, with the conditions represented by an attractor of a (perfect) mathematical model of the climate system. Roughly speaking, an attractor is a set of points to which the values of variables in a dynamical model tend to evolve from a wide range of starting values. A drawback of this way of thinking about climate is that the distribution of conditions in a finite time period need not resemble those represented by an (infinite-time) attractor (Smith 2002). After critically examining several definitions of climate, Werndl (2016) proposes a novel, model-immanent definition on which a climate is a distribution of values of climate variables over a suitably long but finite time period, under a regime of external conditions and starting from a particular initial state.
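The mismatch between finite-time and attractor statistics can be illustrated with a toy dynamical system (not a climate model). The following Python sketch integrates the Lorenz-63 equations and compares statistics of a short, finite segment of a trajectory with those of a much longer run that better approximates the attractor; the integration scheme, parameter values and run lengths are all merely illustrative.

```python
# A toy illustration (not a climate model): finite-time statistics of a
# chaotic system need not match its long-run, attractor-like statistics.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system (crude but serviceable here).
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

def x_series(n_steps):
    # Record the x-variable along a trajectory from a fixed initial state.
    state = np.array([1.0, 1.0, 1.0])
    out = np.empty(n_steps)
    for i in range(n_steps):
        state = lorenz_step(state)
        out[i] = state[0]
    return out

long_run = x_series(200_000)   # long run: a proxy for the attractor distribution
short_run = x_series(2_000)    # a short, finite "climate" period
print(f"long-run  mean/std of x: {long_run.mean():+.3f} / {long_run.std():.3f}")
print(f"short-run mean/std of x: {short_run.mean():+.3f} / {short_run.std():.3f}")
```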
Definitions of climate change are closely related to definitions of climate. For instance, scientists subscribing to the climate-as-attractor view might characterize climate change as the difference between two attractors, one associated with a set of external conditions obtaining at an earlier time and one associated with a different set obtaining at a later time. By contrast, a definition of climate change associated with a narrower, actualist view is: “any systematic change in the long-term statistics of climate elements (such as temperature, pressure, or winds) sustained over several decades or longer” (American Meteorological Society 2012). The latter definition allows that climate change might occur even in the absence of any changes in external forcing conditions, as a manifestation of internal variability, i.e., due to processes internal to the climate system, such as slowly-evolving ocean circulations.
The concept of internal variability, and various other concepts in climate science, also raise interesting questions, both conceptual and empirical (Katzav & Parker 2018). Thus far, however, the conceptual foundations of climate science have remained largely unexplored by philosophers.
3. Observational Data
The sources and types of observational data employed in climate science are tremendously varied. Data are collected not only at land-based stations, but also on ships and buoys in the ocean, on airplanes, on satellites that orbit the earth, by drilling into ancient ice at earth’s poles, by examining tree rings and ocean sediments, and in other ways. Many challenges arise as climate scientists attempt to use these varied data to answer questions about climate and climate change. These challenges stem both from the character of the data—they are gappy in space and time, and they are obtained from instruments and other sources that have limited lifespans and that vary in quality and resolution—and from the nature of the questions that climate scientists seek to address, which include questions about long-term changes on a global scale.
To try to overcome these challenges, climate scientists employ various procedures for quality control, correction, synthesis and transformation. Some of these will be highlighted below, in the course of introducing three important types of climate dataset. Because of the extensive processing involved in their production, these datasets are often referred to as data products.
3.1 Station-Based Datasets
The weather and climate conditions that matter most immediately to people are those near the earth’s surface. Coordinated networks of land-based observing stations—measuring near-surface temperature, pressure, precipitation, humidity and sometimes other variables—began to emerge in the mid-nineteenth century and expanded rapidly in the twentieth century (Fleming 1998: Ch.3). Today, there are thousands of stations around the world making daily observations of these conditions, often overseen by national meteorological services. In recent decades, there have been major efforts to bring together records of past surface observations in order to produce long-term global datasets that are useful for climate change research. These ongoing efforts involve international cooperation as well as significant “data rescue” activities, including imaging and digitizing of paper records, in some cases with the help of the public.
Obtaining digitized station data, however, is just the first step. As Edwards (2010: 321) emphasizes, “…if you want global data, you have to make them.” To construct global temperature datasets that are useful for climate research, thousands of station records, amounting to millions of individual observational records, are merged, subjected to quality control, homogenized and transformed to a grid. Records come from multiple sources, and merging aims to avoid redundancies while maximizing comprehensiveness in station coverage. Procedures for quality control seek to identify and remove erroneous data. Homogenization removes jumps and trends in station time series that are due to non-climatic factors, e.g., because an instrument is replaced with a new one, a building is constructed nearby, or the timing of observations changes. Finally, for many purposes, it is useful to have datasets that are gridded, where each grid point is associated with a spatial region (e.g., a 2° latitude × 2° longitude region). Transforming data from a collection of stations to a grid involves further methodological choices: which stations should influence the value assigned to a given grid point, what to do if the associated region contains no reporting stations over a period, etc. In practice, different scientific groups make different choices (see Hartmann et al. 2013).
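To make the gridding step concrete, the following Python sketch implements one simple choice among many: each 2° × 2° grid box is assigned the average of the anomalies reported by the stations falling within it. The station records here are invented for illustration, and real data products use considerably more sophisticated methods.

```python
# A schematic sketch of one simple gridding choice: average all station
# anomalies falling within each 2-degree box (station data are hypothetical).
import numpy as np

# (latitude, longitude, temperature anomaly in degrees C) for a few stations
stations = [
    (51.5, -0.1, 0.42), (52.2, 0.1, 0.38), (48.9, 2.3, 0.55),
    (40.7, -74.0, 0.61), (41.3, -72.9, 0.58),
]

grid = {}  # maps (lat_index, lon_index) -> list of station anomalies
for lat, lon, anom in stations:
    key = (int(lat // 2), int(lon // 2))  # which 2-degree box the station is in
    grid.setdefault(key, []).append(anom)

# Each grid-box value is the mean of its stations; boxes with no stations
# are simply left empty here (real products must decide how to infill them).
gridded = {key: np.mean(vals) for key, vals in grid.items()}
for key, val in sorted(gridded.items()):
    print(f"box {key}: {val:+.2f} C")
```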
Gridded station-based datasets have been produced for temperature, precipitation and other variables. Surface temperature datasets are used to quantify the extent of recent global warming. Three prominent sources for such temperature datasets are NASA’s Goddard Institute for Space Studies (GISS), the University of East Anglia’s Climatic Research Unit (CRU) and the U.S. National Centers for Environmental Information (NCEI). Periodically, these centers develop new versions of their datasets, reflecting both the acquisition of additional data and methodological innovations (see, e.g., Morice et al. 2021). Despite differences in the details of their methodologies, there is good agreement among estimates of global temperature changes derived from these datasets, especially for the second half of the twentieth century. Nevertheless, in response to concerns expressed by climate contrarians, an independent analysis was conducted by the non-governmental organization Berkeley Earth; drawing on a much larger set of station records and using a different methodology for handling inhomogeneities, their analysis confirmed the twentieth century global warming seen in other datasets (Rohde et al. 2013).
3.2 Reanalyses
Compared to station data, in situ observations of conditions away from earth’s surface are much less plentiful. Radiosondes, which are balloon-borne instrument packages that ascend through the atmosphere measuring pressure, temperature, humidity and other variables, are now launched twice daily at hundreds of sites around the world, but they do not provide uniform global coverage. Satellite-borne instruments can provide global coverage and are of significant value in the study of weather and climate, but they do not directly measure vertical profiles of key climatological variables like temperature; approximate values of these variables must be inferred in a complex way from radiance measurements (see Lusk 2021). A system of several thousand ocean floats, known as Argo, now takes local profiles of temperature and salinity in the ocean. None of these data sources existed a century ago, though observations of conditions away from earth’s surface were sometimes made (e.g., using thermometers attached to kites).
One way to remedy spatial and temporal gaps in observations is to perform statistical interpolation. An alternative methodology, known as data assimilation, produces estimates of the three-dimensional state of the atmosphere or ocean using not just available observations but also one or more forecasts from a physics-based simulation model. The forecast provides a first-guess estimate of the atmosphere or ocean state at the time of interest, which is adjusted in light of observational data collected around that time. In daily weather forecasting, the resulting state estimate is known as an analysis, and it provides initial conditions for weather prediction models. To produce long-term datasets for climate research, data assimilation is performed iteratively for a sequence of past times (e.g., every 12 hours over several decades), producing a retrospective analysis or reanalysis for each time in the sequence (see Dee et al. 2016 [Other Internet Resources]).
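The logic of the assimilation update can be conveyed with a minimal, single-variable Python sketch: a forecast value is adjusted toward an observation, with the size of the adjustment determined by assumed error variances. Operational systems apply analogous (far more complex) updates to millions of variables at once; all numbers here are hypothetical.

```python
# A minimal, scalar sketch of the data-assimilation update: a model
# forecast (the "first guess") is adjusted toward an observation, with
# weights set by the respective error variances. All values illustrative.
forecast = 14.2          # first-guess temperature (deg C) from the model
sigma_f2 = 1.0           # assumed forecast error variance
observation = 15.1       # observed temperature (deg C)
sigma_o2 = 0.25          # assumed observation error variance

# Kalman-style gain: how much to trust the observation over the forecast
gain = sigma_f2 / (sigma_f2 + sigma_o2)
analysis = forecast + gain * (observation - forecast)
print(f"analysis estimate: {analysis:.2f} C (gain = {gain:.2f})")
```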
Atmospheric reanalyses are in heavy use in climate research, because they provide complete gridded data at regular time steps over long time periods, both at the surface and above, and for a wide range of variables, including ones that are difficult to measure with instruments. Despite the central role of computer forecasts in reanalysis, many climate scientists refer to reanalysis datasets as “observations” and use them as such: to investigate climate system dynamics, to evaluate climate models, to find evidence of the causes of recent climate change, and so on. Other climate scientists, however, emphasize that reanalyses should not be confused with “real” observations (Schmidt 2011). Parker (2017) argues that differences between data assimilation and traditional observation and measurement are not as great as one might think; she proposes that data assimilation here can be understood as a complex measuring procedure that is still under development.
3.3 Paleoclimate Reconstructions
Climate scientists are also interested in climates of the distant past. These paleoclimatic investigations rely on proxy indicators: biophysical properties of materials formed during the past that are interpreted to represent climate-related variations (IPCC-Glossary: 2245). For example, variations in the ratio of different isotopes of oxygen in deep ice cores and in the fossilized shells of tiny animals serve as proxies for variations in temperature. Additional proxies for climate-related variables come via tree rings, corals, lake sediments, boreholes and other sources. Wilson and Boudinot (2022) argue that the difference between proxy measurements of past climate conditions and more familiar measurements in science should not be understood in terms of “directness” but rather in terms of how they account for confounding causal factors.
Producing proxy-based reconstructions of past climate conditions, especially on global scales, involves a host of methodological challenges. Just a few will be mentioned here. First, proxies often reflect the influence of multiple environmental factors at once. For example, tree rings can be influenced not only by temperature but also by precipitation, soil quality, cloud cover, etc., which makes it more difficult to confidently infer a single variable of interest, such as temperature. Second, proxies often must be calibrated using recent observations made with meteorological instruments, but the instrumental record covers only a very short period in earth’s history, and factors shaping the proxy in the distant past may be somewhat different. Third, proxies of a given type can have limited geographical coverage: ice cores are found only at the poles, tree rings are absent in locations where there is no distinct growing season, and so on. Fourth, the temporal resolution of different types of proxy can differ markedly—from a single year to a century or more—adding a layer of complexity when attempting to use multiple proxies together. For these reasons and others, quantifying the uncertainties associated with proxy reconstructions is also a significant challenge.
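The calibration challenge (the second above) can be made concrete with a schematic Python sketch: a linear relation between a proxy and instrumental temperature is fitted over their period of overlap and then inverted to estimate temperature from older proxy values. The data and the linear form of the relation are assumptions made purely for illustration.

```python
# A schematic sketch of proxy calibration: regress a proxy series against
# instrumental temperatures over their period of overlap, then apply the
# fitted relation to the pre-instrumental part of the proxy record.
# All data here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
inst_temp = 14.0 + 0.3 * rng.standard_normal(50)   # overlap-period temperatures
proxy_overlap = 2.0 * inst_temp + 1.0 + 0.2 * rng.standard_normal(50)

# Fit proxy = a * temperature + b over the overlap period
a, b = np.polyfit(inst_temp, proxy_overlap, 1)

# Invert the relation to "reconstruct" temperature from an older proxy value
proxy_ancient = 28.1
reconstructed = (proxy_ancient - b) / a
print(f"reconstructed temperature: {reconstructed:.1f} C")
# The key assumption: the fitted relation is taken to hold in the distant
# past, where confounding factors may have differed.
```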
Despite these methodological challenges, climate scientists have produced paleoclimatic reconstructions covering various regions, time periods, and climate-related variables, including temperature, precipitation, streamflow, vegetation and more. Temperature reconstructions in particular have been a source of controversy (see Section 5.3), in part because they underwrite conclusions about the extent to which recent warming is unusual. Vezér (2017) suggests that inferences to such conclusions often involve a form of variety-of-evidence reasoning, insofar as they involve multiple temperature reconstructions informed by different proxies, methodological assumptions and statistical techniques (see also Oreskes 2007). Watkins (2024) calls attention to the limited intercalibration, standardization, and integration of proxy data and measurements, arguing that this disunity has several benefits, especially related to the management of error and uncertainty.
4. Climate Models
Models of the climate system, especially computer simulation models, also play important roles in both theoretical and applied research in climate science.
4.1 Types
Climate scientists often refer to a “hierarchy” or “spectrum” of mathematical and computational climate models, ranging from simple to complex. The complexity of climate models increases with: the number of spatial dimensions represented; the resolution at which those dimensions are represented; the range of climate system components and processes “included” in the model; and the extent to which those processes are given realistic rather than simplified representations. As a consequence, more complex models tend to be more computationally demanding as well.
Among the simplest climate models are energy balance models (EBMs) that represent earth’s surface energy budget in a highly aggregate way (see McGuffie & Henderson-Sellers 2014: Ch.3). These models, which often have surface temperature as their sole dependent variable, are constructed using both physical theory (e.g., the Stefan-Boltzmann equation) and empirical parameters (e.g., representing the albedo of the earth and the emissivity of the atmosphere). Zero-dimensional EBMs represent the entire climate system as a single point. They can be used to calculate by hand an estimate of the globally-averaged surface temperature when the system is in radiative equilibrium. One-dimensional and two-dimensional EBMs take a similar energy-budget approach but represent average temperature at different latitudes and/or longitudes and account in a rough way for the transport of heat between them. Their equations are usually solved with the help of digital computers.
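The zero-dimensional calculation can be illustrated as follows. This Python sketch balances absorbed solar radiation against outgoing longwave radiation and solves for the equilibrium surface temperature; the albedo and effective emissivity values are standard textbook approximations, not outputs of any particular model.

```python
# A minimal zero-dimensional EBM sketch: balance absorbed solar radiation
# against outgoing longwave radiation (Stefan-Boltzmann) and solve for the
# equilibrium surface temperature. Parameter values are rough textbook
# approximations used for illustration.
SOLAR = 1361.0      # solar constant (W/m^2)
ALBEDO = 0.30       # planetary albedo
EMISSIVITY = 0.61   # rough effective emissivity of the atmosphere
SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)

# Radiative equilibrium: (1 - albedo) * S / 4 = emissivity * sigma * T^4
absorbed = (1.0 - ALBEDO) * SOLAR / 4.0
T_equilibrium = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
print(f"equilibrium surface temperature: {T_equilibrium:.0f} K")  # ~288 K
```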
Earth system models of intermediate complexity (EMICs) come in a range of forms but tend to be both comprehensive and highly idealized. An EMIC might incorporate not only representations of the atmosphere, ocean, land surface and sea ice, but also representations of some biospheric processes, ice sheets and ocean sediment processes. These representations, however, are often relatively simple or coarse. The atmosphere component of an EMIC, for instance, might be a two-dimensional enhanced version of an EBM known as an energy-moisture balance model. The ocean might be simulated explicitly in three dimensions using fluid dynamical equations, but with low spatiotemporal resolution. See Flato et al. 2013 (Tables 9.2 and 9.A.2) for examples and further details. The relative simplicity and coarseness of EMICs make them computationally efficient, which in turn makes it possible to use them to simulate the evolution of the climate system on millennial time scales.
At the complex end of the spectrum, coupled atmosphere-ocean general circulation models (GCMs) simulate atmospheric and oceanic motions in three spatial dimensions. They also incorporate representations of the land surface and sea ice, and they attempt to account for important interactions among all of these components. GCMs evolved from atmosphere-only general circulation models, which in turn were inspired by early weather forecasting models (Edwards 2000; Weart 2010). The latest generation of climate models, earth system models (ESMs), extend GCMs by incorporating additional model components related to atmospheric chemistry, aerosols and/or ocean biogeochemistry. In both GCMs and ESMs, numerical methods are used to estimate solutions to discretized versions of fluid dynamical equations at points on a three-dimensional grid (or using an alternative spectral approach). With increased computing power in recent decades, the horizontal spacing of these grid points for the atmosphere has been reduced from several hundred kilometers to ~100 km, and the number of vertical layers has increased from ~10 to ~50, with a time step of 10–30 minutes (McGuffie & Henderson-Sellers 2014: 282). Despite this increased resolution, many important processes, such as the formation of clouds and precipitation, the transfer of radiation, and chemical reactions, still occur at sub-grid scales; accounting for the effects of these processes is a major challenge (see Section 4.2).
Another important type of climate model is the regional climate model (RCM). Like GCMs and ESMs, RCMs are designed to be comprehensive and to incorporate realistic, theory-based representations of climate system processes. Unlike GCMs and ESMs, however, RCMs represent only a portion of the globe, which allows them to have finer spatiotemporal resolution without exceeding available computing power. With this increased resolution, RCMs can explicitly simulate some smaller-scale processes, and they also have the potential to reveal spatial variations in conditions (e.g., due to complex topography) that cannot be resolved by GCMs/ESMs. These features make RCMs an attractive tool for studies of regional climate change.
While complex physics-based models like GCMs and ESMs take center stage in climate research today and will be the focus of discussion below, empirical/data-driven models have also been developed and, with the rise of machine learning, are increasingly being investigated (see, e.g., Lucarini & Chekroun 2023; Kochkov et al. 2024). In addition, before the advent of the digital computer, concrete analogue models of the atmosphere, such as rotating pans of fluid, also served as tools of research (Edwards 2011). Watkins (2023) argues that paleoclimate analogues—past climate episodes that resemble the present in relevant ways—can plausibly be understood as constituting another, rather unusual, type of concrete climate model: they are full-scale, naturally-occurring, non-manipulable models.
4.2 Construction
Today, there are a few dozen state-of-the-art GCMs/ESMs housed at modeling centers around the world. These are huge models, in some cases involving more than a million lines of computer code, and it takes considerable time to produce simulations of interest with them, even on supercomputers. To construct a GCM/ESM from scratch requires a vast range of knowledge and expertise. Consequently, many of today’s GCMs and ESMs have been built upon the foundations of earlier generations of models (Knutti, Masson, & Gettelman 2013). They thus often have a layered history, with some parts of their code originally developed years or even decades ago and other parts just added or upgraded. Even models at different modeling centers sometimes have pieces of computer code in common, whether shared directly or borrowed independently from an earlier model.
Many of today’s GCMs/ESMs have an ostensibly modular design: they consist of several component models corresponding to different parts of the climate system—atmosphere, ocean, land surface, etc.—as well as a “coupler” that passes information between them where the spatial boundaries of the component systems meet (Alexander & Easterbrook 2015). Lenhard and Winsberg (2010), however, argue that complex climate models in fact exhibit only a “fuzzy” modularity: in order for the model as a whole to work well, some of the details of the component models will be adjusted to mesh with the features of other components and to try to compensate for their limitations (see also Winsberg 2018, Ch.9).
Each component model in a GCM/ESM in turn incorporates representations of numerous important processes operating within the corresponding part of the climate system. For atmosphere models, for example, it is common to distinguish the “dynamics” and the “physics.” The dynamics (or “dynamical core”) is a set of fluid dynamical and thermodynamic equations reflecting conservation of momentum, mass and energy, as well as an equation of state. These equations are used to simulate large-scale atmospheric motions, which transport heat, mass and moisture; the scale of the motions that can be resolved depends on the spatial grid of the model. The “physics” encompasses representations of sub-grid processes that impact grid-scale conditions in a significant way: radiative transfer, cloud formation, precipitation and more. These sub-grid processes are parameterized, i.e., represented as a function of the grid-scale variables that are calculated explicitly in the dynamical core. Parameterization is necessary in other components of climate models as well, whenever sub-grid processes significantly influence resolved-scale variables.
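A toy example may help to convey what a parameterization looks like. In the following Python sketch, sub-grid precipitation is diagnosed as a simple function of grid-scale relative humidity, a resolved variable; the functional form and threshold are invented for illustration and are far simpler than any scheme used in actual models.

```python
# A toy sketch of a sub-grid parameterization: precipitation is not
# simulated cloud-by-cloud but diagnosed from a resolved, grid-scale
# variable. The scheme and its parameters are invented for illustration.
def precipitation_rate(grid_humidity, saturation=0.8, efficiency=0.5):
    """Diagnose grid-box precipitation (arbitrary units) from
    grid-scale relative humidity, a resolved variable."""
    excess = max(0.0, grid_humidity - saturation)
    return efficiency * excess  # sub-grid rainfall as a function of resolved state

for rh in (0.70, 0.85, 0.95):
    print(f"relative humidity {rh:.2f} -> precipitation {precipitation_rate(rh):.3f}")
```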
Constructing a parameterization is akin to an engineering problem, where the goal is to find an adequate substitute for the explicit simulation of a sub-grid process, using a limited set of ingredients. This is a challenging task. Typically, parameterizations are informed by physical theory but also incorporate elements derived at least in part from observations; they are “semi-empirical.” Very recently, climate scientists have begun investigating how machine learning methods can be used to construct or improve parameterizations as well (de Burgh-Day & Leeuwenburg 2023; Schneider et al. 2023). Because climate models incorporate both accepted physical theory and semi-empirical engineered elements like parameterizations, they are sometimes described as having a hybrid realist-instrumentalist status (Katzav 2013a; Goodwin 2015).
The development of climate models also inevitably involves some tuning, or calibration: the (often ad hoc) adjustment of parameter values or other elements of the model in order to improve model performance, usually measured by fit with observations (see Mauritsen et al. 2012; Hourdin et al. 2017). The observations that are targeted vary from case to case; they might relate to individual processes or to aggregate, system-level variables, such as global temperature. Although tuning has often been done by hand, in a way that relies on expert judgment and trial and error, there is growing interest in using machine learning methods instead (Bonnet et al. preprint). Such automated approaches might seem to make the tuning process more “objective,” but Jebeile et al. (2023) argue that tuning via machine learning techniques requires subjective elements not so different from those involved in standard parameterization and tuning.
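The basic logic of tuning can be conveyed with a schematic Python sketch: a single adjustable parameter is swept, and the value that minimizes mismatch with an observed target is retained. In practice, tuning involves many parameters, computationally expensive simulations and substantial expert judgment; the stand-in model and target value here are hypothetical.

```python
# A schematic sketch of tuning: sweep a parameter and keep the value that
# minimizes mismatch with an observed target. Everything here is a stand-in
# for a far more complex, judgment-laden process.
import numpy as np

OBSERVED_GLOBAL_TEMP = 14.0  # target (deg C), illustrative

def toy_model(cloud_param):
    # Stand-in for an expensive simulation returning global temperature.
    return 12.0 + 4.0 * cloud_param

candidates = np.linspace(0.0, 1.0, 21)
errors = [abs(toy_model(p) - OBSERVED_GLOBAL_TEMP) for p in candidates]
best = candidates[int(np.argmin(errors))]
print(f"tuned parameter value: {best:.2f}")
```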
A further issue is how social and ethical values shape climate model construction. Winsberg (2012) argues that they do so by influencing priorities in model development as well as the management of inductive risk (see also Biddle & Winsberg 2009; for responses, see Parker 2014a and Schmidt & Sherwood 2015). Intemann (2015) contends that social and ethical values can legitimately influence climate model construction, including via the routes identified by Winsberg, when this promotes democratically-endorsed social and epistemic aims of research. Parker and Winsberg (2018) emphasize that modeling priorities shaped by values can impact model development in ways that result in some communities having higher quality information about future climate change than others. In light of this, Jebeile and Crucifix (2021) propose that the development of climate models should be informed by a diversity of standpoints and values (see also Leuschner 2015). Pulkkinen et al. (2022) review recent work on the roles of social and ethical values in climate science more broadly and note the need for further attention to this topic.
4.3 Uses
Computational climate models are used for many purposes; just a few types will be mentioned here (see also Petersen 2012: Ch.5). One important use is in characterizing features of the climate system that are difficult to learn about via available observations. For example, the internal variability of the climate system is often estimated from the variability seen in long GCM/ESM simulations in which external conditions are held constant at pre-industrial levels. Estimates of internal variability in turn play an important role in studies that seek to detect climate change in observations (Section 5.1). Similarly, climate models have been used to arrive at estimates of equilibrium climate sensitivity (ECS), a measure of the climate system’s responsiveness to changing carbon dioxide concentrations (see Undorf et al. 2022 for more details and an analysis of the ways in which both epistemic and non-epistemic values play a role).
Climate models are also used as scientists seek explanations and understanding. Parker (2014b) identifies several ways in which climate models can play a role here: by serving as a surrogate for observations, the analysis of which can suggest explanatory hypotheses and help to fill in gaps in explanations; by allowing scientists to test the quantitative plausibility of explanatory hypotheses; and by serving as experimental systems that can be manipulated and studied in order to gain insight into their workings. In connection with the latter, Held (2005) calls for increased efforts to develop and systematically study “hierarchies of lasting value”—sets of models, ranging from the highly idealized to the very complex, that stand in known relations to one another so that the sources of differences in their behavior can be more readily diagnosed; he contends that the study of such hierarchies is essential to the development of climate theory. With growing interest in the application of machine learning tools whose “models” may be difficult to physically interpret, there is concern that progress in understanding the workings of the climate system may be slowed (but see Knüsel & Baumberger 2020 and Jebeile, Lam, & Räz 2021).
In addition, climate models are used to make predictions. One might think that, if the atmosphere is a chaotic system, then climate prediction is impossible. But chaos is an obstacle to precisely predicting the trajectory of a system—the time-ordered sequence of its states over a period—not (necessarily) to predicting the statistical properties of one or more trajectories; climate prediction is concerned with the latter. Short-term predictions of climate for periods ranging from a year to a decade or so are made with both physics-based and empirical models. The forecasts often take as their starting point an observation-based estimate of the recent state of the climate system, and assumptions are made about what external forcing conditions over the forecast period will be like. Climate models are also used to make longer-term conditional predictions, known as projections. These are predictions of what future climate would be like conditioned on external forcing scenarios, without assuming that any of those scenarios will actually occur; they are often launched from an initial state that is representative of the climate at the start of the simulated period, though not estimated directly from observations (see Werndl 2019 for further discussion). Climate change projections from computational models have become a major focus of climate research and will be discussed further in Section 5.2. Both Wilson (2023) and Watkins (2023) call attention to the potential for paleoclimate analogues, understood as concrete models, to inform reasoning about future climate change as well.
Recently there have been calls for discipline-level changes in climate modeling, motivated to a significant extent by the goal of obtaining more reliable projections of future climate change. Some climate scientists advocate consolidating expertise and computing power to allow for kilometer-scale climate simulations that can explicitly represent (rather than parameterize) many important small-scale processes (Shukla et al. 2009; Slingo et al. 2022). But because the computational demands of such models may severely limit the number of simulations that can be run to explore uncertainties, other scientists favor an intermediate approach, where the aim is to achieve 10–50 km resolution in part via gains in efficiency using artificial intelligence / machine learning tools (Schneider et al. 2023). While both of these proposals continue to center on GCMs/ESMs, Baldissera Pacchetti, Jebeile, and Thompson (2024) argue that, given the wide range of climate-related questions that are of interest, a plurality of modeling strategies—including ones that do not center on complex models—is desirable, and they advocate more equitable funding among these various strategies.
4.4 Evaluation
Evaluation of GCMs/ESMs occurs throughout the model development process. Component models and parameterizations are often tested individually “off-line” to see how they perform. Likewise, there is testing and adjustment as the different component models are coupled. Details of this iterative testing and adjustment process, however, are rarely reported in publications (Hourdin et al. 2017). What is reported instead are basic features of the GCM/ESM and of its performance once it is fully constructed, tuned and ready to be released for scientific use. The same is true when updated or alternative versions of existing GCMs/ESMs are developed.
Subsequent evaluation often occurs in part via coordinated climate model intercomparison projects (CMIPs), in which modeling centers produce simulations of past and future climate conditions and submit their results to a shared database for joint analysis (Eyring, Bony, et al. 2016). Among other things, this analysis documents how well results from the participating GCMs/ESMs fit with past observational data across a range of climate variables (see Gleckler, Taylor, & Doutriaux 2008; Eyring, Gillett, et al. 2021). These model-data comparisons in turn underwrite claims of general improvement of climate models—qua representations of the climate system—from one generation of models to the next. They also serve as a resource for assessing the adequacy (or fitness) of the latest models for specific purposes, such as estimating the magnitude of internal variability or identifying the main cause(s) of 20th century global warming (Parker 2009; Baumberger et al. 2017; Doblas-Reyes et al. 2021). For instance, in their most recent scientific assessment report, the Intergovernmental Panel on Climate Change (IPCC) reported very high confidence that the current generation of GCMs/ESMs “reproduces the observed historical global surface temperature trend and variability with biases small enough to support detection and attribution of human-induced warming” (Eyring, Gillett, et al. 2021: 425).
Nevertheless, efforts to draw conclusions about climate model adequacy-for-purpose from instances of model-data fit (or lack of fit) are complicated by a number of factors (Parker 2009; Schmidt & Sherwood 2015; Baumberger et al. 2017). Just a few will be discussed here.
First, observational datasets (including reanalyses) with which modeling results are compared can contain errors or have uncertainties that are underestimated or not reported; in more than one case, it has turned out that model-data conflicts were resolved largely in favor of the models (see, e.g., Lloyd 2012). There also have been cases where poor model-data fit was found to be due in part to errors in assumptions about the external conditions that obtained during the simulated period (e.g., Medhaug et al. 2017).
More fundamental is an issue related to the initial conditions of climate simulations. Oftentimes, a climate simulation for a period of interest is launched from a representative state that is arrived at after letting the climate model run for a while under specified external conditions; this “spin up” of the model is needed to allow its various components to come into balance with one another. Since this representative state inevitably differs somewhat from the actual conditions at the start of the period of interest (e.g., on Jan 1, 1900), and since the climate system is believed to be chaotic, the simulation would not be expected to perfectly track observations throughout the period, even if the model perfectly represented all climate system processes; the goal is for differences between climate simulations and observational data to be no larger than would result from internal variability (see also Schmidt & Sherwood 2015: 156–7).
Another complicating factor is that today’s climate models have to varying degrees been tuned to fit twentieth century observations in the course of model development. A common view is that observational data used in tuning cannot be used subsequently in model evaluation (e.g., Randall et al. 2007: 596; Flato et al. 2013: 750). Steele and Werndl (2013, 2016), however, argue that in some circumstances it is legitimate to use data for both calibration (i.e., tuning) and confirmation, which they illustrate with both Bayesian and frequentist methodologies. Frisch (2015) advocates a moderate predictivism in this context: if a climate model is tuned to achieve a good fit with data for a particular variable, this fit provides less support for the hypothesis that the model can deliver the same performance with respect to other predictions for that variable (e.g., future values), compared to the case when the good fit is achieved without tuning (see also Stainforth et al. 2007; Katzav, Dijkstra, & de Laat 2012; Schmidt & Sherwood 2015). Frisch defends this view on the grounds that successful simulation without tuning gives us more reason to think that a climate model has accurately represented key processes shaping the variable of interest; if we were already confident of the latter, then successful simulation without tuning would not have such an advantage. See Winsberg 2018, Ch.10, for further discussion.
Finally, for some predictive purposes of interest, it can be difficult to know what sort of fit with past data would even count as evidence of adequacy (Parker 2009). One reason is that idealizations in the model, as well as tuned components, might perform differently under past versus future conditions, but it can be difficult to anticipate exactly how. Katzav (2014) suggests that, in practice, attempts to determine what sort of fit with past observations would count as evidence of a climate model’s adequacy-for-purpose will often rely on what is learned from results from climate models themselves, in a way that is question begging.
Given these complications and challenges, a number of authors have emphasized that more than model-data fit should be considered when evaluating climate models. Lloyd (2009, 2010), for instance, identifies empirical accuracy (i.e., model-data fit), independent support for model components, and robustness of findings as important considerations. Baumberger et al. (2017) identify similar considerations and argue that examining these factors jointly can help to mitigate some of the limitations of each. Knutti (2018) places special emphasis on process understanding, which encompasses understanding of both the causal processes that will be involved in producing changes in the climate variable of interest and the extent to which these processes are well represented in the models being evaluated. According to Knutti: “We need to make sure the models do the right thing for the right reason, because we want to use them beyond the range they have been evaluated” (2018: 346). In a similar vein, Kawamleh (2022) offers an account of process-based evaluation that centers on establishing the “dynamical adequacy” of a climate model with respect to a climate feature of interest, such as precipitation change in a particular region.
Before moving on, it is worth noting that another significant motivation for model evaluation studies is model improvement. Model evaluation can be directed toward understanding why a model’s results exhibit particular errors, so that they can be addressed. Lenhard and Winsberg (2010), however, suggest that such “analytical understanding” of climate model performance is largely out of reach. They argue that features of climate models, including their complexity, their fuzzy modularity (see Section 4.2) and their incorporation of “kludges”—unprincipled fixes applied in model development to make the model as a whole work better—will often make it very difficult to apportion blame for poor simulation performance to different parts of climate models; climate modeling, they argue, faces a particularly challenging variety of confirmational holism (see also Petersen 2000). As empirical support, they point to the limited success of model intercomparison projects in identifying the sources of disagreement in climate model simulations. Other authors, however, have called attention to noteworthy successes in this regard (e.g., Frigg, Thompson, & Werndl 2015; Touzé-Peiffer et al. 2020; O’Loughlin 2023). O’Loughlin (2023) identifies several methods that climate scientists use to diagnose model errors and suggests that developing a repertoire of error types could facilitate future diagnoses.
5. Anthropogenic Climate Change
The idea that humans could change Earth’s climate by emitting large quantities of carbon dioxide is not a new one (Fleming 1998; Weart 2008). In the late nineteenth century, Swedish chemist Svante Arrhenius calculated that doubling the levels of carbonic acid (i.e., carbon dioxide) in the atmosphere would warm Earth’s average surface temperature by several degrees Celsius (Arrhenius 1896). By the mid-twentieth century, oceanographer Roger Revelle and colleagues concluded: “By the year 2000, the increase in atmospheric CO2… may be sufficient to produce measurable and perhaps marked change in climate” (quoted in Oreskes 2007: 83–84). In 1988, climate scientist James Hansen testified to the U.S. Congress that global warming was already happening. That same year, the World Meteorological Organization and the United Nations Environment Program established the Intergovernmental Panel on Climate Change (IPCC), “to provide policymakers with regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation” (IPCC 2013). Drawing on the expertise of the international climate science community, the IPCC has delivered assessment reports roughly every five years since 1990.
5.1 Detection and Attribution
The IPCC defines detection of climate change as: “the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense, without providing a reason for that change” (IPCC-Glossary: 2226). They imply that they are interested in climate change due to factors considered external to the climate system, rather than due to internal variability, when they add: “An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small, for example, <10%” (ibid.). Detection thus requires a statistical estimate of how much a quantity or field of interest might fluctuate due to internal variability alone, in the absence of changes in external forcing factors. This is difficult to estimate from the instrumental record, because the record is relatively short and reflects the influence of changing external conditions, such as variations in aerosol and greenhouse gas emissions. Estimates of internal variability have been extracted from paleoclimate data for some quantities, but often estimates are obtained from long GCM/ESM simulations in which external conditions are held constant.
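A schematic version of such a detection test is sketched below in Python: the observed trend is compared with the spread of trends arising in segments of an unforced control run. Both the “observations” and the control run are synthetic stand-ins, and real studies use more careful statistics.

```python
# A schematic detection test: compare the observed trend with the spread
# of trends arising from internal variability alone, here estimated from
# overlapping segments of a (synthetic) unforced control simulation.
import numpy as np

rng = np.random.default_rng(1)
control = rng.standard_normal(2000) * 0.15   # stand-in unforced control run (deg C)
observed = 0.012 * np.arange(70) + rng.standard_normal(70) * 0.15  # 70 "years"

def trend(series):
    # Least-squares linear trend per time step
    return np.polyfit(np.arange(len(series)), series, 1)[0]

obs_trend = trend(observed)
control_trends = [trend(control[i:i + 70]) for i in range(0, len(control) - 70, 10)]
p_value = np.mean([abs(t) >= abs(obs_trend) for t in control_trends])
print(f"fraction of control trends as large as observed: {p_value:.2f}")
```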
In its periodic assessments, the IPCC has reached increasingly strong conclusions about the detection of climate change in observations. The IPCC’s Fifth Assessment Report concluded that it is “virtually certain” (i.e., probability >0.99) that the increase in global mean surface temperature seen since 1950 is not due to internal variability alone (Bindoff et al. 2013: 885). That is, the probability that the warming is due to internal variability alone was assessed by the scientists, on the basis of available evidence and expert judgment, to be less than 1%. Indeed, it was noted that, even if internal variability were three times larger than estimated from simulations, a change would still be detected (ibid.: 881). Changes have been detected in many other aspects of the climate system as well, not just in the atmosphere but also in the oceans and the cryosphere.
The process of attribution seeks to identify the cause(s) of an observed change in climate. It employs both basic physical reasoning—considering whether observed changes are qualitatively consistent (or not) with the expected effects of a potential cause—and quantitative studies. The latter include both empirical-statistical studies (e.g., Folland, Boucher, et al. 2018) and fingerprint studies, in which GCMs/ESMs are used to simulate what would occur over a period if some causal factor (or set of factors) were changing, while other causal factors were held constant; the resulting simulated pattern of change in the target variable or field—often a spatiotemporal pattern—is the “fingerprint” of that factor (or set of factors). Scientists then perform a regression-style analysis, which looks for the linear combination of the fingerprints that best fits the observations and checks whether the residual is statistically consistent with internal variability (see Bindoff et al. 2013: Box 10.1). If so, then an estimate of the contributions of the different causal factors that were considered can be extracted from the analysis. Because of uncertainties associated with observational data, the forcing response patterns, and various methodological choices that must be made, attribution studies produce a range of estimated contributions for each factor.
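The regression step can be illustrated with a schematic Python sketch, in which synthetic “observations” are regressed onto two invented fingerprints and the residual is compared with the assumed internal variability. Everything here is a stand-in for the far richer spatiotemporal fields used in actual studies.

```python
# A schematic fingerprint analysis: regress synthetic "observations" onto
# model-derived fingerprints of two forcing factors and inspect the
# scaling factors and residual. All series are invented stand-ins.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(100)
fp_ghg = 0.01 * years                      # fingerprint of greenhouse forcing
fp_volcanic = -0.3 * (years % 30 == 0)     # fingerprint of episodic eruptions
internal = 0.1 * rng.standard_normal(100)  # internal variability
obs = 0.9 * fp_ghg + 1.1 * fp_volcanic + internal  # synthetic "observations"

# Least-squares estimate of the scaling factor for each fingerprint
F = np.column_stack([fp_ghg, fp_volcanic])
betas, *_ = np.linalg.lstsq(F, obs, rcond=None)
residual = obs - F @ betas
print(f"scaling factors: GHG {betas[0]:.2f}, volcanic {betas[1]:.2f}")
print(f"residual std {residual.std():.3f} vs internal-variability std 0.1")
```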
As with detection, IPCC conclusions regarding attribution have grown stronger over time and have come to encompass more variables and fields. In their Fifth Assessment Report, the IPCC concluded that it is “extremely likely” (i.e., probability >0.95) that more than half of the increase in global mean surface temperature since 1950 was due to anthropogenic greenhouse gas emissions and other anthropogenic forcings (Bindoff et al. 2013: 869). This conclusion was informed by multiple studies, as well as physical reasoning and expert judgment. Dethier (2022) argues that such conclusions about the causes of recent warming have grown stronger over time in part because the statistical models employed in attribution studies have been refined, in a process that should be understood as a kind of calibration.
These headline IPCC conclusions about detection and attribution align with, if not constitute, the “consensus” position regarding the reality and causes of recent climate change. The extent to which there is a scientific consensus on these matters, however, has itself been a topic of debate. It is often reported, for example, that 97% (or more) of climate scientists agree that anthropogenic emissions of greenhouse gases are causing global climate change (Cook et al. 2016). Analyses of the scientific literature, as well as surveys of scientists, have been conducted to demonstrate (or challenge) the existence of this consensus. This focus on consensus can seem misguided: what matters is that there is good evidence for a scientific claim, not merely that some percentage of scientists endorses it (see also Intemann 2017). Yet consensus among experts can serve as indirect evidence for a scientific claim—an indication that the scientific evidence favors or even strongly supports the claim—if the consensus is produced in the right manner (Odenbaugh 2012). Ranalli (2012) suggests that parties on opposite sides of debates about the reality and causes of climate change generally hold similar views about what makes for a reliable scientific consensus, but disagree over the extent to which the climate consensus meets the standard.
In support of the consensus position, Oreskes (2007) argues that research underpinning it accords with various models of scientific reliability: it has a strong inductive base, it has made nontrivial correct predictions, it has survived persistent efforts at falsification, it has uncovered a body of evidence that is consilient, and—in light of all of this—climate scientists have concluded that the best explanation of recent observed warming includes a significant anthropogenic component. She also emphasizes that the idea that increased atmospheric concentrations of carbon dioxide would warm the climate emerged first from basic physical understanding, not from computer models of the climate system. In a complementary analysis, Lloyd (2010, 2015) offers a defense of the use of computational climate models in attribution research and argues that studies involving multiple climate models, which incorporate similar assumptions about the radiative properties of greenhouse gases but differ somewhat in their representations of other climate system processes, provide enhanced support for the conclusion that greenhouse gases are (part of) a good explanation of observed recent warming; she argues that such studies illustrate a distinctive kind of model robustness, which incorporates variety-of-evidence considerations, and that such robustness is a “confirmatory virtue” (see also Vezér 2016).
Katzav (2013b), however, employs another philosophical perspective on evidence—Mayo’s (1996) severe testing framework—and argues that the claim that more than half of the increase in global mean surface temperature since 1950 was due to anthropogenic forcing has not passed a severe test in Mayo’s sense, even when variety-of-evidence considerations are factored in; from a severe testing perspective, he contends, we lack good evidence for this conclusion, in part because uncertainty about the magnitude of internal variability has not been sufficiently probed. Dethier (forthcoming) draws on recent scientific work to argue that estimates of the human contribution to warming are relatively insensitive even to significant errors in estimates of internal variability.
Increasingly, climate scientists are also attempting to quantify the extent to which particular extreme weather events—such as individual floods or heatwaves—were made more probable or more intense as a consequence of rising greenhouse gas emissions (see Swain et al. 2020; Otto 2023). One approach, known as probabilistic event attribution, compares the frequency with which the type of event in question occurs in simulations of climate under preindustrial versus present greenhouse gas conditions. A complementary “storyline” approach uses simulations to see how much less intense the event would have been in the absence of thermodynamic changes in the climate system that are already attributable to human influence, such as warmer ocean temperatures. A debate has unfolded over whether these approaches are prone to understate or overstate human influence on extreme weather events and which sort of error is more important to avoid (see, e.g., Winsberg et al. 2020; García-Portela & Maraun 2023; Otto 2023).
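The probabilistic approach can be illustrated with a minimal Python sketch that estimates a probability ratio: how much more frequently an extreme threshold is exceeded in simulations of the present-day climate than in simulations of a preindustrial climate. Both ensembles below are synthetic stand-ins drawn from simple distributions.

```python
# A minimal sketch of probabilistic event attribution: estimate how much
# more frequent an extreme threshold exceedance is in "present-day" than
# in "preindustrial" simulations. Both ensembles here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
preindustrial = rng.normal(30.0, 2.0, 10_000)   # peak summer temps (deg C)
present_day = rng.normal(31.2, 2.0, 10_000)     # same, with 1.2 C of warming

THRESHOLD = 35.0  # the observed extreme event's magnitude
p0 = np.mean(preindustrial >= THRESHOLD)
p1 = np.mean(present_day >= THRESHOLD)
print(f"probability ratio: {p1 / p0:.1f}")  # >1: the event was made more likely
```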
5.2 Projecting Future Climate Change
Climate scientists also seek to understand how climate would change in the future if greenhouse gas emissions were to evolve in particular ways. For a given emission scenario, scientists typically consider an ensemble of projections using different climate models or model versions. The ensemble approach is motivated in part by uncertainty about how to construct a climate model that can deliver highly accurate projections. This uncertainty stems both from limited theoretical understanding of some climate system processes and from limited computing power, which constrains how existing knowledge can be implemented in models.
How to interpret ensemble projections has itself been a subject of debate, and a range of methodologies and views have emerged. At one end of the spectrum, there are statistical methodologies used to infer probabilities of future changes in climate from ensemble results (see, e.g., Tebaldi et al. 2005; Lowe et al. 2018). Both scientists and philosophers have offered criticisms of such methods. Some cite evidence that the statistical assumptions underlying the studies are not met (e.g., Bishop & Abramowitz 2013). Others argue that uncertainty about future climate change is deeper than precise probabilities imply (e.g., Stainforth et al. 2007; Hillerbrand 2014; Katzav, Thompson, et al. 2021). Frigg, Bradley, et al. (2014) illustrate that even slight error in the structure of nonlinear modeling equations can result in misleading probability forecasts—a consequence they dub the “hawkmoth effect” (taking inspiration from the “butterfly effect”, which relates to initial condition error). While their argument threatens the credibility of some probabilistic climate change projections, Goodwin and Winsberg (2016) contend that more work is needed to establish that the hawkmoth effect is a serious problem in this context. For the latest iterations in this debate, see Nabergall et al. 2019 and Frigg and Smith 2022; for additional perspectives, see Dethier 2021 and Lam 2021.
An alternative interpretation of ensemble projections takes them to indicate changes that are merely possible in light of current understanding (Stainforth et al. 2007; Katzav, Thompson, et al. 2021). For instance, Katzav (2014, 2023) argues that climate change projections can be interpreted as indicating real possibilities, where a state of affairs in a target domain is a real possibility if and only if its obtaining is compatible with the basic way things are in the domain and it is not known that it does not obtain. Betz (2015), however, contends that even a possibilistic interpretation may be hard to justify, since it is difficult to show that contrary-to-fact assumptions of climate models do not lead to projections that are inconsistent with background knowledge; he, unlike Katzav, employs a notion of “serious possibility” that requires consistency with background knowledge. Others have expressed concern that possibilist interpretations of ensemble projections understate what is known about future climate (Risbey 2007; Dethier 2023a) and, relatedly, are unhelpful for decision making (Frame et al. 2007).
There are also intermediate views. In its recent assessment reports, the IPCC has relied on ensemble modeling results, background knowledge and expert judgment to assign imprecise probabilities to ranges of change in global temperature under different scenarios. For instance, in their Fifth Assessment Report, the IPCC concluded that the 5% to 95% range of global temperature change projections from the latest CMIP models (see Section 4.4) was likely (i.e., probability >0.66) to include the change that would actually occur under the associated scenario (Collins et al. 2013: 1031). In this way, the IPCC asserted more than that the changes projected by the CMIP ensemble were possibilities, but did not go so far as to assign a single, full probability density function over temperature change values. For another imprecise probability approach, which takes into account the stakes of the decisions to be made using ensemble results, see Roussos et al. 2021.
As the IPCC approach illustrates, in ensemble climate projection studies involving multiple climate models, it has been common to give equal weight to participating models. This is sometimes referred to as “model democracy” or “one-model one-vote” (Knutti 2010). Increasingly, however, differential weighting of projections is being advocated (Knutti, Sedláček, et al. 2017; Hausfather et al. 2022). In the IPCC’s Sixth Assessment Report (2021), projections for a few quantities, including global temperature change, were informed by studies that employed model weighting. This step away from model democracy was prompted in part by the finding that a substantial fraction of the latest generation of models projected implausibly large increases in global temperature and, upon further investigation, were found to perform relatively poorly on tests of their fitness for this purpose (see, e.g., Tokarska et al. 2020). A key challenge for weighting, however, is to determine which performance metrics and other criteria should be used to assign weights for a given projected variable; often, there are multiple reasonable options, which may give somewhat different results.
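One weighting scheme in the spirit of Knutti, Sedláček, et al. (2017), in which a model's weight decays exponentially with its historical error, can be sketched in Python as follows; the errors, projections and shape parameter are all illustrative.

```python
# A schematic sketch of performance-based model weighting: models with
# larger historical errors receive exponentially smaller weights. The
# errors, projections and shape parameter sigma_d are illustrative.
import numpy as np

historical_error = np.array([0.1, 0.3, 0.9])  # each model's error vs observations
projection = np.array([2.8, 3.1, 4.6])        # each model's projected warming (C)

sigma_d = 0.4  # controls how sharply poor performance discounts a model
weights = np.exp(-(historical_error / sigma_d) ** 2)
weights /= weights.sum()

print(f"weights: {np.round(weights, 2)}")
print(f"weighted projection: {np.dot(weights, projection):.2f} C "
      f"(unweighted mean: {projection.mean():.2f} C)")
```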
Sometimes, nearly all climate models in an ensemble give projections that agree on the answer to an interesting question about future climate change, such as whether warming would exceed a given threshold under a particular emission scenario. What is the epistemic significance of this agreement? Parker (2011) argues that such agreed-upon answers merit high confidence only under an assumption about climate model reliability that is often difficult to justify. Dethier (2024) contends that, in the absence of specific evidence to the contrary, it is reasonable to assume that this reliability requirement is met, at least for binary (yes/no) questions. Winsberg (2018: Ch.12) takes a middle position; using Schupbach’s (2018) explanatory account of robustness analysis, he argues that the confidence that should be had in a robust finding will vary from case to case, and he emphasizes that results from sources other than models also play important roles in building such confidence. Parker (2024), however, suggests that the logic of robustness analysis here differs from Schupbach’s explanatory account and more closely resembles a kind of jury theorem reasoning. (See also O’Loughlin 2021 and Gluck 2023, discussing relationships among different philosophical analyses of robustness in the climate context, and Harris & Frigg 2023a, 2023b, for a survey and analysis of views.)
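Parker's jury-theorem suggestion can be made vivid with the standard Condorcet calculation. The sketch below is the generic theorem, not Parker's own analysis, and its assumptions of independence and better-than-chance reliability are precisely those that are difficult to justify for ensembles of interrelated models.

```python
from scipy.stats import binom

def p_majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent 'voters',
    each correct with probability p on a yes/no question, is right."""
    k = n // 2 + 1                 # smallest strict majority
    return binom.sf(k - 1, n, p)   # P(at least k correct)

# With modestly reliable, independent models, widespread agreement is
# strong evidence...
print(f"n=20, p=0.6: {p_majority_correct(20, 0.6):.3f}")
# ...but if reliability falls below one half, the same agreement tells
# against the agreed-upon answer, echoing Parker's (2011) worry.
print(f"n=20, p=0.4: {p_majority_correct(20, 0.4):.3f}")
```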
The spatial resolution of current GCMs/ESMs remains too coarse to capture the local variation in climate conditions that occurs due to small-scale variations in topography and land cover. Consequently, local and regional climate change projections are often produced via a process known as downscaling. Statistical downscaling involves identifying statistical relationships between a GCM/ESM’s grid point values and observed local weather conditions in past periods, and then applying those relationships to the GCM/ESM’s grid point values for future times. A concern, however, is that model-observation relationships that hold for past periods may fail to hold for future ones. A second type of approach, known as dynamical downscaling, involves simulating future climate conditions using a regional climate model run with boundary conditions from a GCM/ESM simulation. A problem here is that conditions simulated by the regional climate model sometimes turn out to be inconsistent with those simulated by the global model. Nevertheless, a number of studies have found that downscaling can add valuable information in regions with variable topography and about small-scale phenomena (see, e.g., Flato et al. 2013). According to Thompson, Frigg, and Helgeson (2016), however, given the various uncertainties associated with downscaled results, the provision of local climate change projections should be reconceptualized as a task that requires eliciting expert judgment in light of a range of information sources, including but not limited to models (see also Barsugli et al. 2013).
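In its simplest form, statistical downscaling is a regression exercise: fit a transfer function on past data and apply it to future model output. The sketch below uses a single linear predictor and invented numbers; operational methods employ many predictors and more flexible statistical models, but the worry about non-stationary relationships applies across the board.

```python
import numpy as np

# Invented training data for one locale: coarse GCM grid-point
# temperatures and co-located station observations for a past period.
gcm_past = np.array([14.2, 14.8, 15.1, 15.6, 16.0, 16.3])  # deg C
obs_past = np.array([13.1, 13.9, 14.0, 14.8, 15.3, 15.4])  # deg C

# Fit a linear transfer function: obs ~ a * gcm + b.
a, b = np.polyfit(gcm_past, obs_past, deg=1)

# Apply the same relationship to the GCM's grid-point values for
# future times -- the step that assumes a relationship estimated from
# the past will continue to hold in a changed climate.
gcm_future = np.array([17.1, 17.8, 18.4])
print(np.round(a * gcm_future + b, 1))
```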
Storyline approaches (see Section 5.2) can also be employed here, in a forward-looking manner (Shepherd et al. 2018; Doblas-Reyes et al. 2021). Rather than providing a statistical summary of the future changes projected by an ensemble of global or regional models, these approaches characterize weather or climate conditions in a locale for one or a few projected future climate states, some of which may be estimated to have a relatively low probability of occurring but are expected to be very impactful if they do occur. The projected conditions may be communicated using not only maps and quantitative data but also narrative descriptions of what conditions would be like, and they may go beyond climate information per se to characterize impacts of interest. In a related approach, Hazeleger et al. (2015) envision constructing “tales of future weather” with the help of high-resolution weather forecasting models; the model is used to simulate a familiar impactful weather event (e.g., a heatwave or a hurricane experienced in a locale) but with boundary conditions, such as sea surface temperatures, from a possible future climate state.
5.3 Climate Change Controversies
Climate “contrarians” challenge key conclusions of “mainstream” climate science, such as those of the IPCC, in a host of public venues: blogs, newspaper op-ed pieces, television and radio interviews, Congressional hearings and, occasionally, scientific journals. Those considered contrarians come in many varieties, from climate scientists who consider headline attribution claims to be insufficiently justified—they are usually described as climate “skeptics”—to individuals and groups from outside of climate science whose primary motivation is to block climate policy, in some cases by “manufacturing doubt” about the reality or seriousness of anthropogenic climate change; they are commonly labelled climate “deniers” (see, e.g., Oreskes & Conway 2010; Ranalli 2012; Odenbaugh 2022). (The use of these labels varies, however.) Contrarians have played a role in creating or sustaining a number of public controversies related to climate science.
The Tropospheric Temperature Controversy. Climate simulations indicate that rising greenhouse gas concentrations will induce warming not only at earth’s surface but also in the layer of atmosphere extending 8–12 km above the surface, known as the troposphere. Satellites and radiosondes are the primary means of monitoring temperatures in this layer. Analyses of these data in the early 1990s, when only about a decade of satellite data was available, indicated a lack of warming in the troposphere. This model-data mismatch became a key piece of evidence in contrarian dismissals of the threat of anthropogenic climate change. Over time, additional research uncovered numerous problems with the satellite and radiosonde data used to estimate tropospheric temperature trends (NRC 2000; Karl et al. 2006). More recent observational estimates agree that the troposphere has been warming and, although observed trends still tend to be somewhat smaller than those in simulations, the mainstream view is that there is “no fundamental discrepancy” between observations and models, given the significant uncertainties involved (Thorne et al. 2011). Nevertheless, the debate continues. Lloyd (2012) suggests that this controversy in part involves a conflict between a “direct empiricist” view that treats observational data as a naked reflection of reality, taking priority over models, and a more nuanced “complex empiricist” view (see also Edwards 2010: 417).
The Hockey Stick Controversy. The hockey stick controversy focused on some of the first millennial-scale paleoclimate reconstructions of the evolution of Northern Hemisphere mean near-surface temperature (e.g., Mann, Bradley, & Hughes 1999). These reconstructions, when joined with the instrumental temperature record, indicate a long, slow decline in temperature followed by a sharp rise beginning around 1900; their shape is reminiscent of a hockey stick. Such reconstructions featured prominently in the IPCC’s Third Assessment Report and constituted an important piece of evidence for that report’s conclusion that “the 1990s are likely to have been the warmest decade of the millennium” (Folland, Karl, et al. 2001: 102). Contrarian criticism in the published literature followed two main lines: one argued that there were problems with the data and statistical methods used in producing the reconstructions, while a second appealed to additional proxy data and alternative methods for their interpretation in order to challenge the uniqueness of late-twentieth century warming (see, e.g., Soon et al. 2003; McIntyre & McKitrick 2005). Climate scientists offered direct replies to these challenges (e.g., Wahl & Ammann 2007), and subsequent research produced new and longer reconstructions using a variety of proxies, which also supported the conclusion that the late twentieth century was anomalously warm (Masson-Delmotte et al. 2013). Contrarians, however, continue to criticize temperature reconstructions and the conclusions drawn from them, on various grounds.
The Climategate Controversy. In 2009, a large number of emails were taken from the University of East Anglia’s Climatic Research Unit and made public without authorization. Authors of the emails included climate scientists at a variety of institutions around the world. Focusing primarily on a few passages, contrarians claimed that the emails revealed that climate scientists had manipulated data to support the consensus position on anthropogenic climate change and had suppressed legitimate dissenting research in various ways, such as by preventing its publication or by refusing to share data. A number of independent investigations were subsequently conducted (e.g., Russell et al. 2010), all exonerating climate scientists of the charges of scientific fraud and misconduct that contrarians had alleged. Some of the investigations, however, did find that climate scientists had failed to be sufficiently transparent, especially in their response to contrarian requests for station data used to estimate changes in global temperature (2010: 11). Despite the exonerations, surveys found that in some countries this “Climategate” episode (as it became known in popular media) significantly reduced public trust in climate science (Leiserowitz et al. 2013).
The Hiatus Controversy. Global temperature increased significantly during the 1990s but then showed little increase between the late 1990s and the early 2010s. By the mid-2000s, contrarians began to claim that global warming had stopped and that climate models were fundamentally flawed, since many models had projected more warming. Part of the problem here was communication: graphs shared with policymakers and the public often highlighted the average of climate model projections, which smoothed out the significant variability seen in individual simulations and suggested a relatively steady warming; in fact, the observed rate of warming was not so different from that seen in some of the model projections (see Risbey et al. 2014). Nevertheless, by the early 2010s, pressure was mounting for climate scientists to give a detailed explanation of the apparent slowdown in surface warming, by then referred to as the “hiatus” or “pause”, and to explain why many climate models projected more warming. A host of potential explanatory factors were identified—related to external forcing, internal variability, ocean heat uptake and errors in observational data—which contrarians portrayed as excuses. Subsequent investigation found evidence for contributions from most of the hypothesized factors, though with varied estimates of their relative importance (see Medhaug et al. 2017 and Eyring, Gillett, et al. 2021: Cross-Chapter Box 3.1). In the meantime, since 2014, global temperature has once again shown a sharp increase.
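The communication point can be illustrated with a toy calculation (synthetic arithmetic, not a climate model): build many series from the same assumed steady trend plus autocorrelated noise standing in for internal variability, then compare trends over a hiatus-length window.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1990, 2015)
forced = 0.02 * (years - years[0])  # assumed forced warming, 0.02 deg C/yr

# Thirty synthetic 'simulations': the same forced trend plus AR(1)
# noise standing in for internal variability (ENSO and the like).
noise = np.zeros((30, years.size))
for t in range(1, years.size):
    noise[:, t] = 0.7 * noise[:, t - 1] + rng.normal(0.0, 0.1, 30)
runs = forced + noise

# Trends over 1998-2012: individual runs scatter widely, some showing
# little or no warming, while averaging smooths the variability away.
sel = (years >= 1998) & (years <= 2012)
slopes = [np.polyfit(years[sel], run[sel], 1)[0] for run in runs]
mean_slope = np.polyfit(years[sel], runs.mean(axis=0)[sel], 1)[0]
print(f"individual runs: {min(slopes):+.3f} to {max(slopes):+.3f} deg C/yr")
print(f"ensemble mean:   {mean_slope:+.3f} deg C/yr")
```

An observed trend near the bottom of the individual-run range is thus compatible with steady underlying warming, even though it sits well below the smooth ensemble mean.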
As these examples illustrate, contrarian dissent has impacted the practice of climate science in various ways. Most obviously, climate research is sometimes directed at rebutting contrarian claims and arguments (see also Lewandowsky et al. 2015 on “seepage”). In addition, Brysse et al. (2013) contend that pressure from contrarians and the risk of being accused of alarmism may be part of the explanation for climate scientists’ tendency to err on the side of caution in their predictions related to climate change. Drawing these threads together, Biddle and Leuschner (2015) suggest that contrarian dissent in the climate context has impeded scientific progress in at least two ways: “by (1) forcing scientists to respond to a seemingly endless wave of unnecessary and unhelpful objections and demands and (2) creating an atmosphere in which scientists fear to address certain topics and/or to defend hypotheses as forcefully as they believe is appropriate” (Biddle & Leuschner 2015: 269). They argue that, while dissent in science is often epistemically fruitful, the dissent expressed by climate contrarians has tended to be epistemically detrimental.
5.4 Informing Policy and Decisions
Climate science is called upon in various ways to provide information that will be useful for decision makers who seek to mitigate, prepare for, or adapt to changes in climate. Most notably, the periodic assessments of the Intergovernmental Panel on Climate Change (IPCC) were initiated more than three decades ago for the purpose of informing international policy deliberations (IPCC 2013). These assessments place unusual epistemic demands on climate science. Chapter teams, composed of volunteer scientists drawn from the international climate science community, are tasked with assessing and synthesizing, in a limited time frame, a vast scientific literature and then communicating conclusions, as well as uncertainties, in a way that will be useful for policymakers.
To facilitate this process, the IPCC has developed a framework for assessing evidence and communicating uncertainty (Mastrandrea et al. 2010). Evidence is assessed in terms of both its character—its type, amount, quality, and consistency—and the degree to which it agrees in supporting particular conclusions. When the evidence relevant to a question is very limited, authors can simply summarize that evidence. When evidence is more substantial, chapter teams may offer a qualitative level of confidence in a particular conclusion (e.g., low, medium, high, or very high confidence) and/or report imprecise probabilities using calibrated language. For example, an outcome or conclusion is reported as likely if it is judged to have a probability of at least 66%; an outcome or conclusion is reported as very likely if it is judged to have a probability of at least 90%; and so on.
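The likelihood scale itself is fully explicit and can be encoded as a simple lookup. The probability ranges below follow the AR5 guidance note (Mastrandrea et al. 2010); the code is merely illustrative, not IPCC software.

```python
# The AR5 calibrated likelihood scale (Mastrandrea et al. 2010):
# each term names a probability range, and the ranges are nested.
LIKELIHOOD_SCALE = [
    ("virtually certain",      0.99, 1.00),
    ("very likely",            0.90, 1.00),
    ("likely",                 0.66, 1.00),
    ("about as likely as not", 0.33, 0.66),
    ("unlikely",               0.00, 0.33),
    ("very unlikely",          0.00, 0.10),
    ("exceptionally unlikely", 0.00, 0.01),
]

def strongest_term(p: float) -> str:
    """Return the narrowest (hence strongest) calibrated term
    whose probability range contains p."""
    applicable = [(hi - lo, term) for term, lo, hi in LIKELIHOOD_SCALE
                  if lo <= p <= hi]
    return min(applicable)[1]

print(strongest_term(0.95))  # very likely
print(strongest_term(0.70))  # likely
print(strongest_term(0.50))  # about as likely as not
```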
The IPCC employs high evidential standards in at least two respects: it limits the scope of evidence to be considered to (primarily) the peer-reviewed literature, and it tends to report conclusions that are judged to be likely or very likely. Elabbar (forthcoming) notes that such high standards can result in fewer findings being reported for regions for which fewer studies have been conducted or less data is available; he makes a justice-based argument for varying evidential standards in this context. In a similar vein, Lloyd et al. (2021) recommend that IPCC chapter teams routinely provide a range of conclusions in light of the available evidence—including conclusions that are more likely than not (probability >50%), not just ones that are likely or very likely—in order to meet the varied information needs of decision makers in different contexts. A distinct but related discussion has focused on the extent to which the IPCC framework, with its flexible means of characterizing uncertainty, allows chapter teams to avoid taking on inductive risk, and thus to avoid appealing to social and ethical values to manage that risk (see, e.g., Steele 2012; Betz 2013; John 2015).
The IPCC’s uncertainty framework has garnered both praise and criticism. Critics have argued that its application leads to incoherent shifts between frequentist and subjective probabilities, that it is unclear how the confidence and likelihood descriptors are related, and more. Some commentators have also made suggestions for improvement or proposals for coherent interpretation. The literature here is quite large; some recent contributions from philosophers include: Adler and Hirsch Hadorn 2014; Wüthrich 2017; Helgeson et al. 2018; Harris 2021; and Dethier 2023b.
Beyond the IPCC context, there is growing demand for climate-related information at more local scales, as governments, industries, and communities face practical decisions in a changing climate—decisions concerning infrastructure, agriculture, economic development, water management, etc. Climate researchers in academia and government, as well as private firms, are increasingly attempting to respond to this need. This has prompted reflection on the ethics of climate information provision (Adams et al. 2015) as well as on the appropriate roles of user/stakeholder values in the information provision process (Parker & Lusk 2019; Lusk 2020). It has also spurred discussion of the quality and usability of climate information products (e.g., Baldissera Pacchetti, Dessai, et al. 2021; Wilby & Lu 2022). Jebeile and Roussos (2023) argue that the provision of usable climate information is impeded because climate science retains a “physics-first” orientation and an outmoded notion of objectivity as value-freedom. They make a number of recommendations for changing the practice of climate science: (1) integrated cross-disciplinarity, (2) wider involvement of stakeholders throughout the lifecycle of a climate study, (3) a new framing of the role of values in climate science, (4) new approaches to uncertainty management, and (5) new approaches to uncertainty communication.
Bibliography
- Adams, Peter, Erika Eitland, Bruce Hewitson, Catherine Vaughan, Robert Wilby, and Stephen Zebiak, 2015, “Toward an Ethical Framework for Climate Services: A White Paper of the Climate Services Partnership Working Group on Climate Services Ethics”, Climate Services Partnership Paper, 12 pp. [Adams et al. 2015 available online]
- Adler, Carolina E. and Gertrude Hirsch Hadorn, 2014, “The IPCC and treatment of uncertainties: topics and sources of dissensus”, WIREs Climate Change, 5(5): 663–676. doi:10.1002/wcc.297
- Alexander, Kaitlin and Steve M. Easterbrook, 2015, “The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations”, Geoscientific Model Development, 8(4): 1221–1232. doi:10.5194/gmd-8-1221-2015
- American Meteorological Society, 2012, “Climate Change”, Glossary of Meteorology. [AMS 2012 available online at the Internet Archive.]
- Arrhenius, Svante, 1896, “On the Influence of Carbonic Acid in the Air Upon the Temperature of the Ground”, Philosophical Magazine, Series 5, 41(251): 237–276. doi:10.1080/14786449608620846 [Arrhenius 1896 available online]
- Baldissera Pacchetti, Marina, Suraje Dessai, Seamus Bradley, and David A. Stainforth, 2021, “Assessing the Quality of Regional Climate Information”, Bulletin of the American Meteorological Society, 102(3): E476–E491. doi:10.1175/BAMS-D-20-0008.1
- Baldissera Pacchetti, Marina, Julie Jebeile, and Erica Thompson, 2024, “For a Pluralism of Climate Modeling Strategies”, Bulletin of the American Meteorological Society, 105(7): E1350–E1364. doi:10.1175/BAMS-D-23-0169.1
- Barsugli, Joseph J., Galina Guentchev, Radley M. Horton, Andrew Wood, Linda O. Mearns, Xin‐Zhong Liang, Julie A. Winkler, Keith Dixon, Katharine Hayhoe, Richard B. Rood, Lisa Goddard, Andrea Ray, Lawrence Buja, and Caspar Ammann, 2013, “The Practitioner’s Dilemma: How to Assess the Credibility of Downscaled Climate Projections”, Eos, Transactions American Geophysical Union, 94(46): 424–425. doi:10.1002/2013EO460005
- Baumberger, Christoph, Reto Knutti, and Gertrude Hirsch Hadorn, 2017, “Building confidence in climate model projections: an analysis of inferences from fit”, WIREs Climate Change, 8(3): e454. doi:10.1002/wcc.454
- Betz, Gregor, 2013, “In Defence of the Value Free Ideal”, European Journal for Philosophy of Science, 3(2): 207–220. doi:10.1007/s13194-012-0062-x
- –––, 2015, “Are Climate Models Credible Worlds? Prospects and Limitations of Possibilistic Climate Prediction”, European Journal for Philosophy of Science, 5(2): 191–215. doi:10.1007/s13194-015-0108-y
- Biddle, Justin B. and Anna Leuschner, 2015, “Climate Skepticism and the Manufacture of Doubt: Can Dissent in Science be Epistemically Detrimental?” European Journal for Philosophy of Science, 5(3): 261–278. doi:10.1007/s13194-014-0101-x
- Biddle, Justin B. and Eric Winsberg, 2009, “Value Judgements and the Estimation of Uncertainty in Climate Modeling”, in New Waves in Philosophy of Science, P. D. Magnus and Jacob Busch (eds), New York: Palgrave Macmillan, 172–197. doi:10.1007/978-0-230-29719-7_10
- Bindoff, Nathaniel L., Peter A. Stott, et al., 2013, “Detection and Attribution of Climate Change: from Global to Regional”, in Stocker et al. 2013: 867–952 (ch. 10).
- Bishop, Craig H. and Gab Abramowitz, 2013, “Climate Model Dependence and the Replicate Earth Paradigm”, Climate Dynamics, 41(3–4): 885–900. doi:10.1007/s00382-012-1610-y
- Bonnet, Pauline, Lorenzo Pastori, Mierk Schwabe, Marco A. Giorgetta, Fernando Iglesias-Suarez, and Veronika Eyring, preprint, “Tuning a Climate Model with Machine-learning based Emulators and History Matching”, EGUsphere repository, deposited 9 August 2024. doi:10.5194/egusphere-2024-2508
- Brysse, Keynyn, Naomi Oreskes, Jessica O’Reilly, and Michael Oppenheimer, 2013, “Climate Change Prediction: Erring on the Side of Least Drama?”, Global Environmental Change, 23(1): 327–337. doi:10.1016/j.gloenvcha.2012.10.008
- de Burgh-Day, Catherine O. and Tennessee Leeuwenburg, 2023, “Machine Learning for Numerical Weather and Climate Modelling: A Review”, Geoscientific Model Development, 16(22): 6433–6477. doi:10.5194/gmd-16-6433-2023
- Collins, Matthew, Reto Knutti, et al., 2013, “Long-term Climate Change: Projections, Commitments and Irreversibility”, in Stocker et al. 2013: 1029–1136 (ch. 12).
- Cook, John, Naomi Oreskes, Peter T. Doran, et al., 2016, “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming”, Environmental Research Letters, 11(4): 048002. doi:10.1088/1748-9326/11/4/048002
- Dethier, Corey, 2021, “Climate Models and the Irrelevance of Chaos”, Philosophy of Science, 88(5): 997–1007. doi:10.1086/714705
- –––, 2022, “Calibrating Statistical Tools: Improving the Measure of Humanity’s Influence on the Climate”, Studies in History and Philosophy of Science, 94: 158–166. doi:10.1016/j.shpsa.2022.06.010
- –––, 2023a, “Against ‘Possibilist’ Interpretations of Climate Models”, Philosophy of Science, 90(5): 1417–1426. doi:10.1017/psa.2023.6
- –––, 2023b, “Interpreting the Probabilistic Language in IPCC Reports”, Ergo an Open Access Journal of Philosophy, 10: article 7. doi:10.3998/ergo.4637
- –––, 2024, “Contrast Classes and Agreement in Climate Modeling”, European Journal for the Philosophy of Science, 14: article 14. doi:10.1007/s13194-024-00577-6
- –––, forthcoming, “Stability in Climate Change Attribution”, Philosophy of Science. [preprint of Dethier forthcoming available online]
- Doblas-Reyes, Francisco J., Anna A. Sörensson, et al., 2021, “Linking Global to Regional Climate Change”, in IPCC 2021a: 1363–1512 (ch. 10). doi:10.1017/9781009157896.012
- Edwards, Paul N., 2000, “A Brief History of Atmospheric General Circulation Modeling”, in David A. Randall (ed.), General Circulation Model Development: Past, Present, and Future, San Diego, CA: Academic Press, pp. 67–90.
- –––, 2010, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming, Cambridge, MA: MIT Press.
- –––, 2011, “History of Climate Modeling”, WIREs Climate Change, 2: 128–139. doi:10.1002/wcc.95
- Elabbar, Ahmad, forthcoming, “Varying Evidential Standards as a Matter of Justice”, British Journal for the Philosophy of Science, accepted 9 August 2023. doi:10.1086/727429
- Eyring, Veronika, Sandrine Bony, Gerald A. Meehl, Catherine A. Senior, Bjorn Stevens, Ronald J. Stouffer, and Karl E. Taylor, 2016, “Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) Experimental Design and Organization”, Geoscientific Model Development, 9(5): 1937–1958. doi:10.5194/gmd-9-1937-2016
- Eyring, Veronika, Nathan P. Gillett, Krishna M. Achuta Rao, et al., 2021, “Human Influence on the Climate System”, in IPCC 2021a: 423–552 (ch. 3). doi:10.1017/9781009157896.005
- Flato, Gregory, Jochem Marotzke, et al., 2013, “Evaluation of Climate Models”, in Stocker et al. 2013: 741–866 (ch. 9).
- Fleming, James Rodger, 1998, Historical Perspectives on Climate Change, New York: Oxford University Press. doi:10.1093/oso/9780195078701.001.0001
- Folland, Christopher K., Olivier Boucher, Andrew Colman, and David E. Parker, 2018, “Causes of Irregularities in Trends of Global Mean Surface Temperature since the Late 19th Century”, Science Advances, 4(6): eaao5297. doi:10.1126/sciadv.aao5297
- Folland, Christopher K., Thomas R. Karl, et al., 2001, “Observed Climate Variability and Change”, in Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, John T. Houghton et al. (eds.), Cambridge: Cambridge University Press, pp. 99–181 (ch. 2). [Folland et al. 2001 available online]
- Frame, David J., Nick E. Faull, Manoj M. Joshi and Myles R. Allen, 2007, “Probabilistic Climate Forecasts and Inductive Problems”, Philosophical Transactions of the Royal Society A, 365(1857): 1971–1992. doi:10.1098/rsta.2007.2069
- Frigg, Roman, Seamus Bradley, Hailiang Du, and Leonard A. Smith, 2014, “Laplace’s Demon and the Adventures of His Apprentices”, Philosophy of Science, 81(1): 31–59. doi:10.1086/674416
- Frigg, Roman and Leonard A. Smith, 2022, “An Ineffective Antidote for Hawkmoths”, European Journal for Philosophy of Science, 12(2): article 33. doi:10.1007/s13194-022-00459-9
- Frigg, Roman, Erica Thompson, and Charlotte Werndl, 2015, “Philosophy of Climate Science Part II: Modelling Climate Change”, Philosophy Compass, 10(12): 965–977. doi:10.1111/phc3.12297
- Frisch, Mathias, 2015, “Predictivism and Old Evidence: A Critical Look at Climate Model Tuning”, European Journal for Philosophy of Science, 5(2): 171–190. doi:10.1007/s13194-015-0110-4
- García-Portela, Laura and Douglas Maraun, 2023, “Overstating the Effects of Anthropogenic Climate Change? A Critical Assessment of Attribution Methods in Climate Science”, European Journal for Philosophy of Science, 13(1): article 17. doi:10.1007/s13194-023-00516-x
- Gleckler, Peter J., Karl E. Taylor, and Charles Doutriaux, 2008, “Performance Metrics for Climate Models”, Journal of Geophysical Research—Atmospheres, 113(D6): D06104. doi:10.1029/2007JD008972
- Gluck, Stuart, 2023, “Robustness of Climate Models”, Philosophy of Science, 90(5): 1407–1416. doi:10.1017/psa.2023.62
- Goodwin, William M., 2015, “Global Climate Modeling as Applied Science”, Journal for General Philosophy of Science, 46(2): 339–350. doi:10.1007/s10838-015-9301-0
- Goodwin, William M. and Eric Winsberg, 2016, “Missing the Forest for the Fish: How Much Does the ‘Hawkmoth Effect’ Threaten the Viability of Climate Projections?”, Philosophy of Science, 83(5): 1122–1132. doi:10.1086/687943
- Harris, Margherita, 2021, Conceptualizing Uncertainty: The IPCC, Model Robustness and the Weight of Evidence, PhD Thesis, The London School of Economics. [Harris 2021 available online]
- Harris, Margherita and Roman Frigg, 2023a, “Climate Models and Robustness Analysis – Part I: Core Concepts and Premises”, in Handbook of the Philosophy of Climate Change (Handbooks in Philosophy), Gianfranco Pellegrino and Marcello Di Paola (eds), Cham: Springer International Publishing, 67–88. doi:10.1007/978-3-031-07002-0_146
- –––, 2023b, “Climate Models and Robustness Analysis – Part II: The Justificatory Challenge”, in Handbook of the Philosophy of Climate Change (Handbooks in Philosophy), Gianfranco Pellegrino and Marcello Di Paola (eds), Cham: Springer International Publishing, 89–103. doi:10.1007/978-3-031-07002-0_147
- Hartmann, Dennis L., Albert M.G. Klein Tank, Matilde Rusticucci, et al., 2013, “Observations: Atmosphere and Surface”, in Stocker et al. 2013: 159–254 (ch. 2).
- Hausfather, Zeke, Kate Marvel, Gavin A. Schmidt, John W. Nielsen-Gammon, and Mark Zelinka, 2022, “Climate Simulations: Recognize the ‘Hot Model’ Problem”, Nature, 605(7908): 26–29. doi:10.1038/d41586-022-01192-2
- Hazeleger, Wilco, Bart J. J. M. van den Hurk, Erik Min, Geert Jan van Oldenborgh, Arthur C. Petersen, David Alan Stainforth, Eleftheria Vasileiadou, and Leonard A. Smith, 2015, “Tales of Future Weather”, Nature Climate Change, 5(2): 107–113. doi:10.1038/nclimate2450
- Held, Isaac M., 2005, “The Gap between Simulation and Understanding in Climate Modeling”, Bulletin of the American Meteorological Society, 86(11): 1609–1614. doi:10.1175/BAMS-86-11-1609
- Helgeson, Casey, Richard Bradley, and Brian Hill, 2018, “Combining Probability with Qualitative Degree-of-Certainty Metrics in Assessment”, Climatic Change, 149(3–4): 517–525. doi:10.1007/s10584-018-2247-6
- Heymann, Matthias and Dania Achermann, 2018, “From Climatology to Climate Science in the Twentieth Century”, in The Palgrave Handbook of Climate History, Sam White, Christian Pfister, and Franz Mauelshagen (eds), London: Palgrave Macmillan UK, 605–632. doi:10.1057/978-1-137-43020-5_38
- Hillerbrand, Rafaela, 2014, “Climate Simulations: Uncertain Projections for an Uncertain World”, Journal for General Philosophy of Science, 45(S1): 17–32. doi:10.1007/s10838-014-9266-4
- Hourdin, Frédéric, Thorsten Mauritsen, Andrew Gettelman, Jean-Christophe Golaz, Venkatramani Balaji, Qingyun Duan, Doris Folini, Duoying Ji, Daniel Klocke, Yun Qian, Florian Rauser, Catherine Rio, Lorenzo Tomassini, Masahiro Watanabe, and Daniel Williamson, 2017, “The Art and Science of Climate Model Tuning”, Bulletin of the American Meteorological Society, 98(3): 589–602. doi:10.1175/BAMS-D-15-00135.1
- Intemann, Kristen, 2015, “Distinguishing Between Legitimate and Illegitimate Values in Climate Modeling”, European Journal for Philosophy of Science, 5(2): 217–232. doi:10.1007/s13194-014-0105-6
- –––, 2017, “Who Needs Consensus Anyway? Addressing Manufactured Doubt and Increasing Public Trust in Climate Science”, Public Affairs Quarterly, 31(3): 189–208. doi:10.2307/44732792
- IPCC, 2013, “IPCC Factsheet: What is the IPCC?”, 30 August 2013. IPCC Fact Sheet. [IPCC 2013 available online (pdf)]
- –––, 2021a, Climate Change 2021: The Physical Science Basis, Working Group I Contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, Valérie Masson-Delmotte, Panmao Zhai, Anna Pirani, Sarah L. Connors, Clotilde Péan, Yang Chen, Leah Goldfarb, Melissa I. Gomis, J. B. Robin Matthews, Sophie Berger, Mengtian Huang, Ozge Yelekçi, Rong Yu, Baiquan Zhou, Elisabeth Lonnoy, Thomas K. Maycock, Tim Waterfield, Katherine Leitzel, and Nada Caud (eds), Cambridge/New York: Cambridge University Press. doi:10.1017/9781009157896
- –––, [IPCC-Glossary] 2021b, “Annex VII: Glossary”, J.B.R. Matthews, V. Möller, R. van Diemen, et al. (eds), in IPCC 2021a: 2215–2256 (Annex VII). doi:10.1017/9781009157896.022
- Jebeile, Julie, Vincent Lam, and Tim Räz, 2021, “Understanding Climate Change with Statistical Downscaling and Machine Learning”, Synthese, 199(1–2): 1877–1897. doi:10.1007/s11229-020-02865-z
- Jebeile, Julie and Michel Crucifix, 2021, “Value Management and Model Pluralism in Climate Science”, Studies in History and Philosophy of Science, 88: 120–127. doi:10.1016/j.shpsa.2021.06.004
- Jebeile, Julie, Vincent Lam, Mason Majszak, and Tim Räz, 2023, “Machine Learning and the Quest for Objectivity in Climate Model Parameterization”, Climatic Change, 176(8): article 101. doi:10.1007/s10584-023-03532-1
- Jebeile, Julie and Joe Roussos, 2023, “Usability of Climate Information: Toward a New Scientific Framework”, WIREs Climate Change, 14(5): e833. doi:10.1002/wcc.833
- John, Stephen, 2015, “The Example of the IPCC Does Not Vindicate the Value Free Ideal: A Reply to Gregor Betz”, European Journal for Philosophy of Science, 5(1): 1–13. doi:10.1007/s13194-014-0095-4
- Karl, Thomas R., Susan J. Hassol, Christopher D. Miller and William L. Murray (eds.), 2006, Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences, Washington, DC: U.S. Climate Change Science Program and Subcommittee on Global Change Research. [Karl et al. (eds.) 2006 available online]
- Katzav, Joel, 2013a, “Hybrid Models, Climate Models, and Inference to the Best Explanation”, The British Journal for the Philosophy of Science, 64(1): 107–129. doi:10.1093/bjps/axs002
- –––, 2013b, “Severe Testing of Climate Change Hypotheses”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 44(4): 433–441. doi:10.1016/j.shpsb.2013.09.003
- –––, 2014, “The Epistemology of Climate Models and Some of Its Implications for Climate Science and the Philosophy of Science”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 46: 228–238. doi:10.1016/j.shpsb.2014.03.001
- –––, 2023, “Epistemic Possibilities in Climate Science: Lessons From Some Recent Research in the Context of Discovery”, European Journal for Philosophy of Science, 13(4): article 57. doi:10.1007/s13194-023-00560-7
- Katzav, Joel, Henk A. Dijkstra, and A.T.J. (Jos) de Laat, 2012, “Assessing Climate Model Projections: State of the Art and Philosophical Reflections”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 43(4): 258–276. doi:10.1016/j.shpsb.2012.07.002
- Katzav, Joel and Wendy S. Parker, 2018, “Issues in the Theoretical Foundations of Climate Science”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 63: 141–149. doi:10.1016/j.shpsb.2018.02.001
- Katzav, Joel, Erica L. Thompson, James Risbey, David A. Stainforth, Seamus Bradley, and Mathias Frisch, 2021, “On the Appropriate and Inappropriate Uses of Probability Distributions in Climate Projections and Some Alternatives”, Climatic Change, 169(1–2): article 15. doi:10.1007/s10584-021-03267-x
- Kawamleh, Suzanne, 2022, “Confirming (Climate) Change: A Dynamical Account of Model Evaluation”, Synthese, 200(2): article 122. doi:10.1007/s11229-022-03659-1
- Knüsel, Benedikt and Christoph Baumberger, 2020, “Understanding Climate Phenomena with Data-Driven Models”, Studies in History and Philosophy of Science Part A, 84: 46–56. doi:10.1016/j.shpsa.2020.08.003
- Knutti, Reto, 2010, “The End of Model Democracy? An Editorial Comment”, Climatic Change, 102(3–4): 395–404. doi:10.1007/s10584-010-9800-2
- –––, 2018, “Climate Model Confirmation: From Philosophy to Predicting Climate in the Real World”, in Lloyd and Winsberg 2018: 325–359 (ch. 11). doi:10.1007/978-3-319-65058-6_11
- Knutti, Reto, David Masson, and Andrew Gettelman, 2013, “Climate Model Genealogy: Generation CMIP5 and How We Got There”, Geophysical Research Letters, 40(6): 1194–1199. doi:10.1002/grl.50256
- Knutti, Reto, Jan Sedláček, Benjamin M. Sanderson, Ruth Lorenz, Erich M. Fischer, and Veronika Eyring, 2017, “A Climate Model Projection Weighting Scheme Accounting for Performance and Interdependence”, Geophysical Research Letters, 44(4): 1909–1918. doi:10.1002/2016GL072012
- Kochkov, Dmitrii, Janni Yuval, Ian Langmore, Peter Norgaard, Jamie Smith, Griffin Mooers, Milan Klöwer, James Lottes, Stephan Rasp, Peter Düben, Sam Hatfield, Peter Battaglia, Alvaro Sanchez-Gonzalez, Matthew Willson, Michael P. Brenner, and Stephan Hoyer, 2024, “Neural General Circulation Models for Weather and Climate”, Nature, 632(8027): 1060–1066. doi:10.1038/s41586-024-07744-y
- Lam, Vincent, 2021, “Climate Modelling and Structural Stability”, European Journal for Philosophy of Science, 11(4): article 98. doi:10.1007/s13194-021-00414-0
- Leiserowitz, Anthony A., Edward W. Maibach, Connie Roser-Renouf, Nicholas Smith, and Erica Dawson, 2013, “Climategate, Public Opinion, and the Loss of Trust”, American Behavioral Scientist, 57(6): 818–837. doi:10.1177/0002764212458272
- Lenhard, Johannes and Eric Winsberg, 2010, “Holism, Entrenchment, and the Future of Climate Model Pluralism”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 41(3): 253–262. doi:10.1016/j.shpsb.2010.07.001
- Leuschner, Anna, 2015, “Uncertainties, Plurality, and Robustness in Climate Research and Modeling. On the Reliability of Climate Prognoses”, Journal for General Philosophy of Science, 46(2): 367–381. doi:10.1007/s10838-015-9304-x
- Lewandowsky, Stephan, Naomi Oreskes, James S. Risbey, Ben R. Newell, and Michael Smithson, 2015, “Seepage: Climate Change Denial and Its Effect on the Scientific Community”, Global Environmental Change, 33: 1–13. doi:10.1016/j.gloenvcha.2015.02.013
- Lloyd, Elisabeth A., 2009, “I—Varieties of Support and Confirmation of Climate Models”, Aristotelian Society Supplementary Volume, 83(1): 213–232. doi:10.1111/j.1467-8349.2009.00179.x
- –––, 2010, “Confirmation and Robustness of Climate Models”, Philosophy of Science, 77(5): 971–984. doi:10.1086/657427
- –––, 2012, “The Role of ‘Complex’ Empiricism in the Debates about Satellite Data and Climate Models”, Studies in History and Philosophy of Science Part A, 43(2): 390–401. doi:10.1016/j.shpsa.2012.02.001
- –––, 2015, “Model Robustness as a Confirmatory Virtue: The Case of Climate Science”, Studies in History and Philosophy of Science Part A, 49: 58–68. doi:10.1016/j.shpsa.2014.12.002
- Lloyd, Elisabeth Anne and Eric B. Winsberg (eds), 2018, Climate Modelling: Philosophical and Conceptual Issues, Cham: Palgrave Macmillan. doi:10.1007/978-3-319-65058-6
- Lloyd, Elisabeth A., Naomi Oreskes, Sonia I. Seneviratne, and Edward J. Larson, 2021, “Climate Scientists Set the Bar of Proof Too High”, Climatic Change, 165(3–4): article 55. doi:10.1007/s10584-021-03061-9
- Lowe, Jason A., Dan Bernie, Philip Bett, et al., 2018, “UKCP18 Science Overview Report”, Met Office. [Lowe et al. 2018 available online (pdf)]
- Lucarini, Valerio and Mickaël D. Chekroun, 2023, “Theoretical Tools for Understanding the Climate Crisis from Hasselmann’s Programme and beyond”, Nature Reviews Physics, 5(12): 744–765. doi:10.1038/s42254-023-00650-8
- Lusk, Greg, 2020, “Political Legitimacy in the Democratic View: The Case of Climate Services”, Philosophy of Science, 87(5): 991–1002. doi:10.1086/710803
- –––, 2021, “Saving the Data”, The British Journal for the Philosophy of Science, 72(1): 277–298. doi:10.1093/bjps/axy072
- Mann, Michael E., Raymond S. Bradley, and Malcolm K. Hughes, 1999, “Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations”, Geophysical Research Letters, 26(6): 759–762. doi:10.1029/1999GL900070
- Masson-Delmotte, Valérie, Michael Schulz, et al., 2013, “Information from Paleoclimate Archives”, in Stocker et al. 2013: 383–464 (ch. 5).
- Mastrandrea, Michael D., Christopher B. Field, Thomas F. Stocker, et al., 2010, “Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Intergovernmental Panel on Climate Change (IPCC)”, 6–7 July 2010, Jasper Ridge, CA. [Mastrandrea et al. 2010 available online]
- Mauritsen, Thorsten, Bjorn Stevens, Erich Roeckner, Traute Crueger, Monika Esch, Marco Giorgetta, Helmuth Haak, Johann Jungclaus, Daniel Klocke, Daniela Matei, Uwe Mikolajewicz, Dirk Notz, Robert Pincus, Hauke Schmidt, and Lorenzo Tomassini, 2012, “Tuning the Climate of a Global Model”, Journal of Advances in Modeling Earth Systems, 4(3): M00A01. doi:10.1029/2012MS000154
- Mayo, Deborah G., 1996, Error and the Growth of Experimental Knowledge (Science and Its Conceptual Foundations), Chicago: University of Chicago Press.
- McGuffie, Kendal and Ann Henderson-Sellers, 2014, The Climate Modelling Primer, fourth edition, Chichester: Wiley Blackwell.
- McIntyre, Stephen and Ross McKitrick, 2005, “Hockey Sticks, Principal Components, and Spurious Significance”, Geophysical Research Letters, 32(3): L03710. doi:10.1029/2004GL021750
- Medhaug, Iselin, Martin B. Stolpe, Erich M. Fischer, and Reto Knutti, 2017, “Reconciling Controversies about the ‘Global Warming Hiatus’”, Nature, 545(7652): 41–47. doi:10.1038/nature22315
- Morice, Colin P., John J. Kennedy, Nick A. Rayner, Jonathan P. Winn, E. Hogan, Rachel E. Killick, Robert J. H. Dunn, Timothy J. Osborn, Philip D. Jones, and Ian R. Simpson, 2021, “An Updated Assessment of Near‐Surface Temperature Change From 1850: The HadCRUT5 Data Set”, Journal of Geophysical Research: Atmospheres, 126(3): e2019JD032361. doi:10.1029/2019JD032361
- Nabergall, Lukas, Alejandro Navas, and Eric Winsberg, 2019, “An Antidote for Hawkmoths: On the Prevalence of Structural Chaos in Non-Linear Modeling”, European Journal for Philosophy of Science, 9(2): article 21. doi:10.1007/s13194-018-0244-2
- National Research Council (NRC), 2000, Reconciling Observations of Global Temperature Change, Washington, DC: National Academy Press. [NRC 2000 available online]
- Odenbaugh, Jay, 2012, “Climate, Consensus and Contrarians”, in The Environment: Philosophy, Science and Ethics, William P. Kabasenche, Michael O’Rourke, and Matthew H. Slater (eds), Cambridge, MA: MIT Press, pp. 137–150. doi:10.7551/mitpress/9780262017404.003.0008
- –––, 2022, “Skepticism and Denialism”, in The Routledge Companion to Environmental Ethics, Benjamin Hale, Andrew Light, and Lydia A. Lawhon (eds), New York: Routledge, 293–314.
- O’Loughlin, Ryan, 2021, “Robustness Reasoning in Climate Model Comparisons”, Studies in History and Philosophy of Science Part A, 85: 34–43. doi:10.1016/j.shpsa.2020.12.005
- –––, 2023, “Diagnosing Errors in Climate Model Intercomparisons”, European Journal for Philosophy of Science, 13(2): article 20. doi:10.1007/s13194-023-00522-z
- Oreskes, Naomi, 2007, “The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong?”, in Climate Change: What It Means for Us, Our Children, and Our Grandchildren (American and Comparative Environmental Policy), Joseph F. DiMento and Pamela Doughman (eds), Cambridge, MA: MIT Press, 65–99.
- Oreskes, Naomi and Erik M. Conway, 2010, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, London: Bloomsbury Press.
- Otto, Friederike E.L., 2023, “Attribution of Extreme Events to Climate Change”, Annual Review of Environment and Resources, 48: 813–828. doi:10.1146/annurev-environ-112621-083538
- Parker, Wendy S., 2009, “II—Confirmation and Adequacy-for-Purpose in Climate Modelling”, Aristotelian Society Supplementary Volume, 83: 233–249. doi:10.1111/j.1467-8349.2009.00180.x
- –––, 2011, “When Climate Models Agree: the Significance of Robust Model Predictions”, Philosophy of Science, 78(4): 579–600. doi:10.1086/661566
- –––, 2014a, “Values and Uncertainties in Climate Prediction, Revisited”, Studies in History and Philosophy of Science Part A, 46: 24–30. doi:10.1016/j.shpsa.2013.11.003
- –––, 2014b, “Simulation and Understanding in the Study of Weather and Climate”, Perspectives on Science, 22(3): 336–356. doi:10.1162/POSC_a_00137
- –––, 2017, “Computer Simulation, Measurement and Data Assimilation”, The British Journal for the Philosophy of Science, 68(1): 273–304. doi:10.1093/bjps/axv037
- –––, 2024, Climate Science, Cambridge/New York: Cambridge University Press. doi:10.1017/9781009619301
- Parker, Wendy S. and Greg Lusk, 2019, “Incorporating User Values into Climate Services”, Bulletin of the American Meteorological Society, 100(9): 1643–1650. doi:10.1175/BAMS-D-17-0325.1
- Parker, Wendy S. and Eric Winsberg, 2018, “Values and Evidence: How Models Make a Difference”, European Journal for Philosophy of Science, 8(1): 125–142. doi:10.1007/s13194-017-0180-6
- Petersen, Arthur C., 2000, “Philosophy of Climate Science”, Bulletin of the American Meteorological Society, 81(2): 265–271. doi:10.1175/1520-0477(2000)081<0265:POCS>2.3.CO;2
- –––, 2012, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, second edition, Boca Raton/London/New York: CRC Press.
- Pulkkinen, Karoliina, Sabine Undorf, Frida Bender, Per Wikman-Svahn, Francisco Doblas-Reyes, Clare Flynn, Gabriele C. Hegerl, Aiden Jönsson, Gah-Kai Leung, Joe Roussos, Theodore G. Shepherd, and Erica Thompson, 2022, “The Value of Values in Climate Science”, Nature Climate Change, 12(1): 4–6. doi:10.1038/s41558-021-01238-9
- Ranalli, Brent, 2012, “Climate Science, Character and the ‘Hard-Won’ Consensus”, Kennedy Institute of Ethics Journal, 22(2): 183–210. doi:10.1353/ken.2012.0004
- Randall, David A., Richard A. Wood, Sandrine Bony, et al., 2007, “Climate Models and Their Evaluation”, in Susan Solomon, Dahe Qin, et al. (eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge: Cambridge University Press, pp. 589–662 (ch. 8). [Randall et al. 2007 available online]
- Risbey, James S., 2007, “Subjective Elements in Climate Policy Advice”, Climatic Change, 85(1–2): 11–17. doi:10.1007/s10584-007-9314-8
- Risbey, James S., Stephan Lewandowsky, Clothilde Langlais, Didier P. Monselesan, Terence J. O’Kane, and Naomi Oreskes, 2014, “Well-Estimated Global Surface Warming in Climate Projections Selected for ENSO Phase”, Nature Climate Change, 4(9): 835–840. doi:10.1038/nclimate2310
- Rohde, Robert, Richard A. Muller, Robert Jacobsen, et al., 2013, “A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011”, Geoinformatics and Geostatistics: An Overview, 1(1): 1. doi:10.4172/2327-4581.1000101
- Roussos, Joe, Richard Bradley and Roman Frigg, 2021, “Making Confident Decisions with Model Ensembles”, Philosophy of Science, 88(3): 439–460. doi:10.1086/712818
- Russell, Muir, Geoffrey Boulton, Peter Clarke, David Eyton, and James Norton, 2010, The Independent Climate Change Emails Review, Norwich, UK: University of East Anglia. [Russell et al. 2010 available online]
- Schmidt, Gavin A., 2011, “Reanalyses ‘R’ Us”, RealClimate: Climate Science from Climate Scientists..., 26 July 2011. [Schmidt 2011 available online]
- Schmidt, Gavin A. and Steven Sherwood, 2015, “A Practical Philosophy of Complex Climate Modelling”, European Journal for Philosophy of Science, 5(2): 149–169. doi:10.1007/s13194-014-0102-9
- Schneider, Tapio, Swadhin Behera, Giulio Boccaletti, Clara Deser, Kerry Emanuel, Raffaele Ferrari, L. Ruby Leung, Ning Lin, Thomas Müller, Antonio Navarra, Ousmane Ndiaye, Andrew Stuart, Joseph Tribbia, and Toshio Yamagata, 2023, “Harnessing AI and Computing to Advance Climate Modelling and Prediction”, Nature Climate Change, 13(9): 887–889. doi:10.1038/s41558-023-01769-3
- Schupbach, Jonah N., 2018, “Robustness Analysis as Explanatory Reasoning”, The British Journal for the Philosophy of Science, 69(1): 275–300. doi:10.1093/bjps/axw008
- Shepherd, Theodore G., Emily Boyd, Raphael A. Calel, Sandra C. Chapman, Suraje Dessai, Ioana M. Dima-West, Hayley J. Fowler, Rachel James, Douglas Maraun, Olivia Martius, Catherine A. Senior, Adam H. Sobel, David A. Stainforth, Simon F. B. Tett, Kevin E. Trenberth, Bart J. J. M. Van Den Hurk, Nicholas W. Watkins, Robert L. Wilby, and Dimitri A. Zenghelis, 2018, “Storylines: An Alternative Approach to Representing Uncertainty in Physical Aspects of Climate Change”, Climatic Change, 151(3–4): 555–571. doi:10.1007/s10584-018-2317-9
- Shukla, Jagadish, Renata Hagedorn, Martin Miller, Tim N. Palmer, Brian Hoskins, James Kinter, Jochem Marotzke, and Julia Slingo, 2009, “Strategies: Revolution in Climate Prediction Is Both Necessary and Possible: A Declaration at the World Modelling Summit for Climate Prediction”, Bulletin of the American Meteorological Society, 90(2): 175–178. doi:10.1175/2008BAMS2759.1
- Slingo, Julia, Paul Bates, Peter Bauer, Stephen Belcher, Tim Palmer, Graeme Stephens, Bjorn Stevens, Thomas Stocker, and Georg Teutsch, 2022, “Ambitious Partnership Needed for Reliable Climate Prediction”, Nature Climate Change, 12(6): 499–503. doi:10.1038/s41558-022-01384-8
- Smith, Leonard A., 2002, “What Might We Learn from Climate Forecasts?”, Proceedings of the National Academy of Sciences, 99(supplement 1): 2487–2492. doi:10.1073/pnas.012580599
- Soon, Willie, Sallie Baliunas, Craig Idso, Sherwood Idso, and David R. Legates, 2003, “Reconstructing Climatic and Environmental Changes of the Past 1000 Years: A Reappraisal”, Energy & Environment, 14(2–3): 233–296. doi:10.1260/095830503765184619
- Stainforth, David A., Myles R. Allen, Edward R. Tredger, and Leonard A. Smith, 2007, “Confidence, Uncertainty and Decision-Support Relevance in Climate Predictions”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1857): 2145–2161. doi:10.1098/rsta.2007.2074
- Steele, Katie, 2012, “The Scientist qua Policy Advisor Makes Value Judgments”, Philosophy of Science, 79(5): 893–904. doi:10.1086/667842
- Steele, Katie and Charlotte Werndl, 2013, “Climate Models, Calibration and Confirmation”, The British Journal for the Philosophy of Science, 64(3): 609–635. doi:10.1093/bjps/axs036
- –––, 2016, “The Diversity of Model Tuning Practices in Climate Science”, Philosophy of Science, 83(5): 1133–1144. doi:10.1086/687944
- Stocker, Thomas F., Dahe Qin, Gian-Kasper Plattner, Melinda M. B. Tignor, Simon K. Allen, Judith Boschung, Alexander Nauels, Yu Xia, Vincent Bex, and Pauline M. Midgley (eds), 2013, Climate Change 2013: The Physical Science Basis, Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge/New York: Cambridge University Press. [Stocker et al. 2013 available online]
- Swain, Daniel L., Deepti Singh, Danielle Touma, and Noah S. Diffenbaugh, 2020, “Attributing Extreme Events to Climate Change: A New Frontier in a Warming World”, One Earth, 2(6): 522–527. doi:10.1016/j.oneear.2020.05.011
- Tebaldi, Claudia, Richard L. Smith, Doug Nychka, and Linda O. Mearns, 2005, “Quantifying Uncertainty in Projections of Regional Climate Change: A Bayesian Approach to the Analysis of Multimodel Ensembles”, Journal of Climate, 18(10): 1524–1540. doi:10.1175/JCLI3363.1
- Thompson, Erica, Roman Frigg and Casey Helgeson, 2016, “Expert Judgment for Climate Change Adaptation”, Philosophy of Science, 83(5): 1110–1121. doi:10.1086/687942
- Thorne, Peter W., John R. Lanzante, Thomas C. Peterson, Dian J. Seidel, and Keith P. Shine, 2011, “Tropospheric Temperature Trends: History of an Ongoing Controversy”, WIREs Climate Change, 2(1): 66–88. doi:10.1002/wcc.80
- Tokarska, Katarzyna B., Martin B. Stolpe, Sebastian Sippel, Erich M. Fischer, Christopher J. Smith, Flavio Lehner, and Reto Knutti, 2020, “Past Warming Trend Constrains Future Warming in CMIP6 Models”, Science Advances, 6(12): eaaz9549. doi:10.1126/sciadv.aaz9549
- Touzé-Peiffer, Ludovic, Anouk Barberousse, and Hervé Le Treut, 2020, “The Coupled Model Intercomparison Project: History, Uses, and Structural Effects on Climate Research”, WIREs Climate Change, 11(4): e648. doi:10.1002/wcc.648
- Undorf, Sabine, Karoliina Pulkkinen, Per Wikman-Svahn, and Frida A.-M. Bender, 2022, “How Do Value-Judgements Enter Model-Based Assessments of Climate Sensitivity?”, Climatic Change, 174(3–4): article 19. doi:10.1007/s10584-022-03435-7
- Vezér, Martin A., 2016, “Computer Models and the Evidence of Anthropogenic Climate Change: An Epistemology of Variety-of-Evidence Inferences and Robustness Analysis”, Studies in History and Philosophy of Science Part A, 56: 95–102. doi:10.1016/j.shpsa.2016.01.004
- –––, 2017, “Variety-of-Evidence Reasoning About the Distant Past: A Case Study in Paleoclimate Reconstruction”, European Journal for Philosophy of Science, 7(2): 257–265. doi:10.1007/s13194-016-0156-y
- Wahl, Eugene R. and Caspar M. Ammann, 2007, “Robustness of the Mann, Bradley, Hughes Reconstruction of Northern Hemisphere Surface Temperatures: Examination of Criticisms Based on the Nature and Processing of Proxy Climate Evidence”, Climatic Change, 85(1–2): 33–69. doi:10.1007/s10584-006-9105-7
- Watkins, Aja, 2023, “Using Paleoclimate Analogues to Inform Climate Projections”, Perspectives on Science, 32(4): 415–459. doi:10.1162/posc_a_00622
- –––, 2024, “Paleoclimate Proxies and the Benefits of Disunity”, Philosophy of Science, 91(4): 793–810. doi:10.1017/psa.2024.12
- Weart, Spencer R., 2008, The Discovery of Global Warming, second edition (revised and updated), Cambridge, MA: Harvard University Press.
- –––, 2010, “The Development of General Circulation Models of Climate”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 41(3): 208–217. doi:10.1016/j.shpsb.2010.06.002
- Werndl, Charlotte, 2016, “On Defining Climate and Climate Change”, The British Journal for the Philosophy of Science, 67(2): 337–364. doi:10.1093/bjps/axu048
- –––, 2019, “Initial Conditions Dependence and Initial Conditions Uncertainty in Climate Science”, The British Journal for the Philosophy of Science, 70(4): 953–976. doi:10.1093/bjps/axy021
- Wilby, Robert and Xianfu Lu, 2022, “Tailoring Climate Information and Services for Adaptation Actors with Diverse Capabilities”, Climatic Change, 174(3–4): article 33. doi:10.1007/s10584-022-03452-6
- Wilson, Joseph, 2023, “Paleoclimate Analogues and the Threshold Problem”, Synthese, 202(1): article 17. doi:10.1007/s11229-023-04202-6
- Wilson, Joseph and F. Garrett Boudinot, 2022, “Proxy Measurement in Paleoclimatology”, European Journal for Philosophy of Science, 12(1): article 14. doi:10.1007/s13194-021-00444-8
- Winsberg, Eric, 2012, “Values and Uncertainties in the Predictions of Global Climate Models”, Kennedy Institute of Ethics Journal, 22(2): 111–137. doi:10.1353/ken.2012.0008
- –––, 2018, Philosophy and Climate Science, Cambridge: Cambridge University Press. doi:10.1017/9781108164290
- Winsberg, Eric, Naomi Oreskes, and Elisabeth Lloyd, 2020, “Severe Weather Event Attribution: Why Values Won’t Go Away”, Studies in History and Philosophy of Science Part A, 84: 142–149. doi:10.1016/j.shpsa.2020.09.003
- Wüthrich, Nicolas, 2017, “Conceptualizing Uncertainty: An Assessment of the Uncertainty Framework of the Intergovernmental Panel on Climate Change”, in EPSA15 Selected Papers (European Studies in Philosophy of Science), Michela Massimi, Jan-Willem Romeijn, and Gerhard Schurz (eds), Cham: Springer International Publishing, volume 5, 95–107. doi:10.1007/978-3-319-53730-6_9
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- Dee, Dick, John Fasullo, Dennis Shea, et al. (eds.), 2016, “The Climate Data Guide: Atmospheric Reanalysis: Overview & Comparison Tables”, University Corporation for Atmospheric Research.
- Annotated Bibliography: Epistemology of Climate Science
- RealClimate Blog
- Intergovernmental Panel on Climate Change (IPCC)
- Spencer Weart: The Discovery of Global Warming
- Classic Papers on Global Warming Online, with Interpretive Essays by J.R. Fleming
- Online textbook: Introduction to climate dynamics and climate modeling
- Online Course: From Meteorology to Mitigation: Understanding Global Warming
- NCAR Climate Data Guide
- Old Weather
- The International Data-Rescue (I-DARE) Portal