Historicist Theories of Scientific Rationality
Of those philosophers who have attempted to characterize scientific rationality, most have attended in some way to the history of science. Even Karl Popper, who is hardly a historicist by anyone's standards, frequently employs the history of science as an illustrative and polemical device. However, relatively few theorists have offered theories according to which data drawn from the history of science somehow constitute or are evidential for the concept of rationality. Let us call such theories historicist theories.
Roughly put, the idea behind historicist theories of rationality is that a good theory of rationality should somehow fit the history of science. On a minimal reading of “fit”, a good theory of rationality will label as rational most of the major episodes in the history of science. A more demanding reading asserts that the best theory of rationality is the one that maximizes the number of rational episodes in the history of science (subject to some filtering out of sociologically infected episodes). However, it is unclear whether historicism is (i) a conceptual claim, according to which it is an analytic or at least necessary truth that rationality fit history, or (ii) an epistemological claim, according to which the best way to find out about rationality is to consult the history of science. Historicism (i) seems difficult to motivate, while historicism (ii) risks descending into triviality. For instance, in the case of instrumental rules, which tell us the best way to achieve certain goals, philosophers of all stripes would agree that looking at historical attempts to achieve those goals will help us evaluate our current proposals for achieving them.
Two other ambiguities about the scope of historicism are worth flagging here. First, it might be wondered whether historicism becomes a good idea only once one has established that science is basically successful, or whether it should be endorsed in every scientific community and possible world. Second, it would be good to clarify how the study of history, which can reasonably be thought of as a largely descriptive enterprise, can serve as a basis for a normative theory of rationality. In other words, it is unclear how historicism should bridge the “is/ought gap”. The latter question will become especially pressing for some of the historicisms discussed later in this entry.
In order to understand historicism, one must also understand the distinction between methodology and meta-methodology. In the parlance of the history and philosophy of science, a methodology for scientific rationality is a theory of rationality: it tells us what is rational and what is not in specific cases. Thus, the rule “Always accept the theory with the greatest degree of confirmation” would count as (part of) a methodology. On the other hand, a meta-methodology provides us with the standards by which we evaluate the theories of rationality that constitute our methodologies. To be a historicist about rationality is to accept a meta-methodological claim: a good theory of rationality must fit the history of science. Thus although historicists might agree on a general meta-methodology, they can and do vary widely in the sort of theory that they produce using that meta-methodology.
- 1. Paradigms: Consensus
- 2. Research Programmes: Novel Predictions
- 3. Research Traditions: Solved Problems
- 4. General Criticisms
- 5. Neo-historicist Developments
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Paradigms: Consensus
Historicism in the philosophy of science is a fairly recent development. It can perhaps be dated to the publication of Kuhn's influential The Structure of Scientific Revolutions in 1962. Before that point, the two dominant theories of scientific rationality were confirmationism (scientists should accept theories that are probably true, given the evidence) and falsificationism (scientists should reject theories that make false predictions about observables and replace them with theories that conform to all available evidence). Both of these theories spring from purely logical roots, confirmationism from Carnap's work on inductive logic, and falsificationism from Popper's rejection of inductive logic coupled with his assertion that universals can be falsified by a single counter-instance. Neither of these theories was accountable to the history of science in the following important sense: If it turned out that the history of science exemplified few or no decisions in accordance with, say, Carnap's confirmationism, then so much the worse for the history of science. Such a discovery would merely show that scientists were largely irrational. It would not challenge confirmationism. Rather, confirmationism was mainly challenged on conceptual, ahistorical grounds, such as its inability to generate plausible yet non-arbitrary levels of confirmation for moderately sized samples, the difficulties encountered in devising a suitable criterion for evidential relevance, and so on. To acquire a feel for the general historicist approach, let us first review the work of the three major historicists, Thomas Kuhn, Imre Lakatos, and Larry Laudan.
Kuhn's work effected three major transformations in the study of scientific rationality. First, and most importantly, it brought history to the fore. The implicit (if not explicit) message of The Structure of Scientific Revolutions is that a respectable theory of rational scientific procedure must conform to the greater part of actual scientific procedure. Second, instead of assuming that scientific theories were the units of rational evaluation, Structure was based on a unit that could persist through minor theoretical changes. Therefore, it could distinguish between revisions and wholesale rejection. Kuhn called this unit “the paradigm”, and its descendants live on as the research programme, the research tradition, the global theoretical unit, and so on. Third, Kuhn's work highlighted the real problems that historically aware accounts of rationality face: when all is said and done, there may be no trans-historical rule for rational scientific procedure. While this last difference between Kuhn and his predecessors may not require jettisoning the entirety of the received conception of rationality, it does suggest that significant revisions to that conception are required—prompting many of Kuhn's most fervent critics to reject his view as irrationalist.
According to Kuhn, scientific practice is divided into two phases, called normal science and revolutionary science. During normal science, the dominant paradigm is neither questioned nor seriously tested. Rather, the members of the scientific community employ the paradigm as a tool for solving outstanding problems. Occasionally, the community will encounter especially resistant problems, or anomalies, but if a paradigm encounters only a few anomalies there is little reason for anxiety among its proponents. Only as anomalies persist and/or accumulate will the community pass into a state of crisis, which may in turn push the community into the phase of revolutionary science.
During a period of revolutionary science, the scientific community actively debates the underlying principles of the dominant paradigm and its rivals. The way in which dominance is established is perhaps the most important locus of disagreement concerning Kuhn's work. The most influential interpretation paints Kuhn as an arationalist. This interpretation garners some of its plausibility from Kuhn's own admission that he could not provide a general theory of the kind of creative problem-solving that gives rise to new paradigms, though he spent much time later in his career disavowing the arationalist interpretation. The interpretation makes much of Kuhn's use of the theory-ladenness of observation and various sorts of incommensurability. The supposed result of these features is that the proponents of different paradigms will often be unable to communicate with each other, and that, even when they can communicate, their standards of assessment will always favor their own paradigms. Thus, there is no rational basis for choosing between paradigms; the switch from one worldview to another is not so much a reasoned matter as the scientific equivalent of a perceptual gestalt shift. On this view, the transition between paradigms is best explained sociologically, in terms of institutional might, polemics, and perhaps generational replacement.
The position just sketched requires arguably unrealistically strong senses of incommensurability and theory-ladenness. According to a more moderate view of incommensurability, revolutionary science does not presuppose that proponents of one paradigm cannot understand what the proponents of another are saying. However, it does retain incommensurability about values. On this view, according to which there is no principled way to evaluate the choices and weightings of values employed by different paradigms, rationality can no longer be procedurally flow-charted. Rationality may only be saved by appealing to claims which are in need of substantive grounding, such as, for example, the claim that scientists are trained to reach a rational consensus in the absence of rules for doing so. This interpretation of Kuhn is often coupled with the claim that science has progressed in light of its increasing ability to solve problems. Again, though, there is an important qualification: while we can claim that, e.g., the Newtonian paradigm solved more problems than the Aristotelian one, we cannot claim that the Aristotelian set of solved problems is included in the Newtonian one. The transition from one paradigm to another involves some losses as well as gains, but on balance, there is a net gain in problem-solving ability.
Although this interpretation of Kuhn paints him as a rationalist, it posits a form of rationalism that rejects two claims that many rationalists had thought essential to their enterprise: (i) that rationality is a rule-governed process, and (ii) that scientific progress is cumulative. The reasons for rejecting these two claims are not so much historical as conceptual. For instance, if we suppose that the choice between paradigms is made in the absence of rules, and that we should trust it as rational simply because the people making the choices are properly trained, then might we not wonder whether a purely sociological explanation is in order? Similarly, if one paradigm is held to solve more problems than a rival, even though the rival opens up new possibilities for research and solves important problems that the first cannot, then might we not wonder whether the apparent progress is no more than a case of history re-written by the victors? What solid philosophical grounds are there for holding that the gains achieved by the victorious paradigm outweigh the losses? Among others, Brown (1989) addresses the first worry and Laudan (1977) the second (as will be discussed later in this entry), but, to date, there has been no satisfactory answer to these questions. Thus, Kuhn the rationalist seems to stand on shaky conceptual ground.
This interpretation is also susceptible to criticisms on historicist grounds.[1] Hacking (2006) and others have argued that large-scale conceptions of rationality have themselves varied across time. But if, in the historicist spirit, this historical fact is understood to underpin the claim that the acceptability of “styles of reasoning” varies across time then, if a Kuhnian revolution brings with it a shift in the preferred styles of reasoning, there may not be a single conception of rationality that can be used to compare the problem-solving capacities of the competing paradigms.
Specific worries aside, Kuhn is unsatisfactory for our purposes because he provides us with neither a specific account of rationality nor an explicit account of historicist meta-methodology. Because they are specific where Kuhn is not, Kuhn's main successors, Imre Lakatos and Larry Laudan, deserve our special attention.
2. Research Programmes: Novel Predictions
Lakatos's theory of rationality is based on the idea of the research programme, which is a sequence of theories characterized by a hard core (the features of the theories that are essential for membership in the research programme), the protective belt (the features that may be altered), the negative heuristic (an injunction not to change the hard core), and the positive heuristic (a plan for modifying the protective belt). The protective belt is altered for two reasons. In its early stages, a research programme will make unrealistic assumptions (e.g., Newton's early assumption that the sun and the earth are point masses). The protective belt is altered in order to make the programme more realistic. The programme becomes testable only when it has achieved a sufficient degree of realism. Once it has reached the phase of testability, the protective belt is altered when the programme makes false experimental predictions.
However, not all alterations to the protective belt are equal. If an alteration not only fixes the problem at hand but also allows the research programme to make a novel prediction, then the alteration is said to be progressive. If the alteration is no more than an ad hoc maneuver, that is, if it does not lead to any novel predictions, then it is regarded as degenerate. Initially Lakatos classifies a prediction as novel if and only if the phenomenon being predicted has never been observed prior to the prediction. Later Lakatos (Lakatos & Zahar 1976) extends the definition to cover phenomena that may have been observed before the time of prediction but which were not among the problems which the alteration was designed to solve.
A research programme is in good health as long as a sufficient number of the alterations to it are progressive. Its troubles multiply to the extent that these alterations are degenerate. Once a research programme is sufficiently degenerate, and once there is a progressive research programme to take its place, the degenerate programme should be jettisoned. However, Lakatos does not provide us with details concerning ways to measure degeneracy, nor does he locate the point at which degeneracy can prove fatal to a research programme.
Lakatos's meta-methodology is interesting precisely because it matches his methodology: a meta-methodological research programme in the philosophy of science is progressive as long as it continues to make novel predictions. This may seem puzzling. What predictions can a theory of rationality make? Lakatos's answer is that the predictions concern basic value judgments made by scientists at the time concerning the rationality and irrationality of certain episodes. To see this, suppose that, according to Lakatos's theory, a certain research programme in the past became unacceptably degenerate at a certain time. Subsequent historical investigation might uncover documents which attest to the attitudes of the scientific community at the time. Suppose that these documents show that the community was preparing to reject the research programme in question. In this case, we would say that Lakatos's theory had made a successful novel prediction.
One could easily question the weight that Lakatos places on novel predictions, both at the methodological and meta-methodological levels. Since making novel predictions does not seem valuable in itself, there must be some further end that making them promotes. But, it is difficult to pin down the other goals to which they are a means and how they are especially useful for furthering those goals. For instance, suppose Lakatos were to say that the pursuit of novel predictions provides us with the best and fastest way of increasing the observable content of our theories. Were he to say this he would need to provide us with a viable notion of, and metric for, observable content. In particular he would have to tell us what it is for one theory to have more observable content than another. If he presupposes some sort of cumulativity principle (i.e., that the better theory says everything true about observables that the worse one did plus a little bit more) then his theory is historically implausible. If he denies cumulativity, then the problem he faces, i.e., that of providing a sound basis for observational content, has foiled all who have tried to solve it. This is not to say that Lakatos's approach is without merit, just that it—like many of the historicist views being sketched—is in need of further non-trivial development in order to see if it will be viable.
3. Research Traditions: Solved Problems
Laudan (1977) presents both an explicit meta-methodology and a normative theory of rationality. For most of the remainder of this section the focus will be on this influential package of views, rather than the one Laudan later developed, since it is more radically historicist than his later views and raises interesting general questions as a well-known example of historicism. According to his meta-methodology (1977), a successful theory of rationality should respect “our preferred pre-analytic intuitions about scientific rationality” (Laudan 1977, 160). These intuitions consist of judgments of the rationality of certain explicit cases (e.g., “it was rational to accept Newtonian mechanics and to reject Aristotelian mechanics by, say, 1800”, and “it was irrational after 1830 to accept the biblical chronology as a literal account of earth history”). Thus, although not every episode in the history of science is represented in Laudan's meta-methodology, a subset of it is, where this subset consists of the “obvious” cases.
The theory of rationality supposedly generated by Laudan's meta-methodology is centered on the notion of the research tradition. Laudan's research traditions somewhat resemble Kuhn's paradigms and Lakatos's research programmes. Like Kuhn's paradigms (on the wider reading of the term) research traditions contain both metaphysical and methodological elements. However, Laudan downplays the sociological and pedagogical elements (e.g. training networks and exemplars) that are so important to Kuhn. Like Lakatos's research programmes, the theories generated by a research tradition will change through time, but, where Lakatos's research programmes are defined as a sequence of theories, the theories do not themselves constitute the research tradition. Laudan also claims that the research tradition is a much less rigid concept than the Lakatosian research programme, which is based on an inflexible hard core.
However, Laudan radically differentiates himself from Kuhn and Lakatos in his accounts of scientific progress and rationality. He claims that there are two sorts of problems that face every research tradition: empirical problems (akin to Kuhnian anomalies); and conceptual problems (i.e., problems of consistency, either internal or with dominant traditions in other fields). We should accept the research tradition that has solved the most problems and pursue the tradition that is currently solving problems at the greatest rate. Science progresses by solving more problems. However, Laudan does not presume cumulativity: although a given current research tradition will have solved more problems than its predecessors, there may be particular problems that have become “unsolved” by the current tradition. Thus, unlike Kuhn, Laudan believes that there is a simple concept which serves as a basis for both progress and rationality. Unlike Lakatos, Laudan (i) rejects both the idea of empirical content and the cumulative growth of theories and (ii) places no extra value on the concept of a novel prediction, and no great disvalue on ad hocness.
As appealing as it may seem, Laudan's theory of rationality faces some potentially fatal criticisms. First, how do we determine which research tradition has solved the most problems? The difficulty here is similar to the one noted above for Kuhn qua rationalist. Is the “problem of the planets”, for instance, to be counted as one problem or eight? There is reason to believe that the enumeration and/or weighting of problems is relative to a research tradition. Without a common scheme of enumeration and/or weighting, Laudan's theory may lead to ambiguous results according to which the rational tradition to pursue depends on who is doing the counting. Second, although Laudan takes some pains to differentiate research traditions from paradigms and research programmes, the notion of a research tradition is still somewhat fuzzy. As with paradigms and programmes, the fuzziness is especially apparent at the level of historical application.
An independent set of problems concerns Laudan's meta-methodology and its link to his theory of rationality. First, since Laudan takes his theory of rationality to apply to all spheres of intellectual endeavor, including the philosophy of science, we should expect his meta-methodology (which regulates the rational choice of a theory of scientific rationality) to be identical with his theory of rationality since, as he is keen to stress later (1984), it is natural to treat theories of rationality on a par with other scientific theories. Yet the two are very different. His meta-methodology is a foundational fit-the-data affair, while the ground-level criterion rejects the existence of data in a foundationalist sense. Now, Laudan could retract the claim that his theory of rationality has applicability outside of science, but as we shall see later, that would lead him into serious problems. Another problem concerns Laudan's data-set. While Laudan's list of seven pre-analytic intuitions is fairly uncontroversial, it makes sense to ask why we believe it to be uncontroversial. Three possible answers present themselves.
First, we might think our judgments uncontroversial because we have all been socially conditioned in the same way. Second, they might be the result of a prior criterion of rationality. Lastly, we might adopt a particularism about our judgments of rational cases of scientific inquiry and hold that sensitive judgments about rationality are correct, but not in virtue of conforming to a general principle about what is rational. None of the options available to Laudan looks promising. If Laudan adopts the first answer, then there is no reason to privilege our pre-analytic intuitions. If he adopts the second, then, rather than consulting the history of science, we should merely try to explicate our prior criterion of rationality. Though the third option looks to be the least problematic, it risks undermining the project of constructing a genuinely explanatory, rather than merely descriptive, theory of scientific rationality, since it presupposes that there is at base no general principle for adjudicating the rationality of episodes of scientific practice.
Finally, even if we could provide a firm philosophical basis for Laudan's approach, we would have very little data to go on. Laudan cites only seven data points. Presumably, he would also accept other cases from the history of science and, given his paper of 1986, important cases from other domains like common law and the uncontested results of thought experiments. But, still, the data set will be pretty meager. Without doubt, many theories of rationality, some plausible and some not, would fit these data points. For instance, consider the following criterion:
An episode in the history of science is rational if and only if it is one of the following episodes: {here follows the list of paradigmatically rational episodes}; and an episode in the history of science is irrational if and only if it is one of the following episodes: {here follows the list of paradigmatically irrational episodes}. All other episodes are neither rational nor irrational.
Clearly this is a silly criterion, but it meets Laudan's meta-methodological constraints. Laudan differentiates his methodology from his meta-methodology in order to avoid circular and/or self-supporting means of testing a methodology. Circularity is probably not a worry. Laudan might do better by equating his methodology with his meta-methodology. At any rate, Laudan himself has disavowed an intuitionistic meta-methodology like that exemplified by the view in Laudan 1977 on the basis of some of these worries (e.g., Laudan 1986) and developed a historically sensitive view (Laudan 1984) that is less thoroughgoingly historicist.
4. General Criticisms
Specifics aside, there are a number of important issues that the paradigmatic historicist theories of rationality explored above fail to address. This section presents a few of those problems.
4.1 The Problem of Externalist Theories of Rationality
What precisely is a historicist theory of rationality supposed to accomplish? According to Lakatos, one is rational as long as one avoids ad hocness as much as possible; according to Laudan, one is rational as long as one accepts the research tradition that has solved the most problems and pursues the one that is solving them at the greatest rate. Yet neither writer stipulates that rational agents must have the avoidance of ad hocness or the maximization of solved problems in mind as they go about their scientific business. As long as their theoretical behavior is in accord with the Lakatosian/Laudanian dicta, they are rational, regardless of their conscious motivations.
Let us call theories of rationality that evaluate agents on the basis of their theoretical choices and not on the basis of the reasons for the choices externalist theories. Externalist theories are wider than internalist (motive based) ones in an important way: the right choice made for the wrong reasons is rational according to externalism. Since Lakatos and Laudan want their theories of rationality to cover most of the history of science, and since the conscious motivations of scientists do seem to have changed over time—and have often not centered on the considerations deemed to be central by either Lakatos or Laudan—it seems that Lakatos and Laudan are locked into externalism.
However, upon further examination, externalist theories of rationality are very puzzling. Let us compare them with another form of epistemic externalism, an externalist theory of perception. According to such theories, whether one is justified depends only on whether one's perceptual belief was produced by a reliable mechanism or process. One need not be conscious of a description or justification of that process. Now, in the perceptual case, we have a general idea of the nature of the process and every reason to trust in its reliability (dreamer arguments aside). The problem with externalist theories of rationality, on the other hand, is that we have little idea of the mechanism that would make a scientist act in such a way that she minimized ad hocness even though her actual intentions were directed towards some other cognitive goal. Where externalist theories of perception depend on tangible information provided by the psychology of perception, externalist theories of rationality depend on a very mysterious invisible hand. Until the workings of that hand are made visible, we should be very suspicious of externalist theories of rationality.
One way to do this might be to try to identify the motivations of scientists producing exemplary work and to show how those motivations might stand in as a proxy for what a given historicist programme takes to be central to rationality. Section 5 takes a look at how recent historically sensitive research in the philosophy of science suggests that self-interested motives can stand in as a proxy for epistemologically laudable goals within communities that are socially structured in the way that the scientific community is structured. Another possible route is to give up on a unique end (or even a set of reconcilable ends) of scientific inquiry and take actual scientists' rational motivations at face value. This approach has the benefit of taking differences in historical motivation seriously, but it faces a challenge in explaining the tendency of science to come to consensus—see Laudan and Laudan 1989 for one attempt to explain how consensus descriptively emerges under varying motivations and how those who agree might be led to their conclusion by (internally) rational means. Moreover, if the multiplicity of ends is understood as normatively acceptable, the view appears to sanction a strong form of epistemological pluralism in need of substantive further defense.
4.2 The Problem of Implementation
Historicist theories of rationality are also much more difficult to apply than their proponents let on. Because the historicist unit of exchange (the paradigm, research programme, research tradition) has much looser conditions of individuation than the single theory, the question of how to group theories into their respective paradigms, etc. can be a difficult one. For instance, Copernicus's theory shared much of Aristotle's physics, including his commitment to spherical motion and his use of aethereal spheres; it (almost) shared Kepler's heliocentrism; and it shared Ptolemy's use of epicycles. In grouping Copernicus with Kepler and Newton, we say that his heliocentrism is more important than his beliefs about the way in which things in the heavens moved. There may be reasons for deciding upon this grouping, but the choice is not an automatic one. Moreover, it is unlikely that we will settle how to carve up the units on the basis of historical information alone, since actual scientists' judgments about the large-scale units that they and others work within are unlikely to be consistent even with those of scientists in their temporal vicinity. More needs to be said about the standards for individuating large-scale theoretical units if general claims about the nature of science and scientific methodology are to be evaluated. In addition, it may turn out that, in the end, the distinctions needed to evaluate these proposals—and even the arguably clearer distinction between “theories”—cannot be non-misleadingly made in a way that will be of much use to the history and philosophy of science (Vickers 2013).
4.3 The Problem of Acceptance
A related problem concerns the notion of the acceptance of a paradigm, research programme or research tradition. Does the acceptance of a programme involve the literal belief in its truth by every single person in the scientific community? Does it require a general belief in its usefulness? These questions have practical correlates. Was the Copernican system accepted by the time that most astronomers used the Copernican tables, despite their explicit allegiance to an Aristotelian/Ptolemaic cosmology, or by when it was widely taught in universities? Likewise, and more recently, it is difficult to say when quantum mechanics came to count as accepted or, presently, whether the multiverse hypothesis is accepted. The question of acceptance has two dimensions. The first concerns what it is for a single person to accept a paradigm, etc. The second concerns the weight of individual acceptance required for community acceptance. Since the data for historicist theories consists of matters of acceptance and rejection at the community level, historicists must provide more information here in order to satisfactorily apply their theories to anything like a substantive part of the historical record.
4.4 The Problem of Motivation
What motivates us to adopt a historicist theory? One possible motivation comes from our faith in science. To reject historicism is “to claim … that it is entirely possible that all actual scientific practice, past and present, is irrational and ‘unscientific’, which is in turn to accept the (I think) absurd further consequence that scientists might be bad at doing science” (Brown 1989, 98). However, there are several problems with this motivation. First, our faith in the rationality of science may be more an a posteriori matter than an a priori one. That is, our faith in science is not blind. We have faith in science because we have seen what it has accomplished: given our evidence from the history of science, it would be absurd to conclude that science was not rational. Here, we see that the history of science is rational because it meets our (proto) criteria for progress and rationality. However, supposing for the moment that we can point to a unique scientific method that is something like the way science is currently practiced, it may be that in other counterfactual cases we would not immediately conclude that scientific practice was rational. That is, it is not true at every possible world that there is a conceptual link between scientific practice and scientific rationality. Thus, on this view, the history of science is illustrative (and not constitutive) of rationality.
The faith-in-science motivation faces the additional problem of being much too weak for many forms of historicism. Our faith in science might lead us to believe that science is not completely irrational, or that it is more rational than not. However, some historicist theories (e.g., some readings of Lakatos such as Brown 1989) claim that the best theory of rationality is the one that, subject to certain conditions, maximizes the number of rational episodes in the history of science. General faith in science cannot prop up these maximizing theories.
The second motivation for historicism is due to a form of naturalism. If we reject the idea that epistemology is an a priori enterprise and accept that it is merely a form of science, as naturalists tend to do, then an argument for historicism like the following might be tempting: Scientific theories succeed insofar as they fit the data. The data for a scientific theory of scientific rationality, if it is to be found anywhere, must be drawn from the history of science. Thus, at a minimum, the epistemic version of historicism obtains. Moreover, assuming that fitting the data is constitutive of a good scientific theory, the more radical historicist thesis that the correct theory of scientific rationality is conceptually constrained by the history of science also obtains.
A naturalistic argument like the preceding is difficult to sustain. Notably, it depends on a simplistic view of the role of theoretical concepts within naturalism. Suppose we endorse naturalism. We can consequently treat rationality as a theoretical posit, much like electrons, viruses and the other theoretical posits of science. The electron posit et al. do not acquire their justification from a simple fit-the-data approach. It's difficult to see how they could even play such a role. Rather, they are accepted because they are essential parts of our theoretically intricate best explanations for relevant phenomena. We accept the existence of electrons because our best theories of the observable phenomena associated with electricity and atomic structure crucially depend on the hypothesis that there are electrons. Similarly, if our overall goal is to explain the history and practice of science, our best theory of rationality is the one that, along with other theoretical posits, plays a relevant and crucial role in our overall explanation of the history and practice of science. As such, we should leave aside simple presuppositions such as the claim that the best theory of rationality is the one that maximizes the number of rational episodes in the history of science. In the end, it may turn out that our best concept of rationality does maximize the number of rational episodes, but such a result should count as an empirically based bonus rather than as a desideratum.
Furthermore, once one takes the descriptive facts pertaining to scientific practice—which include various sociological facts—as bedrock, the role and nature of rationality itself becomes less clear. The Strong Programme (Bloor 1976) in the sociology of knowledge has argued that rationality plays no explanatory role whatsoever. No doubt, the arguments of the Strong Programme are at least slightly overblown, but they do show that once one moves to an explanationist viewpoint, there is no guaranteed role for rationality within naturalism. If its proponents are correct, in the end one might be left with no more than the kernel of instrumental rationality (see, for example, Giere 2006). Likewise, while not uniquely motivated by the sort of naturalism outlined here, the work of Lynn Nelson (1990), Miriam Solomon (1992, 1994a, 1994b), and Helen Longino (1990) suggests that once social aspects of scientific practice are properly accounted for, there may be no clear way of specifying what counts as rational scientific belief and practice as divorced from the sociological context of practicing scientists.
5. Neo-Historicist Developments
Even though there has been a movement away from programmatic historicist theorizing, with contemporary theorists instead tending to focus on issues particular to specific sciences, the historicist tradition continues to influence thought about the social structure of scientific practice, scientific rationality, and work on theories of rationality more generally.[2] This section examines two such neo-historicisms which employ formal methods to shed light on historicist insights. The first sort of neo-historicism is historically informed Bayesianism, which attempts to capture historicist insights by employing the well-developed formalism of the probability calculus to model rational confidence and inference. The second sort of neo-historicism, on the other hand, consists in recent research that attempts to model aspects of the social structure of science and its associated epistemic benefits, rather than focusing on the confidences of individual inquirers. Some of the ways in which these views address the problems for historicist theories of rationality presented above will be flagged as the discussion progresses. Special attention will also be paid to the ways in which these neo-historicisms relate to, and offer support or clarification of, specific claims made by the particular historicist theories discussed above.
5.1 Bayesian Historicisms
The various neo-historicist views that have been most closely examined are undoubtedly historically minded Bayesianisms. While Bayesianism proper arose quite independently of the historicist tradition and finds its justification largely elsewhere, historically centered criticisms of Bayesianism have led many Bayesians to adopt it as a criterion of adequacy that a theory of rationality accommodate key inferences from the history of science. The common core of historically minded Bayesian views consists in the following four theses. First, the level of confidence, or credence, of a rational inquirer can be modeled by a probability function. Second, in “standard” cases, rational agents update their confidences by conditionalization; that is, a rational subject's confidence in a proposition p at a time t1 is her confidence in p given q at t0 < t1 in cases in which the only change to that subject's mental state from t0 to t1 is that she learned q—in symbols, Crt1(p)=Crt0(p | q). Third, a proposition q that one has not yet learned (i) confirms, or constitutes some support for, a theory p for a subject S just in case S's confidence in p given q is greater than S's confidence in p—in symbols, Cr(p | q)>Cr(p); (ii) offers some degree of counter-support, or disconfirmation, to a theory p for a subject S just in case S's confidence in p given q is lower than S's confidence in p—in symbols, Cr(p | q)<Cr(p); and (iii) otherwise, q is said to neither confirm nor disconfirm p. Finally, in keeping with historicism, being able to account for paradigmatically good inferences from the history of science is a desideratum of a good theory of rationality.
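These theses can be illustrated with a minimal computational sketch on a toy finite space. The propositions and numbers below are invented purely for illustration; they are not drawn from any historical case.

```python
# Toy illustration of the Bayesian core theses on a finite space of
# four "worlds", each settling a theory p and a piece of evidence q.
# All numbers are made up for illustration.
prior = {
    ("p", "q"): 0.4,
    ("p", "not-q"): 0.1,
    ("not-p", "q"): 0.2,
    ("not-p", "not-q"): 0.3,
}

def cr(pred, dist):
    """Credence in the set of worlds satisfying pred (thesis one)."""
    return sum(w for world, w in dist.items() if pred(world))

def conditionalize(dist, pred):
    """Update by conditionalization on learning pred (thesis two)."""
    z = cr(pred, dist)
    return {world: (w / z if pred(world) else 0.0) for world, w in dist.items()}

def is_p(world):
    return world[0] == "p"

def is_q(world):
    return world[1] == "q"

cr_p = cr(is_p, prior)                              # Cr(p) = 0.5
cr_p_given_q = cr(is_p, conditionalize(prior, is_q))  # Cr(p | q) = 0.4/0.6
confirms = cr_p_given_q > cr_p                      # q confirms p (thesis three)
```

On these numbers Cr(p | q) ≈ 0.667 > 0.5 = Cr(p), so q counts as confirming p in the sense of the third thesis.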
Bayesianism has a lot to recommend it as an account of rational scientific inference. The theory is elegant, precise, and powerful, validating many common inferences from scientific practice from few assumptions. While it is beyond the scope of this article to give a comprehensive list of the principles of scientific reasoning that its proponents claim as its consequences, it is worth examining some of the framework's more notable consequences as well as those that speak to the problems for historicism enumerated above.
First of all, in the case in which a theory t entails some piece of evidence e and one is not perfectly confident in either proposition or their negations, it can easily be shown that Cr(t | e)>Cr(t) and, consequently, that e supports t according to Bayesian principles. Conversely, it is also a trivial consequence of the framework that Cr(t|not-e)<Cr(t) and, consequently, that the negation of e disconfirms t—in fact, not-e refutes t since the agent's confidence after learning not-e, Cr(t|not-e), plunges to 0. This lines up well with common cases from scientific practice in which some consequences of a theory are deduced and then checked against observation in experiment.
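The entailment case can be checked in a line or two directly from the probability axioms, assuming only the stated imperfect confidences (0 < Cr(t) and 0 < Cr(e) < 1):

```latex
\begin{align*}
t \models e \;\Rightarrow\; Cr(t \wedge e) &= Cr(t), \text{ so}\\
Cr(t \mid e) = \frac{Cr(t \wedge e)}{Cr(e)} = \frac{Cr(t)}{Cr(e)} &> Cr(t)
  \qquad \text{since } 0 < Cr(e) < 1,\\
Cr(t \mid \neg e) = \frac{Cr(t \wedge \neg e)}{Cr(\neg e)} = \frac{0}{Cr(\neg e)} &= 0
  \qquad \text{since } t \models e \text{ rules out } t \wedge \neg e.
\end{align*}
```

The first line gives confirmation by e; the last gives refutation by not-e.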
Several other plausible principles of scientific inference are also captured by the Bayesian formalism. It predicts that surprising evidence e—evidence in which one's prior confidence is low—should raise one's rational confidence in a theory t that entails it by a greater amount than evidence e' entailed by t that one is highly confident will obtain. On plausible measures of confirmation, it predicts that continued testing of an empirical theory's consequences will have diminishing returns and, thus, that at some point the rational scientist will do better by investigating other (compatible) theories. In a similar vein, the framework predicts that variety of evidence plays an important role in confirming a theory. The interested reader is directed to Howson & Urbach 2003, especially chapter 4, for the details.
In addition, the Bayesian framework can be, and often has been, applied to solve many of the problems raised for historicist theories of rationality. For instance, Jon Dorling (1979; 1982 (Other Internet Resources)) has argued persuasively that the Bayesian can account for what is important about the observation that the “central parts” of scientific theorizing are unlikely to be jettisoned in the face of countervailing evidence while “non-central parts” are more willingly rejected by practicing scientists when they are involved in deriving consequences that turn out to be false. In particular, Dorling has shown that there is a large range of cases in which it is rational, by Bayesian lights, for a scientist to jettison an auxiliary hypothesis a instead of a theory t when a consequence of (a&t) is observed to be false. There is also good evidence that several cases from the history of science that have this structure fall within the specified range (Dorling 1979; 1982 (Other Internet Resources); Howson & Urbach 2003, pp. 107–114). A general sufficient condition for the result that captures the range of cases presented in the literature is explored in Leitgeb 2013. See also Strevens 2001 for a different Bayesian attempt to offer both necessary and sufficient conditions for rejecting the auxiliary instead of the theory and Fitelson & Waterman 2005 for a telling criticism of that approach.
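A toy computation can illustrate the shape of the Dorling-style result. The priors and likelihoods below are hypothetical numbers chosen for illustration, not those of Dorling's historical case studies; the point is only that, with an entrenched theory and a less secure auxiliary, the refutation rationally falls on the auxiliary.

```python
# Toy Dorling-style computation (illustrative priors and likelihoods).
# A well-entrenched theory t and an auxiliary hypothesis a jointly
# entail a prediction e; the experiment instead yields not-e.
p_t, p_a = 0.9, 0.6  # independent priors: t entrenched, a less secure

# Likelihood of observing not-e in each of the four t/a cells.
# (t & a) entails e, so Pr(not-e | t & a) = 0; elsewhere set to 0.5.
like = {
    (True, True): 0.0,
    (True, False): 0.5,
    (False, True): 0.5,
    (False, False): 0.5,
}

def joint(t, a):
    """Prior joint probability of the cell (t, a), assuming independence."""
    return (p_t if t else 1 - p_t) * (p_a if a else 1 - p_a)

p_not_e = sum(joint(t, a) * like[(t, a)]
              for t in (True, False) for a in (True, False))
post_t = sum(joint(True, a) * like[(True, a)] for a in (True, False)) / p_not_e
post_a = sum(joint(t, True) * like[(t, True)] for t in (True, False)) / p_not_e
# post_t stays near 0.78 while post_a collapses to about 0.13: the
# entrenched theory survives the refutation, the auxiliary takes the hit.
```

Conditionalizing on not-e here takes the theory from 0.9 to roughly 0.78 while the auxiliary plunges from 0.6 to roughly 0.13, mirroring the asymmetry Dorling finds in his historical cases.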
As another recent example, Leah Henderson et al. (2010) have suggested that the general theoretical units of interest to the historicists discussed above, and the dynamics of those large-scale units, can be understood by appeal to hierarchical Bayesian models (HBMs). In these models, the hypothesis space is hierarchically structured, with more general and abstract theoretical hypotheses “generating”—i.e., making probable—more specific theories. It is further proposed that confirmation is to be understood as relative to these more general theoretical units which appear near the top of the hierarchy—e confirms t within framework f just in case Cr(t | e, f)>Cr(t | f), and so on. Higher-level theories are thus understood, in parallel to historicist large-scale research traditions, as guiding inquiry and as the touchstones by which the evidential impact of learned propositions on theory is measured.
Both Dorling's and Henderson et al.'s approaches promise to provide a Bayesian clarification of the notion of a “fundamental unit” of scientific practice that figures in historicist theories of rationality, making the problems of individuation, implementation, and acceptance more tractable. Supposing, with Dorling, that the central feature of said fundamental units is to resist revision under recalcitrant evidence, then, in light of the preceding discussion, it is natural to take the fundamental units to be propositions to which the scientific community assigns a confidence that is high and, in some sense, “stably high” when confronted with evidence from a suitable range. This opens up the interesting, but currently underdeveloped, possibility of offering a Bayesian analysis of the rationality of a change in the fundamental theoretical unit of science emphasized by historicist theories of rationality. Likewise, the things that play the role of large-scale theoretical units in Henderson et al.'s picture are propositions. Individuating “fundamental units” in this way gives us some hope of securing a notion of rational progress in science since, on this picture, one can rationally move from one “fundamental theory” to another by proportioning one's confidences to the evidence within the Bayesian framework. If proportioning one's confidence to the evidence in this way can be related to a metric of progress, then scientific progress will be explained on this model.
It should be noted that, of the historicisms discussed here, these approaches most closely resemble that of Lakatos, since his is the only view that takes the fundamental units to be the kinds of things to which the Bayesian apparatus might be applied—(sets of overlapping) theories or propositions. The gap between the two views is perhaps at its smallest when the Bayesian model is applied to the proposition describing the overlap, or hard core, of a Lakatosian research programme. However, the view does not appear compatible with the historicisms of either Laudan or Kuhn, who take the fundamental unit of scientific practice to be different in kind from a proposition. Moreover, within the suggested Bayesian picture there need be no difficulty understanding the content of one fundamental unit if one endorses another fundamental unit. Similarly, it is at least an open question whether researchers endorsing differing fundamental units must differ with respect to how they weigh distinct Kuhnian theoretical virtues against one another. So, the prospects of a Kuhnian model along the suggested lines seem especially dim.
In contrast, Wesley Salmon has suggested another way that the Bayesian framework might clarify Kuhn's historicism, resulting in a slightly different Bayesian treatment of the problem of individuation and the problem of implementation (Salmon 1990). Salmon points out that, on one natural reading, Kuhn's “fundamental units”—i.e., the paradigms—are best understood not as propositions that express scientific theories, as on the previous Bayesian proposal, but rather as ways of weighing up the theoretical virtues of scientific theories. The trade-offs between some Kuhnian virtues—in particular, simplicity, consistency, and the facet of fruitfulness concerned with how well a theory unifies apparently disparate phenomena—are, Salmon thinks, best captured in terms of how they fix a scientist's prior probability, Cr(t), for a theory t under examination. Scientific revolutions are then modeled within this framework as changes to those priors. And, on the standard Bayesian model, since changes in priors affect how evidence subsequently bears on relevant theories, these differences percolate up to different posterior credence assignments as the evidence comes in. This view thus endorses the weaker reading of “Kuhnian incommensurability” according to which scientists working in different paradigms can understand one another—the propositions expressed by scientists across paradigms may even be identical on this proposal—but differ in how they value and weigh the theoretical virtues of hypotheses.
The idea here is to extend Kuhn's endorsement of a Bayesian model of “normal” or “non-revolutionary” science, according to which, in Kuhn's words, “each scientist chooses between competing theories by deploying some Bayesian algorithm which permits him to compute a value for Cr(t | e), i.e., for the probability of the theory t on the evidence e available both to him and the other members of his professional group at a particular period of time”, to a model that handles both “normal” and “revolutionary” science (Kuhn as cited in Salmon 1990, 179). Seen in this light, Salmon's view is a natural extension of the Bayesian model to cases of theory choice during periods of “revolutionary” science.
The view might also naturally accommodate the more radical historicist thesis that historical facts, in some sense, play an important role in determining what is rational. As a matter of descriptive fact, agents' priors are certainly influenced by their socio-historical environment. If priors chosen at least partially on the basis of that environment can underpin further judgments regarding a subject's rationality—which is obviously contentious but will be touched on in the next section—then there will be an important sense in which what is rational for that subject will depend constitutively on a historically sensitive parameter.
Despite its congeniality to Kuhnian historicism, the extent to which Salmon's Bayesian account helps solve the general problems of individuation and implementation is less clear. It arguably helps with the Problem of Individuation in the sense that all it takes to determine a certain paradigm's effect on inquiry into a specific hypothesis is to find out how plausible scientists find that hypothesis—the scientific community's confidence judgments sum up the relevant weightings of Kuhnian virtues. However, not enough has been said to determine how much weight scientists have given to individual virtues and how those virtues weigh against one another within each respective paradigm in specific cases. Without this information, it is difficult to see how one might ascertain which paradigm is operative at any historical moment.
Recent work on formal Bayesian models of simplicity, unification and the other virtues—see, for instance, (Henderson et al. 2010), (Myrvold 2003), and (Bandyopadhyay & Boik 1999)—along with historical discussions of which theories are simpler and so on might give us a starting point for hypothesizing about the values being weighed within a given paradigm. As an example, a promising way to model the part of fruitfulness concerned with novel predictions within the Bayesian framework might make use of the observation touched on above that, by Bayesian lights, verifying a more surprising prediction raises one's rational confidence in a theory more than verifying a less surprising consequence of that theory. Combining observations like this with historical work to uncover whether, and to what degree, the predictions of the framework line up with historical cases might be one way to close in on the specific priors that constitute a given paradigm.
Perhaps the most important way in which historical Bayesianisms mark progress in theorizing about historically sensitive theories of rationality is that, of the theories presented, they have the best claim to being epistemically well-grounded. There is a series of arguments purporting to show that an agent's degrees of belief, or confidences, are rational if and only if they satisfy some set of conditions which in turn guarantee that those degrees of belief have the structure of a probability measure; at least some of these results extend to show that conditionalization is often the correct way to update one's degrees of belief. While the results cannot be claimed to be completely uncontroversial, they are numerous enough, and their assumptions sufficiently varied, that each one provides adequate independent support to make working within the framework defensible. For a few of the defenses that are particularly suited to the purpose of providing a basis for a theory of epistemic rationality see Joyce 1998, Christensen 1999, and Cox 1946. Easwaran 2011a provides a brief overview.
The epistemic underpinnings of the Bayesian framework go a long way toward solving the Problem of Externalist Theories of Rationality. Insofar as historicist rationality constraints can be explained by appealing to the Bayesian framework, they stand to inherit its epistemic foundation. In this way, descriptively accurate historicist observations about when a given inference is rational can be given an epistemically compelling internalist explanation in terms of agents' levels of confidence. This of course depends on being able to successfully give the purported externalist rationality constraints a Bayesian gloss, which will be more plausible for some constraints than for others—for instance, it is unclear whether the constraint that a theory not be ad hoc can be fully captured within the Bayesian formalism, though the formalism seems to do a good job of characterizing why surprising evidence provides more support.
Despite the many advantages of adopting a historically sensitive Bayesianism, a few difficulties with the programme are particularly salient when it is treated as an account intended to capture rational inferences in the history of science. A first issue concerns the extent to which the Bayesian framework can capture the objectivity of scientific inference intended by its practitioners. Glymour colorfully states the problem as follows: “[scientific] arguments are more or less impersonal; I make an argument to persuade anyone informed of the premisses, and in doing so I am not reporting any bit of autobiography” (Glymour 1980). Glymour takes it that, on a Bayesian picture concerned with rational confidences, all a scientist could be doing when she offers an argument is relating her particular stance on certain premises, or appealing to independent general principles that restrict personal probabilities. On the first horn of the dilemma, Bayesians mischaracterize scientific inference. On the latter, the argument continues, it is not the Bayesian framework that does the explaining.
A Bayesian strategy for dealing with the first horn might be to adopt a more extreme historicist theory according to which socio-historical facts are constitutive of rationality. Reasons for doubting that such an approach will be viable were presented above—though see the next section for a possible way to make sense of the suggestion. On the other horn, if Bayesians wish to divorce scientific inference from “autobiographical” considerations, a natural way to proceed is to insist that scientific inference is best captured in a Bayesian framework while arguing that, when scientists produce arguments, what they are doing is arguing about what confidences other inquirers objectively should have, by loosely Bayesian lights. The latter horn of the dilemma is then avoided by appealing to principles that are best understood in terms of the Bayesian framework. Unfortunately, providing a solution along these lines is no straightforward task, as is evidenced by contemporary Bayesian skepticism about so-called “objective Bayesianism” (Howson & Urbach 2003). For examples of some objective Bayesianisms, see Jaynes 2003 and Williamson 2002.
Another problem that is particularly pressing for historically sensitive Bayesianism, also due to Glymour (1980), is the Problem of Old Evidence. Recall that on the Bayesian picture, if at t0 one learns only q, one is required to update one's confidence in every proposition p to Crt1(p) = Crt0(p | q), for t1 > t0. Moreover, a proposition q counts as confirming p just in case Cr(p | q)>Cr(p). But then any case in which a scientist realizes at time t1 that some proposition e, previously learned at t0, is predicted by a theory t must be a case in which that proposition fails to confirm that theory, since Crt1(t | e) = Crt1(t). Presumably, historical cases matching this description are fairly common. For instance, Glymour notes that Copernicus argued for his theory on the basis of age-old observations, that Newton argued for universal gravitation by appeal to Kepler's laws, and that the fact that Einstein's gravitational field equations predict the otherwise unexplained advance of Mercury's perihelion was an important source of evidence for his view.
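The structural point can be put in a line. Once e has been learned at t0, the agent's later credence in e is 1, and the ratio definition of conditional confidence gives:

```latex
Cr_{t_1}(e) = 1 \;\Rightarrow\;
Cr_{t_1}(t \mid e) \;=\; \frac{Cr_{t_1}(t \wedge e)}{Cr_{t_1}(e)}
  \;=\; \frac{Cr_{t_1}(t)}{1} \;=\; Cr_{t_1}(t),
```

using the fact that $Cr_{t_1}(t \wedge e) = Cr_{t_1}(t)$ whenever $Cr_{t_1}(e) = 1$. So no proposition already learned can confirm anything in the official sense, however striking the realization that a theory predicts it.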
Several responses to the Problem of Old Evidence have been suggested in the literature that go some of the way towards remedying the problem. See Easwaran 2011a and 2011b for a good orientation. Two prominent responses to the Problem of Old Evidence are the counterfactual response and the response from models of logical learning. According to the counterfactual response, the right way to understand a case in which “old evidence” e appears to confirm a theory t is counterfactually: “old” e supports t just in case e would have confirmed t for the subject if the subject had not known e. The main difficulty with this solution is that it is difficult to uniquely, or sometimes even approximately, pin down one's counterfactual confidences. See Howson & Urbach 2003 and Earman 1990 for further discussion of this strategy. The response from models of logical learning proceeds by either obscuring the logical structure of the subject's language from the framework (Garber 1983) or coming up with a new axiomatization of probability that only requires that a fragment of the logical truths be assigned credence 1 (Gaifman 2004). In either case, the models allow for logical learning, so that learning that a theory entails old evidence can raise one's confidence in the theory. Finally, some proponents of the Bayesian framework have suggested weakening the conditionalization requirement to allow for cases of learning that do not raise one's confidence in the propositions learned to the maximum degree. In this way, it is not structurally required that learning an old piece of evidence will fail to confirm. Christensen (1999) pursues this strategy by weakening the conditionalization requirement to Jeffrey conditionalization and supplementing the treatment with a novel measure of confirmation.
The net result of the preceding discussion of difficulties for a Bayesian historicism mirrors that described above in the case of purported “proofs” of Bayesianism. It is not that any one strategy for solving the problems for Bayesianism is completely compelling (or will cover all of the cases). Yet, that there are many decent strategies for dealing with any given problem makes it possible to cleave to a roughly Bayesian framework, at least in the absence of other proposals that are as explanatorily powerful and elegant. It should be noted, however, that even granting that the neo-historicist Bayesian apparatus successfully models matters of individual rationality, much less has been said regarding its prospects for explaining the social elements of science that play such a central role in traditional historicist proposals—especially that of Kuhn. In particular, it would be desirable to have a Bayesian explanation of why the organizational, or social, structure of science taken as a whole is as effective as it is at uncovering the truth. The next subsection takes a look at a family of recent neo-historicist theories of rationality that have focused on precisely these issues.
5.2 The Social Structure of Science
Another viable approach that falls within the general rubric of historicist theories of rationality makes use of the importance that Kuhn places on the social structure of scientific communities. In particular, this approach focuses on the prudential incentive structures that scientific communities impose upon their practitioners and the ways in which such incentive structures further a community's epistemic aims by influencing the research programme choices of individual scientists working within those communities. To illustrate, suppose that a community containing N researchers has the goal of solving an outstanding problem, which in this case requires maximizing the (objective) probability that the problem is solved. Suppose also that there are two available methods or approaches, m1 and m2, for solving the problem. Finally, suppose that the members of the community are aware of the functions giving the probability that a given method will succeed when provided with a certain number of researchers. Then, the community optimizes its chances of solving the problem by allocating its workers (n workers to m1 and (N–n) workers to m2) so that
Pr(m1 succeeds when n workers are assigned to it) + Pr(m2 succeeds when N–n workers are assigned to it) – Pr(m1 and m2 succeed with n and N–n workers assigned respectively)
is maximized. According to at least some reasonable models of the probability that a method will succeed, the probability of success is maximized when some workers are assigned to m1 and others to m2.[3]
For the moment suppose also that m1 is more promising, in that the probability that m1 succeeds when n workers are assigned to it is greater than the probability that m2 succeeds when n workers are assigned to it, for every n between 1 and N. As such, it seems that, left completely to her own devices, each individual scientist will pursue m1 in virtue of its being the more promising strategy. If they treat themselves as individual epistemic agents rather than as members of an epistemic collective, the scientists will move in lockstep. However, the community's epistemic goal is not always best served if its individual scientists move in lockstep: the division of cognitive labor is best calculated in terms of marginal gains to the probability that the problem will be solved, rather than in terms of the overall probability of each respective method. If assigning an extra worker to m2 produces a greater marginal gain in the total probability of solving the problem than assigning her to m1, then the worker should be assigned to m2.
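A minimal numerical sketch makes the point concrete. The saturating success functions and all numbers below are assumptions introduced here for illustration, not drawn from any model in the literature; what matters is only that each method's marginal returns diminish as workers are added.

```python
# Toy division-of-labor calculation with illustrative success functions.
# N researchers split between methods m1 and m2; each method's chance
# of success saturates as workers are added (diminishing returns).
N = 20

def p1(k):
    """Pr(m1 succeeds with k workers): the more promising method."""
    return 0.9 * k / (k + 5)

def p2(k):
    """Pr(m2 succeeds with k workers): worse than m1 at every k."""
    return 0.7 * k / (k + 5)

def community_success(n):
    """Pr(at least one method succeeds) with n workers on m1 and
    N - n on m2, assuming independence: P1 + P2 - P1*P2."""
    a, b = p1(n), p2(N - n)
    return a + b - a * b

best_n = max(range(N + 1), key=community_success)
# Although m1 dominates m2 for every workforce size, the community
# does best by sending a substantial minority of workers to m2.
```

On these numbers the optimum assigns 13 workers to m1 and 7 to m2; moving everyone to the dominant method would lower the community's overall chance of a solution, which is the marginal-gains point in the text.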
There are various mechanisms by which the optimal allocation can be attained. For instance, it could be achieved via a governing body with the power to allocate workers as indicated by its calculations of community epistemic utility. Alternatively, if scientists were aware of the methods that other scientists were employing, they could cooperatively self-allocate within the community in order to produce an optimal distribution. In other words, a community of scientists moved solely by epistemic factors might come to the conclusion that a more varied distribution of workers among methods would increase the chance of generating a solution. Upon this realization, they could, for instance, distribute individual scientists by drawing lots. However, in cases for which there is neither a governing body nor adequate communication within a group of researchers committed to the same purely epistemic goals, one is faced with a coordination problem: by what processes are individual scientists to choose between competing strategies in such a way that the resulting distribution of labor maximizes the probability that the community produces a solution to its problem?
Kitcher (1990) suggests that attainment of an optimal distribution can be explained in terms of the prudential non-epistemic goals of individual scientists. Suppose that I, as a scientist, have the primary goal of being the first scientist to solve the problem. In that case, I may decide to employ m2 even if m1 is more promising, for the following reason. If there are i people in the m1 pool, then (if m1 works) the probability of my being the one to solve the problem is 1/i, other things being equal. If there are j people in the m2 pool, then (if m2 works) the probability of my being the one to solve the problem is 1/j, again all else equal. Suppose that the winner of the prize is likely to be someone from the m1 pool. Even on this supposition, there may be so many people in the m1 pool that it is more likely that I, an individual scientist, will solve the problem first if I jump into the less populated m2 pool. Of course, m2 might constitute such a poor option that, for a population of N scientists, it would not be in my interest to join the m2 pool, even if I were to be the only person in it.
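The arithmetic behind this reasoning can be sketched as follows. All numbers are hypothetical, and the sketch idealizes by holding each method's success probability fixed as pool sizes change.

```python
# Toy version of the Kitcher-style calculation (illustrative numbers).
# A prize goes to the first solver.  If a method succeeds, each of its
# k pool members is equally likely to have been first, so my expected
# payoff from joining a pool of k members is Pr(method succeeds) / k.
pr_m1, pool_m1 = 0.8, 15   # m1: more promising, but crowded
pr_m2, pool_m2 = 0.3, 2    # m2: less promising, but nearly empty

ev_join_m1 = pr_m1 / (pool_m1 + 1)  # I would be the 16th member
ev_join_m2 = pr_m2 / (pool_m2 + 1)  # I would be the 3rd member
# ev_join_m2 (0.10) exceeds ev_join_m1 (0.05): self-interest sends me
# to the less promising method, thinning out the crowded pool.
```

On these numbers a purely prize-seeking scientist joins m2 even though m1 is far more likely to succeed, which is exactly the mechanism by which prudential motives can spread the community across methods.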
It is plausible to think that scientists motivated by this partly epistemic goal would act according to this first-past-the-post motivation if the system of reward in the scientific community exclusively rewarded the first scientist to solve whatever problem was at hand, where rewards can include publications, prestige, academic rank, research funding, and so on. Strevens (2003) argues that, given some apparently innocuous background assumptions, a winner-takes-all system of rewards will lead to a greater chance of success than systems that might seem fairer to us, such as systems in which researchers are rewarded in proportion to expended effort or for reaching the finish line just behind the winner.
This approach bears little connection to historicism in the strong sense, in which historical episodes are taken to be somehow constitutive of rationality (as, for example, when scientific rationality is simply defined as that which maximizes the number of rational episodes in the history of science). However, the approach is historically sensitive and helps to underpin historically informed large-scale accounts of scientific rationality. For instance, the division of cognitive labor is an essential part of Kuhn's view. Kuhn intended his theory of the development of science to be naturalistic in two senses. First, it was to provide the template for an explanation of the success of normal science that included broadly social factors. Second, it was to provide a descriptively accurate account of the rise and fall of paradigms. Facts about the division of cognitive labor are important on both counts, since patterns in, and changes to, the division of cognitive labor can help to explain the success of normal science and trace the declining support for paradigms in crisis. More generally, this suggests that a treatment of the division of cognitive labor should be regarded as essential not only for Kuhnian views but for any theory of scientific development that attends to scientific practice on the ground.
Although it offers a compelling vision of the linkage between the epistemic success of the scientific community and the prudential success of individual agents, the approach under discussion faces serious problems and questions. First, the model's supposition that individual scientists are entirely motivated by prudential considerations, and thus by the desire to be the first researcher to solve the problem at hand, is clearly an idealization. Since scientists are motivated by diverse aims, the model is accordingly at best a small part of the complete story. The model also presupposes that every scientist is aware of the relevant functions from methods and numbers of workers to probabilities of success. Such an assumption seems deeply flawed in that it demands an unattainable degree of precision. Furthermore, it seems counter-intuitive that the proponents of different methods would agree on even the relative likelihoods of success. Acting in the manner the model describes also accords ill with actual scientific practice, for it would entail that some scientific agents would be willing to pursue a course of research that they knew to be less likely to succeed than other available options. One would expect scientists to pursue methods based on the theories that they believe to be the best candidates for approximate truth, or at least for approximate empirical adequacy. The sort of pursuit envisaged by the account under consideration demands an epistemic commitment from scientists so weak that scientific researchers must be regarded either as selfless theoretical altruists operating solely for the good of the community or as skeptics about the existence and/or attainability of a privileged set of goals for science.
The foregoing worries aside, the claim that the prudential activity of individual scientists can explain scientific success within a scientific community is suggestive. However, the model outlined above concerns a specific theoretical choice made by scientists at a specific time. In that sense, the model is synchronic rather than diachronic. By contrast, the theories discussed earlier in this entry are essentially diachronic in that they trace the rise and fall of global theoretical units over time. Kuhn claims that the number of researchers who profess allegiance to a paradigm remains high until the paradigm goes into crisis, at which point researchers peel off rapidly. Lakatos makes a similar claim concerning the behavior of researchers with respect to degenerating research programmes. This raises two questions. First, do these claims, whereby high and stable degrees of allegiance are followed by steep declines, map actual scientific practice? Second, can the patterns of the evolution of allegiance be modeled in terms of the prudentially motivated decisions of individual researchers?
The prudential model might also be fruitfully applied to the issue of semantic incommensurability. Traditionally, the proponents of semantically incommensurable theories (paradigms, research programmes, research projects, global theoretical units) have been construed as unable to understand each other. However, if we reconstrue incommensurability in terms of difficulty rather than impossibility (Martens & Matheson 2006), semantic incommensurability so construed has operational significance for the prudential account. Researchers who possess competence with respect to a given theoretical approach must assign an opportunity cost, in time and lost productivity, to the task of becoming competent with respect to rival approaches. Given that cost, prudentially rational researchers working within a healthy paradigm will be unlikely to adopt or even consider rival approaches (Margolis 1987, 1993). This likelihood may change as a paradigm becomes less productive (or more degenerate, from a Lakatosian perspective), because the opportunity cost, in terms of the publications and prestige to be gained from continued adherence to the first paradigm, is likely to decrease. Thus, as a paradigm's fortunes decline, prudentially rational researchers will consider, and conduct research within, other theoretical approaches. Hence, a pragmatic account of incommensurability in conjunction with the prudential model supplies a plausible explanation for the thesis that scientists working within healthy paradigms are unlikely to consider other paradigms. In addition, if we assume that Lakatos is right about the rate at which research programmes degenerate, the model also provides a plausible explanation of the thesis that the number of scientists working within a paradigm declines steeply once the paradigm fails to generate positive results.
The discussion so far has presupposed that the members of the scientific community agree on the nature, number, and relative importance of the problems it faces, a presupposition that proponents of value incommensurability reject. As such, it is not evident that the prudential model is applicable to cases of value incommensurability. In lieu of other strategies for resolving the problem, skeptical worries concerning the rationality of large-scale axiological transformations come once again to the fore. In the end, one may not be able to improve upon the Kuhnian “solution” to value incommensurability whereby large-scale axiological disagreements are settled via informed consensus (Longino 1990; Kitcher 2001).
Though the approach under discussion shares the naturalistic motivation of the crudely descriptivist historicist views discussed above under the label of “explanationism”, it clearly differs greatly from those views. Like the earlier approach, it proceeds from an acceptance of the claim that science works. Unlike the earlier approach, it attempts to provide an explanation of why it works. In doing so, it draws upon aspects of the history and sociology of science (Bird 2005; Wray 2011), formal epistemology, decision theory, computer simulations (Zollman 2007; Weisberg & Muldoon 2009), network topology (Zollman 2013), and value theory (Kitcher 2001). It faces problems, such as that of providing a normatively satisfactory treatment of value incommensurability. However, it may also provide tools for other explanations, for instance of the way in which shifts in concepts of scientific rationality and in divisions of cognitive labor across communities may track shifts in reward structures.
Finally, it is worth noting the relationship between the neo-historicisms just discussed. Although the Bayesian and prudential approaches differ in their aims and scope, both open up fruitful avenues for further exploration, and those avenues are largely complementary. The Bayesian framework promises to explain the epistemic rationality of common patterns of scientific inference, including many of those that have garnered special attention from traditional historicists, such as the tendency of scientists to reject “less central” auxiliary hypotheses before “core theories” in the face of recalcitrant evidence. In addition, as Salmon has pointed out, the framework can naturally be understood as illuminating how scientists' conflicting values bear on rational theory evaluation, and thus where the phenomenon of “value incommensurability” fits into scientific inquiry, namely, in the prior probabilities of researchers, even if it does not straightforwardly explain how those values might best be weighted. The neo-Kuhnian model of the social structure of science, on the other hand, provides insight into the rational division of cognitive labor at the level of the scientific community. In particular, it is well suited to ground that division in the motivations of individual scientists, whether these consist in purely epistemic aims, prudential goals, or a mixture of the two. It also sheds light on other phenomena often emphasized in traditional historicist accounts of rationality, including certain forms of “semantic incommensurability”, the role that the prudential interests of scientists play in scientific inquiry, and the ways in which important theoretical units rise and fall. Whether the Bayesian and neo-Kuhnian approaches could together provide a unified and explanatorily deep account of the normative and descriptive elements of scientific practice discussed in this entry remains an interesting line for future inquiry.
Bibliography
- Bandyopadhyay, P.S. and R.J. Boik, 1999, “The Curve Fitting Problem: A Bayesian Rejoinder,” Philosophy of Science, 66(S): 390–402.
- Bird, A., 2005, “Naturalizing Kuhn”, Proceedings of the Aristotelian Society, 105(1): 109–127.
- Bloor, D., 1976, Knowledge and Social Imagery, London: Routledge & Kegan Paul.
- Brown, H., 1988, Rationality, London: Routledge.
- Brown, J.R., 1989, The Rational and The Social, London: Routledge.
- Carnap, R., 1950, Logical Foundations of Probability, Chicago: University of Chicago Press.
- Christensen, D., 1999, “Measuring Confirmation,” The Journal of Philosophy, 96(9): 437–461.
- Cox, R.T., 1946, “Probability, Frequency and Reasonable Expectation,” American Journal of Physics, 14(1): 1–13.
- Dorling, J., 1979, “Bayesian Personalism, the Methodology of Research Programmes, and Duhem's Problem,” Studies in History and Philosophy of Science, 10(3): 605–613.
- Earman, J., 1992, Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, Cambridge, MA: MIT Press.
- Easwaran, K., 2011a, “Bayesianism I: Introduction and Arguments in Favor,” Philosophy Compass, 6(5): 312–320.
- –––, 2011b, “Bayesianism II: Applications and Criticisms,” Philosophy Compass, 6(5): 321–332.
- Fitelson, B. and A. Waterman, 2005, “Bayesian Confirmation and Auxiliary Hypotheses Revisited: A Reply to Strevens,” British Journal for the Philosophy of Science, 56(2): 293–302.
- Forster, M. and E. Sober, 1994, “How to Tell when Simpler, More Unified, or Less Ad Hoc Theories will Provide More Accurate Predictions,” British Journal for the Philosophy of Science, 45(1): 1–35.
- Friedman, M., 2002, “Kant, Kuhn, and the Rationality of Science,” Philosophy of Science, 69(2): 171–190.
- Gaifman, H., 2004, “Reasoning with Limited Resources and Assigning Probabilities to Arithmetical Statements,” Synthese, 140(1–2): 97–119.
- Garber, D., 1983, “Old Evidence and Logical Omniscience in Bayesian Confirmation Theory,” in Testing Scientific Theories, J. Earman (ed.), Minneapolis: Minnesota Studies in the Philosophy of Science, pp. 99–132.
- Giere, R., 2006, Scientific Perspectivism, Chicago: University of Chicago Press.
- Glymour, C., 1980, “Why I Am Not a Bayesian,” in Theory and Evidence, Princeton: Princeton University Press, pp. 63–93.
- Hacking, I., 2006, The Emergence of Probability, 2nd edition, New York: Cambridge University Press.
- Henderson, Leah, Noah D. Goodman, Joshua B. Tenenbaum, and James F. Woodward, 2010, “The Structure and Dynamics of Scientific Theories: A Hierarchical Bayesian Perspective,” Philosophy of Science, 77(2): 172–200.
- Howson, C. and P. Urbach, 2006, Scientific Reasoning: The Bayesian Approach, 3rd edition, Chicago: Open Court.
- Hoyningen-Huene, P., 1993, Reconstructing Scientific Revolutions: Thomas S. Kuhn's Philosophy of Science, A. Levine (trans.), Chicago: University of Chicago Press.
- –––, 1990, “Kuhn's Conception of Incommensurability,” Studies in History and Philosophy of Science, 21(A): 481–492.
- Jaynes, E.T., 2003, Probability Theory: The Logic of Science, Cambridge: Cambridge University Press.
- Joyce, J., 1998, “A Nonpragmatic Vindication of Probabilism,” Philosophy of Science, 65(4): 575–603.
- Kitcher, P., 1990, “The Division of Cognitive Labor,” The Journal of Philosophy, 87(1): 5–22.
- –––, 2001, Science, Truth, and Democracy, Oxford: Oxford University Press.
- Kuhn, T.S., 1962, The Structure of Scientific Revolutions, Chicago: University of Chicago Press (2nd edition published in 1970).
- –––, 1977, The Essential Tension, Chicago: The University of Chicago Press.
- Lakatos, I., 1970, “Falsification and the Methodology of Scientific Research Programmes” in I. Lakatos and A. Musgrave (eds.), Criticism and the Growth of Knowledge, Cambridge: Cambridge University Press.
- Lakatos, I. and E.G. Zahar, 1976, “Why Did Copernicus's Programme Supersede Ptolemy's?,” in R. Westman (ed.), The Copernican Achievement, Los Angeles: University of California Press.
- Laudan, L., 1977, Progress and its Problems, Berkeley: University of California Press.
- –––, 1986, “Some Problems Facing Intuitionist Meta-Methodologies,” Synthese, 67(1): 115–129.
- –––, 1984, Science and Values, Berkeley: University of California Press.
- Leitgeb, H., 2013, “Reducing Belief Simpliciter to Degrees of Belief,” Annals of Pure and Applied Logic, 164(12): 1338–1389.
- Longino, H., 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton: Princeton University Press.
- Maher, P., 1999, “The Confirmation of Black's Theory of Lime,” Studies in History and Philosophy of Science, 30(2): 335–353.
- Margolis, H., 1987, Patterns, Thinking, and Cognition: A Theory of Judgment, Chicago: University of Chicago Press.
- –––, 1993, Paradigms and Barriers: How Habits of Mind Govern Scientific Beliefs, Chicago: University of Chicago Press.
- Myrvold, W., 2003, “A Bayesian Account of the Virtue of Unification,” Philosophy of Science, 70(2): 399–423.
- Nelson, L.H., 1990, Who Knows: From Quine to Feminist Empiricism, Philadelphia: Temple University Press.
- Salmon, W.C., 1990, “Rationality and Objectivity in Science or Tom Kuhn Meets Tom Bayes,” in Scientific Theories, C.W. Savage (ed.), Minnesota Studies in the Philosophy of Science, Minneapolis: University of Minnesota Press, 14: 175–204.
- Shimony, A., 1970, “Scientific Inference,” in R.G. Colodny (ed.), The Nature and Function of Scientific Theories (Pittsburgh Studies in the Philosophy of Science, Volume 4), Pittsburgh: Pittsburgh University Press, pp. 79–172.
- Solomon, M., 1992, “Scientific Rationality and Human Reasoning,” Philosophy of Science, 59(3): 439–454.
- –––, 1994a, “Social Empiricism,” Noûs, 28(3): 323–343.
- –––, 1994b, “A More Social Epistemology,” in Socializing Epistemology: The Social Dimensions of Knowledge, Frederick Schmitt (ed.), Lanham: Rowman and Littlefield Publishers, pp. 217–233.
- Strevens, M., 2001, “The Bayesian Treatment of Auxiliary Hypotheses,” The British Journal for the Philosophy of Science, 52(3): 513–577.
- –––, 2003, “The Role of the Priority Rule in Science,” The Journal of Philosophy, 100(2): 55–79.
- Vickers, P., 2013, Understanding Inconsistent Science, Oxford: Oxford University Press.
- Weisberg, M. and R. Muldoon, 2009, “Epistemic Landscapes and the Division of Cognitive Labor,” Philosophy of Science, 76(2): 225–252.
- Williamson, T., 2002, Knowledge and Its Limits, Oxford: Oxford University Press.
- Wray, B., 2011, Kuhn's Evolutionary Social Epistemology, Cambridge: Cambridge University Press.
- Zollman, K.J.S., 2005, “Talking to Neighbors: The Evolution of Regional Meaning,” Philosophy of Science, 72(1): 69–85.
- –––, 2007, “The Communication Structure of Epistemic Communities,” Philosophy of Science, 74(5): 574–587.
- –––, 2013, “Network Epistemology: Communication in Epistemic Communities,” Philosophy Compass, 8(1): 15–27.
Other Internet Resources
- Dorling, J., 1982, “Further Illustrations of the Bayesian Solution of Duhem's Problem,” transcript of 1982 manuscript.