Realism and Theory Change in Science
Scientific theories seem to have an expiry date. If we look at the history of science, a number of theories that were once dominant and widely accepted now survive only in history of science courses. Will this be the fate of current scientific theories? Is there a pattern of radical theory-change as science grows? Are theories abandoned en bloc? Or are there patterns of retention in theory-change? That is, are some parts of theories more likely to survive than other parts? And what are the implications of all this for the scientific image of the world?
These kinds of question have played a major role in the scientific realism debate. The challenge to scientific realism is supposed to come directly from the history of science. The history of science, it is claimed, is at odds with scientific realism’s epistemic optimism. It is full of theories which were shown to be false and abandoned, despite their empirical successes. Hence, it is claimed, realists cannot be warrantedly optimistic about the (approximate) truth of currently empirically successful theories. If we take the historical evidence seriously, it is claimed, current theories too will, sooner or later, be abandoned and take their place in future history-of-science courses. This anti-realist line of argument has become known as ‘the pessimistic induction’ (aka pessimistic meta-induction)—henceforth PI. Without denying that theories change over time, scientific realists have tried to block this line of argument by showing either that it is fallacious or that there is substantive continuity in theory-change which warrants the realist’s optimism that current science is on the right track.
This entry discusses the origin and current state of the historical challenge to realism and the various realist reactions to it. The first part focuses on the first enactment of arguments based on historical pessimism, as these appeared in the so-called ‘bankruptcy of science controversy’ at the end of the nineteenth century.
The second part deals with the historical challenge to scientific realism as this is currently formulated and the various lines of defense of the claim that scientific knowledge grows despite theory-change.
- 1. The History of the Historical Challenge
- 2. Scientific Realism and the Pessimistic Induction
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. The History of the Historical Challenge
1.1 The Bankruptcy-of-science Debate
The issue of theory-change in science was debated in the context of the ‘bankruptcy of science’ controversy that was raging in Paris in the last decade of the nineteenth century and the first decade of the twentieth. A claim of growing popularity among various public intellectuals, spearheaded by Ferdinand Brunetière and Leo Tolstoy, was that scientific theories are ephemeral; and this was supposed to show that science has at best predictive value, with no legitimate claim to revealing what the world is like—especially in its unobservable aspects. In light of a growing interest in the history of science among scientists and philosophers, it was pointed out that science has had a poor track record: it has gone through many radical theory-changes in the past; hence, there is reason to believe that what is currently accepted will be overturned in the future.
In his essay “The Non-Acting”, published in French in August 1893, the Russian novelist Tolstoy (1828–1910) noted:
Lastly, does not each year produce its new scientific discoveries, which after astonishing the boobies of the whole world and bringing fame and fortune to the inventors, are eventually admitted to be ridiculous mistakes even by those who promulgated them? (…) Unless then our century forms an exception (which is a supposition we have no right to make), it needs no great boldness to conclude by analogy that among the kinds of knowledge occupying the attention of our learned men and called science, there must necessarily be some which will be regarded by our descendants much as we now regard the rhetoric of the ancients and the scholasticism of the Middle Ages. (1904: 105)
A few years earlier, in 1889, Ferdinand Brunetière (1849–1906), Professor at the École Normale Supérieure and editor of the prestigious journal Revue des Deux Mondes, noted in his review of Paul Bourget’s play ‘Le Disciple’:
We differ from animals in recognizing that humans have to be first (i.e., they have value). The laws of nature, the ‘struggle for life’ or ‘natural selection’, do not show what we have in common. Are these the only laws? Do we know whether perhaps tomorrow they will not join in the depths of oblivion the Cartesian vortices or the ‘quiddities’ of scholasticism? (1889: 222, author’s translation)
This history-fed pessimism about science, which seemed to capture the public mood, led to a spirited reaction by the scientific community. In an anonymous article that appeared in Revue Scientifique, a prestigious semi-popular scientific journal, on 17 August 1889, the following questions were raised: Is the history of science the history of human error? Will what theories affirm today be affirmed in a century or two? The reply was:
We will say to savants, philosophers and physicists, physicians, chemists, astronomers or geologists: Go forward boldly, without looking behind you, without caring for the consequences, reasonable or absurd, that can be drawn from your work. Seek the truth, without the worry of its applications. (Anonymous 1889: 215, author’s translation)
A few years later, in 1895, Brunetière struck back with an article titled ‘Après Une Visite Au Vatican’, published in Revue des Deux Mondes, claiming that science is bankrupt:
Science has failed to deliver on its promise to change ‘the face of the world’. (...) Even if this is not a total bankruptcy, it is certainly a partial bankruptcy, enough to shake off the credit from science. (1895: 98, 103)
The eminent scientist Charles Richet (1850–1935), Professor of Physiology at the Collège de France, Editor of Revue Scientifique and Nobel Laureate for Medicine in 1913, replied with an article titled ‘La Science a-t-elle fait banqueroute?’ (Revue Scientifique, 12 January 1895), which appeared in the section: Histoire des Sciences. In this, he did three things. Firstly, he noted that science can never understand the ‘why’ (‘le pourquoi’) of things, especially when it comes to the infinitely small and the infinitely large. Science “attends only to the phenomena. The intimate nature of things escapes from us” (1895: 34). Secondly, he stressed that “science has not promised anything”, let alone the discovery of the essence of things. Thirdly, he added that despite the fact that science has made no promises, it has changed the world, citing various scientific, industrial and technological successes (from the invention of printing and the microscope to the railways, the electric battery, the composition of the air, and the nature of fermentation).
Turning Brunetière’s argument on its head, Richet formulated what might be called an ‘optimistic induction’ based on the then recent history of scientific successes. To those who claim that science has failed in the past, his reply is that history shows that it is unreasonable to claim for any scientific question that we will always fail to answer it. Far from warranting epistemic pessimism, the history of science is a source of cognitive optimism. Richet referred to a few remarkable cases, the most striking of which is the case of Jean Louis Prevost and Jean Baptiste Dumas, who had written in 1823:
The pointlessness of our attempts to isolate the colouring matter of the blood gives us almost the certainty that one will never be able to find it. (1823: 246, author’s translation)
Forty years after their bold statement, Richet exclaimed, this coloured matter (haemoglobin) had been isolated, analysed and studied.
Richet’s reply to the historical challenge suggested lowering the epistemic bar for science: science describes the phenomena and does not go beyond them to their (unobservable) causes. This attitude was echoed in the reply to the ‘bankruptcy charge’ issued by the eminent chemist and politician of the French Third Republic, Marcelin Berthelot (1827–1907), in his pamphlet Science et Morale in 1897. He was firm in his claim that the alleged bankruptcy of science is an illusion of the non-scientific mind. Like Richet, he also argued that science has not pretended to have penetrated into the essence of things: “under the words ‘essence’, ‘the nature of things’, we hide the idols of our own imagination” (1897: 18, author’s translation). Science, he noted, has as its starting point the study of facts and aims to establish general relations, that is, ‘scientific laws’, on their basis. If science does not aim for more, we cannot claim that it is bankrupt; we cannot accuse it of “affirmations it did not make, or hopes it has not aroused”.[1]
Berthelot, who objected to atomism, captured a broad positivist trend in French science at the end of the nineteenth century, according to which science cannot offer knowledge of anything other than the phenomena. In light of this view, the history-fed pessimism is misguided precisely because there has been substantial continuity at the level of the description of the phenomena, even if explanatory theories have come and gone.
1.2 Duhem on Continuity
This kind of attitude was captured by Pierre Duhem’s (1906) distinction between two parts of a scientific theory: the representative part, which classifies a set of experimental laws; and the explanatory part, which “takes hold of the reality underlying the phenomena” (1906 [1954: 32]). Duhem understood the representative part of a theory as comprising the empirical laws and the mathematical formalism, which is used to represent, systematize and correlate these laws, while he thought that the explanatory part relates to the construction of physical (and in particular, mechanical) models and explanatory hypotheses about the nature of physical processes which purport to reveal underlying unobservable causes of the phenomena. For him, the explanatory part is parasitic on the representative. To support this view, he turned to the history of science, especially the history of optical theories and of mechanics. He argued that when a theory is abandoned because it fails to cover new experimental facts and laws, its representative part is retained, partially or fully, in its successor theory, while the attempted explanations offered by the theory get abandoned. He spoke of the “constant breaking-out of explanations which arise to be quelled” (1906 [1954: 33]).
Though Duhem embedded this claim for continuity in theory-change in an instrumentalist account of scientific theories, he also took it that science aims at a natural classification of the phenomena, where a classification (that is the representation of the phenomena within a mathematical system) is natural if the relations it establishes among the phenomena gathered by experiments “correspond to real relations among things” (1906 [1954: 26–27]). Hence, scientific knowledge does go beyond the phenomena but in doing so, that is, in tending to be a natural classification, it can extend only up to relations among “hidden realities whose essence cannot be grasped” (1906 [1954: 297]). A clear mark of the naturalness of a classification is when it issues in novel predictions (1906 [1954: 28]). Hence, successful novel predictions issued by a theory are a mark for the theory getting some aspects of reality right, viz. real relations among unobservable entities.[2]
1.3 Poincaré’s Relationism
This kind of relationism became a popular middle way between positivism and what may be called full-blown realism. Duhem himself, justly, traced it back to his contemporary Henri Poincaré. He noted with approval that Poincaré “felt a sort of revolt” against the proposition that “theoretical physics is a mere collection of recipes” and he “loudly proclaimed that a physical theory gives us something else than the mere knowledge of the facts, that it makes us discover the real relations among things” ([1906] 2007: 446; improved translation from the French original by Marie Gueguen and the author).
In his address to the 1900 International Congress of Physics in Paris, Poincaré made a definitive intervention in the bankruptcy-of-science debate and its history-fed pessimism. He described the challenge thus:
The people of the world [les gens du monde] are struck to see how ephemeral scientific theories are. After some years of prosperity, they see them successively abandoned; they see ruins accumulated on ruins; they predict that the theories in fashion today will quickly succumb in their turn, and they conclude that they are absolutely futile. This is what they call the bankruptcy of science. (1900: 14, author’s translation)
The view of ‘the people of the world’ is not right:
Their scepticism is superficial; they understand none of the aim and the role of scientific theories; otherwise they would understand that ruins can still be good for something.
But unlike the positivist trend around him, Poincaré took it that scientific theories offer knowledge of the relational structure of the world behind the phenomena. In the Introduction to La Science et l’Hypothèse in 1902, he made clear what he took to be the right answer to the historical challenge:
Without doubt, at first, the theories seem to us fragile, and the history of science proves to us how ephemeral they are; yet they do not entirely perish, and of each of them something remains. It is this something we must seek to unravel, since there and there alone is the true reality. (1902: 26, author’s translation)
Poincaré argued that what survives in theory-change are relations among physical magnitudes, expressed by mathematical equations within theories. His prime example was the reproduction, within Maxwell’s theory of electromagnetism, of Fresnel’s laws concerning the relations between the amplitudes of reflected and incident rays at the interface of two media, although in this transition the interpretation of these laws changed dramatically, from an ether-based account to an electromagnetic-field-based account. For Poincaré
These equations express relations, and if the equations remain true it is because these relations preserve their reality. They teach us, before and after, that there is such and such a relation between some thing and some other thing; only this something we used to call motion, we now call it electric current. But these names were only images substituted for the real objects which nature will eternally hide from us. The true relations between these real objects are the only reality we can attain to, and the only condition is that the same relations exist between these objects as between the images by which we are forced to replace them. If these relations are known, what does it matter if we deem it convenient to replace one image by another? (1900: 15, author’s translation)
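To make the example concrete: in standard modern notation (the equations are not quoted by Poincaré in this passage, and sign conventions vary), Fresnel’s laws give the ratios of reflected to incident amplitudes at the interface between two media as

\[\frac{R_s}{I_s} = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)}, \qquad \frac{R_p}{I_p} = \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)},\]

where \(\theta_i\) and \(\theta_t\) are the angles of incidence and refraction and the subscripts mark vibrations perpendicular (s) or parallel (p) to the plane of incidence. For Fresnel the amplitudes measured displacements of the ether; for Maxwell they measure electromagnetic field strengths; the equations, and hence the relations they express, carried over intact.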
In recent literature, Poincaré’s line of thought has come to be known as structural realism, though it may be best if we describe it as ‘relationism’. In the Introduction to La Science et l’Hypothèse, he noted that
the things themselves are not what it [science] can reach, as the naive dogmatists think, but only the relations between things. Apart from these relations there is no knowable reality. (1902: 25, author’s translation)
It should be stressed that Poincaré does not deny that there is reality outside relations; but he does deny that this reality is knowable. Note also that Poincaré does not use the expression ‘things in themselves’ (choses en soi) but the expression ‘things themselves’ (choses elles-mêmes). Elsewhere he talks about the “nature of things” or “real objects”. It is quite clear that he wanted to draw a distinction between how things are—what their nature is—and how they are related to each other (and to us qua knowers). A plausible way to draw this distinction is to differentiate between the intrinsic and perhaps fully qualitative properties of things—what he plausibly calls the ‘nature’ of things—and their relations. The former are unknowable, whereas the latter are knowable.[3]
So, Poincaré and Duhem initiated a strategy for dealing with theory-change in science which pointed to substantial continuities among successive theories. For them, the continuity is, by and large, relational (and in this sense mathematical). Hence, mathematically convergent scientific theories reveal the relational structure of the world.
1.4 Boltzmann Against Historical Pessimism
This relational answer to historical pessimism was motivated, at least partly, by the widespread scepticism towards the atomic theory of matter. Atomism posited the existence of unobservable entities—the atoms—to account for a host of observable phenomena (from chemical bonding to Brownian motion). Among the scientists opposed to explaining the visible in terms of the invisible were those Ludwig Boltzmann called “phenomenologists” (a group that included the early Max Planck), according to whom the aim of science was to “write down for every group of phenomena the equations by means of which their behavior could be quantitatively calculated” (Boltzmann 1901: 249). The theoretical hypotheses from which the equations might have been deduced were taken to be scaffolding, to be discarded once the equations were arrived at. For the phenomenologists, then, hypotheses are not unnecessary or useless—rather, they have only a heuristic value: they lead to stable (differential) equations and that’s it.
According to Boltzmann, a motivation for this phenomenological attitude was the “historical principle”, viz., that hypotheses are essentially insecure because they tend to be abandoned and replaced by others, “totally different” ones. As he put it:
frequently opinions which are held in the highest esteem have been supplanted within a very short space of time by totally different theories; nay, even as St. Remigius the heathens, so now they [the phenomenologists] exhorted the theoretical physicists to consign to the flames the idols that but a moment previously they had worshipped (1901: 252–253).
Like Poincaré, Boltzmann answered historical pessimism by arguing that despite the presence of “revolutions” in science, there is enough continuity in theory change to warrant the claim that some “achievements may possibly remain the possession of science for all time” (1901: 253). But unlike Poincaré, Boltzmann did not restrict the criterion of invariance-in-theory-change to relations only: the answer to the historical challenge is to look for patterns of continuity in theory change. In fact, as Boltzmann noted, if the historical principle is correct at all, it cuts also against the equations of the phenomenologists. For unless these very equations remain invariant through theory-change, there should be no warrant for taking them to be accurate descriptions of worldly relations (cf. 1901: 253). Besides, Boltzmann noted, the very construction of the differential equations of the phenomenologists requires commitment to substantive atomistic assumptions. Hence, the phenomenologists are not merely disingenuous when they jettison the atomistic assumptions after the relevant differential equations have been arrived at; their move is self-undermining. In light of the historical principle, the success of the mathematical equations would lead to their defeat, since the very theory that led to this success would fall foul of the historical principle: it would have to be abandoned.
The history-based pessimism (and the relevant debate) came to an end with the triumph of atomism in the first decade of the twentieth century. Owing to the work of Albert Einstein and the French physicist Jean Perrin on the atomic explanation of Brownian motion, one after another, major scientists who had initially been sceptical about the atomic conception of matter came to accept atomism.[4] The French philosopher André Lalande captured this point in his 1913 (pp. 366–367) thus:
M. Perrin, professor of physics at the Sorbonne, has described in Les Atomes, with his usual lucidity and vigour, the recent experiments (in which he has taken so considerable a part) which prove conclusively that the atoms are physical realities and not symbolical conceptions as people have for a long time been fond of calling them. By giving precise and concordant measures for their weights and dimensions, it is proved that bodies actually exist which, though invisible, are analogous at all points to those which we see and touch. An old philosophical question thus receives a positive solution.
Be that as it may, what this brief account of the history of the historical challenge to realism reveals is that two major lines of defense of realism were at play. Both lines of defense are based on the presence of substantial continuity in theory-change in the history of science. This continuity suggests that the disruption of the scientific image of the world, as theories change, is less radical than the historical challenge to realism assumes. But the two lines of defense (the Poincaré-Duhem and the Boltzmann one) disagree over what is retained when theories change. The Poincaré-Duhem line of defense focuses on mathematical equations (which express relations) and claims that only relations among unobservable things are knowable, whereas the Boltzmann line of defense focuses on whatever theoretical elements (including entities like atoms) are retained while theories change; hence, it does not limit scientific knowledge to the knowledge of relations only. Both lines have resurfaced in the current debate.
2. Scientific Realism and the Pessimistic Induction
2.1 The ‘Disastrous Meta-Induction’
Capitalizing on the work of Richard Boyd, the early Hilary Putnam took scientific realism to involve three theses:
- Theoretical terms refer to unobservable entities;
- Theories are (approximately) true; and
- There is referential continuity in theory change.
Putnam argued that the failure of the third thesis would lead to a disastrous “meta-induction”:
just as no term used in the science of more than fifty (or whatever) years ago referred, so it will turn out that no term used now (except maybe observation terms, if there are such) refers (1978: 25) (emphasis in the original).
An answer to this ‘disastrous’ history-fed argument was the development of a causal theory of reference, which allows for referential continuity in theory-change. This theory was first suggested by Saul Kripke (1972) as an alternative to the then dominant descriptive theories of reference of proper names and was extended by Putnam (1973, 1975) to cover natural kind terms and theoretical terms. According to the causal theory, the reference of a theoretical term t is fixed during an introducing event in which an entity or a physical magnitude is posited as the cause of various observable phenomena. The term t, then, refers to the posited entity. Though descriptions of the posited entity will typically be associated with t, they do not play a role in reference fixing. The referent has been fixed existentially: it is the entity causally responsible for certain effects.
The causal theory of reference makes it possible that the same term featuring in different theories refers to the same worldly entity. If, for instance, the referent of the term ‘electricity’ is fixed existentially, all different theories of electricity refer to, and dispute over, the same ‘existentially given’ magnitude, viz. electricity; better, the causal agent of salient electrical effects. Hence, the causal theory makes available a way to compare past and present theories and to claim that the successor theory is more truthlike than its predecessors since it says truer things of the same entities. It turns out, however, that the causal theory faces a number of conceptual problems, most notable of which is that it makes referential success inevitable insofar as the phenomena which lead to the introduction of a new theoretical term do have a cause (see Psillos 1999: chapter 11 for a discussion). Philosophers of science have tried to put forward a causal-descriptive theory of reference which makes referential continuity possible whilst allowing room for causal descriptions in fixing the reference of a theoretical term.[5]
2.2 The Principle of No Privilege
An analogous history-fed pessimistic argument can be based on the so-called “principle of no privilege”, which was advanced by Mary Hesse in her 1976. According to this principle:
our own scientific theories are held to be as much subject to radical conceptual change as past theories are seen to be. (1976: 266)
This principle can be used for the derivation of the strong conclusion that all theories are false. As Hesse put it:
Every scientific system implies a conceptual classification of the world into an ontology of fundamental entities and properties—it is an attempt to answer the question “What is the world really made of?” But it is exactly these ontologies that are most subject to radical change throughout the history of science. Therefore in the spirit of the principle of no privilege, it seems that we must say either that all these ontologies are true, ie: we must give a realistic interpretation of all of them or we must say they are all false. But they cannot all be true in the same world, because they contain conflicting answers to the question “What is the world made of?” Therefore they must all be false. (1976: 266)
This argument engages the history of theory-change in science in a substantial way. As Hesse admitted, the Principle of No Privilege arises “from accepting the induction from the history of science” (1976: 271). Hesse’s argument starts with the historical premise that, as science grows over time, there has been a recognizable pattern of change in the ‘ontology of fundamental entities and properties’ posited by scientific theories. Assuming, then, the Principle of No Privilege, it is argued that current theories too will be subjected to a radical change in the ontology of the entities and properties they posit. Hence, current theories are as false as the past ones.
The problem with this kind of argument is that the historical premise should be borne out by the actual history of theory-change in science. It’s not enough to say that scientific theories change over time; these changes should be such that the newer theories are incompatible with the past ones. Or, to use Hesse’s idiom, it should be shown that past and current scientific ‘ontologies’ are incompatible with each other. Showing incompatibility between the claims made by a current theory T and a past theory T′ requires a theory of reference of theoretical terms which does not allow that terms featuring in different theories can nonetheless refer to the same entity in the world. Hence, it is question-begging to adopt a theory of reference which makes it inevitable that there is radical reference-variance in theory-change.
Referential stability, as noted already, makes possible the claim that past and present ontologies are compatible, even if there have been changes in what current theories say of the posited entities. The “revolutionary induction from the history of science about theory change” (Hesse 1976: 268) can be blocked by pointing to a pattern of substantial continuity in theory change.
2.3 Getting Nearer to the Truth
Can a history-fed argument be used in defence of realism? William Newton-Smith (1981) was perhaps the first in the recent debate to answer this question positively. Scientific realism is committed to the following two theses:
- (1) theories are true or false in virtue of how the world is, and
- (2) the point of the scientific enterprise is to discover explanatory truths about the world.
According to Newton-Smith, (2) is under threat “if we reflect on the fact that all physical theories in the past have had their heyday and have eventually been rejected as false”. And he added:
Indeed, there is inductive support for a pessimistic induction: any theory will be discovered to be false within, say 200 years of being propounded. We may think of some of our current theories as being true. But modesty requires us to assume that they are not so. For what is so special about the present? We have good inductive grounds for concluding that current theories—even our most favourite ones—will come to be seen to be false. Indeed the evidence might even be held to support the conclusion that no theory that will ever be discovered by the human race is strictly speaking true. So how can it be rational to pursue that which we have evidence for thinking can never be reached? (1981: 14)
The key answer to this question is that even if truth cannot be reached, it is enough for the defense of realism to posit “an interim goal for the scientific enterprise”, viz., “the goal of getting nearer the truth”. If this is the goal, the “sting” of the preceding induction “is removed”. Accepting PI “is compatible with maintaining that current theories, while strictly speaking false, are getting nearer the truth” (1981: 14).
But aren’t all false theories equally false? The standard realist answer is based on what Newton-Smith called “the animal farm move” (1981: 184), viz., that though all theories are false, some are truer than others. Hence, what needed to be defended was the thesis that if a theory \(T_2\) has greater verisimilitude than a theory \(T_1\), \(T_2\) is likely to have greater observational success than \(T_1\). The key argument was based on the “undeniable fact” that newer theories have yielded better predictions about the world than older ones (cf. Newton-Smith 1981: 196). But if the ‘greater verisimilitude’ thesis is correct (that is, if theories “are increasing in truth-content without increasing in falsity-content”), then the increase in predictive power would be explained and rendered expectable. This increase in predictive power “would be totally mystifying (…) if it were not for the fact that theories are capturing more and more truth about the world” (1981: 196).
The key point, then, is that the defense of realism against the historical induction requires showing that there is, indeed, a privilege that current theories enjoy over past ones, which is strong enough to block transferring, on inductive grounds, features of past theories to current ones. For most realists, the privilege current theories enjoy over past ones is not that they are true while the past theories are false. Rather, the privilege is that they are more truthlike than past theories because they have had more predictive power than past theories. The privilege is underpinned by an explanatory argument: the increasing truthlikeness of current theories best explains their increasing predictive and empirical success.
But there is a way of seeing the historical challenge to realism on which its target is precisely to undercut the explanatory link between empirical success and truthlikeness. This was brought into sharp relief in the subsequent debates.
2.4 The Plethora of False Theories
The most famous history-based argument against realism, issued by Larry Laudan (1981), was meant to show how the explanatory link between success and truthlikeness is undermined by taking seriously the history of science. It should be noted that Laudan’s argument has been subjected to several diverging interpretations, which will be the focus of section 2.8. For the time being let’s stick to a particularly popular one, according to which Laudan argues inductively from the falsity of past theories to the falsity of current ones. This argument may be put thus:
- (L) The history of science is full of theories which had been empirically successful for long periods of time and yet were shown to be false about the deep-structure claims they had made about the world. It is similarly full of theoretical terms featuring in successful theories which do not refer. Therefore, by a simple (meta) induction on scientific theories, our current successful theories are likely to be false.
Laudan substantiated (L) by means of what he called “the historical gambit”: the following list—which “could be extended ad nauseam”—gives theories which were once empirically successful and fruitful, yet false.
Laudan’s list of successful-yet-false theories
- the crystalline spheres of ancient and medieval astronomy
- the humoral theory of medicine
- the effluvial theory of static electricity
- catastrophist geology, with its commitment to a universal (Noachian) deluge
- the phlogiston theory of chemistry
- the caloric theory of heat
- the vibratory theory of heat
- the vital force theory of physiology
- the theory of circular inertia
- theories of spontaneous generation
- the contact-action gravitational ether of Fatio and LeSage
- the optical ether
- the electromagnetic ether
This is a list of a dozen cases, but Laudan boldly noted the famous 6 to 1 ratio:
I daresay that for every highly successful theory in the past of science which we now believe to be a genuinely referring theory, one could find half a dozen once successful theories which we now regard as substantially non-referring. (1981: 35)
If we are to take seriously this “plethora” of theories that were both successful and false, it appears that (L) is meant to be a genuinely inductive argument.
- (I)
There has been a plethora of theories (ratio 6 to 1) which were successful and yet not truthlike.
Therefore, it is highly probable that current theories will not be truthlike (despite their success).
An argument such as (I) has obvious flaws. The two most important are these. The first is that the basis for the induction is hard to assess. This does not just concern the 6:1 ratio, of which one may ask: where does it come from? It also concerns the issue of how we individuate and count theories, as well as how we judge success and referential failure. Unless we are clear on all these issues in advance of the inductive argument, we cannot even start putting together the inductive evidence for its conclusion (cf. Mizrahi 2013).
The second flaw of (I) is that the conclusion is too strong. The conclusion is supposed to be that there is rational warrant for the judgment that current theories are not truthlike. The flaw with this kind of sweeping generalization is precisely that it totally disregards the fresh strong evidence there is for current theories—it renders current evidence totally irrelevant to the issue of their probability of being true. Surely this is unwarranted. Not only because it disregards potentially important differences in the quality and quantity of evidence there is for current theories (differences that would justify treating current theories as better supported by the available evidence than past theories were by the then available evidence); but also because it makes a mockery of looking for evidence for scientific theories! If I know that X is more likely than Y and that this relation cannot change by doing Z, there is no point in doing Z.
The second flaw of (I) becomes (even more) apparent when one takes a closer look at the successful-yet-false theories on Laudan’s list. Would anyone be willing to insist that, say, the humoral theory of medicine, the vital force theory of physiology or the theory of crystalline spheres are on a par with our current scientific theories with the same domain of application? The difference between their respective bodies of evidence is undoubtedly enormous. Nevertheless, it would be a mistake to restrict our attention to the theories on Laudan’s own list. Indeed, subsequent scholars have provided new lists of cases where admittedly false theories had been used in the derivation of impressive empirical predictions. Most notably, Timothy Lyons (2002: 70–72) and Peter Vickers (2013: 191–194) suggest the following (partly overlapping) lists:
Lyons’s list
- Caloric Theory
- Phlogiston Theory
- Rankine’s 19th Century Vortex Theory
- Newtonian Mechanics
- Fermat’s Principle of Least Time
- Fresnel’s Wave Theory of Light and of the Optical Ether
- Maxwell’s Ether Theory
- Dalton’s Atomic Theory
- Kekulé’s Theory of Benzene Molecule
- Mendeleev’s Periodic Law
- Bohr’s Theory of the Atom
- Dirac’s Relativistic Wave Equation
- The Original (pre-inflationary) Big Bang Theory
Vickers’s list
- Caloric Theory
- Phlogiston Theory
- Fresnel’s theory of light and the luminiferous ether
- Rankine’s vortex theory of thermodynamics
- Kekulé’s Theory of Benzene Molecule
- Dirac and the positron
- Teleomechanism and gill slits
- Reduction division in the formation of sex cells
- The Titius-Bode law
- Kepler’s predictions concerning the rotation of the sun
- Kirchhoff’s theory of diffraction
- Bohr’s prediction of the spectral lines of ionized helium
- Sommerfeld’s prediction of the hydrogen fine structure
- Velikovsky and Venus
- Steady state cosmology
- The achromatic telescope
- The momentum of light
- S-matrix theory
- Variation of electron mass with velocity
- Taking the thermodynamic limit
These lists summarize much of the work done by historians of science and historically informed philosophers of science. They are meant to present cases of empirical successes that were (supposedly) brought about by false theoretical hypotheses, hence offering a fresh source of historical challenges to realism. At first sight, the cases provided look substantially different from the majority of Laudan’s examples (viz. the successes, in at least some of them, are more impressive). Yet, it remains to be seen whether they are more troublesome for scientific realism.
2.5 The Divide et Impera Strategy
If we think of the pessimistic argument not as inductive but as a warrant-remover argument and if we also think that the fate of (past) theories should have a bearing on what we are warranted in accepting now, we should think of its structure differently. It has been argued by Psillos (1999: chapter 5) that we should think of the pessimistic argument as a kind of reductio. Argument (L) above aimed to “discredit the claim that there is an explanatory connection between empirical success and truth-likeness” which would warrant the realist view that current successful theories are truthlike. If we view the historical challenge this way, viz., as a potential warrant-remover argument, the past record of science does play a role in it, since it is meant to offer this warrant-remover.
Psillos’s (1996) reconstruction of Laudan’s argument was as follows:
Argument (P):
(A) Currently successful theories are truthlike.
(B) If currently successful theories are truthlike, then past theories are not.
(C) These characteristically false past theories were, nonetheless, empirically successful. (The ‘historical gambit’)
Hence, empirical success is not connected with truthlikeness and truthlikeness cannot explain success: the realist’s potential warrant for (A) is defeated.
Premise (B) of argument (P) is critical. It is meant to capture radical discontinuity in theory-change, which was put thus (stated in the material mode):
Past theories are deemed not to have been truth-like because the entities they posited are no longer believed to exist and/or because the laws and mechanisms they postulated are not part of our current theoretical description of the world. (Psillos 1999: 97).
In this setting, the ‘historical gambit’ (C) makes perfect sense. Unless there are past successful theories which are warrantedly deemed not to be truthlike, premise (B) cannot be sustained and the warrant-removing reductio of (A) fails. If (C) can be substantiated, success cannot be used to warrant the claim that current theories are true. The realists’ explanatory link between truthlikeness and empirical success is undercut. (C) can be substantiated only by examining past successful theories and their fate. History of science is thereby essentially engaged.
The realist response has come to be known as the divide et impera strategy to refute the pessimistic argument. The focus of this strategy was on rebutting the claim that the truth of current theories implies that past theories cannot be deemed truthlike. To defend realism, realists needed to be selective in their commitments. This selectivity was developed by Kitcher (1993) and (independently) by Psillos (1994).
One way to be selective is to draw a distinction between working posits of a theory (viz., those theoretical posits that occur substantially in the explanatory schemata of the theory) and presuppositional posits (putative entities that apparently have to exist if the instances of the explanatory schemata of the theory are to be true) (cf. Kitcher 1993: 149). Another way is to draw a distinction between the theoretical claims that essentially or ineliminably contribute to the generation of the successes of a theory and those claims that are ‘idle’ components that have made no contribution to the theory’s success (cf. Psillos 1994, 1996). The underlying thought is that the empirical successes of a theory do not indiscriminately support all theoretical claims of the theory; rather, the empirical support is differentially distributed among the various claims of the theory according to the contribution they make to the generation of the successes. Generally, Kitcher (1993) and Psillos (1996, 1999) have argued that there are ways to distinguish between the ‘good’ and the ‘bad’ parts of past abandoned theories and to show that the ‘good’ parts—those that enjoyed evidential support, were not idle components and the like—were retained in subsequent theories.
It is worth noting that, methodologically, the divide et impera strategy recommended that the historical challenge to realism can only be met by looking at the actual successes of past successful theories and by showing that those parts of past theories (e.g., the caloric theory of heat or the optical ether theories) that were fuelling the successes were retained in subsequent theories, and that those theoretical terms which were central in the relevant past theories were referential. In fact, Vickers has recently made the methodological suggestion that if one’s sole aim is to cope with a historical challenge, then it is sufficient to show that the abandoned hypotheses were not essential for the relevant theory’s empirical success, without at the same time taking sides on which theoretical hypotheses are the essential ones. As Vickers claims, in order to respond to a PI-style challenge, “all the realist needs to do is show that the specific assumptions identified by the antirealist do not merit realist commitment. And she can do this without saying anything about how to identify the posits which do merit realist commitment” (2017: 3224). Besides, according to Vickers’s conception of the dialectic of the PI-debate, the onus of proof lies with the antirealist: the antirealist has to reconstruct the derivation of a prediction, identify the assumptions that merit realist commitments and then show that at least one of them is not truthlike by our current lights; and then all the realist needs to show is that the specific assumptions were inessential. In sum, Vickers argues that “the project of responding to the historical challenge” and “the project of explaining what realists should commit to” have to be kept distinct (2017: 3222).
At any rate, whether employed in the identification of the trustworthy theoretical parts or in the (mere) handling of a historical challenge, the divide et impera move suggests that there has been enough theoretical continuity in theory-change to warrant the realist claim that science is ‘on the right track’.
2.6 Criticisms of Divide et Impera
The realist move from substantive continuity in theory-change to truthlikeness has been challenged on the grounds that there is no entitlement to move from whatever preservation of theoretical constituents there is in theory-change to these constituents’ being truthlike (Chang 2003: 910–12; Stanford 2006). Against this point it has been argued that the realist strategy proceeds in two steps (cf. Psillos 2009: 72). The first is to make the claim of continuity (or convergence) plausible, viz., to show that there is continuity in theory-change: substantive theoretical claims that featured in past theories and played a key role in their successes (especially novel predictions) have been incorporated in subsequent theories and continue to play an important role in making them empirically successful. But this first step does not establish that the convergence is to the truth. For this claim to be made plausible a second argument is needed, viz., that the emergence of this evolving-but-convergent network of theoretical assertions is best explained by the assumption that it is, by and large, truthlike. So there is, after all, entitlement to move from convergence to truthlikeness, insofar as truthlikeness is the best explanation of this convergence.
Another critical point was that the divide et impera strategy cannot offer independent support to realism since it is tailor-made to suit realism: it is the fact that the very same present theory is used both to identify which parts of past theories were empirically successful and which parts were (approximately) true that accounts for the realists’ mistaken impression that these parts coincide (Stanford 2006). As Stanford puts it:
With this strategy of analysis, an impressive retrospective convergence between our judgements of the sources of a past theory’s success and the things it ‘got right’ about the world is virtually guaranteed: it is the very fact that some features of a past theory survive in our present account of nature that leads the realist both to regard them as true and to believe that they were the sources of the rejected theory’s success or effectiveness. So the apparent convergence of truth and the sources of success in past theories is easily explained by the simple fact that both kinds of retrospective judgements have a common source in our present beliefs about nature. (2006: 166)
It has been claimed by Psillos (2009) that the foregoing objection is misguided. The problem is this. There are the theories scientists currently endorse and there are the theories that had been endorsed in the past. Some (but not all) of them were empirically successful (perhaps for long periods of time). They were empirically successful irrespective of the fact that, subsequently, they came to be replaced by others. This replacement was a contingent matter that had to do with the fact that the world did not fully co-operate with the then extant theories: some of their predictions failed; or the theories became overly ad hoc or complicated in their attempt to accommodate anomalies, or what have you. The replacement of theories by others does not cancel out the fact that the replaced theories were empirically successful. Even if scientists had somehow failed to come up with new theories, the old theories would not have ceased to be successful. So success is one thing, replacement is another.
Hence, it is one thing to inquire into what features of some past theories accounted for their success and quite another to ask whether these features were such that they were retained in subsequent theories of the same domain. These are two independent issues and they can be dealt with (both conceptually and historically) independently. One should start with some past theories and—bracketing the question of their replacement—try to identify, on independent grounds, the sources of their empirical success; that is, to identify those theoretical constituents of the theories that fuelled their successes. When a past theory has been, as it were, anatomised, we can then ask the independent question of whether there is any sense in which the sources of success of a past theory that the anatomy has identified are present in our current theories. It’s not, then, the case that the current theory is the common source for the identification of the successful parts of a past theory and of its truthlike parts.
The transition from Newton’s theory of gravity to Einstein’s illustrates this point. Einstein took it for granted that Newton’s theory of gravity (aided by perturbation theory) could account for 531 arc-seconds per century of the advance of Mercury’s perihelion. Not only were the empirical successes of Newton’s theory identified independently of the successor theory, but also some key theoretical components of Newton’s theory—the law of attraction and the claim that the gravitational effects of the planets on each other were a significant cause of the deviations from their predicted orbits—were taken to be broadly correct and explanatory of (at least part of) the successes. Einstein could clearly identify the sources of success of Newton’s theory independently of his own alternative theory, and it is precisely for this reason that he insisted that he had to recover Newton’s law of attraction (a key source of the Newtonian success) as a limiting case of his own GTR. He could then show that his new theory could do both: it could recover the (independently identified) sources of success of Newton’s theory (in the form of the law of attraction) and account for its failures by identifying further causal factors (the curvature of space-time) that explain the discrepancies between the orbits of the planets predicted by Newton’s theory of gravity and the observed trajectories.[6]
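The numbers are worth making explicit (a standard back-of-the-envelope reconstruction, not part of the original passage). The relativistic advance of the perihelion per orbit is

\[\Delta\phi = \frac{6\pi G M_{\odot}}{a(1-e^2)c^2},\]

which, for Mercury’s semi-major axis \(a\) and eccentricity \(e\), comes to roughly \(5\times 10^{-7}\) radians per orbit; over the roughly 415 orbits Mercury completes in a century, this amounts to about 43 arc-seconds. That is precisely the residue left over when the Newtonian planetary perturbations (the 531 arc-seconds mentioned above) are subtracted from the observed advance (about 574 arc-seconds per century relative to the fixed stars). GTR thus recovered the independently identified Newtonian successes and explained the remaining discrepancy.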
Apart from Stanford’s case against the divide et impera move, the latter has become the target of criticism by—among others—Timothy Lyons.[7] Lyons (2006) focuses his critique on Psillos’s criterion for the conditions under which a hypothesis indispensably contributes to the derivation of novel predictions. In his (1999: 100), Psillos says:
Suppose that \(H\) together with another set of hypotheses \(H'\) (and some auxiliaries A) entail a prediction \(P\). \(H\) indispensably contributes to the generation of \(P\) if \(H'\) and A alone cannot yield \(P\) and no other available hypothesis \(H^*\) which is consistent with \(H'\) and A can replace \(H\) without loss in the relevant derivation of \(P\).
Lyons interprets this passage—as well as Psillos’s subsequent claim that \(H^*\) must satisfy some “natural epistemic constraints”, such as being “independently motivated, non ad hoc, potentially explanatory etc.” (ibid.)—as providing the following criterion for the essential role of hypothesis \(H\) in the derivation of prediction \(P\):
For \(H\) to be essential [for the derivation of \(P\)]:
- (1) It must be the case that \(H + H' + A\) leads to \(P\).
- (2) It must not be the case that \(H' + A\), alone, leads to \(P\).
- (3) It must not be the case that any alternative, \(H^*\), is available
  - (a) that is consistent with \(H' + A\) and
  - (b) that when conjoined to \(H' + A\) leads to \(P\) and
  - (c) that is non-ad hoc ([…] it does not use the data predicted by \(P\) […], is potentially explanatory, etc.) (Lyons 2006: 539).
Thus construed, Psillos’s criterion for essentiality is criticized by Lyons as being “superfluous, unmotivated, and therefore inappropriate” (2006: 541). Briefly put, his point is that condition 3 “unacceptably overshoots” the realist’s goal, since the absence of an alternative \(H^*\) has “no bearing whatsoever on whether \(H\) itself contributed to, was deployed in, the derivation of a given prediction” (2006: 540). Besides, Lyons states that condition 3 is so vague that it is “simply inapplicable” (2006: 542). According to Mario Alai’s (2021) summary of Lyons’s point, condition 3 doesn’t specify (a) when the alternative hypothesis \(H^*\) must or must not be available, (b) what ‘potentially explanatory etc.’ means, and (c) whether \(H'\) and \(A\) must be essential too. In addition, (d) it doesn’t state whether \(H^*\) is allowed to lead to losses of other confirmed predictions, and (e) whether \(H^*\) should be consistent with those elements of \(H'\) and \(A\) which, though ‘essential’ for other predictions, are dispensable when it comes to the derivation of the prediction under scrutiny. Based on these points, Lyons suggests that even if realists might hold onto conditions 1 and 2 above, condition 3 has to be abandoned, thereby isolating “the deployment realist’s fundamental insight”, viz., that credit should be attributed to those posits that actually—as opposed to essentially—have been deployed in the derivation of empirical predictions (2006: 543).
In reply to Lyons, Peter Vickers (2017) and Alai (2021) have defended the divide et impera move against the PI by suggesting the following refinement of condition 3 (let’s call it 3′):
- (3′) \(H\) does not merit realist commitment whenever \(H\) is doing work in the derivation solely in virtue of the fact that it entails some other proposition \(H^*\) which itself is sufficient, when combined with the other assumptions in play, for the relevant derivational step (Vickers 2017: 3229).
According to Vickers, when realists are presented with an instance of a (seemingly) success-inducing-yet-false hypothesis, all they need to do is show that the specific hypothesis does not satisfy the above condition. It should be noted, however, that this, in essence, is the strategy recommended by Psillos in his 1994, where he aimed to show, using specific cases, that various assumptions, such as the caloric theory’s assumption that heat is a material substance, do not merit realist commitment, because there are weaker assumptions that fuel the derivation of the successful predictions.
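A schematic rendering of the caloric case may be helpful (a simplified reconstruction of the sort of analysis Psillos offers; the historical derivation is subtler). Laplace corrected Newton’s formula for the speed of sound in air by treating the compressions in a sound wave as adiabatic:

\[c = \sqrt{\frac{\gamma p}{\rho}} \quad \text{rather than Newton’s} \quad c = \sqrt{\frac{p}{\rho}},\]

where \(\gamma\) is the ratio of specific heats, \(p\) the pressure and \(\rho\) the density. Within the caloric theory, the hypothesis \(H\) that heat is a material substance figured in the derivation only by way of the weaker assumption \(H^*\) that no heat is exchanged during the rapid compressions; since \(H^*\) suffices for the corrected formula, condition 3′ implies that the substantival conception of heat does not merit realist commitment.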
Alai claims that substituting condition 3′ for condition 3 is an improvement of the divide et impera move, for not only does condition 3′ perform the task that Psillos had in mind, but it also escapes Lyons’s criticisms (2021: 188). To begin with, condition 3′ is said not to suffer from the (alleged) vagueness of condition 3, for according to Alai: (a) there is no question about when the alternative \(H^*\) is available; (b) there is no need to specify what ‘explanatory’ means; and (c) it is not required that \(H'\) and \(A\) are also essential. In addition, (d) condition 3′ allows that \(H^*\) may lead to losses of other confirmed predictions and (e), since 3′ invokes only hypotheses \(H^*\) which are entailed by \(H\), any such \(H^*\) is ipso facto consistent with \(H'\) and \(A\).
Now, it is rather evident that condition 3′ is neither superfluous nor unmotivated since, as Alai (2021: 188) stressed, it is motivated by a plausible epistemic principle associated with Occam’s razor:
in abductions we can assume only what is essential, i.e., the weakest hypothesis sufficient to explain a given effect; but if a hypotheses [sic], although deployed, was not essential in deriving [the novel prediction at hand], it is not essential in explaining its derivation either; therefore deployment realists need not (and must not) be committed to its truth.
In sum, contra Lyons, condition 3′ is both epistemologically motivated and indispensable for the proper application of the divide et impera move.
The ‘Vickers-Alai’ refinement of the divide et impera move has not been uncontested. It has been criticized on principled grounds, as well as for not being sufficient to deal with PI-style challenges. For instance, Dean Peters (2014) argues inter alia that Vickers’s criterion for essentiality cannot account for the unificatory aspect of scientific theorizing, whereas Florian Boge (2021) and Dana Tulodziecki (2021) have provided new historical counterexamples—from nuclear physics and psychometrics, and from the nineteenth-century miasma theory of disease, respectively—which, it is argued, cannot be handled by the ‘Vickers-Alai’ criterion.
It should also be noted that, according to Vickers himself, the employment of condition 3′ in dealing with PI seems to bring scientific realism dangerously close to structural realism. As has already been said, Vickers’s recipe for handling a PI-style challenge is roughly the following: take the (false) hypothesis \(H\) that, according to the anti-realist, is employed in the derivation of a prediction \(P\), identify an (uncontested) \(H^*\) which is entailed by \(H\) and show that \(H^*\) is enough for the derivation of \(P\). This recipe goes a long way in disarming Lyons’s objection. And yet, Vickers notes, an even weaker hypothesis \(H^{**}\) is available, viz., that for the prediction of \(P\) only the mathematical structure of \(H^*\) is required. But then, “only the very abstract ‘structure’ truly merits realist commitment, as structural realists like to claim” (2017: 3227). If we take ‘structure’ to be identified with the Ramsey sentence of a given theory (see the next section), then Vickers’s concern is, at least prima facie, a plausible one. For the Ramsey sentence of a theory is obviously entailed by the latter and, as is well known, any theory and its Ramsey sentence have exactly the same observational consequences. Hence, it seems that the employment of condition 3′ forces realists to restrict their commitment solely to the Ramsey sentences of their favoured theories. In reply, however, it should be stressed that though Vickers’s concern is prima facie warranted, it is far from conclusive. In fact, after raising his concern, Vickers doesn’t further explore it, whereas Alai (2021: 211–212) has argued that from the mere application of condition 3′ “it doesn’t follow that every hypothesis is dispensable in favor of its Ramsey sentence”.
2.7 Structural Realism
An instance of the divide et impera strategy is structural realism. This view has been associated with John Worrall (1989), who revived the relationist account of theory-change that emerged at the beginning of the twentieth century. In opposition to scientific realism, structural realism restricts the cognitive content of scientific theories to their mathematical structure together with their empirical consequences. But, in opposition to instrumentalism, structural realism suggests that the mathematical structure of a theory represents the structure of the world (real relations between things). Against PI, structural realism contends that there is continuity in theory-change, but this continuity is (again) at the level of mathematical structure. Hence, the ‘carried over’ mathematical structure of the theory correctly represents the structure of the world and this best explains the predictive success of a theory.[8]
Structural realism was independently developed in the 1970s by Grover Maxwell (1970a, 1970b) in an attempt to show that the Ramsey-sentence approach to theories need not lead to instrumentalism. Ramsey-sentences go back to a seminal idea by Frank Ramsey (1929). To get the Ramsey-sentence \(^{R}T\) of a (finitely axiomatisable) theory T we conjoin the axioms of T in a single sentence, replace all theoretical predicates with distinct variables \(u_i\), and bind these variables by placing an equal number of existential quantifiers \(\exists u_i\) in front of the resulting formula. Suppose that the theory T is represented as \(T(t_1,\ldots,t_n; o_1,\ldots,o_m)\), where T is a purely logical \(m+n\)-predicate. The Ramsey-sentence \(^{R}T\) of T is:
\[\exists u_1 \exists u_2 \ldots \exists u_n\, T(u_1,\ldots,u_n; o_1,\ldots,o_m).\]

The Ramsey-sentence \(^{R}T\) that replaces theory T has exactly the same observational consequences as T; it can play the same role as T in reasoning; it is truth-evaluable if there are entities that satisfy it; but since it dispenses altogether with theoretical vocabulary and refers to whatever entities satisfy it only by means of quantifiers, it was taken to remove the issue of the reference of theoretical terms/predicates. ‘Structural realism’ was suggested to be the view that: (i) scientific theories issue in existential commitments to unobservable entities and (ii) all non-observational knowledge of unobservables is structural knowledge, i.e., knowledge not of their first-order (or intrinsic) properties, but rather of their higher-order (or structural) properties. The key idea here was that a Ramsey-sentence satisfies both conditions (i) and (ii). So we might say that, if true, the Ramsey-sentence \(^{R}T\) gives us knowledge of the structure of the world: there is a certain structure which satisfies the Ramsey-sentence and the structure of the world (or of the relevant worldly domain) is isomorphic to this structure.
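To illustrate with a toy example (introduced here purely for illustration): let T say that all electrons are charged and that all charged things deflect a galvanometer needle, where ‘electron’ (E) and ‘charged’ (C) are theoretical predicates and ‘deflects the needle’ (D) is observational. Then T is \(\forall x(Ex \rightarrow Cx) \wedge \forall x(Cx \rightarrow Dx)\), and its Ramsey-sentence is:

\[^{R}T:\quad \exists u_1 \exists u_2 [\forall x(u_1x \rightarrow u_2x) \wedge \forall x(u_2x \rightarrow Dx)].\]

\(^{R}T\) says only that there are two properties, the one included in the other, whose instances all deflect the needle; the theoretical terms have given way to bound variables, while the observational predicate remains in place.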
Though initially Worrall’s version of structural realism was different from Maxwell’s, being focused on—and motivated by—Poincaré’s argument for structural continuity in theory-change, in later work Worrall came to adopt the Ramsey-sentence version of structural realism (see appendix IV of Zahar 2001).
A key problem with Ramsey-sentence realism is that though a Ramsey-sentence of a theory may be empirically inadequate, and hence false, if it is empirically adequate (if, that is, the structure of observable phenomena is embedded in one of its models), then it is bound to be true. For, as Max Newman (1928) first noted in relation to Russell’s (1927) structuralism, given some cardinality constraints, it is guaranteed that there is an interpretation of the variables of \(^{R}T\) in the theory’s intended domain.[9]
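The toy example above shows the worry in an especially simple (monadic) setting: interpret both \(u_1\) and \(u_2\) as the extension of the observational predicate D itself. Then

\[\forall x(u_1x \rightarrow u_2x) \wedge \forall x(u_2x \rightarrow Dx)\]

is trivially satisfied, and hence this \(^{R}T\) is true in any domain whatsoever, irrespective of what the unobservable facts are. Newman’s observation generalizes the point: provided there are enough objects, interpretations of the bound variables satisfying \(^{R}T\) can always be defined set-theoretically.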
More recently, David Papineau (2010) has argued that if we identify the theory with its Ramsey-sentence, it can be argued that past theories are approximately true if there are entities which satisfy, or nearly satisfy, their Ramsey-sentences. The advantage of this move, according to Papineau, is that the issue of referential failure is bypassed when assessing theories for approximate truth, since the Ramsey sentence replaces the theoretical terms with existentially bound variables. But as Papineau (2010: 381) admits, the force of the historical challenge to realism is not thereby thwarted. For it may well be the case that the Ramsey-sentences of most past theories are not satisfied (not even nearly so).[10]
2.8 Induction or Deduction?
In the more recent literature, there has been considerable debate as to how exactly we should understand PI. There are those, like Anjan Chakravartty, who take it that PI is an induction. He says:
PI can … be described as a two-step worry. First, there is an assertion to the effect that the history of science contains an impressive graveyard of theories that were previously believed [to be true], but subsequently judged to be false … Second, there is an induction on the basis of this assertion, whose conclusion is that current theories are likely future occupants of the same graveyard. (2008: 152)
Yet, it is plausible to think that, qua inductive argument, history-based pessimism is bound to fail. The key point here is that the sample of theories which constitutes the inductive evidence is neither random nor otherwise representative of theories in general.
It has been argued that, seen as an inductive argument, PI is fallacious: it commits the base-rate fallacy (cf. Lewis 2001). If in the past there have been many more false theories than true ones (if, in other words, truth has been rare), it cannot be concluded that there is no connection between success and truth. Take S to stand for success and not-S to stand for failure. Analogously, take T to stand for the truth of theory T and not-T for the falsity of theory T. Assume also that the probability that a theory is unsuccessful given that it is true is zero \((\textrm{Prob}({\textrm{not-}S}\mid T)=0)\) and that the probability that a theory is successful given that it is false is 0.05 \((\textrm{Prob}(S\mid {\textrm{not-}T})=0.05)\). Assume, that is, that there is a very high true-positive rate (successful and true theories) and a small false-positive rate (successful but false theories). We may then ask: how likely is it that a theory is true, given that it is successful? That is, what is the posterior probability \(\textrm{Prob}(T\mid S)\)?
The answer is indeterminate unless we take into account the base rate of truth, viz., the incidence rate of truth in the population of theories. If the base rate is very low (let us assume that only 1 in 50 theories have been true), then it is unlikely that a theory is true given its success: \(\textrm{Prob}(T\mid S)\) would be around 0.3. But this does not imply anything about the connection between success and truth. It is still the case that the false positives are low and the true positives are high. The low probability is due to the fact that truth is rare (or that falsity is much more frequent). For \(\textrm{Prob}(T\mid S)\) to be high, it must be the case that \(\textrm{Prob}(T)\) is not too small. If \(\textrm{Prob}(T)\) is low, it can dominate over a high likelihood of true positives and lead to a very low posterior probability \(\textrm{Prob}(T\mid S)\). Similarly, the probability that a theory is false given that it is successful (i.e., \(\textrm{Prob}({\textrm{not-}T}\mid S)\)) may be high simply because there are many more false theories than true ones. As Peter Lewis put it:
At a given time in the past, it may well be that false theories vastly outnumber true theories. In that case, even if only a small proportion of false theories are successful, and even if a large proportion of true theories are successful, the successful false theories may outnumber the successful true theories. So the fact that successful false theories outnumber successful true theories at some time does nothing to undermine the reliability of success as a test for truth at that time, let alone at other times (2001: 376–7).
Seen in this light, PI does not discredit the reliability of success as a test for truth of a theory; it merely points to the fact that truth is scarce among past theories.[11]
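To make the arithmetic behind these figures explicit (using the numbers assumed above: \(\textrm{Prob}(S\mid T)=1\), \(\textrm{Prob}(S\mid {\textrm{not-}T})=0.05\) and a base rate \(\textrm{Prob}(T)=0.02\)), Bayes’ theorem gives:

\[\textrm{Prob}(T\mid S) = \frac{\textrm{Prob}(S\mid T)\,\textrm{Prob}(T)}{\textrm{Prob}(S\mid T)\,\textrm{Prob}(T) + \textrm{Prob}(S\mid {\textrm{not-}T})\,\textrm{Prob}({\textrm{not-}T})} = \frac{1 \times 0.02}{1 \times 0.02 + 0.05 \times 0.98} \approx 0.29.\]

The posterior is low not because success fails to indicate truth (the likelihood ratio is a healthy \(1/0.05 = 20\)) but because the prior \(\textrm{Prob}(T)\) is so small.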
Challenging the inductive credentials of PI has acquired a life of its own. A standard objection (cf. Mizrahi 2013) is that theories are not uniform enough to allow an inductive generalization of the form “seen one, seen them all”. That is, theories are diverse enough over time, structure and content not to allow us to take a few of them—not picked randomly—as representative of all, and to project the characteristics shared by those picked onto theories in general. In particular, the list that Laudan produced is not a random sample of theories: the theories on it all predate the twentieth century and all have been chosen solely on the basis that they had had some successes (irrespective of how robust these successes were). An argument of the form:
X % of past successful theories are false
Therefore, X % of all successful theories are false
would be a weak inductive argument because
it fails to provide grounds for projecting the property of the observed members of the reference class to unobserved members of the reference class. (Mizrahi 2013: 3219)
Things would be different if we had a random sample of theories. Mizrahi (2013: 3221–3222) collected 124 instances of ‘theory’ from various sources and picked 40 of them at random. These 40 were then divided into three groups: accepted theories, abandoned theories and debated theories. Of those 40 theories, 15% were abandoned and 12% debated. Mizrahi then notes that these randomly selected data cannot justify an inductively drawn conclusion that most successful theories are false. On the contrary, an optimistic induction would be more warranted:
72% of sampled theories are accepted theories (i.e., considered true).
Therefore, 72% of all theories are accepted theories (i.e., considered true).
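For concreteness (a reconstruction on our part, assuming standard rounding of the reported percentages), the sample of 40 breaks down as roughly 29 accepted, 6 abandoned and 5 debated theories:

\[29/40 = 72.5\%, \quad 6/40 = 15\%, \quad 5/40 = 12.5\%.\]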
Mizrahi has come back to the issue of random sampling and has attempted to show that the empirical evidence is against PI:
If the history of science were a graveyard of dead theories and abandoned posits, then random samples of scientific theories and theoretical posits would contain significantly more dead theories and abandoned posits than live theories and accepted posits.
It is not the case that random samples of scientific theories and theoretical posits contain significantly more dead theories and abandoned posits than live theories and accepted posits.
Therefore, it is not the case that the history of science is a graveyard of dead theories and abandoned posits. (2016: 267)
A similar argument has been defended by Park (2011). We may call it the explosion argument: most key theoretical terms of successful theories of the twentieth century refer “in the light of current theories”. But then, “most central terms of successful past theories refer”, the reason being that twentieth-century theories make up by far the greater part of all theories. This is because “the body of scientific knowledge exploded in the twentieth century with far more human and technological resources” (2011: 79).
Let’s call this broad way of challenging the inductive credentials of the pessimistic argument ‘the Privilege-for-current-theories strategy’. It has been adopted by Michael Devitt (2007) too, though restricted to entities. Devitt, who takes realism to be a position concerning the existence of unobservables, noted that the right question to ask is this: ‘What is the “success ratio” of past theories?’, where the “success ratio” is “the ratio of the determinately existents to the determinately nonexistents + indeterminates”. Asserting a privilege for current science, he claims that “we are now much better at finding out about unobservables”. According to him, then, it is “fairly indubitable” that the historical record shows “improvement over time in our success ratio for unobservables”.
In a similar fashion but focusing on current theories, Doppelt (2007) claims that realists should confine their commitment to the approximate truth of current best theories, where best theories are those that are both most successful and well established. The asymmetry between current best theories and past ones is such that the success of current theories is of a different kind than the success of past theories. The difference, Doppelt assumes, is so big that the success of current theories can only be explained by assuming that they are approximately true, whereas the explanation of the success of past theories does not require this commitment.
If this is right, there is sufficient qualitative distance between past theories and current best ones to block
any pessimistic induction from the successful-but-false superseded theories to the likelihood that our most successful and well-established current theories are also probably false. (Doppelt 2007: 110).
The key difference, Doppelt argues, is that
our best current theories enjoy a singular degree of empirical confirmation impossible for their predecessors, given their ignorance of so many kinds of phenomena and dimensions of nature discovered by our best current theories.
This singular degree of empirical confirmation amounts to raising the standards of empirical success to a level unreachable by past theories (cf. 2007: 112).
The advocate of PI can argue that past ‘best theories’ also raised and met the standards of empirical success, which inductively supports the conclusion that current best theories will be superseded by others which will meet even higher standards of success. Doppelt’s reply is that this new version of PI “should not be given a free pass as though it were on a par with the original pessimistic induction”, the reason being that “in the history of the sciences, there is greater continuity in standards of empirical success than in the theories taken to realize them”. The standards of empirical success, that is, change more slowly than theories; hence it is not very likely that current standards of empirical success will change any time soon.
It has been argued, however, that Doppelt cannot explain the novel predictive success of past theories without arguing that they had truthlike constituents (cf. Alai 2017). Besides, as Alai puts it, “current best theories explain the (empirical) success of discarded ones only to the extent that they show that the latter were partly true” (2017: 3282).
The ‘Privilege-for-current-theories strategy’ has been supported by Ludwig Fahrbach (2011). The key point of this strategy is that the history of science does not offer a representative sample of the totality of theories that should be used to feed the historical pessimism of PI. In order to substantiate this, Fahrbach suggested, based on extensive bibliometric data, that over the last three centuries the number of papers published by scientists as well as the number of scientists themselves have grown exponentially, with a doubling rate of 15–20 years. Hence, he claims, the past theories that feed the historical premise of PI were produced during the time of the first 5% of all scientific work ever done by scientists. As such, the sample is totally unrepresentative of theories as a whole; and hence the pessimistic conclusion, viz., that current theories are likely to be false and abandoned in due course, is inductively unwarranted. Moreover, Fahrbach argues, the vast majority of theories enunciated in the last 50–80 years (which constitute the vast majority of scientific work ever produced) are still with us. Hence, as he puts it,
(t)he anti-realist will have a hard time finding even one or two convincing examples of similarly successful theories that were accepted in the last 50–80 years for some time, but later abandoned. (2011: 152)
Since there have been practically no changes “among our best (i.e., most successful) theories”, Fahrbach suggests
an optimistic meta-induction to the effect that they will remain stable in the future, i.e., all their empirical consequences which scientists will ever have occasion to compare with results from observation at any time in the future are true. (2011: 153)
The conclusion is that the PI is unsound: “its conclusion that many of our current best scientific theories will fail empirically in the future cannot be drawn” (2011: 153).
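A back-of-the-envelope calculation conveys the bibliometric point (the doubling time is taken from the upper end of Fahrbach’s range; the rest is illustrative arithmetic). If the cumulative amount of scientific work grows exponentially with doubling time \(d\), the fraction of all work produced more than \(\tau\) years ago is roughly \(2^{-\tau/d}\). With \(d = 20\) and \(\tau = 80\):

\[2^{-80/20} = 2^{-4} = \frac{1}{16} \approx 6\%.\]

On these figures, virtually all scientific work postdates the theories on Laudan-style lists, which is why Fahrbach deems the latter an unrepresentative sample.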
A key assumption of the foregoing argument is that there is a strong connection between the amount of scientific work (as measured by the number of journal articles) and the degree of success of the best scientific theories. But this can be contested on the grounds that it is much easier to publish now than it was in the seventeenth century and that current research is more tightly connected to the defense of a single theoretical paradigm than before. This might well be a sign of the maturity of current science but, as it stands, it does not show that the present theoretical paradigm is not subject to radical change. Florian Müller (2015) put the point in terms of decreasing marginal revenues. The correlation between increased scientific work and scientific progress, which Fahrbach assumes, may not be strong enough:
It seems more plausible to expect decreasing marginal revenues of scientific work since it usually takes much less time to establish very basic results than to make progress in a very advanced state of science. (Müller 2015: 404)
The ‘Privilege-for-current-theories strategy’ can be further challenged on the grounds that it requires some “fundamental difference between the theories we currently accept, and the once successful theories we have since rejected” (Wray 2013: 4325). As Brad Wray (2013) has argued, Fahrbach’s strategy is doomed to fail because the argument from exponential growth could be repeated at former periods too, thereby undermining itself. Imagine that we are back in 1950 and we look at the period between 1890 and 1950. We could then argue, along Fahrbach’s lines, that the pre-1890 theories (which were false and abandoned) were an unrepresentative sample of all theories, that the recent theories (1890–1950) constitute by far the majority of all theories produced up to 1950, and that, since most of them had not been abandoned (by 1950), they are likely to remain impervious to theory-change. Or imagine that we are further back in 1890 and look at the theories of the period 1830–1890. We could run the same argument about those theories, viz., that they are likely to survive theory-change. But if we look at the historical pattern, they did not survive; nor did the theories of 1890–1950. By the same token, we should not expect current theories to survive theory-change.
Is there room for defending an epistemic privilege for current science? Two points are worth making. The first is that it is hard to defend some kind of epistemic privilege of current science if the realist argument against PI stays only at the level of statistics (even assuming that there can be statistics over theories). If there is an epistemic privilege of current science in relation to past science, it is not a matter of quantity but of quality. The issue is not specifying how likely it is that an arbitrary current theory T be true, given the evidence of the past record of science. The issue, instead, is how a specific scientific theory—a real theory that describes and explains certain well-founded worldly phenomena—is supported by the evidence there is for it. If we look at the matter from this perspective, we should look at case-histories and not at the history of science at large. The evidence there is for a specific theory T (e.g., the Darwinian synthesis or GTR) need not be affected by past failures in the theoretical understanding of the world in general. The reason is that there is local epistemic privilege, that is, privilege over past relevant theories concerning first-order evidence and specific methods.
The second point is this. Wray’s argument against Fahrbach is, in effect, that there can be a temporal meta-(meta-)induction which undermines at each time \(t\) (or period \(Dt\)) the privilege that scientific theories at \(t\) or \(Dt\) are supposed to have. So Wray’s point is this: at each time \(t_{i}\) (or period \(Dt_{i}\)), scientists claim that their theories are not subject to radical change at subsequent times; but if we look at the pattern of theory-change over time, the history of science shows that there have been subsequent times \(t_{i+1}\) (or periods \(Dt_{i+1}\)) such that the theories accepted at \(t_{i}\) were considered false and abandoned. Hence, he takes it that at no time \(t_{i}\) are scientists justified in accepting their theories as not being subject to radical change in the future. But this kind of argument is open to the following criticism. It assumes, as it were, unit-homogeneity, viz., that science at all times \(t_{i}\) (and all periods \(Dt_{i}\)) is the same when it comes to how far it is from the truth. Only on this assumption can it be argued that at no time can scientists claim that their theories are not subject to radical change. For if there are senses in which subsequent theories are closer to the truth than their predecessors, it is not equally likely that they will be overturned as their predecessors were.
The point, then, is that though at each time \(t_{i}\) (or period \(Dt_{i}\)) scientists might well claim that their theories are not subject to radical change at subsequent times, they are not equally justified in making this claim! There might well be times \(t_{i}\) (or periods \(Dt_{i}\)) at which scientists are more justified in claiming that their theories are not subject to radical change at subsequent times, simply because they have reasons to believe that their theories are truer than their predecessors. To give an example: if Wray’s argument is right, then Einstein’s GTR is as likely to be overthrown at future time \(t_{2100}\) as was Aristotle’s crystalline spheres theory at past time \(t_{-300}\). But this is odd. It totally ignores the fact that all available evidence renders GTR closer to the truth than the simply false Aristotelian theory. In other words, that GTR has substantial truth-content makes it less likely to be radically revised in the future.
An analogous point was made by Park (2016). He defined what he called Proportional Pessimism as the view that “as theories are discarded, the inductive rationale for concluding that the next theories will be discarded grows stronger” (2016: 835). This view entails that the more theories have been discarded before T is discarded, the more justified we are in thinking that T is likely to be discarded. However, it is also the case that, based on their greater success, we are more justified in taking newer theories to be more likely to be truthlike than older ones. We then reach a paradoxical situation: we are justified in taking newer theories to be both more likely to be truthlike than older ones and more likely to be abandoned than older ones.
If an inductive rendering of historical pessimism fails, would a deductive rendering fare better? Could PI be considered at least as a valid deductive argument? Wray (2015: 65) interprets Laudan’s original argument as deductive. He notes:
as far as Laudan is concerned, a single successful theory that is false would falsify the realist claim that (all) successful theories are true; and a single successful theory that refers to a non-existent type of entity would falsify the realist claim that (all) successful theories have genuinely referring theoretical terms.
But if this is the intent of the argument, history plays no role in it. All that is needed is a single counterexample, past or present. This, it should be noted, is an endemic problem with all attempts to render PI as a deductive argument. Müller, for instance, notes that the fundamental problem raised by PI is “simply that successful theories can be false”. He adds:
Even just one counterexample (as long as it is not explained away) undermines the claim that truth is the best explanation for the success of theories as it calls into question the explanatory connection in general. (2015: 399)
Thus put, the history of past failures plays no role in PI. Any counterexample, even one concerning a current theory, will do.
How is it best to understand the realist theses that the history of science is supposed to undermine? Mizrahi (2013: 3224) notes that the realist claim is not meant to be a universal statement. As he puts it:
Success may be a reliable indicator of (approximate) truth, but this is compatible with some instances of successful theories that turn out not to be approximately true. In other words, that a theory is successful is a reason to believe that it is approximately true, but it is not a conclusive proof that the theory is approximately true.
The relation between success and (approximate) truth, in this sense, is more like the relation between flying and being a bird: flying characterizes birds even though kiwis do not fly. If this is so, then more than one counterexample is needed for the realist thesis to be undermined.
A recent attempt to render PI as a deductive argument is due to Timothy Lyons (2016b), who takes realism to issue in the following meta-hypothesis: “our successful scientific theories are (approximately) true”. He then reconstructs PI thus:
(D)
- 1. If (a) the realist meta-hypothesis were true, then (b) we would have no successful theories that cannot be approximately true. (If we did, each would be a “miracle,” which no one of us accepts.)
- 2. However, (not-b) we do have successful theories that cannot be approximately true: the list (of “miracles”).
- 3. Therefore, (not-a) the realist meta-hypothesis is false. (And the no-miracles argument put forward to justify that meta-hypothesis is unacceptable.)
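Schematically, (D) is an instance of modus tollens:

\[a \rightarrow b, \quad \neg b\ \vdash\ \neg a.\]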
This is supposed to be a deductive argument against the ‘meta-hypothesis’. But in his argument the history of science plays no role. All that is needed for the argument above to be sound is a single instance of a successful theory that is not true. A single non-white swan is enough to falsify the hypothesis ‘All swans are white’—there is no point in arguing here: the more, the merrier! In a similar fashion, it doesn’t add much to argument (D) to claim
the quest to empirically increase the quantity of instances (…) is rather to secure the soundness of the modus tollens, to secure the truth of the pivotal second premise, the claim that there are counterinstances to the realist meta-hypothesis. (Lyons 2016b: 566)
In any case, a critical question is: can some false-but-rigorously-empirically-successful theories justifiably be deemed truthlike from the point of view of successor theories? This question is hard to answer without looking at actual cases in the history of science. The general point, made by Vickers (2017), is that it is not enough for the challenger of realism to identify some components of past theories which were contributing to their successes but were not retained in subsequent theories. The challenger of realism should show that the false components “merit realist commitment”. If they do not, “(…) that is enough to answer the historical challenge”.
More generally, the search for a generic form of the pessimistic X-duction (In-duction or De-duction) has led to the following dilemma: if the argument is inductive, it is at best weak; if the argument is deductive, even if it is taken to be sound, it makes the role of the history of science irrelevant.[12]
2.9 A New Induction
Stanford (2006) has aimed to replace PI with what he calls the ‘new induction’ on the history of science, according to which past historical evidence of transient underdetermination of theories by evidence makes it likely that current theories will be supplanted by hitherto unknown (unconceived) ones, which nonetheless, are such that when they are formulated, they will be at least as well confirmed by the evidence as the current ones. But the new induction is effective, if at all, only in tandem with PI. For if there is continuity in our scientific image of the world, the hitherto unconceived theories that will replace the current ones won’t be the radical rivals they are portrayed to be.[13]
When it comes to the realist commitment to theories, the proper philosophical task is to ignore neither the first-order scientific evidence that there is for a given theory nor the lessons that can be learned from the history of science. Rather, the task is to balance the first-order and the second-order evidence. The first-order evidence is typically associated with whatever scientists take into account when they form an epistemic attitude towards a theory. It can be broadly understood to include some of the theoretical virtues of the theory at hand—of the kind that typically go into plausibility judgments associated with the assignment of prior probability to theories. The second-order evidence comes from the past record of scientific theories and/or from meta-theoretical (philosophical) considerations that have to do with the reliability of scientific methodology. It concerns not particular scientific theories, but science as a whole. This second-order evidence feeds claims such as those that motivate PI or the New Induction. Indeed, this second-order evidence is multi-faceted—it is negative (showing limitations and shortcomings) as well as positive (showing how learning from experience can be improved).
Bibliography
- Ainsworth, Peter M., 2009, “Newman’s Objection”, The British Journal for the Philosophy of Science, 60(1): 135–171. doi:10.1093/bjps/axn051
- Alai, Mario, 2017, “Resisting the Historical Objections to Realism: Is Doppelt’s a Viable Solution?” Synthese, 194(9): 3267–3290. doi:10.1007/s11229-016-1087-z
- –––, 2021, “The Historical Challenge to Realism and Essential Deployment”, in T.D. Lyons and P. Vickers (eds.), Contemporary Scientific Realism: The Challenge from the History of Science, Oxford: Oxford University Press, pp. 183–215.
- Anonymous, 1889, “Causerie Bibliographique”, Revue Scientifique, No. 7, August 17 1889. [Anonymous 1889: 215 available online]
- Berthelot, Marcelin, 1897, Science et Morale, Paris: Calmann Levy. [Berthelot 1897 available online]
- Boge, Florian J., 2021, “Incompatibility and the Pessimistic Induction: A Challenge for Selective Realism”, European Journal for Philosophy of Science, 11(2): 1–31. doi:10.1007/s13194-021-00367-4
- Boltzmann, Ludwig, 1901, “The Recent Development of Method in Theoretical Physics”, The Monist, 11(2): 226–257. doi:10.5840/monist190111224
- Brunetière, Ferdinand, 1889, “Revue Littéraire—A propos du Disciple de Paul Bourget”, Revue des Deux Mondes, 94: 214–226. [Brunetière 1889 available online]
- –––, 1895, “Après une visite au Vatican”, Revue des Deux Mondes, 127: 97–118. [Brunetière 1895 available online]
- Chakravartty, Anjan, 2008, “What you don’t Know can’t Hurt you: Realism and the Unconceived”, Philosophical Studies, 137: 149–158. doi:10.1007/s11098-007-9173-1
- Chang, Hasok, 2003, “Preservative Realism and its Discontents: Revisiting Caloric”, Philosophy of Science, 70(5): 902–912. doi:10.1086/377376
- Cordero, Alberto, 2011, “Scientific Realism and the Divide et Impera Strategy: The Ether Saga Revisited”, Philosophy of Science, 78(5): 1120–1130. doi:10.1086/662566
- Cruse, Pierre, 2005, “Ramsey-sentences, Structural Realism and Trivial Realisation”, Studies in History and Philosophy of Science, 36(3): 557–576. doi:10.1016/j.shpsa.2005.07.006
- Cruse, Pierre & David Papineau, 2002, “Scientific Realism Without Reference”, in Michele Marsonet (ed.) The Problem of Realism, Aldershot: Ashgate.
- Demopoulos, William, 2003, “On the Rational Reconstruction of Our Theoretical Knowledge”, The British Journal for the Philosophy of Science, 54(3): 371–403. doi:10.1093/bjps/54.3.371
- Devitt, Michael, 2007, “Scientific Realism”, in Frank Jackson and Michael Smith (eds), The Oxford Handbook of Contemporary Philosophy, Oxford: Oxford University Press.
- –––, 2011, “Are Unconceived Alternatives a Problem for Scientific Realism?” Journal for General Philosophy of Science, 42(2): 285–293. doi:10.1007/s10838-011-9166-9
- Doppelt, Gerald, 2007, “Reconstructing Scientific Realism to Rebut the Pessimistic Meta-induction”, Philosophy of Science, 74(1): 96–118. doi:10.1086/520685
- Duhem, Pierre Maurice Marie, 1906 [1954], Théorie physique: son objet et sa structure, Paris. Translated from the 1914 second edition as The Aim and Structure of Physical Theory, Philip P. Wiener (trans.), Princeton, NJ: Princeton University Press, 1954.
- –––, 1906 [2007], La Théorie Physique: son objet, sa structure, Paris: Vrin.
- Fahrbach, Ludwig, 2011, “How the Growth of Science Ends Theory Change”, Synthese, 180(2): 139–155. doi:10.1007/s11229-009-9602-0
- French, Steven, 2014, The Structure of the World: Metaphysics and Representation, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199684847.001.0001
- Frost-Arnold, Greg, 2014, “Can the Pessimistic Induction be Saved from Semantic Anti-Realism about Scientific Theory?”, British Journal for the Philosophy of Science, 65(3): 521–548. doi:10.1093/bjps/axt013
- Gueguen, Marie & Stathis Psillos, 2017, “Anti-Scepticism and Epistemic Humility in Pierre Duhem’s Philosophy of Science”, Transversal, 2: 54–73. doi:10.24117/2526-2270.2017.i2.06
- Hesse, Mary B., 1976, “Truth and Growth of Knowledge”, in F. Suppe & P.D. Asquith (eds), PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, volume 2, pp. 261–280, East Lansing: Philosophy of Science Association. doi:10.1086/psaprocbienmeetp.1976.2.192385
- Kitcher, Philip, 1993, The Advancement of Science: Science Without Legend, Objectivity Without Illusions, Oxford: Oxford University Press. doi:10.1093/0195096533.001.0001
- Kripke, Saul, 1972, “Naming and Necessity”, in Donald Davidson and Gilbert Harman (eds), Semantics of Natural Language, Dordrecht: Reidel, pp. 253–355, 763–769.
- Ladyman, James, 1998, “What is Structural Realism?”, Studies in History and Philosophy of Science, 29(3): 409–424. doi:10.1016/S0039-3681(98)80129-5
- Ladyman, James & Don Ross, 2007, Every Thing Must Go: Metaphysics Naturalised, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199276196.001.0001
- Lalande, André, 1913, “Philosophy in France in 1912”, The Philosophical Review, 22(4): 357–374. doi:10.2307/2178386
- Laudan, Larry, 1981, “A Confutation of Convergent Realism”, Philosophy of Science, 48(1): 19–49. doi:10.1086/288975
- Lewis, Peter J., 2001, “Why the Pessimistic Induction Is a Fallacy”, Synthese, 129(3): 371–380. doi:10.1023/A:1013139410613
- Lyons, Timothy D., 2002, “Scientific Realism and the Pessimistic Meta-Modus Tollens”, in S. Clarke and T.D. Lyons (eds.), Recent Themes in the Philosophy of Science: Scientific Realism and Commonsense, Dordrecht: Springer, pp. 63–90.
- –––, 2006, “Scientific Realism and the Stratagema de Divide et Impera”, British Journal for the Philosophy of Science, 57(3): 537–560, doi:10.1093/bjps/axl021
- –––, 2016a, “Structural Realism versus Deployment Realism: A Comparative Evaluation”, Studies in History and Philosophy of Science, 59: 95–105. doi:10.1016/j.shpsa.2016.06.006
- –––, 2016b, “Scientific Realism”, in Paul Humphreys (ed.) The Oxford Handbook of Philosophy of Science, Oxford: Oxford University Press, pp. 564–584.
- Magnus, P.D. & Craig Callender, 2004, “Realist Ennui and the Base Rate Fallacy”, Philosophy of Science, 71(3): 320–338. doi:10.1086/421536
- Maxwell, Grover, 1970a, “Theories, Perception and Structural Realism”, in Robert Colodny (ed.) The Nature and Function of Scientific Theories: Essays in Contemporary Science and Philosophy, (University of Pittsburgh Series in the Philosophy of Science, 4), Pittsburgh, PA: University of Pittsburgh Press, pp. 3–34.
- –––, 1970b, “Structural Realism and the Meaning of Theoretical Terms”, in Michael Radner and Stephen Winokur (eds), Analyses of Theories and Methods of Physics and Psychology, (Minnesota Studies in the Philosophy of Science, 4), Minneapolis: University of Minnesota Press, pp. 181–192.
- Mizrahi, Moti, 2013, “The Pessimistic Induction: A Bad Argument Gone Too Far”, Synthese, 190(15): 3209–3226. doi:10.1007/s11229-012-0138-3
- –––, 2016, “The History of Science as a Graveyard of Theories: A Philosophers’ Myth?”, International Studies in Philosophy of Science, 30(3): 263–278. doi:10.1080/02698595.2017.1316113
- Müller, Florian, 2015, “The Pessimistic Meta-induction: Obsolete Through Scientific Progress?”, International Studies in the Philosophy of Science, 29(4): 393–412. doi:10.1080/02698595.2015.1195144
- Newman, M.H.A., 1928, “Mr. Russell’s ‘Causal Theory of Perception’”, Mind, 37(146): 137–148. doi:10.1093/mind/XXXVII.146.137
- Newman, Mark, 2005, “Ramsey Sentence Realism as an Answer to the Pessimistic Meta-Induction”, Philosophy of Science, 72(5): 1373–1384. doi:10.1086/508975
- Newton-Smith, W.H., 1981, The Rationality of Science, London: Routledge & Kegan Paul.
- Papineau, David, 2010, “Realism, Ramsey Sentences and the Pessimistic Meta-Induction”, Studies in History and Philosophy of Science, 41(4): 375–385. doi:10.1016/j.shpsa.2010.10.002
- Park, Seungbae, 2011, “A Confutation of the Pessimistic Induction”, Journal for General Philosophy of Science, 42(1): 75–84. doi:10.1007/s10838-010-9130-0
- –––, 2016, “Refutations of the Two Pessimistic Inductions”, Philosophia, 44(3): 835–844. doi:10.1007/s11406-016-9733-8
- Paul, Harry W., 1968, “The Debate over the Bankruptcy of Science in 1895”, French Historical Studies, 5(3): 299–327. doi:10.2307/286043
- Peters, Dean, 2014, “What Elements of Successful Scientific Theories Are the Correct Targets for ‘Selective’ Scientific Realism?”, Philosophy of Science, 81(3): 377–397. doi:10.1086/676537
- Poincaré, Henri, 1900, “Sur les Rapports de la Physique Expérimentale et de la Physique Mathématique”, in Rapports Présentés au Congrès International de Physique, Vol. XCVI: 245–263.
- –––, 1902, La Science et l’Hypothèse, (1968 reprint), Paris: Flammarion.
- Prevost, Jean Louis, and Dumas, Jean-Baptiste André, 1823, “Examen du Sang et de son Action dans les Divers Phénomènes de la Vie”, Journal de Physique, de Chimie et d’Histoire Naturelle, XCVI: 245–263. [Prevost & Dumas 1823 available online]
- Psillos, Stathis, 1994, “A philosophical study of the transition from the caloric theory of heat to thermodynamics: Resisting the pessimistic meta-induction”, Studies in the History and Philosophy of Science, 25(2): 159–190. doi:10.1016/0039-3681(94)90026-4
- –––, 1996, “Scientific Realism and the ‘Pessimistic Induction’”, Philosophy of Science, 63: S306–14. doi:10.1086/289965
- –––, 1999, Scientific Realism: How Science Tracks Truth, London & New York: Routledge.
- –––, 2009, Knowing the Structure of Nature: Essays on Realism and Explanation, London: Palgrave/MacMillan. doi:10.1057/9780230234666
- –––, 2011, “Moving Molecules above the Scientific Horizon: On Perrin’s Case for Realism”, Journal for General Philosophy of Science, 42(2): 339–363. doi:10.1007/s10838-011-9165-x
- –––, 2012, “Causal-descriptivism and the Reference of Theoretical Terms”, in Athanassios Raftopoulos & Peter Machamer (eds), Perception, Realism and the Problem of Reference, Cambridge University Press, pp. 212–238. doi:10.1017/CBO9780511979279.010
- –––, 2014, “Conventions and Relations in Poincaré’s Philosophy of Science”, Methode-Analytic Perspectives, 3(4): 98–140.
- Putnam, Hilary, 1973, “Explanation and Reference”, in Glenn Pearce & Patrick Maynard (eds), Conceptual Change, Dordrecht: Reidel, pp. 199–221. doi:10.1007/978-94-010-2548-5_11
- –––, 1975, “The Meaning of ‘Meaning’”, in Keith Gunderson (ed.), Language, Mind and Knowledge, (Minnesota Studies in the Philosophy of Science, 7), Minneapolis: University of Minnesota Press, pp. 131–93.
- –––, 1978, Meaning and the Moral Sciences, London; Boston: Routledge and Kegan Paul.
- Ramsey, Frank Plumpton, 1929, “Theories”, in his The Foundations of Mathematics and Other Essays, R. B. Braithwaite (ed.), (1931) London: Routledge and Kegan Paul.
- Richet, Charles, 1895, “La Science a-t-elle fait banqueroute?”, Revue Scientifique, No. 3, 12 January 1895, 33–39. [Richet 1895 available online]
- Russell, Bertrand, 1927, The Analysis of Matter, London: George Allen & Unwin.
- Ruttkamp-Bloem, Emma, 2013, “Re-enchanting Realism in Debate with Kyle Stanford”, Journal for General Philosophy of Science, 44(1): 201–224. doi:10.1007/s10838-013-9220-x
- Saatsi, Juha, 2019, “Historical Inductions, Old and New”, Synthese, 196: 3979–3993. doi:10.1007/s11229-015-0855-5
- Smith, George E., 2010, “Revisiting Accepted Science: The Indispensability of the History of Science”, The Monist, 93(4): 545–579. doi:10.5840/monist201093432
- Stanford, P. Kyle, 2006, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press. doi:10.1093/0195174089.001.0001
- Tolstoy, Leo, 1904, Essays & Letters, Aylmer Maud (trans.), New York: Funk and Wagnalls Company.
- Tulodziecki, Dana, 2021, “Theoretical Continuity, Approximate Truth, and the Pessimistic Meta-Induction: Revisiting the Miasma Theory”, in T.D. Lyons and P. Vickers (eds.), Contemporary Scientific Realism: The Challenge from the History of Science, Oxford: Oxford University Press, pp. 11–32.
- Vickers, Peter, 2013, “A Confrontation of Convergent Realism”, Philosophy of Science, 80(2): 189–211. doi:10.1086/670297
- –––, 2017, “Understanding the Selective Realist Defence Against the PMI”, Synthese, 194(9): 3221–3232. doi:10.1007/s11229-016-1082-4
- Worrall, John, 1989, “Structural Realism: The Best of Both Worlds?”, Dialectica, 43(1–2): 99–124. doi:10.1111/j.1746-8361.1989.tb00933.x
- Wray, K. Brad, 2013, “The Pessimistic Induction and the Exponential Growth of Science Reassessed”, Synthese, 190(18): 4321–4330. doi:10.1007/s11229-013-0276-2
- –––, 2015, “Pessimistic Inductions: Four Varieties”, International Studies in the Philosophy of Science, 29(1): 61–73. doi:10.1080/02698595.2015.1071551
- Zahar, Elie, 2001, Poincaré’s Philosophy: From Conventionalism to Phenomenology, LaSalle IL: Open Court.
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- Scientific Realism and Antirealism, entry by Michael Liston in the Internet Encyclopedia of Philosophy.
- Research Guide on Realism and Anti-realism in the Philosophy of Science, by Paul Dicken, History and Philosophy of Science, Cambridge University.
Acknowledgments
Many thanks to Ludwig Fahrbach, Stavros Ioannidis, Moti Mizrahi, Nathan Oseroff, Seungbae Park and Brad Wray for useful comments. Thanks are also due to the Editors of SEP and various anonymous reviewers for their encouragement and suggestions. Many thanks are also due to my student Kosmas Brousalis for his help with the revisions of the entry.