If experiment is to play these important roles in science then we must have good reasons to believe experimental results, for science is a fallible enterprise. Theoretical calculations, experimental results, or the comparison between experiment and theory may all be wrong. Science is more complex than "The scientist proposes, Nature disposes." It may not always be clear what the scientist is proposing. Theories often need to be articulated and clarified. It also may not be clear how Nature is disposing. Experiments may not always give clear-cut results, and may even disagree for a time.
In what follows, the reader will find an epistemology of experiment, a set of strategies that provides grounds for reasonable belief in experimental results. Scientific knowledge can then be reasonably based on these experimental results.
Hacking also discussed the strengthening of one's belief in an observation by independent confirmation. The fact that the same pattern of dots--dense bodies in cells--is seen with "different" microscopes (e.g., ordinary, polarizing, phase-contrast, fluorescence, interference, electron, acoustic, etc.) argues for the validity of the observation. One might question whether "different" is a theory-laden term. After all, it is our theory of light and of the microscope that allows us to consider these microscopes as different from each other. Nevertheless, the argument holds. Hacking correctly argues that it would be a preposterous coincidence if the same pattern of dots were produced in two totally different kinds of physical systems. Different apparatuses have different backgrounds and systematic errors, making the coincidence, if it is an artifact, most unlikely. If it is a correct result, and the instruments are working properly, the coincidence of results is understandable.
Hacking's answer is correct as far as it goes. It is, however, incomplete. What happens when one can perform the experiment with only one type of apparatus, such as an electron microscope or a radio telescope, or when intervention is either impossible or extremely difficult? Other strategies are needed to validate the observation.[3] These may include:
1) Experimental checks and calibration, in which the experimental apparatus reproduces known phenomena. For example, if we wish to argue that the spectrum of a substance obtained with a new type of spectrometer is correct, we might check that this new spectrometer could reproduce the known Balmer series in hydrogen. If we correctly observe the Balmer series then we strengthen our belief that the spectrometer is working properly. This also strengthens our belief in the results obtained with that spectrometer. If the check fails then we have good reason to question the results obtained with that apparatus. (A numerical sketch of this check appears after this list of strategies.)

2) Reproducing artifacts that are known in advance to be present. An example of this comes from experiments to measure the infrared spectra of organic molecules (Randall et al. 1949). It was not always possible to prepare a pure sample of such material. Sometimes the experimenters had to place the substance in an oil paste or in solution. In such cases, one expects to observe the spectrum of the oil or the solvent, superimposed on that of the substance. One can then compare the composite spectrum with the known spectrum of the oil or the solvent. Observation of this artifact then gives confidence in other measurements made with the spectrometer.
3) Elimination of plausible sources of error and alternative explanations of the result (the Sherlock Holmes strategy).[4] Thus, when scientists claimed to have observed electric discharges in the rings of Saturn, they argued for their result by showing that it could not have been caused by defects in the telemetry, interaction with the environment of Saturn, lightning, or dust. The only remaining explanation of their result was that it was due to electric discharges in the rings--there was no other plausible explanation of the observation. (In addition, the same result was observed by both Voyager 1 and Voyager 2. This provided independent confirmation. Often, several epistemological strategies are used in the same experiment.)
4) Using the results themselves to argue for their validity. Consider the problem of Galileo's telescopic observations of the moons of Jupiter. Although one might very well believe that his primitive, early telescope might have produced spurious spots of light, it is extremely implausible that the telescope would create images that would appear to be eclipses and other phenomena consistent with the motions of a small planetary system. It would have been even more implausible to believe that the created spots would satisfy Kepler's Third Law (R³/T² = constant; a worked check appears after this list). A similar argument was used by Robert Millikan to support his observation of the quantization of electric charge and his measurement of the charge of the electron. Millikan remarked, "The total number of changes which we have observed would be between one and two thousand, and in not one single instance has there been any change which did not represent the advent upon the drop of one definite invariable quantity of electricity or a very small multiple of that quantity" (Millikan 1911, p. 360). In both of these cases one is arguing that there was no plausible malfunction of the apparatus, or background, that would explain the observations.
5) Using an independently well-corroborated theory of the phenomena to explain the results. This was illustrated in the discovery of the W±, the charged intermediate vector boson required by the Weinberg-Salam unified theory of electroweak interactions. Although these experiments used very complex apparatuses and employed other epistemological strategies (for details see Franklin 1986, pp. 170-72), I believe that the agreement of the observations with the theoretical predictions of the particle properties helped to validate the experimental results. In this case the particle candidates were observed in events that contained an electron with high transverse momentum and in which there were no particle jets, just as predicted by the theory. In addition, the measured particle masses of 81 ± 5 GeV/c² and 80 +10/-6 GeV/c², found in the two experiments (note the independent confirmation also), were in good agreement with the theoretical prediction of 82 ± 2.4 GeV/c². It was very improbable that any background effect, which might mimic the presence of the particle, would be in agreement with theory.
6) Using an apparatus based on a well-corroborated theory. In this case the support for the theory inspires confidence in the apparatus based on that theory. This is the case with the electron microscope and the radio telescope, whose operations are based on well-supported theories, although other strategies are also used to validate the observations made with these instruments.
7) Using statistical arguments. An interesting example of this arose in the 1960s when the search for new particles and resonances occupied a substantial fraction of the time and effort of those physicists working in experimental high-energy physics. The usual technique was to plot the number of events observed as a function of the invariant mass of the final-state particles and to look for bumps above a smooth background. The usual informal criterion for the presence of a new particle was that it resulted in a three standard-deviation effect above the background, a result that had a probability of 0.27% of occurring in a single bin. This criterion was later changed to four standard deviations, which had a probability of 0.0064%, when it was pointed out that the number of graphs plotted each year by high-energy physicists made it rather probable, on statistical grounds, that a three standard-deviation effect would be observed. (A short calculation illustrating this point follows the list.)

These strategies, along with Hacking's intervention and independent confirmation, constitute an epistemology of experiment. They provide us with good reasons for belief in experimental results. They do not, however, guarantee that the results are correct. There are many experiments in which these strategies are applied, but whose results are later shown to be incorrect (examples will be presented below). Experiment is fallible. Nor are these strategies exclusive or exhaustive. No single one of them, or fixed combination of them, guarantees the validity of an experimental result. Physicists use as many of the strategies as they can conveniently apply in any given experiment.
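To make the calibration check in strategy 1 concrete, here is a minimal sketch, not taken from Franklin's text; the Rydberg constant and the identification of the visible Balmer lines are standard textbook values supplied only for illustration. It lists the known wavelengths that a new hydrogen spectrometer would be expected to reproduce.

```python
# Illustrative sketch (not from the original article): the known Balmer-series
# wavelengths that a new hydrogen spectrometer should reproduce (strategy 1).
R_H = 1.0968e7  # Rydberg constant for hydrogen, in 1/m (standard value)

def balmer_wavelength_nm(n):
    """Vacuum wavelength of the Balmer line for the transition n -> 2, in nm."""
    inverse_wavelength = R_H * (1.0 / 2**2 - 1.0 / n**2)  # Rydberg formula
    return 1e9 / inverse_wavelength

for n in range(3, 7):
    print(f"n = {n} -> 2 : {balmer_wavelength_nm(n):6.1f} nm")

# Expected output: roughly 656, 486, 434, and 410 nm (H-alpha through H-delta).
# A spectrometer that reproduces these known lines earns some confidence; one
# that does not gives us reason to question its other results.
```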
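The Kepler's Third Law argument in strategy 4 can be made similarly explicit. The sketch below is again an illustration rather than anything from the original article; the orbital radii and periods of the Galilean moons are approximate modern values.

```python
# Illustrative sketch: Kepler's Third Law (R^3 / T^2 = constant) for the four
# Galilean moons of Jupiter.  Orbital radii (km) and periods (days) are
# approximate modern values, used only to show the form of the argument.
moons = {
    "Io":       (421_800, 1.769),
    "Europa":   (671_100, 3.551),
    "Ganymede": (1_070_400, 7.155),
    "Callisto": (1_882_700, 16.689),
}

for name, (radius_km, period_days) in moons.items():
    ratio = radius_km**3 / period_days**2
    print(f"{name:9s} R^3/T^2 = {ratio:.3e} km^3/day^2")

# The four ratios agree to within a fraction of a percent.  Spurious spots of
# light produced by a faulty telescope would have no reason to satisfy this
# relation.
```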
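Finally, the probabilities quoted in strategy 7, and the reason the discovery criterion was tightened, can be reproduced with a short calculation. The number of histogram bins used below is an arbitrary choice for illustration, not a figure from the text.

```python
# Illustrative sketch: why a 3-standard-deviation excess is less convincing
# when many mass bins are inspected (the statistical argument of strategy 7).
from math import erfc, sqrt

def two_sided_tail_probability(n_sigma):
    """Probability that a Gaussian fluctuation exceeds n_sigma in magnitude."""
    return erfc(n_sigma / sqrt(2))

p3 = two_sided_tail_probability(3)   # about 0.0027, the 0.27% quoted above
p4 = two_sided_tail_probability(4)   # about 0.000063, roughly the quoted 0.0064%
print(f"3 sigma: {p3:.4%},  4 sigma: {p4:.5%}")

# If the community collectively inspects many independent bins in a year, the
# chance that at least one shows a >= 3 sigma fluctuation approaches certainty.
# The bin count here is an arbitrary number, chosen only to illustrate the point.
n_bins = 10_000
p_at_least_one = 1 - (1 - p3) ** n_bins
print(f"P(at least one 3-sigma bump in {n_bins} bins) = {p_at_least_one:.3f}")
```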
Galison's view is that experiments end when the experimenters believe that they have a result that will stand up in court--a result that I believe includes the use of the epistemological strategies discussed earlier. Thus, David Cline, one of the weak neutral-current experimenters remarked, "At present I don't see how to make these effects [the weak neutral current event candidates] go away" (Galison, 1987, p. 235).
Galison emphasizes that, within a large experimental group, different members of the group may find different pieces of evidence most convincing. Thus, in the Gargamelle weak neutral current experiment, several group members found the single photograph of a neutrino-electron scattering event particularly important, whereas for others the difference in spatial distribution between the observed neutral current candidates and the neutron background was decisive. Galison attributes this, in large part, to differences in experimental traditions, in which scientists develop skill in using certain types of instruments or apparatus. In particle physics, for example, there is the tradition of visual detectors, such as the cloud chamber or the bubble chamber, in contrast to the electronic tradition of Geiger and scintillation counters and spark chambers. Scientists within the visual tradition tend to prefer "golden events" that clearly demonstrate the phenomenon in question, whereas those in the electronic tradition tend to find statistical arguments more persuasive and important than individual events. (For further discussion of this issue see Galison (1997)).
Galison points out that major changes in theory and in experimental practice and instruments do not necessarily occur at the same time. This persistence of experimental results provides continuity across these conceptual changes. Thus, the experiments on the gyromagnetic ratio spanned classical electromagnetism, Bohr's old quantum theory, and the new quantum mechanics of Heisenberg and Schrödinger. Robert Ackermann has offered a similar view in his discussion of scientific instruments.
The advantages of a scientific instrument are that it cannot change theories. Instruments embody theories, to be sure, or we wouldn't have any grasp of the significance of their operation....Instruments create an invariant relationship between their operations and the world, at least when we abstract from the expertise involved in their correct use. When our theories change, we may conceive of the significance of the instrument and the world with which it is interacting differently, and the datum of an instrument may change in significance, but the datum can nonetheless stay the same, and will typically be expected to do so. An instrument reads 2 when exposed to some phenomenon. After a change in theory,[5] it will continue to show the same reading, even though we may take the reading to be no longer important, or to tell us something other than what we thought originally (Ackermann 1985, p. 33).
Galison also discusses other aspects of the interaction between experiment and theory. Theory may influence what is considered to be a real effect, demanding explanation, and what is considered background. In his discussion of the discovery of the muon, he argues that the calculation of Oppenheimer and Carlson, which showed that showers were to be expected in the passage of electrons through matter, left the penetrating particles, later shown to be muons, as the unexplained phenomenon. Prior to their work, physicists thought the showering particles were the problem, whereas the penetrating particles seemed to be understood.
The role of theory as an "enabling theory," (i.e., one that allows calculation or estimation of the size of the expected effect and also the size of expected backgrounds) is also discussed by Galison. (See also (Franklin 1995b) and the discussion of the Stern-Gerlach experiment below). Such a theory can help to determine whether an experiment is feasible. Galison also emphasizes that elimination of background that might simulate or mask an effect is central to the experimental enterprise, and not a peripheral activity. In the case of the weak neutral current experiments, the existence of the currents depended crucially on showing that the event candidates could not all be due to neutron background.[6]
There is also a danger that the design of an experiment may preclude observation of a phenomenon. Galison points out that the original design of one of the neutral current experiments, which included a muon trigger, would not have allowed the observation of neutral currents. In its original form the experiment was designed to observe charged currents, which produce a high energy muon. Neutral currents do not. Therefore, having a muon trigger precluded their observation. Only after the theoretical importance of the search for neutral currents was emphasized to the experimenters was the trigger changed. Changing the design did not, of course, guarantee that neutral currents would be observed.
Galison also shows that the theoretical presuppositions of the experimenters may enter into the decision to end an experiment and report the result. Einstein and de Haas ended their search for systematic errors when their value for the gyromagnetic ratio of the electron, g = 1, agreed with their theoretical model of orbiting electrons. This effect of presuppositions might cause one to be skeptical of both experimental results and their role in theory evaluation. Galison's history shows, however, that, in this case, the importance of the measurement led to many repetitions of the measurement. This resulted in an agreed-upon result that disagreed with theoretical expectations.
Recently, Galison has modified his views. In Image and Logic, an extended study of instrumentation in 20th-century high-energy physics (Galison 1997), he has extended his argument that there are two distinct experimental traditions within that field--the visual (or image) tradition and the electronic (or logic) tradition. The image tradition uses detectors such as cloud chambers or bubble chambers, which provide detailed and extensive information about each individual event. The electronic detectors used by the logic tradition, such as Geiger counters, scintillation counters, and spark chambers, provide less detailed information about individual events, but detect more events. Galison's view is that experimenters working in these two traditions form distinct epistemic and linguistic groups that rely on different forms of argument. The visual tradition emphasizes the single "golden" event. "On the image side resides a deep-seated commitment to the 'golden event': the single picture of such clarity and distinctness that it commands acceptance." (Galison, 1997, p. 22) "The golden event was the exemplar of the image tradition: an individual instance so complete and well defined, so 'manifestly' free of distortion and background that no further data had to be involved" (p. 23). Because the individual events provided by the logic detectors contained less detailed information than the pictures of the visual tradition, statistical arguments based on large numbers of events were required.
Kent Staley (1999) disagrees. He argues that the two traditions are not as distinct as Galison believes:
I show that discoveries in both traditions have employed the same statistical [I would add "and/or probabilistic"] form of argument, even when basing discovery claims on single, golden events. Where Galison sees an epistemic divide between two communities that can only be bridged by creole- or pidgin-like 'interlanguage,' there is in fact a shared commitment to a statistical form of experimental argument. (P. 96).
Staley believes that although there is certainly epistemic continuity within a given tradition, there is also a continuity between the traditions. This does not, I believe, mean that the shared commitment comprises all of the arguments offered in any particular instance, but rather that the same methods are often used by both communities. Galison does not deny that statistical methods are used in the image tradition, but he thinks that they are relatively unimportant. "While statistics could certainly be used within the image tradition, it was by no means necessary for most applications" (Galison, 1997, p. 451). In contrast, Galison believes that arguments in the logic tradition "were inherently and inalienably statistical. Estimation of probable errors and the statistical excess over background is not a side issue in these detectors--it is central to the possibility of any demonstration at all" (p. 451).
Although a detailed discussion of the disagreement between Staley and Galison would take us too far from the subject of this essay, they both agree that arguments are offered for the correctness of experimental results. Their disagreement concerns the nature of those arguments. (For further discussion see Franklin, (2002), pp. 9-17).
In Collins' view the regress is eventually broken by negotiation within the appropriate scientific community, a process driven by factors such as the career, social, and cognitive interests of the scientists, and the perceived utility for future work, but one that is not decided by what we might call epistemological criteria, or reasoned judgment. Thus, Collins concludes that his regress raises serious questions concerning both experimental evidence and its use in the evaluation of scientific hypotheses and theories. Indeed, if no way out of the regress can be found, then he has a point.
Collins' strongest candidate for an example of the experimenters' regress is presented in his history of the early attempts to detect gravitational radiation, or gravity waves. (For more detailed discussion of this episode see Collins 1985; 1994; Franklin 1994; 1997a.) In this case, the physics community was forced to compare Weber's claims that he had observed gravity waves with the reports from six other experiments that failed to detect them. On the one hand, Collins argues that the decision between these conflicting experimental results could not be made on epistemological or methodological grounds--he claims that the six negative experiments could not legitimately be regarded as replications[7] and hence become less impressive. On the other hand, Weber's apparatus, precisely because the experiments used a new type of apparatus to try to detect a hitherto unobserved phenomenon,[8] could not be subjected to standard calibration techniques.
The results presented by Weber's critics were not only more numerous, but they had also been carefully cross-checked. The groups had exchanged both data and analysis programs and confirmed their results. The critics had also investigated whether or not their analysis procedure, the use of a linear algorithm, could account for their failure to observe Weber's reported results. They had used Weber's preferred procedure, a nonlinear algorithm, to analyze their own data, and still found no sign of an effect. They had also calibrated their experimental apparatuses by inserting acoustic pulses of known energy and finding that they could detect a signal. Weber, on the other hand, as well as his critics using his analysis procedure, could not detect such calibration pulses.
There were, in addition, several other serious questions raised about Weber's analysis procedures. These included an admitted programming error that generated spurious coincidences between Weber's two detectors, possible selection bias by Weber, Weber's report of coincidences between two detectors when the data had been taken four hours apart, and whether or not Weber's experimental apparatus could produce the narrow coincidences claimed.
It seems clear that the critics' results were far more credible than Weber's. They had checked their results by independent confirmation, which included the sharing of data and analysis programs. They had also eliminated a plausible source of error, that of the pulses being longer than expected, by analyzing their results using the nonlinear algorithm and by explicitly searching for such long pulses.[9] They had also calibrated their apparatuses by injecting pulses of known energy and observing the output.
Contrary to Collins, I believe that the scientific community made a reasoned judgment and rejected Weber's results and accepted those of his critics. Although no formal rules were applied (e.g. if you make four errors, rather than three, your results lack credibility; or if there are five, but not six, conflicting results, your work is still credible) the procedure was reasonable.
Pickering has argued that the reasons for accepting results are the future utility of such results for both theoretical and experimental practice and the agreement of such results with the existing community commitments. In discussing the discovery of weak neutral currents, Pickering states,
Quite simply, particle physicists accepted the existence of the neutral current because they could see how to ply their trade more profitably in a world in which the neutral current was real. (1984b, p. 87)

The emphasis on future utility and existing commitments is clear. These two criteria do not necessarily agree. For example, there are episodes in the history of science in which more opportunity for future work is provided by the overthrow of existing theory. (See, for example, the history of the overthrow of parity conservation and of CP symmetry discussed below and in Franklin 1986, Ch. 1, 3.) Pickering had made the point about existing commitments earlier:

Scientific communities tend to reject data that conflict with group commitments and, obversely, to adjust their experimental techniques to tune in on phenomena consistent with those commitments. (1981, p. 236)
Achieving such relations of mutual support is, I suggest, the defining characteristic of the successful experiment. (1987, p. 199)

He uses Morpurgo's search for free quarks, or fractional charges of 1/3 e or 2/3 e, where e is the charge of the electron, as an example. (See also Gooding 1992.) Morpurgo used a modern Millikan-type apparatus and initially found a continuous distribution of charge values. Following some tinkering with the apparatus, Morpurgo found that if he separated the capacitor plates he obtained only integral values of charge. "After some theoretical analysis, Morpurgo concluded that he now had his apparatus working properly, and reported his failure to find any evidence for fractional charges" (Pickering 1987, p. 197).
Pickering goes on to note that Morpurgo did not tinker with the two competing theories of the phenomena then on offer, those of integral and fractional charge:
The initial source of doubt about the adequacy of the early stages of the experiment was precisely the fact that their findings--continuously distributed charges--were consonant with neither of the phenomenal models which Morpurgo was prepared to countenance. And what motivated the search for a new instrumental model was Morpurgo's eventual success in producing findings in accordance with one of the phenomenal models he was willing to accept. ... The conclusion of Morpurgo's first series of experiments, then, and the production of the observation report which they sustained, was marked by bringing into relations of mutual support of the three elements I have discussed: the material form of the apparatus and the two conceptual models, one instrumental and the other phenomenal. Achieving such relations of mutual support is, I suggest, the defining characteristic of the successful experiment. (P. 199)
Pickering has made several important and valid points concerning experiment. Most importantly, he has emphasized that an experimental apparatus is initially rarely capable of producing a valid experimental result and that some adjustment, or tinkering, is required before it does. He has also recognized that both the theory of the apparatus and the theory of the phenomena can enter into the production of a valid experimental result. What I wish to question, however, is the emphasis he places on these theoretical components. From Millikan onwards, experiments had strongly supported the existence of a fundamental unit of charge and charge quantization. The failure of Morpurgo's apparatus to produce measurements of integral charge indicated that it was not operating properly and that his theoretical understanding of it was faulty. It was the failure to produce measurements in agreement with what was already known (i.e., the failure of an important experimental check) that caused doubts about Morpurgo's measurements. This was true regardless of the theoretical models available, or those that Morpurgo was willing to accept. It was only when Morpurgo's apparatus could reproduce known measurements that it could be trusted and used to search for fractional charge. To be sure, Pickering has allowed a role for the natural world in the production of the experimental result, but it does not seem to be decisive.
To repeat, changes in A [the apparatus] can often be seen (in real time, without waiting for accommodation by B [the theoretical model of the apparatus]) as improvements, whereas improvements in B don't begin to count unless A is actually altered and realizes the improvements conjectured. It's conceivable that this small asymmetry can account, ultimately, for large scale directions of scientific progress and for the objectivity and rationality of those directions. (Ackermann 1991, p. 456)
Hacking (1992) has also offered a more complex version of Pickering's later view. He suggests that the results of mature laboratory science achieve stability and are self-vindicating when the elements of laboratory science are brought into mutual consistency and support. These are (1) ideas: questions, background knowledge, systematic theory, topical hypotheses, and modeling of the apparatus; (2) things: target, source of modification, detectors, tools, and data generators; and (3) marks and the manipulation of marks: data, data assessment, data reduction, data analysis, and interpretation.
Stable laboratory science arises when theories and laboratory equipment evolve in such a way that they match each other and are mutually self-vindicating. (1992, p. 56)

One might ask whether such mutual adjustment between theory and experimental results can always be achieved. What happens when an experimental result is produced by an apparatus on which several of the epistemological strategies, discussed earlier, have been successfully applied, and the result is in disagreement with our theory of the phenomenon? Accepted theories can be refuted. Several examples will be presented below. Hacking continues,

We invent devices that produce data and isolate or create phenomena, and a network of different levels of theory is true to these phenomena. Conversely we may in the end count them as phenomena only when the data can be interpreted by theory. (pp. 57-8)
Hacking himself worries about what happens when a laboratory science that is true to the phenomena generated in the laboratory, thanks to mutual adjustment and self-vindication, is successfully applied to the world outside the laboratory. Does this argue for the truth of the science? In Hacking's view it does not. If laboratory science does produce happy effects in the "untamed world,... it is not the truth of anything that causes or explains the happy effects" (1992, p. 60).
The dance of agency, seen asymmetrically from the human end, thus takes the form of a dialectic of resistance and accommodations, where resistance denotes the failure to achieve an intended capture of agency in practice, and accommodation an active human strategy of response to resistance, which can include revisions to goals and intentions as well as to the material form of the machine in question and to the human frame of gestures and social relations that surround it (p. 22).
Pickering's idea of resistance is illustrated by Morpurgo's observation of continuous, rather than integral or fractional, electrical charge, which did not agree with his expectations. Morpurgo's accommodation consisted of changing his experimental apparatus by using a larger separation between his plates, and also by modifying his theoretical account of the apparatus. That being done, integral charges were observed and the result stabilized by the mutual agreement of the apparatus, the theory of the apparatus, and the theory of the phenomenon. Pickering notes that "the outcomes depend on how the world is (p. 182)." "In this way, then, how the material world is leaks into and infects our representations of it in a nontrivial and consequential fashion. My analysis thus displays an intimate and responsive engagement between scientific knowledge and the material world that is integral to scientific practice (p. 183)."
Nevertheless there is something confusing about Pickering's invocation of the natural world. Although Pickering acknowledges the importance of the natural world, his use of the term "infects" seems to indicate that he isn't entirely happy with this. Nor does the natural world seem to have much efficacy. It never seems to be decisive in any of Pickering's case studies. Recall that he argued that physicists accepted the existence of weak neutral currents because "they could ply their trade more profitably in a world in which the neutral current was real." In his account, Morpurgo's observation of continuous charge is important only because it disagrees with his theoretical models of the phenomenon. The fact that it disagreed with numerous previous observations of integral charge doesn't seem to matter. This is further illustrated by Pickering's discussion of the conflict between Morpurgo and Fairbank. As we have seen, Morpurgo reported that he did not observe fractional electrical charges. On the other hand, in the late 1970s and early 1980s, Fairbank and his collaborators published a series of papers in which they claimed to have observed fractional charges (see, for example, LaRue, Phillips et al. 1981). Faced with this discord, Pickering concludes,
In Chapter 3, I traced out Morpurgo's route to his findings in terms of the particular vectors of cultural extension that he pursued, the particular resistances and accommodations thus precipitated, and the particular interactive stabilizations he achieved. The same could be done, I am sure, in respect of Fairbank. And these tracings are all that needs to be said about their divergence. It just happened that the contingencies of resistance and accommodation worked out differently in the two instances. Differences like these are, I think, continually bubbling up in practice, without any special causes behind them (pp. 211-212).
The natural world seems to have disappeared from Pickering's account. There is a real question here as to whether or not fractional charges exist in nature. The conclusions reached by Fairbank and by Morpurgo about their existence cannot both be correct. It seems insufficient to merely state, as Pickering does, that Fairbank and Morpurgo achieved their individual stabilizations and to leave the conflict unresolved. (Pickering does comment that one could follow the subsequent history and see how the conflict was resolved, and he does give some brief statements about it, but its resolution is not important for him). At the very least, I believe, one should consider the actions of the scientific community. Scientific knowledge is not determined individually, but communally. Pickering seems to acknowledge this. "One might, therefore, want to set up a metric and say that items of scientific knowledge are more or less objective depending on the extent to which they are threaded into the rest of scientific culture, socially stabilized over time, and so on. I can see nothing wrong with thinking this way.... (p. 196)." The fact that Fairbank believed in the existence of fractional electrical charges, or that Weber strongly believed that he had observed gravity waves, does not make them right. These are questions about the natural world that can be resolved. Either fractional charges and gravity waves exist or they don't, or to be more cautious we might say that we have good reasons to support our claims about their existence, or we do not.
Another issue neglected by Pickering is the question of whether a particular mutual adjustment of theory (of the apparatus or of the phenomenon) and the experimental apparatus and evidence is justified. Pickering seems to believe that any such adjustment that provides stabilization, either for an individual or for the community, is acceptable. I do not. Experimenters sometimes exclude data and engage in selective analysis procedures in producing experimental results. These practices are, at the very least, questionable, as is the use of the results produced by such practices in science. There are, I believe, procedures in the normal practice of science that provide safeguards against them. (For details see Franklin, 2002, Section 1.)
The difference between our attitudes toward the resolution of discord is one of the important distinctions between my view of science and Pickering's. I do not believe it is sufficient simply to say that the resolution is socially stabilized. I want to know how that resolution was achieved and what were the reasons offered for that resolution. If we are faced with discordant experimental results and both experimenters have offered reasonable arguments for their correctness, then clearly more work is needed. It seems reasonable, in such cases, for the physics community to search for an error in one, or both, of the experiments.
Pickering discusses yet another difference between our views. He sees traditional philosophy of science as regarding objectivity "as stemming from a peculiar kind of mental hygiene or policing of thought. This police function relates specifically to theory choice in science, which,... is usually discussed in terms of the rational rules or methods responsible for closure in theoretical debate (p. 197)." He goes on to remark that,
The most action in recent methodological thought has centered on attempts like Allan Franklin's to extend the methodological approach to experiments by setting up a set of rules for their proper performance. Franklin thus seeks to extend classical discussions of objectivity to the empirical base of science (a topic hitherto neglected in the philosophical tradition but one that, of course, the mangle [Pickering's view] also addresses). For an argument between myself and Franklin on the same lines as that laid out below, see (Franklin 1990, Chapter 8; Franklin 1991); and (Pickering 1991); and for commentaries related to that debate, (Ackermann 1991) and (Lynch 1991) (p. 197).
(For further discussion see Franklin 1993b.) Although I agree that my epistemology of experiment is designed to offer good reasons for belief in experimental results, I do not agree with Pickering that they are a set of rules. I regard them as a set of strategies, from which physicists choose, in order to argue for the correctness of their results. As noted above, I do not think the strategies offered are either exclusive or exhaustive.
There is another point of disagreement between Pickering and myself. He claims to be dealing with the practice of science, and yet he excludes certain practices from his discussions. One scientific practice is the application of the epistemological strategies I have outlined above to argue for the correctness of an experimental result. In fact, one of the essential features of an experimental paper is the presentation of such arguments. I note further that writing such papers, a performative act, is also a scientific practice and it would seem reasonable to examine both the structure and content of those papers.
Contingency is the idea that science is not predetermined, that it could have developed in any one of several successful ways. This is the view adopted by constructivists. Hacking illustrates this with Pickering's account of high-energy physics during the 1970s, during which the quark model came to dominate (see Pickering 1984a).
The constructionist maintains a contingency thesis. In the case of physics, (a) physics (theoretical, experimental, material) could have developed in, for example, a nonquarky way, and, by the detailed standards that would have evolved with this alternative physics, could have been as successful as recent physics has been by its detailed standards. Moreover, (b) there is no sense in which this imagined physics would be equivalent to present physics. The physicist denies that. (Hacking 1999, pp. 78-79)

To sum up Pickering's doctrine: there could have been a research program as successful ("progressive") as that of high-energy physics in the 1970s, but with different theories, phenomenology, schematic descriptions of apparatus, and apparatus, and with a different, and progressive, series of robust fits between these ingredients. Moreover (and this is something badly in need of clarification) the "different" physics would not have been equivalent to present physics. Not logically incompatible with, just different.
The constructionist about (the idea) of quarks thus claims that the upshot of this process of accommodation and resistance is not fully predetermined. Laboratory work requires that we get a robust fit between apparatus, beliefs about the apparatus, interpretations and analyses of data, and theories. Before a robust fit has been achieved, it is not determined what that fit will be. Not determined by how the world is, not determined by technology now in existence, not determined by the social practices of scientists, not determined by interests or networks, not determined by genius, not determined by anything (pp. 72-73, emphasis added).
Much depends here on what Hacking means by "determined." If he means entailed then I agree with him. I doubt that the world, or more properly, what we can learn about it, entails a unique theory. If, as seems more plausible, he means that the way the world is places no restrictions on that successful science, then I disagree strongly. I would certainly wish to argue that the way the world is restricts the kinds of theories that will fit the phenomena, the kinds of apparatus we can build, and the results we can obtain with such apparatuses. To think otherwise seems silly. Consider a homey example: it seems to me highly unlikely (an understatement) that someone could come up with a successful theory in which objects whose density is greater than that of air fall upwards. This is not, I believe, a caricature of the view Hacking describes. Describing Pickering's view, he states, "Physics did not need to take a route that involved Maxwell's Equations, the Second Law of Thermodynamics, or the present values of the velocity of light (p. 70)." Although I have some sympathy for this view as regards Maxwell's Equations or the Second Law of Thermodynamics, I do not agree about the value of the speed of light. That is determined by the way the world is. Any successful theory of light must give that value for its speed.
At the other extreme are the "inevitablists," among whom Hacking classifies most scientists. He cites Sheldon Glashow, a Nobel Prize winner, "Any intelligent alien anywhere would have come upon the same logical system as we have to explain the structure of protons and the nature of supernovae (Glashow 1992, p. 28)."
Another difference between Pickering and myself on contingency concerns not whether an alternative is possible, but rather whether there are reasons why that alternative should be pursued. Pickering seems to identify "can" with "ought."
In the late 1970s there was a disagreement between the results of low-energy experiments on atomic parity violation (the violation of left-right symmetry) performed at the University of Washington and at Oxford University and the result of a high-energy experiment on the scattering of polarized electrons from deuterium (the SLAC E122 experiment). The atomic parity-violation experiments failed to observe the parity-violating effects predicted by the Weinberg-Salam (W-S) unified theory of electroweak interactions, whereas the SLAC experiment observed the predicted effect. In my view, these early atomic physics results were quite uncertain in themselves, and that uncertainty was increased by positive results obtained in similar experiments at Berkeley and Novosibirsk. At the time the theory had other evidential support, but was not universally accepted. Pickering and I are in agreement that the W-S theory was accepted on the basis of the SLAC E122 result. We differ dramatically in our discussions of the experiments. Our difference on contingency concerns a particular theoretical alternative that was proposed at the time to explain the discrepancy between the experimental results.
Pickering asked why a theorist might not have attempted to find a variant of electroweak gauge theory that might have reconciled the Washington-Oxford atomic parity results with the positive E122 result. (What such a theorist was supposed to do with the supportive atomic parity results later provided by experiments at Berkeley and at Novosibirsk is never mentioned). "But though it is true that E122 analysed their data in a way that displayed the improbability [the probability of the fit to the hybrid model was 6 × 10⁻⁴] of a particular class of variant gauge theories, the so-called 'hybrid models,' I do not believe that it would have been impossible to devise yet more variants" (Pickering 1991, p. 462). Pickering notes that open-ended recipes for constructing such variants had been written down as early as 1972 (p. 467). I agree that it would have been possible to do so, but one may ask whether or not a scientist might have wished to do so. If the scientist agreed with my view that the SLAC E122 experiment provided considerable evidential weight in support of the W-S theory and that a set of conflicting and uncertain results from atomic parity-violation experiments gave an equivocal answer on that support, what reason would they have had to invent an alternative?
This is not to suggest that scientists do not, or should not, engage in speculation, but rather that there was no necessity to do so in this case. Theorists often do propose alternatives to existing, well-confirmed theories.
Constructivist case studies always seem to result in the support of existing, accepted theory (Pickering 1984a; 1984b; 1991; Collins 1985; Collins and Pinch 1993). One criticism implied in such cases is that alternatives are not considered, that the hypothesis space of acceptable alternatives is either very small or empty. I don't believe this is correct. Thus, when the experiment of Christenson et al. (1964) detected K₂⁰ decay into two pions, which seemed to show that CP symmetry (combined particle-antiparticle and space inversion symmetry) was violated, no fewer than 10 alternatives were offered. These included 1) the cosmological model resulting from the local dysymmetry of matter and antimatter, 2) external fields, 3) the decay of the K₂⁰ into a K₁⁰ with the subsequent decay of the K₁⁰ into two pions, which was allowed by the symmetry, 4) the emission of another neutral particle, "the paritino," in the K₂⁰ decay, similar to the emission of the neutrino in beta decay, 5) that one of the pions emitted in the decay was in fact a "spion," a pion with spin one rather than zero, 6) that the decay was due to another neutral particle, the L, produced coherently with the K⁰, 7) the existence of a "shadow" universe, which interacted with our universe only through the weak interactions, and that the decay seen was the decay of the "shadow K₂⁰," 8) the failure of the exponential decay law, 9) the failure of the principle of superposition in quantum mechanics, and 10) that the decay pions were not bosons.
As one can see, the limits placed on alternatives were not very stringent. By the end of 1967, all of the alternatives had been tested and found wanting, leaving CP symmetry unprotected. Here the differing judgments of the scientific community about what was worth proposing and pursuing led to a wide variety of alternatives being tested.
Hacking's second sticking point is nominalism, or name-ism. He notes that in its most extreme form nominalism denies that there is anything in common or peculiar to objects selected by a name, such as "Douglas fir," other than that they are called Douglas fir. Opponents contend that good names, or good accounts of nature, tell us something correct about the world. This is related to the realism-antirealism debate concerning the status of unobservable entities that has plagued philosophers for millennia. For example, Bas van Fraassen (1980), an antirealist, holds that we have no grounds for belief in unobservable entities such as the electron and that accepting theories about the electron means only that we believe that what the theory says about observables is true. A realist claims that electrons really exist and that, as Wilfrid Sellars, for example, remarked, "to have good reason for holding a theory is ipso facto to have good reason for holding that the entities postulated by the theory exist" (Sellars 1962, p. 97). In Hacking's view a scientific nominalist is more radical than an antirealist and is just as skeptical about fir trees as about electrons. A nominalist further believes that the structures we conceive of are properties of our representations of the world and not of the world itself. Hacking refers to opponents of that view as inherent structuralists.
Hacking also remarks that this point is related to the question of "scientific facts." Thus, constructivists Latour and Woolgar originally entitled their book Laboratory Life: The Social Construction of Scientific Facts (1979). Andrew Pickering entitled his history of the quark model Constructing Quarks (Pickering 1984a). Physicists argue that this demeans their work. Steven Weinberg, a realist and a physicist, criticized Pickering's title by noting that no mountaineer would ever name a book Constructing Everest. For Weinberg, quarks and Mount Everest have the same ontological status. They are both facts about the world. Hacking argues that constructivists do not, despite appearances, believe that facts do not exist, or that there is no such thing as reality. He cites Latour and Woolgar "that 'out-there-ness' is a consequence of scientific work rather than its cause (Latour and Woolgar 1986, p. 180)." I agree with Hacking when he concludes that,
Latour and Woolgar were surely right. We should not explain why some people believe that p by saying that p is true, or corresponds to a fact, or the facts. For example: someone believes that the universe began with what for brevity we call a big bang. A host of reasons now supports this belief. But after you have listed all the reasons, you should not add, as if it were an additional reason for believing in the big bang, 'and it is true that the universe began with a big bang.' Or 'and it is a fact.'

This observation has nothing peculiarly to do with social construction. It could equally have been advanced by an old-fashioned philosopher of language. It is a remark about the grammar of the verb 'to explain' (Hacking 1999, pp. 80-81).
I would add, however, that the reasons Hacking cites as supporting that belief are given to us by valid experimental evidence and not by the social and personal interests of scientists. I'm not sure that Latour and Woolgar would agree. My own position is one that might reasonably be called conjectural realism. I believe that we have good reasons to believe in facts, and in the entities involved in our theories, always remembering, of course, that science is fallible.
Hacking's third sticking point concerns external explanations of the stability of scientific belief.
The constructionist holds that explanations for the stability of scientific belief involve, at least in part, elements that are external to the content of science. These elements typically include social factors, interests, networks, or however they be described. Opponents hold that whatever be the context of discovery, the explanation of stability is internal to the science itself (Hacking 1999, p. 92).

Rationalists think that most science proceeds as it does in the light of good reasons produced by research. Some bodies of knowledge become stable because of the wealth of good theoretical and experimental reasons that can be adduced for them. Constructivists think that the reasons are not decisive for the course of science. Nelson (1994) concludes that this issue will never be decided. Rationalists, at least retrospectively, can always adduce reasons that satisfy them. Constructivists, with equal ingenuity, can always find to their own satisfaction an openness where the upshot of research is settled by something other than reason. Something external. That is one way of saying we have found an irresoluble "sticking point" (pp. 91-92)
Thus, there is a rather severe disagreement on the reasons for the acceptance of experimental results. For some, like Staley, Galison and myself, it is because of epistemological arguments. For others, like Pickering, the reasons are utility for future practice and agreement with existing theoretical commitments. Although the history of science shows that the overthrow of a well-accepted theory leads to an enormous amount of theoretical and experimental work, proponents of this view seem to accept it as unproblematical that it is always agreement with existing theory that has more future utility. Hacking and Pickering also suggest that experimental results are accepted on the basis of the mutual adjustment of elements which includes the theory of the phenomenon.
Nevertheless, everyone seems to agree that a consensus does arise on experimental results.
In deciding what experimental investigation to pursue, scientists may very well be influenced by the equipment available and their own ability to use that equipment (McKinney 1992). Thus, when the Mann-O'Neill collaboration was doing high-energy physics experiments at the Princeton-Pennsylvania Accelerator during the late 1960s, the sequence of experiments was (1) measurement of the K+ decay rates, (2) measurement of the K+e3 branching ratio and decay spectrum, (3) measurement of the K+e2 branching ratio, and (4) measurement of the form factor in K+e3 decay. These experiments were performed with basically the same experimental apparatus, but with relatively minor modifications for each particular experiment. By the end of the sequence the experimenters had become quite expert in the use of the apparatus and knowledgeable about the backgrounds and experimental problems. This allowed the group to successfully perform the technically more difficult experiments later in the sequence. We might refer to this as "instrumental loyalty" and the "recycling of expertise" (Franklin 1997b). This meshes nicely with Galison's view of experimental traditions. Scientists, both theorists and experimentalists, tend to pursue experiments and problems in which their training and expertise can be used.
Hacking also remarks on the "noteworthy observations" on Iceland Spar by Bartholin, on diffraction by Hooke and Grimaldi, and on the dispersion of light by Newton. "Now of course Bartholin, Grimaldi, Hooke, and Newton were not mindless empiricists without an idea in their heads. They saw what they saw because they were curious, inquisitive, reflective people. They were attempting to form theories. But in all these cases it is clear that the observations preceded any formulation of theory" (Hacking 1983, p. 156). In all of these cases we may say that these were observations waiting for, or perhaps even calling for, a theory. The discovery of any unexpected phenomenon calls for a theoretical explanation.
The Stern-Gerlach experiment was regarded as crucial at the time it was performed, but, in fact, wasn't. In the view of the physics community it decided the issue between two theories, refuting one and supporting the other. In the light of later work, however, the refutation stood, but the confirmation was questionable. In fact, the experimental result posed problems for the theory it had seemingly confirmed. A new theory was proposed and although the Stern-Gerlach result initially also posed problems for the new theory, after a modification of that new theory, the result confirmed it. In a sense, it was crucial after all. It just took some time.
The Stern-Gerlach experiment provides evidence for the existence of electron spin. These experimental results were first published in 1922, although the idea of electron spin wasn't proposed by Goudsmit and Uhlenbeck until 1925 (1925; 1926). One might say that electron spin was discovered before it was invented. (For details of this episode see Appendix 5).
Allan Franklin <allan.franklin@colorado.edu>