The Social Dimensions of Scientific Knowledge

First published Fri Apr 12, 2002; substantive revision Mon May 27, 2019

Study of the social dimensions of scientific knowledge encompasses the effects of scientific research on human life and social relations, the effects of social relations and values on scientific research, and the social aspects of inquiry itself. Several factors have combined to make these questions salient to contemporary philosophy of science. These factors include the emergence of social movements, like environmentalism and feminism, critical of mainstream science; concerns about the social effects of science-based technologies; epistemological questions made salient by big science; new trends in the history of science, especially the move away from internalist historiography; anti-normative approaches in the sociology of science; and turns in philosophy to naturalism and pragmatism. This entry reviews the historical background to current research in this area and features of contemporary science that invite philosophical attention.

The philosophical work can roughly be classified into two camps. One acknowledges that scientific inquiry is in fact carried out in social settings and asks whether and how standard epistemology must be supplemented to address this feature. The other treats sociality as a fundamental aspect of knowledge and asks how standard epistemology must be modified or reformed from this broadly social perspective. Concerns in the supplementing approach include such matters as trust and accountability raised by multiple authorship, the division of cognitive labor, the reliability of peer review, the challenges of privately funded science, as well as concerns arising from the role of scientific research in society. The reformist approach highlights the challenge to normative philosophy from social, cultural, and feminist studies of science while seeking to develop philosophical models of the social character of scientific knowledge and inquiry. It treats the questions of the division of cognitive labor, expertise and authority, the interactions of science and society, etc., from the perspective of philosophical models of the irreducibly social character of scientific knowledge. Philosophers employ both formal modeling techniques and conceptual analysis in their efforts to identify and analyze epistemologically relevant social aspects of science.

1. Historical Background

Philosophers who study the social character of scientific knowledge can trace their lineage at least as far as John Stuart Mill. Mill, Charles Sanders Peirce, and Karl Popper all took some type of critical interaction among persons as central to the validation of knowledge claims.

Mill’s arguments occur in his well-known political essay On Liberty (Mill 1859), rather than in the context of his logical and methodological writings, but he makes it clear that they are to apply to any kind of knowledge or truth claim. Mill argues from the fallibility of human knowers to the necessity of unobstructed opportunity for and practice of the critical discussion of ideas. Only such critical discussion can assure us of the justifiability of the (true) beliefs we do have and can help us avoid falsity or the partiality of belief or opinion framed in the context of just one point of view. Critical interaction maintains the freshness of our reasons and is instrumental in the improvement of both the content and the reasons of our beliefs. The achievement of knowledge, then, is a social or collective, not an individual, matter.

Peirce’s contribution to the social epistemology of science is commonly taken to be his consensual theory of truth: “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by truth, and the object represented is the real.” (Peirce 1878, 133) While often read as meaning that the truth is whatever the community of inquirers converges on in the long run, the notion is interpretable as meaning more precisely either that truth (and “the real”) depends on the agreement of the community of inquirers or that it is an effect of the real that it will in the end produce agreement among inquirers. Whatever the correct reading of this particular statement, Peirce elsewhere makes it clear that, in his view, truth is both attainable and beyond the reach of any individual. “We individually cannot hope to attain the ultimate philosophy which we pursue; we can only seek it for the community of philosophers.” (Peirce 1868, 40). Peirce puts great stock in instigating doubt and critical interaction as means to knowledge. Thus, whether his theory of truth is consensualist or realist, his view of the practices by which we attain it grants a central place to dialogue and social interaction.

Popper is often treated as a precursor of social epistemology because of his emphasis on the importance of criticism in the development of scientific knowledge. Two concepts of criticism are found in his works (Popper 1963, 1972) and these can be described as logical and practical senses of falsification. The logical sense of falsification is just the structure of a modus tollens argument, in which a hypothesis is falsified by the demonstration that one of its logical consequences is false. This is one notion of criticism, but it is a matter of formal relations between statements. The practical sense of falsification refers to the efforts of scientists to demonstrate the inadequacies of one another’s theories by demonstrating observational shortcomings or conceptual inconsistencies. This is a social activity. For Popper the methodology of science is falsificationist in both its logical and practical senses, and science progresses through the demonstration by falsification of the untenability of theories and hypotheses. Popper’s logical falsificationism is part of an effort to demarcate genuine science from pseudoscience, and has lost its plausibility as a description of scientific methodology as the demarcation project has come under challenge from naturalist and historicist approaches in philosophy of science. While criticism does play an important role in some current approaches in social epistemology, Popper’s own views are more closely approximated by evolutionary epistemology, especially that version that treats cognitive progress as the effect of selection against incorrect theories and hypotheses. In contrast to Mill’s views, for Popper the function of criticism is to eliminate false theories rather than to improve them.
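Schematically, with H a hypothesis and O one of its observational consequences, the logical sense of falsification is simply:

\[
(H \rightarrow O) \land \neg O \;\vdash\; \neg H
\]

The practical sense, by contrast, has no such formal rendering; it lives in the social activity of mounting observational and conceptual challenges.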

The work of Mill, Peirce, and Popper is a resource for philosophers presently exploring the social dimensions of scientific knowledge. However, the current debates are framed in the context of developments in both philosophy of science and in history and social studies of science following the collapse of the logical empiricist consensus. The philosophers of the Vienna Circle are conventionally associated with an uncritical form of positivism and with the logical empiricism that replaced American pragmatism in the 1940s and 1950s. According to some recent scholars, however, they saw natural science as a potent force for progressive social change. (Cartwright, Cat, and Chang 1996; Giere and Richardson, eds., 1996; Uebel 2005) With its grounding in observation and public forms of verification, science for them constituted a superior alternative to what they saw as metaphysical obscurantism, an obscurantism that led not only to bad thinking but to bad politics. While one development of this point of view leads to scientism (the view that any meaningful question can be answered by the methods of science), another leads to inquiry into what social conditions promote the growth of scientific knowledge. Logical empiricism, the version of Vienna Circle philosophy that developed in the United States, focused on logical, internal aspects of scientific knowledge and discouraged philosophical inquiry into the social dimensions of science. These came into prominence again after the publication of Thomas Kuhn’s Structure of Scientific Revolutions (Kuhn 1962). A new generation of sociologists of science, among them Barry Barnes, Steven Shapin, and Harry Collins, took Kuhn’s emphasis on the role of non-evidential community factors in scientific change even further than he had and argued that scientific judgment was determined by social factors, such as professional interests and political ideologies (Barnes 1977, Shapin 1982, Collins 1983). This family of positions provoked a counter-response among philosophers. These responses are marked by an effort to acknowledge some social dimensions to scientific knowledge while at the same time maintaining its epistemological legitimacy, which they take to be undermined by the new sociology. At the same time, features of the organization of scientific inquiry compel philosophers to consider their implications for the normative analysis of scientific practices.

2. Big Science, Trust, and Authority

The second half of the twentieth century saw the emergence of what has come to be known as Big Science: the organization of large numbers of scientists bringing different bodies of expertise to a common research project. The original model was the Manhattan Project, undertaken during the Second World War to develop an atomic weapon in the United States. Theoretical and experimental physicists located at various sites across the country, though principally at Los Alamos, New Mexico, worked on sub-problems of the project under the overall direction of J. Robert Oppenheimer. While academic and military research have since been to some degree separated, much experimental research in physics, especially high energy particle physics, continues to be pursued by large teams of researchers. Research in other areas of science as well, for example the work comprehended under the umbrella of the Human Genome Project, has taken on some of the properties of Big Science, requiring multiple forms of expertise. In addition to the emergence of Big Science, the transition from small scale university or even amateur science to institutionalized research with major economic impacts supported by national funding bodies and connected across international borders has seemed to call for new ethical and epistemological thinking. Moreover, the consequent dependence of research on central funding bodies and increasingly, private foundations or commercial entities, prompts questions about the degree of independence of contemporary scientific knowledge from its social and economic context.

John Hardwig (1985) articulated one philosophical dilemma posed by large teams of researchers. Each member or subgroup participating in such a project is required because each has a crucial bit of expertise not possessed by any other member or subgroup. This may be knowledge of a part of the instrumentation, the ability to perform a certain kind of calculation, the ability to make a certain kind of measurement or observation. The other members are not in a position to evaluate the results of one another’s work, and hence all must take one another’s results on trust. The consequence is an experimental result (for example, the measurement of a property such as the decay rate or spin of a given particle), the evidence for which is not fully understood by any single participant in the experiment. This leads Hardwig to ask two questions, one about the evidential status of testimony, and one about the nature of the knowing subject in these cases. With respect to the latter, Hardwig says that either the group as a whole, but no single member, knows or it is possible to know vicariously. Neither of these is palatable to him. Talking about the group or the community knowing smacks of superorganisms and transcendent entities and Hardwig shrinks from that solution. Vicarious knowledge, knowing without oneself possessing the evidence for the truth of what one knows, requires, according to Hardwig, too much of a departure from our ordinary concepts of knowledge.

The first question is, as Hardwig notes, part of a more general discussion about the epistemic value of testimony. Much of what passes for common knowledge is acquired from others. We depend on experts to tell us what is wrong or right with our appliances, our cars, our bodies. Indeed, much of what we later come to know depends on what we previously learned as children from our parents and teachers. We acquire knowledge of the world through the institutions of education, journalism, and scientific inquiry. Philosophers disagree about the status of beliefs acquired in this way. Here is the question: If A knows that p on the basis of evidence e, B has reason to think A trustworthy, and B believes p on the basis of A’s testimony that p, does B also know that p? Some philosophers argue, as Locke and Hume seem to have, that only what one has observed oneself could count as a good reason for belief, and that the testimony of another is, therefore, never on its own sufficient warrant for belief. Thus, B does not know simply on the basis of A’s testimony but must have additional evidence about A’s reliability. While this result is consistent with traditional philosophical empiricism and rationalism, which emphasized the individual’s sense experience or rational apprehension as foundations of knowledge, it does have the consequence that we do not know most of what we think we know.

A number of philosophers have recently offered alternative analyses focusing on one or another element in the problem. Some argue that testimony by a qualified expert is itself evidential (Schmitt 1988); others that the expert’s evidence constitutes good reason for, but is not itself evidence for, the recipient of testimony (Hardwig 1985, 1988); others that what is transmitted in testimony is knowledge and not just propositional content, and thus that the question of the kind of reason a recipient of testimony has is not to the point (Welbourne 1981).

However this dispute is resolved, questions of trust and authority arise in a particularly pointed way in the sciences, and Hardwig’s dilemma for the physics experiment is also a specific version of a more general phenomenon. A popular conception of science, fed partly by Popper’s falsificationism, is that it is epistemically reliable because the results of experiments and observational studies are checked by independent repetition. In practice, however, only some results are so checked and many are simply accepted on trust. Not only must positive results be accepted on trust, but so must claims of failure to replicate, as well as other critiques. Thus, just as in the non-scientific world information is accepted on trust, so in science, knowledge grows by depending on the testimony of others. What are the implications of accepting this fact for our conceptions of the reliability of scientific knowledge?

The philosopher of biology, David Hull, argued in his (1988) that because the overall structure of reward and punishment in the sciences is a powerful incentive not to cheat, further epistemological analysis of the sciences is unnecessary. What scientists have to lose is their reputation, which is crucial to their access to grants, collaborations, prizes, etc. So the structure itself guarantees the veridicality of research reports. But some celebrated recent episodes, such as the purported production of “cold fusion”, were characterized by the failure of replication attempts to produce the same phenomenon. And, while the advocates of cold fusion were convinced that their experiments had produced the phenomenon, there have also been cases of outright fraud. Thus, even if the structure of reward and punishment is an incentive not to cheat, it does not guarantee the veridicality of every research report.

On Hull’s view, the scientific community seeks true theories or adequate models. Credit, or recognition, accrues to individuals to the extent they are perceived as having contributed to that community goal. That is, individual scientists seek reputation and recognition, to have their work cited as important and as necessary to further scientific progress. Cheating, by misreporting experimental results or other misconduct, will be punished by loss of reputation. But this depends on strong guarantees of detection. Absent such guarantees, there is as strong an incentive to cheat, to try to obtain credit without necessarily having done the work, as not to cheat.

Both Alvin Goldman (Goldman 1995, 1999) and Philip Kitcher (1993) have treated the potential for premature, or otherwise (improperly) interested reporting of results to corrupt the sciences as a question to be answered by means of decision theoretic models. The decision theoretic approach to problems of trust and authority treats both credit and truth as utilities. The challenge then is to devise formulas that show that actions designed to maximize credit also maximize truth. Kitcher, in particular, develops formulas intended to show that even in situations peopled by non-epistemically motivated individuals (that is, individuals motivated more by a desire for credit than by a desire for truth), the reward structure of the community can be organized in such a way as to maximize truth and foster scientific progress. One consequence of this approach is to treat scientific fraud and value- or interest-infused science as the same problem. One advantage is that it incorporates the motivation to cheat into the solution to the problem of cheating. But one may wonder how effective this solution really is. Increasingly, we learn of problematic behavior in science-based industries, such as the pharmaceutical industry: results are withheld or distorted, and authorship is manipulated. Hot areas, such as stem cell research, cloning, or gene modification, have seen fraudulent research. Thus, even if the structure of reward and punishment is an in-principle incentive not to cheat, it does not guarantee the reliability of every research report. The decision theoretic model needs to include at least one more parameter, namely the anticipated likelihood of detection within a relevant timeframe.
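A minimal decision theoretic sketch can make the missing parameter concrete. The payoffs and functional form below are illustrative stand-ins, not Goldman’s or Kitcher’s actual formulas; the point is only that once the anticipated likelihood of detection is included as a parameter, a sufficiently low detection probability makes cheating the credit-maximizing choice.

```python
# Illustrative sketch only: expected credit for honest vs. fraudulent
# reporting, with the anticipated probability of detection included.
# All payoff values are hypothetical.

def expected_credit(cheat, p_detect, credit=1.0, penalty=5.0, cost_of_work=0.4):
    if not cheat:
        return credit - cost_of_work              # sure credit, real effort
    # fraud: full credit if undetected, loss of reputation if caught
    return (1 - p_detect) * credit - p_detect * penalty

for p in (0.02, 0.05, 0.10, 0.30):
    print(f"p_detect={p:.2f}  honest={expected_credit(False, p):+.2f}  "
          f"cheat={expected_credit(True, p):+.2f}")
```

On these arbitrary numbers, cheating pays whenever the chance of detection falls below roughly seven percent, which is just the worry raised above: the reward structure deters fraud only when detection is sufficiently likely and sufficiently prompt.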

Community issues have also been addressed under the banners of research ethics and of peer review. One might think that the only ethical requirements on scientists are to protect their research subjects from harm and, as professional scientists, to seek truth above any other goals. This presupposes that seeking truth is a sufficient guide to scientific decision-making. Heather Douglas, in her critical study of the ideal of value-freedom (Douglas 2009), rejects this notion. Douglas draws on her earlier study of inductive risk (Douglas 2000) to press the point that countless methodological decisions required in the course of carrying out a single piece of research are underdetermined by the factual elements of the situation and must be guided by an assessment of the consequences of being wrong. Science is not value-free, but it can be protected from the deleterious effects of values if scientists take steps to mitigate the influence of inappropriate values. One step is to distinguish between direct and indirect roles of values; another is the articulation of guidelines for individual scientists. Values play a direct role when they provide direct motivation to accept or reject a theory; they play an indirect role when they enter into evaluating the consequences of accepting or rejecting a claim, thus influencing what will count as sufficient evidence to accept or reject. The responsibility of scientists is to make sure that values do not play a direct role in their work and to be transparent about the indirect roles of values. A number of writers have taken issue with the tenability of Douglas’s distinction between direct and indirect roles. Steel and Whyte (2012) examine testing guidelines developed by pharmaceutical companies to point out that the very same decision may be motivated by values playing a direct role or playing an indirect role. If the point is to prohibit practices such as withholding negative results, then it shouldn’t matter whether the practice is motivated by values functioning directly or indirectly. Elliott (2011) questions whether only harmful consequences should be considered. If science is to be useful to policy makers, then questions of relative social benefit should also be permitted to play a role. Finally, the cognitive activities demanded by Douglas’s ethical prescriptions seem beyond the capacities of individual scientists. This point will be pursued below.

Torsten Wilholt (2013) argues that the research situation is more complicated than the epistemic vs. nonepistemic tradeoff implied by the decision theoretic approach. In part because of the difficulties in achieving the degree of knowledge required to realize Douglas’s ethical prescriptions, he argues that the reliance called for in science extends beyond the veridicality of reported results to the values guiding the investigators relied upon. Most research involves both results expressed statistically (which requires choosing a significance threshold and balancing the chances of Type I vs. Type II error) and multiple steps each requiring methodological decisions. These decisions, Wilholt argues, represent trade-offs among the reliability of positive results, the reliability of negative results, and the power of the investigation. In making these trade-offs, the investigator is perforce guided by an evaluation of the consequences of the various possible outcomes of the study. Wilholt extends the arguments about inductive risk offered originally by Richard Rudner and elaborated by Heather Douglas to propose that, in relying on another’s results, I am relying not only on their competence and truthfulness, but on their making methodological decisions informed by the same valuations of outcomes as I have. This is more than epistemic reliance; it is a deeper attitude of trust that we are guided by the same values in a shared enterprise. For Wilholt, then, scientific inquiry engages ethical norms as well as epistemic norms. Formal or mechanical solutions such as those suggested by the application of decision theoretic models are not sufficient if the community must be held together by shared ethical values.
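Wilholt’s trade-offs can be seen even in the simplest statistical setting. The following sketch (a one-sided z-test with made-up effect size and sample size, not drawn from any study Wilholt discusses) shows how tightening the significance threshold to guard against false positives inevitably raises the rate of false negatives at a fixed sample size:

```python
# Sketch: significance threshold vs. power for a one-sided z-test.
# The effect size, sigma, and sample size are hypothetical.
from scipy.stats import norm

effect, sigma, n = 0.3, 1.0, 50

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)                  # rejection cutoff
    power = 1 - norm.cdf(z_crit - effect * n**0.5 / sigma)
    print(f"alpha={alpha:.2f}  power={power:.2f}  "
          f"false-negative rate={1 - power:.2f}")
```

Which point on this trade-off curve to occupy is exactly the kind of methodological decision that, on Wilholt’s view, must be guided by a valuation of the consequences of each kind of error.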

Peer review and replication are methods the scientific community, indeed the research world in general, employs to assure consumers of scientific research that the work is credible. Peer review of both research proposals and research reports submitted for publication screens for quality, which includes methodological competence and appropriateness, as well as for originality and significance, while replication is intended to probe the robustness of results when reported experiments are carried out in different laboratories and with slight changes to experimental conditions. Scholars of peer review have noted various forms of bias entering into the peer review process. In a review of the literature, Lee, Sugimoto, Zhang, and Cronin (2013) report documented bias along lines of gender, language, nationality, prestige, and content, as well as such problems as lack of inter-reviewer consistency, confirmation bias, and reviewer conservatism. Lee (2012) argues that a Kuhnian perspective on values in science interprets lack of inter-reviewer consistency as variation in the interpretation, applicability, and weight assigned to shared values by different members of the scientific community. Lee and colleagues (2013) argue that journal editors must take much more action than is currently taken to require that researchers make their raw data and other relevant trial information available to enable peer reviewers to conduct their work adequately.

One issue that has yet to be addressed by philosophers is the gap between the ideal of replication resulting in confirmation, modification, or retraction and the reality. This ideal lies behind the assumptions of efficacy of structures of reward and sanction. Only if researchers believe that their research reports will be probed by efforts at replication will the threat of sanctions against faulty or fraudulent research be realistic. John Ioannidis and collaborators (Tatsioni, Bonitsis, and Ioannidis 2007; Young, Ioannidis, and Al-Ubaydli 2008) have shown how infrequently attempts to replicate are actually made and, even more strikingly, how contradicted results persist in the literature. This is an issue that goes beyond individuals and beyond large research collaborations to the scientific community in general. It underscores Wilholt’s contention that the scientific community must be held together by bonds of trust, but much more empirical and philosophical work is needed to address how to proceed when such trust is not justified. The demonstration of widespread lack of replicability of studies in psychology and in biomedical research has prompted debate about the causes and the seriousness of the alleged crisis (Loken and Gelman 2017; Ioannidis 2007; Redish, Kummerfeld, Morris, and Love 2018).

Winsberg, Huebner, and Kukla (2013) draw attention to a different kind of supra-empirical, ethical issue raised by the contemporary situation of multiple authorship. What they call “radically collaborative research” involves investigators with different forms of expertise, as in Hardwig’s example, and as is now common across many fields, collaborating to generate an experimental result. For Winsberg, Huebner, and Kukla, the question is not merely reliability, but accountability. Who can speak for the integrity of the research when it has been conducted by researchers with a variety not just of interests, but of methodological standards, most opaque one to another? Winsberg, Huebner, and Kukla argue that a model of the social collaboration is needed as much as a model of the data or of the instruments. They argue further that the laissez-faire Wisdom of Crowds model (according to which local differences in methodological standards will cancel each other out), while perhaps adequate if the question is one of reliability, is not adequate for addressing these issues of accountability. They do not themselves, however, offer an alternative model.

3. Science in Society

Work on the role of science in society encompasses both general models of the public authority of science and analysis of particular research programs that have a bearing on public life. In their early work, Steve Fuller and Joseph Rouse were both concerned with political dimensions of cognitive authority. Rouse, whose (1987) integrated analytic and continental philosophy of science and technology, sought to develop what might be called a critical pragmatism. This perspective facilitated an analysis of the transformative impact of science on human life and social relations. Rouse emphasized the increased power over individual lives that developments in science make possible. This can only be said to have increased with the development of information technology. Fuller (1988) partially accepted the empirical sociologists’ claim that traditional normative accounts of scientific knowledge fail to get a purchase on actual scientific practices, but took this as a challenge to relocate the normative concerns of philosophers. These should include the distribution and circulation of knowledge claims. The task of social epistemology of science, according to Fuller, should be regulation of the production of knowledge by regulating the rhetorical, technological, and administrative means of its communication. While there has not been much uptake of Fuller’s proposals as articulated, Lee’s work mentioned above begins to make detailed recommendations that take into account the current structures of funding and communication.

One key area of socially relevant interdisciplinary science is risk assessment, which involves both research on the effects of various substances or practices and the evaluation of those effects once identified. The idea is to gain an understanding of both the positive and the negative effects of a substance or practice and a method of evaluating them. This involves integrating the work of specialists in the kind of substance whose risks are under assessment (geneticists, chemists, physicists), biomedical specialists, epidemiologists, statisticians, and so on. In these cases, we are dealing not only with the problems of trust and authority among specialists from different disciplines, but also with the effects of introducing new technologies or new substances into the world. The risks studied are generally of harm to human health or to the environment. Interest in applying philosophical analysis to risk assessment originated in response to debates about the development and expansion of nuclear power-generating technologies. In addition, the application of cost-benefit analysis and attempts to understand decision-making under conditions of uncertainty became topics of interest as extensions of formal modeling techniques (Giere 1991). These discussions intersect with debates about the scope of rational decision theory and have expanded to include other technologies as well as applications of scientific research in agriculture and in the myriad forms of biological engineering. Essays on the relation between science and social values in risk research collected in the volume edited by Deborah Mayo and Rachelle Hollander (1991) attempt to steer a course between uncritical reliance on cost-benefit models and their absolute rejection. Coming from a slightly different angle, the precautionary principle represents an approach shifting the burden of proof in regulatory decisions from demonstration of harm to demonstration of safety of substances and practices. Carl Cranor (2004) explores versions of the principle and defends its use in certain decision contexts. Shrader-Frechette (2002) has advocated models of ethically weighted cost-benefit analysis and greater public involvement in risk assessment. In particular, she (Shrader-Frechette 1994, 2002) has argued for including members of the public in deliberations about health effects of and reasonable exposure limits on environmental pollutants, especially radioactive materials. Philosophers of science have also worked to make visible the ways in which values play a role in the research assessing the effects of techno-scientifically produced substances and practices themselves, as distinct from the challenges of assigning values to identified risks and benefits.

Douglas (2000) is an influential study of toxicological research on effects of exposure to dioxins. Douglas set her analysis in the framework of inductive risk introduced by Richard Rudner (1953) and also explored by Carl Hempel (1965). The ampliative character of inductive inference means that the premises can be true (and even strongly supportive) and the conclusion false. Rudner argued that this feature of inductive inference means that scientists ought to take the consequences of being wrong into account when determining how strong the evidence for a hypothesis needs to be before accepting the hypothesis. [But see Jeffrey (1956) for a different view.] Douglas proposes that such considerations reach deeper into the scientific process than the acceptance of a conclusion based on the evidence to the construction of the evidence itself. Scientists must make decisions about levels of statistical significance, how to balance the chance of false positives against the chance of false negatives. They must determine protocols for deciding borderline cases in their tissue samples. They must select among possible dose-response models. Deciding in one way has one set of social consequences, and in another way another, opposing, set of consequences. Douglas claims that scientists ought to take these risks into account when making the relevant methodological decisions. Since, even in her examples, public health considerations point in one direction and economic considerations point in another, in the end it is not clear just what responsibility can reasonably be assigned to the individual scientist.
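A toy example can show how the choice among dose-response models carries the opposing social consequences Douglas describes. The two models below, a linear no-threshold model and a threshold model with made-up parameters, are equally compatible with a hypothetical observation at a high dose yet disagree entirely about risk at the low doses where regulatory limits are set:

```python
# Hypothetical dose-response models; the parameters are chosen so the
# models agree at the observed high dose (10 units) and diverge below it.

def linear_no_threshold(dose, slope=0.02):
    # any exposure carries some risk, proportional to dose
    return slope * dose

def threshold(dose, cutoff=5.0, slope=0.04):
    # no risk below the cutoff, linear risk above it
    return max(0.0, slope * (dose - cutoff))

for dose in (1, 2, 5, 10):
    print(f"dose={dose:>2}  LNT risk={linear_no_threshold(dose):.3f}  "
          f"threshold risk={threshold(dose):.3f}")
```

Adopting the threshold model licenses more permissive exposure limits; adopting the linear no-threshold model supports stricter ones. Deciding one way has one set of social consequences, and deciding the other way the opposing set.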

In addition to risk assessment, philosophers have begun thinking about a variety of research programs and methods that affect human wellbeing. Lacey (2005), for example, delineates the contrasting values informing industrial, conventional agriculture on the one hand and small-scale agroecology on the other. Cartwright (2012), elaborated in Cartwright and Hardie (2012), is primarily a critical analysis of the reliance on randomized controlled trials to support policy decisions in economic development, medicine, and education. Such trials fail to take account of variations in contexts of application that will affect the outcome. Cartwright’s focus on a particular methodological approach is an extension of philosophers’ traditional engagement in areas of controversy in which philosophical analysis might make a difference. Philip Kitcher’s (1985), which took on sociobiology, and Elliott Sober and David Sloan Wilson’s (1998), an extensive argument for group-level selection, are examples that focus on the content and methodology of extensions of evolutionary theory.

Climate change research has provoked several quite different kinds of analysis. As a complex interdisciplinary field, its evidential structure leaves it vulnerable to challenge. Opponents of limiting the use of fossil fuels have exploited those vulnerabilities to sow public doubts about the reality and/or causes of climate change (Oreskes and Conway 2011). Parker (2006), Lloyd (2010), Parker (2010), and Winsberg (2012) have investigated, among other things, strategies for reconciling apparent inconsistencies among climate models, the differences between model-based projections and strictly inductive projections, and methods for assessing and communicating the uncertainties inherent in climate models. Philosophers have also considered how to interpret the (American) public’s susceptibility to climate change deniers. Philip Kitcher (2012) interprets it as lack of information amid a plethora of misinformation and proposes methods for more effective communication of reputable science to the public. Anderson (2011), on the contrary, contends that members of the public are perfectly able to evaluate the reliability of contradictory assessments by following citation trails, etc., whether on the internet or in hard copies of journals. Her view is that the reluctance to accept the reality of climate change is a reluctance to abandon familiar ways of life, which is what averting climate-caused disaster requires all to do. Finally, there is an ethical and political question once the inevitability of climate change is accepted: how should the burdens of taking action be distributed? The industrialized West is responsible for most of the carbon pollution up to the end of the 20th century, but developing nations trying to industrialize have contributed an increasing share, and will continue to do so, in the 21st century. Who bears the burden? And if the effects will only be felt by generations in the future, why should present generations take actions whose harms will be felt now and whose benefits lie in the future and will not be experienced by those bearing the costs? Broome (2008) explores the intergenerational issues, while Raina (2015) explores the global dimensions.

Two additional areas of ongoing scientific controversy are the biological reality (or not) of race and the biology of gender differences. Developments in genetics, and documented racial differences in health, have thrown doubt on earlier anti-realist views of race, such as those articulated by Stephen Jay Gould (1981) and Richard Lewontin (Lewontin, Rose, and Kamin 1984). Spencer (2012, 2014) argues for a sophisticated form of biological racial realism. Gannett (2003) argues that biological populations are not independent objects that can provide data relevant to racial realism, while Kaplan and Winther (2013) argue that no claims about race can be read from biological theory or data. The reality and basis of observed gender differences were the subject of much debate in the late 20th century (see Fausto-Sterling 1992). These issues have crystallized in the early 21st century in debates about the brain and cognition, drawing the attention of philosophers of biology and cognitive scientists. Rebecca Jordan-Young (2010), Cordelia Fine (2010), and Bluhm, Jacobson, and Maibom, eds. (2012) all explore, with an aim of debunking, claims of gendered brains.

4. Social, Cultural, and Feminist Studies of Science

Kuhn’s critique of logical empiricism included a strong naturalism. Scientific rationality was to be understood by studying actual episodes in the history of science, not by formal analyses developed from a priori concepts of knowledge and reason (Kuhn 1962, 1977). Sociologists and sociologically inclined historians of science took this as a mandate for the examination of the full spectrum of scientists’ practices without any prior prejudice as to which were epistemically legitimate and which not. That very distinction came under suspicion from the new social scholars, often labeled “social constructivists.” They urged that understanding the production of scientific knowledge required looking at all the factors causally relevant to the acceptance of a scientific idea, not just at those the researcher thinks should be relevant.

A wide range of approaches in social and cultural studies of science has come under the umbrella label of “social constructivism.” Both terms in the label are understood differently in different programs of research. While constructivists agree in holding that those factors treated as evidential, or as rationally justifying acceptance, should not be privileged at the expense of other causally relevant factors, they differ in their view of which factors are causal or worth examination. Macro-analytic approaches, such as those associated with the so-called Strong Programme in the Sociology of Scientific Knowledge, treat social relations as an external, independent factor and scientific judgment and content as a dependent outcome. Micro-analyses or laboratory studies, on the other hand, abjure the implied separation of social context and scientific practice and focus on the social relations within scientific research programs and communities and on those that bind research-productive and research-receptive communities together.

Researchers also differ in the degree to which they treat the social and the cognitive dimensions of inquiry as independent or interactive. The researchers associated with the macro-analytic Strong Programme in the Sociology of Scientific Knowledge (Barry Barnes, David Bloor, Harry Collins, Donald MacKenzie, Andrew Pickering, Steve Shapin) were particularly interested in the role of large scale social phenomena, whether widely held social/political ideologies or group professional interests, on the settlement of scientific controversies. Some landmark studies in this genre include Andrew Pickering’s (1984) study of competing professional interests in the interpretation of high energy particle physics experiments, and Steven Shapin and Simon Schaffer’s (1985) study of the controversy between Robert Boyle and Thomas Hobbes about the epistemological relevance of experiments with vacuum pumps.

The micro-sociological or laboratory studies approach features ethnographic study of particular research groups, tracing the myriad activities and interactions that eventuate in the production and acceptance of a scientific fact or datum. Karin Knorr Cetina’s (1981) reports her year-long study of a plant science laboratory at UC Berkeley. Bruno Latour and Steven Woolgar’s (1986) study of Roger Guillemin’s neuroendocrinology laboratory at the Salk Institute is another classic in this genre. These scholars argued in subsequent work (Knorr-Cetina 1983; Latour 1987) that their form of study showed that philosophical analyses of rationality, of evidence, of truth and knowledge, were irrelevant to understanding scientific knowledge. Sharon Traweek’s (1988) comparative study of the cultures of Japanese and North American high energy physics communities pointed to the parallels between cosmology and social organization but abstained from making extravagant or provocative epistemological claims. The efforts of philosophers of science to articulate norms of scientific reasoning and judgment were, in the view of both macro- and micro-oriented scholars, misdirected, because actual scientists relied on quite different kinds of considerations in the practice of science.

Until recently, apart from a few anomalous figures like Caroline Herschel, Barbara McClintock, and Marie Curie, the sciences were a male preserve. Feminist scholars have asked what bearing the masculinity of the scientific profession has had on the content of science and on conceptions of scientific knowledge and practice. Drawing both on work by feminist scientists that exposed and critiqued gender biased science and on theories of gender, feminist historians and philosophers of science have offered a variety of models of scientific knowledge and reasoning intended to accommodate the criticism of accepted science and the concomitant proposal and advocacy of alternatives. Evelyn Keller (1985) proposed a psycho-dynamic model of knowledge and objectivity, arguing that a certain psychological profile, facilitated by typical patterns of masculine psychological development, associated knowledge and objectivity with domination. The association of knowledge and control continues to be a topic of concern for feminist thinkers as it is also for environmentally concerned critics of the sciences. In this connection, see especially Lacey’s (2005) study of the controversy concerning transgenic crops. Other feminists turned to Marxist models of social relations and developed versions of standpoint theory, which holds that the beliefs held by a group reflect the social interests of that group. As a consequence, the scientific theories accepted in a context marked by divisions of power such as gender will reflect the interests of those in power. Alternative theoretical perspectives can be expected from those systematically excluded from power. (Harding 1986; Rose 1983; Haraway 1978).

Still other feminists have argued that some standard philosophical approaches to the sciences can be used to express feminist concerns. Nelson (1990) adopts Quine’s holism and naturalism to analyze debates in recent biology. Elizabeth Potter (2001) adapts Mary Hesse’s network theory of scientific inference to analyze gendered aspects of 17th century physics. Helen Longino (1990) develops a contextual empiricism to analyze research in human evolution and in neuroendocrinology. In addition to the direct role played by gender bias, scholars have attended to the ways shared values in the context of reception can confer an a priori implausibility on certain ideas. Keller (1983) argued that this was the fate of Barbara McClintock’s unorthodox proposals of genetic transposition. Stephen Kellert (1993) made a similar suggestion regarding the initial resistance to so-called chaos theory, that is, the use of non-linear dynamics to model processes like climate change.

What the feminist and empirical sociological analyses have in common is the view that the social organization of the scientific community has a bearing on the knowledge produced by that community. There are deep differences, however, in their views as to what features of that social organization are deemed relevant and how they are expressed in the theories and models accepted by a given community. The gender relations focused on by feminists went unrecognized by sociologists pursuing macro- or micro-sociological research programs. The feminist scientists and scholars further differ from the scholars in empirical social and cultural studies of science in their call for alternative theories and approaches in the sciences. These calls imply that philosophical concerns with truth and justification are not only legitimate but useful tools in advancing feminist transformative goals for the sciences. As can be seen in their varying treatments of objectivity, however, philosophical concepts are often reworked in order to be made applicable to the content or episodes of interest (see Anderson 2004; Haraway 1988; Harding 1993; Keller 1985; Longino 1990; Nelson 1990; Wylie 2005).

In addition to differences in analysis of philosophical concepts like objectivity, rationality, or truth, feminist philosophers of science have also debated the proper role of contextual (sometimes called, “external” or “social”) values. Some feminists argue that, given that values do play a role in scientific inquiry, socially progressive values ought to shape not only decisions about what to investigate but also the processes of justification. Philosophers of science should incorporate exemplification of the right values in their accounts of confirmation or justification. Others are less certain about the identification of the values that should and those that should not inform the conduct of science. These philosophers are dubious that a consensus exists, or is even possible in a pluralistic society, on what constitute the values that ought to guide inquiry. In an exchange with Ronald Giere, Janet Kourany (2003a, 2003b) argues that not only science, but philosophy of science ought to be concerned with the promotion of socially progressive values. Giere (2003) replies that what counts as socially progressive will vary among philosophers, and that in a democracy, it is unlikely that a unanimous or near unanimous consensus regarding the values to inform philosophical analysis or scientific inquiry could be achieved either in the larger society or in the smaller social subset of philosophers of science.

5. Models of the Social Character of Knowledge

Since 1980, interest in developing philosophical accounts of scientific knowledge that incorporate the social dimensions of scientific practice has been on the increase. Some philosophers see attention to the social as a straightforward extension of already developed approaches in epistemology. Others, inclined toward some form of naturalism, have taken the work in empirical social studies of science discussed above seriously. They have, however, diverged quite considerably in their treatment of the social. Some understand the social as biasing or distorting, and hence see the social as opposed to or competing with the cognitive or epistemic. These philosophers see the sociologists’ disdain for normative philosophical concerns as part of a general debunking of science that demands a response and defense. Some philosophers see the social aspects of science as incidental to deep questions about knowledge, but informative about certain tendencies in scientific communities. Others treat the social as instead constitutive of rationality. These differences in conception of the role and nature of the social inform differences in the several approaches to modeling the sociality of inquiry and knowledge discussed below.

Contemporary philosophers pursue both formal and informal modeling approaches in addressing the social character of knowledge. Those pursuing formal models tend to bracket questions about rationality, objectivity, or justification and concentrate on mathematically investigating the effects of community structures on features of the pursuit of knowledge and its diffusion in a community. Those pursuing informal models are more interested in understanding the role of the community in enhancing or constituting desired features of inquiry such as rationality and objectivity and in thinking about the ways knowledge is realized.

Communication and the division of cognitive labor. Among the first issues to be investigated using formal techniques was the division of cognitive labor. While big science projects such as those discussed by Hardwig pose a problem of integrating disparate elements of the solution to a question, the division of cognitive labor concerns the appropriate or optimal distribution of efforts towards solving a given problem. If everyone follows the same research strategy to solve a problem or answer a question, then a solution lying outside that strategy will not be reached. If such a solution is better than any attainable via the shared strategy, the community fails to attain the better solution. But how can it be rational to adopt a research strategy other than the one deemed at the time most likely to succeed? Philip Kitcher in his (1993) was concerned to offer an alternative to the strong programme’s proposal that controversy and the persistence of alternative research programs were a function of the varying social or ideological commitments of researchers. However, he also acknowledged that if researchers followed only the strategy judged at the time most likely to lead to truth, they would not pursue unorthodox strategies that might lead to new discoveries. He therefore labeled the observed fact that researchers pursue different approaches to the same problem the division of cognitive labor and proposed a decision model that attributed the pursuit of a nonorthodox (maverick) research strategy to a rational calculation about the chances of a positive payoff. This chance was calculated on the basis of the likelihood of the maverick strategy being successful (or more successful than the orthodox approach), the number of peers pursuing orthodox or other maverick strategies, and the anticipated reward of success. A community can allocate research resources in such a way as to maintain the balance of orthodox and maverick scientists most likely to facilitate progress. Thus, scientific progress can tolerate, and indeed benefits from, a certain amount of “impure” motivation.

Michael Strevens (2003) argued instead that the pursuit of maverick research strategies is to be expected as a consequence of the priority rule. The priority rule refers to the practice of naming a law or object after the first individual to articulate or perceive and identify it. Think of Boyle’s Law, Halley’s comet, the Planck constant, Avogadro’s number, etc. There is no such reward attached to pursuing a research strategy devised by another and “merely” adding to what that individual has already discovered. The rewards of research come from being first, and to be first requires pursuing a novel problem or strategy. The division of cognitive labor, understood as different researchers pursuing different research strategies, is a simple effect of the priority rule.

Muldoon and Weisberg (2011) reject both Kitcher’s and Strevens’s accounts as presupposing unrealistically uniform and ideal agents. In reality, they observe, scientists have at best imperfect knowledge of the research situation, do not know the entire research landscape, and, when they do know parts of it, know different things. They do not have sufficient information to employ the decision methods Kitcher and Strevens attribute to them. Muldoon and Weisberg propose agent-based modeling as a means to represent the imperfect, non-overlapping, and partial knowledge of the agents deciding what research problems and strategies to pursue. Solomon’s advocacy of dissensus, discussed below, can be understood as rejecting the premises of the problem: from that point of view, the aim of scientific organization ought to be to promote disagreement.
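Returning to Kitcher’s allocation question, a minimal sketch can make the community-level calculation concrete. The functional forms and numbers below are stand-ins, not Kitcher’s own formulas: suppose each program’s chance of success grows with diminishing returns in the number of researchers pursuing it, and ask how a community of fixed size should divide itself between an orthodox program and a less promising maverick one.

```python
# Sketch of a community-level division-of-labor calculation.
# The success curves and parameters are illustrative only.

def p_program(n, p_max, half):
    # concave returns to labor: chance of success saturates at p_max
    return p_max * n / (n + half)

def p_community(n_maverick, total=100):
    p_orthodox = p_program(total - n_maverick, p_max=0.8, half=20)
    p_maverick = p_program(n_maverick, p_max=0.4, half=10)
    # the problem is solved if at least one program succeeds
    return 1 - (1 - p_orthodox) * (1 - p_maverick)

best = max(range(101), key=p_community)
print(f"best allocation: {best} of 100 researchers on the maverick program")
print(f"P(success) = {p_community(best):.3f} vs. {p_community(0):.3f} "
      f"with everyone orthodox")
```

Even though the maverick program is individually less promising, the community-optimal allocation here puts roughly a quarter of the researchers on it. On Kitcher’s picture, the job of the reward structure is to motivate individually rational scientists to realize something like this distribution.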

Kevin Zollman, following Bala and Goyal (1998), used network theory to model different possible communication structures. The aim of Zollman (2007, 2013) is to investigate what difference communication structures make to the chances of a scientific community settling on a correct (or incorrect) theory or hypothesis and to the speed with which such a consensus is reached. Networks consist of nodes and edges that connect them. The nodes can represent individuals or any group that has uniform beliefs. The nodes can take the values believe or not believe, and consensus consists in all nodes in the network taking the same value. Zollman investigates three possible communication structures: the cycle, in which each node is connected only to the nodes on either side of it in the cycle; the wheel, in which there is a central node to which all other nodes are exclusively connected; and the complete, in which each node is connected to every other node. Using the mathematics of network theory, Zollman proves the somewhat counterintuitive thesis that the network with limited communication, the cycle, has the highest probability of consensus on the correct hypothesis, while the network with the densest communication, the complete, has a non-negligible probability of consensus (from which departure is not possible) on the incorrect hypothesis. Zollman (2010) also uses this method to investigate the division of labor problem, although he comes at it from a slightly different point of view than do Kitcher or Strevens. Structures with sparse or limited communication are more likely to arrive at the correct hypothesis, but because they take longer to reach consensus, different research approaches may persist in such communities. Under the right circumstances, this will prevent foreclosure on the incorrect hypothesis. Zollman implicitly blames a dense communication structure for the premature abandonment of the bacterial hypothesis of peptic ulcers. Diversity is a good thing as long as the evidence is not decisive, and if the acid hypothesis, which held sway until a new staining method showed the presence of Helicobacter pylori, had been slower to diffuse into the community, the bacterial hypothesis might have been preserved long enough to be better supported.
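The flavor of these results can be conveyed with a stripped-down reimplementation of the kind of model Zollman uses. The code below is a schematic reconstruction with arbitrary parameters, not his model or his numbers: agents face a two-armed bandit in which a new method is slightly better than a familiar one whose payoff is taken as known; each round, agents who currently believe the new method is better try it, share their outcomes with their network neighbors, and all update simple Beta-style beliefs.

```python
import random

def make_network(kind, n):
    # adjacency lists for the three structures described above
    if kind == "cycle":
        return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    if kind == "wheel":
        nbrs = {i: [n - 1] for i in range(n - 1)}   # spokes see only the hub
        nbrs[n - 1] = list(range(n - 1))            # hub sees everyone
        return nbrs
    return {i: [j for j in range(n) if j != i] for i in range(n)}  # complete

def run(kind, n=10, rounds=500, p_known=0.5, p_new=0.55, pulls=10):
    nbrs = make_network(kind, n)
    # per-agent pseudo-counts (successes, failures) for the new method
    belief = {i: [random.uniform(0, 4), random.uniform(0, 4)] for i in range(n)}
    for _ in range(rounds):
        results = {}
        for i in range(n):
            s, f = belief[i]
            if s / (s + f) > p_known:      # i expects the new method to pay
                wins = sum(random.random() < p_new for _ in range(pulls))
                results[i] = (wins, pulls - wins)
        for i in range(n):
            for j in [i] + nbrs[i]:        # learn from self and neighbors
                if j in results:
                    belief[i][0] += results[j][0]
                    belief[i][1] += results[j][1]
    return sum(s / (s + f) > p_known for s, f in belief.values())

random.seed(1)
for kind in ("cycle", "wheel", "complete"):
    print(f"{kind}: {run(kind)} of 10 agents end up favoring the better method")
```

Nothing is guaranteed on a single run; Zollman’s comparisons concern frequencies of correct consensus across many runs, with the sparse cycle doing best because unlucky early results do not propagate to everyone at once.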

While Zollman presents his results as an alternative to the reward mechanisms discussed by Kitcher, Strevens, and Muldoon and Weisberg, his models do not include a mechanism for establishing any of the network structures as the preferred communication system for a scientific community. Kitcher and the others were concerned with how agents might be motivated to pursue a theory or method whose chance of success was either unknown or thought unlikely. Funding bodies like governmental science foundations and private foundations provide or can provide the relevant reward structure. Prize-giving bodies, like the Nobel Foundation or the Kavli Foundation, as well as historical practice, entrench the priority rule. Both of these are community methods that can motivate the choice to pursue high risk, high reward research. It is not clear how communities would select communication structures, nor what kind of system would be able to enforce a structure. Rosenstock, O’Connor, and Bruner (2017) point out in addition that Zollman’s results are very sensitive to how the parameters of the models are set. Adjust the number of nodes or the probabilities assigned to the alternative strategies/hypotheses, and the Zollman effect disappears. The probability of consensus on the incorrect hypothesis in the densely connected communication structure reduces to close to zero with more nodes or greater disparity of the probabilities assigned to alternatives.

O’Connor and other colleagues have used evolutionary game theory to model other community phenomena, such as the persistence of minority disadvantage in scientific communities (Rubin & O’Connor 2018), scientific polarization (O’Connor & Weatherall 2017), diversity (O’Connor & Bruner 2017), and conservatism in science (O’Connor forthcoming). While not necessarily claiming that these game theoretic models are fully descriptive of the phenomena they model, these theorists do claim that, given certain initial conditions, certain undesirable social situations (like the disadvantage accruing to minority status) are to be expected, rather than being understood as perversions of scientific practice. This would suggest that some ways of addressing those undesirable social outcomes may not be effective and that alternative measures ought to be sought in case of failure.

Sociality, rationality, and objectivity. Philosophers who treat the social as biasing or distorting tend to focus on the constructivists’ view that there are no universal principles of rationality or principles of evidence that can be used to identify in any context-independent way which factors are evidential and which not. Reconciliationists tend to argue that what is correct in the sociologists’ accounts can be accommodated in orthodox accounts of scientific knowledge. The key is sifting the correct from the exaggerated or misguided. Integrationists read the relevance of the sociologists’ accounts as supporting the development of new accounts of rationality or objectivity, rather than as grounds for rejecting the cogency of such normative ideals.

Philosophers concerned to defend the rationality of science against sociological misrepresentations include Larry Laudan (1984), James Brown (1989, 1994), Alvin Goldman (1987, 1995), and Susan Haack (1996). The details of these philosophers’ approaches differ, but they agree in holding that scientists are persuaded by what they regard as the best evidence or argument, the evidence most indicative of the truth by their lights, and in holding that arguments and evidence are the appropriate focus of attention for understanding the production of scientific knowledge. When evidential considerations have not trumped non-evidential considerations, we have an instance of bad science. They read the sociologists as arguing that a principled distinction between evidential and nonevidential considerations cannot be drawn, and devote considerable effort to refuting those arguments. In their positive proposals for accommodating the social character of science, sociality is understood as a matter of the aggregation of individuals, not their interactions, and public knowledge as simply the additive outcome of many individuals making sound epistemic judgments. Individual rationality and individual knowledge are thus the proper focus of philosophers of science: exhibiting principles of rationality applicable to individual reasoning is sufficient to demonstrate the rationality of science, at least in its ideal form.

Reconciliationists include Ronald Giere, Mary Hesse, and Philip Kitcher. Giere (1988) models scientific judgment using decision theory. This permits incorporating scientists’ interests as one of the parameters of the decision matrix. He also advocates a satisficing, rather than optimizing, approach to modeling the decision situation, thus enabling different interests interacting with the same empirical base to support different selections as long as they are consistent with that base. Mary Hesse (1980) employs a network model of scientific inference that resembles W.V.O. Quine’s web of belief in that its constituents are heterogeneous in character, but all subject to revision in relation to changes elsewhere in the network. She understands the social factors as coherence conditions operating in tandem with logical constraints to determine the relative plausibility of beliefs in the network.

The most elaborate reconciliationist position is that developed in Philip Kitcher’s (1993). In addition to modeling relations of authority and the division of cognitive labor as described above, he offers what he terms a compromise between extreme rationalists and sociological debunkers. The compromise model appeals to a principle of rationality, which Kitcher calls the External Standard. It is deemed external because it is proposed as holding independently of any particular historical, cultural, or social context; thus, not only is it external, but it is also universal. The principle applies to change of belief (or shift from one practice to another, in Kitcher’s broader locution), not to belief. It treats a shift (in practice or belief) as rational if and only if “the process through which the shift was made has a success ratio at least as high as that of any other process used by human beings (ever) ...” (Kitcher 1993, 303). Kitcher’s compromise proposes that scientific ideas develop over time and benefit from the contributions of many differently motivated researchers. This is the concession to the sociologically oriented scholars. In the end, however, those theories that are rationally accepted are those that satisfy Kitcher’s External Standard. Kitcher thus joins Goldman, Haack, and Laudan in the view that it is possible to articulate a priori conditions of rationality or of epistemic warrant that operate independently of, or, perhaps one might say, orthogonally to, the social relations of science.
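Rendered as a bare decision rule, the External Standard compares the track records of belief-forming processes. The toy sketch below, with entirely invented track records and hypothetical process names, does no more than make the quantifier “any other process used by human beings (ever)” concrete:

```python
def success_ratio(record):
    """record: one boolean per recorded use of a belief-revision process,
    True when the shift it produced counted as a success."""
    return sum(record) / len(record)

def rational_by_external_standard(record, all_records):
    """Kitcher-style test: a shift is rational iff its process's success ratio
    is at least as high as that of any process ever used (Kitcher 1993, 303)."""
    return success_ratio(record) >= max(success_ratio(r) for r in all_records)

# Entirely invented track records, for illustration only.
records = {
    "controlled experiment": [True, True, True, False],    # ratio 0.75
    "appeal to authority":   [True, False, False, False],  # ratio 0.25
}
for name, record in records.items():
    print(name, rational_by_external_standard(record, records.values()))
# -> controlled experiment True; appeal to authority False
```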

A third set of models is integrationist in character. Integrationists use the observations of sociologists of science to develop alternative accounts of scientific rationality and objectivity. Nelson (1990) focuses on a slightly different aspect of Quine’s holism than does Hesse. Nelson uses Quine’s arguments against the independently foundational status of observation statements as the basis for what she calls a feminist empiricism. According to Nelson, no principled distinction can be made between the theories, observations, or values of a community. What counts as evidence, in her view, is fixed by the entire complex of a community’s theories, value commitments, and observations. There is neither knowledge nor evidence apart from such a shared complex. The community is the primary knower on this view and individual knowledge is dependent on the knowledge and values of the community.

Miriam Solomon’s social empiricism is focused on scientific rationality (Solomon 2001). It, too, involves denying a universal principled distinction among the causes of belief. Solomon draws on contemporary cognitive science literature to argue that what are traditionally called biases are simply among the kinds of “decision vector” that influence belief. They are not necessarily undesirable elements from which science needs to be protected, and can be productive of insight and rational belief. Salience and availability (of data, of measurement technologies), also called cold biases, are decision vectors as much as social ideologies or other motivational factors, the “hot biases,” are. The distinctive feature of Solomon’s social empiricism is her contrast between individual and community rationality. Her (2001) urges the pluralistic view that a community is rational when the theories it accepts are those that have unique empirical successes. On this view, individuals can persist in beliefs that are (from a panoptic perspective) less well supported than others, if the totality of available evidence (or empirical data) is not available to them, or when their favored theory accounts for phenomena not accounted for by other theories, even when those other theories may have a greater quantity of empirical successes. What matters to science, however, is that the aggregated judgments of a community be rational: a community is rational when the theories it accepts are those with all, or with unique, empirical successes. It is collectively irrational to jettison a theory with unique empirical successes. Thus, the community can be rational even when its members are, as judged by traditional epistemic standards, individually irrational. Indeed, individual irrationality can contribute to community rationality, in that individuals committed to a theory that accounts for their data keep that data in the range of phenomena any theory accepted by the entire community must eventually explain. Beyond empirical success, Solomon proposes a second normative criterion: in order to secure an appropriate distribution of scientific effort, biases must be appropriately distributed in the community, and she proposes a scheme for ascertaining when a distribution is normatively appropriate. Thus, for Solomon, a scientific community is rational when biases are appropriately distributed and it accepts only a theory with all, or theories with unique, empirical successes. Rationality accrues only to the community, not to the individuals constituting it. As in Zollman’s network models, consensus just is all members of the community assigning the same value (T/F) to a hypothesis or theory.
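Solomon’s acceptance rule lends itself to a schematic rendering. The sketch below, with an invented example loosely echoing the ulcer case, merely checks which theories retain unique empirical successes; it is not Solomon’s own formalism, and it leaves out her scheme for assessing whether biases are appropriately distributed:

```python
def community_accepts(successes):
    """Sketch of a Solomon-style rule: if one theory has all the empirical
    successes, it alone is accepted; otherwise every theory with at least
    one unique success (a success no rival has) stays accepted."""
    everything = set().union(*successes.values())
    complete = [t for t, s in successes.items() if s == everything]
    if complete:
        return complete
    return [t for t, s in successes.items()
            if s - set().union(*(r for u, r in successes.items() if u != t))]

# Invented data: each theory maps to the phenomena it successfully handles.
successes = {
    "acid theory":      {"ulcer recurrence", "acid levels"},
    "bacterial theory": {"antibiotic cure", "H. pylori staining"},
    "diet theory":      {"acid levels"},
}
print(community_accepts(successes))
# -> ['acid theory', 'bacterial theory']: each retains a unique success, so a
#    rational community keeps both; 'diet theory' has none and drops out.
```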

Finally, in Longino’s critical contextual empiricism, the cognitive processes that eventuate in scientific knowledge are themselves social (Longino 1990, 2002). Longino’s starting point is a version of the underdetermination argument: the semantic gap between statements describing data and statements expressing hypotheses or theories to be confirmed or disconfirmed by that data. This gap, created by the difference in descriptive terms used in the description of data and in the expression of hypotheses, means that evidential relations cannot be formally specified and that data cannot support one theory or hypothesis to the exclusion of all alternatives. Instead, such relations are mediated by background assumptions. Eventually, in the chain of justification, one reaches assumptions for which no evidence is available. If these are the context in which evidential relations are constituted, questions arise concerning how the acceptance of such assumptions can be legitimated. According to Longino, the only check against the arbitrary dominance of subjective (metaphysical, political, aesthetic) preference in such cases is critical interaction among the members of the scientific community or among members of different communities. There is no higher authority or transcendent aperspectival position from which it is possible to adjudicate among foundational assumptions. Longino takes the underdetermination argument to express in logical terms the point made by the sociologically oriented researchers: the individuals participating in the production of scientific knowledge are historically, geographically, and socially situated and their observations and reasoning reflect their situations. This fact does not undermine the normative enterprise of philosophy, but requires its expansion to include within its scope the social interactions within and between scientific communities. What counts as knowledge is determined by such interactions.

Longino claims that scientific communities do institutionalize some critical practices (for example, peer review), but argues that such practices and institutions must satisfy conditions of effectiveness in order to qualify as objective. She argues, therefore, for the expansion of scientific norms such as accuracy and consistency to include norms that apply to communities. These are (1) the provision of venues in which critical interaction can take place, (2) the uptake of critical intervention as demonstrated in change of belief distribution in the community over time in a way that is sensitive to the critical discourse taking place within that community, (3) public accessibility of the standards that regulate discourse, and (4) tempered equality of intellectual authority. By this latter condition, perhaps the most controversial of her proposed norms, Longino means that any perspective has a prima facie capacity to contribute to the critical interactions of a community, though equal standing can be lost owing to failure to engage or to respond to criticism. In her 2002, Longino argues that the cognitive processes of science, such as observation and reasoning, are themselves social processes. Thus the interactions subject to community norms extend not only to discussion of assumptions in finished research, but to the constructive processes of research as well.

Solomon and Longino differ on where they locate normativity and on the role and effectiveness of deliberative processes in actual scientific inquiry. Solomon attends to the patterns of acceptance and to the distribution of decision vectors, regardless of the interactions among community members, while Longino attends to deliberative processes and interactions. They may also differ in their views of what constitutes scientific success.

One set of issues that has yet to give rise to extended philosophical reflection is the question of how civilizational differences are expressed in scientific work (see Bala 2008). Here, too, there is a micro- and a macro-version. At the micro level, one might ask how the interactional culture of individual laboratories or theoretical subcommunities is or is not expressed in the outcome of their research. At the macro level, one might ask how large-scale cultural features are reflected in the content and practice of science in a given cultural formation. For example, Joseph Needham argued that features of the culture of ancient China directed its technical and intellectual ingenuity into channels that foreclosed the development of anything like the science that developed in Western Europe in the 14th through the 17th centuries. Other cultures developed some aspects of what we now think of as a cosmopolitan or global scientific culture (for example, the mathematics and astronomy of 10th through 14th century Islamic and South Asian scholars) independently of the early modern physics developed in Western and Central Europe. The papers in Habib and Raina (2001) address aspects of these questions with respect to the history of science in India.

Unity, Plurality and the Aims of Inquiry. The variety of views on the degree of sociality assignable to the epistemological concepts of science leads to different positions concerning the ultimate character of the outcome of inquiry. This difference can be summarized as the difference between monism and pluralism. Monism, as characterized in Kellert, Longino, and Waters (2006), holds that the goal of inquiry is and should be a unified, comprehensive, and complete account of phenomena (whether all phenomena, or the phenomena specific to a particular domain of inquiry). If this is so, then the norms of assessment should be informed by this goal, and there should be one standard by which theories, models, and hypotheses in the sciences are assessed. Deviation from an accepted theoretical framework is problematic and requires explanation, such as the explanations offered for the division of cognitive labor. Monism, with its commitment to ultimate unity, requires ways to reconcile competing theories or to adjudicate controversy so as to eliminate competition in favor of the one true or best theory. Pluralism, on the other hand, holds that the observed plurality of approaches within a science is not necessarily a flaw but rather reflects the complexity of the phenomena under investigation, in interaction with the limitations of human cognitive capacities and the variety of human cognitive as well as pragmatic interests in representations of those phenomena.

Among pluralists, a diversity of views is to be found. Suppes (1978) emphasized the mutual untranslatability of the descriptive terms developed in the course of scientific specialization. Such incommensurability will resist evaluation by a common measure. Cartwright’s (1999) invocation of a dappled world emphasizes the complexity and diversity of the natural (and social) world. Scientific theories and models are representations of varying degrees of abstraction that manage to apply at best partially to whatever phenomena they purport to represent. To the extent that they are taken to represent actual processes in the real world, they must be hedged by ceteris paribus clauses. Scientific laws and models attach to patches of the world, but not to a seamlessly law-governed whole. Mitchell’s (2002, 2009) integrative pluralism is a rejection of the goal of unification by either reduction to a single (fundamental) level of explanation or abstraction to a single theoretical representation, in favor of a more pragmatically inflected set of explanatory strategies. The success of any particular investigation is answerable to the goals of the investigation, but there may be multiple compatible accounts reflecting both the contingency and partiality of the laws/generalizations that can figure in explanations and the different goals one may bring to investigation of the same phenomenon. The explanations sought in any particular explanatory situation will draw on these multiple accounts as appropriate for the level of representation adequate to achieve their pragmatic ends. Mitchell’s defense of integrative pluralism rests on both the partiality of representation and the complexity of the phenomena to be explained.

Kellert, Longino, and Waters advance a pluralism that sees multiplicity not only among but within levels of analysis. Furthermore, they see no reason to require that the multiple accounts be compatible. The multiplicity of noncongruent empirically adequate accounts helps us appreciate the complexity of a phenomenon without our being in a position to generate a single account of that complexity. They do not hold that all phenomena will support ineliminable pluralism, but that there are some phenomena that will require mutually irreducible or incompatible models. Which these are is determined by examining the phenomena, the models, and the match between phenomena and models. Like Mitchell, Kellert, Longino, and Waters hold that pragmatic considerations (broadly understood) will govern the choice of model to be used in particular circumstances. Both forms of pluralism (compatibilist and noncompatibilist) abandon the notion that there is a set of natural kinds whose causal interactions are the basis for fundamental explanations of natural processes. The noncompatibilist is open to multiple classification schemes answerable to different pragmatic interests in classifying. To this extent the noncompatibilist pluralist embraces a view close to the promiscuous realism articulated by John Dupré (1993). The compatibilist, or integrative pluralist, on the other hand, must hold that there is a way that different classification schemes can be reconciled to support the envisioned integration of explanatory models.

Pluralism receives support from several additional approaches. Giere (2006) uses the phenomenon of color vision to support a position he calls perspectival realism. Like the colors of objects, scientific representations are the result of interactions between human cognitive faculties and the world. Other species have different visual equipment and perceive the world differently. Our human cognitive faculties, then, constitute perspectives; we could have been built differently and hence perceived the world differently. Perspectival realism leads to pluralism, because perspectives are partial. While van Fraassen’s (2008) does not take a position on pluralism vs. monism (and as an empiricist and antirealist van Fraassen would not have to), its emphasis on the partiality and perspective dependence of measurement provides a complementary point of entry to such diversity. Solomon (2006) urges a yet more welcoming attitude towards multiplicity. In her view, dissensus is a necessary component of well-functioning scientific communities and consensus can be epistemologically pernicious. In an extension of the arguments in Solomon (2001), she argues that different models and theoretical representations will be associated with particular insights or specific data that are likely to be lost if the aim is to integrate or otherwise combine the models to achieve a consensus understanding. The activity of integrating two or more models differs from the process by which one model from a set of alternatives eventually comes to have all the empirical successes initially distributed among the others. In her examination of consensus conferences called by the United States National Institutes of Health (Solomon 2011), Solomon finds that such conferences do not resolve existing dissent in the scientific community. Instead, they tend to take place after a consensus has emerged in the research community, and are directed more to the communication of that consensus to outside communities (such as clinicians, insurers, health policy experts, and the public) than to the assessment of evidence that might warrant consensus.

Researchers committed to a monist or unified science will see plurality as a problem to be overcome, while researchers already committed to a deeply social view of science will see plurality as a resource of communities rather than a problem. The diversity and partiality that characterize both local scientific communities and the global scientific community characterize the products of those communities as well as their producers. Universalism and unification require the elimination of epistemologically relevant diversity, while a pluralist stance promotes it, along with the deeply social conception of knowledge that follows from it.

Sociality and the structure of scientific knowledge. Attention to the social dimensions of scientific knowledge and the consequent potential for plurality has prompted philosophers to rethink the structure of what is known. Many philosophers (including Giere, Kitcher, and Longino) who advocate forms of pluralism invoke the metaphor of maps to explain how scientific representations can be both partial and adequate. Maps represent only those features of the territory mapped that are relevant for the purpose for which the map is drawn. Some maps may represent the physical area bounded by state boundaries, others the population size, or the relative abundance or scarcity of natural resources. Winther (forthcoming) explores the variety of kinds of maps used in science and the philosophical use of the map metaphor. But the map metaphor is only one of several ways to rethink the structure of scientific knowledge.

Other philosophers draw more heavily on cognitive science. Giere (2002) takes a naturalist approach to modeling, not so much the distribution of cognitive labor, but the distribution of cognition. This approach takes a system or interactive community as the locus of cognition, rather than the individual agent. Nersessian (2006) extends distributed cognition to model-based reasoning in the sciences. Models are artifacts that focus the cognitive activity of multiple individuals in particular settings. Knowledge is distributed across the minds interacting about the artifacts in that setting. Paul Thagard draws on the increasingly interdisciplinary (and hence social) nature of cognitive science itself to argue not only that cognitive science (or certain lines of analysis in cognitive science) supports a conception of cognition as distributed among interacting agents, but that this conception can be turned back upon cognitive science itself (Thagard 2012). Finally, Alexander Bird (2010) reflects on the sense of knowledge required for attributions such as “the biomedical community now knows that peptic ulcers are often caused by the bacterium Helicobacter pylori” or “There was an explosive growth in scientific knowledge in the twentieth century.” Bird faults other social epistemologists for still making such collective knowledge supervenient on the states of individuals. Instead, he argues, we should understand social knowing as a functional analogue of individual knowing. Both are dependent on the existence and proper functioning of the relevant structures: reasoning and perception for individuals; libraries, journals, and other social structures for collectivities. Scientific knowledge is an emergent effect of collective epistemic interactions, concretized in the texts that have been designated as vehicles for the preservation and communication of that knowledge.

5. Social Direction of Science

Modern science has been regarded as both a model of democratic self-governance and an activity requiring and facilitating democratic practices in its supporting social context (Popper 1950, Bronowski 1956). In this perspective, science is seen as embedded in and dependent on its supporting social context, but insulated in its practices from the influence of that context. As the reach of science and science-based technologies has extended further and further into the economy and daily life of industrialized societies, new attention is paid to the governance of science. Regardless of one’s views about the social character of knowledge, there are further questions concerning what research to pursue, what social resources to devote to it, who should make such decisions, and how they should be made.

Philip Kitcher (2001) has opened these questions to philosophical scrutiny. While Kitcher largely endorses the epistemological views of his (1993), in the later work he argues that there is no absolute standard of the significance (practical or epistemic) of research projects, nor any standard of the good apart from subjective preferences. The only non-arbitrary way to defend judgments concerning research agendas in the absence of absolute standards is through democratic means of establishing collective preferences. Kitcher, thus, attempts to spell out procedures by which decisions concerning what research directions to pursue can be made in a democratic manner. The result, which he calls well-ordered science, is a system in which the decisions actually made track the decisions that would be made by a suitably constituted representative body collectively deliberating with the assistance of relevant information (concerning, e.g., cost and feasibility) supplied by experts.

Kitcher’s “well-ordered science” has attracted attention from other philosophers, from scientists, and from scholars of public policy. Winning praise as a first step, it has also elicited a variety of criticisms and further questions. The criticisms of his proposal range from worries about the excessive idealism of the conception to worries that it will enshrine the preferences of a much smaller group than those who will be affected by research decisions. Kitcher’s proposal at best works for a system in which all or most scientific research is publicly funded. But the proportion of private, corporate funding of science relative to public funding has been increasing, calling into question the effectiveness of a model that presupposes largely public control (Mirowski and Sent 2002, Krimsky 2003). Kitcher’s model, it should be noted, still effects a significant separation between the actual conduct of research and decisions concerning the direction of research, and scholars who see a more intimate relation between social processes and values in the context and in the conduct of research will be dissatisfied with it. Kitcher himself (2011) seems to relax the separation somewhat.

The counterfactual character of the proposal raises questions about the extent to which well-ordered science really is democratic. If the actual decisions do not need to be the result of democratic procedures but only to be the same as those that would result from such procedures, how do we know which decisions those are without actually going through the deliberative exercise? Even if the process is actually carried out, there are places, e.g. in choice of experts whose advice is sought, which permit individual preferences to subvert or bias the preferences of the whole (Roth 2003). Furthermore, given that the effects of scientific research are potentially global, while democratic decisions are at best national, national decisions will have an effect well beyond the population represented by the decision makers. Sheila Jasanoff has also commented that even in contemporary industrialized democracies there are quite different science governance regimes. There is not one model of democratic decision making, but many, and the differences translate into quite different policies (Jasanoff 2005).

In his (2011) Kitcher abandons the counterfactual approach as he brings the ideal of well-orderedness into contact with actual debates in and about contemporary science. His concern here is the variety of ways in which scientific authority has been eroded by what he terms “chimeric epistemologies.” It is not enough to say that the scientific community has concluded that, say, the MMR vaccine is safe, or that the climate is changing in a way that requires a change in human activities. In a democratic society, there are many other voices claiming authority, whether on presumed evidential grounds or as part of campaigns to manipulate public opinion. Kitcher suggests mechanisms whereby small groups trusted by their communities might develop an understanding of complicated technical issues through tutoring by members of the relevant research communities and then carry this understanding back to the public. He also endorses James Fishkin’s (2009) experiments in deliberative polling as a means of bringing members of the public committed to different sides of a technical issue together with scientific exponents of the issue and, through a series of exchanges covering the evidence, the import of different lines of reasoning, and the other elements of reasoned discussion, bringing the group to a consensus on the correct view. The pluralist and pragmatically inclined philosophers discussed in the previous section might worry that there is no single correct view towards which such an encounter ought to converge, but hold that a broader discussion incorporating deliberation about aims and values might produce sufficient (temporary) convergence to ground action or policy.

6. Conclusion

Philosophical study of the social dimensions of scientific knowledge has been intensifying in the decades since 1970. Social controversies about the sciences and science-based technologies, as well as developments in philosophical naturalism and social epistemology, combine to drive thinking in this area forward. Scholars in a number of cognate disciplines continue to investigate the myriad social relations within scientific communities and between them and their social, economic, and institutional contexts.

While this area first came to prominence in the so-called science wars of the 1990s, attending to social dimensions of science has brought a number of topics to philosophical attention. The phenomenon of Big Science has encouraged philosophers to consider the epistemological significance of such phenomena as trust, cognitive interdependence, and the division of cognitive labor. The increased economic and social dependence on science-based technologies has prompted attention to questions of inductive risk and the role of values in assessing hypotheses with social consequences. The controversies over the health risks of certain vaccines, over the measurement of environmental pollution, and over the causes of climate change have expanded philosophy of science from its more accustomed areas of logical and epistemological analysis to incorporate concerns about the communication and uptake of scientific knowledge and the ethical dimensions of superficially factual debates.

Partly in response to the work of scholars in the social studies of science, and partly in response to the changing role of scientific inquiry through the 20th and into the 21st centuries, philosophers have sought ways either to accommodate the (tenable) results of the sociologists and cultural historians or to modify traditional epistemological concepts used in the analysis of scientific knowledge. These investigations in turn lead to new thinking about the structure and location of the content of knowledge. While debates within philosophy of science between and among adherents of one or another of the models of the sociality of knowledge will continue, an important future step will be a fuller encounter between individual-based social epistemology, with its focus on testimony and disagreement as transactions among individuals, and the more fully social epistemologies that take social relations or interaction as partially constitutive of empirical knowledge.

Bibliography

Works Cited

  • Anderson, Elizabeth, 2004. “Uses of Value Judgments in Science,” Hypatia, 19: 1–24.
  • –––, 2011. “Democracy, Public Policy, and Lay Assessments of Scientific Testimony,” Episteme, 8(2): 144–164.
  • Bala, Arun, 2008. The Dialogue of Civilizations in the Birth of Modern Science, New York, NY: Macmillan.
  • Bala, Venkatesh, and Sanjeev Goyal, 1998. “Learning from Neighbors,” Review of Economic Studies, 65: 565–621.
  • Barnes, Barry, 1977. Interests and the Growth of Knowledge, New York: Routledge.
  • –––, and David Bloor, 1982. “Relativism, Rationalism, and the Sociology of Knowledge,” in Rationality and Relativism, Martin Hollis and Steven Lukes (eds.), Oxford: Basil Blackwell, pp. 21–47.
  • Bird, Alexander, 2010. “Social Knowing: The Social Sense of ‘Scientific Knowledge’,” Philosophical Perspectives, 24: 23–56.
  • Bronowski, Jacob, 1956. Science and Human Values, New York: Harper and Bros.
  • Brown, James, 1989. The Rational and the Social, London: Routledge.
  • –––, 1994. Smoke and Mirrors: How Science Reflects Reality, New York: Routledge.
  • Cartwright, Nancy, 1999, The Dappled World, Cambridge: Cambridge University Press.
  • –––, 2012, “Will This Policy Work for You?” Philosophy of Science, 79(5): 973–989.
  • –––, and Jeremy Hardie, 2012. Evidence-Based Policy: A Practical Guide to Doing It Better, New York: Oxford University Press.
  • –––, Jordi Cat, Lola Fleck, and Hasok Chang, 1996. Otto Neurath: Philosophy Between Science and Politics, New York: Cambridge University Press.
  • Collins, Harry, 1983. “An Empirical Relativist Programme in the Sociology of Scientific Knowledge,” in Science Observed: Perspectives on the Social Study of Science, London: Sage, pp. 115–140.
  • Cranor, Carl F., 2004. “Toward Understanding Aspects of the Precautionary Principle,” Journal of Medicine and Philosophy, 29(3): 259–79.
  • Douglas, Heather, 2000. “Inductive Risk and Values in Science,” Philosophy of Science, 67(4): 559–579.
  • –––, 2009. Science, Policy, and the Value-Free Ideal, Pittsburgh, PA: University of Pittsburgh Press.
  • Dupré, John, 1993. The Disorder of Things, Cambridge, MA: Harvard University Press.
  • Elliott, Kevin, 2011. “Direct and Indirect Roles for Values in Science,” Philosophy of Science, 78(2): 303–324.
  • Fausto-Sterling, Anne, 1992. Myths of Gender, New York, NY: Basic Books.
  • Fine, Arthur, 2007. “Relativism, Pragmatism, and the Practice of Science,” in New Pragmatists, Cheryl Misak (ed.), Oxford: Oxford University Press, pp. 50–67.
  • Fine, Cordelia, 2010, Delusions of Gender, New York, NY: W.W. Norton and Company.
  • Fuller, Steve, 1988. Social Epistemology, Bloomington, IN: Indiana University Press.
  • Gannett, Lisa, 2003. “Making Populations: Bounding Genes in Space and Time,” Philosophy of Science, 70(5): 989–1001.
  • Giere, Ronald, 1988. Explaining Science: A Cognitive Approach, Chicago: University of Chicago Press.
  • –––, 1991. “Knowledge, Values, and Technological Decisions: A Decision Theoretical Approach,” in Acceptable Evidence: Science and Values in Risk Management, Deborah Mayo and Rachelle Hollander (eds.), pp. 183–203, New York: Oxford University Press.
  • –––, 2002. “Scientific Cognition as Distributed Cognition,” in Cognitive Bases of Science Peter Carruthers, Stephen Stitch, and Michael Siegal (eds.), Cambridge, UK: Cambridge University Press.
  • –––, 2003. “A New Program for Philosophy of Science?” Philosophy of Science, 70(1): 15–21.
  • –––, 2006. Scientific Perspectivism, Chicago, IL: University of Chicago Press.
  • –––, and Alan Richardson (eds.), 1996. Origins of Logical Empiricism (Minnesota Studies in the Philosophy of Science, Vol. XVI), Minneapolis, MN: University of Minnesota Press.
  • Goldman, Alvin, 1987. “The Foundations of Social Epistemics,” Synthese, 73(1): 109–144.
  • –––, 1995. “Psychological, Social and Epistemic Factors in the Theory of Science,” in PSA 1994: Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association, Richard Burian, Mickey Forbes, and David Hull (eds.), East Lansing, MI: Philosophy of Science Association, pp. 277–286.
  • –––, 1999. “Science,” Knowledge in a Social World (Chapter 8), New York: Oxford University Press, pp. 224–271.
  • Gould, Stephen J., 1981. The Mismeasure of Man, New York, NY: W.W. Norton and Company.
  • Haack, Susan, 1996. “Science as Social: Yes and No,” in Feminism, Science, and the Philosophy of Science, Lynn Hankinson Nelson and Jack Nelson (eds.), Dordrecht: Kluwer Academic Publishers, pp. 79–94.
  • Habib, S. Irfan and Dhruv Raina, 2001. Situating the History of Science: Dialogues with Joseph Needham, New Delhi, IN: Oxford University Press.
  • Haraway, Donna, 1978. “Animal Sociology and a Natural Economy of the Body Politic (Part II),” Signs, 4(1): 37–60.
  • –––, 1988. “Situated Knowledges,” Feminist Studies, 14(3): 575–600.
  • Harding, Sandra. 1986. The Science Question in Feminism, Ithaca, NY: Cornell University Press.
  • –––, 1993. “Rethinking Standpoint Epistemology,” in Feminist Epistemologies, Linda Alcoff and Elizabeth Potter (eds.), New York: Routledge, pp. 49–82.
  • Hardwig, John, 1985. “Epistemic Dependence,” Journal of Philosophy, 82(7): 335–349.
  • –––, 1988. “Evidence, Testimony, and the Problem of Individualism,” Social Epistemology, 2(4): 309–21.
  • Hempel, Carl G., 1965. “Science and Human Values,” Scientific Explanation and Other Essays, New York: The Free Press, pp. 81–96.
  • Hesse, Mary, 1980. Revolutions and Reconstructions in the Philosophy of Science, Bloomington, IN: Indiana University Press.
  • Hull, David, 1988. Science As a Process: An Evolutionary Account of the Social and Conceptual Development of Science, Chicago: University of Chicago Press.
  • Ioannidis, John, 2005. “Why Most Published Research Findings are False,” PLOS Medicine, 2(8): 696–701.
  • Jasanoff, Sheila, 2005. Designs on Nature: Science and Democracy in Europe and the United States, Princeton: Princeton University Press.
  • Jeffrey, Richard C., 1956. “Valuation and Acceptance of Scientific Hypotheses,” Philosophy of Science, 23(3): 237–246.
  • Jordan-Young, Rebecca, 2010. Brain Storm, Cambridge, MA: Harvard University Press.
  • Kaplan, Jonathan, and Rasmus Winther, 2013. “Prisoners of Abstraction? The Theory and Measure of Genetic Variation, and the Very Concept of Race,” Biological Theory, 7(4): 401–12.
  • Keller, Evelyn Fox, 1983. A Feeling for the Organism: The Life and Work of Barbara McClintock, San Francisco: W.H. Freeman.
  • –––, 1985. Reflections on Gender and Science, New Haven: Yale University Press.
  • Kellert, Stephen, 1993. In the Wake of Chaos, Chicago: University of Chicago Press.
  • Kellert, Stephen, Helen Longino, and C. Kenneth Waters (eds.), 2006. Scientific Pluralism (Minnesota Studies in the Philosophy of Science, Vol. XIX), Minneapolis: University of Minnesota Press.
  • Kitcher, Philip, 1985. Vaulting Ambition, Cambridge, MA: MIT Press.
  • –––, 1993. The Advancement of Science: Science Without Legend, Objectivity Without Illusions, Oxford: Oxford University Press.
  • –––, 2001. Science, Truth, and Democracy, New York: Oxford University Press.
  • –––, 2011. Science in a Democratic Society, Amherst, NY: Prometheus Press.
  • Knorr-Cetina, Karin, 1981. The Manufacture of Knowledge, Oxford: Pergamon Press.
  • –––, 1983. “The Ethnographic Study of Scientific Work: Toward a Constructivist Interpretation of Science,” in Science Observed, Knorr-Cetina and Michael Mulkay (eds.), London: Sage, pp. 115–177.
  • Kourany, Janet, 2003a. “A Philosophy of Science for the Twenty-First Century,” Philosophy of Science, 70(1): 1–14.
  • –––, 2003b, “Reply to Giere,” Philosophy of Science, 70(1): 22–26.
  • –––, 2010. Philosophy of Science After Feminism, New York, NY: Oxford University Press.
  • Krimsky, Sheldon, 2003. Science in the Private Interest, Lanham: Rowman and Littlefield.
  • Kuhn, Thomas, 1962. The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
  • –––, 1977. The Essential Tension: Selected Studies in Scientific Tradition and Change, Chicago: University of Chicago Press.
  • Lacey, Hugh, 2005. Values and Objectivity: The Controversy over Transgenic Crops, Lanham: Rowman and Littlefield.
  • Latour, Bruno. 1987. Science in Action, Cambridge, MA: Harvard University Press.
  • ––– and Steven Woolgar, 1986. Laboratory Life: The Construction of Scientific Facts, 2nd edition, Princeton: Princeton University Press.
  • Laudan, Larry, 1984. “The Pseudo-Science of Science?” in Scientific Rationality: The Sociological Turn, James Brown (ed.), Dordrecht: D. Reidel, pp. 41–74.
  • Lee, Carole J., 2012. “A Kuhnian Critique of Psychometric Research on Peer Review,” Philosophy of Science, 79(5): 859–870.
  • Lee, Carole J., Cassidy R. Sugimoto, Guo Zhang, and Blaise Cronin, 2013, “Bias in Peer Review,” Journal of the American Society for Information Science and Technology, 64(1): 2–17.
  • Lewontin, Richard, Steven Rose, and Leon Kamin, 1984. Not in Our Genes, New York, NY: Pantheon.
  • Loken, Eric, and Andrew Gelman, 2017. “Measurement Error and the Replication Crisis,” Science, 355(6325): 584–585.
  • Longino, Helen E., 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton: Princeton University Press.
  • –––, 2002. The Fate of Knowledge, Princeton: Princeton University Press.
  • Mayo, Deborah, and Rachelle Hollander (eds.), 1991. Acceptable Evidence: Science and Values in Risk Management, New York: Oxford University Press.
  • Mill, John Stuart, 1859. On Liberty, London: John W. Parker and Son; reprinted 1974, 1982, Gertrude Himmelfarb (ed.), Harmondsworth: Penguin.
  • Mirowski, Philip, and Esther-Mirjam Sent (eds.), 2002. Science Bought and Sold, Chicago: University of Chicago Press.
  • Mitchell, Sandra, 2002. “Integrative Pluralism,” Biology and Philosophy, 17: 55–70.
  • Muldoon, Ryan, and Michael Weisberg, 2011. “Robustness and Idealization in Models of Cognitive Labor,” Synthese, 183: 161–174.
  • Needham, Joseph, 1954. Science and Civilization in China, Cambridge: Cambridge University Press.
  • Nersessian, Nancy J., 2006. “Model-Based Reasoning in Distributed Cognitive Systems,” Philosophy of Science, 73(5): 699–709.
  • Nelson, Lynn Hankinson, 1990. Who Knows: From Quine to Feminist Empiricism, Philadelphia: Temple University Press.
  • O’Connor, Cailin, forthcoming. “The Natural Selection of Conservative Science,” Studies in the History and Philosophy of Science A, first online 27 September 2018. doi:10.1016/j.shpsa.2018.09.007
  • –––, and Justin Bruner, 2019. “Dynamics and Diversity in Epistemic Communities,” Erkenntnis, 84(1): 101–119.
  • –––, and James Weatherall, 2017. “Scientific Polarization,” European Journal for Philosophy of Science, 8(3): 855–75.
  • Oreskes, Naomi, and Eric Conway, 2011. Merchants of Doubt, New York, NY: Bloomsbury Press.
  • Parker, Wendy, 2006. “Understanding Pluralism in Climate Modeling,” Foundations of Science, 11(4): 349–368.
  • –––, 2010. “Predicting Weather and Climate,” Studies in History and Philosophy of Science (Part B), 41(3): 263–272.
  • Peirce, Charles S., 1868. “Some Consequences of Four Incapacities,” Journal of Speculative Philosophy, 2: 140–157; reprinted in C.S. Peirce, Selected Writings, Philip Wiener (ed.), New York: Dover Publications, 1958, pp. 39–72.
  • –––, 1878. “How to Make Our Ideas Clear,” Popular Science Monthly, 12: 286–302; reprinted in C.S. Peirce, Selected Writings, Philip Wiener (ed.), New York: Dover Publications, 1958, pp. 114–136.
  • Pickering, Andrew, 1984. Constructing Quarks: A Sociological History of Particle Physics, Edinburgh: Edinburgh University Press.
  • Popper, Karl, 1950. The Open Society and its Enemies, Princeton: Princeton University Press.
  • –––, 1963. Conjectures and Refutations, London: Routledge and Kegan Paul.
  • –––, 1972. Objective Knowledge, Oxford: Oxford University Press.
  • Potter, Elizabeth, 2001. Gender and Boyle’s Law of Gases, Bloomington: Indiana University Press.
  • Raina, Rajeswari (ed.), 2015. Science, Technology, and Development in India: Encountering Values, New Delhi: Orient Black Swan.
  • Redish, A. David, Erich Kummerfeld, Rebecca Lea Morris, and Alan C. Love, 2018. “Opinion: Why Reproducibility Failures Are Essential to Scientific Inquiry,” PNAS, 115(20): 5042–46.
  • Rose, Hilary, 1983. “Hand, Brain, and Heart,” Signs, 9(1): 73–96.
  • Rosenstock, Sarita, Justin Bruner, and Cailin O’Connor, 2017. “In Epistemic Networks, Is Less Really More?” Philosophy of Science, 84: 324–52.
  • Roth, Paul, 2003. “Kitcher’s Two Cultures,” Philosophy of the Social Sciences, 33(3): 386–405.
  • Rouse, Joseph, 1987. Knowledge and Power: Toward a Political Philosophy of Science, Ithaca: Cornell University Press.
  • Rubin, Hannah, and Cailin O’Connor, 2018. “Discrimination and Collaboration in Science,” Philosophy of Science, 85: 380–402.
  • Rudner, Richard, 1953. “The Scientist Qua Scientist Makes Value Judgments,” Philosophy of Science, 20(1): 1–6.
  • Schmitt, Frederick, 1988. “On the Road to Social Epistemic Interdependence,” Social Epistemology, 2: 297–307.
  • Shapin, Steven, 1982. “The History of Science and Its Sociological Reconstruction,” History of Science, 20: 157–211.
  • –––, and Simon Schaffer, 1985. Leviathan and the Air Pump, Princeton: Princeton University Press.
  • Shrader-Frechette, Kristin, 1994. “Expert Judgment and Nuclear Risks: The Case for More Populist Policy,” Journal of Social Philosophy, 25: 45–70.
  • –––, 2002. Environmental Justice: Creating Equality; Reclaiming Democracy, New York: Oxford University Press.
  • Sober, Elliott, and David Sloan Wilson, 1998. Unto Others, Cambridge, MA: Harvard University Press.
  • Solomon, Miriam, 1992. “Scientific Rationality and Human Reasoning,” Philosophy of Science, 59(3): 439–54.
  • –––, 1994a. “Social Empiricism,” Noûs, 28(3): 323–343.
  • –––, 1994b. “A More Social Epistemology,” in Socializing Epistemology: The Social Dimensions of Knowledge, Frederick Schmitt (ed.), Lanham: Rowman and Littlefield Publishers, pp. 217–233.
  • –––, 2001. Social Empiricism, Cambridge, MA: MIT Press.
  • –––, 2006. “Groupthink versus The Wisdom of Crowds: The Social Epistemology of Deliberation and Dissent,” The Southern Journal of Philosophy, XLIV: 28–42.
  • –––, 2011. “Group Judgment and the Medical Consensus Conference,” Handbook of the Philosophy of Science: Philosophy of Medicine, Fred Gifford (ed.), Amsterdam: Elsevier, pp. 239–254.
  • Spencer, Quayshawn, 2012. “What Biological Racial Realism Should Mean,” Philosophical Studies, 159: 181–204.
  • –––, 2014. “Biological Theory and the Metaphysics of Race: A Reply to Kaplan and Winther,” Biological Theory, 8: 114–120.
  • Steele, Daniel, and Kyle Whyte, 2012. “Environmental Justice, Values, and Scientific Expertise,” Kennedy Institute of Ethics Journal, 22(2): 163–182.
  • Strevens, Michael, 2003. “The Role of the Priority Rule in Science,” Journal of Philosophy, 100: 55–79.
  • Tatsioni, Athina, with Nikolaos Bonitsis and John Ioannidis, 2007. “The Persistence of Contradicted Claims in the Literature,” Journal of the American Medical Association, 298(21): 2517–26.
  • Thagard, Paul, 2012. The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change, Cambridge, MA: MIT Press.
  • Traweek, Sharon, 1988. Beamtimes and Lifetimes: The World of High Energy Physicists, Cambridge, MA: Harvard University Press.
  • Uebel, Thomas, 2004. “Political Philosophy of Science in Logical Empiricism: The Left Vienna Circle,” Studies in History and Philosophy of Science, 36: 754–773.
  • van Fraassen, Bas, 2008. Scientific Representation, New York: Oxford University Press.
  • Welbourne, Michael, 1981. “The Community of Knowledge,” Philosophical Quarterly, 31(125): 302–314.
  • Wilholt, Torsten, 2013. “Epistemic Trust in Science,” British Journal for the Philosophy of Science, 64(2): 233–253.
  • Winsberg, Eric, 2012. “Values and Uncertainties in the Predictions of Global Climate Models,” Kennedy Institute of Ethics Journal, 22(2): 111–137.
  • –––, Bryce Huebner, and Rebecca Kukla, 2014. “Accountability and Values in Radically Collaborative Research,” Studies in History and Philosophy of Science (Part A), 46: 16–23.
  • Winther, Rasmus Grønfeldt, forthcoming. When Maps Become the World, Chicago, IL: University of Chicago Press.
  • Wylie, Alison, 2002. Thinking from Things, Los Angeles: University of California Press.
  • Young, N.S., with John Ioannidis and O. Al-Ubaydli, 2008. “Why Current Publication Practices May Harm Science,” PLOS Medicine, 5(10): e201. doi:10.1371/journal.pmed.0050201
  • Zollman, Kevin, 2007. “The Communication Structure of Epistemic Communities,” Philosophy of Science, 74: 574–87.
  • –––, 2010. “The Epistemic Benefit of Transient Diversity,” Erkenntnis, 72(1): 17–35.
  • –––, 2013. “Network Epistemology: Communication in Epistemic Communities,” Philosophy Compass, 8(1): 15–27.

Further Reading

  • Daston, Lorraine, and Peter Galison, 2010. Objectivity, Cambridge, MA: MIT Press.
  • Elliott, Kevin, 2017. A Tapestry of Values: An Introduction to Values in Science, Oxford: Oxford University Press.
  • Fleck, Ludwig, 1973. The Genesis and Development of a Scientific Fact, Chicago: University of Chicago Press.
  • Hacking, Ian, 1999. The Social Construction of What?, Cambridge, MA: Harvard University Press.
  • Latour, Bruno, 2004. Politics of Nature: How to Bring the Sciences into Democracy, Cambridge, MA: Harvard University Press.
  • Levi, Isaac, 1980. The Enterprise of Knowledge, Cambridge, MA: MIT Press.
  • McMullin, Ernan (ed.), 1992. Social Dimensions of Scientific Knowledge, South Bend: Notre Dame University Press.
  • Radder, Hans (ed.), 2010. The Commodification of Scientific Research, Pittsburgh, PA: University of Pittsburgh Press.
  • Sismondo, Sergio, 1996. Science Without Myth, Albany: State University of New York Press.

Other Internet Resources

[Please contact the author with suggestions.]

Copyright © 2019 by
Helen Longino <hlongino@stanford.edu>
