Reliabilism is a general approach to epistemology that emphasizes the truth-conduciveness of a belief-forming process, method, or other epistemologically relevant factor. The reliability theme appears both in theories of knowledge and theories of justification. ‘Reliabilism’ is sometimes used broadly to refer to any theory of knowledge or justification that emphasizes truth-getting or truth-indicating properties. These include theories originally proposed under different labels, such as ‘tracking’ theories. More commonly, ‘reliabilism’ is used narrowly to refer to process reliabilism about justification. This entry discusses reliabilism in both broad and narrow senses but concentrates on reliability theories of justified belief, especially process reliabilism.
- 1. Reliability Theories of Knowledge
- 2. Process Reliabilism about Justification
- 3. Problems for Early Process Reliabilism
- 4. Replies, Refinements and Modifications
- 5. Strengthening or Permuting the Reliability Condition: Variants of Process Reliabilism
- 6. Conclusion
- Academic Tools
- Other Internet Resources
- Related Entries
It is generally agreed that a person S knows a proposition P only if S believes P and P is true. Since all theories accept this knowledge-truth connection, reliabilism as a distinctive approach to knowledge is restricted to theories that involve truth-promoting factors above and beyond the truth of the target proposition. What this additional truth-linkedness consists in, however, varies widely.
Perhaps the first formulation of a reliability account of knowing appeared in a note by F. P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. This little note attracted no attention at the time and apparently did not influence reliability theories of the 1960s, 70s, or 80s. Another early reliability-type theory was Peter Unger's (1968) proposal that S knows that P just in case it is “not at all accidental that S is right about its being the case that P.” Being right about P amounts to believing truly that P. Its not being accidental that one is right about P amounts to there being something in one's situation that guarantees, or makes it highly probable, that one wouldn't be wrong. In other words, something makes the belief reliably true. David Armstrong (1973) offered an analysis of non-inferential knowledge that explicitly used the term ‘reliable.’ He drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. According to his account, a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature. This can be considered a reliable-indicator theory of knowing. Alvin Goldman offered his first formulation of a reliable process theory of knowing — as a refinement of the causal theory of knowing — in a short paper on innate knowledge (Goldman, 1975).
In the 1970s and 1980s several subjunctive or counterfactual theories of knowing were offered with reliabilist contours. The first was Fred Dretske's “Conclusive Reasons” (1971), which proposed that S's belief that P qualifies as knowledge just in case S believes P because of reasons he possesses that would not obtain unless P were true. In other words, the existence of S's reasons — the way an object appears to S, for example — is a reliable indicator of the truth of P. This idea was later elaborated in Dretske's Knowledge and the Flow of Information (1981), which linked knowing to getting information from a source through a reliable channel. Meanwhile, Goldman also proposed a kind of counterfactual reliability theory in “Discrimination and Perceptual Knowledge” (1976). This theory deployed the idea of exclusion of “relevant alternatives.” In Goldman's treatment, a person perceptually knows that P just in case (roughly) she arrives at a belief in P based on a perceptual experience that enables her to discriminate the truth of P from all relevant alternatives. On this approach, S's knowing that P is compatible with there being “radical” (and hence irrelevant) situations — for example, evil demon or brain-in-a-vat situations — in which P would be false although S has the same experience and belief. But S's knowing that P is not compatible with there being some relevant alternative in which P is false although S has the same experience and belief. Although no precise definition of ‘relevance’ was offered, the implied idea was that a situation is relevant only if it is “realistic,” fairly likely to occur, or does occur in a nearby possible world. If S's perceptual experience excludes false belief in nearby possible worlds, then it is, in the intended sense, reliable.
Robert Nozick (1981) proposed a theory with similar contours, a theory he called a ‘tracking’ theory. In addition to the requirements of truth and belief, Nozick's two distinctive conditions were: (1) if P were not true, then S would not believe that P, and (2) if P were true, S would believe that P. If both conditions hold of a belief, Nozick says that the belief “tracks” the truth. The first of the two tracking conditions, the crucial one for most purposes, was subsequently called the “sensitivity” requirement. It can be symbolized as “Not-P □→ Not-B(P),” where the box-arrow ‘□→’ expresses the subjunctive conditional. A number of counterexamples to this condition have been produced (see Goldman, 1983 and especially DeRose, 1995). A variant of the sensitivity condition is the requirement of “safety,” proposed by Ernest Sosa (1996, 2000) and Timothy Williamson (2000). Safety can be explained in a variety of ways, including “if S believes that P, then P would not easily have been false,” or “if S believes that P, then P isn't false in close possible worlds” (Williamson, 2000). Williamson classifies the safety approach as a species of reliability theory.
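The tracking and safety conditions just described can be stated compactly in the usual subjunctive-conditional notation (a sketch of the standard formalizations, not Nozick's or Sosa's own typography; “B_S(P)” abbreviates “S believes that P,” and “□→” is the subjunctive conditional):

```latex
% Nozick's two distinctive tracking conditions on S's belief that P
% (in addition to the truth and belief conditions):
\begin{align*}
\text{(1) Sensitivity:} \quad & \neg P \;\Box\!\!\rightarrow\; \neg B_S(P) \\
\text{(2) Adherence:}   \quad & P \;\Box\!\!\rightarrow\; B_S(P)
\end{align*}
% The Sosa/Williamson safety condition reverses the direction of
% sensitivity: if S believes P, P would not easily have been false.
\[
\text{Safety:} \quad B_S(P) \;\Box\!\!\rightarrow\; P
\]
```

Note that safety is the contrapositive-like reversal of sensitivity rather than its contrapositive proper, since subjunctive conditionals do not contrapose; this is why the two conditions come apart in the brain-in-a-vat cases discussed below.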
Reliability theories are partly motivated by the prospect of meeting the threat of skepticism. It is natural to suppose that if you know that P, then, in some sense, you “can't be wrong” about P. But what is the appropriate sense of “can't be wrong”? Does it mean that your evidence logically precludes the possibility of error? If so, very few propositions would be known (assuming a fallibilist notion of evidence); the specter of skepticism would hover ominously. Reliability theories, in their various ways, propose weaker but still substantial senses of “can't be wrong.” The no-relevant-alternatives theory implies that although your knowing P is compatible with there being logically possible situations in which you have the same evidence but P is false, there are no relevant (“nearby”) possible situations in which you have the same evidence but P is false. Nozick's tracking theory aimed to provide a “balanced” position vis-à-vis skepticism, to explain skepticism's allure without capitulating to it entirely. Under the tracking theory one can know one has two hands because in the closest possible world in which one doesn't have two hands (for example, they were lost in an accident) one doesn't believe one has two hands. This satisfies the sensitivity condition (1) and preserves common-sense knowledge. However, the tracking theory also implies that one doesn't know that one isn't a handless brain in a vat (being fed misleading experiences to make it appear as if one has two hands). That's because if one were a brain in a vat in the contemplated scenario, one would wrongly think that one isn't, in violation of sensitivity. Thus, although I know that I have two hands, I don't know the entailed proposition that I am not a handless brain in a vat. This is a serious concession to skepticism, but one that Nozick regarded as appropriate. 
Critics have called this conjunction of claims that affirm and deny knowledge an “abominable conjunction.” The present point, however, is that although the theory makes a concession to skepticism, Nozick thought that it avoided skepticism at the crucial juncture. Even if the tracking theory is unsatisfactory with respect to the “abominable conjunction” (and, more generally, in its rejection of epistemic closure), other reliability theories of knowledge may be more satisfactory in dealing with skepticism.
Reliability theories of knowledge of varying stripes continue to appeal to many epistemologists, and permutations abound. The reliability theories discussed above focus on modal reliability, on getting truth or avoiding error in possible worlds with specified relations to the actual one. They also focus on local reliability, that is, truth-acquisition or error avoidance in scenarios linked to the actual scenario in question. Other reliabilisms about knowledge, by contrast, draw attention to global reliability, for example, the global reliability of the process or method that produces the target belief. The global reliability of a belief-forming process is its truth-conduciveness across all the beliefs it generates. Goldman's Epistemology and Cognition (1986) combines both local and global reliability in its account of knowledge.
Some theories of knowledge that are principally known by different labels nevertheless embed reliabilist elements. Some varieties of contextualism, for example, employ a version of the sensitivity condition (DeRose, 1995). Other recent theories reconfigure older versions of reliabilism in a probabilistic framework. Sherrilyn Roush (2005) presents a probabilistic version of the tracking theory, and Igal Kvart (2006) presents a probabilistic version of the discrimination, or no-relevant-alternatives, theory. Roush seeks to improve on Nozick's theory by allowing any necessary implication of something known also to be known. Kvart's underlying idea is that a belief that P counts as knowledge only if it renders P highly likely, and significantly more likely than P would be given some pre-existing world-history. Among other conditions, he imposes a screening constraint for a predicate G to serve as a relevant alternative (or contrast) for F.
Let us turn now to reliabilist approaches to justification, especially process reliabilism. First, however, a few words about reliable indicator theories. William Alston (1988) and Marshall Swain (1981) have both proposed reliable indicator theories of justification. The fundamental idea is that a belief that P is justified on the basis of a reason, or ground, R just in case R is a reliable indication that P is true. On Alston's interpretation this means that the ground or reason must make the probability of P's being true very high. The ground of a belief might be a perceptual experience, an ostensible memory, or another (justified) belief.
Although there are these examples of reliable-indicator theories of justification, the most discussed version of justificational reliabilism is the reliable-process approach, first formulated by Alvin Goldman in “What Is Justified Belief?” (1979). Before turning to the substance of the approach, it is well to review some constraints or desiderata that Goldman proposes for accounts of justification, because these constraints set the stage for the reliability theory. The proposal is that theories of justification should specify conditions for a belief's being justified that don't make use of the justification concept itself, or any concept (such as knowledge) that includes justification, or any epistemic concept closely allied to justification, such as reasonability or rationality. Invoking these concepts in an account of justification will either yield overt circularity or will not provide much illumination, because concepts like reasonability or rationality are as much in need of analysis as justification itself.
These requirements can potentially disqualify certain theories that are seriously in play. For example, a theory that appeals to evidence might have to be excluded. One proffered account of evidence is “that which justifies belief.” If this is how evidence is understood, it would be problematic to turn around and define justification in the way Richard Feldman and Earl Conee (1985) do: “Doxastic attitude D toward proposition P is epistemically justified for S at t if and only if having D toward p fits the evidence S has at t.” An evidentialist account of justification isn't admissible unless “evidence” is explainable in non-justificational terms. (To date, Feldman and Conee haven't shown this to be so.)
What kinds of terms, properties, or states of affairs would be admissible and appropriate in an account of justifiedness? Doxastic states such as belief, disbelief, and suspension of judgment are non-epistemic states, and so are other purely psychological states such as visual or memory experiences. Similarly, a proposition's being true or false is a non-epistemic state of affairs. Pace verificationist approaches to truth, truth is not analyzable in terms of what is known, justified, or verified (Goldman 1999, chap. 2), so truth is a perfectly legitimate concept for use in an account of justifiedness. Another admissible element in an account of justifiedness is the causal relation.
The following reasoning led Goldman to the reliable process theory. He first argued from examples that the justificational status of a belief must somehow depend on the way the belief is caused or causally sustained. Thus, suppose Fiona (justifiably) believes a conjunction of propositions Q, which logically entails P. Does it follow that if Fiona proceeds to believe P, then her belief in P is justified? No. For suppose Fiona doesn't notice that Q entails P and believes it only because she earnestly wishes it were true. Then her belief in P isn't justified. Similarly, suppose Alfred believes some propositions that support proposition R, and Alfred goes ahead and believes R. Is Alfred's belief in R justified? Again, not necessarily. Suppose the only reason Alfred believes R is that he likes the sound of the sentence ‘R’ (an example from Kornblith, 1980). Then the belief isn't justified. Apparently, beliefs formed in a defective manner aren't justified even when there is another way available to form the belief that would render it justified. In general, the belief-formation process actually used seems to be critical. No account of justification can get the story right unless it incorporates a suitable condition about belief-forming processes or methods. That was a first major conclusion of “What Is Justified Belief?”
What is the suitable condition about belief-forming processes? Again Goldman proceeded by examining cases. What are some defective processes of belief-formation, processes whose belief outputs would intuitively be classed as unjustified? Examples include wishful thinking, confused reasoning, guesswork and hasty generalization. What do these faulty processes have in common? One shared feature is unreliability: they tend to produce false beliefs a large proportion of the time. By contrast, which species of belief-forming (or belief-sustaining) processes confer justification? They include standard perceptual processes, remembering, good reasoning, and introspection. What do these processes have in common? They all seem to be reliable; that is, most of the beliefs that each process produces are true. Thus, the main proposal of “What Is Justified Belief?” was that a belief's justifiedness is fixed by the reliability of the process or processes that cause it, where (as a first approximation) degree of reliability consists in the proportion of beliefs produced by the process that are true. Justification-conferring processes are ones with a high truth-ratio. (Just how high is vague, like the concept of justification itself.)
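The first-approximation notion of reliability just stated can be put as a simple truth-ratio (a sketch; the symbols R, T, B(T), and the threshold θ are illustrative, and θ is left vague just as the text leaves “how high” vague):

```latex
% Truth-ratio reliability of a belief-forming process type T,
% where B(T) is the set of beliefs that T produces:
\[
  R(T) \;=\;
  \frac{\bigl\lvert \{\, b \in B(T) : b \text{ is true} \,\} \bigr\rvert}
       {\bigl\lvert B(T) \bigr\rvert}
\]
% Justification-conferring processes are those with a high truth-ratio:
\[
  R(T) \;\geq\; \theta \quad \text{for some suitably high threshold } \theta.
\]
```

As the generality problem (section 3) makes vivid, the value of R(T) depends entirely on which type T the token process is assigned to, which is why the type-selection question is so pressing for the theory.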
A number of refinements and consequences of reliabilism were added. One consequence is that process reliabilism, as Goldman develops it, is a “historical” theory. A reliable inference process confers justification on an output belief, for example, only if its input beliefs were themselves justified. How could their justifiedness have arisen? By having been caused by earlier reliable processes. This chain must ultimately terminate in reliable processes having only non-doxastic inputs, such as perceptual inputs. Thus, justifiedness is often a matter of a history of personal cognitive processes. This historical nature of justifiedness implied by process reliabilism contrasts sharply with traditional theories like foundationalism and coherentism, which are “current time-slice” theories. But Goldman welcomed this implication. The traditional notion that justifiedness arises exclusively from one's momentary mental states has always been problematic. Of course, the historical character of process reliabilism gives the theory an externalist character (which it has in any case by virtue of its use of truth-conduciveness). But this externalism was not regarded as a vice. Externalism implies that there is no guarantee that someone who justifiably believes P is also justified in believing that she justifiably believes P. But this “J → JJ” principle is questionable anyway. To assume its truth is to commit an epistemological level confusion (Alston, 1980).
Even if the J → JJ principle is rightly rejected, reliabilism must deal with cases in which a subject has evidence against the reliability of a process she nonetheless uses, one that is in fact reliable. Reliabilism as thus far presented implies that the de facto reliability of the process makes her belief justified, but is that correct? Doesn't her (misleading) evidence against its reliability defeat the justifiedness? “What Is Justified Belief?” addressed this problem. Instead of requiring the subject to have a reliably caused meta-belief that her first-order belief is reliably caused, it proposes a weaker condition intended to cover evidence that undercuts reliability. It says that there must be no reliable process available to the subject that, were it used by the subject in addition to the process actually used, would result in her not believing P. In other words, failing to apply a reliable inferential process to evidence against reliability cancels justifiedness. This extra condition covers the example in question without imposing a J → JJ requirement.
The advantages of reliabilism can be illustrated by showing how it handles challenging examples. Relevant examples here include immediately, or directly, justified beliefs, that is, non-inferentially justified beliefs. Feldman (2003) presents two hard cases for any theory of immediate justification. Sam enters a room and sees an unfamiliar table. He forms a belief that it's a table and also that it's a 12-year-old table. The first belief is justified but the second one isn't. Two bird-watchers, a novice and an expert, are together in the woods when a pink-spotted flycatcher alights on a branch. Both bird-watchers form a belief that it's a pink-spotted flycatcher. The expert is immediately justified in believing that it's a pink-spotted flycatcher but the novice isn't; the latter just jumps to this conclusion out of excitement. What explains these intuitive judgments of justifiedness and non-justifiedness?
Process reliabilism seems to have the right resources to handle these cases (Goldman, 2008). The difference between the expert and novice bird-watchers evidently resides in the differences between the cognitive processes they respectively use in arriving at their bird identification beliefs. The expert presumably connects selected features of his current visual experience to things stored in memory about pink-spotted flycatchers, securing an appropriate “match” between features in the experience and features in the memory store. The novice does no such thing; he just guesses. Thus, the expert's method of identification is reliable, the novice's is unreliable. Similarly, a person seeing a table for the first time won't detect any clues to which a reliable belief-forming process could be applied that would generate the output that the table is 12 years old. So, whatever his way of arriving at the belief that it's a 12-year-old table, the result isn't justified. On the other hand, he certainly has visual cues to which a reliable belief-forming process could be applied that would classify the object as a table; and he presumably uses such a process. Hence, the belief is justified. In this manner, process reliabilism proves its mettle by providing straightforward treatments of these initially challenging cases of immediate justifiedness versus unjustifiedness (Goldman, 2008).
Early process reliabilism elicited a number of criticisms that fall into pretty clear-cut categories. This section reviews five principal problems. Section 4 examines a variety of answers, clarifications, modifications or refinements aimed at resolving, averting, or mitigating these problems. Section 5 reviews the development of numerous variants, or cousins, of reliabilism that are seen by their proponents as preferable to basic reliabilism along one or more dimensions.
The first objection to reliabilism, lodged by several different authors, is the evil-demon counterexample (Cohen, 1984; Pollock, 1984; Feldman, 1985; Foley, 1985). In a possible world inhabited by an evil demon (or permute this, if you wish, into a brain-in-a-vat case), the demon creates non-veridical perceptions of physical objects in people's minds. All of their perceptual beliefs, which are stipulated to be qualitatively identical to ours, are therefore false. Hence, perceptual belief-forming processes in that world are unreliable. Nonetheless, since their perceptual experiences – and hence evidence – are identical to ours, and we surely have justified perceptual beliefs, the beliefs of the people in the demon world must also be justified. So reliabilism gets the case wrong. The intended moral of the example is that reliability isn't necessary for justification; a justified belief can be caused by a process that is unreliable (in the subject's world).
The second objection is that reliability isn't sufficient for justification. The principal example of this kind is due to Laurence BonJour (1980). BonJour presented four variants of a case in which a subject has a perfectly reliable clairvoyant faculty, but either has no evidence for believing he has such a faculty, or has evidence against this proposition, etc. In each of the cases BonJour argues that the subject isn't justified in believing the output of the faculty, namely, that the President is in New York City. Nonetheless, this is what the subject does believe. So BonJour concludes that reliabilism is wrong to say that being the output of a reliable process suffices for being justified. Of course, “What Is Justified Belief?” added a further condition to (try to) handle a similar case, as explained above. BonJour didn't address that condition, but he did formulate a similar supplement for Armstrong's reliability analysis of knowledge. As he points out, the supplementary condition would handle his cases of Casper and Maud, who believe (correctly) that they have powers of clairvoyance despite having substantial contrary evidence. BonJour also offers the case of Norman, however, which he claims cannot be handled by the supplementary condition (nor by the similar condition in “What Is Justified Belief?”). Norman is described as possessing no evidence or reasons of any kind for or against the general possibility of a clairvoyant power, or for or against the thesis that he himself possesses one. But he holds the belief that results from his clairvoyance power, viz., the belief that the President is in New York City. BonJour argues that, intuitively, he isn't justified in holding this belief. (He is said to be “subjectively irrational” in holding it.) So reliability isn't sufficient for justification.
If someone disagrees with BonJour about the Norman case, there are other examples with similar contours in the literature that may be more persuasive. Keith Lehrer (1990) gives the case of Mr. Truetemp who, unbeknownst to him, has a temperature-detecting device implanted in his head that regularly produces accurate beliefs about the ambient temperature. Although Lehrer mainly denies that these beliefs constitute knowledge, he presumably means to deny as well that they are justified. A similar example is given by Alvin Plantinga (1993a), who describes a subject with a brain lesion that causes him to have a reliable cognitive process that generates the belief that he has a brain lesion. Plantinga denies that the lesion-caused belief is justified (or warranted), again challenging the sufficiency of reliability for justification.
The third major type of problem for process reliabilism is the generality problem. Goldman already noted this problem in “What Is Justified Belief?,” but it has been pressed more systematically by Feldman (1985) and Conee and Feldman (1998). A particular belief is the product of a token causal process, the concrete process occurring at precisely the time and place in question. Such a process token, however, can be “typed” in numerous broader or narrower ways. Each type will have its own level of reliability, normally distinct from the levels of reliability of other types. Which repeatable type should be selected for purposes of assigning a determinate reliability number to the process token? “What Is Justified Belief?” does not resolve this question, and it remains an important one. Goldman (1979) says that cognitive processes should be restricted in “extent” to events within the organism's nervous system (although he does not abide by this restriction in some of his own illustrations of process types). But this restriction provides no criterion for pinpointing a unique process type. It appears, however, that a determinate reliability number cannot be assigned to a process token unless a unique type is selected.
Conee and Feldman (1998) lay down three requirements for a solution to the generality problem. First, a solution must be “principled,” in the sense that the specification of the type that determines the token's reliability must not be arbitrary; it must not be made on an ad hoc, case-by-case basis. Second, the rule must make defensible epistemic classifications. The types identified must have a reliability that is plausibly correlated with the justificational status of the resulting beliefs. Third, a solution must remain true to the spirit of the reliabilist approach, and not merely smuggle a non-reliabilist epistemic evaluation into the characterization of relevant types. For example, it wouldn't be true to the spirit of reliabilism if it merely restated an evidentialist theory in a roundabout way. Conee and Feldman then propose three places to look for a solution to the generality problem: common sense types, scientific types, and contextual factors (rather than a general principle for selecting relevant types). After critically surveying each of these possibilities, they conclude that the prospects for a solution are bleak. We shall return to a few of the detailed criticisms of the foregoing options in section 4.
The fourth and fifth problems for reliabilism are of more recent vintage than the first three. The fourth problem is the bootstrapping, or “easy knowledge,” problem, due to Jonathan Vogel (2000) and Stewart Cohen (2002). Both Vogel and Cohen formulate the problem as one about knowledge, but it applies to justification as well. In Vogel's version, we are asked to consider a driver Roxanne, who believes implicitly whatever her gas gauge “says” about the state of her fuel tank, though she doesn't antecedently know (or have justification for believing) that the gauge is reliable. In fact, it is a perfectly functioning gas gauge. Roxanne often looks at the gauge and arrives at beliefs like the following: “On this occasion the gauge reads ‘F’ and F,” where the second conjunct expresses the proposition that the tank is full. The perceptual process by which Roxanne arrives at the belief that the gauge reads ‘F’ is reliable, and, given the assumption about the proper functioning of the gauge, so is the process by which she arrives at the belief that the tank is full. Hence, according to reliabilism, her belief in the conjunction should be justified. Now Roxanne deduces the further proposition, “On this occasion, the gauge is reading accurately.” Since deduction is a reliable process, Roxanne must be justified in believing this as well. Suppose Roxanne does this repeatedly without ever getting independent information about the reliability of the gauge (whether it's broken, hooked up properly, etc.). Finally she infers by induction, “The gauge is reliable (in general).” Since each step she uses is a reliable process, the latter belief too is justified. With just a little more deduction Roxanne can conclude that the process by which she comes to believe that her gas tank is full is reliable, and hence she is justified in believing that she is justified in believing that her gas tank is full.
This entire procedure is what Vogel calls “bootstrapping,” and Cohen calls “easy knowledge.” Both claim that the procedure is illegitimate. After all, you can apply bootstrapping to a great many underlying processes, some reliable, some not. Every time, bootstrapping will tell you that the underlying process is reliable. So bootstrapping is itself unreliable. Since reliabilism licenses bootstrapping, reliabilism is in trouble; so Vogel concludes, at any rate. Another label for bootstrapping is “epistemic circularity.” Epistemic circularity is the use of an epistemic method or process to sanction its own legitimacy. In effect, Vogel is saying that reliabilism is mistaken because it wrongly permits epistemic circularity. Cohen does not pin the blame squarely on reliabilism.
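The structural point, that bootstrapping endorses any process it is applied to, can be illustrated with a small simulation (a sketch of the argument's structure only, not anything from Vogel or Cohen; the function name and the 0.9 threshold are illustrative). Because Roxanne's belief about the tank just is whatever the gauge reads, her per-occasion “accuracy checks” succeed by construction, whether or not the gauge is working:

```python
import random

def bootstrap_verdict(gauge_is_accurate, trials=100):
    """Simulate Vogel-style bootstrapping: the agent trusts the gauge,
    so her belief about the tank always matches the reading, and every
    'the gauge reads accurately on this occasion' check succeeds."""
    agreements = 0
    for _ in range(trials):
        actual = random.choice(["F", "E"])          # real tank state
        reading = actual if gauge_is_accurate else random.choice(["F", "E"])
        believed_state = reading                    # she believes the gauge
        # Her deduced claim: "the gauge reads accurately on this occasion"
        if reading == believed_state:               # trivially true by construction
            agreements += 1
    # Induction over the trials: conclude "the gauge is reliable (in general)"
    return agreements / trials >= 0.9

print(bootstrap_verdict(True))   # accurate gauge: verdict "reliable"
print(bootstrap_verdict(False))  # broken gauge: verdict is still "reliable"
```

The verdict is True regardless of the input, which is just Vogel's point: a procedure that certifies every underlying process as reliable is itself unreliable, so a theory that licenses it faces trouble.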
The fifth problem facing reliabilism is the so-called “value problem.” Although this is posed as a problem for reliabilism as a theory of knowing, I will include it in our discussion of reliabilism as a theory of justification. In his dialogue Meno, Plato raised the question of why knowledge is more valuable than true belief. The extra-value-of-knowledge question has been brought to the fore in recent literature. Knowledge is assumed to be more valuable than true belief, and this extra value is presented as a test of adequacy for theories of knowledge. If a theory cannot account for the extra value, this is a strong count against its adequacy. Moreover, a number of writers have urged that process reliabilism fails this adequacy test (Jones, 1997; Swinburne, 1999; Zagzebski, 1996, 2003; Riggs, 2002; Kvanvig, 2003). According to process reliabilism, the extra value that knowledge has over true belief must come from the reliability of the process that causes the belief. How can this be? Jonathan Kvanvig formulates the problem by saying that any value associated with the reliability of the producing process is a function of the likelihood of the belief's being true. But doesn't the value of the belief's actual truth “swamp” the value that accrues from mere likelihood of truth? Linda Zagzebski formulates the problem using the analogy of a cup of espresso that is produced by a reliable espresso machine. “The good of the product makes the reliability of the source that produces it good, but the reliability of the source does not then give the product an additional boost of value… If the espresso tastes good, it makes no difference if it comes from an unreliable machine… If the belief is true, it makes no difference if it comes from an unreliable belief-producing source” (2003: 13).
These five problems, as well as others, pose challenges to process reliabilism about justification, especially to its earliest and simplest version. Later discussions have proposed many replies, refinements, and/or modifications that are examined in the next section.
The first problem for reliabilism is the evil demon problem, a challenge to the claim that reliability is necessary for justification. Notice that the example makes a major assumption about the domain in which the reliability of a process is to be evaluated (henceforth: the domain of evaluation). It assumes that the relevant domain to consider when evaluating a process's reliability is the world of the example, in this case, the evil-demon world. In other words, when assessing the justificational status of a hypothetical belief in P, the reliability of the belief's generating process is to be evaluated by reference to the truth-ratio of the process in the hypothetical world. It is not to be evaluated by reference to the truth-ratio of the process in, for example, the actual world.
Although this is a straightforward interpretation, it wasn't categorically endorsed in “What Is Justified Belief?” A benevolent demon was imagined who arranges things so that beliefs formed by wishful thinking are usually true. In a benevolent-demon (BD) world, wishful thinking is reliable. Hence, if process reliabilism is interpreted as saying that the domain of evaluation is always the world of the example, then a belief in a BD-world arrived at by wishful thinking will be a justified belief. Is this an acceptable result? Goldman (1979) wasn't sure, and considered other possibilities. One theory floated was that the domain of evaluation is the actual world (“our” world). This too was not endorsed, but led to a tentative recommendation of another methodology. “What we really want is an explanation of why we count, or would count, certain beliefs as justified and others as unjustified. Such an explanation must refer to our beliefs about reliability, not to the actual facts. The reason we count beliefs as justified is that they are formed by what we believe to be reliable belief-forming processes” (1979/1992: 121).
At this point, some critics complain, Goldman seems to change the subject. He switches the question from when a belief is justified to when we count a belief justified, or when we judge it to be justified. Aren't these distinct questions? True, they are different questions, but answering the question of when we count a belief justified may be very informative as to the conditions and criteria for a belief to be justified. Keith DeRose (1999: 188) makes a rather similar move in defending contextualism in epistemology. He views contextualism as a theory of knowledge attribution. A theory of knowledge attribution isn't the same as a theory of what knowledge is, but it can be very relevant to the latter quest. Similarly, in figuring out what criteria people use in deciding whether to count, or call, a belief justified, we can gain insight into the question of what it takes for a belief to be justified. Suppose, for example, that being justified is somehow related to the reliability (in some domain of evaluation or other) of its generating process. Then people can be expected to count or call a belief justified if they believe that the belief's method of production is reliable in the relevant domain of evaluation. That's why considering their beliefs about reliability is relevant (even if those beliefs about reliability aren't justified).
Against this background, perhaps we can make sense of the first of several modifications Goldman subsequently proposed for process reliabilism. Addressing the question about the domain of evaluation, Epistemology and Cognition advanced the “normal worlds” approach:
We have a large set of common beliefs about the actual world: general beliefs about the sorts of objects, events, and changes that occur in it. We have beliefs about the kinds of things that, realistically, do and can happen. Our beliefs on this score generate what I shall call the set of normal worlds. These are worlds consistent with our general beliefs about the actual world…. Our concept of justification is constructed against the backdrop of such a set of normal worlds. My proposal is that, according to our ordinary conception of justifiedness, a rule system is right in any world W just in case it has a sufficiently high truth ratio in normal worlds (1986: 107).
This passage could profitably be rewritten by first introducing the theory not as a theory of genuine justifiedness but as a theory of justification attribution. It is an attempted reconstruction of how our judgments of justifiedness are arrived at, not a statement of correctness conditions or truth-conditions for statements of justifiedness. As indicated above, such a theory of justification attribution can be helpful in constructing an account of correctness conditions for justifiedness. The two should be distinguished, however.
John Pollock and Joseph Cruz (1999: 115) criticize the normal worlds approach by saying that it “puts no constraints on how we get our general beliefs. If they are unjustified, then it seems that reliability relative to them should be of no particular epistemic value.” This is an appropriate criticism if the theory is viewed — as it was indeed presented — as a theory of correctness conditions for justifiedness. But if we now view it, retrospectively, as a theory of justification attribution, it isn't so serious a criticism. On the other hand, the task still remains of specifying a theory of correctness conditions or truth-conditions for justifiedness. We shall return to this below. Goldman (1988) himself presented additional worries about the normal worlds approach, which led him to abandon this approach in subsequent writing.
Goldman experimented with two other revisions of process reliabilism. “Strong and Weak Justification” (Goldman, 1988) proposed two different senses or types of justifiedness. It considered a scientifically benighted culture of ancient or medieval vintage employing highly unreliable methods for forming beliefs, appealing, for example, to the doctrine of signatures, to astrology, and to oracles. A member of this culture forms a belief about the outcome of an impending battle by using one of these methods, call it M. Is this belief justified or not? There is a tension here. A pull to answer in the negative reflects the idea that a belief is justified only if it is generated by reliable methods, and M isn't such a method. A pull toward a positive answer reflects the cultural plight of the believer. Everyone else in his environment uses and trusts method M. Our believer has good reasons to trust his cultural peers on many matters, and finds no flaws with M. One can hardly fault him for relying on M and hence believing what he does. His belief is epistemically blameless, and in that sense justified. In short, strong justifiedness requires de facto reliability, whereas weak justifiedness imposes no such requirement. Returning to the demon-deceived cognizer, his beliefs can be described as lacking strong justifiedness but possessing weak justifiedness.
In “Epistemic Folkways and Scientific Epistemology” (Goldman, 1992) a two-stage theory was formulated that was intended, among other things, to handle the demon-world and clairvoyance problems. “Folkways” put forward an attribution theory, a theory that aimed to explain or predict the judgments people make about justifiedness. Two distinct stages were posited in the activity of justification attribution (a two-level structure was also presented in Epistemology and Cognition). The first stage is the creation of a mental list of “good” and “bad” ways of forming beliefs, belief-forming methods one classifies as epistemic “virtues” and “vices” respectively. The hypothesis is that virtues and vices are selected as such because of the cognizer's beliefs about their reliability or unreliability (in the actual world). Alternatively, these selections might be inherited from one's epistemic community, not arrived at by purely individual means. The hypothesis of this first stage is based in part on a certain approach to the psychology of concepts, an approach that views concepts (in the psychological sense) as consisting of mental representations of positive and negative “exemplars” of the category in question. The second stage consists of applying these virtues and vices to target examples. When asked whether a specified belief is justified or unjustified, an attributor mentally considers how the subject's belief was formed and tries to match the process of its formation to one or more of the virtues or vices on his mental list. If the subject's method of formation matches a virtue, the attributor judges it to be justified; if it matches a vice, it is judged to be unjustified. If the process of formation doesn't exactly match any item on his mental list, some comparative similarity metric is deployed to make a classification. In short, the two-stage process employs reliability considerations at the first stage, the norm-selection stage. 
But in the second stage, the judgment or attribution stage, no appeal is made to considerations of reliability. There is simply a “matching” process (perhaps more constructive than this term suggests) that references the stored list of virtues and vices.
How does this theory purport to handle the first two counterexamples to early reliabilism? Basing beliefs on visual appearances is presumably on everyone's list of epistemic virtues. So attributors will naturally count a vision-based belief as being justified, even if it is described as occurring in a possible world in which vision is unreliable. The theory denies that attributors revise their list of epistemic virtues and vices whenever they hear a story involving non-standard reliabilities. This explains why positive judgments of justifiedness are made in the demon-world case. What about the clairvoyance case? The theory predicts that the evaluator will match the belief-forming processes of the clairvoyant subjects either to the vice of ignoring contrary evidence (in the cases of Casper and Maud) or to certain other vices. Admittedly, clairvoyance per se may not be on many people's list of virtues and vices. But there is a class of other putative faculties, including mental telepathy, ESP, telekinesis, and so forth, that are scientifically disreputable. It is plausible that most evaluators consider any process of basing beliefs on the supposed deliverances of such faculties to be a vice, and that they judge clairvoyance to be similar to such vices. That is how the “Folkways” theory predicts that judgments of unjustifiedness would be made in the clairvoyance case.
Again, the “Folkways” theory is a theory of attribution. It doesn't purport to present a theory of what justified belief is. However, a natural extrapolation might be made from this theory of attribution to a theory of correctness conditions or truth conditions. The theory might run approximately as follows. First, there is a right system of epistemic norms or principles, norms that govern which belief-forming processes are permissible (or mandatory). These norms are grounded in considerations of reliability or truth-conduciveness. The right set of norms is “made” right by the true facts of reliability pertaining to our cognitive processes and the actual world. Since the ordinary person's set of virtues and vices may be at variance with the right norms, there can certainly be a difference between what are judged or considered virtuous belief-forming processes and what are in fact virtuous belief-forming processes. Finally, a belief is really justified if and only if it is arrived at (or maintained) in conformity with the right set of norms or principles. This is in fact the structure of the theory of justifiedness of Epistemology and Cognition. Departing now from that book's theory of “normal worlds,” we can add that the right system of epistemic norms is made right in virtue of facts and regularities obtaining in the actual world. Furthermore, the system that is right in the actual world is right in all possible worlds. In other words, epistemic rightness is rigidified. This is a tack considered in Epistemology and Cognition (1986: 107), though rejected in favor of the normal worlds approach. It might be objected that norm rightness should surely be relativized to different worlds or “environments” (as Sosa, 1988, 1991 contends). But it's not obvious that ordinary thought displays a systematic tendency to proceed in this fashion, so why should philosophical theorizing posit this? 
Indeed, if reliabilism is on the right track, positive judgments of justification about demon-world cases support the idea that norm-rightness may be rigidified rather than allowed to vary across worlds.
We have examined some ways of dealing with the first two principal problems raised for reliabilism. Another important proposal remains to be added for dealing with the second (non-sufficiency) problem specifically. As reviewed earlier in Section 2, one way to rectify the non-sufficiency of simple reliability might be to add an epistemic ascent requirement. This would say that justifiedness not only requires the use of a reliable process to arrive at a belief in p but also requires an accompanying higher-order belief that the process so used is reliable. Reliabilists are likely to resist this epistemic-ascent proposal, however, because it sets too high a standard of justifiedness. Young children have few if any such higher-order beliefs, but still have many first-order beliefs that are justified.
A more attractive way to bolster reliabilism is to add a weaker supplementary condition, a negative higher-order condition. Goldman proposed such a condition in Epistemology and Cognition (1986: 111–112) in the form of a non-undermining (or “anti-defeater”) condition. This says that a cognizer, to be justified, must not have reason to believe that her first-order belief isn't reliably caused. This promises to handle the clairvoyance and Truetemp cases very smoothly. Surely Truetemp, like the rest of us, has reason to think that beliefs that come out of the blue—as far as one can tell introspectively—are unreliably caused. Hence he has reason to believe that his spontaneous beliefs about the precise ambient temperature are unreliably caused. So his first-order beliefs about the ambient temperature violate the supplementary condition, and therefore are unjustified. For this maneuver to help reliabilism, of course, “defeat” must be cashed out in reliabilism-friendly terms. It cannot simply be understood as “render unjustified,” because then it would be inadmissible in a base clause. The necessary cashing out seems doable, but we won't pursue it here.
Whether or not this “negative” strengthening of a reliability condition satisfactorily resolves the non-sufficiency challenge, many epistemologists are persuaded by non-sufficiency examples that a reliability-based condition needs to be strengthened if the approach is to be viable. A variety of other ways to strengthen the theory are considered in Section 5 below.
Many contributors have proposed solutions to the generality problem. A solution might be sought by trying to specify suitable process types in commonsense terms, for example, “confused reasoning,” “wishful thinking,” or “hasty generalization.” Alternatively, a solution might be sought that would identify an appropriate process type (for each token) in scientific terms, employing concepts from scientific psychology. Most attempted solutions pursue the latter approach. Alston (1995), for example, suggests that a relevant process type must be a natural kind. Interpreting process types as functions that take features of experiences as inputs and beliefs as outputs, he proposes that the relevant type is the natural psychological kind that corresponds to the function actually operative in the formation of the belief. Unfortunately, the problem remains that process tokens will instantiate indefinitely many functions. Alston tries to tackle this problem by proposing that the relevant function is the natural kind that includes all and only those tokens sharing with the target token all the same causally contributory features from the input experience to the resulting belief. Conee and Feldman raise problems for this proposal as well.
James Beebe (2004) also supports the idea that a scientific type will be the pertinent one, in particular, an information-processing procedure or algorithm. Here again is the problem that there will be indefinitely many types of this kind, of varying reliability. To pick out the appropriate type, Beebe proceeds as follows. Let A be the broadest such type. Then choose the broadest objectively homogeneous subclass of A within which the token process falls, where a class S is objectively homogeneous if no statistically relevant partition of S can be effected. This is an interesting idea, but there remains the lingering question of whether there is always a set of conditions that meets Beebe's standards, i.e., that generates an appropriate partition.
Mark Wunderlich (2003) offers a novel response to the generality problem. He rejects the assumption that the process reliabilist must pick out a single epistemically relevant process type for any given token. Instead he proposes a complex method for organizing the “primordial soup” of reliability information associated with a given process token (the primordial soup consists of the reliability numbers of all the process types that the token instantiates). He then suggests three dimensions relevant to justificatory status along which a token can be evaluated on the basis of this richly structured reliability information. In short, the justificatory status of a belief is not a function of a single appropriate type for each token but of a reliability vector associated with each token. The details of Wunderlich's proposal are too intricate to summarize here, but it's refreshing to contemplate a new perspective from which to approach the topic.
Mark Heller (1996) offers a contextualist approach to the generality problem. Heller claims that the demand for an absolutely general account of a token's relevant type, in terms of necessary and sufficient conditions, is inappropriate, because the predicate ‘reliable’ is generally — not just in its epistemic interpretation — richly sensitive to the evaluator's context. Thus, the context can be expected to do the work of picking out a unique type. I agree that context plausibly plays an important role here in winnowing down the range of process types. But can it reduce them to a unique type? That is more doubtful.
A paper by Juan Comesaña (2006) may offer just the right reply to critics of reliabilism like Conee and Feldman. Although Comesaña purports to identify a solution to the generality problem, it's not clear that it's a new, or better, solution of the sort Conee and Feldman are requesting. The important point Comesaña makes is that the generality problem is not a special problem for process reliabilism; it is a problem that all epistemologies of justification share, including Feldman and Conee's own evidentialist theory. As Comesaña recognizes, every adequate epistemological theory needs an account of the basing relation, and any attempt to explain the basing relation will ultimately run into the generality problem, or something very similar to it.
The point can be developed more fully as follows. When Feldman and Conee (1985) state their final theory of justification, it features the crucial phrase “on the basis of.” True, this phrase occurs in the context of their analysis of “well-foundedness,” which they distinguish from justifiedness. But this just seems to be their way of expressing the notion of doxastic, as opposed to propositional, justification (as Conee indicates in a personal communication). Thus, Feldman and Conee agree that a basing relation is essential to an adequate account of doxastic justifiedness. Now, there is no hope of elucidating a suitable basing relation without giving it a causal interpretation. This doesn't yet imply that a particular causal process type must be selected. Indeed, Feldman and Conee might insist that so long as there is some causal relation connecting the subject's evidential states with his belief, then all is well. Nothing more specific is required in the way of a causal connection. But such a thesis would be wrong. There are such things as “deviant causal chains,” that is, causal chains that are defective relative to the property of philosophical interest. Consider a mental process that begins with appropriate evidential states but makes a detour through wishful thinking, which finally generates the target belief. This kind of process would not instantiate a suitable basing relation in virtue of which the target belief is justified. What process type would a token process have to instantiate to achieve doxastic justifiedness for the resulting belief? An evidentialist won't want to say that a suitable process type is necessarily one with high reliability. But an evidentialist owes us a story about which process types qualify as justification-conferring basing relations and which ones don't. This problem is in the same ballpark as the generality problem for reliabilism. 
So although there may not yet be a fully satisfactory solution to the problem from the vantage-point of reliabilism, it's a kind of problem that afflicts all epistemologies. Reliabilism is not burdened with a distinctive liability or weakness in this regard.
The fourth problem presented in Section 3 was the bootstrapping, or easy knowledge, problem. One answer to this problem takes the same shape as the answer to the generality problem: the problem is not unique to reliabilism, but is shared by many epistemologies. Cohen acknowledges this point quite clearly. His contention is that all views with “basic knowledge structure” face serious difficulties. Reliabilism is one of these views but by no means the only one. Moreover, James van Cleve (2003) argues persuasively that if what Vogel calls “bootstrapping,” or what Cohen calls “easy knowledge,” is disallowed, the only alternative is skepticism. Thus, if a theory like reliabilism – or any form of externalism – makes easy knowledge possible, this is not a terrible thing. Skepticism is a very unwelcome alternative.
The fifth problem facing reliabilism is the extra-value-of-knowledge problem. One response to this problem from the reliabilist perspective is made by Alvin Goldman and Erik Olsson (2008). They diagnose the main point behind the swamping problem as arising from the threat of “double counting.” Because the value of a token reliable process seemingly derives from the value of the true belief it causes, to suppose that the latter acquires extra value in virtue of being so caused would be a case of illegitimate double counting. Goldman and Olsson argue that the double-counting charge can be rebutted or perhaps side-stepped altogether. They offer two solutions.
According to the first solution, when a reliable process produces a true belief, the composite state of affairs has a property that would be missing if the same true belief weren't produced reliably. And this property is an (epistemically) valuable one to have. The property is that of making it likely that one's future beliefs of a similar kind will also be true. Under reliabilism, the probability of having more true beliefs in the future is greater conditional on S's knowing that P than conditional on S's merely believing truly that P. For comparison, consider the espresso example. If a reliable coffee machine produces good espresso for you today and remains at your disposal, it can normally produce a good espresso for you tomorrow. The reliable production of a good cup of espresso enhances the probability of a subsequent good cup of espresso, and this probability enhancement is a valuable property to have.
The second Goldman-Olsson solution starts with the observation that the swamping argument wrongly assumes that the value of a token reliable process could only be derived from the value of the token true belief it produces. However, the imputation of instrumental value isn't generally restricted to a singular causal relation between a token instrumental event and a token result. There is a second kind of instrumentalism-based value inheritance. When tokens of type T1 regularly cause tokens of type T2, which has independent value, then type T1 tends to inherit value from type T2. Furthermore, the inherited value accruing to type T1 is also assigned to each token of T1, whether or not such a token causes a token of T2. It is further suggested that sometimes a type of state that initially has merely instrumental value eventually acquires independent, or autonomous, value status. This allows extra value to be added without illegitimate double counting. This is what is hypothesized to occur in the true belief plus reliable process scenario.
Several variants of process reliabilism have emerged as theories of either knowledge or justification. Typically they endorse the idea that reliability is a necessary condition for justifiedness (or for the third condition of knowledge) but deny that it's sufficient. Alternatively, they weave a different account on the theme of reliability. What usually motivates these approaches is the felt need for more stringent conditions on justifiedness and knowledge than mere de facto reliability. Examples in the literature, such as the clairvoyance, Truetemp, and brain-lesion cases, are taken to demonstrate the need for either strengthening or permuting the approach.
One such theory is Alvin Plantinga's (1993b) proper functionalist theory of warrant. Plantinga holds, at a first approximation, that a belief has warrant only if it is produced by cognitive faculties that are functioning properly in an appropriate environment. Plantinga's notion of proper function, moreover, implies the existence of a design plan, and a belief's having warrant requires that the segment of the design plan governing the production of the belief is aimed at truth. In addition, the design plan must be a good one in the sense that the objective probability of the belief's being true (given that it's produced in accordance with the design plan) must be high. The last condition, he says, is the reliabilist constraint on warrant, and “the important truth contained in reliabilist accounts of warrant” (1993b: 17). While it would be an exaggeration to say that Plantinga's theory is “motivated” by problems for reliabilism, he does tout his proper functioning theory in part as an improvement over reliabilism: “what determines whether the output of a process has warrant is not simply … truth-ratios …. [T]he process in question must meet another condition. It must be nonpathological; we might say that the process in question must be one that can be found in cognizers whose cognitive equipment is working properly” (1993a: 208). So, although Plantinga accepts a truth-linked constraint on warrant — namely, high probability of truth — he thinks more must be added.
I shall suggest two problems for this theory. The first is due to Holly M. Smith. Smith asks us to imagine a computer scientist who designs and builds a cognitively sophisticated race of computers, with different hardware than that of humans but the same cognitive properties. According to Plantinga's theory, many beliefs formed by these computers will be warranted, because they result from the proper working of design plans that were aimed at truth. Now suppose, however, that humans were not designed by God, nor by any other designing agent. Then, according to Plantinga's ultimate theory, human beliefs are incapable of being warranted. That conclusion, however, is highly counterintuitive. By hypothesis, the human cognitive properties duplicate those of the computers. It is hardly tempting to credit the computers' beliefs with epistemic warrant while refusing to assign the same epistemic credit to human beliefs.
A second unattractive feature of Plantinga's theory is the way that it encumbers atheism. At first blush, it seems that atheism should not force one into general skepticism. Theological views should not force one to deny epistemic warrant to all people, including warrant with respect to ordinary physical-object beliefs. Yet if an atheist accepts the (philosophy-of-biology) thesis that no sound naturalistic analysis of proper function is feasible, then he would be forced by Plantinga's account of warrant into general skepticism. Plantinga, of course, would probably welcome this result. But this is a case in which one philosopher's modus ponens is appropriately countered with a modus tollens. In other words, the appropriate conclusion is that Plantinga's account of warrant is misguided.
Another theory that adds further conditions to a reliabilist theme is Ernest Sosa's virtue reliabilism. Sosa's theory, however, mainly targets the concept of knowledge rather than justifiedness, and it's not entirely clear how to extract the components that belong strictly to justifiedness. We set that issue aside. Here are two passes at Sosa's account, both drawn from his A Virtue Epistemology (2007) but emphasizing slightly different strands of the theory.
Like an archer's shot at a target, a belief can be accurate, it can manifest epistemic virtue or competence (roughly, reliability), and it can be accurate because of its competence. Sosa calls these properties, respectively, accuracy, adroitness and aptness. All three of these conditions are required for knowledge; in other words, knowledge requires a belief to be true, reliably produced, and true because reliably produced. The ‘because’ condition is a non-accidentality or anti-luck condition. Sosa also introduces a distinction between two types of knowledge: “animal” and “reflective” knowledge. Animal knowledge involves apt belief that isn't defensibly apt belief, whereas “reflective” knowledge is apt belief that is also defensibly apt belief (2007: 24). In more familiar terminology, animal knowledge is reliably and non-accidentally true belief whereas reflective knowledge features an additional “layer” of reliably and non-accidentally caused true belief, true belief about the reliability and non-accidentality of the first-order belief. As Sosa puts it elsewhere: “One has reflective knowledge if one's judgment or belief manifests not only such direct response to the fact known but also understanding of its place in a wider whole that includes one's belief and knowledge of it and how these come about.” (1991: 246). This highlights the coherence element in human knowledge. Thus, what Sosa seeks to add to an account of distinctively human knowledge – as contrasted with mere animal knowledge – is meta-level true beliefs of an appropriate provenance. This is clearly indicated in Sosa (2007: 32) where reflective knowledge (K+) is equated with animal knowledge of animal knowledge (KK).
A number of questions can be raised about these conditions added to reliability. If a first level of reliability and non-accidentality is inadequate to achieve genuinely human knowledge, why does an added layer of the same deficient stuff turn low-grade knowledge into high-grade knowledge? Furthermore, if epistemic ascent is needed, why is it sufficient to have only one step of ascent? Doesn't a similar problem arise at the second level as arose at the first (BonJour, 2003: 197–198)? And if one agrees on the need for additional steps, there is no obvious place to stop. Isn't there a threat of an infinite regress?
Second, how exactly is Sosa's “two types of knowledge” doctrine to be understood and how well motivated is it? Sosa claims that “no human being blessed with reason has merely animal knowledge of the sort attainable by beasts”, for a “reason-endowed being automatically monitors his background information and his sensory input for contrary evidence and automatically opts for the most coherent hypothesis even when he responds most directly to sensory stimuli” (1991: 240). We can distinguish two types of coherence: negative and positive. A corpus of beliefs is negatively coherent just in case it doesn't partake of inconsistency. A corpus of beliefs has positive coherence just in case, in addition to non-inconsistency, some of its members support other members by making the latter more probable. For example, a belief to the effect that a second belief is reliably formed makes the second more likely to be true. Now the thesis that human knowledge is distinguished from animal knowledge by dint of negative coherence doesn't seem right because even animal cognition partakes of inconsistency avoidance. If the thesis is that human knowledge is distinguished from animal knowledge by dint of positive coherence, that thesis seems too strong. Not all tokens of human knowledge partake of positive coherence with others. Although self-reflection (epistemic ascent) is a sometime occurrence in human cognition, it is too strong to say that each token of human knowledge is accompanied by an extra tier of knowledgeable reflection. As noted above, such a thesis invites an infinite regress. In addition, why mark the presence or absence of higher-order knowledge as a difference between types of knowledge (human versus animal)? Agreed, it is epistemically good to have meta-level knowledge on top of first-order knowledge, but this is readily accommodated by a simple recognition that knowing additional propositions (especially explanatory ones) is epistemically good. This doesn't require postulation of a separate kind of knowledge (Greco, 2006).
A final problem concerns the aptness, or non-accidentality, concept. What does it mean for a belief to be true because of the competence exercised in its production? Under what conditions, precisely, is the correctness of a belief “due” to the believer's competence – as opposed to the circumstances in which it was formed? An example of accidental correctness is a Gettier-like case in which S believes that someone owns a Ford because Nogot does, and the facts are that somebody does own a Ford but not Nogot. But how does this case of accidentally true belief generalize? In every case of competently (reliably) formed true belief, indefinitely many causal factors in addition to the believer's competence conspire to produce correctness. The absence of any one of these factors might have yielded incorrectness. Even if we knew how to measure degrees of causal relevance — which we don't — we would need to choose a threshold of “sufficient” causal efficacy that a competence must reach in order to achieve aptness and hence knowledge. Selecting this threshold of sufficiency seems to be an insuperable problem. And would meeting such a threshold systematically correlate with positive knowledge classifications? That is unclear.
Another form of virtue reliabilism, defended by John Greco (2000), is called “agent reliabilism.” Greco identifies two problems for simple reliabilism arising from “strange” processes and “fleeting” processes. Plantinga's brain lesion case is cited as an example of a strange though reliable process. Adopting a reliable method on a whim is cited as a fleeting though reliable process. Both types of cases, Greco argues, show that not any old reliable cognitive process suffices for positive epistemic status. He proposes to add the requirement that a reliable process must be part of a stable disposition or faculty that is part of the epistemic agent's character. However, Greco doesn't adequately explain what is meant by a “strange” process. Is it simply an unusual or unfamiliar process? Strange or unfamiliar to whom? If one didn't know much about bats or dolphins, echolocation would be an unfamiliar and strange process. But can't these processes confer positive epistemic status on these creatures' perceptual beliefs? As far as fleeting processes are concerned, we can easily imagine cases in which reliable cognitive methods are newly acquired and successfully applied but promptly lost through death, stroke, Alzheimer's disease, etc. Nonetheless, can't their fleeting possession have resulted in justified beliefs or knowledge? A swampman that pops into existence and survives but a few minutes might be another such example.
Agent reliabilism is commonly accompanied by talk of an agent's “credit-worthiness.” The idea is that if a true belief results from an agent's stable disposition, which is part of his or her cognitive character, then that belief can be credited to the agent, and credit-worthiness is essential to knowledge or positive epistemic status. It is doubtful, however, that the notion of credit is very helpful here, since credit is not invariably associated with knowledge attainment. We don't typically give people “credit” for knowledge obtained through perception or memory, but this doesn't lead us to withhold knowledge attributions in these cases.
Yet another approach intended to bolster reliabilism is “transglobal reliabilism,” proposed by David Henderson and Terence Horgan (2001, 2006). Their principal concern is the evil-demon world problem for simple reliabilism. Their proposal is that the kind of reliability sufficient for justifiedness is stronger than actual-world reliability; instead it is robust reliability. Robust reliability is truth-conduciveness in a very wide set of epistemically relevant possible worlds, worlds that are experientially very much like the actual world but in other respects possibly quite different from this one. They call a process “safe” (using this term differently from other epistemologists) when it would not incur too many false beliefs in a wide range of epistemically relevant worlds, a set of worlds that reflects the uncertainty characteristic of epistemic agents. They call processes “transglobally reliable” just in case they are reliable in the domain of evaluation consisting of all experientially possible global environments. (This seems to be related, though by no means equivalent, to the “normal worlds” approach.) Details aside, a belief is justified if and only if it is generated by a process that is transglobally reliable. Perceptual beliefs in an evil-demon world can satisfy this condition because their generating processes may be reliable in the pertinent domain of evaluation. The chief worry about transglobal reliabilism is that it apparently assumes that there are more epistemically hospitable experientially possible worlds than epistemically inhospitable experientially possible worlds, and it's unclear what supports this assumption.
A final variant of reliabilism we shall consider is “internalist reliabilism,” advocated by Matthias Steup (2004). This qualifies as a variant of reliabilism not because it seeks to strengthen traditional reliabilism in the fashion of the preceding theories but because it retains a general reliabilist theme. Steup contrasts two cases, one in which your perceptual belief-forming process is reliable but you have evidence that it isn't and one in which your perceptual belief-forming process is not reliable but you have evidence that it is. In which case are your perceptual beliefs (prima facie) justified? Externalists answer: in the former case, where the process is de facto reliable. Internalists answer: in the latter case, where you have evidence for reliability. Steup endorses the internalist answer. This position runs quite contrary to traditional reliabilism, but he views it as having a reliabilist flavor because evidence for reliability is what matters.
What qualifies as “evidence” under Steup's view? His main example of having evidence for or against the reliability of a perceptual process involves memory-based evidence. Specifically, you might have seeming memories of either a good track record or a poor track record of perceptual success. Steup doesn't indicate, however, how the concept of evidence is to be analyzed, or what qualifies something as a piece of evidence. If evidence is analyzed as that which makes a proposition or belief justified, then evidence is itself an epistemic concept and must be declared inadmissible in a substantive account of justifiedness. (Recall the admissibility constraints of “What Is Justified Belief?”, discussed in Section 2.) If evidence isn't analyzed in terms of justification, and therefore passes the admissibility test, it might well turn out that something qualifies as an item of evidence for P if and only if it is a reliable indicator of the truth of P. This would mean that a seeming memory of P is evidence for P if and only if it's a reliable indicator of P. But now it appears that evidence replaces justification as the primary concept of epistemic interest, and it is to be understood in terms of de facto reliability. This supports an externalist form of reliabilism that Steup rejects.
Another way to reflect on Steup's proposal is this. No matter what criterion C of justifiedness is chosen, it will always be possible (assuming the fallibility of justifiedness) for a person to undergo events that justify him in thinking falsely that he satisfies or fails to satisfy C in a particular case. Hence, he would be justified in believing falsely that a particular (first-order) belief of his is justified or unjustified. Now consider the possibility that externalist reliabilism is the correct criterion of justifiedness. Then the subjects Steup describes would be justified in their perceptual beliefs if their perceptual processes are reliable, even if they have evidence contrary to this conclusion. However, as we have seen above, having evidence might be equivalent to having (propositional) justifiedness for a proposition. So in the scenarios in question, a reliabilist would say that subjects are justified in believing that their perceptual beliefs are not justified even though, in fact, they are justified. This is perfectly in order and consonant with reliabilism. It's compatible with really being justified in believing P that you are justified in believing that you aren't justified in believing P. Thus, an externalist reliabilist can say that Steup confuses iterative unjustifiedness with respect to perceptual beliefs (J~J(P)) with first-order unjustifiedness with respect to these beliefs (~J(P)) (see Goldman, forthcoming).
Both reliabilism about knowledge and reliabilism about justification have taken a number of forms. We have examined process reliabilism about justification most carefully, starting with its strengths and rationale. Although this theory in its simplest form encounters some salient problems, many if not all of these problems can be met, either by promising refinements and “fixes” or by noting that similar problems confront any comparable theory, which mitigates their seriousness. A number of variants of reliabilism are in active development, so the approach seems to have considerable robustness and flexibility.
- Alston, William P. (1980). “Level Confusions in Epistemology,” Midwest Studies in Philosophy, 5: 135–150. Reprinted in Alston, Epistemic Justification, Ithaca, NY: Cornell University Press (1989).
- Alston, William P. (1988). “An Internalist Externalism,” Synthese, 74: 265–283. Reprinted in Alston, Epistemic Justification, Ithaca, NY: Cornell University Press (1989).
- Alston, William P. (1995). “How to Think about Reliability,” Philosophical Topics, 23: 1–29.
- Armstrong, D. M. (1973). Belief, Truth and Knowledge, Cambridge: Cambridge University Press.
- Beebe, James (2004). “The Generality Problem, Statistical Relevance and the Tri-Level Hypothesis,” Noûs, 38: 177–195.
- BonJour, Laurence (1980). “Externalist Theories of Empirical Knowledge,” Midwest Studies in Philosophy, 5: 53–73.
- BonJour, Laurence (2003). “Reply to Sosa,” in Laurence BonJour and Ernest Sosa (eds.), Epistemic Justification, Malden, MA: Blackwell.
- Cohen, Stewart (1984). “Justification and Truth,” Philosophical Studies, 46: 279–295.
- Cohen, Stewart (2002). “Basic Knowledge and the Problem of Easy Knowledge,” Philosophy and Phenomenological Research, 65: 309–329.
- Comesaña, Juan (2006). “A Well-Founded Solution to the Generality Problem,” Philosophical Studies, 129: 27–47.
- Conee, Earl and Feldman, Richard (1998). “The Generality Problem for Reliabilism,” Philosophical Studies, 89: 1–29.
- DeRose, Keith (1995). “Solving the Skeptical Problem,” Philosophical Review, 104: 1–52.
- DeRose, Keith (1999). “Contextualism: An Explanation and Defense,” in J. Greco and E. Sosa (eds.), The Blackwell Guide to Epistemology, Malden, MA: Blackwell, pp. 187–205.
- Dretske, Fred (1971). “Conclusive Reasons,” Australasian Journal of Philosophy, 49: 1–22.
- Dretske, Fred (1981). Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
- Feldman, Richard (1985). “Reliability and Justification,” Monist, 68: 159–174.
- Feldman, Richard (2003). Epistemology, Upper Saddle River, NJ: Prentice-Hall.
- Feldman, Richard and Conee, Earl (1985). “Evidentialism,” Philosophical Studies, 48: 15–34.
- Foley, Richard (1985). “What's Wrong with Reliabilism?” Monist, 68: 188–202.
- Goldman, Alvin I. (1975). “Innate Knowledge,” in S. P. Stich (ed.), Innate Ideas, Berkeley, CA: University of California Press.
- Goldman, Alvin I. (1976). “Discrimination and Perceptual Knowledge,” Journal of Philosophy, 73: 771–791.
- Goldman, Alvin I. (1979). “What Is Justified Belief?” in G. Pappas (ed.), Justification and Knowledge, Dordrecht: Reidel. Reprinted in A. Goldman, Liaisons: Philosophy Meets the Cognitive and Social Sciences, Cambridge, MA: MIT Press (1992).
- Goldman, Alvin I. (1983). Review of “Philosophical Explanations.” Philosophical Review, 92: 81–88.
- Goldman, Alvin I. (1986). Epistemology and Cognition, Cambridge, MA: Harvard University Press.
- Goldman, Alvin I. (1988). “Strong and Weak Justification,” in J. Tomberlin (ed.), Philosophical Perspectives, Volume 2, Atascadero, CA: Ridgeview. Reprinted in A. Goldman, Liaisons: Philosophy Meets the Cognitive and Social Sciences, Cambridge, MA: MIT Press (1992).
- Goldman, Alvin I. (1992). “Epistemic Folkways and Scientific Epistemology,” in Goldman, Liaisons: Philosophy Meets the Cognitive and Social Sciences, Cambridge, MA: MIT Press, pp. 155–175.
- Goldman, Alvin I. (1999). Knowledge in a Social World, Oxford: Oxford University Press.
- Goldman, Alvin I. (2008). “Immediate Justification and Process Reliabilism,” in Q. Smith (ed.), Epistemology: New Essays, Oxford: Oxford University Press.
- Goldman, Alvin I. (forthcoming). “Epistemic Relativism and Reasonable Disagreement,” in R. Feldman and T. Warfield (eds.), Disagreement, New York: Oxford University Press.
- Goldman, Alvin I. and Olsson, Erik J. (2008). “Reliabilism and the Value of Knowledge,” in D. Pritchard, A. Millar and A. Haddock (eds.), Epistemic Value, Oxford: Oxford University Press.
- Greco, John (2000). Putting Skeptics in Their Place, Cambridge: Cambridge University Press.
- Greco, John (2006). “Virtue, Luck and the Pyrrhonian Problematic,” Philosophical Studies, 130: 9–34.
- Heller, Mark (1995). “The Simple Solution to the Generality Problem,” Noûs, 29: 501–515.
- Henderson, David and Horgan, Terence (2001). “Practicing Safe Epistemology,” Philosophical Studies, 102: 227–258.
- Henderson, David and Horgan, Terence (2006). “Transglobal Reliabilism,” Croatian Journal of Philosophy, 6: 171–195.
- Jones, W. E. (1997). “Why Do We Value Knowledge?” American Philosophical Quarterly, 34: 423–439.
- Kornblith, Hilary (1980). “Beyond Foundationalism and the Coherence Theory,” Journal of Philosophy, 77: 597–612.
- Kvanvig, Jonathan L. (2003). The Value of Knowledge and the Pursuit of Understanding, Cambridge: Cambridge University Press.
- Kvart, Igal (2006). “A Probabilistic Theory of Knowledge,” Philosophy and Phenomenological Research, 72: 1–44.
- Lehrer, Keith (1990). Theory of Knowledge, Boulder, CO: Westview.
- Nozick, Robert (1981). Philosophical Explanations, Cambridge, MA: Harvard University Press.
- Plantinga, Alvin (1993a). Warrant: The Current Debate, Oxford: Oxford University Press.
- Plantinga, Alvin (1993b). Warrant and Proper Function, Oxford: Oxford University Press.
- Pollock, John (1984). “Reliability and Justified Belief,” Canadian Journal of Philosophy, 14: 103–114.
- Pollock, John and Cruz, Joseph (1999). Contemporary Theories of Knowledge, 2nd edition. Lanham, MD: Rowman and Littlefield.
- Ramsey, F. P. (1931). “Knowledge,” in his The Foundations of Mathematics and Other Essays, R. B. Braithwaite (ed.), New York: Harcourt Brace.
- Riggs, Wayne D. (2002). “Reliability and the Value of Knowledge,” Philosophy and Phenomenological Research, 64: 79–96.
- Roush, Sherrilyn (2005). Tracking Truth, Oxford: Oxford University Press.
- Sosa, Ernest (1988). “Beyond Skepticism, to the Best of Our Knowledge,” Mind, 97: 153–188.
- Sosa, Ernest (1991). “Reliabilism and Intellectual Virtue,” in E. Sosa, Knowledge in Perspective, Cambridge: Cambridge University Press.
- Sosa, Ernest (1996). “Postscript to ‘Proper Functionalism and Virtue Epistemology’,” in J. L. Kvanvig (ed.), Warrant in Contemporary Epistemology, Lanham, MD: Rowman & Littlefield.
- Sosa, Ernest (2000). “Skepticism and Contextualism,” Philosophical Issues, 10: 1–18.
- Sosa, Ernest (2007). A Virtue Epistemology, Oxford: Oxford University Press.
- Steup, Matthias (2004). “Internalist Reliabilism,” Philosophical Issues, 14: 403–425.
- Swain, Marshall (1981). Reasons and Knowledge, Ithaca, NY: Cornell University Press.
- Swinburne, Richard (1999). Providence and the Problem of Evil, Oxford: Oxford University Press.
- Unger, Peter (1968). “An Analysis of Factual Knowledge,” Journal of Philosophy, 65: 157–170.
- Van Cleve, James (2003). “Is Knowledge Easy – or Impossible? Externalism as the Only Alternative to Skepticism,” in S. Luper (ed.), The Skeptics, Aldershot: Ashgate.
- Vogel, Jonathan (2000). “Reliabilism Leveled,” Journal of Philosophy, 97: 602–623.
- Williamson, Timothy (2000). Knowledge and Its Limits, Oxford: Oxford University Press.
- Wunderlich, Mark (2003). “Vector Reliability: A New Approach to Epistemic Justification,” Synthese, 136: 237–262.
- Zagzebski, Linda (1996). Virtues of the Mind, Cambridge: Cambridge University Press.
- Zagzebski, Linda (2003). “The Search for the Source of Epistemic Good,” Metaphilosophy, 34: 12–28.