Epistemic Paradoxes

First published Wed Jun 21, 2006; substantive revision Thu Mar 3, 2022

Epistemic paradoxes are riddles that turn on the concept of knowledge (episteme is Greek for knowledge). Typically, there are conflicting, well-credentialed answers to these questions (or pseudo-questions). Thus the riddle immediately poses an inconsistency. In the long run, the riddle goads and guides us into correcting at least one deep error – if not directly about knowledge, then about its kindred concepts such as justification, rational belief, and evidence.

Such corrections are of interest to epistemologists. Historians date the origin of epistemology to the appearance of skeptics. As manifest in Plato’s dialogues featuring Socrates, epistemic paradoxes have been discussed for twenty-five hundred years. Given their hardiness, some of these riddles about knowledge may well be discussed for the next twenty-five hundred years.

1. The Surprise Test Paradox

A teacher announces that there will be a surprise test next week. It will be a surprise in that the students will not be able to know in advance on which day the exam will be given. A student objects that this is impossible: “The class meets on Monday, Wednesday, and Friday. If the test is given on Friday, then on Thursday I would be able to predict that the test is on Friday. It would not be a surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the test will not be on Friday (thanks to the previous reasoning) and know that the test was not on Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I would know that the test must be on Monday. So a Monday test would also fail to be a surprise. Therefore, it is impossible for there to be a surprise test.”

Can the teacher fulfill her announcement? We have an embarrassment of riches. On the one hand, we have the student’s elimination argument. (For a formalization, see Holliday 2017.) On the other hand, common sense says that surprise tests are possible even when we have had advance warning that one will occur at some point. Either of the answers would be decisive were it not for the credentials of the rival answer. Thus we have a paradox. But a paradox of what kind? ‘Surprise test’ is being defined in terms of what can be known. Specifically, a test is a surprise if and only if the student cannot know beforehand which day the test will occur. Therefore the riddle of the surprise test qualifies as an epistemic paradox.
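The student’s elimination argument is a backward induction, and it can be made mechanical. Here is a minimal sketch in Python (the three-day schedule and the eve-of-test reading of ‘surprise’ come from the announcement above; the code is only an illustration, not Holliday’s formalization):

  days = ["Monday", "Wednesday", "Friday"]
  remaining = list(days)
  eliminated = []
  while remaining:
      last = remaining.pop()  # the latest day still in play
      # On the eve of `last`, every earlier day has passed test-free and
      # every later day has already been ruled out, so the student could
      # foresee the test. By the definition of surprise, strike the day.
      eliminated.append(last)
  print(eliminated)  # ['Friday', 'Wednesday', 'Monday'] -- no day survives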

Paradoxes are more than edifying surprises. Here is an edifying surprise that does not pose a paradox. Professor Statistics announces she will give random quizzes: “Class meets every day of the week. Each day I will open by rolling a die. When the roll yields a six, I will immediately give a quiz.” Today, Monday, a six came up. So you are taking a quiz. The last question of her quiz is: “Which of the subsequent days is most likely to be the day of the next random test?” Most people answer that each of the subsequent days has the same probability of being the next quiz. But the correct answer is: Tomorrow (Tuesday).

Uncontroversial facts about probability reveal the mistake and establish the correct answer. For the next test to be on Wednesday, there would have to be a conjunction of two events: no test on Tuesday (a 5/6 chance of that) and a test on Wednesday (a 1/6 chance). The probability for each subsequent day becomes less and less. (It would be astounding if the next quiz day were a hundred days from now!) The question is not whether a six will be rolled on any given day, but when the next six will be rolled. Which day is the next one depends partly on what happens meanwhile, as well as depending partly on the roll of the die on that day.
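The pattern is that of the geometric distribution. Writing \(n\) for the number of days until the next quiz, the two facts above combine into \(P(n) = (5/6)^{n-1} \times (1/6)\), which is greatest at \(n = 1\) (Tuesday, with probability 1/6) and shrinks by a factor of 5/6 with each later day.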

This probability riddle is instructive and will be referenced throughout this entry. But the existence of a quick, decisive solution shows that only a mild revision of our prior beliefs was needed. In contrast, when our deep beliefs conflict, proposed amendments reverberate unpredictably. Attempts to throw away a deep belief boomerang. For the malfunctions that ensue from distancing ourselves from the belief demonstrate its centrality. Often, the belief winds up even more entrenched. “Problems worthy of attack prove their worth by fighting back” (Hein 1966).

One sign of depth (or at least desperation) is that commentators begin rejecting highly credible inference rules. The surprise test has been claimed to invalidate the law of bivalence, the KK principle (if one is in a position to know that \(p\), then one is also in a position to know that one knows that \(p\)), and the closure principle (if one knows \(p\) while also competently deducing \(q\) from \(p\), one knows \(q\)) (Immerman 2017).

The surprise test paradox also has ties to issues that are not clearly paradoxes – or to issues whose status as paradoxes is at least contested. Consider the Monty Hall problem. There is a prize behind exactly one of three doors. After you pick, Monty Hall will reveal what is behind one of the doors that lacks the prize. He will then give you an option to switch your choice to another closed door. Should you switch to have the best chance to win the prize? When Marilyn vos Savant answered yes in a 1990 Parade magazine column, she was mistakenly scolded by many readers – including some scholars. The correct solution was provided by the original poser of the puzzle decades earlier and was never forgotten or effectively criticized.
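Readers who share the scolders’ intuition can check the consensus answer by brute force. A minimal simulation in Python (the three doors and Monty’s behavior are as described above; the rest is illustrative scaffolding):

  import random

  def play(switch, trials=100_000):
      wins = 0
      for _ in range(trials):
          prize = random.randrange(3)
          pick = random.randrange(3)
          # Monty opens a door that hides no prize and was not picked.
          opened = next(d for d in range(3) if d != pick and d != prize)
          if switch:
              # Move to the one remaining closed door.
              pick = next(d for d in range(3) if d != pick and d != opened)
          wins += pick == prize
      return wins / trials

  print(play(switch=True))   # about 0.667
  print(play(switch=False))  # about 0.333

Switching wins exactly when the first pick misses the prize, which happens two times in three.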

What makes the Monty Hall Problem interesting is that it is not a paradox. There has always been expert consensus on its solution. Yet it has many of the psychological and sociological features of a paradox. The Monty Hall Problem is merely a cognitive illusion. Paradox status is also withheld by those who find only irony in self-defeating predictions and only an embarrassment in the “knowability paradox” (discussed below). Calling a problem a ‘paradox’ tends to quarantine it from the rest of our inquiries. Those who wish to rely on the surprising result will therefore deny that there is any paradox. At most, they concede there was a paradox. Dead paradoxes are benign fertilizer for the tree of knowledge.

We can look forward to future philosophers drawing edifying historical connections. The backward elimination argument underlying the surprise test paradox can be discerned in German folktales dating back to 1756 (Sorensen 2003a, 267). Perhaps medieval scholars explored these slippery slopes. But let us turn to commentary to which we presently have access.

1.1 Self-defeating prophecies and pragmatic paradoxes

In the twentieth century, the first published reaction to the surprise test paradox was to endorse the student’s elimination argument. D. J. O’Connor (1948) regarded the teacher’s announcement as self-defeating. If the teacher had not announced that there would be a surprise test, the teacher would have been able to give the surprise test. The pedagogical moral of the paradox would then be that if you want to give a surprise test, do not announce your intention to your students!

More precisely, O’Connor compared the teacher’s announcement to utterances such as ‘I am not speaking now’. Although these utterances are consistent, they “could not conceivably be true in any circumstances” (O’Connor 1948, 358). L. Jonathan Cohen (1950) agreed and classified the announcement as a pragmatic paradox. He defined a pragmatic paradox to be a statement that is falsified by its own utterance. The teacher overlooked how the manner in which a statement is disseminated can doom it to falsehood.

Cohen’s classification is too monolithic. True, the teacher’s announcement does compromise one aspect of the surprise: Students now know that there will be a test. But this compromise is not itself enough to make the announcement self-falsifying. The existence of a surprise test has been revealed, but uncertainty survives as to which day the test will occur. The announcement of a forthcoming surprise aims at changing uninformed ignorance into action-guiding awareness of ignorance. A student who misses the announcement does not realize that there will be a test. If no one passes on the news about the surprise test, the student with simple ignorance will be less prepared than classmates who know they do not know the day of the test.

Announcements are made to serve different goals simultaneously. Competition between accuracy and helpfulness makes it possible for an announcement to be self-fulfilling by being self-defeating. Consider a weatherman who warns ‘The midnight tsunami will cause fatalities along the shore’. Because of the warning, spectacle-seekers make a special trip to witness the wave. Some drown. The weatherman’s announcement succeeds as a prediction by backfiring as a warning.

1.2 Predictive determinism

Instead of viewing self-defeating predictions as showing how the teacher is refuted, some philosophers construe self-defeating predictions as showing how the student is refuted. The student’s elimination argument embodies hypothetical predictions about which day the teacher will give a test. Isn’t the student overlooking the teacher’s ability and desire to thwart those expectations? Some game theorists suggest that the teacher could defeat this strategy by choosing the test date at random.

Students can certainly be kept uncertain if the teacher is willing to be faithfully random. She will need to prepare a quiz each day. She will need to brace for the possibility that she will give too many quizzes or too few or have an unrepresentative distribution of quizzes.

If the instructor finds these costs onerous, then she may be tempted by an alternative: at the beginning of the week, randomly select a single day. Keep the identity of that day secret. Since the student will only know that the quiz is on some day or other, pupils will not be able to predict the day of the quiz.

This plan is risky. If, through the chance process, the last day happens to be selected, then abiding by the outcome means giving an unsurprising test. For as in the original scenario, the student has knowledge of the teacher’s announcement and awareness of past testless days. So the teacher must exclude random selection of the last day. The student is astute. He will replicate this reasoning that excludes a test on the last day. Can the teacher abide by the random selection of the next to last day? Now the reasoning becomes all too familiar.

Another critique of the student’s replication of the teacher’s reasoning adapts a thought experiment from Michael Scriven (1964). To refute predictive determinism (the thesis that all events are foreseeable), Scriven conjures an agent, “Predictor”, who has all the data, laws, and calculating capacity needed to predict the choices of others. Scriven goes on to imagine “Avoider”, whose dominant motivation is to avoid prediction. Therefore, Predictor must conceal his prediction. The catch is that Avoider has access to the same data, laws, and calculating capacity as Predictor. Thus Avoider can duplicate Predictor’s reasoning. Consequently, the optimal predictor cannot predict Avoider. Let the teacher be Avoider and the student be Predictor. Avoider must win. Therefore, it is possible to give a surprise test.

Scriven’s original argument assumes that Predictor and Avoider can simultaneously have all the needed data, laws, and calculating capacity. David Lewis and Jane Richardson object:

… the amount of calculation required to let the predictor finish his prediction depends on the amount of calculation done by the avoider, and the amount required to let the avoider finish duplicating the predictor’s calculation depends on the amount done by the predictor. Scriven takes for granted that the requirement-functions are compatible: i.e., that there is some pair of amounts of calculation available to the predictor and the avoider such that each has enough to finish, given the amount the other has. (Lewis and Richardson 1966, 70–71)

According to Lewis and Richardson, Scriven equivocates on ‘Both Predictor and Avoider have enough time to finish their calculations’. Reading the sentence one way yields a truth: against any given avoider, Predictor can finish and against any given predictor, Avoider can finish. However, the compatibility premise requires the false reading in which Predictor and Avoider can finish against each other.

Idealizing the teacher and student along the lines of Avoider and Predictor would fail to disarm the student’s elimination argument. We would have merely formulated a riddle that falsely presupposes that the two types of agent are co-possible. It would be like asking ‘If Bill is smarter than anyone else and Hillary is smarter than anyone else, which of the two is the smartest?’.

Predictive determinism states that everything is foreseeable. Metaphysical determinism states that there is only one way the future could be, given the way the past is. Pierre-Simon Laplace used metaphysical determinism as a premise for predictive determinism. He reasoned that since every event has a cause, a complete description of any stage of history combined with the laws of nature implies what happens at any other stage of the universe. Scriven was only challenging predictive determinism in his thought experiment. The next approach challenges metaphysical determinism.

1.3 The Problem of Foreknowledge

Prior knowledge of an action seems incompatible with it being a free action. If I know that you will finish reading this article tomorrow, then you will finish tomorrow (because knowledge implies truth). But that means you will finish the article even if you resolve not to. After all, given that you will finish, nothing can stop you from finishing. So if I know that you will finish reading this article tomorrow, you are not free to do otherwise.

Maybe all of your reading is compulsory. If God exists, then He knows everything. So the threat to freedom becomes total for the theist. The problem of divine foreknowledge raises the possibility that theism (rather than atheism) precludes free choice and thereby precludes our having any moral responsibility.

In response to the apparent conflict between freedom and foreknowledge, medieval philosophers denied that future contingent propositions have a truth-value. They took themselves to be extending a solution Aristotle discusses in De Interpretatione to the problem of logical fatalism. According to this truth-value gap approach, ‘You will finish this article tomorrow’ is not true now. The prediction will become true tomorrow. A morally serious theist can agree with the Rubaiyat of Omar Khayyam:

The Moving Finger writes; and, having writ,
Moves on: nor all your Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all your Tears wash out a Word of it.

God’s omniscience only requires that He knows every true proposition. God will know ‘You will finish this article tomorrow’ as soon as it becomes true – but not before.

The teacher has free will. Therefore, predictions about what she will do are not true (prior to the examination). The metaphysician Paul Weiss (1952) concludes that the student’s argument falsely assumes that he knows the announcement is true. The student can know that the announcement is true after it becomes true – but not before.

The logician W. V. O. Quine (1953) agrees with Weiss’ conclusion that the teacher’s announcement of a surprise test fails to give the student knowledge that there will be a surprise test. Yet Quine abominates Weiss’ reasoning. Weiss breaches the law of bivalence (which states that every proposition has a truth-value, true or false). Quine believes that the riddle of the surprise test should not be answered by surrendering classical logic.

2. Intellectual suicide

Quine insists that the student’s elimination argument is only a reductio ad absurdum of the supposition that the student knows that the announcement is true (rather than a reductio of the announcement itself). He accepts this epistemic reductio but rejects the metaphysical reductio. Given the student’s ignorance of the announcement, Quine concludes that a test on any day would be unforeseen. That is, Quine accepts that the student has no advance knowledge of the time of the test, but he rejects that there is no truth in advance as to when the test will be given.

Common sense suggests that the students are informed by the announcement. The teacher is assuming that the announcement will enlighten the students. She seems right to assume that the announcement of this intention produces the same sort of knowledge as her other declarations of intentions (about which topics will be selected for lecture, the grading scale, and so on).

There are skeptical premises that could yield Quine’s conclusion that the students do not know the announcement is true. If no one can know anything about the future, as alleged by David Hume’s problem of induction, then the student cannot know that the teacher’s announcement is true. (See the entry on the problem of induction.) But denying all knowledge of the future in order to deny the student’s knowledge of the teacher’s announcement is disproportionate and indiscriminate. Do not kill a fly with a cannon – unless it is a killer fly and only a cannon will work!

In later writings, Quine evinces general reservations about the concept of knowledge. One of his favorite objections is that ‘know’ is vague. If knowledge entails certainty, then too little will count as known. Quine infers that we must equate knowledge with firmly held true belief. Asking just how firm that belief must be is akin to asking just how big something has to be to count as being big. There is no answer to the question because ‘big’ lacks the sort of boundary enjoyed by precise words.

There is no place in science for bigness, because of this lack of boundary; but there is a place for the relation of biggerness. Here we see the familiar and widely applicable rectification of vagueness: disclaim the vague positive and cleave to the precise comparative. But it is inapplicable to the verb ‘know’, even grammatically. Verbs have no comparative and superlative inflections … . I think that for scientific or philosophical purposes the best we can do is give up the notion of knowledge as a bad job and make do rather with its separate ingredients. We can still speak of a belief as true, and of one belief as firmer or more certain, to the believer’s mind, than another (1987, 109).

Quine is alluding to Rudolf Carnap’s (1950) generalization that scientists replace qualitative terms (tall) with comparatives (taller than) and then replace the comparatives with quantitative terms (being n millimeters in height).

It is true that some borderline cases of a qualitative term are not borderline cases for the corresponding comparative. But the reverse holds as well. A tall man who stoops may stand less high than another tall man who is not as lengthy but better postured. Both men are clearly tall. Yet it is unclear whether ‘The lengthier man is taller’ is true. Qualitative terms can be applied when a vague quota is satisfied without the need to sort out the details. Only comparative terms are bedeviled by tie-breaking issues.

Science is about what is the case rather than what ought to be the case. This seems to imply that science does not tell us what we ought to believe. The traditional way to fill the normative gap is to delegate issues of justification to epistemologists. However, Quine is uncomfortable with delegating such authority to philosophers. He prefers the thesis that psychology is enough to handle the issues traditionally addressed by epistemologists (or at least the issues still worth addressing in an Age of Science). This “naturalistic epistemology” seems to imply that ‘know’ and ‘justified’ are antiquated terms – as empty as ‘phlogiston’ or ‘soul’.

Those willing to abandon the concept of knowledge can dissolve the surprise test paradox. But to epistemologists who find promise in less drastic responses this is like using a suicide bomb to kill a fly.

Our suicide bomber may protest that the flies have been undercounted. Epistemic eliminativism dissolves all epistemic paradoxes. According to the eliminativist, epistemic paradoxes are symptoms of a problem with the very concept of knowledge.

Notice that the eliminativist is more radical than the skeptic. The skeptic thinks the concept of knowledge is coherent and definite in its requirements. We just fall short of being knowers. The skeptic treats ‘No man is a knower’ like ‘No man is an immortal’. There is nothing wrong with the concept of immortality. Biology just winds up guaranteeing that every man falls short of being immortal. Universal absence of knowledge would be shocking. But the potential to shock us should not lead us to kill the messenger (the skeptic) or declare unintelligible the vocabulary comprising the message (specifically, the word ‘know’).

Unlike the messenger telling us ‘No man is an immortal’, the skeptic has trouble telling us, ‘There is no knowledge’. According to Sextus Empiricus, assertion expresses belief that one knows what is asserted (Outlines of Pyrrhonism, I., 3, 226). He condemns the assertion ‘There is no knowledge’ (though not the proposition expressed by the assertion) as dogmatic skepticism. Sextus prefers agnosticism about knowledge rather than skepticism (considered as “atheism” about knowledge). Yet it is just as inconsistent to assert ‘No one can know whether anything is known’. For that conveys the belief that one knows that no one can know whether anything is known.

The eliminativist has even more severe difficulties in stating his position than the skeptic. Some eliminativists dismiss the threat of self-defeat by drawing an analogy. Those who denied the existence of souls were accused of undermining a necessary condition for asserting anything. However, the soul theorist’s account of what is needed gives no reason to deny that a healthy brain suffices for mental states.

If the eliminativist thinks that assertion only imposes the aim of expressing a truth, then he can consistently assert that ‘know’ is a defective term. However, an epistemologist can revive the charge of self-defeat by showing that assertion does indeed require the speaker to attribute knowledge to himself. This knowledge-based account of assertion has recently been supported by work on our next paradox.

3. Lotteries and the Lottery Paradox

Lotteries pose a problem for the theory that a high probability for a true belief suffices for knowledge. Given that there are a million tickets and only one winner, the probability of ‘This ticket is a losing ticket’ is very high. Yet we are reluctant to say this makes the proposition known.

We overcome the inhibition after the winning ticket is announced. Now the ticket is known to be a loser and tossed in the trash. But wait! Testimony does not furnish certainty. Nor does perception or recollection. When pressed, we admit there is a small chance that we misperceived the drawing or that the newscaster misread the winning number or that we are misremembering. While in this concessive mood, we are apt to relinquish our claim to know. The skeptic syllogizes from this surrender: For any contingent proposition, there is a lottery statement that is more probable and which is unknown. A known proposition cannot be less probable than an unknown proposition. So no contingent proposition is known (Hawthorne 2004). That is too much to give up! Yet the skeptic’s statistics seem impeccable.

This skeptical paradox was noticed by Gilbert Harman (1968, 166). But his views about the role of causation in inferential knowledge seemed to solve the problem (DeRose 2017, chapter 5). The baby paradox was dismissed as stillborn. Since the new arrival did not get the customary baptism of attention, epistemologists did not notice that the demise of the causal theory of knowledge meant new life for Harman’s lottery paradox.

The probability skeptic’s ordinary suggestions about how we might be mistaken contrast with the extraordinary possibilities conjured by René Descartes’ skeptic. The Cartesian skeptic tries to undermine vast swaths of knowledge with a single untestable counter-explanation of the evidence (such as the hypothesis that you are dreaming or the hypothesis that an evil demon is deceiving you). These comprehensive alternatives are designed to evade any empirical refutation. The probabilistic skeptic, in contrast, points to a plethora of pedestrian counter-explanations. Each is easy to test: maybe you transposed the digits of a phone number, maybe the ticket agent thought you wanted to fly to Moscow, Russia rather than Moscow, Idaho, etc. You can check for errors, but any check itself has a small chance of being wrong. So there is always something to check, given that the issues cannot be ignored on grounds of improbability.

You can check any of these possible errors but you cannot check them all. You cannot discount these pedestrian possibilities as science fiction. These are exactly the sorts of possibilities we check when plans go awry. For instance, you think you know that you have an appointment to meet a prospective employer for lunch at noon. When she fails to show at the expected time, you backpedal through your premises: Is your watch slow? Are you remembering the right restaurant? Could there be another restaurant in the city with the same name? Is she just detained? Could she have just forgotten? Could there have been a miscommunication?

Probabilistic skepticism dates back to Arcesilaus, who took over the Academy two generations after Plato’s death. This moderate kind of skepticism, recounted by Cicero (Academica 2.74, 1.46) from his days as a student at the Academy, allows for justified belief. Many scientists feel they should only assign probabilities. They dismiss the epistemologist’s preoccupation with knowledge as old-fashioned.

Despite the early start of the qualitative theory of probability, the quantitative theory did not develop until Blaise Pascal’s study of gambling in the seventeenth century (Hacking 1975). Only in the eighteenth century did it penetrate the insurance industry (even though insurers realized that a fortune could be made by accurately calculating risk). Only in the nineteenth century did probability make a mark in physics. And only in the twentieth century did probabilists make important advances over Arcesilaus.

Most of these philosophical advances are reactions to the use of probability by scientists. In the twentieth century, editors of science journals began to demand that the author’s hypothesis should be accepted only when it was sufficiently probable – as measured by statistical tests. The threshold for acceptance was acknowledged to be somewhat arbitrary. And it was also conceded that the acceptance rule might vary with one’s purposes. For instance, we demand a higher probability when the cost of accepting a false hypothesis is high.

In 1961 Henry Kyburg pointed out that this policy conflicted with a principle of agglomeration: If you rationally believe \(p\) and rationally believe \(q\) then you rationally believe both \(p\) and \(q\). Little pictures of the same scene should sum to a bigger picture of the same scene. If rational belief can be based on an acceptance rule that only requires a high probability, there will be rational belief in a contradiction! To see why, suppose the acceptance rule permits belief in any proposition that has a probability of at least .99. Given a lottery with 100 tickets and exactly one winner, the probability of ‘Ticket \(n\) is a loser’ licenses belief. Symbolize propositions about ticket \(n\) being a loser as \(p_n\). Symbolize ‘I rationally believe’ as \(B\). Belief in a contradiction follows:

  1. \(B{\sim}(p_1 \amp p_2 \amp \ldots \amp p_{100})\),
    by the probabilistic acceptance rule.
  2. \(Bp_1 \amp Bp_2 \amp \ldots \amp Bp_{100}\),
    by the probabilistic acceptance rule.
  3. \(B(p_1 \amp p_2 \amp \ldots \amp p_{100})\),
    from (2) and the principle that rational belief agglomerates.
  4. \(B[(p_1 \amp p_2 \amp \ldots \amp p_{100}) \amp{\sim}(p_1 \amp p_2 \amp \ldots \amp p_{100})]\),
    from (1) and (3) by the principle that rational belief agglomerates.

More informally, the acceptance rule implies this: each belief that a particular ticket will lose is probable enough to justify believing it. By repeated applications of the agglomeration principle, conjoining all of these justified beliefs together gives a justified belief. Finally, conjoining that belief with the justified belief that one of the tickets is a winner gives a contradictory belief to the effect that each ticket will lose and yet one will win. By agglomeration, that too is justified.
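The threshold arithmetic can be laid bare in a few lines of Python (a toy rendering of the acceptance rule, using the .99 threshold and 100-ticket lottery from above):

  THRESHOLD = 0.99
  N = 100  # tickets, exactly one winner

  p_loser = (N - 1) / N  # P('ticket n is a loser') = 0.99
  p_some_winner = 1.0    # P('not every ticket loses')
  p_all_lose = 0.0       # the agglomerated conjunction is impossible

  print(p_loser >= THRESHOLD)        # True: accept each p_n
  print(p_some_winner >= THRESHOLD)  # True: accept the negation of the
                                     # conjunction of all the p_n
  print(p_all_lose >= THRESHOLD)     # False: yet agglomeration delivers
                                     # belief in that conjunction anyway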

Since belief in an obvious contradiction is a paradigm example of irrationality, Kyburg poses a dilemma: either reject agglomeration or reject rules that license belief for a probability of less than one. (Martin Smith (2016, 186–196) warns that even a probability of one leads to joint inconsistency for a lottery that has infinitely many tickets.) Kyburg rejects agglomeration. He promotes toleration of joint inconsistency (having beliefs that cannot all be true together) to avoid belief in contradictions. Reason forbids us from believing a proposition that is necessarily false but permits us to have a set of beliefs that necessarily contains a falsehood. Kyburg’s choice was soon supported by the discovery of a companion paradox.

4. Preface Paradox

In the preface of Introduction to the Foundations of Mathematics, Raymond Wilder (1952, iv) apologizes for the errors in the text. The 1982 reprint has three pages of errata that vindicate Wilder’s humility. D. C. Makinson (1965, 205) quotes Wilder’s 1952 apology and extracts a paradox: Wilder rationally believes each of the assertions in his book. But since Wilder regards himself as fallible, he rationally believes the conjunction of all his assertions is false. If the agglomeration principle holds, \((Bp \amp Bq) \rightarrow B(p \amp q)\), Wilder would rationally believe the conjunction of all assertions in his book and also rationally disbelieve the same thing!

The preface paradox does not rely on a probabilistic acceptance rule. The preface belief is organically generated in a qualitative fashion. The author is merely reflecting on his humbling resemblance to other authors who are fallible, his own past failings that he subsequently discovered, his imperfection in fact checking, and so on.

At this juncture many philosophers join Kyburg in rejecting agglomeration and conclude that it can be rational to have jointly inconsistent beliefs. Kyburg’s solution to the preface paradox raises a methodological question about the nature of paradox. How can paradoxes change our minds if joint inconsistency is permitted? A paradox is commonly defined as a set of propositions that are individually plausible but jointly inconsistent. The inconsistency is the itch that directs us to scratch out a member of the set (or the pain that leads us to withdraw from the stimulus). For instance, much epistemology orbits an ancient riddle posed by the regress of justification, namely, which of the following is false?

  1. A belief can only be justified by another justified belief.
  2. There are no circular chains of justification.
  3. All justificatory chains have a finite length.
  4. Some beliefs are justified.

Foundationalists reject (1). They take some propositions to be self-evident or they permit beliefs to be justified by non-beliefs (such as perceptions or intuitions). Coherentists reject (2). They tolerate some forms of circular reasoning. For instance, Nelson Goodman (1965) has characterized the method of reflective equilibrium as virtuously circular. Charles Sanders Peirce (1933–35, 5.250) may have rejected (3). The first clear rejector is Peter Klein (2007). For a book-length defense, read Scott F. Aikin (2011). Infinitists believe that infinitely long chains of justification are no more impossible than infinitely long chains of causation. Finally, the epistemological anarchist rejects (4). As Paul Feyerabend refrains in Against Method, “Anything goes” (1988, vii, 5, 14, 19, 159).

Formulating a paradox as a set of individually plausible but jointly inconsistent beliefs is a feat of data compression. But if joint inconsistency is rationally tolerable, why do these philosophers bother to offer solutions to paradoxes such as the regress of justification? Why is it irrational to believe each of (1)–(4), despite their joint inconsistency?

Kyburg might answer that there is a scale effect. Although the sensation of joint inconsistency is tolerable when diffusely distributed over a large body of propositions, the sensation becomes an itch when the inconsistency localizes (Knight 2002). That is why paradoxes are always represented as a small set of propositions. A paradox is improved by reducing its membership — as when a member of the set is exposed as superfluous to the inconsistency. (Strictly speaking, a set can only change size in the metaphorical way that a number grows or shrinks.)

If you know that your beliefs are jointly inconsistent but deny this makes for a giant paradox, then you should reject R. M. Sainsbury’s definition of a paradox as “an apparently unacceptable conclusion derived by apparently acceptable reasoning from apparently acceptable premises” (1995, 1). Take the negation of any of your beliefs as a conclusion and your remaining beliefs as the premises. You should judge this jumble argument as valid, and as having premises that you accept, and yet as having a conclusion you reject (Sorensen 2003b, 104–110). If the conclusion of this argument counts as a paradox, then the negation of any of your beliefs counts as a paradox.

The resemblance between the preface paradox and the surprise test paradox becomes more visible through an intermediate case. The preface of Siddhartha Mukherjee’s The Emperor of All Maladies: A Biography of Cancer warns: “In cases where there was no prior public knowledge, or when interviewees requested privacy, I have used a false name, and deliberately confounded identities to make it difficult to track.” (2010, xiv) Those who refuse consent to be lied to are free to close Doctor Mukherjee’s chronicle. But nearly all readers think the physician’s trade-off between lies and new information is acceptable. They rationally anticipate being rationally misled. Nevertheless, these readers learn much about the history of cancer. Similarly, students who are warned that they will receive a surprise test rationally expect to be rationally misled about the day of the test. The prospect of being misled does not lead them to drop the course.

The preface paradox pressures Kyburg to extend his tolerance of joint inconsistency to the acceptance of contradictions. For Makinson’s original specimen is a logician’s regret at affirming contradictions rather than false contingent statements. Consider a logic student who is required to pick one hundred truths from a mixed list of tautologies and contradictions (Sorensen 2001, 156–158). Although the modest student believes each of his answers, \(A_1, A_2, \ldots, A_{100}\), he also believes that at least one of these answers is false. This ensures he believes a contradiction. If any of his answers is false, then the student believes a contradiction (because the only falsehoods on the question list are contradictions). If all of his test answers are true, then the student believes the following contradiction: \({\sim}(A_1 \amp A_2 \amp \ldots \amp A_{100})\). After all, a conjunction of tautologies is itself a tautology and the negation of any tautology is a contradiction.

If paradoxes were always sets of propositions or arguments or conclusions, then they would always be meaningful. But some paradoxes are semantically flawed (Sorensen 2003b, 352) and some have answers that are backed by a pseudo-argument employing a defective “lemma” that lacks a truth-value. Kurt Grelling’s paradox, for instance, opens with a distinction between autological and heterological words. An autological word describes itself, e.g., ‘polysyllabic’ is polysyllabic, ‘English’ is English, ‘noun’ is a noun, etc. A heterological word does not describe itself, e.g., ‘monosyllabic’ is not monosyllabic, ‘Chinese’ is not Chinese, ‘verb’ is not a verb, etc. Now for the riddle: Is ‘heterological’ heterological or autological? If ‘heterological’ is heterological, then since it describes itself, it is autological. But if ‘heterological’ is autological, then since it is a word that does not describe itself, it is heterological. The common solution to this puzzle is that ‘heterological’, as defined by Grelling, is not a well-defined predicate (Thomson 1962). In other words, “Is ‘heterological’ heterological?” is without meaning. There can be no predicate that applies to all and only those predicates it does not apply to for the same reason that there can be no barber who shaves all and only those people who do not shave themselves.
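The common solution can be dramatized computationally. In a minimal Python sketch (modeling a word’s meaning as a predicate that applies to predicates is my illustrative liberty, not Thomson’s):

  def heterological(pred):
      # A predicate is heterological when it does not apply to itself.
      return not pred(pred)

  # Asking whether 'heterological' is heterological never settles:
  try:
      heterological(heterological)
  except RecursionError:
      print("no stable answer: the 'predicate' is not well-defined")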

The eliminativist, who thinks that ‘know’ or ‘justified’ is meaningless, will diagnose the epistemic paradoxes as questions that only appear to be well-formed. For instance, the eliminativist about justification would not accept proposition (4) in the regress paradox: ‘Some beliefs are justified’. His point is not that no beliefs meet the high standards for justification, as an anarchist might deny that any ostensible authorities meet the high standards for legitimacy. Instead, the eliminativist unromantically diagnoses ‘justified’ as a pathological term. Just as the astronomer ignores ‘Are there a zillion stars?’ on the grounds that ‘zillion’ is not a genuine numeral, the eliminativist ignores ‘Are some beliefs justified?’ on the grounds that ‘justified’ is not a genuine adjective.

In the twentieth century, suspicions about conceptual pathology were strongest for the liar paradox: Is ‘This sentence is false’ true? Philosophers who thought that there was something deeply defective with the surprise test paradox assimilated it to the liar paradox. Let us review the assimilation process.

5. Anti-expertise

In the surprise test paradox, the student’s premises are self-defeating. Any reason the student has for predicting a test date or a non-test date is available to the teacher. Thus the teacher can simulate the student’s forecast and know what the student expects.

The student’s overall conclusion, that the test is impossible, is also self-defeating. If the student believes his conclusion then he will not expect the test. So if he receives a test, it will be a surprise. The event will be all the more unexpected because the student has deluded himself into thinking the test is impossible.

Just as someone’s awareness of a prediction can affect the likelihood of its being true, awareness of that sensitivity to his awareness can in turn affect its truth. If each cycle of awareness is self-defeating, then there is no stable resting place for a conclusion.

Suppose a psychologist offers you a red box and a blue box (Skyrms 1982). The psychologist can predict which box you will choose with 90% accuracy. He has put one dollar in the box he predicts you will choose and ten dollars in the other box. Should you choose the red box or the blue box? You cannot decide. For any choice becomes a reason to reverse your decision.

Epistemic paradoxes affect decision theory because rational choices are based on beliefs and desires. If the agent cannot form a rational belief, it is difficult to interpret his behavior as a choice. In decision theory, the whole point of attributing beliefs and desires is to set up practical syllogisms that make sense of actions as means to ends. Subtracting rationality from the agent makes the framework useless. Given this commitment to charitable interpretation, there is no possibility of you rationally choosing an option that you believe to be inferior. So if you choose, you cannot really believe you were operating as an anti-expert, that is, someone whose opinions on a topic are reliably wrong (Egan and Elga 2005).

The medieval philosopher John Buridan (Sophismata, Sophism 13) gave a starkly minimal example of such instability:

(B)
You do not believe this sentence.

If you believe (B) it is false. If you do not believe (B) it is true. You are an anti-expert about (B); your opinion is reliably wrong. An outsider who monitors your opinion can reckon whether (B) is true. But you are not able to exploit your anti-expertise.

On the bright side, you are able to exploit the anti-expertise of others. Four out of five anti-experts recommend against reading any further!

5.1 The Knower Paradox

David Kaplan and Richard Montague (1960) think the announcement by the teacher in our surprise exam example is equivalent to the self-referential

(K-3)
Either the test is on Monday but you do not know it before Monday, or the test is on Wednesday but you do not know it before Wednesday, or the test is on Friday but you do not know it before Friday, or this announcement is known to be false.

Kaplan and Montague note that the number of alternative test dates can be increased indefinitely. Shockingly, they claim the number of alternatives can be reduced to zero! The announcement is then equivalent to

(K-0)
This sentence is known to be false.

If (K-0) is true, then it is known to be false. Whatever is known to be false is false. Since no proposition can be both true and false, we have proven that (K-0) is false. Given that proof produces knowledge, (K-0) is known to be false. But wait! That is exactly what (K-0) says – so (K-0) must be true.

The (K-0) argument bears a suspicious resemblance to the liar paradox. Subsequent commentators sloppily switch the negation sign in the formal presentations of the reasoning from \(K{\sim}p\) to \({\sim}Kp\) (that is, from ‘It is known that not-\(p\)’, to ‘It is not the case that it is known that \(p\)’). Ironically, this garbled transmission results in a cleaner variation of the knower:

(K)
No one knows this very sentence.

Is (K) true? On the one hand, if (K) is true, then what it says is true, so no one knows it. On the other hand, that very reasoning seems to be a proof of (K). Believing a proposition by seeing it to be proved is sufficient for knowledge of it, so someone must know (K). But then (K) is false! Since no one can know a proposition that is false, (K) is not known.

The skeptic could hope to solve (K-0) by denying that anything is known. This remedy does not cure (K). If nothing is known, then (K) is true. Can the skeptic instead challenge the premise that proving a proposition is sufficient for knowing it? This solution would be particularly embarrassing to the skeptic. The skeptic presents himself as a stickler for proof. If it turns out that even proof will not sway him, he bears a damning resemblance to the dogmatist he so frequently chides.

But the skeptic should not lose his nerve. Proof does not always yield knowledge. Consider a student who correctly guesses that a step in his proof is valid. The student does not know the conclusion but did prove the theorem. His instructor might have trouble getting the student to understand why his answer constitutes a valid proof. The intransigence may stem from the prover’s intelligence rather than his stupidity. L. E. J. Brouwer is best known in mathematics for his brilliant fixed point theorem. But a doubtful reading of Immanuel Kant’s philosophy of mathematics led Brouwer to retract his proof. Brouwer also had philosophical doubts about the Axiom of Choice and the Law of Excluded Middle. Brouwer persuaded a minority of mathematicians and philosophers, known as intuitionists, to emulate his abstention from non-constructive proofs. This led them to develop constructive proofs of theorems that were earlier proved by less informative means. Everybody agrees that there is more to learn from a proof of an existential generalization that proceeds from a proved instance than from an unspecific reductio ad absurdum of the corresponding universal generalization. But this does not vindicate the intuitionists’ refusal to be persuaded by the reductio ad absurdum. The intuitionist, even in the eyes of the skeptic, has too high a standard of proof. An excessively high standard of proof can prevent knowledge by proof.

The logical myth that “You cannot prove a universal negative” is itself a universal negative. So it implies its own unprovability. This implication of unprovability is correct but only because the principle is false. For instance, exhaustive inspection proves the universal negative ‘No adverbs appear in this sentence’. A reductio ad absurdum proves the universal negative ‘There is no largest prime number’.

Trivially, false propositions cannot be proved true. Are there any true propositions that cannot be proved true?

Yes, there are infinitely many. Kurt Gödel’s incompleteness theorem demonstrated that any system that is strong enough to express arithmetic is also strong enough to express a formal counterpart of the self-referential sentence ‘This statement cannot be proved in this system’. If the system cannot prove its “Gödel sentence”, then this sentence is true. If the system can prove its Gödel sentence, the system is inconsistent. So either the system is incomplete or inconsistent. (See the entry on Kurt Gödel.)

Of course, this result concerns provability relative to a system. One system can prove another system’s Gödel sentence. Kurt Gödel (1983, 271) thought that proof was not needed for knowledge that arithmetic is consistent.

J. R. Lucas (1964) claims that this reveals human beings are not machines. A computer is a concrete instantiation of a formal system. Hence, its “knowledge” is restricted to what it can prove. By Gödel’s theorem, the computer will be either inconsistent or incomplete. However, any human being could have consistent and complete knowledge of arithmetic. Therefore, necessarily, no human being is a computer.

Critics of Lucas defend the parity between people and computers. They think we have our own Gödel sentences (Lewis 1999, 166–173). In this egalitarian spirit, G. C. Nerlich (1961) models the student’s beliefs in the surprise test example as a logical system. The teacher’s announcement is then a Gödel sentence about the student: There will be a test next week but you will not be able to prove which day it will occur on the basis of this announcement and memory of what has happened on previous exam days. When the number of exam days equals zero, the announcement is equivalent to sentence (K).

Several commentators on the surprise test paradox object that interpreting surprise as unprovability changes the topic. Instead of posing the surprise test paradox, it poses a variation of the liar paradox. Other concepts can be blended with the liar. For instance, mixing in alethic notions generates the possible liar: Is ‘This statement is possibly false’ true? (Post 1970) (If it is false, then it is false that it is possibly false. What cannot possibly be false is necessarily true. But if it is necessarily true, then it cannot be possibly false.) Since the semantic concept of validity involves the notion of possibility, one can also derive validity liars such as Pseudo-Scotus’ paradox: ‘Squares are squares, therefore, this argument is invalid’ (Read 1979). Suppose Pseudo-Scotus’ argument is valid. Since the premise is necessarily true, the conclusion would be necessarily true. But the conclusion contradicts the supposition that the argument is valid. Therefore, by reductio, the argument is necessarily invalid. Wait! The argument can be invalid only if it is possible for the premise to be true and the conclusion to be false. But we have already proved that the conclusion of ‘Squares are squares, therefore, this argument is invalid’ is necessarily true. There is no consistent judgment of the argument’s validity. A similar predicament follows from ‘The test is on Friday but this prediction cannot be soundly deduced from this announcement’.

One can mock up a complicated liar paradox that resembles the surprise test paradox. But this complex variant of the liar is not an epistemic paradox. For the paradoxes turn on the semantic concept of truth rather than an epistemic concept.

5.2 The “Knowability Paradox”

Frederic Fitch (1963) reports that in 1945 he first learned of this proof of unknowable truths from a referee report on a manuscript he never published. Thanks to Joe Salerno’s (2009) archival research, we now know that the referee was Alonzo Church.

Assume there is a true sentence of the form ‘\(p\) but \(p\) is not known’. Although this sentence is consistent, modest principles of epistemic logic imply that sentences of this form are unknowable.

1. \(K(p \amp{\sim}Kp)\) (Assumption)
2. \(Kp \amp K{\sim}Kp\) 1, Knowledge distributes over conjunction
3. \({\sim}Kp\) 2, Knowledge implies truth (from the second conjunct)
4. \(Kp \amp{\sim}Kp\) 2, 3 by conjunction elimination of the first conjunct and then conjunction introduction
5. \({\sim}K(p \amp{\sim}Kp)\) 1, 4 Reductio ad absurdum

Since all the assumptions are discharged, the conclusion is a necessary truth. So it is a necessary truth that \(p \amp{\sim}Kp\) is not known. In other words, \(p \amp{\sim}Kp\) is unknowable.
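The derivation is short enough to be machine-checked. Here is a sketch in Lean (treating \(K\) as an arbitrary operator on propositions, with factivity and distribution over conjunction supplied as hypotheses, matching steps 2 and 3 above):

  theorem fitch (K : Prop → Prop)
      (factive : ∀ p, K p → p)                  -- knowledge implies truth
      (distrib : ∀ p q, K (p ∧ q) → K p ∧ K q)  -- K distributes over ∧
      (p : Prop) : ¬ K (p ∧ ¬ K p) := by
    intro h
    have hk := distrib _ _ h     -- K p ∧ K (¬ K p)
    exact (factive _ hk.2) hk.1  -- factivity yields ¬ K p, contradicting K p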

The cautious draw a conditional moral: If there are actual unknown truths, there are unknowable truths. After all, some philosophers will reject the antecedent because they believe there is an omniscient being.

But secular idealists and logical positivists concede that there are some actual unknown truths. How can they continue to believe that all truths are knowable? Astonishingly, these eminent philosophers seem refuted by the pinch of epistemic logic we have just seen. Also injured are those who limit their claims of universal knowability to a limited domain. For instance, Immanuel Kant (A223/B272) asserts that all empirical propositions are knowable. This pocket of optimism would be enough to ignite the contradiction (Stephenson 2015).

Timothy Williamson doubts that this casualty list is enough for the result to qualify as a paradox:

The conclusion that there are unknowable truths is an affront to various philosophical theories, but not to common sense. If proponents (and opponents) of those theories long overlooked a simple counterexample, that is an embarrassment, not a paradox. (2000, 271)

Those who believe that the Church-Fitch result is a genuine paradox can respond to Williamson with paradoxes that accord with common sense (and with science and religious orthodoxy). For instance, common sense heartily agrees with the conclusion that something exists. But it is surprising that this can be proved without empirical premises. Since the quantifiers of standard logic (first order predicate logic with identity) have existential import, the logician can deduce that something exists from the principle that everything is identical to itself. Most philosophers balk at this simple proof because they feel that the existence of something cannot be proved by sheer logic. They are not balking at the statement that is in accord with common sense (that something exists). They are only balking at the statement that it can be proved by sheer logic. Likewise, many philosophers who agree that there are unknowables balk solely on the grounds that such a profound result cannot be obtained from such limited means.

5.3 Moore’s problem

Church’s referee report was composed in 1945. The timing and structure of his argument for unknowables suggests that Church may have been inspired by G. E. Moore’s (1942, 543) sentence:

(M)
I went to the pictures last Tuesday, but I don’t believe that I did.

Moore’s problem is to explain what is odd about declarative utterances such as (M). This explanation needs to encompass both readings of (M): ‘\(p \amp B{\sim}p\)’ and ‘\(p \amp{\sim}Bp\)’. (This scope ambiguity is exploited by a popular joke: René Descartes sits in a bar, having a drink. The bartender asks him if he would care for another. “I think not,” he says, and disappears. The joke is commonly criticized as fallacious. But it is not fallacious, given Descartes’ belief that he is essentially a thinking being.)

The common explanation of Moore’s absurdity is that the speaker has managed to contradict himself without uttering a contradiction. So the sentence is odd because it is a counterexample to the generalization that anyone who contradicts himself utters a contradiction.

There is no problem with third person counterparts of (M). Anyone else can say about Moore, with no paradox, ‘G. E. Moore went to the pictures last Tuesday but he does not believe it’. (M) can also be embedded unparadoxically in conditionals: ‘If I went to the pictures last Tuesday but I do not believe it, then I am suffering from a worrisome lapse of memory’. The past tense is fine: ‘I went to the picture shows last Tuesday but I did not believe it’. The future tense, ‘I went to the picture shows last Tuesday but I will not believe it’, is a bit more of a stretch (Bovens 1995). We tend to picture our future selves as better informed. Later selves are, as it were, experts to whom earlier selves should defer. When an earlier self foresees that his later self believes \(p\), then the prediction is a reason to believe \(p\). Bas van Fraassen (1984, 244) dubs this “the principle of reflection”: I ought to believe a proposition given that I will believe it at some future time.

Robert Binkley (1968) anticipates van Fraassen by applying the reflection principle to the surprise test paradox. The student can foresee that he will not believe the announcement if no test is given by Thursday. The conjunction of the history of testless days and the announcement will imply the Moorean sentence:

(A′)
The test is on Friday but you do not believe it.

Since the less evident member of the conjunction is the announcement, the student will choose not to believe the announcement. At the beginning of the week, the student foresees that his future self may not believe the announcement. So the student on Sunday will not believe the announcement when it is first uttered.

Binkley illuminates this reasoning with doxastic logic (‘doxa’ is Greek for belief). The inference rules for this logic of belief can be understood as idealizing the student into an ideal reasoner. In general terms, an ideal reasoner is someone who infers what he ought and refrains from inferring any more than he ought. Since there is no constraint on his premises, we may disagree with the ideal reasoner. But if we agree with the ideal reasoner’s premises, we appear bound to agree with his conclusion. Binkley specifies some requirements to give teeth to the student’s status as an ideal reasoner: the student is perfectly consistent, believes all the logical consequences of his beliefs, and does not forget. Binkley further assumes that the ideal reasoner is aware that he is an ideal reasoner. According to Binkley, this ensures that if the ideal reasoner believes \(p\), then he believes that he will believe \(p\) thereafter.

Binkley’s account of the student’s hypothetical epistemic state on Thursday is compelling. But his argument for spreading the incredulity from the future to the past is open to three challenges.

The first objection is that it delivers the wrong result. The student \(is\) informed by the teacher’s announcement, so Binkley ought not to use a model in which the announcement is as absurd as the conjunction ‘I went to the pictures last Tuesday but I do not believe it’.

Second, the future mental state envisaged by Binkley is only hypothetical: \(If\) no test is given by Thursday, the student will find the announcement incredible. At the beginning of the week, the student does not know (or believe) that the teacher will wait that long. The principle of reflection ‘Defer to the opinions of my future self’ does not imply that I should defer to the opinions of my hypothetical future self. For my hypothetical future self is responding to propositions that need not be actually true.

Third, the principle of reflection may need more qualifications than Binkley anticipates. Binkley realizes that an ordinary agent foresees that he will forget details. That is why we write reminders for our own benefit. An ordinary agent foresees periods of impaired judgment. That is why we limit how much money we bring to the bar.

Binkley stipulates that the students do not forget. He needs to add that the students know that they will not forget. For the mere threat of a memory lapse sometimes suffices to undermine knowledge. Consider Professor Anesthesiology’s scheme for surprise tests: “A surprise test will be given either Wednesday or Friday with the help of an amnesia drug. If the test occurs on Wednesday, then the drug will be administered five minutes after Wednesday’s class. The drug will instantly erase memory of the test and the students will fill in the gap by confabulation.” You have just completed Wednesday’s class (with no test) and so temporarily know that the test will be on Friday. Ten minutes after the class, you lose this knowledge. No drug was administered and there is nothing wrong with your memory. You are correctly remembering that no test was given on Wednesday. However, you do not know your memory is accurate, because you also know that if the test had been given on Wednesday, you would have a pseudo-memory indistinguishable from your present memory. Despite not gaining any new evidence, you become unsure whether the test occurred on Wednesday, and so lose your knowledge that the test is on Friday. (The change of belief is not crucial; you would still lack foreknowledge of the test even if you dogmatically persisted in believing that the test will be on Friday.)

If the students know that they will not forget and know there will be no undermining by outside evidence, then we may be inclined to agree with Binkley’s summary that his idealized student never loses the knowledge he accumulates. As we shall see, however, this overlooks other ways in which rational agents may lose knowledge.

5.4 Blindspots

‘I am a poet but I do not know it’ expresses a proposition I cannot know. But I can reach the proposition by other attitudes, such as hoping and wishing. A blindspot for a propositional attitude is a consistent proposition that cannot be accessed by that attitude. Blindspots are relative to the means of reaching the proposition, the person making the attempt, and the time at which he tries. Although I myself cannot rationally believe ‘Polar bears have black skin but I believe they do not’, you can believe that I mistakenly believe polar bears do not have black skin. The evidence that persuades you I am currently making that mistake cannot persuade me that I am currently making that mistake. This is an asymmetry imposed by rationality rather than irrationality. Attributions of specific errors are personal blindspots for the person who is alleged to have erred.
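
The inaccessibility can be derived from idealizations already in play (a sketch assuming consistency, closure, and the introspection schema \(Bp \rightarrow BBp\)). Suppose I believe a Moorean conjunction: \(B(p \wedge \neg Bp)\). By closure, \(Bp\) and \(B\neg Bp\). By introspection, \(Bp\) yields \(BBp\). But \(BBp\) and \(B\neg Bp\) jointly violate consistency. So the conjunction, though possibly true, cannot be rationally believed by me.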

The anthropologist Gontran de Poncins begins his chapter on the arctic missionary, Father Henry, with a prediction:

I am going to say to you that a human being can live without complaint in an ice-house built for seals at a temperature of fifty-five degrees below zero, and you are going to doubt my word. Yet what I say is true, for this was how Father Henry lived; …. (Poncins 1941 [1988], 240)

Gontran de Poncins’ subsequent testimony might lead the reader to believe someone can indeed be content to live in an ice-house. The same testimony might lead another reader to believe that Poncins is not telling the truth. But no reader ought to believe ‘Someone can be content to live in an ice-house and everybody believes that is not so’. That is a universal blindspot.

If Poncins believes a proposition that is a blindspot to his reader, then he cannot furnish good grounds for his reader to share his belief. This holds even if they are ideal reasoners. So one implication of personal blindspots is that there can be disagreement among ideal reasoners because they differ in their blindspots.

This is relevant to the surprise test paradox. The students are the surprisees. Since the announcement entails that the date of the surprise test is a blindspot for them, non-surprisees cannot persuade them of it.

The same point holds for intra-personal disagreement over time. Evidence that persuaded me on Sunday that ‘This security code is 390524 but on Friday I will not believe it’ should no longer persuade me on Friday (given my belief that the day is Friday). For that proposition is a blindspot to my Friday self.

Although each blindspot is inaccessible, a disjunction of blindspots is normally not a blindspot. I can rationally believe that ‘Either the number of stars is even and I do not believe it, or the number of stars is odd and I do not believe it’. The author’s preface statement that there is some mistake in his book is equivalent to a very long disjunction of blindspots. The author is saying he either falsely believes his first statement or falsely believes his second statement or … or falsely believes his last statement.
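
Schematically, with \(s_1, \ldots, s_n\) as the book’s statements and \(B\) as the author’s belief, the preface assertion amounts to \((Bs_1 \wedge \neg s_1) \vee \ldots \vee (Bs_n \wedge \neg s_n)\). Each disjunct attributes a specific error to the author and so is a blindspot for him; the disjunction as a whole is not.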

The teacher’s announcement that there will be a surprise test is equivalent to a disjunction of future mistakes: ‘Either there will be a test on Monday and the student will not believe it beforehand, or there will be a test on Wednesday and the student will not believe it beforehand, or there will be a test on Friday and the student will not believe it beforehand.’

The points made so far suggest a solution to the surprise test paradox (Sorensen 1988a, 328–343). As Binkley (1968) asserts, the test would be a surprise even if the teacher waited until the last day. Yet it can still be true that the teacher’s announcement is informative. At the beginning of the week, the students are justified in believing the teacher’s announcement that there will be a surprise test. This announcement is equivalent to:

(A)
Either
i.
the test is on Monday and the student does not know it before Monday, or
ii.
the test is on Wednesday and the student does not know it before Wednesday, or
iii.
the test is on Friday and the student does not know it before Friday.

Consider the student’s predicament on Thursday (given that the test has not been on Monday or Wednesday). If he knows that no test has been given, he cannot also know that (A) is true. For that would imply disjunct (iii):

(iii)
The test is on Friday and the student does not know it before Friday.

Although (iii) is consistent and might be knowable by others, (iii) cannot be known by the student before Friday. (iii) is a blindspot for the students but not for, say, the teacher’s colleagues. Hence, the teacher can give a surprise test on Friday because that would force the students to lose their knowledge of the original announcement (A). Knowledge can be lost without forgetting anything.
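
The structure of the student’s elimination argument, and the step this solution rejects, can be put in a small computational sketch (hypothetical code, not a formal model from the literature). The student “foresees” a test on a day when every other day is ruled out, either by memory of testless days or by a previous elimination step:

    # Toy rendering of the backward elimination argument.
    DAYS = ["Mon", "Wed", "Fri"]

    def elimination_argument():
        eliminated = set()
        for i in range(len(DAYS) - 1, -1, -1):   # work backwards from Friday
            day = DAYS[i]
            ruled_out = set(DAYS[:i]) | eliminated
            # Contested step: deducing the test day from the remaining
            # possibilities presupposes that the student still knows (A)
            # on the eve of `day`. The blindspot solution denies this.
            if set(DAYS) - ruled_out == {day}:
                eliminated.add(day)              # a test that day would be foreseen
        return eliminated

    print(elimination_argument())   # all three days get "eliminated"

On the blindspot diagnosis the very first pass fails: by Thursday the student has lost knowledge of (A), so nothing enters `eliminated` and the cascade never starts.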

This solution makes who you are relevant to what you can know. In addition to compromising the impersonality of knowledge, it compromises knowledge’s temporal neutrality.

Since the surprise test paradox can also be formulated in terms of rational belief, there will be parallel adjustments for what we ought to believe. We are criticized for failing to believe the logical consequences of what we believe, and criticized for believing propositions that conflict with each other. Anyone who meets these ideals of completeness and consistency will be unable to believe a range of consistent propositions that are accessible to other complete and consistent thinkers. In particular, they will not be able to believe propositions attributing specific errors to them, or propositions that entail these off-limits propositions.

Some people wear T-shirts with ‘Question Authority!’ written on them. Questioning authority is generally regarded as a matter of individual discretion. The surprise test paradox shows that it is sometimes mandatory. The student is rationally required to doubt the teacher’s announcement even though the teacher has not given any new evidence of being unreliable. For when only one day remains, the announcement entails (iii), a statement that is impossible for the student to know. The student can foresee that this forced loss of knowledge opens an opportunity for the teacher to give the surprise test. This foreknowledge is available at the time of the announcement.

This solution implies there can be disagreement amongst ideal reasoners who agree on the same impersonal data. Consider the colleagues of the teacher. They are not amongst those whom the teacher targets for surprise. Since ‘surprise’ here means ‘surprise to the students’, the colleagues can consistently infer that the test will be on the last day from the premise that it has not been given on any previous day. But these colleagues are useless to the students as informants.

6. Dynamic Epistemic Paradoxes

The above anomalies (losing knowledge without forgetting, disagreement amongst equally well-informed ideal reasoners, rationally changing your mind without the acquisition of counter-evidence) would be more tolerable if reinforced by separate lines of reasoning. The most fertile source of this collateral support is in puzzles about updating beliefs.

The natural strategy is to focus on the knower when he is stationary. However, just as it is easier for an Eskimo to observe an arctic fox when it moves, we often get a better understanding of the knower dynamically, when he is in the process of gaining or losing knowledge.

6.1 Meno’s Paradox of Inquiry: A puzzle about gaining knowledge

When on trial for impiety, Socrates traced his inquisitiveness to the Oracle at Delphi (Apology 21d in Cooper 1997). Before Socrates began his mission of inquiry, his friend Chaerephon asked the Oracle: “Who is the wisest of men?” The Oracle answered “No one is wiser than Socrates.” This astounded Socrates because he believed he knew nothing. Whereas a less pious philosopher might have questioned the reliability of the Delphic Oracle, Socrates followed the general practice of treating the Oracle as infallible. The only cogitation appropriate to an infallible answer is interpretation. Accordingly, Socrates resolved his puzzlement by inferring that his wisdom lay in recognizing his own ignorance. While others may know nothing, Socrates knows that he knows nothing.

Socrates continues to be praised for his insight. But his “discovery” is a contradiction. If Socrates knows that he knows nothing, then he knows something (the proposition that he knows nothing) and yet does not know anything (because knowledge implies truth).

Socrates could regain consistency by downgrading his meta-knowledge to the status of a belief. If he believes he knows nothing, then he naturally wishes to remedy his ignorance by asking about everything. This rationale is accepted throughout the early dialogues. But when we reach the Meno, one of his interlocutors has an epiphany. After Meno receives the standard treatment from Socrates about the nature of virtue, Meno discerns a conflict between Socratic ignorance and Socratic inquiry (Meno 80d, in Cooper 1997). How would Socrates recognize the correct answer even if Meno gave it?

The general structure of Meno’s paradox is a dilemma: If you know the answer to the question you are asking, then nothing can be learned by asking. If you do not know the answer, then you cannot recognize a correct answer even if it is given to you. Therefore, one cannot learn anything by asking questions.

The natural solution to Meno’s paradox is to characterize the inquirer as only partially ignorant. He knows enough to recognize a correct answer but not enough to answer on his own. For instance, spelling dictionaries are useless to six-year-old children because they seldom know more than the first letter of the word in question. Ten-year-old children have enough partial knowledge of the word’s spelling to narrow the field of candidates. Spelling dictionaries are also useless to those with full knowledge of spelling and those with total ignorance of spelling. But most of us have an intermediate amount of knowledge.

It is natural to analyze partial knowledge as knowledge of conditionals. The ten-year-old child knows the spoken version of ‘If the spelling dictionary spells the month after January as F-e-b-r-u-a-r-y, then that spelling is correct’. Consulting the spelling dictionary gives him knowledge of the antecedent of the conditional.

Much of our learning from conditionals runs as smoothly as this example suggests. Since we know the conditional, we are poised to learn the consequent merely by learning the antecedent (and by applying the inference rule modus ponens: If \(P\) then \(Q\); \(P\); therefore \(Q\)). But the next section is devoted to some known conditionals that are repudiated when we learn their antecedents.

6.2 Dogmatism Paradox: A puzzle about losing knowledge

Saul Kripke’s ruminations on the surprise test paradox led him to a paradox about dogmatism. He lectured on both paradoxes at Cambridge University to the Moral Sciences Club in 1972. (A descendant of this lecture now appears as Kripke 2011.) Gilbert Harman transmitted Kripke’s new paradox as follows:

If I know that \(h\) is true, I know that any evidence against \(h\) is evidence against something that is true; I know that such evidence is misleading. But I should disregard evidence that I know is misleading. So, once I know that \(h\) is true, I am in a position to disregard any future evidence that seems to tell against \(h\). (1973, 148)

Dogmatists accept this reasoning. For them, knowledge closes inquiry. Any “evidence” that conflicts with what is known can be dismissed as misleading evidence. Forewarned is forearmed.

This conservativeness crosses the line from confidence to intransigence. To illustrate the excessive inflexibility, here is a chain argument for the dogmatic conclusion that my reliable colleague Doug has given me a misleading report (corrected from Sorensen 1988b):

(C\(_1\))
My car is in the parking lot.
(C\(_2\))
If my car is in the parking lot and Doug provides evidence that my car is not in the parking lot, then Doug’s evidence is misleading.
(C\(_3\))
If Doug reports he saw a car just like mine towed from the parking lot, then his report is misleading evidence.
(C\(_4\))
Doug reports that a car just like mine was towed from the parking lot.
(C\(_5\))
Doug’s report is misleading evidence.

By hypothesis, I am justified in believing (C\(_1\)). Premise (C\(_2\)) is a certainty because it is analytically true. The argument from (C\(_1\)) and (C\(_2\)) to (C\(_3\)) is valid. Therefore, my degree of confidence in (C\(_3\)) must equal my degree of confidence in (C\(_1\)). Since we are also assuming that I gain sufficient justification for (C\(_4\)), it seems to follow that I am justified in believing (C\(_5\)) by modus ponens. Similar arguments will lead me to dismiss further evidence, such as a phone call from the towing service and my failure to see my car when I confidently stride over to the parking lot.
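
The validity claim can be checked mechanically once a tacit linking premise is made explicit: that Doug’s towing report counts as evidence that my car is not in the parking lot. A brute-force sketch (hypothetical encoding, one propositional atom per claim):

    from itertools import product

    # Atoms: C = my car is in the lot, R = Doug reports the towing,
    # E = Doug's report is evidence against C, M = the report is misleading.
    def implies(p, q):
        return (not p) or q

    counterexamples = []
    for C, R, E, M in product([True, False], repeat=4):
        c1 = C                        # (C1)
        c2 = implies(C and E, M)      # instance of (C2)
        link = implies(R, E)          # tacit premise: the report is evidence against C
        c3 = implies(R, M)            # (C3)
        if c1 and c2 and link and not c3:
            counterexamples.append((C, R, E, M))

    print(counterexamples)            # []: the premises entail (C3)

Deleting `link` from the test yields counterexamples, so the chain depends on classifying the report as evidence against (C\(_1\)), not merely on (C\(_1\)) and (C\(_2\)) themselves.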

Gilbert Harman diagnoses the paradox as follows:

The argument for paradox overlooks the way actually having evidence can make a difference. Since I now know [my car is in the parking lot], I now know that any evidence that appears to indicate something else is misleading. That does not warrant me in simply disregarding any further evidence, since getting that further evidence can change what I know. In particular, after I get such further evidence I may no longer know that it is misleading. For having the new evidence can make it true that I no longer know that new evidence is misleading. (1973, 149)

In effect, Harman denies the hardiness of knowledge. The hardiness principle states that one knows only if there is no evidence such that if one knew about the evidence one would not be justified in believing one’s conclusion.

Harman’s conclusion that new knowledge can undermine old knowledge can be applied to the surprise test paradox: The students lose knowledge of the test announcement even though they do not forget the announcement or do anything else incompatible with their credentials as ideal reasoners. A student on Thursday is better informed about the outcomes of test days than he was on Sunday. He knows the test was not on Monday and not on Wednesday. But he can only predict that the test is on Friday if he continues to know the announcement. The extra knowledge of the testless days undermines knowledge of the announcement.

Most epistemologists have accepted Harman’s appeal to defeaters. Some have tried to make it more precise with details about updating indicative conditionals (Sorensen 1988b). This may vindicate and generalize Harman’s prediction that future evidence will change your mind about what counts as misleading evidence. Knowledge of such conditionals is useless for a future modus ponens. The dogmatist correctly says we know the conditional “If I know that \(p\), then any evidence conflicting with \(p\) is misleading evidence”. Indeed, given that knowledge implies truth and that misleading evidence is evidence against a truth, it is a tautology! But the dogmatist fails to recognize that this known tautology is useless knowledge. Acquiring the misleading evidence will make me stop knowing \(p\). If an auditor foresees being presented with a biased list of facts, he may utter the tautology to his assistant to convey another proposition for which he has empirical support. That empirical proposition need not be useless knowledge. When the predicted list is presented, the forearmed auditor ignores the facts. But the basis is not his a priori knowledge of the dogmatist’s tautology.

Kripke notes that this solution will not stop the quick-thinking dogmatist who takes measures to prevent his acquisition of the evidence that he now deems misleading (Kripke 2011, 43–44). A second worry is that the dogmatist can still ignore weak evidence. If I know a coin is fair, then I know that if the first twenty tosses come out heads, then that is misleading evidence that the coin is not fair. Such a run does not defeat my knowledge claim. (Substitute a shorter run if you think it does defeat.) So Harman’s solution does not apply. Yet it is dogmatic to ignore this evidence.

In addition to this problem of weak dogmatism, Rachel Fraser (2022) adds a third problem of dogmatic bootstrapping. When Robert Millikan and Harvey Fletcher measured elementary electric charge with tiny charged droplets, they discounted some of the drops as misleadingly wide of the plausible interval for the true value. Drops centrally located within the interval were “beauties”. Editing out the outliers gave Millikan a more precise measurement and a Nobel Prize in 1923. In 1978, the physicist Gerald Holton went through the notebooks and was shocked by how much contrary data had gone unreported by Millikan. Fraser thinks there is a vicious circularity in data purification.

But the bootstrapping dogmatist will regard the circularity as virtuous. When the evidence is a mix of strong evidence and weak counterevidence, the stronger body of evidence exposes the weaker body as misleading evidence. Think of a jigsaw puzzle that has been polluted with stray pieces from another puzzle. When you manage to get a complete fit with a subset of the pieces, the remaining pieces are removed from view. Dimming the misleading evidence allows the leading evidence to shine more visibly. Therefore, we can indeed be more confident than we were before dismissing the weak evidence. Millikan was being a responsible gatekeeper rather than a wishful thinker. Just as data should control theory, theory should control data. The experimenter must strike a delicate balance between discounting too much contrary data and discounting too little. Proposed solutions to the dogmatism paradox have trouble sustaining this balance.

I. J. Good (1967) demonstrated that gathering evidence maximizes expected value, given that the cost of the evidence is negligible. Under this simplifying assumption, Good shows that departure from the principle of total evidence is at least imprudent. Given epistemic utilitarianism, this practical irrationality becomes theoretical irrationality. Bob Beddor (2019) adds the premise that it is irrational to intend to do what one foresees to be irrational. For instance, if you were offered a million dollars to drink a toxin tomorrow that would make you ill for a day, you could profit from the offer (Kavka 1983). But if the million were earned immediately upon your intending to drink the toxin, then you could not profit, because you know there would be no reason to follow through. By analogy, Beddor concludes that it would be irrational to intend to avoid counterevidence (Beddor 2019, 738). One is never entitled to discard evidence, even after it has been foreseen to be misleading.
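
Good’s point can be illustrated numerically (an invented toy problem; the priors, likelihoods, and utilities are made up, and only the moral comes from Good). Deciding after a free observation has at least the expected utility of deciding now, and here strictly more:

    # Toy value-of-information calculation: act on hypothesis h ("accept")
    # or play safe ("reject"), now versus after a free test result.
    prior_h = 0.5
    U = {("accept", True): 1.0, ("accept", False): -1.0,
         ("reject", True): 0.0, ("reject", False): 0.0}

    def best_eu(p_h):
        """Expected utility of the best act, given credence p_h in h."""
        return max(p_h * U[(act, True)] + (1 - p_h) * U[(act, False)]
                   for act in ("accept", "reject"))

    # Test likelihoods: P(pos | h) = 0.8, P(pos | not-h) = 0.2.
    p_pos = 0.8 * prior_h + 0.2 * (1 - prior_h)
    post_pos = 0.8 * prior_h / p_pos          # P(h | pos), by Bayes' theorem
    post_neg = 0.2 * prior_h / (1 - p_pos)    # P(h | neg)

    decide_now = best_eu(prior_h)
    decide_after = p_pos * best_eu(post_pos) + (1 - p_pos) * best_eu(post_neg)
    print(decide_now, decide_after)           # 0.0 0.3: the free evidence pays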

But if the cost of evidence is significant, the connection between practical rationality and theoretical rationality favors ignoring counterevidence. Judging that \(p\) embodies a resolution not to inquire further into the question of whether \(p\) is true. Or so answers the volitionist (Fraser 2022).

6.3 The Future of Epistemic Paradoxes

We cannot predict that any specific new epistemic paradox awaits discovery. To see why, consider the prediction Jon Wynne-Tyson attributes to Leonardo da Vinci: “I have learned from an early age to abjure the use of meat, and the time will come when men such as I will look upon the murder of animals as they now look upon the murder of men.” (1985, 65) By predicting this progress, Leonardo inadvertently reveals that he already believes the murder of animals is the same as the murder of men. If you believe that a proposition is true but that it will first be believed at a later time, then you already believe it, and so are inconsistent. (The actual truth of the proposition is irrelevant.)

Specific regress, the acquisition of falsehoods, can be anticipated; specific progress cannot. When I try to predict my first acquisition of a specific truth, I pre-empt myself. When I try to predict my first acquisition of a specific falsehood, there is no pre-emption.

There would be no problem with predicting progress if Leonardo thought the moral progress lay in the moral preferability of the vegetarian belief rather than in its truth. One might admire vegetarianism without accepting the correctness of vegetarianism. But Leonardo is endorsing the correctness of the belief. His prediction embodies a Moorean absurdity. It is like saying ‘Leonardo took twenty-five years to complete The Virgin of the Rocks but I will first believe so tomorrow’. (This absurdity will prompt some to object that I have uncharitably interpreted Leonardo; he must have intended to make an exception for himself and only be referring to men of his kind.)

I cannot specifically anticipate the first acquisition of the true belief that \(p\). For that prediction would show that I already have the true belief that \(p\). The truth cannot wait. The impatience of the truth imposes a limit on the prediction of discoveries.

Bibliography

  • Aikin, Scott F., 2011, Epistemology and the Regress Problem, London: Routledge.
  • Anderson, C. Anthony, 1983, “The Paradox of the Knower”, The Journal of Philosophy, 80: 338–355.
  • Binkley, Robert, 1968, “The Surprise Examination in Modal Logic”, Journal of Philosophy, 65(2): 127–136.
  • Bommarito, Nicolas, 2010, “Rationally Self-Ascribed Anti-Expertise”, Philosophical Studies, 151: 413–419.
  • Bovens, Luc, 1995, “‘P and I will believe that not-P’: Diachronic Constraints on Rational Belief”, Mind, 104(416): 737–760.
  • Burge, Tyler, 1978, “Buridan and Epistemic Paradox”, Philosophical Studies, 34: 21–35.
  • –––, 1984, “Epistemic Paradox”, Journal of Philosophy, 81(1): 5–29.
  • Buridan, John, 1982, John Buridan on Self-Reference: Chapter Eight of Buridan’s ‘Sophismata’, G. E. Hughes (ed. & tr.), Cambridge: Cambridge University Press.
  • Carnap, Rudolf, 1950, The Logical Foundations of Probability, Chicago: University of Chicago Press.
  • Christensen, David, 2010, “Higher Order Evidence”, Philosophy and Phenomenological Research, 81: 185–215.
  • Cicero, On the Nature of the Gods, Academica, H. Rackham (trans.), Cambridge, MA: Loeb Classical Library, 1933.
  • Collins, Arthur, 1979, “Could our beliefs be representations in our brains?”, Journal of Philosophy, 74(5): 225–43.
  • Conee, Earl, 2004, “Heeding Misleading Evidence”, Philosophical Studies, 103: 99–120.
  • Cooper, John (ed.), 1997, Plato: The Complete Works, Indianapolis: Hackett.
  • DeRose, Keith, 2017, The Appearance of Ignorance: Knowledge, Skepticism, and Context (Volume 2), Oxford: Oxford University Press.
  • Egan, Andy and Adam Elga, 2005, “I Can’t Believe I’m Stupid”, Philosophical Perspectives, 19(1): 77–93.
  • Feyerabend, Paul, 1988, Against Method, London: Verso.
  • Fitch, Frederic, 1963, “A Logical Analysis of Some Value Concepts”, Journal of Symbolic Logic, 28(2): 135–142.
  • Fraser, Rachel, 2022, “The Will in Belief”, Oxford Studies in Epistemology.
  • Gödel, Kurt, 1983, “What is Cantor’s Continuum Problem?”, Philosophy of Mathematics, Paul Benacerraf and Hilary Putnam (eds.), Cambridge: Cambridge University Press, pp. 258–273.
  • Good, I. J., 1967, “On the Principle of Total Evidence”, British Journal for the Philosophy of Science, 17(4): 319–321.
  • Hacking, Ian, 1975, The Emergence of Probability, Cambridge: Cambridge University Press.
  • Hájek, Alan, 2005, “The Cable Guy Paradox”, Analysis, 65(2): 112–119.
  • Harman, Gilbert, 1968, “Knowledge, Inference, and Explanation”, American Philosophical Quarterly, 5(3): 164–173.
  • –––, 1973, Thought, Princeton: Princeton University Press.
  • Hawthorne, John, 2004, Knowledge and Lotteries, Oxford: Clarendon Press.
  • Hein, Piet, 1966, Grooks, Cambridge, MA: MIT Press.
  • Hintikka, Jaakko, 1962, Knowledge and Belief, Ithaca, NY: Cornell University Press.
  • Holliday, Wesley, 2016, “On Being in an Undiscoverable Position”, Thought, 5(1): 33–40.
  • –––, 2017, “Epistemic Logic and Epistemology”, The Handbook of Formal Philosophy, Sven Ove Hansson and Vincent F. Hendricks (eds.), Dordrecht: Springer.
  • Hughes, G. E., 1982, John Buridan on Self-Reference, Cambridge: Cambridge University Press.
  • Immerman, Daniel, 2017, “Question Closure to Solve the Surprise Test Paradox”, Synthese, 194(11): 4583–4596.
  • Kaplan, David and Richard Montague, 1960, “A Paradox Regained”, Notre Dame Journal of Formal Logic, 1: 79–90.
  • Klein, Peter, 2007, “How to be an Infinitist about Doxastic Justification”, Philosophical Studies, 134: 25–29.
  • Knight, Kevin, 2002, “Measuring Inconsistency”, Journal of Philosophical Logic, 31(1): 77–98.
  • Kripke, Saul, 2011, “Two Paradoxes of Knowledge”, in S. Kripke, Philosophical Troubles: Collected Papers (Volume 1), New York: Oxford University Press, pp. 27–51.
  • Kvanvig, Jonathan L., 1998, “The Epistemic Paradoxes”, Routledge Encyclopedia of Philosophy, London: Routledge.
  • Kyburg, Henry, 1961, Probability and the Logic of Rational Belief, Middletown: Wesleyan University Press.
  • Lewis, David, 1998, “Lucas against Mechanism”, Papers in Philosophical Logic, Cambridge: Cambridge University Press, pp. 166–9.
  • Lewis, David and Jane Richardson, 1966, “Scriven on Human Unpredictability”, Philosophical Studies, 17(5): 69–74.
  • Lucas, J. R., 1964, “Minds, Machines and Gödel”, in Minds and Machines, Alan Ross Anderson (ed.), Englewood Cliffs, N.J.: Prentice Hall, pp. 112–7.
  • Makinson, D. C., 1965, “The Paradox of the Preface”, Analysis, 25: 205–207.
  • Malcolm, Norman, 1963, Knowledge and Certainty, Englewood Cliffs, NJ: Prentice Hall.
  • Moore, G. E., 1942, “A Reply to My Critics”, in The Philosophy of G. E. Moore, P. A. Schilpp (ed.), Evanston, IL: Northwestern University.
  • Nerlich, G. C., 1961, “Unexpected Examinations and Unprovable Statements”, Mind, 70(280): 503–514.
  • Peirce, Charles Sanders, 1931–1935, Collected Papers of Charles Sanders Peirce, Charles Hartshorne and Paul Weiss (eds.), Cambridge, MA: Harvard University Press.
  • Plato, Plato: The Complete Works, John M. Cooper (ed.), Indianapolis: Hackett, 1997.
  • Poncins, Gontran de, 1941 [1988], Kabloona, in collaboration with Lewis Galantiere, New York: Carroll & Graf Publishers, 1988.
  • Post, John F., 1970, “The Possible Liar”, Noûs, 4: 405–409.
  • Quine, W. V. O., 1953, “On a So-Called Paradox”, Mind, 62(245): 65–7.
  • –––, 1969, “Epistemology Naturalized”, in Ontological Relativity and Other Essays, New York: Columbia University Press, pp. 69–90.
  • –––, 1987, Quiddities, Cambridge, MA: Harvard University Press.
  • Read, Stephen, 1979, “Self-Reference and Validity”, Synthese, 42(2): 265–74.
  • Sainsbury, R. M., 1995, Paradoxes, Cambridge: Cambridge University Press.
  • Salerno, Joseph, 2009, New Essays on the Knowability Paradox, New York: Oxford University Press.
  • Scriven, Michael, 1964, “An Essential Unpredictability in Human Behavior”, in Scientific Psychology: Principles and Approaches, Benjamin B. Wolman and Ernest Nagel (eds.), New York: Basic Books, pp. 411–25.
  • Sextus Empiricus, Outlines of Pyrrhonism, R. G. Bury (trans.), Cambridge, MA: Harvard University Press, 1933.
  • Skyrms, Brian, 1982, “Causal Decision Theory”, Journal of Philosophy, 79(11): 695–711.
  • Smith, Martin, 2016, Between Probability and Certainty, Oxford: Oxford University Press.
  • Sorensen, Roy, 1988a, Blindspots, Oxford: Clarendon Press.
  • –––, 1988b, “Dogmatism, Junk Knowledge, and Conditionals”, Philosophical Quarterly, 38: 433–454.
  • –––, 2001, Vagueness and Contradiction, Oxford: Clarendon Press.
  • –––, 2003a, “Paradoxes of Rationality”, in The Handbook of Rationality, Al Mele (ed.), Oxford: Oxford University Press, pp. 257–75.
  • –––, 2003b, A Brief History of the Paradox, New York: Oxford University Press.
  • Stephenson, Andrew, 2015, “Kant, the Paradox of Knowability, and the Meaning of Experience”, Philosophers’ Imprint, 15(17): 1–19.
  • Thomson, J. F., 1962, “On Some Paradoxes”, in Analytical Philosophy, R. J. Butler (ed.), New York: Barnes & Noble, pp. 104–119.
  • Tymoczko, Thomas, 1984, “An Unsolved Puzzle about Knowledge”, The Philosophical Quarterly, 34: 437–58.
  • van Fraassen, Bas, 1984, “Belief and the Will”, Journal of Philosophy, 81: 235–256.
  • –––, 1995, “Belief and the Problem of Ulysses and the Sirens”, Philosophical Studies, 77: 7–37.
  • Weiss, Paul, 1952, “The Prediction Paradox”, Mind, 61(242): 265–9.
  • Williamson, Timothy, 2000, Knowledge and its Limits, Oxford: Oxford University Press.
  • Wynne-Tyson, Jon, 1985, The Extended Circle, Fontwell, Sussex: Centaur Press.

Copyright © 2022 by
Roy Sorensen <roy.sorensen@austin.utexas.edu>
