Stanford Encyclopedia of Philosophy

Epistemic Paradoxes

First published Wed Jun 21, 2006; substantive revision Thu Mar 3, 2022

Epistemic paradoxes are riddles that turn on the concept of knowledge (episteme is Greek for knowledge). Typically, there are conflicting, well-credentialed answers to these questions (or pseudo-questions). Thus the riddle immediately poses an inconsistency. In the long run, the riddle goads and guides us into correcting at least one deep error – if not directly about knowledge, then about its kindred concepts such as justification, rational belief, and evidence.

Such corrections are of interest to epistemologists. Historians date the origin of epistemology to the appearance of skeptics. As manifest in Plato’s dialogues featuring Socrates, epistemic paradoxes have been discussed for twenty-five hundred years. Given their hardiness, some of these riddles about knowledge may well be discussed for the next twenty-five hundred years.

1. The Surprise Test Paradox

A teacher announces that there will be a surprise test next week. It will be a surprise in that the students will not be able to know in advance on which day the exam will be given. A student objects that this is impossible: “The class meets on Monday, Wednesday, and Friday. If the test is given on Friday, then on Thursday I would be able to predict that the test is on Friday. It would not be a surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the test will not be on Friday (thanks to the previous reasoning) and know that the test was not on Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I would know that the test must be on Monday. So a Monday test would also fail to be a surprise. Therefore, it is impossible for there to be a surprise test.”

Can the teacher fulfill her announcement? We have an embarrassment of riches. On the one hand, we have the student’s elimination argument. (For a formalization, see Holliday 2017.) On the other hand, common sense says that surprise tests are possible even when we have had advance warning that one will occur at some point. Either of the answers would be decisive were it not for the credentials of the rival answer. Thus we have a paradox. But a paradox of what kind? ‘Surprise test’ is being defined in terms of what can be known. Specifically, a test is a surprise if and only if the student cannot know beforehand which day the test will occur. Therefore the riddle of the surprise test qualifies as an epistemic paradox.

Paradoxes are more than edifying surprises. Here is an edifying surprise that does not pose a paradox. Professor Statistics announces she will give random quizzes: “Class meets every day of the week. Each day I will open by rolling a die. When the roll yields a six, I will immediately give a quiz.” Today, Monday, a six came up. So you are taking a quiz. The last question of her quiz is: “Which of the subsequent days is most likely to be the day of the next random test?” Most people answer that each of the subsequent days has the same probability of being the next quiz. But the correct answer is: Tomorrow (Tuesday).

Uncontroversial facts about probability reveal the mistake and establish the correct answer. For the next test to be on Wednesday, there would have to be a conjunction of two events: no test on Tuesday (a 5/6 chance of that) and a test on Wednesday (a 1/6 chance). The probability for each subsequent day becomes less and less. (It would be astounding if the next quiz day were a hundred days from now!) The question is not whether a six will be rolled on any given day, but when the next six will be rolled. Which day is the next one depends partly on what happens meanwhile, as well as depending partly on the roll of the die on that day.
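The arithmetic can be checked exactly: the day of the next six follows a geometric distribution, so tomorrow is the single most likely day. This is a minimal sketch; the function name is mine, not the entry’s.

```python
from fractions import Fraction

def p_next_quiz(k):
    """Probability that the NEXT quiz falls exactly k days from now:
    no six on each of the first k - 1 days, then a six on day k."""
    return Fraction(5, 6) ** (k - 1) * Fraction(1, 6)

probs = [p_next_quiz(k) for k in range(1, 8)]

# Tomorrow (k = 1) carries the largest probability, 1/6,
# and every later day is strictly less likely.
assert probs[0] == Fraction(1, 6)
assert all(probs[i] > probs[i + 1] for i in range(len(probs) - 1))
```

The probabilities sum to 1 over all future days, so the equal-probability answer most people give cannot be right: infinitely many days cannot each get the same positive share.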

This probability riddle is instructive and will be referenced throughout this entry. But the existence of a quick, decisive solution shows that only a mild revision of our prior beliefs was needed. In contrast, when our deep beliefs conflict, proposed amendments reverberate unpredictably. Attempts to throw away a deep belief boomerang. For the malfunctions that ensue from distancing ourselves from the belief demonstrate its centrality. Often, the belief winds up even more entrenched. “Problems worthy of attack prove their worth by fighting back” (Hein 1966).

One sign of depth (or at least desperation) is that commentators begin rejecting highly credible inference rules. The surprise test has been claimed to invalidate the law of bivalence, the KK principle (if one is in a position to know that \(p\), then one is also in a position to know that one knows that \(p\)), and the closure principle (if one knows \(p\) while also competently deducing \(q\) from \(p\), one knows \(q\)) (Immerman 2017).

The surprise test paradox also has ties to issues that are not clearly paradoxes – or to issues whose status as paradoxes is at least contested. Consider the Monty Hall problem. There is a prize behind exactly one of three doors. After you pick, Monty Hall will reveal what is behind one of the doors that lacks the prize. He will then give you an option to switch your choice to another closed door. Should you switch to have the best chance to win the prize? When Marilyn vos Savant answered yes in a 1990 Parade magazine column, she was mistakenly scolded by many readers — including some scholars. The correct solution was provided by the original poser of the puzzle decades earlier and was never forgotten or effectively criticized.
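Vos Savant’s answer can be confirmed by brute simulation rather than argument; the sketch below (my own setup, not from the entry) plays the game many times under each strategy.

```python
import random

def monty_hall_trial(switch, rng):
    """Play one round; return True if the final pick wins the prize."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that is neither the contestant's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
stay_rate = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
swap_rate = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
# stay_rate is about 1/3; swap_rate is about 2/3 — switching is better.
```

The simulation makes vivid why the scolding readers were wrong: staying only wins when the initial pick was right (probability 1/3), so switching wins the remaining 2/3 of the time.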

What makes the Monty Hall Problem interesting is that it is not a paradox. There has always been expert consensus on its solution. Yet it has many of the psychological and sociological features of a paradox. The Monty Hall Problem is merely a cognitive illusion. Paradox status is also withheld by those who find only irony in self-defeating predictions and only an embarrassment in the “knowability paradox” (discussed below). Calling a problem a ‘paradox’ tends to quarantine it from the rest of our inquiries. Those who wish to rely on the surprising result will therefore deny that there is any paradox. At most, they concede there was a paradox. Dead paradoxes are benign fertilizer for the tree of knowledge.

We can look forward to future philosophers drawing edifying historical connections. The backward elimination argument underlying the surprise test paradox can be discerned in German folktales dating back to 1756 (Sorensen 2003a, 267). Perhaps medieval scholars explored these slippery slopes. But let us turn to commentary to which we presently have access.

1.1 Self-defeating prophecies and pragmatic paradoxes

In the twentieth century, the first published reaction to the surprise test paradox was to endorse the student’s elimination argument. D. J. O’Connor (1948) regarded the teacher’s announcement as self-defeating. If the teacher had not announced that there would be a surprise test, the teacher would have been able to give the surprise test. The pedagogical moral of the paradox would then be that if you want to give a surprise test, do not announce your intention to your students!

More precisely, O’Connor compared the teacher’s announcement to utterances such as ‘I am not speaking now’. Although these utterances are consistent, they “could not conceivably be true in any circumstances” (O’Connor 1948, 358). L. Jonathan Cohen (1950) agreed and classified the announcement as a pragmatic paradox. He defined a pragmatic paradox to be a statement that is falsified by its own utterance. The teacher overlooked how the manner in which a statement is disseminated can doom it to falsehood.

Cohen’s classification is too monolithic. True, the teacher’s announcement does compromise one aspect of the surprise: students now know that there will be a test. But this compromise is not itself enough to make the announcement self-falsifying. The existence of a surprise test has been revealed, but that allows surviving uncertainty as to which day the test will occur. The announcement of a forthcoming surprise aims at changing uninformed ignorance into action-guiding awareness of ignorance. A student who misses the announcement does not realize that there is a test. If no one passes on the news about the surprise test, the student with simple ignorance will be less prepared than classmates who know they do not know the day of the test.

Announcements are made to serve different goals simultaneously. Competition between accuracy and helpfulness makes it possible for an announcement to be self-fulfilling by being self-defeating. Consider a weatherman who warns ‘The midnight tsunami will cause fatalities along the shore’. Because of the warning, spectacle-seekers make a special trip to witness the wave. Some drown. The weatherman’s announcement succeeds as a prediction by backfiring as a warning.

1.2 Predictive determinism

Instead of viewing self-defeating predictions as showing how the teacher is refuted, some philosophers construe self-defeating predictions as showing how the student is refuted. The student’s elimination argument embodies hypothetical predictions about which day the teacher will give a test. Isn’t the student overlooking the teacher’s ability and desire to thwart those expectations? Some game theorists suggest that the teacher could defeat this strategy by choosing the test date at random.

Students can certainly be kept uncertain if the teacher is willing to be faithfully random. She will need to prepare a quiz each day. She will need to brace for the possibility that she will give too many quizzes or too few or have an unrepresentative distribution of quizzes.

If the instructor finds these costs onerous, then she may be tempted by an alternative: at the beginning of the week, randomly select a single day. Keep the identity of that day secret. Since the student will only know that the quiz is on some day or other, pupils will not be able to predict the day of the quiz.

This plan is risky. If, through the chance process, the last day happens to be selected, then abiding by the outcome means giving an unsurprising test. For as in the original scenario, the student has knowledge of the teacher’s announcement and awareness of past testless days. So the teacher must exclude random selection of the last day. The student is astute. He will replicate this reasoning that excludes a test on the last day. Can the teacher abide by the random selection of the next-to-last day? Now the reasoning becomes all too familiar.

Another critique of the student’s replication of the teacher’s reasoning adapts a thought experiment from Michael Scriven (1964). To refute predictive determinism (the thesis that all events are foreseeable), Scriven conjures an agent, “Predictor”, who has all the data, laws, and calculating capacity needed to predict the choices of others. Scriven goes on to imagine “Avoider”, whose dominant motivation is to avoid prediction. Therefore, Predictor must conceal his prediction. The catch is that Avoider has access to the same data, laws, and calculating capacity as Predictor. Thus Avoider can duplicate Predictor’s reasoning. Consequently, the optimal predictor cannot predict Avoider. Let the teacher be Avoider and the student be Predictor. Avoider must win. Therefore, it is possible to give a surprise test.

Scriven’s original argument assumes that Predictor and Avoidercan simultaneously have all the needed data, laws, and calculatingcapacity. David Lewis and Jane Richardson object:

… the amount of calculation required to let the predictor finish his prediction depends on the amount of calculation done by the avoider, and the amount required to let the avoider finish duplicating the predictor’s calculation depends on the amount done by the predictor. Scriven takes for granted that the requirement-functions are compatible: i.e., that there is some pair of amounts of calculation available to the predictor and the avoider such that each has enough to finish, given the amount the other has. (Lewis and Richardson 1966, 70–71)

According to Lewis and Richardson, Scriven equivocates on ‘Both Predictor and Avoider have enough time to finish their calculations’. Reading the sentence one way yields a truth: against any given avoider, Predictor can finish, and against any given predictor, Avoider can finish. However, the compatibility premise requires the false reading in which Predictor and Avoider can finish against each other.

Idealizing the teacher and student along the lines of Avoider and Predictor would fail to disarm the student’s elimination argument. We would have merely formulated a riddle that falsely presupposes that the two types of agent are co-possible. It would be like asking ‘If Bill is smarter than anyone else and Hillary is smarter than anyone else, which of the two is the smartest?’.

Predictive determinism states that everything is foreseeable. Metaphysical determinism states that there is only one way the future could be, given the way the past is. Pierre-Simon Laplace used metaphysical determinism as a premise for predictive determinism. He reasoned that since every event has a cause, a complete description of any stage of history combined with the laws of nature implies what happens at any other stage of the universe. Scriven was only challenging predictive determinism in his thought experiment. The next approach challenges metaphysical determinism.

1.3 The Problem of Foreknowledge

Prior knowledge of an action seems incompatible with its being a free action. If I know that you will finish reading this article tomorrow, then you will finish tomorrow (because knowledge implies truth). But that means you will finish the article even if you resolve not to. After all, given that you will finish, nothing can stop you from finishing. So if I know that you will finish reading this article tomorrow, you are not free to do otherwise.

Maybe all of your reading is compulsory. If God exists, then He knows everything. So the threat to freedom becomes total for the theist. The problem of divine foreknowledge raises the possibility that theism (rather than atheism) precludes free choice and thereby precludes our having any moral responsibility.

In response to the apparent conflict between freedom and foreknowledge, medieval philosophers denied that future contingent propositions have a truth-value. They took themselves to be extending a solution Aristotle discusses in De Interpretatione to the problem of logical fatalism. According to this truth-value gap approach, ‘You will finish this article tomorrow’ is not true now. The prediction will become true tomorrow. A morally serious theist can agree with the Rubaiyat of Omar Khayyam:

The Moving Finger writes; and, having writ,
Moves on: nor all your Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all your Tears wash out a Word of it.

God’s omniscience only requires that He knows every true proposition. God will know ‘You will finish this article tomorrow’ as soon as it becomes true – but not before.

The teacher has free will. Therefore, predictions about what he will do are not true (prior to the examination). The metaphysician Paul Weiss (1952) concludes that the student’s argument falsely assumes he knows that the announcement is true. The student can know that the announcement is true after it becomes true – but not before.

The logician W. V. O. Quine (1953) agrees with Weiss’ conclusion that the teacher’s announcement of a surprise test fails to give the student knowledge that there will be a surprise test. Yet Quine abominates Weiss’ reasoning. Weiss breaches the law of bivalence (which states that every proposition has a truth-value, true or false). Quine believes that the riddle of the surprise test should not be answered by surrendering classical logic.

2. Intellectual suicide

Quine insists that the student’s elimination argument is only a reductio ad absurdum of the supposition that the student knows that the announcement is true (rather than a reductio of the announcement itself). He accepts this epistemic reductio but rejects the metaphysical reductio. Given the student’s ignorance of the announcement, Quine concludes that a test on any day would be unforeseen. That is, Quine accepts that the student has no advance knowledge of the time of the test, but he rejects that there is no truth in advance as to when the test will be given.

Common sense suggests that the students are informed by the announcement. The teacher is assuming that the announcement will enlighten the students. She seems right to assume that the announcement of this intention produces the same sort of knowledge as her other declarations of intentions (about which topics will be selected for lecture, the grading scale, and so on).

There are skeptical premises that could yield Quine’s conclusion that the students do not know the announcement is true. If no one can know anything about the future, as alleged by David Hume’s problem of induction, then the student cannot know that the teacher’s announcement is true. (See the entry on the problem of induction.) But denying all knowledge of the future in order to deny the student’s knowledge of the teacher’s announcement is disproportionate and indiscriminate. Do not kill a fly with a cannon — unless it is a killer fly and only a cannon will work!

In later writings, Quine evinces general reservations about the concept of knowledge. One of his favorite objections is that ‘know’ is vague. If knowledge entails certainty, then too little will count as known. Quine infers that we must equate knowledge with firmly held true belief. Asking just how firm that belief must be is akin to asking just how big something has to be to count as being big. There is no answer to the question because ‘big’ lacks the sort of boundary enjoyed by precise words.

There is no place in science for bigness, because of this lack of boundary; but there is a place for the relation of biggerness. Here we see the familiar and widely applicable rectification of vagueness: disclaim the vague positive and cleave to the precise comparative. But it is inapplicable to the verb ‘know’, even grammatically. Verbs have no comparative and superlative inflections … . I think that for scientific or philosophical purposes the best we can do is give up the notion of knowledge as a bad job and make do rather with its separate ingredients. We can still speak of a belief as true, and of one belief as firmer or more certain, to the believer’s mind, than another. (1987, 109)

Quine is alluding to Rudolf Carnap’s (1950) generalization that scientists replace qualitative terms (tall) with comparatives (taller than) and then replace the comparatives with quantitative terms (being n millimeters in height).

It is true that some borderline cases of a qualitative term are not borderline cases for the corresponding comparative. But the reverse holds as well. A tall man who stoops may stand less high than another tall man who is not as lengthy but better postured. Both men are clearly tall. It is unclear that ‘The lengthier man is taller’. Qualitative terms can be applied when a vague quota is satisfied without the need to sort out the details. Only comparative terms are bedeviled by tie-breaking issues.

Science is about what is the case rather than what ought to be the case. This seems to imply that science does not tell us what we ought to believe. The traditional way to fill the normative gap is to delegate issues of justification to epistemologists. However, Quine is uncomfortable with delegating such authority to philosophers. He prefers the thesis that psychology is enough to handle the issues traditionally addressed by epistemologists (or at least the issues still worth addressing in an Age of Science). This “naturalistic epistemology” seems to imply that ‘know’ and ‘justified’ are antiquated terms – as empty as ‘phlogiston’ or ‘soul’.

Those willing to abandon the concept of knowledge can dissolve the surprise test paradox. But to epistemologists who find promise in less drastic responses, this is like using a suicide bomb to kill a fly.

Our suicide bomber may protest that the flies have been undercounted. Epistemic eliminativism dissolves all epistemic paradoxes. According to the eliminativist, epistemic paradoxes are symptoms of a problem with the very concept of knowledge.

Notice that the eliminativist is more radical than the skeptic. The skeptic thinks the concept of knowledge is coherent and definite in its requirements. We just fall short of being knowers. The skeptic treats ‘No man is a knower’ like ‘No man is an immortal’. There is nothing wrong with the concept of immortality. Biology just winds up guaranteeing that every man falls short of being immortal. Universal absence of knowledge would be shocking. But the potential to shock us should not lead us to kill the messenger (the skeptic) or declare unintelligible the vocabulary comprising the message (specifically, the word ‘know’).

Unlike the messenger telling us ‘No man is an immortal’, the skeptic has trouble telling us, ‘There is no knowledge’. According to Sextus Empiricus, assertion expresses belief that one knows what is asserted (Outlines of Pyrrhonism, I., 3, 226). He condemns the assertion ‘There is no knowledge’ (though not the proposition expressed by the assertion) as dogmatic skepticism. Sextus prefers agnosticism about knowledge rather than skepticism (considered as “atheism” about knowledge). Yet it is just as inconsistent to assert ‘No one can know whether anything is known’. For that conveys the belief that one knows that no one can know whether anything is known.

The eliminativist has even more severe difficulties in stating his position than the skeptic. Some eliminativists dismiss the threat of self-defeat by drawing an analogy. Those who denied the existence of souls were accused of undermining a necessary condition for asserting anything. However, the soul theorist’s account of what is needed gives no reason to deny that a healthy brain suffices for mental states.

If the eliminativist thinks that assertion only imposes the aim of expressing a truth, then he can consistently assert that ‘know’ is a defective term. However, an epistemologist can revive the charge of self-defeat by showing that assertion does indeed require the speaker to attribute knowledge to himself. This knowledge-based account of assertion has recently been supported by work on our next paradox.

3. Lotteries and the Lottery Paradox

Lotteries pose a problem for the theory that a high probability for a true belief suffices for knowledge. Given that there are a million tickets and only one winner, the probability of ‘This ticket is a losing ticket’ is very high. Yet we are reluctant to say this makes the proposition known.

We overcome the inhibition after the winning ticket is announced. Now the ticket is known to be a loser and tossed in the trash. But wait! Testimony does not furnish certainty. Nor does perception or recollection. When pressed, we admit there is a small chance that we misperceived the drawing or that the newscaster misread the winning number or that we are misremembering. While in this concessive mood, we are apt to relinquish our claim to know. The skeptic syllogizes from this surrender: For any contingent proposition, there is a lottery statement that is more probable and which is unknown. A known proposition cannot be less probable than an unknown proposition. So no contingent proposition is known (Hawthorne 2004). That is too much to give up! Yet the skeptic’s statistics seem impeccable.

This skeptical paradox was noticed by Gilbert Harman (1968, 166). But his views about the role of causation in inferential knowledge seemed to solve the problem (DeRose 2017, chapter 5). The baby paradox was dismissed as stillborn. Since the new arrival did not get the customary baptism of attention, epistemologists did not notice that the demise of the causal theory of knowledge meant new life for Harman’s lottery paradox.

The probability skeptic’s ordinary suggestions about how we might be mistaken contrast with the extraordinary possibilities conjured by René Descartes’ skeptic. The Cartesian skeptic tries to undermine vast swaths of knowledge with a single untestable counter-explanation of the evidence (such as the hypothesis that you are dreaming or the hypothesis that an evil demon is deceiving you). These comprehensive alternatives are designed to evade any empirical refutation. The probabilistic skeptic, in contrast, points to a plethora of pedestrian counter-explanations. Each is easy to test: maybe you transposed the digits of a phone number, maybe the ticket agent thought you wanted to fly to Moscow, Russia rather than Moscow, Idaho, etc. You can check for errors, but any check itself has a small chance of being wrong. So there is always something to check, given that the issues cannot be ignored on grounds of improbability.

You can check any of these possible errors but you cannot check them all. You cannot discount these pedestrian possibilities as science fiction. These are exactly the sorts of possibilities we check when plans go awry. For instance, you think you know that you have an appointment to meet a prospective employer for lunch at noon. When she fails to show at the expected time, you backpedal through your premises: Is your watch slow? Are you remembering the right restaurant? Could there be another restaurant in the city with the same name? Is she just detained? Could she have just forgotten? Could there have been a miscommunication?

Probabilistic skepticism dates back to Arcesilaus, who took over the Academy two generations after Plato’s death. This moderate kind of skepticism, recounted by Cicero (Academica 2.74, 1.46) from his days as a student at the Academy, allows for justified belief. Many scientists feel they should only assign probabilities. They dismiss the epistemologist’s preoccupation with knowledge as old-fashioned.

Despite the early start of the qualitative theory of probability, the quantitative theory did not develop until Blaise Pascal’s study of gambling in the seventeenth century (Hacking 1975). Only in the eighteenth century did it penetrate the insurance industry (even though insurers realized that a fortune could be made by accurately calculating risk). Only in the nineteenth century did probability make a mark in physics. And only in the twentieth century did probabilists make important advances over Arcesilaus.

Most of these philosophical advances are reactions to the use of probability by scientists. In the twentieth century, editors of science journals began to demand that the author’s hypothesis be accepted only when it was sufficiently probable – as measured by statistical tests. The threshold for acceptance was acknowledged to be somewhat arbitrary. And it was also conceded that the acceptance rule might vary with one’s purposes. For instance, we demand a higher probability when the cost of accepting a false hypothesis is high.

In 1961 Henry Kyburg pointed out that this policy conflicted with a principle of agglomeration: if you rationally believe \(p\) and rationally believe \(q\), then you rationally believe both \(p\) and \(q\). Little pictures of the same scene should sum to a bigger picture of the same scene. If rational belief can be based on an acceptance rule that only requires a high probability, there will be rational belief in a contradiction! To see why, suppose the acceptance rule permits belief in any proposition that has a probability of at least .99. Given a lottery with 100 tickets and exactly one winner, the probability of ‘Ticket \(n\) is a loser’ licenses belief. Symbolize propositions about ticket \(n\) being a loser as \(p_n\). Symbolize ‘I rationally believe’ as \(B\). Belief in a contradiction follows:

  1. \(B{\sim}(p_1 \amp p_2 \amp \ldots \amp p_{100})\),
    by the probabilistic acceptance rule.
  2. \(Bp_1 \amp Bp_2 \amp \ldots \amp Bp_{100}\),
    by the probabilistic acceptance rule.
  3. \(B(p_1 \amp p_2 \amp \ldots \amp p_{100})\),
    from (2) and the principle that rational belief agglomerates.
  4. \(B[(p_1 \amp p_2 \amp \ldots \amp p_{100}) \amp {\sim}(p_1 \amp p_2 \amp \ldots \amp p_{100})]\),
    from (1) and (3) by the principle that rational belief agglomerates.

More informally, the acceptance rule implies this: each belief that a particular ticket will lose is probable enough to justify believing it. By repeated applications of the agglomeration principle, conjoining all of these justified beliefs together gives a justified belief. Finally, conjoining that belief with the justified belief that one of the tickets is a winner gives a contradictory belief to the effect that each will lose and one will win. Yet by agglomeration that too is justified.
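The conflict between the .99 acceptance rule and agglomeration can be verified with exact arithmetic; the sketch below restates Kyburg’s 100-ticket lottery (the variable names are mine).

```python
from fractions import Fraction

TICKETS = 100
THRESHOLD = Fraction(99, 100)  # accept any proposition at least this probable

# With exactly one winner among 100 tickets, each 'ticket n loses'
# has probability 99/100 and so clears the acceptance threshold.
p_ticket_loses = Fraction(TICKETS - 1, TICKETS)
each_loss_accepted = p_ticket_loses >= THRESHOLD

# 'Some ticket wins' has probability 1 and is likewise accepted.
p_some_winner = Fraction(1)

# But the agglomerated conjunction 'every ticket loses' has probability 0,
# since exactly one ticket is guaranteed to win.
p_all_lose = Fraction(0)
conjunction_accepted = p_all_lose >= THRESHOLD  # False
```

So every conjunct individually passes the rule while the conjunction is certainly false, which is exactly why Kyburg concluded that probabilistic acceptance and agglomeration cannot both stand.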

Since belief in an obvious contradiction is a paradigm example of irrationality, Kyburg poses a dilemma: either reject agglomeration or reject rules that license belief for a probability of less than one. (Martin Smith (2016, 186–196) warns that even a probability of one leads to joint inconsistency for a lottery that has infinitely many tickets.) Kyburg rejects agglomeration. He promotes toleration of joint inconsistency (having beliefs that cannot all be true together) to avoid belief in contradictions. Reason forbids us from believing a proposition that is necessarily false but permits us to have a set of beliefs that necessarily contains a falsehood. Kyburg’s choice was soon supported by the discovery of a companion paradox.

4. Preface Paradox

In the preface of Introduction to the Foundations of Mathematics, Raymond Wilder (1952, iv) apologizes for the errors in the text. The 1982 reprint has three pages of errata that vindicate Wilder’s humility. D. C. Makinson (1965, 205) quotes Wilder’s 1952 apology and extracts a paradox: Wilder rationally believes each of the assertions in his book. But since Wilder regards himself as fallible, he rationally believes the conjunction of all his assertions is false. If the agglomeration principle holds, \((Bp \amp Bq) \rightarrow B(p \amp q)\), Wilder would rationally believe the conjunction of all assertions in his book and also rationally disbelieve the same thing!

The preface paradox does not rely on a probabilistic acceptance rule. The preface belief is organically generated in a qualitative fashion. The author is merely reflecting on his humbling resemblance to other authors who are fallible, his own past failings that he subsequently discovered, his imperfection in fact-checking, and so on.
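Although the paradox itself needs no probabilities, the author’s humility has a familiar back-of-the-envelope rationale. The figures and the independence assumption below are illustrative only, not drawn from the entry:

```python
# Suppose an author makes 1,000 assertions and is 99.9% reliable on
# each, with errors independent (both assumptions are illustrative).
p_each_true = 0.999
n_assertions = 1000

# Probability that every assertion is true: roughly 0.37.
p_all_true = p_each_true ** n_assertions

# Probability of at least one error: roughly 0.63.
p_some_error = 1 - p_all_true
```

Even extreme reliability on each individual claim leaves it more likely than not that the book contains some error, which is just what the apologetic preface asserts.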

At this juncture many philosophers join Kyburg in rejecting agglomeration and conclude that it can be rational to have jointly inconsistent beliefs. Kyburg’s solution to the preface paradox raises a methodological question about the nature of paradox. How can paradoxes change our minds if joint inconsistency is permitted? A paradox is commonly defined as a set of propositions that are individually plausible but jointly inconsistent. The inconsistency is the itch that directs us to scratch out a member of the set (or the pain that leads us to withdraw from the stimulus). For instance, much epistemology orbits an ancient riddle posed by the regress of justification, namely, which of the following is false?

  1. A belief can only be justified by another justified belief.
  2. There are no circular chains of justification.
  3. All justificatory chains have a finite length.
  4. Some beliefs are justified.

Foundationalists reject (1). They take some propositions to be self-evident or they permit beliefs to be justified by non-beliefs (such as perceptions or intuitions). Coherentists reject (2). They tolerate some forms of circular reasoning. For instance, Nelson Goodman (1965) has characterized the method of reflective equilibrium as virtuously circular. Charles Sanders Peirce (1933–35, 5.250) may have rejected (3). The first clear rejector is Peter Klein (2007). For a book-length defense, read Scott F. Aikin (2011). Infinitists believe that infinitely long chains of justification are no more impossible than infinitely long chains of causation. Finally, the epistemological anarchist rejects (4). As Paul Feyerabend refrains in Against Method, “Anything goes” (1988, vii, 5, 14, 19, 159).

Formulating a paradox as a set of individually plausible but jointly inconsistent beliefs is a feat of data compression. But if joint inconsistency is rationally tolerable, why do these philosophers bother to offer solutions to paradoxes such as the regress of justification? Why is it irrational to believe each of (1)–(4), despite their joint inconsistency?

Kyburg might answer that there is a scale effect. Although the sensation of joint inconsistency is tolerable when diffusely distributed over a large body of propositions, the sensation becomes an itch when the inconsistency localizes (Knight 2002). That is why paradoxes are always represented as a small set of propositions. A paradox is improved by reducing its membership — as when a member of the set is exposed as superfluous to the inconsistency. (Strictly speaking, a set can only change size in the metaphorical way that a number grows or shrinks.)

If you know that your beliefs are jointly inconsistent but deny this makes for a giant paradox, then you should reject R. M. Sainsbury’s definition of a paradox as “an apparently unacceptable conclusion derived by apparently acceptable reasoning from apparently acceptable premises” (1995, 1). Take the negation of any of your beliefs as a conclusion and your remaining beliefs as the premises. You should judge this jumble argument as valid, and as having premises that you accept, and yet as having a conclusion you reject (Sorensen 2003b, 104–110). If the conclusion of this argument counts as a paradox, then the negation of any of your beliefs counts as a paradox.

The resemblance between the preface paradox and the surprise test paradox becomes more visible through an intermediate case. The preface of Siddhartha Mukherjee’s The Emperor of All Maladies: A Biography of Cancer warns: “In cases where there was no prior public knowledge, or when interviewees requested privacy, I have used a false name, and deliberately confounded identities to make it difficult to track.” (2010, xiv) Those who refuse consent to being lied to are free to close Doctor Mukherjee’s chronicle. But nearly all readers think the physician’s trade-off between lies and new information is acceptable. They rationally anticipate being rationally misled. Nevertheless, these readers learn much about the history of cancer. Similarly, students who are warned that they will receive a surprise test rationally expect to be rationally misled about the day of the test. The prospect of being misled does not lead them to drop the course.

The preface paradox pressures Kyburg to extend his tolerance of joint inconsistency to the acceptance of contradictions. For Makinson’s original specimen is a logician’s regret at affirming contradictions rather than false contingent statements. Consider a logic student who is required to pick one hundred truths from a mixed list of tautologies and contradictions (Sorensen 2001, 156–158). Although the modest student believes each of his answers, \(A_1, A_2, \ldots, A_{100}\), he also believes that at least one of these answers is false. This ensures he believes a contradiction. If any of his answers is false, then the student believes a contradiction (because the only falsehoods on the question list are contradictions). If all of his test answers are true, then the student believes the following contradiction: \({\sim}(A_1 \amp A_2 \amp \ldots \amp A_{100})\). After all, a conjunction of tautologies is itself a tautology and the negation of any tautology is a contradiction.
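The closing step (a conjunction of tautologies is itself a tautology, so its negation is a contradiction) can be verified by brute-force truth tables. A minimal sketch in Python; the two sample formulas are illustrative stand-ins for the student’s answers \(A_1\) and \(A_2\):

```python
from itertools import product

# A formula is modeled as a function from a truth assignment (a dict) to a bool.
def is_tautology(formula, variables):
    """True iff the formula holds under every assignment to its variables."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

def is_contradiction(formula, variables):
    """True iff the formula fails under every assignment."""
    return not any(formula(dict(zip(variables, values)))
                   for values in product([False, True], repeat=len(variables)))

# Two illustrative tautologies standing in for answers A1 and A2.
a1 = lambda v: v['p'] or not v['p']
a2 = lambda v: not (v['q'] and not v['q'])

conjunction = lambda v: a1(v) and a2(v)   # A1 & A2
negation = lambda v: not conjunction(v)   # ~(A1 & A2)

print(is_tautology(conjunction, ['p', 'q']))    # True
print(is_contradiction(negation, ['p', 'q']))   # True
```

So a student who believes each tautologous answer while also believing the negation of their conjunction thereby believes a contradiction.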

If paradoxes were always sets of propositions or arguments or conclusions, then they would always be meaningful. But some paradoxes are semantically flawed (Sorensen 2003b, 352) and some have answers that are backed by a pseudo-argument employing a defective “lemma” that lacks a truth-value. Kurt Grelling’s paradox, for instance, opens with a distinction between autological and heterological words. An autological word describes itself, e.g., ‘polysyllabic’ is polysyllabic, ‘English’ is English, ‘noun’ is a noun, etc. A heterological word does not describe itself, e.g., ‘monosyllabic’ is not monosyllabic, ‘Chinese’ is not Chinese, ‘verb’ is not a verb, etc. Now for the riddle: Is ‘heterological’ heterological or autological? If ‘heterological’ is heterological, then since it describes itself, it is autological. But if ‘heterological’ is autological, then since it is a word that does not describe itself, it is heterological. The common solution to this puzzle is that ‘heterological’, as defined by Grelling, is not a well-defined predicate (Thomson 1962). In other words, “Is ‘heterological’ heterological?” is without meaning. There can be no predicate that applies to all and only those predicates it does not apply to for the same reason that there can be no barber who shaves all and only those people who do not shave themselves.
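The ill-definedness can be made vivid computationally. In the following sketch (the mini-vocabulary and the crude vowel-counting proxy for syllables are illustrative assumptions, not part of Grelling’s puzzle), ‘heterological’ behaves well on ordinary words, but asking it about itself never bottoms out:

```python
# Each predicate word is paired with the property it expresses.
# Counting vowel letters is a crude stand-in for counting syllables.
predicates = {
    'polysyllabic': lambda w: sum(c in 'aeiouy' for c in w) > 1,
    'monosyllabic': lambda w: sum(c in 'aeiouy' for c in w) == 1,
    'short':        lambda w: len(w) <= 5,
}

def autological(word):
    """A word is autological iff it describes itself."""
    return predicates[word](word)

def heterological(word):
    """A word is heterological iff it does not describe itself."""
    return not autological(word)

print(autological('polysyllabic'))    # True: 'polysyllabic' is polysyllabic
print(heterological('monosyllabic'))  # True: 'monosyllabic' is not monosyllabic

# Now add 'heterological' itself to the vocabulary and pose the riddle:
predicates['heterological'] = heterological
try:
    heterological('heterological')
except RecursionError:
    print('no stable answer')         # the definition never terminates on itself
```

The `RecursionError` mirrors the diagnosis above: self-application of ‘heterological’ yields no truth-value at all.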

The eliminativist, who thinks that ‘know’ or ‘justified’ is meaningless, will diagnose the epistemic paradoxes as questions that only appear to be well-formed. For instance, the eliminativist about justification would not accept proposition (4) in the regress paradox: ‘Some beliefs are justified’. His point is not that no beliefs meet the high standards for justification, as an anarchist might deny that any ostensible authorities meet the high standards for legitimacy. Instead, the eliminativist unromantically diagnoses ‘justified’ as a pathological term. Just as the astronomer ignores ‘Are there a zillion stars?’ on the grounds that ‘zillion’ is not a genuine numeral, the eliminativist ignores ‘Are some beliefs justified?’ on the grounds that ‘justified’ is not a genuine adjective.

In the twentieth century, suspicions about conceptual pathology were strongest for the liar paradox: Is ‘This sentence is false’ true? Philosophers who thought that there was something deeply defective with the surprise test paradox assimilated it to the liar paradox. Let us review the assimilation process.

5. Anti-expertise

In the surprise test paradox, the student’s premises are self-defeating. Any reason the student has for predicting a test date or a non-test date is available to the teacher. Thus the teacher can simulate the student’s forecast and know what the student expects.

The student’s overall conclusion, that the test is impossible, is also self-defeating. If the student believes his conclusion then he will not expect the test. So if he receives a test, it will be a surprise. The event will be all the more unexpected because the student has deluded himself into thinking the test is impossible.

Just as someone’s awareness of a prediction can affect the likelihood of it being true, awareness of that sensitivity to his awareness can also affect its truth. If each cycle of awareness is self-defeating, then there is no stable resting place for a conclusion.

Suppose a psychologist offers you a red box and a blue box (Skyrms 1982). The psychologist can predict which box you will choose with 90% accuracy. He has put one dollar in the box he predicts you will choose and ten dollars in the other box. Should you choose the red box or the blue box? You cannot decide. For any choice becomes a reason to reverse your decision.
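The instability shows up in the expected values. A minimal sketch under Skyrms’s stated numbers (90% predictive accuracy, one dollar in the predicted box, ten in the other):

```python
ACCURACY = 0.9  # the predictor's reliability, per Skyrms's example

def expected_value_of_choice(_choice):
    # With probability ACCURACY the predictor foresaw this very choice,
    # so the chosen box holds $1; otherwise it holds $10.
    return ACCURACY * 1 + (1 - ACCURACY) * 10

def expected_value_of_other_box(_choice):
    # Conditional on having settled on a box, the predictor probably
    # foresaw it, so the $10 probably sits in the unchosen box.
    return ACCURACY * 10 + (1 - ACCURACY) * 1

print(round(expected_value_of_choice('red'), 2))     # 1.9, for either box
print(round(expected_value_of_other_box('red'), 2))  # 9.1: the other box always looks better
```

Whichever box you settle on, the same calculation makes the unchosen box look better, so no choice is stable.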

Epistemic paradoxes affect decision theory because rational choices are based on beliefs and desires. If the agent cannot form a rational belief, it is difficult to interpret his behavior as a choice. In decision theory, the whole point of attributing beliefs and desires is to set up practical syllogisms that make sense of actions as means to ends. Subtracting rationality from the agent makes the framework useless. Given this commitment to charitable interpretation, there is no possibility of you rationally choosing an option that you believe to be inferior. So if you choose, you cannot really believe you were operating as an anti-expert, that is, someone whose opinions on a topic are reliably wrong (Egan and Elga 2005).

The medieval philosopher John Buridan (Sophismata, Sophism 13) gave a starkly minimal example of such instability:

(B)
You do not believe this sentence.

If you believe (B) it is false. If you do not believe (B) it is true. You are an anti-expert about (B); your opinion is reliably wrong. An outsider who monitors your opinion can reckon whether (B) is true. But you are not able to exploit your anti-expertise.
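The instability in (B) amounts to the absence of a fixed point. A minimal sketch, modeling your belief state as a boolean:

```python
# Sentence (B): "You do not believe this sentence."
# Given a belief state, the truth value of (B) is the negation of that state.
def truth_of_B(believes_B):
    return not believes_B

# A stable opinion would be a belief state that matches the truth it induces.
stable_states = [b for b in (True, False) if b == truth_of_B(b)]
print(stable_states)  # []: whatever you believe about (B), you are wrong

# An outsider, by contrast, can read off the truth of (B) from your state:
print(truth_of_B(True), truth_of_B(False))  # False True
```

The empty list is the anti-expertise: neither belief state is consistent with the truth value it generates, though an outside monitor computes that truth value effortlessly.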

On the bright side, you are able to exploit the anti-expertise of others. Four out of five anti-experts recommend against reading any further!

5.1 The Knower Paradox

David Kaplan and Richard Montague (1960) think the announcement by the teacher in our surprise exam example is equivalent to the self-referential

(K-3)
Either the test is on Monday but you do not know it before Monday, or the test is on Wednesday but you do not know it before Wednesday, or the test is on Friday but you do not know it before Friday, or this announcement is known to be false.

Kaplan and Montague note that the number of alternative test dates can be increased indefinitely. Shockingly, they claim the number of alternatives can be reduced to zero! The announcement is then equivalent to

(K-0)
This sentence is known to be false.

If (K-0) is true then it is known to be false. Whatever is known to be false is false. Since no proposition can be both true and false, we have proven that (K-0) is false. Given that proof produces knowledge, (K-0) is known to be false. But wait! That is exactly what (K-0) says – so (K-0) must be true.

The (K-0) argument bears a suspicious resemblance to the liar paradox. Subsequent commentators sloppily switch the negation sign in the formal presentations of the reasoning from \(K{\sim}p\) to \({\sim}Kp\) (that is, from ‘It is known that not-\(p\)’ to ‘It is not the case that it is known that \(p\)’). Ironically, this garbled transmission results in a cleaner variation of the knower:

(K)
No one knows this very sentence.

Is (K) true? On the one hand, if (K) is true, then what it says is true, so no one knows it. On the other hand, that very reasoning seems to be a proof of (K). Believing a proposition by seeing it to be proved is sufficient for knowledge of it, so someone must know (K). But then (K) is false! Since no one can know a proposition that is false, (K) is not known.

The skeptic could hope to solve (K-0) by denying that anything is known. This remedy does not cure (K). If nothing is known then (K) is true. Can the skeptic instead challenge the premise that proving a proposition is sufficient for knowing it? This solution would be particularly embarrassing to the skeptic. The skeptic presents himself as a stickler for proof. If it turns out that even proof will not sway him, he bears a damning resemblance to the dogmatist he so frequently chides.

But the skeptic should not lose his nerve. Proof does not always yield knowledge. Consider a student who correctly guesses that a step in his proof is valid. The student does not know the conclusion but did prove the theorem. His instructor might have trouble getting the student to understand why his answer constitutes a valid proof. The intransigence may stem from the prover’s intelligence rather than his stupidity. L. E. J. Brouwer is best known in mathematics for his brilliant fixed point theorem. But a doubtful reading of Immanuel Kant’s philosophy of mathematics led Brouwer to retract his proof. Brouwer also had philosophical doubts about the Axiom of Choice and the Law of Excluded Middle. Brouwer persuaded a minority of mathematicians and philosophers, known as intuitionists, to emulate his abstention from non-constructive proofs. This led them to develop constructive proofs of theorems that were earlier proved by less informative means. Everybody agrees that there is more to learn from a proof of an existential generalization that proceeds from a proved instance than from an unspecific reductio ad absurdum of the corresponding universal generalization. But this does not vindicate the intuitionists’ refusal to be persuaded by the reductio ad absurdum. The intuitionist, even in the eyes of the skeptic, has too high a standard of proof. An excessively high standard of proof can prevent knowledge by proof.

The logical myth that “You cannot prove a universal negative” is itself a universal negative. So it implies its own unprovability. This implication of unprovability is correct but only because the principle is false. For instance, exhaustive inspection proves the universal negative ‘No adverbs appear in this sentence’. A reductio ad absurdum proves the universal negative ‘There is no largest prime number’.
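The second example can even be carried out mechanically. A sketch of Euclid’s classic construction (the particular seed list of primes is an arbitrary illustration): given any finite list of primes, their product plus one has a prime factor missing from the list, which is what refutes ‘there is a largest prime’.

```python
def smallest_prime_factor(n):
    """Return the least prime factor of n (for n > 1) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]   # suppose, for reductio, these were all the primes
candidate = 1
for p in primes:
    candidate *= p
candidate += 1                  # 30031 = 59 * 509

new_prime = smallest_prime_factor(candidate)
print(new_prime, new_prime in primes)   # 59 False: a prime outside the list
```

Any prime factor of the candidate leaves remainder 1 when divided by each listed prime, so it cannot be on the list.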

Trivially, false propositions cannot be proved true. Are there any true propositions that cannot be proved true?

Yes, there are infinitely many. Kurt Gödel’s incompleteness theorem demonstrated that any system that is strong enough to express arithmetic is also strong enough to express a formal counterpart of the self-referential proposition in the surprise test example: ‘This statement cannot be proved in this system’. If the system cannot prove its “Gödel sentence”, then this sentence is true. If the system can prove its Gödel sentence, the system is inconsistent. So either the system is incomplete or inconsistent. (See the entry on Kurt Gödel.)

Of course, this result concerns provability relative to a system. One system can prove another system’s Gödel sentence. Kurt Gödel (1983, 271) thought that proof was not needed for knowledge that arithmetic is consistent.

J. R. Lucas (1964) claims that this reveals human beings are not machines. A computer is a concrete instantiation of a formal system. Hence, its “knowledge” is restricted to what it can prove. By Gödel’s theorem, the computer will be either inconsistent or incomplete. However, any human being could have consistent and complete knowledge of arithmetic. Therefore, necessarily, no human being is a computer.

Critics of Lucas defend the parity between people and computers. They think we have our own Gödel sentences (Lewis 1999, 166–173). In this egalitarian spirit, G. C. Nerlich (1961) models the student’s beliefs in the surprise test example as a logical system. The teacher’s announcement is then a Gödel sentence about the student: There will be a test next week but you will not be able to prove which day it will occur on the basis of this announcement and memory of what has happened on previous exam days. When the number of exam days equals zero the announcement is equivalent to sentence (K).

Several commentators on the surprise test paradox object that interpreting surprise as unprovability changes the topic. Instead of posing the surprise test paradox, it poses a variation of the liar paradox. Other concepts can be blended with the liar. For instance, mixing in alethic notions generates the possible liar: Is ‘This statement is possibly false’ true? (Post 1970) (If it is false, then it is false that it is possibly false. What cannot possibly be false is necessarily true. But if it is necessarily true, then it cannot be possibly false.) Since the semantic concept of validity involves the notion of possibility, one can also derive validity liars such as Pseudo-Scotus’ paradox: ‘Squares are squares, therefore, this argument is invalid’ (Read 1979). Suppose Pseudo-Scotus’ argument is valid. Since the premise is necessarily true, the conclusion would be necessarily true. But the conclusion contradicts the supposition that the argument is valid. Therefore, by reductio, the argument is necessarily invalid. Wait! The argument can be invalid only if it is possible for the premise to be true and the conclusion to be false. But we have already proved that the conclusion of ‘Squares are squares, therefore, this argument is invalid’ is necessarily true. There is no consistent judgment of the argument’s validity. A similar predicament follows from ‘The test is on Friday but this prediction cannot be soundly deduced from this announcement’.

One can mock up a complicated liar paradox that resembles the surprise test paradox. But this complex variant of the liar is not an epistemic paradox. For it turns on the semantic concept of truth rather than an epistemic concept.

5.2 The “Knowability Paradox”

Frederic Fitch (1963) reports that in 1945 he first learned of this proof of unknowable truths from a referee report on a manuscript he never published. Thanks to Joe Salerno’s (2009) archival research, we now know that the referee was Alonzo Church.

Assume there is a true sentence of the form ‘\(p\) but \(p\) is not known’. Although this sentence is consistent, modest principles of epistemic logic imply that sentences of this form are unknowable.

  1. \(K(p \amp {\sim}Kp)\) (Assumption)
  2. \(Kp \amp K{\sim}Kp\) (1, knowledge distributes over conjunction)
  3. \({\sim}Kp\) (2, knowledge implies truth, applied to the second conjunct)
  4. \(Kp \amp {\sim}Kp\) (2, 3, conjunction elimination on the first conjunct, then conjunction introduction)
  5. \({\sim}K(p \amp {\sim}Kp)\) (1, 4, reductio ad absurdum)

Since all the assumptions are discharged, the conclusion is a necessary truth. So it is a necessary truth that \(p \amp {\sim}Kp\) is not known. In other words, \(p \amp {\sim}Kp\) is unknowable.
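The five-step derivation can be checked mechanically. A sketch in Lean 4, treating \(K\) as an arbitrary operator on propositions and taking the two principles used above as hypotheses:

```lean
theorem fitch (K : Prop → Prop)
    (dist : ∀ p q : Prop, K (p ∧ q) → K p ∧ K q)  -- knowledge distributes over ∧
    (fact : ∀ p : Prop, K p → p)                   -- knowledge implies truth
    (p : Prop) : ¬ K (p ∧ ¬ K p) :=
  fun h =>
    have h2 : K p ∧ K (¬ K p) := dist p (¬ K p) h  -- step 2
    have h3 : ¬ K p := fact (¬ K p) h2.right       -- step 3
    h3 h2.left                                      -- steps 4–5: contradiction
```

Because the theorem quantifies over \(p\) and discharges every hypothesis, it delivers the unknowability of every proposition of the form \(p \amp {\sim}Kp\), just as the text states.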

The cautious draw a conditional moral: If there are actual unknown truths, there are unknowable truths. After all, some philosophers will reject the antecedent because they believe there is an omniscient being.

But secular idealists and logical positivists concede that there are some actual unknown truths. How can they continue to believe that all truths are knowable? Astonishingly, these eminent philosophers seem refuted by the pinch of epistemic logic we have just seen. Also injured are those who limit their claims of universal knowability to a restricted domain. For instance, Immanuel Kant (A223/B272) asserts that all empirical propositions are knowable. This pocket of optimism would be enough to ignite the contradiction (Stephenson 2015).

Timothy Williamson doubts that this casualty list is enough for the result to qualify as a paradox:

The conclusion that there are unknowable truths is an affront to various philosophical theories, but not to common sense. If proponents (and opponents) of those theories long overlooked a simple counterexample, that is an embarrassment, not a paradox. (2000, 271)

Those who believe that the Church-Fitch result is a genuine paradox can respond to Williamson with paradoxes that accord with common sense (and science – and religious orthodoxy). For instance, common sense heartily agrees with the conclusion that something exists. But it is surprising that this can be proved without empirical premises. Since the quantifiers of standard logic (first order predicate logic with identity) have existential import, the logician can deduce that something exists from the principle that everything is identical to itself. Most philosophers balk at this simple proof because they feel that the existence of something cannot be proved by sheer logic. They are not balking at the statement that is in accord with common sense (that something exists). They are only balking at the claim that it can be proved by sheer logic. Likewise, many philosophers who agree that there are unknowables balk solely on the grounds that such a profound result cannot be obtained from such limited means.

5.3 Moore’s problem

Church’s referee report was composed in 1945. The timing and structure of his argument for unknowables suggest that Church may have been inspired by G. E. Moore’s (1942, 543) sentence:

(M)
I went to the pictures last Tuesday, but I don’t believe that I did.

Moore’s problem is to explain what is odd about declarative utterances such as (M). This explanation needs to encompass both readings of (M): ‘\(p \amp B{\sim}p\)’ and ‘\(p \amp {\sim}Bp\)’. (This scope ambiguity is exploited by a popular joke: René Descartes sits in a bar, having a drink. The bartender asks him if he would care for another. “I think not,” he says, and disappears. The joke is commonly criticized as fallacious. But it is not, given Descartes’ belief that he is essentially a thinking being.)

The common explanation of Moore’s absurdity is that the speaker has managed to contradict himself without uttering a contradiction. So the sentence is odd because it is a counterexample to the generalization that anyone who contradicts himself utters a contradiction.

There is no problem with third person counterparts of (M). Anyone else can say about Moore, with no paradox, ‘G. E. Moore went to the pictures last Tuesday but he does not believe it’. (M) can also be embedded unparadoxically in conditionals: ‘If I went to the pictures last Tuesday but I do not believe it, then I am suffering from a worrisome lapse of memory’. The past tense is fine: ‘I went to the picture shows last Tuesday but I did not believe it’. The future tense, ‘I went to the picture shows last Tuesday but I will not believe it’, is a bit more of a stretch (Bovens 1995). We tend to picture our future selves as better informed. Later selves are, as it were, experts to whom earlier selves should defer. When an earlier self foresees that his later self believes \(p\), then the prediction is a reason to believe \(p\). Bas van Fraassen (1984, 244) dubs this “the principle of reflection”: I ought to believe a proposition given that I will believe it at some future time.

Robert Binkley (1968) anticipates van Fraassen by applying the reflection principle to the surprise test paradox. The student can foresee that he will not believe the announcement if no test is given by Thursday. The conjunction of the history of testless days and the announcement will imply the Moorean sentence:

(A′)
The test is on Friday but you do not believe it.

Since the less evident member of the conjunction is the announcement, the student will choose not to believe the announcement. At the beginning of the week, the student foresees that his future self may not believe the announcement. So the student on Sunday will not believe the announcement when it is first uttered.

Binkley illuminates this reasoning with doxastic logic (‘doxa’ is Greek for belief). The inference rules for this logic of belief can be understood as idealizing the student into an ideal reasoner. In general terms, an ideal reasoner is someone who infers what he ought and refrains from inferring any more than he ought. Since there is no constraint on his premises, we may disagree with the ideal reasoner. But if we agree with the ideal reasoner’s premises, we appear bound to agree with his conclusion. Binkley specifies some requirements to give teeth to the student’s status as an ideal reasoner: the student is perfectly consistent, believes all the logical consequences of his beliefs, and does not forget. Binkley further assumes that the ideal reasoner is aware that he is an ideal reasoner. According to Binkley, this ensures that if the ideal reasoner believes \(p\), then he believes that he will believe \(p\) thereafter.

Binkley’s account of the student’s hypothetical epistemic state on Thursday is compelling. But his argument for spreading the incredulity from the future to the past is open to three challenges.

The first objection is that it delivers the wrong result. The student is informed by the teacher’s announcement, so Binkley ought not to use a model in which the announcement is as absurd as the conjunction ‘I went to the pictures last Tuesday but I do not believe it’.

Second, the future mental state envisaged by Binkley is only hypothetical: If no test is given by Thursday, the student will find the announcement incredible. At the beginning of the week, the student does not know (or believe) that the teacher will wait that long. The principle of reflection ‘Defer to the opinions of my future self’ does not imply that I should defer to the opinions of my hypothetical future self. For my hypothetical future self is responding to propositions that need not be actually true.

Third, the principle of reflection may need more qualifications than Binkley anticipates. Binkley realizes that an ordinary agent foresees that he will forget details. That is why we write reminders for our own benefit. An ordinary agent foresees periods of impaired judgment. That is why we limit how much money we bring to the bar.

Binkley stipulates that the students do not forget. He needs to add that the students know that they will not forget. For the mere threat of a memory lapse sometimes suffices to undermine knowledge. Consider Professor Anesthesiology’s scheme for surprise tests: “A surprise test will be given either Wednesday or Friday with the help of an amnesia drug. If the test occurs on Wednesday, then the drug will be administered five minutes after Wednesday’s class. The drug will instantly erase memory of the test and the students will fill in the gap by confabulation.” You have just completed Wednesday’s class and so temporarily know that the test will be on Friday. Ten minutes after the class, you lose this knowledge. No drug was administered and there is nothing wrong with your memory. You are correctly remembering that no test was given on Wednesday. However, you do not know your memory is accurate because you also know that if the test was given Wednesday then you would have a pseudo-memory indistinguishable from your present memory. Despite not gaining any new evidence, you change your mind about the test occurring on Wednesday and lose your knowledge that the test is on Friday. (The change of belief is not crucial; you would still lack foreknowledge of the test even if you dogmatically persisted in believing that the test will be on Friday.)

If the students know that they will not forget and know there will be no undermining by outside evidence, then we may be inclined to agree with Binkley’s summary that his idealized student never loses the knowledge he accumulates. As we shall see, however, this overlooks other ways in which rational agents may lose knowledge.

5.4 Blindspots

‘I am a poet but I do not know it’ expresses a proposition I cannot know. But I can reach the proposition by other attitudes such as hoping and wishing. A blindspot for a propositional attitude is a consistent proposition that cannot be accessed by that attitude. Blindspots are relative to the means of reaching the proposition, the person making the attempt, and the time at which he tries. Although I cannot rationally believe ‘Polar bears have black skin but I believe they do not’, you can believe that I mistakenly believe polar bears do not have black skin. The evidence that persuades you I am currently making that mistake cannot persuade me that I am currently making that mistake. This is an asymmetry imposed by rationality rather than irrationality. Attributions of specific errors are personal blindspots for the person who is alleged to have erred.

The anthropologist Gontran de Poncins begins his chapter on the arctic missionary, Father Henry, with a prediction:

I am going to say to you that a human being can live without complaint in an ice-house built for seals at a temperature of fifty-five degrees below zero, and you are going to doubt my word. Yet what I say is true, for this was how Father Henry lived; …. (Poncins 1941 [1988], 240)

Gontran de Poncins’ subsequent testimony might lead the reader to believe someone can indeed be content to live in an ice-house. The same testimony might lead another reader to believe that Poncins is not telling the truth. But no reader ought to believe ‘Someone can be content to live in an ice house and everybody believes that is not so’. That is a universal blindspot.

If Gontran believes a proposition that is a blindspot to his reader, then he cannot furnish good grounds for his reader to share his belief. This holds even if they are ideal reasoners. So one implication of personal blindspots is that there can be disagreement among ideal reasoners because they differ in their blindspots.

This is relevant to the surprise test paradox. The students are the surprisees. Since the announcement entails that the date of the surprise test is a blindspot for them, non-surprisees cannot persuade them.

The same point holds for intra-personal disagreement over time. Evidence that persuaded me on Sunday that ‘This security code is 390524 but on Friday I will not believe it’ should no longer persuade me on Friday (given my belief that the day is Friday). For that proposition is a blindspot to my Friday self.

Although each blindspot is inaccessible, a disjunction of blindspots is normally not a blindspot. I can rationally believe that ‘Either the number of stars is even and I do not believe it, or the number of stars is odd and I do not believe it’. The author’s preface statement that there is some mistake in his book is equivalent to a very long disjunction of blindspots. The author is saying he either falsely believes his first statement or falsely believes his second statement or … or falsely believes his last statement.

The teacher’s announcement that there will be a surprise test is equivalent to a disjunction of future mistakes: ‘Either there will be a test on Monday and the student will not believe it beforehand, or there will be a test Wednesday and the student will not believe it beforehand, or the test is on Friday and the student will not believe it beforehand.’

The points made so far suggest a solution to the surprise test paradox (Sorensen 1988, 328–343). As Binkley (1968) asserts, the test would be a surprise even if the teacher waited until the last day. Yet it can still be true that the teacher’s announcement is informative. At the beginning of the week, the students are justified in believing the teacher’s announcement that there will be a surprise test. This announcement is equivalent to:

(A)
Either
i.
the test is on Monday and the student does not know it before Monday, or
ii.
the test is on Wednesday and the student does not know it before Wednesday, or
iii.
the test is on Friday and the student does not know it before Friday.

Consider the student’s predicament on Thursday (given that the test has not been on Monday or Wednesday). If he knows that no test has been given, he cannot also know that (A) is true. Because that would imply

  iii. The test is on Friday and the student does not know it before Friday.

Although (iii) is consistent and might be knowable by others, (iii) cannot be known by the student before Friday. (iii) is a blindspot for the students but not for, say, the teacher’s colleagues. Hence, the teacher can give a surprise test on Friday because that would force the students to lose their knowledge of the original announcement (A). Knowledge can be lost without forgetting anything.

This solution makes who you are relevant to what you can know. In addition to compromising the impersonality of knowledge, there will be compromise on its temporal neutrality.

Since the surprise test paradox can also be formulated in terms of rational belief, there will be parallel adjustments for what we ought to believe. We are criticized for failures to believe the logical consequences of what we believe and criticized for believing propositions that conflict with each other. Anyone who meets these ideals of completeness and consistency will be unable to believe a range of consistent propositions that are accessible to other complete and consistent thinkers. In particular, they will not be able to believe propositions attributing specific errors to them, and propositions that entail these off-limit propositions.

Some people wear T-shirts with Question Authority! written on them. Questioning authority is generally regarded as a matter of individual discretion. The surprise test paradox shows that it is sometimes mandatory. The student is rationally required to doubt the teacher’s announcement even though the teacher has not given any new evidence of being unreliable. For when only one day remains, the announcement entails (iii), a statement that is impossible for the student to know. The student can foresee that this forced loss of knowledge opens an opportunity for the teacher to give the surprise test. This foreknowledge is available at the time of the announcement.

This solution implies there can be disagreement amongst ideal reasoners who agree on the same impersonal data. Consider the colleagues of the teacher. They are not amongst those that the teacher targets for surprise. Since ‘surprise’ here means ‘surprise to the students’, the teacher’s colleagues can consistently infer that the test will be on the last day from the premise that it has not been given on any previous day. But these colleagues are useless to the students as informants.

6. Dynamic Epistemic Paradoxes

The above anomalies (losing knowledge without forgetting, disagreement amongst equally well-informed ideal reasoners, rationally changing your mind without the acquisition of counter-evidence) would be more tolerable if reinforced by separate lines of reasoning. The most fertile source of this collateral support is in puzzles about updating beliefs.

The natural strategy is to focus on the knower when he is stationary. However, just as it is easier for an Eskimo to observe an arctic fox when it moves, we often get a better understanding of the knower dynamically, when he is in the process of gaining or losing knowledge.

6.1 Meno’s Paradox of Inquiry: A puzzle about gaining knowledge

When on trial for impiety, Socrates traced his inquisitiveness to the Oracle at Delphi (Apology 21d in Cooper 1997). Prior to beginning his mission of inquiry, Chaerephon asked the Oracle: “Who is the wisest of men?” The Oracle answered “No one is wiser than Socrates.” This astounded Socrates because he believed he knew nothing. Whereas a less pious philosopher might have questioned the reliability of the Delphic Oracle, Socrates followed the general practice of treating the Oracle as infallible. The only cogitation appropriate to an infallible answer is interpretation. Accordingly, Socrates resolved his puzzlement by inferring that his wisdom lay in recognizing his own ignorance. While others may know nothing, Socrates knows that he knows nothing.

Socrates continues to be praised for his insight. But his “discovery” is a contradiction. If Socrates knows that he knows nothing, then he knows something (the proposition that he knows nothing) and yet does not know anything (because knowledge implies truth).

Socrates could regain consistency by downgrading his meta-knowledge to the status of a belief. If he believes he knows nothing, then he naturally wishes to remedy his ignorance by asking about everything. This rationale is accepted throughout the early dialogues. But when we reach the Meno, one of his interlocutors has an epiphany. After Meno receives the standard treatment from Socrates about the nature of virtue, Meno discerns a conflict between Socratic ignorance and Socratic inquiry (Meno 80d, in Cooper 1997). How would Socrates recognize the correct answer even if Meno gave it?

The general structure of Meno’s paradox is a dilemma: If you know the answer to the question you are asking, then nothing can be learned by asking. If you do not know the answer, then you cannot recognize a correct answer even if it is given to you. Therefore, one cannot learn anything by asking questions.

The natural solution to Meno’s paradox is to characterize the inquirer as only partially ignorant. He knows enough to recognize a correct answer but not enough to answer on his own. For instance, spelling dictionaries are useless to six-year-old children because they seldom know more than the first letter of the word in question. Ten-year-old children have enough partial knowledge of the word’s spelling to narrow the field of candidates. Spelling dictionaries are also useless to those with full knowledge of spelling and those with total ignorance of spelling. But most of us have an intermediate amount of knowledge.

It is natural to analyze partial knowledge as knowledge of conditionals. The ten-year-old child knows the spoken version of ‘If the spelling dictionary spells the month after January as F-e-b-r-u-a-r-y, then that spelling is correct’. Consulting the spelling dictionary gives him knowledge of the antecedent of the conditional.

Much of our learning from conditionals runs as smoothly as this example suggests. Since we know the conditional, we are poised to learn the consequent merely by learning the antecedent (and by applying the inference rule modus ponens: If \(P\) then \(Q\); \(P\); therefore \(Q\)). But the next section is devoted to some known conditionals that are repudiated when we learn their antecedents.
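The smooth case can be sketched in code. This is merely a toy model, with invented proposition labels and an invented `learn` helper: knowledge is a set of atomic facts plus a set of known conditionals, and learning an antecedent closes the fact set under modus ponens.

```python
# Toy model of learning by modus ponens (all names invented):
# conditionals are (antecedent, consequent) pairs, facts are strings.
known_conditionals = {("dictionary_spells_February_that_way",
                       "that_spelling_is_correct")}
facts = set()

def learn(new_fact):
    """Add a fact, then detach any consequent whose antecedent is now known."""
    facts.add(new_fact)
    for antecedent, consequent in known_conditionals:
        if antecedent in facts:
            facts.add(consequent)

# Consulting the spelling dictionary supplies the antecedent...
learn("dictionary_spells_February_that_way")

# ...and modus ponens yields the consequent.
print("that_spelling_is_correct" in facts)  # True
```

The dogmatism paradox of the next section concerns conditionals for which this smooth closure fails: learning the antecedent can destroy knowledge of the conditional itself.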

6.2 Dogmatism paradox: A puzzle about losing knowledge

Saul Kripke’s ruminations on the surprise test paradox led him to a paradox about dogmatism. He lectured on both paradoxes at Cambridge University to the Moral Sciences Club in 1972. (A descendant of this lecture now appears as Kripke 2011.) Gilbert Harman transmitted Kripke’s new paradox as follows:

If I know that \(h\) is true, I know that any evidence against \(h\) is evidence against something that is true; I know that such evidence is misleading. But I should disregard evidence that I know is misleading. So, once I know that \(h\) is true, I am in a position to disregard any future evidence that seems to tell against \(h\). (1973, 148)

Dogmatists accept this reasoning. For them, knowledge closes inquiry. Any “evidence” that conflicts with what is known can be dismissed as misleading evidence. Forewarned is forearmed.

This conservativeness crosses the line from confidence to intransigence. To illustrate the excessive inflexibility, here is a chain argument for the dogmatic conclusion that my reliable colleague Doug has given me a misleading report (corrected from Sorensen 1988b):

(C\(_1\))
My car is in the parking lot.
(C\(_2\))
If my car is in the parking lot and Doug provides evidence that my car is not in the parking lot, then Doug’s evidence is misleading.
(C\(_3\))
If Doug reports he saw a car just like mine towed from the parking lot, then his report is misleading evidence.
(C\(_4\))
Doug reports that a car just like mine was towed from the parking lot.
(C\(_5\))
Doug’s report is misleading evidence.

By hypothesis, I am justified in believing (C\(_1)\). Premise (C\(_2)\) is a certainty because it is analytically true. The argument from (C\(_1)\) and (C\(_2)\) to (C\(_3)\) is valid. Therefore, my degree of confidence in (C\(_3)\) must equal my degree of confidence in (C\(_1)\). Since we are also assuming that I gain sufficient justification for (C\(_4)\), it seems to follow that I am justified in believing (C\(_5)\) by modus ponens. Similar arguments will lead me to dismiss further evidence such as a phone call from the towing service and my failure to see my car when I confidently stride over to the parking lot.

Gilbert Harman diagnoses the paradox as follows:

The argument for paradox overlooks the way actually having evidence can make a difference. Since I now know [my car is in the parking lot], I now know that any evidence that appears to indicate something else is misleading. That does not warrant me in simply disregarding any further evidence, since getting that further evidence can change what I know. In particular, after I get such further evidence I may no longer know that it is misleading. For having the new evidence can make it true that I no longer know that new evidence is misleading. (1973, 149)

In effect, Harman denies the hardiness of knowledge. The hardiness principle states that one knows only if there is no evidence such that if one knew about the evidence one would not be justified in believing one’s conclusion.

Harman’s conclusion that new knowledge can undermine old knowledge can be applied to the surprise test paradox: The students lose knowledge of the test announcement even though they do not forget the announcement or do anything else incompatible with their credentials as ideal reasoners. A student on Thursday is better informed about the outcomes of test days than he was on Sunday. He knows the test was not on Monday and not on Wednesday. But he can only predict that the test is on Friday if he continues to know the announcement. The extra knowledge of the testless days undermines knowledge of the announcement.

Most epistemologists accepted Harman’s appeal to defeaters. Some have tried to make it more precise with details about updating indicative conditionals (Sorensen 1988b). This may vindicate and generalize Harman’s prediction that the future evidence will change your mind about what is misleading evidence. Knowledge of such conditionals is useless for a future modus ponens. The dogmatist correctly says we know the conditional “If I know that \(p\), then any evidence conflicting with \(p\) is misleading evidence”. Indeed, it is a tautology! But the dogmatist fails to recognize that this known tautology is useless knowledge. Acquiring the misleading evidence will make me stop knowing \(p\). If an auditor foresees being presented with a biased list of facts, he may utter the tautology to his assistant to convey another proposition for which he has empirical support. That empirical proposition need not be useless knowledge. When the predicted list is presented, the forearmed auditor ignores the facts. But the basis is not his a priori knowledge of the dogmatist’s tautology.

Kripke notes that this solution will not stop the quick-thinking dogmatist who takes measures to prevent his acquisition of the evidence that he now deems misleading (Kripke 2011, 43–44). A second worry is that the dogmatist can still ignore weak evidence. If I know a coin is fair, then I know that if the first twenty tosses come out heads, then that is misleading evidence that the coin is not fair. Such a run does not defeat my knowledge claim. (Substitute a shorter run if you think it does defeat it.) So Harman’s solution does not apply. Yet it is dogmatic to ignore this evidence.

In addition to this problem of weak dogmatism, Rachel Fraser (2022) adds a third problem of dogmatic bootstrapping. When Robert Millikan and Harvey Fletcher measured elementary electric charge with tiny charged droplets, they discounted some of the drops as misleadingly wide of the plausible interval for the true value. Drops centrally located within the interval were “beauties”. Editing out the outliers gave Millikan a more precise measurement and a Nobel Prize in 1923. In 1978, the physicist Gerald Holton went through the notebooks and was shocked by how much contrary data had gone unreported by Millikan. Fraser thinks there is a vicious circularity in data purification.

But the bootstrapping dogmatist will regard the circularity as virtuous. When the evidence is a mix of strong evidence and weak counterevidence, the stronger body of evidence exposes the weaker body as misleading evidence. Think of a jigsaw puzzle that has been polluted with stray pieces from another puzzle. When you manage to get a complete fit with a subset of the pieces, the remaining pieces are removed from view. Dimming the misleading evidence allows the leading evidence to shine more visibly. Therefore, we can indeed be more confident than we were before dismissing the weak evidence. Millikan was being a responsible gatekeeper rather than a wishful thinker. Just as data should control theory, theory should control data. The experimenter must strike a delicate balance between discounting too much contrary data and discounting too little. Proposed solutions to the dogmatism paradox have trouble sustaining this balance.
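The bootstrapping can be illustrated with a small numerical sketch. The droplet readings and the plausible interval below are invented, not Millikan’s data; the point is only that discarding readings outside a theoretically supplied interval tightens the estimate, and that the interval itself is the circular ingredient Fraser worries about.

```python
import statistics

# Hypothetical droplet readings in arbitrary units; the last two are
# "polluted" values far from the plausible interval around 1.0.
readings = [0.98, 1.01, 0.99, 1.02, 1.00, 1.42, 0.55]

# The circular step: the plausible interval is supplied by the very
# theory the data are supposed to test.
plausible = (0.90, 1.10)
beauties = [r for r in readings if plausible[0] <= r <= plausible[1]]

print(statistics.mean(beauties))  # the purified estimate: 1.0
print(statistics.stdev(readings) > statistics.stdev(beauties))  # True
```

The purified estimate is more precise, exactly as the bootstrapping dogmatist promises; what the sketch cannot supply is an independent warrant for the interval that did the purifying.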

I. J. Good (1967) demonstrated that gathering evidence maximizes expected value given that the cost of the evidence is negligible. Under this simplifying assumption, Good shows that departing from the principle of total evidence is at least imprudent. Given epistemic utilitarianism, this practical irrationality becomes theoretical irrationality. Bob Beddor (2019) now adds the premise that it is irrational to intend to do what one foresees to be irrational. For instance, if you were offered a million dollars to drink a toxin tomorrow that will make you ill for a day, you could profit from the offer (Kavka 1983). But if the million would be earned immediately upon your intending to drink the toxin, then you could not profit because you know there would be no reason to follow through. By analogy, Beddor concludes that it would be irrational to intend to avoid counterevidence (Beddor 2019, 738). One is never entitled to discard evidence, even after it has been foreseen as misleading evidence.
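Good’s result can be checked with a toy calculation. The prior, likelihoods, and payoffs below are invented for the illustration: when the evidence is free, the expected value of choosing after looking is at least the expected value of choosing now.

```python
# Invented setup: hypothesis H with prior 0.5; a free test with
# P(pos|H) = 0.8 and P(pos|~H) = 0.2; two acts: "bet" pays 1 iff H,
# "safe" pays 0.6 regardless.
prior = 0.5
p_pos_given_h, p_pos_given_not_h = 0.8, 0.2

def best_ev(p_h):
    """Expected value of the better act, given credence p_h in H."""
    return max(p_h * 1.0, 0.6)  # bet vs. safe

# Deciding now, ignoring the free evidence:
ev_ignore = best_ev(prior)

# Deciding after observing the test (Bayes update on each outcome):
p_pos = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h
p_h_given_pos = prior * p_pos_given_h / p_pos
p_h_given_neg = prior * (1 - p_pos_given_h) / (1 - p_pos)
ev_look = (p_pos * best_ev(p_h_given_pos)
           + (1 - p_pos) * best_ev(p_h_given_neg))

print(ev_ignore, ev_look)  # 0.6 vs. 0.7: looking never hurts, here it helps
```

Looking helps precisely when some possible outcome would change which act is best; when no outcome would, the two expected values coincide, which is why Good’s inequality is weak rather than strict.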

But if the cost of evidence is significant, the connection between practical rationality and theoretical rationality favors ignoring counterevidence. Judging that \(p\) embodies a resolution not to inquire further into the question of whether \(p\) is true. Or so answers the volitionist (Fraser 2022).

6.3 The Future of Epistemic Paradoxes

We cannot predict that any specific new epistemic paradox awaits discovery. To see why, consider the prediction Jon Wynne-Tyson attributes to Leonardo Da Vinci: “I have learned from an early age to abjure the use of meat, and the time will come when men such as I will look upon the murder of animals as they now look upon the murder of men.” (1985, 65) By predicting this progress, Leonardo inadvertently reveals he already believes that the murder of animals is the same as the murder of men. If you believe that a proposition is true but will be first believed at a later time, then you already believe it – and so are inconsistent. (The actual truth is irrelevant.)

Specific regress can be anticipated. When I try to predict my first acquisition of a specific truth, I pre-empt myself. When I try to predict my first acquisition of a specific falsehood, there is no pre-emption.

There would be no problem with predicting progress if Leonardo thinks the moral progress lies in the moral preferability of the vegetarian belief rather than the truth of the matter. One might admire vegetarianism without accepting the correctness of vegetarianism. But Leonardo is endorsing the correctness of the belief. His sentence embodies a Moorean absurdity. It is like saying ‘Leonardo took twenty-five years to complete The Virgin on the Rocks but I will first believe so tomorrow’. (This absurdity will prompt some to object that I have uncharitably interpreted Leonardo; he must have intended to make an exception for himself and only be referring to men of his kind.)

I cannot specifically anticipate the first acquisition of the true belief that \(p\). For that prediction would show that I already have the true belief that \(p\). The truth cannot wait. The impatience of the truth imposes a limit on the prediction of discoveries.

Bibliography

  • Aikin, Scott F., 2011, Epistemology and the Regress Problem, London: Routledge.
  • Anderson, C. Anthony, 1983, “The Paradox of the Knower”, The Journal of Philosophy, 80: 338–355.
  • Beddor, Bob, 2019, “The Toxin and the Dogmatist”, Australasian Journal of Philosophy, 97(4): 727–740.
  • Binkley, Robert, 1968, “The Surprise Examination in Modal Logic”, Journal of Philosophy, 65(2): 127–136.
  • Bommarito, Nicolas, 2010, “Rationally Self-Ascribed Anti-Expertise”, Philosophical Studies, 151: 413–419.
  • Bovens, Luc, 1995, “‘P and I will believe that not-P’: Diachronic Constraints on Rational Belief”, Mind, 104(416): 737–760.
  • Burge, Tyler, 1984, “Epistemic Paradox”, Journal of Philosophy, 81(1): 5–29.
  • –––, 1978a, “Buridan and Epistemic Paradox”, Philosophical Studies, 34: 21–35.
  • Buridan, John, 1982, John Buridan on Self-Reference: Chapter Eight of Buridan’s ‘Sophismata’, G. E. Hughes (ed. & tr.), Cambridge: Cambridge University Press.
  • Carnap, Rudolf, 1950, The Logical Foundations of Probability, Chicago: University of Chicago Press.
  • Christensen, David, 2010, “Higher Order Evidence”, Philosophy and Phenomenological Research, 81: 185–215.
  • Cicero, On the Nature of the Gods, Academica, H. Rackham (trans.), Cambridge, MA: Loeb Classical Library, 1933.
  • Collins, Arthur, 1979, “Could our beliefs be representations in our brains?”, Journal of Philosophy, 74(5): 225–43.
  • Conee, Earl, 2004, “Heeding Misleading Evidence”, Philosophical Studies, 103: 99–120.
  • Cooper, John (ed.), 1997, Plato: The Complete Works, Indianapolis: Hackett.
  • DeRose, Keith, 2017, The Appearance of Ignorance: Knowledge, Skepticism, and Context (Volume 2), Oxford: Oxford University Press.
  • Egan, Andy and Adam Elga, 2005, “I Can’t Believe I’m Stupid”, Philosophical Perspectives, 19(1): 77–93.
  • Feyerabend, Paul, 1988, Against Method, London: Verso.
  • Fitch, Frederic, 1963, “A Logical Analysis of Some Value Concepts”, Journal of Symbolic Logic, 28(2): 135–142.
  • Fraser, Rachel, 2022, “The Will in Belief”, Oxford Studies in Epistemology.
  • Gödel, Kurt, 1983, “What is Cantor’s Continuum Problem?”, Philosophy of Mathematics, Paul Benacerraf and Hilary Putnam (eds.), Cambridge: Cambridge University Press, pp. 258–273.
  • Good, I. J., 1967, “On the Principle of Total Evidence”, British Journal for the Philosophy of Science, 17(4): 319–321.
  • Hacking, Ian, 1975, The Emergence of Probability, Cambridge: Cambridge University Press.
  • Hajek, Alan, 2005, “The Cable Guy paradox”, Analysis, 65(2): 112–119.
  • Harman, Gilbert, 1968, “Knowledge, Inference, and Explanation”, American Philosophical Quarterly, 5(3): 164–173.
  • –––, 1973, Thought, Princeton: Princeton University Press.
  • Hawthorne, John, 2004, Knowledge and Lotteries, Oxford: Clarendon Press.
  • Hein, Piet, 1966, Grooks, Cambridge, MA: MIT Press.
  • Hintikka, Jaakko, 1962, Knowledge and Belief, Ithaca, NY: Cornell University Press.
  • Holliday, Wesley, 2016, “On Being in an Undiscoverable Position”, Thought, 5(1): 33–40.
  • –––, 2017, “Epistemic Logic and Epistemology”, The Handbook of Formal Philosophy, Sven Ove Hansson and Vincent F. Hendricks (eds.), Dordrecht: Springer.
  • Hughes, G. E., 1982, John Buridan on Self-Reference, Cambridge: Cambridge University Press.
  • Immerman, Daniel, 2017, “Question Closure to Solve the Surprise Test Paradox”, Synthese, 194(11): 4583–4596.
  • Kaplan, David and Richard Montague, 1960, “A Paradox Regained”, Notre Dame Journal of Formal Logic, 1: 79–90.
  • Kavka, Gregory, 1983, “The Toxin Puzzle”, Analysis, 43(1): 33–36.
  • Klein, Peter, 2007, “How to be an Infinitist about Doxastic Justification”, Philosophical Studies, 134: 25–29.
  • Knight, Kevin, 2002, “Measuring Inconsistency”, Journal of Philosophical Logic, 31(1): 77–98.
  • Kripke, Saul, 2011, “Two Paradoxes of Knowledge”, in S. Kripke, Philosophical Troubles: Collected Papers (Volume 1), New York: Oxford University Press, pp. 27–51.
  • Kvanvig, Jonathan L., 1998, “The Epistemic Paradoxes”, Routledge Encyclopedia of Philosophy, London: Routledge.
  • Kyburg, Henry, 1961, Probability and the Logic of Rational Belief, Middletown: Wesleyan University Press.
  • Lewis, David, 1998, “Lucas against Mechanism”, Papers in Philosophical Logic, Cambridge: Cambridge University Press, pp. 166–9.
  • Lewis, David and Jane Richardson, 1966, “Scriven on Human Unpredictability”, Philosophical Studies, 17(5): 69–74.
  • Lucas, J. R., 1964, “Minds, Machines and Gödel”, in Minds and Machines, Alan Ross Anderson (ed.), Englewood Cliffs, NJ: Prentice Hall, pp. 112–7.
  • Makinson, D. C., 1965, “The Paradox of the Preface”, Analysis, 25: 205–207.
  • Malcolm, Norman, 1963, Knowledge and Certainty, Englewood Cliffs, NJ: Prentice Hall.
  • Moore, G. E., 1942, “A Reply to My Critics”, The Philosophy of G. E. Moore, P. A. Schilpp (ed.), Evanston, IL: Northwestern University.
  • Nerlich, G. C., 1961, “Unexpected Examinations and Unprovable Statements”, Mind, 70(280): 503–514.
  • Peirce, Charles Sanders, 1931–1935, The Collected Works of Charles Sanders Peirce, Charles Hartshorne and Paul Weiss (eds.), Cambridge, MA: Harvard University Press.
  • Plato, Plato: The Complete Works, John M. Cooper (ed.), Indianapolis: Hackett, 1997.
  • Poncins, Gontran de, 1941 [1988], Kabloona, in collaboration with Lewis Galantiere, New York: Carroll & Graf Publishers, 1988.
  • Post, John F., 1970, “The Possible Liar”, Noûs, 4: 405–409.
  • Quine, W. V. O., 1953, “On a so-called Paradox”, Mind, 62(245): 65–7.
  • –––, 1969, “Epistemology Naturalized”, in Ontological Relativity and Other Essays, New York: Columbia University Press, pp. 69–90.
  • –––, 1987, Quiddities, Cambridge, MA: Harvard University Press.
  • Read, Stephen, 1979, “Self-Reference and Validity”, Synthese, 42(2): 265–74.
  • Sainsbury, R. M., 1995, Paradoxes, Cambridge: Cambridge University Press.
  • Salerno, Joseph, 2009, New Essays on the Knowability Paradox, New York: Oxford University Press.
  • Scriven, Michael, 1964, “An Essential Unpredictability in Human Behavior”, in Scientific Psychology: Principles and Approaches, Benjamin B. Wolman and Ernest Nagel (eds.), New York: Basic Books, pp. 411–25.
  • Sextus Empiricus, Outlines of Pyrrhonism, R. G. Bury (trans.), Cambridge, MA: Harvard University Press, 1933.
  • Skyrms, Brian, 1982, “Causal Decision Theory”, Journal of Philosophy, 79(11): 695–711.
  • Smith, Martin, 2016, Between Probability and Certainty, Oxford: Oxford University Press.
  • Sorensen, Roy, 1988a, Blindspots, Oxford: Clarendon Press.
  • –––, 1988b, “Dogmatism, Junk Knowledge, and Conditionals”, Philosophical Quarterly, 38: 433–454.
  • –––, 2001, Vagueness and Contradiction, Oxford: Clarendon Press.
  • –––, 2003a, “Paradoxes of Rationality”, in The Handbook of Rationality, Al Mele (ed.), Oxford: Oxford University Press, pp. 257–75.
  • –––, 2003b, A Brief History of the Paradox, New York: Oxford University Press.
  • Stephenson, Andrew, 2015, “Kant, the Paradox of Knowability, and the Meaning of Experience”, Philosophers’ Imprint, 15(17): 1–19.
  • Thomson, J. F., 1962, “On Some Paradoxes”, in Analytical Philosophy, R. J. Butler (ed.), New York: Barnes & Noble, pp. 104–119.
  • Tymoczko, Thomas, 1984, “An Unsolved Puzzle about Knowledge”, The Philosophical Quarterly, 34: 437–58.
  • van Fraassen, Bas, 1984, “Belief and the Will”, Journal of Philosophy, 81: 235–256.
  • –––, 1995, “Belief and the Problem of Ulysses and the Sirens”, Philosophical Studies, 77: 7–37.
  • Weiss, Paul, 1952, “The Prediction Paradox”, Mind, 61(242): 265–9.
  • Williamson, Timothy, 2000, Knowledge and its Limits, Oxford: Oxford University Press.
  • Wynne-Tyson, Jon, 1985, The Extended Circle, Fontwell, Sussex: Centaur Press.

Copyright © 2022 by Roy Sorensen <roy.sorensen@austin.utexas.edu>
