Stanford Encyclopedia of Philosophy

Imprecise Probabilities

First published Sat Dec 20, 2014; substantive revision Tue Feb 19, 2019

It has been argued that imprecise probabilities are a natural and intuitive way of overcoming some of the issues with orthodox precise probabilities. Models of this type have a long pedigree, and interest in such models has been growing in recent years. This article introduces the theory of imprecise probabilities, discusses the motivations for their use and their possible advantages over the standard precise model. It then discusses some philosophical issues raised by this model. There is also a historical appendix which provides an overview of some important thinkers who appear sympathetic to imprecise probabilities.

1. Introduction

Probability theory has been a remarkably fruitful theory, with applications in almost every branch of science. In philosophy, some important applications of probability theory go by the name Bayesianism; this has been an extremely successful program (see for example Howson and Urbach 2006; Bovens and Hartmann 2003; Talbott 2008). But probability theory seems to impute much richer and more determinate attitudes than seems warranted. What should your rational degree of belief be that global mean surface temperature will have risen by more than four degrees by 2080? Perhaps it should be 0.75? Why not 0.75001? Why not 0.7497? Is that event more or less likely than getting at least one head on two tosses of a fair coin? It seems there are many events about which we can (or perhaps should) take less precise attitudes than orthodox probability requires. Among the reasons to question the orthodoxy, it seems that the insistence that states of belief be represented by a single real-valued probability function is quite an unrealistic idealisation, and one that brings with it some rather awkward consequences that we shall discuss later. Indeed, it has long been recognised that probability theory offers only a rather idealised model of belief. As far back as the mid-nineteenth century, we find George Boole saying:

It would be unphilosophical to affirm that the strength of that expectation, viewed as an emotion of the mind, is capable of being referred to any numerical standard. (Boole 1958 [1854]: 244)

For these, and many other reasons, there is growing interest in Imprecise Probability (IP) models. Broadly construed, these are models of belief that go beyond the probabilistic orthodoxy in one way or another.

IP models are used in a number of fields including:

  • Statistics (Walley 1991; Ruggeri et al. 2005; Augustin et al. 2014)
  • Psychology of reasoning (Pfeifer and Kleiter 2007)
  • Linguistic processing of uncertainty (Wallsten and Budescu 1995)
  • Neurological response to ambiguity and conflict (Smithson and Pushkarskaya 2015)
  • Philosophy (Levi 1980; Joyce 2011; Sturgeon 2008; Kaplan 1983; Kyburg 1983)
  • Behavioural economics (Ellsberg 1961; Camerer and Weber 1992; Smithson and Campbell 2009)
  • Mathematical economics (Gilboa 1987)
  • Engineering (Ferson and Ginzburg 1996; Ferson and Hajagos 2004; Oberguggenberger 2014)
  • Computer science (Cozman 2000; Cozman and Walley 2005)
  • Scientific computing (Oberkampf and Roy 2010, chapter 13)
  • Physics (Suppes and Zanotti 1991; Hartmann and Suppes 2010; Frigg et al. 2014)

This article identifies a variety of motivations for IP models; introduces various formal models that are broadly in this area; and discusses some open problems for these frameworks. The focus will be on formal models of belief.

1.1 A summary of terminology

Throughout the article I adopt the convention of discussing the beliefs of an arbitrary intentional agent whom I shall call “you”. Prominent advocates of IP (including Good and Walley) adopt this convention.

This article is about formal models of belief and as such, there needs to be a certain amount of formal machinery introduced. There is a set of states \(\Omega\) which represents the ways the world could be. Sometimes \(\Omega\) is described as the set of “possible worlds”. The objects of belief—the things you have beliefs about—can be represented by subsets of the set of ways the world could be, \(\Omega\). We can identify a proposition \(X\) with the set of states which make it true, or, with the set of possible worlds where it is true. If you have beliefs about \(X\) and \(Y\) then you also have beliefs about “\(X\cap Y\)”, “\(X \cup Y\)” and “\(\neg X\)”; “\(X\) and \(Y\)”, “\(X\) or \(Y\)” and “it is not the case that \(X\)” respectively. The set of objects of belief is the power set of \(\Omega\), or if \(\Omega\) is infinite, some measurable algebra of the subsets of \(\Omega\).

The standard view of degree of belief is that degrees of belief are represented by real numbers and belief states by probability functions; this is a normative requirement. Probability functions are functions, \(p\), from the algebra of beliefs to real numbers satisfying:

  • \(0 = p(\emptyset) \le p(X) \le p(\Omega) = 1\)
  • If \(X\cap Y = \emptyset\) then \(p(X\cup Y) = p(X) + p(Y)\)

So if your belief state or doxastic state is represented by \(p\), then your degree of belief in \(X\) is the value assigned to \(X\) by \(p\); that is, \(p(X)\).

Further, learning in the Bayesian model of belief is effected by conditionalisation. If you learn a proposition \(E\) (and nothing further) then your post-learning belief in \(X\) is given by \(p(X\mid E) = p(X\cap E)/p(E)\).
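On a finite state space, this precise model can be sketched in a few lines. The snippet below is only an illustration; the helper names (`prob`, `conditionalise`) are mine, not standard terminology from the literature.

```python
def conditionalise(p, E):
    """Bayesian update: return p(. | E), with zero mass outside E."""
    pE = sum(p[w] for w in E)
    if pE == 0:
        raise ValueError("cannot conditionalise on a zero-probability event")
    return {w: (p[w] / pE if w in E else 0.0) for w in p}

def prob(p, X):
    """p(X) for a proposition X represented as a set of states."""
    return sum(p[w] for w in X)

# Omega = {1,...,6}: one roll of a fair die.
p = {w: 1 / 6 for w in range(1, 7)}
even, high = {2, 4, 6}, {4, 5, 6}

print(prob(p, even))                         # ≈ 1/2
print(prob(conditionalise(p, high), even))   # ≈ 2/3: learning "high" raises belief in "even"
```

Propositions are sets of states, so conjunction, disjunction and negation are just intersection, union and complement, matching the identification of propositions with subsets of \(\Omega\) above.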

The alternative approach that will be the main focus of this article is the approach that represents belief by a set of probability functions instead of a single probability. So instead of having some \(p\) represent your belief state, you have \(P\), a set of such functions. van Fraassen (1990) calls this your representor; Levi calls it a credal set. I will discuss various ways you might interpret the representor later, but for now we can think of it as follows. Your representor is a credal committee: each probability function in it represents the opinions of one member of a committee that, collectively, represents your beliefs.

From these concepts we can define some “summary functions” that are often used in discussions of imprecise probabilities. Often, it is assumed that your degree of belief in a proposition, \(X\), is represented by \(P(X) = \{p(X) : p\in P \}\). I will adopt this notational convention, with the proviso that I don’t take \(P(X)\) to be an adequate representation of your degree of belief in \(X\). Your lower envelope of \(X\) is: \(\underline{P}(X)=\inf P(X)\). Likewise, your upper envelope is \(\overline{P}(X)=\sup P(X)\). They are conjugates of each other in the following sense: \(\overline{P}(X) = 1 - \underline{P}(\neg X)\).

The standard assumption about updating for sets of probabilities is that your degree of belief in \(X\) after learning \(E\) is given by \(P(X\mid E) = \{p(X\mid E) : p\in P, p(E) > 0\}\). Your belief state after having learned \(E\) is \(P(\cdot\mid E) = \{p(\cdot\mid E) : p\in P, p(E) > 0\}\). That is, your updated belief state is the set of conditional probabilities.
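Continuing the finite sketch, a credal set can be represented as a list of probability dictionaries. The envelope and update functions below are illustrative helpers of my own naming, and the five-member set is only a coarse discretisation of a continuous representor.

```python
def prob(p, X):
    return sum(p[w] for w in X)

def lower(P, X):
    """Lower envelope: the infimum of p(X) over the credal set."""
    return min(prob(p, X) for p in P)

def upper(P, X):
    """Upper envelope: the supremum of p(X) over the credal set."""
    return max(prob(p, X) for p in P)

def update(P, E):
    """Element-wise conditionalisation, discarding members with p(E) = 0."""
    new_P = []
    for p in P:
        pE = prob(p, E)
        if pE > 0:
            new_P.append({w: (p[w] / pE if w in E else 0.0) for w in p})
    return new_P

# A coin of unknown bias, with the representor coarsely discretised.
P = [{"H": b, "T": 1 - b} for b in (0.0, 0.25, 0.5, 0.75, 1.0)]
H, T = {"H"}, {"T"}

print(lower(P, H), upper(P, H))          # 0.0 1.0: a vacuous belief about heads
print(upper(P, H) == 1 - lower(P, T))    # True: the conjugacy relation
print(len(update(P, H)))                 # 4: the member with p(H) = 0 is discarded
```

Note how the committee member that assigned heads probability zero drops out when heads is learned, exactly as the \(p(E) > 0\) condition in the definition requires.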

I would like to emphasise already that these summary functions—\(P(\cdot)\), \(\underline{P}(\cdot)\) and \(\overline{P}(\cdot)\)—are not properly representative of your belief. Information is missing from the picture. This issue will be important later, in our discussion of dilation.

We shall need to talk about decision making, so we shall introduce a simple model of decisions in terms of gambles. We can view bounded real-valued functions \(f\) as “gambles”: functions from some set \(\Omega\) to real numbers. A gamble \(f\) pays out \(f(\omega)\) if \(\omega\) is the true state. We assume that you value each further unit of this good the same (the gambles’ pay-outs are linear in utility) and that you are indifferent to concerns of risk. Your attitude to these gambles reflects your attitudes about how likely the various contingencies in \(\Omega\) are. That is, gambles that win big if \(\omega\) obtains look more attractive the more likely you consider \(\omega\) to be. In particular, consider the indicator function \(I_X\) on a proposition \(X\), which outputs \(1\) if \(X\) is true at the actual world and \(0\) otherwise. These are a particular kind of gamble, and your attitude towards them straightforwardly reflects your degree of belief in the proposition: the more valuable you consider \(I_X\), the more likely you consider \(X\) to be. Call these indicator gambles.

Gambles are evaluated with respect to their expected value. Call \(E_{p}(f)\) the expected value of gamble \(f\) with respect to probability \(p\), and define it as:

\[ E_p(f) = \sum_{\omega\in\Omega} p(\omega) f(\omega) \]

How valuable you consider \(f\) to be in state \(\omega\) depends on how big \(f(\omega)\) is. How important the goodness of \(f\) in \(\omega\) is depends on how likely the state is, measured by \(p(\omega)\). The expectation is then the sum of these probability-weighted values. See Briggs (2014) for more discussion of expected utility.

Then we define \(\mathbf{E}_{P}(f)\) as \(\mathbf{E}_{P}(f) = \{E_{p}(f) : p\in P\}\). That is, the set of expected values for members of \(P\). The same proviso holds of \(\mathbf{E}_{P}(f)\) as held of \(P(X)\): that is, the extent to which \(\mathbf{E}_{P}(f)\) fully represents your attitude to the value of a gamble is open to question. I will often drop the subscript “\(P\)” when no ambiguity arises from doing so. Further technical details can be found in the formal appendix.
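The relationship between expectations and indicator gambles can be made concrete in the same finite sketch as before (again with helper names of my own choosing): the expected value of \(I_X\) under \(p\) is just \(p(X)\), so the set of expectations of an indicator gamble recovers \(P(X)\).

```python
def expectation(p, f):
    """E_p(f): the probability-weighted sum of f's pay-outs."""
    return sum(p[w] * f[w] for w in p)

def expectation_set(P, f):
    """The set {E_p(f) : p in P}, returned here as a sorted list."""
    return sorted(expectation(p, f) for p in P)

# Three committee members with different opinions about a coin.
P = [{"H": b, "T": 1 - b} for b in (0.25, 0.5, 0.75)]
I_H = {"H": 1.0, "T": 0.0}   # the indicator gamble on heads

# The expected value of an indicator gamble is the probability of the
# proposition, so the set of expectations coincides with P(H).
print(expectation_set(P, I_H))   # [0.25, 0.5, 0.75]
```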

1.2 Some important distinctions

There are a number of distinctions that it is important to make in what follows.

An important parameter in an IP theory is the normative force the theory is supposed to have. Is imprecision obligatory or is it merely permissible? Is it always permissible/obligatory, or only sometimes? Or we might be interested in a purely descriptive project of characterising the credal states of actual agents, with no interest in normative questions. This last possibility will concern us little in this article.

It is also helpful to distinguish belief itself from the elicitation of that belief and also from your introspective access to those beliefs. The same goes for other attitudes (values, utilities and so on). It may be that you have beliefs that are not amenable to (precise) elicitation, in practice or even in principle. Likewise, your introspective access to your own beliefs might be imperfect. Such imperfections could be a source of imprecision. Bradley (2009) distinguishes many distinct sources of imperfect introspection. The imperfection could arise from your unawareness of the prospect in question, the boundedness of your reasoning, ignorance of relevant contingencies, or because of conflict in your evidence or in your values (pp. 240–241). See Bradley and Drechsler (2014) for further discussion of types of uncertainty.

There are a variety of aspects of a body of evidence that could make a difference to how you ought to respond to it. We can ask how much evidence there is (weight of evidence). We can ask whether the evidence is balanced or whether it tells heavily in favour of one hypothesis over another (balance of evidence). Evidence can be balanced because it is incomplete: there simply isn’t enough of it. Evidence can also be balanced if it is conflicted: different pieces of evidence favour different hypotheses. We can further ask whether evidence tells us something specific—like that the bias of a coin is 2/3 in favour of heads—or unspecific—like that the bias of a coin is between 2/3 and 1 in favour of heads. This specificity should be distinguished from vagueness or indeterminacy of evidence: that a coin has bias about 2/3 is vague but specific, while that a coin has bias definitely somewhere between 2/3 and 1 is determinate but unspecific. Likewise, a credal state could be indeterminate, fuzzy, or it could be unspecific, or it could be both. It seems likely that determinate but unspecific belief states will be rarer than indeterminate ones.

Isaac Levi (1974, 1985) makes a distinction between “imprecise” credences and “indeterminate” credences (the scare quotes indicate that these aren’t uses of the terms “imprecise” and “indeterminate” that accord with the usage I adopt in this article). The idea is that there are two distinct kinds of belief state that might require a move to an IP representation of belief. An “imprecise” belief in Levi’s terminology is an imperfectly introspected or elicited belief in mine, while an “indeterminate” belief is a (possibly) perfectly introspected belief that is still indeterminate or unspecific (or both). Levi argues that the interesting phenomenon is “indeterminate” credence. Walley (1991) also emphasises the distinction between cases where there is a “correct” but unknown probability and cases of “indeterminacy”.

There is a further question about the interpretation of IP that cross-cuts the above. This is the question of whether we understand \(P\) as a “complete” or “exhaustive” representation of your beliefs, or whether we take the representation to be incomplete or non-exhaustive. Let’s talk in terms of the betting interpretation for a moment. The exhaustive/non-exhaustive distinction can be drawn by asking the following question: does \(P\) capture all and only your dispositions to bet, or does \(P\) only partially capture your dispositions to bet? Walley emphasises this distinction and suggests that most models are non-exhaustive.

Partly because of Levi’s injunction to distinguish “imprecise” from “indeterminate” belief, some have objected to the use of the term “imprecise probability”. Using the above distinction between indeterminate, unspecific and imperfectly introspected belief, we can keep separate the categories Levi wanted to keep separate, all without using the term “imprecise”. We can then use “imprecise” as an umbrella term to cover all these cases of lack of precision. Conveniently, this allows us to stay in line with the wealth of formal work on “Imprecise Probabilities”, a term used to cover cases of indeterminacy. This usage goes back at least to Peter Walley’s influential book Statistical Reasoning with Imprecise Probabilities (Walley 1991).

So, “Imprecise” is not quite right, but neither is “Probability”, since the formal theory of IP is really about previsions (a sort of expectation) rather than just about probability (expectations of indicator functions). Helpfully, if I abbreviate Imprecise Probability to “IP” then I can exploit some useful ambiguities.

2. Motivations

Let’s consider, in general terms, what sort of motivations one might have for adopting models that fall under the umbrella of IP. The focus will be on models of rational belief, since these are the models that philosophers typically focus on, although it is worth noting that statistical work using IP isn’t restricted to this interpretation. Note that no one author endorses all of these arguments, and indeed, some authors who are sympathetic to IP have explicitly stated that they don’t consider certain of these arguments to be good (for example, Mark Kaplan does not endorse the claim that concerns about descriptive realism suggest allowing incompleteness).

2.1 Ellsberg decisions

There are a number of examples of decision problems where we are intuitively drawn to go against the prescriptions of precise probabilism. And indeed, many experimental subjects do seem to express preferences that violate the axioms. IP offers a way of representing these intuitively plausible and experimentally observed choices as rational. One classic example of this is the Ellsberg problem (Ellsberg 1961).

I have an urn that contains ninety marbles. Thirty marbles are red. The remainder are blue or yellow in some unknown proportion.

Consider the indicator gambles for various events in this scenario. Consider a choice between a bet that wins if the marble drawn is red (I), versus a bet that wins if the marble drawn is blue (II). You might prefer I to II since I involves risk while II involves ambiguity. A prospect is risky if its outcome is uncertain but its outcomes occur with known probability. A prospect is ambiguous if the outcomes occur with unknown or only partially known probabilities. Now consider a choice between a bet that wins if the marble drawn is not blue (III) versus a bet that wins if the marble drawn is not red (IV). Now it is III that is ambiguous, while IV is unambiguous but risky, and thus IV might seem better to you if you prefer risky to ambiguous prospects. Such a pattern of preferences (I preferred to II but IV preferred to III) cannot be rationalised as the choices of a precise expected utility maximiser. The gambles are summarised in the table.

        R   B   Y
  I     1   0   0
  II    0   1   0
  III   1   0   1
  IV    0   1   1

Table 1: The Ellsberg bets. The urn contains 30 red marbles and 60 blue/yellow marbles.

Let the probabilities for red, blue and yellow marbles be \(r\), \(b\) and \(y\) respectively. If you were an expected utility maximiser and preferred I to II, then \(r > b\), and a preference for IV over III entails that \(r + y < b + y\). No numbers can jointly satisfy these two constraints. Therefore, no probability function is such that an expected utility maximiser with that probability would choose in the way described above. While by no means universal, these preferences are a robust feature of many experimental subjects’ responses to this sort of example (Camerer and Weber 1992; Fox and Tversky 1995). Some experiments suggest that Ellsberg-type patterns of preference are rarer than normally recognised (Binmore et al. 2012; Voorhoeve et al. 2016). For more on ambiguity attitudes, see Trautmann and van der Kuilen (2016).
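The impossibility claim can be verified mechanically. The following sketch searches a grid of candidate probability assignments over the three colours; since the two constraints reduce to \(r > b\) and \(r < b\), nothing turns up.

```python
# Exhaustive grid search over candidate probability assignments (r, b, y):
# no assignment satisfies both r > b (I over II) and b + y > r + y (IV over III).
violations = []
steps = 60
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        r, b = i / steps, j / steps
        y = 1 - r - b
        if r > b and b + y > r + y:
            violations.append((r, b, y))

print(violations)   # []: the constraints jointly require r > b and r < b
```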

The imprecise probabilist can model the situation as follows: \(P(R)=1/3\), \(P(B)=P(Y)=[0,2/3]\). Note that this expression of the belief state misses out some important details. For example, for all \(p\in P\), we have \(p(B)=2/3-p(Y)\). For the point being made here, this detail is not important. Modelling the ambiguity allows us to rationalise real agents’ preferences for bets on red. To flesh this story out would require a lot more to be said about decision making (see section 3.3), but the intuition is that aversion to ambiguity explains the preference for I over II and IV over III.
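One way this intuition can be cashed out is by comparing bets by their lower expectations, a maximin-style rule of the kind surveyed in section 3.3. The sketch below assumes that rule and discretises the representor coarsely; the helper names are mine.

```python
def expectation(p, f):
    return sum(p[w] * f[w] for w in p)

def lower_expectation(P, f):
    """The infimum of E_p(f) over the credal set."""
    return min(expectation(p, f) for p in P)

# The representor: p(R) = 1/3 throughout, p(B) = 2/3 - p(Y), with p(Y)
# ranging over [0, 2/3] (coarsely discretised here).
P = [{"R": 1 / 3, "B": 2 / 3 - t, "Y": t} for t in (0.0, 1 / 6, 1 / 3, 1 / 2, 2 / 3)]

bets = {
    "I":   {"R": 1, "B": 0, "Y": 0},
    "II":  {"R": 0, "B": 1, "Y": 0},
    "III": {"R": 1, "B": 0, "Y": 1},
    "IV":  {"R": 0, "B": 1, "Y": 1},
}
lows = {name: lower_expectation(P, f) for name, f in bets.items()}
print(lows)

# Lower expectations: I is 1/3 against 0 for II, and IV is 2/3 against
# 1/3 for III, so the rule yields exactly the Ellsberg pattern.
assert lows["I"] > lows["II"] and lows["IV"] > lows["III"]
```

The unambiguous bets I and IV have the same expectation under every committee member, while the ambiguous bets II and III have lower expectations dragged down by the pessimistic members.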

As Steele (2007) points out, the above analysis rationalises the Ellsberg choices only if we are dealing with genuinely indeterminate or unspecific beliefs. If we were dealing with a case of imperfectly introspected belief then there would exist some \(p\) in the representor such that rational choices maximise \(E_{p}\). For the Ellsberg choices, there is no such \(p\).

This view on the lessons of the Ellsberg game is not uncontroversial. Al-Najjar and Weinstein (2009) offer an alternative view on the interpretation of the Ellsberg preferences. Their view is that the distinctive pattern of Ellsberg choices is due to agents solving the decisions with heuristics that assume the odds are manipulable. In real-life situations, if someone offers you a bet, you might think that they must have some advantage over you in order for it to be worth their while offering you the bet. Such scepticism, appropriately modelled, can yield the Ellsberg choices within a simple game-theoretic precise probabilistic model.

2.2 Incompleteness and incomparability

Various arguments for (precise) probabilism assume that some relation or other is complete. Whether this is a preference over acts, or some “qualitative probability ordering”, the relation is assumed to hold one way or the other between any two elements of the domain. This hardly seems like it should be a principle of rationality, especially in cases of severe uncertainty. That is—to take the preference example—it is reasonable to have no preference in either direction. This is an importantly different attitude from being indifferent between the options. Mark Kaplan argues this point as follows:

Both when you are indifferent between \(A\) and \(B\) and when you are undecided between \(A\) and \(B\) you can be said not to prefer either state of affairs to the other. Nonetheless, indifference and indecision are distinct. When you are indifferent between \(A\) and \(B\), your failure to prefer one to the other is born of a determination that they are equally preferable. When you are undecided, your failure to prefer one to the other is born of no such determination. (Kaplan 1996: 5)

There is a standard behaviourist response to the claim that incomparability and indifference should be distinguished. In short, the claim is that it is a distinction that cannot be inferred from actual agents’ choice behaviour. Ultimately, in a given choice situation you must choose one of the options. Which you choose can be interpreted as being (weakly) preferred. Joyce offers the following criticism of this appeal to behaviourism.

There are just too many things worth saying that cannot be said within the confines of strict behaviorism… The basic difficulty here is that it is impossible to distinguish contexts in which an agent’s behavior really does reveal what she wants from contexts in which it does not without appealing to additional facts about her mental state… An even more serious shortcoming is behaviorism’s inability to make sense of rationalizing explanations of choice behavior. (Joyce 1999: 21)

On top of this, behaviourists cannot make sense of the fact that incomparable goods are insensitive to small improvements. That is, if \(A\) and \(B\) are two goods that you have no preference between (for example, bets on propositions with imprecise probabilities) and if \(A^+\) is a good slightly better than \(A\), then it might still be incomparable with \(B\). This distinguishes incomparability from indifference, since indifference “ties” will be broken by small improvements. So the claim that there is no behavioural difference between indifference and incomparability is false.

Kaplan argues that not only is violating the completeness axiom permissible, it is, in fact, sometimes obligatory.

[M]y reason for rejecting as falsely precise the demand that you adopt a … set of preferences [that satisfy the preference axioms] is not the usual one. It is not that this demand is not humanly satisfiable. For if that were all that was wrong, the demand might still play a useful role as a regulative ideal—an ideal which might then be legitimately invoked to get you to “solve” your decision problem as the orthodox Bayesian would have you do. My complaint about the orthodox Bayesian demand is rather that it imposes the wrong regulative ideal. For if you have [such a] set of preferences then you have a determinate assignment of [\(p\)] to every hypothesis—and then you are not giving evidence its due. (Kaplan 1983: 571)

He notes that it is not the case that it is always unreasonable or impossible for you to have precise beliefs: in that case precision could serve as a regulative ideal. Precise probabilism does still serve as something of a regulative ideal, but it is the belief of an ideal agent in an idealised evidential position. Idealised evidential positions are approximated by cases where you have a coin of known bias. Precise probabilists and advocates of IP both agree that precise probabilism is an idealisation, and a regulative ideal. However, they differ as to what kind of idealisation is involved. Precise probabilists think that what precludes us from having precise probabilistic beliefs is merely a lack of computational power and introspective capacity. Imprecise probabilists think that even agents ideal in this sense might (and possibly should) fail to have precise probabilistic beliefs when they are not in an ideal evidential position.

At least some of the axioms of preference are not normative constraints. We can now ask what can be proved in the absence of the “purely structural”—non-normative—axioms. This surely gives us a handle on what is really required of the structure of belief.

It seems permissible to fail to have a preference between two options. Or it seems reasonable to fail to consider either of two possibilities more likely than the other. And these failures to assent to certain judgements are not the same as considering the two elements under consideration to be on a par in any substantive sense. That said, precise probabilism can still serve as a regulative ideal. That is, precision might still be an unattained (possibly unattainable) goal that informs agents as to how they might improve their credences. Completeness of preference is what the thoroughly informed agent ought to have. Without complete preference, standard representation theorems don’t work. However, for each completion of the incomplete preference ordering—for each complete ordering that extends the incomplete preference relation—the theorem follows. So if we consider the set of probability functions that are such that some completion of the incomplete preference is represented by that function, then we can consider this set to be representing the beliefs associated with the incomplete preference. We also get, for each completion, a utility function unique up to linear transformation. This, in essence, was Kaplan’s position (see Kaplan 1983; 1996).

Joyce (1999: 102–4) and Jeffrey (1984: 138–41) both make similar claims. A particularly detailed argument along these lines for comparative belief can be found in Hawthorne (2009). Indeed, this idea has a long and distinguished history that goes back at least as far as B.O. Koopman (1940). I.J. Good (1962), Terrence Fine (1973) and Patrick Suppes (1974) all discussed ideas along these lines. Seidenfeld, Schervish, and Kadane (1995) give a representation theorem for preferences that don’t satisfy completeness. (See Evren and Ok 2011; Pedersen 2014; and Chu and Halpern 2008, 2004 for very general representation theorems.)

2.3 Weight of evidence, balance of evidence

Evidence influences belief. Joyce (2005) suggests that there is an important difference between the weight of evidence and the balance of evidence. He argues that this is a distinction that precise probabilists struggle to deal with and that the distinction is worth representing. This idea has been hinted at by a great many thinkers including J.M. Keynes, Rudolf Carnap, C.S. Peirce and Karl Popper (see references in Joyce 2005; Gärdenfors and Sahlin 1982). Here’s Keynes’ articulation of the intuition:

As the relevant evidence at our disposal increases, the magnitude of the probability of the argument may either decrease or increase, according as the new knowledge strengthens the unfavourable or the favourable evidence; but something seems to have increased in either case,—we have a more substantial basis upon which to rest our conclusion. I express this by saying that an accession of new evidence increases the weight of an argument. (Keynes 1921: 78, Keynes’ emphasis)

Consider tossing a coin known to be fair. Let’s say you have seen the outcomes of a hundred tosses and roughly half have come up heads. Your degree of belief that the coin will land heads should be around a half. This is a case where there is weight of evidence behind the belief.

Now consider another case: a coin of unknown bias is to be tossed. That is, you have not seen any data on previous tosses. In the absence of any relevant information about the bias, symmetry concerns might suggest you take the chance of heads to be around a half. This opinion is different from the above one. There is no weight of evidence, but there is nothing to suggest that your attitudes to \(H\) and \(T\) should be different. So, on balance, you should have the same belief in both.

However, these two different cases get represented as having the same probabilistic belief, namely \(p(H)=p(T)=0.5\). In the fair coin case, this probability assignment comes from having evidence that suggests that the chance of heads is a half, and the prescription to have your credences match chances (ceteris paribus). In the unknown bias case, by contrast, one arrives at the same assignment in a different way: nothing in your evidence supports one proposition over the other, so some “principle of indifference” reasoning suggests that they should be assigned the same credence (see Hájek 2011 for discussion of the principle of indifference).

If we take seriously the “ambiguity aversion” discussed earlier, when offered the choice between betting on the fair coin’s landing heads as opposed to the unknown-bias coin’s landing heads, it doesn’t seem unreasonable to prefer the former. Recall the preference for unambiguous gambles in the Ellsberg game in section 2.1. But if both coins have the same subjective probabilities attached, what rationalises this preference for betting on the fair coin? Joyce argues that there is a difference between these beliefs that is worth representing. IP does represent the difference. The first case is represented by \(P(H)=\{0.5\}\), while the second is captured by \(P(H)=[0,1]\).

Scott Sturgeon puts this point nicely when he says:

[E]vidence and attitude aptly based on it must match in character. When evidence is essentially sharp, it warrants sharp or exact attitude; when evidence is essentially fuzzy—as it is most of the time—it warrants at best a fuzzy attitude. In a phrase: evidential precision begets attitudinal precision; and evidential imprecision begets attitudinal imprecision. (Sturgeon 2008: 159, Sturgeon’s emphasis)

Wheeler (2014) criticises Sturgeon on this “character matching” thesis. However, an argument for IP based on the nature of evidence only requires that the character of the evidence sometimes allows (or mandates?) imprecise belief, not that the characters must always match. In opposition, Schoenfield (2012) argues that evidence always supports precise credence, but that for reasons of limited computational capacity, real agents needn’t be required to have precise credences. However, her argument only really supports the claim that sometimes indeterminacy is due to complexity of the evidence and computational limitation. She doesn’t have an argument against the claims Levi, Kaplan, Joyce and others make that there are evidential situations that warrant imprecise attitudes.

Strictly speaking, what we have here is only half the story. There is a difference between the representations of belief as regards weight and balance. But that still leaves open the question of exactly what is representing the weight of evidence. What aspect of the belief reflects this difference? One might be tempted to view \(\overline{P}(H)-\underline{P}(H)\) as a measure of the weight of evidence for \(H\). Walley (1991) tentatively suggests as much. However, this would get wrong cases of conflicting evidence. (Imagine two equally reliable witnesses: one tells you the coin is biased towards heads, the other says the bias is towards tails.) The question of whether and how IP does better than precise probabilism has not yet received an adequate answer. Researchers in IP have, however, made progress on distinguishing cases where your beliefs happen to have certain symmetry properties from cases where your beliefs capture evidence about symmetries in the objects of belief. This is a distinction that the standard precise model of belief fails to capture (de Cooman and Miranda 2007).

The precise probabilist can respond to the weight/balance distinction argument by pointing to the property of resiliency (Skyrms 2011) or stability (Leitgeb 2014). The idea is that probabilities determined by the weight of evidence change less in response to new evidence than do probabilities determined by balance of evidence alone. That is, if you’ve seen a hundred tosses of the coin, seeing it land heads doesn’t affect your belief much, while if you’ve not seen any tosses of the coin, seeing it land heads has a bigger effect on your beliefs. Thus, the distinction is represented in the precise probabilistic framework in the conditional probabilities. The distinction, though, is one that cannot rationalise the preference for betting on the fair coin. One could develop a resiliency-weighted expected value and claim that this is what you should maximise, but this would be as much of a departure from orthodox probabilism as IP is. If someone were to develop such a theory, then its merits could be weighed against the merits of IP-type models.
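The resiliency point can be illustrated numerically. The sketch below assumes a standard beta-binomial model with a uniform Beta(1, 1) prior on the coin's bias (my choice of model, not one mandated by the text): both agents assign heads probability one half before the extra toss, but the update after one more head differs dramatically.

```python
def posterior_mean(heads, tails):
    """Predictive probability of heads under a Beta(1 + heads, 1 + tails) posterior."""
    return (1 + heads) / (2 + heads + tails)

# Balance without weight: no tosses observed, then one head arrives.
shift_no_data = posterior_mean(1, 0) - posterior_mean(0, 0)

# Weight behind the belief: 100 tosses (half heads), then one more head.
shift_much_data = posterior_mean(51, 50) - posterior_mean(50, 50)

# Same p(H) = 0.5 before the extra toss, but very different resiliency:
# the data-free belief moves by about 0.17, the data-backed one by about 0.005.
print(shift_no_data, shift_much_data)
```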

Another potential precise response would be to suggest that there is weight of evidence for \(H\) if many propositions that are evidence for \(H\) are fully believed, or if there is a chance proposition (about \(H\)) that is near to fully believed. This is in contrast to cases of mere balance where few propositions that are evidence for \(H\) are fully believed, or where probability is spread out over a number of chance hypotheses. The same comments made above about resiliency apply here: such distinctions can be made, but this doesn’t get us to a theory that can rationalise ambiguity aversion.

The phenomenon of dilation (section 3.1) suggests that the kind of argument put forward in this section needs more care and further elaboration.

2.4 Suspending judgement

You are sometimes in a position where none of your evidence seems to speak for or against the truth of some proposition. Arguably, a reasonable attitude to take towards such a proposition is suspension of judgement.

When there is little or no information on which to base our conclusions, we cannot expect reasoning (no matter how clever or thorough) to reveal a most probable hypothesis or a uniquely reasonable course of action. There are limits to the power of reason. (Walley 1991: 2)

Consider a coin of unknown bias. The Bayesian agent must have a precise belief about the coin’s landing heads on the next toss. Given the complete lack of information about the coin, it seems like it would be better just to suspend judgement. That is, it would be better not to have any particular precise credence. It would be better to avoid betting on the coin. But there just isn’t room in the Bayesian framework to do this. The probability function must output some number, and that number will sanction a particular set of bets as desirable.

Consider \(\underline{P}(X)\) as representing the degree to which the evidence supports \(X\). Now consider \(I(X) = 1- (\underline{P}(X) + \underline{P}(\neg X))\). This measures the degree to which the evidence is silent on \(X\). Huber (2009) points out that precise probabilism can then be understood as making the claim that no evidence is ever silent on any proposition. That is, \(I(X)=0\) for all \(X\). One can never suspend judgement. This is a nice way of seeing the strangeness of the precise probabilist’s attitude to evidence. Huber is making this point about Dempster-Shafer belief functions (see historical appendix, section 7), but it carries over to IP in general.
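The arithmetic here is simple enough to sketch. The following is an illustrative calculation (my own construction, not from the text): lower probabilities are taken as minima over a finite credal set, and the ignorance measure \(I(X)\) comes out as positive exactly when the credal set is non-trivial, and as zero for a precise (one-element) credal set.

```python
# Huber-style ignorance measure I(X) = 1 - (lower(X) + lower(not-X)),
# with lower probabilities computed as minima over a finite credal set.

def lower(credal_set, event):
    """Lower probability of an event: the minimum over the credal set."""
    return min(p[event] for p in credal_set)

def ignorance(credal_set, event, complement):
    """Degree to which the evidence is silent on the event."""
    return 1 - (lower(credal_set, event) + lower(credal_set, complement))

# A credal set for a coin of unknown bias: p(heads) ranges over 0.2..0.8.
credal_set = [{"H": x / 10, "notH": 1 - x / 10} for x in range(2, 9)]
print(ignorance(credal_set, "H", "notH"))  # ≈ 0.6: evidence largely silent

# A precise probabilist has a one-element credal set, so I(X) = 0.
precise = [{"H": 0.5, "notH": 0.5}]
print(ignorance(precise, "H", "notH"))  # 0.0: evidence never silent
```

For the imprecise agent, \(I(H)\) is just the width of the probability interval for \(H\); for the precise agent it is always zero, which is Huber’s point.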

The committed precise probabilist would respond that setting \(p(X)=0.5\) is suspending judgement. This is the maximally noncommittal credence in the case of a coin flip. More generally, suspending judgement should be understood in terms of maximising entropy (Jaynes 2003; Williamson 2010: 49–72). The imprecise probabilist could argue that this only seems to be the right way to be noncommittal if you are wedded to the precise probabilist representation of belief. That is, the MaxEnt approach makes sense if you are already committed to representation of belief by a single precise probability, but loses its appeal if credal sets are available. Suspending judgement is something you do when the evidence doesn’t determine your credence. But for the precise probabilist, there is no way to signal the difference between suspension of judgement and strong evidence of probability half. This is just the weight/balance argument again.

To make things more stark, consider the following delightfully odd example from Adam Elga:

A stranger approaches you on the street and starts pulling out objects from a bag. The first three objects he pulls out are a regular-sized tube of toothpaste, a live jellyfish, and a travel-sized tube of toothpaste. To what degree should you believe that the next object he pulls out will be another tube of toothpaste? (2010: 1)

In this case, unlike in the coin case, it really isn’t clear what intuition says about what would be the “correct” precise probabilist suspension of judgement. What Maximum Entropy methods recommend will depend on seemingly arbitrary choices about the formal language used to model the situation. Williamson is well aware of this language relativity problem. He argues that choice of a language encodes some of our evidence.

Another response to this argument would be to take William James’ response to W.K. Clifford (Clifford 1901; James 1897). James argued that as long as your beliefs are consistent with the evidence, then you are free to believe what you like. So there is no need to ever suspend judgement. Thus, the precise probabilist’s inability to do so is no real flaw. This attitude, which is sometimes called epistemic voluntarism, is close to the sort of subjectivism espoused by Bruno de Finetti, Frank Ramsey and others.

There does seem to be a case for an alternative method of suspending judgement in order to allow you to avoid making any bets when your evidence is very incomplete, ambiguous or imprecise. If your credences serve as your standard for the acceptability of bets, they should allow for both sides of a bet to fail to be acceptable. A precise probabilist cannot do this since if a bet has (precise) expected value \(e\) then taking the other side of that bet (being the bookie) has expected value \(-e\). If acceptability is understood as nonnegative expectation, then at least one side of any bet is acceptable to a precise agent. This seems unsatisfactory. Surely genuine suspension of judgement involves being unwilling to risk money on the truth of a proposition at any odds.

Inspired by the famous “Bertrand paradox”, Chandler (2014) offers a neat argument that the precise probabilist cannot jointly satisfy two desiderata relating to suspension of judgment about a variable. First desideratum: if you suspend judgement about the value of a bounded real variable \(X\), then it seems that different intervals of possible values for \(X\) of the same size should be treated the same by your epistemic state. Second desideratum: if \(Y\) essentially describes the same quantity as \(X\), then suspension of judgement about \(X\) should entail suspension of judgement about \(Y\). Let’s imagine now that you have precise probabilities and that you suspend judgement about \(X\). By the first desideratum, you have a uniform distribution over values of \(X\). Now consider \(Y = 1/X\). \(Y\) essentially describes the same quantity that \(X\) did. But a uniform distribution over \(X\) entails a non-uniform distribution over \(Y\). So you do not suspend judgement over \(Y\). A real-world case of variables so related is “ice residence time in clouds” and “ice fall rate in clouds”. These are inversely related, but describe essentially the same element of a climate system (Stainforth et al. 2007: 2154).
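The point can be checked numerically. The following is an illustrative sketch with numbers of my own choosing (not from the text): take \(X\) uniform on \([1,2]\) as the “suspended judgement” prior, and \(Y=1/X\), which ranges over \([1/2,1]\). Equal-length halves of \(Y\)’s range then receive unequal probability, so judgement about \(Y\) is not suspended.

```python
# X is uniform on [1, 2]; Y = 1/X ranges over [1/2, 1].
# Equal-length intervals of Y receive unequal probability.

def prob_X_in(a, b):
    """P(a <= X <= b) for X uniform on [1, 2] (interval length 1)."""
    lo, hi = max(a, 1.0), min(b, 2.0)
    return max(hi - lo, 0.0)

# Split Y's range [1/2, 1] into two equal halves.
# P(Y in [1/2, 3/4]) = P(X in [4/3, 2]); P(Y in [3/4, 1]) = P(X in [1, 4/3]).
lower_half = prob_X_in(4 / 3, 2)   # = 2/3
upper_half = prob_X_in(1, 4 / 3)   # = 1/3
print(lower_half, upper_half)      # ≈ 0.667 and 0.333: not uniform over Y
```

By Chandler’s first desideratum, the two halves of \(Y\)’s range should receive equal probability if judgement about \(Y\) were suspended; they don’t.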

So a precise probabilist cannot satisfy these reasonable desiderata of suspension of judgement. An imprecise probabilist can: for example, the set of all probability functions over \(X\) satisfies both desiderata. There may be more informative priors that also represent suspension of judgement, but it suffices for now to point out that IP seems better able to represent suspension of judgement than precise probabilism. Section 5.5 of Walley (1991) discusses IP’s prospects as a method for dealing with suspension of judgement.

2.5 Unknown correlations

Haenni et al. (2011) motivate imprecise probabilities by showing how they can arise from precise probability judgements. That is, if you have a precise probability for \(X\) and a precise probability for \(Y\), then you can put bounds on \(p(X\cap Y)\) and \(p(X\cup Y)\), even if you don’t know how \(X\) and \(Y\) are related. These bounds give you intervals of possible probability values for the compound events.

For example, you know that \(p(X \cap Y)\) is bounded above by \(p(X)\) and by \(p(Y)\) and thus by \(\min\{p(X),p(Y)\}\). If \(p(X) \gt 0.5\) and \(p(Y) \gt 0.5\) then \(X\) and \(Y\) must overlap. So \(p(X\cap Y)\) is bounded below by \(p(X)+p(Y)-1\). But, by definition, \(p(X \cap Y)\) is also bounded below by \(0\). So we have the following result: if you know \(p(X)\) and you know \(p(Y)\), then you know

\[\max\{0,p(X)+p(Y)-1\} \le p(X \cap Y) \le \min\{p(X),p(Y)\}.\]

Likewise, bounds can be put on \(p(X \cup Y)\). \(p(X\cup Y)\) can’t be bigger than when \(X\) and \(Y\) are disjoint, so it is bounded above by \(p(X)+p(Y)\). It is also bounded above by \(1\), and thus by the minimum of those expressions. It is also bounded below by \(p(X)\) and by \(p(Y)\) and thus by their maximum. Putting this together,

\[\max\{p(X),p(Y)\} \le p(X\cup Y) \le \min\{p(X)+p(Y),1\}.\]

These constraints are effectively what you get from de Finetti’s Fundamental Theorem of Prevision (de Finetti 1990 [1974]: 112; Schervish, Seidenfeld, and Kadane 2008).
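The two displayed bounds transcribe directly into a few lines of code (a sketch for illustration only):

```python
# The bounds from the text: given p(X) and p(Y) but no information about
# how X and Y are related, the probabilities of the compound events are
# only pinned down to intervals.

def conjunction_bounds(px, py):
    """Interval of possible values for p(X and Y)."""
    return (max(0.0, px + py - 1.0), min(px, py))

def disjunction_bounds(px, py):
    """Interval of possible values for p(X or Y)."""
    return (max(px, py), min(px + py, 1.0))

print(conjunction_bounds(0.7, 0.6))  # ≈ (0.3, 0.6): X and Y must overlap
print(disjunction_bounds(0.7, 0.6))  # (0.7, 1.0)
```

Note that when \(p(X)+p(Y) \le 1\) the lower bound on the conjunction collapses to \(0\), and the interval of possible values is at its widest.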

So if your evidence constrains your belief in \(X\) and in \(Y\), but is silent on their interaction, then you will only be able to pin down these compound events to certain intervals. Any choice of a particular probability function will go beyond the evidence in assuming some particular evidential relationship between \(X\) and \(Y\). That is, \(p(X)\) and \(p(X\mid Y)\) will differ in a way that has no grounding in your evidence.

2.6 Nonprobabilistic chances

What if the objective chances were not probabilities? If we endorse some kind of connection between known objective chances and belief—for example, a principle of direct inference or Lewis’ Principal Principle (Lewis 1986)—then we might have an additional reason to endorse imprecise probabilism. It seems to be a truth universally acknowledged that chances ought to be probabilities, but it is a “truth” for which very little argument has been offered. For example, Schaffer (2007) makes obeying the probability axioms one of the things required in order to play the “chance role”, but offers no argument that this should be the case. Joyce says “some have held objective chances are not probabilities. This seems unlikely, but explaining why would take us too far afield” (2009: 279, fn. 17). Various other discussions of chance—for example in statistical mechanics (Loewer 2001; Frigg 2008) or “Humean chance” (Lewis 1986, 1994)—take for granted that chances should be precise and probabilistic (Dardashti et al. 2014 is an exception). Obviously things are confused by the use of the concept of chance as a way of interpreting probability theory. There is, however, a perfectly good pre-theoretic notion of chance: this is what probability theory was originally invented to reason about, after all. This pre-theoretic chance still seems like the sort of thing that we should apportion our belief to, in some sense. And there is very little argument that chances must always be probabilities. If the chances were nonprobabilistic in a particular way, one might argue that your credences ought to be nonprobabilistic in the same way. What form a chance-coordination norm should take if chances and credences were to have non-probabilistic formal structures is currently an open problem.

I want to give a couple of examples of this idea. First consider some physical process that doesn’t have a limiting frequency but has a frequency that varies, always staying within some interval. This would be a process that is chancy, but fairly predictable. It might be that the best description of such a system is to just put bounds on its relative frequency. Such processes have been studied using IP models (Kumar and Fine 1985; Grize and Fine 1987; Fine 1988), and have been discussed as a potential source of imprecision in credence (Hájek and Smithson 2012). A certain kind of non-standard understanding of a quantum-mechanical event leads naturally to upper probability models (Suppes and Zanotti 1991; Hartmann and Suppes 2010). John Norton has discussed the limits of probability theory as a logic of induction, using an example which, he claims, admits no reasonable probabilistic attitude (Norton 2007, 2008a,b). One might hope that IP offers an inductive logic along the lines Norton sketches. Norton himself has expressed scepticism on this line (Norton 2007), although Benétreau-Dupin (2015) has defended IP as a candidate system for Norton’s project. Finally, particular views on vagueness might well prompt a rethinking of the formal structure of chance (Bradley 2016).

2.7 Group belief

Suppose we wanted our epistemology to apply not just to individuals, but to “group agents” like committees, governments, companies, and so on. Such agents may be made up of members who disagree. Levi (1986, 1999) has argued that representation of such conflict is better handled with sets of probabilities than with precise probabilities. There is a rich literature on combining or aggregating the (probabilistic) opinions of members of groups (Genest and Zidek 1986) but the outcome of such aggregation does not adequately represent the disagreement among the group. Some forms of aggregation also fail to respect plausible constraints on group belief. For example, if every member of the group agrees that \(X\) and \(Y\) are probabilistically independent, then it seems plausible to require that the group belief respects this unanimity. It is, however, well known that linear pooling—a simple and popular form of aggregation—does not respect this desideratum. Consider two probability functions \(p, q\) such that \(p(X) = p(Y) = 1/3\) and \(p(X\mid Y)=p(X)\) while \(q(X) = q(Y) = 2/3\) and \(q(X\mid Y)=q(X)\). Consider aggregating these two probabilities by taking an unweighted average of them: \(r = p/2 + q/2\). Now, calculation shows that \(r(X\cap Y) = 5/18\) while \(r(X)r(Y) = 1/4\), thus demonstrating that \(r\) does not consider \(X\) and \(Y\) to be independent. So such an aggregation method does not satisfy the above desideratum (Kyburg and Pittarelli 1992; Cozman 2012). For more on judgement aggregation in groups, see List and Pettit (2011), especially chapter 2.
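The calculation behind the linear-pooling counterexample can be verified directly (a sketch using exact rational arithmetic; the four-state representation is my own bookkeeping, not from the text):

```python
# Verify the linear-pooling counterexample: both p and q treat X and Y
# as independent, but their unweighted average r does not.
from fractions import Fraction

def product_dist(px, py):
    """Joint distribution over the four states, treating X, Y as independent."""
    return {
        "XY": px * py, "Xy": px * (1 - py),
        "xY": (1 - px) * py, "xy": (1 - px) * (1 - py),
    }

p = product_dist(Fraction(1, 3), Fraction(1, 3))
q = product_dist(Fraction(2, 3), Fraction(2, 3))
r = {s: (p[s] + q[s]) / 2 for s in p}   # unweighted linear pool

r_XY = r["XY"]                  # r(X and Y)
r_X = r["XY"] + r["Xy"]         # r(X) = 1/2
r_Y = r["XY"] + r["xY"]         # r(Y) = 1/2
print(r_XY, r_X * r_Y)          # 5/18 vs 1/4
print(r_XY == r_X * r_Y)        # False: independence is lost under pooling
```

Exact fractions make the failure of independence unambiguous: \(5/18 \neq 1/4\), so the pooled opinion treats \(X\) and \(Y\) as positively correlated even though every member of the “group” treats them as independent.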

Elkin and Wheeler (2016) argue that resolving disagreement among precise probabilist peers should involve an imprecise probability. Stewart and Quintana (2018) argue that imprecise aggregation methods have some nice properties that no precise aggregation method does.

If committee members have credences and utilities that differ among the group, then no precise probability-utility pair distinct from the probabilities and utilities of the agents can satisfy the Pareto condition (Seidenfeld, Kadane, and Schervish 1989). The Pareto condition requires that the group preference respect agreement of preference among the group. That is, if all members of the group prefer \(A\) to \(B\) (that is, if each group member finds that \(A\) has higher expected utility than \(B\)) then the aggregate preference (as determined by the aggregate probability-utility pair) should satisfy that preference. Since this “consensus preservation” is a reasonable requirement on aggregation, this result shows that precise models of group agents are problematic. Walley discusses an example where \(P\) is a set of probabilities, each representing the beliefs of a member of a group; \(P\) is then an incomplete description of the beliefs of each agent, in the sense that if all members of \(P\) agree on something, then that thing is something each agent believes. Sets of probabilities allow us to represent an agent who is conflicted in their judgements (Levi 1986, 1999).

Ideally rational agents may face choices where there is no best option available to them. Indeterminacy in probability judgement and unresolved conflicts between values lead to predicaments where at the moment of choice the rational agent recognizes more than one such preference ranking of the available options in [the set of available choices] to be permissible. (Levi 1999: 510)

Levi also argued that individual agents can be in conflict in the same way as groups, and thus that individuals’ credal states are also better represented by sets of probabilities. (Levi also argued for the convexity of credal states, which brings him into conflict with the above argument about independence (see historical appendix section 3).) One doesn’t need to buy the claim that groups and individuals must be modelled in the same way to take something away from this idea. One merely needs to accept the idea that an individual can be conflicted in such a way that a reasonable representation of her belief state—or belief and value state—is in terms of sets of functions. Bradley (2009) calls members of such sets “avatars”. This suggests that we interpret an individual’s credal set as a credal committee made up of her avatars. This interpretation of the representor is due to Joyce (2011), though Joyce attributes it to Adam Elga. This committee represents all the possible prior probabilities you could have that are consistent with the evidence. Each credal committee member is a fully opinionated Jamesian voluntarist. The committee as a whole, collectively, is a Cliffordian objectivist.

3. Philosophical questions for IP

This section collects some problems for IP noted in the literature.

3.1 Dilation

Consider two logically unrelated propositions \(H\) and \(X\). Now consider the four “state descriptions” of this simple model as set out in Figure 1. So \(a=H\cap X\) and so on. Now define \(Y=a \cup d\). Alternatively, consider three propositions related in the following way: \(Y\) is defined as “\(H\) if and only if \(X\)”.

[A square with four quadrants, first column is labeled 'H' and second 'not H'; first row is labeled 'X' and second 'not X'.  First quadrant (first column/first row) is shaded and has an 'a' on it; second quadrant (second column, first row) is not shaded and has a 'b' on it; third quadrant (first column, second row) is unshaded and has a 'c' on it; and last quadrant (second column, second row) is shaded and has a 'd' on it.]

Figure 1: A diagram of the relationships after Seidenfeld (1994); \(Y\) is the shaded area

Further imagine that \(p(H\mid X) = p(H) = 1/2\). No other relationships between the propositions hold except those required by logic and probability theory. It is straightforward to verify that the above constraints require that \(p(Y) = 1/2\). The probability for \(X\), however, is unconstrained.

Let’s imagine you were given the above information, and took your representor to be the full set of probability functions that satisfied these constraints. Roger White suggested an intuitive gloss on how you might receive information about propositions so related and so constrained (White 2010). White’s puzzle goes like this. I have a proposition \(X\), about which you know nothing at all. I have written whichever is true out of \(X\) and \(\neg X\) on the Heads side of a fair coin. I have painted over the coin so you can’t see which side is heads. I then flip the coin and it lands with the \(X\) uppermost. \(H\) is the proposition that the coin lands heads up. \(Y\) is the proposition that the coin lands with the “\(X\)” side up.

Imagine if you had a precise prior that made you certain of \(X\) (this is compatible with the above constraints since \(X\) was unconstrained). Seeing \(X\) land uppermost now should be evidence that the coin has landed heads. The game set-up makes it such that these apparently irrelevant instances of evidence can carry information. Likewise, being very confident of \(X\) makes \(Y\) very good evidence for \(H\). If instead you were sure \(X\) was false, \(Y\) would be solid gold evidence of \(H\)’s falsity. So it seems that \(p(H\mid Y)\) is proportional to prior belief in \(X\) (indeed, this can be proven rather easily). Given the way the events are related, observing whether \(X\) or \(\neg X\) landed uppermost is a noisy channel to learn about whether or not \(H\) landed uppermost.

So let’s go back to the original imprecise case and consider what it means to have an imprecise belief in \(X\). Among other things, it means considering it possible that \(X\) could be very likely. It is consistent with your belief state that \(X\) is such that if you knew what proposition \(X\) was, you would consider it very likely. In this case, \(Y\) would be good evidence for \(H\). Note that in this case learning that the coin landed \(\neg X\) uppermost—call this \(Y'\)—would be just as good evidence against \(H\). Likewise, \(X\) might be a proposition that you would have very low credence in, and thus \(Y\) would be evidence against \(H\).

Since you are in a state of ignorance with respect to \(X\), your representor contains probabilities that take \(Y\) to be good evidence that \(H\) and probabilities that take \(Y\) to be good evidence that \(\neg H\). So, despite the fact that \(P(H)=\{1/2\}\) we have \(P(H\mid Y) = [0,1]\). This phenomenon—posteriors being wider than their priors—is known as dilation. The phenomenon has been thoroughly investigated in the mathematical literature (Walley 1991; Seidenfeld and Wasserman 1993; Herron, Seidenfeld, and Wasserman 1994; Pedersen and Wheeler 2014). Levi and Seidenfeld reported an example of dilation to Good following Good (1967). Good mentioned this correspondence in his follow-up paper (Good 1974). Recent interest in dilation in the philosophical community has been generated by White’s paper (White 2010).

White considers dilation to be a problem since learning \(Y\) doesn’t seem to be relevant to \(H\). That is, since you are ignorant about \(X\), learning whether or not the coin landed \(X\) up doesn’t seem to tell you anything about whether the coin landed heads up. It seems strange to argue that your belief in \(H\) should dilate from \(1/2\) to \([0,1]\) upon learning \(Y\). It feels as if this should just be irrelevant to \(H\). However, \(Y\) is only really irrelevant to \(H\) when \(p(X)=1/2\). Any other precise belief you might have in \(X\) is such that \(Y\) now affects your posterior belief in \(H\). Figure 2 shows the situation for one particular belief about how likely \(X\) is; for one particular \(p\in P\). The horizontal line can shift up or down, depending on what the committee member we focus on believes about \(X\). \(p(H\mid Y)\) is a half only if the prior in \(X\) is also a half. However, the imprecise probabilist takes into account all the ways \(Y\) might affect belief in \(H\).
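The claim that \(p(H\mid Y)\) tracks the prior in \(X\) can be checked by direct calculation. The following sketch is my own construction following the setup in the text: each credal-committee member has \(p(H)=1/2\) with \(H\) independent of \(X\), and some prior \(p(X)\); conditioning on \(Y\) (i.e., \(H\) iff \(X\)) gives \(p(H\mid Y)=p(X)\), so as \(p(X)\) ranges over the unit interval, the posterior for \(H\) does too, even though every prior for \(H\) is exactly a half.

```python
# One credal-committee member: p(H) = 1/2, H independent of X,
# prior p(X) = px. Y is the biconditional "H iff X".

def posterior_H_given_Y(px):
    """p(H | Y) for the White coin setup; works out to equal p(X)."""
    p_H_and_Y = 0.5 * px                    # p(H and Y) = p(H and X)
    p_Y = 0.5 * px + 0.5 * (1 - px)         # p(Y) = p(H,X) + p(~H,~X) = 1/2
    return p_H_and_Y / p_Y

for px in [0.01, 0.25, 0.5, 0.75, 0.99]:
    print(px, posterior_H_given_Y(px))      # posterior equals the prior in X
```

Since the representor contains a member for every value of \(p(X)\) in \([0,1]\), the set of posteriors \(\{p(H\mid Y) : p \in P\}\) sweeps out \([0,1]\): this is the dilation.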

[A square with two columns labeled 'H' and 'not H' and two rows, a narrow one labeled 'X' and a wide one labeled 'not X'.  The first quadrant (first column, first row) is shaded and has a 'Y' on it; second quadrant (second column, first row) is not shaded and has a 'not Y' on it; third quadrant (first column, second row) is unshaded with a 'not Y' on it and the fourth quadrant (second column, second row) is shaded and has a 'Y' on it.]

Figure 2: A member of the credal committee (after Joyce (2011))

Consider a group of agents who each had precise credences in the above coin case and differed in their priors on \(X\). They would all start out with prior of a half in \(H\). After learning \(Y\), these agents would differ in their posterior opinions about \(H\) based on their differing dispositions to update. The group belief would dilate. However, no agent in the group has acted in any way unreasonably. If we take Levi’s suggestion that individuals can be conflicted just like groups can, then it seems that individual agents can have their beliefs dilate just like groups can.

There are two apparent problems with dilation. First, the belief-moving effect of apparently irrelevant evidence; and second, the fact that learning some evidence can cause your belief-intervals to widen. The above comments speak to the first of these. Pedersen and Wheeler (2014) are also focused on mitigating this worry. We turn now to the second worry.

Even if we accept dilation as a fact of life for the imprecise probabilist, it is still weird. Even if all of the above argument is accepted, it still seems strange to say that your belief in \(H\) is dilated, whatever you learn. That is, whether you learn \(Y\) or \(Y'\), your posterior belief in \(H\) looks the same: \([0,1]\). Or perhaps, what it shows to be weird is that your initial credence was precise.

Hart and Titelbaum (2015) suggest that dilation is strange because conditionalising on a biconditional (which is, after all, what you are doing in the above example) is unintuitive even in the precise case. Whether all cases of dilation can be explained away in this manner remains to be seen. Gong and Meng (2017) likewise see dilation as a problem of mis-specified statistical inference, rather than a problem for IP per se.

Beyond this seeming strangeness, White suggests a specific way that being subject to dilation is an indicator of a defective epistemology. White suggests that dilation examples show that imprecise probabilities violate the Reflection Principle (van Fraassen 1984). The argument goes as follows:

given that you know now that whether you learn \(Y\) or you learn \(Y'\) your credence in \(H\) will be \([0,1]\) (and you will certainly learn one or the other), your current credence in \(H\) should also be \([0,1]\).

The general idea is that you should set your credences to what you expect your credences to be in the future. More specifically, your credence in \(X\) should be the expectation of your future possible credences in \(X\) over the things you might learn. Given that, for all the things you might learn in this example your credence in \(H\) would be the same, you should have that as your prior credence also. Your prior should be such that \(P(H) = [0,1]\). So having a precise prior credence in \(H\) to start with is irrational. That’s how the argument against dilation from reflection goes. Your prior \(P\) is not fully precise though. Consider \(P(H \cap Y)\). That is, the prior belief in the conjunction is imprecise. So the alleged problem with dilation and reflection is not as simple as “your precise belief becomes imprecise”. The problem is “your precise belief in \(H\) becomes imprecise”; or rather, your precise belief in \(H\) as represented by \(P(H)\) becomes imprecise.

The issue with reflection is more basic. What exactly does reflection require of imprecise probabilists in this case? Now, it is obviously the case that each credal committee member’s prior credence is its expectation over the possible future evidence (this is a theorem of probability theory). But somehow, it is felt, the credal state as a whole isn’t sensitive to reflection in the way the principle requires. Each \(p\in P\) satisfies the principle, but the awkward symmetries of the problem conspire to make \(P\) as a whole violate the principle. This looks to be the case if we focus on \(P(H)\) as an adequate representation of that part of the belief state. But as noted earlier, this is not an adequate way of understanding the credal state. Note that while learning \(Y\) and learning \(Y'\) both prompt revision to a state where the posterior belief in \(H\) is represented as an interval by \([0,1]\), the credal states as sets of probabilities are not the same. Call the state after learning \(Y\), \(P'\) and the state after learning \(Y'\), \(P''\). So \(P' = \{p(\cdot \mid Y), p\in P\}\) and \(P'' = \{p(\cdot\mid Y'), p\in P\}\). While it is true that \(P'(H) = P''(H)\), \(P' \neq P''\) as sets of probabilities, since if \(p\in P'\) then \(p(Y) = 1\) whereas if \(p\in P''\) then \(p(Y) = 0\). So one lesson we should learn from dilation is that imprecise belief is represented by sets of functions rather than by a set-valued function (see also Joyce 2011; Topey 2012; Bradley and Steele 2014b).

So dilation can perhaps be tamed or rationalised, and the issue with reflection can be mitigated. But there is still a puzzle that dilation raises: in the precise context we have a nice result, due to Good (1967), that says roughly that learning new information has positive expected value. This result is, to some extent, undermined by dilation. Bradley and Steele (2016) suggest that there is some sense in which Good’s result can be partially salvaged in the IP setting.

It seems that examples of dilation undermine the earlier claim that imprecise probabilities allow you to represent the difference between the weight and balance of evidence (see section 2.3): learning \(Y\) appears to give rise to a belief which one would consider as representing less evidence since it is more spread out. This is so because the prior credence in the dilation case is precise, not through weight of evidence, but through the symmetry discussed earlier. We cannot take narrowness of the interval \([\underline{P}(X), \overline{P}(X)]\) as a characterisation of weight of evidence since the interval can be narrow for reasons other than because lots of evidence has been accumulated. So my earlier remarks on weight/balance should not be read as the claim that imprecise probabilities can always represent the weight/balance distinction. What is true is that there are cases where imprecise probabilities can represent the distinction in a way that impacts on decision making. This issue is far from settled and more work needs to be done on this topic.

3.2 Belief inertia

Imagine there are two live hypotheses \(H_1\) and \(H_2\). You have no idea how likely they are, but they are mutually exclusive and exhaustive. Then you acquire some evidence \(E\). Some simple probability theory shows that for every \(p\in P\) we have the following relationship (using \(p_i = p(E\mid H_i)\) for \(i=1,2\)).

\[\begin{align}p(H_1\mid E) & = {{p_1 p(H_1)} \over {p_1 p(H_1) + p_2 p(H_2)}} \\ & = {{p_1 p(H_1)} \over {p_2 + (p_1 - p_2) p(H_1)}}\end{align}\]

If your prior in \(H_1\) is vacuous—if \(P(H_1) = [0,1]\)—then the above equation shows that your posterior is vacuous as well. That is, if \(p(H_1) = 0\) then \(p(H_1\mid E) = 0\) and likewise for \(p(H_1) = 1 = p(H_1\mid E)\), and since the right hand side of the above equation is a continuous function of \(p(H_1)\), for every \(r\in [0,1]\) there is some \(p(H_1)\) such that \(p(H_1\mid E) = r\). So \(P(H_1\mid E) = [0,1]\).
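A numerical illustration may help (the likelihood values are my own choice, not from the text): fix \(p_1 = 0.9\) and \(p_2 = 0.1\), and let the prior \(p(H_1)\) range over the open unit interval. The posterior then also sweeps out the whole interval, so a vacuous prior over \(H_1\) yields a vacuous posterior no matter how discriminating the evidence is.

```python
# Belief inertia: with a vacuous prior over H1, the posterior is vacuous
# too, even with strongly discriminating likelihoods.

def posterior(prior_h1, p1=0.9, p2=0.1):
    """p(H1 | E) by Bayes' theorem, H1 and H2 mutually exclusive, exhaustive."""
    return p1 * prior_h1 / (p1 * prior_h1 + p2 * (1 - prior_h1))

# Sweep the prior across (0, 1); the posterior covers (0, 1) as well.
priors = [i / 1000 for i in range(1, 1000)]
posteriors = [posterior(pr) for pr in priors]
print(min(posteriors), max(posteriors))  # approaches 0 and 1 at the extremes
```

Each individual committee member updates sensibly (a member with prior \(0.5\) moves to \(0.9\) here), but the set as a whole doesn’t budge.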

It seems like the imprecise probabilist cannot learn from vacuous priors. This problem of belief inertia goes back at least as far as Levi (1980), chapter 13. Walley also discusses the issue, but appears unmoved by it: he says that vacuous posterior probabilities are just a consequence of adopting a vacuous prior:

The vacuous previsions really are rather trivial models. That seems appropriate for models of “complete ignorance” which is a rather trivial state of uncertainty. On the other hand, one cannot expect such models to be very useful in practical problems, notwithstanding their theoretical importance. If the vacuous previsions are used to model prior beliefs about a statistical parameter for instance, they give rise to vacuous posterior previsions… However, prior previsions that are close to vacuous and make nearly minimal claims about prior beliefs can lead to reasonable posterior previsions. (Walley 1991: 93)

Joyce (2011) and Rinard (2013) have both discussed this problem. Rinard’s solution to it is to argue that this shows that the vacuous prior is never a legitimate state of belief. Or rather, that we only ever need to model your beliefs using non-vacuous priors, even if these are incomplete descriptions of your belief state. This is similar to Walley’s “non-exhaustive” representation of belief. Vallinder (2018) suggests that the problem of belief inertia is quite a general one. Castro and Hart (forthcoming) use the looming danger of belief inertia to argue against what I have called an “objectivist” interpretation of IP.

An alternative solution to this problem (inspired by Wilson 2001; Cattaneo 2008, 2014) would modify the update rule in such a way that those extreme priors that give extremely small likelihoods to the evidence are excised from the representor. More work would need to be done to make this precise and show how exactly the response would go.

3.3 Decision making

One important use that models of belief can be put to is as part of a theory of rational decision. IP is no different. Decision making with imprecise probabilities has some problems, however.

The problem for IP decision making, in short, is that your credal committee can disagree on what the best course of action is, and when they do, it is unclear how you should act (recall the definitions in section 1.1). Imagine betting on a coin of unknown bias. Consider the indicator gambles on heads and tails. Both bets have imprecise expectation \([0,1]\). How are you supposed to compare these expectations? The bets are incomparable. (If the coin case appears to have too much exploitable symmetry, consider unit bets on Elga pulling toothpaste or jellyfish from his bag.) This incomparability, argues Williamson, leads to decision making paralysis, and this highlights a flaw in the epistemology (2010: 70). This argument seems to be missing the point, however, if one of our motivations for IP is precisely to be able to represent such incomparability of prospects (see section 2.2)! The incommensurability of options entailed by IP is not a bug, it’s a feature. Decision making with imprecise probabilities is discussed by Seidenfeld (2004), Troffaes (2007), Seidenfeld, Schervish, and Kadane (2010), Bradley (2015), Williams (2014), and Huntley, Hable, and Troffaes (2014).

A more serious worry confronts IP when you have to make sequences of decisions. There is a rich literature in economics on sequences of decisions for agents who fail to be orthodox expected utility maximisers (Seidenfeld 1988; 1994; Machina 1989; Al-Najjar and Weinstein 2009, and the references therein). This topic was brought to the attention of philosophers again after the publication of Elga’s (2010) paper Subjective Probabilities Should Be Sharp, which highlights the problem with a simple decision example, although a very similar example appears in Hammond (1988) in relation to Seidenfeld’s discussion of Levi’s decision rule “E-admissibility” (Seidenfeld 1988).

A version of the problem is as follows. You are about to be offered two bets on a coin of unknown bias, \(A\) and \(B\), one after the other. The bets pay out as follows:

  • \(A\) loses 10 if the coin lands heads and wins 15 otherwise
  • \(B\) wins 15 if the coin lands heads and loses 10 otherwise

If we assume you have beliefs represented by \(P(H)=[0,1]\), these bets have expectations of \([-10,15]\). Refusing each bet has an expectation of 0. So accepting and refusing \(A\) are incomparable with respect to your beliefs. Likewise for \(B\). The problem is that refusing both bets seems to be irrational, since accepting both bets gets you a guaranteed payoff of 5. Elga argues that no decision rule for imprecise probabilities can rule out refusing both bets. He then argues that this shows that imprecise probabilities are bad epistemology. Neither argument works. Chandler (2014) and Sahlin and Weirich (2014) both point out that a certain kind of imprecise decision rule does make refusing both bets impermissible, and Elga has acknowledged this in an erratum to his paper. Bradley and Steele (2014a) argue that decision rules that make refusing both bets merely permissible are legitimate ways to make imprecise decisions. They also point out that the rule that Chandler, and Sahlin and Weirich advocate has counterintuitive consequences in other decision problems.
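The arithmetic of the example can be checked with a small computational sketch (hypothetical code, not from the literature; a finite grid of probability values stands in for the credal set \(P(H)=[0,1]\)):

```python
# Sketch of the two-bet example under an imprecise credence P(H) = [0, 1].
# We compute the interval of expected values each bet takes across the
# credal set, approximated here by a coarse grid of probability functions.

def payoff_A(heads):      # bet A: lose 10 on heads, win 15 on tails
    return -10 if heads else 15

def payoff_B(heads):      # bet B: win 15 on heads, lose 10 on tails
    return 15 if heads else -10

def expectation(p, payoff):
    """Expected value of a bet when the probability of heads is p."""
    return p * payoff(True) + (1 - p) * payoff(False)

grid = [i / 100 for i in range(101)]   # finite stand-in for the credal set

exp_A = [expectation(p, payoff_A) for p in grid]
exp_B = [expectation(p, payoff_B) for p in grid]
print(min(exp_A), max(exp_A))   # -10.0 15.0: accepting A is incomparable with 0
print(min(exp_B), max(exp_B))   # -10.0 15.0: likewise for B

# But accepting both bets yields 5 whichever way the coin lands:
both = [payoff_A(h) + payoff_B(h) for h in (True, False)]
print(both)                     # [5, 5]
```

No single member of the credal set ranks either bet above refusal, yet every member agrees that taking both bets dominates refusing both.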

Moss (2015) relates Elga-style IP decision problems to moral dilemmas and uses the analogy to explain the conflicting intuitions in Elga’s problem. Sud (2014) and Rinard (2015) both also offer alternative decision theories for imprecise probabilities. Bradley (2019) argues that all three struggle to accommodate a version of the Ellsberg decisions discussed above.

Even if Elga’s argument worked and there were no good imprecise decision rules, that wouldn’t show that IP was a faulty model of belief. We want to be able to represent the suspension of judgement on various things, including on the relative goodness of a number of options. Such incommensurability inevitably brings with it some problems for sequential decisions (see, for example, Broome 2000), but this is not an argument against the epistemology. As Bradley and Steele note, Elga’s argument—if it were valid—could mutatis mutandis be used as an argument that there are no incommensurable goods, and this seems too strong.

3.4 Interpreting IP

Imprecise probabilities aren’t a radically new theory. They are merely a slight modification of existing models of belief for situations of ambiguity. Often your credences will be precise enough, and your available actions will be such that you act more or less as if you were a strict Bayesian. One might analogize imprecise probabilities as the “Theory of Relativity” to the strict Bayesian “Newtonian Mechanics”: all but indistinguishable in all but the most extreme situations. This analogy goes deeper: in both cases, the theories are “empirically indistinguishable” in normal circumstances, but they both differ radically in some conceptual respects. Namely, the role of absolute space in Newtonian mechanics/GR; how to model ignorance in the strict/imprecise probabilist case. Howson (2012) makes a similar analogy between modelling belief and models in science. Both involve some requirement to be somewhat faithful to the target system, but in each case faithfulness must be weighed up against various theoretical virtues like simplicity, computational tractability and so on. Likewise Hosni (2014) argues that what model of belief is appropriate is somewhat dependent on context. There is of course an important disanalogy in that models of belief are supposed to be normative as well as descriptive, whereas models in science typically only have to play a descriptive role. Walley (1991) discusses a similar view but is generally sceptical of such an interpretation.

3.4.1 What is a belief?

One standard interpretation of the probability calculus is that probabilities represent “degrees of belief” or “credences”. This is more or less the concept under consideration so far. But what is a degree of belief? There are a number of ways of cashing out what it is that a representation of degree of belief is actually representing.

One of the most straightforward understandings of degree of belief is that credences are interpreted in terms of an agent’s limiting willingness to bet. This is an idea which goes back to Ramsey (1926) and de Finetti (1964, 1990 [1974]). The idea is that your credence in \(X\) is \(\alpha\) just in case \(\alpha\) is the value at which you are indifferent between the gambles:

  • Win \(1-\alpha\) if \(X\), lose \(\alpha\) otherwise
  • Lose \(1- \alpha \) if \(X\), win \(\alpha\) otherwise
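A quick computational gloss on the two gambles above (hypothetical code, not from the text): if your credence in \(X\) is \(c\), the first gamble has expected value \(c(1-\alpha)-(1-c)\alpha = c-\alpha\) and the second has \(\alpha-c\), so you are indifferent between them exactly when \(\alpha = c\).

```python
# Expected values of the two gambles of the betting interpretation,
# given credence c in X and a candidate betting rate alpha.

def buy_gamble(c, alpha):
    """Win 1 - alpha if X, lose alpha otherwise; expectation c - alpha."""
    return c * (1 - alpha) - (1 - c) * alpha

def sell_gamble(c, alpha):
    """Lose 1 - alpha if X, win alpha otherwise; expectation alpha - c."""
    return -c * (1 - alpha) + (1 - c) * alpha

c = 0.7
assert buy_gamble(c, 0.7) == 0.0 == sell_gamble(c, 0.7)  # indifference at alpha = c
assert buy_gamble(c, 0.5) > 0 > sell_gamble(c, 0.5)      # cheap bets on X look good
assert buy_gamble(c, 0.9) < 0 < sell_gamble(c, 0.9)      # expensive ones look bad
```

The indifference point thus pins down a single number, which is exactly what the one-sided (imprecise) betting picture gives up.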

This is the “betting interpretation”. This is the interpretation behind Dutch book arguments: this interpretation of belief makes the link between betting quotients and belief strong enough to sanction the Dutch book theorem’s claim that beliefs must be probabilistic. Williamson in fact takes issue with IP because IP cannot be given this betting interpretation (2010: 68–72). He argues that Smith’s and Walley’s contributions notwithstanding (see formal appendix), the single-value betting interpretation makes sense as a standard for credence in a way that the one-sided betting interpretation doesn’t. The idea is that you may refuse all bets unless they are at extremely favourable odds by your lights. Such behaviour doesn’t speak to your credences. However, if you were to offer a single value then this tells us something about your epistemic state. There is something to this idea, but it must be traded off against the worry that forcing agents to have such single numbers systematically misrepresents their epistemic states. As Kaplan puts it

The mere fact that you nominate \(0.8\) under the compulsion to choose some determinate value for [\(p(X)\)] hardly means that you have a reason to choose \(0.8\). The orthodox Bayesian is, in short, guilty of advocating false precision. (Kaplan 1983: 569, Kaplan’s emphasis)

A related interpretation of credence is to understand credence as being just a representation of an agent’s dispositions to act. This interpretation sees credence as that function such that your elicited preferences and observed actions can be represented as those of an expected utility maximiser with respect to that probability function (Briggs 2014: section 2.2). Your credences just are that function that represents you as a rational agent. For precise probabilism, “rational agent” means “expected utility maximiser”. For imprecise probabilism, rational agent must mean something slightly different. A slightly more sophisticated version of this sort of idea is to understand credence to be exactly that component of the preference structure that the probability function represents in the representation theorem. Recall the discussion of incompleteness (section 2.2). IP represents you as an agent conflicted between all the \(p \in P\): unless the \(p\) agree that \(X\) is better than \(Y\) or vice versa, you find \(X\) and \(Y\) incomparable. What a representation theorem actually proves is a matter of some dispute (see Zynda 2000; Hájek 2008; Meacham and Weisberg 2011).

One might take the view that credence is modelling some kind of mental or psychological quantity in the head. Strength of belief is a real psychological quantity and it is this that credence should measure. Unlike the above views, this interpretation of credence isn’t easy to operationalise. It also seems like this understanding of strength of belief distances credence from its role in understanding decision making. The above behaviourist views take belief’s role in decision making to be central to or even definitional of what belief is. This psychological interpretation seems to divorce belief from decision. Whether there are such stable neurological structures is also a matter of some controversy (Fumagalli 2013; Smithson and Pushkarskaya 2015).

A compromise between the behaviourist views and the psychological views is to say that belief is characterised in part by its role in decision making. This leaves room for belief to play an important role in other things, like assertion or reasoning and inference. So the answer to the question “What is degree of belief?” is: “Degree of belief is whatever psychological factors play the role imputed to belief in decision making contexts, assertion behaviour, reasoning and inference”. There is room in this characterisation to understand credence as measuring some sort of psychological quantity that causally relates to action, assertion and so on. This is a sort of functionalist reading of what belief is. Eriksson and Hájek (2007) argue that “degree of belief” should just be taken as a primitive concept in epistemology. The above attempts to characterise degree of belief then fill in the picture of the role degree of belief plays.

3.4.2 What is a belief in \(X\)?

So now we have a better idea of what it is that a model of belief should do. But which part of our model of belief is representing which part of the belief state? The first thing to say is that \(P(X)\) is not an adequate representation of the belief in \(X\). That is, one of the values of the credal set approach is that it can capture certain kinds of non-logical relationships between propositions that are lost when focusing on, say, the associated set of probability values. For example, consider tossing a coin of unknown bias. \(P(H)=P(T)=[0,1]\), but this fails to represent the important fact that \(p(H)=1-p(T)\) for all \(p\in P\). Or that getting a heads on the first toss is at least as likely as heads on two consecutive tosses. These facts that aren’t captured by the sets-of-values view can play an important role in reasoning and decision.
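The point can be illustrated with a toy sketch (hypothetical code): the sets of values for \(P(H)\) and \(P(T)\) both span the whole unit interval, yet every member of the credal set respects the constraints just mentioned.

```python
# Toy illustration: a finite grid stands in for the credal set for a coin
# of unknown bias; each member p is identified with its value for heads.

credal_set = [i / 100 for i in range(101)]

# The associated sets of values: P(H) and P(T) both span [0, 1]...
values_H = [p for p in credal_set]
values_T = [1 - p for p in credal_set]
print(min(values_H), max(values_H))   # 0.0 1.0
print(min(values_T), max(values_T))   # 0.0 1.0

# ...but the sets-of-values picture loses what holds member by member:
for p in credal_set:
    assert abs(p - (1 - (1 - p))) < 1e-12   # p(H) = 1 - p(T)
    assert p >= p * p                       # one head at least as likely as two
```

The member-by-member constraints in the final loop are invisible once the credal set is collapsed into the two value sets.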

\(P(X)\) might be a good enough representation of belief for some purposes. For example in the Ellsberg game these sets of probability values (and their associated sets of expectations) are enough to rationalise the non-probabilistic preferences. How good the representation needs to be depends on what it will be used for. Representing the sun as a point mass is a good enough representation for basic orbital calculations, but obviously inadequate if you are studying coronal mass ejections, solar flares or other phenomena that depend on details of the internal dynamics of the sun.

3.5 Regress

Imprecise probabilities is a theory born of our limitations as reasoning agents, and of limitations in our evidence base. If only we had better evidence, a single probability function would do. But since our evidence is weak, we must use a set. In a way, the same is true of precise probabilism. If only we knew the truth, we could represent belief with a truth-valuation function, or just a set of sentences that are fully believed. But since there are truths we don’t know, we must use a probability to represent our intermediate confidence. And indeed, the same problem arises for the imprecise probabilist. Is it reasonable to assume that we know what set of probabilities best represents the evidence? Perhaps we should have a set of sets of probabilities… Similar problems arise for theories of vagueness (Sorensen 2012). We objected to precise values for degrees of belief, so why be content with set-valued beliefs with precise boundaries? This is the problem of “higher-order vagueness” recast as a problem for imprecise probabilism. Why is sets of probabilities the right level to stop the regress at? Why not sets of sets? Why not second-order probabilities? Why not single probability functions? Williamson (2014) makes this point, and argues that a single precise probability is the correct level at which to get off the “uncertainty escalator”. Williamson advocates the betting interpretation of belief, and his argument here presupposes that interpretation. But the point is still worth addressing: for a particular interpretation of what belief is, what level of uncertainty representation is appropriate? For the functionalist interpretation suggested above, this is something of a pragmatic choice. The further we allow this regress to continue, the harder it is to deal with these belief-representing objects. So let’s not go further than we need.

We have seen arguments above that IP does have some advantage over precise probabilism, in the capacity to represent suspending judgement, the difference between weight and balance of evidence and so on. So we must go at least this far up the uncertainty escalator. But for the sake of practicality we need not go any further, even though there are hierarchical Bayes models that would give us a well-defined theory of higher-order models of belief. This is, ultimately, a pragmatic argument. Actual human belief states are probably immensely complicated neurological patterns with all the attendant complexity, interactivity, reflexivity and vagueness. We are modelling belief, so it is about choosing a model at the right level of complexity. If you are working out the trajectory of a cannonball on earth, you can safely ignore the gravitational influence of the moon on the cannonball. Likewise, there will be contexts where simple models of belief are appropriate: perhaps your belief state is just a set of sentences of a language, or perhaps just a single probability function. If, however, you are modelling the tides, then the gravitational influence of the moon needs to be involved: the model needs to be more complex. This suggests that an adequate model of belief under severe uncertainty may need to move beyond the single probability paradigm. But a pragmatic argument says that we should only move as far as we need to. So while you need to model the moon to get the tides right, you can get away without having Venus in your model. This relates to the contextual nature of appropriateness for models of belief mentioned earlier. If one were attempting to provide a complete formal characterisation of the ontology of belief, then these regress worries would be significantly harder to avoid.

Let’s imagine that we had a second-order probability \(\mu\) defined over the set of (first-order) probabilities \(P\). We could then reduce uncertainty to a single function by \(p^*(X) = \sum_{p \in P} \mu(p)p(X)\) (assuming \(P\) is finite; in the interests of keeping things simple I discuss only this case). Now if \(p^*(X)\) is what is used in decision making, then there is no real sense in which we have a genuine IP model, and it cannot rationalise the Ellsberg choice, nor can it give rise to incomparability. If there is some alternative use that \(\mu\) is put to, a use that allows incomparability and that rationalises Ellsberg choices, then it might be a genuine rival to credal sets, but it represents just as much of a departure from the orthodox theory as IP does.
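A minimal sketch of this collapse (hypothetical code; the three-member credal set and the weights \(\mu\) are invented for illustration):

```python
# Collapse a second-order probability mu over a finite credal set into a
# single first-order value: p*(X) = sum over p in P of mu(p) * p(X).

credal_set = [0.2, 0.5, 0.8]             # candidate values for p(X)
mu = {0.2: 0.25, 0.5: 0.5, 0.8: 0.25}    # second-order weights, summing to 1

p_star = sum(mu[p] * p for p in credal_set)
print(round(p_star, 10))   # 0.5: the imprecision has been averaged away
```

Once \(p^*\) drives decisions, the agent behaves exactly like a precise Bayesian with credence 0.5, which is why the second-order model on its own cannot rationalise Ellsberg choices or incomparability.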

Gärdenfors and Sahlin’s Unreliable Probabilities model enriches a basic IP approach with a “reliability index” (see the historical appendix). Lyon (2017) enriches the standard IP picture in a different way: he adds a privileged “best guess” probability. This modification allows for better aggregation of elicited IP estimates. How best to interpret such a model is still an open question. Other enriched IP models are no doubt available.

3.6 What makes a good imprecise belief?

There are, as we have seen, certain structural properties that are necessary conditions on rational belief. What exactly these are depends on your views. However, there are further ways of assessing belief. Strongly believing true things and strongly believing the negations of false things seem like good-making features of beliefs. For the case of precise credences, we can make this precise. There is a large literature on “scoring rules”: numerical methods for measuring how good a probability is relative to the actual state of the world (Brier 1950; Savage 1971; Joyce 2009; Pettigrew 2011).
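For instance, the Brier score penalises a forecast \(p\) for \(X\) by \((p-1)^2\) if \(X\) is true and \(p^2\) otherwise, and it is strictly proper: by your own lights, reporting your actual credence uniquely minimises expected penalty. A small sketch (hypothetical code):

```python
# Strict propriety of the Brier score: with credence c in X, the expected
# penalty c*(p - 1)**2 + (1 - c)*p**2 is uniquely minimised at p = c.

def expected_brier(c, p):
    return c * (p - 1) ** 2 + (1 - c) * p ** 2

c = 0.7
grid = [i / 100 for i in range(101)]        # candidate reports
best = min(grid, key=lambda p: expected_brier(c, p))
print(best)   # 0.7: honesty is the uniquely optimal report
```

It is this uniqueness property that, per the results discussed next, no real-valued scoring rule for imprecise probabilities can reproduce.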

For the case of imprecise probabilities, however, the situation looks bleak. No real-valued scoring rule for imprecise probabilities can have the desirable property of being strictly proper (Seidenfeld, Schervish, and Kadane 2012). Schoenfield (2017) presents a simple version of the result. Since strict propriety is a desirable property of a scoring rule (Bröcker and Smith 2007; Joyce 2009; Pettigrew 2011), this failing is serious. So further work is needed to develop a well-grounded theory of how to assess imprecise probabilities. Mayo-Wilson and Wheeler (2016) provide a neater version of the proof, and offer a property weaker than strict propriety that an imprecise probability scoring rule can satisfy. Carr (2015) and Konek (forthcoming) both present positive suggestions for moving forward with imprecise scoring rules. Levinstein (forthcoming) suggests that the problem really only arises for determinately imprecise credences, but not for indeterminate credence.

4. Summary

Imprecise probabilities offer a model of rational belief that does away with some of the idealisation required by the orthodox precise probability approach. Many motivations for such a move have been put forward, and many views on IP have been discussed. There are still several open philosophical questions relating to IP, and this is likely to be a rich field of research for years to come.

Bibliography

  • Al-Najjar, Nabil I., and Jonathan Weinstein, 2009, “The Ambiguity Aversion Literature: A Critical Assessment”, Economics and Philosophy, 25: 249–284.
  • Augustin, Thomas, Frank P.A. Coolen, Gert de Cooman, and Matthias C.M. Troffaes (eds), 2014, Introduction to Imprecise Probabilities, John Wiley and Sons. New York.
  • Benétreau-Dupin, Yann, 2015, “The Bayesian who knew too much”, Synthese, 192(5): 1527–1542.
  • Binmore, Ken, Lisa Stewart, and Alex Voorhoeve, 2012, “How much ambiguity aversion? Finding indifferences between Ellsberg’s risk and ambiguous bets”, Journal of Risk and Uncertainty, 45: 215–238.
  • Blackwell, D., and M. A. Girschick, 1954, Theory of Games and Statistical Decisions, Wiley. New York.
  • Boole, George, 1958 [1854], The Laws of Thought, Dover. New York.
  • Bovens, Luc, and Stephan Hartmann, 2003, Bayesian Epistemology, Oxford University Press. Oxford.
  • Bradley, Richard, 2009, “Revising Incomplete Attitudes”, Synthese, 171: 235–256.
  • –––, 2017, Decision Theory with a Human Face, Cambridge University Press. Cambridge.
  • Bradley, Richard, and Mareile Drechsler, 2014, “Types of Uncertainty”, Erkenntnis, 79: 1225–1248.
  • Bradley, Seamus, 2015, “How to choose among choice functions”, Proceedings of the Ninth International Symposium on Imprecise Probability: Theories and Applications, 57–66, URL = <http://www.sipta.org/isipta15/data/paper/9.pdf>.
  • –––, 2016, “Vague chance?”, Ergo, 3(20).
  • –––, 2019, “A counterexample to three imprecise decision theories”, Theoria, 85(1): 18–30.
  • Bradley, Seamus, and Katie Steele, 2014a, “Should Subjective Probabilities be Sharp?”, Episteme, 11: 277–289.
  • –––, 2014b, “Uncertainty, Learning and the ‘Problem’ of Dilation”, Erkenntnis, 79: 1287–1303.
  • –––, 2016, “Can free evidence be bad? Value of information for the imprecise probabilist”, Philosophy of Science, 83(1): 1–28.
  • Brady, Michael and Rogério Arthmar, 2012, “Keynes, Boole and the interval approach to probability”, History of Economic Ideas, 20: 365–84.
  • Brier, Glenn, 1950, “Verification of Forecasts Expressed in Terms of Probability”, Monthly Weather Review, 78: 1–3.
  • Briggs, R.A., 2014, “Normative Theories of Rational Choice: Expected Utility”, The Stanford Encyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2014/entries/rationality-normative-utility/>.
  • Broome, John, 2000, “Incommensurable Values”, in Well-Being and Morality: Essays in Honour of James Griffin, R. Crisp and B. Hooker (eds), 21–38, Clarendon Press. Oxford.
  • Bröcker, Jochen, and Leonard A. Smith, 2007, “Scoring Probabilistic Forecasts; On the Importance of Being Proper”, Weather and Forecasting, 22: 382–388.
  • Camerer, Colin, and Martin Weber, 1992, “Recent Developments in Modeling Preferences: Uncertainty and Ambiguity”, Journal of Risk and Uncertainty, 5: 325–370.
  • Carr, Jennifer, 2015, “Chancy accuracy and imprecise credence”, Philosophical Topics, 29: 67–81.
  • Castro, Clinton and Casey Hart, forthcoming, “The imprecise impermissivists dilemma”, Synthese.
  • Cattaneo, Marco, 2008, “Fuzzy Probabilities based on the Likelihood Function”, in Soft Methods for Handling Variability and Imprecision, D. Dubois, M. A. Lubiano, H. Prade, M. A. Gil, P. Grzegorzewski, and O. Hryniewicz (eds), 43–50, Springer.
  • –––, 2014, “A Continuous Updating Rule for Imprecise Probabilities”, in Information Processing and Management of Uncertainty in Knowledge Based Systems, Anne Laurent, Oliver Strauss, Bernadette Bouchon-Meunier, and Ronald R. Yager (eds), 426–435, Springer.
  • Chandler, Jacob, 2014, “Subjective Probabilities Need Not Be Sharp”, Erkenntnis, 79: 1273–1286.
  • Chu, Francis, and Joseph Y. Halpern, 2004, “Great expectations. Part II: Generalized expected utility as a universal decision rule”, Artificial Intelligence, 159: 207–230.
  • –––, 2008, “Great expectations. Part I: On the customizability of General Expected Utility”, Theory and Decision, 64: 1–36.
  • Clifford, William Kingdom, 1901, “The Ethics of Belief”, in Lectures and Essays, Leslie Stephen and Frederick Pollock (eds), 2: 161–205, 3rd Edition, Macmillan. London.
  • de Cooman, Gert, and Enrique Miranda, 2007, “Symmetry of models versus models of symmetry”, in Probability and Inference: Essays in Honor of Henry E. Kyburg Jnr., William Harper and Gregory Wheeler (eds), 67–149, Kings College Publications.
  • Cozman, Fabio, 2000, “Credal Networks”, Artificial Intelligence, 120: 199–233.
  • –––, 2012, “Sets of probability distributions, independence and convexity”, Synthese, 186: 577–600.
  • Cozman, Fabio, and Peter Walley, 2005, “Graphoid properties of epistemic irrelevance and independence”, Annals of Mathematics and Artificial Intelligence, 45: 173–195.
  • Dardashti, Radin, Luke Glynn, Karim Thébault, and Mathias Frisch, 2014, “Unsharp Humean chances in statistical physics: a reply to Beisbart”, in New Directions in the Philosophy of Science, Maria Carla Galavotti, Dennis Dieks, Wenceslao J. Gonzalez, Stephan Hartmann, Thomas Uebel, and Marcel Weber (eds), 531–542, Springer. Dordrecht.
  • Elga, Adam, 2010, “Subjective Probabilities should be Sharp”, Philosophers’ Imprint, 10.
  • Elkin, Lee and Gregory Wheeler, 2016, “Resolving peer disagreements through imprecise probabilities”, Noûs, 52(2): 260–278.
  • Ellsberg, Daniel, 1961, “Risk, ambiguity and the Savage axioms”, Quarterly Journal of Economics, 75: 643–696.
  • Eriksson, Lena, and Alan Hájek, 2007, “What Are Degrees of Belief?”, Studia Logica, 86: 183–213.
  • Evren, Özgür, and Efe Ok, 2011, “On the multi-utility representation of preference relations”, Journal of Mathematical Economics, 47: 554–563.
  • Ferson, Scott and Lev R. Ginzburg, 1996, “Different methods are needed to propagate ignorance and variability”, Reliability Engineering and System Safety, 54: 133–144.
  • Ferson, Scott and Janos G. Hajagos, 2004, “Arithmetic with uncertain numbers: Rigorous and (often) best possible answers”, Reliability Engineering and System Safety, 85: 135–152.
  • Fine, Terrence L., 1973, Theories of Probability: An Examination of Foundations, Academic Press. New York.
  • –––, 1988, “Lower Probability Models for Uncertainty and Nondeterministic Processes”, Journal of Statistical Planning and Inference, 20: 389–411.
  • de Finetti, Bruno, 1964, “Foresight: Its Logical Laws, Its Subjective Sources”, in Studies in Subjective Probability, Henry E. Kyburg and Howard E. Smokler (eds), 97–158, Wiley. New York.
  • –––, 1990 [1974], Theory of Probability, Wiley Classics Library, Vol. 1, Wiley. New York.
  • Fox, Craig R., and Amos Tversky, 1995, “Ambiguity aversion and comparative ignorance”, Quarterly Journal of Economics, 110: 585–603.
  • van Fraassen, Bas, 1984, “Belief and the Will”, Journal of Philosophy, 81: 235–256.
  • –––, 1990, “Figures in a Probability Landscape”, in Truth or Consequences, Michael Dunn and Anil Gupta (eds), 345–356, Springer. Dordrecht.
  • Frigg, Roman, 2008, “Humean chance in Boltzmannian statistical mechanics”, Philosophy of Science, 75: 670–681.
  • Frigg, Roman, Seamus Bradley, Hailiang Du, and Leonard A. Smith, 2014, “Laplace’s Demon and the Adventures of his Apprentices”, Philosophy of Science, 81: 31–59.
  • Fumagalli, Roberto, 2013, “The Futile Search for True Utility”, Economics and Philosophy, 29: 325–347.
  • Gärdenfors, Peter, 1979, “Forecasts, Decisions and Uncertain Probabilities”, Erkenntnis, 14: 159–181.
  • Gärdenfors, Peter, and Nils-Eric Sahlin, 1982, “Unreliable probabilities, risk taking and decision making”, Synthese, 53: 361–386.
  • Genest, Christian, and James V. Zidek, 1986, “Combining Probability Distributions: A Critique and Annotated Bibliography”, Statistical Science, 1: 114–135.
  • Gilboa, Itzhak, 1987, “Expected Utility with Purely Subjective Non-additive Probabilities”, Journal of Mathematical Economics, 16: 65–88.
  • Glymour, Clark, 1980, “Why I am not a Bayesian”, in Theory and Evidence, 63–93, Princeton University Press. Princeton.
  • Gong, Ruobin and Xiao-Li Meng, 2017, “Judicious judgment meets unsettling update: dilation, sure loss and Simpson’s paradox”, URL = <https://arxiv.org/abs/1712.08946>.
  • Good, Irving John, 1962, “Subjective probability as the measure of a non-measurable set”, in Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International Congress, 319–329.
  • –––, 1967, “On the principle of total evidence”, British Journal for the Philosophy of Science, 17: 319–321.
  • –––, 1974, “A little learning can be dangerous”, British Journal for the Philosophy of Science, 25: 340–342.
  • –––, 1983 [1971], “Twenty-Seven principles of rationality”, in Good Thinking: The Foundations of Probability and its Applications, 15–19, University of Minnesota Press. Minnesota.
  • Grize, Yves L., and Terrence L. Fine, 1987, “Continuous Lower Probability-Based Models for Stationary Processes with Bounded and Divergent Time Averages”, The Annals of Probability, 15: 783–803.
  • Haenni, Rolf, 2009, “Non-additive degrees of belief”, in Huber and Schmidt-Petri 2009: 121–160.
  • Haenni, Rolf, Jan-Willem Romeijn, Gregory Wheeler, and Jon Williamson, 2011, Probabilistic Logic and Probabilistic Networks, Synthese Library. Dordrecht.
  • Hájek, Alan, 2003, “What conditional probabilities could not be”, Synthese, 137: 273–323.
  • –––, 2008, “Arguments for—or against—probabilism?”, British Journal for the Philosophy of Science, 59: 793–819.
  • –––, 2011, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2012/entries/probability-interpret/>.
  • Hájek, Alan, and Michael Smithson, 2012, “Rationality and Indeterminate Probabilities”, Synthese, 187: 33–48.
  • Halpern, Joseph Y., 2003, Reasoning about Uncertainty, MIT Press. Cambridge.
  • Hammond, Peter, 1988, “Orderly Decision Theory”, Economics and Philosophy, 4: 292–297.
  • Harsanyi, John, 1955, “Cardinal welfare, individualistic ethics and interpersonal comparisons of utility”, Journal of Political Economy, 63: 309–321.
  • Hart, Casey and Michael Titelbaum, 2015, “Intuitive dilation?”, Thought, 4: 252–262.
  • Hartmann, Stephan, and Patrick Suppes, 2010, “Entanglement, Upper Probabilities and Decoherence in Quantum Mechanics”, in EPSA Philosophical Issues in the Sciences: Launch of the European Philosophy of Science Association, Mauricio Suárez, Mauro Dorato, and Miklós Rédei (eds), 93–103, Springer.
  • Hawthorne, James, 2009, “The Lockean Thesis and the Logic of Belief”, in Huber and Schmidt-Petri 2009: 49–74.
  • Herron, Timothy, Teddy Seidenfeld, and Larry Wasserman, 1994, “The Extent of Dilation of Sets of Probabilities and the Asymptotics of Robust Bayesian Inference”, in PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 250–259.
  • Hill, Brian, 2013, “Confidence and decision”, Games and Economic Behavior, 82: 675–692.
  • Hosni, Hykel, 2014, “Towards a Bayesian theory of second order uncertainty: Lessons from non-standard logics”, in David Makinson on Classical Methods for Non-Classical Problems, Sven Ove Hansson (ed.), 195–221, Springer. Dordrecht.
  • Howson, Colin, 2012, “Modelling Uncertain Inference”, Synthese, 186: 475–492.
  • Howson, Colin, and Peter Urbach, 2006, Scientific Reasoning: the Bayesian Approach, 3rd edition, Open Court. Chicago.
  • Huber, Franz, 2009, “Belief and Degrees of Belief”, in Huber and Schmidt-Petri 2009: 1–33.
  • –––, 2014, “Formal Representations of Belief”, Stanford Encyclopedia of Philosophy (Spring 2014 Edition), E. N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2014/entries/formal-belief/>.
  • Huber, Franz and Cristoph Schmidt-Petri (eds), 2009, Degrees of Belief, Springer. Dordrecht.
  • Huntley, Nathan, Robert Hable, and Matthias Troffaes, 2014, “Decision making”, in Augustin et al. 2014: 190–206.
  • James, William, 1897, “The Will to Believe”, in The Will to Believe and Other Essays in Popular Philosophy, 1–31, Longmans, Green and Co. New York.
  • Jaynes, Edwin T., 2003, Probability Theory: The Logic of Science, Cambridge University Press. Cambridge.
  • Jeffrey, Richard, 1983, The Logic of Decision, 2nd edition, University of Chicago Press. Chicago.
  • –––, 1984, “Bayesianism with a Human Face”, in Testing Scientific Theories, John Earman (ed.), 133–156, University of Minnesota Press. Minnesota.
  • –––, 1987, “Indefinite Probability Judgment: A Reply to Levi”, Philosophy of Science, 54: 586–591.
  • Joyce, James M., 1999, The Foundations of Causal Decision Theory, Cambridge Studies in Probability, Induction and Decision Theory, Cambridge University Press. Cambridge.
  • –––, 2005, “How Probabilities Reflect Evidence”, Philosophical Perspectives, 19: 153–178.
  • –––, 2009, “Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief”, in Huber and Schmidt-Petri 2009: 263–297.
  • –––, 2011, “A Defense of Imprecise Credence in Inference and Decision”, Philosophical Perspectives, 24: 281–323.
  • Kadane, Joseph B., Mark J. Schervish, and Teddy Seidenfeld, 1999, Rethinking the Foundations of Statistics, Cambridge University Press. Cambridge.
  • Kaplan, Mark, 1983, “Decision theory asphilosophy”,Philosophy of Science, 50:549–577.
  • –––, 1996,Decision Theory asPhilosophy, Cambridge University Press. Cambridge.
  • Keynes, J. M., 1921,A Treatise on Probability,Macmillan. London.
  • Konek, Jason, forthcoming “Epistemic conservativity and imprecise credence”,Philosophy and Phenomenological Research
  • Koopman, B. O., 1940, “The Bases ofProbability”,Bulletin of the American MathematicalSociety, 46: 763–774.
  • Kumar, Anurag, and Terrence L. Fine, 1985, “Stationary LowerProbabilities and Unstable Averages”,Zeitschrift fürWahrscheinlichkeitstheorie und verwandte Gebiete, 69:1–17.
  • Kyburg, Henry E., 1983, “Rational belief”,TheBrain and Behavioural Sciences, 6: 231–273.
  • –––, 1987, “Bayesian and non-Bayesianevidential updating”,Artificial Intelligence, 31:271–293.
  • –––, 2003, “Are there degrees ofbelief?”Journal of Applied Logic: 139–149.
  • Kyburg, Henry E., and Michael Pittarelli, 1992,Set-basedBayesianism.
  • Kyburg, Henry E., and Choh Man Teng, 2001,UncertainInference, Cambridge University Press. Cambridge.
  • Leitgeb, Hannes, 2014, “The stability theory of belief”, The Philosophical Review, 123: 131–171.
  • Levi, Isaac, 1974, “On Indeterminate probabilities”, Journal of Philosophy, 71: 391–418.
  • –––, 1980, The Enterprise of Knowledge, The MIT Press. Cambridge.
  • –––, 1982, “Ignorance, Probability and Rational Choice”, Synthese, 53: 387–417.
  • –––, 1985, “Imprecision and Indeterminacy in Probability Judgment”, Philosophy of Science, 52: 390–409.
  • –––, 1986, Hard Choices: decision making under unresolved conflict, Cambridge University Press. Cambridge.
  • –––, 1999, “Value commitments, value conflict and the separability of belief and value”, Philosophy of Science, 66: 509–533.
  • Levinstein, Ben, forthcoming, “Imprecise epistemic values and imprecise credences”, Australasian Journal of Philosophy.
  • Lewis, David, 1986, “A Subjectivist’s Guide to Objective Chance (and postscript)”, in Philosophical Papers II, 83–132. Oxford University Press. Oxford.
  • –––, 1994, “Humean Supervenience Debugged”, Mind, 103: 473–490.
  • List, Christian, and Philip Pettit, 2011, Group Agency, Oxford University Press. Oxford.
  • Loewer, B., 2001, “Determinism and chance”, Studies in the History and Philosophy of Modern Physics, 32: 609–620.
  • Lyon, Aidan, 2017, “Vague Credences”, Synthese, 194(10): 3931–3954.
  • Machina, Mark J., 1989, “Dynamic Consistency and Non-Expected Utility Models of Choice Under Uncertainty”, Journal of Economic Literature, 27: 1622–1668.
  • Mayo-Wilson, Conor, and Gregory Wheeler, 2016, “Scoring imprecise credences: a mildly immodest proposal”, Philosophy and Phenomenological Research, 93(1): 55–78.
  • Meacham, Christopher, and Jonathan Weisberg, 2011, “Representation Theorems and the Foundations of Decision Theory”, Australasian Journal of Philosophy, 89: 641–663.
  • Miranda, Enrique, 2008, “A survey of the theory of coherent lower previsions”, International Journal of Approximate Reasoning, 48: 628–658.
  • Miranda, Enrique, and Gert de Cooman, 2014, “Lower previsions”, in Augustin et al. 2014: 28–55.
  • Moss, Sarah, 2015, “Credal Dilemmas”, Noûs, 49(4): 665–683.
  • Norton, John, 2007, “Probability disassembled”, British Journal for the Philosophy of Science, 58: 141–171.
  • –––, 2008a, “Ignorance and Indifference”, Philosophy of Science, 75: 45–68.
  • –––, 2008b, “The dome: An Unexpectedly Simple Failure of Determinism”, Philosophy of Science, 75: 786–798.
  • Oberguggenberger, Michael, 2014, “Engineering”, in Augustin et al. 2014: 291–304.
  • Oberkampf, William, and Christopher Roy, 2010, Verification and Validation in Scientific Computing, Cambridge University Press. Cambridge.
  • Pedersen, Arthur Paul, 2014, “Comparative Expectations”, Studia Logica, 102: 811–848.
  • Pedersen, Arthur Paul, and Gregory Wheeler, 2014, “Demystifying Dilation”, Erkenntnis, 79: 1305–1342.
  • Pettigrew, Richard, 2011, “Epistemic Utility Arguments for Probabilism”, The Stanford Encyclopedia of Philosophy (Winter 2011 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2011/entries/epistemic-utility/>.
  • Pfeifer, Niki, and Gernot D. Kleiter, 2007, “Human reasoning with imprecise probabilities: Modus ponens and denying the antecedent”, Proceedings of the 5th International Symposium on Imprecise Probability: Theory and Application: 347–356.
  • Quaeghebeur, Erik, 2014, “Desirability”, in Augustin et al. 2014: 1–27.
  • Ramsey, F. P., 1926, “Truth and Probability”, in The Foundations of Mathematics and other Logical Essays, 156–198. Routledge. London.
  • Rinard, Susanna, 2013, “Against Radical Credal Imprecision”, Thought, 2: 157–165.
  • –––, 2015, “A decision theory for imprecise probabilities”, Philosophers’ Imprint, 15: 1–16.
  • Ruggeri, Fabrizio, David Ríos, and Jacinto Martín, 2005, “Robust Bayesian analysis”, Handbook of Statistics, 25: 623–667, Elsevier. Amsterdam.
  • Sahlin, Nils-Eric, and Paul Weirich, 2014, “Unsharp Sharpness”, Theoria, 80: 100–103.
  • Savage, Leonard J., 1972 [1954], The Foundations of Statistics, 2nd edition, Dover. New York.
  • –––, 1971, “Elicitation of Personal Probabilities and Expectation”, Journal of the American Statistical Association, 66: 783–801.
  • Schaffer, Jonathan, 2007, “Deterministic Chance?” British Journal for the Philosophy of Science, 58: 114–140.
  • Schervish, Mark J., Teddy Seidenfeld, and Joseph B. Kadane, 2008, “The fundamental theorems of prevision and asset pricing”, International Journal of Approximate Reasoning, 49: 148–158.
  • Schoenfield, Miriam, 2012, “Chilling out on epistemic rationality”, Philosophical Studies, 158: 197–219.
  • –––, 2017, “The accuracy and rationality of imprecise credence”, Noûs, 51(4): 667–685.
  • Seidenfeld, Teddy, 1988, “Decision theory without ‘independence’ or without ‘ordering’. What’s the difference?” Economics and Philosophy: 267–290.
  • –––, 1994, “When normal and extensive form decisions differ”, Logic, Methodology and Philosophy of Science, IX: 451–463.
  • –––, 2004, “A contrast between two decision rules for use with (convex) sets of probabilities: Gamma-maximin versus E-admissibility”, Synthese, 140: 69–88.
  • Seidenfeld, Teddy, Joseph B. Kadane, and Mark J. Schervish, 1989, “On the shared preferences of two Bayesian decision makers”, The Journal of Philosophy, 86: 225–244.
  • Seidenfeld, Teddy, Mark J. Schervish, and Joseph B. Kadane, 1995, “A Representation of Partially Ordered Preferences”, Annals of Statistics, 23: 2168–2217.
  • –––, 2010, “Coherent choice functions under uncertainty”, Synthese, 172: 157–176.
  • –––, 2012, “Forecasting with imprecise probabilities”, International Journal of Approximate Reasoning, 53: 1248–1261.
  • Seidenfeld, Teddy, and Larry Wasserman, 1993, “Dilation for sets of probabilities”, Annals of Statistics, 21: 1139–1154.
  • Skyrms, Brian, 2011, “Resiliency, Propensities and Causal Necessity”, in Philosophy of Probability: Contemporary Readings, Antony Eagle (ed.), 529–536, Routledge. London.
  • Smith, Cedric A. B., 1961, “Consistency in Statistical Inference and Decision”, Journal of the Royal Statistical Society. Series B (Methodological), 23: 1–37.
  • Smithson, Michael, and Paul D. Campbell, 2009, “Buying and Selling Prices under Risk, Ambiguity and Conflict”, Proceedings of the 6th International Symposium on Imprecise Probability: Theory and Application.
  • Smithson, Michael, and Helen Pushkarskaya, 2015, “Ignorance and the Brain: Are there Distinct Kinds of Unknowns?” in Routledge International Handbook of Ignorance Studies, Matthias Gross and Linsey McGoey (eds), Routledge.
  • Sorensen, Roy, 2012, “Vagueness”, The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2013/entries/vagueness/>.
  • Stainforth, David A., Myles R. Allen, E. R. Tredger, and Leonard A. Smith, 2007, “Confidence, uncertainty and decision-support relevance in climate models”, Philosophical Transactions of the Royal Society, 365: 2145–2161.
  • Steele, Katie, 2007, “Distinguishing indeterminate belief from ‘risk averse’ preference”, Synthese, 158: 189–205.
  • Stewart, Rush T., and Ignacio Ojea Quintana, 2018, “Probabilistic opinion pooling with imprecise probabilities”, Journal of Philosophical Logic, 47(1): 17–45.
  • Sturgeon, Scott, 2008, “Reason and the grain of belief”, Noûs, 42: 139–165.
  • Sud, Rohan, 2014, “A forward looking decision rule for imprecise credences”, Philosophical Studies, 167: 119–139.
  • Suppes, Patrick, 1974, “The Measurement of Belief”, Journal of the Royal Statistical Society B, 36: 160–191.
  • Suppes, Patrick, and Mario Zanotti, 1991, “Existence of Hidden Variables Having Only Upper Probability”, Foundations of Physics, 21: 1479–1499.
  • Talbott, William, 2008, “Bayesian Epistemology”, The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2013/entries/epistemology-bayesian/>.
  • Topey, Brett, 2012, “Coin flips, credences and the Reflection Principle”, Analysis, 72: 478–488.
  • Trautmann, Stefan, and Gijs van de Kuilen, 2016, “Ambiguity Attitudes”, Blackwell Handbook of Judgement and Decision-Making, 89–116.
  • Troffaes, Matthias, 2007, “Decision Making under Uncertainty using Imprecise Probabilities”, International Journal of Approximate Reasoning, 45: 17–29.
  • Troffaes, Matthias, and Gert de Cooman, 2014, Lower Previsions, Wiley. New York.
  • Vallinder, Aron, 2018, “Imprecise Bayesianism and global belief inertia”, British Journal for the Philosophy of Science, 69(4): 1205–1230.
  • Vicig, Paolo, Marco Zaffalon, and Fabio G. Cozman, 2007, “Notes on ‘Notes on conditional previsions’”, International Journal of Approximate Reasoning, 44: 358–365.
  • Vicig, Paolo, and Teddy Seidenfeld, 2012, “Bruno de Finetti and imprecision: Imprecise Probability Does not Exist!” International Journal of Approximate Reasoning, 53: 1115–1123.
  • Voorhoeve, Alex, Ken Binmore, Arnaldur Stefansson, and Lisa Stewart, 2016, “Ambiguity attitudes, framing and consistency”, Theory and Decision, 81(3): 313–337.
  • Walley, Peter, 1991, Statistical Reasoning with Imprecise Probabilities, Monographs on Statistics and Applied Probability, Vol. 42. Chapman and Hall. London.
  • Walley, Peter, and Terrence L. Fine, 1982, “Towards a frequentist theory of upper and lower probability”, The Annals of Statistics, 10: 741–761.
  • Wallsten, Thomas, and David V. Budescu, 1995, “A review of human linguistic probability processing: General principles and empirical evidence”, The Knowledge Engineering Review, 10: 43–62.
  • Weatherson, Brian, 2002, “Keynes, uncertainty and interest rates”, Cambridge Journal of Economics: 47–62.
  • Weichselberger, Kurt, 2000, “The theory of interval-probability as a unifying concept for uncertainty”, International Journal of Approximate Reasoning, 24: 149–170.
  • Wheeler, Gregory, 2014, “Character matching and the Locke pocket of belief”, in Epistemology, Context and Formalism, Franck Lihoreau and Manuel Rebuschi (eds), 185–194, Synthese Library. Dordrecht.
  • Wheeler, Gregory, and Jon Williamson, 2011, “Evidential Probability and Objective Bayesian Epistemology”, in Philosophy of Statistics, Prasanta S. Bandyopadhyay and Malcolm Forster (eds), 307–332, North-Holland. Amsterdam.
  • White, Roger, 2010, “Evidential Symmetry and Mushy Credence”, in Oxford Studies in Epistemology, T. Szabo Gendler and J. Hawthorne (eds), 161–186, Oxford University Press.
  • Williams, J. Robert G., 2014, “Decision-making under indeterminacy”, Philosophers’ Imprint, 14: 1–34.
  • Williams, P. M., 1976, “Indeterminate Probabilities”, in Formal Methods in the Methodology of Empirical Sciences, Marian Przelęcki, Klemens Szaniawski, and Ryszard Wójcicki (eds), 229–246, D. Reidel Publishing Company.
  • –––, 2007, “Notes on conditional previsions”, International Journal of Approximate Reasoning, 44: 366–383.
  • Williamson, Jon, 2010, In Defense of Objective Bayesianism, Oxford University Press. Oxford.
  • –––, 2014, “How uncertain do we need to be?” Erkenntnis, 79: 1249–1271.
  • Wilson, Nic, 2001, “Modified upper and lower probabilities based on imprecise likelihoods”, in Proceedings of the 2nd International Symposium on Imprecise Probabilities and their Applications.
  • Zynda, Lyle, 2000, “Representation Theorems and Realism about Degrees of Belief”, Philosophy of Science, 67: 45–69.

Acknowledgments

Many thanks to Teddy Seidenfeld, Greg Wheeler, Paul Pedersen, Aidan Lyon, Catrin Campbell-Moore, Stephan Hartmann, the ANU Philosophy of Probability Reading Group, and an anonymous referee for helpful comments on drafts of this article.

Copyright © 2019 by
Seamus Bradley



The Stanford Encyclopedia of Philosophy iscopyright © 2023 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

