At the heart of the underdetermination of scientific theory by evidence is the simple idea that the evidence available to us at a given time may be insufficient to determine what beliefs we should hold in response to it. In a textbook example, if I know that you spent $10 on apples and oranges and that apples cost $1 while oranges cost $2, then I know that you did not buy six oranges, but I do not know whether you bought one orange and eight apples, two oranges and six apples, and so on. A simple scientific example can be found in the rationale behind the important methodological adage that “correlation does not imply causation”. If playing violent video games causes children to be more aggressive in their playground behavior, then we should (barring complications) expect to find a correlation between time spent playing such video games and aggressive behavior on the playground. But that is also what we would expect to find if children who are prone to aggressive behavior tend to enjoy and seek out violent video games more than other children, or if propensities for playing violent video games and for aggressive playground behavior are both caused by some third factor (like being bullied or general parental neglect). So a high correlation between time spent playing violent video games and aggressive playground behavior (by itself) simply underdetermines what we should believe about the causal relationship between the two. But it turns out that this simple and familiar predicament only scratches the surface of the various ways in which problems of underdetermination can arise in the course of scientific investigation.
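The arithmetic of the textbook example can be made concrete in a few lines of code. The following sketch (purely illustrative, and no part of the original example) enumerates every combination of $1 apples and $2 oranges totaling exactly $10; the evidence rules out six oranges but leaves several hypotheses standing:

```python
# Enumerate purchases of $1 apples and $2 oranges totaling exactly $10.
# The "evidence" (the $10 total) excludes six oranges ($12), yet several
# combinations remain: the data underdetermine what was actually bought.
TOTAL, APPLE_PRICE, ORANGE_PRICE = 10, 1, 2

solutions = [
    (apples, oranges)
    for oranges in range(TOTAL // ORANGE_PRICE + 1)
    for apples in range(TOTAL // APPLE_PRICE + 1)
    if apples * APPLE_PRICE + oranges * ORANGE_PRICE == TOTAL
]

for apples, oranges in solutions:
    print(f"{apples} apples, {oranges} oranges")
# Six possibilities survive, from (10 apples, 0 oranges) to (0 apples, 5 oranges).
```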
The scope of the epistemic challenge arising from underdetermination is not limited only to scientific contexts, as is perhaps most readily seen in classical skeptical attacks on our knowledge more generally. René Descartes ([1640] 1996) famously sought to doubt any and all of his beliefs which could possibly be doubted by supposing that there might be an all-powerful Evil Demon who sought to deceive him. Descartes’ challenge appeals to a form of underdetermination: he notes that all our sensory experiences would be just the same if they were caused by this Evil Demon rather than an external world of rocks and trees. Likewise, Nelson Goodman’s (1955) “New Riddle of Induction” turns on the idea that the evidence we now have could equally well be taken to support inductive generalizations quite different from those we usually take them to support, with radically different consequences for the course of future events.[1] Nonetheless, underdetermination has been thought to arise in scientific contexts in a variety of distinctive and important ways that do not simply recreate such radically skeptical possibilities.
The traditional locus classicus for underdetermination in science is the work of Pierre Duhem, a French physicist as well as historian and philosopher of science who lived at the turn of the 20th Century. In The Aim and Structure of Physical Theory, Duhem formulated various problems of scientific underdetermination in an especially perspicuous and compelling way, although he himself argued that these problems posed serious challenges only to our efforts to confirm theories in physics. In the middle of the 20th Century, W. V. O. Quine suggested that such challenges applied not only to the confirmation of all types of scientific theories, but to all knowledge claims whatsoever. His incorporation and further development of these problems as part of a general account of human knowledge was one of the most significant developments of 20th Century epistemology. But neither Duhem nor Quine was careful to systematically distinguish a number of fundamentally distinct lines of thinking about underdetermination found in their work. Perhaps the most important division is between what we might call holist and contrastive forms of underdetermination. Holist underdetermination (Section 2 below) arises whenever our inability to test hypotheses in isolation leaves us underdetermined in our response to a failed prediction or some other piece of disconfirming evidence. That is, because hypotheses have empirical implications or consequences only when conjoined with other hypotheses and/or background beliefs about the world, a failed prediction or falsified empirical consequence typically leaves open to us the possibility of blaming and abandoning one of these background beliefs and/or ‘auxiliary’ hypotheses rather than the hypothesis we set out to test in the first place. But contrastive underdetermination (Section 3 below) involves the quite different possibility that for any body of evidence confirming a theory, there might well be other theories that are also well confirmed by that very same body of evidence. Moreover, claims of underdetermination of either of these two fundamental varieties can vary in strength and character in any number of ways: one might, for example, suggest that the choice between two theories or two ways of revising our beliefs is transiently underdetermined simply by the evidence we happen to have at present, or instead permanently underdetermined by all possible evidence. Indeed, the variety of forms of underdetermination that confront scientific inquiry, and the causes and consequences claimed for these different varieties, are sufficiently heterogeneous that attempts to address “the” problem of underdetermination for scientific theories have often engendered considerable confusion and argumentation at cross-purposes.[2]
Moreover, such differences in the character and strength of various claims of underdetermination turn out to be crucial for resolving the significance of the issue. For example, in some recently influential discussions of science it has become commonplace for scholars in a wide variety of academic disciplines to make casual appeal to claims of underdetermination (especially of the holist variety) to support the idea that something besides evidence must step in to do the further work of determining beliefs and/or changes of belief in scientific contexts. Perhaps most prominent among these are adherents of the sociology of scientific knowledge (SSK) movement and some feminist science critics who have argued that it is typically the sociopolitical interests and/or pursuit of power and influence by scientists themselves which play a crucial and even decisive role in determining which beliefs are actually abandoned or retained in response to conflicting evidence. As we will see in Section 2.2, however, Larry Laudan has argued that such claims depend upon simple equivocation between comparatively weak or trivial forms of underdetermination and the far stronger varieties from which they draw radical conclusions about the limited reach of evidence and rationality in science. In the sections that follow we will seek to clearly characterize and distinguish the various forms of both holist and contrastive underdetermination that have been suggested to arise in scientific contexts (noting some important connections between them along the way), assess the strength and significance of the heterogeneous argumentative considerations offered in support of and against them, and consider just which forms of underdetermination pose genuinely consequential challenges for scientific inquiry.
Duhem’s original case for holist underdetermination is, perhaps unsurprisingly, intimately bound up with his arguments for confirmational holism: the claim that theories or hypotheses can only be subjected to empirical testing in groups or collections, never in isolation. The idea here is that a single scientific hypothesis does not by itself carry any implications about what we should expect to observe in nature; rather, we can derive empirical consequences from an hypothesis only when it is conjoined with many other beliefs and hypotheses, including background assumptions about the world, beliefs about how measuring instruments operate, further hypotheses about the interactions between objects in the original hypothesis’ field of study and the surrounding environment, etc. For this reason, Duhem argues, when an empirical prediction is falsified, we do not know whether the fault lies with the hypothesis we originally sought to test or with one of the many other beliefs and hypotheses that were also needed and used to generate the failed prediction:
A physicist decides to demonstrate the inaccuracy of a proposition; in order to deduce from this proposition the prediction of a phenomenon and institute the experiment which is to show whether this phenomenon is or is not produced, in order to interpret the results of this experiment and establish that the predicted phenomenon is not produced, he does not confine himself to making use of the proposition in question; he makes use also of a whole group of theories accepted by him as beyond dispute. The prediction of the phenomenon, whose nonproduction is to cut off debate, does not derive from the proposition challenged if taken by itself, but from the proposition at issue joined to that whole group of theories; if the predicted phenomenon is not produced, the only thing the experiment teaches us is that among the propositions used to predict the phenomenon and to establish whether it would be produced, there is at least one error; but where this error lies is just what it does not tell us. ([1914] 1954, 185)
Duhem supports this claim with examples from physical theory, including one designed to illustrate a celebrated further consequence he draws from it. Holist underdetermination ensures, Duhem argues, that there cannot be any such thing as a “crucial experiment” (experimentum crucis): a single experiment whose outcome is predicted differently by two competing theories and which therefore serves to definitively confirm one and refute the other. For example, in a famous scientific episode intended to resolve the ongoing heated battle between partisans of the theory that light consists of a stream of particles moving at extremely high speed (the particle or “emission” theory of light) and defenders of the view that light consists instead of waves propagated through a mechanical medium (the wave theory), the physicist Foucault designed an apparatus to test the two theories’ competing claims about the speed of transmission of light in different media: the particle theory implied that light would travel faster in water than in air, while the wave theory implied that the reverse was true. Although the outcome of the experiment was taken to show that light travels faster in air than in water,[3] Duhem argues that this is far from a refutation of the hypothesis of emission:
in fact, what the experiment declares stained with error is the whole group of propositions accepted by Newton, and after him by Laplace and Biot, that is, the whole theory from which we deduce the relation between the index of refraction and the velocity of light in various media. But in condemning this system as a whole by declaring it stained with error, the experiment does not tell us where the error lies. Is it in the fundamental hypothesis that light consists in projectiles thrown out with great speed by luminous bodies? Is it in some other assumption concerning the actions experienced by light corpuscles due to the media in which they move? We know nothing about that. It would be rash to believe, as Arago seems to have thought, that Foucault’s experiment condemns once and for all the very hypothesis of emission, i.e., the assimilation of a ray of light to a swarm of projectiles. If physicists had attached some value to this task, they would undoubtedly have succeeded in founding on this assumption a system of optics that would agree with Foucault’s experiment. ([1914] 1954, p. 187)
From this and similar examples, Duhem drew the quite general conclusion that our response to the experimental or observational falsification of a theory is always underdetermined in this way. When the world does not live up to our theory-grounded expectations, we must give up something, but because no hypothesis is ever tested in isolation, no experiment ever tells us precisely which belief it is that we must revise or give up as mistaken:
In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed. ([1914] 1954, 187)
The predicament Duhem here identifies is no mere rainy day puzzle for philosophers of science, but a methodological challenge that consistently arises in the course of scientific practice itself. It is simply not true that for practical purposes and in concrete contexts there is always just a single revision of our beliefs in response to disconfirming evidence that is obviously correct, most promising, or even most sensible to pursue. To cite a classic example, when Newton’s celestial mechanics failed to correctly predict the orbit of Uranus, scientists at the time did not simply abandon the theory but protected it from refutation by instead challenging the background assumption that the solar system contained only seven planets. This strategy bore fruit, notwithstanding the falsity of Newton’s theory: by calculating the location of a hypothetical eighth planet influencing the orbit of Uranus, the astronomers Adams and Leverrier were eventually led to discover Neptune in 1846. But the very same strategy failed when used to try to explain the advance of the perihelion in Mercury’s orbit by postulating the existence of “Vulcan”, an additional planet located between Mercury and the sun, and this phenomenon would resist satisfactory explanation until the arrival of Einstein’s theory of general relativity. So it seems that Duhem was right to suggest not only that hypotheses must be tested as a group or a collection, but also that it is by no means a foregone conclusion which member of such a collection should be abandoned or revised in response to a failed empirical test or false implication. Indeed, this very example illustrates why Duhem’s own rather hopeful appeal to the ‘good sense’ of scientists themselves in deciding when a given hypothesis ought to be abandoned promises very little if any relief from the general predicament of holist underdetermination.
As noted above, Duhem thought that the sort of underdetermination he had described presented a challenge only for theoretical physics, but subsequent thinking in the philosophy of science has tended to the opinion that the predicament Duhem described applies to theoretical testing in all fields of scientific inquiry. We cannot, for example, test an hypothesis about the phenotypic effects of a particular gene without presupposing a host of further beliefs about what genes are, how they work, how we can identify them, what other genes are doing, and so on. In the middle of the 20th Century, W. V. O. Quine would incorporate confirmational holism and its associated concerns about underdetermination into an extraordinarily influential account of knowledge in general. As part of his famous (1951) critique of the widely accepted distinction between truths that are analytic (true by definition, or as a matter of logic or language alone) and those that are synthetic (true in virtue of some contingent fact about the way the world is), Quine argued that all of the beliefs we hold at any given time are linked in an interconnected web, which encounters our sensory experience only at its periphery:
The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. But the total field is so underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to reevaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole. (1951, 42–3)
One consequence of this general picture of human knowledge is that all of our beliefs are tested against experience only as a corporate body—or as Quine sometimes puts it, “The unit of empirical significance is the whole of science” (1951, p. 42).[4] A mismatch between what the web as a whole leads us to expect and the sensory experiences we actually receive will occasion some revision in our beliefs, but which revision we should make to bring the web as a whole back into conformity with our experiences is radically underdetermined by those experiences themselves. To use Quine’s example, if we find our belief that there are brick houses on Elm Street to be in conflict with our immediate sense experience, we might revise our beliefs about the houses on Elm Street, but we might equally well modify instead our beliefs about the appearance of brick, our present location, or innumerable other beliefs constituting the interconnected web. In a pinch, we might even decide that our present sensory experiences are simply hallucinations! Quine’s point was not that any of these are particularly likely or reasonable responses to recalcitrant experiences (indeed, an important part of his account is the explanation of why they are not), but instead that they would serve equally well to bring the web of belief as a whole in line with our experience. And if the belief that there are brick houses on Elm Street were sufficiently important to us, Quine insisted, it would be possible for us to preserve it “come what may” (in the way of empirical evidence), by making sufficiently radical adjustments elsewhere in the web of belief. It is in principle open to us, Quine argued, to revise even beliefs about logic, mathematics, or the meanings of our terms in response to recalcitrant experience; it might seem a tempting solution to certain persistent difficulties in quantum mechanics, for example, to reject classical logic’s law of the excluded middle (allowing physical particles to neither determinately have nor lack some classical physical property like position or momentum at a given time). The only test of a belief, Quine argued, is whether it fits into a web of connected beliefs that accords well with our experience on the whole. And because this leaves any and all beliefs in that web at least potentially subject to revision on the basis of our ongoing sense experience or empirical evidence, he insisted, there simply are no beliefs that are analytic in the originally supposed sense of immune to revision in light of experience, or true no matter what the world is like.
Quine recognized, of course, that many of the logically possible ways of revising our beliefs in response to recalcitrant experiences that remain open to us nonetheless strike us as ad hoc, perfectly ridiculous, or worse. He argues (1955) that our actual revisions of the web of belief seek to maximize the theoretical “virtues” of simplicity, familiarity, scope, and fecundity, along with conformity to experience, and elsewhere suggests that we typically seek to resolve conflicts between the web of our beliefs and our sensory experiences in accordance with a principle of “conservatism”, that is, by making the smallest possible number of changes to the least central beliefs we can that will suffice to reconcile the web with experience. That is, Quine recognized that when we encounter recalcitrant experience we are not usually at a loss to decide which of our beliefs to revise in response to it, but he claimed that this is simply because we are strongly disposed as a matter of fundamental psychology to prefer whatever revision requires the most minimal mutilation of the existing web of beliefs and/or maximizes virtues that he explicitly recognizes as pragmatic in character. Indeed, it would seem that on Quine’s view the very notion of a belief being more central or peripheral or in lesser or greater “proximity” to sense experience should be cashed out simply as a measure of our willingness to revise it in response to recalcitrant experience. That is, it would seem that what it means for one belief to be located “closer” to the sensory periphery of the web than another is simply that we are more likely to revise the first than the second if doing so would enable us to bring the web as a whole into conformity with otherwise recalcitrant sense experience. Thus, Quine saw the traditional distinction between analytic and synthetic beliefs as simply registering the endpoints of a psychological continuum ordering our beliefs according to the ease and likelihood with which we are prepared to revise them in order to reconcile the web as a whole with our sense experience as a whole.
It is perhaps unsurprising that such holist underdetermination has been taken to pose a threat to the fundamental rationality of the scientific enterprise. The claim that the empirical evidence alone underdetermines our response to failed predictions or recalcitrant experience might even seem to invite the suggestion that what systematically steps into the breach to do the further work of singling out just one or a few candidate responses to disconfirming evidence is something irrational or at least arational in character. Imre Lakatos and Paul Feyerabend each suggested that because of underdetermination, the difference between empirically successful and unsuccessful theories or research programs is largely a function of the differences in talent, creativity, resolve, and resources of those who advocate them. And at least since the influential work of Thomas Kuhn, one important line of thinking about science has held that it is ultimately the social and political interests (in a suitably broad sense) of scientists themselves which serve to determine their responses to disconfirming evidence and therefore the further empirical, methodological, and other commitments of any given scientist or scientific community. Mary Hesse suggests that Quinean underdetermination showed why certain “non-logical” and “extra-empirical” considerations must play a role in theory choice, and claims that “it is only a short step from this philosophy of science to the suggestion that adoption of such criteria, that can be seen to be different for different groups and at different periods, should be explicable by social rather than logical factors” (1980, 33). Perhaps the most prominent modern day defenders of this line of thinking are those scholars in the sociology of scientific knowledge (SSK) movement and in feminist science studies who argue that it is typically the career interests, political affiliations, intellectual allegiances, gender biases, and/or pursuit of power and influence by scientists themselves which play a crucial or even decisive role in determining precisely which beliefs are abandoned or retained when faced with conflicting evidence (classic works in SSK include Bloor 1991, Collins 1992, and Shapin and Schaffer 1985; in feminist science studies, see Longino, 1990, 2002, and for a recent review, Nelson 2022). The shared argumentative schema here is one on which holist underdetermination ensures that the evidence alone cannot do the work of picking out a unique response to failed predictions or recalcitrant experience, thus something else must step in to do the job, and sociologists of scientific knowledge, feminist critics of science, and other interest-driven theorists of science each have their favored suggestions close to hand. (For useful further discussion, see Okasha 2000. Note that historians of science have also appealed to underdetermination in presenting “counterfactual histories” exploring the ways in which important historical developments in science might have turned out quite differently than they actually did; see, for example, Radick 2023.)
In response to this line of argument, Larry Laudan (1990) argues that the significance of such underdetermination has been greatly exaggerated. Underdetermination actually comes in a wide variety of strengths, he insists, depending on precisely what is being asserted about the character, the availability, and (most importantly) the rational defensibility of the various competing hypotheses or ways of revising our beliefs that the evidence supposedly leaves us free to accept. Laudan usefully distinguishes a number of different dimensions along which claims of underdetermination vary in strength, and he goes on to insist that those who attribute dramatic significance to the thesis that our scientific theories are underdetermined by the evidence defend only the weaker versions of that thesis, yet draw dire consequences and shocking morals regarding the character and status of the scientific enterprise from much stronger versions. He suggests, for instance, that Quine’s famous claim that any hypothesis can be preserved “come what may” in the way of evidence can be defended simply as a description of what it is psychologically possible for human beings to do, but Laudan insists that in this form the thesis is simply bereft of interesting or important consequences for epistemology—the study of knowledge. Along this dimension of variation, the strong version of the thesis asserts that it is always normatively or rationally defensible to retain any hypothesis in the light of any evidence whatsoever, but this latter, stronger version of the claim, Laudan suggests, is one for which no convincing evidence or argument has ever been offered. More generally, Laudan argues, arguments for underdetermination turn on implausibly treating all logically possible responses to the evidence as equally justified or rationally defensible. For example, Laudan suggests that we might reasonably hold the resources of deductive logic to be insufficient to single out just one acceptable response to disconfirming evidence, but not that deductive logic plus the sorts of ampliative principles of good reasoning typically deployed in scientific contexts are insufficient to do so. Similarly, defenders of underdetermination might assert the nonuniqueness claim that for any given theory or web of beliefs there is at least one alternative that can also be reconciled with the available evidence, or the much stronger claim that all of the contraries of any given theory can be reconciled with the available evidence equally well. And the claim of such “reconciliation” itself disguises a wide range of further alternative possibilities: that our theories can be made logically compatible with any amount of disconfirming evidence (perhaps by the simple expedient of removing any claim(s) with which the evidence is in conflict), that any theory may be reformulated or revised so as to entail any piece of previously disconfirming evidence, or so as to explain previously disconfirming evidence, or that any theory can be made to be as well supported empirically by any collection of evidence as any other theory. And in all of these respects, Laudan claims, partisans have defended only the weaker forms of underdetermination while founding their further claims about and conceptions of the scientific enterprise on versions much stronger than those they have managed or even attempted to defend.
Laudan is certainly right to distinguish these various versions of holist underdetermination, and he is equally right to suggest that many of the thinkers he confronts have derived grand morals concerning the scientific enterprise from much stronger versions of underdetermination than they are able to defend, but the underlying situation is somewhat more complex than he suggests. Laudan’s overarching claim is that champions of holist underdetermination show only that a wide variety of responses to disconfirming evidence are logically possible (or even just psychologically possible), rather than that these are all rationally defensible or equally well-supported by the evidence. But his straightforward appeal to further epistemic resources like ampliative principles of belief revision that are supposed to help narrow the merely logical possibilities down to those which are reasonable or rationally defensible is itself problematic, at least as part of any attempt to respond to Quine. This is because on Quine’s holist picture of knowledge such further ampliative principles governing legitimate belief revision are themselves simply part of the web of our beliefs, and are therefore open to revision in response to recalcitrant experience as well. Indeed, this is true even for the principles of deductive logic and the (consequent) demand for particular forms of logical consistency between parts of the web itself! So while it is true that the ampliative principles we currently embrace do not leave all logically or even psychologically possible responses to the evidence open to us (or leave us free to preserve any hypothesis “come what may”), our continued adherence to these very principles, rather than being willing to revise the web of belief so as to abandon them, is part of the phenomenon to which Quine is using underdetermination to draw our attention, and so cannot be taken for granted without begging the question. Put another way, Quine does not simply ignore the further principles that function to ensure that we revise the web of belief in one way rather than others, but it follows from his account that such principles are themselves part of the web and therefore candidates for revision in our efforts to bring the web of beliefs into conformity (by the resulting web’s own lights) with sensory experience. This recognition makes clear why it will be extremely difficult to say how the shift to an alternative web of belief (with alternative ampliative or even deductive principles of belief revision) should or even can be evaluated for its rational defensibility. Each proposed revision is likely to be maximally rational by the lights of the principles it itself sanctions.[5] Of course we can rightly say that many candidate revisions would violate our presently accepted ampliative principles of rational belief revision, but the preference we have for those rather than the alternatives is itself simply generated by their position in the web of belief we have inherited, and the role that they themselves play in guiding the revisions we are inclined to make to that web in light of ongoing experience.
Thus, if we accept Quine’s general picture of knowledge, it becomes quite difficult to disentangle normative from descriptive issues, or questions about the psychology of human belief revision from questions about the justifiability or rational defensibility of such revisions. It is in part for this reason that Quine famously suggests (1969, 82; see also pp. 75–76) that epistemology itself “falls into place as a chapter of psychology and hence of natural science.” His point is not that epistemology should simply be abandoned in favor of psychology, but instead that there is ultimately no way to draw a meaningful distinction between the two. (James Woodward, in comments on an earlier draft of this entry, pointed out that this makes it all the harder to assess the significance of Quinean underdetermination in light of Laudan’s complaint or even know the rules for doing so, but in an important way this difficulty was Quine’s point all along!) Quine’s claim is that “[e]ach man is given a scientific heritage plus a continuing barrage of sensory stimulation; and the considerations which guide him in warping his scientific heritage to fit his continuing sensory promptings are, where rational, pragmatic” (1951, 46), but the role of these pragmatic considerations or principles in selecting just one of the many possible revisions of the web of belief in response to recalcitrant experience is not to be contrasted with those same principles having rational or epistemic justification. Far from conflicting with or even being orthogonal to the search for truth and our efforts to render our beliefs maximally responsive to the evidence, Quine insists, revising our beliefs in accordance with such pragmatic principles “at bottom, is what evidence is” (1955, 251). Whether or not this strongly naturalistic conception of epistemology can ultimately be defended, it is misleading for Laudan to suggest that the thesis of underdetermination becomes trivial or obviously insupportable the moment we inquire into the rational defensibility rather than the mere logical or psychological possibility of alternative revisions to the holist’s web of belief.
In fact, there is an important connection between this lacuna in Laudan’s discussion and the further uses made of the thesis of underdetermination by sociologists of scientific knowledge, feminist epistemologists, and other vocal champions of holist underdetermination. When faced with the invocation of further ampliative standards or principles that supposedly rule out some responses to disconfirmation as irrational or unreasonable, these thinkers typically respond by insisting that the embrace of such further standards or principles (or perhaps their application to particular cases) is itself underdetermined, historically contingent, and/or subject to ongoing social negotiation. For this reason, they suggest, such appeals (and their success or failure in convincing the members of a given community) should be explained by reference to the same broadly social and political interests that they claim are at the root of theory choice and belief change in science more generally (see, e.g., Shapin and Schaffer, 1985). On both accounts, then, our response to recalcitrant evidence or a failed prediction is constrained in important ways by features of the existing web of beliefs. But for Quine, the continuing force of these constraints is ultimately imposed by the fundamental principles of human psychology (such as our preference for minimal mutilation of the web, or the pragmatic virtues of simplicity, fecundity, etc.), while for constructivist theorists of science such as Shapin and Schaffer, the continuing force of any such constraints is limited only by the ongoing negotiated agreement of the communities of scientists who respect them.
As this last contrast makes clear, recognizing the limitations of Laudan’s critique of Quine and the fact that we cannot dismiss holist underdetermination with any straightforward appeal to ampliative principles of good reasoning by itself does nothing to establish the further positive claims about belief revision advanced by interest-driven theorists of science. Conceding that theory choice or belief revision in science is underdetermined by the evidence in just the ways that Duhem and/or Quine suggested leaves entirely open whether it is instead the (suitably broad) social or political interests of scientists themselves that do the further work of singling out the particular beliefs or responses to falsifying evidence that any particular scientist or scientific community will actually adopt or find compelling. Even many of those philosophers of science who are most strongly convinced of the general significance of various forms of underdetermination remain deeply skeptical of this latter thesis and thoroughly unconvinced by the empirical evidence that has been offered in support of it (usually in the form of case studies of particular historical episodes in science).
Appeals to underdetermination have also loomed large in recent philosophical debates concerning the place of values in science, with a number of authors arguing that the underdetermination of theory by data is among the central reasons that values (or “non-epistemic” values) do and perhaps must play a central role in scientific inquiry. Feminist philosophers of science have sometimes suggested that it is such underdetermination which creates room not only for unwarranted androcentric values or assumptions to play central roles in the embrace of particular theoretical possibilities, but also for the critical and alternative approaches favored by feminists themselves (e.g. Nelson 2022). But appeals to underdetermination also feature prominently in more general arguments against the possibility or desirability of value-free science. Perhaps most influentially, Helen Longino’s “contextual empiricism” suggests that a wide variety of non-epistemic values play important roles in determining our scientific beliefs in part because underdetermination prevents data or evidence alone from doing so. For this and other reasons she concludes that objectivity in science is best served by a diverse set of participants who bring a variety of different values or value-laden assumptions to the enterprise (Longino 1990, 2002).
Although it is also a form of underdetermination, what we described in Section 1 above as contrastive underdetermination raises fundamentally different issues from the holist variety considered in Section 2 (Bonk 2008 raises many of these issues). John Stuart Mill articulated the challenge of contrastive underdetermination with impressive clarity in A System of Logic, where he writes:
Most thinkers of any degree of sobriety allow, that an hypothesis... is not to be received as probably true because it accounts for all the known phenomena, since this is a condition sometimes fulfilled tolerably well by two conflicting hypotheses... while there are probably a thousand more which are equally possible, but which, for want of anything analogous in our experience, our minds are unfitted to conceive. ([1867] 1900, 328)
This same concern is also evident in Duhem’s original writings concerning so-called crucial experiments, where he seeks to show that even when we explicitly suspend any concerns about holist underdetermination, the contrastive variety remains an obstacle to our discovery of truth in theoretical science:
But let us admit for a moment that in each of these systems [concerning the nature of light] everything is compelled to be necessary by strict logic, except a single hypothesis; consequently, let us admit that the facts, in condemning one of the two systems, condemn once and for all the single doubtful assumption it contains. Does it follow that we can find in the ‘crucial experiment’ an irrefutable procedure for transforming one of the two hypotheses before us into a demonstrated truth? Between two contradictory theorems of geometry there is no room for a third judgment; if one is false, the other is necessarily true. Do two hypotheses in physics ever constitute such a strict dilemma? Shall we ever dare to assert that no other hypothesis is imaginable? Light may be a swarm of projectiles, or it may be a vibratory motion whose waves are propagated in a medium; is it forbidden to be anything else at all? ([1914] 1954, 189)
Contrastive underdetermination is so-called because it questions the ability of the evidence to confirm any given hypothesis against alternatives, and the central focus of discussion in this connection (equally often regarded as “the” problem of underdetermination) concerns the character of the supposed alternatives. Of course the two problems are not entirely disconnected, because it is open to us to consider alternative possible modifications of the web of beliefs as alternative theories between which the empirical evidence alone is powerless to decide. But we have already seen that one need not think of the alternative responses to recalcitrant experience as competing theoretical alternatives to appreciate the character of the holist’s challenge, and we will see that one need not embrace any version of holism about confirmation to appreciate the quite distinct problem that the available evidence might support more than one theoretical alternative. It is perhaps most useful here to think of holist underdetermination as starting from a particular theory or body of beliefs and claiming that our revision of those beliefs in response to new evidence may be underdetermined, while contrastive underdetermination instead starts from a given body of evidence and claims that more than one theory may be well-supported by that evidence. Part of what has contributed to the conflation of these two problems is the holist presuppositions of those who originally made them famous. After all, on Quine’s view, we simply revise the web of belief in response to recalcitrant experience, and so the suggestion that there are multiple possible revisions of the web available in response to any particular evidential finding just is the claim that there are in fact many different “theories” (i.e. candidate webs of belief) that are equally well-supported by any given body of data.[6] But if we give up such extreme holist views of evidence, meaning, and/or confirmation, the two problems take on very different identities, with very different considerations in favor of taking them seriously, very different consequences, and very different candidate solutions. Notice, for instance, that even if we somehow knew that no other hypothesis on a given subject was well-confirmed by a given body of data, that would not tell us where to place the blame or which of our beliefs to give up if the remaining hypothesis in conjunction with others subsequently resulted in a failed empirical prediction. And as Duhem suggests in the passage cited above, even if we supposed that we somehow knew exactly which of our hypotheses to blame in response to a failed empirical prediction, this would not help us to decide whether or not there are other hypotheses available that are also well-confirmed by the data we actually have.
One way to see why not is to consider an analogy that champions of contrastive underdetermination have sometimes used to support their case. If we consider any finite group of data points, an elementary proof reveals that there are an infinite number of distinct mathematical functions describing different curves that will pass through all of them. As we add further data to our initial set we will eliminate functions describing curves which no longer capture all of the data points in the new, larger set, but no matter how much data we accumulate, there will always be an infinite number of functions remaining that define curves including all the data points in the new set and which would therefore seem to be equally well supported by the empirical evidence. No finite amount of data will ever be able to narrow the possibilities down to just a single function or indeed, any finite number of candidate functions, from which the distribution of data points we have might have been generated. Each new data point we gather eliminates an infinite number of curves that previously fit all the data (so the problem here is not the holist’s challenge that we do not know which beliefs to give up in response to failed predictions or disconfirming evidence), but also leaves an infinite number still in contention.
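The curve-fitting analogy is easy to verify numerically. The following sketch (again purely illustrative, with made-up data points rather than anything drawn from the literature discussed here) exhibits an entire family of polynomials that pass exactly through the same three observations but diverge everywhere else:

```python
import numpy as np

# Three "observed" data points.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])

# The unique quadratic through these points (here y = x**2 + x + 1).
quad = np.polyfit(xs, ys, deg=2)

# A whole family of rivals: add a * x(x-1)(x-2), a cubic term that
# vanishes at every observed x, so each value of a yields a distinct
# curve that fits the same data perfectly.
def rival(x, a):
    return np.polyval(quad, x) + a * x * (x - 1.0) * (x - 2.0)

for a in (0.0, 1.0, -5.0):
    assert np.allclose(rival(xs, a), ys)   # all agree on the data...
    print(a, rival(3.0, a))                # ...but diverge at x = 3
```

Since the extra term can be scaled by any real number, the data never cut the candidates down below an infinite family, exactly as the analogy claims.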
Of course, generating and testing fundamental scientific hypotheses is rarely if ever a matter of finding curves that fit collections of data points, so nothing follows directly from this mathematical analogy for the significance of contrastive underdetermination in most scientific contexts. But Bas van Fraassen has offered an extremely influential line of argument intended to show that such contrastive underdetermination is a serious concern for scientific theorizing more generally. In The Scientific Image (1980), van Fraassen uses a now-classic example to illustrate the possibility that even our best scientific theories might have empirical equivalents: that is, alternative theories making the very same empirical predictions, and which therefore cannot be better or worse supported by any possible body of evidence. Consider Newton’s cosmology, with its laws of motion and gravitational attraction. As Newton himself realized, exactly the same predictions are made by the theory whether we assume that the entire universe is at rest or assume instead that it is moving with some constant velocity in any given direction: from our position within it, we have no way to detect constant, absolute motion by the universe as a whole. Thus, van Fraassen argues, we are here faced with empirically equivalent scientific theories: Newtonian mechanics and gravitation conjoined either with the fundamental assumption that the universe is at absolute rest (as Newton himself believed), or with any one of an infinite variety of alternative assumptions about the constant velocity with which the universe is moving in some particular direction. All of these theories make all and only the same empirical predictions, so no evidence will ever permit us to decide between them on empirical grounds.[7]
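Newton’s observation can be compressed into a single line (a standard textbook calculation, added here for illustration rather than taken from the entry itself): if every body’s trajectory is shifted by the same constant velocity \(\vec{v}\), relative positions and accelerations, and hence all the forces and predictions of the theory, are unchanged:

\[
\vec{x}'_i(t) = \vec{x}_i(t) + \vec{v}\,t
\;\Longrightarrow\;
\vec{x}'_i(t) - \vec{x}'_j(t) = \vec{x}_i(t) - \vec{x}_j(t),
\qquad
\ddot{\vec{x}}'_i(t) = \ddot{\vec{x}}_i(t).
\]

Since Newtonian gravitation depends only on relative positions and the second law only on accelerations, every choice of \(\vec{v}\), including \(\vec{v} = \vec{0}\) (absolute rest), yields exactly the same observable consequences.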
Van Fraassen is widely (though mistakenly) regarded as holding that the prospect of contrastive underdetermination grounded in such empirical equivalents demands that we restrict our epistemic ambitions for the scientific enterprise itself. His constructive empiricism holds that the aim of science is not to find true theories, but only theories that are empirically adequate: that is, theories whose claims about observable phenomena are all true. Since the empirical adequacy of a theory is not threatened by the existence of another that is empirically equivalent to it, fulfilling this aim has nothing to fear from the possibility of such empirical equivalents. In reply, many critics have suggested that van Fraassen gives no reasons for restricting belief to empirical adequacy that could not also be used to argue for suspending our belief in the future empirical adequacy of our best present theories. Of course there could be empirical equivalents to our best theories, but there could also be theories equally well-supported by all the evidence up to the present which diverge in their predictions about observables in future cases not yet tested. This challenge seems to miss the point of van Fraassen’s epistemic voluntarism: his claim is that we should believe no more but also no less than we need to take full advantage of our scientific theories, and a commitment to the empirical adequacy of our theories, he suggests, is the least we can get away with in this regard. Of course it is true that we are running some epistemic risk in believing in even the full empirical adequacy of our present theories, but this is the minimum we need to take full advantage of the fruits of our scientific labors, and the risk is considerably less than what we assume in believing in their truth: as van Fraassen famously suggests, “it is not an epistemic principle that one might as well hang for a sheep as a lamb” (1980, 72).
In an influential discussion, Larry Laudan and Jarrett Leplin (1991) argue that philosophers of science have invested even the bare possibility that our theories might have empirical equivalents with far too much epistemic significance. Notwithstanding the popularity of the presumption that there are empirically equivalent rivals to every theory, they argue, the conjunction of several familiar and relatively uncontroversial epistemological theses is sufficient to defeat it. Because the boundaries of what is observable change as we develop new experimental methods and instruments, because auxiliary assumptions are always needed to derive empirical consequences from a theory (cf. confirmational holism, above), and because these auxiliary assumptions are themselves subject to change over time, Laudan and Leplin conclude that there is no guarantee that any two theories judged to be empirically equivalent at a given time will remain so as the state of our knowledge advances. Accordingly, any judgment of empirical equivalence is both defeasible and relativized to a particular state of science. So even if two theories are empirically equivalent at a given time this is no guarantee that they will remain so, and thus there is no foundation for a general pessimism about our ability to distinguish theories that are empirically equivalent to each other on empirical grounds. Although they concede that we could have good reason to think that particular theories have empirically equivalent rivals, this must be established case-by-case rather than by any general argument or presumption.
One fairly standard reply to this line of argument is to suggest that what Laudan and Leplin really show is that the notion of empirical equivalence must be applied to larger collections of beliefs than those traditionally identified as scientific theories—at least large enough to encompass the auxiliary assumptions needed to derive empirical predictions from them. At the extreme, perhaps this means that the notion of empirical equivalents (or at least timeless empirical equivalents) cannot be applied to anything less than “systems of the world” (i.e. total Quinean webs of belief), but even that is not fatal: what the champion of contrastive underdetermination asserts is that there are empirically equivalent systems of the world that incorporate different theories of the nature of light, or spacetime, or whatever (for useful discussion, see Okasha 2002). On the other hand, it might seem that quick examples like van Fraassen’s variants of Newtonian cosmology do not serve to make this thesis as plausible as the more limited claim of empirical equivalence for individual theories. It seems equally natural, however, to respond to Laudan and Leplin simply by conceding the variability in empirical equivalence but insisting that this is not enough to undermine the problem. Empirical equivalents create a serious obstacle to belief in a theory so long as there is some empirical equivalent to that theory at any given time, but it need not be the same one at each time. On this line of thinking, cases like van Fraassen’s Newtonian example illustrate how easy it is for theories to admit of empirical equivalents at any given time, and thus constitute a reason for thinking that there probably are or will be empirical equivalents to any given theory at any particular time, ensuring that whenever the question of belief in a given theory arises, the challenge posed to it by contrastive underdetermination arises as well.
Laudan and Leplin also suggest, however, that even if the universal existence of empirical equivalents were conceded, this would do much less to establish the significance of underdetermination than its champions have supposed, because “theories with exactly the same empirical consequences may admit of differing degrees of evidential support” (1991, 465). A theory may be better supported than an empirical equivalent, for instance, because the former but not the latter is derivable from a more general theory whose consequences include a third, well supported, hypothesis. More generally, the belief-worthiness of an hypothesis depends crucially on how it is connected or related to other things we believe and the evidential support we have for those other beliefs.[8] Laudan and Leplin suggest that we have invited the specter of rampant underdetermination only by failing to keep this familiar home truth in mind and instead implausibly identifying the evidence bearing on a theory exclusively with the theory’s own entailments or empirical consequences (but cf. Tulodziecki 2012). This impoverished view of evidential support, they argue, is in turn the legacy of a failed foundationalist and positivistic approach to the philosophy of science which mistakenly assimilates epistemic questions about how to decide whether or not to believe a theory to semantic questions about how to establish a theory’s meaning or truth-conditions.
John Earman (1993) has argued that this dismissive diagnosis does not do justice to the threat posed by underdetermination. He argues that worries about underdetermination are an aspect of the more general question of the reliability of our inductive methods for determining beliefs, and notes that we cannot decide how serious a problem underdetermination poses without specifying (as Laudan and Leplin do not) the inductive methods we are considering. Earman regards some version of Bayesianism as our most promising form of inductive methodology, and he proceeds to show that challenges to the long-run reliability of our Bayesian methods can be motivated by considerations of the empirical indistinguishability (in several different and precisely specified senses) of hypotheses stated in any language richer than that of the evidence itself that do not amount simply to general skepticism about those inductive methods. In other words, he shows that there are more reasons to worry about underdetermination concerning inferences to hypotheses about unobservables than concerning, say, inferences about unobserved observables. He also goes on to argue that at least two genuine cosmological theories have serious, nonskeptical, and nonparasitic empirical equivalents: the first essentially replaces the gravitational field in Newtonian mechanics with curvature in spacetime itself,[9] while the second recognizes that Einstein’s General Theory of Relativity permits cosmological models exhibiting different global topological features which cannot be distinguished by any evidence inside the light cones of even idealized observers who live forever.[10] And he suggests that “the production of a few concrete examples is enough to generate the worry that only a lack of imagination on our part prevents us from seeing comparable examples of underdetermination all over the map” (1993, 31) even as he concedes that his case leaves open just how far the threat of underdetermination extends (1993, 36).
Most philosophers of science, however, have not embraced the idea that it is only lack of imagination which prevents us from finding empirical equivalents to our scientific theories generally. They note that the convincing examples of empirical equivalents we do have are all drawn from a single domain of highly mathematized scientific theorizing in which the background constraints on serious theoretical alternatives are far from clear, and suggest that it is therefore reasonable to ask whether even a small handful of such examples should make us believe that there are probably empirical equivalents to most of our scientific theories most of the time. They concede that it is always possible that there are empirical equivalents to even our best scientific theories concerning any domain of nature, but insist that we should not be willing to suspend belief in any particular theory until some convincing alternative to it can actually be produced: as Philip Kitcher puts it, “give us a rival explanation, and we’ll consider whether it is sufficiently serious to threaten our confidence” (1993, 154; see also Leplin 1997, Achinstein 2002). That is, these thinkers insist that until we are able to actually construct an empirically equivalent alternative to a given theory, the bare possibility that such equivalents exist is insufficient to justify suspending belief in the best theories we do have. For this same reason most philosophers of science are unwilling to follow van Fraassen into what they regard as constructive empiricism’s unwarranted epistemic modesty. Even if van Fraassen is right about the most minimal beliefs we must hold in order to take full advantage of our scientific theories, most thinkers do not see why we should believe the least we can get away with rather than believing the most we are entitled to by the evidence we have.
Champions of contrastive underdetermination have most frequently responded by trying to establish that all theories have empirical equivalents, typically by proposing something like an algorithmic procedure for generating such equivalents from any theory whatsoever. Stanford (2001, 2006) suggests that these efforts to prove that all our theories must have empirical equivalents fall roughly but reliably into global and local varieties, and that neither makes a convincing case for a distinctive scientific problem of contrastive underdetermination. Global algorithms are well-represented by Andre Kukla’s (1996) suggestion that from any theory T we can immediately generate such empirical equivalents as T′ (the claim that T’s observable consequences are true, but T itself is false), T″ (the claim that the world behaves according to T when observed, but according to some specific incompatible alternative otherwise), and the hypothesis that our experience is being manipulated by powerful beings in such a way as to make it appear that T is true. But such possibilities, Stanford argues, amount to nothing more than the sort of Evil Deceiver to which Descartes appealed in order to doubt any of his beliefs that could possibly be doubted (see Section 1, above). Such radically skeptical scenarios pose an equally powerful (or powerless) challenge to any knowledge claim whatsoever, no matter how it is arrived at or justified, and thus pose no special problem or challenge for beliefs offered to us by theoretical science. If global algorithms like Kukla’s are the only reasons we can give for taking underdetermination seriously in a scientific context, then there is no distinctive problem of the underdetermination of scientific theories by data, only a salient reminder of the irrefutability of classically Cartesian or radical skepticism.[11]
In contrast to such global strategies for generating empirical equivalents, local algorithmic strategies instead begin with some particular scientific theory and proceed to generate alternative versions that will be equally well supported by all possible evidence. This is what van Fraassen does with the example of Newtonian cosmology, showing that an infinite variety of supposed empirical equivalents can be produced by ascribing different constant absolute velocities to the universe as a whole. But Stanford suggests that empirical equivalents generated in this way are also insufficient to show that there is a distinctive and genuinely troubling form of underdetermination afflicting scientific theories, because they rely on simply saddling particular scientific theories with further claims for which those theories themselves (together with whatever background beliefs we actually hold) imply that we cannot have any evidence. Such empirical equivalents invite the natural response that they simply tack on to our theories further commitments that are or should be no part of those theories themselves. Such claims, it seems, should simply be excised from our theories, leaving behind just those claims that sensible defenders of the theories would have regarded as all the evidence entitled us to believe in any case. In van Fraassen’s Newtonian example, for instance, this could be done simply by undertaking no commitment concerning the absolute velocity and direction (or lack thereof) of the universe as a whole. Note also that even if we believe a given scientific theory when one of the empirical equivalents we could generate from it by the local algorithmic strategy is correct instead, most of what we originally believed will nonetheless turn out to be straightforwardly true.
Philosophers of science have responded in a variety of ways to the suggestion that a few or even a small handful of serious examples of empirical equivalents does not suffice to establish that there are probably such equivalents to most scientific theories in most domains of inquiry. One such reaction has been to invite more careful attention to the details of particular examples of putative underdetermination: considerable work has been devoted to assessing the threat of underdetermination in the case of particular scientific theories (for recent examples see Pietsch 2012; Tulodziecki 2013; Werndl 2013; Belot 2014; Butterfield 2014; Miyake 2015; Kovaka 2019; Fletcher 2021, and others). Other thinkers have sought to investigate whether certain types of scientific theories, or theories in particular scientific fields, differ in the extent to which they are genuinely threatened by underdetermination. Some have argued that underdetermination poses a less serious (Cleland 2002; Carman 2005) or a more serious (Turner 2005, 2007) challenge for ‘historical’ sciences like geology, paleontology, and archaeology than it does for ‘experimental’ sciences like particle physics. Others have resisted such generalizations but nonetheless argued that underdetermination predicaments arise in distinctive or characteristic ways in historical sciences and that scientists in these fields have different resources and strategies for addressing them (Currie 2018; Forber and Griffith 2011; Stanford 2010). Finally, some thinkers have sought to defend particular forms of explanation frequently deployed in historical sciences, such as narrative explanation (Sterelny and Currie 2017), from the charge that they suffer from rampant underdetermination.

Stanford (2001, 2006) concludes that no convincing general case has been made for the presumption that there are empirically equivalent rivals to all or most scientific theories, or to any theories besides those for which such equivalents can actually be constructed. But he goes on to insist that empirical equivalents are no essential part of the case for a significant problem of contrastive underdetermination. Our efforts to confirm scientific theories, he suggests, are no less threatened by what Larry Sklar (1975, 1981) has called “transient” underdetermination, that is, underdetermination by theories which are not empirically equivalent but are equally (or at least reasonably) well confirmed by all the evidence we happen to have in hand at the moment, so long as this transient predicament is also “recurrent”: that is, so long as we think that there is (probably) at least one such fundamentally distinct alternative available (and thus that the transient predicament re-arises) whenever we are faced with a decision about whether to believe a given theory at a given time. Stanford argues that a convincing case for contrastive underdetermination of this recurrent, transient variety can indeed be made, and that the evidence for it is available in the historical record of scientific inquiry itself.
Stanford concedes that present theories are not transiently underdetermined by the theoretical alternatives we have actually developed and considered to date: we think that our own scientific theories are considerably better confirmed by the evidence than any rivals we have actually produced. The central question, he argues, is whether we should believe that there are well confirmed alternatives to our best scientific theories that are presently unconceived by us. And the primary reason we should believe that there are, he claims, is the long history of repeated transient underdetermination by previously unconceived alternatives across the course of scientific inquiry. In the progression from Aristotelian to Cartesian to Newtonian to contemporary mechanical theories, for instance, the evidence available at the time each earlier theory dominated the practice of its day also offered compelling support for each of the later alternatives (unconceived at the time) that would ultimately come to displace it. Stanford’s “New Induction” over the history of science claims that this situation is typical; that is, that “we have, throughout the history of scientific inquiry and in virtually every scientific field, repeatedly occupied an epistemic position in which we could conceive of only one or a few theories that were well confirmed by the available evidence, while subsequent inquiry would routinely (if not invariably) reveal further, radically distinct alternatives as well confirmed by the previously available evidence as those we were inclined to accept on the strength of that evidence” (2006, 19). In other words, Stanford claims that in the past we have repeatedly failed to exhaust the space of fundamentally distinct theoretical possibilities that were well confirmed by the existing evidence, and that we have every reason to believe that we are probably also failing to exhaust the space of such alternatives that are well confirmed by the evidence we have at present. Much of the rest of his case is taken up with discussing historical examples illustrating that earlier scientists did not simply ignore or dismiss, but instead genuinely failed to conceive of, the serious, fundamentally distinct theoretical possibilities that would ultimately come to displace the theories they defended, only to be displaced in turn by others that were similarly unconceived at the time. He concludes that “the history of scientific inquiry itself offers a straightforward rationale for thinking that there typically are alternatives to our best theories equally well confirmed by the evidence, even when we are unable to conceive of them at the time” (2006, 20; for reservations and criticisms concerning this line of argument, see Magnus 2006, 2010; Godfrey-Smith 2008; Chakravartty 2008; Devitt 2011; Ruhmkorff 2011; Lyons 2013). Stanford concedes, however, that the historical record can offer only fallible evidence of a distinctive, general problem of contrastive scientific underdetermination, rather than the kind of deductive proof that champions of the case from empirical equivalents have typically sought. Thus, claims and arguments about the various forms that underdetermination may take, their causes and consequences, and the further significance they hold for the scientific enterprise as a whole continue to evolve in the light of ongoing controversy, and the underdetermination of scientific theory by evidence remains very much a live and unresolved issue in the philosophy of science.
Related entries: confirmation | constructive empiricism | Duhem, Pierre | epistemology: naturalism in | feminist philosophy, interventions: epistemology and philosophy of science | Feyerabend, Paul | induction: problem of | Quine, Willard Van Orman | scientific knowledge: social dimensions of | scientific realism
I have benefited from discussing both the organization and content of this article with many people, including audiences and participants at the 2009 Pittsburgh Workshop on Underdetermination and the 2009 Southern California Philosophers of Science retreat, as well as the participants in graduate seminars at both UC Irvine and Pittsburgh. Special thanks are owed to John Norton, P. D. Magnus, John Manchak, Bennett Holman, Penelope Maddy, Jeff Barrett, David Malament, John Earman, and James Woodward.
Copyright © 2023 by Kyle Stanford <stanford@uci.edu>