Debates about scientific realism are closely connected to almost everything else in the philosophy of science, for they concern the very nature of scientific knowledge. Scientific realism is a positive epistemic attitude toward the content of our best theories and models, recommending belief in both observable and unobservable aspects of the world described by the sciences. This epistemic attitude has important metaphysical and semantic dimensions, and these various commitments are contested by a number of rival epistemologies of science, known collectively as forms of scientific antirealism. This article explains what scientific realism is, outlines its main variants, considers the most common arguments for and against the position, and contrasts it with its most important antirealist counterparts.
It is perhaps only a slight exaggeration to say that scientific realism is characterized differently by every author who discusses it, and this presents a challenge to anyone hoping to learn what it is. Fortunately, underlying the many idiosyncratic qualifications and variants of the position, there is a common core of ideas, typified by an epistemically positive attitude toward the outputs of scientific investigation, regarding both observable and unobservable aspects of the world. The distinction here between the observable and the unobservable reflects human sensory capabilities: the observable is that which can, under favorable conditions, be perceived using the unaided senses (for example, planets and platypuses); the unobservable is that which cannot be detected this way (for example, proteins and protons). This is to privilege vision merely for terminological convenience, and differs from scientific conceptions of observability, which generally extend to things that are detectable using instruments (Shapere 1982). The distinction itself has been problematized (Maxwell 1962; Churchland 1985; Musgrave 1985; Dicken & Lipton 2006) and defended (Muller 2004, 2005; cf. Turner 2007 regarding the distant past). If it is problematic, this is arguably a concern primarily for certain forms of antirealism, which adopt an epistemically positive attitude only with respect to the observable. It is not ultimately a concern for scientific realism, which does not discriminate epistemically between observables and unobservables per se.
Before considering the nuances of what scientific realism entails, it is useful to distinguish between two different kinds of definition in this context. Most commonly, the position is described in terms of the epistemic achievements constituted by scientific theories (and models—this qualification will be taken as given henceforth). On this approach, scientific realism is a position concerning the actual epistemic status of theories (or some components thereof), and this is described in a number of ways. For example, most people define scientific realism in terms of the truth or approximate truth of scientific theories or certain aspects of theories. Some define it in terms of the successful reference of theoretical terms to things in the world, both observable and unobservable. (A note about the literature: “theoretical term”, prior to the 1980s, was standardly used to denote terms for unobservables, but will be used here to refer to any scientific term, which is now the more common usage.) Others define scientific realism not in terms of truth or reference, but in terms of belief in the ontology of scientific theories. What all of these approaches have in common is a commitment to the idea that our best theories have a certain epistemic status: they yield knowledge of aspects of the world, including unobservable aspects. (For definitions along these lines, see Smart 1963; Boyd 1983; Devitt 1991; Kukla 1998; Niiniluoto 1999; Psillos 1999; and Chakravartty 2007a.)
Another way to think about scientific realism is in terms of the epistemic aims of scientific inquiry (van Fraassen 1980: 8; Lyons 2005). That is, some think of the position in terms of what science aims to do: the scientific realist holds that science aims to produce true descriptions of things in the world (or approximately true descriptions, or ones whose central terms successfully refer, and so on). There is a weak implication here to the effect that if science aims at truth, and scientific practice is at all successful, the characterization of scientific realism in terms of aim may then entail some form of characterization in terms of achievement. But this is not a strict implication, since defining scientific realism in terms of aiming at truth does not, strictly speaking, suggest anything about the success of scientific practice in this regard. For this reason, some take the aspirational characterization of scientific realism to be too weak (Kitcher 1993: 150; Devitt 2005: n. 10; Chakravartty 2007b: 197; for skepticism about scientific aim-talk more generally, see Rowbottom 2014)—it is compatible with the sciences never actually achieving, and even the impossibility of their achieving, their aim as conceived on this view of scientific realism. Most scientific realists commit to something more in terms of achievement, and this is assumed in what follows.
The description of scientific realism as a positive epistemic attitude toward theories, including parts putatively concerning the unobservable, is a kind of shorthand for more precise commitments (Kukla 1998: ch. 1; Niiniluoto 1999: ch. 1; Psillos 1999: Introduction; Chakravartty 2007a: ch. 1). Traditionally, realism more generally is associated with any position that endorses belief in the reality of something. Thus, one might be a realist about one’s perceptions of tables and chairs (sense datum realism), or about tables and chairs themselves (external world realism), or about mathematical entities such as numbers and sets (mathematical realism), and so on. Scientific realism is a realism about whatever is described by our best scientific theories—from this point on, “realism” here denotes scientific realism. But what, more precisely, is that? In order to be clear about what realism in the context of the sciences amounts to, and to differentiate it from some important antirealist alternatives, it is useful to understand it in terms of three dimensions: a metaphysical (or ontological) dimension; a semantic dimension; and an epistemological dimension.
Metaphysically, realism is committed to the mind-independent existence of the world investigated by the sciences. This idea is best clarified in contrast with positions that deny it. For instance, it is denied by any position that falls under the traditional heading of “idealism”, including some forms of phenomenology, according to which there is no world external to and thus independent of the mind. This sort of idealism, however, though historically important, is rarely encountered in contemporary philosophy of science. More common rejections of mind-independence stem from neo-Kantian views of the nature of scientific knowledge, which deny that the world of our experience is mind-independent, even if (in some cases) these positions accept that the world in itself does not depend on the existence of minds. The contention here is that the world investigated by the sciences—as distinct from “the world in itself” (assuming this to be a coherent distinction)—is in some sense dependent on the ideas one brings to scientific investigation, which may include, for example, theoretical assumptions and perceptual training; this proposal is detailed further in section 4. It is important to note in this connection that human convention in scientific taxonomy is compatible with mind-independence. For example, though Psillos (1999: xix) ties realism to a “mind-independent natural-kind structure” of the world, Chakravartty (2007a: ch. 6) argues that mind-independent properties are often conventionally grouped into kinds (see also Boyd 1999; Humphreys 2004: 22–25, 35–36; and cf. the “promiscuous realism” of Dupré 1993).
Semantically, realism is committed to a literal interpretation of scientific claims about the world. In common parlance, realists take theoretical statements at “face value”. According to realism, claims about scientific objects, events, processes, properties, and relations (I will use the term “scientific entity” as a generic term for these sorts of things henceforth), whether they be observable or unobservable, should be construed literally as having truth values, whether true or false. This semantic commitment contrasts primarily with those of certain “instrumentalist” epistemologies of science, which interpret descriptions of unobservables simply as instruments for the prediction of observable phenomena, or for systematizing observation reports. Traditionally, instrumentalism holds that claims about unobservable things have no literal meaning at all (though the term is often used more liberally in connection with some antirealist positions today). Some antirealists contend that claims involving unobservables should not be interpreted literally, but as elliptical for corresponding claims about observables. These positions are described in more detail in section 4.
Epistemologically, realism is committed to the idea that theoretical claims (interpreted literally as describing a mind-independent reality) constitute knowledge of the world. This contrasts with skeptical positions which, even if they grant the metaphysical and semantic dimensions of realism, doubt that scientific investigation is epistemologically powerful enough to yield such knowledge, or, as in the case of some antirealist positions, insist that it is only powerful enough to yield knowledge regarding observables. The epistemological dimension of realism, though shared by realists generally, is sometimes described more specifically in contrary ways. For example, while many realists subscribe to the truth (or approximate truth) of theories understood in terms of some version of the correspondence theory of truth (as suggested by Fine 1986a and contested by Ellis 1988), some prefer a truthmaker account (Asay 2013) or a deflationary account of truth (Giere 1988: 82; Devitt 2005; Leeds 2007). Though most realists marry their position to the successful reference of theoretical terms, including those for unobservable entities (Boyd 1983, and as described by Laudan 1981), some deny that this is a requirement (Cruse & Papineau 2002; Papineau 2010). Amidst these differences, however, a general recipe for realism is widely shared: our best scientific theories give true or approximately true descriptions of observable and unobservable aspects of a mind-independent world.
The general recipe for realism just described is accurate so far as it goes, but still falls short of the degree of precision offered by most realists. The two main sources of imprecision thus far are found in the general recipe itself, which makes reference to the idea of “our best scientific theories” and the notion of “approximate truth”. The motivation for these qualifications is perhaps clear. If one is to defend a positive epistemic attitude regarding scientific theories, it is presumably sensible to do so not merely in connection with any theory (especially when one considers that, over the long history of the sciences up to the present, some theories were not or are not especially successful), but rather with respect to theories (or aspects of theories, as we will see momentarily) that would appear, prima facie, to merit such a defense, viz. our best theories (or aspects thereof). And it is widely held, not least by realists, that even many of our best scientific theories are likely false, strictly speaking, hence the importance of the notion that theories may be “close to” the truth (that is, approximately true) even though they are false. The challenge of making these qualifications more precise, however, is significant, and has generated much discussion.
Consider first the issue of how best to identify those theories that realists should be realists about. A general disclaimer is in order here: realists are generally fallibilists, holding that realism is appropriate in connection with our best theories even though they likely cannot be proven with absolute certainty; some of our best theories could conceivably turn out to be significantly mistaken, but realists maintain that, granting this possibility, there are grounds for realism nonetheless. These grounds are bolstered by restricting the domain of theories suitable for realist commitment to those that are sufficiently mature and non-ad hoc (Worrall 1989: 153–154; Psillos 1999: 105–108). Maturity may be thought of in terms of the well established nature of the field in which a theory is developed, or the duration of time a theory has survived, or its survival in the face of significant testing; and the condition of being non-ad hoc is intended to guard against theories that are “cooked up” (that is, posited merely) in order to account for some known observations in the absence of rigorous testing. On these construals, however, both the notion of maturity and the notion of being non-ad hoc are admittedly vague. One strategy for adding precision here is to attribute these qualities to theories that make successful, novel predictions. The ability of a theory to do this, it is commonly argued, marks it as genuinely empirically successful, and the sort of theory to which realists should be more inclined to commit (Musgrave 1988; Lipton 1990; Leplin 1997; White 2003; Hitchcock & Sober 2004; Barnes 2008; for a dissenting view, see Harker 2008; cf. Alai 2014).
The idea that with the development of the sciences over time, theories are converging on (“moving in the direction of”, “getting closer to”) the truth, is a common theme in realist discussions of theory change (for example, Hardin & Rosenberg 1982 and Putnam 1982). Talk of approximate truth is often invoked in this context and has produced a significant amount of often highly technical work, conceptualizing the approximation of truth as something that can be quantified, such that judgments of relative approximate truth (of one proposition or theory in comparison to another) can be formalized and given precise definitions. This work provides one possible means by which to consider the convergentist claim that theories can be viewed as increasingly approximately true over time, and this possibility is further considered in section 3.4.
A final and especially important qualification to the general recipe for realism described above comes in the form of a number of variations. These species of generic realism can be viewed as falling into three families or camps: explanationist realism; entity realism; and structural realism. There is a shared principle of speciation here, in that all three approaches are attempts to identify more specifically the component parts of scientific theories that are most worthy of epistemic commitment. Explanationism recommends realist commitment with respect to those parts of our best theories—regarding (unobservable) entities, laws, etc.—that are in some sense indispensable or otherwise important to explaining their empirical success—for instance, components of theories that are crucial in order to derive successful, novel predictions. Entity realism is the view that under conditions in which one can demonstrate impressive causal knowledge of a putative (unobservable) entity, such as knowledge that facilitates the manipulation of the entity and its use so as to intervene in other phenomena, one has good reason for realism regarding it. Structural realism is the view that one should be a realist, not in connection with descriptions of the natures of things (like unobservable entities) found in our best theories, but rather with respect to their structure. All three of these positions adopt a strategy of selectivity, and this and the positions themselves are considered further in section 2.3.
Arguably, the fact that realists have endeavored to qualify their view and propose variations of it, as described above, suggests a collective moral: though some (especially earlier) discussions of realism give the impression that it is an attitude pertaining to science across the board, this is likely too coarse a way to understand the position. Adopting a realist attitude toward the content of scientific theories does not entail that one believes all such content, but rather that one believes those aspects, including unobservable aspects, regarding which one takes such belief to be warranted, thus indicating a realism about those things more specifically. In a similar spirit, some argue for another sort of specificity, suggesting that the best (or only good) arguments for realism are formulated by concentrating on the details of specific cases—the so-called “first-order evidence” of scientific investigation itself. For example, leveraging a case study of Jean Perrin’s argument in 1908 for the reality of unobservable molecules, Achinstein (2002: 491–495) contends that even taking certain realist-friendly assumptions for granted, a compelling argument for realism about any given entity can only be given in terms of the empirical evidence concerning that entity, not by means of more general philosophical arguments. (For similar views, see Magnus & Callender 2004: 333–336 and Saatsi 2010; for skepticism about this, see Dicken 2013 and Park 2016.)
The most powerful intuition motivating realism is an old idea, commonly referred to in recent discussions as the “miracle argument” or “no miracles argument”, after Putnam’s (1975a: 73) claim that realism “is the only philosophy that doesn’t make the success of science a miracle”. The argument begins with the widely accepted premise that our best theories are extraordinarily successful: they facilitate empirical predictions, retrodictions, and explanations of the subject matters of scientific investigation, often marked by astounding accuracy and intricate causal manipulations of the relevant phenomena. What explains this success? One explanation, favored by realists, is that our best theories are true (or approximately true, or correctly describe a mind-independent world of entities, laws, etc.). Indeed, if these theories were far from the truth, so the argument goes, the fact that they are so successful would be miraculous. And given the choice between a straightforward explanation of success and a miraculous explanation, clearly one should prefer the non-miraculous explanation, viz. that our best theories are approximately true (etc.). (For elaborations of the miracle argument, see J. Brown 1982; Boyd 1989; Lipton 1994; Psillos 1999: ch. 4; Barnes 2002; Lyons 2003; Busch 2008; Frost-Arnold 2010; and Dellsén 2016.)
Though intuitively powerful, the miracle argument is contestable in a number of ways. One skeptical response is to question the very need for an explanation of the success of science in the first place. For example, van Fraassen (1980: 40; see also Wray 2007, 2010) suggests that successful theories are analogous to well-adapted organisms—since only successful theories (organisms) survive, it is hardly surprising that our theories are successful, and therefore, there is no demand here for an explanation of success. It is not entirely clear, however, whether the evolutionary analogy is sufficient to dissolve the intuition behind the miracle argument. One might wonder, for instance, why a particular theory is successful (as opposed to why theories in general are successful), and the explanation sought may turn on specific features of the theory itself, including its descriptions of unobservables. Whether such explanations need be true, though, is a matter of debate. While most theories of explanation require that the explanans be true, pragmatic theories of explanation do not (van Fraassen 1980: ch. 5). More generally, any epistemology of science that does not accept one or more of the three dimensions of realism—commitment to a mind-independent world, literal semantics, and epistemic access to unobservables—will thereby present a putative reason for resisting the miracle argument. These positions are considered in section 4.
Some authors contend that the miracle argument is, in fact, an instance of fallacious reasoning called the base rate fallacy (Howson 2000: ch. 3; Lipton [1991] 2004: 196–198; Magnus & Callender 2004). Consider the following illustration. There is a test for a disease for which the rate of false negatives (negative results in cases where the disease is present) is zero, and the rate of false positives (positive results in cases where the disease is absent) is one in ten (that is, disease-free individuals test positive 10% of the time). If one tests positive, what are the chances that one has the disease? It would be a mistake to conclude that, based on the rate of false positives, the probability is 90%, for the actual probability depends on some further, crucial information: the base rate of the disease in the population (the proportion of people having it). The lower the incidence of the disease at large, the lower the probability that a positive result signals the presence of the disease.
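To make the arithmetic explicit (a worked illustration with an assumed base rate of 1%, a figure chosen here purely for exposition): by Bayes’ theorem, the probability of having the disease (\(D\)) given a positive result (\(+\)) is

\[ P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{1 \times 0.01}{1 \times 0.01 + 0.1 \times 0.99} \approx 0.09. \]

Despite the zero false-negative rate and the modest 10% false-positive rate, the chance that a positive result indicates the disease here is under 10%, not 90%.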
By analogy, using the success of a scientific theory as an indicator of its approximate truth (assuming a low rate of false positives—cases in which theories far from the truth are nonetheless successful) is arguably, likewise, an instance of the base rate fallacy. The success of a theory does not by itself suggest that it is likely approximately true, and since there is no independent way of knowing the base rate of approximately true theories, the chances of it being approximately true cannot be assessed. Worrall (unpublished, Other Internet Resources) maintains that these contentions are ineffective against the miracle argument because they crucially depend on a misleading formalization of it in terms of probabilities (cf. Menke 2014; for a criticism of the miracle argument based on a different probabilistic framing in terms of likelihoods, see Sober 2015: 912–915).
One motivation for realism in connection with at least some unobservables comes by way of “corroboration”. If an unobservable entity is putatively capable of being detected by means of a scientific instrument or experiment, this may well form the basis of a defeasible argument for realism concerning it. If, however, that same entity is putatively capable of being detected by not just one, but rather two or more different means of detection—forms of detection that are distinct with respect to the apparatuses they employ and the causal mechanisms and processes they are described as exploiting in the course of detection—this may serve as the basis of a significantly enhanced argument for realism (cf. Eronen 2015). Hacking (1983: 201; see also Hacking 1985: 146–147) gives the example of dense bodies in red blood platelets that can be detected using different forms of microscopy. Different techniques of detection, such as those employed in light microscopy and transmission electron microscopy, make use of very different sorts of physical processes, and these operations are described theoretically in terms of correspondingly different causal mechanisms. (For similar examples, see Salmon 1984: 217–219 and Franklin 1986: 166–168, 1990: 103–115.)
The argument from corroboration thus runs as follows. The fact that one and the same thing is apparently revealed by distinct modes of detection suggests that it would be an extraordinary coincidence if the supposed target of these revelations did not, in fact, exist. The greater the extent to which detections can be corroborated by different means, the stronger the argument for realism regarding their putative target. The argument here can be viewed as resting on an intuition similar to that underlying the miracle argument: realism based on apparent detection may be only so compelling, but if different, theoretically independent means of detection produce the same result, suggesting the existence of one and the same unobservable, then realism provides a good explanation of the consilient evidence, in contrast with the arguably miraculous state of affairs in which theoretically independent techniques produce the same result in the absence of a shared target. The idea that techniques of (putative) detection are often constructed or calibrated precisely with the intention of reproducing the outputs of others, however, may stand against the argument from corroboration. Additionally, van Fraassen (1985: 297–298) argues that scientific explanations of evidential consilience may be accepted without the explanations themselves being understood as true, which once again raises questions about the nature of scientific explanation.
In section 1.3, the notion of selectivity was introduced as a general strategy for maximizing the plausibility of realism, particularly with respect to scientific unobservables. This strategy is adopted in part to square realism with the widely accepted view that most if not all of even our best theories are false, strictly speaking. If, nevertheless, there are aspects of these theories that are true (or close to the truth) and one is able to identify these aspects, one might then plausibly cast one’s realism in terms of an epistemically positive attitude toward those aspects of theories that are most worthy of epistemic commitment. The most important variants of realism to implement this strategy are explanationism, entity realism, and structural realism. (For related work pertaining to the notion of selectivity more generally, see R. Miller 1987: chs. 8–10; Fine 1991; Jones 1991; Musgrave 1992; Harker 2013; and Peters 2014.)
Explanationists hold that a realist attitude can be justified in connection with unobservables described by our best theories precisely when appealing to those unobservables is indispensable or otherwise important to explaining why these theories are successful. For example, if one takes successful novel prediction to be a hallmark of theories worthy of realist commitment generally, then explanationism suggests that, more specifically, those aspects of the theory that are essential to the derivation of such novel predictions are the parts of the theory most worthy of realist commitment. In this vein, Kitcher (1993: 140–149) draws a distinction between the “presuppositional posits” or “idle parts” of theories, and the “working posits” to which realists should commit. Psillos (1999: chs. 5–6) argues that realism can be defended by demonstrating that the success of past theories did not depend on their false components:
it is enough to show that the theoretical laws and mechanisms which generated the successes of past theories have been retained in our current scientific image. (1999: 108)
The immediate challenge to explanationism is to furnish a method with which to identify precisely those aspects of theories that are required for their success, in a way that is objective or principled enough to withstand the charge that realists are merely rationalizing post hoc, identifying the explanatorily crucial parts of past theories with aspects that have been retained in our current best theories. (For discussions, see Chang 2003; Stanford 2003a,b; Elsamahi 2005; Saatsi 2005a; Lyons 2006; Harker 2010; Cordero 2011; Votsis 2011; and Vickers 2013.)
Another version of realism that adopts the strategy of selectivity is entity realism. On this view, realist commitment is based on a putative ability to causally manipulate unobservable entities (like electrons or gene sequences) to a high degree—for example, to such a degree that one is able to intervene in other phenomena so as to bring about certain effects. The greater the ability to exploit one’s apparent causal knowledge of something so as to bring about (often extraordinarily precise) outcomes, the greater the warrant for belief (Hacking 1982, 1983; cf. B. Miller 2016; Cartwright 1983: ch. 5; Giere 1988: ch. 5; on causal warrant more generally, see Egg 2012). Belief in scientific unobservables thus described is here partnered with a degree of skepticism about scientific theories more generally, and this raises questions about whether believing in entities while withholding belief with respect to the theories that describe them is a coherent or practicable combination (Morrison 1990; Elsamahi 1994; Resnik 1994; Chakravartty 1998; Clarke 2001; Massimi 2004). Entity realism is especially compatible with and nicely facilitated by the causal theory of reference associated with Kripke (1980) and Putnam ([1975b] 1985: ch. 12), according to which one can successfully refer to an entity despite significant or even radical changes in theoretical descriptions of its properties; this allows for stability of epistemic commitment when theories change over time. Whether the causal theory of reference can be applied successfully in this context, however, is a matter of dispute (see Hardin & Rosenberg 1982; Laudan 1984; Psillos 1999: ch. 12; McLeish 2005, 2006; Chakravartty 2007a: 52–56; and Landig 2014; see Weber 2014 for a case study on genes).
Structural realism is another view promoting selectivity, but in this case it is the natures of unobservable entities that are viewed skeptically, with realism reserved for the structure of the unobservable realm, as represented by certain relations described by our best theories. All of the many versions of this position fall into one of two camps: the first emphasizes an epistemic distinction between notions of structure and nature; the second emphasizes an ontological thesis. The epistemic view holds that our best theories likely do not correctly describe the natures of unobservable entities, but do successfully describe certain relations between them. The ontic view suggests that the reason realists should aspire only to knowledge of structure is that the traditional concept of entities that stand in relations is metaphysically problematic—there are, in fact, no such things, or if there are such things, they are in some sense emergent from or dependent on their relations. One challenge facing the epistemic version is that of articulating a concept of structure that makes knowledge of it effectively distinct from that of the natures of entities. The ontological version faces the challenge of clarifying the relevant notions of emergence and/or dependence. (On epistemic structural realism, see Worrall 1989; Psillos 1995, 2006; Votsis 2003; and Morganti 2004; regarding ontic structural realism, see French 1998, 2006, 2014; Ladyman 1998; Psillos 2001, 2006; Ladyman & Ross 2007; and Chakravartty 2007a: ch. 3. See Frigg & Votsis 2011 for an extensive critical survey.)
Lined up in opposition to the various motivations for realism presented in section 2 are a number of important antirealist arguments, all of which have pressed realists either to attempt their refutation, or to modify their realism accordingly. One of these challenges, the underdetermination of theory by data, has a storied history in twentieth century philosophy more generally, and is often traced to the work of Duhem ([1906] 1954: ch. 6; this is not an argument for underdetermination as such, but is regarded as sowing the seeds). In remarks concerning the confirmation of scientific hypotheses (in physics, which he contrasted with chemistry and physiology), Duhem noted that a hypothesis cannot be used to derive testable predictions in isolation. To derive predictions one also requires “auxiliary” assumptions, such as background theories, hypotheses about instruments and measurements, etc. If subsequent observation and experiment produces data that conflict with those predicted, one might think that this reflects badly on the hypothesis under test, but Duhem pointed out that given all of the assumptions required to derive predictions, it is no simple matter to identify where the error lies. Different amendments to one’s overall set of beliefs regarding hypotheses and theories will be consistent with the data. A similar result is commonly associated with the later “confirmational holism” of Quine (1953), according to which experience (including, of course, that associated with scientific testing) does not confirm or disconfirm individual beliefs per se, but rather the set of one’s beliefs taken as a whole. This sort of contention is now commonly referred to as the “Duhem-Quine thesis” (Quine 1975; see Ben-Menahem 2006 for a historical introduction).
How then does this give rise to underdetermination, a presumptive concern for realism? The argument from underdetermination proceeds as follows: let us call the relevant, overall sets of scientific beliefs “theories”; different, conflicting theories are consistent with the data; the data exhaust the evidence for belief; therefore, there is no evidential reason to believe one of these theories as opposed to another. Given that the theories differ precisely in what they say about the unobservable (their observable consequences—the data—are all shared), a challenge to realism emerges: the choice of which theory to believe is underdetermined by the data. In contemporary discussions, the challenge is usually presented using slightly different terminology. Every theory, it is said, has empirically equivalent rivals—that is, rivals that agree with respect to the observable, but differ with respect to the unobservable. This then serves as the basis of a skeptical argument regarding the truth of any particular theory the realist may wish to endorse. Various forms of antirealism then suggest that hypotheses and theories involving unobservables are endorsed, not merely on the basis of evidence that may be relevant to their truth, but also on the basis of other factors that are not indicative of truth as such (see sections 3.2 and 4.2–4.4). (For recent explications, see van Fraassen 1980: ch. 3; Earman 1993; Kukla 1998: chs. 5–6; and Stanford 2001.)
The argument from underdetermination is contested in a number of ways. One might, for example, distinguish between underdetermination in practice (or at a time) and underdetermination in principle. In the former case, there is underdetermination only because the data that would support one theory or hypothesis at the expense of another is unavailable, pending foreseeable developments in experimental technique or instrumentation. Here, realism is arguably consistent with a “wait and see” attitude, though if the prospect of future discriminating evidence is poor, a commitment to future realism may be questioned thereby. In any case, most proponents of underdetermination insist on the idea of underdetermination in principle: the idea that there are always (plausible) empirically equivalent rivals no matter what evidence may come to light. In response, some argue that the principled worry cannot be established, since what counts as data is apt to change over time with the development of new techniques and instruments, and with changes in scientific background knowledge, which alter the auxiliary assumptions required to derive observable predictions (Laudan & Leplin 1991). Such arguments may rest, however, on a different conception of observation than that assumed by many antirealists (defined above, in terms of human sensory capacities). (For other responses, see Okasha 2002; van Dyck 2007; Busch 2009; and Worrall 2011.)
Stanford (2006, 2015) proposes a historicized version of the argument from underdetermination, suggesting that the history of science reveals a recurring “problem of unconceived alternatives”: typically, at any given time, there are theories that do not occur to scientists but which are just as well confirmed by the available evidence as those that are, in fact, accepted; furthermore, over time, such unconceived theories often supplant the theories adopted by historical actors as the relevant science develops. (For discussions and evaluations of this challenge, see Chakravartty 2008; Godfrey-Smith 2008; Magnus 2010; Lyons 2013; Mizrahi 2015: 139–146; and Egg 2016; cf. Wray 2008 and Khalifa 2010 on the related notion of “underconsideration”, as described by Lipton 1993, [1991] 2004: 151–163.)
One especially important reaction to concerns about the alleged underdetermination of theory by data gives rise to another leading antirealist argument. This reaction is to reject one of the key premises of the argument from underdetermination, viz. that evidence for belief in a theory is exhausted by the empirical data. Many realists contend that other considerations—most prominently, explanatory considerations—play an evidential role in scientific inference. If this is so, then even if one were to grant the idea that all theories have empirically equivalent rivals, this would not entail underdetermination, for the explanatory superiority of one in particular may determine a choice (Laudan 1990; Day & Botterill 2008). This is a specific exemplification of a form of reasoning by which “we infer what would, if true, provide the best explanation of [the] evidence” (Lipton [1991] 2004: 1). To put a realist-sounding spin on it:
one infers, from the premise that a given hypothesis would provide a “better” explanation for the evidence than would any other hypothesis, to the conclusion that the given hypothesis is true. (Harman 1965: 89)
Inference to the best explanation (as per Lipton’s formulation) seems ubiquitous in scientific practice. The question of whether it can be expected to yield knowledge of the sort suggested by realism (as per Harman’s formulation) is, however, a matter of dispute.
Two difficulties are immediately apparent regarding the realist aspiration to infer truth (approximate truth, existence of entities, etc.) from hypotheses or theories that are judged best on explanatory grounds. The first concerns the grounds themselves. In order to judge that one theory furnishes a better explanation of some phenomenon than another, one must employ some criterion or criteria on the basis of which the judgment is made. Many have been proposed: simplicity (whether of mathematical description or in terms of the number or nature of the entities involved); consistency and coherence (both internally, and externally with respect to other theories and background knowledge); scope and unity (pertaining to the domain of phenomena explained); and so on. One challenge here concerns whether virtues such as these can be defined precisely enough to permit relative rankings of explanatory goodness. Another challenge concerns the multiple meanings associated with some virtues (consider, for example, mathematical versus ontological simplicity). Another concerns the possibility that such virtues may not all favor any one theory in particular. Finally, there is the question of whether these virtues should be considered evidential or epistemic, as opposed to merely pragmatic. What reason is there to think, for instance, that simplicity is an indicator of truth? Thus, the ability to rank theories with respect to their likelihood of being true may be questioned.
A second difficulty facing inference to the best explanation concerns the pools of theories regarding which judgments of relative explanatory efficacy are made. Even if scientists are likely reliable rankers of theories with respect to truth, this will not lead to belief in a true theory (in some domain) unless that theory in particular happens to be among those considered. Otherwise, as van Fraassen (1989: 143) notes, one may simply end up with “the best of a bad lot”. Given the widespread view, even among realists, that many and perhaps most of our best theories are false, strictly speaking, this concern may seem especially pressing. However, in just the way that the realist strategy of selectivity (see section 2.3) may offer responses to the question of what it could mean for a theory to be close to the truth without being true simpliciter, this same strategy may offer the beginnings of a response here. That is to say, the best theory of a bad lot may nonetheless describe unobservable aspects of the world in such a way as to meet the standards of variants of realism including explanationism, entity realism, and structural realism. (For a book-length treatment of inference to the best explanation, see Lipton [1991] 2004; for defenses, see Lipton 1993; Day & Kincaid 1994; and Psillos 1996, 2009: part III; for critiques, see van Fraassen 1989: chs. 6–7; Ladyman, Douven, Horsten, & van Fraassen 1997; Wray 2008; and Khalifa 2010.)
Worries about underdetermination and inference to the best explanation are generally conceptual in nature, but the so-called pessimistic induction (also called the “pessimistic meta-induction”, because it concerns the “ground level” inductive inferences that generate scientific theories and law statements) is intended as an argument from empirical premises. If one considers the history of scientific theories in any given discipline, what one typically finds is a regular turnover of older theories in favor of newer ones, as scientific knowledge develops. From the point of view of the present, most past theories must be considered false; indeed, this will be true from the point of view of most times. Therefore, by enumerative induction (that is, generalizing from these cases), surely theories at any given time will ultimately be replaced and regarded as false from some future perspective. Thus, current theories are also false. The general idea of the pessimistic induction has a rich pedigree. Though neither endorses the argument, Poincaré ([1905] 1952: 160), for instance, describes the seeming “bankruptcy of science” given the apparently “ephemeral nature” of scientific theories, which one finds “abandoned one after another”, and Putnam (1978: 22–25) describes the challenge in terms of the failure of reference of terms for unobservables, with the consequence that theories incorporating them cannot be said to be true. (For a summary of different formulations, see Wray 2015.)
Contemporary discussion commonly focuses on Laudan’s (1981) argument to the effect that the history of science furnishes vast evidence of empirically successful theories that were later rejected; from subsequent perspectives, their unobservable terms were judged not to refer and thus, they cannot be regarded as true or even approximately true. (If one prefers to define realism in terms of scientific ontology rather than reference and truth, one may rephrase the worry in terms of the mistaken ontologies of past theories from later perspectives.) Responses to this argument generally take one of two forms, the first stemming from the qualifications to realism outlined in section 1.3, and the second from the forms of realist selectivity outlined in section 2.3—both can be understood as attempts to restrict the inductive basis of the argument in such a way as to foil the pessimistic conclusion. For example, one might contend that if only sufficiently mature and non-ad hoc theories are considered, the number whose central terms did not refer and/or that cannot be regarded as approximately true is dramatically reduced (see references, section 1.3). Or, the realist might grant that the history of science presents a record of significant referential discontinuity, but contend that, nevertheless, it also presents a record of impressive continuity regarding what is properly endorsed by realism, as recommended by explanationists, entity realists, or structural realists (see references, section 2.3). (For other responses, see Leplin 1981; McAllister 1993; Chakravartty 2007a: ch. 2; Doppelt 2007; Nola 2008; Roush 2010, 2015; and Fahrbach 2011. Hardin & Rosenberg 1982; Cruse & Papineau 2002; and Papineau 2010 explore the idea that reference is irrelevant to approximate truth.)
In just the way that some authors suggest that the miracle argument is an instance of fallacious reasoning—the base rate fallacy (see section 2.1)—some suggest that the pessimistic induction is likewise flawed (Lewis 2001; Lange 2002; Magnus & Callender 2004). The argument is analogous: the putative failure of reference on the part of past successful theories, or their putative lack of approximate truth, cannot be used to derive a conclusion regarding the chances that our current best theories do not refer to unobservables, or that they are not approximately true, unless one knows the base rate of non-referring or non-approximately true theories in the relevant pools. And since one cannot know this independently, the pessimistic induction is fallacious. Again, analogously, one might argue that to formalize the argument in terms of probabilities, as is required in order to invoke the base rate fallacy, is to miss the more fundamental point underlying the pessimistic induction (Saatsi 2005b). One might read the argument simply as cutting a supposed link between the empirical success of scientific theories and successful reference or approximate truth, as opposed to relying on an inductive inference per se. If even a few examples from the history of science demonstrate that theories can be empirically successful and yet fail to refer to the central unobservables they invoke, or fail to be what realists would regard as approximately true, this constitutes a prima facie challenge to the notion that only realism can explain the success of science.
The regular appeal to the notion of approximate truth by realists has several motivations. The widespread use of abstraction (that is, incorporating some but not all of the relevant parameters into scientific descriptions) and idealization (distorting the natures of certain parameters) suggests that even many of our best theories and models are not strictly correct. The common realist contention that theories can be viewed as gradually converging on the truth as scientific inquiry advances suggests that such progress is amenable to assessment or measurement in some way, if only in principle. And even for realists who are not convergentists as such, the importance of cashing out the metaphor of theories being close to the truth is pressing in the face of antirealist assertions to the effect that the metaphor is empty. The challenge to make good on the metaphor and explicate, in precise terms, what approximate truth could be, is one source of skepticism about realism. Two broad strategies have emerged in response to this challenge: attempts to quantify approximate truth by formally defining the concept and the related notion of relative approximate truth; and attempts to explicate the concept informally.
The formal route was inaugurated by Popper (1972: 231–236), who defined relative orderings of “verisimilitude” (literally, “likeness to truth”) between theories in a given domain over time by means of a comparison of their true and false consequences. D. Miller (1974) and Tichý (1974) proved that there is a technical problem with this account, however, yielding the consequence that in order for theory A to have greater verisimilitude than theory B, A must be true simpliciter, which leaves the realist desideratum of explaining how strictly false theories can differ with respect to approximate truth unsatisfied (see also Oddie 1986a). Another formal account is the possible worlds approach (also called the “similarity” approach), according to which the truth conditions of a theory are identified with the set of possible worlds in which it is true, and “truth-likeness” is calculated by means of a function that measures the average or some other mathematical “distance” between the actual world and the worlds in that set, thereby facilitating orderings of theories with respect to truth-likeness (Tichý 1976, 1978; Oddie 1986b; Niiniluoto 1987, 1998; for critiques, see D. Miller 1976 and Aronson 1990). One last attempt to formalize approximate truth is the type hierarchies approach, which analyzes truth-likeness in terms of similarity relationships between nodes in tree-structured graphs of types and subtypes representing scientific concepts on the one hand, and the entities in the world they putatively represent on the other (Aronson 1990; Aronson, Harré, & Way 1994: 15–49; for a critique, see Psillos 1999: 270–273).
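As a rough schematic of the possible worlds approach (a simplified sketch for illustration, not a formulation advanced by any of the authors cited): where \(W_T\) is the set of worlds in which theory \(T\) is true, \(w_*\) is the actual world, and \(d\) is a normalized measure of distance between worlds, truth-likeness might be defined as

\[ \mathrm{Tl}(T) = 1 - \frac{1}{|W_T|} \sum_{w \in W_T} d(w, w_*), \]

so that \(T_1\) is more truth-like than \(T_2\) just in case \(\mathrm{Tl}(T_1) > \mathrm{Tl}(T_2)\). Actual proposals in the literature differ chiefly in how the distances are aggregated (averages, minima, weighted sums, and so on).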
Less formally and perhaps more typically, realists have attempted to explicate approximate truth in qualitative terms. One common suggestion is that a theory may be considered more approximately true than one that preceded it if the earlier theory can be described as a “limiting case” of the later one. The idea of limiting cases and inter-theory relations more generally is elaborated by Post (1971; see also French & Kamminga 1993), who argues that certain heuristic principles in science yield theories that “conserve” the successful parts of their predecessors. His “General Correspondence Principle” states that later theories commonly account for the successes of their predecessors by “degenerating” into earlier theories in domains in which the earlier ones are well confirmed. Hence, for example, the often cited claim that certain equations in relativistic physics degenerate into the corresponding equations in classical physics in the limit, as velocity tends to zero. The realist may then contend that later theories offer more approximately true descriptions of the relevant subject matter, and that the ways in which they do this can be illuminated in part by studying the ways in which they build on the limiting cases represented by their predecessors. (For further takes on approximate truth, see Leplin 1981; Boyd 1990; Weston 1992; Smith 1998; Chakravartty 2010; and Northcott 2013.)
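A standard physics illustration of such a limiting case (a textbook example, offered here for concreteness rather than drawn from Post): the relativistic expression for momentum reduces to its classical counterpart as velocity becomes small relative to the speed of light,

\[ p = \frac{mv}{\sqrt{1 - v^2/c^2}} \;\longrightarrow\; mv \quad \text{as } v/c \to 0, \]

so classical mechanics is recovered as a limiting case of special relativity precisely in the low-velocity domain where the earlier theory was well confirmed.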
The term “antirealism” (or “anti-realism”) encompasses any position that is opposed to realism along one or more of the dimensions canvassed in section 1.2: the metaphysical commitment to the existence of a mind-independent reality; the semantic commitment to interpret theories literally or at face value; and the epistemological commitment to regard theories as furnishing knowledge of both observables and unobservables. As a result, and as one might expect, there are many different ways to be an antirealist, and many different positions qualify as antirealism (cf. Kitcher 2001: 161–163). In the historical development of realism, arguably the most important strains of antirealism have been varieties of empiricism which, given their emphasis on experience as a source and subject matter of knowledge, are naturally set against the idea of knowledge of unobservables. It is possible to be an empiricist more broadly speaking in a way that is consistent with realism—for example, one might endorse the idea that knowledge of the world stems from empirical investigation and contend that on this basis, one can justifiably infer certain things about unobservables. In the first half of the twentieth century, however, empiricism came predominantly in the form of varieties of “instrumentalism”: the view that theories are merely instruments for predicting observable phenomena or systematizing observation reports.
According to the best known, traditional form of instrumentalism, terms for unobservables have no meaning all by themselves; construed literally, statements involving them are not even candidates for truth or falsity (cf. a more recent proposal in Rowbottom 2011). The most influential advocates of this view were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, “electron” might mean “white streak in a cloud chamber”), or with demonstrable laboratory procedures (a view called “operationalism”). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions “external” to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950). (Duhem [1906] 1954 was influential with respect to instrumentalism; for a critique of logical empiricist semantics, see H. Brown 1977: ch. 3; on logical empiricism more generally, see Giere & Richardson 1997 and Richardson & Uebel 2007; on the neo-Kantian reading, see Richardson 1998 and Friedman 1999.)
Van Fraassen (1980) reinvented empiricism in the scientific context, evading many of the challenges faced by logical empiricism by adopting a realist semantics. His position, “constructive empiricism”, holds that the aim of science is empirical adequacy, where “a theory is empirically adequate exactly if what it says about the observable things and events in the world, is true” (1980: 12; p. 64 gives a more technical definition in terms of the embedding of observable structures in scientific models). Crucially, unlike logical empiricism, constructive empiricism interprets theories in precisely the same manner as realism. The antirealism of the position is due entirely to its epistemology—it recommends belief in our best theories only insofar as they describe observable phenomena, and is satisfied with an agnostic attitude regarding anything unobservable. The constructive empiricist thus recognizes claims about unobservables as true or false, but feels no need to believe or disbelieve them. In focusing on belief in the domain of the observable, the position is similar to traditional instrumentalism, and is for this reason sometimes described as a form of instrumentalism. (For elaborations of the view, see van Fraassen 1985, 2001 and Rosen 1994.) There are also affinities here with the idea of fictionalism, according to which things in the world are and behave as if our best scientific theories are true (Vaihinger [1911] 1923; Fine 1993).
The collapse of the logical empiricist program was in part facilitated by a historical turn in the philosophy of science in the 1960s, associated with authors such as Kuhn, Feyerabend, and Hanson. Kuhn’s highly influential work, The Structure of Scientific Revolutions, played a significant role in establishing a lasting interest in a form of historicism about scientific knowledge, particularly among those interested in the nature of scientific practice. An underlying principle of the historical turn was to take the history of science and its practice seriously by furnishing descriptions of scientific knowledge in situ. Kuhn argued that the fruits of such history illuminate a recurring pattern: periods of so-called normal science, often fairly long in duration (consider, for example, the periods dominated by classical physics, or relativistic physics), punctuated by revolutions which lead scientific communities from one period of normal science into another. The implications for realism on this picture derive from Kuhn’s characterization of knowledge on either side of a revolutionary divide. Two different periods of normal science, he said, are “incommensurable” with one another, in such a way as to render the world importantly different after a revolution (the phenomenon of “world change”). (Among the many detailed studies of these topics, see Horwich 1993; Hoyningen-Huene 1993; Sankey 1994; and Bird 2000.)
The notion of incommensurability applies to (inter alia) a comparison of theories operative during different periods of normal science. Kuhn held that if two theories are incommensurable, they are not comparable in a way that would permit the judgment that one is epistemically superior to the other, because different periods of normal science are characterized by different “paradigms” (commitments to symbolic representations of the phenomena, metaphysical beliefs, values, and problem solving techniques). As a consequence, scientists in different periods of normal science generally employ different methods and standards, experience the world differently via “theory laden” perceptions, and most importantly for Kuhn (1983), differ with respect to the very meanings of their terms. This is a version of meaning holism or contextualism, according to which the meaning of a term or concept is exhausted by its connections to others within a paradigm. A change in any part of this network entails a change in meanings throughout—the term “mass”, for instance, has different meanings in the contexts of classical physics and relativistic physics. Thus, any judgment to the effect that the latter’s characterization of mass is closer to the truth, or even that the relevant theories describe the same property, is importantly confused: it equivocates between two different concepts which can only be understood in an appropriately historicized manner, from the perspectives of the paradigms in which they occur.
The changes in perception, conceptualization, and language that Kuhn associated with changes in paradigm also fuelled his notion of world change, which further extends the contrast of the historicist approach with realism. There is an important sense, Kuhn maintained, in which after a scientific revolution, scientists live in a different world. This is a famously cryptic remark in Structure ([1962] 1970: 111, 121, 150), but he (2000: 264) later gives it a neo-Kantian spin: paradigms function so as to create the reality of scientific phenomena, thereby allowing scientists to engage with this reality. On such a view, it would seem that not only the meanings but also the referents of terms are constrained by paradigmatic boundaries. And thus, reflecting an interesting parallel with neo-Kantian logical empiricism, the idea of a paradigm-transcendent world which is investigated by scientists, and about which one might have knowledge, has no obvious cognitive content. On this picture, empirical reality is structured by scientific paradigms, and this conflicts with the commitment of realism to knowledge of a mind-independent world.
One outcome of the historical turn in the philosophy of science and its emphasis on scientific practice was a focus on the complex social interactions that inevitably surround and infuse the generation of scientific knowledge. Relations between experts, their students, and the public, collaboration and competition between individuals and institutions, and social, economic, and political contexts became the subjects of an approach to studying the sciences known as the sociology of scientific knowledge, or SSK. Though in theory, a commitment to studying the sciences from a sociological perspective is interpretable in such a way as to be neutral with respect to realism (Lewens 2005; cf. Kochan 2010), in practice, most accounts of science inspired by SSK are implicitly or explicitly antirealist. This antirealism in practice stems from the common suggestion that once one appreciates the role that social factors (using this as a generic term for the sorts of interactions and contexts indicated above) play in the production of scientific knowledge, a philosophical commitment to some form of “social constructivism” is inescapable, and this latter commitment is inconsistent with various aspects of realism.
The term “social construction” refers to any knowledge-generating process in which what counts as a fact is substantively determined by social factors, and in which different social factors would likely generate facts that are inconsistent with what is actually produced. The important implication here is thus a counterfactual claim about the dependence of facts on social factors. There are numerous ways in which social determinants of facthood may be consistent with realism. For example, social factors might determine the directions and methodologies of research that are permitted, encouraged, and funded, but this by itself need not undermine a realist attitude with respect to the outputs of scientific work. Often, however, work in SSK takes the form of case studies that aim to demonstrate how particular decisions affecting scientific work were (or are) influenced by social factors which, had they been different, would have facilitated results that are inconsistent with those ultimately accepted as scientific fact. Some, including proponents of the so-called Strong Program in SSK, argue that for more general, principled reasons, such factual contingency is inevitable. (For a sample of influential approaches to social constructivism, see Latour & Woolgar [1979] 1986; Knorr-Cetina 1981; Pickering 1984; Shapin & Schaffer 1985; and Collins & Pinch 1993; on the Strong Program, see Barnes, Bloor, & Henry 1996; for a historical study of the transition from Kuhn to SSK and social constructivism, see Zammito 2004: chs. 5–7.)
By making social factors an inextricable, substantive determinant of what counts as true or false in the realm of the sciences (and elsewhere), social constructivism stands opposed to the realist contention that theories can be understood as furnishing knowledge of a mind-independent world. And as in the historicist approach, notions such as truth, reference, and ontology are here relative to particular contexts; they have no context-transcendent significance. The later work of Kuhn and Wittgenstein in particular was influential in the development of the Strong Program doctrine of “meaning finitism”, according to which the meanings of terms are conceived as social institutions: the various ways in which they are used successfully in communication within a linguistic community. This theory of meaning forms the basis of an argument to the effect that the meanings of scientific (and other) terms are products of social negotiation and need not be fixed or determinate, which further conflicts with a number of realist notions, including the idea of convergence toward true theories, improvements with respect to ontology or approximate truth, and determinate reference to mind-independent entities. The subject of neo-Kantianism thus emerges here again, though its strength in constructivist doctrines varies significantly. (For a robustly finitist view, see Kusch 2002; for a more moderate constructivism, see Putnam’s (1981: ch. 3) “internal realism” and cf. Ellis 1988).
Feminist engagements with science are linked thematically to SSK and forms of social constructivism by their recognition of the role of social factors as determinants of scientific fact. That said, they extend the analysis in a more specific way, reflecting particular concerns about the marginalization of points of view based on gender, ethnicity, socio-economic status, and political status. Not all feminist approaches are antirealist, but nearly all are normative, offering prescriptions for revising both scientific practice and concepts such as objectivity and knowledge that have direct implications for realism. In this regard it is useful to distinguish (as originally proposed in Harding 1986) between three broad approaches. Feminist empiricism focuses on the possibility of warranted belief within scientific communities as a function of the transparency and consideration of biases associated with different points of view which enter into scientific work. Standpoint theory investigates the idea that scientific knowledge is inextricably linked to perspectives arising from differences in such points of view. Feminist postmodernism rejects traditional conceptions of universal or absolute objectivity and truth. (As one might expect, these views are not always neatly distinguishable; for some early, influential approaches, see Keller 1985; Harding 1986; Haraway 1988; Longino 1990, 2002; Alcoff & Potter 1993; and Nelson & Nelson 1996).
The notion of objectivity has a number of traditional connotations—including disinterest (detachment, lack of bias) and universality (independence from any particular perspective or viewpoint)—which are commonly associated with knowledge of a mind-independent world. Feminist critiques are almost unanimous in rejecting scientific objectivity in the sense of disinterest, offering case studies that aim to demonstrate how the presence of (for example) androcentric bias in a scientific community can lead to the acceptance of one theory at the expense of alternatives (Kourany 2010: chs. 1–3; for detailed cases, see Longino 1990: ch. 6 and Lloyd 2006). Arguably, the failure of objectivity in this sense is consistent with realism under certain conditions. For example, if the relevant bias is epistemically neutral (that is, if one’s assessment of scientific evidence is not influenced by it one way or another), then realism may remain at least one viable interpretation of the outputs of scientific work. In the more interesting case where bias is epistemically consequential, the prospects for realism are diminished, but may be enhanced by a scientific infrastructure that functions to bring it under scrutiny (by means of, for example, effective peer review, genuine consideration of minority views, etc.), thus facilitating corrective measures where appropriate. The contention that the sciences do not generally exemplify such an infrastructure is one motivation for the normativity of much feminist empiricism.
The challenge to objectivity in the sense of universality or perspective-independence can be, in some cases, more difficult to square with the possibility of realism. In a Marxist vein, some standpoint theorists argue that certain perspectives are epistemically privileged in the realm of science: viz., subjugated perspectives are epistemically privileged in comparison to dominant ones in light of the deeper insight afforded the former (just as the proletariat has a deeper knowledge of human potential than the superficial knowledge typical of those in power). Others portray epistemic privilege in a more splintered or deflationary manner, suggesting that no one point of view can be established as superior to another by any overarching standard of epistemological assessment. This view is most explicit in feminist postmodernism, which embraces a thoroughgoing relativism with respect to truth (and presumably approximate truth, scientific ontology, and other notions central to various descriptions of realism). As in the case of Strong Program SSK, truth and epistemic standards are here defined only within the context of a perspective, and thus cannot be interpreted in any context-transcendent or mind-independent manner.
It is not uncommon to hear philosophers remark that the dialogue between the forms of realism and antirealism surveyed in this article shows every symptom of a perennial philosophical dispute. The issues contested range so broadly and elicit so many competing intuitions (about which, arguably, reasonable people may disagree) that some question whether a resolution is even possible. This prognosis of potentially irresolvable dialectical complexity is relevant to a number of further views in the philosophy of science, some of which arise as direct responses to it. For example, Fine ([1986b] 1996: chs. 7–8) argues that ultimately, neither realism nor antirealism is tenable, and recommends what he calls the “natural ontological attitude” (NOA) instead (see Rouse 1988, 1991 for detailed explorations of the view). NOA is intended to comprise a neutral, common core of realist and antirealist attitudes of acceptance of our best theories. The mistake that both parties make, Fine suggests, is to add further epistemological and metaphysical diagnoses to this shared position, such as pronouncements about which aspects of scientific ontology should be viewed as real, which are proper subjects of belief, and so on. Others contend that this sort of approach to scientific knowledge is non- or anti-philosophical, and defend philosophical engagement in debates about realism (Crasnow 2000; McArthur 2006). Musgrave (1989) argues that the view is either empty or collapses into realism.
The idea of putting the conflict between realist and antirealist approaches to science aside is also a recurring theme in some accounts of pragmatism and quietism. Regarding the first, Peirce ([1992] 1998, in “How to Make Our Ideas Clear”, for instance, originally published in 1878) holds that the content of a proposition should be understood in terms of (among other things) its “practical consequences” for human experience, such as implications for observation or problem-solving. For James ([1907] 1979), positive utility measured in these terms is the very marker of truth (where truth is whatever will be agreed in the ideal limit of scientific inquiry). Many of the points disputed by realists and antirealists—differences in epistemic commitment to scientific entities based on observability, for example—are effectively non-issues on this view (Almeder 2007; Misak 2010). It is nevertheless a form of antirealism on traditional readings of Peirce and James, since both suggest that truth in the pragmatist sense exhausts our conception of reality, thus running foul of the metaphysical dimension of realism. The notion of quietism is often associated with Wittgenstein’s response to philosophical problems about which, he maintained, nothing sensible can be said. This is not to say that engaging with such a problem is not to one’s taste, but rather that, quite independently of one’s interest or lack thereof, the dispute itself concerns a pseudo-problem. Blackburn (2002) suggests that disputes about realism may have this character.
One last take on the putative irresolvability of debates concerning realism focuses on certain meta-philosophical commitments adopted by the interlocutors. Wylie (1986: 287), for instance, claims that
the most sophisticated positions on either side now incorporate self-justifying conceptions of the aim of philosophy and of the standards of adequacy appropriate for judging philosophical theories of science.
Different assumptions ab initio regarding what sorts of inferences are legitimate, what sorts of evidence reasonably support belief, whether there is a genuine demand for the explanation of observable phenomena in terms of underlying realities, and so on, may render some arguments between realists and antirealists question-begging. This diagnosis is arguably facilitated by van Fraassen’s (1989: 170–176, 1994: 182) intimation that neither realism nor antirealism (in his case, empiricism) is ruled out by plausible canons of rationality; each is sustained by a different conception of how much epistemic risk one should take in forming beliefs on the basis of one’s evidence. An intriguing question then emerges as to whether disputes surrounding realism and antirealism are resolvable in principle, or whether, ultimately, internally consistent and coherent formulations of these positions should be regarded as irreconcilable but nonetheless permissible interpretations of scientific knowledge (Chakravartty 2017; Forbes forthcoming).
Related entries: abduction | constructive empiricism | empiricism: logical | feminist philosophy, interventions: epistemology and philosophy of science | feminist philosophy, topics: perspectives on science | incommensurability: of scientific theories | Kuhn, Thomas | models in science | rationality: historicist theories of | science: theory and observation in | scientific explanation | scientific knowledge: social dimensions of | scientific objectivity | scientific progress | scientific revolutions | structural realism | theoretical terms in science | truthlikeness | underdetermination, of scientific theories | Vienna Circle
For helpful comments on the whole or parts of this article, I am grateful to Matthew J. Brown, Jacob Busch, Arthur Fine, Gregory Frost-Arnold, David Harker, Christopher Hitchcock, Kareem Khalifa, Timothy D. Lyons, Ilkka Niiniluoto, Elliott Sober, Bas C. van Fraassen, and K. Brad Wray. For special assistance, many thanks are due to Jamee Elder, Alex Koo, and Dean Peters.