Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).
Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method, and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.
While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.
The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist we need to be about method. Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.
This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.
The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.
Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.
Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.
Section 3 turns to 20th century debates on scientific method. In the second half of the 20th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was that very few philosophers any longer argued for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.
In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.
As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.
Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences.[1]
We begin with a point made by Laudan (1968) in his historical survey of scientific method:
Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)
To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon-holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science).
Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible (The Republic, 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature (Metaphysics Z, in Barnes 1984).
Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics, Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science (epistêmê) is a body of properly arranged knowledge or learning — the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality).
In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon. This title would be echoed in later works on scientific reasoning, such as the Novum Organum of Francis Bacon and the Novum Organon Renovatum of William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/synthesis, non-ampliative/ampliative, or even confirmation/verification. The basic idea is there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.
The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics). During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1564), and Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and the best rules for its application.[2] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.
During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16th–18th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world — advances in mechanical, medical, biological, political, economic explanations — but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists; Boyle; Henry More; Galileo).
In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone). The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.
Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon).
It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks, this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia.[3] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World (Principia, Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy.)
To his list of methodological prescriptions should be added Newton’s famous phrase “hypotheses non fingo” (commonly translated as “I frame no hypotheses”). The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1712–1764). The emphasis was often as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Chatelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton, Leibniz, Descartes, Boyle, Hume, enlightenment, as well as Shank 2008 for a historical overview.)
Not all 18th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as on the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley; David Hume; Hume’s Newtonianism and Anti-Newtonianism). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.
The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced — hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and the subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and of John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell.)
Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasising the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a forerunner of the hypothetico-deductivist view, seem to have under-estimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Down-playing the discovery phase would come to characterize methodology of the early 20th century (see section 3).
Mill, in his System of Logic, put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which law of laws will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes — now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors (System of Logic (1843); see the entry on Mill). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill).
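The first two of Mill’s methods lend themselves to a simple set-theoretic rendering. The sketch below is illustrative only: the function names, data structure, and dinner-party example are assumptions introduced here, not drawn from Mill’s System of Logic.

```python
# Toy illustration of two of Mill's five methods for identifying causes.
# Each "case" is modeled as the set of circumstances present in it.

def method_of_agreement(positive_cases):
    """Method of agreement: the circumstances common to every case
    in which the phenomenon of interest occurs."""
    common = set(positive_cases[0])
    for case in positive_cases[1:]:
        common &= set(case)
    return common

def method_of_difference(positive_case, negative_case):
    """Method of difference: the circumstances present when the phenomenon
    occurs but absent in an otherwise similar case where it does not."""
    return set(positive_case) - set(negative_case)

# Hypothetical example: which circumstance accompanies illness after a dinner?
sick = [{"oysters", "wine", "salad"}, {"oysters", "soup"}, {"oysters", "wine"}]
well = {"wine", "salad"}

print(method_of_agreement(sick))            # {'oysters'}
print(method_of_difference(sick[0], well))  # {'oysters'}
```

Both methods single out the oysters, mirroring Mill’s idea that the relevant explanatory factor is what the positive instances share and what distinguishes them from the negative instances.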
The quantum and relativistic revolutions in physics in the early 20th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.
Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming of theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and, on the other, the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence. By and large, for most of the 20th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.
Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system — that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force), would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy Williams Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalisation of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (measuring large distances like light years).
Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods are instead recast in methodological roles. Measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se, but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4.[4]
Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory laden. Theory is required to make any observation, therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science.) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.
The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2, this method had been advanced by Whewell in the 19th century, as well as by Nicod (1924) and others in the 20th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’s inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation). Hempel described Semmelweis’s procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed the test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.
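The logical asymmetry in this pattern can be made concrete in a few lines of code. This is a schematic rendering of the H-D inference step, not Hempel’s own formulation; the function name and the single-test simplification are assumptions made here for illustration.

```python
# Schematic H-D inference: a hypothesis H entails a test implication I.
# Observing not-I refutes H by modus tollens; observing I merely supports H,
# since inferring H from I would be the fallacy of affirming the consequent.

def hd_test(test_implication_holds: bool) -> str:
    """Return the epistemic status of a hypothesis after one test."""
    if not test_implication_holds:
        return "refuted"    # H -> I, not-I, therefore not-H (deductively valid)
    return "supported"      # I alone never verifies H; support only

# Semmelweis-style example: the cadaveric-matter hypothesis implies that
# hand disinfection should lower mortality on the ward.
print(hd_test(True))   # supported
print(hd_test(False))  # refuted
```

The asymmetry visible here — one branch is a valid deduction, the other is not — is exactly the point Popper would later press, as discussed below.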
Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality.)
Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing — but without implying that this is also a measure for the probability that it is true.
Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.
A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory, were it observed (Popper called these the hypothesis’ potential falsifiers); it is crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.
The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).
From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.
Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:
History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)
The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle.) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.
The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.
An important by-product of normal science is the accumulation of puzzles which cannot be solved with resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place.
Feyerabend also identified the aims of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).
An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology.) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it, therefore, explanatory appeals to scientific method were not empirically grounded.
A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be a close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results.)
By the close of the 20th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.
Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has still been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.
Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19th century, criteria for the rejection of outliers proposed by Peirce by the mid-19th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce).
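The Method of Least Squares mentioned above can be sketched in a few lines: a line is fitted to noisy observations by minimizing the sum of squared residuals. The data values and function name below are hypothetical, chosen purely for illustration.

```python
# Minimal least-squares fit of a line y = a + b*x, using the
# closed-form normal-equation solution that minimizes the sum of
# squared residuals (the Legendre/Gauss method in its simplest case).

def least_squares_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = cov(x, y) / var(x); intercept a = mean_y - b * mean_x
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

xs = [0, 1, 2, 3, 4]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]   # noisy observations of roughly y = 2x
a, b = least_squares_line(xs, ys)
```

With these illustrative data the recovered slope is close to 2 and the intercept close to 0, as expected.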
These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of action that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for deciding when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, given the hypothesis were true. In contrast, on Neyman and Pearson’s view, the consequence of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it was.
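The two error types can be made concrete with a small simulation. The hypotheses, sample size, and decision rule below are hypothetical illustrations, not drawn from the historical controversy: the null hypothesis is a fair coin, the alternative a biased coin, and the rule rejects the null when a fixed count of heads is reached.

```python
import random

# H0: coin is fair (p = 0.5); H1: coin is biased (p = 0.7).
# Decision rule: reject H0 if at least 60 heads appear in 100 flips.
# Type I error = rejecting H0 although it is true;
# type II error = failing to reject H0 although H1 is true.

def heads(p, n=100):
    return sum(random.random() < p for _ in range(n))

def error_rates(trials=20000, cutoff=60):
    random.seed(0)  # reproducible illustration
    type1 = sum(heads(0.5) >= cutoff for _ in range(trials)) / trials
    type2 = sum(heads(0.7) < cutoff for _ in range(trials)) / trials
    return type1, type2

t1, t2 = error_rates()
```

Raising the cutoff lowers the type I rate but raises the type II rate, which is exactly the trade-off Neyman and Pearson argued must be settled by the consequences of each kind of error.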
Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong, or the probability sufficiently high, to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis. Others, such as Jeffrey (1956) and Levi (1960), disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.
In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism, which understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism, which instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present.
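The Bayesian updating rule described above can be sketched directly from Bayes’ theorem. The hypothesis and the numbers below are hypothetical, chosen only to show the mechanics of belief revision.

```python
# Bayes' theorem for belief revision:
#   P(H | E) = P(E | H) * P(H) / P(E), where
#   P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def update(prior, likelihood_h, likelihood_not_h):
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Prior credence 0.5 in a hypothesis H; the observed evidence is
# twice as likely if H is true (0.8) as if it is false (0.4).
posterior = update(0.5, 0.8, 0.4)
# Conditionalizing again on independent evidence of the same
# strength raises the credence further.
posterior2 = update(posterior, 0.8, 0.4)
```

The first update raises the credence from 0.5 to 2/3, and the second to 0.8, illustrating how repeated conditionalization accumulates support.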
Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by their various proponents, and their relations to previous criticisms of attempts at defining scientific method are seen differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast and the reader is referred to the entries on Bayesian epistemology and confirmation.
Attention to scientific practice, as we have seen, is not itself new. However, the turn to practice in the philosophy of science of late can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context-specific problem-solving procedures, and methodological analyses to be at the same time descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following section contains a survey of some of the practice focuses. In this section we turn fully to topics rather than chronology.
A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 2) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to psychology and sociology of science; these are also important aspects of scientific practice which philosophy of science should address (see also the entry on scientific discovery). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.
Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaption of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that
creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)
Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is
the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)
Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) presents science as problem solving and investigates scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.
Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However, the difference between theory-driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory-driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.
The development of high throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data, and these new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).
The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?
Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.
The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
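A minimal sketch of the verification step: integrate a model equation numerically and check the result against a case where the analytic solution is known. The equation dy/dt = -y below is a hypothetical stand-in for a model, chosen because its exact solution y(t) = exp(-t) makes the check possible.

```python
import math

# Verification sketch: integrate dy/dt = -y with Euler's method and
# compare against the analytic solution y(1) = exp(-1). Shrinking the
# step size should shrink the error, which is evidence the equations
# are being correctly approximated. (Validation, by contrast, would
# ask whether dy/dt = -y adequately represents the target phenomenon.)

def euler(f, y0, t_end, dt):
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += dt * f(y)
        t += dt
    return y

f = lambda y: -y
coarse = abs(euler(f, 1.0, 1.0, 0.1) - math.exp(-1))
fine = abs(euler(f, 1.0, 1.0, 0.01) - math.exp(-1))
# Euler is a first-order method, so the fine-step error should be
# roughly ten times smaller than the coarse-step error.
```

Passing such a convergence check verifies the approximation scheme; it says nothing about whether the model equations fit the world, which is the separate question of validation.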
A number of issues related to computer simulations have been raised. The identification of validity and verification as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.
For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science).
In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008, Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merit of data-driven and hypothesis-driven research (for samples, see e.g. Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data.
Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that convey either the legend of a single, universal method characteristic of all science, or grants to a particular method or set of methods privilege as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or for justifying the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic for scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.
One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002).[5] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four- or five-step procedure starting from observations and description of a phenomenon and progressing over formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).
Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of
(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)
Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence, from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.
Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures.[6] However, just as often scientists have come to the same conclusion as recent philosophy of science that there is not any unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how
The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)
Interview studies with scientists on their conception of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019, Hangel & Schickore 2017).
Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.
Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activity that is valuable only insofar as it fuels hypothesis-driven research.
In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Method, Results, Analysis, Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).
Philosophical positions on the scientific method have also made it into the courtroom, especially in the US, where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. A key case is Daubert v. Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to works of Popper and Hempel, the court stated that
ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)
But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific, as indicated by a special methodology, the court was producing an inconsistent mixture of Popper’s and Hempel’s philosophies, and this has later led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).
The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989, which defined misconduct as
fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community. (Code of Federal Regulations, part 50, subpart A., August 8, 1989; italics added)
However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Sciences stated in its report Responsible Science (1992) that it
wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)
This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).
The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.
One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168) and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore urges the question about the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge, and critical discourse.
Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.
Another, similar approach has been offered by Haack (2003). She sets off, similarly to Hoyningen-Huene, from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in the wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.
al-Kindi | Albert the Great [= Albertus Magnus] | Aquinas, Thomas | Arabic and Islamic Philosophy, disciplines in: natural philosophy and natural science | Arabic and Islamic Philosophy, historical and methodological topics in: Greek sources | Arabic and Islamic Philosophy, historical and methodological topics in: influence of Arabic and Islamic Philosophy on the Latin West | Aristotle | Bacon, Francis | Bacon, Roger | Berkeley, George | biology: experiment in | Boyle, Robert | Cambridge Platonists | confirmation | Descartes, René | Enlightenment | epistemology | epistemology: Bayesian | epistemology: social | Feyerabend, Paul | Galileo Galilei | Grosseteste, Robert | Hempel, Carl | Hume, David | Hume, David: Newtonianism and Anti-Newtonianism | induction: problem of | Kant, Immanuel | Kuhn, Thomas | Leibniz, Gottfried Wilhelm | Locke, John | Mill, John Stuart | More, Henry | Neurath, Otto | Newton, Isaac | Newton, Isaac: philosophy | Ockham [Occam], William | operationalism | Peirce, Charles Sanders | Plato | Popper, Karl | rationality: historicist theories of | Reichenbach, Hans | reproducibility, scientific | Schlick, Moritz | science: and pseudo-science | science: theory and observation in | science: unity of | scientific discovery | scientific knowledge: social dimensions of | simulations in science | skepticism: medieval | space and time: absolute and relational space and motion, post-Newtonian theories | Vienna Circle | Whewell, William | Zabarella, Giacomo
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054