Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy which asserts that a statement is meaningful only if it is either empirically verifiable (can be confirmed through the senses) or a tautology (true by virtue of its own meaning or its own logical form). Verificationism rejects statements of metaphysics, theology, ethics and aesthetics as meaningless in conveying truth value or factual content, though they may be meaningful in influencing emotions or behavior.[1]
Verificationism was a central thesis of logical positivism, a movement in analytic philosophy that emerged in the 1920s among philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge.[2] The verifiability criterion underwent various revisions from the 1920s to the 1950s. However, by the 1960s, it was deemed irreparably untenable.[3] Its abandonment would eventually precipitate the collapse of the broader logical positivist movement.[4]
The roots of verificationism may be traced to at least the 19th century, in philosophical principles that aim to ground scientific theory in verifiable experience, such as C. S. Peirce's pragmatism and the work of conventionalist Pierre Duhem,[3] who fostered instrumentalism.[5] Verificationism, as a principle, would be conceived in the 1920s by the logical positivists of the Vienna Circle, who sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as empirical science.[6] The movement established grounding in the empiricism of David Hume,[7] Auguste Comte and Ernst Mach, and the positivism of the latter two, borrowing perspectives from Immanuel Kant and defining their exemplar of science in Einstein's general theory of relativity.[8]
Ludwig Wittgenstein's Tractatus, published in 1921, established the theoretical foundations for the verifiability criterion of meaning.[9] Building upon Gottlob Frege's work, the analytic–synthetic distinction was also reformulated, reducing logic and mathematics to semantical conventions. This would render logical truths (being unverifiable by the senses) tenable under verificationism, as tautologies.[10]
Logical positivists within the Vienna Circle quickly recognized that the verifiability criterion was too stringent. Specifically, universal generalizations were noted to be empirically unverifiable, rendering vital domains of science and reason, including scientific hypotheses, meaningless under verificationism, absent revisions to its criterion of meaning.[11]
Rudolf Carnap, Otto Neurath, Hans Hahn and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the "liberalization of empiricism". Moritz Schlick and Friedrich Waismann led a "conservative wing" that maintained a strict verificationism. Whereas Schlick sought to redefine universal generalizations as tautological rules, thereby reconciling them with the existing criterion, Hahn argued that the criterion itself should be weakened to accommodate non-conclusive verification.[12] Neurath, within the liberal wing, proposed the adoption of coherentism, though it was challenged by Schlick's foundationalism. However, his physicalism would eventually be adopted over Mach's phenomenalism by most members of the Vienna Circle.[11][13]
With the publication of The Logical Syntax of Language in 1934, Carnap defined "analytic" in a new way to accommodate Gödel's incompleteness theorem; Gödel himself ultimately "thought that Carnap's approach to mathematics could be refuted."[14] This method allowed Carnap to distinguish between a derivability relation between premises, obtainable in a finite number of steps, and a semantic consequence relation, under which the consequent has the same truth value as the premises on all valuations. It follows that all sentences of pure mathematics individually, or their negations, are "a consequence of the null set of premises. This leaves Gödel's results completely intact as they concerned what is provable, that is, derivable from the null set of premises or from any one consistent axiomatization of mathematical truths."[14]
In 1936, Carnap sought a switch from verification to confirmation.[11] Carnap's confirmability criterion (confirmationism) would not require conclusive verification (thus accommodating universal generalizations) but would allow partial testability to establish degrees of confirmation on a probabilistic basis. Carnap never succeeded in finalizing his thesis, despite employing abundant logical and mathematical tools for the purpose. In all of Carnap's formulations, a universal law's degree of confirmation was zero.[15]
In Language, Truth and Logic, published that year, A. J. Ayer distinguished between strong and weak verification. This system espoused conclusive verification, yet allowed for probabilistic inclusion where verifiability is inconclusive. He also distinguished theoretical from practical verifiability, proposing that statements that are verifiable in principle should be meaningful, even if unverifiable in practice.[16][17]
Philosopher Karl Popper, a graduate of the University of Vienna, though never a member of the Vienna Circle, was among the foremost critics of verificationism. He identified three fundamental deficiencies in verifiability as a criterion of meaning:[18]
Popper regarded scientific hypotheses as never completely verifiable, nor confirmable under Carnap's thesis.[9][19] He also considered metaphysical, ethical and aesthetic statements to be often rich in meaning and important in the origination of scientific theories.[9]
Other philosophers also voiced their own criticisms of verificationism:
In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived to be intractable problems in both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (as is commonly misunderstood),[25] but as a criterion to demarcate scientific statements from non-scientific statements.[9]
Notably, the falsifiability criterion allows scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until proven false by observation, whereas under verificationism they would be disqualified immediately as meaningless.[9]
In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Appealing to the general theory of relativity and its predicted effects on gravitational lensing, it was evident to Popper that Einstein's theories carried significantly greater predictive risk than Freud's of being falsified by observation. Though Freud found ample confirmation of his theories in observations, Popper noted that this method of justification was vulnerable to confirmation bias, leading in some cases to contradictory outcomes. He therefore concluded that predictive risk, or falsifiability, should serve as the criterion to demarcate the boundaries of science.[26]
Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science,[18] it was embraced enthusiastically by scientists.[19] Logical positivists, too, adopted the criterion, even as their movement ran its course, catapulting Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.[25]
In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, "Logical positivism is dead, or as dead as a philosophical movement ever becomes".[4] Logical positivism's fall heralded postpositivism, in which Popper's view of human knowledge as hypothetical, continually growing and open to change ascended,[25] while verificationism became mostly maligned in academic circles.[3]
In a 1976 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s,[27] was asked what he saw as its main defects, and answered that "nearly all of it was false".[4] However, he soon added that he still held "the same general approach", referring to empiricism and reductionism, whereby mental phenomena resolve to the material or physical, and philosophical questions largely resolve to ones of language and meaning.[4] In 1977, Ayer noted:[3]
"The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens's Great Expectations. They have lived on the money, but are ashamed to acknowledge its source."
In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.[3]