Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy which asserts that a statement is cognitively meaningful only if it is empirically verifiable (can be confirmed through experience) or an analytic truth (true by virtue of its definition or logical form).[1][2] Typically expressed as a criterion of meaning, it rejects traditional statements of metaphysics, theology, ethics and aesthetics as meaningless in terms of conveying truth value or factual content, reducing them to emotive expressions or "pseudostatements" that are neither true nor false.[1][3][4]
Verificationism was the central thesis of logical positivism (or logical empiricism), a philosophical movement in the empiricist tradition originating in the Vienna Circle and Berlin Circle of the 1920s and 1930s.[5] The logical positivists sought to formulate a scientifically oriented theory of knowledge in which the ambiguities associated with traditional metaphysical language would be negated or minimised, and empirical testability would be enforced as the paradigm of serious inquiry.[5][2]
Attempts to define a precise criterion of meaning faced intractable problems from the movement's inception. The earliest versions were found to be too restrictive in that they excluded universal generalizations, such as scientific laws.[2] Various alternative proposals were devised, which distinguished between strong and weak verifiability or between practical and in-principle verifiability, along with probabilistic variants. In the 1950s, the theoretical foundations of verificationism came under escalating scrutiny through the work of philosophers such as Willard Van Orman Quine and Karl Popper.[6] Widespread sentiment deemed it impossible to formulate a universal criterion that could preserve scientific inquiry while rejecting the metaphysical ambiguities the positivists sought to exclude.[2][3]
By the 1960s, verificationism had become widely regarded as untenable, and its abandonment is often cited as a decisive factor in the subsequent decline of logical positivism.[7][3] Nonetheless, it continued to influence later post-positivist philosophy and empiricist theories of truth and meaning,[8][9] including the work of philosophers such as Bas van Fraassen, Michael Dummett and Crispin Wright.[3][5]
This section outlines the broad historical context in which verificationism was developed and motivated; details on specific aspects of the theory are given in the sections devoted to them.

Nineteenth- and early twentieth-century empiricism already contained many of the ingredients of verificationism. Pragmatists such as C. S. Peirce and William James linked the meaning of a concept to its practical and experiential consequences, while the conventionalist Pierre Duhem treated physical theories as instruments for organizing observations rather than as literal descriptions of unobservable reality.[3][10] Later historians have therefore tended to treat verificationism as a sophisticated heir to this broader tradition of empiricist and pragmatist thought.[3] According to Gilbert Ryle, James's pragmatism was "one minor source of the Principle of Verifiability".[11]
At the same time, classical empiricism, especially the work of David Hume, provided exemplars for the idea that meaningful discourse must be tied to possible experience, even if Hume himself did not draw the later positivists' radical conclusions about metaphysics.[12] The positivism of Auguste Comte and Ernst Mach reinforced this orientation by insisting that science should confine itself to describing regularities among observable phenomena, a stance that influenced the early logical empiricists' suspicion of unobservable entities and their admiration for the empirical success of theories such as Einstein's general theory of relativity.[13]
The more explicitly semantic side of verificationism drew on developments in analytic philosophy. Ludwig Wittgenstein's Tractatus (1921) was read in the 1920s as offering a picture theory of meaning: a proposition is meaningful only insofar as it can represent a possible state of affairs in the world.[14] Members of what would become the Vienna Circle took over this idea in an explicitly empiricist form, treating the "state of affairs" relevant to meaning as something that must in principle be checked in experience.[15]

By the mid-1920s these strands converged in the programme of logical positivism. Around Moritz Schlick in Vienna, philosophers and scientists such as Rudolf Carnap, Hans Hahn, Philipp Frank and Otto Neurath sought to develop a "scientific philosophy" in which philosophical statements would be as clear, testable and intersubjective as those of the empirical sciences.[15] The "verifiability principle" emerged in this context as a proposed criterion of cognitive meaning, intended to underwrite the movement's anti-metaphysical stance and its aspiration to unify the special sciences within a single, naturalistic framework of knowledge.[1][3][5]
In Logical Syntax of Language (1934), Rudolf Carnap built on earlier work by Gottlob Frege to develop a formal notion of analyticity that defined mathematics and logic as analytic truths, rendering them compatible with verificationism despite their status as non-empirical truths.[16][17] Outside the German-speaking world, verificationism reached a wider audience above all through A. J. Ayer's Language, Truth and Logic (1936). Drawing on a period of study in Vienna, Ayer presented the verification principle as the central thesis of logical positivism,[4][18] and his book effectively became a manifesto for the movement in the English-speaking world, even though its specific formulation of the criterion soon came under pressure from critics and from later work by Carnap and others.[3][5][19]
For members of the Vienna Circle and Berlin Circle, the proposal of a verifiability criterion was attractive for several reasons. By asking, for any disputed sentence, what observations would count for or against it, verificationists hoped to dissolve many traditional philosophical problems as products of linguistic confusion, while preserving, and clarifying, the empirical content of scientific theories.[2][3][5] It seemed to offer a precise way of separating genuine questions from pseudo-questions, to explain why many long-standing disputes in metaphysics appeared irresolvable, and to vindicate the special status of the rapidly advancing natural sciences by taking scientific testability as the model for all serious inquiry.[5][3] The verification principle thus functioned as a kind of intellectual hygiene: sentences that could not, even in principle, be checked against experience were to be diagnosed as cognitively empty rather than as mysteriously profound. The programme appeared to combine respect for the successes of modern science, the new tools of formal logic inspired by Frege and Wittgenstein, and an appealingly deflationary attitude towards grand metaphysical systems.

A much-discussed illustration of this attitude is Rudolf Carnap's critique of Martin Heidegger. In his essay "Überwindung der Metaphysik durch logische Analyse der Sprache" ("Overcoming Metaphysics through the Logical Analysis of Language"), Carnap singled out Heidegger's claim that "Nothingness nothings" (German: das Nichts nichtet) from the 1929 lecture Was ist Metaphysik? (What Is Metaphysics?) as a paradigm of a metaphysical pseudo-sentence.[8][9] Although grammatically well-formed, Carnap argued, such a sentence yields no testable consequences and cannot, even in principle, be confirmed or disconfirmed by experience; on a verificationist view it therefore fails to state any fact at all and belongs, at best, to poetry or the expression of mood rather than to cognitive discourse.
According to the verification principle, a declarative sentence counts as cognitively meaningful only if it is either verifiable by experience or an analytic truth (i.e. true by virtue of its logical form or the meanings of its constituent terms).[2][5][3] The principle was typically expressed as a criterion of cognitive meaning or empiricist criterion of cognitive significance, reflecting its purpose of demarcating meaningful from meaningless language: ultimately, to exclude "nonsense" while accommodating the needs of empirical science. The earliest formulations equated cognitive meaning with strong or conclusive verification. On that reading, a non-analytic statement is meaningful only if its truth can be logically deduced from a finite set of observation sentences, which report the presence (or absence) of observable properties of concrete objects.[20][2]
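Schematically (this rendering is an illustration, not a formula from the primary texts), the strong criterion requires that a non-analytic sentence S be logically entailed by the conjunction of some finite set of observation sentences O_1, ..., O_n:

\[ O_1 \wedge O_2 \wedge \cdots \wedge O_n \models S . \]

A universal generalization such as \( \forall x \,(Mx \rightarrow Ex) \) ("all metals expand when heated") fails this test, since no finite conjunction of particular observation reports entails a claim about every object.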
Members of the Vienna Circle quickly recognised that the requirement of strong verification was too restrictive. Universal generalizations, such as scientific laws, cannot be derived from any finite set of observations, so a strict reading of the principle would render vital domains of empirical science cognitively meaningless.[21] Difficulties also arose over how non-observational (dispositional and theoretical) language should be reconciled with verificationism. These shortcomings prompted a sustained program to refine the criterion of meaning.
Divisions emerged between "conservative" and "liberal" wings of the Circle regarding the corrective approach. Moritz Schlick and Friedrich Waismann defended a strict verificationism, exploring methods to reinterpret universal statements as rule-like tautologies, so that they would not conflict with the original criterion.[22] Rudolf Carnap, Otto Neurath, Hans Hahn and Philipp Frank advocated a "liberalization of empiricism", proposing that the criterion be rendered more permissive.[21] Neurath advanced a physicalist and coherentist approach to scientific language, in which even basic protocol sentences—traditionally considered an infallible experiential foundation—would be subject to revision.[15][23]

A. J. Ayer's Language, Truth and Logic (1936; 2nd ed. 1946) responded to these difficulties by weakening the requirement on empirical sentences. Ayer distinguished between strong and weak verification, and between practical and in-principle verifiability, allowing a statement to be meaningful so long as experience could in some way count for or against it, even if not conclusively.[4] He later reformulated the criterion in terms of empirical import: a non-analytic sentence has cognitive meaning only if, together with some set of auxiliary premises, it entails an "experiential proposition" (observation sentence) that cannot be derived from the auxiliary premises alone. Correspondingly, he distinguished statements that are directly and indirectly verifiable.[4][2]
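In schematic form (again an illustration rather than Ayer's own symbolism), a non-analytic sentence S has empirical import just in case there are auxiliary premises A_1, ..., A_n and an observation sentence O such that

\[ S \wedge A_1 \wedge \cdots \wedge A_n \models O \quad\text{while}\quad A_1 \wedge \cdots \wedge A_n \not\models O . \]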
Carl Hempel and other critics were quick to respond that, unless carefully constrained, Ayer's proposal would trivialise the distinction between meaningful and meaningless statements, since any sentence, or its negation, can be connected with some observational consequence if one is free to introduce auxiliary assumptions. Thus, any "nonsensical" expression can be made meaningful by embedding it in a larger sentence that itself satisfies the criterion of meaning.[20][2][24] In response, Ayer imposed a recursive restriction—allowing only analytic, directly verifiable, or already indirectly verifiable statements as auxiliaries—to avoid outright triviality. However, Hempel argued that the restricted criterion still renders almost any sentence meaningful, since complex sentences can "smuggle in" meaningless expressions.[2][25]
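A standard textbook reconstruction of the original (pre-restriction) difficulty runs as follows. Let N be any sentence whatever and O any observation sentence, and take as the sole auxiliary premise the conditional \( N \rightarrow O \). Then

\[ N \wedge (N \rightarrow O) \models O \quad\text{while}\quad (N \rightarrow O) \not\models O , \]

so N qualifies as indirectly verifiable under the unrestricted criterion, regardless of its content.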
Rudolf Carnap's work in the 1930s and 1940s supplied many of the most influential revisions. In his papers Testability and Meaning (1936–37) and subsequent work on theoretical terms, Carnap abandoned strict verification in favour of various confirmation-based and translatability-based criteria.[26][27] Carnap proposed that a sentence is cognitively meaningful if it could be translated (connected, by chains of definition, reduction sentences or inductive support) into an agreed "observation language", whose non-logical vocabulary is restricted to observation predicates and to expressions definable from them by purely logical means.[26][5]
Because many scientific terms are dispositional or theoretical (for example, "soluble", "magnetic", "gravitational field"), Carnap introduced reduction sentences that relate such terms to observational conditions and responses, treating them as only partially defined outside their "test conditions".[26][27] Hempel and other critics objected that this strategy either leaves the relevant vocabulary undefined in many ordinary cases (when test conditions do not obtain) or, if multiple reduction sentences are used, makes substantive empirical generalizations follow analytically from the rules introducing the new terms.[2][28] More generally, the choice of a particular "observation language" as the reference language for translatability has been criticized as ad hoc unless it can be shown, on independent grounds, to capture exactly the verifiable sentences; without such a justification, the criterion threatens simply to build the positivists' anti-metaphysical verdict into the choice of language itself.[5][29][3]
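The stock example from Testability and Meaning concerns "soluble". Writing Wx for "x is placed in water", Dx for "x dissolves" and Sx for "x is soluble", a bilateral reduction sentence takes the form

\[ \forall x \, \big( Wx \rightarrow ( Sx \leftrightarrow Dx ) \big) . \]

When the test condition Wx does not obtain, the conditional is vacuously true and says nothing about whether Sx holds; this is the sense in which the term remains only partially defined outside its test conditions.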
Carnap later sought a switch to confirmation.[21] His confirmability criterion would not require conclusive verification, thereby accommodating universal generalizations, but would instead allow partial testability to establish degrees of confirmation on a probabilistic basis. Despite deploying extensive logical and mathematical tools for this purpose, Carnap never succeeded in finalising his thesis: in all of his formulations, a universal law's degree of confirmation was zero.[30]
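The underlying difficulty can be sketched as follows (a simplified gloss rather than Carnap's own derivation). If the degree of confirmation of a hypothesis h on evidence e is a conditional probability,

\[ c(h, e) = \frac{P(h \wedge e)}{P(e)} , \]

then for a universal law h ranging over an infinite domain of individuals, Carnap's measures assign \( P(h) = 0 \); and since \( P(h \wedge e) \le P(h) \), no finite body of evidence e can raise \( c(h, e) \) above zero.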
Verificationist accounts of meaning presuppose some class of basic sentences that provide the experiential input for testing more complex claims. Within logical empiricism these are often called observation sentences, understood (following Carl Gustav Hempel) as sentences that ascribe or deny an observable characteristic to one or more specifically named macroscopic objects; such sentences were taken to form the empirical basis for criteria of cognitive meaning.[20][31]
Debate within the Vienna Circle quickly revealed that the notion of an empirical basis was itself contentious. In the so-called protocol sentence (Protokollsatz) debate, members disagreed over whether basic statements should be formulated in a phenomenalist or a physicalist idiom.[15] Phenomenalist proposals, associated with the early Rudolf Carnap and especially Moritz Schlick, treated the basis as first-person, present-tense reports of immediate experience – Schlick's Konstatierungen (affirmations), such as "Here now red", which were supposed to be incorrigible and theory-free data of consciousness.[32][33] Such a basis promised epistemic certainty but sat uneasily with the verificationists' scientific ambitions, since private experiences are not straightforwardly shareable or checkable among different observers.

By contrast, Otto Neurath argued for a thoroughly physicalist basis. His protocol sentences describe publicly observable events in a third-person, physical language, typically including explicit reference to an observer, time and place (for example, "Otto's protocol at 3:17…").[34][35] Neurath rejected any class of sentences as absolutely certain: even protocol sentences are embedded in a holistic network of beliefs and remain revisable in light of further experience, a view he illustrated with the image of sailors rebuilding their ship at sea. The protocol-sentence debate thus pushed many logical empiricists towards an explicitly fallibilist conception of the empirical basis, abandoning the idea of an infallible foundation for verification.
Carnap's later work on Testability and Meaning made the conventional and pragmatic dimension of the choice of an empirical basis explicit. He argued that the rules of an empiricist language are chosen, within broad constraints, for their usefulness; different choices of basic vocabulary yield different, but equally legitimate, "frameworks".[26][36] Nevertheless, Carnap recommended that the primitive predicates of the "observation language" be drawn from intersubjectively observable thing-predicates ("red cube", "meter reading 3", and so on), since these offer a shared physicalist basis for testing and confirming hypotheses.[37] On this liberalized view, verificationist criteria of meaning are always relative to a chosen empirical basis, typically a fallible but intersubjective class of observation sentences rather than an infallible foundation of private experience.[38]
Friends and critics of verificationism have long noted that a general "criterion of cognitive meaning" is itself neither analytic nor empirically verifiable, and so appears to fall foul of its own requirement. Logical empiricists typically replied that the verification principle was not meant as a factual thesis about language, but as part of an explication of a vague pre-theoretic notion such as "cognitively meaningful sentence" or "intelligible assertion". Hempel, for example, describes the empiricist criterion as "a clarification and explication of the idea of a sentence which makes an intelligible assertion" and stresses that it is "a linguistic proposal" for which adequacy rather than truth or falsity is at issue.[20][2][39] In a similar spirit, A. J. Ayer later wrote that the verification principle in Language, Truth and Logic "is to be regarded, not as an empirical hypothesis, but as a definition", and Hans Reichenbach characterised the verifiability requirement as a stipulation governing the use of "meaning".[4][19][40]

Rudolf Carnap systematised this stance in his more general methodology of explication. In Logical Foundations of Probability he defines explication as the process of replacing "an inexact prescientific concept" (the explicandum) by a new, exact concept (the explicatum) which must, among other things, be sufficiently similar to the explicandum, more precise, fruitful for the formulation of systematic theories, and as simple as possible.[41][42] Explications and the linguistic frameworks in which they are embedded are not themselves true or false; instead they are to be judged by these "conditions of adequacy". Within this framework, a verificationist criterion of meaning becomes an explicatum for ordinary, somewhat indeterminate notions like "factual content" or "genuine assertion".
On such an explicative reading, competing criteria of cognitive meaning are proposals for regimenting scientific and everyday discourse so that logical relations to observation and to other sentences are made more explicit; they are evaluated by how well they capture intuitive judgements about meaningfulness, how precisely they can be stated, and how useful they are for organising scientific theories.[2][43][5] Historians and sympathetic "post-positivist" authors have therefore tended to interpret verificationism as a paradigm case of Carnapian explication or conceptual engineering.[43][44]
Interpreting the verification principle as an explication also reframes certain traditional objections. If the principle is not itself a factual statement, it cannot straightforwardly be criticised as "self-refuting" on the grounds that it fails its own test of verifiability. The central questions then concern whether a given criterion of meaning satisfies the Carnapian requirements of similarity, precision, fruitfulness and simplicity, and whether some rival explication might better serve the aims of empirical inquiry and philosophical clarification.[20][41][42]
By the 1950s, attempts to define a precise criterion of meaning were increasingly seen as problematic. Verificationism was subjected to sustained criticism from both friends and opponents of logical positivism, and by the late 1960s few philosophers regarded it as a tenable, exceptionless criterion, even when they continued to endorse more modest links between meaning, justification and experience.[2][3][5]
Objections focused on the alleged self-refuting character of the principle, its apparent mismatch with scientific practice, its dependence on controversial distinctions such as the analytic–synthetic divide, and worries about the holism and theory-ladenness of empirical testing. Furthermore, critics argued that the verificationists' criterion would incorrectly exclude large parts of mathematics, modality, moral philosophy and ordinary discourse from the realm of cognitively significant talk, and that it begs the question against views which treat such statements as meaningful.[45][46]
Carl Gustav Hempel, a leading figure in the movement, examined successive attempts to refine the criterion of cognitive meaning and determined that none were satisfactory. In "Problems and Changes in the Empiricist Criterion of Meaning" (1950) and "Empiricist Criteria of Cognitive Significance: Problems and Changes" (1965) he reconstructed a series of proposals—strict verifiability, practical vs. in-principle testability, A. J. Ayer's requirement of "empirical import" and various translatability conditions linking theoretical to observational vocabulary.[20][2][47]
Hempel argued that each proposal either excluded large swathes of accepted science or failed to rule out sentences the positivists regarded as "nonsense".[20][2][39] On the one hand, criteria based on conclusive verification are too strict: they would declare universal laws and many dispositional or theoretical statements meaningless, since such sentences cannot be deduced from any finite set of observation reports.[20][2] On the other hand, more liberal criteria threaten to be too permissive. A purely falsificationist criterion excludes existential claims and many mixed-quantifier statements (for example, "for every substance there is a solvent") from qualifying as meaningful.[2][39] Hempel proposed instead that cognitive meaning comes in degrees, depending not only on a statement's logical relations to observation but also on the role a statement plays within a broader theoretical network.[2][5] Historians of philosophy consider Hempel's critique, together with subsequent discourse by Quine, Popper and Kuhn, to signify the abandonment of the verificationist program.[3][5][6]
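These verdicts turn on quantifier structure. Schematically (an illustrative rendering, assuming an unrestricted domain of objects):

\[ Fa \models \exists x\,Fx \qquad\text{and}\qquad \neg Fa \models \neg\,\forall x\,Fx , \]

whereas no finite set of observation reports entails \( \forall x\,Fx \), and none refutes \( \exists x\,Fx \). A mixed-quantifier sentence such as \( \forall x\,\exists y\,Rxy \) ("for every substance there is a solvent") inherits both problems and is neither conclusively verifiable nor conclusively falsifiable by finite observation.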

Philosopher Karl Popper, a contemporary critic working in Vienna but not a member of the Vienna Circle, argued that the verifiability principle suffers from several fundamental defects.[48][14][49] First, if meaningful empirical sentences must be conclusively verifiable, then universal generalizations of the sort employed in scientific laws (for example, "all metals expand when heated") would be meaningless, since no finite set of observations can logically entail such universals. Second, purely existential claims such as "there is at least one unicorn" qualify as empirically meaningful under the verification principle, even though in practice it may be impossible to show them false. Third, the verification principle itself appears neither analytic nor empirically verifiable; taken as a factual claim, it therefore seems to count as meaningless by its own standard, rendering the doctrine self-defeating.[48][14]
On the basis of these concerns, Popper rejected verifiability as a criterion of meaning and proposed falsifiability instead as a criterion for the demarcation of scientific from non-scientific statements.[48][50][49] On his view, scientific theories are characteristically universal, risk-bearing conjectures that can never be verified but may be refuted by experience; what marks a hypothesis as scientific is that it rules out certain possible observations. Verificationism, Popper argued, misconstrues the logic of scientific method by tying meaningfulness to possibilities of confirmation rather than to the capacity for severe tests and potential refutation. He also maintained that metaphysical ideas, though not empirically testable, can be meaningful and may perform a productive heuristic role in the development of scientific theories.[14][48]

Verificationism presupposed a clear demarcation between analytic and synthetic truths. This served as a vital theoretical foundation by which logic and mathematics could be defined as analytic, and therefore cognitively meaningful under the verifiability criterion, despite their non-empirical status. In his paper "Two Dogmas of Empiricism", Willard Van Orman Quine challenged the analytic–synthetic distinction, arguing that all attempts to define analyticity—in terms of meaning, synonymy, logical truth or explicit definition—are ultimately question-begging.[6] If the very conception of analyticity is untenable, the verificationist strategy of rescuing mathematics and logic as meaningful knowledge becomes groundless, jeopardising the broader logical positivist project.
In the same essay, Quine also attacked the verificationist presumption that meaning can be assigned to statements individually, by correlating each one with a determinate set of verification or falsification conditions.[6] Instead, he portrayed statements about the world as forming a "web of belief" that faces the "tribunal of experience" only as a corporate body; when predictions fail, any part of the web—including logical principles, mathematical assumptions or supposedly observational statements—can in principle be revised.[6]
His ideas built upon work by the physicist–philosopher Pierre Duhem, who argued that experiments in physics never test a single hypothesis in isolation, but only a whole "theoretical scaffolding" of assumptions, including auxiliary hypotheses about instruments, background theories and ceteris paribus clauses.[51][52] Because a recalcitrant observation can always be accommodated by adjusting some part of this network, Duhem concluded that there are no strictly "crucial experiments" in physics which decisively verify one hypothesis and falsify its rival. The resulting Duhem–Quine thesis or confirmation holism holds that empirical tests always involve a bundle of hypotheses rather than isolated sentences.[53]
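Schematically (an illustration of the thesis rather than Duhem's or Quine's own notation), if a hypothesis H yields an observable prediction O only together with auxiliaries A_1, ..., A_n, then a failed prediction refutes only the conjunction:

\[ (H \wedge A_1 \wedge \cdots \wedge A_n) \models O , \qquad \neg O \models \neg\,(H \wedge A_1 \wedge \cdots \wedge A_n) , \]

leaving open whether H itself or one of the auxiliaries should be revised.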
From a different direction, philosophers of science challenged verificationist presuppositions of a stable, theory-neutral observation language. If observation itself is permeated by theory and vulnerable to radical, historic paradigm shifts, the verificationist project of fixing meanings by reference to a timeless, privileged observation language is imperilled.[54][55]
Norwood Russell Hanson emphasised the theory-ladenness of observation, arguing that what scientists "see" in an experimental situation is shaped by the conceptual frameworks they bring to it: the same visual stimulus may be described as a "flare in the cloud chamber" or as "the track of an alpha particle" depending on one's theoretical commitments.[54][56] Thomas Kuhn's The Structure of Scientific Revolutions (1962) further suggested that periods of "normal science" are governed by shared paradigms that structure problems, standards of evidence and even the classification of phenomena; during scientific revolutions, these paradigms may be replaced by ones that are partially incommensurable.[57][58]
In 1967, John Passmore, a leading historian of twentieth-century philosophy, famously remarked that "logical positivism is dead, or as dead as a philosophical movement ever becomes".[7] This verdict is often taken to mark the end of logical positivism as a self-conscious school, and with it the abandonment of classical verificationism as a strict criterion of meaning.[3][49] In many standard narratives, the decline of verificationism is intertwined with the rise of various forms of postpositivism, in which Karl Popper's falsificationism, historically oriented accounts of scientific change and more pluralist views of scientific method displace the earlier search for a single verificationist test of meaningfulness.[50][49]
Even some of verificationism's most prominent advocates later distanced themselves from its more uncompromising claims. In a 1976 television interview, A. J. Ayer—whose Language, Truth and Logic had helped to popularise logical positivism in the English-speaking world—commented that "nearly all of it was false", while insisting that he continued to endorse "the same general approach" of empiricism and reductionism, according to which mental phenomena are to be understood in physical terms and philosophical questions are resolved by attention to language and logical analysis.[7][19]
"The verification principle is seldom mentioned and when it is mentioned it is usually scorned; it continues, however, to be put to work. The attitude of many philosophers reminds me of the relationship between Pip and Magwitch inDickens'sGreat Expectations. They have lived on the money, but are ashamed to acknowledge its source."[3]
In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived as intractable problems in both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (a common misunderstanding),[50] but as a criterion to demarcate scientific statements from non-scientific statements.[14] Notably, the falsifiability criterion allows scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until proven false by observation, whereas under verificationism they would be disqualified immediately as meaningless.[14]
In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Considering the general theory of relativity and its predicted effects on gravitational lensing, Popper found that Einstein's theories carried a significantly greater predictive risk than Freud's of being falsified by observation. Though Freud found ample confirmation of his theories in observations, Popper noted that this method of justification was vulnerable to confirmation bias, leading in some cases to contradictory outcomes. He therefore concluded that predictive risk, or falsifiability, should serve as the criterion to demarcate the boundaries of science.[59] Popper referred to "degrees of testability", proposing that some hypotheses expose themselves to potential refutation more boldly than others.[14] Contemporary commentators note that this parallels verificationist discourse concerning graded or comparative notions of cognitive meaning raised by Hempel, Carnap and others.[49][3]
Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science,[48] it was enthusiastically adopted by scientists.[49] Logical positivists, too, adopted the criterion even as their movement ran its course, and Popper, initially a contentious misfit, came to be credited with carrying the richest philosophy out of interwar Vienna.[50]
Although the logical positivists' attempt to state a precise, once-and-for-all verifiability criterion of meaning is now generally regarded as untenable, a number of later philosophers have developed weaker, "post-positivist" forms of verificationism that retain a tight connection between meaning, truth and warranted assertion. Cheryl Misak's historical study Verificationism: Its History and Prospects traces both the rise and fall of classical verificationism and its re-emergence in more flexible guises, arguing that suitably liberalised verificationist ideas remain philosophically fruitful.[3][43]

In the philosophy of language and logic, Michael Dummett developed an influential form of semantic anti-realism that begins from the thought that understanding a statement involves grasping what would count as its correct verification or refutation.[60] On this "justificationist" view, the meaning of a sentence is tied to the conditions under which speakers are in a position to recognise it as warranted, and Dummett uses this to motivate anti-realist treatments of mathematical discourse and of some statements about the past, together with revisions of classical logic.[60] Crispin Wright, drawing extensively on Dummett, has explored epistemically constrained conceptions of truth and proposed the notion of superassertibility—roughly, a status a statement would possess if it could be justified by some body of information that is in principle extendable without undermining that justification—as a candidate truth predicate for certain discourses.[61][62] Both Dummett and Wright thus preserve a verificationist link between meaning and warranted use while giving up the positivists' sharp dichotomy between meaningful science and meaningless metaphysics.
In the philosophy of science, Bas van Fraassen's constructive empiricism has often been described as verificationist in spirit, even though it abandons any explicit verifiability criterion of meaning.[63] Van Fraassen distinguishes belief in the literal truth of a theory from acceptance of it as empirically adequate, requiring only that accepted theories get the observable phenomena right while remaining agnostic about their claims concerning unobservables.[64] Misak and others have suggested that this emphasis on observable consequences and on the role of empirical data in theory choice continues the verificationist impulse in a more modest, methodological form.[43]

Other late twentieth-century writers have proposed explicitly "post-verificationist" approaches that reject the positivists' austere criterion of meaning but retain a close tie between meaning, justification and experiential or inferential capacities. Christopher Peacocke has argued that many concepts are to be understood via possession conditions which specify what discriminations, recognitional capacities or patterns of inference a thinker must be able to deploy in order to count as grasping the concept, a project he presents as a successor to earlier verificationist accounts of meaning.[65][66] David Wiggins has defended a form of "conceptual realism" and has argued that truth is appropriately connected with what would be accepted under conditions of ideal reflection and convergence in judgement, a stance that many commentators, including Misak, interpret as containing important verificationist elements.[67][68][69]
Misak herself not only provides a historical reconstruction of verificationism but also defends a neo-pragmatist version inspired by Charles Sanders Peirce. In Truth and the End of Inquiry she develops a Peircean conception on which truth is the ideal limit of inquiry—what would be agreed upon at the hypothetical end of investigation by suitably situated and responsive inquirers—so that truth is tightly bound to what could in principle be justified by experience and argument.[70] On this view, verificationism survives as a normative constraint linking the content of our statements to the kinds of evidence and justificatory practices that would speak for or against them, a constraint that Misak also finds echoed in parts of contemporary feminist philosophy, the later work of Richard Rorty and other strands of post-positivist thought.[43][70]
Recent work by Hannes Leitgeb uses probability theory to propose a verificationist criterion on which a sentence A is meaningful if, and only if, there is evidence B such that P(B∣A) ≠ P(B).[71]
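In standard probabilistic terms (a gloss on the criterion, not Leitgeb's own exposition), the condition requires that some evidence statement B be probabilistically relevant to A; for \( P(A) > 0 \) this is equivalent to A and B failing to be statistically independent:

\[ P(B \mid A) \neq P(B) \quad\Longleftrightarrow\quad P(A \wedge B) \neq P(A)\,P(B) . \]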