Perhaps the oldest and best understood way of representing inductive support is in terms of probability and the equivalent notion of odds. Mathematicians have studied probability for over 350 years, but the concept is certainly much older. In recent times a number of other representations of partial belief and uncertain inference have emerged. Some of these approaches have found useful application in computer-based artificial intelligence systems that perform inductive inferences in expert domains such as medical diagnosis. Nevertheless, probabilistic representations have predominated in such application domains. So, in this article we focus exclusively on probabilistic representations of inductive support. A brief comparative description of some of the most prominent alternative representations of uncertainty and support-strength can be found in the supplement Some Prominent Approaches to the Representation of Uncertain Inference.
The mathematical study of probability originated with Blaise Pascal and Pierre de Fermat in the mid-17th century. From that time through the early 19th century, as the mathematical theory continued to develop, probability theory was primarily applied to the assessment of risk in games of chance and to drawing simple statistical inferences about characteristics of large populations—e.g., to compute appropriate life insurance premiums based on mortality rates. In the early 19th century Pierre de Laplace made further theoretical advances and showed how to apply probabilistic reasoning to a much wider range of scientific and practical problems. Since that time probability has become an indispensable tool in the sciences, business, and many other areas of modern life.
Throughout the development of probability theory various researchers appear to have thought of it as a kind of logic. But the first extended treatment of probability as an explicit part of logic was George Boole’s The Laws of Thought (1854). John Venn followed two decades later with an alternative empirical frequentist account of probability in The Logic of Chance (1876). Not long after that the whole discipline of logic was transformed by new developments in deductive logic.
In the late 19th and early 20th century Frege, followed by Russell and Whitehead, showed how deductive logic may be represented in the kind of rigorous formal system we now call quantified predicate logic. For the first time logicians had a fully formal deductive logic powerful enough to represent all valid deductive arguments that arise in mathematics and the sciences. In this logic the validity of deductive arguments depends only on the logical structure of the sentences involved. This development in deductive logic spurred some logicians to attempt to apply a similar approach to inductive reasoning. The idea was to extend the deductive entailment relation to a notion of probabilistic entailment for cases where premises provide less than conclusive support for conclusions. These partial entailments are expressed in terms of conditional probabilities, probabilities of the form \(P[C \pmid D] = r\) (read “the probability of C given D is r”), where P is a probability function, C is a conclusion sentence, D is a conjunction of premise sentences, and r is the probabilistic degree of support that premises D provide for conclusion C. Attempts to develop such a logic vary somewhat with regard to the ways in which they attempt to emulate the paradigm of formal deductive logic.
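As a simple illustration (our example, not drawn from the source), suppose the premise D says that a fair six-sided die is about to be rolled, and the conclusion C says that the outcome will be greater than two. Four of the six equally probable outcomes make C true, so D confers on C the partial support

```latex
\[
P[C \pmid D] \;=\; \frac{4}{6} \;=\; \frac{2}{3},
\]
```

a value strictly between 0 (refutation) and 1 (full entailment), which is just what a probabilistic entailment relation is meant to capture.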
Some inductive logicians have tried to follow the deductive paradigm by attempting to specify inductive support probabilities solely in terms of the syntactic structures of premise and conclusion sentences. In deductive logic the syntactic structure of the sentences involved completely determines whether premises logically entail a conclusion. So these inductive logicians have attempted to follow suit. In such a system each sentence confers a syntactically specified degree of support on each of the other sentences of the language. Thus, the inductive probabilities in such a system are logical in the sense that they depend on syntactic structure alone. This kind of conception was articulated to some extent by John Maynard Keynes in his Treatise on Probability (1921). Rudolf Carnap pursued this idea with greater rigor in his Logical Foundations of Probability (1950) and in several subsequent works (e.g., Carnap 1952). (For details of Carnap’s approach see the section on logical probability in the entry on interpretations of the probability calculus, in this Encyclopedia.)
In the inductive logics of Keynes and Carnap, Bayes’ theorem, a straightforward theorem of probability theory, plays a central role in expressing how evidence comes to bear on hypotheses. Bayes’ theorem expresses how the probability of a hypothesis h on the evidence e, \(P[h \pmid e]\), depends on the probability that e should occur if h is true, \(P[e \pmid h]\), and on the probability of hypothesis h prior to taking the evidence into account, \(P[h]\) (called the prior probability of h). So, such approaches might well be called Bayesian logicist inductive logics. Carnap proposed a way to assess the values of both prior probabilities and likelihoods in terms of logical form alone, but his approach only applies to extremely simple languages. Other prominent Bayesian logicist approaches to the development of a probabilistic inductive logic depend on logical considerations that are more subtle than mere logical form. These include the works of Jeffreys (1939), Jaynes (1968), and Rosenkrantz (1981). Jaynes (1968), for instance, articulates a way to assign prior probabilities based on a notion of entropy, which is a measure of information content.
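For concreteness, the relationship just described in words can be written out explicitly. In its simplest form Bayes’ theorem says

```latex
\[
P[h \pmid e] \;=\; \frac{P[e \pmid h] \times P[h]}{P[e]},
\]
```

where \(P[e]\) is the probability of the evidence itself: the posterior probability of h on e goes up with the likelihood \(P[e \pmid h]\) and with the prior probability \(P[h]\), and down with how probable the evidence was in any case.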
It is now widely held that the core idea of this syntactic approach to Bayesian logicism is fatally flawed—that syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions. A crucial facet of the problem faced by syntactic Bayesian logicism involves how the logic is supposed to apply in scientific contexts where the conclusion sentence is some scientific hypothesis or theory, and the premises are evidence claims. The difficulty is that in any probabilistic logic that satisfies the usual axioms for probabilities, the inductive support for a hypothesis must depend in part on its prior probability. This prior probability represents (arguably) how plausible the hypothesis is taken to be on the basis of considerations other than the observational and experimental evidence (e.g., perhaps due to various plausibility arguments). A syntactic Bayesian logicist must tell us how to assign values to these pre-evidential prior probabilities of hypotheses in a way that relies only on the syntactic logical structure of the hypothesis, perhaps based on some measure of syntactic simplicity. There are severe problems with getting this idea to work. Various kinds of examples seem to show that such an approach must assign intuitively quite unreasonable prior probabilities to hypotheses in specific cases. Furthermore, for this idea to apply to the evidential support of real scientific theories, scientists would have to formalize theories in a way that makes their relevant syntactic structures apparent, and then evaluate theories solely on that syntactic basis (together with their syntactic relationships to evidence statements). Are we to evaluate alternative theories of gravitation, and alternative quantum theories, this way? This seems an extremely dubious approach to the evaluation of real scientific hypotheses and theories. Thus, it seems that logical structure alone may not suffice for the inductive evaluation of scientific hypotheses.
At about the time that the Bayesian logicist idea was developing, an alternative conception of probabilistic inductive reasoning was also emerging. This approach is now generally referred to as the Bayesian subjectivist or personalist approach to inductive reasoning (see, e.g., Ramsey 1926; De Finetti 1937; Savage 1954; Edwards, Lindman, & Savage 1963; Jeffrey 1983, 1992; Howson & Urbach 1993; Joyce 1999). This approach treats inductive probability as a measure of an agent’s degree-of-belief that a hypothesis is true, given the truth of the evidence. This approach was originally developed as part of a larger normative theory of belief and action called Bayesian decision theory. The principal idea is that the strength of an agent’s desires for various possible outcomes should combine with her belief-strengths regarding claims about the world to produce optimally rational decisions. Bayesian subjectivists provide a logic of decision that captures this idea, and they attempt to justify this logic by showing that in principle it leads to optimal decisions about which of various risky alternatives should be pursued. On the Bayesian subjectivist or personalist account of inductive probability, inductive probability functions represent the subjective (or personal) belief-strengths of ideally rational agents, the kind of belief strengths that figure into rational decision making. (See the section on subjective probability in the entry on interpretations of the probability calculus, in this Encyclopedia.)
Elements of a logicist conception of inductive logic live on today as part of the general approach called Bayesian inductive logic. However, among philosophers and statisticians the term ‘Bayesian’ is now most closely associated with the subjectivist or personalist account of belief and decision. And the term ‘Bayesian inductive logic’ has come to carry the connotation of a logic that involves purely subjective probabilities. This usage is misleading since, for inductive logics, the Bayesian/non-Bayesian distinction should really turn on whether the logic gives Bayes’ theorem a prominent role, or the approach largely eschews the use of Bayes’ theorem in inductive inferences, as do the classical frequentist approaches to statistical inference. Any inductive logic that employs the same probability functions to represent both the probabilities of evidence claims due to hypotheses and the probabilities of hypotheses due to those evidence claims must be a Bayesian inductive logic in this broader sense; because Bayes’ theorem follows directly from the axioms that each probability function must satisfy, and Bayes’ theorem expresses a necessary connection between the probabilities of evidence claims due to hypotheses and the probabilities of hypotheses due to those evidence claims.
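The claim that Bayes’ theorem follows directly from the axioms can be verified in two short steps. By the product rule for conditional probability, the probability of the conjunction of h and e can be factored in either order; equating the two factorizations and dividing by \(P[e]\) (assumed positive) yields the theorem:

```latex
\[
P[h \pmid e] \times P[e] \;=\; P[h \cdot e] \;=\; P[e \pmid h] \times P[h],
\qquad\text{hence}\qquad
P[h \pmid e] \;=\; \frac{P[e \pmid h] \times P[h]}{P[e]}.
\]
```

So any probability function that satisfies the usual axioms automatically links the probabilities of hypotheses on evidence to the probabilities of evidence on hypotheses, which is the sense in which such a logic is Bayesian regardless of how the probabilities themselves are interpreted.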
In this article the probabilistic inductive logic we examine is a Bayesian inductive logic in this broader sense. This logic does not presuppose the subjectivist Bayesian theory of belief and decision, and avoids the objectionable features of the syntactic version of Bayesian logicism. There are good reasons to distinguish inductive probabilities from degree-of-belief probabilities (e.g., the so-called problem of old evidence) and from purely syntactic logical probabilities. So, the probabilistic logic articulated in this article is presented in a way that depends on neither of these conceptions of what the probability functions are. However, this version of the logic is general enough that it may perhaps be fitted to a Bayesian subjectivist or Bayesian logicist program, if one desires to do that.
There are also classical frequentist approaches to statistical inference, which are largely non-Bayesian. These approaches eschew the use of prior probabilities, viewing them as too subjective to employ in an objective evaluation of scientific claims. So they do not draw on Bayes’ theorem. These classical frequentist approaches were most famously developed by R. A. Fisher (1922) and by Neyman & Pearson (1967). Among the most prominent contemporary treatments of non-Bayesian frequentist inductive logic is the work on error statistics by Mayo and Spanos (see, e.g., Mayo 1996, 1997, and Mayo and Spanos 2006). For a detailed account of frequentist approaches see the entry on Philosophy of Statistics, in this Encyclopedia.
The Stanford Encyclopedia of Philosophy is copyright © 2025 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054