As the name suggests, propositional functions are functions that have propositions as their values. Propositional functions have played an important role in modern logic, from their beginnings in Frege's theory of concepts and their analyses in Russell's works, to their appearance in very general guise in contemporary type theory and categorial grammar.
In this article, I give an historical overview of the use of propositional functions in logical theory and of views about their nature and ontological status.
Before we begin our discussion of propositional functions, it will be helpful to note what came before their introduction. In traditional logic, the role of propositional functions is approximately held by terms: statements such as ‘dogs are mammals’ are treated as postulating a relation between the terms ‘dogs’ and ‘mammals’.
A term is treated either extensionally, as a class of objects, or intensionally, as a set of properties. The ‘intent’ of the term ‘dog’ includes all the properties that are included in the intent of ‘mammal’. The intensional treatment of ‘dogs are mammals’ interprets this sentence as true because the semantic interpretation of the subject is a superset of the interpretation of the predicate. On the extensional treatment, however, the sentence is true because the interpretation of the subject (the class of dogs) is a subset of the interpretation of the predicate (the class of mammals).
These two treatments of the predicate are typical of the two traditions in traditional logic—the intensional and the extensional traditions. Logicians who can be counted among the intensional logicians are Gottfried Leibniz, Johann Lambert, William Hamilton, Stanley Jevons, and Hugh MacColl. Among the extensional logicians are George Boole, Augustus De Morgan, Charles Peirce, and John Venn.
The treatment of terms in the intensional logic tradition might seem strange to modern readers. The intension of a predicate, in 20th-century philosophy, includes only those properties that any competent speaker of a language would associate with that predicate. These properties are not enough to make true ordinary statements like ‘every dog in my house is asleep’. But we can make sense of the intensional view of terms by considering its origins. One of the founders of the intensional logic tradition is Leibniz, who thinks that all truths are grounded in the nature of individuals. The complete concept of an individual contains everything that is true of it. Building on this, we can see that the complete concept of a term will include enough to ground any truth about it as well.
In both the intensional and extensional logic traditions, we see theories of complex terms. In the extensional tradition, disjunctive and conjunctive terms are interpreted by taking the union and intersection of classes. The conjunctive term AB is interpreted as the intersection of the class A and the class B, and the extension of the disjunctive term A+B is understood as the union of the extensions of A and B.
In the intensional tradition, the reverse holds. The term AB is interpreted as the union of the properties in the intent of A and the intent of B, and A+B is interpreted as the intersection of the properties in A and B. This reversal makes sense, since more things fit a smaller number of properties and fewer things fit a larger number of properties.
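The duality between the two traditions can be sketched in a short program. This is only a toy illustration: the example classes and property sets are invented, not drawn from any historical logician.

```python
# Extensional reading: a term denotes a class of objects.
ext_dog = {"Fido", "Zermela"}
ext_black = {"Zermela", "Licorice"}

# Conjunctive term AB = intersection; disjunctive term A+B = union.
ext_black_dog = ext_dog & ext_black
ext_dog_or_black = ext_dog | ext_black

# Intensional reading: a term denotes its set of properties (its 'intent').
int_dog = {"animal", "mammal", "canine"}
int_black = {"colored", "black"}

# The operations reverse: AB pools the properties (union),
# while A+B keeps only the shared properties (intersection).
int_black_dog = int_dog | int_black
int_dog_or_black = int_dog & int_black
```

Note how the conjunctive term shrinks the class but grows the property set, exactly the reversal described above.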
Although some of the logicians working in term logic have very complicated treatments of negation, we can see the origin of the modern conception in the extensional tradition as well. In Boole and most of his followers, the negation of a term is understood as the set-theoretic complement of the class represented by that term. For this reason, the negation of classical propositional logic is often called ‘Boolean negation’.
In Charles Peirce's ‘Logic of Relatives’ (1883), we see a move towards an understanding of terms as functions. One problem with traditional term logic is that it lacks the ability to deal with relations. Peirce's logic of relatives is meant to remedy that. He adds terms to Boolean algebra that represent relations, and gives an extensional interpretation of them. They are not propositional functions in the full sense. Peirce's relatives are ‘common names’ that represent classes of pairs of objects (1883, 328). Thus, the logic of relatives represents a generalization of traditional logic rather than a departure from it.
Peirce extends the algebra of terms to deal with particular features of relations. Like other terms, we can have conjunctive, disjunctive, and negative terms. Where f and g are relatives, fg represents the class of pairs (I, J) such that I bears both f and g to J. Similarly, the disjunctive relative f+g represents (I, J) if I bears either f or g to J, and f′—the negation of the term f—represents the class of pairs (I, J) such that f does not hold between them. Peirce also has a composition operator, ;, such that f;g names (I, J) if there is some entity K such that f names (I, K) and g names (K, J).
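Since Peirce interprets relatives extensionally as classes of pairs, his four operations can be modeled directly with sets of ordered pairs. The relatives f and g and the domain below are invented toy data, used only to exhibit the operations.

```python
# Two relatives, modeled extensionally as sets of ordered pairs.
f = {(1, 2), (2, 3)}
g = {(1, 2), (3, 4)}
domain = {1, 2, 3, 4}

product = f & g          # fg: pairs standing in both f and g
sum_ = f | g             # f+g: pairs standing in f or g
# f': all pairs over the domain between which f does not hold
negation = {(i, j) for i in domain for j in domain} - f

# Composition f;g: (i, j) is in it when some k links i to j.
composition = {(i, j) for (i, k1) in f for (k2, j) in g if k1 == k2}
```

Here the only link is f taking 2 to 3 and g taking 3 to 4, so the composition relates 2 to 4.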
In ‘The Critic of Arguments’ (1892), Peirce adopts a notion that is even closer to that of a propositional function. There he develops the concept of the ‘rhema’. He says the rhema is like a relative term, but it is not a term. It contains a copula; that is, when joined to the correct number of arguments it produces an assertion. For example, ‘__ is bought by __ from __ for __’ is a four-place rhema. Applying it to four objects a, b, c, and d produces the assertion that a is bought by b from c for d (ibid. 420).
One especially interesting point about Peirce's rhema is that he uses the same chemical analogy as Frege does when discussing the relation between relations and their arguments. They both compare relations (and properties) to ‘atoms or radicals with unsaturated bonds’. What exactly this analogy says of relations or properties, in either Frege or Peirce, is somewhat unclear.
See the entry on Peirce's logic for a more complete exposition of his work.
In the work of Giuseppe Peano (1858–1932), we find another important step towards the modern notion of a propositional function. Although his work is not as sophisticated as Frege's (see below), it is important because of its influence, particularly on Bertrand Russell.
In his ‘Principles of Arithmetic Presented by a New Method’ (1889), Peano introduces propositional connectives in the modern sense (an implication, negation, conjunction, disjunction, and a biconditional) and propositional constants (a verum and a falsum).
More important for us is his treatment of quantification. Peano allows propositions to contain variables; that is to say, he utilizes open formulas. He does not give an interpretation of open formulas, and he does not tell us what they represent, but they are used in his theory of quantification. Peano has only a universal quantifier; he does not define an existential quantifier in the ‘Principles’. The quantifier is always attached to a conditional or biconditional. Quantified propositions are always of the form
A ⊃x,y,… B
or
A =x,y,… B
Peano reads ‘A ⊃x,y,… B’ as saying ‘whatever x, y, … may be, from the proposition A one deduces B’, and ‘=’ is Peano's biconditional, which he defines in the usual way from the conditional and conjunction. But he provides us with no more interpretation than that. He refers to variables as ‘indeterminate objects’, but does not discuss what this means or what a proposition (or propositional function) that contains indeterminate objects might be.
In Frege we have a fairly general interpretation of sentences as expressing functions applying to arguments. The view that I explore here is one that he develops in the 1890s.
Consider the sentence
My dog is asleep on the floor.
This sentence, like all linguistic expressions, has both a sense and a referent. Its sense is an abstract object—a thought. Its referent is its truth value (which at the moment is the True). We will discuss Frege's analysis of the thought soon, but right now let us look at the referents of the expressions that make up this sentence.
The expression ‘my dog’, according to Frege, is a singular term. It picks out an object (my dog, Zermela). The expression ‘is asleep on the floor’ refers to a concept. Concepts are functions. In this case, the concept is a function from objects to truth values (which are also objects). So, we can treat the above sentence as representing the concept __ is asleep on the floor as applying to the object my dog.
Frege's concepts are very nearly propositional functions in the modern sense. Frege explicitly recognizes them as functions. Like Peirce's rhema, a concept is unsaturated; concepts are in some sense incomplete. Although Frege never gets beyond the metaphorical in his description of the incompleteness of concepts and other functions, one thing is clear: the distinction between objects and functions is the main division in his metaphysics. There is something special about functions that makes them very different from objects.
Let us consider ‘my dog is asleep on the floor’ again. Frege thinks that this sentence can be analyzed in various different ways. Instead of treating it as expressing the application of __ is asleep on the floor to my dog, we can think of it as expressing the application of the concept
my dog is asleep on __
to the object
the floor
(see Frege 1919). Frege here recognizes what is now a commonplace in the logical analysis of natural language: we can attribute more than one logical form to a single sentence. Let us call this the principle of multiple analyses. Frege does not claim that the principle always holds, but as we shall see, modern type theory does claim this.
The senses of sentences are also the result of applying functions to objects. The sense of ‘my dog’ is an abstract object. The sense of ‘is asleep on the floor’ is a function from individual senses, like that of ‘my dog’, to thoughts (see Frege 1891). The sense of ‘is asleep on the floor’ is a conceptual sense. It would seem that the principle of multiple analyses holds as much for senses as it does for referents. Frege, however, sometimes talks as if the senses of the constituent expressions of a sentence are actually contained somehow in the thought. It is difficult to understand how all such senses could be in the thought if there are different ways in which the sentence can be analyzed into constituent expressions.
In addition to concepts and conceptual senses, Frege holds that there are extensions of concepts. Frege calls the extension of a concept a ‘course of values’. A course of values is determined by the value that the concept has for each of its arguments. Thus, the course of values for the concept __ is a dog records that its value for the argument Zermela is the True and for Socrates is the False, and so on. If two concepts have the same values for every argument, then their courses of values are the same. Thus, courses of values are extensional.
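Frege's picture of a concept as a function from objects to truth values, and of a course of values as its record of outputs, can be sketched as follows. The domain and the two sample concepts are invented for illustration; modeling the True and the False as Python booleans is our own simplification.

```python
# A toy domain of objects.
objects = ["Zermela", "Socrates"]

# Concepts as functions from objects to truth values.
is_a_dog = lambda x: x == "Zermela"
barks = lambda x: x in ("Zermela",)   # happens to be coextensive with is_a_dog

def course_of_values(concept, domain):
    """Record the concept's value for each argument in the domain."""
    return {x: concept(x) for x in domain}
```

Because the two concepts agree on every argument, their courses of values are identical, which is just what it means for courses of values to be extensional.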
For more about Frege's theory of concepts and its relation to his logic, see the entry on Frege's theorem and foundations for arithmetic.
The term ‘propositional function’ appears in print for the first time in Bertrand Russell's Principles of Mathematics (1903). Russell introduces the notion through a discussion of kinds of propositions. Consider propositions of the type that says of something that it is a dog. This is the kind ‘x is a dog’. This kind is a propositional function that takes any object o to the proposition that o is a dog.
In this period, Russell holds that propositions are entities that have individuals, properties, and relations as constituents. The proposition that Socrates is a man has Socrates and the property of being a man as constituents. In complex propositions the relation between propositional function and proposition is less clear. Like Frege, Russell allows a propositional function to be abstracted by omitting any entity from a proposition. Thus, we can view the proposition
if Socrates drinks hemlock he will die
as representing the application of the function
x drinks hemlock ⊃ x will die
to Socrates, or the function
Socrates will drink x ⊃ Socrates will die
to hemlock, and so on. In other words, Russell accepts the principle of multiple analyses.
In the Principles, the quantifier ‘all’ is analyzed as a part of referring phrases that pick out classes (1903, 72). This, we can see, is a hold-over from the 19th-century extensional logicians (see Section 1). But in slightly later works, such as ‘On Denoting’ (1905), propositional functions are said to be constituents of universal propositions. According to this analysis, the proposition expressed by sentences such as ‘All dogs bark’ is made up of the propositional function x is a dog ⊃ x barks and a function (of propositional functions) that is represented by the quantifier phrase ‘all’. Quantified propositions are interesting for us because they contain propositional functions as constituents.
It is unclear whether Russell holds that propositional functions also occur as constituents in singular propositions like if Socrates drinks hemlock he will die. These propositions do contain properties, like dies, and relations, like drinks, but it is controversial whether Russell thinks that these are propositional functions (see Linsky 1999 and Landini 1998).
While writing the Principles of Mathematics, Russell discovered the paradox that now bears his name. Before we get to Russell's paradox, let us discuss the method of diagonalization by which this and many other paradoxes are generated.
The power set of a set S, ℘S, contains all the subsets of S. Georg Cantor (1845–1918) used the method of diagonalization to show that for any set S, ℘S is larger than S.
Here is Cantor's proof. Suppose that ℘S and S are the same size. Then, by the set-theoretic definition of ‘same size’ (more correctly, ‘same cardinality’), there is a one-to-one correspondence between S and ℘S. This means that there is a function that matches up each member of S with a unique member of ℘S so that there are no members of ℘S left over. Let us call this function f. Then, if x is a member of S, f(x) is in ℘S. Now, since ℘S is the power set of S, it may be that x is in f(x) or it may not be in f(x). Let us now define a set C:
C = {x ∈ S : x ∉ f(x)}
Clearly, C is a subset of S, so it is in ℘S. By hypothesis, f is onto—for every member y of ℘S, there is an x ∈ S such that f(x) = y. Thus there must be some c ∈ S such that
f(c) = C
Now, either
c ∈ C
or
c ∉ C.
Suppose that c is in C. Then, by the definition of C, c is not in f(c). That is to say, c ∉ C. But if c is not in C, then c ∉ f(c). So, by the definition of C, c is in C. Thus,
c is in C if and only if c is not in C.
Therefore, the assumption that a set is the same size as its power set leads to a contradiction, and so this assumption must be false.
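The diagonal construction in the proof can be run on a finite example. For any function f from S into ℘S, the set C = {x ∈ S : x ∉ f(x)} is missed by f; the particular S and mapping below are arbitrary invented data.

```python
S = {0, 1, 2}

def diagonal(f, S):
    """Cantor's diagonal set: the members of S not in their own image."""
    return {x for x in S if x not in f(x)}

# An arbitrary attempt to map S into its power set.
mapping = {0: {0, 1}, 1: set(), 2: {0, 2}}

C = diagonal(lambda x: mapping[x], S)
# 0 is in mapping[0], 1 is not in mapping[1], 2 is in mapping[2],
# so C collects exactly 1 -- and C is not the image of any member of S.
```

However the mapping is chosen, the diagonal set differs from f(x) at x itself, so no map from S onto ℘S exists.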
Cantor's theorem has important consequences for the theory of propositional functions. Consider a model for a (first-order) logical language that has a domain D. The variables of the language range over members of D. Now let us add predicate variables to the language. These stand for propositional functions. How are we to interpret them in the model? The standard way of doing so—the way inherited from the extensional logic tradition—is to have predicate variables range over subsets of the domain. A model in which predicate variables range over all subsets of the domain is called a ‘standard model’ for second-order logic. Cantor's theorem tells us that the domain for predicate variables in the standard model is larger than the domain for individual variables. If we have predicates of predicates, then the domain for third-order predicates is even larger. And so on.
Russell's paradox is very closely related to Cantor's theorem. There are two versions of the paradox: (1) the class version; (2) the propositional function version. I only discuss the propositional function version of the paradox.
In his early writings, Russell wants logic to be a universal science. It should allow us to talk about properties of everything. By this he means that the variables in logic should be taken to range over all entities. But propositional functions, at least in the Principles, are entities. So variables should range over them. Now consider the predicate R such that
(∀x)(Rx ≡ ¬xx)
(Russell's predicate R is very similar to Cantor's set C.) If we instantiate this and substitute R for x, we obtain
RR ≡ ¬RR
It seems, then, that the treatment of variables as completely general, together with the liberty to define propositional functions by means of any well-formed formula, enables us to derive a contradiction.
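An untyped programming language permits exactly the self-application that generates the paradox, and transcribing Russell's predicate naively shows what goes wrong: RR would have to satisfy RR ≡ ¬RR, and evaluating it never settles on a truth value. This is only an illustrative analogue, not Russell's own formalism.

```python
# Russell's predicate R, transcribed naively: R(x) iff not x(x).
R = lambda x: not x(x)

# Asking for R(R) demands a value v with v == (not v); the evaluation
# recurses forever instead, and Python aborts it.
try:
    R(R)
    outcome = "returned a truth value"
except RecursionError:
    outcome = "no stable truth value"
```

Simple type theory blocks this at the grammar: R(R) is not even a well-formed expression, since no function can take an argument of its own type.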
Russell blocks the contradiction in the Principles by the introduction of a theory of types. This is a simple theory of types, which distinguishes only between the types of various propositional functions (or, in its class form, of classes). Let us depart from Russell's own exposition of the theory of types in order to give a more rigorous and more modern version of the theory. This will make my presentations of the ramified theory of types and of more modern versions of type theory easier.
We'll use one basic type, i (the type of individuals), and define the types as follows:
The type <t1,…,tn> is the type of a relation among entities of types t1,…,tn. But, for simplicity, we will interpret this as the type of a function that takes these entities to a proposition. (Note that when n = 0, the empty type, < >, is the type for propositions.) This definition incorporates the idea of a well-founded structure. There are no cycles here. We cannot have a function that takes as an argument a function of the same or higher type. Thus, simple type theory bans the sort of self-application that gives rise to Russell's paradox.
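The well-foundedness of the hierarchy can be made vivid by computing an order for each type. The encoding of <t1,…,tn> as a Python tuple, and the order function itself, are our own illustrative choices.

```python
# The base type i, and <t1,...,tn> encoded as a tuple of argument types.
i = "i"

def rel(*args):
    """<t1,...,tn>: a function from entities of these types to a proposition."""
    return tuple(args)

prop = rel()                 # n = 0: the empty type < >, for propositions
pred = rel(i)                # <i>: propositional functions of individuals
pred_of_pred = rel(rel(i))   # <<i>>: predicates of predicates

def order(t):
    """Each type sits strictly above its argument types, so there are no cycles."""
    return 0 if t == i else 1 + max((order(a) for a in t), default=0)
```

Since order(rel(t)) always exceeds order(t), no function can take an argument of its own or a higher type, which is precisely the ban on self-application.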
The type hierarchy corresponds neatly to the hierarchy of domains that we saw in our discussion of Cantor's theorem. A unary predicate has the type <i>; its domain is D—the set of individuals. A unary predicate of predicates has the type <<i>>, and this corresponds to the domain of subsets of D. And so on.
For more, see the entry on Russell's paradox.
After the Principles, however, Russell comes to believe that the simple theory of types is insufficient. The reason has to do with the liar paradox. Suppose that ‘L’ is a name for the proposition:
L is false.
This statement is false if and only if it is true. The problem here has something to do with self-reference, but it cannot be avoided by the simple theory of types alone. For simple types only give us a hierarchy of types of propositional functions. In simple type theory, all propositions have the same type.
The idea behind ramified type theory is to introduce a hierarchy of propositions as well. On this view, propositions and propositional functions have an order. If a propositional function is applied to a proposition of a particular order, then it yields a proposition of a higher order. And every function must have a higher order than its arguments. Thus, we avoid the liar paradox by banning a proposition from occurring within itself. If a proposition p occurs within another proposition, as the argument of a function such as x is false, then the resulting proposition is of a higher order than p.
Unfortunately, Russell never gives a precise formulation of ramified type theory. Perhaps the best formulation is due to Alonzo Church (1976).[1]
Almost at the same time as he adopts the ramified theory of types, Russell abandons propositions. From about 1908 until 1918, although Russell retains the idea that there are true propositions, he denies that there are false ones. When we think about something that is false, say, Zermela is a cat, we are not thinking about a false proposition; rather, the objects of our thought are just Zermela and the property of being a cat. It might seem odd to have a hierarchy especially designed to stratify the propositions and then claim that there are no propositions. Some interpreters, however, have claimed that Russell's denial of the existence of propositions should not be taken seriously and that there are very good reasons to read Principia as being largely a theory of propositions (see Church 1984).
One reason to take the ramified theory of types seriously (even without accepting propositions) is that it can be usefully incorporated into a substitutional theory of quantification. On the substitutional interpretation of the quantifiers, a universally quantified formula such as (∀x)Fx is true if and only if all of its instances Fa1, Fa2, Fa3, … are true. Similarly, (∃x)Fx is true if and only if at least one of its instances is true.
Consider a substitutional interpretation of quantifiers with variables ranging over predicates, as in the formula (∀P)Pa. This formula is true if and only if all of its instances are true. On a simple theory of types, the type of the variable P is <i>, since its arguments are all individuals (or singular terms). But the simple type of the function (∀P)Px is also <i>. So an instance of (∀P)Pa is (∀P)Pa itself. A substitutional interpretation of the quantifiers requires that instances be simpler than the formulas of which they are instances. In this case, all we find out is that a particular formula is true only if it is true. This is uninformative and it seems viciously circular.
To block this sort of circularity, we can turn to the ramified theory of types. On the ramified theory, the propositional function (∀P)Px is of order 2, because of the presence of the quantifier binding a variable of order 1. In this way, the ramified theory forces instances to be simpler (at least in terms of order) than the formulas of which they are instances (see Hazen and Davoren 2000).
After 1905, we see in Russell a parsimonious inclination. He wants to eliminate entities from his ontology. Some time between 1908 and 1910 he begins to deny the existence of propositions, and this denial continues until he develops a theory of propositions as structures of images or words in (1918). What, then, is the fate of propositional functions? It might seem difficult to understand what a propositional function is without the existence of propositions, but Russell's view is not that complicated. Russell only rejects false propositions. He retains facts in his ontology. Propositional functions, in Principia, are what we now call ‘partial functions’. That is to say, they do not always have values. For example, the propositional function __ is a dog does not have a value for the Sydney Opera House taken as an argument, but it does have a value when my dog is taken as its argument. So, the rejection of false propositions does not cause a serious problem for the theory of propositional functions in Russell.
Having dealt with that problem, let us go on to see what Whitehead and Russell think the nature of propositional functions is. In Principia, they say:
By a ‘propositional function’ we mean something which contains a variable x, and expresses a proposition as soon as a value is assigned to x. That is to say, it differs from a proposition solely by the fact that it is ambiguous: it contains a variable of which the value is unassigned. (1910, 38)
In this passage, it seems as though they are saying that a propositional function is an ambiguous proposition. In light of the rejection of propositions, this view is especially hard to understand. Urquhart (2003) says that for Whitehead and Russell, a propositional function is something rather like a formula. This seems right, since propositional functions contain variables.
But what exactly are propositional functions in Principia? This is a matter of heated debate among Russell scholars. Perhaps the most influential interpretation is the constructive interpretation, due to Kurt Gödel (1944). On this interpretation, propositional functions are human constructs of some sort. They depend on our ability to think about them or refer to them. A version of the constructive interpretation can also be found in Linsky (1999). There is also a more nominalist interpretation in Landini (1998). On the realist side are the interpretations given by Alonzo Church (1984) and Warren Goldfarb (1989). Goldfarb thinks that the logical theory of Principia is motivated by Russell's attempt to find the real nature of propositional functions and that this nature is independent of our thinking about it. Goldfarb has a good point, since Russell's logic is supposed to be a perspicuous representation of the way things are. But Russell often seems to deny that propositional functions are real entities.
Jumping ahead some decades, adding possible worlds together with set theory to the logicians' toolbox has provided them with a very powerful and flexible framework for doing semantics.
First, let us recall the modern notion of a function. A function is a set of ordered pairs. If <a, b> is in a function f, this means that the value of f for the argument a is b or, more concisely, f(a) = b. By the mathematical definition of a function, for each argument of a function there is one and only one value. So, if the ordered pair <a, b> is in a function f and so is <a, c>, then b is the same thing as c.
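The defining condition on a set of ordered pairs, that no argument gets two values, can be checked mechanically. This is a minimal sketch; the sample pair-sets are invented.

```python
def is_function(pairs):
    """A set of ordered pairs is a function iff no argument appears twice."""
    args = [a for (a, _) in pairs]
    return len(args) == len(set(args))

good = {("a", 1), ("b", 2)}          # each argument has exactly one value
bad = {("a", 1), ("a", 2)}           # "a" would get two values: not a function
```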
The construction of propositional functions begins with possible worlds and the assumption that there are sets. Let us call the set of possible worlds W. A proposition is a set of possible worlds. The proposition that Zermela barks, for example, is the set of worlds in which Zermela barks. We also need to assume that there is a set I of possible individuals (i.e., the individuals that exist in at least one possible world). We now have all the materials to construct a simple type-theoretic hierarchy of functions.
The usual treatment of the meaning of predicates differs slightly from the one I have described here. Usually, the intension of a predicate is taken to be a function from possible worlds to sets of individuals (or sets of ordered pairs of individuals for binary relations, ordered triples for three-place relations, and so on). Strictly speaking, these functions are not propositional functions because they do not take propositions as values. But for each such function, we can construct an ‘equivalent’ propositional function by using a process called ‘Currying’, after the logician Haskell Curry. Let's start with a function f from worlds to sets of individuals. Then we can construct the corresponding propositional function g as follows. For each world w and individual i, we construct g so that
w is in g(i) if and only if i is in f(w).
So, the more standard treatment of the meanings of predicates is really equivalent to the use of propositional functions.
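The Currying flip can be carried out directly on toy data. The worlds, individuals, and the intension f below are invented; the construction of g follows the biconditional just stated.

```python
worlds = {"w1", "w2"}
individuals = {"Zermela", "Socrates"}

# f: a predicate intension, from worlds to the set of individuals
# satisfying the predicate at that world.
f = {"w1": {"Zermela"}, "w2": {"Zermela", "Socrates"}}

def curry(f, worlds):
    """Flip f into a propositional function: individuals -> sets of worlds."""
    return lambda i: {w for w in worlds if i in f[w]}

g = curry(f, worlds)
# g("Socrates") is the proposition (set of worlds) that Socrates
# satisfies the predicate: just {"w2"} here.
```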
Now that we have a whole hierarchy of propositional functions, we should find some work for them to do. One theory in which propositional functions do good work is Montague semantics, developed in the late 1960s by Richard Montague.
In order to understand Montague's method we need to understand lambda abstraction. For the formula A(x), we read the expression λx[A(x)] as a predicate expression. Its extension (in a given possible world) is the set of things that satisfy the formula A(x). Lambda abstractors are governed by two rules, known as α-conversion and β-reduction:
(α-con) A(a) (a formula with a free for x) can be replaced by λx[A(x)]a.
(β-red) λx[A(x)]a can be replaced by A(a) (where a is free for x in A(x)).
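β-reduction has an exact analogue in any language with first-class functions: applying the abstract λx[A(x)] to an argument a yields the same result as the direct formula A(a). The predicate A below is an arbitrary example.

```python
# An arbitrary formula A(x), modeled as a predicate on numbers.
A = lambda x: x > 0

# λx[A(x)]: the abstraction of that formula.
abstracted = lambda x: A(x)

# β-reduction: λx[A(x)]a and A(a) agree for every argument a.
agree = all(abstracted(a) == A(a) for a in [-2, 0, 5])
```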
Because of the equivalence between a formula A(a) and λx[A(x)]a, one might wonder why we should add lambda abstractors to our language at all. In Montague semantics, the answer has to do with the very direct way in which Montague translates expressions of natural languages into his logical language. We will discuss that soon, but first let us learn a bit about Montague's intensional logic.
Montague adds two other pieces of notation to his language: ∧ and ∨. The expression ∧λx[Fx] represents a function from worlds to sets of individuals: it takes each possible world w to the extension of λx[Fx] at w. The operator ∨ takes expressions of the form ∧λx[Fx] ‘down’ to their extensions at the world in which the expression is being evaluated. For example, the extension of ∨∧λx[Fx] at w is just the same as the extension of λx[Fx] at w.
What is so special about Montague semantics is that it can be used in a very direct way as a semantics for large fragments of natural languages. Consider the following sentence:
Zermela barks.
The meaning of this sentence is understood in Montague semantics as a structure of the meanings of its constituent expressions. Montague represents the meanings of expressions using translation rules. Here we use the following translation rules:
Zermela translates into λP[(∨P)z]
barks translates into ∧B
Now we can construct a formula that gives the meaning of ‘Zermela barks’:
λP[(∨P)z](∧B)
Notice that in constructing the formula we place the expressions in the same order in which they occur in English. The use of lambda abstracts allows us to reverse the order of two expressions from the way in which they would appear in ordinary statements of a formal logical language (one that does not have lambdas). Now we can use β-reduction to obtain:
(∨∧B)z
And now we apply Montague's rule to eliminate ∨∧:
Bz
In this process we start with an expression that has the same order of expressions as the original English sentence and then reduce it to a very standard formula of logic. This tells us that the truth condition of the sentence ‘Zermela barks’ is the set of worlds that is the proposition expressed by Bz. Of course we knew that independently of Montague's work, but the point is that the Montague reduction shows us how to connect the surface grammar of English sentences to the formulas of our logical language. The formula of standard logic, moreover, displays its truth conditions in a very perspicuous way. So, the Montague reduction shows us the connection between sentences of natural languages and their truth conditions.
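The reduction just described can be mimicked with higher-order functions. This is a deliberately tiny model: the two worlds, the facts about who barks, and the restriction to a one-individual domain are all invented simplifications, not Montague's actual system.

```python
worlds = {"w1", "w2"}

# ∧B: the intension of 'barks', taking an individual to a proposition
# (a set of worlds). In this toy model, only z barks, and only at w1.
B = lambda x: {"w1"} if x == "z" else set()

def down(intension, w):
    """∨ at a world w: the extension of a predicate intension at w."""
    return {x for x in ["z"] if w in intension(x)}

# Zermela translates into λP[(∨P)z]: take a predicate intension P,
# and ask whether z falls in its extension at the world of evaluation.
zermela = lambda P: lambda w: "z" in down(P, w)

# λP[(∨P)z](∧B): apply the subject's meaning to the verb's intension.
sentence = zermela(B)
```

After the β-reduction and the ∨∧ elimination, the set of worlds where the sentence holds is exactly the proposition B(z), mirroring the reduction to Bz.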
Categorial grammars were first constructed in the 1930s by Kazimierz Ajdukiewicz (1890–1963), and developed by Yehoshua Bar-Hillel (1915–1975) and Joachim Lambek (1922–2014) in the 1950s and 1960s. Categorial grammars are logical tools for representing the syntax of languages.
In categorial grammar, the syntax of languages is represented using a different sort of generalization of functional notation than in Montague semantics. In Montague semantics, the lambda abstractor is used to move the meaning of an expression to the location that the expression occupies in a sentence. In categorial grammar, predicates and many other sorts of expressions are taken to be functions of sorts. But there is a distinction in categorial grammar between two sorts of application of a function to its arguments.
Let's see how this works. Let's start with the primitive types CN (common noun) and NP (noun phrase). The indefinite article ‘a’ takes a common noun (on its right) and returns an NP. So it has the type NP/CN. The common noun ‘dog’, of course, has the type CN. We write ‘A has the type T’ as ‘A ⊢ T’. So we have
a ⊢ NP/CN
and
dog ⊢ CN
In order to put these two sequents together, we can use a form of the rule modus ponens, which says that from a sequent X ⊢ A/B and a sequent Y ⊢ B, we can derive the sequent X.Y ⊢ A. We can use this rule to derive:
a.dog ⊢ NP
Moreover, an intransitive verb has the type NP\S, where S is the type of sentences. The backslash in NP\S means that the expression takes an argument of type NP on the left side and returns an expression of type S. The verb ‘barks’ is intransitive, that is,
barks ⊢ NP\S
The version of modus ponens that we use with the backslash is slightly different. It tells us that from X ⊢ A\B and Y ⊢ A we can derive Y.X ⊢ B. So we now can obtain
(a.dog).barks ⊢ S
This says that ‘a dog barks’ is a sentence.
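The two application rules can be implemented in a few lines, reproducing the derivation of ‘a dog barks ⊢ S’ from the lexicon above. The encoding of slash types as tuples is our own choice, not a standard library.

```python
# Primitive categories, and a lexicon encoding A/B as ('/', A, B)
# and A\B as ('\\', A, B).
NP, CN, S = "NP", "CN", "S"
lexicon = {"a": ("/", NP, CN), "dog": CN, "barks": ("\\", NP, S)}

def apply_right(fn, arg):
    """From X ⊢ A/B and Y ⊢ B, derive X.Y ⊢ A (argument on the right)."""
    assert fn[0] == "/" and fn[2] == arg
    return fn[1]

def apply_left(arg, fn):
    """From Y ⊢ A and X ⊢ A\\B, derive Y.X ⊢ B (argument on the left)."""
    assert fn[0] == "\\" and fn[1] == arg
    return fn[2]

a_dog = apply_right(lexicon["a"], lexicon["dog"])   # a.dog ⊢ NP
derived = apply_left(a_dog, lexicon["barks"])       # (a.dog).barks ⊢ S
```

The two rules differ only in which side the argument sits on, which is exactly the extra structure categorial grammar adds to the notion of a function.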
The logics used to describe grammars in this way are substructural logics.
What is of interest to us here is that in categorial grammars, determiners such as ‘a’ and verbs are thought of as functions, but they can differ from one another in terms of whether they take arguments on their right or on their left. In the set-theoretic concept of a function as a set of ordered pairs, functions are thought of just in terms of their correlating arguments with values. A function, as it is understood in categorial grammar, has more structure than this. This is an interesting generalization of the notion of a function as it is used in logic. We can see that it also has important links to the concept of a propositional function, especially as it is used in Montague semantics.
In categorial grammar we can attribute more than one type to a single expression in a language. Let us call this the principle of multiple types. Here is an example due to Mark Steedman. Consider the sentence
I dislike, and Mary enjoys musicals.
The transitive verbs ‘dislike’ and ‘enjoys’ have the type (NP\S)/NP; that is, they take a noun phrase on their right and return a verb phrase. But in the case of ‘I dislike, and Mary enjoys musicals’, the verbs are separated from their objects. Steedman deals with this by raising the type of the subjects ‘I’ and ‘Mary’. Usually, we treat these words as having the type NP, but here they have the type S/(NP\S). This is the type of an expression that takes a verb phrase on its right and returns a sentence. Steedman then uses a rule that composes these function types and derives that ‘I.dislike’ has the type S/NP, which takes a noun phrase (such as ‘musicals’) on its right and returns a sentence.
We can see that the principle of multiple types also holds if we analyze sentences in other type theories, such as the simple theory of types. For consider the sentence
Mary eats a hamburger.
In interpreting this sentence we can take ‘Mary’ to be of type i, but we can also take it to be of type <<i>>, that is, the type of a propositional function on propositional functions of individuals. We can also raise the type of ‘eats a hamburger’ to <<<i>>>, a propositional function on propositional functions on propositional functions on individuals. And so on. The principle of multiple types and the principle of multiple analyses together show that a single expression or sentence can be interpreted as having a very large number of logical forms.
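The raising of ‘Mary’ from type i to type <<i>> can be sketched with a lift operation; the sample predicate standing in for ‘eats a hamburger’ is invented, and treating truth values as booleans is our simplification.

```python
# 'Mary' as an individual: type i.
mary = "Mary"

def lift(x):
    """Raise an individual to type <<i>>: the function λP. P(x)."""
    return lambda P: P(x)

# A toy stand-in for the predicate 'eats a hamburger', of type <i>.
eats_a_hamburger = lambda x: x == "Mary"

# Predicate applied to individual, and lifted individual applied to
# predicate, are two analyses of the same sentence.
low = eats_a_hamburger(mary)
high = lift(mary)(eats_a_hamburger)
```

Both analyses assign the sentence the same truth value, so the choice of logical form is free, just as the principle of multiple types says.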
This brief history of propositional functions shows that they are useful entities and that they have played a central role in logic as it is used in philosophy and linguistics. I have omitted the more mathematical uses of propositional functions, for example, in Russell's and Ramsey's constructions of classes, and in treatments of general models for higher-order logic. But the topic of propositional functions is a big one and we can't cover it all in a single encyclopedia article.
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054