Stanford Encyclopedia of Philosophy

Generalized Quantifiers

First published Mon Dec 5, 2005; substantive revision Thu Sep 5, 2024

Generalized quantifiers are now standard equipment in the toolboxes of both logicians and linguists. The purpose of this entry is to describe these tools: where they come from, how they work, and what they can be used to do. The description is by necessity sketchy, but several more comprehensive surveys exist in the literature and will be referred to when appropriate. To fully appreciate the text below, basic familiarity with elementary set theoretic terminology, and with the language of first-order logic, will be helpful.

1. Preliminaries

The term “generalized quantifier” reflects that these entities were introduced in logic as generalizations of the standard quantifiers of modern logic, \(\forall\) and \(\exists\).[1] In retrospect one may say that \(\forall\) and \(\exists\) have been found to be just two instances of a much more general concept of quantifier, making the term “generalized” superfluous. Today it is also common to use just “quantifier” for the general notion, but “generalized quantifier” is still frequent for historical reasons. This article employs both terms, with a tendency to insert “generalized” in logical contexts, and to drop it in linguistic contexts.

We distinguish quantifier expressions from what they signify or denote, the (generalized) quantifiers themselves. In logical languages, quantifier expressions are variable-binding operators. Thus, \(\exists\) is the familiar operator such that in a formula \(\exists x\f\),[2] \(\exists x\) binds all free occurrences of x in \(\f\). It signifies the quantifier “there exists”—we’ll see shortly exactly what this object is. Likewise, the symbol \(Q_0\) is often used as a variable-binding operator signifying “there exist infinitely many”.

In natural languages a variety of expressions have been seen as quantifier expressions, for example, each of the following English expressions:

everything, nothing, three books, the ten professors, John, John and Mary, only John, firemen, every, at least five, most, all but ten, less than half of the, John’s, some student’s, no…except Mary, more male than female, usually, never, each other.[3]

What, then, are generalized quantifiers? Before answering that question, a brief historical prelude is helpful.

2. Aristotle

Aristotle’s syllogistics can be seen as a formal study of the meaning of the four basic quantifier expressions all, no, some, not all, and of their properties. For example, the validity, according to Aristotle, of the syllogism

  • all(A,B)
  • all(B,C)
  • some(A,C)

shows that he considered, in contrast with modern logical usage, all to have existential import, so that All A are B entails that A is not an empty term. Likewise, the validity of the syllogism

  • some(A,B)
  • all(B,C)
  • some(A,C)

expresses that some is monotone increasing (as we now put it) in the second argument. Each valid syllogism formalizes part of the meaning of these quantifier expressions, but Aristotle’s study of their properties went beyond the syllogistics. He observed, for example, that some and no are convertible or, as we might now say, symmetric, since they satisfy the scheme

  • Q(A,B)
  • Q(B,A)

in contrast with all and not all. Further, he studied how various forms of negation combined with quantifier expressions in (what was later called) the square of opposition.[4] Medieval logicians continued in Aristotle’s tradition, but also extended syllogistic reasoning to cases where A, B could themselves be quantified expressions, thus dealing with premises and conclusions like Some donkey of every man doesn’t run (example from John Buridan, 14th century). Even though Aristotelian logic falls short of the expressivity and precision of modern logic, the syllogistics certainly was a decisive contribution to the study of quantification. In fact, syllogistic systems of various expressive power have recently been studied in mathematical logic, precisely because of their affinity to natural reasoning and their simple computational properties; see section 18 below.

Especially interesting in the present context is the fact that these quantifier expressions take two arguments or terms, and thus can be seen as binary relations, both syntactically (as Aristotle no doubt saw them) and semantically: given that terms signify sets of individuals, the expression some can be taken to signify the relation of overlap, i.e., of having non-empty intersection, between two sets, and all signifies the inclusion relation. Note that these are not relations between individuals but between sets of individuals—second-order relations. Indeed, they are exactly the generalized quantifiers some and all, respectively (on a given universe).

This thread—that quantifier expressions signify second-order relations—was not picked up by any of Aristotle’s medieval followers (as far as we know). Instead, they picked up on the fact that the two terms have different status: the first combines with the quantifier expression to form a noun phrase (as we now say), which is the subject of the sentence, whereas the second is a verb phrase constituting the predicate. This led them to focus on what the subject—all men, some dogs, no sailors—signified, which conceptually seems to be a harder question. One might surmise that all men signifies every man (or the set of men), and that some dogs signifies some particular dog, but what about no sailors? In fact, one can show that approaches like these are doomed to failure.[5] The modern “solution” is that noun phrases signify sets of sets of individuals, so that, for example, some dogs signifies the set of sets containing at least one dog—but that appears to require a more abstract and mathematical approach to semantics than the idea, which is at least implicit in Aristotle, that quantifier phrases signify relations between (the denotations of) terms.

3. Frege

The second major historical contribution to the theory of generalized quantifiers came from the “inventor” of modern logic, Gottlob Frege, in the 1870s. In fact, Frege’s contribution is twofold. As every philosophy student knows, he introduced the language of predicate logic, with sentential connectives, identity, and the variable-binding operator \(\forall\) (though his 2-dimensional logical notation is no longer used). These are the quantifiers that logicians during the 1950’s began to “generalize”. But Frege also explicitly formulated the abstract notion of a quantifier as a second-order relation, or, as he called it, a second level concept (“Begriff zweiter Stufe”). He was well aware that the four Aristotelian quantifiers were prime examples, but he wanted to avoid the focus on subject-predicate form, which he (with much justification) saw as having been a major obstacle to the development of logic after Aristotle. It was therefore an important discovery that these quantifiers could all be defined in terms of \(\forall\) and sentential operators (replacing all\((A,B)\) by \(\forall x(A(x) \rightarrow B(x))\), some\((A,B)\) by \(\neg\forall x(A(x) \rightarrow \neg B(x))\), etc.).

In fact, the only significant difference between Frege’s notion of a second-level concept and the modern notion of a generalized quantifier is that Frege did not have the idea of an interpretation or model, which we now (since the advent of model theory in the 1950s) see as a universe that the quantifiers range over, plus an assignment of suitable semantic objects to the non-logical symbols. Frege’s symbols all had fixed meanings, and the only universe he considered was the totality of everything. But apart from this, one may well say that it was Frege who discovered generalized quantifiers. This aspect of Frege’s logic, however, remained in the background for a long time, and model theorists in the 50s and 60s seem not to have been aware of it.

4. Generalizing the Universal and Existential Quantifier

Modern predicate logic fixes the meaning of \(\forall\) and \(\exists\) with the respective clauses in the truth definition, which specifies inductively the conditions under which a formula \(\f(x_1,\ldots,x_n)\) (with at most \(x_1,\ldots,x_n\) free) is satisfied by corresponding elements \(a_1,\ldots,a_n\) in a model \(\M = (M,I)\) (where M is the universe and I the interpretation function assigning suitable extensions to non-logical symbols): \(\M \models \f(a_1,\ldots,a_n)\). The clauses are (where “iff” as usual stands for “if and only if”)

(1)
\(\M \models \forall x\p(x,a_1,\ldots,a_n)\) iff for each \(a\in M\), \(\M \models \p(a,a_1,\ldots,a_n)\)
(2)
\(\M \models \exists x\p(x,a_1,\ldots,a_n)\) iff there is some \(a\in M\) s.t. \(\M \models \p(a,a_1,\ldots,a_n)\)

To introduce other quantifiers, one needs to appreciate what kind of expressions \(\forall\) and \(\exists\) are. Syntactically, they are operators binding one variable in one formula. To see how they work semantically it is useful to rewrite (1) and (2) slightly. First, every formula \(\p(x)\) with one free variable denotes in a model \(\M\) a subset of M; the set of individuals in M satisfying \(\p(x)\). More generally, if \(\p(x,x_1,\ldots,x_n) = \p(x,\xbar)\) has at most the free variables shown and \(\abar = a_1,\ldots,a_n\) are elements of M, let

\[\p(x,\abar)^{\M,x} = \{a\in M: \M \models \p(a,\abar)\}\]

be the extension of \(\p(x,\xbar)\) in \(\M\) relative to \(a_1,\ldots,a_n\). Then we can reformulate (1) and (2) as follows:

(3)
\(\M \models \forall x\p(x,\abar)\) iff \(\p(x,\abar)^{\M,x} = M\)
(4)
\(\M \models \exists x\p(x,\abar)\) iff \(\p(x,\abar)^{\M,x}\neq\emp\)

Thus, the conditions on the right hand side emerge as properties of the sets \(\p(x,\abar)^{\M,x}\). In fact, we can think of \(\forall\) and \(\exists\) as denoting these properties, i.e., the property of being identical to the universe, and of being non-empty, respectively. And now it is easy to think of other properties of sets that can also be treated as quantifiers, for example, the property of containing at least 5, or exactly 3, elements, or of being infinite.[6]

Note that these properties depend only on the universe M, not on the rest of the model. Extensionally, they are simply sets of subsets of M. This leads to the following definition, essentially from Mostowski (1957):

Definition 1
A generalized quantifier Q of type \({\langle}1{\rangle}\) is
(5)
a.
syntactically, a variable-binding operator such that whenever \(\f\) is a formula so is \(Qx\f\), and \(Qx\) binds all free occurrences of x in \(\f\);
b.
semantically, a mapping from arbitrary universes (non-empty sets) M to a set \(Q_M\) of subsets of M, which interprets formulas of the form \(Qx\f\) according to the clause \[\tag{i} \M \models Q x\p(x,\abar) \text{ iff } \p(x,\abar)^{\M,x} \in Q_M \]

Here we use the same symbol for the quantifier expression and the mapping that it signifies or denotes. Thus, \(\forall\) is now taken to denote the universal quantifier, also written \(\forall\), which is the mapping given by

\[\forall_M = \{M\}\]

for all M. Similarly, \(\exists\) denotes the mapping defined by

\[\exists_M = \{A\subseteq M: A\neq\emp\}\]

And here are some other generalized quantifiers (where \(|A|\) is the size or cardinality of \(A\)):

\[\begin{alignat}{2}\tag{6}\label{ex-qlist1}(\exists_{\geq 5})_M & = \{A\subseteq M: |A| \geq 5\}\\ (\exists_{= 3})_M & = \{A\subseteq M: |A|= 3\}\\ (Q_0)_M & = \{A\subseteq M: A \text{ is infinite}\}\\ (Q^R)_M & = \{A\subseteq M: |A| > |M-A|\} & \textrm{ (the “Rescher}\\ && \textrm{quantifier”)} \\ (Q_{\text{even}})_M & = \{A\subseteq M: |A| \text{ is even}\} \end{alignat} \]

We now have a precise notion of a generalized quantifier, of which \(\forall\) and \(\exists\) are instances, along with infinitely many others. Moreover, we see how to extend first-order logic FO to a logic \(\FO(Q)\), by adding the clause (5a) to the formation rules, and the clause (5b-i) to the truth definition. Similarly if we add more than one generalized quantifier: \(\FO(Q_1,\ldots,Q_n)\).
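On finite universes, the type \({\langle}1{\rangle}\) quantifiers listed above can be modelled directly as properties of subsets, i.e., as membership tests for \(Q_M\). A minimal Python sketch (the function names are illustrative, not standard notation):

```python
# Type <1> generalized quantifiers on a finite universe M, modelled
# as properties of subsets A of M (illustrative names).

def forall_q(M, A):          # universal quantifier: A = M
    return set(A) == set(M)

def exists_q(M, A):          # existential quantifier: A is non-empty
    return len(A) > 0

def at_least_5(M, A):        # |A| >= 5
    return len(A) >= 5

def exactly_3(M, A):         # |A| = 3
    return len(A) == 3

def rescher(M, A):           # Q^R: |A| > |M - A|
    return len(A) > len(set(M) - set(A))

def q_even(M, A):            # |A| is even
    return len(A) % 2 == 0

M = {1, 2, 3, 4}
print(forall_q(M, M))        # True
print(rescher(M, {1, 2, 3})) # True: 3 > 1
```

Each function takes the universe as an argument even when, as for \(\exists_{\geq 5}\), the universe plays no role; this mirrors the fact that a generalized quantifier assigns a set \(Q_M\) to every universe M.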

In such a logic one may be able to say things that are not expressible in FO. For example, it is well-known that in FO the notion of finiteness cannot be expressed. Thus there is no way to say, of an ordering relation \(<\), that each element has only finitely many predecessors, for instance. But this is just the sort of thing one can express in \(\FO(Q_0)\):

\[\tag{7} \forall x \neg Q_0 y (y < x) \]

Likewise, one cannot say in FO that a (finite) set A contains exactly half of the elements of the universe M, but that is expressible in \(\FO(Q^R)\):

\[\tag{8} \neg Q^RxA(x) \wedge \neg Q^Rx\neg A(x) \]

(The first conjunct says that \(|A|\leq |M-A|\), and the second that \(|M-A|\leq |A|\).)

5. Generalized Quantifiers of Arbitrary Types

Further generalization is possible. First, we can let Q bind one variable in two or more formulas. Second, we can let it simultaneously bind two or more variables in (some of) these formulas. The typing of Q indicates this: Q is of type \({\langle}n_1,\ldots,n_k{\rangle}\) (where each \(n_i\) is a natural number \(\geq 1\)) if it applies to k formulas, and binds \(n_i\) variables in the ith formula. This explains why the quantifiers in the previous section were said to be of type \({\langle}1{\rangle}\).

In the general case, one normally chooses distinct variables \(x_{i1},\ldots,x_{in_i} = \xbar_i\) for \(1\leq i \leq k\), so that a formula beginning with Q has the form

\[ Q\xbar_1,\ldots,\xbar_k(\f_1,\ldots,\f_k) \]

where all free occurrences of \(x_{i1},\ldots,x_{in_i}\) in \(\f_i\) become bound. Now Q associates with each universe M a k-ary relation \(Q_M\) between relations over M, where the ith argument is an \(n_i\)-ary relation between individuals. The corresponding clause in the truth definition becomes

\[\begin{align} \tag{9}&\M\models Q\xbar_1,\ldots,\xbar_k (\p_1(\xbar_1,\abar),\ldots,\p_k(\xbar_k,\abar)) \textrm{ iff }\\&Q_M(\p_1(\xbar_1,\abar)^{\M,\xbar_1},\ldots,\p_k(\xbar_k,\abar)^{\M,\xbar_k}) \end{align}\]

Here \(\p_i(\xbar_i,\ybar)\) is a formula with at most the free variables shown, \(\abar\) is a sequence of elements of M corresponding to \(\ybar\), and \(\p_i(\xbar_i,\abar)^{\M,\xbar_i}\) is the extension of \(\p_i(\xbar_i,\ybar)\) in \(\M\) relative to \(\abar\), i.e., the set of \(n_i\)-tuples \(\bbar_i\) such that \(\M\models\p_i(\bbar_i,\abar)\).

This is the official concept of a generalized quantifier in this article. It was introduced by Lindström (1966), and these quantifiers are sometimes called “Lindström quantifiers”.[7] If we fix M to the universe containing “everything”, we essentially have Frege’s notion of a second-level concept.[8]

Q is monadic if on each universe M it is a relation between subsets of M, i.e., if its type is \({\langle}1,\ldots,1{\rangle}\); otherwise it is polyadic. For example, the Aristotelian quantifiers mentioned earlier are of type \({\langle}1,1{\rangle}\):[9]

\[\begin{align}\tag{10}\label{ex-qlist2} \textit{all}_M(A,B) & \iff A \subseteq B\\ \textit{some}_M(A,B) & \iff A \cap B \neq\emp\\ \textit{no}_M(A,B) & \iff A \cap B = \emp\\ \textit{not all}_M(A,B) & \iff A \not\subseteq B \end{align} \]

Here are some more type \({\langle}1,1{\rangle}\) quantifiers:[10]

\[ \begin{alignat}{2}\label{ex-qlist3}\tag{11} (\textit{at least five})_M(A,B) &\iff |A \cap B|\geq 5\\ (\textit{exactly three})_M(A,B) &\iff |A \cap B|= 3\\ (\textit{infinitely many})_M(A,B) &\iff A \cap B \text{ is infinite} \\ \textit{most}_M(A,B) &\iff |A \cap B|> |A-B| \\ (\textit{an even number of})_M(A,B) &\iff |A \cap B|\text{ is even}\\ \textit{MO}_M(A,B) &\iff |A| > |B|\\ \textit{I}_M(A,B) &\iff |A| = |B| \hspace{2em}\textrm{(the “Härtig}\\ &\hspace{9em}\textrm{quantifier”)} \end{alignat} \]
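On finite universes, these type \({\langle}1,1{\rangle}\) quantifiers are simply computable relations between two sets. A minimal Python illustration (function names chosen here for readability):

```python
# Type <1,1> quantifiers as relations between subsets A, B of a
# universe (for these examples the universe itself plays no role).

def all_q(A, B):        # all: A is a subset of B
    return A <= B

def some_q(A, B):       # some: A and B overlap
    return bool(A & B)

def no_q(A, B):         # no: A and B are disjoint
    return not (A & B)

def most_q(A, B):       # most: |A ∩ B| > |A - B|
    return len(A & B) > len(A - B)

def haertig(A, B):      # Härtig quantifier I: |A| = |B|
    return len(A) == len(B)

A, B = {"a", "b", "c"}, {"b", "c", "d"}
print(most_q(A, B))     # True: |{b, c}| = 2 > |{a}| = 1
```

Note that most compares the two parts of A cut out by B, so, unlike all and some, it is not definable from the intersection alone being empty or non-empty.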

With monadic quantifiers it is convenient to use just one variable, and let Q bind that same variable in each of the formulas. Thus, to say that most As are not B, for example, one may write

\[ \textit{most}\: x (A(x),\neg B(x)) \]

in the corresponding logical language, rather than

\[\textit{most}\: x,y\, (A(x),\neg B(y)).\]

Here are a few polyadic quantifiers with their types given on theright:

\[\begin{alignat}{2}\label{ex-qlist4}\tag{12} W_M(R) &\iff R \text{ is a well-ordering of } M & {\langle}2{\rangle} \\ (Q_0^n)_M(R) & \iff \text{there is an infinite} \\ & \hphantom{\iff\ } A\subseteq M \ST A^n \subseteq R & {\langle}n{\rangle}\\ Res^k(\textit{most})_M(R,S) & \iff |R\cap S| > |R-S| & {\langle}k,k{\rangle}\\ \textit{RECIP}_M(A,R) & \iff \textrm{for all distinct } a,b\in A \\ &\hphantom{\iff\ } \textrm{ there is } n\geq 1 \\ &\hphantom{\iff\ } \textrm{ and } c_0,\ldots,c_n \ST c_0 = a \\ &\hphantom{\iff\ } {} \amp c_n = b \amp c_iRc_{i+1} \textrm{ for } i < n\quad & {\langle}1,2{\rangle} \end{alignat} \]

W and \(Q_0^n\) come from logic and set theory. \(Res^k(\textit{most})\) is the resumption of most to k-tuples. Resumption can be applied to any quantifier (in the syntax, this means replacing each individual variable by a corresponding k-tuple of variables); it has logical uses but also, like RECIP, uses in the interpretation of certain sentences in natural languages; see section 16 below.

6. Topic Neutrality

Both Mostowski and Lindström had one additional condition in their definitions of generalized quantifiers: they should not distinguish isomorphic models. Informally, they are “topic-neutral”: the truth of a statement of the form \(\f = Qx,yz(A(x),R(y,z))\), say, in a model \(\M\) doesn’t depend on the particular individuals M consists of. If the individuals of M are mapped in a one-one fashion onto the individuals of another universe \(M'\), and if A and R are mapped accordingly, one obtains an isomorphic model \(\M'\). Isomorphism Closure then says that \(\M\models\f\) iff \(\M'\models\f\).

More formally, if \(\M = (M,I)\) and \(\M' = (M',I')\) are models for the same vocabulary V of non-logical symbols, f is an isomorphism from \(\M\) to \(\M'\), iff

  • f is a bijection (a one-one onto function) from M to \(M'\);
  • whenever P is an n-ary predicate symbol in V and \(a_1,\ldots,a_n \in M\), \[ (a_1,\ldots,a_n) \in I(P) \textrm{ iff } (f(a_1),\ldots,f(a_n)) \in I'(P); \]
  • whenever c is an individual constant in V, \(I'(c) = f(I(c))\).

\(\M\) and \(\M'\) are isomorphic, in symbols,

\[\M \cong \M'\]

if there is an isomorphism from one to the other. Now if Q is a generalized quantifier of type \({\langle}n_1,\ldots,n_k{\rangle}\), \(P_i\) is an \(n_i\)-ary predicate symbol for \(1\leq i\leq k\), \(\M = (M,I)\) is a model for the vocabulary \(\{P_1,\ldots,P_k\}\), and \(R_i = I(P_i)\), we also write

\[\M = (M,R_1,\ldots,R_k)\]

Then Q satisfies Isomorphism Closure, or just Isom, if the following holds:

\[\begin{align}\label{ex-isom}\tag{13}&\textrm{If } (M,R_1,\ldots,R_k) \cong (M',R'_1,\ldots,R'_k), \\&\textrm{then } Q_M(R_1,\ldots,R_k) \Leftrightarrow Q_{M'}(R'_1,\ldots,R'_k). \end{align}\]

One easily checks that all the generalized quantifiers exemplified so far are indeed Isom. We did not include this requirement in the definition of generalized quantifiers, however, since there are natural language quantifiers that do not satisfy it; see below. But logic is supposed to be topic-neutral, so Isom is almost always imposed. Then two important things follow. First, as indicated above, sentences in logical languages do not distinguish isomorphic models. More precisely, we have the following

Fact 2

If \(L = \FO(Q_1,\ldots,Q_n)\), each \(Q_i\) is Isom, \(\f\) is an L-sentence, and \(\M\cong \M'\), then \(\M\models\f \Leftrightarrow \M'\models\f\).

Second, Isom takes a particularly interesting form for monadic quantifiers. If \(\M = (M,A_1,\ldots,A_k)\), where \(A_i \subseteq M\) for each i, then \(A_1,\ldots,A_k\) partition M into \(2^k\) pairwise disjoint subsets (some of which may be empty); let us call them the parts of \(\M\). We illustrate with \(k=2\) and \(\M = (M,A,B)\):

[Figure: two intersecting circles inside a box labeled M; the overlap of the circles is labeled \(A \cap B\), their non-overlapping parts \(A-B\) and \(B-A\), and the region inside the box but outside both circles \(M-(A\cup B)\).]

Figure 1

Now it is not hard to see that only the sizes of the parts determine whether two models of this kind are isomorphic or not:

Fact 3

\((M,A_1,\ldots,A_k) \cong (M',A'_1,\ldots,A'_k)\) iff thecardinalities of the corresponding parts are the same.

This shows that monadic and Isom generalized quantifiers indeed deal only with quantities, i.e., with sizes of sets rather than the sets themselves. The list \eqref{ex-qlist3} of type \({\langle}1,1{\rangle}\) generalized quantifiers clearly illustrates this, but also the Aristotelian quantifiers can be formulated in terms of cardinalities,

\[ \begin{align}\textit{all}_M(A,B) & \iff |A-B|=0\\ \textit{some}_M(A,B) & \iff |A \cap B|>0 \end{align} \]

etc., and similarly for the type \({\langle}1{\rangle}\) examples wegave.

More generally, under Isom, monadic quantifiers can be seen as relations between (cardinal) numbers. For example, if Q is of type \({\langle}1{\rangle}\), then define (using the same symbol Q for the relation between numbers)

\[ Q(\kappa,\lambda) \iff \text{there is } (M,A) \ST |M\!-\!A|=\kappa \amp |A|=\lambda \amp Q_M(A) \]

Isom guarantees that this is well-defined, and we have

\[ Q_M(A) \iff Q(|M\!-\!A|,|A|) \]
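On finite universes this reduction to cardinalities can be checked mechanically. The sketch below (illustrative code, not from the literature) verifies, for the Rescher quantifier, that \(Q_M(A)\) depends only on \(|M-A|\) and \(|A|\):

```python
# Under Isom, a type <1> quantifier reduces to a relation between the
# cardinalities |M - A| and |A|.  Illustrated here for Q^R.

def rescher_on_sets(M, A):           # Q^R as a property of sets
    return len(A) > len(M - A)

def rescher_on_numbers(kappa, lam):  # Q^R as a relation on numbers
    return lam > kappa               # |A| > |M - A|

M = set(range(10))
for k in range(11):
    A = set(range(k))                # one subset of each size 0..10
    assert rescher_on_sets(M, A) == rescher_on_numbers(len(M - A), len(A))
print("Q^R depends only on the sizes of the parts")
```

Since only one subset of each cardinality needs to be inspected (any other subset of the same size gives an isomorphic model), the loop over sizes already covers all cases.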

7. Relativization

Every statement involving a generalized quantifier Q takes place within some universe M. Sometimes it is useful to be able to mirror this relativization to a universe inside M. This means defining a new quantifier with one extra set argument which says that Q behaves on the universe restricted to that argument exactly as it behaves on M. Thus, if Q is of type \({\langle}n_1,\ldots,n_k{\rangle}\), we define \(Q{^{\text{rel}}}\) of type \({\langle}1,n_1,\ldots,n_k{\rangle}\) as follows:

\[\tag{14} (Q{^{\text{rel}}})_M(A,R_1,\ldots,R_k) \mathbin{\Longleftrightarrow_{\text{def}}} Q_A(R_1\!\restriction\! A,\ldots,R_k\!\restriction\! A) \]

where \(R_i \subseteq M^{n_i}\) and \(R_i\!\restriction\! A\) is the restriction of \(R_i\) to A, i.e., the set of \(n_i\)-tuples in \(R_i\cap A^{n_i}\).

We have in fact already seen several examples of relativization, since one easily verifies (see the lists \eqref{ex-qlist1} and \eqref{ex-qlist3}) that

\[\begin{align}\tag{15} \textit{all} & = \forall{^{\text{rel}}}\\ \textit{some} & = \exists{^{\text{rel}}}\\ \textit{at least five} & = (\exists_{\geq 5}){^{\text{rel}}}\\ \textit{exactly three} & = (\exists_{= 3}){^{\text{rel}}}\\ \textit{infinitely many} & = (Q_0){^{\text{rel}}}\\ \textit{most} & = (Q^R){^{\text{rel}}}\\ \textit{an even number of} & = (Q_{\text{even}}){^{\text{rel}}} \end{align} \]
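One of these identities, \(\textit{most} = (Q^R){^{\text{rel}}}\), can be verified exhaustively on small finite models. A Python sketch (the function names are ours, chosen for readability):

```python
# Relativization: (Q^rel)_M(A, B) iff Q_A(B restricted to A).
# For Q = Q^R this yields most: most_M(A, B) iff |A ∩ B| > |A - B|.
from itertools import chain, combinations

def rescher(M, B):            # Q^R on universe M: |B| > |M - B|
    return len(B) > len(M - B)

def rescher_rel(M, A, B):     # (Q^R)^rel: run Q^R on universe A
    return rescher(A, B & A)

def most(M, A, B):            # most: |A ∩ B| > |A - B|
    return len(A & B) > len(A - B)

M = set(range(5))
subsets = [set(s) for s in chain.from_iterable(
    combinations(M, r) for r in range(len(M) + 1))]

assert all(rescher_rel(M, A, B) == most(M, A, B)
           for A in subsets for B in subsets)
print("most = (Q^R)^rel on every model with |M| = 5")
```

The check works because \(A - (B\cap A) = A - B\), so running \(Q^R\) on the universe A with argument \(B\cap A\) compares exactly \(|A\cap B|\) with \(|A-B|\).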

8. Expressive Power

We described how generalized quantifiers can be added to FO, resulting in more expressive logics. A logic in this sense roughly consists of a set of sentences, a class of models, and a truth relation (or a satisfaction relation) between sentences and models. Such logics are often called model-theoretic logics, since they are defined semantically in terms of models and truth, rather than proof-theoretically in terms of a deductive system for deriving theorems.[11] Here we restrict attention to logics of the form \(\FO(Q_1,Q_2,\ldots)\), formed by adding generalized quantifiers to FO, where each quantifier comes with a formation rule and a semantic clause for the truth definition as described in section 5 above.

There is an obvious way to compare the expressive power of model-theoretic logics. \(L_2\) is at least as expressive as \(L_1\), in symbols,

\[L_1 \leq L_2\]

if every \(L_1\)-sentence \(\f\) is logically equivalent to some \(L_2\)-sentence \(\p\), i.e., \(\f\) and \(\p\) are true in the same models. Also, \(L_1\) and \(L_2\) have the same expressive power, \(L_1 \equiv L_2\), if \(L_1 \leq L_2\) and \(L_2 \leq L_1\), and \(L_2\) is stronger than \(L_1\), \(L_1 < L_2\), if \(L_1 \leq L_2\) but \(L_2 \not\leq L_1\). Thus, \(L_1 < L_2\) if everything that can be said in \(L_1\) can also be said in \(L_2\), but there is some \(L_2\)-sentence which is not equivalent to any sentence in \(L_1\).

How does one establish facts about expressive power? It seems as if in order to show \(L_1 \leq L_2\) one has to go through all of the infinitely many sentences in \(L_1\) and for each one find an equivalent in \(L_2\). But in practice it suffices to show that the generalized quantifiers in \(L_1\) are definable in \(L_2\). If Q is of type \({\langle}1,2{\rangle}\), say, Q is definable in \(L_2\) if there is an \(L_2\)-sentence \(\p\) whose non-logical vocabulary consists exactly of one unary and one binary predicate symbol, such that for all models \(\M = (M,A,R)\),

\[Q_M(A,R) \iff (M,A,R)\models\p\]

Similarly for other types. For example, the quantifier all is definable in FO, since the following holds:

\[\textit{all}_M(A,B) \iff (M,A,B)\models \forall x(A(x) \rightarrow B(x))\]

Likewise, \(Q^R\) is definable in \(\FO(\textit{most})\), since

\[(Q^R)_M(A) \iff (M,A)\models \textit{most}\: x(x=x, A(x))\]

(note that all our logics contain the logical apparatus of FO, so they are all extensions of FO). The latter is an instance of the following observation:

(16)
For any generalized quantifier Q, Q is definable in \(\FO(Q{^{\text{rel}}})\).

Such facts about definability can be easy or hard to establish,[12] but they suffice to establish positive facts about expressivity, since we have:

Fact 4

\(\FO(Q_1,\ldots,Q_n) \leq L\) if and only if each \(Q_i\) is definable in L.

On the other hand, to prove inexpressibility, i.e., that some sentence is not equivalent to any L-sentence, is harder. One way that sometimes works is to establish that \(L_1\) has some property that \(L_2\) lacks; then one might be able to conclude that \(L_2 \not\leq L_1\). Some properties that are typical of FO, but fail for most stronger logics, are:

  • The Löwenheim property: If a sentence is true in some infinite model, it is also true in some countable model.

  • The Tarski property: If a sentence is true in some countably infinite model, it is also true in some uncountable model.

  • The compactness property: If no model makes every element of the set of sentences \(\Phi\) true, then there is a finite subset \(\Psi\) of \(\Phi\) such that no model makes every sentence in \(\Psi\) true.

  • The completeness property: The set of valid sentences is recursively enumerable (i.e., can be generated by some formal system).

For example, \(\FO(Q_0)\) does not have the compactness property.[13] This can be seen by looking at the set of sentences

\[\Phi \:=\: \{\neg Q_0x(x=x)\} \cup \{\theta_n: n=1,2,\ldots\}\]

where \(\theta_n\) is an FO-sentence saying that there are at least n elements in the universe. If you take any finite subset \(\Phi'\) of \(\Phi\), and M is a universe whose cardinality is the largest n such that \(\theta_n\) belongs to \(\Phi'\), then all sentences in \(\Phi'\) are true in M. But no universe can make all sentences in \(\Phi\) true. And this shows that \(Q_0\) is not definable in FO, i.e., that \(\FO(Q_0) \not\leq \FO\), since otherwise we could replace \(\Phi\) by an equivalent set of FO-sentences, but FO does have the compactness property, so that is impossible.

However, this way of proving inexpressibility only works for logics with properties like those above. Moreover, it only works if infinite universes are allowed, but interesting inexpressibility facts hold also for finite models, for example, the fact that \(Q^R\) and \(Q_{\text{even}}\) are not definable in FO, or that most = \((Q^R){^{\text{rel}}}\) is not definable in \(\FO(Q^R)\). Logicians have developed much more direct and efficient methods of showing undefinability results that work also for finite models.[14]

The above properties in fact characterize FO, in the sense that no proper extension of FO can have (certain combinations of) them. This is the content of a celebrated theorem about model-theoretic logics, Lindström’s Theorem, a version of which is given below. For an accessible proof see, for example, Ebbinghaus, Flum, and Thomas (1994). We say that a logic \(L = \FO(Q_1,\ldots,Q_n)\) relativizes if the “converse” of (16) holds for each \(Q_i\), i.e., if each \((Q_i){^{\text{rel}}}\) is definable in L.

Theorem 5 (Lindström) If L is compact and has the Löwenheim property, then \(L \equiv \FO\). Also, provided L relativizes, if L is complete and has the Löwenheim property, or if L has both the Löwenheim and the Tarski properties, then \(L \equiv \FO\).

9. Generalized Quantifiers and Computation

In addition to the truth conditions associated with generalized quantifiers, one may study the computations required to establish the truth of a quantified statement in a model. Indeed, generalized quantifiers turn up in various places in the part of computer science that studies computational complexity. In this context, we restrict attention to finite universes, and assume Isom throughout. So a quantifier is essentially a set of finite models; by Isom we can assume that models of cardinality m all have the same domain \(M = \{1,\ldots,m\}\). Such models can be coded as words, i.e., finite strings of symbols. For example, a model \((M,A)\) of type \({\langle}1{\rangle}\) can be seen as a binary word \(a_1\ldots a_m\), where \(a_i\) is 1 if \(i\in A\) and 0 otherwise. Thus \(|A|\) is the number of 1’s and \(|M\!-\!A|\) the number of 0’s; by Isom, the order in the string doesn’t matter. So Q becomes a set \(W_Q\) of words, that is, a formal language: a subset of the set of all finite strings of coding symbols.[15]
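The coding of a type \({\langle}1{\rangle}\) model as a binary word is straightforward to implement; here is a small illustrative Python helper (the function name is ours):

```python
# Coding a monadic model (M, A) with M = {1, ..., m} as a binary word:
# position i carries 1 iff i is in A.  Under Isom only the number of
# 1's and 0's matters, not their order in the word.

def encode(m, A):
    return "".join("1" if i in A else "0" for i in range(1, m + 1))

w = encode(5, {2, 3, 5})
print(w)                           # 01101
print(w.count("1"), w.count("0"))  # |A| = 3, |M - A| = 2
```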

We can now ask what it takes to recognize that a word belongs to \(W_Q\). The abstract notion of an automaton gives an answer; automata are machines that accept or reject words, and they are classified according to the complexity of the operations they perform. The language recognized by an automaton is the set of words it accepts.[16]

A finite automaton has a finite number of states, including a start state and at least one accepting state. It starts scanning a word at the leftmost symbol in the start state, and at each step it moves one symbol to the right and enters a (possibly) new state, according to a given transition function. If it can move along the whole word ending in an accepting state, the word is accepted. The application of automata theory to generalized quantifiers was initiated in van Benthem (1986) (Ch. 7, “Semantic automata”). It is easy to construct a finite automaton recognizing \(\forall\) (or \(\forall{^{\text{rel}}} =\) all), i.e., checking that w consists only of 1’s: just remain in the start state = accepting state as long as 1’s are encountered, but go to a rejecting state as soon as a 0 is scanned, and remain there whatever is encountered afterwards. A slightly more complex automaton recognizes \(Q_{\text{even}}\): again there are two states, a start state = the accepting state and a rejecting state, and this time remain in the same state when 0’s are scanned, but go to the other state when a 1 is scanned. To end in the accepting state it is then necessary and sufficient that there are an even number of 1’s. This machine essentially uses cycles of length 2, whereas the first example had only 1-cycles. Call an automaton of the latter kind acyclic. Van Benthem showed that the FO-definable quantifiers are exactly the ones accepted by finite automata that are acyclic and permutation closed.[17]
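The two finite automata just described translate almost verbatim into code. A Python sketch (the state names are ours) of the machines for all and \(Q_{\text{even}}\), each scanning a binary word left to right:

```python
# Semantic automata over binary words coding models (M, A), where
# 1 marks membership in A.  Both machines have exactly two states.

def accepts_all(word):
    # Accepting start state; a single 0 sends us to a rejecting sink.
    state = "accept"
    for symbol in word:
        if symbol == "0":
            state = "reject"     # some element lies outside A
    return state == "accept"

def accepts_q_even(word):
    # Start state = accepting state; each 1 toggles between the two
    # states, so we accept iff the word contains an even number of 1's.
    state = "even"
    for symbol in word:
        if symbol == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"

print(accepts_all("1111"))       # True:  A = M
print(accepts_all("1101"))       # False: one element outside A
print(accepts_q_even("10110"))   # False: three 1's
```

The first machine never returns to its accepting state once it has left it (only 1-cycles), while the second cycles between its two states with period 2, which is exactly the distinction behind van Benthem's characterization of the FO-definable quantifiers.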

A slightly more complex automaton, the pushdown automaton, has rudimentary memory resources in the form of a stack of symbols that can be pushed or popped from the top, enabling it to keep track to some extent of what went on at earlier steps. Another result by van Benthem is that the type \({\langle}1{\rangle}\) quantifiers accepted by pushdown automata are precisely those for which the corresponding binary relation between numbers is definable (with first-order means) in additive arithmetic, i.e., in the model \((N,+)\), where \(N = \{0,1,2,\ldots\}\). An example is \(Q^R\) (or its relativization most): we have \(Q^R(m,n) \Leftrightarrow m < n\), and the right hand side is definable in \((N,+)\) by \(\exists x (x \neq 0 \wedge m + x = n)\).[18]

Thus, an algorithmic characterization is matched with a logical one. This is one prominent direction in the study of algorithmic complexity. Consider now the most general abstract automata or computational devices, i.e., Turing machines. One (of many) interesting complexity classes is PTIME: a problem, identified with its corresponding set of words, is PTIME if there is a polynomial \(p(x)\) and a Turing machine accepting W such that whenever \(w \in W\) has length n, the accepting computation takes at most \(p(n)\) steps. PTIME problems are usually considered “tractable”, whereas more complex problems are “intractable”, such as EXPTIME ones, where the number of steps required may grow exponentially. An early result by Immerman and Vardi is that the PTIME sets of (words coding) finite models are precisely those describable by single sentences in \(\FO(\LFP)\), which is FO logic with an added mechanism for forming least fixed-points.[19] Here we need to represent not just monadic models but arbitrary ones. For example, a binary relation on the universe \(\{1,\ldots,m\}\) can be represented by a word \(w_{11}\cdots w_{1m}\# \ldots \#w_{m1}\cdots w_{mm}\), where the relation holds of \((i,j)\) iff \(w_{ij} = 1\). But this time the order does seem to matter, and in fact the Immerman and Vardi result just mentioned only holds for models with a given linear order and a binary predicate symbol standing for that order.

Logics like \(\FO(\LFP)\) can be recast as logics of the form \(\FO(Q_1,Q_2,\ldots)\). Here infinitely many quantifiers may be required, but in some cases a single one suffices. As to \(\FO(\LFP)\), it suffices to add all the resumptions (see the end of section 5 above) of a single quantifier. More generally, let \(\FO^*(Q_1,Q_2,\ldots)\) be like \(\FO(Q_1,Q_2,\ldots)\) but with mechanisms for making relativizations (section 7) and for resuming each \(Q_i\) to k-tuples for each k. Then there is a single quantifier Q such that \(\FO(\LFP) = \FO^*(Q)\).

So generalized quantifiers remain a simple and versatile way of adding expressive power to FO. One natural question was whether the logical characterization of PTIME mentioned above could be improved using generalized quantifiers, in particular whether one could remove the restriction to ordered structures in this way. The answer, however, turned out to be negative, since Hella (1989) proved that the PTIME computable properties of arbitrary finite structures cannot be characterized by adding a finite number of generalized quantifiers to FO, or even to \(\FO(\LFP)\). The question of whether PTIME can be characterized by a logic of the form \(\FO^*(Q)\) remains open, however (indeed, solving it would be a major breakthrough in complexity theory).

10. Generalized Quantifiers and Natural Language

In the late 1960s Richard Montague showed how the semantics of significant parts of natural languages could be handled with logical tools.[20] One of his main insights was that noun phrases (NPs) can be interpreted as sets of subsets of the domain, i.e., as (what we now call) type \({\langle}1{\rangle}\) quantifiers. Montague worked in type theory, but around 1980 a number of linguists and logicians began to apply the model-theoretic framework of logics with generalized quantifiers to natural language semantics.[21] Consider the structure of a simple English sentence whose subject is a quantified NP:[22]

(17)
Linguistics tree [S [NP [Det [most]] [N [students]]] [VP [smoke]]]

The (subject) NP consists of a determiner and a noun (N). Both the noun and the verb phrase (VP) have sets as extensions, and so the determiner is naturally taken to denote a binary relation between sets, i.e., a type \({\langle}1,1{\rangle}\) quantifier. An utterance of (17) has a (discourse) universe in the background (say, the set of people at a particular university), but the meaning of most, every, at least five and similar expressions is not tied to particular universes. For example, the meaning of all in

(18)
a.
All cats like milk.
b.
All electrons have negative charge.
c.
All natural numbers have a successor.
d.
All twins like each other.
e.
All compact subsets of Hausdorff spaces are closed.

has nothing to do with cats or electrons or numbers or twins or Hausdorff spaces, nor with the discourse universes that may be associated with the above examples. It simply stands for the inclusion relation, regardless of what we happen to be talking about. Therefore, the generalized quantifier all, which with each universe M associates the inclusion relation over M, is eminently suitable to interpret all, and similarly for other determiners.

However, it is characteristic of sentences of the form (17) that the noun argument and the VP argument are not on a par. The noun combines with the determiner to form the NP, a separate constituent, and this constituent can also be taken to signify a generalized quantifier, this time of type \({\langle}1{\rangle}\). Thus, three students denotes the set of subsets of the universe which contain at least three students. This quantifier results from freezing the first argument of the type \({\langle}1,1{\rangle}\) three to the set of students; we write this three\(^{\textit{student}}\). In general, if A is a fixed set and Q a type \({\langle}1,1{\rangle}\) quantifier, one may define the type \({\langle}1{\rangle}\) quantifier \(Q^A\) by

\[\tag{19} \label{QA} (Q^A)_M(B) \;\Longleftrightarrow_{\text{def}}\; Q_{M\cup A}(A,B) \]

for any M and any \(B\subseteq M\). In a compositional semantics it is natural to take each constituent part of a sentence to have a separate signification or meaning, and the default significations of noun phrases are type \({\langle}1{\rangle}\) quantifiers.
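Definition (19) is easy to model extensionally. In the following Python sketch, quantifiers are treated as Boolean-valued functions on finite sets; the names (at_least, freeze) and the sample data are our own, purely for illustration:

```python
# Sketch of freezing a type <1,1> quantifier to a type <1> quantifier,
# following definition (19). Names and data are illustrative only.

def at_least(n):
    """Type <1,1>: Q_M(A, B) iff |A ∩ B| >= n (M plays no role here)."""
    def q(M, A, B):
        return len(A & B) >= n
    return q

def freeze(q, A):
    """(Q^A)_M(B) iff Q_{M ∪ A}(A, B), as in (19)."""
    def qa(M, B):
        return q(M | A, A, B)
    return qa

students = {"al", "bo", "cy"}
smokers = {"al", "bo", "dee"}
at_least_three_students = freeze(at_least(3), students)

# "At least three students smoke" is false here: only al and bo do.
print(at_least_three_students(students | smokers, smokers))  # False
```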

This holds also for some NPs that lack determiners, such as proper names. While the lexical item John is assigned some individual j by an interpretation, the NP John can be taken to denote the quantifier \(I_j\), defined, for any M, by

\[(I_j)_M = \{B\subseteq M\!: j\in B\}\]

This is in fact well motivated, not only because the interpretation of NPs becomes more uniform, but also because John can combine with quantified NPs:

(20)
John and three professors came to the meeting.

Here it is convenient if John and three professors have the same semantic category. Note that generalized quantifiers—in contrast with individuals!—have a clear Boolean structure; define (here in the type \({\langle}1{\rangle}\) case, but similarly for any other type)

\[ \begin{align}(Q_1 \wedge Q_2)_M(A) & \iff (Q_1)_M(A) \textrm{ and } (Q_2)_M(A)\\ (\neg Q)_M(A) & \iff \textrm{ not } Q_M(A) \end{align} \]

Then we can take the complex NP in (20) to denote \(I_j \wedge \textit{three}^{\textit{professor}}\). Similarly, the complex NP in

(21)
John and Mary came to the meeting.

signifies \(I_j \wedge I_m\).
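The Boolean operations just defined, and the lift of an individual to the quantifier \(I_j\), can be sketched as follows (all names and data are our own illustrations):

```python
# Sketch of the Boolean operations on type <1> quantifiers and the
# lift I_j of an individual j; names and data are illustrative only.

def lift(j):
    """(I_j)_M = {B ⊆ M : j ∈ B}."""
    return lambda M, B: j in B

def conj(q1, q2):
    """(Q1 ∧ Q2)_M(B) iff (Q1)_M(B) and (Q2)_M(B)."""
    return lambda M, B: q1(M, B) and q2(M, B)

def at_least_n_in(n, A):
    """Frozen type <1> quantifier: B contains at least n members of A."""
    return lambda M, B: len(A & B) >= n

professors = {"p1", "p2", "p3", "p4"}
john_and_three_professors = conj(lift("john"), at_least_n_in(3, professors))

M = professors | {"john", "mary"}
came = {"john", "p1", "p2", "p3"}
# "John and three professors came to the meeting":
print(john_and_three_professors(M, came))  # True
```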

The first argument (coming from the noun) of a type \({\langle}1,1{\rangle}\) determiner denotation is often called its restriction, and the second its scope. The difference in syntactic status between these two arguments turns out to have a clear semantic counterpart.

11. Conservativity

It was observed early on that type \({\langle}1,1{\rangle}\) quantifiers denoted by determiners in natural languages have the following property:

(22)
Conservativity (Conserv):
For all M and all \(A,B\subseteq M\), \[Q_M(A,B) \iff Q_M(A,A\cap B).\]

This can be seen from sentence pairs such as the following, where it is clear that the second sentence is just an awkward way of expressing the first:

(23)
a.
Most students smoke.
b.
Most students are students who smoke.
(24)
a.
At least five professors were absent.
b.
At least five professors were absent professors.
(25)
a.
More than one third of the graduate students are foreigners.
b.
More than one third of the graduate students are foreign graduate students.

Conserv says that only the part of B which is common to A matters for the truth of \(Q_M(A,B)\). That is, the part \(B-A\) in Figure 1 doesn’t matter. This appears to hold for all determiner denotations, but it fails for perfectly natural logical quantifiers, such as MO and I from the list \eqref{ex-qlist3} above. The reason is that it is characteristic of determiner denotations that the restriction argument restricts the domain of quantification to that argument.
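On a small universe, Conserv (22) can be checked by brute force. A Python sketch (most is modeled as in the text; as a non-conservative contrast we model the Härtig-style equicardinality quantifier as \(|A| = |B|\), an assumption on our part about the list referred to above):

```python
# Brute-force check of Conservativity (22) over all A, B ⊆ M.
# Illustrative code; quantifier definitions are our own modeling.
from itertools import chain, combinations

def subsets(M):
    s = list(M)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_conservative(q, M):
    return all(q(M, A, B) == q(M, A, A & B)
               for A in subsets(M) for B in subsets(M))

most = lambda M, A, B: len(A & B) > len(A - B)
equicard = lambda M, A, B: len(A) == len(B)   # modeled as |A| = |B|

M = {1, 2, 3}
print(is_conservative(most, M))      # True
print(is_conservative(equicard, M))  # False
```

The counterexample the checker finds for the equicardinality quantifier is typical: with \(A=\{1\}\), \(B=\{2\}\) we have \(|A|=|B|\) but \(|A| \neq |A\cap B|\).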

12. Extension

Actually, the idea of domain restriction has one further ingredient. To restrict the domain of quantification to a subset A of M means that not only \(B-A\) but the whole part of M that lies outside A is irrelevant, and hence also the part \(M-(A\cup B)\) in Figure 1. This in turn is an instance of a more general property, applicable to arbitrary generalized quantifiers:

(26)
Extension (Ext):
If Q is of type \({\langle}n_1,\ldots,n_k{\rangle}\), \(R_i\subseteq M^{n_i}\) for \(1\leq i\leq k\), and \(M \subseteq M'\), then \[Q_M(R_1,\ldots,R_k) \iff Q_{M'}(R_1,\ldots,R_k).\]

That is, nothing happens when the universe is extended, or shrunk, as long as the arguments are not changed. Now recall that for type \({\langle}1{\rangle}\) quantifiers we already provided a logical mechanism for restricting the quantification domain to a subuniverse, in terms of relativization (section 7). We can now see (in (b) below) that the combination of Conserv and Ext amounts to exactly the same thing:

Fact 6

  1. For any quantifier Q, \(Q{^{\text{rel}}}\) satisfies Ext.
  2. A type \({\langle}1,1{\rangle}\) quantifier is Conserv and Ext if and only if it is the relativization of a type \({\langle}1{\rangle}\) quantifier.[23]

Again, all determiner denotations appear to satisfy Ext. At first sight, nothing in principle would seem to prevent a language from containing a determiner, say evso, which meant every on universes with fewer than 10 elements and some on larger universes. But not only is there in fact no such determiner in any language—there couldn’t be, if the noun argument of a determiner is to restrict the domain of quantification to the denotation of that noun.

A quantifier such as evso is intuitively not constant, in the sense that it doesn’t mean the same, or is not interpreted by the same rule, on every universe. Ext can be seen as a strong requirement of constancy: the rule interpreting Q doesn’t even mention the universe. Indeed, many quantifiers from language and logic are Ext. As we saw, all relativized quantifiers are Ext, and so are all the other quantifiers in the lists \eqref{ex-qlist2}–\eqref{ex-qlist4}, except W.[24] In fact, it seems that all quantifiers taking more than one argument that show up in natural language contexts are Ext. And many type \({\langle}1{\rangle}\) quantifiers are Ext too, for example, \(\exists\), \(I_j\), \(Q^A\) (when Q is Ext; see \eqref{QA} above), and all in the list \eqref{ex-qlist1} except \(Q^R\).

But \(\forall\) and \(Q^R\) are not Ext. Yet one is inclined to say for them too that they mean the same on every universe. The case of \(\forall\) is particularly interesting since one might argue that it interprets NPs like everything. The crux here is thing. If this expression is seen as a logical constant that always denotes the universe, then these NPs do denote \(\forall\): for all M and all \(B\subseteq M\),

\[ \begin{align}(\textit{every}^{\textit{thing}})_M(B) & \iff \textit{every}_M(M,B) \\ & \iff M\subseteq B \\ & \iff M = B \\ & \iff \forall_M(B) \end{align} \]

When Ext holds, we can usually drop the subscript M and write, for example,

\[Q(A,B)\]

rather than \(Q_M(A,B)\). That is, a suitable universe can be presupposed but left in the background.

13. Symmetry and Monotonicity

Other properties are not shared by all natural language quantifiers but single out important subclasses. We mentioned two already in section 2 above: symmetry and monotonicity. Typical symmetric quantifiers are some, no, at least five, exactly three, an even number of, infinitely many, whereas all, most, at most one-third of the are non-symmetric. Another way to express symmetry is to say that the truth-value of \(Q(A,B)\) only depends on the set \(A\cap B\). More precisely, call Q intersective if for all M and all \(A,A',B,B'\subseteq M\):

(27)
If \(A\cap B = A'\cap B'\) then \(Q_M(A,B) \Leftrightarrow Q_M(A',B')\).

One easily verifies:

Fact 7

For conservative type \({\langle}1,1{\rangle}\) quantifiers, symmetry and intersectivity are equivalent.[25]
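Fact 7 can be illustrated empirically on a small universe by exhaustive checking. A Python sketch, with some and all modeled in the obvious extensional way (the code and data are ours, purely for illustration):

```python
# Empirical illustration of Fact 7: for conservative type <1,1>
# quantifiers, symmetry and intersectivity go together.
from itertools import chain, combinations

def subsets(M):
    s = list(M)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def symmetric(q, M):
    return all(q(M, A, B) == q(M, B, A)
               for A in subsets(M) for B in subsets(M))

def intersective(q, M):
    """(27): the truth value depends only on A ∩ B."""
    return all(q(M, A, B) == q(M, A2, B2)
               for A in subsets(M) for B in subsets(M)
               for A2 in subsets(M) for B2 in subsets(M)
               if A & B == A2 & B2)

some = lambda M, A, B: len(A & B) > 0    # conservative and symmetric
all_q = lambda M, A, B: A <= B           # conservative, not symmetric

M = {1, 2}
print(symmetric(some, M), intersective(some, M))    # True True
print(symmetric(all_q, M), intersective(all_q, M))  # False False
```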

We noted that some of the syllogisms express monotonicity properties. In more succinct notation, a type \({\langle}1,1{\rangle}\) quantifier Q is

right increasing (right decreasing) iff for all M and all \(A,B \subseteq B' \subseteq M\) (all \(A,B'\subseteq B \subseteq M\)), \(Q_M(A,B)\) implies \(Q_M(A,B')\).

Similarly for left increasing or decreasing, and indeed for monotonicity in any given argument place of a generalized quantifier. In particular, it is clear what it means for a type \({\langle}1{\rangle}\) quantifier to be monotone. Monotonicity is ubiquitous among natural language quantifiers. It seems that syntactically simple English NPs all denote monotone (increasing or decreasing) type \({\langle}1{\rangle}\) quantifiers, and almost all syntactically simple English determiners denote right monotone quantifiers.[26] We also have:

(28)
a.
The quantifiers \(I_j\) (proper names) are increasing
b.
\(Q^A\) is increasing (decreasing) iff Q is right increasing (decreasing).

The Aristotelian all, some, no are monotone in both arguments (e.g., all is right increasing and left decreasing), as are at least five, no more than ten, infinitely many, whereas most, at least two-thirds of the are right increasing but neither increasing nor decreasing in the left argument. Exactly three, between two and seven are non-monotone, though both of these are conjunctions of a (right and left) increasing and a decreasing quantifier (e.g., at least three and at most three), in contrast with an even number of, which is not a (finite) Boolean combination of monotone quantifiers.
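On finite universes these classifications can be verified by exhaustive search. A brute-force Python sketch (quantifier definitions are our own extensional modeling; on an infinite universe no such check is possible, of course):

```python
# Brute-force classification of right monotonicity on a small universe.
# Quantifier definitions are illustrative modeling, not from the text.
from itertools import chain, combinations

def subsets(M):
    s = list(M)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def right_monotone(q, M):
    inc = dec = True
    for A in subsets(M):
        for B in subsets(M):
            for B2 in subsets(M):
                if q(M, A, B) and not q(M, A, B2):
                    if B <= B2:     # truth not preserved upward
                        inc = False
                    if B2 <= B:     # truth not preserved downward
                        dec = False
    if inc and dec:
        return "both"
    if inc:
        return "increasing"
    if dec:
        return "decreasing"
    return "neither"

M = {1, 2, 3}
most = lambda M, A, B: len(A & B) > len(A - B)
no = lambda M, A, B: not (A & B)
exactly_two = lambda M, A, B: len(A & B) == 2

print(right_monotone(most, M))         # increasing
print(right_monotone(no, M))           # decreasing
print(right_monotone(exactly_two, M))  # neither
```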

Both symmetry and monotonicity have important explanatory roles for certain linguistic phenomena. Symmetry is a feature of (most of) the quantifiers allowed in so-called existential there sentences (e.g., There are at least five men in the garden is fine, but There are most men in the garden is not). Monotonicity is crucial for explaining the distribution of polarity items (No one will ever succeed is fine but Someone will ever succeed is not: negative polarity items such as ever require a decreasing environment).[27] Furthermore, monotonicity is crucially involved in natural forms of reasoning; see section 18.

14. Determiners that are not ISOM

Consider

(29)
John’s books were stolen.
(30)
Some student’s books have not been returned.
(31)
No professor except Mary came to the meeting.
(32)
All beach goers except a few enthusiastic swimmers were fully clothed.
(33)
More male than female students smoke.

The expressions John’s, some student’s, no _ except Mary, all _ except a few enthusiastic swimmers, more male than female are quite naturally seen as determiners: when combined with nouns they form phrases that behave like ordinary NPs. Also, the type \({\langle}1,1{\rangle}\) quantifiers they signify are Conserv and Ext. For example, the sentences in the following pair are trivially equivalent:

(34)
a.
John’s books were stolen.
b.
John’s books are books that were stolen.

But in contrast with the previous examples, they are not Isom, since they involve some fixed individual or property: if John’s books were stolen, and the number of stolen books is the same as the number of red pencils (in some discourse universe), and the number of books that weren’t stolen is the same as the number of pencils that aren’t red, it does not follow that John’s pencils are red, as Isom would have it.

However, just as the non-Isom quantifier three\(^{\textit{student}}\) results by freezing the restriction argument of the Isom quantifier three, the non-Isom quantifiers above result by freezing arguments in more abstract relations, which are Isom. We illustrate this with the possessive determiner John’s.[28]

Given that John denotes an individual j, the determiner John’s can be defined, for all M and all \(A,B\subseteq M\), by[29]

\[ \texttt{John’s}_M(A,B) \iff \emp\neq A\cap R_j\subseteq B \]

where \(R_j = \{b\in M\!: R(j,b)\}\) and R is some “possessor” relation; it is well-known that this relation varies a lot with the circumstances—one could be talking about the books that John owns, or has written, or borrowed, or bought as a present for Mary, etc. Suppose R is ownership. Then (29) says that John owns at least one book, and that all of the books he owns were stolen. Now consider the more general “quantifier” defined, for \(a\in M\), \(R\subseteq M^2\), and \(A,B\subseteq M\), by

\[\mathbf{P}_M(a,R,A,B) \iff \emp\neq A \cap R_a \subseteq B\]

We could say that this is a generalized quantifier of type \({\langle}0,2,1,1{\rangle}\), letting 0 stand for individuals. \(\mathbf{P}\) is Isom (extending definition \eqref{ex-isom} in the obvious way to quantifiers of this type), and John’s results by freezing the first two arguments to suitable values.
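The definition of \(\mathbf{P}\) and the freezing that yields John’s can be sketched as follows (all names and the toy possessor relation are our own illustrations):

```python
# Sketch of the type <0,2,1,1> "quantifier" P and the freezing that
# yields John's, following the definitions above. Data is invented.

def P(a, R, A, B):
    """P(a, R, A, B) iff ∅ ≠ A ∩ R_a ⊆ B, where R_a = {b : (a, b) ∈ R}."""
    Ra = {b for (x, b) in R if x == a}
    return bool(A & Ra) and (A & Ra) <= B

def johns(R):
    """Freeze the first two arguments of P to John and a possessor relation."""
    return lambda A, B: P("john", R, A, B)

owns = {("john", "book1"), ("john", "book2"), ("mary", "book3")}
books = {"book1", "book2", "book3"}
stolen = {"book1", "book2"}

# "John's books were stolen": John owns a book, and all he owns were stolen.
print(johns(owns)(books, stolen))  # True
```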

Similar constructions work for other cases of quantifier expressions in natural languages that denote non-Isom quantifiers. For example, the determiner no _ except Mary denotes (given that Mary refers to m)

\[(\texttt{no }\_\texttt{ except Mary})_M(A,B) \iff A\cap B = \{m\}\]

That is, (31) says that Mary is a professor, that she came to the meeting, and that no other professor did. Again, a corresponding Isom quantifier of type \({\langle}0,1,1{\rangle}\) is readily defined. So in this way Isom can be retrieved for natural language quantifiers. On the other hand, associating type \({\langle}1,1{\rangle}\) quantifiers with determiners agrees better with syntax, and allows many generalizations concerning determiner denotations to hold in the non-Isom case as well.

15. Constancy

Isom, i.e., topic neutrality, is standardly seen as at least a necessary condition for being a logical constant.[30] It is possible to distinguish logicality from constancy in the earlier mentioned sense of meaning the same over different universes. For one thing, logicality is a property that ought to be closed under definability, whereas it is not at all clear that constancy should be similarly closed. Note, for example, that the class of Ext quantifiers is not closed under first-order definability. More precisely, it is closed under the usual Boolean operations, but not under inner negation and hence not under taking duals, where the inner negation of a type \({\langle}1{\rangle}\) quantifier Q is defined by \((Q\neg)_M(A) \Leftrightarrow Q_M(M\!-\!A)\), and the dual by \(Q^d = \neg(Q\neg)\). For example, \(\exists^d = \forall\).

One intuition might be that Ext suffices for constancy. But a different intuition is that a quantifier meaning the same on all universes should in particular satisfy Isom, which forces Q to be the “same” on all universes of the same cardinality. These two ideas are incompatible, since together they would imply that Ext implies Isom, which is manifestly false. Clearly, the vague notion of meaning the same across different universes admits of different precisifications. On closer inspection, it seems unlikely that there is one precise version that would accommodate all intuitions about sameness.

In this situation, a suggestion would be to simply stipulate that constancy amounts to Ext + Isom. This would be a Carnapian explication of constancy. Quantifiers with this combination of properties seem certain to mean the same on all universes. On the other hand, Ext but non-Isom quantifiers like three\(^{\textit{student}}\) or some professor’s would not have the same meaning across different domains, which as we saw accords with one intuition. Furthermore, the few natural non-Ext quantifiers we have encountered are all definable from Ext + Isom quantifiers.[31]

16. Polyadic Natural Language Quantifiers

Consider a typical English sentence where both subject and object are quantified:

(35)
Most films were reviewed by two critics.

The truth conditions of (35) can be given in terms of a polyadic quantifier, of type \({\langle}1,1,2{\rangle}\) (omitting M):

\[Q(A,B,R) \iff \textit{most}(A,\{a\!: \textit{two}(B,R_a)\})\]

(This is the “narrow scope” reading; the “wide scope” reading would instead be \(\textit{two}(B,\{b\!: \textit{most}(A,(R^{-1})_b)\})\).) But this polyadic quantifier results from two type \({\langle}1,1{\rangle}\) quantifiers by a ubiquitous construction that we call iteration. If \(Q,Q'\) are of type \({\langle}1{\rangle}\), define the type \({\langle}2{\rangle}\) quantifier \(Q\cdot Q'\) by

\[\tag{36} Q\cdot Q'(R) \iff Q(\{a\!: Q'(R_a)\}) \]

Then we obtain the iteration of two type \({\langle}1,1{\rangle}\) quantifiers \(Q_1,Q_2\) as above with \(Q_1^A \cdot Q_2^B\). Properties of iterations are studied in van Benthem (1989), Keenan (1992), Westerståhl (1994), and Steinert-Threlkeld and Icard (2013).
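Iteration as in (36) is easy to model on finite data. A Python sketch of the narrow-scope reading of (35), with invented films and critics (all names and data are ours):

```python
# Sketch of iteration (36): the type <1,1,2> quantifier for the
# narrow-scope reading of "Most films were reviewed by two critics".
# All names and data are invented for illustration.

def most(A, B):
    return len(A & B) > len(A - B)

def at_least_two(A, B):
    return len(A & B) >= 2

def iterate(q1, A, q2, B):
    """The frozen iteration Q1^A · Q2^B applied to a relation R."""
    def q(R):
        Ra = lambda a: {b for (x, b) in R if x == a}
        return q1(A, {a for a in A if q2(B, Ra(a))})
    return q

films = {"f1", "f2", "f3"}
critics = {"c1", "c2", "c3"}
reviewed = {("f1", "c1"), ("f1", "c2"), ("f2", "c2"), ("f2", "c3"),
            ("f3", "c1")}

# f1 and f2 were each reviewed by two critics; f3 by only one.
print(iterate(most, films, at_least_two, critics)(reviewed))  # True
```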

Keenan thinks of iteration as the Frege boundary. As he and others pointed out, there appear to be many natural language quantifiers beyond that boundary, i.e., not definable as iterations. We give a few examples here; many more can be found in the references just given. The next sentence may look like it expresses an iteration but in fact doesn’t.

(37)
Different students answered different questions at the exam.

Example (37) presumably has various interpretations, for example one using the following type \({\langle}1,1,2{\rangle}\) quantifier:

\[ Q(A,B,R) \iff \forall a,b \in A(a\neq b \Rightarrow B\cap R_a \neq B\cap R_b) \]

This quantifier is still first-order definable but not an iteration.[32] Next, consider

(38)
a.
People usually are grateful to firemen who rescue them.
b.
Men seldom make passes at girls who wear glasses. (Dorothy Parker)

Adverbs like usually, seldom, always, never can be taken to denote generalized quantifiers (an observation originally made in Lewis (1975)). For example, Dogs never meow is roughly synonymous with No dogs meow. But for (38a) it can be argued that there is a reading where the quantifier applies to pairs: among the pairs consisting of a person and a fireman who rescues that person, a majority are such that the person is grateful. This is just the resumption of most to pairs, that we defined in \eqref{ex-qlist4}:

\[Res^2(\textit{most})(R,S) \iff |R\cap S| > |R-S|\]

So in (38a), \(R(a,b)\) iff \(a \in \textit{person}\) and \(b \in \textit{fireman}\) and \(a\: \textit{rescued } b\), and \(S(a,b)\) iff a is grateful to b. It can be shown that for many quantifiers, in particular most, \(Res^n(Q)\) is not definable in \(\FO(Q)\). In fact, \(Res^2(\textit{most})\) is not definable from any finite number of monadic quantifiers, so it is an example of an irreducibly polyadic quantifier.[33]
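The resumption \(Res^2(\textit{most})\) is straightforward to model on finite relations. A Python sketch with invented toy data for the firemen example (all names and data are ours):

```python
# Sketch of Res²(most) on pairs: |R ∩ S| > |R − S|, applied to toy data
# for the firemen example. All data is invented for illustration.

def res2_most(R, S):
    return len(R & S) > len(R - S)

# R: (person, fireman-who-rescued-them) pairs; S: (person, fireman) pairs
# where the person is grateful to the fireman.
rescues = {("a", "f1"), ("b", "f1"), ("c", "f2")}
grateful = {("a", "f1"), ("b", "f1")}

# Among the three rescue pairs, two are grateful pairs: a majority.
print(res2_most(rescues, grateful))  # True
```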

Next:

(39)
a.
Five Boston pitchers sat alongside each other.
b.
Most of the parliament members refer to each other indirectly.

Here (39a) can have the truth conditions

\[\exists X \subseteq \textit{Boston pitcher} [|X| = 5 \amp \textit{RECIP}(X, \textit{sat alongside})]\]

where RECIP is the type \({\langle}1,2{\rangle}\) quantifier defined in \eqref{ex-qlist4}. That is, there is a set of five Boston pitchers such that if you take any two of those, either they sit next to each other, or there are one, two, or at most three pitchers (all in the chosen set) between them. Similarly for (39b). This is just one of several constructions of polyadic quantifiers that occur in reciprocal sentences.[34]

Finally, consider the sentence

(40)
Most boys in your class and most girls in my class have all dated each other.

(40) has been put forward as an example of branching quantification, which can be written in a two-dimensional logical format as

(41)
'most x A(x)' and 'most y B(y)' each with lines to 'R(x,y)'

where the intended reading is that there is a subset X of A containing most of the elements of A, and a similarly large subset Y of B, such that each pair \((a,b)\) where \(a \in X\) and \(b \in Y\) belongs to the relation R. More generally, we have a polyadic quantifier of type \({\langle}1,1,2{\rangle}\) defined for any \(Q_1,Q_2\) of type \({\langle}1,1{\rangle}\) by

\[\begin{align}\tag{42}\label{ex-br} &Br(Q_1,Q_2)(A,B,R) \iff\\ \exists X\subseteq A \: \exists Y&\subseteq B \, [Q_1(A,X) \amp Q_2(B,Y) \amp X\times Y \subseteq R] \end{align}\]

Quite plausibly, this gives a reading of (40). Note that x and y here are independent of each other. If one instead would use any one of the linear sentences

\[\begin{align}&\textit{most}\: x(A(x),\textit{most}\: y(B(y),R(x,y)))\\&\textit{most}\: y(B(y),\textit{most}\: x(A(x),R(x,y))) \end{align}\]

then either y depends on x or vice versa. The two-dimensional syntax in (41) reflects this semantic independence.[35]

It can be shown that \(Br(\textit{most},\textit{most})\) is not expressible in \(\FO(\textit{most})\) alone; indeed not with any finite number of monadic quantifiers (for a proof, see Hella, Väänänen, and Westerståhl (1997)). On the other hand, branching quantifiers are obtained with a “lifting” operation applied to monadic quantifiers, and similarly for resumption. Indeed, although natural language exhibits numerous polyadic quantifiers well beyond the Frege boundary, one might still make a case for the claim that these are all obtained from monadic quantifiers in systematic ways.
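On tiny finite models, definition (42) can be checked by brute force over all candidate sets X and Y. A Python sketch with invented data for (40) (names and data are ours; the exponential search is of course only feasible for very small universes):

```python
# Brute-force sketch of branching (42): Br(Q1, Q2)(A, B, R) holds iff
# there are X ⊆ A, Y ⊆ B with Q1(A, X), Q2(B, Y) and X × Y ⊆ R.
from itertools import chain, combinations, product

def subsets(S):
    s = list(S)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def most(A, B):
    return len(A & B) > len(A - B)

def br(q1, q2):
    def q(A, B, R):
        return any(q1(A, X) and q2(B, Y) and
                   all((a, b) in R for a, b in product(X, Y))
                   for X in subsets(A) for Y in subsets(B))
    return q

boys = {"b1", "b2", "b3"}
girls = {"g1", "g2", "g3"}
# b1, b2 have each dated g1, g2: a "block" covering most of each side.
dated = {(b, g) for b in {"b1", "b2"} for g in {"g1", "g2"}}

print(br(most, most)(boys, girls, dated))  # True
```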

17. GQ Theory and Linguistics

The advent of generalized quantifiers had a huge impact on linguistic semantics via Montague’s work in the late 60s, reinforced by the application of model-theoretic methods in the early 80s by Barwise and Cooper, Keenan and Stavi, and others (see note 21). In almost all examples in these works, the natural language was English. Linguists have since applied and tested the tools and methods of “GQ theory” on other languages. The collection Bach et al. (1995) has, among many other things, seven case studies of quantification in other languages. It also emphasizes the distinction between D-quantification and A-quantification. In D-quantification, which most of our examples so far exhibit, the quantifier expression is (usually) a determiner which applies to a noun. A-quantification is performed by other means—A stands for adverbs, auxiliaries, affixes, and argument-structure adjusters. Many languages prefer A-quantification, some exclusively. English has both types; recall the adverbs of quantification in (38).[36]

More recently, the volumes Keenan and Paperno (2012) and Paperno and Keenan (2017) have a separate chapter answering a fixed set of questions about expressing quantification for each of 34 different languages (different also from those mentioned above), in order to make an extensive inventory of their expressive resources.[37] The approach is semantic: the questions are of the form “Can one express X in your language, and if so in what ways?”, which allows precise questions about conservativity, monotonicity, polarity items, monadic vs. polyadic quantification, etc. to be put to each language. The summary in the last chapter shows that many of the generalizations that hold for English, concerning the existence of expressions denoting certain quantifiers and the properties of those, hold in all or most of the other languages studied as well (Keenan and Paperno list 25 such generalizations).

On the other hand, beginning in the 1990s some linguists have argued that GQ theory is unable to account for a range of important semantic phenomena—in English and other languages—related to quantification. Szabolcsi (2010) gives a detailed account of these developments. One issue is that GQ theory appears to have nothing to say about the compositional meaning of complex determiners. For example, how is the meaning of more than five derived from the meanings of its parts? Or consider most, which is often treated as a simple determiner, even though its meaning seems to somehow come from being a superlative of more, or perhaps many; see, for example, Pietroski et al. (2009), and Hackl (2000, 2010).

Another problematic phenomenon is scope. While GQ theory in principle seems to allow all theoretically possible scopings of nested quantifier expressions, natural languages have restrictions regulating which of these are actually allowed. Indeed, scope is a major topic in linguistic syntax and semantics, and a complex one. The problem is also methodological: how does one determine whether a given sentence S can actually mean Y (where Y corresponds to a particular scoping)? First, one must filter out cases where the unavailability of Y depends on facts about the world, not about language. Second, whose intuitions should count: the linguist’s, or those of native speakers in a test situation, or should perhaps statistical evidence play a role? Still, while it is true that many readings that seem impossible at first sight are in fact available in sufficiently specific contexts, it is plausible that languages have scope restrictions beyond the reach of GQ theory.[38]

The “GQ theorist” could reply that her tools were never meant to fully explain scope, or to enable compositional analyses of every complex expression. The model-theoretic framework is first of all descriptive: it provides mathematical objects that can serve as (models of) meanings, and formulates properties of and relations between these objects. Sometimes, facts about the mathematical objects reveal insights about the things they are modeling, as in the case of monotonicity and polarity items, or the case of the meaning of conjoined noun phrases. But there is no reason to expect this to happen in every case.

These are positions in an ongoing debate about the role of formal methods, and in particular of model-theoretic tools, in semantics; a debate which is by no means settled. What seems clear is that the phenomena related to quantification in natural languages continue to provide excellent material for that discussion.

18. Quantification and Cognition

In recent years there has been an explosion of work connecting semantics, reasoning, and cognition, much of it related to how speakers understand, learn, and reason with quantified expressions. A major strand of research concerns monotonicity (section 13). Already Barwise and Cooper (1981) noted the ubiquity of monotone quantifiers in natural languages, and suggested a way of showing that monotone quantifiers are easier to process than non-monotone ones, and that increasing quantifiers are easier than decreasing ones. They also suggested that psychological experiments might be used to test their hypothesis. Their technical proposal was developed further in van Benthem (1986), which introduced a notion of count complexity and showed that, under some assumptions, the quantifiers with minimal count complexity are precisely the ones with a certain strong monotonicity property.[39]

Monotonicity is also involved in what van Benthem has called “one-step” reasoning, which appears to be easily available to speakers. The monotonicity behavior of basic determiners already shows how such reasoning is licensed. Marking right increasing (decreasing) type \({\langle}1,1{\rangle}\) quantifiers with a + (a \(-\)) to the right, and similarly for left monotonicity, we have, for example:

\(-\textit{every}+\) \(+\textit{some}+\) \(-\textit{no}-\)\(\cdot\,\textit{most}+\) \(\cdot\,\textit{exactly three}\,\cdot\)

where \(\cdot\) marks that the position is neither decreasing nor increasing. A nice example is the following inference (from Icard and Moss (2014), adapting an example in Geurts and Slik (2005)):

(43)
  • Most Americans who know a foreign language speak it at home
  • Most Americans who know a foreign language speak it at home or at work

The premise is a “donkey sentence” with most, and it is notoriously hard to pin down the exact truth conditions of these. In fact, several readings are possible.[40] In spite of this, speakers seem to have no problem making this inference, apparently since most is right increasing (the VP argument speak it at home is extended to speak it at home or at work), regardless of what the subject noun phrase (the same in both sentences) exactly means.

Many other expressions and phrases besides determiners show fixed monotonicity patterns. Beginning with van Benthem (1986) this has led to algorithms for how polarity markers are assigned to the nodes of analysis trees of sentences (relative to a given grammar), or for how to incorporate such markers directly in the type notation; see Icard and Moss (2014) for an overview and further references. Besides their role in inference, such marking can also explain, and sometimes even predict, the distribution of negative polarity items in languages (end of section 13). Moreover, in many cases no syntactic analysis is necessary: inferences can be made directly on surface form, and would in this sense be available “on the fly” to speakers; compare (43). The paper just mentioned also presents a complete axiomatization of a formal Monotonicity Calculus, in which many varieties of reasoning with monotonicity can be expressed.[41]

A somewhat parallel development has been the formal study of various syllogistic fragments; we noted in section 2 that many syllogisms express monotonicity properties. These fragments, most of which were studied by Ian Pratt-Hartmann and above all Larry Moss, range from those containing only simple sentences like all X Y or some X Y to ones allowing complements, relative clauses, transitive verbs, non-first-order quantifiers like most, and other features. Here is an example (Moss p.c.) of an inference in such a fragment:

  • Everyone likes everyone who likes Pat
  • Pat likes every clarinetist
  • Everyone likes everyone who likes everyone who likes every clarinetist

This illustrates how quite involved reasoning can be expressed in a simple syllogistic-like language. The inference is valid, but one has to think a bit to see that.[42] A main feature of most of these fragments is that, in addition to having explicit complete axiomatizations, validity in them is decidable, in contrast with first-order logic. This also holds for some fragments with quantifiers that are not FO-definable. Like the monotonicity calculus, the study of syllogistic fragments is part of the enterprise somewhat loosely called natural logic, resulting in well-behaved subsystems of more familiar logics, in the sense of being both closer to natural language and computationally more tractable; see Moss (2015) for a survey.[43]

On the cognitive side, questions of understanding and learning related to quantification and monotonicity have been studied both in psychology and neuroscience. Geurts and van der Slik (2005) asked subjects whether certain inferences involving monotonicity were valid or not; the results largely corroborated Barwise and Cooper’s earlier hypotheses. The meaning of individual determiners has also been studied empirically; Pietroski et al. (2009) investigated most, where the method was to show subjects a picture with yellow and blue dots for a very short time (to eliminate counting) and ask, say, if it is true or false that most of the dots are yellow. Variations of this kind of experiment are common in the literature; recent instances are Odic et al. (2018), which studies the mass/count distinction in cognition and semantics, and Knowlton et al. (2022), on the mental representation of universal quantifiers. These studies also address the human number sense and its relation to understanding quantificational language. One might entertain a “Whorfian” hypothesis that the latter is a prerequisite for the former. This was tested with neurobiological methods (brain-scan methods combined with psychological tests with patients suffering from various brain disorders) in Clark and Grossman (2007). They did not find any empirical support for that hypothesis; see also Clark (2011a) for a description of the experiment and more on research on quantification and number sense.

There are by now a fair number of empirical studies of how various classes of quantifiers identified by logical or computational means are reflected in terms of learning, understanding, cognitive load, etc. Conversely, linguistic and cognitive facts suggest new theoretical questions. For example, as regards computational complexity, Sevenster (2006) showed that the branching of most as in (40) in section 9 is intractable.[44] Subsequently, Szymanik observed that if the operations of resumption and iteration (as in (38) and (36), respectively) are applied to PTIME quantifiers, the result is again in PTIME, in contrast with branching. Similarly, some forms of reciprocal constructions preserve PTIME computability whereas others don’t: “lifting” exactly five with RECIP as in (39a) does, but similarly lifting most as in (39b) does not.
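To make the iteration operation concrete: \(It(Q_1,Q_2)(A,B,R)\) holds iff \(Q_1(A, \{a : Q_2(B, \{b : R(a,b)\})\})\), so the inner set is computable with one pass over \(R\) per element of \(A\), and the lift of PTIME quantifiers is again in PTIME. A minimal Python sketch (the sentence and denotations are stock illustrations, not taken from the cited results):

```python
# Two type <1,1> quantifiers as functions on finite sets:
every = lambda A, B: set(A) <= set(B)
some  = lambda A, B: bool(set(A) & set(B))

def iterate(q1, q2):
    """It(Q1,Q2)(A, B, R) iff Q1(A, {a : Q2(B, {b : R(a,b)})})."""
    def lifted(A, B, R):
        return q1(A, {a for a in A
                      if q2(B, {b for b in B if (a, b) in R})})
    return lifted

# "Every boy read some book" as the iteration of "every" and "some":
boys, books = {"al", "bo"}, {"war", "peace"}
read = {("al", "war"), ("bo", "peace")}
assert iterate(every, some)(boys, books, read)
assert not iterate(every, some)(boys, books, {("al", "war")})  # bo read nothing
```

The nested comprehension makes the polynomial running time visible: for a universe of size \(n\), the lift needs at most \(n\) evaluations of \(Q_2\) plus one of \(Q_1\), whereas branching readings quantify over sets of witnesses and resist any such decomposition.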

In van Benthem’s semantic automata setting (section 9), Steinert-Threlkeld and Icard (2013) proved that the Frege boundary (section 16) is robust in the sense that if two Conserv and Ext type \({\langle}1,1{\rangle}\) quantifiers are recognizable by finite (or push-down) automata, then so is their iteration. Moreover, Steinert-Threlkeld (2016) showed that for large classes of type \({\langle}1,1,2{\rangle}\) quantifiers, it is decidable whether they are iterations of type \({\langle}1,1{\rangle}\) quantifiers or not. A recent presentation of both theoretical and empirical results around the cognitive aspects of quantifier recognition is Szymanik (2016).
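To give a flavor of the semantic automata setting: for Conserv, Ext, and Isom type \({\langle}1,1{\rangle}\) quantifiers, \(Q(A,B)\) depends only on the string of 0s and 1s obtained by listing the elements of \(A\) and writing 1 for those in \(B\), so each quantifier can be identified with a language over \(\{0,1\}\). The sketch below (a textbook-style illustration, not code from the cited papers) implements the two-state automaton for every and the parity automaton for an even number of, and checks them against the set-theoretic semantics:

```python
def every_word(word):
    """DFA for "every": accept iff the word contains no 0."""
    state = "acc"
    for bit in word:
        if bit == 0:
            state = "rej"  # trap state: one counterexample suffices
    return state == "acc"

def even_word(word):
    """DFA for "an even number of": accept iff the number of 1s is even."""
    parity = 0
    for bit in word:
        parity ^= bit
    return parity == 0

def encode(A, B):
    """Translate a pair of finite sets into the corresponding 0/1 word."""
    return [1 if a in B else 0 for a in sorted(A)]

# The automata agree with the set-theoretic semantics:
A, B = {1, 2, 3}, {1, 2}
assert every_word(encode(A, B)) == (A <= B)
assert even_word(encode(A, B)) == (len(A & B) % 2 == 0)
```

Quantifiers like most, by contrast, require comparing the number of 1s and 0s and so need at least push-down (in fact, for most, counting) power; this is the setting in which the iteration results just mentioned are stated.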

Computational models of learning the meaning of quantifiers have been given, for example by Clark (2011a) in the semantic automata setting. In a recent development, Steinert-Threlkeld and Szymanik (2019) study learnability with the technology of neural networks, testing whether certain quantifiers satisfying three commonly proposed universals—that simple determiner denotations are monotone, Isom, and Conserv, respectively—are easier to learn than quantifiers that do not have these properties. For each universal, the time it takes the network to learn a quantifier satisfying it is compared to the time it takes to learn a quantifier that doesn’t. It turns out that monotone and Isom quantifiers are easier than non-monotone and non-Isom ones, whereas there is no detectable difference for Conserv.[45] Van de Pol et al. (2023) compare these facts to experimental results concerning how easy it is to describe quantifiers, relative to various measures of complexity of description.

These are just glimpses of ongoing research. The investigation of how speakers process quantified expressions, combining the basic model-theoretic analysis with methods from psychology, neuroscience, and computer science, is by now a rich area in the study of generalized quantifiers.

Bibliography

  • Bach, Emmon, Eloise Jelinek, Angelika Kratzer, and Barbara H. Partee (eds.), 1995, Quantification in Natural Languages, (Studies in Linguistics and Philosophy 54), Dordrecht: Springer Netherlands. doi:10.1007/978-94-017-2817-1
  • Barwise, Jon, 1979, “On Branching Quantifiers in English”, Journal of Philosophical Logic, 8(1): 47–80. doi:10.1007/BF00258419
  • Barwise, Jon and Robin Cooper, 1981, “Generalized Quantifiers and Natural Language”, Linguistics and Philosophy, 4(2): 159–219. doi:10.1007/BF00350139
  • Barwise, Jon and Solomon Feferman (eds.), 1985, Model Theoretic Logics, (Perspectives in Mathematical Logic), New York: Springer-Verlag.
  • van Benthem, Johan, 1986, Essays in Logical Semantics, (Studies in Linguistics and Philosophy 29), Dordrecht: D. Reidel.
  • –––, 1989, “Polyadic Quantifiers”, Linguistics and Philosophy, 12(4): 437–464. doi:10.1007/BF00632472
  • van Benthem, Johan F. A. K. and Alice ter Meulen (eds.), 2011, Handbook of Logic and Language, second edition, Amsterdam: Elsevier.
  • van de Pol, Iris, Paul Lodder, Leendert van Maanen, Shane Steinert-Threlkeld, and Jakub Szymanik, 2023, “Quantifiers Satisfying Semantic Universals have Shorter Minimal Description Length”, Cognition, 232: 105150. doi:10.1016/j.cognition.2022.105150
  • Bonnay, Denis, 2008, “Logicality and Invariance”, Bulletin of Symbolic Logic, 14(1): 29–68. doi:10.2178/bsl/1208358843
  • Cartwright, Richard L., 1994, “Speaking of Everything”, Noûs, 28(1): 1–20. doi:10.2307/2215917
  • Clark, Robin, 2011a, “Generalized Quantifiers and Number Sense”, Philosophy Compass, 6(9): 611–621. doi:10.1111/j.1747-9991.2011.00419.x
  • –––, 2011b, “On the Learnability of Quantifiers”, in van Benthem and ter Meulen 2011: 911–923.
  • Clark, Robin and Murray Grossman, 2007, “Number Sense and Quantifier Interpretation”, Topoi, 26(1): 51–62. doi:10.1007/s11245-006-9008-2
  • Dalrymple, Mary, Makoto Kanazawa, Yookyung Kim, Sam McHombo, and Stanley Peters, 1998, “Reciprocal Expressions and the Concept of Reciprocity”, Linguistics and Philosophy, 21(2): 159–210. doi:10.1023/A:1005330227480
  • Ebbinghaus, Heinz-Dieter and Jörg Flum, 1995, Finite Model Theory, (Springer Monographs in Mathematics), Berlin: Springer Berlin Heidelberg. doi:10.1007/3-540-28788-4
  • Ebbinghaus, Heinz-Dieter, Jörg Flum, and Wolfgang Thomas, 1994, Mathematical Logic (Einführung in die mathematische Logik), second edition, New York: Springer-Verlag. doi:10.1007/978-1-4757-2355-7
  • Filin Karlsson, Martin, 2017, “All There Is: On the Semantics of Quantification over Absolutely Everything”, Ph.D. Thesis, University of Gothenburg, (Acta Philosophica Gothoburgensia 31). [Karlsson 2017 available online]
  • Geurts, Bart and Frans van der Slik, 2005, “Monotonicity and Processing Load”, Journal of Semantics, 22(1): 97–117. doi:10.1093/jos/ffh018
  • Glanzberg, Michael, 2004, “Quantification and Realism”, Philosophy and Phenomenological Research, 69(3): 541–572. doi:10.1111/j.1933-1592.2004.tb00518.x
  • Hackl, Martin, 2000, “Comparative Quantifiers”, Ph.D. Thesis, Massachusetts Institute of Technology. [Hackl 2000 available online]
  • –––, 2009, “On the Grammar and Processing of Proportional Quantifiers: most versus more than half”, Natural Language Semantics, 17: 63–98.
  • Hella, Lauri, 1989, “Definability Hierarchies of Generalized Quantifiers”, Annals of Pure and Applied Logic, 43(3): 235–271. doi:10.1016/0168-0072(89)90070-5
  • Hella, Lauri, Jouko Väänänen, and Dag Westerståhl, 1997, “Definability of Polyadic Lifts of Generalized Quantifiers”, Journal of Logic, Language and Information, 6(3): 305–335. doi:10.1023/A:1008215718090
  • Henkin, Leon, 1961, “Some Remarks on Infinitely Long Formulas”, in Infinitistic Methods: Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 2–9 September 1959, Oxford: Pergamon Press, 167–183.
  • Higginbotham, James and Robert May, 1981, “Questions, Quantifiers and Crossing”, The Linguistic Review, 1(1): 41–79. doi:10.1515/tlir.1981.1.1.41
  • Hintikka, Jaakko, 1973, “Quantifiers vs. Quantification Theory”, Dialectica, 27(3–4): 329–358. doi:10.1111/j.1746-8361.1973.tb00624.x
  • Hopcroft, John E. and Jeffrey D. Ullman, 1979, Introduction to Automata Theory, Languages, and Computation, (Addison-Wesley Series in Computer Science), Reading, MA: Addison-Wesley.
  • Icard III, Thomas F., 2014, “Higher-Order Syllogistics”, in Formal Grammar 2014, Glyn Morrill, Reinhard Muskens, Rainer Osswald, and Frank Richter (eds.), (Lecture Notes in Computer Science 8612), Berlin, Heidelberg: Springer Berlin Heidelberg, 1–14. doi:10.1007/978-3-662-44121-3_1
  • Icard III, Thomas F. and Lawrence S. Moss, 2014, “Recent Progress in Monotonicity”, in Perspectives on Semantic Representations for Textual Inference, (LiLT 9), Stanford, CA: CSLI Publications, 167–194. [Icard and Moss 2014 available online]
  • Icard, Thomas, Lawrence Moss, and William Tune, 2017, “A Monotonicity Calculus and Its Completeness”, in Proceedings of the 15th Meeting on the Mathematics of Language, London, UK: Association for Computational Linguistics, 75–87. doi:10.18653/v1/W17-3408
  • Keenan, Edward L., 1992, “Beyond the Frege Boundary”, Linguistics and Philosophy, 15(2): 199–221. doi:10.1007/BF00635807
  • Keenan, Edward L. and Leonard M. Faltz, 1984, Boolean Semantics for Natural Language, Dordrecht: Springer Netherlands. doi:10.1007/978-94-009-6404-4
  • Keenan, Edward L. and Lawrence S. Moss, 1985, “Generalized Quantifiers and the Expressive Power of Natural Language”, in Generalized Quantifiers in Natural Language, Alice ter Meulen and Johan van Benthem (eds.), Berlin, Boston: De Gruyter, 73–124. doi:10.1515/9783110867909.73
  • Keenan, Edward L. and Denis Paperno (eds.), 2012, Handbook of Quantifiers in Natural Language, (Studies in Linguistics and Philosophy 90), Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-2681-9
  • Keenan, Edward L. and Jonathan Stavi, 1986, “A Semantic Characterization of Natural Language Determiners”, Linguistics and Philosophy, 9(3): 253–326. doi:10.1007/BF00630273
  • Keenan, Edward L. and Dag Westerståhl, 2011, “Generalized Quantifiers in Linguistics and Logic”, in van Benthem and ter Meulen 2011: 859–910.
  • Knowlton, Tyler, Paul Pietroski, Justin Halberda, and Jeffrey Lidz, 2022, “The Mental Representation of Universal Quantifiers”, Linguistics and Philosophy, 45(4): 911–941. doi:10.1007/s10988-021-09337-8
  • Lewis, David, 1975, “Adverbs of Quantification”, in Formal Semantics of Natural Language, Edward L. Keenan (ed.), Cambridge: Cambridge University Press, 3–15. doi:10.1017/CBO9780511897696.003
  • Lindström, Per, 1966, “First Order Predicate Logic with Generalized Quantifiers”, Theoria, 32(3): 186–195. doi:10.1111/j.1755-2567.1966.tb00600.x
  • Linnebo, Øystein, 2006, “Sets, Properties, and Unrestricted Quantification”, in Rayo and Uzquiano 2006: 149–178.
  • Luosto, Kerkko, 2000, “Hierarchies of Monadic Generalized Quantifiers”, Journal of Symbolic Logic, 65(3): 1241–1263. doi:10.2307/2586699
  • Montague, Richard, 1974, Formal Philosophy: Selected Papers of Richard Montague, Richmond H. Thomason (ed.), New Haven, CT: Yale University Press.
  • Moss, Lawrence S., 2015, “Natural Logic”, in The Handbook of Contemporary Semantic Theory, Shalom Lappin and Chris Fox (eds.), second edition, John Wiley & Sons, 646–681.
  • Mostowski, Andrzej, 1957, “On a Generalization of Quantifiers”, Fundamenta Mathematicae, 44(1): 12–36. doi:10.4064/fm-44-1-12-36
  • Mostowski, Marcin, 1998, “Computational Semantics for Monadic Quantifiers”, Journal of Applied Non-Classical Logics, 8(1–2): 107–121. doi:10.1080/11663081.1998.10510934
  • Odic, Darko, Paul Pietroski, Tim Hunter, Justin Halberda, and Jeffrey Lidz, 2018, “Individuals and Non-Individuals in Cognition and Semantics: The Mass/Count Distinction and Quantity Representation”, Glossa: A Journal of General Linguistics, 3(1): 61. doi:10.5334/gjgl.409
  • Paperno, Denis and Edward L. Keenan (eds.), 2017, Handbook of Quantifiers in Natural Language: Volume II, (Studies in Linguistics and Philosophy 97), Cham: Springer International Publishing. doi:10.1007/978-3-319-44330-0
  • Parsons, Terence, 1997 [2017], “The Traditional Square of Opposition”, The Stanford Encyclopedia of Philosophy, (Summer 2017), Edward N. Zalta (ed.). URL = <https://plato.stanford.edu/archives/sum2017/entries/square/>
  • Peters, Stanley and Dag Westerståhl, 2002, “Does English Really Have Resumptive Quantification?”, in The Construction of Meaning, David I. Beaver, Luis D. Casillas Martínez, Brady Z. Clark, and Stefan Kaufmann (eds.), Stanford, CA: CSLI Publications, 181–195.
  • –––, 2006, Quantifiers in Language and Logic, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199291267.001.0001
  • –––, 2013, “The Semantics of Possessives”, Language, 89(4): 713–759. doi:10.1353/lan.2013.0065
  • –––, 2023, “The Semantics of Exceptives”, Linguistics and Philosophy, 46: 197–235. doi:10.1007/s10988-022-09349-y
  • Pietroski, Paul, Jeffrey Lidz, Tim Hunter, and Justin Halberda, 2009, “The Meaning of ‘Most’: Semantics, Numerosity and Psychology”, Mind & Language, 24(5): 554–585. doi:10.1111/j.1468-0017.2009.01374.x
  • Rayo, Agustín, 2012, “Absolute Generality Reconsidered”, in Oxford Studies in Metaphysics Volume 7, Karen Bennett and Dean W. Zimmerman (eds.), Oxford: Oxford University Press, 93–126. doi:10.1093/acprof:oso/9780199659081.003.0004
  • Rayo, Agustín and Gabriel Uzquiano (eds.), 2006, Absolute Generality, Oxford: Clarendon Press.
  • Read, Stephen, 2015, “Aristotle and Łukasiewicz on Existential Import”, Journal of the American Philosophical Association, 1: 535–544.
  • Sher, Gila Y., 1997, “Partially-Ordered (Branching) Generalized Quantifiers: A General Definition”, Journal of Philosophical Logic, 26(1): 1–43. doi:10.1023/A:1017944808396
  • Steinert-Threlkeld, Shane, 2016, “Some Properties of Iterated Languages”, Journal of Logic, Language and Information, 25(2): 191–213. doi:10.1007/s10849-016-9239-6
  • Steinert-Threlkeld, Shane and Thomas F. Icard III, 2013, “Iterating Semantic Automata”, Linguistics and Philosophy, 36(2): 151–173. doi:10.1007/s10988-013-9132-6
  • Steinert-Threlkeld, Shane and Jakub Szymanik, 2019, “Learnability and Semantic Universals”, Semantics and Pragmatics, 12(4). doi:10.3765/sp.12.4
  • Szabolcsi, Anna, 2010, Quantification, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511781681
  • Szymanik, Jakub, 2016, Quantifiers and Cognition: Logical and Computational Perspectives, (Studies in Linguistics and Philosophy 96), Cham: Springer International Publishing. doi:10.1007/978-3-319-28749-2
  • Westerståhl, Dag, 1987, “Branching Generalized Quantifiers and Natural Language”, in Generalized Quantifiers, Peter Gärdenfors (ed.), (Studies in Linguistics and Philosophy 31), Dordrecht: Springer Netherlands, 269–298. doi:10.1007/978-94-009-3381-1_10
  • –––, 1989, “Quantifiers in Formal and Natural Languages”, in Handbook of Philosophical Logic, Dov M. Gabbay and Franz Guenthner (eds.), Dordrecht: Springer Netherlands, 4: 1–131. Reprinted, 2007, Handbook of Philosophical Logic, Dov M. Gabbay and Franz Guenthner (eds.), Dordrecht: Springer Netherlands, 14: 223–338. doi:10.1007/978-1-4020-6324-4_4
  • –––, 1994, “Iterated Quantifiers”, in Dynamics, Polarity, and Quantification, Makoto Kanazawa and Christopher J. Piñón (eds.), (CSLI Lecture Notes 48), Stanford, CA: CSLI Publications, 173–209.
  • –––, 2012, “Classical vs. Modern Squares of Opposition, and Beyond”, in The Square of Opposition: A General Framework for Cognition, Jean-Yves Beziau and Gillman Payette (eds.), Bern: P. Lang, 195–229.
  • –––, 2017, “Sameness”, in Feferman on Foundations, Gerhard Jäger and Wilfried Sieg (eds.), (Outstanding Contributions to Logic 13), Cham: Springer International Publishing, 449–467. doi:10.1007/978-3-319-63334-3_16
  • Williamson, Timothy, 2003, “Everything”, Philosophical Perspectives, 17(1): 415–465. doi:10.1111/j.1520-8583.2003.00017.x

Other Internet Resources

[Please contact the author with suggestions.]

Copyright © 2024 by
Dag Westerståhl<dag.westerstahl@philosophy.su.se>



The Stanford Encyclopedia of Philosophy iscopyright © 2024 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

