Generalized Quantifiers
Generalized quantifiers are now standard equipment in the toolboxes of both logicians and linguists. The purpose of this entry is to describe these tools: where they come from, how they work, and what they can be used to do. The description is by necessity sketchy, but more comprehensive surveys exist in the literature and will be referred to when needed. To fully appreciate the text below, one will need basic familiarity with elementary set-theoretic terminology, and with the language of first-order logic.
 1. Preliminaries
 2. Aristotle
 3. Frege
 4. Generalizing the universal and existential quantifier
 5. Generalized quantifiers of arbitrary type
 6. Topic neutrality
 7. Relativization
 8. Expressive power
 9. Generalized quantifiers and computation
 10. Generalized quantifiers and natural language
 11. Conservativity
 12. Extension
 13. Symmetry and monotonicity
 14. Determiners that are not ISOM
 15. Other quantifier expressions in natural language
 16. Quantification and cognition
 Bibliography
 Academic Tools
 Other Internet Resources
 Related Entries
1. Preliminaries
The term “generalized quantifier” reflects that these entities were introduced in logic as generalizations of the standard quantifiers of modern logic, ∀ and ∃.^{[1]} In retrospect one may say that ∀ and ∃ have been found to be just two instances of a much more general concept of quantifier, making the term “generalized” superfluous. Today it is also common to use just “quantifier” for the general notion, but “generalized quantifier” is still frequent for historical reasons. This article reflects both uses, with a tendency to insert “generalized” in logical contexts, and drop it in linguistic contexts.
It is useful to distinguish quantifier expressions from what they signify or denote, the (generalized) quantifiers themselves. In logical languages, quantifier expressions are variable-binding operators. Thus, ∃ is the familiar operator such that in a formula ∃xφ, ∃x binds all free occurrences of x in φ. It signifies the quantifier ‘there exists’ — we'll see shortly exactly what this object is. Likewise, the symbol Q_{0} is often used as a variable-binding operator signifying ‘there exist infinitely many’.
In natural languages a variety of expressions have been seen as quantifier expressions, for example, each of the following English expressions: everything, nothing, three books, the ten professors, John, John and Mary, only John, firemen, every, at least five, most, all but ten, less than half of the, John's, some student's, no _ except Mary, more male than female, usually, never, each other.^{[2]}
What, then, are generalized quantifiers? Before answering that question, a brief historical prelude is enlightening.
2. Aristotle
Aristotle not only invented logic but also introduced the study of quantification as a main part of the discipline. The syllogistics can be seen as a formal study of the meaning of the four basic quantifier expressions all, no, some, not all, and of their properties. For example, the validity, according to Aristotle, of the syllogism
all(A,B)
all(B,C)
————————
some(A,C)
shows that he considered, in contrast with modern logical usage, all to have existential import, so that all A are B entails that A is not an empty term. Likewise, the validity of the syllogism
some(A,B)
all(B,C)
————————
some(A,C)
expresses that some is monotone increasing (as we now put it) in the second argument. Each valid syllogism formalizes part of the meaning of these quantifier expressions, but Aristotle's study of their properties went beyond the syllogistics. He observed, for example, that some and no are convertible or, as we might now say, symmetric, since they satisfy the scheme
Q(A,B) ⇒ Q(B,A)
in contrast with all and not all. Further, he studied how various forms of negation combined with quantifier expressions in (what was later called) the square of opposition.^{[3]} The circumstance that the syllogistic format in itself is too weak to express interesting pieces of reasoning, and that generations of philosophers nonetheless mechanically continued to present it as the essence of logic far into the 20th century, should not obscure the fact that Aristotle's logic is a decisive contribution to the study of quantification.
Especially interesting in the present context is the fact that these quantifier expressions take two arguments or terms, and thus can be seen as binary relations, both syntactically (as Aristotle no doubt saw them) and semantically: given that terms signify sets of individuals, the expression some can be taken to signify the relation of overlap, i.e., of having nonempty intersection, between two sets, and all signifies the inclusion relation. Note that these are not relations between individuals but between sets of individuals — second-order relations. Indeed, they are exactly the generalized quantifiers some and all, respectively (on a given universe).
This thread — that quantifier expressions signify second-order relations — was not picked up by any of Aristotle's medieval followers (as far as we know). Instead, they picked up on the fact that the two terms have different status: the first combines with the quantifier expression to form a noun phrase (as we now say), which is the subject of the sentence, whereas the second is a verb phrase constituting the predicate. This led them to focus on what the subject — all men, some dogs, no sailors — signified, which conceptually seems to be a harder question. One might surmise that all men signifies every man (or the set of men), and that some dogs signifies some particular dog, but what about no sailors? In fact, one can show that approaches like these are doomed to failure.^{[4]} The modern solution is that noun phrases signify sets of sets of individuals, so that, for example, some dogs signifies the set of sets containing at least one dog — but that appears to require a more abstract and mathematical approach to semantics than the idea, which is at least implicit in Aristotle, that quantifier phrases signify relations between (the denotations of) terms.
3. Frege
The second major historical contribution to the theory of generalized quantifiers came from the ‘inventor’ of modern logic, Gottlob Frege, in the 1870s. In fact, Frege's contribution is twofold. As every philosophy student knows, he introduced the language of predicate logic, with sentential connectives, identity, and the variable-binding operators ∀ and ∃ (in modern notation, not Frege's). These are the quantifiers that logicians during the 1950s began to ‘generalize’. But Frege also explicitly formulated the abstract notion of a quantifier as a second-order relation, or, as he called it, a second-level concept (“Begriff zweiter Stufe”). He was well aware that the four Aristotelian quantifiers were prime examples, but he wanted to avoid the focus on subject-predicate form, which he (with much justification) saw as having been a major obstacle to the development of logic after Aristotle. It was therefore an important discovery that these quantifiers could all be defined in terms of ∀ and sentential operators (replacing all(A, B) by ∀x(A(x) → B(x)), some(A, B) by ¬∀x(A(x) → ¬B(x)), etc.).
In fact, the only difference between Frege's notion of a second-level concept and the modern notion of a generalized quantifier is that Frege did not have the idea of an interpretation or model, which we now (since the advent of model theory in the 1950s) see as a universe that the quantifiers range over, plus an assignment of suitable semantic objects to the non-logical symbols. Frege's symbols all had fixed meanings, and the only universe he considered was the totality of everything. But apart from this, one may well say that it was Frege who discovered generalized quantifiers. This aspect of Frege's logic, however, remained in the background for a long time, and model theorists in the 50s and 60s seem not to have been aware of it.
4. Generalizing the universal and existential quantifier
Modern predicate logic fixes the meaning of ∀ and ∃ with the respective clauses in the truth definition, which specifies inductively the conditions under which a formula φ(y_{1},…,y_{n}) (with at most y_{1},…,y_{n} free) is satisfied by corresponding elements b_{1},…,b_{n} in a model M = (M, I) (where M is the universe and I the interpretation function assigning suitable extensions to non-logical symbols): M ⊨ φ(b_{1},…,b_{n}). The clauses are
 (1) M ⊨ ∀xψ(x, b_{1},…,b_{n}) iff for each a ∈ M, M ⊨ ψ(a, b_{1},…,b_{n})
 (2) M ⊨ ∃xψ(x, b_{1},…,b_{n}) iff there is some a ∈ M s.t. M ⊨ ψ(a, b_{1},…,b_{n})
To introduce other quantifiers, one needs to appreciate what kind of expressions ∀ and ∃ are. Syntactically, they are operators binding one variable in one formula. To see how they work semantically it is useful to rewrite (1) and (2) slightly. First, every formula ψ(x) with one free variable denotes in a model M a subset of M: the set of individuals in M satisfying ψ(x). More generally, if ψ(x, y_{1},…,y_{n}) = ψ(x,[y]) has at most the free variables shown and [b] = b_{1},…,b_{n} are elements of M, let
ψ(x,[b])^{M, x} = {a ∈ M : M ⊨ ψ(a,[b])}
be the extension of ψ(x,[y]) in M relative to b_{1},…, b_{n}. Then we can reformulate (1) and (2) as follows:
 (3) M ⊨ ∀xψ(x,[b]) iff ψ(x,[b])^{M,x} = M
 (4) M ⊨ ∃xψ(x,[b]) iff ψ(x,[b])^{M,x} ≠ ∅
Thus, the conditions on the right hand side emerge as properties of the sets ψ(x,[b]). In fact, we can think of ∀ and ∃ as denoting these properties, i.e., the property of being identical to the universe, and of being nonempty, respectively. And now it is easy to think of other properties of sets that can also be treated as quantifiers, for example, the property of containing at least 5, or exactly 3, elements, or of being infinite.^{[5]}
Note that these properties depend only on the universe M, not on the rest of the model. Extensionally, they are simply sets of subsets of M. This leads to the following definition, essentially from Mostowski (1957):
Definition: A generalized quantifier Q of type <1> is
 (5) syntactically, a variable-binding operator such that whenever φ is a formula so is Qxφ, and Qx binds all free occurrences of x in φ;
 (6) semantically, a mapping from arbitrary universes (nonempty sets) M to a set Q_{M} of subsets of M, which interprets formulas of the form Qxφ according to the clause
 (7) M ⊨ Qxψ(x,[b]) iff ψ(x,[b])^{M,x} ∈ Q_{M}
Here we use the same symbol for the quantifier expression and the mapping that it signifies or denotes. Thus, ∀ is now taken to denote the universal quantifier, also written ∀, which is the mapping given by
∀_{M} = {M}
for all M. Similarly, ∃ denotes the mapping defined by
∃_{M} = {A ⊆ M : A ≠ ∅ }
And here are some other generalized quantifiers:
 (∃_{≥5})_{M} = {A ⊆ M : |A| ≥ 5} (|X| is the size or cardinality of X)
 (∃_{=3})_{M} = {A ⊆ M : |A| = 3}
 (Q_{0})_{M} = {A ⊆ M : A is infinite}
 (Q^{R})_{M} = {A ⊆ M : |A| > |M − A|} (the “Rescher quantifier”)
 (Q_{even})_{M} = {A ⊆ M : |A| is even}
We now have a precise notion of a generalized quantifier, of which ∀ and ∃ are instances, along with infinitely many others. Moreover, we see how to extend first-order logic FO to a logic FO(Q), by adding the clause (5) to the formation rules, and the clause (7) to the truth definition. Similarly, if we add more than one generalized quantifier: FO(Q_{1},…,Q_{n}).
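The clause (7) is easy to make concrete on finite universes. The following sketch treats a type <1> quantifier as a predicate on the extension set ψ^{M,x}; all function names here are illustrative, not standard notation:

```python
def extension(M, psi):
    """The subset of the universe M defined by the formula psi."""
    return {a for a in M if psi(a)}

# Each quantifier maps a universe M and a set A ⊆ M to a truth value.
forall_q   = lambda M, A: A == M                 # ∀: A equals the universe
exists_q   = lambda M, A: len(A) > 0             # ∃: A is nonempty
at_least_5 = lambda M, A: len(A) >= 5            # ∃≥5
rescher_q  = lambda M, A: len(A) > len(M - A)    # Q^R: "more than half"
even_q     = lambda M, A: len(A) % 2 == 0        # Q_even

def holds(Q, M, psi):
    """M ⊨ Qx psi(x) iff the extension of psi belongs to Q_M (clause (7))."""
    return Q(M, extension(M, psi))

M = set(range(10))                               # universe {0,…,9}
assert holds(exists_q, M, lambda a: a > 7)       # {8, 9} is nonempty
assert not holds(forall_q, M, lambda a: a > 7)
assert holds(rescher_q, M, lambda a: a >= 4)     # 6 of 10 elements
assert holds(even_q, M, lambda a: a % 3 == 0)    # {0, 3, 6, 9} has even size
```

Note that the quantifiers themselves never inspect which individuals the elements are, only the set they form; this anticipates the ISOM condition discussed below.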
In such a logic one may be able to say things that are not expressible in FO. For example, it is well-known that in FO the notion of finiteness cannot be expressed. Thus there is no way to say, of an ordering relation <, that each element has only finitely many predecessors, for instance. But this is just the sort of thing one can express in FO(Q_{0}):
 ∀x ¬Q_{0}y(y < x)
Likewise, one cannot say in FO that a (finite) set A contains exactly half of the elements of the universe M, but that is expressible in FO(Q^{R}):
 ¬Q^{R}x A(x) ∧ ¬Q^{R}x ¬A(x)
(The first conjunct says that |A| ≤ |M − A|, and the second that |M − A| ≤ |A|.)
5. Generalized quantifiers of arbitrary types
Further generalization is possible. First, we can let Q bind one variable in two or more formulas. Second, we can let it simultaneously bind two or more variables in (some of) these formulas. The typing of Q indicates this: Q is of type <n_{1},…,n_{k}> (where each n_{i} is a natural number ≥ 1) if it applies to k formulas, and binds n_{i} variables in the ith formula. This explains why the quantifiers in the previous section were said to be of type <1>.
In the general case, one normally chooses distinct variables x_{i1},…, x_{ini} = [x_{i}] for 1 ≤ i ≤ k, so that a formula beginning with Q has the form
 Q[x_{1}],…,[x_{k}] (φ_{1},…,φ_{k})
where all free occurrences of x_{i1},…,x_{ini} in φ_{i} become bound. Now Q associates with each universe M a k-ary relation Q_{M} between relations over M, where the ith argument is an n_{i}-ary relation between individuals. The corresponding clause in the truth definition becomes
 M ⊨ Q[x_{1}],…,[x_{k}] (ψ_{1}([x_{1}],[b]),…,ψ_{k}([x_{k}],[b])) iff Q_{M}(ψ_{1}([x_{1}],[b])^{M,[x1]},…,ψ_{k}([x_{k}],[b])^{M,[xk]})
Here ψ_{i}([x_{i}],[y]) is a formula with at most the free variables shown, [b] is a sequence of elements of M corresponding to [y], and ψ_{i}([x_{i}],[b])^{M,[xi]} is the extension of ψ_{i}([x_{i}],[y]) in M relative to [b], i.e., the set of n_{i}-tuples [a_{i}] such that M ⊨ ψ_{i}([a_{i}],[b]).
This is the official concept of a generalized quantifier in this article. It was introduced by Lindström (1966), and these quantifiers are sometimes called “Lindström quantifiers”.^{[6]} If we fix M to the universe containing ‘everything’, we essentially have Frege's notion of a secondlevel concept.^{[7]}
Q is monadic if on each universe M it is a relation between subsets of M, i.e., if its type is <1,…,1>; otherwise it is polyadic. For example, the Aristotelian quantifiers mentioned earlier are of type <1,1>:

(8)
all_{M}(A, B) ⇔ A ⊆ B
some_{M}(A, B) ⇔ A ∩ B ≠ ∅
no_{M}(A, B) ⇔ A ∩ B = ∅
not all_{M}(A, B) ⇔ A ⊈ B^{[8]}
Here are some more type <1,1> quantifiers:

(14)
(at least five)_{M}(A, B) ⇔ |A ∩ B| ≥ 5
(exactly three)_{M}(A, B) ⇔ |A ∩ B| = 3
(infinitely many)_{M}(A, B) ⇔ A ∩ B is infinite
most_{M}(A, B) ⇔ |A ∩ B| > |A − B|^{[9]}
(an even number of)_{M}(A, B) ⇔ |A ∩ B| is even
MO_{M}(A, B) ⇔ |A| > |B|
I_{M}(A, B) ⇔ |A| = |B| (the “Härtig quantifier”)
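These type <1,1> quantifiers are directly computable as relations between finite sets. A minimal sketch, with illustrative names of my own choosing:

```python
all_q     = lambda A, B: A <= B                   # A ⊆ B
some_q    = lambda A, B: bool(A & B)              # A ∩ B ≠ ∅
no_q      = lambda A, B: not (A & B)              # A ∩ B = ∅
most_q    = lambda A, B: len(A & B) > len(A - B)  # |A ∩ B| > |A − B|
mo_q      = lambda A, B: len(A) > len(B)          # MO
haertig_q = lambda A, B: len(A) == len(B)         # the Härtig quantifier I

# Invented extensions for illustration.
cats = {"tom", "felix", "bibi"}
milk_likers = {"tom", "felix", "rex"}
assert most_q(cats, milk_likers)                  # 2 cats in, 1 cat out
assert not all_q(cats, milk_likers)
assert haertig_q({1, 2, 3}, {"a", "b", "c"})      # only the sizes matter
```

The last assertion already hints at the cardinality-only behavior that ISOM (Section 6) makes precise.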
With monadic quantifiers it is convenient to use just one variable, and let Q bind that same variable in each of the formulas. Thus, to say that most As are not B, for example, one may write
most x(A(x),¬B(x))
in the corresponding logical language, rather than most x, y(A(x),¬B(y)).
Here are a few polyadic quantifiers:
(15)
W_{M}(R) ⇔ R is a well-ordering of M (type <2>)
(Q_{0}^{n})_{M}(R) ⇔ there is an infinite A ⊆ M s.t. A^{n} ⊆ R (type <n>)
Res^{k}(most)_{M}(R, S) ⇔ |R ∩ S| > |R − S| (type <k,k>)
RECIP_{M}(A, R) ⇔ for all distinct a, b ∈ A there is n ≥ 1 and c_{0},…,c_{n} s.t. c_{0} = a and c_{n} = b and c_{i}Rc_{i+1} for i < n (type <1,2>)
W and Q_{0}^{n} come from logic and set theory. Res^{k}(most) is the resumption of most to k-tuples. Resumption can be applied to any quantifier (in the syntax, this means replacing each individual variable by a corresponding k-tuple of variables); it has logical uses but also, like RECIP, uses in the interpretation of certain sentences in natural languages; see Section 14 below.
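The RECIP condition amounts to a reachability requirement on R: every two distinct members of A must be linked by a finite R-chain. An illustrative sketch (my own rendering, with intermediate points drawn from the universe M), using breadth-first search:

```python
from collections import deque

def recip(M, A, R):
    """RECIP_M(A, R): any two distinct a, b in A are connected by a chain
    a = c0 R c1 R ... R cn = b with n >= 1: a plain reachability test."""
    succ = {m: {y for (x, y) in R if x == m} for m in M}
    def reaches(a, b):
        seen, queue = set(), deque([a])
        while queue:
            x = queue.popleft()
            for y in succ[x]:
                if y == b:
                    return True
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return False
    return all(reaches(a, b) for a in A for b in A if a != b)

M = {1, 2, 3, 4}
assert recip(M, {1, 2, 3}, {(1, 2), (2, 3), (3, 1)})   # a cycle links all pairs
assert not recip(M, {1, 2, 3}, {(1, 2), (2, 3)})       # 3 cannot reach 1
```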
6. Topic neutrality
Both Mostowski and Lindström had one additional condition in their definitions of generalized quantifiers: they should not distinguish isomorphic models. Informally, they are topic-neutral: the truth of a statement of the form φ = Qx,yz(A(x), R(y, z)), say, in a model M doesn't depend on the particular individuals M consists of. If the individuals of M are mapped in a one-one fashion onto the individuals of another universe M′, and if A and R are mapped accordingly, one obtains an isomorphic model M′. Isomorphism closure then says that M ⊨ φ iff M′ ⊨ φ.
More formally, if M = (M, I) and M′ = (M′, I′) are models for the same vocabulary V of non-logical symbols, f is an isomorphism from M to M′ iff
 f is a bijection (a one-one onto function) from M to M′;
 whenever P is an n-ary predicate symbol in V and a_{1},…,a_{n} ∈ M, (a_{1},…,a_{n}) ∈ I(P) iff (f(a_{1}),…,f(a_{n})) ∈ I′(P);
 whenever c is an individual constant in V, I′(c) = f(I(c)).
M and M′ are isomorphic, in symbols,
M ≅ M′
if there is an isomorphism from one to the other. Now if Q is a generalized quantifier of type <n_{1},…,n_{k}>, P_{i} is an n_{i}-ary predicate symbol for 1 ≤ i ≤ k, M = (M, I) is a model for the vocabulary {P_{1},…,P_{k}}, and R_{i} = I(P_{i}), we also write
M = (M, R_{1},…, R_{k})
Then Q satisfies isomorphism closure, or just ISOM, if the following holds:
 If (M, R_{1},…,R_{k}) ≅ (M′, R′_{1},…,R′_{k}), then Q_{M}(R_{1},…,R_{k}) ⇔ Q_{M′}(R′_{1},…,R′_{k}).
One easily checks that all the generalized quantifiers exemplified so far are indeed ISOM. We did not include this requirement in the definition of generalized quantifiers, however, since there are natural language quantifiers that do not satisfy it; see below. But logic is supposed to be topic-neutral, so ISOM is almost always imposed. Then two important things follow. First, as indicated above, sentences in logical languages do not distinguish isomorphic models. More precisely, we have the following
FACT
If L = FO(Q_{1},…,Q_{n}), each Q_{i} is ISOM, φ is an L-sentence, and M ≅ M′, then M ⊨ φ ⇔ M′ ⊨ φ.
Second, ISOM takes a particularly interesting form for monadic quantifiers. If M = (M, A_{1},…,A_{k}), where A_{i} ⊆ M for each i, then A_{1},…,A_{k} partition M into 2^{k} pairwise disjoint subsets (some of which may be empty); let us call them the parts of M. We illustrate with k = 2 and M = (M, A, B):
Figure 1
Now it is not hard to see that only the sizes of the parts determine whether two models of this kind are isomorphic or not:
FACT
(M, A_{1},…,A_{k}) ≅ (M′, A′_{1},…,A′_{k}) iff the cardinalities of the corresponding parts are the same.
This shows that monadic and ISOM generalized quantifiers indeed deal only with quantities, i.e., with sizes of sets rather than the sets themselves. The list (14) of type <1,1> generalized quantifiers clearly illustrates this, but also the Aristotelian quantifiers can be formulated in terms of cardinalities,
all_{M}(A, B) ⇔ |A − B| = 0
some_{M}(A, B) ⇔ |A ∩ B| > 0
etc., and similarly for the type <1> examples we gave.
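The FACT above can be checked mechanically for k = 2: the parts of (M, A, B) are A ∩ B, A − B, B − A, and M − (A ∪ B), and an ISOM quantifier such as most cannot separate two models whose parts have equal sizes. A hedged sketch (the models are invented for illustration):

```python
def part_sizes(M, A, B):
    """The cardinalities of the four parts of the monadic model (M, A, B)."""
    return (len(A & B), len(A - B), len(B - A), len(M - (A | B)))

most_q = lambda A, B: len(A & B) > len(A - B)

# Different individuals, equal part sizes: the models are isomorphic,
# so an ISOM quantifier must give them the same truth value.
M1, A1, B1 = {1, 2, 3, 4}, {1, 2, 3}, {2, 3, 4}
M2, A2, B2 = set("wxyz"), {"w", "x", "y"}, {"x", "y", "z"}
assert part_sizes(M1, A1, B1) == part_sizes(M2, A2, B2)   # (2, 1, 1, 0) twice
assert most_q(A1, B1) == most_q(A2, B2)
```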
7. Relativization
Every statement involving a generalized quantifier Q takes place within some universe M. Sometimes it is useful to be able to mirror this relativization to a universe inside M. This means defining a new quantifier with one extra set argument which says that Q behaves on the universe restricted to that argument exactly as it behaves on M. Thus, if Q is of type <n_{1},…,n_{k}>, we define Q^{rel} of type <1,n_{1},…,n_{k}> as follows:
 (Q^{rel})_{M}(A, R_{1},…,R_{k}) ⇔_{def} Q_{A}(R_{1}⨡A,…,R_{k}⨡A)
where R_{i} ⊆ M^{ni} and R_{i}⨡A is the restriction of R_{i} to A, i.e., the set of n_{i}-tuples in R_{i} ∩ A^{ni}.
We have in fact already seen several examples of relativization: since one easily verifies (see the lists (8) and (14)) that
 all = ∀^{rel}
some = ∃^{rel}
at least five = (∃_{≥5})^{rel}
exactly three = (∃_{=3})^{rel}
infinitely many = (Q_{0})^{rel}
most = (Q^{R})^{rel}
an even number of = (Q_{even})^{rel}
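One line of this list, most = (Q^{R})^{rel}, can be verified by brute force over all pairs of subsets of a small universe. A sketch under the finite-universe reading, with illustrative names:

```python
from itertools import combinations

rescher = lambda M, A: len(A) > len(M - A)        # type <1>: |A| > |M − A|

def relativize(Q):
    """Q^rel: evaluate the type <1> quantifier Q on the universe restricted
    to the first argument A, yielding a type <1,1> quantifier."""
    return lambda M, A, B: Q(A, B & A)

most_q = lambda M, A, B: len(A & B) > len(A - B)  # type <1,1> most

M = set(range(5))
subsets = [set(c) for r in range(len(M) + 1) for c in combinations(M, r)]
# (Q^R)^rel and most agree on every pair of subsets of M.
assert all(relativize(rescher)(M, A, B) == most_q(M, A, B)
           for A in subsets for B in subsets)
```

The check works because A − (B ∩ A) = A − B, so restricting the Rescher condition to A gives exactly |A ∩ B| > |A − B|.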
8. Expressive power
We described how generalized quantifiers can be added to FO, resulting in more expressive logics. A logic in this sense roughly consists of a set of sentences, a class of models, and a truth relation (or a satisfaction relation) between sentences and models. Such logics are often called model-theoretic logics, since they are defined semantically in terms of models and truth, rather than proof-theoretically in terms of a deductive system for deriving theorems.^{[10]} Here we restrict attention to logics of the form FO(Q_{1}, Q_{2},…), formed by adding generalized quantifiers to FO, where each quantifier comes with a formation rule and a semantic clause for the truth definition as described in Section 5 above.
There is an obvious way to compare the expressive power of modeltheoretic logics. L_{2} is at least as expressive as L_{1}, in symbols,
L_{1} ≤ L_{2}
if every L_{1}-sentence φ is logically equivalent to some L_{2}-sentence ψ, i.e., φ and ψ are true in the same models. Also, L_{1} and L_{2} have the same expressive power, L_{1} ≡ L_{2}, if L_{1} ≤ L_{2} and L_{2} ≤ L_{1}, and L_{2} is stronger than L_{1}, L_{1} < L_{2}, if L_{1} ≤ L_{2} but L_{2} ≰ L_{1}. Thus, L_{1} < L_{2} if everything that can be said in L_{1} can also be said in L_{2}, but there is some L_{2}-sentence which is not equivalent to any sentence in L_{1}.
How does one establish facts about expressive power? It seems as if in order to show L_{1} ≤ L_{2} one has to go through all of the infinitely many sentences in L_{1} and for each one find an equivalent in L_{2}. But in practice it suffices to show that the generalized quantifiers in L_{1} are definable in L_{2}. If Q is of type <1,2>, say, Q is definable in L_{2} if there is an L_{2}-sentence ψ whose non-logical vocabulary consists exactly of one unary and one binary predicate symbol, such that for all models M = (M, A, R),
Q_{M}(A, R) ⇔ (M, A, R) ⊨ ψ
Similarly for other types. For example, the quantifier all is definable in FO, since the following holds:
all_{M}(A, B) ⇔ (M, A, B) ⊨ ∀x(A(x) → B(x))
Likewise, Q^{R} is definable in FO(most), since
(Q^{R})_{M}(A) ⇔ (M, A) ⊨ most x(x = x, A(x))
(note that all our logics contain the logical apparatus of FO, so they are all extensions of FO). The latter is an instance of the following observation:
 (19) For any generalized quantifier Q, Q is definable in FO(Q^{rel}).
Such facts about definability can be easy or hard to establish,^{[11]} but they suffice to establish positive facts about expressivity, since we have the readily verified
FACT
FO(Q_{1},…,Q_{n}) ≤ L if and only if each Q_{i} is definable in L.
On the other hand, to prove inexpressibility, i.e., that some sentence is not equivalent to any L-sentence, is harder. One way that sometimes works is to establish that L_{2} has some property that L_{1} lacks; then one may conclude that L_{1} ≰ L_{2}. Some properties that are typical of FO, but fail for most stronger logics, are:
 The Löwenheim property: If a sentence is true in some infinite model, it is also true in some countable model.
 The Tarski property: If a sentence is true in some countably infinite model, it is also true in some uncountable model.
 The compactness property: If no model makes every element of the set of sentences Φ true, then there is a finite subset Ψ of Φ such that no model makes every sentence in Ψ true.
 The completeness property: The set of valid sentences is recursively enumerable (i.e., can be generated by some formal system).
For example, FO(Q_{0}) does not have the compactness property.^{[12]} This can be seen by looking at the set of sentences
Φ = {¬Q_{0}x(x = x)} ∪ {θ_{n} : n = 1,2,…}
where θ_{n} is an FO-sentence saying that there are at least n elements in the universe. If you take any finite subset Φ′ of Φ, and M is a universe whose cardinality is the largest n such that θ_{n} belongs to Φ′, then all sentences in Φ′ are true in M. But no universe can make all sentences in Φ true. And this shows that Q_{0} is not definable in FO, i.e., that FO(Q_{0}) ≰ FO, since otherwise we could replace Φ by an equivalent set of FO-sentences; but FO does have the compactness property, so that is impossible.
However, this way of proving inexpressibility only works for logics with properties like those above. Moreover, it only works if infinite universes are allowed, but interesting inexpressibility facts hold also for finite models, for example, the fact that Q^{R} and Q_{even} are not definable in FO, or that most = (Q^{R})^{rel} is not definable in FO(Q^{R}). Logicians have developed much more direct and efficient methods of showing undefinability results that work also for finite models.^{[13]}
The above properties in fact characterize FO, in the sense that no proper extension of FO can have (certain combinations of) them. This is the content of a celebrated theorem about modeltheoretic logics, Lindström's Theorem, a version of which is given below. For an accessible proof see, for example, Ebbinghaus, Flum, and Thomas (1994). We say that a logic L = FO(Q_{1},…,Q_{n}) relativizes if the ‘converse’ of (19) holds for each Q_{i}, i.e., if each (Q_{i})^{rel} is definable in L.
THEOREM (Lindström)
If L is compact and has the Löwenheim property, then L ≡ FO. Also, provided L relativizes, if L is complete and has the Löwenheim property, or if L has both the Löwenheim and the Tarski properties, then L ≡ FO.
9. Generalized quantifiers and computation
In addition to the truth conditions associated with generalized quantifiers, one may study the computations required to establish the truth of a quantified statement in a model. Indeed, generalized quantifiers turn up in various places in the part of computer science that studies computational complexity. We sketch two examples.
Restrict attention to finite universes, assume ISOM throughout, and consider first type <1> quantifiers, or type <1,1> quantifiers that are relativizations (Section 7) of these; we'll see in Section 12 that these types are highly relevant in natural language contexts. Such quantifiers can be seen as binary relations between natural numbers: using the same symbol we have Q_{M}(A) ⇔ Q(|M − A|, |A|).^{[14]} These relations can in turn be coded as sets of words, for example by letting a binary word w_{1}…w_{m} correspond to a model (M, A) where |M| = m, A ⊆ M, and a 1 means ‘belonging to A’, and 0 means ‘not belonging to A’. Thus |A| is the number of 1's; note that by ISOM, the order in the string doesn't matter. So Q is coded as a set W_{Q} of such words (one for each m ≥ 1).^{[15]} One may then ask what it takes to recognize that a word belongs to W_{Q}. The abstract notion of an automaton can be used to give one kind of answer; automata are machines that accept or reject words, and they are classified according to the complexity of the operations they perform.^{[16]}
A finite automaton has a finite number of states including one start state and at least one accepting state. It starts scanning a word at the leftmost symbol in the start state, and at each step it moves one symbol to the right and enters a (possibly) new state, according to a given transition function. If it can move along the whole word ending in an accepting state, the word is accepted. It accepts a set W of words if it accepts all the words in W but no others from the same alphabet. It is easy to construct a finite automaton accepting ∀ (or ∀^{rel} = all), i.e., checking that w consists only of 1's: just remain in the start state = accepting state as long as 1's are encountered, but go to a rejecting state as soon as a 0 is scanned, and remain there whatever is encountered afterwards. A slightly more complex automaton accepts Q_{even}: again there are two states, a start state = the accepting state and a rejecting state, and this time remain in the same state when 0's are scanned, but go to the other state when a 1 is scanned. To end in the accepting state it is then necessary and sufficient that there be an even number of 1's. This machine essentially uses cycles of length 2, whereas the first example had only 1cycles. Call an automaton of the latter kind acyclic. Van Benthem showed that the FOdefinable quantifiers are exactly the ones accepted by finite automata that are acyclic and permutation closed.^{[17]}
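The two automata just described are small enough to sketch directly; the function names are mine, and each function reads a binary word coding a model (M, A), with 1s marking the elements of A:

```python
def accepts_all(word):
    """Acyclic two-state automaton for ∀ (all): reject on the first 0 and
    stay rejected; accept iff the word consists only of 1s."""
    return "0" not in word

def accepts_even(word):
    """Two-state automaton for Q_even: 0s keep the current state, each 1
    flips it.  The start state (0) is accepting, so acceptance means an
    even number of 1s."""
    state = 0
    for ch in word:
        if ch == "1":
            state = 1 - state
    return state == 0

assert accepts_all("1111") and not accepts_all("1101")
assert accepts_even("0110") and not accepts_even("10101")
```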
A slightly more complex automaton, the pushdown automaton, has rudimentary memory resources in the form of a stack of symbols that can be pushed or popped from the top, enabling it to keep track to some extent of what went on at earlier steps. Another result by van Benthem is that the type <1> quantifiers accepted by pushdown automata are precisely those for which the corresponding binary relation between numbers is definable (with first-order means) in additive arithmetic, i.e., in the model (N, +), where N = {0, 1, 2,…}. An example is Q^{R} (or its relativization most): we have Q^{R}(m, n) ⇔ m < n, and the right hand side is definable in (N, +) by ∃x(x ≠ 0 ∧ m + x = n).^{[18]}
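For this proportional case a single counter suffices on the word coding (in pushdown terms, a stack holding identical symbols). The sketch below is my own illustration of the idea, not van Benthem's construction:

```python
def accepts_rescher(word):
    """Accept iff the word has more 1s than 0s, i.e., Q^R holds of the
    coded model: a counter goes up on each 1 and down on each 0,
    mimicking pushes and pops on a one-symbol stack."""
    count = 0
    for ch in word:
        count += 1 if ch == "1" else -1
    return count > 0

assert accepts_rescher("110")        # two 1s, one 0
assert not accepts_rescher("1010")   # exactly half is not more than half
```

No finite automaton can do this, since the counter must be able to grow beyond any fixed bound.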
Thus, an algorithmic characterization is matched with a logical one. This is the main theme in descriptive complexity theory. Consider now the most general abstract automaton or computational device, i.e., Turing machines. One (of many) interesting complexity class(es) is PTIME: A set W of words is PTIME if there is a polynomial p(x) (with natural number coefficients) and a Turing machine accepting W such that whenever w ∈ W has length n, the accepting computation takes at most p(n) steps. PTIME computations are ‘tractable’, in contrast with EXPTIME ones, where the number of steps required may grow exponentially. An early result by Immerman and Vardi is that the PTIME sets of (words coding) finite models are precisely those describable by single sentences in FO(LFP), which is FO with an added mechanism for forming least fixed-points.^{[19]} However, now one needs to represent not just monadic models but arbitrary ones. For example, a binary relation on the universe {1,…,m} can be represented by a word w_{11}…w_{1m}#…#w_{m1}…w_{mm}, where the relation holds of (i, j) iff w_{ij} = 1. But this time the order does seem to matter, and in fact the Immerman and Vardi result just mentioned only holds for models with a given linear order and a binary predicate symbol standing for that order.
Logics like FO(LFP) can be recast as logics of the form FO(Q_{1}, Q_{2},…). Here infinitely many quantifiers may be required, but in some cases a single one suffices. As to FO(LFP), it suffices to add all the resumptions (see the end of Section 5 above) of a single quantifier. More generally, let FO^{*}(Q_{1}, Q_{2},…) be like FO(Q_{1}, Q_{2},…) but with mechanisms for making relativizations (Section 7) and for resuming each Q_{i} to ktuples for each k. Then there is a single quantifier Q such that FO(LFP) = FO^{*}(Q).
So generalized quantifiers remain a simple and versatile way of adding expressive power to FO. One natural question was whether the logical characterization of PTIME mentioned above could be improved using generalized quantifiers, in particular whether one could remove the restriction to ordered structures in this way. The answer, however, turned out to be negative, since Hella proved that the PTIME computable properties of arbitrary finite structures cannot be characterized by adding a finite number of generalized quantifiers to FO, or even to FO(LFP). The question of whether PTIME can be characterized by a logic of the form FO^{*}(Q) remains open, however (indeed, solving it would be a major breakthrough in complexity theory).
10. Generalized quantifiers and natural language
Around 1980 a number of linguists began to realize that generalized quantifiers of type <1> and <1,1> readily lend themselves to the interpretation of noun phrases (NP), which are ubiquitous syntactic units in many languages.^{[20]} This is immediately clear if one looks at the structure of a typical English sentence whose subject is a quantified NP:
The (subject) NP consists of a determiner and a noun (N). Both the noun and the verb phrase (VP) have sets as extensions, and so the determiner is naturally taken to denote a binary relation between sets, i.e., a type <1,1> quantifier. An utterance of (20) has a (discourse) universe in the background (say, the set of people at a particular university), but the meaning of most, every, at least five and similar expressions is not tied to particular universes. For example, the meaning of all in
 All cats like milk.
 All electrons have negative charge.
 All natural numbers have a successor.
 All twins like each other.
 All compact subsets of Hausdorff spaces are closed.
has nothing to do with cats or electrons or numbers or twins or Hausdorff spaces, nor with the discourse universes that may be associated with the above examples. It simply stands for the inclusion relation, regardless of what we happen to be talking about. Therefore, the generalized quantifier all, which with each universe M associates the inclusion relation over M, is eminently suitable to interpret all, and similarly for other determiners.
However, it is characteristic of sentences of the form (20) that the noun argument and the VP argument are not on a par. The noun combines with the determiner to form the NP, a separate constituent, and this constituent can also be taken to signify a generalized quantifier, this time of type <1>. Thus, at least five students denotes the set of subsets of the universe which contain at least five students. This quantifier results from freezing the first argument of the type <1,1> five to the set of students; we write this five^{student}. In general, if A is a fixed set and Q a type <1,1> quantifier, one may define the type <1> quantifier Q^{A} by
 (Q^{A})_{M}(B) ⇔_{def} Q_{M ∪ A}(A, B)
for any M and any B ⊆ M. In a compositional semantics it is natural to take each constituent part of a sentence to have a separate signification or meaning, and the default significations of noun phrases are type <1> quantifiers.
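As an illustration (not part of the entry's formal apparatus), these definitions can be modeled on finite universes in a few lines of Python, with type <1,1> quantifiers as binary predicates on sets and freezing as in the definition of Q^{A}; the names below are my own, and since the quantifiers used are EXT, the universe parameter M is suppressed:

```python
# A sketch: type <1,1> quantifiers as predicates on finite sets,
# and "freezing" the restriction argument to get a type <1> quantifier Q^A.
# (The universe M is left implicit, which is harmless for EXT quantifiers.)

def all_q(A, B):          # all: the inclusion relation
    return A <= B

def at_least(n):          # at least n: |A ∩ B| >= n
    return lambda A, B: len(A & B) >= n

def freeze(Q, A):
    """Q^A: the type <1> quantifier obtained by fixing the restriction to A."""
    return lambda B: Q(A, B)

students = {"ann", "bo", "cy", "di", "ed"}
smokers = {"ann", "bo", "cy", "di", "ed", "flo"}

at_least_five_students = freeze(at_least(5), students)
print(at_least_five_students(smokers))   # True
print(freeze(all_q, students)(smokers))  # True: all students smoke
```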
This holds also for some NPs that lack determiners, such as proper names. While John is assigned some individual j by an interpretation, the NP John can be taken to denote the quantifier I_{j}, defined, for any M, by
(I_{j})_{M} = {B ⊆ M : j ∈ B}
This is in fact motivated, not only because the interpretation of NPs becomes more uniform, but also because John can combine with quantified NPs; cf.
 John and three professors came to the meeting.
Here it is convenient if John and three professors have the same semantic category. Note that generalized quantifiers have a clear Boolean structure; define (here in the type <1> case, but similarly for any other type)
(Q_{1} ∧ Q_{2})_{M}(A) ⇔ (Q_{1})_{M}(A) and (Q_{2})_{M}(A)
(¬Q)_{M}(A) ⇔ not Q_{M}(A)
Then we can take the complex NP in (23) to denote I_{j} ∧ three^{professor}. Similarly, the complex NP in
 John and Mary came to the meeting.
signifies I_{j} ∧ I_{m}.
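The Boolean operations just defined, and the quantifiers I_{j}, are equally easy to model; the following sketch (toy data, my own names; three read here as at least three) computes the denotation of the subject NP of (23):

```python
# Type <1> quantifiers as predicates on sets, with the Boolean
# operations defined in the text.

def indiv(j):
    """I_j: holds of B iff j is in B."""
    return lambda B: j in B

def conj(Q1, Q2):
    return lambda B: Q1(B) and Q2(B)

def neg(Q):
    return lambda B: not Q(B)

professors = {"p1", "p2", "p3", "p4"}
three_professors = lambda B: len(professors & B) >= 3  # 'three' as 'at least three'

came = {"john", "p1", "p2", "p3"}

john_and_three_professors = conj(indiv("john"), three_professors)
print(john_and_three_professors(came))  # True
print(neg(indiv("mary"))(came))         # True: Mary did not come
```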
The first argument (coming from the noun) of a type <1,1> determiner denotation is often called its restriction, and the second its scope. The difference in syntactic status between these two arguments turns out to have a clear semantic counterpart.
11. Conservativity
It was observed early on that type <1,1> quantifiers denoted by determiners in natural languages have the following property:
 Conservativity
(CONSERV):
For all M and all A, B ⊆ M, Q_{M}(A, B) ⇔ Q_{M}(A, A ∩ B).
This can be seen from sentence pairs such as the following, where it is clear that the second sentence is just an awkward way of expressing the first:
 Most students smoke.
 Most students are students who smoke.
 At least five professors were absent.
 At least five professors were absent professors.
 More than one third of the graduate students are foreigners.
 More than one third of the graduate students are foreign graduate students.
CONSERV says that only the part of B which is common to A matters for the truth of Q_{M}(A, B). That is, the part B − A in Figure 1 doesn't matter. This appears to hold for all determiner denotations, but it fails for perfectly natural logical quantifiers, such as MO and I from the list (14) above. The reason is that it is characteristic of determiner denotations that the restriction argument restricts the domain of quantification to that argument.
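Since CONSERV is a universally quantified condition over pairs of subsets, it can be checked by brute force on small finite universes. A sketch (my own illustration, assuming MO_{M}(A, B) ⇔ |A| > |B| and I_{M}(A, B) ⇔ |A| = |B|, the Härtig quantifier, as in list (14)):

```python
# Brute-force check of CONSERV on a small universe.
from itertools import combinations

def subsets(M):
    return [set(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def conserv(Q, M):
    """CONSERV: Q_M(A, B) iff Q_M(A, A ∩ B), for all A, B ⊆ M."""
    S = subsets(M)
    return all(Q(A, B) == Q(A, A & B) for A in S for B in S)

most = lambda A, B: len(A & B) > len(A - B)
MO = lambda A, B: len(A) > len(B)     # 'there are more As than Bs'
I = lambda A, B: len(A) == len(B)     # the Härtig quantifier

M = {0, 1, 2}
print(conserv(most, M))  # True
print(conserv(MO, M))    # False
print(conserv(I, M))     # False
```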
12. Extension
Actually, the idea of domain restriction has one further ingredient. To restrict the domain of quantification to a subset A of M means not only that B − A is irrelevant, but that the whole part of M lying outside A is, and hence also the part M − (A ∪ B) in Figure 1. This in turn is an instance of a more general property, applicable to arbitrary generalized quantifiers:
 Extension
(EXT):
If Q is of type <n_{1},…,n_{k}>, R_{i} ⊆ M^{n_{i}} for 1 ≤ i ≤ k, and M ⊆ M′, then
Q_{M}(R_{1},…, R_{k}) ⇔ Q_{M′}(R_{1},…, R_{k}).
That is, nothing happens when the universe is extended, or shrunk, as long as the arguments are not changed. Now recall that for type <1> quantifiers we already provided a logical mechanism for restricting the quantification domain to a subuniverse, in terms of relativization (Section 7). We can now see (in (b) below) that the combination of CONSERV and EXT amounts to exactly the same thing:
FACT
 (a) For any quantifier Q, Q^{rel} satisfies EXT.
 (b) A type <1,1> quantifier is CONSERV and EXT if and only if it is the relativization of a type <1> quantifier.^{[21]}
Again, all determiner denotations appear to satisfy EXT. At first sight, nothing in principle would seem to prevent a language from containing a determiner, say evso, which meant every on universes with fewer than 10 elements and some on larger universes. But not only is there in fact no such determiner in any language — there couldn't be, if the noun (restriction) argument of a determiner is to restrict the domain of quantification to the denotation of that noun.
A quantifier such as evso is intuitively not constant, in the sense that it doesn't mean the same, or is not interpreted by the same rule, on every universe. EXT can be seen as a strong requirement of constancy: the rule interpreting Q doesn't even mention the universe. Indeed, many quantifiers from language and logic are EXT. As we saw, all relativized quantifiers are EXT, and so are all the other quantifiers in the lists (13)–(15), except W.^{[22]} In fact, it seems that all quantifiers taking more than one argument that show up in natural language contexts are EXT. And many type <1> quantifiers are EXT too, for example, ∃, I_{j}, Q^{A} (see (22) above), and all those in the list (8) except Q^{R}.
But ∀ and Q^{R} are not EXT. Yet one is inclined to say for them too that they mean the same on every universe, so it seems that EXT is a sufficient condition for constancy, but not a necessary one. The case of ∀ is particularly interesting since one might argue that it interprets NPs like everything or every thing. The crux here is thing. If this expression is seen as a logical constant that always denotes the universe, then these NPs do denote ∀: for all M and all B ⊆ M,
(every^{thing})_{M}(B) ⇔ every_{M}(M, B) ⇔ M ⊆ B ⇔ M = B ⇔ ∀_{M}(B)
When EXT holds, we can usually drop the subscript M and write, for example,
Q(A, B)
rather than Q_{M}(A, B). That is, a suitable universe can be presupposed but left in the background. We often follow this convention below when dealing with EXT quantifiers.
13. Symmetry and monotonicity
Other properties are not shared by all natural language quantifiers but single out important subclasses. We mentioned two already in Section 2 above: symmetry and monotonicity. Typical symmetric quantifiers are some, no, at least five, exactly three, an even number of, infinitely many, whereas all, most, at most one-third of the are non-symmetric. Another way to express symmetry is to say that the truth-value of Q(A,B) only depends on the set A ∩ B. More precisely, call Q intersective if for all M and all A,A′,B,B′ ⊆ M:
 if A ∩ B = A′ ∩ B′ then Q_{M}(A,B) ⇔ Q_{M}(A′,B′).
One easily verifies:
FACT
For conservative type <1,1> quantifiers, symmetry and intersectivity are equivalent.^{[23]}
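This FACT, too, can be tested exhaustively on small universes. A sketch (my own illustration; the loops simply spell out the two definitions):

```python
# Brute-force check of symmetry and intersectivity on a small universe.
from itertools import combinations

def subsets(M):
    return [set(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def symmetric(Q, M):
    S = subsets(M)
    return all(Q(A, B) == Q(B, A) for A in S for B in S)

def intersective(Q, M):
    """Q(A, B) depends only on A ∩ B."""
    S = subsets(M)
    return all(Q(A, B) == Q(A2, B2)
               for A in S for B in S for A2 in S for B2 in S
               if A & B == A2 & B2)

some = lambda A, B: len(A & B) > 0
allq = lambda A, B: A <= B

M = {0, 1}
print(symmetric(some, M), intersective(some, M))  # True True
print(symmetric(allq, M), intersective(allq, M))  # False False
```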
We noted that some of the syllogisms express monotonicity properties. In more succinct notation, a type <1,1> quantifier Q is
 right increasing (right decreasing) iff for all M and all A,B ⊆ B′ ⊆ M (all A,B′ ⊆ B ⊆ M), Q_{M}(A,B) implies Q_{M}(A,B′).
Similarly for left increasing or decreasing, and indeed for monotonicity in any given argument place of a generalized quantifier. In particular, it is clear what it means for a type <1> quantifier to be monotone. Monotonicity is ubiquitous among natural language quantifiers. It seems that syntactically simple English NPs are all monotone (denote increasing or decreasing type <1> quantifiers), and almost all syntactically simple English determiners are right monotone.^{[24]} We have:
 The quantifiers I_{j} (proper names) are increasing.
 Q^{A} is increasing (decreasing) iff Q is right increasing (decreasing).
The Aristotelian all, some, no are monotone in both arguments (e.g. all is right increasing and left decreasing), as are at least five, no more than ten, infinitely many, whereas most, at least two-thirds of the are right increasing but neither increasing nor decreasing in the left argument. Exactly three, between two and seven are non-monotone, though both of these are conjunctions of a (right and left) increasing and a decreasing quantifier (e.g. at least three and at most three), in contrast with an even number of, which is not a (finite) Boolean combination of monotone quantifiers.
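Right monotonicity is again a finitely checkable condition on small models. A sketch (illustrative names, my own), confirming that most is right increasing, no is right decreasing, and exactly three is neither:

```python
# Brute-force check of right monotonicity on a small universe.
from itertools import combinations

def subsets(M):
    return [set(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def right_increasing(Q, M):
    """If Q(A, B) and B ⊆ B2, then Q(A, B2)."""
    S = subsets(M)
    return all((not Q(A, B)) or Q(A, B2)
               for A in S for B in S for B2 in S if B <= B2)

def right_decreasing(Q, M):
    """If Q(A, B2) and B ⊆ B2, then Q(A, B)."""
    S = subsets(M)
    return all((not Q(A, B2)) or Q(A, B)
               for A in S for B in S for B2 in S if B <= B2)

most = lambda A, B: len(A & B) > len(A - B)
no_q = lambda A, B: len(A & B) == 0
exactly_3 = lambda A, B: len(A & B) == 3

M = {0, 1, 2, 3}
print(right_increasing(most, M))   # True
print(right_decreasing(no_q, M))   # True
print(right_increasing(exactly_3, M), right_decreasing(exactly_3, M))  # False False
```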
Both symmetry and monotonicity have important explanatory roles for certain linguistic phenomena. Symmetry is a feature of (most of) the quantifiers allowed in so-called existential there sentences (e.g. There are at least five men in the garden is fine, but There are most men in the garden is not). Monotonicity is crucial for explaining the distribution of polarity items (No one will ever succeed is fine but Someone will ever succeed is not: negative polarity items such as ever require a decreasing environment).^{[25]}
14. Determiners that are not ISOM
Consider
 John's books were stolen.
 Some student's books have not been returned.
 No professor except Mary came to the meeting.
 All beach goers except a few enthusiastic swimmers were fully clothed.
 More male than female students smoke.
The expressions John's, some student's, no _ except Mary, all _ except a few enthusiastic swimmers, more male than female are quite naturally seen as determiners: when combined with nouns they form phrases that behave like ordinary NPs. Also, the type <1,1> quantifiers they signify are CONSERV and EXT. For example, the sentences in the following pair are trivially equivalent:
 John's books were stolen.
 John's books are books that were stolen.
But in contrast with the previous examples, they are not ISOM, since they involve some fixed individual or property: if John's books were stolen, and the number of stolen books is the same as the number of red pencils (in some discourse universe), and the number of books that weren't stolen is the same as the number of pencils that aren't red, it does not follow that John's pencils are red, as ISOM would have it.
However, just as the non-ISOM quantifier three^{student} results by freezing the restriction argument of the EXT quantifier three, the non-ISOM quantifiers above result by freezing arguments in more abstract relations, which are ISOM. We illustrate this with the possessive determiner John's.^{[26]}
Given that John denotes an individual j, the determiner John's can be defined, for all M and all A, B ⊆ M, by^{[27]}
 John's_{M}(A, B) ⇔ ∅ ≠ {x ∈ A : R_{0}(j, x)} ⊆ B
Here R_{0} is some ‘possessor’ relation; it is wellknown that this relation varies a lot with the circumstances — one could be talking about the books that John owns, or has written, or borrowed, or bought as a present to Mary, etc. Suppose R_{0} is ownership. Then (34) says that John owns at least one book, and that all of the books he owns were stolen. Now consider the more general ‘quantifier’ defined, for a ∈ M, R ⊆ M^{2}, and A, B ⊆ M, by
P_{M}(a, R, A, B) ⇔ ∅ ≠ {x ∈ A : R(a, x)} ⊆ B
We could say that this is a generalized quantifier of type <0,2,1,1>, letting 0 stand for individuals. P is ISOM (extending definition (16) in the obvious way to quantifiers of this type), and John's results by freezing the first two arguments to suitable values.
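This two-step construction (an ISOM skeleton plus freezing) is easy to make concrete; the following sketch uses a toy ownership relation of my own invention:

```python
# The ISOM 'skeleton' P of type <0,2,1,1> from the text, and the
# non-ISOM determiner denotation John's obtained by freezing its
# first two arguments. Toy data for illustration only.

def P(a, R, A, B):
    """P(a, R, A, B): the As that a bears R to form a nonempty subset of B."""
    possessed = {x for x in A if (a, x) in R}
    return possessed != set() and possessed <= B

def possessive(a, R):
    """Freeze the individual and 'possessor' relation arguments."""
    return lambda A, B: P(a, R, A, B)

owns = {("john", "b1"), ("john", "b2"), ("mary", "b3")}
books = {"b1", "b2", "b3"}
stolen = {"b1", "b2"}

johns = possessive("john", owns)
print(johns(books, stolen))  # True: John owns books, and all of them were stolen
```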
Similar constructions work for other cases of quantifier expressions in natural languages that denote nonISOM quantifiers. For example, the determiner no _ except Mary denotes (given that Mary refers to m)
(no _ except Mary)_{M}(A, B) ⇔ A ∩ B = {m}
That is, (36) says that Mary is a professor, that she came to the meeting, and that no other professor did. Again, a corresponding ISOM quantifier of type <0,1,1> is readily defined. In conclusion, ISOM remains a fundamental property in the semantics of quantifier expressions, and not only in logical languages.
Digression: ISOM, i.e., topic-neutrality, is standardly seen as a mark of logicality.^{[28]} It is possible to distinguish this property from constancy, in the earlier mentioned sense of meaning the same over different universes. One might therefore consider ISOM + EXT as an attempt to capture the idea of a logical constant. This doesn't quite work, since ISOM appears to be a necessary (but perhaps not sufficient) condition for logicality, and EXT a sufficient (but not necessary) condition of constancy. As noted, ∀ is not EXT (but perhaps one could live with that, since ∃ (and negation) is EXT, so the usual logical constants are at least definable from ISOM + EXT ones). In any case, this combination of properties has an even more succinct formulation, which is just like ISOM except that one allows arbitrary injections of the universe. Suppose Q is of type <n_{1},…,n_{k}>:
 Injection
(INJ):
If R_{i} ⊆ M^{n_{i}} for 1 ≤ i ≤ k, and f is an injection (a one-one function) from M into M′, then Q_{M}(R_{1},…,R_{k}) ⇔ Q_{M′}(f(R_{1}),…,f(R_{k}))
(where f(R_{i}) = {(f(a_{1}),…, f(a_{n_{i}})) : (a_{1},…,a_{n_{i}}) ∈ R_{i}}). Then it is easy to verify the following
FACT
ISOM + EXT = INJ
There has been some discussion about whether quantifiers like most or two-thirds of the are logical constants. It is not clear that this is a well-defined question since the notion of a logical constant is notoriously hard to pinpoint. But that these quantifiers satisfy INJ is certain. End of digression.
15. Other quantifier expressions in natural language
Generalized quantifiers of other types than <1> and <1,1> have also been used to account for natural language quantification. Without going into detail about the often complex issues of interpretation arising here, we illustrate with a few examples.^{[29]}
 More women than men smoke.
If more women than men is an NP, which seems to be the case, then the determiner more _ than takes two restriction arguments and one scope argument, and thus denotes the type <1,1,1> generalized quantifier
more _ than(A, B, C) ⇔ |A ∩ C| > |B ∩ C|
Keenan and Moss (1984) provide a thorough account of similar constructions requiring monadic quantifiers with more than two arguments.^{[30]} More _ than is ISOM and EXT. It is also conservative in an extended sense, for type <1,1,1> quantifiers Q where the first two arguments are restrictions:
 Q_{M}(A, B, C) ⇔ Q_{M}(A, B, (A ∪ B) ∩ C)
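A small sketch of this type <1,1,1> quantifier and its extended conservativity, on toy data of my own:

```python
# The type <1,1,1> quantifier denoted by "more _ than":
# |A ∩ C| > |B ∩ C|, with two restriction arguments A, B and scope C.
more_than = lambda A, B, C: len(A & C) > len(B & C)

women, men = {"w1", "w2", "w3"}, {"m1", "m2"}
smoke = {"w1", "w2", "m1", "x"}   # "x" is neither a woman nor a man

print(more_than(women, men, smoke))  # True: 2 women vs. 1 man smoke

# Extended conservativity: only the part of C inside A ∪ B matters.
print(more_than(women, men, smoke)
      == more_than(women, men, (women | men) & smoke))  # True
```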
Next, consider
 People usually are grateful to firemen who rescue them.
 Men seldom make passes at girls who wear glasses. (Dorothy Parker)
Adverbs like usually, seldom, always, never can be taken to denote generalized quantifiers.^{[31]} For example, Dogs never meow is roughly synonymous with No dogs meow. But for (44), it can be argued that there is a reading where the quantifier applies to pairs: among the pairs consisting of a person and a fireman who rescues that person, a majority are such that the person is grateful. This is just the resumption of most to pairs, which we defined in (15):
Res^{2}(most)(R, S) ⇔ |R ∩ S| > |R − S|
So in (44a), R(a, b) iff a ∈ person and b ∈ fireman and a rescued b, and S(a, b) iff a is grateful to b. It can be shown that for many quantifiers, in particular most, Res^{n}(Q) is not definable in FO(Q). In fact, Res^{2}(most) is not definable from any finite number of monadic quantifiers, so it is an example of an irreducibly polyadic quantifier.^{[32]}
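The resumed quantifier itself is straightforward to model, since Python sets can contain tuples; the rescue data below is my own toy example for (44a):

```python
# Res²(most) as a quantifier over pairs: among the pairs in R,
# more are in S than not.
def res2_most(R, S):
    return len(R & S) > len(R - S)

# Pairs (person, fireman who rescued that person):
rescued = {("a", "f1"), ("b", "f1"), ("c", "f2")}
# Pairs where the person is grateful to the fireman:
grateful = {("a", "f1"), ("b", "f1")}

print(res2_most(rescued, grateful))  # True: 2 of the 3 rescue pairs
```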
On the other hand, sentences with nested ordinary NPs, such as
 Most films were reviewed by two critics.
can also be construed with a polyadic quantifier Q, this time of type <1,1,2>, whose truth conditions may be given by the following logical sentence,
Qx, y, zu(A(x), B(y), R(z, u)) ↔ most x(A(x), two y(B(y), R(x, y)))
but as this sentence shows, Q is definable from most and two.^{[33]}
Next:
 Five Boston pitchers sat alongside each other.
This can have the truth conditions
∃X ⊆ Boston pitcher [|X| = 5 & RECIP(X, sat alongside)]
where RECIP is the type <1,2> quantifier defined in (15). That is, there is a set of five Boston pitchers such that any two of them either sit next to each other or have at most three pitchers (all from the chosen set) sitting between them. This is just one of several polyadic quantifiers that occur in reciprocal sentences.^{[34]}
Finally, consider the sentence
 Most boys in your class and most girls in my class have all dated each other.
This has been put forward as an example of branching quantification, which can be written in a two-dimensional logical format as
most x A(x)
            > R(x, y)
most y B(y)
where the intended reading is that there is a subset X of A containing most of the elements of A, and a similarly large subset Y of B, such that each pair (a, b) where a ∈ X and b ∈ Y belongs to the relation R. More generally, we have a polyadic quantifier of type <1,1,2> defined for any Q_{1}, Q_{2} of type <1,1> by
Br(Q_{1}, Q_{2})(A, B, R) ⇔ ∃X ⊆ A ∃Y ⊆ B [Q_{1}(A, X) & Q_{2}(B, Y) & X × Y ⊆ R]^{[35]}
Quite plausibly, this gives a reading of (47). Note that x and y here are independent of each other. If one instead would use any one of the linear sentences
most x(A(x), most y(B(y), R(x, y)))
most y(B(y), most x(A(x), R(x, y)))
either y depends on x or vice versa. For example, it is not hard to see that the first of these sentences has the truth condition
∃X ⊆ A [most(A, X) & ∀a ∈ X ∃Y ⊆ B [most(B, Y) & Y ⊆ {b : R(a, b)}]]
which clearly shows that the sets Y depend on X, in contrast with the situation in (49). The point of the two-dimensional partially ordered structure in (48) is that this semantic independence is also reflected in the syntax.
It can be shown that Br(most, most) is not expressible in FO(most) alone; indeed not with any finite number of monadic quantifiers.^{[37]}
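On finite models, the definition of Br(Q_{1}, Q_{2}) can be evaluated directly by searching for the witness sets X and Y; a sketch, with toy dating data of my own (the exponential search over all subsets also hints at why branching is computationally harder than the linear readings):

```python
# Brute-force evaluation of the branching quantifier Br(Q1, Q2).
from itertools import combinations

def subsets(M):
    return [set(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def most(A, X):
    return len(A & X) > len(A - X)

def branch(Q1, Q2, A, B, R):
    """Br(Q1, Q2)(A, B, R): some X ⊆ A, Y ⊆ B with Q1(A, X), Q2(B, Y), X × Y ⊆ R."""
    return any(Q1(A, X) and Q2(B, Y) and
               all((a, b) in R for a in X for b in Y)
               for X in subsets(A) for Y in subsets(B))

boys, girls = {"b1", "b2", "b3"}, {"g1", "g2", "g3"}
# Two boys and two girls have all dated each other:
dated = {(b, g) for b in ["b1", "b2"] for g in ["g1", "g2"]}

print(branch(most, most, boys, girls, dated))  # True: X={b1,b2}, Y={g1,g2}
```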
Many other constructions in natural languages can be taken to involve generalized quantifiers, but the above sentences at least indicate some of the relevant examples.
16. Quantification and cognition
Recent years have seen a growing interest in connecting semantics and cognition, in particular, how speakers understand and learn quantified expressions. One strand of research concerns monotonicity (Section 13). Already Barwise and Cooper (1981) noted the ubiquity of monotone quantifiers in natural languages, and suggested a way of showing that monotone quantifiers are easier to process than nonmonotone ones, and that increasing quantifiers are easier than decreasing ones. They also suggested that psychological experiments might be used to test their hypothesis. Their technical proposal was developed further by van Benthem (see van Benthem, 1986), who introduced a notion of count complexity and showed that, under some assumptions, the quantifiers with minimal count complexity are precisely the ones with a certain strong monotonicity property.^{[38]} Psychological experiments regarding monotonicity were carried out only recently; see Geurts and van der Slik (2005).^{[39]} Subjects were asked whether certain inferences involving monotonicity were valid or not, and the results largely corroborated the earlier hypotheses.
The meaning of individual determiners has also been studied empirically; a recent example is the study of most in Pietroski et al. (2009), where the method is to show subjects a picture with yellow and blue dots for a very short time (too brief to allow counting) and ask subjects if it is true or false that most of the dots are yellow. This relates to the study of human number sense and its relation to understanding quantificational language. One might entertain a ‘Whorfian’ hypothesis that the latter is a prerequisite for the former. This was tested with neurobiological methods (brain-scan methods combined with psychological tests with patients suffering from various brain disorders) in Clark and Grossman (2007). They did not find any empirical support for that hypothesis; see also Clark (2011b) for a description of the experiment and an overview of research on quantification and number sense.
We already mentioned in Section 9 that the automata-theoretic complexity hierarchy has been used to classify the cost of processing various quantifiers. A further application of that hierarchy to the learning of quantifiers is described in Clark (2011a). Recently, descriptive complexity (Section 9) has also been applied to natural language quantification. For example, it was shown in Sevenster (2006) that the branching of most as in (48) in Section 15 is intractable.^{[40]} This line of research was taken further in Szymanik (2009). He proved that if the operations of resumption (Section 15) and iteration (as in (45), Section 15) are applied to PTIME quantifiers, the result is again in PTIME, whereas the application of branching, as we saw, leads to intractability. Likewise, the quantifiers occurring in some reciprocal sentences (cf. (46) in Section 15) are intractable. Szymanik argues that these facts reflect processing difficulty, and reports psychological experiments aimed at corroborating various hypotheses about processing and descriptive complexity (see Szymanik 2009, and references therein).
In conclusion, the investigation of how speakers process quantified expressions, combining the basic model-theoretic analysis with methods from psychology, neuroscience, and computer science, promises to be a rich field of study in years to come.
Bibliography
 Barwise, J., 1978, “On branching quantifiers in English,” Journal of Philosophical Logic, 8: 47–80.
 Barwise, J. and Cooper, R., 1981, “Generalized quantifiers and natural language,” Linguistics and Philosophy, 4: 159–219.
 Barwise, J. and Feferman, S. (eds.), 1985, Model-Theoretic Logics, Berlin: Springer-Verlag.
 Bonnay, D., 2008, “Logicality and invariance,” Bulletin of Symbolic Logic, 14: 29–68.
 Clark, R., 2011a, “On the learnability of quantifiers,” in J. van Benthem and A. ter Meulen (eds.), Handbook of Logic and Language. Second Edition, Amsterdam: Elsevier, 911–923.
 Clark, R., 2011b, “Generalized quantifiers and number sense,” to appear in Philosophy Compass.
 Clark, R. and Grossman, M., 2007, “Number sense and quantifier interpretation,” Topoi, 26: 51–62.
 Dalrymple, M., Kanazawa, M., Kim, Y., Mchombo, S., and Peters, S., 1998, “Reciprocal expressions and the concept of reciprocity,” Linguistics and Philosophy, 21: 159–210.
 Ebbinghaus, H.D. and Flum, J., 1995, Finite Model Theory, Berlin: Springer.
 Ebbinghaus, H.D., Flum, J., and Thomas, W., 1994, Mathematical Logic, Berlin: Springer.
 Feferman, S., 2010, “Settheoretical invariance criteria for logicality,” Notre Dame Journal of Formal Logic, 51: 3–20.
 Geurts, B. and van der Slik, F., 2005, “Monotonicity and processing load,” Journal of Semantics, 22: 97–117.
 Hella, L., Väänänen, J., and Westerståhl, D., 1997, “Definability of polyadic lifts of generalized quantifiers,” Journal of Logic, Language and Information, 6: 305–335.
 Henkin, L., 1961, “Some remarks on infinitely long formulas,” in Infinitistic Methods, Oxford: Pergamon Press, 167–183.
 Higginbotham, J. and May, R., 1981, “Questions, quantifiers and crossing,” Linguistic Review, 1: 41–79.
 Hintikka, J., 1973, “Quantifiers vs. quantification theory,” Dialectica, 27: 329–358.
 Hopcroft, J.E. and Ullman, J.D., 1979, Introduction to Automata Theory, Languages, and Computation, London: Addison-Wesley.
 Keenan, E. and Moss, L., 1984, “Generalized quantifiers and the expressive power of natural language,” in J. van Benthem and A. ter Meulen (eds.), Generalized quantifiers in Natural Language, Dordrecht: Foris, 73–124.
 Keenan, E. and Stavi, J., 1986, “A semantic characterization of natural language determiners,” Linguistics and Philosophy, 9: 253–326.
 Keenan, E. and Westerståhl, D., 2011, “Generalized quantifiers in linguistics and logic (revised version),” in J. van Benthem and A. ter Meulen (eds.), Handbook of Logic and Language. Second Edition, Amsterdam: Elsevier, 859–910.
 Lewis, D., 1975, “Adverbs of quantification,” in E. Keenan (ed.), Formal Semantics of Natural Language, Cambridge: Cambridge University Press, 3–15.
 Lindström, P., 1966, “First-order predicate logic with generalized quantifiers,” Theoria, 32: 186–195.
 Luosto, K., 2000, “Hierarchies of monadic generalized quantifiers,” Journal of Symbolic Logic, 65: 1241–1263.
 Montague, R., 1973, “The Proper Treatment of Quantification in Ordinary English,” in K.J.J. Hintikka, J. M. E. Moravcsik, and P. Suppes (eds.), Approaches to Natural Language, Dordrecht: D. Reidel, 221–242; reprinted in Formal Philosophy (edited and with an introduction by R. Thomason), New Haven: Yale University Press, 1974, pp. 247–270.
 Mostowski, A., 1957, “On a generalization of quantifiers,” Fund. Math., 44: 12–36.
 Mostowski, M., 1998, “Computational semantics for monadic quantifiers,” Journal of Applied Non-Classical Logics, 8: 107–121.
 Mostowski, M. and Wojtyniak, D., 2004, “Computational complexity of the semantics of some natural language constructions,” Annals of Pure and Applied Logic, 127: 219–227.
 Parsons, T., 2004, “The traditional square of opposition,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2004 Edition), URL = <http://plato.stanford.edu/archives/sum2004/entries/square/>
 Peters, S. and Westerståhl, D., 2002, “Does English really have resumptive quantification?” in D. Beaver et al. (eds.), The Construction of Meaning, Stanford: CSLI Publications, 181–195.
 –––, 2006, Quantifiers in Language and Logic, Oxford: Oxford University Press.
 Pietroski, P., Lidz, J., Hunter, T., and Halberda, J., 2009, “The meaning of ‘most’: Semantics, numerosity and psychology,” Mind and Language, 24(5): 554–585.
 Sevenster, M., 2006, Branches of Imperfect Information: Logic, Games, and Computation, Ph.D. thesis, University of Amsterdam. [Available online.]
 Sher, G., 1997, “Partially-ordered (branching) generalized quantifiers: a general definition,” Journal of Philosophical Logic, 26: 1–43.
 Szymanik, J., 2009, Quantifiers in TIME and SPACE: Computational Complexity of Generalized Quantifiers in Natural Language, Ph.D. thesis, University of Amsterdam. [Available online.]
 van Benthem, J., 1986, Essays in Logical Semantics, Dordrecht: D. Reidel.
 –––, 1987, “Towards a computational semantics,” in P. Gärdenfors (ed.), Generalized Quantifiers, Dordrecht: D. Reidel, 31–71.
 –––, 2002, “Invariance and Definability: two faces of logical constants,” in W. Sieg, R. Sommer, and C. Talcott (eds.), Reflections on the Foundations of Mathematics. Essays in Honor of Sol Feferman (ASL Lecture Notes in Logic: 15), 426–446.
 Westerståhl, D., 1987, “Branching generalized quantifiers and natural language,” in P. Gärdenfors (ed.), Generalized Quantifiers, Dordrecht: D. Reidel, 269–298.
 –––, 1989, “Quantifiers in formal and natural languages,” in D. Gabbay and F. Guenthner (eds), Handbook of Philosophical Logic, Vol. IV, Dordrecht: D. Reidel, 1–131. 2nd edition, 2007, Berlin: Springer, 223–338.
 –––, 2011, “Classical and modern squares of opposition, and beyond,” to appear in J.Y. Beziau and G. Payette (eds), New Perspectives on the Square of Opposition, Bern: Peter Lang.
 Williamson, T., 2003, “Everything,” Philosophical Perspectives, 17: 415–465.