Logical Form

First published Tue Oct 19, 1999; substantive revision Wed Sep 1, 2021

Some inferences are impeccable. Examples like (1–3) illustrate reasoning that cannot lead from true premises to false conclusions.

(1)
John danced if Mary sang, and Mary sang; so John danced.
(2)
Every senator is a politician, and every politician is deceitful; so every senator is deceitful.
(3)
The detective is in the garden; so someone is in the garden.

In such cases, a thinker takes no epistemic risk by endorsing the conditional claim that the conclusion is true if the premises are true. The conclusion follows from the premises, without any further assumptions that might turn out to be false. Any risk of error lies with the premises, as opposed to the reasoning. By contrast, examples like (4–6) illustrate reasoning that involves at least some risk of going wrong—from correct premises to a mistaken conclusion.

(4)
John danced if Mary sang, and John danced; so Mary sang.
(5)
Every feathered biped is a bird, and Tweety is a feathered biped; so Tweety can fly.
(6)
Every human born before 1879 died; so every human will die.

Inference (4) is not secure. John might dance whenever Mary sings, but also sometimes when Mary doesn’t sing. Similarly, with regard to (5), Tweety might be a bird that cannot fly. Even (6) falls short of the demonstrative character exhibited by (1–3). While laws of nature may preclude immortality, the conclusion of (6) goes beyond its premise, even if it is foolish to resist the inference.

Appeals to logical form arose in the context of attempts to say more about this intuitive distinction between impeccable inferences, which invite metaphors of security, and inferences that involve some risk of slipping from truth to falsity. The idea is that some inferences, like (1–3), are structured in a way that confines any risk of error to the premises. The motivations for developing this idea were both practical and theoretical. Experience teaches us that an inference can initially seem more secure than it is; and if we knew which forms of inference are risk-free, that might help us avoid errors. As we’ll see, claims about inference are also intimately connected with claims about the nature of thought and its relation to language.

Many philosophers have been especially interested in the possibility that grammar masks the underlying structure of thought, perhaps in ways that invite mistaken views about how ordinary language is related to cognition and the world we talk about. For example, similarities across sentences like ‘Homer talked’, ‘Nobody talked’, and ‘The nymph talked’ initially suggest that the corresponding thoughts exhibit a common subject-predicate form. But even if ‘Homer’ indicates an entity that can be the subject of a thought that is true if and only if the entity in question talked, ‘Nobody’ does not; and as we’ll see, ‘The’ is complicated. Philosophers and linguists have also asked general questions about how logic is related to grammar. Do thoughts and sentences exhibit different kinds of structure? Do sentences exhibit grammatical structures that are not obvious? And if the logical structure of a thought can diverge from the grammatical structure of a sentence that is used to express the thought, how should we construe proposals about the logical forms of inferences like (1)–(6)? Are such proposals normative claims about how we ought to think/talk, or empirical hypotheses about aspects of psychological/linguistic reality?

Proposed answers to these questions are usually interwoven with claims about why various inferences seem compelling. So it would be nice to know which inferences really are secure, and in virtue of what these inferences are special. The most common suggestion has been that certain inferences are secure by virtue of their logical form. Though unsurprisingly, conceptions of form have evolved along with conceptions of logic and language.

1. Patterns of Reason

One ancient idea is that impeccable inferences exhibit patterns that can be characterized schematically by abstracting away from the specific contents of particular premises and conclusions, thereby revealing a general form common to many other impeccable inferences. Such forms, along with the inferences that exemplify them, are said to be valid.

Given a valid inference, there is a sense in which the premises contain the conclusion, which is correspondingly extractable from the premises. With regard to (1) and (7), it seems especially clear that the conclusion is part of the first premise, and that the second premise is another part of the first.

(1)
John danced if Mary sang, and Mary sang; so John danced.
(7)
Chris swam if Pat was asleep, and Pat was asleep; so Chris swam.

We can express this point by saying that these inferences are instances of the following form: B if A, and A; so B. The Stoics discussed several patterns of this kind, using ordinal numbers (instead of letters) to capture abstract forms like the ones shown below.

If the first then the second, and the first; so the second.

If the first then the second, but not the second; so not the first.

Either the first or the second, but not the second; so the first.

Not both the first and the second, but the first; so not the second.

These schematic formulations employ variables—the letters ‘A’ and ‘B’ above, and the Stoics’ ordinal phrases ‘the first’ and ‘the second’. Following a long tradition, let’s use the word ‘proposition’ as a term of art for whatever these variables range over. Propositions are potential premises/conclusions, which can be endorsed or rejected. So they are, presumably, things that can be evaluated for truth or falsity. This leaves room for various proposals about what propositions are: sentences, statements, states of affairs, etc. But let’s assume that declarative sentences can be used to express propositions; see, e.g., Cartwright (1962) and the essay on structured propositions.
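A modern gloss can make the notion of schematic validity concrete. The sketch below (our illustration, not part of the Stoic account) treats the variables as ranging over truth values and checks a two-variable schema by enumerating every assignment; the helper names ‘valid’ and ‘implies’ are invented, and ‘if’ is given its truth-functional reading.

```python
from itertools import product

def valid(premises, conclusion):
    """A schema is valid iff no assignment of truth values to its
    variables makes every premise true and the conclusion false."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # found a counterexample
    return True

def implies(a, b):
    return (not a) or b  # the truth-functional reading of 'if'

# 'If the first then the second, and the first; so the second.'
print(valid([implies, lambda a, b: a], lambda a, b: b))   # True

# The risky pattern of (4): 'B if A, and B; so A.'
print(valid([implies, lambda a, b: b], lambda a, b: a))   # False
```

Under analogous readings of ‘or’ and ‘not both’, the other Stoic forms above pass the same test.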

A significant complication is that in ordinary conversation, the context matters with regard to which proposition is expressed with a given sentence. For example, ‘Pat is asleep’ can be used at one time to express a true proposition, and at another time to express a false one. A certain speaker might use ‘I am tired’ to express a false proposition, while another speaker uses the same sentence at the same time to express a true proposition. What counts as being tired can also vary across conversations. Context sensitivity, of various kinds, is ubiquitous in typical discourse. Moreover, even given a context, a sentence like ‘He is bald’ may not express a unique proposition. (There may be no referent for the pronoun; and even if there is, the vagueness of ‘bald’ may yield a range of candidate propositions, with no fact of the matter as to which one is the proposition expressed.) Still, we can and often do use sentences like ‘Every circle is an ellipse’ and ‘Thirteen is a prime number’ to express premises of valid arguments. To be sure, mathematical examples are special cases. But the distinction between impeccable and risky inferences is not limited to atypical contexts in which we try to think especially clearly about especially abstract matters. So when focusing on the phenomenon of valid inference, we can try to simplify the initial discussion by abstracting away from the context sensitivity of language use.

Another complication is that in speaking of an inference, one might be talking about (i) a process in which a thinker draws a conclusion from some premises, or (ii) some propositions, one of which is designated as an alleged consequence of the others; see, e.g., Harman (1973). But we can describe a risky thought process as one in which a thinker who accepts certain propositions—perhaps tentatively or hypothetically—comes to accept, on that basis, a proposition that does not follow from the initial premises. And it will be simpler to focus on premises/conclusions, as opposed to episodes of reasoning.

With regard to (1), the inference seems secure in part because its first premise has the form ‘B if A’.

(1)
John danced if Mary sang, and Mary sang; so John danced.

If the first premise didn’t have this form, the inference wouldn’t be an instance of ‘B if A, and A; so B’. It isn’t obvious that all impeccable inferences are instances of a more general valid form, much less inferences whose impeccability is due to the forms of the relevant propositions. But this thought has served as an ideal for the study of valid inference, at least since Aristotle’s treatment of examples like (2).

(2)
Every senator is a politician, and every politician is deceitful; so every senator is deceitful.

Again, the first premise seems to have several parts, each of which is a part of the second premise or the conclusion. (In English, the indefinite article in ‘Every senator is a politician’ cannot be omitted; likewise for ‘Every politician is a liar’. But at least for now, let’s assume that in examples like these, ‘a’ does not itself indicate a propositional constituent.) Aristotle, predating the Stoics, noted that conditional claims like the following are sure to be true: if (the property of) being a politician belongs to every senator, and being deceitful belongs to every politician, then being deceitful belongs to every senator. Correspondingly, the inference pattern below is valid.

Every S is P, and every P is D; so every S is D.

And inference (2) seems to be valid because its parts exhibit this pattern. Aristotle discussed many such forms of inference, called syllogisms, involving propositions that can be expressed with quantificational words like ‘every’ and ‘some’. For example, the syllogistic patterns below are also valid.

Every S is P, and some S is D; so some P is D.

Some S is P, and every P is D; so some S is D.

Some S is not P, and every D is P; so some S is not D.

We can rewrite the last two, so that each of the valid syllogisms above is represented as having a first premise of the form ‘Every S is P’.

Every S is P, and some D is S; so some D is P.

Every S is P, and some D is not P; so some D is not S.

But however the inferences are represented, the important point is that the variables—here, the letters ‘S’, ‘P’, and ‘D’—range over certain parts of propositions. Intuitively, common nouns like ‘politician’ and adjectives like ‘deceitful’ are general terms, since they can apply to more than one individual. And many propositions apparently contain correspondingly general elements. For example, the proposition that every senator is wealthy contains two such elements, both relevant to the validity of inferences involving this proposition.

Propositions thus seem to have structure that bears on the validity of inferences, even ignoring premises/conclusions with propositional parts. In this sense, even atomic propositions have logical form. And as Aristotle noted, pairs of such propositions can be related in interesting ways. If every S is P, then some S is P. (For these purposes, assume there is at least one S.) If no S is P, then some S is not P. It is certain that either every S is P or some S is not P; and whichever of these propositions is true, the other is false. Similarly, the following propositions cannot both be true: every S is P; and no S is P. But it isn’t certain that either every S is P, or no S is P. Perhaps some S is P, and some S is not P. This network of logical relations strongly suggests that the propositions in question contain a quantificational element and two general elements—and in some cases, an element of negation; see logic: classical. This raises the question of whether other propositions have a similar structure.
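These quantificational patterns can also be checked mechanically, though the method is anachronistic. In the sketch below (ours, for illustration), general terms are assigned subsets of a small domain; exhausting such models refutes an invalid pattern outright, while for validity a small search is merely suggestive. The helpers ‘subsets’, ‘every’, and ‘some’ are invented names.

```python
from itertools import combinations, product

DOMAIN = [0, 1, 2]

def subsets(xs):
    """Every candidate extension for a general term over DOMAIN."""
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def every(s, p): return s <= p          # 'Every S is P': S is a subset of P
def some(s, p):  return bool(s & p)     # 'Some S is P': S and P overlap

models = list(product(subsets(DOMAIN), repeat=3))

# 'Every S is P, and every P is D; so every S is D' has no countermodel:
print(all(not (every(S, P) and every(P, D)) or every(S, D)
          for S, P, D in models))                                  # True

# A look-alike, 'Every P is S, and every P is D; so every S is D', does:
print(all(not (every(P, S) and every(P, D)) or every(S, D)
          for S, P, D in models))                                  # False

# Subalternation: 'Every S is P' implies 'Some S is P', given at least one S:
print(all(not (S and every(S, P)) or some(S, P)
          for S, P, D in models))                                  # True
```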

2. Propositions and Traditional Grammar

Consider the proposition that Vega is a star, which can figure in inferences like (8).

(8)
Every star is purple, and Vega is a star; so Vega is purple.

Aristotle’s logic focused on quantificational propositions; and as we shall see, this was prescient. But on his view, propositions like the conclusion of (8) still exemplify a subject-predicate structure that is shared by at least many of the sentences we used to express propositions. And one can easily formulate the schema ‘every S is P, and n is S; so n is P’, where the new lower-case variable is intended to range over proposition-parts of the sort indicated by names. (On some views, discussed below, a name like ‘Vega’ is a complex quantificational expression; though unsurprisingly, such views are tendentious.)

Typically, a declarative sentence can be divided into a subject and a predicate: ‘Every star / is purple’, ‘Vega / is a star’, ‘Some politician / lied’, ‘The brightest planet / is visible tonight’, etc. Until quite recently, it was widely held that this grammatical division reflects a corresponding kind of logical structure: the subject of a proposition (i.e., what the proposition is about) is a target for predication. On this view, both ‘Every star’ and ‘Vega’ indicate subjects of propositions in (8), while ‘is’ introduces predicates. Aristotle would have said that in the premises of (8), being purple is predicated of every star, and being a star is predicated of Vega. Later theorists emphasized the contrast between general terms like ‘star’ and singular terms like ‘Vega’, while also distinguishing terms from syncategorematic expressions (e.g., ‘every’ and ‘is’) that can combine with terms to form complex subjects and predicates, including ‘will lie’, ‘can lie’, and ‘may have lied’. But despite the complications, it seemed clear that many propositions have the following canonical form: Subject-copula-Predicate; where a copula links a subject, which may consist of a quantifier and a general term, to a general term. Sentences like ‘Every star twinkles’ can be paraphrased with sentences like ‘Every star is a twinkling thing’. This invites the suggestion that ‘twinkles’ is somehow an abbreviation for ‘is a twinkling thing’.

The proposition that not only Vega twinkles, which seems to contain the proposition that Vega twinkles, presumably includes elements that are indicated with ‘only’ and ‘not’. Such examples invite the hypothesis that all propositions are composed of terms along with a relatively small number of syncategorematic elements, and that complex propositions can be reduced to canonical propositions that are governed by Aristotelian logic. This is not to say that all propositions were, or could be, successfully analyzed in this manner. But via this strategy, medieval logicians were able to describe many impeccable inferences as instances of valid forms. And this informed their discussions of how logic is related to grammar.

Many viewed their project as an attempt to uncover principles of a mental language common to all thinkers. Aristotle had said, similarly, that spoken sounds symbolize “affections of the soul.” From this perspective, one expects to find some differences between propositions and overt sentences. If ‘Every star twinkles’ expresses a proposition that contains a copula, then spoken languages mask certain aspects of logical structure. William of Ockham held that a mental language would have no need for Latin’s declensions, and that logicians could ignore such aspects of spoken language. The ancient Greeks were aware of sophisms like the following: that dog is a father, and that dog is yours; so that dog is your father. This bad inference cannot share its form with the superficially parallel but impeccable variant: that dog is a mutt, and that mutt is yours; so that dog is your mutt. (See Plato, Euthydemus 298 d-e.) So the superficial features of sentences are not infallible guides to the logical forms of propositions. Still, the divergence was held to be relatively minor. Spoken sentences have structure; they are composed, in systematic ways, of words. And the assumption was that spoken sentences reflect the major aspects of propositional form, including a subject-predicate division. So while there is a distinction between the study of valid inference and the study of sentences used in spoken language, the connection between logic and grammar was thought to run deep. This suggested that the logical form of a proposition just is the grammatical form of some (perhaps mental) sentence.

3. Motivations for Revision

Towards the end of the eighteenth century, Kant could say (without much exaggeration) that logic had followed a single path since its inception, and that “since Aristotle it has not had to retrace a single step.” He also said that syllogistic logic was “to all appearance complete and perfect.” But this was exuberance. The successes of syllogistic logic also highlighted problems that had long been recognized.

Some valid schemata are reducible to others, in that any inference of the reducible form can be revealed as valid (with a little work) given other schemata. Consider (9).

(9)
If Al ran then either Al did not run or Bob did not swim, and Al ran; so Bob did not swim.

Assume that ‘Al did not run’ negates ‘Al ran’, while ‘Bob did not swim’ negates ‘Bob swam’. Then (9) is an instance of the following valid form: if A then either not-A or not-B, and A; so not-B. But we can treat this as a derived form, by showing that any instance of this form is valid given two (intuitively more basic) Stoic inference forms: ‘if the first then the second, and the first; so the second’ and ‘either not the first or not the second, and the first; so not the second’. For suppose we are given the following premises: A; and if A, then either not-A or not-B. We can safely infer that either not-A or not-B; and since we were given A, we can safely infer not-B. Similarly, the syllogistic schema (10) can be treated as a derived form.

(10)
Some S is not P, and every D is P; so not every S is D.

If some S is not P, and every D is P, then it isn’t true that every S is D. For if every S is D, and every D is P, then every S is P. But if some S is not P, then as we saw above, not every S is P. So given the premises of (10), adding ‘every S is D’ would lead to contradiction: every S is P, and not every S is P. So the premises imply the negation of ‘every S is D’. This reasoning shows how (10) can be reduced to inferential patterns that seem more basic—raising the question of how much reduction is possible. Euclid’s geometry had provided a model for how to present a body of knowledge as a network of propositions that follow from a few basic axioms. Aristotle himself indicated how to reduce all the valid syllogistic schemata to four basic patterns, given a few principles that govern how the basic patterns can be used to derive others; see Parsons (2014) for discussion. And further reduction is possible given insights from the medieval period.

Consider the following pair of valid inferences: Fido is a brown dog, so Fido is a dog; Fido is not a dog, so Fido is not a brown dog. As illustrated with the first example, replacing a predicate (or general term) like ‘brown dog’ with a less restrictive predicate like ‘dog’ is often valid. But sometimes—paradigmatically, in cases involving negation—replacing a predicate like ‘dog’ with a more restrictive predicate like ‘brown dog’ is valid. Plausibly, the first pattern reflects the default direction of valid replacement: removing a restriction preserves truth, except in special cases like those involving negation. Suppose we take it as given that poodles are dogs of a particular sort, and hence that every poodle is a dog. Then replacing ‘poodle’ with ‘dog’ in ‘Fido is P’ is valid, regardless of what ‘Fido’ names. This can be viewed as a special case of ‘n is P, and every P is D; so n is D’. But the validity of this inference form can also be viewed as a symptom of a basic principle that came to be called dictum de omni: whatever is true of every P is true of any P. Or as Aristotle might have put it, if the property of being a dog belongs to every poodle, then it belongs to any poodle. In which case, Fido is a dog if Fido is a poodle. And since the property of being a dog surely belongs to every brown dog, any brown dog is a dog. The flip side of this point is that negation inverts the default direction of inference. Anything that isn’t a dog isn’t a brown dog; and similarly, if Fido isn’t a dog, then Fido isn’t a poodle. So in special cases, adding a restriction to a general term like ‘dog’ can preserve truth.

From this perspective, the Aristotelian quantifier ‘Some’ is a default-style quantifier that validates removing restrictions. If some brown dog is a clever mutt, it follows that some dog is a clever mutt, and hence that some dog is a mutt. By contrast, ‘No’ is an inverted-style quantifier that validates adding restrictions. If no dog is a mutt, it follows that no dog is a clever mutt, and hence that no brown dog is a clever mutt. The corresponding principle, dictum de nullo, encodes this pattern: whatever is true of no P is not true of any P; so if the property of being a mutt belongs to no dog, it belongs to no poodle. (And as Aristotle noted, instances of ‘No S is P’ can be analyzed as the propositional negations of corresponding instances of ‘Some S is P’.)

Interestingly, ‘Every’ is like ‘No’ in one respect, and like ‘Some’ in another respect. If every dog is clever, it follows that every brown dog is clever; but if every dog is a clever mutt, it follows that every dog is a mutt. So when the universal quantifier combines with a general term S to form a subject, S is governed by the inverted rule of replacement. But when a universally quantified subject combines with a second general term to form a proposition, this second term is governed by the default rule of replacement. Given that ‘Every’ has this mixed logical character, the valid syllogisms can be derived from two basic patterns (noted above), both of which reflect dictum de omni: whatever is true of every P is true of any P.

Every S is P, and every P is D; so every S is D.

Every S is P, and some D is S; so some D is P.

The first principle reflects the sense in which universal quantification is transitive. The second principle captures the idea that the universal premise can license replacement of ‘S’ with ‘P’ in a proposition about some individual. In this sense, classical logic exhibits a striking unity and simplicity, at least with regard to inferences involving the Aristotelian quantifiers and predication. For further discussion, see Sommers (1982), van Benthem (1986), Sanchez (1991, 1994), and Ludlow (2005).
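In contemporary terms, dictum de omni and dictum de nullo describe the monotonicity of the Aristotelian quantifiers. The sketch below (again ours; ‘licenses_restriction’ is an invented helper) checks by brute force, over a small domain, whether a quantifier validates replacing a term with a more restrictive one in a given position.

```python
from itertools import combinations

DOMAIN = [0, 1, 2]

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def every(s, p): return s <= p
def some(s, p):  return bool(s & p)
def no(s, p):    return not (s & p)

def licenses_restriction(q, first_position):
    """Does Q validate replacing a term with a subset (a more
    restrictive term) in the given position, in every model?"""
    for a in subsets(DOMAIN):
        for b in subsets(DOMAIN):
            for sub in subsets(DOMAIN):
                if first_position and sub <= a and q(a, b) and not q(sub, b):
                    return False
                if not first_position and sub <= b and q(a, b) and not q(a, sub):
                    return False
    return True

# 'Every' is mixed: inverted in its first term, default in its second.
print(licenses_restriction(every, True), licenses_restriction(every, False))  # True False
# 'Some' is default in both positions; 'No' is inverted in both.
print(licenses_restriction(some, True), licenses_restriction(some, False))    # False False
print(licenses_restriction(no, True), licenses_restriction(no, False))        # True True
```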

Alas, matters become more complicated once we consider relations.

Sentences like ‘Juliet kissed Romeo’ do not seem to have Subject-copula-Predicate form. One might suggest ‘Juliet was a kisser of Romeo’ as a paraphrase. But ‘kisser of Romeo’ differs, in ways that matter to inference, from general terms like ‘politician’. If Juliet (or anyone) was a kisser of Romeo, it follows that someone was kissed; whereas if Juliet was a politician, there is no corresponding logical consequence to the effect that someone was __-ed. Put another way, the proposition that Juliet kissed someone exhibits interesting logical structure, even if we can express this proposition via the sentence ‘Juliet was a kisser of someone’. A quantifier can be part of a complex predicate. But classical logic did not capture the validity of inferences involving predicates that have quantificational constituents. Consider (11).

(11)
Some patient respects every doctor, and some doctor is a liar; so some patient respects some liar.

If ‘respects every doctor’ and ‘respects some liar’ indicate nonrelational proposition-parts, much like ‘is sick’ or ‘is happy’, then inference (11) has the following form: ‘Some P is S, and some D is L; so some P is H’. But this schema, which fails to reflect the quantificational structure within the predicates, is not valid. Its instances include bad inferences like the following: some patient is sick, and some doctor is a liar; so some patient is happy. This dramatizes the point that ‘respects every doctor’ and ‘respects some liar’ are—unlike ‘is sick’ and ‘is tall’—logically related in a way that matters given the second premise of (11).

One can adopt the view that many propositions have relational parts, introducing a variable ‘R’ intended to range over relations; see the entries on medieval relations and medieval terms. One can also formulate the following schema: some P R every D, and some D is L; so some P R some L. But the problem remains. Quantifiers can appear in complex predicates that figure in valid inferences like (12).

(12)
Every patient who respects every doctor is sick, and
some patient who saw every lawyer respects every doctor; so
some patient who saw every lawyer is sick.

But if ‘patient who respects every doctor’ and ‘patient who saw every lawyer’ are nonrelational, much like ‘old patient’ or ‘young patient’, then (12) has the following form: every O is S, and some Y R every D; so some Y is S. And many inferences of this form are invalid. For example: every otter is sick, and some yak respects every doctor; so some yak is sick. Again, one can abstract a valid schema that covers (12), letting parentheses indicate a relative clause that restricts the adjacent predicate.

Every P(R1 every D) is S, and some P(R2 every L) R1 every D; so some P(R2 every L) is S.

But no matter how complex the schema, the relevant predicates can exhibit further quantificational structure. (Consider the proposition that every patient who met some doctor who saw no lawyer respects some lawyer who saw no patient who met every doctor.) Moreover, schemata like the one above are poor candidates for basic inference patterns.

As medieval logicians knew, propositions expressed with relative clauses also pose other difficulties; see the entry on medieval syllogism. If every doctor is healthy, it follows that every young doctor is healthy. By itself, this is expected, since a universally quantified subject licenses replacement of ‘doctor’ with the more restrictive predicate ‘young doctor’. But consider (13) and (14).

(13)
No patient who saw every young doctor is healthy.
(14)
No patient who saw every doctor is healthy.

Here, the direction of valid inference is from ‘young doctor’ to ‘doctor’, as if the inference is governed by the (default) inference rule that licenses replacement of ‘young doctor’ with the less restrictive predicate ‘doctor’. One can say that the default direction of implication, from more restrictive to less restrictive predicates, has been inverted twice—once by ‘No’, and once by ‘every’. But one wants a systematic account of propositional structure that explains the net effect; see Ludlow (2002) for further discussion. Sommers (1982) offers a strategy for recoding and extending classical logic, in part by exploiting an idea suggested by Leibniz (and arguably Pāṇini): a relational sentence like ‘Juliet loved Romeo’ somehow combines an active-voice sentence with a passive-voice sentence, perhaps along the lines of ‘Juliet loved, and thereby Romeo was loved’; cp. section nine. But one way or another, quantifiers need to be characterized in a way that captures their general logical role—and not just their role as potential subjects of Aristotelian propositions—if impeccability is to be revealed as a matter of form. Quantifiers are not simply devices for creating schemata like ‘Every S is P’, into which general terms like ‘politician’ and ‘deceitful’ can be inserted. Instances of ‘S’ and ‘P’ can themselves have quantificational structure and relational constituents.

4. Frege and Formal Language

Gottlob Frege showed how to resolve these difficulties for classical logic in one fell swoop. His system of logic, published in 1879 and still in use (with notational modifications), was arguably the single greatest contribution to the subject. So it is significant that on Frege’s view, propositions do not have subject-predicate form. Indeed, his leading idea was that propositions have “function-argument” structure. Frege thereby drew a substantial distinction between logical form and grammatical form as traditionally conceived. This had a major impact on subsequent discussions of thought and its relation to language. Though before turning to details, it is worth taking a slight detour to note that Frege did not think of functions as abstract objects like numbers.

Every function maps each entity in some domain onto exactly one entity in some range. But while every function thus determines a set of ordered pairs, Frege (1891) did not identify functions with such sets. He said that a function “by itself must be called incomplete, in need of supplementation, or unsaturated. And in this respect functions differ fundamentally from numbers” (p. 133). For example, we can represent the successor function as follows, with the natural numbers as the relevant domain for the variable ‘\(x\)’: \(S(x) = x + 1\). This function maps zero onto one, one onto two, and so on. So we can specify the set \(\{\langle x, y \rangle : y = x + 1\}\) as the “value-range” of the successor function. But according to Frege, any particular argument (e.g., the number one) “goes together with the function to make up a complete whole” (e.g., the number two); and a number does not go together with a set to form a unit in this fashion. Frege granted that the word ‘function’ is often used to talk about the sets he would call value-ranges. But he maintained that the notion of an “unsaturated” function, which may be applied to endlessly many arguments, is logically prior to any notion of a set with endlessly many elements that are specified functionally; see p. 135, note E. While the second positive integer is the successor of the first, the number two is still a single thing, distinct from any combination of a number with a set. Frege was influenced by Kant’s discussion of judgment, the kind of unity that a (structured) proposition exhibits, and the ancient observation that merely combining two things (e.g., Socrates and the property of being mortal) does not make the combination true or false. If it helps, think about ‘\(S(x)\)’—or better, ‘\(S(\ )\)’—as the unsaturated result of abstracting away from the numerical argument in a complex denoting expression like ‘\(S(1)\)’ or ‘\(S(62)\)’; and think about saturating ‘\(S(\ )\)’ with a numerical argument, like ‘1’ or ‘62’, as a process of “de-abstraction.” So in saying that propositions have “function-argument” structure, Frege was not only rejecting the traditional idea that logical form reflects the subject-predicate structure of ordinary sentences, he was suggesting that propositions (and any of their complex constituents) exhibit a kind of unity that is like the unity of ‘\(S(1)\)’, which can appear in invented arithmetic sentences like ‘\(S(1) = 2\)’. Church (1941) echoed Frege by distinguishing functions-in-intension, which Church identified with computational procedures, from their extensions. Perhaps Frege would have said that even procedures, as abstractions of a special kind, are too object-like to be functions in his special sense. But distinct procedures can determine the same extension. (Compare adding one to a natural number n with the following procedure: take the positive square root of the result of adding one to n squared plus n doubled.) So at least in this sense, functions-in-intension can be distinguished from the extensions they determine; cp. Chomsky’s (1986) contrast between I-languages and E-languages.
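The contrast between a procedure and its extension can be put in programming terms, with the caveat that the analogy is loose: Frege’s unsaturated functions are not program texts, and the names below are invented. Two distinct procedures can determine one value-range, as with the example from the text.

```python
import math

def successor(x):
    return x + 1

# One can tabulate the "value-range" (a set of argument-value pairs):
value_range = {(x, successor(x)) for x in range(5)}   # {(0, 1), (1, 2), ...}

# The distinct procedure from the text: the positive square root of
# n squared plus n doubled plus one, i.e. sqrt((n + 1) ** 2).
def roundabout(n):
    return math.isqrt(n * n + 2 * n + 1)

# Different procedures, same extension (checked on an initial segment):
print(all(successor(n) == roundabout(n) for n in range(10000)))   # True

# "Saturating" the function with an argument yields a complete whole:
print(successor(1))                                               # 2
```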

For purposes of capturing valid arguments concerning relations, the more important point is that functions need not be unary. For example, arithmetic division can be represented as a function from ordered pairs of numbers onto quotients: \(Q(x, y) = \frac{x}{y}\). Mappings can also be conditional. Consider the function that maps every even integer onto itself, and every odd integer onto its successor: \(C(x) = x\) if \(x\) is even, and \(x + 1\) otherwise; \(C(1) = 2\), \(C(2) = 2\), \(C(3) = 4\), etc. Frege held that propositions have parts that correspond to functions, and in particular, conditional functions that map arguments onto special values that reflect the truth or falsity of propositions/sentences. (As discussed below, Frege [1892] also distinguished these “truth values” from what he called Thoughts [Gedanken] or the “senses” [Sinne] of propositions; where each of these sentential senses “presents” a truth value in a certain way—i.e., as the value of a certain indicated function given a certain indicated argument.)

Variable letters, such as ‘\(x\)’ and ‘\(y\)’ in ‘\(Q(x, y) = \frac{x}{y}\)’, are typographically convenient for representing functions that take more than one argument. But we could also index argument places, as shown below.

\[ Q[(\ )_i, (\ )_j] = \frac{(\ )_i}{(\ )_j} \]

Or we could replace the subscripts above with lines that connect each pair of round brackets on the left of ‘\(=\)’ to a corresponding pair of brackets on the right. But the idea, however we encode it, is that a proposition has at least one constituent that is saturated by the requisite number of arguments.

On Frege’s view, the proposition that Mary sang has a functional component corresponding to ‘sang’ and an argument corresponding to ‘Mary’, even if the English sentence ‘Mary sang’ has ‘Mary’ as its subject and ‘sang’ as its predicate. The proposition can be represented as follows: \(\textrm{Sang}(\textrm{Mary})\). Frege thought of the relevant function as a conditional mapping from individuals to truth values: \(\textrm{Sang}(x) = \textbf{T}\) if \(x\) sang, and \(\textbf{F}\) otherwise; where ‘\(\textbf{T}\)’ and ‘\(\textbf{F}\)’ stand for special entities such that for each individual \(x\), \(\textrm{Sang}(x) = \textbf{T}\) if and only if \(x\) sang, and \(\textrm{Sang}(x) = \textbf{F}\) if and only if \(x\) did not sing. According to Frege, the proposition that John admires Mary combines an ordered pair of arguments with a functional component that corresponds to the transitive verb: \(\textrm{Admires}(\textrm{John}, \textrm{Mary})\); where for any individual \(x\), and any individual \(y\), \(\textrm{Admires}(x, y) = \textbf{T}\) if \(x\) admires \(y\), and \(\textbf{F}\) otherwise. From this perspective, the structure and constituents are the same in the proposition that Mary is admired by John, even though ‘Mary’ is the grammatical subject of the passive sentence. Likewise, Frege did not distinguish the proposition that three precedes four from the proposition that four is preceded by three. More importantly, Frege’s treatment of quantified propositions departs radically from the traditional idea that the grammatical structure of a sentence reflects the logical structure of the indicated proposition.

If \(S\) is the function corresponding to ‘sang’, then Mary sang iff—i.e., if and only if—\(S(\textrm{Mary}) = \textbf{T}\). Likewise, someone sang iff: \(S\) maps some individual onto \(\textbf{T}\); that is, for some individual \(x\), \(S(x) = \textbf{T}\). Or using a modern variant of Frege’s original notation, someone sang iff \(\exists x [S(x)]\). The quantifier ‘\(\exists x\)’ is said to bind the variable ‘\(x\)’, which ranges over individual things in a domain of discourse. (For now, assume that the domain contains only people.) If every individual in the domain sang, then \(S\) maps every individual onto the truth value \(\textbf{T}\); or using formal notation, \(\forall x [S(x)]\). A quantifier binds each occurrence of its variable, as in ‘\(\exists x [P(x) \land D(x)]\)’, which reflects the logical form of ‘Someone is both a politician and deceitful’. In this last example, the quantifier combines with a complex functional component that is formed by conjoining two simpler ones.

With regard to the proposition that some politician is deceitful, traditional grammar suggests the division ‘Some politician / is deceitful’, with the noun ‘politician’ combining with the quantificational word to form a complex subject. But on a Fregean view, grammar masks the logical division between the existential quantifier and the rest: \(\exists x [P(x) \land D(x)]\). With regard to the proposition that every politician is deceitful, Frege also stresses the logical division between the quantifier and its scope: \(\forall x [P(x) \rightarrow D(x)]\); every individual is deceitful if a politician. Here too, the quantifier combines with a complex functional component, albeit one that is conditional rather than conjunctive. (The formal sentence ‘\(\forall x [P(x) \land D(x)]\)’ implies, unconditionally, that every individual is a politician.) As Frege (1879) defined his analogs of the modern symbols used here, ‘\(P(x) \rightarrow D(x)\)’ is equivalent to ‘\(\lnot P(x) \lor D(x)\)’, and ‘\(\forall x\)’ is equivalent to ‘\(\lnot \exists x \lnot\)’. So ‘\(\forall x [P(x) \rightarrow D(x)]\)’ is equivalent to ‘\(\lnot \exists x \lnot [\lnot P(x) \lor D(x)]\)’; and given de Morgan’s Laws (concerning the relations between negation, disjunction, and conjunction), \(\lnot \exists x \lnot [\lnot P(x) \lor D(x)]\) iff \(\lnot \exists x [P(x) \land \lnot D(x)]\). Hence, \(\forall x [P(x) \rightarrow D(x)]\) iff \(\lnot \exists x [P(x) \land \lnot D(x)]\). This captures the idea that every politician is deceitful iff no individual is both a politician and not deceitful.
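Over a finite domain, the Fregean treatment can be imitated directly: predicates become characteristic functions, the existential quantifier becomes ‘any’, and the universal quantifier becomes ‘all’. A minimal sketch, with an invented domain and invented extensions:

```python
DOMAIN = ['alice', 'bob', 'carol']

def P(x): return x in {'alice', 'bob'}   # x is a politician
def D(x): return x in {'alice'}          # x is deceitful

# 'Some politician is deceitful': conjunction inside the quantifier.
some_P_is_D = any(P(x) and D(x) for x in DOMAIN)                   # True

# 'Every politician is deceitful': the conditional P(x) -> D(x).
every_P_is_D = all((not P(x)) or D(x) for x in DOMAIN)             # False: bob

# Conjunction under 'every' would instead say that everyone is a
# deceitful politician -- false here, since carol is not a politician.
too_strong = all(P(x) and D(x) for x in DOMAIN)                    # False

# The de Morgan equivalence from the text:
# forall x [P(x) -> D(x)] iff not exists x [P(x) & not D(x)].
assert every_P_is_D == (not any(P(x) and not D(x) for x in DOMAIN))
print(some_P_is_D, every_P_is_D, too_strong)
```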

If this conception of logical form is correct, then grammar is misleading in several respects. First, grammar leads us to think that ‘some politician’ indicates a constituent of the proposition that some politician is deceitful. Second, grammar masks a difference between existential and universally quantified propositions; predicates are related conjunctively in the former, and conditionally in the latter. (Though as discussed in section seven, one can—and Frege [1884] did—adopt a different view that allows for relational/restricted quantifiers as in ‘\(\forall x{:}P(x) [D(x)]\)’.)

More importantly, Frege’s account was designed to apply equally well to propositions involving relations and multiple quantifiers. And with regard to these propositions, there seems to be a big difference between logical structure and grammatical structure.

On Frege’s view, a single quantifier can bind an unsaturated position that is associated with a function that takes a single argument. But it is equally true that two quantifiers can bind two unsaturated positions associated with a function that takes a pair of arguments. For example, the proposition that everyone likes everyone can be represented with the formal sentence ‘\(\forall x \forall y [L(x, y)]\)’. Assuming that ‘Romeo’ and ‘Juliet’ indicate arguments, it follows that Romeo likes everyone, and that everyone likes Juliet—\(\forall y [L(r, y)]\) and \(\forall x [L(x, j)]\). And it follows from all three propositions that Romeo likes Juliet: \(L(r, j)\). The rules of inference for Frege’s logic capture this general feature of the universal quantifier. A variable bound by a universal quantifier can be replaced with a name for some individual in the domain. Correlatively, a name can be replaced with a variable bound by an existential quantifier. Given that Romeo likes Juliet, it follows that someone likes Juliet, and Romeo likes someone. Frege’s formalism can capture this as well: \(L(r, j)\); so \(\exists x [L(x, j)] \land \exists x [L(r, x)]\). And given either conjunct in the conclusion, it follows that someone likes someone: \(\exists x \exists y [L(x, y)]\). A single quantifier can also bind multiple argument positions, as in ‘\(\exists x [L(x, x)]\)’, which is true iff someone likes herself. Putting these points schematically: \(\forall x (\dots x \dots)\), so \(\dots n \dots\); and \(\dots n \dots\), so \(\exists x (\dots x \dots)\).

Mixed quantification introduces an interesting wrinkle. The propositions expressed with ‘\(\exists x \forall y [L(x, y)]\)’ and ‘\(\forall y \exists x [L(x, y)]\)’ differ. We can paraphrase the first as ‘there is someone who likes everyone’ and the second as ‘everyone is liked by someone or other’. The second follows from the first, but not vice versa. This suggests that ‘someone likes everyone’ is ambiguous, in that this string of English words can be used to express two different propositions. This in turn raises difficult questions about what natural language expressions are, and how they can be used to express propositions; see section eight. But for Frege, the important point concerned the distinction between the propositions (Gedanken). Similar remarks apply to ‘\(\forall x \exists y [L(x, y)]\)’ and ‘\(\exists y \forall x [L(x, y)]\)’.
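The scope contrast is easy to exhibit in the same style. In the invented model below, everyone likes exactly one individual (him or herself), so ‘everyone is liked by someone or other’ is true while ‘someone likes everyone’ is false; a further loop confirms that on a two-element domain the first schema implies the second in every model.

```python
from itertools import product

DOMAIN = [0, 1, 2]
def L(x, y): return x == y   # everyone likes him/herself and no one else

exists_forall = any(all(L(x, y) for y in DOMAIN) for x in DOMAIN)  # someone likes everyone
forall_exists = all(any(L(x, y) for x in DOMAIN) for y in DOMAIN)  # everyone is liked by someone
print(exists_forall, forall_exists)                                # False True

# On a two-element domain, check that the first schema implies the
# second in every model (every choice of extension for the relation):
D2 = [0, 1]
pairs = list(product(D2, D2))
for bits in product([False, True], repeat=len(pairs)):
    rel = {p for p, keep in zip(pairs, bits) if keep}
    if any(all((x, y) in rel for y in D2) for x in D2):
        assert all(any((x, y) in rel for x in D2) for y in D2)
```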

A related phenomenon is exhibited by ‘John danced if Mary sang and Chris slept’. Is the intended proposition of the form ‘(A if B) and C’ or ‘A if (B and C)’? Indeed, it seems that the relation between word-strings and propositions expressed is often one-to-many. Is someone who says ‘The artist drew a club’ talking about a sketch or a card game? One can use ‘is’ to express identity, as in ‘Hesperus is the planet Venus’; but in ‘Hesperus is bright’, ‘is’ indicates predication. In ‘Hesperus is a planet’, ‘a’ seems to be logically inert; yet in ‘John saw a planet’, ‘a’ seems to indicate existential quantification: \(\exists x [P(x) \land S(j, x)]\). (One can render ‘Hesperus is a planet’ as ‘\(\exists x [P(x) \land h = x]\)’. But this treats ‘is a planet’ as importantly different from ‘is bright’; and this leads to other difficulties.) According to Frege, such ambiguities provide further evidence that natural language is not suited to the task of representing propositions and inferential relations perspicuously. (Leibniz and others had envisioned a “Characteristica Universalis”, but without detailed proposals for how to proceed beyond syllogistic logic in creating one.) This is not to deny that natural language is well suited for other purposes, perhaps including efficient human communication. And Frege held that we often do use natural language to express propositions. But he suggested that natural language is like the eye, whereas a good formal language is like a microscope that reveals structure not otherwise observable. On this view, the logical form of a proposition is made manifest by the structure of a sentence in an ideal formal language—what Frege called a Begriffsschrift (concept-script); where the sentences of such a language exhibit function-argument structures that differ in kind from the grammatical structures exhibited by the sentences we use in ordinary communication.

The real power of Frege’s strategy for representing propositional structure is most evident in his discussions of proofs by induction, the Dedekind-Peano axioms for arithmetic, and how the proposition that every number has a successor is logically related to more basic truths of arithmetic; see the entry on Frege’s theorem and foundations for arithmetic. But without getting into these details, one can get a sense of Frege’s improvement on previous logic by considering (15–16) and Fregean analyses of the corresponding propositions.

(15)
Every patient respects some doctor
\(\forall x \{P(x) \rightarrow \exists y [D(y) \land R(x,y)]\}\)
(16)
Every old patient respects some doctor
\(\forall x \{[O(x) \land P(x)] \rightarrow \exists y [D(y) \land R(x,y)]\}\)

Suppose that every individual\(_x\) has the following conditional property: if he\(_x\) is a patient, then some individual\(_y\) is such that she\(_y\) is both a doctor and respected by him\(_x\). Then it follows—intuitively and given the rules of Frege’s logic—that every individual\(_x\) has the following conditional property: if he\(_x\) is both old and a patient, then some individual\(_y\) is such that she\(_y\) is both a doctor and respected by him\(_x\). So the proposition expressed with (16) follows from the one expressed with (15). More interestingly, we can also account for why the proposition expressed with (14) follows from the one expressed with (13).

(13)
No patient who saw every young doctor is healthy
\(\neg \exists x \{P(x) \land \forall y \{[Y(y) \land D(y)] \rightarrow S(x,y)\} \land H(x)\}\)
(14)
No patient who saw every doctor is healthy
\(\neg \exists x \{P(x) \land \forall y [D(y) \rightarrow S(x,y)] \land H(x)\}\)

For suppose it is false that some individual has the following conjunctive property: he\(_x\) is a patient; and he\(_x\) saw every young doctor (i.e., every individual\(_y\) is such that if she\(_y\) is a young doctor, then he\(_x\) saw her\(_y\)); and he\(_x\) is healthy. Then intuitively, and also given the rules of Frege’s logic, it is false that some individual has the following conjunctive property: he\(_x\) is a patient; and he\(_x\) saw every doctor; and he\(_x\) is healthy. This captures the fact that the direction of valid inference is from ‘every young doctor’ in (13) to ‘every doctor’ in (14), despite the fact that in simpler cases, replacing ‘every doctor’ with ‘every young doctor’ is valid. More generally, Frege’s logic handles a wide range of inferences that had puzzled medieval logicians. But the Fregean logical forms seem to differ dramatically from the grammatical forms of sentences like (13–16). Frege concluded that we need a Begriffsschrift, distinct from the languages we naturally speak, to depict (and help us discern) the structures of the propositions we can somehow express by using ordinary sentences in contexts.
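The claim that (13) entails (14), but not conversely, can likewise be tested by brute force. The sketch below (ours, and only suggestive, since exhausting two-element models is evidence rather than proof) searches for countermodels in both directions.

```python
from itertools import combinations, product

DOMAIN = [0, 1]

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def no_13(P, Y, Doc, H, saw):
    # (13): not exists x [P(x) & forall y ([Y(y) & D(y)] -> S(x,y)) & H(x)]
    return not any(x in P and x in H and
                   all(not (y in Y and y in Doc) or (x, y) in saw for y in DOMAIN)
                   for x in DOMAIN)

def no_14(P, Y, Doc, H, saw):
    # (14): not exists x [P(x) & forall y (D(y) -> S(x,y)) & H(x)]
    return not any(x in P and x in H and
                   all(y not in Doc or (x, y) in saw for y in DOMAIN)
                   for x in DOMAIN)

sets = subsets(DOMAIN)
rels = subsets(list(product(DOMAIN, DOMAIN)))
models = [(P, Y, Doc, H, saw)
          for P, Y, Doc, H in product(sets, repeat=4) for saw in rels]

print(any(no_13(*m) and not no_14(*m) for m in models))   # False: (13) entails (14)
print(any(no_14(*m) and not no_13(*m) for m in models))   # True: but not conversely
```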

Frege also made a different kind of contribution, which would prove important, to the study of propositions. In early work, he spoke as though propositional constituents were the relevant functions and (ordered n-tuples of) entities that such functions map to truth-values. But he later refined this view in light of his distinction between Sinn and Bedeutung; see the entry on Frege. The Sinn of an expression was said to be a “way of presenting” the corresponding Bedeutung, which might be an entity (with truth-values as special cases of entities) or a function from (ordered n-tuples of) entities to truth-values. The basic idea is that two names, like ‘Hesperus’ and ‘Phosphorus’, can present the same Bedeutung in different ways; in which case, the Sinn of the first name differs from the Sinn of the second. Given this distinction, we can think of ‘Hesperus’ as an expression that presents Venus as the evening star, while ‘Phosphorus’ presents Venus as the morning star. Likewise, we can think of ‘is bright’ as an expression that presents a certain function in a certain way, and ‘Hesperus is bright’ as a sentence that presents its truth-value in a certain way—i.e., as the value of the function in question given the argument in question. From this perspective, propositions are sentential ways of presenting truth-values, and proposition-parts are subsentential ways of presenting functions and arguments. Frege could thus distinguish the proposition that Hesperus is bright from the proposition that Phosphorus is bright, even though the two propositions are alike with regard to the relevant function and argument. Likewise, he could distinguish the trivial proposition that Hesperus is Hesperus from the (apparently nontrivial) proposition that Hesperus is Phosphorus. This is an attractive view. For intuitively, the inference ‘Hesperus is Hesperus, so Hesperus is Phosphorus’ is not an instance of the following obviously valid schema: A, so A. But this raised questions about what the Sinn of an expression really is, what “presentation” could amount to, and what to say about a name with no Bedeutung.

5. Descriptions and Analysis

Frege did not distinguish (or at least did not emphasize any distinction between) names like ‘John’ and descriptions like ‘the boy’ or ‘the tall boy from Canada’. Initially, both kinds of expression seem to indicate arguments, as opposed to functions. So one might think that the logical form of ‘The boy sang’ is simply ‘\(S(b)\)’, where ‘\(b\)’ is an unstructured symbol that stands for the boy in question (and presents him in a certain way). But this makes the elements of a description logically irrelevant. And this seems wrong. If the tall boy from Canada sang, then some boy from Canada sang. Moreover, ‘the’ implies uniqueness in a way that ‘some’ does not. Of course, one can say ‘The boy sang’ without denying that the universe contains more than one boy. But likewise, in ordinary conversation, one can say ‘Everything is in the trunk’ without denying that the universe contains some things not in the trunk. And intuitively, a speaker who uses ‘the’ does imply that the adjacent predicate is satisfied by exactly one contextually relevant thing.

Bertrand Russell held that these implications reflect the logical form of a proposition expressed (in a given context) with a definite description. On his view, ‘The boy sang’ has the following logical form:

\[\exists x \{\textrm{Boy}(x) \land \forall y [\textrm{Boy}(y) \rightarrow y = x] \land S(x)\} \]

That is: some individual\(_x\) is such that he\(_x\) is a boy, and every (relevant) individual\(_y\) is such that if he\(_y\) is a boy, then he\(_y\) is identical with him\(_x\), and he\(_x\) sang. The awkward middle conjunct was Russell’s way of expressing uniqueness with Fregean tools; cf. section seven. But rewriting the middle conjunct would not affect Russell’s technical point, which is that ‘the boy’ does not correspond to any constituent of the formalism. This in turn reflects Russell’s central claim—viz., that while a speaker may refer to a certain boy in saying ‘The boy sang’, the boy in question is not a constituent of the proposition indicated. According to Russell, the proposition has the form of an existential quantification with a bound variable. It does not have the form of a function saturated by (an argument that is) the boy referred to. The proposition is general rather than singular. In this respect, ‘the boy’ is like ‘some boy’ and ‘every boy’; though on Russell’s view, not even ‘the’ indicates a constituent of the proposition expressed.

This extended Frege’s idea that natural language misleads us about the structure of the propositions we assert. Russell went on to apply this hypothesis to what became a famous puzzle. Even though France is currently kingless, ‘The present king of France is bald’ can be used to express a proposition. The sentence is not meaningless; it has implications. So if the proposition consists of a function indicated with ‘\(\textrm{Bald}(\ )\)’ and an argument indicated with ‘The present king of France’, there must be an argument that is indicated. But appeal to nonexistent kings is, to say the least, dubious. Russell concluded that ‘The present king of France is bald’ expresses a quantificational proposition:

\[\exists x \{K(x) \land \forall y [K(y) \rightarrow y = x] \land B(x)\}; \]

where \(K(x) = \textbf{T}\) iff \(x\) is a present king of France, and \(B(x) = \textbf{T}\) iff \(x\) is bald. (For present purposes, set aside worries about the vagueness of ‘bald’.) And as Russell noted, the following contrary reasoning is spurious: every proposition is true or false; so the present king of France is bald or not; so there is a king of France, and he is either bald or not. For let P be the proposition that the king of France is bald. Russell held that P is indeed true or false. On his view, it is false. Given that \(\neg \exists x [K(x)]\), it follows that

\[ \neg \exists x \{K(x) \land \forall y [K(y) \rightarrow y = x] \land B(x)\}. \]

But it does not follow that there is a present king of France who is either bald or not. Given that \(\neg \exists x [K(x)]\), it hardly follows that

\[\exists x \{K(x) \land [B(x) \lor \neg B(x)]\}.\]

So we must not confuse the negation of P with the following false proposition:

\[\exists x \{K(x) \land \forall y [K(y) \rightarrow y = x] \land \neg B(x)\}. \]

The ambiguity of natural language may foster such confusion, given examples like ‘The present king of France is bald or not’. But according to Russell, puzzles about “nonexistence” can be resolved without special metaphysical theses, given the right views about logical form and natural language.
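Russell’s resolution can be rendered in the same computational style. In the invented model below, with no king in the domain, the Russellian truth conditions make ‘The K is bald’ false, its wide-scope negation true, and the narrow-scope ‘The K is non-bald’ false as well; the apparent dilemma dissolves.

```python
DOMAIN = ['pierre', 'marie']     # no present king of France here
K = set()                        # extension of 'present king of France'
B = {'pierre'}                   # extension of 'bald'

def the_K(pred):
    """Russell: exists x [K(x) & forall y (K(y) -> y = x) & pred(x)]."""
    return any(x in K and
               all(y not in K or y == x for y in DOMAIN) and
               pred(x)
               for x in DOMAIN)

p = the_K(lambda x: x in B)           # 'The present king of France is bald'
print(p)                              # False
print(not p)                          # True: the wide-scope negation of P
print(the_K(lambda x: x not in B))    # False: narrow scope, 'The K is non-bald'
```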

This invited the thought that other philosophical puzzles might dissolve if we properly understood the logical forms of our claims. Ludwig Wittgenstein argued, in his influential Tractatus Logico-Philosophicus, that: (i) the very possibility of meaningful sentences, which can be true or false depending on how the world is, requires propositions with structures of the sort that Frege and Russell were getting at; (ii) all propositions are logical compounds of—and thus analyzable into—atomic propositions that are inferentially independent of one another; though (iii) even simple natural language sentences may correspond to very complex propositions; and (iv) the right analyses would, given a little reflection, reveal all philosophical puzzles as confusions about how language is related to the world. Wittgenstein later noted that examples like ‘This is red’ and ‘This is yellow’ present difficulties for his earlier view. (If the expressed propositions are unanalyzable, and thus logically independent, each should be compatible with the other; but at least so far, no one has provided a plausible analysis that accounts for the apparent impeccability of ‘This is red, so this is not yellow’. This raises questions about whether all inferential security is due to logical form.) And in any case, Russell did not endorse (iv). But he did say, for reasons related to certain epistemological puzzles, that (a) we are directly acquainted with the constituents of those propositions into which every proposition (that we can grasp) can be analyzed; (b) at least typically, we are not directly acquainted with the mind-independent bearers of proper names; and so (c) the things we typically refer to with names are not constituents of basic propositions.

This led Russell to say that natural language names are disguised descriptions. On this view, ‘Hesperus’ is semantically associated with a complex predicate—say, for illustration, a predicate of the form ‘\(E(x) \land S(x)\)’, suggesting ‘evening star’. In which case, ‘Hesperus is bright’ expresses a proposition of the form

\[\exists x \{[E(x) \land S(x)] \land \forall y \{[E(y) \land S(y)] \rightarrow y = x\} \land B(x)\}. \]

It also follows that Hesperus exists iff \(\exists x [E(x) \land S(x)]\); and this would be challenged by Kripke (1980) and others; see the entries on rigid designators and names. But by analyzing names as descriptions—quantificational expressions, as opposed to logical constants (like ‘\(b\)’) that indicate individuals—Russell offered an attractive account of why the proposition that Hesperus is bright differs from the proposition that Phosphorus is bright. Instead of saying that propositional constituents are Fregean senses, Russell could say that ‘Phosphorus is bright’ expresses a proposition of the form

\[\exists x \{[M(x) \land S(x)] \land \forall y \{[M(y) \land S(y)] \rightarrow y = x\} \land B(x)\}; \]

where ‘\(E(x)\)’ and ‘\(M(x)\)’ indicate different functions, specified (respectively) in terms of evenings and mornings. This leaves room for the discovery that the complex predicates ‘\(E(x) \land S(x)\)’ and ‘\(M(x) \land S(x)\)’ both indicate functions that map Venus and nothing else to the truth-value \(\textbf{T}\). The hypothesis was that the propositions expressed with ‘Hesperus is bright’ and ‘Phosphorus is bright’ have different (fundamental) constituents, even though Hesperus is Phosphorus, but not because propositional constituents are “ways of presenting” Bedeutungen. Similarly, the idea was that the propositions expressed with ‘Hesperus is Hesperus’ and ‘Hesperus is Phosphorus’ differ, because only the latter has predicational/unsaturated constituents corresponding to ‘Phosphorus’. Positing unexpected logical forms seemed to have explanatory payoffs.

Questions about names and descriptions are also related to psychological reports, like ‘Mary thinks Venus is bright’, which present puzzles of their own; see the entry on propositional attitude reports. Such reports seem to indicate propositions that are neither atomic nor logical compounds of simpler propositions. For as Frege noted, replacing one name with another name for the same object can apparently affect the truth of a psychological report. If Mary fails to know that Hesperus is Venus, she might think Venus is a planet without thinking Hesperus is a planet; though cp. Soames (1987, 1995, 2002) and see the entry on singular propositions. Any function that has the value \(\textbf{T}\) given Venus as argument has the value \(\textbf{T}\) given Hesperus as argument. So Frege, Russell, and Wittgenstein all held—in varying ways—that psychological reports are also misleading with respect to the logical forms of the indicated propositions.

6. Regimentation and Communicative Slack

Within the analytic tradition inspired by these philosophers, it became a commonplace that logical form and grammatical form typically diverge, often in dramatic ways. This invited attempts to provide both analyses of propositions and claims about natural language, with the aim of saying how relatively simple sentences (with subject-predicate structures) could be used to express propositions (with function-argument structures).

The logical positivists explored the idea that the meaning of a sentence is a procedure for determining the truth or falsity of that sentence. From this perspective, studies of linguistic meaning and propositional structure still dovetail, even if natural language employs “conventions” that make it possible to indicate complex propositions with grammatically simple sentences; see the entry on analysis. But to cut short a long and interesting story, there was little success in formulating “semantic rules” that were plausible both as (i) descriptions of how ordinary speakers understand sentences of natural language, and (ii) analyses that revealed logical structure of the sort envisioned. (And until Montague [1970], discussed briefly in the next section, there was no real progress in showing how to systematically associate quantificational constructions of natural language with Fregean logical forms.)

Rudolf Carnap, one of the leading positivists, responded to difficulties facing his earlier views by developing a sophisticated position according to which philosophers could (and should) articulate alternative sets of conventions for associating sentences of a language with propositions. Within each such language, the conventions would determine what follows from what. But one would have to decide, on broadly pragmatic grounds, which interpreted language was best for certain purposes (like conducting scientific inquiry). On this view, questions about “the” logical form of an ordinary sentence are in part questions about which conventions one should adopt. The idea was that “internal” to any logically perspicuous linguistic scheme, there would be an answer to the question of how two sentences are inferentially related. But “external” questions, about which conventions we should adopt, would not be settled by descriptive facts about how we understand languages that we already use.

This was, in many ways, an attractive development of Frege’s vision. But it also raised a skeptical worry. Perhaps the structural mismatches between sentences of a natural language and sentences of a Fregean Begriffsschrift are so severe that one cannot formulate general rules for associating the sentences we ordinarily use with propositions. Later theorists would combine this view with the idea that propositions are sentences of a mental language that is relevantly like Frege’s invented language and relevantly unlike the spoken languages humans use to communicate; see Fodor (1975, 1978). But given the rise of behaviorism, both in philosophy and psychology, this variant on a medieval idea was initially ignored or ridiculed. (And it does face difficulties; see section eight.)

Willard Van Orman Quine combined behaviorist psychology with a normative conception of logical form similar to Carnap’s. The result was an influential view according to which there is no fact of the matter about which proposition a speaker/thinker expresses with a sentence of natural language, because talk of propositions is (at best) a way of talking about how we should regiment our verbal behavior for certain purposes—and in particular, for purposes of scientific inquiry. On this view, claims about logical form are evaluative, and such claims are underdetermined by the totality of facts concerning speakers’ dispositions to use language. From this perspective, mismatches between logical and grammatical form are to be expected, and we should not conclude that ordinary speakers have mental representations that are isomorphic with sentences of a Fregean Begriffsschrift.

According to Quine, speakers’ behavioral dispositions constrain what can be plausibly said about how to best regiment their language. He also allowed for some general constraints on interpretability that an idealized “field linguist” might impose in coming up with a regimented interpretation scheme. (Donald Davidson developed a similar line of thought in a less behavioristic idiom, speaking in terms of constraints on a “Radical Interpreter,” who seeks “charitable” construals of alien speech.) But unsurprisingly, this left ample room for “slack” with respect to which logical forms should be associated with a given sentential utterance.

Quine also held that decisions about how to make such associations should be made holistically. As he sometimes put it, the “unit of translation” is an entire language, not a particular sentence. On this view, one can translate a sentence S of a natural language NL with a structurally mismatching sentence µ of a formal language FL, even if it seems (locally) implausible that S is used to express the proposition associated with µ, so long as the following condition is met: the association between S and µ is part of a general account of NL and FL that figures in an overall theory—which includes an account of language, logic, and the language-independent world—that is among the best overall theories available. This holistic conception of how to evaluate proposed regimentations of natural language was part and parcel of Quine’s criticism of the early positivists’ analytic-synthetic distinction, and his more radical suggestion that there is no such distinction.

The suggestion was that even apparently tautologous sentences, like ‘Bachelors are unmarried’ and ‘Caesar died if Brutus killed him’, have empirical content. These may be among the last sentences we would dissent from, faced with recalcitrant experience; we may prefer to say that Caesar didn’t really die, or that Brutus didn’t really kill him, if the next best alternative is to deny the conditional claim. But for Quine, every meaningful claim is a claim that could turn out to be false—and so a claim we must be prepared, at least in principle, to reject. Correlatively, no sentences are known to be true simply by knowing what they mean and knowing a priori that sentences with such meanings must be true.

For present purposes, we can abstract away from the details of debates about whether Quine’s overall view was plausible. Here, the important point is that claims about logical form were said to be (at least partly) claims about the kind of regimented language we should use, not claims about the propositions actually expressed with sentences of natural language. And one aspect of Quine’s view, about the kind of regimented language we should use, turned out to be especially important for subsequent discussions of logical form. For even among those who rejected the behavioristic assumptions that animated Quine’s conception of language, it was often held that logical forms are expressions of a first-order predicate calculus.

Frege’s Begriffsschrift, recall, was designed to capture the Dedekind-Peano axioms for arithmetic, including the axiom of induction; see the entry on Frege’s theorem and foundations for arithmetic. This required quantification into positions occupiable by predicates, as well as positions occupiable by names. Using modern notation, Frege allowed for formulae like ‘\((Fa \land Fb) \rightarrow \exists X (Xa \land Xb)\)’ and ‘\(\forall x \forall y [x = y \leftrightarrow \forall X (Xx \leftrightarrow Xy)]\)’. And he took second-order quantification to be quantification over functions. This is to say, for example, that ‘\(\exists X (Xa \land Xb)\)’ is true iff: there is a function, \(X\), that maps both the individual called ‘\(a\)’ and the individual called ‘\(b\)’ onto the truth-value T. Frege also took it to be a truth of logic that for any predicate \(P\), there is a function such that for each individual \(x\), that function maps \(x\) to T iff \(x\) satisfies (or “falls under”) \(P\). In which case, for each predicate, there is the set of all and only the things that satisfy the predicate. The axioms for Frege’s logic thus generated Russell’s paradox, given predicates like ‘is not a member of itself’. This invited attempts to weaken the axioms, while preserving second-order quantification. But for various reasons, Quine and others advocated a restriction to a first-order fragment of Frege’s logic, disallowing quantification into positions occupied by predicates. (Kurt Gödel had proved the completeness of first-order predicate calculus, thus providing a purely formal criterion for what followed from what in that language. Quine also held that second-order quantification illicitly treated predicates as names for sets, thereby spoiling Frege’s conception of propositions as unified by virtue of having unsaturated predicational constituents that are satisfied by things denoted by names.) On Quine’s view, we should replace

\[\lsquo (Fa \land Fb) \rightarrow \exists X (Xa \land Xb)\rsquo\]

with explicit first-order quantification over sets, as in

\[\lsquo (Fa \land Fb) \rightarrow \exists s (a \in s \land b \in s)\rsquo ;\]

where ‘\(\in\)’ stands for ‘is an element of’, and this second conditional is not a logical truth, but rather a hypothesis (to be evaluated holistically) concerning sets.

The preference for first-order regimentations has come to seem unwarranted, or at least highly tendentious; see Boolos (1998). But it fueled the idea that logical form can diverge wildly from grammatical form. For as students quickly learn, first-order regimentations of natural language sentences often turn out to be highly artificial. (And in some cases, such regimentations seem to be unavailable.) This was, however, taken to show that natural languages are far from ideal for purposes of indicating logical structure.

A different strand of thought in analytic philosophy—pressed by Wittgenstein in Philosophical Investigations and developed by others, including Peter Strawson and John Austin—also suggested that a single sentence could be used (on different occasions) to express different kinds of propositions. Strawson (1950) argued that, pace Russell, a speaker could use an instance of ‘The F is G’ to express a singular proposition about a specific individual: namely, the F in the context at hand. According to Strawson, sentences themselves do not have truth conditions, since sentences (as opposed to speakers) do not express propositions; and speakers can use ‘The boy is tall’ to express a proposition with the contextually relevant boy as a constituent. Donnellan (1966) went on to argue that a speaker could even use an instance of ‘The F is G’ to express a singular proposition about an individual that isn’t an F; see the entry on reference. Such considerations, which have received a great deal of attention in recent discussions of context dependence, suggested that relations between natural language sentences and propositions are (at best) very complex and mediated by speakers’ intentions. All of which made it seem that such relations are far more tenuous than the pre-Fregean tradition suggested. This bolstered the Quine/Carnap idea that questions about the structure of premises and conclusions are really questions about how we should talk (when trying to describe the world), much as logic itself seems to be more concerned with how we should infer than with how we do infer. From this perspective, the connections between logic and grammar seemed rather shallow; see Iacona (2018) for extended discussion.

7. Notation and Restricted Quantification

On the other hand, more recent work on quantifiers suggests that the divergence had been exaggerated, in part because of how Frege’s idea of variable-binding was originally implemented. Consider again the proposition that some boy sang, and the proposed logical division into the quantifier and the rest: \(\exists x [\textrm{Boy}(x) \land \textrm{Sang}(x)]\); something is both a boy and an individual that sang. This is one way to regiment the English sentence. But one can also offer a logical paraphrase that more closely parallels the grammatical division between ‘some boy’ and ‘sang’: for some individual \(x\) such that \(x\) is a boy, \(x\) sang. One can formalize this paraphrase with restricted quantifiers, which incorporate a restriction on the domain over which the variable in question ranges. For example, ‘\(\exists x{:}B(x)\)’ can be an existential quantifier that binds a variable ranging over the boys in the relevant domain, with ‘\(\exists x{:}B(x) [S(x)]\)’ being true iff some boy sang. Since ‘\(\exists x{:}B(x) [S(x)]\)’ and ‘\(\exists x [B(x) \land S(x)]\)’ are logically equivalent, logic provides no reason for preferring the latter regimentation of the English sentence. And choosing the latter does not show that the proposition expressed with ‘Some boy sang’ has a structure that differs from the grammatical structure of the sentence.

Universal quantifiers can also be restricted, as in ‘\(\forall x{:}B(x) [S(x)]\)’, interpreted as follows: for every individual \(x\) such that \(x\) is a boy, \(x\) sang. Restrictors can also be logically complex, as in ‘Some boy from Canada sang’ or ‘Some boy who respects Mary sang’, rendered as ‘\(\exists x{:}B(x) \land F(x, c)[S(x)]\)’ and ‘\(\exists x{:}B(x) \land R(x, m)[S(x)]\)’. Given these representations, the inferential difference between ‘some boy sang’ and ‘every boy sang’ lies with the propositional contributions of ‘some’ and ‘every’ after all, and not partly with the contribution of connectives like ‘\(\land\)’ and ‘\(\rightarrow\)’.
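
To make these equivalences concrete, here is a minimal sketch in Python (the choice of language, the toy domain, and the particular extensions are all expository assumptions) that evaluates the restricted and unrestricted regimentations directly:

```python
# A toy model: a small domain plus extensions for the predicates.
# The domain and the particular extensions are illustrative assumptions.
domain = {'a', 'b', 'c', 'd'}
boy  = {'a', 'b'}            # extension of 'B(x)'
sang = {'b', 'c'}            # extension of 'S(x)'

# 'Ex[B(x) & S(x)]': the variable ranges over the whole domain.
unrestricted_some = any(x in boy and x in sang for x in domain)

# 'Ex:B(x)[S(x)]': the variable ranges over the boys only.
restricted_some = any(x in sang for x in boy)

# 'Ax[B(x) -> S(x)]' versus 'Ax:B(x)[S(x)]'.
unrestricted_every = all(x not in boy or x in sang for x in domain)
restricted_every = all(x in sang for x in boy)

assert unrestricted_some == restricted_some
assert unrestricted_every == restricted_every

# A logically complex restrictor, as in 'Some boy who respects Mary sang':
respects_mary = {'b', 'd'}
print(any(x in sang for x in boy & respects_mary))   # True: 'b' qualifies
```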

Words like ‘someone’, and the grammatical requirement that ‘every’ must be followed by a noun (or noun phrase), suggest that natural language employs restricted quantifiers. Phrases like ‘every boy’ are composed of a determiner and a noun. Correspondingly, one can think of determiners as expressions that can combine with an ordered pair of predicates to form a sentence, much as one can think of transitive verbs as expressions that can combine with an ordered pair of names to form a sentence. And this grammatical analogy, between determiners and transitive verbs, invites a semantic correlate.

Since ‘\(x\)’ and ‘\(y\)’ are variables ranging over individuals, one can say that the function indicated by the transitive verb ‘likes’ yields the value T given the ordered pair \(\langle x,y \rangle\) as argument if and only if \(x\) likes \(y\). In this notational scheme, ‘\(y\)’ corresponds to the direct object (or internal argument), which combines with the verb to form a phrase; ‘\(x\)’ corresponds to the grammatical subject (or external argument) of the verb. If we think about ‘every boy sang’ analogously, ‘boy’ is the internal argument of ‘every’, since ‘every boy’ is a phrase. By contrast, ‘boy’ and ‘sang’ do not form a phrase in ‘every boy sang’. So let us introduce ‘\(X\)’ and ‘\(Y\)’ as second-order variables ranging over functions, from individuals to truth values, stipulating that the extension of such a function is the set of things that the function maps onto the truth value T. Then one can say that the function indicated by ‘every’ yields the value T given the ordered pair \(\langle X,Y \rangle\) as argument iff the extension of \(X\) includes the extension of \(Y\). Similarly, one can say that the function indicated by ‘some’ maps the ordered pair \(\langle X, Y \rangle\) onto T iff the extension of \(X\) intersects with the extension of \(Y\).

Just as we can describe ‘likes’ as a predicate satisfied by ordered pairs \(\langle x, y \rangle\) such that \(x\) likes \(y\), so we can think about ‘every’ as a predicate satisfied by ordered pairs \(\langle X, Y \rangle\) such that the extension of \(X\) includes the extension of \(Y\). (This is compatible with thinking about ‘every boy’ as a restricted quantifier that combines with a predicate to form a sentence that is true iff every boy satisfies that predicate.) One virtue of this notational scheme is that it lets us represent relations between predicates that cannot be captured with ‘\(\forall\)’, ‘\(\exists\)’, and the sentential connectives; see Rescher (1962), Wiggins (1980). For example, most boys sang iff the boys who sang outnumber the boys who did not sing. So we can say that ‘most’ indicates a function that maps \(\langle X, Y \rangle\) to T iff the number of things that both \(Y\) and \(X\) map to T exceeds the number of things that \(Y\) but not \(X\) maps to T.
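
These relational meanings can be modeled directly. In the following sketch (again Python, with invented extensions), each determiner denotes a function from a pair of extensions to a truth value; following the convention above, the second argument \(Y\) is the restrictor (the internal argument, e.g. ‘boy’) and the first argument \(X\) is the scope (e.g. ‘sang’). As the text notes, no combination of ‘\(\forall\)’, ‘\(\exists\)’, and the sentential connectives computes the relation expressed by ‘most’.

```python
# Determiner denotations as relations between predicate extensions.
# Convention from the text: Y is the internal argument (the restrictor,
# e.g. 'boy'), X is the external argument (the scope, e.g. 'sang').

def every(X, Y):
    """T iff the extension of X includes the extension of Y."""
    return Y <= X            # set inclusion

def some(X, Y):
    """T iff the extensions of X and Y intersect."""
    return bool(X & Y)

def most(X, Y):
    """T iff the Ys that are X outnumber the Ys that are not X."""
    return len(Y & X) > len(Y - X)

# Toy extensions (illustrative assumptions):
boy  = {'al', 'bo', 'cy'}
sang = {'bo', 'cy', 'dee'}

print(every(sang, boy))   # False: 'al' is a boy who did not sing
print(some(sang, boy))    # True:  'bo' is a boy who sang
print(most(sang, boy))    # True:  two of the three boys sang
```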

Using restricted quantifiers, and thinking about determiners as devices for indicating relations between functions, also suggests an alternative to Russell’s treatment of ‘the’. The formula

\[\lsquo \exists x \{B(x) \land \forall y [B(y) \rightarrow x = y] \land S(x)\}\rsquo\]

can be rewritten as ‘\(\exists x{:}B(x)[S(x)] \land |B| = 1\)’, interpreted as follows: for some individual \(x\) such that \(x\) is a boy, \(x\) sang; and the number of (relevant) boys is exactly one. On this view, ‘the boy’ still does not correspond to a constituent of the formalism; nor does ‘the’. But one can depart farther from Russell’s notation, while emphasizing his idea that ‘the’ is relevantly like ‘some’ and ‘every’. For one can analyze ‘the boy sang’ as ‘\(!x{:}\textrm{Boy}(x)[\textrm{Sang}(x)]\)’, specifying the propositional contribution of ‘\(!\)’—on a par with ‘\(\exists\)’ and ‘\(\forall\)’—as follows:

\[!x{:}Y(x)[X(x)] = \textbf{T} \text{ iff the extensions of } X \text{ and } Y \text{ intersect and } |Y| = 1.\]

This way of encoding Russell’s theory preserves his central claim. While there may be a certain boy that a speaker refers to in saying ‘The boy sang’, that boy is not a constituent of the quantificational proposition expressed with ‘\(!x:\textrm{Boy}(x)[\textrm{Sang}(x)]\)’; see Neale (1990) for discussion. But far from showing that the logical form of ‘The boy sang’ diverges dramatically from its grammatical form, the restricted quantifier notation suggests that the logical form closely parallels the grammatical form. For ‘the boy’ and ‘the’ do correspond to constituents of ‘\(!x:B(x)[S(x)]\)’, at least if we allow for logical forms that represent quantificational propositions in terms of second-order relations; see Montague (1970), and for discussion of relevant constraints on how such relations can be expressed with quantificational determiners, see Barwise and Cooper (1981), Higginbotham and May (1981), Keenan (1996), and the article on generalized quantifiers.

It is worth noting, briefly, a potential implication for inferences like ‘The boy sang, so some boy sang’. If the logical form of ‘The boy sang’ is

\[\lsquo\exists x{:}B(x) [S(x)] \land |B| = 1\rsquo ,\]

then the inference is an instance of the schema ‘\(\textbf{A} \land \textbf{B}\), so \(\textbf{A}\)’. But if the logical form of ‘The boy sang’ is simply ‘\(!x{:}B(x)[S(x)]\)’, the premise and conclusion have the same form, differing only by substitution of ‘\(!\)’ for ‘\(\exists\)’. In which case, the impeccability of the inference depends on the specific contributions of ‘the/\(!\)’ and ‘some/\(\exists\)’. Only when these contributions are “spelled out,” perhaps in terms of set-intersection, would the validity of the inference be manifest; see, e.g., King (2002). So even if grammar and logic do not diverge in this case, one might say that grammatical structure does not reveal the logical structure. From this perspective, further analysis of ‘the’ is required. Those who are skeptical of an analytic/synthetic distinction can say that it remains more a decision than a discovery that ‘Some boy sang’ follows from ‘The boy sang’. In general, and especially with regard to aspects of propositional form indicated with individual words, issues about logical form are connected with issues about the analytic-synthetic distinction.
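
The way such validity could become manifest can be illustrated with a brute-force check. The sketch below (continuing the set-theoretic treatment of determiners; the three-element domain is an arbitrary expository choice) verifies that in every model over that domain, ‘Some boy sang’ is true whenever ‘The boy sang’ is. This is a check rather than a proof; but it shows how spelling out the contributions of ‘the/\(!\)’ and ‘some/\(\exists\)’ in terms of intersection and cardinality makes the inference transparent.

```python
from itertools import chain, combinations

def some(X, Y):
    """'E': the extensions of X and Y intersect."""
    return bool(X & Y)

def the(X, Y):
    """'!': the extensions of X and Y intersect, and |Y| = 1."""
    return bool(X & Y) and len(Y) == 1

def subsets(domain):
    s = list(domain)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# Every model over a three-element domain: all ways of choosing the
# extensions of 'boy' (the restrictor Y) and 'sang' (the scope X).
for boy in subsets({'a', 'b', 'c'}):
    for sang in subsets({'a', 'b', 'c'}):
        if the(sang, boy):
            assert some(sang, boy)   # 'The boy sang' |= 'Some boy sang'
```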

8. Transformational Grammar

Even given restricted quantifiers (and acceptance of second-order logical forms), the subject/predicate structure of ‘Juliet / likes every doctor’ diverges from the corresponding formula below.

\[\forall y{:}\textrm{Doctor}(y) [\textrm{Likes}(\textrm{Juliet}, y)] \]

We can rewrite ‘\(\textrm{Likes}(\textrm{Juliet}, y)\)’ as ‘\([\textrm{Likes}(y)](\textrm{Juliet})\)’, to reflect the fact that ‘likes’ combines with a direct object to form a phrase, which in turn combines with a subject. But this does not affect the main point: ‘every’ seems to be a grammatical constituent of the verb phrase ‘likes every doctor’; yet it also seems to indicate the main quantifier of the expressed proposition. In natural language, ‘likes’ and ‘every doctor’ form a phrase. But with respect to logical form, it is as if ‘likes’ combines with ‘Juliet’ and a variable to form a complex predicate that is in turn an external argument of the higher-order predicate ‘every’. Similar remarks apply to ‘Some boy likes every doctor’ and

\[\lsquo[\exists x{:}\textrm{Boy}(x)][\forall y{:}\textrm{Doctor}(y)]\{\textrm{Likes}(x,y)\}\rsquo.\]

So it seems that mismatches remain in the very places that troubled medieval logicians—viz., quantificational direct objects and other examples of complex predicates with quantificational constituents.

Montague (1970, 1974) showed that these mismatches do not preclude systematic connections of natural language sentences with the corresponding propositional structures. Abstracting from the technical details, one can specify an algorithm that pairs each natural language sentence that contains one or more quantificational expressions like ‘every doctor’ with one or more Fregean logical forms. This was a significant advance. Together with subsequent developments, Montague’s work showed that Frege’s logic was compatible with the idea that quantificational constructions in natural language have a systematic semantics. Indeed, one can use Frege’s formal apparatus to study such constructions. Montague maintained that the syntax of natural language was, nonetheless, misleading for purposes of (what he took to be) real semantics. On this view, the study of valid inference still suggests that grammar disguises the structure of propositional thought. But in thinking about the relation of logic to grammar, one should not assume a naive conception of the latter.

For example, the grammatical form of a sentence need not be determined by the linear order of its words. Using brackets to disambiguate, we can distinguish the sentence ‘Mary [saw [the [boy [with binoculars]]]]’—whose direct object is ‘the boy with binoculars’—from the homophonous sentence ‘Mary [[saw [the boy]] [with binoculars]]’, in which ‘saw the boy’ is modified by an adverbial phrase. The first implies that the boy had binoculars, while the second implies that Mary used binoculars to see the boy. Even if this distinction is not audibly marked, there is a significant difference between modifying a noun (like ‘boy’) with a prepositional phrase and modifying a verb phrase (‘saw the boy’). More generally, grammatical structure need not be obvious. Just as it may take work to discover the kind(s) of structure that propositions exhibit, so it may take work to discover the kind(s) of structure that sentences exhibit. And many studies of natural language suggest a rich conception of grammatical form that diverges from traditional views; see especially Chomsky (1957, 1964, 1965, 1981, 1986, 1995). So we need to ask how logical forms are related to actual grammatical forms, which linguists try to discover, since these may differ importantly from any hypothesized grammatical forms that may be suggested by casual reflection on spoken language. Appearances may be misleading with respect to both grammatical and logical form, leaving room for the possibility that these notions of structure are not so different after all.

A leading idea of modern linguistics is that at least some grammatical structures are transformations of others. Put another way, linguistic expressions often appear to be displaced from positions canonically associated with certain grammatical relations that the expressions exhibit. For example, the word ‘who’ in (17) is apparently associated with the internal (direct object) argument position of the verb ‘saw’.

(17)
Mary wondered who John saw

Correspondingly, (17) can be glossed as ‘Mary wondered which person is such that John saw that person’. This invites the hypothesis that (17) reflects a transformation of the “Deep Structure” (17D) into the “Surface Structure” (17S); where the subscripts indicate that ‘who’ bears a certain grammatical relation, often called “movement,” to the coindexed position.

(17D)
{Mary [wondered {John [saw who]}]}
(17S)
{Mary [wondered [whoi {John [saw ( _ )i ]}]]}

The idea is that the embedded clause in (17D) has the same form as ‘John saw Bill’, but in (17S), ‘who’ has been displaced from its original argument position. Similar remarks apply to the question ‘Who did John see’ and other question-words like ‘why’, ‘what’, ‘when’, and ‘how’.

One might also try to explain the synonymy of (18) and (19) by positing a common deep structure, (18D).

(18)
John seems to like Mary
(19)
It seems John likes Mary
(18D)
[Seems{John [likes Mary]}]
(18S)
{Johni [seems { ( _ )i [to like Mary]}]}

If every English sentence needs a subject of some kind, (18D) must be modified: either by displacing ‘John’, as in (18S); or by inserting a pleonastic subject, as in (19). Note that in (19), ‘It’ does not indicate an argument; compare ‘There’ in ‘There is something in the garden’. Appeal to displacement also lets one distinguish the superficially parallel sentences (20) and (21).

(20)
John is easy to please
(21)
John is eager to please

If (20) is true, John is easily pleased. In which case, it is easy (for someone) to please John; and here, ‘it’ is pleonastic. But if (21) is true, John is eager that he please someone or other. This asymmetry is effaced by representations like ‘Easy-to-please(John)’ and ‘Eager-to-please(John)’. The contrast is made manifest, however, with (20S) and (21S); where ‘e’ indicates an unpronounced argument position.

(20S)
{Johni [is easy { e [to please ( _ )i ]}]}
(21S)
{Johni [is eager { ( _ )i [to please e ]}]}

It may be that in (21S), which does not mean that it is eager for John to please someone, ‘John’ is grammatically linked to the coindexed position without being displaced from that position. But whatever the details, the “surface subject” of a sentence can be the object of a verb embedded within the main predicate, as in (20S). Of course, such hypotheses about grammatical structure require defense. But Chomsky and others have long argued that such hypotheses are needed to account for various facts concerning human linguistic capacities; see, e.g., Berwick et al. (2011). As an illustration of the kind of data that is relevant, note that while (22–24) are perfectly fine as expressions of English, (25) is not.

(22)
The boy who sang was happy
(23)
Was the boy who sang happy
(24)
The boy who was happy sang
(25)
*Was the boy who happy sang

This suggests that the auxiliary verb ‘was’ can be displaced from some positions but not others. That is, while (23S) is a permissible transformation of (22D), (25S) is not a permissible transformation of (24D).

(22D)
{[The [boy [who sang]]] [was happy]}
(23S)
Wasi {[the [boy [who sang]]] [ ( _ )i happy]}
(24D)
{[The [boy [who [was happy]]]] sang}
(25S)
*Wasi {[the [boy [who [ ( _ )i happy]]]] sang}

In (25), the asterisk indicates intuitive deviance; in (25S), it indicates the hypothesized source of this deviance—viz., that the auxiliary verb cannot be displaced from the embedded relative clause. The ill-formedness of (25S) is striking, since one can sensibly ask whether or not the boy who was happy sang. One can also ask whether or not (26) is true. But (27) is not the yes/no question corresponding to (26).

(26)
The boy who was lost kept crying
(27)
Was the boy who lost kept crying

Rather, (27) is the yes/no question corresponding to ‘The boy who lost was kept crying’, which has an unexpected meaning. So we want some account of why (27) cannot have the interpretation corresponding to (26). But this “negative fact” concerning (27) is precisely what one would expect if ‘was’ cannot be displaced from its position in (26), as in the following logically possible but grammatically illicit structure: *Wasi {[the [boy [who [( _ )i lost]]]] [kept crying]}.

By contrast, if we merely specify an algorithm that associates (27) with its actual meaning—or if we merely hypothesize that (27) is the English translation of a certain mental sentence—we have not yet explained why (27) cannot also be used to ask whether or not (26) is true. Explanations of such facts appeal to nonobvious grammatical structure, and constraints on natural language transformations. (For example, an auxiliary verb in a relative clause cannot be “fronted;” though of course, theorists try to find deeper explanations for such constraints.)

The idea was that a sentence has both a deep structure (DS), which reflects semantically relevant relations between verbs and their arguments, and a surface structure (SS) that may include displaced (or pleonastic) elements. In some cases, pronunciation might depend on further transformations of SS, resulting in a distinct “phonological form” (PF). Linguists posited various constraints on these levels of grammatical structure, and the transformations that relate them. But as the theory was elaborated and refined under empirical pressure, various facts that apparently called for explanation in these terms still went unexplained. This suggested another level of grammatical structure, obtained by a different kind of transformation on SS. The hypothesized level was called ‘LF’, intimating ‘Logical Form’; and the hypothesized transformation—called ‘Quantifier Raising’ because it targeted the kinds of expressions that indicate (restricted) quantifiers—mapped structures like (28S) onto structures like (28L).

(28S)
{Juliet [likes [every doctor]]}
(28L)
{[every doctor]i {Juliet [ likes ( _ )i ]}}

Clearly, (28L) does not reflect the pronounced word order in English. But the idea was that PF determines pronunciation, while LF was said to be the level at which the scope of a natural language quantifier is determined; see May (1985). If we think about ‘every’ as a kind of second-order transitive predicate, which can combine with two predicates like ‘doctor’ and ‘Juliet likes ( _ )i’ to form a complete sentence, we should expect that at some level of analysis, the sentence ‘Juliet likes every doctor’ has the structure indicated in (28L). And mapping (28L) to the Fregean logical form

\[\lsquo [\forall x{:}\textrm{Doctor}(x)]\{\textrm{Likes}(\textrm{Juliet}, x)\}\rsquo\]

is trivial. Similarly, consider the following:

(29S)
{[some boy] [likes [every doctor]]}
(29L)
{[some boy]i {[every doctor]j {( _ )i [likes ( _ )j ]}}}
(29L′)
{[every doctor]j {[some boy]i { ( _ )i [likes ( _ )j ]}}}

If the surface structure (29S) can be mapped onto either (29L) or (29L′), then (29S) can be easily mapped onto the Fregean logical forms

\[\lsquo[\exists x{:}\textrm{Boy}(x)][\forall y{:}\textrm{Doctor}(y)]\{\textrm{Likes}(x,y)\}\rsquo\]

and

\[\lsquo [\forall y{:}\textrm{Doctor}(y)][\exists x{:} \textrm{Boy}(x)]\{\textrm{Likes}(x,y)\}\rsquo . \]

This assimilates quantifier scope ambiguity to the structural ambiguity of examples like ‘Mary saw the boy with binoculars’. More generally, many apparent examples of grammar/logic mismatches were rediagnosed as mismatches between different aspects of grammatical structure—between those aspects that determine pronunciation, and those that determine interpretation. In one sense, this is fully in keeping with the idea that in natural language, “surface appearances” are often misleading with regard to propositional structure. But it also makes room for the idea that grammatical structure and logical structure converge, in ways that can be discovered through investigation, once we move beyond traditional subject-predicate conceptions of structure with regard to both logic and grammar.
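
The two LFs correspond to distinct truth conditions, which can be exhibited by evaluating both scope orders over a small model. In the following sketch (the individuals and the ‘likes’ relation are invented for illustration), the surface-scope reading (29L) is false while the inverse-scope reading (29L′) is true:

```python
# A toy model in which the two readings of (29S) come apart.
boys    = {'b1', 'b2'}
doctors = {'d1', 'd2'}
likes   = {('b1', 'd1'), ('b2', 'd2')}   # each boy likes just one doctor

# (29L): [some boy]i [every doctor]j { ( _ )i likes ( _ )j }
surface = any(all((x, y) in likes for y in doctors) for x in boys)

# (29L'): [every doctor]j [some boy]i { ( _ )i likes ( _ )j }
inverse = all(any((x, y) in likes for x in boys) for y in doctors)

print(surface)   # False: no single boy likes every doctor
print(inverse)   # True:  every doctor is liked by some boy or other
```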

There is independent evidence for “covert” transformations—displacement of expressions from their audible positions, as in (28L); see Huang (1995), Hornstein (1995). Consider ‘Jean a vu qui’, which is the French translation of ‘Who did John see’. If we assume that ‘qui’ (‘who’) is displaced at LF, then we can explain why the question-word is understood in both French and English like a quantifier binding a variable: which person \(x\) is such that John saw \(x\)? Similarly, example (30) from Chinese is transliterated as in (31).

(30)
Zhangsan zhidao Lisi mai-te sheme
(31)
Zhangsan know Lisi bought what

But (30) is ambiguous between the interrogative (31a) and the complex declarative (31b).

(31a)
Which thing is such that Zhangsan knows Lisi bought it
(31b)
Zhangsan knows which thing (is such that) Lisi bought (it)

This suggests covert displacement of the quantificational question-word in Chinese; see Huang (1982, 1995). Chomsky (1981) also argued that the constraints on such displacement can help explain contrasts like the one illustrated with (32) and (33).

(32)
Who said he has the best smile
(33)
Who did he say has the best smile

In (32), the pronoun ‘he’ can have a bound-variable reading: which person \(x\) is such that \(x\) said that \(x\) has the best smile. This suggests that the following grammatical structure is possible: Whoi {[( _ )i said [hei has the best smile]]}. But (33) cannot be used to ask this question, suggesting that some linguistic constraint rules out the following logically possible structure: *Whoi [did {[hei say [( _ )i has the best smile]]}]. And there cannot be constraints on transformations without transformations. So if English overtly displaces question-words that are covertly displaced in other languages, we should not be too surprised if English covertly displaces other quantificational expressions like ‘every doctor’. Likewise, (34) has the reading indicated in (34a) but not the reading indicated in (34b).

(34)
It is false that Juliet likes every doctor
(34a)
\(\neg \forall x{:}\textrm{Doctor}(x) [\textrm{Likes}(\textrm{Juliet}, x)]\)
(34b)
\(\forall x{:}\textrm{Doctor}(x) \neg [\textrm{Likes}(\textrm{Juliet}, x)]\)

This suggests that ‘every doctor’ gets displaced, but only so far. Similarly, (13) cannot mean that every doctor is such that no patient who saw that doctor is healthy.

(13)
No patient who saw every doctor is healthy

As we have already seen, English seems to abhor fronting certain elements from within an embedded relative clause. This invites the hypothesis that Quantifier Raising is subject to a similar constraint, and hence, that many quantificational expressions get displaced in English. This hypothesis is not uncontroversial; see, e.g., Jacobson (1999). But many linguists (following Chomsky [1995, 2000]) would now posit only two levels of grammatical structure, corresponding to PF and LF—the thought being that constraints on DS and SS can be eschewed in favor of a simpler theory that only posits constraints on how expressions can be combined in the course of constructing complex expressions that can be pronounced and interpreted. If this development of earlier theories proves correct, then the only semantically relevant level of grammatical structure often reflects covert displacement of audible expressions; see, e.g., Hornstein (1995). In any case, there is a large body of work suggesting that many logical properties of quantifiers, names, and pronouns are reflected in properties of LF.

For example, if (35) is true, it follows that some doctor treated some doctor; whereas (36) does not have this consequence:

(35)
Every boy saw the doctor who treated himself
(36)
Every boy saw the doctor who treated him

The meanings of (35) and (36) seem to be roughly as indicated in (35a) and (36a); where ‘\(!\)’ indicates the contribution of ‘the’.

(35a)
\([\forall x{:}\textrm{Boy}(x)][!y{:}\textrm{Doctor}(y) \land \textrm{Treated}(y,y)] \{\textrm{Saw}(x,y)\}\)
(36a)
\([\forall x{:}\textrm{Boy}(x)][!y{:}\textrm{Doctor}(y) \land \textrm{Treated}(y,x)] \{\textrm{Saw}(x,y)\}\)

This suggests that ‘himself’ is behaving like a variable bound by ‘the doctor’, while ‘every boy’ can bind ‘him’. And there are independent grammatical reasons for saying that ‘himself’ must be linked to ‘the doctor’, while ‘him’ must not be so linked. Note that in ‘Pat thinks Chris treated himself/him’, the antecedent of ‘himself’ must be the subject of ‘treated’, while the antecedent of ‘him’ must not be; see Chomsky (1981).
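
One can verify that (35a) and (36a) have the advertised consequences by evaluating them over toy models. The sketch below renders ‘\(!\)’ as the uniqueness-requiring determiner of section seven; the individuals and relations are invented for illustration. It exhibits a model in which the (36a) reading is true although no doctor treated any doctor; by contrast, the truth of (35a) would require a doctor who treated himself.

```python
boys, doctors = {'b1', 'b2'}, {'d1', 'd2'}
domain = boys | doctors

def the_(restrictor, scope):
    """'!': exactly one individual satisfies the restrictor, and that
    individual satisfies the scope (the rendering of 'the' above)."""
    witnesses = [y for y in domain if restrictor(y)]
    return len(witnesses) == 1 and scope(witnesses[0])

# A model for (36): each boy saw the (distinct) doctor who treated him.
treated = {('d1', 'b1'), ('d2', 'b2')}
saw     = {('b1', 'd1'), ('b2', 'd2')}

# (36a): [Ax:Boy(x)] [!y:Doctor(y) & Treated(y,x)] {Saw(x,y)}
print(all(the_(lambda y: y in doctors and (y, x) in treated,
               lambda y: (x, y) in saw)
          for x in boys))                                        # True

# Yet no doctor treated a doctor in this model, so (36) lacks the
# consequence that (35) has:
print(any((y, z) in treated for y in doctors for z in doctors))  # False
```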

We still need to enforce the conceptual distinction between LF and the traditional notion of logical form. There is no guarantee that structural features of natural language sentences will mirror the logical features of propositions; cp. Stanley (2000), King (2007). But this leaves room for the empirical hypothesis that LF reflects at least a great deal of propositional structure; see Harman (1972), Higginbotham (1986), Segal (1989), Larson and Ludlow (1993), and the essay on structured propositions. Moreover, even if the LF of a sentence S underdetermines the logical form of the proposition a speaker expresses with S (on a given occasion of use), the LF may provide a “scaffolding” that can be elaborated in particular contexts, with little or no mismatch between grammatical and propositional architecture. If some such view is correct, it might avoid certain (unpleasant) questions prompted by earlier Fregean views: how can a sentence be used to express a proposition with a radically different structure; and if grammar is deeply misleading, why think that our intuitions concerning impeccability provide reliable evidence about which propositions follow from which? These are, however, issues that remain unsettled.

9. Semantic Structure and Events

If propositions are the “things” that really have logical form, and sentences of English are not themselves propositions, then sentences of English “have” logical forms only by association with propositions. But if the meaning of a sentence is some proposition—or perhaps a function from contexts to propositions—then one might say that the logical form “of” a sentence is its semantic structure (i.e., the structure of that sentence’s meaning). Alternatively, one might suspect that in the end, talk of propositions is just convenient shorthand for talking about the semantic properties of certain sentences: perhaps sentences of a Begriffsschrift, or sentences of mentalese, or sentences of natural languages (abstracting away from their logically/semantically irrelevant properties). In any case, the notion of logical form has played a significant role in recent work on theories of meaning for natural languages. So an introductory discussion of logical form would not be complete without some hint of why such work is relevant, especially since attending to details of natural languages (as opposed to languages invented to study the foundations of arithmetic) led to renewed discussion of how to represent propositions that involve relations.

Prima facie, ‘Every old patient respects some doctor’ and ‘Some young politician likes every liar’ exhibit common modes of linguistic combination. So a natural hypothesis is that the meaning of each sentence is fixed by these modes of combination, given the relevant word meanings. It may be hard to see how this hypothesis could be true if there are widespread mismatches between logical and grammatical form. But it is also hard to see how the hypothesis could be false. Children, who have finite cognitive resources, typically acquire the capacity to understand the endlessly many expressions of the languages spoken around them. A great deal of recent work has focussed on these issues, concerning the connections between logical form and the senses in which natural languages are semantically compositional.

It was implicit in Frege that each of the endlessly many sentences of an ideal language would have a compositionally determined truth-condition. Frege did not actually specify an algorithm that would associate each sentence of his Begriffsschrift with its truth-condition. But Tarski (1933) showed how to do this for the first-order predicate calculus, focussing on interesting cases of multiple quantification like the one shown below:

\[\begin{align} \forall x &[\textrm{Number}(x) \:\rightarrow \\ & \exists y [\textrm{SuccessorOf}(y, x) \land \forall z [\textrm{SuccessorOf}(z, x) \rightarrow z = y]]] \end{align}\]
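
Tarski's recursive strategy can be illustrated in miniature. In the sketch below, formulas are encoded as nested tuples and evaluated, relative to a variable assignment, over a finite model; the encoding, the operator names, and the four-element mock model of the numbers are all expository assumptions.

```python
def satisfies(model, formula, g):
    """True iff the variable assignment g satisfies the formula in the model."""
    op = formula[0]
    if op == 'pred':                       # ('pred', name, var, ...)
        name, args = formula[1], formula[2:]
        return tuple(g[v] for v in args) in model[name]
    if op == '=':
        return g[formula[1]] == g[formula[2]]
    if op == 'if':                         # material conditional
        return not satisfies(model, formula[1], g) or satisfies(model, formula[2], g)
    if op == 'and':
        return satisfies(model, formula[1], g) and satisfies(model, formula[2], g)
    if op == 'all':                        # ('all', var, body)
        return all(satisfies(model, formula[2], {**g, formula[1]: d})
                   for d in model['domain'])
    if op == 'some':                       # ('some', var, body)
        return any(satisfies(model, formula[2], {**g, formula[1]: d})
                   for d in model['domain'])
    raise ValueError(f'unknown operator: {op}')

# The displayed formula: every number has exactly one successor.
phi = ('all', 'x',
       ('if', ('pred', 'Number', 'x'),
        ('some', 'y',
         ('and', ('pred', 'SuccessorOf', 'y', 'x'),
          ('all', 'z', ('if', ('pred', 'SuccessorOf', 'z', 'x'),
                        ('=', 'z', 'y')))))))

# A four-element mock model, with successor computed modulo 4:
model = {'domain': {0, 1, 2, 3},
         'Number': {(n,) for n in range(4)},
         'SuccessorOf': {((n + 1) % 4, n) for n in range(4)}}

print(satisfies(model, phi, {}))   # True in this model
```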

This made it possible to capture, with precision, the idea that an inference is valid in the predicate calculus iff: every interpretation that makes the premises true also makes the conclusion true, holding fixed the interpretations of logical elements like ‘if’ and ‘every’. Davidson (1967a) conjectured that one could do for English what Tarski did for the predicate calculus; and Montague, similarly inspired by Tarski, showed how one could start dealing with predicates that have quantificational constituents. Still, many apparent objections to the conjecture remained. As noted at the end of section four, sentences like ‘Pat thinks that Hesperus is Phosphorus’ present difficulties; though Davidson (1968) offered an influential suggestion. Davidson’s (1967b) proposal concerning examples like (37–40) also proved enormously fruitful.

(37)
Juliet kissed Romeo quickly at midnight.
(38)
Juliet kissed Romeo quickly.
(39)
Juliet kissed Romeo at midnight.
(40)
Juliet kissed Romeo.

If (37) is true, so are (38–40); and if (38) or (39) is true, so is (40). The inferences seem impeccable. But the function-argument structures are not obvious. If we represent ‘kissed quickly at midnight’ as an unstructured predicate that takes two arguments, like ‘kissed’ or ‘kicked’, we will represent the inference from (37) to (40) as having the form: \(K^*(x, y)\); so \(K(x, y)\). But this form is exemplified by the bad inference ‘Juliet kicked Romeo; so Juliet kissed Romeo’. Put another way, if ‘kissed quickly at midnight’ is a logically unstructured binary predicate, then the following conditional is a nonlogical assumption: if Juliet kissed Romeo in a certain manner at a certain time, then Juliet kissed Romeo. But this conditional seems like a tautology, not an assumption that introduces any epistemic risk. Davidson concluded that the surface appearances of sentences like (37–40) mask relevant semantic structure. In particular, he proposed that such sentences are understood in terms of quantification over events.

According to Davidson, who echoed Ramsey (1927), the meaning of (40) is reflected in the paraphrase ‘There was a kissing of Romeo by Juliet’. One can formalize this proposal in various ways, with different implications for how verbs like ‘kiss’ are related to propositional constituents: \(\exists e [\textrm{Past}(e) \land \textrm{KissingOf}(e, \textrm{Romeo}) \land \textrm{KissingBy}(e, \textrm{Juliet})]\); or \(\exists e [\textrm{Past}(e) \land \textrm{KissingByOf}(e, \textrm{Juliet}, \textrm{Romeo})]\); or as in (40a), with Juliet and Romeo explicitly represented as players of certain roles in an event.

(40a)
\(\exists e [\textrm{Agent}(e, \textrm{Juliet}) \land \textrm{Kissing}(e) \land \textrm{Patient}(e, \textrm{Romeo})]\)

But given any such representation, adverbs like ‘quickly’ and ‘at midnight’ can be analyzed as additional predicates of events, as shown in (37a–39a).

(37a)
\(\exists e [\textrm{Agent}(e, \textrm{Juliet}) \land \textrm{Kissing}(e) \land \textrm{Patient}(e, \textrm{Romeo})\:\land\) \(\textrm{Quick}(e) \land \textrm{At-midnight}(e)]\)
(38a)
\(\exists e [\textrm{Agent}(e, \textrm{Juliet}) \land \textrm{Kissing}(e) \land \textrm{Patient}(e, \textrm{Romeo})\:\land\) \(\textrm{Quick}(e)]\)
(39a)
\(\exists e [\textrm{Agent}(e, \textrm{Juliet}) \land \textrm{Kissing}(e) \land \textrm{Patient}(e, \textrm{Romeo})\:\land\) \(\textrm{At-midnight}(e)]\)

If this is correct, then the inference from (37) to (40) is an instance of the following valid form: \(\exists e [\ldots e \ldots \land Q(e) \land A(e)]\); hence, \(\exists e [\dots e \dots]\). The other impeccable inferences involving (37–40) can likewise be viewed as instances of conjunction reduction in the scope of an existential quantifier; see Pietroski (2018) for discussion that connects this point to medieval insights noted in section three. If the grammatical form of (40) is simply ‘{Juliet [kissed Romeo]}’, then the mapping from grammatical to logical form is not transparent; and natural language is misleading, in that no word corresponds to the event quantifier. But this does not posit a significant structural mismatch between grammatical and logical form. On the contrary, each word in (40) corresponds to a conjunct in (40a). This suggests a strategy for thinking about how the meaning of a sentence like (40) might be composed from the meanings of the constituent words. A growing body of literature, in philosophy and linguistics, suggests that Davidson’s proposal captures an important feature of natural language semantics, and that “event analyses” provide a useful framework for discussions of logical form; see, e.g., Schein (2017) for extended discussion and many references.
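
Given the event analyses, these impeccable inferences reduce to a simple combinatorial fact about conjunction. The following sketch represents each matrix as a set of conjuncts on the event variable (an expository encoding; the subset test below is sound only for the purely conjunctive, existentially closed forms at issue):

```python
# Each sentence's logical form, per (37a)-(40a): an existential
# quantifier over events binding a conjunction of predicates of e.
core = {'Agent(e, Juliet)', 'Kissing(e)', 'Patient(e, Romeo)'}
s37 = core | {'Quick(e)', 'At-midnight(e)'}
s38 = core | {'Quick(e)'}
s39 = core | {'At-midnight(e)'}
s40 = core

def entails(premise, conclusion):
    """Conjunction reduction under 'Ee': since dropping conjuncts inside
    the existential quantifier preserves truth, the premise entails the
    conclusion if the premise's conjuncts include the conclusion's."""
    return conclusion <= premise

assert entails(s37, s38) and entails(s37, s39) and entails(s37, s40)
assert entails(s38, s40) and entails(s39, s40)
assert not entails(s40, s37)    # the converse inference is not secure
```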

In one sense, it is an ancient idea that action reports like (40) represent individuals as participating in events; see Gillon’s (2007) discussion of Pāṇini’s grammar of Sanskrit. But if (40) can be glossed as ‘Juliet did some kissing, and Romeo was thereby kissed’, perhaps the ancient idea can be deployed in developing Leibniz’ suggestion that relational sentences like (40) somehow contain simpler active-voice and passive-voice sentences; cp. Kratzer (1996). And perhaps appeals to quantifier raising can help in defending the idea that ‘Juliet kissed some/the/every boy’ is, after all, a sentence that exhibits Subject-copula-Predicate form: ‘[some/the/every boy]i is P’, with ‘P’ as a complex predicate akin to ‘[some event]e was both a kissing done by Juliet and one in which hei was kissed’.

With this in mind, let’s return to the idea that each complex expression of natural language has semantic properties that are determined by (i) the semantic properties of its constituents, and (ii) the ways in which these constituents are grammatically arranged. If this is correct, then following Davidson, one might say that the logical forms of expressions (of some natural language) just are the structures that determine the corresponding meanings given the relevant word meanings; see Lepore and Ludwig (2002). In which case, the phenomenon of valid inference may be largely a by-product of semantic compositionality. If principles governing the meanings of (37–40) have the consequence that (40) is true iff an existential claim like (40a) is true, perhaps this is illustrative of the general case. Given a sentence of some natural language NL, the task of specifying its logical form may be inseparable from the task of providing a compositional specification of what the sentences of NL mean.

10. Further Questions

At this point, many issues become relevant to further discussions of logical form. Most obviously, there are questions concerning particular examples. Given just about any sentence of natural language, one can ask interesting questions (that remain unsettled) about its logical form. There are also very abstract questions about the relation of semantics to logic. Should we follow Davidson and Montague, among others, in characterizing theories of meaning for natural languages as theories of truth (that perhaps satisfy certain conditions on learnability)? Is an algorithm that correctly associates sentences with truth-conditions (relative to contexts) necessary and/or sufficient for being an adequate theory of meaning? What should we say about the paradoxes apparently engendered by sentences like ‘This sentence is false’? If we allow for second-order logical forms, how should we understand second-order quantification, given Russell’s Paradox? Are claims about the “semantic structure” of a sentence fundamentally descriptive claims about speakers (or their communities, or their languages)? Or is there an important sense in which claims about semantic structure are normative claims about how we should use language? Are facts about the acquisition of language germane to hypotheses about logical form? And of course, the history of the subject reveals that the answers to the central questions are by no means obvious: what is logical structure, what is grammatical structure, and how are they related? Or put another way, what kinds of structures do propositions and sentences exhibit, and how do thinkers/speakers relate them?

Bibliography

Cited Works

  • Barwise, J. & Cooper, R., 1981, “Generalized Quantifiers and Natural Language”, Linguistics and Philosophy, 4: 159–219.
  • Beaney, M., ed., 1997, The Frege Reader, Oxford: Blackwell.
  • Berwick, R. et al., 2011, “Poverty of the Stimulus Revisited”, Cognitive Science, 35: 1207–42.
  • Boolos, G., 1998, Logic, Logic, and Logic, Cambridge, MA: Harvard University Press.
  • Carnap, R., 1950, “Empiricism, Semantics, and Ontology”, reprinted in R. Carnap, Meaning and Necessity; second edition, Chicago: University of Chicago Press, 1956.
  • Cartwright, R., 1962, “Propositions”, in R. J. Butler, Analytical Philosophy, 1st series, Oxford: Basil Blackwell 1962; reprinted with addenda in Richard Cartwright, Philosophical Essays, Cambridge, MA: MIT Press 1987.
  • Chomsky, N., 1957, Syntactic Structures, The Hague: Mouton.
  • –––, 1964, Current Issues in Linguistic Theory, The Hague: Mouton.
  • –––, 1965, Aspects of the Theory of Syntax, Cambridge, MA: MIT Press.
  • –––, 1981, Lectures on Government and Binding, Dordrecht: Foris.
  • –––, 1986, Knowledge of Language, New York: Praeger.
  • –––, 1995, The Minimalist Program, Cambridge, MA: MIT Press.
  • Davidson, D., 1967a, “Truth and Meaning”, Synthese, 17: 304–23.
  • –––, 1967b, “The Logical Form of Action Sentences”, in N. Rescher (ed.), The Logic of Decision and Action, Pittsburgh: University of Pittsburgh Press.
  • –––, 1968, “On Saying That”, Synthese, 19: 130–46.
  • –––, 1980, Essays on Actions and Events, Oxford: Oxford University Press.
  • –––, 1984, Inquiries into Truth and Interpretation, Oxford: Oxford University Press.
  • Donnellan, K., 1966, “Reference and Definite Descriptions”, Philosophical Review, 75: 281–304.
  • Fodor, J., 1978, “Propositional Attitudes”, The Monist, 61: 501–23.
  • Frege, G., 1879, Begriffsschrift, reprinted in Beaney 1997.
  • –––, 1884, Die Grundlagen der Arithmetik, Breslau: Wilhelm Koebner. English translation, The Foundations of Arithmetic, J. L. Austin (trans). Oxford: Basil Blackwell, 1974.
  • –––, 1891, “Function and Concept”, reprinted in Beaney 1997.
  • –––, 1892, “On Sinn and Bedeutung”, reprinted in Beaney 1997.
  • Gillon, B., 2007, “Pāṇini’s Aṣṭādhyāyī and Linguistic Theory”, Journal of Indian Philosophy, 35: 445–468.
  • Harman, G., 1972, “Logical Form”, Foundations of Language, 9: 38–65.
  • –––, 1973, Thought, Princeton: Princeton University Press.
  • Higginbotham, J., 1986, “Linguistic Theory and Davidson’s Program in Semantics”, in E. Lepore (ed.), Truth and Interpretation, pp. 29–48, Oxford: Blackwell.
  • Higginbotham, J. & May, R., 1981, “Questions, Quantifiers, and Crossing”, Linguistic Review, 1: 47–79.
  • Hornstein, N., 1995, Logical Form: From GB to Minimalism, Oxford: Blackwell.
  • Huang, J., 1995, “Logical Form”, in G. Webelhuth (ed.), Government and Binding Theory and the Minimalist Program: Principles and Parameters in Syntactic Theory, pp. 127–175, Oxford: Blackwell.
  • Iacona, A., 2018, Logical Form: Between Logic and Natural Language, Berlin: Springer.
  • Jacobson, P., 1999, “Variable Free Semantics”, Linguistics and Philosophy, 22: 117–84.
  • King, J., 2002, “Two Sorts of Claims about Logical Form”, in Preyer and Peter 2002.
  • –––, 2007, The Nature and Structure of Content, Oxford: Oxford University Press.
  • Keenan, E., 1996, “The Semantics of Determiners”, in S. Lappin (ed.), The Handbook of Contemporary Semantic Theory, Oxford: Blackwell, pp. 41–63.
  • Kratzer, A., 1996, “Severing the External Argument from its Verb”, in J. Rooryck and L. Zaring (eds.), Phrase Structure and the Lexicon, Dordrecht: Kluwer, pp. 109–137.
  • Larson, R. and Ludlow, P., 1993, “Interpreted Logical Forms”, Synthese, 95: 305–55.
  • Lepore, E. and Ludwig, K., 2002, “What is Logical Form?”, in Preyer and Peter 2002, pp. 54–90.
  • Ludlow, P., 2002, “LF and Natural Logic”, in Preyer and Peter 2002, pp. 132–168.
  • May, R., 1985, Logical Form: Its Structure and Derivation, Cambridge, MA: MIT Press.
  • Montague, R., 1970, “English as a Formal Language”, in R. Thomason (ed.), Formal Philosophy, New Haven, CT: Yale University Press, 1974, pp. 7–27.
  • Parsons, T., 2014, Articulating Medieval Logic, Oxford: Oxford University Press.
  • Pietroski, P., 2018, Conjoining Meanings, Oxford: Oxford University Press.
  • Preyer, G. and Peter, G. (eds.), 2002, Logical Form and Language, Oxford: Oxford University Press.
  • Quine, W.V.O., 1950, Methods of Logic, New York: Henry Holt.
  • –––, 1951, “Two Dogmas of Empiricism”, Philosophical Review, 60: 20–43.
  • –––, 1953, “On What There Is”, in From a Logical Point of View, Cambridge, MA: Harvard University Press, pp. 1–19.
  • –––, 1960, Word and Object, Cambridge MA: MIT Press.
  • –––, 1970, Philosophy of Logic, Englewood Cliffs, NJ: Prentice Hall.
  • Ramsey, F., 1927, “Facts and Propositions”, Proceedings of the Aristotelian Society (Supplementary Volume), 7: 153–170.
  • Sánchez, V., 1991, Studies on Natural Logic and Categorial Grammar, Ph.D. Thesis, University of Amsterdam.
  • –––, 1994, “Monotonicity in Medieval Logic”, Language and Cognition, 4: 161–74.
  • Schein, B., 1993, Events and Plurals, Cambridge, MA: MIT Press.
  • –––, 2017, And: Conjunction Reduction Redux, Cambridge, MA: MIT Press.
  • Segal, G., 1989, “A Preference for Sense and Reference”, The Journal of Philosophy, 86: 73–89.
  • Soames, S., 1987, “Direct Reference, Propositional Attitudes, and Semantic Content”, Philosophical Topics, 15: 47–87.
  • –––, 1995, “Beyond Singular Propositions”, Canadian Journal of Philosophy, 25: 515–50.
  • –––, 2002, Beyond Rigidity, Oxford: Oxford University Press.
  • Sommers, F., 1984, The Logic of Natural Language, Oxford: Oxford University Press.
  • Stanley, J., 2000, “Context and Logical Form”, Linguistics and Philosophy, 23: 391–434.
  • Strawson, P., 1950, “On Referring”, Mind, 59: 320–44.
  • Tarski, A., 1933, “The Concept of Truth in Formalized Languages”, reprinted in Tarski 1983.
  • –––, 1944, “The Semantic Conception of Truth”, Philosophy and Phenomenological Research, 4: 341–75.
  • –––, 1983, Logic, Semantics, Metamathematics, J. Corcoran (ed.), J.H. Woodger (trans.), 2nd edition, Indianapolis: Hackett.
  • van Benthem, J., 1986, Essays in Logical Semantics, Dordrecht: D. Reidel.
  • Wiggins, D., 1980, “‘Most’ and ‘all’: some comments on a familiar programme, and on the logical form of quantified sentences”, in M. Platts (ed.), Reference, Truth and Reality: Essays on the Philosophy of Language, London: Routledge & Kegan Paul, pp. 318–346.
  • Wittgenstein, L., 1921, Tractatus Logico-Philosophicus, D. Pears and B. McGuinness (trans.), London: Routledge & Kegan Paul.
  • –––, 1953. Philosophical Investigations, New York: Macmillan.

Some Other Useful Works

A few helpful overviews of the history and basic subject matter of logic:

  • Kneale, W. & Kneale, M., 1962, The Development of Logic, Oxford: Oxford University Press; reprinted 1984.
  • Sainsbury, M., 1991, Logical Forms, Oxford: Blackwell.
  • Broadie, A., 1987, Introduction to Medieval Logic, Oxford: Oxford University Press.
  • For these purposes, Russell’s most important books are: Introduction to Mathematical Philosophy, London: George Allen and Unwin, 1919; Our Knowledge of the External World, New York: Norton, 1929; and The Philosophy of Logical Atomism, La Salle, Ill: Open Court, 1985. Stephen Neale’s book Descriptions (Cambridge, MA: MIT Press, 1990) is a recent development of Russell’s theory.

For introductions to Transformational Grammar and Chomsky’s conception of natural language:

  • Radford, A., 1988, Transformational Grammar, Cambridge: Cambridge University Press.
  • Haegeman, L., 1994, Introduction to Government & Binding Theory, Oxford: Blackwell.
  • Lasnik, H. (with M. Depiante and A. Stepanov), 2000, Syntactic Structures Revisited, Cambridge, MA: MIT Press.

For discussions of work in linguistics bearing directly on issues of logical form:

  • Higginbotham, J., 1985, “On Semantics”, Linguistic Inquiry, 16: 547–93.
  • Hornstein, N., 1995, Logical Form: From GB to Minimalism, Oxford: Blackwell.
  • Larson, R. and Segal, G., 1995, Knowledge of Meaning, Cambridge, MA: MIT Press.
  • May, R., 1985, Logical Form: Its Structure and Derivation, Cambridge, MA: MIT Press.
  • Neale, S., 1993, Grammatical Form, Logical Form, and Incomplete Symbols, in A. Irvine & G. Wedeking (eds.), Russell and Analytic Philosophy, Toronto: University of Toronto, pp. 97–139.

For discussions of the Davidsonian program (briefly described in section nine) and appeal to events:

  • Davidson, D., 1984, Inquiries into Truth and Interpretation, Oxford: Oxford University Press.
  • –––, 1985, “Adverbs of Action”, in B. Vermazen and M. Hintikka (eds.), Essays on Davidson: Actions and Events, Oxford: Clarendon Press, pp. 230–241.
  • Evans, G. & McDowell, J. (eds.), 1976, Truth and Meaning, Oxford: Oxford University Press.
  • Higginbotham, J., Pianesi, F. and Varzi, A. (eds.), 2000, Speaking of Events, Oxford: Oxford University Press.
  • Ludwig, K. (ed.), 2003, Contemporary Philosophers in Focus: Donald Davidson, Cambridge: Cambridge University Press.
  • Lycan, W., 1984, Logical Form in Natural Language, Cambridge, MA: MIT Press.
  • Parsons, T., 1990, Events in the Semantics of English, Cambridge, MA: MIT Press.
  • Pietroski, P., 2005, Events and Semantic Architecture, Oxford: Oxford University Press.
  • Taylor, B., 1985, Modes of Occurrence, Oxford: Blackwell.

Acknowledgments

The author would like to thank: Christopher Menzel for spotting an error in an earlier characterization of the generalized quantifier ‘every’, prompting revision of the surrounding discussion; Karen Carter, Max Heiber, Claus Schlaberg, and David Korfmacher for catching various typos in previous versions; and for comments on the initial versions, Susan Dwyer, James Lesher, the editors and referees.

Copyright © 2021 by
Paul Pietroski <paul.pietroski@rutgers.edu>
