Discourse Representation Theory
In the early 1980s, Discourse Representation Theory (DRT) was introduced by Hans Kamp as a theoretical framework for dealing with issues in the semantics and pragmatics of anaphora and tense (Kamp 1981); a very similar theory was developed independently by Irene Heim (1982). The distinctive features of DRT, to be discussed below, are that it is a mentalist and representationalist theory of interpretation, and that it is a theory of the interpretation not only of individual sentences but of discourse, as well. In these respects DRT made a clear break with classical formal semantics, which during the 1970s had emanated from Montague's pioneering work (Thomason 1974), but in other respects it continued the tradition, e.g. in its use of model-theoretical tools. In the meantime, DRT has come to serve as a framework for explaining a wide range of phenomena, but we will confine our attention to three: anaphora, presupposition, and tense. For references to work on other topics, see the “Further reading” section.
- 1. Introduction
- 2. Donkey pronouns
- 3. Basic DRT
- 4. The DRS language: syntax, semantics, accessibility
- 5. Beyond the basics
- 6. Representationalism, attitudes, and compositionality
- 7. Further reading
- Bibliography
- Other Internet Resources
- Related Entries
1. Introduction
This article concerns Discourse Representation Theory narrowly defined as work in the tradition descending from Kamp (1981). The same term is sometimes used more broadly, occasionally embracing Heim's work (“File Change Semantics”, FCS) and the developments in Dynamic Semantics (DS) initiated by Groenendijk and Stokhof (1991). The term DRT is not typically used for Fauconnier's (1984, 1985) Mental Spaces (MS) model, though his work shares strong commonalities with that of both Heim and Kamp. The differences between DRT (narrowly defined) and the various related theories we have mentioned are summarized in Table 1.
Table 1: Comparison of DRT-like theories

| Theory | Status of post-syntax (or post-LF) representations | Construction | Model-theoretic interpretation |
| --- | --- | --- | --- |
| DRT | essential (but see below) | systematic procedure (but see below) | yes |
| FCS | non-essential | intended to be compositional | yes |
| MS | essential | informal procedure | no |
| DS | non-essential | fully compositional[1] | yes |
DRT's main (and most controversial) innovation, beyond the Montagovian paradigm which was then considered orthodox, is that it introduced a level of mental representations, called discourse representation structures (DRSs). The basic idea is rather straightforward. It is that a hearer builds up a mental representation of the discourse as it unfolds, and that every incoming sentence prompts additions to that representation. This picture has always been commonplace in the psychology of language. DRT's principal tenet is that it should be the starting point for semantic theory, too.
A theory of the DRT family consists of the following ingredients:
- A formal definition of the representation language, consisting of:
- a recursive definition of the set of all well-formed DRSs, and
- a model-theoretic semantics for the members of this set;
- a construction procedure, which specifies how to extend a given DRS when a sentence comes in.
Technically, this is very similar to earlier work in formal semantics, with two exceptions: the interpretation process always takes the previous discourse into account, and the level of semantic representations is claimed to be essential. What has worried semanticists is not so much the fact that DRSs are mental representations, but that an additional level is needed; we will return to this point in Section 6.
2. Donkey pronouns
The relationship between a pronoun and its antecedent is one that has received a great deal of attention in linguistics and philosophy. We say that a pronoun is anaphoric, as opposed to deictic, for example, if it depends for its interpretation on an antecedent expression elsewhere in the sentence or the discourse. In some cases the nature of this dependency seems straightforward:
[1] Pedro beats his donkey.
(Here and henceforth we use underlining to highlight the anaphoric links we are currently interested in.) Since “Pedro” is a referential expression, it makes sense, in this case, to say that “his” is a referential term, too, which derives its reference from its antecedent. This construal is plausible enough, but it doesn't apply across the board:
[2] No farmer beats his donkey.
The term “no farmer” is not a referential expression, so “his” cannot be coreferential with it. Rather, it would seem that, in this case, the relationship between the pronoun and its antecedent is one of binding, in the logical sense of the word: “no farmer” and “his” are interpreted as a quantifier and a variable, respectively, and the former binds the latter.
It is fairly obvious that the bound-variable construal of pronouns is subject to syntactic constraints. Most importantly, this type of interpretation requires that the pronoun be c-commanded by its antecedent, where c-command is defined as follows:
A c-commands B iff B is, or is contained in, a sister constituent of A's.
The constituent structure of sentence [2] is [S [NP No farmer] [VP beats [NP his donkey]]]. Here the pronoun is contained in the sister of its antecedent, so the c-command constraint is met. In [3] “no farmer” does not c-command “his”, and so we predict, correctly, that the pronoun cannot be bound.
[3] His donkey likes no farmer.
Since the syntactic structure of [1] is identical to that of [2], we might wonder if the pronoun in [1] may not be bound by its antecedent, just as in [2]. If this were possible, there would be two ways of construing the pronoun in [1]: it could function either as a referential term or a bound variable. There is evidence that this is indeed the case:
[4a] Pedro beats his donkey, and Juan does, too. [4b] Every farmer beats his donkey, and Juan does, too.
[4a] is ambiguous in a way [4b] is not. Its second conjunct can be construed as saying either that Juan beats his own donkey or that Juan beats Pedro's donkey. The second conjunct of [4b], by contrast, can only be interpreted as saying that Juan beats his own donkey. If the possessive pronoun in the first conjunct of [4a] can be either referential or a bound variable, the contrast is readily explained.
In large part, the motivation for developing dynamic theories of interpretation, beginning with DRT, was the realization that the dichotomy between referential and bound-variable (occurrences of) pronouns is less natural than one might think—less natural in the sense that some pronouns don't fit comfortably in either category.
[5a] Pedro owns a donkey. [5b] It is grey.
What is the relationship between “it” in the second sentence and its antecedent expression in the first? On the one hand, it cannot be coreference. If the pronoun were coreferential with its antecedent, the indefinite “a donkey” would have to be a referential term, which seems unlikely, e.g. because the negation of [5a] says not that there is a donkey that Pedro fails to own, but rather that he doesn't own any donkey, and furthermore, if [5a] is negated, the anaphoric link between the pronoun and its antecedent is severed:
[6a] Pedro doesn't own a donkey. [6b] *It is grey.
(The asterisk indicates that the sentence is infelicitous if “a donkey” is to be interpreted as the antecedent of “it”.) Observations like this suggest rather strongly that indefinites are quantifiers rather than referential terms. However, if we construe “a donkey” as an existential quantifier, how does it manage to bind the pronoun across a sentence boundary?
The problem with [5] is related to the fact that the pronoun and its indefinite antecedent occur in different sentences. The following examples show, however, that similar problems arise within sentences:
[7a] If Pedro owns a donkey, he beats it.
[7b] Every farmer who owns a donkey beats it.
These are the infamous “donkey sentences”, which were already discussed by scholastic philosophers, and in modern times were reintroduced into philosophy by Geach (1962). In these cases it is obvious that the pronouns don't refer, so they can't be coreferential with their antecedents, either. Nor are the pronouns bound by their antecedents, for they aren't c-commanded by them. The constituent structure of [7b] is roughly as follows:
[S [NP Every [N farmer who owns [NP a donkey]]] [VP beats it]]
Whatever the syntactical details, “a donkey” is too deeply embedded for it to c-command “it”. The same goes for [7a]. Hence, the neuter pronouns in these sentences cannot be construed as bound variables.
Another way of showing that these pronouns are problematic is by considering how we might render these sentences in predicate logic. The most obvious interpretation of [7a] is that Pedro beats every donkey he owns, and [7b] is naturally interpreted as claiming that every farmer beats every donkey he owns. (So, somewhat surprisingly, it appears that the indefinites in these sentences have universal force.) These readings may be captured as follows:
[9a] ∀x[[donkey(x) & own(Pedro,x)] → beat(Pedro,x)]
[9b] ∀x∀y[[farmer(x) & donkey(y) & own(x,y)] → beat(x,y)]
So the problem is not that predicate logic fails to capture the meanings of [7a] and [7b]. The problem is, rather, how to derive [9a] and [9b] from [7a] and [7b] in a principled way. In order to derive these logical forms, we have to assume not only that an indefinite expression buried in a subordinate position ends up having wide scope, but also that it acquires universal force in the process. A theory based on such assumptions might capture the facts, but it would be clearly ad hoc, and would throw up a host of false predictions, as well.
The upshot of the foregoing observations is that, apparently, indefinites are neither quantifiers nor referential terms, and this problem entrains another one, for as long as it is unclear what indefinites mean, it will also remain obscure how they can serve as antecedents to pronouns. Later on we will see that very similar problems arise in seemingly disparate domains, like the interpretation of tense and presuppositions.
3. Basic DRT
This section introduces DRT in informal terms. We show how a hearer builds up a mental model of the ongoing discourse, dealing not only with simple sentences, but also with conditionals, quantification, and anaphora, and we discuss DRT's treatment of indefinites, and how they are set apart from quantifying expressions.
3.1 The key ideas
A discourse representation structure (DRS) is a mental representation built up by the hearer as the discourse unfolds. A DRS consists of two parts: a universe of so-called “discourse referents”, which represent the objects under discussion, and a set of DRS-conditions which encode the information that has accumulated on these discourse referents. The following DRS represents the information that there are two individuals, one of which is a farmer, the other a donkey, and that the former chased the latter:
[1] [x, y: farmer(x), donkey(y), chased(x,y)]
The universe of this DRS contains two discourse referents, x and y, and its condition set is {farmer(x), donkey(y), chased(x,y)}.
A DRS like the one in [1] can be given a straightforward model-theoretic interpretation. In DRT this is done by means of embedding functions, which are partial functions from discourse referents to individuals in a given model M. An embedding function f verifies [1] in M iff the domain of f includes at least x and y, and according to M it is the case that f(x) is a farmer, f(y) is a donkey, and f(x) chased f(y).
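To make the notions of universe, conditions, and verifying embeddings concrete, here is a minimal sketch in Python. The dictionary-based model, the individual names, and the pair-of-sets encoding of a DRS are illustrative assumptions of ours, not part of the official formalism; only atomic conditions are handled here.

```python
# A minimal sketch of DRS [1] and its verification by an embedding function,
# restricted to atomic conditions. The dict-based model, the names "pedro" and
# "burro", and the (universe, conditions) encoding are illustrative assumptions.

model = {
    "farmer": {"pedro"},
    "donkey": {"burro"},
    "chased": {("pedro", "burro")},
}

# DRS [1]: universe {x, y}; conditions farmer(x), donkey(y), chased(x, y)
drs1 = (
    {"x", "y"},
    [("farmer", ("x",)), ("donkey", ("y",)), ("chased", ("x", "y"))],
)

def verifies(f, drs, model):
    """f verifies the DRS iff Dom(f) covers its universe and every atomic
    condition holds in the model under f."""
    universe, conditions = drs
    if not universe <= f.keys():
        return False
    for pred, args in conditions:
        values = tuple(f[a] for a in args)
        if (values if len(values) > 1 else values[0]) not in model[pred]:
            return False
    return True

print(verifies({"x": "pedro", "y": "burro"}, drs1, model))  # True
print(verifies({"x": "burro", "y": "pedro"}, drs1, model))  # False
```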
Meanwhile it will have become clear that the DRS in [1] is designed to reflect the intuitive meaning of:
[2] A farmer chased a donkey.
Indeed, it is claimed that, in the absence of any information about the context in which this sentence is uttered, [1] is the semantic representation of [2]. So the indefinite expressions “a farmer” and “a donkey” prompt the introduction of two new discourse referents, x and y, and contribute the information that x is a farmer and y a donkey, while the verb contributes the information that the former chased the latter.
If a discourse begins with an utterance of [2], the DRS in [1] is constructed, and this DRS forms the background against which the next utterance is interpreted. Suppose now that [2] is followed by a token of [3a]:
[3a] He caught it.
[3b] [v, w: caught(v,w)]
[3b] is the DRS that reflects the semantic content of [3a] before the pronouns are resolved. In this DRS, the anaphoric pronouns “he” and “it” in [3a] are represented by the discourse referents v and w, respectively. These discourse referents are underlined to indicate that they require an antecedent. Since [3a] is uttered in the context of [1], the next step in the interpretation of this sentence is to merge the DRS in [3b] with that in [1]. The result of this merging operation is [4a]:
[4a] [x, y, v, w: farmer(x), donkey(y), chased(x,y), caught(v,w)]
[4b] [x, y, v, w: v = x, w = y, farmer(x), donkey(y), chased(x,y), caught(v,w)]
[4c] [x, y: farmer(x), donkey(y), chased(x,y), caught(x,y)]
Since [3a] is immediately preceded by [2], the antecedent of “he” is probably (though not necessarily) “a farmer”, while “it” is anaphorically dependent on “a donkey”. At DRS level, this is represented by equating the discourse referents v and w with x and y, respectively. These operations yield [4b], which is equivalent to [4c]. Either DRS is verified in a model M iff M features a farmer who chased and caught a donkey.
Thus far, we have only considered DRSs with simple conditions, but in order to account for negated and conditional sentences, say, complex conditions are required.
[5a] Pedro doesn't have a donkey.
[5b] [1 x: Pedro(x),¬[2 y: donkey(y), owns(x,y)] ]
[5b] is the sentence DRS corresponding to [5a]. This DRS contains a condition that consists of a DRS prefixed by a negation sign. For ease of reference we will sometimes adorn DRSs with numerical labels, as we have done in [5b], and use these in names like “[5b1]”, “[5b2]”, and so on. In general, labeling of DRSs will be top-down and left-to-right, so the main (or principal) DRS will always be number one.
A function f verifies [5b1] in a model M iff f maps x onto an individual in M which “is a Pedro”, i.e. which is called “Pedro”, and f cannot be extended to a function g which verifies [5b2]—that is to say, no such g should map y onto a donkey that Pedro owns.
[5b2] contains a token of the discourse referent x which is introduced externally, in the DRS in which [5b2] is embedded, i.e. [5b1]. Apart from that, [5b2] also introduces a discourse referent of its own, i.e., y, which is associated with the indefinite NP “a donkey”, and whose scope is delimited by [5b2]. Consequently, it doesn't make sense to refer to y outside of [5b2]. In DRT, this is taken to explain why the “lifespan” of the individual introduced by the indefinite NP in [5a] is delimited by the scope of the negation operator. If [5a] were followed by [6a], for example, the pronoun could not be linked to the indefinite:
[6a] It is grey. [6b] [z: grey(z)] [6c] [1 x, z: Pedro(x),¬[2 y:donkey(y), owns(x,y)], grey(z)]
If we merge [5b] and [6b], which is the sentence DRS corresponding with [6a], we obtain [6c]. In this representation, the discourse referent z does not have access to y, because y is introduced in a DRS that is not accessible to the DRS in which z is introduced, and therefore it is not possible to bind z to y. In other words, if [6a] is preceded by [5a], the pronoun cannot be anaphorically linked to the indefinite. This prediction appears to be correct.
Accessibility is in the first instance a relation between DRSs; derivatively, it is also a relation between discourse referents. [6c1] is accessible to [6c2], but not the other way round, and therefore the discourse referents introduced in [6c1], i.e. x and z, are accessible from [6c2], but conversely, if we are in [6c1] we have no access to [6c2] and its discourse referents, i.e. y. Thus in [6c] anaphora is not possible because y is not accessible to z. In [4a], by contrast, anaphora is possible, because x and y are accessible to v and w (the accessibility relation being reflexive). The notion of accessibility is crucial to DRT's account of anaphora, and it is important to note that it is not stipulated, but is entailed by the semantics of the DRS language. [6c1] is accessible to [6c2] because every embedding function that must be considered for [6c2] is an extension of an embedding function for [6c1], and it is for this reason that every discourse referent in [6c1] is also defined in [6c2]. The converse, however, does not hold.
Like negated sentences, conditionals give rise to complex DRS-conditions, too. [7] gives an example:
[7a] If Pedro owns a donkey, he beats it. [7b] [1 x: Pedro(x), [2 y: donkey(y), owns(x,y)] ⇒ [3 v, w: beats(v,w)]] [7c] [1 x, v: v = x, Pedro(x), [2 y, w: w = y, donkey(y), owns(x,y)] ⇒ [3 : beats(v,w)] ] [7d] [1 x: Pedro(x), [2 y: donkey(y), owns(x,y)] ⇒ [3 : beats(x,y)] ]
[7b] is the sentence DRS which corresponds to [7a], and assuming for convenience that this sentence is uttered in an empty context, it is also the initial DRS of the discourse. The complex condition in this structure is interpreted as follows: if f is to verify [7b1] in the current model, then f(x) must be an individual called “Pedro”, and every extension of f which verifies [7b2] must itself be extendable to a function that verifies [7b3]. It follows from this that [7b1] is accessible to [7b2], which in its turn is accessible to [7b3], and therefore v may be linked up to x (accessibility being a transitive relation) and w to y. The result is [7c], which is equivalent to [7d], both DRSs saying that Pedro beats every donkey he owns.
3.2 Quantifiers vs. indefinites
The interpretation of quantified donkey sentences is very similar to what we have just seen:
[8a] Every farmer who owns a donkey beats it. [8b] [1 [2 x, y: farmer(x), donkey(y), owns(x,y)](∀x)[3 v, w: beats(v,w)] ] [8c] [1 [2 x, y, v, w: v = x, w = y, farmer(x), donkey(y), owns(x,y)](∀x)[3 : beats(v,w)] ] [8d] [1 [2 x, y: farmer(x), donkey(y), owns(x,y)](∀x)[3 : beats(x,y)] ]
There are two ways of spelling out the interpretation of so-called duplex conditions of the form K(∀x)K′. On its weak interpretation, [8a] means that every farmer who owns a donkey beats at least one of the donkeys he owns; on its strong interpretation the sentence says that every farmer beats every donkey he owns. While the strong reading is the most natural choice for [8a], other donkey sentences prefer a weak reading:
[9] Every farmer who owns a tractor uses it to drive to church on Sundays.
The most likely interpretation for [9] to have is that every tractor-owning farmer uses one of his tractors to drive to church.
Whether weak or strong, the interpretation of a condition of the form K(Qx)K′, where Q may be any quantifier, makes K accessible to K′, and in this respect (which is the fulcrum of the DRT analysis) conditionals and quantified sentences are the same. Consequently, the discourse referents x and y in [8b2] are accessible to v and w in [8b3], and the latter may be equated to the former. The resulting representation is [8c], which is equivalent to [8d].
The DRT analysis of quantified expressions like “all” or “most” is fairly standard. A quantifier binds a variable and delivers the truth conditions one should expect. Indefinites are different. An indefinite like “a donkey” is treated not as a quantifier but simply as a device for introducing a discourse referent and one or more conditions; on the DRT account, indefinites have no quantifying force of their own. What quantifying force they seem to have is not theirs, but derives from the environment in which they occur. If the semantic material associated with “a donkey” is introduced in the main DRS, as in [4c], the quantifying effect will be existential, owing to the fact that this DRS is verified in a model M iff there is a way of verifying it in M. If the semantic material associated with “a donkey” is introduced in the antecedent of a conditional, as in [7d], the quantifying effect will be universal, owing to the fact that a condition K ⇒ K′ is verified in M iff every way of verifying K can be extended to a way of verifying K′. This view on indefinites lies at the heart of DRT.
DRT was one of the first semantic theories to go beyond the sentence boundary, and take into account how the interpretation of an expression may depend on the preceding discourse. Looking back at the examples discussed in the foregoing, we see that, if the DRT approach is on the right track, sentence boundaries are not as important as the Fregean conception of language (which continues to have a strong hold on linguistics and philosophy) would have it. In particular, there is essentially no difference between the DRT analyses of cross-sentential anaphora, as exemplified by the mini-discourse [2]-[3a], and sentence-internal anaphora, as in [7a] or [8a]. In either case, the pronoun simply serves to pick up an accessible discourse referent. This raises the question of how DRT's new-fangled notion of anaphora relates to the dichotomy between referential and bound-variable pronouns. Curiously, this issue doesn't seem to have received much attention thus far.
3.3 One- vs. two-level versions of DRT
In the foregoing we assumed that the contribution of a sentence to the discourse, as represented by a DRS, was obtained in two steps. In the first one, a sentence DRS was constructed in a compositional way; this part of the construction process we took for granted, but its implementation wouldn't be too difficult. In the second step, the sentence DRS was merged with the DRS representing the prior discourse, and anaphoric references were resolved. This two-stage procedure has become the industry standard, but the original version of DRT (Kamp 1981, Kamp and Reyle 1993) was monostratal in the sense that one set of rules took care of both tasks at once. In the meantime, Kamp and his co-workers have adopted the two-step method, too (e.g. Kamp et al. 2005).
In one-level versions of DRT, a single set of rules is used to obtain the semantic contribution of a sentence. To take a simple example, if the sentence is “It is grey”, the first rule to apply will say that the semantic correlate of the subject is an argument to the predicate expressed by “is grey”. Then the two main parts of the sentence are analysed further, and the pronoun “it” triggers a rule which at once deals with the pronoun's lexical content and its context dependence; i.e. it says that we must select a new discourse referent, link it to a discourse referent made available by the preceding discourse, and update the DRS so as to record these changes. In two-level versions of DRT there is no such rule. Its duties are divided between two separate mechanisms. In the first stage, a separate DRS is constructed for the sentence, in which pronominal lexemes prompt the introduction of new discourse referents which are marked as being anaphoric (we use underlining for this purpose). In the second stage, there is a general mechanism for dealing with discourse referents that are thus marked.
The distinction between one- and two-level versions of DRT is of interest for at least two reasons. First, once we go beyond basic DRT, the two-stage system may actually be more economical overall. For example, if DRT is extended for dealing with presuppositions (see §5), the one-pass approach becomes unwieldy. Secondly, and perhaps more importantly, two-stage theories may be viewed as implementing a distinction between semantics and pragmatics which has some prima facie plausibility: in the first stage a representation is computed which is projected from the sentence's lexico-grammatical structure, while in the second stage, context-dependent aspects of meaning are dealt with. This division of labor is reminiscent of the traditional dichotomy between compositional semantics and non-compositional pragmatics, but it should be noted that the two do not coincide. Usually, the “semantical” part of a two-stage version of DRT is not compositional in the Fregean sense, because its output is essentially incomplete; it may contain anaphoric “gaps”, for example. Therefore, even if a sentence DRS admits of a truth-conditional interpretation, it will typically fall short of capturing everything that is conventionally thought to belong to truth-conditional content.
4. The DRS language: syntax, semantics, accessibility
In this section we give a more precise description of the DRS language than we have done so far. We define the syntax of the language, present a model-theoretic interpretation, and discuss the notion of accessibility, which figures so prominently in DRT's account of anaphora.
4.1 Syntax
DRSs are set-theoretic objects built from discourse referents and DRS-conditions. A DRS-condition is either atomic, in which case it consists of a predicate and a suitable number of discourse referents, or complex, in which case it embeds one or two DRSs. Hence, the definition of the DRS language is by simultaneous recursion:
DRSs and DRS-conditions
- A DRS K is a pair < UK, ConK>, where UK is a set of discourse referents, and ConK is a set of DRS-conditions.
- If P is an n-place predicate, and x1, …, xn are discourse referents, then P(x1, …, xn) is a DRS-condition.
- If x and y are discourse referents, then x = y is a DRS-condition.
- If K and K′ are DRSs, then ¬K, K ⇒ K′, and K v K′ are DRS-conditions.
- If K and K′ are DRSs and x is a discourse referent, then K(∀x)K′ is a DRS-condition.
Note that there are no conditions corresponding to conjoined sentences. Such sentences are dealt with, rather, by merging the DRSs associated with their parts, where the merge of two DRSs K and K′ is defined as their pointwise union:
DRS-merge
• K ⊕ K′ = <UK ∪ UK′, ConK ∪ ConK′>
The merge operation is also used for combining a sentence DRS with the DRS representing the preceding discourse, so the idea is that there is no principled distinction between (clausal) conjunction and sentence concatenation.[2]
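The merge operation is easily implemented. The following sketch reuses the pair-of-sets encoding of a DRS introduced in the earlier snippet; again, this encoding is an illustrative assumption, not the official notation.

```python
# A sketch of DRS-merge as pointwise union, using the same (universe, conditions)
# encoding as before.

def merge(k1, k2):
    """K ⊕ K′ = <U_K ∪ U_K′, Con_K ∪ Con_K′>."""
    (u1, con1), (u2, con2) = k1, k2
    return (u1 | u2, con1 + [c for c in con2 if c not in con1])

# "A farmer chased a donkey." merged with "He caught it." (pronouns unresolved)
chased = ({"x", "y"}, [("farmer", ("x",)), ("donkey", ("y",)), ("chased", ("x", "y"))])
caught = ({"v", "w"}, [("caught", ("v", "w"))])
merged = merge(chased, caught)
print(merged[0])  # {'x', 'y', 'v', 'w'} (in some order)
print(merged[1])  # the four conditions of DRS [4a]
```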
Officially, DRSs are set-theoretical objects, but in this article we use a linear notation. Many sources, including Kamp's own work, employ a graphical “box notation” which is sometimes clearer, but less parsimonious and more difficult to type. To illustrate the various schemes, here are three ways of representing the content of a conditional donkey sentence:
- Official DRS: <{}, {<{x, y}, {farmer(x), donkey(y), owns(x,y)}> ⇒ <{}, {beats(x,y)}>}>
- Linear notation: [: [x, y: farmer(x), donkey(y), owns(x,y)] ⇒ [: beats(x,y)]]
- Box notation: the same content drawn as nested boxes (diagram not reproduced here)
To be clear: the choice between the above three representations is one of convenience, and not intended to have any theoretical significance.
4.2 Accessibility
Accessibility is a relation between DRSs that is transitive and reflexive, i.e. it is a preorder. More specifically, it is the smallest preorder for which the following holds, for all DRSs K, K′, and K″: if ConK contains a condition of the form …
- ¬K′, then K is accessible to K′
- K′ v K″, then K is accessible to K′ and K″
- K′ ⇒ K″, then K is accessible to K′ and K′ is accessible to K″
- K′(∀x)K″, then K is accessible to K′ and K′ is accessible to K″
To illustrate, in the following schematic representations, every DRS is accessible to all and only those DRSs whose number does not exceed its own (so every DRS is accessible to itself):
- [1 … [2 … ](∀x)[3 … ¬[4 … ] ] ]
- [1 … [2 … ] ⇒ [3 … ¬[4 … ] ] ]
The accessible domain of a DRS K, AK, is the set of discourse referents that occur in some K′ that is accessible to K, i.e., AK = {x: K′ is accessible to K and x ∈ UK′}. The main constraint which DRT imposes on the interpretation of anaphora is this:
Accessibility constraint
A pronoun is represented by a discourse referent x which must be equated to some discourse referent y ∈ AK, where K is the DRS in which x is introduced.
It is important to note that neither the notion of accessibility nor the accessibility constraint need be stipulated. For, as we will presently see, both follow from the way the DRS language is interpreted (cf. the notion of variable binding in standard predicate logic). Thus, we are not at liberty to modify the accessibility relation, should we wish to do so, unless we simultaneously revise the truth conditions associated with the DRS language. The accessibility constraint is a semantic one.
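The four clauses above can be mirrored directly in code. The following sketch walks a DRS top-down and collects the accessible domain of every sub-DRS; the tagged-tuple encoding of complex conditions ("not", "or", "imp", "forall") is our own illustrative choice.

```python
# Computing the accessible domain A_K of every sub-DRS, following the four
# accessibility clauses. Complex conditions are encoded as tagged tuples
# ("not", K), ("or", K, K'), ("imp", K, K'), ("forall", "x", K, K').

def accessible_domains(drs, inherited=frozenset(), path="1", out=None):
    """Return a dict mapping a label for each (sub-)DRS to its accessible domain."""
    if out is None:
        out = {}
    universe, conditions = drs
    here = set(inherited) | set(universe)   # every DRS is accessible to itself
    out[path] = here
    sub = 1
    for cond in conditions:
        tag = cond[0]
        if tag == "not":                        # ¬K′: K is accessible to K′
            accessible_domains(cond[1], here, f"{path}.{sub}", out)
            sub += 1
        elif tag == "or":                       # K′ v K″: K accessible to both
            for k in (cond[1], cond[2]):
                accessible_domains(k, here, f"{path}.{sub}", out)
                sub += 1
        elif tag in ("imp", "forall"):          # K′ ⇒ K″ and K′(∀x)K″
            antecedent, consequent = cond[-2], cond[-1]
            accessible_domains(antecedent, here, f"{path}.{sub}", out)
            antecedent_domain = out[f"{path}.{sub}"]
            sub += 1
            accessible_domains(consequent, antecedent_domain, f"{path}.{sub}", out)
            sub += 1
    return out

# [1 x: Pedro(x), ¬[2 y: donkey(y), owns(x,y)]]: y is not accessible from [1]
k = ({"x"}, [("Pedro", ("x",)),
             ("not", ({"y"}, [("donkey", ("y",)), ("owns", ("x", "y"))]))])
print(accessible_domains(k))   # {'1': {'x'}, '1.1': {'x', 'y'}}
```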
As discussed in §2, the main syntactic constraint on bound-variable pronouns is that they be c-commanded by their antecedents. The domain of the accessibility constraint overlaps with that of the c-command constraint, but it is wider. For one thing, sentence boundaries don't affect accessibility. For another, within the confines of a sentence, the semantic correlate of an expression E1 may be accessible to that of E2, even if E1 doesn't c-command E2. To illustrate this point, consider the conditional donkey sentence in [1a] and its sentence DRS in [1b]:
[1a] [S [S If [S [NP a farmer] [VP owns a donkey]]] [S he beats it]] [1b] [1 : [2 x, y: farmer(x), donkey(y), owns(x,y)] ⇒ [3 u, v: beats(u,v)] ]
Syntactically speaking, neither “a farmer” nor “a donkey” can bind (in the sense of Chomsky 1986 and the immense secondary literature it generated) an expression beyond the smallest clause in which they occur. In particular neither can bind any of the pronouns in the main clause of [1a]. But the discourse referents associated with these indefinites, i.e. x and y, are in the accessible domain of [1b3], and therefore the anaphoric discourse referents in this embedded DRS can be linked to x and y. Put otherwise, they are bindable at a semantic but not at a syntactic level.
4.3 Semantics
The truth-conditional semantics of the DRS language is given by defining when an embedding function verifies a DRS in a given model. An embedding function is a partial mapping from discourse referents to individuals. Given two embedding functions f and g and a DRS K, we say that g extends f with respect to K, or f[K]g for short, iff Dom(g) = Dom(f) ∪ UK, and for all x in Dom(f): f(x) = g(x). Viewing functions as set-theoretic objects, this can be formulated more succinctly as follows:
f[K]g iff f ⊆ g and Dom(g) = Dom(f) ∪ UK
In the following we define what it takes for an embedding function to verify a DRS or DRS-condition in a given model. As usual, a model M is a pair <D,I>, where D is a set of individuals, and I is an interpretation function that assigns sets of individuals to one-place predicates, sets of pairs to two-place predicates, and so on. To enhance the legibility of our definition somewhat we omit the qualification “in M” throughout:
Verifying embeddings
• f verifies a DRS K iff f verifies all conditions in ConK.
• f verifies P(x1, …, xn) iff <f(x1), …, f(xn)> ∈ I(P).
• f verifies x = y iff f(x) = f(y).
• f verifies ¬K iff there is no g such that f[K]g and g verifies K.
• f verifies K v K′ iff there is a g such that f[K]g and g verifies K, or a g such that f[K′]g and g verifies K′.
• f verifies K ⇒ K′ iff, for all f[K]g such that g verifies K, there is an h such that g[K′]h and h verifies K′.
• f verifies K(∀x)K′ iff, for all individuals d ∈ DM and for all f[K]g such that g(x) = d and g verifies K, there is an h such that g[K′]h and h verifies K′.
The last clause gives a strong interpretation to universal duplex conditions (cf. §3.2); the weak interpretation is obtainable as follows:
• f verifies K(∀x)K′ iff, for all individuals d ∈ DM for which there is a g such that f[K]g and such that g(x) = d and g verifies K, there is an h such that f[K ⊕ K′]h and h verifies K⊕K′.
A DRS is true in a given model iff we can find a verifying embedding for it, as follows:
Truth
A DRS K is true in a model M iff there is an embedding function f such that Dom(f) = UK and f verifies K in M.
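As a rough illustration of how these clauses can be operationalized, here is a sketch of a verifier for the fragment of the DRS language used in our examples (atomic conditions, equations, negation, and implication; disjunction and duplex conditions would be handled analogously). The model is a pair (D, I), and the tuple encodings are the same illustrative ones used in the earlier snippets, not part of the official definition.

```python
from itertools import product

# A sketch of the verification clauses and the truth definition for a fragment
# of the DRS language (atomic conditions, equations, negation, implication).

def extensions(f, drs, domain):
    """All g such that f ⊆ g and Dom(g) = Dom(f) ∪ U_K, i.e. f[K]g."""
    new = sorted(set(drs[0]) - f.keys())
    for values in product(domain, repeat=len(new)):
        yield {**f, **dict(zip(new, values))}

def verifies(f, drs, model):
    domain, interp = model
    for cond in drs[1]:
        tag = cond[0]
        if tag == "not":                      # no verifying extension may exist
            if any(verifies(g, cond[1], model)
                   for g in extensions(f, cond[1], domain)):
                return False
        elif tag == "imp":                    # every verifying g extends to some h
            k1, k2 = cond[1], cond[2]
            for g in extensions(f, k1, domain):
                if verifies(g, k1, model) and not any(
                        verifies(h, k2, model) for h in extensions(g, k2, domain)):
                    return False
        elif tag == "eq":                     # x = y
            if f[cond[1]] != f[cond[2]]:
                return False
        else:                                 # atomic condition P(x1, ..., xn)
            args = tuple(f[a] for a in cond[1])
            if (args if len(args) > 1 else args[0]) not in interp.get(tag, set()):
                return False
    return True

def true_in(drs, model):
    """A DRS is true iff some embedding with domain U_K verifies it."""
    return any(verifies(f, drs, model) for f in extensions({}, drs, model[0]))
```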
To see how accessibility follows from these definitions, observe that the critical junctures are the points at which extensions of embedding functions are called for. For example, consider a schematic DRS with a conditional in it:
[2] [x: A(x), [y: B(x,y)] ⇒ [z: C(x,y,z)] ]
This DRS is true iff we can find an embedding function f, Dom(f) = {x}, which verifies all conditions in [2], which is to say that f(x) must be an A, and f verifies the complex condition [y: B(x,y)] ⇒ [z: C(x,y,z)]. The latter requirement is met iff every extension g of f, Dom(g) = {x,y}, such that g verifies [y: B(x,y)] can be extended to a function h, Dom(h) = {x,y,z}, such that h verifies [z: C(x,y,z)]. Hence, no matter how we choose g, g(x) must be the same individual as f(x), and for any g, no matter how we choose h, h(x) and h(y) must be the same as g(x) and g(y), respectively. This is why x is in the accessible domain associated with the antecedent of the conditional, and why y is in the accessible domain associated with the consequent.
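Continuing the sketch above, the schematic DRS [2] can be checked for truth in a toy model; the individuals a, b, c and the extensions of A, B, and C are of course made up for illustration.

```python
# The schematic DRS [2] in the toy encoding, checked with the verifier above.
drs2 = ({"x"}, [
    ("A", ("x",)),
    ("imp", ({"y"}, [("B", ("x", "y"))]),
            ({"z"}, [("C", ("x", "y", "z"))])),
])
model = ({"a", "b", "c"},
         {"A": {"a"}, "B": {("a", "b")}, "C": {("a", "b", "c")}})
print(true_in(drs2, model))   # True: f maps x to a, g adds y ↦ b, h adds z ↦ c
```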
5. Beyond the basics
The purpose of this section is to illustrate how the ideas lying at the heart of DRT have been applied elsewhere, confining our attention to the domains of tense and presupposition. In both cases, we will find interpretative dependencies reminiscent of the dependency relation between an anaphor and its antecedent, and we will describe, if only in outline, how basic DRT has been extended so as to explain the similarities.
5.1 Tense
In the early 1970s, it was pointed out that there are systematic parallels between tenses and pronouns (Partee 1973), and DRT was partly born from the conviction that a theory of interpretation should account for these parallels. Partee (1984) herself worked out some of these ideas in the DRT framework. In the following we will briefly discuss some of Partee's observations, and outline a DRT treatment.
[1a] Pedro owns a donkey. He beats it. [1b] Yesterday, Pedro tried to kiss Juanita. She slapped him.
Intuitively, the indefinite description “a donkey” in the opening sentence of [1a] serves to introduce an entity that is subsequently picked up by the pronoun “it” in the second sentence, and the DRT account of indefinites and pronouns seeks to formalize this intuitive picture. [1b] shows that a very similar phenomenon occurs in the temporal domain, for in this case it is natural to construe the past tense of the second sentence as being anaphorically dependent on the content of the first: presumably, Juanita slapped Pedro right after he tried to kiss her. In order to capture this idea, DRT adopts an event-based semantics, treating events as semantic values of a designated class of discourse referents. Sentences are now construed like indefinite descriptions in that they, too, serve to introduce discourse referents. For example, the first sentence in [1b] introduces a discourse referent of the event type that represents Pedro's attempting to kiss Juanita, and the tense morpheme in “slapped” is construed as referring back, very much like an anaphoric pronoun, to that event: by default, the time of Juanita's slapping Pedro will be taken to be immediately after the time of him trying to kiss her.
Our second example involves what one might call “donkey tense”:
[2a] Every farmer who owns a donkey beats it. [2b] Whenever Pedro tried to kiss Juanita, she slapped him.
In an event-based semantics, it is natural to construe [2b] as quantifying over events, just like [2a] quantifies over individuals. In conjunction with the DRT treatment of quantification, this allows us to interpret the tense in “slapped” as referring back to the event introduced in the subordinate clause, along the following lines (where “⊃⊂” stands for the “right after” relation, e″ is the variable introduced by the tense in “slapped”, and the underlining of e″ indicates that it requires an antecedent):
[3a] [x, y: Pedro(x), Juanita(y), [e: try-to-kiss(e,x,y)](∀e)[e′, e″: e″ ⊃⊂ e′, slap(e′,y,x)]]
[3b] [x, y: Pedro(x), Juanita(y), [e: try-to-kiss(e,x,y)](∀e)[e′: e ⊃⊂ e′, slap(e′,y,x)]]
In [3a] the tense introduces an anaphoric element, which can be bound in the restrictor of the quantifier, just as an ordinary donkey pronoun might be bound. The result, after a minor simplification, is [3b], which says that every event e of Pedro trying to kiss Juanita is immediately followed by an event e′ of her slapping him. Note that, although this DRS captures the most natural interpretation of the past tense in “slapped”, it leaves the past tense in “tried” unaccounted for. But this is as it should be, for if tense is anaphoric, then that tense, too, should be linked to a salient time point, which seems to be right: in the absence of a context that furnishes such a time, [2b] is simply infelicitous; it is like saying “He is handsome” when it is not clear whom the pronoun is intended to refer to.
5.2 Presupposition
Presuppositions are chunks of information associated with particular lexical items or syntactic constructions. There are many such items and constructions, and the following is just a small selection:
Factive verbs
[4a] Juan knows that Pedro beats his donkey. [4b] Pedro beats his donkey.
It-clefts
[5a] It's in the stable that Pedro beats his donkey. [5b] Pedro beats his donkey.
Definites
[6a] Pedro beats his donkey. [6b] Pedro has a donkey.
Someone who utters any of the [a] sentences commits himself to the truth of the respective [b] sentence. Of course, this does nothing to distinguish presuppositions from ordinary entailments, but the difference becomes apparent when we embed presuppositional expressions or constructions in non-entailing contexts, as in:
[7a] It isn't in the stable that Pedro beats his donkey. [7b] Maybe it's in the stable that Pedro beats his donkey.
Here [5a] is embedded in the scope of a negation operator and a modal operator, respectively, and it appears that these sentences commit a speaker to the truth of [5b] just as much as [5a] does. This behavior sets presuppositions apart from ordinary entailments.
Generally speaking, presuppositions tend to escape from any embedded position in the sense that, if a sentence S contains a presupposition-inducing expression P, an utterance of S will usually imply that P holds. This is only generally speaking, because this rule, though correct in the majority of cases, does not hold without exceptions:
[8a] It may be that Pedro is a mean farmer, and that he beats his donkey. [8b] It may be that Pedro has a donkey, and that he beats his donkey. [9a] If Pedro is a mean farmer, he beats his donkey. [9b] If Pedro has a donkey, he beats his donkey.
While the [a] sentences would normally commit a speaker to the claim that Pedro has a donkey, the same does not hold for the [b] sentences. It appears, therefore, that presuppositions are typically though not invariably inherited by the sentences in which they occur. This is the so-called “projection problem” for presuppositions.
Considering that anaphora and presupposition were widely discussed from the late 1960s onwards, it took a remarkably long time before it was discovered that, in some respects at least, the two phenomena are very similar (van der Sandt 1992, Kripke ms). To see how, consider the following pairs:
[10a] Pedro owns a donkey. Juan knows {it/that he owns a donkey}. [10b] If Pedro owns a donkey, Juan knows {it/that he owns a donkey}. [10c] Every farmer who owns a donkey knows {it/that he owns a donkey}.
It is true that the variants with the “that”-complements are slightly odd, but actually the fact that they are proves the point we want to make; for it is clear that, in these examples, the pronoun “it” and the clause “that he owns a donkey” perform the same duty, and if the latter sounds somewhat off it is presumably because the former does the job just as well.
While the factive verb “know” triggers the presupposition that its clausal complement is true, this presupposition is neutralized in [10b] and [10c]. If presuppositions behave similarly to anaphoric pronouns, it is clear why that should be so: in both cases, the presupposition is bound sentence-internally, just like the neuter pronoun is bound.
Presupposed information is information that is presented as given, and according to the so-called “binding theory” of presupposition this means that presuppositions want to have discourse referents to bind to. However, whereas anaphoric pronouns are rarely interpretable in the absence of a suitable antecedent, the same does not hold for all presupposition-inducing expressions. For instance, a speaker may felicitously assert that he met “Pedro's sister” even if he knows full well that his audience wasn't aware that Pedro has a sister. In such cases, Stalnaker (1974) suggested, presuppositions are generally accommodated, which is to say that the hearer accepts the information as given, and revises his representation of the context accordingly. Accommodation, thus understood, is a form of exploitation in Grice's sense: the purpose of presuppositional expressions is to signal that this or that information is given, and if some information is new but not particularly interesting or controversial (like the fact that somebody has a sister) the speaker may choose to “get it out of the way” by presuppositional means.
The binding theory incorporates the notion of accommodation as follows. Presuppositions, according to the theory, introduce information that prefers to be linked to discourse referents that are already available in the DRS, and in this respect they are like pronouns. However, if a suitable discourse referent is not available, a new one will be accommodated, and the presupposition is linked to that. Generally speaking, accommodation is not an option in the interpretation of pronouns, and a possible reason for this is that a pronoun's descriptive content is relatively poor. Being told that “she” is wonderful is not particularly helpful if it isn't clear who the pronoun is meant to refer to. By contrast, if the speaker refers to “Pedro's sister” there is more to go on, and accommodation becomes feasible. Hence, the binding theory views pronouns as a special class of presuppositional expressions. While all presupposition triggers prefer to be bound, pronouns almost always must be bound. According to the binding theory, this is because pronouns are descriptively attenuated, and therefore cannot be construed by way of accommodation.
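The division of labor between binding and accommodation can be caricatured in a few lines of code. The following sketch is a drastic simplification of the binding theory: it assumes a flat context DRS and a single-predicate presupposition, and it ignores, among other things, the choice of accommodation site that figures in van der Sandt's actual proposal. All names and encodings are our own illustrative assumptions.

```python
# A drastically simplified sketch of binding-then-accommodation: try to bind
# the presupposition to a referent the context already describes; failing that,
# accommodate a fresh referent together with the presupposed condition.

def resolve_presupposition(context, presup_pred):
    universe, conditions = context
    # binding: reuse a referent already carrying the presupposed description
    for pred, args in conditions:
        if pred == presup_pred and len(args) == 1:
            return context, args[0]
    # accommodation: introduce a new referent and add the presupposed condition
    new = f"u{len(universe)}"
    return (universe | {new}, conditions + [(presup_pred, (new,))]), new

ctx = ({"x"}, [("farmer", ("x",))])
print(resolve_presupposition(ctx, "farmer"))  # binds to the existing referent x
print(resolve_presupposition(ctx, "sister"))  # accommodates a new referent
```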
6. Representationalism, attitudes, and compositionality
The DRT framework is of interest to philosophers of language primarily because it has enabled perspicuous treatments of a range of natural language phenomena that have proved recalcitrant over many years, for example the phenomena involving donkey anaphora, tense and presupposition discussed above. But there are also several aspects of the DRT framework itself which are of philosophical interest, largely connected with the status of DRSs as mental entities.
DRT is a representational theory of interpretation. In the DRT framework, sentences have meanings, if at all, only in a derivative sense: a sentence prompts the hearer to modify his DRS, and it is DRSs that have a truth-conditional interpretation. Thus, a sentence causes a DRS Ki to be transformed into a DRS Ki+1 (usually, though not invariably, the process will consist in adding information to Ki), and at a remove there is an associated transition at the semantical level, from Ki's interpretation to Ki+1's. Hence, if we could cut out the representational middleman, we would have a theory that defines sentence meaning in dynamic terms, as a transition from one semantic object to another. Thus arises the question whether or not the representational level can be dispensed with. Kamp (1981) claimed that representations were essential to his ur-version of DRT, which was a subset of our basic DRT, but in the meantime he was proved wrong by dynamic semantic theories that recast classical DRT in purely semantical terms: starting with Barwise (1987) and Rooth (1987), then (with somewhat more splash) Groenendijk and Stokhof (1989, 1991), and, in perhaps its neatest formulation, Muskens (1996).
What does it mean to say that a theory, or framework, is intrinsically representational, or that the dynamic semantic theories listed above are not? To be sure, a very determined semanticist could force all the apparatus of DRSs into his models, producing a fine-grained semantic theory which, strictly speaking, did not make use of a level of syntactic representations (perhaps reminiscent of the dynamic property theory defined by Chierchia 1994). So in this sense the representations of DRT are dispensable. But this is a very weak sense. What it suggests is that the issue of whether a theory should be seen as representational is not clear cut.
Consider Montague Grammar. It uses a representational language, Intensional Logic, but Montague (1970) showed that this use of a higher order logical representation language was just a convenience, and completely dispensable. Specifically, Montague Grammar can be used to define a function directly from sentences of a fragment of natural language to model-theoretic entities. The same is true of dynamic semantic theories, except that the model-theoretic entities are a little different from Montague's. Note, though, that dynamic meanings capture information about which entities are available for later anaphoric reference, whereas Montagovian meanings do not. If two sentences have the same truth conditions, then they should have the same Montagovian meanings, but may differ in their dynamic meanings. It is only if the two sentences have both the same truth conditions and the same anaphoric potential that they will have the same Montagovian meanings and the same dynamic meanings in theories like those of Groenendijk and Stokhof and Muskens. But this does not in itself make dynamic meanings representational. So what makes DRT representations different?
In DRT, even if two sentences have both the same truth conditions and the same anaphoric potential, they may still have different meanings. Indeed, there is no defining criterion for identity of meanings in DRT, truth conditional, anaphoric, or otherwise: two sentences have the same DRT meanings just in case the construction algorithm gives them the same representations.[3] Thus, for example, in Montague Grammar we would expect [1a]-[1c] to all have identical meanings. In a dynamic semantic theory, we might expect [1a] to differ in meaning from [1b], because [1a] freely licenses anaphora to the painting that Jane likes, but for [1b] such anaphora is at least highly marked. Yet on a dynamic semantic theory, [1a] and [1c] would still have identical meanings. In DRT, different representations would be formed for each of [1a]-[1c].
[1a] Jane likes a painting. [1b] It is not the case that Jane doesn't like a painting. [1c] Jane likes a painting and either it is raining or it isn't.
There is, then, at the very least, a methodological issue separating DRT from non-representational frameworks: a non-representational framework is defined with a natural criterion for identity of meaning of two sentences in mind, and this criterion is related to the information conveyed by those sentences. But in a representational framework like DRT the representations themselves provide the only criterion for judging identity of meaning. So is representationalism only methodological, a mere convenience? The answer is surely no, for representationalism involves a strong philosophical claim, namely the claim that DRSs are mental representations, and in some way capture objects of thought. This philosophical position is at the heart of at least one application of DRT, namely the treatment of attitudes. And here we refer not merely to the linguistic issue of how attitude descriptions are to be interpreted, but also to the philosophical and psychological issue of what it is to be the bearer of a mental attitude.
One of the greatest problems in the treatment of attitudes is the issue of logical omniscience: humans are cognitively limited agents, and are not aware of all the consequences of their beliefs, or for that matter, the consequences of their fears and desires. On this basis, Kamp (1990), for example, argues that the objects of thought should not be seen as purely model-theoretic entities, for example a set of belief worlds, as is common in logics of belief from Hintikka (1962) onwards. Rather, the logic of belief and other attitudes must involve mental representations. The problem of logical omniscience is then resolved in terms of the failure of human agents to perform logically complete computations over these representations. Such structural models of DRT as a theory of the attitudes have been developed not only by Kamp, but also by Asher (1986,1989) and Zeevat (1984,1989a). Note that this body of work not only deals with the issue of logical omniscience, but also with a range of other philosophical problems. Perhaps most notable among these is the issue of how we can bear attitudes to objects in the real world, a problem that is sharpened somewhat if attitudes are modeled using syntactic representations.
The solution adopted by Kamp (1990) and others involves anchoring, whereby the DRS language is extended to allow discourse referents mentioned in a DRS to be connected by a special (anchoring) function to objects in the outside world, or to referents in other attitudinal representations. The device of anchoring has been used both for the treatment of attitudes and for the related issue of the interpretation of attitude descriptions, specifically in the analysis of problems involving differences between de re and de dicto attitude sentences. But it should be realized that a DRT treatment of the interpretation of attitude descriptions need not imply that mental attitudes are themselves understood as involving DRSs. For example, Geurts (1999) develops a DRT treatment of attitudes and modals, but provides truth conditions for DRS representations of attitude descriptions in terms of an underlying model in which the objects of attitudes are not structural, but rather involve a neo-Hintikkan relation between individuals and sets of worlds.
Not only is DRT a representational theory of interpretation, it is a non-compositional theory as well. These two features are intertwined. Consider, for instance, the way pronouns are interpreted in basic DRT, by first setting up a referent marker, which is subsequently linked to another discourse referent. This is a non-deterministic process, but even if it were not, it is clear that the anaphoric link is not part of the meaning of the pronoun. In standard DRT, the pronoun does not, in and of itself, introduce something into the DRS that has a model-theoretic interpretation. A standard statement of compositionality would say that the meaning of compounds must be a function of the meaning of their components and their mode of combination. But if some of the components, like pronouns, do not introduce into the DRS any object that can naturally be described as the meaning of that object, then it is clear that we do not have a compositional system.
How bad is it that we lose compositionality? It is generally agreed that some of the standard arguments for compositionality are not particularly compelling. We don't need compositionality to explain why people can understand an indefinite number of sentences, and we don't need it for explaining how languages can be learned. To explain these two properties of language, it suffices that there is some procedure for establishing the meanings of sentences, but it is not strictly necessary for this procedure to be compositional. Compositionality is often viewed as a methodological principle (e.g. Groenendijk and Stokhof 1989, 1991, Janssen 1997, Dever 1999), since any model of interpretation can be made compositional if we are sufficiently relaxed about the nature of lexical meanings and/or the syntactic structures over which the compositional theory is defined. In the case of DRT, the standard approaches to making the system compositional involve dispensing with Kamp's construction algorithm, and replacing it with a system in which DRSs can be composed either through suitable merge operations (Zeevat 1989b), or, in dynamic semantic systems like those of Groenendijk and Stokhof (1989) and Muskens (1996) discussed above, through use of function application in a lambda calculus. But it is fair to say that while most such systems capture meanings for sentences that are closely related to those of original DRT, they do not capture every aspect of DRT. In particular, anaphora resolution is part of DRT, but in dynamic semantic systems anaphora resolution is assumed to be performed by a separate component of the theory. Yet even this aspect of DRT could, in principle, be captured compositionally. Indeed, Kamp's original DRT could trivially (though uninterestingly) be mapped onto a compositional theory with precisely the same coverage. We would simply take meanings to be partial functions from DRSs to sets of DRSs, and then replace each construction rule with an appropriate such function. For example, the pronoun meaning could map a DRS to a set of new DRSs in which the pronoun has been resolved to various values. Sentence (and discourse) meanings would be obtained by composing such functions appropriately, producing for any sentence a set of DRSs.
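To give a flavor of the recasting just described, here is a toy rendering of a pronoun meaning as a function from a DRS to a set of DRSs, one per admissible resolution. The encoding and the names are purely illustrative and do not correspond to any published formulation.

```python
# A toy rendering of "meanings as functions from DRSs to sets of DRSs": the
# pronoun meaning maps an incoming DRS to one output DRS per referent the
# pronoun could be resolved to (here, naively, every referent in the universe).

def pronoun_meaning(pronoun_ref):
    def apply_to(drs):
        universe, conditions = drs
        return {
            (frozenset(universe | {pronoun_ref}),
             tuple(conditions) + (("eq", pronoun_ref, r),))
            for r in universe
        }
    return apply_to

ctx = ({"x", "y"}, [("farmer", ("x",)), ("donkey", ("y",))])
for resolved in pronoun_meaning("v")(ctx):
    print(resolved)   # one output DRS per antecedent: v = x and v = y
```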
Compositionality and representationalism are issues that have produced great disagreement amongst semanticists and philosophers of language, though less so nowadays than they did in the 1980s. The methodological view of compositionality has reduced what appeared once to be a substantive argument to a matter of taste, and as regards DRT it has been shown that people who care about compositionality are free to adopt a variant of the original system with many of its features, but none of its non-compositionality. Similarly, as regards representationalism, theorists can now adopt a take-it-or-leave-it approach, unless they specifically want to make claims about the status of DRSs as mental representations, e.g. as part of a theory of attitudes and/or attitude reports. And here perhaps common sense has prevailed. For from a psychological perspective it is surprising that representationalism would ever have raised much ire: if we adopt a cognitivist standpoint, and view DRT as a (somewhat abstract) theory of the psychology of interpretation, then its representationalism wouldn't be particularly controversial. The prevailing psychological winds encourage a view of thought as computation over representations of some sort. For most psychologists, the controversy is not over whether there exist such representations, but over what the representations are like.
7. Further reading
Introductions
Kamp and Reyle (1993), Kamp (1995), Kamp et al. (to appear), Blackburn and Bos (2005), van Eijck (to appear), van Eijck and Kamp (1997).
Donkey anaphora
Heim (1990), Evans (1980), Neale (1990), Kanazawa (1994), Geurts (2002), Krifka (1996).
Presupposition
van der Sandt (1992), Geurts (1999), Beaver (2002), Beaver and Zeevat (to appear), Kamp (2001), Kamp and Roßdeutscher (1994), Kamp et al. (2005), Krahmer (1998), Blackburn and Bos (2005).
Tense and aspect
Kamp and Reyle (1993), Kamp et al. (to appear), Partee (1984).
Quantification and plurality
Kamp and Reyle (1993).
Attitude reports
Asher (1986, 1989), Zeevat (1996), Kamp (1990), Geurts (1999).
Discourse structure
Asher and Lascarides (2003), van Leusen (2004).
Inference systems for DRT
Kamp and Reyle (1996), Saurer (1993).
Bibliography
- Asher, N. 1986. “Belief in Discourse Representation Theory”. Journal of Philosophical Logic 15: 127-189.
- Asher, N. 1989. “Belief, acceptance and belief reports”. Canadian Journal of Philosophy 19: 321-361.
- Asher, N. and A. Lascarides. 2003. Logics of Conversation. Cambridge: Cambridge University Press.
- Barwise, J. 1987. “Noun phrases, generalized quantifiers and anaphora”. In P. Gärdenfors (ed.), Generalized Quantifiers: Linguistic and Logical Approaches, Reidel, Dordrecht, 1-30.
- Beaver, D. 1997. “Presupposition”. In J. van Benthem and A. ter Meulen (eds.), The Handbook of Logic and Language, Elsevier, pp. 939-1008.
- Beaver, D. 2001. Presupposition and Assertion in Dynamic Semantics, CSLI Publications, Stanford.
- Beaver, D. 2002. “Presupposition in DRT”. In D. Beaver, L. Casillas-Martinez, B. Clark and S. Kaufmann (eds.), The Construction of Meaning, CSLI Publications.
- Beaver, D. and H. Zeevat, forthcoming. “Accommodation”. In Ramchand and Reiss (eds.), The Oxford Handbook of Linguistic Interfaces, OUP.
- Blackburn, P. and J. Bos. 2005. Representation and Inference for Natural Language. A First Course in Computational Semantics, CSLI Publications
- Chierchia, G. 1994. “Intensionality and context change”. Journal of Logic, Language and Information, 141-168.
- Chomsky, N. 1986. Barriers, Linguistic Inquiry Monograph 13, Cambridge: MIT Press.
- Dever, J. 1999. “Compositionality as methodology”. Linguistics and Philosophy 22: 311-326.
- Evans, G. 1980. “Pronouns”. Linguistic Inquiry 11. Pp. 337-436.
- Fauconnier, G. 1984. Espaces Mentaux. Paris: Editions de Minuit.
- Fauconnier, G. 1985. Mental Spaces: Aspects of Meaning Construction in Natural Language. Cambridge: MIT Press.
- Geach, P.T. 1962. Reference and Generality. Cornell University Press, Ithaca, NY. Second edition 1968.
- Geurts, B. 1999. Presuppositions and Pronouns. Elsevier, Oxford.
- Geurts, B. 2002. “Donkey business”. Linguistics and Philosophy 25: 129-156.
- Groenendijk, J. and M. Stokhof 1989. “Dynamic Montague Grammar”. In Kálmán, L. and L. Pólos, eds., Logic and Language. Akadémiai, Budapest.
- Groenendijk, J. and M. Stokhof 1991. “Dynamic Predicate Logic”. Linguistics and Philosophy 14: 39-100.
- Heim, I. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. thesis, University of Massachusetts, Amherst.
- Heim, I. 1990. “E-type pronouns and donkey anaphora”. Linguistics and Philosophy 13: 137-178.
- Hintikka, J. 1962. Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press.
- Janssen, T. M. V. 1997. “Compositionality”. In J. van Benthem and A. ter Meulen (eds.), Handbook of Logic and Language, Elsevier, Amsterdam and MIT Press, Cambridge, pp. 417-473.
- Kamp, H. 1981. “A theory of truth and semantic representation”. In: J.A.G. Groenendijk, T.M.V. Janssen, and M.B.J. Stokhof (eds.), Formal Methods in the Study of Language. Mathematical Centre Tracts 135, Amsterdam. Pp. 277-322.
- Kamp, H. 1990. “Prolegomena to a structural account of belief and other attitudes”. In C. A. Anderson and J. Owens (eds). Propositional Attitudes: The Role of Content in Logic, Language, and Mind. Stanford: CSLI Publications. 27-90.
- Kamp, H. 1995. “Discourse Representation Theory”, in: J. Verschueren, J.-O. Östman & J. Blommaert (eds.), Handbook of Pragmatics, Benjamins, pp. 253-257.
- Kamp, H. 2001. “The Importance of Presupposition”, in Rohrer, C., Roßdeutscher, A. and H. Kamp, eds., Linguistic Form and its Computation, CSLI Publications.
- Kamp, H. and U. Reyle. 1993. From Discourse to Logic. Kluwer, Dordrecht.
- Kamp, H. and U. Reyle. 1996. “A Calculus for First Order Discourse Representation Structures”. Journal of Logic, Language, and Information pp. 297-348.
- Kamp, H. and A. Roßdeutscher, 1994. “DRS-Construction and Lexically Driven Inference”, Theoretical Linguistics 20, pp. 165-235.
- Kamp, H., to appear. “Computation and Justification of Presuppositions”. In Bras, M. & L. Vieu (eds.) Semantics and Pragmatics of Discourse and Dialogue: Experimenting with current theories, Elsevier.
- Kamp, H., J. van Genabith and U. Reyle, forthcoming. “Discourse Representation Theory”, in Gabbay D. and F. Guenthner, Handbook of Philosophical Logic (second edition), Springer.
- Kanazawa, M. 1994. “Weak vs. strong readings of donkey sentences and monotonicity inference in a dynamic setting”. Linguistics and Philosophy 17: 109-158.
- Krahmer, E. 1998. Presupposition and Anaphora, CSLI Lecture Notes Series, Number 89, CSLI Publications Stanford, CA.
- Krifka, M. 1996. “Pragmatic Strengthening in Plural Predications and Donkey Sentences”, in T. Galloway and J. Spence (eds.), Proceedings from Semantics and Linguistic Theory (SALT) VI, Cornell University, Ithaca, NY, pp. 136-153.
- Kripke, S. ms. “Remarks on the formulation of the projection problem”. Manuscript, Princeton University.
- Montague, R. 1970. “Universal grammar”, Theoria 36, 373-398.
- Muskens, R. 1996. “Combining Montague Semantics and Discourse Representation”, Linguistics and Philosophy, 19:143-186.
- Neale, Stephen. 1990. Descriptions, MIT Press, Cambridge, MA.
- Partee, B. H. 1973. “Some Structural Analogies between Tenses and Pronouns in English”, The Journal of Philosophy 70:601-609.
- Partee, B. H. 1984. “Nominal and Temporal Anaphora”, Linguistics and Philosophy 7.3. Pp. 243-286.
- Rooth, M. 1987. “Noun Phrase Interpretation in Montague Grammar, File Change Semantics, and Situation Semantics”. In P. Gärdenfors (ed.), Generalized Quantifiers: Linguistic and Logical Approaches, Reidel, Dordrecht, 237-268.
- Saurer, W. 1993. “A natural deduction system for discourse representation theory”. Journal of Philosophical Logic, 22:249-302.
- Stalnaker, R. 1974. “Pragmatic Presuppositions”. In Milton K. Munitz and Peter K. Unger, eds., Semantics and Philosophy. New York: New York University Press.
- Stalnaker, R. 1998. “On the representation of context”, Journal of Logic, Language and Information 7(1):3-19.
- Thomason, R.H. 1974. Formal philosophy: selected papers of Richard Montague. Yale University Press, New Haven.
- van der Sandt, R.A. 1992. “Presupposition projection as anaphora resolution”. Journal of Semantics 9: 333-377.
- van Eijck, J., forthcoming. “Discourse representation theory”. In: K. Brown (ed.), Encyclopedia of Language and Linguistics (2nd edition), Elsevier.
- van Eijck, J. and H. Kamp. 1997. “Representing discourse in context”. In J. van Benthem and A. ter Meulen (eds.), Handbook of Logic and Language. Amsterdam: Elsevier Science, 179-237.
- van Leusen, N. 2004. “Incompatibility in context: A diagnosis of correction”, Journal of Semantics 21(4).
- Zeevat, H. 1984. “Belief”. In Landman F. and F. Veltman (eds.), Varieties of Formal Semantics, Foris, Dordrecht. Pp. 405-425.
- Zeevat, H. 1989a. “Realism and definiteness”. In: G. Chierchia, B. H. Partee, and R. Turner (eds.), Properties, Types and Meaning, Vol. 2. Kluwer, Dordrecht. Pp. 269-297.
- Zeevat, H. 1989b. “A compositional approach to Discourse Representation Theory”. Linguistics and Philosophy 12: 95-131.
- Zeevat, H. 1992. “Presupposition and Accommodation in Update Semantics”, Journal of Semantics 9: 379-412.
- Zeevat, H. 1996. “A Neoclassical Analysis of Belief Sentences”. In: Proceedings of the 10th Amsterdam Colloquium. ILLC, University of Amsterdam, part III, p. 723-742.
Other Internet Resources
[Please contact the authors with suggestions.]
Related Entries
anaphora | descriptions | indexicals | situations: in natural language semantics
Acknowledgments
We would like to thank the editors and reviewers of the Stanford Encyclopedia of Philosophy for helpful feedback, and Emilie Destruel for help with formatting.