Indicative Conditionals

First published Wed Aug 8, 2001; substantive revision Thu Oct 2, 2014

Take a sentence in the indicative mood, suitable for making a statement: “We'll be home by ten”, “Tom cooked the dinner”. Attach a conditional clause to it, and you have a sentence which makes a conditional statement: “We'll be home by ten if the train is on time”, “If Mary didn't cook the dinner, Tom cooked it”. A conditional sentence “If A, C” or “C if A” thus has two contained sentences or sentence-like clauses. A is called the antecedent, C the consequent. If you understand A and C, and you have mastered the conditional construction (as we all do at an early age), you understand “If A, C”. What does “if” mean? Consulting the dictionary yields “on condition that; provided that; supposing that”. These are adequate synonyms. But we want more than synonyms. A theory of conditionals aims to give an account of the conditional construction which explains when conditional judgements are acceptable, which inferences involving conditionals are good inferences, and why this linguistic construction is so important. Despite intensive work of great ingenuity, this remains a highly controversial subject.

1. Introduction

First let us delimit our field. The examples with which we began are traditionally called “indicative conditionals”. There are also “subjunctive” or “counterfactual” conditionals like “Tom would have cooked the dinner if Mary had not done so”, “We would have been home by ten if the train had been on time”. Counterfactuals will be the subject of a separate entry, and theories addressing them will not be discussed here. That there is some difference between indicatives and counterfactuals is shown by pairs of examples like “If Oswald didn't kill Kennedy, someone else did” and “If Oswald hadn't killed Kennedy, someone else would have”: you can accept the first yet reject the second (Adams (1970)). That there is not a huge difference between them is shown by examples like the following: “Don't go in there”, I say, “If you go in you will get hurt”. You look sceptical but stay outside, when there is a large crash as the roof collapses. “You see”, I say, “if you had gone in you would have got hurt. I told you so.”

It is controversial how best to classify conditionals. According to some theorists, the forward-looking “indicatives” (those with a “will” in the main clause) belong with the “subjunctives” (those with a “would” in the main clause), and not with the other “indicatives”. (See Gibbard (1981, pp. 222–6), Dudman (1984, 1988), Bennett (1988). Bennett (1995) changed his mind. Jackson (1990) defends the traditional view.) The easy transition from typical “wills” to “woulds” is indeed a datum to be explained. Still, straightforward statements about the past, present or future, to which a conditional clause is attached — the traditional class of indicative conditionals — do (in my view) constitute a single semantic kind. The theories to be discussed do not fare better or worse when restricted to a particular subspecies.

As well as conditional statements, there are conditional commands, promises, offers, questions, etc.. As well as conditional beliefs, there are conditional desires, hopes, fears, etc.. Our focus will be on conditional statements and what they express — conditional beliefs; but we will consider which of the theories we have examined extends most naturally to these other kinds of conditional.

Three kinds of theory will be discussed. In §2 we compare truth-functional and non-truth-functional accounts of the truth conditions of conditionals. In §3 we examine what is called the suppositional theory: that conditional judgements essentially involve suppositions. On development, it appears to be incompatible with construing conditionals as statements with truth conditions. §4 looks at some responses from advocates of truth conditions. In §5 we consider a wider variety of conditional speech acts and propositional attitudes.

Where we need to distinguish between different interpretations, we write “A ⊃ B” for the truth-functional conditional, “A → B” for a non-truth-functional conditional and “A ⇒ B” for the conditional as interpreted by the suppositional theory; and for brevity we call protagonists of the three theories Hook, Arrow and Supp, respectively. We use “~” for negation.

2. Truth Conditions for Indicative Conditionals

2.1 Two Kinds of Truth Condition

The generally most fruitful, and time-honoured, approach to specifying the meaning of a complex sentence in terms of the meanings of its parts, is to specify the truth conditions of the complex sentence, in terms of the truth conditions of its parts. A semantics of this kind yields an account of the validity of arguments involving the complex sentence, given the conception of validity as necessary preservation of truth. Throughout this section we assume that this approach to conditionals is correct. Let A and B be two sentences such as “Ann is in Paris” and “Bob is in Paris”. Our question will be: are the truth conditions of “If A, B” of the simple, extensional, truth-functional kind, like those of “A and B”, “A or B” and “It is not the case that A”? That is, do the truth values of A and of B determine the truth value of “If A, B”? Or are they non-truth-functional, like those of “A because B”, “A before B”, “It is possible that A”? That is, are they such that the truth values of A and B may, in some cases, leave open the truth value of “If A, B”?

The truth-functional theory of the conditional was integral to Frege's new logic (1879). It was taken up enthusiastically by Russell (who called it “material implication”), Wittgenstein in the Tractatus, and the logical positivists, and it is now found in every logic text. It is the first theory of conditionals which students encounter. Typically, it does not strike students as obviously correct. It is logic's first surprise. Yet, as the textbooks testify, it does a creditable job in many circumstances. And it has many defenders. It is a strikingly simple theory: “If A, B” is false when A is true and B is false. In all other cases, “If A, B” is true. It is thus equivalent to “~(A&~B)” and to “~A or B”. “A ⊃ B” has, by stipulation, these truth conditions.

If “if” is truth-functional, this is the right truth function to assign to it: of the sixteen possible truth-functions of A and B, it is the only serious candidate. First, it is uncontroversial that when A is true and B is false, “If A, B” is false. A basic rule of inference is modus ponens: from “If A, B” and A, we can infer B. If it were possible to have A true, B false and “If A, B” true, this inference would be invalid. Second, it is uncontroversial that “If A, B” is sometimes true when A and B are respectively (true, true), or (false, true), or (false, false). “If it's a square, it has four sides”, said of an unseen geometric figure, is true, whether the figure is a square, a rectangle or a triangle. Assuming truth-functionality — that the truth value of the conditional is determined by the truth values of its parts — it follows that a conditional is always true when its components have these combinations of truth values.
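
To make this concrete, here is a small Python sketch (an illustration added here, not part of the argument's original presentation) that enumerates all sixteen binary truth functions and keeps only those meeting the constraints just described: false at (T, F), true at the other three combinations. Exactly one survives, and it coincides with “~A or B”.

    from itertools import product

    rows = list(product([True, False], repeat=2))      # the four (A, B) truth-table lines

    survivors = []
    for values in product([True, False], repeat=4):    # one of the sixteen truth functions
        table = dict(zip(rows, values))
        if (not table[(True, False)]                   # false when A is true and B is false
                and table[(True, True)]                # true in the remaining three cases,
                and table[(False, True)]               # given truth-functionality
                and table[(False, False)]):
            survivors.append(table)

    print(len(survivors))                                                        # 1
    print(all(t[(a, b)] == ((not a) or b) for t in survivors for a, b in rows))  # True: it is "~A or B"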

Non-truth-functional accounts agree that “If A, B” is false when A is true and B is false; and they agree that the conditional is sometimes true for the other three combinations of truth-values for the components; but they deny that the conditional is always true in each of these three cases. Some agree with the truth-functionalist that when A and B are both true, “If A, B” must be true. Some do not, demanding a further relation between the facts that A and that B (see Read (1995)). This dispute need not concern us, as the arguments which follow depend only on the feature on which non-truth-functionalists agree: that when A is false, “If A, B” may be either true or false. For instance, I say (*) “If you touch that wire, you will get an electric shock”. You don't touch it. Was my remark true or false? According to the non-truth-functionalist, it depends on whether the wire is live or dead, on whether you are insulated, and so forth. Robert Stalnaker's (1968) account is of this type: consider a possible situation in which you touch the wire, and which otherwise differs minimally from the actual situation. (*) is true (false) according to whether or not you get a shock in that possible situation.

Let A and B be two logically independent propositions. The four lines below represent the four incompatible logical possibilities for the truth values of A and B. “If A, B”, “If ~A, B” and “If A, ~B” are interpreted truth-functionally in columns (i)-(iii), and non-truth-functionally (when their antecedents are false) in columns (iv)-(vi). The non-truth-functional interpretation we write “A → B”. “T/F” means both truth values are possible for the corresponding assignment of truth values to A and B. For instance, line 4, column (iv), represents two possibilities for A, B, A → B: (F, F, T) and (F, F, F).

Truth-Functional Interpretation
         (i)      (ii)      (iii)
   A  B  A ⊃ B   ~A ⊃ B   A ⊃ ~B
1. T  T    T        T        F
2. T  F    F        T        T
3. F  T    T        T        T
4. F  F    T        F        T

Non-Truth-Functional Interpretation
         (iv)     (v)       (vi)
   A  B  A → B   ~A → B   A → ~B
1. T  T    T       T/F       F
2. T  F    F       T/F       T
3. F  T   T/F       T       T/F
4. F  F   T/F       F       T/F

2.2 Arguments for Truth-Functionality

The main argument points to the fact that minimal knowledge that the truth-functional truth condition is satisfied is enough for knowledge that if A, B. Suppose there are two balls in a bag, labelled x and y. All you know about their colour is that at least one of them is red. That's enough to know that if x isn't red, y is red. Or: all you know is that they are not both red. That's enough to know that if x is red, y is not red.

Suppose you start off with no information about which of the four possible combinations of truth values for A and B obtains. You then acquire compelling reason to think that either A or B is true. You don't have any stronger belief about the matter. In particular, you have no firm belief as to whether A is true or not. You have ruled out line 4. The other possibilities remain open. Then, intuitively, you are justified in inferring that if ~A, B. Look at the possibilities for A and B on the left. You have eliminated the possibility that both A and B are false. So if A is false, only one possibility remains: B is true.

The truth-functionalist (call him Hook) gets this right. Look at column (ii). Eliminate line 4 and line 4 only, and you have eliminated the only possibility in which “~A ⊃ B” is false. You know enough to conclude that “~A ⊃ B” is true.

The non-truth-functionalist (call her Arrow) gets this wrong. Look at column (v). Eliminate line 4 and line 4 only, and some possibility of falsity remains in other cases which have not been ruled out. By eliminating just line 4, you do not thereby eliminate these further possibilities, incompatible with line 4, in which “~A → B” is false.

The same point can be made with negated conjunctions. You discover for sure that ~(A&B), but nothing stronger than that. In particular, you don't know whether A. You rule out line 1, nothing more. You may justifiably infer that if A, ~B. Hook gets this right. In column (iii), if we eliminate line 1, we are left only with cases in which “A ⊃ ~B” is true. Arrow gets this wrong. In column (vi), eliminating line 1 leaves open the possibility that “A → ~B” is false.

The same argument renders compelling the thought that if we eliminate just A&~B, nothing stronger, i.e., we don't eliminate A, then we have sufficient reason to conclude that if A, B.

Here is a second argument in favour of Hook, in the style of Natural Deduction. The rule of Conditional Proof (CP) says that if Z follows from premises X and Y, then “If Y, Z” follows from premise X. Now the three premises ~(A&B), A and B entail a contradiction. So, by Reductio Ad Absurdum, from ~(A&B) and A, we can conclude ~B. So by CP, ~(A&B) entails “If A, ~B”. Substitute “~C” for B, and we have a proof of “If A, then ~~C” from “~(A&~C)”. And provided we also accept Double Negation Elimination, we can derive “If A, then C” from “~(A&~C)”.

Conditional Proof seems sound: “From X and Y, it follows that Z. So from X it follows that if Y, Z”. Yet for no reading of “if” which is stronger than the truth-functional reading is CP valid — at least this is so if we treat “&” and “~” in the classical way and accept the validity of the inference: (I) ~(A&~B); A; therefore B. Suppose CP is valid for some interpretation of “If A, B”. Apply CP to (I), and we get ~(A&~B); therefore if A, B, i.e., A ⊃ B entails if A, B.

2.3 Arguments Against Truth-Functionality

The best-known objection to the truth-functional account, one of the “paradoxes of material implication”, is that according to Hook, the falsity of A is sufficient for the truth of “If A, B”. Look at the last two lines of column (i). In every possible situation in which A is false, “A ⊃ B” is true. Can it be right that the falsity of “She touched the wire” entails the truth of “If she touched the wire she got a shock”?

Hook might respond as follows. How do we test our intuitions about the validity of an inference? The direct way is to imagine that we know for sure that the premise is true, and to consider what we would then think about the conclusion. Now when we know for sure that ~A, we have no use for thoughts beginning “If A, ...”. When you know for sure that Harry didn't do it, you don't go in for “If Harry did it ...” thoughts or remarks. In this circumstance conditionals have no role to play, and we have no practice in assessing them. The direct intuitive test is, therefore, silent on whether “If A, B” follows from ~A. If our smoothest, simplest, generally satisfactory theory has the consequence that it does follow, perhaps we should learn to live with that consequence.

There may, of course, be further consequences of this feature of Hook's theory which jar with intuition. That needs investigating. But, Hook may add, even if we come to the conclusion that “⊃” does not match perfectly our natural-language “if”, it comes close, and it has the virtues of simplicity and clarity. We have seen that rival theories also have counterintuitive consequences. Natural language is a fluid affair, and we cannot expect our theories to achieve better than approximate fit. Perhaps, in the interests of precision and clarity, in serious reasoning we should replace the elusive “if” with its neat, close relative, ⊃ .

This was no doubt Frege's attitude. Frege's primary concern was to construct a system of logic, formulated in an idealized language, which was adequate for mathematical reasoning. If “A ⊃ B” doesn't translate perfectly our natural-language “If A, B”, but plays its intended role, so much the worse for natural language.

For the purpose of doing mathematics, Frege's judgement was probably correct. The main defects of ⊃ don't show up in mathematics. There are some peculiarities, but as long as we are aware of them, they can be lived with. And arguably, the gain in simplicity and clarity more than offsets the oddities.

The oddities are harder to tolerate when we consider conditional judgements about empirical matters. The difference is this: in thinking about the empirical world, we often accept and reject propositions with degrees of confidence less than certainty. “I think, but am not sure, that A” plays no central role in mathematical thinking. We can, perhaps, ignore as unimportant the use of indicative conditionals in circumstances in which we are certain that the antecedent is false. But we cannot ignore our use of conditionals whose antecedent we think is likely to be false. We use them often, accepting some, rejecting others. “I think I won't need to get in touch, but if I do, I shall need a phone number”, you say as your partner is about to go away; not “If I do I'll manage by telepathy”. “I think John spoke to Mary; if he didn't he wrote to her”; not “If he didn't he shot her”. Hook's theory has the unhappy consequence that all conditionals with unlikely antecedents are likely to be true. To think it likely that ~A is to think it likely that a sufficient condition for the truth of “A ⊃ B” obtains. Take someone who thinks that the Republicans won't win the election (~R), and who rejects the thought that if they do win, they will double income tax (D). According to Hook, this person has grossly inconsistent opinions. For if she thinks it's likely that ~R, she must think it likely that at least one of the propositions, {~R, D} is true. But that is just to think it likely that R ⊃ D. (Put the other way round, to reject R ⊃ D is to accept R&~D; for this is the only case in which R ⊃ D is false. How can someone accept R&~D yet reject R?) Not only does Hook's theory fit badly the patterns of thought of competent, intelligent people. It cannot be claimed that we would be better off with ⊃. On the contrary, we would be intellectually disabled: we would not have the power to discriminate between believable and unbelievable conditionals whose antecedent we think is likely to be false.

Arrow does not have this problem. Her theory is designed to avoid it, by allowing that “A → B” may be false when A is false.

The other paradox of material implication is that according to Hook all conditionals with true consequents are true: from B it follows that A ⊃ B. This is perhaps less obviously unacceptable: if I'm sure that B, and treat A as an epistemic possibility, I must be sure that if A, B. Again the problem becomes vivid when we consider the case when I'm only nearly sure, but not quite sure, that B. I think B may be false, and will be false if certain, in my view unlikely, circumstances obtain. For example, I think Sue is giving a lecture right now. I don't think that if she was seriously injured on her way to work, she is giving a lecture right now. I reject that conditional. But on Hook's account, the conditional is false only if the consequent is false. I think the consequent is true: I think a sufficient condition for the truth of the conditional obtains.

2.4 Grice's Pragmatic Defence of Truth-Functionality

H. P. Grice famously defended the truth-functional account, in his William James lectures, “Logic and Conversation”, delivered in 1967 (see Grice (1989); see also Thomson (1990)). There are many ways of speaking the truth yet misleading your audience, given the standard to which you are expected to conform in conversational exchange. One way is to say something weaker than some other relevant thing you are in a position to say. Consider disjunctions. I am asked where John is. I am sure that he is in the pub, and know that he never goes near libraries. Inclined to be unhelpful but not wishing to lie, I say “He is either in the pub or in the library”. My hearer naturally assumes that this is the most precise information I am in a position to give, and also concludes, from the truth (let us assume) of what I told him, that if he's not in the pub he's in the library. The conditional, like the disjunction, according to Grice, is true if he's in the pub, but misleadingly asserted on that ground.

Another example, from David Lewis (1976, p. 143): “You won't eat those and live”, I say of some wholesome and delicious mushrooms – knowing that you will now leave them alone, deferring to my expertise. I told no lie — for indeed you don't eat them — but of course I misled you.

Grice drew attention, then, to situations in which a person is justified in believing a proposition, which would nevertheless be an unreasonable thing for the person to say, in normal circumstances. His lesson was salutary and important. He is right, I think, about disjunctions and negated conjunctions. Believing that John is in the pub, I can't consistently disbelieve “He's either in the pub or the library”; if I have any epistemic attitude to this proposition, it should be one of belief, however inappropriate for me to assert it. Similarly for “You won't eat those and live” when I know you won't eat them. But it is implausible that the difficulties with the truth-functional conditional can be explained away in terms of what is an inappropriate conversational remark. They arise at the level of belief. Thinking that John is in the pub, I may without irrationality disbelieve “If he's not in the pub he's in the library”. Thinking you won't eat the mushrooms, I may without irrationality reject “If you eat them you will die”. As facts about the norms to which people defer, these claims can be tested. A good enough test is to take a co-operative person, who understands that you are merely interested in her opinions about the propositions you put to her, as opposed to what would be a reasonable remark to make, and note which conditionals she assents to. Are we really to brand as illogical someone who dissents from both “The Republicans will win” and “If the Republicans win, income tax will double”?

The Gricean phenomenon is a real one. On anyone's account of conditionals, there will be circumstances when a conditional is justifiably believed, but is liable to mislead if stated. For instance, I believe that the match will be cancelled, because all the players have ’flu. I believe that whether or not it rains, the match will be cancelled: if it rains, the match will be cancelled, and if it doesn't rain, the match will be cancelled. Someone asks me whether the match will go ahead. I say, “If it rains, the match will be cancelled”. I say something I believe, but I mislead my audience — why should I say that, when I think it will be cancelled whether or not it rains? This does not demonstrate that Hook is correct. Although I believe that the match will be cancelled, I don't believe that if all the players make a very speedy recovery, the match will be cancelled.

2.5 Compounds of Conditionals: Problems for Hook and Arrow

~(A ⊃ B) is equivalent to A&~B. Intuitively, you may safely say, of an unseen geometric figure, “It's not the case that if it's a pentagon, it has six sides”. But by Hook's lights, you may well be wrong; for it may not be a pentagon, and in that case it is true that if it's a pentagon, it has six sides.

Another example, due to Gibbard (1981, pp. 235–6): of a glass that had been held a foot above the floor, you say (having left the scene) “If it broke if it was dropped, it was fragile”. Intuitively this seems reasonable. But by Hook's lights, if the glass was not dropped, and was not fragile, the conditional has a true (conditional) antecedent and false consequent, and is hence false.

Grice's strategy was to explain why we don't assert certain conditionals which (by Hook's lights) we have reason to believe true. In the above two cases, the problem is reversed: there are compounds of conditionals which we confidently assert and accept which, by Hook's lights, we do not have reason to believe true.

The above examples are not a problem for Arrow. But other cases of embedded conditionals count in the opposite direction. Here are two sentence forms which are, intuitively, equivalent:

(i) If (A&B), C.
(ii) If A, then if B, C.

(Following Vann McGee (1985) I'll call the principle that (i) and (ii) are equivalent the Import-Export Principle, or “Import-Export” for short.) Try any example: “If Mary comes then if John doesn't have to leave early we will play Bridge”; “If Mary comes and John doesn't have to leave early we will play Bridge”. “If they were outside and it rained, they got wet”; “If they were outside, then if it rained, they got wet”. For Hook, Import-Export holds. (Exercise: do a truth table, or construct a proof.) Gibbard (1981, pp. 234–5) has proved that for no conditional with truth conditions stronger than ⊃ does Import-Export hold. Assume Import-Export holds for some reading of “if”. The key to the proof is to consider the formula

(1) If (A ⊃ B) then (if A, B).

By Import-Export, (1) is equivalent to

(2) If ((A ⊃ B) & A) then B.

The antecedent of (2) entails its consequent. So (2) is a logical truth. So by Import-Export, (1) is a logical truth. On any reading of “if”, “if A, B” entails (A ⊃ B). So (1) entails

(3) (A ⊃ B) ⊃ (if A, B).

So (3) is a logical truth. That is, there is no possible situation in which its antecedent (A ⊃ B) is true and its consequent (if A, B) is false. That is, (A ⊃ B) entails “If A, B”.
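
For readers who want to check the earlier claims mechanically, here is a brief Python sketch (mine, not from the text) confirming by truth table that Import-Export holds for Hook's conditional, as the exercise above invites: “(A&B) ⊃ C” and “A ⊃ (B ⊃ C)” agree on every line.

    from itertools import product

    def hook(p, q):          # the truth-functional conditional p ⊃ q
        return (not p) or q

    # Import-Export for Hook: (A & B) ⊃ C and A ⊃ (B ⊃ C) agree in every row
    print(all(hook(a and b, c) == hook(a, hook(b, c))
              for a, b, c in product([True, False], repeat=3)))   # True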

Neither kind of truth condition has proved entirely satisfactory. We still have to consider Jackson's defence of Hook, and Stalnaker's response to the problem about non-truth-functional truth conditions raised in §2.2. These are deferred to §4, because they depend on the considerations developed in §3.

3. The Suppositional Theory

3.1 Conditional Belief and Conditional Probability

Let us put truth conditions aside for a while, and ask what it is to believe, or to be more or less certain, that B if A — that John cooked the dinner if Mary didn't, that you will recover if you have the operation, and so forth. How do you make such a judgement? You suppose (assume, hypothesise) that A, and make a hypothetical judgement about B, under the supposition that A, in the light of your other beliefs. Frank Ramsey put it like this:

If two people are arguing “If p, will q?” and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge, and arguing on that basis about q; ... they are fixing their degrees of belief in q given p (1929, p. 247).

A suppositional theory was advanced by J. L. Mackie (1973, chapter 4). See also David Barnett (2006). Peter Gärdenfors's work (1986, 1988) could also come under this heading. But the most fruitful development of the idea (in my view) takes seriously the last part of the above quote from Ramsey, and emphasises the fact that conditionals can be accepted with different degrees of closeness to certainty. Ernest Adams (1965, 1966, 1975) has developed such a theory.

When we are neither certain that B nor certain that ~B, there remains a range of epistemic attitudes we may have to B: we may be nearly certain that B, think B more likely than not, etc.. Similarly, we may be certain, nearly certain, etc. that B given the supposition that A. Make the idealizing assumption that degrees of closeness to certainty can be quantified: 100% certain, 90% certain, etc.; and we can turn to probability theory for what Ramsey called the “logic of partial belief”. There we find a well-established, indispensable concept, “the conditional probability of B given A”. It is to this notion that Ramsey refers by the phrase “degrees of belief in q given p”.

It is, at first sight, rather curious that the best-developed and most illuminating suppositional theory should place emphasis on uncertain conditional judgements. If we knew the truth conditions of conditionals, we would handle uncertainty about conditionals in terms of a general theory of what it is to be uncertain of the truth of a proposition. But there is no consensus about the truth conditions of conditionals. It happens that when we turn to the theory of uncertain judgements, we find a concept of conditionality in use. It is worth seeing what we can learn from it.

The notion of conditional probability entered probability theory at an early stage because it was needed to compute the probability of a conjunction. Thomas Bayes (1763) wrote:

The probability that two ... events will both happen is ... the probability of the first [multiplied by] the probability of the second on the supposition that the first happens [my emphasis].

A simple example: a ball is picked at random. 70% of the balls are red (so the probability that a red ball is picked is 70%). 60% of the red balls have a black spot (so the probability that a ball with a black spot is picked, on the supposition that a red ball is picked, is 60%). The probability that a red ball with a black spot is picked is 60% of 70%, i.e. 42%.

Ramsey, arguing that “degrees of belief” should conform to probability theory, stated the same “fundamental law of partial belief”:

Degree of belief in (p and q) = degree of belief in p × degree of belief in q given p. (1926, p. 77)

For example, you are about 50% certain that the test will be on conditionals, and about 80% certain that you will pass, on the supposition that it is on conditionals. So you are about 40% certain that the test will be on conditionals and you will pass.

Accepting Ramsey's suggestion that “if”, “given that”, “on the supposition that” come to the same thing, writing “p(B)” for “degree of belief in B”, and “pA(B)” for “degree of belief in B given A”, and rearranging the basic law, we have:

p(B if A) = pA(B) = p(A&B)/p(A), provided p(A) is not 0.
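
A minimal Python sketch (added for illustration; the numbers are those of the ball and exam examples above) of this rearranged law:

    def conditional_probability(p_a_and_b, p_a):
        """p_A(B) = p(A & B) / p(A), defined only when p(A) is not 0."""
        if p_a == 0:
            raise ValueError("undefined: p(A) = 0")
        return p_a_and_b / p_a

    # Bayes's ball example above: p(red) = 0.70, p(red & spotted) = 0.42
    print(conditional_probability(0.42, 0.70))   # ≈ 0.6

    # Ramsey's law run forwards, with the exam example: 0.5 × 0.8
    print(0.5 * 0.8)                             # 0.4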

Call a set of mutually exclusive and jointly exhaustive propositions a partition. The lines of a truth table constitute a partition. One's degrees of belief in the members of a partition, idealized as precise, should sum to 100%. That is all there is to the claim that degrees of belief should have the structure of probabilities. Consider a partition of the form {A&B, A&~B, ~A}. Suppose someone X thinks it 50% likely that ~A (hence 50% likely that A), 40% likely that A&B, and 10% likely that A&~B. Think of this distribution as displayed geometrically, as follows. Draw a long narrow horizontal rectangle. Divide it in half by a vertical line. Write “~A” in the right-hand half. Divide the left-hand half with another vertical line, in the ratio 4:1, with the larger part on the left. Write “A&B” and “A&~B” in the larger and smaller cells respectively.

   [ A&B (40%) | A&~B (10%) | ~A (50%) ]

(Note that as {A&B, A&~B, ~A} and {A, ~A} are both partitions, it follows that p(A) = p(A&B) + p(A&~B).)

How does X evaluate “If A, B”? She assumes that A, that is, hypothetically eliminates ~A. In the part of the partition that remains, in which A is true, B is four times as likely as ~B; that is, on the assumption that A, it is four to one that B: p(B if A) is 80%, p(~B if A) is 20%. Equivalently, as A&B is four times as likely as A&~B, p(B if A) is 4/5, or 80%. Equivalently, p(A&B) is 4/5 of p(A). In non-numerical terms: you believe that if A, B to the extent that you think that A&B is nearly as likely as A; or, to the extent that you think A&B is much more likely than A&~B. If you think A&B is as likely as A, you are certain that if A, B. In this case, your p(A&~B) = 0.

Go back to the truth table. You are wondering whether if A, B. Assume A. That is, ignore lines 3 and 4 in which A is false. Ask yourself about the relative probabilities of lines 1 and 2. Suppose you think line 1 is about 100 times more likely than line 2. Then you think it is about 100 to 1 that B if A.

Note: these thought-experiments can only be performed when p(A) is not 0. On this approach, indicative conditionals only have a role when the thinker takes A to be an epistemic possibility. If you take yourself to know for sure that Ann is in Paris, you don't go in for “If Ann is not in Paris ...” thoughts (though of course you can think “If Ann had not been in Paris ...”). In conversation, you can pretend to take something as an epistemic possibility, temporarily, to comply with the epistemic state of the hearer. When playing the sceptic, there are not many limits on what you can, at a pinch, take as an epistemic possibility – as not already ruled out. But there are some limits, as Descartes found. Is there a conditional thought that begins “If I don't exist now ...”?

On Hook's account, to be close to certain that if A, B is to give a high value to p(A ⊃ B). How does p(A ⊃ B) compare with pA(B)? In two special cases, they are equal: first, if p(A&~B) = 0 (and p(A) is not 0), p(A ⊃ B) = pA(B) = 1 (i.e. 100%). Second, if p(A) = 100%, p(A ⊃ B) = pA(B) = p(B). In all other cases, p(A ⊃ B) is greater than pA(B). To see this we need to compare p(A&~B) and p(A&~B)/p(A). Consider again the partition {A&B, A&~B, ~A}. p(A&~B) is a smaller proportion of the whole space than it is of the A-part — the part of the space in which A is true — except in the special cases in which p(A&~B) = 0, or p(~A) = 0. So, except in these special cases, pA(~B) is greater than p(A&~B). Now p(A ⊃ B) = p(~(A&~B)); and p(A&~B) + p(~(A&~B)) = 1. Also pA(B) + pA(~B) = 1. So from pA(~B) > p(A&~B) it follows that p(A ⊃ B) > pA(B).

Hook and the suppositional theorist (call her Supp) come spectacularly apart when p(~A) is high and p(A&B) is much smaller than p(A&~B). Let p(~A) = 90%, p(A&B) = 1%, p(A&~B) = 9%. pA(B) = 10%. p(A ⊃ B) = 91%. For instance, I am 90% certain that Sue won't be offered the job (~O), and think it only 10% likely that she will decline the offer (D) if it is made, that is pO(D) = 10%. p(O ⊃ D) = p(~O or (O&D)) = 91%.
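
The comparison can be checked numerically. The sketch below (my own illustration; the figures are the Sue example just given) computes p(A ⊃ B) and pA(B) over the partition {A&B, A&~B, ~A}, and spot-checks on random distributions that p(A ⊃ B) is never the smaller of the two.

    import random

    def p_hook(p_a_not_b):            # p(A ⊃ B) = p(~(A & ~B)) = 1 − p(A & ~B)
        return 1 - p_a_not_b

    def p_supp(p_ab, p_a_not_b):      # p_A(B) = p(A & B) / p(A)
        return p_ab / (p_ab + p_a_not_b)

    # The Sue example above: p(~O) = 0.9, p(O & D) = 0.01, p(O & ~D) = 0.09
    print(p_hook(0.09))               # ≈ 0.91
    print(p_supp(0.01, 0.09))         # ≈ 0.10

    # Spot check over random distributions on the partition {A&B, A&~B, ~A}:
    # p(A ⊃ B) is never below p_A(B)
    ok = True
    for _ in range(10000):
        cut1, cut2 = sorted(random.random() for _ in range(2))
        p_ab, p_a_not_b = cut1, cut2 - cut1          # the rest of the mass goes to ~A
        if p_ab + p_a_not_b > 0:
            ok = ok and p_hook(p_a_not_b) >= p_supp(p_ab, p_a_not_b) - 1e-12
    print(ok)                         # True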

Now let us compare Hook, Arrow, and Supp with respect to two questions raised in §2.

  • Question 1. You are certain that ~(A&~B), but not certain that ~A. Should you be certain that if A, B?

    Hook: yes. Because “A ⊃ B” is true whenever A&~B is false.

    Supp: yes. Because A&B is as likely as A. pA(B) = 1.

    Arrow: no, not necessarily. For “A → B” may be false when A&~B is false. With just the information that A&~B is false, I should not be certain that if A, B.

  • Question 2. If you think it likely that ~A, might you still think it unlikely that if A, B?

    Hook: no. “A ⊃ B” is true in all the possible situations in which ~A is true. If I think it likely that ~A, I think it likely that a sufficient condition for the truth of “A ⊃ B” obtains. I must, therefore, think it likely that if A, B.

    Supp: yes. We had an example above. That most of my probability goes to ~A leaves open the question whether or not A&B is more probable than A&~B. If p(A&~B) is greater than p(A&B), I think it's unlikely that if A, B. That's compatible with thinking it likely that ~A.

    Arrow: yes. “If A, B” may be false when A is false. And I might well think it likely that that possibility obtains, i.e. unlikely that “If A, B” is true.

Supp has squared the circle: she gets the intuitively right answer to both questions. In this she differs from both Hook and Arrow. Supp's way of assessing conditionals is incompatible with the truth-functional way (they answer Question 2 differently); and incompatible with stronger-than-truth-functional truth conditions (they answer Question 1 differently). It follows that Supp's way of assessing conditionals is incompatible with the claim that conditionals have truth conditions of any kind. pA(B) does not measure the probability of the truth of any proposition. Suppose it did measure the probability of the truth of some proposition A*B. Either A*B is entailed by “A ⊃ B”, or it is not. If it is, it is true whenever ~A is true, and hence cannot be improbable when ~A is probable. That is, it cannot agree with Supp in its answer to Question 2. If A*B is not entailed by “A ⊃ B”, it may be false when ~(A&~B) is true, and hence certainty that ~(A&~B) (in the absence of certainty that ~A) is insufficient for certainty that A*B; it cannot agree with Supp in its answer to Question 1.

To make the point in a slightly different way, let me adopt the following as an expository, heuristic device, a harmless fiction. Imagine a partition as carved into a large finite number of equally-probable chunks, such that the propositions with which we are concerned are true in an exact number of them. The probability of any proposition is the proportion of chunks in which it is true. The probability of B on the supposition that A is the proportion of the A-chunks (the chunks in which A is true) which are B-chunks. With some misgivings, I succumb to the temptation to call these chunks “worlds”: they are equally probable, mutually incompatible and jointly exhaustive epistemic possibilities, enough of them for the propositions with which we are concerned to be true, or false, at each world. The heuristic value is that judgements of probability and conditional probability then translate into statements about proportions.

Although Supp and Hook give the same answer to Question 1, their reasons are different. Supp answers “yes” not because a proposition, A*B, is true whenever A&~B is false; but because B is true in all the “worlds” which matter for the assessment of “If A, B”: the A-worlds. Although Supp and Arrow give the same answer to Question 2, their reasons are different. Supp answers “yes”, not because a proposition A*B may be false when A is false; but because the fact that most worlds are ~A-worlds is irrelevant to whether most of the A-worlds are B-worlds. To judge that B is true on the supposition that A is true, it turns out, is not to judge that something-or-other, A*B, is true.

By a different argument, David Lewis (1976) was the first to prove this remarkable result: there is no proposition A*B such that, in all probability distributions, p(A*B) = pA(B). A conditional probability does not measure the probability of the truth of any proposition. If a conditional has truth conditions, one should believe it to the extent that one thinks it is probably true. If Supp is correct, that one believes “If A, B” to the extent that one thinks it probable that B on the supposition that A, then this is not equivalent to believing some proposition to be probably true. Hence, it appears, if Supp is right, conditionals shouldn't be construed as having truth conditions at all. A conditional judgement involves two propositions, which play different roles. One is the content of a supposition. The other is the content of a judgement made under that supposition. They do not combine to yield a single proposition which is judged to be likely to be true just when the second is judged likely to be true on the supposition of the first.
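
The following toy Python sketch gestures at this result without reproducing Lewis's argument: in a small finite space of “worlds” (the worlds, and the choice of A and B, are arbitrary stipulations of mine), no single set of worlds has a probability equal to pA(B) across even a handful of different belief states. This is only an illustration of the flavour of the result, not a proof.

    import itertools, random

    worlds = list(range(6))
    A = {0, 1, 2}        # an arbitrary choice of A-worlds
    B = {0, 1}           # an arbitrary choice of B-worlds

    def random_belief_state():
        weights = [random.random() for _ in worlds]
        total = sum(weights)
        return [w / total for w in weights]

    def prob(p, proposition):
        return sum(p[w] for w in proposition)

    states = [random_belief_state() for _ in range(3)]

    matches = []
    for bits in itertools.product([0, 1], repeat=len(worlds)):   # every set of worlds
        candidate = {w for w in worlds if bits[w]}
        if all(abs(prob(p, candidate) - prob(p, A & B) / prob(p, A)) < 1e-9 for p in states):
            matches.append(candidate)

    print(matches)   # almost surely []: no candidate tracks p_A(B) across even these three states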

Note: ways of restoring truth conditions, compatible with Supp's thesis, are considered in §4.

3.2 Validity

Ernest Adams, in two articles (1965, 1966) and a subsequent book (1975), gave a theory of the validity of arguments involving conditionals as construed by Supp. He taught us something important about classically valid arguments as well: that they are, in a special sense to be made precise, probability-preserving. This property can be generalized to apply to arguments with conditionals. The valid ones are those which, in the special sense, preserve probability or conditional probability.

First consider classically valid (that is, necessarily truth-preserving) arguments which don't involve conditionals. We use them in arguing from contingent premises about which we are often less than completely certain. The question arises: how certain can we be of the conclusion of the argument, given that we think, but are not sure, that the premises are true? Call the improbability of a statement one minus its probability. Adams showed this: if (and only if) an argument is valid, then in no probability distribution does the improbability of its conclusion exceed the sum of the improbabilities of its premises. Call this the Probability Preservation Principle (PPP).

The proof of PPP rests on the Partition Principle — that the probabilities of the members of a partition sum to 100% — nothing else, beyond the fact that if A entails B, p(A&~B) = 0. Here are three consequences:

  1. if A entails B, p(A) ≤ p(B)
  2. p(A or B) = p(A) + p(B) − p(A&B) ≤ p(A) + p(B)
  3. For all n, p(A1 or ... or An) ≤ p(A1) + ... + p(An)

Suppose A1, ... An entail B. Then ~B entails ~A1 or ... or ~An. Therefore p(~B) ≤ p(~A1) + ... + p(~An): the improbability of the conclusion of a valid argument cannot exceed the sum of the improbabilities of the premises.

The result is useful to know: if you have two premises of which you are at least 99% certain, they entitle you to be at least 98% certain of a conclusion validly drawn from them. Of course, if you have 100 premises each at least 99% certain, your conclusion may have zero probability. That is the lesson of the “Lottery Paradox”. Still, Adams's result vindicates deductive reasoning from uncertain premises, provided that they are not too uncertain, and there are not too many of them.
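
Here is a quick Python sketch (my own check, using “~(A&B); A; so ~B” as the sample valid argument) of the PPP at work: over random assignments of probability to the four truth-table lines, the improbability of the conclusion never exceeds the sum of the improbabilities of the premises.

    import random

    def random_truth_table_lines():
        weights = [random.random() for _ in range(4)]
        total = sum(weights)
        return [w / total for w in weights]      # p(A&B), p(A&~B), p(~A&B), p(~A&~B)

    violations = 0
    for _ in range(10000):
        ab, a_nb, na_b, na_nb = random_truth_table_lines()
        p_premise1 = 1 - ab          # ~(A & B)
        p_premise2 = ab + a_nb       # A
        p_conclusion = a_nb + na_nb  # ~B
        if (1 - p_conclusion) > (1 - p_premise1) + (1 - p_premise2) + 1e-12:
            violations += 1

    print(violations)   # 0: the conclusion's improbability never exceeds the premises' summed improbabilities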

So far, we have a very useful consequence of the classical notion of validity. Now Adams extends this consequence to arguments involving conditionals. Take a language with “and”, “or”, “not” and “if” — but with “if” occurring only as the main connective in a sentence. (We put aside compounds of conditionals.) Take any argument formulated in this language. Consider any probability function over the sentences of this argument which assigns non-zero probability to the antecedents of all conditionals — that is, any assignment of numbers to the non-conditional sentences which conforms to the Partition Principle, and to the conditional sentences which conforms to Supp's thesis: p(B if A) = pA(B) = p(A&B)/p(A). Let the improbability of the conditional “If A, B” be 1 − pA(B). Define a valid argument as one such that there is no probability function in which the improbability of the conclusion exceeds the sum of the improbabilities of the premises. And a nice logic emerges, which is now well known. It is the same as Stalnaker's logic over this domain (see §4.1). There are rules of proof and a decision procedure, and consistency and completeness can be proved. See Adams (1998 and 1975).

I shall write the conditional which satisfies Adams's criterion of validity “A ⇒ B”. We have already seen that in all distributions, pA(B) ≤ p(A ⊃ B). Therefore, A ⇒ B entails A ⊃ B: it cannot be the case that the former is more probable than the latter. Call a non-conditional sentence a factual sentence. If an argument has a factual conclusion, and is classically valid with the conditional interpreted as ⊃, it is valid with the conditional interpreted as the stronger ⇒. The following patterns of inference are therefore valid:

A; A ⇒ B; so B (modus ponens)
A ⇒ B; ~B; so ~A (modus tollens)
A or B; A ⇒ C; B ⇒ C; so C.

We cannot consistently have their premises highly probable and their conclusion highly improbable.

Arguments with conditional conclusions, however, may be valid when the conditional is interpreted as the weaker A ⊃ B, but invalid when it is interpreted as the stronger A ⇒ B. Here are some examples.

B; so A ⇒ B.

I can consistently be close to certain that Sue is lecturing right now, while thinking it highly unlikely that if she had a heart attack on her way to work, she is lecturing just now.

~A; so A ⇒ B.

You can consistently be close to certain that the Republicans won't win, while thinking it highly unlikely that if they win they will double income tax.

~(A&B); so A ⇒ ~B

I can consistently be close to certain that it's not the case that I will be hit by a bomb and injured today, while thinking it highly unlikely that if I am hit by a bomb, I won't be injured.

A or B; so ~A ⇒ B.

As I think it is very likely to rain tomorrow, I think it's very likely to be true that it will rain or snow tomorrow. But I think it's very unlikely that if it doesn't rain, it will snow.

A ⇒ B; so (C&A) ⇒ B (strengthening of the antecedent).

I can think it's highly likely that if you strike the match, it will light; but highly unlikely that if you dip it in water and strike it, it will light.

Strengthening is a special case of transitivity, in which the missing premise is a tautology: if C&A then A; if A, B; so if C&A, B. So transitivity also fails:

A ⇒ B; B ⇒ C; so A ⇒ C.

Adams gave this example (1966): I can think it highly likely that if Jones is elected, Brown will resign immediately afterwards; I can also think it highly likely that if Brown dies before the election, Jones will be elected; but I do not think it at all likely that if Brown dies before the election, Brown will resign immediately after the election!
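
The failure of transitivity can be given concrete numbers. The sketch below (an illustration with weights I have invented in the spirit of Adams's example) exhibits a single belief state in which both premises are highly probable, on Supp's reading, while the conclusion has probability zero.

    # Worlds are weighted by my own illustrative numbers: D = Brown dies before
    # the election, E = Jones is elected, R = Brown resigns immediately afterwards.
    p = {
        ("D",  "E",  "~R"): 0.1,    # Brown dies; Jones elected; no resignation
        ("~D", "E",  "R"):  0.8,    # the expected case
        ("~D", "~E", "~R"): 0.1,    # Jones not elected
    }

    def prob(holds):
        return sum(weight for world, weight in p.items() if holds(world))

    def conditional(consequent, antecedent):
        return prob(lambda w: antecedent(w) and consequent(w)) / prob(antecedent)

    D = lambda w: w[0] == "D"
    E = lambda w: w[1] == "E"
    R = lambda w: w[2] == "R"

    print(conditional(R, E))   # ≈ 0.89  premise: if Jones is elected, Brown resigns
    print(conditional(E, D))   # 1.0     premise: if Brown dies, Jones is elected
    print(conditional(R, D))   # 0.0     conclusion: if Brown dies, Brown resigns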

We saw in §2.2 that Conditional Proof (CP) is invalid for any conditional stronger than ⊃ . It is invalid in Adams's logic. For instance, “~(A&B); A; so ~B” is valid. It contains no conditionals. Any necessarily truth-preserving argument satisfies PPP. If I'm close to certain that I won't be hit by a bomb and injured, and close to certain that I will be hit by a bomb, then I must be close to certain that I won't be injured. But, as we saw, “~(A&B); so A ⇒ ~B” is invalid. Yet we can get the latter from the former by CP.

Why does CP fail on this conception of conditionals? After all, Supp's idea is to treat the antecedent of a conditional as an assumption. What is the difference between the roles of a premise, and of the antecedent of a conditional in the conclusion?

The antecedent of the conditional is indeed treated as an assumption. On this conception of validity, the premises are not treated as assumptions. Indeed, it is not immediately clear what it would be to treat a conditional, construed according to Supp, as an assumption: to assume something, as ordinarily understood, is to assume that it is true; and conditionals are not being construed as ordinary statements of fact. But we could approximate the idea of taking the premises as assumptions: so doing is, in most contexts, tantamount to treating them, hypothetically, as certainties. So treating the premises would be to require of a valid argument that it preserve certainty: that there must be no probability distributions in which all the premises (conditional or otherwise) are assigned 1 and the conclusion is assigned less than 1. Call this the certainty-preservation principle (CPP).

The conception of validity we have been using (PPP) takes as central the fact that premises are accepted with degrees of confidence less than certainty. Now, anything which satisfies PPP satisfies CPP. And for arguments involving only factual propositions, the converse is also true: the same class of arguments necessarily preserves truth, necessarily preserves certainty and necessarily preserves probability in the sense of PPP. But arguments involving conditionals can satisfy CPP without satisfying PPP. The invalid argument forms above do preserve certainty: if you assign probability 1 to the premises, then you are constrained to assign probability 1 to the conclusion (in all probability distributions in which the antecedent of any conditional gets non-zero probability). But they do not preserve high probability. They do not satisfy PPP. If at least one premise falls short of certainty by however small an amount, the conclusion can plummet to zero.

The logico-mathematical fact behind this is the difference in logical powers between “All” and “Almost all”. If all A-worlds are B-worlds (and there are some C&A-worlds) then all C&A-worlds are B-worlds. But we can have: almost all A-worlds are B-worlds but no C&A-world is a B-world. If all A-worlds are B-worlds and all B-worlds are C-worlds, then all A-worlds are C-worlds. But we can have: all A-worlds are B-worlds, almost all B-worlds are C-worlds, yet no A-world is a C-world; just as we can have, all kiwis are birds, almost all birds fly, but no kiwi flies.

Someone might react as follows: “All I want of a valid argument is that it preserve certainty. I'm not bothered if an argument can have premises close to certain and a conclusion far from certain, as long as the conclusion is certain when the premises are certain”.

We could use the word “valid” in such a way that an argument is valid provided it preserves certainty. If our interest in logic is confined to its application to mathematics or other a priori matters, that is fine. Further, when our arguments do not contain conditionals, if we have certainty-preservation, probability-preservation comes free. But if we use conditionals when arguing about contingent matters, then great caution will be required. Unless we are 100% certain of the premises, the arguments above which are invalid on Adams's criterion guarantee nothing about what you are entitled to think about the conclusion. The line between 100% certainty and something very close is hard to make out: it's not clear how you tell which side of it you are on. The epistemically cautious might admit that they are never, or only very rarely, 100% certain of contingent conditionals. So it would be useful to have another category of argument, the “super-valid”, which preserves high probability as well as certainty. Adams has shown us which arguments (on Supp's reading of “if”) are super-valid.

4. Truth Conditions Revisited

4.1 Nearest Possible Worlds

Adams's theory of validity emerged in the mid-1960s. “Nearest possible worlds” theories were not yet in evidence. Nor was Lewis's result that conditional probabilities are not probabilities of the truth of a proposition. (Adams expressed scepticism about truth conditions for conditionals, but the question was still open.) Stalnaker's (1968) semantics for conditionals was an attempt to provide truth conditions which were compatible with Ramsey's and Adams's thesis about conditional belief. (See also Stalnaker (1970)). That is, he sought truth conditions for a proposition A>B (his notation) such that p(A>B) must equal pA(B):

Now that we have found an answer to the question, “How do we decide whether or not we believe a conditional statement?” [Ramsey's and Adams's answer] the problem is to make the transition from belief conditions to truth conditions; ... . The concept of a possible world is just what we need to make the transition, since a possible world is the ontological analogue of a stock of hypothetical beliefs. The following ... is a first approximation to the account I shall propose: Consider a possible world in which A is true and otherwise differs minimally from the actual world. “If A, then B” is true (false) just in case B is true (false) in that possible world. (1968, pp. 33–4)

If an argument is necessarily truth-preserving, the improbability of its conclusion cannot exceed the sum of the improbabilities of the premises. The latter was the criterion Adams used in constructing his logic. So Stalnaker's logic for conditionals must agree with Adams's over their common domain. And it does. The argument forms we showed to be invalid in Adams's logic (§3.2) are invalid on Stalnaker's semantics. For instance, the following is possible: in the nearest possible world in which you strike the match, it lights; in the nearest world in which you dip the match in water and strike it, it doesn't light. So Strengthening fails. (By “nearest world in which ...” I mean the possible world which is minimally different from the actual world in which ... .)

Conditional Proof fails for Stalnaker's semantics. “A or B; ~A; so B” is of course valid. But (*) “A or B, therefore ~A>B” is not: it can be true that Ann or Mary cooked the dinner (for Ann cooked it); yet false that in the nearest world to the actual world in which Ann did not cook it, Mary cooked it.

Stalnaker (1975) tried to show that although the above argument form (*) is invalid, it is nevertheless a “reasonable inference” when “A or B” is assertable, that is, in a context in which ~A&~B has been ruled out but ~A&B and A&~B remain open possibilities.

Stalnaker's semantics uses a “selection function”, F, which selects, for any proposition A and any world w, a world, w′, the nearest (most similar) world to w at which A is true. “If A, B” is true at w iff B is true at F(A, w), i.e. at w′, the world most similar to w at which A is true. “If A, B” is true simpliciter iff B is true at the nearest A-world to the actual world. (However, we do not know which world is the actual world. To be sure that if A, B, we need to be sure that whichever world w is a candidate for actuality, B is true at the nearest A-world to w.) If A is true, the nearest A-world to the actual world is the actual world itself, so in this case “If A, B” is true iff B is also true. The selection function does substantive work only when A is false.
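
The sketch below (a schematic illustration; the worlds, valuation and nearness ordering are all invented for the purpose, and the pragmatic constraint of the next paragraph is not modelled) shows how a selection function of this kind can be used to evaluate “A>B” at a world.

    worlds = ["w0", "w1", "w2", "w3"]

    # which atomic sentences are true at which worlds (a toy valuation)
    valuation = {
        "w0": {"A": False, "B": False},
        "w1": {"A": True,  "B": True},
        "w2": {"A": True,  "B": False},
        "w3": {"A": False, "B": True},
    }

    # a toy similarity ordering: nearness[w] lists worlds from nearest to farthest,
    # with w itself always nearest (so a true antecedent selects w itself)
    nearness = {
        "w0": ["w0", "w1", "w2", "w3"],
        "w1": ["w1", "w0", "w2", "w3"],
        "w2": ["w2", "w3", "w1", "w0"],
        "w3": ["w3", "w2", "w0", "w1"],
    }

    def select(antecedent, w):
        """Selection function F(A, w): the nearest world to w at which A is true."""
        return next(u for u in nearness[w] if valuation[u][antecedent])

    def stalnaker_conditional(antecedent, consequent, w):
        return valuation[select(antecedent, w)][consequent]

    print(stalnaker_conditional("A", "B", "w0"))   # True: nearest A-world to w0 is w1, where B holds
    print(stalnaker_conditional("A", "B", "w3"))   # False: nearest A-world to w3 is w2, where B fails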

In the case of indicative conditionals the selection function is subject to a pragmatic constraint, set in the framework of the dynamics of conversation. At any stage in a conversation, many things are taken for granted by speaker and hearer, i.e. many possibilities are taken as already ruled out. The remaining possibilities are live. Stalnaker calls the set of worlds which are not ruled out — the live possibilities — the context set. For indicative conditionals, antecedents are typically live possibilities, and we focus on that case. The pragmatic constraint for indicative conditionals says that if the antecedent A is compatible with the context set (i.e. true at some worlds in the context set) then for any world w in the context set, the nearest A-world to w — i.e. the world picked out by the selection function — is also a member of the context set. Roughly, if A is a live possibility (i.e. not already ruled out), then for any world w which is a live possibility, the nearest A-world to w is also a live possibility.

The proposition expressed by “If A, B” is the set of worlds w such that the nearest A-world to w is a B-world. The ordering of worlds, by the pragmatic constraint, depends on the conversational setting. As different possibilities are live in different conversational settings, a different proposition may be expressed by “If A, B” in different conversational settings.

Let us transpose this to the one-person case: I am talking to myself, i.e. thinking — considering whether if A, B. The context set is the set of worlds compatible with what I take for granted, i.e. the set of worlds not ruled out, i.e. the set of worlds which are epistemically possible for me. Let A be epistemically possible for me. Then the pragmatic constraint requires that for any world in the context set, the nearest A-world to it is also in the context set. Provided you and I have different bodies of information, the proposition I am considering when I consider whether if A, B may well differ from the proposition you would express in the same words: the constraints on nearness differ; worlds which are near for me may not be near for you.

This enables Stalnaker to avoid the argument against non-truth-functional truth conditions given in §2.2. The argument was as follows. There are six incompatible logically possible combinations of truth values for A, B and ~A → B. We start off with no firm beliefs about which obtains. Now we eliminate just ~A&~B, i.e. establish A or B. That leaves five remaining possibilities, including two in which “~A → B” is false. So we can't be certain that ~A → B (whereas, intuitively, one can be certain of the conditional in these circumstances). Stalnaker replies: we can't, indeed, be certain that the proposition we were wondering about earlier is true. But we are now in a new context: ~A&~B-worlds have been ruled out (but ~A&B-worlds remain). We now express a different proposition by “~A → B”, with different truth conditions, governed by a new nearness relation. As all our live ~A-worlds are B-worlds (none are ~B-worlds), we know that the new proposition is true.

Now this hypersensitivity of the proposition expressed by “If A, B” to what is taken for granted by speaker and hearer, or to the epistemic state of the thinker, is not very plausible. One usually distinguishes sharply between the content of what is said and the different epistemic attitudes one may take to that same content. Someone conjectures that if Ann isn't home, Bob is. We are entirely agnostic about this. Then we discover that at least one of them is at home (nothing stronger). We now accept the conditional. It seems more natural to say that we now have a different attitude to the same conditional thought, that B on the supposition that ~A. It does not seem that the content of our conditional thought has changed. And if there are conditional propositions, it seems more natural to say that we now take to be true what we were previously wondering about. There does not seem to be any independent motivation for thinking the content of the proposition has changed.

Also, Stalnaker's argument is restricted to the special case where we take the ~A&~B-possibilities to be ruled out. Consider a case when, starting out agnostic, we become close to certain, but not quite certain, that A or B — say we become about 95% certain that A or B, and are about 50% certain that A. According to Supp, we are entitled to be quite close to certain that if ~A, B — 90% certain in fact. (If p(A or B) = 95% and p(A) = 50%, then p(~A&B) = 45%. Now p(~A&~B) = 5%. So, on the assumption that ~A, it's 45:5, or 9:1, that B.) In this case, no additional possibilities have been ruled out. There are ~A&~B-worlds as well as ~A&B-worlds which are permissible candidates for being nearest. Stalnaker has not told us why we should think it likely, in this case, that the nearest ~A-world is a B-world.

Uncertain conditional judgements create difficulties for all propositional theories. As we have seen, it is easy to construct probabilistic counterexamples to Hook's theory; and it is easy to do so for the variant of Stalnaker's theory according to which “If A, B” is true iff B is true at all nearest A-worlds (as Lewis (1973) holds for counterfactuals). (It is very close to certain that if you toss the coin ten times, you will get at least one head; but it is certainly false that the consequent is true at all nearest antecedent-worlds.) It is rather harder for Stalnaker's theory, because nearness is so volatile, and also because it is not fully specified. But here is a putative counterexample.[1]

We have no idea how much fuel, if any, there is in the car (the gauge isn't working). Ann is going to drive it at constant speed, using fuel at a uniform rate, along a road which is 100 miles long. The capacity of the tank is just enough to do 100 miles: if the tank is full she will go 100 miles then stop. If the tank is x% full, she will go x miles then stop. We give equal credence to the propositions “She'll stop in the first mile”, “She'll stop in the second mile” and so on.

Now consider the conditionals

(1) If she stops before half way, she will stop in the 1st mile.

...

(50) If she stops before half way, she will stop in the 50th mile.

According to Supp, these are all equally likely — each is 2% likely. This seems reasonable.

Write Stalnaker's truth condition thus:

“A>B” is true iff either A&B, or ~A and the nearest A-world is a B-world.

The following assumption is very plausible: consider a world w in which Ann goes more than half way. The most similar world to w in which she does not go more than half way is one in which she stops in the 50th mile. After all, it is spatially and temporally more similar, more similar in terms of the amount of fuel in the tank, more similar in its likely causes and consequences, etc., than a world in which she stops earlier.

Let us evaluate (1) and (50) using Stalnaker's truth condition. There are two ways in which (1) can be true: (a) she stops in the first mile (1% likely); (b) she doesn't stop before half way and in the nearest world in which she does stop before half way she stops in the first mile. By our assumption (b) is certainly false. So (1) has a probability of 1% of being true.

There are two ways in which (50) can be true: (a) she stops in the 50th mile (1% likely); (b) she doesn't stop before half way and in the nearest world in which she does stop before half way she stops in the 50th mile. By our assumption, (b) is true iff she doesn't stop before half way, and so is 50% likely. So (50) gets a total probability of 51%.
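Both verdicts can be checked mechanically. The following minimal sketch (an illustration only; the probabilities and the nearness assumption are as stated above) computes Supp's value for each of (1)–(50), and Stalnaker's values for (1) and (50):

```python
from fractions import Fraction

# Ann stops in mile 1, 2, ..., 100, each with probability 1/100.
p_mile = Fraction(1, 100)
p_stops_before_half = 50 * p_mile            # 1/2

# Supp: the probability of "if she stops before half way, she stops in mile n"
# is p(stops in mile n) / p(stops before half way), the same for each n from 1 to 50.
supp_value = p_mile / p_stops_before_half
print(supp_value)                            # 1/50, i.e. 2%

# Stalnaker, conditional (1): true iff she stops in mile 1; clause (b) never holds,
# since the nearest stop-before-half-way world is a mile-50 world.
stalnaker_1 = p_mile
# Stalnaker, conditional (50): true iff she stops in mile 50, or she does not stop
# before half way (and then the nearest stop-before-half-way world is a mile-50 world).
stalnaker_50 = p_mile + (1 - p_stops_before_half)
print(stalnaker_1, stalnaker_50)             # 1/100 and 51/100
```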

It is not clear that these slippery truth conditions serve us better than no truth conditions. They account for the validity of arguments; but Adams's logic has its own rationale, without them. They account for conditional sub-sentences; but we saw (§2.4) that they sometimes give counterintuitive results. Do we escape Lewis's result that a conditional probability is not the probability of the truth of any proposition, by making the proposition expressed by a conditional context-dependent? Lewis showed that there is no proposition A*B such that in every belief state p(A*B) = pA(B). He did not rule out that in every belief state there is some proposition or other, A*B, such that p(A*B) = pA(B). However, in the wake of Lewis, Stalnaker himself proved this stronger result, for his conditional connective: the equation p(A>B) = pA(B) cannot hold for all propositions A, B in a single belief state. If it holds for A and B, we can find two other propositions, C and D (truth-functional compounds of A, B and A>B) for which, demonstrably, it does not hold. (See Stalnaker's letter to van Fraassen published in van Fraassen (1976, pp. 303–4), Gibbard (1981, pp. 219–20), and Edgington (1995, pp. 276–8).)

It was Gibbard (1981, pp. 231–4) who showed just how sensitive to epistemic situations Stalnaker's truth conditions would be. Later (1984, ch. 6), reacting to Gibbard, Stalnaker seemed more ambivalent about whether conditional judgements express propositions. But he still takes his original theory to be a serious candidate (Stalnaker 2005), and this remains a popular theory.

4.2 A Special Assertability Condition

Frank Jackson holds that "If A, B" has the truth conditions of "A⊃B", i.e. "~A or B"; but it is part of its meaning that it is governed by a special rule of assertability. "If" is assimilated to words like "but", "nevertheless" and "even". "A but B" has the same truth conditions as "A and B", yet they differ in meaning: "but" is used to signal a contrast between A and B. When A and B are true and the contrast is lacking, "A but B" is true but inappropriate. Likewise, "Even John can understand this proof" is true when John can understand this proof, but inappropriate when John is a world-class logician.

According to Jackson, in asserting "If A, B" the speaker expresses his belief that A⊃B, and also indicates that this belief is "robust" with respect to the antecedent A. In Jackson's early work (1979, 1980) "robustness" was explained thus: the speaker would not abandon his belief that A⊃B if he were to learn that A. This, it was claimed, amounted to the speaker's having a high probability for A⊃B given A, i.e. for (~A or B) given A, which is just to have a high probability for B given A. Thus, assertability goes by conditional probability. Robustness was meant to ensure that an assertable conditional is fit for modus ponens. Robustness is not satisfied if you believe A⊃B solely on the grounds that ~A. Then, if you discover that A, you will abandon your belief in A⊃B rather than conclude that B.

Jackson came to realise, however, that there are assertable conditionals which one would not continue to believe if one learned the antecedent. I say "If Reagan worked for the KGB, I'll never find out" (Lewis's example (1986, p. 155)). My conditional probability for consequent given antecedent is high. But if I were to discover that the antecedent is true, I would abandon the conditional belief, rather than conclude that I will never find out that the antecedent is true. So, in Jackson's later work (1987), robustness with respect to A is simply defined as pA(A⊃B) being high, which is trivially equivalent to pA(B) being high. In most cases, though, the earlier explanation will hold good.
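The equivalence of pA(A⊃B) and pA(B) invoked above is easy to verify: conditional on A, the ~A-cases drop out, so the probability of ~A-or-B given A reduces to the probability of B given A. A minimal sketch, with an arbitrary made-up distribution (my numbers, purely for illustration):

```python
# Made-up probabilities for the four combinations of A and B.
p = {("A", "B"): 0.3, ("A", "~B"): 0.2, ("~A", "B"): 0.4, ("~A", "~B"): 0.1}

p_A = p[("A", "B")] + p[("A", "~B")]

# "A hook B" (i.e. ~A or B) holds in every case except A & ~B,
# so the only A-case in which it holds is the A & B case.
hook_cases = [("A", "B"), ("~A", "B"), ("~A", "~B")]
p_hook_and_A = sum(p[c] for c in hook_cases if c[0] == "A")

p_hook_given_A = p_hook_and_A / p_A
p_B_given_A = p[("A", "B")] / p_A
print(p_hook_given_A, p_B_given_A)   # 0.6 0.6: the two conditional probabilities coincide
```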

What do we need the truth-functional truth conditions for? Do they explain the meaning of compounds of conditionals? According to Jackson, they do not (1987, p. 129). We know what "A⊃B" means, as a constituent in complex sentences. But "A⊃B" does not mean the same as "If A, B". The latter has a special assertability condition. And his theory has no implications about what, if anything, "if A, B" means when it occurs, unasserted, as a constituent in a longer sentence.

(Here his analogy with “but” etc. fails. “But” can occur in unasserted clauses: “Either he arrived on time but didn't wait for us, or he never arrived at all” (see Woods (1997, p. 61)). It also occurs in questions and commands: “Shut the door but leave the window open”. “Does anyone want eggs but no ham?”. “But” means “and in contrast”. Its meaning is not given by an “assertability condition”.)

Do the truth-functional truth conditions explain the validity of arguments involving conditionals? Not in a way that accords well with intuition, we have seen. Jackson claims that our intuitions are at fault here: we confuse preservation of truth and preservation of assertability (1987, pp. 50–1).

Nor is there any direct evidence for Jackson's theory. Nobody who thinks the Republicans won't win treats “If the Republicans win, they will double income tax” as inappropriate but probably true, in the same category as “Even Gödel understood truth-functional logic”. Jackson is aware of this. He seems to advocate an error theory of conditionals: ordinary linguistic behaviour fits the false theory that there is a proposition A*B such that p(A*B) = pA(B) (1987, pp. 39–40). If this is his view, he cannot hold that his own theory is a psychologically accurate account of what people do when they use conditionals. Perhaps it is an account of how we should use conditionals, and would if we were free from error: we should accept that “If the Republicans win they will double income tax” is probably true when it is probable that the Republicans won't win. Would we gain anything from following this prescription? It is hard to see that we would: we would deprive ourselves of the ability to discriminate between believable and unbelievable conditionals whose antecedents we think false.

For Jackson's more recent thoughts on conditionals see his postscript (1998, pp. 51–54). See also Edgington (2009) and Jackson's reply (2009, pp. 463–6).

4.3 Restrictors and the Strict Conditional

Angelika Kratzer's work on conditionals in linguistics has become influential. Her articles have recently appeared, reworked, as a book, Modals and Conditionals (2012). Kratzer's inspiration comes from a paper by David Lewis, “Adverbs of Quantification” (1975). Lewis's paper is about the analysis of sentences containing adverbs such as always, never, usually, often, seldom …, sentences such as “The fog usually lifts before noon here” and “Caesar seldom awoke before dawn”. After considering and rejecting some alternatives, Lewis introduces “restriction by if-clauses”: he proposes that there is a use of if-clauses whose function is to restrict the range of cases to which the operator or quantifier applies. First paraphrase the sentences: “Usually if there is fog here, it lifts before noon.” “Seldom if Caesar awoke, it was before dawn.” (Lewis's target sentences do not have “if” in their surface structure, but they could have had: the theory also applies to sentences like “Usually, if Mary visits, she brings her dog”.) The “if” restricts the “usually” to the occurrences of fog here, or of Mary's visits, and the “seldom” to Caesar's awakenings. These sentences are not to be construed as applying an adverb to a conditional proposition. The adverb applies to the main clause, its scope restricted by the if-clause. Thus Lewis:

[T]he if of our restrictive if-clauses should not be regarded as a sentence connective. It has no meaning apart from the adverb it restricts. The if in always if …, sometimes if …, and the rest is on a par with the non-connective and in between … and …, with the non-connective or in whether … or …, or with the non-connective if in the probability that … if. It serves merely to mark an argument-place in a polyadic construction. (Lewis 1975 reprinted in Lewis 1998 pp. 14–15)

Lewis's final example is particularly interesting, especially because this paper was written at much the same time as his proof that conditional probabilities are not to be construed as probabilities of conditional propositions.

Lewis has three different accounts of “if”: he follows Jackson in claiming that the “if” of indicative conditionals is the truth-functional “if”, with a special rule of assertability (see Lewis 1986 pp. 152–6); there is his famous account of the “if” of counterfactual conditionals (Lewis 1973); and there is this use of “if” as a restrictor.

Kratzer's idea is that this last account of “if” as a restrictor should be applied to all conditionals. Consider first conditionals which contain a modal term: “If it's not in the kitchen it must be in the bathroom/might be in the bathroom/is probably in the bathroom”. By analogy with Lewis, she argues that these are not to be construed as attaching a modal term to a conditional proposition; rather, they are to be construed as attaching a modal term to the main clause, the scope of the modal term being restricted by the conditional clause.

But what of a simple conditional which does not contain a modal operator, such as “If it's not in the kitchen it is in the bathroom” — what Kratzer calls the “bare conditional”? Here is her famous remark:

The history of the conditional is the history of a syntactic mistake. There is no two-place if … then connective in the logical forms of natural languages. If-clauses are devices for restricting the domains of operators. Bare conditionals have unpronounced modal operators [my emphasis]. Epistemic MUST is one option. (Kratzer (1991), quoted from Kratzer (2012) p. 106)

Now there is much in common between the restrictor-view of conditionals and the suppositional view. A supposition also restricts one's claim to the case in which the antecedent is true. The strength of your conditional belief is measured by how probable you judge the consequent, on the assumption that the antecedent is satisfied; and this is not the same as thinking a conditional proposition is probably true. Recall Lewis's remark about “the probability that … if”. Kratzer's treatment of modal conditionals may be seen as a generalization of the treatment of “Probably, if A, C” as a conditional probability to other modalities.

However, Kratzer's treatment of the "bare conditional" is controversial: on her view, at the level of semantic structure there really are no such things; apparent bare conditionals contain an "unpronounced modal operator". If the modal operator is an epistemic 'must', as she suggests, bare conditionals are a species of strict conditional — something like 'all live A-possibilities are C-possibilities'.

Other philosophers have also defended the view that indicative conditionals are strict conditionals, without adopting Kratzer's restrictor view, for instance William Lycan (2001) and Anthony Gillies (2009). According to Gillies, a context determines a set of possibilities compatible with the relevant information in the context. “If A, C” is true at a context iff all relevant A-possibilities are C-possibilities, false otherwise.

These proposals have difficulty handling the fact that one may adopt epistemic attitudes to a conditional of varying degrees of closeness to certainty. I may be close to certain, but not completely certain, that Jane will accept if she is offered the job, that if I have the operation I will be cured, etc. Not all the relevant A-possibilities are C-possibilities. On this proposal, in these circumstances the conditionals are clearly, definitely false, and should be completely rejected, and hence not something one should be close to certain of. This point holds for any kind of strict conditional — any kind of 'must'. Stalnaker (1981 p. 100) made essentially the same point, about counterfactuals, comparing his view with Lewis's. On a 'strict conditional' account, the following exchange should be in order:

A: Will Jane accept if she is offered the job?
B: No, it is certainly not the case that she will accept if offered the job [for not all offer-possibilities are accept-possibilities]. But she might well accept if she is offered the job.

And Stalnaker (ibid.) attributes to Thomason the closely related point:

A: Will Jane accept if she is offered the job?
B: I believe so, but she might not.

This is defective on the proposal: not all offer-possibilities are accept-possibilities. The conditional is clearly false. One should not believe something which one judges to be clearly false.

Nor would it do to make the unpronounced modal operator in bare conditionals “probably”; for one can be certain that it is probable that if A, C, without being certain that if A, C. This point is made in more detail by Edgington (1995, pp. 292–3).

Thus, while the restrictor view has some plausibility, its treatment of the “bare conditional” as a modalised proposition is problematic.

4.4 Compounds

A common complaint against Supp's theory is that if conditionals do not express propositions with truth conditions, we have no account of the behaviour of compound sentences with conditionals as parts (see e.g. Lewis (1976, p. 142)). However, no theory has an intuitively adequate account of compounds of conditionals: we saw in §2.4 that there are compounds which Hook gets wrong; and compounds which Arrow gets wrong. Grice's and Jackson's defences of Hook focus on what more is needed to justify the assertion of a conditional, beyond belief that it is true. This is no help when it occurs, unasserted, as a constituent of a longer sentence, as Jackson accepts. And with negations of conditionals and conditionals in antecedents, we saw, the problem is reversed: we assert conditionals which we would not believe if we construed them truth-functionally.

There have been several attempts to construct a general theory of compounds of conditionals, compatible with Supp's thesis. The first is based on a partial restoration of truth values, which has some merit. Note that the difficulties for Hook and Arrow in §§2 and 3 were focused on the last two lines of the truth table — the cases in which the antecedent is false. No problems arose in virtue of the cases in which the antecedent is true. Perhaps we can say that "If A, B" is true when A and B are both true, is false when A is true and B is false, and has no truth value when A is false. We must immediately add that to believe (or assert) that if A, B, is not to believe (assert) that it is true; for it is true only if A&B; and one might believe that if A, B, and properly assert it, without believing that A&B — indeed, while thinking that it is very likely not true. If I say "If you press that button, there will be an explosion", I hope and expect that you will not press it, and hence that my remark is not true.

Instead, one must say that to believe “If A, B” is to believe that it is true rather than false; it is to believe that A&B is much more likely than A&~B; i.e., to believe that it is true given that it is true or false. This is just to say that one's confidence in a conditional is measured by pA(B). Note that for a bivalent proposition, belief that it is true coincides with belief that it is true rather than false. But the latter, not the former, generalizes to conditionals.
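The coincidence of the two measures is easy to check: the conditional has a truth value exactly when A is true, so the probability that it is true, given that it is true or false, is p(A&B)/p(A), which is just pA(B). A minimal sketch with made-up numbers (mine, for illustration):

```python
# "If A, B" is true in the A&B cases, false in the A&~B cases, truth-valueless in the ~A cases.
p_A_and_B, p_A_and_notB, p_notA = 0.35, 0.05, 0.60

p_has_truth_value = p_A_and_B + p_A_and_notB          # = p(A)
p_true_given_has_value = p_A_and_B / p_has_truth_value
p_B_given_A = p_A_and_B / (p_A_and_B + p_A_and_notB)  # the conditional probability pA(B)
print(round(p_true_given_has_value, 3), round(p_B_given_A, 3))  # 0.875 0.875
```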

This has some minor advantages. It allows one to be right by luck, and wrong by bad luck: however strong my grounds for thinking that B if A, if it turns out that A&~B, I was wrong. However poor my grounds, if it turns out that A&B, I was vindicated.

Now in principle one could handle negations, conjunctions and disjunctions of conditionals by three-valued truth tables; and continue to say that a complex statement is believable to the extent that it is judged probably true given that it is true or false. For a conjunction, ((A→B)&(C→D)), the most natural truth table would seem to be: the conjunction is true iff both conjuncts are true; false iff at least one conjunct is false; otherwise it lacks a truth value. This has unappetizing consequences. Consider a conjunction of two conditionals whose antecedents are A and ~A respectively, such that the first conditional is 100% certain and the second 99% certain, for instance, ((A→A)&(~A→B)) where p~A(B) = 0.99. This looks like something about which you should be close to certain. But it cannot be true (for one of the antecedents is false), and it may be false, in the unlucky event that it turns out that ~A&~B. So the probability of its truth, given that it has a truth value, is 0. One can try other truth tables: make the conjunction true provided that it has at least one true conjunct and no false conjunct, false if it has at least one false conjunct, lacking truth value otherwise. And one can come up with equally unappetizing consequences. For work in this tradition and valuable surveys of related work see De Finetti (1935), Belnap (1970) and Milne (1997).
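The unappetizing consequence can be checked by enumerating the three-valued table for the example. In the sketch below, p~A(B) = 0.99 is as in the text; the 50% probability for A is my own stipulation (any value will do, since the conjunction can never come out true):

```python
# Three-valued conditional: X -> Y is True if X&Y, False if X&~Y, and None (no value) if ~X.
def cond(x, y):
    return None if not x else y

# The conjunction rule from the text: True iff both conjuncts are True,
# False iff at least one conjunct is False, otherwise no truth value.
def conj(u, v):
    if u is True and v is True:
        return True
    if u is False or v is False:
        return False
    return None

def value(a, b):
    return conj(cond(a, a), cond(not a, b))   # ((A -> A) & (~A -> B))

# Cases with their probabilities: A (B makes no difference here), ~A&B, ~A&~B.
cases = [(0.5, True, True), (0.5 * 0.99, False, True), (0.5 * 0.01, False, False)]

p_true = sum(p for p, a, b in cases if value(a, b) is True)
p_false = sum(p for p, a, b in cases if value(a, b) is False)
print(p_true, p_false)              # 0 0.005: never true, and false if ~A&~B
print(p_true / (p_true + p_false))  # 0.0: probability of truth, given that it has a truth value
```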

A different approach gives "semantic values" to conditionals as follows: 1 (= true) if A&B; 0 (= false) if A&~B; pA(B) if ~A. Thus we have a belief-relative three-valued entity. Its probability is its "expected value". For instance, I'm to pick a ball from a bag. 50% of the balls are red. 80% of the red balls have black spots. Consider "If I pick a red ball (R) it will have a black spot (B)". pR(B) = 80%. If R&B, the conditional gets semantic value 1, if R&~B, it gets semantic value 0. What does it get if ~R? One way of motivating this approach is to treat it as a refinement of Stalnaker's truth conditions. Is the nearest R-world a B-world or not? Well, if I actually don't pick a red ball, there isn't any difference, in nearness to the actual world, between the worlds in which I do; but 80% of them are B-worlds. Select an R-world at random; then it's 80% likely that it is a B-world. So "If R, B" gets 80% if ~R. You don't divide the ~R-worlds into those in which "If R, B" is true and those in which it is false. Instead the conditional gets value 80% in all of them. The expected value of "If R, B" is (p(R&B) × 1) + (p(R&~B) × 0) + (p(~R) × 0.8) = (0.4 × 1) + (0.1 × 0) + (0.5 × 0.8) = 0.8 = pR(B). Ways of handling compounds of conditionals have been proposed on the basis of these semantic values. But again, they sometimes give implausible results, for instance for conjunctions of conditionals. Also this approach is somewhat unprincipled, using a kind of average of quite distinct kinds of thing: ordinary truth values, and probability values. Note that, as in the previous approach, probability is not probability of truth. For developments of this approach, see van Fraassen (1976), McGee (1989), Jeffrey (1991), Stalnaker and Jeffrey (1994). For some counterintuitive consequences, see Edgington (1991, pp. 200–2), Lance (1991), McDermott (1996, pp. 25–28).
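The expected-value calculation in the red-ball example can be laid out as follows (a minimal sketch, with the numbers given above):

```python
# Semantic value of "If R, B": 1 if R&B, 0 if R&~B, and pR(B) if ~R.
p_R = 0.5              # half the balls are red
p_B_given_R = 0.8      # 80% of the red balls have black spots

p_R_and_B = p_R * p_B_given_R           # 0.4
p_R_and_notB = p_R * (1 - p_B_given_R)  # 0.1
p_notR = 1 - p_R                        # 0.5

expected_value = p_R_and_B * 1 + p_R_and_notB * 0 + p_notR * p_B_given_R
print(round(expected_value, 2))         # 0.8 = pR(B)
```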

A recent variant of this approach, more principled and more successful, is Bradley 2012. We have a Stalnaker-like semantics. We assume or pretend that if an indicative conditional has a false antecedent, there is a fact of the matter about which world would be actual if A were. E.g. supposing I don't pick a red ball, there is a “counterfact” concerning which ball I would have picked, had I picked a red ball, though we are totally ignorant about the matter. (It does not matter if this is a pretence: the claim would be that our thinking about these matters makes it as if it were so.) We jettison the notion of similarity, and hence of an ordering of worlds. This is replaced by a probability distribution over the candidate “counterfacts”.

Hence, two types of uncertainty are in play when assessing conditionals: ordinary uncertainty about the facts — about which world is actual; and uncertainty about which world is the “counter-actual” world given that some supposition, A, is true. For familiarity, call this the “nearest” A-world, remembering that this does not mean “most similar”. Thus, for a conditional with antecedent A, there are two probability distributions in play, one concerning the facts, and one concerning which world would be actual if A is or were true.

The semantic content of a conditional, if A, C, is given not just by the set of worlds in which it is true (hence it is not a classical proposition); but by the set of ordered pairs of worlds, <wi,wj> which encode the possibilities: wi is actual, wj is the nearest A-world. It is the set of ordered pairs such that if wi is actual and wj is the nearest A-world, the conditional is true. We cannot speak of a conditional sentence being true or false at a world, simpliciter, for that leaves open which the nearest A-world is.

With this machinery, the contents of conjunctions, disjunctions and negations of conditionals are given in the usual way by intersection, union and complements of the contents of the component sentences.

The facts do not in general determine the counterfacts. But there are constraints on the relations between the facts and the counterfacts. One constraint is what Lewis calls centering: if A is true at w, the nearest A-world to w is w. In that case, the facts determine the counterfacts. But when A is false at w, which is the nearest A-world may not be determined by the facts.
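To illustrate the structure so far (a toy rendering of the idea, not Bradley's own formulation; the worlds and truth-value assignments are my stipulations), contents can be modelled as sets of pairs <actual world, nearest A-world>, with centering restricting the admissible pairs and the logical operations given by the set operations:

```python
# A toy model: four worlds, with made-up assignments of A, C and D.
worlds = ["w1", "w2", "w3", "w4"]
A = {"w1", "w2", "w3"}   # A-worlds
C = {"w1", "w2"}         # C-worlds
D = {"w2", "w3"}         # D-worlds

# Admissible pairs (actual world, nearest A-world).
# Centering: an A-world is its own nearest A-world.
pairs = {(w, v) for w in worlds for v in A if (w not in A) or (v == w)}

def content(consequent):
    """Content of 'if A, ...': the admissible pairs whose nearest A-world satisfies the consequent."""
    return {(w, v) for (w, v) in pairs if v in consequent}

if_A_C, if_A_D = content(C), content(D)

negation = pairs - if_A_C        # complement (within the admissible pairs)
conjunction = if_A_C & if_A_D    # intersection
disjunction = if_A_C | if_A_D    # union
print(sorted(conjunction))       # the pairs at which both conditionals come out true
```

Since both conditionals here share the antecedent A, pairs suffice; as noted below, conditionals with different antecedents call for longer tuples.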

The other constraint, a crucial assumption, Bradley calls “Restricted Independence” (restricted because it is a weaker independence assumption than found in the work of Van Fraassen, Stalnaker and Jeffrey and McGee mentioned above): the probability of a world's being the nearest A-world is independent of the truth or falsity of A.

Suppose A is true; in that case, the actual world is the nearest A-world, and we just have to figure out (under that assumption) how likely it is that C, i.e. p(If A, C) is pA(C). By restricted independence, if A is false, p(If A, C) is still pA(C).

For example, it's 90% likely that if you pick a red ball it will have a black spot. Suppose you don't pick a red ball. Then it's 90% likely that if you had picked a red ball it would have had a black spot. The probability in the ~A-worlds matches the probability in the A-worlds. That is how the trick is pulled: we have a semantic entity the probability of whose truth is a conditional probability.
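Here is a minimal sketch of how restricted independence delivers this result. The 90% conditional probability is from the example above; the 50% chance of picking a red ball is my own stipulation, and the upshot does not depend on it:

```python
# The conditional is true iff the "nearest" A-world is a C-world.
p_A = 0.5            # assumed probability of picking a red ball (any value gives the same result)
p_C_given_A = 0.9    # probability of a black spot, given a red ball

# Restricted independence: the probability that the nearest A-world is a C-world
# is the same whether A is true (when it is just pA(C)) or false.
p_nearest_A_is_C = p_C_given_A

p_conditional = p_A * p_C_given_A + (1 - p_A) * p_nearest_A_is_C
print(round(p_conditional, 2))   # 0.9: probability of truth = the conditional probability
```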

This has advantages over the earlier approaches: it is immune from the counterexamples that had been found for the last approach, because of the weaker independence assumption; probability is straightforwardly probability of truth; when the antecedent is true, the conditional is straightforwardly true/false according to whether the consequent is true/false; when all A-worlds are C-worlds, the conditional is straightforwardly true; the semantic value is not a contrived, belief-dependent entity; entailment is straightforwardly necessary preservation of truth.

Its main disadvantage is that the semantics is very complex: when there is more than one antecedent, for instance in a conjunction of conditionals, we need not ordered pairs, but ordered triples of worlds as semantic values, so in general we need ordered n-tuples. Nevertheless it is a sort of possibility-proof: that we can find a semantic entity — not an ordinary proposition — which embeds naturally enough.

Many followers of Adams take a more relaxed approach to the problem. They try to show that when a sentence with a conditional subsentence is intelligible, it can be paraphrased, at least in context, by a sentence without a conditional subsentence. As conditionals are not ordinary propositions, in that they essentially involve suppositions, this (it is claimed) is good enough. They also point out that some constructions are rarer, and harder to understand, and more peculiar, than would be expected if conditionals had truth conditions and embedded in a standard way. See Appiah (1985, pp. 205–10), Gibbard (1981, pp. 234–8), Edgington (1995, pp. 280–4), Woods (1997, pp. 58–68 and 120–4); see also Jackson (1987, pp. 127–37).

For some constructions the paraphrase can be done in a general, uniform way. For example, “If A, then if B, C” can be paraphrased “If A&B, C”. For to suppose that A, then to suppose that B and make a judgement about C under those suppositions, is the same as to make a judgement about C under the supposition that A&B. Let's consider this as applied to a problem raised by McGee (1985) with the following example. Before Reagan's first election, Reagan was hot favourite, a second Republican, Anderson, was a complete outsider, and Carter was lagging well behind Reagan. Consider first

(1) If a Republican wins and Reagan does not win, then Anderson will win.

As these are the only two Republicans in the race, (1) is unassailable. Now consider

(2) If a Republican wins, then if Reagan does not win, Anderson will win.

We read (2) as equivalent to (1), hence also unassailable.

Suppose I'm close to certain (say, 90% certain) that Reagan will win. Hence I am close to certain that

(3) A Republican will win.

But I don't believe

(4) If Reagan does not win, Anderson will win.

I'm less than 1% certain that (4). On the contrary, I believe that if Reagan doesn't win, Carter will win. As these opinions seem sensible, we have a prima facie counterexample to modus ponens: I accept (2) and (3), but reject (4). Truth conditions or not, valid arguments obey the probability-preservation principle. I'm 100% certain that (2), 90% certain that (3), but less than 1% certain that (4).

Hook saves modus ponens by claiming that I must accept (4). For Hook, (4) is equivalent to “Either Reagan will win or Anderson will win”. As I'm 90% certain that Reagan will win, I must accept this disjunction, and hence accept (4). Hook's reading of (4) is, of course, implausible.

Arrow saves modus ponens by claiming that, although (1) is certain, (2) is not equivalent to (1), and (2) is almost certainly false. For Stalnaker,

(5) If a Republican wins, then if Reagan doesn't win, Carter will win

is true. To assess (5), we need to consider the nearest world in which a Republican wins (call it w), and ask whether the conditional consequent is true at w. At w, almost certainly, it is Reagan who wins. We need now to consider the nearest world to w in which Reagan does not win. Call it w′. In w′, almost certainly, Carter wins.

Stalnaker's reading of (2) is implausible; intuitively, we accept (2) as equivalent to (1), and do not accept (5).

Supp saves modus ponens by denying that the argument is really of that form. "A→B; A; so B" is demonstrably valid when A and B are propositions. For instance, if p(A) = 90% and pA(B) = 90%, the lowest possible value for p(B) is 81%. The "consequent" of (2), "If Reagan doesn't win, Anderson will win", is not a proposition. The argument is really of the form "If A&B, then C; A; so if B then C". This argument form is invalid (Supp and Stalnaker agree). Take the case where C = A, and we have "If A&B then A; A; so if B then A". The first premise is a tautology and falls out as redundant; and we are left with "A; so if B then A". We have already seen that this is invalid: I can think it very likely that Sue is lecturing right now, without thinking that if she was injured on her way to work, she is lecturing right now.
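The 81% figure can be checked directly: the two premises guarantee that p(A&B) = 0.81, and since A&B entails B, p(B) can be no lower than that (and can be exactly that, when every B-possibility is an A-possibility). A minimal sketch:

```python
p_A = 0.9
p_B_given_A = 0.9

p_A_and_B = p_A * p_B_given_A     # 0.81
lower_bound_on_p_B = p_A_and_B    # A&B entails B, so p(B) >= p(A&B)
print(round(lower_bound_on_p_B, 2))   # 0.81
```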

Compounds of conditionals are a hard problem for everyone. It is difficult to see why this should be so if conditionals are propositions with truth conditions.

5. Other Conditional Speech Acts and Propositional Attitudes

As well as conditional beliefs, there are conditional desires, hopes, fears, etc.. As well as conditional statements, there are conditional commands, questions, offers, promises, bets, etc.. “If he calls” plays the same role in “If he calls, what shall I say?”, “If he calls, tell him I'm out” and “If he calls, Mary will be pleased”. Which of our theories extends to these other kinds of conditional?

One believes that B to the extent that one thinks B more likely than not B; according to Supp, one believes that B if A to the extent that one believes that B under the supposition that A, i.e. to the extent that one thinks A&B more likely than A&~B; and there is no proposition X such that one must believe X more likely than ~X, just to the extent that one believes A&B more likely than A&~B. Conditional desires appear to be like conditional beliefs: to desire that B is to prefer B to ~B; to desire that B if A is to prefer A&B to A&~B; there is no proposition X such that one prefers X to ~X just to the extent that one prefers A&B to A&~B. I have entered a competition and have a very small chance of winning. I express the desire that if I win the prize (W), you tell Fred straight away (T). I prefer W&T to W&~T. I do not necessarily prefer (W⊃T) to ~(W⊃T), i.e. (~W or W&T) to W&~T. For I also want to win the prize, and much the most likely way for (~W or W&T) to be true is that I don't win the prize. Nor is my conditional desire satisfied if I don't win but in the nearest possible world in which I win, you tell Fred straight away.

If I believe that B if A, i.e. (according to Supp) think A&B much more likely than A&~B, this puts me in a position to make a conditional commitment to B: to assert that B, conditionally upon A. If A is found to be true, my conditional assertion has the force of an assertion of B. If A is false, there is no proposition that I asserted. I did, however, express my conditional belief — it is not as though I said nothing. Suppose I say "If you press that switch, there will be an explosion", and my hearer takes me to have made a conditional assertion of the consequent, one which will have the force of an assertion of the consequent if she presses the switch. Provided she takes me to be trustworthy and reliable, she thinks that if she presses the switch, the consequent is likely to be true. That is, she acquires a reason to think that if she presses it, there will be an explosion; and hence a reason not to press it.

Conditional commands can, likewise, be construed as having the force of a command of the consequent, conditional upon the antecedent's being true. The doctor says to the nurse in the emergency ward, “If the patient is still alive in the morning, change the dressing”. Considered as a command to make Hook's conditional true, this is equivalent to “Make it the case that either the patient is not alive in the morning, or you change the dressing”. The nurse puts a pillow over the patient's face and kills her. On the truth-functional interpretation, the nurse can claim that he was carrying out the doctor's order. Extending Jackson's account to conditional commands, the doctor said “Make it the case that either the patient is not alive in the morning, or you change the dressing”, and indicated that she would still command this if she knew that the patient would be alive. This doesn't help. The nurse who kills the patient still carried out an order. Why should the nurse be concerned with what the doctor would command in a counterfactual situation?

Hook will reply to the above argument about conditional commands that we need to appeal to pragmatics. Typically, for any command, conditional or not, there are tacitly understood reasonable and unreasonable ways of obeying it; and killing the patient is to be tacitly understood as a totally unreasonable way of making the truth-functional conditional true — as, indeed, would be changing the dressing in such an incompetent way that you almost strangle the patient in the process. The latter clearly is obeying the command, but not in the intended manner. But it is stretching pragmatics rather far to say the same of the former. To take a less dramatic example, at Fred's request, the Head of Department agrees to bring it about that he gives the Kant lectures if his appointment is extended. She then puts every effort into making sure that his appointment is not extended. Is it plausible to say that this is doing what she was asked to do, albeit not in the intended way?

Extending Stalnaker's account to conditional commands, “If it rains, take your umbrella” becomes “In the nearest possible world in which it rains, take your umbrella”. Suppose I have forgotten your command or alternatively am inclined to disregard it. However, it doesn't rain. In the nearest world in which it rains, I don't take my umbrella. On Stalnaker's account, I disobeyed you. Similarly for conditional promises: on this analysis I could break my promise to go to the doctor if the pain gets worse, even if the pain gets better. This is wrong: conditional commands and promises are not requirements on my behaviour in other possible worlds.

Among conditional questions we can distinguish those in which the addressee is presumed to know whether the antecedent is true, and those in which he is not. In the latter case, the addressee is being asked to suppose that the antecedent is true, and give his opinion about the consequent: “If it rains, will the match be cancelled?”. In the former case — “If you have been to London, did you like it?” — he is expected to answer the consequent-question if the antecedent is true. If the antecedent is false, the question lapses: there is no conditional belief for him to express. “Not applicable” as the childless might write on a form which asks “If you have children, how many children do you have?”. You are not being asked how many children you have in the nearest possible world in which you have children. Nor is it permissible to answer “17” on the grounds that “I have children ⊃ I have 17 children” is true. Nor are you being asked what you would believe about the consequent if you came to believe that you did have children.

Widening our perspective to include these other conditionals tends to confirm Supp's view. Any propositional attitude can be held categorically, or under a supposition. Any speech act can be performed unconditionally, or conditionally upon something else. Our uses of “if”, on the whole, seem to be better and more uniformly explained without invoking conditional propositions.

Bibliography

General Overviews

  • Bennett, Jonathan, 2003. A Philosophical Guide to Conditionals, Oxford: Clarendon Press.
  • Edgington, Dorothy, 1995. “On Conditionals”, Mind, 104: 235–329.
  • Evans, Jonathan and Over, David, 2004. If, Oxford: Oxford University Press. (This is a work in cognitive psychology.)
  • Gillies, Anthony S., 2012. “Indicative Conditionals”, in Delia Graff Fara and Gillian Russell (eds.) Routledge Companion to the Philosophy of Language, New York and London: Routledge, pp. 449–65.
  • Harper, W. L., Stalnaker, R., and Pearce, G. (eds.), 1981. Ifs, Dordrecht: Reidel.
  • Jackson, Frank (ed.), 1991. Conditionals, Oxford: Clarendon Press.
  • Sanford, David, 2003. If P, then Q: Conditionals and the Foundations of Reasoning, London: Routledge.
  • Woods, Michael, 1997. Conditionals, Oxford: Clarendon Press.

Other Cited Works

  • Adams, E. W., 1965. “A Logic of Conditionals”, Inquiry, 8: 166–97.
  • –––, 1966. “Probability and the Logic of Conditionals”, in J. Hintikka and P. Suppes (eds.), Aspects of Inductive Logic, Amsterdam: North Holland, pp. 256–316.
  • –––, 1970. “Subjunctive and Indicative Conditionals”, Foundations of Language, 6: 89–94.
  • –––, 1975. The Logic of Conditionals, Dordrecht: Reidel.
  • –––, 1998. A Primer of Probability Logic, Stanford: CSLI Publications.
  • Appiah, A., 1985. Assertion and Conditionals, Cambridge: Cambridge University Press.
  • Barnett, David, 2006. “Zif is If”, Mind, 115 (459): 519–66.
  • Bayes, Thomas, 1763. “An Essay Towards Solving a Problem in the Doctrine of Chances”, Transactions of the Royal Society of London, 53: 370–418.
  • Belnap, Nuel, 1970. “Conditional Assertion and Restricted Quantification”, Noûs, 4: 1–13.
  • Bennett, Jonathan, 1988. “Farewell to the Phlogiston Theory of Conditionals”, Mind, 97: 509–27.
  • –––, 1995. “Classifying Conditionals: the Traditional Way is Right”, Mind, 104: 331–44.
  • Bradley, Richard, 2012. “Multidimensional Possible-World Semantics for Conditionals”, Philosophical Review, 122(4): 539–71.
  • De Finetti, Bruno, 1935. “La logique de la probabilité”, translated as “The Logic of Probability”, Philosophical Studies, 77 (1995): 181–190.
  • Dudman, V. H., 1984. "Parsing ‘If’-sentences", Analysis, 44: 145–53.
  • –––, 1988. “Indicative and Subjunctive”, Analysis, 48: 113–22.
  • Edgington, Dorothy, 1991. “The Mystery of the Missing Matter of Fact”, Proceedings of the Aristotelian Society (Supplementary Volume), 65: 185–209.
  • –––, 2009. “Conditionals, Truth and Assertion” in Ian Ravenscroft (ed.), op. cit., pp. 283–310.
  • Frege, G., 1879. Begriffsschrift, in Geach, Peter and Black, Max, 1960. Translations from the Philosophical Writings of Gottlob Frege, Oxford: Basil Blackwell.
  • Gärdenfors, Peter, 1986. “Belief Revisions and the Ramsey Test for Conditionals”, Philosophical Review, 95: 81–93.
  • –––, 1988. Knowledge in Flux, Cambridge MA: MIT Press.
  • Gibbard, A., 1981. “Two Recent Theories of Conditionals” in Harper, Stalnaker and Pearce (eds.) 1981.
  • Gillies, Anthony S., 2009. “On Truth-Conditions for If (But Not Quite Only If)”, Philosophical Review, 118(3): 325–49.
  • Grice, H. P., 1989. Studies in the Way of Words, Cambridge MA: Harvard University Press.
  • Jackson, Frank, 1979. “On Assertion and Indicative Conditionals”, Philosophical Review, 88: 565–589.
  • –––, 1981. “Conditionals and Possibilia”, Proceedings of the Aristotelian Society, 81: 125–137.
  • –––, 1987. Conditionals, Oxford: Basil Blackwell.
  • –––, 1990. “Classifying Conditionals I”, Analysis, 50: 134–47, reprinted in Jackson 1998.
  • –––, 1998. Mind, Method and Conditionals, London: Routledge.
  • –––, 2009. "Replies to my Critics" in Ian Ravenscroft (ed.), op. cit., pp. 387–474.
  • Jeffrey, Richard, 1991. “Matter of Fact Conditionals”, Proceedings of the Aristotelian Society Supplementary Volume 65: 161–183.
  • Kratzer, Angelika, 1986. "Conditionals", in A. M. Farley, P. Farley, and K. E. McCollough (eds.), Papers from the Parasession on Pragmatics and Grammatical Theory, Chicago: Chicago Linguistics Society, pp. 115–35.
  • –––, 2012. Modals and Conditionals, Oxford: Oxford University Press.
  • Lance, Mark, 1991. “Probabilistic Dependence among Conditionals”, Philosophical Review, 100: 269–76.
  • Lewis, David, 1973. Counterfactuals, Oxford: Basil Blackwell.
  • –––, 1975. "Adverbs of Quantification" in E. Keenan (ed.), Formal Semantics of Natural Language, Cambridge: Cambridge University Press, pp. 3–15; reprinted in David Lewis (1998), Papers in Philosophical Logic, Cambridge: Cambridge University Press, pp. 5–20.
  • –––, 1976. “Probabilities of Conditionals and Conditional Probabilities”, Philosophical Review, 85: 297–315. Page references to Lewis 1986.
  • –––, 1986. Philosophical Papers (Volume 2), Oxford: Oxford University Press.
  • Lycan, William, 2001. Real Conditionals, Oxford: Oxford University Press.
  • Mackie, J., 1973. Truth, Probability and Paradox, Oxford: Clarendon Press.
  • McDermott, Michael, 1996. “On the Truth Conditions of Certain ‘If’-Sentences”, Philosophical Review, 105: 1–37.
  • McGee, Vann, 1985. “A Counterexample to Modus Ponens”, Journal of Philosophy, 82: 462–71.
  • –––, 1989. “Conditional Probabilities and Compounds of Conditionals”, Philosophical Review, 98: 485–542.
  • Milne, Peter, 1997. “Bruno de Finetti and the Logic of Conditional Events”, British Journal for the Philosophy of Science, 48: 195–232.
  • Ramsey, F. P., 1926. “Truth and Probability” in Ramsey 1990, pp. 52–94.
  • –––, 1929. “General Propositions and Causality” in Ramsey 1990, pp. 145–63.
  • –––, 1990. Philosophical Papers, ed. by D. H. Mellor. Cambridge University Press.
  • Ravenscroft, Ian (ed.), 2009, Mind, Ethics and Conditionals: Themes from the Philosophy of Frank Jackson, Oxford: Clarendon Press.
  • Read, Stephen, 1995. “Conditionals and the Ramsey Test”, Proceedings of the Aristotelian Society Supplementary Volume, 69: 47–65.
  • Stalnaker, R., 1968. “A Theory of Conditionals” in Studies in Logical Theory, American Philosophical Quarterly (Monograph Series, 2), Oxford: Blackwell, pp. 98–112. Reprinted in F. Jackson (ed.), 1991. Page references to 1991.
  • –––, 1970. “Probability and Conditionals”, Philosophy of Science, 37: 64–80. Reprinted in Harper, W. L., Stalnaker, R. and Pearce, G. eds. 1981.
  • –––, 1975. “Indicative Conditionals”, Philosophia, 5: 269–86, reprinted in F. Jackson (ed.), 1991.
  • –––, 1981. "A Defense of Conditional Excluded Middle" in Harper, Stalnaker and Pearce (eds.) op. cit., pp. 87–104.
  • –––, 1984. Inquiry, Cambridge MA: MIT Press.
  • –––, 2005. "Conditional Propositions and Conditional Assertions", in New Work on Modality, MIT Working Papers in Linguistics and Philosophy, Volume 51.
  • Stalnaker, R. and Jeffrey, R., 1994. “Conditionals as Random Variables”, in E. Eells and B. Skyrms (eds.), Probability and Conditionals, Cambridge: Cambridge University Press.
  • Thomson, James, 1990. “In Defense of ⊃”, Journal of Philosophy, 87: 56–70.
  • van Fraassen, Bas, 1976. "Probabilities of Conditionals", in Harper, W. and Hooker, C. eds., Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, Volume I. Dordrecht: Reidel, pp. 261–308.

Other Internet Resources

[Please contact the author with suggestions.]

Copyright © 2014 by
Dorothy Edgington <dorothy.edgington@philosophy.ox.ac.uk>
