Supplement to Russell's Moral Philosophy

Two Arguments for Emotivism: Ayer, Russell & Moore

Despite its tone of iconoclastic modernism, Ayer's Language, Truth and Logic (1936) is a highly derivative work, and its chief argument for emotivism is largely derived from Moore. Ayer was an admirer of Moore from way back. He read Principia Ethica as a teenager, at the instigation of the Bloomsbury aesthetician Clive Bell, who advised his readers to ‘run out this very minute and order a copy of [Moore's] masterpiece’ (quoted in Ayer, A Part of My Life: 54). Ayer took this advice and, like the members of the Bloomsbury circle when the book was first published in 1903, he swallowed Moore whole. It was not until his second year at Oxford that he ‘began to doubt whether “good” was a simple, indefinable non-natural quality.’ (Ayer, ‘My Mental Development’: 11.) Ayer may have come to doubt Moore's conclusion, but he continued to accept a large part of Moore's Open Question Argument (the OQA). Here is Moore's Open Question Argument again:

(1) ‘Are X things good?’ is a significant or open question for any naturalistic or metaphysical predicate ‘X’ (whether simple or complex).

(2) If two expressions (whether simple or complex) are synonymous this is evident on reflection to every competent speaker.

(3) The meaning of a predicate or property word is the property for which it stands. Thus if two predicates or property words have distinct meanings they name distinct properties.

From (1) and (2) it follows that

(4) ‘Good’ is not synonymous with any naturalistic or metaphysical predicate ‘X’ (or ‘goodness’ with any corresponding noun or noun-phrase ‘X-ness’).

From (3) and (4) it follows that

(5) Goodness is not identical with any natural or metaphysical property of X-ness.

In effect, Ayer accepted premises (1) and (2) and therefore sub-conclusion (4), extending the indefinability thesis from ‘good’ to ‘ethical concepts’ generally, including ‘right’ and ‘wrong’. ‘We have already rejected the “naturalistic” theories which are commonly supposed to provide the only alternative to “absolutism” in ethics … [and] begin by admitting that the fundamental ethical concepts are unanalysable’. (Ayer (1946): 107). In other words, ‘good’, like the other moral concepts, is not synonymous with any naturalistic predicate ‘X’. But Ayer was a verificationist. Synthetic propositions — which is what moral judgments would appear to be — are supposed to be either verifiable or senseless. But if ‘good’ is not synonymous with any naturalistic predicate ‘X’, then a fortiori it is not synonymous with any empirical predicate, by which I mean the kind of predicate that might contribute to the verification-conditions of an empirically verifiable proposition. Now if non-verifiable propositions are senseless — mere pseudo-propositions in fact — and if ‘good’ cannot help determine the verification-conditions of a verifiable proposition, then it would appear to follow that ‘good’ is senseless too. And this is precisely Ayer's thesis. Ethical concepts ‘are mere pseudo-concepts’. ‘The presence of an ethical symbol in a proposition adds nothing to its factual content.’ But although ethical concepts are mere pseudo-concepts and though they contribute nothing to the factual content of the sentences in which they appear, they are not totally devoid of meaning. They have a meaning, but that meaning is non-descriptive. Their function is to express the speaker's feelings of approval and disapproval, to arouse similar feelings in others and thus (indirectly) to stimulate action. We can represent Ayer's argument as a supplement to the OQA, taking sub-conclusion (4) as a premise.

(4) ‘Good’ is not synonymous with any naturalistic predicate ‘X’.

(4.a) All empirical predicates are naturalistic predicates.

[Assumption — but a pretty safe bet!]

Therefore

(4.b) ‘Good’ is not synonymous with any empirical predicate ‘X’.

[From (4) and (4.a).]

(4.c) A predicate is factually meaningful if and only if it is synonymous with an empirical predicate.

[Assumption — a consequence of verificationism.]

Therefore

(4.d) ‘Good’ is not factually meaningful.

[From (4.c) and (4.b).]

(4.e) ‘Good’ is not meaningless.

[Assumption: an obvious semantic fact.]

Therefore

(4.f) ‘Good’ has a non-factual meaning.

[From (4.d) and (4.e).]
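The derivation from (4) to (4.f) is a simple piece of propositional reasoning, and its validity can be checked mechanically. Here is a sketch in Lean, with each premise encoded as a propositional hypothesis. The exhaustiveness assumption hEx — that a meaningful predicate has either a factual or a non-factual meaning — is implicit in Ayer's argument rather than stated in the text, and the propositional encoding is my simplification:

```lean
-- A propositional sketch of Ayer's supplement to the OQA.
-- SynNat: 'good' is synonymous with some naturalistic predicate
-- SynEmp: 'good' is synonymous with some empirical predicate
example
    (SynNat SynEmp FactMean Meaningful NonFactMean : Prop)
    (h4  : ¬SynNat)                        -- (4)
    (h4a : SynEmp → SynNat)                -- (4.a): empirical ⇒ naturalistic
    (h4c : FactMean ↔ SynEmp)              -- (4.c): verificationist criterion
    (h4e : Meaningful)                     -- (4.e): 'good' is not meaningless
    (hEx : Meaningful → FactMean ∨ NonFactMean) :  -- implicit exhaustiveness
    NonFactMean := by
  -- (4.b): 'good' is not synonymous with any empirical predicate
  have h4b : ¬SynEmp := fun h => h4 (h4a h)
  -- (4.d): 'good' is not factually meaningful
  have h4d : ¬FactMean := fun h => h4b (h4c.mp h)
  -- (4.f): so its meaning, if any, is non-factual
  cases hEx h4e with
  | inl h => exact absurd h h4d
  | inr h => exact h
```

As the proof makes plain, all the philosophical weight is carried by the hypotheses, not the logic.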

This theory has the added merit, in Ayer's eyes, of explaining why it is that ‘good’ is indefinable. The reason why (1) ‘Are X things good?’ is an open question for any naturalistic ‘X’ and the reason why (4) ‘good’ is not synonymous with any naturalistic predicate is that ‘good’ is not, properly speaking, a predicate at all, since it cannot contribute to the truth-conditions of a sentence. What we are dimly aware of when we recognize ‘Are X things good?’ as open is that it is up to us whether to approve or disapprove of X. Facts are one thing, feelings are another, and there is no logical or analytic connection between the way things are and the way we feel about them.

All this is old hat. But it is worth rehearsing Ayer's argument because Russell's emotivism may have been motivated by a similar line of thought. Russell, of course, was not a verificationist. In his opinion there are synthetic propositions which are both unverifiable and factually meaningful. His example is ‘It snowed on Manhattan Island on the first of January in the year 1 A.D.’ Either it snowed that day or it didn't; both suppositions make perfect sense, though neither is likely to be verified. (See An Inquiry into Meaning and Truth: 277.) But Russell believed in a semantic thesis, his ‘Fundamental Principle’, which could be combined with the claim that we are not acquainted with anything non-natural to produce a similar argument for emotivism. The Fundamental Principle in a later formulation is ‘that [the] sentences we can understand must be composed of words with whose meaning we are acquainted’ (Schilpp, ed. (1944): 692/Papers 11: 27). This needs to be modified slightly if the argument is to work. We say not that the ‘sentences we can understand [when analyzed] must be composed of words with whose meaning we are acquainted’, but that the ‘factually significant [or truth-apt] sentences that we can understand must [ultimately] be composed of words with whose meanings we are acquainted.’ If we combine this with the thesis that we are not acquainted with anything non-natural we have the beginnings of an argument for non-cognitivism, as opposed to an argument against objectivism. This too can be represented as a supplement to the OQA, taking sub-conclusion (4) as a premise.

(4) ‘Good’ is not synonymous with any naturalistic predicate ‘X’.

(4.g) All factually significant predicates are definable (in use) in terms of the sense-data and the universals with which we are acquainted.

[Assumption: This is a consequence of Russell's ‘Fundamental Principle’ that to understand a proposition we must be acquainted with the referents of its ultimate constituents.]

(4.h) We are not acquainted with anything non-natural/Everything we are acquainted with is natural.

[Assumption, motivated in part by the considerations developed in §7.]

(4.i) If a predicate can be defined with reference to naturalistic entities, then it is a naturalistic predicate.

[Assumption.]

Therefore

(4.j) All factually significant predicates are naturalistic predicates.

[From (4.g), (4.h) and (4.i).]

Therefore

(4.k) ‘Good’ is not synonymous with any factually significant predicate.

[From (4) and (4.j).]

(4.l) A predicate is factually significant if and only if it is synonymous with a factually significant predicate.

[Assumption, but uncontroversial.]

Therefore

(4.m) ‘Good’ is not factually significant.

[From (4.k) and (4.l).]

(4.n) ‘Good’ has some kind of significance.

[Assumption: an obvious semantic fact.]

Therefore

(4.o) ‘Good’ has a non-factual significance.

[From (4.m) and (4.n).]
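The quantified core of this argument — the step from (4) and (4.j) to (4.k) — can likewise be sketched in Lean. The encoding of predicates as a type Pred, with synonymy to ‘good’ as a one-place relation Syn, is my simplification, not Russell's:

```lean
-- Pred: the type of predicates
-- Syn p: 'good' is synonymous with p
-- Nat p / FactSig p: p is naturalistic / factually significant
example (Pred : Type) (Syn Nat FactSig : Pred → Prop)
    (h4  : ∀ p, Nat p → ¬Syn p)        -- (4): no naturalistic synonym
    (h4j : ∀ p, FactSig p → Nat p) :   -- (4.j): from (4.g)–(4.i)
    ∀ p, FactSig p → ¬Syn p :=         -- (4.k)
  fun p hf => h4 p (h4j p hf)
```

Again the inference is trivially valid; the controversial work is done by (4) and by the Fundamental Principle lurking behind (4.j).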

Russell, like Ayer, could have reinforced his theory by explaining why ‘good’ is indefinable. If it really is an optative operator, designed to express a certain kind of desire, then naturally it cannot be given a naturalistic analysis. As for the Open Question, facts are one thing, desires are another, and there is no logical or analytic link between the way things are and the way I desire that everyone should desire them to be.

Given that they agree with Moore's argument down to sub-conclusion (4), how do Ayer and Russell avoid the conclusion that they both came to doubt, namely that ‘“good” [is] a simple, indefinable non-natural quality’? The inference from (4) to (5) proceeds via premise (3), and the obvious option for those wishing to avoid a non-natural property would be to reject that premise. But as we saw in §8, Russell seems to have accepted it, at least in an amended form, and it is not obvious to me that Ayer would have rejected it either. But though they might have accepted (3), both would have insisted on a further amendment:

(3″) The meaning of a predicate is the property for which it stands, so long as that predicate is a) a complete symbol and b) factually meaningful. Thus if two complete and factually meaningful predicates have distinct meanings they denote distinct properties.

Since both Russell and Ayer regarded ‘good’ as non-factual, this blocks the inference from (4) — a non-natural predicate ‘good’ — to (5) — a non-natural property of goodness.

What are we to make of these two arguments? Simplifying somewhat we can represent them both as instances of modus ponens, combining a conditional with a conjunction of semantic and philosophical claims. Here is Ayer's argument:

(A.1) a) ‘Good’ is not synonymous with any naturalistic predicate ‘X’ and b) a predicate is factually significant if and only if it is synonymous with a naturalistic [= empirical] predicate.

(A.2) If a) ‘good’ is not synonymous with any naturalistic predicate ‘X’ and b) a predicate is factually significant if and only if it is synonymous with a naturalistic [= empirical] predicate, then ‘good’ is not factually significant, i.e. such as to figure in truth-apt sentences.

(A.3) Therefore ‘good’ is not factually significant, i.e. such as to figure in truth-apt sentences.

And here is Russell's argument (or at least the argument that I have attributed to him):

(R.1) a) ‘Good’ is not synonymous with any naturalistic predicate ‘X’ and b) a predicate is factually significant if and only if it is synonymous with a naturalistic predicate, that is, a predicate definable (in use) in terms of the naturalistic objects and universals with which we are acquainted.

(R.2) If a) ‘good’ is not synonymous with any naturalistic predicate ‘X’, and b) a predicate is factually significant if and only if it is synonymous with a naturalistic predicate, that is, a predicate definable (in use) in terms of the naturalistic objects and universals with which we are acquainted, then ‘good’ is not factually significant, i.e. such as to figure in truth-apt sentences.

(R.3) Therefore ‘good’ is not factually significant, i.e. such as to figure in truth-apt sentences.

In both cases, the first premise combines Moore's sub-conclusion (4) that ‘good’ is not synonymous with any naturalistic predicate ‘X’ with a criterion of factual significance for predicates derived from a grand semantic theory, verificationism in the case of Ayer, and the Fundamental Principle in the case of Russell. In both cases the two conjuncts of the first premise are highly controversial, not to say intellectually suspect. But in both cases the second conditional premise is relatively uncontroversial: it simply affirms that if ‘good’ is a non-natural predicate and if non-natural predicates are not factually significant, then ‘good’ is not factually significant. Thus each argument is open to the kind of response that Moore made to idealists and to skeptics such as Hume.

The idealist argues that we don't really have flesh-and-blood hands (since they are, in some sense, mental rather than material entities) and the skeptic argues that we cannot know that we have flesh-and-blood hands. In both cases the arguments proceed from highly controversial philosophical premises. Moore waves his hands and proclaims that he does indeed have flesh-and-blood hands and, furthermore, that he knows that he does. Thus — on the assumption that their arguments are valid — at least some of the premises to which his opponents appeal must be false. This is not mere dogmatism, since Moore's modus tollens has a lot more going for it than his opponents' modus ponens. They argue that if philosophical principles P1 … Pn are true, Moore does not have flesh-and-blood hands or does not know that he does; but principles P1 … Pn are true; therefore Moore does not have flesh-and-blood hands or does not know that he does. Moore argues that if philosophical principles P1 … Pn are true, he does not have flesh-and-blood hands or does not know that he does; but Moore does have flesh-and-blood hands and, furthermore, he knows that he does; therefore at least some of the principles P1 … Pn are false. Both sets of arguments may be valid, but Moore's is more likely to be sound, since his platitudes are a lot more plausible than the speculative philosophical principles to which his opponents appeal. (See Moore (1993b), Essays 5, 7 & 9, Lycan (2001) & (2007) and Soames (2003) chapters 1 & 2.)
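The formal symmetry between the two strategies is easily exhibited. In the following Lean sketch (my encoding, not Moore's), P abbreviates the conjunction of the principles P1 … Pn and H the Moorean platitude. Both inferences are valid, so the dispute turns entirely on which extra premise, P or H, is the more plausible:

```lean
-- The skeptic's modus ponens: from the shared conditional and P, infer ¬H.
example (P H : Prop) (hcond : P → ¬H) (hP : P) : ¬H :=
  hcond hP

-- Moore's modus tollens: from the same conditional and H, infer ¬P.
example (P H : Prop) (hcond : P → ¬H) (hH : H) : ¬P :=
  fun hP => hcond hP hH
```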

Now a cognitivist about ethics — maybe even Moore himself — could reply to Ayer and Russell along much the same lines:

(A.1′) ‘Good’ is a factually significant predicate that can play a part in truth-apt sentences.

(A.2′) If a) ‘good’ is not synonymous with any naturalistic predicate ‘X’, and b) a predicate is factually significant if and only if it is synonymous with a naturalistic [= empirical] predicate, then ‘good’ is not factually significant, i.e. such as to figure in truth-apt sentences.

(A.3′) Therefore either it is not the case that a) ‘good’ is not synonymous with any naturalistic predicate ‘X’, or it is not the case that b) a predicate is factually significant if and only if it is synonymous with a naturalistic [= empirical] predicate.

(R.1′) ‘Good’ is a factually significant predicate that can play a part in truth-apt sentences.

(R.2′) If a) ‘good’ is not synonymous with any naturalistic predicate ‘X’, and b) a predicate is factually significant if and only if it is synonymous with a naturalistic predicate, that is, a predicate definable (in use) in terms of the naturalistic objects and universals with which we are acquainted, then ‘good’ is not factually significant, i.e. such as to figure in truth-apt sentences.

(R.3′) Therefore either it is not the case that a) ‘good’ is not synonymous with any naturalistic predicate ‘X’, or it is not the case that b) a predicate is factually significant if and only if it is synonymous with a naturalistic predicate, that is, a predicate definable (in use) in terms of the naturalistic objects and universals with which we are acquainted.

Both sets of arguments are valid, but which is more likely to be sound? The prize must go to the Moorean modus tollens arguments rather than the modus ponens arguments of Ayer or Russell. Indeed, these two ‘Moorean’ arguments are rather better than the original Moorean arguments on which they are modeled. The problem with Moore's polemical strategy is that it would seem to preclude anyone's ever arriving at surprising conclusions about anything. ‘Master Galileo, you have proved beyond reasonable doubt that if your premises are true then the earth moves. Your logic I allow is impeccable. But since it is clear as daylight that the earth does not move — since this is far more epistemically compelling than any of the premises from which you argue to the contrary — the obvious conclusion must be that some of your premises are false!’ A contemporary of Galileo's who argued along these lines would have insulated himself in error, as would a 19th Century biologist who tried to do down Darwin with similar arguments. W.G. Lycan, who both defends and deploys Moore's polemical strategy, replies that there is a considerable difference between the argument of Galileo's proto-Moorean opponent and the arguments of the actual Moore. Galileo's arguments are derived from empirical premises whereas the premises of Moore's idealist and skeptical opponents were just ‘philosophy junk’. The first kind of premise can prevail against common sense; not so, the second. The problem with this is that if you actually examine Galileo's arguments, what looks like ‘philosophy junk’ is conspicuous by its presence. Galileo certainly appeals to premises which transcend observation and even glories in reason's ability to overcome the apparent teachings of experience. More generally, a polemical strategy that depends on considerations of plausibility is fallible at best, since a proposition can be plausible but false or implausible but true.

That said, it is very implausible to suppose that for most of history most of us have misunderstood our own words, taking ‘good’ to be a factual predicate when in fact it was an optative operator or a device for expressing emotion. The world at large is independent of our intentions, which is why it is easy to get things wrong. But what we mean is in some sense up to us, which is why widespread and long-lasting error about the meanings of everyday words seems to be unlikely. ‘Good’ has usually been regarded as a factually significant predicate that can play a part in truth-apt sentences. We could not coherently tell tales about a ‘tree of the knowledge of good and evil’ (Genesis, 2:17) if statements about the good were semantically incapable of truth, and knowledge of good and evil a conceptual impossibility. (This is not to say that there are [non-trivial] truths about good and evil, let alone that we know them, only that neither truth nor knowledge is precluded by the very meanings of the words.) It is not just that premises (A.1′) and (R.1′) are highly plausible, having the backing of common sense. In this case, it is difficult to tell a coherent story about how common sense could be wrong. (See §6.7.) Meaning explains use, and if ‘good’ and ‘bad’ are used as if they are cognitive predicates, they must have a meaning that makes this possible. And it is hard to see what that could be except a cognitive meaning. There does not seem to be much daylight between seeming to be a factual or cognitive predicate and being a factual or cognitive predicate. Thus the initial premises of the Moorean arguments are pretty solid, whilst the second premises of the two arguments are agreed by both sides.

Contrast the arguments of Ayer and Russell. Premises (A.1) and (R.1) are both conjunctions in which one of the conjuncts is sub-conclusion (4) of the OQA. This is suspect in itself since it is derived from Premise (2) — that if two expressions are synonymous this is evident on reflection to every competent speaker. Premise (2) is problematic because it leads straight to the Paradox of Analysis. The Paradox of Analysis is this. Given (2), it is impossible for a philosophical analysis to be both true and interesting. For if the analysis is true, then the analysandum (the expression to be analyzed) must be synonymous with the analysans (the analyzing phrase or sentence). In which case, the analysis will tell us nothing new, since if two expressions are synonymous this is evident on reflection to every competent speaker. Suppose, on the other hand, that the analysis tells us something new and interesting; suppose, that is, that it is not evident to every competent speaker that the analysandum (the expression to be analyzed) is synonymous with the analysans (the analyzing phrase). Then by (2) the two expressions are not synonymous, since it is not evident to every competent speaker that they are. The usual response to this problem is to deny Premise (2), thus opening the way for non-obvious synonymies and hence for analyses that are both true and interesting. (See Pigden (2007) for more on Moore, the OQA and the Paradox of Analysis.) Of course, (4) could be true and (2) false, but in the absence of the OQA we no longer have a compelling reason to believe it. It is hardly likely to trump our everyday intuition that ‘good’ is a factual predicate.

What about the second conjuncts of the two arguments? In both (A.1) and (R.1), the second conjunct applies a speculative theory of meaning to the special case of predicates. For Ayer the theory is verificationism, the idea that a synthetic sentence is only factually meaningful if it is empirically verifiable. From this he derives the consequence that a predicate can only be factually meaningful if it has verifiable content, that is, if it can contribute to the verification-conditions of a verifiable proposition. Perhaps this inference is not as secure as Ayer takes it to be, but the real problem lies with the verificationism on which it depends.

The idea that a synthetic proposition is only factually meaningful if it is verifiable follows from the Wittgensteinian slogan that ‘the sense of a proposition is the method of its verification’. (McGuinness, ed. (1979), pp. 79, 227, 244, 247.) If a purported proposition has no method of verification then it has no [factual] sense or meaning. But the Wittgensteinian slogan suffers from a devastating objection. Generally speaking there is no such thing as the method of verification of a proposition; hence the method of its verification cannot constitute the proposition's meaning. Whether a given set of observations verifies a proposition depends upon what else we take for granted. On some assumptions, observations O1 … On will tend to confirm a proposition and on others not. As Quine famously put it, ‘our statements about the external world face the tribunal of experience, not individually, but only as a corporate body’ (Quine (1951), p. 49). Other problems arose when philosophers tried to formulate the verificationist criterion of meaning precisely. Just what does it mean for a proposition to be verifiable (or falsifiable) by experience? Successive formulations either excluded what their inventors meant to include — such as scientific laws and findings — or included what they meant to exclude — such as obviously metaphysical pronouncements, including such absurdities as ‘The Absolute is lazy’. (See Soames (2003), pp. 277-291 for a potted history.) But important as these difficulties of detail proved to be, a deeper criticism was suggested by Sir David Ross in his response to Ayer. (Ross (1939), p. 38.) He argues that verificationism is self-refuting however formulated. For it claims that a sentence is only factually or cognitively meaningful if it is either analytic, contradictory or empirically verifiable (or perhaps falsifiable).
But the verification principle — that a sentence is only factually or cognitively meaningful if it is either analytic, contradictory or empirically verifiable (or perhaps falsifiable) — is itself neither analytic, contradictory nor empirically verifiable or falsifiable. Thus if it is true, it is not cognitively meaningful, which means that it cannot be true. Some verificationists tried to get out of this difficulty by suggesting that the verificationist criterion of meaning should be read as a proposal rather than a proposition. The claim is not that, as a matter of fact, non-verifiable sentences are factually or cognitively meaningless, but that (how about this?) it might be a good idea to treat them as if they were. Verificationism is not a self-refuting attempt to tell it like it is (a purported truth which is factually meaningless if true, and so not true), but a non-cognitive suggestion about how some sentences should be regarded. But if verificationism degenerates into a mere suggestion, it loses its polemical bite since its opponents are at liberty to reject it. If you don't like Ayer's conclusions — and many don't — you can simply evade them by refusing to accept his proposal.

Cute as Ross's criticism is, I am inclined to think that it is wrong. True, it is a clever instance of a nifty polemical strategy. Philosophers are wont to claim that kosher propositions are all of kind X, when the claim itself is not of the kosher kind X. And it is generally a smart move to point this out. But not this time. For the verificationist criterion of factual meaning (broadly conceived) does meet its own standards for factual significance. (It claims that kosher — in this case factually meaningful — propositions are all of kind X, which is a kind of proposition to which it belongs.) Thus it is not (or need not be) self-refuting. True, it is not the kind of claim that can be conclusively verified or falsified (but then almost nothing is). But it is the kind of claim that can be confirmed or disconfirmed by the empirical evidence. Thus the problem is not that it is self-refuting. The problem is that it is empirically false.

What is the task of a theory of meaning? To explain how it is that marks on paper or sound patterns in the air manage to mean what they mean. Thus our everyday intuitions about what means what constitute the data that a theory of meaning has to explain. These are the facts that a theory of meaning has to fit. A theory of meaning for a language, L, would consist of two parts: general principles about the way words and phrases of various kinds work (how verbs manage to be meaningful, for instance) and particular theses relating to the vocabulary and grammar of L. The ‘predictions’ (or retro-dictions) of such a theory should correspond to the contents of a good dictionary, which itself merely codifies the linguistic intuitions of educated speakers. A theory of meaning is confirmed (to some extent) if it matches those intuitions, and disconfirmed if it does not.

Now, the verification principle is a high-level principle about the way that language works, confining factual meaning to claims of certain kinds. It would be included in the general part of a wide range of theories of meaning for specific languages. But its tendency would be to falsify such theories. Why so? Because it is an explicitly revisionist thesis. A large part of the point of the verification principle is to exclude as meaningless many claims that are widely thought to be meaningful (such as ‘God exists’). It was designed by Wittgenstein and the Vienna positivists as a philosophical weapon of mass destruction which would allow them to dismiss the people they disagreed with without having to argue against them in detail. But this means that if it is incorporated into a larger theory of meaning, that theory will tend to issue in false predictions. (Though which predictions it will issue in depends on which version of the principle we adopt.) It will ‘predict’ for example that ‘It snowed on Manhattan Island on the first of January in the year 1 A.D.’ (Russell's example) lacks a truth-value, which evidently it does not. It will ‘predict’ that (on certain assumptions) theism, atheism and agnosticism do not represent cognitively meaningful opinions (one of Ayer's more startling conclusions) or that ‘it can not be significantly asserted [or significantly denied] that men have immortal souls’ (Ayer again). It will ‘predict’ (if we adopt Neurath's variant) that terms like ‘cause’, ‘true’ and ‘explanation’ cannot be used to make cognitive claims. It would entail that we cannot give a rational reconstruction of large chunks of the past, since, in many cases, the thoughts which have directed people's actions would be damned by the verificationist as lacking in cognitive content. You can't explain someone's actions as due to a belief that his soul is in danger if it does not make sense to suppose that he has a soul.
Indeed, if Ayer had been strict with himself, he would not have been able to make sense of his own mental development since (as we have seen) it involved a nonsensical belief in the Moorean good. Thus history — or at least the kind of history which explains people's actions in terms of their beliefs and desires — would be riddled with islands of unintelligibility and would be largely condemned as bunk — not a happy consequence. For verificationism implies that many of our utterances don't mean anything at all and that others have a meaning that nobody dreamt of until the Twentieth Century. Thus if verificationism is added to a theory of meaning, it will tend to issue in predictions that are false to the facts, namely our everyday intuitions about what words mean. It is as if we had a theory to explain how ducks fly which had the surprising consequence that many of them don't, and that some that do, fly backwards. The reason for this empirical failure is not hard to find. Verificationism was never designed to explain the facts of linguistic usage but to modify those facts in the interests of a scientistic agenda. It is as if we had a theory of how ducks fly whose covert purpose was to stop many of them from flying by persuading them that for many ducks, flight is an impossibility. Verificationist philosophers have attempted to change linguistic usage in various ways, but the point of a theory of meaning is to explain it. Hence the empirical failure. (See Henle (1963) for a similar argument.)

Thus the verification principle fails, partly because of Quine's criticisms but mainly because, in so far as it can be clarified, it is not self-refuting, but empirically false. This is important for two reasons. For a start it suggests that Ayer's argument is unsound. Premise (A.1) combines the Moorean thesis a) — that ‘good’ is not synonymous with any naturalistic predicate ‘X’ — with the verificationist thesis b) — that a predicate is factually significant if and only if it is synonymous with an empirical predicate, that is a predicate that can play a part in verifiable propositions. Thesis a) is dubious since it is derived from Premise (2) of the OQA which leads straight to the Paradox of Analysis. And thesis b) is false since it depends upon verificationism which — in so far as it can be clarified — would appear to be empirically false. But there is more. In so far as verificationism leads to a surprising semantic conclusion such as emotivism, that is evidence that it is false. For a theory of meaning that aspires to fit the facts should not lead to such surprising semantic conclusions. A theory that ‘predicts’ that moral judgments mean something that nobody had ever thought of till the advent of Russell and Ayer is a theory that is contradicted by the evidence, namely the evidence of our linguistic intuitions. This gives us a principled reason for preferring the Moorean modus tollens to Ayer's modus ponens, and thus something better than flat-footed considerations of relative plausibility.

What about Russell? The Russellian argument differs only slightly from Ayer's. Premise (R.1) combines thesis a) — that ‘good’ is not synonymous with any naturalistic predicate ‘X’ — with thesis b) — that a predicate is factually significant if and only if it is synonymous with a predicate definable (in use) in terms of the naturalistic objects and universals with which we are acquainted. Thesis a) is dubious since it is derived from Premise (2) of the OQA which leads straight to the Paradox of Analysis. But what about thesis b)? That depends on Russell's Fundamental Principle ‘that [the] sentences we can understand must [ultimately] be composed of words with whose meaning we are acquainted’, together with the thesis that we are only acquainted with naturalistic things.

What can be said for the Fundamental Principle? Not much. It was the bane of Russell's existence as a philosopher and was always getting him into trouble. Indeed, if you catch Russell saying something weird or implausible, the chances are that the Fundamental Principle is at the back of it. Take, for example, his widely lampooned doctrine that ‘this’ is a proper name. This is derived, in part, from the Fundamental Principle. (Proper names are the ultimate constituents of sentences: their meaning consists in the objects to which they refer. They are the words ‘which are only significant because there is something that they mean, and if there were not this something, they would be empty noises not words’. By the Fundamental Principle, we must be acquainted with these somethings. But in 1918, the only non-predicates that Russell could think of which referred directly to the objects of our acquaintance were words like ‘this’ when used to denote sense-data. Hence ‘this’, in these uses, functions as a proper name. See The Philosophy of Logical Atomism: 62-63/Papers 8: 178-179.) Ditto, his equally bizarre doctrine that ordinary language is ambiguous, so that typically when ‘one person uses a word, he does not mean by it the same thing as another person means by it’. (By the Fundamental Principle ‘the meaning that you attach to your words must depend on the nature of the objects you are acquainted with’ and ‘since different people are acquainted with different objects [or sense-data]’ different people do not mean the same things by the same words. See The Philosophy of Logical Atomism: 56/Papers 8: 176.) Sometimes the Fundamental Principle drove Russell to the edge of despair. ‘But now [the universe] has shrunk to be no more than my own reflection in the windows of the soul through which I look out on the night of nothingness. 
The revolutions of the nebulae, the birth and death of stars are no more than convenient fictions in the trivial work of linking together my own sensations and perhaps those of other men not much better than myself’. (Autobiography: 393.) But it was not, as he thought, ‘the shadow physics of our time’ that threatened to imprison Russell in this solipsistic ‘dungeon’ but his own Fundamental Principle. For if the ‘sentences we can understand must be [ultimately] composed of words with whose meaning we are acquainted’ and if we are only acquainted with our own sense-data, it is hard to see how we can talk or even think about stars or nebulae if these are conceived as mind-independent entities. In the end, Russell was able to escape this intellectual prison with the aid of his theory of definite descriptions, but the point is that he would never have been at risk of arrest from the forces of solipsism had it not been for the Fundamental Principle. (See Maxwell (1974) for further details.)

Thus the Fundamental Principle is bad news, especially for a would-be scientific realist such as Russell. Furthermore, it does not sit well with Moore's Premise (2), from which thesis a) — that ‘good’ is not synonymous with any naturalistic predicate ‘X’ — is derived. If the Fundamental Principle is correct, and if we are only acquainted with sense-data and universals, then ‘I am looking at page 425 of Russell's Autobiography’ is equivalent to a long and involved sentence about black and white sense-data, in which the book itself features (if at all) as an unknown cause. Now it is certainly not obvious to every competent speaker that ‘I am looking at page 425 of Russell's Autobiography’ is equivalent to such a sentence. But by (2) — that if two expressions are synonymous this is evident on reflection to every competent speaker — if it is not evident to every competent speaker that two expressions are synonymous, it follows that they are not, in fact, synonymous. Hence ‘I am looking at page 425 of Russell's Autobiography’ is not equivalent to a long and involved sentence about black and white sense-data. But if the two expressions are not equivalent, then Russell is wrong. For either the Fundamental Principle is false or he is wrong about acquaintance, which is not confined to sense-data but can include such things as books.

This is not a decisive objection to the Russellian argument, however, since thesis (R.1) a) — that ‘good’ is not synonymous with any naturalistic predicate ‘X’ — might be true and the thesis from which it is derived — (2) that if two expressions are synonymous this is evident on reflection to every competent speaker — false. Hence the fact that, on certain assumptions, (2) is incompatible with (R.1) b) does not show that Premise (R.1) [including theses a) & b)] is inconsistent. But it does point to a fundamental problem with the Fundamental Principle.

The Fundamental Principle, like verificationism, is a high-level thesis about how meaning works. It says that factually significant sentences are meaningful in virtue of being analyzable into sentences whose constituents refer directly to the objects of our acquaintance. These are taken to be sense-data and perhaps the universals to which we have sensory or intellectual access. The Fundamental Principle would figure as one of the general principles shared by specific theories of meaning for specific languages L. Such theories are confirmed if they spit out predictions in broad agreement with our linguistic intuitions, and are disconfirmed if they do not. For it is the business of a theory of meaning to explain both our everyday intuitions about meaning and the facts of linguistic usage. But if this is the case, the Fundamental Principle would tend to sabotage any theory to which it was added. For it issues in predictions that fly in the face of the linguistic facts. It is a consequence of the Fundamental Principle that ‘Charles owns a copy of Russell's Autobiography’ said by me means something very different from ‘Charles owns a copy of Russell's Autobiography’ said by you. Since our sentences are only factually meaningful because they can be cashed out in terms of the sense-data with which we are acquainted, and since we are each acquainted with different sense-data, our two utterances cannot possibly be equivalent. Yet our everyday intuition that the two sentences are equivalent is the kind of linguistic datum that a theory of meaning ought to predict. It is a consequence of the Fundamental Principle that ‘Charles owns a copy of Russell's Autobiography’ is equivalent to some long complex sentence about sense-data in which the book itself figures (if at all) as an unsensed cause, to be singled out by an elaborate definite description.
Yet nobody has managed to formulate such a sentence, few would understand it if it were formulated, and even if it could be formulated and were understood, many would be inclined to reject it as not what the original utterance was trying to say. (‘I was not talking about the sense-data’ they would insist, ‘I was talking about the book and the fact that Charles happens to own it.’) If a thesis about meaning conflicts with our linguistic intuitions, that is evidence that it is false, and the Fundamental Principle conflicts with our linguistic intuitions.

This brings us back to the Russellian argument for non-cognitivism. This proceeds from two premises, the uncontentious Premise (R.2) — that if a) ‘good’ is not synonymous with any naturalistic predicate, and b) a predicate is factually meaningful if and only if it is definable (in use) in terms of the naturalistic objects and universals with which we are acquainted, then ‘good’ is not factually significant — together with its much more contentious antecedent, Premise (R.1) — that a) ‘good’ is not synonymous with any naturalistic predicate, and b) that a predicate is factually meaningful if and only if it is definable (in use) in terms of the objects with which we are acquainted. As we have already noted, thesis (R.1) a) is suspect since it is derived from Premise (2) of the OQA, which leads straight to the Paradox of Analysis. So too is thesis (R.1) b) since this is derived from Russell's Fundamental Principle, a thesis about meaning which is at odds with the empirical evidence. Thus the Russellian argument is, at best, highly dubious, and, at worst, unsound, since one of the premises appears to be false. But there is more. In so far as the Fundamental Principle leads to a surprising semantic conclusion such as emotivism, that is evidence that it is false. For a theory of meaning that aspires to fit the facts should be soporifically boring when it comes to its linguistic predictions. (This is not to say that it should contain no surprises but that the surprises should be confined to the explanatory structures and should not spill over into the empirical outputs.) A theory that ‘predicts’ that moral judgments mean something that nobody had ever thought of until Russell arrived on the scene is a theory that is contradicted by the evidence, namely the evidence of our linguistic intuitions. This gives us a principled reason for preferring the Moorean modus tollens to the Russellian modus ponens.
It is not just that the premises of the Moorean modus tollens are more plausible than the premises of the Russellian modus ponens (though they are, of course, more plausible). The Moorean premise, (R.1′), asserts the kind of datum — that ‘good’ can play a part in truth-apt sentences — that it is the business of a theory of meaning to explain. If a thesis about meaning denies such a datum — which is what Russell's Fundamental Principle threatens to do — then that is evidence that it is false. A theoretical claim is wrong if it fails to fit the facts. A fact is not wrong if it fails to fit a theory.

There are two lessons to be learned from all this. First, the general point. If a theory of meaning leads to surprising or revisionist conclusions, that is evidence that it is false. Thus in so far as verificationism and the Fundamental Principle suggest something as surprising as emotivism, this is evidence that they are false, not evidence that emotivism is true. But this is less conservative than it sounds. For though a theory that entails that we don't mean what we think we mean is probably false, a theory that entails that what we mean is false or even absurd may well be true. Widespread linguistic error verges on the inconceivable, but widespread factual error is perfectly possible. It is silly to suppose that God-talk is either meaningless or non-cognitive since this flies in the face of the linguistic evidence. It is not silly (or not silly in the same way) to suppose that there is no God. In the anti-realist struggle, error theories tend to win out against variants of non-cognitivism.

The second lesson is more Russell-specific. As we have seen, what sabotaged Russell's version of the error theory, and may have led him to abandon it, was his commitment to the Fundamental Principle. And the only Russellian argument for emotivism (as opposed to against Moorean objectivism) that we have managed to come up with likewise relied on the Fundamental Principle. But the evidence suggests that the Fundamental Principle is false. Thus a philosopher like Russell who thought that the moral facts, if any, would have to be non-natural, but could not believe in the Moorean good, would have done a lot better to adopt a different kind of error theory, an error theory unburdened by the Fundamental Principle, one which made no bones about positing non-existent but naturally indefinable properties. Such was the theory of J.L. Mackie.

Copyright © 2007 by
Charles Pigden <charles.pigden@stonebow.otago.ac.nz>