Supplement to Analysis
Conceptions of Analysis in Analytic Philosophy
- 1. Introduction
- 2. Frege
- 3. Russell
- 4. Moore
- 5. Wittgenstein
- 6. The Cambridge School of Analysis
- 7. Carnap and Logical Positivism
- 8. Oxford Linguistic Philosophy
- 9. Contemporary Analytic Philosophy
1. Introduction to Supplement
This supplement provides an account of the development of conceptions of analysis in analytic philosophy. The emergence of logical analysis as the distinctive form of analysis in early analytic philosophy is outlined in §6 of the main document.
2. Frege
Although Frege's work shows the enormous potential of logical analysis, it is not incompatible with other forms of analysis. Indeed, its whole point would seem to be to prepare the way for these other forms, as philosophers in the second phase of analytic philosophy came to argue (see The Cambridge School of Analysis). One such form is traditional decompositional analysis—understood, more specifically, as resolving a whole into its parts (e.g., a ‘thought’ or ‘proposition’ into its ‘constituents’). Decompositional analysis does indeed play a role in Frege's philosophy, but what is of greater significance is Frege's use of function-argument analysis, which stands in some tension with whole-part analysis.
In developing his logic in his first book, the Begriffsschrift, Frege made his key move in representing simple statements such as ‘Socrates is mortal’ not in subject-predicate form (‘S is P’, i.e., analyzing them into subject and predicate joined by the copula) but in function-argument form (‘Fx’)—taking ‘Socrates’ as the argument and ‘x is mortal’ as the function, which yields as value what Frege calls the ‘judgeable content’ of the statement when the argument place indicated by the variable ‘x’ is filled by the name ‘Socrates’. (I gloss over here the controversial issue as to how Frege understands functions, arguments and judgeable contents at this particular time. In his later work, he regards the result of ‘saturating’ a concept by an object as a truth-value.) It was this move that allowed him to develop quantificational theory, enabling him to analyze complex mathematical statements.
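To illustrate, using modern notation rather than Frege's own two-dimensional Begriffsschrift symbolism, ‘Socrates is mortal’ can be rendered ‘Ms’, with ‘s’ (‘Socrates’) as the argument and ‘Mx’ (‘x is mortal’) as the function. The advantage over subject-predicate analysis emerges most clearly with statements of multiple generality, which quantificational theory handles by binding the argument places with quantifiers. A standard illustration (not Frege's own example) is:
(MG) Every number has a successor: ∀x (Nx → ∃y Syx),
where ‘Nx’ abbreviates ‘x is a number’ and ‘Syx’ abbreviates ‘y is a successor of x’.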
To appreciate some of the philosophical implications of function-argument analysis, consider the example that Frege gives in the Begriffsschrift (§9):
(HLC) Hydrogen is lighter than carbon dioxide.
According to Frege, this can be analyzed in either of two ways, depending on whether we take hydrogen as the argument and is lighter than carbon dioxide as the function, or carbon dioxide as the argument and is heavier than hydrogen as the function. If we respected subject-predicate position, we might wish to express the latter thus:
(CHH) Carbon dioxide is heavier than hydrogen.
But on Frege's view, (HLC) and (CHH) have the same ‘content’ (‘Inhalt’), each merely representing alternative ways of ‘analyzing’ that content. There does seem to be something that (HLC) and (CHH) have in common, and function-argument analysis seems to permit alternative analyses of one and the same thing, since two different functions with different arguments can yield the same value.
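The point can be put schematically, again in modern notation rather than Frege's own. Write ‘h’ for hydrogen, ‘c’ for carbon dioxide, ‘Lxy’ for ‘x is lighter than y’ and ‘Hxy’ for ‘x is heavier than y’, where being heavier is simply the converse of being lighter (Hxy if and only if Lyx). Then:
(HLC′) Lhc: the function ‘Lxc’ (‘x is lighter than carbon dioxide’) with ‘h’ as argument;
(CHH′) Hch: the function ‘Hxh’ (‘x is heavier than hydrogen’) with ‘c’ as argument.
Two different functions, taking two different arguments, here yield one and the same content as value, which is just what licenses the talk of alternative analyses.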
However, in response to this, it might be suggested that both these analyses presuppose a more ultimate one, which identifies two arguments, hydrogen and carbon dioxide, and a relation (a function with two arguments). Michael Dummett (1981b, ch. 17), for example, has suggested that we distinguish between analysis and decomposition: there can be alternative decompositions, into ‘component’ concepts, but only one analysis, into unique ‘constituents’. (By ‘analysis’ Dummett means what has here been called ‘decomposition’, which—pace Dummett—seems to imply a unique end-product far more than ‘analysis’, and by ‘decomposition’ Dummett means function-argument analysis.) But which relation do we then choose, is lighter than or is heavier than? Clearly they are not the same, since one is the converse of the other. So if we accept that (HLC) and (CHH) have the same ‘content’—and there is undoubtedly something that they have in common—then it seems that there can be alternative analyses even at the supposedly ultimate level.
The issue, however, is controversial, and leads us quickly into the deepest problems in Frege's philosophy, concerning the criteria for sameness of ‘content’ (and of ‘Sinn’ and ‘Bedeutung’, into which ‘content’ later bifurcated), the fruitfulness of definitions, and the relationship between Frege's context principle and compositionality. For discussion, see Baker and Hacker 1984, ch. 6; Beaney 1996, ch. 8; Bell 1987, 1996; Bermúdez 2001; Currie 1985; Dummett 1981b, ch. 15; 1989; 1991a, chs. 9-16; Garavaso 1991; Hale 1997; Picardi 1993; Tappenden 1995b; Weiner 1990, ch. 3.
For more on Frege's philosophy, see the entry on Frege in this Encyclopedia.
3. Russell
In My Philosophical Development, Russell wrote: “Ever since I abandoned the philosophy of Kant and Hegel, I have sought solutions of philosophical problems by means of analysis; and I remain firmly persuaded, in spite of some modern tendencies to the contrary, that only by analysing is progress possible” (MPD, 11). Similar remarks are made elsewhere (cf. e.g. POM, 3; IMP, 1-2; PLA, 189; see the supplementary section on Descriptions of Analysis). Unfortunately, however, Russell never spells out just what he means by ‘analysis’—or rather, if we piece together his scattered remarks on analysis, they by no means reflect his actual practice. In a paper entitled ‘The Regressive Method of Discovering the Premises of Mathematics’, dating from 1907, for example, Russell talks of ‘analysis’ in the regressive sense, i.e., as the process of working back to ‘ultimate logical premises’, and this as an inductive rather than deductive process. In the chapter on analysis and synthesis in his abandoned 1913 manuscript, Theory of Knowledge, on the other hand, he defines ‘analysis’ as “the discovery of the constituents and the manner of combination of a given complex” (TK, 119). This best captures Russell's ‘official’ view, and decompositional analysis undoubtedly played a major role in Russell's thought (cf. Hylton 1996; Beaney 2002, §2.1). Yet as suggested in §6 of the main document, what characterizes the founding by Frege and Russell of (at least one central strand in) the analytic movement was the use made of logical analysis, in which a crucial element was the formalization of ordinary language statements into a logical language.
It was logical analysis that was involved in Russell's celebrated theory of descriptions, first presented in ‘On Denoting’ in 1905, which Ramsey called a ‘paradigm of philosophy’ and which played a major role in the establishment of analytic philosophy. In this theory, (Ka) is rephrased as (Kb), which can then be readily formalized in the new logic as (Kc):
(Ka) The present King of France is bald.
(Kb) There is one and only one King of France, and whatever is King of France is bald.
(Kc) ∃x[Kx & ∀y(Ky → y = x) & Bx].
The problems generated by attempting to analyze (Ka) decompositionally disappear in this analysis. Russell's problem was this: if there is no King of France, then the subject term in (Ka)—the definite description ‘the present King of France’—would seem to lack a meaning, in which case how could the whole have a meaning? Russell solved this problem by ‘analyzing away’ the definite description. The definite description has no meaning in itself, but (Ka) as a whole does have a meaning, a meaning that is given by (Kb), to which (Ka) is seen as equivalent. The meaning of (Kb) has still to be explained, but this can be done by drawing on the resources of the logical theory, in which the logical constants and quantificational structure revealed in (Kc) are clarified.
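On the standard reading of Russell's analysis, the dialectical payoff can be read off (Kc) itself. Because the definite description has been replaced by quantificational structure, the meaningfulness of (Ka) no longer requires that ‘the present King of France’ stand for anything:
If nothing satisfies ‘Kx’, then ∃x[Kx & ∀y(Ky → y = x) & Bx] is simply false, not meaningless.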
Just as Frege provided a diagnosis of what is wrong with the ontological argument, at least in its traditional form (see §6 of the main entry), so Russell showed how to avoid unnecessary reification of the purported objects of our discourse. If we can find an equivalent to a statement in which there is some problematic expression, then the problems drop away in the very process of ‘translating’ it into a logical language. Although Frege himself seems not to have fully appreciated the eliminativist possibilities opened up by this strategy of logical analysis, Russell clearly did, and in the process initiated a reductionist programme that has been influential ever since. Although, as Russell and Whitehead acknowledge in their preface to Principia Mathematica, “In all questions of logical analysis, our chief debt is to Frege” (PM, viii), Russell's own advance lay in extending logical analysis and in suggesting the possibilities of eliminativism.
For detailed discussion of Russell's theory of descriptions and its development, see Coffa 1991, ch. 6; Griffin 1996; Hylton 1990, ch. 6; Noonan 1996; Sainsbury 1979, ch. 4.
4. Moore
Moore is generally regarded as one of the founders of analytic philosophy, yet his own early conception of analysis is surprisingly traditional. In ‘The Nature of Judgement’, published in 1899, he sees analysis simply as the decomposition of complex concepts (which is what propositions were for Moore at the time) into their constituents: “A thing becomes intelligible first when it is analysed into its constituent concepts” (NJ, 8). This conception underlies the main theses of Moore's first major work, Principia Ethica (1903), including his famous ‘open question’ argument.
In the first chapter, entitled ‘The Subject-Matter of Ethics’, Moore considers how ‘good’ is to be defined. By ‘definition’ here Moore means ‘real’ rather than ‘nominal’ definition, concerned not with the meaning of a word but with the nature of the object denoted (cf. PE, 6). He comes to the conclusion that ‘good’ is indefinable, since good has no parts into which it can be decomposed:
My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is. Definitions of the kind that I was asking for, definitions which describe the real nature of the object or notion denoted by a word, and which do not merely tell us what the word is used to mean, are only possible when the object or notion in question is something complex. You can give a definition of a horse, because a horse has many different properties and qualities, all of which you can enumerate. But when you have enumerated them all, when you have reduced a horse to his simplest terms, then you no longer define those terms. They are simply something which you think of or perceive, and to any one who cannot think of or perceive them, you can never, by any definition, make their nature known. (PE, 7.)
Insofar as something is complex, according to Moore, it can be ‘defined’ in terms of its component parts, and, unless we are to go on ad infinitum, we must eventually reach simple parts, which cannot themselves be defined (PE, 7-8). Since ‘good’, like ‘yellow’, is not a complex notion, it is indefinable.
Moore's ‘open question’ argument is then offered to support his claim that ‘good’ is indefinable. Consider a proposed definition of the form:
(G) Good is X.
(Suggested candidates for ‘X’ might be ‘that which causes pleasure’ or ‘that which we desire to desire’; cf. PE, 15-16.) Then either ‘good’ means the same as ‘X’, or it does not. If it does, then the definition is trivial, since ‘analytic’; but if it does not, then the definition is incorrect. But for any substitution for ‘X’—other than ‘good’ itself, which would obviously make (G) analytic—we can always raise the question (i.e., it is always an ‘open question’) as to whether (G) is true; so ‘X’ cannot mean the same as ‘good’ and hence cannot be offered as a definition of ‘good’. In particular, any attempt at providing a naturalistic definition of ‘good’ is bound to fail, the contrary view being dubbed by Moore the ‘naturalistic fallacy’.
This argument has been influential—and controversial—in metaethical discussions ever since. But in its general form what we have here is the paradox of analysis. (Although the problem itself goes back to the paradox of inquiry formulated in Plato's Meno, and can be found articulated in Frege's writings too, the term ‘paradox of analysis’ was indeed first used in relation to Moore's work, by Langford in 1942.) Consider an analysis of the form ‘A is C’, where A is the analysandum (what is analysed) and C the analysans (what is offered as the analysis). Then either ‘A’ and ‘C’ have the same meaning, in which case the analysis expresses a trivial identity; or else they do not, in which case the analysis is incorrect. So it would seem that no analysis can be both correct and informative.
There is a great deal that might be said about the paradox of analysis. At the very least, it seems to cry out for a distinction between two kinds of ‘meaning’, such as the distinction between ‘sense’ and ‘reference’ that Frege drew, arguably precisely in response to this problem (see Beaney 1996, ch. 5). An analysis might then be deemed correct if ‘A’ and ‘C’ have the same reference, and informative if ‘C’ has a different, or more richly articulated, sense than ‘A’. In his own response, when the paradox was put to him in 1942, Moore talks of the analysandum and the analysans being the same concept in a correct analysis, but having different expressions. But he admitted that he had no clear solution to the problem (RC, 666). And if this is so, then it is equally unclear that no definition of ‘good’—whether naturalistic or not—is possible.
However, if Moore provided no general solution to the paradox of analysis, his work does offer clarifications of individual concepts, and his later writings are characterized by the painstaking attention to the nuances of language that was to influence Oxford linguistic philosophy, in particular.
For fuller discussion of Moore's conception of philosophical analysis, see Baldwin 1990, ch. 7; Bell 1999.
5. Wittgenstein
In the preface to his first work, the Tractatus Logico-Philosophicus, Wittgenstein records his debt to both Frege and Russell. From Frege he inherited the assumptions that the logic that Frege had developed was the logic of our language and that propositions are essentially of function-argument form. “Like Frege and Russell I construe a proposition as a function of the expressions contained in it.” (TLP, 3.318; cf. 5.47.) From Russell he learnt the significance of the theory of descriptions. “It was Russell who performed the service of showing that the apparent logical form of a proposition need not be its real one.” (TLP, 4.0031.) Unlike Frege and Russell, however, he thought that ordinary language was in perfect logical order as it was (TLP, 5.5563). The aim was just to show how this was so through the construction of an ideal notation rather than an ideal language, revealing the underlying semantic structure of ordinary propositions no longer obscured by their surface syntactic form.
Arguably unlike Frege, too, Wittgenstein was convinced at the time of the Tractatus that “A proposition has one and only one complete analysis” (TLP, 3.25). The characteristic theses of the Tractatus result from thinking through the consequences of this, in the context of Fregean logic. Propositions are seen as truth-functions of elementary propositions (4.221, 5, 5.3), and elementary propositions as functions of names (4.22, 4.24). The meaning of each name is the simple object that it stands for (3.203, 3.22), and these simple objects necessarily exist as the condition of the meaningfulness of language (2.02ff.). For Wittgenstein, the existence of simple objects was guaranteed by the requirement that sense be determinate (3.23; cf. NB, 63). It was in this way that Wittgenstein reached metaphysical conclusions by rigorously pursuing the implications of his logical views. As he noted in his Notebooks in 1916, “My work has extended from the foundations of logic to the nature of the world” (NB, 79).
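A schematic illustration may help here. The general form of the proposition in the Tractatus is given via repeated joint negation (the N-operator), but the basic idea can be conveyed with familiar connectives. Elementary propositions are written as functions of names, e.g. ‘fa’, ‘φ(a,b)’ (TLP, 4.24); and a non-elementary proposition such as
(TF) fa ∨ ¬φ(a,b)
is a truth-function of its elementary constituents: its truth-value is settled entirely by the truth-values assigned to ‘fa’ and ‘φ(a,b)’.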
According to Wittgenstein, then, analysis—in principle—takes us to the ultimate constituents of propositions, and indeed, to the nature of the world itself. That Wittgenstein was unable to give any examples of simple objects was not seen as an objection to the logical conception itself. Equally definite conclusions were drawn as far as our thought was concerned. “If we know on purely logical grounds that there must be elementary propositions, then everyone who understands propositions in their unanalysed form must know it.” (TLP, 5.5562.) The claim might seem obviously false, but it was precisely the task of analysis to bring out what we only tacitly know.
This whole logical and metaphysical picture was dismantled in Wittgenstein's later work (see especially PI, §§1-242). The assumption that Fregean logic provides the logic of language and the world was rejected, and the many different uses of language were stressed. The idea that names mean their bearers, the various theses of functionality and compositionality, and the associated appeal to tacit processes of generating meaning were criticized. On Wittgenstein's later view, “nothing is hidden” (PI, §435; cf. Malcolm 1986, 116); philosophy is simply a matter of getting clear about what is already in the public domain—the grammar of our language (PI, §§122, 126).
Our investigation is therefore a grammatical one. Such an investigation sheds light on our problem by clearing misunderstandings away. Misunderstandings concerning the use of words, caused, among other things, by certain analogies between the forms of expression in different regions of language.—Some of them can be removed by substituting one form of expression for another; this may be called an “analysis” of our forms of expression, for the process is sometimes like one of taking a thing apart. (PI, §90.)
Wittgenstein's earlier conception of analysis, as combining logical analysis with decompositional analysis, has given way to what has been called ‘connective’ analysis (Strawson 1992, ch. 2; Hacker 1996, ch. 5). Given how deeply embedded that earlier conception was in the whole metaphysics of the Tractatus, the critique of the Tractatus has been seen by some to imply the rejection of analysis altogether and to herald the age of ‘post-analytic’ philosophy. But even Wittgenstein himself does not repudiate analysis altogether, although (as the passage just quoted suggests) he does tend to think of ‘analysis’ primarily in its crude decompositional sense. Not only may logical analysis, in the sense of ‘translating’ into a logical language, still have value in freeing us from misleading views of language, but ‘connective’ analysis is still worthy of being called ‘analysis’ (as we shall see in the next three sections).
For further discussion, see Baker and Hacker 1980, chs. 2-3; Carruthers 1990, ch. 7; Glock 1996, 203-8; Hacker 1996, chs. 2, 5; Malcolm 1986, chs. 6-7.
6. The Cambridge School of Analysis: Logical and Metaphysical Analysis
The Cambridge School of Analysis, as it was known at the time, was primarily active in the 1930s. Based in Cambridge, it drew its inspiration from the logical atomism of Russell and Wittgenstein and the earlier work of Moore. As well as Moore himself, its central figures included John Wisdom, Susan Stebbing, Max Black and Austin Duncan-Jones. Together with C.A. Mace and Gilbert Ryle, Stebbing and Duncan-Jones (who was its first editor) founded the journal Analysis, which first appeared in November 1933 and which remains the flagship of analytic philosophy today.
The paradigm of analysis at this time was Russell's theory of descriptions, which (as we have seen in relation to Russell and Wittgenstein above) opened up the whole project of rephrasing propositions into their ‘correct’ logical form, not only to avoid the problems generated by misleading surface grammatical form, but also to reveal their ‘deep structure’. Embedded in the metaphysics of logical atomism, this gave rise to the idea of analysis as the process of uncovering the ultimate constituents of our propositions (or the primitive elements of the ‘facts’ that our propositions represent).
This characterization suggests a distinction that has already been implicitly drawn, and which was first explicitly drawn in the 1930s by Susan Stebbing (1932, 1933b, 1934) and John Wisdom (1934), in particular, between what was called ‘logical’ or ‘same-level’ analysis and ‘philosophical’ or ‘metaphysical’ or ‘reductive’ or ‘directional’ or ‘new-level’ analysis. The first translates the proposition to be analyzed into better logical form, whilst the second aims to exhibit its metaphysical presuppositions. In Russell's case, having ‘analyzed away’ the definite description, what is then shown is just what commitments remain—to logical constants and concepts, which may in turn require further analysis to ‘reduce’ them to things of our supposed immediate acquaintance.
The value of drawing this distinction is that it allows us to accept the first type of analysis but reject the second, which is just what Max Black (1933) did in responding to Stebbing (1933b). Attacking the idea of metaphysical analysis as uncovering facts, he considers the following example:
(E) Every economist is fallible.
Black suggests that a metaphysical analysis, on Stebbing's conception, at least at an intermediate level, would yield the following set of facts:
(E#) Maynard Keynes is fallible, Josiah Stamp is fallible, etc.
Yet (E) does not mean the same as (E#), Black objects, unless ‘means’ is being used loosely in the sense of ‘entails’. But analysis cannot exhibit the propositions entailed, since this would require knowing, in this example, the name of every economist. The correct analysis, Black suggests, is simply:
(E*) (x) (x is an economist) entails (x is fallible).
This is a logical analysis of structure rather than a metaphysical uncovering of facts. (1933, 257.)
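For comparison, and leaving open how exactly Black's ‘entails’ is to be understood, (E*) might be rendered in more familiar quantifier notation as:
(E**) ∀x (Ex → Fx),
where ‘Ex’ is ‘x is an economist’, ‘Fx’ is ‘x is fallible’, and the arrow carries whatever force Black's ‘entails’ has rather than mere material implication. The crucial point is that the variable is bound by the quantifier: no enumeration of economists, and hence no knowledge of their names, is required to give the analysis.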
Similar arguments might be offered in the case of other general propositions, which, together with negative propositions, proved particularly resistant to ‘reductive’ analysis. The rejection of reductive analysis in favour of logical analysis, and later linguistic analysis, came to characterize the next phase of analytic philosophy.
For further discussion, see Beaney 2002b; Hacker 1996, ch. 4; Passmore 1966, ch. 15; Urmson 1956.
7. Carnap and Logical Positivism: Quasi-analysis and Explication
The rejection of metaphysical analysis is characteristic of logical positivism, which developed in Vienna during the 1920s and 1930s. The central figure was Rudolf Carnap, who was influenced not only by Frege, Russell and Wittgenstein but also by neo-Kantianism (see Friedman 2000, Richardson 1998). His work can be seen as marking the transition to logical and linguistic forms of analysis unencumbered, at least officially, by metaphysical baggage.
Carnap's key methodological conception in his first major work, the Aufbau (1928), is that of quasi-analysis. Carnap held that the fundamental ‘units’ of experience were not the qualities (the colours, shapes, etc.) involved in individual experiences, but those experiences themselves, taken as indivisible wholes. But this meant that analysis—understood in the decompositional sense—could not yield these qualities, precisely because they were not seen as constituents of the elementary experiences (1928, §68). Instead, they were to be ‘constructed’ by quasi-analysis, a method that mimics analysis in yielding ‘quasi-constituents’, but which proceeds ‘synthetically’ rather than ‘analytically’ (1928, §§ 69, 74).
In essence, Carnap's method of quasi-analysis is just that method of logical abstraction that Frege had used in §62 of the Grundlagen (albeit without seeing it as ‘abstraction’). An equivalence relation holding between things of one kind (concepts in Frege's case) is used to define or ‘construct’ things of another kind (numbers in Frege's case). Just as numbers are not constituents of the concepts to which they are ascribed, but can be constructed from appropriate equivalence relations, so too can other ‘quasi-constituents’ be constructed. (For detailed discussion of quasi-analysis, and the complications and difficulties that it gives rise to, see Goodman 1977, ch. 5; Richardson 1998, ch. 2.)
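The abstraction pattern can be stated schematically. In Frege's case, the equivalence relation is equinumerosity (one-one correlability) between concepts, and the abstracted ‘objects’ are numbers:
(N=) The number of Fs = the number of Gs if and only if the Fs and the Gs can be correlated one-to-one.
More generally, where R is an equivalence relation on items of a given kind, one may introduce abstracts such that [a] = [b] if and only if a bears R to b, the new items being ‘constructed’ from, without being constituents of, the old. (The schema is offered here only as an illustration of the general pattern, not as Carnap's own formulation.)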
Carnap's use of the term ‘quasi-analysis’ is revealing, for the ‘quasi’ suggests that he is still in thrall to the decompositional conception of analysis, despite his recognition that there are other forms of analysis, such as those that use abstraction instead. By the early 1930s, however, Carnap is happy to use the term ‘analysis’—or more specifically, ‘logical analysis’—for methods of abstraction and construction. In a paper called ‘The Method of Logical Analysis’, given at a conference in 1934, for example, he wrote: “The logical analysis of a particular expression consists in the setting-up of a linguistic system and the placing of that expression in this system.” (1936: 143.) By this time, Carnap's ‘linguistic turn’ had occurred (see Carnap 1932, 1934); but the conception underlying the Aufbau remained: analysis involves exhibiting the structural relations of something by locating it in an abstract theoretical system.
In his later work Carnap talks of analysis as ‘explication’, though this also goes back to the Aufbau, where Carnap talked of ‘rational reconstruction’. (The connection between the two ideas is made clear in Carnap's preface to the 2nd edition of the Aufbau). In Meaning and Necessity (1947), Carnap characterizes explication as follows:
The task of making more exact a vague or not quite exact concept used in everyday life or in an earlier stage of scientific or logical development, or rather of replacing it by a newly constructed, more exact concept, belongs among the most important tasks of logical analysis and logical construction. We call this the task of explicating, or of giving an explication for, the earlier concept … (1947: 8-9.)
Carnap gives as examples Frege's and Russell's logicist explication of number terms such as ‘two’—“the term ‘two’ in the not quite exact meaning in which it is used in everyday life and in applied mathematics”—and their different explications of definite descriptions (ibid.).
A fuller discussion of explication is provided in the first chapter of Logical Foundations of Probability (1950), where Carnap offers criteria of adequacy for explication, and gives as his main example the concept of temperature as explicating the vaguer concept of warmth. The idea of a scientifically defined concept replacing an everyday concept may be problematic, but the idea that analysis involves ‘translating’ something into a richer theoretical system is not only characteristic of a central strand in analytic philosophy but has also been fruitful throughout the history of philosophy. In effect, it originates in ancient Greek geometry, though it can be seen more prominently in analytic geometry (see the supplementary section on Descartes and Analytic Geometry). It is not therefore new, but it was certainly foregrounded in philosophy and given a modern lease of life in the context of the new logical systems developed by Frege, Russell and Carnap.
For further discussion of Carnap's methodology, see Beth 1963; Coffa 1991, Part II; Proust 1989, Part IV; Strawson 1963; Uebel 1992.
8. Oxford Linguistic Philosophy: Linguistic and Connective Analysis
Michael Dummett (1991a, 111) has suggested that the precise moment at which the ‘linguistic turn’ in philosophy was taken is §62 of Frege's Grundlagen, where in answer to the question as to how numbers are given to us, Frege proposes to define the sense of a proposition in which a number term occurs. Dummett has also stated that ‘the fundamental axiom of analytical philosophy’ is that “the only route to the analysis of thought goes through the analysis of language” (1993, 128). Yet both Frege and Russell were hostile to ordinary language, and the ‘linguistic turn’ was only properly taken in Wittgenstein's Tractatus, before being consolidated in the work of Carnap in the early 1930s. But Dummett's axiom has been held by many analytic philosophers and it was certainly characteristic of Oxford philosophy in the two decades or so after the Second World War.
Gilbert Ryle can be taken as representative here. In one of his earliest works, dating from before the war, he had argued that language is ‘systematically misleading’ (1932), although as he himself later remarked (in Rorty 1967, 305), he was still under the influence of the idea that there was always a ‘correct’ logical form to be uncovered (see §6 of the main document). But with the breakdown of logical atomism (see §6 above), the emphasis shifted to the careful description of what Ryle called the ‘logical geography’ of our concepts. Ryle's most important work was The Concept of Mind, published in 1949, in which he argued that the Cartesian dogma of the ‘Ghost in the Machine’ was the result of a ‘category-mistake’, confusing mental descriptions with the language of physical events. Again, Ryle was later critical of the implication that the single notion of a category-mistake could function as a ‘skeleton-key’ for all problems (1954, 9); but the detailed accounts of individual concepts that he provided in his work as a whole demonstrated the power and value of linguistic analysis, and offered a model for other philosophers. In chapter 2, for example, he draws an important distinction between knowing how and knowing that. There are many things that I know how to do—such as ride a bicycle—without being able to explain what I am doing, i.e., without knowing that I am following such-and-such a rule. The temptation to assimilate knowing how to knowing that must thus be resisted.
J. L. Austin was another influential figure in Oxford at the time. Like Ryle, he emphasized the need to pay careful attention to our ordinary use of language, although he has been criticized for valuing subtle linguistic distinctions for their own sake. He was influential in the creation of speech-act theory, with such distinctions as that between locutionary, illocutionary and perlocutionary acts (Austin 1962a). Although Austin shared Ryle's belief that reflection on language could resolve traditional philosophical problems, linguistic analysis has since come to be employed more and more as a tool in the construction of theories of language. But one good illustration of the importance of such reflection for philosophy occurs in section IV of Austin's book Sense and Sensibilia (1962b), where Austin considers the various uses of the verbs ‘appear’, ‘look’ and ‘seem’. Compare, for example, the following (1962b, 36):
(1) He looks guilty.
(2) He appears guilty.
(3) He seems guilty.
There are clearly differences here, and thinking through such differences enables one to appreciate just how crude some of the arguments are for theories of perception that appeal to ‘sense-data’.
Ryle, in particular, dominated the philosophical scene at Oxford (and perhaps in Britain more generally) in the 1950s and 1960s. He was Waynflete Professor of Metaphysical Philosophy from 1945 to 1968 and Editor of Mind from 1947 to 1971. His successor in the chair was P.F. Strawson, whose critique of Russell's theory of descriptions in his own seminal paper of 1950, ‘On Referring’, and his Introduction to Logical Theory of 1952 had also helped establish ordinary language philosophy as a counterweight to the tradition of Frege, Russell and Carnap. The appearance of Individuals in 1959 and The Bounds of Sense in 1966 signalled a return to metaphysics, but it was a metaphysics that Strawson called ‘descriptive’ (as opposed to ‘revisionary’) metaphysics, aimed at clarifying our fundamental conceptual frameworks. It is here that we can see how ‘connective’ analysis has replaced ‘reductive’ analysis; and this shift was explicitly discussed in the work Strawson published shortly after he retired, Analysis and Metaphysics (1992). Strawson notes that analysis has often been thought of as “a kind of breaking down or decomposing of something” (1992, 2), but points out that it also has a more comprehensive sense (1992, 19), which he draws on in offering a ‘connective model’ of analysis to contrast with the ‘reductive or atomistic model’ (1992, 21). Our most basic concepts, on this view, are ‘irreducible’, but not ‘simple’:
A concept may be complex, in the sense that its philosophical elucidation requires the establishing of its connections with other concepts, and yet at the same time irreducible, in the sense that it cannot be defined away, without circularity, in terms of those other concepts to which it is necessarily related. (1992, 22-3.)
Such a view is not new. The point had also been made by A.C. Ewing, for example, in a book on ethics published in 1953. Responding directly to Moore's arguments in Principia Ethica (see §4 above), Ewing remarks that “To maintain that good is indefinable is not to maintain that we cannot know what it is like or that we cannot say anything about it but only that it is not reducible to anything else” (1953, 89). Whatever one's view of reductionist programmes, an essential part of philosophy has always been the clarification of our fundamental concepts. Reflected in the idea of connective analysis, it is perhaps this, above all, that has allowed talk of ‘analytic’ philosophy to continue despite the demise of logical atomism and logical positivism.
For further discussion, see Baldwin 2001; Hacker 1996, ch. 6; Lyons 1980; Passmore 1966, ch. 18; Rorty 1967; Stroll 2000, ch. 6; Warnock 1989.
9. Contemporary Analytic Philosophy: The Varieties of Analysis
As mentioned at the beginning of this entry, analytic philosophy should really be seen as a set of interlocking subtraditions held together by a shared repertoire of conceptions of analysis upon which individual philosophers draw in different ways. There are conflicts between these various subtraditions. In his inaugural lecture of 1969, ‘Meaning and Truth’, Strawson spoke of a ‘Homeric struggle’ between theorists of formal semantics, as represented in their different ways by Frege, the early Wittgenstein and Chomsky, and theorists of communication-intention, as represented by Austin, Paul Grice and the later Wittgenstein (1969, 171-2). The ideas of the former were to be developed, most notably, by Donald Davidson and Michael Dummett, and the ideas of the latter by Strawson himself and John Searle; and the debate has continued to this day, ramifying into many areas of philosophy. Nor is there agreement on what Dummett called the ‘fundamental axiom’ of analytic philosophy, that the analysis of language is prior to the analysis of thought (1993, 128). As Dummett himself noted (ibid., 4), Gareth Evans's work, The Varieties of Reference (1982), would seem to put him outside the analytic tradition, so characterized. To suggest that he only remains inside in virtue of “adopting a certain philosophical style and … appealing to certain writers rather than to certain others” (Dummett 1993, 5) is already to admit the inadequacy of the characterization.
Since the 1960s, the centre of gravity of analytic philosophy has shifted towards North America, counterbalanced slightly by the blossoming in recent years of analytic philosophy in continental Europe and South America and its continued growth in Australasia. Although many of the logical positivists—most notably, Carnap—emigrated to the United States in the 1930s, it took a while for their ideas to take root and develop. Quine is the towering figure here, and his famous critique of Carnap's analytic/synthetic distinction (Quine 1951) was instrumental in inaugurating a view of philosophy as continuous with the natural sciences, with the corresponding rejection of the view that there was anything distinctive about conceptual analysis. His critique was questioned at the time by Grice and Strawson (1956), but it is only in the last few years that the issue has been revisited with a more charitable view of Carnap (Ebbs 1997, Part II; Friedman 1999, ch. 9; Richardson 1998, ch. 9).
One recent defence of conceptual analysis, with a qualified rejection of Quine's critique of analyticity, has been offered by Frank Jackson in his book, From Metaphysics to Ethics (1998). On Jackson's view, the role of conceptual analysis is to make explicit our ‘folk theory’ about a given matter, elucidating our concepts by considering how individuals classify possibilities (1998, 31-3). To the extent that it involves ‘making best sense’ of our responses (ibid., 36), it is closer to what Quine called ‘paraphrasing’ (1960, §§ 33, 53) than the simple recording of our ordinary intuitions (Jackson 1998, 45). Jackson argues for a ‘modest’ role for conceptual analysis, but in so far as he admits that a certain “massaging of folk intuitions” may be required (ibid., 47), it is not clear that his conception is as neutral as he suggests. Consider, for example, his central argument in chapter 4, offered in defence of the view that colours are primary qualities of objects (ibid., 93):
(Pr. 1) Yellowness is the property of objects putatively presented to subjects when those objects look yellow.
(Pr. 2) The property of objects putatively presented to subjects when the objects look yellow is at least a normal cause of their looking yellow.
(Pr. 3) The only causes (normal or otherwise) of objects' looking yellow are complexes of physical qualities.
(Conc.) Yellowness is a complex of the physical qualities of objects.
(Pr. 1) exemplifies what Jackson calls our “prime intuition about colour”, (Pr. 2) is a “conceptual truth about presentation”, and (Pr. 3) is the empirical truth that is required to reach the metaphysical conclusion (Conc.) that ‘locates’ yellowness in our ontology. (Pr. 1) is intended to encapsulate our ordinary ‘folk view’. But as it stands it is ambiguous. Does (Pr. 1) say that there is a property, but one about which we are unsure whether it is really presented to us or not, or that the property itself is only putative? The latter reading is closest to the ‘triviality’ Jackson says he wants as his “secure starting-place”, which might be better expressed as “yellowness is the property objects look to have when they look yellow” (cf. 1998, 89); but it is the former that is doing the work in the argument. If the property itself is only putative (i.e., if colours are not properties of objects at all, as some people have held), then (Pr. 2) is false; at the very least, it is not a conceptual truth that putative properties can be normal causes. This is not to say that Jackson is wrong about the primary quality view of colour. But it does illustrate just what assumptions may already be involved in articulating ‘folk intuitions’, even on a supposedly ‘modest’ understanding of conceptual analysis. In the end, as the history of conceptions of analysis shows, no conception can be dissociated from the logical and metaphysical context in which it operates.
Analytic philosophy, then, is a broad and still ramifying movement in which various conceptions of analysis compete and pull in different directions. Reductive and connective, revisionary and descriptive, linguistic and psychological, formal and empirical elements all coexist in creative tension; and it is this creative tension that is the great strength of the analytic tradition.
For further discussion, see Beaney 2001 (on Jackson); Dummett 1993; Hacker 1996, chs. 7-8; Hookway 1988 (on Quine); Stroll 2000, chs. 7-9.