Notes to Logicism and Neologicism

1. These are called double-abstraction principles below; see §1.2.2 for explanation and discussion.

2. Benacerraf (1981: 21) claims that “If Frege was the first logicist, then he was also the last.” Naturally one’s assessment of this claim will turn on how broad one’s conception of logicism is.

3. Lanier Anderson (2004) argues that “Kant deploys a clear and defensible notion of concept containment” and that

[o]nce we understand it, that notion of containment provides resources for a compelling argument that arithmetic must be synthetic, sensu Kant. (p. 503)

But the logicist will be unmoved by this: the rejoinder will be that the most appropriate sense of ‘analytic’, for the purposes of establishing an interesting logicist thesis about mathematics, is one that departs in carefully considered and well-motivated ways from analyticity sensu Kant.

4. See Michael Friedman 1992 (p. 84). Another useful discussion of the role of the pure form of temporal intuition in Kant’s account of arithmetic is Parsons 1983.

5. In his Habilitationsschrift of 1874, Frege wrote:

Die Elemente aller geometrischen Konstruktionen sind Anschauungen, und auf Anschauung verweist die Geometrie als Quelle ihrer Axiome. Da das Objekt der Arithmetik keine Anschaulichkeit hat, so können auch ihre Grundsätze aus der Anschauung nicht stammen. (Frege 1967: 50)

We offer the following translation:

The elements of all geometrical constructions are intuitions, and geometry points to intuition as the source of its axioms. Since the object of arithmetic lacks intuitiveness, so too are its basic laws unable to stem from intuition.

This early conviction of Frege’s, to the effect that the ultimate justification for the axioms of geometry was to be found in intuition, was reprised in 1903, in “Über die Grundlagen der Geometrie” (see Frege 1967: 262).

6. The date of 1853 in the English translation Dedekind (1996b: 793) of the preface to the first edition of Dedekind’s later work Was sind und was sollen die Zahlen? is in error. The German original (Dedekind 1888: 339) has 1858.

7. This passage is one of the most important ‘position statements’ ever written on foundational matters, and is worth quoting here in full. This translation, due to W. W. Beman and extensively revised by William Ewald, is from Dedekind (1996a: 767). Emphases have been added:

In discussing the concept of the approach of a variable magnitude to a fixed limiting value—in particular, in proving the theorem that every magnitude which grows continually, but not beyond all limits, must certainly approach a limiting value—I took refuge in geometrical evidence. Even now I regard such invocation of geometric intuition [Anschauung] in a first presentation of the differential calculus as exceedingly useful from a pedagogic standpoint, and indeed it is indispensable, if one does not wish to lose too much time. But no one will deny that this form of introduction into the differential calculus can make no claim to being scientific. For myself this feeling of dissatisfaction was so overpowering that I resolved to meditate on the question until I should find a purely arithmetical and perfectly rigorous foundation [Begründung] for the principles of infinitesimal analysis. The statement is frequently made that the differential calculus deals with continuous quantities, yet an explanation of this continuity is nowhere given; even the most rigorous expositions of the differential calculus do not base their proofs upon continuity but they either appeal more or less consciously to geometric representations or to representations suggested by geometry, or they depend upon theorems which are never established in a purely arithmetical manner. Among these, for example, belongs the above-mentioned theorem, and a more careful investigation convinced me that this theorem, or any one equivalent to it, can be regarded as a more or less sufficient foundation for infinitesimal analysis. It only remained to discover its true origin in the elements of arithmetic [sic], and thereby to secure a real definition of the essence of continuity. I succeeded on November 24, 1858 …

8. This English translation is from Dedekind (1996a: 767). The German original, in Dedekind (1872: 4) is: “eine rein arithmetische und völlig strenge Begründung der Prinzipien der Infinitesimalanalysis”.

9. The reason why we speak here of ‘the Fregean’ is that Frege himself, in the Grundgesetze, did not treat of addition and multiplication on the natural numbers. He treated only zero and successor.

10. It might be objected that our decimal place-notation ‘12’ is really to be regimented as (1 × 10) + 2—which involves multiplication as well as addition. If that is how the Kantian is to be accommodated, then the composite singular term for ‘12’ in the (0, s, +, ×)-language of Peano arithmetic would be

(s0 × ssssssssss0) + ss0,

and the derivation of ‘7+5=12’ as

sssssss0 + sssss0 = (s0 × ssssssssss0) + ss0

would involve appeal to the recursion axioms for multiplication, which are

∀x x × 0 = 0;
∀x∀y x × sy = (x × y) + x.

The logicist will have derived these axioms too (whose variables are understood as ranging over natural numbers) from deeper, ‘logical’ principles involving expressions not occurring in the recursion axioms themselves. The same kind of reply, in principle, would deal with the even more exigent requirement that the place-notation numeral ‘12’ be regimented as (1 × 10¹) + (2 × 10⁰), which now brings in exponentiation as well.
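
To make the computational content of these recursion equations vivid, here is a minimal illustrative sketch in Python (ours, not the entry’s; the representation and function names are merely illustrative), with numerals built from a successor constructor, × computed by exactly the displayed recursion equations, and + by the standard ones:

# Peano numerals as nested pairs: 0 is (), s(n) is ('s', n).
ZERO = ()

def s(n):
    return ('s', n)

def numeral(n):
    # builds the numeral s...s0 with n occurrences of s
    return ZERO if n == 0 else s(numeral(n - 1))

def add(x, y):
    # x + 0 = x ;  x + sy = s(x + y)
    return x if y == ZERO else s(add(x, y[1]))

def mul(x, y):
    # x × 0 = 0 ;  x × sy = (x × y) + x
    return ZERO if y == ZERO else add(mul(x, y[1]), x)

# sssssss0 + sssss0 = (s0 × ssssssssss0) + ss0
assert add(numeral(7), numeral(5)) == add(mul(numeral(1), numeral(10)), numeral(2))

The point of the sketch is only that the verification of ‘7 + 5 = 12’ under this regimentation unwinds the recursion equations for × as well as those for +.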

11. If, instead of using the language of second-order logic, one were to have a two-sorted first-order logic in which properties are one of the sorts and in which there are singular terms t denoting properties, then # could be used as a function symbol like d( ), so that #t could be a singular term denoting the number of things falling under the concept denoted by t. But we shall not be discussing such variants in what follows.

12. We shall see in due course (§§2,3) what abstraction principle(s) the neo-logicists have put forward in their attempts to characterize the meaning of this number-abstraction operator. Some, like Wright and Tennant, use the variable-binding operator #x; others, like Heck and Zalta, use the variable-free operator #, and apply it to either second-order variables or predicates.

13. This displayed pasigraph is just a useful abbreviation. Note that it has no free variables. So there is no need, here, for quantifiers to be prefixed in order to bind the variables x and y. In the abbreviatory pasigraph given, the free occurrences of x and y in the part [Fx →(1–1, onto) Gy] can be thought of as bound by the initial occurrence of Rxy. There are different but equivalent pasigraphic (abbreviatory) notations in the literature for registering the existence of a one-one, onto mapping between the Fs and the Gs; any one of them will do. What matters rather is the definiens which the pasigraph abbreviates.
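
For definiteness, one standard second-order rendering of that definiens (given here purely for illustration; the entry’s own displayed formula may differ notationally) is

∃R [∀x (Fx → ∃!y (Gy ∧ Rxy)) ∧ ∀y (Gy → ∃!x (Fx ∧ Rxy))],

which says that some relation R correlates each F with exactly one G and each G with exactly one F.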

14. The reader is put on notice that the contrast here between single- and double-abstraction principles is a contrast drawn within a purposely narrowed class: abstraction principles in biconditional form that have identities on their left-hand sides. With the double-abstraction identity principles, the left-hand sides will be of the form #xFx = #xGx; whereas with single-abstraction identity principles they will be of the form t = #xFx. There are of course many other kinds of abstraction principle in the literature which are correctly called ‘single-abstraction’ principles, but they are ones that do not deal with identities of the latter form on their left-hand sides. (Indeed, some of them are not even of biconditional form.) Examples of ‘single-abstraction’ principles of biconditional form whose left-hand sides are not identity statements are

t ∈ {x | x ∈ A ∧ φx} ↔ (t ∈ A ∧ φt),

and β-conversion in the λ-calculus:

[λx₁…xₙ φ(x₁,…,xₙ)]t₁…tₙ ↔ φ(t₁,…,tₙ).

Such principles are being purposely excluded from consideration here as single-abstraction principles, because our focus is on the use made of single-abstraction principles by neo-Fregeans, which are all identity abstraction principles.
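
By way of illustration (our example, not the entry’s), an instance of the β-conversion schema with a one-place abstract is

[λx (x < x + 1)]7 ↔ 7 < 7 + 1,

whose left-hand side is plainly not an identity statement.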

15. As early as December 7, 1873 Cantor wrote to Dedekind that he had “found the reason why the totality [of real numbers] … cannot be correlated one-one with the totality [of natural numbers]” (see Ewald 1996: 846). The proof in question (which did not use his famous diagonal method) was published as Cantor 1874.

16. As Richard Heck (1997a) points out, Frege also developed the ‘Caesar’ objection to his doubly-abstractive definition of line-directions. The objection was that the definition would not enable us to distinguish England from the direction of the Earth’s axis.

17. Exercise: Show that this set-abstraction principle logically implies the Axiom of Extensionality,

∀x∀y (∀z (z ∈ x ↔ z ∈ y) → x = y).

18. Courtesy of Zalta’s definition (1999: 630)

#G =df ιx(Ax ∧ ∀F(xF ↔ F is equinumerous with G))

19. Explicit use of the label ‘Bad Company’ has an interesting early history. Dummett (1991: 188–9) is quoted at length by Wright (1998: 344–5; §II, titled “Bad Company?”). Dummett’s complaint had been

[I]f the context principle, as expounded by Wright, is enough to validate the ‘contextual’ method of introducing the cardinality operator, it must be enough to validate a similar means of introducing the [class] abstraction operator.

Dummett (1998: 375; title: “Neo-Fregeans: In Bad Company?”) reprises the criticism thus:

In Grundgesetze, value-ranges are introduced in a manner precisely analogous to that in which Wright argued, in his book, that Frege ought to have introduced cardinal numbers …: and yet it was so far from being justified as to lead to actual contradiction.

Clearly, at the time of those writings of theirs quoted above, Tennant, Boolos, Dummett and Wright construed the ‘bad company’ for HP as consisting of BLV alone; and this construal endured through the late 1980s and even into the early 1990s. Subsequently, further formal discoveries were made, of yet other (double-)abstraction principles that are individually consistent, but jointly inconsistent (and some of them inconsistent with HP). The seminal paper in this regard is Boolos 1997. The problem highlighted by the proliferation of conflicting principles has been given the label ‘Embarrassment of Riches’ by Weir (2003). But this label has not caught on. Instead, the ‘Bad Company’ label was happily extended so as to emphasize the worsened delinquency of the group with its newly admitted companions. This has induced a shift in our collective understanding of what comprises ‘Bad Company’, and what the Bad Company problem amounts to. An informative collection of essays on these more recent developments is Linnebo, ed. (2009). As Linnebo puts it in the abstract of his editorial Introduction,

…the acceptable abstraction principles are surrounded by unacceptable (indeed often paradoxical) ones. This is the ‘bad company problem.’

20. The reader is reminded that the variables z and w are bound in the right-hand side of this biconditional. See footnote 13.

21. See Definition Z at the end of §40 of Volume I of the Grundgesetze, at p. 57.

22. We have been using the mathematician’s ‘relation-slash’ notation r ∉ r instead of the logician’s ‘sentence-prefix’ notation ¬(r ∈ r).

23. Fortunately, this has recently been remedied. The Arché project at St. Andrews has brought out an English translation of the Grundgesetze. See the bibliographic reference to Frege 1893, and its citation of the Ebert and Rossberg translation.

24. Russell’s “Mathematical Logic as Based on the Theory of Types” (1908) is an accessible presentation of these ideas. The official, full development is Principia Mathematica (Whitehead and Russell 1910).

25. The modern set-theoretic conception of an ordinal number is due to von Neumann. The very first ordinal is the empty set ∅. If α is an ordinal, then α ∪ {α} is the successor of α. Beginning with ∅, and taking only successors, one obtains the finite ordinals. These are the usual set-theoretic surrogates for the natural numbers. The first transfinite ordinal, ω, is the set of all finite ordinals. Every ordinal has a successor. But not every ordinal is a successor. Ordinals which are like ω in that they are not successors are called limit ordinals. Every ordinal is the set of all ordinals that precede it. A cardinal number is an ordinal that is not in 1–1 correspondence with any preceding ordinal. So a cardinal number is the first ordinal of ‘its size’. The finite ordinals are the finite cardinals. But in the transfinite, of course—ω and beyond—the cardinals are more thinly sprinkled among the ordinals. Qua cardinal number, ω is called ℵ0. It is countably infinite. The first uncountable cardinal is ℵ1, which is the first ordinal after ℵ0 not in 1–1 correspondence with any ordinal preceding it.
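
For readers who find a concrete rendering helpful, here is a small illustrative sketch in Python (ours, not part of the entry; the function names are merely illustrative) of the finite von Neumann ordinals, with ∅ as 0 and α ∪ {α} as successor:

# Finite von Neumann ordinals as frozensets.
EMPTY = frozenset()                       # the ordinal 0

def successor(alpha):
    return alpha | frozenset({alpha})     # alpha ∪ {alpha}

def finite_ordinal(n):
    # the nth finite ordinal, obtained by iterating successor on 0
    alpha = EMPTY
    for _ in range(n):
        alpha = successor(alpha)
    return alpha

# Each finite ordinal is the set of all ordinals that precede it,
# and has exactly as many members as its index.
three = finite_ordinal(3)
assert three == frozenset({finite_ordinal(0), finite_ordinal(1), finite_ordinal(2)})
assert len(three) == 3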

26. For this latter observation, the author is indebted to John MacFarlane.

27. We do not say the finite von Neumann ordinals are the set-theoretic surrogates for the natural numbers, because of the well-known ‘Benacerraf point’ that there are other recursive progressions within the universe of (hereditarily finite) pure sets that could serve just as well—Zermelo’s finite ordinals, for example (see Benacerraf 1965). Interestingly, though, Boolos has argued that von Neumann’s finite ordinals are the most natural representatives, within a theory of extensions, of Frege’s finite cardinals (see Boolos 1987; see also Demopoulos 1998).

28. Cantor’s theorem (that every set has strictly more subsets than members) has the special case that ℘(ω) has more members than ω does, hence is uncountable.
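
The standard diagonal argument, recalled here for convenience: given any function f from a set A to ℘(A), the set

D = {x ∈ A | x ∉ f(x)}

is a subset of A that disagrees with f(x) about the membership of x, for every x in A; so no such f is onto ℘(A).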

29. This was an unpublished lecture delivered in Cambridge, Massachusetts, at a joint meeting of the Mathematical Association of America and the American Mathematical Society, 29–30 December 1933. See Feferman’s introductory note to Gödel 1933/1995 (p. 36).

30. John MacFarlane suggested that a “perhaps superior option” would be to require explicit domains D in the abstracts, thus: {x ∈ D | Φ(x)}. Then a free logic would not be needed, because, by Separation, the denotation for any such term would exist.

31. Quine (1960: 267) advocates strongly for “the power of the notion of class to unify our abstract ontology”. He goes on to assert

To surrender this benefit and face the old abstract objects again in all their primeval disorder would be a wrench …. [Emphasis added]

32. Snapper (1979: 208) contends that logicism survived the shift from type theory to set theory. Here is how he viewed matters:

Of course, instead of Principia, one can use any other formal set theory just as well. Since today the formal set theory developed by Zermelo and Fraenkel (ZF) is so much better known than Principia, we shall from now on refer to ZF instead of Principia. ZF has only nine axioms and, although several of them are actually axiom schemas, we shall refer to all of them as “axioms”. The formulation of the logicist’s program now becomes: Show that all nine axioms of ZF belong to logic.

Against this, it could be maintained that the move from Principia to ZF represented an abandonment of the aspirations of logicism, and an acknowledgement that the foundations of mathematics can at best be provided within just one branch of mathematics itself, namely set theory.

Cook 2015 is a subtle and technically demanding study that supports a principled pessimism about the prospects for a suitably modified neologicist approach to a theory of sets (or value-ranges) that might avoid the disaster that befell Frege’s Basic Law V, and deliver enough in the way of sets to allow the neologicist to develop a version of Frege’s account of cardinal numbers. Cook limits himself to a consideration of only double-abstraction principles; but his conclusion must give this kind of neologicist serious pause. Cook 2016 pursues the topic further, revealing a virtually uninhabitable neologicist no-man’s land between Hume’s Principle (which is consistent) and Basic Law V (which of course is not). These negative reflections on the potential reach of neologicism continue a theme from the earlier work Cook 2002, which concluded that ‘the sort of abstraction needed to obtain a theory of the reals is rampantly inflationary’, and accordingly epistemologically suspect.

Boccuni 2013 has introduced a new twist in attempted reconstructions of Fregean foundations for number theory, with a system called ‘Plural Grundgesetze’, based on the relational concept xηX (‘the individual x is among the Xs’). Its ‘Fregean devices’ include ‘the infamous Basic Law V’, along with a Plural Comprehension Principle

∃X∀x(xηX ↔ φx) (where φ does not contain X free)

Contradiction is presumed to be avoided by replacing George Boolos’ plural semantics with Enrico Martino’s Acts of Choice Semantics. It is too early to judge whether these novel resources will prove to be a consistent mix.

33. Although Frege more or less explicitly proved Frege’s Theorem in the Grundlagen (and proved it fully explicitly in the Grundgesetze), Frege himself never actually put his finger explicitly on the ‘it’ in question.

34. For useful discussion, see MacFarlane 2002 (pp. 40–42), which concludes, in effect, that Frege was deterred by worry about the truth of HP (i.e., of Principle (A)), because of its similarity to Basic Law V. For Frege did not have the benefit of knowing that second-order logic with HP is consistent if Real Analysis is. So perhaps Frege himself deserves credit for being the first thinker to appreciate the force of the ‘Bad Company’ objection (in the sense it enjoyed when first deployed with that label—see footnote 19).

35. The author is aware that the notion of a ‘significant part’ of a branch of mathematics requires further explication. For the time being, it is enough to note that what Quine called virtual set theory is a significant part of ZFC set theory; and PA is a significant part of Th(ℕ).

36. The incompleteness phenomena affect provability. One could of course adopt a second-order axiomatization that secures every truth (about, say, ℕ) as a (second-order) logical consequence; but then one would have to live with the drawback that second-order logical consequence is not axiomatizable. See Rayo 2005.

37. Wright appears, with hindsight, to have been rather too exclusively focused on the Grundlagen, and somewhat underinfluenced by Frege’s own logical maneuvers in the Grundgesetze. We note this assessment from Dummett (1991: 123):

Crispin Wright devotes a whole section of his book … to demonstrate that, if we were to take the equivalence in question as an implicit or contextual definition of the cardinality operator, we would still derive the same theorems as Frege does. He could have achieved the same result with less trouble by observing that Frege himself gives just such a derivation of those theorems. He derives them all from that equivalence, with no further appeal to his explicit definition.

“[J]ust such a derivation”, of course, appears only in the Grundgesetze.

38. As pointed out by Tennant (1987: 236–7), Boolos’s model for FA works only for FA taken by itself; and that model

will not serve its intended purpose when [FA] is embedded within a wider theory—such as the theory of sets—calling for models in which there are distinct infinities of objects.

The “theory of number …, like logic, is to apply to all subject matters.” The consistency of a logicist theory of number “is not to depend upon the particular subject matter over which numerical notions are deployed.” Boolos (1997: 260) agrees:

The worry is that … Frege Arithmetic … is incompatible with Zermelo–Fraenkel set theory plus standard definitions, on the usual and natural readings of the non-logical expressions of both theories.

(Note that Boolos is not a logicist; he admits only the standard logical operators as logical expressions.)

39. As Dummett (1993: 441) explains,

an indefinitely extensible concept is one such that, if we can form a definite conception of a totality all of whose members fall under the concept, we can, by reference to that totality, characterize a larger totality all of whose members fall under it.

40. Some philosophers of mathematics do not share this intuition; and some of those who do might be reluctant to make it a sine qua non of any successful logicist account. Thanks to both Julian Cole and Stewart Shapiro for raising this ‘structuralist’ point. The point is made also by Carnap (1931: 93) in what was intended to be a sympathetic exposition of the aims and partial achievements of logicism. Here is the English translation taken from Carnap 1983 (p. 43):

The natural numbers do not constitute a subset of the fractions but are merely correlated in obvious fashion with certain fractions. Thus the natural number 3 and the fraction 3/1 are not identical but merely correlated with one another. Similarly we must distinguish the fraction 1/2 from the real number correlated with it.

Carnap’s view here is deferential to the similar view of Russell, who in turn had inherited from Frege this need to resort to a form of structuralism. The inability to answer the inclusion question in turn derives from not taking seriously enough the need to explain how the various numbers are canonically applied in our theorizing not only about numbers but also about concrete things. A correct solution to the applicability problem could well point the way to an answer to the inclusion question. Real numbers are used for measuring continuous magnitudes in terms of some unit of measurement. When we say that a rod is 3 units-of-length (say, meters) long, we are in effect saying that the length of the rod, in meters, is 3. Equivalently: the number of units-of-length that comprise the length of the rod is 3. The latter is the familiar natural number 3.

41. It is well known that by Gödel’s Second Incompleteness Theorem, no consistent, sufficiently strong theory T of arithmetic can prove its own consistency-statement ConT. So the extended theory T + ConT is of higher consistency-strength than the theory T itself. In general, for consistent theories T, T ′ containing a sufficiently strong fragment of arithmetic, theory T ′ is of higher consistency strength than theory T just in case T ′ proves that T is consistent. Usually this is done in one of two ways. Either T ′ proves ConT (the sentence of arithmetic expressing the consistency of T); or T ′ proves the existence of a model for (the axioms of) T. A vast range of mathematical theories can be, and have been, compared according to their consistency-strengths. These comparisons have involved fragments of arithmetic, fragments of real analysis, set theories, and type theories. The two ways in which strength can be increased in second-order theories of arithmetic are by allowing broader classes of substituends (complexity classes of formulae) in their axiom-scheme of comprehension, and in their axiom-scheme of mathematical induction. The main way in which strength can typically be increased in set theory is by postulating the existence of ever-larger cardinal numbers. The deep and puzzling phenomenon that has emerged is that consistency-strengths are linearly ordered. The modern program of Reverse Mathematics, due to Harvey Friedman (see Friedman 1975, 1976), is the major source of such insights into the relative strengths of various mathematical theories. The best avenue into the relevant literature is Simpson 1999. For a thorough investigation of the question of consistency-strengths of various foundational theories that are pertinent to logicism, see Burgess 2005, especially Table E.
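
To fix ideas, here is one familiar illustrative chain (ours, not drawn from the sources just cited), listed in order of increasing consistency strength, with each theory proving the consistency of all of its predecessors in the chain:

PA  <  PA + ConPA  <  ZFC  <  ZFC + ‘there exists an inaccessible cardinal’.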

42. It has been Michael Dummett, especially, who has advanced this interpretation.

43. The normalization theorem is due to Prawitz (see Prawitz 1965).

44. For a cogent critique of Carnap’s lack of appreciation of certain metamathematical subtleties in holding to his position that both logical and mathematical truths were analytic in Carnap’s explicating sense, see Koellner (Carnap ms in Other Internet Resources).

45. By contrast, Wright’s proof-sketch “stops short of being a fully rigorous deduction” (Burgess 2005: 147).

46. The use of free logic for logicist purposes has also been explored more recently by Shapiro and Weir (2000).

47. The condition of adequacy involving Schema N was put forward in Tennant 1984.

48. Note that an affirmative answer to this question invites the reflection that, if Logic commits one to the existence of any thing or kind of thing, then such existence will be necessitated. The things in question will be necessary existents.

49. This is properly classified as a demarcation problem because of a particular pre-formal view with which such a logicist methodology would be in tension. The view in question combines two main theses. The first thesis is that there is some clearly identifiable body of mathematical theorizing, employing mathematical concepts, whose eligibility for logicist reduction is in question. The second thesis is that the problem for the logicist would be to show how to ‘logicize’ this theorizing by (i) defining those mathematical concepts in purely logical terms, and then (ii) deriving as logical theorems the translations induced by those definitions of the erstwhile mathematical theorems.

50. This label is used here in its more recently acquired sense, not its original one. See footnote 19.

Copyright © 2017 by
Neil Tennant <tennant.9@osu.edu>
