# Reverse Mathematics

*First published Fri Feb 2, 2024*

The question of whether the use of a certain method or axiom is necessary in order to prove a given theorem is widespread in mathematics. Two historical examples are particularly prominent: the parallel postulate in Euclidean geometry, and the axiom of choice in set theory. In both cases, one demonstrates that the axiom is indeed necessary by proving a “reversal” from the theorem to the axiom: that is, by assuming the theorem and deriving the axiom from it, relative to a set of background assumptions known as the base theory. Reverse mathematics is a program in mathematical logic that seeks to give precise answers to the question of which axioms are necessary in order to prove theorems of “ordinary mathematics”: roughly speaking, those concerning structures that are either themselves countable, or which can be represented by countable “codes”. This includes many fundamental theorems of real, complex, and functional analysis, countable algebra, countable infinitary combinatorics, descriptive set theory, and mathematical logic. Countable codes are just sets of natural numbers, so these theorems can be formalized in a language with minimal expressive resources, quantifying only over natural numbers and sets of natural numbers, known as second-order arithmetic.

Formalizing theorems from diverse areas of mathematics within a unified framework opens the door to comparing them. For example, the Heine–Borel covering theorem, which asserts the compactness of the closed unit interval, is equivalent to the compactness theorem for first-order logic. The connection between these two compactness principles is thus not merely an analogy, but a precise correspondence: they both express the same underlying computability-theoretic closure condition, also expressed by the set existence principle known as weak König’s lemma, which is necessary and sufficient to prove the two theorems.

Questions of necessity and sufficiency in mathematics have historically been closely linked to issues in the foundations of mathematics, and in particular to the foundational programs initiated in the early twentieth century: constructivism, finitism, and predicativism. Using reverse mathematical results to determine the extent of the mathematics that can be developed on the basis of a given foundational doctrine is a natural and appealing idea, especially since many of the systems studied in reverse mathematics have technical and historical links to those very foundational programs. In addition to its intrinsic mathematical interest, reverse mathematics thus has a role to play in the foundations of mathematics, as well as in addressing broader philosophical issues such as the role of mathematics in science and the indispensability argument.

This entry aims to give the reader a broad introduction to the history, methodology, and results of the reverse mathematics program, with a particular focus on its connections to foundational programs. §1 provides a historical introduction to reverse mathematics, tracing its development from the nineteenth-century arithmetization of analysis, through the development of proof theory and computability theory in the twentieth century, to its emergence as a subject in its own right in the mid-1970s. §2 provides an introduction to subsystems of second-order arithmetic, the formal systems used in reverse mathematics. §3 introduces the base theory \(\sfRCA_0\), sketches the coding used to represent a wide variety of mathematical objects within it, and explains some fundamental connections between reverse mathematics and computability theory. §4 gives an overview of the other four of the Big Five subsystems: \(\WKL_0\) (§4.1), \(\ACA_0\) (§4.2), \(\ATR_0\) (§4.3), and \(\Pi^1_1\text{-}\CA_0\) (§4.4), including some of the theorems equivalent to the characteristic axioms of each system (weak König’s lemma, arithmetical comprehension, arithmetical transfinite recursion, and \(\Pi^1_1\) comprehension). Building on the previous two sections, §5 explores the connections between the Big Five subsystems and traditional foundational programs including constructivism (§5.1), finitism (§5.2–§5.3), and predicativism (§5.4–§5.5). The final section (§6) suggests further reading and provides pointers to other resources on reverse mathematics.

- 1. A Historical Introduction to Reverse Mathematics
- 2. Second-Order Arithmetic and Its Subsystems
- 3. The Base Theory
- 4. The Big Five
- 5. Foundational Programs and Reverse Mathematics
- 6. Further Reading
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries

## 1. A Historical Introduction to Reverse Mathematics

### 1.1 Mathematical necessity and the axiomatic method

Reverse mathematics is a relatively young subfield of mathematical logic, having made its start in the mid-1970s as an outgrowth of computability theory. In the field’s founding paper (Friedman 1975), Harvey Friedman begins by asking

> What are the proper axioms to use in carrying out proofs of particular theorems, or bodies of theorems, in mathematics? What are those formal systems which isolate the essential principles needed to prove them? (1975: 235)

Friedman’s notion of the proper axioms to prove a given theorem is one on which not only can the theorem be proved from the axioms, but the axioms can be proved from the theorem. Put another way, the proper axioms are necessary in order to prove the theorem, and not merely sufficient.

In this sense, finding the proper axioms to prove a given theorem is a long-standing endeavour in mathematics. Euclid’s fifth postulate, the parallel postulate, is a well-worn example. It is necessary in order to prove many important results in Euclidean geometry, such as the Pythagorean theorem, but mathematicians throughout history have found the postulate disquieting. Lewis (1920) suggests that Euclid was trying to find the right principles from which to prove the propositions from 29 onwards, and that the parallel postulate was in the end given as a postulate simply because no other way to prove those propositions was found. Attempts would be made throughout antiquity to prove the parallel postulate, notably by Proclus, as well as in the medieval Islamic world and in the modern period (Bonola 1912; Rosenfeld 1976 [1988]). We now know that there was an insurmountable barrier to these attempts. As discovered in the nineteenth century, there are models of non-Euclidean geometry which satisfy the other postulates but do not satisfy the parallel postulate. Therefore, the parallel postulate does not follow from the others.

To see that the parallel postulate is necessary to prove the Pythagorean theorem, we need a *reversal* from theorem to axiom. We do this by giving a proof of the parallel postulate from the Pythagorean theorem, and thereby establishing the implication

\[ \text{Pythagorean theorem} \rightarrow \text{parallel postulate}, \]

while being careful to use only the other geometrical postulates and not reason circularly by using the parallel postulate itself. Since the Pythagorean theorem follows from the parallel postulate, the two statements are equivalent, modulo the other geometrical postulates. Moreover, because the parallel postulate is independent from the other postulates, it follows from this equivalence that the Pythagorean theorem is too: it is true in all and only those geometries which are Euclidean, i.e., those in which the parallel postulate holds.

Hilbert’s axiomatizations of Euclidean and non-Euclidean
geometries in his *Grundlagen der Geometrie* (1899) provided a
setting in which such proofs of equivalence between principles could
be given. In particular, Hilbert’s axioms provided something
like a formal *base theory* which could prove the Pythagorean
theorem from the parallel postulate, and conversely. Such proofs are
now standard exercises in undergraduate geometry textbooks (e.g.,
Hartshorne 2000). Crucially, Hilbert’s base theory does not
prove the parallel postulate: as Hilbert showed, the base theory has
analytical models (ones constructed from the real numbers) in which
the parallel postulate is true, but also ones in which it is false. On
a purely logical level, if *T* is a formal theory such that both
\(\varphi\) and \(\psi\) are provable in *T*, then *T*
proves that they are equivalent. Similarly, if both \(\neg\varphi\)
and \(\neg\psi\) are provable, then *T* also proves that
\(\varphi\) and \(\psi\) are equivalent. The independence of the
parallel postulate from Hilbert’s base theory thus shows that
the equivalence between the parallel postulate and the Pythagorean
theorem is non-trivial, since it cannot be established on either the
trivial grounds that both are provable, or on the trivial grounds that
both are refutable.

Although this entry will not engage further with the study of
equivalences in geometry, it should be noted that this field remains
alive and well. For example, the work of Pambuccian (1996, 2001, 2006,
2009) is explicitly philosophical in its motivation, and has excited
interest amongst philosophers engaged with questions of methodological
and logical purity in mathematics. Some representative pieces are
Arana (2008), Arana & Mancosu (2012), Baldwin (2013), Pambuccian
(2019), and Pambuccian & Schacht (2021). Dean (2021) engages with reverse
mathematics and the *Grundlagen der Geometrie* from a different
direction, namely their bearing on the
Frege–Hilbert controversy.

A second historical precedent is the axiom of choice (AC) in set theory. Zermelo (1904) brought widespread attention to the axiom of choice by appealing to it in his proof of the well-ordering theorem. In the process he ignited a controversy over the principle, which attracted a wide range of criticisms from such mathematical luminaries as Henri Poincaré and Émile Borel (Moore 1982). In his response to these criticisms, Zermelo (1908) argued that AC is “necessary for science”, and in particular that there are

> elementary and fundamental theorems and problems that, in my opinion, could not be dealt with at all without the principle of choice. (Zermelo 1967: 187–188)

The “elementary and fundamental theorems and problems” to which Zermelo refers include such basic matters as whether the cardinalities of two sets are always comparable. As in the case of the Pythagorean theorem, history has borne out Zermelo’s contention that the axiom of choice is necessary for proving such statements, in the sense that they imply the axiom of choice modulo the other axioms of Zermelo–Fraenkel set theory (ZF) (Jech 1973; Rubin & Rubin 1963, 1985; Howard & Rubin 1998). Paul Cohen’s discovery of forcing demonstrated a further similarity with the case of the parallel postulate, as the axiom of choice was shown to be formally independent of the base theory ZF. The equivalence of AC with statements like the well-ordering theorem was thereby shown to be non-trivial, just like the equivalence of the parallel postulate with statements like the Pythagorean theorem.

However, there are substantive differences between reverse mathematics and the study of equivalences of the axiom of choice, and equivalences provable over ZF more generally. Reverse mathematics is concerned with structures that can be represented by sets of natural numbers, and not with sets of arbitrary cardinality. As we shall see in the course of this entry, these structures and the theorems that concern them are far more extensive than one might at first suspect, including a substantial part of what is often called “ordinary mathematics”. While the axiom of choice is equivalent to many important general theorems in topology, functional analysis, and combinatorics, the full generality of these results is rarely required in everyday mathematical applications. The fundamental characterizing theorems about “concrete” structures such as the real numbers require only the axioms of weak subsystems of second-order arithmetic. Finally, the axiom of choice is also distinguished by its purely existential nature: it does not give a method or procedure for choosing elements from an infinite family of sets, but merely asserts that some such choice exists. The set existence principles studied in reverse mathematics, on the other hand, all have a computability-theoretic aspect and are thus closely connected to definability, something which forms a major theme of this entry. For principles like arithmetical comprehension (§4.2) this connection lies on the surface, while for others like weak König’s lemma (§4.1) it is less immediately apparent but no less real (Eastaugh 2019).

### 1.2 The arithmetization of analysis

Although many aspects of reverse mathematics are rooted in geometry, a different part of mathematics supplies many others: analysis. This is the part of mathematics concerned with the continuum of real numbers and the many structures which are based upon it. Analysis supplies much of the subject matter, as many theorems of analysis turn out to be equivalent to the set existence principles studied in reverse mathematics. It also supplies much of the reverse mathematician’s toolkit, namely the diverse array of coding schemes required to represent complex objects and structures within second-order arithmetic.

Both aspects of the connection between reverse mathematics and analysis can be traced to the nineteenth century, when mathematics in general, and analysis in particular, underwent a sea change. One driving force was the desire to increase the level of rigour in mathematical analysis, replacing geometric intuition and infinitesimal quantities with “arithmetical” arguments based on proto-set-theoretic principles and constructions.

Of the many important advances that were made during the
*arithmetization of analysis*, as it came to be called, we can
single out three as particularly fundamental: the new definitions of
convergent sequence and limit; of continuous function; and of the real
number system itself. Suppose \(x = \str{x_n : n \in \bbN}\) is an
infinite sequence of real numbers. Then the real number \(y\) is the
*limit* of the sequence \(x\), in symbols \(y = \lim_{n \to
\infty} x_n\), if for every real number \(\varepsilon \gt 0\) there
exists a natural number \(N\) such that for all natural numbers \(n
\gt N\), \(|x_n - y| \lt \varepsilon\). This definition of the notion
of a limit depends only on the concepts of real and natural numbers,
together with the arithmetical operations, absolute value, order, and
the concept of an infinite sequence: no recourse to geometrical
notions is required. This development set the stage for a more
thoroughgoing arithmetization in which the concept of a real number
was similarly reduced.
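Spelled out with explicit quantifiers, the definition just given can be displayed in a single formula (with \(\varepsilon\) ranging over the positive reals and \(N\), \(n\) over the natural numbers):

\[ y = \lim_{n \to \infty} x_n \iff \forall \varepsilon \gt 0 \; \exists N \in \bbN \; \forall n \gt N \; (|x_n - y| \lt \varepsilon). \]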

Like the notions of continuity and convergence, definitions of the real numbers underwent significant evolutions during this period. Historically, irrational numbers had been conceived of geometrically, for instance the square root of 2 as the length of the diagonal across the unit square, or \(\pi\) as the ratio of the circumference of a circle to its diameter. While the analysts who developed arithmetical definitions of the real numbers during the nineteenth century did not aim to eliminate this geometrical aspect, they tended to view appeals to geometric intuition in proofs with suspicion, and wanted to give a theory of real numbers that did not depend on such intuitive considerations.

One such theory was that of Georg Cantor, who gave a construction of
real numbers in terms of convergent sequences of rational numbers
(Cantor 1872, 1883). With some anachronism, let \(x : \bbN\to \bbQ\)
be a total function from natural numbers to rational numbers. We can
think of \(x\) as an infinite sequence of rationals \(\str{x_n : n \in
\bbN}\). \(x\) is a *fundamental sequence* if for every positive
rational number \(\varepsilon\) there exists \(k \in \bbN\) such that
\(|x_{n+m} - x_n| \lt \varepsilon\) for every \(m,n \in \bbN\) where
\(n \gt k\). We can then conventionally associate
with the fundamental sequence \(x\) the real number \(y = \lim x\),
the value that the sequence \(x\) achieves in the limit. Moreover,
Cantor showed how the operations of addition, subtraction,
multiplication, and division could be extended to real numbers
considered as the limits of fundamental sequences.
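In modern quantifier notation, the fundamental sequence condition is just the familiar Cauchy criterion, with \(\varepsilon\) ranging over the positive rationals:

\[ \forall \varepsilon \gt 0 \; \exists k \in \bbN \; \forall m, n \in \bbN \; \bigl( n \gt k \rightarrow |x_{n+m} - x_n| \lt \varepsilon \bigr). \]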

Richard Dedekind,
in his 1872 monograph *Stetigkeit und irrationale Zahlen*,
developed a substantially different approach to the real numbers as
“cuts” or partitions of the rational numbers into upper
and lower parts. First worked out for an introductory course that he
taught at the Zürich Polytechnic in 1858, Dedekind’s
approach takes the real numbers to be in correspondence not with the
limits of convergent sequences of rational numbers, but with arbitrary
“cuts” in the rational numbers. A cut is a disjoint pair
\((A_1,A_2)\) of nonempty sets of rational numbers, such that \(A_1
\cup A_2 = \bbQ\) and \(A_1\) is downwards closed while \(A_2\) is
upwards closed: if \(p\) and \(q\) are rational numbers such that \(p
\lt q\), if \(q \in A_1\) then \(p \in A_1\), and if \(p \in A_2\)
then \(q \in A_2\). A real number \(x \in \bbR\) corresponds to, or is
“created by”, a cut \((A_1,A_2)\) just in case for every
\(p \in A_1\), \(p \leq x\), and for every \(q \in A_2\), \(x \lt
q\).
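The correspondence between reals and cuts can likewise be displayed in one line: the real \(x\) corresponds to the cut \((A_1, A_2)\) just in case

\[ \forall p \in A_1 \; (p \leq x) \quad \text{and} \quad \forall q \in A_2 \; (x \lt q). \]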

Despite their conceptual differences, Dedekind and Cantor’s constructions have much in common. They both take the rational numbers as the basis for the real numbers, together with some set-theoretic machinery. In Dedekind’s case this involves the notion of an arbitrary subset of the rational numbers, while in Cantor’s it is the notion of an infinite sequence of rational numbers. Talk of rational numbers can, as Cantor and Dedekind knew, be reduced in principle to talk of natural numbers, and their definitions can be seen as reducing the real numbers to the “arithmetical”: the natural numbers, and some part of the theory of sets. That part is in fact quite small, requiring only the concept of a set of natural numbers, and both Cantor and Dedekind’s definitions of a real number can (with some modifications) be formalized in second-order arithmetic (see §3.3 for the former and Simpson (1984) for the latter).

In addition to its groundbreaking definition of the real numbers as
cuts in the rationals, and its formulation of the principle of
continuity as the essential axiom of the arithmetical continuum,
Dedekind (1872) also contains a striking anticipation of the method of
reversals, as first noted by Sieg (1988: 344, fn. 16). The final
section of *Stetigkeit und irrationale Zahlen* is devoted to
fundamental theorems of analysis, namely the monotone convergence
theorem and the Cauchy convergence theorem. Dedekind shows that not
only can the principle of continuity be used to derive these theorems,
but that the theorems can in turn be used to derive the principle of
continuity.

From this point onwards, the story of arithmetization becomes bound up
with the story of logic. The development of quantificational logic by
Frege and Peirce, the symbolism of Peano, and the development of type
theory by Russell all played important parts in the emergence of a
recognizably modern framework suited for the formalization of
arithmetic and analysis in the work of the Hilbert school, most
notably in Hilbert’s collaborations with Wilhelm Ackermann and
Paul Bernays. These culminated in Hilbert and Bernays’s
monumental *Grundlagen der Mathematik*, published in two
volumes (1934, 1939). Interested readers may have an easier time
obtaining the second editions (1968, 1970).

Amongst the many contributions of Hilbert and Bernays’s
*Grundlagen* to the foundations of mathematics, the most
obvious immediate connection with reverse mathematics and the study of
subsystems of second-order arithmetic comes in Supplement IV of volume
2 (1939/1970). This supplement introduces several formalisms for
arithmetic which can be seen as ancestors of the axiom system
\(\sfZ_2\) of second-order arithmetic used in reverse mathematics,
defined in
§2.3
of this entry. The bulk of Supplement IV uses a system *H* which
includes both free and bound first- and second-order variables. The
intended range of the first-order variables is the natural numbers.
The second-order terms are unary function terms, not set or predicate
terms, and the intended interpretation of the second-order variables
is as functions \(f : \bbN\to \bbN\) (Hilbert & Bernays 1970:
467). These aspects of the formalism are quite similar to the language
\(\calL_2\) of second-order arithmetic, but in other respects the
system *H* diverges quite markedly from contemporary practice.
One aspect is the inclusion of free formula variables, which some
authors such as Sanders (2022: 2) have interpreted as introducing
third-order parameters and operators into the system. Another is that
the system uses a form of Hilbert’s
epsilon calculus,
although the revised formalism *K* introduced in section F of
Supplement IV shows how the system *H* can be modified to remove
this dependency. The use of function terms, rather than set terms,
persisted into later work such as that of Grzegorczyk, Mostowski,
& Ryll-Nardzewski (1958), but in fact Hilbert and Bernays also
define in section G of Supplement IV a system *L* which exchanges
the free and bound function variables for free and bound formula
variables.

Having set up their formal system *H* of higher-order arithmetic,
Hilbert and Bernays proceed to formalize a substantial fragment of
real analysis within it, including showing that a version of the least
upper bound principle holds. They comment explicitly (Hilbert &
Bernays 1970: 482) that the possibility of representing sequences of
real numbers by individual number-theoretic functions, as is now done
in reverse mathematics by individual sets of natural numbers (see
§3.3
for details), depends on being able to define an invertible mapping
from (countable) sequences of number-theoretic functions onto the
individual number-theoretic functions, which itself arises from the
invertible mapping from pairs of natural numbers to individual
numbers. The existence and use of such representations can be
understood as a continuation of the tradition of arithmetization
developed in the nineteenth century, albeit now within a
precisely-delimited formal system rather than within informal
mathematics.

Arithmetization can thus be understood as an essential component of the development of reverse mathematics, since it shows how to reduce complex mathematical notions such as real numbers, continuous functions, measures and so on to more basic ones, namely the notion of natural number and the notion of collection or set. This reductive spirit retains its importance throughout the history of reverse mathematics, although it manifests itself in a number of different ways. One is through Hilbert’s program of reducing infinitary to finitary mathematics, and its continuation through the reductive program in proof theory. Another is the omnipresent use of coding to represent a wide variety of mathematical objects within the seemingly austere ontology of second-order arithmetic (see particularly §3.2–§3.3 for an introduction to the essentials of coding in reverse mathematics).

For readers with an interest in tracing the development of
second-order arithmetic in more detail, from the work of Hilbert and
Bernays on, the work of Dean & Walsh (2017) is an important
starting point. Sieg & Ravaglia (2005) review the contents of the
two volumes of the first edition of the *Grundlagen der
Mathematik*, albeit without much discussion of Supplement IV.
Hilbert’s lectures on the foundations of arithmetic and logic
(Ewald & Sieg 2013) also provide much more detail on the
development of the ideas that ultimately appeared in the
*Grundlagen*. Sieg (1984, 2020) provides detailed accounts of
some of the conceptual advances related in this section, and their
philosophical ramifications. Additional references can be found in the
bibliography of the entry on
Hilbert’s program.

### 1.3 From recursive counterexamples to reverse mathematics

Although it has deep roots in the work of the Hilbert school
(§1.2),
reverse mathematics is perhaps best understood methodologically (as
well as historically) as an outgrowth of computability theory, and of
definability theory more generally. In particular, the method of
reversals can be seen as originating in the tradition of *recursive
counterexamples* to classical theorems. These are computable
objects which witness the failure of classical mathematical theorems
when their quantifiers are restricted to computable (recursive)
objects. A characteristic example of a recursive counterexample is a
*Specker sequence*: a computable, bounded increasing sequence
of real numbers with no computable limit. Specker sequences are
recursive counterexamples to theorems like the monotone convergence
theorem and the Bolzano–Weierstraß theorem.

Specker (1949) refrained from drawing explicit philosophical
conclusions from his results, but it seems clear that he understood
them as applying not only to computable analysis, but also to
constructive mathematics. This was a common attitude amongst logicians
of the time: because the formal notion of a recursive function allows
us to give a precise mathematical characterization of the informal
notions of algorithm and effective procedure, it was natural to
postulate that it could also allow us to understand the informal
notion of a mathematical construction. The use of computability theory
to study constructive mathematics was pioneered by Kleene with his
realizability interpretation
for intuitionistic logic (Kleene 1945). Kleene’s work on
realizability and on Brouwer’s fan theorem (Kleene 1952) led to
his construction of the *Kleene tree*, a computable tree with
no computable path, which was important for the discovery of the
computability-theoretic properties of weak König’s lemma.
An independent but equally important development was that of recursive
constructive mathematics within Markov’s school in Russia, which
also sought to understand the concept of a construction in terms of
the concept of recursive function (Nagorny 1994; Kushner 2006;
Moschovakis to appear).

Many constructivists either accept a form of
Church’s thesis—the
claim that every constructive process is extensionally equivalent, in
terms of its outputs, to a computable function—or accept a
system which is consistent with it. Theorems like the monotone
convergence theorem are not compatible with this approach, because as
Specker discovered, there are computable (and thus constructive)
bounded sequences of rational numbers with no computable limit.
Therefore, the limit of a Specker sequence does not exist for the
constructivist, not only because it implies the falsity of the
constructive Church’s thesis, but also because it implies a
principle known as the *limited principle of omniscience* (or
LPO), a weakening of the law of the excluded middle which
constructivists typically also deny (Mandelkern 1988, 1989).

The discovery that classical mathematical theorems like the monotone
convergence theorem or König’s lemma implied the existence
of non-computable sets was a major step towards reverse mathematics.
This was because such results typically give more information than
merely the existence of a non-computable set: they show the existence
of non-computable sets of a particular complexity, lying at a certain
point in the
arithmetical or analytical hierarchies.
For example, Specker’s proof shows that the existence of limits
of bounded sequences of rational numbers implies the existence of the
halting set *K*, the set of codes for Turing machines which halt
when run on their own code as input.
This is the idea underlying the reversal from the monotone convergence
theorem to arithmetical comprehension.

The 1950s saw further interest in the use of tools from computability theory (then known as recursive function theory) and definability theory more generally in studying the foundational programs from earlier in the century such as intuitionism, finitism, and predicativism. All of them were viewed as involving a greater or lesser degree of constructivism, and this naturally led to an attempt to analyse them in terms of definable classes of sets or functions of natural numbers and real numbers. The most basic was the identification of the class of computable real numbers as a model for constructive mathematics (Mostowski 1959). The arithmetical sets were also studied, by Grzegorczyk (1955), Kondô (1958), and Mostowski (1959), as providing a natural model for the predicativism of Weyl (1918) (§5.4). Following Bernays (1935), more platonistic theories were associated with more encompassing notions of computability or definability (Dean & Walsh 2017: 2.2–2.3). Perhaps the culmination of this movement was Kreisel’s tentative proposal in 1960 to identify the predicatively definable sets with the class of hyperarithmetical sets (§5.5), and his associated work on the Cantor–Bendixson theorem, which he showed was false when its quantifiers were restricted to the hyperarithmetical sets (Kreisel 1959). Kreisel’s result can be understood as a recursive counterexample, and indeed a hyperarithmetical counterexample, to the Cantor–Bendixson theorem. Internalising the result led to the proof of the equivalence between the Cantor–Bendixson theorem and the axiom scheme of \(\Pi^1_1\) comprehension (§4.4).

The work of Errett Bishop in the 1960s also exerted an influence on
reverse mathematics. The revival of interest in constructivism
following the 1967 publication of Bishop’s *Foundations of
Constructive Analysis* and his forceful advocacy for a
constructive approach to mathematics led to new work on the limits of
constructive mathematics. Many of the recursive counterexamples to
classical theorems discovered in the 1970s and 1980s were later
transformed into proofs of reversals from those theorems to
non-constructive set existence principles. For example, Brown and
Simpson’s (1986) reversal of the Hahn–Banach theorem for
separable Banach spaces to weak König’s lemma
(§4.1)
is based on a recursive counterexample to the Hahn–Banach
theorem due to Metakides, Nerode, & Shore (1985), who were in turn
inspired by a Brouwerian counterexample to the Hahn–Banach
theorem constructed by Bishop himself.

### 1.4 The early development of reverse mathematics

Despite its many antecedents, reverse mathematics as a distinctive discipline, with a specific formal setting and methodology, did not emerge until the mid-1970s. Its emergence can be traced quite precisely, to Harvey Friedman’s talk at the 1974 International Congress of Mathematicians in Vancouver. Entitled “Some Systems of Second Order Arithmetic and Their Use”, and published in the Proceedings of the ICM (Friedman 1975), Friedman’s paper presents a vision of reverse mathematics that is strikingly close to the mature program described in the main sections of this entry. The formal setting is that of second-order arithmetic and its subsystems, and the Big Five subsystems (\(\sfRCA_0\), \(\WKL_0\), \(\ACA_0\), \(\ATR_0\), and \(\Pi^1_1\text{-}\CA_0\)) and their characteristic axioms are all present.

Friedman’s ICM paper presented a range of equivalences with the
members of the Big Five, modulo a base theory \(\rmRCA\) that only
proved the existence of computable sets. Amongst the statements
studied were formalizations of the fundamental theorems of analysis
from the final chapter of Dedekind’s *Stetigkeit und
irrationale Zahlen* (1872), all of which are provably equivalent
to one another and to the axiom scheme of arithmetical comprehension,
modulo the base theory \(\rmRCA\). Specker’s proof that there
exist computable bounded sequences of rational numbers whose limits
compute the halting problem was thus transformed into a proof that the
sequential completeness and sequential compactness of the real line
are equivalent to the existence of arithmetically definable sets.

On a technical level, the most substantial difference between the formalism of Friedman’s (1975) paper and that used in contemporary reverse mathematical practice is that his base theory \(\rmRCA\) includes the full induction scheme (\ref{eq:ind_scheme}). Friedman subsequently showed in 1976 that the equivalences of his 1975 paper could be proved within a weaker base theory called \(\rmRCA_0\), in which the full induction scheme is replaced by quantifier-free induction. However, in the process of weakening the induction principle Friedman substantially changed the definition of the base theory from that used in his earlier paper. The resulting system \(\rmRCA_0\) is essentially a second-order version of primitive recursive arithmetic. While formally rather different, this does result in a system that is proof-theoretically very close to the now-standard base theory \(\sfRCA_0\) (§3).

The modern form of \(\sfRCA_0\) first appears in print in an influential paper by Friedman, Simpson, and Smith (1983), with the axioms of \(\PA^-\) for \(\bbN\) as an ordered semiring, the recursive comprehension axiom scheme, and the \(\Sigma^0_1\) induction scheme. By 1983 reverse mathematics was thus substantially as we know it today, even down to the use of the slogan “reverse mathematics”, which Friedman coined during an AMS special session organized by Stephen Simpson (2009: 35).

The major developments that followed in the 1980s and 1990s, which saw
reverse mathematics become a major subfield of computability theory,
are to a large extent due to the work of Stephen Simpson and his
doctoral students. They systematically investigated the reverse
mathematical status of hundreds of theorems spread across many
disparate fields of mathematics, from functional analysis to
combinatorics, commutative algebra to measure theory, order theory to
general topology. Many of the fruits of this research program are
surveyed in Simpson’s textbook, *Subsystems of Second Order
Arithmetic*, first published in 1999. A second edition was
published in 2009.

## 2. Second-Order Arithmetic and Its Subsystems

### 2.1 The language of second-order arithmetic

The language of second-order arithmetic \(\calL_2\) is a two-sorted
language which augments the language of first-order arithmetic,
familiar from systems such as Peano arithmetic, with a second sort of
variables representing sets of natural numbers. Variables of the first
sort \(x_0, x_1, x_2, \dotsc\) are called *number variables*
and are intended to range over natural numbers, while variables of the
second sort \(X_0, X_1, X_2, \dotsc\) are called *set
variables* and are intended to range over sets of natural numbers.
The nonlogical symbols of second-order arithmetic are: the constant
symbols 0 and 1; the binary function symbols \(+\) and \(\times\); and
the binary relation symbols \(\lt\) and \(\in\). A *numerical
term* is either a number variable, 0 or 1, or has the form \(t_1 +
t_2\) or \(t_1 \times t_2\) where \(t_1\) and \(t_2\) are numerical
terms (the only terms of the second sort are set variables). An
*atomic formula* has the form \(t_1 = t_2\), \(t_1 \lt t_2\),
or \(t_1 \in X\), where \(t_1\) and \(t_2\) are numerical terms and
\(X\) is any set variable. Second-order arithmetic can be presented
using any standard classical proof system for a two-sorted language.
The identity relation for sets is defined as coextensionality:

\[\label{eq:extensionality} X = Y \leftrightarrow \forall{n}(n \in X \leftrightarrow n \in Y)\]

### 2.2 Semantics for second-order arithmetic

The name “second-order arithmetic” can be confusing. Coming across it for the first time, one might easily think that its name implies that it uses second-order logic, which is familiar from other contexts in the philosophy of mathematics such as logicism and mathematical structuralism. In the so-called standard semantics for second-order logic (sometimes also called full semantics), the second-order quantifiers are interpreted as ranging over the entire powerset of the first-order domain. For example, in a model \(\calM\) of second-order Peano arithmetic \(\PA^2\), i.e., Peano arithmetic formulated in second-order logic, the range of the second-order quantifiers is \(\powset{|\calM|}\), where \(|\calM|\) is the range of the first-order quantifiers. It follows from Dedekind’s categoricity theorem that the system \(\PA^2\) only has one model up to isomorphism. This means that, at least for second-order semantic entailment \(\models_2\), \(\PA^2\) is semantically complete: for every \(\varphi \in \calL_2\), either \(\PA^2 \models_2 \varphi\) or \(\PA^2 \models_2 \neg\varphi\).

It is therefore important to note that despite its name, the semantics
used in reverse mathematics for the language \(\calL_2\) of
second-order arithmetic is not the standard second-order semantics,
but the general or Henkin semantics—in other words, those of
first-order logic. An *\(\calL_2\)-structure* is an ordered
tuple of the form

\[\calM = (M, \calS^\calM, 0^\calM, 1^\calM, +^\calM, \times^\calM, \lt^\calM)\]

where \(M\) is the domain over which number variables range, \(\calS^\calM\) is a set of subsets of \(M\) over which set variables range, \(0^\calM\) and \(1^\calM\) are distinguished elements of \(M\), \(+^\calM\) and \(\times^\calM\) are binary operations on \(M\), and \(\lt^\calM\) is a binary relation on \(M\). The domains \(M\) and \(\calS^\calM\) are always assumed to be disjoint and nonempty.

Since the range of the second-order quantifiers can be any nonempty set \(\calS^\calM \subseteq \powset{|\calM|}\), there are many non-isomorphic \(\calL_2\)-structures. Adding axioms does not change this situation: both the formal system of second-order arithmetic \(\sfZ_2\) (introduced in §2.3) and its subsystems have many non-isomorphic models. Departing from the standard semantics of second-order logic is in fact an essential part of reverse mathematics, since in order to prove that a principle \(T_0\) does not imply another principle \(T_1\), one must prove that there is a model \(\calM\) such that \(\calM \models T_0\) and \(\calM \not\models T_1\). If every subsystem we considered had the same unique model, this would be impossible.

A basic example is provided by the \(\calL_2\)-structure

\[\calR = (\omega, \REC, 0, 1, +, \cdot, \lt)\]

where \(\REC= \set{ X : X \text{ is computable}}\). Here \(\omega\) is
the standard natural numbers \(0,1,2,\dotsc\), and the symbols
\(0,1,+,\cdot,\lt\) have their standard interpretations. \(\calR\) is
thus what is known as an *\(\omega\)-model*, one whose
*first-order part* is the standard natural numbers.
\(\omega\)-models are thus distinguished from one another entirely by
their *second-order parts*. In the case of \(\calR\), its
second-order part consists of \(\REC\), the set of all computable sets
of natural numbers. Since different \(\omega\)-models differ from one
another only in their second-order parts, they are frequently referred
to by the names for those parts. The remainder of this essay will
consequently refer to \(\calR\) as \(\REC\), the \(\omega\)-model
\(\calA\) whose second-order part consists of the arithmetically
definable sets will be referred to as \(\ARITH\), and so on.

Although many classical mathematical principles such as the intermediate value theorem \(\IVT\) are true in \(\REC\), others are not, such as the least upper bound axiom \(\LUB\), which is equivalent to arithmetical comprehension (§4.2). From this we can conclude that \(\IVT\) does not entail \(\LUB\). This model also allows us to demonstrate an instance of incompleteness. The base theory \(\sfRCA_0\) which we will introduce in §3 is true in \(\REC\), and since \(\REC\not\models \LUB\), it follows by the soundness theorem for first-order logic that \(\sfRCA_0 \not\vdash \LUB\). In this sense, reverse mathematics is a study of incompleteness: it studies a hierarchy of theorems which are equivalent to certain set existence principles, but which are not proved either by the base theory or by strictly weaker principles in the hierarchy (Simpson 2010; Eastaugh 2019).

### 2.3 Axioms of second-order arithmetic

The *formal system of second-order arithmetic*, or \(\sfZ_2\),
has as axioms the universal closures of the following formulas of
\(\calL_2\).

- \(\PA^-\), the axioms for a discrete ordered semi-ring:
  - \(n + 1 \neq 0\)
  - \(m + 1 = n + 1 \rightarrow m = n\)
  - \(m + 0 = m\)
  - \(m + (n + 1) = (m + n) + 1\)
  - \(m \times 0 = 0\)
  - \(m \times (n + 1) = (m \times n) + m\)
  - \(\neg m \lt 0\)
  - \(m \lt n + 1 \leftrightarrow (m \lt n \vee m = n)\)

- Induction axiom: \[\label{eq:ind_axiom} \tag{\(\sfInd\)} (0 \in X \wedge \forall{n}(n \in X \rightarrow n + 1 \in X)) \rightarrow \forall{n}(n \in X)\]
- Comprehension scheme:
\[\label{eq:full_comp_scheme}
\tag{\(\Pi^1_\infty\text{-}\CA\)}
\exists{X}\forall{n}(n \in X \leftrightarrow
\varphi(n))\]
where \(\varphi(n)\) is any \(\calL_2\)-formula with \(X\) not free, but which may have other free set and number variables.

By applying the comprehension scheme (\ref{eq:full_comp_scheme}) and
the induction axiom (\ref{eq:ind_axiom}), \(\sfZ_2\) can prove the
*induction scheme*, which consists of the universal closures
of

\[\label{eq:ind_scheme} (\varphi(0) \wedge \forall{n}(\varphi(n) \rightarrow \varphi(n + 1))) \rightarrow \forall{n}\,\varphi(n)\]

where \(\varphi(n)\) is any \(\calL_2\)-formula.

Second-order arithmetic is sometimes referred to as the first-order
theory of analysis, since the set variables can be interpreted as
ranging over real numbers. Many older presentations employ a
functional calculus where set variables are replaced by variables
\(f^k\) for *k*-ary functions \(f : \bbN^k \rightarrow \bbN\).
Grzegorczyk, Mostowski, and Ryll-Nardzewski (1958) present such a
system, which also includes a
definite description operator
\(\iota\) and a form of the comprehension scheme which they call the
Leśniewski
schemata. In general, results about these variant systems transfer
smoothly from one to another with only minor adjustments.

A *subsystem of second-order arithmetic* is an axiom system
\(T\) where every axiom \(\varphi \in T\) is an \(\calL_2\)-sentence
provable in \(\sfZ_2\). Of the many subsystems of second-order
arithmetic which have been studied, five are of central importance in
reverse mathematics, and are known colloquially as the “Big
Five”. The first of these is the system \(\sfRCA_0\), which is
the standard base theory for reverse mathematics. The other members of
the Big Five (\(\WKL_0\), \(\ACA_0\), \(\ATR_0\), and
\(\Pi^1_1\text{-}\CA_0\)) are proper extensions of \(\sfRCA_0\), and
are each equivalent over the base theory \(\sfRCA_0\) to a multitude
of ordinary mathematical theorems from analysis, algebra, infinitary
combinatorics, logic, and topology. Each of these systems has greater
proof-theoretic strength than the preceding one, in the sense that
they prove theorems that are not provable in the preceding system:
\(\WKL_0\) proves more theorems than \(\sfRCA_0\), \(\ACA_0\) proves
more than \(\WKL_0\), \(\ATR_0\) proves more than \(\ACA_0\), and
\(\Pi^1_1\text{-}\CA_0\) proves more than \(\ATR_0\). Note that this
ordering by inclusion is not the same as the consistency strength
ordering: while most members of the Big Five prove the consistency of
the preceding system, this does not hold in the case of \(\sfRCA_0\)
and \(\WKL_0\), which are equiconsistent by the conservativity results
discussed in
§4.1.

The “0” subscript at the end of these systems’ names
means that the principle of induction is restricted. In the case of
\(\ACA_0\) and stronger systems like \(\ATR_0\) and
\(\Pi^1_1\text{-}\CA_0\), this restriction involves them containing
only the induction *axiom* (\ref{eq:ind_axiom}), not the
induction *scheme* (\ref{eq:ind_scheme}). They can therefore
only prove instances of the induction scheme where they can prove that
the corresponding set exists: for example, \(\ACA_0\) can prove any
instance of the induction scheme where the formula involved is
arithmetical (i.e., it does not contain any set quantifiers), while
\(\Pi^1_1\text{-}\CA_0\) can prove any instance of the induction
scheme where the formula involved is \(\Pi^1_1\). In the case of
systems weaker than \(\ACA_0\) like \(\sfRCA_0\) and \(\WKL_0\), the
induction axiom is too weak for the systems to be able to do much
mathematically. They are therefore augmented by the *\(\Sigma^0_1\)
induction scheme*, which is the restriction of
(\ref{eq:ind_scheme}) to \(\Sigma^0_1\) formulas, those of the form
\(\exists{x}\theta(x)\) where \(x\) is a number variable and
\(\theta\) contains only bounded number quantifiers and no set
quantifiers, although it may contain free number and set variables.

In the next two sections we will take a closer look at the Big Five subsystems, including the mathematics that can be done in each of them but not in a weaker subsystem, as well as some illuminating details of their model-theoretic and proof-theoretic properties. The close connections between reverse mathematics and computability theory mean that it will sometimes be useful to refer to relevant sections of the entry on recursive functions.

## 3. The Base Theory

### 3.1 Recursive comprehension

The subsystems of second-order arithmetic used in reverse mathematics are typically defined by replacing the comprehension scheme (\ref{eq:full_comp_scheme}) with a weaker axiom. The most fundamental of these subsystems is called \(\sfRCA_0\), where RCA stands for “recursive comprehension axiom” and the 0 subscript indicates that its induction principle is restricted. The axioms of \(\sfRCA_0\) consist of the basic axioms; the recursive (or \(\Delta^0_1\)) comprehension scheme consisting of the universal closures of all formulas of the form

\[\tag{\(\Delta^0_1\text{-}\CA\)} \forall{n}(\varphi(n) \leftrightarrow \psi(n)) \rightarrow \exists{X}\forall{n}(n \in X \leftrightarrow \varphi(n))\]

where \(\varphi(n)\) is a \(\Sigma^0_1\)-formula and \(\psi(n)\) is a
\(\Pi^0_1\)-formula, both of which may have additional free number and
set variables; and the \(\Sigma^0_1\)-induction scheme, which is the
restriction of the induction scheme (\ref{eq:ind_scheme}) to
\(\Sigma^0_1\)-formulas. The name “recursive
comprehension” refers to the fact that the
\(\Delta^0_1\)-definable subsets of \(\omega\) are exactly those which
are
*recursive*
(in contemporary terminology, *computable*), or to use another
extensionally equivalent characterization, *Turing computable*
(this is
Post’s theorem).
\(\sfRCA_0\) is the base theory for most of reverse mathematics:
given a theorem \(\tau\) which is provable in some subsystem of
second-order arithmetic *S* that extends \(\sfRCA_0\), we obtain
a reversal from \(\tau\) to *S* when we show that the axioms of
*S* are provable from the axioms of \(\sfRCA_0 + \tau\).

The presence of \(\Sigma^0_1\) induction in the system allows one to derive many of the familiar algebraic properties of natural numbers: the associativity and commutativity of addition and multiplication, that 0 and 1 are the identity elements for addition and multiplication respectively, and the distributivity of multiplication over addition. These results can be summarized by saying that \(\sfRCA_0\) proves that the system \((\bbN, +, \cdot, 0, 1, \lt)\) is a commutative ordered semiring with cancellation; full details can be found in lemma II.2.1 of Simpson (2009: 65–66).

### 3.2 Pairing, sequences, and the basics of coding

All coding, however complex and multi-layered, is ultimately built on the techniques that allow one to code an ordered pair \(\str{a, b}\) or a finite sequence \(\str{a_1, a_2, \dotsc, a_k}\) of natural numbers by a single natural number \(n\) in such a way that the components \(a\) and \(b\) or elements \(a_1, \dotsc, a_k\) can be recovered from the code \(n\) in an effective manner. One simple coding scheme defines the number \((a,b)\) coding the ordered pair of numbers \(\str{a,b}\) as the number \((a + b)^2 + a\), where \(m^2\) abbreviates \(m \times m\). Within \(\sfRCA_0\) we can verify that for any \(a,b,a',b' \in \bbN\), if \((a,b) = (a',b')\) then \(a = a'\) and \(b = b'\). With this pairing function in hand we can define binary relations and single argument functions on \(\bbN\): a binary relation is just a set \(X \subseteq \bbN\) such that for all \(n \in X\), there exist \(a, b \leq n\) such that \(n = (a,b)\), while a single argument function \(f : \bbN\rightarrow \bbN\) is a binary relation on \(\bbN\) such that for all pairs \((a,b)\) and \((c,d) \in f\), if \(a = c\) then \(b = d\).
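As an illustration, the pairing scheme just described is easy to implement and to invert; the following sketch (the function names are ours) recovers the components using the integer square root:

```python
from math import isqrt

def pair(a, b):
    # The coding from the text: (a, b) |-> (a + b)^2 + a.
    return (a + b) ** 2 + a

def unpair(n):
    # Since (a+b)^2 <= n < (a+b+1)^2, the integer square root of n
    # recovers s = a + b, after which a = n - s^2 and b = s - a.
    s = isqrt(n)
    a = n - s * s
    return a, s - a
```

Decoding works because \(a \leq a + b\), so \((a+b)^2 \leq (a+b)^2 + a \lt (a+b+1)^2\); this is exactly the verification that \(\sfRCA_0\) carries out when proving the pairing function injective.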

Coding sequences of arbitrary length requires a little more
artfulness, but can be accomplished by extending the idea used above
to code pairs of numbers. This method uses Sunzi’s theorem to
code finite sequences of numbers by single natural numbers in such a
way that any finite sequence can be recovered from its numerical code
in a computable fashion. The function \(\beta : \bbN\times \bbN\to
\bbN\) that produces, for any code for a finite sequence together with
an index, the value of the sequence at that index, is known as
*Gödel’s \(\beta\) function*. The set of all finite
sequences of natural numbers is known as \(\Seq\), and can be proved
to exist by recursive comprehension. The set of all \(s \in \Seq\)
such that \(\lh{s} = k\) is denoted by \(\bbN^k\).
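A minimal illustration of how Sunzi’s (Chinese remainder) theorem yields such a coding: the sketch below (names ours) chooses \(c\) divisible by \(k!\) and larger than every entry, which makes the moduli \(1 + (i+1)c\) pairwise coprime and large enough, and then solves the congruences incrementally. The encoding strategy is one standard choice for illustration, not a formalization within \(\sfRCA_0\):

```python
from math import factorial

def beta(b, c, i):
    # Goedel's beta function: decode the value at index i from the pair (b, c).
    return b % (1 + (i + 1) * c)

def encode(seq):
    """Find (b, c) with beta(b, c, i) == seq[i] for every index i.

    With c a multiple of k! exceeding every entry, the moduli
    m_i = 1 + (i+1)c are pairwise coprime, so by Sunzi's theorem the
    system of congruences b = seq[i] (mod m_i) has a solution, found
    here by stepping through the arithmetic progression b + t*modulus.
    """
    k = len(seq)
    c = factorial(k) * (max(seq) + 1)
    b, modulus = 0, 1
    for i, a in enumerate(seq):
        m = 1 + (i + 1) * c
        while b % m != a:
            b += modulus
        modulus *= m
    return b, c
```

Given a code \((b, c)\), every element of the sequence is recovered by a single division with remainder, which is why the decoding is computable (indeed primitive recursive).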

Once we have a coding scheme for finite sequences, we can represent \(n\)-ary relations and functions within \(\sfRCA_0\) for arbitrary \(n \in \bbN\). It also allows us to code finite sequences of \(n\)-ary functions \(\str{f_1, \dotsc, f_k}\) by a single function \(f : \bbN^n \to \bbN^k\), with \(f_i(x) = (f(x))(i)\). By combining recursive comprehension and \(\Sigma^0_1\) induction with the coding of finite sequences, the theory of primitive recursive functions can be developed within \(\sfRCA_0\). Finally, note that a countable sequence of sets \(\str{X_n : n \in \bbN}\) can be coded by a single set \(X\) by letting

\[X_i = \set{m : (i,m) \in X}.\]

Since functions, relations, real numbers and so on are coded by sets, this technique also allows one to code countable sequences of such objects as single sets.
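The device of coding a countable sequence of sets by a single set can be sketched directly in terms of the pairing function introduced earlier (function names ours; only finite sets are used, since a program can only manipulate finite approximations):

```python
from math import isqrt

def pair(a, b):
    return (a + b) ** 2 + a      # the pairing function from the text

def unpair(n):
    s = isqrt(n)                 # s = a + b
    a = n - s * s
    return a, s - a

def code_sequence(sets):
    # Code <X_0, ..., X_{k-1}> as the single set X = { (i, m) : m in X_i }.
    return {pair(i, Xi_elem) for i, Xi in enumerate(sets) for Xi_elem in Xi}

def component(X, i):
    # Recover X_i = { m : (i, m) in X }.
    return {b for n in X for a, b in [unpair(n)] if a == i}

X = code_sequence([{1, 2}, set(), {0, 7}])
```

The slice \(X_i\) is uniformly computable from \(X\) and \(i\), which is what makes this coding usable inside \(\sfRCA_0\).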

### 3.3 Integer, rational, and real numbers

The ability to code pairs of numbers by single numbers is all that is needed in order to code integers and rational numbers by natural numbers. Integers are represented by pairs \((a,b) \in \bbN\times \bbN\) where the intended interpretation of the pair \((a,b)\) is as \(a - b\). We begin by defining the following operations of addition, subtraction, and multiplication on \(\bbN\times \bbN,\) together with the relations of less-than and equality:

\[\begin{aligned}
(a, b) +_\bbZ (c, d) &= (a + c, b + d)\\
(a, b) -_\bbZ (c, d) &= (a + d, b + c)\\
(a, b) \times_\bbZ (c, d) &= ((a \times c) + (b \times d), (a \times d) + (b \times c))\\
(a, b) \lt_\bbZ (c, d) &\leftrightarrow a + d \lt b + c\\
(a, b) =_\bbZ (c, d) &\leftrightarrow a + d = b + c
\end{aligned}\]

It is also useful to define the absolute value operation \(|n|_\bbZ\):

\[|(a, b)|_\bbZ = \begin{cases} (a, b) & \text{if } b \leq a,\\ (b, a) & \text{otherwise.} \end{cases}\]

If we take these operations alone as defining the integers, then
infinitely many different pairs will define the same integers. For
example, the integer \(-17\) is represented by the pairs \((0,17)\),
\((17,34)\), \((124,141)\), and so on. We therefore define the *set
of integers* \(\bbZ\) as consisting of the \(\lt_\bbN\)-least
members of the equivalence classes induced by the relation \(=_\bbZ\).
\(\sfRCA_0\) is able to prove the existence of \(\bbZ\), and prove
that the system \((\bbZ, +_\bbZ, \cdot_\bbZ, \lt_\bbZ, =_\bbZ, 0_\bbZ,
1_\bbZ)\) is an ordered commutative ring.
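The pair representation of integers, with \((a,b)\) read as \(a - b\), can be sketched as follows; the operations are the standard ones for this construction, and the canonicalization mirrors the choice of \(\lt_\bbN\)-least codes described above (function names ours):

```python
def z_add(x, y):
    # (a - b) + (c - d) = (a + c) - (b + d)
    (a, b), (c, d) = x, y
    return (a + c, b + d)

def z_mul(x, y):
    # (a - b)(c - d) = (ac + bd) - (ad + bc)
    (a, b), (c, d) = x, y
    return (a * c + b * d, a * d + b * c)

def z_eq(x, y):
    # a - b = c - d  iff  a + d = b + c  (no subtraction needed)
    (a, b), (c, d) = x, y
    return a + d == b + c

def z_lt(x, y):
    (a, b), (c, d) = x, y
    return a + d < b + c

def z_canon(x):
    # Subtract out the common part, giving the representative with the
    # least numerical code in its =_Z equivalence class.
    a, b = x
    m = min(a, b)
    return (a - m, b - m)
```

Note that equality and order are expressed entirely with addition, so every clause is a formula of \(\calL_2\) about natural numbers.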

Rational numbers are defined in a similar way, as pairs of (codes of) integers (and thus as pairs of pairs of natural numbers). We first define the set of positive integers \(\bbZ^+ = \set{(m,n) : (m,n) \in \bbZ\wedge n \lt m}\). The operations of addition, subtraction, and multiplication of rational numbers, together with the less-than and equality relations, can then be defined as follows:

\[\begin{aligned}
(a, b) +_\bbQ (c, d) &= ((a \times d) + (c \times b), b \times d)\\
(a, b) -_\bbQ (c, d) &= ((a \times d) - (c \times b), b \times d)\\
(a, b) \times_\bbQ (c, d) &= (a \times c, b \times d)\\
(a, b) \lt_\bbQ (c, d) &\leftrightarrow a \times d \lt c \times b\\
(a, b) =_\bbQ (c, d) &\leftrightarrow a \times d = c \times b
\end{aligned}\]

For the sake of readability, we suppress the subscript
“\(\bbZ\)” on the operations and relations on the
right-hand-sides of the definitions. As with the integers, the set
\(\bbQ\) of *rational numbers* consists of the elements of
\(\bbZ\times \bbZ^+\) which are the \(\lt_\bbN\)-least members of
their equivalence class under the equivalence relation \(=_\bbQ\).
\(\sfRCA_0\) can prove the existence of \(\bbQ\) as a set, and show
that the system \((\bbQ, +_\bbQ, \cdot_\bbQ, \lt_\bbQ, =_\bbQ, 0_\bbQ,
1_\bbQ)\) is an ordered field.

One way to understand the coding schemes for integer and rational
numbers just described is as defining bijective functions between
those number systems and subsets of the natural numbers. As Cantor
showed, there can be no such bijection between the irrational numbers
and the natural numbers. In order to code irrational numbers, and
hence the real numbers in general, it is therefore necessary to use
sets of natural numbers. The following approach, using fast-converging
Cauchy sequences, is the standard way to do this in reverse
mathematics. A *sequence of rational numbers* is a function \(f
: \bbN\rightarrow \bbQ\), which we will typically denote by \(\str{x_k
: k \in \bbN}\) with \(x_k = f(k)\). A *real number* is a
sequence of rational numbers \(x\) obeying the following convergence
condition: for all \(n,m \in \bbN\), \(|x_n - x_{n+m}| \leq 2^{-n}\),
where \({|(a,b)|}_\bbQ= (|a|, b)\) is the absolute value operation.
Two reals \(x\) and \(y\) are said to be *equal*, \(x =_\bbR
y\), if \(|x_n - y_n| \leq 2^{-n+1}\) for all \(n\). Note that
\(=_\bbR\) is an equivalence relation on sets of natural numbers, not
the identity relation (\ref{eq:extensionality}) on sets. We cannot
select particular sets from the equivalence classes to represent real
numbers because the base theory is too weak, and the language itself
does not allow us to quantify over the equivalence classes themselves.
Moreover, while reverse mathematicians often write things like
“\(x \in \bbR\)”, strictly speaking this is an abuse of
notation, because the set \(\bbR\) of all real numbers is a
third-order object which is not directly represented in second-order
arithmetic. Expressions like “\(x \in \bbR\)” should be
interpreted as a convenient shorthand for “\(x\) is a real
number”, where the property of being a real number is that
described above, i.e., the arithmetically-definable property of coding
a fast-converging Cauchy sequence of rational numbers.

The rational numbers can be embedded into the reals by representing the rational number \(q\) with the constant sequence \(r_q = \str{q_n : n \in \bbN}\), where \(q_n = q\) for all \(n \in \bbN\). Addition, subtraction, and multiplication of (codes for) real numbers can be done in terms of addition, subtraction, and multiplication of rationals. Given real numbers \(x = \str{x_n : n \in \bbN}\) and \(y = \str{y_n : n \in \bbN}\),

\[\begin{aligned}
x +_\bbR y &= \str{x_{n+1} + y_{n+1} : n \in \bbN},\\
x -_\bbR y &= \str{x_{n+1} - y_{n+1} : n \in \bbN},\\
x \times_\bbR y &= \str{x_{n+k} \times y_{n+k} : n \in \bbN}
\end{aligned}\]

for the least \(k\) such that \(|x_0| + |y_0| + 2 \leq 2^k\). Finally, we define the ordering \(\leq_\bbR\) by saying that \(x \leq_\bbR y\) if and only if for all \(k \in \bbN\), \(x_k \leq y_k + 2^{-k+1}\); \(x \lt_\bbR y\) if \(x \leq_\bbR y\) and \(x \neq_\bbR y\). As before, to aid readability we suppress the “\(\bbQ\)” subscripts on the operations and relations in the definitions above, since it is clear from context when addition, subtraction, multiplication etc. are operations on rational numbers rather than integers or natural numbers. \(\sfRCA_0\) suffices to show that the system \((\bbR, +_\bbR, \cdot_\bbR, 0_\bbR, 1_\bbR, \lt_\bbR, =_\bbR)\) thus defined satisfies the axioms of an Archimedean ordered field, although given the cardinality restrictions, the system itself is not an object of the theory.
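Arithmetic on these codes can be sketched concretely, modelling a real as a function from indices to rationals. The index shift of 1 used for addition below is one standard choice that preserves fast convergence (an assumption of this sketch); the shift \(k\) for multiplication follows the condition stated in the text. Equality of reals is a universally quantified condition, so a program can only spot-check finitely many instances of it:

```python
from fractions import Fraction

def from_rat(q):
    q = Fraction(q)
    return lambda n: q            # the constant sequence r_q

def r_add(x, y):
    # Shift by 1: |x_{n+1} - x_{n+m+1}| + |y_{n+1} - y_{n+m+1}|
    #             <= 2^-(n+1) + 2^-(n+1) = 2^-n, so the sum is fast-converging.
    return lambda n: x(n + 1) + y(n + 1)

def r_mul(x, y):
    # Shift by the least k with |x_0| + |y_0| + 2 <= 2^k, as in the text.
    k = 0
    while abs(x(0)) + abs(y(0)) + 2 > 2 ** k:
        k += 1
    return lambda n: x(n + k) * y(n + k)

def r_eq(x, y, checks=20):
    # x =_R y requires |x_n - y_n| <= 2^(-n+1) for ALL n; we verify only
    # finitely many instances.
    return all(abs(x(n) - y(n)) <= Fraction(2) ** (1 - n) for n in range(checks))

third = from_rat(Fraction(1, 3))
three = from_rat(3)
```

The representation makes vivid why \(=_\bbR\) is an equivalence relation on codes rather than literal identity: distinct sequences routinely denote the same real.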

Countable sequences of real numbers can also be coded as sets of
natural numbers. This is a very powerful device that allows the
formalization of statements from real analysis like the
Heine–Borel and Bolzano–Weierstraß theorems. A
*sequence of real numbers* is a function \(f : \bbN\times
\bbN\to \bbQ\) such that for each \(n \in \bbN\), the function \((f)_n
: \bbN\to \bbQ\), where \((f)_n(k) = f(k,n)\), is a real number. As
with sequences of rational numbers, we will often talk about sequences
of real numbers using notation like \(\str{x_n : n \in \bbN}\), where
\(x_n = (f)_n\). A sequence of reals \(\str{x_n : n \in \bbN}\) is
said to *converge* to a limit \(x\), \(x = \lim_n x_n\), if for
all \(\varepsilon \gt 0\) there exists an \(n\) such that for all
\(i\), \(|x - x_{n+i}| \lt \varepsilon\). Sequences where the limit
\(\lim_n x_n\) exists are described as *convergent*.

These notions can be used to formulate sequential versions of standard
theorems from analysis. For example, the *sequential least upper
bound axiom* states that every bounded sequence of real numbers
has a least upper bound. This principle is not provable in
\(\sfRCA_0\), and in fact is equivalent to \(\ACA_0\). However, a
weaker principle called nested interval completeness is provable in
\(\sfRCA_0\), and can be used to prove a number of fundamental results
including the intermediate value theorem, a version of Cantor’s
theorem that the real numbers are uncountable, and a version of the
Baire category theorem (Simpson 2009: 76–77).

### 3.4 \(\omega\)-models and computability

The recursive comprehension axiom scheme gets its name from the fact that the \(\Delta^0_1\)-definable sets of natural numbers are precisely the recursive (that is, computable) sets of natural numbers. As such, the set

\[\REC= \set{ X \subseteq \omega : X \text{ is computable}}\]

has a special role, as the second-order part of the \(\omega\)-model
\(\calR\) of \(\sfRCA_0\) introduced in
§2.2.
This is one of a special class of \(\calL_2\)-structures known as
*\(\omega\)-models* in which the first-order domain is always
the standard natural numbers \(\omega\), and the \(+\), \(\times\),
and \(\lt\) symbols are interpreted by the standard (computable)
operations of addition and multiplication and the standard less-than
relation. Since the first-order part is fixed, the only part of these
structures that can vary is the range of the second-order quantifiers.
The \(\omega\)-models of different subsystems of second-order
arithmetic often have interesting computability-theoretic properties
which are bound up with the set existence axioms that differentiate
them. Since their first-order part is standard, all \(\omega\)-models
satisfy the full induction scheme (\ref{eq:ind_scheme}).

The \(\omega\)-models of \(\sfRCA_0\) are exactly the *Turing
ideals*: subsets of \(\powset{\omega}\) which are closed under
recursive joins and
relative Turing computability.
The smallest Turing ideal is \(\REC\), and as such, \(\REC\) is also
the minimum \(\omega\)-model of \(\sfRCA_0\). We can think of \(\REC\)
as, in some sense, the intended model of \(\sfRCA_0\): the axioms of
\(\sfRCA_0\) only assert the existence of computable sets (or more
precisely, the existence of sets which are computable in some
parameter), and in \(\REC\) all sets are computable. The existence of
a minimum \(\omega\)-model of the system is a property that
\(\sfRCA_0\) shares with other subsystems of second-order arithmetic
whose characteristic axioms are comprehension schemes. The minimum
\(\omega\)-model of \(\ACA_0\) is the class \(\ARITH\) of
arithmetically definable sets, while the minimum \(\omega\)-model of
\(\Delta^1_1\text{-}\CA_0\) is the class \(\HYP\) of hyperarithmetical
sets. \(\Pi^1_1\text{-}\CA_0\) does not have a minimum
\(\omega\)-model, but it does have a minimum \(\beta\)-model, where
\(\beta\)-models are \(\omega\)-models that additionally satisfy
all true \(\Sigma^1_1\) sentences, not just all true arithmetical
sentences as \(\omega\)-models do.
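The closure conditions defining a Turing ideal can be illustrated with the recursive join \(X \oplus Y = \set{2n : n \in X} \cup \set{2n + 1 : n \in Y}\), from which each of \(X\) and \(Y\) is uniformly computable. A sketch on finite sets (function names ours; actual Turing ideals contain infinite sets, which a program can only approximate):

```python
def join(X, Y):
    # The recursive join X (+) Y: X on the evens, Y on the odds.
    return {2 * n for n in X} | {2 * n + 1 for n in Y}

def left(Z):
    # Recover X from X (+) Y by halving the even part.
    return {n // 2 for n in Z if n % 2 == 0}

def right(Z):
    # Recover Y from X (+) Y by halving the odd part.
    return {n // 2 for n in Z if n % 2 == 1}
```

Closure under join means an ideal never contains two sets without containing a single set coding both; closure under relative computability then sweeps in everything computable from that code.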

If a statement \(\varphi\) in the language of second-order arithmetic
holds in the \(\omega\)-model whose second-order part is \(\REC\),
then we say that \(\varphi\) is *computably true*, since it is
valid when its set quantifiers are restricted so that they only range
over computable sets (and thus over computable codes for other
mathematical objects which we might think of as being computable).
Since all of the axioms of \(\sfRCA_0\) are true in \(\REC\),
\(\sfRCA_0\) is computably true, and can be thought of as a natural
axiomatic system in which to formalize computable analysis, or
computable mathematics more generally. In this respect \(\sfRCA_0\) is
substantially different from most other subsystems of second-order
arithmetic which are studied in reverse mathematics. The
characteristic axioms of systems like \(\WKL_0\), \(\ACA_0\),
\(\ATR_0\) and \(\Pi^1_1\text{-}\CA_0\) can be understood in
computability-theoretic terms as asserting the existence of certain
classes of non-computable sets, and their \(\omega\)-models are
therefore guaranteed to contain non-computable sets of different
degrees of complexity, measured in terms of hierarchies such as the
arithmetical or analytical hierarchies,
or in terms of the
Turing degrees.
In this sense reverse mathematics can be thought of as measuring the
degree of non-constructivity of classical mathematical theorems.

## 4. The Big Five

### 4.1 Weak König’s lemma

When we leave the base theory \(\sfRCA_0\) and climb up the reverse
mathematical hierarchy, the first major milestone we reach is the
system \(\WKL_0\). This system is obtained by adding to the axioms of
\(\sfRCA_0\) the nonconstructive set existence axiom *weak
König’s lemma*, or WKL. The König infinity lemma,
due to Dénes König (1927), states that every finitely
branching infinite tree has an infinite path through it. WKL is
obtained by restricting König’s infinity lemma to countable
trees of binary sequences. To state it in more precise terms, we first
need to give a more formal presentation of trees.

Let \(X \subseteq \bbN\) be a set of natural numbers. We denote the
set of all finite sequences of elements of \(X\) by \(X^{\lt\bbN}\).
Two important special cases are when \(X = 2 = \set{0,1}\), and
\(2^{\lt\bbN}\) is thus the set of all finite binary sequences, and
when \(X = \bbN\), and \(\bbN^{\lt\bbN} = \Seq\), the set of all
finite sequences of natural numbers. A *tree \(T\) on \(X\)* is
a set of finite sequences of elements of \(X\) which is closed under
predecessors, so if \(\tau \in T \subseteq X^{\lt\bbN}\), then
\(\tau\restr{n} \in T\) for every \(n \lt \lh{\tau}\). A tree \(T\) on
\(X\) is *finitely branching* if for every \(\tau \in T\),
there are only finitely many \(x \in X\) such that \(\tau \concat
\str{x} \in T\). A *path through \(T\)* is a function \(f :
\bbN\rightarrow X\) such that for all \(n \in \bbN\), the initial
sequence \(f\restr{n} = \str{f(0), f(1), \dotsc, f(n - 1)} \in T\). A
tree \(T\) is *infinite* if it has infinitely many nodes, i.e.,
for all \(n \in \bbN\) there exists \(\tau \in T\) such that \(\tau
\gt n\) (where \(\tau\) is considered simply as a number, rather than
as a sequence). Weak König’s lemma (WKL) is the principle
that for every tree \(T \subseteq 2^{\lt\bbN}\), if \(T\) is infinite
then it has a path. \(\WKL_0\) is the system obtained by adding WKL to
the axioms of \(\sfRCA_0\).
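The definitions of trees and paths can be made concrete for finite approximations. The sketch below (function names ours) checks closure under predecessors and finds a longest branch of a finite binary tree by depth-first search; weak König’s lemma asserts the analogous existence of an *infinite* path through an infinite binary tree, which in general cannot be found computably:

```python
def is_tree(T):
    # Closure under predecessors: every initial segment of a node is a node.
    return all(tau[:n] in T for tau in T for n in range(len(tau)))

def deepest_path(T):
    # Depth-first search for a longest branch of a finite tree on {0, 1},
    # with nodes represented as tuples of 0s and 1s.
    best = ()
    stack = [()]
    while stack:
        tau = stack.pop()
        if len(tau) > len(best):
            best = tau
        for x in (0, 1):
            if tau + (x,) in T:
                stack.append(tau + (x,))
    return best

T = {(), (0,), (1,), (1, 0), (1, 0, 1)}
```

For finite trees the search always terminates; the force of WKL is precisely that the infinite case still has a path even though no algorithm can in general produce one.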

Weak König’s lemma is equivalent over \(\sfRCA_0\) to a very large number of theorems, particularly in real and functional analysis, but also in logic and algebra. It is perhaps best understood as a compactness principle, which can be seen by looking at weak König’s lemma itself, but also at the theorems it is equivalent to, such as the Heine–Borel covering theorem and the compactness theorem for first-order logic. Other notable theorems that are equivalent to WKL include Gödel’s completeness theorem for first-order logic; the separable Hahn–Banach theorem; Brouwer’s fixed point theorem; the theorem that every countable commutative ring has a prime ideal; and Peano’s existence theorem for ordinary differential equations.

Perhaps surprisingly, adding weak König’s lemma to
\(\sfRCA_0\) does not result in any increase in first-order strength.
To explain this fact, we first need to introduce the notion of
*conservativity*. Informally, this is the idea that while a
theory \(S\) may have more expressive resources than a theory \(T\)
does (for example, \(S\) may have a more extensive vocabulary), in the
part of the language that the two theories have in common, \(S\) says
no more than \(T\) does. Formally, let \(S\) and \(T\) be theories in
languages \(\calL_S\) and \(\calL_T\) respectively, and let \(\Gamma\)
be a set of sentences in the language \(\calL_S \cap \calL_T\). \(S\)
is *\(\Gamma\)-conservative* over \(T\) if for every \(\varphi
\in \Gamma\), \(S \vdash \varphi\) iff \(T \vdash \varphi\). Where the
language of \(S\) is an extension of the language of \(T\), \(\calL_T
\subseteq \calL_S\), we simply say that \(S\) is *conservative*
over \(T\). A common case is when \(S\) is a subsystem of second-order
arithmetic and \(T\) is a system of first-order arithmetic. In such
cases, if \(S\) is conservative over \(T\) then we say that the
*first-order part* of \(S\) is \(T\).

Three key conservativity theorems connect \(\sfRCA_0\) and \(\WKL_0\),
and explain their first-order strength. The first is that
\(\rmI\Sigma_1\) is \(\Pi^0_2\)-conservative over \(\PRA\), where
\(\rmI\Sigma_1\) is a subsystem of first-order Peano arithmetic
\(\PA\) obtained by restricting the induction scheme to \(\Sigma_1\)
formulas, and \(\PRA\) is the system of primitive recursive arithmetic
developed by Skolem (1923) and axiomatized by Hilbert & Bernays
(1934). This result was proved by Parsons (1970) and is hence known as
*Parsons’s theorem* (Joosten 2002
[Other Internet Resources];
Ferreira 2005). The second is that \(\rmI\Sigma_1\) is the
first-order part of \(\sfRCA_0\). This is due to Friedman (1976);
proofs can be found in Simpson (2009: IX.1) and Hirschfeldt (2014:
129). The third is that \(\WKL_0\) is \(\Pi^1_1\)-conservative over
\(\sfRCA_0\). This is an unpublished theorem of Harrington. Its proof
was made generally available by Simpson (2009: IX.2), and a somewhat
more detailed version can be found in Hirschfeldt (2014: 7.2).

Combining the first two results, we have that \(\sfRCA_0\) is \(\Pi^0_2\)-conservative over \(\PRA\). This has important ramifications for our understanding of finitism in the sense of Hilbert and Bernays, and the relationship of that program to the formal system \(\sfRCA_0\) (§5.2). With the addition of Harrington’s theorem we can see that the first-order part of \(\WKL_0\) is \(\rmI\Sigma_1\), and that \(\WKL_0\) is \(\Pi^0_2\)-conservative over \(\PRA\). Since consistency statements are \(\Pi^0_1\) sentences, it follows that \(\WKL_0\) is equiconsistent with \(\sfRCA_0\), \(\rmI\Sigma_1\), and \(\PRA\): one can prove in \(\PRA\) that the consistency of any one of these theories implies the consistency of the others. By the same token, \(\WKL_0\) has the same proof-theoretic ordinal as \(\sfRCA_0\), \(\rmI\Sigma_1\), and \(\PRA\), namely \(\omega^\omega\).

Harrington’s proof is model-theoretic in character: it uses a forcing argument to show that every countable model of \(\sfRCA_0\) can be expanded to a countable model of \(\WKL_0\) while preserving the truth values of \(\Pi^1_1\) formulas. Sieg (1985) subsequently gave the theorem a more constructive treatment by constructing a primitive recursive function \(f\) which transforms any proof \(p\) in \(\WKL_0\) of a \(\Pi^0_2\) statement \(\varphi\) into a proof \(f(p)\) of \(\varphi\) in \(\PRA\). Although somewhat technical, these conservativity results inspired a debate in the foundations of mathematics concerning the limits of the mathematics that could be recovered in a formal system which was finitist in the sense of Hilbert’s program (§5.3).

### 4.2 Arithmetical comprehension

The third member of the Big Five is \(\ACA_0\), where ACA stands for
“arithmetical comprehension axiom”. A formula \(\varphi\)
in the language of second-order arithmetic is *arithmetical* if
it contains no set quantifiers, although it may contain free set
variables. The *arithmetical comprehension scheme* consists of
the universal closures of all formulas of the form
\[\exists X\, \forall n\, (n \in X \leftrightarrow \varphi(n))\]

where \(\varphi\) is arithmetical with \(X\) not free, although
\(\varphi\) may contain other free set and number variables.
\(\ACA_0\) is the system obtained by adding the arithmetical
comprehension scheme to the axioms of \(\sfRCA_0\). \(\ACA_0\) proves
the *arithmetical induction scheme*, that is, the restriction
of the induction scheme (\ref{eq:ind_scheme}) to arithmetical
formulas. This means that \(\ACA_0\) proves all of the axioms of
first-order Peano arithmetic \(\PA\). Via a conservativity result due
to Friedman (1976), a kind of converse of this result also holds:
every sentence in the language of first-order arithmetic which is
provable in \(\ACA_0\) is also provable in \(\PA\). It follows from
this result that
Gentzen’s consistency proof
for \(\PA\) also applies to \(\ACA_0\), and that \(\ACA_0\) has the
same proof-theoretic ordinal as \(\PA\), namely \(\varepsilon_0\).

\(\ACA_0\) is strictly stronger than both \(\sfRCA_0\) and \(\WKL_0\). In the case of \(\sfRCA_0\) this is easy to see: arithmetical comprehension is computably false, since it proves the existence of the halting problem \(K\) and many other non-computable sets, and so the \(\omega\)-model \(\REC\) provides a counterexample: a model in which \(\sfRCA_0\) is true but arithmetical comprehension is false. Proving that \(\WKL_0\) is strictly weaker than \(\ACA_0\) requires a more subtle computability-theoretic argument, using the low basis theorem of Jockusch & Soare (1972) to prove the existence of a model of \(\WKL_0\) which does not contain \(K\), and therefore does not satisfy arithmetical comprehension.

Although many important analytical theorems can be proved in \(\WKL_0\), some fundamental results about the nature of the real numbers can only be proved with the use of arithmetical comprehension. Many of these results concern the sequential completeness and compactness of the real line, and assert the existence of limits of various kinds of sequences. The most basic of these is a sequential form of the least upper bound axiom, which states that every bounded sequence of real numbers has a least upper bound. Others are the Bolzano–Weierstraß theorem, which states that every bounded sequence of real numbers contains a subsequence which converges to a limit; the monotone convergence theorem, which states that every monotone sequence of real numbers converges to a limit; and the Cauchy convergence theorem, which states that every Cauchy sequence of real numbers converges to a limit. All of these theorems are provable in \(\ACA_0\), and when added to the base theory \(\sfRCA_0\), each of them implies arithmetical comprehension. This collection of equivalences, due to Friedman (1975), can be seen as a formalization of the informal equivalences proved in the last chapter of Dedekind (1872), but for the sequential notion of completeness of the real number line, rather than Dedekind’s notion of completeness for arbitrary bounded sets of real numbers.
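The bisection argument behind the Bolzano–Weierstraß theorem can be sketched computationally. The particular sequence, the truncation to finitely many terms, and the majority rule below are all illustrative assumptions: in the formal proof, deciding which half-interval contains infinitely many terms of the sequence is precisely where arithmetical comprehension is invoked.

```python
# Sketch of the bisection argument behind the Bolzano-Weierstrass theorem.
# The sequence, truncation, and majority rule are illustrative assumptions.

def x(n):
    """A bounded sequence with two limit points, -1 and 1."""
    return (-1) ** n * (1 + 1 / (n + 1))

def bisect_limit_point(n_terms=10_000, rounds=30):
    terms = [x(n) for n in range(n_terms)]
    lo, hi = -2.0, 2.0                  # the sequence lies in [-2, 2]
    for _ in range(rounds):
        mid = (lo + hi) / 2
        left = sum(1 for t in terms if lo <= t <= mid)
        right = sum(1 for t in terms if mid < t <= hi)
        # Keep a half containing "many" terms -- a finite stand-in for the
        # non-computable test "contains infinitely many terms".
        if left >= right:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Homes in on a limit point of the sequence (here, near -1).
assert abs(bisect_limit_point() + 1) < 1e-2
```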

Although these statements are only concerned with the real number line \(\bbR\), they can easily be extended to the Euclidean plane \(\bbR^2\), the three-dimensional Euclidean space \(\bbR^3\), or any finite-dimensional Euclidean space \(\bbR^n\). The corresponding sequential completeness theorems for those spaces are also provable in \(\ACA_0\), and thus equivalent to it over \(\sfRCA_0\), since they imply the versions for \(\bbR\). More generally still, it is possible to represent the notion of a complete separable metric space within second-order arithmetic, and thus state versions of these theorems which hold for any such space. These theorems too are provable in \(\ACA_0\), as are stronger generalizations such as the Ascoli lemma. \(\ACA_0\) is also strong enough to prove that, in any complete separable metric space which is also compact, every sequence of points has a convergent subsequence.

Although analysis is rich in theorems equivalent to arithmetical comprehension, other areas may be even more so, especially algebra. These theorems concern a wide variety of algebraic structures including countable abelian groups, countable fields (including ordered and formally real fields), countable commutative rings, and countable vector spaces. Some important examples include the theorem that every countable field has a strong algebraic closure, and that every countable field has a strong real closure; that every countable commutative ring has a maximal ideal; that every countable abelian group has a unique divisible closure; and that every countable vector space over a countable field has a basis.

Countable infinitary combinatorics also provides several important equivalences to arithmetical comprehension. One is König’s lemma that every infinite, finitely branching tree contains an infinite path. This is an interesting case in which the precise statement of the theorem matters a great deal. The restriction of König’s lemma to trees in which the number of branchings from any given node is bounded (bounded König’s lemma) is strictly weaker than arithmetical comprehension, and in fact it is equivalent to weak König’s lemma. The unrestricted version of König’s lemma, on the other hand, is equivalent to arithmetical comprehension.

Another example is Ramsey’s theorem. Given fixed natural numbers
\(n \geq 1\) and \(k \geq 1\), the statement \(\RT^n_k\) asserts that
for every \(k\)-colouring of the \(n\)-element subsets of \(\bbN\),
there exists an infinite *homogeneous* (or monochromatic) set
\(H \subseteq \bbN\): one all of whose \(n\)-element subsets receive
the same colour. When \(n \geq 3\) and \(k \geq 2\), \(\RT^n_k\)
implies arithmetical comprehension over \(\sfRCA_0\). The statement
\(\RT^2_2\), however, holds a special place in reverse mathematics, as
it lies outside the Big Five classification: it is not provable in
\(\sfRCA_0\) (Specker 1971), not provable in \(\WKL_0\) (Jockusch
1972), does not imply arithmetical comprehension (Seetapun &
Slaman 1995), and does not imply weak König’s lemma either
(Liu 2012). Discussing these results, their implications, and more
recent work on Ramsey’s theorem and the Reverse Mathematics Zoo
[Other Internet Resources]
which arose from its study would take us too far afield, but there
are now many good introductory references including Hirschfeldt (2014)
and Dzhafarov & Mummert (2022: Chapter 8).
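On a finite domain the greedy argument behind Ramsey’s theorem for pairs can be carried out by brute force, as in the sketch below (the colouring and domain size are arbitrary illustrative choices). Over an infinite domain the same construction must repeatedly answer questions about infinite colour classes that no algorithm can settle, which is one informal way to see why \(\RT^2_2\) lies beyond \(\sfRCA_0\).

```python
# Greedy construction of a homogeneous set for a 2-colouring of pairs,
# restricted to a finite domain.  Colouring and domain size are arbitrary.

def colour(i, j):
    """A 2-colouring of the pairs {i, j} with i < j."""
    return (i * j) % 2          # 1 iff both i and j are odd

def homogeneous_subset(n):
    candidates = list(range(n))
    chosen = []                 # pairs (element, colour it committed to)
    while candidates:
        a, rest = candidates[0], candidates[1:]
        same = [b for b in rest if colour(a, b) == 0]
        diff = [b for b in rest if colour(a, b) == 1]
        # Keep the larger colour class, so each chosen element has a fixed
        # colour towards everything chosen after it ("pre-homogeneous").
        if len(same) >= len(diff):
            chosen.append((a, 0))
            candidates = same
        else:
            chosen.append((a, 1))
            candidates = diff
    # Elements committed to the most common colour form a homogeneous set.
    commit0 = [a for a, c in chosen if c == 0]
    commit1 = [a for a, c in chosen if c == 1]
    return commit0 if len(commit0) >= len(commit1) else commit1

H = homogeneous_subset(16)
assert len({colour(a, b) for a in H for b in H if a < b}) == 1
```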

### 4.3 Arithmetical transfinite recursion

Arithmetical comprehension allows us to construct objects by applying
some arithmetically-definable operation a finite number of times. A
natural generalization of this method of proof is to allow such
iterations to continue all the way to the first infinite ordinal
\(\omega\), and off into the transfinite. Constructing mathematical
objects by transfinite recursion is of course a standard part of set
theory, licensed by the axiom scheme of replacement. The axiom scheme
of *arithmetical transfinite recursion* is an arithmetical
analogue, in which the operator \(\Phi(X)\) which one can apply must
be arithmetically definable, and in which the well-orderings along
which the operator can be iterated must be countable. Despite these
restrictions, arithmetical transfinite recursion is quite powerful.
Adding it to \(\sfRCA_0\) produces the system \(\ATR_0\), a much
stronger system than \(\ACA_0\) which can prove not merely theorems of
classical analysis, but many results in descriptive set theory
concerning Borel and analytic sets.

In order to define arithmetical transfinite recursion, we must first
understand what is meant in this setting by a well-ordering. A binary
relation \(R \subseteq \bbN\times \bbN\) is said to be
*reflexive* if for all \(x,y \in \bbN\), if \((x,y) \in R\)
then \((x,x) \in R\) and \((y,y) \in R\). If \(R\) is a reflexive
relation then we define \(\mathrm{field}(R) = \set{x : (x,x) \in R}\).
We also write \(x \leq_R y\) if and only if \((x,y) \in R\), and \(x
\lt_R y\) if and only if \((x, y) \in R\) and \((y, x) \not\in R\). A
*countable linear ordering* is a reflexive relation which is
also transitive (if \(x \leq_R y\) and \(y \leq_R z\), then \(x \leq_R
z\)), antisymmetric (if \(x \leq_R y\) and \(y \leq_R x\), then \(x =
y\)), and total (if \(x,y \in \mathrm{field}(R)\), then \(x \leq_R y\)
or \(y \leq_R x\)). We write \(\mathrm{LO}(R)\) as an abbreviation for
the formula asserting that \(R\) is a countable linear ordering. If
\(R\) is a reflexive relation, \(R\) is *well-founded* if it
contains no infinite descending chain: for all \(f : \bbN\to
\mathrm{field}(R)\) there exists \(n \in \bbN\) such that \(f(n+1)
\not\lt_R f(n)\). We write \(\mathrm{WF}(R)\) as an abbreviation for
the formula asserting that \(R\) is well-founded. A *countable
well-ordering* is a countable linear order \(R\) which is also
well-founded. We write \(\mathrm{WO}(R)\) as an abbreviation for the
formula asserting that \(R\) is a countable well-ordering, i.e., that
\(\mathrm{LO}(R) \wedge \mathrm{WF}(R)\).
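For finite relations these definitions transcribe directly into code, as in the sketch below (function names are illustrative). Note the asymmetry the sketch elides: over \(\bbN\), \(\mathrm{LO}(R)\) is arithmetical, but \(\mathrm{WF}(R)\) quantifies over all functions \(f : \bbN \to \mathrm{field}(R)\), making \(\mathrm{WO}(R)\) a \(\Pi^1_1\) condition that no finite search can verify.

```python
# Direct transcriptions of LO(R) and, for *finite* relations, WF(R).
# R is a finite set of pairs of numbers; over N, well-foundedness is an
# infinitary condition, which this finite cycle-check cannot capture.

def field(R):
    return {x for (x, y) in R if x == y}

def is_reflexive(R):
    return all((x, x) in R and (y, y) in R for (x, y) in R)

def is_linear_order(R):
    """LO(R): reflexive, transitive, antisymmetric, total on its field."""
    if not is_reflexive(R):
        return False
    F = field(R)
    transitive = all((x, z) in R
                     for (x, y) in R for (y2, z) in R if y == y2)
    antisymmetric = all(x == y for (x, y) in R if (y, x) in R)
    total = all((x, y) in R or (y, x) in R for x in F for y in F)
    return transitive and antisymmetric and total

def is_well_founded(R):
    """WF(R) for a finite R: no cycle of strict <_R steps (any infinite
    descending chain through a finite field must repeat an element)."""
    strict = {(x, y) for (x, y) in R if (y, x) not in R}
    reach = set(strict)
    for _ in range(len(field(R))):     # transitive closure of the strict part
        reach |= {(x, z) for (x, y) in reach for (y2, z) in strict if y == y2}
    return all((x, x) not in reach for x in field(R))

# The usual order on {0, 1, 2}, coded as a set of pairs:
R = {(x, y) for x in range(3) for y in range(3) if x <= y}
assert is_linear_order(R) and is_well_founded(R)
```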

The next step in defining arithmetical transfinite recursion is the
notion of a *hierarchy*: a system of sets built up from some
initial set by the repeated application of an operator. Given a
formula \(\varphi(n,Y)\), let \(\mathrm{H}_\varphi(X,Y)\) be the
formula asserting that \(X\) is a countable linear ordering and that
\(Y\) is the set of all pairs \((n,j)\) such that \(j \in
\mathrm{field}(X)\) and \(\varphi(n, Y^j)\), where
\[Y^j = \set{(m, i) : i \lt_X j \wedge (m, i) \in Y}.\]

The idea behind this definition is that \(Y\) is the hierarchy of sets
that result from iterating the operator defined by the formula
\(\varphi\) along the ordering \(X\), and that \(Y^j\) is the set
constructed at level \(j\) of the iteration. The *axiom scheme of
arithmetical transfinite recursion* consists of the universal
closures of all instances of the scheme
\[\forall X\, (\mathrm{WO}(X) \rightarrow \exists Y\, \mathrm{H}_\varphi(X, Y))\]

where \(\varphi(n,Y)\) is arithmetical. The system \(\ATR_0\) consists of the axioms of \(\sfRCA_0\) plus all instances of the axiom scheme of arithmetical transfinite recursion.
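The clauses of \(\mathrm{H}_\varphi(X,Y)\) can be transcribed for a finite linear ordering, as in the sketch below. The operator `phi` and the finite universe are illustrative assumptions; \(\ATR_0\) allows any arithmetical \(\varphi\), iterated along any countable well-ordering, which is where its strength lies.

```python
# A finite transcription of the hierarchy H_phi(X, Y): iterate the operator
# defined by phi along a linear order, collecting Y as a set of pairs (n, j).
# phi and the universe are illustrative choices, not ATR_0's general scheme.

def phi(n, Z):
    """Level operator: keep 0, and keep n if n-1 appeared at an earlier level."""
    return n == 0 or any(m == n - 1 for (m, i) in Z)

def hierarchy(levels, universe):
    """Build Y = {(n, j) : j in field(X) and phi(n, Y^j)}, where Y^j is the
    part of Y indexed by levels <_X j."""
    Y = set()
    for j in levels:                # levels listed in <_X-increasing order
        Y_below_j = set(Y)          # every pair added so far has index <_X j
        for n in universe:
            if phi(n, Y_below_j):
                Y.add((n, j))
    return Y

Y = hierarchy(levels=[0, 1, 2], universe=range(5))
levels_built = {j: sorted(n for (n, i) in Y if i == j) for j in [0, 1, 2]}
assert levels_built == {0: [0], 1: [0, 1], 2: [0, 1, 2]}
```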

Given the nature of the axiom it is unsurprising that theorems equivalent to ATR all have some explicit or implicit connection to countable well-orderings. One of the most fundamental, partly due to its role in the proofs of many reversals, is the seemingly innocuous statement CWO that any two countable well-orderings are comparable: one must always be isomorphic to an initial segment of the other. Ordinals have been bound up with the theory of point-sets (in the most basic case, sets of real numbers) since the inception of both in the work of Cantor in the early 1880s (Ferreirós 2007: Chapter VI). Cantor used countable ordinals in his proof of the Cantor–Bendixson theorem that every closed set is the union of a perfect set and a countable set. This theorem is actually equivalent to \(\Pi^1_1\) comprehension (§4.4), but other theorems from descriptive set theory only require arithmetical transfinite recursion. One such result is the perfect set theorem that every uncountable closed set contains a perfect subset. Another is Luzin’s separation theorem that any two disjoint analytic sets can be separated by a Borel set. This theorem is standardly used in descriptive set theory to prove Suslin’s theorem that the Borel sets are exactly those analytic sets whose complement is also analytic (Kechris 1995: 14). Suslin’s theorem can therefore also be proved in \(\ATR_0\), although it is not equivalent to it.

### 4.4 \(\Pi^1_1\) comprehension

The last of the Big Five systems is \(\Pi^1_1\text{-}\CA_0\), whose characteristic axiom is the \(\Pi^1_1\) comprehension scheme. A formula \(\varphi\) in the language of second-order arithmetic is \(\Pi^1_1\) if it has the form \(\forall{X}\theta\), where \(X\) is a set variable and \(\theta\) is an arithmetical formula. A formula \(\varphi\) is \(\Sigma^1_1\) if it has the form \(\exists{X}\theta\), with \(\theta\) arithmetical and \(X\) a set variable. Every \(\Pi^1_1\) formula is equivalent to the negation of a \(\Sigma^1_1\) formula, and every \(\Sigma^1_1\) formula is equivalent to the negation of a \(\Pi^1_1\) formula. The \(\Pi^1_1\) comprehension scheme consists of the universal closures of all formulas of the form
\[\exists X\, \forall n\, (n \in X \leftrightarrow \varphi(n))\]

where \(\varphi\) is \(\Pi^1_1\) with \(X\) not free, although \(\varphi\) may contain other free set and number variables. The axioms of \(\Pi^1_1\text{-}\CA_0\) are the axioms of \(\sfRCA_0\) plus all instances of the \(\Pi^1_1\) comprehension scheme. While one can also define a corresponding \(\Sigma^1_1\) comprehension scheme, this turns out to be unnecessary, since every instance of \(\Sigma^1_1\) comprehension is provably equivalent in \(\sfRCA_0\) to an instance of \(\Pi^1_1\) comprehension.

\(\Pi^1_1\) comprehension is needed in order to extend the results on perfect sets provable in \(\ATR_0\) (§4.3). The historically most notable such result is the Cantor–Bendixson theorem, which states that every closed set is the union of a perfect set and a countable set. Kreisel (1959) proved the existence of computable closed sets of the form \(F = F_p \cup F_c\) such that \(F_p\) is perfect and \(F_c\) is countable, but the (codes of) \(F_p\) and \(F_c\) are \(\Pi^1_1\) but not \(\Delta^1_1\). This shows that \(\Pi^1_1\) comprehension is required in order to prove the Cantor–Bendixson theorem, and in fact the Cantor–Bendixson theorem is provably equivalent over \(\ACA_0\) to \(\Pi^1_1\) comprehension (Simpson 2009: theorem VI.1.6, pp. 219–220).

Other important results in descriptive set theory equivalent to \(\Pi^1_1\) comprehension include the Kondô–Addison theorem that every coanalytic relation can be uniformized by a coanalytic function, Silver’s theorem that every coanalytic equivalence relation has either countably many or \(2^{\aleph_0}\)-many equivalence classes, and the determinacy of games with a \(\Sigma^0_1 \wedge \Pi^0_1\) payoff set. \(\Pi^1_1\text{-}\CA\) is also needed for a number of results in group theory, involving the order types of countable ordered groups and the structure theory of countable abelian groups. Solomon (2001) showed that Mal’tsev’s theorem is equivalent over \(\sfRCA_0\) to \(\Pi^1_1\text{-}\CA\), while Friedman, Simpson, and Smith (1983) showed the same for the statement that every countable abelian group is the direct sum of a divisible group and a reduced group.

\(\Pi^1_1\text{-}\CA_0\) is the strongest system standardly studied in reverse mathematics, although reversals to stronger systems have been found. One such example is the equivalence between \(\Pi^1_2\) comprehension and a statement from general topology concerning countably based spaces of maximal filters (Mummert & Simpson 2005). Other statements, for example determinacy axioms, are even stronger (Montalbán & Shore 2012). Nevertheless, for most mathematical statements studied in reverse mathematics, even \(\Pi^1_1\text{-}\CA_0\) is a much stronger system than is required in order to prove them.

## 5. Foundational Programs and Reverse Mathematics

### 5.1 Computable mathematics, constructive mathematics, and \(\sfRCA_0\)

Mathematics in \(\sfRCA_0\) shares much with that developed in a number of broadly constructive approaches to mathematics, most notably constructive analysis in the tradition of Bishop, Russian constructive mathematics, and computable analysis (also known in older texts as recursive analysis). However, there are also important differences, largely centred on the law of excluded middle (LEM), which \(\sfRCA_0\), as a theory formulated in classical logic, satisfies.

At the centre of the connection between the formal system \(\sfRCA_0\)
and traditions in constructive mathematics is Church’s thesis.
In the context of constructive mathematics, this states that every
constructive function is extensionally equivalent to a recursive
function, i.e., there exists a Turing machine which halts on every
input and produces the same output as the constructive function. Using
Kleene’s *T* predicate and *U* function we can
formalize this as
\[\forall f\, \exists e\, \forall n\, \exists k\, (T(e, n, k) \wedge U(k) = f(n)) \label{eq:CT}\tag{CT}\]

As noted in §1.3, the interpretation of the notion of a construction in terms of computable functions dates back to the early history of computability theory in the 1940s. Since \ref{eq:CT} can be read as a formal statement in the language of second-order arithmetic, we can consider its status in relation to classical subsystems of second-order arithmetic. \ref{eq:CT} is consistent with \(\sfRCA_0\), since the axioms of \(\sfRCA_0\) are satisfied in the \(\omega\)-model \(\REC\) consisting of all and only the computable sets (§3.4). However, it is inconsistent with stronger systems that imply the existence of non-computable sets.
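The *T* predicate and *U* function can be modelled in miniature. The register-machine model and instruction names below are assumptions of this sketch, not the official Gödel numbering used in computability theory; the point is only the shape of the definitions: \(T(e,n,k)\) says that \(k\) steps suffice for a halting computation of program \(e\) on input \(n\), and \(U\) extracts its output.

```python
# A toy model of Kleene's T predicate and U function.  "Programs" here are
# instruction lists for a tiny two-register machine -- an assumption of this
# sketch, standing in for a Goedel numbering of Turing machine computations.

def run(prog, n, max_steps):
    """Run prog on input n for at most max_steps steps.
    Returns (halted, output)."""
    r = [n, 0]                       # two registers, input in r[0]
    pc = 0
    for _ in range(max_steps):
        if pc >= len(prog):
            return True, r[0]        # fell off the end of the program: halt
        op = prog[pc]
        if op[0] == "inc":           # ("inc", reg): increment a register
            r[op[1]] += 1
            pc += 1
        elif op[0] == "jzdec":       # ("jzdec", reg, target): jump if zero,
            if r[op[1]] == 0:        # otherwise decrement and fall through
                pc = op[2]
            else:
                r[op[1]] -= 1
                pc += 1
    return pc >= len(prog), r[0]

def T(e, n, k):
    """Toy Kleene T: does computation e on input n halt within k steps?"""
    return run(e, n, k)[0]

def U(e, n, k):
    """Toy U: the output of the halting computation coded by (e, n, k)."""
    return run(e, n, k)[1]

add_two = [("inc", 0), ("inc", 0)]   # a program computing f(n) = n + 2
assert T(add_two, 5, 10) and U(add_two, 5, 10) == 7
```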

Russian constructive mathematics, working in the tradition of Markov, embraced the idea that constructive procedures were algorithmic ones, and thus built a version of Church’s thesis into its approach (Kushner 2006). Bishop’s constructive analysis followed a different path: one which rejected the law of excluded middle, but accepted neither Brouwer’s new basis for analysis in his Creating Subject arguments, nor the close association with computability theory of the Russian constructivists. As Bishop puts it, he develops analysis “with an absolute minimum of philosophical prejudice concerning the nature of constructive mathematics” (Bishop 1967: ix), going on to assert that “[t]here are no dogmas to which we must conform”. Bishop’s system is thus consistent with classical mathematics, with intuitionistic mathematics, and with Church’s thesis.

However, the system \(\sfRCA_0\) is not a faithful formalization of any of these constructivist viewpoints: the classical logic it embeds is an insurmountable barrier. The intermediate value theorem provides a good example of why. While the intermediate value theorem is provable in \(\sfRCA_0\), the proof essentially relies on a case distinction made via the law of excluded middle, and is thus not constructively acceptable (Bridges & Richman 1987: 5). This limits our ability to use provability in \(\sfRCA_0\) as a marker of constructive provability. A further complication is that in \(\sfRCA_0\), induction is restricted to \(\Sigma^0_1\) predicates, while constructivists tend to accept instances of induction of unrestricted quantifier complexity.
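The role of excluded middle can be seen in the classical bisection proof of the intermediate value theorem, sketched below. The example function is an arbitrary choice, and floating-point comparison silently decides the case split “\(f(c) \leq 0\) or not”, which for genuine real numbers is exactly the non-constructive step.

```python
# The classical bisection proof of the intermediate value theorem.  The case
# split "f(mid) <= 0 or f(mid) > 0" is an instance of excluded middle applied
# to a statement about real numbers -- the step constructivists reject;
# floating-point comparison quietly decides it for us in this sketch.

def ivt_bisect(f, a, b, tol=1e-9):
    """Assuming f(a) < 0 < f(b), return c with f(c) close to 0."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) <= 0:        # excluded-middle case split on f(mid) <= 0
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = ivt_bisect(lambda t: t ** 3 - 2, 1.0, 2.0)
assert abs(root ** 3 - 2) < 1e-6
```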

In some ways, computable analysis as presented in works like Aberth (1980) and Pour-El & Richards (1989) provides a better match with \(\sfRCA_0\) than constructive analysis, as Simpson (2010) suggests. This does have a foundational upshot of sorts, since as Beeson (1985) remarks,

there is a coherent philosophy of mathematics under which recursive mathematics represents the truth about the mathematical universe. If one takes this view, then the subject has a significance beyond the mere production of counterexamples. […] [T]he world under Church’s thesis is an entertaining place, full of surprises (like any foreign country), but not by any means too chaotic to support life.

Thoroughgoing constructivists do not take the natural numbers to form a completed infinite totality, while in computable analysis the standard machinery of computability theory (which does make this assumption) is taken as fundamental (Pour-El & Richards 1989: VII). The use of classical logic in \(\sfRCA_0\) is thus unproblematic. Recursive comprehension, its characteristic axiom, seems tailor-made for computable mathematics. It is thus easier to say that provability in \(\sfRCA_0\) implies truth in computable analysis, although the restriction on the induction principle does mean that as far as computable mathematics is concerned, \(\sfRCA_0\) is a needlessly restricted system. Moreover, recursive counterexamples (§1.3)—and thus reversals to set existence principles asserting the existence of non-computable sets—have a clear interpretation in the context of computable mathematics, namely demonstrating that they are false. \(\Pi^1_1\) statements should, however, be excluded from this analysis, since any true \(\Pi^1_1\) statement will be true in the \(\omega\)-model \(\REC\), and thus computably true, regardless of whether or not it is provable in \(\sfRCA_0\). For further discussion of this point, see Eastaugh (2019: 157, 171–172).

### 5.2 Finitism and \(\sfRCA_0\)

The formal system \(\sfRCA_0\) also has connections to another important foundational program, Hilbertian finitism. Hilbert’s program was to secure the status of Cantorian infinitary mathematics, by proving its consistency using purely finitary means. This reflects Hilbert’s two-tier epistemology in which finitary combinatorial statements hold a special status, since unlike the abstract elements of ideal, infinitary mathematics, they concern “extra-logical discrete objects, which exist intuitively as immediate experience before all thought” (Hilbert 1922: 163).

The *real* (finitary, contentual) propositions include, for
Hilbert, numerical equations, together with the more complex
statements that can be formed from equations via the propositional
connectives (negation, conjunction, disjunction, implication).
Moreover, the universal closures of such statements also have finitary
meaning, because we can think of them as a schema:
\(\forall{x}\varphi(x)\), where \(\varphi\) is quantifier-free, means
that for any given number \(\overline{n}\), we can verify in a
finitary way that \(\varphi(\overline{n})\) holds. It is anachronistic but
technically helpful to refer to this class of statements as the
\(\Pi^0_1\) sentences of the language of arithmetic. In contrast to
this, *ideal* statements do not have finitary meaning: they are
merely symbolic,

formulas that […] in themselves mean nothing but are merely things that are governed by our rules and must be regarded as the *ideal objects* of the theory. (Hilbert 1928 [1967: 470], emphasis in original)

According to Hilbert, these ideal objects are necessary in order to recover the general principles of logic, including the law of excluded middle, since he considered the development of analysis to be impossible without them.

Hilbert thus undertook a program to vindicate ideal mathematics on a
finitary basis. Since ideal mathematics could be formalized within
logical calculi such as those developed in Whitehead and
Russell’s
*Principia Mathematica*,
the aim of the Hilbert program became to prove the consistency of the
formalized mathematical theories of analysis and set theory. To this
end, Hilbert and his collaborators developed the tools of proof
theory, including the
\(\varepsilon\)-calculus.
On the conceptual side, the real propositions were intended as
analogous to experimental observations in physics, which can confirm
or refute a theory. In this analogy, ideal mathematics is like a
physical theory that gives general laws, only individual instances of
which can be verified. Ideal mathematics should thus be conservative
over real mathematics: any real proposition which can be proved via
ideal methods should also be provable in a finitary way.

In light of Tait’s (1981) analysis of finitistic reasoning as
primitive recursive reasoning, the \(\Pi^0_2\)-conservativity of
\(\sfRCA_0\) over \(\PRA\)
(§4.1)
takes on a philosophical dimension related to Hilbert’s
conservativity program. Tait argues for two claims, now commonly
referred to as *Tait’s theses*. The first is that the
finitist functions \(f : \bbN\rightarrow \bbN\) are exactly the
primitive recursive functions. The second is that the finitistically
provable \(\Pi^0_1\) (i.e., real) sentences are exactly those provable
in the formal system \(\PRA\). By the conservativity theorem stated
above, these are also exactly the \(\Pi^0_1\) sentences provable in
\(\sfRCA_0\), and moreover the computable functions which \(\sfRCA_0\)
can prove are total are precisely the primitive recursive ones.
Although \(\sfRCA_0\) is evidently in some sense an infinitary system
in which substantial mathematical reasoning about infinitary objects
can be carried out, in virtue of the conservativity theorem and its
consequences it also has some claim to being reducible to a finitary
system, and thus perhaps in some instrumental way acceptable to the
finitist. This kind of finitistic reductionism is discussed further in
§5.3.

Tait’s theses have been widely discussed and have become broadly accepted, although not without their share of criticism along the way. These criticisms can be broken down into two rough types, historical and conceptual, although the two are sometimes intertwined. On the conceptual side, there are two groups of critics: those who consider Tait’s analysis too liberal, such as Ganea (2010), and those who consider it too conservative. Kreisel (1958, 1970) is a prominent example of the latter, arguing that finitist provability should instead be identified with provability in \(\PA\). Central to the disagreement between Kreisel and Tait is the status of functions \(f : \bbN\to \bbN\) which are computable but not primitive recursive, and which are thus arguably finitary in nature, yet transgress what Tait considers the boundaries of finitism.

The quintessential example of such a function is the Ackermann–Péter function. Discovered in 1928 by Wilhelm Ackermann, a member of the Hilbert school whose dissertation provided the first substantive example of a finitistic consistency proof, it was subsequently simplified by Péter (1935), and then further simplified by Robinson (1948). The Ackermann–Péter function is defined, by a doubly-recursive definition, as
\[
\begin{aligned}
A(0, n) &= n + 1,\\
A(m + 1, 0) &= A(m, 1),\\
A(m + 1, n + 1) &= A(m, A(m + 1, n)).
\end{aligned}
\]

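Because general (double) recursion is available in any modern programming language, the Ackermann–Péter function is easy to program; what cannot be done is to obtain it from primitive recursion alone, since it eventually dominates every primitive recursive function. A sketch:

```python
# The Ackermann-Peter function: computable and terminating, but not
# primitive recursive, as it eventually dominates every primitive
# recursive function.

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Growth along the first levels: A(1, n) = n + 2, A(2, n) = 2n + 3,
# A(3, n) = 2**(n + 3) - 3; A(4, 2) already has 19,729 decimal digits.
assert ackermann(2, 3) == 9 and ackermann(3, 3) == 61
```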
Contemporary views differ as to whether the
Ackermann–Péter function is finitary in nature, and even
on whether Hilbert considered it to be so. In the course of defending
the view that finitistic provability coincides with provability in
\(\PA\), Kreisel (1970) points to Hilbert’s explicit discussion
of Ackermann’s function in Hilbert 1926 as evidence that Hilbert
considered the function to be finitary in nature. Taking the opposite
position, Tait (1981) argues that Kreisel’s view is based on a
misreading of Hilbert, and that Hilbert himself did not consider
Ackermann’s function to be finitary *simpliciter*, but
finitary relative to a system of non-finitary types. More recently,
Zach (1998) concluded that while in the early 1920s Hilbert did not
conceive of his finitism as going beyond primitive recursion, there
are reasons to believe that the later Hilbert, and especially his
collaborator Bernays, took a different view and considered the
Ackermann–Péter function, along with other functions
defined by means of higher-type recursions, to be finitary in nature.
For further discussions of these issues see Tait 2002; Zach 2003; and
Dean 2015.

### 5.3 Finitistic reductionism and \(\WKL_0\)

According to the two theses advanced by Tait (1981), on which the discussion of \(\sfRCA_0\) and finitism in §5.2 centred, the finitist functions are precisely the primitive recursive functions, while the finitist theorems are precisely the \(\Pi^0_1\) sentences provable in \(\PRA\). Even granting that Tait’s theses correctly determine the extent and limits of finitist mathematics, and that Gödel’s incompleteness theorems pose an insurmountable barrier to a genuine realization of the Hilbertian dream of reducing all infinitary mathematics to finitary mathematics, questions remain concerning the extent to which ideal mathematics can be recovered by a Hilbertian reduction. Simpson (1988) proposed what he called a partial realization of Hilbert’s program. Simpson’s ideas follow those of Hilbert from the mid-1920s onwards, in which the program of giving a finitary proof of the consistency of infinitary mathematics was replaced by the program of demonstrating the conservativity of infinitary mathematics over real mathematics: that is, showing that if a finitistically-meaningful statement admits of an ideal proof, then it is also finitistically true.

Gödel’s incompleteness theorems seem to show that this
conservativity program cannot succeed, at least if we accept that all
finitary reasoning can be formalized within a fixed axiomatic system
*T* such as \(\PRA\). Wherever we draw the line between finitary
and infinitary mathematics, finitary mathematics cannot prove its own
consistency, let alone that of infinitary mathematics. Simpson accepts
this piece of received wisdom about the impact of the incompleteness
theorems on Hilbert’s program, but argues that we can
nonetheless achieve a partial realization of Hilbert’s program.
The program of finitistic reductionism which Simpson proposes leans on
the \(\Pi^0_2\)-conservativity of \(\WKL_0\) over \(\PRA\)
(§4.1).
The idea is that any finitistically meaningful (i.e., \(\Pi^0_1\))
sentence \(\varphi\) that can be proved in the ideal, infinitary
system \(\WKL_0\) is also provable in \(\PRA\), and thus by
Tait’s thesis is finitistically provable.

The mathematics that can be developed in \(\WKL_0\) is extensive, and prima facie this lends a good deal of substance to Simpson’s position. However, the view has attracted its share of criticism. Note first that one might reject the conservativity constraint altogether, as Detlefsen (1990) does. We shall pass over criticisms of this kind, since addressing them would take us too far afield. The second type of criticism, typified by Sieg (1990), takes Simpson to task for being insufficiently faithful to the historical Hilbert program. Giving this claim the attention it deserves would, like the first type of objection, require a more detailed study than this entry is able to provide, but readers are directed to the entry on Hilbert’s program for starting points from which to investigate this issue. Critics of the third type point out that despite the strength of \(\WKL_0\), substantial parts of mathematics—even if we restrict ourselves to so-called “ordinary mathematics”—require stronger axioms, as the results in §4.2–§4.4 attest. This is, of course, a fair criticism of the extent of the mathematics that can be recovered within Simpson’s finitistic reductionism, but it is also in some sense misplaced, since as Simpson emphasizes, his program is only intended as a partial realization of Hilbert’s, not a complete one. Moreover, as Simpson points out, finitistic reductionism is not limited to \(\WKL_0\), since there are strict extensions of \(\WKL_0\) that are still \(\Pi^0_2\)-conservative over \(\PRA\), including the system \(\WKL^+_0\) introduced by Brown & Simpson (1993), and even \(\WKL_0 + \mathsf{RT}^2_2\) (Patey & Yokoyama 2018).

A more incisive and subtle line of argument stems from the status of the conservativity theorem itself. The mere fact that \(\WKL_0\) is \(\Pi^0_2\)-conservative over \(\PRA\) is not sufficient to establish a genuinely finitistic reduction, since in order to appeal to it, the conservativity result must be provable from the finitary standpoint. This objection can be partially rebutted by appealing to Sieg’s (1985) theorem that there exists a primitive recursive proof transformation that will transform any \(\WKL_0\) proof of a \(\Pi^0_2\) sentence into a \(\PRA\) proof of that same sentence. If we accept Tait’s theses, then the proof transformation is a finitary procedure for producing finitary proofs from infinitary ones, just as Hilbert (1928) anticipated.

However, the initial objection can be spelled out in a more damaging way, drawing on a point already made by Tait (1981), namely that his theses provide an external analysis of finitism rather than an internal one which is accessible from the finitary standpoint. This being the case, the finitist is not in a position to evaluate Tait’s theses, since their very characterizations involve infinitary notions such as that of an arbitrary function \(f : \bbN\to \bbN\) which the finitist neither understands nor accepts. Burgess (2010) argues that because of this, the finitist cannot understand Sieg’s conservativity proof as giving a method of transforming proofs in a formal system for ideal mathematics, \(\WKL_0\), into proofs in a contentual finitary system. Instead, they can only understand it as a mechanism for transforming proofs in one formal system into proofs in another. They can then laboriously verify that each \(\PRA\) proof obtained by applying the proof transformation is indeed a finitary proof. However, they cannot accept the general claim that all such proofs are finitary proofs, since this would amount to accepting the reflection principle for \(\PRA\), which implies the consistency of \(\PRA\)—a statement which, by Tait’s second thesis and Gödel’s second incompleteness theorem, is not finitistically provable (Giaquinto 1983; Dean 2015).

### 5.4 Arithmetical definability and predicativity

Arithmetical definability has been linked to predicative approaches to
the foundations of mathematics since
Hermann Weyl’s
monograph *Das Kontinuum* (Weyl 1918). Weyl argued that the
contemporary set-theoretic construction of the real numbers and the
associated theory of analysis was a “house [ …] to
a large degree built on sand” (Weyl 1994: 1). The task he set
for himself was to develop a theory of analysis on a basis which did
not fall foul of
Russell’s paradox,
and which embodied a mathematically more natural approach than
Russell’s theory of types.
As Weyl himself noted, the ideas of *Das Kontinuum* have much
in common with the approaches of Russell and Poincaré. They
agree in their rejection of impredicative comprehension, and
Weyl’s system of levels corresponds to Russell’s theory of
types (Russell 1908; Whitehead & Russell 1910). As such,
Weyl’s views have justly been called *predicative*.
Despite these commonalities, Weyl was critical of what he saw as
the imprecision of Poincaré’s account, and he strongly
disagreed with several pillars of the Russellian approach. In
particular, he rejected some of the reductionist aspects of
Russell’s logicism, which defined natural numbers by means of
equivalence classes, while Weyl took them to be fundamental. Weyl also
rejected Russell’s axiom of reducibility and took their
positions to be “separated by a veritable abyss” (Weyl
1994: 47).

In developing his system Weyl was guided by two key views. Firstly, he
agreed with Poincaré about the fundamentality to mathematical
thought of the idea of iteration, as embodied by the infinite sequence
of natural numbers. Secondly, he did not merely reject naive
comprehension and the idea that every concept (however precisely
defined) has an associated extension. Instead he proposed axioms, or
*principles of definition*, on which only what we now recognize
as first-order definable properties can be unproblematically taken to
have extensions. Weyl accepted that the use of his system would lead
to the loss of some mathematical principles that were accepted under
the set-theoretic orthodoxy of Cantor and Dedekind. One such casualty
was the least upper bound axiom, the principle that every bounded set
of real numbers has a least upper bound. Nevertheless, many important
consequences of the least upper bound axiom still hold in Weyl’s
system, such as the monotone convergence theorem and a sequential form
of the Heine–Borel theorem.

After the publication of *Das Kontinuum*, Weyl took an even
more radical step away from the set-theoretic view of analysis and
embraced Brouwer’s
intuitionism.
The subject of predicative analysis was not taken up again until the
1950s, when Grzegorczyk (1955) revisited Weyl’s ideas using the
new tools of computability and definability theory. Grzegorczyk
identified Weyl’s analysis with what he called, in common with
many at the time, *elementary analysis*, meaning that every
real number must be given by an elementary (that is to say,
first-order) definition. This amounted to studying analysis
constrained to the minimum \(\omega\)-model of \(\ACA_0\), the class
\(\ARITH\) of arithmetical sets. Kondô (1958) and Mostowski
(1959) pursued similar approaches to elementary or arithmetical
analysis. All three were influenced by the developments in hierarchy
theory and its relationship to descriptive set theory, which
classifies sets of real numbers in terms of their descriptive
complexity. Kondô in particular seems to have been influenced by
the French analysts and “semi-intuitionists” Borel, Baire,
and Lebesgue, who first developed what we now call descriptive set
theory in the early twentieth century. They used the tools of
Cantorian set theory to define hierarchies of sets and functions in
terms of the complexity of their construction (Kanamori 1995), but
were skeptical about many aspects of the Cantorian picture, including
the axiom of choice (Moore 1982).

Both Grzegorczyk and Mostowski offer axiomatizations of their approach to arithmetical analysis which are closely related to \(\ACA_0\). Grzegorczyk’s axioms (Grzegorczyk 1955: 337–338) are those for Peano arithmetic, formulated in the simple theory of types, but with comprehension restricted to formulas in which the only quantifiers are those of the lowest type, that of natural numbers. This restricted comprehension principle is thus a form of arithmetical comprehension, albeit for higher types as well as for sets of natural numbers. Mostowski, on the other hand, provides an axiomatization derived from Gödel–Bernays set theory \(\mathrm{GB}\) (Mostowski 1959: 184–185). \(\mathrm{GB}\) is a two-sorted theory in which the first sort of variables range over sets, and the second sort over classes. In Mostowski’s system, the first sort of variables range over natural numbers and the second sort range over sets of natural numbers. This produces an axiomatic system close to \(\ACA_0\) whose minimum \(\omega\)-model is \(\ARITH\).

As Feferman (1988) explores, there are several ways in which
\(\ACA_0\) does not completely correspond to Weyl’s system. To
begin with, it is unclear whether Weyl would have accepted the full
induction scheme, or just the induction
axiom. \(\ACA_0\) only has the latter, and adding the
former produces the proof-theoretically stronger system \(\ACA\). Weyl
also admits an iteration principle (principle 8) that is not
first-order definable, although this discrepancy in his approach was
not known to him (Feferman 1998: 265). Nevertheless, the axioms of
\(\ACA_0\) can be justified on the predicative grounds given in
*Das Kontinuum*, and its substantial mathematical strength
shows that Weyl was right to think that systems in which only
first-order comprehension is admissible suffice for many applications.
For further reading on predicativity and its historical development,
see Feferman (2005). §2.2 of Dean & Walsh (2017) provides a
more complete account of the development of arithmetical analysis by
Grzegorczyk, Kondô, and Mostowski.

### 5.5 Arithmetical transfinite recursion and the limits of predicativity

The renewed interest in predicative mathematics that materialized in the 1950s following the discovery of the computability-theoretic hierarchies (§5.4) led to a series of conceptual and technical developments that extended and refined the notions of predicative definability and predicative provability, as well as offering sophisticated analyses of these notions in terms of both computability theory and proof theory. Following Grzegorczyk, Mostowski, and Kondô’s arithmetical analysis of Weyl’s predicativism (§5.4), Kreisel (1960) suggested that sets of natural numbers were predicative if they appeared in the hyperarithmetical hierarchy. On the proof-theoretic side, this led to the identification of systems like \(\Delta^1_1\text{-}\CA_0\) and \(\Sigma^1_1\text{-}\mathsf{AC}_0\) as predicative (Kreisel 1962), since the class of hyperarithmetical sets \(\HYP\) is an \(\omega\)-model of both systems.

Nevertheless, Kreisel’s proposal still appeared to contain an impredicative component. Any hyperarithmetical well-ordering is predicatively definable, as a subset of the natural numbers, since by a result of Spector (1955) such a well-ordering will in fact be computable. However, the fact that the ordering is in fact a well-ordering is not in general something that can be predicatively determined: being a well-ordering is a \(\Pi^1_1\) property, since it involves quantifying over all possible infinite descending sequences in the ordering, or equivalently, over all subsets of the ordering. Indeed, the set of all codes for computable well-orderings, Kleene’s \(\mathcal{O}\), is \(\Pi^1_1\)-complete and thus a quintessentially impredicative object (Kleene 1955: 207). This worry had already been noted by Kreisel (1960: 27), and the resulting idea of examining which computable linear orderings could be predicatively proved to be well-orderings led to a series of advances in the 1960s.
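The contrast between deciding the ordering relation and recognizing well-foundedness can be illustrated with a toy example (a hypothetical construction of our own, not from the literature): a computable linear ordering of the natural numbers with order type \(\omega + \omega^*\). Each comparison is decidable by a trivial computation, but certifying that no infinite descending sequence exists would require quantifying over all functions \(f : \bbN \to \bbN\), which is exactly what makes “being a well-ordering” a \(\Pi^1_1\) property.

```python
# Hypothetical example: a computable linear ordering of the natural
# numbers with order type omega + omega* (evens ascending, then odds
# descending). The comparison function is trivially computable, yet
# the ordering is ill-founded: the odd numbers form an infinite
# descending sequence. No procedure that queries `less` on only
# finitely many pairs can, in general, certify well-foundedness.

def less(a, b):
    """True iff a precedes b in the order 0 < 2 < 4 < ... < 5 < 3 < 1."""
    if a % 2 == 0 and b % 2 == 0:
        return a < b            # evens in the usual order
    if a % 2 == 1 and b % 2 == 1:
        return a > b            # odds in reverse order
    return a % 2 == 0           # every even precedes every odd

# The odds form an infinite descending sequence: 1, 3, 5, ... each
# lies strictly below its predecessor in the ordering.
witness = [2 * k + 1 for k in range(6)]          # 1, 3, 5, 7, 9, 11
is_descending = all(less(witness[i + 1], witness[i])
                    for i in range(len(witness) - 1))
```

The witness sequence exhibits ill-foundedness directly, but finding such a witness in general amounts to the second-order quantification over infinite sequences that the text describes.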

Embodied in the hyperarithmetical hierarchy is the notion of
*ramification*: splitting the sets in \(\powset{\omega}\) into
levels which are built up via a transfinite iteration. This idea had
already appeared in several forms in Russell’s attempts to
overcome the set-theoretic paradoxes and found mathematics on a
logicist basis. In the context of analysis and the predicative theory
of sets of natural numbers, the *ramified analytical hierarchy*
is a transfinite hierarchy of sets of natural numbers, defined by the
conditions

\[
R_0 = \emptyset, \qquad
R_{\alpha+1} = R_\alpha^*, \qquad
R_\lambda = \bigcup_{\beta \lt \lambda} R_\beta \text{ for limit ordinals } \lambda,
\]

where \(D^*\) is the set obtained from \(D \subseteq \powset{\omega}\) by putting \(X \in D^*\) iff there is an \(\calL_2\)-formula \(\varphi\) such that for all \(n \in \bbN\), \(n \in X \leftrightarrow \varphi(n)\) when all the second-order quantifiers in \(\varphi\) are restricted to range over \(D\). The connection to predicativity lies in the fact that second-order quantification is admissible only if the range of quantification has already been shown to exist, at some earlier stage in the construction of the hierarchy. The hyperarithmetical sets were shown by Kleene (1959) to be exactly those which appear in the ramified analytical hierarchy before stage \(\wck\), i.e., \(\HYP= R_{\wck}\).

Once provability is in the picture as a condition on the ordinals
\(\alpha\) which are to be considered predicative, one needs formal
systems in which such \(\alpha\) controls the length of the permitted
iteration of predicative definability. This gave rise to the
development of transfinite progressions of formal systems of
*ramified analysis*. The system \(\mathbf{RA}_\alpha\)
associated with an ordinal \(\alpha\) has variables
\(X_1^\beta,X_2^\beta,\dotsc\) for each \(\beta \leq \alpha\), and
axioms asserting that the sets of the \(\beta\)-th level of the
ramified analytical hierarchy exist for all \(\beta \leq
\alpha\).

The predicative ordinals can then be defined within this setting as
follows. If \(\alpha\) is a predicative ordinal, \(\prec\) is (a set
coding a) linear ordering, and \(\beta\) is the order-type of
\(\prec\), then \(\beta\) is predicative if \(\mathbf{RA}_\alpha
\vdash \mathrm{WO}(\prec)\). This is sometimes called the
*autonomy* condition. The predicative ordinals are thus
generated by starting from 0 and proving that larger and larger
ordinals are predicative, within stronger and stronger formal systems
\(\mathbf{RA}_\alpha\), where \(\alpha\) has been shown to be
predicative within some weaker system \(\mathbf{RA}_\beta\) for
\(\beta \lt \alpha\). Feferman (1964) and Schütte (1965b, 1965a)
independently proved that the least impredicative ordinal is
\(\Gamma_0\), now known as the Feferman–Schütte ordinal or
the ordinal of predicativity. Thus we can characterize the
predicatively provable statements as those which can be proved in some
\(\mathbf{RA}_\alpha\) for \(\alpha \lt \Gamma_0\).
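As a supplement to the autonomy condition, \(\Gamma_0\) admits a standard direct characterization via the Veblen hierarchy of normal functions (standard material, not spelled out in this section):

\[
\varphi_0(\beta) = \omega^\beta, \qquad
\varphi_{\alpha+1}(\beta) = \text{the \(\beta\)-th fixed point of } \varphi_\alpha, \qquad
\varphi_\lambda(\beta) = \text{the \(\beta\)-th common fixed point of the \(\varphi_\alpha\), \(\alpha \lt \lambda\),}
\]

where \(\lambda\) is a limit ordinal. \(\Gamma_0\) is then the least ordinal \(\gamma\) such that \(\varphi_\gamma(0) = \gamma\): the least ordinal that cannot be reached from below by the binary Veblen function, matching the idea that the predicativist can never exhaust the autonomous progression of systems \(\mathbf{RA}_\alpha\).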

The proof-theoretic ordinal of \(\ATR_0\) is also \(\Gamma_0\), just
like the union of all predicative systems of ramified analysis
\(\mathbf{RA}_{\lt\Gamma_0} = \bigcup_{\alpha \lt \Gamma_0}
\mathbf{RA}_\alpha\). Moreover, \(\ATR_0\) is \(\Pi^1_1\)-conservative
over \(\mathbf{RA}_{\lt\Gamma_0}\), so any \(\Pi^1_1\) sentence
\(\varphi\) provable in \(\ATR_0\) will also be provable within some
\(\mathbf{RA}_\alpha\) for \(\alpha \lt \Gamma_0\)—in other
words, \(\varphi\) will have a predicative proof. In line with the
reductive program in proof theory,
systems which are proof-theoretically reducible to a system
\(\mathbf{RA}_\alpha\) for \(\alpha \lt \Gamma_0\) are called
*predicatively reducible*. \(\ATR_0\) cannot be predicatively
reducible, since it is proof-theoretically reducible only to the union
of all predicative subsystems of ramified analysis, and can prove the
consistency of any individual \(\mathbf{RA}_\alpha\) for \(\alpha \lt
\Gamma_0\). However, because of the conservativity result above,
\(\ATR_0\) is considered *locally predicatively reducible*, in
the sense that any \(\Pi^1_1\) consequence of \(\ATR_0\) can be
recovered within a predicative subsystem of ramified analysis.

This property led Simpson (1985) to propose that \(\ATR_0\) could play
an important role in a program of *predicative reductionism*
analogous to the program of finitistic reductionism
(§5.3).
The value of such a program would lie in the fact that while
arithmetical transfinite recursion is equivalent to many mathematical
theorems that cannot be recovered in the more obviously predicative
system \(\ACA_0\), few results have been found in the gap between
\(\ACA_0\) and \(\ATR_0\). This situation had already been noted in
the 1960s by Kreisel, who remarked that

> in the portions of analysis developed by working mathematicians, a theorem is either derivable by means of the arithmetic comprehension axiom or else not predicative at all (not even true when relativized to \(\HYP\) functions). (Kreisel 1962: 316)

\(\ATR_0\), by contrast, proves many theorems in descriptive set theory, algebra, and other areas of mathematics that cannot be proved within individual predicative systems. The idea that it can be used instrumentally by the predicativist to prove \(\Pi^1_1\) theorems which are then, by the conservativity theorem, anointed as predicatively legitimate, is one which holds some appeal.

Nevertheless, this view is vulnerable to a version of the same criticism advanced by Burgess (2010) against finitistic reductionism. Although the predicativist will be able to prove the conservativity theorem, and see that any \(\Pi^1_1\) theorem will be provable in some \(\mathbf{RA}_\alpha\), this does not by itself give them a warrant for the predicative provability of that theorem, since the identification of predicative provability with provability in \(\mathbf{RA}_{\lt\Gamma_0}\) is an external analysis predicated on, amongst other things, the fact that \(\Gamma_0\) is a well-defined object—something the predicativist is, if the Feferman–Schütte analysis is correct, incapable of agreeing to. They must instead verify that the proof given by the reduction is indeed predicatively acceptable, and in the process check that the relevant \(\alpha\) is a predicative ordinal. The advantage of the reductionist approach must therefore lie, as Burgess points out, not in a shortcut to predicative provability, but in some conceptual advantage obtained by reasoning via the axiom scheme of arithmetical transfinite recursion. Moreover, the very reverse mathematical results that appear to make predicative reductionism an attractive prospect also present a problem of sorts. Since statements like the perfect set theorem are not merely provable in \(\ATR_0\), but actually equivalent to its characteristic axiom, they themselves are strictly speaking impredicative. It is therefore only their \(\Pi^1_1\) consequences which can be obtained via the reduction, not the theorems themselves.

## 6. Further Reading

The primary reference work on reverse mathematics and the Big Five is
Stephen Simpson’s *Subsystems of Second Order Arithmetic*
(Simpson 2009). Denis Hirschfeldt’s monograph *Slicing the
Truth* (2014) is an introductory textbook aimed at graduate
students in mathematical logic. Its focus is combinatorial principles
such as \(\mathsf{RT}^2_2\) which are not covered in detail by Simpson
(2009). Damir Dzhafarov and Carl Mummert’s *Reverse
Mathematics: Problems, Reductions, and Proofs* (Dzhafarov &
Mummert 2022) is a comprehensive introduction to contemporary reverse
mathematics which presents \(\sfRCA_0\)-provable entailment as just
one amongst many notions of reduction, and which also provides an
up-to-date treatment of many areas of importance in contemporary
reverse mathematics not covered by Simpson, especially combinatorial
principles and the Reverse Mathematics Zoo. John Stillwell’s
*Reverse Mathematics: Proofs from the Inside Out* (Stillwell
2018) is an introductory book aimed at those with at least an
undergraduate-level education in mathematics and an interest in the
foundations of mathematics. Reverse mathematics is a relatively young
field within mathematical logic, and as such there is no definitive
history. Its origins and those of the main subsystems are traced in
Dean & Walsh (2017), which also provides compendious references to
key historical sources.

A general survey of the motivations, methods, techniques, and problems
of reverse mathematics is Shore (2010). Montalbán (2011)
provides a list of open problems in reverse mathematics, although the
reader should be aware that many of them have now been solved, and
that the direction of research in reverse mathematics has shifted
somewhat since the publication of both Montalbán (2011) and
Shore (2010). Two areas of particular note are underrepresented in
these surveys. The first is more fine-grained reducibility notions for
\(\Pi^1_2\) sentences, such as Weihrauch reducibility. A recent survey
of Weihrauch reducibility and computable analysis is Brattka,
Gherardi, & Pauly (2021), while a more general discussion of
reducibility notions for \(\Pi^1_2\) sentences can be found in
Hirschfeldt & Jockusch (2016). The second is *higher-order
reverse mathematics*, which modifies the formal framework of
second-order arithmetic to include third- and higher-order entities,
thereby allowing more direct formal representations of many
mathematical objects in analysis, topology, and the like. Shore (2010,
2013) proposes an approach to higher-order reverse mathematics using
higher recursion theory, but more influential has been
Kohlenbach’s approach using higher-order arithmetic (Kohlenbach
2002, 2005). Some striking results have been proved in this framework.
For example, Normann & Sanders (2019a) showed that third-order
formalizations of Cousin’s and Lindelöf’s lemmas both
imply full second-order arithmetic. Other recent work in higher-order
reverse mathematics studies transfinite recursion (Schweber 2015), the
strength of theorems of analysis such as Cousin’s lemma and
properties of the gauge integral (Normann & Sanders 2019b),
theorems about open sets (Normann & Sanders 2020), and topological
notions like dimension and paracompactness (Sanders 2020).

Finally, although issues relating to constructive and intuitionistic mathematics have been addressed at several points, this entry focuses almost exclusively on reverse mathematics in the setting of classical logic. This leaves out the various approaches to reverse mathematics in a constructive or intuitionistic setting. Contemporary forms of constructive reverse mathematics were explicitly initiated by Ishihara (2005, 2006) and Veldman (2009, 2014), but researchers in the field often view Julian & Richman (1984) as having proved the first result in constructive reverse mathematics. Another notable early example is Mandelkern (1988). Section 5 of the entry on constructive mathematics provides an introduction to constructive reverse mathematics, carried out from within an informal base theory corresponding to Bishop’s constructivism. Diener (2020 [Other Internet Resources]) gives a compendious technical overview of results in the field, while Loeb (2012) offers a critical appraisal of the program.

## Bibliography

- Aberth, Oliver, 1980,
*Computable Analysis*, New York: McGraw-Hill. - Ackermann, Wilhelm, 1928, “Zum
Hilbertschen Aufbau der reellen Zahlen”,
*Mathematische Annalen*, 99(1): 118–133. doi:10.1007/BF01459088 - Arana, Andrew, 2008, “Logical and
Semantic Purity”,
*ProtoSociology*, 25: 36–48. doi:10.5840/protosociology2008253 - Arana, Andrew and Paolo Mancosu, 2012,
“On the Relationship between Plane and Solid Geometry”,
*The Review of Symbolic Logic*, 5(2): 294–353. doi:10.1017/S1755020312000020 - Baldwin, John T., 2013, “Formalization,
Primitive Concepts, and Purity”,
*The Review of Symbolic Logic*, 6(1): 87–128. doi:10.1017/S1755020312000263 - Beeson, Michael J., 1985,
*Foundations of Constructive Mathematics*, Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-68952-9 - Bernays, P., 1935, “Sur le platonisme
dans les mathématiques”,
*L’Enseignement Mathématique*, 34: 52–69. doi:10.5169/seals-26602 - Bishop, Errett, 1967,
*Foundations of Constructive Analysis*, New York: McGraw-Hill. - Bonola, Roberto, 1912,
*Non-Euclidean Geometry: A Critical and Historical Study of Its Development*, Chicago: Open Court. - Brattka, Vasco, Guido Gherardi,
and Arno Pauly, 2021, “Weihrauch Complexity in Computable
Analysis”, in
*Handbook of Computability and Complexity in Analysis*(Theory and Applications of Computability), Vasco Brattka and Peter Hertling (eds.), Cham: Springer, 367–417. doi:10.1007/978-3-030-59234-9_11 - Bridges, Douglas and Fred Richman,
1987,
*Varieties of Constructive Mathematics*(London Mathematical Society Lecture Note Series), Cambridge: Cambridge University Press. doi:10.1017/CBO9780511565663 - Brown, Douglas K. and Stephen G.
Simpson, 1986, “Which Set Existence Axioms are Needed to Prove
the Separable Hahn-Banach Theorem?”,
*Annals of Pure and Applied Logic*, 31: 123–144. doi:10.1016/0168-0072(86)90066-7 - –––, 1993, “The
Baire Category Theorem in Weak Subsystems of Second-Order
Arithmetic”,
*Journal of Symbolic Logic*, 58(2): 557–578. doi:10.2307/2275219 - Burgess, John P., 2010, “On the Outside
Looking in: A Caution about Conservativeness”, in
*Kurt Gödel, Essays for His Centennial*(Lecture Notes in Logic), Solomon Feferman, Charles Parsons, and Stephen G. Simpson (eds.), Cambridge/New York: Cambridge University Press, 128–142. doi:10.1017/CBO9780511750762.009 - Cantor, Georg, 1872, “Ueber die
Ausdehnung eines Satzes aus der Theorie der trigonometrischen
Reihen”,
*Mathematische Annalen*, 5(1): 123–132. doi:10.1007/BF01446327 - –––, 1883, “Sur les
ensembles infinis et linéaires de points”,
*Acta Mathematica*, 2(1): 349–380. doi:10.1007/BF02612169 - Dean, Walter, 2015, “Arithmetical
Reflection and the Provability of Soundness”,
*Philosophia Mathematica*, 23(1): 31–64. doi:10.1093/philmat/nku026 - –––, 2021, “On
Consistency and Existence in Mathematics”,
*Proceedings of the Aristotelian Society*, 120(3): 349–393. doi:10.1093/arisoc/aoaa017 - Dean, Walter and Sean Walsh, 2017,
“The Prehistory of the Subsystems of Second-Order
Arithmetic”,
*The Review of Symbolic Logic*, 10(2): 357–396. doi:10.1017/S1755020316000411 - Dedekind, Richard, 1872,
*Stetigkeit und irrationale zahlen*, Braunschweig: Vieweg. - Detlefsen, Michael, 1990, “On an
Alleged Refutation of Hilbert’s Program Using Gödel’s
First Incompleteness Theorem”,
*Journal of Philosophical Logic*, 19(4): 343–377. doi:10.1007/BF00263316 - Dzhafarov, Damir D. and Carl
Mummert, 2022,
*Reverse Mathematics: Problems, Reductions, and Proofs*(Theory and Applications of Computability), Cham: Springer International Publishing. doi:10.1007/978-3-031-11367-3 - Eastaugh, Benedict, 2019, “Set
Existence Principles and Closure Conditions: Unravelling the Standard
View of Reverse Mathematics”,
*Philosophia Mathematica*, 27(2): 153–176. doi:10.1093/philmat/nky010 - Ewald, William Bragg and Wilfried Sieg
(eds.), 2013,
*David Hilbert’s Lectures on the Foundations of Arithmetic and Logic 1917–1933*, Berlin/Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-540-69444-1 - Feferman, Solomon, 1964, “Systems of
Predicative Analysis”,
*The Journal of Symbolic Logic*, 29(1): 1–30. doi:10.2307/2269764 - –––, 1988, “Weyl
Vindicated:
*Das Kontinuum*Seventy Years Later”, in*Termi e Prospettive Della Logica e Della Filosofia Della Scienza Contemporanee, Vol. 1*, pp. 59–93, CLUEB. - –––, 1998,
*In the Light of Logic*, New York/Oxford: Oxford University Press. - –––, 2005,
“Predicativity”, in
*The Oxford Handbook of Philosophy of Mathematics and Logic*, Stewart Shapiro (ed.), Oxford: Oxford University Press, 590–624. doi:10.1093/0195148770.003.0019 - Ferreira, Fernando, 2005, “A Simple
Proof of Parsons’ Theorem”,
*Notre Dame Journal of Formal Logic*, 46(1): 83–91. doi:10.1305/ndjfl/1107220675 - Ferreirós, José, 2007,
*Labyrinth of Thought: A History of Set Theory and Its Role in Modern Mathematics*, second edition, Basel/Boston: Birkhäuser. - Friedman, Harvey, 1975, “Some Systems
of Second Order Arithmetic and Their Use”, in
*Proceedings of the 17th International Congress of Mathematicians, Vancouver 1974*, Vol. 1, Ralph D. James (ed.), International Mathematical Union, 235–242. - –––, 1976, “Systems
of Second Order Arithmetic with Restricted Induction. I, II”,
*The Journal of Symbolic Logic*, 41(2): 557–559. Abstracts from the Meeting of the Association for Symbolic Logic, Chicago, 1975. - Friedman, Harvey M., Stephen G.
Simpson, and Rick L. Smith, 1983, “Countable Algebra and Set
Existence Axioms”,
*Annals of Pure and Applied Logic*, 25(2): 141–181. doi:10.1016/0168-0072(83)90012-X - Ganea, Mihai, 2010, “Two (or Three)
Notions of Finitism”,
*The Review of Symbolic Logic*, 3(1): 119–144. doi:10.1017/S1755020309990323 - Giaquinto, Marcus, 1983,
“Hilbert’s Philosophy of Mathematics”,
*The British Journal for the Philosophy of Science*, 34(2): 119–132. doi:10.1093/bjps/34.2.119 - Grzegorczyk, Andrzej, 1955,
“Elementarily Definable Analysis”,
*Fundamenta Mathematicae*, 41(2): 311–338. doi:10.4064/fm-41-2-311-338 - Grzegorczyk,
Andrzej, Andrzej Mostowski, and Czesław Ryll-Nardzewski, 1958,
“The Classical and the \(\omega\)-Complete Arithmetic”,
*The Journal of Symbolic Logic*, 23(2): 188–206. doi:10.2307/2964398 - Hartshorne, Robin, 2000,
*Geometry: Euclid and Beyond*(Undergraduate Texts in Mathematics), New York: Springer New York. doi:10.1007/978-0-387-22676-7 - Hilbert, David, 1899,
*Grundlagen der geometrie*(Festschrift zur feier der enthüllung des Gauss-Weber-denkmals in Göttingen 1), Leipzig: B.G. Teubner. - –––, 1922,
“Neubegründung der mathematik: Erste mitteilung”,
*Abhandlungen aus dem Seminar der Hamburgischen Universität*, 1: 157–177. doi:10.1007/BF02940589 - –––, 1926, “Über
das Unendliche”,
*Mathematische Annalen*, 95: 161–190. doi:10.1007/BF01206605 - –––, 1928 [1967],
“Die grundlagen der mathematik: Vortrag, gehalten auf Einladung
des Mathematischen Seminars im Juli 1927 in Hamburg”,
*Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg*, 6: 65–85. Translated in Hilbert 1967. doi:10.1007/BF02940602 - –––, 1967, “The Foundations of Mathematics”, in van Heijenoort 1967: 464–479. Translation of Hilbert 1928.
- Hilbert, David and Paul Bernays, 1934, *Grundlagen der Mathematik*, Vol. 1, Berlin: Springer.
- –––, 1939, *Grundlagen der Mathematik*, Vol. 2, Berlin: Springer.
- –––, 1968, *Grundlagen der Mathematik*, second edition, Vol. I, Berlin/Heidelberg/New York: Springer-Verlag.
- –––, 1970, *Grundlagen der Mathematik*, second edition, Vol. II, Berlin/Heidelberg/New York: Springer-Verlag.
- Hirschfeldt, Denis R., 2014, *Slicing the Truth: On the Computable and Reverse Mathematics of Combinatorial Principles* (Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore 28), Chitat Chong, Qi Feng, Theodore A. Slaman, W. Hugh Woodin, and Yue Yang (eds.), Singapore/Hackensack, NJ: World Scientific. doi:10.1142/9208
- Hirschfeldt, Denis R. and Carl G. Jockusch, 2016, “On Notions of Computability-Theoretic Reduction Between \(\Pi^1_2\) Principles”, *Journal of Mathematical Logic*, 16(1): 1650002. doi:10.1142/S0219061316500021
- Howard, Paul and Jean E. Rubin, 1998, *Consequences of the Axiom of Choice*, Providence, RI: American Mathematical Society. doi:10.1090/surv/059
- Ishihara, Hajime, 2005, “Constructive Reverse Mathematics: Compactness Properties”, in *From Sets and Types to Topology and Analysis: Towards Practicable Foundations for Constructive Mathematics* (Oxford Logic Guides 48), Laura Crosilla and Peter Schuster (eds.), Oxford: Clarendon Press, 245–267. doi:10.1093/acprof:oso/9780198566519.003.0016
- –––, 2006, “Reverse Mathematics in Bishop’s Constructive Mathematics”, *Philosophia Scientae*, CS 6: 43–59. doi:10.4000/philosophiascientiae.406
- Jech, Thomas J., 1973, *The Axiom of Choice* (Studies in Logic and the Foundations of Mathematics 75), Amsterdam: North-Holland.
- Jockusch, Carl G., 1972, “Ramsey’s Theorem and Recursion Theory”, *The Journal of Symbolic Logic*, 37(2): 268–280. doi:10.2307/2272972
- Jockusch, Carl G. and Robert I. Soare, 1972, “\(\Pi^0_1\) Classes and Degrees of Theories”, *Transactions of the American Mathematical Society*, 173: 33–56. doi:10.1090/S0002-9947-1972-0316227-0
- Julian, William H. and Fred Richman, 1984, “A Uniformly Continuous Function on [0, 1] That Is Everywhere Different from Its Infimum”, *Pacific Journal of Mathematics*, 111(2): 333–340. doi:10.2140/pjm.1984.111.333
- Kanamori, Akihiro, 1995, “The Emergence of Descriptive Set Theory”, in *From Dedekind to Gödel* (Synthese Library 251), Jaakko Hintikka (ed.), Dordrecht: Kluwer Academic Publishing, 241–262. doi:10.1007/978-94-015-8478-4_10
- Kechris, Alexander S., 1995, *Classical Descriptive Set Theory* (Graduate Texts in Mathematics 156), New York: Springer-Verlag. doi:10.1007/978-1-4612-4190-4
- Kino, A., John Myhill, and R. E. Vesley (eds.), 1970, *Intuitionism and Proof Theory: Proceedings of the Summer Conference at Buffalo, N.Y., 1968* (Studies in Logic and the Foundations of Mathematics 60), Amsterdam: North-Holland.
- Kleene, Stephen C., 1945, “On the Interpretation of Intuitionistic Number Theory”, *The Journal of Symbolic Logic*, 10(4): 109–124. doi:10.2307/2269016
- –––, 1952, “Recursive Functions and Intuitionistic Mathematics”, in *Proceedings of the International Congress of Mathematicians, 1950*, Vol. 1, Lawrence M. Graves, Einar Hille, Paul A. Smith, and Oscar Zariski (eds.), Providence, RI: American Mathematical Society, 679–685.
- –––, 1955, “Hierarchies of Number-Theoretic Predicates”, *Bulletin of the American Mathematical Society*, 61(3): 193–213. doi:10.1090/S0002-9904-1955-09896-3
- –––, 1959, “Quantification of Number-Theoretic Functions”, *Compositio Mathematica*, 14: 23–40. [Kleene 1959 available online]
- Kohlenbach, Ulrich, 2002, “Foundational and Mathematical Uses of Higher Types”, in Sieg, Sommer, and Talcott 2002: 92–116. doi:10.1017/9781316755983.005
- –––, 2005, “Higher Order Reverse Mathematics”, in *Reverse Mathematics 2001* (Lecture Notes in Logic 21), Stephen G. Simpson (ed.), Natick, MA: A. K. Peters, 281–295. doi:10.1017/9781316755846.018
- Kondô, Motokiti, 1958, “Sur les ensembles nommables et le fondement de l’analyse mathématique, I”, *Japanese Journal of Mathematics*, 28: 1–116. doi:10.4099/jjm1924.28.0_1
- König, Dénes, 1927, “Über eine Schlussweise aus dem Endlichen ins Unendliche”, *Acta Litterarum ac Scientiarum Regiae Universitatis Hungaricae Francisco-Josephinae*, 3: 121–130.
- Kreisel, Georg, 1958, “Mathematical Significance of Consistency Proofs”, *The Journal of Symbolic Logic*, 23(2): 155–182. doi:10.2307/2964396
- –––, 1959, “Analysis of the Cantor–Bendixson Theorem by Means of the Analytic Hierarchy”, *Bulletin de l’Académie Polonaise des Sciences. Série des Sciences Mathématiques, Astronomiques et Physiques*, 7: 621–626.
- –––, 1960, “La Prédicativité”, *Bulletin de la Société Mathématique de France*, 88: 371–391.
- –––, 1962, “The Axiom of Choice and the Class of Hyperarithmetic Functions”, *Indagationes Mathematicae (Proceedings)*, 65: 307–319. doi:10.1016/S1385-7258(62)50029-2
- –––, 1970, “Principles of Proof and Ordinals Implicit in Given Concepts”, in Kino, Myhill, and Vesley 1970: 489–516.
- Kushner, Boris A., 2006, “The Constructive Mathematics of A. A. Markov”, *The American Mathematical Monthly*, 113(6): 559–566. doi:10.2307/27641983
- Lewis, Florence P., 1920, “History of the Parallel Postulate”, *The American Mathematical Monthly*, 27(1): 16–23. doi:10.2307/2973238
- Liu, Jiayi, 2012, “\(\mathsf{RT}^2_2\) does not imply \(\WKL_0\)”, *The Journal of Symbolic Logic*, 77(2): 609–620. doi:10.2178/jsl/1333566640
- Loeb, Iris, 2012, “Questioning Constructive Reverse Mathematics”, *Constructivist Foundations*, 7(2): 131–140. [Loeb 2012 available online]
- Mandelkern, Mark, 1988, “Limited Omniscience and the Bolzano–Weierstrass Principle”, *Bulletin of the London Mathematical Society*, 20(4): 319–320. doi:10.1112/blms/20.4.319
- –––, 1989, “Brouwerian Counterexamples”, *Mathematics Magazine*, 62(1): 3–27. doi:10.1080/0025570X.1989.11977404
- Metakides, George, Anil Nerode, and Richard A. Shore, 1985, “Recursive Limits on the Hahn–Banach Theorem”, in *Errett Bishop: Reflections on Him and His Research* (Contemporary Mathematics 39), Murray Rosenblatt (ed.), Providence, RI: American Mathematical Society, 85–91.
- Montalbán, Antonio, 2011, “Open Questions in Reverse Mathematics”, *The Bulletin of Symbolic Logic*, 17(3): 431–454. doi:10.2178/bsl/1309952320
- Montalbán, Antonio and Richard A. Shore, 2012, “The Limits of Determinacy in Second-Order Arithmetic”, *Proceedings of the London Mathematical Society*, 104(2): 223–252. doi:10.1112/plms/pdr022
- Moore, Gregory H., 1982, *Zermelo’s Axiom of Choice: Its Origins, Development, and Influence* (Studies in the History of Mathematics and Physical Sciences 8), New York: Springer-Verlag. doi:10.1007/978-1-4613-9478-5
- Moschovakis, Joan Rand, to appear, “Markov’s Principle, Markov’s Rule and the Notion of Constructive Proof”, in *Intuitionism, Computation and Proof: Selected themes from the research of G. Kreisel*, M. Antonutti Marfori and M. Petrolo (eds.). [Moschovakis to appear available online]
- Mostowski, Andrzej, 1959, “On Various Degrees of Constructivism”, in *Constructivity in Mathematics: Proceedings of the Colloquium Held at Amsterdam, 1957*, Arend Heyting (ed.), Amsterdam: North-Holland, 178–194. Collected in *Foundational Studies: Selected Works, Volume 2* (Studies in Logic and the Foundations of Mathematics 93, Part B), Amsterdam: North-Holland, 359–375. doi:10.1016/S0049-237X(09)70270-8
- Mummert, Carl and Stephen G. Simpson, 2005, “Reverse Mathematics and \(\Pi^1_2\) Comprehension”, *The Bulletin of Symbolic Logic*, 11(4): 526–533. doi:10.2178/bsl/1130335208
- Nagorny, N. M., 1994, “Andrei Markov and Mathematical Constructivism”, in *Logic, Methodology and Philosophy of Science IX: Proceedings of the Ninth International Congress of Logic, Methodology and Philosophy of Science* (Studies in Logic and the Foundations of Mathematics 134), Dag Prawitz, Brian Skyrms, and Dag Westerståhl (eds.), Amsterdam: North-Holland, 467–479. doi:10.1016/S0049-237X(06)80057-1
- Normann, Dag and Sam Sanders, 2019a, “The Strength of Compactness in Computability Theory and Nonstandard Analysis”, *Annals of Pure and Applied Logic*, 170(11): 102710. doi:10.1016/j.apal.2019.05.007
- –––, 2019b, “On the Mathematical and Foundational Significance of the Uncountable”, *Journal of Mathematical Logic*, 19(1): 1950001. doi:10.1142/S0219061319500016
- –––, 2020, “Open Sets in Computability Theory and Reverse Mathematics”, *Journal of Logic and Computation*, 30(8): 1639–1679. doi:10.1093/logcom/exaa049
- Pambuccian, Victor, 1996, “Splitting the Pasch Axiom”, *Journal of Geometry*, 56(1–2): 126–130. doi:10.1007/BF01222689
- –––, 2001, “Fragments of Euclidean and Hyperbolic Geometry”, *Scientiae Mathematicae Japonicae*, 53(2): 361–400.
- –––, 2006, “Axiomatizations of Hyperbolic and Absolute Geometries”, in *Non-Euclidean Geometries: János Bolyai Memorial Volume* (Mathematics and Its Applications 581), András Prékopa and Emil Molnár (eds.), Boston, MA: Springer, 119–153. doi:10.1007/0-387-29555-0_7
- –––, 2009, “A Reverse Analysis of the Sylvester–Gallai Theorem”, *Notre Dame Journal of Formal Logic*, 50(3): 245–260. doi:10.1215/00294527-2009-010
- –––, 2019, “Prolegomena to Any Theory of Proof Simplicity”, *Philosophical Transactions of the Royal Society A*, 377(2140): 20180035. doi:10.1098/rsta.2018.0035
- Pambuccian, Victor and Celia Schacht, 2021, “The Case for the Irreducibility of Geometry to Algebra”, *Philosophia Mathematica*, 30(1): 1–31. doi:10.1093/philmat/nkab022
- Parsons, Charles, 1970, “On a Number-Theoretic Choice Schema and Its Relation to Induction”, in Kino, Myhill, and Vesley 1970: 459–473. doi:10.1016/S0049-237X(08)70771-7
- Patey, Ludovic and Keita Yokoyama, 2018, “The Proof-Theoretic Strength of Ramsey’s Theorem for Pairs and Two Colors”, *Advances in Mathematics*, 330: 1034–1070. doi:10.1016/j.aim.2018.03.035
- Péter, Rózsa, 1935, “Konstruktion nichtrekursiver Funktionen”, *Mathematische Annalen*, 111(1): 42–60. doi:10.1007/BF01472200
- Pour-El, Marian B. and J. Ian Richards, 1989, *Computability in Analysis and Physics* (Perspectives in Mathematical Logic), Berlin/New York: Springer-Verlag.
- Robinson, Raphael M., 1948, “Recursion and Double Recursion”, *Bulletin of the American Mathematical Society*, 54(10): 987–993. doi:10.1090/S0002-9904-1948-09121-2
- Rosenfeld, B. A., 1976 [1988], *Istoriya Neevklidovoi Geometrii: Razvitie ponyatiya o geometricheskom prostranstve*, Moscow: Nauka; translated as *A History of Non-Euclidean Geometry: Evolution of the Concept of a Geometric Space* (Studies in the History of Mathematics and Physical Sciences 12), Abe Shenitzer (trans.), New York: Springer-Verlag. doi:10.1007/978-1-4419-8680-1
- Rubin, Herman and Jean E. Rubin, 1963, *Equivalents of the Axiom of Choice*, Amsterdam: North-Holland.
- –––, 1985, *Equivalents of the Axiom of Choice, II* (Studies in Logic and the Foundations of Mathematics 116), Amsterdam/New York: North-Holland.
- Russell, Bertrand, 1908, “Mathematical Logic as Based on the Theory of Types”, *American Journal of Mathematics*, 30(3): 222–262. doi:10.2307/2369948
- Sanders, Sam, 2020, “Reverse Mathematics of Topology: Dimension, Paracompactness, and Splittings”, *Notre Dame Journal of Formal Logic*, 61(4): 537–559. doi:10.1215/00294527-2020-0021
- –––, 2022, “Lifting Proofs from Countable to Uncountable Mathematics”, *Information and Computation*, 287: 104762. doi:10.1016/j.ic.2021.104762
- Schütte, Kurt, 1965a, “Eine Grenze für die Beweisbarkeit der transfiniten Induktion in der verzweigten Typenlogik”, *Archiv für Mathematische Logik und Grundlagenforschung*, 7(1–2): 45–60. doi:10.1007/BF01972460
- –––, 1965b, “Predicative Well-Orderings”, in *Formal Systems and Recursive Functions: Proceedings of the Eighth Logic Colloquium, Oxford, July 1963* (Studies in Logic and the Foundations of Mathematics 40), J. N. Crossley and Michael Dummett (eds.), Amsterdam: North-Holland, 280–303. doi:10.1016/S0049-237X(08)71694-X
- Schweber, Noah, 2015, “Transfinite Recursion in Higher Reverse Mathematics”, *The Journal of Symbolic Logic*, 80(3): 940–969. doi:10.1017/jsl.2015.2
- Seetapun, David and Theodore A. Slaman, 1995, “On the Strength of Ramsey’s Theorem”, *Notre Dame Journal of Formal Logic*, 36(4): 570–582. doi:10.1305/ndjfl/1040136917
- Shore, Richard A., 2010, “Reverse Mathematics: The Playground of Logic”, *The Bulletin of Symbolic Logic*, 16(3): 378–402. doi:10.2178/bsl/1286284559
- –––, 2013, “Reverse Mathematics, Countable and Uncountable: A Computational Approach”, in *Effective Mathematics of the Uncountable* (Lecture Notes in Logic), Denis R. Hirschfeldt, Noam Greenberg, Joel David Hamkins, and Russell Miller (eds.), Cambridge: ASL and Cambridge University Press, 150–163. doi:10.1017/CBO9781139028592.009
- Sieg, Wilfried, 1984, “Foundations for Analysis and Proof Theory”, *Synthese*, 60(2): 159–200. doi:10.1007/BF00485460
- –––, 1985, “Fragments of Arithmetic”, *Annals of Pure and Applied Logic*, 28: 33–71. doi:10.1016/0168-0072(85)90030-2
- –––, 1988, “Hilbert’s Program Sixty Years Later”, *The Journal of Symbolic Logic*, 53(2): 338–348. doi:10.1017/S0022481200028292
- –––, 1990, “Review of ‘Friedman’s Research on Subsystems of Second Order Arithmetic’, by Stephen G. Simpson (Simpson 1985)”, *The Journal of Symbolic Logic*, 55(2): 870–874. doi:10.2307/2274673
- –––, 2020, “Methodological Frames: Paul Bernays, Mathematical Structuralism, and Proof Theory”, in *The Prehistory of Mathematical Structuralism* (Logic and Computation in Philosophy), Erich H. Reck and Georg Schiemer (eds.), New York: Oxford University Press, 352–382. doi:10.1093/oso/9780190641221.003.0014
- Sieg, Wilfried and Mark Ravaglia, 2005, “David Hilbert and Paul Bernays, *Grundlagen der Mathematik*, First Edition (1934, 1939)”, in *Landmark Writings in Western Mathematics 1640–1940*, Ivor Grattan-Guinness, Roger Cooke, Leo Corry, Pierre Crépel, and Niccolo Guicciardini (eds.), Amsterdam: Elsevier, 981–999. doi:10.1016/B978-044450871-3/50158-3
- Sieg, Wilfried, Richard Sommer, and Carolyn L. Talcott (eds.), 2002, *Reflections on the Foundations of Mathematics: Essays in Honor of Solomon Feferman* (Lecture Notes in Logic 15), Natick, MA: Association for Symbolic Logic and A. K. Peters. doi:10.1017/9781316755983
- Simpson, Stephen G., 1984, “Which set existence axioms are needed to prove the Cauchy/Peano theorem for ordinary differential equations?”, *The Journal of Symbolic Logic*, 49(3): 783–802. doi:10.2307/2274131
- –––, 1985, “Friedman’s Research on Subsystems of Second Order Arithmetic”, in *Harvey Friedman’s Research on the Foundations of Mathematics* (Studies in Logic and the Foundations of Mathematics 117), Leo A. Harrington, Michael D. Morley, Andre Scedrov, and Stephen G. Simpson (eds.), Amsterdam: North-Holland, 137–159. doi:10.1016/S0049-237X(09)70158-2
- –––, 1988, “Partial Realizations of Hilbert’s Program”, *The Journal of Symbolic Logic*, 53(2): 349–363. doi:10.1017/S0022481200028309
- –––, 1999, *Subsystems of Second Order Arithmetic* (Perspectives in Mathematical Logic), Berlin/New York: Springer.
- –––, 2009, *Subsystems of Second Order Arithmetic* (Perspectives in Logic), second edition, Cambridge: Association for Symbolic Logic and Cambridge University Press. doi:10.1017/cbo9780511581007
- –––, 2010, “The Gödel Hierarchy and Reverse Mathematics”, in *Kurt Gödel: Essays for His Centennial* (Lecture Notes in Logic 33), Solomon Feferman, Charles Parsons, and Stephen G. Simpson (eds.), Cambridge: ASL and Cambridge University Press, 109–127. doi:10.1017/CBO9780511750762.008
- Skolem, Thoralf Albert, 1923, *Begründung der elementaren Arithmetik durch die rekurrierende Denkweise ohne Anwendung scheinbarer Veränderlichen mit unendlichem Ausdehnungsbereich* (Videnskabs-selskabet i Christiania, Skrifter I, Mathematisk-naturvidenskabelig klasse 1923, no. 6), Kristiania: in Kommission bei J. Dybwad.
- Solomon, Reed, 2001, “\(\Pi^1_1\text{-}\CA_0\) and Order Types of Countable Ordered Groups”, *The Journal of Symbolic Logic*, 66(1): 192–206. doi:10.2307/2694917
- Specker, Ernst, 1949, “Nicht konstruktiv beweisbare Sätze der Analysis”, *The Journal of Symbolic Logic*, 14(3): 145–158. doi:10.2307/2267043
- –––, 1971, “Ramsey’s Theorem Does Not Hold in Recursive Set Theory”, in *Logic Colloquium ’69: Proceedings of the Summer School and Colloquium in Mathematical Logic, Manchester, August 1969* (Studies in Logic and the Foundations of Mathematics 61), Robin O. Gandy and C. M. E. Yates (eds.), Amsterdam: North-Holland, 439–442. doi:10.1016/S0049-237X(08)71242-4
- Spector, Clifford, 1955, “Recursive Well-Orderings”, *The Journal of Symbolic Logic*, 20(2): 151–163. doi:10.2307/2266902
- Stillwell, John, 2018, *Reverse Mathematics: Proofs from the Inside Out*, Princeton/Oxford: Princeton University Press.
- Tait, William W., 1981, “Finitism”, *The Journal of Philosophy*, 78(9): 524–546. doi:10.2307/2026089
- –––, 2002, “Remarks on Finitism”, in Sieg, Sommer, and Talcott 2002: 410–419. doi:10.1017/9781316755983.020
- Van Heijenoort, Jean (ed.), 1967, *From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931* (Source Books in the History of the Sciences), Cambridge, MA: Harvard University Press.
- Veldman, Wim, 2009, “Brouwer’s Approximate Fixed-Point Theorem Is Equivalent to Brouwer’s Fan Theorem”, in *Logicism, Intuitionism, and Formalism: What Has Become of Them?* (Synthese Library 341), Sten Lindström, Erik Palmgren, Krister Segerberg, and Viggo Stoltenberg-Hansen (eds.), Dordrecht: Springer, 277–299. doi:10.1007/978-1-4020-8926-8_14
- –––, 2014, “Brouwer’s Fan Theorem as an Axiom and as a Contrast to Kleene’s Alternative”, *Archive for Mathematical Logic*, 53(5–6): 621–693. doi:10.1007/s00153-014-0384-9
- Weyl, Hermann, 1918, *Das Kontinuum: Kritische Untersuchungen über die Grundlagen der Analysis*, Leipzig: Verlag von Veit.
- –––, 1987 [1994], *The Continuum: A Critical Examination of the Foundation of Analysis*, Stephen Pollard and Thomas Bole (trans.), Kirksville, MO: Thomas Jefferson University Press; reprinted with corrections, Mineola, NY: Dover, 1994.
- Whitehead, Alfred North and Bertrand Russell, 1910, *Principia Mathematica*, Volume 1, Cambridge: Cambridge University Press.
- Zach, Richard, 1998, “Numbers and Functions in Hilbert’s Finitism”, *Taiwanese Journal for Philosophy and History of Science*, 10: 33–60.
- –––, 2003, “The Practice of Finitism: Epsilon Calculus and Consistency Proofs in Hilbert’s Program”, *Synthese*, 137(1/2): 211–259. doi:10.1023/A:1026247421383
- Zermelo, Ernst, 1904, “Beweis, daß jede Menge wohlgeordnet werden kann (Aus einem an Herrn Hilbert gerichteten Briefe)”, *Mathematische Annalen*, 59(4): 514–516. doi:10.1007/BF01445300
- –––, 1908, “Neuer Beweis für die Möglichkeit einer Wohlordnung”, *Mathematische Annalen*, 65(1): 107–128. Translated in Zermelo 1967. doi:10.1007/BF01450054
- –––, 1967, “A New Proof of the Possibility of a Well-Ordering”, in van Heijenoort 1967: 183–198.


## Other Internet Resources

- Diener, Hannes, 2020, *Constructive Reverse Mathematics*, unpublished manuscript, version of 4 April 2020, arXiv:1804.05495.
- The Reverse Mathematics Zoo, University of Connecticut.
- Joosten, Joost, 2002, “Two Proofs of Parsons’ Theorem”, unpublished manuscript.
- Computability.org – a resource for computability theorists.

### Acknowledgments

The writing of this entry was informed by many valuable discussions with Marianna Antonutti Marfori, Andy Arana, Walter Dean, Mic Detlefsen, Damir Dzhafarov, Marta Fiori Carones, Martin Fischer, Volker Halbach, Leon Horsten, Hannes Leitgeb, Øystein Linnebo, Colin McLarty, Toby Meadows, Antonio Montalbán, Alberto Naibo, Carlo Nicolai, Marco Panza, Richard Pettigrew, Marcus Rossberg, Sam Sanders, Reed Solomon, Wilfried Sieg, Steve Simpson, Sean Walsh, and Philip Welch. Particular thanks are due to Marianna Antonutti Marfori, Walter Dean, and an anonymous referee for their comments on earlier drafts, along with the team at the Stanford Encyclopedia of Philosophy for their inexhaustible patience and wonderful editorial support.