Non-Deductive Methods in Mathematics

First published Mon Aug 17, 2009; substantive revision Tue Apr 21, 2020

As it stands, there is no single, well-defined philosophical subfield devoted to the study of non-deductive methods in mathematics. As the term is being used here, it incorporates a cluster of different philosophical positions, approaches, and research programs whose common motivation is the view that (i) there are non-deductive aspects of mathematical methodology and that (ii) the identification and analysis of these aspects has the potential to be philosophically fruitful.

1. Introduction

Philosophical views concerning the ontology of mathematics run the gamut from platonism (mathematics is about a realm of abstract objects), to fictionalism (mathematics is a fiction whose subject matter does not exist), to formalism (mathematical statements are meaningless strings manipulated according to formal rules), with no consensus about which is correct. By contrast, it seems fair to say that there is a philosophically established received view of the basic methodology of mathematics. Roughly, it is that mathematicians aim to prove mathematical claims of various sorts, and that proof consists of the logical derivation of a given claim from axioms. This view has a long history; thus Descartes writes in his Rules for the Direction of the Mind (1627–28) that a mathematical proposition must be “deduced from true and known principles by the continuous and uninterrupted action of a mind that has a clear vision of each step in the process” (47). An important implication of this view is that there is no room, at least ideally, in mathematics for non-deductive methods. Frege, for example, states that “it is in the nature of mathematics always to prefer proof, where proof is possible, to any confirmation by induction” (1884, 2). Berry (2016) offers a more recent defense of proof as promoting key virtues of shared enquiry within the mathematical community.

In the philosophical literature, perhaps the most famous challenge to this received view has come from Imre Lakatos, in his influential (posthumously published) 1976 book, Proofs and Refutations:

Euclidean Methodology has developed a certain obligatory style of presentation. I shall refer to this as ‘deductivist style’. This style starts with a painstakingly stated list of axioms, lemmas and/or definitions. The axioms and definitions frequently look artificial and mystifyingly complicated. One is never told how these complications arose. The list of axioms and definitions is followed by the carefully worded theorems. These are loaded with heavy-going conditions; it seems impossible that anyone should ever have guessed them. The theorem is followed by the proof.
In deductivist style, all propositions are true and all inferences valid. Mathematics is presented as an ever-increasing set of eternal, immutable truths.
Deductivist style hides the struggle, hides the adventure. The whole story vanishes, the successive tentative formulations of the theorem in the course of the proof-procedure are doomed to oblivion while the end result is exalted into sacred infallibility (Lakatos 1976, 142).

Before proceeding, it will be worthwhile to make a few distinctions in order to focus the topics of subsequent discussion.

1.1 Discovery versus Justification

The broad claim that there are some non-deductive aspects of mathematical activity seems relatively uncontroversial. For this merely amounts to the claim that not everything that mathematicians do when they do mathematics consists of deriving statements from other statements. As James Franklin puts it:

Mathematics cannot consist just of conjectures, refutations and proofs. Anyone can generate conjectures, but which ones are worth investigating? … Which might be capable of proof by a method in the mathematician’s repertoire? … Which are unlikely to yield an answer until after the next review of tenure? The mathematician must answer these questions to allocate his time and effort. (Franklin 1987, 2)

One way to narrow the general claim so as to make it more substantive is to make use of the familiar (though not entirely unproblematic) distinction between ‘context of discovery’ and ‘context of justification’. On the one hand, this distinction may allow the traditional deductivist view to be maintained in the face of Lakatos’s critique, by arguing that what Lakatos is pointing to concerns the context of discovery in mathematics. Within the context of justification, derivation of results from axioms may still be the correct and complete story. Some of the reactions of mathematicians to Lakatos’s views have this character, for example the following remark by Morris Kline in a letter written to Lakatos:

I do believe we need much more literature emphasizing the discovery side of mathematics. All the emphasis, as you know and as you imply, is on the deductive structure of mathematics and the impression being given to students is that one deduces new conclusions from old ones.[1]

It is also possible to find passages along similar lines in the work of Pólya, who was a major influence on Lakatos:

Studying the methods of solving problems, we perceive another face of mathematics. Yes, mathematics has two faces; it is the rigorous science of Euclid, but it is also something else. Mathematics presented in the Euclidean way appears as a systematic, deductive science, but mathematics in the making appears as an experimental, inductive science. (Pólya 1945, vii) [italics in original]

Conversely, in order to pose a genuine challenge to the familiar deductivist position, the counterclaim needs to be that non-deductive methods play a role in the justification of mathematical results (Paseau 2015). It will therefore be primarily justificatory contexts which will be focused on in the remainder of this survey.[2]

1.2 Deduction and Formalization

This is not the place for a detailed analysis of deduction. For present purposes, this notion will be presumed to be fairly straightforward, at least in principle. A deduction is any sequence of statements each of which is derived either from some initial set of statements (the premises) or from prior statements in the sequence. However, one issue that does need to be addressed is the relationship between deduction and formalization (see, e.g., Azzouni 2013).

An argument may be deductive without being formal. Although the paradigm cases of deduction do tend to occur in highly formalized systems, this is not necessary. “All even numbers greater than 2 are composite; 1058 is greater than 2; 1058 is even; hence 1058 is composite” is a perfectly good deduction, despite not being formalized. Hence, contrary to what is sometimes assumed in discussions of these issues, it is not true that all informal aspects of mathematical practice are thereby non-deductive.
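For comparison, here is one way (an illustrative sketch, not a canonical regimentation) that this argument might be rendered in first-order logic, writing E(x) for ‘x is even’ and C(x) for ‘x is composite’:

\[
\forall x\,\big((E(x) \wedge x > 2) \rightarrow C(x)\big),\quad E(1058),\quad 1058 > 2 \;\vdash\; C(1058)
\]

The formal derivation (universal instantiation, conjunction introduction, modus ponens) traverses exactly the same inferential steps as the informal version; formalization changes the dress, not the deduction.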

On the other hand, the development of formal logic has been closely bound up with providing a clear language for presenting (and evaluating) deductive mathematical reasoning. Indeed, as John Burgess argues in his (1992), modern classical logic largely developed as a basis for mathematical reasoning, especially proof. The increase in rigor within mathematics during the 19th Century is properly viewed as a cause, not an effect, of the logical revolution set off by Frege’s work. Logic, in Burgess’s view, is descriptive: its goal is to construct mathematical models of reasoning. Classical logic constitutes an idealized description of classical mathematical proof.

It may also be important to distinguish informal elements of a given mathematical proof from unformalizable elements (if there are any such things).[3] In Section 4, this issue will be taken up in connection with the use of diagrams in mathematical reasoning.

1.3 Deductivism and Foundations

In addition to the development of formal logic, another aspect of deductivism is its emphasis on ‘foundations’. The reason for this is that the passage from axioms to theorem is straightforward, in principle, since it is a matter of logical derivation. Indeed there is nothing distinctively mathematical involved in this transition. Hence attention is shifted to the starting point of the deductive process, namely the axioms. And if these axioms are themselves theorems of some more basic theory, then this pursuit of a secure starting point can be pursued down through a hierarchy of ever more foundational mathematical theories.

It is undeniable that issues in the foundations of mathematics have been the central preoccupation of philosophers of mathematics through most of the 20th Century. This is not, of course, because foundational areas such as set theory are the only areas of mathematics where philosophers think that deduction takes place, but rather because—as pointed out above—focusing on deduction puts particular emphasis on the starting points of proofs. Even those sympathetic with this focus on foundational issues are likely to acknowledge that many areas of mathematical practice are thereby ignored. The question is what—if anything—of philosophical interest is lost in the process.

2. Non-deductive Aspects of the Deductive Method

2.1 Aspects of Informality

2.1.1 Semi-Formal Proofs

As mentioned in 1.2 above, one feature of the deductivist style is that paradigmatic mathematical proofs are expressed entirely in some appropriate formal language (for example, first-order predicate logic with identity). This allows the validity of a given proof to be easily, indeed mechanically, ascertained. But of course few, if any, of the proofs circulated and published by mathematicians have this form. What qualifies as a proof for working mathematicians ranges from the completely informal to the detailed and precise, with every (or almost every) gap filled in. However, even detailed and precise proofs are rarely expressed purely in the language of logic; rather, they are a mixture of ordinary language, mathematical, and logical symbols and terminology.

Sometimes philosophers writing in the deductivist tradition make it sound as if this is a fairly trivial point: mathematicians have a ‘translation scheme’ to hand, and simply decline to write out the proof in pure logic so as to keep it accessible and easy to read. In point of fact, it is often far from obvious how to translate a given proof into formal logic. Furthermore, it is not clear that ‘translating’ an informal proof into a formal language is the right way of looking at the situation at all. Stewart Shapiro presents one alternative—on which formal languages are mathematical models of informal mathematical language—at the beginning of his 1991 book, Foundations Without Foundationalism:

The languages of full logics are, at least in part, mathematical models of fragments of ordinary natural languages, like English, or perhaps ordinary languages augmented with expressions used in mathematics. The latter may be called ‘natural languages of mathematics’. For emphasis, or to avoid confusion, the language of a full logic is sometimes called a ‘formal language’.
As a mathematical model, there is always a gap between the language of a logic and its natural language counterpart. The fit between model and modelled can be good or bad, useful or misleading, for whatever purpose at hand. (Shapiro 1991, 3)

An alternative picture is that the formal and informal languages offer different ways of expressing mathematical theorems and proofs. The formal language is not used to ‘translate’, and hence does not need to be measured against what is expressed in an informal proof. Rather it offers its own, arguably superior, resources for expressing the content of mathematical statements in a precise and rigorous setting that has been specifically designed for this purpose. Whichever picture is adopted of the relation between formal and informal presentations of mathematics, two points remain. First, deductive mathematical arguments—arguments that are produced, transmitted, and built upon by mathematicians—can be either formal or informal. Second, the evaluation of such arguments as being deductively valid or invalid is easier to carry out definitively in the context of a formal system of some sort.

It is also worth noting that Lakatos argues for a third category of proof, in addition to formal and informal, that he calls “quasi-formal”. Lakatos writes that:

to suggest that an informal proof is just an incomplete formal proof seems to me to make the same mistake as early educationalists did when, assuming that a child was merely a miniature grown-up, they neglected the direct study of child-behaviour in favour of theorizing based on simple analogies with adult behaviour. (Lakatos 1980, 63)

2.1.2 Gaps in Proofs

The talk above of “every gap being filled in” in the transition to an ideal proof glosses over the fact that the notion of a “gap” in a proof is itself in need of further clarification. For one thing, the most straightforward way of defining a proof gap—as given below—is only applicable to fully formal systems.

A gap is any point in a proof where the line written does not follow from some subset of the previous lines (together with axioms) by the application of a formally valid—and explicitly stated—rule of inference for the system.

The reason for the condition that any rule be an explicitly stated rule of inference for the system is that we want to make room for gappy yet valid proofs. For example, “2 + 2 = 4, hence there are infinitely many primes” is a valid argument, but clearly there is a large gap between its premise and its conclusion. On the other hand, despite the above definition only working for formal proofs, gaplessness and formality do not always go together. Thus a traditional syllogism such as, “All men are mortal; Socrates is a man; hence Socrates is mortal” is an example of a gapless informal proof. One way to extend the notion of gappiness (and gaplessness) to informal proofs is via the notion of a basic mathematical inference, in other words an inference that is “accepted by the mathematical community as usable in proof without any further need of argument” (Fallis 2003, 49).

However we end up characterizing gaps, it is undeniably the case that most actual proofs as presented by mathematicians have gaps. Don Fallis proposes a taxonomy of kinds of proof gaps in his (2003):

  1. Inferential Gaps
    “A mathematician has left an inferential gap whenever the particular sequence of propositions that the mathematician has in mind (as being a proof) is not a proof” (Fallis 2003, 53).
  2. Enthymematic Gaps
    “A mathematician has left an enthymematic gap whenever he does not explicitly state the particular sequence of propositions that he has in mind” (Fallis 2003, 54).[4]
  3. Untraversed Gaps
    “A mathematician has left an untraversed gap whenever he has not tried to verify directly that each proposition in the sequence of propositions that he has in mind (as being a proof) follows from previous propositions in the sequence by a basic mathematical inference” (Fallis 2003, 56–7).

In addition to this taxonomical work, Fallis also argues for the philosophical thesis that gaps in proofs are not necessarily a bad thing. Building on the third of these categories, he introduces the notion of a universally untraversed gap, in other words a gap that has not been bridged by any member of the mathematical community. Fallis claims that such gaps are not unusual and that at least some of the time proofs containing them are accepted by mathematicians in a justificatory context. This view is borne out in more recent work by Andersen (2018).

One currently active area of work that has led to the uncovering of hitherto unrecognized gaps of various sorts is automated proof checking. Specially designed computer programs are used to check the validity of proofs that have been rendered in an appropriate formal language. The main focus thus far has not been on discovering new results but on checking the status of proofs of already established results. Georges Gonthier has used this approach to verify a proof of the four color theorem (Gonthier 2008) and a proof of the odd order theorem in group theory (Gonthier et al. 2013), and Thomas Hales has verified a proof of the Jordan curve theorem (Hales 2007). In each case, a number of gaps were found and then traversed. Formal verification of this sort can also reveal other information hidden in the content of ordinary mathematical arguments. Georg Kreisel has described this general process as “unwinding proofs”, while Ulrich Kohlenbach has more recently coined the term “proof mining.” In connection with the methods described above, Avigad writes that

… proof-theoretic methods and insights can be put to use … in the field of automated reasoning and formal verification. Since the early twentieth century, it has been understood that ordinary mathematical arguments can be represented in formal axiomatic theories, at least in principle. The complexity involved in even the most basic mathematical arguments, however, rendered most formalization infeasible in practice. The advent of computational proof assistants has begun to change this, making it possible to formalize increasingly complex mathematical proofs. … [T]he methods can also be used for the more traditional task of verifying ordinary mathematical proofs, and are especially pertinent to cases where proofs rely on computation that is too extensive to check by hand. (Avigad 2007, 7)

Delariviere and Van Kerkhove (2017) point out, however, that while computer methods may play an increasingly important role in proof verification, it is much less clear that such methods can play a correspondingly central role in advancing mathematical understanding.
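For a concrete sense of what such formal verification involves, here is a toy example in the Lean proof assistant (purely illustrative: the verifications mentioned above were carried out in systems such as Coq and HOL Light, and concern proofs incomparably larger than this one):

```lean
-- A machine-checked proof that 0 + n = n, by induction on n. The Lean
-- kernel accepts the script only if every step is licensed by a rule of
-- the system, so an accepted proof contains no untraversed gaps.
theorem zero_add_example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Scaling this discipline from a two-line lemma to something like the odd order theorem is precisely what makes the projects described above such substantial undertakings.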

2.1.3 Diagrams

Another aspect of informal proof that has been the subject of renewed attention in the recent philosophical literature is the role of diagrams (Giaquinto 2007; Shin & Lemon 2008). What is not in dispute is that proofs—especially in geometry but also in other areas ranging from analysis to group theory—are often accompanied by diagrams. One issue concerns whether such diagrams play an indispensable role in the chain of reasoning leading from the premises of a given proof to its conclusion. Prima facie, there would seem to be three possible situations:

  1. The diagrams play no substantive role in the proof and serve merely as ‘illustrations’ of aspects of the subject-matter with which it deals.
  2. In practical terms it is difficult (or even impossible) to grasp the proof without making use of the diagrams, but this indispensability is psychological rather than logical.
  3. The diagrams play an essential role in the logical structure of the proof.

The initial wave of philosophical work done on diagrammatic reasoning focused on Euclid’s Elements, partly because of the centrality and historical importance of this work, and partly because it is so often held up as a canonical example of the deductive method (see, e.g., Mumma 2010). If some or all of the diagrams in the Elements fall under the third option above, then deleting all the diagrams will render many of the proofs invalid. This raises the further question of whether a distinctively diagrammatic form of reasoning can be identified and analyzed, and—if so—whether it can be captured in a purely deductive system. One difficulty for any proposed rigorization is the ‘generalization problem’: how can a proof that is linked to a specific diagram be generalized to other cases? This is intertwined with the issue of distinguishing, in formal terms, between the essential and coincidental features of a given diagram.

More recent work on the role of diagrams in proofs has included a defense of the position that diagrammatic proofs can sometimes be fully rigorous (Azzouni 2013), and exploration of diagram-based reasoning in areas of mathematical practice other than geometry (De Toffoli and Giardino 2014; De Toffoli 2017).

2.2 Justifying Deduction

Even if we restrict attention to the context of justification, a deductive proof yields categorical knowledge only if it proceeds from a secure starting point and if the rules of inference are truth-preserving. Can our confidence that these two conditions obtain also be grounded purely deductively? These conditions will be considered in turn.

2.2.1 Justification of Rules

In one sense, it seems quite straightforward to give a deductive justification for some favored set of rules of inference. It can be shown, for example, that if the premises of an application of Modus Ponens are true then the conclusion must also be true. The problem, at least potentially, is that such justifications typically make use of the very rule which they seek to justify. In the above case: if MP is applied to true premises then the conclusion is true; MP is applied to true premises; hence the conclusion is true. Haack (1976) and others have debated whether the circularity here is vicious or not. One crucial consideration is whether analogous ‘justifications’ can be given for invalid rules, for example Prior’s introduction and elimination rules for ‘tonk’, which also have this feature of using a rule to justify itself.[5] (A closely related issue can be traced back to Lewis Carroll and his classic (1895) paper.)
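For reference, here are Prior’s rules for ‘tonk’, displayed in natural-deduction style (the introduction rule apes that for disjunction, the elimination rule that for conjunction):

\[
\frac{A}{A \;\mathrm{tonk}\; B}\ (\text{tonk-I})
\qquad\qquad
\frac{A \;\mathrm{tonk}\; B}{B}\ (\text{tonk-E})
\]

Chaining the two rules licenses an inference from any statement A to any statement B whatsoever, so the rules are plainly unacceptable; yet a rule-circular ‘justification’ of tonk-E can be constructed on the model of the one given above for Modus Ponens, and this is what puts pressure on the claim that such self-supporting justifications carry any epistemic weight.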

2.2.2 The Status of Axioms

Let us assume, then, that an idealized deductive proof provides one kind of security: the transparency of each step ensures the validity of the argument as a whole, and hence guarantees that if the premises are all true then the conclusion must be true. But what of the axioms that are brought in at the beginning of the proof process? The traditional answer to this question is to claim that the truth of the axioms is secure because the axioms are “self-evident”. This certainly seems to have been the generally accepted view of the axioms of Euclidean geometry, for example. However, this attitude is much less prevalent in contemporary mathematics, for various reasons. Firstly, the discovery of non-Euclidean geometry in the early 19th Century showed that apparent self-evidence, at least in the case of the Parallel Postulate, is no guarantee of necessary truth. Secondly, the increasing range and complexity of mathematical theories—and their axiomatizations—made it much less plausible to claim that each individual axiom was transparently true. Thirdly, many mathematical subfields have become abstracted to a considerable degree from any concrete models, and this has gone hand-in-hand with the tendency for at least some mathematicians to adopt a formalist attitude to the theories they develop. Rather than expressing fundamental truths, on this view axioms serve simply to provide the starting position for a formal game.

The slide towards this sort of formalist attitude to axioms can also be traced through Frege’s logicism. The logicist program sought to show that mathematics is reducible to logic, in other words that mathematical proofs can be shown to consist of logical deductions from logically true premises. For Frege, these logically true premises are definitions of the terms which occur in them. But this again raises the issue of what distinguishes acceptable from unacceptable definitions. The worry here is not just whether our axioms are true but whether they are even consistent (a pitfall which famously befell Frege’s own system). And this is a problem once self-evidence is abandoned as the ‘gold standard’ for axioms, whether we move from here to a formalist view or a logicist view. In both cases, some other bounds on the acceptability of candidate axioms must be provided.

Is there a middle ground, then, between the high standard of self-evidence on the one hand and the ‘anything goes’ attitude on the other? One idea, which can be traced back to Bertrand Russell, is to invoke a version of inference to the best explanation. Russell’s view, plausibly enough, is that the propositions of elementary arithmetic—“2 + 2 = 4”, “7 is prime”, etc.—are much more self-evident than the axioms of whatever logical or set-theoretic system one might come up with to ground them. So rather than viewing axioms as maximally self-evident we ought instead to think of them as being chosen on the basis of their (collective) capacity to systematize, derive, and explain the basic arithmetical facts. In other words, the direction of logical implication remains from axioms to arithmetical facts, but the direction of justification may go the other way, at least in the case of very simple, obvious arithmetical facts. Deriving “2 + 2 = 4” from our set-theoretic axioms does not increase our confidence in the truth of “2 + 2 = 4”, but the fact that we can derive this antecedently known fact (and not derive other propositions which we know to be false) does increase our confidence in the truth of the axioms.
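The Russellian picture can be dramatized with a toy machine-checked derivation (a sketch in the Lean proof assistant, whose inductively defined natural numbers stand in here for whatever foundational axioms are at issue):

```lean
-- "2 + 2 = 4" falls out of the definitions of Nat and + by computation.
example : 2 + 2 = 4 := rfl

-- On the view sketched above, running such a derivation does not make
-- us more confident that 2 + 2 = 4; rather, its success (and the absence
-- of derivations of known falsehoods) lends support to the axioms and
-- definitions from which it flows.
```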

The direction of justification here mirrors the direction of justification in inference to the best explanation. Once we have a measure of confidence in a particular choice of axioms then the direction of justification can also flow in the more conventional direction, in step with the deductive inferences of a proof. This will happen when the theorem proved was not one whose truth was antecedently obvious. Easwaran (2005), Mancosu (2008), and Schlimm (2013) have developed this basic account of axiom choice in different ways. For example, Mancosu argues that an analogous process may underlie the development of new mathematical theories that extend the domain of application or the ontology of previous theories. Making further progress on analyzing this process will depend on giving a satisfactory account of mathematical explanation, and this has become an area of considerable interest in the recent literature on philosophy of mathematics.

Another approach, pursued by Maddy (1988, 1997, 2001, 2011), is to look in more detail at the actual practice of mathematicians and the reasons they give for accepting or rejecting different candidate axioms. Maddy’s main focus is on axioms for set theory, and she argues that there are various theoretical virtues, with no direct link to ‘self-evidence’, which axioms may possess. What these virtues are, and how they are weighted relative to one another, may well vary across different areas of mathematics. Two core virtues which Maddy identifies for set-theoretic axioms are UNIFY (i.e. that they provide a single foundational theory for deciding set-theoretic questions) and MAXIMIZE (i.e. that they not arbitrarily restrict the range of isomorphism types). The issue of axiom choice in set theory has also been taken up in recent work by Lingamneni (2017) and by Fontanella (2019).

2.3 Gödel’s Results

Undoubtedly the most notorious of the limitations on the deductive method in mathematics are those which stem from Gödel’s incompleteness results. Although these results apply only to mathematical theories strong enough to embed arithmetic, the centrality of the natural numbers (and their extensions into the rationals, reals, complexes, etc.) as a focus of mathematical activity means that the implications are widespread.

The precise implications of Gödel’s work should not be overstated, however. The order of the quantifiers is important. What Gödel showed is that, for any consistent, recursively axiomatized formal system, F, strong enough for arithmetic, there are truths expressible in purely arithmetical language which are not provable in F. He did not show that there are arithmetical truths which are unprovable in any formal system. Nonetheless, Gödel’s results did hammer some significant nails into the coffin of one version of the deductive ideal of mathematics. There cannot be a single, recursively axiomatizable formal system for all of mathematics which is (a) consistent, (b) purely deductive, and (c) complete. One line of response to this predicament is to explore options for non-deductive methods of justification in mathematics.
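Schematically, with F ranging over consistent, recursively axiomatized formal systems strong enough for arithmetic, and φ over sentences of arithmetic, what Gödel established is the first of the following claims, not the second:

\[
\forall F\, \exists \varphi\, (\varphi \text{ is true} \;\wedge\; F \nvdash \varphi)
\qquad \text{rather than} \qquad
\exists \varphi\, \forall F\, (\varphi \text{ is true} \;\wedge\; F \nvdash \varphi)
\]

Any particular truth that escapes a given system is provable in some other system (trivially, in the system obtained by adding that very sentence as an axiom); what cannot exist is a single such system that proves them all.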

3. Alternative Non-deductive Methods

3.1 Experimental Mathematics

The role of non-deductive methods in empirical science is readily apparent and relatively uncontroversial (pace Karl Popper). Indeed the canonical pattern of justification in science is a posteriori and inductive. What makes empirical science empirical is the crucial role played by observation, and—in particular—by experiment. A natural starting point, therefore, in a survey of non-deductive methods in mathematics, is to look at the rise of a genre known as “experimental mathematics.” The past 15 years or so have seen the appearance of journals (e.g., Experimental Mathematics), institutes (e.g., the Institute for Experimental Mathematics at the University of Essen), colloquia (e.g., the Experimental Mathematics Colloquium at Rutgers University), and books (e.g., Borwein and Bailey 2003 and 2004) devoted to this theme. These latter authors also argue, in Borwein and Bailey (2015), for the significance of experimental mathematics within mathematical practice more generally, while Sorensen (2016) provides a broader historical and sociological analysis of experimental mathematics.

Against the background of the traditional dichotomy between mathematical and empirical routes to knowledge, the very term “experimental mathematics” seems at best oxymoronic and at worst downright paradoxical. One natural suggestion is that experimental mathematics involves performing mathematical experiments, where the term “experiment” here is construed as literally as possible. This is the approach adopted by van Bendegem (1998). According to van Bendegem, an experiment involves “the manipulation of objects, … setting up processes in the ‘real’ world and … observing possible outcomes of these processes” (van Bendegem 1998, 172). His suggestion is that the natural way to get an initial grip on what a mathematical experiment might be is to consider how an experiment in this paradigmatic sense might have mathematical ramifications.

One example that van Bendegem cites dates back to work done by the 19th-century Belgian physicist Plateau on minimal surface area problems. By building various geometrical shapes out of wire and dipping these wire frames into a soap solution, Plateau was able to answer specific questions about the minimum surface bounding various particular shapes, and—eventually—to formulate some general principles governing the configurations of such surfaces.[6] One way of understanding what is going on in this example is that a physical experiment—the dipping of a wire frame into a soap solution—is producing results that are directly relevant to a certain class of mathematical problem. The main drawback of this way of characterizing experimental mathematics is that it is too restrictive. Examples of the sort van Bendegem cites are extremely rare, hence the impact of mathematical experiments of this sort on actual mathematical practice can only be very limited at best. Moreover, it cannot be only this literal sense of experiment that mathematicians have in mind when they talk about—and do—experimental mathematics.

So much for the most literal reading of “mathematical experiment.” A potentially more fruitful approach is to think in analogical or functional terms. In other words, perhaps “experimental mathematics” is being used to label activities which function within mathematics in a way analogous to the role of experiment in empirical science. Thus mathematical experiments may share some features with literal experiments, but not other features (Baker 2008; McEvoy 2008, 2013; Sorensen 2010; van Kerkhove 2008). Before proceeding with this line of analysis, it may be helpful to look briefly at a case study.

A nice example of current work in experimental mathematics appears in one of the two recent books by Borwein and Bailey (2003, Ch. 4). A real number is said to be normal in base n if every sequence of base-n digits (of any given length) occurs equally often, in the limit, in its base-n expansion. A number is absolutely normal if it is normal in every base. Consider the following hypothesis:

Conjecture: Every non-rational algebraic number is absolutely normal.

Borwein and Bailey used a computer to compute to 10,000 decimal digits the square roots and cube roots of the positive integers smaller than 1,000, and then they subjected these data to certain statistical tests.
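Here is a minimal sketch in Python of this style of digit-frequency experiment (not Borwein and Bailey’s actual code: it examines only √2, and only single-digit frequencies rather than all digit strings):

```python
from collections import Counter
from decimal import Decimal, getcontext

# Expand sqrt(2) -- a non-rational algebraic number -- to 10,000 decimal
# digits and tabulate the digit frequencies. If sqrt(2) is normal in
# base 10, each digit should occur with limiting frequency 1/10.
getcontext().prec = 10_000
digits = str(Decimal(2).sqrt()).replace(".", "")[:10_000]

counts = Counter(digits)
for d in sorted(counts):
    print(d, counts[d], round(counts[d] / len(digits), 4))

# A genuine statistical test (e.g., chi-squared) would then be applied
# to these counts; agreement supports, but of course does not prove,
# the conjecture.
```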

There are a couple of striking features of this example that may point to a more general characterization of experimental mathematics. Firstly, the route from evidence to hypothesis is via enumerative induction. Secondly, it involves the use of computers. In what follows, these two features will be examined in turn.

3.2 Enumerative Induction

In a letter to Euler written in 1742, Christian Goldbach conjectured that all even numbers greater than 2 are expressible as the sum of two primes. Over the following two and a half centuries, mathematicians have been unable to prove Goldbach’s Conjecture (GC). However it has been verified for many billions of examples, and there appears to be a consensus among mathematicians that the conjecture is most likely true. Below is a partial list (as of October 2007) showing the order of magnitude up to which all even numbers have been checked and shown to conform to GC.

Bound       Date   Author
1 × 10^3    1742   Euler
1 × 10^4    1885   Desboves
1 × 10^5    1938   Pipping
1 × 10^8    1965   Stein & Stein
2 × 10^10   1989   Granville
1 × 10^14   1998   Deshouillers
1 × 10^18   2007   Oliveira e Silva
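Verification of the kind recorded in this table is conceptually straightforward, as the following small-scale Python sketch illustrates (the record-setting searches use far more sophisticated sieving and run to vastly larger bounds):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return the primes up to n in order."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

def check_goldbach(bound):
    """Return the first even number in [4, bound] that is not a sum of
    two primes, or None if every one conforms to GC."""
    primes = primes_up_to(bound)
    prime_set = set(primes)
    for n in range(4, bound + 1, 2):
        # Scan primes in increasing order; a decomposition almost always
        # turns up within the first few candidates.
        if not any(n - p in prime_set for p in primes if p <= n // 2):
            return n
    return None

print(check_goldbach(100_000))   # None: no counterexample below 10^5
```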

Despite this vast accumulation of individual positive instances of GC, aided since the early 1960s by the introduction—and subsequent rapid increases in speed—of the digital computer, no proof of GC has yet been found. Not only this, but few number theorists are optimistic that there is any proof in the offing. Fields medalist Alan Baker stated in a 2000 interview: “It is unlikely that we will get any further [in proving GC] without a big breakthrough. Unfortunately there is no such big idea on the horizon.” Also in 2000, publishers Faber and Faber offered a $1,000,000 prize to anyone who proved GC between March 20 2000 and March 20 2002, confident that their money was relatively safe.

What makes this situation especially interesting is that mathematicians have long been confident in the truth of GC. Hardy & Littlewood asserted, back in 1922, that “there is no reasonable doubt that the theorem is correct,” and Echeverria, in a recent survey article, writes that “the certainty of mathematicians about the truth of GC is complete” (Echeverria 1996, 42). Moreover this confidence in the truth of GC is typically linked explicitly to the inductive evidence: for instance, G.H. Hardy described the numerical evidence supporting the truth of GC as “overwhelming.” Thus it seems reasonable to conclude that mathematicians’ belief in GC is grounded in the enumerative inductive evidence.

One distinctive feature of the mathematical case which may make a difference to the justificatory power of enumerative induction is the importance of order. The instances falling under a given mathematical hypothesis (at least in number theory) are intrinsically ordered, and furthermore position in this order can make a crucial difference to the mathematical properties involved. As Frege writes, with regard to mathematics:

[T]he ground [is] unfavorable for induction; for here there is none of that uniformity which in other fields can give the method a high degree of reliability. (Frege, Foundations of Arithmetic)

Frege then goes on to quote Leibniz, who argues that difference in magnitude leads to all sorts of other relevant differences between the numbers:

An even number can be divided into two equal parts, an odd number cannot; three and six are triangular numbers, four and nine are squares, eight is a cube, and so on. (Frege, Foundations of Arithmetic)

Frege also explicitly compares the mathematical and non-mathematical contexts for induction:

In ordinary inductions we often make good use of the proposition that every position in space and every moment in time is as good in itself as every other. … Position in the number series is not a matter of indifference like position in space. (Frege, Foundations of Arithmetic)

As Frege’s remarks suggest, one way to underpin an argument against the use of enumerative induction in mathematics is via some sort of non-uniformity principle: in the absence of proof, we should not expect numbers (in general) to share any interesting properties. Hence establishing that a property holds for some particular number gives no reason to think that a second, arbitrarily chosen number will also have that property.[8] Rather than the Uniformity Principle which Hume suggests is the only way to ground induction, we have almost precisely the opposite principle! It would seem to follow from this principle that enumerative induction is unjustified, since we should not expect (finite) samples from the totality of natural numbers to be indicative of universal properties.

A potentially even more serious problem, in the case of GC and in all other cases of induction in mathematics, is that the sample we are looking at is biased. Note first that all known instances of GC (and indeed all instances it is possible to know) are—in an important sense—small.

In a very real sense, there are no large numbers: Any explicit integer can be said to be “small”. Indeed, no matter how many digits or towers of exponents you write down, there are only finitely many natural numbers smaller than your candidate, and infinitely many that are larger (Crandall and Pomerance 2001, 2).

Of course, it would be wrong to simply complain that all instances of GC are finite. After all, every number is finite, so if GC holds for all finite numbers then GC holds simpliciter.[9] But we can isolate a more extreme sense of smallness, which might be termed minuteness.

Definition: a positive integer, n, is minute just in case n is within the range of numbers we can write down using ordinary decimal notation, including (non-iterated) exponentiation.

Verified instances of GC to date are not just small, they are minute. And minuteness, though admittedly rather vaguely defined, is known to make a difference. Consider, for example, the logarithmic estimate of prime density (i.e. the proportion of numbers less than a given n that are prime), which is known to become an underestimate for large enough n. Let n* be the first number for which the logarithmic estimate is too small. If the Riemann Hypothesis is true, then it can be proven that an upper bound for n* (the first Skewes number) is 8 × 10^370. Though an impressively large number, it is nonetheless minute according to the above definition. However if the Riemann Hypothesis is false then our best known upper bound for n* (the second Skewes number) is 10↑10↑10↑10↑3.[10] The necessity of inventing an ‘arrow’ notation here to represent this number tells us that it is not minute. The second part of this result, therefore, although admittedly conditional on a result that is considered unlikely (viz. the falsity of RH), implies that there is a property which holds of all minute numbers but does not hold for all numbers. Minuteness can make a difference.
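The contrast is visible even at the low end. Here is a quick illustration (assuming the sympy library for the prime-counting function primepi and the logarithmic integral li) of the fact that, throughout the minute range anyone has directly checked, the logarithmic-integral estimate runs above the true count:

```python
from sympy import li, primepi

# For every directly checked x, li(x) overestimates the prime count
# pi(x); Littlewood proved that the difference nonetheless changes sign
# infinitely often, with the first sign change bounded by the Skewes
# numbers discussed in the text.
for k in range(3, 8):
    x = 10 ** k
    print(x, int(primepi(x)), float(li(x)))
```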

What about the seeming confidence that number theorists have in the truth of GC? Echeverria (1996) discusses the important role played by Cantor’s publication, in 1894, of a table of values of the Goldbach partition function, G(n), for n = 2 to 1,000 (Echeverria 1996, 29–30). The partition function measures the number of distinct ways in which a given (even) number can be expressed as the sum of two primes. Thus G(4) = 1, G(6) = 1, G(8) = 1, G(10) = 2, etc. This shift of focus onto the partition function coincided with a dramatic increase in mathematicians’ confidence in GC. What became apparent from Cantor’s work is that G(n) tends to increase as n increases. Note that what GC amounts to in this context is that G(n) never takes the value 0 (for any even n greater than 2). The overwhelming impression made by data on the partition function is that it is highly unlikely for GC to fail for some large n. For example, for numbers on the order of 100,000, there are always at least 500 distinct ways to express each even number as the sum of two primes!
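The partition function is easy to compute for small arguments, as in the following self-contained Python sketch (Cantor’s table was of course computed by hand):

```python
def is_prime(m):
    """Trial division; adequate for small m."""
    if m < 2:
        return False
    f = 2
    while f * f <= m:
        if m % f == 0:
            return False
        f += 1
    return True

def G(n):
    """Goldbach partition function: the number of ways of writing the
    even number n as p + q with p <= q both prime. GC amounts to the
    claim that G(n) > 0 for all even n > 2."""
    return sum(1 for p in range(2, n // 2 + 1)
               if is_prime(p) and is_prime(n - p))

print([G(n) for n in (4, 6, 8, 10)])   # [1, 1, 1, 2], as in the text
print(G(100_000))                      # several hundred decompositions
```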

However, as it stands these results are purely heuristic. The thirty years following Cantor’s publication of his table of values (described by Echeverria as the “2nd period” of research into GC) saw numerous attempts to find an analytic expression for G(n). If this could be done then it would presumably be comparatively straightforward to prove that this analytic function never takes the value 0 (Echeverria 1996, 31). By around 1921, pessimism about the chances of finding such an expression led to a change of emphasis, and mathematicians started directing their attention to trying to find lower bounds for G(n). This too has proved unsuccessful, at least to date.

Thus consideration of the partition function has not brought a proof of GC any closer. However it does allow us to give an interesting twist to the argument given earlier in this section. The data on the partition function suggest that the hardest test cases for GC are likely to occur among the smallest numbers; hence the inductive sample for GC is biased, but it is biased against the chances of GC. Mathematicians’ confidence in the truth of GC is not based purely on enumerative induction. The values taken by the partition function indicate that the sample of positive instances of GC is indeed biased, and biased samples do not—as a general rule—lend much support to a hypothesis. But in this particular case the nature of the bias makes the evidence stronger, not weaker. So it is possible to argue that enumerative induction is unjustified while simultaneously agreeing that mathematicians are rational to believe GC on the basis of the available evidence. (Note that there is a delicate balance to maintain here because evidence for the behavior of the partition function is itself non-deductive. However the impression that G(n) is likely to be bounded below by some increasing analytic function is not based on enumerative induction per se, so the justification—while non-deductive—is not circular.)

The upshot of the above discussion, albeit based on a single case study, is that mathematicians ought not to—and in general do not—give weight to enumerative induction per se in the justification of mathematical claims. (To what extent enumerative induction plays a role in the discovery of new hypotheses, or in the choice of what open problems mathematicians decide to work on, is a separate issue which has not been addressed here.) More precisely, the thesis is in two parts:

  1. Enumerative induction ought not to increase confidence in universal mathematical generalizations (over an infinite domain).
  2. Enumerative induction does not (in general) lead mathematicians to be more confident in the truth of the conclusion of such generalizations.

3.3 Computer Proofs

A striking feature of contemporary work in experimental mathematics is that it is done using computers. Is this reliance on complex pieces of electronics what makes the field ‘experimental’? If one looks at what gets published in contemporary journals, books, and conferences devoted to experimental mathematics, the impression is that all the items are closely bound up with computers. For example, there does not appear to be a single paper published in more than a decade’s worth of issues of Experimental Mathematics that does not involve the use of computers. What about the kinds of examples which mathematicians tend to offer as paradigms of experimental mathematics? Here the data is less clear. On the one hand, an informal survey suggests that the majority of such exemplars do involve the explicit use of computers. On the other hand, it is not uncommon for mathematicians also to cite one or more historical examples, from well before the computer age, to illustrate the purported pedigree of the subdiscipline.

The biggest practice-based challenge to equating experimental mathematics with computer-based mathematics comes from what self-styled experimental mathematicians say about their nascent discipline. For when mathematicians self-consciously reflect on the notion of experimental mathematics, they tend to reject the claim that computer use is a necessary feature. For example, the editors of the journal Experimental Mathematics—in their “statement of philosophy” concerning the scope and nature of the journal—make the following remarks:

The word “experimental” is conceived broadly: many mathematical experiments these days are carried out on computers, but others are still the result of pencil-and-paper work, and there are other experimental techniques, like building physical models. (“Aims and Scope”, Experimental Mathematics—see Other Internet Resources)

And here is another passage with a similar flavor from mathematician Doron Zeilberger:

[T]raditional experimental mathematics … has been pursued by all the great, and less-great, mathematicians through the centuries, using pencil-and-paper. (Gallian and Pearson 2007, 14)

It seems fair to say that tying experimental mathematics to computer use fits well with what contemporary experimental mathematicians do but not so well with what they say.[11]

A second problem with the proposed characterization is more philosophical in nature. Consider another widely-cited example of experimental mathematics which arises in connection with Goldbach’s Conjecture. As of April 2007, all even numbers up to 10^18 had been verified to conform to GC, and this project (under the direction of Oliveira e Silva) is ongoing. This massive computation task is generally considered to be a paradigm example of experimental mathematics. And it seems clear that computers are playing an essential role here: no mathematician, or group of mathematicians, could hope to duplicate 10^18 calculations by hand.

In the current context, the central question is not whether computer-based mathematics is ‘experimental’ but whether it is—at least sometimes—non-deductive. In one sense, of course, all of the individual calculations performed by a computer are deductive, or at least they are isomorphic to the operations of a purely deductive formal system. When a computer verifies an instance of GC, this verification is completely deductive. We may then separate out two distinct questions. Firstly, are these computations playing a non-deductive role in some larger mathematical argument? And, secondly, are the beliefs we form directly from the results of computer computations deductively grounded beliefs? The first of these questions does not turn on anything specific to computers, and hence collapses back to the issue of enumerative induction discussed in Section 3.2 above. The second question will be examined below.

Philosophical discussion of the status of computer proofs was prompted in large part by Appel and Haken’s computer-based proof of the Four Color Theorem in 1976. In his (1979), Tymoczko argues—controversially—that mathematical knowledge based on computer proofs is essentially empirical in character. This is because such proofs are not a priori, not certain, not surveyable, and not checkable by human mathematicians. In all these respects, according to Tymoczko, computer proofs are unlike traditional ‘pencil-and-paper’ proofs. Concerning surveyability, Tymoczko writes:

A proof is a construction that can be looked over, reviewed, verified by a rational agent. We often say that a proof must be perspicuous, or capable of being checked by hand. It is an exhibition, a derivation of the conclusion, and it needs nothing outside itself to be convincing. The mathematician surveys the proof in its entirety and thereby comes to know the conclusion. (Tymoczko 1979, 59)

Assume for sake of argument that the computer proof in question is deductively correct but is also unsurveyable in the above sense. Does our decision to rely on the output of the computer here constitute a non-deductive method? One way of viewing this kind of example is as driving a wedge between a deductive method and our non-deductive access to the results of that method. Compare, for instance, being told of a particular mathematical result by an expert mathematician (with a good track record). Is this a ‘non-deductive method’?[12]

3.4 Probabilistic Proofs

There is a small, but growing, subset of mathematical methods which are essentially probabilistic in nature. In the context of justification, these methods do not deductively imply their conclusion but rather establish that there is some (often precisely specifiable) high probability that the conclusion is true. Philosophical discussion of these methods began with Fallis (1997, 2002), while Berry (2019) is a useful recent contribution to the debate.

One type of probabilistic method links back to the earlier discussion of experimental mathematics in that it involves performing experiments in a quite literal sense. The idea is to harness the processing power of DNA to effectively create a massively parallel computer for solving certain otherwise intractable combinatorial problems. The best known of these is the directed ‘Hamiltonian path’ problem (a close relative of the ‘Traveling Salesman’ problem), which involves determining whether there is some possible route through the nodes of a graph connected by unidirectional arrows that visits each node exactly once. Adleman (1994) shows how the problem can be coded using strands of DNA which can then be spliced and recombined using different chemical reactions. The appearance of certain longer DNA strands at the end of the process corresponds to the finding of a solution path through the graph. Probabilistic considerations come in most clearly in the case where no longer DNA strands are found. This indicates that there is no path through the graph, but even if the experiment is carried out correctly the support here falls short of full certainty. For there is a small chance that there is a solution but that it fails to be coded by any DNA strand at the start of the experiment.

There are also probabilistic methods in mathematics which are not experimental in the above sense. For example, there are ‘witnessing’ properties of composite (i.e. non-prime) numbers which can be shown to hold in relation to about half of the numbers less than a given composite number. If various numbers smaller than N are selected at random and none of them bears this relation to N, then it follows that N is almost certainly prime. The level of probability here can be precisely calculated, and can be made as high as needed by picking more ‘witness’ numbers to test.
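Here is a sketch of one such method in Python. It implements the Miller–Rabin test rather than the ‘about half’ test alluded to above: for Miller–Rabin it can be shown that at least three quarters of the candidate numbers below an odd composite N are witnesses to its compositeness.

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin test. A False answer is deductively certain (a
    witness to compositeness has been found); a True answer is only
    probabilistically supported, with error probability at most
    (1/4)**k when n is an odd composite."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False    # a is a witness: n is certainly composite
    return True             # prime with overwhelming probability

print(is_probable_prime(2**127 - 1))   # True: a Mersenne prime
print(is_probable_prime(2**128 + 1))   # False: a known composite
```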

Note that these sorts of probabilistic methods contain plenty of purely deductive reasoning. Indeed, in the second example, the fact that the probability of N being prime is (say) 0.99 is established purely deductively. Nonetheless, there is general consensus in the mathematical community that such methods are not acceptable substitutes for deductive proof of the conclusion. Fallis (1997, 2002) argues that this rejection is not reasonable because any property of probabilistic methods that can be pointed to as being problematic is shared by some proofs that the mathematical community does accept. Fallis’s focus is on establishing truth as the key epistemic goal of mathematics. However it seems plausible that one major reason for mathematicians’ dissatisfaction with probabilistic methods is that they do not explain why their conclusions are true. In addition, Easwaran argues, against Fallis, that there is a property, which he terms ‘transferability’, that probabilistic proofs lack and acceptable proofs have (Easwaran 2009; Jackson 2009). Fallis (2011) is a reply to some of these objections.

On the other hand, there may be cases where the bare truth or falsity of a claim is important even in the absence of accompanying explanation. For example, one could imagine a situation in which an important and interesting conjecture—say the Riemann Hypothesis—is being considered, and a probabilistic method is used to show that some number is very likely a counterexample to it. It is interesting to speculate what the reaction of the mathematical community might be to this situation. Would work on trying to prove RH cease? Would it continue until a rigorous deductive proof of the counterexample is constructed?

4. Summary / Conclusions

It is not clear why one should expect the various non-deductive methods used in mathematics to share any substantive features other than their non-deductiveness. Philosophers looking at the role of non-deductive reasoning in the context of discovery have often talked as if there is some unity to be found (for example, the subtitle of Lakatos’s Proofs and Refutations is “the Logic of Mathematical Discovery”). More likely, the array of non-deductive methods is diverse and heterogeneous. (Compare Stanislaw Ulam’s remark that “the study of non-linear physics is like the study of non-elephant biology.”)

Work by contemporary philosophers of mathematics is continuing to push the study of non-deductive mathematical methods in new directions. One area of interest is ‘mathematical natural kinds’ and whether such a notion can be used to ground the use of analogy in mathematical reasoning (Corfield 2004 [Other Internet Resources]). Another area being investigated is the putative role of heuristic principles in mathematics. (Much of this work traces back to Pólya (1945).)

A background issue in all of these debates concerns the extent to which each particular non-deductive method plays an essential role in the justificatory practices of mathematics. This question arises at both a local and global level. At the local level, a particular piece of reasoning to justify a given result may be unavoidably non-deductive, yet the result may also be established by some other, purely deductive piece of reasoning. At the global level, it may be that our only justification for certain mathematical claims is non-deductive. The extent to which our use of non-deductive methods is due to limitations in practice rather than limitations in principle remains an issue for further investigation.

Bibliography

  • Adleman, L., 1994, “Molecular Computation of Solutions to Combinatorial Problems”, Science, 266: 1021–1024.
  • Andersen, L., 2018, “Acceptable Gaps in Mathematical Proofs”, Synthese, URL = <https://doi.org/10.1007/s11229-018-1778-8>.
  • Avigad, J., 2006, “Mathematical Method and Proof”, Synthese, 153: 105–159.
  • –––, 2007, “5 Questions”, in Philosophy of Mathematics: 5 Questions, V. Hendricks & H. Leitgeb (eds.), Copenhagen: Automatic Press, pp. 1–10.
  • Azzouni, J., 2013, “The Relationship of Derivations in Artificial Languages to Ordinary Rigorous Mathematical Proof”, Philosophia Mathematica, 21: 247–254.
  • –––, 2013, “That We See that Some Diagrammatic Proofs are Perfectly Rigorous”, Philosophia Mathematica, 21: 323–338.
  • Baker, A., 2007, “Is There a Problem of Induction for Mathematics?”, in Mathematical Knowledge, M. Leng, A. Paseau, & M. Potter (eds.), Oxford: Oxford University Press, pp. 59–73
  • –––, 2008, “Experimental Mathematics”, Erkenntnis, 68: 331–344.
  • Berry, D., 2016, “Proof and the Virtues of Shared Enquiry”, Philosophia Mathematica, 26: 112–130.
  • –––, 2019, “Should Mathematicians Play Dice?”, Logique et Analyse, 246: 135–160.
  • Borwein, J., & D. Bailey, 2003, Mathematics by Experiment: Plausible Reasoning for the 21st Century, Natick, MA: A K Peters.
  • –––, 2004, Experimentation in Mathematics: Computational Paths to Discovery, Natick, MA: A K Peters.
  • –––, 2015, “Experimental Mathematics as an Ontological Game Changer: the Impact of Modern Mathematical Computation Tools on the Ontology of Mathematics”, in Mathematics, Substance and Surmise, E. Davis & P. Davis (eds.), Springer, pp. 25–68.
  • Brown, J., 2008, Philosophy of Mathematics: a Contemporary Introduction to the World of Proofs and Pictures, 2nd Edition, New York: Routledge.
  • Burgess, J., 1992, “Proofs about Proofs: a Defense of Classical Logic. Part 1: the Aims of Classical Logic”, in Proof, Logic and Formalization, M. Detlefsen (ed.), London and New York: Routledge, pp. 8–23.
  • Carroll, L. [C. L. Dodgson], 1895, “What the Tortoise Said to Achilles”, Mind, 4: 278–280.
  • Corfield, D., 2003, Towards a Philosophy of Real Mathematics, Cambridge: Cambridge University Press.
  • Courant, R., & H. Robbins, 1941, What Is Mathematics?, Oxford: Oxford University Press.
  • Crandall, R., & C. Pomerance, 2001, Prime Numbers: a Computational Perspective, New York: Springer-Verlag.
  • De Toffoli, S., & V. Giardino, 2014, “Forms and Roles of Diagrams in Knot Theory”, Erkenntnis, 79(4): 829–842.
  • De Toffoli, S., 2017, “‘Chasing’ the Diagram: the Use of Visualizations in Algebraic Reasoning”, Review of Symbolic Logic, 10: 158–186.
  • Delariviere, S., & Van Kerkhove, B., 2017, “The Artificial Mathematician Objection: Exploring the (Im)possibility of Automating Mathematical Understanding”, in Humanizing Mathematics and its Philosophy, B. Sriraman (ed.), Springer International Publishing, pp. 173–198.
  • Descartes, R., 1627–28, Rules for the Direction of the Mind, in Descartes: Selections, R. Eaton (tr.), New York: Charles Scribner’s Sons, 1927, pp. 38–83.
  • Detlefsen, M. (ed.), 1992, Proof, Logic and Formalization, London and New York: Routledge.
  • Dummett, M., 1978, “Wang’s Paradox”, in Truth and Other Enigmas, London: Duckworth, pp. 248–268.
  • Easwaran, K., 2005, “The Role of Axioms in Mathematics”, Erkenntnis, 68: 381–391.
  • –––, 2009, “Probabilistic Proofs and Transferability”, Philosophia Mathematica, 17: 341–362.
  • Echeverria, J., 1996, “Empirical Methods in Mathematics. A Case-Study: Goldbach’s Conjecture”, in Spanish Studies in the Philosophy of Science, G. Munévar (ed.), Dordrecht: Kluwer, pp. 19–55.
  • Fallis, D., 1997, “The Epistemic Status of Probabilistic Proof”, Journal of Philosophy, 94(4): 165–186.
  • –––, 2002, “What Do Mathematicians Want? Probabilistic Proofs and the Epistemic Goals of Mathematicians”, Logique et Analyse, 45: 373–388.
  • –––, 2003, “Intentional Gaps in Mathematical Proofs”, Synthese, 134: 45–69.
  • –––, 2011, “Probabilistic Proofs and the Collective Epistemic Goals of Mathematicians”, in Collective Epistemology, H.B. Schmid, D. Sirtes & M. Weber (eds.), Ontos Verlag, pp. 157–176.
  • Fontanella, L., 2019, “How to Choose New Axioms for Set Theory?”, in Reflections on the Foundations of Mathematics, D. Sarikaya, D. Kant & S. Centrone (eds.), Springer Verlag.
  • Franklin, J., 1987, “Non-Deductive Logic in Mathematics”, British Journal for the Philosophy of Science, 38: 1–18.
  • Frege, G., 1884, Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl, Breslau: W. Koebner. Translated as The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, by J.L. Austin, Oxford: Blackwell, second revised edition, 1974. Reprinted 1980.
  • Gallian, J., & M. Pearson, 2007, “An Interview with Doron Zeilberger”, FOCUS (the newsletter of the Mathematical Association of America), 27(5): 14–17.
  • Giaquinto, M., 2007, Visual Thinking in Mathematics: an Epistemological Study, Oxford: Oxford University Press.
  • Gonthier, G., 2008, “Formal Proof – The Four-Color Theorem”, Notices of the American Mathematical Society, 55(11): 1382–1393.
  • Gonthier, G., et al., 2013, “A Machine-Checked Proof of the Odd Order Theorem”, in Interactive Theorem Proving: 4th International Conference Proceedings, S. Blazy, C. Paulin-Mohring & D. Pichardie (eds.), Berlin / Heidelberg: Springer-Verlag, pp. 163–179.
  • Haack, S., 1976, “The Justification of Deduction”, Mind, 85: 112–119.
  • Hales, T., 2007, “The Jordan Curve Theorem, Formally and Informally”, American Mathematical Monthly, 114(10): 882–894.
  • Jackson, J., 2009, “Randomized Arguments are Transferable”, Philosophia Mathematica, 17: 363–368.
  • Lakatos, I., 1976, Proofs and Refutations, Cambridge: Cambridge University Press.
  • –––, 1980, “What Does a Mathematical Proof Prove?”, in Mathematics, Science and Epistemology (Imre Lakatos: Philosophical Papers, Volume 2), J. Worrall & G. Currie (eds.), Cambridge: Cambridge University Press, pp. 61–69.
  • Lingamneni, S., 2017, “Can we Resolve the Continuum Hypothesis?”, Synthese, URL = <https://doi.org/10.1007/s11229-017-1648-9>.
  • Maddy, P., 1988, “Believing the Axioms. I & II”, Journal of Symbolic Logic, 53(2): 481–511, 736–764.
  • –––, 1997, Naturalism in Mathematics, Oxford: Oxford University Press.
  • –––, 2001, “Some Naturalistic Remarks on Set Theoretic Method”, Topoi, 20: 17–27.
  • –––, 2011, Defending the Axioms: On the Philosophical Foundations of Set Theory, Oxford: Oxford University Press.
  • Mancosu, P., 2008, “Mathematical Explanation: Why it Matters”, in The Philosophy of Mathematical Practice, P. Mancosu (ed.), Oxford: Oxford University Press, pp. 134–149.
  • McEvoy, M., 2008, “The Epistemological Status of Computer Proofs”, Philosophia Mathematica, 16: 374–387.
  • –––, 2013, “Experimental Mathematics, Computers and the A Priori”, Synthese, 190: 397–412.
  • Mumma, J., 2010, “Proofs, Pictures, and Euclid”, Synthese, 175: 255–287.
  • Paseau, A., 2015, “Knowledge of Mathematics without Proof”, British Journal for the Philosophy of Science, 66: 775–799.
  • Pólya, G., 1945, How to Solve It, Princeton: Princeton University Press.
  • Schlimm, D., 2013, “Axioms in Mathematical Practice”, Philosophia Mathematica, 21: 37–92.
  • Shin, S., & O. Lemon, 2008, “Diagrams”, The Stanford Encyclopedia of Philosophy (Winter 2008 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2008/entries/diagrams/>.
  • Sorensen, H.K., 2010, “Exploratory Experimentation in Experimental Mathematics: a Glimpse at the PSLQ Algorithm”, in Philosophy of Mathematics: Sociological Aspects and Mathematical Practice, B. Löwe & T. Müller (eds.), London: College Publications, pp. 341–360.
  • –––, 2016, “‘The End of Proof’? The Integration of Different Mathematical Cultures as Experimental Mathematics Comes of Age”, in Mathematical Cultures, B. Larvor (ed.), Cham: Birkhäuser, pp. 139–160.
  • Steiner, M., 1993, “Review of Proof, Logic and Formalization”, Journal of Symbolic Logic, 58(4): 1459–1462.
  • Tennant, N., 2005, “Rule Circularity and the Justification of Deduction”, Philosophical Quarterly, 55: 625–648.
  • te Riele, H., 1987, “On the Sign of the Difference π(x) − li(x)”, Mathematics of Computation, 48: 323–328.
  • Tymoczko, T., 1979, “The Four-Color Problem and Its Philosophical Significance”, Journal of Philosophy, 76(2): 57–83.
  • van Bendegem, J., 1998, “What, if Anything, is an Experiment in Mathematics?”, in Philosophy and the Many Faces of Science, D. Anapolitanos, et al. (eds.), London: Rowman & Littlefield, pp. 172–182.
  • van Kerkhove, B., & J. van Bendegem, 2008, “Pi on Earth”, Erkenntnis, 68: 421–435.

Other Internet Resources

Copyright © 2020 by
Alan Baker <abaker1@swarthmore.edu>
