Supplement to Rudolf Carnap

C. Inductive Logic

From 1942 until his death in 1970, Carnap devoted the bulk of his time and energy to the development of a new form of inductive logic. Carnap’s motivations in following this new direction so late in his career are discussed in the main entry (Section 8.2); the bifurcation of the explication of the concept of probability into the two concepts probability\(_1\) (epistemic probability) and probability\(_2\) (statistical probability understood as long-run relative frequency) is taken as a prime example of explication in the supplement on Methodology (Section 1). In that classification, Carnap’s own focus was entirely on probability\(_1\).

Unlike probability\(_2\), which is an empirical matter (as in “the relative frequency of lung cancer patients amongst male Germans under 30 is…”), confirmation or probability\(_1\) is framework-relative for Carnap. In particular, when a confirmation measure P is uniquely determined by a framework, then either “\(P(H \mid E) = r\)” is analytically true or “\(P(H \mid E) \neq r\)” is analytically true in the framework (see Carnap 1950b: 34). Just as he defines linguistic frameworks as supplying deductive logical rules that are partially constitutive of these frameworks (see the main entry (Section 1.2)), in his work on confirmation beginning in the 1940s, Carnap constructs yet more comprehensive frameworks which come equipped also with rules of inductive logic. These inductive rules are meant to be continuous with the deductive ones: e.g., logical truths, or analytic sentences in the object language of a framework more generally, are to be assigned the maximal value of (absolute) confirmation, 1, whatever the evidence (see supplement on Reconstruction of Scientific Theories (Sections 1 and 5)). That is because these statements are given by—and partially constitutive of—the framework, which is why they could not be disconfirmed under the very same framework. This methodological status of what one might call “relative aprioricity” (in the sense of Friedman 1999), which frameworks possess in Carnap’s reconstructions of science (see supplement on Carnap versus Quine on the Analytic-Synthetic Distinction), carries over to a framework’s confirmation measures, which make the empirical confirmation or disconfirmation of synthetic sentences in the framework possible in the first place. (As Carnap says in §2 of “Empiricism, Semantics and Ontology” (1950a), accepting a linguistic framework also means “to accept rules… for testing, accepting, or rejecting” its statements, all of which concern applications of the concept of confirmation.) The same holds for inductive inference, which for Carnap is nothing but the determination of a degree of confirmation.

For this reason, confirmation measures as determined by a Carnapian framework should not be wholly identified with modern Bayesian subjective degree-of-belief functions. From Carnap’s point of view, rather, a scientist’s personal degree-of-belief function would be rationally reconstructed as resulting from updating a framework’s confirmation measure by evidence E, where the framework is a priori relative to the scientist’s theory (again in Friedman’s [1999] sense), and where E is the total relevant evidence available to the scientist or her scientific community at the time. (See Carnap (1950b: 211 and 214) for the corresponding statement of his famous “Requirement of Total Evidence”, which Carnap regards as belonging to the methodology of induction.) It is really \(P(H \mid E)\) that serves as an agent’s fair betting quotient at the time at which E is the total relevant evidence for that agent. Brössel (2012) argued that the relative-aprioricity interpretation of Carnapian confirmation measures also points to a way out of Glymour’s (1980) well-known “Old Evidence Problem”, which threatens standard Bayesian accounts of confirmation: the problem is that “old evidence” E, the subjective probability \(P(E)\) of which is 1 or close to 1, could no longer confirm a hypothesis H, since \(P(H \mid E)\) and \(P(H)\) would be identical or at least approximately equal as given by the subjective probability measure P. In contrast, if a scientist working in a framework relies on an “initial” framework-relative confirmation measure P in order to determine degrees of confirmation, then E may count as confirming evidence even when E is already available to her, since the “initial” framework-relative probability \(P(E)\) will normally be far from 1.

This said, many of the standard features of the Bayesian interpretation of probability can already be found in the Logical Foundations, and they do apply to confirmation measures once they have been updated by evidence: e.g., Carnap uses the interpretation of probabilities as fair betting quotients in order to justify the axioms of probability (LFP: 165); he maintains that rational practical decisions are to be based on estimates of frequencies (not on frequencies themselves, which would be probability\(_2\); LFP: 250), on which we rely when we decide rationally about actions by maximizing their estimated utilities (LFP: 260); and in §104 he proves that (averaged) degrees of confirmation coincide with estimated relative truth frequencies, anticipating, to some extent, modern epistemic justifications for Bayesianism (as in Joyce 1998 and Pettigrew 2016). For Carnap, inductive logic does not just serve science but also constitutes “a guide to life” (LFP: 161, 247), in line with the pragmatic tradition in the Bayesian literature (e.g., Jeffrey 1965, 1992). Accordingly, in his later work, logical probability functions are also considered as “rational credibility functions” (Carnap 1963b: 971), though not as personal credence functions; we will return to this issue at the end of this section. Carnap is also very explicit that if personal belief is to be rationally reconstructed, this should be done probabilistically on a numerical scale, rather than in terms of all-or-nothing acceptance on a classificatory scale: see “On Rules of Acceptance” (Carnap 1968b). (On page 256 of the Logical Foundations he also gives what might be the first statement of a variant of the Lottery Paradox, which became so prominent later through Kyburg (1961). However, Carnap’s version of the lottery does not focus on the alleged closure of all-or-nothing acceptance under conjunction, but on the questionable consequences of acting as if highly likely statements were known for certain.)

A more apt comparison: both older (e.g., Jaynes 1968) and recent (e.g., Jon Williamson 2010) proponents of objective Bayesianism share Carnap’s focus on prior or “initial” probabilities on which evidence ought to have an “objective” impact, which is why Carnap’s take on probabilistic confirmation may well be called an “objective Bayesian” one (though in some ways his later work on the topic may have moved in a more subjectivist direction—see Sznajder 2016, 2018).

Similarly, Carnapian probability measures are closely related to David Lewis’s (1980) “initial” credence functions of agents at the beginning of their epistemic lives before any learning has taken place (except that Carnap would only use such a psychological description as a pedagogical device by which the logical set-up could be explained more easily). For the same reason, Carnapian probability measures also resemble Timothy Williamson’s (1998) “initial” probability measure P which Williamson takes to measure “something like the intrinsic plausibility of hypotheses prior to investigation” (1998: 91). Indeed, both Lewis and Williamson cite Carnap early in their papers. This said, Lewis (1980: 263) rejects any logical or objective connotations of the term “initial” when he states that

Carnap… did well to distinguish two concepts of probability… I do not think Carnap chose quite the right two concepts, however. In place of his [Carnap’s] “degree of confirmation”, I would put credence or degree of belief; in place of his “relative frequency in the long run”, I would put chance.

Just as Lewis is interested in so-called reflection principles for subjective probability and objective chance, Carnap (1950b) is interested in reflection principles according to which probability\(_1\) is an estimate of probability\(_2\)—compare page 173 of the Logical Foundations. Unlike Carnap with his theory of probability\(_1\), however, Lewis does not expand on the theory of “initial” probabilities in any detail. Williamson (1998: 91) distinguishes his project from Carnap’s by stating that, in contrast with Carnap’s view, “P is not assumed to be syntactically definable”. Williamson himself does not say much about “initial” probabilities, and his criticism of Carnap’s account is a bit misleading: Carnap indeed defines, in purely syntactic terms, a uniquely determined confirmation measure in the Appendix of the Logical Foundations of Probability; see below. However, in the main parts of the Logical Foundations, as well as in his later work, he states only axiomatic constraints on what Williamson would call “initial” confirmation measures; and, typically, these constraints are expressed in semantic rather than syntactic terms.

Last but certainly not least, by tying confirmation to linguistic frameworks that are (partially) defined logically and semantically, Carnapian confirmation measures may be regarded as logical or semantic probability measures. (See §8 of the Logical Foundations on “The Semantical Concepts of Confirmation”; as far as the modern literature is concerned, compare Roeper & Leblanc 1999.) Indeed, much like Keynes (1921) before him, Carnap (1950b: 206) suggests that conditional degrees of confirmation may be regarded as numerical measures of partial logical implication, that is, partial preservation of truth. Even more importantly, Carnapian “initial” framework-relative confirmation measures are logical in a similar sense as logical concepts in deductive logic are (see supplements on Reconstruction of Scientific Theories (Section 6) and Aufbau (Section 2) for Carnap’s account of logical concepts and his structuralist applications of logical concepts): as we will explain now in more detail, they are invariant under isomorphism. (Maher 2010 argues that Carnapian probability measures are logical in the sense of being definable by purely mathematical means: but that is only a necessary condition, since one can define, in purely mathematical terms, probability measures that are not isomorphism-invariant.)

In the Logical Foundations, Carnap considers confirmation measures that assign probabilistic degrees of confirmation to sentences in a formal first-order language. (In his later work (1971a,b, 1980) he would follow the more standard mathematical treatment of probability by assigning probabilities to members of a set-theoretic algebra of events or propositions; sentences in a formal language would then be interpreted to express set-theoretic events or propositions in such an algebra.) The predicates of any such language come with an intended qualitative and observational interpretation; Carnap does not treat theoretical terms in his probabilistic work (although in an interview from 1964 he regards this as an important step to be taken in the future: see Carnap 1964). In the Appendix to the Logical Foundations, he further restricts these predicates to unary ones, although elsewhere in the Logical Foundations he is also interested in the more general n-ary case. In addition to individual variables, the vocabulary of any such language includes individual constants \(a_1\), \(a_2\),… as singular terms, such that two distinct constants are presumed to denote distinct objects “by logic”. (Thus, identity statements between syntactically distinct constants are regarded as logically false; the same convention was used in Carnap (1946, 1947)—compare the supplement on Semantics.) One may think of the denotations of these individual constants as being fixed from the start, which, however, does not mean that an agent whose hypotheses are assigned degrees of confirmation needs to be aware of what the constants refer to. Carnap considers languages with finitely many, say, N of these individual constants (the languages \(L_N\)), and a language with infinitely many of them (the language \(L_{\infty}\)). The logical connectives and quantifiers are the standard ones, where quantification is understood as substitutional; e.g., universal quantification in \(L_{\infty}\) corresponds to an infinite conjunction involving all substitution instances by arbitrary individual constants. Ultimately, Carnap will require the probabilities for quantified sentences in \(L_{\infty}\) to be given by limits of probabilities of sentences in \(L_N\) when N goes to infinity, which has the consequence that a universally quantified sentence may receive a probability of 0 (see, e.g., LFP: 571) even when it may be the logical reconstruction of an empirical law hypothesis to which a physicist might want to assign a positive subjective probability: this is one of the problems with Carnap’s account. (See Popper 1935 [1959: §80] and Earman 1992. Later, Hintikka 1966, Kuipers 1978, and Hintikka & Niiniluoto 1980 suggested ways in which this problem could be avoided. Carnap himself circumvents the problem by explaining the confirmation of a universal law in terms of the confirmation of the “next”, so far unobserved, single-case instance: see LFP: 571–575. Our discussion here will bracket this issue and focus on the languages \(L_N\) and their finitely many constants.) The semantics of these languages is developed just as in Meaning and Necessity (see the supplement on Semantics): so-called state-descriptions, which are consistent and complete sets of either atomic sentences or negations of atomic sentences (for \(L_{\infty}\)), or the corresponding conjunctions of atomic sentences and negations of atomic sentences (for \(L_N\)), serve as possible worlds at which sentences can be evaluated.
On the same basis, intensions may be assigned to various types of expressions in these languages, and truth can be defined by exploiting the actual (intended) interpretation of singular terms and predicates.

For instance: let us assume \(L_4\) to be given by precisely one unary predicate B and four individual constants a, b, c, d. Then there are precisely 16 state-descriptions:

\[ \begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} B(a) & \amp & B(b) & \amp & B(c) & \amp & B(d)\\ \neg B(a) & \amp & B(b) & \amp & B(c) & \amp & B(d)\\ B(a) & \amp & \neg B(b) & \amp & B(c) & \amp & B(d)\\ B(a) & \amp & B(b) & \amp & \neg B(c) & \amp & B(d)\\ B(a) & \amp & B(b) & \amp & B(c) & \amp & \neg B(d)\\ \neg B(a) & \amp & \neg B(b) & \amp & B(c) & \amp & B(d)\\ \neg B(a) & \amp & B(b) & \amp & \neg B(c) & \amp & B(d)\\ \neg B(a) & \amp & B(b) & \amp & B(c) & \amp & \neg B(d)\\ B(a) & \amp & \neg B(b) & \amp & \neg B(c) & \amp & B(d)\\ B(a) & \amp & \neg B(b) & \amp & B(c) & \amp & \neg B(d)\\ B(a) & \amp & B(b) & \amp & \neg B(c) & \amp & \neg B(d)\\ \neg B(a) & \amp & \neg B(b) & \amp & \neg B(c) & \amp & B(d)\\ \neg B(a) & \amp & \neg B(b) & \amp & B(c) & \amp & \neg B(d)\\ \neg B(a) & \amp & B(b) & \amp & \neg B(c) & \amp & \neg B(d)\\ B(a) & \amp & \neg B(b) & \amp & \neg B(c) & \amp & \neg B(d)\\ \neg B(a) & \amp & \neg B(b) & \amp & \neg B(c) & \amp & \neg B(d)\\ \end{array} \]

(More generally, Carnap discusses, as “individual distributions”, conjunctions in which each of finitely many individual constants is assigned exactly one formula out of a finite logical partition or “division” of formulas: the idea being that individuals are distributed over pairwise logically exclusive and jointly logically exhaustive open formulas in one and the same free variable. See LFP: 111.)
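Such combinatorics are easy to reproduce mechanically. The following is a minimal illustrative sketch in Python (our representation, not Carnap’s notation): the state-descriptions of the example language \(L_4\) are encoded as tuples of truth values, one entry per individual constant.

```python
# Illustrative sketch: the state-descriptions of L_4 (one unary predicate B,
# constants a, b, c, d), each encoded as a tuple of truth values.
from itertools import product

CONSTANTS = ("a", "b", "c", "d")

# Each state-description settles, for every constant, whether B holds of it.
STATE_DESCRIPTIONS = list(product((True, False), repeat=len(CONSTANTS)))
assert len(STATE_DESCRIPTIONS) == 16  # 2^4 state-descriptions, as in the list above

def render(sd):
    """Display a state-description as a conjunction, e.g., 'B(a) & ~B(b) & ...'."""
    return " & ".join(("" if holds else "~") + "B(" + c + ")"
                      for c, holds in zip(CONSTANTS, sd))

for sd in STATE_DESCRIPTIONS:
    print(render(sd))
```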

With all this set up, Carnap’s first—unsurprising—adequacy condition on confirmation measures is that they satisfy the axioms of probability (LFP: §53). These axioms are formulated for conditional probability measures, which are not defined in terms of absolute or unconditional probability measures; however, it follows from Carnap’s axioms that when T is a logical tautology and the ratio \(P(H \amp E \mid T) / P(E \mid T)\) is defined, the conditional probability \(P(H \mid E)\) must equal this ratio; and probabilities conditional on T may be identified with unconditional probabilities. Primitive conditional probability measures had already been studied axiomatically a decade earlier by Janina Hosiasson-Lindenbaum (1940). (See Makinson 2011 for a systematic and historical survey of primitive conditional probability measures.) Among other things, in the case of \(L_N\), the probability of a sentence is thereby required to be identical with the sum of the probabilities of the (finitely many) state-descriptions at which it is satisfied, and the sum of the probabilities of all state-descriptions is postulated to be 1.

For instance, in our above example, it follows from this that

\[ \begin{align} P(B(a)\amp B(b)) = & P(B(a) \amp B(b) \amp B(c) \amp B(d)) \\ & + P(B(a) \amp B(b) \amp B(c) \amp \neg B(d))\\ & + P(B(a) \amp B(b) \amp \neg B(c) \amp B(d)) \\ & + P(B(a) \amp B(b) \amp \neg B(c) \amp \neg B(d)) \end{align} \]
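To make the bookkeeping concrete, here is a minimal sketch (continuing the toy representation above; the uniform assignment is purely illustrative and is not Carnap’s \(c^*\)): a probability for every sentence of \(L_4\) is obtained by summing the values of the state-descriptions that satisfy it, and conditional probabilities are obtained from the ratio mentioned above, where defined.

```python
# Illustrative sketch: a measure on L_4 given by values for the 16
# state-descriptions (tuples of truth values) that sum to 1.
from itertools import product

CONSTANTS = ("a", "b", "c", "d")
STATE_DESCRIPTIONS = list(product((True, False), repeat=4))

P = {sd: 1 / 16 for sd in STATE_DESCRIPTIONS}  # uniform, for illustration only
assert abs(sum(P.values()) - 1) < 1e-12

def prob(sentence):
    """P(sentence): sum of P over the state-descriptions satisfying it."""
    return sum(p for sd, p in P.items() if sentence(dict(zip(CONSTANTS, sd))))

def cond(h, e):
    """Conditional probability via the ratio P(h & e) / P(e), where defined."""
    return prob(lambda v: h(v) and e(v)) / prob(e)

# P(B(a) & B(b)) is the sum over the four state-descriptions displayed above:
print(prob(lambda v: v["a"] and v["b"]))                    # 0.25 here
print(cond(lambda v: v["c"], lambda v: v["a"] and v["b"]))  # 0.5 here
```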

In Chapter V of the Logical Foundations, Carnap adds regularity as another constraint on adequate confirmation measures: every state-description is assigned a positive real number (between 0 and 1). (Again, we are focusing here only on the finite languages \(L_N\).) Since any such P is intended to be an “initial” probability measure, this corresponds to the requirement that no logical possibility is excluded a priori (in the given framework). In our example, this implies that, e.g., \(P(B(a) \amp B(b)) \gt 0\), since

\[P(B(a) \amp B(b) \amp B(c) \amp B(d)) \gt 0\]

(and also \(P(B(a) \amp B(b) \amp B(c) \amp \neg B(d)) \gt 0\), and so on). However, as Carnap says,

A theory which holds for all regular c-functions [confirmation functions]… is very weak… Our task will be to construct the rest of inductive logic by narrowing the class of c-functions and finally selecting one of them. (LFP: 337)

This “narrowing” refers to the symmetry or structurality requirement that Carnap introduces in Chapter VIII, to which we will turn now. (The “selecting one of them” refers to the Appendix of the Logical Foundations, which we will discuss below.) In 1924 the British philosopher W. E. Johnson had introduced and studied the same requirement under the name “permutation postulate” (see Zabell 2005). Carnap had read the relevant passage in Johnson’s book and even taken notes on it, but had evidently forgotten this by the late 1940s (just as he forgot that he had read—and underlined—the passage in Ramsey where the “Ramsification” of the theoretical components of theories was propounded, and at first thought he had discovered the idea himself; see Psillos 2000: 153, footnote 7).

Here is the basic thought: Carnap calls any bijective mapping between the individual constants of one of the object languages \(L_N\) or \(L_{\infty}\) a “correlation”. Each correlation induces a corresponding mapping on formulas by replacing the constants in a formula by their correlated constants. Finally, two formulas are defined as isomorphic if one can be transformed into the other by a correlation.

For instance, in the example above, the state-description

\[\neg B(a) \amp B(b) \amp \neg B(c) \amp B(d)\]

is isomorphic to the state-description

\[\neg B(a) \amp B(b) \amp B(c) \amp \neg B(d)\]

in virtue of correlating a with itself, b with itself, c with d, and d with c. As is easy to see, the relation of being isomorphic is an equivalence relation on state-descriptions, which, in our example, yields the following five equivalence classes:

\[ \left\{\begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} B(a) &\amp& B(b) &\amp& B(c) &\amp& B(d) \end{array} \right\}\\ \] \[ \left\{\begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} \neg B(a) &\amp& B(b) &\amp& B(c) &\amp& B(d);\\ B(a) &\amp& \neg B(b) &\amp& B(c) &\amp& B(d);\\ B(a) &\amp& B(b) &\amp& \neg B(c) &\amp& B(d);\\ B(a) &\amp& B(b) &\amp& B(c) &\amp& \neg B(d)\\ \end{array}\right\}\\ \] \[ \left\{\begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} \neg B(a) &\amp& \neg B(b) &\amp& B(c) &\amp& B(d); \\ \neg B(a) &\amp& B(b) &\amp& \neg B(c) &\amp& B(d); \\ \neg B(a) &\amp& B(b) &\amp& B(c) &\amp& \neg B(d); \\ B(a) &\amp& \neg B(b) &\amp& \neg B(c) &\amp& B(d); \\ B(a) &\amp& \neg B(b) &\amp& B(c) &\amp& \neg B(d); \\ B(a) &\amp& B(b) &\amp& \neg B(c) &\amp& \neg B(d) \\ \end{array}\right\}\\ \]\[ \left\{\begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} \neg B(a) &\amp& \neg B(b) &\amp& \neg B(c) &\amp& B(d);\\ \neg B(a) &\amp& \neg B(b) &\amp& B(c) &\amp& \neg B(d);\\ \neg B(a) &\amp& B(b) &\amp& \neg B(c) &\amp& \neg B(d);\\ B(a) &\amp& \neg B(b) &\amp& \neg B(c) &\amp& \neg B(d)\\ \end{array}\right\}\\ \]\[ \left\{ \begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} \neg B(a) &\amp& \neg B(b) &\amp& \neg B(c) &\amp& \neg B(d) \end{array}\right\} \]

Since each such equivalence class abstracts from the identity of individuals and merely concerns state-descriptions up to isomorphism, Carnap refers to the disjunction of the state-descriptions within one equivalence class as a “structure-description” (LFP: 116). Hence, e.g., this is the structure-description corresponding to the second of the above equivalence classes:

\[ \begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} (\neg B(a) &\amp& B(b) &\amp& B(c) &\amp& B(d)) \\ {}\lor (B(a) &\amp& \neg B(b) &\amp& B(c) &\amp& B(d)) \\ {}\lor (B(a) &\amp& B(b) &\amp& \neg B(c) &\amp& B(d)) \\ {}\lor (B(a) &\amp& B(b) &\amp& B(c) &\amp& \neg B(d)). \end{array} \]

More generally, relative to finitely many given individual constants and an arbitrary finite logical partition or “distribution” of formulas, Carnap also refers to the analogous disjunctions as “statistical distributions”, which captures the idea that they are concerned only with how many individuals belong to which kind; see LFP: 111. For instance, the structure-description above is characterized by precisely one individual being \(\neg B\) and the other three individuals being B. Carnap formalizes this by assigning so-called Q-numbers to state-descriptions, to the effect that isomorphic state-descriptions have the same numbers, and he studies the combinatorial properties of structure-descriptions by means of these numbers.
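The passage from state-descriptions to structure-descriptions can also be computed directly. In our toy representation, a correlation becomes a permutation of the constant positions, and a structure-description becomes an orbit of the induced action on state-descriptions; a hedged sketch:

```python
# Illustrative sketch: correlations as permutations acting on state-descriptions;
# structure-descriptions as the orbits of that action.
from itertools import permutations, product

N = 4
STATE_DESCRIPTIONS = list(product((True, False), repeat=N))

def apply(correlation, sd):
    """Transform a state-description by a correlation (a permutation of 0..N-1)."""
    return tuple(sd[correlation[i]] for i in range(N))

# The example from the text: swapping c and d maps ~B(a) & B(b) & ~B(c) & B(d)
# to ~B(a) & B(b) & B(c) & ~B(d).
assert apply((0, 1, 3, 2), (False, True, False, True)) == (False, True, True, False)

def orbit(sd):
    """All state-descriptions isomorphic to sd."""
    return frozenset(apply(p, sd) for p in permutations(range(N)))

structure_descriptions = {orbit(sd) for sd in STATE_DESCRIPTIONS}
print(len(structure_descriptions))                     # 5 equivalence classes
print(sorted(len(o) for o in structure_descriptions))  # class sizes [1, 1, 4, 4, 6]
```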

The upshot of these preparations is that Carnap requires adequate confirmation measures (on \(L_N\) again) to be symmetrical: they assign the same probability to isomorphic state-descriptions, that is, to the members of each of the above equivalence classes. (Symmetry on \(L_{\infty}\) is again formulated in terms of limits of confirmation for N going to infinity.) As Carnap (1971a: 119) comments later, symmetry is only to be postulated where “individual constants have the same logical nature”, unlike, e.g., in Carnap’s so-called coordinate languages, in which individual constants track an underlying mathematical structure (see the supplements on Logical Syntax of Language and on Semantics), or in other cases in which a language is set up for objects of which some distinctive properties are already known and for which symmetry would therefore be implausible to assume.

For instance, in the above example, whatever the conditional probabilities \(P(B(c) \mid B(a) \amp B(b))\) and \(P(B(d) \mid B(a) \amp B(b))\) may be numerically, the symmetry postulate entails that they are identical, since any two state-descriptions in which c and d swap roles are assigned the same probability. (Carnap uses this same example in the quotation below.) Inductive logic is just as structural, or, indeed, logical, as deductive logic (in the following quotations we replace Carnap’s symbols by ours):

Suppose X has found by observation that the individuals a and b are B; the individuals may be physical objects and B may be an observable property. Let E be the sentence expressing these results: ‘\(B(a) \amp B(b)\)’. X considers two hypotheses H and \(H'\); H is the prediction that another object c is likewise B (‘\(B(c)\)’), and \(H'\) says the same for still another object d (‘\(B(d)\)’). If X has chosen a concept P of degree of confirmation, he will ascribe a certain value to \(P(H \mid E)\). We cannot determine this value generally because it depends upon the choice of P. Different functions P, even if each of them appears as not implausible, may yield different numerical values for the given case. However, we shall expect that if X ascribes a certain value to \(P(H \mid E)\), no matter which value this may be, he will ascribe the same value to \(P(H'\mid E)\). We should find it entirely implausible if he were to ascribe different values here; that is to say, we should not regard such a function P as an adequate explicatum [of confirmation]. The reason is that the logical relation between E and H is just the same as that between E and \(H'\). Although the individuals c and d may, of course, be very different in their empirical properties, their logical status cannot be different. The evidence E does not say anything about either c or d; therefore, if E is all the relevant evidence available to X, he has no rational reasons to expect H more than \(H'\) or vice versa… To put it in very general terms, we require that logic should not discriminate between the individuals but treat all of them on a par… This is never questioned in deductive logic, although it is seldom stated explicitly. For example, since ‘\(B(c)\)’ L-implies [i.e., logically implies] ‘\(B(c) \lor B(a)\)’, ‘\(B(d)\)’ L-implies ‘\(B(d) \lor B(a)\)’. This important character of deductive logic is stated in general terms in the theorem of the invariance of the L-concepts [i.e., logical concepts] (T-26-2). What we require here is that inductive logic should have the same character. (Carnap 1950b: 484f)

The “T-26-2” reference is to a theorem in the preparatory deductive logic part of the Logical Foundations in which Carnap proves that isomorphic sentences share the usual metalogical features (e.g., logical truth): although what Carnap says there does not concern logical concepts in Tarski’s (1986) object-linguistic sense (such as negation, existential quantification, and the like), in the same section he also states that “These invariances in deductive logic have so far been studied only rarely” and cites Lindenbaum and Tarski (1936) and Mautner (1946) as the only examples of such studies. The Lindenbaum and Tarski paper acknowledges Carnap’s own work on invariances during the late 1920s, in association with his “general axiomatics” project (see the main entry (Section 2.3, last paragraph) and supplement on Reconstruction of Scientific Theories (Section 6)). And Mautner’s paper, entitled “An Extension of Klein’s Erlanger Program: Logic as Invariant-Theory” (1946), relates logic to Klein’s famous Erlanger program for geometry, just as Tarski’s (1986) well-known proposal later would.

As Carnap also argues, probability theorists have often tacitly presupposed the requirement of symmetry in their statistical work:

values of symmetrical P-functions are invariant with respect to a transformation of the sentences by any… correlation… The principle of invariance seems to have been accepted by all authors on probability\(_1\) [i.e., probability as confirmation], both classical and modern, although it has hardly ever been expressed explicitly. All authors would, for instance, raise and answer questions of the following kind: Suppose that among s observed objects there have been found \(s_1\) with the property B and \(s_2 = s - s_1\) with non-B; what is, on this evidence, the probability that another observed object has the property B? Although nobody says so in so many words, it would presumably appear absurd to everybody to assume that the value of the probability on the evidence described depended also on the question which particular s individuals were observed and which particular other individual was concerned in the prediction. For classical authors, this would appear simply as a consequence of the principle of indifference; but also those modern authors who reject the latter principle seem to take it for granted that in questions of the kind mentioned only the statement of the numbers but not a specification of the individuals is relevant for the probability. To put it in our terminology, there seems to be a general agreement among authors on probability that no concept can be regarded as an adequate explicatum for probability unless it possesses the characteristic of symmetry. (Carnap 1950b: 488f)

(Subjective Bayesians would disagree with this.) Indeed, in §§94–96 of the Logical Foundations, Carnap derives various salient properties of statistical inductive inferences (in which E or H or both give information about frequencies) from the symmetry requirement, such as the Binomial Law. Moreover, Carnap’s symmetry requirement coincides with de Finetti’s (1931, cited by Carnap) famous assumption of exchangeability of random variables, which—through de Finetti’s Representation Theorem—provides a bridge between subjective and statistical probabilities that is acceptable even on subjective Bayesian grounds. (Carnap would stress that symmetry/exchangeability is a requirement of rationality that is to be applied in certain contexts, while subjective Bayesians would consider it an assumption about data of some kind; ultimately, it is not easy to tell whether this constitutes a substantial difference for Carnap or whether he would have regarded it as merely a matter of emphasizing different aspects of the same state of affairs. See Zabell 2005, 2007 for a more detailed comparison of Carnapian symmetry with de Finettian exchangeability.)
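In the toy representation used above, symmetry (exchangeability) can be tested directly: a measure over the state-descriptions of \(L_4\) is symmetric just in case its values are invariant under every correlation; for a single unary predicate, this holds just in case the value of a state-description depends only on its number of B-individuals. A hedged sketch:

```python
# Illustrative sketch: testing symmetry/exchangeability of a measure over the
# state-descriptions of L_4 (tuples of truth values, as before).
from itertools import permutations, product
from math import comb

N = 4
STATE_DESCRIPTIONS = list(product((True, False), repeat=N))

def is_symmetric(P, tol=1e-12):
    """True iff P is invariant under all correlations (permutations of constants)."""
    return all(abs(P[sd] - P[tuple(sd[p[i]] for i in range(N))]) < tol
               for sd in STATE_DESCRIPTIONS
               for p in permutations(range(N)))

# Symmetric example: the value depends only on k = number of B-individuals
# (these happen to be the c*-values discussed below).
P_sym = {sd: 1 / ((N + 1) * comb(N, sum(sd))) for sd in STATE_DESCRIPTIONS}

# Non-symmetric example: all weight on a single state-description, which
# privileges the individual a (and also violates regularity).
P_biased = {sd: (1.0 if sd == (True, False, False, False) else 0.0)
            for sd in STATE_DESCRIPTIONS}

print(is_symmetric(P_sym))     # True
print(is_symmetric(P_biased))  # False
```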

We have seen that the axioms of probability entail that the degree of confirmation assigned to a structure-description, say,

\[ \begin{array}{r@{}c@{}r@{}c@{}r@{}c@{}r} (\neg B(a) &\amp& B(b) &\amp& B(c) &\amp& B(d)) \\ {}\lor (B(a) &\amp& \neg B(b) &\amp& B(c) &\amp& B(d)) \\ {}\lor (B(a) &\amp& B(b) &\amp& \neg B(c) &\amp& B(d)) \\ {}\lor (B(a) &\amp& B(b) &\amp& B(c) &\amp& \neg B(d)), \end{array} \]

must equal the sum of the degrees assigned to its disjuncts, that is, to the four pairwise isomorphic state-descriptions belonging to it. Symmetry maintains that each of these isomorphic state-descriptions is assigned one and the same degree of confirmation. What this leaves open is the degree that is assigned to the disjuncts; or, equivalently, the degree of confirmation that is assigned to the structure-description they instantiate, which in this case must be four times the degree assigned to each of the four disjuncts.

In the Appendix to the Logical Foundations, Carnap suggests the simplest possible way of determining these degrees: each structure-description (in \(L_N\)) ought to receive the same probability (which, by symmetry, is distributed uniformly over the state-descriptions falling under it). It is easy to see that this determines a unique confirmation measure, which Carnap denotes by \(c^*\). It follows that \(c^*\) is characterized as the uniquely determined regular, symmetric conditional probability measure that is uniform over structure-descriptions. Once turned into an explicit definition of \(c^*\), inductive logic results from combining deductive logic and its state-description semantics with that definition of the \(c^*\)-function. In an analogous way in which, e.g., Tarski’s explicit definition of truth entails all instances of the truth schema for the object language in question, Carnap’s definition of \(c^*\) entails the adequacy conditions stated before: the axioms of probability, regularity, and symmetry. In addition, Carnap demonstrates the fruitfulness of this definition by deriving various plausible patterns of statistical reasoning from it—such as direct inference (from a population to a sample), predictive inference (from one sample to another), inference by analogy (from two individuals sharing properties to their sharing other properties), inverse inference (from a sample to a population), instance confirmation, and more (e.g., that a greater variety of instances yields greater \(c^*\)-confirmation; see Carnap 1945b or the corresponding appendix of Carnap 1950b for the formal details). Hence, \(c^*\) may serve as an exact, fruitful, simple, and adequate explication of probability\(_1\) in the sense explained in the supplement on Methodology. (Carnap derives the results just mentioned only for \(c^*\) and for languages with only unary predicates. Later, Jeffrey B. Paris and his collaborators extended many of these results to larger classes of probability measures and to languages with predicates of arbitrary arity, and they derived a great variety of new and important results on further kinds of symmetry requirements: see, e.g., Landes, Paris, & Vencovská 2008, 2011.)
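Under the assumptions of the toy representation used throughout this section, \(c^*\) for one unary predicate is straightforward to compute: each of the \(N+1\) structure-descriptions receives \(1/(N+1)\), divided evenly among the state-descriptions instantiating it. The resulting single-case predictive values agree with Laplace’s rule of succession \((k+1)/(n+2)\), a known property of \(c^*\) for two attribute cells (B and non-B); the sketch below checks the case of Carnap’s quoted example.

```python
# Illustrative sketch of c* on L_4 (one unary predicate B): uniform over the
# N + 1 structure-descriptions, then uniform within each structure-description.
from itertools import product
from math import comb

N = 4
CONSTANTS = ("a", "b", "c", "d")
STATE_DESCRIPTIONS = list(product((True, False), repeat=N))

def c_star(sd):
    k = sum(sd)  # number of individuals that are B in this state-description
    return 1 / ((N + 1) * comb(N, k))

def prob(sentence):
    return sum(c_star(sd) for sd in STATE_DESCRIPTIONS
               if sentence(dict(zip(CONSTANTS, sd))))

evidence = lambda v: v["a"] and v["b"]   # E: B(a) & B(b)
both = lambda v: evidence(v) and v["c"]  # E & H, with H: B(c)
print(prob(both) / prob(evidence))  # 0.75 = (2 + 1)/(2 + 2): Laplace's rule
```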

Since \(c^*\) assigns the same initial probability to any two structure-descriptions (in \(L_N\)), and since structure-descriptions that exhibit “more uniformity” are instantiated by a smaller number of state-descriptions—e.g., the structure-description from before with three Bs and just one \(\neg B\) is instantiated by four state-descriptions, while the “less uniform” structure-description with two Bs and two \(\neg B\)s comprises six state-descriptions—“more uniform” state-descriptions receive a relatively greater initial probability, corresponding to an inductive bias towards the “uniformity of the world”. Carnap presents this on page 181 of the Logical Foundations, referring to the appendix (§110) in which he explains the \(c^*\)-function and its formal properties, as a replacement of the traditional (metaphysical) “principle of uniformity” by a merely analytic requirement on confirmation, and hence as a kind of logical or structural “solution” to the traditional problem of induction. However, one can show that, in the context of regularity and the axioms of probability, symmetry by itself entails only a weak principle of inductive learning (the so-called principle of nonnegative instantial relevance), according to which the probability of the “next” individual being B can never be decreased by observing an individual being B; symmetry does not by itself yield that the probability of the “next” individual being B is increased by any such observation (as demanded by the principle of positive instantial relevance; see Humburg 1971 and Gaifman 1971). In this sense, the merely structural requirement of symmetry does not suffice for inductive learning. That is one of the reasons why Carnap wants to go beyond the axioms of probability, regularity, and symmetry by turning to the specific confirmation function \(c^*\) in the Appendix of the Logical Foundations.

Regarding the classical problem of induction, Carnap (1926: 7–9) had himself pointed out, long before Popper, that theories could not, strictly speaking, be completely verified, but only confirmed up to a certain confidence level, or disconfirmed. He maintained this acceptance of Hume’s basic point through his last writings (e.g., Carnap 1963b, 1980). He did not think, though, that the philosophical tradition of addressing it, in which “the aim was to show that inductive reasoning must be successful”, was fruitful:

This demonstration was based either on metaphysical principles or on synthetic, allegedly unconfirmable principles, e.g., that of the uniformity of nature. In the demonstration, any use of inductive reasoning was prohibited; it was thought that such a use would involve a vicious circle because the validity of inductive reasoning was supposed to be dependent upon the demonstration. (Carnap 1957: 2)

However, he thought that a “justification [of induction] of this kind is neither possible nor needed”. He agreed rather with those “who search for a justification in a more modest sense”. In the terms of a distinction Feigl had made, he writes,

… this aim is not the validation but the vindication of induction. In general a vindication of a method or policy is given by showing that its use is suitable for obtaining a given end. The vindication of one inductive method in comparison with another one consists in showing that the one gives better promise for reaching the goal than the other. (1957: 2–3)

Moreover, he thought it advisable to divide the problem of the justification of induction, in this more modest sense, into two subproblems “which in my view are very different, but which usually are not clearly distinguished”. The first problem was that of deciding on the axioms of inductive logic, and on what sorts of reasons should count in reaching this decision; the second problem concerned the choice of a particular inductive method within the leeway permitted by the axioms arrived at. Regarding this second question, he wavered somewhat. In his Continuum of Inductive Methods, Carnap (1952b) suggests that inductive methods should be chosen according to their usefulness or convenience to the user, based on the user’s experience of the empirical success of different inductive methods:

The adoption of an inductive method is neither an expression of belief nor an act of faith, though either or both may come in as motivating factors. An inductive method is rather an instrument for the task of constructing a picture of the world on the basis of observational data and especially of forming expectations of future events as a guidance for practical conduct. X may change this instrument just as he changes a saw or an automobile, and for similar reasons. If X, after using his car for some time, is no longer satisfied with it, he will consider taking another one, provided that he finds one that seems to him preferable… (Carnap 1952b: 55)

This he later retreated from, in favor of a more rationalistic approach, in which the choice of inductive methods in the light of particular experiences was not rejected, but regarded as inferior to a more principled choice in the light of a prior choice of theoretical framework; see Carnap (1963b: 979). Burks (1963) suggests an application of Carnap’s internal-external distinction (see the supplement on Tolerance, Metaphysics, and Meta-Ontology) to sort out Carnap’s views on the justification of induction, which Carnap (1963b: 982) finds “illuminating”, and he agrees that theoretical considerations may indeed be brought to bear on the “external question” of which inductive axioms for frameworks to choose. (See Carus 2017 for further discussion.)

While \(c^*\) exhibits some attractive formal properties, Carnap does not claim it to be the only available explication of probability\(_1\) or to be “perfect” in any sense:

It is not claimed that \(c^*\) is necessarily the best explicatum possible. The theory of this function will be developed chiefly for the purpose of presenting a concrete example… (Carnap 1950b: ix, Preface)

and

It may… still be inadequate in other respects. It will not be claimed that \(c^*\) is a perfectly adequate explicatum for probability\(_1\), let alone that it is the only adequate one. For the time being it would be sufficient that \(c^*\) is a better explicatum than the previous methods (if indeed it is); in the future still better explicata may be found. (Carnap 1950b: 563)

In the preface to the Logical Foundations, Carnap had already pointed out that

for any two given inductive methods… there are always some state-descriptions in which the first wins out against the second [in terms of betting-success]. Hence, we can never say of one method that it is absolutely inferior to another method in the sense of being inferior in every conceivable world. Nevertheless, the result of a comparison of two inductive methods… may practically influence our preference. (Carnap 1950b: x)

Indeed, from the so-called “\(\lambda\)-continuum” of adequate confirmation methods published just two years later (The Continuum of Inductive Methods, Carnap 1952b) to his posthumously published systems of inductive logic (“A Basic System of Inductive Logic, Part I”, Carnap 1971a, and “A Basic System of Inductive Logic, Part II”, Carnap 1980, both edited by Richard Jeffrey), Carnap extended his explication of probabilistic confirmation from \(c^*\) to ever larger infinite ranges of probability measures satisfying the axioms of probability, regularity, symmetry, convergence, and others. The convergence axiom, which can be shown to imply the principle of positive instantial relevance (as stated above, and assuming the rest of Carnap’s axioms), entails that the probability of the “next” individual having property B converges in the limit to the relative frequency of B-objects in the observed samples. \(c^*\) satisfies this requirement, while, e.g., the uniquely determined uniform probability measure that assigns the same probability to each state-description satisfies neither convergence nor positive instantial relevance (see Carnap 1950b: 564f). The point of parametrizing inductive confirmation measures by the numerical quantity \(\lambda\) in Carnap’s “\(\lambda\)-continuum” is to determine how fast the probability of the next object being B ought to converge to the relative frequency of B-objects given the observations. His final (posthumous) publication on induction (Carnap 1980) even expands the conceptual resources of inductive logic by the assumption of quality spaces (such as color spaces) on which similarity relations or distance measures are defined, and relative to which a yet more comprehensive \(\gamma\)-\(\eta\)-\(\lambda\) family of confirmation measures can be formulated that exploits features of these quality spaces in analogical inferences: each \(\gamma\)-parameter measures the relative “geometrical width” of an attribute, each \(\eta\)-parameter measures the similarity between two attributes, and that kind of similarity is used to reconstruct inductive inferences from predications of attributes to predications of similar attributes. Hence, Carnap’s late work on inductive logic augments the linguistic frameworks that figure so prominently throughout his work (see the main entry (Section 1.2)) by geometrical structure. (See Hilpinen 1973 and Sznajder 2016 for a reconstruction and discussion of Carnap’s “Basic System”. See Niiniluoto 1981, 1988; Kuipers 1984, 1988; Skyrms 1991, 1996; Maher 2000; and Huttegger 2009 for continuations and improvements of the “Basic System”.)
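For the single-predicate case, the predictive rule of the \(\lambda\)-continuum has a simple closed form: the probability that the next individual is B, given that \(s_1\) of s observed individuals were B, is \((s_1 + \lambda/\kappa)/(s + \lambda)\), where \(\kappa\) is the number of attribute cells (here \(\kappa = 2\): B and non-B). A hedged sketch of this standard formula, with \(\lambda = \kappa\) recovering \(c^*\):

```python
# Illustrative sketch: the lambda-continuum's predictive rule for kappa = 2
# cells. Higher lambda gives more weight to the "logical factor" 1/kappa,
# lower lambda more weight to the observed relative frequency s1/s.
def c_lam(s1, s, lam, kappa=2):
    """Probability that the next individual is B, given s1 of s observed Bs."""
    return (s1 + lam / kappa) / (s + lam)

print(c_lam(2, 2, lam=2))    # 0.75: lambda = kappa recovers c* (Laplace's rule)
print(c_lam(2, 2, lam=1e9))  # ~0.5: the no-learning limit corresponding to the
                             # measure that is uniform over state-descriptions
print(c_lam(2, 2, lam=1e-9)) # ~1.0: the "straight rule" limit lambda -> 0
```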

By continually enlarging the class of confirmation measures he regards as adequate explicata, Carnap moved closer to the Bayesian position of regarding all probability measures (with a subjective interpretation) as adequate—the Bayesian step taken, e.g., by Carnap’s student Richard Jeffrey. However, Carnap never seems to have given up on the logicality or structurality requirement of symmetry for framework-dependent prior confirmation measures. (For a survey and detailed discussion and criticism of Carnap’s symmetry requirement, see Zabell 2005.) As explained at the beginning of this section, it is questionable to criticize Carnapian symmetry on strictly Bayesian grounds, since Carnapian “initial” confirmation measures are supposed to play a functional role different from that of subjective probability measures (which may result from a Carnapian “initial” confirmation measure by update on evidence): as he says in Carnap (1971a: 118),

symmetry must be required of a C-function [confirmation measure] only because it is meant to represent credibility, not credence. A nonsymmetric credence function may still be rational… those authors who interpret the term “probability” (often “subjective probability” or “personal probability”) in the sense of credence (as, for example, de Finetti, Savage, and Jeffrey…) are quite right in restricting symmetry to special cases.

Typical criticisms of “classical probability” that are based on Bertrand’s Paradox (see section 3.1 of the entry on interpretations of probability for a survey) and on the logical impossibility of exemplifying all possible kinds of symmetry at once miss their target if directed at Carnap: Carnap himself warns against the “uncritical use of the principle of indifference” (1950b: 331) and maintains that “since this principle leads to contradictions, we have to give it up” (1950b: 332). It is only once a linguistic framework has been set up that symmetry with respect to individuals, as specified and identified in the framework, becomes a plausible rationality constraint on prior probabilities when no specific evidence on individuals has yet been collected.

Similarly, in Carnap’s philosophy of induction, Goodman’s (1955) well-known worries about the “new riddle of induction” and grue-type predicates concern the pragmatic question of which linguistic framework one should presuppose for inductive reasoning: e.g., one with purely “descriptional” primitive predicates, such as “blue” and “green”, or one that also includes mixed “descriptional-locational” primitive predicates, such as Goodman’s “grue” and “bleen” (see Carnap 1971a: 72–76). This problem of scientific concept choice, which goes beyond induction, is urgent for everyone and leads back to the realist-antirealist debates on the “naturalness” and “fundamentality” of concepts (see the supplement on Reconstruction of Scientific Theories). At the very least, given Carnap’s general approach to philosophy and explication or rational reconstruction (see the supplement on Methodology), it should hardly come as a surprise that

The degree of confirmation is relative not only with respect to the evidence, but, like all semantical concepts, also with respect to the language system, (Carnap 1950b: 279)

and that the adequacy of one’s choice of linguistic framework for inductive reasoning is itself an important problem of applied inductive logic (see Carnap 1971a: 76).

A different kind of criticism of Carnap’s inductive logic was put forward by Putnam (1963) in the Schilpp volume on Carnap: Putnam proved that, assuming a certain computability condition on the confirmation function P (satisfied, e.g., by Carnap’s \(c^*\) from above), there must be a computable hypothesis H concerning a countably infinite ordered set of individuals which could not be accepted by P in the sense of \(P(H \mid E_n)\) converging to 1 in the limit, even though each piece of evidence \(E_n\) corresponds to the set of confirming instances of H concerning the first n individuals. This proof was the basis of the later development of formal learning theory (see Schulte 2017 for a survey).

In a reply to Putnam in the Schilpp volume, Carnap takes on board Putnam’s suggestion that scientists’ methods of accepting or rejecting hypotheses in the face of evidence might depend on which (and in which order) scientific hypotheses are proposed in the actual course of scientific development—which Putnam had shown leads to a way out of the problem for all hypotheses H that are actually proposed. (See chapter 1 of Sterkenburg 2018 for a discussion and evaluation of the exchange between Putnam and Carnap.) Sterkenburg (2018: 108) shows that if Carnapian confirmation functions are seen as instantiations of a universal Bayesian architecture by which various particular confirmation methods can be constructed, then the Carnapian inductive logic program can be saved from Putnam’s criticism. As modern work on “no-free-lunch” theorems in machine learning suggests (see Schurz 2017 for a summary and philosophical assessment), there simply does not seem to be anything like a universally successful method of induction: every reasonable such method comes with certain presuppositions and will fail to do its job if these presuppositions are not met. The inductive components of Carnapian frameworks merely constitute ways of making some of these presuppositions more precise and explicit.

(For a detailed survey on Carnap on probability and induction, see Zabell 2007; for a general defense of Carnap’s explication of confirmation, see Maher 2010.)

Copyright © 2020 by
Hannes Leitgeb <Hannes.Leitgeb@lmu.de>
André Carus <awcarus@mac.com>
