Supplement to Truthlikeness

Expected Truthlikeness

Like truthlikeness, expected degree of truthlikeness behaves quite differently from probability. The expected degree of truthlikeness of a proposition can be low even though its probability is high. A tautology, for example, has a low degree of truthlikeness (closeness to the whole truth) whatever the truth is, and hence a low degree of expected truthlikeness, while its probability is maximal. More interestingly, a proposition can be known to be false, and so have zero probability, and yet have a higher degree of expected truthlikeness than some known truths. For example, that the number of planets is 7 is known to be false; but since we know that the number of planets is 8, its degree of expected truthlikeness is identical to its actual degree of truthlikeness, which is as high as a false answer to the question of the number of planets gets. (See Niiniluoto 2011 for other examples.)
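To make the contrast vivid, here is a toy numerical sketch. It assumes a simple made-up measure – truthlikeness as one minus the average normalized distance of a proposition's admitted answers from the true number – rather than any particular measure defended in the truthlikeness literature:

    # Toy model: answers to "How many planets are there?" are sets of numbers.
    # Assumed measure (illustrative only): truthlikeness = 1 minus the average
    # normalized distance of the answer's admitted numbers from the truth (8).

    TRUTH = 8
    DOMAIN = range(0, 21)  # suppose the live candidate answers are 0..20

    def truthlikeness(answer):
        avg_distance = sum(abs(n - TRUTH) for n in answer) / len(answer)
        return 1 - avg_distance / max(DOMAIN)

    seven = {7}              # known false, hence probability 0
    tautology = set(DOMAIN)  # probability 1

    print(truthlikeness(seven))      # 0.95: very close to the whole truth
    print(truthlikeness(tautology))  # ~0.73: further from the whole truth

On this toy measure the refuted pinpoint answer outscores the maximally probable tautology, which is exactly the pattern the probability–truthlikeness contrast trades on.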

Of course, the possibility of estimating truthlikeness relies on the availability of a reasonable theory of probability. But the basic idea can be implemented whether one is a subjectivist about probability or subscribes to some more objective theory of logical probability. It turns out, not surprisingly, that there is a well-known framework for logical probability which fits the normal-form approach to truthlikeness like a glove – namely, that based on Hintikka’s distributive normal forms (Hintikka 1965). Hintikka’s approach to inductive logic can be seen as a rather natural development of Carnap’s continuum of inductive methods (Carnap 1952). Recall that Carnap advocated distributing prior probabilities equally over structure descriptions, rather than over state descriptions, in order to make it possible to learn from experience. But in an infinite universe there are still infinitely many structure descriptions, and while Carnap’s approach allows a finite amount of data to change the probability of a singular prediction, it renders probabilistic learning from experience impossible for genuine universal generalizations. Since universal claims start out with zero probability, updating by conditionalization on new evidence can never change that.
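The last point is immediate from Bayes’ theorem: for any hypothesis \(H\) with \(P(H) = 0\) and any evidence \(e\) with \(P(e) > 0\),

\[ P(H \mid e) = \frac{P(e \mid H)\,P(H)}{P(e)} = 0. \]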

Constituents partition the logical space according to the kinds of individuals that exist, and they can plausibly be assigned equal prior probabilities, perhaps motivated by a principle of indifference of some kind. In the simplest case – first-order monadic constituents of depth-1 – each constituent says of some set of \(Q\)-predicates that they are instantiated and that they are the only instantiated \(Q\)-predicates. The width of a constituent is the number of \(Q\)-predicates it says are instantiated. Suppose that \(n\) pieces of evidence have been accumulated to date, and that the total accumulated evidence \(e_n\) contains instances of all and only the \(Q\)-predicates that \(C^e\) says are instantiated. That is, \(C^e\) is the narrowest constituent compatible with the total evidence. Then, given Hintikka’s initial equal distribution, \(C^e\) emerges as the constituent with the highest probability on the evidence. Further, suppose that we reach a point in the gathering of evidence after which no new kinds of individuals are observed. (Since there are only finitely many kinds in the monadic framework envisaged, there must be such a point.) Then \(P(C^e\mid e_n) \rightarrow 1\) as \(n \rightarrow \infty\). It then follows that for any proposition \(A\) expressible in the framework:

\[ \mathbf{E}TL(A\mid e_n) \rightarrow TL(A\mid C^e) \text{ as } n \rightarrow \infty. \]
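Spelling out the definition behind this limit claim (in the style of Niiniluoto’s expected verisimilitude): the expected truthlikeness of \(A\) on the evidence is the probability-weighted average of the truthlikeness \(A\) would have relative to each constituent,

\[ \mathbf{E}TL(A \mid e_n) = \sum_i P(C_i \mid e_n)\, TL(A \mid C_i). \]

As \(P(C^e \mid e_n) \rightarrow 1\), the weights on all rival constituents go to 0, and the sum collapses to the single term \(TL(A \mid C^e)\).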

Further, provided that the evidence is eventually exhaustive (that is, instances of all the different kinds of individuals that are in fact instantiated eventually turn up at some stage in the evidence), \(C^e\) will be the true constituent, and expected truthlikeness will come arbitrarily close to the actual degree of truthlikeness.
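The convergence can be checked with a small simulation. The sketch below deliberately simplifies Hintikka’s system: it assumes equal priors over constituents, takes individuals to be sampled independently and uniformly from the kinds a constituent admits (idealizing away Hintikka’s \(\lambda\)-parameter), and measures truthlikeness between constituents by one simple candidate measure, the normalized symmetric difference of their sets of instantiated \(Q\)-predicates. All of these choices are illustrative assumptions, not Hintikka’s or Niiniluoto’s exact machinery:

    from itertools import combinations

    K = 3  # number of Q-predicates (kinds of individuals)

    # A depth-1 monadic constituent = a nonempty set of kinds claimed to be
    # exactly the instantiated ones; equal prior over all 2^K - 1 of them.
    CONSTITUENTS = [frozenset(c) for r in range(1, K + 1)
                    for c in combinations(range(K), r)]
    PRIOR = 1 / len(CONSTITUENTS)

    def likelihood(evidence, constituent):
        # Individuals i.i.d. uniform over the constituent's kinds (a
        # simplifying assumption); zero if evidence shows an excluded kind.
        if not set(evidence) <= constituent:
            return 0.0
        return (1 / len(constituent)) ** len(evidence)

    def posteriors(evidence):
        # Posterior over constituents by Bayes' theorem.
        joint = {c: PRIOR * likelihood(evidence, c) for c in CONSTITUENTS}
        total = sum(joint.values())
        return {c: p / total for c, p in joint.items()}

    def tl(c, true_c):
        # Toy truthlikeness: 1 minus the normalized symmetric difference of
        # the sets of kinds the two constituents say are instantiated.
        return 1 - len(c ^ true_c) / K

    def expected_tl(c, evidence):
        # Probability-weighted average of c's truthlikeness across constituents.
        return sum(p * tl(c, ci) for ci, p in posteriors(evidence).items())

    true_c = frozenset({0, 1})  # kinds 0 and 1 exist, kind 2 does not
    for n in (2, 10, 50):
        e_n = [0, 1] * (n // 2)  # exhaustive evidence, no new kinds after n=2
        print(n, round(posteriors(e_n)[true_c], 4),
              round(expected_tl(true_c, e_n), 4))
    # P(C^e | e_n) climbs toward 1, and the expected truthlikeness of the
    # true constituent climbs toward its actual truthlikeness, here 1.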

While this idea has been developed in detail only for monadic first-order frameworks, the model does demonstrate something interesting: namely, the consilience of two important but apparently antagonistic traditions in twentieth-century philosophy of science. On the one hand there is the Carnapian tradition, which stressed the probabilification of scientific hypotheses through the application of inductive methods. On the other, there is the Popperian tradition, which completely rejected so-called inductive methods, along with the application of probability theory to epistemology, embracing instead the possibility of progress towards the whole truth through highly improbable conjectures and their refutation. If the model is on the right lines, then there is a rational kernel at the core of both traditions. Inquiry can, and hopefully does, progress towards the truth through a sequence of false conjectures and their nearly inevitable refutation, but we can also have fallible evidence of such progress: expected degree of truthlikeness will typically approach actual degree of truthlikeness in the long run.

Copyright © 2022 by
Graham Oddie <oddie@colorado.edu>
Gustavo Cevolani <gustavo.cevolani@imtlucca.it>
