Supplement to Proof-Theoretic Semantics

Definitional Reflection and Paradoxes

The theory of definitional reflection is most conveniently framed in a sequent-style system. If the definition of A has the form

\[
D_A \;\left\{\;
\begin{array}{l}
A \Leftarrow \Delta_1 \\
\quad\vdots \\
A \Leftarrow \Delta_n
\end{array}
\right.
\]

and the Δi have the form ‘Bi1, …, Biki’, then the right-introduction rules for A (definitional closure) are

\[
\frac{\Gamma \vdash B_{i1} \quad \cdots \quad \Gamma \vdash B_{ik_i}}{\Gamma \vdash A}\ ({\vdash}A)
\]

or, in short,

\[
\frac{\Gamma \vdash \Delta_i}{\Gamma \vdash A}\ ({\vdash}A)
\]

and the left-introduction rule for A (definitional reflection) is

\[
\frac{\Gamma, \Delta_1 \vdash C \quad \cdots \quad \Gamma, \Delta_n \vdash C}{\Gamma, A \vdash C}\ (A{\vdash})
\]
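As a simple illustration of these schemas (a toy example chosen here, with B, C and D atoms that are not further defined), suppose A is defined by the two clauses A ⇐ B, C and A ⇐ D. Definitional closure then yields two right-introduction rules, one for each clause, while definitional reflection yields a single left-introduction rule with one premiss for each clause:

\[
\frac{\Gamma \vdash B \quad \Gamma \vdash C}{\Gamma \vdash A}
\qquad
\frac{\Gamma \vdash D}{\Gamma \vdash A}
\qquad
\frac{\Gamma, B, C \vdash E \quad \Gamma, D \vdash E}{\Gamma, A \vdash E}
\]

The left-introduction rule expresses that whatever follows from each of the defining conditions of A follows from A itself.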

If ⊥ is an atom for which no definitional clause is given, then ⊥ ⊢ C is derivable for any C: we just have to apply definitional reflection with the empty set (n = 0) of premisses. “⇒” denotes a structural implication for which we assume that the rules

\[
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \Rightarrow B}\ ({\vdash}{\Rightarrow})
\qquad
\frac{\Gamma \vdash A \quad \Delta, B \vdash C}{\Gamma, \Delta, A \Rightarrow B \vdash C}\ ({\Rightarrow}{\vdash})
\]

are available. They can be obtained immediately from Gentzen's sequent rules for implication.

Now suppose we have the following definition which consists of a single clause defining R in terms of its own negation:

\[
D_R \;\{\; R \Leftarrow (R \Rightarrow \bot)
\]
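Instantiated to this single clause, definitional closure and definitional reflection yield the following rules for R:

\[
\frac{\Gamma \vdash R \Rightarrow \bot}{\Gamma \vdash R}\ ({\vdash}R)
\qquad
\frac{\Gamma,\ R \Rightarrow \bot \vdash C}{\Gamma,\ R \vdash C}\ (R{\vdash})
\]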

Then we can derive absurdity if we have at our disposal, as structural principles, (i) initial sequents of the form A ⊢ A, (ii) the contraction of identical formulas in the antecedent, and (iii) the cut rule.

Let 𝒟 be the following derivation of R ⊢ ⊥:

\[
\dfrac{\dfrac{\dfrac{\dfrac{R \vdash R \qquad \bot \vdash \bot}{R,\ R \Rightarrow \bot \ \vdash\ \bot}\ ({\Rightarrow}{\vdash})}{R,\ R \ \vdash\ \bot}\ (R{\vdash})}{R \ \vdash\ \bot}\ (\mathrm{Contraction})
\]

Then ⊢ ⊥ is obtained by cut:

\[
\dfrac{\dfrac{\dfrac{\mathcal{D}}{\ \vdash\ R \Rightarrow \bot}\ ({\vdash}{\Rightarrow})}{\ \vdash\ R}\ ({\vdash}R) \qquad \dfrac{\mathcal{D}}{R \ \vdash\ \bot}}{\ \vdash\ \bot}\ (\mathrm{Cut})
\]

There are various strategies for dealing with this phenomenon, depending on which of the presupposed structural rules one wants to tackle (see Schroeder-Heister, 1992, 2004).

(i) We may restrict initial sequents A ⊢ A to those cases where there is no definitional clause for A available, i.e., where A cannot be introduced by definitional closure or definitional reflection. The rationale behind this proposal is that, if A has a definitional meaning, then A should be introduced according to its meaning and not in an unspecific way. This corresponds to the idea of the logical sequent calculus, in which one often restricts initial sequents to the atomic case, i.e., to the case where no meaning-determining right and left introduction rules are available. If we impose this restriction on the definition of R, then the above derivation is no longer well-formed. As there is a definitional rule for R, we are not entitled to use the initial sequent R ⊢ R. On the other hand, this initial sequent is not reducible to another, more elementary initial sequent that is permitted. This way of proceeding was proposed by Kreuger (1994) and is related to using a certain three-valued semantics for logic programming (see Jäger and Stärk, 1998).

(ii) If we disallow contraction, the above derivation again breaks down. This is not very surprising: since the work of Fitch and Curry it is well known that choosing a logic without contraction prevents many paradoxes (see the entries on Curry's paradox and paradoxes and contemporary logic). It would actually be sufficient to restrict contraction by forbidding only those cases in which the two expressions being contracted have been introduced in semantically different ways. This means the following: In the definitional framework an expression A can be introduced either by means of an initial sequent A ⊢ A or by one of the two definitional principles (⊢A) and (A⊢). In the first case, A is introduced in an unspecific way, i.e., independently of its definition, whereas in the second case, A is introduced in a specific way, namely according to its meaning as given by the definition of A. It can be shown that contraction is critical (with respect to cut elimination) only if a specific occurrence of A is identified with an unspecific one. This is exactly the situation in the above derivation: an R coming from an initial sequent is contracted with an R coming from definitional reflection.
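In the derivation above, the critical step is the following contraction, in which the first R stems from the initial sequent R ⊢ R and the second R has been introduced by (R⊢) (the occurrence labels are added here only for emphasis):

\[
\frac{\overbrace{R}^{\text{unspecific}},\ \overbrace{R}^{\text{specific}}\ \vdash\ \bot}{R \vdash \bot}\ (\mathrm{Contraction})
\]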

(iii) If we disallow cut, then again we are not able to derive absurdity. In fact, it is questionable why we should assume cut at all. In standard logic, cut is not needed as a primitive rule, since it is admissible. We can simply read the above example as demonstrating that cut is not admissible here: in the system without cut there is no rule that can generate ⊢ ⊥. Instead of discussing whether we should force cut by making it a primitive rule, we can inversely use the admissibility or non-admissibility of cut as a feature by means of which we classify definitions. Certain definitions admit cut, others do not. It is a main task of the theory of definitional reflection to sort definitions according to whether they render cut admissible. As stressed by Hallnäs (1991), this is comparable to the situation in recursion theory, where one considers partial recursive functions. There the fact that a well-defined partial recursive function is total is a behavioral feature of the function defined and not a syntactic feature of its recursive definition (in fact, it is undecidable in general). For example, one sufficient condition for cut with cut formula A to be admissible is that the definition of A is well-founded, i.e., every chain of definitional successors starting with A terminates. Another condition is that the bodies of the definitional clauses for A do not contain structural implication. Giving up well-foundedness as a definitional principle is not as strange as it might appear at first sight. In the revision theory of truth (see Kremer, 2009), the investigations initiated by the work of Kripke, Gupta, Herzberger and Belnap have shown that excluding non-wellfoundedness blocks the view of interesting phenomena.
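For illustration, consider the (hypothetical) definition

\[
D \;\left\{\;
\begin{array}{l}
A \Leftarrow B, C \\
B \Leftarrow C
\end{array}
\right.
\]

Every chain of definitional successors starting with A terminates after at most two steps (A leads to B or C, and B leads to C, which is undefined), and no body contains the structural implication ⇒; by either of the sufficient conditions just mentioned, cut with cut formula A or B is admissible. In D_R, by contrast, R occurs in its own defining body, so the chain of definitional successors R, R, R, … never terminates, and the body moreover contains ⇒.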

In a typed version, where cut takes the form of a substitution rule:

\[
\frac{\Gamma \vdash t : A \qquad \Delta,\ x : A \vdash s : C}{\Gamma, \Delta \vdash s[x/t] : C}
\]

one might consider restricting cut locally to those cases in which it generates a normalizable term s[x/t]. In Schroeder-Heister (2012b) this is seen as an argument for the sequent calculus as the appropriate formal model of deductive reasoning, since only the sequent calculus has this local substitution rule. In natural deduction, where one would have to restrict the application of modus ponens to non-critical cases, substitution is built into the general framework as a global principle. Within a natural-deduction framework, Tranchini (2012b, see Other Internet Resources) has proposed considering derivations of absurdity as “arguments” rather than “proofs”, understanding arguments as senses of derivations and proofs as their denotations, so that a non-normalizable derivation of absurdity is meaningful but lacks a denotation. This approach, which builds on Prawitz's (1965, App. B) observation that the derivation of Russell's paradox is non-normalizable and on Tennant's (1982) corresponding analysis of many well-known paradoxes, contributes to the research programme of applying Frege's semantical distinction between sense and denotation to proofs. This research programme aims at a distinction within the concept of proof itself and thus goes beyond the distinction between a proof as an intensional entity and what is proved as its extensional result, from which this article started (see section 2.1).
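The non-normalizability at issue can be made concrete in a programming language that admits non-well-founded type definitions. The following is a minimal sketch in Haskell (the names Void, R, Fold, unfold, rImpliesBot and bot are chosen here merely for illustration): the recursive type R mirrors the clause R ⇐ (R ⇒ ⊥), its constructor plays the role of definitional closure, its destructor that of definitional reflection, and the closed term bot corresponds to the derivation of ⊢ ⊥.

    -- The empty type, playing the role of ⊥: it has no constructors,
    -- so no terminating closed term of this type should exist.
    data Void

    -- A recursive type mirroring the clause  R ⇐ (R ⇒ ⊥).
    -- The constructor Fold corresponds to definitional closure (⊢R).
    newtype R = Fold (R -> Void)

    -- Definitional reflection (R⊢): from R we may pass to R ⇒ ⊥.
    unfold :: R -> (R -> Void)
    unfold (Fold f) = f

    -- Corresponds to the derivation of  R ⊢ ⊥  above
    -- (the double use of x reflects the contraction step).
    rImpliesBot :: R -> Void
    rImpliesBot x = unfold x x

    -- Corresponds to  ⊢ ⊥  (the cut step); forcing this value loops forever.
    bot :: Void
    bot = rImpliesBot (Fold rImpliesBot)

Unfolding the definitions, bot is the term (λx. unfold x x)(Fold (λx. unfold x x)), which reduces to itself just like the familiar non-normalizing λ-term Ω: a closed term of the empty type exists, but it has no normal form, in line with the above reading of paradoxical derivations of absurdity as meaningful but denotationless.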

Copyright © 2012 by
Peter Schroeder-Heister <psh@informatik.uni-tuebingen.de>
