Supplement to Justification Logic

Some More Technical Matters

1. Mathematical Logic Tradition

Several well-known mathematical notions that appeared prior to Justification Logic have sometimes been perceived as related to the BHK idea: Kleene realizability (Troelstra 1998), the Curry-Howard isomorphism (Girard, Taylor, and Lafont 1989, Troelstra and Schwichtenberg 1996), and the Kreisel-Goodman theory of constructions (Goodman 1970, Kreisel 1962, Kreisel 1965), to name a few. These interpretations have been very instrumental for understanding intuitionistic logic, though none of them qualifies as the BHK semantics.

Kleene realizability revealed a fundamental computational content of formal intuitionistic derivations; however, it is still quite different from the intended BHK semantics. Kleene realizers are computational programs rather than proofs. The predicate ‘r realizes F’ is not decidable, which leads to serious deviations from intuitionistic logic. Kleene realizability is not adequate for the intuitionistic propositional calculus IPC: there are realizable propositional formulas not derivable in IPC (Rose 1953).[1]

The Curry-Howard isomorphism transliterates natural derivations in IPC to typed λ-terms, thus providing a generic functional reading for logical derivations. However, the foundational value of this interpretation is limited since, as proof objects, Curry-Howard λ-terms denote nothing but derivations in IPC itself and thus yield a circular provability semantics for the latter.

An attempt to formalize the BHK semantics directly was made by Kreisel in his theory of constructions (Kreisel 1962, Kreisel 1965). The original variant of the theory was inconsistent; difficulties already occurred at the propositional level. In (Goodman 1970) this was fixed by introducing a stratification of constructions into levels, which ruined the BHK character of this semantics. In particular, a proof of AB was no longer a construction that could be applied to any proof of A.

2. Logical Awareness and Constant Specifications

Two examples of derivations in J are presented, corresponding to modal theorems of K, together with realizations for them. In the examples, indices on constants have been omitted.

Example 1. This shows how to build a justification of a conjunction from justifications of the conjuncts. In traditional modal language, this principle is formalized as □A ∧ □B → □(A ∧ B). In J this idea is expressed in the more precise justification language.

  1. A → (B → (A ∧ B)) (propositional axiom)
  2. c:(A → (B → (A ∧ B))) (from 1 by Axiom Internalization)
  3. x:A → [c ⋅ x]:(B → (A ∧ B)) (from 2 by Application and Modus Ponens)
  4. x:A → (y:B → [[c ⋅ x] ⋅ y]:(A ∧ B)) (from 3 by Application and propositional reasoning)
  5. x:A ∧ y:B → [[c ⋅ x] ⋅ y]:(A ∧ B) (from 4 by propositional reasoning)

The derived formula 5 contains the constant c, which was introduced in line 2, and the complete reading of the result of this derivation is:

x:A ∧ y:B → [[c ⋅ x] ⋅ y]:(A ∧ B), given c:(A → (B → (A ∧ B))).
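The Application steps above can be sketched mechanically. The following is a minimal illustration (the class and function names are ours, not standard notation): justification terms are symbolic objects, and Application combines a justification of an implication with a justification of its antecedent.

```python
# A minimal sketch of the Application axiom of J (illustrative names).
# Application: from s:(F -> G) and t:F, conclude [s.t]:G.

from dataclasses import dataclass

@dataclass(frozen=True)
class Imp:          # an implication F -> G
    ante: object
    cons: object

@dataclass(frozen=True)
class Term:         # a justification term, kept as a printable name
    name: str

def apply_term(s, s_formula, t, t_formula):
    """Application: s:(F -> G) and t:F yield [s.t]:G."""
    assert isinstance(s_formula, Imp) and s_formula.ante == t_formula
    return Term(f"({s.name}.{t.name})"), s_formula.cons

# Replaying Example 1 with atomic formulas as strings:
A, B = "A", "B"
conj = ("and", A, B)
axiom = Imp(A, Imp(B, conj))              # A -> (B -> (A and B))
c, x, y = Term("c"), Term("x"), Term("y")

cx, f1 = apply_term(c, axiom, x, A)       # [c.x]:(B -> (A and B))
cxy, f2 = apply_term(cx, f1, y, B)        # [(c.x).y]:(A and B)
print(cxy.name, "justifies", f2)
```

Note how the structure of the final term (c ⋅ x) ⋅ y records exactly how the derivation was assembled, which is the point of the explicit justification language.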

Example 2. This example shows how to build a justification of a disjunction from a justification of either of the disjuncts. In the usual modal language this is represented by □A ∨ □B → □(A ∨ B). Here is the corresponding result in J.

  1. A → (A ∨ B) (classical logic)
  2. a:(A → (A ∨ B)) (from 1 by Axiom Internalization)
  3. x:A → [a ⋅ x]:(A ∨ B) (from 2 by Application and Modus Ponens)
  4. B → (A ∨ B) (by classical logic)
  5. b:(B → (A ∨ B)) (from 4 by Axiom Internalization)
  6. y:B → [b ⋅ y]:(A ∨ B) (from 5 by Application and Modus Ponens)
  7. [a ⋅ x]:(A ∨ B) → [a ⋅ x + b ⋅ y]:(A ∨ B) (by Sum)
  8. [b ⋅ y]:(A ∨ B) → [a ⋅ x + b ⋅ y]:(A ∨ B) (by Sum)
  9. (x:A ∨ y:B) → [a ⋅ x + b ⋅ y]:(A ∨ B) (from 3, 6, 7, and 8 by propositional reasoning)

The complete reading of the result of this derivation is:

(x:A ∨ y:B) → [a ⋅ x + b ⋅ y]:(A ∨ B), given a:(A → (A ∨ B)) and b:(B → (A ∨ B)).
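The role of Sum in this example can be sketched as follows (function names are illustrative, not standard): whichever disjunct is actually justified, Sum lets a single term a ⋅ x + b ⋅ y serve uniformly, which is what makes the case split in step 9 possible.

```python
# Sketch of the Sum axiom of J (illustrative names): evidence is monotone,
# s:F -> [s+t]:F and t:F -> [s+t]:F. So in a case split on x:A or y:B,
# the SAME compound term justifies A or B in both cases.

def sum_term(s, t):
    return f"({s}+{t})"

def justify_disjunction(which_case):
    """Return the justification of A or B, given which disjunct holds."""
    left = "a.x"      # by Application: justifies A or B, when x:A
    right = "b.y"     # by Application: justifies A or B, when y:B
    # Sum makes the answer independent of which_case:
    return sum_term(left, right)

print(justify_disjunction("left"))
print(justify_disjunction("right"))
```

Both calls return the same term, mirroring the fact that step 9 needs one justification that works for both branches of the disjunction.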

3. Single-Agent Possible World Models for J

Here are two models in each of which x:P → x:(P ∧ Q) is not valid. (P and Q are atomic and x is a justification variable.) It was pointed out that a formula t:X might fail at a possible world either because X is not believable there (it is false at some accessible world), or because t is not an appropriate reason for X. The two models illustrate both versions.

First, consider the model M having a single state, Γ, accessible to itself, and with an evidence function such that E(x, Z) = {Γ} for every formula Z. In this model, x serves as ‘universal’ evidence. Use a valuation such that V(P) = {Γ} and V(Q) = ∅. Then one has M, Γ ⊩ x:P but not M, Γ ⊩ x:(P ∧ Q) because, even though x serves as universal evidence, P ∧ Q is not believable at Γ in the Hintikka/Kripke sense, since Q is not true.

Next consider the model N, again having a single state Γ accessible to itself. This time take V to be the mapping assigning {Γ} to every propositional letter. But also set E(x, P) = {Γ} and E(x, Z) = ∅ for Z ≠ P; otherwise E does not matter for this example. Then of course both P and P ∧ Q are believable at Γ, but N, Γ ⊩ x:P and not N, Γ ⊩ x:(P ∧ Q), the latter because x does not serve as evidence for P ∧ Q at Γ.
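Both failure modes can be checked directly. Here is a minimal sketch of the two models (the function `holds` and the dictionary encoding are ours, chosen for brevity): t:X holds at a world when t is admissible evidence for X there and X is believable in the Hintikka/Kripke sense.

```python
# Sketch of the two single-world models, with E as an explicit
# admissible-evidence function (illustrative encoding, not standard).

def holds(model, world, formula):
    """Evaluate a small formula AST at a world of a model."""
    kind = formula[0]
    if kind == "atom":
        return world in model["V"].get(formula[1], set())
    if kind == "and":
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if kind == "just":      # t:X -- t admissible for X AND X believable
        t, X = formula[1], formula[2]
        admissible = world in model["E"](t, X)
        believable = all(holds(model, w, X) for w in model["R"][world])
        return admissible and believable
    raise ValueError(kind)

G = "Gamma"
P, Q = ("atom", "P"), ("atom", "Q")
PQ = ("and", P, Q)

# Model M: x is universal evidence, but Q fails at Gamma.
M = {"V": {"P": {G}}, "R": {G: {G}}, "E": lambda t, X: {G}}
print(holds(M, G, ("just", "x", P)))    # x:P holds
print(holds(M, G, ("just", "x", PQ)))   # fails: P and Q not believable

# Model N: everything is true, but x is evidence only for P.
N = {"V": {"P": {G}, "Q": {G}}, "R": {G: {G}},
     "E": lambda t, X: {G} if X == P else set()}
print(holds(N, G, ("just", "x", P)))    # x:P holds
print(holds(N, G, ("just", "x", PQ)))   # fails: x not admissible
```

In M the believability conjunct fails; in N the admissibility conjunct fails, matching the two versions described above.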

In Hintikka/Kripke models, believability and knowability are essentially semantic notions, but the present treatment of evidence is more of a syntactic nature. For example, the model N above also invalidates x:P → x:(P ∧ P). At first glance this is surprising, since in any standard logic of knowledge or belief □P → □(P ∧ P) is valid. But just because x serves as evidence for P, it need not follow that it also serves as evidence for P ∧ P. The formulas are syntactically different, and effort is needed to recognize that the latter formula is a redundant version of the former. To take this to an extreme, consider the formula x:P → x:(P ∧ P ∧ … ∧ P), where the consequent has as many conjuncts as there are elementary particles in the universe! In brief, Hintikka/Kripke style knowledge is knowledge of propositions, but justification terms justify sentences.

4. Realization Theorems

Here is an example of an S4-derivation realized as an LP-derivation in the style of the Realization theorem. There are two columns in the table below. The first is a Hilbert-style S4-derivation of a modal formula □A ∨ □B → □(□A ∨ B). The second column displays corresponding steps of an LP-derivation of a formula:

x:A ∨ y:B → (a ⋅ !x + b ⋅ y):(x:A ∨ B)

with constant specification

{a:(x:A → x:A ∨ B), b:(B → x:A ∨ B)}.
Comparing derivations in S4 and LP

      Derivation in S4            Derivation in LP
 1.   □A → □A ∨ B                 x:A → x:A ∨ B
 2.   □(□A → □A ∨ B)              a:(x:A → x:A ∨ B)
 3.   □□A → □(□A ∨ B)             !x:x:A → (a ⋅ !x):(x:A ∨ B)
 4.   □A → □□A                    x:A → !x:x:A
 5.   □A → □(□A ∨ B)              x:A → (a ⋅ !x):(x:A ∨ B)
 5′.                              (a ⋅ !x):(x:A ∨ B) → (a ⋅ !x + b ⋅ y):(x:A ∨ B)
 5″.                              x:A → (a ⋅ !x + b ⋅ y):(x:A ∨ B)
 6.   B → □A ∨ B                  B → x:A ∨ B
 7.   □(B → □A ∨ B)               b:(B → x:A ∨ B)
 8.   □B → □(□A ∨ B)              y:B → (b ⋅ y):(x:A ∨ B)
 8′.                              (b ⋅ y):(x:A ∨ B) → (a ⋅ !x + b ⋅ y):(x:A ∨ B)
 8″.                              y:B → (a ⋅ !x + b ⋅ y):(x:A ∨ B)
 9.   □A ∨ □B → □(□A ∨ B)         x:A ∨ y:B → (a ⋅ !x + b ⋅ y):(x:A ∨ B)

Extra steps 5′, 5″, 8′, and 8″ are needed in the LP case to reconcile different internalized proofs of the same formula: (a ⋅ !x):(x:A ∨ B) and (b ⋅ y):(x:A ∨ B). The resulting realization respects Skolem's idea that negative occurrences of existential quantifiers (here over proofs hidden in the modality of provability) are realized by free variables, whereas positive occurrences are realized by functions of those variables.
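One way to see that the realization above is faithful is to apply the forgetful projection, which erases each justification term back to □. The following sketch does this over plain ASCII strings (the function and the string encoding are ours, purely illustrative):

```python
# Sketch: the forgetful projection t:X -> box X, applied to the realized
# LP formula, recovers its S4 source (illustrative string encoding).

import re

def forgetful(formula):
    """Replace every justification prefix 'term:' by the box symbol."""
    # terms are written with letters, '.', '!', '+', spaces, parentheses
    return re.sub(r"\([a-z.!+ ]+\):|![a-z]+:|[a-z]:", "□", formula)

lp = "x:A or y:B -> (a.!x+b.y):(x:A or B)"
print(forgetful(lp))   # the S4 theorem that was realized
```

The output is the modal formula □A or □B -> □(□A or B), i.e., line 9 of the S4 column, confirming that erasing the term structure recovers the original theorem.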

Proof theory plays an important role in the study of Justification Logics. Axiom systems were presented in this article, and a sequent calculus was introduced in (Artemov 1995, Artemov 2001). It has the curious disadvantage that it is cut free but does not have the subformula property; no version with the subformula property is known. More recently, other kinds of proof procedures for Justification Logics have been created. Kurokawa uses hypersequents to provide a comprehensive proof-theoretical treatment of major systems of Justification Logic (Kurokawa 2009), including those which combine implicit and explicit knowledge. These systems are notoriously hard to analyze, and Kurokawa's results constitute a remarkable fundamental contribution to this area.

Currently realizations are being investigated for their own sake (Fitting 2009). The basic results are all algorithmic in nature. For example, in Modal Logic a Replacement Theorem holds just as it does classically: if X ↔ X′ is provable in a normal modal logic then so is F ↔ F′, where F is a formula and F′ is like F except that some subformula occurrence X has been replaced with X′. Indeed, this can be strengthened to establish that if X → X′ is provable, and X has a positive designated subformula occurrence in F, while F′ replaces that occurrence with X′, then F → F′ is provable. This does not carry over in a simple way to Justification Logic. Roughly speaking, an occurrence of X in a formula F of Justification Logic may be within the scope of various justification terms, and in F′ these will need to be ‘adjusted’ to take into account a justification for X → X′. It turns out this is not simple, but an algorithm for doing so has been developed (Fitting 2006, Fitting 2009).

Here is another result having a similar proof. In a modal sequent calculus argument a typical step is the following.

S1 → S2, X        S1 → S2, Y
----------------------------
      S1 → S2, X ∧ Y

The notion of a realization easily extends from formulas to sequents. Suppose both of the premise sequents above have realizations, and one would like a realization for the consequent sequent. The problem is that the two premise realizations may be quite different. Algorithmic machinery for merging them has been developed that does exactly this. This algorithm, in turn, serves as part of a new algorithm for the Realization Theorem itself.

5. Multi-Agent Justification Models

To simplify the language, one can use the forgetful projection which replaces explicit knowledge assertions t:X by JX, where J stands for the so-called justified common knowledge modality. The modality J is a stronger version of common knowledge: JX states that all agents share sufficient evidence for X. In a formal setting, in Kripke models, JX → CX, but not necessarily CX → JX (Artemov 2006).

Informally, the traditional common knowledge modality (Fagin, Halpern, Moses, and Vardi 1995) is represented by the condition

CX ↔ X ∧ EX ∧ E²X ∧ … ∧ EⁿX ∧ …

whereas for the justified common knowledge operator J one has

JX → X ∧ EX ∧ E²X ∧ … ∧ EⁿX ∧ …
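The iterated conjunction above can be computed directly in a finite model. The following sketch (the model, relations, and function names are invented for illustration) shows why the iteration matters: "everybody knows X" can hold at a world where common knowledge of X nevertheless fails.

```python
# Sketch: computing the chain X, X and EX, X and EX and E^2 X, ... as a
# fixpoint in a small two-agent Kripke model (illustrative model).

def everybody_knows(worlds, relations, X_set):
    """EX: worlds where each agent's accessible worlds all satisfy X."""
    return {w for w in worlds if all(R[w] <= X_set for R in relations)}

worlds = {1, 2, 3}
R1 = {1: {1}, 2: {2, 3}, 3: {3}}      # agent 1's accessibility (reflexive)
R2 = {1: {1, 2}, 2: {2}, 3: {3}}      # agent 2's accessibility (reflexive)
X = {1, 2}                            # worlds where X is true

C = set(X)                            # intersect X with EX, E^2 X, ...
while True:
    new = C & everybody_knows(worlds, [R1, R2], C)
    if new == C:
        break
    C = new

print(1 in everybody_knows(worlds, [R1, R2], X))  # mutual knowledge at 1
print(sorted(C))                                  # common knowledge: nowhere
```

Here X is true at world 1 and everybody knows X there, yet the chain empties out after two iterations, so X is common knowledge at no world; the full infinite conjunction is genuinely stronger than any finite prefix.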

Justified common knowledge has the same modal principles as McCarthy's common knowledge (McCarthy, Sato, Hayashi, and Igarashi 1978). In (Cubitt and Sugden 2003) a case is made that David Lewis' version of common knowledge (more properly, belief) is not to be identified with unlimited iteration of knowledge operators, but is much closer to justified common knowledge. (See the encyclopedia article on Common Knowledge.) A good example of such a Lewis-McCarthy-Artemov justified common knowledge assertion, JX, which is stronger than the usual common knowledge, CX, is provided by situations following a public announcement of X (Plaza 2007), after which X holds at all states, not only at reachable states. Note that public announcements are the usual means for attaining common knowledge, and they lead to justified common knowledge J rather than the usual common knowledge C.

The axiomatic description of justified common knowledge J is significantly simpler than that of C. According to (Antonakos 2007), in the standard epistemic scenarios justified common knowledge J is conservative with respect to the usual common knowledge C and hence provides a lighter alternative to the latter.

6. Self-referentiality of Justifications

Let us consider an example suggested by Moore's well-known paradox:

It will rain but I don't believe that it will.

If R stands for it will rain, then a modal formalization is:

M = R ∧ ¬□R.

The Moore sentence M is easily satisfiable, hence consistent, e.g., whenever the weather forecast wrongly shows “no rain”. However, it is impossible to know Moore's sentence because

¬□M = ¬□(R ∧ ¬□R)

holds in any modal logic containing T. Here is a derivation.

  1. (R ∧ ¬□R) → R (logical axiom)
  2. □((R ∧ ¬□R) → R) (Necessitation)
  3. □(R ∧ ¬□R) → □R, (from 2 by Distribution)
  4. □(R ∧ ¬□R) → (R ∧ ¬□R) (Factivity in T)
  5. □(R ∧ ¬□R) → ¬□R (from 4 in Boolean logic)
  6. ¬□(R ∧ ¬□R) (from 3 and 5 in Boolean logic)
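The purely Boolean part of this derivation can be checked mechanically. Abbreviating □(R ∧ ¬□R) as p and □R as q, lines 3 and 5 have the forms p → q and p → ¬q, from which line 6 infers ¬p. A truth-table sketch (names are ours):

```python
# Sketch: verifying the Boolean step of the derivation. With
# p := box(R and not box R) and q := box R, lines 3 and 5 give
# p -> q and p -> ¬q; we check that ¬p follows as a tautology.

from itertools import product

def implies(a, b):
    return (not a) or b

tautology = all(
    implies(implies(p, q) and implies(p, not q), not p)
    for p, q in product([False, True], repeat=2)
)
print(tautology)   # the inference from 3 and 5 to 6 is classically valid
```

Only lines 2 and 4 use modal reasoning (Necessitation and Factivity); everything else is this classical pattern.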

Furthermore, here is how this derivation is realized in LP.

  1. (R ∧ ¬[c ⋅ x]:R) → R (logical axiom)
  2. c:((R ∧ ¬[c ⋅ x]:R) → R) (Constant Specification)
  3. x:(R ∧ ¬[c ⋅ x]:R) → [c ⋅ x]:R (from 2 by Application)
  4. x:(R ∧ ¬[c ⋅ x]:R) → (R ∧ ¬[c ⋅ x]:R) (Factivity)
  5. x:(R ∧ ¬[c ⋅ x]:R) → ¬[c ⋅ x]:R (from 4 by Boolean logic)
  6. ¬x:(R ∧ ¬[c ⋅ x]:R) (from 3 and 5 in Boolean logic)

Note that Constant Specification in line 2 is self-referential.

Copyright © 2011 by
Sergei Artemov <sartemov@gc.cuny.edu>
Melvin Fitting <melvin.fitting@lehman.cuny.edu>
