A different strand, not quite so old, is the use of games for teaching logic. This is probably the right way to think of the medieval game of ‘obligationes’, where a debater tries to drive his opponent into an unnecessary contradiction. We have at least two textbooks of logic from the early sixteenth century that present it as a game for an individual student, and Lewis Carroll's The Game of Logic (1887) is a more recent example in the same genre.
Mathematical
The first half of the twentieth century was an era of increasing rigour and professionalism in logic, and to most logicians of that period the use of games in logic would probably have seemed frivolous. The intuitionist L. E. J. Brouwer expressed this attitude when he accused his opponents of causing mathematics ‘to degenerate into a game’ (as David Hilbert quoted him in 1927). Wittgenstein's language games provoked little response from the logicians. But in the second half of the century the centre of gravity of logical research moved from foundations to techniques, and from about 1960 games were used more and more often in logical papers.
There are two players. In general we can call them ∀ and ∃. The pronunciations ‘Abelard’ and ‘Eloise’ go back to the mid 1980s and usefully fix the players as male and female (though feminist logicians have asked about the propriety of this type-casting). Other names are in common use for the players in particular types of logical game.
The players play by choosing elements of a set called the domain of the game. As they choose, they build up a sequence

a0, a1, a2, ...

of elements of the domain; the resulting sequence is called a play of the game. The game comes with two sets of plays, W∀ and W∃: the plays in W∀ are wins for ∀, and the plays in W∃ are wins for ∃.
We say that a logical game is total if every play is in either W∀ or W∃, so that there are no draws. Unless one makes an explicit exception, logical games are always assumed to be total. (Don't confuse being total with the much stronger property of being determined -- see below.)
It is only for mathematical convenience that the definition above expects the game to continue to infinity even when a player has won at some finite position; there is no interest in anything that happens after a player has won. Many logical games have the property that in every play, one of the players has already won at some finite position; games of this sort are said to be well-founded. An even stronger condition is that there is some finite number n such that in every play, one of the players has already won by the n-th position; in this case we say that the game has finite length.
A strategy for a player is a set of rules that describe exactly how that player should choose, depending on how the other player has chosen at earlier moves. Mathematically, a strategy for ∃ consists of a function which takes each position a at which it is ∃'s turn to move to an element b of the domain; we think of it as an instruction to ∃ to choose b when the game has reached the position a. (Likewise with a strategy for ∀.)
A strategy for a player is said to be winning
if that player wins every play in which he or she uses the strategy,
regardless of what the other player does. At most one of the players
has a winning strategy (since otherwise the players could play their
winning strategies against each other, and both would win,
contradicting that W∀ and W∃ have no plays in
common). Occasionally one meets situations in which it seems that two
players have winning strategies (for example in the forcing games
below), but closer inspection shows that the two players are in fact
playing different games.
A game is said to be determined if one or other of the
players has a winning strategy. There are many examples of games that
are not determined, as Gale and Stewart showed in 1953 using the axiom
of choice. This discovery led to important applications of the notion
of determinacy in the foundations of set theory. (Gale and Stewart also proved a theorem that we shall use repeatedly. Call a game G closed if player ∃ wins every play of G in which she hasn't lost at any finite position. The theorem states that every closed game is determined.)
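For games of finite length the theorem amounts to ordinary backward induction, which is easy to carry out mechanically. Here is a minimal sketch (the tree representation, the labels 'A' for Abelard and 'E' for Eloise, and the toy game are invented for the illustration): every position of a finite game tree gets labelled with the player who has a winning strategy from there, so in particular one of the players has a winning strategy from the starting position.

```python
# A minimal sketch, not from the text above: a finite game is a tree whose
# internal nodes say whose turn it is and whose leaves name the winner.

def winner(node):
    """Label a position with the player ('A' or 'E') who has a winning
    strategy from it, by backward induction."""
    if 'wins' in node:                        # leaf: the play is over
        return node['wins']
    mover = node['turn']
    outcomes = [winner(child) for child in node['children']]
    # The player to move wins iff some move leads to a position he or she wins.
    return mover if mover in outcomes else ('A' if mover == 'E' else 'E')

# Toy game: Abelard moves first, then Eloise.
game = {'turn': 'A', 'children': [
    {'turn': 'E', 'children': [{'wins': 'E'}, {'wins': 'A'}]},
    {'turn': 'E', 'children': [{'wins': 'E'}, {'wins': 'E'}]},
]}
print(winner(game))    # 'E': Eloise has an answer to either of Abelard's moves
```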
In most applications of logical games, the central notion is that of a winning strategy for the player ∃. Often these strategies (or their existence) turn out to be equivalent to something of logical importance that could have been defined without using games -- for example a proof. But games are felt to give a better definition because they quite literally supply some motivation: ∃ is trying to win. This raises a question that is not of much interest mathematically, but it should concern philosophers who use logical games. If we want ∃'s motivation in a game G to have any explanatory value, then we need to understand what is achieved if ∃ does win. In particular we should be able to tell a realistic story of a situation in which some agent called ∃ is trying to do something intelligible, and doing it is the same thing as winning in the game. As Richard Dawkins said, raising the corresponding question for the evolutionary games of Maynard Smith,
The whole purpose of our search ... is to discover a suitable actor to play the leading role in our metaphors of purpose. We ... want to say, ‘It is for the good of ...’. Our quest in this chapter is for the right way to complete that sentence. (The Extended Phenotype, Oxford University Press, Oxford 1982, page 91.)

For future reference let us call this the Dawkins question. In many kinds of logical game it turns out to be distinctly harder to answer than the pioneers of these games realised.
Just as in classical game theory, the definition of logical games
above serves as a clothes horse that we can hang other concepts onto.
For example it is common to have some laws that describe what elements
of the domain are available for a player to choose at a particular move. Strictly
this refinement is unnecessary, because the winning strategies are
not affected if we decree instead that a player who breaks the law
loses immediately; but for many games this way of viewing them seems
unnatural. (For example in Jaakko Hintikka's semantic games,
some steps expect a player to choose an element of the structure,
whereas other steps require a player to choose a formula. We may as
well make it a law that the players must choose elements of these
kinds.) A subtler extension is to restrict the information available
to the players, so that the games are of imperfect
information. Hintikka has explored this possibility in his
Game-Theoretic Semantics.
‘For all x there is y such that R(x,y)’ is true

reduces to the question whether the following holds:

For every object a the sentence ‘There is y such that R(a,y)’ is true.

This in turn reduces to:

For every object a there is an object b such that the sentence ‘R(a,b)’ is true.

In this example, that's as far as Tarski's truth definition will take us.
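The reduction just described is exactly what a recursive evaluator does on a finite structure. A minimal sketch, assuming an invented two-element domain and an invented binary relation R:

```python
# A minimal sketch: Tarski-style evaluation over an invented finite structure.
domain = {0, 1}
R = {(0, 1), (1, 0)}     # invented example relation

# 'For all x there is y such that R(x,y)': the universal quantifier becomes
# a conjunction over the domain, the existential one a disjunction.
print(all(any((a, b) in R for b in domain) for a in domain))   # True
```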
In the late 1950s Leon Henkin noticed that we can intuitively understand some sentences which can't be handled by Tarski's definition. Take for example the infinitely long sentence
For all x0 there is y0 such that for all x1 there is y1 such that ... R(x0,y0,x1,y1,...).

Tarski's approach fails because the string of quantifiers at the beginning is infinite, and we would never reach an end of stripping them off. Instead, Henkin suggested, we should consider the game where a person ∀ chooses an object a0 for x0, then a person ∃ chooses an object b0 for y0, then ∀ chooses a1 for x1, ∃ chooses b1 for y1, and so on for ever. The resulting play is a win for ∃ if and only if the infinite atomic sentence

R(a0,b0,a1,b1,...)

is true. The original sentence is true if and only if player ∃ has a winning strategy for this game. Equivalently, in terms of Skolem functions: the original sentence is true if and only if there are functions f0, f1, f2, ... such that for every choice of a0, a1, a2, ...,

R(a0,f0(a0),a1,f1(a0,a1),a2,f2(a0,a1,a2), ...).

But this condition translates immediately into the language of games; the Skolem functions f0 etc. are a winning strategy for ∃, telling her what to choose at each step in terms of ∀'s earlier choices.
Soon after Henkin's work, Jaakko Hintikka added that the same idea applies with conjunctions and disjunctions. We can regard a conjunction ‘φ ∧ ψ’ as a universally quantified statement expressing ‘Every one of the sentences φ, ψ holds’, so it should be for the player ∀ to choose one of the sentences. As Hintikka put it, to play the game G(φ ∧ ψ), ∀ chooses whether the game should proceed as G(φ) or as G(ψ). Likewise disjunctions become existentially quantified statements about sets of sentences, and they mark moves where the player ∃ chooses how the game should proceed. To bring quantifiers into the same style, he proposed that the game G(∀x φ(x)) proceeds thus: player ∀ chooses an object and provides a name a for it, and the game proceeds as G(φ(a)). (And likewise with existential quantifiers, except that ∃ chooses.) Hintikka also made an ingenious suggestion for introducing negation. Each game G has a dual game which is the same as G except that the players ∀ and ∃ are transposed in both the rules for playing and the rules for winning. The game G(¬φ) is the dual of G(φ).
One can prove that for any first-order sentence φ, interpreted in a fixed structure A, player ∃ has a winning strategy for Hintikka's game G(φ) if and only if φ is true in A in the sense of Tarski. Two features of this proof are interesting. First, if φ is any first-order sentence then the game G(φ) has finite length, and so the Gale-Stewart theorem tells us that it is determined. We infer that ∃ has a winning strategy in exactly one of G(φ) and its dual; so she has a winning strategy in G(¬φ) if and only if she doesn't have one in G(φ). This takes care of negation. And second, if ∃ has a winning strategy for each game G(φ(a)), then after choosing one such strategy fa for each a, she can string them together into a single winning strategy for G(∀x φ(x)) (namely, ‘Wait and see what element a ∀ chooses, then play fa’). This takes care of the clause for universal quantifiers; but the argument uses the axiom of choice, and in fact it is not hard to see that the equivalence of Hintikka's and Tarski's definitions of truth is equivalent to the axiom of choice (given the other axioms of Zermelo-Fraenkel set theory).
It's puzzling that we have here two theories of when a sentence is true, and the theories are not equivalent if the axiom of choice fails. In fact the reason is not very deep. The axiom of choice is needed not because the Hintikka definition uses games, but because it assumes that strategies are deterministic, i.e. that they are single-valued functions giving the user no choice of options. A more natural way of translating the Tarski definition into game terms is to use nondeterministic strategies. (However, Hintikka insists that the correct explication of ‘true’ is the one using deterministic strategies, and so it is Tarski's definition that works only accidentally.)
Computer implementations of these games of Hintikka proved to be a very effective way of teaching the meanings of first-order sentences. One such package was designed by Jon Barwise and John Etchemendy at Stanford and marketed as ‘Tarski's World’. Independently another team at the University of Omsk constructed a Russian version for use at schools for gifted children.
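Game-theoretic evaluation is also easy to program for finite structures. The sketch below is a minimal illustration (it is not based on either of the packages just mentioned; the formula encoding, the function name eloise_wins and the example structure are invented): it declares a sentence true exactly when ∃ has a winning strategy, by letting each player range over all of his or her available moves. Negation is handled through the dual game, which is legitimate here because these games have finite length and so are determined.

```python
# A minimal sketch, assuming formulas are nested tuples and a structure is a
# finite domain together with named relations (all invented for the example).

def eloise_wins(phi, struct, env):
    """True exactly when ∃ has a winning strategy in G(phi) at assignment env."""
    op = phi[0]
    if op == 'rel':                      # atomic sentence: ∃ wins iff it holds
        _, name, variables = phi
        return tuple(env[v] for v in variables) in struct['rels'][name]
    if op == 'not':                      # dual game: the players swap roles
        return not eloise_wins(phi[1], struct, env)
    if op == 'or':                       # ∃ chooses which disjunct to play
        return any(eloise_wins(p, struct, env) for p in phi[1:])
    if op == 'and':                      # ∀ chooses which conjunct to play
        return all(eloise_wins(p, struct, env) for p in phi[1:])
    if op == 'ex':                       # ∃ chooses an element for the variable
        _, x, body = phi
        return any(eloise_wins(body, struct, {**env, x: a}) for a in struct['dom'])
    if op == 'all':                      # ∀ chooses an element for the variable
        _, x, body = phi
        return all(eloise_wins(body, struct, {**env, x: a}) for a in struct['dom'])
    raise ValueError(op)

struct = {'dom': {0, 1}, 'rels': {'R': {(0, 1), (1, 0)}}}
phi = ('all', 'x', ('ex', 'y', ('rel', 'R', ('x', 'y'))))
print(eloise_wins(phi, struct, {}))      # True, agreeing with the Tarski evaluation
```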
In the published version of his John Locke lectures at Oxford, Hintikka in 1973 raised the Dawkins question (see above) for these games. His answer was that one should look to Wittgenstein's language games, and the language games for understanding quantifiers are those which revolve around seeking and finding. In the corresponding logical games one should think of ∃ as Myself and ∀ as a hostile Nature who can never be relied on to present the object I want; so to be sure of finding it, I need a winning strategy. This story was never very convincing; the motivation of Nature is irrelevant, and nothing in the logical game corresponds to seeking. In retrospect it is a little disappointing that nobody took the trouble to look for a better story. It may be more helpful to think of a winning strategy for ∃ in G(φ) as a kind of proof (in a suitable infinitary system) that φ is true.
Later Jaakko Hintikka extended the ideas of this section in two directions, namely to natural language semantics and to games of imperfect information. The name Game-Theoretic Semantics, GTS for short, has come to be used to cover both of these extensions.
The games described in this section adapt almost trivially to many-sorted logic: for example the quantifier ∀x, where x is a variable of some particular sort, is an instruction for player ∀ to choose an element of that sort.
This immediately gives us the corresponding games for second-order logic, if we think of the elements of a structure as one sort, the sets of elements as a second sort, the binary relations as a third and so on. It follows that we have, quite routinely, game rules for most second-order languages.
Games in the same style can be given for modal logics. Here a structure A consists of a set of states, one or more binary relations R whose pairs we picture as arrows between the states, and unary predicates Pi which hold at some states and not at others; a modal formula φ is evaluated by a game that starts at a state s of A.
Thus if φ is Pi, then player ∃ wins at once if s is in Pi, and otherwise player ∀ wins at once. The formulas ¬φ, φ ∧ ψ and φ ∨ ψ behave as in Hintikka's games above; for example φ ∨ ψ instructs player ∃ to choose whether the game shall continue as for φ or for ψ.
If the formula φ is ◊ψ, then player ∃ chooses an arrow from s to a state t (i.e. a state t such that the pair (s, t) is in the relation R), and the game then proceeds from the state t according to the instructions ψ. The rule for □ψ is the same except that player ∀ makes the choice. Finally we say that the formula φ is true at s in A if player ∃ has a winning strategy for this game based on φ and starting at s.
These games stand to the usual Kripke-style truth definition for modal logic in the same way as Hintikka's games stand to Tarski's truth definition for first-order logic.
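These modal game rules can likewise be run mechanically on a finite structure. A minimal sketch, assuming a single arrow relation R, predicates given as sets of states, and an invented formula encoding; a formula counts as true at a state exactly when ∃ has a winning strategy in the game starting there:

```python
# A minimal sketch: game-based evaluation of modal formulas on an invented
# finite structure with one arrow relation R and unary predicates P.

def eloise_wins(phi, struct, s):
    op = phi[0]
    if op == 'atom':                 # Pi: ∃ wins at once iff s is in Pi
        return s in struct['P'][phi[1]]
    if op == 'not':                  # dual game: the players swap roles
        return not eloise_wins(phi[1], struct, s)
    if op == 'or':                   # ∃ chooses how the game continues
        return any(eloise_wins(p, struct, s) for p in phi[1:])
    if op == 'and':                  # ∀ chooses how the game continues
        return all(eloise_wins(p, struct, s) for p in phi[1:])
    if op == 'dia':                  # ◊: ∃ chooses an arrow leaving s
        return any(eloise_wins(phi[1], struct, t)
                   for (u, t) in struct['R'] if u == s)
    if op == 'box':                  # □: ∀ chooses an arrow leaving s
        return all(eloise_wins(phi[1], struct, t)
                   for (u, t) in struct['R'] if u == s)
    raise ValueError(op)

struct = {'R': {(0, 1), (1, 1)}, 'P': {'p': {1}}}
print(eloise_wins(('box', ('dia', ('atom', 'p'))), struct, 0))   # True
```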
One natural part of such a theory would be a purely structural necessary and sufficient condition for two structures to be elementarily equivalent. Roland Fraïssé, a French-Algerian, was the first to find a usable necessary and sufficient condition. It was rediscovered a few years later by the Kazakh logician A. D. Taimanov, and it was reformulated in terms of games by the Polish logician Andrzej Ehrenfeucht. The games are now known as Ehrenfeucht-Fraïssé games, or sometimes as back-and-forth games. They have turned out to be one of the most versatile ideas in twentieth-century logic. They adapt fruitfully to a wide range of logics and structures.
In a back-and-forth game there are two structures A and
B, and two players who are commonly called Spoiler and
Duplicator. (The name is due to Joel Spencer in the early 1990s. More
recently Neil Immerman suggested Samson and Delilah, using the same
initials; this places Spoiler as the male player ∀ and Duplicator as the female ∃.)
Each
step in the game consists of a move of Spoiler, followed by a move of
Duplicator. Spoiler chooses an element of one of the two structures,
and Duplicator must then choose an element of the other structure. So
after n steps, two sequences have been chosen, one from
A and one from B:
(a0,...,an-1; b0,...,bn-1).

This position is a win for Spoiler if and only if some atomic formula (of one of the forms ‘R(v0,...,vk-1)’ or ‘F(v0,...,vk-1) = vk’ or ‘v0 = v1’, or one of these with different variables) is satisfied by (a0,...,an-1) in A but not by (b0,...,bn-1) in B, or vice versa. The condition for Duplicator to win is different in different forms of the game. In the simplest form, EF(A,B), a play is a win for Duplicator if and only if no initial part of it is a win for Spoiler (i.e. she wins if she hasn't lost by any finite stage). For each natural number m there is a game EFm(A,B); in this game Duplicator wins after m steps provided she has not yet lost. All these games are determined, by the Gale-Stewart Theorem. The two structures A and B are said to be back-and-forth equivalent if Duplicator has a winning strategy for EF(A,B), and m-equivalent if she has a winning strategy for EFm(A,B).
One can prove that if A and B are m-equivalent for every natural number m, then they are elementarily equivalent. On the other hand a winning strategy for Spoiler in EFm(A,B) can be converted into a first-order sentence that is true in exactly one of A and B, and in which the nesting of quantifier scopes has at most m levels. So we have our necessary and sufficient condition for elementary equivalence, and a bit more besides.
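For very small finite structures one can decide who wins EFm(A,B) by brute force, simply trying all of Spoiler's moves and all of Duplicator's replies. A minimal sketch, assuming structures with one binary relation E (the relation name and the two example structures, a directed 3-cycle and a directed 2-cycle, are invented for the illustration):

```python
# A minimal sketch, assuming each structure is a finite domain with one
# binary relation E (names and examples invented for the illustration).
from itertools import product

def partial_iso(A, B, aseq, bseq):
    """The position is not yet a win for Spoiler: the chosen tuples
    satisfy the same atomic formulas in A and in B."""
    for i, j in product(range(len(aseq)), repeat=2):
        if (aseq[i] == aseq[j]) != (bseq[i] == bseq[j]):
            return False
        if ((aseq[i], aseq[j]) in A['E']) != ((bseq[i], bseq[j]) in B['E']):
            return False
    return True

def duplicator_wins(A, B, m, aseq=(), bseq=()):
    if not partial_iso(A, B, aseq, bseq):
        return False
    if m == 0:
        return True
    # Spoiler picks a side and an element; Duplicator must have an answer.
    for a in A['dom']:
        if not any(duplicator_wins(A, B, m - 1, aseq + (a,), bseq + (b,))
                   for b in B['dom']):
            return False
    for b in B['dom']:
        if not any(duplicator_wins(A, B, m - 1, aseq + (a,), bseq + (b,))
                   for a in A['dom']):
            return False
    return True

A = {'dom': {0, 1, 2}, 'E': {(0, 1), (1, 2), (2, 0)}}   # directed 3-cycle
B = {'dom': {0, 1}, 'E': {(0, 1), (1, 0)}}              # directed 2-cycle
print(duplicator_wins(A, B, 1), duplicator_wins(A, B, 3))   # True False
```

Here Duplicator survives one round but not three, so some first-order sentence of quantifier depth at most 3 distinguishes the two structures.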
If A and B are back-and-forth equivalent, then
certainly they are elementarily equivalent; but in fact back-and-forth
equivalence turns out to be the same as elementary equivalence in an
The theory behind back-and-forth games uses very few assumptions about the logic in question. As a result, these games are one of the few model-theoretic techniques that apply as well to finite structures as they do to infinite ones, and this makes them one of the cornerstones of theoretical computer science. One can use them to measure the expressive strength of formal languages, for example database query languages. A typical result might say, for example, that a certain language can't distinguish between ‘even’ and ‘odd’; we would prove this by finding, for each level n of complexity of formulas of the language, a pair of finite structures for which Duplicator has a winning strategy in the back-and-forth game of level n, but one of the structures has an even number of elements and the other has an odd number.
There is also a kind of back-and-forth game that corresponds to our modal semantics above in the same way as Ehrenfeucht-Fraïssé games correspond to Hintikka's game semantics for first-order logic. The players start with a state s in the structure A and a state t in the structure B. Spoiler and Duplicator move alternately, as before. Each time he moves, Spoiler chooses whether to move in A or in B, and then Duplicator must move in the other structure. A move is always made by going forwards along an arrow from the current state. If between them the two players have just moved to a state s' in A and a state t' in B, and some predicate Pi holds at just one of s' and t', then Duplicator loses at once. Also she loses if there are no available arrows for her to move along; but if Spoiler finds there are no available arrows for him to move along in either structure, then Duplicator wins. If the two players play this game with given starting states s in A and t in B, and both structures have just finitely many states, then one can show that Duplicator has a winning strategy if and only if the same modal sentences are true at s in A as are true at t in B.
There are many generalisations of this result, some of them involving the following notion. Let Z be a binary relation which relates states of A to states of B. Then we call Z a bisimulation between A and B if Duplicator can use Z as a nondeterministic winning strategy in the back-and-forth game between A and B where the first pair of moves of the two players is to choose their starting states. In computer science the notion of a bisimulation is crucial for the understanding of A and B as systems; it expresses that the two systems interact with their environment in the same way as each other, step for step. But a little before the computer scientists introduced the notion, essentially the same concept appeared in Johan van Benthem's PhD thesis on the semantics of modal logic (1976).
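For finite systems the largest bisimulation can be computed as a greatest fixed point: start with all pairs of states satisfying the same predicates, and repeatedly discard a pair if from it one system has a move that the other cannot match inside the relation. A minimal sketch, with invented state names, one arrow relation per system, and a labelling of each state by the predicates true at it:

```python
# A minimal sketch: greatest bisimulation between two invented finite systems.

def bisimulation(A, B):
    """Largest relation Z such that related states satisfy the same predicates
    and every move on either side can be matched by a move on the other."""
    Z = {(s, t) for s in A['states'] for t in B['states']
         if A['label'][s] == B['label'][t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in set(Z):
            succ_s = [x for (u, x) in A['arrows'] if u == s]
            succ_t = [y for (u, y) in B['arrows'] if u == t]
            ok = (all(any((x, y) in Z for y in succ_t) for x in succ_s) and
                  all(any((x, y) in Z for x in succ_s) for y in succ_t))
            if not ok:
                Z.discard((s, t))
                changed = True
    return Z

A = {'states': {0, 1}, 'arrows': {(0, 1), (1, 1)}, 'label': {0: set(), 1: {'p'}}}
B = {'states': {'a', 'b', 'c'}, 'arrows': {('a', 'b'), ('b', 'c'), ('c', 'c')},
     'label': {'a': set(), 'b': {'p'}, 'c': {'p'}}}
print((0, 'a') in bisimulation(A, B))    # True: 0 and a are bisimilar
```

In particular the states 0 and a satisfy the same modal formulas, in line with the result stated above for finite systems.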
then she is entitled to choose one of the sentences and say ‘OK, I'll prove this one’. (In fact if the examiner is an intuitionist, he may insist that she choose one of the sentences to prove.) On the other hand if the sentence is a conjunction φ ∧ ψ, then the examiner, being an examiner, might well choose one of the conjuncts himself and invite her to prove that one. If she knows how to prove the conjunction then she certainly knows how to prove the conjunct.
The case of φ → ψ is a little subtler. She will probably want to start by assuming φ in order to deduce ψ; but there is some risk of confusion because the sentences that she has written down so far are all of them things to be proved, and φ is not a thing to be proved. The examiner can help her by saying ‘I'll assume φ, and let's see if you can get to ψ from there’. At this point there is a chance that she sees a way of getting to ψ by deducing a contradiction from φ; so she may turn the tables on the examiner and invite him to show that his assumption is consistent, with a view to proving that it isn't. The symmetry is not perfect: he was asking her to show that a sentence is true everywhere, while she is inviting him to show that a sentence is true somewhere. Nevertheless we can see a sort of duality.
Ideas of this kind lie behind the dialectical games of Paul Lorenzen. He showed that with a certain amount of pushing and shoving, one can write rules for the game which have the property that the prover has a winning strategy if and only if the sentence that she is presented with at the beginning is a theorem of intuitionistic logic. Lorenzen called the player who is trying to prove the sentence the Proponent and the other player the Opponent. (In the medieval version they are usually Respondens and Opponens.) Almost as in the medieval obligationes, the Opponent wins by driving the Proponent to a point where the only moves available to her are blatant self-contradictions.
Lorenzen claimed that the rules of his games could be justified on a
pre-logical basis, and so they formed a foundation for logic.
Unfortunately any ‘justification’ involves a convincing
answer to the Dawkins question, and this Lorenzen never provided. For
example he spoke of moves as ‘attacks’, even when (like the
examiner's choice at a conjunction above) they look more like
help than hostility. To repair Lorenzen's omission, one certainly needs
to distinguish between different stances that a person might take in an
argument: stating, assuming, conceding, querying, attacking, committing
oneself. Whether it is really possible to define all these notions in a
pre-logical way is a moot point. But perhaps this is unimportant. A
more positive view is that this kind of refinement serves to link
Lorenzen's dialogues to
In any case, Lorenzen's games stand as an important paradigm of what recent proof theorists have called semantics of proofs. A semantics of proofs gives a ‘meaning’ not just to the notion of being provable, but to each separate step in a proof. It answers the question ‘What do we achieve by making this particular move in the proof?’ During the 1990s a number of workers at the logical end of computer science looked for games that would stand to linear logic and some other proof systems in the same way as Lorenzen's games stood to intuitionist logic. Andreas Blass, and then later Samson Abramsky and colleagues, gave games that corresponded to parts of linear logic, but at the time of writing we don't yet have a perfect correspondence between game and logic. This example is particularly interesting because the answer to the Dawkins question should give an intuitive interpretation of the laws of linear logic, a thing that this logic has badly needed. The games of Abramsky et al. tell a story about two interacting systems. But while he began with games in which the players politely take turns, Abramsky's more recent work allows the players to act ‘in a distributed, asynchronous fashion’, taking notice of each other only when they choose to. These games are no longer in the normal format of logical games, and their real-life interpretation raises a host of new questions.
To show that the house can be built to order, we need to show that
each builder separately can carry out his or her appointed task,
regardless of what the other builders do. So we imagine each builder as
player ∃ in a game where all the other players are lumped together as ∀, and we aim to prove that ∃ has a
winning strategy for this game. When we have proved this for each
builder separately, we can imagine them going to work, each with their
own winning strategy. They all win their respective games and the
result is one beautiful house.
More technically, the elements of the structure A are fixed
in advance, say as a0, a1,
a2 etc., but the properties of these elements have
to be settled by the play. Each player moves by throwing in a set of
atomic or negated atomic statements about the elements, subject only to
the condition that the set consisting of all the statements thrown in
so far must be consistent with a fixed set of axioms written down
before the game. (So throwing in a negated atomic sentence ¬φ has the effect of preventing any player from adding φ at a later stage.) At the
end of the joint play, the set of atomic sentences thrown in has a
canonical model, and this is the structure A; there are ways
of ensuring that it is a model of the fixed set of axioms. A possible
property P of A is said to be enforceable if a
builder who is given the task of making P true of A has a
winning strategy. A central point (due essentially to Ehrenfeucht) is
that the conjunction of a countably infinite set of enforceable
properties is again enforceable.
The name ‘forcing’ comes from an application of related ideas by Paul Cohen to construct models of set theory in the early 1960s. Abraham Robinson adapted it to make a general method for building countable structures, and Martin Ziegler introduced the game setting. More recently Robin Hirsch and Ian Hodkinson have used related games to settle some old questions about relation algebras.
Forcing games are a healthy example to bear in mind when thinking about the Dawkins question. They remind us that in logical games it need not be helpful to think of the players as opposing each other.
These games are important in the theory of definitions. Suppose we
have a collection A of objects and a family S of
properties; each property cuts A into the set of those objects
that have the property and the set of those that don't. Let ∃ cut, starting with the whole set A and using a property in S as a knife; let ∀ choose one of the pieces (which are subsets of A) and give it back to ∃ to cut again, once more using a property in S; and so on. Let ∃ lose as soon as ∀ chooses an empty piece. We say that (A,S) has rank at most m if ∀ has a strategy which ensures that ∃ will lose before her m-th move. The rank of (A,S) gives valuable
information about the family of subsets of A definable by
properties in S.
Variations of this game, allowing a piece to be cut into infinitely
many smaller pieces, are fundamental in the branch of model theory
called stability theory. Broadly speaking, a theory is
‘good’ in the sense of stability theory if, whenever we
take a model A of the theory and S the set of
first-order formulas in one free variable with parameters from
A, the structure (A,S) has
‘small’ rank. A different variation is to require that at
each step, ∃ divides into two each of the pieces that have survived from earlier steps, and again she loses as soon as one of the cut fragments is empty. (In this version ∀ is redundant.) With
this variation, the rank of (A,S) is called its
Vapnik-Chervonenkis dimension; this notion is used in
computational learning theory.
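On this reading of the last variation, the Vapnik-Chervonenkis-style rank of a small finite pair (A,S) can be computed by brute force: it is the largest m for which some m properties in S cut A into 2^m pieces, all of them non-empty. A minimal sketch, with an invented eight-element set and three invented properties:

```python
# A minimal sketch, assuming A is a finite set and S a finite family of
# properties given as subsets of A (both invented for the example).
from itertools import combinations, product

def cutter_survives(A, S, m):
    """True if m cuts are possible: some m properties in S leave every one
    of the 2**m resulting fragments of A non-empty."""
    for props in combinations(S, m):
        if all(any(all((a in P) == keep for P, keep in zip(props, pattern))
                   for a in A)
               for pattern in product([True, False], repeat=m)):
            return True
    return False

A = set(range(8))
S = [{0, 1, 2, 3}, {0, 1, 4, 5}, {0, 2, 4, 6}]   # three independent properties
print(max(m for m in range(len(S) + 1) if cutter_survives(A, S, m)))   # 3
```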
Rabin's theorem has any number of useful consequences. For example Dov Gabbay used it to prove the decidability of some modal logics. But Rabin's proof, using automata, was notoriously difficult to follow. Yuri Gurevich and Leo Harrington, and independently Andrei Muchnik, found much simpler proofs in which the automaton is a player in a game. This result is one of several that connect games with automata.
Wilfrid Hodges <W.Hodges@qmw.ac.uk>