Notes to Logic and Artificial Intelligence
1. For readers who would like a better orientation to the field of AI, I recommend Russell & Norvig 2010.
2. See, for instance, Nilsson 1995.
3. For two debates, see Volume 3, Number 3 of Computational Intelligence, devoted to McDermott 1987, and the later exchange Nilsson 1991; Birnbaum 1991.
4. For some of the historical background, see Davis 1988.
5. See Stefik 1995 for general background on expert systems. For information concerning explanation, see Moore 1995a; Clancey 1983.
6. For a good example of the use of these intuitions to motivate a system of logic, see the extended argument in Hintikka 1962 that the modal logic S4 is the correct logic of knowledge.
7. The submissions to the 1989 conference were unclassified as to topic; I sampled every other article, a total of 522. The 1998 conference divided its contributed articles into 26 topical sessions; I sampled the first paper in each of these sessions.
8. In the decade from 1990 to 1999 I counted one JPL publication by an AI researcher, Boutilier 1996, and five papers showing some AI influence; all of these dealt with nonmonotonic logic.
9. This includes robots (or “softbots”) that navigate artificial environments such as the Internet or virtual worlds as well as embodied robots that navigate the physical world.
10. I was surprised at first to hear the AI community refer to its logical advocates as logicists. On reflection, it seems to me much better to think of logicist projects in this general sense, as proposals to apply what Alonzo Church called “the logistic method” in seeking to understand reasoning in various domains. It is far too restrictive to narrowly associate logicism with Frege’s program.
11. Data integration is one such area. See Levy 2000. Large-scale knowledge representation is another. See Lenat & Guha 1989.
12. See Reiter 2001 for an extended contribution to cognitive robotics, with references to some of the other literature in this area. Reiter's book also contains self-contained chapters on the Situation Calculus and the problems of formalizing reasoning about action and change. I recommend these chapters to anyone wishing to follow up on the topics discussed in Section 4. Another extended treatment of action formalisms and issues is Shanahan 1997.
13. But much of the work in this last area has not made heavy use of logical techniques. Qualitative physics, together with the formalization of other forms of qualitative reasoning, constitutes an independent specialty in AI, different in many ways from logical AI. But the two specialties have certainly influenced each other. For information concerning qualitative reasoning, consult Kuipers 1993; Weld & de Kleer 1990; Forbus 1988.
15. John McCarthy makes a similar point, illustrating it with an example, in McCarthy 1993a.
16. This very difficult and not particularly well-defined problem was very much on the minds of many AI researchers in the area that later became knowledge representation, but it has not proved to be a productive focus for the field. Natural language interpretation has developed into a separate field, largely concerned with less sweeping problems, such as automated speech-to-speech discourse, data mining, and text summarization. Logical techniques have been used with some success in this area, but it is fair to say that natural language interpretation has not been the best showcase for logical ideas. Even the problem of providing an adequate semantic interpretation of generic constructions (a natural application of nonmonotonic logic) has turned out to be problematic. See Krifka et al. 1995 for a general discussion of the issues.
17. This use of the word ‘frame’ is unconnected to the use of the term in the “frame problem,” and is not to be confused with that problem.
18. This is a qualitative sense of inertia, meaning simply that the truth or falsity of an atomic formula persists across a change.
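To illustrate, this inertia assumption is commonly captured by a frame axiom of the familiar Situation Calculus form (the predicate and function names used here are schematic, not drawn from any particular formalization):

Holds(f, s) ∧ ¬Affects(a, f, s) → Holds(f, Result(a, s)),

which says that an atomic fact f true in situation s remains true in the situation resulting from action a, unless a affects f.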
19. The analogy to modal logics of provability inspired by Gödel's work (Boolos 1993) has, of course, been recognized in later work in nonmonotonic logic. But it has not been a theme of major importance.
20. See Konolige 1988.
21. Readers interested in the historical aspects of the material discussed in this section might wish to compare it to Ohrstrom & Hasle 1995.
22. For additional historical background on Prior’s work, see Copeland 1996.
23. In retrospect, the term "situation" is not entirely fortunate, since it was later adopted independently and in quite a different sense by the situation semanticists. (See, for instance, Seligman & Moss 1996.) In the AI literature, the term "state" is often used interchangeably with "situation", and as far as I can see, without causing any confusion: the connections with physical states, as well as with the more general states of any complex dynamic system, are entirely appropriate.
24. The early versions of the Situation Calculus were meant to be compatible with concurrent cases, i.e., with cases in which there are multiple planning agents, possibly acting simultaneously. But most of the logical analyses have been devoted to the single-agent case.
25. Carnap’s attempts to formalize dispositional terms and inductive methods are classical examples of the problems that emerge in the formalization of empirical science.
26. For information about planning under uncertainty, see, for instance, DeJong & Bennett 1989; Bacchus et al. 1999; Boutilier et al. 1996.
27. Examples are Dennett 1987 and Fodor 1987.
28. See Schubert 1990; Reiter 1993; Reiter 2001.
30. This way of putting it is a little misleading for the Situation Calculus, since there is no robust notion of performing an action; instead, you consider the results of performing hypothetical action sequences. Even so, the point that the theory of unsuccessful actions has not been explored holds up.
31. Effects of actions that are delayed in time are a separate problem, which, as far as I know, no one has solved.
32. The relationship between an action and the occurrence of its conventional consequences is complicated, of course, by the "imperfective paradox" (see Lascarides 1992; Dowty 1977). Some of the work on AI theories of action and change is informed by these complexities; see Steedman 1995. But for the most part, they have not been taken into account in the AI literature.
33. Turner uses a discrete temporal logic rather than the Situation Calculus. But for uniformity of presentation I have used the Situation Calculus to present the ideas.
34. In explanation problems, one is reasoning backwards in time. Here, information is provided about a series of occurring states, and the problem is to find actions that account for the occurrences.
35. For information about the philosophical tradition, see Hintikka 1986. Also, see Laux & Wansing 1995.
36. A personal recollection: I was certainly aware of this case in the early 1970s, but did not devote much attention to it because it seemed to me that the generalization from the single-agent case was relatively trivial and did not pose any very interesting logical challenges.
37. See, for instance, Simon 1982a; 1982b; Russell & Wefald 1991.
38. Although this topic has received attention more recently in Situation Theory, the logical issues, in my opinion, have not been illuminated by this work.
39. For more information about qualitative physics, see Forbus 1988, Kuipers 1993, and Forbus 1996a.
40. See The Common Sense Problem Page for this problem and others.
41. See Stefik 1995 for background on considerations having to do with knowledge engineering.
42. For background on quantitative models of preference and decision, see Doyle & Thomason 1999. For work in AI on intentions, see, for instance Konolige & Pollack 1993; Cohen & Levesque 1990; Sadek 1992; Pollack 1992.