Notes to Logic and Artificial Intelligence
7. The submissions to the 1989 conference were unclassified as to topic; I sampled every other article, a total of 522. The 1998 conference divided its contributed articles into 26 topical sessions; I sampled the first paper in each of these sessions.
9. This includes robots (or “softbots”) that navigate artificial environments such as the Internet or virtual worlds as well as embodied robots that navigate the physical world.
10. I was surprised at first to hear the AI community refer to its logical advocates as logicists. On reflection, it seems to me much better to think of logicist projects in this general sense, as proposals to apply what Alonzo Church called “the logistic method” in seeking to understand reasoning in various domains. It is far too restrictive to narrowly associate logicism with Frege's program.
12. See Reiter 2001 for an extended contribution to cognitive robotics, with references to some of the other literature in this area. Reiter's book also contains self-contained chapters on the Situation Calculus and the problems of formalizing reasoning about action and change. I recommend these chapters to anyone wishing to follow up on the topics discussed in Section 4. Another extended treatment of action formalisms and issues is Shanahan 1997.
13. But much of the work in this last area has not made heavy use of logical techniques. Qualitative physics and the formalization of other forms of qualitative reasoning constitute an independent specialty in AI, different in many ways from logical AI. But the two specialties have certainly influenced each other. For information concerning qualitative reasoning, consult Kuipers 1993; Weld & de Kleer 1990; Forbus 1988.
16. This very difficult and not particularly well-defined problem was very much on the minds of many AI researchers in the area that later became knowledge representation, but it has not proved to be a productive focus for the field. Natural language interpretation has developed into a separate field that is largely concerned with less sweeping problems, such as automated speech-to-speech discourse, data mining, and text summarization. Logical techniques have been used with some success in this area, but it is fair to say that natural language interpretation has not been the best showcase for logical ideas. Even the problem of providing an adequate semantic interpretation of generic constructions--a natural application of nonmonotonic logic--has turned out to be problematic. See Krifka et al. 1995 for a general discussion of the issues.
17. This use of the word ‘frame’ is unconnected to the use of the term in the “frame problem,” and is not to be confused with that problem.
18. This is a qualitative sense of inertia, meaning simply that the truth or falsity of an atomic formula persists across a change.
19. The analogy to modal logics of provability inspired by Gödel's work (Boolos 1993) has, of course, been recognized in later work in nonmonotonic logic. But it has not been a theme of major importance.
23. In retrospect, the term “situation” is not entirely fortunate, since it was later adopted independently and in quite a different sense by the situation semanticists. (See, for instance, Seligman & Moss 1996.) In the AI literature, the term “state” is often used interchangeably with “situation”, and as far as I can see, without causing any confusion: the connections with physical states, as well as with the more general states of any complex dynamic system, are entirely appropriate.
24. The early versions of the Situation Calculus were meant to be compatible with concurrent cases, i.e., with cases in which there are multiple planning agents, possibly acting simultaneously. But most of the logical analyses have been devoted to the single-agent case.
25. Carnap's attempts to formalize dispositional terms and inductive methods are classical examples of the problems that emerge in the formalization of empirical science.
30. This way of putting it is a little misleading for the Situation Calculus, since there is no robust notion of performing an action; instead, you consider the results of performing hypothetical action sequences. Even so, the point that the theory of unsuccessful actions has not been explored holds up.
31. Effects of actions that are delayed in time are a separate problem, which, as far as I know, no one has solved.
32. The relationship between an action and the occurrence of its conventional consequences is complicated, of course, by the “imperfective paradox” (see Lascarides 1992; Dowty 1977). Some of the work on AI theories of action and change is informed by these complexities; see Steedman 1995; 1998. But for the most part, they have not been taken into account in the AI literature.
33. Turner uses a discrete temporal logic rather than the Situation Calculus. But for uniformity of presentation I have used the Situation Calculus to present the ideas.
34. In explanation problems, one is reasoning backwards in time. Here, information is given about a series of states that occurred, and the problem is to supply actions that account for the occurrences.
36. A personal recollection: I was certainly aware of this case in the early 1970s, but did not devote much attention to it because it seemed to me that the generalization from the single-agent case was relatively trivial and did not pose any very interesting logical challenges.
38. Although this topic has received attention more recently in Situation Theory, the logical issues, in my opinion, have not been illuminated by this work.
42. For background on quantitative models of preference and decision, see Doyle & Thomason 1999. For work in AI on intentions, see, for instance, Konolige & Pollack 1993; Cohen & Levesque 1990; Sadek 1992; Pollack 1992.