Notes to Logic-Based Artificial Intelligence

1. For readers who would like an in-depth orientation to the field of AI, I recommend Russell & Norvig 2010.

2. For some of the historical background, see Davis 1988.

3. See Stefik 1995 for general background on expert systems. For information concerning explanation, see Moore 1995a; Clancey 1983.

4. One might be surprised at first to hear the AI community refer to its logical advocates as logicists. On reflection, it seems reasonable to think of logicist projects this way, as proposals to apply what Alonzo Church called “the logistic method” to reasoning in various domains. One need not narrowly associate logicism with Frege, Russell, and Whitehead and their programs for formalizing mathematics.

5. The submissions to the 1989 conference were unclassified as to topic; every other article was sampled, a total of 522. The 1998 conference divided its contributed articles into 26 topical sessions; the first paper in each of these sessions was sampled.

6. Data integration is one such area. See Levy 2000. Large-scale knowledge representation is another. See Lenat & Guha 1989.

7. See Reiter 2001 for an extended contribution to cognitive robotics, with references to some of the other literature in this area. Reiter’s book also contains self-contained chapters on the Situation Calculus and the problems of formalizing reasoning about action and change. These chapters may be useful to anyone wishing to follow up on the topics discussed in Section 4. Also see Levesque & Lakemeyer 2008.

8. For a survey of research in multiagent systems, see Chopra 2018.

9. Qualitative physics is an independent specialty in AI, different in many ways from logical AI. But the two specialties have certainly influenced each other.

10. At the time, this difficult and not particularly well-defined problem was very much on the minds of many AI researchers, but it has not proved to be a productive focus for logical AI. Natural language interpretation has since developed into a separate field almost entirely concerned with specialized language technologies, such as automated speech-to-speech discourse, data mining, and text summarization.

11. When Minsky speaks of a “frame”, he has in mind an information nexus in an object-oriented system of knowledge representation. His use of the word is unconnected to the use of ‘frame’ in the frame problem, and the two should not be confused.

12. The analogy to modal logics of provability inspired by Gödel’s work, such as Boolos 1993, has, of course, been recognized in later work in nonmonotonic logic. But it has not been a theme of major importance.

14. Readers interested in the historical aspects of the material discussed in this section might wish to compare it to Ohrstrom & Hasle 1995. For additional historical background on Prior’s work, see Copeland 1996.

15. In retrospect, the term ‘situation’ is not entirely fortunate, since it was later adopted independently and in quite a different sense by the situation semanticists. (See, for instance, Seligman & Moss 1996.) In the AI literature, the term ‘state’ is often used interchangeably with ‘situation’, without, as far as I can see, causing any confusion. The connections with physical states, as well as with the more general states of any complex dynamic system, are entirely appropriate.

16. The early versions of the Situation Calculus were meant to be compatible with concurrent applications involving multiple planning agents, possibly acting simultaneously. But most of the logical analyses have been devoted to the single-agent case.

17. Carnap’s attempts to formalize dispositional terms and inductive methods are classical examples of the problems that emerge in the formalization of empirical science.

18. For information about planning under uncertainty, see, for instance, DeJong & Bennett 1989; Bacchus et al. 1999; Boutilier et al. 1996.

19. Examples are Dennett 1987 and Fodor 1987.

20. In the AI community, the term for a problem so difficult that solving it would involve overcoming just about every obstacle to achieving human-level intelligence is “AI-complete.” The Frame Problem is not AI-complete.

23. This way of putting it is a little misleading for the Situation Calculus, which has no robust notion of performance, considering only the outcomes associated with hypothetical action sequences. Nevertheless, the point remains that misexecutions were neglected in early work on planning. Later work pays more attention to the needs of embodied actors; see, for instance, Ghallab et al. 2014.
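As a rough illustration (my own sketch, in the standard Situation Calculus notation with an initial situation S_0, an action-composition function do, and a Holds predicate relating fluents to situations), a goal g is assessed purely hypothetically:

\[
Holds(g,\; do(a_n, do(a_{n-1}, \dots, do(a_1, S_0) \dots )))
\]

A formula of this kind says only what would hold if a_1, ..., a_n were performed in sequence; nothing in it represents an actual, possibly faulty, execution.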

24. Effects of actions that are delayed in time are a separate problem, which, as far as the present author knows, no one has solved.

25. Turner uses discrete temporal logic rather than the Situation Calculus. But for uniformity, the Situation Calculus is used here to present the ideas.

26. In explanation problems, one reasons backwards in time: information is given about a series of states that have occurred, and the problem is to find actions that account for these occurrences.
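As a rough sketch in the same Situation Calculus notation (again my own illustration, not a formulation from the literature): prediction starts from a known action sequence and asks what holds in the resulting situations; explanation inverts this, seeking actions a_1, ..., a_n such that, for each observed fluent f_i,

\[
\Sigma \models Holds(f_i,\; do(a_i, \dots, do(a_1, S_0) \dots ))
\]

where \Sigma is the background theory. The hypothesized actions then account for the observed series of states.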

27. See, for instance, Simon 1982a, 1982b, and Russell & Wefald 1991.

28. See, for instance, Guidotti 2021.

29. This is related to the field of Knowledge Engineering. For background, see Stefik 1995.

30. For background on quantitative models of preference and decision, see Doyle & Thomason 1999. For work in AI on intentions, see, for instance, Konolige & Pollack 1993, Cohen & Levesque 1990, Sadek 1992, and Pollack 1992.

Copyright © 2024 by Richmond Thomason <rthomaso@umich.edu>
