Stanford Encyclopedia of Philosophy

Notes to Logic-Based Artificial Intelligence

1. For readers who would like an in-depth orientation to the field of AI, I recommend Russell & Norvig 2010.

2. For some of the historical background, see Davis 1988.

3. See Stefik 1995 for general background on expert systems. For information concerning explanation, see Moore 1995a; Clancey 1983.

4. One might be surprised at first to hear the AI community refer to its logical advocates as logicists. On reflection, it seems reasonable to think of logicist projects this way, as proposals to apply what Alonzo Church called “the logistic method” to reasoning in various domains. One need not narrowly associate logicism with Frege, and with Russell and Whitehead, and their programs for formalizing mathematics.

5. The submissions to the 1989 conference were unclassified as to topic; every other article was sampled, a total of 522. The 1998 conference divided its contributed articles into 26 topical sessions; the first paper in each of these sessions was sampled.

6. Data integration is one such area. See Levy 2000. Large-scale knowledge representation is another. See Lenat & Guha 1989.

7. See Reiter 2001 for an extended contribution to cognitive robotics, with references to some of the other literature in this area. Reiter’s book also contains self-contained chapters on the Situation Calculus and the problems of formalizing reasoning about action and change. These chapters may be useful to anyone wishing to follow up on the topics discussed in Section 4. Also see Levesque & Lakemeyer 2008.

8. For a survey of research in multiagent systems, see Chopra 2018.

9. Qualitative physics is an independent specialty in AI, different in many ways from logical AI. But the two specialties have certainly influenced each other.

10. At the time, this very difficult and not particularly well-defined problem was very much on the minds of many AI researchers, but it has not proved to be a productive focus for logical AI. Natural language interpretation has developed into a separate field almost entirely concerned with specialized language technologies, such as automated speech-to-speech discourse, data mining, and text summarization.

11. When Minsky speaks of a “frame”, he has in mind an information nexus in an object-oriented system of knowledge representation. His use of the word is unconnected to and not to be confused with the use of ‘frame’ in the frame problem.
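The idea of a frame as an information nexus can be illustrated with a minimal sketch: a frame bundles named slots, and unfilled slots fall back on defaults inherited from a more general frame. The class and slot names below are invented for illustration; they are not drawn from Minsky’s paper.

```python
# A minimal sketch of a Minsky-style frame: a named collection of slots,
# with unfilled slots falling back to defaults inherited from a parent frame.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent      # more general frame to inherit defaults from
        self.slots = dict(slots)  # slot name -> filler

    def get(self, slot):
        """Return a slot's filler, consulting parent frames for defaults."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

room = Frame("room", walls=4, has_door=True)
kitchen = Frame("kitchen", parent=room, has_stove=True)

print(kitchen.get("has_stove"))  # True (local slot)
print(kitchen.get("walls"))      # 4 (inherited default)
```

The point of the sketch is the inheritance of defaults: `kitchen` says nothing about walls, yet answers the query by deferring to the more general `room` frame.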

12. The analogy to modal logics of provability inspired by Gödel’s work, such as Boolos 1993, has, of course, been recognized in later work in nonmonotonic logic. But it has not been a theme of major importance.

14. Readers interested in the historical aspects of the material discussed in this section might wish to compare it to Ohrstrom & Hasle 1995. For additional historical background on Prior’s work, see Copeland 1996.

15. In retrospect, the term ‘situation’ is not entirely fortunate, since it was later adopted independently and in quite a different sense by the situation semanticists. (See, for instance, Seligman & Moss 1996.) In the AI literature, the term ‘state’ is often used interchangeably with ‘situation’—without, as far as I can see, causing any confusion. The connections with physical states, as well as with the more general states of any complex dynamic system, are entirely appropriate.

16. The early versions of the Situation Calculus were meant to be compatible with concurrent applications, with multiple planning agents, possibly acting simultaneously. But most of the logical analyses have been devoted to the single-agent case.

17. Carnap’s attempts to formalize dispositional terms and inductive methods are classical examples of the problems that emerge in the formalization of empirical science.

18. For information about planning under uncertainty, see, for instance, DeJong & Bennett 1989; Bacchus et al. 1999; Boutilier et al. 1996.

19. Examples are Dennett 1987 and Fodor 1987.

20. In the AI community, the term for a problem so difficult that solving it would involve overcoming just about every obstacle to achieving human-level intelligence is “AI-complete”. The Frame Problem is not AI-complete.

23. This way of putting it is a little misleading for the Situation Calculus, which has no robust notion of performance, considering only the outcomes associated with hypothetical action sequences. Nevertheless, the point remains that misexecutions were neglected in early work on planning. Later work pays more attention to the needs of embodied actors; see, for instance, Ghallab et al. 2014.

24. Effects of actions that are delayed in time are a separate problem, which, as far as the present author knows, no one has solved.

25. Turner uses discrete temporal logic rather than the Situation Calculus. But for uniformity of presentation, the Situation Calculus is used to present the ideas.

26. In explanation problems, one is reasoning backwards in time. Here, information is provided about a series of occurring states and the problem is to provide actions that account for the occurrences.
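Backward reasoning of this kind can be illustrated with a toy example: given a sequence of observed states, find actions whose modelled effects account for each transition. The actions and state names below are invented for illustration; real explanation problems involve far richer action theories.

```python
# A toy illustration of explanation as backward reasoning: given observed
# states, find actions whose effects account for each state transition.

# Each action maps a precondition state to a result state.
ACTIONS = {
    "open_door":  ("door_closed", "door_open"),
    "close_door": ("door_open", "door_closed"),
    "lock_door":  ("door_closed", "door_locked"),
}

def explain(observations):
    """Return a list of actions accounting for each observed transition,
    or None if some transition cannot be explained."""
    plan = []
    for before, after in zip(observations, observations[1:]):
        # Find an action whose effect turns `before` into `after`.
        candidates = [a for a, (pre, post) in ACTIONS.items()
                      if pre == before and post == after]
        if not candidates:
            return None  # no known action explains this transition
        plan.append(candidates[0])
    return plan

print(explain(["door_open", "door_closed", "door_locked"]))
# ['close_door', 'lock_door']
```

Whereas a planner searches forward from an initial state toward a goal, here the state sequence is given and the actions are what must be inferred, which is what makes the reasoning run backwards.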

28. See, for instance, Guidotti 2021.

29. This is related to the field of Knowledge Engineering. For background, see Stefik 1995.

30. For background on quantitative models of preference and decision, see Doyle & Thomason 1999. For work in AI on intentions, see, for instance, Konolige & Pollack 1993, Cohen & Levesque 1990, Sadek 1992, and Pollack 1992.

Copyright © 2024 by
Richmond Thomason <rthomaso@umich.edu>


The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

