Stanford Encyclopedia of Philosophy

The Frame Problem

First published Mon Feb 23, 2004; substantive revision Mon Feb 8, 2016

To most AI researchers, the frame problem is the challenge of representing the effects of action in logic without having to represent explicitly a large number of intuitively obvious non-effects. But to many philosophers, the AI researchers' frame problem is suggestive of wider epistemological issues. Is it possible, in principle, to limit the scope of the reasoning required to derive the consequences of an action? And, more generally, how do we account for our apparent ability to make decisions on the basis only of what is relevant to an ongoing situation without having explicitly to consider all that is not relevant?

1. Introduction

The frame problem originated as a narrowly defined technical problem in logic-based artificial intelligence (AI). But it was taken up in an embellished and modified form by philosophers of mind, and given a wider interpretation. The tension between its origin in the laboratories of AI researchers and its treatment at the hands of philosophers engendered an interesting and sometimes heated debate in the 1980s and 1990s. But since the narrow, technical problem is largely solved, recent discussion has tended to focus less on matters of interpretation and more on the implications of the wider frame problem for cognitive science. To gain an understanding of the issues, this article will begin with a look at the frame problem in its technical guise. Some of the ways in which philosophers have re-interpreted the problem will then be examined. The article will conclude with an assessment of the significance of the frame problem today.

2. The Frame Problem in Logic

Put succinctly, the frame problem in its narrow, technical form is this (McCarthy & Hayes 1969). Using mathematical logic, how is it possible to write formulae that describe the effects of actions without having to write a large number of accompanying formulae that describe the mundane, obvious non-effects of those actions? Let's take a look at an example. The difficulty can be illustrated without the full apparatus of formal logic, but it should be borne in mind that the devil is in the mathematical details. Suppose we write two formulae, one describing the effects of painting an object and the other describing the effects of moving an object.

  1. Colour(x,c) holds after Paint(x,c)
  2. Position(x,p) holds after Move(x,p)

Now, suppose we have an initial situation in which Colour(A,Red) and Position(A,House) hold. According to the machinery of deductive logic, what then holds after the action Paint(A,Blue) followed by the action Move(A,Garden)? Intuitively, we would expect Colour(A,Blue) and Position(A,Garden) to hold. Unfortunately, this is not the case. If written out more formally in classical predicate logic, using a suitable formalism for representing time and action such as the situation calculus (McCarthy & Hayes 1969), the two formulae above only license the conclusion that Position(A,Garden) holds. This is because they don't rule out the possibility that the colour of A gets changed by the Move action.

The most obvious way to augment such a formalisation so that the right common sense conclusions fall out is to add a number of formulae that explicitly describe the non-effects of each action. These formulae are called frame axioms. For the example at hand, we need a pair of frame axioms.

  3. Colour(x,c) holds after Move(x,p) if Colour(x,c) held beforehand
  4. Position(x,p) holds after Paint(x,c) if Position(x,p) held beforehand

In other words, painting an object will not affect its position, and moving an object will not affect its colour. With the addition of these two formulae (written more formally in predicate logic), all the desired conclusions can be drawn. However, this is not at all a satisfactory solution. Since most actions do not affect most properties of a situation, in a domain comprising M actions and N properties we will, in general, have to write out almost M × N frame axioms. Whether these formulae are destined to be stored explicitly in a computer's memory, or are merely part of the designer's specification, this is an unwelcome burden.
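The behaviour of these axioms can be mimicked in a minimal Python sketch. This is an illustration only, not the situation calculus formalism itself: fluents are triples, and a "state" stands in for the set of conclusions the axioms license. Without the frame axioms, nothing about an object's colour survives a Move.

```python
# Minimal sketch (not the situation calculus itself) of effect axioms
# with and without explicit frame axioms. A fluent is a
# (property, object, value) triple; a "state" is the set of fluents
# the axioms license us to conclude.

def paint_effects(state, x, c):
    # Effect axiom 1: Colour(x, c) holds after Paint(x, c).
    return {("Colour", x, c)}

def move_effects(state, x, p):
    # Effect axiom 2: Position(x, p) holds after Move(x, p).
    return {("Position", x, p)}

def move_frame(state, x, p):
    # Frame axiom 3: Colour is carried across a Move.
    return {f for f in state if f[0] == "Colour"}

def paint_frame(state, x, c):
    # Frame axiom 4: Position is carried across a Paint.
    return {f for f in state if f[0] == "Position"}

initial = {("Colour", "A", "Red"), ("Position", "A", "House")}

# Without frame axioms, only the stated effects are licensed:
s1 = paint_effects(initial, "A", "Blue")
s2 = move_effects(s1, "A", "Garden")
print(("Colour", "A", "Blue") in s2)   # False: the colour is "lost"

# With the frame axioms added, the intuitive conclusions follow:
s1 = paint_effects(initial, "A", "Blue") | paint_frame(initial, "A", "Blue")
s2 = move_effects(s1, "A", "Garden") | move_frame(s1, "A", "Garden")
print(("Colour", "A", "Blue") in s2)   # True
```

Note how each action needs a separate frame function per unaffected property, which is exactly the M × N blow-up described above.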

The challenge, then, is to find a way to capture the non-effects of actions more succinctly in formal logic. What we need, it seems, is some way of declaring the general rule-of-thumb that an action can be assumed not to change a given property of a situation unless there is evidence to the contrary. This default assumption is known as the common sense law of inertia. The (technical) frame problem can be viewed as the task of formalising this law.
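What the law asserts can be sketched procedurally in a few lines of Python. This sidesteps the real difficulty, which is stating the default declaratively in logic rather than building it into an update procedure, but it shows the intended behaviour: every fluent persists across an action unless the action's effects give that same property of that same object a new value.

```python
# A procedural sketch of the common sense law of inertia: fluents are
# (property, object, value) triples, and a fluent persists by default
# unless an effect of the action overrides that property of that object.

def successor(state, effects):
    overridden = {(prop, obj) for (prop, obj, _) in effects}
    inertia = {f for f in state if (f[0], f[1]) not in overridden}
    return effects | inertia

state = {("Colour", "A", "Red"), ("Position", "A", "House")}
state = successor(state, {("Colour", "A", "Blue")})      # Paint(A, Blue)
state = successor(state, {("Position", "A", "Garden")})  # Move(A, Garden)
print(state == {("Colour", "A", "Blue"), ("Position", "A", "Garden")})  # True
```

No per-property frame axioms appear here; the single default rule does the carrying-over. The frame problem is the task of achieving the same economy inside a logical formalism.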

The main obstacle to doing this is the monotonicity of classical logic. In classical logic, the set of conclusions that can be drawn from a set of formulae always increases with the addition of further formulae. This makes it impossible to express a rule that has an open-ended set of exceptions, and the common sense law of inertia is just such a rule. For example, in due course we might want to add a formula that captures the exception to Axiom 3 that arises when we move an object into a pot of paint. But our not having thought of this exception before should not prevent us from applying the common sense law of inertia and drawing a wide enough set of (defeasible) conclusions to get off the ground.

Accordingly, researchers in logic-based AI have put a lot of effort into developing a variety of non-monotonic reasoning formalisms, such as circumscription (McCarthy 1986), and investigating their application to the frame problem. None of this has turned out to be at all straightforward. One of the most troublesome barriers to progress was highlighted in the so-called Yale shooting problem (Hanks & McDermott 1987), a simple scenario that gives rise to counter-intuitive conclusions if naively represented with a non-monotonic formalism. To make matters worse, a full solution needs to work in the presence of concurrent actions, actions with non-deterministic effects, continuous change, and actions with indirect ramifications. In spite of these subtleties, a number of solutions to the technical frame problem now exist that are adequate for logic-based AI research. Although improvements and extensions continue to be found, it is fair to say that the dust has settled, and that the frame problem, in its technical guise, is more-or-less solved (Shanahan 1997; Lifschitz 2015).

3. The Epistemological Frame Problem

Let's move on now to the frame problem as it has been re-interpreted by various philosophers. The first significant mention of the frame problem in the philosophical literature was made by Dennett (1978, 125). The puzzle, according to Dennett, is how “a cognitive creature … with many beliefs about the world” can update those beliefs when it performs an act so that they remain “roughly faithful to the world”? In The Modularity of Mind, Fodor steps into a roboticist's shoes and, with the frame problem in mind, asks much the same question: “How … does the machine's program determine which beliefs the robot ought to re-evaluate given that it has embarked upon some or other course of action?” (Fodor 1983, 114).

At first sight, this question is only impressionistically related to the logical problem exercising the AI researchers. In contrast to the AI researcher's problem, the philosopher's question isn't expressed in the context of formal logic, and doesn't specifically concern the non-effects of actions. In a later essay, Dennett acknowledges the appropriation of the AI researchers' term (1987). Yet he goes on to reaffirm his conviction that, in the frame problem, AI has discovered “a new, deep epistemological problem—accessible in principle but unnoticed by generations of philosophers”.

The best way to gain an understanding of the issue is to imagine being the designer of a robot that has to carry out an everyday task, such as making a cup of tea. Moreover, for the frame problem to be neatly highlighted, we must confine our thought experiment to a certain class of robot designs, namely those using explicitly stored, sentence-like representations of the world, reflecting the methodological tenets of classical AI. The AI researchers who tackled the original frame problem in its narrow, technical guise were working under this constraint, since logic-based AI is a variety of classical AI. Philosophers sympathetic to the computational theory of mind—who suppose that mental states comprise sets of propositional attitudes and mental processes are forms of inference over the propositions in question—also tend to feel at home with this prescription.

Now, suppose the robot has to take a tea-cup from the cupboard. The present location of the cup is represented as a sentence in its database of facts alongside those representing innumerable other features of the ongoing situation, such as the ambient temperature, the configuration of its arms, the current date, the colour of the tea-pot, and so on. Having grasped the cup and withdrawn it from the cupboard, the robot needs to update this database. The location of the cup has clearly changed, so that's one fact that demands revision. But which other sentences require modification? The ambient temperature is unaffected. The location of the tea-pot is unaffected. But if it so happens that a spoon was resting in the cup, then the spoon's new location, inherited from its container, must also be updated.

The epistemological difficulty now discerned by philosophers is this. How could the robot limit the scope of the propositions it must reconsider in the light of its actions? In a sufficiently simple robot, this doesn't seem like much of a problem. Surely the robot can simply examine its entire database of propositions one-by-one and work out which require modification. But if we imagine that our robot has near human-level intelligence, and is therefore burdened with an enormous database of facts to examine every time it so much as spins a motor, such a strategy starts to look computationally intractable.

Thus, a related issue in AI has been dubbed the computational aspect of the frame problem (McDermott 1987). This is the question of how to compute the consequences of an action without the computation having to range over the action's non-effects. The solution to the computational aspect of the frame problem adopted in most symbolic AI programs is some variant of what McDermott calls the “sleeping dog” strategy (McDermott 1987). The idea here is that not every part of the data structure representing an ongoing situation needs to be examined when it is updated to reflect a change in the world. Rather, those parts that represent facets of the world that have changed are modified, and the rest is simply left as it is (following the dictum “let sleeping dogs lie”). In our example of the robot and the tea-cup, we might apply the sleeping dog strategy by having the robot update its beliefs about the location of the cup and the contents of the cupboard. But the robot would not worry about some possible spoon that may or may not be on or in the cup, since the robot's goal did not directly involve any spoon.
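The sleeping dog strategy can be sketched as a destructive update of a mutable store: the action touches only the entries it names and never scans the rest. The world model below (cup, spoon, teapot) is illustrative, not taken from any real system, and the sketch also exposes the strategy's weakness noted above: the spoon's location goes stale.

```python
# A sketch of the "sleeping dog" strategy: the world model is a mutable
# store, and an update modifies only the entries the action directly
# names, leaving every other entry untouched rather than re-deriving it.
# The kitchen scenario is illustrative, not from any real robot system.

world = {
    ("Position", "Cup"): "Cupboard",
    ("Position", "Spoon"): "Cup",       # a spoon the goal says nothing about
    ("Position", "Teapot"): "Counter",
    ("Temperature", "Kitchen"): "20C",
}

def take_from_cupboard(world, obj, hand="Hand"):
    # Touch only the fluent the action directly changes; the teapot,
    # the temperature, and the rest are the sleeping dogs left to lie.
    world[("Position", obj)] = hand

take_from_cupboard(world, "Cup")
print(world[("Position", "Cup")])    # "Hand"
print(world[("Position", "Spoon")])  # still "Cup": stale if the spoon came along
```

The update costs nothing per unaffected fact, but indirect ramifications such as the spoon's inherited motion are silently missed unless the update rule is told about them.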

However, the philosophical problem is not exhausted by this computational issue. The outstanding philosophical question is how the robot could ever determine that it had successfully revised all its beliefs to match the consequences of its actions. Only then would it be in a position safely to apply the “common sense law of inertia” and assume the rest of the world is untouched. Fodor suggestively likens this to “Hamlet's problem: when to stop thinking” (Fodor 1987, 140). The frame problem, he claims, is “Hamlet's problem viewed from an engineer's perspective”. So construed, the obvious way to try to avoid the frame problem is by appealing to the notion of relevance. Only certain properties of a situation are relevant in the context of any given action, so the counter-argument goes, and consideration of the action's consequences can be conveniently confined to those.

However, the appeal to relevance is unhelpful. For the difficulty now is to determine what is and what isn't relevant, and this is dependent on context. Consider again the action of removing a tea-cup from the cupboard. If the robot's job is to make tea, it is relevant that this facilitates filling the cup from a tea-pot. But if the robot's task is to clean the cupboard, a more relevant consequence is the exposure of the surface the cup was resting on. An AI researcher in the classical mould could rise to this challenge by attempting to specify what propositions are relevant to what context. But philosophers such as Wheeler (2005; 2008), taking their cue from Dreyfus (1992), perceive the threat of infinite regress here. As Dreyfus puts it, “if each context can be recognized only in terms of features selected as relevant and interpreted in a broader context, the AI worker is faced with a regress of contexts” (Dreyfus 1992, 289).

One way to mitigate the threat of infinite regress is by appeal to the fact that, while humans are more clever than today's robots, they still make mistakes (McDermott 1987). People often fail to foresee every consequence of their actions even though they lack none of the information required to derive those consequences, as any novice chess player can testify. Fodor asserts that “the frame problem goes very deep; it goes as deep as the analysis of rationality” (Fodor 1987). But the analysis of rationality can accommodate the boundedness of the computational resources available to derive relevant conclusions (Simon 1957; Russell & Wefald 1991; Sperber & Wilson 1996). Because it sometimes jumps to premature conclusions, bounded rationality is logically flawed, but no more so than human thinking. However, as Fodor points out, appealing to human limitations to justify the imposition of a heuristic boundary on the kind of information available to an inferential process does not in itself solve the epistemological frame problem (Fodor 2000, Ch.2; Fodor 2008, Ch.4; see also Chow 2013). This is because it neglects the issue of how the heuristic boundary is to be drawn, which is to say it fails to address the original question of how to specify what is and isn't relevant to the inferential process.

Nevertheless, the classical AI researcher, convinced that the regress of contexts will bottom out eventually, may still elect to pursue the research agenda of building systems based on rules for determining relevance, drawing inspiration from the past successes of classical AI. Whereupon the dissenting philosopher might point out that AI's past successes have always been confined to narrow domains, such as playing chess, or reasoning in limited microworlds where the set of potentially relevant propositions is fixed and known in advance. By contrast, human intelligence can cope with an open-ended, ever-changing set of contexts (Dreyfus 1992; Dreyfus 2008; Wheeler 2005; Wheeler 2008; Rietveld 2012). Furthermore, the classical AI researcher is vulnerable to an argument from holism. A key claim in Fodor's work is that when it comes to circumscribing the consequences of an action, just as in the business of theory confirmation in science, anything could be relevant (Fodor 1983, 105). There are no a priori limits to the properties of the ongoing situation that might come into play. Accordingly, in his modularity thesis, Fodor uses the frame problem to bolster the view that the mind's central processes—those that are involved in fixing belief—are “informationally unencapsulated”, meaning that they can draw on information from any source (Fodor 1983; Fodor 2000; Fodor 2008; Dreyfus 1991, 115–121; Dreyfus 1992, 258). For Fodor, this is a fundamental barrier to the provision of a computational account of these processes.

It is tempting to see Fodor's concerns as resting on a fallacious argument to the effect that a process must be informationally encapsulated to be computationally tractable. We only need to consider the effectiveness of Internet search engines to see that, thanks to clever indexing techniques, this is not the case. Submit any pair of seemingly unrelated keywords (such as “banana” and “mandolin”) to a Web search engine, and in a fraction of a second it will identify every web page, in a database of several billion, that mentions those two keywords (now including this page, no doubt). But this is not the issue at hand. The real issue, to reiterate the point, is one of relevance. A process might indeed be able to index into everything the system knows about, say, bananas and mandolins, but the purported mystery is how it could ever work out that, of all things, bananas and mandolins were relevant to its reasoning task in the first place.

To summarize, it is possible to discern an epistemological frame problem, and to distinguish it from a computational counterpart. The epistemological problem is this: How is it possible for holistic, open-ended, context-sensitive relevance to be captured by a set of propositional, language-like representations of the sort used in classical AI? The computational counterpart to the epistemological problem is this. How could an inference process tractably be confined to just what is relevant, given that relevance is holistic, open-ended, and context-sensitive?

4. The Metaphysics of Common Sense Inertia

An additional dimension to the frame problem is uncovered in (Fodor 1987), where the metaphysical justification for the common sense law of inertia is challenged. Although Fodor himself doesn't clearly distinguish this issue from other aspects of the wider frame problem, it appears on examination to be a separate philosophical conundrum. Here is the argument. As stated above, solutions to the logical frame problem developed by AI researchers typically appeal to some version of the common sense law of inertia, according to which properties of a situation are assumed by default not to change as the result of an action. This assumption is supposedly justified by the very observation that gave rise to the logical frame problem in the first place, namely that most things don't change when an action is performed or an event occurs.

According to Fodor, this metaphysical justification is unwarranted. To begin with, some actions change many, many things. Those who affirm that painting an object has little or no effect on most properties of most of the objects in the room are likely to concede that detonating a bomb actually does affect most of those properties. But a deeper difficulty presents itself when we ask what is meant by “most properties”. What predicates should be included in our ontology for any of these claims about “most properties” to fall out? To sharpen the point, Fodor introduces the concept of a “fridgeon”. Any particle is defined as a fridgeon at a given time if and only if Fodor's fridge is switched on at that time. Now, it seems, the simple act of turning Fodor's fridge on or off brings about an astronomical number of incidental changes. In a universe that can include fridgeons, can it really be the case that most actions leave most things unchanged?

The point here is not a logical one. The effect on fridgeons of switching Fodor's fridge on and off can concisely be represented without any difficulty (Shanahan 1997, 25). Rather, the point is metaphysical. The common sense law of inertia is only justified in the context of the right ontology, the right choice of objects and predicates. But what is the right ontology to make the common sense law of inertia work? Clearly, fridgeons and the like are to be excluded. But what metaphysical principle underpins such a decision?

These questions and the argument leading to them are very reminiscent of Goodman's treatment of induction (Goodman 1954). Goodman's “new riddle of induction”, commonly called the grue paradox, invites us to consider the predicate grue, which is true before time t only of objects that are green and after time t only of objects that are blue. The puzzle is that every instance of a green emerald examined before time t is also an instance of a grue emerald. So, the inductive inference that all emeralds are grue seems to be no less legitimate than the inductive inference that all emeralds are green. The problem, of course, is the choice of predicates. Goodman showed that inductive inference only works in the context of the right set of predicates, and Fodor demonstrates much the same point for the common sense law of inertia.

An intimate relationship of a different kind between the frame problem and the problem of induction is proposed by Fetzer (1991), who writes that “The problem of induction [is] one of justifying some inferences about the future as opposed to others. The frame problem, likewise, is one of justifying some inferences about the future as opposed to others. The second problem is an instance of the first.” This view of the frame problem is highly controversial, however (Hayes 1991).

5. The Frame Problem Today

The narrow, technical frame problem generated a great deal of work in logic-based artificial intelligence in the late 1980s and early 1990s, and its wider philosophical implications came to the fore at around the same time. But the importance each thinker accords to the frame problem today will typically depend on their stance on other matters.

Within classical AI, a variety of workable solutions to the logical frame problem have been developed, and it is no longer considered a serious obstacle even for those working in a strictly logic-based paradigm (Shanahan 1997; Reiter 2001; Shanahan 2003; Lifschitz 2015). It's worth noting that logically-minded AI researchers can consistently retain their methodology and yet, to the extent that they view their products purely as engineering, can reject the traditional cognitive scientist's belief in the importance of computation over representations for understanding the mind. Moreover, insofar as the goal of classical AI is not computers with human-level intelligence, but is simply the design of better and more useful computer programs, it is immune to the philosophical objections of Fodor, Dreyfus, and the like. Significantly though, for AI researchers working outside the paradigm of symbolic representation altogether—those working in situated robotics, for example—the logical frame problem simply doesn't feature in day-to-day investigations.

Although it can be argued that it arises even in a connectionist setting (Haselager & Van Rappard 1998; Samuels 2010), the frame problem inherits much of its philosophical significance from the classical assumption of the explanatory value of computation over representations, an assumption that has been under vigorous attack for some time (Clark 1997; Wheeler 2005). Despite this, many philosophers of mind, in the company of Fodor and Pylyshyn, still subscribe to the view that human mental processes consist chiefly of inferences over a set of propositions, and that those inferences are carried out by some form of computation. To such philosophers, the epistemological frame problem and its computational counterpart remain a genuine threat.

For Wheeler and others, classical AI and cognitive science rest on Cartesian assumptions that need to be overthrown in favour of a more Heideggerian stance before the frame problem can be overcome (Dreyfus 2008; Wheeler 2005; 2008; Rietveld 2012). According to Wheeler (2005; 2008), the situated robotics movement in AI that originated with the work of Brooks (1991) exemplifies the right way to go. Dreyfus is in partial agreement, but contends that the early products of situated robotics “finesse rather than solve the frame problem” because “Brooks's robots respond only to fixed isolable features of the environment, not to context or changing significance” (Dreyfus 2008, 335). Dreyfus regards the neurodynamics work of Freeman (2000) as a better foundation for the sort of Heideggerian approach to AI in which the frame problem might be dissolved (see also Shanahan 2010, Ch.5; Rietveld 2012; Bruineberg & Rietveld 2014). Dreyfus is impressed by Freeman's approach because the neurodynamical record of significance is neither a representation nor an association, but (in dynamical systems terms) “a repertoire of attractors” that classify possible responses, “the attractors themselves being the product of past experience” (Dreyfus 2008, 354).

One philosophical legacy of the frame problem is that it has drawn attention to a cluster of issues relating to holism, or so-called informational unencapsulation. Recall that a process is informationally unencapsulated (Fodor sometimes uses the term “isotropic”) if there is no a priori boundary to what information is relevant to it. In recent writing, Fodor uses the term “frame problem” in the context of all informationally unencapsulated processes, and not just those to do with inferring the consequences of change (Fodor 2000, Ch.2; Fodor 2008, Ch.4). It's clear that idealised rationality is informationally unencapsulated, in this sense. It has also been suggested that isotropy is damaging to the so-called theory theory of folk psychology (Heal 1996). (For Heal, this lends support to the rival simulation theory, but Wilkerson (2001) argues that informational unencapsulation is a problem for both accounts of folk psychology.) Analogical reasoning, as Fodor says, is an example of “isotropy in the purest form: a process which depends precisely upon the transfer of information among cognitive domains previously assumed to be irrelevant” (Fodor 1983, 105). Arguably, a capacity for analogical and metaphorical thinking—a talent for creatively transcending the boundaries between different domains of understanding—is the source of human cognitive prowess (Lakoff & Johnson 1980; Mithen 1996). So the informational unencapsulation of analogical reasoning is potentially very troublesome, and especially so for modular theories of mind in which modules are viewed as (context-insensitive) specialists (Carruthers 2003; 2006).

Dreyfus claims that this “extreme version of the frame problem” is no less a consequence of the Cartesian assumptions of classical AI and cognitive science than its less demanding relatives (Dreyfus 2008, 361). He advances the view that a suitably Heideggerian account of mind is the basis for dissolving the frame problem here too, and that our “background familiarity with how things in the world behave” is sufficient, in such cases, to allow us to “step back and figure out what is relevant and how”. Dreyfus doesn't explain how, given the holistic, open-ended, context-sensitive character of relevance, this figuring-out is achieved. But Wheeler, from a similarly Heideggerian position, claims that the way to address the “inter-context” frame problem, as he calls it, is with a dynamical system in which “the causal contribution of each systemic component partially determines, and is partially determined by, the causal contributions of large numbers of other systemic components” (Wheeler 2008, 341). A related proposal is put forward by Shanahan and Baars (2005; see also Shanahan 2010, Ch.6), based on global workspace theory (Baars 1988), according to which the brain incorporates a solution to the problem of informational unencapsulation by instantiating an architecture in which a) the responsibility for determining relevance is not centralised but is distributed among parallel specialist processes, and b) a serially unfolding global workspace state integrates relevant contributions from multiple domains.

Bibliography

  • Baars, B. (1988), A Cognitive Theory of Consciousness, Cambridge University Press.
  • Brooks, R.A. (1991), “Intelligence without Reason”, in Proc. 12th International Joint Conference on Artificial Intelligence, pp. 569–595.
  • Bruineberg, J. & Rietveld, E. (2014), “Self-organization, Free Energy Minimization, and Optimal Grip on a Field of Affordances”, Frontiers in Human Neuroscience, 8: 599, doi:10.3389/fnhum.2014.00599
  • Carruthers, P. (2003), “On Fodor's Problem”, Mind and Language, 18(5): 502–523.
  • ––– (2006), The Architecture of the Mind, Oxford University Press.
  • Chow, S.J. (2013), “What's the Problem with the Frame Problem?”, Review of Philosophy and Psychology, 4: 309–331.
  • Clark, A. (1997), Being There: Putting Brain, Body, and World Together Again, MIT Press.
  • Dennett, D. (1978), Brainstorms, MIT Press.
  • ––– (1987), “Cognitive Wheels: The Frame Problem in Artificial Intelligence”, in Pylyshyn (1987).
  • Dreyfus, H.L. (1991), Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I, MIT Press.
  • ––– (1992), What Computers Still Can't Do, MIT Press.
  • ––– (2008), “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian”, in The Mechanical Mind in History, P. Husbands, O. Holland & M. Wheeler (eds.), MIT Press, pp. 331–371.
  • Fetzer, J.H. (1991), “The Frame Problem: Artificial Intelligence Meets David Hume”, in Ford & Hayes (1991).
  • Fodor, J.A. (1983), The Modularity of Mind, MIT Press.
  • ––– (1987), “Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres”, in Pylyshyn (1987).
  • ––– (2000), The Mind Doesn't Work That Way, MIT Press.
  • ––– (2008), LOT 2: The Language of Thought Revisited, Oxford University Press.
  • Ford, K.M. & Hayes, P.J. (eds.) (1991), Reasoning Agents in a Dynamic World: The Frame Problem, JAI Press.
  • Ford, K.M. & Pylyshyn, Z.W. (eds.) (1996), The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence, Ablex.
  • Freeman, W.J. (2000), How Brains Make Up Their Minds, Phoenix.
  • Goodman, N. (1954), Fact, Fiction, and Forecast, Harvard University Press.
  • Hanks, S. & McDermott, D. (1987), “Nonmonotonic Logic and Temporal Projection”, Artificial Intelligence, 33(3): 379–412.
  • Haselager, W.F.G. & Van Rappard, J.F.H. (1998), “Connectionism, Systematicity, and the Frame Problem”, Minds and Machines, 8(2): 161–179.
  • Hayes, P.J. (1991), “Artificial Intelligence Meets David Hume: A Reply to Fetzer”, in Ford & Hayes (1991).
  • Heal, J. (1996), “Simulation, Theory, and Content”, in Theories of Theories of Mind, P. Carruthers & P. Smith (eds.), Cambridge University Press, pp. 75–89.
  • Lakoff, G. & Johnson, M. (1980), Metaphors We Live By, University of Chicago Press.
  • Lifschitz, V. (2015), “The Dramatic True Story of the Frame Default”, Journal of Philosophical Logic, 44: 163–196.
  • McCarthy, J. (1986), “Applications of Circumscription to Formalizing Common Sense Knowledge”, Artificial Intelligence, 26(3): 89–116.
  • McCarthy, J. & Hayes, P.J. (1969), “Some Philosophical Problems from the Standpoint of Artificial Intelligence”, in Machine Intelligence 4, D. Michie & B. Meltzer (eds.), Edinburgh University Press, pp. 463–502.
  • McDermott, D. (1987), “We've Been Framed: Or Why AI Is Innocent of the Frame Problem”, in Pylyshyn (1987).
  • Mithen, S. (1996), The Prehistory of the Mind, Thames & Hudson.
  • Pylyshyn, Z.W. (ed.) (1987), The Robot's Dilemma: The Frame Problem in Artificial Intelligence, Ablex.
  • Reiter, R. (2001), Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems, MIT Press.
  • Rietveld, E. (2012), “Context-switching and Responsiveness to Real Relevance”, in Heidegger and Cognitive Science: New Directions in Cognitive Science and Philosophy, J. Kiverstein & M. Wheeler (eds.), Palgrave Macmillan, pp. 105–135.
  • Russell, S. & Wefald, E. (1991), Do the Right Thing: Studies in Limited Rationality, MIT Press.
  • Samuels, R. (2010), “Classical Computationalism and the Many Problems of Cognitive Relevance”, Studies in History and Philosophy of Science, 41: 280–293.
  • Shanahan, M. (1997), Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia, MIT Press.
  • ––– (2003), “The Frame Problem”, in The Macmillan Encyclopedia of Cognitive Science, L. Nadel (ed.), Macmillan, pp. 144–150.
  • ––– (2010), Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, Oxford University Press.
  • Shanahan, M. & Baars, B.J. (2005), “Applying Global Workspace Theory to the Frame Problem”, Cognition, 98(2): 157–176.
  • Simon, H. (1957), Models of Man, Wiley.
  • Sperber, D. & Wilson, D. (1996), “Fodor's Frame Problem and Relevance Theory”, Behavioral and Brain Sciences, 19(3): 530–532.
  • Wheeler, M. (2005), Reconstructing the Cognitive World: The Next Step, MIT Press.
  • ––– (2008), “Cognition in Context: Phenomenology, Situated Robotics, and the Frame Problem”, International Journal of Philosophical Studies, 16(3): 323–349.
  • Wilkerson, W.S. (2001), “Simulation, Theory, and the Frame Problem”, Philosophical Psychology, 14(2): 141–153.


Copyright © 2016 by
Murray Shanahan



