Stanford Encyclopedia of Philosophy

Modularity of Mind

First published Wed Apr 1, 2009; substantive revision Tue Jul 8, 2025

The concept of modularity has loomed large in philosophy of psychology since the early 1980s, following the publication of Fodor’s landmark book The Modularity of Mind (1983). In the decades since the term ‘module’ and its cognates first entered the lexicon of cognitive science, the conceptual and theoretical landscape in this area has changed dramatically. Especially noteworthy in this respect has been the development of evolutionary psychology, whose proponents adopt a less stringent conception of modularity than the one advanced by Fodor, and who argue that the architecture of the mind is more pervasively modular than Fodor claimed. Where Fodor (1983, 2000) draws the line of modularity at the relatively low-level systems underlying perception and language, post-Fodorian theorists such as Sperber (2002) and Carruthers (2006) contend that the mind is modular through and through, up to and including the high-level systems responsible for reasoning, planning, decision making, and the like. The concept of modularity has also figured in recent debates in philosophy of science, epistemology, ethics, and philosophy of language—further evidence of its utility as a tool for theorizing about mental architecture.

1. What is a mental module?

In his classic introduction to modularity, Fodor (1983) lists nine features that collectively characterize the type of system that interests him. In original order of presentation, they are:

  1. Domain specificity
  2. Mandatory operation
  3. Limited central accessibility
  4. Fast processing
  5. Informational encapsulation
  6. ‘Shallow’ outputs
  7. Fixed neural architecture
  8. Characteristic and specific breakdown patterns
  9. Characteristic ontogenetic pace and sequencing

A cognitive system counts as modular in Fodor’s sense if it is modular “to some interesting extent,” meaning that it has most of these features to an appreciable degree (Fodor, 1983, p. 37). This is a weighted most, since some marks of modularity are more important than others. Informational encapsulation, for example, is more or less essential for modularity, as well as explanatorily prior to several of the other features on the list (Fodor, 1983, 2000).

Each of the items on the list calls for explication. To streamline the exposition, we will cluster most of the features thematically and examine them on a cluster-by-cluster basis, along the lines of Prinz (2006).

Encapsulation and inaccessibility. Informational encapsulation and limited central accessibility are two sides of the same coin. Both features pertain to the character of information flow across computational mechanisms, albeit in opposite directions. Encapsulation involves restriction on the flow of information into a mechanism, whereas inaccessibility involves restriction on the flow of information out of it.

A cognitive system is informationally encapsulated to the extent that in the course of processing its inputs it cannot access information stored elsewhere; all it has to go on is the information contained in those inputs plus whatever information might be stored within the system itself, for example, in a proprietary database (Fodor, 2000, pp. 62–63). (Note that items of information drawn on by a system in the course of its operations do not count as inputs to the system; otherwise the encapsulation criterion of modularity would be trivially satisfied.) In the case of language, for example:

A parser for [a language] L contains a grammar of L. What it does when it does its thing is, it infers from certain acoustic properties of a token to a characterization of certain of the distal causes of the token (e.g., to the speaker’s intention that the utterance should be a token of a certain linguistic type). Premises of this inference can include whatever information about the acoustics of the token the mechanisms of sensory transduction provide, whatever information about the linguistic types in L the internally represented grammar provides, and nothing else. (Fodor, 1984, pp. 245–246; italics in original)

Similarly, in the case of perception—understood as a kind of non-demonstrative (i.e., defeasible, or non-monotonic) inference from sensory ‘premises’ to perceptual ‘conclusions’—the claim that perceptual systems are informationally encapsulated is equivalent to the claim that “the data that can bear on the confirmation of perceptual hypotheses includes, in the general case, considerably less than the organism may know” (Fodor, 1983, p. 69). The classic illustration of this property comes from the study of visual illusions, which tend to persist even after the viewer is explicitly informed about the character of the stimulus. In the Müller-Lyer illusion, for example, the two lines continue to look as if they were of unequal length even after one has convinced oneself otherwise, e.g., by measuring them with a ruler (see Figure 1, below).

Two horizontal line segments of the same length with arrowheads on the left and right side; the upper line segment has arrowheads pointing inwards, towards the line segment; the lower line segment has arrowheads pointing outwards, away from the line segment.

Figure 1. The Müller-Lyer illusion.

Informational encapsulation is related to what Pylyshyn (1984, 1999) calls cognitive impenetrability, but the two properties are not the same. To a first approximation, a system is cognitively impenetrable if its operations are not subject to direct influence by information stored in central memory, paradigmatically in the form of beliefs and utilities. As such, informational encapsulation entails cognitive impenetrability, but not the other way around. For example, auditory speech perception might be cognitively impenetrable but draw on visual information, as suggested by the McGurk effect (but see §2.1). The relation between informational encapsulation and cognitive impenetrability is, however, a matter of controversy (see §4).

The flip side of informational encapsulation is inaccessibility to central monitoring. A system is inaccessible in this sense if the intermediate-level representations that it computes prior to producing its output are inaccessible to consciousness, and hence unavailable for explicit report. In effect, centrally inaccessible systems are those whose internal processing is opaque to introspection. Though the outputs of such systems may be phenomenologically salient, their precursor states are not. Speech comprehension, for example, likely involves the successive elaboration of myriad representations (of various types: phonological, lexical, syntactic, etc.) of the stimulus, but of these only the final product—the representation of the meaning of what was said—is consciously available.

Mandatoriness, speed, and superficiality. In addition to being informationally encapsulated and centrally inaccessible, modular systems and processes are “fast, cheap, and out of control” (to borrow a phrase from roboticist Rodney Brooks). These features form a natural trio, as we’ll see.

The operation of a cognitive system is mandatory just in case it is automatic, that is, not under conscious control (Bargh & Chartrand, 1999). This means that, like it or not, the system’s operations are switched on by presentation of the relevant stimuli and those operations run to completion. For example, native speakers of English cannot hear the sounds of English being spoken as mere noise: if they hear those sounds at all, they hear them as English. Likewise, it’s impossible to see a 3D array of objects in space as 2D patches of color, however hard one may try.

Speed is arguably the mark of modularity that requires least in the way of explication. But speed is relative, so the best way to proceed here is by way of examples. Speech shadowing is generally considered to be very fast, with typical lag times on the order of about 250 ms. Since the syllabic rate of normal speech is about 4 syllables per second, this suggests that shadowers are processing the stimulus in syllable-length bits—probably the smallest bits that can be identified in the speech stream, given that “only at the level of the syllable do we begin to find stretches of wave form whose acoustic properties are at all reliably related to their linguistic values” (Fodor, 1983, p. 62). Similarly impressive results are available for vision: in a rapid serial visual presentation task (matching picture to description), subjects were 70% accurate at 125 ms exposure per picture and 96% accurate at 167 ms (Fodor, 1983, p. 63). In general, a cognitive process counts as fast in Fodor’s book if it takes place in a half second or less.
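The inference from shadowing latency to syllable-sized processing units is a simple piece of arithmetic on the figures just cited:

```latex
\[
\text{syllable duration} \;\approx\; \frac{1\ \text{second}}{4\ \text{syllables}}
\;=\; 250\ \text{ms} \;\approx\; \text{shadowing lag}
\]
```

That is, a shadower lagging by about 250 ms is at most one syllable behind the input, which is just what one would expect if the syllable is the smallest reliably identifiable stretch of the speech stream.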

A further feature of modular systems is that their outputs are relatively ‘shallow’. Exactly what this means is unclear. But the depth of an output seems to be a function of at least two properties: first, how much computation is required to produce it (i.e., shallow means computationally cheap); second, how constrained or specific its informational content is (i.e., shallow means informationally general) (Fodor, 1983, p. 87). These two properties are correlated, in that outputs with more specific content tend to be more costly for a system to compute, and vice versa. Some writers have interpreted shallowness to require non-conceptual character (e.g., Carruthers, 2006, p. 4). But this conflicts with Fodor’s own gloss on the term, in which he suggests that the output of a plausibly modular system such as visual object recognition might be encoded at the level of ‘basic-level’ concepts, like DOG and CHAIR (Rosch et al., 1976). What’s ruled out here is not concepts per se, then, but highly theoretical concepts like PROTON, which are too informationally specific and too computationally expensive to meet the shallowness criterion.

All three of the features just discussed—mandatoriness, speed, and shallowness—are associated with, and to some extent explicable in terms of, informational encapsulation. In each case, less is more, informationally speaking. Mandatoriness flows from the insensitivity of the system to the organism’s utilities, which is one dimension of cognitive impenetrability. Speed depends upon the efficiency of processing, which positively correlates with encapsulation in so far as encapsulation tends to reduce the system’s informational load. Shallowness is a similar story: shallow outputs are computationally cheap, and computational expense is negatively correlated with encapsulation. In short, the more informationally encapsulated a system is, the more likely it is to be fast, cheap, and out of control.

Dissociability and localizability. To say that a system is functionally dissociable is to say that it can be selectively impaired, that is, damaged or disabled with little or no effect on the operation of other systems. As the neuropsychological record indicates, selective impairments of this sort have frequently been observed as a consequence of circumscribed brain lesions. Standard examples from the study of vision include prosopagnosia (impaired face recognition), achromatopsia (total color blindness), and akinetopsia (motion blindness); examples from the study of language include agrammatism (loss of complex syntax), jargon aphasia (loss of complex semantics), anomia (loss of object words), and dyslexia (impaired reading and writing). Each of these disorders has been found in otherwise cognitively normal individuals, suggesting that the lost capacities are subserved by functionally dissociable mechanisms.

Functional dissociability is associated with neural localizability in a strong sense. A system is strongly localized just in case it is implemented in neural circuitry that is (a) relatively circumscribed in extent (though not necessarily located in contiguous areas) and (b) dedicated to the realization of that system alone. Localization in this sense goes beyond mere implementation in local neural circuitry, since a given bit of circuitry could subserve more than one cognitive function (Anderson, 2010). Proposed candidates for strong localization include systems for color vision (V4), motion detection (MT), face recognition (fusiform gyrus), and spatial scene recognition (parahippocampal gyrus).

Domain specificity. A system is domain specific to the extent that it has a restricted subject matter, that is, the class of objects and properties that it processes information about is circumscribed in a relatively narrow way. As Fodor (1983) puts it, “domain specificity has to do with the range of questions for which a device provides answers (the range of inputs for which it computes analyses)” (p. 103): the narrower the range of inputs a system can compute, the narrower the range of problems the system can solve—and the narrower the range of such problems, the more domain specific the device. Alternatively, the degree of a system’s domain specificity can be understood as a function of the range of inputs that turn the system on, where the size of that range determines the informational reach of the system (Carruthers, 2006; Samuels, 2000).

Domains (and by extension, modules) are typically more fine-grained than sensory modalities like vision and audition. This seems clear from Fodor’s list of plausibly domain-specific mechanisms, which includes systems for color perception, visual shape analysis, sentence parsing, and face and voice recognition (Fodor, 1983, p. 47)—none of which correspond to perceptual or linguistic faculties in an intuitive sense. It also seems plausible, however, that the traditional sense modalities (vision, audition, olfaction, etc.), and the language faculty as a whole, are sufficiently domain specific to count as displaying this particular mark of modularity (McCauley & Henrich, 2006).

Innateness. The final feature of modular systems on Fodor’s roster is innateness, understood as the property of “develop[ing] according to specific, endogenously determined patterns under the impact of environmental releasers” (Fodor, 1983, p. 100). On this view, modular systems come on-line chiefly as the result of a brute-causal process like triggering, rather than an intentional-causal process like learning. (For more on this distinction, see Cowie, 1999; for an alternative analysis of innateness, based on the notion of canalization, see Ariew, 1999.) The most familiar example here is language, the acquisition of which occurs in all normal individuals in all cultures on more or less the same schedule: single words at 12 months, telegraphic speech at 18 months, complex grammar at 24 months, and so on (Stromswold, 1999). Other candidates include visual object perception (Spelke, 1994) and low-level mindreading (Scholl & Leslie, 1999).

2. Modularity, Fodor-style: A modest proposal

The hypothesis of modest modularity, as we shall call it, has two strands. The first strand of the hypothesis is positive. It says that input systems, such as systems involved in perception and language, are modular. The second strand is negative. It says that central systems, such as systems involved in belief fixation and practical reasoning, are not modular.

In this section, we assess the case for modest modularity. The next section (§3) will be devoted to discussion of the hypothesis of massive modularity, which retains the positive strand of Fodor’s hypothesis while reversing the polarity of the second strand from negative to positive—revising the concept of modularity in the process.

The positive part of the modest modularity hypothesis is that input systems are modular. By ‘input system’ Fodor (1983) means a computational mechanism that “presents the world to thought” (p. 40) by processing the outputs of sensory transducers. A sensory transducer is a device that converts the energy impinging on the body’s sensory surfaces, such as the retina and cochlea, into a computationally usable form, without adding or subtracting information. Roughly speaking, the product of sensory transduction is raw sensory data. Input processing involves non-demonstrative inferences from this raw data to hypotheses about the layout of objects in the world. These hypotheses are then passed on to central systems for the purpose of belief fixation, and those systems in turn pass their outputs to systems responsible for the production of behavior.

Fodor argues that input systems constitute a natural kind, defined as “a class of phenomena that have many scientifically interesting properties over and above whatever properties define the class” (Fodor, 1983, p. 46). He argues for this by presenting evidence that input systems are modular, where modularity is marked by a cluster of psychologically interesting properties—the most interesting and important of these being informational encapsulation. In the course of our discussion in §1, we reviewed a representative sample of this evidence, and for present purposes that should suffice. (Readers interested in further details should consult Fodor, 1983, pp. 47–101.)

2.1 Challenges to low-level modularity

Fodor’s claim about the modularity of input systems has been disputed by a number of philosophers and psychologists (Churchland, 1988; Arbib, 1987; Marslen-Wilson & Tyler, 1987; McCauley & Henrich, 2006). The most wide-ranging philosophical critique is due to Prinz (2006), who argues that perceptual and linguistic systems rarely exhibit the features characteristic of modularity. In particular, he argues that such systems are not informationally encapsulated. To this end, Prinz adduces two types of evidence. First, there appear to be cross-modal effects in perception, which would tell against encapsulation at the level of input systems. The classic example of this, drawn from the speech perception literature, is the McGurk effect (McGurk & MacDonald, 1976). Here, subjects watching a video of one phoneme being spoken (e.g., /ga/) dubbed with a sound recording of a different phoneme (/ba/) hear a third, altogether different phoneme (/da/). Second, he points to what look to be top-down effects on visual and linguistic processing, the existence of which would tell against cognitive impenetrability, i.e., encapsulation relative to central systems. Some of the most striking examples of such effects come from research on speech perception. Probably the best-known is the phoneme restoration effect, as in the case where listeners ‘fill in’ a missing phoneme in a spoken sentence (The state governors met with their respective legi*latures convening in the capital city) from which the missing phoneme (the /s/ sound in legislatures) has been deleted and replaced with the sound of a cough (Warren, 1970). By hypothesis, this filling-in is driven by listeners’ understanding of the linguistic context.

How convincing one finds this part of Prinz’s critique, however, depends on how convincing one finds his explanation of these effects. The McGurk effect, for example, seems consistent with the claim that speech perception is an informationally encapsulated system, albeit a system that is multi-modal in character (cf. Fodor, 1983, p. 132n.13). If speech perception is a multi-modal system, the fact that its operations draw on both auditory and visual information need not undermine the claim that speech perception is encapsulated. Other cross-modal effects, however, resist this type of explanation. In the double flash illusion, for example, viewers shown a single flash accompanied by two beeps report seeing two flashes (Shams et al., 2000). The same goes for the rubber hand illusion, in which synchronous brushing of a hand hidden from view and a realistic-looking rubber hand seen at the usual location of the hidden hand gives rise to the impression that the fake hand is real (Botvinick & Cohen, 1998). With respect to phenomena of this sort, unlike the McGurk effect, there is no plausible candidate for a single, domain-specific system whose operations draw on multiple sources of sensory information.

Regarding phoneme restoration, it could be that the effect is driven by listeners’ drawing on information stored in a language-proprietary database (specifically, information about the linguistic types in the lexicon of English), rather than higher-level contextual information. Hence, it’s unclear whether the case of phoneme restoration described above counts as a top-down effect. But not all cases of phoneme restoration can be accommodated so readily, since the phenomenon also occurs when there are multiple lexical items available for filling in (Warren & Warren, 1970). For example, listeners fill the gap in the sentences The *eel is on the axle and The *eel is on the orange differently—with a /wh/ sound and a /p/ sound, respectively—suggesting that speech perception is sensitive to contextual information after all.

A further challenge to modest modularity, not addressed by Prinz (2006), comes from evidence that susceptibility to the Müller-Lyer illusion varies by both culture and age. For example, it appears that adults in Western cultures are more susceptible to the illusion than their non-Western counterparts; that adults in some non-Western cultures, such as hunter-gatherers from the Kalahari Desert, are nearly immune to the illusion; and that within (but not always across) Western and non-Western cultures, pre-adolescent children are more susceptible to the illusion than adults are (Segall, Campbell, & Herskovits, 1966). McCauley and Henrich (2006) take these findings as showing that the visual system is diachronically (as opposed to synchronically) penetrable, in that how one experiences the illusion-inducing stimulus changes as a result of one’s wider perceptual experience over an extended period of time. They also argue that the aforementioned evidence of cultural and developmental variability in perception militates against the idea that vision is an innate capacity, that is, the idea that vision is among the “endogenous features of the human cognitive system that are, if not largely fixed at birth, then, at least, genetically pre-programmed” and “triggered, rather than shaped, by the newborn’s subsequent experience” (p. 83). However, they also issue the following caveat:

[N]othing about any of the findings we have discussed establishes the synchronic cognitive penetrability of the Müller-Lyer stimuli. Nor do the Segall et al. (1966) findings provide evidence that adults’ visual input systems are diachronically penetrable. They suggest that it is only during a critical developmental stage that human beings’ susceptibility to the Müller-Lyer illusion varies considerably and that that variation substantially depends on cultural variables. (McCauley & Henrich, 2006, p. 99; italics in original)

As such, the evidence cited can be accommodated by friends of modest modularity, provided that allowance is made for the potential impact of environmental, including cultural, variables on development—something that most accounts of innateness make room for.

A useful way of making this point invokes Segal’s (1996) idea of diachronic modularity (see also Scholl & Leslie, 1999). Diachronic modules are systems that exhibit parametric variation over the course of their development. For example, in the case of language, different individuals learn to speak different languages depending on the linguistic environment in which they grew up, but they nonetheless share the same underlying linguistic competence in virtue of their (plausibly innate) knowledge of Universal Grammar. Given the observed variation in how people see the Müller-Lyer illusion, it may be that the visual system is modular in much the same way, with its development constrained by features of the visual environment. Such a possibility seems consistent with the claim that input systems are modular in Fodor’s sense.

Another source of difficulty for proponents of input-level modularity is neuroscientific evidence against the claim that perceptual and linguistic systems are strongly localized. Recall that for a system to be strongly localized, it must be realized in dedicated neural circuitry. Strong localization at the level of input systems, then, entails the existence of a one-to-one mapping between input systems and brain structures. As Anderson (2010, 2014) argues, however, there is no such mapping, since most cortical regions of any size are deployed in different tasks across different domains. For instance, activation of the fusiform face area, once thought to be dedicated to the perception of faces, is also recruited for the perception of cars and birds (Gauthier et al., 2000). Likewise, Broca’s area, once thought to be dedicated to speech production, also plays a role in action recognition, action sequencing, and motor imagery (Tettamanti & Weniger, 2006). Functional neuroimaging studies generally suggest that cognitive systems are at best weakly localized, that is, implemented in distributed networks of the brain that overlap, rather than in discrete and disjoint regions.

Arguably the most serious challenge to modularity at the level of input systems, however, comes from evidence that vision is cognitively penetrable, and hence not informationally encapsulated. The concept of cognitive penetrability, originally introduced by Pylyshyn (1984), has been characterized in a variety of non-equivalent ways (Stokes, 2013), but the core idea is this: a perceptual system is cognitively penetrable if and only if its operations are directly causally sensitive, in a semantically coherent way, to the agent’s beliefs, desires, intentions, or other nonperceptual states. Behavioral studies purporting to show that vision is cognitively penetrable date back to the early days of New Look psychology (Bruner and Goodman, 1947) and continue to the present day, with renewed interest in the topic emerging in the early 2000s (Firestone & Scholl, 2016). It appears, for example, that vision is influenced by an agent’s motivational states, with experimental subjects reporting that desirable objects look closer (Balcetis & Dunning, 2010) and ambiguous figures look like the interpretation associated with a more rewarding outcome (Balcetis & Dunning, 2006). In addition, vision seems to be influenced by subjects’ beliefs, with racial categorization affecting reports of the perceived skin tone of faces even when the stimuli are equiluminant (Levin & Banaji, 2006), and categorization of objects affecting reports of the perceived color of grayscale images of those objects (Hansen et al., 2006).

Skeptics of cognitive penetrability point out, however, that experimental evidence for top-down effects on perception can be explained in terms of effects of judgment, memory, and relatively peripheral forms of attention (Firestone & Scholl, 2016; Machery, 2015). Consider, for example, the claim that throwing a heavy ball (vs. a light ball) at a target makes the target look farther away, evidence for which consists of subjects’ visual estimates of the distance to the target (Witt, Proffitt, & Epstein, 2004). While it is possible that the greater effort involved in throwing the heavy ball caused the target to look farther away, it is also possible that the increased estimate of distance reflected the fact that subjects in the heavy ball condition judged the target to be farther away because they found it harder to hit (Firestone & Scholl, 2016). Indeed, reports by subjects in a follow-up study who were explicitly instructed to make their estimates on the basis of visual appearances only did not show the effect of effort, suggesting that the effect was post-perceptual (Woods, Philbeck, & Danoff, 2009). Other purported top-down effects on perception, such as the effect of golfing performance on size and distance estimates of golf holes (Witt et al., 2008), can be explained as effects of spatial attention, such as the fact that visually attended objects tend to appear larger and closer (Firestone & Scholl, 2016). These and related considerations suggest that the case for cognitive penetrability—and by extension, the case against low-level modularity—is weaker than its proponents make it out to be.

2.2 Fodor’s argument against high-level modularity

We turn now to the dark side of Fodor’s hypothesis: the claim that central systems are not modular.

Among the principal jobs of central systems is the fixation of belief, perceptual belief included, via non-demonstrative inference. Fodor (1983) argues that this sort of process cannot be realized in an informationally encapsulated system, and hence that central systems cannot be modular. Spelled out a bit further, his reasoning goes like this:

  1. Central systems are responsible for belief fixation.
  2. Belief fixation is isotropic and Quinean.
  3. Isotropic and Quinean processes cannot be carried out by informationally encapsulated systems.
  4. Belief fixation cannot be carried out by an informationally encapsulated system. [from 2 and 3]
  5. Modular systems are informationally encapsulated.
  6. Belief fixation is not modular. [from 4 and 5]

Hence:

  7. Central systems are not modular. [from 1 and 6]
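The validity of this reconstruction can be checked mechanically. The following sketch (in Lean; the predicate names are ours and purely illustrative) encodes premises 1–3 and 5 as quantified conditionals and derives the conclusion:

```lean
section
-- Systems and the properties Fodor attributes to (or withholds from) them.
variable (System : Type)
variable (Central Modular Encapsulated FixesBeliefs IsoQuinean : System → Prop)

example
    (p1 : ∀ s, Central s → FixesBeliefs s)      -- central systems fix beliefs
    (p2 : ∀ s, FixesBeliefs s → IsoQuinean s)   -- belief fixation is isotropic and Quinean
    (p3 : ∀ s, IsoQuinean s → ¬ Encapsulated s) -- such processes preclude encapsulation
    (p5 : ∀ s, Modular s → Encapsulated s)      -- modular systems are encapsulated
    : ∀ s, Central s → ¬ Modular s :=           -- hence: central systems are not modular
  fun s hc hm => p3 s (p2 s (p1 s hc)) (p5 s hm)
end
```

Since the argument is valid in this form, any resistance to its conclusion must target one of the premises.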

The argument here contains two terms that call for explication, both of which relate to the notion of confirmation holism in the philosophy of science. The term ‘isotropic’ refers to the epistemic interconnectedness of beliefs, in the sense that “everything that the scientist knows is, in principle, relevant to determining what else he ought to believe. In principle, our botany constrains our astronomy, if only we could think of ways to make them connect” (Fodor, 1983, p. 105). Antony (2003) presents a striking case of this sort of long-range interdisciplinary cross-talk in the sciences, between astronomy and archaeology; Carruthers (2006, pp. 356–357) furnishes another example, linking solar physics and evolutionary theory. On Fodor’s view, since scientific confirmation is akin to belief fixation, the fact that scientific confirmation is isotropic suggests that belief fixation in general has this property.

A second dimension of confirmation holism is that confirmation is ‘Quinean’, meaning that:

[T]he degree of confirmation assigned to any given hypothesis is sensitive to properties of the entire belief system … simplicity, plausibility, and conservatism are properties that theories have in virtue of their relation to the whole structure of scientific beliefs taken collectively. A measure of conservatism or simplicity would be a metric over global properties of belief systems. (Fodor, 1983, pp. 107–108; italics in original)

Here again, the analogy between scientific thinking and thinking in general underwrites the supposition that belief fixation is Quinean.

Both isotropy and Quineanness are features that preclude encapsulation, since their possession by a system would require extensive access to the contents of central memory, and hence a high degree of cognitive penetrability. Put in slightly different terms: isotropic and Quinean processes are ‘global’ rather than ‘local’, and since globality precludes encapsulation, isotropy and Quineanness preclude encapsulation as well.

By Fodor’s lights, the upshot of this argument—namely, the nonmodular character of central systems—is bad news for the scientific study of higher cognitive functions. This is neatly expressed by his “First Law of the Non-Existence of Cognitive Science,” according to which “[t]he more global (e.g., the more isotropic) a cognitive process is, the less anybody understands it” (Fodor, 1983, p. 107). His grounds for pessimism on this score are twofold. First, global systems are unlikely to be associated with local brain architecture, thereby rendering them unpromising objects of neuroscientific study:

We have seen that isotropic systems are unlikely to exhibit articulated neuroarchitecture. If, as seems plausible, neuroarchitecture is often a concomitant of constraints on information flow, then neural equipotentiality is what you would expect in systems in which every process has more or less uninhibited access to all the available data. The moral is that, to the extent that the existence of form/function correspondence is a precondition for successful neuropsychological research, there is not much to be expected in the way of a neuropsychology of thought. (Fodor, 1983, p. 127)

Second, and more importantly, global processes are resistant to computational explanation, making them unpromising objects of psychological study:

The fact is that—considerations of their neural realization to one side—global systems are per se bad domains for computational models, at least of the sort that cognitive scientists are accustomed to employ. The condition for successful science (in physics, by the way, as well as psychology) is that nature should have joints to carve it at: relatively simple subsystems which can be artificially isolated and which behave, in isolation, in something like the way that they behave in situ. Modules satisfy this condition; Quinean/isotropic-wholistic-systems by definition do not. If, as I have supposed, the central cognitive processes are nonmodular, that is very bad news for cognitive science (Fodor, 1983, p. 128).

By Fodor’s lights, then, considerations that militate against high-level modularity also militate against the possibility of a robust science of higher cognition—not a happy result, as far as most cognitive scientists and philosophers of mind are concerned.

Gloomy implications aside, Fodor’s argument against high-level modularity is difficult to resist. The main sticking points are these: first, the negative correlation between globality and encapsulation; second, the positive correlation between encapsulation and modularity. Putting these points together, we get a negative correlation between globality and modularity: the more global the process, the less modular the system that executes it. As such, there seem to be only three ways to block the conclusion of the argument:

  1. Deny that central processes are global.
  2. Deny that globality and encapsulation are negatively correlated.
  3. Deny that encapsulation and modularity are positively correlated.

Of these three options, the second seems least attractive, as it seems something like a conceptual truth that globality and encapsulation pull in opposite directions. The first option is slightly more appealing, but only slightly. The idea that central processes are relatively global, even if not as global as the analogy with scientific confirmation suggests, is hard to deny. And that is all the argument really requires.

That leaves the third option: denying that modularity requires encapsulation. This is, in effect, the strategy pursued by Carruthers (2006). More specifically, Carruthers draws a distinction between two kinds of encapsulation: “narrow-scope” and “wide-scope.” A system is narrow-scope encapsulated if it cannot draw on any information held outside of it in the course of its processing. This corresponds to encapsulation as Fodor uses the term. By contrast, a system that is wide-scope encapsulated can draw on exogenous information during the course of its operations—it just cannot draw on all of that information. (Compare: “No exogenous information is accessible” vs. “Not all exogenous information is accessible.”) This is encapsulation in a weaker sense of the term than Fodor’s. Indeed, Carruthers’s use of the term “encapsulation” in this context is somewhat misleading, insofar as wide-scope encapsulated systems count as unencapsulated in Fodor’s sense (Prinz, 2006).

Dropping the narrow-scope encapsulation requirement on modules raises a number of issues, not the least of which is that it reduces the power of modularity hypotheses to explain functional dissociations at the system level (Stokes & Bergeron, 2015). That said, if modularity requires only wide-scope encapsulation, then Fodor’s argument against central modularity no longer goes through. But given the importance of narrow-scope encapsulation to Fodorian modularity, all this shows is that central systems might be modular in a non-Fodorian way. The original argument that central systems are not Fodor-modular—and with it, the motivation for the negative strand of the modest modularity hypothesis—stands.

3. Post-Fodorian modularity

According to the massive modularity hypothesis, the mind is modular through and through, including the parts responsible for high-level cognitive functions like belief fixation, problem-solving, planning, and the like. Originally articulated and advocated by proponents of evolutionary psychology (Sperber, 1994, 2002; Cosmides & Tooby, 1992; Pinker, 1997; Barrett, 2005; Barrett & Kurzban, 2006), the hypothesis has received its most comprehensive and sophisticated defense at the hands of Carruthers (2006). Before proceeding to the details of that defense, however, we need to consider briefly what concept of modularity is in play.

The main thing to note here is that the operative notion of modularity differs significantly from the traditional Fodorian one. Carruthers is explicit on this point:

[If] a thesis of massive mental modularity is to be remotely plausible, then by ‘module’ we cannot mean ‘Fodor-module’. In particular, the properties of having proprietary transducers, shallow outputs, fast processing, significant innateness or innate channeling, and encapsulation will very likely have to be struck out. That leaves us with the idea that modules might be isolable function-specific processing systems, all or almost all of which are domain specific (in the content sense), whose operations aren’t subject to the will, which are associated with specific neural structures (albeit sometimes spatially dispersed ones), and whose internal operations may be inaccessible to the remainder of cognition. (Carruthers, 2006, p. 12)

Of the original set of nine features associated with Fodor-modules, then, Carruthers-modules retain at most only five: dissociability, domain specificity, automaticity, neural localizability, and central inaccessibility. Conspicuously absent from the list is informational encapsulation, the feature most central to modularity in Fodor’s account. What’s more, Carruthers goes on to drop domain specificity, automaticity, and strong localizability (which rules out the sharing of parts between modules) from his initial list of five features, making his conception of modularity even more sparse (Carruthers, 2006, p. 62). Other proposals in the literature are similarly permissive in terms of the requirements a system must meet in order to count as modular; an extreme case is Barrett and Kurzban’s conception of modules as functionally specialized mechanisms (Barrett & Kurzban, 2006).

A second point, related to the first, is that defenders of massive modularity have chiefly been concerned to defend the modularity of central cognition, taking for granted that the mind is modular at the level of input systems. Thus, the hypothesis at issue for theorists like Carruthers might be best understood as the conjunction of two claims: first, that input systems are modular in a way that requires narrow-scope encapsulation; second, that central systems are modular, but only in a way that does not require this feature. In defending massive modularity, Carruthers focuses on the second of these claims, and so will we.

3.1 The case for massive modularity

The centerpiece of Carruthers (2006) consists of three arguments for massive modularity: the Argument from Design, the Argument from Animals, and the Argument from Computational Tractability. Let’s briefly consider each of them in turn.

The Argument from Design is as follows:

  1. Biological systems are designed systems, constructed incrementally.
  2. Such systems, when complex, need to be organized in a pervasively modular way, that is, as a hierarchical assembly of separately modifiable, functionally autonomous components.
  3. The human mind is a biological system, and is complex.
  4. Therefore, the human mind is (probably) massively modular in its organization. (Carruthers, 2006, p. 25)

The crux of this argument is the idea that complex biological systems cannot evolve unless they are organized in a modular way, where modular organization entails that each component of the system (that is, each module) can be selected for change independently of the others. In other words, the evolvability of the system as a whole requires the independent evolvability of its parts. The problem with this assumption is twofold (Woodward & Cowie, 2004). First, not all biological traits are independently modifiable. Having two lungs, for example, is a trait that cannot be changed without changing other traits of an organism, because the genetic and developmental mechanisms underlying lung numerosity causally depend on the genetic and developmental mechanisms underlying bilateral symmetry. Second, there appear to be developmental constraints on neurogenesis which rule out changing the size of one brain area independently of the others. This in turn suggests that natural selection cannot modify cognitive traits in isolation from one another, given that evolving the neural circuitry for one cognitive trait is likely to result in changes to the neural circuitry for other traits.

A further worry about the Argument from Design concerns the gap between its conclusion (the claim that the mind is massively modular in organization) and the hypothesis at issue (the claim that the mind is massively modular simpliciter). The worry is this. According to Carruthers, the modularity of a system implies the possession of just two properties: functional dissociability and inaccessibility of processing to external monitoring. Suppose that a system is massively modular in organization. It follows from the definition of modular organization that the components of the system are functionally autonomous and separately modifiable. Though functional autonomy guarantees dissociability, it’s not clear why separate modifiability guarantees inaccessibility to external monitoring. According to Carruthers, the reason is that “if the internal operations of a system (e.g., the details of the algorithm being executed) were available elsewhere, then they couldn’t be altered without some corresponding alteration being made in the system to which they are accessible” (Carruthers, 2006, p. 61). But this is a questionable assumption. On the contrary, it seems plausible that the internal operations of one system could be accessible to a second system in virtue of a monitoring mechanism that functions the same way regardless of the details of the processing being monitored. At a minimum, the claim that separate modifiability entails inaccessibility to external monitoring calls for more justification than Carruthers offers.

In short, the Argument from Design is susceptible to a number of objections. Fortunately, there’s a slightly stronger argument in the vicinity of this one, due to Cosmides and Tooby (1992). It goes like this:

  1. The human mind is a product of natural selection.
  2. In order to survive and reproduce, our human ancestors had to solve a number of recurrent adaptive problems (finding food, shelter, mates, etc.).
  3. Since adaptive problems are solved more quickly, efficiently, and reliably by modular systems than by non-modular ones, natural selection would have favored the evolution of a massively modular architecture.
  4. Therefore, the human mind is (probably) massively modular.

The force of this argument depends chiefly on the strength of the third premise. Not everyone is convinced, to put it mildly (Fodor, 2000; Samuels, 2000; Woodward & Cowie, 2004). First, the premise exemplifies adaptationist reasoning, and adaptationism in the philosophy of biology has more than its share of critics. Second, it is doubtful whether adaptive problem-solving in general is easier to accomplish with a large collection of specialized problem-solving devices than with a smaller collection of general problem-solving devices with access to a library of specialized programs (Samuels, 2000). Hence, insofar as the massive modularity hypothesis postulates an architecture of the first sort—as evolutionary psychologists’ “Swiss Army knife” metaphor of the mind implies (Cosmides & Tooby, 1992)—the premise seems shaky.

A related argument is the Argument from Animals. Unlike the Argument from Design, this argument is never explicitly stated in Carruthers (2006). But here is a plausible reconstruction of it, due to Wilson (2008):

  1. Animal minds are massively modular.
  2. Human minds are incremental extensions of animal minds.
  3. Therefore, the human mind is (probably) massively modular.

Unfortunately for friends of massive modularity, this argument, like the Argument from Design, is vulnerable to a number of objections (Wilson, 2008). We’ll mention two of them here. First, it’s not easy to motivate the claim that animal minds are massively modular in the operative sense. Though Carruthers (2006) goes to heroic lengths to do so, the evidence he cites—e.g., for the domain specificity of animal learning mechanisms, à la Gallistel, 1990—adds up to less than what’s needed. The problem is that domain specificity is not sufficient for Carruthers-style modularity; indeed, it is not even one of the central characteristics of modularity in Carruthers’s account. So the argument falters at the first step. Second, even if animal minds are massively modular, and even if single incremental extensions of the animal mind preserve that feature, it’s quite possible that a series of such extensions might have led to its loss. In other words, as Wilson (2008) puts it, it can’t be assumed that the conservation of massive modularity is transitive. And without this assumption, the Argument from Animals can’t go through.

Finally, we have the Argument from Computational Tractability (Carruthers, 2006, pp. 44–59). For the purposes of this argument, we assume that a mental process is computationally tractable if it can be specified at the algorithmic level in such a way that the execution of the process is feasible given time, energy, and other resource constraints on human cognition (Samuels, 2005). We also assume that a system is wide-scope encapsulated if in the course of its operations the system lacks access to at least some information exogenous to it. (Recall from §2.2 that wide-scope encapsulation is entailed by narrow-scope encapsulation. Hence, the requirement that a system be wide-scope encapsulated does not preclude its being narrow-scope encapsulated as well.)

  1. The mind is computationally realized.
  2. All computational mental processes must be tractable.
  3. Tractable processing is possible only in wide-scope encapsulated systems.
  4. Hence, the mind must consist entirely of wide-scope encapsulated systems.
  5. Hence, the mind is (probably) massively modular.

There are two problems with this argument, however. The first problem has to do with the third premise, which states that tractability requires wide-scope encapsulation, that is, the inaccessibility of at least some exogenous information to processing. What tractability actually requires is something weaker, namely, that not all information is accessed by the mechanism in the course of its operations (Samuels, 2005). In other words, it is possible for a system to have unlimited access to a database without actually accessing all of its contents. Though tractable computation rules out exhaustive search, for example, unencapsulated mechanisms need not engage in exhaustive search, so tractability does not require wide-scope encapsulation. The second problem with the argument concerns the last step. Though one might reasonably suppose that modular systems must be wide-scope encapsulated, the converse doesn’t follow, so it’s unclear how one gets from a claim about pervasive wide-scope encapsulation to a claim about pervasive modularity.

All in all, then, compelling general arguments for massive modularity are hard to come by. This is not yet to dismiss the possibility of modularity in high-level cognition, but it invites skepticism, especially given the paucity of empirical evidence directly supporting the hypothesis (Robbins, 2013). For example, it has been suggested that the capacity to think about social exchanges is subserved by a domain-specific, functionally dissociable, and innate mechanism (Stone et al., 2002; Sugiyama et al., 2002). However, it appears that deficits in social exchange reasoning do not occur in isolation, but are accompanied by other social-cognitive impairments (Prinz, 2006). Skepticism about modularity in other areas of central cognition, such as high-level mindreading, also seems to be the order of the day (Currie & Sterelny, 2000). The type of mindreading impairments characteristic of Asperger syndrome and high-functioning autism, for example, co-occur with sensory processing and executive function deficits (Frith, 2003). In general, there is little in the way of neuropsychological evidence of high-level modularity.

3.2 The case against massive modularity

Just as there are general theoretical arguments for massive modularity, there are general theoretical arguments against it. One argument takes the form of what Fodor (2000) calls the “Input Problem.” The problem is this. Suppose that the architecture of the mind is modular from top to bottom, and the mind consists entirely of domain-specific mechanisms. In that case, the outputs of each low-level (input) system will need to be routed to the appropriately specialized high-level (central) system for processing. But that routing can only be accomplished by a domain-general, non-modular mechanism—contradicting the initial supposition. In response to this problem, Barrett (2005) argues that processing in a massively modular architecture does not require a domain-general routing device of the sort envisaged by Fodor. An alternative solution, Barrett suggests, involves what he calls “enzymatic computation.” In this model, low-level systems pool their outputs in a centrally accessible workspace, where each central system is selectively activated by outputs that match its domain, in much the same way that enzymes selectively bind with substrates that match their specific templates. Like enzymes, specialized computational devices at the central level of the architecture accept a restricted range of inputs (analogous to biochemical substrates), perform specialized operations on those inputs (analogous to biochemical reactions), and produce outputs in a format usable by other computational devices (analogous to biochemical products). This obviates the need for a domain-general (hence, non-modular) mechanism to mediate between low-level and high-level systems.

A second challenge to massive modularity is posed by the “Domain Integration Problem” (Carruthers, 2006). The problem here is that reasoning, planning, decision making, and other types of high-level cognition routinely involve the production of conceptually structured representations whose content crosses domains. This means that there must be some mechanism for integrating representations from multiple domains. But such a mechanism would be domain general rather than domain specific, and hence, non-modular. Like the Input Problem, however, the Domain Integration Problem is not insurmountable. One possible solution is that the language system has the capacity to play the role of content integrator in virtue of its capacity to transform conceptual representations that have been linguistically encoded (Hermer & Spelke, 1996; Carruthers, 2002, 2006). On this view, language is the vehicle of domain-general thought. (For doubts about the viability of this proposal, see Rice (2011) and Robbins (2002).)

Empirical objections to massive modularity take a variety of forms. To start with, there is neurobiological evidence of developmental plasticity, a phenomenon that tells against the idea that brain structure is innately specified (Buller, 2005; Buller & Hardcastle, 2000). However, not all proponents of massive modularity insist that modules are innately specified (Carruthers, 2006; Kurzban, Tooby, & Cosmides, 2001). Furthermore, it’s unclear to what extent the neurobiological record is at odds with nativism, given the evidence that specific genes are linked to the normal development of cortical structures in both humans and animals (Machery & Barrett, 2008; Ramus, 2006).

Another source of evidence against massive modularity comes from research on individual differences in high-level cognition (Rabaglia, Marcus, & Lane, 2011). Such differences tend to be strongly positively correlated across domains—a phenomenon known as the “positive manifold”—suggesting that high-level cognitive abilities are subserved by a domain-general mechanism, rather than by a suite of specialized modules. There is, however, an alternative explanation of the positive manifold. Since post-Fodorian modules are allowed to share parts (Carruthers, 2006), the correlations observed may stem from individual differences in the functioning of components spanning multiple domain-specific mechanisms.

3.3 Doubts about the debate

As noted earlier, proponents of the massive modularity hypothesis argue that the architecture of the mind is modular through and through. Opponents of massive modularity, on the other hand, argue that at least some components of mental architecture are not modular. Meaningful disagreement about the extent to which the mind is modular, however, can only take place against a background of agreement about what it means for a component of the mind to be modular. And given the lack of consensus in the cognitive science literature about what features are criterial of modularity, one might worry that partisans in the debate over massive modularity are talking past each other.

A version of this worry animates a recent critique of the massive modularity debate, according to which the controversy stems from a confusion between two levels of analysis: a functional level and an intentional level (Pietraszewski & Wertz, 2022). (While Pietraszewski and Wertz describe these as different levels of analysis in the sense of Marr (1982), their exposition also draws on Dennett’s (1987) idea of stances, which are better understood as strategies for prediction and explanation.) The crux of Pietraszewski and Wertz’s critique is as follows. At a functional level of analysis, modules are functionally specialized mechanisms. Since proponents of massive modularity operate at this level of analysis, they understand modularity in terms of functional specialization (Barrett & Kurzban, 2006). At an intentional level of analysis, by contrast, modules are parts of the mind that operate outside the sphere of influence of a central agency. At this level of analysis, the essence of modularity is informational encapsulation, a property that exists only at the intentional level. (The logic behind this restriction is that informational encapsulation entails independence from a central agency, and the concept of a central agency is meaningful only at an intentional level.) Since opponents of massive modularity operate at an intentional level of analysis, they understand modularity primarily in terms of informational encapsulation (Fodor, 1984, 2000). According to Pietraszewski and Wertz, the debate over massive modularity persists because its proponents and opponents are operating at different levels of analysis (functional and intentional, respectively), and these different levels of analysis presuppose fundamentally different criteria of modularity (functional specialization and informational encapsulation, respectively).

Responding to this critique, Egeland (2024) takes issue with Pietraszewski and Wertz’s characterization of the massive modularity debate. For starters, Egeland contests their claim that partisans on the affirmative side of the debate typically conceive of modules as nothing more than functionally specialized mechanisms. In fact, he says, most proponents of massive modularity adopt a thicker conception of modularity, according to which modules are both functionally specialized and domain specific (Boyer & Barrett, 2015; Coltheart, 1999; Cosmides & Tooby, 1994; Sperber, 2002; Villena, 2023; Zerilli, 2017). As for the negative side of the debate, Egeland argues against the idea that opponents of massive modularity always conceive of modules as informationally encapsulated, noting that some of them regard domain specificity, rather than informational encapsulation, as the principal criterion of modularity (Bolhuis & Macphail, 2001; Chiappe & Gardner, 2012; Reader et al., 2011). Based on these considerations, Egeland concludes that Pietraszewski and Wertz’s dismissal of the massive modularity debate as the product of conceptual error is both unwarranted and counterproductive, insofar as it glosses over important questions about the extent to which the architecture of the mind is composed of domain-specific mechanisms (Margolis & Laurence, 2023).

4. Modularity and the border between perception and cognition

The hypothesis of modest modularity (§2) says that input systems are modular while central systems are non-modular. This section explores how philosophers have drawn on the hypothesis of modest modularity to either endorse or oppose the idea of a scientifically grounded distinction between perception and cognition.

That there is a common-sense distinction between seeing and believing is not in doubt. It is debatable, however, whether this common-sense distinction tracks a joint in nature: is there a border between perception and cognition that can be captured by the sciences of the mind? One way to defend the existence of such a border is to appeal to the hypothesis of modest modularity. If perception is modular and cognition is non-modular, then we can give an account of the border in terms of the brain’s information-processing architecture. This view is discussed in §4.1.

On certain ways of characterizing the information-processing architecture of the brain, however, it is unclear whether we can make sense of a border between perception and cognition. Proponents of predictive processing architectures, for example, emphasize the continuity between cognition and perception, and the ubiquity of top-down influences on the processing of sensory information. This leads some theorists to deny the existence of an architectural border between perception and cognition on the grounds that perception cannot be modular. Their arguments are considered in §4.2.

4.1 Characterizing the perception–cognition debate via modularity

The idea that there is an architectural border between perception and cognition takes its inspiration from the work of Fodor and Pylyshyn, although it is important to notice that neither makes this exact claim. Fodor’s (1983) architectural distinction is between two families of systems: modular systems performing input analysis, which constitute a natural kind, and non-modular central systems, which exploit the information provided by these input systems. Fodorian input systems include linguistic processing as well as perceptual processing, and Fodorian central systems are engaged in the rational fixation of belief by non-demonstrative inference, rather than cognition as it is more broadly understood by cognitive psychology. Pylyshyn’s (1999) interest is specifically in the cognitive impenetrability of early visual processing rather than perception more generally, and he avoids using the term ‘modular’ to refer to cognitively impenetrable systems because he thinks it conflates several independent concepts. Technically, therefore, neither Fodor nor Pylyshyn is attempting to characterize a border between perception and cognition as they are standardly understood by the mind-sciences. Contemporary proponents of the architectural approach to the perception–cognition border draw on the insights of Fodor and Pylyshyn to defend the claim that there is a joint in nature that constitutes a border between perception and cognition, and that this border exists because perception is modular and cognition is non-modular. Versions of this claim are put forward by Firestone and Scholl (2016), Mandelbaum (2017), Green (2020), Quilty-Dunn (2020), and Clarke (2021).

Some defenders of the architectural approach to the perception–cognition border remain committed to the idea that modularity is incompatible with cognitive penetration. Firestone and Scholl, for example, claim that the nature of the joint between perception and cognition is “such that perception proceeds without any direct, unmediated influence from cognition” (Firestone & Scholl, 2016, p. 17). They propose that the standard purported examples of cognitive penetration are based on misinterpretations of the empirical data. Similarly, Quilty-Dunn (2020) takes seriously the idea that perception is modular because it relies on the cognitive impenetrability of stores of proprietary information, and that this is what makes it distinct from cognition. He allows that perception is influenced by the effects of attention but denies that such influences violate the cognitive impenetrability of perception.

Other defenders of the architectural joint between perception and cognition, however, endorse a version of the modularity of perception on which it is compatible with perception being cognitively penetrated. Green’s (2020) “dimension restriction hypothesis” is an updated take on the modularity of perception, on which perceptual processes are constrained in a way that cognitive processes are not: the cognitive penetration of perception can occur, but only within strict limits. Green (2020) understands perception and cognition as separate psychological systems with restricted patterns of information flow between them. Clarke (2021) proposes that the joints between perceptual modules within a perceptual modality can be modified by cognitive processing without challenging the modularity of perception. He suggests that mental imagery, for example, can influence the inputs to a perceptual module via the visual buffer without altering the module’s proprietary database.

One reason to prefer stronger formulations of modularity, on which it is incompatible with cognitive penetration, concerns the tractability of perceptual processing. It is sometimes proposed that perceptual processing would be computationally intractable if it were cognitively penetrable. Green (2020) proposes that his dimensionally restricted account of modularity is consistent with the tractability constraint due to its strict limitations on information flow between perceptual and cognitive systems, and Clarke (2021) proposes that as long as no cognitive information has been added to a perceptual module’s proprietary database, worries about intractability do not arise. Brooke-Wilson (2023) denies that tractability concerns motivate informational encapsulation in the first place: he argues that informational encapsulation is neither necessary nor sufficient for tractability, and thus that informational encapsulation is less important than many have thought. If we reject informational encapsulation as an essential feature of modularity, however, it becomes more difficult to use modularity to characterize an architectural distinction between perception and cognition, since some paradigmatic cognitive processes may end up on the modular side of the distinction rather than the non-modular side (Clarke & Beck, 2023).

It is possible to defend the claim that there is a joint in nature between perception and cognition while denying that the correct way to characterize the joint is by appealing to modularity. One way to do this is by focusing instead on the difference in format between perceptual representations and cognitive representations. Block (2023), for example, argues that perceptual representations are nonpropositional, nonconceptual, and iconic, while cognitive representations are propositional, conceptual, and discursive. Characterizing the perception–cognition border in terms of representational format is not necessarily in tension with an architectural view of the border in terms of modularity. Those who take the perception–cognition border to be identifiable via representational format also tend to be sympathetic to modularity (e.g., Burnston, 2017). Block (2023) proposes that there is substantial truth to the modularity hypothesis, although he thinks that the border between perception and cognition is best captured in terms of representational format.

Another way to defend the claim that there is a perception–cognition border without relying on modularity is to distinguish between mental states that depend on proximal stimulation and those that do not. Phillips (2017) and Beck (2018) propose that perception is stimulus-dependent while cognition is stimulus-independent, for example. For further discussion of different ways to defend the idea of a border between perception and cognition, see the overviews by Watzl, Sundberg, and Nes (2021) and Clarke and Beck (2023).

4.2 Challenging the perception–cognition border via modularity

The claim that there is a perception–cognition boundary has come into question (Shea, 2014). One way to challenge it is to appeal to empirical evidence of cognitive penetration, infer that perception is not informationally encapsulated, and conclude that perception is not modular. This section focuses on an alternative way to challenge the modularity of perception, which involves arguing that our best theories of mental architecture rule out modularity in principle. This strategy is primarily associated with ‘neural reuse’ architectures and ‘predictive processing’ architectures.

Proponents of neural reuse architectures emphasize the extent to which neural mechanisms that originally evolved or developed for one cognitive function can be deployed to serve different cognitive functions (Anderson, 2010; Hurley, 2008). Zerilli (2020) argues that the only kinds of dissociable units we encounter in the brain have little in common with the modules posited by psychology and cognitive science, and thus that neural reuse architectures have disruptive implications for the forms of functional modularity adopted by Fodor or Carruthers. While it is true that neural reuse theories suggest that the same individual brain regions can implement multiple functional modules, this is a challenge primarily to anatomical modularity in the form of neural localizability (see §1). Anderson (2010) proposes that the sort of functional modularity characterized by domain specificity and informational encapsulation does not require anatomical modularity, and thus that the lack of anatomical modularity in neural reuse architectures does not provide a straightforward challenge to the modularity of mind.

Proponents of predictive processing architectures claim that the brain is a hierarchically structured engine of top-down prediction-error minimization, in which higher-level generative models predict the information in lower-level generative models (Clark, 2013; Hohwy, 2013). At the lowest level, the generative model predicts the sensory input. The only information passed from lower to higher levels in the hierarchy takes the form of prediction errors, which capture how the actual input to each layer or model differs from the predicted input. Higher-level models provide Bayesian priors to lower-level models and are updated by prediction errors at lower levels in accordance with Bayes’ rule. Such a heavily top-down architecture, in which there is “no theoretical or anatomical border preventing top-down projections from high to low levels of the perceptual hierarchy” (Hohwy, 2013, p. 122), seems liable to cognitive penetration. Vetter and Newen (2014), Lupyan (2015), and Cermeño-Aínsa (2021) argue that the top-down nature of predictive processing architectures must lead to cognitive penetration, and Vance and Stokes (2017) take this as evidence that such architectures are necessarily non-modular. Clark (2013) concurs, holding that according to the predictive processing approach, perception is theory-laden and knowledge-driven; he proposes that “[t]o perceive the world just is to use what you know to explain away the sensory signal” (Clark, 2013, p. 190).
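The Bayesian updating just described can be given a schematic gloss (the notation here is ours, not drawn from Clark or Hohwy). Let h be a hypothesis entertained by a higher-level model, P(h) the prior it supplies to the level below, and e the actual input arriving at that lower level. The lower level passes upward only the prediction error, the discrepancy between the actual input and the input predicted under h, and the higher level then revises its model in accordance with Bayes’ rule:

```latex
\varepsilon \;=\; e - \hat{e}(h)
\qquad\qquad
P(h \mid e) \;=\; \frac{P(e \mid h)\, P(h)}{\sum_{h'} P(e \mid h')\, P(h')}
```

On this sketch, the prior P(h) flows down the hierarchy while only the error signal flows up, which is why such architectures are described as heavily top-down.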

These concerns lead Clark (2013) to doubt whether there is a border between perception and cognition. He proposes that predictive architecture “makes the lines between perception and cognition fuzzy, perhaps even vanishing” (Clark, 2013, p. 190) and suggests that perception and cognition are “profoundly unified and, in important respects, continuous” (Clark, 2013, p. 187). Philosophers have responded to these concerns by arguing that while predictive processing architectures may allow cognitive penetration, they don’t necessitate it (Macpherson, 2015). Hohwy (2013) suggests that the top-down processing associated with predictive architectures builds in a certain kind of evidential insulation, such that cognitive penetration will occur only under conditions of noisy input or uncertainty. Drayson (2017) argues that the probabilistic causal influence from one level in the hierarchy to the level below need not be transitive, and so we can accept that each level in the predictive hierarchy is causally influenced by the level above it without having to accept that each level is causally influenced by all the levels above it. In addition to arguing that predictive processing architectures do not necessitate the cognitive penetration of perception, some philosophers go so far as to claim that predictive processing architectures do in fact exhibit modularity (Beni, 2022; Burnston, 2021; Drayson, 2017).

Even if Clark is right that predictive processing architectures do not allow for modularity, however, it would not follow that there is no distinction between perception and cognition. Clark’s claim that predictive processing architectures provide “a genuine departure from many of our previous ways of thinking about perception, cognition, and the human cognitive architecture” (Clark, 2013, p. 187) seems to rely on the idea that modularity is the default view of perception. This position is challenged at length by Stokes (2021). Furthermore, modularity is not the only candidate for drawing a joint in nature between perception and cognition: see the alternative approaches to the perception–cognition border discussed in §4.1.

4.3 Modularity beyond perception and cognition

Discussions of modularity in philosophy and cognitive science tend to focus on two aspects of the mind: sensing and perceiving the world (e.g., vision, audition, olfaction), and thinking about the world (e.g., reasoning, planning, decision making). As important and fundamental as these capacities are, they do not exhaust the psychological domain, which also includes affective capacities (e.g., pain) and agentive capacities (e.g., motor control). Though the modularity of these further capacities has been much less well investigated, a growing literature in this area testifies to the importance of modularity beyond the perception–cognition divide.

Regarding the pain system, the received view, at least until recently, has been that pain is not modular. This view is based primarily on evidence that pain is cognitively penetrable, such as the apparent modulation of pain by expectations (Gligorov, 2017) and the phenomenon of placebo analgesia (Shevlin & Friesen, 2021). Together with the assumption that cognitive penetrability precludes informational encapsulation, and the assumption that modularity requires informational encapsulation, this evidence suggests that the pain system is non-modular. Pushing back against this consensus, Casser and Clarke (2023) argue that evidence for the cognitive penetrability of pain is inconclusive. They also argue that evidence for the judgment independence of pain (from studies of the Thermal Grill Illusion, for example) is robust; such evidence militates against cognitive penetrability and in favor of informational encapsulation, hence in favor of modularity. But even if Casser and Clarke’s argument goes through, there may be other ways to motivate the conclusion that pain is not modular. For example, Skrzypulec (2023) argues that central cognitive mechanisms are partly constitutive of the pain system, which in turn suggests that pain may not be modular after all. In short, on the question of whether the pain system is modular, the jury is still out.

As for the modularity of motor control, Mylopoulos (2021) argues that despite being cognitively penetrable, the motor system is informationally encapsulated, hence modular in the Fodorian sense. On her view, motor control is (a) cognitively penetrable because it takes states of central cognition (i.e., intentions or action plans) as inputs, and (b) informationally encapsulated because in processing its inputs it accesses only motor schemas stored in a proprietary database and sensory information selected for by attention. There are two problems with this argument, however. First, the fact that a system takes states of central cognition as inputs does not show that in processing its inputs, the system is directly causally sensitive to states of central cognition. (Recall the distinction between information drawn on by a system in the course of its operations, on the one hand, and inputs to the system carrying out those operations, on the other; see §1 above.) Hence, even if the motor system took intentions as inputs, that would not show that the system is cognitively penetrable, except in a trivial sense. Second, and more importantly, informational encapsulation requires that when computing an input–output function, a system draws only on information contained in its inputs and information stored in a proprietary database, and nothing else. But on Mylopoulos’s proposal, the motor system does not meet this requirement. As a result, the case she makes for the modularity of motor control is less than compelling.
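The distinction at issue, between a system that merely takes a central state as an input and a system that consults central information mid-computation, can be illustrated with a toy sketch (all names, data, and "schemas" here are our own illustrative inventions, not a proposed implementation of the motor system):

```python
# Toy illustration of informational encapsulation (our own construction).
# A proprietary database of motor schemas, and a stand-in for central cognition.
PROPRIETARY_DB = {"reach": "extend-arm-schema", "grasp": "close-hand-schema"}
CENTRAL_COGNITION = {"belief": "the cup is fragile"}

def encapsulated_motor_system(intention):
    """Takes an intention (a central state) as input, but while processing
    consults only its own proprietary database: encapsulated in the sense at
    issue, even though a central state figures among its inputs."""
    return PROPRIETARY_DB.get(intention, "no-schema")

def penetrable_motor_system(intention):
    """Consults central cognition in the course of processing: not
    encapsulated, since its computation is directly sensitive to a central
    state that is neither an input nor part of its proprietary database."""
    schema = PROPRIETARY_DB.get(intention, "no-schema")
    if CENTRAL_COGNITION["belief"] == "the cup is fragile":  # mid-process access
        schema += "+gentle"
    return schema

print(encapsulated_motor_system("grasp"))  # close-hand-schema
print(penetrable_motor_system("grasp"))    # close-hand-schema+gentle
```

The first function fits the requirement stated above: when computing its input–output function it draws only on its inputs and its proprietary database. The second fails it, and would count as cognitively penetrated in the non-trivial sense, regardless of what its inputs are.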

5. Modularity and philosophy

Interest in modularity is not confined to cognitive science and the philosophy of mind; it extends well into a number of allied fields. In epistemology, modularity has been invoked to defend the legitimacy of a theory-neutral type of observation, and hence the possibility of some degree of consensus among scientists with divergent theoretical commitments (Fodor, 1984). The ensuing debate on this issue (Churchland, 1988; Fodor, 1988; McCauley & Henrich, 2006) holds lasting significance for the general philosophy of science, particularly for controversies regarding the status of scientific realism. Relatedly, evidence of the cognitive penetrability of perception has given rise to worries about the justification of perceptual beliefs (Siegel, 2012; Stokes, 2012). In ethics, evidence of this sort has been used to cast doubt on ethical intuitionism as an account of moral epistemology (Cowan, 2014). In philosophy of language, modularity has figured in theorizing about linguistic communication, for example, in relevance theorists’ suggestion that speech interpretation, pragmatic warts and all, is a modular process (Sperber & Wilson, 2002). It has also been used to demarcate the boundary between semantics and pragmatics, and to defend a strikingly austere version of semantic minimalism (Borg, 2004). Though the success of these deployments of modularity theory is subject to dispute (e.g., see Robbins, 2007, for doubts about the modularity of semantics), their existence testifies to the relevance of the concept of modularity to philosophical inquiry in a variety of domains.

Bibliography

  • Anderson, M. L., 2010. Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33: 245–313.
  • –––, 2014. After phrenology: Neural reuse and the interactive brain, Cambridge, MA: MIT Press.
  • Antony, L. M., 2003. Rabbit-pots and supernovas: On the relevance of psychological data to linguistic theory. In A. Barber (ed.), Epistemology of Language, Oxford, UK: Oxford University Press, pp. 47–68.
  • Arbib, M., 1987. Modularity and interaction of brain regions underlying visuomotor coordination. In J. L. Garfield (ed.), Modularity in Knowledge Representation and Natural-Language Understanding, Cambridge, MA: MIT Press, pp. 333–363.
  • Ariew, A., 1999. Innateness is canalization: In defense of a developmental account of innateness. In V. G. Hardcastle (ed.), Where Biology Meets Psychology, Cambridge, MA: MIT Press, pp. 117–138.
  • Balcetis, E. and Dunning, D., 2006. See what you want to see: Motivational influences on visual perception. Journal of Personality and Social Psychology, 91: 612–625.
  • –––, 2010. Wishful seeing: More desired objects are seen as closer. Psychological Science, 21: 147–152.
  • Bargh, J. A. and Chartrand, T. L., 1999. The unbearable automaticity of being. American Psychologist, 54: 462–479.
  • Barrett, H. C., 2005. Enzymatic computation and cognitive modularity. Mind & Language, 20: 259–287.
  • Barrett, H. C. and Kurzban, R., 2006. Modularity in cognition: Framing the debate. Psychological Review, 113: 628–647.
  • Beck, J., 2018. Marking the perception–cognition boundary: The criterion of stimulus-dependence. Australasian Journal of Philosophy, 96: 319–334.
  • Beni, M. D., 2022. A tale of two architectures. Consciousness and Cognition, 98 (C): 103257.
  • Block, N., 2023. The Border Between Seeing and Thinking, New York, NY: Oxford University Press.
  • Bolhuis, J. J. and Macphail, E. M., 2001. A critique of the neuroecology of learning and memory. Trends in Cognitive Sciences, 5: 426–433.
  • Borg, E., 2004. Minimal Semantics, Oxford, UK: Oxford University Press.
  • Boyer, P. and Barrett, H. C., 2015. Intuitive ontologies and domain specificity. In D. M. Buss (ed.), Handbook of Evolutionary Psychology, Hoboken, NJ: Wiley, pp. 1–19.
  • Brooke-Wilson, T., 2023. How is perception tractable? Philosophical Review, 132: 239–292.
  • Bruner, J. and Goodman, C. C., 1947. Value and need as organizing factors in perception. Journal of Abnormal and Social Psychology, 42: 33–44.
  • Buller, D., 2005. Adapting Minds, Cambridge, MA: MIT Press.
  • Buller, D. and Hardcastle, V. G., 2000. Evolutionary psychology, meet developmental neurobiology: Against promiscuous modularity. Brain and Mind, 1: 302–325.
  • Burnston, D. C., 2017. Cognitive penetration and the cognition–perception interface. Synthese, 194: 3645–3668.
  • –––, 2021. Bayes, predictive processing, and the cognitive architecture of motor control. Consciousness and Cognition, 96 (C): 103218.
  • Carruthers, P., 2002. The cognitive functions of language. Behavioral and Brain Sciences, 25: 657–725.
  • –––, 2006. The Architecture of the Mind, Oxford, UK: Oxford University Press.
  • Casser, L. and Clarke, S., 2023. Is pain modular? Mind & Language, 38: 828–846.
  • Cermeño-Aínsa, S., 2021. Predictive coding and the strong thesis of cognitive penetrability. Theoria, 36: 341–360.
  • Chiappe, D. and Gardner, R., 2012. The modularity debate in evolutionary psychology. Theory & Psychology, 22: 669–692.
  • Churchland, P., 1988. Perceptual plasticity and theoretical neutrality: A reply to Jerry Fodor. Philosophy of Science, 55: 167–187.
  • Clark, A., 2013. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36: 181–204.
  • Clarke, S., 2021. Cognitive penetration and informational encapsulation: Have we been failing the module? Philosophical Studies, 178: 2599–2620.
  • Clarke, S. and Beck, J., 2023. Border disputes: Recent debates along the perception–cognition border. Philosophy Compass, 18: e12936.
  • Coltheart, M., 1999. Modularity and cognition. Trends in Cognitive Sciences, 3: 115–120.
  • Cosmides, L. and Tooby, J., 1992. Cognitive adaptations for social exchange. In J. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind, Oxford, UK: Oxford University Press, pp. 163–228.
  • –––, 1994. Origins of domain specificity: The evolution of functional organization. In L. A. Hirschfeld and S. A. Gelman (eds.), Mapping the Mind, Cambridge, UK: Cambridge University Press, pp. 85–116.
  • Cowan, R., 2014. Cognitive penetrability and ethical perception. Review of Philosophy and Psychology, 6: 665–682.
  • Cowie, F., 1999. What’s Within? Nativism Reconsidered, Oxford, UK: Oxford University Press.
  • Currie, G. and Sterelny, K., 2000. How to think about the modularity of mind-reading. Philosophical Quarterly, 50: 145–160.
  • Dennett, D., 1987. The Intentional Stance, Cambridge, MA: MIT Press.
  • Drayson, Z., 2017. Modularity and the predictive mind. In T. Metzinger and W. Wiese (eds.), Philosophy and Predictive Processing, Frankfurt am Main, Germany: MIND Group.
  • Egeland, J., 2024. Making sense of the modularity debate. New Ideas in Psychology, 75: 101108.
  • Firestone, C. and Scholl, B. J., 2016. Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39: 1–27.
  • Fodor, J. A., 1983. The Modularity of Mind, Cambridge, MA: MIT Press.
  • –––, 1984. Observation reconsidered. Philosophy of Science, 51: 23–43.
  • –––, 1988. A reply to Churchland’s “Perceptual plasticity and theoretical neutrality.” Philosophy of Science, 55: 188–198.
  • –––, 2000. The Mind Doesn’t Work That Way, Cambridge, MA: MIT Press.
  • Frith, U., 2003. Autism: Explaining the enigma, 2nd edition, Malden, MA: Wiley-Blackwell.
  • Gauthier, I., Skudlarski, P., Gore, J. C., and Anderson, A. W., 2000. Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3: 191–197.
  • Gligorov, N., 2017. Don’t worry, this will only hurt a bit: The role of expectation and attention in pain intensity. The Monist, 100: 501–513.
  • Green, E. J., 2020. The perception–cognition border: A case for architectural division. Philosophical Review, 129: 323–393.
  • Hansen, T., Olkkonen, M., Walter, S., and Gegenfurtner, K. R., 2006. Memory modulates color appearance. Nature Neuroscience, 9: 1367–1368.
  • Hermer, L. and Spelke, E. S., 1996. Modularity and development: The case of spatial reorientation. Cognition, 61: 195–232.
  • Hohwy, J., 2013. The Predictive Mind, Oxford, UK: Oxford University Press.
  • Hurley, S., 2008. The shared circuits model (SCM): How control, mirroring, and simulation can enable imitation, deliberation, and mindreading. Behavioral and Brain Sciences, 31: 1–22.
  • Kurzban, R., Tooby, J., and Cosmides, L., 2001. Can race be erased? Coalitional computation and social categorization. Proceedings of the National Academy of Sciences, 98: 15387–15392.
  • Levin, D. and Banaji, M., 2006. Distortions in the perceived lightness of faces: The role of race categories. Journal of Experimental Psychology: General, 135: 501–512.
  • Lupyan, G., 2015. Cognitive penetrability of perception in the age of prediction: Predictive systems are penetrable systems. Review of Philosophy and Psychology, 6: 547–569.
  • Machery, E., 2015. Cognitive penetrability: A no-progress report. In J. Zeimbekis and A. Raftopoulos (eds.), The Cognitive Penetrability of Perception: New Philosophical Perspectives, Oxford, UK: Oxford University Press.
  • Macpherson, F., 2015. Cognitive penetration and predictive coding: A commentary on Lupyan. Review of Philosophy and Psychology, 6: 571–584.
  • Machery, E. and Barrett, H. C., 2006. Debunking Adapting Minds. Philosophy of Science, 73: 232–246.
  • Mandelbaum, E., 2017. Seeing and conceptualizing: Modularity and the shallow contents of perception. Philosophy and Phenomenological Research, 97: 267–283.
  • Margolis, E. and Laurence, S., 2023. Making sense of domain specificity. Cognition, 240: 105583.
  • Marr, D., 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, Cambridge, MA: MIT Press.
  • Marslen-Wilson, W. and Tyler, L. K., 1987. Against modularity. In J. L. Garfield (ed.), Modularity in Knowledge Representation and Natural-Language Understanding, Cambridge, MA: MIT Press.
  • McCauley, R. N. and Henrich, J., 2006. Susceptibility to the Müller-Lyer illusion, theory-neutral observation, and the diachronic penetrability of the visual input system. Philosophical Psychology, 19: 79–101.
  • McGurk, H. and MacDonald, J., 1976. Hearing lips and seeing voices. Nature, 264: 746–748.
  • Mylopoulos, M., 2021. The modularity of the motor system. Philosophical Explorations, 24: 376–393.
  • Phillips, B., 2017. The shifting border between perception and cognition. Noûs, 53: 316–346.
  • Pietraszewski, D. and Wertz, A. E., 2022. Why evolutionary psychology should abandon modularity. Perspectives on Psychological Science, 17: 465–490.
  • Pinker, S., 1997. How the Mind Works, New York, NY: W. W. Norton & Company.
  • Prinz, J. J., 2006. Is the mind really modular? In R. Stainton (ed.), Contemporary Debates in Cognitive Science, Oxford, UK: Blackwell, pp. 22–36.
  • Pylyshyn, Z., 1984. Computation and Cognition, Cambridge, MA: MIT Press.
  • –––, 1999. Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22: 341–423.
  • Quilty-Dunn, J., 2020. Attention and encapsulation. Mind & Language, 35: 335–349.
  • Rabaglia, C. D., Marcus, G. F., and Lane, S. P., 2011. What can individual differences tell us about the specialization of function? Cognitive Neuropsychology, 28: 288–303.
  • Ramus, F., 2006. Genes, brain, and cognition: A roadmap for the cognitive scientist. Cognition, 101: 247–269.
  • Reader, S. M., Hager, Y., and Laland, K. N., 2011. The evolution of primate general and cultural intelligence. Philosophical Transactions of the Royal Society B: Biological Sciences, 366: 1017–1027.
  • Rice, C., 2011. Massive modularity, content integration, and language. Philosophy of Science, 78: 800–812.
  • Robbins, P., 2002. What domain integration could not be. Behavioral and Brain Sciences, 25: 696–697.
  • –––, 2007. Minimalism and modularity. In G. Preyer and G. Peter (eds.), Context-Sensitivity and Semantic Minimalism, Oxford, UK: Oxford University Press, pp. 303–319.
  • –––, 2013. Modularity and mental architecture. WIREs Cognitive Science, 4: 641–649.
  • Rosch, E., Mervis, C., Gray, W., Johnson, D., and Boyes-Braem, P., 1976. Basic objects in natural categories. Cognitive Psychology, 8: 382–439.
  • Samuels, R., 2000. Massively modular minds: Evolutionary psychology and cognitive architecture. In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind, Cambridge, UK: Cambridge University Press, pp. 13–46.
  • –––, 2005. The complexity of cognition: Tractability arguments for massive modularity. In P. Carruthers, S. Laurence, and S. Stich (eds.), The Innate Mind: Structure and Contents, Oxford, UK: Oxford University Press, pp. 107–121.
  • Scholl, B. J. and Leslie, A. M., 1999. Modularity, development and ‘theory of mind’. Mind & Language, 14: 131–153.
  • Segal, G., 1996. The modularity of theory of mind. In P. Carruthers and P. K. Smith (eds.), Theories of Theories of Mind, Cambridge, UK: Cambridge University Press, pp. 141–157.
  • Segall, M., Campbell, D., and Herskovits, M. J., 1966. The Influence of Culture on Visual Perception, New York: Bobbs-Merrill.
  • Shams, L., Kamitani, Y., and Shimojo, S., 2000. Illusions: What you see is what you hear. Nature, 408: 788.
  • Shea, N., 2014. Distinguishing top-down from bottom-up effects. In D. Stokes, M. Matthen, and S. Biggs (eds.), Perception and Its Modalities, New York, NY: Oxford University Press, pp. 73–91.
  • Shevlin, H. and Friesen, P., 2021. Pain, placebo, and cognitive penetration. Mind & Language, 36: 771–797.
  • Siegel, S., 2012. Cognitive penetrability and perceptual justification. Noûs, 46: 201–222.
  • Skrzypulec, B., 2023. Pain: Modularity and cognitive constitution. British Journal for the Philosophy of Science, 727001.
  • Spelke, E., 1994. Initial knowledge: Six suggestions. Cognition, 50: 435–445.
  • Sperber, D., 1994. The modularity of thought and the epidemiology of representations. In L. A. Hirschfeld and S. A. Gelman (eds.), Mapping the Mind, Cambridge, UK: Cambridge University Press, pp. 39–67.
  • –––, 2002. In defense of massive modularity. In E. Dupoux (ed.), Language, Brain, and Cognitive Development, Cambridge, MA: MIT Press, pp. 47–57.
  • Sperber, D. and Wilson, D., 2002. Pragmatics, modularity and mind-reading. Mind & Language, 17: 3–23.
  • Stokes, D., 2012. Perceiving and desiring: A new look at the cognitive penetrability of experience. Philosophical Studies, 158: 479–492.
  • –––, 2013. Cognitive penetrability of perception. Philosophy Compass, 8: 646–663.
  • –––, 2021. Thinking and Perceiving: On the Malleability of the Mind, London, UK: Routledge.
  • Stokes, D. and Bergeron, V., 2015. Modular architectures and informational encapsulation: A dilemma. European Journal for the Philosophy of Science, 5: 315–338.
  • Stone, V. E., Cosmides, L., Tooby, J., Kroll, N., and Knight, R. T., 2002. Selective impairment of reasoning about social exchange in a patient with bilateral limbic system damage. Proceedings of the National Academy of Sciences, 99: 11531–11536.
  • Stromswold, K., 1999. Cognitive and neural aspects of language acquisition. In E. Lepore and Z. Pylyshyn (eds.), What Is Cognitive Science?, Oxford, UK: Blackwell, pp. 356–400.
  • Sugiyama, L. S., Tooby, J., and Cosmides, L., 2002. Cross-cultural evidence of cognitive adaptations for social exchange among the Shiwiar of Ecuadorian Amazonia. Proceedings of the National Academy of Sciences, 99: 11537–11542.
  • Tettamanti, M. and Weniger, D., 2006. Broca’s area: A supramodal hierarchical processor? Cortex, 42: 491–494.
  • Vance, J. and Stokes, D., 2017. Noise, uncertainty, and interest: Predictive coding and cognitive penetration. Consciousness and Cognition, 47: 86–98.
  • Vetter, P. and Newen, A., 2014. Varieties of cognitive penetration in visual perception. Consciousness and Cognition, 27: 62–75.
  • Villena, D., 2023. Massive modularity: An ontological hypothesis or an adaptationist discovery heuristic? International Studies in the Philosophy of Science, 36: 317–334.
  • Warren, R. M., 1970. Perceptual restoration of missing speech sounds. Science, 167: 392–393.
  • Warren, R. M. and Warren, R. P., 1970. Auditory illusions and confusions. Scientific American, 223: 30–36.
  • Watzl, S., Sundberg, K., and Nes, A., 2021. The perception/cognition distinction. Inquiry, 66: 165–195.
  • Wilson, R. A., 2008. The drink you’re having when you’re not having a drink. Mind & Language, 23: 273–283.
  • Witt, J. K., Linkenauger, S. A., Bakdash, J. Z., and Proffitt, D. R., 2008. Putting to a bigger hole: Golf performance relates to perceived size. Psychonomic Bulletin and Review, 15: 581–585.
  • Witt, J. K., Proffitt, D. R., and Epstein, W., 2004. Perceiving distances: A role of effort and intent. Perception, 33: 577–590.
  • Woods, A. J., Philbeck, J. W., and Danoff, J. V., 2009. The various perceptions of distance: An alternative view of how effort affects distance judgments. Journal of Experimental Psychology: Human Perception and Performance, 35: 1104–1117.
  • Woodward, J. F. and Cowie, F., 2004. The mind is not (just) a system of modules shaped (just) by natural selection. In C. Hitchcock (ed.), Contemporary Debates in Philosophy of Science, Malden, MA: Blackwell, pp. 312–334.
  • Zerilli, J., 2017. Against the “system” module. Philosophical Psychology, 30: 235–250.
  • –––, 2020. The Adaptable Mind: What Neuroplasticity and Neural Reuse Tell Us About Language and Cognition, New York, NY: Oxford University Press.


Copyright © 2025 by
Philip Robbins<robbinsp@missouri.edu>
Zoe Drayson<zdrayson@ucdavis.edu>

The Stanford Encyclopedia of Philosophy iscopyright © 2025 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

