Stanford Encyclopedia of Philosophy

The Philosophy of Neuroscience

First published Mon Jun 7, 1999; substantive revision Tue Aug 6, 2019

Over the past four decades, philosophy of science has grown increasingly “local”. Concerns have switched from general features of scientific practice to concepts, issues, and puzzles specific to particular disciplines. Philosophy of neuroscience is one natural result. This emerging area was also spurred by remarkable growth in the neurosciences themselves. Cognitive and computational neuroscience continues to encroach directly on issues traditionally addressed within the humanities, including the nature of consciousness, action, knowledge, and normativity. Cellular, molecular, and behavioral neuroscience using animal models increasingly encroaches on cognitive neuroscience’s domain. Empirical discoveries about brain structure and function suggest ways that “naturalistic” programs might develop in detail, beyond the abstract philosophical considerations in their favor.

The literature has distinguished “philosophy of neuroscience” from “neurophilosophy” for two decades. The former concerns foundational issues within the neurosciences. The latter concerns application of neuroscientific concepts to traditional philosophical questions. Exploring various concepts of representation employed in neuroscientific theories is an example of the former. Examining implications of neurological syndromes for the concept of a unified self is an example of the latter. In this entry, we will develop this distinction further and discuss examples of both. Just as has happened in the field’s history, work in both of these areas is scattered throughout almost all sections below. Throughout we will try to specify which area landmark work falls into, when this location isn’t obvious.

One exciting aspect about working in philosophy of neuroscience or neurophilosophy is the continual element of surprise. Both fields depend squarely on developments in neuroscience, and one simply has no inkling what’s coming down the pike in that incredibly fast-moving science. Last year’s speculative fiction is this year’s scientific reality. But this feature makes an encyclopedia entry updated once every half-decade difficult to manage. The scientific details philosophers were reflecting on at past updates can now read woefully dated. Yet one also wants to capture some history of the ongoing fields. Our solution to this dilemma has been to keep previous discussions, to reflect that history, but to add more recent scientific and philosophical updates, not only to sections of this entry added at later times, but also peppered through the earlier discussions. It’s not always a perfect solution, but it does preserve something of the history of the philosophy of neuroscience and neurophilosophy against the continual advances in the sciences these philosophical fields depend upon.

1. Before and After Neurophilosophy

Historically, neuroscientific discoveries exerted little influence on the details of materialist philosophies of mind. The “neuroscientific milieu” of the past half-century has made it harder for philosophers to adopt substantive dualisms about mind. But even the “type-type” or “central state” identity theories that rose to brief prominence in the late 1950s (Place 1956; Smart 1959) drew upon few actual details of the emerging neurosciences. Recall the favorite early example of a psychoneural identity claim: “pain is identical to C-fiber firing”. The “C-fibers” turned out to be related to only a single aspect of pain transmission (Hardcastle 1997). Early identity theorists did not emphasize psychoneural identity hypotheses. Their “neuro” terms were admittedly placeholders for concepts from future neuroscience. Their arguments and motivations were philosophical, even if the ultimate justification of the program was held to be empirical.

The apology offered by early identity theorists for ignoring scientific details was that the neuroscience at that time was too nascent to provide any plausible identities. But potential identities were afoot. David Hubel and Torsten Wiesel’s (1962) electrophysiological demonstrations of the receptive field properties of visual neurons had been reported with great fanfare. Using their techniques, neurophysiologists began discovering neurons throughout visual cortex responsive to increasingly abstract features of visual stimuli: from edges to motion direction to colors to properties of faces and hands. More notably, Donald Hebb had published The Organization of Behavior (1949) more than a decade earlier. He had offered detailed explanations of psychological phenomena in terms of neural mechanisms and anatomical circuits. His psychological explananda included features of perception, learning, memory, and even emotional disorders. He offered these explanations as potential identities. (See the Introduction to his 1949.) One philosopher who did take note of some available neuroscientific detail at the time was Barbara Von Eckardt Klein (Von Eckardt Klein 1975). She discussed the identity theory with respect to sensations of touch and pressure, and incorporated then-current hypotheses about neural coding of sensation modality, intensity, duration, and location as theorized by Mountcastle, Libet, and Jasper. Yet she was a glaring exception. By and large, available neuroscience at the time was ignored by both philosophical friends and foes of early identity theories.

Philosophical indifference to neuroscientific detail became “principled” with the rise and prominence of functionalism in the 1970s. The functionalists’ favorite argument was based on multiple realizability: a given mental state or event can be realized in a wide variety of physical types (Putnam 1967; Fodor 1974). Consequently, a detailed understanding of one type of realizing physical system (e.g., brains) will not shed light on the fundamental nature of mind. Psychology is thus autonomous from any science of one of its possible physical realizers (see the entry on multiple realizability in this Encyclopedia). Instead of neuroscience, scientifically-minded philosophers influenced by functionalism sought evidence and inspiration from cognitive psychology and artificial intelligence. These disciplines abstract away from underlying physical mechanisms and emphasize the “information-bearing” properties and capacities of representations (Haugeland 1985). At this same time, however, neuroscience was delving directly into cognition, especially learning and memory. For example, Eric Kandel (1976) proposed presynaptic mechanisms governing transmitter release rate as a cell-biological explanation of simple forms of associative learning. With Robert Hawkins (Hawkins and Kandel 1984) he demonstrated how cognitivist aspects of associative learning (e.g., blocking, second-order conditioning, overshadowing) could be explained cell-biologically by sequences and combinations of these basic forms implemented in higher neural anatomies. Working on the post-synaptic side, neuroscientists began unraveling the cellular mechanisms of long-term potentiation (LTP; Bliss and Lomo 1973). Physiological psychologists quickly noted its explanatory potential for various forms of learning and memory.[1] Yet few “materialist” philosophers paid any attention. Why should they? Most were convinced functionalists. They believed that the “implementation level” details might be important to the clinician, but were irrelevant to the theorist of mind.

A major turning point in philosophers’ interest in neuroscience came with the publication of Patricia Churchland’s Neurophilosophy (1986). The Churchlands (Patricia and Paul) were already notorious for advocating eliminative materialism (see the next section). In her (1986) book, Churchland distilled eliminativist arguments of the past decade, unified the pieces of the philosophy of science underlying them, and sandwiched the philosophy between a five-chapter introduction to neuroscience and a 70-page chapter on three then-current theories of brain function. She was unapologetic about her intent. She was introducing philosophy of science to neuroscientists and neuroscience to philosophers. Nothing could be more obvious, she insisted, than the relevance of empirical facts about how the brain works to concerns in the philosophy of mind. Her term for this interdisciplinary method was “co-evolution” (borrowed from biology). This method seeks resources and ideas from anywhere on the theory hierarchy above or below the question at issue. Standing on the shoulders of philosophers like Quine and Sellars, Churchland insisted that specifying some point where neuroscience ends and philosophy of science begins is hopeless because the boundaries are poorly defined. Neurophilosophers would pick and choose resources from both disciplines as they saw fit.

Three themes predominated in Churchland’s philosophical discussion: developing an alternative to the logical empiricist theory of intertheoretic reduction; responding to property-dualistic arguments based on subjectivity and sensory qualia; and responding to anti-reductionist multiple realizability arguments. These projects remained central to neurophilosophy for more than a decade after Churchland’s book appeared. John Bickle (1998) extended the principal insight of Clifford Hooker’s (1981a,b,c) post-empiricist theory of intertheoretic reduction. He quantified key notions using a model-theoretic account of theory structure adapted from the structuralist program in philosophy of science (Balzer, Moulines, and Sneed 1987). He also made explicit a form of argument to draw ontological conclusions (cross-theoretic identities, revisions, or eliminations) from the nature of the intertheoretic reduction relations obtaining in specific cases. For example, it is routinely concluded that visible light, a theoretical posit of optics, is electromagnetic radiation within specified wavelengths, a theoretical posit of electromagnetism: in this case, a cross-theoretic ontological identity. It is also routine to conclude that phlogiston does not exist: an elimination of a kind from our scientific ontology. Bickle explicated the nature of the reduction relation in a specific case using a semi-formal account of “intertheoretic approximation” inspired by structuralist results.

Paul Churchland (1996) carried on the attack on property-dualistic arguments for the irreducibility of conscious experience and sensory qualia. He argued that acquiring some knowledge of existing sensory neuroscience increases one’s ability to “imagine” or “conceive of” a comprehensive neurobiological explanation of consciousness. He defended this conclusion using a characteristically imaginative thought-experiment based on the history of optics and electromagnetism.

Finally, criticisms of the multiple realizability argument flourish—and are challenged—to the present day. Although the multiple realizability argument remains influential among nonreductive physicalists, it no longer commands the near-universal acceptance it once did. Replies to the multiple realizability argument based on neuroscientific details have appeared. For example, William Bechtel and Jennifer Mundale (1999) argue that neuroscientists use psychological criteria in brain mapping studies. This fact undercuts the likelihood that psychological kinds are multiply realized (for a review of recent developments see the entry on multiple realizability in this Encyclopedia).

2. Eliminative Materialism and Philosophy Neuralized

Eliminative materialism (EM), in the form advocated most aggressively by Paul and Patricia Churchland, is the conjunction of two claims. First, our common sense “belief-desire” conception of mental events and processes, our “folk psychology”, is a false and misleading account of the causes of human behavior. Second, like other false conceptual frameworks from both folk theory and the history of science, it will be replaced by, rather than smoothly reduced or incorporated into, a future neuroscience. The Churchlands characterized folk psychology as the collection of common homilies invoked (mostly implicitly) to explain human behavior causally. You ask why Marica is not accompanying me this evening. I reply that our grandson needed sitting. You nod sympathetically. You understand my explanation because you share with me a generalization that relates beliefs about taking care of grandchildren, desires to help daughters and to spend time with grandchildren compared to enjoying a night out, and so on. This is just one of a huge collection of homilies about the causes of human behavior that EM claims to be flawed beyond potential revision. Although this example involves only beliefs and desires, folk psychology contains an extensive repertoire of propositional attitudes in its explanatory nexus: hopes, intentions, fears, imaginings, and more. EMists predict that a future, genuinely scientific psychology or neuroscience will eventually eschew all of these, and replace them with incommensurable states and dynamics of neuro-cognition.

EM is physicalist in one traditional philosophical sense. It postulates that some future brain science will be ultimately the correct account of (human) behavior. It is eliminative in predicting the future rejection of folk psychological kinds from our post-neuroscientific ontology. EM proponents often employ scientific analogies (Feyerabend 1963; Paul Churchland 1981). Oxidative reactions as characterized within elemental chemistry bear no resemblance to phlogiston release. Even the “directions” of the two processes differ: oxygen is gained when an object burns (or rusts), while phlogiston was said to be lost. The result of this theoretical change was the elimination of phlogiston from our scientific ontology. There is no such thing. For the same reasons, according to EM, continuing development in neuroscience will reveal that there are no such things as beliefs, desires, and the rest of the propositional attitudes as characterized by common sense.

Here we focus only on the way that neuroscientific results have shaped the arguments for EM. Surprisingly, only one argument has been strongly influenced. (Most arguments for EM stress failures of folk psychology as an explanatory theory of behavior.) This argument is based on a development in cognitive and computational neuroscience that might provide a genuine alternative to the representations and computations implicit in folk psychological generalizations. Many eliminative materialists assume that folk psychology is committed to propositional representations and computations over their contents that mimic logical inferences (Paul Churchland 1981; Stich 1983; Patricia Churchland 1986).[2] Even though discovering an alternative to this view has been an eliminativist goal for some time, some eliminativists hold that neuroscience only began delivering this alternative over the past thirty years. Points in and trajectories through vector spaces, as an interpretation of synaptic events and neural activity patterns in biological and artificial neural networks, are the key features of this alternative. The differences between these notions of cognitive representation and transformations, and those of the propositional attitudes of folk psychology, provide the basis for one argument for EM (Paul Churchland 1987). However, this argument will be opaque to those with no background in cognitive and computational neuroscience, so we present a few details. With these details in place, we will return to this argument for EM (five paragraphs below).

At one level of analysis, the basic computational element of a neural network, biological or artificial, is the nerve cell, or neuron. Mathematically, neurons can be represented as simple computational devices, transforming inputs into output. Both inputs and outputs reflect biological variables. For our discussion, we assume that neuronal inputs are frequencies of action potentials (neuronal “spikes”) in the axons whose terminal branches synapse onto the neuron in question, while neuronal output is the frequency of action potentials generated in its axon after processing the inputs. A neuron thereby computes its total input, usually treated mathematically as the sum of the products of the signal strength along each input line times the synaptic weight on that line. It then computes a new activation state based on its total input and current activation state, and a new output state based on its new activation value. The neuron’s output state is transmitted as a signal strength to whatever neurons its axon synapses on. The output state reflects systematically the neuron’s new activation state.[3]
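The weighted-sum picture just described can be sketched in a few lines of code. This is a minimal illustration under assumed conventions, not a claim about any specific neuron: inputs and weights are plain lists, and a sigmoid serves as the activation function (one common modeling choice).

```python
import math

def neuron_output(inputs, weights):
    """A model neuron: sum each input signal times its synaptic weight,
    then squash the total through an activation function."""
    total_input = sum(x * w for x, w in zip(inputs, weights))
    # Sigmoid activation (an assumed choice): maps the summed input
    # to a bounded value standing in for a firing rate.
    return 1.0 / (1.0 + math.exp(-total_input))

# Example: three input lines with different synaptic weights.
rates = [0.9, 0.2, 0.5]      # presynaptic spike frequencies (normalized)
weights = [0.8, -0.4, 0.3]   # synaptic weights (negative = inhibitory)
print(neuron_output(rates, weights))
```

The total input here is 0.9·0.8 − 0.2·0.4 + 0.5·0.3 = 0.79, so the printed output is the sigmoid of 0.79, roughly 0.69.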

Analyzed in this fashion, both biological and artificial neural networks are interpreted naturally as vector-to-vector transformers. The input vector consists of values reflecting activity patterns in axons synapsing on the network’s neurons from outside (e.g., from sensory transducers or other neural networks). The output vector consists of values reflecting the activity patterns generated in the network’s neurons that project beyond the net (e.g., to motor effectors or other neural networks). Given that each neuron’s activity depends partly upon its total input, and its total input depends partly on synaptic weights (e.g., presynaptic neurotransmitter release rate, number and efficacy of postsynaptic receptors, availability of enzymes in the synaptic cleft), the capacity of biological networks to change their synaptic weights makes them plastic vector-to-vector transformers. In principle, a biological network with plastic synapses can come to implement any vector-to-vector transformation that its composition permits (number of input units, output units, processing layers, recurrency, cross-connections, etc.) (discussed in Paul Churchland 1987, with references to the primary scientific literature).
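A single layer of such a network can be sketched as a function from an input vector to an output vector, with the weight matrix fixing the transformation. Again a minimal sketch under assumed conventions: vectors and the weight matrix are plain lists, and a sigmoid activation is assumed; "plasticity" is simply the fact that a different weight matrix yields a different transformer.

```python
import math

def transform(input_vector, weight_matrix):
    """One layer of a network as a vector-to-vector transformer:
    each output unit takes a weighted sum of the whole input vector
    and passes it through a sigmoid activation."""
    output = []
    for row in weight_matrix:          # one row of weights per output unit
        total = sum(x * w for x, w in zip(input_vector, row))
        output.append(1.0 / (1.0 + math.exp(-total)))
    return output

# A 3-input, 2-output network. Changing any entry of the weight
# matrix ("plasticity") implements a different transformation.
weights = [[0.5, -0.2, 0.1],
           [0.3, 0.8, -0.6]]
print(transform([1.0, 0.0, 0.5], weights))
```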

The anatomical organization of the cerebellum provides a clear example of a network amenable to this computational interpretation. Consider Figure 1. The cerebellum is the bulbous convoluted structure dorsal to the brainstem. A variety of studies (behavioral, neuropsychological, single-cell electrophysiological) implicate this structure in motor integration and fine motor coordination. Mossy fibers (axons) from neurons outside the cerebellum synapse on cerebellar granule cells, which in turn project to parallel fibers. Activity patterns across the collection of mossy fibers (frequency of action potentials per time unit in each fiber projecting into the cerebellum) provide values for the input vector. Parallel fibers make multiple synapses on the dendritic trees and cell bodies of cerebellar Purkinje neurons. Each Purkinje neuron “sums” its post-synaptic potentials (PSPs) and emits a train of action potentials down its axon based (partly) on its total input and previous activation state. Purkinje axons project outside the cerebellum. The network’s output vector is thus the ordered values representing the pattern of activity generated in each Purkinje axon. Changes to the efficacy of individual synapses between the parallel fibers and the Purkinje neurons alter the resulting PSPs in Purkinje axons, generating different axonal spiking frequencies. Computationally, this amounts to a different output vector to the same input activity pattern—plasticity.[4]

This interpretation puts the useful mathematical resources of dynamical systems into the hands of computational neuroscientists. Vector spaces are an example. Learning can then be characterized fruitfully in terms of changes in synaptic weights in the network and subsequent reduction of error in network output. (This approach to learning goes back to Hebb 1949, although the vector-space interpretation was not part of Hebb’s account.) A useful representation of this account uses a synaptic weight-error space. One dimension represents the global error in the network’s output to a given task, and all other dimensions represent the weight values of individual synapses in the network. Consider Figure 2. Points in this multi-dimensional state space represent the global performance error correlated with each possible collection of synaptic weights in the network. As the weights change with each performance, in accordance with a biologically-inspired learning algorithm, the global error of network performance continually decreases. The changing synaptic weights across the network with each training episode reduce the total error of the network’s output vector, compared to the desired output vector for the input vector. Learning is represented as synaptic weight changes correlated with a descent along the error dimension in the space (Churchland and Sejnowski 1992). Representations (concepts) can be portrayed as partitions in multi-dimensional vector spaces. One example is a neuron activation vector space. See Figure 3. A graph of such a space contains one dimension for the activation value of each neuron in the network (or some specific subset of the network’s neurons, such as those in a specific layer). A point in this space represents one possible pattern of activity in all neurons in the network. Activity patterns generated by input vectors that the network has learned to group together will cluster around a (hyper-) point or subvolume in the activity vector space. Any input pattern sufficiently similar to this group will produce an activity pattern lying in geometrical proximity to this point or subvolume. Paul Churchland (1989) argued that this interpretation of network activity provided a quantitative, neurally-inspired basis for prototype theories of concepts developed in late-twentieth century cognitive psychology.
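Descent along the error dimension can be made concrete with a toy example. The sketch below assumes a single linear model unit trained by a simple delta rule, standing in for the "biologically-inspired learning algorithm" mentioned above; the task, weights, and learning rate are all invented for illustration. Each weight update moves the network's point in weight-error space downhill along the error dimension.

```python
def train(samples, weights, rate=0.5, epochs=200):
    """Delta-rule training for one linear unit: nudge each synaptic
    weight in the direction that reduces the output error."""
    for _ in range(epochs):
        for inputs, target in samples:
            out = sum(x * w for x, w in zip(inputs, weights))
            error = target - out
            weights = [w + rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

def global_error(samples, weights):
    """Total squared error: the 'height' of the current point
    in synaptic weight-error space."""
    return sum((t - sum(x * w for x, w in zip(inp, weights))) ** 2
               for inp, t in samples)

# Toy task: learn to output the first input and ignore the second.
samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]
w0 = [0.0, 0.0]
w1 = train(samples, w0)
print(global_error(samples, w0), "->", global_error(samples, w1))
```

Running this shows the global error descending from 2.0 toward zero as the weights converge near [1, 0].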

Using this theoretical development, and in the realm of neurophilosophy, Paul Churchland (1987, 1989) offered a novel, neuroscientifically-inspired argument for EM. According to the interpretation of neural networks just sketched, activity vectors are the central kind of representations, and vector-to-vector transformations are the central kind of computations, in the brain. This contrasts sharply with the propositional representations and logical/semantic computations postulated by folk psychology. Vectorial content, an ordered sequence of real numbers, is unfamiliar and alien to common sense. This cross-theoretic conceptual difference is at least as great as that between oxidative and phlogiston concepts, or kinetic-corpuscular and caloric fluid heat concepts. Phlogiston and caloric fluid are two “parade” examples of kinds eliminated from our scientific ontology due to the nature of the intertheoretic relation obtaining between the theories with which they are affiliated and the theories that replaced them. The structural and dynamic differences between the folk psychological and then-emerging cognitive neuroscientific kinds suggested that the theories affiliated with the latter will likewise replace the theory affiliated with the former. But this claim was the key premise of the eliminativist argument based on predicted intertheoretic relations. And with the rise of neural networks and parallel distributed processing, intertheoretic contrasts with folk-psychological explanatory kinds were no longer just an eliminativist’s future hope. Computational and cognitive neuroscience was delivering an alternative kinematics for cognition, one that provided no structural analogue for folk psychology’s propositional attitudes or logic-like computations over propositional contents.

Certainly the vector-space alternatives of this interpretation of neural networks are alien to folk psychology. But do they justify EM? Even if the propositional contents of folk-psychological posits find no analogues in one theoretical development in cognitive and computational neuroscience (one that was hot three decades ago), there might be other aspects of cognition that folk psychology gets right. Within the scientific realism that informed early neurophilosophy, concluding that a cross-theoretic identity claim is true (e.g., folk psychological state F is identical to neural state N) or that an eliminativist claim is true (there is no such thing as folk psychological state F) depended on the nature of the intertheoretic reduction obtaining between the theories affiliated with the posits in question (Hooker 1981a,b,c; Churchland 1986; Bickle 1998). But the underlying account of intertheoretic reduction also recognized a spectrum of possible reductions, ranging from relatively “smooth” through “significantly revisionary” to “extremely bumpy”.[5] Might the reduction of folk psychology to a “vectorial” computational neuroscience occupy some middle ground between “smooth” and “bumpy” intertheoretic reduction endpoints, and hence suggest a “revisionary” conclusion? The reduction of classical equilibrium thermodynamics to statistical mechanics provided a potential analogy here. John Bickle (1992, 1998, chapter 6) argued on empirical grounds that such an outcome is likely. He specified conditions on “revisionary” reductions from historical examples and suggested that these conditions are obtaining between folk psychology and cognitive neuroscience as the latter develops. In particular, folk psychology appears to have gotten right the grossly-specified functional profile of many cognitive states, especially those closely related to sensory inputs and behavioral outputs. It also appears to get right the “intentionality” of many cognitive states—the object that the state is of or about—even though cognitive neuroscience eschews its implicit linguistic explanation of this feature. Revisionary physicalism predicts significant conceptual change to folk psychological concepts, but denies total elimination of the caloric fluid-phlogiston variety.

The philosophy of science is another area where vector space interpretations of neural network activity patterns have impacted philosophy. In the Introduction to his (1989) book, A Neurocomputational Perspective, Paul Churchland asserted, distinctively neurophilosophically, that it will soon be impossible to do serious work in the philosophy of science without drawing on empirical work in the brain and behavioral sciences. To justify this claim, in Part II of the book he suggested neurocomputational reformulations of key concepts from the philosophy of science. At the heart of his reformulations is a neurocomputational account of the structure of scientific theories (1989: chapter 9). Problems with the orthodox “sets-of-sentences” view of scientific theories have been well-known since the 1960s. Churchland advocated replacing the orthodox view with one inspired by the “vectorial” interpretation of neural network activity. Representations implemented in neural networks (as sketched above) compose a system that corresponds to important distinctions in the external environment, are not explicitly represented as such within the input corpus, and allow the trained network to respond to inputs in a fashion that continually reduces error. According to Churchland, these are functions of theories. Churchland was bold in his assertion: an individual’s theory-of-the-world is a specific point in that individual’s error-synaptic weight vector space. It is a configuration of synaptic weights that partitions the individual’s activation vector space into subdivisions that reduce future error messages to both familiar and novel inputs. (Consider again Figure 2 and Figure 3.) This reformulation invites an objection, however. Churchland boasts that his theory of theories is preferable to existing alternatives to the orthodox “sets-of-sentences” account—for example, the semantic view (Suppe 1974; van Fraassen 1980)—because his is closer to the “buzzing brains” that use theories. But as Bickle (1993) noted, neurocomputational models based on the mathematical resources described above are a long way into the realm of mathematical abstraction. They are little more than a novel (albeit suggestive) application of the mathematics of quasi-linear dynamical systems to simplified schemata of brain circuitries. Neurophilosophers owe some account of identifications across ontological categories (vector representations and transformation to what?) before the philosophy of science community will treat theories as points in high-dimensional state spaces implemented in biological neural networks. (There is an important methodological assumption lurking in Bickle’s objection, however, which we will discuss toward the end of the next paragraph.)

Churchland’s neurocomputational reformulations of other scientific and epistemological concepts build on this account of theories. He sketches “neuralized” accounts of the theory-ladenness of perception, the nature of concept unification, the virtues of theoretical simplicity, the nature of Kuhnian paradigms, the kinematics of conceptual change, the character of abduction, the nature of explanation, and even moral knowledge and epistemological normativity. Conceptual redeployment, for example, is the activation of an already-existing prototype representation—the centerpoint or region of a partition of a high-dimensional vector space in a trained neural network—by a novel type of input pattern. Obviously, we can’t here do justice to Churchland’s many and varied attempts at reformulation. We urge the intrigued reader to examine his suggestions in their original form. But a word about philosophical methodology is in order. Churchland is not attempting “conceptual analysis” in anything resembling its traditional philosophical sense. Neither, typically, are neurophilosophers in any of their reformulation projects. (This is why a discussion of neurophilosophical reformulations fits with a discussion of EM.) There are philosophers who take the discipline’s ideal analyses to be a relatively simple set of necessary and sufficient conditions, expressed in non-technical natural language, governing the application of important concepts (like justice, knowledge, theory, or explanation). These analyses should square, to the extent possible, with pretheoretical usage. Ideally, they should preserve synonymy. Other philosophers view this ideal as sterile, misguided, and perhaps deeply mistaken about the underlying structure of human knowledge (Ramsey 1992). Neurophilosophers tend to reside in the latter group. Those who dislike philosophical speculation about the promise and potential of developing science to reformulate (“reform-ulate”) traditional philosophical concepts have probably already discovered that neurophilosophy is not for them. But the familiar charge that neurocomputational reformulations of the sort Churchland attempts are “philosophically uninteresting” or “irrelevant” because they fail to provide “analyses” of theory, explanation, and the like will fall on deaf ears among many contemporary “naturalistic” philosophers, who have by and large given up on traditional philosophical “analysis”.

Before we leave the topic of proposed neurophilosophical applications of this theoretical development from “neural networks”-style cognitive/computational neuroscience, one final point of actual scientific detail bears mention. This approach did not remain state-of-the-art computational neuroscience for long. Many neural modelers quickly gave up this approach to modeling the brain. Compartmental modeling enabled computational neuroscientists to mimic activity in and interactions between patches of neuronal membrane (Bower and Beeman 1995). This approach permitted modelers to control and manipulate a variety of subcellular factors that determine action potentials per time unit, including the topology of membrane structure in individual neurons, variations in ion channels across membrane patches, and field properties of post-synaptic potentials depending on the location of the synapse on the dendrite or soma. By the mid-1990s modelers had begun to “custom build” the neurons in their target circuitry. Increasingly powerful computer hardware still allowed them to study circuit properties of modeled networks. For these reasons, many serious computational neuroscientists switched to working at a level of analysis that treats neurons as structured rather than simple computational devices. With compartmental modeling, vector-to-vector transformations came to be far less useful in serious neurobiological models, replaced by differential equations representing ion currents across patches of neural membrane. Far more biological detail came to be captured in the resulting models than “connectionist” models permitted. This methodological change across computational neuroscience meant that a neurophilosophy guided by “connectionist” resources no longer drew from the state of the art of the scientific field.
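To give a flavor of the contrast, the differential-equation style of modeling can be sketched for a single passive membrane patch. This is far simpler than the multi-compartment, multi-channel models built in simulators of the kind Bower and Beeman describe; the leak conductance, capacitance, and injected current below are invented illustrative values, and the equation (C dV/dt = −g_leak(V − E_leak) + I) describes only a leak current, with no active ion channels.

```python
def simulate_patch(i_inject, dt=0.1, t_max=50.0,
                   c_m=1.0, g_leak=0.1, e_leak=-70.0):
    """Euler-integrate the membrane equation for one passive patch:
        C dV/dt = -g_leak * (V - E_leak) + I_inject
    Returns the voltage trace (mV), sampled every dt ms."""
    v = e_leak                     # start at the leak reversal potential
    trace = [v]
    for _ in range(int(t_max / dt)):
        dv = (-g_leak * (v - e_leak) + i_inject) / c_m
        v += dv * dt
        trace.append(v)
    return trace

trace = simulate_patch(i_inject=1.0)
# The voltage relaxes toward the steady state E_leak + I/g_leak = -60 mV.
print(trace[-1])
```

Where the connectionist sketch reduced a neuron to a weighted sum, this style tracks membrane variables through time; full compartmental models couple many such patches, each with its own channel kinetics.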

Philosophy of science and scientific epistemology were not the only areas where neurophilosophers urged the relevance of neuroscientific discoveries for traditionally philosophical topics. A decade after Neurophilosophy’s publication, Kathleen Akins (1996) argued that a “traditional” view of the senses underlies a variety of sophisticated “naturalistic” programs about intentionality. (She cites the Churchlands, Daniel Dennett, Fred Dretske, Jerry Fodor, David Papineau, Dennis Stampe, and Kim Sterelny as examples.) But then-recent neuroscientific work on the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are “veridical” in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure in the relevant external relations that the receptors are sensitive to is preserved in the structure of the internal relations among the resulting sensory states. And (3) the sensory system reconstructs faithfully, without fictive additions or embellishments, the external events. Using then-recent neurobiological discoveries about response properties of thermal receptors in the skin (i.e., “thermoreceptors”) as an illustration, Akins showed that sensory systems are “narcissistic” rather than “veridical”. All three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, our “philosophy of perception” or account of “perceptual intentionality” will no longer focus on the search for correlations between states of sensory systems and “veridically detected” external properties. This traditional philosophical (and scientific!) project rests upon a mistaken “veridicality” view of the senses. Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a “simple paradigm case” of an intentional relation between representation and world. Once again, available scientific detail showed the naivety of some traditional philosophical projects.

Focusing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urged a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers’ favorite cases for analysis and theorizing about conscious experiences generally. Nevertheless, every position about pain experiences has been defended: eliminativism, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experiences are the place to start an analysis or theory of consciousness? Hardcastle urged two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories.[6] But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposed a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argued that this dual system is consistent with neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concluded from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analyzing the general type.

3. Neuroscience and Psychosemantics

Developing and defending theories of content is a central topic in contemporary philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. Here we describe a few contributions neurophilosophers have made to this project.

When one perceives or remembers that one is out of coffee, one’s brain state possesses intentionality or “aboutness”. The percept or memory is about one’s being out of coffee; it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the Functional Role approach and the Informational approach. For a description of these and other theories of mental content, see the entries on causal theories of mental content, mental representation, and teleological theories of mental content.

The core claim of a functional role semantics is that a representation has its specific content in virtue of relations it bears to other representations. Its paradigm application is to concepts of truth-functional logic, like the conjunctive “and” or disjunctive “or”. A physical event instantiates the “and” function just in case it maps two true inputs onto a single true output. Thus it is the relations an expression bears to others that give it the semantic content of “and”. Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1995). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics ascribes content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information (Dretske 1981, 1988). Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is by definition always veridical, while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of “function”. A brain state represents X by virtue of having the function of carrying information about being caused by X (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but they are the ones to which neurophilosophers have most contributed.
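The truth-functional illustration can be made concrete. The sketch below (illustrative Python, not drawn from any of the authors discussed; the function and device names are our own) checks whether an arbitrary physical input-output mapping instantiates the “and” function purely by its relational profile, which is exactly the resource functional role semantics appeals to.

```python
from itertools import product

def instantiates_and(mapping):
    """A two-input device instantiates 'and' just in case it maps
    the pair of true inputs, and only that pair, onto a true output."""
    return all(mapping[(a, b)] == (a and b)
               for a, b in product([True, False], repeat=2))

# A physical device characterized only by its input-output relations:
gate = {(True, True): True, (True, False): False,
        (False, True): False, (False, False): False}

print(instantiates_and(gate))  # True: the relational profile fixes the content "and"
```

Nothing intrinsic to the device matters here; on the functional role view, bearing these relations to its inputs and outputs is what gives the state the content “and”.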

Paul Churchland’s allegiance to functional role semantics goes back to his earliest views about the semantics of terms in a language. In his (1979) book, he insisted that the semantic identity (content) of a term derives from its place in the network of sentences of the entire language. The functional economies envisioned by early functional role semanticists were networks with nodes corresponding to the objects and properties denoted by expressions in a language. Thus one node, appropriately connected, might represent birds, another feathers, and another beaks. Activation of one of these would tend to spread activation to the others. As “connectionist” neural network modeling developed (as discussed in the previous section above), alternatives arose to this one-representation-per-node “localist” approach. By the time Churchland (1989) provided a neuroscientific elaboration of functional role semantics for cognitive representations generally, he too had abandoned the “localist” interpretation. Instead, he offered a “state-space semantics”.

We saw in the previous section how (vector) state spaces provide an interpretation for activity patterns in neural networks, both biological and artificial. A state-space semantics for cognitive representations is a species of functional role semantics because the individuation of a particular state depends upon the relations obtaining between it and other states. A representation is a point in an appropriate state space, and points (or subvolumes) in a space are individuated by their relations to other points (locations, geometrical proximity). Paul Churchland (1989, 1995) illustrated a state-space semantics for neural states by appealing to sensory systems. One popular theory in sensory neuroscience of how the brain codes for sensory qualities (like color) is the opponent process account (Hardin 1988). Churchland (1995) describes a three-dimensional activation vector state-space in which every color perceivable by humans is represented as a point (or subvolume). Each dimension corresponds to activity rates in one of three classes of photoreceptors present in the human retina and their efferent paths: the red-green opponent pathway, the yellow-blue opponent pathway, and the black-white (contrast) opponent pathway. Photons striking the retina are transduced by photoreceptors, producing an activity rate in each of the segregated pathways. A represented color is hence a triplet of neuronal activation frequency rates. As an illustration, consider again Figure 3. Each dimension in that three-dimensional space will represent average frequency of action potentials in the axons of one class of ganglion cells projecting out of the retina. Each color perceivable by humans will be a region of that space. For example, an orange stimulus produces a relatively low level of activity in both the red-green and yellow-blue opponent pathways (x-axis and y-axis, respectively), and middle-range activity in the black-white (contrast) opponent pathway (z-axis). Pink stimuli, on the other hand, produce low activity in the red-green opponent pathway, middle-range activity in the yellow-blue opponent pathway, and high activity in the black-white (contrast) opponent pathway.[7] The location of each color in the space generates a “color solid”. Location on the solid, and geometrical proximity between these locations, reflect structural similarities between the perceived colors. Human gustatory representations are points in a four-dimensional state space, with each dimension coding for activity rates generated by gustatory stimuli in each type of taste receptor (sweet, salty, sour, and bitter) and their segregated efferent pathways. When implemented in a neural network with structural, and hence computational, resources as vast as the human brain’s, the state-space approach to psychosemantics generates a theory of content for a huge number of cognitive states.[8]
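The core of the proposal can be rendered as a toy model. In this sketch (illustrative Python; the numerical activation values are invented for illustration and only follow the qualitative pattern described above, not actual neurophysiology), each color is a triplet of activation rates in the three opponent pathways, and representational similarity is geometric proximity in the state space.

```python
import math

# Illustrative triplets of activation rates, scaled 0-1, on the
# (red-green, yellow-blue, black-white) opponent dimensions.
# Qualitatively per the text: orange is low on both chromatic axes
# and mid-range on contrast; pink is low, mid-range, and high.
colors = {
    "orange": (0.2, 0.2, 0.5),
    "pink":   (0.2, 0.5, 0.9),
    "red":    (0.1, 0.3, 0.4),
}

def state_space_distance(c1, c2):
    """Geometric proximity in the activation state space stands in
    for similarity between the represented colors."""
    return math.dist(colors[c1], colors[c2])

# In this toy space, orange comes out more similar to red than to pink,
# mirroring the structural-similarity claim of state-space semantics.
print(state_space_distance("orange", "red") < state_space_distance("orange", "pink"))
```

The point of the sketch is only structural: content is fixed by a point’s location relative to other points, so similarity relations among represented colors fall out of the geometry of the space.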

Jerry Fodor and Ernest LePore (1992) raised an important challenge to Churchland’s psychosemantics. Location in a state space alone seems insufficient to fix a state’s representational content. Churchland never explains why a point in a three-dimensional state space represents a color, as opposed to any other quality, object, or event that varies along three dimensions.[9] So Churchland’s account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore alleged that Churchland never specified how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the “external” inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colors are that opponent processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to “external” stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?

The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus-types for visual feature detectors include high-contrast edges, motion direction, and colors. A favorite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells’ activity functioned to detect flies rested upon knowledge of the frog’s diet. (Bechtel 1998 provides a useful discussion.) Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists discovered a host of neurons that are maximally responsive to a variety of complex stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel’s (1962) Nobel Prize-winning work establishing the receptive fields of neurons in striate (visual) cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporated many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection. (See Churchland and Sejnowski 1992 for a review.)

Kathleen Akins (1996) offered a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section that Akins argued that the physiology of thermoreception violates three necessary conditions on “veridical” representation. From this fact she raised doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including for thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological niceties. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog’s feature-detection repertoire. Akins’ critique cast doubt on whether details of sensory transduction will scale up to provide an adequate unified psychosemantics for all concepts. It also raised new questions for human intentionality. How do we get from activity patterns in “narcissistic” sensory receptors, keyed not to “objective” environmental features but rather only to effects of the stimuli on the patch of tissue innervated, to human ontologies replete with enduring objects with stable configurations of properties and relations, types and their tokens (as the “fly-thought” example presented above reveals), and the rest? And how did the development of a stable, rich ontology confer survival advantages on our human ancestors?

4. Consciousness Explained?

Consciousness re-emerged over the past three decades as a focus of research in philosophy of mind and in the cognitive and brain sciences. Instead of ignoring it, many physicalists sought to explain it (Dennett 1991). Here we focus exclusively on ways that neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. (See the links to other entries in this encyclopedia below in Related Entries for broader discussions of consciousness and physicalism.)

Thomas Nagel (1974) argued famously that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invited us to ponder “what it is like to be a bat” and urged the intuitive judgment that no amount of physical-scientific knowledge, including neuroscientific knowledge, supplies a complete answer. Nagel’s intuition pump has generated extensive philosophical discussion. At least two well-known replies made direct appeal to neurophysiology. John Biro (1991) suggested that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993a) delved deeper into existing knowledge of bat physiology and reported much that is pertinent to Nagel’s question. She argued that many of the questions about bat subjective experience that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.

David Chalmers (1996) famously argued that any possible brain-process account of consciousness will leave open an “explanatory gap” between the brain process and the properties of the conscious experience.[10] This is because no brain-process theory can answer the “hard” question: Why should that particular brain process give rise to that particular conscious experience? We can always imagine (“conceive of”) a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes a conscious experience, replete with all the features we experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanism. Paul and Patricia Churchland (1997) offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing active brains like ours but wholly lacking in conscious experiences. But the more one learns about how the brain produces conscious experience—and such a literature has emerged (for some early work, see Gazzaniga 1995)—the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. This is not just bare assertion. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly “diffusely projecting” nuclei like the intralaminar nuclei) and the cortex.[11] Churchland argues that thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid-eye-movement) sleep, and for other “core” features of conscious experience. In other words, the Churchlands claim that when one learns about activity patterns in these recurrent circuits, one can no longer “imagine” or “conceive of” this activity occurring without these core features of conscious experience occurring (other than by just mouthing the expression, “I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming/…”).

A second focus of skeptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colors of visual sensations are a philosopher’s favorite example. One famous puzzle about color qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the color that fire engines and tomatoes appear to have to one subject is the color that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically informed philosophy has addressed this question. (C.L. Hardin 1988 and Austen Clark 1993 are noteworthy examples.) A related area where neurophilosophical considerations have emerged concerns the metaphysics of colors themselves (rather than color experiences). A longstanding philosophical dispute is whether colors are objective properties existing external to perceivers or are rather identifiable with, or dependent upon, minds or nervous systems. Some neuroscientific work on this problem begins with characteristics of color experiences: for example, that color similarity judgments produce color orderings that align on a circle (Clark 1993). With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colors with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colors with activity in opponent processing neurons does. Such a tidbit is not decisive for the color objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind. (For more details on these issues, see the entry on color in this Encyclopedia.)
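The structural point about the hue circle can be put concretely. In this sketch (illustrative Python; the opponent-plane coordinates and wavelength values are invented for illustration, not measured data), hues are located by their angle in the two-dimensional chromatic opponent plane, which closes the similarity ordering into a circle, whereas ordering hues by wavelength leaves red and violet maximally far apart even though they are perceived as similar.

```python
import math

# Illustrative (red-green, yellow-blue) opponent coordinates and
# approximate dominant wavelengths in nanometers.
hues = {
    "red":    ((1.0, 0.0), 700),
    "yellow": ((0.5, 1.0), 580),
    "green":  ((-1.0, 0.3), 530),
    "blue":   ((-0.3, -1.0), 470),
    "violet": ((0.5, -0.9), 410),
}

def opponent_gap(h1, h2):
    """Angular separation (radians) in the opponent plane, wrapping
    around the circle."""
    a1 = math.atan2(hues[h1][0][1], hues[h1][0][0])
    a2 = math.atan2(hues[h2][0][1], hues[h2][0][0])
    d = abs(a1 - a2)
    return min(d, 2 * math.pi - d)

def wavelength_gap(h1, h2):
    """Separation on the one-dimensional wavelength line."""
    return abs(hues[h1][1] - hues[h2][1])

# Red and violet are neighbors on the hue circle...
print(opponent_gap("red", "violet") < opponent_gap("red", "green"))
# ...but maximally separated on the wavelength line.
print(wavelength_gap("red", "violet") > wavelength_gap("red", "green"))
```

This is the sense in which an opponent-process identification preserves the structure of the hue circle while a wavelength identification does not: only the former makes red and violet close.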

We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compares and notes tension between neurophysiological discoveries and common-sense intuitions about pain experience. He suspects that the incommensurability between scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But, foreshadowing the Churchlands’ reply to Chalmers, Dennett favors scientific investigations over conceivability-based philosophical arguments.

Neurological deficits have attracted philosophers interested in consciousness. For nearly fifty years philosophers have debated the implications for the unity of the self of the Nobel Prize-winning experiments with commissurotomy patients who, for clinical reasons, had their corpus callosum surgically ablated (Nagel 1971).[12] The corpus callosum is the huge bundle of axons connecting neurons across the left and right mammalian cerebral hemispheres. In carefully controlled experiments, commissurotomy patients seemingly display two dissociable “seats” of consciousness. Elizabeth Schechter (2018) has recently greatly updated the philosophical treatment of the scientific details of these “split-brain” patients, including their own experiential reports, and has traced implications for our understanding of the self.

In chapter 5 of her (1986) book, Patricia Churchland extended both the range and the philosophical implications of neurological deficits. One deficit she discusses in detail is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1995) worried that many of these interpretations conflate distinct notions of consciousness. He labels these notions “phenomenal consciousness” (“P-consciousness”) and “access consciousness” (“A-consciousness”). The former is the “what it is like”-ness of conscious experiences. The latter is the availability of representational content to self-initiated action and speech. Block argued that P-consciousness is not always representational, whereas A-consciousness is. Dennett (1991, 1995) and Tye (1993) are skeptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block’s distinction.

We break off our brief overview of neurophilosophical work on consciousness here. Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of color and pain experiences quickly attracted neurophilosophical attention (Akins 1993a,b, 1996; Austen Clark 1993), as did self-consciousness (Bermúdez 1998).

5. Locating Cognitive Functions: From Lesion Studies to Functional Neuroimaging

One of the first issues to arise in neurology, as far back as the nineteenth century, concerned the localization of specific cognitive functions to specific brain regions. Although the “localization” approach had dubious origins in the phrenology of Gall and Spurzheim, and had been challenged strenuously by Flourens throughout the early nineteenth century, it re-emerged late in the nineteenth century in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (when possible) of linguistic deficits in their aphasic patients, followed by post mortem brain autopsies.[13] Broca’s initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech production deficits. Although the anatomical coordinates Broca postulated for the “speech production center” do not correlate exactly with the damage producing production deficits, both this area of frontal cortex and speech production deficits still bear his name (“Broca’s area” and “Broca’s aphasia”). Less than two decades later Carl Wernicke published evidence for a second language center. This area is anatomically distinct from Broca’s area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name (“Wernicke’s area”) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (“Wernicke’s aphasia”) involves deficits in language comprehension. Wernicke’s method, like Broca’s, was based on lesion studies produced by natural trauma: a careful evaluation of the behavioral deficits, followed by post mortem autopsies to find the sites of tissue damage and atrophy. More recent and more careful lesion studies suggest more precise localization of specific linguistic functions, and remain a cornerstone of aphasia research to this day.

Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioral measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in those areas) and re-measure performance on the same behavioral tests. Since we lack widely accepted animal models for human language production and comprehension, this additional evidence isn’t available to the neurologist or neurolinguist. This limitation makes the neurological study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Barbara Von Eckardt (Von Eckardt Klein 1978) attempted to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins’ well-known analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2, …, cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (all together, the constituent capacities c1, c2, …, cn). A functional-localization hypothesis has the form: brain structure S in organism (type) O has constituent capacity ci, where ci is a function of some part of O. An example might be: Broca’s area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.

Armed with these characterizations, Von Eckardt Klein argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized on the basis of the abnormal behavior the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behavior to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behavior P (e.g., the speech deficits characteristic of Broca’s aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis, minus the operation of the component performing ci (Broca’s area), must result in pathological behavior P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional deficit hypothesis on the basis of pathological behavior is thus an instance of argument to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.

Von Eckardt Klein applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia.[14] Her philosophical explication of this important neurological method reveals that most challenges to localization arguments argue either against the localization of a particular type of functional capacity or against generalizing from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic. But they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available alternative explanations are often severely limited. We must seek theoretical inspiration from any level of investigation or explanation. Hence making explicit the “logic” of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt Klein (1978) thus anticipated what came to be heralded as the “co-evolutionary research methodology”, which remains a centerpiece of neurophilosophy to the present day (see section 6).

Over the last three decades, new evidence for localizations of cognitive functions has come increasingly from a new source, the development and refinement of neuroimaging techniques. However, the logical form of localization-of-function arguments appears not to have changed from those employing lesion studies, as analyzed by Von Eckardt Klein. Instead, these new neuroimaging technologies resolve some of the methodological problems that plagued lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques have been prominent in philosophical discussions: positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI. Although these measure different biological markers of functional activity, PET approved for human use now has spatial resolution down to the single mm range, while fMRI has resolution down to less than 1mm.[15] As these techniques increased spatial and temporal resolution of functional markers, and continued to be used with sophisticated behavioral methodologies, arguments for localizing specific psychological functions to increasingly specific neural regions continued to grow. Stufflebeam and Bechtel provided an early and philosophically useful discussion of PET. Bechtel and Richardson (1993) provided a general framework for “localization and decomposition” arguments, which anticipated in many ways the coming “new mechanistic” perspective in philosophy of science and philosophy of neuroscience (see sections 7 and 8 below). Bechtel and Mundale (1999) further refined philosophical arguments for localization of function specific to neuroscience.

More recent philosophical discussion of these functional imaging techniques has tended to urge more caution in resting localization claims on their results. Roskies (2007), for example, points out the tendency to think of the evidential force of functional neuroimages (especially fMRI) on an analogy with that of photographs. Drawing on work in aesthetics and the visual arts, Roskies argues that many of the features that give photographs their evidential force are not present in functional neuroimages. So while neuroimages do serve as evidence for claims about neurofunctions, and even for localization hypotheses, details of their proper interpretation are far more complicated than philosophers sometimes assume. More critically, Klein (2010) argues that images of “brain activity” resulting from functional neuroimaging, especially fMRI, are poor evidence for functional hypotheses. For these images present the results of null hypothesis significance testing on fMRI data, and such testing alone cannot provide evidence about the functional structure of a causally dense system, which the human brain is. Instead, functional neuroimages are properly interpreted as indicating regions where further data and analysis are warranted. But these data will typically require more than simple significance testing, so skepticism about the evidential force of neuroimages does not warrant skepticism more generally about fMRI.

Localization of function remains to this day a central topic of discussion in philosophy of neuroscience. We will cover more recent work in later sections.

6. A Result of the Co-evolutionary Research Ideology: Philosophy’s Emphasis on Cognitive and Computational Neuroscience

What neuroscience has now discovered about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. These results constitute one of the crowning achievements of scientific inquiry. (For those in doubt, simply peruse for five minutes a recent volume of Society for Neuroscience Abstracts.) Less comprehensive, yet still spectacular, are discoveries at “higher” levels of neuroscience: circuits, networks, and systems. All this is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and ultimately the experimental results-driven theories within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture sometimes gets neglected: the relationship between the levels, the “glue” that binds knowledge of neuron activity to subcellular and molecular mechanisms “below”, and to circuit, network, and systems activity patterns “above”. This problem is especially glaring when we try to relate “cognitivist” psychological theories, postulating information-bearing representations and processes operating over their contents, to neuronal activities. “Co-evolution” between these explanatory levels still seems more a distant dream than an operative methodology guiding day-to-day scientific research.

It is here that some philosophers and neuroscientists turned to computational methods (Churchland and Sejnowski 1992). One hope was that the way computational models have functioned in more developed sciences, like physics, might provide a useful model. One computational resource that has usefully been applied in more developed sciences to similar “cross-level” concerns is dynamical systems theory. Global phenomena, such as large-scale meteorological patterns, have been usefully addressed as dynamical, nonlinear, and often chaotic interactions between lower-level physical phenomena. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher ones like experimental psychology, artificial intelligence, and philosophy of science.

The use of computational methods in neuroscience itself is not new. Hodgkin, Huxley, and Katz (1952) incorporated values of voltage-dependent sodium and potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modeled conductance versus time data that reproduced the changing membrane potential over time when action potentials were generated. Also using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This model provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and was incorporated into the GENESIS software for programming neurally realistic networks (Bower and Beeman 1995; see discussion in section 2 above). David Sparks and his colleagues showed that a vector-averaging model of activity in neurons of superior colliculi correctly predicts experimental results about the amplitude and direction of saccadic eye movements (Lee, Rohrer, and Sparks 1988). Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues predicted direction and amplitude of hand and arm movements based on averaged activity of 224 cells in motor cortex. Their predictions were borne out under a variety of experimental tests (Georgopoulos, Schwartz, and Kettner 1986). We mention these particular studies only because these are ones with which we are familiar. No doubt we could multiply examples of the fruitful interaction of computational and experimental methods in neuroscience easily by one-hundred-fold. Many of these extend back before “computational neuroscience” was a recognized research endeavor.
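The vector-averaging idea behind the Georgopoulos population-vector results can be conveyed in a few lines of code. The following is only an illustrative sketch: the cosine tuning curve, baseline and gain values, and the assumption of evenly spaced preferred directions are our simplifications for exposition, not the 1986 experimental data or analysis.

```python
import numpy as np

# Illustrative sketch of population-vector averaging (assumed cosine tuning;
# parameters invented for the example, not Georgopoulos et al.'s data).
n_cells = 224
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)  # preferred directions

def firing_rates(movement_dir):
    """Cosine tuning: each cell fires most for movement along its preferred direction."""
    baseline, gain = 10.0, 8.0
    return baseline + gain * np.cos(movement_dir - preferred)

def population_vector(rates):
    """Sum each cell's preferred-direction unit vector, weighted by its firing rate."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

decoded = population_vector(firing_rates(np.deg2rad(60.0)))
print(round(np.rad2deg(decoded), 3))  # 60.0: the averaged vector recovers the movement direction
```

The point of the sketch is that no single cell encodes the movement direction; the direction is recovered only from the rate-weighted average across the whole population.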

We’ve already seen one example, the vector transformation account of neural representation and computation, once under active development in cognitive neuroscience (see section 2 above). Other approaches using “cognitivist” resources were, and continue to be, pursued.[16] Some of these projects draw upon “cognitivist” characterizations of the phenomena to be explained. Some exploit “cognitivist” experimental techniques and methodologies. Some even attempt to derive “cognitivist” explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn (1997) put it, cognitive neuroscientists employ the “information processing” view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavor calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines with interested nonspecialists. This requires more than people willing to confer with others working at related levels; it also requires researchers trained explicitly in the methods and factual details of a variety of disciplines. This is a daunting need, but it offers hope to philosophers wishing to contribute to actual neuroscience. Thinkers trained in both the “synoptic vision” afforded by philosophy, and the scientific and experimental basis of a genuine (graduate-level) science, would be ideally equipped for this task. Recognition of this potential niche was slow to dawn on graduate programs in philosophy, but a few programs have taken steps to fill it (see, e.g., Other Internet Resources below).

However, one glaring shortcoming remains. Given philosophers’ training and interests, “higher-level” neurosciences—networks, cognitive, systems, and the fields of computational neuroscience which ally with these—tend to attract the most philosophical attention. As natural as this focus might be, it can lead philosophers to a misleading picture of neuroscience. Neurobiology remains focused on cellular and molecular mechanisms of neuronal activity, and allies with the kind of behavioral neuroscience that works with animal models. This is still how a majority of members of the Society for Neuroscience, now more than 37,000 members strong, classify their own research; this is where the majority of grant money for research goes; and these are the areas whose experimental publications most often appear in the most highly cited scientific journals. (The link to the Society for Neuroscience’s web site in Other Internet Resources below leads to a wealth of data on these numbers; see especially the Publications section.) Yet philosophers have tended not to pay much attention to cellular and molecular neuroscience. Fortunately this seems to be changing, as we will document in sections 7 and 8 below. Still, the preponderant attention philosophers pay to cognitive/systems/computational neuroscience obscures the wet-lab, experiment-driven focus of ongoing neurobiology.

7. Developments in the Philosophy of Neuroscience

The distinction between “philosophy of neuroscience” and “neurophilosophy” came to be better clarified over the first decade of the twenty-first century, due primarily to more questions being pursued in both areas. Philosophy of neuroscience still tends to pose traditional questions from philosophy of science specifically about neuroscience. Such questions include: What is the nature of neuroscientific explanation? And, what is the nature of discovery in neuroscience? Answers to these questions are pursued either descriptively (how does neuroscience proceed?) or normatively (how should neuroscience proceed?). Some normative projects in philosophy of neuroscience are “deconstructive”, criticizing claims about the topic made by neuroscientists. For example, philosophers of neuroscience have criticized the conception of personhood assumed by researchers in cognitive neuroscience (cf. Roskies 2009). Other normative projects are constructive, proposing new theories of neuronal phenomena or methods for interpreting neuroscientific data. Such projects often integrate smoothly with theoretical neuroscience itself. For example, Chris Eliasmith and Charles Anderson developed an approach to constructing neurocomputational models in their book Neural Engineering (2003). In separate publications, Eliasmith has argued that the framework introduced in Neural Engineering provides both a normative account of neural representation and a framework for unifying explanation in neuroscience (e.g., Eliasmith 2009).

Neurophilosophy continued to apply findings from the neurosciences to traditional philosophical questions. Examples include: What is an emotion? (Prinz 2004) What is the nature of desire? (Schroeder 2004) How is social cognition made possible? (Goldman 2006) What is the neural basis of moral cognition? (Prinz 2007) What is the neural basis of happiness? (Flanagan 2009) Neurophilosophical answers to these questions are constrained by what neuroscience reveals about nervous systems. For example, in his book Three Faces of Desire, Timothy Schroeder (2004) argued that our commonsense conception of desire attributes to it three capacities: (1) the capacity to reinforce behavior when satisfied, (2) the capacity to motivate behavior, and (3) the capacity to determine sources of pleasure. Based on evidence from the literature on dopamine function and reinforcement learning theory, Schroeder argued that reward processing is the basis for all three capacities. Thus, reward is the essence of desire.

During the first decade of the twenty-first century a trend arose in neurophilosophy to look toward neuroscience for guidance in moral philosophy. That should be evident from the themes we’ve just mentioned. Simultaneously, there was renewed interest in moralizing about neuroscience and neurological treatments (see Levy 2007; Roskies 2009). This new field, neuroethics, thus combined both interest in the relevance of neuroscience data for understanding moral cognition, and the relevance of moral philosophy for acquiring and regulating the application of knowledge from neuroscience. The regulatory branch of neuroethics initially focused explicitly on the ethics of treatment for people who suffer from neurological impairments, the ethics of attempts to enhance human cognitive performance (Schneider 2009), the ethics of applying “mind reading” technology to problems in forensic science (Farah and Wolpe 2004), and the ethics of animal experimentation in neuroscience (Farah 2008). More recently both of these branches of neuroethics have seen tremendous growth. The interested reader should consult the neuroethics entry in this Encyclopedia.

Trends during the first decade of the twenty-first century in philosophy of neuroscience included renewed interest in the nature of mechanistic explanations. This was in keeping with a general trend in philosophy of science (e.g., Machamer, Darden, and Craver 2000). The application of this general approach to neuroscience isn’t surprising. “Mechanism” is a widely-used term among neuroscientists. In his book, Explaining the Brain (2007), Carl Craver contended that mechanistic explanations in neuroscience are causal explanations, and typically multi-level. For example, the explanation of the neuronal action potential involves the action potential itself, the cell in which it occurs, electro-chemical gradients, and the proteins through which ions flow across the membrane. Thus we have a composite entity (a cell) causally interacting with neurotransmitters at its receptors. Parts of the cell engage in various activities, e.g., the opening and closing of ligand-gated and voltage-gated ion channels, to produce a pattern of changes, the depolarizing current constituting the action potential. A mechanistic explanation of the action potential thus countenances entities at the cellular, molecular, and atomic levels, all of which are causally relevant to producing the action potential. This causal relevance can be confirmed by altering any one of these variables, e.g., the density of ion channels in the cell membrane, to generate alterations in the action potential; and by verifying the consistency of the purported invariance between the variables. For challenges to Craver’s account of mechanistic explanation in neuroscience, specifically concerning the action potential, see Weber 2008 and Bogen 2005.

According to epistemic norms shared implicitly by neuroscientists, good explanations in neuroscience are good mechanistic explanations; and good mechanistic explanations are those that pick out invariant relationships between mechanisms and the phenomena they control. (For fuller treatment of invariance in causal explanations throughout science, see James Woodward 2003. Mechanists draw extensively on Woodward’s “interventionist” account of cause and causal explanations.) Craver’s account raised questions about the place of reduction in neuroscience. John Bickle (2003) suggested that the working concept of reduction in the neurosciences consists of the discovery of systematic relationships between interventions at lower levels of biological organization, as these are pursued in cellular and molecular neuroscience, and higher-level behavioral effects, as they are described in psychology. Bickle called this perspective “reductionism-in-practice” to contrast it with the concepts of intertheoretic or metaphysical reduction that have been the focus of many debates in the philosophy of science and philosophy of mind. Despite Bickle’s reformulation of reduction, however, mechanists generally resist, or at least relativize, the “reductionist” label. Craver (2007) calls his view the “mosaic unity” of neuroscience. Bechtel (2009) calls his “mechanistic reduction(ism)”. Both Craver and Bechtel advocate multi-leveled “mechanisms-within-mechanisms”, with no level of mechanism epistemically privileged. This is in contrast to reduction(ism), ruthless or otherwise, which privileges lower levels. Still we can ask: Is mechanism a kind of reductionism-in-practice? Or does mechanism, as a position on neuroscientific explanation, assume some type of autonomy for psychology? If it assumes autonomy, reductionists might challenge mechanists on this assumption. On the other hand, Bickle’s reductionism-in-practice clearly departs from intertheoretic reduction, as the latter is understood in philosophy of science.
As Bickle himself acknowledges, his latest reductionism was inspired heavily by mechanists’ criticisms of his earlier “new wave” account. Mechanists can challenge Bickle that his departure from the traditional accounts has also led to a departure from the interests that motivated those accounts. (See Polger 2004 for a related challenge.) As we will see in section 8 below, these issues surrounding mechanistic philosophy of neuroscience have grown more urgent, as mechanism has grown to dominate the field.

The role of temporal representation in conscious experience and the kinds of neural architectures sufficient to represent objects in time generated interest. In the tradition of Husserl’s phenomenology, Dan Lloyd (2002, 2003) and Rick Grush (2001, 2009) have separately drawn attention to the tripartite temporal structure of phenomenal consciousness as an explanandum for neuroscience. This structure consists of a subjective present, an immediate past, and an expectation of the immediate future. For example, one’s conscious awareness of a tune is not just of a time-slice of tune-impression, but of a note that a moment ago was present, another that is now present, and an expectation of subsequent notes in the immediate future. As this experience continues, what was a moment ago temporally immediate is now retained as a moment in the immediate past; what was expected either occurred or didn’t in what has now become the experienced present; and a new expectation has formed of what will come. One’s experience is not static, even though the experience is of a single object (the tune). These earlier works found increased relevance with the rise of “predictive coding” models of whole brain function, developed by neuroscientists including Karl Friston (2009) less than a decade later, and brought to broader philosophical attention by Jakob Hohwy (2013) and Andy Clark (2016).

According to Lloyd, the tripartite structure of consciousness raises a unique problem for analyzing fMRI data and designing experiments. The problem stems from the tension between the sameness in the object of experience (e.g., the same tune through its progression) and the temporal fluidity of experience itself (e.g., the transitions between heard notes). At the time Lloyd was writing, one standard means of analyzing fMRI data consisted in averaging several data sets and subtracting an estimate of baseline activation from the composites.[17] This is done to filter noise from the task-related hemodynamic response. But as Lloyd points out, this then-common practice ignores much of the data necessary for studying the neural correlates of consciousness. It produces static images that neglect the relationships between data points over the time course. Lloyd instead applies a multivariate approach to studying fMRI data, under the assumption that a recurrent network architecture underlies the temporal processing that gives rise to experienced time. A simple recurrent network has an input layer, an output layer, a hidden layer, and an additional layer that copies the prior activation state of either the hidden layer or the output layer. Allowing the output layer to represent a predicted outcome, the input layer can then represent a current state and the additional layer a prior state. This assignment mimics the tripartite temporal structure of experience in a network architecture. If the neuronal mechanisms underlying conscious experience are approximated by recurrent network architecture, one prediction is that current neuronal states carry information about immediate future and prior states. Applied to fMRI, the model predicts that time points in an image series will carry information about prior and subsequent time points. The results of Lloyd’s (2002) analysis of 21 subjects’ data sets, sampled from the publicly accessible National fMRI Data Center, support this prediction.
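The simple recurrent architecture just described can be sketched in a few lines. The layer sizes and random weights below are arbitrary illustrative assumptions; the sketch shows only the architectural point: a context layer copies the hidden layer’s prior activation, so the current state blends present input with the immediate past, and the output can be read as a prediction of what comes next.

```python
import numpy as np

# Minimal simple-recurrent ("Elman"-style) network; sizes and weights are
# arbitrary illustrations, not a model fitted to any data.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 3

W_in = rng.normal(size=(n_hid, n_in))    # input  -> hidden (the "present")
W_ctx = rng.normal(size=(n_hid, n_hid))  # context -> hidden (the "immediate past")
W_out = rng.normal(size=(n_out, n_hid))  # hidden -> output (the "expected next")

def step(x, context):
    """One time step: blend current input with the copied prior hidden state."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    return W_out @ hidden, hidden        # output, plus the new context for next step

context = np.zeros(n_hid)
outputs = []
for x in np.eye(n_in):                   # a toy three-"note" input sequence
    out, context = step(x, context)
    outputs.append(out)
```

Because the hidden state depends on the prior state, the network’s response to the same input differs with history; this is the architectural feature that grounds the prediction that current states carry information about neighboring time points.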

Grush’s (2001, 2004) interest in temporal representation is part of his broader systematic project of addressing a semantic problem for computational neuroscience, namely: how do we demarcate study of the brain as an information processor from the study of any other complex causal process? This question leads back into the familiar territory of psychosemantics (see section 3 above), but now the starting point is internal to the practices of computational neuroscience. The semantic problem is thereby rendered an issue in philosophy of neuroscience, insofar as it asks: what does (or should) “computation” mean in computational neuroscience?

Grush’s solution drew on concepts from modern control theory. In addition to a controller, a sensor, and a goal state, certain kinds of control systems employ a process model of the actual process being controlled. A process model can facilitate a variety of engineering functions, including overcoming delays in feedback and filtering noise. The accuracy of a process model can be assessed relative to its “plug-compatibility” with the actual process. Plug-compatibility is a measure of the degree to which a controller can causally couple to a process model to produce the same results it would produce by coupling with the actual process. Note that plug-compatibility is not an information relation.

To illustrate a potential neuroscientific implementation, Grush considers a controller as some portion of the brain’s motor systems (e.g., premotor cortex). The sensors are the sense organs (e.g., stretch receptors on the muscles). A process model of the musculoskeletal system might exist in the cerebellum (see Kawato 1999). If the controller portion of the motor system sends spike trains to the cerebellum in the same way that it sends spikes to the musculoskeletal system, and if in return the cerebellum receives spike trains similar to real peripheral feedback, then the cerebellum emulates the musculoskeletal system (to the degree that the mock feedback resembles real peripheral feedback). The proposed unit over which computational operations range is the neuronal realization of a process model and its components, or in Grush’s terms an “emulator” and its “articulants”.

The details of Grush’s framework are too sophisticated to present in short compass. (For example, he introduces a host of conceptual devices to discuss the representation of external objects.) But in a nutshell, he contends that understanding temporal representation begins with understanding the emulation of the timing of sensorimotor contingencies. Successful sequential behavior (e.g., spearing a fish) depends not just on keeping track of where one is in space, but where one is in a temporal order of movements and the temporal distance between the current, prior, and subsequent movements. Executing a subsequent movement can depend on keeping track of whether a prior movement was successful and whether the current movement is matching previous expectations. Grush posits emulators—process models in the central nervous system—that anticipate, retain, and update mock sensorimotor feedback by timing their output proportionally to feedback from an actual process (Grush 2005).
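The core control-theoretic idea behind emulation can be conveyed with a toy sketch. Everything below (a one-dimensional process, a perfect process model, the gain value, the class names) is our invented illustration, not Grush’s own formalism: the controller issues each command to both the actual process and a plug-compatible model, and steers by the model’s immediate mock feedback rather than waiting on delayed real feedback.

```python
# Toy emulator sketch (all names and dynamics are invented for illustration).

class Process:
    """The actual controlled process (stand-in for the musculoskeletal system)."""
    def __init__(self):
        self.state = 0.0
    def step(self, command):
        self.state += command        # real feedback of this would arrive only after a delay
        return self.state

class Emulator:
    """A plug-compatible process model (stand-in for a cerebellar forward model)."""
    def __init__(self):
        self.state = 0.0
    def step(self, command):
        self.state += command        # mock feedback, available immediately
        return self.state

process, emulator = Process(), Emulator()
goal = 5.0
for _ in range(30):
    # Control off the emulator's immediate estimate, not delayed real feedback.
    command = 0.5 * (goal - emulator.step(0.0))
    emulator.step(command)
    process.step(command)

print(round(process.state, 3))  # 5.0: control succeeds while steering by the model
```

Because the emulator here is perfectly plug-compatible with the process, coupling the controller to the model produces the same trajectory as coupling it to the process itself, which is the sense of accuracy Grush’s plug-compatibility measure is after.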

Lloyd’s and Grush’s approaches to studying temporal representation are varied in their emphases. But they are unified in their implicit commitment to localizing cognitive functions and decomposing them into subfunctions using both top-down and bottom-up constraints. (See Bechtel and Richardson 1993 for more details on this general explanatory strategy.) As we mentioned a few paragraphs above, both anticipated in important and interesting ways more recent neuroscientific and philosophical work on predictive coding and the brain. Both developed mechanistic explanations that pay little regard to disciplinary boundaries. One of the principal lessons of Bickle’s and Craver’s work is that neuroscientific practice in general is structured in this fashion. The ontological consequences of adopting this approach continue to be debated.

8. Developments over the Second Decade of the Twenty-First Century

Mechanism, first introduced in section 7 above, came to dominate the philosophy of neuroscience throughout the second decade of the twenty-first century. One much-discussed example is Gualtiero Piccinini and Carl Craver (2011). The authors employ two popular mechanistic notions. Their first is the multi-level, nested hierarchies of mechanisms-within-mechanisms perspective, discussed in section 7 above, that traces back to Craver and Darden (2001). Their second is that of “mechanism sketch”, suggested initially in Machamer, Darden, and Craver (2000) and developed in detail in Craver (2007). Piccinini and Craver’s goal is to “seamlessly” situate psychology as part of an “integrated framework” alongside neuroscience. They interpret psychology’s familiar functional analyses of cognitive capacities as relatively incomplete mechanism-sketches, which leave out many components of the mechanisms that ultimately will fully explain the system’s behavior. Neuroscience in turn fills in these missing components, dynamics, and organizations, at least ones found in nervous systems. This filling-in thereby turns psychology’s mechanism-sketches into full-blown mechanistic explanations. So even though psychology proceeds via functional analyses, so interpreted it is nonetheless mechanistic. Piccinini and Craver realize that their “integrated” account clashes with classical “autonomy” claims for psychology vis-à-vis neuroscience. Nevertheless, they insist that their challenge to classical “autonomy” does not commit them to “reductionism”, in either its classical or more recent varieties. Their commitment to nested hierarchies of mechanisms-within-mechanisms to account for a system’s behavior acknowledges the importance of mechanisms and intralevel causation at all levels constituting the system, not just at lower (i.e., cellular, molecular) levels.

David Kaplan and Craver (2011) focus the mechanist perspective critically on dynamical systems mathematical models popular in recent systems and computational neuroscience. They argue that such models are explanatory only if there exists a “plausible mapping” between elements in the model and elements in the modeled system. At bottom is their Model-to-Mechanism-Mapping (3M) Constraint on explanation. The variables in a genuinely explanatory model correspond to components, activities, or organizational features of the system being explained. And the dependencies posited among variables in the model, typically expressed mathematically in systems and computational neuroscience, correspond to causal relations among the system’s components. Kaplan and Craver justify the 3M Constraint on grounds of explanatory norms common to both science and common sense. All other things being equal, they insist, explanations that provide more relevant details about a system’s components, activities, and organization are more likely to answer more questions about how the system will behave in a variety of circumstances than an explanation that provides fewer (mechanistic) details. “Relevant” here pertains to the functioning of the specific mechanism. Models from systems and computational neuroscience that violate the 3M Constraint are thus more reasonably thought of as mathematical descriptions of phenomena, not explanations of some “non-mechanistic” variety.

Kaplan and Craver challenge their own view with one of the more popular dynamical/mathematical models in all of computational neuroscience, the Haken-Kelso-Bunz (1985) model of human bimanual finger-movement coordination. They point to passages in these modelers’ publications that suggest that the modelers only intended for their dynamical systems model to be a mathematically compact description of the temporal evolution of a “purely behavioral dependent variable”. The modelers interpreted none of the model’s variables or parameters as mapping onto components or operations of any hypothetical mechanism generating the behavioral data. Nor did they intend for any of the model’s mathematical relations or dependencies among variables to map onto hypothesized causal interactions among components or activities of any mechanism. As Kaplan and Craver further point out, after publishing their dynamicist model, these modelers themselves then began to investigate how the behavioral regularities their model described might be produced by neural motor system components, activities, and organization. Their own follow-up research suggests that these modelers saw their dynamicist model as a heuristic, to help neuroscientists move toward “how-possibly”, and ultimately to a “how-actually” mechanistic explanation.
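It may help to see what a “mathematically compact description of a purely behavioral dependent variable” looks like. The Haken-Kelso-Bunz equation for the relative phase phi of the two fingers is dphi/dt = -a·sin(phi) - 2b·sin(2·phi). The sketch below numerically integrates it; the parameter values and crude Euler scheme are our illustrative choices, not the authors’. Note that nothing in the model maps onto neural components: a and b are behavioral-level parameters whose ratio tracks movement frequency.

```python
import numpy as np

def settle(phi0, a, b, dt=0.01, steps=20000):
    """Euler integration of the HKB relative-phase equation until it settles."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))
    return phi % (2.0 * np.pi)

# Slow movement (b/a > 1/4): anti-phase coordination (phi = pi) remains stable.
print(round(settle(3.0, a=1.0, b=1.0), 3))   # ~3.142 (stays near pi)

# Fast movement (b/a < 1/4): anti-phase destabilizes; phase switches to in-phase.
print(round(settle(3.0, a=1.0, b=0.1), 3))   # ~0.0 (switches to phi = 0)
```

The model thus compactly reproduces the famous anti-phase-to-in-phase switch as b/a falls, without any of its terms corresponding to parts or activities of a mechanism, which is exactly why Kaplan and Craver treat it as a test case for the 3M Constraint.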

At bottom, Kaplan and Craver’s 3M constraint on explanation presents a dilemma for dynamicists. To the extent that dynamical systems modelers intend to model hypothesized neural mechanisms for the phenomenon under investigation, their explanations will need to cohere with the 3M Constraint (and other canons of mechanistic explanation). To the extent that this is not a goal of dynamicist modelers, their models do not seem genuinely explanatory, at least not in one sense of “explanation” prominent in the history of science. Furthermore, when dynamicist models are judged to be successful, they often prompt subsequent searches for underlying mechanisms, just as the 3M Constraint and the general mechanist account of the move from “how-possibly” to “how-actually” mechanisms recommends. Either horn gores dynamicists who claim that their models constitute a necessary additional kind of explanation in neuroscience, beyond mechanistic explanation and beyond any heuristic value such models might offer toward discovering mechanisms.

Kaplan and Craver’s radical conclusion, that dynamicist “explanations” are genuine explanations only to the degree that they respect the (mechanist’s) 3M Constraint, needs more defense. The burden of proof always lies on those whose conclusions strike at popular assumptions. More than the discussion of a couple of landmark dynamicist models in neuroscience is needed (in their 2011, Kaplan and Craver also discuss the difference-of-Gaussians model of receptive field properties of mammalian visual neurons). Expectedly, dynamicists have taken up this challenge. Michael Silberstein and Anthony Chemero (2013), for example, argue that localization and decomposition strategies characterize mechanistic explanation, and that some explanations in systems neuroscience violate one of these assumptions, or both. Such violations in turn create a dilemma for mechanists. Either they must “stretch” their account of explanation, beyond decomposition and localization, to capture these recalcitrant cases, or they must accept “counterexamples” to the generality of mechanistic explanation, in both systems neuroscience and systems biology more generally.

Lauren Ross (2015) and Mazviita Chirimuuta (2014) independently appeal to Robert Batterman’s account of minimal model explanation as an important kind of non-mechanistic explanation in neuroscience. Minimal models were developed initially to characterize a kind of explanation in the physical sciences (see, e.g., Batterman and Rice 2014). Batterman’s account distinguishes between two different kinds of scientific “why-questions”: why a phenomenon manifests in particular circumstances; and why a phenomenon manifests generally, or in a number of different circumstances. Mechanistic explanations answer the first type of why-question. Here a “more details the better” (MDB) assumption (Chirimuuta 2014), akin to Kaplan and Craver’s “all things being equal” assumption about better explanations (mentioned above), has force. Minimal models, however, which minimize over the represented implementation details and hence violate MDB, are better able to answer the second type of scientific why-question. Ross (2015), quoting from computational neuroscientists Rinzel and Ermentrout, insists that models containing more details than necessary can obscure identification of critical elements by leaving too many open possibilities, especially when one is trying to answer Batterman’s second kind of why-question about a system’s behavior.

Chirimuuta and Ross each appeal to related resources from computational neuroscience to illustrate the applicability of Batterman’s minimal model explanation strategy. Ross appeals to “canonical models”, which represent “shared qualitative features of a number of distinct neural systems” (2015: 39). Her central example is the derivation of the Ermentrout-Kopell model of class I neuron excitability, which uses “mathematical abstraction techniques” to “reduce models of molecularly distinct neural systems to a single … canonical model”. Such a model “explains why molecularly diverse neural systems all exhibit the same qualitative behavior” (2015: 41), clearly a why-question of Batterman’s second type. Chirimuuta’s resource is “canonical neural computations” (CNCs):

computational modules that apply the same fundamental operations in a variety of contexts … a toolbox of computational operations that the brain applies in a number of different sense modalities and anatomic regions and which can be described at higher levels of abstraction from their biophysical implementation. (Chirimuuta 2014: 138)

Examples include shunting inhibition, linear filtering, recurrent amplification, and thresholding. Rather than being mechanism-sketches, awaiting further mechanistic details to be turned into full-blown how-actually mechanisms, CNCs are invoked in a different explanatory context, namely, one posing Batterman’s second type of why-question. Ross concurs concerning canonical models:

Understanding the approach dynamical systems neuroscientists take in explaining [system] behavior requires attending to their explanandum of interest and the unique modeling tools [e.g., canonical models] common in their field. (2015: 52)

In short, Chirimuuta’s and Ross’s replies to Kaplan and Craver’s challenge exemplify a common move in philosophy: save a particular form of explanation from collapsing into another by splitting the explanandum.
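The abstraction at work in Ross’s central example can be made concrete. The Ermentrout-Kopell canonical (“theta”) model of class I excitability is governed by a single equation, dθ/dt = (1 − cos θ) + (1 + cos θ)·I, and the sketch below (an illustration of the model’s qualitative behavior, not drawn from Ross’s paper) shows why it captures what molecularly diverse class I neurons share: periodic firing above a threshold input, quiescence below it.

```python
import math

def theta_neuron(I, dt=1e-3, t_max=100.0):
    """Euler-integrate the theta model dtheta/dt = (1 - cos theta) + (1 + cos theta)*I.
    Returns the times at which theta crosses pi (the model's "spikes")."""
    theta, t, spikes = 0.0, 0.0, []
    while t < t_max:
        theta += ((1 - math.cos(theta)) + (1 + math.cos(theta)) * I) * dt
        if theta >= math.pi:          # a pass through pi counts as one spike
            spikes.append(t)
            theta -= 2 * math.pi      # wrap back onto the circle
        t += dt
    return spikes

# Above the excitability threshold (I > 0) the model fires periodically;
# below it (I < 0) it settles at a resting fixed point and never spikes.
print(len(theta_neuron(0.1)) > 0, len(theta_neuron(-0.1)) == 0)  # prints: True True
```

The biophysical details of any particular neuron’s ion channels are absent here; only the qualitative transition from rest to repetitive firing survives the abstraction, which is exactly the feature the canonical model is meant to explain.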

Finally, to wrap up this discussion of mechanism ascendant, an analogue of Craver’s (2007) problem of accounting for “constitutive mechanistic relevance”, that is, for determining which active components of a system are actually part of the mechanism for a given system phenomenon, has also re-emerged in recent discussions. Robert Rupert (2009) suggests that “integration” is a key criterion for determining which set of causally contributing mechanisms constitutes the system for a task, based on the relative frequency with which sets of mechanisms co-contribute to causing task occurrences. He cashes out frequency of co-contribution as the probability that the set causes the cognitive task, conditional on every other co-occurring causal set. Felipe De Brigard (2017) challenges Rupert’s criterion, arguing that it cannot account for cognitive systems displaying two features, “diachronic dynamicity” along with “functional stability”. The frequency with which a given mechanism causally contributes to the same cognitive task (functional stability) can change over time (diachronic dynamicity). Although De Brigard emphasizes the critical importance of these features for Rupert’s integration criterion via a fanciful thought experiment, he also argues that they are a widespread phenomenon in human brains. Both features are found, for example, in evidence pertaining to the “Hemispheric Asymmetry Reduction in Older Adults”, in which tasks that recruit hemispherically localized regions of prefrontal cortex in younger adults show a reduction in hemispheric asymmetry in older adults. And both are found in the “Posterior-Anterior Shift with Aging”, where a task increases activity in anterior brain regions while decreasing activity in posterior regions in older adults, relative to the activity evoked by the same task in younger adults.

To replace Rupert’s notion of integration as a criterion for determining which sets of mechanisms constitute a cognitive system, De Brigard points to two promising recent developments in network neuroscience which potentially allow for parametrized time. “Scaled inclusivity” is a method for examining each node in a network and identifying its membership in “community structures” across different iterations of the network. “Temporal-dynamic network analyses” are a way to quantify changes in community structures or modules between networks at different time points. Both methods thereby identify “modular alliances”, which convey both co-activation and dynamic-change information in a single model. De Brigard suggests that these are thus the candidates with which cognitive systems could be identified.
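The core idea behind both methods, tracking whether nodes keep or change their community membership across iterations of a network, can be conveyed with a deliberately simple sketch. The function and data below are hypothetical illustrations, not the actual scaled-inclusivity or temporal-dynamic algorithms:

```python
def co_module_frequency(partitions, node_a, node_b):
    """Fraction of network snapshots in which two nodes share a community.
    A high, stable value suggests a persistent "modular alliance"; a value
    that shifts across time windows reflects dynamic reorganization."""
    shared = sum(1 for p in partitions if p[node_a] == p[node_b])
    return shared / len(partitions)

# Hypothetical community labels for three nodes across three network snapshots.
snapshots = [
    {"A": 1, "B": 1, "C": 2},
    {"A": 1, "B": 2, "C": 2},
    {"A": 1, "B": 1, "C": 2},
]
print(co_module_frequency(snapshots, "A", "B"))  # 2 of 3 snapshots: 2/3
```

A measure like this carries exactly the two kinds of information De Brigard wants in a single model: how often mechanisms co-contribute (functional stability) and how that co-membership changes over time (diachronic dynamicity).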

Clearly, much remains to be discussed regarding the impact mechanism has come to wield in philosophy of neuroscience over the last decade. But while mechanism has become the most dominant general perspective in the field, work in other areas continues. Michael Anderson defends the relevance of cognitive neuroscience for determining psychology’s taxonomy, independent of any commitment to mechanism. The most detailed development of his approach is in his (2014) book, After Phrenology, based on his influential “neural reuse” hypothesis. Each region of the brain, as recognized by the standard techniques of cognitive neuroscience (especially fMRI), engages in cognitive functions that are highly various, and forms different “neural partnerships” with other regions under different circumstances. Psychological categories are then to be reconceived along lines suggested by the wide-ranging empirical data in support of neural reuse. A genuine “post-phrenological” science of the mind must jettison the assumption that each brain region performs its own fundamental computation. In this fashion Anderson’s work explicitly continues philosophy of neuroscience’s ongoing interest in localizations of cognitive functions.

In shorter compass, Anderson (2015) investigates the relevance of cognitive neuroscience for reconceiving psychology’s basic categories, starting from a consequence of his neural reuse hypothesis. Attempts to map cognitive processes onto specific neural processes and brain regions reveal “many-to-many” relations. Not only do these relations show that combined anatomical-functional labels for brain regions (e.g., “fusiform face area”) are deceptive; they also call into question the possibility of deciding between alternative psychological taxonomies by appealing to cognitive neuroscientific data.

For all but the strongest proponents of psychology’s autonomy from neuroscience, these many-to-many mappings will suggest that the psychological taxonomy we bring to this mapping project needs revision. One need not be committed to any strong sense of psychoneural reduction, or the epistemological superiority of cognitive neuroscience to psychology, to draw this conclusion. The mere relevance of cognitive neuroscience for psychology’s categories is enough. This debate is thus “about the requirements for a unified science of the mind, and the proper role of neurobiological evidence in the construction of such an ontology” (2015: 70), not about the legitimacy of either.

Anderson divides revisionary projects for psychology into three kinds, based on the degree of revision each kind recommends for psychology, and the extent of one-to-one function-to-structure mappings the proposed revisions predict will be available. “Conservatives” foresee little need for extensive revisions of psychology’s basic taxonomy, even as more neuroscientific evidence is taken into account than current standard practices pursue. “Moderates” insist that our knowledge of brain function “can (and should) act as one arbiter of the psychologically real” (2015: 70), principally by “splitting” or “merging” psychological concepts currently in use. “Radicals” project even more drastic revisions, even to the most primitive concepts of psychology, and even after such revisions they still do not expect that many one-to-one mappings between brain regions and the new psychological primitives will be found. Although Anderson does not stress this connection (eliminative materialism has not been a prominent concern in philosophy of mind or neuroscience for two decades), readers will notice similar themes discussed in section 2 above, only now with scientific, not folk, psychology the target of the radical revisionists. A key criterion for any satisfactory reformulation of a cognitive ontology is the degree to which it supports two kinds of inferences: “forward inferences”, from the engagement of a specific cognitive function to the prediction of brain activity; and “reverse inferences”, from the observation that a specific brain region or pattern of activity occurs to the prediction that a specific cognitive operation is engaged. In light of this explicit criterion, Anderson usefully surveys the work of a number of prominent psychologists and cognitive neuroscientists in each of his revisionist groups. Given his broader commitment to neural reuse, and the trek it invites into “evolutionarily-inspired, ecological, and enactive terms”, Anderson’s own sentiments lie with the “radicals”:

language and mathematics, for instance, are best understood as extensions of our basic affordance processing capacities augmented with public symbol systems … The psychological science that results from this reappraisal may well look very different from the one we practice today. (2015: 75)

Landmark neuroscientific hypotheses remain a popular focus in recent philosophy of neuroscience. Berit Brogaard (2012), for example, argues for a reinterpretation of the standard “dissociation” understanding of Melvyn Goodale and David Milner’s (1992) celebrated “two visual processing streams”, a landmark, now “textbook” result from late-twentieth century neuroscience. Two components of the standard dissociation are key. The first is that distinct brain regions compute information relevant for visually guided “on-the-fly” actions and for object recognition, respectively: the dorsal stream (which runs from primary visual cortex through the middle temporal area into the superior and inferior parietal lobules) and the ventral stream (which runs from primary visual cortex through V4 into inferior temporal cortex). The second is that only information relevant for visual object recognition, processed in the ventral stream, contributes to the character of conscious visual experiences.

Brogaard’s concern is that this standard understanding challenges psychofunctionalism, our currently most plausible “naturalistic” account of mental states. Psychofunctionalism draws its account of mind directly from our best cognitive psychology. If φ is some mental state type that has inherited the content of a visual experience, then according to cognitive psychology a wide range of visually guided beliefs and desires, different kinds of visual memories, and so on, satisfy φ’s description. But by the standard “dissociation” account of Goodale and Milner’s two visual streams, only dorsal-stream states, and not ventral-stream states, represent truly egocentric visual properties, namely “relational properties which objects instantiate from the point of view of believers or perceivers” (Brogaard 2012: 572). But according to cognitive psychology, dorsal-stream states do not play this wide-ranging φ-role. So according to psychofunctionalism, “φ-mental states cannot represent egocentric properties” (2012: 572). But it seems “enormously plausible” that some of our perceptual beliefs and visual memories represent egocentric properties. So either we reject psychofunctionalism, and with it our most plausible naturalization project for determining whether a given mental state is instantiated, or we reject the standard dissociation interpretation of Goodale and Milner’s two visual streams hypothesis, despite the wealth of empirical evidence supporting it. Neither horn of this dilemma looks comfortably graspable, although the first might be thought more so, since psychofunctionalism as a general theory of mind lacks the kind of strong empirical backing that the standard interpretation of Goodale and Milner’s hypothesis enjoys.

Nevertheless, Brogaard recommends retaining psychofunctionalism, and instead rejecting “a particular formulation” of Goodale and Milner’s two visual streams hypothesis. The interpretation to reject insists that “dorsal-stream information cannot contribute to the potentially conscious representations computed by the ventral stream” (2012: 586–587). Egocentric representations of visual information computed by the dorsal stream contribute to conscious visual stream representations “via feedback connections” from dorsal- to ventral-stream neurons (2012: 586). This isn’t to deny dissociation:

Information about the egocentric properties of objects is processed by the dorsal stream, and information about allocentric properties of objects is processed by the ventral stream. (2012: 586)

But this dissociation hypothesis “has no bearing on what information is passed on to parts of the brain that process information which correlated with visual awareness” (2012: 586). With this re-interpretation, psychofunctionalism is rendered consistent with Goodale and Milner’s two-stream hypothesis, with its dorsal “where/how” and ventral “what” streams, and the wealth of empirical evidence that supports it. According to Brogaard, psychofunctionalism can thereby “correctly treat perceptual and cognitive states that carry information processed in the ventral visual stream as capable of representing egocentric properties” (2012: 586).

Despite philosophy of neuroscience’s continuing focus on cognitive/systems/computational neuroscience (see the discussion in section 7 above), interest in neurobiology’s cellular/molecular mainstream appears to be increasing. One notable paper is Ann-Sophie Barwich and Karim Bschir’s (2017) historical-cum-philosophical study of G-protein coupled receptors (GPCRs). Work on the structure and functional significance of these proteins has dominated molecular neuroscience for the past forty years; their role in the mechanisms of a variety of cognitive functions is now empirically documented beyond question. And yet one finds little interest in, or even notice of, this shift in mainstream neuroscience among philosophers. Barwich and Bschir’s yeoman historical research on the discovery and development of these objects pays off philosophically. The role of manipulability as a criterion for entity realism in the science-in-practice of wet-lab research becomes meaningful “only once scientists have decided how to conceptually coordinate measurable effects distinctly to a scientific object” (2017: 1317). Scientific objects like GPCRs get assigned varying degrees of reality throughout different stages of the discovery process. Such an object’s role in evaluating the reality of “neighboring elements of enquiry” becomes part of the criteria of its reality as well.

The impact of science-in-practice on philosophy of science generally has been felt acutely in the philosophy of neuroscience, most notably in increased philosophical interest in neuroscientific experimentation. In itself this should not surprise. Neuroscience relies heavily on laboratory experimentation, especially within its cellular and molecular, “Society for Neuroscience” mainstream. So the call to understand experiment should beckon any philosopher who ventures into neuroscience’s cellular/molecular foundations. Two papers by Jacqueline Sullivan (2009, 2010) have been important in this new emphasis. In her (2009) Sullivan acknowledges both Bickle’s (2003) and Craver’s (2007) focus on cellular and molecular mechanisms of long-term potentiation, an experience-driven form of synaptic plasticity. But she insists that broader philosophical commitments, which lead Bickle to a ruthlessly reductionist and Craver to a mosaic-unity “global” account, obscure important aspects of real laboratory neuroscience practice. She emphasizes the role of “subprotocols”, which specify how data are to be gathered, in her model of “the experimental process”, and illustrates these notions with a number of examples. Her analysis reveals an important, underappreciated tension between a pair of widely accepted experimental norms. Pursuing “reliability” drives experimenters more deeply into extensive laboratory controls. Pursuing “external validity” drives them toward enriched experimental environments that more closely represent the messy natural environment beyond the laboratory. These two norms commonly conflict: in order to get more of one, scientists introduce conditions that give them less of the other.

In her (2010) Sullivan offers a detailed history of the Morris water maze task, tracing her account back to Morris’s original publications. Philosophers of neuroscience have uncritically assumed that the water maze is a widely accepted neuroscience protocol for rodent spatial learning and memory, but the detailed scientific history is not so clear on this interpretation. Scientific commentary over time on what this task measures, including some from Morris himself, reveals no clear consensus. Sullivan traces the source of this scientific inconsistency back to the impact of 1980s-era cellular-molecular reductionism driving experimental behavioral neurobiology protocols like the Morris water maze.

A different motivation drives neurobiologist Alcino Silva, neuroinformaticist Anthony Landreth, and philosopher of neuroscience John Bickle’s (2014) focus on experimentation. All contemporary sciences are growing at a vertiginous pace, but perhaps none more so than neuroscience. It is no longer possible for any single scientist to keep up with all the relevant published literature in even his or her narrow research field, or fully to comprehend its implications. An overall lack of clarity and consensus about what is known, what remains doubtful, and what has been disproven creates special problems for experiment planning. There is a recognized and urgent need to develop strategies and tools to address these problems. Toward this explicit end, Silva, Landreth, and Bickle’s book describes a framework and a set of principles for organizing the published record. They derive their framework and principles directly from landmark case studies from the influential neuroscientific field of molecular and cellular cognition (MCC), and describe how their framework can be used to generate maps of experimental findings. Scientists armed with these research maps can then determine more efficiently what has been accomplished in their fields, and where the knowledge gaps still reside. The technology needed to automate the generation of these maps already exists. Silva, Landreth, and Bickle sketch the transformative, revolutionary impact these maps can have on current science.

Three goals motivate Silva, Landreth, and Bickle’s approach. First, they derive their framework from the cellular and molecular neurobiology of learning and memory. This choice was due strictly to familiarity with the science. Silva was instrumental in bringing gene targeting techniques applied to mammals into behavioral neuroscience, and Bickle’s focus on ruthlessly reductive neuroscience was built on these and other experimental results. And while each of their framework’s different kinds of experiments and evidence has been recognized by others, theirs purports to be the first to systematize this information explicitly toward the goal of facilitating experimental planning by practicing scientists. Silva, Landreth, and Bickle insist that important new experiments can be identified and planned by methodically filling in the different forms of evidence recognized by their framework, and applying the different forms of experiments to the gaps in the experimental record revealed by this process.

Second, Silva, Landreth, and Bickle take head-on the problem that the growing amount, complexity, and integration of the published literature poses for experiment planning. They show how weighted graphical representations of research findings can be used to guide research decisions, and how to construct these representations. The principles for constructing these maps are the principles for integrating experimental results, derived directly from landmark published MCC research. Using a case study from recent molecular neuroscience, they show how to generate small maps that reflect a series of experiments, and how to combine these small maps to illustrate an entire field of neuroscience research.

Finally, Silva, Landreth, and Bickle begin to develop a science of experiment planning. They envision the causal graphs that compose their research maps playing a role similar to that played by statistics in the already-developed science of data analysis. Such a resource could have profound implications for further developing citation indices and other impact measures for evaluating contributions to a field, from those of individual scientists to those of entire institutions.
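The flavor of such causal-graph research maps, and of how a map might expose knowledge gaps, can be conveyed with a minimal sketch. The node names, weights, and gap-finding heuristic below are invented for illustration and are not Silva, Landreth, and Bickle’s actual formalism:

```python
# A research map as a weighted causal graph: edges record experimentally
# supported causal links between phenomena, weighted by strength of evidence.
# All node names and weights here are hypothetical placeholders.
research_map = {
    ("phenomenon_A", "phenomenon_B"): 0.9,
    ("phenomenon_B", "phenomenon_C"): 0.6,
}

def knowledge_gaps(nodes, edge_weights):
    """Return ordered node pairs with no recorded causal evidence in either
    direction: candidate targets for new connection experiments."""
    tested = set(edge_weights) | {(b, a) for (a, b) in edge_weights}
    return [(a, b) for a in nodes for b in nodes
            if a != b and (a, b) not in tested]

nodes = ["phenomenon_A", "phenomenon_B", "phenomenon_C"]
print(knowledge_gaps(nodes, research_map))
# [('phenomenon_A', 'phenomenon_C'), ('phenomenon_C', 'phenomenon_A')]
```

The point of the sketch is only that once findings are encoded as a graph, “where the knowledge gaps still reside” becomes a mechanical query rather than a literature-survey judgment call.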

More recently, Bickle and Kostko (2018) have extended Silva, Landreth, and Bickle’s framework beyond the neurobiology of learning and memory. Their case study comes from developmental and social neuroscience: Michael Meaney and Moshe Szyf’s work on the epigenetic effects of rodent maternal nursing behaviors on offspring stress responses. Using the details of this case study they elaborate on a notion that Silva, Landreth, and Bickle leave underdeveloped, that of experiments designed explicitly so that their results, if successful, can be integrated directly into an already-existing background of established results. And they argue that such experiments, “integratable by design” with others, are aimed not at establishing evidence for individual causal relations among neuroscientific kinds, but rather at formulating entire causal pathways connecting multiple phenomena. Their emphasis on causal paths relates to that of Lauren Ross (forthcoming). Ross’s work is especially interesting in this context because she uses her causal pathway concept to address “causal selection”, which has to do with distinguishing between background conditions and “true” (triggering) causes of some outcome of interest. For Silva, Landreth, and Bickle (2014), accounting for this distinction is likewise crucial, and they rely on a specific kind of connection experiment, “positive manipulations”, to draw it. Bickle and Kostko’s appeal to causal paths in a detailed case study from recent developmental neurobiology might help bridge Silva, Landreth, and Bickle’s broader work on neurobiological experimentation with Ross’s work drawn from biology more generally.

Bibliography

  • Akins, Kathleen A., 1993a, “What Is It Like to Be Boring and Myopic?”, in Dennett and His Critics: Demystifying Mind, Bo Dahlbom (ed.), Cambridge, MA: Blackwell, 124–160.
  • –––, 1993b, “A Bat without Qualities?”, in Readings in Mind and Language, Martin Davies and Glyn W. Humphreys (eds.), (Consciousness: Psychological and Philosophical Essays 2), Cambridge, MA: Blackwell Publishing, 258–273.
  • –––, 1996, “Of Sensory Systems and the ‘Aboutness’ of Mental States”, Journal of Philosophy, 93(7): 337–372. doi:10.2307/2941125
  • Anderson, Michael L., 2014, After Phrenology: Neural Reuse and the Interactive Brain, Cambridge, MA: The MIT Press.
  • –––, 2015, “Mining the Brain for a New Taxonomy of the Mind”, Philosophy Compass, 10(1): 68–77. doi:10.1111/phc3.12155
  • Aston-Jones, Gary, Robert Desimone, J. Driver, Steven J. Luck, and Michael Posner, 1999, “Attention”, in Zigmond et al. 1999: 1385–1410.
  • Balzer, Wolfgang, C. Ulises Moulines, and Joseph D. Sneed, 1987, An Architectonic for Science, Dordrecht: Springer Netherlands. doi:10.1007/978-94-009-3765-9
  • Barwich, Ann-Sophie and Karim Bschir, 2017, “The Manipulability of What? The History of G-Protein Coupled Receptors”, Biology & Philosophy, 32(6): 1317–1339. doi:10.1007/s10539-017-9608-9
  • Batterman, Robert W. and Collin C. Rice, 2014, “Minimal Model Explanations”, Philosophy of Science, 81(3): 349–376. doi:10.1086/676677
  • Bechtel, William, 1998, “Representations and Cognitive Explanations: Assessing the Dynamicist’s Challenge in Cognitive Science”, Cognitive Science, 22(3): 295–318. doi:10.1207/s15516709cog2203_2
  • Bechtel, William and Jennifer Mundale, 1999, “Multiple Realizability Revisited: Linking Cognitive and Neural States”, Philosophy of Science, 66(2): 175–207. doi:10.1086/392683
  • Bechtel, William and Robert C. Richardson, 1993, Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Princeton, NJ: Princeton University Press.
  • Bechtel, William, Pete Mandik, Jennifer Mundale, and Robert Stufflebeam (eds.), 2001, Philosophy and the Neurosciences: A Reader, Malden, MA: Blackwell.
  • Bermúdez, José Luis, 1998, The Paradox of Self-Consciousness, (Representation and Mind), Cambridge, MA: MIT Press.
  • Bickle, John, 1992, “Revisionary Physicalism”, Biology & Philosophy, 7(4): 411–430. doi:10.1007/BF00130060
  • –––, 1993, “Philosophy Neuralized: A Critical Notice of P.M. Churchland’s A Neurocomputational Perspective”, Behavior and Philosophy, 20(2): 75–88.
  • –––, 1995, “Psychoneural Reduction of the Genuinely Cognitive: Some Accomplished Facts”, Philosophical Psychology, 8(3): 265–285. doi:10.1080/09515089508573158
  • –––, 1998, Psychoneural Reduction: The New Wave, Cambridge, MA: MIT Press.
  • –––, 2003, Philosophy and Neuroscience: A Ruthlessly Reductive Account, Norwell, MA: Springer Academic Publishers.
  • ––– (ed.), 2009, The Oxford Handbook of Philosophy and Neuroscience, New York: Oxford University Press. doi:10.1093/oxfordhb/9780195304787.001.0001
  • Bickle, John and Aaron Kostko, 2018, “Connection Experiments in Neurobiology”, Synthese, 195(12): 5271–5295. doi:10.1007/s11229-018-1838-0
  • Biro, J. I., 1991, “Consciousness and Subjectivity”, in Consciousness, Enrique Villanueva (ed.), (Philosophical Issues 1), 113–133. doi:10.2307/1522926
  • Bliss, T. V. P. and T. Lømo, 1973, “Long-Lasting Potentiation of Synaptic Transmission in the Dentate Area of the Anaesthetized Rabbit Following Stimulation of the Perforant Path”, The Journal of Physiology, 232(2): 331–356. doi:10.1113/jphysiol.1973.sp010273
  • Block, Ned, 1987, “Advertisement for a Semantics for Psychology”, Midwest Studies in Philosophy, 10: 615–678. doi:10.1111/j.1475-4975.1987.tb00558.x
  • –––, 1995, “On a Confusion about a Function of Consciousness”, Behavioral and Brain Sciences, 18(2): 227–247. doi:10.1017/S0140525X00038188
  • Bogen, Jim, 2005, “Regularities and Causality; Generalizations and Causal Explanations”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2): 397–420. doi:10.1016/j.shpsc.2005.03.009
  • Bower, James M. and David Beeman, 1995, The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System, New York: Springer-Verlag.
  • Brogaard, Berit (Brit), 2012, “Vision for Action and the Contents of Perception”, Journal of Philosophy, 109(10): 569–587. doi:10.5840/jphil20121091028
  • Caplan, David N., T. Carr, James L. Gould, and R. Martin, 1999, “Language and Communication”, in Zigmond et al. 1999: 1329–1352.
  • Chalmers, David John, 1996, The Conscious Mind: In Search of a Fundamental Theory, (Philosophy of Mind Series), New York: Oxford University Press.
  • Chirimuuta, M., 2014, “Minimal Models and Canonical Neural Computations: The Distinctness of Computational Explanation in Neuroscience”, Synthese, 191(2): 127–153. doi:10.1007/s11229-013-0369-y
  • Churchland, Patricia Smith, 1986, Neurophilosophy: Toward a Unified Science of the Mind-Brain, (Computational Models of Cognition and Perception), Cambridge, MA: MIT Press.
  • Churchland, Patricia Smith and Terrence J. Sejnowski, 1992, The Computational Brain, (Computational Neuroscience), Cambridge, MA: MIT Press.
  • Churchland, Paul M., 1979, Scientific Realism and the Plasticity of Mind, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511625435
  • –––, 1981, “Eliminative Materialism and the Propositional Attitudes”, The Journal of Philosophy, 78(2): 67–90. doi:10.2307/2025900
  • –––, 1987, Matter and Consciousness, revised edition, Cambridge, MA: MIT Press.
  • –––, 1989, A Neurocomputational Perspective, Cambridge, MA: MIT Press.
  • –––, 1995, The Engine of Reason, the Seat of the Soul, Cambridge, MA: MIT Press.
  • –––, 1996, “The Rediscovery of Light”, The Journal of Philosophy, 93(5): 211–228. doi:10.2307/2940998
  • Churchland, Paul M. and Patricia S. Churchland, 1997, “Recent Work on Consciousness: Philosophical, Theoretical, and Empirical”, Seminars in Neurology, 17(2): 179–186. doi:10.1055/s-2008-1040928
  • Clark, Andy, 2016, Surfing Uncertainty: Prediction, Action, and the Embodied Mind, New York: Oxford University Press. doi:10.1093/acprof:oso/9780190217013.001.0001
  • Clark, Austen, 1993, Sensory Qualities, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198236801.001.0001
  • Craver, Carl F., 2007, Explaining the Brain: What the Science of the Mind-Brain Could Be, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199299317.001.0001
  • Craver, Carl F. and Lindley Darden, 2001, “Discovering Mechanisms in Neurobiology”, in Machamer, Grush, and McLaughlin 2001: 112–137.
  • De Brigard, Felipe, 2017, “Cognitive Systems and the Changing Brain”, Philosophical Explorations, 20(2): 224–241. doi:10.1080/13869795.2017.1312503
  • Dennett, Daniel C., 1978, “Why You Can’t Make a Computer That Feels Pain”, Synthese, 38(3): 415–456. doi:10.1007/BF00486638
  • –––, 1991, Consciousness Explained, New York: Little Brown.
  • –––, 1995, “The Path Not Taken”, Behavioral and Brain Sciences, 18(2): 252–253. doi:10.1017/S0140525X00038243
  • Dretske, Fred, 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
  • –––, 1988, Explaining Behavior, Cambridge, MA: MIT Press.
  • Eliasmith, Chris, 2009, “Neurocomputational Models: Theory, Application, Philosophical Consequences”, in Bickle 2009: 346–369. doi:10.1093/oxfordhb/9780195304787.003.0014
  • Eliasmith, Chris and Charles H. Anderson, 2003, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems, (Computational Neuroscience), Cambridge, MA: MIT Press.
  • Farah, Martha J., 2008, “Neuroethics and the Problem of Other Minds: Implications of Neuroscience for the Moral Status of Brain-Damaged Patients and Nonhuman Animals”, Neuroethics, 1(1): 9–18. doi:10.1007/s12152-008-9006-8
  • Farah, Martha J. and Paul Root Wolpe, 2004, “Monitoring and Manipulating Brain Function: New Neuroscience Technologies and Their Ethical Implications”, The Hastings Center Report, 34(3): 35–45.
  • Feyerabend, Paul K., 1963, “Comment: Mental Events and the Brain”, The Journal of Philosophy, 60(11): 295–296. doi:10.2307/2023030
  • Flanagan, Owen, 2009, “Neuro-Eudaimonics or Buddhists Lead Neuroscientists to the Seat of Happiness”, in Bickle 2009: 582–600. doi:10.1093/oxfordhb/9780195304787.003.0024
  • Fodor, Jerry A., 1974, “Special Sciences (or: The Disunity of Science as a Working Hypothesis)”, Synthese, 28(2): 97–115. doi:10.1007/BF00485230
  • –––, 1981, RePresentations, Cambridge, MA: MIT Press.
  • –––, 1987, Psychosemantics, Cambridge, MA: MIT Press.
  • Fodor, Jerry and Ernest LePore, 1992, Holism: A Shopper’s Guide, Cambridge, MA: MIT Press.
  • Gazzaniga, Michael S. (ed.), 1995,The CognitiveNeurosciences, Cambridge, MA: MIT Press.
  • Friston, Karl and Stefan Kiebel, 2009, “Predictive Codingunder the Free-Energy Principle”,Philosophical Transactionsof the Royal Society B: Biological Sciences, 364(1521):1211–1221. doi:10.1098/rstb.2008.0300
  • Georgopoulos, A. P., A. B. Schwartz, and R. E. Kettner, 1986, “Neuronal Population Coding of Movement Direction”, Science, 233(4771): 1416–1419. doi:10.1126/science.3749885
  • Goldman, Alvin I., 2006, Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading, New York: Oxford University Press. doi:10.1093/0195138929.001.0001
  • Goodale, Melvyn A. and A. David Milner, 1992, “Separate Visual Pathways for Perception and Action”, Trends in Neurosciences, 15(1): 20–25.
  • Grush, Rich, 2001, “The Semantic Challenge to Computational Neuroscience”, in Machamer, Grush, and McLaughlin 2001: 155–172.
  • –––, 2004, “The Emulation Theory of Representation: Motor Control, Imagery, and Perception”, Behavioral and Brain Sciences, 27(3): 377–396. doi:10.1017/S0140525X04000093
  • –––, 2005, “Brain Time and Phenomenological Time”, in Cognition and the Brain: The Philosophy and Neuroscience Movement, Andrew Brook and Kathleen Akins (eds.), Cambridge: Cambridge University Press, 160–207. doi:10.1017/CBO9780511610608.006
  • Haken, H., J. A. S. Kelso, and H. Bunz, 1985, “A Theoretical Model of Phase Transitions in Human Hand Movements”, Biological Cybernetics, 51(5): 347–356. doi:10.1007/BF00336922
  • Hardcastle, Valerie Gray, 1997, “When a Pain Is Not”, The Journal of Philosophy, 94(8): 381–409. doi:10.2307/2564606
  • Hardin, C.L., 1988, Color for Philosophers: Unweaving the Rainbow, Indianapolis, IN: Hackett.
  • Haugeland, John, 1985, Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press.
  • Hawkins, Robert D. and Eric R. Kandel, 1984, “Is There a Cell-Biological Alphabet for Learning?”, Psychological Review, 91(3): 375–391.
  • Hebb, D.O., 1949, The Organization of Behavior: A Neuropsychological Theory, New York: Wiley.
  • Hirstein, William, 2005, Brain Fiction: Self-Deception and the Riddle of Confabulation, Cambridge, MA: MIT Press.
  • Hodgkin, Alan L., Andrew F. Huxley, and Bernard Katz, 1952, “Measurement of Current‐voltage Relations in the Membrane of the Giant Axon of Loligo”, The Journal of Physiology, 116(4): 424–448. doi:10.1113/jphysiol.1952.sp004716
  • Hohwy, Jakob, 2013, The Predictive Mind, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199682737.001.0001
  • Hooker, C.A., 1981a, “Towards a General Theory of Reduction. Part I: Historical and Scientific Setting”, Dialogue, 20(1): 38–59. doi:10.1017/S0012217300023088
  • –––, 1981b, “Towards a General Theory of Reduction. Part II: Identity in Reduction”, Dialogue, 20(2): 201–236. doi:10.1017/S0012217300023301
  • –––, 1981c, “Towards a General Theory of Reduction. Part III: Cross-Categorical Reduction”, Dialogue, 20(3): 496–529. doi:10.1017/S0012217300023593
  • Horgan, Terence and George Graham, 1991, “In Defense of Southern Fundamentalism”, Philosophical Studies, 62(2): 107–134. doi:10.1007/BF00419048
  • Hubel, D. H. and T. N. Wiesel, 1962, “Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex”, The Journal of Physiology, 160(1): 106–154. doi:10.1113/jphysiol.1962.sp006837
  • Jackson, Frank and Philip Pettit, 1990, “In Defence of Folk Psychology”, Philosophical Studies, 59(1): 31–54. doi:10.1007/BF00368390
  • Kandel, Eric R., 1976, Cellular Basis of Behavior: An Introduction to Behavioral Neurobiology, Oxford, England: W. H. Freeman.
  • Kaplan, David Michael and Carl F. Craver, 2011, “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective”, Philosophy of Science, 78(4): 601–627. doi:10.1086/661755
  • Kawato, M., 1999, “Internal Models for Motor Control and Trajectory Planning”, Current Opinion in Neurobiology, 9(6): 718–727.
  • Klein, C., 2010, “Images Are Not the Evidence in Neuroimaging”, The British Journal for the Philosophy of Science, 61(2): 265–278. doi:10.1093/bjps/axp035
  • Kolb, Bryan and Ian Q. Whishaw, 1996, Fundamentals of Human Neuropsychology, 4th edition, New York: W.H. Freeman.
  • Kosslyn, Stephen M., 1997, “Mental Imagery”, in Conversations in the Cognitive Neurosciences, Michael S. Gazzaniga (ed.), Cambridge, MA: MIT Press, pp. 37–52.
  • Lee, Choongkil, William H. Rohrer, and David L. Sparks, 1988, “Population Coding of Saccadic Eye Movements by Neurons in the Superior Colliculus”, Nature, 332(6162): 357–360. doi:10.1038/332357a0
  • Lehky, Sidney R. and Terrence J. Sejnowski, 1988, “Network Model of Shape-from-Shading: Neural Function Arises from Both Receptive and Projective Fields”, Nature, 333(6172): 452–454. doi:10.1038/333452a0
  • Lettvin, J., H. Maturana, W. McCulloch, and W. Pitts, 1959, “What the Frog’s Eye Tells the Frog’s Brain”, Proceedings of the IRE, 47(11): 1940–1951. doi:10.1109/JRPROC.1959.287207
  • Levine, Joseph, 1983, “Materialism and Qualia: The Explanatory Gap”, Pacific Philosophical Quarterly, 64(4): 354–361. doi:10.1111/j.1468-0114.1983.tb00207.x
  • Levy, Neil, 2007, Neuroethics: Challenges for the 21st Century, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511811890
  • Llinás, Rodolfo R., 1975, “The Cortex of the Cerebellum”, Scientific American, 232(1/January): 56–71. doi:10.1038/scientificamerican0175-56
  • Llinás, Rodolfo R. and Patricia Smith Churchland (eds.), 1996, The Mind-Brain Continuum: Sensory Processes, Cambridge, MA: MIT Press.
  • Lloyd, Dan, 2002, “Functional MRI and the Study of Human Consciousness”, Journal of Cognitive Neuroscience, 14(6): 818–831. doi:10.1162/089892902760191027
  • –––, 2003, Radiant Cool: A Novel Theory of Consciousness, Cambridge, MA: MIT Press.
  • Machamer, Peter, Lindley Darden, and Carl F. Craver, 2000, “Thinking about Mechanisms”, Philosophy of Science, 67(1): 1–25. doi:10.1086/392759
  • Machamer, Peter K., Rick Grush, and Peter McLaughlin (eds.), 2001, Theory and Method in the Neurosciences (Pittsburgh-Konstanz Series in the Philosophy and History of Science), Pittsburgh, PA: University of Pittsburgh Press.
  • Magistretti, Pierre J., 1999, “Brain Energy Metabolism”, in Zigmond et al. 1999: 389–413.
  • Nagel, Thomas, 1971, “Brain Bisection and the Unity of Consciousness”, Synthese, 22(3–4): 396–413. doi:10.1007/BF00413435
  • –––, 1974, “What Is It Like to Be a Bat?”, The Philosophical Review, 83(4): 435–450. doi:10.2307/2183914
  • Piccinini, Gualtiero and Carl Craver, 2011, “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches”, Synthese, 183(3): 283–311. doi:10.1007/s11229-011-9898-4
  • Place, U. T., 1956, “Is Consciousness a Brain Process?”, British Journal of Psychology, 47(1): 44–50. doi:10.1111/j.2044-8295.1956.tb00560.x
  • Polger, Thomas W., 2004, Natural Minds, Cambridge, MA: MIT Press.
  • Prinz, Jesse J., 2004, Gut Reactions: A Perceptual Theory of the Emotions, New York: Oxford University Press.
  • –––, 2007, The Emotional Construction of Morals, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199571543.001.0001
  • Putnam, Hilary, 1967, “Psychological Predicates”, in Art, Mind, and Religion: Proceedings of the 1965 Oberlin Colloquium in Philosophy, W. H. Capitan and D. D. Merrill (eds.), Pittsburgh, PA: University of Pittsburgh Press, pp. 49–54.
  • Rall, Wilfrid, 1959, “Branching Dendritic Trees and Motoneuron Membrane Resistivity”, Experimental Neurology, 1(5): 491–527. doi:10.1016/0014-4886(59)90046-9
  • Ramsey, William, 1992, “Prototypes and Conceptual Analysis”, Topoi, 11(1): 59–70. doi:10.1007/BF00768299
  • Roskies, Adina L., 2007, “Are Neuroimages Like Photographs of the Brain?”, Philosophy of Science, 74(5): 860–872. doi:10.1086/525627
  • –––, 2009, “What’s ‘Neu’ in Neuroethics?”, in Bickle 2009: 454–472. doi:10.1093/oxfordhb/9780195304787.003.0019
  • Ross, Lauren N., 2015, “Dynamical Models and Explanation in Neuroscience”, Philosophy of Science, 82(1): 32–54. doi:10.1086/679038
  • –––, forthcoming, “Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters”, The British Journal for the Philosophy of Science, first online: 12 December 2018. doi:10.1093/bjps/axy078
  • Rumelhart, D. E., G. E. Hinton, and J. L. McClelland, 1986, “A Framework for Parallel Distributed Processing”, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, D. E. Rumelhart and J. L. McClelland (eds.), Cambridge, MA: MIT Press, pp. 45–76.
  • Rupert, Robert D., 2009, Cognitive Systems and the Extended Mind, New York: Oxford University Press. doi:10.1093/acprof:oso/9780195379457.001.0001
  • Sacks, Oliver, 1985, The Man Who Mistook his Wife for a Hat and Other Clinical Tales, New York: Summit Books.
  • Schaffner, Kenneth F., 1992, “Philosophy of Medicine”, in Introduction to the Philosophy of Science: A Text by Members of the Department of the History and Philosophy of Science of the University of Pittsburgh, Merrilee H. Salmon, John Earman, Clark Glymour, James G. Lennox, Peter Machamer, J. E. McGuire, John D. Norton, Wesley C. Salmon, and Kenneth F. Schaffner (eds.), Englewood Cliffs, NJ: Prentice-Hall, pp. 310–345.
  • Schechter, Elizabeth, 2018, Self-Consciousness and ‘Split’ Brains: The Minds’ I, New York: Oxford University Press. doi:10.1093/oso/9780198809654.001.0001
  • Schneider, Susan, 2009, “Future Minds: Transhumanism, Cognitive Enhancement, and the Nature of Persons”, in Penn Center Guide to Bioethics, Vardit Ravitsky, Autumn Fiester, and Arthur L. Caplan (eds.), New York: Springer, pp. 95–110.
  • Schroeder, Timothy, 2004, Three Faces of Desire, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195172379.001.0001
  • Silberstein, Michael and Anthony Chemero, 2013, “Constraints on Localization and Decomposition as Explanatory Strategies in the Biological Sciences”, Philosophy of Science, 80(5): 958–970. doi:10.1086/674533
  • Silva, Alcino J., Anthony Landreth, and John Bickle, 2014, Engineering the Next Revolution in Neuroscience: The New Science of Experiment Planning, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199731756.001.0001
  • Smart, J. J. C., 1959, “Sensations and Brain Processes”, The Philosophical Review, 68(2): 141–156. doi:10.2307/2182164
  • Stich, Stephen, 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
  • Stufflebeam, Robert S. and William Bechtel, 1997, “PET: Exploring the Myth and the Method”, Philosophy of Science, 64(supplement/December): S95–S106. doi:10.1086/392590
  • Sullivan, Jacqueline A., 2009, “The Multiplicity of Experimental Protocols: A Challenge to Reductionist and Non-Reductionist Models of the Unity of Neuroscience”, Synthese, 167(3): 511–539. doi:10.1007/s11229-008-9389-4
  • –––, 2010, “Reconsidering ‘Spatial Memory’ and the Morris Water Maze”, Synthese, 177(2): 261–283. doi:10.1007/s11229-010-9849-5
  • Suppe, Frederick, 1974, The Structure of Scientific Theories, Urbana, IL: University of Illinois Press.
  • Tye, Michael, 1993, “Blindsight, the Absent Qualia Hypothesis, and the Mystery of Consciousness”, in Philosophy and the Cognitive Sciences, Christopher Hookway and Donald M. Peterson (eds.), (Royal Institute of Philosophy Supplement 34), Cambridge: Cambridge University Press, 19–40. doi:10.1017/S1358246100002447
  • Van Fraassen, Bas C., 1980, The Scientific Image, New York: Oxford University Press. doi:10.1093/0198244274.001.0001
  • Von Eckardt Klein, Barbara, 1975, “Some Consequences of Knowing Everything (Essential) There Is to Know About One’s Mental States”, The Review of Metaphysics, 29(1): 3–18.
  • –––, 1978, “Inferring Functional Localization from Neurological Evidence”, in Explorations in the Biology of Language, Edward Walker (ed.), Cambridge, MA: MIT Press, pp. 27–66.
  • Weber, Marcel, 2008, “Causes without Mechanisms: Experimental Regularities, Physical Laws, and Neuroscientific Explanation”, Philosophy of Science, 75(5): 995–1007. doi:10.1086/594541
  • Woodward, James, 2003, Making Things Happen: A Theory of Causal Explanation, Oxford: Oxford University Press. doi:10.1093/0195155270.001.0001
  • Zigmond, Michael J., Floyd E. Bloom, Story C. Landis, James L. Roberts, and Larry R. Squire (eds.), 1999, Fundamental Neuroscience, San Diego, CA: Academic Press.

Acknowledgments

Jonathan Kanzelmeyer and Mara McGuire assisted with the research for Section 8.

Copyright © 2019 by
John Bickle <jb1681@msstate.edu>
Peter Mandik <mandikp@wpunj.edu>
Anthony Landreth <anthony.w.landreth@gmail.com>

