Stanford Encyclopedia of Philosophy

Connectionism

First published Sun May 18, 1997; substantive revision Fri Aug 16, 2019

Connectionism is a movement in cognitive science that hopes to explain intellectual abilities using artificial neural networks (also known as “neural networks” or “neural nets”). Neural networks are simplified models of the brain composed of large numbers of units (the analogs of neurons) together with weights that measure the strength of connections between the units. These weights model the effects of the synapses that link one neuron to another. Experiments on models of this kind have demonstrated an ability to learn such skills as face recognition, reading, and the detection of simple grammatical structure.

Philosophers have become interested in connectionism because it promises to provide an alternative to the classical theory of the mind: the widely held view that the mind is something akin to a digital computer processing a symbolic language. Exactly how and to what extent the connectionist paradigm constitutes a challenge to classicism has been a matter of hot debate in recent years.

1. A Description of Neural Networks

A neural network consists of a large number of units joined together in a pattern of connections. Units in a net are usually segregated into three classes: input units, which receive information to be processed; output units, where the results of the processing are found; and units in between called hidden units. If a neural net were to model the whole human nervous system, the input units would be analogous to the sensory neurons, the output units to the motor neurons, and the hidden units to all other neurons.

Here is an illustration of a simple neural net:

[Figure: a diagram of three columns of units, with seven circles in the first column, four in the second, and three in the third. Each circle in one column is connected by a line to each circle in the next column.]

Each input unit has an activation value that represents some feature external to the net. An input unit sends its activation value to each of the hidden units to which it is connected. Each of these hidden units calculates its own activation value depending on the activation values it receives from the input units. This signal is then passed on to output units or to another layer of hidden units. Those hidden units compute their activation values in the same way, and send them along to their neighbors. Eventually the signal at the input units propagates all the way through the net to determine the activation values at all the output units.

The pattern of activation set up by a net is determined by the weights, or strength of connections between the units. Weights may be either positive or negative. A negative weight represents the inhibition of the receiving unit by the activity of a sending unit. The activation value for each receiving unit is calculated according to a simple activation function. Activation functions vary in detail, but they all conform to the same basic plan. The function sums together the contributions of all sending units, where the contribution of a unit is defined as the weight of the connection between the sending and receiving units times the sending unit’s activation value. This sum is usually modified further, for example, by adjusting the activation sum to a value between 0 and 1 and/or by setting the activation to zero unless a threshold level for the sum is reached. Connectionists presume that cognitive functioning can be explained by collections of units that operate in this way. Since it is assumed that all the units calculate pretty much the same simple activation function, human intellectual accomplishments must depend primarily on the settings of the weights between the units.
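The activation rule just described, a weighted sum of incoming activations squashed to a value between 0 and 1, can be sketched in a few lines of Python. This is a minimal illustration; the layer sizes and weight values are arbitrary choices, not taken from any particular model.

```python
import math

def sigmoid(x):
    """Squash a summed input to a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_activation(inputs, weights):
    """Sum each sending unit's activation times the connection weight,
    then apply the activation function."""
    total = sum(w * a for w, a in zip(weights, inputs))
    return sigmoid(total)

def layer_forward(inputs, weight_matrix):
    """Compute one layer's activations from the previous layer's."""
    return [unit_activation(inputs, row) for row in weight_matrix]

# A tiny feed-forward pass: 3 input units -> 2 hidden units -> 1 output unit.
# Negative weights inhibit the receiving unit, as described above.
hidden_weights = [[0.5, -1.0, 0.3],
                  [0.8,  0.2, -0.6]]
output_weights = [[1.0, -1.0]]

inputs = [1.0, 0.5, 0.0]
hidden = layer_forward(inputs, hidden_weights)
output = layer_forward(hidden, output_weights)
```

The signal placed on the input units propagates through the hidden layer to determine the output activations, exactly as in the prose description.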

The kind of net illustrated above is called a feed-forward net. Activation flows directly from inputs to hidden units and then on to the output units. More realistic models of the brain would include many layers of hidden units, and recurrent connections that send signals back from higher to lower levels. Such recurrence is necessary in order to explain such cognitive features as short-term memory. In a feed-forward net, repeated presentations of the same input produce the same output every time, but even the simplest organisms habituate to (or learn to ignore) repeated presentation of the same stimulus. Connectionists tend to avoid recurrent connections because little is understood about the general problem of training recurrent nets. However, Elman (1991) and others have made some progress with simple recurrent nets, where the recurrence is tightly constrained.

2. Neural Network Learning and Backpropagation

Finding the right set of weights to accomplish a given task is the central goal in connectionist research. Luckily, learning algorithms have been devised that can calculate the right weights for carrying out many tasks (see Hinton 1992 for an accessible review). These fall into two broad categories: supervised and unsupervised learning. Hebbian learning is the best known unsupervised form. As each input is presented to the net, weights between nodes that are active together are increased, while those weights connecting nodes that are not active together are decreased. This form of training is especially useful for building nets that can classify the input into useful categories. The most widely used supervised algorithm is called backpropagation. To use this method, one needs a training set consisting of many examples of inputs and their desired outputs for a given task. This external set of examples “supervises” the training process. If, for example, the task is to distinguish male from female faces, the training set might contain pictures of faces together with an indication of the sex of the person depicted in each one. A net that can learn this task might have two output units (indicating the categories male and female) and many input units, one devoted to the brightness of each pixel (tiny area) in the picture. The weights of the net to be trained are initially set to random values, and then members of the training set are repeatedly exposed to the net. The values for the input of a member are placed on the input units and the output of the net is compared with the desired output for this member. Then all the weights in the net are adjusted slightly in the direction that would bring the net’s output values closer to the values for the desired output. For example, when a male’s face is presented to the input units, the weights are adjusted so that the value of the male output unit is increased and the value of the female output unit is decreased.
After many repetitions of this process the net may learn to produce the desired output for each input in the training set. If the training goes well, the net may also have learned to generalize to the desired behavior for inputs and outputs that were not in the training set. For example, it may do a good job of distinguishing males from females in pictures that were never presented to it before.
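The training procedure described above can be sketched as follows. This is a minimal Python illustration: the tiny 2-2-1 architecture, the logical-AND training set, and the learning rate are stand-ins chosen for brevity, not a model from the literature.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hid, w_out):
    """Forward pass; an always-on bias unit plays the role of a threshold."""
    xb = x + [1.0]
    hid = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w_hid]
    hb = hid + [1.0]
    out = sigmoid(sum(w * v for w, v in zip(w_out, hb)))
    return hid, out

# Weights start at random values, as described above.
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

# A toy training set: inputs paired with their desired outputs.
examples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
            ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]

lr = 0.5
for _ in range(5000):          # many rounds of slight weight adjustment
    for x, target in examples:
        hid, out = forward(x, w_hid, w_out)
        # Compare the net's output with the desired output...
        delta_out = (target - out) * out * (1 - out)
        # ...and propagate the error signal back to the hidden layer.
        delta_hid = [delta_out * w * h * (1 - h) for w, h in zip(w_out, hid)]
        xb, hb = x + [1.0], hid + [1.0]
        # Nudge every weight slightly toward the desired output.
        w_out = [w + lr * delta_out * v for w, v in zip(w_out, hb)]
        w_hid = [[w + lr * d * v for w, v in zip(row, xb)]
                 for row, d in zip(w_hid, delta_hid)]
```

After training, the net's output for the input (1, 1) should be high and its output for (0, 0) low, illustrating how repeated slight adjustments let the net learn the training set.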

Training nets to model aspects of human intelligence is a fine art. Success with backpropagation and other connectionist learning methods may depend on quite subtle adjustment of the algorithm and the training set. Training typically involves hundreds of thousands of rounds of weight adjustment. Given the limitations of computers in the past, training a net to perform an interesting task took days or even weeks. More recently, the use of massively parallel dedicated processors (GPUs) has helped relieve these heavy computational burdens. But even here, some limitations to connectionist theories of learning will remain to be faced. Humans (and many less intelligent animals) display an ability to learn from single examples; for example, a child shown a novel two-wheeled vehicle and given the name “Segway” knows right away what a Segway is (Lake, Zaremba et al. 2015). Connectionist learning techniques such as backpropagation are far from explaining this kind of “one-shot” learning.

3. Samples of What Neural Networks Can Do

Connectionists have made significant progress in demonstrating the power of neural networks to master cognitive tasks. Here are three well-known experiments that have encouraged connectionists to believe that neural networks are good models of human intelligence. One of the most attractive of these efforts is Sejnowski and Rosenberg’s 1987 work on NETtalk, a net that can read English text. The training set for NETtalk was a large database consisting of English text coupled with its corresponding phonetic output, written in a code suitable for use with a speech synthesizer. Tapes of NETtalk’s performance at different stages of its training are very interesting listening. At first the output is random noise. Later, the net sounds like it is babbling, and later still as though it is speaking English double-talk (speech that is formed of sounds that resemble English words). At the end of training, NETtalk does a fairly good job of pronouncing the text given to it. Furthermore, this ability generalizes fairly well to text that was not presented in the training set.

Another influential early connectionist model was a net trained by Rumelhart and McClelland (1986) to predict the past tense of English verbs. The task is interesting because although most of the verbs in English (the regular verbs) form the past tense by adding the suffix “-ed”, many of the most frequent verbs are irregular (“is” / “was”, “come” / “came”, “go” / “went”). The net was first trained on a set containing a large number of irregular verbs, and later on a set of 460 verbs containing mostly regulars. The net learned the past tenses of the 460 verbs in about 200 rounds of training, and it generalized fairly well to verbs not in the training set. It even showed a good appreciation of “regularities” to be found among the irregular verbs (“send” / “sent”, “build” / “built”; “blow” / “blew”, “fly” / “flew”). During learning, as the system was exposed to the training set containing more regular verbs, it had a tendency to overregularize, i.e., to combine both irregular and regular forms (“break” / “broked”, instead of “break” / “broke”). This was corrected with more training. It is interesting to note that children are known to exhibit the same tendency to overregularize during language learning. However, there is hot debate over whether Rumelhart and McClelland’s net is a good model of how humans actually learn and process verb endings. For example, Pinker and Prince (1988) point out that the model does a poor job of generalizing to some novel regular verbs. They believe that this is a sign of a basic failing in connectionist models. Nets may be good at making associations and matching patterns, but they have fundamental limitations in mastering general rules such as the formation of the regular past tense. These complaints raise an important issue for connectionist modelers, namely whether nets can generalize properly to master cognitive tasks involving rules.
Despite Pinker and Prince’s objections, many connectionists believe that generalization of the right kind is still possible (Niklasson & van Gelder 1994).

Elman’s 1991 work on nets that can appreciate grammatical structure has important implications for the debate about whether neural networks can learn to master rules. Elman trained a simple recurrent network to predict the next word in a large corpus of English sentences. The sentences were formed from a simple vocabulary of 23 words using a subset of English grammar. The grammar, though simple, posed a hard test for linguistic awareness. It allowed unlimited formation of relative clauses while demanding agreement between the head noun and the verb. So for example, in the sentence

Any man that chases dogs that chase cats … runs.

the singular “man” must agree with the verb “runs” despite the intervening plural nouns (“dogs”, “cats”) which might cause the selection of “run”. One of the important features of Elman’s model is the use of recurrent connections. The values at the hidden units are saved in a set of so-called context units, to be sent back to the input level for the next round of processing. This looping back from hidden to input layers provides the net with a rudimentary form of memory of the sequence of words in the input sentence. Elman’s nets displayed an appreciation of the grammatical structure of sentences that were not in the training set. The net’s command of syntax was measured in the following way. Predicting the next word in an English sentence is, of course, an impossible task. However, these nets succeeded, at least by the following measure. At a given point in an input sentence, the output units for words that are grammatical continuations of the sentence at that point should be active and output units for all other words should be inactive. After intensive training, Elman was able to produce nets that displayed perfect performance on this measure, including sentences not in the training set. The work of Christiansen and Chater (1999a) and Morris, Cottrell, and Elman (2000) extends this research to more complex grammars. For a broader view of progress in connectionist natural language processing see summaries by Christiansen and Chater (1999b), and Rohde and Plaut (2003).
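The looping of hidden activations back through context units can be sketched as follows. This is a toy Python illustration of the mechanism only; the weights are made-up values rather than the learned weights of Elman's actual model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SimpleRecurrentNet:
    """Forward pass of an Elman-style net: after each step the hidden
    pattern is copied into context units and fed back in on the next step,
    giving the net a rudimentary memory of the input sequence."""

    def __init__(self, w_in, w_context, w_out, n_hidden):
        self.w_in, self.w_context, self.w_out = w_in, w_context, w_out
        self.context = [0.5] * n_hidden   # context starts at a neutral value

    def step(self, x):
        hid = [sigmoid(sum(w * v for w, v in zip(wi, x)) +
                       sum(w * c for w, c in zip(wc, self.context)))
               for wi, wc in zip(self.w_in, self.w_context)]
        self.context = hid[:]             # save hidden pattern for next round
        return [sigmoid(sum(w * h for w, h in zip(row, hid)))
                for row in self.w_out]

# Illustrative fixed weights (a real model would learn these).
net = SimpleRecurrentNet(
    w_in=[[1.0, -1.0], [-0.5, 0.8]],
    w_context=[[0.9, -0.4], [0.3, 0.7]],
    w_out=[[1.2, -1.1]],
    n_hidden=2)

a = net.step([1.0, 0.0])
b = net.step([1.0, 0.0])   # same input, different context, different output
```

Unlike a feed-forward net, presenting the same input twice yields different outputs, because the context units carry a trace of what came before.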

Although this performance is impressive, there is still a long way to go in training nets that can process a language like English. Furthermore, doubts have been raised about the significance of Elman’s results. For example, Marcus (1998, 2001) argues that Elman’s nets are not able to generalize this performance to sentences formed from a novel vocabulary. This, he claims, is a sign that connectionist models merely associate instances, and are unable to truly master abstract rules. On the other hand, Phillips (2002) argues that classical architectures are no better off in this respect. The purported inability of connectionist models to generalize performance in this way has become an important theme in the systematicity debate. (See Section 7 below.)

A somewhat different concern about the adequacy of connectionist language processing focuses on tasks that mimic infant learning of simple artificial grammars. Data on reaction time confirms that infants can learn to distinguish well-formed from ill-formed sentences in a novel language created by experimenters. Shultz and Bale (2001) report success in training neural nets on the same task. Vilcu and Hadley (2005) object that this work fails to demonstrate true acquisition of the grammar, but see Shultz and Bale (2006) for a detailed reply.

4. Strengths and Weaknesses of Neural Network Models

Philosophers are interested in neural networks because they may provide a new framework for understanding the nature of the mind and its relation to the brain (Rumelhart & McClelland 1986: Chapter 1). Connectionist models seem particularly well matched to what we know about neurology. The brain is indeed a neural net, formed from massively many units (neurons) and their connections (synapses). Furthermore, several properties of neural network models suggest that connectionism may offer an especially faithful picture of the nature of cognitive processing. Neural networks exhibit robust flexibility in the face of the challenges posed by the real world. Noisy input or destruction of units causes graceful degradation of function. The net’s response is still appropriate, though somewhat less accurate. In contrast, noise and loss of circuitry in classical computers typically result in catastrophic failure. Neural networks are also particularly well adapted for problems that require the resolution of many conflicting constraints in parallel. There is ample evidence from research in artificial intelligence that cognitive tasks such as object recognition, planning, and even coordinated motion present problems of this kind. Although classical systems are capable of multiple constraint satisfaction, connectionists argue that neural network models provide much more natural mechanisms for dealing with such problems.

Over the centuries, philosophers have struggled to understand how our concepts are defined. It is now widely acknowledged that trying to characterize ordinary notions with necessary and sufficient conditions is doomed to failure. Exceptions to almost any proposed definition are always waiting in the wings. For example, one might propose that a tiger is a large black and orange feline. But then what about albino tigers? Philosophers and cognitive psychologists have argued that categories are delimited in more flexible ways, for example via a notion of family resemblance or similarity to a prototype. Connectionist models seem especially well suited to accommodating graded notions of category membership of this kind. Nets can learn to appreciate subtle statistical patterns that would be very hard to express as hard and fast rules. Connectionism promises to explain the flexibility and insight found in human intelligence using methods that cannot be easily expressed in the form of exception-free principles (Horgan & Tienson 1989, 1990), thus avoiding the brittleness that arises from standard forms of symbolic representation.

Despite these intriguing features, there are some weaknesses in connectionist models that bear mentioning. First, most neural network research abstracts away from many interesting and possibly important features of the brain. For example, connectionists usually do not attempt to explicitly model the variety of different kinds of brain neurons, nor the effects of neurotransmitters and hormones. Furthermore, it is far from clear that the brain contains the kind of reverse connections that would be needed if the brain were to learn by a process like backpropagation, and the immense number of repetitions needed for such training methods seems far from realistic. Attention to these matters will probably be necessary if convincing connectionist models of human cognitive processing are to be constructed. A more serious objection must also be met. It is widely felt, especially among classicists, that neural networks are not particularly good at the kind of rule-based processing that is thought to undergird language, reasoning, and higher forms of thought. (For a well-known critique of this kind see Pinker and Prince 1988.) We will discuss the matter further when we turn to the systematicity debate.

There has been a cottage industry in developing more biologically plausible algorithms for error-driven training that can be shown to approximate the results of backpropagation without its implausible features. Prominent examples include O’Reilly’s Generalized Error Recirculation algorithm (O’Reilly 1996), the use of randomized error signals rather than error signals individually computed for each neuron (Lillicrap, Cownden, Tweed, & Akerman 2016), and the modification of weights using spike-timing dependent plasticity, the last of which has been a favorite of prominent figures in deep learning research (Bengio et al. 2017). (For more on deep learning see section 11 below.)

5. The Shape of the Controversy between Connectionists and Classicists

The last forty years have been dominated by the classical view that (at least higher) human cognition is analogous to symbolic computation in digital computers. On the classical account, information is represented by strings of symbols, just as we represent data in computer memory or on pieces of paper. The connectionist claims, on the other hand, that information is stored non-symbolically in the weights, or connection strengths, between the units of a neural net. The classicist believes that cognition resembles digital processing, where strings are produced in sequence according to the instructions of a (symbolic) program. The connectionist views mental processing as the dynamic and graded evolution of activity in a neural net, each unit’s activation depending on the connection strengths and activity of its neighbors.

On the face of it, these views seem very different. However, many connectionists do not view their work as a challenge to classicism, and some overtly support the classical picture. So-called implementational connectionists seek an accommodation between the two paradigms. They hold that the brain’s net implements a symbolic processor. True, the mind is a neural net; but it is also a symbolic processor at a higher and more abstract level of description. So the role for connectionist research according to the implementationalist is to discover how the machinery needed for symbolic processing can be forged from neural network materials, so that classical processing can be reduced to the neural network account.

However, many connectionists resist the implementational point of view. Such radical connectionists claim that symbolic processing was a bad guess about how the mind works. They complain that classical theory does a poor job of explaining graceful degradation of function, holistic representation of data, spontaneous generalization, appreciation of context, and many other features of human intelligence which are captured in their models. The failure of classical programming to match the flexibility and efficiency of human cognition is by their lights a symptom of the need for a new paradigm in cognitive science. So radical connectionists would eliminate symbolic processing from cognitive science forever.

The controversy between radical and implementational connectionists is complicated by the invention of what are called hybrid connectionist architectures. Here elements of classical symbolic processing are included in neural nets (Wermter & Sun 2000). For example, Miikkulainen (1993) champions a complex collection of neural net modules that share data coded in activation patterns. Since one of the modules acts as a memory, the system taken as a whole resembles a classical processor with separate mechanisms for storing and operating on digital “words”. Smolensky (1990) is famous for inventing so-called tensor product methods for simulating the process of variable binding, where symbolic information is stored at and retrieved from known “locations”. More recently, Eliasmith (2013) has proposed complex and massive architectures that use what are called semantic pointers, which exhibit features of classical variable binding. Once hybrid architectures such as these are on the table, it becomes more difficult to classify a given connectionist model as radical or merely implementational. This opens the interesting prospect that whether symbolic processing is actually present in the human brain may turn out to be a matter of degree.
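The core of Smolensky's tensor product idea can be illustrated with a small sketch. The role and filler vectors below are hypothetical codings chosen so the algebra is easy to check; real models use learned distributed patterns, and the unbinding step assumes the role vectors are orthonormal.

```python
def outer(u, v):
    """Bind a role vector to a filler vector via their outer product."""
    return [[ui * vj for vj in v] for ui in u]

def add(m1, m2):
    """Superimpose two bindings in a single tensor."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

def unbind(m, role):
    """With orthonormal role vectors, multiplying the tensor by a role
    recovers the filler bound to that role."""
    return [sum(role[i] * m[i][j] for i in range(len(role)))
            for j in range(len(m[0]))]

# Hypothetical codings: roles for agent and patient, fillers for the names.
agent, patient = [1.0, 0.0], [0.0, 1.0]
john, mary = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]

# "John loves Mary": superimpose agent-bound-to-John and patient-bound-to-Mary.
sentence = add(outer(agent, john), outer(patient, mary))
```

Querying the superimposed tensor with the agent role recovers John's pattern, and querying with the patient role recovers Mary's, even though neither filler is stored at any dedicated location.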

The disagreement concerning the degree to which human cognition involves symbolic processing is naturally embroiled with the innateness debate: whether higher-level abilities such as language and reasoning are part of the human genetic endowment, or whether they are learned. The success of connectionist models at learning tasks starting from randomly chosen weights gives heart to empiricists, who would think that the infant brain is able to construct intelligence from perceptual input using a simple learning mechanism (Elman et al. 1996). On the other hand, nativists in the rationalist tradition argue that at least for grammar-based language, the poverty of perceptual stimulus (Chomsky 1965: 58) entails the existence of a genetically determined mechanism tailored to learning grammar. However, the alignment between connectionism and non-nativism is not so clear-cut. There is no reason that connectionist models cannot be interpreted from a nativist point of view, where the ongoing “learning” represents the process of evolutionary refinement from generation to generation of a species. The idea that the human brain has domain-specific knowledge that is genetically determined can be accommodated in the connectionist paradigm by biasing the initial weights of the models to make that knowledge easy or trivial to learn. Connectionist research makes best contact with the innateness debate by providing a new strategy for disarming poverty of stimulus arguments. Nativists argue that association of ideas, the mechanism for learning proposed by the traditional empiricist, is too slender a reed to support the development of higher-level cognitive abilities. They suppose that innate mechanisms are essential for learning (for example) a grammar of English from a child’s linguistic input, because the statistical regularities available to “mere association” massively underdetermine that grammar.
Connectionism could support an empiricism here by providing a proof-of-concept that such structured knowledge can be learned from inputs available to humans using only learning mechanisms found in non-classical architectures. Of course it is too soon to tell whether this promise can be realized.

6. Connectionist Representation

Connectionist models provide a new paradigm for understanding how information might be represented in the brain. A seductive but naive idea is that single neurons (or tiny neural bundles) might be devoted to the representation of each thing the brain needs to record. For example, we may imagine that there is a grandmother neuron that fires when we think about our grandmother. However, such local representation is not likely. There is good evidence that our grandmother thought involves complex patterns of activity distributed across relatively large parts of cortex.

It is interesting to note that distributed, rather than local, representations on the hidden units are the natural products of connectionist training methods. The activation patterns that appear on the hidden units while NETtalk processes text serve as an example. Analysis reveals that the net learned to represent such categories as consonants and vowels, not by creating one unit active for consonants and another for vowels, but rather by developing two different characteristic patterns of activity across all the hidden units.

Given the expectations formed from our experience with local representation on the printed page, distributed representation seems both novel and difficult to understand. But the technique exhibits important advantages. For example, distributed representations (unlike symbols stored in separate fixed memory locations) remain relatively well preserved when parts of the model are destroyed or overloaded. More importantly, since representations are coded in patterns rather than firings of individual units, relationships between representations are coded in the similarities and differences between these patterns. So the internal properties of the representation carry information on what it is about (Clark 1993: 19). In contrast, local representation is conventional. No intrinsic properties of the representation (a unit’s firing) determine its relationships to the other symbols. This self-reporting feature of distributed representations promises to resolve a philosophical conundrum about meaning. In a symbolic representational scheme, all representations are composed out of symbolic atoms (like words in a language). Meanings of complex symbol strings may be defined by the way they are built up out of their constituents, but what fixes the meanings of the atoms?
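The point that relationships between contents are mirrored in similarities between patterns can be illustrated with a toy computation. The patterns below are invented for the example, and cosine similarity is just one common way of comparing activation vectors.

```python
import math

def cosine(u, v):
    """Similarity between two activation patterns: 1.0 means identical
    direction, values near 0 mean the patterns are unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up distributed patterns over six hidden units.
cat   = [0.9, 0.8, 0.1, 0.2, 0.7, 0.1]
tiger = [0.8, 0.9, 0.2, 0.1, 0.6, 0.2]
truck = [0.1, 0.2, 0.9, 0.8, 0.1, 0.7]

# Related contents yield similar patterns; unrelated contents do not.
cat_tiger = cosine(cat, tiger)
cat_truck = cosine(cat, truck)
```

Nothing conventional links the "cat" pattern to the "tiger" pattern; their relatedness is carried by the intrinsic shape of the patterns themselves, which is the contrast with local representation drawn above.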

Connectionist representational schemes provide an end run around the puzzle by simply dispensing with atoms. Every distributed representation is a pattern of activity across all the units, so there is no principled way to distinguish between simple and complex representations. To be sure, representations are composed out of the activities of the individual units. But none of these “atoms” codes for any symbol. The representations are sub-symbolic in the sense that analysis into their components leaves the symbolic level behind.

The sub-symbolic nature of distributed representation provides a novel way to conceive of information processing in the brain. If we model the activity of each neuron with a number, then the activity of the whole brain can be given by a giant vector (or list) of numbers, one for each neuron. Both the brain’s input from sensory systems and its output to individual muscle neurons can also be treated as vectors of the same kind. So the brain amounts to a vector processor, and the problem of psychology is transformed into questions about which operations on vectors account for the different aspects of human cognition.

Sub-symbolic representation has interesting implications for the classical hypothesis that the brain must contain symbolic representations that are similar to sentences of a language. This idea, often referred to as the language of thought (or LOT) thesis, may be challenged by the nature of connectionist representations. It is not easy to say exactly what the LOT thesis amounts to, but van Gelder (1990) offers an influential and widely accepted benchmark for determining when the brain should be said to contain sentence-like representations. It is that when a representation is tokened one thereby tokens the constituents of that representation. For example, if I write “John loves Mary” I have thereby written the sentence’s constituents: “John”, “loves” and “Mary”. Distributed representations for complex expressions like “John loves Mary” can be constructed that do not contain any explicit representation of their parts (Smolensky 1990). The information about the constituents can be extracted from the representations, but neural network models do not need to explicitly extract this information themselves in order to process it correctly (Chalmers 1990). This suggests that neural network models serve as counterexamples to the idea that the language of thought is a prerequisite for human cognition. However, the matter is still a topic of lively debate (Fodor 1997).

The novelty of distributed and superimposed connectionist information storage naturally causes one to wonder about the viability of classical notions of symbolic computation in describing the brain. Ramsey (1997) argues that though we may attribute symbolic representations to neural nets, those attributions do not figure in legitimate explanations of the model’s behavior. This claim is important because the classical account of cognitive processing (and folk intuitions) presume that representations play an explanatory role in understanding the mind. It has been widely thought that cognitive science requires, by its very nature, explanations that appeal to representations (Von Eckardt 2003). If Ramsey is right, the point may cut in two different ways. Some may use it to argue for a new and non-classical understanding of the mind, while others would use it to argue that connectionism is inadequate since it cannot explain what it must. However, Haybron (2000) argues against Ramsey that there is ample room for representations with explanatory role in radical connectionist architectures. Roth (2005) makes the interesting point that contrary to first impressions, it may also make perfect sense to explain a net’s behavior by reference to a computer program, even if there is no way to discriminate a sequence of steps of the computation through time.

The debate concerning the presence of classical representations and a language of thought has been clouded by lack of clarity in defining what should count as the representational “vehicles” in distributed neural models. Shea (2007) makes the point that the individuation of distributed representations should be defined by the way activation patterns on the hidden units cluster together. It is the relationships between clustering regions in the space of possible activation patterns that carry representational content, not the activations themselves, nor the collection of units responsible for the activation. On this understanding, prospects are improved for locating representational content in neural nets that can be compared across nets of different architectures, that is causally involved in processing, and that overcomes some objections to holistic accounts of meaning.

In a series of papers Horgan and Tienson (1989, 1990) have championed a view called representations without rules. According to this view classicists are right to think that human brains (and good connectionist models of them) contain explanatorily robust representations; but they are wrong to think that those representations enter into hard and fast rules like the steps of a computer program. The idea that connectionist systems may follow graded or approximate regularities (“soft laws” as Horgan and Tienson call them) is intuitive and appealing. However, Aizawa (1994) argues that given an arbitrary neural net with a representation-level description, it is always possible to outfit it with hard and fast representation-level rules. Guarini (2001) responds that if we pay attention to notions of rule following that are useful to cognitive modeling, Aizawa’s constructions will seem beside the point.

7. The Systematicity Debate

The major points of controversy in the philosophical literature on connectionism have to do with whether connectionists provide a viable and novel paradigm for understanding the mind. One complaint is that connectionist models are only good at processing associations. But such tasks as language and reasoning cannot be accomplished by associative methods alone, and so connectionists are unlikely to match the performance of classical models at explaining these higher-level cognitive abilities. However, it is a simple matter to prove that neural networks can do anything that symbolic processors can do, since nets can be constructed that mimic a computer’s circuits. So the objection cannot be that connectionist models are unable to account for higher cognition; it is rather that they can do so only if they implement the classicist’s symbolic processing tools. Implementational connectionism may succeed, but radical connectionists will never be able to account for the mind.

Fodor and Pylyshyn’s often cited paper (1988) launches a debate of this kind. They identify a feature of human intelligence called systematicity which they feel connectionists cannot explain. The systematicity of language refers to the fact that the ability to produce/understand/think some sentences is intrinsically connected to the ability to produce/understand/think others of related structure. For example, no one with a command of English who understands “John loves Mary” can fail to understand “Mary loves John.” From the classical point of view, the connection between these two abilities can easily be explained by assuming that masters of English represent the constituents (“John”, “loves” and “Mary”) of “John loves Mary” and compute its meaning from the meanings of these constituents. If this is so, then understanding a novel sentence like “Mary loves John” can be accounted for as another instance of the same symbolic process. In a similar way, symbolic processing would account for the systematicity of reasoning, learning and thought. It would explain why there are no people who are capable of concluding P from P & (Q & R), but incapable of concluding P from P & Q; why there are no people capable of learning to prefer a red cube to a green square who cannot learn to prefer a green cube to a red square; and why there isn’t anyone who can think that John loves Mary who can’t also think that Mary loves John.

Fodor and McLaughlin (1990) argue in detail that connectionists do not account for systematicity. Although connectionist models can be trained to be systematic, they can also be trained, for example, to recognize “John loves Mary” without being able to recognize “Mary loves John.” Since connectionism does not guarantee systematicity, it does not explain why systematicity is found so pervasively in human cognition. Systematicity may exist in connectionist architectures, but where it exists, it is no more than a lucky accident. The classical solution is much better, because in classical models, pervasive systematicity comes for free.

The charge that connectionist nets are disadvantaged in explaining systematicity has generated a lot of interest. Chalmers (1993) points out that Fodor and Pylyshyn’s argument proves too much, for it entails that all neural nets, even those that implement a classical architecture, do not exhibit systematicity. Given the uncontroversial conclusion that the brain is a neural net, it would follow that systematicity is impossible in human thought. Another often mentioned point of rebuttal (Aizawa 1997b; Matthews 1997; Hadley 1997b) is that classical architectures do no better at explaining systematicity. There are also classical models that can be programmed to recognize “John loves Mary” without being able to recognize “Mary loves John,” for this depends on exactly which symbolic rules govern the classical processing. The point is that neither the use of connectionist architecture alone nor the use of classical architecture alone enforces a strong enough constraint to explain pervasive systematicity. In both architectures, further assumptions about the nature of the processing must be made to ensure that “Mary loves John” and “John loves Mary” are treated alike.

A discussion of this point should mention Fodor and McLaughlin’s requirement that systematicity be explained as a matter of nomic necessity, that is, as a matter of natural law. The complaint against connectionists is that while they may implement systems that exhibit systematicity, they will not have explained it unless it follows from their models as a nomic necessity. However, the demand for nomic necessity is a very strong one, and one that classical architectures clearly cannot meet either. So the only tactic for securing a telling objection to connectionists along these lines would be to weaken the requirement on the explanation of systematicity to one which classical architectures can and connectionists cannot meet. A convincing case of this kind has yet to be made.

As the systematicity debate has evolved, attention has been focused on defining the benchmarks that would answer Fodor and Pylyshyn’s challenge. Hadley (1994a, 1994b) distinguishes three brands of systematicity. Connectionists have clearly demonstrated the weakest of these by showing that neural nets can learn to correctly recognize novel sequences of words (e.g., “Mary loves John”) that were not in the training set. However, Hadley claims that a convincing rebuttal must demonstrate strong systematicity, or better, strong semantical systematicity. Strong systematicity would require (at least) that “Mary loves John” be recognized even if “Mary” never appears in the subject position in any sentence in the training set. Strong semantical systematicity would require as well that the net show abilities at correct semantical processing of the novel sentences rather than merely distinguishing grammatical from ungrammatical forms. Niklasson and van Gelder (1994) have claimed success at strong systematicity, though Hadley complains that this is at best a borderline case. Hadley and Hayward (1997) tackle strong semantical systematicity, but by Hadley’s own admission it is not clear that they have avoided the use of a classical architecture. Boden and Niklasson (2000) claim to have constructed a model that meets at least the spirit of strong semantical systematicity, but Hadley (2004) argues that even strong systematicity has not been demonstrated there. Whether one takes a positive or a negative view of these attempts, it is safe to say that no one has met the challenge of providing a neural net capable of learning complex semantical processing that generalizes to a full range of truly novel inputs.

Research on nets that clearly demonstrate strong systematicity has continued. Jansen and Watter (2012) provide a good summary of more recent efforts along these lines, and propose an interesting basis for solving the problem. They use a more complex architecture that combines unsupervised self-organizing maps with features of simple recurrent nets. However, the main innovation is to allow codes for the words being processed to represent sensory-motor features of what the words represent. Once trained, their nets displayed very good accuracy in distinguishing the grammatical features of sentences whose words never even appeared in the training set. This may appear to be cheating, since the word codes might surreptitiously represent grammatical categories, or at least they may unfairly facilitate learning those categories. Jansen and Watter note, however, that the sensory-motor features of what a word represents are apparent to a child who has just acquired a new word, and so that information is not off-limits in a model of language learning. They make the interesting observation that a solution to the systematicity problem may require including sources of environmental information that have so far been ignored in theories of language learning. This work complicates the systematicity debate, since it opens a new worry about what information resources are legitimate in responding to the challenge. However, it also reminds us that architecture alone (whether classical or connectionist) is not going to solve the systematicity problem in any case, so the interesting questions concern what sources of supplemental information are needed to make the learning of grammar possible.

Kent Johnson (2004) argues that the whole systematicity debate is misguided. Attempts at carefully defining the systematicity of language or thought leave us with either trivialities or falsehoods. Connectionists surely have explaining to do, but Johnson recommends that it is fruitless to view their burden under the rubric of systematicity. Aizawa (2014) also suggests the debate is no longer germane given the present climate in cognitive science. What is needed instead is the development of neurally plausible connectionist models capable of processing a language with a recursive syntax, which react immediately to the introduction of new items in the lexicon without introducing the features of classical architecture. The “systematicity” debate may have already gone as Johnson advises, for Hadley’s demand for strong semantical systematicity may be thought of as the requirement that connectionists exhibit success in that direction.

Recent work (Loula, Baroni, & Lake 2018) sheds new light on the controversy. Here recurrent neural nets were trained to interpret complex commands in a simple language that includes primitives such as “jump”, “walk”, “left”, “right”, “opposite” and “around”. “Opposite” is interpreted as a request to perform a command twice, and “around” to do so four times. So “jump around left” requests a left jump four times. The authors report that their nets showed very accurate generalization at tasks that qualify for demonstrating strong semantic systematicity. The nets correctly parsed commands in the test set containing “jump around right” even though this phrase never appeared in the training set. Nevertheless, the nets’ failures at more challenging tasks point to limitations in their abilities to generalize in ways that would demonstrate genuine systematicity. The nets exhibited very poor performance when commands in the test set were longer (or even shorter) than those presented in the training set. So they appeared unable to spontaneously compose the meaning of complex expressions from the meanings of their parts. New research is needed to understand the nature of these failures, whether they can be overcome in non-classical architectures, and the extent to which humans would exhibit similar mistakes under analogous circumstances.

It has been almost thirty years since the systematicity debate first began, with over 3,000 citations to Fodor and Pylyshyn’s original paper, so this brief account is necessarily incomplete. Aizawa (2003) provides an excellent overview of the literature, and Calvo and Symons (2014) serves as another, more recent resource.

8. Connectionism and Semantic Similarity

One of the attractions of distributed representations in connectionist models is that they suggest a solution to the problem of providing a theory of how brain states could have meaning. The idea is that the similarities and differences between activation patterns along different dimensions of neural activity record semantical information. So the similarity properties of neural activations provide intrinsic properties that determine meaning. However, when it comes to compositional linguistic representations, Fodor and Lepore (1992: Ch. 6) challenge similarity-based accounts on two fronts. The first problem is that human brains presumably vary significantly in the number of and connections between their neurons. Although it is straightforward to define similarity measures on two nets that contain the same number of units, it is harder to see how this can be done when the basic architectures of two nets differ. The second problem Fodor and Lepore cite is that even if similarity measures for meanings can be successfully crafted, they are inadequate to the task of meeting the desiderata which a theory of meaning must satisfy.

Churchland (1998) shows that the first of these two objections can be met. Citing the work of Laakso and Cottrell (2000), he explains how similarity measures between activation patterns in nets with radically different structures can be defined. Not only that, Laakso and Cottrell show that nets of different structures trained on the same task develop activation patterns which are strongly similar according to the measures they recommend. This offers hope that empirically well defined measures of similarity of concepts and thoughts across different individuals might be forged.

On the other hand, the development of a traditional theory of meaning based on similarity faces severe obstacles (Fodor & Lepore 1999), for such a theory would be required to assign sentences truth conditions based on an analysis of the meaning of their parts, and it is not clear that similarity alone is up to such tasks as fixing denotation in the way a standard theory demands. However, most connectionists who promote similarity-based accounts of meaning reject many of the presuppositions of standard theories. They hope to craft a working alternative which either rejects or modifies those presuppositions while still being faithful to the data on human linguistic abilities.

Calvo Garzón (2003) complains that there are reasons to think that connectionists must fail. Churchland’s response has no answer to the collateral information challenge: the measured similarities between activation patterns for a concept (say: grandmother) in two human brains are guaranteed to be very low, because two people’s (collateral) information on their grandmothers (name, appearance, age, character) is going to be very different. If concepts are defined by everything we know, then the measures for activation patterns of our concepts are bound to be far apart. This is a truly deep problem in any theory that hopes to define meaning by functional relationships between brain states. Philosophers of many stripes must struggle with this problem. Given the lack of a successfully worked out theory of concepts in either traditional or connectionist paradigms, it is only fair to leave the question for future research.

9. Connectionism and the Elimination of Folk Psychology

Another important application of connectionist research to philosophical debate about the mind concerns the status of folk psychology. Folk psychology is the conceptual structure that we spontaneously apply to understanding and predicting human behavior. For example, knowing that John desires a beer and that he believes that there is one in the refrigerator allows us to explain why John just went into the kitchen. Such knowledge depends crucially on our ability to conceive of others as having desires and goals, plans for satisfying them, and beliefs to guide those plans. The idea that people have beliefs, plans and desires is a commonplace of ordinary life; but does it provide a faithful description of what is actually to be found in the brain?

Its defenders will argue that folk psychology is too good to be false (Fodor 1988: Ch. 1). What more can we ask for the truth of a theory than that it provides an indispensable framework for successful negotiations with others? On the other hand, eliminativists will respond that the useful and widespread use of a conceptual scheme does not argue for its truth (Churchland 1989: Ch. 1). Ancient astronomers found the notion of celestial spheres useful (even essential) to the conduct of their discipline, but now we know that there are no celestial spheres. From the eliminativists’ point of view, an allegiance to folk psychology, like allegiance to folk (Aristotelian) physics, stands in the way of scientific progress. A viable psychology may require as radical a revolution in its conceptual foundations as is found in quantum mechanics.

Eliminativists are interested in connectionism because it promises to provide a conceptual foundation that might replace folk psychology. For example, Ramsey, Stich, & Garon (1991) have argued that certain feed-forward nets show that simple cognitive tasks can be performed without employing features that could correspond to beliefs, desires and plans. Presuming that such nets are faithful to how the brain works, concepts of folk psychology fare no better than do celestial spheres. Whether connectionist models undermine folk psychology in this way is still controversial. There are two main lines of response to the claim that connectionist models support eliminativist conclusions. One objection is that the models used by Ramsey et al. are feed-forward nets, which are too weak to explain some of the most basic features of cognition such as short-term memory. Ramsey et al. have not shown that beliefs and desires must be absent in a class of nets adequate for human cognition. A second line of rebuttal challenges the claim that features corresponding to beliefs and desires are necessarily absent even in the feed-forward nets at issue (Von Eckardt 2005).

The question is complicated further by disagreements about the nature of folk psychology. Many philosophers treat the beliefs and desires postulated by folk psychology as brain states with symbolic contents. For example, the belief that there is a beer in the refrigerator is thought to be a brain state that contains symbols corresponding to beer and a refrigerator. From this point of view, the fate of folk psychology is strongly tied to the symbolic processing hypothesis. So if connectionists can establish that brain processing is essentially non-symbolic, eliminativist conclusions will follow. On the other hand, some philosophers do not think folk psychology is essentially symbolic, and some would even challenge the idea that folk psychology is to be treated as a theory in the first place. Under this conception, it is much more difficult to forge links between results in connectionist research and the rejection of folk psychology.

10. Predictive Coding Models of Cognition

As connectionist research has matured from its “Golden Age” in the 1980s, the main paradigm has radiated into a number of distinct approaches. Two important trends worth mention are predictive coding and deep learning (which will be covered in the following section). Predictive coding is a well-established information processing tool with a wide range of applications. It is useful, for example, in compressing the size of data sets. Suppose you wish to transmit a picture of a landscape with a blue sky. Since most of the pixels in the top half of your image are roughly the same shade, it is very inefficient to record the color value (say Red: 46 Green: 78 Blue: FF in hexadecimal) over and over again for each pixel in the top half of the image. Since the value of one pixel strongly predicts the value of its neighbor, the efficient thing to do is to record, at each pixel location, the difference between the predicted value (an average of its neighbors) and the actual value for that pixel. (In the case of representing an evenly shaded sky, we would only need to record the blue value once, followed by lots of zeros.) This way, major coding resources are only needed to keep track of points in the image (such as edges) where there are large changes, that is, points of “surprise” or “unexpected” variation.
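The compression idea described above can be sketched in a few lines of Python. This is a toy illustration of difference coding on a single row of pixel values, not any particular image codec; for simplicity each pixel is predicted from its single left-hand neighbor rather than an average of neighbors, and the function names are invented for the example.

```python
def delta_encode(values):
    """Record each value as its difference from the previous one.

    For smoothly varying data (like a clear sky), most differences
    are zero or near zero, so the encoded stream compresses well."""
    encoded = [values[0]]  # the first value is stored as-is
    for prev, cur in zip(values, values[1:]):
        encoded.append(cur - prev)
    return encoded

def delta_decode(encoded):
    """Invert the encoding by accumulating the differences."""
    values = [encoded[0]]
    for diff in encoded[1:]:
        values.append(values[-1] + diff)
    return values

# A row of sky pixels: one blue value, then mostly "no surprise".
sky_row = [0xFF, 0xFF, 0xFF, 0xFE, 0xFF, 0xFF]
print(delta_encode(sky_row))  # [255, 0, 0, -1, 1, 0]
assert delta_decode(delta_encode(sky_row)) == sky_row
```

Only the points of change (the −1 and +1 around the slightly darker pixel) need significant coding resources; the runs of zeros are cheap to store.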

It is well known that early visual processing in the brain involves taking differences between nearby values (for example, to identify visual boundaries). It is only natural, then, to explore how the brain might take advantage of predictive coding in perception, inference, or even action. (See Clark 2013 for an excellent summary and entry point to the literature.) There is wide variety in the models presented in the predictive coding paradigm, and they tend to be specified at a higher level of generality than are the connectionist models so far discussed. Assume we have a neural net with input, hidden and output levels that has been trained on a task (say face recognition) and so presumably has information about faces stored in the weights connecting the hidden level nodes. Three features would classify this net as a predictive coding (PC) model. First, the model will have downward connections from the higher levels that are able to predict the next input for that task. (The prediction might be a representation of a generic face.) Second, the data sent to the higher levels for a given input is not the value recorded at the input nodes, but the difference between the predicted values and the values actually present. (So in the example, the data provided tracks the differences between the face to be recognized and the generic face.) In this way the data being received by the net is already preprocessed for coding efficiency. Third, the model is trained by adjusting the weights in such a way that the error is minimized at the inputs. In other words, the trained net reduces as much as possible the “surprise” registered in the difference between the raw input and its prediction. In so doing it comes to be able to predict the face of the individual to be recognized, thereby eliminating the error. Some advocates of predictive coding models suggest that this scheme provides a unified account of all cognitive phenomena, including perception, reasoning, planning and motor control. By minimizing prediction error in interacting with the environment, the net is forced to develop the conceptual resources to model the causal structure of the external world, and so navigate that world more effectively.

The predictive coding (PC) paradigm has attracted a lot of attention. There is ample evidence that PC models capture essential details of visual function in the mammalian brain (Rao & Ballard 1999; Huang & Rao 2011). For example, when trained on typical visual input, PC models spontaneously develop functional areas for edge, orientation and motion detection known to exist in visual cortex. This work also raises the interesting point that the visual architecture may develop in response to the statistics of the scenes being encountered, so that organisms in different environments have visual systems specially tuned to their needs.

It must be admitted that there is still no convincing evidence that the essential features of PC models are directly implemented as anatomical structures in the brain. Although it is conjectured that superficial pyramidal cells may transmit prediction error, and deep pyramidal cells predictions, we do not know that that is how they actually function. On the other hand, PC models do appear more neurally plausible than backpropagation architectures, for there is no need for a separate process of training on an externally provided set of training samples. Instead, predictions replace the role of the training set, so that learning and interacting with the environment are two sides of a unified unsupervised process.

PC models also show promise for explaining higher-level cognitive phenomena. An often-cited example is binocular rivalry. When presented with entirely different images in the two eyes, humans report an oscillation between the two images as each in turn comes into “focus”. The PC explanation is that the system succeeds in eliminating error by predicting the scene for one eye, but only to increase the error for the other eye. So the system is unstable, “hunting” from one prediction to the other. Predictive coding also has a natural explanation for why we are unaware of our blind spot, for the lack of input in that area amounts to a report of no error, with the result that one perceives “more of the same”.

PC accounts of attention have also been championed. For example, Hohwy (2012) notes that realistic PC models, which must tolerate noisy inputs, need to include parameters that track the desired precision to be used in reporting error. So PC models need to make predictions of the error precision relevant for a given situation. Hohwy explores the idea that mechanisms for optimizing precision expectations map onto those that account for attention, and argues that attentional phenomena such as change blindness can be explained within the PC paradigm.

Predictive coding has interesting implications for themes in the philosophy of cognitive science. By integrating the processes of top-down prediction with bottom-up error detection, the PC account views perception as intrinsically theory-laden. Deployment of the conceptual categorization of the world embodied in higher levels of the net is essential to the very process of gathering data about the world. This underscores, as well, tight linkages between belief, imaginative abilities, and perception (Grush 2004). The PC paradigm also tends to support situated or embodied conceptions of cognition, for it views action as a dynamic interaction between the organism’s effects on the environment, its predictions concerning those effects (its plans), and its continual monitoring of error, which provides feedback to help ensure success.

It is too early to evaluate the importance and scope of PC models in accounting for the various aspects of cognition. Providing a unified theory of brain function in general is, after all, an impossibly high standard. Clark’s target article (2013) provides a useful forum for airing complaints against PC models and some possible responses. One objection that is often heard is that an organism with a PC brain can be expected to curl up in a dark room and die, for this is the best way to minimize error at its sensory inputs. However, that objection may take too narrow a view of the sophistication of the predictions available to the organism. If it is to survive at all, its genetic endowment coupled with what it can learn along the way may very well endow it with the expectation that it go out and seek needed resources in the environment. Minimizing error for that prediction of its behavior will get it out of the dark room. However, it remains to be seen whether a theory of biological urges is usefully recast in PC terminology in this way, or whether PC theory is better characterized as only part of the explanation. Another complaint is that the top-down influence on our perception, coupled with the constraint that the brain receives error signals rather than raw data, would impose an unrealistic divide between a represented world of fantasy and the world as it really is. It is hard to evaluate whether that qualifies as a serious objection. Were PC models actually to provide an account of our phenomenological experience, and characterize the relations between that experience and what we count as real, then the skeptical conclusions to be drawn would count as features of the view rather than objections to it. A number of responders to Clark’s target article also worry that PC models are overly general: in trying to explain everything they explain nothing. Without sufficient constraints on the architecture, it is too easy to pretend to explain cognitive phenomena by merely redescribing them in a story written in the vocabulary of prediction, comparison, error minimization, and optimized precision. The real proof of the pudding will come with the development of more complex and detailed computer models in the PC framework that are biologically plausible and able to demonstrate the defining features of cognition.

11. Deep Learning: Connectionism’s New Wave

Whereas connectionism’s ambitions seemed to mature and temper towards the end of its Golden Age from 1980–1995, neural network research has recently returned to the spotlight after a combination of technical achievements made it practical to train networks with many layers of nodes between input and output (Krizhevsky, Sutskever, & Hinton 2012; Goodfellow, Bengio, & Courville 2016). Amazon, Facebook, Google, Microsoft, and Uber have all since made substantial investments in these “deep learning” systems. Their many promising applications include recognition of objects and faces in photographs, natural language translation and text generation, prediction of protein folds, medical diagnosis and treatment, and control of autonomous vehicles. The success of the game-playing program AlphaZero (Silver et al. 2018) has brought intense publicity to deep learning in the popular press. What is especially telling about AlphaZero is that essentially the same algorithm was capable of learning to defeat human world champions and other top-performing artificial systems in three different rule-based games (chess, shogi, and Go) “without human knowledge” of strategy, that is, by using only information about the rules of these games and policies it learned from extensive self-play. Its ability to soundly defeat expert-knowledge-based programs at their forte has been touted as the death knell for the traditional symbolic paradigm in artificial intelligence.

However, the new capabilities of deep learning systems have brought with them new concerns. Deep networks typically learn from vastly more data than their predecessors (AlphaZero learned from over 100 million self-played Go games), and can extract much more subtle, structured patterns. While the analysis of AlphaZero’s unusual approach to strategy has created a mini-revolution in the study of chess and Go (Sadler & Regan 2019), it has also raised concerns that the solutions deep networks discover are alien and mysterious. It is natural, therefore, to have second thoughts about depending on deep learning technologies for tasks that must be responsive to human interests and goals.

The success of deep learning would not have been possible without specialized Graphics Processing Units (GPUs), massively parallel processors optimized for the computational burden of training large nets. However, the crucial innovations behind deep learning’s successes lie in network architecture. Although the literature describes a bewildering set of variations in deep net design (Schmidhuber 2015), there are some common themes that help define the paradigm.

The most obvious feature is a substantial increase in the number of hidden layers. Whereas Golden Age networks typically had only one or two hidden layers, deep neural nets have anywhere from five to several hundred. It has been proven that additional depth can exponentially increase the representational and computational power of a neural network, compared to a shallower network with the same number of nodes (Bengio & Delalleau 2011; Montúfar et al. 2014; Raghu et al. 2017). The key is that the patterns detected at a given layer may be used by the subsequent layers to repeatedly create more and more complex discriminations.

The number of layers is not the only feature of deep nets that explains their superior abilities. An emerging consensus is that many tasks that are hard to learn are characterized by the presence of “nuisance parameters”: sources of variation in input signals that are not correlated with decision success. Examples of nuisance parameters in visual categorization tasks include pose, size, and position in the visual field; examples in auditory tasks include tone, pitch, and duration. Successful systems must learn to recognize deeper similarities hiding under this variation to identify objects in images, or words in audio data.

One of the most commonly deployed deep architectures—deep convolutional networks—leverages a combination of strategies that are well-suited to overcoming nuisance variation. Golden Age nets used the same activation function for all units, and units in a layer were fully connected to units in adjacent layers. However, deep convolutional nets deploy several different activation functions, and connections to units in the next higher layer are restricted to small windows, such as a square tile of an image or a temporal snippet of a sound file.

A toy example of a deep convolutional net trained to recognize objects in images will help illustrate some of the details. The input to such a net consists of a digitized scene with red, green, and blue (RGB) values for the intensity of colors in each pixel. This input layer is fed to a layer of filter units, which are connected only to a small window of input pixels. Filter units detect specific, local features of the image using an operation called convolution. For example, they might find edges by noting where differences in the intensity of nearby pixels are the greatest. Outputs of these units are then passed to rectified linear units (or “ReLU” nodes), which only pass along activations from the filter nodes that exceed a certain threshold. ReLU units send their signals to a pooling layer, which collects data from many ReLU units and only passes along the most-activated features for each location. The result of this sandwich of convolution-ReLU-pooling layers is a “feature map”, which marks all and only the most salient features detected at each location across the whole image. This feature map can then be sent to a whole series of such sandwiches to detect larger and more abstract features. For example, one sandwich might build lines from edges, the next angles from lines, the next shapes from lines and angles, and the next objects from shapes. A final, fully-connected classification layer is then used to assign labels to the objects detected in the most abstract feature map delivered by the penultimate layer.
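One convolution-ReLU-pooling “sandwich” can be sketched with NumPy. This is a minimal one-dimensional illustration, not any real vision system: the edge-detecting filter, the zero threshold, and the pooling window size are all invented for the example, and real convolutional nets operate on two-dimensional image patches with many learned filters.

```python
import numpy as np

def conv_relu_pool(signal, kernel, pool_width=2):
    """One 'sandwich': convolve with a local filter, threshold with
    ReLU, then keep only the strongest response in each window."""
    # Convolution: each filter unit sees only a small window of input.
    conv = np.convolve(signal, kernel, mode="valid")
    # ReLU: pass along only activations above zero.
    relu = np.maximum(conv, 0.0)
    # Pooling: report only the most-activated feature per location.
    trimmed = relu[: len(relu) - len(relu) % pool_width]
    return trimmed.reshape(-1, pool_width).max(axis=1)

# A filter that responds where neighboring values differ (an "edge").
edge_kernel = np.array([1.0, -1.0])
signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
print(conv_relu_pool(signal, edge_kernel))  # [1. 0.]
```

The pooled output is a compact feature map marking where a rising edge was detected; stacking further sandwiches on such maps is what lets deeper layers build progressively more abstract features.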

This division of labor is extremely efficient at overcoming nuisance variation, compared to shallow Golden Age networks. Limiting the inputs of the filter nodes to a small window significantly lowers the number of weights that must be learned at each level, compared to a fully-connected network. If features usually depend only on local relations (in the sense that one normally does not need to look at someone's feet to read their facial expression), then this gain comes at no cost to classification accuracy. Furthermore, pooling the outputs of several different filter nodes helps detect the same feature across small differences in nuisance variables like pose or location. There is special enthusiasm for this kind of neurocomputational division of labor in cognitive science, because it was originally inspired by anatomical studies of mammalian neocortex (Hubel & Wiesel 1965; Fukushima 1980). Other sources of empirical evidence have demonstrated the potential of such networks as models for perceptual similarity and object recognition judgments in primates (Khaligh-Razavi & Kriegeskorte 2014; Hong et al. 2016; Kubilius, Bracci, & Beeck 2016; Lake, Zaremba et al. 2015; Yamins & DiCarlo 2016; and Guest & Love 2019 [Other Internet Resources, hereafter OIR]). These points also interface with the innateness controversy discussed in Section 6. For example, Buckner (2018) has recently argued that these activation functions combine to implement a form of cognitive abstraction which addresses problems facing traditional empiricist philosophy of mind, concerning the way that minds can efficiently discover abstract categorical knowledge in specific, idiosyncratic perceptions.
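The savings from small windows and weight sharing can be made concrete with some back-of-the-envelope arithmetic; the image size, window size, and filter count below are illustrative assumptions, not figures from the literature.

```python
# Weights needed to connect a 224 x 224 input image to one
# same-sized layer of feature detectors:
inputs = 224 * 224                  # 50,176 pixels

fully_connected = inputs * inputs   # every unit sees every pixel
windowed = inputs * 5 * 5           # each unit sees only a 5 x 5 window
shared = 64 * 5 * 5                 # 64 filters whose 5 x 5 weights are
                                    # shared across all image locations

# fully_connected is about 2.5 billion weights; windowed is about
# 1.25 million; shared is just 1,600.
```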

The increase in computational power that comes with deep net architecture brings with it additional dangers. In fact, the representational power of deep networks is so great that they can simply memorize the correct answer for every item in a large, complex data set, even if the "correct" labels were randomly assigned (Zhang et al. 2016 in OIR). The result is poor generalization of the task to be learned, with total failure to respond properly to inputs outside the training set. Effective deep nets thus employ an array of strategies to prevent them from merely memorizing training data, mostly by biasing the network against learning fine-grained idiosyncrasies. Popular options include dropout, which randomly deactivates a small number of nodes during training, and weight decay rules, which cause weights to decrease in value if not constantly refreshed by different examples.
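Both anti-memorization strategies are easy to state as update rules. The following is a minimal NumPy sketch; the dropout rate, learning rate, and decay coefficient are arbitrary values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5):
    """Randomly silence a fraction of units during training, rescaling
    the survivors so the expected total activation is unchanged."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

def decayed_update(weights, gradient, lr=0.1, decay=0.01):
    """Gradient step plus weight decay: weights drift toward zero
    unless the training signal keeps refreshing them."""
    return weights - lr * (gradient + decay * weights)

dropped = dropout(np.ones(1000))                   # about half become 0, rest 2.0
decayed = decayed_update(np.ones(3), np.zeros(3))  # with no gradient, 1.0 -> 0.999
```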

While these general points may explain why deep convolutional nets tend to succeed on a wide variety of tasks, their complex structure makes it difficult to explain their decisions in specific cases. This concern interfaces with the XAI (explainable AI) movement, which aims to inspire the development of better tools to analyze the decisions of computer algorithms, especially so that AI systems can be certified to meet practical or legal requirements (Explainable Artificial Intelligence (XAI); B. Goodman & Flaxman 2017). Deep Visualization methods are important tools in addressing these goals for deep neural networks. One popular family of methods uses further machine learning to create an artificial image that maximizes the activation of some particular hidden layer unit (Yosinski et al. 2015). The image is intended to give one an impression of the kind of feature that unit detects when it fires. As expected, the images look more complex and more object-like as we ascend the level hierarchy (for examples and software, see http://yosinski.com/deepvis). Without additional processing, however, many of these visualizations appear chimerical and nonsensical, and it is not clear exactly how well this method reveals features that are genuinely important in the network's processing. Another family of methods attempts to reveal the aspects of input images that are most salient for the nets' decision-making. Relevance decomposition, for example, determines which nodes, if deactivated, would have had the greatest effect on some particular decision (Montavon, Samek, & Müller 2018). This can generate a "heatmap", which shows the aspects of the input that were most influential in that decision. Further machine learning has also been used to build systems able to provide brief English phrases describing the features that lead to a net's decisions (Hendricks et al. 2016 [OIR]; Ehsan et al. 2018). Despite these advances, the methodologies needed for an adequate explanation of a deep network's behavior remain unclear and would benefit from further philosophical reflection (Lipton 2016 [OIR]; Zednik 2019 [OIR]).
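A crude occlusion-based cousin of these saliency methods fits in a few lines: mask each region of the input in turn and record how much the classifier's confidence drops. The two-line "classifier" below is a stand-in invented for the illustration, not a real network.

```python
import numpy as np

def occlusion_heatmap(classify, image, patch=2):
    """Score each patch of the image by how much masking it lowers the
    classifier's confidence; large drops mark salient regions."""
    base = classify(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            masked = image.copy()
            masked[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            heat[i, j] = base - classify(masked)  # big drop = salient patch
    return heat

# Toy "classifier" whose confidence is just the brightness of the
# top-left corner of the image:
classify = lambda img: img[:2, :2].sum()
image = np.ones((4, 4))
heat = occlusion_heatmap(classify, image)
```

As expected, only the patch the toy classifier actually relies on (the top-left corner) gets a nonzero heat score.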

The need for explainable deep nets is all the more pressing because of the discovery of so-called "adversarial examples" (Goodfellow et al. 2014; Nguyen, Yosinski, & Clune 2015). These come in at least two forms: "perturbed images", which are natural photographs modified very slightly in a way that causes dramatic changes in classification by deep nets even though the difference is imperceptible to humans, and "rubbish images", which are purportedly meaningless to humans but are classified with high confidence scores by deep nets. Adversarial examples have led some to conclude that whatever understanding the net has of objects must be radically different from that of humans. Adversarial examples exhibit a number of surprising properties: though constructed from a particular training set, they are highly effective at fooling other nets trained on the same task, even nets with different training sets and different architectures. Furthermore, the search for effective countermeasures has led to frustrating failures. It has also been discovered, however, that perturbation methods can create images which fool humans (Elsayed et al. 2018), and human subjects can predict nets' preferred labels for rubbish images with high accuracy (Z. Zhou & Firestone 2019). Others have noted that the features nets detect in adversarial examples lead to reliable classifications in naturally-occurring data, challenging the idea that the nets' decisions should be counted as mistaken (Ilyas et al. 2019 [OIR]). These questions intersect with traditional issues about projectibility and induction, potentially offering new test cases for older philosophical conundrums in epistemology and philosophy of science (N. Goodman 1955; Quine 1969; Harman & Kulkarni 2007).
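The core trick behind perturbed images can be seen in a toy case. For a linear classifier, the gradient of the score with respect to the input is just the weight vector, so a gradient-sign perturbation in the spirit of Goodfellow et al. reduces to a few lines; the weights and "image" below are made-up values.

```python
import numpy as np

# Toy linear classifier: score = w . x.
w = np.array([0.5, -0.25, 0.75, -1.0])
x = np.array([1.0, 1.0, 1.0, 1.0])
epsilon = 0.1   # each pixel changes by at most 0.1: a tiny perturbation

score = w @ x                       # original classification score
x_adv = x - epsilon * np.sign(w)    # nudge every pixel against the gradient
adv_score = w @ x_adv               # drops by epsilon * sum(|w|)
```

Because every input dimension is pushed in its individually worst direction at once, many imperceptibly small changes add up to a large change in the score, which is why high-dimensional inputs like images are so vulnerable.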

Although deep learning has received an enormous amount of attention in computer science and from the popular press, there is surprisingly little published about it directly among philosophers (though this is beginning to change: Buckner 2018, 2019 [OIR]; Miracchi 2019; Shevlin & Halina 2019; and Zednik 2019 [OIR]). However, there are rich opportunities for philosophical research on deep learning. Examples of some relevant questions include:

  • What kinds of explanation or justification are needed to satisfy our worries about the reliability of deep neural networks in practical applications? What results in deep net research would be needed to assure us that the relevant explanations or justifications are at hand?
  • Can deep nets serve as explanatory models of biological cognition in cognitive neuroscience? If so, what kind of scientific explanations do they provide? Are they mechanistic, functional, or non-causal in nature?
  • What are the prospects for new breakthroughs in deep net natural language processing, and what would it take for these to throw new light on the systematicity controversy?
  • Does deep learning research change the terms of the conflict between radical connectionists and those who claim that symbolic processing models are required to explain higher level cognitive functioning?
  • Do deep nets like AlphaZero vindicate classical empiricism about higher reasoning? Or must they ultimately replicate more human biases and domain-specific knowledge to reason in the way that humans do?

Bibliography

  • Aizawa, Kenneth, 1994, "Representations without Rules, Connectionism and the Syntactic Argument", Synthese, 101(3): 465–492. doi:10.1007/BF01063898
  • –––, 1997a, "Exhibiting versus Explaining Systematicity: A Reply to Hadley and Hayward", Minds and Machines, 7(1): 39–55. doi:10.1023/A:1008203312152
  • –––, 1997b, "Explaining Systematicity", Mind & Language, 12(2): 115–136. doi:10.1111/j.1468-0017.1997.tb00065.x
  • –––, 2003, The Systematicity Arguments, Dordrecht: Kluwer.
  • –––, 2014, "A Tough Time to be Talking Systematicity", in Calvo and Symons 2014: 77–101.
  • Bechtel, William, 1987, "Connectionism and the Philosophy of Mind: An Overview", The Southern Journal of Philosophy, 26(S1): 17–41. doi:10.1111/j.2041-6962.1988.tb00461.x
  • –––, 1988, "Connectionism and Rules and Representation Systems: Are They Compatible?", Philosophical Psychology, 1(1): 5–16. doi:10.1080/09515088808572922
  • Bechtel, William and Adele Abrahamsen, 1990, Connectionism and the Mind: An Introduction to Parallel Processing in Networks, Cambridge, MA: Blackwell.
  • Bengio, Yoshua and Olivier Delalleau, 2011, "On the Expressive Power of Deep Architectures", in International Conference on Algorithmic Learning Theory (ALT 2011), Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann (eds.) (Lecture Notes in Computer Science 6925), Berlin, Heidelberg: Springer, 18–36. doi:10.1007/978-3-642-24412-4_3
  • Bengio, Yoshua, Thomas Mesnard, Asja Fischer, Saizheng Zhang, and Yuhuai Wu, 2017, "STDP-Compatible Approximation of Backpropagation in an Energy-Based Model", Neural Computation, 29(3): 555–577. doi:10.1162/NECO_a_00934
  • Bodén, Mikael and Lars Niklasson, 2000, "Semantic Systematicity and Context in Connectionist Networks", Connection Science, 12(2): 111–142. doi:10.1080/09540090050129754
  • Buckner, Cameron, 2018, "Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks", Synthese, 195(12): 5339–5372. doi:10.1007/s11229-018-01949-1
  • Butler, Keith, 1991, "Towards a Connectionist Cognitive Architecture", Mind & Language, 6(3): 252–272. doi:10.1111/j.1468-0017.1991.tb00191.x
  • Calvo Garzón, Francisco, 2003, "Connectionist Semantics and the Collateral Information Challenge", Mind & Language, 18(1): 77–94. doi:10.1111/1468-0017.00215
  • Calvo, Paco and John Symons, 2014, The Architecture of Cognition: Rethinking Fodor and Pylyshyn's Systematicity Challenge, Cambridge: MIT Press.
  • Chalmers, David J., 1990, "Syntactic Transformations on Distributed Representations", Connection Science, 2(1–2): 53–62. doi:10.1080/09540099008915662
  • –––, 1993, "Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong", Philosophical Psychology, 6(3): 305–319. doi:10.1080/09515089308573094
  • Chomsky, Noam, 1965, Aspects of the Theory of Syntax, Cambridge, MA: MIT Press.
  • Christiansen, Morten H. and Nick Chater, 1994, "Generalization and Connectionist Language Learning", Mind & Language, 9(3): 273–287. doi:10.1111/j.1468-0017.1994.tb00226.x
  • –––, 1999a, "Toward a Connectionist Model of Recursion in Human Linguistic Performance", Cognitive Science, 23(2): 157–205. doi:10.1207/s15516709cog2302_2
  • –––, 1999b, "Connectionist Natural Language Processing: The State of the Art", Cognitive Science, 23(4): 417–437. doi:10.1207/s15516709cog2304_2
  • Churchland, Paul M., 1989, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, MA: MIT Press.
  • –––, 1995, The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain, Cambridge, MA: MIT Press.
  • –––, 1998, "Conceptual Similarity Across Sensory and Neural Diversity: The Fodor/Lepore Challenge Answered", Journal of Philosophy, 95(1): 5–32. doi:10.5840/jphil19989514
  • Clark, Andy, 1989, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing (Explorations in Cognitive Science), Cambridge, MA: MIT Press.
  • –––, 1990 [1995], "Connectionist Minds", Proceedings of the Aristotelian Society, 90: 83–102. Reprinted in MacDonald and MacDonald 1995: 339–356. doi:10.1093/aristotelian/90.1.83
  • –––, 1993, Associative Engines: Connectionism, Concepts, and Representational Change, Cambridge, MA: MIT Press.
  • –––, 2013, "Whatever next? Predictive Brains, Situated Agents, and the Future of Cognitive Science", Behavioral and Brain Sciences, 36(3): 181–204. doi:10.1017/S0140525X12000477
  • Clark, Andy and Rudi Lutz (eds.), 1992, Connectionism in Context, London: Springer. doi:10.1007/978-1-4471-1923-4
  • Cottrell, G.W. and S.L. Small, 1983, "A Connectionist Scheme for Modeling Word Sense Disambiguation", Cognition and Brain Theory, 6(1): 89–120.
  • Cummins, Robert, 1991, "The Role of Representation in Connectionist Explanations of Cognitive Capacities", in Ramsey, Stich, and Rumelhart 1991: 91–114.
  • –––, 1996, "Systematicity", Journal of Philosophy, 93(12): 591–614. doi:10.2307/2941118
  • Cummins, Robert and Georg Schwarz, 1991, "Connectionism, Computation, and Cognition", in Horgan and Tienson 1991: 60–73. doi:10.1007/978-94-011-3524-5_3
  • Davies, Martin, 1989, "Connectionism, Modularity, and Tacit Knowledge", The British Journal for the Philosophy of Science, 40(4): 541–555. doi:10.1093/bjps/40.4.541
  • –––, 1991, "Concepts, Connectionism and the Language of Thought", in Ramsey, Stich, and Rumelhart 1991: 229–257.
  • Dinsmore, John (ed.), 1992, The Symbolic and Connectionist Paradigms: Closing the Gap, Hillsdale, NJ: Erlbaum.
  • Ehsan, Upol, Brent Harrison, Larry Chan, and Mark O. Riedl, 2018, "Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations", in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES '18), New Orleans, LA: ACM Press, 81–87. doi:10.1145/3278721.3278736
  • Eliasmith, Chris, 2007, "How to Build a Brain: From Function to Implementation", Synthese, 159(3): 373–388. doi:10.1007/s11229-007-9235-0
  • –––, 2013, How to Build a Brain: A Neural Architecture for Biological Cognition, New York: Oxford University Press.
  • Elman, Jeffrey L., 1991, "Distributed Representations, Simple Recurrent Networks, and Grammatical Structure", in Touretzky 1991: 91–122. doi:10.1007/978-1-4615-4008-3_5
  • Elman, Jeffrey, Elizabeth Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett, 1996, Rethinking Innateness: A Connectionist Perspective on Development, Cambridge, MA: MIT Press.
  • Elsayed, Gamaleldin F., Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein, 2018, "Adversarial Examples That Fool Both Computer Vision and Time-Limited Humans", in Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS '18), 31: 3914–3924.
  • Fodor, Jerry A., 1988, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT Press.
  • –––, 1997, "Connectionism and the Problem of Systematicity (Continued): Why Smolensky's Solution Still Doesn't Work", Cognition, 62(1): 109–119. doi:10.1016/S0010-0277(96)00780-9
  • Fodor, Jerry and Ernest Lepore, 1992, Holism: A Shopper's Guide, Cambridge: Blackwell.
  • Fodor, Jerry and Ernie Lepore, 1999, "All at Sea in Semantic Space: Churchland on Meaning Similarity", Journal of Philosophy, 96(8): 381–403. doi:10.5840/jphil199996818
  • Fodor, Jerry and Brian P. McLaughlin, 1990, "Connectionism and the Problem of Systematicity: Why Smolensky's Solution Doesn't Work", Cognition, 35(2): 183–204. doi:10.1016/0010-0277(90)90014-B
  • Fodor, Jerry A. and Zenon W. Pylyshyn, 1988, "Connectionism and Cognitive Architecture: A Critical Analysis", Cognition, 28(1–2): 3–71. doi:10.1016/0010-0277(88)90031-5
  • Friston, Karl, 2005, "A Theory of Cortical Responses", Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456): 815–836. doi:10.1098/rstb.2005.1622
  • Friston, Karl J. and Klaas E. Stephan, 2007, "Free-Energy and the Brain", Synthese, 159(3): 417–458. doi:10.1007/s11229-007-9237-y
  • Fukushima, Kunihiko, 1980, "Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position", Biological Cybernetics, 36(4): 193–202. doi:10.1007/BF00344251
  • Garfield, Jay L., 1997, "Mentalese Not Spoken Here: Computation, Cognition and Causation", Philosophical Psychology, 10(4): 413–435. doi:10.1080/09515089708573231
  • Garson, James W., 1991, "What Connectionists Cannot Do: The Threat to Classical AI", in Horgan and Tienson 1991: 113–142. doi:10.1007/978-94-011-3524-5_6
  • –––, 1994, "Cognition without Classical Architecture", Synthese, 100(2): 291–305. doi:10.1007/BF01063812
  • –––, 1997, "Syntax in a Dynamic Brain", Synthese, 110(3): 343–355.
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning, Cambridge, MA: MIT Press.
  • Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy, 2015, "Explaining and Harnessing Adversarial Examples", in 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, May 7–9, 2015, available online.
  • Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, 2014, "Generative Adversarial Nets", in Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS '14), Cambridge, MA: MIT Press, 2: 2672–2680.
  • Goodman, Bryce and Seth Flaxman, 2017, "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation'", AI Magazine, 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goodman, Nelson, 1955, Fact, Fiction, and Forecast, Cambridge, MA: Harvard University Press.
  • Grush, Rick, 2004, "The Emulation Theory of Representation: Motor Control, Imagery, and Perception", Behavioral and Brain Sciences, 27(3): 377–396. doi:10.1017/S0140525X04000093
  • Guarini, Marcello, 2001, "A Defence of Connectionism Against the 'Syntactic' Argument", Synthese, 128(3): 287–317. doi:10.1023/A:1011905917986
  • Hadley, Robert F., 1994a, "Systematicity in Connectionist Language Learning", Mind & Language, 9(3): 247–272. doi:10.1111/j.1468-0017.1994.tb00225.x
  • –––, 1994b, "Systematicity Revisited: Reply to Christiansen and Chater and Niklasson and van Gelder", Mind & Language, 9(4): 431–444. doi:10.1111/j.1468-0017.1994.tb00317.x
  • –––, 1997a, "Explaining Systematicity: A Reply to Kenneth Aizawa", Minds and Machines, 7(4): 571–579. doi:10.1023/A:1008252322227
  • –––, 1997b, "Cognition, Systematicity and Nomic Necessity", Mind & Language, 12(2): 137–153. doi:10.1111/j.1468-0017.1997.tb00066.x
  • –––, 2004, "On The Proper Treatment of Semantic Systematicity", Minds and Machines, 14(2): 145–172. doi:10.1023/B:MIND.0000021693.67203.46
  • Hadley, Robert F. and Michael B. Hayward, 1997, "Strong Semantic Systematicity from Hebbian Connectionist Learning", Minds and Machines, 7(1): 1–37. doi:10.1023/A:1008252408222
  • Hanson, Stephen J. and Judy Kegl, 1987, "PARSNIP: A Connectionist Network that Learns Natural Language Grammar from Exposure to Natural Language Sentences", Ninth Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Erlbaum, pp. 106–119.
  • Harman, Gilbert and Sanjeev Kulkarni, 2007, Reliable Reasoning: Induction and Statistical Learning Theory, Cambridge, MA: MIT Press.
  • Hatfield, Gary, 1991a, "Representation in Perception and Cognition: Connectionist Affordances", in Ramsey, Stich, and Rumelhart 1991: 163–195.
  • –––, 1991b, "Representation and Rule-Instantiation in Connectionist Systems", in Horgan and Tienson 1991: 90–112. doi:10.1007/978-94-011-3524-5_5
  • Hawthorne, John, 1989, "On the Compatibility of Connectionist and Classical Models", Philosophical Psychology, 2(1): 5–15. doi:10.1080/09515088908572956
  • Haybron, Daniel M., 2000, "The Causal and Explanatory Role of Information Stored in Connectionist Networks", Minds and Machines, 10(3): 361–380. doi:10.1023/A:1026545231550
  • Hinton, Geoffrey E., 1990 [1991], "Mapping Part-Whole Hierarchies into Connectionist Networks", Artificial Intelligence, 46(1–2): 47–75. Reprinted in Hinton 1991: 47–76. doi:10.1016/0004-3702(90)90004-J
  • ––– (ed.), 1991, Connectionist Symbol Processing, Cambridge, MA: MIT Press.
  • –––, 1992, "How Neural Networks Learn from Experience", Scientific American, 267(3): 145–151.
  • –––, 2010, "Learning to Represent Visual Input", Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1537): 177–184. doi:10.1098/rstb.2009.0200
  • Hinton, Geoffrey E., James L. McClelland, and David E. Rumelhart, 1986, "Distributed Representations", in Rumelhart, McClelland, and the PDP group 1986: chapter 3.
  • Hohwy, Jakob, 2012, "Attention and Conscious Perception in the Hypothesis Testing Brain", Frontiers in Psychology, 3(96): 1–14. doi:10.3389/fpsyg.2012.00096
  • Hong, Ha, Daniel L.K. Yamins, Najib J. Majaj, and James J. DiCarlo, 2016, "Explicit Information for Category-Orthogonal Object Properties Increases along the Ventral Stream", Nature Neuroscience, 19(4): 613–622. doi:10.1038/nn.4247
  • Horgan, Terence E. and John Tienson, 1989, "Representations without Rules", Philosophical Topics, 17(1): 147–174.
  • –––, 1990, "Soft Laws", Midwest Studies In Philosophy, 15: 256–279. doi:10.1111/j.1475-4975.1990.tb00217.x
  • ––– (eds.), 1991, Connectionism and the Philosophy of Mind, Dordrecht: Kluwer. doi:10.1007/978-94-011-3524-5
  • –––, 1996, Connectionism and the Philosophy of Psychology, Cambridge, MA: MIT Press.
  • Hosoya, Toshihiko, Stephen A. Baccus, and Markus Meister, 2005, "Dynamic Predictive Coding by the Retina", Nature, 436(7047): 71–77. doi:10.1038/nature03689
  • Huang, Yanping and Rajesh P. N. Rao, 2011, "Predictive Coding", Wiley Interdisciplinary Reviews: Cognitive Science, 2(5): 580–593. doi:10.1002/wcs.142
  • Hubel, David H. and Torsten N. Wiesel, 1965, "Receptive Fields and Functional Architecture in Two Nonstriate Visual Areas (18 and 19) of the Cat", Journal of Neurophysiology, 28(2): 229–289. doi:10.1152/jn.1965.28.2.229
  • Jansen, Peter A. and Scott Watter, 2012, "Strong Systematicity through Sensorimotor Conceptual Grounding: An Unsupervised, Developmental Approach to Connectionist Sentence Processing", Connection Science, 24(1): 25–55. doi:10.1080/09540091.2012.664121
  • Johnson, Kent, 2004, "On the Systematicity of Language and Thought", Journal of Philosophy, 101(3): 111–139. doi:10.5840/jphil2004101321
  • Jones, Matt and Bradley C. Love, 2011, "Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition", Behavioral and Brain Sciences, 34(4): 169–188. doi:10.1017/S0140525X10003134
  • Khaligh-Razavi, Seyed-Mahdi and Nikolaus Kriegeskorte, 2014, "Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation", PLoS Computational Biology, 10(11): e1003915. doi:10.1371/journal.pcbi.1003915
  • Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton, 2012, "Imagenet Classification with Deep Convolutional Neural Networks", Advances in Neural Information Processing Systems, 25: 1097–1105.
  • Kubilius, Jonas, Stefania Bracci, and Hans P. Op de Beeck, 2016, "Deep Neural Networks as a Computational Model for Human Shape Sensitivity", PLOS Computational Biology, 12(4): e1004896. doi:10.1371/journal.pcbi.1004896
  • Laakso, Aarre and Garrison Cottrell, 2000, "Content and Cluster Analysis: Assessing Representational Similarity in Neural Systems", Philosophical Psychology, 13(1): 47–76. doi:10.1080/09515080050002726
  • Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum, 2015, "Human-Level Concept Learning through Probabilistic Program Induction", Science, 350(6266): 1332–1338. doi:10.1126/science.aab3050
  • Lake, Brenden M., Wojciech Zaremba, Rob Fergus, and Todd M. Gureckis, 2015, "Deep Neural Networks Predict Category Typicality Ratings for Images", Proceedings of the 37th Annual Cognitive Science Society, Pasadena, CA, 22–25 July 2015, available online.
  • Lillicrap, Timothy P., Daniel Cownden, Douglas B. Tweed, and Colin J. Akerman, 2016, "Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning", Nature Communications, 7(1): 13276. doi:10.1038/ncomms13276
  • Loula, João, Marco Baroni, and Brenden Lake, 2018, "Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks", in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium: Association for Computational Linguistics, 108–114. doi:10.18653/v1/W18-5413
  • MacDonald, Cynthia and Graham MacDonald (eds), 1995, Connectionism (Debates on Psychological Explanation, 2), Oxford: Blackwell.
  • Matthews, Robert J., 1997, "Can Connectionists Explain Systematicity?", Mind & Language, 12(2): 154–177. doi:10.1111/j.1468-0017.1997.tb00067.x
  • Marcus, Gary F., 1998, "Rethinking Eliminative Connectionism", Cognitive Psychology, 37(3): 243–282. doi:10.1006/cogp.1998.0694
  • –––, 2001, The Algebraic Mind: Integrating Connectionism and Cognitive Science, Cambridge, MA: MIT Press.
  • McClelland, James L. and Jeffrey L. Elman, 1986, "The TRACE Model of Speech Perception", Cognitive Psychology, 18(1): 1–86. doi:10.1016/0010-0285(86)90015-0
  • McClelland, James L., David E. Rumelhart, and the PDP Research Group (eds), 1986, Parallel Distributed Processing, Volume II: Explorations in the Microstructure of Cognition: Psychological and Biological Models, Cambridge, MA: MIT Press.
  • McLaughlin, Brian P., 1993, "The Connectionism/Classicism Battle to Win Souls", Philosophical Studies, 71(2): 163–190. doi:10.1007/BF00989855
  • Miikkulainen, Risto, 1993, Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory, Cambridge, MA: MIT Press.
  • Miikkulainen, Risto and Michael G. Dyer, 1991, "Natural Language Processing with Modular PDP Networks and Distributed Lexicon", Cognitive Science, 15(3): 343–399. doi:10.1207/s15516709cog1503_2
  • Miracchi, Lisa, 2019, "A Competence Framework for Artificial Intelligence Research", Philosophical Psychology, 32(5): 588–633. doi:10.1080/09515089.2019.1607692
  • Montavon, Grégoire, Wojciech Samek, and Klaus-Robert Müller, 2018, "Methods for Interpreting and Understanding Deep Neural Networks", Digital Signal Processing, 73: 1–15. doi:10.1016/j.dsp.2017.10.011
  • Montúfar, Guido, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio, 2014, "On the Number of Linear Regions of Deep Neural Networks", in Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS '14), Cambridge, MA: MIT Press, 2: 2924–2932.
  • Morris, William C., Garrison W. Cottrell, and Jeffrey Elman, 2000, "A Connectionist Simulation of the Empirical Acquisition of Grammatical Relations", in Wermter and Sun 2000: 175–193. doi:10.1007/10719871_12
  • Nguyen, Anh, Jason Yosinski, and Jeff Clune, 2015, "Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images", Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 427–436, available online.
  • Niklasson, Lars F. and Tim van Gelder, 1994, "On Being Systematically Connectionist", Mind & Language, 9(3): 288–302. doi:10.1111/j.1468-0017.1994.tb00227.x
  • O'Reilly, Randall C., 1996, "Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm", Neural Computation, 8(5): 895–938. doi:10.1162/neco.1996.8.5.895
  • Phillips, Steven, 2002, "Does Classicism Explain Universality?", Minds and Machines, 12(3): 423–434. doi:10.1023/A:1016160512967
  • Pinker, Steven and Jacques Mehler (eds.), 1988, Connections and Symbols, Cambridge, MA: MIT Press.
  • Pinker, Steven and Alan Prince, 1988, "On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition", Cognition, 28(1–2): 73–193. doi:10.1016/0010-0277(88)90032-7
  • Pollack, Jordan B., 1989, "Implications of Recursive Distributed Representations", in Touretzky 1989: 527–535, available online.
  • –––, 1990 [1991], "Recursive Distributed Representations", Artificial Intelligence, 46(1–2): 77–105. Reprinted in Hinton 1991: 77–106. doi:10.1016/0004-3702(90)90005-K
  • –––, 1991, "Induction of Dynamical Recognizers", in Touretzky 1991: 123–148. doi:10.1007/978-1-4615-4008-3_6
  • Port, Robert F., 1990, "Representation and Recognition of Temporal Patterns", Connection Science, 2(1–2): 151–176. doi:10.1080/09540099008915667
  • Port, Robert F. and Timothy van Gelder, 1991, "Representing Aspects of Language", Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Erlbaum, 487–492, available online.
  • Quine, W. V., 1969, "Natural Kinds", in Essays in Honor of Carl G. Hempel, Nicholas Rescher (ed.), Dordrecht: Springer Netherlands, 5–23. doi:10.1007/978-94-017-1466-2_2
  • Raghu, Maithra, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein, 2017, "On the Expressive Power of Deep Neural Networks", in Proceedings of the 34th International Conference on Machine Learning, 70: 2847–2854, available online.
  • Ramsey, William, 1997, "Do Connectionist Representations Earn Their Explanatory Keep?", Mind & Language, 12(1): 34–66. doi:10.1111/j.1468-0017.1997.tb00061.x
  • Ramsey, William, Stephen P. Stich, and Joseph Garon, 1991, "Connectionism, Eliminativism, and the Future of Folk Psychology", in Ramsey, Stich, and Rumelhart 1991: 199–228.
  • Ramsey, William, Stephen P. Stich, and David E. Rumelhart, 1991, Philosophy and Connectionist Theory, Hillsdale, NJ: Erlbaum.
  • Rao, Rajesh P. N. and Dana H. Ballard, 1999, "Predictive Coding in the Visual Cortex: A Functional Interpretation of Some Extra-Classical Receptive-Field Effects", Nature Neuroscience, 2(1): 79–87. doi:10.1038/4580
  • Rohde, Douglas L. T. and David C. Plaut, 2003, "Connectionist Models of Language Processing", Cognitive Studies (Japan), 10(1): 10–28. doi:10.11225/jcss.10.10
  • Roth, Martin, 2005, "Program Execution in Connectionist Networks", Mind & Language, 20(4): 448–467. doi:10.1111/j.0268-1064.2005.00295.x
  • Rumelhart, David E. and James L. McClelland, 1986, "On Learning the Past Tenses of English Verbs", in McClelland, Rumelhart, and the PDP group 1986: 216–271.
  • Rumelhart, David E., James L. McClelland, and the PDP Research Group (eds), 1986, Parallel Distributed Processing, Volume 1: Explorations in the Microstructure of Cognition: Foundations, Cambridge, MA: MIT Press.
  • Sadler, Matthew and Natasha Regan, 2019, Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI, Alkmaar: New in Chess.
  • Schmidhuber, Jürgen, 2015, "Deep Learning in Neural Networks: An Overview", Neural Networks, 61: 85–117. doi:10.1016/j.neunet.2014.09.003
  • Schwarz, Georg, 1992, "Connectionism, Processing, Memory", Connection Science, 4(3–4): 207–226. doi:10.1080/09540099208946616
  • Sejnowski, Terrence J. and Charles R. Rosenberg, 1987, "Parallel Networks that Learn to Pronounce English Text", Complex Systems, 1(1): 145–168, available online.
  • Servan-Schreiber, David, Axel Cleeremans, and James L. McClelland, 1991, "Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks", in Touretzky 1991: 57–89. doi:10.1007/978-1-4615-4008-3_4
  • Shastri, Lokendra and Venkat Ajjanagadde, 1993, "From Simple Associations to Systematic Reasoning: A Connectionist Representation of Rules, Variables and Dynamic Bindings Using Temporal Synchrony", Behavioral and Brain Sciences, 16(3): 417–451. doi:10.1017/S0140525X00030910
  • Shea, Nicholas, 2007, "Content and Its Vehicles in Connectionist Systems", Mind & Language, 22(3): 246–269. doi:10.1111/j.1468-0017.2007.00308.x
  • Shevlin, Henry and Marta Halina, 2019, "Apply Rich Psychological Terms in AI with Care", Nature Machine Intelligence, 1(4): 165–167. doi:10.1038/s42256-019-0039-y
  • Shultz, Thomas R. and Alan C. Bale, 2001, "Neural Network Simulation of Infant Familiarization to Artificial Sentences", Infancy, 2(4): 501–536.
  • –––, 2006, "Neural Networks Discover a Near-Identity Relation to Distinguish Simple Syntactic Forms", Minds and Machines, 16(2): 107–139. doi:10.1007/s11023-006-9029-z
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, et al., 2018, "A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play", Science, 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Smolensky, Paul, 1987, "The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn", The Southern Journal of Philosophy, 26(S1): 137–161. doi:10.1111/j.2041-6962.1988.tb00470.x
  • –––, 1988, "On the Proper Treatment of Connectionism", Behavioral and Brain Sciences, 11(1): 1–23. doi:10.1017/S0140525X00052432
  • –––, 1990 [1991], "Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems", Artificial Intelligence, 46(1–2): 159–216. Reprinted in Hinton 1991: 159–216. doi:10.1016/0004-3702(90)90007-M
  • –––, 1995, "Constituent Structure and Explanation in an Integrated Connectionist/Symbolic Cognitive Architecture", in MacDonald and MacDonald 1995.
  • St. John, Mark F. and James L. McClelland, 1990 [1991],“Learning and Applying Contextual Constraints in SentenceComprehension”,Artificial Intelligence, 46(1–2):217–257. Reprinted in Hinton 1991: 217–257doi:10.1016/0004-3702(90)90008-N
  • Tomberlin, James E. (ed.), 1995,Philosophical Perspectives 9:AI, Connectionism and Philosophical Psychology, Atascadero:Ridgeview Press.
  • Touretzky, David S. (ed.), 1989,Advances in NeuralInformation Processing Systems I, San Mateo, CA: Kaufmann,available online.
  • ––– (ed.), 1990,Advances in NeuralInformation Processing Systems II, San Mateo, CA: Kaufmann.
  • ––– (ed.), 1991,Connectionist Approaches toLanguage Learning, Boston, MA: Springer US.doi:10.1007/978-1-4615-4008-3
  • Touretzky, David S., Geoffrey E. Hinton, and Terrence JosephSejnowski (eds), 1988,Proceedings of the 1988 ConnectionistModels Summer School, San Mateo, CA: Kaufmann.
  • Van Gelder, Tim, 1990, “Compositionality: A ConnectionistVariation on a Classical Theme”,Cognitive Science,14(3): 355–384. doi:10.1016/0364-0213(90)90017-Q
  • –––, 1991, “What is the ‘D’ inPDP?” in Ramsey, Stich, and Rumelhart 1991: 33–59.
  • Van Gelder, Timothy and Robert Port, 1993, “Beyond Symbolic:Prolegomena to a Kama-Sutra of Compositionality”, in Vasant GHonavar, Leonard Uhr (eds.),Symbol Processing and ConnectionistModels in AI and Cognition: Steps Towards Integration, Boston:Academic Press.
  • Vilcu, Marius and Robert F. Hadley, 2005, “Two Apparent‘Counterexamples’ to Marcus: A Closer Look”,Minds and Machines, 15(3–4): 359–382.doi:10.1007/s11023-005-9000-4
  • Von Eckardt, Barbara, 2003, “The Explanatory Need for MentalRepresentations in Cognitive Science”,Mind &Language, 18(4): 427–439. doi:10.1111/1468-0017.00235
  • –––, 2005, “Connectionism and thePropositional Attitudes”, in Christina Erneling and David MartelJohnson (eds.),The Mind as a Scientific Object: Between Brain andCulture, New York: Oxford University Press.
  • Waltz, David L. and Jordan B. Pollack, 1985, “MassivelyParallel Parsing: A Strongly Interactive Model of Natural LanguageInterpretation*”,Cognitive Science, 9(1): 51–74.doi:10.1207/s15516709cog0901_4
  • Wermter, Stefan and Ron Sun (eds.), 2000,Hybrid NeuralSystems, (Lecture Notes in Computer Science 1778), Berlin,Heidelberg: Springer Berlin Heidelberg. doi:10.1007/10719871
  • Yamins, Daniel L. K. and James J. DiCarlo, 2016, “UsingGoal-Driven Deep Learning Models to Understand Sensory Cortex”,Nature Neuroscience, 19(3): 356–365.doi:10.1038/nn.4244
  • Yosinski, Jason, Jeff Clune, Anh Nguyen, Thomas Fuchs, and HodLipson, 2015, “Understanding Neural Networks Through DeepVisualization”,Deep Learning Workshop, 31st InternationalConference on Machine Learning, Lille, France,available online.
  • Zhou, Zhenglong and Chaz Firestone, 2019, “Humans CanDecipher Adversarial Images”,Nature Communications,10(1): 1334. doi:10.1038/s41467-019-08931-6


Copyright © 2019 by
Cameron Buckner <cameron.buckner@ufl.edu>
James Garson <JGarson@uh.edu>

The Stanford Encyclopedia of Philosophy iscopyright © 2025 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054
