This target article presents a new computational theory of explanatory coherence that applies to the acceptance and rejection of scientific hypotheses as well as to reasoning in everyday life. The theory consists of seven principles that establish relations of local coherence between a hypothesis and other propositions. A hypothesis coheres with propositions that it explains, or that explain it, or that participate with it in explaining other propositions, or that offer analogous explanations. Propositions are incoherent with each other if they are contradictory. Propositions that describe the results of observation have a degree of acceptability on their own. An explanatory hypothesis is accepted if it coheres better overall than its competitors. The power of the seven principles is shown by their implementation in a connectionist program called ECHO, which treats hypothesis evaluation as a constraint satisfaction problem. Inputs about the explanatory relations are used to create a network of units representing propositions, while coherence and incoherence relations are encoded by excitatory and inhibitory links. ECHO provides an algorithm for smoothly integrating theory evaluation based on considerations of explanatory breadth, simplicity, and analogy. It has been applied to such important scientific cases as Lavoisier's argument for oxygen against the phlogiston theory and Darwin's argument for evolution against creationism, and also to cases of legal reasoning. The theory of explanatory coherence has implications for artificial intelligence, psychology, and philosophy.
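The network mechanism the abstract describes can be illustrated with a minimal sketch. This is not Thagard's actual ECHO implementation; the propositions, weights, decay rate, and update rule below are assumed values chosen only to show how excitatory links (coherence), inhibitory links (incoherence), and a clamped evidence unit let the better-cohering hypothesis win during parallel settling.

```python
# Minimal sketch of ECHO-style constraint satisfaction (hypothetical
# parameters, not Thagard's published implementation).
units = ["oxygen", "phlogiston", "evidence"]   # hypothetical propositions
act = {u: 0.01 for u in units}                 # small initial activations
act["evidence"] = 1.0                          # observation units are clamped high

# Excitatory links encode coherence, inhibitory links incoherence.
weights = {
    ("oxygen", "evidence"): 0.04,      # oxygen hypothesis explains the evidence
    ("phlogiston", "evidence"): 0.02,  # phlogiston explains it less well
    ("oxygen", "phlogiston"): -0.06,   # competing hypotheses inhibit each other
}

def w(a, b):
    """Symmetric lookup of the link weight between two units."""
    return weights.get((a, b), weights.get((b, a), 0.0))

for _ in range(200):  # update all units in parallel until the network settles
    new_act = {}
    for u in units:
        if u == "evidence":            # keep the data unit clamped
            new_act[u] = act[u]
            continue
        net = sum(w(u, v) * act[v] for v in units if v != u)
        a = act[u] * 0.95              # decay toward zero
        a += net * (1.0 - a) if net > 0 else net * (a + 1.0)
        new_act[u] = max(-1.0, min(1.0, a))
    act = new_act

# The hypothesis that coheres better overall ends with higher activation.
print(act["oxygen"] > act["phlogiston"])
```

After settling, the oxygen unit dominates: it receives stronger excitation from the evidence and suppresses its rival through the inhibitory link, mirroring the abstract's claim that acceptance goes to the hypothesis that coheres better overall than its competitors.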
Theories of language production propose that utterances are constructed by a mechanism that separates linguistic content from linguistic structure. Linguistic content is retrieved from the mental lexicon, and is then inserted into slots in linguistic structures or frames. Support for this kind of model at the phonological level comes from patterns of phonological speech errors. We present an alternative account of these patterns using a connectionist or parallel distributed processing (PDP) model that learns to produce sequences of phonological features. The model's errors exhibit some of the properties of human speech errors, specifically, properties that have been attributed to the action of phonological rules, frames, or other structural generalizations.
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application. At the lower level, computation has the character of massively parallel satisfaction of soft numerical constraints; at the higher level, this can lead to competence characterizable by hard rules. Performance will typically deviate from this competence since behavior is achieved not by interpreting hard rules but by satisfying soft constraints. The result is a picture in which traditional and connectionist theoretical constructs collaborate intimately to provide an understanding of cognition.
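The idea that weighted soft constraints can yield competence describable by a hard rule can be sketched concretely. This is a hypothetical toy example, not the abstract's actual model: the candidate forms, constraints, and weights are invented purely to show that the numerically best-satisfying candidate coincides with what a categorical rule would predict.

```python
# Toy sketch: soft numerical constraints whose best-satisfying output
# matches a hard rule (hypothetical example, not the published model).
candidates = ["cats", "catz", "cat"]   # hypothetical surface forms for a plural

def harmony(form):
    """Sum of weighted soft-constraint satisfactions (assumed weights)."""
    h = 0.0
    if form.endswith("s") or form.endswith("z"):
        h += 2.0    # soft constraint: the plural should be overtly marked
    if form.endswith("z"):
        h -= 3.0    # soft constraint: final voicing should agree with the stem
    return h

# Parallel soft-constraint satisfaction picks the maximum-harmony candidate.
best = max(candidates, key=harmony)
print(best)  # the winner obeys the hard rule "add /s/ after a voiceless stem"
```

Because the winner is selected numerically rather than by applying the rule, performance can deviate from the rule-like competence when noise or competing constraints shift the harmony landscape, which is the abstract's point about competence versus performance.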
We suggest that the theory of dynamical systems provides a revealing general framework for modeling the representations and mechanisms underlying syntactic processing. We show how a particular dynamical model, the Visitation Set Gravitation model of Tabor, Juliano, and Tanenhaus (1997), develops syntactic representations and models a set of contingent frequency effects in parsing that are problematic for other models. We also present new simulations showing how the model accounts for semantic effects in parsing, and propose a new account of the distinction between syntactic and semantic incongruity. The results show how symbolic structures useful in parsing arise as emergent properties of connectionist dynamical systems.
Which notion of computation (if any) is essential for explaining cognition? Five answers to this question are discussed in the paper. (1) The classicist answer: symbolic (digital) computation is required for explaining cognition; (2) The broad digital computationalist answer: digital computation broadly construed is required for explaining cognition; (3) The connectionist answer: sub-symbolic computation is required for explaining cognition; (4) The computational neuroscientist answer: neural computation (that, strictly, is neither digital nor analogue) is required for explaining cognition; (5) The extreme dynamicist answer: computation is not required for explaining cognition. The first four answers are only accurate to a first approximation. But the “devil” is in the details. The last answer cashes in on the parenthetical “if any” in the question above. The classicist argues that cognition is symbolic computation. But digital computationalism need not be equated with classicism. Indeed, computationalism can, in principle, range from digital (and analogue) computationalism through (the weaker thesis of) generic computationalism to (the even weaker thesis of) digital (or analogue) pancomputationalism. Connectionism, which has traditionally been criticised by classicists for being non-computational, can be plausibly construed as being either analogue or digital computationalism (depending on the type of connectionist networks used). Computational neuroscience invokes the notion of neural computation that may (possibly) be interpreted as a sui generis type of computation. The extreme dynamicist argues that the time has come for a post-computational cognitive science. This paper is an attempt to shed some light on this debate by examining various conceptions and misconceptions of (particularly digital) computation.
Work on analogy has been done from a number of disciplinary perspectives throughout the history of Western thought. This work is a multidisciplinary guide to theorizing about analogy. It contains 1,406 references, primarily to journal articles and monographs, and primarily to English language material. Classical through to contemporary sources are included. The work is classified into eight different sections (with a number of subsections). A brief introduction to each section is provided. Keywords and key expressions of importance to research on analogy are discussed in the introductory material. Electronic resources for conducting research on analogy are listed as well.
Along with the increasing popularity of connectionist language models has come a number of provocative suggestions about the challenge these models present to Chomsky's arguments for nativism. The aim of this paper is to assess these claims. We begin by reconstructing Chomsky's argument from the poverty of the stimulus and arguing that it is best understood as three related arguments, with increasingly strong conclusions. Next, we provide a brief introduction to connectionism and give a quick survey of recent efforts to develop networks that model various aspects of human linguistic behavior. Finally, we explore the implications of this research for Chomsky's arguments. Our claim is that the relation between connectionism and Chomsky's views on innate knowledge is more complicated than many have assumed, and that even if these models enjoy considerable success the threat they pose for linguistic nativism is small.
Words become associated following repeated co-occurrence episodes. This process might be further determined by the semantic characteristics of the words. The present study focused on how semantic and episodic factors interact in incidental formation of word associations. First, we found that human participants associate semantically related words more easily than unrelated words; this advantage increased linearly with repeated co-occurrence. Second, we developed a computational model, SEMANT, suggesting a possible mechanism for this semantic-episodic interaction. In SEMANT, episodic associations are implemented through lateral connections between nodes in a pre-existent self-organized map of word semantics. These connections are strengthened at each instance of concomitant activation, in proportion to the overlap of the activity waves of the activated nodes. In computer simulations SEMANT replicated the dynamics of associative learning in humans and led to testable predictions concerning normal associative learning as well as impaired learning in a diffuse semantic system like that characteristic of schizophrenia.
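The learning step the abstract describes can be sketched in a few lines. This is an assumed form of the mechanism, not the published SEMANT implementation: the 1-D map, Gaussian activity bumps, learning rate, and word positions are hypothetical, chosen only to show how an update proportional to activity overlap makes related words (nearby on the map) associate faster than unrelated ones.

```python
# Sketch of a SEMANT-style episodic update (assumed form, hypothetical
# parameters): lateral connections strengthen in proportion to the
# overlap of the activity waves evoked by two co-occurring words.
import numpy as np

MAP_SIZE = 10
LR = 0.1  # learning rate (assumed)

def activity(center, width=1.5):
    """Gaussian activity bump on a 1-D map, standing in for the response
    of a pre-existent self-organized semantic map to one word."""
    pos = np.arange(MAP_SIZE)
    return np.exp(-((pos - center) ** 2) / (2 * width ** 2))

def cooccur_strength(center_a, center_b):
    """Connection increment for one co-occurrence episode."""
    a, b = activity(center_a), activity(center_b)
    overlap = np.minimum(a, b).sum()   # overlapping activity of the two waves
    return LR * overlap                # update is proportional to the overlap

# Semantically related words map to nearby nodes, so their activity waves
# overlap more and the association grows faster per episode; with a fixed
# per-episode increment, total strength grows linearly with repetition.
related = cooccur_strength(3, 4)       # neighbors on the map
unrelated = cooccur_strength(1, 8)     # distant on the map
print(related > unrelated)
```

Because each episode adds the same increment, repeating the co-occurrence n times multiplies the association by n, matching the abstract's linear increase with repeated co-occurrence, while the overlap term carries the semantic-relatedness advantage.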
Uses connectionism (neural networks) to extract the "gist" of a story, representing a context that is carried forward to disambiguate incoming words as a text is processed.
How do we construct ad hoc concepts, especially those characterised by emergent properties? A reasonable hypothesis, suggested both in psychology and in pragmatics, is that some sort of inferential processing must be involved. I argue that this inferential processing can be accounted for in associative terms. My argument is based on the notion of inference as associative pattern completion based on schemata, with schemata being conceived in turn as patterns of concepts and their relationships. The possible role of conscious attention in inferential processes of this sort is also addressed.
We describe a comprehensive framework for text understanding, based on the representation of context. It is designed to serve as a representation of semantics for the full range of interpretive and inferential needs of general natural language processing. Its most distinctive feature is its uniform representation of the various simple and independent linguistic sources that play a role in determining meaning: lexical associations, syntactic restrictions, case-role expectations, and most importantly, contextual effects. Compositional syntactic structure from a shallow parsing is represented in a neural net-based associative memory, where it then interacts through a Bayesian network with semantic associations and the context or “gist” of the passage carried forward from preceding sentences. Experiments with more than 2000 sentences in different languages are included.