Can the output of human cognition be predicted from the assumption that it is an optimal response to the information-processing demands of the environment? A methodology called rational analysis is described for deriving predictions about cognitive phenomena using optimization assumptions. The predictions flow from the statistical structure of the environment and not the assumed structure of the mind. Bayesian inference is used, assuming that people start with a weak prior model of the world which they integrate with experience to develop stronger models of specific aspects of the world. Cognitive performance maximizes the difference between the expected gain and cost of mental effort. Memory performance can be predicted on the assumption that retrieval seeks a maximal trade-off between the probability of finding the relevant memories and the effort required to do so; in categorization performance there is a similar trade-off between accuracy in predicting object features and the cost of hypothesis formation; in causal inference the trade-off is between accuracy in predicting future events and the cost of hypothesis formation; and in problem solving it is between the probability of achieving goals and the cost of both external and mental problem-solving search. The implementation of these rational prescriptions in a neurally plausible architecture is also discussed.
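One way to render the retrieval trade-off as a worked formula, in the spirit of Anderson's rational analysis of memory (the symbols here are chosen for illustration, not quoted from the article):

```latex
% Memory structures are examined in order of decreasing need
% probability P_i. With G the gain from retrieving the target and C
% the cost of examining one more structure, search should continue
% while the expected payoff of the next examination is positive:
\text{continue while } P_i \, G > C, \qquad \text{stop when } P_i \, G \le C
% The probability of finding the relevant memory is thus traded
% against retrieval effort, as the abstract describes.
```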
Working memory limits are best defined in terms of the complexity of the relations that can be processed in parallel. Complexity is defined as the number of related dimensions or sources of variation. A unary relation has one argument and one source of variation; its argument can be instantiated in only one way at a time. A binary relation has two arguments, two sources of variation, and two instantiations, and so on. Dimensionality is related to the number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Studies of working memory limits suggest that there is a soft limit corresponding to the parallel processing of one quaternary relation. More complex concepts are processed by segmentation or conceptual chunking. In segmentation, tasks are broken into components that do not exceed processing capacity and can be processed serially. In conceptual chunking, representations are collapsed to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Neural net models of relational representations show that relations with more arguments have a higher computational cost that coincides with experimental findings on higher processing loads in humans. Relational complexity is related to processing load in reasoning and sentence comprehension and can distinguish between the capacities of higher species. The complexity of relations processed by children increases with age. Implications for neural net models and theories of cognition and cognitive development are discussed.
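The computational cost claim can be made concrete with a tensor-product scheme of the kind used in this line of work (e.g., Halford and colleagues' STAR model), in which an n-ary relation occupies a tensor of rank n + 1: one axis for the predicate and one per argument. The numpy sketch below is a minimal illustration under an assumed symbol-vector dimension of 8; it is not the authors' simulation code:

```python
import numpy as np

d = 8  # dimensionality of each symbol vector (an assumed, illustrative value)

def relation_tensor(*vectors):
    """Bind a predicate vector and its argument vectors into a single
    tensor product; an n-ary relation thus occupies a rank-(n+1) tensor."""
    t = vectors[0]
    for v in vectors[1:]:
        t = np.tensordot(t, v, axes=0)  # outer (tensor) product
    return t

pred, a, b, c, e = (np.random.randn(d) for _ in range(5))

for arity, args in [(1, (a,)), (2, (a, b)), (3, (a, b, c)), (4, (a, b, c, e))]:
    t = relation_tensor(pred, *args)
    # element count is d ** (arity + 1): the representational cost of a
    # relation grows exponentially with its number of arguments
    print(f"arity {arity}: rank {t.ndim}, elements {t.size}")
```

The exponential growth in elements (64, 512, 4096, 32768 for arities 1 through 4 at d = 8) is one way to see why a quaternary relation would sit near a capacity ceiling.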
The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of “computational tractability” is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified.
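The difference between the two theses is easiest to see on a worked example. Vertex Cover is the standard textbook illustration of fixed-parameter tractability (it is not drawn from the article itself): deciding whether a graph has a vertex cover of size k is NP-hard in general, yet solvable in O(2^k · |E|) time, exponential only in the parameter k. A minimal Python sketch of the classic bounded search tree:

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of
    size <= k. Bounded search tree: pick any uncovered edge (u, v);
    at least one endpoint must be in the cover, so branch on both.
    Time O(2^k * |E|): exponential only in the parameter k."""
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # edges remain, but the budget is spent
    u, v = edges[0]
    # branch 1: put u in the cover and drop the edges it covers
    # branch 2: put v in the cover and drop the edges it covers
    return (vertex_cover([e for e in edges if u not in e], k - 1) or
            vertex_cover([e for e in edges if v not in e], k - 1))

# A 5-cycle needs 3 vertices to cover all edges, so k = 2 fails
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover(cycle, 2), vertex_cover(cycle, 3))  # False True
```

Under the P-Cognition thesis an NP-hard function of this kind is ruled out as a computational-level theory; under the FPT-Cognition thesis it remains admissible whenever the relevant parameter can be assumed small.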
Experiments are presented showing that visual search for Mueller-Lyer (ML) stimuli is based on complete configurations, rather than component segments. Segments easily detected in isolation were difficult to detect when embedded in a configuration, indicating preemption by low-level groups. This preemption—which caused stimulus components to become inaccessible to rapid search—was an all-or-nothing effect, and so could serve as a powerful test of grouping. It is shown that these effects are unlikely to be due to blurring by simple spatial filters at early visual levels. It is proposed instead that they are due to more sophisticated processes that rapidly bind contour fragments into spatially-extended assemblies. These results support the view that rapid visual search cannot access the primitives formed at the earliest stages of visual processing; rather, it can access only higher-level, more ecologically-relevant structures.
In computational complexity theory, decision problems are divided into complexity classes based on the amount of computational resources it takes for algorithms to solve them. In theoretical computer science, it is commonly accepted that only functions for solving problems in the complexity class P, solvable by a deterministic Turing machine in polynomial time, are considered to be tractable. In cognitive science and philosophy, this tractability result has been used to argue that only functions in P can feasibly work as computational models of human cognitive capacities. One interesting area of computational complexity theory is descriptive complexity, which connects the expressive strength of systems of logic with the computational complexity classes. In descriptive complexity theory, it is established that only first-order (classical) systems are connected to P, or one of its subclasses. Consequently, second-order systems of logic are considered to be computationally intractable, and may therefore seem to be unfit to model human cognitive capacities. This would be problematic when we think of the role of logic as the foundations of mathematics. In order to express many important mathematical concepts and systematically prove theorems involving them, we need to have a system of logic stronger than classical first-order logic. But if such a system is considered to be intractable, it means that the logical foundation of mathematics can be prohibitively complex for human cognition. In this paper I will argue, however, that this problem is the result of an unjustified direct use of computational complexity classes in cognitive modelling. Placing my account in the recent literature on the topic, I argue that the problem can be solved by considering computational complexity for humanly relevant problem solving algorithms and input sizes.
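For reference, the correspondences that descriptive complexity establishes are usually stated as follows (standard results, included here for orientation; note that the match with P uses first-order logic extended with a least fixed point operator, and holds on ordered structures):

```latex
\exists\mathrm{SO} = \mathrm{NP} \quad \text{(Fagin's theorem)}
\qquad
\mathrm{FO(LFP)} = \mathrm{P} \quad \text{(Immerman--Vardi, on ordered finite structures)}
```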
The Emperor's New Mind (hereafter Emperor) is an attempt to put forward a scientific alternative to the viewpoint of strong AI, according to which mental activity is merely the acting out of some algorithmic procedure. John Searle and other thinkers have likewise argued that mere calculation does not, of itself, evoke conscious mental attributes, such as understanding or intentionality, but they are still prepared to accept that the action of the brain, like that of any other physical object, could in principle be simulated by a computer. In Emperor I go further than this and suggest that the outward manifestations of conscious mental activity cannot even be properly simulated by calculation. To support this view, I use various arguments to show that the results of mathematical insight, in particular, do not seem to be obtained algorithmically. The main thrust of this work, however, is to present an overview of the present state of physical understanding and to show that an important gap exists at the point where quantum and classical physics meet, as well as to speculate on how the conscious brain might be taking advantage of whatever new physics is needed to fill this gap to achieve its nonalgorithmic effects.
We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed to understand different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing further evidence.
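The automata-theoretic distinction at issue can be sketched in a few lines of Python (a toy illustration, not the authors' experimental procedure): counting quantifiers such as 'at least three' need only a fixed, finite number of states, whereas proportional quantifiers such as 'most' require comparing two unbounded counts, which is what places them beyond finite automata:

```python
def at_least_three(seq):
    """'At least three As are B': a finite automaton with four states
    (0, 1, 2, or >= 3 matches seen so far) suffices."""
    state = 0
    for is_b in seq:        # seq: booleans, one per A ("is this A a B?")
        if is_b and state < 3:
            state += 1
    return state == 3

def most(seq):
    """'Most As are B': needs an unbounded counter (standing in here
    for a push-down automaton's stack), since the number of states a
    finite automaton would need grows with the input size."""
    counter = 0
    for is_b in seq:
        counter += 1 if is_b else -1
    return counter > 0

sample = [True, False, True, True, False]
print(at_least_three(sample), most(sample))  # True True
```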
Despite their success in describing and predicting cognitive behavior, the plausibility of so-called ‘rational explanations’ is often contested on the grounds of computational intractability. Several cognitive scientists have argued that such intractability is an orthogonal pseudoproblem, however, since rational explanations account for the ‘why’ of cognition but are agnostic about the ‘how’. Their central premise is that humans do not actually perform the rational calculations posited by their models, but only act as if they do. Whether or not the problem of intractability is solved by recourse to ‘as if’ explanations critically depends, inter alia, on the semantics of the ‘as if’ connective. We examine the five most sensible explications in the literature, and conclude that none of them circumvents the problem. As a result, rational ‘as if’ explanations must obey the minimal computational constraint of tractability.
Following Marr’s famous three-level distinction between explanations in cognitive science, it is often accepted that focus on modeling cognitive tasks should be on the computational level rather than the algorithmic level. When it comes to mathematical problem solving, this approach suggests that the complexity of the task of solving a problem can be characterized by the computational complexity of that problem. In this paper, I argue that human cognizers use heuristic and didactic tools and thus engage in cognitive processes that make their problem solving algorithms computationally suboptimal, in contrast with the optimal algorithms studied in the computational approach. Therefore, in order to accurately model the human cognitive tasks involved in mathematical problem solving, we need to expand our methodology to also include aspects relevant to the algorithmic level. This allows us to study algorithms that are cognitively optimal for human problem solvers. Since problem solving methods are not universal, I propose that they should be studied in the framework of enculturation, which can explain the expected cultural variance in the humanly optimal algorithms. While mathematical problem solving is used as the case study, the considerations in this paper concern modeling of cognitive tasks in general.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
In the study of cognitive processes, limitations on computational resources (computing time and memory space) are usually considered to be beyond the scope of a theory of competence, and to be exclusively relevant to the study of performance. Starting from considerations derived from the theory of computational complexity, in this paper I argue that there are good reasons for claiming that some aspects of resource limitations pertain to the domain of a theory of competence.
On the basis of neuroimaging studies, Posner & Raichle summarily report that the prefrontal cortex is involved in executive functioning and attention. In contrast to that superficial view, we briefly describe a testable model of the kinds of representations that are stored in prefrontal cortex, which, when activated, are expressed via plans, actions, thematic knowledge, and schemas.
Following Marr (1982), any computational account of cognition must satisfy constraints at three explanatory levels: computational, algorithmic, and implementational. This paper focuses on the first two levels and argues that current theories of reasoning cannot provide explanations of everyday defeasible reasoning, at either level. At the algorithmic level, current theories are not computationally tractable: they do not “scale-up” to everyday defeasible inference. In addition, at the computational level, they cannot specify why people behave as they do both on laboratory reasoning tasks and in everyday life (Anderson, 1990). In current theories, logic provides the computational-level theory, where such a theory is evident at all. But logic is not a descriptively adequate computational-level theory for many reasoning tasks. It is argued that better computational-level theories can be developed using a probabilistic framework. This approach is illustrated using Oaksford and Chater's (1994) probabilistic account of Wason's selection task.
Single cell recordings in monkeys provide strong evidence for an important role of the motor system in action understanding. This evidence is backed up by data from studies of the (human) mirror neuron system using neuroimaging or TMS techniques, and behavioral experiments. Although the data acquired from single cell recordings are generally considered to be robust, several debates have shown that the interpretation of these data is far from straightforward. We will show that research based on single-cell recordings allows for unlimited content attribution to mirror neurons. We will argue that a theoretical analysis of the mirroring process, combined with behavioral and brain studies, can provide the necessary limitations. A complexity analysis of the type of processing attributed to the mirror neuron system can help formulate restrictions on what mirroring is and what cognitive functions could, in principle, be explained by a mirror mechanism. We argue that processing at higher levels of abstraction needs the assistance of non-mirroring processes to such an extent that subsuming the processes needed to infer goals from actions under the label ‘mirroring’ is not warranted.
Theory of mind refers to the human capacity for reasoning about others’ mental states based on observations of their actions and unfolding events. This type of reasoning is notorious in the cognitive science literature for its presumed computational intractability. A possible reason could be that it may involve higher-order thinking. To investigate this we formalize theory of mind reasoning as updating of beliefs about beliefs using dynamic epistemic logic, as this formalism allows one to parameterize the ‘order of thinking.’ We prove that theory of mind reasoning, so formalized, is indeed intractable. Using parameterized complexity we prove, however, that the ‘order parameter’ is not a source of intractability. We furthermore consider a set of alternative parameters and investigate which of them are sources of intractability. We discuss the implications of these results for the understanding of theory of mind.
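A minimal sketch of the kind of update being formalized (the two-world model and all names are illustrative assumptions; the article's dynamic epistemic logic machinery is considerably richer): a Kripke-style model in which a public announcement deletes the worlds where the announced fact fails, thereby changing what an agent believes:

```python
# Worlds are dicts of atomic facts; each agent's accessibility relation
# is a set of (world, world) index pairs.
worlds = [{"p": True}, {"p": False}]              # agent "a" cannot tell them apart
access = {"a": {(0, 0), (0, 1), (1, 0), (1, 1)}}

def believes(agent, prop, w, worlds, access):
    """Agent believes prop at world w iff prop holds at every world the
    agent considers possible from w."""
    return all(prop(worlds[v]) for (u, v) in access[agent] if u == w)

def announce(prop, worlds, access):
    """Public announcement of prop: keep only the worlds where prop holds
    and restrict every accessibility relation to the survivors."""
    keep = [i for i, w in enumerate(worlds) if prop(w)]
    remap = {old: new for new, old in enumerate(keep)}
    new_worlds = [worlds[i] for i in keep]
    new_access = {ag: {(remap[u], remap[v]) for (u, v) in rel
                       if u in remap and v in remap}
                  for ag, rel in access.items()}
    return new_worlds, new_access

p = lambda w: w["p"]
print(believes("a", p, 0, worlds, access))   # False: "a" is uncertain about p
worlds, access = announce(p, worlds, access)
print(believes("a", p, 0, worlds, access))   # True: the announcement settled it
```

Higher-order reasoning (Ann believes that Bob believes that p) iterates believes over nested accessibility relations; the article's result is that this nesting depth, perhaps surprisingly, is not what makes the problem intractable.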
We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too much computational resource, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability. We survey how complexity can be used to study computational plausibility of cognitive theories. We especially emphasize methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to the examples of applying logical and computational complexity toolbox in different domains of cognitive science. We focus mostly on theoretical and experimental research in psycholinguistics and social cognition.
Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms.
This chapter presents a new semantics for inductive empirical knowledge. The epistemic agent is represented concretely as a learner who processes new inputs through time and who forms new beliefs from those inputs by means of a concrete, computable learning program. The agent’s belief state is represented hyper-intensionally as a set of time-indexed sentences. Knowledge is interpreted as avoidance of error in the limit and as having converged to true belief from the present time onward. Familiar topics are re-examined within the semantics, such as inductive skepticism, the logic of discovery, Duhem’s problem, the articulation of theories by auxiliary hypotheses, the role of serendipity in scientific knowledge, Fitch’s paradox, deductive closure of knowability, whether one can know inductively that one knows inductively, whether one can know inductively that one does not know inductively, and whether expert instruction can spread common inductive knowledge—as opposed to mere, true belief—through a community of gullible pupils.
The purpose of this paper is to present two kinds of analogical representational change, both occurring early in the analogy-making process, and then, using these two kinds of change, to present a model unifying one sort of analogy-making and categorization. The proposed unification rests on three key claims: (1) a certain type of rapid representational abstraction is crucial to making the relevant analogies (this is the first kind of representational change; a computer model is presented that demonstrates this kind of abstraction), (2) rapid abstractions are induced by retrieval across large psychological distances, and (3) both categorizations and analogies supply understandings of perceptual input via construing, which is a proposed type of categorization (this is the second kind of representational change). It is construing that finalizes the unification.
In attempts to formulate a computational understanding of brain function, one of the fundamental concerns is the data structure by which the brain represents information. For many decades, a conceptual framework has dominated the thinking of both brain modelers and neurobiologists. That framework is referred to here as "classical neural networks." It is well supported by experimental data, although it may be incomplete. A characterization of this framework will be offered in the next section. Difficulties in modeling important functional aspects of the brain on the basis of classical neural networks alone have led to the recognition that another, general mechanism must be invoked to explain brain function. That mechanism I call "binding." Binding by neural signal synchrony had been mentioned several times in the literature before it was fully formulated as a general phenomenon. Although experimental evidence for neural synchrony was soon found, the idea was largely ignored for many years. Only recently has it become a topic of animated discussion. In what follows, I will summarize the nature and the roots of the idea of binding, especially of temporal binding, and will discuss some of the objections raised against it.
By reviewing most of the neurobiology of consciousness, this article highlights some major reasons why a successful emulation of the dynamics of human consciousness by artificial intelligence is unlikely. The analysis provided leads to the conclusion that human consciousness is epigenetically determined and experience and context-dependent at the individual level. It is subject to changes in time that are essentially unpredictable. If cracking the code to human consciousness were possible, the result would most likely have to consist of a temporal pattern code simulating long-distance signal reverberation and de-correlation of all spatial signal contents from temporal signals. In the light of the massive evidence for complex interactions between implicit (non-conscious) and explicit (conscious) contents of representation, the code would have to be capable of making implicit (non-conscious) processes explicit. It would have to be capable of a progressively less and less arbitrary selection of temporal activity patterns in a continuously developing neural network structure identical to that of the human brain, from the synaptic level to that of higher cognitive functions. The code’s activation thresholds would depend on specific temporal signal coincidence probabilities, vary considerably with time and across individual experience data, and would therefore require dynamically adaptive computations capable of emulating the properties of individual human experience. No known machine or neural network learning approach has such potential.
It is argued that current neuroimaging studies can provide useful constraints for the construction of models of cognition, and that these studies should be guided by cognitive models. A number of challenges for a successful cross-fertilization between “mind mappers” and cognitive modelers are discussed in the light of current research on word recognition.
This volume explores how functional brain imaging techniques like positron emission tomography have influenced cognitive studies. The first chapter outlines efforts to relate human thought and cognition in terms of great books from the late 1800s through the present. Chapter 2 describes mental operations as they are measured in cognitive science studies. It develops a framework for relating mental operations to activity in nerve cells. In Chapter 3, the PET method is reviewed and studies are presented that use PET to map the striate cortex and to activate extrastriate motion, color, and form areas. Chapter 4 shows how top down processes involving attention can lead to activation of these same areas in the detection of targets, visual search, and visual imagery. This chapter reveals complex networks of activations. Chapters 5 and 6 deal with the presentation of words. Chapter 5 illustrates PET studies of the anatomy of visual word processing and shows how the circuitry used for generating novel uses of words changes as the task becomes automated. Chapter 6 applies high density electrical recording to explore these activations in real time and to show how a constant anatomy can be reprogrammed by task instructions to produce and perform different cognitive tasks. Chapter 7 shows how studies of brain lesions and PET converge on common networks underlying attentional functions such as visual orienting, target detection, and maintenance of the alert state. Chapters 8 and 9 apply the network approach to examine normal development of attention in infants and pathological conditions resulting from brain damage, and psychiatric pathologies of depression, schizophrenia, and attention deficit disorder. In Chapter 10, new developments such as functional MRI are discussed in terms of future developments and integration of cognitive neuroscience.
Recent developments in vision science have resulted in several major changes in our understanding of human visual perception. For example, attention no longer appears necessary for "visual intelligence"--a large amount of sophisticated processing can be done without it. Scene perception no longer appears to involve static, general-purpose descriptions, but instead may involve dynamic representations whose content depends on the individual and the task. And vision itself no longer appears to be limited to the production of a conscious "picture"--it may also guide processes outside the conscious awareness of the observer.
According to the target article authors, initial experience with a circumstance primes a relation that can subsequently be applied to a different circumstance to draw an analogy. While I broadly agree with their claim about the role of relational priming in early analogical reasoning, I put forward a few concerns that may be worthy of further reflection.