A cognitive architecture aimed at cumulative learning must provide the information and control structures that allow agents to learn incrementally and autonomously from their experience. This involves managing an agent's goals as well as continuously relating sensory information to them throughout its perception-cognition information processing stack. The more varied a learning agent's environment, the more general and flexible these mechanisms must be to handle a wider variety of relevant patterns, tasks, and goal structures. While many researchers agree that information at different levels of abstraction likely differs in its makeup, structure, and processing mechanisms, there is little agreement in the research community on the particulars of these differences. Dual-process architectures have been proposed as models of cognitive processing, with the two processes often held responsible for low- and high-level information, respectively. We posit that cognition is not binary in this way and that knowledge at any level of abstraction involves what we refer to as neurosymbolic information, meaning that data at both high and low levels must contain both symbolic and subsymbolic information. Further, we argue that the main factor differentiating the processing of high and low levels of data abstraction can be largely attributed to the nature of the attention mechanisms involved. We describe the key arguments behind this view and review relevant evidence from the literature.
The term embodiment identifies the theory that meaning and semantics cannot be captured by abstract, logical systems, but depend on an agent's experience derived from being situated in an environment. This theory has recently received a great deal of support in the cognitive science literature and is having significant impact in artificial intelligence. Memetics refers to the theory that knowledge and ideas can evolve more or less independently of their human-agent substrates. While humans provide the medium for this evolution, memetics holds that ideas can develop without human comprehension or deliberate interference. Both theories have profound implications for the study of language: its potential use by machines, its acquisition by children, and, of particular relevance to this special issue, its evolution. This article links the theory of memetics to the established literature on semantic space, then examines the extent to which memetic mechanisms might account for language independently of embodiment. It then seeks to explain the evolution of language through uniquely human cognitive capacities that facilitate memetic evolution.
The dual-code proposal of number representation put forward by Cohen Kadosh & Walsh (CK&W) accounts for only a fraction of the many modes of numerical abstraction. Contrary to their proposal, robust data from human infants and nonhuman animals indicate that abstract numerical representations are psychologically primitive. Additionally, much of the behavioral and neural data cited to support CK&W's proposal is, in fact, neutral on the issue of numerical abstraction.
Cohen Kadosh & Walsh (CK&W) present convincing evidence indicating the existence of notation-specific numerical representations in parietal cortex. We suggest that the same conclusions can be drawn for a particular type of numerical representation: the representation of time. Notation-dependent representations need not be limited to number but may also be extended to other magnitude-related contents processed in parietal cortex (Walsh 2003).