Disambiguations: Aaron Sloman [126], A. Sloman [13]
  1.
    The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. Aaron Sloman - 1978 - Hassocks UK: Harvester Press.
    Extract from Hofstadter's review in the Bulletin of the American Mathematical Society: http://www.ams.org/journals/bull/1980-02-02/S0273-0979-1980-14752-7/S0273-0979-1980-14752-7.pdf
    "Aaron Sloman is a man who is convinced that most philosophers and many other students of mind are in dire need of being convinced that there has been a revolution in that field happening right under their noses, and that they had better quickly inform themselves. The revolution is called "Artificial Intelligence" (AI), and Sloman attempts to impart to others the "enlightenment" which he clearly regrets not having experienced earlier himself. Being somewhat of a convert, Sloman is a zealous campaigner for his point of view. Now a Reader in Cognitive Science at Sussex, he began his academic career in more orthodox philosophy and, by exposure to linguistics and AI, came to feel that all approaches to mind which ignore AI are missing the boat. I agree with him, and I am glad that he has written this provocative book. The tone of Sloman's book can be gotten across by this quotation (p. 5): "I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.""
    (The author now regrets the extreme polemical tone of the book.)
    143 citations
  2.
    Motives, mechanisms, and emotions. Aaron Sloman - 1987 - Cognition and Emotion 1 (3):217-233.
  3.
    The Computer Revolution in Philosophy. Martin Atkinson & Aaron Sloman - 1980 - Philosophical Quarterly 30 (119):178.
  4.
    Virtual machines and consciousness. Aaron Sloman & Ronald L. Chrisley - 2003 - Journal of Consciousness Studies 10 (4-5):133-172.
    Replication or even modelling of consciousness in machines requires some clarifications and refinements of our concept of consciousness. Design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word “consciousness” has no well-defined meaning, it is used to refer to aspects of human and animal information-processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly “architecture-based” concepts. Understanding how humanlike robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of “qualia” turns out to be an “architecture-based” concept, while individual qualia concepts are “architecture-driven”.
    38 citations
  5.
    Why robots will have emotions. Aaron Sloman & Monica Croucher - 1981
    Emotions involve complex processes produced by interactions between motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or violation of a motive, or triggering of a 'motive-generator', can disturb processes produced by other motives. To understand emotions, therefore, we need to understand motives and the types of processes they can produce. This leads to a study of the global architecture of a mind. Some constraints on the evolution of minds are discussed. Types of motives and the processes they generate are sketched.
    45 citations
  6.
    `Ought' and `better'. Aaron Sloman - 1970 - Mind 79 (315):385-394.
  7.
    What Sorts of Machines Can Understand the Symbols They Use? Aaron Sloman & L. Jonathan Cohen - 1986 - Aristotelian Society Supplementary Volume 60 (1):61-96.
  8.
    The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind. Aaron Sloman - 1978 - British Journal for the Philosophy of Science 30 (3):302-304.
  9.
    Interactions between philosophy and artificial intelligence: The role of intuition and non-logical reasoning in intelligence. Aaron Sloman - 1971 - Artificial Intelligence 2 (3-4):209-225.
  10.
    (1 other version) The Mind as a Control System. Aaron Sloman - 1993 - Royal Institute of Philosophy Supplement 34:69-110.
    This is not a scholarly research paper, but a ‘position paper’ outlining an approach to the study of mind which has been gradually evolving since about 1969, when I first became acquainted with work in Artificial Intelligence through Max Clowes. I shall try to show why it is more fruitful to construe the mind as a control system than as a computational system.
    22 citations
  11.
    Towards a design-based analysis of emotional episodes. Ian Wright, Aaron Sloman & Luc P. Beaudoin - 1996 - Philosophy, Psychiatry, and Psychology 3 (2):101-126.
    The design-based approach is a methodology for investigating mechanisms capable of generating mental phenomena, whether introspectively or externally observed, and whether they occur in humans, other animals or robots. The study of designs satisfying requirements for autonomous agency can provide new deep theoretical insights at the information processing level of description of mental mechanisms. Designs for working systems (whether on paper or implemented on computers) can systematically explicate old explanatory concepts and generate new concepts that allow new and richer interpretations of human phenomena. To illustrate this, some aspects of human grief are analysed in terms of a particular information processing architecture being explored in our research group. We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. However, even the current early design provides an interpretative ground for some familiar phenomena, including characteristic features of certain emotional episodes, particularly the phenomenon of perturbance (a partial or total loss of control of attention). The paper attempts to expound and illustrate the design-based approach to cognitive science and philosophy, to demonstrate the potential effectiveness of the approach in generating interpretative possibilities, and to provide first steps towards an information processing account of `perturbant' emotional episodes.
    17 citations
  12.
    An alternative to working on machine consciousness. Aaron Sloman - 2010 - International Journal of Machine Consciousness 2 (1):1-18.
    This paper extends three decades of work arguing that researchers who discuss consciousness should not restrict themselves only to (adult) human minds, but should study (and attempt to model) many kinds of minds, natural and artificial, thereby contributing to our understanding of the space containing all of them. We need to study what they do or can do, how they can do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness lead to muddle and confusion unless they are placed in that broader context. A methodology for making progress is summarised and a novel requirement proposed for a theory of how human minds work: the theory should support a single generic design for a learning, developing system that, in addition to meeting familiar requirements, should be capable of developing different and opposed philosophical viewpoints about consciousness, and the so-called hard problem. In other words, we need a common explanation for the mental machinations of mysterians, materialists, functionalists, identity theorists, and those who regard all such theories as attempting to answer incoherent questions. No designs proposed so far come close.
    12 citations
  13.
    Phenomenal and access consciousness and the "hard" problem: A view from the designer stance. Aaron Sloman - 2010 - International Journal of Machine Consciousness 2 (1):117-169.
    This paper is an attempt to summarise and justify critical comments I have been making over several decades about research on consciousness by philosophers, scientists and engineers. This includes (a) explaining why the concept of "phenomenal consciousness" (P-C), in the sense defined by Ned Block, is semantically flawed and unsuitable as a target for scientific research or machine modelling, whereas something like the concept of "access consciousness" (A-C) with which it is often contrasted refers to phenomena that can be described and explained within a future scientific theory, and (b) explaining why the "hard problem" is a bogus problem, because of its dependence on the P-C concept. It is compared with another bogus problem, "the 'hard' problem of spatial identity", introduced as part of a tutorial on semantically flawed concepts. Different types of semantic flaw and conceptual confusion not normally studied outside analytical philosophy are distinguished. The semantic flaws of the "zombie" argument, closely allied with the P-C concept, are also explained. These topics are related both to the evolution of human and animal minds and brains and to requirements for human-like robots. The diversity of the phenomena related to the concept "consciousness" as ordinarily used makes it a polymorphic concept, partly analogous to concepts like "efficient", "sensitive", and "impediment", all of which need extra information to be provided before they can be applied to anything, and then the criteria of applicability differ. As a result there cannot be one explanation of consciousness, one set of neural associates of consciousness, one explanation for the evolution of consciousness, nor one machine model of consciousness. We need many of each. I present a way of making progress based on what McCarthy called "the designer stance", using facts about running virtual machines, without which current computers obviously could not work. I suggest the same is true of biological minds, because biological evolution long ago "discovered" a need for something like virtual machinery for self-monitoring and self-extending information-processing systems, and produced far more sophisticated versions than human engineers have so far achieved.
    12 citations
  14.
    Developing concepts of consciousness. Aaron Sloman - 1991 - Behavioral and Brain Sciences 14 (4):694-695.
  15.
    (1 other version) What sort of architecture is required for a human-like agent? Aaron Sloman - 1996 - In Ramakrishna K. Rao, Foundations of Rational Agency. Kluwer Academic Publishers.
    This paper is about how to give human-like powers to complete agents. For this the most important design choice concerns the overall architecture. Questions regarding detailed mechanisms, forms of representations, inference capabilities, knowledge etc. are best addressed in the context of a global architecture in which different design decisions need to be linked. Such a design would assemble various kinds of functionality into a complete coherent working system, in which there are many concurrent, partly independent, partly mutually supportive, partly potentially incompatible processes, addressing a multitude of issues on different time scales, including asynchronous, concurrent, motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on earth shows.
    14 citations
  16.
    Actual possibilities. A. Sloman - unknown
    This is a philosophical `position paper' (html and pdf versions), starting from the observation that we have an intuitive grasp of a family of related concepts of ``possibility'', ``causation'' and ``constraint'' which we often use in thinking about complex mechanisms, and perhaps also in perceptual processes, which according to Gibson are primarily concerned with detecting positive and negative affordances, such as support, obstruction, graspability, etc. We are able to talk about, think about, and perceive possibilities, such as possible shapes, possible pressures, possible motions, and also risks, opportunities and dangers. We can also think about constraints linking such possibilities. If such abilities are useful to us (and perhaps other animals) they may be equally useful to intelligent artefacts. All this bears on a collection of different more technical topics, including modal logic, constraint analysis, qualitative reasoning, naive physics, the analysis of functionality, and the modelling of design processes. The paper suggests that our ability to use knowledge about ``de-re'' modality is more primitive than the ability to use ``de-dicto'' modalities, in which modal operators are applied to sentences. The paper explores these ideas, links them to notions of ``causation'' and ``machine'', and suggests that they are applicable to virtual or abstract machines as well as physical machines. Some conclusions are drawn regarding the nature of mind and consciousness.
    11 citations
  17.
    More things than are dreamt of in your biology: Information-processing in biologically inspired robots. A. Sloman & R. L. Chrisley - unknown
    Animals and robots perceiving and acting in a world require an ontology that accommodates entities, processes, states of affairs, etc., in their environment. If the perceived environment includes information-processing systems, the ontology should reflect that. Scientists studying such systems need an ontology that includes the first-order ontology characterising physical phenomena, the second-order ontology characterising perceivers of physical phenomena, and a third-order ontology characterising perceivers of perceivers, including introspectors. We argue that second- and third-order ontologies refer to contents of virtual machines and examine requirements for scientific investigation of combined virtual and physical machines, such as animals and robots. We show how the CogAff architecture schema, combining reactive, deliberative, and meta-management categories, provides a first draft schematic third-order ontology for describing a wide range of natural and artificial agents. Many previously proposed architectures use only a subset of CogAff, including subsumption architectures, contention-scheduling systems, architectures with 'executive functions' and a variety of types of 'Omega' architectures. Adding a multiply-connected, fast-acting 'alarm' mechanism within the CogAff framework accounts for several varieties of emotions. H-CogAff, a special case of CogAff, is postulated as a minimal architecture specification for a human-like system. We illustrate use of the CogAff schema in comparing H-CogAff with Clarion, a well-known architecture. One implication is that reliance on concepts tied to observation and experiment can harmfully restrict explanatory theorising, since what an information processor is doing cannot, in general, be determined by using the standard observational techniques of the physical sciences or laboratory experiments. Like theoretical physics, cognitive science needs to be highly speculative to make progress.
    © 2004 Published by Elsevier B.V.
    12 citations
  18.
    Beyond Turing equivalence. Aaron Sloman - 1996 - In Peter Millican & Andy Clark, Machines and Thought: The Legacy of Alan Turing. Oxford, England: Oxford University Press. pp. 1--179.
    What is the relation between intelligence and computation? Although the difficulty of defining `intelligence' is widely recognized, many are unaware that it is hard to give a satisfactory definition of `computational' if computation is supposed to provide a non-circular explanation for intelligent abilities. The only well-defined notion of `computation' is what can be generated by a Turing machine or a formally equivalent mechanism. This is not adequate for the key role in explaining the nature of mental processes, because it is too general, as many computations involve nothing mental, nor even processes: they are simply abstract structures. We need to combine the notion of `computation' with that of `machine'. This may still be too restrictive, if some non-computational mechanisms prove to be useful for intelligence. We need a theory-based taxonomy of architectures and mechanisms and corresponding process types. Computational machines may turn out to be a sub-class of the machines available for implementing intelligent agents. The more general analysis starts with the notion of a system with independently variable, causally interacting sub-states that have different causal roles, including both `belief-like' and `desire-like' sub-states, and many others. There are many significantly different such architectures. For certain architectures (including simple computers), some sub-states have a semantic interpretation for the system. The relevant concept of semantics is defined partly in terms of a kind of Tarski-like structural correspondence (not to be confused with isomorphism). This always leaves some semantic indeterminacy, which can be reduced by causal loops involving the environment. But the causal links are complex, can share causal pathways, and always leave mental states to some extent semantically indeterminate.
    12 citations
  19.
    How to turn an information processor into an understander. Aaron Sloman & Monica Croucher - 1980 - Behavioral and Brain Sciences 3 (3):447-448.
  20.
    The irrelevance of Turing machines to AI. Aaron Sloman - 2002 - In Matthias Scheutz, Computationalism: New Directions. MIT Press.
  21.
    The primacy of non-communicative language. Aaron Sloman - 1979 - In M. MacCafferty & Kurt Gray, The Analysis of Meaning: Informatics 5, Proceedings ASLIB/BCS Conference. Aslib.
  22.
    How many separately evolved emotional beasties live within us? Aaron Sloman - 2001 - In Robert Trappl, Emotions in Humans and Artifacts. Bradford Book/MIT Press. pp. 35-114.
    9 citations
  23.
    Toward a general theory of representations. Aaron Sloman - 1994 - In Donald Peterson, Forms of representation: an interdisciplinary theme for Cognitive Science. Intellect Books. pp. 118-140.
    This position paper presents the beginnings of a general theory of representations, starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of types of variation the relevant mechanisms can cope with. Similarly, concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent).
    9 citations
  24.
    The emperor's real mind -- Review of Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Aaron Sloman - 1992 - Artificial Intelligence 56 (2-3):355-396.
    "The Emperor's New Mind" by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the "strong" AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards the notion of an algorithm as central to AI, whereas it is argued here that for the purpose of explaining mental capabilities the architecture of an intelligent system is more important than the concept of an algorithm, using the premise that what makes something intelligent is not what it does but how it does it. What needs to be explained is also unclear: Penrose thinks we all know what consciousness is and claims that the ability to judge Gödel's formula to be true depends on it. He also suggests that quantum phenomena underlie consciousness. This is rebutted by arguing that our existing concept of "consciousness" is too vague and muddled to be of use in science. This and related concepts will gradually be replaced by a more powerful theory-based taxonomy of types of mental states and processes. The central argument offered by Penrose against the strong AI thesis depends on a tempting but unjustified interpretation of Gödel's incompleteness theorem. Some critics are shown to have missed the point of his argument. A stronger criticism is mounted, and the relevance of mathematical Platonism analysed. Architectural requirements for intelligence are discussed and differences between serial and parallel implementations analysed.
    8 citations
  25.
    Architecture-based conceptions of mind. Aaron Sloman - 2002 - In Peter Gardenfors, Katarzyna Kijania-Placek & Jan Wolenski, In the Scope of Logic, Methodology, and Philosophy of Science (Vol II). Kluwer Academic Publishers.
  26. [Book Chapter]. Aaron Sloman - 1995
    5 citations
  27.
    Physicalism and the Bogey of Determinism. Aaron Sloman - unknown
    This paper rehearses some relatively old arguments about how any coherent notion of free will is not only compatible with but depends on determinism. However, the mind-brain identity theory is attacked on the grounds that what makes a physical event an intended action A is that the agent interprets the physical phenomena as doing A. The paper should have referred to the monograph Intention by Elizabeth Anscombe, which discusses in detail the fact that the same physical event can have multiple descriptions, using different ontologies.
    3 citations
  28.
    Semantics in an intelligent control system. A. Sloman - 1994 - Philosophical Transactions of the Royal Society: Physical Sciences and Engineering 349:43-58.
    Much research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. We need to understand how to put all the pieces together in an *architecture* for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level ``virtual machine'' architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of ``semantic information''.
    6 citations
  29.
    How to dispose of the free will issue. Aaron Sloman - 1993 - AISB Quarterly 82:31-2.
  30. What cognitive scientists need to know about virtual machines. Aaron Sloman - 2009 - In N. A. Taatgen & H. van Rijn, Proceedings of the 31st Annual Conference of the Cognitive Science Society. pp. 1210-1215.
  31.
    A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy. Aaron Sloman - unknown
    This paper, along with the following paper by John McCarthy, introduces some of the topics to be discussed at the IJCAI95 event `A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy'. Philosophy needs AI in order to make progress with many difficult questions about the nature of mind, and AI needs philosophy in order to help clarify goals, methods, and concepts and to help with several specific technical problems. Whilst philosophical attacks on AI continue to be welcomed by a significant subset of the general public, AI defenders need to learn how to avoid philosophically naive rebuttals.
    5 citations
  32.
    What sort of control system is able to have a personality? Aaron Sloman - 1995 - In [Book Chapter].
    This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework some architectural requirements for human-like minds are discussed, and some preliminary suggestions made regarding mechanisms underlying motivation, emotions, and personality. A brief description is given of the `Nursemaid' or `Minder' scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments.
    5 citations
  33.
    Damasio, Descartes, alarms and meta-management. A. Sloman - unknown
    This paper discusses some of the requirements for the control architecture of an intelligent human-like agent with multiple independent dynamically changing motives in a dynamically changing, only partly predictable world. The architecture proposed includes a combination of reactive, deliberative and meta-management mechanisms along with one or more global ``alarm'' systems. The engineering design requirements are discussed in relation to our evolutionary history, evidence of brain function and recent theories of Damasio and others about the relationships between intelligence and emotions. (The paper was completed in haste for a deadline and I forgot to explain why Descartes was in the title. See Damasio 1994.)
    5 citations
  34.
    Towards a grammar of emotions. Aaron Sloman - 1982 - New Universities Quarterly 36 (3):230-238.
    My favourite leading question when teaching Philosophy of Mind is ‘Could a goldfish long for its mother?’ This introduces the philosophical technique of ‘conceptual analysis’, essential for the study of mind (Sloman 1978, ch. 4). By analysing what we mean by ‘A longs for B’, and similar descriptions of emotional states, we see that they involve rich cognitive structures and processes, i.e. computations. Anything which could long for its mother would have to have some sort of representation of its mother, would have to believe that she is not in the vicinity, would have to be able to represent the _possibility_ of being close to her, would have to desire that possibility, and would have to be to some extent pre-occupied or obsessed with that desire. That is, it should intrude into and interfere with other activities, like admiring the scenery, catching smaller fish, etc. If the desire were there, but could be calmly put aside whilst other interests were pursued, then it would not be truly a state of longing. It might be a state of preferring. Thus longing involves computational interrupts. The same seems to be true of all emotions.
    5 citations
  35.  457
    Virtual Machine Functionalism: The only form of functionalism worth taking seriously in Philosophy of Mind.Aaron Sloman -
    Most philosophers appear to have ignored the distinction between the broad concept of Virtual Machine Functionalism (VMF) described in Sloman & Chrisley (2003) and the better known version of functionalism referred to there as Atomic State Functionalism (ASF), which is often given as an explanation of what Functionalism is, e.g. in Block (1995). -/- One of the main differences is that ASF encourages talk of supervenience of states and properties, whereas VMF requires supervenience of machines that are arbitrarily complex networks of causally interacting (virtual, but real) processes, possibly operating on different time-scales. Examples include the many different processes usually running concurrently on a modern computer, performing various tasks concerned with handling interfaces to physical devices, managing the file system, dealing with security, providing tools, entertainments, and games, and possibly processing research data. Another example of VMF would be the kind of functionalism involved in a large collection of possibly changing socio-economic structures and processes interacting in a complex community, and yet another is illustrated by the kind of virtual machinery involved in the many levels of visual processing of information about spatial structures, processes, and relationships (including percepts of moving shadows, reflections, highlights, optical-flow patterns and changing affordances) as you walk through a crowded car-park on a sunny day: generating a whole zoo of interacting qualia. (Forget solitary red patches, or experiences thereof.) -/- Perhaps VMF should be re-labelled "Virtual MachinERY Functionalism" because the word 'machinery' more readily suggests something complex with interacting parts.
VMF is concerned with virtual machines that are made up of interacting, concurrently active (but not necessarily synchronised) chunks of virtual machinery which not only interact with one another and with their physical substrates (which may be partly shared, and also frequently modified by garbage collection, metabolism, or whatever) but can also concurrently interact with and refer to various things in the immediate and remote environment (via sensory/motor channels, and possible future technologies also). I.e. virtual machinery can include mechanisms that create and manipulate semantic content, not only syntactic structures or bit patterns as digital virtual machines do. -/- Please note: Click on the title above or the link below to read the paper. I prefer to keep all my papers freely accessible on my web site so that I can correct mistakes and add improvements. -/- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html -/- This is now part of the Meta-Morphogenesis project: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html.
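The concurrently active, causally interacting chunks of virtual machinery described here can be faintly illustrated with ordinary threads over shared process memory. This is a deliberately tiny sketch under my own assumptions (the names and the request/reply scenario are mine); real VMF involves far richer causal webs than any single producer/consumer pair.

```python
import queue
import threading

# Two concurrently active chunks of "virtual machinery" interacting via a
# shared substrate (thread-safe queues over shared process memory).
# A virtual-level event ("a request arrived") causes a virtual-level
# response, although both are implemented in, and not identical with,
# bit patterns in RAM.
requests = queue.Queue()
replies = queue.Queue()

def service():
    """One virtual process: waits for requests and produces replies."""
    while True:
        item = requests.get()   # blocks until the other process acts
        if item == "STOP":
            break
        replies.put(f"handled {item}")

worker = threading.Thread(target=service)
worker.start()

# The other virtual process: issues a request, then shuts the service down.
requests.put("file lookup")
requests.put("STOP")
worker.join()

result = replies.get()
print(result)  # handled file lookup
```

The point of the sketch is only that the causal story at the virtual level ("the request caused the reply") is real and explanatory even though every event is implemented in physical hardware shared by both processes.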
    2 citations
  36.  46
    Musings on the roles of logical and non-logical representations in intelligence.Aaron Sloman -1995 - In [Book Chapter].
    This paper offers a short and biased overview of the history of discussion and controversy about the role of different forms of representation in intelligent agents. It repeats and extends some of the criticisms of the `logicist' approach to AI that I first made in 1971, while also defending logic for its power and generality. It identifies some common confusions regarding the role of visual or diagrammatic reasoning, including confusions based on the fact that different forms of representation may be used at different levels in an implementation hierarchy. This is contrasted with the way in which the use of one form of representation (e.g. pictures) can be _controlled_ using another (e.g. logic, or programs). Finally some questions are asked about the role of metrical information in biological visual systems.
    4 citations
  37.  108
    Prolegomena to a theory of communication and affect.Aaron Sloman -1992 - In Andrew Ortony, Jon Slack & Oliviero Stock,Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues. Springer.
    As a step towards comprehensive computer models of communication, and effective human-machine dialogue, some of the relationships between communication and affect are explored. An outline theory is presented of the architecture that makes various kinds of affective states possible, or even inevitable, in intelligent agents, along with some of the implications of this theory for various communicative processes. The model implies that human beings typically have many different, hierarchically organized dispositions capable of interacting with new information to produce affective states, distract attention, interrupt ongoing actions, and so on. High "insistence" of motives is defined in relation to a tendency to penetrate an attention filter mechanism, which seems to account for the partial loss of control involved in emotions. One conclusion is that emulating human communicative abilities will not be achieved easily. Another is that it will be even more difficult to design and build computing systems that reliably achieve interesting communicative goals.
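The abstract's notion of motive "insistence" as a tendency to penetrate a variable attention filter can be given a minimal sketch. The filter class, the motive names, and all the numbers below are my own illustrative assumptions, not part of the paper:

```python
class AttentionFilter:
    """Motives whose insistence exceeds the current threshold get through.
    The threshold can be raised when the agent is absorbed in a demanding task."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def admit(self, motives: dict[str, float]) -> list[str]:
        """Return the names of motives that penetrate the filter."""
        return [name for name, insistence in motives.items()
                if insistence > self.threshold]

motives = {"reply to email": 0.3, "escape fire alarm": 0.95, "get coffee": 0.4}

relaxed = AttentionFilter(threshold=0.2)
busy = AttentionFilter(threshold=0.8)

print(relaxed.admit(motives))  # all three motives surface
print(busy.admit(motives))     # only the highly insistent motive penetrates
```

Raising the threshold models focused attention, yet a sufficiently insistent motive still breaks through regardless of the agent's current goals, which is the "partial loss of control" the abstract associates with emotions.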
    4 citations
  38.  113
    The irrelevance of Turing machines to artificial intelligence.Aaron Sloman -2002 - In Matthias Scheutz,Computationalism: New Directions. MIT Press.
    The common view that the notion of a Turing machine is directly relevant to AI is criticised. It is argued that computers are the result of a convergence of two strands of development with a long history: development of machines for automating various physical processes and machines for performing abstract operations on abstract entities, e.g. doing numerical calculations. Various aspects of these developments are analysed, along with their relevance to AI, and the similarities between computers viewed in this way and animal brains. This comparison depends on a number of distinctions: between energy requirements and information requirements of machines, between ballistic and online control, between internal and external operations, and between various kinds of autonomy and self-awareness. The ideas are all intuitively familiar to software engineers, though rarely made fully explicit. Most of this has nothing to do with Turing machines or most of the mathematical theory of computation. But it has everything to do with both the scientific task of understanding, modelling or replicating human or animal intelligence and the engineering applications of AI, as well as other applications of computers.
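One of the distinctions this abstract relies on, ballistic versus online control, can be illustrated with a toy sketch (the scenario, function names, and numbers are mine, purely for illustration): a ballistic controller executes a precomputed move sequence open-loop, while an online controller corrects each move using feedback.

```python
def ballistic_control(position: float, step: float, n_steps: int) -> float:
    """Execute a precomputed sequence of moves with no sensing en route."""
    for _ in range(n_steps):
        position += step
    return position

def online_control(position: float, target: float, gain: float, n_steps: int) -> float:
    """Correct each move using feedback about the remaining error."""
    for _ in range(n_steps):
        position += gain * (target - position)
    return position

# An unmodelled disturbance shifts the start position by 1.5 units.
# The ballistic plan (ten unit steps toward a target of 10.0) cannot notice,
# so it overshoots; the feedback controller converges regardless.
start = 0.0 + 1.5
print(ballistic_control(start, step=1.0, n_steps=10))            # 11.5
print(round(online_control(start, 10.0, gain=0.5, n_steps=10), 3))  # 9.992
```

The contrast matters for the abstract's argument because biological control is largely online, continuously coupled to sensing, whereas the Turing-machine picture of computation abstracts all of that away.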
    4 citations
  39.  98
    What are emotion theories about?Aaron Sloman -manuscript
    3 citations
  40.  23
    The well-designed young mathematician.Aaron Sloman -2008 -Artificial Intelligence 172 (18):2015-2034.
  41.  35
    What else can brains do?Aaron Sloman -2013 -Behavioral and Brain Sciences 36 (3):230-231.
  42.  750
    Evolution: The Computer Systems Engineer Designing Minds.Aaron Sloman -2011 -Avant: Trends in Interdisciplinary Studies 2 (2):45-69.
    What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, and not mental capabilities and consciousness, could be a product of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery, which we have only recently learnt how to design and build, and which could not even have been thought about in Darwin’s time, can interact with the physical machinery in which they are implemented, without being identical with their physical implementation or mere aggregates of physical structures and processes. The existence of various kinds of virtual machinery (including both “platform” virtual machines that can host other virtual machines, e.g. operating systems, and “application” virtual machines, e.g. spelling checkers and computer games) depends on complex webs of causal connections involving hardware and software structures, events and processes, where the specification of such causal webs requires concepts that cannot be defined in terms of concepts of the physical sciences. That indefinability, plus the possibility of various kinds of self-monitoring within virtual machinery, seems to explain some of the allegedly mysterious and irreducible features of consciousness that motivated Darwin’s critics and also more recent philosophers criticising AI. There are consequences for philosophy, psychology, neuroscience and robotics.
  43.  72
    'Necessary', 'A Priori' and 'Analytic'.Aaron Sloman -1965 -Analysis 26 (1):12 - 16.
    3 citations
  44.  49
    How to Derive "Better" from "Is".Aaron Sloman -1969 -American Philosophical Quarterly 6 (1):43 - 52.
    3 citations
  45. Must intelligent systems be scruffy.Aaron Sloman -1990 - In J. E. Tiles, G. T. McKee & G. C. Dean,Evolving knowledge in natural science and artificial intelligence. London: Pitman. pp. 17.
  46.  79
    The “semantics” of evolution: Trajectories and trade-offs in design space and niche space.Aaron Sloman -unknown
    This paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high level cross disciplinary ideas about natural information processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.
    3 citations
  47.  820
    Comments on “The Emulating Interview... with Rick Grush”.Aaron Sloman -2011 -Avant: Trends in Interdisciplinary Studies 2 (2):35–44.
    The author comments on Rick Grush’s statements about emulation and the embodied approach to representation. He proposes a modification of Grush’s definition of emulation, criticizing the notion of “standing in for”. He defends the notion of representation, and claims that radical embodied theories are not applicable to all cognition.
  48.  82
    Altricial self-organising information-processing systems.Aaron Sloman -unknown
    It is often thought that there is one key design principle, or at best a small set of design principles, underlying the success of biological organisms. Candidates include neural nets, ‘swarm intelligence’, evolutionary computation, dynamical systems, particular types of architecture or use of a powerful uniform learning mechanism, e.g. reinforcement learning. All of those support types of self-organising, self-modifying behaviours. But we are nowhere near understanding the full variety of powerful information-processing principles ‘discovered’ by evolution. By attending closely to the diversity of biological phenomena we may gain key insights into (a) how evolution happens, (b) what sorts of mechanisms, forms of representation, types of learning and development and types of architectures have evolved, (c) how to explain ill-understood aspects of human and animal intelligence, and (d) new useful mechanisms for artificial systems.
    2 citations
  49.  38
    Orthogonal Recombinable Competences Acquired by Altricial Species: blankets, string and plywood.Aaron Sloman -manuscript
    CONJECTURE: Alongside the innate physical sucking reflex for obtaining milk to be digested, decomposed and used all over the body for growth, repair, and energy, there is a genetically determined information-sucking reflex, which seeks out, sucks in, and decomposes information, which is later recombined in many ways, growing the information-processing architecture and many diverse recombinable competences.
    2 citations
  50.  49
    (1 other version)Did Searle attack strong strong or weak strong AI.Aaron Sloman -1986 - In A. G. Cohn & R. J. Thomas,Artificial Intelligence and its Applications. John Wiley and Sons.
    John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis: a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one, which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong AI. This paper explores some of Searle's concepts and shows that there are interestingly different versions of the 'Strong AI' thesis, connected with different kinds of reliability of mechanisms and programs.
    2 citations
1 — 50 / 131