Stanford Encyclopedia of Philosophy

Notes to Computing and Moral Responsibility

1. The term technological artifacts here refers to the (socially) constructed material or physical objects, such as computers, cars and refrigerators, that human beings create and use to achieve a particular purpose or goal. This conception of technological artifacts is often used in social and historical studies to distinguish artifacts from natural objects and from other socially constructed artifacts, such as regulatory laws (Hughes 1982; Bijker et al. 1987). For more on the concept of artifacts, see the entry on artifacts.

2. According to Bijker et al., the interpretive flexibility of technological artifacts means that "there is flexibility in how people think of, or interpret, artefacts" and "that there is flexibility in how artefacts are designed" (Bijker et al. 1995, p. 40). That is, different 'relevant social groups' have varying criteria for judging what makes a design superior or even workable, depending on often competing goals and interests, as well as on distinct ideas about what a particular artifact should do.

3. A long-running philosophical debate about Artificial Intelligence centers on the thesis that processes of the mind could be generated by computational structures (McCorduck 1979). Critics of AI have taken exception to the suggestion that the human mind and computers could be thought of as governed by the same general principles (Graubard 1988). They have argued against the presupposition that knowledge and intelligence could be captured in computational structures and mathematical or logical models. These critics have proposed a range of inherent properties or abilities that humans have and machines lack, such as emotion, common sense and intentionality. One of these critics, Searle, was the first to use the term 'strong AI' to refer to the philosophical position that a computer with the right kind of programs can literally be a mind that is able to understand and have other cognitive states (Searle 1980). He distinguished this kind of research from what he called 'weak AI'. Weak AI makes no claims about computers being minds; it merely argues that computers are useful for testing particular explanations of processes of the mind, because they simulate these processes. Contrary to strong AI, this position does not claim, according to Searle, that computers literally are the explanation (see also the entry on the Chinese Room argument).

Copyright © 2023 by
Merel Noorman <merelnoorman@gmail.com>

Open access to the SEP is made possible by a world-wide funding initiative.


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

