Stanford Encyclopedia of Philosophy

Philosophy of Technology

First published Fri Feb 20, 2009; substantive revision Mon Mar 6, 2023

If philosophy is the attempt “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term”, as Sellars (1962) put it, philosophy should not ignore technology. It is largely by technology that contemporary society hangs together. It is hugely important not only as an economic force but also as a cultural force. Indeed during the last two centuries, when it gradually emerged as a discipline, philosophy of technology has mostly been concerned with the meaning of technology for, and its impact on, society and culture, rather than with technology itself. Mitcham (1994) calls this type of philosophy of technology “humanities philosophy of technology” because it accepts “the primacy of the humanities over technologies” and is continuous with the overall perspective of the humanities (and some of the social sciences). Only recently has a branch of the philosophy of technology developed that is concerned with technology itself and that aims to understand both the practice of designing and creating artifacts (in a wide sense, including artificial processes and systems) and the nature of the things so created. This latter branch of the philosophy of technology seeks continuity with the philosophy of science and with several other fields in the analytic tradition in modern philosophy, such as the philosophy of action and decision-making, rather than with the humanities and social science.

The entry starts with a brief historical overview, then continues with a presentation of the themes on which modern analytic philosophy of technology focuses. This is followed by a discussion of the societal and ethical aspects of technology, in which some of the concerns of humanities philosophy of technology are addressed. This twofold presentation takes into consideration the development of technology as the outcome of a process originating within and guided by the practice of engineering, by standards on which only limited societal control is exercised, as well as the consequences for society of the implementation of the technology so created, which result from processes upon which only limited control can be exercised.


1. Historical Developments

1.1 The Greeks

Philosophical reflection on technology is about as old as philosophy itself. Our oldest testimony is from ancient Greece. There are four prominent themes. One early theme is the thesis that technology learns from or imitates nature (Plato, Laws X 889a ff.). According to Democritus, for example, house-building and weaving were first invented by imitating swallows and spiders building their nests and nets, respectively (Diels 1903 and Freeman 1948: 154). Perhaps the oldest extant source for the exemplary role of nature is Heraclitus (Diels 1903 and Freeman 1948: 112). Aristotle referred to this tradition by repeating Democritus’ examples, but he did not maintain that technology can only imitate nature: “generally technè in some cases completes what nature cannot bring to a finish, and in others imitates nature” (Physics II.8, 199a15; see also Physics II.2, and see Schummer 2001 and this encyclopedia’s entry on episteme and techne for discussion).

A second theme is the thesis that there is a fundamental ontological distinction between natural things and artifacts. According to Aristotle (Physics II.1), the former have their principles of generation and motion inside, whereas the latter, insofar as they are artifacts, are generated only by outward causes, namely human aims and forms in the human soul. Natural products (animals and their parts, plants, and the four elements) move, grow, change, and reproduce themselves by inner final causes; they are driven by purposes of nature. Artifacts, on the other hand, cannot reproduce themselves. Without human care and intervention, they vanish after some time by losing their artificial forms and decomposing into (natural) materials. For instance, if a wooden bed is buried, it decomposes to earth or changes back into its botanical nature by putting forth a shoot.

The thesis that there is a fundamental difference between man-made products and natural substances has had a long-lasting influence. In the Middle Ages, Avicenna criticized alchemy on the ground that it can never produce ‘genuine’ substances (Briffault 1930: 147). Even today, some still maintain that there is a difference between, for example, natural and synthetic vitamin C. The modern discussion of this theme is taken up in Section 2.5.

Aristotle’s doctrine of the four causes—material, formal, efficient and final—can be regarded as a third early contribution to the philosophy of technology. Aristotle explained this doctrine by referring to technical artifacts such as houses and statues (Physics II.3). The four causes are still very much present in modern discussions related to the metaphysics of artifacts. Discussions of the notion of function, for example, focus on its inherent teleological or ‘final’ character and the difficulties this presents to its use in biology. And the notorious case of the ship of Theseus—see this encyclopedia’s entries on material constitution, identity over time, relative identity, and sortals—was introduced in modern philosophy by Hobbes as showing a conflict between unity of matter and unity of form as principles of individuation. This conflict is seen by many as characteristic of artifacts. David Wiggins (1980: 89) takes it even to be the defining characteristic of artifacts.

A fourth point that deserves mentioning is the extensive employment of technological images by Plato and Aristotle. In his Timaeus, Plato described the world as the work of an Artisan, the Demiurge. His account of the details of creation is full of images drawn from carpentry, weaving, ceramics, metallurgy, and agricultural technology. Aristotle used comparisons drawn from the arts and crafts to illustrate how final causes are at work in natural processes. Despite their negative appreciation of the life led by artisans, whom they considered too much occupied by the concerns of their profession and the need to earn a living to qualify as free individuals, both Plato and Aristotle found technological imagery indispensable for expressing their belief in the rational design of the universe (Lloyd 1973: 61).

1.2 Later Developments; Humanities Philosophy of Technology

Although there was much technological progress in the Roman empire and during the Middle Ages, philosophical reflection on technology did not grow at a corresponding rate. Comprehensive works such as Vitruvius’ De architectura (first century BC) and Agricola’s De re metallica (1556) paid much attention to practical aspects of technology but little to philosophy.

In the realm of scholastic philosophy, there was an emergent appreciation for the mechanical arts. They were generally considered to be born of—and limited to—the mimicry of nature. This view was challenged when alchemy was introduced in the Latin West around the mid-twelfth century. Some alchemical writers such as Roger Bacon were willing to argue that human art, even if learned by imitating natural processes, could successfully reproduce natural products or even surpass them (Newman 2004). The result was a philosophy of technology in which human art was raised to a level of appreciation not found in other writings until the Renaissance. However, the last three decades of the thirteenth century witnessed an increasingly hostile attitude by religious authorities toward alchemy that culminated eventually in the denunciation Contra alchymistas, written by the inquisitor Nicholas Eymeric in 1396 (Newman 2004).

The Renaissance led to a greater appreciation of human beings and their creative efforts, including technology. As a result, philosophical reflection on technology and its impact on society increased. Francis Bacon is generally regarded as the first modern author to put forward such reflection. His view, expressed in his fantasy New Atlantis (1627), was overwhelmingly positive. This positive attitude lasted well into the nineteenth century, incorporating the first half-century of the industrial revolution. Karl Marx, for example, did not condemn the steam engine or the spinning mill for the vices of the bourgeois mode of production; he believed that ongoing technological innovation allowed for the necessary steps toward the more blissful stages of socialism and communism of the future. A discussion of different views on the role of technology in Marx’s theory of historical development can be found in Bimber 1990. See Van der Pot 1985 [1994/2004] for an extensive historical overview of appreciations of the development of technology generally.

A turning point in the appreciation of technology as a socio-cultural phenomenon is marked by Samuel Butler’s Erewhon (1872), written under the influence of the Industrial Revolution, and Darwin’s On the Origin of Species (1859). Butler’s book gave an account of a fictional country where all machines are banned and the possession of a machine or the attempt to build one is a capital crime. The people of this country had become convinced by an argument that ongoing technical improvements are likely to lead to a ‘race’ of machines that will replace mankind as the dominant species on earth. This introduced a theme that has remained influential in the perception of technology ever since.

During the last quarter of the nineteenth century and most of the twentieth century a critical attitude predominated in philosophical reflection on technology. The representatives of this attitude were, overwhelmingly, schooled in the humanities or the social sciences and had virtually no first-hand knowledge of engineering practice. Whereas Bacon wrote extensively on the method of science and conducted physical experiments himself, Butler, being a clergyman, lacked such first-hand knowledge. Ernst Kapp, who was the first to use the term ‘philosophy of technology’ in his book Eine Philosophie der Technik (1877 [2018]), was a philologist and historian. Most of the authors who wrote critically about technology and its socio-cultural role during the twentieth century were philosophers of a general outlook, such as Martin Heidegger (1954 [1977]), Hans Jonas (1979 [1984]), Arnold Gehlen (1957 [1980]), Günther Anders (1956), and Andrew Feenberg (1999). Others had a background in one of the other humanities or in social science, such as literary criticism and social research in the case of Lewis Mumford (1934), law in the case of Jacques Ellul (1954 [1964]), political science in the case of Langdon Winner (1977, 1980, 1983) and literary studies in the case of Albert Borgmann (1984). The form of philosophy of technology constituted by the writings of these and others has been called by Carl Mitcham (1994) “humanities philosophy of technology”, because it takes its point of departure from the humanities and the social sciences rather than from the practices of science and engineering, and it approaches technology accepting “the primacy of the humanities over technologies” (1994: 39), since technology originates from the goals and values of humans.

Humanities philosophers of technology tend to take the phenomenon of technology itself largely for granted; they treat it as a ‘black box’, a given, a unitary, monolithic, inescapable phenomenon. Their interest is not so much to analyze and understand this phenomenon itself but to grasp its relations to morality (Jonas, Gehlen), politics (Winner), the structure of society (Mumford), human culture (Ellul), the human condition (Hannah Arendt), or metaphysics (Heidegger). In this, these philosophers are almost all openly critical of technology: all things considered, they tend to have a negative judgment of the way technology has affected human society and culture, or at least they single out for consideration the negative effects of technology on human society and culture. This does not necessarily mean that technology itself is pointed out as the principal cause of these negative developments. In the case of Heidegger, in particular, the paramount position of technology in modern society is rather a symptom of something more fundamental, namely a wrongheaded attitude towards Being which has been on the rise for almost 25 centuries. It is therefore questionable whether Heidegger should be considered as a philosopher of technology, although within the humanities view he is considered to be among the most important ones. Much the same could be said about Arendt, in particular her discussion of technology in The Human Condition (1958), although her position in the canon of humanities philosophy of technology is not as prominent as is Heidegger’s.

To be sure, the work of these founding figures of humanities philosophy of technology has been taken further by a second and third generation of scholars—in particular the work of Heidegger remains an important source of inspiration—who in doing so have adopted a more neutral rather than overall negative view of technology and its meaning for human life and culture. Notable examples are Ihde (1979, 1993) and Verbeek (2000 [2005]).

In its development, humanities philosophy of technology continues to be influenced not so much by developments in philosophy (e.g., philosophy of science, philosophy of action, philosophy of mind) as by developments in the social sciences and humanities. Although, for example, Ihde and those who take their point of departure with him position their work as phenomenological or postphenomenological, there does not seem to be much interest in either the past or the present of this diffuse notion in philosophy, and in particular not much interest in the far from easy question of the extent to which Heidegger can be considered a phenomenologist. Of particular significance has been the emergence of ‘Science and Technology Studies’ (STS) in the 1980s, which studies from a broad social-scientific perspective how social, political, and cultural values affect scientific research and technological innovation, and how these in turn affect society, politics, and culture. We discuss authors from humanities philosophy of technology in Section 3 on ‘Ethical and Social Aspects of Technology’, but do not present separately and in detail the wide variety of views existing in this field. For a detailed treatment, Mitcham’s 1994 book still provides an excellent overview. A recent coverage of humanities philosophy of technology is available in Coeckelbergh’s (2020a) textbook. Olsen, Selinger and Riis (2008) and Vallor (2022) offer wide-ranging collections of contributions; Scharff and Dusek (2003 [2014]) and Kaplan (2004 [2009]) present comprehensive anthologies of texts from this tradition.

1.3 A Basic Ambiguity in the Meaning of Technology

Mitcham contrasts ‘humanities philosophy of technology’ to ‘engineering philosophy of technology’, where the latter refers to philosophical views developed by engineers or technologists as “attempts … to elaborate a technological philosophy” (1994: 17). Mitcham discusses only a handful of people as engineering philosophers of technology: Ernst Kapp, Peter Engelmeier, Friedrich Dessauer, and much more briefly Jacques Lafitte, Gilbert Simondon, Hendrik van Riessen, Juan David García Bacca, R. Buckminster Fuller and Mario Bunge. The label ‘engineering philosophy of technology’ raises serious questions: many of the persons discussed hardly classify as engineers or technologists. It is also not very clear how the notion of ‘a technological philosophy’ should be understood. As philosophers, these authors all seem to be rather isolated figures, whose work shows little overlap and who seem to share mainly the absence of a ‘working relation’ with established philosophical disciplines. It is not so clear what sorts of questions and concerns underlie the notion of ‘engineering philosophy of technology’. A larger role for systematic philosophy could bring it quite close to some examples of humanities philosophy of technology, for instance the work of Jacques Ellul, where the analyses would be rather similar and the remaining differences would be ones of attitude or appreciation.

In the next section we discuss in more detail a form of philosophy of technology that we consider to occupy, currently, the position of alternative to the humanities philosophy of technology. It emerged in the 1960s and gained momentum in the past twenty to twenty-five years. This form of the philosophy of technology, which may be called ‘analytic’, is not primarily concerned with the relations between technology and society but with technology itself. It expressly does not look upon technology as a ‘black box’ but as a phenomenon that should be studied in detail. It does not regard technology as such as a practice but as something grounded in a practice, basically the practice of engineering. It analyses this practice, its goals, its concepts and its methods, and it relates its findings to various themes from philosophy.

In seeing technology as grounded in a practice sustained by engineers, similar to the way philosophy of science focuses on the practice of science as sustained by scientists, analytic philosophy of technology could be thought to amount to the philosophy of engineering. Indeed many of the issues related to design, discussed below in Sections 2.3 and 2.4, could be singled out as forming the subject matter of a philosophy of engineering. The metaphysical issues discussed in Section 2.5 could not, however, and analytic philosophy of technology is therefore significantly broader than philosophy of engineering. The very title of Philosophy of Technology and Engineering Sciences (Meijers 2009), an extensive up-to-date overview, which contains contributions to all of the topics treated in the next section, suggests that technology and engineering do not coincide, but the book does not specifically address what distinguishes technology from engineering and how they are related. In fact, the existence of humanities philosophy of technology and analytic philosophy of technology next to one another reflects a basic ambiguity in the notion of technology that the philosophical work that has been going on has hardly succeeded in clarifying.

Technology can be said to have two aspects or dimensions, which can be referred to as instrumentality and productivity. Instrumentality covers the totality of human endeavours to control their lives and their environments by interfering with the world in an instrumental way, by using things in a purposeful and clever way. Productivity covers the totality of human endeavours to bring into existence new things through which certain things can be realized in a controlled and clever way. For the study of the dimension of instrumentality, it is in principle irrelevant whether the things that are made use of in controlling our lives and environments have been produced by us first; if we somehow could rely on natural objects to always be available to serve our purposes, the analysis of instrumentality and its consequences for how we live our lives would not necessarily be affected. Likewise, for the analysis of what is involved in the making of artifacts, and how the notions of artifact and of something new being brought into existence are to be understood, it is to a large extent irrelevant how human life, culture and society are changed as a result of the artifacts that are in fact produced. Notwithstanding its fundamental character, the ambiguity noted here seems hardly to be confronted directly in the literature. It is addressed by Lawson (2008, 2017) and by Franssen and Koller (2016).

Humanities philosophy of technology has been interested predominantly in the instrumentality dimension, whereas analytic philosophy of technology has focused on the productivity dimension. But technology as one of the basic phenomena of modern society, if not the most basic one, clearly is constituted by the processes centering on and involving both dimensions. It has proved difficult, however, to come to an overarching approach in which the interaction between these two dimensions of technology is adequately dealt with—no doubt partly due to the great differences in philosophical orientation and methodology associated with the two traditions and their separate foci. To improve this situation is arguably the most urgent challenge that the field of philosophy of technology as a whole is facing, since the continuation of the two orientations leading their separate lives threatens its unity and coherence as a discipline in the first place. Indeed, during the past ten to fifteen years the philosophy of engineering has established itself as a subdiscipline within the philosophy of technology, for which a comprehensive handbook was edited recently by Michelfelder and Doorn (2021).

After presenting the major issues of philosophical relevance in technology and engineering that are studied by analytic philosophers of technology in the next section, we discuss the problems and challenges that technology poses for the society in which it is practiced in the third and final section.

2. Analytic Philosophy of Technology

2.1 Introduction: Science and Technology’s Different Relations to Philosophy

It may come as a surprise to those new to the topic that the fields of philosophy of science and philosophy of technology show such great differences, given that few practices in our society are as closely related as science and engineering. Experimental science is nowadays crucially dependent on technology for the realization of its research set-ups and for gathering and analyzing data. The phenomena that modern science seeks to study could never be discovered without producing them through technology.

Theoretical research within technology has often come to be indistinguishable from theoretical research in science, making engineering science largely continuous with ‘ordinary’ or ‘pure’ science. This is a relatively recent development, which started around the middle of the nineteenth century, and is responsible for great differences between modern technology and traditional, craft-like techniques. The educational training that aspiring scientists and engineers receive starts off being largely identical and only gradually diverges into a science or an engineering curriculum. Ever since the scientific revolution of the seventeenth century, characterized by its two major innovations, the experimental method and the mathematical articulation of scientific theories, philosophical reflection on science has focused on the method by which scientific knowledge is generated, on the reasons for thinking scientific theories to be true, or approximately true, and on the nature of evidence and the reasons for accepting one theory and rejecting another. Hardly ever have philosophers of science posed questions that did not have the community of scientists, their concerns, their aims, their intuitions, their arguments and choices, as a major target. In contrast it is only recently that the philosophy of technology has discovered the community of engineers.

It might be claimed that it is up to the philosophy of technology, and not the philosophy of science, to target first of all the impact of technology—and with it science—on society and culture, because science affects society only through being applied as technology. This, however, will not do. Right from the start of the scientific revolution, science affected human culture and thought fundamentally and directly, not with a detour through technology, and the same is true for later developments such as relativity, atomic physics and quantum mechanics, the theory of evolution, genetics, biochemistry, and the increasingly dominating scientific world view overall. All the same, philosophers of science for a long time gave the impression that they gladly left questions addressing the normative, social and cultural aspects of science to other philosophical disciplines, or to historical studies. This has changed only during the past few decades, by scholars either focusing on these issues from the start (e.g. Longino 1990, 2002) or shifting their focus toward them (e.g. Kitcher 2001, 2011).

There is a major difference between the historical development of modern technology as compared to modern science which may at least partly explain this situation, which is that science emerged in the seventeenth century from philosophy itself. The answers that Galileo, Huygens, Newton, and others gave, by which they initiated the alliance of empiricism and mathematical description that is so characteristic of modern science, were answers to questions that had belonged to the core business of philosophy since antiquity. Science, therefore, kept the attention of philosophers. Philosophy of science can be seen as a transformation of epistemology in the light of the emergence of science. The foundational issues—the reality of atoms, the status of causality and probability, questions of space and time, the nature of the quantum world—that were so vigorously discussed at the end of the nineteenth and the beginning of the twentieth century are an illustration of this close relationship between scientists and philosophers. No such intimacy has ever existed between philosophers and engineers or technologists. Their worlds still barely touch. To be sure, a case can be made that, compared to the continuity existing between natural philosophy and science, a similar continuity exists between central questions in philosophy having to do with human action and practical rationality and the way technology approaches and systematizes the solution of practical problems. To investigate this connection may indeed be considered a major theme for philosophy of technology, and more is said on it in Sections 2.3 and 2.4. This continuity appears only in hindsight, however, and dimly, as the historical development is at most a slow convergence of various strands of philosophical thinking on action and rationality, not a development into variety from a single origin. Significantly, it is only the academic outsider Ellul who has, in his idiosyncratic way, recognized in technology the emergent single dominant way of answering all questions concerning human action, comparable to science as the single dominant way of answering all questions concerning human knowledge (Ellul 1954 [1964]). But Ellul was not so much interested in investigating this relationship as in emphasizing and denouncing the social and cultural consequences as he saw them. It is all the more important to point out that humanities philosophy of technology cannot be differentiated from analytic philosophy of technology by claiming that only the former is interested in the social context of technology. There are studies which are rooted in analytic philosophy of science but address in particular the relation of technology to society and culture, and equally the relevance of social relations to technological practices, without taking an evaluative stand with respect to technology; an example is Preston 2012.

2.2 The Relationship Between Technology and Science

The close relationship between the practices of engineering and science may easily keep the important differences between technology and science from view. The predominant position of science in the philosophical field of vision made it difficult for philosophers to recognize that technology merits special attention for involving issues that do not emerge in science. The view resulting from this lack of recognition is often presented, perhaps somewhat dramatically, as coming down to a claim that technology is ‘merely’ applied science.

A questioning of the relation between science and technology was the central issue in one of the earliest discussions among analytic philosophers of technology. In 1966, in a special issue of the journal Technology and Culture, Henryk Skolimowski argued that technology is something quite different from science (Skolimowski 1966). As he phrased it, science concerns itself with what is, whereas technology concerns itself with what is to be. A few years later, in his well-known book The Sciences of the Artificial (1969), Herbert Simon emphasized this important distinction in almost the same words, stating that the scientist is concerned with how things are but the engineer with how things ought to be. Although it is difficult to imagine that earlier philosophers were blind to this difference in orientation, their inclination, in particular in the tradition of logical empiricism, to view knowledge as a system of statements may have led to a conviction that in technology no knowledge claims play a role that cannot also be found in science. The study of technology, therefore, was not expected to pose new challenges or hold surprises regarding the interests of analytic philosophy.

In contrast, Mario Bunge (1966) defended the view that technology is applied science, but in a subtle way that does justice to the differences between science and technology. Bunge acknowledges that technology is about action, but an action heavily underpinned by theory—that is what distinguishes technology from the arts and crafts and puts it on a par with science. According to Bunge, theories in technology come in two types: substantive theories, which provide knowledge about the object of action, and operative theories, which are concerned with action itself. The substantive theories of technology are indeed largely applications of scientific theories. The operative theories, in contrast, are not preceded by scientific theories but are born in applied research itself. Still, as Bunge claims, operative theories show a dependence on science in that in such theories the method of science is employed. This includes such features as modeling and idealization, the use of theoretical concepts and abstractions, and the modification of theories by the absorption of empirical data through prediction and retrodiction.

In response to this discussion, Ian Jarvie (1966) proposed as important questions for a philosophy of technology what the epistemological status of technological statements is and how technological statements are to be demarcated from scientific statements. This suggests a thorough investigation of the various forms of knowledge occurring in either practice, in particular, since scientific knowledge has already been so extensively studied, of the forms of knowledge that are characteristic of technology and are lacking, or of much less prominence, in science. A distinction between ‘knowing that’—traditional propositional knowledge—and ‘knowing how’—non-articulated and even impossible-to-articulate knowledge—had been introduced by Gilbert Ryle (1949) in a different context. The notion of ‘knowing how’ was taken up by Michael Polanyi under the name of tacit knowledge and made a central characteristic of technology (Polanyi 1958); the current state of the philosophical discussion is presented in this encyclopedia’s entry on knowledge how. However, emphasizing too much the role of unarticulated knowledge, of ‘rules of thumb’ as they are often called, easily underplays the importance of rational methods in technology. An emphasis on tacit knowledge may also be ill-suited for distinguishing the practices of science and engineering, because the role of tacit knowledge in science may well be more important than current philosophy of science acknowledges, for example in inferring causal relationships on the basis of empirical evidence. This was also an important theme in the writings of Thomas Kuhn on theory change in science (Kuhn 1962).

2.3 The Centrality of Design to Technology

To claim, with Skolimowski and Simon, that technology is about what is to be or what ought to be rather than what is may serve to distinguish it from science but will hardly make it understandable why so much philosophical reflection on technology has taken the form of socio-cultural critique. Technology is an ongoing attempt to bring the world closer to the way one wishes it to be. Whereas science aims to understand the world as it is, technology aims to change the world. These are abstractions, of course. For one, whose wishes concerning what the world should be like are realized in technology? Unlike scientists, who are often personally motivated in their attempts at describing and understanding the world, engineers are seen, not least by engineers themselves, as undertaking their attempts to change the world as a service to the public. The ideas on what is to be or what ought to be are seen as originating outside of technology itself; engineers then take it upon themselves to realize these ideas. This view is a major source for the widespread picture of technology as being instrumental, as delivering instruments ordered from ‘elsewhere’, as means to ends specified outside of engineering, a picture that has served further to support the claim that technology is neutral with respect to values, discussed in Section 3.3.1. This view involves a considerable distortion of reality, however. Many engineers are intrinsically motivated to change the world, in particular the world as shaped by past technologies. As a result, much technological development is ‘technology-driven’.

To understand where technology ‘comes from’, what drives the innovation process, is of importance not only to those who are curious to understand the phenomenon of technology itself but also to those who are concerned about its role in society. Technology or engineering as a practice is concerned with the creation of artifacts and, of increasing importance, artifact-based services. The design process, the structured process leading toward that goal, forms the core of the practice of engineering. In the engineering literature, the design process is commonly represented as consisting of a series of translational steps; see for this, e.g., Suh 2001. At the start are the customer’s needs or wishes. In the first step these are translated into a list of functional requirements, which then define the design task an engineer, or a team of engineers, has to accomplish. The functional requirements specify as precisely as possible what the device to be designed must be able to do. This step is required because customers usually focus on just one or two features and are unable to articulate the requirements that are necessary to support the functionality they desire. In the second step, the functional requirements are translated into design specifications, which specify the exact physical parameters of crucial components by which the functional requirements are going to be met. The design parameters chosen to satisfy these requirements are combined and made more precise such that a blueprint of the device results. The blueprint contains all the details that must be known such that the final step to the process of manufacturing the device can take place. It is tempting to consider the blueprint as the end result of a design process, instead of a finished copy being this result. However, actual copies of a device are crucial for the purpose of prototyping and testing. Prototyping and testing presuppose that the sequence of steps making up the design process can and often will contain iterations, leading to revisions of the design parameters and/or the functional requirements. Even though, certainly for mass-produced items, the manufacture of a product for delivery to its customers or to the market comes after the closure of the design phase, the manufacturing process is often reflected in the functional requirements of a device, for example in putting restrictions on the number of different components of which the device consists. The complexity of a device will affect how difficult it will be to maintain or repair it, and ease of maintenance or low repair costs are often functional requirements. An important modern development is that the complete life cycle of an artifact is now considered to be the designing engineer’s concern, up till the final stages of the recycling and disposal of its components and materials, and the functional requirements of any device should reflect this. From this point of view, neither a blueprint nor a prototype can be considered the end product of engineering design.
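The translational scheme just described can be made concrete in a short sketch. The following Python fragment is purely illustrative and not drawn from the engineering literature; all names (FunctionalRequirement, translate_needs, test_prototype) and the toy acceptance test are assumptions made for the example. It shows the two translation steps and the iteration loop in which prototyping and testing lead to revised requirements:

from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    description: str      # what the device must be able to do
    minimum_level: float  # the level to be attained minimally

@dataclass
class DesignSpecification:
    parameter: str        # a physical parameter of a crucial component
    value: float

def translate_needs(needs):
    # Step 1: customer needs -> functional requirements.
    return [FunctionalRequirement(n, minimum_level=1.0) for n in needs]

def translate_requirements(reqs):
    # Step 2: functional requirements -> design specifications.
    return [DesignSpecification(f"parameter for: {r.description}", r.minimum_level)
            for r in reqs]

def test_prototype(specs):
    # Toy stand-in for building and testing an actual copy of the device.
    return all(s.value >= 1.0 for s in specs)

def design_process(needs):
    reqs = translate_needs(needs)
    while True:
        specs = translate_requirements(reqs)
        if test_prototype(specs):
            return specs  # detailed enough for a blueprint and manufacture
        # Iteration: testing leads to revision of requirements/parameters.
        reqs = [FunctionalRequirement(r.description, r.minimum_level + 0.1)
                for r in reqs]

blueprint = design_process(["cut bread safely"])

Even in this toy rendering, the design phase closes only after the test loop has been passed, which reflects the point made above that neither the blueprint nor the prototype alone can be taken as the end product.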

The biggest idealization that this scheme of the design process contains is arguably located at the start. Only in a minority of cases does a design task originate in a customer need or wish for a particular artifact. First of all, as already suggested, many design tasks are defined by engineers themselves, for instance, by noticing something to be improved in existing products. Nevertheless design often starts with a problem pointed out by some societal agent, which engineers are then invited to solve. Many such problems, however, are ill-defined or wicked problems, meaning that it is not at all clear what the problem is exactly and what a solution to the problem would consist in. The ‘problem’ is a situation that people—not necessarily the people ‘in’ the situation—find unsatisfactory, but typically without being able to specify a situation that they find more satisfactory in other terms than as one in which the problem has been solved. In particular it is not obvious that a solution to the problem would consist in some artifact, or some artifactual system or process, being made available or installed. Engineering departments all over the world advertise that engineering is problem solving, and engineers easily seem confident that they are best qualified to solve a problem when they are asked to, whatever the nature of the problem. This has led to the phenomenon of a technological fix, the solution of a problem by a technical solution, that is, the delivery of an artifact or artifactual process, where it is questionable, to say the least, whether this solves the problem or whether it was the best way of handling the problem.

A candidate example of a technological fix for the problem of global warming would be the currently much debated option of injecting sulfate aerosols into the stratosphere to offset the warming effect of greenhouse gases such as carbon dioxide and methane. Such schemes of geoengineering would allow us to avoid facing the—in all likelihood painful—choices that will lead to a reduction of the emission of greenhouse gases into the atmosphere, but will at the same time allow the depletion of the Earth’s reservoir of fossil fuels to continue. See for a discussion of technological fixing, e.g., Volti 2009: 26–32. Given this situation, and its hazards, the notion of a problem and a taxonomy of problems deserve to receive more philosophical attention than they have hitherto received.

These wicked problems are often broadly social problems, which would best be met by some form of ‘social action’, which would result in people changing their behavior or acting differently in such a way that the problem would be mitigated or even disappear completely. In defense of the engineering view, it could perhaps be said that the repertoire of ‘proven’ forms of social action is meager. The temptation of technical fixes could be overcome—at least that is how an engineer might see it—by the inclusion of the social sciences in the systematic development and application of knowledge to the solution of human problems. This, however, is a controversial view. Social engineering is to many a specter to be kept at as large a distance as possible, instead of an ideal to be pursued. Karl Popper referred to acceptable forms of implementing social change as ‘piecemeal social engineering’ and contrasted it to the revolutionary but completely unfounded schemes advocated by, e.g., Marxism. In the entry on Karl Popper, however, his choice of words is called ‘rather unfortunate’. The notion of social engineering, and its cogency, deserves more attention than it is currently receiving.

An important input for the design process is scientific knowledge: knowledge about the behavior of components and the materials they are composed of in specific circumstances. This is the point where science is applied. However, much of this knowledge is not directly available from the sciences, since it often concerns extremely detailed behavior in very specific circumstances. This scientific knowledge is therefore often generated within technology, by the engineering sciences. But apart from this very specific scientific knowledge, engineering design involves various other sorts of knowledge. In his book What Engineers Know and How They Know It (Vincenti 1990), the aeronautical engineer Walter Vincenti gave a six-fold categorization of engineering design knowledge (leaving aside production and operation as the other two basic constituents of engineering practice). Vincenti distinguishes

  1. Fundamental design concepts, including primarily the operational principle and the normal configuration of a particular device;
  2. Criteria and specifications;
  3. Theoretical tools;
  4. Quantitative data;
  5. Practical considerations;
  6. Design instrumentalities.

The fourth category concerns the quantitative knowledge just referred to, and the third the theoretical tools used to acquire it. These two categories can be assumed to match Bunge’s notion of substantive technological theories. The status of the remaining four categories is much less clear, however, partly because they are less familiar, or not familiar at all, from the well-explored context of science. Of these categories, Vincenti claims that they represent prescriptive forms of knowledge rather than descriptive ones. Here, the activity of design introduces an element of normativity, which is absent from scientific knowledge. Take such a basic notion as ‘operational principle’, which refers to the way in which the function of a device is realized, or, in short, how it works. This is still a purely descriptive notion. Subsequently, however, it plays a role in arguments that seek to prescribe a course of action to someone who has a goal that could be realized by the operation of such a device. At this stage, the issue changes from a descriptive to a prescriptive or normative one. An extensive discussion of the various kinds of knowledge relevant to technology is offered by Houkes (2009).

Although the notion of an operational principle—a term that seems to originate with Polanyi (1958)—is central to engineering design, no single clear-cut definition of it seems to exist. The issue of disentangling descriptive from prescriptive aspects in an analysis of technical actions and their constituents is therefore a task that has hardly begun. This task requires a clear view on the extent and scope of technology. If one follows Joseph Pitt in his book Thinking About Technology (1999) and defines technology broadly as ‘humanity at work’, then to distinguish between technical action and action in general becomes difficult, and the study of action in technology must absorb all descriptive and normative theories of action, including the theory of practical rationality, and much of theoretical economics in its wake. There have indeed been attempts at such an encompassing account of human action, for example Tadeusz Kotarbinski’s Praxiology (1965), but a perspective of such generality makes it difficult to arrive at results of sufficient depth. It would be a challenge for philosophy to specify the differences among action forms and the reasoning grounding them in (to single out three prominent fields of study) technology, organization and management, and economics.

A more restricted attempt at such an approach is Ilkka Niiniluoto’s (1993). According to Niiniluoto, the theoretical framework of technology as an activity that is concerned with what the world should be like rather than what it is, the framework that forms the counterpoint to the descriptive framework of science, is design science. The content of design science, the counterpoint to the theories and explanations that form the content of descriptive science, would then be formed by technical norms, statements of the form ‘If one wants to achieve X, one should do Y’. The notion of a technical norm derives from Georg Henrik von Wright’s Norm and Action (1963). Technical norms need to be distinguished from anankastic statements expressing natural necessity, of the form ‘If X is to be achieved, Y needs to be done’; the latter have a truth value but the former do not. Von Wright himself, however, wrote that he did not understand the mutual relations between these statements. Zwart, Franssen and Kroes (2018) present a detailed discussion. Ideas on what design science is and can and should be are evidently related to the broad problem area of practical rationality—see this encyclopedia’s entries on practical reason and instrumental rationality—and also to means-ends reasoning, discussed in the next section.
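The contrast between the two kinds of statements can be displayed schematically. The following rendering is one possible formalization, offered only as an illustration and not as von Wright’s own notation:

\[
\begin{array}{ll}
\text{anankastic statement (true or false):} & \mathit{ToAchieve}(X) \rightarrow \mathit{Necessary}(\mathit{Do}(Y)) \\
\text{technical norm (no truth value):} & \mathit{Want}(X) \Rightarrow \mathit{Ought}(\mathit{Do}(Y))
\end{array}
\]

On this schematic reading, the first states a relation of natural necessity between doing Y and achieving X, which can be true or false; the second prescribes Y to an agent who has the goal X, and it is statements of this second form that, on Niiniluoto’s proposal, make up the content of design science.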

2.4 Methodological Issues: Design as Decision Making

Design is an activity that is subject to rational scrutiny but in which creativity is considered to play an important role as well. Since design is a form of action, a structured series of decisions to proceed in one way rather than another, the form of rationality that is relevant to it is practical rationality, the rationality incorporating the criteria on how to act, given particular circumstances. This suggests a clear division of labor between the part to be played by rational scrutiny and the part to be played by creativity. Theories of rational action generally conceive their problem situation as one involving a choice among various courses of action open to the agent. Rationality then concerns the question how to decide among given options, whereas creativity concerns the generation of these options. This distinction is similar to the distinction between the context of justification and the context of discovery in science. The suggestion associated with this distinction, however, that rational scrutiny only applies in the context of justification, is difficult to uphold for technological design. If the initial creative phase of option generation is conducted sloppily, the result of the design task can hardly be satisfactory. Unlike the case of science, where the practical consequences of entertaining a particular theory are not taken into consideration, the context of discovery in technology is governed by severe constraints of time and money, and an analysis of the problem of how best to proceed certainly seems in order. There has been little philosophical work done in this direction; an overview of the issues is given in Kroes, Franssen, and Bucciarelli (2009).

The ideas of Herbert Simon on bounded rationality (see, e.g., Simon 1982) are relevant here, since decisions on when to stop generating options, and when to stop gathering information about these options and the consequences of adopting them, are crucial in decision making if informational overload and calculative intractability are to be avoided. However, it has proved difficult to develop Simon’s ideas on bounded rationality further since their conception in the 1950s. Another notion that is relevant here is means-ends reasoning. In order to be of any help here, theories of means-ends reasoning should concern not just the evaluation of given means with respect to their ability to achieve given ends, but also the generation or construction of means for given ends. A comprehensive theory of means-ends reasoning, however, is not yet available; for a proposal on how to develop means-ends reasoning in the context of technical artifacts, see Hughes, Kroes, and Zwart 2007. In the practice of engineering, alternative proposals for the realization of particular functions are usually taken from ‘catalogs’ of existing and proven realizations. These catalogs are extended by ongoing research in technology rather than under the pressure of particular design tasks.
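The core of Simon’s idea, understood as a stopping rule for option generation and information gathering, can be given a minimal sketch. The Python fragment below is our own illustration under stated assumptions, not Simon’s formulation; the function names and the aspiration level are invented for the example:

import random

def satisfice(generate_option, evaluate, aspiration_level, max_trials=100):
    # Generate and evaluate options one at a time, and stop at the first
    # option that is 'good enough', rather than exhaustively searching
    # for the optimum (which would invite informational overload and
    # calculative intractability).
    for _ in range(max_trials):
        option = generate_option()   # costly: option generation
        score = evaluate(option)     # costly: information gathering
        if score >= aspiration_level:
            return option
    return None  # no acceptable option found; lower the aspiration level?

# Usage with stand-in functions:
pick = satisfice(lambda: random.random(), lambda x: x, aspiration_level=0.8)

The aspiration level does the work here: it replaces the demand for a maximal outcome with a demand for an acceptable one, which is what distinguishes satisficing from optimizing.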

When engineering design is conceived as a process of decision making, governed by considerations of practical rationality, the next step is to specify these considerations. Almost all theories of practical rationality conceive of it as a reasoning process where a match between beliefs and desires or goals is sought. The desires or goals are represented by their value or utility for the decision maker, and the decision maker’s problem is to choose an action that realizes a situation that, ideally, has maximal value or utility among all the situations that could be realized. If there is uncertainty concerning the situations that will be realized by a particular action, then the problem is conceived as aiming for maximal expected value or utility. Now the instrumental perspective on technology implies that the value at issue in the design process, viewed as a process of rational decision making, is not the value of the artifacts that are created. Those values are the domain of the users of the technology so created. They are supposed to be represented in the functional requirements defining the design task. Instead the value to be maximized is the extent to which a particular design meets the functional requirements defining the design task. It is in this sense that engineers share an overall perspective on engineering design as an exercise in optimization. But although optimization is a value-orientated notion, it is not itself perceived as a value driving engineering design.
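The decision-theoretic schema gestured at here can be displayed explicitly. In the familiar notation (our rendering, not one specific to the engineering literature), the decision maker chooses the action a from the feasible set A that maximizes expected utility over the situations s it may realize:

\[
a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \mathbb{E}[U \mid a]
      \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{s \in S} p(s \mid a)\, U(s)
\]

In the design case, as the paragraph above notes, U is not the users’ valuation of the artifact itself but the degree to which a candidate design meets the functional requirements defining the design task.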

The functional requirements that define most design problems do not prescribe explicitly what should be optimized; usually they set levels to be attained minimally. It is then up to the engineer to choose how far to go beyond meeting the requirements in this minimal sense. Efficiency, in energy consumption and use of materials first of all, is then often a prime value. Under the pressure of society, other values have come to be incorporated, in particular safety and, more recently, sustainability. Sometimes it is claimed that what engineers aim to maximize is just one factor, namely market success. Market success, however, can only be assessed after the fact. The engineer’s maximization effort will instead be directed at what are considered the predictors of market success. Meeting the functional requirements and being relatively efficient and safe are plausible candidates as such predictors, but additional methods, informed by market research, may introduce additional factors or may lead to a hierarchy among the factors.

Choosing the design option that maximally meets all the functional requirements (which may but need not originate with the prospective user) and all other considerations and criteria that are taken to be relevant then becomes the practical decision-making problem to be solved in a particular engineering-design task. This creates several methodological problems. Most important of these is that the engineer is facing a multi-criteria decision problem. The various requirements come with their own operationalizations in terms of design parameters and measurement procedures for assessing their performance. This results in a number of rank orders or quantitative scales which represent the various options out of which a choice is to be made. The task is to come up with a final score in which all these results are ‘adequately’ represented, such that the option that scores best can be considered the optimal solution to the design problem. Engineers describe this situation as one where trade-offs have to be made: in judging the merit of one option relative to other options, a relatively bad performance on one criterion can be balanced by a relatively good performance on another criterion. An important problem is whether a rational method for doing this can be formulated. It has been argued by Franssen (2005) that this problem is structurally similar to the well-known problem of social choice, for which Kenneth Arrow proved his notorious impossibility theorem in 1950. As a consequence, as long as we require a solution method for this problem to answer to certain requirements that spell out its generality and rationality, no such solution method exists. In technical design, the role that individual voters play in situations of social choice is played by the various design criteria, which each have a say in what the resulting product comes to look like. This poses serious problems for the claims of engineers that their designs are optimal solutions in the sense of satisfying the totality of the design criteria best, since Arrow’s theorem implies that in most multi-criteria problems this notion of ‘optimal’ cannot be rigorously defined, just as in most multi-voter situations the notion of a best or even adequate representation of what the voters jointly want cannot be rigorously defined.
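The instability that drives this worry can be illustrated with a toy example. The Python fragment below is our own illustration, not Franssen’s argument, and a weighted sum is only one of many possible aggregation rules (Arrow’s theorem itself concerns ordinal rankings rather than weighted scores); it merely shows how which option counts as ‘optimal’ depends on an essentially arbitrary choice of trade-off weights:

# Two candidate designs scored on two criteria (higher is better).
options = {
    "design_A": {"efficiency": 0.9, "safety": 0.4},
    "design_B": {"efficiency": 0.5, "safety": 0.8},
}

def aggregate(weights):
    # Weighted-sum aggregation: one common, but not canonical, trade-off rule.
    return {name: sum(weights[c] * score for c, score in scores.items())
            for name, scores in options.items()}

print(aggregate({"efficiency": 0.7, "safety": 0.3}))  # design_A scores best
print(aggregate({"efficiency": 0.3, "safety": 0.7}))  # design_B scores best

Since nothing in the design problem itself dictates one set of weights over the other, the ‘optimal solution’ is not rigorously defined, which is the analogue of the social-choice predicament described above.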

This result seems to exempt a crucial aspect of engineering activity from philosophical scrutiny, and it could be used to defend the opinion that engineering is at least partly an art, not a science. Instead of surrendering to the result, however, which has a significance that extends far beyond engineering and even beyond decision making in general, we should perhaps conclude instead that there is still a lot of work to be done on what might be termed, provisionally, ‘approximative’ forms of reasoning. One form of reasoning to be included here is Herbert Simon’s bounded rationality, plus the related notion of ‘satisficing’. Since their introduction in the 1950s (Simon 1957) these two terms have found wide usage, but we are still lacking a general theory of bounded rationality. It may be in the nature of forms of approximative reasoning such as bounded rationality that a general theory cannot be had, but even a systematic treatment from which such an insight could emerge seems to be lacking.

Another problem for the decision-making view of engineering design is that in modern technology almost all design is done by teams. Such teams are composed of experts from many different disciplines. Each discipline has its own theories, its own models of interdependencies, its own assessment criteria, and so forth, and the professionals belonging to these disciplines must be considered as inhabitants of different object worlds, as Louis Bucciarelli (1994) phrases it. The different team members are, therefore, likely to disagree on the relative rankings and evaluations of the various design options under discussion. Agreement on one option as the overall best one can here be even less arrived at by an algorithmic method exemplifying engineering rationality. Instead, models of social interaction, such as bargaining and strategic thinking, are relevant here. An example of such an approach to an (abstract) design problem is presented by Franssen and Bucciarelli (2004).

To look in this way at technological design as a decision-making process is to view it normatively from the point of view of practical or instrumental rationality. At the same time it is descriptive in that it is a description of how engineering methodology generally presents the issue of how to solve design problems. From that somewhat higher perspective there is room for all kinds of normative questions that are not addressed here, such as whether the functional requirements defining a design problem can be seen as an adequate representation of the values of the prospective users of an artifact or a technology, or by which methods values such as safety and sustainability can best be elicited and represented in the design process. These issues will be taken up in Section 3.

2.5 Metaphysical Issues: The Status and Characteristics of Artifacts

Understanding the process of designing artifacts is the theme in philosophy of technology that most directly touches on the interests of engineering practice. This is hardly true for another issue of central concern to analytic philosophy of technology, which is the status and the character of artifacts. This is perhaps not unlike the situation in the philosophy of science, where working scientists seem also to be much less interested in investigating the status and character of models and theories than philosophers are.

Artifacts are man-made objects: they have an author (see Hilpinen 1992 and Hilpinen’s article artifact in this encyclopedia). The artifacts that are of relevance to technology are, additionally, made to serve a purpose. This excludes, within the set of all man-made objects, byproducts and waste products and equally, though controversially, works of art. Byproducts and waste products result from an intentional act to make something, but just not precisely them, although the author at work may be well aware of their creation. Works of art result from an intention directed at their creation (although in exceptional cases of conceptual art, this directedness may involve many intermediate steps), but it is contested whether artists include in their intentions concerning their work an intention that the work serve some purpose. Nevertheless, most philosophers of technology who discuss the metaphysics of artifacts exclude artworks from their analyses. A further discussion of this aspect belongs to the philosophy of art. An interesting general account which does not do so has been presented by Dipert (1993).

Technical artifacts, then, are made to serve some purpose, generally to be used for something or to act as a component in a larger artifact, which in its turn is either something to be used or again a component. Whether end product or component, an artifact is ‘for something’, and what it is for is called the artifact’s function. Several researchers have emphasized that an adequate description of artifacts must refer both to their status as tangible physical objects and to the intentions of the people engaged with them. Kroes and Meijers (2006) have dubbed this view “the dual nature of technical artifacts”; its most mature formulation is Kroes 2012. They suggest that the two aspects are ‘tied up’, so to speak, in the notion of artifact function. This gives rise to several problems. One, which will be passed over quickly because little philosophical work seems to have been done concerning it, is that structure and function mutually constrain each other, but the constraining is only partial. It is unclear whether a general account of this relation is possible and what problems need to be solved to arrive there. There may be interesting connections with the issue of multiple realizability in the philosophy of mind and with accounts of reduction in science; an example where this is explored is Mahner and Bunge 2001.

It is equally problematic whether a unified account of the notion of function as such is possible, but this issue has received considerably more philosophical attention. The notion of function is of paramount importance for characterizing artifacts, but the notion is used much more widely. The notion of an artifact’s function seems to refer necessarily to human intentions. Function is also a key concept in biology, however, where no intentionality plays a role, and it is a key concept in cognitive science and the philosophy of mind, where it is crucial in grounding intentionality in non-intentional, structural and physical properties. Up till now there is no accepted general account of function that covers both the intentionality-based notion of artifact function and the non-intentional notion of biological function—not to speak of other areas where the concept plays a role, such as the social sciences. The most comprehensive theory, which has the ambition to account for the biological, cognitive and intentional notions, is Ruth Millikan’s (1984); for criticisms and replies, see Preston 1998, 2003; Millikan 1999; Vermaas & Houkes 2003; and Houkes & Vermaas 2010. The collection of essays edited by Ariew, Cummins and Perlman (2002) presents an introduction to the topic of characterizing the notion of function, although the emphasis is on biological functions. This emphasis remains very strong in the literature, as can be judged from the most recent critical overview (Garson 2016), which explicitly refrains from discussing artifact functions.

Against the view that, at least in the case of artifacts, the notion of function refers necessarily to intentionality, it could be argued that in discussing the functions of the components of a larger device, and the interrelations between these functions, the intentional ‘side’ of these functions is of secondary importance only. This, however, would be to ignore the possibility of the malfunctioning of such components. This notion seems to be definable only in terms of a mismatch between actual behavior and intended behavior. The notion of malfunction also sharpens an ambiguity in the general reference to intentions when characterizing technical artifacts. These artifacts usually engage many people, and the intentions of these people may not all pull in the same direction. A major distinction can be drawn between the intentions of the actual user of an artifact for a particular purpose and the intentions of the artifact’s designer. Since an artifact may be used for a purpose different from the one for which its designer intended it to be used, and since people may also use natural objects for some purpose or other, one is invited to allow that artifacts can have multiple functions, or to enforce a hierarchy among all relevant intentions in determining the function of an artifact, or to introduce a classification of functions in terms of the sorts of determining intentions. In the latter case, which is a sort of middle way between the two other options, one commonly distinguishes between the proper function of an artifact as the one intended by its designer and the accidental function of the artifact as the one given to it by some user on private considerations. Accidental use can become so common, however, that the original function drops out of memory.

Closely related to this issue of the extent to which use and design determine the function of an artifact is the problem of characterizing artifact kinds. It may seem that we use functions to classify artifacts: an object is a knife because it has the function of cutting, or more precisely, of enabling us to cut. On closer inspection, however, the link between function and kind-membership seems much less straightforward. The basic kinds in technology are, for example, ‘knife’, ‘aircraft’ and ‘piston’. The members of these kinds have been designed in order to be used to cut something with, to transport something through the air and to generate mechanical movement through thermodynamic expansion, respectively. However, one cannot create a particular kind of artifact just by designing something with the intention that it be used for some particular purpose: a member of the kind so created must actually be useful for that purpose. Despite innumerable design attempts and claims, the perpetual motion machine is not a kind of artifact. A kind like ‘knife’ is defined, therefore, not only by the intentions of the designers of its members that they each be useful for cutting but also by a shared operational principle known to these designers, and on which they based their design. This is, in a different setting, also defended by Thomasson, who in her characterization of what she in general calls an artifactual kind says that such a kind is defined by the designer’s intention to make something of that kind, by a substantive idea that the designer has of how this can be achieved, and by his or her largely successful achievement of it (Thomasson 2003, 2007). Qua sorts of kinds in which artifacts can be grouped, a distinction must therefore be made between a kind like ‘knife’ and a corresponding but different kind ‘cutter’. A ‘knife’ indicates a particular way a ‘cutter’ can be made. One can also cut, however, with a thread or line, a welding torch, a water jet, and undoubtedly by other sorts of means that have not yet been thought of. A ‘cutter’ would then refer to a truly functional kind. As such, it is subject to the conflict between use and design: one could mean by ‘cutter’ anything that can be used for cutting or anything that has been designed to be used for cutting, by the application of whatever operational principle, presently known or unknown.

This distinction between artifact kinds and functional kinds is relevant for the status of such kinds in comparison to other notions of kinds. Philosophy of science has emphasized that the concept of a natural kind, such as exemplified by ‘water’ or ‘atom’, lies at the basis of science. On the other hand it is generally taken for granted that there are no regularities that all knives or airplanes or pistons answer to. This, however, is loosely based on considerations of multiple realizability that fully apply only to functional kinds, not to artifact kinds. Artifact kinds share an operational principle that gives them some commonality in physical features, and this commonality becomes stronger once a particular artifact kind is subdivided into narrower kinds. Since these kinds are specified in terms of physical and geometrical parameters, they are much closer to the natural kinds of science, in that they support law-like regularities; see Soavi 2009 for a defense of this position. A recent collection of essays that discuss the metaphysics of artifacts and artifact kinds is Franssen, Kroes, Reydon and Vermaas 2014.

2.6 Other Topics

There is at least one additional technology-related topic that ought to be mentioned because it has created a good deal of analytic philosophical literature, namely Artificial Intelligence and related areas. A full discussion of this vast field is beyond the scope of this entry, however. Information is to be found in the entries on Turing machines, the Church-Turing thesis, computability and complexity, the Turing test, the Chinese room argument, the computational theory of mind, functionalism, multiple realizability, and the philosophy of computer science.

3. Ethical and Social Aspects of Technology

3.1 The Development of the Ethics of Technology

It was not until the twentieth century that the development of the ethics of technology as a systematic and more or less independent subdiscipline of philosophy started. This late development may seem surprising given the large impact that technology has had on society, especially since the industrial revolution.

A plausible reason for this late development of ethics of technology is the instrumental perspective on technology that was mentioned in Section 2.2. This perspective implies, basically, a positive ethical assessment of technology: technology increases the possibilities and capabilities of humans, which seems in general desirable. Of course, since antiquity, it has been recognized that the new capabilities may be put to bad use or lead to human hubris. Often, however, these undesirable consequences are attributed to the users of technology, rather than the technology itself, or its developers. This vision is known as the instrumental vision of technology, resulting in the so-called neutrality thesis. The neutrality thesis holds that technology is a neutral instrument that can be put to good or bad use by its users. During the twentieth century, this neutrality thesis met with severe critique, most prominently by Heidegger and Ellul, who have been mentioned in this context in Section 2, but also by philosophers from the Frankfurt School, such as Horkheimer and Adorno (1947 [2002]), Marcuse (1964), and Habermas (1968 [1970]).

The scope and the agenda for ethics of technology to a large extent depend on how technology is conceptualized. The second half of the twentieth century has witnessed a richer variety of conceptualizations of technology that move beyond the conceptualization of technology as a neutral tool, as a world view or as a historical necessity. This includes conceptualizations of technology as a political phenomenon (Winner, Feenberg, Sclove), as a social activity (Latour, Callon, Bijker and others in the area of science and technology studies), as a cultural phenomenon (Ihde, Borgmann), as a professional activity (engineering ethics, e.g., Davis), and as a cognitive activity (Bunge, Vincenti). Despite this diversity, the development in the second half of the twentieth century is characterized by two general trends. One is a move away from technological determinism and the assumption that technology is a given self-contained phenomenon which develops autonomously, to an emphasis on technological development being the result of choices (although not necessarily the intended result). The other is a move away from ethical reflection on technology as such to ethical reflection on specific technologies and on specific phases in the development of technology. Both trends together have resulted in an enormous increase in the number and scope of ethical questions that are asked about technology. The developments also imply that ethics of technology needs to be adequately empirically informed, not only about the exact consequences of specific technologies but also about the actions of engineers and the process of technological development. This has also opened the way to the involvement of other disciplines in ethical reflections on technology, such as Science and Technology Studies (STS) and Technology Assessment (TA).

3.2 Approaches in the Ethics of Technology

Not only is the ethics of technology characterized by a diversity of approaches, it might even be doubted whether something like a subdiscipline of ethics of technology, in the sense of a community of scholars working on a common set of problems, exists. The scholars studying ethical issues in technology have diverse backgrounds (e.g., philosophy, STS, TA, law, political science, and STEM disciplines) and they do not always consider themselves (primarily) ethicists of technology. To give the reader an overview of the field, three basic approaches or strands that might be distinguished in the ethics of technology will be discussed.

3.2.1 Cultural and political approaches

Both cultural and political approaches build on the traditional philosophy and ethics of technology of the first half of the twentieth century. Whereas cultural approaches conceive of technology as a cultural phenomenon that influences our perception of the world, political approaches conceive of technology as a political phenomenon, i.e., as a phenomenon that is ruled by and embodies institutional power relations between people.

Cultural approaches are often phenomenological in nature or at least position themselves in relation to phenomenology as post-phenomenology. Examples of philosophers in this tradition are Don Ihde, Albert Borgmann, Peter-Paul Verbeek and Evan Selinger (e.g., Borgmann 1984; Ihde 1990; Verbeek 2000 [2005], 2011). The approaches are usually influenced by developments in STS, especially the idea that technologies contain a script that influences not only people’s perception of the world but also human behavior, and the idea of the absence of a fundamental distinction between humans and non-humans, including technological artifacts (Akrich 1992; Latour 1992, 1993; Ihde & Selinger 2003). The combination of both ideas has led some to claim that technology has (moral) agency, a claim that is discussed below in Section 3.3.1.

Political approaches to technology mostly go back to Marx, who assumed that the material structure of production in society, in which technology is obviously a major factor, determined the economic and social structure of that society. Similarly, Langdon Winner has argued that technologies can embody specific forms of power and authority (Winner 1980). According to him, some technologies are inherently normative in the sense that they require or are strongly compatible with certain social and political relations. Railroads, for example, seem to require a certain authoritative management structure. In other cases, technologies may be political due to the particular way they have been designed. Some political approaches to technology are inspired by (American) pragmatism and, to a lesser extent, discourse ethics. A number of philosophers, for example, have pleaded for a democratization of technological development and the inclusion of ordinary people in the shaping of technology (Winner 1983; Sclove 1995; Feenberg 1999). Such ideas are also echoed in recent interdisciplinary approaches, such as Responsible Research and Innovation (RRI), that aim at opening up the innovation process to a broader range of stakeholders and concerns (Owen et al. 2013).

Although political approaches obviously have ethical ramifications, many philosophers who initially adopted such approaches do not engage in explicit ethical reflection. In political philosophy, too, technology does not seem to have been taken up as an important topic. Nevertheless, particularly in relation to digital technologies such as social media, algorithms and more generally Artificial Intelligence (AI), a range of political themes has recently been discussed, such as threats to democracy (from, e.g., social media), the power of Big Tech companies, and new forms of exploitation, domination and colonialism that may come with AI (e.g., Coeckelbergh 2022; Susskind 2022; Zuboff 2017; Adams 2021). An important emerging theme is also justice, which does not just encompass distributive justice (Rawls 1999), but also recognition justice (Fraser and Honneth 2003) and procedural justice. Questions about justice have not only been raised by digital technologies, but also by climate change and energy technologies, leading to the coinage of new notions like climate justice (Caney 2014) and energy justice (Jenkins et al. 2016).

3.2.2 Engineering ethics

Engineering ethics started off in the 1980s in the United States, primarily as an educational effort. Engineering ethics is concerned with “the actions and decisions made by persons, individually or collectively, who belong to the profession of engineering” (Baum 1980: 1). According to this approach, engineering is a profession, in the same way as medicine is a profession.

Although there is no agreement on how exactly a profession should be defined, the following characteristics are often mentioned:

  • A profession relies on specialized knowledge and skills that require a long period of study;
  • The occupational group has a monopoly on the carrying out of the occupation;
  • The assessment of whether the professional work is carried out in a competent way is done by, and it is accepted that this can only be done by, professional peers;
  • A profession provides society with products, services or values that are useful or worthwhile for society, and is characterized by an ideal of serving society;
  • The daily practice of professional work is regulated by ethical standards, which are derived from or relate to the society-serving ideal of the profession.

Typical ethical issues that are discussed in engineering ethics are the professional obligations of engineers as exemplified in, for example, codes of ethics of engineers, the role of engineers versus managers, competence, honesty, whistle-blowing, concern for safety and conflicts of interest (Davis 1998, 2005). Over the years, the scope of engineering ethics has been broadened. Whereas it initially often focused on decisions of individual engineers and on questions like whistle-blowing and loyalty, textbooks now also discuss the wider context in which such decisions are made and pay attention to, for example, the so-called problem of many hands (van de Poel and Royakkers 2011; Peterson 2020) (see also Section 3.3.2). Initially, the focus was often primarily on safety concerns and issues like competence and conflicts of interest, but nowadays issues of sustainability, social justice, privacy, global issues and the role of technology in society are also discussed (Harris, Pritchard, and Rabins 2014; Martin and Schinzinger 2022; Taebi 2021; Peterson 2020; van de Poel and Royakkers 2011).

3.2.3 Ethics of specific technologies

The last decades have witnessed an enormous increase in ethical inquiries into specific technologies. This may now be the largest of the three strands discussed, especially given the rapid growth in technology-specific ethical inquiries in the last two decades. One of the most visible new fields nowadays is digital ethics, which evolved from computer ethics (e.g., Moor 1985; Floridi 2010; Johnson 2009; Weckert 2007; van den Hoven & Weckert 2008), with more recently a focus on robotics, artificial intelligence, machine ethics, and the ethics of algorithms (Lin, Abney, & Jenkins 2017; Di Nucci & Santoni de Sio 2016; Mittelstadt et al. 2016; Bostrom & Yudkowsky 2014; Wallach & Allen 2009; Coeckelbergh 2020b). Other technologies like biotechnology have also spurred dedicated ethical investigations (e.g., Sherlock & Morrey 2002; P. Thompson 2007). More traditional fields like architecture and urban planning have also attracted specific ethical attention (Fox 2000). Nanotechnology and so-called converging technologies have led to the establishment of what is called nanoethics (Allhoff et al. 2007). Other examples are the ethics of nuclear deterrence (Finnis et al. 1988), nuclear energy (Taebi & Roeser 2015) and geoengineering (Christopher Preston 2016).

Obviously the establishment of such new fields of ethical reflection is a response to social and technological developments. Still, the question can be asked whether the social demand is best met by establishing new fields of applied ethics. This issue is in fact regularly discussed as new fields emerge. Several authors have, for example, argued that there is no need for nanoethics because nanotechnology does not raise any really new ethical issues (e.g., McGinn 2010). The alleged absence of newness here is supported by the claim that the ethical issues raised by nanotechnology are a variation on, and sometimes an intensification of, existing ethical issues, but hardly really new, and by the claim that these issues can be dealt with by means of existing theories and concepts from moral philosophy. For an earlier, similar discussion concerning the supposed new character of ethical issues in computer engineering, see Tavani 2002.

The new fields of ethical reflection are often characterized as applied ethics, that is, as applications of theories, normative standards, concepts and methods developed in moral philosophy. For each of these elements, however, application is usually not straightforward but requires a further specification or revision. This is the case because general moral standards, concepts and methods are often not specific enough to be applicable in any direct sense to specific moral problems. ‘Application’ therefore often leads to new insights which might well result in the reformulation or at least refinement of existing normative standards, concepts and methods. In some cases, ethical issues in a specific field might require new standards, concepts or methods. Beauchamp and Childress, for example, have proposed a number of general ethical principles for biomedical ethics (Beauchamp & Childress 2001). These principles are more specific than general normative standards, but still so general and abstract that they apply to different issues in biomedical ethics. In computer ethics, existing moral concepts relating to, for example, privacy and ownership have been redefined and adapted to deal with issues which are typical for the computer age (Johnson 2003). An example is Nissenbaum’s proposal to understand privacy in terms of contextual integrity (Nissenbaum 2010). New fields of ethical application might also require new methods for, for example, discerning ethical issues that take into account relevant empirical facts about these fields, like the fact that technological research and development usually takes place in networks of people rather than by individuals (Zwart et al. 2006). Another, more general issue that applies to many new technologies is how to deal with the uncertainties about (potential) social and ethical impacts that typically surround new emerging technologies. Brey’s (2012) proposal for an anticipatory ethics may be seen as a reply to this challenge. The issue of anticipation is also one of the central concerns in the more recent interdisciplinary field of Responsible Research and Innovation (RRI) (e.g., Owen et al. 2013).

Although different fields of ethical reflection on specific technologies might well raise their own philosophical and ethical issues, it can be questioned whether this justifies the development of separate subfields or even subdisciplines. One obvious argument might be that in order to say something ethically meaningful about new technologies, one needs specialized and detailed knowledge of a specific technology. Moreover, such subfields allow interaction with relevant non-philosophical experts in, for example, law, psychology, economics, science and technology studies (STS) or technology assessment (TA), as well as the relevant STEM (Science, Technology, Engineering, Medicine) disciplines. On the other side, it could also be argued that a lot can be learned from interaction and discussion between ethicists specializing in different technologies, and from a fruitful interaction with the two other strands discussed above (cultural and political approaches and engineering ethics). In particular, more political approaches to technology can be complementary to approaches that focus on the ethical issues of specific technologies (such as AI) by drawing attention to justice issues, power differences and the role of larger institutional and international contexts. Currently, such interaction in many cases seems absent, although there are of course exceptions.

3.3 Some Recurrent Themes in the Ethics of Technology

We now turn to the description of some specific themes in the ethics of technology. We focus on a number of general themes that provide an illustration of general issues in the ethics of technology and the way these are treated.

3.3.1 Neutrality versus moral agency

One important general theme in the ethics of technology is the question whether technology is value-laden. Some authors have maintained that technology is value-neutral, in the sense that technology is just a neutral means to an end, and accordingly can be put to good or bad use (e.g., Pitt 2000). This view might have some plausibility in as far as technology is considered to be just a bare physical structure. Most philosophers of technology, however, agree that technological development is a goal-oriented process and that technological artifacts by definition have certain functions, so that they can be used for certain goals but not, or only with far more difficulty or less effectively, for other goals. This conceptual connection between technological artifacts, functions and goals makes it hard to maintain that technology is value-neutral. Even if this point is granted, the value-ladenness of technology can be construed in a host of different ways. Some authors have maintained that technology can have moral agency. This claim suggests that technologies can autonomously and freely ‘act’ in a moral sense and can be held morally responsible for their actions.

The debate whether technologies can have moral agency started off in computer ethics (Bechtel 1985; Snapper 1985; Dennett 1997; Floridi & Sanders 2004) but has since broadened. Typically, the authors who claim that technologies (can) have moral agency often redefine the notion of agency or its connection to human will and freedom (e.g., Latour 1993; Floridi & Sanders 2004; Verbeek 2011). A disadvantage of this strategy is that it tends to blur the morally relevant distinctions between people and technological artifacts. More generally, the claim that technologies have moral agency sometimes seems to have become shorthand for claiming that technology is morally relevant. This, however, overlooks the fact that technologies can be value-laden in other ways than by having moral agency (see, e.g., Johnson 2006; Radder 2009; Illies & Meijers 2009; Peterson & Spahn 2011; Miller 2020; Klenk 2021). One might, for example, claim that technology enables (or even invites) and constrains (or even inhibits) certain human actions and the attainment of certain human goals and is therefore to some extent value-laden, without claiming moral agency for technological artifacts. A good overview of the debate can be found in Kroes and Verbeek 2014.

The debate about moral agency and technology is now particularly salient with respect to the design of intelligent artificial agents. James Moor (2006) has distinguished between four ways in which artificial agents may be or become moral agents:

  1. Ethical impact agents are robots and computer systems that ethically impact their environment; this is probably true of all artificial agents.
  2. Implicit ethical agents are artificial agents that have been programmed to act according to certain values.
  3. Explicit ethical agents are machines that can represent ethical categories and that can ‘reason’ (in machine language) about these.
  4. Full ethical agents in addition also possess some characteristics we often consider crucial for human agency, like consciousness, free will and intentionality.

It might perhaps never be possible to technologically design full ethical agents, and if it were to become possible it might be questionable whether it is morally desirable to do so (Bostrom & Yudkowsky 2014; van Wynsberghe and Robbins 2019). As Wallach and Allen (2009) have pointed out, the main problem might not be to design artificial agents that can function autonomously and that can adapt themselves in interaction with the environment, but rather to build enough, and the right kind of, ethical sensitivity into such machines.

Apart from the question whether intelligent artificial agents can have moral agency, there are (broader) questions about their moral status; for example, would they—and if so under what conditions—qualify as moral patients, to whom humans have certain moral obligations? Traditionally, moral status is connected to consciousness, but a number of authors have proposed more minimal criteria for moral status, particularly for (social) robots. For example, Danaher (2020) has suggested that behaviouristic criteria might suffice, whereas Coeckelbergh (2014) and Gunkel (2018) have suggested a relational approach. Mosakas (2021) has argued that such approaches do not ground moral status, and hence that humans have no direct moral duties towards social robots (although they may still be morally relevant in other ways). Others have suggested that social robots may sometimes deceive us into believing they have certain cognitive and emotional capabilities (which may also give them moral status) while in fact they do not (Sharkey and Sharkey 2021).

3.3.2 Responsibility

Responsibility has always been a central theme in the ethics of technology. The traditional philosophy and ethics of technology, however, tended to discuss responsibility in rather general terms and were rather pessimistic about the possibility for engineers to assume responsibility for the technologies they developed. Ellul, for example, has characterized engineers as the high priests of technology, who cherish technology but cannot steer it. Hans Jonas (1979 [1984]) has argued that technology requires an ethics in which responsibility is the central imperative, because for the first time in history we are able to destroy the earth and humanity.

In engineering ethics, the responsibility of engineers is often discussed in relation to codes of ethics that articulate specific responsibilities of engineers. Such codes of ethics stress three types of responsibilities of engineers: (1) conducting the profession with integrity and honesty and in a competent way, (2) responsibilities towards employers and clients, and (3) responsibility towards the public and society. With respect to the latter, most US codes of ethics maintain that engineers ‘should hold paramount the safety, health and welfare of the public’.

As has been pointed out by several authors (Nissenbaum 1996; Johnson & Powers 2005; Swierstra & Jelsma 2006), it may be hard to pinpoint individual responsibility in engineering. The reason is that the conditions for the proper attribution of individual responsibility that have been discussed in the philosophical literature (like freedom to act, knowledge, and causality) are often not met by individual engineers. For example, engineers may feel compelled to act in a certain way due to hierarchical or market constraints, and negative consequences may be very hard or impossible to predict beforehand. The causality condition is often difficult to meet as well, due to the long chain from the research and development of a technology to its use and the many people involved in this chain. Davis (2012) nevertheless maintains that despite such difficulties individual engineers can and do take responsibility.

One issue that is at stake in this debate is the notion of responsibility. Davis (2012), and also for example Ladd (1991), argue for a notion of responsibility that focuses less on blame and stresses the forward-looking or virtuous character of assuming responsibility. But many others focus on backward-looking notions of responsibility that stress accountability, blameworthiness or liability. Zandvoort (2000), for example, has pleaded for a notion of responsibility in engineering that is more like the legal notion of strict liability, in which the knowledge condition for responsibility is seriously weakened. Doorn (2012) compares three perspectives on responsibility ascription in engineering—a merit-based, a rights-based and a consequentialist perspective—and argues that the consequentialist perspective, which applies a forward-looking notion of responsibility, is most powerful in influencing engineering practice.

The difficulty of attributing individual responsibility may lead to the Problem of Many Hands (PMH). The term was first coined by Dennis Thompson (1980) in an article about the responsibility of public officials. The term is used to describe problems with the ascription of individual responsibility in collective settings. Doorn (2010) has proposed a procedural approach, based on Rawls’ reflective equilibrium model, to deal with the PMH; other ways of dealing with the PMH include the design of institutions that help to avoid it or an emphasis on virtuous behavior in organizations (van de Poel, Royakkers, & Zwart 2015).

Whereas the PMH refers to the problem of attributing responsibility among a collective of human agents, technological developments have also made it possible to allocate tasks to self-learning and intelligent systems. Such systems may function and learn in ways that are hard to understand, predict and control for humans, leading to so-called ‘responsibility gaps’ (Matthias 2004). Since knowledge and control are usually seen as (essential) preconditions for responsibility, lack thereof may make it increasingly difficult to hold humans responsible for the actions and consequences of intelligent systems.

Initially, such responsibility gaps were mainly discussed in relation to autonomous weapon systems and self-driving cars (Sparrow 2007; Danaher 2016). As a possible solution, the notion of meaningful human control has been proposed as a precondition for the development and deployment of such systems, to ensure that humans can retain control, and hence responsibility, over these systems (Santoni de Sio and van den Hoven 2018). Nyholm (2018) has argued that many alleged cases of responsibility gaps are better understood in terms of collaborative human-technology agency (with humans in a supervising role) rather than the technology taking over control. While responsibility gaps may thus not be unavoidable, the more difficult issue may be to attribute responsibility to the various humans involved (which brings the PMH back to the table).

More recently, responsibility gaps have become a more general concern in relation to AI. Due to the advance of machine learning, AI systems may learn in ways that are hard, or almost impossible, to understand for humans. Initially, the dominant notion of responsibility addressed in the literature on responsibility gaps was blameworthiness or culpability, but Santoni de Sio and Mecacci (2021) have recently proposed to distinguish between what they call culpability gaps, moral accountability gaps, public accountability gaps, and active responsibility gaps.

3.3.3 Design

In the last decades, increasing attention has been paid not only to ethical issues that arise during the use of a technology, but also to those that arise during the design phase. An important consideration behind this development is the thought that during the design phase technologies, and their social consequences, are still malleable, whereas during the use phase technologies are more or less given, and negative social consequences may be harder to avoid or positive effects harder to achieve.

In computer ethics, an approach known as Value Sensitive Design (VSD) has been developed to explicitly address the ethical nature of design. VSD aims at integrating values of ethical importance in engineering design in a systematic way (Friedman & Hendry 2019). The approach combines conceptual, empirical and technical investigations. There is also a range of other approaches aimed at including values in design. ‘Design for X’ approaches in engineering aim at including instrumental values (like maintainability, reliability and costs), but they also include design for sustainability, inclusive design, and affective design (Holt & Barnes 2010). Inclusive design aims at making designs accessible to the whole population including, for example, handicapped people and the elderly (Erlandson 2008). Affective design aims at designs that evoke positive emotions in users and so contribute to human well-being. Van den Hoven, Vermaas, and van de Poel 2015 gives a good overview of the state of the art of value sensitive design for various values and application domains.

If one tries to integrate values into design, one may run into the problem of a conflict of values. The safest car is, due to its weight, not likely to be the most sustainable. Here safety and sustainability conflict in the design of cars. Traditional methods by which engineers deal with such conflicts and make trade-offs between different requirements for design include cost-benefit analysis and multiple criteria analysis. Such methods are, however, beset with methodological problems like those discussed in Section 2.4 (Franssen 2005; Hansson 2007). Van de Poel (2009) discusses various alternatives for dealing with value conflicts in design, including the setting of thresholds (satisficing), reasoning about values, innovation and diversity.
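The contrast between these methods can be made explicit in a schematic way; the notation below is introduced purely for illustration and is not taken from the works cited. A weighted-sum multiple criteria analysis ranks design options by an aggregate score, whereas a satisficing approach instead imposes a separate threshold for each value:

\[ S(x) \;=\; \sum_i w_i\, v_i(x) \qquad \text{versus} \qquad v_i(x) \,\ge\, t_i \ \text{ for every value } i, \]

where \(v_i(x)\) measures how well design option \(x\) performs on value \(i\) (say, safety or sustainability), \(w_i\) is the weight attached to that value, and \(t_i\) is a minimum acceptable level. The schema makes the moral issue visible: the weighted sum allows a shortfall in safety to be compensated by a gain in sustainability, while thresholds rule such trade-offs out.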

3.3.4 Technological risks

The risks of technology are one of the traditional ethical concerns in the ethics of technology. Risks raise not only ethical issues but other philosophical issues, such as epistemological and decision-theoretical issues, as well (Roeser et al. 2012).

Risk is usually defined as the product of the probability of an undesirable event and the effect of that event, although there are also other definitions around (Hansson 2004b). In general it seems desirable to keep technological risks as small as possible. The larger the risk, the larger either the likelihood or the impact of an undesirable event is. Risk reduction therefore is an important goal in technological development, and engineering codes of ethics often attribute a responsibility to engineers in reducing risks and designing safe products. Still, risk reduction is not always feasible or desirable. It is sometimes not feasible, because there are no absolutely safe products and technologies. But even if risk reduction is feasible, it may not be acceptable from a moral point of view. Reducing risk often comes at a cost. Safer products may be more difficult to use, more expensive or less sustainable. So sooner or later, one is confronted with the question: what is safe enough? What makes a risk (un)acceptable?
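The usual definition mentioned at the start of this paragraph can be written compactly; the notation is a minimal formalization for illustration only:

\[ R \;=\; P(E) \times D(E), \]

where \(P(E)\) is the probability of the undesirable event \(E\) and \(D(E)\) a measure of its harm or damage; where several undesirable events are possible, their contributions are summed, \(R = \sum_i P(E_i)\, D(E_i)\). Note that on this definition the same risk figure can result from a likely event with small harm and from an unlikely event with large harm, which is one reason why the definition is contested.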

The process of dealing with risks is often divided into three stages: risk assessment, risk evaluation and risk management. Of these, the second is most obviously ethically relevant. However, risk assessment already involves value judgments, for example about which risks should be assessed in the first place (Shrader-Frechette 1991). An important, and morally relevant, issue is also the degree of evidence that is needed to establish a risk. In establishing a risk on the basis of a body of empirical data one might make two kinds of mistakes. One can establish a risk when there is actually none (type I error) or one can mistakenly conclude that there is no risk while there actually is a risk (type II error). Science traditionally aims at avoiding type I errors. Several authors have argued that in the specific context of risk assessment it is often more important to avoid type II errors (Cranor 1990; Shrader-Frechette 1991). The reason for this is that risk assessment does not just aim at establishing scientific truth but has a practical aim, i.e., to provide the knowledge on the basis of which decisions can be made about whether it is desirable to reduce or avoid certain technological risks in order to protect users or the public.

Risk evaluation is carried out in a number of ways (see, e.g., Shrader-Frechette 1985). One possible approach is to judge the acceptability of risks by comparing them to other risks or to certain standards. One could, for example, compare technological risks with naturally occurring risks. This approach, however, runs the danger of committing a naturalistic fallacy: naturally occurring risks may (sometimes) be unavoidable, but that does not necessarily make them morally acceptable. More generally, it is often dubious to judge the acceptability of the risk of technology A by comparing it to the risk of technology B if A and B are not alternatives in a decision (for this and other fallacies in reasoning about risks, see Hansson 2004a).

A second approach to risk evaluation is risk-cost-benefit analysis, which is based on weighing the risks against the benefits of an activity. Different decision criteria can be applied if a (risk) cost-benefit analysis is carried out (Kneese, Ben-David, and Schulze 1983). According to Hansson (2003: 306), usually the following criterion is applied:

… a risk is acceptable if and only if the total benefits that the exposure gives rise to outweigh the total risks, measured as the probability-weighted disutility of outcomes.
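Read as a decision rule, this criterion amounts to comparing two expectation values; the rendering below is an illustrative formalization, not Hansson’s own notation:

\[ \sum_i p_i\, b_i \;>\; \sum_j q_j\, d_j, \]

where the \(b_i\) are the possible benefits of the exposure, with probabilities \(p_i\), and the \(d_j\) are the possible harms (disutilities), with probabilities \(q_j\): the exposure is acceptable just in case its expected benefit exceeds its expected disutility.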

A third approach is to base risk acceptance on the consent of people who suffer the risks after they have been informed about these risks (informed consent). A problem of this approach is that technological risks usually affect a large number of people at once. Informed consent may therefore lead to a “society of stalemates” (Hansson 2003: 300).

Several authors have proposed alternatives to the traditional approaches of risk evaluation on the basis of philosophical and ethical arguments. Shrader-Frechette (1991) has proposed a number of reforms in risk assessment and evaluation procedures on the basis of a philosophical critique of current practices. Roeser (2012) argues for a role of emotions in judging the acceptability of risks. Hansson has proposed the following alternative principle for risk evaluation:

Exposure of a person to a risk is acceptable if and only if this exposure is part of an equitable social system of risk-taking that works to her advantage. (Hansson 2003: 305)

Hansson’s proposal introduces a number of moral considerations in risk evaluation that are traditionally not addressed or only marginally addressed. These are the consideration whether individuals profit from a risky activity and the consideration whether the distribution of risks and benefits is fair.

Questions about acceptable risk may also be framed in terms of risk imposition. The question is then under what conditions it is acceptable for some agent A to impose a risk on some other agent B. The criteria for acceptable risk imposition are in part similar to the ones discussed above. A risk imposition may, for example, be (more) acceptable if agent B gave their informed consent, or if the risky activity that generates the risk is beneficial for agent B. However, other considerations come in as well, like the relation between agent A and agent B. It might perhaps be acceptable for parents to impose certain risks on their children, while it would be improper for the government to impose such risks on children.

Risk impositions may be particularly problematic if they lead to domination or domination-like effects (Maheshwari and Nyholm 2022). Domination is here understood in the republican sense proposed by philosophers like Pettit (2012). Freedom from domination does not just require people to have different options to choose from, but also to be free from (potential) arbitrary interference in the (availability of) these options by others. Non-domination thus requires that others do not have the power to arbitrarily interfere with one’s options (whether that power is exercised or not). Risk imposition may lead to domination (or at least domination-like effects) if agent A (the risk imposer), by imposing a risk on agent B (the risk bearer), can arbitrarily affect the range of safe options available to agent B.

Some authors have criticized the focus on risks in the ethics of technology. One strand of criticism argues that we often lack the knowledge to reliably assess the risks of a new technology before it has come into use. We often do not know the probability that something might go wrong, and sometimes we do not even know, or at least not fully, what might go wrong and what the possible negative consequences might be. To deal with this, some authors have proposed to conceive of the introduction of new technology in society as a social experiment and have urged us to think about the conditions under which such experiments are morally acceptable (Martin & Schinzinger 2022; van de Poel 2016). Another strand of criticism states that the focus on risks has led to a reduction of the impacts of technology that are considered (Swierstra & te Molder 2012). Only impacts related to safety and health, which can be calculated as risks, are considered, whereas ‘soft’ impacts, for example of a social or psychological nature, are neglected, thereby impoverishing the moral evaluation of new technologies.

Bibliography

  • Adams, Rachel, 2021, “Can artificial intelligence be decolonized?”, Interdisciplinary Science Reviews, 46(1–2): 176–197. doi:10.1080/03080188.2020.1840225
  • Agricola, Georgius, 1556 [1912], De re metallica, translated and edited by Herbert Clark Hoover and Lou Henry Hoover, London: The Mining Magazine, 1912. [Agricola 1556 [1912] available online]
  • Akrich, Madeleine, 1992, “The Description of Technical Objects”, in Bijker and Law (eds) 1992: 205–224.
  • Allhoff, Fritz, Patrick Lin, James H. Moor and John Weckert (eds), 2007, Nanoethics: The Ethical and Social Implications of Nanotechnology, Hoboken, NJ: Wiley-Interscience.
  • Anders, Günther, 1956, Die Antiquiertheit des Menschen (Volume I: Über die Seele im Zeitalter der zweiten industriellen Revolution; Volume II: Über die Zerstörung des Lebens im Zeitalter der dritten industriellen Revolution), München: C.H. Beck.
  • Arendt, Hannah, 1958, The Human Condition, Chicago: University of Chicago Press.
  • Ariew, Andrew, Robert Cummins and Mark Perlman (eds), 2002, Functions: New Essays in the Philosophy of Psychology and Biology, New York and Oxford: Oxford University Press.
  • Aristotle, Physics, translated in The Complete Works of Aristotle, Volume 1, The Revised Oxford Translation, 2014, edited by Jonathan Barnes.
  • Bacon, Francis, 1627, New Atlantis: A Worke Vnfinished, in his Sylva Sylvarum: or a Naturall Historie, in Ten Centuries, London: William Lee.
  • Baum, Robert J., 1980, Ethics and Engineering Curricula, Hastings-on-Hudson: The Hastings Center.
  • Beauchamp, Tom L., 2003, “The Nature of Applied Ethics”, in Frey and Wellman (eds) 2003: 1–16. doi:10.1002/9780470996621.ch1
  • Beauchamp, Tom L., and James F. Childress, 2001, Principles of Biomedical Ethics, fifth edition, Oxford and New York: Oxford University Press.
  • Bechtel, William, 1985, “Attributing Responsibility to Computer Systems”, Metaphilosophy, 16(4): 296–306. doi:10.1111/j.1467-9973.1985.tb00176.x
  • Bijker, Wiebe E., and John Law (eds), 1992, Shaping Technology/Building Society: Studies in Sociotechnical Change, Cambridge, MA: MIT Press.
  • Bimber, Bruce, 1990, “Karl Marx and the Three Faces of Technological Determinism”, Social Studies of Science, 20(2): 333–351. doi:10.1177/030631290020002006
  • Borgmann, Albert, 1984, Technology and the Character of Contemporary Life: A Philosophical Inquiry, Chicago and London: University of Chicago Press.
  • Bostrom, Nick, and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey, Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020
  • Brey, Philip A.E., 2012, “Anticipatory Ethics for Emerging Technologies”, NanoEthics, 6(1): 1–13. doi:10.1007/s11569-012-0141-7
  • Briffault, R., 1930, Rational Evolution (The Making of Humanity), New York: The Macmillan Company.
  • Bucciarelli, Louis L., 1994, Designing Engineers, Cambridge, MA: MIT Press.
  • Bunge, Mario, 1966, “Technology as Applied Science”, Technology and Culture, 7(3): 329–347. doi:10.2307/3101932
  • Butler, Samuel, 1872, Erewhon, London: Trubner and Co. [Butler 1872 available online]
  • Callon, Michel, 1986, “The Sociology of an Actor-Network: the Case of the Electric Vehicle”, in Mapping the Dynamics of Science and Technology: Sociology of Science in the Real World, Michel Callon, John Law and Arie Rip (eds), London: Macmillan, pp. 19–34.
  • Caney, Simon, 2014, “Two Kinds of Climate Justice: Avoiding Harm and Sharing Burdens”, Journal of Political Philosophy, 22(2): 125–149. doi:10.1111/jopp.12030
  • Coeckelbergh, Mark, 2014, “The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics”, Philosophy & Technology, 27(1): 61–77. doi:10.1007/s13347-013-0133-8
  • –––, 2020a, Introduction to Philosophy of Technology, Oxford and New York: Oxford University Press.
  • –––, 2020b, AI Ethics, Cambridge, MA: MIT Press.
  • –––, 2022, The Political Philosophy of AI: An Introduction, Cambridge: Polity.
  • Cranor, Carl F., 1990, “Some Moral Issues in Risk Assessment”, Ethics, 101(1): 123–143. doi:10.1086/293263
  • Danaher, John, 2016, “Robots, Law and the Retribution Gap”, Ethics and Information Technology, 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2020, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics, 26(4): 2023–2049. doi:10.1007/s11948-019-00119-x
  • Darwin, Charles R., 1859, On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, London: John Murray.
  • Davis, Michael, 1998, Thinking Like an Engineer: Studies in the Ethics of a Profession, New York and Oxford: Oxford University Press.
  • –––, 2005, Engineering Ethics, Aldershot/Burlington, VT: Ashgate.
  • –––, 2012, “‘Ain’t No One Here But Us Social Forces’: Constructing the Professional Responsibility of Engineers”, Science and Engineering Ethics, 18(1): 13–34. doi:10.1007/s11948-010-9225-3
  • Dennett, Daniel C., 1997, “When HAL kills, who’s to blame? Computer ethics”, in HAL’s Legacy: 2001’s Computer as Dream and Reality, edited by David G. Stork, Cambridge, MA: MIT Press, pp. 351–365.
  • Di Nucci, Ezio, and Filippo Santoni de Sio, 2016, Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on Remotely Controlled Weapons, Milton Park: Routledge.
  • Diels, Hermann, 1903, Die Fragmente der Vorsokratiker, Berlin: Weidmann.
  • Dipert, Randall R., 1993, Artifacts, Art Works, and Agency, Philadelphia: Temple University Press.
  • Doorn, Neelke, 2010, “A Rawlsian Approach to Distribute Responsibilities in Networks”, Science and Engineering Ethics, 16(2): 221–249. doi:10.1007/s11948-009-9155-0
  • –––, 2012, “Responsibility Ascriptions in Technology Development and Engineering: Three Perspectives”, Science and Engineering Ethics, 18(1): 69–90. doi:10.1007/s11948-009-9189-3
  • Ellul, Jacques, 1954 [1964], La technique ou L’enjeu du siècle, Paris: Armand Colin. Translated as The Technological Society, by John Wilkinson, New York: Alfred A. Knopf, 1964.
  • Erlandson, Robert F., 2008, Universal and Accessible Design for Products, Services, and Processes, Boca Raton, FL: CRC Press.
  • Feenberg, Andrew, 1999, Questioning Technology, London and New York: Routledge.
  • Finnis, John, Joseph Boyle and Germain Grisez, 1988, Nuclear Deterrence, Morality and Realism, Oxford: Oxford University Press.
  • Floridi, Luciano, 2010, The Cambridge Handbook of Information and Computer Ethics, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511845239
  • Floridi, Luciano, and J.W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines, 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Fox, Warwick, 2000, Ethics and the Built Environment (Professional Ethics), London and New York: Routledge.
  • Franssen, Maarten, 2005, “Arrow’s Theorem, Multi-Criteria Decision Problems and Multi-Attribute Preferences in Engineering Design”, Research in Engineering Design, 16(1–2): 42–56. doi:10.1007/s00163-004-0057-5
  • Franssen, Maarten, and Louis L. Bucciarelli, 2004, “On Rationality in Engineering Design”, Journal of Mechanical Design, 126(6): 945–949. doi:10.1115/1.1803850
  • Franssen, Maarten, and Stefan Koller, 2016, “Philosophy of Technology as a Serious Branch of Philosophy: The Empirical Turn as a Starting Point”, in Philosophy of Technology after the Empirical Turn, edited by Maarten Franssen, Pieter E. Vermaas, Peter Kroes, and Anthonie W.M. Meijers, Cham: Springer, 31–61. doi:10.1007/978-3-319-33717-3_3
  • Franssen, Maarten, Peter Kroes, Thomas A.C. Reydon and Pieter E. Vermaas (eds), 2014, Artefact Kinds: Ontology and the Human-Made World, Heidelberg/New York/Dordrecht/London: Springer. doi:10.1007/978-3-319-00801-1
  • Fraser, Nancy, and Axel Honneth, 2003, Redistribution or Recognition?: A Political-Philosophical Exchange, London and New York: Verso.
  • Freeman, K., 1948, Ancilla to the Pre-Socratic Philosophers (A complete translation of the Fragments in Diels, Fragmente der Vorsokratiker), Cambridge, MA: Harvard University Press.
  • Frey, R.G., and Christopher Heath Wellman (eds), 2003, A Companion to Applied Ethics, Oxford and Malden, MA: Blackwell. doi:10.1002/9780470996621
  • Friedman, Batya, and David Hendry, 2019, Value Sensitive Design: Shaping Technology with Moral Imagination, Cambridge, MA: MIT Press.
  • Garson, Justin, 2016, A Critical Overview of Biological Functions (SpringerBriefs in Philosophy), Cham: Springer International Publishing. doi:10.1007/978-3-319-32020-5
  • Gehlen, Arnold, 1957, Die Seele im technischen Zeitalter, Hamburg: Rowohlt. Translated as Man in the Age of Technology, by Patricia Lipscomb, New York: Columbia University Press, 1980.
  • Gunkel, David J., 2018, Robot Rights, Cambridge, MA: MIT Press.
  • Habermas, Jürgen, 1968 [1970], “Technik und Wissenschaft als ‘Ideologie’”, in an anthology of the same name, Frankfurt: Suhrkamp Verlag. Translated as “Technology and Science as ‘Ideology’”, in Toward a Rational Society: Student Protest, Science, and Politics, by Jeremy J. Shapiro, Boston, MA: Beacon Press, pp. 81–122.
  • Hansson, Sven Ove, 2003, “Ethical Criteria of Risk Acceptance”, Erkenntnis, 59(3): 291–309. doi:10.1023/A:1026005915919
  • –––, 2004a, “Fallacies of Risk”, Journal of Risk Research, 7(3): 353–360. doi:10.1080/1366987042000176262
  • –––, 2004b, “Philosophical Perspectives on Risk”, Techné, 8(1): 10–35. doi:10.5840/techne2004818
  • –––, 2007, “Philosophical Problems in Cost-Benefit Analysis”, Economics and Philosophy, 23(2): 163–183. doi:10.1017/S0266267107001356
  • Harris, Charles E., Michael S. Pritchard and Michael J. Rabins, 2014, Engineering Ethics: Concepts and Cases, fifth edition, Belmont, CA: Wadsworth.
  • Heidegger, Martin, 1954 [1977], “Die Frage nach der Technik”, in Vorträge und Aufsätze, Pfullingen: Günther Neske. Translated as “The Question concerning Technology”, in The Question Concerning Technology and Other Essays, by William Lovitt, New York: Harper and Row, 1977, pp. 3–35.
  • Herkert, Joseph R., 2001, “Future Directions in Engineering Ethics Research: Microethics, Macroethics and the Role of Professional Societies”, Science and Engineering Ethics, 7(3): 403–414. doi:10.1007/s11948-001-0062-2
  • Hilpinen, Risto, 1992, “Artifacts and Works of Art”, Theoria, 58(1): 58–82. doi:10.1111/j.1755-2567.1992.tb01155.x
  • Holt, Raymond, and Catherine Barnes, 2010, “Towards an Integrated Approach to ‘Design for X’: An Agenda for Decision-Based DFX Research”, Research in Engineering Design, 21(2): 123–136. doi:10.1007/s00163-009-0081-6
  • Horkheimer, Max, and Theodor W. Adorno, 1947 [2002], Dialektik der Aufklärung: Philosophische Fragmente, Amsterdam: Querido Verlag. Translated as Dialectic of Enlightenment: Philosophical Fragments, by Edmund Jephcott, and edited by Gunzelin Schmid Noerr, Stanford, CA: Stanford University Press, 2002.
  • Houkes, Wybo, 2009, “The Nature of Technological Knowledge”, in Meijers 2009: 309–350. doi:10.1016/B978-0-444-51667-1.50016-1
  • Houkes, Wybo, and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts, Dordrecht/Heidelberg/London/New York: Springer. doi:10.1007/978-90-481-3900-2
  • Hughes, Jesse, Peter Kroes, and Sjoerd Zwart, 2007, “ASemantics for Means-End Relations”,Synthese, 158(2):207–231. doi:10.1007/s11229-006-9036-x
  • Ihde, Don, 1979, Technics and Praxis, Dordrecht/Boston/Lancaster: D. Reidel.
  • –––, 1990, Technology and the Lifeworld: From Garden to Earth, Bloomington: Indiana University Press.
  • –––, 1993, Philosophy of Technology: An Introduction, New York: Paragon.
  • Ihde, Don, and Evan Selinger, 2003, Chasing Technoscience: Matrix for Materiality, Bloomington: Indiana University Press.
  • Illies, Christian, and Anthonie Meijers, 2009, “Artefacts Without Agency”, The Monist, 92(3): 420–440. doi:10.5840/monist200992324
  • Jarvie, Ian C., 1966, “The Social Character of Technological Problems: Comments on Skolimowski’s Paper”, Technology and Culture, 7(3): 384–390. doi:10.2307/3101936
  • Jenkins, Kirsten, Darren McCauley, Raphael Heffron, Hannes Stephan and Robert Rehner, 2016, “Energy Justice: A Conceptual Review”, Energy Research & Social Science, 11: 174–182. doi:10.1016/j.erss.2015.10.004
  • Johnson, Deborah G., 2003, “Computer Ethics”, in Frey and Wellman 2003: 608–619. doi:10.1002/9780470996621.ch45
  • –––, 2006, “Computer Systems: Moral Entities But Not Moral Agents”, Ethics and Information Technology, 8(4): 195–205. doi:10.1007/s10676-006-9111-5
  • –––, 2009, Computer Ethics, fourth edition, Upper Saddle River, NJ: Prentice Hall.
  • Johnson, Deborah G., and Thomas M. Powers, 2005, “Computer Systems and Responsibility: A Normative Look at Technological Complexity”, Ethics and Information Technology, 7(2): 99–107. doi:10.1007/s10676-005-4585-0
  • Jonas, Hans, 1979 [1984], Das Prinzip Verantwortung: Versuch einer Ethik für die technologische Zivilisation, Frankfurt/Main: Suhrkamp; extended English edition The Imperative of Responsibility: In Search of an Ethics for the Technological Age, Chicago and London: University of Chicago Press, 1984.
  • Kaplan, David M. (ed.), 2004 [2009], Readings in the Philosophy of Technology, Lanham, MD and Oxford: Rowman and Littlefield; first edition 2004, second revised edition 2009.
  • Kapp, Ernst, 1877 [2018], Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Cultur aus neuen Gesichtspunkten, Braunschweig: Westermann [Kapp 1877 available online]. Translated as Elements of a Philosophy of Technology: On the Evolutionary History of Culture, by Lauren K. Wolfe, and edited by Jeffrey West Kirkwood and Leif Weatherby, Minneapolis, MN: University of Minnesota Press, 2018.
  • Kitcher, Philip, 2001, Science, Truth, and Democracy, Oxford and New York: Oxford University Press.
  • –––, 2011, The Ethical Project, Cambridge, MA: Harvard University Press.
  • Klenk, Michael, 2021, “How Do Technological Artefacts Embody Moral Values?”, Philosophy & Technology, 34(3): 525–544. doi:10.1007/s13347-020-00401-y
  • Kneese, Allen V., Shaul Ben-David and William D. Schulze, 1983, “The Ethical Foundations of Benefit-Cost Analysis”, in Energy and the Future, edited by Douglas E. MacLean and Peter G. Brown, Totowa, NJ: Rowman and Littlefield, pp. 59–74.
  • Kotarbinski, Tadeusz, 1965, Praxiology: An Introduction to the Sciences of Efficient Action, Oxford: Pergamon Press.
  • Kroes, Peter, 2012, Technical Artefacts: Creations of Mind and Matter, Dordrecht/Heidelberg/New York/London: Springer. doi:10.1007/978-94-007-3940-6
  • Kroes, Peter, and Anthonie Meijers (eds), 2006, “The Dual Nature of Technical Artifacts”, special issue of Studies in History and Philosophy of Science, 37(1): 1–158. doi:10.1016/j.shpsa.2005.12.001
  • Kroes, Peter, Maarten Franssen and Louis Bucciarelli, 2009, “Rationality in Design”, in Meijers (ed.) 2009: 565–600. doi:10.1016/B978-0-444-51667-1.50025-2
  • Kroes, Peter, and Peter-Paul Verbeek (eds), 2014, The Moral Status of Technical Artefacts, Dordrecht: Springer. doi:10.1007/978-94-007-7914-3
  • Kuhn, Thomas S., 1962, The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
  • Ladd, John, 1991, “Bhopal: An Essay on Moral Responsibility and Civic Virtue”, Journal of Social Philosophy, 22(1): 73–91. doi:10.1111/j.1467-9833.1991.tb00022.x
  • Latour, Bruno, 1992, “Where Are the Missing Masses?”, in Bijker and Law (eds) 1992: 225–258.
  • –––, 1993, We Have Never Been Modern, New York: Harvester Wheatsheaf.
  • –––, 2005, Reassembling the Social: An Introduction to Actor-Network-Theory, Oxford and New York: Oxford University Press.
  • Lawson, Clive, 2008, “An Ontology of Technology: Artefacts, Relations and Functions”, Technè, 12(1): 48–64. doi:10.5840/techne200812114
  • –––, 2017, Technology and Isolation, Cambridge and New York: Cambridge University Press. doi:10.1017/9781316848319
  • Lin, Patrick, Keith Abney and Ryan Jenkins (eds), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford/New York: Oxford University Press.
  • Lloyd, G.E.R., 1973, “Analogy in Early Greek Thought”, in The Dictionary of the History of Ideas, edited by Philip P. Wiener, New York: Charles Scribner’s Sons, vol. 1, pp. 60–64. [Lloyd 1973 available online]
  • Lloyd, Peter A., and Jerry A. Busby, 2003, “‘Things that Went Well—No Serious Injuries or Deaths’: Ethical Reasoning in a Normal Engineering Design Process”, Science and Engineering Ethics, 9(4): 503–516. doi:10.1007/s11948-003-0047-4
  • Longino, Helen, 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton: Princeton University Press.
  • –––, 2002, The Fate of Knowledge, Princeton: Princeton University Press.
  • Maheshwari, Kritika, and Sven Nyholm, 2022, “Dominating Risk Impositions”, The Journal of Ethics. doi:10.1007/s10892-022-09407-4
  • Mahner, Martin, and Mario Bunge, 2001, “Function and Functionalism: A Synthetic Perspective”, Philosophy of Science, 68(1): 73–94. doi:10.1086/392867
  • Marcuse, Herbert, 1964, One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society, New York: Beacon Press, and London: Routledge and Kegan Paul.
  • Martin, Mike W., and Roland Schinzinger, 2022, Ethics in Engineering, fifth edition, Boston, MA: McGraw-Hill.
  • Matthias, Andreas, 2004, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata”, Ethics and Information Technology, 6(3): 175–183. doi:10.1007/s10676-004-3422-1
  • McGinn, Robert E., 2010, “What’s Different, Ethically, About Nanotechnology? Foundational Questions and Answers”, NanoEthics, 4(2): 115–128. doi:10.1007/s11569-010-0089-4
  • Meijers, Anthonie (ed.), 2009, Philosophy of Technology and Engineering Sciences (Handbook of the Philosophy of Science, volume 9), Amsterdam: North-Holland.
  • Michelfelder, Diane P., and Neelke Doorn (eds), 2021, The Routledge Handbook of the Philosophy of Engineering, New York and Milton Park, UK: Routledge.
  • Miller, Boaz, 2020, “Is Technology Value-Neutral?”, Science, Technology & Human Values, 46(1): 53–80. doi:10.1177/0162243919900965
  • Millikan, Ruth Garrett, 1999, “Wings, Spoons, Pills, and Quills: A Pluralist Theory of Function”, The Journal of Philosophy, 96(4): 191–206. doi:10.5840/jphil199996428
  • Mitcham, Carl, 1994, Thinking Through Technology: The Path Between Engineering and Philosophy, Chicago: University of Chicago Press.
  • Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter and Luciano Floridi, 2016, “The Ethics of Algorithms: Mapping the Debate”, Big Data & Society, 3(2): 1–21. doi:10.1177/2053951716679679
  • Moor, James H., 1985, “What is Computer Ethics?”, Metaphilosophy, 16(4): 266–275. doi:10.1111/j.1467-9973.1985.tb00173.x
  • –––, 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems, 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Mosakas, Kestutis, 2021, “On the Moral Status of Social Robots: Considering the Consciousness Criterion”, AI & Society, 36(2): 429–443. doi:10.1007/s00146-020-01002-1
  • Mumford, Lewis, 1934, Technics and Civilization, New York: Harcourt, Brace and Company, and London: Routledge and Kegan Paul.
  • Newman, William R., 2004, Promethean Ambitions: Alchemy and the Quest to Perfect Nature, Chicago: University of Chicago Press.
  • Niiniluoto, Ilkka, 1993, “The Aim and Structure of Applied Research”, Erkenntnis, 38(1): 1–21. doi:10.1007/BF01129020
  • Nissenbaum, Helen, 1996, “Accountability in a Computerized Society”, Science and Engineering Ethics, 2(1): 25–42. doi:10.1007/BF02639315
  • –––, 2010, Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford, CA: Stanford Law Books.
  • Nyholm, Sven, 2018, “Attributing Agency to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics, 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • Olsen, Jan Kyrre Berg, Evan Selinger and Søren Riis (eds), 2009, New Waves in Philosophy of Technology, Basingstoke and New York: Palgrave Macmillan. doi:10.1057/9780230227279
  • Owen, Richard, John Bessant, and Maggy Heintz (eds), 2013, Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, Chichester: John Wiley. doi:10.1002/9781118551424
  • Peterson, Martin, 2020, Ethics for Engineers, New York: Oxford University Press.
  • Peterson, Martin, and Andreas Spahn, 2011, “Can Technological Artefacts be Moral Agents?”, Science and Engineering Ethics, 17(3): 411–424. doi:10.1007/s11948-010-9241-3
  • Pettit, Philip, 2012, On the People’s Terms: A Republican Theory and Model of Democracy (The Seeley Lectures), Cambridge and New York: Cambridge University Press.
  • Pitt, Joseph C., 1999, Thinking About Technology: Foundations of the Philosophy of Technology, New York: Seven Bridges Press.
  • Plato, Laws, 2016, M. Schofield (ed.), T. Griffith (tr.), Cambridge: Cambridge University Press.
  • –––, Timaeus and Critias, 2008, R. Waterfield (tr.), with introduction and notes by A. Gregory, Oxford: Oxford University Press.
  • Polanyi, Michael, 1958, Personal Knowledge: Towards a Post-Critical Philosophy, London: Routledge and Kegan Paul.
  • Preston, Beth, 1998, “Why is a Wing Like a Spoon? A Pluralist Theory of Function”, The Journal of Philosophy, 95(5): 215–254. doi:10.2307/2564689
  • –––, 2003, “Of Marigold Beer: A Reply to Vermaas and Houkes”, British Journal for the Philosophy of Science, 54(4): 601–612. doi:10.1093/bjps/54.4.601
  • –––, 2012, A Philosophy of Material Culture: Action, Function, and Mind, New York and Milton Park, UK: Routledge.
  • Preston, Christopher J. (ed.), 2016, Climate Justice and Geoengineering: Ethics and Policy in the Atmospheric Anthropocene, London/New York: Rowman & Littlefield International.
  • Radder, Hans, 2009, “Why Technologies Are Inherently Normative”, in Meijers (ed.) 2009: 887–921. doi:10.1016/B978-0-444-51667-1.50037-9
  • Rawls, John, 1999, A Theory of Justice, revised edition, Cambridge, MA: The Belknap Press of Harvard University Press.
  • Roeser, Sabine, 2012, “Moral Emotions as Guide to Acceptable Risk”, in Roeser et al. (eds) 2012: 819–832. doi:10.1007/978-94-007-1433-5_32
  • Roeser, Sabine, Rafaela Hillerbrand, Per Sandin and Martin Peterson (eds), 2012, Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk, Dordrecht/Heidelberg/London/New York: Springer. doi:10.1007/978-94-007-1433-5
  • Ryle, Gilbert, 1949, The Concept of Mind, London: Hutchinson.
  • Santoni de Sio, Filippo, and Giulio Mecacci, 2021, “Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them”, Philosophy & Technology, 34(4): 1057–1084. doi:10.1007/s13347-021-00450-x
  • Santoni de Sio, Filippo, and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI. doi:10.3389/frobt.2018.00015
  • Scharff, Robert C., and Val Dusek (eds), 2003 [2014], Philosophy of Technology: The Technological Condition, Malden, MA and Oxford: Blackwell; first edition 2003, second [revised] edition 2014.
  • Schummer, Joachim, 2001, “Aristotle on Technology and Nature”, Philosophia Naturalis, 38: 105–120.
  • Sclove, Richard E., 1995, Democracy and Technology, New York: The Guilford Press.
  • Sellars, Wilfrid, 1962, “Philosophy and the Scientific Image of Man”, in Frontiers of Science and Philosophy, edited by R. Colodny, Pittsburgh: University of Pittsburgh Press, pp. 35–78.
  • Sharkey, Amanda, and Noel Sharkey, 2021, “We Need to Talk about Deception in Social Robotics!”, Ethics and Information Technology, 23(3): 309–316. doi:10.1007/s10676-020-09573-9
  • Sherlock, Richard, and John D. Morrey (eds), 2002, Ethical Issues in Biotechnology, Lanham, MD: Rowman and Littlefield.
  • Shrader-Frechette, Kristin S., 1985, Risk Analysis and Scientific Method: Methodological and Ethical Problems with Evaluating Societal Hazards, Dordrecht and Boston: D. Reidel.
  • –––, 1991, Risk and Rationality: Philosophical Foundations for Populist Reform, Berkeley: University of California Press.
  • Simon, Herbert A., 1957, Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting, New York: John Wiley.
  • –––, 1969, The Sciences of the Artificial, Cambridge, MA and London: MIT Press.
  • –––, 1982, Models of Bounded Rationality, Cambridge, MA and London: MIT Press.
  • Skolimowski, Henryk, 1966, “The Structure of Thinking in Technology”, Technology and Culture, 7(3): 371–383. doi:10.2307/3101935
  • Snapper, John W., 1985, “Responsibility for Computer-Based Errors”, Metaphilosophy, 16(4): 289–295. doi:10.1111/j.1467-9973.1985.tb00175.x
  • Soavi, Marzia, 2009, “Realism and Artifact Kinds”, in Functions in Biological and Artificial Worlds: Comparative Philosophical Perspectives, edited by Ulrich Krohs and Peter Kroes, Cambridge, MA: MIT Press, pp. 185–202. doi:10.7551/mitpress/9780262113212.003.0011
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy, 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • Suh, Nam Pyo, 2001, Axiomatic Design: Advances and Applications, Oxford and New York: Oxford University Press.
  • Susskind, Jamie, 2022, The Digital Republic: On Freedom and Democracy in the 21st Century, London: Bloomsbury.
  • Swierstra, Tsjalling, and Jaap Jelsma, 2006, “Responsibility Without Moralism in Technoscientific Design Practice”, Science, Technology & Human Values, 31(1): 309–332. doi:10.1177/0162243905285844
  • Swierstra, Tsjalling, and Hedwig te Molder, 2012, “Risk and Soft Impacts”, in Roeser et al. (eds) 2012: 1049–1066. doi:10.1007/978-94-007-1433-5_42
  • Taebi, Behnam, 2021, Ethics and Engineering: An Introduction (Cambridge Applied Ethics series), Cambridge and New York: Cambridge University Press.
  • Taebi, Behnam, and Sabine Roeser (eds), 2015, The Ethics of Nuclear Energy: Risk, Justice, and Democracy in the Post-Fukushima Era, Cambridge: Cambridge University Press. doi:10.1017/CBO9781107294905
  • Tavani, Herman T., 2002, “The Uniqueness Debate in Computer Ethics: What Exactly is at Issue, and Why Does it Matter?”, Ethics and Information Technology, 4(1): 37–54. doi:10.1023/A:1015283808882
  • Thomasson, Amie L., 2003, “Realism and Human Kinds”, Philosophy and Phenomenological Research, 67(3): 580–609. doi:10.1111/j.1933-1592.2003.tb00309.x
  • –––, 2007, “Artifacts and Human Concepts”, in Creations of the Mind: Essays on Artifacts and Their Representation, edited by Eric Margolis and Stephen Laurence, Oxford: Oxford University Press, pp. 52–73.
  • Thompson, Dennis F., 1980, “Moral Responsibility and Public Officials: The Problem of Many Hands”, American Political Science Review, 74(4): 905–916. doi:10.2307/1954312
  • Thompson, Paul B., 2007, Food Biotechnology in Ethical Perspective, second edition, Dordrecht: Springer. doi:10.1007/1-4020-5791-1
  • Vallor, Shannon (ed.), 2022, The Oxford Handbook of Philosophy of Technology, Oxford and New York: Oxford University Press.
  • van den Hoven, Jeroen, and John Weckert (eds), 2008, Information Technology and Moral Philosophy, Cambridge and New York: Cambridge University Press.
  • van den Hoven, Jeroen, Pieter E. Vermaas and Ibo van de Poel (eds), 2015, Handbook of Ethics and Values in Technological Design: Sources, Theory, Values and Application Domains, Dordrecht: Springer. doi:10.1007/978-94-007-6994-6
  • van de Poel, Ibo, 2009, “Values in Engineering Design”, in Meijers (ed.) 2009: 973–1006. doi:10.1016/B978-0-444-51667-1.50040-9
  • –––, 2016, “An Ethical Framework for Evaluating Experimental Technology”, Science and Engineering Ethics, 22(3): 667–686. doi:10.1007/s11948-015-9724-3
  • van de Poel, Ibo, and Lambèr Royakkers, 2011, Ethics, Technology and Engineering, Oxford: Wiley-Blackwell.
  • van de Poel, Ibo, Lambèr Royakkers and Sjoerd D. Zwart, 2015, Moral Responsibility and the Problem of Many Hands, London: Routledge.
  • van der Pot, Johan Hendrik Jacob, 1985 [1994/2004], Die Bewertung des technischen Fortschritts: eine systematische Übersicht der Theorien, 2 volumes, Assen/Maastricht: Van Gorcum. Translated as Steward or Sorcerer’s Apprentice? The Evaluation of Technical Progress: A Systematic Overview of Theories and Opinions, by Chris Turner, 2 volumes, Delft: Eburon, 1994; second edition, 2004, under the title Encyclopedia of Technological Progress: A Systematic Overview of Theories and Opinions.
  • van Wynsberghe, Aimee, and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics, 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Verbeek, Peter-Paul, 2000 [2005], De daadkracht der dingen: over techniek, filosofie en vormgeving, Amsterdam: Boom. Translated as What Things Do: Philosophical Reflections on Technology, Agency, and Design, by Robert P. Crease, University Park, PA: Penn State University Press, 2005.
  • –––, 2011, Moralizing Technology: Understanding and Designing the Morality of Things, Chicago and London: The University of Chicago Press.
  • Vermaas, Pieter E., and Wybo Houkes, 2003, “Ascribing Functions to Technical Artefacts: A Challenge to Etiological Accounts of Functions”, British Journal for the Philosophy of Science, 54(2): 261–289. doi:10.1093/bjps/54.2.261
  • Vincenti, Walter G., 1990, What Engineers Know and How They Know It: Analytical Studies from Aeronautical History, Baltimore, MD and London: Johns Hopkins University Press.
  • Vitruvius, De architectura, translated as The Ten Books on Architecture, by Morris H. Morgan, Cambridge, MA: Harvard University Press, 1914. [Morgan’s 1914 translation of Vitruvius available online]
  • Volti, Rudi, 2009, Society and Technological Change, sixth edition, New York: Worth Publishers.
  • Von Wright, Georg Henrik, 1963, Norm and Action: A Logical Enquiry, London: Routledge and Kegan Paul.
  • Wallach, Wendell, and Colin Allen, 2009, Moral Machines: Teaching Robots Right from Wrong, Oxford and New York: Oxford University Press. doi:10.1093/acprof:oso/9780195374049.001.0001
  • Weckert, John, 2007, Computer Ethics, Aldershot and Burlington, VT: Ashgate.
  • Wiggins, David, 1980, Sameness and Substance, Oxford: Blackwell.
  • Winner, Langdon, 1977, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, Cambridge, MA and London: MIT Press.
  • –––, 1980, “Do Artifacts Have Politics?”, Daedalus, 109(1): 121–136.
  • –––, 1983, “Techné and Politeia: The Technical Constitution of Society”, in Philosophy and Technology, edited by Paul T. Durbin and Friedrich Rapp, Dordrecht/Boston/Lancaster: D. Reidel, pp. 97–111. doi:10.1007/978-94-009-7124-0_7
  • Zandvoort, H., 2000, “Codes of Conduct, the Law, and Technological Design and Development”, in The Empirical Turn in the Philosophy of Technology, edited by Peter Kroes and Anthonie Meijers, Amsterdam: JAI/Elsevier, pp. 193–205.
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism, New York: PublicAffairs.
  • Zwart, Sjoerd, Maarten Franssen and Peter Kroes, 2018, “Practical Inference—A Formal Approach”, in The Future of Engineering: Philosophical Foundations, Ethical Problems and Application Cases, edited by Albrecht Fritzsche and Sascha Julian Oks, Cham: Springer, pp. 33–52. doi:10.1007/978-3-319-91029-1_3
  • Zwart, Sjoerd, Ibo van de Poel, Harald van Mil and Michiel Brumsen, 2006, “A Network Approach for Distinguishing Ethical Issues in Research and Development”, Science and Engineering Ethics, 12(4): 663–684. doi:10.1007/s11948-006-0063-2

Encyclopedias

  • Encyclopedia of Science, Technology, and Ethics, 4 volumes, Carl Mitcham (ed.), Macmillan, 2005.
  • Encyclopedia of Applied Ethics, second edition, 4 volumes, Ruth Chadwick (editor-in-chief), Elsevier, 2012.

Copyright © 2023 by
Maarten Franssen <m.p.m.franssen@tudelft.nl>
Gert-Jan Lokhorst
Ibo van de Poel <I.R.vandepoel@tudelft.nl>
