Stanford Encyclopedia of Philosophy

Computing and Moral Responsibility

First published Wed Jul 18, 2012; substantive revision Thu Feb 2, 2023

Traditionally philosophical discussions on moral responsibility have focused on the human components of moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today's increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility (Jonas 1984; Doorn & van de Poel 2012).[1] As we interact with and through these artifacts, they affect the decisions that we make and how we make them (Latour 1992, Verbeek 2021). They persuade, facilitate and enable particular human cognitive processes, actions or attitudes, while constraining, discouraging and inhibiting others. For instance, internet search engines prioritize and present information in a particular order, thereby influencing what internet users get to see. As Verbeek points out, such technological artifacts are "active mediators" that "actively co-shape people's being in the world: their perception and actions, experience and existence" (2006, p. 364). As active mediators, they are a key part of human action and as a result they challenge conventional notions of moral responsibility that do not account for the active role of technology (Jonas 1984; Johnson 2001; Swierstra and Waelbers 2012).

Computing presents a particular case for understanding the role of technology in moral responsibility. As computer technologies have become a more integral part of daily activities, automate more decision-making processes and continue to transform the way people communicate and relate to each other, they have further complicated the already problematic task of attributing moral responsibility. The growing pervasiveness of computer technologies in everyday life, the growing complexities of these technologies and the new possibilities that they provide raise new kinds of questions: who is responsible for the information published on the Internet? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomously, can or should humans still be held responsible for the behavior of these technologies?

This entry will first look at the challenges that computing poses to conventional notions of moral responsibility. The discussion will then review two different ways in which various authors have addressed these challenges: 1) by reconsidering the idea of moral agency and 2) by rethinking the concept of moral responsibility itself.

1. Challenges to moral responsibility

Moral responsibility is about human action and its intentions and consequences (Fisher 1999, Eshleman 2016, Talbert 2022). Generally speaking a person or a group of people is morally responsible when their voluntary actions have morally significant outcomes that would make it appropriate to blame or praise them. Thus, we may consider it a person's moral responsibility to jump in the water and try to rescue another person, when she sees that person drowning. If she manages to pull the person from the water we are likely to praise her, whereas if she refuses to help we may blame her. Ascribing moral responsibility establishes a link between a person or a group of people and someone or something affected by the actions of this person or group. The person or group that performs the action and causes something to happen is often referred to as the agent. The person, group or thing that is affected by the action is referred to as the patient. Establishing a link in terms of moral responsibility between the agent and the patient can be done both retrospectively as well as prospectively. That is, sometimes ascriptions of responsibility involve giving an account of who was at fault for an accident and who should be punished. It can also be about prospectively determining the obligations and duties a person has to fulfill in the future and what she ought to do.

However, the circumstances under which it is appropriate to ascribe moral responsibility are not always clear. On the one hand the concept has varying meanings and debates continue on what sets moral responsibility apart from other kinds of responsibility (Hart 1968, Talbert 2022, Tigard 2021a). The concept is intertwined and sometimes overlaps with notions of accountability, liability, blameworthiness, role-responsibility and causality. Opinions also differ on which conditions warrant the attribution of moral responsibility; whether it requires an agent with free will or not and whether humans are the only entities to which moral responsibility can be attributed (see the entry on moral responsibility).

On the other hand, it can be difficult to establish a direct link between the agent and the patient because of the complexity involved in human activity, in particular in today's technological society. Individuals and institutions generally act with and in sociotechnical systems in which tasks are distributed among human and technological components, which mutually affect each other in different ways depending on the context (Bijker, Hughes and Pinch 1987, Felt et al. 2016). Increasingly complex technologies can exacerbate the difficulty of identifying who or what is 'responsible'. When something goes wrong, a retrospective account of what happened is expected and the more complex the system, the more challenging is the task of ascribing responsibility (Johnson and Powers 2005). Indeed, Matthias argues that there is a growing 'responsibility gap': the more complex computer technologies become and the less human beings can directly control or intervene in the behavior of these technologies, the less we can reasonably hold human beings responsible for these technologies (Matthias 2004).

The increasing pervasiveness of computer technologies poses various challenges to figuring out what moral responsibility entails and how it should be properly ascribed. To explain how computing complicates the ascription of responsibility we have to consider the conditions under which it makes sense to hold someone responsible. Despite the ongoing philosophical debates on the issue, most analyses of moral responsibility share at least the following three conditions (Eshleman 2016; Jonas 1984):

  1. There should be a causal connection between the person and the outcome of actions. A person is usually only held responsible if they had some control over the outcome of events.
  2. The subject has to have knowledge of and be able to consider the possible consequences of her actions. We tend to excuse someone from blame if they could not have known that their actions would lead to a harmful event.
  3. The subject has to be able to freely choose to act in a certain way. That is, it does not make sense to hold someone responsible for a harmful event if her actions were completely determined by outside forces.

A closer look at these three conditions shows that computing can complicate the applicability of each of these conditions.

1.1 Causal contribution

In order for a person to be held morally responsible for a particular event, she has to be able to exert some kind of influence on that event. It does not make sense to blame someone for an accident if she could not have avoided it by acting differently or if she had no control over the events leading up to the accident.

However, computer technologies can obscure the causal connections between a person's actions and the eventual consequences. Tracing the sequence of events that led to a computer-related catastrophic incident, such as a plane crash, usually leads in many directions, as such incidents are seldom the result of a single error or mishap. Technological accidents are commonly the product of an accumulation of mistakes, misunderstanding or negligent behavior of various individuals involved in the development, use and maintenance of computer systems, including designers, engineers, technicians, regulators, managers, users, manufacturers, sellers, resellers and even policy makers.

The involvement of multiple actors in the development and deployment of technologies gives rise to what is known as the problem of 'many hands': it is difficult to determine who was responsible for what when multiple individuals contributed to the outcome of events (Jonas 1984; Friedman 1990; Nissenbaum 1994; van de Poel et al. 2015). One classic example of the problem of many hands in computing is the case of the malfunctioning radiation treatment machine Therac-25 (Leveson and Turner 1993; Leveson 1995). This computer-controlled machine was designed for the radiation treatment of cancer patients as well as for X-rays. During a two-year period in the 1980s the machine massively overdosed six patients, contributing to the eventual death of three of them. These incidents were the result of the combination of a number of factors, including software errors, inadequate testing and quality assurance, exaggerated claims about the reliability, bad interface design, overconfidence in software design, and inadequate investigation or follow-up on accident reports. Nevertheless, in their analysis of the events Leveson and Turner conclude that it is hard to place the blame on a single person. The actions or negligence of all those involved might not have proven fatal were it not for the other contributing events. A more recent example of the problem of many hands is the crash of two 737 MAX passenger aircraft in late 2018 and early 2019. These crashes led to multiple investigations that highlighted various factors that contributed to the tragic outcome, including design and human errors as well as organizational culture and lack of training (Heckert et al. 2020).
This is not to say that there is no moral responsibility in these cases (Nissenbaum 1994; Gotterbarn 2001; Coeckelbergh 2012; Floridi 2013; Santoni de Sio et al. 2021), as many actors could have acted differently, but it makes it more difficult to retrospectively identify the appropriate person that can be called upon to answer and make amends for the outcome.

Adding to the problem of many hands is the temporal and physical distance that computing creates between a person and the consequences of her actions, as this distance can blur the causal connection between actions and events (Friedman 1990). Computational technologies extend the reach of human activity through time and space. With the help of social media and communication technologies people can interact with others on the other side of the world. Satellites and advanced communication technologies allow pilots to fly a remotely controlled drone from their ground-control station halfway across the world. These technologies enable people to act over greater distances, but this remoteness can dissociate the original actions from their eventual consequences (Waelbers 2009; Polder-Verkiel 2012; Coeckelbergh 2013). When a person uses a technological artifact to perform an action thousands of miles away, that person might not know the people that will be affected and she might not directly, or only partially, experience the consequences. This can reduce the sense of responsibility the person feels and it may interfere with her ability to fully comprehend the significance of her actions. Similarly, the designers of an automated decision-making system determine ahead of time how decisions should be made, but they will rarely see how these decisions will impact the individuals they affect. Their original actions in programming the system may have effects on people years later.

The problem of many hands and the distancing effects of the use of technology illustrate the mediating role of technological artifacts in the confusion about moral responsibility. Technological artifacts bring together the various different intentions of their creators and users. People create and deploy technologies with the objective of producing some effect in the world. Software developers develop an automated content moderation tool, often at the request of their managers or clients, with the aim of shielding particular content from users and influencing what these users can or cannot read. The software has inscribed in its design the various intentions of the developers, managers and clients; it is poised to behave, given a particular input, according to their ideas about which information is appropriate (Friedman 1997, Gorwa, Binns, & Katzenbach 2020). Moral responsibility can therefore not be attributed without looking at the causal efficacy of these artifacts and how they constrain and enable particular human activities.

However, although technological artifacts may influence and shape human action, they do not determine it. They are not isolated instruments that mean and work the same regardless of why, by whom, and in what context they are used; they have interpretive flexibility (Bijker et al. 1987) or multistability (Ihde 1990).[2] Although the design of the technology provides a set of conditions for action, the form and meaning of these actions is the result of how human agents choose to use these technologies in particular contexts. People often use technologies in ways unforeseen by their designers. This interpretive flexibility makes it difficult for designers to anticipate all the possible outcomes of the use of their technologies. The mediating role of computer technologies complicates the effort of retrospectively tracing back the causal connection between actions and outcomes, but it also complicates forward-looking responsibility.

1.2 Considering the consequences

As computer technologies shape how people perceive and experience the world, they affect the second condition for attributing moral responsibility. In order to make appropriate decisions a person has to be able to consider and deliberate about the consequences of their actions. They have to be aware of the possible risks or harms that their actions might cause. It is unfair to hold someone responsible for something if they could not have reasonably known that their actions might lead to harm.

On the one hand, computer technologies can help users to think through what their actions or choices may lead to. They help the user to capture, store, organize and analyze data and information (Zuboff 1982). For example, one often-named advantage of remote-controlled robots used by the armed forces or rescue workers is that they enable their operators to acquire information that would not be available without them. They allow their operators to look "beyond the next hill" or "around the next corner" and they can thus help operators to reflect on what the consequences of particular tactical decisions might be (US Department of Defense 2009). Similarly, data analysis tools can find patterns in large volumes of data that human data analysts cannot manually process (Boyd and Crawford 2012).

On the other hand the use of computers can constrain the ability of users to understand or consider the outcomes of their actions. These complex technologies, which are never fully free from errors, increasingly hide the automated processes behind the interface (Van den Hoven 2002). An example that illustrates how computer technologies can limit understanding of the outcomes is the controversial risk assessment tools used by judges in several states in the U.S. for parole decisions and sentencing. In 2016 a civil society organization found, based on an analysis of the risk scores of 7000 defendants produced by one particular algorithm, that the scores poorly reflected the actual recidivism rate and seemed to have a racial bias (Angwin et al. 2016). Regardless of whether its findings were correct or not, what is particularly relevant here is that the investigation also showed that judges did not have a full understanding of how the probabilities were calculated, in part because the algorithm was proprietary. The judges were basing their sentencing on the suggestion of an algorithm that they did not fully understand. This is the case for most computer technologies today. Users only see part of the many computations that a computer performs and are for the most part unaware of how it performs them; they usually only have a partial understanding of the assumptions, models and theories on which the information on their computer screen is based. The increasing complexity of computer systems and their reliance on opaque machine learning algorithms makes it even more difficult to understand what is happening behind the interface (Pasquale 2015, Diakopoulos 2020).

The opacity of many computer systems can get in the way of assessing the validity and relevance of the information and can prevent a user from making appropriate decisions. People have a tendency to either rely too much or not enough on the accuracy of automated systems (Cummings 2004; Parasuraman & Riley 1997). This tendency is called automation bias. A person's ability to act responsibly, for example, can suffer when she distrusts the automation as a result of a high rate of false alarms. In the Therac-25 case, one of the machine's operators testified that she had become used to the many cryptic error messages the machine gave, most of which did not involve patient safety (Leveson and Turner 1993, p. 24). She tended to ignore them and therefore failed to notice when the machine was set to overdose a patient. Too much reliance on automated systems can have equally disastrous consequences. In 1988 the missile cruiser U.S.S. Vincennes shot down an Iranian civilian jet airliner, killing all 290 passengers onboard, after it mistakenly identified the airliner as an attacking military aircraft (Gray 1997). The cruiser was equipped with an Aegis defensive system that could automatically track and target incoming missiles and enemy aircraft. Analyses of the events leading up to the incident showed that overconfidence in the abilities of the Aegis system prevented others from intervening when they could have. Two other warships nearby had correctly identified the aircraft as civilian. Yet, they did not dispute the Vincennes' identification of the aircraft as a military aircraft. In a later explanation Lt. Richard Thomas of one of the nearby ships stated, "We called her Robocruiser… she always seemed to have a picture… She always seemed to be telling everybody to get on or off the link as though her picture was better" (as quoted in Gray 1997, p. 34). The captains of both ships thought that the sophisticated Aegis system provided the crew of the Vincennes with information they did not have.

Considering the possible consequences of one's actions is further complicated as computer technologies make it possible for humans to do things that they could not do before. Several decades ago, the philosopher Ladd pointed out, "[C]omputer technology has created new modes of conduct and new social institutions, new vices and new virtues, new ways of helping and new ways of abusing other people" (Ladd 1989, p. 210–11). Computer technologies of today have had a similar effect. The social or legal conventions that govern what we can do with these technologies take some time to emerge and the initial absence of these conventions contributes to confusion about responsibilities (Taddeo and Floridi 2015). For example, the ability for users to upload and share text, videos and images publicly on the Internet raised a whole set of questions about who is responsible for the content of the uploaded material. Such questions were at the heart of the debate about the conviction of three Google executives in Italy for a violation of the data protection act (Sartor and Viola de Azevedo Cunha 2010). The case concerned a video on YouTube of four students assaulting a disabled person. In response to a request by the Italian Postal Police, Google, as owner of YouTube, took the video down two months after the students uploaded it. The judge, nonetheless, ruled that Google was criminally liable for processing the video without taking adequate precautionary measures to avoid privacy violations. The judge also held Google liable for failing to adequately inform the students, who uploaded the videos, of their data protection obligations (p. 367). In the ensuing debate about the verdict, those critical of the ruling insisted that it threatened the freedom of expression on the Internet and set a dangerous precedent that can be used by authoritarian regimes to justify web censorship (see also Singel 2010).
Moreover, they claimed that platform providers could not be held responsible for the actions of their users, as they could not realistically approve every upload and it was not their job to censor. Yet, others instead argued that it would be immoral for Google to be exempt from liability for the damage that others suffered due to Google's profitable commercial activity. Cases like this one show that in the confusion about the possibilities and limitations of new technologies it can be difficult to determine one's moral obligations to others.

The lack of experience with new technological innovations can also affect what counts as negligent use of the technology. In order to operate a new computer system, users typically have to go through a process of training and familiarization with the system. It requires skill and experience to understand and imagine how the system will behave (Coeckelbergh and Wackers 2007). Friedman describes the case of a programmer who invented and was experimenting with a 'computer worm', a piece of code that can replicate itself. At the time this was a relatively new computational entity (1990). The programmer released the worm on the Internet, but the experiment quickly got out of control when the code replicated much faster than he had expected (see also Denning 1989). Today we would not find this a satisfactory excuse, familiar as we have become with computer worms, viruses and other forms of malware. However, Friedman poses the question of whether the programmer really acted in a negligent way if the consequences were truly unanticipated. Does the computer community's lack of experience with a particular type of computational entity influence what we judge to be negligent behavior?

1.3 Free to act

The freedom to act is probably the most important condition for attributing moral responsibility and also one of the most contested (Talbert 2022). We tend to excuse people from moral blame if they had no other choice but to act in the way that they did. We typically do not hold people responsible if they were coerced or forced to take particular actions. In moral philosophy, the freedom to act can also mean that a person has free will or autonomy (Fisher 1999). Someone can be held morally responsible because she acts on the basis of her own authentic thoughts and motivations and has the capacity to control her behavior (Johnson 2001). Note that this conception of autonomy is different from the way the term 'autonomy' is often used in computer science, where it tends to refer to the ability of a robot or computer system to independently perform (i.e. without the 'human in the loop') complex tasks in unpredictable environments for extended periods of time (Noorman 2009, Zerilli et al. 2021).

Nevertheless, there is little consensus on what capacities human beings have, that other entities do not have, which enables them to act freely (see the entries on free will, autonomy in moral and political philosophy, personal autonomy and compatibilism). Does it require rationality, emotion, intentionality or cognition? Indeed, one important debate in moral philosophy centers on the question of whether human beings really have autonomy or free will. And, if not, can moral responsibility still be attributed (Talbert 2022)?

In practice, attributing autonomy or free will to humans on the basis of the fulfillment of a set of conditions turns out to be a less than straightforward endeavor. We attribute autonomy to persons in degrees. An adult is generally considered to be more autonomous than a child. As individuals in a society our autonomy is thought to vary because we are manipulated, controlled or influenced by forces outside of ourselves, such as by our parents or through peer pressure. Moreover, internal physical or psychological influences, such as addictions or mental problems, are perceived as further constraining the autonomy of a person.

Computing, like other technologies, adds an additional layer of complexity to determining whether someone is free to act, as it affects the choices that humans have and how they make them. One of the biggest application areas of computing is the automation of decision-making processes and control. Automation can help to centralize and increase control over multiple processes for those in charge, while it limits the discretionary power of human operators on the lower end of the decision-making chain. An example is provided by the automation of decision-making in public administration (Bovens and Zouridis 2002). Large public sector organizations have over the last decades progressively standardized and formalized their production processes. In a number of countries, the process of issuing decisions about (student) loans, social benefits, speeding tickets or tax returns is carried out to a significant extent by computer systems. This has reduced the scope of the administrative discretion that many officials, such as tax inspectors, welfare workers, and policy officers, have in deciding how to apply formal policy rules in individual cases (Eubanks 2018). In some cases, citizens no longer interact with officials that have significant responsibility in applying their knowledge of the rules and regulations to decide what is appropriate (e.g., would it be better to let someone off with a warning or is a speeding ticket required?). Rather, decisions are pre-programmed in the algorithms that apply the same measures and rules regardless of the person or the context (e.g., a speeding camera does not care about the context or personal circumstances), and the human beings that citizens do interact with have little opportunity to interrogate or change decisions (Dignum 2020).
Responsibility for decisions made, in these cases, has moved from 'street-level bureaucrats' to the 'system-level bureaucrats', such as managers and computer experts, that decide on how to convert policy and legal frameworks into algorithms and decision-trees (Bovens and Zouridis 2002).

The automation of bureaucratic processes illustrates that some computer technologies are intentionally designed to limit the discretion of some human beings. An example is the anti-alcohol lock that is already in use in a number of countries, including the USA, Canada, Sweden and the UK. It requires the driver to pass a breathing test before she can start the car. This technology forces a particular kind of action and leaves the driver with hardly any choice. Other technologies might have a more subtle way of steering behavior, by either persuading or nudging users (Verbeek 2016). For example, the onboard computer devices in some cars that show, in real-time, information about fuel consumption can encourage the driver to optimize fuel efficiency. Such technologies are designed with the explicit aim of making humans behave responsibly by limiting their options or persuading them to choose in a certain way.

Not all these technologies are designed to stimulate morally good behavior. Yeung notes that these kinds of decision-guidance techniques have become a key element of current day Big-Data analytic techniques, as used on social media and in advertising. She argues that these 'hyper nudges' are extremely powerful techniques to manipulate the behaviour of internet users and users of Internet of Things (IoT) devices due to their networked, continuously updated, dynamic and pervasive nature. As they gather data from a wide range of sources about users to continuously make predictions in real-time about the habits and preferences of users, they can target advertisement, information and price incentives to gently and unobtrusively nudge these users in directions preferred by those that control the algorithms (Yeung 2017). When these nudges are hardly noticeable and have a powerful effect, one can wonder how autonomous the decision making of these users is. This is the case, for example, with dark patterns, which use interfaces on websites or apps that are designed to trick users into doing things that they did not intend to do, such as purchasing additional expensive insurance (Ravenscraft 2020).

Verbeek notes that critics of the idea of intentionally developing technology to enforce morally desirable behavior have argued that it jettisons the democratic principles of our society and threatens human dignity. They argue that it deprives humans of their ability and rights to make deliberate decisions and to act voluntarily. In addition, critics have claimed that if humans are not acting freely, their actions cannot be considered moral. These objections can be countered, as Verbeek argues, by pointing to the rules, norms, regulations and a host of technological artifacts that already set conditions for actions that humans are able or allowed to perform. Moreover, he notes, technological artifacts, as active mediators, affect the actions and experiences of humans, but they do not determine them. Some people have creatively circumvented the strict morality of earlier versions of the alcohol lock by having an air pump in the car (Vidal 2004). Nevertheless, these critiques underline the issues at stake in automating decision-making processes: computing can set constraints on the freedom a person has to act and thus affects the extent to which she can be held morally responsible.

The challenges that computer technologies present with regard to the conditions for ascribing responsibility indicate the limitations of conventional ethical frameworks in dealing with the question of moral responsibility. Traditional models of moral responsibility seem to be developed for the kinds of actions performed by an individual that have directly visible consequences (Waelbers 2009, Coeckelbergh 2009). However, in today's society attributions of responsibility to an individual or a group of individuals are intertwined with the artifacts with which they interact as well as with intentions and actions of other human agents that these artifacts mediate. Acting with computer technologies may require a different kind of analysis of who can be held responsible and what it means to be morally responsible. Below I discuss two ways in which scholars have taken up this challenge: 1) reconsidering what it means to be a moral agent and 2) reconsidering the concept of moral responsibility.

2. Can computers be moral agents?

Moral responsibility is generally attributed to moral agents and, at least in Western philosophical traditions, moral agency has been a concept exclusively reserved for human beings (Johnson 2001; Doorn and van de Poel 2012). Unlike animals or natural disasters, human beings in these traditions can be the originators of morally significant actions, as they can freely choose to act in one way rather than another way and deliberate about the consequences of this choice. And, although some people are inclined to anthropomorphize computers and treat them as if they were moral agents (Reeves and Nass 1996; Nass and Moon 2000; Rosenthal-von der Pütten 2013), most philosophers agree that current computer technologies should not be called moral agents, if that would mean that they could be held morally responsible. However, the limitations of traditional ethical vocabularies in thinking about the moral dimensions of computing have led some authors to rethink the concept of moral agency. It should be noted that some authors have also argued for a reconsideration of Western philosophical anthropocentric conceptions of moral patiency, in particular in regard to the question concerning the moral standing of artificial agents or robots (Floridi 2016; Gunkel 2020; Coeckelbergh 2020). The following will nevertheless focus on moral agency as these reflections on moral patiency tend to not address the challenges to moral responsibility.

2.1 Computers as morally responsible agents

The increasing complexity of computer technology and the advances in Artificial Intelligence (AI) challenge the idea that human beings are the only entities to which moral responsibility can or should be ascribed (Bechtel 1985; Kroes and Verbeek 2014). Dennett, for example, suggested that holding a computer morally responsible is possible if it concerned a higher-order intentional computer system (1997). An intentional system, according to him, is one that can be predicted and explained by attributing beliefs and desires to it, as well as rationality. In other words, its behavior can be described by assuming that the system has mental states and that it acts according to what it thinks it ought to do, given its beliefs and desires. At the time, Dennett noted that many computers were already intentional systems, but they lacked the higher-order ability to reflect on and reason about their mental states. They did not have beliefs about their beliefs or thoughts about their desires. Dennett suggested that the fictional HAL 9000 that featured in the movie 2001: A Space Odyssey would qualify as a higher-order intentional system that can be held morally responsible. Although advances in AI might not lead to HAL, he did see the development of computer systems with higher-order intentionality as a real possibility.

Sullins argues in line with Dennett that moral agency is not restricted to human beings (2006). He proposes that computer systems or, more specifically, robots are moral agents when they have a significant level of autonomy and they can be regarded, at an appropriate level of abstraction, as exhibiting intentional behavior. A robot, according to Sullins, would be significantly autonomous if it was not under the direct control of other agents in performing its tasks. Note that Sullins interprets autonomy in a narrow sense in comparison to the conception of autonomy in moral philosophy as a property of human beings. He adds as a third condition that a robot also has to be in a position of responsibility to be a moral agent. That is, the robot performs some social role that carries with it some responsibilities, and in performing this role the robot appears to have ‘beliefs’ about and an understanding of its duties towards other moral agents (p. 28). To illustrate what kind of capabilities are required for “full moral agency”, he draws an analogy with a human nurse. He argues that if a robot was autonomous enough to carry out the same duties as a human nurse and had an understanding of its role and responsibilities in the health care system, then it would be a “full moral agent”. Sullins maintains that it will be some time before machines with these kinds of capabilities will be available, but “even the modest robots of today can be seen to be moral agents of a sort under certain, but not all, levels of abstraction and are deserving of moral consideration” (p. 29).

Echoing objections to the early project of (strong) AI (Sack 1997),[3] critics of analyses such as those presented by Dennett and Sullins have objected to the idea that computer technologies can have the capacities that make human beings moral agents, such as mental states, intentionality, common sense, emotion or empathy (Johnson 2006; Kuflik 1999; Nyholm 2018). They, for instance, point out that it makes no sense to treat computer systems as moral agents that can be held responsible, for they cannot suffer and thus cannot be punished (Sparrow 2007; Asaro 2011). Véliz argues that computers may act like moral agents, but they lack sentience and are therefore ‘moral zombies’ (2021). Hakli and Mäkelä argue that computers cannot have the kind of autonomy required for moral agency, because their capacities are a result of engineering and programming, which undermines the autonomy of robots and disqualifies them as moral agents (2019). Or they argue, as Stahl does, that computers are not capable of moral reasoning, because they do not have the capacity to understand the meaning of the information that they process (2006). In order to comprehend the meaning of moral statements an agent has to be part of the form of life in which the statement is meaningful; it has to be able to take part in moral discourses. Similar to the debates about AI, critics continue to draw a distinction between humans and computers by noting various capacities that computers do not, and cannot, have that would justify the attribution of moral agency.

Some other critics do not contest that human beings might be able to build computer systems with the required capacities for moral agency, but question whether it is ethically appropriate to do so. Bryson, for instance, argues that even if it were possible to create artifacts with such capacities – and she assumes this might very well be possible – human beings have a choice in the matter (2018). She defines a moral agent as “something deemed responsible by a society for its actions” (p. 16). Society can thus at one point deem it appropriate to view certain computer systems as moral agents, for instance because it would provide a short cut to figuring out how responsibility should be distributed. However, she argues that there is no necessary or predetermined position for these technologies in our society. This is because, she notes, computer technologies and ethical frameworks are “artefacts of our societies, and therefore subject to human control” (p. 15). We can choose what capacities we equip these artefacts with, and she sees no coherent reason for creating artificial agents that human beings have to compete with in terms of moral agency or patiency.

2.2 Creating autonomous moral agents

In the absence of any definitive arguments for or against the possibility of future computer systems being morally responsible, researchers within the field of machine ethics aim to further develop the discussion by focusing instead on creating computer systems that can behave as if they are moral agents (Moor 2006; Cervantes et al. 2019; Zoshak and Dew 2021). Research within this field has been concerned with the design and development of computer systems that can independently determine what the right thing to do would be in a given situation. According to Allen and Wallach, such autonomous moral agents (AMAs) would have to be capable of reasoning about the moral and social significance of their behavior and use their assessment of the effects their behavior has on sentient beings to make appropriate choices (2012; see also Wallach and Allen 2009 and Allen et al. 2000). Such abilities are needed, they argue, because computers are becoming more and more complex and capable of operating without direct human control in different contexts and environments. Progressively autonomous technologies already in development, such as military robots, driverless cars or trains and service robots in the home and for healthcare, will be involved in moral situations that directly affect the safety and well-being of humans. An autonomous bomb disposal robot might in the future be faced with the decision of which bomb it should defuse first, in order to minimize casualties. Similarly, a moral decision that a driverless car might have to make is whether to brake for a crossing dog or to avoid the risk of causing injury to the driver behind it. Such decisions require judgment. Currently, operators make such moral decisions, or the decision is already inscribed in the design of the computer system. Machine ethics, Wallach and Allen argue, goes one step beyond making engineers aware of the values they build into the design of their products, as it seeks to build ethical decision-making into the machines.

To further specify what it means for computers to make ethical decisions or to put ‘ethics in the machine’, Moor distinguished between three different kinds of ethical agents: implicit ethical agents, explicit ethical agents, and full ethical agents (2006). The first kind of agent is a computer that has the ethics of its developers inscribed in its design. These agents are constructed to adhere to the norms and values of the contexts in which they are developed or will be used. Thus, ATMs are designed to have a high level of security to prevent unauthorized people from drawing money from accounts. An explicit ethical agent is a computer that can ‘do ethics’. In other words, it can on the basis of an ethical model determine what would be the right thing to do, given certain inputs. The ethical model can be based on ethical traditions, such as Kantian, Confucian, Ubuntu, or utilitarian ethics—depending on the preferences of its creators. These agents would ‘make ethical decisions’ on behalf of their human users (and developers). Such agents are akin to the autonomous moral agents described by Allen and Wallach. Finally, Moor defined full ethical agents as entities that can make ethical judgments and can justify them, much like human beings can. He claimed that although at the time of his writing there were no computer technologies that could be called fully ethical, it is an empirical question whether or not it would be possible in the future. Few, if any, philosophers today would argue that this question has been answered in the positive.
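The mechanics of an explicit ethical agent in Moor’s sense, a system that derives a choice from an ethical model given certain inputs, can be conveyed with a short sketch. The following toy example is purely illustrative and not drawn from Moor’s text: it encodes a crude utilitarian model in which hypothetical welfare effects, chosen here by the imagined designers, are summed for each candidate action.

```python
# Illustrative sketch of an 'explicit ethical agent' (hypothetical names
# and numbers): the agent scores candidate actions against a simple
# utilitarian model supplied by its designers and picks the best one.

def expected_utility(action, affected):
    """Sum the modeled welfare effects of an action on all affected parties."""
    return sum(action["effects"].get(party, 0) for party in affected)

def choose_action(actions, affected):
    """Select the candidate action with the highest modeled utility."""
    return max(actions, key=lambda a: expected_utility(a, affected))

# The driverless-car dilemma from above, reduced to two candidate actions.
actions = [
    {"name": "brake", "effects": {"dog": +10, "driver_behind": -3}},
    {"name": "continue", "effects": {"dog": -10, "driver_behind": 0}},
]
best = choose_action(actions, ["dog", "driver_behind"])
print(best["name"])  # → brake
```

A Kantian or other explicit ethical agent would substitute a different decision procedure; the point is only that the ‘ethics’ resides in a model chosen by the system’s designers, not in the machine itself.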

The effort to build AMAs raises the question of how this effort affects the ascription of moral responsibility. As human beings would design these artificial agents to behave within pre-specified formalized ethical frameworks, it is likely that responsibility will still be ascribed to these human actors and those that deploy these technologies. However, as Allen and Wallach acknowledge, the danger of exclusively focusing on equipping robots with moral decision-making abilities, rather than also looking at the sociotechnical systems in which these robots are embedded, is that it may cause further confusion about the distribution of responsibility (2012). Robots with moral decision-making capabilities may present similar challenges to ascribing responsibility as other technologies, when they introduce new complexities that further obfuscate causal connections that lead back to their creators and users.

2.3 Expanding the concept of moral agency

The prospect of increasingly autonomous and intelligent computer technologies and the growing difficulty of finding responsible human agents led Floridi and Sanders to take a different approach (2004). They propose to extend the class of moral agents to include artificial agents, while disconnecting moral agency and moral accountability from the notion of moral responsibility. They contend that “the insurmountable difficulties for the traditional and now rather outdated view that a human can be found accountable for certain kinds of software and even hardware” demand a different approach (p. 372). Instead, they suggest that artificial agents should be acknowledged as moral agents that can be held accountable, but not responsible. To illustrate, they draw a comparison between artificial agents and dogs as sources of moral actions. Dogs can be the cause of a morally charged action, like damaging property or helping to save a person’s life, as in the case of search-and-rescue dogs. We can identify them as moral agents even though we generally do not hold them morally responsible, according to Floridi and Sanders: they are the source of a moral action and can be held morally accountable by correcting or punishing them.

Just like animals, Floridi and Sanders argue, artificial agents can be seen as sources of moral actions and thus can be held morally accountable when they can be conceived of as behaving like a moral agent from an appropriate level of abstraction. The notion of levels of abstraction refers to the stance one adopts towards an entity to predict and explain its behavior. At a low level of abstraction we would explain the behavior of a system in terms of its mechanical or biological processes. At a higher level of abstraction it can help to describe the behavior of a system in terms of beliefs, desires and thoughts. If at a high enough level a computational system can effectively be described as being interactive, autonomous and adaptive, then it can be held accountable, according to Floridi and Sanders (p. 352). It thus does not require personhood or free will for an agent to be morally accountable; rather, the agent has to act as if it had intentions and was able to make choices.

The advantage of disconnecting accountability from responsibility, according to Floridi and Sanders, is that it places the focus on moral agenthood, accountability and censure, instead of on figuring out which human agents are responsible. “We are less likely to assign responsibility at any cost, forced by the necessity to identify a human moral agent. We can liberate technological development of AAs [Artificial Agents] from being bound by the standard limiting view” (p. 376). When artificial agents ‘behave badly’, they can be dealt with directly, when their autonomous behavior and complexity make it too difficult to distribute responsibility among human agents. Immoral agents can be modified or deleted. It is then possible to attribute moral accountability even when moral responsibility cannot be determined.

Critics of Floridi’s and Sanders’ view on accountability and moral agency argue that placing the focus of analysis on computational artifacts by treating them as moral agents will draw attention away from the humans that deploy and develop them. Johnson, for instance, makes the case that computer technologies remain connected to the intentionality of their creators and users (2006). She argues that although computational artifacts are a part of the moral world and should be recognized as entities that have moral relevance, they are not moral agents, for they are not intentional. They are not intentional, because they do not have mental states or a purpose that comes from the freedom to act. She emphasizes that although these artifacts are not intentional, they do have intentionality, but their intentionality is related to their functionality. They are human-made artifacts and their design and use reflect the intentions of designers and users. Human users, in turn, use their intentionality to interact with and through the software. In interacting with the artifacts they activate the inscribed intentions of the designers and developers. It is through human activity that computer technology is designed, developed, tested, installed, initiated and provided with input and instructions to perform specified tasks. Without this human activity, computers would do nothing. Attributing independent moral agency to computers, Johnson claims, disconnects them from the human behavior that creates, deploys and uses them. It turns the attention away from the forces that shape technological development and limits the possibility for intervention. For instance, it leaves open the issue of sorting out who is responsible for dealing with malfunctioning or immoral artificial agents or who should make amends for the harmful events they may cause. It postpones the question of who has to account for the conditions under which artificial agents are allowed to operate (Noorman 2009).

Yet, technologies can still be part of moral action without being moral agents. Several philosophers have stressed that moral responsibility cannot be properly understood without recognizing the active role of technology in shaping human action (Jonas 1984; Verbeek 2006; Johnson and Powers 2005; Nyholm 2018). Johnson, for instance, claims that although computers are not moral agents, the artifact designer, the artifact, and the artifact user should all be the focus of moral evaluation, as they are all at work in an action (Johnson 2006). Humans create these artifacts and inscribe in them their particular values and intentions to achieve particular effects in the world, and in turn these technological artifacts influence what human beings can and cannot do and affect how they perceive and interpret the world.

Similarly, Verbeek maintains that technological artifacts alone do not have moral agency, but moral agency is hardly ever ‘purely’ human. Moral agency generally involves a mediating artifact that shapes human behavior, often in ways not anticipated by the designer (2008). Moral decisions and actions are co-shaped by technological artifacts. He suggests that in all forms of human action there are three forms of agency at work: 1) the agency of the human performing the action; 2) the agency of the designer who helped shape the mediating role of the artifact; and 3) the artifact mediating human action. The agency of artifacts is inextricably linked to the agency of their designers and users, but it cannot be reduced to either of them. For him, then, a subject that acts or makes moral decisions is a composite of human and technological components. Moral agency is not merely located in a human being, but in a complex blend of humans and technologies.

In later papers, Floridi explores the concept of distributed moral actions (2013, 2016). He argues that some morally significant outcomes cannot be reduced to the morally significant actions of individuals. Morally neutral actions of several individuals can still result in morally significant events. Individuals might not have intended to cause harm, but nevertheless their combined actions may still result in moral harm to someone or something. In order to deal with the problem of subsequently assigning moral responsibility for such distributed moral actions, he argues that the focus of analysis should shift from the agents to the patients of moral actions. A moral action can then be evaluated in terms of the harm to the patient, regardless of the intentions of the agents involved. Assigning responsibility then focuses on whether or not an agent is causally accountable for the outcome and on adjusting their behavior to prevent harm. If the agents that are causally accountable – be they artificial or biological – are autonomous, can interact with each other and their environments and can learn from their interactions, they can be held responsible for distributed moral actions, according to Floridi (2016).
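Floridi’s point that individually neutral actions can aggregate into a morally significant outcome for a patient can be made concrete with a toy simulation. All quantities below are hypothetical and chosen only for illustration; the scenario (agents each drawing a ‘negligible’ amount from a shared pond) is not from Floridi’s text.

```python
# Hypothetical illustration of a distributed moral action: no single
# withdrawal pushes the pond below the harm threshold, yet the combined
# withdrawals do, harming the patient (the pond's fish) without any
# individual agent intending or causing the harm alone.
THRESHOLD = 10  # water level (arbitrary units) below which the fish die

pond = 14
withdrawals = [1, 2, 1, 2, 1]  # five agents, each taking a small amount

# Each action, taken on its own, would leave the pond above the threshold.
assert all(pond - w >= THRESHOLD for w in withdrawals)

# The distributed outcome of all actions together is nevertheless harmful.
pond -= sum(withdrawals)
print(pond < THRESHOLD)  # → True
```

Evaluated from the patient’s side, as Floridi proposes, the harm is real even though no agent’s individual action or intention accounts for it.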

3. Rethinking the concept of moral responsibility

In light of the noted difficulties in ascribing moral responsibility, several authors have critiqued the way in which the concept is used and interpreted in relation to computing. They claim that the traditional models or frameworks for dealing with moral responsibility fall short and propose different perspectives or interpretations to address some of the difficulties. Some of these will be discussed in this section.

3.1 Assigning responsibility

One approach is to rethink how moral responsibility is assigned (Gotterbarn 2001; Waelbers 2009). When it comes to computing practitioners, Gotterbarn identifies a potential to side-step or avoid responsibility by looking for someone else to blame. He attributes this potential to two pervasive misconceptions about responsibility. The first misconception is that computing is an ethically neutral practice. That is, according to Gotterbarn, the misplaced belief that technological artifacts and the practices of building them are ethically neutral is often used to justify a narrow technology-centered focus on the development of computer systems without taking the broader context in which these technologies operate into account. This narrow focus can have detrimental consequences. Gotterbarn gives the tragic case of a patient’s death as a result of a faulty X-ray device as an example. A programmer was given the assignment to write a program that could lower or raise the X-ray device on a pole, after an X-ray technician set the required height. The programmer focused on solving the given puzzle, but failed to take account of the circumstances in which the device would be used and the contingencies that might occur. He thus did not consider the possibility that a patient could accidentally be in the way of the device moving up and down the pole. This oversight eventually resulted in a tragic accident: a patient was crushed by the device when a technician set the device to tabletop height, not realizing that the patient was still underneath it. According to Gotterbarn, computer practitioners have a moral responsibility to consider such contingencies, even though they may not be legally required to do so. The design and use of technological artifacts is a moral activity and the choice for one particular design solution over another has real and material consequences.

The second misconception is that responsibility is only about determining blame when something goes wrong. Computer practitioners, according to Gotterbarn, have conventionally adopted a malpractice model of responsibility that focuses on determining the appropriate person to blame for harmful incidents (2001). This malpractice model leads to all sorts of excuses to shirk responsibility. In particular, the complexities that computer technologies introduce allow computer practitioners to side-step responsibility. The distance between developers and the effects of the use of the technologies they create can, for instance, be used to claim that there is no direct and immediate causal link that would tie developers to a malfunction. Developers can argue that their contribution to the chain of events was negligible, as they were part of a team or larger organization and had limited opportunity to do otherwise. The malpractice model, according to Gotterbarn, entices computer practitioners to distance themselves from accountability and blame.

The two misconceptions are based on a particular retrospective view of responsibility that places the focus on that which exempts one from blame and liability. In reference to Ladd, Gotterbarn calls this negative responsibility and distinguishes it from positive responsibility (see also Ladd 1989). Positive responsibility emphasizes “the virtue of having or being obliged to have regard for the consequences that his or her actions have on others” (Gotterbarn 2001, p. 227). Positive responsibility entails that part of the professionalism of computer experts is that they strive to minimize foreseeable undesirable events. It focuses on what ought to be done rather than on blaming or punishing others for irresponsible behavior. Gotterbarn argues that the computing professions should adopt a positive concept of responsibility, as it emphasizes the obligations and duties of computer practitioners to have regard for the consequences of their actions and to minimize the possibility of causing harm. Computer practitioners have a moral responsibility to avoid harm and to deliver a properly working product, according to him, regardless of whether they will be held accountable if things turn out differently.

The emphasis on the positive moral responsibility of computer practitioners raises the question of how far this responsibility reaches, in particular in light of systems that many hands help create and the difficulties involved in anticipating contingencies that might cause a system to malfunction (Stieb 2008; Miller 2008). To what extent can developers and manufacturers be expected to exert themselves to anticipate or prevent the consequences of the use of their technologies or possible ‘bugs’ in their code? Computer systems today are generally incomprehensible to any single programmer and it seems unlikely that complex computer systems can be completely error free. Martin argues in this respect that developers and the companies that decide to sell a computer technology into a particular context are responsible for the ethical implications of the use of these technologies in that context (2019). They are responsible for these implications because they are knowledgeable about the design decisions and are in a unique position to inscribe in the technology particular ideas (and biases) about what the technology should do and how it should do it. Thus, a company that creates and sells a risk-assessment system into the context of judicial decision-making is responsible for the ethical implications of biases resulting from its use and its opaqueness. The company willingly “takes on the obligation to understand the values of the decision to ensure the algorithms’ ethical implications is congruent with the context” (p. 10). This, however, leaves open the question of what their responsibility is outside of that context. Should manufacturers of mobile phones have anticipated that their products would be used in roadside bombs? Manufacturers and their designers and engineers cannot foresee all the possible conditions under which their products will eventually operate. Moreover, how much control should a person have to be or feel responsible for the outcome of events?
Such questions speak to what Santoni de Sio and Mecacci (2020) call the “active responsibility gap”. They describe active responsibility in much the same way as positive responsibility, in that it relates to the moral obligations of persons to ensure that the behavior of the systems they design, control, or use minimizes harm. A gap in this responsibility, according to them, results from these persons not being sufficiently aware, capable and motivated to see and act according to these obligations (p. 1059).

To address gaps in active responsibility (as well as backward-looking gaps in responsibility), Santoni de Sio and Mecacci suggest an approach that underlines the need to look at the broader sociotechnical system of human agents and technologies. Assigning responsibility requires looking at the whole chain of design, development and use from a social, technical as well as organizational perspective. Each element in this chain can be adjusted in an effort to address responsibility gaps, including the design of the computer systems as well as the organization that uses them. They base their approach on the idea of designing sociotechnical systems for meaningful human control, as developed by Santoni de Sio and van den Hoven (2018). Meaningful human control is a concept that originally gained currency in the context of autonomous lethal weapons as an approach to addressing responsibility gaps. The question of what it means to have control over a technological system becomes particularly pertinent in situations where weapon systems are delegated tasks involved in target selection and engagement (Ekelhof 2019). Santoni de Sio and van den Hoven (2018) developed their conception of meaningful human control to get a more ‘actionable analysis of control’ that can help engineers, computing professionals, policy makers and designers think about how to design sociotechnical systems with responsibility in mind.

Santoni de Sio and van den Hoven assume that technology is part of the decisional mechanisms through which human agents carry out actions in the world, and these mechanisms should be responsive to moral reasons for an agent to have control. Meaningful human control is thus conditional on the extent to which an outcome can be connected to the decisional mechanisms of human agents. To elucidate these connections, they formulate two necessary conditions for meaningful control, called tracking and tracing.

Tracking requires that the whole sociotechnical system of technical, human and organizational elements should be responsive to the moral reasons of the relevant agents and to the relevant facts of the circumstances. That is, the behaviour of the system should reflect the reasons, values, and intentions of these actors given particular circumstances. For example, if a machine learning system is trained to distinguish huskies from wolves, the system should act according to the relevant reasons for doing this (e.g. alerting a farmer to the presence of a wolf). A machine learning system that is trained to make this distinction, but has only been shown pictures of wolves in the snow and huskies in more urban environments, may deduce that the relevant distinguishing feature is snow. When shown a picture of a wolf in an urban environment, it might subsequently misclassify the wolf as a husky. In this case, the system did not properly track the reasons of relevant human agents and the facts of the environment. Note that this tracking relation can involve multiple human agents along the chain. That is, the moral reasons do not necessarily have to come from the operator or user; they can also come from policy makers, designers or programmers.
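The husky/wolf tracking failure can be sketched in a few lines. The data and the learned rule below are hypothetical simplifications: a real classifier picks up such spurious correlations statistically rather than as an explicit rule, but the failure mode is the same.

```python
# Illustrative sketch (hypothetical data): a classifier trained only on
# wolves in snow and huskies in urban scenes ends up keying on the
# background, not the animal, and so fails to track the designers' reasons.
training = [
    {"animal": "wolf", "snow": True},
    {"animal": "wolf", "snow": True},
    {"animal": "husky", "snow": False},
    {"animal": "husky", "snow": False},
]

# The rule the system has effectively 'learned' from this biased sample.
def classify(image):
    return "wolf" if image["snow"] else "husky"

# Perfect accuracy on the (biased) training data...
assert all(classify(img) == img["animal"] for img in training)

# ...but a wolf photographed in an urban environment is misclassified.
print(classify({"animal": "wolf", "snow": False}))  # → husky
```

The system’s behaviour here reflects a fact about the training environment (snow) rather than the moral reason for which it was deployed (detecting wolves), which is precisely what the tracking condition is meant to rule out.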

Tracing requires that outcomes can be traced back to earlier decisions made by human agents that placed them in the position resulting in the outcome. An example that the authors give is a drunk driver causing a serious accident. Even if the driver does not fulfill the conditions of responsibility at the time of the accident – because of mental incapacitation – the driver did make the earlier decision to drink too much. The tracing condition, as Santoni de Sio and van den Hoven formulate it, assumes that it is possible that more than one human agent is involved in the actions that led to the outcome and that their actions are mediated by non-human systems. According to them, the tracing condition requires that the whole sociotechnical system is designed such that at least one human agent can have sufficient knowledge and moral awareness to be a potential target of legitimate response for the behavior of the system.

Santoni de Sio and van den Hoven’s understanding of meaningful human control has implications not only for the design of computer systems, but also for the design of the environment and the social and institutional practices. Tracking and tracing of relevant human moral reasons occur on all these levels of design.

3.2 Responsibility as practice

Santoni de Sio and van den Hoven’s analysis of meaningful human control draws attention to the social function of moral responsibility, which provides yet another perspective on the issue (Stahl 2006; Tigard 2021b). Both prospectively and retrospectively, responsibility works to organize social relations between people and between people and institutions. It sets expectations between people for the fulfillment of certain obligations and duties and provides the means to correct or encourage certain behavior. For instance, a robotics company is expected to build in safeguards that prevent robots from harming humans. If the company fails to live up to this expectation, it will be held accountable and in some cases it will have to pay for damages or undergo some other kind of punishment. The punishment or prospect of punishment can encourage the company to have more regard for system safety, reliability, sound design and the risks involved in its production of robots. It might trigger the company to take actions to prevent future accidents. Yet it might also encourage it to find ways to shift the blame. The idea that responsibility is about interpersonal relationships and expectations about duties and obligations places the focus on the practices of holding someone responsible (Strawson 1962; Talbert 2022).

The particular practices and social structures that are in place to ascribe responsibility and hold people accountable have an influence on how we relate to technologies. Just before the turn of the century, Nissenbaum already noted that the difficulties in attributing moral responsibility can, to a large extent, be traced back to the particular characteristics of the organizational and cultural context in which computer technologies are embedded. She argued that how we conceive of the nature, capacities and limitations of computing influences the answerability of those who develop and use computer technologies (1997). At the time, she observed a systematic erosion of accountability in our increasingly computerized society, where she conceived of accountability as a value and a practice that places an emphasis on preventing harm and risk. Accountability means there will be someone, or several people, to answer not only for the malfunctions in life-critical systems that cause or risk grave injuries and cause infrastructure and large monetary losses, but even for the malfunctions that cause individual losses of time, convenience, and contentment (1994, p. 74). It can be used as “a powerful tool for motivating better practices, and consequently more reliable and trustworthy systems” (1997, p. 43). Holding people accountable for the harms or risks caused by computer systems provides a strong incentive to minimize them and can provide a starting point for assigning just punishment.

Cultural and organizational practices, at the time, however, seemed to do the opposite, due to “the conditions under which computer technologies are commonly developed and deployed, coupled with popular conceptions about the nature, capacities and limitations of computing” (p. 43). Nissenbaum identified four barriers to accountability in society: (1) the problem of many hands, (2) the acceptance of computer bugs as an inherent element of large software systems, (3) using the computer as scapegoat and (4) ownership without liability. According to Nissenbaum, people have a tendency to shirk responsibility and to shift the blame to others when accidents occur. The problem of many hands and the idea that software bugs are an inevitable by-product of complex computer systems are too easily accepted as excuses for not answering for harmful outcomes. People are also inclined to point the finger at the complexity of the computer and argue that “it was the computer’s fault” when things go wrong. Finally, she perceived a tendency of companies to claim ownership of the software they developed, but to dismiss the responsibilities that come with ownership. To illustrate, she pointed to extended license agreements that assert a manufacturer’s ownership of software, but disclaim any accountability for the quality or performance of the product.

These four barriers, Nissenbaum argued, stand in the way of a “culture of accountability” that is aimed at maintaining clear lines of accountability. Such a culture fosters a strong sense of responsibility as a virtue to be encouraged, and everyone connected to an outcome of particular actions is answerable for it. Accountability, according to Nissenbaum, is different from liability. Liability is about looking for a person to blame and to compensate for damages suffered after the event. Once that person has been found, others can be let ‘off the hook’, which may encourage people to look for excuses, such as blaming the computer. Accountability, however, applies to all those involved. It requires a particular kind of organizational context, one in which answerability works to entice people to pay greater attention to system safety, reliability and sound design, in order to establish a culture of accountability (see also Martin 2019; Herkert et al. 2020). An organization that places less value on accountability and that has little regard for responsibilities in organizing its production processes is more likely to allow its technological products to become incomprehensible. Nissenbaum’s analysis illustrates that our practices of holding someone responsible, that is, the established ways of holding people to account and of conveying expectations about duties and obligations, are continuously changing and negotiated, partly as a response to the introduction of new technologies (see also Noorman 2012).

A lack of such a culture of accountability can lead to responsibility being attributed to the wrong people. These people become what Madeleine Clare Elish calls the moral crumple zone (2019). These human actors absorb responsibility, even though they have only very limited or no control over the systems they work with. Elish argues that, given the complexity of technological systems, the media and the public tend to blame accidents on human error and misattribute responsibility to the nearest human operator, such as a pilot or maintenance personnel, rather than to the technological systems or the decision-makers higher up the chain.

Nissenbaum argued that the context in which technologies are developed and used has a significant influence on the ascription of moral responsibility, but several authors have stressed that moral responsibility cannot be properly understood without recognizing the active role of technology in shaping human action (Jonas 1984; Verbeek 2006; Johnson and Powers 2005; Waelbers 2009). According to Johnson and Powers, it is not enough to just look at what humans intend and do. “Ascribing more responsibility to persons who act with technology requires coming to grips with the behavior of the technology” (2005, p. 107). One has to consider the various ways in which technological artifacts mediate human actions. Moral responsibility is, thus, not only about how the actions of a person or a group of people affect others in a morally significant way; it is also about how their actions are shaped by technology. Moral responsibility, from this perspective, is not located in an individual or an interpersonal relationship, but is distributed among humans and technologies.

4. Conclusion

Computer technologies have challenged conventional conceptions of moral responsibility and have raised questions about how to distribute responsibility appropriately. Can human beings still be held responsible for the behavior of complex computer technologies that they have limited control over or understanding of? Are human beings the only agents that can be held morally responsible, or can the concept of moral agent be extended to include artificial computational entities? In response to such questions philosophers have reexamined the concepts of moral agency and moral responsibility. Although there is no clear consensus on what these concepts should entail in an increasingly digital society, what is clear from the discussions is that any reflection on these concepts will need to address how these technologies affect human action and where responsibility for action begins and ends.

Bibliography

  • Allen, C. & W. Wallach, 2012. “Moral Machines. Contradiction in Terms or Abdication of Human Responsibility?” in P. Lin, K. Abney, and G. Bekey (eds.), Robot ethics. The ethics and social implications of robotics, Cambridge, Massachusetts: MIT Press.
  • Allen, C., G. Varner & J. Zinser, 2000. “Prolegomena to any Future Artificial Moral Agent,” Journal of Experimental and Theoretical Artificial Intelligence, 12: 251–261.
  • Allen, C., W. Wallach & I. Smit, 2006. “Why Machine Ethics?” Intelligent Systems, IEEE, 21(4): 12–17.
  • Angwin, J., J. Larson, S. Mattu & L. Kirchner, 2016. “Machine Bias. There is software that is used across the country to predict future criminals. And it is biased against blacks”, ProPublica, May 23, 2016, Angwin et al. 2016 available online.
  • Asaro, P., 2011. “A Body to Kick, But Still No Soul to Damn: Legal Perspectives on Robotics,” in P. Lin, K. Abney, and G. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics, Cambridge, MA: MIT Press.
  • Bechtel, W., 1985. “Attributing Responsibility to Computer Systems,” Metaphilosophy, 16(4): 296–306.
  • Bijker, W. E., T. P. Hughes, & T. Pinch, 1987. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, London, UK: The MIT Press.
  • Bovens, M. & S. Zouridis, 2002. “From street-level to system-level bureaucracies: how information and communication technology is transforming administrative discretion and constitutional control,” Public Administration Review, 62(2): 174–184.
  • Boyd, D. & K. Crawford, 2012. “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon,” Information, Communication, & Society, 15(5): 662–679.
  • Bryson, J.J., 2018. “Patiency is not a virtue: the design of intelligent systems and systems of ethics,” Ethics and Information Technology, 20(1): 15–26.
  • Cervantes, J.A., López, S., Rodríguez, L.F., Cervantes, S., Cervantes, F. and Ramos, F., 2020. “Artificial moral agents: A survey of the current status,” Science and Engineering Ethics, 26(2): 501–532.
  • Coeckelbergh, M., 2009. “Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents,” AI & Society, 24: 181–189.
  • –––, 2012. “Moral responsibility, technology, and experiences of the tragic: From Kierkegaard to offshore engineering,” Science and Engineering Ethics, 18(1): 35–48.
  • –––, 2013. “Drones, information technology, and distance: mapping the moral epistemology of remote fighting,” Ethics and Information Technology, 15(2): 87–98.
  • –––, 2020. “Artificial intelligence, responsibility attribution, and a relational justification of explainability,” Science and Engineering Ethics, 26(4): 2051–2068.
  • Coeckelbergh, M. & R. Wackers, 2007. “Imagination, Distributed Responsibility and Vulnerable Technological Systems: the Case of Snorre A,” Science and Engineering Ethics, 13(2): 235–248.
  • Cummings, M. L., 2004. “Automation Bias in Intelligent Time Critical Decision Support Systems,” published online: 19 Jun 2012, American Institute of Aeronautics and Astronautics. doi:10.2514/6.2004-6313
  • Diakopoulos, N., 2020. “Transparency,” in M. Dubber, F. Pasquale, & S. Das (eds.), Oxford Handbook of Ethics and AI, Oxford: Oxford University Press, pp. 197–214.
  • Dennett, D. C., 1997. “When HAL Kills, Who’s to Blame? Computer Ethics,” in HAL’s Legacy: 2001’s Computer as Dream and Reality, D. G. Stork (ed.), Cambridge, MA: MIT Press.
  • Denning, P. J., 1989. “The Science of Computing: The Internet Worm,” American Scientist, 77(2): 126–128.
  • Dignum, V., 2020. “Responsibility and artificial intelligence,” The Oxford Handbook of Ethics of AI, Markus D. Dubber et al. (eds.), Oxford: Oxford University Press, pp. 214–231.
  • Doorn, N. & van de Poel, I., 2012. “Editors’ Overview: Moral Responsibility in Technology and Engineering,” Science and Engineering Ethics, 18: 1–11.
  • Ekelhof, M., 2019. “Moving beyond semantics on autonomous weapons: Meaningful human control in operation,” Global Policy, 10(3): 343–348.
  • Elish, M.C., 2019. “Moral crumple zones: Cautionary tales in human-robot interaction,” Engaging Science, Technology, and Society, 5: 40–60.
  • Eshleman, A., 2016. “Moral Responsibility,” in The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), E. N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/moral-responsibility/>.
  • Eubanks, V., 2018. Automating inequality: How high-tech tools profile, police, and punish the poor, New York: St. Martin’s Press.
  • Felt, U., Fouché, R., Miller, C. A., & Smith-Doerr, L., 2016. The Handbook of Science and Technology Studies, Cambridge, MA: MIT Press.
  • Fischer, J. M., 1999. “Recent work on moral responsibility,” Ethics, 110(1): 93–139.
  • Floridi, L., & J. Sanders, 2004. “On the Morality of Artificial Agents,” Minds and Machines, 14(3): 349–379.
  • Floridi, L., 2013. “Distributed morality in an information society,” Science and Engineering Ethics, 19(3): 727–743.
  • –––, 2016. “Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions,” Philosophical Transactions of the Royal Society A (Mathematical, Physical and Engineering Sciences), 374(2083); doi:10.1098/rsta.2016.0112
  • Friedman, B., 1990. “Moral Responsibility and Computer Technology,” Institute of Education Sciences ERIC Number ED321737, [Friedman 1990 available online].
  • ––– (ed.), 1997. Human Values and the Design of Computer Technology, Stanford: CSLI Publications; New York: Cambridge University Press.
  • Gorwa, R., Binns, R., & Katzenbach, C., 2020. “Algorithmic content moderation: Technical and political challenges in the automation of platform governance,” Big Data & Society, 7(1); first online 28 February 2020. doi:10.1177/2053951719897945
  • Gotterbarn, D., 2001. “Informatics and professional responsibility,” Science and Engineering Ethics, 7(2): 221–230.
  • Graubard, S. R., 1988. The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press.
  • Gray, C. H., 1997. “AI at War: The Aegis System in Combat,” Directions and Implications of Advanced Computing, D. Schuler (ed.), New York: Ablex, pp. 62–79.
  • Gunkel, D. J., 2020. “A vindication of the rights of machines,” in Machine Ethics and Robot Ethics, W. Wallach and P. Asaro (eds.), London: Routledge, pp. 511–530.
  • Hakli, R. and Mäkelä, P., 2019. “Moral responsibility of robots and hybrid agents,” The Monist, 102(2): 259–275.
  • Hart, H. L. A., 1968. Punishment and Responsibility, Oxford: Oxford University Press.
  • Herkert, J., Borenstein, J., & Miller, K., 2020. “The Boeing 737 MAX: Lessons for engineering ethics,” Science and Engineering Ethics, 26: 2957–2974.
  • Hughes, T.P., 1987. “The Evolution of Large Technological Systems,” in W. E. Bijker, T. P. Hughes, & T. Pinch (eds.), The Social Construction of Technological Systems, Cambridge, MA: The MIT Press, pp. 51–82.
  • IJsselsteijn, W., Y. de Kort, C. Midden, B. Eggen, & E. van den Hoven (eds.), 2006. Persuasive Technology, Berlin: Springer-Verlag.
  • Johnson, D. G., 2001. Computer Ethics, 3rd edition, Upper Saddle River, New Jersey: Prentice Hall.
  • –––, 2006. “Computer Systems: Moral Entities but not Moral Agents,” Ethics and Information Technology, 8: 195–204.
  • Johnson, D. G. & T. M. Powers, 2005. “Computer systems and responsibility: A normative look at technological complexity,” Ethics and Information Technology, 7: 99–107.
  • Jonas, H., 1984. The Imperative of Responsibility. In Search of an Ethics for the Technological Age, Chicago: The Chicago University Press.
  • Kroes, P. & P.P. Verbeek (eds.), 2014. The Moral Status of Technical Artefacts, Dordrecht: Springer.
  • Kuflik, A., 1999. “Computers in Control: Rational Transfer of Authority or Irresponsible Abdication of Authority?” Ethics and Information Technology, 1: 173–184.
  • Ladd, J., 1989. “Computers and Moral Responsibility. A Framework for an Ethical Analysis,” in C.C. Gould (ed.), The Information Web. Ethical and Social Implications of Computer Networking, Boulder, Colorado: Westview Press, pp. 207–228.
  • Latour, B., 1992. “Where are the Missing Masses? The Sociology of a Few Mundane Artefacts,” in W. Bijker & J. Law (eds.), Shaping Technology/Building Society: Studies in Socio-Technical Change, Cambridge, Massachusetts: The MIT Press, pp. 225–258.
  • Leveson, N. G. & C. S. Turner, 1993. “An Investigation of the Therac-25 Accidents,” Computer, 26(7): 18–41.
  • Leveson, N., 1995. “Medical Devices: The Therac-25,” in N. Leveson, Safeware: System Safety and Computers, Boston: Addison-Wesley.
  • Martin, K., 2019. “Ethical implications and accountability of algorithms,” Journal of Business Ethics, 160(4): 835–850.
  • Matthias, A., 2004. “The responsibility gap: Ascribing responsibility for the actions of learning automata,” Ethics and Information Technology, 6: 175–183.
  • McCorduck, P., 1979. Machines Who Think, San Francisco: W.H. Freeman and Company.
  • Miller, K. W., 2008. “Critiquing a critique,” Science and Engineering Ethics, 14(2): 245–249.
  • Moor, J.H., 2006. “The Nature, Importance, and Difficulty of Machine Ethics,” Intelligent Systems (IEEE), 21(4): 18–21.
  • Nissenbaum, H., 1994. “Computing and Accountability,” Communications of the Association for Computing Machinery, 37(1): 72–80.
  • –––, 1997. “Accountability in a Computerized Society,” in B. Friedman (ed.), Human Values and the Design of Computer Technology, Cambridge: Cambridge University Press, pp. 41–64.
  • Nass, C. & Y. Moon, 2000. “Machines and mindlessness: Social responses to computers,” Journal of Social Issues, 56(1): 81–103.
  • Noorman, M., 2009. Mind the Gap: A Critique of Human/Technology Analogies in Artificial Agents Discourse, Maastricht: Universitaire Pers Maastricht.
  • –––, 2012. “Responsibility Practices and Unmanned Military Technologies,” Science and Engineering Ethics, 20(3): 809–826.
  • Nihlén Fahlquist, J., Doorn, N., & Van de Poel, I., 2015. “Design for the value of responsibility,” in Handbook of Ethics, Values and Technological Design: Sources, Theory, Values and Application Domains, Jeroen van den Hoven, Ibo van de Poel and Pieter Vermaas (eds.), Dordrecht: Springer.
  • Nyholm, S., 2018. “Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci,” Science and Engineering Ethics, 24(4): 1201–1219.
  • Parasuraman, R. & V. Riley, 1997. “Humans and Automation: Use, Misuse, Disuse, Abuse,” Human Factors: The Journal of the Human Factors Society, 39(2): 230–253.
  • Pasquale, F., 2015. The black box society: The secret algorithms that control money and information, Cambridge, MA: Harvard University Press.
  • Polder-Verkiel, S. E., 2012. “Online responsibility: Bad samaritanism and the influence of internet mediation,” Science and Engineering Ethics, 18(1): 117–141.
  • Ravenscraft, E., 2020. “How to Spot—and Avoid—Dark Patterns on the Web”, Wired, July 29, Ravenscraft 2020 available online.
  • Reeves, B. & C. Nass, 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge: Cambridge University Press.
  • Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C., 2013. “An experimental study on emotional reactions towards a robot,” International Journal of Social Robotics, 5(1): 17–34.
  • Sack, W., 1997. “Artificial Human Nature,” Design Issues, 13: 55–64.
  • Santoni de Sio, F. and Van den Hoven, J., 2018. “Meaningful human control over autonomous systems: A philosophical account,” Frontiers in Robotics and AI, 5, first online 28 February 2018. doi:10.3389/frobt.2018.00015
  • Santoni de Sio, F., & Mecacci, G., 2021. “Four responsibility gaps with artificial intelligence: Why they matter and how to address them,” Philosophy & Technology, 34: 1057–1084.
  • Sartor, G. and M. Viola de Azevedo Cunha, 2010. “The Italian Google-Case: Privacy, Freedom of Speech and Responsibility of Providers for User-Generated Contents,” International Journal of Law and Information Technology, 18(4): 356–378.
  • Searle, J. R., 1980. “Minds, brains, and programs,” Behavioral and Brain Sciences, 3(3): 417–457.
  • Singel, R., 2010. “Does Italy’s Google Conviction Portend More Censorship?” Wired (February 24th, 2010), Singel 2010 available online.
  • Sparrow, R., 2007. “Killer Robots,” Journal of Applied Philosophy, 24(1): 62–77.
  • Stahl, B. C., 2004. “Information, Ethics, and Computers: The Problem of Autonomous Moral Agents,” Minds and Machines, 14: 67–83.
  • –––, 2006. “Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency,” Ethics and Information Technology, 8: 205–213.
  • Stieb, J. A., 2008. “A Critique of Positive Responsibility in Computing,” Science and Engineering Ethics, 14(2): 219–233.
  • Strawson, P., 1962. “Freedom and Resentment,” in Proceedings of the British Academy, 48: 1–25.
  • Suchman, L., 1998. “Human/machine reconsidered,” Cognitive Studies, 5(1): 5–13.
  • Sullins, J. P., 2006. “When is a Robot a Moral Agent?” International Review of Information Ethics, 6(12): 23–29.
  • Swierstra, T., Waelbers, K., 2012. “Designing a Good Life: A Matrix for the Technological Mediation of Morality,” Science and Engineering Ethics, 18: 157–172. doi:10.1007/s11948-010-9251-1
  • Taddeo, M. and L. Floridi, 2015. “The Debate on the Moral Responsibilities of Online Service Providers,” Science and Engineering Ethics, 22(6): 1575–1603.
  • Talbert, M., 2022. “Moral Responsibility,” in The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2022/entries/moral-responsibility/>.
  • Tigard, D. W., 2021a. “Responsible AI and moral responsibility: a common appreciation,” AI and Ethics, 1(2): 113–117.
  • –––, 2021b. “Artificial moral responsibility: How we can and cannot hold machines responsible,” Cambridge Quarterly of Healthcare Ethics, 30(3): 435–447.
  • U.S. Department of Defense, 2009. “FY2009–2034 Unmanned Systems Integrated Roadmap,” available online.
  • Van den Hoven, J., 2002. “Wadlopen bij Opkomend Tij: Denken over Ethiek en Informatiemaatschappij” [“Mudflat Walking at Rising Tide: Thinking about Ethics and the Information Society”], in J. de Mul (ed.), Filosofie in Cyberspace, Kampen: Uitgeverij Klement, pp. 47–65.
  • Véliz, C., 2021. “Moral zombies: why algorithms are not moral agents,” AI & Society, 36: 487–497. doi:10.1007/s00146-021-01189-x
  • Verbeek, P. P., 2006. “Materializing Morality: Design Ethics and Technological Mediation,” Science, Technology and Human Values, 31(3): 361–380.
  • –––, 2021. What Things Do, University Park, PA: Pennsylvania State University Press.
  • Vidal, J., 2004. “The alco-lock is claimed to foil drink-drivers. Then the man from the Guardian had a go…,” The Guardian, August 5th, 2004.
  • Waelbers, K., 2009. “Technological Delegation: Responsibility for the Unintended,” Science & Engineering Ethics, 15(1): 51–68.
  • Wallach, W. and C. Allen, 2009. Moral Machines. Teaching Robots Right from Wrong, Oxford: Oxford University Press.
  • Whitby, B., 2008. “Sometimes it’s hard to be a robot. A call for action on the ethics of abusing artificial agents,” Interacting with Computers, 20(3): 326–333.
  • Zerilli, J., J. Danaher, J. Maclaurin, C. Gavaghan, A. Knott, J. Liddicoat & M. Noorman, 2021. “Autonomy,” in A Citizen’s Guide to Artificial Intelligence, Cambridge, MA: MIT Press, pp. 107–126.
  • Zuboff, S., 1982. “Automate/Informate: The Two Faces of Intelligent Technology,” Organizational Dynamics, 14(2): 5–18.

Other Internet Resources

Journals On-line

  • Ethics and Information Technology: A peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology (ICT).
  • Science and Engineering Ethics: A multi-disciplinary journal that explores ethical issues of direct concern to scientists and engineers.
  • Philosophy and Technology: A journal that addresses the expanding scope and unprecedented impact of technologies, in order to improve the critical understanding of their conceptual nature and practical consequences, and hence provide the conceptual foundations for their fruitful and sustainable development.
  • Big Data and Society: A peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities and computing and their intersections with the arts and natural sciences, about the implications of Big Data for societies.

Organizations

  • IACAP: International Association for Computing and Philosophy: concerned with computing and philosophy broadly construed, including the use of computers to teach philosophy, the use of computers to model philosophical theory, as well as philosophical concerns raised by computing.
  • Moral Machine: A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

Blogs

  • Moral machines: blog on the theory and development of artificial moral agents and computational ethics.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. SES 1058457.

Copyright © 2023 by
Merel Noorman <merelnoorman@gmail.com>



The Stanford Encyclopedia of Philosophy is copyright © 2025 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

