Stanford Encyclopedia of Philosophy Archive
Fall 2023 Edition

Ethics of Artificial Intelligence and Robotics

First published Thu Apr 30, 2020

Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.

After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

1.2 AI & Robotics

The notion of “artificial intelligence” (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict “intelligence” to what would require intelligence if done by humans, as Minsky had suggested (1985). This means we incorporate a range of machines, including those in “technical AI”, that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in “general AI” that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

1.3 A Note on Policy

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans (“trusted/responsible/humane/human-centred/good/beneficial AI”), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller’s list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and do some “ethics washing” in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economic and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies will always cause some uses to be easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

2.1 Privacy & Surveillance

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well-recognised aspects, e.g., “the right to be let alone”, information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). This continues using other techniques for identification, e.g., “device fingerprinting”, which are commonplace on the Internet (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). The result is arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and manipulation (see below section 2.2). This has led to calls for the protection of “derived data” (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book, Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a standard staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, this is done by adding calibrated noise to the output of queries (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.
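
To illustrate the basic idea behind “differential privacy”, here is a minimal sketch in Python (the data, the query, and the epsilon value are hypothetical, and this is the textbook Laplace mechanism for a counting query rather than a description of any particular deployed system):

    import numpy as np

    def private_count(records, predicate, epsilon=0.5):
        """Answer a counting query with epsilon-differential privacy.

        A counting query has sensitivity 1 (adding or removing one person
        changes the true count by at most 1), so Laplace noise with
        scale 1/epsilon suffices.
        """
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical query: how many patients in the dataset are over 60?
    patients = [{"age": 34}, {"age": 71}, {"age": 65}, {"age": 58}]
    print(private_count(patients, lambda r: r["age"] > 60))

Smaller values of epsilon add more noise and thus give stronger privacy at the cost of less accurate answers, which is one source of the extra effort and cost mentioned above.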

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a “digital” background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

2.2 Manipulation of Behaviour

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A “nudge” changes the environment such that it influences behaviour in a predictable way that is positive for the individual, but easy and cheap to avoid (Thaler & Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies make what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and rights to data vs. technical quality of the product. This influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

2.3 Opacity of AI Systems

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analysis of opacity and bias go hand in hand, and political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
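
To make the “garbage in, garbage out” point concrete, here is a minimal sketch in Python with scikit-learn (the data is synthetic and the feature names are invented for illustration): a model trained on historically biased labels learns a negative weight for a sensitive attribute even though that attribute is irrelevant to the task, and will therefore reproduce the historical bias in its outputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)   # sensitive attribute (0 or 1)
    skill = rng.normal(0, 1, n)     # the genuinely relevant feature

    # Historical labels: past decisions penalised group 1 regardless of skill.
    label = ((skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, label)

    # The learned coefficient for the group feature is clearly negative,
    # i.e., the model has picked up the historical bias and will apply it
    # to new cases.
    print(model.coef_)

A “datasheet” of the kind proposed by Gebru et al. (2018 [OIR]) is meant to make exactly this sort of encoded history visible before a dataset is reused.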

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called “algorithmic accountability reporting”. This does not mean that we expect an AI to “explain its reasoning”—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with the (Regulation (EU) 2016/679), which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.

2.4 Bias in Decision Systems

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may range from the relatively trivial to the highly significant: “this restaurant matches your preferences”, “the patient in this X-ray has completed bone growth”, “application to credit card declined”, “donor organ will be given to another patient”, “bail is denied”, or “target identified and engaged”. Data analysis is often used in “predictive analytics” in business, healthcare, and other fields, to foresee future developments—since prediction is easier, it will also become a cheaper commodity. One use of prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (like in the 2002 film “Minority Report”). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and discovering more crime in that area. Actual “predictive policing” or “intelligence led policing” techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., “ArcGIS”). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of aims of the police work itself. Perhaps a recent paper title points in the right direction here: “AI ethics in predictive policing: From models of threat to an ethics of care” (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of “cognitive biases”, e.g., the “confirmation bias”: humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., “statistical bias”. Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the “historical bias”. Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018) and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems.
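
To see what “more false positives and fewer false negatives” amounts to in practice, here is a minimal sketch in Python of how such group-wise error rates can be computed; the labels and predictions below are invented for illustration and are not the COMPAS data.

    def error_rates(y_true, y_pred):
        """Return (false positive rate, false negative rate)."""
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        negatives = sum(1 for t in y_true if t == 0)
        positives = sum(1 for t in y_true if t == 1)
        return fp / negatives, fn / positives

    # y_true: did the person actually re-offend?  y_pred: the system's prediction.
    group_a = ([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], [1, 1, 0, 0, 0, 1, 1, 1, 1, 0])
    group_b = ([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 1, 1, 1, 0, 0])

    print("group A (FPR, FNR):", error_rates(*group_a))  # (0.4, 0.2)
    print("group B (FPR, FNR):", error_rates(*group_b))  # (0.2, 0.4)

In this toy example both groups are scored with the same overall accuracy (70%), yet group A faces twice the false positive rate and half the false negative rate of group B, which is the shape of the disparity described in the paragraph above.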

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are in early stages: see UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal is in (Veale and Binns 2017).
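
To give a sense of why a single mathematical notion of fairness is hard to come by, here is a sketch of two candidate criteria that are commonly discussed in this literature, stated in generic notation rather than as any particular author’s definition, where \(\hat{Y}\) is the predicted outcome, \(Y\) the true outcome, and \(A\) the protected attribute:

    \text{Demographic parity:} \quad P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \quad \text{for all groups } a, b
    \text{Equalised odds:} \quad P(\hat{Y}=1 \mid Y=y, A=a) = P(\hat{Y}=1 \mid Y=y, A=b) \quad \text{for } y \in \{0, 1\}

If the base rates \(P(Y=1 \mid A=a)\) differ between groups, a predictor that is informative about \(Y\) cannot in general satisfy both criteria at once, so settling on a formal definition of fairness is itself a normative choice rather than a purely technical one.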

2.5 Human-Robot Interaction

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and both the different interests present in and the intricacy of the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity”. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro’s remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g. on the abilities of Hanson Robotics’ “Sophia”). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and has raised a number of concerns for a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019); for a survey of users, Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespan people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here since the discussion mostly focuses on the fear of robots de-humanising care, but the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus “care robots” only in a behavioural sense of performing tasks in care environments, not in the sense that a human “cares” for the patients. It appears that the success of “being cared for” relies on this intentional sense of “care”, which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to “care” on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. It seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer or a tamagotchi. Danaher (2019b) argues against Nyholm and Frank (2017) that these can be true friendships, and are thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

2.6 Automation and Employment

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on “growth” is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, however, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970 the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions lead to more labour-intensive industries moving to places with lower labour cost. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the “age of leisure” to be realised, something Keynes (1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a “veil of ignorance” (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. It would appear that the AI economy has three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a “winner takes all” feature where monopolies develop quickly. Third, the “new economy” of the digital service industries is based on intangible assets, also called “capitalism without capital” (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features seem to suggest that if we leave the distribution of wealth to free market forces, the result would be a heavily unjust distribution: And this is indeed a development that we can already see.

One interesting question that has not received too much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

2.7 Autonomous Systems

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not inversely, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is the degree to which autonomous robots raise issues our present conceptual schemes must adapt to, or whether they just require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of “verifiable AI” for such safety-critical systems and for “security applications”. Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced “standards”, particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in air or space, we discuss two samples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise to reduce the very significant damage that human driving currently causes—approximately 1 million humans being killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there seem to be questions on how autonomous vehicles should behave, and how responsibility and risk should be distributed in the complicated system the vehicles operate in. (There is also significant disagreement over how long the development of fully autonomous, or “level 5” cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in (Foot 1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, not keeping a safe distance, etc. are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive “by the rules” rather than “by the interest of the passengers” or “to achieve maximum utility” is thus deflated to a standard problem of programming ethical machines (see section 2.9). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.10.1 below). The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as “fantasy” at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), but not for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some seem to be equivalent to saying that autonomous weapons are indeed weapons …, and weapons kill, but we still make them in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots reduce war crimes and crimes in war, the answer may well be positive and has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a) but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy—so all this says is that we should not construct and use such weapons if they do violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even the defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in the military guidance on weapons—these ways of spelling out “meaningful control” are discussed in (Santoni de Sio and van den Hoven 2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Rob Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for that event, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

2.8 Machine Ethics

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though there is the (dubious) inference at play here that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws will make it problematic to use them despite their hierarchical organisation.

It is not clear that there is a consistent notion of “machine ethics” since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”); stronger notions that move towards artificial moral agents may describe a—currently—empty set.

2.9 Artificial Moral Agents

If one takes machine ethics to concern moral agents, in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering in which case matters of responsibility and rights will not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents (who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”.) Several ways to achieve “explicit” or “full” ethical agents have been proposed, via programming it in (operational morality), via “developing” the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).

In some discussions, the notion of “moral patient” plays a role: ethical agents have responsibilities, while ethical patients have rights because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents will also be patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116)—which he takes to be “ethical productivity and ethical receptivity” (2011: 117)—his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: a car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general,

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751)

How this distribution might occur is not a problem that is specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies.

2.9.2 Rights for Robots

Some authors have indicated that it should be seriously considered whether current robots must be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of the opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: If we relate to robots as though they had rights, then we might be well-advised not to search whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question how far such anti-realism or quasi-realism can go, and what it means then to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are “entities”, namely that they can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability—which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is a significant concern about whether it would be ethical to create such consciousness, since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off—some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10 Singularity

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012), who essentially points out that computing power has been increasing exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted in Kurzweil (1999) that by 2010 supercomputers would reach human computation capacity, by 2030 “mind uploading” would be possible, and by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost—but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase—not the 7x increase that doubling every two years would have created.
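As a rough reconstruction of the arithmetic (a back-of-the-envelope check, not part of the cited estimate itself), the growth factor over a period \(T\) with doubling time \(d\) is \(2^{T/d}\):

\[
2^{\,6\,\mathrm{yr}/2\,\mathrm{yr}} = 2^{3} = 8 \;(\approx 7\times)
\qquad\text{versus}\qquad
2^{\,72\,\mathrm{mo}/3.4\,\mathrm{mo}} \approx 2^{21} \approx 2\times 10^{6}.
\]

The reported figure of roughly 300,000x corresponds to about 18 doublings, i.e., an effective interval of roughly five years at the 3.4-month rate; the point is in any case the gap of several orders of magnitude between the two doubling regimes, not the exact multiple.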

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: If you walked in steps in such a way that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (Answer: almost 3 times further than the Earth’s only permanent natural satellite.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in (Müller and Bostrom 2016; Bostrom, Dafoe, and Flynn forthcoming); Sandberg (2019) argues that progress will continue for some time.
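The arithmetic behind the mini-test is a simple geometric series; taking the mean Earth–Moon distance to be about 384,400 km:

\[
\sum_{k=0}^{29} 2^{k}\,\mathrm{m} \;=\; (2^{30}-1)\,\mathrm{m} \;\approx\; 1.07\times 10^{9}\,\mathrm{m} \;=\; 1.07\times 10^{6}\,\mathrm{km} \;\approx\; 2.8 \times 384{,}400\,\mathrm{km}.
\]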

The participants in this debate are united by being technophiles in the sense that they expect technology to develop rapidly and bring broadly welcome changes—but beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence—often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined human single person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally ordered in the mathematical sense—but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur—it may be conceptually impossible, practically impossible, or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1781/1787: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such a singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): the superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to the explanation of the “Fermi paradox” of why there is no sign of life in the known universe despite the high probability of it emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018)—of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges but has produced a wide discussion: Tegmark (2017) focuses on AI and human life “3.0” after singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).

One aspect of this problem is that we might decide a certain feature is desirable, but then find out that it has unforeseen consequences that are so negative that we would not desire that feature after all. This is the ancient problem of King Midas, who wished that all he touched would turn into gold. This problem has been discussed on the occasion of various examples, such as the “paperclip maximiser” (Bostrom 2003b), or the program to optimise chess performance (Omohundro 2014).
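To make the structure of the worry concrete (a toy sketch under invented assumptions, not Bostrom’s or Omohundro’s actual examples, with all names and numbers hypothetical): an optimiser given a single proxy objective will pursue it literally and remain indifferent to every consideration not encoded in that objective.

```python
# Toy sketch: a planner that maximises one proxy objective ("paperclips")
# and is indifferent to any side effect not encoded in that objective.
plans = [
    {"name": "run factory normally", "paperclips": 100, "side_effects": "none"},
    {"name": "melt down the cutlery", "paperclips": 10_000, "side_effects": "no cutlery left"},
    {"name": "convert all available metal", "paperclips": 10**9, "side_effects": "catastrophic"},
]

def utility(plan):
    return plan["paperclips"]          # side effects carry zero weight

best = max(plans, key=utility)
print(best["name"], "->", best["side_effects"])
# The Midas point: the feature we asked for is delivered exactly,
# together with consequences we would not, on reflection, desire.
```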

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: a characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI (section 2.10).

3. Closing

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: in a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised and will have to watch technological and social developments closely to catch the new issues early on, develop a philosophical analysis, and learn from them for the traditional problems of philosophy.

Bibliography

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

  • Abowd, John M, 2017, “How Will Statistical Agencies OperateWhen All Data Are Private?”,Journal of Privacy andConfidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005,“Artificial Morality: Top-down, Bottom-up, and HybridApproaches”,Ethics and Information Technology, 7(3):149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000,“Prolegomena to Any Future Artificial Moral Agent”,Journal of Experimental & Theoretical ArtificialIntelligence, 12(3): 251–261.doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “TheEthical and Legal Case Against Autonomy in Weapons Systems”,Global Jurist, 18(1): art. 20170012.doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018,Artificial Intelligence and the Future of Humans, Washington,DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “MachineEthics: Creating an Ethical Intelligent Agent”,AIMagazine, 28(4): 15–26.
  • ––– (eds.), 2011,Machine Ethics,Cambridge: Cambridge University Press.doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006,Virtual Migration: The Programming ofGlobalization, Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009,Governing Lethal Behavior inAutonomous Robots, Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence:Arguing the Orthogonality Thesis”,Analysis andMetaphysics, 12: 68–84.
  • –––, 2014,Smarter Than Us, Berkeley,CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond MoralDilemmas: Exploring the Ethical Landscape in HRI”, inProceedings of the 2017 ACM/IEEE International Conference onHuman-Robot Interaction—HRI ’17, Vienna, Austria: ACMPress, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing:From Models of Threat to an Ethics of Care”,IEEE Technologyand Society Magazine, 38(2): 40–53.doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround: A Short Story”,Astounding Science Fiction, March 1942. Reprinted in“I, Robot”, New York: Gnome Press 1950, 1940ff.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, JosephHenrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan,2018, “The Moral Machine Experiment”,Nature,563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019,The Globotics Upheaval: Globalisation,Robotics and the Future of Work, New York: Oxford UniversityPress.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, OlleHäggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas,James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, PhilTorres, Alexey Turchin, and Roman V. Yampolskiy, 2019,“Long-Term Trajectories of Human Civilization”,Foresight, 21(1): 53–83.doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht derMaschinenethik”, inHandbuch Filmtheorie, BernhardGroß and Thomas Morsch (eds.), (Springer ReferenceGeisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden,1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006,The Governance ofPrivacy: Policy Instruments in Global Perspective, secondedition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “RacialCategories in Machine Learning”, inProceedings of theConference on Fairness, Accountability, and Transparency - FAT*’19, Atlanta, GA, USA: ACM Press, 289–298.doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, andThomas Metzinger, 2018, “Should We Fear Artificial Intelligence?In-Depth Analysis”, European Parliamentary Research Service,Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [Bentley et al. 2018 available online]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “RobotCompanions: A Legal and Ethical Analysis”,The InformationSociety, 34(3): 130–140.doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessonsfrom Political Philosophy”,Proceedings of the 1stConference on Fairness, Accountability and Transparency, inProceedings of Machine Learning Research, 81:149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a ComputerSimulation?”,The Philosophical Quarterly, 53(211):243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [Bostrom 2003b revised available online]
  • –––, 2003c, “Transhumanist Values”,inEthical Issues for the Twenty-First Century, FrederickAdams (ed.), Bowling Green, OH: Philosophical Documentation CenterPress.
  • –––, 2012, “The Superintelligent Will:Motivation and Instrumental Rationality in Advanced ArtificialAgents”,Minds and Machines, 22(2): 71–85.doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention asGlobal Priority”,Global Policy, 4(1): 15–31.doi:10.1111/1758-5899.12002
  • –––, 2014,Superintelligence: Paths,Dangers, Strategies, Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011,Global Catastrophic Risks, New York: Oxford UniversityPress.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming,“Policy Desiderata for Superintelligent AI: A Vector FieldApproach (V. 4.3)”, inEthics of ArtificialIntelligence, S Matthew Liao (ed.), New York: Oxford UniversityPress. [Bostrom, Dafoe, and Flynn forthcoming – preprint available online]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics ofArtificial Intelligence”, inThe Cambridge Handbook ofArtificial Intelligence, Keith Frankish and William M. Ramsey(eds.), Cambridge: Cambridge University Press, 316–334.doi:10.1017/CBO9781139046855.020 [Bostrom and Yudkowsky 2014 available online]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019,“Government Responses to Malicious Use of Social Media”,Working Paper 2019.2, Oxford: Project on Computational Propaganda. [Bradshaw, Neudert, and Howard 2019 available online/]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017,The Oxford Handbook of Law, Regulation and Technology,Oxford: Oxford University Press.doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016,The Second MachineAge: Work, Progress, and Prosperity in a Time of BrilliantTechnologies, New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, inClose Engagements with Artificial Companions: Key Social,Psychological, Ethical and Design Issues, Yorick Wilks (ed.),(Natural Language Processing 8), Amsterdam: John Benjamins PublishingCompany, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future ofAi’s Impact on Society”, inTowards a NewEnlightenment: A Transcendent Decade, Madrid: Turner - BVVA. [Bryson 2019 available online]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant,2017, “Of, for, and by the People: The Legal Lacuna of SyntheticPersons”,Artificial Intelligence and Law, 25(3):273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can MachinesRead Our Minds?”,Minds and Machines, 29(3):461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter tothe Editor”, Letter inThe Press (Christchurch), 13June 1863. [Butler 1863 available online]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and StuartArmstrong (eds.), 2017,The Technological Singularity: Managingthe Journey, (The Frontiers Collection), Berlin, Heidelberg:Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primerand Roadmap”,University of Bologna Law Review, 3(2):180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016,Robot Law, Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920,R.U.R., Prague: Aventium.Translated by Peter Majer and Cathy Porter, London: Methuen,1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von derVergleichbarkeit Zwischen ‘Künstlicher Intelligenz’und ‘Getrennten Intelligenzen’”,Zeitschriftfür philosophische Forschung, 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future,We Must Democratise AI”,The Guardian , 04 January2019. [Cave 2019 available online]
  • Chalmers, David J., 2010, “The Singularity: A PhilosophicalAnalysis”,Journal of Consciousness Studies,17(9–10): 7–65. [Chalmers 2010 available online]
  • Christman, John, 2003 [2018], “Autonomy in Moral andPolitical Philosophy”, (Spring 2018)Stanford Encyclopediaof Philosophy (EDITION NEEDED), URL = <https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/>
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards aSocial-Relational Justification of Moral Consideration”,Ethics and Information Technology, 12(3): 209–221.doi:10.1007/s10676-010-9235-5
  • –––, 2012,Growing Moral Relations: Critiqueof Moral Status Ascription, London: Palgrave.doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future ofICT-Mediated Elderly Care: A Response to Doom Scenarios”,AI& Society, 31(4): 455–462.doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by aRelational Ethics? Growing a Relational Approach to the Moral Standingof Plants, Robots and Other Non-Humans”, inPlant Ethics:Concepts and Applications, Angela Kallhoff, Marcello Di Paola,and Maria Schörgenhumer (eds.), London: Routledge,110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spotin AI Research”,Nature, 538(7625): 311–313.doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to ArtificialIntelligence”, inMachines We Trust, Marcello Pelilloand Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [Cristianini forthcoming – preprint available online]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like ScepticalTheists and Why It Matters”,Minds and Machines, 25(3):231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and theRetribution Gap”,Ethics and Information Technology,18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy:Reality, Resistance and Accommodation”,Philosophy &Technology, 29(3): 245–268.doi:10.1007/s13347-015-0211-1
  • –––, 2019a,Automation and Utopia: HumanFlourishing in a World without Work, Cambridge, MA: HarvardUniversity Press.
  • –––, 2019b, “The Philosophical Case forRobot Friendship”,Journal of Posthuman Studies, 3(1):5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots intothe Moral Circle: A Defence of Ethical Behaviourism”,Science and Engineering Ethics, first online: 20 June 2019.doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017,Robot Sex:Social and Ethical Implications, Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation ComputingTechnology: A Strategic Plan for Its Development an Application toCritical Problems in Defense”, ADA141982, 28 October 1983. [DARPA 1983 available online]
  • Dennett, Daniel C, 2017,From Bacteria to Bach and Back: TheEvolution of Minds, New York: W.W. Norton.
  • Devlin, Kate, 2018,Turned On: Science, Sex and Robots,London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability:Journalistic Investigation of Computational Power Structures”,Digital Journalism, 3(3): 398–415.doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence:Introduction to the Special Issue”,Ethics and InformationTechnology, 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015,The Master Algorithm: How the Quest forthe Ultimate Learning Machine Will Remake Our World, London:Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal,Carolina Gutierrez-Ruiz, Alexandre Duclos, and FarshidAmirabdollahian, 2014, “Ethical Dimensions of Human-RobotInteractions in the Care of Older People: Insights from 21 FocusGroups Convened in the UK, France and the Netherlands”, inInternational Conference on Social Robotics 2014, MichaelBeetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (LectureNotes in Artificial Intelligence 8755), Cham: Springer InternationalPublishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy,Fairness, and Limits of Predicting Recidivism”,ScienceAdvances, 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence:Comprehensive AI Services as General Intelligence”, FHITechnical Report, 2019-1, 1-210. [Drexler 2019 available online]
  • Dreyfus, Hubert L., 1972,What Computers Still Can’t Do:A Critique of Artificial Reason, second edition, Cambridge, MA:MIT Press 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986,Mind over Machine: The Power of Human Intuition and Expertise inthe Era of the Computer, New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith,2006,Calibrating Noise to Sensitivity in Private DataAnalysis, Berlin, Heidelberg.
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and EricSteinhart (eds.), 2012,Singularity Hypotheses: A Scientific andPhilosophical Assessment, (The Frontiers Collection), Berlin,Heidelberg: Springer Berlin Heidelberg.doi:10.1007/978-3-642-32560-1
  • Eubanks, Virginia, 2018,Automating Inequality: How High-TechTools Profile, Police, and Punish the Poor, London: St.Martin’s Press.
  • European Commission, 2013, “How Many People Work inAgriculture in the European Union? An Answer Based on Eurostat DataSources”,EU Agricultural Economics Briefs, 8 (July2013). [Anonymous 2013 available online]
  • European Group on Ethics in Science and New Technologies, 2018,“Statement on Artificial Intelligence, Robotics and‘Autonomous’ Systems”, 9 March 2018, EuropeanCommission, Directorate-General for Research and Innovation, UnitRTD.01. [European Group 2018 available online]
  • Ferguson, Andrew Guthrie, 2017,The Rise of Big Data Policing:Surveillance, Race, and the Future of Law Enforcement, New York:NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? MachinesSeem to Be Getting Smarter and Smarter and Much Better at Human Jobs,yet True AI Is Utterly Implausible. Why?”,Aeon, 9 May2016. URL = <Floridi 2016 available online>
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila,Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin,Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and EffyVayena, 2018, “AI4People—An Ethical Framework for a GoodAI Society: Opportunities, Risks, Principles, andRecommendations”,Minds and Machines, 28(4):689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Moralityof Artificial Agents”,Minds and Machines, 14(3):349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What IsData Ethics?”,Philosophical Transactions of the RoyalSociety A: Mathematical, Physical and Engineering Sciences,374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and theDoctrine of the Double Effect”,Oxford Review, 5:5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019,“‘I’ll Take Care of You,’ Said theRobot”,Paladyn, Journal of Behavioral Robotics, 10(1):77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent:Is Consent to Sex between a Robot and a Human Conceivable, Possible,and Desirable?”,Artificial Intelligence and Law,25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and theConcept of a Person”,The Journal of Philosophy, 68(1):5–20.
  • Frey, Carl Benedict, 2019,The Technology Trap: Capital,Labour, and Power in the Age of Automation, Princeton, NJ:Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “TheFuture of Employment: How Susceptible Are Jobs toComputerisation?”, Oxford Martin School Working Papers, 17September 2013. [Frey and Osborne 2013 available online]
  • Ganascia, Jean-Gabriel, 2017,Le Mythe De LaSingularité, Paris: Éditions du Seuil.
  • EU Parliament, 2016, “Draft Report with Recommendations tothe Commission on Civil Law Rules on Robotics (2015/2103(Inl))”,Committee on Legal Affairs, 10.11.2016.https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data ProtectionRegulation: Regulation (EU) 2016/679 of the European Parliament and ofthe Council of 27 April 2016 on the Protection of Natural Persons withRegard to the Processing of Personal Data and on the Free Movement ofSuch Data, and Repealing Directive 95/46/Ec”,OfficialJournal of the European Union, 119 (4 May 2016), 1–88. [Regulation (EU) 2016/679 available online]
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and thePromise of Artificial Intelligence”,Journal of the AmericanAcademy of Religion, 76(1): 138–166.doi:10.1093/jaarel/lfm101
  • –––, 2010,Apocalyptic AI: Visions of Heavenin Robotics, Artificial Intelligence, and Virtual Reality,Oxford: Oxford University Press.doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration inRobot Ethics”,ACM SIGCAS Computers and Society, 45(3):274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure,2017, “Report of the Ethics Commission: Automated and ConnectedDriving”, June 2017, 1–36. [GFMTDI 2017 available online]
  • Gertz, Nolen, 2018,Nihilism and Technology, London:Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”,Midwest Studies in Philosophy, 3(1): 133–147.doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (VersionGrand Public)”, inL’EncyclopédiePhilosophique, Maxime Kristanek (ed.), accessed: 16 April 2020,URL = <Gibert 2019 available online>
  • Giubilini, Alberto and Julian Savulescu, 2018, “TheArtificial Moral Advisor. The ‘Ideal Observer’ MeetsArtificial Intelligence”,Philosophy & Technology,31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the FirstUltraintelligent Machine”, inAdvances in Computers 6,Franz L. Alt and Morris Rubinoff (eds.), New York & London:Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016,Deep Learning, Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European UnionRegulations on Algorithmic Decision-Making and a ‘Right toExplanation’”,AI Magazine, 38(3): 50–57.doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progresson Labour Markets: Policy Challenges”,Oxford Review ofEconomic Policy, 34(3): 362–375.doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “JobPolarization in Europe”,American Economic Review,99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “PrimingUnconscious Racial Stereotypes about Adolescent Offenders”,Law and Human Behavior, 28(5): 483–504.doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and ShouldRobots Have Rights?”,Ethics and InformationTechnology, 20(2): 87–99.doi:10.1007/s10676-017-9442-4
  • –––, 2018b,Robot Rights, Boston, MA:MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014,MachineMorality: The Machine as Moral Agent and Patient special issue ofPhilosophy & Technology, 27(1): 1–142.
  • Häggström, Olle, 2016,Here Be Dragons: Science,Technology and the Future of Humanity, Oxford: Oxford UniversityPress. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “MoralResponsibility of Robots and Hybrid Agents”,TheMonist, 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016,The Age of Em: Work, Love and Life WhenRobots Rule the Earth, Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013,The Ethics of Risk: Ethical Analysisin an Uncertain World, New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical RiskAnalysis (eRA)”,Risk Analysis, 38(9): 1820–1829.doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016,Homo Deus: A Brief History ofTomorrow, New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017,Capitalism withoutCapital: The Rise of the Intangible Economy, Princeton, NJ:Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010,Technical Functions:On the Use and Design of Artefacts, (Philosophy of Engineeringand Technology 1), Dordrecht: Springer Netherlands.doi:10.1007/978-90-481-3900-2
  • IEEE, 2019,Ethically Aligned Design: A Vision forPrioritizing Human Well-Being with Autonomous and IntelligentSystems (First Version), <IEEE 2019 available online>.
  • Jasanoff, Sheila, 2016,The Ethics of Invention: Technologyand the Human Future, New York: Norton.
  • Jecker, Nancy S., forthcoming,Ending Midlife Bias: New Valuesfor Old Age, New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “TheGlobal Landscape of AI Ethics Guidelines”,Nature MachineIntelligence, 1(9): 389–399.doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “ReframingAI Discourse”,Minds and Machines, 27(4):575–590. doi:10.1007/s11023-017-9417-6
  • Kahnemann, Daniel, 2011,Thinking Fast and Slow, London:Macmillan.
  • Kamm, Frances Myrna, 2016,The Trolley Problem Mysteries,Eric Rakowski (ed.), Oxford: Oxford University Press.doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787,Kritik der reinen Vernunft.Translated asCritique of Pure Reason, Norman Kemp Smith(trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for theEthics of Automated Vehicles”,Science and EngineeringEthics, 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for OurGrandchildren”. Reprinted in hisEssays in Persuasion,New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends:Philosophically, Intellectually—in Every Way—Human SocietyIs Unprepared for the Rise of Artificial Intelligence”,TheAtlantic, June 2018. [Kissinger 2018 available online]
  • Kurzweil, Ray, 1999,The Age of Spiritual Machines: WhenComputers Exceed Human Intelligence, London: Penguin.
  • –––, 2005,The Singularity Is Near: WhenHumans Transcend Biology, London: Viking.
  • –––, 2012,How to Create a Mind: The Secretof Human Thought Revealed, New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, EnzoLucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: AChatbot for Self-Compassion”, inProceedings of the 2019 CHIConference on Human Factors in Computing Systems—CHI’19, Glasgow, Scotland: ACM Press, 1–13.doi:10.1145/3290605.3300932
  • Levy, David, 2007,Love and Sex with Robots: The Evolution ofHuman-Robot Relationships, New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A GeneralSurvey”,Artificial intelligence: A Paper Symposion,London: Science Research Council. [Lighthill 1973 available online]
  • Lin, Patrick, 2016, “Why Ethics Matters for AutonomousCars”, inAutonomous Driving, Markus Maurer, J.Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin,Heidelberg: Springer Berlin Heidelberg, 69–85.doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017,Robot Ethics 2.0: From Autonomous Cars to ArtificialIntelligence, New York: Oxford University Press.doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008,“Autonomous Military Robotics: Risk, Ethics, and Design”,ONR report, California Polytechnic State University, San Luis Obispo,20 December 2008), 112 pp. [Lin, Bekey, and Abney 2008 available online]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, RobertChristopher Garrett, John Hoare, and Michael Kopack, 2012,“Explaining Robot Actions”, inProceedings of theSeventh Annual ACM/IEEE International Conference on Human-RobotInteraction—HRI ’12, Boston, MA: ACM Press,187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017,The Ethics of Surveillance: AnIntroduction, London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini,Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019,“Dark Patterns at Scale: Findings from a Crawl of 11K ShoppingWebsites”,Proceedings of the ACM on Human-ComputerInteraction, 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985,The Society of Mind, New York:Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with MoralCapacities? A Research Design and Its Implementation in a GeriatricCare System”,Artificial Intelligence, 278: art.103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “TheEthics of Big Data: Current and Foreseeable Issues in BiomedicalContexts”,Science and Engineering Ethics, 22(2):303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, andDifficulty of Machine Ethics”,IEEE IntelligentSystems, 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990,Mind Children, Cambridge, MA:Harvard University Press.
  • –––, 1998,Robot: Mere Machine toTranscendent Mind, New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systemsin Real-World Environments: Less Control, More Flexibility and BetterInteraction”,Cognitive Computation, 4(3):212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots AreProbably Good News”, InDrones and Responsibility: Legal,Philosophical and Socio-Technical Perspectives on the Use of RemotelyControlled Weapons, Ezio Di Nucci and Filippo Santoni de Sio(eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b,Risks of ArtificialIntelligence, London: Chapman & Hall - CRC Press.doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond?Zukünftiger Fortschritt in der KI”,Medienkorrespondenz, 20: 5–15. [Müller 2018 available online]
  • –––, 2020, “Measuring Progress inRobotics: Benchmarking and the ‘Measure-TargetConfusion’”, inMetrics of Sensory Motor Coordinationand Integration in Robots and Animals, Fabio Bonsignorio, ElenaMessina, Angel P. del Pobil, and John Hallam (eds.), (CognitiveSystems Monographs 36), Cham: Springer International Publishing,169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a,Can Machines Think?Fundamental Problems of Artificial Intelligence, New York: OxfordUniversity Press.
  • ––– (ed.), forthcoming-b,Oxford Handbook ofthe Philosophy of Artificial Intelligence, New York: OxfordUniversity Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “FutureProgress in Artificial Intelligence: A Survey of ExpertOpinion”, inFundamental Issues of ArtificialIntelligence, Vincent C. Müller (ed.), Cham: SpringerInternational Publishing, 555–572.doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019,Digital Minimalism: On Living Better withLess Technology, London: Penguin.
  • Nørskov, Marco (ed.), 2017,Social Robots, London:Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to AutomatedSystems: Reflections on Human–Robot Collaborations andResponsibility-Loci”,Science and Engineering Ethics,24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes withSelf-Driving Cars: A Roadmap, II”,Philosophy Compass,13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to LoveRobots: Is Mutual Love with a Robot Possible?”, in Danaher andMcArthur 2017: 219–243.
  • O’Connell, Mark, 2017,To Be a Machine: Adventures amongCyborgs, Utopians, Hackers, and the Futurists Solving the ModestProblem of Death, London: Granta.
  • O’Neil, Cathy, 2016,Weapons of Math Destruction: HowBig Data Increases Inequality and Threatens Democracy, Largo, ML:Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and theGreater Human Good”,Journal of Experimental &Theoretical Artificial Intelligence, 26(3): 303–315.doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020,The Precipice: Existential Risk and theFuture of Humanity, London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming,“The Ethics of the Ethics of AI”, inOxford Handbookof Ethics of Artificial Intelligence, Markus D. Dubber, FrankPasquale, and Sunnit Das (eds.), New York: Oxford.
  • Rawls, John, 1971,A Theory of Justice, Cambridge, MA:Belknap Press.
  • Rees, Martin, 2018,On the Future: Prospects forHumanity, Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, theProstituted, and the Rights of Machines”,IEEE Technologyand Society Magazine, 35(2): 46–53.doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”,Proceedings of the Aristotelian Society, 117(2):187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016,JustOrdinary Robots: Automation from Love to War, Boca Raton, LA: CRCPress, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019,Human Compatible: ArtificialIntelligence and the Problem of Control, New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015,“Research Priorities for Robust and Beneficial ArtificialIntelligence”,AI Magazine, 36(4): 105–114.doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for TermsRelated to Driving Automation Systems for on-Road MotorVehicles”, J3016_201806, 15 June 2018. [SAE International 2015 available online]
  • Sandberg, Anders, 2013, “Feasibility of Whole BrainEmulation”, inPhilosophy and Theory of ArtificialIntelligence, Vincent C. Müller (ed.), (Studies in AppliedPhilosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg:Springer Berlin Heidelberg, 251–264.doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at theBottom: The Economics, Risk and Ethics of Time Compression”,Foresight, 21(1): 84–99.doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018,“Meaningful Human Control over Autonomous Systems: APhilosophical Account”,Frontiers in Robotics and AI,5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015,Data and Goliath: The Hidden Battles toCollect Your Data and Control Your World, New York: W. W.Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”,Behavioral and Brain Sciences, 3(3): 417–424.doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, SureshVenkatasubramanian, and Janet Vertesi, 2019, “Fairness andAbstraction in Sociotechnical Systems”, inProceedings ofthe Conference on Fairness, Accountability, andTransparency—FAT* ’19, Atlanta, GA: ACM Press,59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018,Building and Dwelling: Ethics for theCity, London: Allen Lane.
  • Shanahan, Murray, 2015,The Technological Singularity,Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, KillerRobots and Human Dignity”,Ethics and InformationTechnology, 21(2): 75–87.doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights andWrongs of Robot Care”, inRobot Ethics: The Ethical andSocial Implications of Robotics, Patrick Lin, Keith Abney andGeorge Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Perrault Raymond, Brynjolfsson Erik, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [Shoham et al. 2018 available online]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, IoannisAntonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre,Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan,and Demis Hassabis, 2018, “A General Reinforcement LearningAlgorithm That Masters Chess, Shogi, and Go through Self-Play”,Science, 362(6419): 1140–1144.doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic ProblemSolving: The Next Advance in Operations Research”,Operations Research, 6(1): 1–10.doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “JustWar and Robots’ Killings”,The PhilosophicalQuarterly, 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of BigData”,PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”,Journal ofApplied Philosophy, 24(1): 62–77.doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: ADystopian Future?”,AI & Society, 31(4):445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent DanielMittelstadt, 2016, “The Ethics of Computing: A Survey of theComputing-Oriented Literature”,ACM Computing Surveys,48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics andPrivacy in AI and Big Data: Implementing Responsible Research andInnovation”,IEEE Security Privacy, 16(3):26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing -toward Legal Rights for Natural Objects”,SouthernCalifornia Law Review, 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, OrenEtzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, EceKamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press,AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016,“Artificial Intelligence and Life in 2030”, One HundredYear Study on Artificial Intelligence: Report of the 2015–2016Study Panel, Stanford University, Stanford, CA, September 2016. [Stone et al. 2016 available online]
  • Strawson, Galen, 1998, “Free Will”, inRoutledgeEncyclopedia of Philosophy, Taylor & Francis.doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethicsof Building a Love Machine”,IEEE Transactions on AffectiveComputing, 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019,“Technology, Autonomy, and Manipulation”,InternetPolicy Review, 8(2): 30 June 2019. [Susser, Roessler, and Nissenbaum 2019 available online]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI CanBe a Force for Good”,Science, 361(6404):751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What IsResponsible and Sustainable Data Science?”, Big Data &Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – KeyThemes, Concerns & Recommendations for European Research andInnovation: Summary of Consultation with MultidisciplinaryExperts”, June. doi:10.5281/zenodo.1303252 [Taylor, et al. 2018 available online]
  • Tegmark, Max, 2017,Life 3.0: Being Human in the Age ofArtificial Intelligence, New York: Knopf.
  • Thaler, Richard H and Sunstein, Cass, 2008,Nudge: Improvingdecisions about health, wealth and happiness, New York:Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold WarThat Threatens Us All”,Wired, 23 November 2018. [Thompson and Bremmer 2018 available online]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and theTrolley Problem”,Monist, 59(2): 204–217.doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of aMore-Than-Human Moral World”, in Anderson and Anderson 2011:115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J, 2019, “Executive Order on MaintainingAmerican Leadership in Artificial Intelligence”, 11 February2019. [Trump 2019 available online]
  • Turner, Jacob, 2019,Robot Rules: Regulating ArtificialIntelligence, Berlin: Springer.doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016,Roboethics: A NavigatingOverview, (Intelligent Systems, Control and Automation: Scienceand Engineering 79), Cham: Springer International Publishing.doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017,Technology and the Virtues: APhilosophical Guide to a Future Worth Wanting, Oxford: OxfordUniversity Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004,“An Explainable Artificial Intelligence System for Small-UnitTactical Behavior”, inProceedings of the 16th Conference onInnovative Applications of Artifical Intelligence,(IAAI’04), San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation, London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics, 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society, 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics, 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things, Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review, 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law, 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology, 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics, London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence, Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy, London: Nesta. [Westlake 2014 available online]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [Whittaker et al. 2018 available online]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [Whittlestone 2019 available online]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, special issue of Proceedings of the IEEE, 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/doing-allowing/>
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security, Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation, Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper, 3339 (25 June 2019): 1–19. [Zayed and Loft 2019 available online]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology, 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, New York: Public Affairs.

Other Internet Resources

References

Research Organizations

Conferences

Policy Documents

Other Relevant Pages

Acknowledgments

Early drafts of this article were discussed with colleagues at the IDEA Centre of the University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen, Gabriela Arriagada-Bruneau and Charlotte Stix. Later drafts were made publicly available on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could locate. These later drafts were presented to audiences at the INBOTS Project Meeting (Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics group (Eindhoven 2019); many thanks for their comments.

I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter, Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F. Veneman, Jeffrey White, and Xinyi Wu.

Parts of the work on this article have been supported by the European Commission under the INBOTS project (H2020 grant no. 780073).

Copyright © 2020 by
Vincent C. Müller <vincent.c.mueller@fau.de>

