As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot’s autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability in the blank check is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.
In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. A systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, the responsibility gap will boil down to one of these two moral problems. Moreover, I will argue that this conclusion reveals an underlying weakness in AI ethics: the lack of attention to the question of the disciplinary boundaries of AI ethics. This absence has made it difficult to identify what is specific to the responsibility gap arising from new AI systems as compared to the responsibility gaps found in other areas of applied ethics. Lastly, I will be concerned with outlining these specific aspects.
Artificial Intelligence (AI) agents are often compared to psychopaths in popular news articles. The headlines are eye-catching, but the questions of what this analogy means and why it matters are hardly ever answered. The aim of this paper is to take this popular analogy seriously. By that, I mean two things. First, I aim to explore the scope of the analogy, i.e. to identify and analyse the shared properties of AI agents and psychopaths, namely, their lack of moral emotions and their capacity for instrumental rationality. Second, I aim to examine the impact of the analogy. I argue that both agents, as ‘amoral calculators’, present the perfect candidates for revisiting two long-standing debates on moral and criminal responsibility: whether moral emotions are necessary for ‘moral-agent-capacity responsibility’, and whether ‘moral-agent-capacity responsibility’ is necessary for criminal responsibility. Finally, cross-examining the debates on the moral and criminal responsibility of psychopaths and AI agents is both instructive and revealing: instructive, because the moral and legal treatment of psychopaths can be telling about the future treatment of AI agents (and vice versa), and revealing, because it makes explicit our often-implicit philosophical commitments on the criteria of moral agency and the overarching purpose of criminal law.
In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard address one of the most vexing issues in the current ethics of technology: how to close the so-called “responsibility gap”. Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts ex ante responsibility for those actions of the AWS that could not have been predicted or intended (in such cases, the human being takes what the authors call the “moral gambit”). The authors then propose several institutional safeguards to ensure that the moral gambit is taken in a fair and just way. This paper explores their suggestion in the context of the institutional settings within which AWSs are most likely to be deployed. It raises some concerns about the feasibility of Taddeo and Blanchard’s proposal, in light of recent empirical work on the incentive structures likely to exist within militaries. It then presents a potential problem that may arise in the event that the accountability mechanisms are successfully implemented.