  1. Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3): 1-11. (30 citations)
     Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of much concern. In this article, I propose a more optimistic view on artificial intelligence, raising two challenges for responsibility gap pessimists. First, proponents of responsibility gaps must say more about when responsibility gaps occur. Once we accept a difficult-to-reject plausibility constraint on the emergence of such gaps, it becomes apparent that the situations in which responsibility gaps occur are unclear. Second, assuming that responsibility gaps occur, more must be said about why we should be concerned about such gaps in the first place. I proceed by defusing what I take to be the two most important concerns about responsibility gaps, one relating to the consequences of responsibility gaps and the other relating to violations of jus in bello.
  2. A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4): 1-26. (2 citations)
     As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot’s autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability under the blank check proposal is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.
  3. Uncovering the gap: challenging the agential nature of AI responsibility problems. Joan Llorca Albareda - 2025 - AI and Ethics: 1-14.
     In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. Systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, the responsibility gap will boil down to one of these two moral problems. Moreover, I will argue that this conclusion reveals an underlying weakness in AI ethics: the lack of attention to the question of the disciplinary boundaries of AI ethics. This absence has made it difficult to identify the specifics of the responsibility gap arising from new AI systems as compared to the responsibility gaps found in other areas of applied ethics. Lastly, I will outline these specific aspects.
  4. “All AIs are Psychopaths”? The Scope and Impact of a Popular Analogy. Elina Nerantzi - 2025 - Philosophy and Technology 38 (1): 1-24.
     Artificial Intelligence (AI) agents are often compared to psychopaths in popular news articles. The headlines are ‘eye-catching’, but the questions of what this analogy means or why it matters are hardly answered. The aim of this paper is to take this popular analogy ‘seriously’. By that, I mean two things. First, I aim to explore the scope of this analogy, i.e. to identify and analyse the shared properties of AI agents and psychopaths, namely, their lack of moral emotions and their capacity for instrumental rationality. Second, I aim to examine the impact of the analogy. I argue that both agents, as ‘amoral calculators’, present the perfect candidates to revisit two long-standing debates on moral and criminal responsibility, regarding the necessity of moral emotions for ‘moral-agent-capacity responsibility’ and the necessity of ‘moral-agent-capacity responsibility’ for criminal responsibility. Finally, cross-examining the debates on the moral and criminal responsibility of psychopaths and AI agents is instructive and revealing: instructive, since the moral and legal treatment of psychopaths can be telling about the future treatment of AI agents (and vice versa); revealing, since it makes explicit our often-implicit philosophical commitments on the criteria of moral agency and the overarching purpose of criminal law.
  5. A Moral Bind? — Autonomous Weapons, Moral Responsibility, and Institutional Reality. Bartlomiej Chomanski - 2023 - Philosophy and Technology 36 (2): 1-14.
     In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard address one of the most vexing issues in the current ethics of technology: how to close the so-called “responsibility gap”. Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts ex ante responsibility for those actions of the AWS that could not have been predicted or intended (in such cases, the human being takes what the authors call the “moral gambit”). The authors then propose several institutional safeguards to implement in order to ensure that the moral gambit is taken in a fair and just way. This paper explores this suggestion in the context of the institutional settings within which AWSs are most likely to be deployed. It raises some concerns as to the feasibility of Taddeo and Blanchard’s proposal, in light of recent empirical work on the incentive structures likely to exist within militaries. It then presents a potential problem that may arise if the accountability mechanisms are successfully implemented.
