  • Understanding Artificial Agency. Leonard Dung - 2025 - Philosophical Quarterly 75 (2):450-472.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more informative than alternatives. More speculatively, it may help to illuminate two important emerging questions in AI ethics: 1. Can agency contribute to the moral status of non-human beings, and how? 2. When and why might AI systems exhibit power-seeking behaviour and does this pose an existential risk to humanity?
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta, Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and finally, what policy consequences may be drawn.
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find the suggestions ultimately unmotivated, the discussion shows that our epistemic condition with respect to the moral status of others does raise problems, and that the human tendency to empathise with things that do not have moral status should be taken seriously—we suggest that it produces a “derived moral status”. Finally, it turns out that there is typically no individual in real AI that could even be said to be the bearer of moral status. Overall, there is no reason to think that robot rights are an issue now.
  • The Measurement Problem of Consciousness. Heather Browning & Walter Veit - 2020 - Philosophical Topics 48 (1):85-108.
    This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence and properties of consciousness? Most methods are currently developed through evaluation of the presence of consciousness in humans, and here we argue that there are particular problems in the application of these methods to nonhuman cases—what we call the indicator validity problem and the extrapolation problem. The first is a problem with the application of indicators developed using the differences between conscious and unconscious processing in humans to the identification of other conscious vs. nonconscious organisms or systems. The second is a problem in extrapolating any indicators developed in humans or other organisms to artificial systems. However, while pressing ethical concerns add urgency to the attribution of consciousness and its attendant moral status to nonhuman animals and intelligent machines, we cannot wait for certainty, and we advocate the use of a precautionary principle to avoid causing unintentional harm. We also intend that the considerations and limitations discussed in this paper can be used to further analyze and refine the methods of consciousness science with the hope that one day we may be able to solve the measurement problem of consciousness.
  • Robot Betrayal: a guide to the ethics of robotic deception. John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
  • Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 of the paper briefly lays out the current state of the science of consciousness and its limitations insofar as these pertain to machine consciousness, and claims that there are no obvious consensus frameworks to inform public opinion on AI consciousness. Section 2 examines the rise of conversational chatbots or Social AI, and argues that in many cases, these elicit strong and sincere attributions of consciousness, mentality, and moral status from users, a trend likely to become more widespread. Section 3 presents an inconsistent triad for theories that attempt to link consciousness, behaviour, and moral status, noting that the trends in Social AI systems will likely make the inconsistency of these three premises more pressing. Finally, Section 4 presents some limited suggestions for how consciousness and AI research communities should respond to the gap between expert opinion and folk judgment.
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns. Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions must be addressed, including what forms of machine consciousness would be morally relevant forms of consciousness, and what the ethical implications of morally relevant forms of machine consciousness would be. While admittedly part of this reflection is speculative in nature, it clearly underlines the need for a detailed conceptual analysis of the concept of artificial consciousness and stresses the imperative to avoid building machines with morally relevant forms of consciousness. The article ends with some suggestions for potential future regulation of machine consciousness.
  • Preserving the Normative Significance of Sentience. Leonard Dung - 2024 - Journal of Consciousness Studies 31 (1):8-30.
    According to an orthodox view, the capacity for conscious experience (sentience) is relevant to the distribution of moral status and value. However, physicalism about consciousness might threaten the normative relevance of sentience. According to the indeterminacy argument, sentience is metaphysically indeterminate while indeterminacy of sentience is incompatible with its normative relevance. According to the introspective argument (by François Kammerer), the unreliability of our conscious introspection undercuts the justification for belief in the normative relevance of consciousness. I defend the normative relevance of sentience against these objections. First, I demonstrate that physicalists only have to concede a limited amount of indeterminacy of sentience. This moderate indeterminacy is in harmony with the role of sentience in determining moral status. Second, I argue that physicalism gives us no reason to expect that introspection is unreliable with respect to the normative relevance of consciousness.
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the “consciousness criterion,” which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why it is generally supposed that consciousness underlies the morally relevant properties (such as sentience) and then, I respond to some of the common objections against this view. Then, I examine three inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.
  • Will intelligent machines become moral patients? Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are no different from traditional artifacts in this respect. To make this argument, I examine the feature of AIs that enables them to improve their intelligence, i.e., machine learning. I argue that there is no reason to believe that future advances in machine learning will take AIs closer to having a good of their own. I thus argue that concerns about the moral status of future AIs are unwarranted. Nothing about the nature of intelligent machines makes them a better candidate for acquiring moral patiency than the traditional artifacts whose moral status does not concern us.
  • The Weirdness of the World. Eric Schwitzgebel - 2024 - Princeton University Press.
    How all philosophical explanations of human consciousness and the fundamental structure of the cosmos are bizarre—and why that’s a good thing. Do we live inside a simulated reality or a pocket universe embedded in a larger structure about which we know virtually nothing? Is consciousness a purely physical matter, or might it require something extra, something nonphysical? According to the philosopher Eric Schwitzgebel, it’s hard to say. In The Weirdness of the World, Schwitzgebel argues that the answers to these fundamental questions lie beyond our powers of comprehension. We can be certain only that the truth—whatever it is—is weird. Philosophy, he proposes, can aim to open—to reveal possibilities we had not previously appreciated—or to close, to narrow down to the one correct theory of the phenomenon in question. Schwitzgebel argues for a philosophy that opens. According to Schwitzgebel’s “Universal Bizarreness” thesis, every possible theory of the relation of mind and cosmos defies common sense. According to his complementary “Universal Dubiety” thesis, no general theory of the relationship between mind and cosmos compels rational belief. Might the United States be a conscious organism—a conscious group mind with approximately the intelligence of a rabbit? Might virtually every action we perform cause virtually every possible type of future event, echoing down through the infinite future of an infinite universe? What, if anything, is it like to be a garden snail? Schwitzgebel makes a persuasive case for the thrill of considering the most bizarre philosophical possibilities.
  • In search of the moral status of AI: why sentience is a strong argument. Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the particular argument, which leads us to move to a different one. We leave the idea of indirect duties aside since these duties do not imply considering an AI system for its own sake. The paper rejects the relational argument and the argument from intelligence. The argument from life may lead us to grant a moral status to an AI system, but only in a weak sense. Sentience, by contrast, is a strong argument for the moral status of an AI system—based, among other things, on the Aristotelian principle of equality: that the same cases should be treated in the same way. The paper points out, however, that no AI system is sentient given the current level of technological development.
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for the concept of moral responsibility. The paper starts by highlighting the important difficulties in assigning responsibility to either technologies themselves or to their developers. Top-down and bottom-up approaches to moral responsibility are then contrasted, as we explore how they could inform debates about Responsible AI. We highlight the limits of the former ethical approaches and build the case for classical Aristotelian virtue ethics. We show that two building blocks of Aristotle’s ethics, dianoetic virtues and the context of actions, although largely ignored in the literature, can shed light on how we could think of moral responsibility for both AI and humans. We end by exploring the practical implications of this particular understanding of moral responsibility along the triadic dimensions of ethics by design, ethics in design and ethics for designers.
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either human or AI agents engage in virtuous or vicious behavior and experiment participants then judge their level of virtue or vice. The scenarios represent different virtue ethics domains of truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and the reasoning and explanations for the attributions are varied and more complex. On “relational” views of membership in the moral community, virtuous machines would indeed be included, even if the attributions to them are weakened. Hence, while our moral relationships with artificial agents may be of the same types, they may yet remain substantively different from our relationships to human beings.
  • Robots in the Workplace: a Threat to—or Opportunity for—Meaningful Work? Jilles Smids, Sven Nyholm & Hannah Berkers - 2020 - Philosophy and Technology 33 (3):503-522.
    The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right to access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects of meaningful work: pursuing a purpose, social relationships, exercising skills and self-development, self-esteem and recognition, and autonomy. For each aspect, we analyze how the introduction of robots into the workplace may diminish or enhance the meaningfulness of work. We also identify a few ethical issues that emerge from our analysis. We conclude that robotization of the workplace can have both significant negative and positive effects on meaningful work. Our findings about ways in which robotization of the workplace can be a threat or opportunity for meaningful work can serve as the basis for ethical arguments for how to—and how not to—implement robots into workplaces.
  • Debate: What is Personhood in the Age of AI? David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36 (2):473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, we hope to open new views upon urgent and much-discussed questions that, quite soon, may confront us in our daily lives.
  • Represent me: please! Towards an ethics of digital twins in medicine. Matthias Braun - 2021 - Journal of Medical Ethics 47 (6):394-400.
    Simulations are used in very different contexts and for very different purposes. An emerging development is the possibility of using simulations to obtain a more or less representative reproduction of organs or even entire persons. Such simulations are framed and discussed using the term ‘digital twin’. This paper unpacks and scrutinises the current use of such digital twins in medicine and the ideas embedded in this practice. First, the paper maps the different types of digital twins. A special focus is put on the concrete challenges inherent in the interactions between persons and their digital twin. Second, the paper addresses the questions of how far a digital twin can represent a person and what the consequences of this may be. Against the background of these two analytical steps, the paper defines first conditions for digital twins to take on an ethically justifiable form of representation.
  • Moral Status and Intelligent Robots. John-Stewart Gordon & David J. Gunkel - 2022 - Southern Journal of Philosophy 60 (1):88-117.
  • How to deal with risks of AI suffering. Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation. Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.
  • Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed. Leonard Dung - 2022 - Science and Engineering Ethics 28 (6):1-15.
    According to a common view, sentience is necessary and sufficient for moral status. In other words, whether a being has intrinsic moral relevance is determined by its capacity for conscious experience. The epistemic objection derives from our profound uncertainty about sentience. According to this objection, we cannot use sentience as a criterion to ascribe moral status in practice because we won’t know in the foreseeable future which animals and AI systems are sentient while ethical questions regarding the possession of moral status are urgent. Therefore, we need to formulate an alternative criterion. I argue that the epistemic objection is dissolved once one clearly distinguishes between the question what determines moral status and what criterion should be employed in practice to ascribe moral status. Epistemic concerns are irrelevant to the former question and—I will argue—criteria of moral status have inescapably to be based on sentience, if one concedes that sentience determines moral status. It follows that doubts about our epistemic access to sentience cannot be used to motivate an alternative criterion of moral status. If sentience turns out to be unknowable, then moral status is unknowable. However, I briefly advocate against such strong pessimism.
  • Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we critically explore the possibilities and challenges for virtue ethics from a computational perspective. Drawing on previous conceptual and technical work, we outline a version of artificial virtue based on moral functionalism, connectionist bottom–up learning, and eudaimonic reward. We then describe how core features of the outlined theory can be interpreted in terms of functionality, which in turn informs the design of components necessary for virtuous cognition. Finally, we present a comprehensive framework for the technical development of artificial virtuous agents and discuss how they can be implemented in moral environments.
  • Can we wrong a robot? Nancy S. Jecker - 2023 - AI and Society 38 (1):259-268.
    With the development of increasingly sophisticated sociable robots, robot-human relationships are being transformed. Not only can sociable robots furnish emotional support and companionship for humans, humans can also form relationships with robots that they value highly. It is natural to ask, do robots that stand in close relationships with us have any moral standing over and above their purely instrumental value as means to human ends? We might ask our question this way, ‘Are there ways we can act towards robots that would be wrong to the robot?’ To address this, Part I lays out standard approaches to moral standing: appealing to intrinsic properties, human responses, and values inhering in relationships. Part II explores the third, relational strategy in detail. Looking beyond Western analyses, it considers Eastern philosophy and the environmental philosophy of 'deep ecology' and extends these approaches to sociable robots. Part III examines practical implications for the case of Samantha, a sex robot that was allegedly raped. Part IV identifies and replies to objections.
  • What Matters for Moral Status: Behavioral or Cognitive Equivalence? John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?”—argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. Unfortunately—and I guess this is hardly surprising—I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I will explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.
  • The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical, impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile these two positions. The core of this proposal is that moral individualism and moral relationalism are best understood as distinct deliberative strategies for attributing moral status to SRs, and that both are worth preserving insofar as they answer to different kinds of practical problems that we face as moral agents.
  • Can Chatbots Preserve Our Relationships with the Dead? Stephen M. Campbell, Pengbo Liu & Sven Nyholm - forthcoming - Journal of the American Philosophical Association.
    Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation of your relationship with a deceased loved one, though it might support one’s continuing bonds with the dead. To the second question, we argue that, in and of themselves, relationships with thanabots cannot benefit us as much as rewarding and healthy intimate relationships with other humans, though we explain why it is difficult to make reliable comparative generalizations about the instrumental value of these relationships.
  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then, this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I myself will raise a new concern about the argument’s use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument’s advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.
  • Technology and moral change: the transformation of truth and trust. Henrik Skaug Sætra & John Danaher - 2022 - Ethics and Information Technology 24 (3):1-16.
    Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely-related ones; increasing their perceived scarcity/abundance; and disrupting traditional value-gatekeepers. This has implications for how we study other, technologically-mediated, value changes.
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage “information ethics” and “social-relational” approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered.
  • Human Brain Organoids: Why There Can Be Moral Concerns If They Grow Up in the Lab and Are Transplanted or Destroyed. Andrea Lavazza & Massimo Reichlin - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):582-596.
    Human brain organoids (HBOs) are three-dimensional biological entities grown in the laboratory in order to recapitulate the structure and functions of the adult human brain. They can be taken to be novel living entities for their specific features and uses. As a contribution to the ongoing discussion on the use of HBOs, the authors identify three sets of reasons for moral concern. The first set of reasons regards the potential emergence of sentience/consciousness in HBOs that would endow them with a moral status whose perimeter should be established. The second set of moral concerns has to do with an analogy with artificial womb technology. The technical realization of processes that are typically connected to the physiology of the human body can create a manipulatory and instrumental attitude that can undermine the protection of what is human. The third set concerns the new frontiers of biocomputing and the creation of chimeras. As far as the new frontier of organoid intelligence is concerned, it is the close relationship of humans with new interfaces having biological components capable of mimicking memory and cognition that raises ethical issues. As far as chimeras are concerned, it is the humanization of nonhuman animals that is worthy of close moral scrutiny. A detailed description of these ethical issues is provided to contribute to the construction of a regulative framework that can guide decisions when considering research in the field of HBOs.
  • The Full Rights Dilemma for AI Systems of Debatable Moral Personhood. Eric Schwitzgebel - 2023 - Robonomics 4.
    An Artificially Intelligent system (an AI) has debatable moral personhood if it is epistemically possible either that the AI is a moral person or that it falls far short of personhood. Debatable moral personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or do not treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties.
  • The Technological Future of Love. Sven Nyholm, John Danaher & Brian D. Earp - 2022 - In André Grahle, Natasha McKeever & Joe Saunders, Philosophy of Love in the Past, Present, and Future. Routledge. pp. 224-239.
    How might emerging and future technologies—sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and ‘gamify’ romantic relationships—change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for “cautious optimism” about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology.
  • Why Indirect Harms do not Support Social Robot Rights. Paula Sweeney - 2022 - Minds and Machines 32 (4):735-749.
    There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from harm. I conclude that there is little evidence to support this claim and that legislation in this area would restrict progress in areas of social care where social robots are a potentially valuable resource.
  • What Makes Work “Good” in the Age of Artificial Intelligence (AI)? Islamic Perspectives on AI-Mediated Work Ethics. Mohammed Ghaly - 2024 - The Journal of Ethics 28 (3):429-453.
    Artificial intelligence (AI) technologies are increasingly creeping into the work sphere, thereby gradually questioning and/or disturbing the long-established moral concepts and norms communities have been using to define what makes work good. Each community, and Muslims are no exception in this regard, has to revisit their moral world to provide well-thought frameworks that can engage with the challenging ethical questions raised by the new phenomenon of AI-mediated work. For a systematic analysis of the broad topic of AI-mediated work ethics from an Islamic perspective, this article focuses on presenting an accessible overview of the “moral world” of work in the Islamic tradition. Three main components of this moral world were selected due to their relevance to the AI context, namely (1) work is inherently good for humans, (2) practising a religiously permitted profession, and (3) maintaining good relations with involved stakeholders. Each of these three components is addressed in a distinct section, followed by a sub-section highlighting the relevance of the respective component to the particular context of AI-mediated work. The article argues that there are no unsurmountable barriers in the Islamic tradition against the adoption of AI technologies in the work sphere. However, important precautions should be considered to ensure that embracing AI will not be at the cost of work-related moral values. The article also highlights how important lessons can be learnt from the positive historical experience of automata that thrived in the Islamic civilization.
  • Moral Uncertainty and Our Relationships with Unknown Minds. John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.
  • Anthropomorphizing Machines: Reality or Popular Myth? Simon Coghlan - 2024 - Minds and Machines 34 (3):1-25.
    According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggests they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.
  • Danaher’s Ethical Behaviourism: An Adequate Guide to Assessing the Moral Status of a Robot? Jilles Smids - 2020 - Science and Engineering Ethics 26 (5):2849-2866.
    This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory on how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similar to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The paper argues against ethical behaviourism by making four main points. First, it is argued that the strongest version of ethical behaviourism understands the theory as relying on inferences to the best explanation when inferring moral status. Second, as a consequence, ethical behaviourism cannot stick with merely looking at the robot’s behaviour, while remaining neutral with regard to the difficult question of which property grounds moral status. Third, not only behavioural evidence ought to play a role in inferring a robot’s moral status, but knowledge of the design process of the robot and of its designer’s intention ought to be taken into account as well. Fourth, knowledge of a robot’s ontology and how that relates to human biology often is epistemically relevant for inferring moral status as well. The paper closes with some concluding observations.
  • Contesting the Consciousness Criterion: A More Radical Approach to the Moral Status of Non-Humans. Joan Llorca-Albareda & Gonzalo Díaz-Cobacho - 2023 - American Journal of Bioethics Neuroscience 14 (2):158-160.
    Numerous and diverse discussions about moral status have taken place over the years. However, this concept was not born until the moral weight of non-human entities was raised. Animal ethics, for i...
  • Humans, Neanderthals, robots and rights. Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark   3 citations  
  • Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence.Michele Farisco,Kathinka Evers &Arleen Salles -2020 -Science and Engineering Ethics 26 (5):2413-2425.
    Ethical reflection on Artificial Intelligence (AI) has become a priority. In this article, we propose a methodological model for a comprehensive ethical analysis of some uses of AI, notably as a replacement of human actors in specific activities. We emphasize the need for conceptual clarification of relevant key terms (e.g., intelligence) in order to undertake such reflection. Against that background, we distinguish two levels of ethical analysis, one practical and one theoretical. Focusing on the state of AI at present, we (...) suggest that regardless of the presence of intelligence, the lack of morally relevant features calls for caution when considering the role of AI in some specific human activities. (shrink)
    Direct download(4 more)  
     
    Export citation  
     
    Bookmark   4 citations  
  • Should We Use Technology to Merge Minds?John Danaher &Sven Nyholm -2021 -Cambridge Quarterly of Healthcare Ethics 30 (4):585-603.
  • Can AI determine its own future?Aybike Tunç -2025 -AI and Society 40 (2):775-786.
    This article investigates the capacity of artificial intelligence (AI) systems to claim the right to self-determination while exploring the prerequisites for individuals or entities to exercise control over their own destinies. The paper delves into the concept of autonomy as a fundamental aspect of self-determination, drawing a distinction between moral and legal autonomy and emphasizing the pivotal role of dignity in establishing legal autonomy. The analysis examines various theories of dignity, with a particular focus on Hannah Arendt’s perspective. Additionally, the (...) article discusses the influence of societal perceptions on AI, illustrating how AI’s interactions in social contexts can shape public attitudes and compliance with legal rights. Ultimately, the article emphasizes the necessity of a comprehensive understanding of the relationship between AI, dignity, and the legal framework governing human rights, stressing the significance of recognizing dignity and societal acceptance in determining AI’s right to self-determination. (shrink)
    Direct download(3 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  • What’s Wrong with Designing People to Serve?Bartek Chomanski -2019 -Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...) in building such artificial agents, their creators cannot help but evince an objectionable attitude akin to the Aristotelian vice of manipulativeness. (shrink)
    Direct download(5 more)  
     
    Export citation  
     
    Bookmark   5 citations  
  • Synthesizing Methuselah: The Question of Artificial Agelessness.Richard B. Gibson -2024 -Cambridge Quarterly of Healthcare Ethics 33 (1):60-75.
    As biological organisms, we age and, eventually, die. However, age’s deteriorating effects may not be universal. Some theoretical entities, due to their synthetic composition, could exist independently of aging; one such entity is artificial general intelligence (AGI). With adequate resource access, an AGI could theoretically be ageless and would be, in some sense, immortal. Yet, this need not be inevitable. Designers could imbue AGIs with artificial mortality via an internal shut-off point. The question, though, is: should they? Should researchers curtail an AGI’s potentially endless lifespan (...) by deliberately making it mortal? It is this question that this article explores. First, it considers what type of AGI is under discussion before outlining how such beings could be ageless. Then, after clarifying the type of immortality under discussion and arguing that imbuing an AGI with synthetic aging would be person-affecting, the article explores four core conundrums: (i) deliberately causing a morally significant being’s death; (ii) immortality’s associated harms; (iii) concerns about immortality’s unequal assignment; and (iv) the danger of immortal AGI overlords. The article concludes that, although prudence speaks in favour of creating an aging AGI, this alone is an insufficient reason to justify doing so in the face of the material harm such an action would constitute. (shrink)
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  • Problems with “Friendly AI”.Oliver Li -2021 -Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI systems in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues concerning Friendly AI that need to be addressed, taking Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...) I briefly recapitulate Fröding’s and Peterson’s arguments for Friendly AI. I then highlight some issues with their approach and line of reasoning and identify four problems related to the notion of Friendly AI, all of which pertain to the role of and need for humans’ moral development. These are: first, the moral tendencies and preferences of the humans interacting with a Friendly AI should be considered; second, it needs to be considered whether the humans interacting with a Friendly AI are still developing their virtues and character traits; third, the indirect effects of replacing humans with Friendly AI should be considered with respect to the possibilities for humans to develop their moral virtues; and fourth, the question of whether the AI is perceived as some form of Artificial General Intelligence cannot be neglected. In conclusion, I argue that all four problems are related to humans’ moral development, an observation that strongly emphasizes the role of and need for humans’ moral development alongside the accelerating development of AI systems. (shrink)
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark   3 citations  
  • Engineering responsibility.Nicholas Sars -2022 -Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the responsibility gaps that artificial systems threaten to create. This paper identifies a question that arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advances such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...) reason to think such technological advances are likely, then we should take steps to address the potential for engineering responsibility. (shrink)
    Direct download(2 more)  
     
    Export citation  
     
    Bookmark   2 citations  
