  • Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven’t done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.
  • Understanding Artificial Agency. Leonard Dung - 2025 - Philosophical Quarterly 75 (2):450-472.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more informative than alternatives. More speculatively, it may help to illuminate two important emerging questions in AI ethics: 1. Can agency contribute to the moral status of non-human beings, and how? 2. When and why might AI systems exhibit power-seeking behaviour and does this pose an existential risk to humanity?
  • Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that they might obtain it, that this could lead to catastrophe, and that we might build and deploy such systems anyway. The second argument claims that the development of human-level AI will unlock rapid further progress, culminating in AI systems far more capable than any human — this is the Singularity Hypothesis. Power-seeking behavior on the part of such systems might be particularly dangerous. We discuss a variety of objections to both arguments and conclude by assessing the state of the debate.
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find the suggestions ultimately unmotivated, the discussion shows that our epistemic condition with respect to the moral status of others does raise problems, and that the human tendency to empathise with things that do not have moral status should be taken seriously—we suggest that it produces a “derived moral status”. Finally, it turns out that there is typically no individual in real AI that could even be said to be the bearer of moral status. Overall, there is no reason to think that robot rights are an issue now.
  • Robot Betrayal: a guide to the ethics of robotic deception. John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns. Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions must be addressed, including what forms of machine consciousness would be morally relevant forms of consciousness, and what the ethical implications of morally relevant forms of machine consciousness would be. While admittedly part of this reflection is speculative in nature, it clearly underlines the need for a detailed conceptual analysis of the concept of artificial consciousness and stresses the imperative to avoid building machines with morally relevant forms of consciousness. The article ends with some suggestions for potential future regulation of machine consciousness.
  • The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists. Elliott Thornley - forthcoming - Philosophical Studies:1-28.
    I explain the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems show that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. And patience trades off against shutdownability: the more patient an agent, the greater the costs that agent is willing to incur to manipulate the shutdown button. I end by noting that these theorems can guide our search for solutions.
  • Sentientism, Motivation, and Philosophical Vulcans. Luke Roelofs - 2023 - Pacific Philosophical Quarterly 104 (2):301-323.
    If moral status depends on the capacity for consciousness, what kind of consciousness matters exactly? Two popular answers are that any kind of consciousness matters (Broad Sentientism), and that what matters is the capacity for pleasure and suffering (Narrow Sentientism). I argue that the broad answer is too broad, while the narrow answer is likely too narrow, as Chalmers has recently argued by appeal to ‘philosophical Vulcans’. I defend a middle position, Motivational Sentientism, on which what matters is motivating consciousness: any kind of consciousness which presents its subject with reasons for action.
  • Will intelligent machines become moral patients? Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are no different from traditional artifacts in this respect. To make this argument, I examine the feature of AIs that enables them to improve their intelligence, i.e., machine learning. I argue that there is no reason to believe that future advances in machine learning will take AIs closer to having a good of their own. I thus argue that concerns about the moral status of future AIs are unwarranted. Nothing about the nature of intelligent machines makes them a better candidate for acquiring moral patiency than the traditional artifacts whose moral status does not concern us.
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the “consciousness criterion,” which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why it is generally supposed that consciousness underlies the morally relevant properties (such as sentience) and then, I respond to some of the common objections against this view. Then, I examine three inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.
  • The Weirdness of the World. Eric Schwitzgebel - 2024 - Princeton University Press.
    How all philosophical explanations of human consciousness and the fundamental structure of the cosmos are bizarre, and why that’s a good thing. Do we live inside a simulated reality or a pocket universe embedded in a larger structure about which we know virtually nothing? Is consciousness a purely physical matter, or might it require something extra, something nonphysical? According to the philosopher Eric Schwitzgebel, it’s hard to say. In The Weirdness of the World, Schwitzgebel argues that the answers to these fundamental questions lie beyond our powers of comprehension. We can be certain only that the truth—whatever it is—is weird. Philosophy, he proposes, can aim to open—to reveal possibilities we had not previously appreciated—or to close, to narrow down to the one correct theory of the phenomenon in question. Schwitzgebel argues for a philosophy that opens. According to Schwitzgebel’s “Universal Bizarreness” thesis, every possible theory of the relation of mind and cosmos defies common sense. According to his complementary “Universal Dubiety” thesis, no general theory of the relationship between mind and cosmos compels rational belief. Might the United States be a conscious organism—a conscious group mind with approximately the intelligence of a rabbit? Might virtually every action we perform cause virtually every possible type of future event, echoing down through the infinite future of an infinite universe? What, if anything, is it like to be a garden snail? Schwitzgebel makes a persuasive case for the thrill of considering the most bizarre philosophical possibilities.
  • Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule. Tyler L. Jaynes - 2020 - AI and Society 35 (2):343-354.
    The concept of artificial intelligence is not new nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Where there are several decades’ worth of writing on the concept of the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill to protect our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.
  • AI Alignment vs. AI Ethical Treatment: Ten Challenges. Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications for AI development. Although the most obvious way to avoid the tension between alignment and ethical treatment would be to avoid creating AI systems that merit moral consideration, this option may be unrealistic and is perhaps fleeting. So, we conclude by offering some suggestions for other ways of mitigating mistreatment risks associated with alignment.
  • Designing AI with Rights, Consciousness, Self-Respect, and Freedom. Eric Schwitzgebel & Mara Garza - 2023 - In Francisco Lara & Jan Deckers, Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, that AI should be designed with self-respect and with the freedom to explore values other than those we might impose. We are especially concerned about the temptation to create human-grade AI pre-installed with the desire to cheerfully sacrifice itself for its creators’ benefit.
  • Digital suffering: why it's a problem and how to prevent it. Bradford Saad & Adam Bradley - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access Monitor Prevent (AMP). AMP uses a ‘dancing qualia’ argument to link the functional states of certain digital systems to their experiences—this yields epistemic access to digital minds. With that access, we can prevent digital suffering by only creating advanced digital systems that we have such access to, monitoring their functional profiles, and preventing them from entering states with functional markers of suffering. After introducing and motivating AMP, we confront limitations it faces and identify some options for overcoming them. We argue that AMP fits especially well with—and so provides a moral reason to prioritize—one approach to creating such systems: whole brain emulation. We also contend that taking other paths to digital minds would be morally risky.
  • Artificial moral and legal personhood. John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which is critical of the Civil Law Rules on Robotics. The paper reviews issues related to the moral and legal status of intelligent robots and the notion of legal personhood, including an analysis of the relation between moral and legal personhood in general and with respect to robots in particular. It examines two analogies, to corporations and animals, that have been proposed to elucidate the moral and legal status of robots. The paper concludes that one should not ascribe moral and legal personhood to currently existing robots, given their technological limitations, but that one should do so once they have achieved a certain level at which they would become comparable to human beings.
  • How to deal with risks of AI suffering. Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
  • Meaning in Life in AI Ethics—Some Trends and Perspectives. Sven Nyholm & Markus Rüther - 2023 - Philosophy and Technology 36 (2):1-24.
    In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.
  • Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed. Leonard Dung - 2022 - Science and Engineering Ethics 28 (6):1-15.
    According to a common view, sentience is necessary and sufficient for moral status. In other words, whether a being has intrinsic moral relevance is determined by its capacity for conscious experience. The epistemic objection derives from our profound uncertainty about sentience. According to this objection, we cannot use sentience as a criterion to ascribe moral status in practice because we won’t know in the foreseeable future which animals and AI systems are sentient while ethical questions regarding the possession of moral status are urgent. Therefore, we need to formulate an alternative criterion. I argue that the epistemic objection is dissolved once one clearly distinguishes between the question what determines moral status and what criterion should be employed in practice to ascribe moral status. Epistemic concerns are irrelevant to the former question and—I will argue—criteria of moral status have inescapably to be based on sentience, if one concedes that sentience determines moral status. It follows that doubts about our epistemic access to sentience cannot be used to motivate an alternative criterion of moral status. If sentience turns out to be unknowable, then moral status is unknowable. However, I briefly advocate against such strong pessimism.
  • The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile these two positions. The core of this proposal is that moral individualism and moral relationalism are best understood as distinct deliberative strategies for attributing moral status to SRs, and that both are worth preserving insofar as they answer to different kinds of practical problems that we face as moral agents.
  • Acting like an algorithm: digital farming platforms and the trajectories they (need not) lock-in. Michael Carolan - 2020 - Agriculture and Human Values 37 (4):1041-1053.
    This paper contributes to our understanding of farm data value chains with assistance from 54 semi-structured interviews and field notes from participant observations. Methodologically, it includes individuals, such as farmers, who hold well-known positionalities within digital agriculture spaces—platforms that include precision farming techniques, farm equipment built on machine learning architecture and algorithms, and robotics—while also including less visible elements and practices. The actors interviewed and materialities and performances observed thus came from spaces and places inhabited by, for example, farmers, crop scientists, statisticians, programmers, and senior leadership in firms located in the U.S. and Canada. The stability of “the” artifacts followed for this project proved challenging, which led to me rethinking how to approach the subject conceptually. The paper is animated by a posthumanist commitment, drawing heavily from assemblage thinking and critical data scholarship coming out of Science and Technology Studies. The argument’s understanding of “chains” therefore lies on an alternative conceptual plane relative to most commodity chain scholarship. To speak of a data value chain is to foreground an orchestrating set of relations among humans, non-humans, products, spaces, places, and practices. The paper’s principal contribution involves interrogating lock-in tendencies at different “points” along the digital farm platform assemblage while pushing for a varied understanding of governance depending on the roles of the actors and actants involved.
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage “information ethics” and “social-relational” approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered.
  • Are superintelligent robots entitled to human rights? John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  • The Full Rights Dilemma for AI Systems of Debatable Moral Personhood. Eric Schwitzgebel - 2023 - Robonomics 4.
    An Artificially Intelligent system (an AI) has debatable moral personhood if it is epistemically possible either that the AI is a moral person or that it falls far short of personhood. Debatable moral personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or do not treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties.
  • A fictional dualism model of social robots. Paula Sweeney - 2021 - Ethics and Information Technology 23 (3):465-472.
    In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and also acts as a guide for us in determining the significance of that emotional reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to their usefulness, and that this emotional reaction is not a direct indicator that robots deserve either moral consideration or rights. The positive framework of Fictional Dualism provides us with an understanding of what social robots are and with a plausible basis for our relationships with them as we bring them further into society.
  • Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the retributive approach to machine crime in favor of prioritizing restitution. I argue that this shift better conforms to what justice demands when sophisticated artificial agents of uncertain moral status are concerned.
  • How to Treat Machines that Might Have Minds. Nicholas Agar - 2020 - Philosophy and Technology 33 (2): 269-282.
    This paper offers practical advice about how to interact with machines that we have reason to believe could have minds. I argue that we should approach these interactions by assigning credences to judgements about whether the machines in question can think. We should treat the premises of philosophical arguments about whether these machines can think as offering evidence that may increase or reduce these credences. I describe two cases in which you should refrain from doing as your favored philosophical view about thinking machines suggests. Even if you believe that machines are mindless, you should acknowledge that treating them as if they are mindless risks wronging them. Suppose your considered philosophical view that a machine has a mind leads you to consider dating it. You may have reason to regret that decision should these dates lead on to a life-long relationship with a mindless machine. In the paper’s final section, I suggest that building a machine that is capable of performing all intelligent human behavior should produce a general increase in confidence that machines can think. Any reasonable judge should count this feat as evidence in favor of machines having minds. This rational nudge could lead to broad acceptance of the idea that machines can think.
  • Danaher’s Ethical Behaviourism: An Adequate Guide to Assessing the Moral Status of a Robot? Jilles Smids - 2020 - Science and Engineering Ethics 26 (5): 2849-2866.
    This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory on how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similar to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The paper argues against ethical behaviourism by making four main points. First, it is argued that the strongest version of ethical behaviourism understands the theory as relying on inferences to the best explanation when inferring moral status. Second, as a consequence, ethical behaviourism cannot stick with merely looking at the robot’s behaviour, while remaining neutral with regard to the difficult question of which property grounds moral status. Third, not only behavioural evidence ought to play a role in inferring a robot’s moral status, but knowledge of the design process of the robot and of its designer’s intention ought to be taken into account as well. Fourth, knowledge of a robot’s ontology and how that relates to human biology often is epistemically relevant for inferring moral status as well. The paper closes with some concluding observations.
  • What’s Wrong with Designing People to Serve? Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4): 993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, in building such artificial agents, their creators cannot help but evince an objectionable attitude akin to the Aristotelian vice of manipulativeness.
  • Can we design artificial persons without being manipulative? Maciej Musiał - 2024 - AI and Society 39 (3): 1251-1260.
    If we could build artificial persons (APs) with a moral status comparable to that of a typical human being, how should we design those APs in the right way? This question has been addressed mainly in terms of designing APs devoted to being servants (AP servants) and debated in reference to their autonomy and the harm they might experience. Recently, it has been argued that even if developing AP servants would neither deprive them of autonomy nor cause any net harm, then developing such entities would still be unethical due to the manipulative attitude of their designers. I make two contributions to this discussion. First, I claim that the argument about manipulative attitude significantly shifts the perspective of the whole discussion on APs and that it refers to a much wider range of types of APs than has been acknowledged. Second, I investigate the possibilities of developing APs without a manipulative attitude. I proceed in the following manner: (1) I examine the argument about manipulativeness; (2) show the important novelty it brings to a discussion about APs; (3) analyze how the argument can be extrapolated to designing other kinds of APs; and (4) discuss cases in which APs can be designed without manipulativeness.
  • The Specter of Automation. Zachary Biondi - 2023 - Philosophia 51 (3): 1093-1110.
    Karl Marx took technological development to be the heart of capitalism’s drive and, ultimately, its undoing. Machines are initially engineered to perform functions that otherwise would be performed by human workers. The economic logic pushed to its limits leads to the prospect of full automation: a world in which all labor required to meet human needs is superseded and performed by machines. To explore the future of automation, the paper considers a specific point of resemblance between human beings and machines: intelligence. Examining the development of machine intelligence through the Marxist concepts of alienation and reification reveals a tension between certain technophilic post-labor visions and the reality of capitalistic development oriented towards intelligent technology. If the prospect of a post-labor world depends on technologies that closely resemble humans, the world can no longer be described as post-labor. The tension has implications for the potential moral status of machines and the possibility of full automation. The paper considers these implications by outlining four possible futures of automation.
  • No Wellbeing for Robots (and Hence no Rights). Peter Königs - 2025 - American Philosophical Quarterly 62 (2): 191-208.
    A central question in AI ethics concerns the moral status of robots. This article argues against the idea that they have moral status. It proceeds by defending the assumption that consciousness is necessary for welfare subjectivity. Since robots most likely lack consciousness, and welfare subjectivity is necessary for moral status, it follows that robots lack moral status. The assumption that consciousness is necessary for welfare subjectivity appears to be in tension with certain widely accepted theories of wellbeing, especially versions of Desire Satisfaction Theory and Objective List Theory. However, instead of elevating non-conscious robots to welfare subjects, this tension should lead us to reject versions of these theories that have this implausible implication.
  • Could a robot feel pain? Amanda Sharkey - forthcoming - AI and Society.
    Questions about robots feeling pain are important because the experience of pain implies sentience and the ability to suffer. Pain is not the same as nociception, a reflex response to an aversive stimulus. The experience of pain in others has to be inferred. Danaher’s (Sci Eng Ethics 26(4):2023–2049, 2020) ‘ethical behaviourist’ account claims that if a robot behaves in the same way as an animal that is recognised to have moral status, then its moral status should also be assumed. Similarly, under a precautionary approach (Sebo in Harvard Rev Philos 25:51–70, 2018), entities from foetuses to plants and robots are given the benefit of the doubt and assumed to be sentient. However, there is a growing consensus about the scientific criteria used to indicate pain and the ability to suffer in animals (Birch in Anim Sentience, 2017; Sneddon et al. in Anim Behav 97:201–212, 2014). These include the presence of a central nervous system, changed behaviour in response to pain, and the effects of analgesic pain relief. Few of these criteria are met by robots, and there are risks to assuming that they are sentient and capable of suffering pain. Since robots lack nervous systems and living bodies there is little reason to believe that future robots capable of feeling pain could (or should) be developed.
  • Sims and Vulnerability: On the Ethics of Creating Emulated Minds. Bartlomiej Chomanski - 2022 - Science and Engineering Ethics 28 (6): 1-17.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I will argue that concerns about vulnerability are more matters of institutional design than individual ethics, both when it comes to creating humanlike brain emulations, and when animal-like emulations are concerned. Consequently, the article contains reflection on some institutional measures that can be taken to protect the sims' interests. It concludes that an institutional framework more likely to succeed in this task is competitive and poly-centric, rather than monopolistic and centralized.
  • Implementational Considerations for Digital Consciousness. Derek Shiller - manuscript
  • From computerised thing to digital being: mission (Im)possible? Julija Kiršienė, Edita Gruodytė & Darius Amilevičius - 2021 - AI and Society 36 (2): 547-560.
    Artificial intelligence (AI) is one of the main drivers of what has been described as the “Fourth Industrial Revolution”, as well as the most innovative technology developed to date. It is a pervasive transformative innovation, which needs a new approach. In 2017, the European Parliament introduced the notion of the “electronic person”, which sparked huge debates in philosophical, legal, technological, and other academic settings. The issues related to AI should be examined from an interdisciplinary perspective. In this paper, we examine this legal innovation—that has been proposed by the European Parliament—from not only legal but also technological points of view. In the first section, we define AI and analyse its main characteristics. We argue that, from a technical perspective, it appears premature and probably inappropriate to introduce AI personhood now. In the second section, justifications for the European Parliament’s proposals are explored in contrast with the opposing arguments that have been presented. As the existing mechanisms of liability could be insufficient in scenarios where AI systems cause harm, especially when algorithms of AI learn and evolve on their own, there is a need to depart from traditional liability theories.
  • SIMS and digital simulacra: is it moral to have sex with virtual copies (created by us)? Maurizio Balistreri & Roberto Manzocco - forthcoming - AI and Society: 1-9.
    The development of digital technologies has opened the door to surprising possibilities for the future of humanity. The idea of creating a ‘Metaverse’ in which it is possible to build and interact with digital avatars of real deceased people raises a number of complex ethical and moral questions. The prospect of transferring memories and experiences into digital avatars or creating exact copies of the brain structures of real individuals raises questions regarding the nature of identity and consciousness. These virtual entities could be used for emotional, commercial, or entertainment purposes, but also to fulfil sexual desires, opening a debate on the morality of such interactions. Furthermore, the total control we could exercise over these entities, similar to divine power, raises further ethical questions about responsibility and respect for virtual life. The discussion of these issues will become increasingly relevant as technology advances and it will be crucial to address them with an ethical and political approach that is also aware of their impact on society and the individual. We will analyse the problem by comparing the arguments that it is always wrong to have sexual relations with digital entities (copies of human beings), regardless of their moral relevance, with the arguments that it can be moral to have sex with them.
  • Artificially sentient beings: Moral, political, and legal issues. Fırat Akova - 2023 - New Techno-Humanities 3 (1): 41-48.
    The emergence of artificially sentient beings raises moral, political, and legal issues that deserve scrutiny. First, it may be difficult to understand the well-being elements of artificially sentient beings and theories of well-being may have to be reconsidered. For instance, as a theory of well-being, hedonism may need to expand the meaning of happiness and suffering or it may run the risk of being irrelevant. Second, we may have to compare the claims of artificially sentient beings with the claims of humans. This calls for interspecies aggregation, which is a neglected form of interpersonal aggregation. Lastly, there are practical problems to address, such as whether to include artificially sentient beings in the political decision-making processes, whether to grant them a right to self-determination in digital worlds, and how to protect them from discrimination. Given these, the emergence of artificially sentient beings compels us to reevaluate the positions we typically hold.
  • Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz, Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Robot rights in joint action. Guido Löhr - 2022 - In Vincent C. Müller, Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer.
    The claim I want to explore in this paper is simple. In social ontology, Margaret Gilbert, Abe Roth, Michael Bratman, Antonie Meijers, Facundo Alonso and others talk about rights or entitlements against other participants in joint action. I employ several intuition pumps to argue that we have reason to assume that such entitlements or rights can be ascribed even to non-sentient robots that we collaborate with. Importantly, such entitlements are primarily identified in terms of our normative discourse. Justified criticism, for example, presupposes that another person acted wrongly, i.e., was not entitled to this action. Praise is supposed to encourage another person and acknowledge that one did more than one was obligated to. I show that such normative talk serves the same function when cooperating with robots. This, I argue, suggests that they have the same kind of entitlements and duties at least in the context of a joint action.
  • Human rights for robots? The moral foundations and epistemic challenges. Kestutis Mosakas - forthcoming - AI and Society: 1-17.
    As we step into an era in which artificial intelligence systems are predicted to surpass human capabilities, a number of profound ethical questions have emerged. One such question, which has gained some traction in recent scholarship, concerns the ethics of human treatment of robots and the thought-provoking possibility of robot rights. The present article explores this very aspect, with a particular focus on the notion of human rights for robots. It argues that if we accept the widely held view that moral status and rights (including human rights) are grounded in certain cognitive capacities, then it follows that intelligent machines could, in principle, acquire these entitlements once they come to possess the requisite properties. In support of this perspective, the article outlines the moral foundations of human rights and examines several main objections, arguing that they do not successfully negate the prospect of considering robots as potential holders of human rights. Subsequently, it turns to the key epistemic challenges associated with moral status and rights for robots, outlining the main difficulties in discerning the presence of mental states in artificial entities and offering some practical considerations for approaching these challenges. The article concludes by emphasizing the importance of establishing a suitable framework for moral decision-making under uncertainty in the context of human treatment of artificial entities, given the gravity of the epistemic problems surrounding the concepts of artificial consciousness, moral status, and rights.
  • “I Am Not Your Robot:” the metaphysical challenge of humanity’s AIS ownership. Tyler L. Jaynes - 2021 - AI and Society 37 (4): 1689-1702.
    Despite the reality that self-learning artificial intelligence systems (SLAIS) are gaining in sophistication, humanity’s focus regarding SLAIS-human interactions is unnervingly centred upon transnational commercial sectors and, most generally, around issues of intellectual property law. But as SLAIS gain greater environmental interaction capabilities in digital spaces, or the ability to self-author code to drive their development as algorithmic models, a concern arises as to whether a system that displays a “deceptive” level of human-like engagement with users in our physical world ought to be uniquely protected. Although many voices in the legal and technology realms have continued to argue against unique protections for digital entities, the fact at hand is that SLAIS design is becoming increasingly anthropomorphic so as to make these systems more capable of interacting with a wide range of (potentially) vulnerable populations—generally as a means to enhance these populations’ overall well-being. To frame this concern in a different way, the specific question at hand is whether a human’s “ownership” of such an advanced SLAIS is legal, considering that it (or they) may possess intelligence on par with a human or a convincing-enough display of such behaviour. Given that “ownership” over entities with (seemingly) intelligent behaviours consistent with human populations has been effectively banned by the international community, an examination into this subject and its implications is wholly necessary given humanity’s quest to exist solely in digital environments through whatever means possible.
  • Against willing servitude: Autonomy in the ethics of advanced artificial intelligence. Adam Bales - forthcoming - Philosophical Quarterly.
    Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.
  • Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence. Miranda Mowbray - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3): 517-528.
    The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, and therefore, humans would have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness; and argues against the moral status for AI on the grounds that human malware authors may design malware to fake consciousness. In fact, the article warns that malware authors have stronger incentives than do authors of legitimate software to create code that passes some of the criteria. Thus, code that appears to be benign, but is in fact malware, might become the most common form of software to be treated as having moral status.
  • Should we Trust Social Robots? Trust without Trustworthiness in Human-Robot Interaction. Germán Massaguer Gómez - 2025 - Philosophy and Technology 38 (1): 1-23.
    This paper asks three fundamental questions on the nature of trust: What is trust? What is trustworthiness? When is trust warranted? These discussions are then applied to the context of Human-Robot Interaction (HRI), asking whether we can trust social robots, whether they can be trustworthy, and, lastly, whether we should trust them. After revising the literature on the nature of trust and reliance on one hand, and on trust in social robots, considering both properties-based and non-properties-based views, on the other hand, this paper defends that, given the current state of technology, we can be subjects of a paradoxical scenario in which there is trust without trustworthiness, i.e., human users that interact with social robots can develop something resembling interpersonal trust towards an artificial entity which cannot be trustworthy. This occurs because we perceive and treat social robots as trustworthy entities, while they seem to lack certain properties that would make them capable of being trustworthy (as well as untrustworthy). Understanding our psychology in HRI and trying to discern what social robots are (and are not) is capital when confronted with ethical issues. Some of the ethical issues that arise in the context of trust without trustworthiness will be considered to address the debate about if we should trust social robots. This paper concludes that we should, at least for now, not trust social robots, given the potential harms that can be done and the responsibility gaps that might appear when these harms are to be repaired.
