As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person’s emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these encounters involve an elaborate practice of imaginative pretence: a make-believe in which the artificial agent is attributed a life of its own. We attend, specifically, to the temporal characteristics of these fictions, and to what we imagine artificial agents are doing when we are not looking at them.
Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then, this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I myself will raise a new concern about the argument’s use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument’s advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.
There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from harm. I conclude that there is little evidence to support this claim and that legislation in this area would restrict progress in areas of social care where social robots are a potentially valuable resource.
According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggests they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.
Imagine a man chatting with his AI girlfriend app. He looks at his smartphone and says, ‘Finally, I'm being understood’. Is he deceiving himself? Is there anything morally wrong with it? The human tendency to anthropomorphize AI is well established, and the popularity of AI companions is growing. This article answers three questions: (1) How can being charmed by AI's simulated emotions be considered self‐deception? (2) Why might we have an obligation to avoid harmless self‐deception? (3) When is self‐deception in emotional relationships with AI morally questionable, and can it be blameworthy? Regarding question 1, I describe being seduced by AI's simulated emotions as self‐deception, where desires bias beliefs. In response to question 2, I outline two ways to justify a prima facie obligation to avoid harmless self‐deception – instrumental and autotelic. For question 3, I highlight crucial factors to consider in assessing the blameworthiness of harmless self‐deception, such as the emotional and cognitive competences of the self‐deceiver, reasons for self‐deception, and its consequences for one's predispositions, self‐image, and other people. Moreover, I argue that the ethical requirement to avoid self‐deception does not easily translate into attributing blame to others for being self‐deceived.
One characteristic of socially disruptive technologies is that they have the potential to cause uncertainty about the application conditions of a concept, i.e., they are conceptually disruptive. Humanoid robots have done just this, as evidenced by discussions about whether, and under what conditions, humanoid robots could be classified as, for example, moral agents, moral patients, or legal and/or moral persons. This paper frames the disruptive effect of humanoid robots differently by taking the discussion beyond that of classificatory concerns. It does so by showing that humanoid robots are socially disruptive because they also transform how we experience and understand the world. By inviting us to relate to a technological artefact as if it were human, humanoid robots have a profound impact upon the way in which we relate to different elements of our world. Specifically, I focus on three types of human relational experiences, and how the norms that surround them may be transformed by humanoid robots: (1) human-technology relations; (2) human-human relations; and (3) human-self relations. Anticipating the ways in which humanoid robots may change society is important given that once a technology is entrenched, it is difficult to counteract negative impacts. Therefore, we should try to anticipate them while we can still do something to prevent them. Since humanoid robots are currently relatively rudimentary, yet there is incentive to invest more in their development, it is now a good time to think carefully about how this technology may impact us.
Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands consideration and clarification in order to mitigate potential emotional risks. By analyzing the relational nature of human-social robot interaction, we discuss the connotation of such a virtual interactive environment, in which the emotional states aroused are similar to those aroused when reading novels. Building on this comprehension, we further demonstrate that manufacturers should carry out comprehensive Virtual Interactive Environment Indication (VIEI) measures during human-social robot interaction, with a stricter sense of responsibility, when applying social robots. Finally, we contemplate the potential contributions of virtual interactive environment indication to existing robot ethics guidelines.