On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas (2021). AI and Society 36 (2): 429-443.

While philosophers have debated for decades whether various entities (including severely disabled human beings, embryos, animals, objects of nature, and even works of art) can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is the moral rights and status of social robots, such as robotic caregivers and artificial companions, which are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the "consciousness criterion," which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why it is generally supposed that consciousness underlies the morally relevant properties (such as sentience), and I respond to some of the common objections against this view. Then, I examine three more inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.
Human rights for robots? The moral foundations and epistemic challenges. Kestutis Mosakas (forthcoming). AI and Society: 1-17.

As we step into an era in which artificial intelligence systems are predicted to surpass human capabilities, a number of profound ethical questions have emerged. One such question, which has gained some traction in recent scholarship, concerns the ethics of human treatment of robots and the thought-provoking possibility of robot rights. The present article explores this very aspect, with a particular focus on the notion of human rights for robots. It argues that if we accept the widely held view that moral status and rights (including human rights) are grounded in certain cognitive capacities, then it follows that intelligent machines could, in principle, acquire these entitlements once they come to possess the requisite properties. In support of this perspective, the article outlines the moral foundations of human rights and examines several main objections, arguing that they do not successfully negate the prospect of considering robots as potential holders of human rights. Subsequently, it turns to the key epistemic challenges associated with moral status and rights for robots, outlining the main difficulties in discerning the presence of mental states in artificial entities and offering some practical considerations for approaching these challenges. The article concludes by emphasizing the importance of establishing a suitable framework for moral decision-making under uncertainty in the context of human treatment of artificial entities, given the gravity of the epistemic problems surrounding the concepts of artificial consciousness, moral status, and rights.