This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence and properties of consciousness? Most methods are currently developed by evaluating the presence of consciousness in humans, and here we argue that there are particular problems in applying these methods to nonhuman cases, which we call the indicator validity problem and the extrapolation problem. The first is a problem with applying indicators, developed using the differences between conscious and unconscious processing in humans, to identify other organisms or systems as conscious or nonconscious. The second is a problem in extrapolating any indicators developed in humans or other organisms to artificial systems. While pressing ethical concerns add urgency to the attribution of consciousness, and its attendant moral status, to nonhuman animals and intelligent machines, we cannot wait for certainty, and we advocate the use of a precautionary principle to avoid causing unintentional harm. We also intend that the considerations and limitations discussed in this paper can be used to further analyze and refine the methods of consciousness science, with the hope that one day we may be able to solve the measurement problem of consciousness.
Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either human or AI agents engage in virtuous or vicious behavior, and experiment participants then judge their level of virtue or vice. The scenarios represent different virtue ethics domains of truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and that the reasoning and explanations for the attributions are varied and more complex. On “relational” views of membership in the moral community, virtuous machines would indeed be included, even if the attributions made to them are weakened. Hence, while our moral relationships with artificial agents may be of the same types, they may yet remain substantively different from our relationships to human beings.
We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.
This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory of how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similarly to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The paper argues against ethical behaviourism by making four main points. First, it is argued that the strongest version of ethical behaviourism understands the theory as relying on inferences to the best explanation when inferring moral status. Second, as a consequence, ethical behaviourism cannot stick with merely looking at the robot’s behaviour while remaining neutral on the difficult question of which properties ground moral status. Third, not only should behavioural evidence play a role in inferring a robot’s moral status, but knowledge of the robot’s design process and of its designer’s intentions ought to be taken into account as well. Fourth, knowledge of a robot’s ontology, and how it relates to human biology, is often epistemically relevant for inferring moral status as well. The paper closes with some concluding observations.
This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI, and the main policy aims and means.
As we step into an era in which artificial intelligence systems are predicted to surpass human capabilities, a number of profound ethical questions have emerged. One such question, which has gained some traction in recent scholarship, concerns the ethics of human treatment of robots and the thought-provoking possibility of robot rights. The present article explores this very aspect, with a particular focus on the notion of human rights for robots. It argues that if we accept the widely held view that moral status and rights (including human rights) are grounded in certain cognitive capacities, then it follows that intelligent machines could, in principle, acquire these entitlements once they come to possess the requisite properties. In support of this perspective, the article outlines the moral foundations of human rights and examines several main objections, arguing that they do not successfully negate the prospect of considering robots as potential holders of human rights. Subsequently, it turns to the key epistemic challenges associated with moral status and rights for robots, outlining the main difficulties in discerning the presence of mental states in artificial entities and offering some practical considerations for approaching these challenges. The article concludes by emphasizing the importance of establishing a suitable framework for moral decision-making under uncertainty in the context of human treatment of artificial entities, given the gravity of the epistemic problems surrounding the concepts of artificial consciousness, moral status, and rights.