While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: blame might be shifted from the owners, users, or designers of AI systems to the systems themselves, leading to diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question, also explored in the experiment, whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot "knew" rather than really knew). However, (iii) according to our data, people were unwilling to downgrade their ascriptions of mens rea to a merely metaphorical sense when given the chance. Finally, (iv) we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: people were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
Practical ability manifested through robust and reliable task performance, as well as information relevance and well-structured representation, are key factors indicative of understanding in the philosophical literature. We explore these factors in the context of deep learning, identifying prominent patterns in how the results of these algorithms represent information. While the estimation applications of modern neural networks do not qualify as the mental activity of persons, we argue that coupling analyses from philosophical accounts with the empirical and theoretical basis for identifying these factors in deep learning representations provides a framework for discussing and critically evaluating potential machine understanding, given the continually improving task performance enabled by such algorithms.
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate this, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye tracking, electroencephalography (EEG), or functional near-infrared spectroscopy (fNIRS) embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together, and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggests they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.
One of the sectors for which Artificial Intelligence applications have been considered exceptionally promising is the healthcare sector. As a public-facing sector, the introduction of AI applications has been subject to extended news coverage. This article conducts a quantitative and qualitative data analysis of English news media articles covering AI systems that allow the automation of tasks that so far needed to be done by a medical expert, such as a doctor or a nurse, thereby redistributing their agency. In this article, we investigated one particular framing of AI systems and their agency: the framing that positions AI systems as replacing and outperforming the human medical expert, and in which AI systems are personified and/or addressed as a person. The analysis of our data set, consisting of 365 articles written between 1980 and 2019, will show that there is a tendency to present AI systems as outperforming human expertise. These findings are important given the central role of news coverage in explaining AI and given the fact that the popular frame of ‘outperforming’ might place AI systems above critique and concern, including the Hippocratic oath. Our data also showed that the addressing of an AI system as a person is a trend that has advanced only recently and is a new development in the public discourse about AI.
This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people’s reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of the most important among them is a sense of group membership with robots, as it modulates the empathic responses to representatives of our own and other groups. The sense of group membership with robots may be co-shaped by socio-cognitive factors such as one’s experience, familiarity with the robot and its history, motivation, accepted ontology, stereotypes or language. Finally, I argue in favour of the formulation of a pragmatic and normative framework for manipulation of the level of empathy in human–robot interactions.
Spatial communication is essential to the survival and social interaction of human beings. In science fiction and the near future, robots are supposed to be able to understand spatial language to collaborate and cooperate with humans. However, it remains unknown whether human speakers regard robots as human-like social partners. In this study, human speakers described target locations to an imaginary human or robot addressee under various scenarios varying in relative speaker–addressee cognitive burden. Speakers made equivalent perspective choices to human and robot addressees, which consistently shifted according to the relative speaker–addressee cognitive burden. However, speakers’ perspective choice was only significantly correlated with their social skills when the addressees were humans but not robots. These results suggest that people generally assume robots and humans have equal capabilities in understanding spatial descriptions but do not regard robots as human-like social partners.
Despite the increasing number of studies on user experience and user interfaces, few studies have examined emotional interaction between humans and deformable objects. In the current study, we investigated how the anthropomorphic design of a flexible display interacts with emotion. For 101 unique 3D images in which an object was bent at different axes, 281 participants were asked to report how strongly the object evoked five elemental emotions (i.e., happiness, disgust, anger, fear, and sadness) in an online survey. People rated the object’s shape using three emotional categories: happiness, disgust–anger, and sadness–fear. It was also found that a combination of axis of bending (horizontal or diagonal axis) and convexity (bending convexly or concavely) predicted emotional valence, underpinning the anthropomorphic design of flexible displays. Our findings provide empirical evidence that axis of bending and convexity can be important antecedents of emotional interaction with flexible objects, triggering at least three types of emotion in users.
Robots that can communicate with people are one of the goals reached by the technology developed for automation in work life. Experts aim to further improve the communication skills of these robots in the near future. Moreover, various studies emphasize that people may interact with robots in a similar way as they interact with other people. In line with this idea, this study examines the possible causal chain in which social anxiety affects robot anxiety, which in turn affects attitudes toward interacting with robots. Data obtained from university students were analyzed in a simple and parallel mediation model. The results showed that robot anxiety, and in particular two of its sub-dimensions, mediate the relationship between social anxiety and negative attitudes toward interaction with robots. Researchers should carry out new studies on the common structural characteristics of the anxiety people feel when interacting with humans and with robots.
The paper demonstrates that social alignment is distinct from value alignment as it is currently understood in the AI safety literature, and argues that social alignment is an important research agenda. Work provides an important example for the argument, since work is a cooperative endeavor, and it is part of the larger manifold of social cooperation. These cooperative aspects of work are individually and socially valuable, and so they must be given a central place when evaluating the impact of AI upon work. Workplace technologies are not simply instruments for achieving productive goals, but ways of mediating interpersonal relations. They are aspects of a cooperative interface, i.e., the infrastructure through which we engage in cooperative behavior with others. The concept of the cooperative interface suggests two conjectures to foreground in the social alignment agenda, motivated by the experience of algorithmic trading and social robotics: that AI impacts cooperation through its effects on social networks, and through its effects on social norms.
The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of "the Kantian dog" is used to assist reasoning about this. The paper also discusses the implications of this approach for thinking about the moral standing of animals and humans, showing why, when, and how an indirect approach can also be helpful in these fields, and using Levinas and Dewey as sources of inspiration to discuss some challenges raised by this approach.
Social robotics aims at designing robots capable of joint interaction with humans. On a conceptual level, sufficient mutual understanding is usually said to be a necessary condition for joint interaction. Against this background, the following questions remain open: in which sense is it legitimate to speak of human–robot joint interaction? What exactly does it mean to speak of humans and robots sufficiently understanding each other to account for human–robot joint interaction? Is such joint interaction effectively possible by reference, e.g., to the mere ascription or simulation of understanding? To answer these questions, we first discuss technical approaches which aim at the implementation of certain aspects of human–human communication and interaction in social robots in order to make robots accessible and understandable to humans and, hence, human–robot joint interaction possible. Second, we examine the human tendency to anthropomorphize in this context, with a view to human understanding of and joint interaction with social robots. Third, we analyze the most prominent concepts of mutual understanding and their implications for human–robot joint interaction. We conclude that it is—at least for the time being—not legitimate to speak of human–robot joint interaction, which has relevant implications both morally and ethically.
Should we welcome social robots into interpersonal relationships? In this paper I show that an adequate answer to this question must take three factors into consideration: (1) the psychological vulnerability that characterizes ordinary interpersonal relationships, (2) the normative significance that humans attach to other people’s attitudes in such relationships, and (3) the tendency of humans to anthropomorphize and "mentalize" artificial agents, often beyond their actual capacities. I argue that we should welcome social robots into interpersonal relationships only if they are endowed with a social capacity that is functionally similar to our own capacity for social norms. Drawing on an interdisciplinary body of research on norm psychology, I explain why this capacity is importantly different from pre-programmed, top-down conformity to rules, in that it involves an open-ended responsiveness to social corrective feedback, such as that which humans provide to each other in expressions of praise and blame.
Human beings express affinity (Shinwa‐kan in the Japanese language) in communicating transactive engagements among healthcare providers, patients and healthcare robots. The appearance of healthcare robots and their language capabilities often feature characteristic and appropriate compassionate dialogical functions in human–robot interactions. Elements of healthcare robot configurations, comprising their physiognomy and communication properties, are founded on the positivist philosophical perspective of being the summation of composite parts, thereby mimicking human persons. This article reviews Mori's theory of the Uncanny Valley and its consequent debates, and examines "Uncanny" relations in generating healthcare robot conversational content with artificial affective communication (AAC) using natural language processing. With healthcare robots provoking influential physical composition and sensory expressions, the relations in human–healthcare robot transactive engagements are argued to be supportive of the design and development of natural language processing. This implies that maintaining human–healthcare robot interaction and assessing the eeriness situations explained in the Uncanny Valley theory are crucial positions for healthcare robot functioning as a valuable commodity in health care. As such, the physical features, language capabilities and mobility of healthcare robots establish the primacy of AAC with natural language processing as integral to healthcare robot–human healthcare practice.
Anthropomorphism, the attribution of human-like qualities to non-human entities, can influence comprehension of the surrounding world. Going beyond previous research on the general assessment of anthropomorphism, the current study aimed to explore how anthropomorphising a specific animal species influences public acceptance of livestock keeping practices. Specifically, we focused on welfare-infringing practices that limit animals’ freedom or involve disruptive procedures, social isolation, or other stressful situations. Since most people lack experience in livestock keeping, it is likely that they project their own preferences onto animals when judging livestock keeping practices. Questionnaire data from a sample of the Swiss German public (N = 1232) were analysed regarding their acceptance of livestock keeping practices, as well as anthropomorphism for three animals: cattle, pigs, and poultry. We showed that judgement of livestock keeping was related to an anthropomorphic view of animals. This takes two opposite directions: (1) anthropomorphising was connected to a more critical view of livestock keeping practices, and (2) the attribution of more cognitive capabilities to cattle and poultry was associated with a higher acceptance of welfare-infringing livestock keeping practices. The tendency to anthropomorphise was species-dependent, with the two mammals eliciting a higher tendency to anthropomorphise than poultry. The results suggest that the tendency to anthropomorphise plays a significant role in shaping the public’s opinion on livestock keeping. We argue that, when activating the tendency to anthropomorphise in the media, advertisements, or political publicity (e.g. by highlighting human-like features), a certain level of caution should be taken to avoid undesirable outcomes.
To perceive an affordance is to perceive an object or situation as presenting an opportunity for action. The concept of affordances has been taken up across a wide range of disciplines, including AI. I explore an interesting extension of the concept of affordances in robotics. Among the affordances that artificial systems have been engineered to detect are affordances to deliberate. In psychology, affordances are typically limited to bodily action, so it is noteworthy that AI researchers have found it helpful to extend the concept to encompass mental actions. I propose that psychologists can learn from this extension, and argue that human subjects can perceive mental affordances, such as affordances to attend, affordances to imagine, and affordances to count.
This article explores the implications of what it means to moralize about future technological innovations. Specifically, I have been invited to comment on three papers that attempt to think about what seems to be an impending social reality: the availability of life-like sex robots. In response, I explore what it means to moralize about future technological innovations from a secular perspective, i.e., a perspective grounded in an immanent, socio-historically contingent view. I review the arguments of Nancy Jecker, Mark Howard and Robert Sparrow, and Wang Jue, and respond to their arguments concerning the permissible limits of human-robot sexual interaction. I argue that we are in a poor epistemic position regarding what the actual future human response will be towards sex robots and how it will affect society generally. Given this poor epistemic position, I argue that moralizing about future trends like human-robot sex is difficult because we do not have the relevant facts to work with. Furthermore, I remain skeptical of policy recommendations based on socio-historically contingent moral viewpoints, both because they do not carry in-principle moral authority to say what future others may or may not do with their property, and because they may not even appeal to future secular others, insofar as secular morality is plural and consistently develops anew.