Social appropriateness in HMI. Ricarda Wullenkord, Jacqueline Bellon, Bruno Gransche, Sebastian Nähr-Wagener & Friederike Eyssel - 2022 - Interaction Studies 23 (3): 360-390.
Social appropriateness is an important topic both in human-human interaction (HHI) and in human-machine interaction (HMI). As sociosensitive and socioactive assistance systems advance, the question arises whether a machine's behavior should include considerations regarding social appropriateness. However, the concept of social appropriateness is difficult to define, as it is determined by multiple aspects. Thus, to date, a unified perspective encompassing and combining multidisciplinary findings is missing. When translating results from HHI to HMI, it remains unclear whether such insights into the dynamics of social appropriateness between humans may in fact apply to sociosensitive and socioactive assistance systems. To shed light on this matter, we propose the Five Factor Model of Social Appropriateness (FASA), which provides a multidisciplinary perspective on the notion of social appropriateness and its implementation into technical systems. Finally, we offer reflections on the applicability and ethics of the FASA Model, highlighting both strengths and limitations of the framework.
What’s to bullying a bot?: Correlates between chatbot humanlikeness and abuse. Merel Keijsers, Christoph Bartneck & Friederike Eyssel - 2021 - Interaction Studies 22 (1): 55-80.
In human-chatbot interaction, users casually and regularly offend and abuse the chatbot they are interacting with. The current paper explores the relationship between chatbot humanlikeness on the one hand and users' sexual advances and verbal aggression on the other. A total of 283 conversations between the Cleverbot chatbot and its users were harvested and analysed. Our results showed higher counts of user verbal aggression and sexual comments towards Cleverbot when Cleverbot appeared more humanlike in its behaviour. Caution is warranted in interpreting the results, however, as no experimental manipulation was conducted and causality can thus not be inferred. Nonetheless, the findings are relevant both for research on the abuse of conversational agents and for the development of effective approaches to discourage or prevent verbal aggression by chatbot users.