  • Fictionalism about Chatbots. Fintan Mallory - 2023 - Ergo: An Open Access Journal of Philosophy 10.
    According to widely accepted views in metasemantics, the outputs of chatbots and other artificial text generators should be meaningless. They aren’t produced with communicative intentions and the systems producing them are not following linguistic conventions. Nevertheless, chatbots have assumed roles in customer service and healthcare, they are spreading information and disinformation and, in some cases, it may be more rational to trust the outputs of bots than those of our fellow human beings. To account for the epistemic role of chatbots in our society, we need to reconcile these observations. This paper argues that our engagement with chatbots should be understood as a form of prop-oriented make-believe; the outputs of chatbots are literally meaningless but fictionally meaningful. With the make-believe approach, we can understand how chatbots can provide us with knowledge of the world through quasi-testimony while preserving our metasemantic theories. This account also helps to connect the study of chatbots with the epistemology of scientific instruments.
    13 citations
  • Real Feeling and Fictional Time in Human-AI Interactions. Joel Krueger & Tom Roberts - 2024 - Topoi 43 (3).
    As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person’s emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these encounters involve an elaborate practice of imaginative pretence: a make-believe in which the artificial agent is attributed a life of its own. We attend, specifically, to the temporal characteristics of these fictions, and to what we imagine artificial agents are doing when we are not looking at them.
    1 citation
  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then, this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I myself will raise a new concern about the argument’s use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument’s advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.
    2 citations
  • Why Indirect Harms do not Support Social Robot Rights. Paula Sweeney - 2022 - Minds and Machines 32 (4):735-749.
    There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from harm. I conclude that there is little evidence to support this claim and that legislation in this area would restrict progress in areas of social care where social robots are a potentially valuable resource.
    4 citations
  • Anthropomorphizing Machines: Reality or Popular Myth? Simon Coghlan - 2024 - Minds and Machines 34 (3):1-25.
    According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggest they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.
    1 citation
  • Self-Deception in Human–AI Emotional Relations. Emilia Kaczmarek - forthcoming - Journal of Applied Philosophy.
    Imagine a man chatting with his AI girlfriend app. He looks at his smartphone and says, ‘Finally, I'm being understood’. Is he deceiving himself? Is there anything morally wrong with it? The human tendency to anthropomorphize AI is well established, and the popularity of AI companions is growing. This article answers three questions: (1) How can being charmed by AI's simulated emotions be considered self-deception? (2) Why might we have an obligation to avoid harmless self-deception? (3) When is self-deception in emotional relationships with AI morally questionable, and can it be blameworthy? Regarding question 1, I describe being seduced by AI's simulated emotions as self-deception, where desires bias beliefs. In response to question 2, I outline two ways to justify a prima facie obligation to avoid harmless self-deception – instrumental and autotelic. For question 3, I highlight crucial factors to consider in assessing the blameworthiness of harmless self-deception, such as the emotional and cognitive competences of the self-deceiver, reasons for self-deception, and its consequences for one's predispositions, self-image, and other people. Moreover, I argue that the ethical requirement to avoid self-deception does not easily translate into attributing blame to others for being self-deceived.
  • Artefacts of Change: The Disruptive Nature of Humanoid Robots Beyond Classificatory Concerns. Cindy Friedman - 2025 - Science and Engineering Ethics 31 (2):1-17.
    One characteristic of socially disruptive technologies is that they have the potential to cause uncertainty about the application conditions of a concept, i.e., they are conceptually disruptive. Humanoid robots have done just this, as evidenced by discussions about whether, and under what conditions, humanoid robots could be classified as, for example, moral agents, moral patients, or legal and/or moral persons. This paper frames the disruptive effect of humanoid robots differently by taking the discussion beyond that of classificatory concerns. It does so by showing that humanoid robots are socially disruptive because they also transform how we experience and understand the world. Through inviting us to relate to a technological artefact as if it is human, humanoid robots have a profound impact upon the way in which we relate to different elements of our world. Specifically, I focus on three types of human relational experiences, and how the norms that surround them may be transformed by humanoid robots: (1) human-technology relations; (2) human-human relations; and (3) human-self relations. Anticipating the ways in which humanoid robots may change society is important given that once a technology is entrenched, it is difficult to counteract negative impacts. Therefore, we should try to anticipate them while we can still do something to prevent them. Since humanoid robots are currently relatively rudimentary, yet there is incentive to invest more in their development, it is now a good time to think carefully about how this technology may impact us.
  • Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication. Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. By analyzing the relational nature of human-social robot interaction, we discuss the connotation of such a virtual interactive environment that is similar to the emotional states aroused when reading novels. Building on this comprehension, we further demonstrate that manufacturers should carry out comprehensive Virtual Interactive Environment Indication (VIEI) measures during human-social robot interaction with a stricter sense of responsibility when applying social robots. Finally, we contemplate the potential contributions of virtual interactive environment indication to existing robot ethics guidelines.