  1. Something AI Should Tell You – The Case for Labelling Synthetic Content. Sarah A. Fisher - 2025 - Journal of Applied Philosophy 42 (1):272-286.
    Synthetic content, which has been produced by generative artificial intelligence, is beginning to spread through the public sphere. Increasingly, we find ourselves exposed to convincing ‘deepfakes’ and powerful chatbots in our online environments. How should we mitigate the emerging risks to individuals and society? This article argues that labelling synthetic content in public forums is an essential first step. While calls for labelling have already been growing in volume, no principled argument has yet been offered to justify this measure (which inevitably comes with some additional costs). Rectifying that deficit, I conduct a close examination of our epistemic and expressive interests in identifying synthetic content as such. In so doing, I develop a cumulative case for social media platforms to enforce a labelling duty. I argue that this represents an important element of good platform governance, helping to shore up the integrity of our contemporary public discourse, which takes place increasingly online.
  2. Social Evidence Tampering and the Epistemology of Content Moderation. Keith Raymond Harris - 2024 - Topoi 43 (5):1421-1431.
    Social media misinformation is widely thought to pose a host of threats to the acquisition of knowledge. One response to these threats is to remove misleading information from social media and to de-platform those who spread it. While content moderation of this sort has been criticized on various grounds, including potential incompatibility with free expression, the epistemic case for the removal of misinformation from social media has received little scrutiny. Here, I provide an overview of some costs and benefits of the removal of misinformation from social media. On the one hand, removing misinformation from social media can promote knowledge acquisition by removing misleading evidence from online social epistemic environments. On the other hand, such removals require the exercise of power over evidence by content moderators. As I argue, such exercises of power can encourage suspicions on the part of social media users and can compromise the force of the evidence possessed by such users. For these reasons, the removal of misinformation from social media poses its own threats to knowledge.