  • Epistemic Courage. Jonathan Ichikawa - 2024 - Oxford: Oxford University Press.
    Epistemic Courage is a timely and thought-provoking exploration of the ethics of belief, which shows why epistemology is no mere academic abstraction – the question of what to believe couldn't be more urgent. Jonathan Ichikawa argues that a skeptical, negative bias about belief is connected to a conservative bias that reinforces the status quo.
  • Deepfakes and the epistemic apocalypse. Joshua Habgood-Coote - 2023 - Synthese 201 (3):1-23.
    [Author note: there is a video explainer of this paper on YouTube at the New Work in Philosophy channel (search for surname + deepfakes).] It is widely thought that deepfake videos are a significant and unprecedented threat to our epistemic practices. In some writing about deepfakes, manipulated videos appear as the harbingers of an unprecedented ‘epistemic apocalypse’. In this paper I want to take a critical look at some of the more catastrophic predictions about deepfake videos. I will argue for three claims: (1) that once we recognise the role of social norms in the epistemology of recordings, deepfakes are much less concerning, (2) that the history of photographic manipulation reveals some important precedents, correcting claims about the novelty of deepfakes, and (3) that proposed solutions to deepfakes have been overly focused on technological interventions. My overall goal is not so much to argue that deepfakes are not a problem, but to argue that behind concerns around deepfakes lies a more general class of social problems about the organisation of our epistemic practices.
  • Deepfakes, Deep Harms. Regina Rini & Leah Cohen - 2022 - Journal of Ethics and Social Philosophy 22 (2).
    Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance on to that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’ that digitally fuses the parts of different women to create pliable characters incapable of giving consent to their depiction. Next, we develop the idea of ‘illocutionary wronging’, in which an individual is forced to engage in speech acts they would prefer to avoid in order to deny or correct the misleading evidence of a publicized deepfake. Finally, we consider the risk that deepfakes may facilitate campaigns of ‘panoptic gaslighting’, where many systematically altered recordings of a single person's life undermine their memory, eroding their sense of self and ability to engage with others. Taken together, these harms illustrate the roles that social epistemology and technological vulnerabilities play in human ethical life.
  • The Epistemic Threat of Deepfakes. Don Fallis - 2020 - Philosophy and Technology 34 (4):623-643.
    Deepfakes are realistic videos created using new machine learning techniques rather than traditional photographic means. They tend to depict people saying and doing things that they did not actually say or do. In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not. Several philosophers have now issued similar warnings. In this paper, I offer an analysis of why deepfakes are such a serious threat to knowledge. Utilizing the account of information carrying recently developed by Brian Skyrms, I argue that deepfakes reduce the amount of information that videos carry to viewers. I conclude by drawing some implications of this analysis for addressing the epistemic threat of deepfakes.
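    A toy calculation can make the Skyrmsian claim concrete. On a simple log-likelihood-ratio reading of information carrying (a sketch for illustration only, not Fallis's own formalism; all probability values below are invented), cheap deepfakes raise the chance of convincing footage of events that never occurred, and thereby shrink the information a video carries:

        import math

        def information_carried(p_video_given_event: float,
                                p_video_given_no_event: float) -> float:
            # Bits a video carries about the event it depicts, read as a
            # log-likelihood ratio; a Skyrms-style toy measure, not Fallis's
            # exact formalism.
            return math.log2(p_video_given_event / p_video_given_no_event)

        # Before cheap deepfakes: convincing fake footage is rare.
        print(information_carried(0.9, 0.001))  # ~9.8 bits
        # After cheap deepfakes: convincing fakes are far more common.
        print(information_carried(0.9, 0.05))   # ~4.2 bits

    On this reading, deepfakes need not make videos worthless as evidence; they lower the likelihood ratio, and hence how far a video can rationally move a viewer's credence.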
  • Deepfakes, Pornography and Consent. Claire Benn - forthcoming - Philosophers' Imprint.
    Political deepfakes have prompted outcry about the diminishing trustworthiness of visual depictions, and the epistemic and political threat this poses. Yet this new technique is being used overwhelmingly to create pornography, raising the question of what, if anything, is wrong with the creation of deepfake pornography. Traditional objections focusing on the sexual abuse of those depicted fail to apply to deepfakes. Other objections—that the use and consumption of pornography harms the viewer or other (non-depicted) individuals—fail to explain the objection that a depicted person might have to the creation of deepfake pornography that utilises images of them. My argument offers such an explanation. It begins by noting that the creation of sexual images requires an act of consent, separate from any consent needed for the acts depicted. Once we have separated these out, we can see that a demand for consent can arise when a sexual image is of us, even when no sexual activity was actually engaged in, as in the case of deepfake pornography. I then demonstrate that there are two ways in which an image can be ‘of us’, both of which can exist in the case of deepfakes. Thus, I argue: if a person, their likeness, or their photograph is used to create pornography, their consent is required. Whenever the person depicted does not consent (or in the case of children, can’t consent), that person is wronged by the creation of deepfake pornography and has a claim against its production.
  • Doing your own research and other impossible acts of epistemic superheroism. Andrew Buzzell & Regina Rini - 2023 - Philosophical Psychology 36 (5):906-930.
    The COVID-19 pandemic has been accompanied by an “infodemic” of misinformation and conspiracy theory. This article points to three explanatory factors: the challenge of forming accurate beliefs when overwhelmed with information, an implausibly individualistic conception of epistemic virtue, and an adversarial information environment that suborns epistemic dependence. Normally we cope with the problems of informational excess by relying on other people, including sociotechnical systems that mediate testimony and evidence. But when we attempt to engage in epistemic “superheroics” – withholding trust from others and trying to figure it all out for ourselves – these can malfunction in ways that make us vulnerable to forming irrational beliefs. Some epistemic systems are prone to coalescing audiences around false conspiracy theories. This analysis affords a new perspective on philosophical efforts to understand conspiracy theories and other epistemic projects prone to collective irrationality.
  • Deepfakes, Fake Barns, and Knowledge from Videos. Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given that barn cases have posed a long-standing challenge for virtue-theoretic accounts of knowledge, I consider whether a similar challenge extends to deepfakes. In doing so, I consider how Duncan Pritchard’s recent anti-risk virtue epistemology meets the challenge. While Pritchard’s account avoids problems in traditional barn cases, I claim that it leads to local scepticism about knowledge from online videos in the case of deepfakes. I end by considering how two alternative virtue-theoretic approaches might vindicate our epistemic dependence on videos in an increasingly digital world.
  • Video on demand: what deepfakes do and how they harm. Keith Raymond Harris - 2021 - Synthese 199 (5-6):13373-13391.
    This paper defends two main theses related to emerging deepfake technology. First, fears that deepfakes will bring about epistemic catastrophe are overblown. Such concerns underappreciate that the evidential power of video derives not solely from its content, but also from its source. An audience may find even the most realistic video evidence unconvincing when it is delivered by a dubious source. At the same time, an audience may find even weak video evidence compelling so long as it is delivered by a trusted source. The growing prominence of deepfake content is unlikely to change this fundamental dynamic. Thus, through appropriate patterns of trust, whatever epistemic threat deepfakes pose can be substantially mitigated. The second main thesis is that focusing on deepfakes that are intended to deceive, as epistemological discussions of the technology tend to do, threatens to overlook a further effect of this technology. Even where deepfake content is not regarded by its audience as veridical, it may cause its viewers to develop psychological associations based on that content. These associations, even without rising to the level of belief, may be harmful to the individuals depicted and more generally. Moreover, these associations may develop in cases in which the video content is realistic, but the audience is dubious of the content in virtue of skepticism toward its source. Thus, even if—as I suggest—epistemological concerns about deepfakes are overblown, deepfakes may nonetheless be psychologically impactful and may do great harm.
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such technologies depends on institutional trust that is in short supply. Finally, outsourcing the discrimination between the real and the fake to automated, largely opaque systems runs the risk of undermining epistemic autonomy.
  • Liars and Trolls and Bots Online: The Problem of Fake Persons. Keith Raymond Harris - 2023 - Philosophy and Technology 36 (2):1-19.
    This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons.
  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, and that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues that misinformation presents a three-pronged threat to knowledge. While researchers often focus on the role of misinformation in causing false beliefs, this deceptive potential of misinformation exists alongside the potential to suppress trust and to distort the perception of evidence. Recognizing the multi-faceted nature of this threat is essential to the development of effective measures to mitigate the harms associated with misinformation. The book weaves together work in analytic epistemology with emerging empirical work in other disciplines to offer novel insights into the threats posed by misinformation. Additionally, it breaks new ground by systematically assessing different forms of content moderation from the perspective of epistemology. Misinformation, Content Moderation, and Epistemology will appeal to philosophers working in applied and social epistemology, as well as scholars and advanced students in disciplines such as communication studies, political science, and social psychology who are researching misinformation.
  • Synthetic Media Detection, the Wheel, and the Burden of Proof. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (4):1-20.
    Deepfakes and other forms of synthetic media are widely regarded as serious threats to our knowledge of the world. Various technological responses to these threats have been proposed. The reactive approach proposes to use artificial intelligence to identify synthetic media. The proactive approach proposes to use blockchain and related technologies to create immutable records of verified media content. I argue that both approaches, but especially the reactive approach, are vulnerable to a problem analogous to the ancient problem of the criterion—a line of argument with skeptical implications. I argue that, while the proactive approach is relatively resistant to this objection, it faces its own serious challenges. In short, the proactive approach would place a heavy burden on users to verify their own content, a burden that is exacerbated by and is likely to exacerbate existing inequalities.
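    The contrast between the two approaches can be made concrete with a minimal sketch of the proactive, provenance-based idea (illustrative field names only; this is not any real standard such as C2PA). It also makes the burden-of-proof worry visible: only media hashed and logged at capture time can later be verified, so anything unlogged is unverifiable regardless of whether it is genuine:

        import hashlib
        import json
        import time

        def record_capture(media: bytes, device_id: str, prev_hash: str) -> dict:
            # Build a provenance record binding the media file's hash to its
            # capture context; chaining on prev_hash makes tampering evident.
            entry = {
                "media_sha256": hashlib.sha256(media).hexdigest(),
                "device_id": device_id,
                "timestamp": time.time(),
                "prev_hash": prev_hash,
            }
            entry["entry_hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            return entry

        def is_verified(media: bytes, entry: dict) -> bool:
            # A file counts as verified only if it matches a logged record;
            # unlogged content fails this check even when it is authentic.
            return hashlib.sha256(media).hexdigest() == entry["media_sha256"]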
  • Deepfakes: a survey and introduction to the topical collection. Dan Cavedon-Taylor - 2024 - Synthese 204 (1):1-19.
    Deepfakes are extremely realistic audio/video media. They are produced via a complex machine-learning process, one that centrally involves training an algorithm on hundreds or thousands of audio/video recordings of an object or person, S, with the aim of either creating entirely new audio/video media of S or else altering existing audio/video media of S. Deepfakes are widely predicted to have deleterious consequences (principally, moral and epistemic ones) for both individuals and various of our social practices and institutions. In this introduction to the Topical Collection, I first survey existing philosophical research on deepfakes (Sects. 2 and 3). I then give an overview of the papers that comprise the Collection (Sect. 4). Finally, I conclude with remarks on a line of argument made in a number of papers in the Topical Collection: that deepfakes may cause their own demise (Sect. 5).
  • Deepfakes, Intellectual Cynics, and the Cultivation of Digital Sensibility. Taylor Matthews - 2022 - Royal Institute of Philosophy Supplement 92:67-85.
    In recent years, a number of philosophers have turned their attention to developments in Artificial Intelligence, and in particular to deepfakes. A deepfake is a portmanteau of ‘deep learning’ and ‘fake’, and for the most part they are videos which depict people doing and saying things they never did. As a result, much of the emerging literature on deepfakes has turned on questions of trust, harms, and information-sharing. In this paper, I add to the emerging concerns around deepfakes by drawing on resources from vice epistemology. As deepfakes become more sophisticated, I claim, they will develop to be a source of online epistemic corruption. More specifically, they will encourage consumers of digital online media to cultivate and manifest various epistemic vices. My immediate focus in this paper is on their propensity to encourage the development of what I call ‘intellectual cynicism’. After sketching a rough account of this epistemic vice, I go on to suggest that we can partially offset such cynicism – and fears around deceptive online media more generally – by encouraging the development of what I term a trained ‘digital sensibility’. This, I contend, involves a calibrated sensitivity to the epistemic merits of online content.
  • Conceptual and moral ambiguities of deepfakes: a decidedly old turn. Matthew Crippen - 2023 - Synthese 202 (1):1-18.
    Everyday (mis)uses of deepfakes define prevailing conceptualizations of what they are and the moral stakes in their deployment. But one complication in understanding deepfakes is that they are not photographic yet nonetheless manipulate lens-based recordings with the intent of mimicking photographs. The harmfulness of deepfakes, moreover, significantly depends on their potential to be mistaken for photographs and on the belief that photographs capture actual events, a tenet known as the transparency thesis, which scholars have somewhat ironically attacked by citing digital imaging techniques as counterexamples. Combining these positions, this paper sets out two core points: (1) that conceptions about the nature of photography introduce imperatives about its uses; and (2) that popular cultural understandings of photography imply normative ideas that infuse our encounters with deepfakes. Within this, I further raise the question of what moral ground deepfakes occupy that allows them to have such a potentially devastating effect. I show that answering this question involves reinstating the notion that photographs are popularly conceived of as transparent. The rejoinder to this argument, however, is that to take the sting out of deepfakes we must, once again, become skeptical of the veracity of all images, including photoreal ones. This kind of critical mindedness was warranted before the invention of photography for both pictorial imprints and written accounts from various media sources. Given this, along with the fact that photographic trickery is nothing new, deepfakes need not push us into a post-truth epistemic abyss, for they imply a decidedly old turn.
  • The Distinct Wrong of Deepfakes. Adrienne de Ruiter - 2021 - Philosophy and Technology 34 (4):1311-1332.
    Deepfake technology presents significant ethical challenges. The ability to produce realistic looking and sounding video or audio files of people doing or saying things they did not do or say brings with it unprecedented opportunities for deception. The literature that addresses the ethical implications of deepfakes raises concerns about their potential use for blackmail, intimidation, and sabotage, ideological influencing, and incitement to violence as well as broader implications for trust and accountability. While this literature importantly identifies and signals the potentially far-reaching consequences, less attention is paid to the moral dimensions of deepfake technology and deepfakes themselves. This article will help fill this gap by analysing whether deepfake technology and deepfakes are intrinsically morally wrong, and if so, why. The main argument is that deepfake technology and deepfakes are morally suspect, but not inherently morally wrong. Three factors are central to determining whether a deepfake is morally problematic: whether the deepfaked person would object to the way in which they are represented; whether the deepfake deceives viewers; and the intent with which the deepfake was created. The most distinctive aspect that renders deepfakes morally wrong is when they use digital data representing the image and/or voice of persons to portray them in ways in which they would be unwilling to be portrayed. Since our image and voice are closely linked to our identity, protection against the manipulation of hyper-realistic digital representations of our image and voice should be considered a fundamental moral right in the age of deepfakes.
  • Real Fakes: The Epistemology of Online Misinformation. Keith Raymond Harris - 2022 - Philosophy and Technology 35 (3):1-24.
    Many of our beliefs are acquired online. Online epistemic environments are replete with fake news, fake science, fake photographs and videos, and fake people in the form of trolls and social bots. The purpose of this paper is to investigate the threat that such online fakes pose to the acquisition of knowledge. I argue that fakes can interfere with one or more of the truth, belief, and warrant conditions on knowledge. I devote most of my attention to the effects of online fakes on satisfaction of the warrant condition, as these have received comparatively little attention. I consider three accounts of the conditions under which fakes compromise the warrant condition. I argue for the third of these accounts, according to which the propensity of fakes to exist in an environment threatens warrant acquisition in that environment. Finally, I consider some limitations on the epistemic threat of fakes and suggest some strategies by which this threat can be mitigated.
  • Generative AI and photographic transparency. P. D. Magnus - 2025 - AI and Society 40 (3):1607-1612.
    There is a history of thinking that photographs provide a special kind of access to the objects depicted in them, beyond the access that would be provided by a painting or drawing. What is included in the photograph does not depend on the photographer’s beliefs about what is in front of the camera. This feature leads Kendall Walton to argue that photographs literally allow us to see the objects which appear in them. Current generative algorithms produce images in response to users’ text prompts. Depending on the parameters, the output can resemble specific people or things which are named in the prompt. This resemblance does not depend on the user’s beliefs, so generated images are in this sense like photographs. Given this parallel, how should we think about AI-generated images?
  • Deepfakes and Dishonesty. Tobias Flattery & Christian B. Miller - 2024 - Philosophy and Technology 37 (120):1-24.
    Deepfakes raise various concerns: risks of political destabilization, depictions of persons without consent and causing them harms, erosion of trust in video and audio as reliable sources of evidence, and more. These concerns have been the focus of recent work in the philosophical literature on deepfakes. However, there has been almost no sustained philosophical analysis of deepfakes from the perspective of concerns about honesty and dishonesty. That deepfakes are potentially deceptive is unsurprising and has been noted. But under what conditions does the use of deepfakes fail to be honest? And which human agents, involved in one way or another in a deepfake, fail to be honest, and in what ways? If we are to understand better the morality of deepfakes, these questions need answering. Our first goal in this paper, therefore, is to offer an analysis of paradigmatic cases of deepfakes in light of the philosophy of honesty. While it is clear that many deepfakes are morally problematic, there has been a rising counter-chorus claiming that deepfakes are not essentially morally bad, since there might be uses of deepfakes that are not morally wrong, or even that are morally salutary, for instance, in education, entertainment, activism, and other areas. However, while there are reasons to think that deepfakes can supply or support moral goods, it is nevertheless possible that even these uses of deepfakes are dishonest. Our second goal in this paper, therefore, is to apply our analysis of deepfakes and honesty to the sorts of deepfakes hoped to be morally good or at least neutral. We conclude that, perhaps surprisingly, in many of these cases the use of deepfakes will be dishonest in some respects. Of course, there will be cases of deepfakes for which verdicts about honesty and moral permissibility do not line up. While we will sometimes suggest reasons why moral permissibility verdicts might diverge from honesty verdicts, we will not aim to settle matters of moral permissibility.
  • Privacy rights and ‘naked’ statistical evidence. Lauritz Aastrup Munch - 2021 - Philosophical Studies 178 (11):3777-3795.
    Do privacy rights restrict what is permissible to infer about others based on statistical evidence? This paper replies affirmatively by defending the following symmetry: there is not necessarily a morally relevant difference between directly appropriating people’s private information—say, by using an X-ray device on their private safes—and using predictive technologies to infer the same content, at least in cases where the evidence has a roughly similar probative value. This conclusion is of theoretical interest because a comprehensive justification of the thought that statistical inferences can violate privacy rights is lacking in the current literature. Secondly, the conclusion is of practical interest due to the need for moral assessment of emerging predictive algorithms.
  • How to do things with deepfakes. Tom Roberts - 2023 - Synthese 201 (2):1-18.
    In this paper, I draw a distinction between two types of deepfake, and unpack the deceptive strategies that are made possible by the second. The first category, which has been the focus of existing literature on the topic, consists of those deepfakes that act as a fabricated record of events, talk, and action, where any utterances included in the footage are not addressed to the audience of the deepfake. For instance, a fake video of two politicians conversing with one another. The second category consists of those deepfakes that direct an illocutionary speech act – such as a request, injunction, invitation, or promise – to an addressee who is located outside of the recording. For instance, fake footage of a company director instructing their employee to make a payment, or of a military official urging the populace to flee for safety. Whereas the former category may deceive an audience by giving rise to false beliefs, the latter can more directly manipulate an agent's actions: the speech act's addressee may be moved to accept an invitation or a summons, follow a command, or heed a warning, and in doing so further a deceiver's unethical ends.
  • Beyond Porn and Discreditation: Epistemic Promises and Perils of Deepfake Technology in Digital Lifeworlds. Mathias Risse & Catherine Kerner - 2021 - Moral Philosophy and Politics 8 (1):81-108.
    Deepfakes are a new form of synthetic media that broke upon the world in 2017. Bringing photoshopping to video, deepfakes replace people in existing videos with someone else’s likeness. Currently most of their reach is limited to pornography, and they are also used to discredit people. However, deepfake technology has many epistemic promises and perils, which concern how we fare as knowers. Our goal is to help set an agenda around these matters, to make sure this technology can help realize epistemic rights and epistemic justice and unleash human creativity, rather than inflict epistemic wrongs of any sort. Our project is exploratory in nature, and we do not aim to offer conclusive answers at this early stage. There is a need to remain vigilant to make sure the downsides do not outweigh the upsides, and that will be a tall order.
  • Deep learning and synthetic media. Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic audiovisual media, its place within a broader taxonomy of audiovisual media, and how deep learning techniques differ from more traditional approaches to media synthesis. After reviewing important etiological features of deep learning pipelines for media manipulation and generation, I argue that “deepfakes” and related synthetic media produced with such pipelines do not merely offer incremental improvements over previous methods, but challenge traditional taxonomical distinctions, and pave the way for genuinely novel kinds of audiovisual media.
  • Technology and moral change: the transformation of truth and trust. Henrik Skaug Sætra & John Danaher - 2022 - Ethics and Information Technology 24 (3):1-16.
    Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely-related ones; increasing their perceived scarcity/abundance; and disrupting traditional value-gatekeepers. This has implications for how we study other, technologically-mediated, value changes.
  • Skepticism and the Digital Information Environment. Matthew Carlson - 2021 - SATS 22 (2):149-167.
    Deepfakes are audio, video, or still-image digital artifacts created by the use of artificial intelligence technology, as opposed to traditional means of recording. Because deepfakes can look and sound much like genuine digital recordings, they have entered the popular imagination as sources of serious epistemic problems for us, as we attempt to navigate the increasingly treacherous digital information environment of the internet. In this paper, I attempt to clarify what epistemic problems deepfakes pose and why they pose these problems, by drawing parallels between recordings and our own senses as sources of evidence. I show that deepfakes threaten to undermine the status of digital recordings as evidence. The existence of deepfakes thus encourages a kind of skepticism about digital recordings that bears important similarities to classic philosophical skepticism concerning the senses. However, the skepticism concerning digital recordings that deepfakes motivate is also importantly different from classical skepticism concerning the senses, and I argue that these differences illuminate some possible strategies for solving the epistemic problems posed by deepfakes.
  • Deepfakes and Democracy: A Catch-22? Dan Cavedon-Taylor - forthcoming - Journal of the American Philosophical Association:1-20.
    Deepfakes are AI-generated media. When produced competently, they are near-indistinguishable from genuine recordings and may mislead viewers about the actions of the individuals they depict. For this reason, it is thought to be only a matter of time before deepfakes have deleterious consequences for democratic procedures, elections in particular. But this pessimistic view about deepfakes and their relation to democracy is flawed whether it means to pick out current deepfakes or future ones. Rather than advocating for an optimistic view in its place, I outline the opposite: a nihilistic account of deepfakes and their relation to democracy. On the nihilistic view, the harms that deepfakes pose for democracy are significantly more serious than those implied by the pessimistic view. Nihilism says that the real threat that deepfakes pose for democracy is that their existence counts against reforming current politics to be more truth-oriented.
  • Social virtue epistemology and epistemic exactingness. Keith Raymond Harris - forthcoming - Episteme:1-16.
    Who deserves credit for epistemic successes, and who is to blame for epistemic failures? Extreme views, which would place responsibility either solely on the individual or solely on the individual’s surrounding environment, are not plausible. Recently, progress has been made toward articulating virtue epistemology as a suitable middle ground. A socio-environmentally oriented virtue epistemology can recognize that an individual’s traits play an important role in shaping what that individual believes, while also recognizing that some of the most efficacious individual traits have to do with how individuals structure their epistemic environments and how they respond to information received within these environments. I contribute to the development of such an epistemology by introducing and elucidating the virtue of epistemic exactingness, which is characterized by a motivation to regulate the epistemically significant conduct of others.
  • Deepfakes, shallow epistemic graves: On the epistemic robustness of photography and videos in the era of deepfakes. Paloma Atencia-Linares & Marc Artiga - 2022 - Synthese 200 (6):1-22.
    The recent proliferation of deepfakes and other digitally produced deceptive representations has revived the debate on the epistemic robustness of photography and other mechanically produced images. Authors such as Rini (2020) and Fallis (2021) claim that the proliferation of deepfakes poses a serious threat to the reliability and the epistemic value of photographs and videos. In particular, Fallis adopts a Skyrmsian account of how signals carry information (Skyrms, 2010) to argue that the existence of deepfakes significantly reduces the information that images carry about the world, which undermines their reliability as a source of evidence. In this paper, we focus on Fallis’ version of the challenge, but our results can be generalized to address similar pessimistic views such as Rini’s. More generally, we offer an account of the epistemic robustness of photography and videos that allows us to understand these systems of representation as continuous with other means of information transmission we find in nature. This account will then give us the necessary tools to put Fallis’ claims into perspective: using a richer approach to animal signaling based on the signaling model of communication (Maynard-Smith and Harper, 2003), we will claim that, while it might be true that deepfake technology increases the probability of obtaining false positives, the dimension of the epistemic threat involved might still be negligible.
  • Neuromedia, Cognitive Offloading, and Intellectual Perseverance. Cody Turner - 2022 - Synthese 200 (1):1-26.
    This paper engages in what might be called anticipatory virtue epistemology, as it anticipates some virtue epistemological risks related to a near-future version of brain-computer interface technology that Michael Lynch (2014) calls ‘neuromedia’. I analyze how neuromedia is poised to negatively affect the intellectual character of agents, focusing specifically on the virtue of intellectual perseverance, which involves a disposition to mentally persist in the face of challenges towards the realization of one’s intellectual goals. First, I present and motivate what I call ‘the cognitive offloading argument’, which holds that excessive cognitive offloading of the sort incentivized by a device like neuromedia threatens to undermine intellectual virtue development from the standpoint of the theory of virtue responsibilism. Then, I examine the cognitive offloading argument as it applies to the virtue of intellectual perseverance, arguing that neuromedia may increase cognitive efficiency at the cost of intellectual perseverance. If used in an epistemically responsible manner, however, cognitive offloading devices may not undermine intellectual perseverance but instead allow us to persevere with respect to intellectual goals that we find more valuable by freeing us from different kinds of menial intellectual labor.
  • The identification game: deepfakes and the epistemic limits of identity. Carl Öhman - 2022 - Synthese 200 (4):1-19.
    The fast development of synthetic media, commonly known as deepfakes, has cast new light on an old problem, namely—to what extent do people have a moral claim to their likeness, including personally distinguishing features such as their voice or face? That people have at least some such claim seems uncontroversial. In fact, several jurisdictions already combat deepfakes by appealing to a “right to identity.” Yet, an individual’s disapproval of appearing in a piece of synthetic media is sensible only insofar as the replication is successful. There has to be some form of identity between the content and the natural person. The question, therefore, is how this identity can be established. How can we know whether the face or voice featured in a piece of synthetic content belongs to a person who makes claim to it? On a trivial level, this may seem an easy task—the person in the video is A insofar as he or she is recognised as being A. Providing more rigorous criteria, however, poses a serious challenge. In this paper, I draw on Turing’s imitation game, and Floridi’s method of levels of abstraction, to propose a heuristic to this end. I call it the identification game. Using this heuristic, I show that identity cannot be established independently of the purpose of the inquiry. More specifically, I argue that whether a person has a moral claim to content that allegedly uses their identity depends on the type of harm under consideration.
  • VII—Reflecting, Registering, Recording and Representing: From Light Image to Photographic Picture. Dawn M. Wilson - 2022 - Proceedings of the Aristotelian Society 122 (2):141-164.
    Photography is valued as a medium for recording and visually reproducing features of the world. I seek to challenge the view that photography is fundamentally a recording process and that every photograph is a record—a view that I claim is based on a ‘single-stage’ misconception of the process. I propose an alternative, ‘multi-stage’ account in which I argue that causal registration of light is not equivalent to recording and reproducing an image. Intervention or non-intervention by photographers is more sophisticated than the traditional view allows. Using the multi-stage account, I describe four models for producing photographic images and pictures.
  • Descartes's Clarity First Epistemology. Elliot Samuel Paul - forthcoming - In Kurt Sylvan, Ernest Sosa, Jonathan Dancy & Matthias Steup, The Blackwell Companion to Epistemology, 3rd edition. Wiley Blackwell.
    Descartes has a Clarity First epistemology: (i) clarity is a primitive (indefinable) phenomenal quality, the appearance of truth; (ii) clarity is prior to other qualities: obscurity, confusion, and distinctness are defined in terms of clarity, while epistemic goods – reason to assent, rational inclination to assent, reliability, and knowledge – are explained by clarity. (This is the first of two companion entries; the sequel is called "Descartes's Method for Achieving Knowledge".)
  • Deepfake Pornography and the Ethics of Non-Veridical Representations. Daniel Story & Ryan Jenkins - 2023 - Philosophy and Technology 36 (3):1-22.
    We investigate the question of whether (and if so why) creating or distributing deepfake pornography of someone without their consent is inherently objectionable. We argue that nonconsensually distributing deepfake pornography of a living person on the internet is inherently pro tanto wrong in virtue of the fact that nonconsensually distributing intentionally non-veridical representations about someone violates their right that their social identity not be tampered with, a right which is grounded in their interest in being able to exercise autonomy over their social relations with others. We go on to suggest that nonconsensual deepfakes are especially worrisome in connection with this right because they have a high degree of phenomenal immediacy, a property which corresponds inversely to the ease with which a representation can be doubted. We then suggest that nonconsensually creating and privately consuming deepfake pornography is worrisome but may not be inherently pro tanto wrong. Finally, we discuss the special issue of whether nonconsensually distributing deepfake pornography of a deceased person is inherently objectionable. We argue that the answer depends on how long it has been since the person died.
  • Applied Epistemology: What Is It? Why Do It? Alex Worsnip - forthcoming - In Tamar Szabó Gendler, John Hawthorne, Julianne Chung & Alex Worsnip, Oxford Studies in Epistemology, Vol. 8. Oxford University Press.
    The remaining seven papers (eight, if you count this introductory piece) in this volume of Oxford Studies in Epistemology constitute a special issue on applied epistemology, an exciting, novel, and currently burgeoning subfield of epistemology. The term ‘applied epistemology’ is a relatively recent one, however, and anecdotally, many people I’ve encountered are not quite sure what it denotes, or what different works within the field have in common. In this introductory piece, I’ll venture some views about these questions, and about why applied epistemology is worth doing, as well as about its dangers. Doing so will set the stage for me to situate the papers in this volume within the subfield.
  • Against Imprinting: The Photographic Image as a Source of Evidence. Dawn M. Wilson - 2022 - Social Research: An International Quarterly 89 (4):947-969.
    A photographic image is said to provide evidence of a photographed scene because it is a causal imprint of reflected light, an indexical trace of real objects and events. Though widely established in the history, theory, and philosophy of photography, this traditional imprinting model must be rejected because it relies on a “single-stage” misconception of the photographic process: the idea that a photographic image comes into existence at the time of exposure. In its place, a “multistage” account properly articulates different production stages, such as registering and rendering, that are relevant to understanding the relation between a photographic image and the photographed scene. By denying that any photographic image is a causal imprint, the multistage approach proposes a more demanding evaluation of photographic evidence. This has implications for documentary film and photojournalism, along with specialized applications such as forensics, surveillance, and face-recognition technology.
  • The Ethics and Epistemology of Deepfakes. Taylor Matthews & Ian James Kidd - 2024 - In Carl Fox & Joe Saunders, Routledge Handbook of Philosophy and Media Ethics. Routledge.
  • Synthetic Socio-Technical Systems: Poiêsis as Meaning Making. Piercosma Bisconti, Andrew McIntyre & Federica Russo - 2024 - Philosophy and Technology 37 (3):1-19.
    With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets have enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society. In this paper, we explain that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies and calls for a reconceptualization of socio-technical systems as we currently understand them. We dub this new generation of socio-technical systems synthetic to signal the increased interactions between human and artificial agents, and, in the footsteps of philosophers of information, we cash out agency in terms of ‘poiêsis’. We close the paper with a discussion of the potential policy implications of synthetic socio-technical systems.
  • Higher-order misinformation. Keith Raymond Harris - 2024 - Synthese 204 (4):1-18.
    Experts are sharply divided concerning the prevalence and influence of misinformation. Some have emphasized the severe epistemic and political threats posed by misinformation and have argued that some such threats have been realized in the real world. Others have argued that such concerns overstate the prevalence of misinformation and the gullibility of ordinary persons. Rather than taking a stand on this issue, I consider what would follow from the supposition that this latter perspective is correct. I argue that, if the prevalence and influence of misinformation are indeed overstated, then many reports as to the prevalence and influence of misinformation constitute a kind of higher-order misinformation. I argue that higher-order misinformation presents its own challenges. In particular, higher-order misinformation, ironically, would lend credibility to the very misinformation whose influence it exaggerates. Additionally, higher-order misinformation would lead to underestimations of the reasons favoring opposing views. In short, higher-order misinformation constitutes misleading higher-order evidence concerning the quality of the evidence on which individuals form their beliefs.
  • Smoke Machines. Keith Raymond Harris - 2025 - American Philosophical Quarterly 62 (1):69-86.
    Emotive artificial intelligences are physically or virtually embodied entities whose behavior is driven by artificial intelligence, and which use expressions usually associated with emotion to enhance communication. These entities are sometimes thought to be deceptive, insofar as their emotive expressions are not connected to genuine underlying emotions. In this paper, I argue that such entities are indeed deceptive, at least given a sufficiently broad construal of deception. But, while philosophers and other commentators have drawn attention to the deceptive threat of emotive artificial intelligences, I argue that such entities also pose an overlooked skeptical threat. In short, the widespread existence of emotive signals disconnected from underlying emotions threatens to encourage skepticism of such signals more generally, including emotive signals used by human persons. Thus, while designing artificially intelligent entities to use emotive signals is thought to facilitate human-AI interaction, this practice runs the risk of compromising human-human interaction.
  • Should we Trust Our Feeds? Social Media, Misinformation, and the Epistemology of Testimony. Charles Côté-Bouchard - 2024 - Topoi 43 (5):1469-1486.
    When should you believe testimony that you receive from your social media feeds? One natural answer is suggested by non-reductionism in the epistemology of testimony. Avoid accepting social media testimony if you have an undefeated defeater for it. Otherwise, you may accept it. I argue that this is too permissive to be an adequate epistemic policy because social media have some characteristics that tend to facilitate the efficacy of misinformation on those platforms. I formulate and defend an alternative epistemic policy for belief on social media, which is inspired by reductionism in the epistemology of testimony.
  • The politics of past and future: synthetic media, showing, and telling. Megan Hyska - 2025 - Philosophical Studies 182 (1):137-158.
    Generative artificial intelligence has given us synthetic media that are increasingly easy to create and increasingly hard to distinguish from photographs and videos. Whereas an existing literature has been concerned with how these new media might make a difference for would-be knowers—the viewers of photographs and videos—I advance a thesis about how they will make a difference for would-be communicators—those who embed photos and videos in their speech acts. I claim that the presence of these media in our information environment reduces our ability to show one another things, even as it may increase our resources for telling. And I argue that this has consequences beyond the disruption of knowledge acquisition; showing is a way that we preserve relational equality through superficial asymmetries in political communication, and thereby express respect for our audiences. If synthetic media reduce our options for showing, they then interfere in the way that we manage our relationships in the context of collective political action.
  • Subordinating Speech and the Construction of Social Hierarchies. Michael Randall Barnes - 2019 - Dissertation, Georgetown University.
    This dissertation fits within the literature on subordinating speech and aims to demonstrate that how language subordinates is more complex than has been described by most philosophers. I argue that the harms that subordinating speech inflicts on its targets (chapter one), the type of authority that is exercised by subordinating speakers (chapters two and three), and the expansive variety of subordinating speech acts themselves (chapter three) are all under-developed subjects in need of further refinement—and, in some cases, large paradigm shifts. I also examine cases that have yet to be adequately addressed by philosophers working on this topic, like the explosion of abusive speech online (chapter four) or the distinctive speech acts of protest groups (chapter five). I argue that by considering these alongside the ‘paradigm’ cases of subordinating speech that inform most models, we are better able to capture the lived realities of this phenomenon, as described by members of groups targeted by such speech. I develop a novel account of speaker authority to explain the variety of pragmatic effects subordinating speech generates. Instead of seeing this authority as reducible to either a formal position or a merely local, linguistic phenomenon, I argue for a conception of speaker authority that is a richly contextual social fact, distributed unevenly among members of different social groups. I also develop an account of collective authority that explains how a group of speakers can join together to subordinate in a way that no individual speaker is capable of doing. This account, I argue, is better able to explain the social reality of subordinating speech than individualist models. Overall, I show how a more fine-grained account of subordinating speaker authority gives us a more accurate picture of the different subordinating speech acts available to different speakers, along with how these may harm their targets.
  • Your Prompt is my command: On Assessing the Human-Centred Generality of Multimodal Models. Wout Schellaert, Fernando Martínez-Plumed, Karina Vold, John Burden, Pablo A. M. Casares, Bao Sheng Loe, Roi Reichart, Sean Ó hÉigeartaigh, Anna Korhonen & José Hernández-Orallo - 2023 - Journal of Artificial Intelligence Research 77.
    Even with obvious deficiencies, large prompt-commanded multimodal models are proving to be flexible cognitive tools representing an unprecedented generality. But the directness, diversity, and degree of user interaction create a distinctive “human-centred generality” (HCG), rather than a fully autonomous one. HCG implies that—for a specific user—a system is only as general as it is effective for the user’s relevant tasks and their prevalent ways of prompting. A human-centred evaluation of general-purpose AI systems therefore needs to reflect the personal nature of interaction, tasks and cognition. We argue that the best way to understand these systems is as highly-coupled cognitive extenders, and to analyse the bidirectional cognitive adaptations between them and humans. In this paper, we give a formulation of HCG, as well as a high-level overview of the elements and trade-offs involved in the prompting process. We end the paper by outlining some essential research questions and suggestions for improving evaluation practices, which we envision as characteristic for the evaluation of general artificial intelligence in the future.
  • Automated Propaganda: Labeling AI-Generated Political Content Should Not be Required by Law. Bartlomiej Chomanski & Lode Lauwaert - forthcoming - Journal of Applied Philosophy.
    A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address this concern is to pass laws that, by requiring that artificially generated and artificially disseminated content be labeled as such, aim to ensure a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. We first aim to show that legally requiring the disclosure of the automated nature of bot accounts and AI-generated content is unlikely to succeed in improving the quality of political discussion on social media. This is because information that an account spreading or creating political information is a bot or a language model is itself politically relevant information, and people reason very poorly about such information. Second, we aim to show that the main motivation for these laws – the threat of coordinated disinformation campaigns (automated or not) – appears overstated.
  • A Polarization-Containing Ethics of Campaign Advertising. Attila Mráz - 2023 - Analyse & Kritik 45 (1):111-135.
    (OPEN ACCESS) This paper establishes moral duties for intermediaries of political advertising in election campaigns. First, I argue for a collective duty to maintain the democratic quality of elections which entails a duty to contain some forms of political polarization. Second, I show that the focus of campaign ethics on candidates, parties and voters—ignoring the mediators of campaigns—yields mistaken conclusions about how the burdens of the latter collective duty should be distributed. Third, I show why it is fair to require intermediaries to contribute to fulfilling this duty: they have an ultimate filtering position in the campaign communication process and typically benefit from political advertising and polarization. Finally, I argue that a transparency-based ethics of campaign advertising cannot properly accommodate a concern with objectionable polarization. By contrast, I outline the polarization-containing implications of my account, including a prohibition on online targeted advertising, and intermediaries’ duties to block hateful political advertising.
  • AI and Democratic Equality: How Surveillance Capitalism and Computational Propaganda Threaten Democracy. Ashton Black - 2024 - In Bernhard Steffen, Bridging the Gap Between AI and Reality. Springer Nature. pp. 333-347.
    In this paper, I argue that surveillance capitalism and computational propaganda can undermine democratic equality. First, I argue that two types of resources are relevant for democratic equality: 1) free time, which entails time that is free from systemic surveillance, and 2) epistemic resources. In order for everyone in a democratic system to be equally capable of full political participation, it’s a minimum requirement that these two resources are distributed fairly. But AI that’s used for surveillance capitalism can undermine the fair distribution of these resources, thereby threatening democracy. I further argue that computational propaganda undermines the democratic aim of collective self-determination by normalizing relations of domination and thereby disrupting the equal standing of persons. I conclude by considering some potential solutions.