  • A neo-aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1): 47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (2019) essay nor Formosa and Ryan's (2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of "both empirical and intuitive support" (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for "argumentative breadth over depth", meaning to provide "the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs" (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan's defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.
  • Why AI May Undermine Phronesis and What to Do about It. Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity whose possession enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. The critic asks: why should we worry that AI undermines phronesis? Why can't we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge this objection but do not seriously consider it. In this paper, we argue that there is a way to reconcile Eisikovits and Feldman with their critic by adopting the principle of epistemic heed, according to which we should exercise our rational capacity as much as possible while heeding a superintelligence's output whenever possible.
  • Artificial Epistemic Authorities. Rico Hauswald - forthcoming - Social Epistemology.
    While AI systems are increasingly assuming roles traditionally occupied by human epistemic authorities (EAs), their epistemological status remains unclear. This paper aims to address this lacuna by assessing the potential for AI systems to be recognized as artificial epistemic authorities. In a first step, I examine the arguments against considering AI systems as EAs, in particular the established model of EAs as engaging in intentional belief transfer via testimony to laypeople – a process seemingly inapplicable to intentionless and beliefless AI. Despite this, AI systems exhibit striking epistemic parallels with human EAs, including epistemic asymmetry and opacity that give rise to comparable challenges for both laypeople and AI users. These challenges include the identification problem – how to recognize reliable EAs/AI systems – and the deference problem – determining the appropriate epistemic stance towards EAs/AI systems. Faced with this dilemma, I discuss three possible solutions: (1) reject the concept of artificial EAs; (2) accept that AI can possess beliefs and intentions and thus align with the standard model; or (3) develop an alternative model that encompasses artificial EAs. I argue that while each option has its benefits and costs, a particularly strong case can be made for option (3).
  • (1 other version) Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2022 - AI and Society 37 (1): 299-318.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover whether a verb is fair or unfair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises: first, that individuals consider fair those acts that they would be willing to accept if done to themselves; second, that such a construal is ontologically reflected in Word Embeddings, by virtue of their ability to reflect the dimensions of such a perception. These dimensions are responsibility vs. irresponsibility, gain vs. loss, reward vs. sanction, and joy vs. pain, combined as a single vector. The paper finds it possible to quantify and qualify a verb as fair or unfair by calculating the cosine similarity of the said verb's embedding vector against FairVec, which represents the above dimensions. We apply this to GloVe and Word2Vec embeddings. Testing on a list of verbs produces an F1 score of 95.7, which is improved to 97.0. Lastly, a demonstration of the method's applicability to sentence measurement is carried out. (A minimal code sketch of this scoring idea appears after the citation list.)
  • (1 other version) Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2021 - AI and Society (March 2021): 1-20.
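For the word-embedding fairness method described in the two Izzidien entries above, the following is a minimal Python sketch of the scoring idea, assuming pre-trained word vectors are available as a plain word-to-vector lookup (for example, loaded GloVe or Word2Vec vectors). The specific pole words, the way FairVec is assembled from the four antonym dimensions, and the load_glove helper are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def build_fairvec(vectors: dict) -> np.ndarray:
        """Combine the four antonym dimensions named in the abstract into one direction vector."""
        poles = [
            ("responsibility", "irresponsibility"),
            ("gain", "loss"),
            ("reward", "sanction"),
            ("joy", "pain"),
        ]
        # Assumption: FairVec is taken here as the mean of the pole-difference vectors.
        diffs = [vectors[pos] - vectors[neg] for pos, neg in poles]
        return np.mean(diffs, axis=0)

    def fairness_score(verb: str, vectors: dict) -> float:
        """Cosine similarity of the verb's vector against FairVec.
        Higher (positive) scores lean 'fair'; lower (negative) scores lean 'unfair'."""
        return cosine(vectors[verb], build_fairvec(vectors))

    # Hypothetical usage (load_glove is a placeholder loader, not a real API):
    # vectors = load_glove("glove.6B.300d.txt")  # dict of word -> np.ndarray
    # print(fairness_score("help", vectors), fairness_score("steal", vectors))

Under this reading, a threshold of zero on the score would separate fair from unfair verbs; the F1 scores reported in the paper presumably rest on a more careful FairVec construction and evaluation than this sketch.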