  1. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.
  2. Beyond cyborgs: the cybork idea for the de-individuation of (artificial) intelligence and an emergence-oriented design. Federico Cabitza, Chiara Natali, Francesco Varanini & David Gunkel - forthcoming - AI and Society:1-16.
    This article contributes to the philosophical inquiry of Artificial Intelligence (AI) by reframing the question “Where is the intelligence of Artificial Intelligence?” into “Where does AI intelligently operate?”. This rephrasing challenges our understanding of AI’s role in social practices and its integration into the human experience. Central to this discourse is the concept of the ‘cybork’ (a portmanteau of ‘cyborg’ and ‘work’), which symbolizes not just a physical entity but a dynamic system of actions and interactions within a socio-technical landscape: work accomplished with machines. In this framework, intelligence in AI lies not in any function of isolated systems, but rather in the situated context of their use within collective and meaningful practices that give technology its sense and direction. Conversely, technology both enables and shapes these practices to the extent that distinguishing between the two can seem unnecessary, or even detrimental, to the optimal design of and for work practices. The cybork embodies this integration and entanglement, transcending the traditional boundaries between individuals and collectives, entities and actions. It reveals the inseparability and co-dependence of humans and technology, where technological artifacts become extensions of human capabilities, embody collective human history and development, and serve as both products and participants in societal practices, fundamentally shaping our interaction with the world.
  3. The unbearable (technical) unreliability of automated facial emotion recognition. Martina Mattioli, Andrea Campagner & Federico Cabitza - 2022 - Big Data and Society 9 (2).
    Emotion recognition, and in particular facial emotion recognition (FER), is among the most controversial applications of machine learning, not least because of its ethical implications for human subjects. In this article, we address the controversial conjecture that machines can read emotions from our facial expressions by asking whether this task can be performed reliably. This means, rather than considering the potential harms or scientific soundness of facial emotion recognition systems, focusing on the reliability of the ground truths used to develop emotion recognition systems, assessing how well different human observers agree on the emotions they detect in subjects’ faces. Additionally, we discuss the extent to which sharing context can help observers agree on the emotions they perceive on subjects’ faces. Briefly, we demonstrate that when large and heterogeneous samples of observers are involved, the task of emotion detection from static images crumbles into inconsistency. We thus reveal that any endeavour to understand human behaviour from large sets of labelled patterns is over-ambitious, even if it were technically feasible. We conclude that we cannot speak of actual accuracy for facial emotion recognition systems for any practical purposes.
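    The reliability question the abstract raises is typically operationalized as inter-observer agreement on the emotion labels that form a FER ground truth. The following is a minimal, hypothetical sketch (not taken from the paper, and using invented toy data) of one common agreement statistic, Fleiss' kappa, computed over a small annotation matrix.

    ```python
    # Hypothetical illustration: inter-observer agreement on emotion labels via
    # Fleiss' kappa. The ratings matrix below is invented toy data, not from the paper.
    import numpy as np

    def fleiss_kappa(counts: np.ndarray) -> float:
        """counts[i, j] = number of observers assigning emotion category j to image i."""
        n_items, _ = counts.shape
        n_raters = counts.sum(axis=1)[0]                    # assumes equal raters per image
        p_j = counts.sum(axis=0) / (n_items * n_raters)     # overall category proportions
        p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar, p_e = p_i.mean(), (p_j ** 2).sum()           # observed vs. chance agreement
        return (p_bar - p_e) / (1 - p_e)

    # 4 images, each labelled by 10 observers across 3 emotion categories (toy data)
    ratings = np.array([
        [8, 1, 1],
        [4, 3, 3],
        [2, 5, 3],
        [3, 3, 4],
    ])
    print(f"Fleiss' kappa: {fleiss_kappa(ratings):.3f}")    # values near 0 indicate poor agreement
    ```

    On heterogeneous observer samples of the kind the article discusses, such agreement scores tend toward low values, which is the sense in which the labelled ground truth "crumbles into inconsistency".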