  1. Trust, Explainability and AI. Sam Baron - 2025 - Philosophy and Technology 38 (4):1-23.
    There has been a surge of interest in explainable artificial intelligence (XAI). It is commonly claimed that explainability is necessary for trust in AI, and that this is why we need it. In this paper, I argue that for some notions of trust it is plausible that explainability is indeed a necessary condition, but that these kinds of trust are not appropriate for AI. For notions of trust that are appropriate for AI, explainability is not a necessary condition. I thus conclude that explainability is not necessary for the kind of trust in AI that matters.
  2. Boosting court judgment prediction and explanation using legal entities. Irene Benedetto, Alkis Koudounas, Lorenzo Vaiani, Eliana Pastor, Luca Cagliero, Francesco Tarasconi & Elena Baralis - forthcoming - Artificial Intelligence and Law:1-36.
    The automatic prediction of court case judgments using Deep Learning and Natural Language Processing is challenged by the variety of norms and regulations, the inherent complexity of forensic language, and the length of legal judgments. Although state-of-the-art transformer-based architectures and Large Language Models (LLMs) are pre-trained on large-scale datasets, the underlying model reasoning is not transparent to the legal expert. This paper jointly addresses court judgment prediction and explanation by not only predicting the judgment but also providing legal experts with sentence-based explanations. To boost the performance of both tasks, we leverage a legal named entity recognition step, which automatically annotates documents with meaningful domain-specific entity tags and masks the corresponding fine-grained descriptions. In this way, transformer-based architectures and Large Language Models can attend to in-domain entity-related information during inference while neglecting irrelevant details. Furthermore, the explainer can boost the relevance of entity-enriched sentences while limiting the diffusion of potentially sensitive information. We also explore the use of in-context learning and lightweight fine-tuning to tailor LLMs to the legal language style and the downstream prediction and explanation tasks. The results obtained on a benchmark dataset from the Indian judicial system show the superior performance of entity-aware approaches on both judgment prediction and explanation. (An illustrative sketch of the entity-masking step appears after this list.)
  3. A k-additive Choquet integral-based approach to approximate the SHAP values for local interpretability in machine learning. Guilherme Dean Pelegrina, Leonardo Tomazeli Duarte & Michel Grabisch - 2023 - Artificial Intelligence 325 (C):104014.
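The abstract of entry 2 describes a preprocessing step in which a legal named entity recognition model annotates documents and replaces fine-grained entity mentions with coarse, domain-specific tags before a transformer or LLM processes the text. The sketch below is a minimal, self-contained illustration of that masking idea, not the authors' implementation: the regex-based recognizer and the tag names (STATUTE, DATE, COURT) are hypothetical stand-ins for a trained legal NER model.

```python
# Minimal sketch of entity masking for legal text (assumed, illustrative only).
# A rule-based "recognizer" stands in for a trained legal NER model: each
# recognised mention is replaced by its coarse tag so a downstream
# transformer/LLM attends to entity types rather than case-specific details.

import re
from typing import List, Tuple

# Hypothetical legal entity patterns; a real system would use a trained NER model.
LEGAL_ENTITY_PATTERNS: List[Tuple[str, str]] = [
    ("STATUTE", r"Section \d+ of the [A-Z][\w ]+(?:, \d{4})?"),
    ("DATE", r"\d{1,2} (January|February|March|April|May|June|July|August|"
             r"September|October|November|December) \d{4}"),
    ("COURT", r"(Supreme Court|High Court) of [A-Z][a-z]+"),
]

def mask_legal_entities(text: str) -> str:
    """Replace each recognised entity span with its coarse tag, e.g. [STATUTE]."""
    masked = text
    for label, pattern in LEGAL_ENTITY_PATTERNS:
        masked = re.sub(pattern, f"[{label}]", masked)
    return masked

if __name__ == "__main__":
    sentence = (
        "The appeal was heard by the Supreme Court of India on 12 March 2019 "
        "under Section 302 of the Indian Penal Code, 1860."
    )
    print(mask_legal_entities(sentence))
    # -> "The appeal was heard by the [COURT] on [DATE] under [STATUTE]."
```

In a full pipeline of the kind the abstract describes, the masked text would then be fed to a transformer-based classifier or an LLM prompt, and the sentence-level explainer would score the entity-enriched sentences, while the masking also limits the exposure of potentially sensitive details.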
