Explainable AI under contract and tort law: legal incentives and technical challenges

Artificial Intelligence and Law 28 (4):415-439 (2020)

Abstract

This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies on medical and corporate merger applications of ML. As a second contribution, we discuss the trade-off between accuracy and explainability and demonstrate its effect in a technical case study in the context of spam classification.



Similar books and articles

Editors' introduction. Henry Prakken & Giovanni Sartor - 1996 - Artificial Intelligence and Law 4 (3-4):157-161.

Analytics

Added to PP
2020-01-19



