Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution. Benjamin H. Lang, Sven Nyholm & Jennifer Blumenthal-Barby - 2023 - Digital Society 2 (3):52.
As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So-called “responsibility gaps” occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps are generated by “black box” healthcare AI, (2) the presence of responsibility gaps (if unaddressed) creates serious moral problems, (3) a suitable solution is for relevant stakeholders to voluntarily responsibilize the gaps, taking on some moral responsibility for things they are not, strictly speaking, blameworthy for, and (4) should this solution be taken, black box healthcare AI will be permissible in the provision of healthcare.
Trust criteria for artificial intelligence in health: normative and epistemic considerations. Kristin Kostick-Quenet, Benjamin H. Lang, Jared Smith, Meghan Hurley & Jennifer Blumenthal-Barby - 2024 - Journal of Medical Ethics 50 (8):544-551.
Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of algorithms themselves, suggesting a need to distinguish ‘source’ from ‘functional’ explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.
Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI. Benjamin H. Lang - 2022 - Journal of Medical Ethics 48 (4):234-235.
In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts,’ Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’, the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labor. Under the authors’ proposal, artificial decision support systems which produce automated diagnoses should serve chiefly as confirmatory tools; so long as the physician and AI agree, the matter is settled, and the physician’s initial judgement is considered epistemically justified. If, however, the AI-DSS and physician disagree, then a second physician’s opinion is called on to resolve the dispute. While the cognitive labour of the decision is shared between the physicians and AI, the final decision remains at the discretion of the first physician, and with it the moral and legal culpability. The putative benefits of this approach are twofold: healthcare administration can improve diagnostic performance by introducing AI-DSS without the unintended byproduct of a responsibility gap, and assuming the physician and AI disagree less than the general rate of requested second opinions, and the AI’s diagnostic accuracy supersedes or at least ….
Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care. Meghan E. Hurley, Benjamin H. Lang, Kristin Marie Kostick-Quenet, Jared N. Smith & Jennifer Blumenthal-Barby - 2024 - American Journal of Bioethics 25 (3):102-114.
Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves, and the moral importance of patient-level disclosure. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect the right to notice and explanation of AI in healthcare while providing meaningful benefits to patients.
Therapeutic Artificial Intelligence: Does Agential Status Matter? Meghan E. Hurley, Benjamin H. Lang & Jared N. Smith - 2023 - American Journal of Bioethics 23 (5):33-35.
In their paper, “Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?” Sedlakova and Trachsel (2023) claim that therapeutic insights and therapeutic changes are...