- Kary Främling ORCID: orcid.org/0000-0002-8078-5172
- Marcus Westberg ORCID: orcid.org/0000-0001-5261-8898
- Martin Jullum ORCID: orcid.org/0000-0003-3908-5155
- Manik Madhikermi ORCID: orcid.org/0000-0002-0811-2256
- Avleen Malhi ORCID: orcid.org/0000-0002-9303-655X
Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12688)
Included in the following conference series: EXTRAAMAS (Explainable and Transparent AI and Multi-Agent Systems)
Abstract
Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. An explanation that has poor fidelity towards the AI system’s actual behaviour cannot be trusted, no matter how convincing it appears to the users. The Contextual Importance and Utility (CIU) method differs from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values in several ways. Notably, CIU does not build an intermediate interpretable model the way LIME does, and it makes no assumptions about the linearity or additivity of feature importance. CIU also introduces the notion of value utility and a definition of feature importance that differs from those of LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of the validity of their ground-truth assumptions and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when explanations are limited to ‘influence’. However, by separating the ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
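For readers unfamiliar with CIU, the minimal sketch below illustrates how contextual importance (CI) and contextual utility (CU) could be estimated for a single feature by varying it while the rest of the instance (the "context") is held fixed. It is a Python illustration under stated assumptions — a black-box `predict` function returning one output value per row, and known or estimated output bounds `abs_min`/`abs_max` — and is not the authors' reference implementation.

```python
import numpy as np

def contextual_importance_utility(predict, x, feature_idx, feature_range,
                                  abs_min, abs_max, n_samples=200, rng=None):
    """Sketch of Contextual Importance (CI) and Contextual Utility (CU)
    for one feature of one instance x. Names and sampling scheme are
    illustrative assumptions, not the paper's reference implementation.

    predict       : callable mapping a 2-D array of inputs to output values
    x             : 1-D array, the instance being explained
    feature_idx   : index of the feature whose CI/CU we estimate
    feature_range : (low, high) range over which the feature is varied
    abs_min/max   : smallest/largest output the model can produce (or the
                    min/max observed on representative data)
    """
    rng = np.random.default_rng(rng)
    low, high = feature_range

    # Vary only the chosen feature; keep the rest of x fixed (the context).
    samples = np.tile(x, (n_samples, 1))
    samples[:, feature_idx] = rng.uniform(low, high, size=n_samples)
    outputs = np.asarray(predict(samples))

    c_min, c_max = outputs.min(), outputs.max()
    y = np.asarray(predict(x.reshape(1, -1)))[0]

    # CI: how much of the model's total output range this feature can span
    # in the current context.
    ci = (c_max - c_min) / (abs_max - abs_min)
    # CU: where the current feature value places the output within that span
    # (0 = least favourable, 1 = most favourable in this context).
    cu = (y - c_min) / (c_max - c_min) if c_max > c_min else 0.5
    return ci, cu
```

If a single signed score is wanted for side-by-side comparison with LIME or Shapley values, the two numbers could, for example, be collapsed as `ci * (cu - 0.5); CIU itself reports importance and utility separately, which is what the paper argues makes its explanations more expressive.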
The work is partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Author information
Authors and Affiliations
Department of Computing Science, Umeå University, Umeå, Sweden
Kary Främling, Marcus Westberg & Manik Madhikermi
Department of Computer Science, Aalto University, Espoo, Finland
Kary Främling & Avleen Malhi
Norwegian Computing Center, Gaustadalleen 23a, 0373, Oslo, Norway
Martin Jullum
Department of Computing and Informatics, Bournemouth University, Poole, UK
Avleen Malhi
Corresponding author
Correspondence to Kary Främling.
Editor information
Editors and Affiliations
University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
Davide Calvaresi
University of Luxembourg, Esch-sur-Alzette, Luxembourg
Amro Najjar
Victoria University of Wellington, Wellington, New Zealand
Michael Winikoff
Umeå University, Umeå, Sweden
Kary Främling
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Främling, K., Westberg, M., Jullum, M., Madhikermi, M., Malhi, A. (2021). Comparison of Contextual Importance and Utility with LIME and Shapley Values. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds.) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2021. Lecture Notes in Computer Science, vol. 12688. Springer, Cham. https://doi.org/10.1007/978-3-030-82017-6_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-82016-9
Online ISBN: 978-3-030-82017-6
eBook Packages: Computer Science; Computer Science (R0)