Why Cohen's Kappa should be avoided as performance measure in classification
- PMID: 31557204
- PMCID: PMC6762152
- DOI: 10.1371/journal.pone.0222916
Abstract
We show that Cohen's Kappa and the Matthews Correlation Coefficient (MCC), two widely used measures of performance in multi-class classification, are correlated in most situations, although they can differ in others. While the two measures coincide in the symmetric case, we consider several unbalanced situations in which Kappa exhibits an undesirable behaviour, i.e. a worse classifier obtains a higher Kappa score, departing qualitatively from MCC. The debate about the incoherent behaviour of Kappa has so far revolved around the convenience, or not, of using a relative metric, which makes its values difficult to interpret. We extend these concerns by showing that its pitfalls go even further. Through experimentation, we present a novel approach to this topic: a comprehensive study that identifies a scenario in which the contradictory behaviour between MCC and Kappa emerges. Specifically, we find that as the entropy of the off-diagonal elements of the confusion matrix associated with a classifier decreases to zero, the discrepancy between Kappa and MCC grows, pointing to anomalous behaviour of the former. We believe this finding disqualifies Kappa as a general-purpose performance measure for comparing classifiers.
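Both measures discussed in the abstract can be computed directly from a confusion matrix. The sketch below (an illustration, not the authors' code) implements Cohen's Kappa from the observed and chance agreement, and the multi-class MCC via Gorodkin's generalization; the balanced symmetric example afterwards illustrates the abstract's remark that the two coincide in the symmetric case.

```python
import math

def kappa_mcc(C):
    """Compute Cohen's Kappa and multi-class MCC from a confusion matrix C,
    given as a list of lists (rows = true classes, columns = predictions)."""
    n = len(C)
    s = sum(sum(row) for row in C)            # total number of samples
    c = sum(C[k][k] for k in range(n))        # correctly classified samples
    t = [sum(C[k]) for k in range(n)]         # row sums: true class counts
    p = [sum(C[i][k] for i in range(n)) for k in range(n)]  # column sums

    po = c / s                                # observed agreement
    pe = sum(t[k] * p[k] for k in range(n)) / s**2  # chance agreement
    kappa = (po - pe) / (1 - pe)

    # Gorodkin's multi-class MCC
    num = c * s - sum(t[k] * p[k] for k in range(n))
    den = math.sqrt((s**2 - sum(x**2 for x in p)) *
                    (s**2 - sum(x**2 for x in t)))
    mcc = num / den
    return kappa, mcc

# Balanced, symmetric confusion matrix: Kappa and MCC agree (both 0.85 here).
k, m = kappa_mcc([[90, 5, 5], [5, 90, 5], [5, 5, 90]])
```

With unbalanced matrices whose errors concentrate in a few off-diagonal cells (low off-diagonal entropy), the two quantities returned by this function can be compared to reproduce the kind of discrepancy the paper studies.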
Conflict of interest statement
The authors have declared that no competing interests exist.
Similar articles
- Boughorbel S, Jarray F, El-Anbari M. Optimal classifier for imbalanced data using Matthews Correlation Coefficient metric. PLoS One. 2017;12(6):e0177678. doi: 10.1371/journal.pone.0177678. PMID: 28574989. Free PMC article.
- Chicco D, Jurman G. A statistical comparison between Matthews correlation coefficient (MCC), prevalence threshold, and Fowlkes-Mallows index. J Biomed Inform. 2023;144:104426. doi: 10.1016/j.jbi.2023.104426. PMID: 37352899.
- Chicco D, Tötsch N, Jurman G. The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation. BioData Min. 2021;14(1):13. doi: 10.1186/s13040-021-00244-z. PMID: 33541410. Free PMC article.
- Vach W. The dependence of Cohen's kappa on the prevalence does not matter. J Clin Epidemiol. 2005;58(7):655-61. doi: 10.1016/j.jclinepi.2004.02.021. PMID: 15939215. Review.
- Guggenmoos-Holzmann I, Vonk R. Kappa-like indices of observer agreement viewed from a latent class perspective. Stat Med. 1998;17(8):797-812. doi: 10.1002/(sici)1097-0258(19980430)17:8<797::aid-sim776>3.0.co;2-g. PMID: 9595612. Review.
Cited by
- Gottlieb U, Yona T, Shein Lumbroso D, Hoffman JR, Springer S. Reliability and Validity of Patient-Reported Outcome Measures for Ankle Instability in Hebrew. Med Sci Monit. 2022;28:e937831. doi: 10.12659/MSM.937831. PMID: 36146912. Free PMC article.
- Angonese G, Buhl M, Kuhlmann I, Kollmeier B, Hildebrandt A. Prediction of Hearing Help Seeking to Design a Recommendation Module of an mHealth Hearing App: Intensive Longitudinal Study of Feature Importance Assessment. JMIR Hum Factors. 2024;11:e52310. doi: 10.2196/52310. PMID: 39133539. Free PMC article.
- de Sena S, Häggman M, Ranta J, Roienko O, Ilén E, Acosta N, Salama J, Kirjavainen T, Stevenson N, Airaksinen M, Vanhatalo S. NAPping PAnts (NAPPA): An open wearable solution for monitoring infants' sleeping rhythms, respiration and posture. Heliyon. 2024;10(13):e33295. doi: 10.1016/j.heliyon.2024.e33295. PMID: 39027497. Free PMC article.
- Krasztel MM, Czopowicz M, Szaluś-Jordanow O, Moroz A, Mickiewicz M, Kaba J. The Agreement between Feline Pancreatic Lipase Immunoreactivity and DGGR-Lipase Assay in Cats-Preliminary Results. Animals (Basel). 2021;11(11):3172. doi: 10.3390/ani11113172. PMID: 34827904. Free PMC article.
- Ayre K, Bittar A, Kam J, Verma S, Howard LM, Dutta R. Developing a Natural Language Processing tool to identify perinatal self-harm in electronic healthcare records. PLoS One. 2021;16(8):e0253809. doi: 10.1371/journal.pone.0253809. PMID: 34347787. Free PMC article.