Review

Brief Bioinform. 2020 Mar 23;21(2):553-565. doi: 10.1093/bib/bbz016.

Sensitivity and specificity of information criteria

John J Dziak et al.

Abstract

Information criteria (ICs) based on penalized likelihood, such as Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and sample-size-adjusted versions of them, are widely used for model selection in health and biological research. However, different criteria sometimes support different models, leading to discussions about which is the most trustworthy. Some researchers and fields of study habitually use one or the other, often without a clearly stated justification. They may not realize that the criteria may disagree. Others try to compare models using multiple criteria but encounter ambiguity when different criteria lead to substantively different answers, leading to questions about which criterion is best. In this paper we present an alternative perspective on these criteria that can help in interpreting their practical implications. Specifically, in some cases the comparison of two models using ICs can be viewed as equivalent to a likelihood ratio test, with the different criteria representing different alpha levels and BIC being a more conservative test than AIC. This perspective may lead to insights about how to interpret the ICs in more complex situations. For example, AIC or BIC could be preferable, depending on the relative importance one assigns to sensitivity versus specificity. Understanding the differences and similarities among the ICs can make it easier to compare their results and to use them to make informed decisions.
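
As a minimal illustration of this equivalence (a sketch based on the abstract, not code from the paper): under the usual definitions AIC = -2 log L + 2k and BIC = -2 log L + k log n, preferring the larger of two nested models that differ by delta_k free parameters amounts to a likelihood ratio test with critical value 2*delta_k for AIC or delta_k*log(n) for BIC. The helper below, implied_alpha, is hypothetical; it computes the alpha level each criterion implicitly uses, via the chi-square reference distribution:

    # Sketch of the IC-as-LRT view described in the abstract.
    # implied_alpha is a hypothetical helper, not from the paper or any library.
    import math
    from scipy.stats import chi2

    def implied_alpha(delta_k: int, n: int) -> dict:
        """Alpha level of the likelihood ratio test implied by preferring the
        larger of two nested models (differing by delta_k free parameters)
        under AIC (critical value 2*delta_k) or BIC (critical value
        delta_k*log(n))."""
        return {
            "AIC": chi2.sf(2 * delta_k, df=delta_k),
            "BIC": chi2.sf(delta_k * math.log(n), df=delta_k),
        }

    for n in (50, 500, 5000):
        print(n, implied_alpha(delta_k=1, n=n))

For delta_k = 1 this gives an AIC alpha of about 0.157 at any sample size, while the BIC alpha shrinks from roughly 0.05 at n = 50 to about 0.004 at n = 5000, consistent with the abstract's description of BIC as the more conservative test.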

Keywords: Akaike information criterion; Bayesian information criterion; latent class analysis; likelihood ratio testing; model selection.

© The Author(s) 2019. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
