In statistics, when selecting a statistical model for given data, the relative likelihood compares the relative plausibilities of different candidate models or of different values of a parameter of a single model.
Assume that we are given some data x for which we have a statistical model with parameter θ. Suppose that the maximum likelihood estimate for θ is θ̂. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of θ̂. The relative likelihood of θ is defined to be[1][2][3][4][5]

R(θ) = L(θ | x) / L(θ̂ | x),

where L(θ | x) denotes the likelihood function. Thus, the relative likelihood is the likelihood ratio with fixed denominator L(θ̂ | x).

The function θ ↦ R(θ) is the relative likelihood function.
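As a concrete illustration of this definition, the following Python sketch computes the relative likelihood for a binomial model with hypothetical data (7 successes in 10 trials); the model choice and numbers are assumptions for the example, not from the sources cited above.

```python
def binom_likelihood(p, k, n):
    # Likelihood of success probability p given k successes in n trials.
    # The binomial coefficient is constant in p and cancels in the ratio,
    # so it is omitted here.
    return p**k * (1.0 - p)**(n - k)

def relative_likelihood(p, k, n):
    # R(p) = L(p | data) / L(p_hat | data), where the maximum likelihood
    # estimate for a binomial proportion is p_hat = k/n.
    p_hat = k / n
    return binom_likelihood(p, k, n) / binom_likelihood(p_hat, k, n)

# With 7 successes in 10 trials, the MLE p_hat = 0.7 has R(0.7) = 1,
# and every other value of p has relative likelihood strictly below 1.
print(relative_likelihood(0.7, 7, 10))            # 1.0
print(round(relative_likelihood(0.5, 7, 10), 3))  # 0.439
```

The value R(0.5) ≈ 0.44 says that p = 0.5 is about 44% as plausible as the maximum likelihood estimate, given these data.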
A likelihood region is the set of all values of θ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for θ is defined to be[1][3][6]

{θ : R(θ) ≥ p/100}.
If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.[1][3][7]
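A likelihood interval can be found numerically by scanning parameter values and keeping those whose relative likelihood clears the threshold. The sketch below does this for the same hypothetical binomial example (7 successes in 10 trials) with the 14.65% threshold discussed in the next paragraph; the grid-scan approach is an illustrative assumption, not a method prescribed by the sources.

```python
def binom_rel_lik(p, k, n):
    # Relative likelihood R(p) = L(p)/L(p_hat) for k successes in n trials
    p_hat = k / n
    return (p**k * (1.0 - p)**(n - k)) / (p_hat**k * (1.0 - p_hat)**(n - k))

def likelihood_interval(k, n, threshold, steps=10000):
    # Scan a fine grid of p values in (0, 1) and keep those with
    # R(p) >= threshold; for a unimodal likelihood this set is an interval.
    inside = [i / steps for i in range(1, steps)
              if binom_rel_lik(i / steps, k, n) >= threshold]
    return min(inside), max(inside)

# 14.65% likelihood interval for the binomial proportion
lo, hi = likelihood_interval(7, 10, 0.1465)
print(round(lo, 3), round(hi, 3))
```

Because the binomial likelihood is unimodal in p, the set of grid points above the threshold forms a single interval around the maximum likelihood estimate 0.7.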
Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihood-based statistics ("likelihoodist" statistics): they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).
Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability).[1][6] In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df-s between the two models (therefore, the e^−2 likelihood interval is the same as the 0.954 confidence interval; assuming the difference in df-s to be 1).[6][7]
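The correspondence between likelihood thresholds and confidence levels can be checked numerically: by Wilks' theorem, the threshold R(θ) ≥ r corresponds to −2 log R(θ) ≤ −2 log r, and with one degree of freedom the chi-squared CDF is available in closed form via the error function. A small Python check:

```python
import math

def chi2_1_cdf(x):
    # CDF of the chi-squared distribution with 1 degree of freedom:
    # P(X <= x) = erf(sqrt(x/2))
    return math.erf(math.sqrt(x / 2.0))

def coverage_of_threshold(r):
    # A likelihood interval {theta : R(theta) >= r} corresponds, via
    # Wilks' theorem (1 df), to the confidence level P(chi2_1 <= -2 log r).
    return chi2_1_cdf(-2.0 * math.log(r))

print(round(coverage_of_threshold(0.1465), 3))        # 0.95
print(round(coverage_of_threshold(math.exp(-2)), 3))  # 0.954
```

This recovers both statements in the text: the 14.65% likelihood threshold matches 95% coverage, and the e^−2 (≈ 13.5%) threshold matches 0.954 coverage.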
The definition of relative likelihood can be generalized to compare different statistical models. This generalization is based on AIC (Akaike information criterion), or sometimes AICc (Akaike information criterion with correction).
Suppose that for some given data we have two statistical models, M1 and M2. Also suppose that AIC(M1) ≤ AIC(M2). Then the relative likelihood of M2 with respect to M1 is defined as[8]

exp((AIC(M1) − AIC(M2)) / 2).
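The AIC-based formula is a one-liner in code. The AIC values below are hypothetical, chosen so that M2 is 4 AIC units worse than M1:

```python
import math

def aic_relative_likelihood(aic1, aic2):
    # Relative likelihood of model M2 with respect to M1,
    # assuming AIC(M1) <= AIC(M2): exp((AIC(M1) - AIC(M2)) / 2)
    return math.exp((aic1 - aic2) / 2.0)

# Hypothetical AIC values: M2 is 4 units worse, so its relative
# likelihood is e^-2, the same threshold seen above for 0.954 coverage.
print(round(aic_relative_likelihood(100.0, 104.0), 3))  # 0.135
```

A relative likelihood of about 0.135 means M2 is roughly 13.5% as probable as M1 to minimize the estimated information loss.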
To see that this is a generalization of the earlier definition, suppose that we have some model M with a (possibly multivariate) parameter θ. Then for any θ, set M2 = M(θ), and also set M1 = M(θ̂). The general definition now gives the same result as the earlier definition.