Likelihood-ratio test

In statistics, a likelihood ratio test is used to compare the fit of two models, one of which is nested within the other. This often occurs when testing whether a simplifying assumption for a model is valid, as when two or more model parameters are assumed to be related.

Both models are fitted to the data and their log-likelihoods recorded. The test statistic (usually denoted D) is twice the difference in these log-likelihoods:

D = 2·(ln L2 − ln L1)

where L1 and L2 are the maximized likelihoods of model 1 (the simpler model) and model 2 (the more complex model) respectively.

The model with more parameters will always fit at least as well (have an equal or greater log-likelihood). Whether it fits significantly better, and should thus be preferred, can be determined by deriving the probability or p-value of the observed difference D. In many cases, the probability distribution of the test statistic can be approximated by a chi-squared distribution with df2 − df1 degrees of freedom, where df1 and df2 are the numbers of free parameters of models 1 and 2 respectively.
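
As an illustration, the following is a minimal sketch of the procedure in Python, using simulated data and two nested Gaussian models chosen for this example (the models, data, and seed are assumptions for illustration, not part of the article): the simpler model fixes the mean at 0, the more complex model lets it vary.

```python
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=100)   # illustrative sample

# Model 1 (simpler): mean fixed at 0, sigma estimated -> 1 free parameter.
sigma0 = np.sqrt(np.mean(x**2))                # MLE of sigma when the mean is 0
ll_1 = norm.logpdf(x, loc=0.0, scale=sigma0).sum()

# Model 2 (more complex): mean and sigma estimated -> 2 free parameters.
ll_2 = norm.logpdf(x, loc=x.mean(), scale=x.std()).sum()  # x.std() (ddof=0) is the MLE

# Test statistic: twice the difference in log-likelihoods.
D = 2.0 * (ll_2 - ll_1)

# Chi-squared approximation with df2 - df1 = 2 - 1 = 1 degree of freedom.
p_value = chi2.sf(D, df=1)
print(f"D = {D:.3f}, p = {p_value:.4f}")
```

A small p-value here indicates that freeing the mean improves the fit by more than chance alone would explain, so the simpler model would be rejected.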

The test requires nested models, that is, models in which the more complex one can be transformed into the simpler one by imposing constraints on the parameters. For example, a regression model with two coefficients is nested over the model obtained by constraining one of those coefficients to zero.

As a concrete example, if model 1 has 1 free parameter and a log-likelihood of 8012, and the alternative model has 3 free parameters and a log-likelihood of 8024, then D = 2·(8024 − 8012) = 24, and its p-value is the probability that a chi-squared variable with 3 − 1 = 2 degrees of freedom exceeds 24. Certain assumptions must be met for the statistic to follow a chi-squared distribution, and empirical p-values are often computed instead.
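
The tail probability in this example can be checked numerically; this short computation (using SciPy, an assumption about tooling) evaluates the chi-squared survival function:

```python
from scipy.stats import chi2

D = 2 * (8024 - 8012)         # = 24
print(chi2.sf(D, df=3 - 1))   # ~6.1e-06: the simpler model is strongly rejected
```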

Background

The likelihood ratio, often denoted by Λ (the capital Greek letter lambda), is the ratio of the maximized values of the likelihood function, with the parameters ranging over two different sets in the numerator and denominator. A likelihood-ratio test is a statistical test for deciding between two hypotheses based on the value of this ratio.
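
In one standard convention (stated here as background, not taken from the excerpt above), the numerator is maximized over the null parameter set Θ0 and the denominator over the full parameter space Θ:

```latex
\Lambda(x) =
  \frac{\sup \{\, L(\theta \mid x) : \theta \in \Theta_0 \,\}}
       {\sup \{\, L(\theta \mid x) : \theta \in \Theta \,\}}
```

Small values of Λ indicate that the data are much better explained by parameters outside the null set, so the test rejects the null hypothesis when Λ falls below a chosen threshold.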

It is central to the Neyman–Pearson approach to statistical hypothesis testing and, like statistical hypothesis testing generally, is both widely used and much criticized; see Criticism, below.

