Kolmogorov–Smirnov test

In statistics, the Kolmogorov–Smirnov test (K–S test) is a nonparametric test for the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the samples are drawn from the same distribution (in the two-sample case) or that the sample is drawn from the reference distribution (in the one-sample case). In each case, the distributions considered under the null hypothesis are continuous distributions but are otherwise unrestricted.
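
Concretely, the one-sample statistic is D_n = sup_x |F_n(x) − F(x)|, where F_n is the empirical distribution function of the sample and F is the reference cumulative distribution function. The following Python sketch computes this supremum distance directly from the order statistics; the function name ks_statistic_one_sample and the use of NumPy and SciPy here are illustrative assumptions, not a reference implementation from any particular library.

# A minimal sketch of the one-sample K-S statistic; names are illustrative only.
import numpy as np
from scipy.stats import norm

def ks_statistic_one_sample(sample, reference_cdf):
    """Supremum distance between the empirical CDF of `sample` and `reference_cdf`."""
    x = np.sort(sample)
    n = len(x)
    cdf_vals = reference_cdf(x)
    # The empirical CDF jumps at each ordered observation, so the supremum is
    # attained just before or just after a jump; check both sides of each jump.
    d_plus = np.max(np.arange(1, n + 1) / n - cdf_vals)
    d_minus = np.max(cdf_vals - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
print(ks_statistic_one_sample(sample, norm.cdf))  # distance to the N(0, 1) reference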

The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
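
As an illustration, the two-sample statistic and its p-value can be obtained with SciPy's ks_2samp; the synthetic data below, with one sample shifted and rescaled relative to the other, are purely an assumed example for demonstration.

# A short illustration of the two-sample K-S test using SciPy's ks_2samp.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=1.0, size=300)   # sample from N(0, 1)
b = rng.normal(loc=0.3, scale=1.5, size=300)   # shifted and wider sample

res = ks_2samp(a, b)
# A small p-value suggests the two samples were not drawn from the same distribution.
print(res.statistic, res.pvalue)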

The Kolmogorov–Smirnov test can be modified to serve as a goodness of fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and it is known that using these to define the specific reference distribution changes the null distribution of the test statistic. Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or Anderson–Darling test.[1]
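
The caveat about estimated parameters can be seen in a naive SciPy-based sketch like the one below, where the reference normal distribution is built from the sample mean and standard deviation of the same data being tested; the p-value reported by kstest in this setting is not valid as-is, and a corrected procedure such as the Lilliefors test would be needed. The data and variable names are assumptions for illustration.

# A hedged sketch of the normality-testing caveat described above.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=200)

# Standardizing with the sample mean and standard deviation (i.e. estimating the
# reference distribution from the same data) changes the null distribution of the
# statistic, so the p-value printed here should not be taken at face value.
res = kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))
print(res.statistic, res.pvalue)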
