In probability theory, the expected value (or expectation, or mathematical expectation, or mean, or the first moment) of a random variable is the weighted average of all possible values that this random variable can take on. The weights used in computing this average correspond to the probabilities in the case of a discrete random variable, or to the densities in the case of a continuous random variable. From a rigorous theoretical standpoint, the expected value is the integral of the random variable with respect to its probability measure.
^{[1]}^{[2]} Intuitively, the expectation is the long-run average: if an experiment could be repeated many times, the expectation is the mean of all the results.
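For a discrete random variable, the weighted average described above is just the sum of each value times its probability. A minimal sketch (the fair-die example here is illustrative, not from the article):

```python
# Expected value of a discrete random variable: the probability-weighted
# average of its possible values. Example: one roll of a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6  # each face is equally likely

expected_value = sum(v * p for v, p in zip(values, probabilities))
print(expected_value)  # approximately 3.5
```

Note that 3.5 is not a value the die can actually show, which anticipates the point made below about "expected" being a potentially misleading term.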
The term “expected value” can be misleading: it must not be confused with the “most probable value”, and the expected value is in general not a typical value that the random variable can take on. It is often more helpful to interpret the expected value of a random variable as the long-run average value of the variable over many independent repetitions of an experiment.
The expected value may be understood intuitively through the law of large numbers: when it exists, the expected value is almost surely the limit of the sample mean as the sample size grows to infinity. Like the sample mean, the expected value need not be “expected” in the everyday sense; it may be an unlikely or even impossible outcome (such as having 2.5 children).
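The convergence of the sample mean can be illustrated with a small Monte Carlo sketch (the seed and sample sizes are illustrative assumptions):

```python
import random

# Law of large numbers, illustrated: the sample mean of independent
# fair-die rolls approaches the expected value 3.5 as the sample grows.
random.seed(0)  # fixed seed so the run is reproducible

def sample_mean(n_rolls):
    """Average of n_rolls independent fair-die rolls."""
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```

With more rolls, the printed averages cluster ever more tightly around 3.5, even though no single roll can ever equal 3.5.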
The expected value does not exist for some distributions with heavy “tails”, such as the Cauchy distribution.^{[3]}
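The failure for the standard Cauchy distribution can be made concrete: its density decays too slowly for the defining integral to converge absolutely, since the positive tail alone already diverges:

\[
f(x) = \frac{1}{\pi (1 + x^2)}, \qquad
\int_{-\infty}^{\infty} |x|\, f(x)\, dx
= \frac{2}{\pi} \int_{0}^{\infty} \frac{x}{1 + x^2}\, dx
= \infty .
\]

Because this integral of $|x|$ is infinite, no finite expected value can be assigned, and sample means of Cauchy-distributed data do not settle down as the sample grows.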
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
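The indicator construction can be sketched in a few lines; taking the average of the indicator over many trials estimates the probability by a frequency, exactly as the passage describes (the event and seed chosen here are illustrative assumptions):

```python
import random

# E[1_A] = P(A): the expectation of an indicator function equals the
# probability of the event. Estimate P(die shows 5 or 6) by averaging
# the indicator over many simulated rolls.
random.seed(1)  # fixed seed so the run is reproducible

def indicator(roll):
    """1 if the event (roll >= 5) occurred, 0 otherwise."""
    return 1 if roll >= 5 else 0

n = 100_000
estimate = sum(indicator(random.randint(1, 6)) for _ in range(n)) / n
print(estimate)  # close to the exact probability 2/6 ≈ 0.333
```

Here the law of large numbers applied to the indicator variable is precisely the frequency estimate of a probability.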