The Box–Muller transform, introduced by George Edward Pelham Box and Mervin Edgar Muller in 1958, is a method of generating pairs of independent standard normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers.
It is commonly expressed in two forms. The basic form as given by Box and Muller takes two samples from the uniform distribution on the interval (0, 1] and maps them to two normally distributed samples. The polar form takes two samples from a different interval, [−1, +1], and maps them to two normally distributed samples without the use of sine or cosine functions.
The Box–Muller transform was developed as a more computationally efficient alternative to the inverse transform sampling method. The Ziggurat algorithm gives an even more efficient method.
Suppose U1 and U2 are independent random variables that are uniformly distributed in the interval (0, 1]. Let

Z0 = √(−2 ln U1) cos(2π U2)
Z1 = √(−2 ln U1) sin(2π U2)
Then Z0 and Z1 are independent random variables with a standard normal distribution (zero mean, unit variance).
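The basic form can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the function names are mine, and the `1 - rng.random()` shift is one common way to map Python's half-open [0, 1) output onto the (0, 1] interval the transform requires, so that log(0) cannot occur.

```python
import math
import random

def box_muller(u1: float, u2: float) -> tuple[float, float]:
    """Map two uniform samples on (0, 1] to two independent
    standard normal samples (basic Box-Muller form)."""
    r = math.sqrt(-2.0 * math.log(u1))   # radius from U1
    theta = 2.0 * math.pi * u2           # angle from U2, uniform on [0, 2*pi)
    return r * math.cos(theta), r * math.sin(theta)

def normal_samples(n_pairs: int, rng: random.Random) -> list[float]:
    """Generate 2 * n_pairs standard normal samples.
    rng.random() returns values in [0, 1); use 1 - u to get (0, 1]."""
    out = []
    for _ in range(n_pairs):
        z0, z1 = box_muller(1.0 - rng.random(), 1.0 - rng.random())
        out.extend((z0, z1))
    return out
```

A quick empirical check of the sample mean and variance against 0 and 1 is the usual way to sanity-test such a generator.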
The derivation is based on the fact that, in a two-dimensional Cartesian system where the X and Y coordinates are described by two independent and normally distributed random variables, the random variables R² and Θ in the corresponding polar coordinates are also independent and can be expressed as

R² = −2 ln U1 and Θ = 2π U2.
Because R² is the square of the norm of the standard bivariate normal variable (X, Y), it has the chi-square distribution with two degrees of freedom. In the special case of two degrees of freedom, the chi-square distribution coincides with the exponential distribution, and the equation for R² above is a simple way of generating the required exponential variate.
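This coincidence can be checked numerically: drawing R² = −2 ln U1 from a uniform U1 should give samples with mean 2 and variance 4, matching the chi-square distribution with two degrees of freedom (equivalently, the exponential distribution with mean 2). A small sketch, with names of my own choosing:

```python
import math
import random

def r_squared_sample(rng: random.Random) -> float:
    """Draw R^2 = -2 ln(U1), distributed as chi-square with two
    degrees of freedom (mean 2, variance 4)."""
    u1 = 1.0 - rng.random()  # shift [0, 1) to (0, 1] so log(0) cannot occur
    return -2.0 * math.log(u1)
```

Averaging a large number of such draws should come out close to 2.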
The polar form was first proposed by J. Bell and then modified by R. Knop. Several versions of the polar method have been described; Knop's variant is presented here because it is the most widely used, in part owing to its inclusion in Numerical Recipes.
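The polar form can be sketched as follows. This is an illustrative implementation under my own naming, following the standard rejection scheme: sample a point uniformly in the square [−1, +1] × [−1, +1], reject it unless it falls strictly inside the unit disk, and then rescale, avoiding any call to sine or cosine.

```python
import math
import random

def polar_pair(rng: random.Random) -> tuple[float, float]:
    """Polar form of the Box-Muller transform: produce two
    independent standard normal samples without sin or cos."""
    while True:
        x = 2.0 * rng.random() - 1.0   # uniform on [-1, +1]
        y = 2.0 * rng.random() - 1.0
        s = x * x + y * y
        if 0.0 < s < 1.0:              # accept points strictly inside the unit disk
            break
    factor = math.sqrt(-2.0 * math.log(s) / s)
    return x * factor, y * factor
```

The acceptance rate is π/4 (about 79%), so on average the loop runs fewer than 1.3 times per pair; the rejection cost is traded against the saved trigonometric calls.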