
Box–Muller transform

The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller,[1] is a random number
sampling method for generating pairs of independent,
standard, normally distributed (zero expectation, unit
variance) random numbers, given a source of uniformly
distributed random numbers. The method was in fact first
mentioned explicitly by Raymond E. A. C. Paley and
Norbert Wiener in 1934.[2]

The Box–Muller transform is commonly expressed in two forms. The basic form as given by Box and Muller
takes two samples from the uniform distribution on the
interval [0, 1] and maps them to two standard, normally
distributed samples. The polar form takes two samples
from a different interval, [−1, +1], and maps them to two
normally distributed samples without the use of sine or
cosine functions.
The Box–Muller transform was developed as a more computationally efficient alternative to the inverse
transform sampling method.[3] The ziggurat algorithm gives a more efficient method for scalar processors
(e.g. old CPUs), while the Box–Muller transform is superior for processors with vector units (e.g. GPUs or
modern CPUs).[4]

[Figure: Visualisation of the Box–Muller transform: the coloured points in the unit square (u1, u2), drawn as
circles, are mapped to a 2D Gaussian (z0, z1), drawn as crosses. The plots at the margins are the probability
distribution functions of z0 and z1. Note that z0 and z1 are unbounded; they appear to be in [−2.5, 2.5] due to
the choice of the illustrated points. In the SVG file
(https://upload.wikimedia.org/wikipedia/commons/1/1f/Box-Muller_transform_visualisation.svg), hover over a
point to highlight it and its corresponding point.]

Contents
Basic form
Polar form
Contrasting the two forms
Tails truncation
Implementation
See also
References
External links

Basic form
Suppose U1 and U2 are independent samples chosen from the uniform distribution on the unit interval (0, 1).
Let

$$Z_0 = R \cos(\Theta) = \sqrt{-2 \ln U_1}\,\cos(2 \pi U_2)$$

and

$$Z_1 = R \sin(\Theta) = \sqrt{-2 \ln U_1}\,\sin(2 \pi U_2).$$

Then Z0 and Z1 are independent random variables with a standard normal distribution.

The derivation[5] is based on a property of a two-dimensional Cartesian system, where X and Y coordinates
are described by two independent and normally distributed random variables; the random variables for R² and
Θ (shown above) in the corresponding polar coordinates are also independent and can be expressed as

$$R^2 = -2 \ln U_1$$

and

$$\Theta = 2 \pi U_2.$$
Because R2 is the square of the norm of the standard bivariate normal variable (X, Y), it has the chi-squared
distribution with two degrees of freedom. In the special case of two degrees of freedom, the chi-squared
distribution coincides with the exponential distribution, and the equation for R2 above is a simple way of
generating the required exponential variate.
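
Written out, the exponential step is a one-line inverse-transform argument; the sketch below merely restates the derivation above in equation form.

$$P(R^2 \le r) = 1 - e^{-r/2}, \qquad r \ge 0,$$

so solving $1 - e^{-r/2} = 1 - U_1$ (valid because $1 - U_1$ is also uniform on $(0, 1)$) gives

$$R^2 = -2 \ln U_1, \qquad \Theta = 2 \pi U_2,$$

and the Cartesian coordinates $Z_0 = R\cos\Theta$, $Z_1 = R\sin\Theta$ recover the basic form.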

Polar form
The polar form was first proposed by J.
Bell[6] and then modified by R. Knop.[7]
While several different versions of the
polar method have been described, the
version of R. Knop will be described
here because it is the most widely used,
in part due to its inclusion in Numerical
Recipes.

[Figure: Two uniformly distributed values, u and v, are used to produce the value s = R², which is likewise
uniformly distributed. The definitions of the sine and cosine are then applied to the basic form of the
Box–Muller transform to avoid using trigonometric functions.]

Given u and v, independent and uniformly distributed in the closed interval [−1, +1], set s = R² = u² + v². If
s = 0 or s ≥ 1, discard u and v, and try another pair (u, v). Because u and v are uniformly distributed and
because only points within the unit circle have been admitted, the values of s will be uniformly distributed in
the open interval (0, 1), too. The latter can be seen by calculating the cumulative distribution function for s in
the interval (0, 1): this is the area of a circle with radius √s, divided by π. From this we find the probability
density function to have the constant value 1 on the interval (0, 1). Equally so, the angle θ divided by 2π is
uniformly distributed in the interval [0, 1) and independent of s.
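
In symbols, the area-ratio argument reads, for 0 < s < 1,

$$P(S \le s) = \frac{\pi\,(\sqrt{s})^2}{\pi \cdot 1^2} = s,$$

so S has constant density 1 on (0, 1).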

We now identify the value of s with that of U1 and θ/(2π) with that of U2 in the basic form. As shown in the
figure, the values of cos(2π U2) = cos θ and sin(2π U2) = sin θ in the basic form can be replaced with the ratios
cos θ = u/√s and sin θ = v/√s, respectively. The advantage is that calculating the trigonometric functions
directly can be avoided. This is helpful when trigonometric functions are more expensive to compute than the
single division that replaces each one.

Just as the basic form produces two standard normal deviates, so does this alternate calculation:

$$z_0 = \sqrt{-2 \ln U_1}\,\cos(2 \pi U_2) = \sqrt{-2 \ln s}\left(\frac{u}{\sqrt{s}}\right) = u \cdot \sqrt{\frac{-2 \ln s}{s}}$$

and

$$z_1 = \sqrt{-2 \ln U_1}\,\sin(2 \pi U_2) = \sqrt{-2 \ln s}\left(\frac{v}{\sqrt{s}}\right) = v \cdot \sqrt{\frac{-2 \ln s}{s}}.$$
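
For concreteness, the rejection loop can be sketched in C++ in the style of the implementation section below; the function name polar_gaussian_pair and the choice of std::mt19937 are illustrative, not taken from Knop's published algorithm.

#include <cmath>
#include <random>
#include <utility>

// Sketch of the polar variant: draw (u, v) uniformly on [-1, 1] x [-1, 1],
// reject pairs outside the open unit disc (and the origin), then rescale.
std::pair<double, double> polar_gaussian_pair(std::mt19937& rng)
{
    std::uniform_real_distribution<double> unif(-1.0, 1.0);
    double u, v, s;
    do
    {
        u = unif(rng);
        v = unif(rng);
        s = u * u + v * v;
    } while (s == 0.0 || s >= 1.0);   // keep only 0 < s < 1

    const double factor = std::sqrt(-2.0 * std::log(s) / s);
    return std::make_pair(u * factor, v * factor);   // two independent N(0, 1) deviates
}

Each accepted pair (u, v) yields two deviates; the average number of pairs drawn per accepted pair is quantified in the next section.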

Contrasting the two forms


The polar method differs from the basic method in that it is a type of rejection sampling. It discards some
generated random numbers, but can be faster than the basic method because it is simpler to compute (provided
that the random number generator is relatively fast) and is more numerically robust.[8] Avoiding the use of
expensive trigonometric functions improves speed over the basic form.[6] It discards 1 − π/4 ≈ 21.46% of the
total input uniformly distributed random number pairs generated, i.e. it discards 4/π − 1 ≈ 27.32% of the
uniformly distributed random number pairs per Gaussian random number pair generated, requiring 4/π ≈ 1.2732
input random numbers per output random number.
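
These percentages come from a simple area ratio: the unit disc inscribed in the square [−1, 1] × [−1, 1].

$$P(\text{accept}) = \frac{\pi \cdot 1^2}{2 \times 2} = \frac{\pi}{4} \approx 78.54\%,
\qquad
1 - \frac{\pi}{4} \approx 21.46\%,
\qquad
\frac{4}{\pi} \approx 1.2732.$$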

The basic form requires two multiplications, 1/2 logarithm, 1/2 square root, and one trigonometric function for
each normal variate.[9] On some processors, the cosine and sine of the same argument can be calculated in
parallel using a single instruction. Notably for Intel-based machines, one can use the fsincos assembler
instruction or the expi instruction (usually available from C as an intrinsic function), to calculate complex

$$e^{iz} = \cos z + i \sin z,$$

and just separate the real and imaginary parts.

Note: To explicitly calculate the complex-polar form, use the following substitutions in the general form.
Let r = √(−2 ln(u1)) and z = 2π u2. Then

$$r e^{iz} = r \cos z + i\,r \sin z = z_0 + i z_1.$$
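
For illustration only, the same substitution can be written with the standard library's complex type instead of the processor intrinsics mentioned above; the function name box_muller_complex is hypothetical, and u1, u2 are assumed to be uniform samples in (0, 1].

#include <cmath>
#include <complex>
#include <utility>

// std::polar(r, theta) returns r * (cos(theta) + i * sin(theta)), so the
// real and imaginary parts are the two normal deviates z0 and z1.
std::pair<double, double> box_muller_complex(double u1, double u2)
{
    constexpr double two_pi = 6.283185307179586;
    const double r = std::sqrt(-2.0 * std::log(u1));
    const std::complex<double> z = std::polar(r, two_pi * u2);
    return std::make_pair(z.real(), z.imag());
}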

The polar form requires 3/2 multiplications, 1/2 logarithm, 1/2 square root, and 1/2 division for each normal
variate. The effect is to replace one multiplication and one trigonometric function with a single division and a
conditional loop.

Tails truncation
When a computer is used to produce a uniform random variable it will inevitably have some inaccuracies
because there is a lower bound on how close numbers can be to 0. If the generator uses 32 bits per output
value, the smallest non-zero number that can be generated is . When and are equal to this the
Box–Muller transform produces a normal random deviate equal to
. This means that the algorithm will not produce random variables
more than 6.660 standard deviations from the mean. This corresponds to a proportion of
lost due to the truncation, where is the standard cumulative normal
distribution. With 64 bits the limit is pushed to standard deviations, for which
.
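
The truncation bounds follow directly from the basic form when both uniforms take their smallest representable value:

$$\sqrt{-2 \ln\!\left(2^{-32}\right)} = \sqrt{64 \ln 2} \approx 6.660,
\qquad
\sqrt{-2 \ln\!\left(2^{-64}\right)} = \sqrt{128 \ln 2} \approx 9.419.$$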

Implementation
The standard Box–Muller transform generates values from the standard normal distribution (i.e. standard
normal deviates) with mean 0 and standard deviation 1. The implementation below in standard C++ generates
values from any normal distribution with mean μ and variance σ². If Z is a standard normal deviate, then
X = Zσ + μ will have a normal distribution with mean μ and standard deviation σ. Note that the random
number generator has been seeded to ensure that new, pseudo-random values will be returned from sequential
calls to the generateGaussianNoise function.

#include <cmath>
#include <limits>
#include <random>
#include <utility>

std::pair<double, double> generateGaussianNoise(double mu, double sigma)
{
    constexpr double epsilon = std::numeric_limits<double>::epsilon();
    constexpr double two_pi = 2.0 * 3.14159265358979323846; // 2*pi (M_PI is not guaranteed by standard C++)

    // initialize the random uniform number generator (runif) in a range 0 to 1
    static std::mt19937 rng(std::random_device{}()); // Standard mersenne_twister_engine seeded with std::random_device
    static std::uniform_real_distribution<> runif(0.0, 1.0);

    // create two random numbers, make sure u1 is greater than epsilon
    double u1, u2;
    do
    {
        u1 = runif(rng);
        u2 = runif(rng);
    } while (u1 <= epsilon);

    // compute z0 and z1
    auto mag = sigma * std::sqrt(-2.0 * std::log(u1));
    auto z0 = mag * std::cos(two_pi * u2) + mu;
    auto z1 = mag * std::sin(two_pi * u2) + mu;

    return std::make_pair(z0, z1);
}
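
A brief usage sketch, assuming the function above is visible in the same translation unit (the mean 5.0 and standard deviation 2.0 are arbitrary example values):

#include <iostream>

int main()
{
    // Draw three pairs from a normal distribution with mean 5.0 and
    // standard deviation 2.0 using generateGaussianNoise above.
    for (int i = 0; i < 3; ++i)
    {
        const auto sample = generateGaussianNoise(5.0, 2.0);
        std::cout << sample.first << ' ' << sample.second << '\n';
    }
    return 0;
}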

See also
Inverse transform sampling
Marsaglia polar method, a similar transform to Box–Muller which uses Cartesian coordinates instead of
polar coordinates

References
Howes, Lee; Thomas, David (2008). GPU Gems 3 - Efficient Random Number Generation and Application
Using CUDA. Pearson Education, Inc. ISBN 978-0-321-51526-1.
1. Box, G. E. P.; Muller, Mervin E. (1958). "A Note on the Generation of Random Normal Deviates". The
Annals of Mathematical Statistics. 29 (2): 610–611. doi:10.1214/aoms/1177706645
(https://doi.org/10.1214/aoms/1177706645). JSTOR 2237361 (https://www.jstor.org/stable/2237361).
2. Raymond E. A. C. Paley and Norbert Wiener, Fourier Transforms in the Complex Domain, New York:
American Mathematical Society (1934), §37.
3. Kloeden and Platen, Numerical Solutions of Stochastic Differential Equations, pp. 11–12.
4. Howes & Thomas 2008.
5. Sheldon Ross, A First Course in Probability (2002), pp. 279–281.
6. Bell, James R. (1968). "Algorithm 334: Normal random deviates"
(http://portal.acm.org/citation.cfm?doid=363397.363547). Communications of the ACM. 11 (7): 498.
doi:10.1145/363397.363547 (https://doi.org/10.1145/363397.363547).
7. Knop, R. (1969). "Remark on algorithm 334 [G5]: Normal random deviates"
(http://portal.acm.org/citation.cfm?doid=362946.362996). Communications of the ACM. 12 (5): 281.
doi:10.1145/362946.362996 (https://doi.org/10.1145/362946.362996).
8. Everett F. Carter, Jr., The Generation and Application of Random Numbers, Forth Dimensions (1994),
Vol. 16, No. 1 & 2. (ftp://ftp.taygeta.com/pub/publications/randnum.tar.Z)
9. Note that the evaluation of 2πU1 is counted as one multiplication because the value of 2π can be
computed in advance and used repeatedly.

External links
Weisstein, Eric W. "Box-Muller Transformation"
(https://mathworld.wolfram.com/Box-MullerTransformation.html). MathWorld.
How to Convert a Uniform Distribution to a Gaussian Distribution (C Code)
(https://cockrum.net/code.html#TrueRNG_gaussian)

Retrieved from "https://en.wikipedia.org/w/index.php?title=Box–Muller_transform&oldid=1017624883"
