
Normal distribution

In probability theory, a normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}

The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma is its standard deviation.[1] The variance of the distribution is \sigma^2.[2] A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.

[Figure: probability density function (the red curve is the standard normal distribution) and cumulative distribution function.]

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[3][4] Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.[5]

Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.

A normal distribution is sometimes informally called a bell curve.[6] However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).

Skewness
Contents

Definitions
Standard normal distribution
General normal distribution
Notation
Alternative parameterizations
Cumulative distribution function
Standard deviation and coverage
Quantile function
Properties
Symmetries and derivatives
Moments
Fourier transform and characteristic function
Moment and cumulant generating functions
Stein operator and class
Zero-variance limit
Maximum entropy
Other properties
Related distributions
Central limit theorem
Operations and functions of normal variables
Operations on a single normal variable
Operations of two independent normal variables
Operations of two independent standard normal variables
Operations of multiple independent normal variables
Operations of multiple correlated normal variables
Operations on the density function
Infinite divisibility and Cramér's theorem
Bernstein's theorem
Extensions
Statistical inference
Estimation of parameters
Sample mean
Sample variance
Confidence intervals
Normality tests
Bayesian analysis of the normal distribution
Sum of two quadratics
Scalar form
Vector form
Sum of differences from the mean
With known variance
With known mean
With unknown mean and unknown variance
Occurrence and applications
Exact normality
Approximate normality
Assumed normality
Computational methods
Generating values from normal distribution
Numerical approximations for the normal CDF
History
Development
Naming
See also
Notes
References
Citations
Sources
External links

Definitions

Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution. This is a special case when \mu = 0 and \sigma = 1, and it is described by this probability density function:[1]

\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}

Here, the factor 1/\sqrt{2\pi} ensures that the total area under the curve \varphi(z) is equal to one.[note 1] The factor 1/2 in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around z = 0, where it attains its maximum value 1/\sqrt{2\pi} and has inflection points at z = +1 and z = -1.

Authors differ on which normal distribution should be called the "standard" one. Carl Friedrich Gauss, for example, defined the standard normal as having a variance of \sigma^2 = 1/2. That is:

\varphi(z) = \frac{e^{-z^2}}{\sqrt{\pi}}

On the other hand, Stephen Stigler[7] goes even further, defining the standard normal as having a variance of \sigma^2 = 1/(2\pi):

\varphi(z) = e^{-\pi z^2}

General normal distribution

Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor \sigma (the standard deviation) and then translated by \mu (the mean value):

f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x-\mu}{\sigma}\right)

The probability density must be scaled by 1/\sigma so that the integral is still 1.

If Z is a standard normal deviate, then X = \sigma Z + \mu will have a normal distribution with expected value \mu and standard deviation \sigma. Conversely, if X is a normal deviate with parameters \mu and \sigma^2, then the distribution of Z = (X - \mu)/\sigma will have a standard normal distribution. This variate is also called the standardized form of X.
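As a minimal illustration of this standardization, the following Python sketch (standard library only; the variable names and example parameters are mine, not from the source) draws a general normal deviate from a standard one and standardizes it back:

    import random

    mu, sigma = 10.0, 2.5  # illustrative parameters

    # Draw a standard normal deviate Z and transform it: X = sigma * Z + mu
    z = random.gauss(0.0, 1.0)
    x = sigma * z + mu          # X ~ N(mu, sigma^2)

    # Standardize back: Z = (X - mu) / sigma recovers the standard normal deviate
    z_back = (x - mu) / sigma
    assert abs(z_back - z) < 1e-12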

Notation

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter \phi (phi).[8] The alternative form of the Greek letter phi, \varphi, is also used quite often.[1] The normal distribution is often referred to as N(\mu, \sigma^2) or \mathcal{N}(\mu, \sigma^2).[1][9] Thus when a random variable X is normally distributed with mean \mu and standard deviation \sigma, one may write

X \sim \mathcal{N}(\mu, \sigma^2).

Alternative parameterizations

Some authors advocate using the precision \tau as the parameter defining the width of the distribution, instead of the deviation \sigma or the variance \sigma^2. The precision is normally defined as the reciprocal of the variance, \tau = 1/\sigma^2.[10] The formula for the distribution then becomes

f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}

This choice is claimed to have advantages in numerical computations when \sigma is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Alternatively, the reciprocal of the standard deviation \tau' = 1/\sigma might be defined as the precision, in which case the expression of the normal distribution becomes

f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2(x-\mu)^2/2}

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.

Normal distributions form an exponential family with natural parameters \theta_1 = \mu/\sigma^2 and \theta_2 = -1/(2\sigma^2), and natural statistics x and x^2. The dual expectation parameters for the normal distribution are \eta_1 = \mu and \eta_2 = \mu^2 + \sigma^2.

Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter \Phi (phi),[1] is the integral

\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt

The related error function \operatorname{erf}(x) gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range [-x, x]. That is:[1]

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more.

The two functions are closely related, namely

\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]

For a generic normal distribution with density f, mean \mu and deviation \sigma, the cumulative distribution function is

F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]

The complement of the standard normal CDF, Q(x) = 1 - \Phi(x), is often called the Q-function, especially in engineering texts.[11][12] It gives the probability that the value of a standard normal random variable X will exceed x: P(X > x). Other definitions of the Q-function, all of which are simple transformations of \Phi, are also used occasionally.[13]

The graph of the standard normal CDF \Phi has 2-fold rotational symmetry around the point (0, 1/2); that is, \Phi(-x) = 1 - \Phi(x). Its antiderivative (indefinite integral) can be expressed as follows:

\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C

The CDF of the standard normal distribution can be expanded by integration by parts into a series:

\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \left[x + \frac{x^3}{3} + \frac{x^5}{3\cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right]

where !! denotes the double factorial.

An asymptotic expansion of the CDF for large x can also be derived using integration by parts. For more, see Error
function#Asymptotic expansion.[14]
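Because of the identity above, \Phi can be evaluated with the error function available in most languages; a small Python sketch (helper names are my own):

    import math

    def phi_cdf(x: float) -> float:
        # Standard normal CDF via Phi(x) = (1 + erf(x / sqrt(2))) / 2
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def q_function(x: float) -> float:
        # Q(x) = 1 - Phi(x): probability that a standard normal exceeds x
        return 1.0 - phi_cdf(x)

    def normal_cdf(x: float, mu: float, sigma: float) -> float:
        # CDF of N(mu, sigma^2) by standardizing the argument
        return phi_cdf((x - mu) / sigma)

    print(phi_cdf(1.96))    # ~0.975
    print(q_function(0.0))  # 0.5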

Standard deviation and coverage

About 68% of values drawn from a normal distribution are within one standard deviation σ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[6] This fact is known as the 68-95-99.7 (empirical) rule, or the 3-sigma rule.

More precisely, the probability that a normal deviate lies in the range between \mu - n\sigma and \mu + n\sigma is given by

F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right)

For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.

To 12 significant figures, the values for n = 1, 2, ..., 6 are:[15]

n    p = F(μ+nσ) − F(μ−nσ)    1 − p                  or 1 in              OEIS
1    0.682 689 492 137        0.317 310 507 863      3.151 487 187 53     OEIS: A178647
2    0.954 499 736 104        0.045 500 263 896      21.977 894 5080      OEIS: A110894
3    0.997 300 203 937        0.002 699 796 063      370.398 347 345      OEIS: A270712
4    0.999 936 657 516        0.000 063 342 484      15 787.192 7673
5    0.999 999 426 697        0.000 000 573 303      1 744 277.893 62
6    0.999 999 998 027        0.000 000 001 973      506 797 345.897

For large n, one can use the approximation 1 - p \approx \frac{e^{-n^2/2}}{n\sqrt{\pi/2}}.
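The coverage values and the tail approximation can be checked numerically; a short Python verification sketch:

    import math

    # Coverage probability p(n) = erf(n / sqrt(2)) and its large-n tail approximation
    for n in range(1, 7):
        p = math.erf(n / math.sqrt(2.0))
        tail_approx = math.exp(-n * n / 2.0) / (n * math.sqrt(math.pi / 2.0))
        print(f"n={n}: p={p:.12f}, 1-p={1 - p:.3e}, approx={tail_approx:.3e}")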

Quantile function
The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1)

For a normal random variable with mean \mu and variance \sigma^2, the quantile function is

F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1)

The quantile \Phi^{-1}(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable X will exceed \mu + z_p\sigma with probability 1 - p, and will lie outside the interval \mu \pm z_p\sigma with probability 2(1 - p). In particular, the quantile z_{0.975} is 1.96; therefore a normal random variable will lie outside the interval \mu \pm 1.96\sigma in only 5% of cases.

The following table gives the quantile z_p such that X will lie in the range \mu \pm z_p\sigma with a specified probability p. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.[16][17] NOTE: the following table shows \sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\!\left(\frac{p+1}{2}\right), not \Phi^{-1}(p) as defined above.

p        z_p                    p              z_p
0.80     1.281 551 565 545      0.999          3.290 526 731 492
0.90     1.644 853 626 951      0.9999         3.890 591 886 413
0.95     1.959 963 984 540      0.99999        4.417 173 413 469
0.98     2.326 347 874 041      0.999999       4.891 638 475 699
0.99     2.575 829 303 549      0.9999999      5.326 723 886 384
0.995    2.807 033 768 344      0.99999999     5.730 728 868 236
0.998    3.090 232 306 168      0.999999999    6.109 410 204 869
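Python's standard library exposes the probit function as statistics.NormalDist().inv_cdf (Python 3.8+), so the two-sided quantiles in this table are \Phi^{-1}((p+1)/2); a quick check:

    from statistics import NormalDist

    probit = NormalDist().inv_cdf  # standard normal quantile function Phi^{-1}

    for p in (0.80, 0.90, 0.95, 0.99):
        # Two-sided quantile: X lies in mu +/- z*sigma with probability p
        z = probit((p + 1.0) / 2.0)
        print(f"p={p}: z={z:.12f}")  # e.g. p=0.95 -> z ~ 1.959963984540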

For small p, the quantile function has the useful asymptotic expansion

\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).
Properties
The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance)
are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance.[18][19] Geary has
shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and
variance calculated from a set of independent draws are independent of each other.[20][21]

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is
non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly
skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions,
such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value x lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an
appropriate model when one expects a significant fraction of outliers—values that lie many standard deviations away from the
mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often
become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and
the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed random variables whether or not the mean or variance is finite. Except for the Gaussian which is a limiting
case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that
have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy
distribution.

Symmetries and derivatives

The normal distribution with density f(x) (mean \mu and standard deviation \sigma > 0) has the following properties:

It is symmetric around the point x = \mu, which is at the same time the mode, the median and the mean of the distribution.[22]
It is unimodal: its first derivative is positive for x < \mu, negative for x > \mu, and zero only at x = \mu.
The area under the curve and over the x-axis is unity (i.e. equal to one).
Its first derivative is f'(x) = -\frac{x-\mu}{\sigma^2}\, f(x).
Its density has two inflection points (where the second derivative of f is zero and changes sign), located one standard deviation away from the mean, namely at x = \mu - \sigma and x = \mu + \sigma.[22]
Its density is log-concave.[22]
Its density is infinitely differentiable, indeed supersmooth of order 2.[23]

Furthermore, the density \varphi of the standard normal distribution (i.e. \mu = 0 and \sigma = 1) also has the following properties:

Its first derivative is \varphi'(x) = -x\,\varphi(x).
Its second derivative is \varphi''(x) = (x^2 - 1)\,\varphi(x).
More generally, its nth derivative is \varphi^{(n)}(x) = (-1)^n \operatorname{He}_n(x)\,\varphi(x), where \operatorname{He}_n(x) is the nth (probabilist) Hermite polynomial.[24]
The probability that a normally distributed variable X with known \mu and \sigma is in a particular set can be calculated by using the fact that the fraction Z = (X - \mu)/\sigma has a standard normal distribution.

Moments

The plain and absolute moments of a variable X are the expected values of X^p and |X|^p, respectively. If the expected value \mu of X is zero, these parameters are called central moments. Usually we are interested only in moments with integer order p.

If X has a normal distribution, these moments exist and are finite for any p whose real part is greater than −1. For any non-negative integer p, the plain central moments are:[25]

\mathrm{E}\left[(X-\mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p (p-1)!! & \text{if } p \text{ is even.} \end{cases}

Here n!! denotes the double factorial, that is, the product of all numbers from n to 1 that have the same parity as n.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer p,

\mathrm{E}\left[|X-\mu|^p\right] = \sigma^p (p-1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even.} \end{cases}

The last formula is valid also for any non-integer p > -1. When the mean \mu = 0, the plain and absolute moments can be expressed in terms of confluent hypergeometric functions {}_1F_1 and U. These expressions remain valid even if p is not an integer. See also generalized Hermite polynomials.

Order    Non-central moment E[X^p]                 Central moment E[(X-\mu)^p]
1        \mu                                       0
2        \mu^2 + \sigma^2                          \sigma^2
3        \mu^3 + 3\mu\sigma^2                      0
4        \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4        3\sigma^4
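A Monte Carlo spot-check of the even central-moment formula \mathrm{E}[(X-\mu)^p] = \sigma^p (p-1)!!; a Python sketch (sample size and seed are arbitrary choices of mine):

    import random

    def double_factorial(n: int) -> int:
        # n!! = n * (n-2) * ... down to 1 (odd n) or 2 (even n)
        result = 1
        while n > 1:
            result *= n
            n -= 2
        return result

    random.seed(0)
    mu, sigma = 1.0, 2.0
    samples = [random.gauss(mu, sigma) for _ in range(200_000)]

    for p in (2, 4, 6):
        empirical = sum((x - mu) ** p for x in samples) / len(samples)
        exact = sigma ** p * double_factorial(p - 1)
        print(f"p={p}: empirical ~ {empirical:.2f}, exact = {exact:.2f}")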

The expectation of X conditioned on the event that X lies in an interval [a, b] is given by

\mathrm{E}[X \mid a < X < b] = \mu - \sigma^2\,\frac{f(b) - f(a)}{F(b) - F(a)}

where f and F respectively are the density and the cumulative distribution function of X. For b = \infty this is known as the inverse Mills ratio. Note that above, the density f of X is used instead of the standard normal density as in the inverse Mills ratio, so here we have \sigma^2 instead of \sigma.

Fourier transform and characteristic function

The Fourier transform of a normal density f with mean \mu and standard deviation \sigma is[26]

\hat f(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2}

where i is the imaginary unit. If the mean \mu = 0, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation 1/\sigma. In particular, the standard normal distribution \varphi is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable X is closely connected to the characteristic function \varphi_X(t) of that variable, which is defined as the expected value of e^{itX}, as a function of the real variable t (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value variable t.[27] The relation between both is:

\varphi_X(t) = \hat f(-t)

Moment and cumulant generating functions

The moment generating function of a real random variable X is the expected value of e^{tX}, as a function of the real parameter t. For a normal distribution with density f, mean \mu and deviation \sigma, the moment generating function exists and is equal to

M(t) = \mathrm{E}\left[e^{tX}\right] = e^{\mu t}\, e^{\sigma^2 t^2 / 2}

The cumulant generating function is the logarithm of the moment generating function, namely

g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2

Since this is a quadratic polynomial in t, only the first two cumulants are nonzero, namely the mean \mu and the variance \sigma^2.

Stein operator and class

Within Stein's method the Stein operator and class of a random variable X \sim \mathcal{N}(\mu, \sigma^2) are \mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu)f(x) and the class \mathcal{F} of all absolutely continuous functions f : \mathbb{R} \to \mathbb{R} such that \mathrm{E}[|f'(X)|] < \infty.

Zero-variance limit

In the limit when \sigma tends to zero, the probability density f(x) eventually tends to zero at any x \ne \mu, but grows without limit if x = \mu, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when \sigma = 0.

However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" \delta translated by the mean \mu, that is f(x) = \delta(x - \mu). Its CDF is then the Heaviside step function translated by the mean \mu, namely

F(x) = \begin{cases} 0 & \text{if } x < \mu \\ 1 & \text{if } x \ge \mu \end{cases}

Maximum entropy

Of all probability distributions over the reals with a specified mean \mu and variance \sigma^2, the normal distribution N(\mu, \sigma^2) is the one with maximum entropy.[28] If X is a continuous random variable with probability density f(x), then the entropy of X is defined as[29][30][31]

H(X) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx

where f(x)\ln f(x) is understood to be zero whenever f(x) = 0. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus. A function with two Lagrange multipliers is defined:

L = \int_{-\infty}^{\infty} f(x)\ln f(x)\, dx - \lambda_0\left(1 - \int_{-\infty}^{\infty} f(x)\, dx\right) - \lambda\left(\sigma^2 - \int_{-\infty}^{\infty} f(x)(x-\mu)^2\, dx\right)

where f(x) is, for now, regarded as some density function with mean \mu and standard deviation \sigma.

At maximum entropy, a small variation \delta f(x) about f(x) will produce a variation \delta L about L which is equal to 0:

0 = \delta L = \int_{-\infty}^{\infty} \delta f(x)\left(\ln f(x) + 1 + \lambda_0 + \lambda(x-\mu)^2\right) dx

Since this must hold for any small \delta f(x), the term in brackets must be zero, and solving for f(x) yields:

f(x) = e^{-\lambda_0 - 1 - \lambda(x-\mu)^2}

Using the constraint equations to solve for \lambda_0 and \lambda yields the density of the normal distribution:

f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

The entropy of a normal distribution X \sim N(\mu, \sigma^2) is equal to

H(X) = \tfrac{1}{2}\ln\left(2\pi e \sigma^2\right)
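As a numerical sanity check of this closed form, one can compare it with the defining integral; a Python sketch using a midpoint-rule quadrature over a wide grid (the grid limits are my own choice):

    import math

    def normal_pdf(x: float, mu: float, sigma: float) -> float:
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    mu, sigma = 0.0, 1.5
    closed_form = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

    # Midpoint-rule approximation of -integral f(x) ln f(x) dx over mu +/- 10 sigma
    n, lo = 20_000, mu - 10 * sigma
    dx = 20 * sigma / n
    numeric = 0.0
    for i in range(n):
        f = normal_pdf(lo + (i + 0.5) * dx, mu, sigma)
        numeric -= f * math.log(f) * dx

    print(closed_form, numeric)  # the two values agree to several decimals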

Other properties

1. If the characteristic function \varphi_X of some random variable X is of the form \varphi_X(t) = \exp(Q(t)), where Q(t) is a polynomial, then the Marcinkiewicz theorem (named after Józef Marcinkiewicz) asserts that Q can be at most a quadratic polynomial, and therefore X is a normal random variable.[32] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.
2. If X and Y are jointly normal and uncorrelated, then they are independent. The requirement that X and Y should be jointly normal is essential; without it the property does not hold.[33][34][proof] For non-normal random variables uncorrelatedness does not imply independence.
3. The Kullback–Leibler divergence of one normal distribution X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2) from another X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2) is given by (see the sketch after this list):[35]

D_{\mathrm{KL}}(X_1 \,\|\, X_2) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} + \frac{1}{2}\left(\frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2}\right)

The Hellinger distance between the same distributions is equal to

H^2(X_1, X_2) = 1 - \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2}}\; e^{-\frac{(\mu_1 - \mu_2)^2}{4(\sigma_1^2 + \sigma_2^2)}}

4. The Fisher information matrix for a normal distribution, with respect to \mu and \sigma^2, is diagonal and takes the form

\mathcal{I}(\mu, \sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}
5. The conjugate prior of the mean of a normal distribution is another normal distribution.[36] Specifically, if x_1, \ldots, x_n are iid \mathcal{N}(\mu, \sigma^2) and the prior is \mu \sim \mathcal{N}(\mu_0, \sigma_0^2), then the posterior distribution for the estimator of \mu will be

\mu \mid x_1, \ldots, x_n \sim \mathcal{N}\!\left(\frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n} + \sigma_0^2},\; \left(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\right)^{-1}\right)

6. The family of normal distributions not only forms an exponential family (EF), but in fact forms a natural
exponential family (NEF) with quadratic variance function (NEF-QVF). Many properties of normal
distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions
generally. NEF-QVF distributions comprise six families, including Poisson, Gamma, binomial, and negative
binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
7. In information geometry, the family of normal distributions forms a statistical manifold with constant curvature −1. The same family is flat with respect to the (±1)-connections ∇(e) and ∇(m).[37]
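The Kullback–Leibler formula in item 3 translates directly into code; a minimal Python sketch (the function name is mine), evaluating the closed form:

    import math

    def kl_normal(mu1: float, s1: float, mu2: float, s2: float) -> float:
        # D_KL(N(mu1, s1^2) || N(mu2, s2^2)), per the formula in item 3
        return ((mu1 - mu2) ** 2) / (2 * s2 ** 2) + 0.5 * (
            (s1 ** 2) / (s2 ** 2) - 1.0 - math.log((s1 ** 2) / (s2 ** 2))
        )

    print(kl_normal(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical distributions
    print(kl_normal(1.0, 1.0, 0.0, 2.0))  # positive, and asymmetric in its arguments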

Related distributions

Central limit theorem


The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where X_1, \ldots, X_n are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance \sigma^2, and Z is their mean scaled by \sqrt{n}:

Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)

Then, as n increases, the probability distribution of Z will tend to the normal distribution with zero mean and variance \sigma^2.

[Figure: as the number of discrete events increases, the function begins to resemble a normal distribution.]

The theorem can be extended to variables (X_i) that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.

The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

The binomial distribution B(n, p) is approximately normal with mean np and variance np(1 - p) for large n and for p not too close to 0 or 1.
The Poisson distribution with parameter \lambda is approximately normal with mean \lambda and variance \lambda, for large values of \lambda.[38]
The chi-squared distribution \chi^2(k) is approximately normal with mean k and variance 2k, for large k.
The Student's t-distribution t(\nu) is approximately normal with mean 0 and variance 1 when \nu is large.

[Figure: comparison of probability density functions for the sum of n fair 6-sided dice, showing their convergence to a normal distribution with increasing n, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).]

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.

A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.

This theorem can also be used to justify modeling the sum of many uniform noise sources as Gaussian noise. See AWGN.
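The dice example in the figure can be simulated directly; a short Python sketch (sample counts and seed are my own choices) comparing an empirical tail probability of the standardized dice sum with the normal tail:

    import math
    import random

    random.seed(1)

    def dice_sum(n: int) -> int:
        return sum(random.randint(1, 6) for _ in range(n))

    n, trials = 10, 100_000
    mean = n * 3.5
    var = n * 35.0 / 12.0  # variance of one fair die is 35/12

    # Empirical probability that the standardized sum exceeds 1
    hits = sum((dice_sum(n) - mean) / math.sqrt(var) > 1.0 for _ in range(trials))
    print(hits / trials)                               # empirical tail
    print(1 - 0.5 * (1 + math.erf(1 / math.sqrt(2))))  # normal tail, ~0.1587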

Operations and functions of normal variables

The probability density, cumulative distribution, and inverse cumulative distribution of any function of one or more independent or correlated normal variables can be computed with the numerical method of ray-scanning[39] (Matlab code (https://www.mathworks.com/matlabcentral/fileexchange/82410-integrate-and-classify-normal-distributions)). In the following sections we look at some special cases.

Operations on a single normal variable

If X is distributed normally with mean μ and variance σ^2, then:

aX + b, for any real numbers a and b, is also normally distributed, with mean aμ + b and standard deviation |a|σ. That is, the family of normal distributions is closed under linear transformations.
The exponential of X is distributed log-normally: e^X ~ ln(N(μ, σ^2)).
The absolute value of X has a folded normal distribution: |X| ~ N_f(μ, σ^2). If μ = 0 this is known as the half-normal distribution.
The absolute value of normalized residuals, |X − μ|/σ, has a chi distribution with one degree of freedom: |X − μ|/σ ~ χ_1.
The square of X/σ has the noncentral chi-squared distribution with one degree of freedom: X^2/σ^2 ~ χ_1^2(μ^2/σ^2). If μ = 0, the distribution is called simply chi-squared.
The log likelihood of a normal variable is simply the log of its probability density function, ln L(x) = ln f(x). Since this is a scaled and shifted square of a standard normal variable, it is distributed as a scaled and shifted chi-squared variable.
The distribution of the variable X restricted to an interval [a, b] is called the truncated normal distribution.
(X − μ)^{−2} has a Lévy distribution with location 0 and scale σ^{−2}.

Operations of two independent normal variables


If X_1 and X_2 are two independent normal random variables, with means \mu_1, \mu_2 and standard deviations \sigma_1, \sigma_2, then their sum X_1 + X_2 will also be normally distributed,[proof] with mean \mu_1 + \mu_2 and variance \sigma_1^2 + \sigma_2^2.

In particular, if X and Y are independent normal deviates with zero mean and variance \sigma^2, then X + Y and X - Y are also independent and normally distributed, with zero mean and variance 2\sigma^2. This is a special case of the polarization identity.[40]

If X_1, X_2 are two independent normal deviates with mean \mu and deviation \sigma, and a, b are arbitrary real numbers, then the variable

X_3 = \frac{a X_1 + b X_2 - (a + b)\mu}{\sqrt{a^2 + b^2}} + \mu

is also normally distributed with mean \mu and deviation \sigma. It follows that the normal distribution is stable (with exponent \alpha = 2).

[Figure: (a) probability density of a function of a single normal variable; (b) probability density of a function of two normal variables; (c) heat map of the joint probability density of two functions of two correlated normal variables. These are computed by the numerical method of ray-scanning.[39]]

Operations of two independent standard normal variables

If Z_1 and Z_2 are two independent standard normal random variables with mean 0 and variance 1, then:

Their sum and difference are distributed normally with mean zero and variance two: Z_1 \pm Z_2 \sim \mathcal{N}(0, 2).
Their product Z_1 Z_2 follows the product distribution[41] with density function f(z) = \pi^{-1} K_0(|z|), where K_0 is the modified Bessel function of the second kind. This distribution is symmetric around zero, unbounded at z = 0, and has the characteristic function \varphi(t) = (1 + t^2)^{-1/2}.
Their ratio follows the standard Cauchy distribution: Z_1 / Z_2 \sim \mathrm{Cauchy}(0, 1).
Their Euclidean norm \sqrt{Z_1^2 + Z_2^2} has the Rayleigh distribution.
Operations of multiple independent normal variables

Any linear combination of independent normal deviates is a normal deviate.
If X_1, X_2, \ldots, X_n are independent standard normal random variables, then the sum of their squares has the chi-squared distribution with n degrees of freedom:

X_1^2 + \cdots + X_n^2 \sim \chi_n^2

If X_1, X_2, \ldots, X_n are independent normally distributed random variables with means \mu and variances \sigma^2, then their sample mean is independent from the sample standard deviation,[42] which can be demonstrated using Basu's theorem or Cochran's theorem.[43] The ratio of these two quantities will have the Student's t-distribution with n - 1 degrees of freedom:

t = \frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t_{n-1}

If X_1, \ldots, X_n, Y_1, \ldots, Y_m are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution with (n, m) degrees of freedom:[44]

F = \frac{\left(X_1^2 + \cdots + X_n^2\right)/n}{\left(Y_1^2 + \cdots + Y_m^2\right)/m} \sim F_{n,m}

Operations of multiple correlated normal variables

A quadratic form of a normal vector, i.e. a quadratic function q(x_1, \ldots, x_n) of multiple independent or correlated normal variables, is a generalized chi-square variable.

Operations on the density function

The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different
normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a
section of a single density function.

Infinite divisibility and Cramér's theorem

For any positive integer n, any normal distribution with mean \mu and variance \sigma^2 is the distribution of the sum of n independent normal deviates, each with mean \mu/n and variance \sigma^2/n. This property is called infinite divisibility.[45]

Conversely, if X_1 and X_2 are independent random variables and their sum X_1 + X_2 has a normal distribution, then both X_1 and X_2 must be normal deviates.[46]

This result is known as Cramér’s decomposition theorem, and is equivalent to saying that the convolution of two distributions
is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian
variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[32]

Bernstein's theorem

Bernstein's theorem states that if X and Y are independent and X + Y and X - Y are also independent, then both X and Y must necessarily have normal distributions.[47][48]

More generally, if X_1, \ldots, X_n are independent random variables, then two distinct linear combinations \sum a_k X_k and \sum b_k X_k will be independent if and only if all X_k are normal and \sum a_k b_k \sigma_k^2 = 0, where \sigma_k^2 denotes the variance of X_k.[47]

Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far
beyond the standard framework of the univariate (that is one-dimensional) case (Case 1). All these extensions are also called
normal or Gaussian laws, so a certain ambiguity in names exists.

The multivariate normal distribution describes the Gaussian law in the k-dimensional Euclidean space. A vector X ∈ R^k is multivariate-normally distributed if any linear combination of its components \sum_{j=1}^k a_j X_j has a (univariate) normal distribution. The variance of X is a k×k symmetric positive-definite matrix V. The multivariate normal distribution is a special case of the elliptical distributions. As such, its iso-density loci in the k = 2 case are ellipses and in the case of arbitrary k are ellipsoids.
Rectified Gaussian distribution: a rectified version of the normal distribution, with all the negative elements reset to 0.
Complex normal distribution deals with the complex normal vectors. A complex vector X ∈ Ck is said to be
normal if both its real and imaginary components jointly possess a 2k-dimensional multivariate normal
distribution. The variance-covariance structure of X is described by two matrices: the variance matrix Γ, and
the relation matrix C.
Matrix normal distribution describes the case of normally distributed matrices.
Gaussian processes are the normally distributed stochastic processes. These can be viewed as elements of
some infinite-dimensional Hilbert space H, and thus are the analogues of multivariate normal vectors for the
case k = ∞. A random element h ∈ H is said to be normal if for any constant a ∈ H the scalar product (a, h)
has a (univariate) normal distribution. The variance structure of such Gaussian random element can be
described in terms of the linear covariance operator K: H → H. Several Gaussian processes became popular
enough to have their own names:
Brownian motion,
Brownian bridge,
Ornstein–Uhlenbeck process.
Gaussian q-distribution is an abstract mathematical construction that represents a "q-analogue" of the normal
distribution.
the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy,
and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution
above.

A random variable X has a two-piece normal distribution if its density follows a normal law N(\mu, \sigma_1^2) to the left of the mean and a normal law N(\mu, \sigma_2^2) to the right of the mean (rescaled to form a continuous density), where μ is the mean and σ_1 and σ_2 are the standard deviations of the distribution to the left and right of the mean respectively.

The mean, variance and third central moment of this distribution have been determined:[49]

\mathrm{E}(X) = \mu + \sqrt{\tfrac{2}{\pi}}(\sigma_2 - \sigma_1)
\mathrm{V}(X) = \left(1 - \tfrac{2}{\pi}\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2
\mathrm{T}(X) = \sqrt{\tfrac{2}{\pi}}(\sigma_2 - \sigma_1)\left[\left(\tfrac{4}{\pi} - 1\right)(\sigma_2 - \sigma_1)^2 + \sigma_1\sigma_2\right]

where E(X), V(X) and T(X) are the mean, variance, and third central moment respectively.
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case, a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are:

Pearson distribution — a four-parameter family of probability distributions that extend the normal law to
include different skewness and kurtosis values.
The generalized normal distribution, also known as the exponential power distribution, allows for distribution
tails with thicker or thinner asymptotic behaviors.

Statistical inference

Estimation of parameters

It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample (x_1, \ldots, x_n) from a normal \mathcal{N}(\mu, \sigma^2) population we would like to learn the approximate values of parameters \mu and \sigma^2. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

\ln \mathcal{L}(\mu, \sigma^2) = \sum_{i=1}^n \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2

Taking derivatives with respect to \mu and \sigma^2 and solving the resulting system of first order conditions yields the maximum likelihood estimates:

\hat\mu = \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2
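The maximum likelihood estimates are one-liners; a minimal Python sketch (note the divisor n rather than n − 1):

    def normal_mle(xs: list[float]) -> tuple[float, float]:
        # Maximum likelihood estimates (mu_hat, sigma2_hat) for an i.i.d. normal sample
        n = len(xs)
        mu_hat = sum(xs) / n
        sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n  # divisor n, not n-1
        return mu_hat, sigma2_hat

    print(normal_mle([2.0, 4.0, 6.0]))  # (4.0, 2.666...)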

Sample mean

Estimator \hat\mu is called the sample mean, since it is the arithmetic mean of all observations. The statistic \bar{x} is complete and sufficient for \mu, and therefore by the Lehmann–Scheffé theorem, \hat\mu is the uniformly minimum variance unbiased (UMVU) estimator.[50] In finite samples it is distributed normally:

\hat\mu \sim \mathcal{N}(\mu, \sigma^2/n)

The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix \mathcal{I}^{-1}. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of \hat\mu is proportional to 1/\sqrt{n}, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, \hat\mu is consistent, that is, it converges in probability to \mu as n \to \infty. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

\sqrt{n}(\hat\mu - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2)
Sample variance

The estimator \hat\sigma^2 is called the sample variance, since it is the variance of the sample (x_1, \ldots, x_n). In practice, another estimator is often used instead of the \hat\sigma^2. This other estimator is denoted s^2, and is also called the sample variance, which represents a certain ambiguity in terminology; its square root s is called the sample standard deviation. The estimator s^2 differs from \hat\sigma^2 by having (n − 1) instead of n in the denominator (the so-called Bessel's correction):

s^2 = \frac{n}{n-1}\,\hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2

The difference between s^2 and \hat\sigma^2 becomes negligibly small for large n's. In finite samples however, the motivation behind the use of s^2 is that it is an unbiased estimator of the underlying parameter \sigma^2, whereas \hat\sigma^2 is biased. Also, by the Lehmann–Scheffé theorem the estimator s^2 is uniformly minimum variance unbiased (UMVU),[50] which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator \hat\sigma^2 is "better" than the s^2 in terms of the mean squared error (MSE) criterion. In finite samples both s^2 and \hat\sigma^2 have scaled chi-squared distribution with (n − 1) degrees of freedom:

s^2 \sim \frac{\sigma^2}{n-1}\cdot\chi^2_{n-1}, \qquad \hat\sigma^2 \sim \frac{\sigma^2}{n}\cdot\chi^2_{n-1}

The first of these expressions shows that the variance of s^2 is equal to 2\sigma^4/(n-1), which is slightly greater than the σσ-element of the inverse Fisher information matrix \mathcal{I}^{-1}, which is 2\sigma^4/n. Thus, s^2 is not an efficient estimator for \sigma^2, and moreover, since s^2 is UMVU, we can conclude that the finite-sample efficient estimator for \sigma^2 does not exist.

Applying the asymptotic theory, both estimators s^2 and \hat\sigma^2 are consistent, that is they converge in probability to \sigma^2 as the sample size n \to \infty. The two estimators are also both asymptotically normal:

\sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4)

In particular, both estimators are asymptotically efficient for \sigma^2.
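Python's statistics module implements both estimators, pvariance (divisor n) and variance (divisor n − 1, Bessel's correction); a sketch showing the bias of the former on repeated small samples (seed and sizes are mine):

    import random
    import statistics

    random.seed(3)
    runs = [[random.gauss(0.0, 2.0) for _ in range(10)] for _ in range(20_000)]

    biased = statistics.fmean(statistics.pvariance(xs) for xs in runs)    # divisor n
    unbiased = statistics.fmean(statistics.variance(xs) for xs in runs)   # divisor n-1
    print(biased, unbiased)  # biased ~ 3.6 = (n-1)/n * 4, unbiased ~ 4.0 = true variance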

Confidence intervals

By Cochran's theorem, for normal distributions the sample mean and the sample variance s2 are independent, which means
there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean
and sample variance are independent, then the sample must have come from the normal distribution. The independence
between \hat\mu and s can be employed to construct the so-called t-statistic:

t = \frac{\hat\mu - \mu}{s/\sqrt{n}} \sim t_{n-1}

This quantity t has the Student's t-distribution with (n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for μ;[51] similarly, inverting the χ2 distribution of the statistic s^2 will give us the confidence interval for σ^2:[52]

\mu \in \left[\hat\mu - t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}},\; \hat\mu + t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right], \qquad \sigma^2 \in \left[\frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\; \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}}\right]

where t_{k,p} and \chi^2_{k,p} are the pth quantiles of the t- and χ2-distributions respectively. These confidence intervals are of the confidence level 1 − α, meaning that the true values μ and σ^2 fall outside of these intervals with probability (or significance level) α. In practice people usually take α = 5%, resulting in the 95% confidence intervals. Approximate formulas, derived from the asymptotic distributions of \hat\mu and s^2 by replacing the t and χ2 quantiles with standard normal quantiles, become valid for large values of n and are more convenient for manual calculation since the standard normal quantiles z_{\alpha/2} do not depend on n. In particular, the most popular value of α = 5% results in |z_{0.025}| = 1.96.
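For large n the approximate interval for μ can be computed with the standard library alone (for exact small-sample intervals one would use the t quantile instead, e.g. scipy.stats.t.ppf); a sketch, with the helper name mine:

    import math
    import statistics
    from statistics import NormalDist

    def approx_mean_ci(xs: list[float], alpha: float = 0.05) -> tuple[float, float]:
        # Large-n normal-approximation confidence interval for the mean
        n = len(xs)
        m = statistics.fmean(xs)
        s = statistics.stdev(xs)                     # sample standard deviation (n-1)
        z = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # |z_{alpha/2}|, 1.96 for alpha=5%
        half = z * s / math.sqrt(n)
        return m - half, m + half

    print(approx_mean_ci([4.9, 5.1, 5.0, 5.2, 4.8] * 20))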

Normality tests
Normality tests assess the likelihood that the given data set {x1 , ..., xn } comes from a normal distribution. Typically the null
hypothesis H0 is that the observations are distributed normally with unspecified mean μ and variance σ2 , versus the alternative
Ha that the distribution is arbitrary. Many tests (over 40) have been devised for this problem, the more prominent of them are
outlined below:

Diagnostic plots are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to
accept or reject the null hypothesis.

Q-Q plot, also known as normal probability plot or rankit plot—is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it's a plot of points of the form (Φ−1(pk), x(k)), where the plotting points pk are equal to pk = (k − α)/(n + 1 − 2α) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line (a plotting sketch appears after this list).
P-P plot—similar to the Q-Q plot, but used much less frequently. This method consists of plotting the points (Φ(z(k)), pk), where z(k) = (x(k) − \hat\mu)/\hat\sigma are the standardized sorted sample values. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).
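As referenced above, a minimal Q-Q plotting sketch (assuming matplotlib is available; the plotting-point constant α = 3/8 is one common choice, not mandated by the source):

    import random
    from statistics import NormalDist

    import matplotlib.pyplot as plt

    random.seed(5)
    data = sorted(random.gauss(10.0, 2.0) for _ in range(200))
    n, a = len(data), 0.375  # plotting-point constant alpha = 3/8

    # Points (Phi^{-1}(p_k), x_(k)): theoretical quantiles vs ordered sample
    pk = [(k - a) / (n + 1 - 2 * a) for k in range(1, n + 1)]
    theo = [NormalDist().inv_cdf(p) for p in pk]

    plt.scatter(theo, data, s=8)
    plt.xlabel("standard normal quantiles")
    plt.ylabel("ordered sample values")
    plt.title("Normal Q-Q plot")
    plt.show()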

Goodness-of-fit tests:

Moment-based tests:

D'Agostino's K-squared test
Jarque–Bera test
Shapiro–Wilk test: This is based on the fact that the line in the Q-Q plot has the slope of σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.

Tests based on the empirical distribution function:

Anderson–Darling test
Lilliefors test (an adaptation of the Kolmogorov–Smirnov test)

Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

Either the mean, or the variance, or neither, may be considered a fixed quantity.
When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the
precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that
the analysis of most cases is simplified.
Both univariate and multivariate cases need to be considered.
Either conjugate or improper prior distributions may be placed on the unknown variables.
An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is
assumed to be normally distributed, and normal priors are placed on the regression coefficients. The
resulting analysis is similar to the basic cases of independent identically distributed data.

The formulas for the non-linear-regression cases are summarized in the conjugate prior article.

Sum of two quadratics

Scalar form

The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.

a(x-y)^2 + b(x-z)^2 = (a+b)\left(x - \frac{ay + bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2

This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the constant factors attached to some of the terms:

1. The factor \frac{ay + bz}{a+b} has the form of a weighted average of y and z.

2. \frac{ab}{a+b} = \frac{1}{\frac{1}{a} + \frac{1}{b}} = \left(a^{-1} + b^{-1}\right)^{-1}. This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it's necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that \frac{ab}{a+b} is one-half the harmonic mean of a and b.

Vector form

A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size k \times k, then

(y-x)'A(y-x) + (x-z)'B(x-z) = (x-c)'(A+B)(x-c) + (y-z)'\left(A^{-1} + B^{-1}\right)^{-1}(y-z)

where

c = (A+B)^{-1}(Ay + Bz)

Note that the form x′Ax is called a quadratic form and is a scalar:

x'Ax = \sum_{i,j} a_{ij} x_i x_j

In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since x_i x_j = x_j x_i, only the sum a_{ij} + a_{ji} matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then the form x'Ay = y'Ax.

Sum of differences from the mean

Another useful formula is as follows:

\sum_{i=1}^n (x_i - \mu)^2 = \sum_{i=1}^n (x_i - \bar{x})^2 + n(\bar{x} - \mu)^2

where \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.

With known variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x \sim \mathcal{N}(\mu, \sigma^2) with known variance σ^2, the conjugate prior distribution is also normally distributed.

This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ^2. Then if x \sim \mathcal{N}(\mu, 1/\tau) and the prior is \mu \sim \mathcal{N}(\mu_0, 1/\tau_0), we proceed as follows.

First, the likelihood function is (using the formula above for the sum of differences from the mean):

p(\mathbf{X} \mid \mu, \tau) = \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}}\exp\!\left(-\tfrac{1}{2}\tau(x_i - \mu)^2\right) = \left(\frac{\tau}{2\pi}\right)^{n/2}\exp\!\left(-\tfrac{1}{2}\tau\left[\sum_{i=1}^n (x_i - \bar{x})^2 + n(\bar{x} - \mu)^2\right]\right)

Then, multiplying by the prior and completing the square in \mu, we use the formula above for the sum of two quadratics and eliminate all constant factors not involving μ. The result is the kernel of a normal distribution, with mean \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0} and precision n\tau + \tau_0, i.e.

p(\mu \mid \mathbf{X}) \propto \exp\!\left(-\tfrac{1}{2}(n\tau + \tau_0)\left(\mu - \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right)

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

\tau_0' = \tau_0 + n\tau, \qquad \mu_0' = \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}, \qquad \text{where } \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i

That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ^2/n) and mean of values \bar{x},
derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean
through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the
associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations:
In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this
distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not)
greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the
knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the uglier formulas

{\sigma_0^2}' = \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}, \qquad \mu_0' = \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}
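The known-variance update equations are two lines of code; a Python sketch (function and variable names are mine):

    def posterior_known_variance(xs: list[float], tau: float,
                                 mu0: float, tau0: float) -> tuple[float, float]:
        # Posterior (mean, precision) of mu for i.i.d. N(mu, 1/tau) data
        # with known precision tau and prior mu ~ N(mu0, 1/tau0)
        n = len(xs)
        xbar = sum(xs) / n
        tau_post = tau0 + n * tau                           # precisions add
        mu_post = (n * tau * xbar + tau0 * mu0) / tau_post  # precision-weighted average
        return mu_post, tau_post

    # Prior N(0, 1) and five observations with known sigma = 1 (tau = 1):
    print(posterior_known_variance([2.1, 1.9, 2.0, 2.2, 1.8], 1.0, 0.0, 1.0))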
With known mean

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x \sim \mathcal{N}(\mu, \sigma^2) with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ^2 is as follows:

\sigma^2 \sim \text{Scale-inv-}\chi^2(\nu_0, \sigma_0^2)

The likelihood function from above, written in terms of the variance, is:

p(\mathbf{X} \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2}\exp\!\left(-\frac{S}{2\sigma^2}\right)

where

S = \sum_{i=1}^n (x_i - \mu)^2.

Multiplying the prior by the likelihood and keeping only the factors involving σ^2 shows that the posterior is also a scaled inverse chi-squared distribution, where

\nu_0' = \nu_0 + n, \qquad {\sigma_0^2}' = \frac{\nu_0\sigma_0^2 + S}{\nu_0 + n}

Reparameterizing in terms of an inverse gamma distribution, the result is:

\sigma^2 \mid \mathbf{X} \sim \text{Inv-Gamma}\!\left(\frac{\nu_0 + n}{2},\; \frac{\nu_0\sigma_0^2 + S}{2}\right)

With unknown mean and unknown variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x \sim \mathcal{N}(\mu, \sigma^2) with unknown mean μ and unknown variance σ^2, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows:

1. From the analysis of the case with unknown mean but known variance, we see that the update equations
involve sufficient statistics computed from the data consisting of the mean of the data points and the total
variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations
involve sufficient statistics over the data consisting of the number of data points and sum of squared
deviations.
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled.
Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same
semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the
mean and variance, with fixed estimates of the average mean, total variance, number of data points used to
compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of
the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance
prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant:
Shifting the actual mean shifts the generated points by an equal amount, and on average the squared
deviations will remain the same. This is not the case, however, with the total variance of the mean: As the
unknown variance increases, the total variance of the mean will increase proportionately, and we would like
to capture this dependence.
5. This suggests that we create a conditional prior of the mean on the unknown variance, with a
hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another
parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the
variance, making it possible to control the overall variance of the mean relative to the actual variance
parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared
deviations of the pseudo-observations associated with the prior, and another specifying once again the
number of pseudo-observations. Note that each of the priors has a hyperparameter specifying the number of
pseudo-observations, and in each case this controls the relative variance of that prior. These are given as
two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled
separately.
6. This leads immediately to the normal-inverse-gamma distribution, which is the product of the two
distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a
normal distribution over the mean, conditional on the variance) and with the same four parameters just
defined.

The priors are normally defined as follows:

p(\mu \mid \sigma^2; \mu_0, n_0) \sim \mathcal{N}(\mu_0, \sigma^2/n_0)
p(\sigma^2; \nu_0, \sigma_0^2) \sim \text{Scale-inv-}\chi^2(\nu_0, \sigma_0^2)

The update equations can be derived, and look as follows:

\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i, \qquad \mu_0' = \frac{n_0\mu_0 + n\bar{x}}{n_0 + n}, \qquad n_0' = n_0 + n,
\nu_0' = \nu_0 + n, \qquad \nu_0'{\sigma_0^2}' = \nu_0\sigma_0^2 + \sum_{i=1}^n (x_i - \bar{x})^2 + \frac{n_0 n}{n_0 + n}(\bar{x} - \mu_0)^2

The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for \nu_0'{\sigma_0^2}' is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
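A direct transcription of these update equations; a Python sketch whose hyperparameter names mirror the formulas above:

    def posterior_nix(xs: list[float], mu0: float, n0: float,
                      nu0: float, sigma2_0: float) -> tuple[float, float, float, float]:
        # Normal/scaled-inverse-chi-squared conjugate update for unknown mean and variance.
        # Returns the posterior hyperparameters (mu0', n0', nu0', sigma2_0').
        n = len(xs)
        xbar = sum(xs) / n
        ss = sum((x - xbar) ** 2 for x in xs)

        mu0_new = (n0 * mu0 + n * xbar) / (n0 + n)  # weighted average of prior and data mean
        n0_new = n0 + n                             # pseudo-observations accumulate
        nu0_new = nu0 + n
        sigma2_new = (nu0 * sigma2_0 + ss
                      + (n0 * n / (n0 + n)) * (xbar - mu0) ** 2) / nu0_new  # with interaction term
        return mu0_new, n0_new, nu0_new, sigma2_new

    print(posterior_nix([2.1, 1.9, 2.0], mu0=0.0, n0=1.0, nu0=1.0, sigma2_0=1.0))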

[Proof]

The prior distributions are

p(\mu \mid \sigma^2) \sim \mathcal{N}(\mu_0, \sigma^2/n_0), \qquad p(\sigma^2) \sim \text{Scale-inv-}\chi^2(\nu_0, \sigma_0^2)

Therefore, the joint prior is

p(\mu, \sigma^2) = p(\mu \mid \sigma^2)\, p(\sigma^2) \propto (\sigma^2)^{-(\nu_0+3)/2}\exp\!\left(-\frac{\nu_0\sigma_0^2 + n_0(\mu - \mu_0)^2}{2\sigma^2}\right)

The likelihood function from the section above with known variance, written in terms of the variance rather than the precision and using the sum-of-differences formula, is:

p(\mathbf{X} \mid \mu, \sigma^2) \propto (\sigma^2)^{-n/2}\exp\!\left(-\frac{S + n(\bar{x} - \mu)^2}{2\sigma^2}\right)

where S = \sum_{i=1}^n (x_i - \bar{x})^2.

Multiplying prior and likelihood, applying the sum-of-two-quadratics formula to the terms n_0(\mu - \mu_0)^2 + n(\mu - \bar{x})^2, and dropping the hyperparameters as conditioning factors, the posterior factors as a normal distribution over \mu times a scaled inverse chi-squared distribution over \sigma^2, with exactly the updated hyperparameters given above.

In other words, the posterior distribution has the form of a product of a normal distribution over p(μ | σ2 ) times an inverse
gamma distribution over p(σ2 ), with parameters that are the same as the update equations above.

Occurrence and applications


The occurrence of normal distribution in practical problems can be loosely classified into four categories:

1. Exactly normal distributions;


2. Approximately normal laws, for example when such approximation is justified by the central limit theorem;
and
3. Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a
given mean and variance.
4. Regression problems – the normal distribution being found after systematic effects have been modeled
sufficiently well.

Exact normality

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:

Probability density function of a ground state in a quantum harmonic oscillator.
The position of a particle that experiences diffusion. If initially the particle is located at a specific point (that is, its probability distribution is the Dirac delta function), then after time t its location is described by a normal distribution with variance t, which satisfies the diffusion equation \frac{\partial}{\partial t} f(x,t) = \frac{1}{2}\frac{\partial^2}{\partial x^2} f(x,t). If the initial location is given by a certain density function g(x), then the density at time t is the convolution of g and the normal PDF.

[Figure: the ground state of a quantum harmonic oscillator has the Gaussian distribution.]

Approximate normality

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is
produced by many small effects acting additively and independently, its distribution will be close to normal. The normal
approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external
influence that has a considerably larger magnitude than the rest of the effects.

In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and
where infinitely divisible and decomposable distributions are involved, such as
Binomial random variables, associated with binary response variables;
Poisson random variables, associated with rare events;
Thermal radiation has a Bose–Einstein distribution on very short time scales, and a normal distribution on
longer timescales due to the central limit theorem.

Assumed normality

I can only recognize the occurrence of the normal curve – the Laplacian
curve of errors – as a very abnormal phenomenon. It is roughly
approximated to in certain distributions; for this reason, and on account
of its beautiful simplicity, we may, perhaps, use it as a first
approximation, particularly in theoretical investigations.

— Pearson (1901)
There are statistical methods to empirically test that assumption; see the Normality tests section above.

[Figure: histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.]

In biology, the logarithm of various variables tends to have a normal distribution, that is, they tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
Measures of size of living tissue (length, height, skin area, weight);[53]
The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of
growth; presumably the thickness of tree bark also falls under this category;
Certain physiological measurements, such as blood pressure of adult humans.
In finance, in particular the Black–Scholes model, changes in the logarithm of exchange rates, price indices,
and stock market indices are assumed normal (these variables behave like compound interest, not like
simple interest, and so are multiplicative). Some mathematicians such as Benoit Mandelbrot have argued
that log-Levy distributions, which possesses heavy tails would be a more appropriate model, in particular for
the analysis for stock market crashes. The use of the assumption of normal distribution occurring in financial
models has also been criticized by Nassim Nicholas Taleb in his works.
Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal
distribution does not imply that one is assuming the measurement errors are normally distributed, rather
using the normal distribution produces the most conservative predictions possible given only knowledge
about the mean and variance of the errors.[54]
In standardized testing, results can be made to have a normal distribution by either selecting the number and
difficulty of questions (as in the IQ test) or transforming the raw test scores into "output" scores by fitting them
to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal
distribution with a mean of 500 and a standard deviation of 100.
Many scores are derived from the normal distribution, including
percentile ranks ("percentiles" or "quantiles"), normal curve equivalents,
stanines, z-scores, and T-scores. Additionally, some behavioral
statistical procedures assume that scores are normally distributed; for
example, t-tests and ANOVAs. Bell curve grading assigns relative
grades based on a normal distribution of scores.
In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem.[55] The blue picture, made with CumFreq, illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.

[Figure: fitted cumulative normal distribution to October rainfalls; see distribution fitting.]

Computational methods
Generating values from normal distribution

In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a N(μ, σ^2) can be generated as X = μ + σZ, where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.

The most straightforward method is based on the probability integral transform property: if U is distributed uniformly on (0,1), then Φ−1(U) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function Φ−1, which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[56] which is used by R to compute random variates of the normal distribution.

[Figure: the bean machine, a device invented by Francis Galton, can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.]

An easy to program approximate approach, which relies on the central limit theorem, is as follows: generate 12 uniform U(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[57]
The Box–Muller method uses two independent random numbers U and V distributed uniformly on (0,1). Then the two random variables X and Y

X = √(−2 ln U) cos(2πV),  Y = √(−2 ln U) sin(2πV)

will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector (X, Y) the squared norm X² + Y² will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2 ln(U) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V.
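A Python sketch of the Box–Muller transform (using the formulas above; drawing U from (0,1] to avoid log(0) is our safeguard):

    import math
    import random

    def box_muller():
        # Return two independent standard normal deviates.
        u = 1.0 - random.random()           # in (0, 1], avoids log(0)
        v = random.random()
        r = math.sqrt(-2.0 * math.log(u))   # radius: -2 ln U is exponential
        theta = 2.0 * math.pi * v           # angle uniform on the circle
        return r * math.cos(theta), r * math.sin(theta)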

The Marsaglia polar method is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then S = U² + V² is computed. If S is greater than or equal to 1, then the method starts over; otherwise the two quantities

X = U√(−2 ln S / S),  Y = V√(−2 ln S / S)

are returned. Again, X and Y are independent, standard normal random variables.
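A Python sketch of the polar method (the extra S > 0 guard, protecting against log(0), is our addition):

    import math
    import random

    def marsaglia_polar():
        # Return two independent standard normal deviates without
        # evaluating sine or cosine.
        while True:
            u = random.uniform(-1.0, 1.0)
            v = random.uniform(-1.0, 1.0)
            s = u * u + v * v
            if 0.0 < s < 1.0:   # accept only points inside the unit disk
                factor = math.sqrt(-2.0 * math.log(s) / s)
                return u * factor, v * factor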

The Ratio method[58] is a rejection method. The algorithm proceeds as follows:

1. Generate two independent uniform deviates U and V;
2. Compute X = √(8/e) (V − 0.5)/U;
3. Optional: if X² ≤ 5 − 4e^(1/4)U then accept X and terminate the algorithm;
4. Optional: if X² ≥ 4e^(−1.35)/U + 1.4 then reject X and start over from step 1;
5. If X² ≤ −4 ln U then accept X; otherwise start over the algorithm.

The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[59] so that the logarithm is rarely evaluated.
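A Python sketch of the ratio method, including the two optional squeeze tests (constants as in the steps above; the guard against U = 0 is our addition):

    import math
    import random

    SQRT_8_OVER_E = math.sqrt(8.0 / math.e)

    def ratio_of_uniforms():
        # Standard normal deviate via the Kinderman-Monahan ratio method.
        while True:
            u = random.random()
            if u == 0.0:
                continue                    # guard against division by zero
            v = random.random()
            x = SQRT_8_OVER_E * (v - 0.5) / u
            x2 = x * x
            if x2 <= 5.0 - 4.0 * math.exp(0.25) * u:
                return x                    # quick accept (no logarithm)
            if x2 >= 4.0 * math.exp(-1.35) / u + 1.4:
                continue                    # quick reject (no logarithm)
            if x2 <= -4.0 * math.log(u):
                return x                    # exact acceptance test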

The ziggurat algorithm[60] is faster than the Box–Muller transform and still exact. In about 97% of all cases it
uses only two random numbers, one random integer and one random uniform, one multiplication and an if-
test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a
kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be
employed.
Integer arithmetic can be used to sample from the standard normal distribution.[61] This method is exact in
the sense that it satisfies the conditions of ideal approximation;[62] i.e., it is equivalent to sampling a real
number from the standard normal distribution and rounding this to the nearest representable floating point
number.
There is also some investigation[63] into the connection between the fast Hadamard transform and the
normal distribution, since the transform employs just addition and subtraction and by the central limit theorem
random numbers from almost any distribution will be transformed into the normal distribution. In this regard a
series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into
normally distributed data.

Numerical approximations for the normal CDF

The standard normal CDF is widely used in scientific and statistical computing.

The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series,
asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.

Zelen & Severo (1964) give the approximation for Φ(x) for x > 0 with the absolute error |ε(x)| < 7.5·10⁻⁸
(algorithm 26.2.17 (http://www.math.sfu.ca/~cbm/aands/page_932.htm)):

Φ(x) = 1 − φ(x)(b1·t + b2·t² + b3·t³ + b4·t⁴ + b5·t⁵) + ε(x),  where t = 1/(1 + b0·x),

where φ(x) is the standard normal PDF, and b0 = 0.2316419, b1 = 0.319381530, b2 = −0.356563782, b3 =
1.781477937, b4 = −1.821255978, b5 = 1.330274429.
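A direct Python transcription of this formula (a sketch; the reflection for negative x uses the symmetry Φ(−x) = 1 − Φ(x) and is our addition):

    import math

    B = (0.2316419, 0.319381530, -0.356563782,
         1.781477937, -1.821255978, 1.330274429)

    def norm_cdf_zelen_severo(x):
        # Phi(x) with absolute error below 7.5e-8 for x >= 0.
        if x < 0.0:
            return 1.0 - norm_cdf_zelen_severo(-x)
        t = 1.0 / (1.0 + B[0] * x)
        # Horner evaluation of b1*t + b2*t^2 + ... + b5*t^5
        poly = t * (B[1] + t * (B[2] + t * (B[3] + t * (B[4] + t * B[5]))))
        pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        return 1.0 - pdf * poly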
Hart (1968) lists some dozens of approximations – by means of rational functions, with or without
exponentials – for the erfc() function. His algorithms vary in the degree of complexity and the resulting
precision, with maximum absolute precision of 24 digits. An algorithm by West (2009) combines Hart's
algorithm 5666 with a continued fraction approximation in the tail to provide a fast computation algorithm with
a 16-digit precision.
Cody (1969), after noting that the Hart (1968) solution is not suited for erf, gives a solution for both erf and erfc, with a
maximal relative error bound, via rational Chebyshev approximation.
Marsaglia (2004) suggested a simple algorithm[note 2] based on the Taylor series expansion

Φ(x) = 1/2 + φ(x)(x + x³/3 + x⁵/(3·5) + x⁷/(3·5·7) + ...)

for calculating Φ(x) with arbitrary precision. The drawback of this algorithm is its comparatively slow calculation
time (for example, it takes over 300 iterations to calculate the function to 16 digits of precision when x = 10).
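A Python sketch of this series (the stopping tolerance is an illustrative choice):

    import math

    def norm_cdf_taylor(x, tol=1e-17):
        # Phi(x) = 1/2 + phi(x) * (x + x^3/3 + x^5/(3*5) + ...),
        # summed until terms fall below tol.
        term = x
        total = x
        k = 0
        while abs(term) > tol:
            k += 1
            term *= x * x / (2 * k + 1)   # next term of the series
            total += term
        pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        return 0.5 + pdf * total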
The GNU Scientific Library calculates values of the standard normal CDF using Hart's algorithms and
approximations with Chebyshev polynomials.

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering
and operations research, like reliability engineering and inventory analysis. Denoting p = Φ(z), the simplest approximation for
the quantile function is:

z = Φ⁻¹(p) ≈ 5.5556[1 − ((1 − p)/p)^0.1186],  p ≥ 1/2.

This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≤ p ≤ 0.9999, corresponding to 0 ≤ z ≤ 3.719).
For p < 1/2 replace p by 1 − p and change sign. Another approximation, somewhat less accurate, is the single-parameter
approximation:

z ≈ −0.4115{(1 − p)/p + ln[(1 − p)/p] − 1},  p ≥ 1/2.
The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by

L(z) = ∫_z^∞ (u − z) φ(u) du = ∫_z^∞ [1 − Φ(u)] du.

This approximation is particularly accurate for the right far-tail (maximum error of 10⁻³ for z ≥ 1.4). Highly accurate
approximations for the CDF, based on Response Modeling Methodology (RMM; Shore, 2011, 2012), are given in Shore
(2005).
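A Python sketch of the simplest of these quantile approximations (constants as given above; the reflection for p < 1/2 follows the rule just stated):

    def approx_probit_shore(p):
        # z = Phi^{-1}(p), maximum absolute error about 0.026
        # for 0.5 <= p <= 0.9999.
        if p < 0.5:
            return -approx_probit_shore(1.0 - p)
        return 5.5556 * (1.0 - ((1.0 - p) / p) ** 0.1186)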

Some more approximations can be found at Error function#Approximation with elementary functions. In particular, small
relative error on the whole domain, for both the CDF and the quantile function, is achieved via an explicitly
invertible formula by Sergei Winitzki in 2008.

History

Development

Some authors[64][65] attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738[note 3]
published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of
(a + b)ⁿ. De Moivre proved that the middle term in this expansion has the approximate magnitude of 2/√(2πn), and that "If m
or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ,
has to the middle Term, is −2ℓℓ/n."[66] Although this theorem can be interpreted as the first obscure expression for the normal
probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate
rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[67]

In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium"
where among other things he introduces several important statistical concepts, such as the method of least squares, the method
of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, ... to denote the measurements of some
unknown quantity V, and sought the "most probable" estimator of that quantity: the one that maximizes the probability
φ(M − V) · φ(M′ − V) · φ(M′′ − V) · ... of obtaining the observed experimental results. In his notation φΔ is the probability law
of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce
to the well-known answer: the arithmetic mean of the measured values.[note 4] Starting from these principles, Gauss
demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter, is the
normal law of errors:[68]

φΔ = (h/√π) e^(−h²Δ²),

where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the
experiments, Gauss formulates what is now known as the non-linear weighted least squares (NWLS) method.[69]
Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.[note 5] It was Laplace who first posed the problem of aggregating several observations in 1774,[70] although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral ∫ e^(−t²) dt = √π in 1782, providing the normalization constant for the normal distribution.[71] Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[72]

It is of interest to note that in 1809 an Irish mathematician, Adrain, published two derivations of the normal probability law, simultaneously and independently from Gauss.[73] His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe.[74]

(Figure: Carl Friedrich Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares.)

(Figure: Pierre-Simon Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics.)

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical
tool, but may also occur in natural phenomena:[75] "The number of particles whose velocity, resolved in a certain direction,
lies between x and x + dx is

N (1/(α√π)) e^(−x²/α²) dx".
Naming

Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of
errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal
equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual".[76]
However, by the end of the 19th century some authors[note 6] had started using the name normal distribution, where the word
"normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as
typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the
average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain
circumstances."[77] Around the turn of the 20th century Pearson popularized the term normal as a designation for this
distribution.[78]

Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an
international question of priority, has the disadvantage of leading people to believe that all other distributions of
frequency are in one sense or another 'abnormal'.

— Pearson (1920)

Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ as in modern notation. Soon after
this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is
written nowadays:

df = (1/√(2σ²π)) e^(−(x − m)²/(2σ²)) dx
The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use
around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M.
Mood (1950) "Introduction to the theory of statistics".[79]
See also
Bates distribution — similar to the Irwin–Hall distribution, but rescaled back into the 0 to 1 range
Behrens–Fisher problem — the long-standing problem of testing whether two normal samples with different
variances have the same means
Bhattacharyya distance – method used to separate mixtures of normal distributions
Erdős–Kac theorem—on the occurrence of the normal distribution in number theory
Gaussian blur—convolution, which uses the normal distribution as a kernel
Normally distributed and uncorrelated does not imply independent
Reciprocal normal distribution
Ratio normal distribution
Standard normal table
Stein's lemma
Sub-Gaussian distribution
Sum of normally distributed random variables
Tweedie distribution — The normal distribution is a member of the family of Tweedie exponential dispersion
models
Wrapped normal distribution — the Normal distribution applied to a circular domain
Z-test— using the normal distribution

Notes
1. For the proof see Gaussian integral.
2. For example, this algorithm is given in the article Bc programming language.
3. De Moivre first published his findings in 1733, in a pamphlet "Approximatio ad Summam Terminorum Binomii
(a + b)n in Seriem Expansi" that was designated for private circulation only. But it was not until the year 1738
that he made his results publicly available. The original pamphlet was reprinted several times, see for
example Walker (1985).
4. "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been
determined by several direct observations, made under the same circumstances and with equal care, the
arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at
least, so that it is always most safe to adhere to it." — Gauss (1809, section 177)
5. "My custom of terming the curve the Gauss–Laplacian or normal curve saves us from proportioning the merit
of discovery between the two great astronomer mathematicians." quote from Pearson (1905, p. 189)
6. Besides those specifically referenced here, such use is encountered in the works of Peirce, Galton (Galton
(1889, chapter V)) and Lexis (Lexis (1878), Rohrbasser & Véron (2003)) c. 1875.

References

Citations
1. "List of Probability and Statistics Symbols" (https://mathvault.ca/hub/higher-math/math-symbols/probability-st
atistics-symbols/). Math Vault. April 26, 2020. Retrieved August 15, 2020.
2. Weisstein, Eric W. "Normal Distribution" (https://mathworld.wolfram.com/NormalDistribution.html).
mathworld.wolfram.com. Retrieved August 15, 2020.
3. Normal Distribution (http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3), Gale Encyclopedia of
Psychology
4. Casella & Berger (2001, p. 102)
5. Lyon, A. (2014). Why are Normal Distributions Normal? (https://aidanlyon.com/normal_distributions.pdf), The
British Journal for the Philosophy of Science.
6. "Normal Distribution" (https://www.mathsisfun.com/data/standard-normal-distribution.html).
www.mathsisfun.com. Retrieved August 15, 2020.
7. Stigler (1982)
8. Halperin, Hartley & Hoel (1965, item 7)
9. McPherson (1990, p. 110)
10. Bernardo & Smith (2000, p. 121)
11. Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function" (http://cnx.org/content/m11537/1.2/).
Connexions.
12. Barak, Ohad (April 6, 2006). "Q Function and Error Function" (https://web.archive.org/web/20090325160012/
http://www.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF). Tel Aviv University. Archived from the original (http://ww
w.eng.tau.ac.il/~jo/academic/Q.pdf) (PDF) on March 25, 2009.
13. Weisstein, Eric W. "Normal Distribution Function" (https://mathworld.wolfram.com/NormalDistributionFunctio
n.html). MathWorld.
14. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 26, eqn 26.2.12" (http://www.math.
ubc.ca/~cbm/aands/page_932.htm). Handbook of Mathematical Functions with Formulas, Graphs, and
Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth
original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States
Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. ISBN 978-0-486-
61272-0. LCCN 64-60036 (https://lccn.loc.gov/64-60036). MR 0167642 (https://www.ams.org/mathscinet-geti
tem?mr=0167642). LCCN 65-12253 (https://lccn.loc.gov/65012253).
15. "Wolfram|Alpha: Computational Knowledge Engine" (http://www.wolframalpha.com/input/?i=Table%5B{N(Erf
(n/Sqrt(2)),+12),+N(1-Erf(n/Sqrt(2)),+12),+N(1/(1-Erf(n/Sqrt(2))),+12)},+{n,1,6}%5D). Wolframalpha.com.
Retrieved March 3, 2017.
16. "Wolfram|Alpha: Computational Knowledge Engine" (http://www.wolframalpha.com/input/?i=Table%5BSqrt%
282%29*InverseErf%28x%29%2C+{x%2C+N%28{8%2F10%2C+9%2F10%2C+19%2F20%2C+49%2F50%
2C+99%2F100%2C+995%2F1000%2C+998%2F1000}%2C+13%29}%5D). Wolframalpha.com.
17. "Wolfram|Alpha: Computational Knowledge Engine" (http://www.wolframalpha.com/input/?i=Table%5B%7B
N(1-10%5E(-x),9),N(Sqrt(2)*InverseErf(1-10%5E(-x)),13)%7D,%7Bx,3,9%7D%5D). Wolframalpha.com.
Retrieved March 3, 2017.
18. Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory (https://archive.org/details/eleme
ntsinformat00cove). John Wiley and Sons. p. 254 (https://archive.org/details/elementsinformat00cove/page/n
279).
19. Park, Sung Y.; Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity
Model" (https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownl
oad/2009519932327055475115776.pdf) (PDF). Journal of Econometrics. 150 (2): 219–230.
CiteSeerX 10.1.1.511.9750 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.511.9750).
doi:10.1016/j.jeconom.2008.12.014 (https://doi.org/10.1016%2Fj.jeconom.2008.12.014). Archived from the
original (http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownloa
d%5C2009519932327055475115776.pdf) (PDF) on March 7, 2016. Retrieved June 2, 2011.
20. Geary, R. C. (1936). "The distribution of 'Student's' ratio for non-normal samples". Supplement to the
Journal of the Royal Statistical Society 3 (2): 178–184
21. Lukacs, E. (1942). "A characterization of the normal distribution". Annals of Mathematical Statistics 13: 91–93
22. Patel & Read (1996, [2.1.4])
23. Fan (1991, p. 1258)
24. Patel & Read (1996, [2.1.8])
25. Papoulis, Athanasios. Probability, Random Variables and Stochastic Processes (4th ed.). p. 148.
26. Bryc (1995, p. 23)
27. Bryc (1995, p. 24)
28. Cover & Thomas (2006, p. 254)
29. Williams, David (2001). Weighing the odds : a course in probability and statistics (https://archive.org/details/
weighingoddscour00will) (Reprinted. ed.). Cambridge [u.a.]: Cambridge Univ. Press. pp. 197 (https://archive.
org/details/weighingoddscour00will/page/n219)–199. ISBN 978-0-521-00618-7.
30. Smith, José M. Bernardo; Adrian F. M. (2000). Bayesian theory (https://archive.org/details/bayesiantheory00b
ern_963) (Reprint ed.). Chichester [u.a.]: Wiley. pp. 209 (https://archive.org/details/bayesiantheory00bern_96
3/page/n224), 366. ISBN 978-0-471-49464-5.
31. O'Hagan, A. (1994) Kendall's Advanced Theory of statistics, Vol 2B, Bayesian Inference, Edward Arnold.
ISBN 0-340-52922-9 (Section 5.40)
32. Bryc (1995, p. 35)
33. UIUC, Lecture 21. The Multivariate Normal Distribution (http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.p
df), 21.6:"Individually Gaussian Versus Jointly Gaussian".
34. Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", The American
Statistician, volume 36, number 4 November 1982, pages 372–373
35. "Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions" (http://www.allisons.org/l
l/MML/KL/Normal/). Allisons.org. December 5, 2007. Retrieved March 3, 2017.
36. Jordan, Michael I. (February 8, 2010). "Stat260: Bayesian Modeling and Inference: The Conjugate Prior for
the Normal Distribution" (http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf)
(PDF).
37. Amari & Nagaoka (2000)
38. "Normal Approximation to Poisson Distribution" (http://www.stat.ucla.edu/~dinov/courses_students.dir/Applet
s.dir/NormalApprox2PoissonApplet.html). Stat.ucla.edu. Retrieved March 3, 2017.
39. Das, Abhranil (2020). "A method to integrate and classify normal distributions". arXiv:2012.14331 (https://arxi
v.org/abs/2012.14331).
40. Bryc (1995, p. 27)
41. Weisstein, Eric W. "Normal Product Distribution" (http://mathworld.wolfram.com/NormalProductDistribution.ht
ml). MathWorld. wolfram.com.
42. Lukacs, Eugene (1942). "A Characterization of the Normal Distribution" (https://doi.org/10.1214%2Faoms%2
F1177731647). The Annals of Mathematical Statistics. 13 (1): 91–3. doi:10.1214/aoms/1177731647 (https://d
oi.org/10.1214%2Faoms%2F1177731647). ISSN 0003-4851 (https://www.worldcat.org/issn/0003-4851).
JSTOR 2236166 (https://www.jstor.org/stable/2236166).
43. Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". Sankhyā. 13 (4): 359–
62. ISSN 0036-4452 (https://www.worldcat.org/issn/0036-4452). JSTOR 25048183 (https://www.jstor.org/stab
le/25048183).
44. Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 978-0-387-94919-2.
45. Patel & Read (1996, [2.3.6])
46. Galambos & Simonelli (2004, Theorem 3.5)
47. Lukacs & King (1954)
48. Quine, M.P. (1993). "On three characterisations of the normal distribution" (http://www.math.uni.wroc.pl/~pms/
publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263). Probability and Mathematical Statistics. 14
(2): 257–263.
49. John, S (1982). "The three parameter two-piece normal family of distributions and its fitting".
Communications in Statistics - Theory and Methods. 11 (8): 879–885. doi:10.1080/03610928208828279 (htt
ps://doi.org/10.1080%2F03610928208828279).
50. Krishnamoorthy (2006, p. 127)
51. Krishnamoorthy (2006, p. 130)
52. Krishnamoorthy (2006, p. 133)
53. Huxley (1932)
54. Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science (https://books.google.com/books?id=tTN4
HuUNXjgC&pg=PA592). Cambridge University Press. pp. 592–593. ISBN 9780521592710.
55. Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data" (http://w
ww.waterlog.info/pdf/freqtxt.pdf) (PDF). In Ritzema, Henk P. (ed.). Drainage Principles and Applications,
Publication 16 (second revised ed.). Wageningen, The Netherlands: International Institute for Land
Reclamation and Improvement (ILRI). pp. 175–224. ISBN 978-90-70754-33-4.
56. Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". Applied
Statistics. 37 (3): 477–84. doi:10.2307/2347330 (https://doi.org/10.2307%2F2347330). JSTOR 2347330 (http
s://www.jstor.org/stable/2347330).
57. Johnson, Kotz & Balakrishnan (1995, Equation (26.48))
58. Kinderman & Monahan (1977)
59. Leva (1992)
60. Marsaglia & Tsang (2000)
61. Karney (2016)
62. Monahan (1985, section 2)
63. Wallace (1996)
64. Johnson, Kotz & Balakrishnan (1994, p. 85)
65. Le Cam & Lo Yang (2000, p. 74)
66. De Moivre, Abraham (1733), Corollary I – see Walker (1985, p. 77)
67. Stigler (1986, p. 76)
68. Gauss (1809, section 177)
69. Gauss (1809, section 179)
70. Laplace (1774, Problem III)
71. Pearson (1905, p. 189)
72. Stigler (1986, p. 144)
73. Stigler (1978, p. 243)
74. Stigler (1978, p. 244)
75. Maxwell (1860, p. 23)
76. Jaynes, Edwin J.; Probability Theory: The Logic of Science, Ch 7 (http://www-biba.inrialpes.fr/Jaynes/cc07s.
pdf)
77. Peirce, Charles S. (c. 1909 MS), Collected Papers v. 6, paragraph 327
78. Kruskal & Stigler (1997)
79. "Earliest uses... (entry STANDARD NORMAL CURVE)" (http://jeff560.tripod.com/s.html).

Sources
Aldrich, John; Miller, Jeff. "Earliest Uses of Symbols in Probability and Statistics" (http://jeff560.tripod.com/sta
t.html).
Aldrich, John; Miller, Jeff. "Earliest Known Uses of Some of the Words of Mathematics" (http://jeff560.tripod.c
om/mathword.html). In particular, the entries for "bell-shaped and bell curve" (http://jeff560.tripod.com/b.html),
"normal (distribution)" (http://jeff560.tripod.com/n.html), "Gaussian" (http://jeff560.tripod.com/g.html), and
"Error, law of error, theory of errors, etc." (http://jeff560.tripod.com/e.html).
Amari, Shun-ichi; Nagaoka, Hiroshi (2000). Methods of Information Geometry. Oxford University Press.
ISBN 978-0-8218-0531-2.
Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory. Wiley. ISBN 978-0-471-49464-5.
Bryc, Wlodzimierz (1995). The Normal Distribution: Characterizations with Applications. Springer-Verlag.
ISBN 978-0-387-97990-8.
Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury. ISBN 978-0-534-24312-8.
Cody, William J. (1969). "Rational Chebyshev Approximations for the Error Function". Mathematics of
Computation. 23 (107): 631–638. doi:10.1090/S0025-5718-1969-0247736-4 (https://doi.org/10.1090%2FS00
25-5718-1969-0247736-4).
Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons.
de Moivre, Abraham (1738). The Doctrine of Chances. ISBN 978-0-8218-2103-9.
Fan, Jianqing (1991). "On the optimal rates of convergence for nonparametric deconvolution problems" (http
s://doi.org/10.1214%2Faos%2F1176348248). The Annals of Statistics. 19 (3): 1257–1272.
doi:10.1214/aos/1176348248 (https://doi.org/10.1214%2Faos%2F1176348248). JSTOR 2241949 (https://w
ww.jstor.org/stable/2241949).
Galton, Francis (1889). Natural Inheritance (http://galton.org/books/natural-inheritance/pdf/galton-nat-inh-1up
-clean.pdf) (PDF). London, UK: Richard Clay and Sons.
Galambos, Janos; Simonelli, Italo (2004). Products of Random Variables: Applications to Problems of
Physics and to Arithmetical Functions (https://archive.org/details/productsofrandom00gala). Marcel Dekker,
Inc. ISBN 978-0-8247-5402-0.
Gauss, Carolo Friderico (1809). Theoria motvs corporvm coelestivm in sectionibvs conicis Solem
ambientivm (https://archive.org/details/theoriamotuscor00gausgoog) [Theory of the Motion of the Heavenly
Bodies Moving about the Sun in Conic Sections] (in Latin). English translation (https://books.google.com/boo
ks?id=1TIAAAAAQAAJ).
Gould, Stephen Jay (1981). The Mismeasure of Man (first ed.). W. W. Norton. ISBN 978-0-393-01489-1.
Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols
and Notation. COPSS Committee on Symbols and Notation". The American Statistician. 19 (3): 12–14.
doi:10.2307/2681417 (https://doi.org/10.2307%2F2681417). JSTOR 2681417 (https://www.jstor.org/stable/26
81417).
Hart, John F.; et al. (1968). Computer Approximations. New York, NY: John Wiley & Sons, Inc. ISBN 978-0-
88275-642-4.
"Normal Distribution" (https://www.encyclopediaofmath.org/index.php?title=Normal_Distribution),
Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Herrnstein, Richard J.; Murray, Charles (1994). The Bell Curve: Intelligence and Class Structure in American
Life. Free Press. ISBN 978-0-02-914673-6.
Huxley, Julian S. (1932). Problems of Relative Growth. London. ISBN 978-0-486-61114-3.
OCLC 476909537 (https://www.worldcat.org/oclc/476909537).
Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate
Distributions, Volume 1. Wiley. ISBN 978-0-471-58495-7.
Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). Continuous Univariate
Distributions, Volume 2. Wiley. ISBN 978-0-471-58494-0.
Karney, C. F. F. (2016). "Sampling exactly from the normal distribution". ACM Transactions on Mathematical
Software. 42 (1): 3:1–14. arXiv:1303.6257 (https://arxiv.org/abs/1303.6257). doi:10.1145/2710016 (https://doi.
org/10.1145%2F2710016). S2CID 14252035 (https://api.semanticscholar.org/CorpusID:14252035).
Kinderman, Albert J.; Monahan, John F. (1977). "Computer Generation of Random Variables Using the Ratio
of Uniform Deviates". ACM Transactions on Mathematical Software. 3 (3): 257–260.
doi:10.1145/355744.355750 (https://doi.org/10.1145%2F355744.355750). S2CID 12884505 (https://api.sem
anticscholar.org/CorpusID:12884505).
Krishnamoorthy, Kalimuthu (2006). Handbook of Statistical Distributions with Applications. Chapman &
Hall/CRC. ISBN 978-1-58488-635-8.
Kruskal, William H.; Stigler, Stephen M. (1997). Spencer, Bruce D. (ed.). Normative Terminology: 'Normal' in
Statistics and Elsewhere. Statistics and Public Policy. Oxford University Press. ISBN 978-0-19-852341-3.
Laplace, Pierre-Simon de (1774). "Mémoire sur la probabilité des causes par les événements" (http://gallica.
bnf.fr/ark:/12148/bpt6k77596b/f32). Mémoires de l'Académie Royale des Sciences de Paris (Savants
étrangers), Tome 6: 621–656. Translated by Stephen M. Stigler in Statistical Science 1 (3), 1986:
JSTOR 2245476 (https://www.jstor.org/stable/2245476).
Laplace, Pierre-Simon (1812). Théorie analytique des probabilités (https://archive.org/details/thorieanalytiqu
00laplgoog) [Analytical theory of probabilities].
Le Cam, Lucien; Lo Yang, Grace (2000). Asymptotics in Statistics: Some Basic Concepts (second ed.).
Springer. ISBN 978-0-387-95036-5.
Leva, Joseph L. (1992). "A fast normal random number generator" (https://web.archive.org/web/2010071603
5328/http://saluc.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF). ACM Transactions on Mathematical
Software. 18 (4): 449–453. CiteSeerX 10.1.1.544.5806 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=1
0.1.1.544.5806). doi:10.1145/138351.138364 (https://doi.org/10.1145%2F138351.138364).
S2CID 15802663 (https://api.semanticscholar.org/CorpusID:15802663). Archived from the original (http://salu
c.engr.uconn.edu/refs/crypto/rng/leva92afast.pdf) (PDF) on July 16, 2010.
Lexis, Wilhelm (1878). "Sur la durée normale de la vie humaine et sur la théorie de la stabilité des rapports
statistiques". Annales de Démographie Internationale. Paris. II: 447–462.
Lukacs, Eugene; King, Edgar P. (1954). "A Property of Normal Distribution" (https://doi.org/10.1214%2Faom
s%2F1177728796). The Annals of Mathematical Statistics. 25 (2): 389–394. doi:10.1214/aoms/1177728796
(https://doi.org/10.1214%2Faoms%2F1177728796). JSTOR 2236741 (https://www.jstor.org/stable/2236741).
McPherson, Glen (1990). Statistics in Scientific Investigation: Its Basis, Application and Interpretation (https://
archive.org/details/statisticsinscie0000mcph). Springer-Verlag. ISBN 978-0-387-97137-7.
Marsaglia, George; Tsang, Wai Wan (2000). "The Ziggurat Method for Generating Random Variables" (http
s://doi.org/10.18637%2Fjss.v005.i08). Journal of Statistical Software. 5 (8). doi:10.18637/jss.v005.i08 (https://
doi.org/10.18637%2Fjss.v005.i08).
Marsaglia, George (2004). "Evaluating the Normal Distribution" (https://doi.org/10.18637%2Fjss.v011.i04).
Journal of Statistical Software. 11 (4). doi:10.18637/jss.v011.i04 (https://doi.org/10.18637%2Fjss.v011.i04).
Maxwell, James Clerk (1860). "V. Illustrations of the dynamical theory of gases. — Part I: On the motions and
collisions of perfectly elastic spheres". Philosophical Magazine. Series 4. 19 (124): 19–32.
doi:10.1080/14786446008642818 (https://doi.org/10.1080%2F14786446008642818).
Monahan, J. F. (1985). "Accuracy in random number generation" (https://doi.org/10.1090%2FS0025-5718-19
85-0804945-X). Mathematics of Computation. 45 (172): 559–568. doi:10.1090/S0025-5718-1985-0804945-X
(https://doi.org/10.1090%2FS0025-5718-1985-0804945-X).
Patel, Jagdish K.; Read, Campbell B. (1996). Handbook of the Normal Distribution (2nd ed.). CRC Press.
ISBN 978-0-8247-9342-5.
Pearson, Karl (1901). "On Lines and Planes of Closest Fit to Systems of Points in Space" (http://stat.smmu.e
du.cn/history/pearson1901.pdf) (PDF). Philosophical Magazine. 6. 2 (11): 559–572.
doi:10.1080/14786440109462720 (https://doi.org/10.1080%2F14786440109462720).
Pearson, Karl (1905). " 'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A
rejoinder" (https://zenodo.org/record/1449456). Biometrika. 4 (1): 169–212. doi:10.2307/2331536 (https://doi.
org/10.2307%2F2331536). JSTOR 2331536 (https://www.jstor.org/stable/2331536).
Pearson, Karl (1920). "Notes on the History of Correlation" (https://zenodo.org/record/1431597). Biometrika.
13 (1): 25–45. doi:10.1093/biomet/13.1.25 (https://doi.org/10.1093%2Fbiomet%2F13.1.25). JSTOR 2331722
(https://www.jstor.org/stable/2331722).
Rohrbasser, Jean-Marc; Véron, Jacques (2003). "Wilhelm Lexis: The Normal Length of Life as an
Expression of the "Nature of Things" " (http://www.persee.fr/web/revues/home/prescript/article/pop_1634-294
1_2003_num_58_3_18444). Population. 58 (3): 303–322. doi:10.3917/pope.303.0303 (https://doi.org/10.391
7%2Fpope.303.0303).
Shore, H (1982). "Simple Approximations for the Inverse Cumulative Function, the Density Function and the
Loss Integral of the Normal Distribution". Journal of the Royal Statistical Society. Series C (Applied
Statistics). 31 (2): 108–114. doi:10.2307/2347972 (https://doi.org/10.2307%2F2347972). JSTOR 2347972 (ht
tps://www.jstor.org/stable/2347972).
Shore, H (2005). "Accurate RMM-Based Approximations for the CDF of the Normal Distribution".
Communications in Statistics – Theory and Methods. 34 (3): 507–513. doi:10.1081/sta-200052102 (https://do
i.org/10.1081%2Fsta-200052102). S2CID 122148043
(https://api.semanticscholar.org/CorpusID:122148043).
Shore, H (2011). "Response Modeling Methodology". WIREs Comput Stat. 3 (4): 357–372.
doi:10.1002/wics.151 (https://doi.org/10.1002%2Fwics.151).
Shore, H (2012). "Estimating Response Modeling Methodology Models". WIREs Comput Stat. 4 (3): 323–
333. doi:10.1002/wics.1199 (https://doi.org/10.1002%2Fwics.1199).
Stigler, Stephen M. (1978). "Mathematical Statistics in the Early States" (https://doi.org/10.1214%2Faos%2F
1176344123). The Annals of Statistics. 6 (2): 239–265. doi:10.1214/aos/1176344123 (https://doi.org/10.121
4%2Faos%2F1176344123). JSTOR 2958876 (https://www.jstor.org/stable/2958876).
Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". The American Statistician.
36 (2): 137–138. doi:10.2307/2684031 (https://doi.org/10.2307%2F2684031). JSTOR 2684031 (https://www.j
stor.org/stable/2684031).
Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900 (https://arc
hive.org/details/historyofstatist00stig). Harvard University Press. ISBN 978-0-674-40340-6.
Stigler, Stephen M. (1999). Statistics on the Table. Harvard University Press. ISBN 978-0-674-83601-3.
Walker, Helen M. (1985). "De Moivre on the Law of Normal Probability" (http://www.york.ac.uk/depts/maths/hi
ststat/demoivre.pdf) (PDF). In Smith, David Eugene (ed.). A Source Book in Mathematics. Dover. ISBN 978-
0-486-64690-9.
Wallace, C. S. (1996). "Fast pseudo-random generators for normal and exponential variates". ACM
Transactions on Mathematical Software. 22 (1): 119–127. doi:10.1145/225545.225554 (https://doi.org/10.114
5%2F225545.225554). S2CID 18514848 (https://api.semanticscholar.org/CorpusID:18514848).
Weisstein, Eric W. "Normal Distribution" (http://mathworld.wolfram.com/NormalDistribution.html). MathWorld.
West, Graeme (2009). "Better Approximations to Cumulative Normal Functions" (http://www.wilmott.com/pdf
s/090721_west.pdf) (PDF). Wilmott Magazine: 70–76.
Zelen, Marvin; Severo, Norman C. (1964). Probability Functions (chapter 26) (http://www.math.sfu.ca/~cbm/a
ands/page_931.htm). Handbook of mathematical functions with formulas, graphs, and mathematical tables,
by Abramowitz, M.; and Stegun, I. A.: National Bureau of Standards. New York, NY: Dover. ISBN 978-0-486-
61272-0.

External links
"Normal distribution" (https://www.encyclopediaofmath.org/index.php?title=Normal_distribution),
Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Normal distribution calculator (https://www.hackmath.net/en/calculator/normal-distribution), More powerful
calculator (https://keisan.casio.com/exec/system/1180573188)
