Gaussian function
A Gaussian function, often simply referred to as a Gaussian, is a function of the form
f(x) = a \exp\left(-\frac{(x-b)^{2}}{2c^{2}}\right)
for arbitrary real constants a, b and non-zero c. It is named after the
mathematician Carl Friedrich Gauss. The graph of a Gaussian is a
characteristic symmetric "bell curve" shape. The parameter a is the height of
the curve's peak, b is the position of the center of the peak and c (the
standard deviation, sometimes called the Gaussian RMS width) controls the
width of the "bell".
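A minimal Octave sketch of this function (the values a = 3, b = 1 and c = 2 below are arbitrary illustrative choices, not taken from the text) shows how the three parameters set the peak height, center and width:

a = 3; b = 1; c = 2;                      % arbitrary example values
f = @(x) a*exp(-(x - b).^2 / (2*c^2));    % Gaussian with height a, center b, width parameter c
x = linspace(-9, 11, 500);
plot(x, f(x)); xlabel('x'); ylabel('f(x)');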
Properties
Gaussian functions arise by composing the exponential function with a concave quadratic function:
f(x) = \exp\left(\alpha x^{2} + \beta x + \gamma\right)
where:
\alpha = -\frac{1}{2c^{2}}
\beta = \frac{b}{c^{2}}
\gamma = \ln a - \frac{b^{2}}{2c^{2}}
The Gaussian functions are thus those functions whose logarithm is a
concave quadratic function.
The parameter c is related to the full width at half maximum (FWHM) of the
peak according to
\mathrm{FWHM} = 2\sqrt{2\ln 2}\; c \approx 2.35482\, c.
The function may then be expressed in terms of the FWHM, represented by
w:
f(x) = a\, e^{-4(\ln 2)(x-b)^{2}/w^{2}}.
Alternatively, the parameter c can be interpreted by saying that the two
inflection points of the function occur at x = b − c and x = b + c.
The full width at tenth of maximum (FWTM) may also be of interest and is given by
\mathrm{FWTM} = 2\sqrt{2\ln 10}\; c \approx 4.29193\, c.
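Both widths follow directly from c; a quick Octave check with an arbitrary example width c = 2 confirms that the unit-height Gaussian falls to 1/2 and 1/10 at half of the FWHM and FWTM respectively:

c = 2;                                % arbitrary example width
FWHM = 2*sqrt(2*log(2))*c;            % approximately 2.35482*c
FWTM = 2*sqrt(2*log(10))*c;           % approximately 4.29193*c
f = @(x) exp(-x.^2 / (2*c^2));        % unit-height Gaussian centered at 0
printf("f(FWHM/2) = %.4f  (expect 0.5)\n", f(FWHM/2));
printf("f(FWTM/2) = %.4f  (expect 0.1)\n", f(FWTM/2));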
Gaussian functions are analytic, and their limit as x → ∞ is 0 (for the above
case of b = 0).
Gaussian functions are among those functions that are elementary but lack
elementary antiderivatives; the integral of the Gaussian function is the error
function. Nonetheless their improper integrals over the whole real line can be
evaluated exactly, using the Gaussian integral
\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
and one obtains
\int_{-\infty}^{\infty} a\, e^{-(x-b)^{2}/(2c^{2})}\,dx = a\,c\cdot\sqrt{2\pi}.
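As a rough numerical sketch of this result (the values a = 2, b = -1 and c = 0.7 are arbitrary), Octave's quadgk can evaluate the improper integral directly:

a = 2; b = -1; c = 0.7;                    % arbitrary example values
f = @(x) a*exp(-(x - b).^2 / (2*c^2));
numeric  = quadgk(f, -Inf, Inf);           % adaptive quadrature over the real line
analytic = a*c*sqrt(2*pi);                 % closed form a*c*sqrt(2*pi)
printf("numeric = %.10f   analytic = %.10f\n", numeric, analytic);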
The probability density function of a normally distributed random variable with expected value μ and variance σ² is the Gaussian
g(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right).
These Gaussians are plotted in the accompanying figure.
Normalized Gaussian curves with expected value μ and variance σ². The corresponding parameters are a = \frac{1}{\sigma\sqrt{2\pi}}, b = μ and c = σ.
Applying the Poisson summation formula to the Gaussian yields the identity
\sum_{k\in\mathbb{Z}} \exp\left(-\pi\cdot\left(\frac{k}{c}\right)^{2}\right) = c\cdot\sum_{k\in\mathbb{Z}} \exp\left(-\pi\cdot(kc)^{2}\right).
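Both sums converge extremely fast, so truncating them to |k| ≤ 50 already reproduces the identity to machine precision; the following Octave sketch uses an arbitrary value c = 1.7:

c = 1.7;                                % arbitrary example value
k = -50:50;                             % truncated summation range
lhs = sum(exp(-pi*(k/c).^2));
rhs = c*sum(exp(-pi*(k*c).^2));
printf("lhs = %.12f   rhs = %.12f\n", lhs, rhs);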
Integral of a Gaussian function
\int_{-\infty}^{\infty} a\, e^{-(x-b)^{2}/(2c^{2})}\,dx = \sqrt{2}\, a\, \lvert c\rvert\, \sqrt{\pi}.
An alternative form is
\int_{-\infty}^{\infty} k\, e^{-f x^{2} + g x + h}\,dx = \int_{-\infty}^{\infty} k\, e^{-f\left(x - g/(2f)\right)^{2} + g^{2}/(4f) + h}\,dx = k\,\sqrt{\frac{\pi}{f}}\,\exp\left(\frac{g^{2}}{4f} + h\right),
where f must be strictly positive for the integral to converge.
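A small numerical sketch of this formula (the values of k, g and h below are arbitrary, with f > 0 as required):

k = 1.5; f = 0.8; g = 0.3; h = -0.2;             % arbitrary values, f > 0 required
integrand = @(x) k*exp(-f*x.^2 + g*x + h);
numeric  = quadgk(integrand, -Inf, Inf);
analytic = k*sqrt(pi/f)*exp(g^2/(4*f) + h);
printf("numeric = %.10f   analytic = %.10f\n", numeric, analytic);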
Relation to standard Gaussian integral
The integral
\int_{-\infty}^{\infty} a\, e^{-(x-b)^{2}/(2c^{2})}\,dx
for some real constants a, b and c > 0 can be calculated by putting it into the
form of a Gaussian integral. First, the constant a can simply be factored out
of the integral. Next, the variable of integration is changed from x to y = x -
b.
a \int_{-\infty}^{\infty} e^{-y^{2}/(2c^{2})}\,dy,
and then from y to z = y/\sqrt{2c^{2}}:
a \sqrt{2c^{2}} \int_{-\infty}^{\infty} e^{-z^{2}}\,dz.
Then, using the Gaussian integral identity
\int_{-\infty}^{\infty} e^{-z^{2}}\,dz = \sqrt{\pi},
we have
\int_{-\infty}^{\infty} a\, e^{-(x-b)^{2}/(2c^{2})}\,dx = a \sqrt{2\pi c^{2}}.
Two-dimensional Gaussian function
Gaussian curve with a two-dimensional domain
A particular example of a two-dimensional Gaussian function is
f(x,y) = A \exp\left(-\left(\frac{(x-x_{0})^{2}}{2\sigma_{X}^{2}} + \frac{(y-y_{0})^{2}}{2\sigma_{Y}^{2}}\right)\right).
Here the coefficient A is the amplitude, (x₀, y₀) is the center, and σ_X, σ_Y are the x and y spreads of the blob. The figure on the right was created using A = 1, x₀ = 0, y₀ = 0, σ_X = σ_Y = 1.
The volume under the Gaussian function is given by
V = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,dx\,dy = 2\pi A \sigma_{X}\sigma_{Y}.
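Because the function decays rapidly, integrating over a wide finite box approximates the improper integral well; the following Octave sketch (with arbitrary example values A = 1, σ_X = 1, σ_Y = 2) compares the numerical volume with 2πAσ_Xσ_Y:

A = 1; x0 = 0; y0 = 0; sigma_X = 1; sigma_Y = 2;    % arbitrary example values
f = @(x, y) A*exp(-((x - x0).^2/(2*sigma_X^2) + (y - y0).^2/(2*sigma_Y^2)));
numeric  = dblquad(f, -10, 10, -10, 10);            % wide finite box instead of the infinite plane
analytic = 2*pi*A*sigma_X*sigma_Y;
printf("numeric = %.6f   analytic = %.6f\n", numeric, analytic);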
In general, a two-dimensional elliptical Gaussian function is expressed as
f(x,y) = A \exp\left(-\left(a(x-x_{0})^{2} + 2b(x-x_{0})(y-y_{0}) + c(y-y_{0})^{2}\right)\right)
where the matrix
\begin{pmatrix} a & b \\ b & c \end{pmatrix}
is positive-definite.
Using this formulation, the figure on the right can be created using A = 1, (x₀, y₀) = (0, 0), a = c = 1/2, b = 0.
Meaning of parameters for the general equation
For the general form of the equation the coefficient A is the height of the peak and (x₀, y₀) is the center of the blob.
If we set
a = \frac{\cos^{2}\theta}{2\sigma_{X}^{2}} + \frac{\sin^{2}\theta}{2\sigma_{Y}^{2}},
b = -\frac{\sin 2\theta}{4\sigma_{X}^{2}} + \frac{\sin 2\theta}{4\sigma_{Y}^{2}},
c = \frac{\sin^{2}\theta}{2\sigma_{X}^{2}} + \frac{\cos^{2}\theta}{2\sigma_{Y}^{2}},
then the blob is rotated counter-clockwise by the angle θ (for a clockwise rotation, the signs of the b coefficient are inverted).
[Plots of the rotated blob for θ = 0, θ = π/6 and θ = π/3.]
Using the following Octave code, one can easily see the effect of changing the parameters:

A = 1;
x0 = 0; y0 = 0;
sigma_X = 1;
sigma_Y = 2;
[X, Y] = meshgrid(-5:0.1:5, -5:0.1:5);
% rotate the elliptical Gaussian through theta, redrawing on each key press
for theta = 0:pi/100:pi
    a = cos(theta)^2/(2*sigma_X^2) + sin(theta)^2/(2*sigma_Y^2);
    b = -sin(2*theta)/(4*sigma_X^2) + sin(2*theta)/(4*sigma_Y^2);
    c = sin(theta)^2/(2*sigma_X^2) + cos(theta)^2/(2*sigma_Y^2);
    Z = A*exp(-(a*(X - x0).^2 + 2*b*(X - x0).*(Y - y0) + c*(Y - y0).^2));
    surf(X, Y, Z); shading interp; view(-36, 36)
    waitforbuttonpress
end
Raising the content of the exponent to powers P_X and P_Y gives a flat-topped ("rectangular") Gaussian distribution,
f(x,y) = A \exp\left(-\left(\frac{(x-x_{0})^{2}}{2\sigma_{X}^{2}}\right)^{P_{X}} - \left(\frac{(y-y_{0})^{2}}{2\sigma_{Y}^{2}}\right)^{P_{Y}}\right).[5]
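Raising the exponent to larger powers flattens the top of the profile; a brief one-dimensional Octave sketch (arbitrary width σ and powers P) makes this visible:

x = linspace(-4, 4, 400); sigma = 1;              % arbitrary grid and width
P = [1 2 5];                                      % P = 1 is the ordinary Gaussian
Y = zeros(numel(P), numel(x));
for i = 1:numel(P)
    Y(i, :) = exp(-((x.^2)/(2*sigma^2)).^P(i));   % flat-topped profile for larger P
end
plot(x, Y); legend('P = 1', 'P = 2', 'P = 5'); xlabel('x');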
Multi-dimensional Gaussian function
In an n-dimensional space a Gaussian function can be defined as
f(x) = \exp\left(-x^{\mathsf T} C x\right),
where x is a column vector of n coordinates and C is a positive-definite n \times n matrix.
The integral of this Gaussian function over the whole n-dimensional space is
given as
\int_{\mathbb{R}^{n}} \exp\left(-x^{\mathsf T} C x\right)\,dx = \sqrt{\frac{\pi^{n}}{\det C}}.
It can be easily calculated by diagonalizing the matrix C and changing the integration variables to the eigenvectors of C.
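For instance, in two dimensions the formula can be checked against numerical quadrature; the positive-definite matrix C below is an arbitrary example, and the infinite domain is truncated to a wide finite box:

C = [2.0, 0.3; 0.3, 1.0];                % arbitrary symmetric positive-definite matrix
f = @(x, y) exp(-(C(1,1)*x.^2 + 2*C(1,2)*x.*y + C(2,2)*y.^2));   % x'*C*x written out for n = 2
numeric  = dblquad(f, -10, 10, -10, 10);
analytic = sqrt(pi^2 / det(C));          % sqrt(pi^n / det C) with n = 2
printf("numeric = %.8f   analytic = %.8f\n", numeric, analytic);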
A shifted Gaussian function can be defined as
f(x) = \exp\left(-x^{\mathsf T} C x + s^{\mathsf T} x\right),
where s = (s_{1}, \dots, s_{n}) is the shift vector and the matrix C can be assumed to be symmetric, C^{\mathsf T} = C, and positive-definite. The following integrals with this function can be calculated with the same technique:
\int_{\mathbb{R}^{n}} e^{-x^{\mathsf T} C x + v^{\mathsf T} x}\,dx = \sqrt{\frac{\pi^{n}}{\det C}}\,\exp\left(\tfrac{1}{4} v^{\mathsf T} C^{-1} v\right) \equiv M,

\int_{\mathbb{R}^{n}} e^{-x^{\mathsf T} C x + v^{\mathsf T} x}\,(a^{\mathsf T} x)\,dx = (a^{\mathsf T} u)\cdot M, \quad \text{where } u = \tfrac{1}{2} C^{-1} v,

\int_{\mathbb{R}^{n}} e^{-x^{\mathsf T} C x + v^{\mathsf T} x}\,(x^{\mathsf T} D x)\,dx = \left(u^{\mathsf T} D u + \tfrac{1}{2}\operatorname{tr}(D C^{-1})\right)\cdot M,

\int_{\mathbb{R}^{n}} e^{-x^{\mathsf T} C' x + s'^{\mathsf T} x}\left(-\frac{\partial}{\partial x}\,\Lambda\,\frac{\partial}{\partial x}\right) e^{-x^{\mathsf T} C x + s^{\mathsf T} x}\,dx = \left(2\operatorname{tr}(C' \Lambda C B^{-1}) + 4 u^{\mathsf T} C' \Lambda C u - 2 u^{\mathsf T}(C' \Lambda s + C \Lambda s') + s'^{\mathsf T} \Lambda s\right)\cdot M,
\quad \text{where } u = \tfrac{1}{2} B^{-1} v, \; v = s + s', \; B = C + C'.
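A numerical sketch of the first of these identities in two dimensions (the matrix C and the vector v below are arbitrary choices):

C = [1.5, -0.2; -0.2, 0.9];              % arbitrary symmetric positive-definite matrix
v = [0.4; -0.7];                         % arbitrary vector v
f = @(x, y) exp(-(C(1,1)*x.^2 + 2*C(1,2)*x.*y + C(2,2)*y.^2) + v(1)*x + v(2)*y);
numeric = dblquad(f, -12, 12, -12, 12);
M = sqrt(pi^2 / det(C)) * exp(0.25 * (v' * (C \ v)));     % closed-form value of the integral
printf("numeric = %.8f   M = %.8f\n", numeric, M);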
Estimation of parameters
See also: Normal distribution § Estimation of parameters
The most common method for estimating the Gaussian parameters is to take
the logarithm of the data and fit a parabola to the resulting data set.[6][7]
While this provides a simple curve fitting procedure, the resulting algorithm
may be biased by excessively weighting small data values, which can
produce large errors in the profile estimate. One can partially compensate
for this problem through weighted least squares estimation, reducing the
weight of small data values, but this too can be biased by allowing the tail of
the Gaussian to dominate the fit. In order to remove the bias, one can
instead use an iteratively reweighted least squares procedure, in which the
weights are updated at each iteration.[7] It is also possible to perform non-
linear regression directly on the data, without involving the logarithmic data
transformation; for more options, see probability distribution fitting.
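The following Octave sketch illustrates the simplest, unweighted log-parabola approach on noiseless samples; the parameter values are arbitrary, and for noisy data the weighted or iteratively reweighted variants discussed above are preferable:

a_true = 4; b_true = 2; c_true = 1.5;                 % "unknown" parameters to recover
x = linspace(-3, 7, 41);
y = a_true*exp(-(x - b_true).^2 / (2*c_true^2));      % noiseless samples for clarity
p = polyfit(x, log(y), 2);                            % fit ln y = alpha*x^2 + beta*x + gamma
c_est = sqrt(-1/(2*p(1)));                            % alpha = -1/(2c^2)
b_est = p(2)*c_est^2;                                 % beta  = b/c^2
a_est = exp(p(3) + b_est^2/(2*c_est^2));              % gamma = ln a - b^2/(2c^2)
printf("a = %.4f   b = %.4f   c = %.4f\n", a_est, b_est, c_est);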
Parameter precision
Once one has an algorithm for estimating the Gaussian function parameters,
it is also important to know how precise those estimates are. Any least
squares estimation algorithm can provide numerical estimates for the
variance of each parameter (i.e., the variance of the estimated height,
position, and width of the function). One can also use Cramér–Rao bound
theory to obtain an analytical expression for the lower bound on the
parameter variances, given certain assumptions about the data.[8][9]
The noise in the measured profile is either i.i.d. Gaussian, or the noise is
Poisson-distributed.
The spacing between each sampling (i.e. the distance between pixels
measuring the data) is uniform.
The peak is "well-sampled", so that less than 10% of the area or volume
under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside
the measurement region.
The width of the peak is much larger than the distance between sample
locations (i.e. the detector pixels must be at least 5 times smaller than the
Gaussian FWHM).
When these assumptions are satisfied, the following covariance matrices apply for the profile parameters a, b and c under i.i.d. Gaussian noise and under Poisson noise:
K_{\text{Gauss}} = \frac{\sigma^{2}}{\sqrt{\pi}\,\delta_{X} Q^{2}} \begin{pmatrix} \frac{3}{2c} & 0 & \frac{-1}{a} \\ 0 & \frac{2c}{a^{2}} & 0 \\ \frac{-1}{a} & 0 & \frac{2c}{a^{2}} \end{pmatrix}, \qquad K_{\text{Poiss}} = \frac{1}{\sqrt{2\pi}} \begin{pmatrix} \frac{3a}{2c} & 0 & -\frac{1}{2} \\ 0 & \frac{c}{a} & 0 \\ -\frac{1}{2} & 0 & \frac{c}{2a} \end{pmatrix},
where δ_X is the width of the pixels used to sample the function, Q is the quantum efficiency of the detector, and σ indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case,
\operatorname{var}(a) = \frac{3\sigma^{2}}{2\sqrt{\pi}\,\delta_{X} Q^{2} c}, \qquad \operatorname{var}(b) = \frac{2\sigma^{2} c}{\sqrt{\pi}\,\delta_{X} Q^{2} a^{2}}, \qquad \operatorname{var}(c) = \frac{2\sigma^{2} c}{\sqrt{\pi}\,\delta_{X} Q^{2} a^{2}},
and in the Poisson noise case,
\operatorname{var}(a) = \frac{3a}{2\sqrt{2\pi}\, c}, \qquad \operatorname{var}(b) = \frac{c}{\sqrt{2\pi}\, a}, \qquad \operatorname{var}(c) = \frac{c}{2\sqrt{2\pi}\, a}.
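A short Octave sketch evaluating these lower bounds; all of the profile and detector values below (a, c, σ, δ_X, Q) are arbitrary assumptions chosen only for illustration:

a = 100; c = 3;                   % peak height and width (arbitrary)
sigma = 2; deltaX = 1; Q = 0.8;   % noise std, pixel width, quantum efficiency (assumed)
var_a = 3*sigma^2 / (2*sqrt(pi)*deltaX*Q^2*c);        % Gaussian-noise case
var_b = 2*sigma^2*c / (sqrt(pi)*deltaX*Q^2*a^2);
var_c = var_b;
var_a_p = 3*a / (2*sqrt(2*pi)*c);                     % Poisson-noise case
var_b_p = c / (sqrt(2*pi)*a);
var_c_p = c / (2*sqrt(2*pi)*a);
printf("Gaussian noise: var(a)=%.4g var(b)=%.4g var(c)=%.4g\n", var_a, var_b, var_c);
printf("Poisson noise:  var(a)=%.4g var(b)=%.4g var(c)=%.4g\n", var_a_p, var_b_p, var_c_p);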
For the 2D profile parameters giving the amplitude A, position (x₀, y₀), and widths (σ_X, σ_Y) of the profile, the following covariance matrices apply:[9]
K_{\text{Gauss}} = \frac{\sigma^{2}}{\pi\,\delta_{X}\delta_{Y} Q^{2}}
\begin{pmatrix}
\frac{2}{\sigma_{X}\sigma_{Y}} & 0 & 0 & \frac{-1}{A\sigma_{Y}} & \frac{-1}{A\sigma_{X}} \\
0 & \frac{2\sigma_{X}}{A^{2}\sigma_{Y}} & 0 & 0 & 0 \\
0 & 0 & \frac{2\sigma_{Y}}{A^{2}\sigma_{X}} & 0 & 0 \\
\frac{-1}{A\sigma_{Y}} & 0 & 0 & \frac{2\sigma_{X}}{A^{2}\sigma_{Y}} & 0 \\
\frac{-1}{A\sigma_{X}} & 0 & 0 & 0 & \frac{2\sigma_{Y}}{A^{2}\sigma_{X}}
\end{pmatrix},

K_{\text{Poisson}} = \frac{1}{2\pi}
\begin{pmatrix}
\frac{3A}{\sigma_{X}\sigma_{Y}} & 0 & 0 & \frac{-1}{\sigma_{Y}} & \frac{-1}{\sigma_{X}} \\
0 & \frac{\sigma_{X}}{A\sigma_{Y}} & 0 & 0 & 0 \\
0 & 0 & \frac{\sigma_{Y}}{A\sigma_{X}} & 0 & 0 \\
\frac{-1}{\sigma_{Y}} & 0 & 0 & \frac{2\sigma_{X}}{3A\sigma_{Y}} & \frac{1}{3A} \\
\frac{-1}{\sigma_{X}} & 0 & 0 & \frac{1}{3A} & \frac{2\sigma_{Y}}{3A\sigma_{X}}
\end{pmatrix}.
Discrete Gaussian
One may ask for a discrete analog to the Gaussian; this is necessary in
discrete applications, particularly digital signal processing. A simple answer
is to sample the continuous Gaussian, yielding the sampled Gaussian kernel.
However, this discrete function does not have the discrete analogs of the
properties of the continuous function, and can lead to undesired effects, as
described in the article scale space implementation.
An alternative approach is to use the discrete Gaussian kernel
T(n, t) = e^{-t} I_{n}(t),
where I_{n}(t) denotes the modified Bessel function of integer order n.
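The following Octave sketch compares the discrete Gaussian kernel with the sampled continuous Gaussian for an arbitrary scale t = 2, using Octave's besseli for the modified Bessel function:

t = 2;                                       % scale parameter, arbitrary
n = -6:6;                                    % kernel support
discrete = exp(-t) * besseli(abs(n), t);     % discrete Gaussian kernel; I_{-n} = I_n for integer n
sampled  = exp(-n.^2/(2*t)) / sqrt(2*pi*t);  % sampled continuous Gaussian with variance t
printf("sum of discrete kernel = %.6f   sum of sampled kernel = %.6f\n", sum(discrete), sum(sampled));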
See also
Normal distribution
Lorentzian function
Radial basis function kernel