Characteristic Function (Probability Theory)
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. Unlike the moment-generating function, the characteristic function always exists when treated as a function of a real-valued argument. There are relations between the behavior of the characteristic function of a
distribution and properties of the distribution, such as the existence of moments and the existence of a
density function.
Introduction
The characteristic function provides an alternative way to describe a random variable. The characteristic function

φX(t) = E[e^{itX}],

a function of t, completely determines the behavior and properties of the probability distribution of the
random variable X. The characteristic function is similar to the cumulative distribution function

FX(x) = E[1{X ≤ x}]

(where 1{X ≤ x} is the indicator function — it is equal to 1 when X ≤ x, and zero otherwise), which also
completely determines the behavior and properties of the probability distribution of the random variable X.
The two approaches are equivalent in the sense that knowing one of the functions it is always possible to
find the other, yet they provide different insights for understanding the features of the random variable.
Moreover, in particular cases, there can be differences in whether these functions can be represented as
expressions involving simple standard functions.
If a random variable admits a density function, then the characteristic function is its Fourier dual, in the
sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating
function MX(t), then the domain of the characteristic function can be extended to the complex plane, and

φX(−it) = MX(t).[1]

Note however that the characteristic function of a distribution always exists, even when the probability
density function or moment-generating function do not.
The characteristic function approach is particularly useful in analysis of linear combinations of independent
random variables: a classical proof of the Central Limit Theorem uses characteristic functions and Lévy's
continuity theorem. Another important application is to the theory of the decomposability of random
variables.
Definition
For a scalar random variable X the characteristic function is defined as the expected value of e^{itX}, where i
is the imaginary unit, and t ∈ R is the argument of the characteristic function:

φX(t) = E[e^{itX}] = ∫_R e^{itx} dFX(x) = ∫_R e^{itx} fX(x) dx = ∫_0^1 e^{it QX(p)} dp,

where the expression in terms of fX applies when X admits a density. Here FX is the cumulative distribution function of X, fX is the corresponding probability density function,
QX(p) is the corresponding inverse cumulative distribution function, also called the quantile function,[2] and
the integrals are of the Riemann–Stieltjes kind. If a random variable X has a probability density function,
then the characteristic function is its Fourier transform with sign reversal in the complex exponential.[3][4]
This convention for the constants appearing in the definition of the characteristic function differs from the
usual convention for the Fourier transform.[5] For example, some authors[6] define φX(t) = E[e^{−2πitX}], which
is essentially a change of parameter. Other notation may be encountered in the literature: p̂ as the
characteristic function for a probability measure p, or f̂ as the characteristic function corresponding to a
density f.
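As a numerical illustration of this definition, the expectation E[e^{itX}] can be approximated by a sample average. The following minimal Python sketch (the choice of language, distribution, sample size, and seed are illustrative assumptions, not part of the article) compares a Monte Carlo estimate of the characteristic function of a standard normal variable with its known closed form e^{−t²/2}:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)  # sample from X ~ N(0, 1)

def cf_empirical(t, sample):
    """Sample-average approximation of φX(t) = E[exp(itX)]."""
    return np.mean(np.exp(1j * t * sample))

for t in (0.0, 0.5, 1.0, 2.0):
    approx = cf_empirical(t, x)
    exact = np.exp(-t**2 / 2)  # closed-form CF of N(0, 1)
    print(f"t={t}: empirical={approx:.4f}, exact={exact:.4f}")
```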
Generalizations
The notion of characteristic functions generalizes to multivariate random variables and more complicated
random elements. The argument of the characteristic function will always belong to the continuous dual of
the space where the random variable X takes its values. For common cases such definitions are listed
below:

If X is a k-dimensional random vector, then for t ∈ R^k: φX(t) = E[exp(i t^T X)].

If X is a complex random variable, then for t ∈ C: φX(t) = E[exp(i Re(t̄X))], where t̄ is the complex conjugate of t and Re(z) is the real part of the complex number z.[7]

If X is a k-dimensional complex random vector, then for t ∈ C^k: φX(t) = E[exp(i Re(t*X))],[8] where t* is the conjugate transpose of the vector t.

If X(s) is a stochastic process, then for all functions t(s) such that the integral ∫_R t(s)X(s) ds converges for almost all realizations of X: φX(t) = E[exp(i ∫_R t(s)X(s) ds)].[9]
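The k-dimensional definition above can be checked numerically in the same spirit as the scalar case. The following sketch (the mean vector, covariance matrix, sample size, and seed are arbitrary illustrative choices) estimates E[exp(i t^T X)] for a bivariate normal vector and compares it with the standard closed form exp(i t·μ − ½ t^T Σ t):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -0.5])      # mean vector (illustrative)
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])  # covariance matrix (illustrative)

def cf_mvnormal(t, mu, Sigma):
    """Closed-form CF of N(mu, Sigma): exp(i t.mu - 0.5 t^T Sigma t)."""
    return np.exp(1j * t @ mu - 0.5 * t @ Sigma @ t)

def cf_empirical(t, sample):
    """Monte Carlo estimate of E[exp(i t^T X)]."""
    return np.mean(np.exp(1j * (sample @ t)))

sample = rng.multivariate_normal(mu, Sigma, size=200_000)
t = np.array([0.4, -0.7])
print(cf_mvnormal(t, mu, Sigma))  # exact value
print(cf_empirical(t, sample))    # should agree to ~3 decimals
```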
Examples
Distribution: Characteristic function φ(t)

Degenerate δa: e^{ita}
Bernoulli Bern(p): 1 − p + pe^{it}
Binomial B(n, p): (1 − p + pe^{it})^n
Poisson Pois(λ): exp(λ(e^{it} − 1))
Laplace L(μ, b): e^{itμ} / (1 + b²t²)
Logistic Logistic(μ, s): e^{itμ} πst / sinh(πst)
Chi-squared χ²k: (1 − 2it)^{−k/2}
Cauchy C(μ, θ): e^{itμ − θ|t|}
Gamma Γ(k, θ): (1 − itθ)^{−k}
Exponential Exp(λ): (1 − itλ^{−1})^{−1}
Geometric Gf(p) (number of failures): p / (1 − e^{it}(1 − p))
Geometric Gt(p) (number of trials): pe^{it} / (1 − e^{it}(1 − p))
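Several table entries can be verified directly from the definition by Monte Carlo. A minimal sketch (the distributions, parameter values, evaluation point, and sample sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
t = 0.8  # an arbitrary evaluation point

# Poisson(λ): table entry exp(λ(e^{it} − 1))
lam = 3.0
x = rng.poisson(lam, 200_000)
print(np.mean(np.exp(1j * t * x)))               # empirical CF
print(np.exp(lam * (np.exp(1j * t) - 1)))        # table formula

# Laplace(μ, b): table entry e^{itμ} / (1 + b²t²)
mu, b = 1.0, 2.0
y = rng.laplace(mu, b, 200_000)
print(np.mean(np.exp(1j * t * y)))               # empirical CF
print(np.exp(1j * t * mu) / (1 + (b * t) ** 2))  # table formula
```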
Properties
The characteristic function of a real-valued random variable always exists, since it is an
integral of a bounded continuous function over a space whose measure is finite.
A characteristic function is uniformly continuous on the entire space.
It is non-vanishing in a region around zero: φ(0) = 1.
It is bounded: |φ(t)| ≤ 1.
It is Hermitian: φ(−t) is the complex conjugate of φ(t). In particular, the characteristic function of a symmetric (around the
origin) random variable is real-valued and even.
There is a bijection between probability distributions and characteristic functions. That is, for
any two random variables X1, X2, both have the same probability distribution if and only if
φX1 = φX2.
If a random variable X has moments up to k-th order, then the characteristic function φX is k
times continuously differentiable on the entire real line. In this case

E[X^k] = i^{−k} φX^(k)(0).
If a characteristic function φX has a k-th derivative at zero, then the random variable X has all
moments up to k if k is even, but only up to k – 1 if k is odd.[11]
If X1, ..., Xn are independent random variables, and a1, ..., an are some constants, then the
characteristic function of the linear combination of the Xi 's is

φ_{a1X1+⋯+anXn}(t) = φX1(a1t) ⋯ φXn(ant).

One specific case is the sum of two independent random variables X1 and X2, in which case
one has

φX1+X2(t) = φX1(t) φX2(t)

(see the numerical sketch after this list).
Let X and Y be two random variables with characteristic functions φX and φY. X and Y are
independent if and only if the joint characteristic function factors: φX,Y(s, t) = φX(s) φY(t) for all (s, t).
The tail behavior of the characteristic function determines the smoothness of the
corresponding density function.
Let the random variable Y = aX + b be a linear transformation of a random variable X.
The characteristic function of Y is φY(t) = e^{itb} φX(at). For random vectors X and Y = AX + B
(where A is a constant matrix and B a constant vector), we have

φY(t) = e^{i t^T B} φX(A^T t).[12]
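The linear-combination property above is easy to check empirically. A minimal sketch (the two distributions, constants, evaluation point, and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
x1 = rng.exponential(scale=1.0, size=n)  # X1 ~ Exp(1)
x2 = rng.standard_normal(n)              # X2 ~ N(0, 1), independent of X1
a1, a2 = 2.0, -0.5

def ecf(t, sample):
    """Empirical characteristic function at a single point t."""
    return np.mean(np.exp(1j * t * sample))

t = 0.7
lhs = ecf(t, a1 * x1 + a2 * x2)          # CF of the linear combination
rhs = ecf(a1 * t, x1) * ecf(a2 * t, x2)  # product φX1(a1 t) φX2(a2 t)
print(lhs, rhs)                          # agree up to Monte Carlo error
```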
Continuity
The bijection stated above between probability distributions and characteristic functions is sequentially
continuous. That is, whenever a sequence of distribution functions Fj(x) converges (weakly) to some
distribution F(x), the corresponding sequence of characteristic functions φj(t) will also converge, and the
limit φ(t) will correspond to the characteristic function of law F. More formally, this is stated as
Lévy’s continuity theorem: A sequence Xj of n-variate random variables converges in
distribution to random variable X if and only if the sequence φXj converges pointwise to a
function φ which is continuous at the origin, where φ is the characteristic function of X.[13]
This theorem can be used to prove the law of large numbers and the central limit theorem.
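For instance, the central limit theorem can be watched at the level of characteristic functions: for i.i.d. variables with mean 0 and variance 1, the characteristic function of the standardized sum is (φX(t/√n))^n, which converges pointwise to e^{−t²/2}, the characteristic function of N(0, 1). A minimal sketch (the choice of a uniform distribution scaled to unit variance, and the values of t and n, are illustrative):

```python
import numpy as np

a = np.sqrt(3.0)  # Uniform(−√3, √3) has mean 0 and variance 1

def cf_uniform(t):
    """CF of Uniform(−a, a): sin(at)/(at), extended by 1 at t = 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t == 0, 1.0, np.sin(a * t) / (a * t))

t = 1.5
for n in (1, 10, 100, 10_000):
    # CF of the standardized sum (X1 + ... + Xn)/√n of iid copies
    print(n, cf_uniform(t / np.sqrt(n)) ** n)
print("limit", np.exp(-t**2 / 2))  # CF of N(0, 1)
```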
Inversion formula
If the characteristic function φX of a random variable X is integrable, then X has a probability density function. The density function is the Radon–Nikodym derivative of the distribution μX with respect to the Lebesgue
measure λ:

fX(x) = dμX/dλ(x) = (1/2π) ∫_R e^{−itx} φX(t) dt.
Theorem (Lévy).[note 1] If φX is the characteristic function of distribution function FX, and two points a < b are
such that {x | a < x < b} is a continuity set of μX (in the univariate case this condition is equivalent to
continuity of FX at points a and b), then:

If X is scalar:

FX(b) − FX(a) = (1/2π) lim_{T→∞} ∫_{−T}^{T} [(e^{−ita} − e^{−itb}) / (it)] φX(t) dt.

This formula can be re-stated in a form more convenient for numerical computation as[14]

[F(x + h) − F(x − h)] / (2h) = (1/2π) ∫_R [sin(ht) / (ht)] e^{−itx} φX(t) dt.

For a random variable bounded from below one can obtain F(b) by taking a such that F(a) = 0.
Otherwise, if a random variable is not bounded from below, the limit for a → −∞
gives F(b), but is numerically impractical.[14]
If X is a vector random variable:

μX({a < x < b}) = (1/(2π)^k) lim_{T→∞} ∫_{[−T,T]^k} ∏_{j=1}^{k} [(e^{−it_j a_j} − e^{−it_j b_j}) / (it_j)] φX(t) dt.

Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of FX),
then:

If X is scalar:

P[X = a] = lim_{T→∞} (1/2T) ∫_{−T}^{T} e^{−ita} φX(t) dt.

For a univariate random variable X, if x is a continuity point of FX, then

FX(x) = 1/2 − (1/π) ∫_0^∞ Im[e^{−itx} φX(t)] / t dt.

This last integral may fail to be Lebesgue-integrable; for example, when X is the discrete random variable that is
always 0, it becomes the Dirichlet integral.
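When φX is integrable, the density formula above lends itself to direct numerical evaluation by truncating and discretizing the inversion integral. A minimal sketch (the truncation length, grid spacing, and choice of the standard normal are ad-hoc assumptions) recovering a density from its characteristic function:

```python
import numpy as np

def cf_normal(t):
    """CF of N(0, 1)."""
    return np.exp(-t**2 / 2)

# Inversion: f(x) = (1/2π) ∫ e^{−itx} φ(t) dt, truncated and discretized.
t = np.linspace(-40, 40, 80_001)
dt = t[1] - t[0]

def density(x):
    return np.real(np.sum(np.exp(-1j * t * x) * cf_normal(t)) * dt) / (2 * np.pi)

for x in (0.0, 1.0, 2.0):
    exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    print(f"x={x}: inverted={density(x):.6f}, exact={exact:.6f}")
```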
Criteria for characteristic functions

It is well known that any non-decreasing càdlàg function F with limits F(−∞) = 0, F(+∞) = 1 corresponds
to a cumulative distribution function of some random variable. There is also interest in finding similar
simple criteria for when a given function φ could be the characteristic function of some random variable.
The central result here is Bochner’s theorem, although its usefulness is limited because the main condition
of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as
Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult. Pólya’s theorem, on the
other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic
functions which satisfy this condition are called Pólya-type.[18]
Mathias’ theorem. A real-valued, even, continuous, absolutely integrable function φ, with φ(0) = 1, is a
characteristic function if and only if

(−1)^n ∫_R φ(pt) e^{−t²/2} H2n(t) dt ≥ 0

for n = 0, 1, 2, ..., and all p > 0. Here H2n denotes the Hermite polynomial of degree 2n.
Pólya’s theorem. If φ is a real-valued, even, continuous function which satisfies the conditions

φ(0) = 1,
φ is convex for t > 0,
φ(∞) = 0,

then φ(t) is the characteristic function of an absolutely continuous distribution symmetric about 0.
Uses

Characteristic functions are particularly useful for dealing with linear functions of independent random
variables. For example, if X1, X2, ..., Xn is a sequence of independent (and not necessarily identically
distributed) random variables, and

Sn = a1X1 + a2X2 + ⋯ + anXn,

where the ai are constants, then the characteristic function for Sn is given by

φSn(t) = φX1(a1t) φX2(a2t) ⋯ φXn(ant).

In particular, φX+Y(t) = φX(t)φY(t). To see this, write out the definition of characteristic function:

φX+Y(t) = E[e^{it(X+Y)}] = E[e^{itX} e^{itY}] = E[e^{itX}] E[e^{itY}] = φX(t) φY(t).

The independence of X and Y is required to establish the equality of the third and fourth expressions.
Another special case of interest for identically distributed random variables is when ai = 1/n and then Sn is
the sample mean. In this case, writing X̄ for the mean,

φX̄(t) = (φX(t/n))^n.
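The sample-mean identity can be verified by simulation. A minimal sketch (the Poisson rate, number of observations, evaluation point, and replication count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)
lam, n = 2.5, 8  # Poisson rate and number of observations (illustrative)

def cf_poisson(t):
    """CF of Pois(λ): exp(λ(e^{it} − 1))."""
    return np.exp(lam * (np.exp(1j * t) - 1))

t = 0.6
# Empirical CF of the sample mean of n iid Poisson draws
means = rng.poisson(lam, size=(200_000, n)).mean(axis=1)
print(np.mean(np.exp(1j * t * means)))  # empirical φ_X̄(t)
print(cf_poisson(t / n) ** n)           # formula (φX(t/n))^n
```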
Moments
Characteristic functions can also be used to find moments of a random variable. Provided that the nth
moment exists, the characteristic function can be differentiated n times:

E[X^n] = i^{−n} φX^(n)(0) = i^{−n} [d^n φX(t) / dt^n]_{t=0}.

This can be formally written using the derivatives of the Dirac delta function:

fX(x) = Σ_{n=0}^∞ ((−1)^n / n!) δ^(n)(x) E[X^n],

which allows a formal solution to the moment problem. For example, suppose X has a standard Cauchy
distribution. Then φX(t) = e^{−|t|}. This is not differentiable at t = 0, showing that the Cauchy distribution has
no expectation. Also, the sample mean X̄ of n independent observations has
characteristic function φX̄(t) = (e^{−|t|/n})^n = e^{−|t|}, using the result from the previous section. This is the
characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution
as the population itself.

A similar calculation shows E[X²] = −φX″(0), and is easier to carry out than applying the definition of
expectation and using integration by parts to evaluate E[X²].
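The moment formula E[X^n] = i^{−n} φX^(n)(0) can also be exercised numerically by approximating the derivatives at zero with finite differences. A minimal sketch for an Exp(λ) variable, whose first two moments are 1/λ and 2/λ² (the rate and step size are illustrative choices):

```python
import numpy as np

lam = 2.0  # rate of an Exp(λ) variable (illustrative value)

def phi(t):
    """Characteristic function of Exp(λ): (1 − it/λ)^(−1)."""
    return 1.0 / (1.0 - 1j * t / lam)

h = 1e-4
# Central finite differences for φ'(0) and φ''(0).
d1 = (phi(h) - phi(-h)) / (2 * h)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2

print((d1 / 1j).real)  # E[X]  = φ'(0)/i  ≈ 1/λ  = 0.5
print((-d2).real)      # E[X²] = −φ''(0) ≈ 2/λ² = 0.5
```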
The logarithm of a characteristic function is a cumulant generating function, which is useful for finding
cumulants; some instead define the cumulant generating function as the logarithm of the moment-generating
function, and call the logarithm of the characteristic function the second cumulant generating function.
Data analysis
Characteristic functions can be used as part of procedures for fitting probability distributions to samples of
data. Cases where this provides a practicable option compared to other possibilities include fitting the stable
distribution, since closed-form expressions for the density are not available, which makes implementation of
maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical
characteristic function to the empirical characteristic function, calculated from the data. Paulson et al.
(1975)[19] and Heathcote (1977)[20] provide some theoretical background for such an estimation procedure.
In addition, Yu (2004)[21] describes applications of empirical characteristic functions to fit time series
models where likelihood procedures are impractical. Empirical characteristic functions have also been used
by Ansari et al. (2020)[22] and Li et al. (2020)[23] for training generative adversarial networks.
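A minimal sketch of such a fitting procedure, in the spirit of (but not identical to) the integrated-squared-error approach of the references above: the parameters of a normal model are chosen to minimize the squared distance between the model and empirical characteristic functions on a fixed grid. The grid, optimizer, parametrization, and synthetic data are ad-hoc assumptions, and NumPy and SciPy are assumed available:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=5000)  # synthetic sample

t_grid = np.linspace(-2, 2, 41)  # grid of CF arguments (a tuning choice)

def ecf(t, x):
    """Empirical characteristic function: mean of exp(itx) over the sample."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def cf_normal(t, mu, sigma):
    """CF of N(μ, σ²): exp(itμ − σ²t²/2)."""
    return np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

emp = ecf(t_grid, data)

def objective(params):
    mu, log_sigma = params  # σ parametrized on the log scale to stay positive
    model = cf_normal(t_grid, mu, np.exp(log_sigma))
    return np.sum(np.abs(emp - model) ** 2)  # squared error on the grid

res = minimize(objective, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))  # estimates of μ and σ
```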
Example
The gamma distribution with scale parameter θ and a shape parameter k has the characteristic function

(1 − θit)^{−k}.

Now suppose that we have

X ~ Γ(k1, θ) and Y ~ Γ(k2, θ),

with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The
characteristic functions are

φX(t) = (1 − θit)^{−k1},  φY(t) = (1 − θit)^{−k2},

which by independence and the basic properties of characteristic functions leads to

φX+Y(t) = φX(t) φY(t) = (1 − θit)^{−k1} (1 − θit)^{−k2} = (1 − θit)^{−(k1+k2)}.

This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k1 + k2,
and we therefore conclude

X + Y ~ Γ(k1 + k2, θ).

The result can be expanded to n independent gamma distributed random variables with the same scale
parameter and we get

Xi ~ Γ(ki, θ) for i = 1, ..., n  ⟹  X1 + ⋯ + Xn ~ Γ(k1 + ⋯ + kn, θ).
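The conclusion can be checked by comparing the empirical characteristic function of X + Y with the characteristic function of Γ(k1 + k2, θ). A minimal sketch (the shape and scale values, evaluation point, and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 2.0        # common scale parameter
k1, k2 = 1.5, 3.0  # shape parameters of X and Y

x = rng.gamma(shape=k1, scale=theta, size=300_000)
y = rng.gamma(shape=k2, scale=theta, size=300_000)

t = 0.4
print(np.mean(np.exp(1j * t * (x + y))))     # empirical CF of X + Y
print((1 - 1j * t * theta) ** (-(k1 + k2)))  # CF of Γ(k1 + k2, θ)
```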
Related concepts
Related concepts include the moment-generating function and the probability-generating function. The
characteristic function exists for all probability distributions. This is not the case for the moment-generating
function.
The characteristic function is closely related to the Fourier transform: the characteristic function of a
probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x)
(according to the usual convention; see continuous Fourier transform – other conventions):

φX(t) = E[e^{itX}] = ∫_R e^{itx} p(x) dx = conj(P(t)),

where P(t) = ∫_R e^{−itx} p(x) dx denotes the continuous Fourier transform of the probability density function p(x),
and conj(·) denotes complex conjugation. Likewise, p(x)
may be recovered from φX(t) through the inverse Fourier transform:

p(x) = (1/2π) ∫_R e^{itx} P(t) dt = (1/2π) ∫_R e^{−itx} φX(t) dt.
Indeed, even when the random variable does not have a density, the characteristic function may be seen as
the Fourier transform of the measure corresponding to the random variable.
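The conjugation relation and the closed form can both be checked with simple quadrature. A minimal sketch for a Laplace density (the location, scale, evaluation point, and integration grid are arbitrary choices; the closed form is the table entry above):

```python
import numpy as np

mu, b = 0.5, 1.0                  # Laplace location and scale (illustrative)
x = np.linspace(-30, 30, 60_001)  # grid; the density is negligible outside
dx = x[1] - x[0]
p = np.exp(-np.abs(x - mu) / b) / (2 * b)  # Laplace density p(x)

t = 1.2
phi = np.sum(np.exp(1j * t * x) * p) * dx        # φX(t) = ∫ e^{itx} p(x) dx
P   = np.sum(np.exp(-1j * t * x) * p) * dx       # Fourier transform P(t)
print(phi, np.conj(P))                           # equal: φX(t) = conj(P(t))
print(np.exp(1j * t * mu) / (1 + (b * t) ** 2))  # closed form from the table
```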
Another related concept is the representation of probability distributions as elements of a reproducing kernel
Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization
of the characteristic function under specific choices of the kernel function.
See also
Subindependence, a weaker condition than independence that is defined in terms of
characteristic functions.
Cumulant, a coefficient in the series expansion of the cumulant generating function, the
logarithm of the characteristic function.
Notes
1. named after the French mathematician Paul Lévy
References
Citations
1. Lukacs (1970), p. 196.
2. Shaw, W. T.; McCabe, J. (2009). "Monte Carlo sampling given a Characteristic Function:
Quantile Mechanics in Momentum Space". arXiv:0903.1592 (https://arxiv.org/abs/0903.159
2) [q-fin.CP (https://arxiv.org/archive/q-fin.CP)].
3. Statistical and Adaptive Signal Processing (2005)
4. Billingsley (1995).
5. Pinsky (2002).
6. Bochner (1955).
7. Andersen et al. (1995), Definition 1.10.
8. Andersen et al. (1995), Definition 1.20.
9. Sobczyk (2001), p. 20.
10. Kotz & Nadarajah (2004), p. 37, using 1 as the number of degrees of freedom to recover the
Cauchy distribution.
11. Lukacs (1970), Corollary 1 to Theorem 2.3.1.
12. "Joint characteristic function" (https://www.statlect.com/fundamentals-of-probability/joint-char
acteristic-function). www.statlect.com. Retrieved 7 April 2018.
13. Cuppens (1975), Theorem 2.6.9.
14. Shephard (1991a).
15. Cuppens (1975), Theorem 2.3.2.
16. Wendel (1961).
17. Shephard (1991b).
18. Lukacs (1970), p. 84.
19. Paulson, Holcomb & Leitch (1975).
20. Heathcote (1977).
21. Yu (2004).
22. Ansari, Scarlett & Soh (2020).
23. Li et al. (2020).
24. Lukacs (1970), Chapter 7.
Sources
Andersen, H.H.; Højbjerre, M.; Sørensen, D.; Eriksen, P.S. (1995). Linear and graphical
models for the multivariate complex normal distribution. Lecture Notes in Statistics 101. New
York: Springer-Verlag. ISBN 978-0-387-94521-7.
Billingsley, Patrick (1995). Probability and measure (3rd ed.). John Wiley & Sons. ISBN 978-
0-471-00710-4.
Bisgaard, T. M.; Sasvári, Z. (2000). Characteristic functions and moment sequences. Nova
Science.
Bochner, Salomon (1955). Harmonic analysis and the theory of probability. University of
California Press.
Cuppens, R. (1975). Decomposition of multivariate probabilities (https://archive.org/details/d
ecompositionofm00cupp). Academic Press. ISBN 9780121994501.
Heathcote, C.R. (1977). "The integrated squared error estimation of parameters". Biometrika.
64 (2): 255–264. doi:10.1093/biomet/64.2.255 (https://doi.org/10.1093%2Fbiomet%2F64.2.2
55).
Lukacs, E. (1970). Characteristic functions. London: Griffin.
Kotz, Samuel; Nadarajah, Saralees (2004). Multivariate T Distributions and Their
Applications. Cambridge University Press.
Manolakis, Dimitris G.; Ingle, Vinay K.; Kogon, Stephen M. (2005). Statistical and Adaptive
Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array
Processing (https://books.google.com/books?id=3RQfAQAAIAAJ). Artech House. ISBN 978-
1-58053-610-3.
Oberhettinger, Fritz (1973). Fourier transforms of distributions and their inverses; a collection
of tables. New York: Academic Press. ISBN 9780125236508.
Paulson, A.S.; Holcomb, E.W.; Leitch, R.A. (1975). "The estimation of the parameters of the
stable laws". Biometrika. 62 (1): 163–170. doi:10.1093/biomet/62.1.163 (https://doi.org/10.10
93%2Fbiomet%2F62.1.163).
Pinsky, Mark (2002). Introduction to Fourier analysis and wavelets. Brooks/Cole. ISBN 978-
0-534-37660-4.
Sobczyk, Kazimierz (2001). Stochastic differential equations. Kluwer Academic Publishers.
ISBN 978-1-4020-0345-5.
Wendel, J.G. (1961). "The non-absolute convergence of Gil-Pelaez' inversion integral" (http
s://doi.org/10.1214%2Faoms%2F1177705164). The Annals of Mathematical Statistics. 32
(1): 338–339. doi:10.1214/aoms/1177705164 (https://doi.org/10.1214%2Faoms%2F117770
5164).
Yu, J. (2004). "Empirical characteristic function estimation and its applications".
Econometric Reviews. 23 (2): 93–123. doi:10.1081/ETC-120039605 (https://doi.org/10.10
81%2FETC-120039605). S2CID 9076760 (https://api.semanticscholar.org/CorpusID:907676
0).
Shephard, N. G. (1991a). "From characteristic function to distribution function: A simple
framework for the theory" (https://ora.ox.ac.uk/objects/uuid:a4c3ad11-74fe-458c-8d58-6f745
11a476c). Econometric Theory. 7 (4): 519–529. doi:10.1017/s0266466600004746 (https://do
i.org/10.1017%2Fs0266466600004746). S2CID 14668369 (https://api.semanticscholar.org/
CorpusID:14668369).
Shephard, N. G. (1991b). "Numerical integration rules for multivariate inversions" (https://ora.
ox.ac.uk/objects/uuid:da00666a-4790-4666-a54c-b81fc6fc49cb). J. Statist. Comput. Simul.
39 (1–2): 37–46. doi:10.1080/00949659108811337 (https://doi.org/10.1080%2F0094965910
8811337).
Ansari, Abdul Fatir; Scarlett, Jonathan; Soh, Harold (2020). "A Characteristic Function
Approach to Deep Implicit Generative Modeling" (https://openaccess.thecvf.com/content_CV
PR_2020/html/Ansari_A_Characteristic_Function_Approach_to_Deep_Implicit_Generative
_Modeling_CVPR_2020_paper.html). Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR), 2020. pp. 7478–7487.
Li, Shengxi; Yu, Zeyang; Xiang, Min; Mandic, Danilo (2020). "Reciprocal Adversarial
Learning via Characteristic Functions" (https://proceedings.neurips.cc/paper/2020/hash/021f
6dd88a11ca489936ae770e4634ad-Abstract.html). Advances in Neural Information
Processing Systems 33 (NeurIPS 2020).
External links
"Characteristic function" (https://www.encyclopediaofmath.org/index.php?title=Characteristic
_function), Encyclopedia of Mathematics, EMS Press, 2001 [1994]