Probability density function

In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be close to that sample.
In a more precise sense, the PDF is used to specify the probability of the random variable
falling within a particular range of values, as opposed to taking on any one value. This
probability is given by the integral of this variable's PDF over that range—that is, it is given
by the area under the density function but above the horizontal axis and between the lowest
and greatest values of the range. The probability density function is nonnegative everywhere,
and its integral over the entire space is equal to 1.
The terms "probability distribution function"[4] and "probability function"[5] have also
sometimes been used to denote the probability density function. However, this use is not
standard among probabilists and statisticians. In other sources, "probability distribution
function" may be used when the probability distribution is defined as a function over general
sets of values, or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion.[6] In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.

[Figure: Boxplot and probability density function of a normal distribution N(0, σ²).]
[Figure: Geometric visualisation of the mode, median and mean of an arbitrary probability density function.[1]]
Example
Suppose bacteria of a certain species typically live 4 to 6 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria
live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium
dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours
and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5
hours and 5.0001 hours should be about 0.0002, and so on.
In this example, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour⁻¹). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour⁻¹. This quantity 2 hour⁻¹ is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour⁻¹) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour⁻¹) × (1 nanosecond) ≈ 6 × 10⁻¹³ (using the unit conversion 3.6 × 10¹² nanoseconds = 1 hour).

There is a probability density function f with f(5 hours) = 2 hour⁻¹. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
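The following minimal Python sketch checks these statements numerically. It assumes, purely for illustration, a normal lifetime model with mean 5 hours and standard deviation 0.2 hours, which has density roughly 2 hour⁻¹ near its peak; the text does not specify an exact density.

    from scipy import stats
    from scipy.integrate import quad

    lifetime = stats.norm(loc=5, scale=0.2)  # hypothetical lifetime model, in hours

    print(lifetime.pdf(5))              # density at 5 hours: ~1.995, i.e. ~2 hour^-1

    p, _ = quad(lifetime.pdf, 5, 5.01)  # integral of the density over [5, 5.01]
    print(p)                            # ~0.0199, close to (2 hour^-1) x (0.01 hour)

    p0, _ = quad(lifetime.pdf, 5, 5)    # zero-width window
    print(p0)                           # 0.0: no chance of dying at exactly 5 hours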
Absolutely continuous univariate distributions

A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density f_X, where f_X is a non-negative Lebesgue-integrable function, if:

    \Pr[a \le X \le b] = \int_a^b f_X(x) \, dx.

Hence, if F_X is the cumulative distribution function of X, then:

    F_X(x) = \int_{-\infty}^{x} f_X(u) \, du,

and (if f_X is continuous at x)

    f_X(x) = \frac{d}{dx} F_X(x).

Intuitively, one can think of f_X(x) dx as being the probability of X falling within the infinitesimal interval [x, x + dx].
Formal definition
(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.)
A random variable X with values in a measurable space (𝒳, 𝒜) (usually ℝⁿ with the Borel sets as measurable subsets) has as probability distribution the measure X∗P on (𝒳, 𝒜): the density of X with respect to a reference measure μ on (𝒳, 𝒜) is the Radon–Nikodym derivative:

    f = \frac{d X_* P}{d\mu}.

That is, f is any measurable function with the property that

    \Pr[X \in A] = \int_{X^{-1} A} dP = \int_A f \, d\mu

for any measurable set A ∈ 𝒜.
Discussion
In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the
density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).
It is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous
random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere.
Further details
Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval [0, 1/2] has
probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as

    \operatorname{E}[X] = \int_{-\infty}^{\infty} x f(x) \, dx.
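As a quick numeric illustration of this formula, the sketch below integrates x f(x) with SciPy, using an exponential density chosen only because its mean, 1/λ, is known in closed form:

    import numpy as np
    from scipy.integrate import quad

    lam = 2.0
    f = lambda x: lam * np.exp(-lam * x)   # density of an Exponential(rate=2) variable

    mean, _ = quad(lambda x: x * f(x), 0, np.inf)
    print(mean)                            # ~0.5, matching the known mean 1/lam

    total, _ = quad(f, 0, np.inf)
    print(total)                           # ~1.0: the density integrates to one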
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even
though it has no discrete component, i.e., does not assign positive probability to any individual point.
A distribution has a density function if and only if its cumulative distribution function F(x) is absolutely continuous. In this case, F is almost everywhere differentiable, and its derivative can be used as probability density:

    \frac{d}{dx} F(x) = f(x).
If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets.
Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.
In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the
probability density function is generally used as the definition of the probability density function. This alternate definition is the following:
If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or:

    \Pr(t < X < t + dt) = f(t) \, dt.
Link between discrete and continuous distributions

It is possible to represent certain discrete random variables, as well as random variables involving both a continuous and a discrete part, with a generalized probability density function using the Dirac delta function. More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is:

    f(t) = \sum_{i=1}^{n} p_i \, \delta(t - x_i),

where x_1, …, x_n are the discrete values accessible to the variable and p_1, …, p_n are the probabilities associated with these values.
This substantially unifies the treatment of discrete and continuous probability distributions. For instance, the above expression allows for determining
statistical characteristics of such a discrete variable (such as its mean, its variance and its kurtosis), starting from the formulas given for a continuous
distribution of the probability.
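For instance, the following short sketch (with made-up values x_i and probabilities p_i) shows how the delta-function representation turns the continuous formulas for the mean and variance into probability-weighted sums:

    import numpy as np

    xs = np.array([1.0, 2.0, 5.0])       # discrete values x_i (made up)
    ps = np.array([0.2, 0.5, 0.3])       # probabilities p_i, summing to 1

    # With f(t) = sum_i p_i * delta(t - x_i), the continuous formulas
    # collapse to weighted sums over the support:
    mean = np.sum(ps * xs)               # integral of t f(t) dt
    var = np.sum(ps * (xs - mean) ** 2)  # integral of (t - mean)^2 f(t) dt
    print(mean, var)                     # 2.7 and 2.41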
Families of densities
It is common for probability density functions (and probability mass functions) to
be parametrized—that is, to be characterized by unspecified parameters.
For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by μ and σ² respectively, giving the family of densities

    f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2}.
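A minimal sketch of this family, evaluating several members at the same point of the common domain; the helper name normal_pdf and the parameter values are arbitrary choices for illustration:

    import numpy as np

    def normal_pdf(x, mu, sigma2):
        # density of N(mu, sigma2) evaluated at x
        return np.exp(-0.5 * (x - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

    # Three members of the family; each (mu, sigma2) pair picks out one
    # distribution, while the domain (all real x) is shared by every member.
    for mu, sigma2 in [(0.0, 1.0), (0.0, 4.0), (1.5, 1.0)]:
        print(mu, sigma2, normal_pdf(0.0, mu, sigma2))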
It is important to keep in mind the difference between the domain of a family of densities and the parameters of the family. Different values of the
parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this
sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single
distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and
terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor
that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the
kernel of the distribution.
Since the parameters are constants, reparametrizing a density in terms of different parameters, to give a characterization of a different random variable in
the family, means simply substituting the new parameter values into the formula in place of the old ones. Changing the domain of a probability density,
however, is trickier and requires more work: see the section below on change of variables.
Densities associated with multiple variables

For continuous random variables X_1, …, X_n, it is also possible to define a probability density function associated to the set as a whole, often called the joint probability density function.

Marginal densities

For i = 1, 2, …, n, let f_{X_i}(x_i) be the probability density function associated with variable X_i alone. This is called the marginal density function, and can be deduced from the probability density f associated with the random variables X_1, …, X_n by integrating over all values of the other n − 1 variables:

    f_{X_i}(x_i) = \int f(x_1, \ldots, x_n) \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_n.
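A numeric sketch of marginalization, using the textbook-style joint density f(x, y) = x + y on the unit square, chosen here only because its marginal, x + 1/2, is easy to verify by hand:

    from scipy.integrate import quad

    f_joint = lambda x, y: x + y   # joint density on [0, 1] x [0, 1]

    def f_X(x):
        # marginal density of X: integrate the joint density over all y
        val, _ = quad(lambda y: f_joint(x, y), 0, 1)
        return val

    print(f_X(0.3))                # ~0.8, matching the closed form x + 1/2
    print(quad(f_X, 0, 1)[0])      # ~1.0: the marginal is itself a density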
Independence
Continuous random variables X_1, …, X_n admitting a joint density are all independent from each other if and only if

    f_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = f_{X_1}(x_1) \cdots f_{X_n}(x_n).
Corollary
If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable

    f_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n)

(where each f_i is not necessarily a density), then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by

    f_{X_i}(x_i) = \frac{f_i(x_i)}{\int f_i(x) \, dx}.
Example
This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two
variables. Let us call \vec{R} a 2-dimensional random vector of coordinates (X, Y): the probability to obtain \vec{R} in the quarter plane of positive x and y is

    \Pr(X > 0, Y > 0) = \int_0^{\infty} \int_0^{\infty} f_{X,Y}(x, y) \, dx \, dy.
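A short numeric check of this quarter-plane probability, assuming for illustration (the text leaves the joint density unspecified) that X and Y are independent standard normals, so the answer should be 1/4:

    import numpy as np
    from scipy import stats
    from scipy.integrate import dblquad

    # joint density of two independent standard normals; dblquad integrates
    # its first argument over y and its second over x
    f = lambda y, x: stats.norm.pdf(x) * stats.norm.pdf(y)

    p, _ = dblquad(f, 0, np.inf, 0, np.inf)
    print(p)                       # ~0.25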
Function of random variables and change of variables in the probability density function
If the probability density function of a random variable (or vector) X is given as f_X(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape f_{g(X)} = f_Y using a known (for instance, uniform) random number generator.
It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density f_{g(X)} of the new random variable Y = g(X). However, rather than computing

    \operatorname{E}\big(g(X)\big) = \int_{-\infty}^{\infty} y f_{g(X)}(y) \, dy,

one may find instead

    \operatorname{E}\big(g(X)\big) = \int_{-\infty}^{\infty} g(x) f_X(x) \, dx.
The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be
a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
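A quick numeric illustration of this point, taking X standard normal and g(x) = x² (so that E(g(X)) = 1 is known), computed directly from f_X without ever deriving f_{g(X)}:

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    g = lambda x: x ** 2

    # E(g(X)) = integral of g(x) f_X(x) dx; no density for Y = g(X) is needed:
    e_lotus, _ = quad(lambda x: g(x) * stats.norm.pdf(x), -np.inf, np.inf)

    # Monte Carlo cross-check from samples of X directly:
    rng = np.random.default_rng(0)
    e_mc = g(rng.standard_normal(1_000_000)).mean()

    print(e_lotus, e_mc)           # both ~1.0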
Scalar to scalar
Let g: ℝ → ℝ be a monotonic function; then the resulting density function is

    f_Y(y) = f_X\big(g^{-1}(y)\big) \left| \frac{d}{dy} g^{-1}(y) \right|,

where g^{-1} denotes the inverse function.

This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,

    \left| f_Y(y) \, dy \right| = \left| f_X(x) \, dx \right|,

or

    f_Y(y) = \left| \frac{dx}{dy} \right| f_X(x) = \left| \frac{d}{dy} g^{-1}(y) \right| f_X\big(g^{-1}(y)\big).

For functions that are not monotonic, the probability density function for y is

    f_Y(y) = \sum_{k=1}^{n(y)} \left| \frac{d}{dy} g_k^{-1}(y) \right| f_X\big(g_k^{-1}(y)\big),

where n(y) is the number of solutions in x for the equation g(x) = y, and g_k^{-1}(y) are these solutions.
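As a sketch of the non-monotonic case, the transformation Y = X² of a standard normal X has the two solutions x = ±√y; summing the two branches should reproduce the chi-squared density with one degree of freedom (the helper name f_Y is ours):

    import numpy as np
    from scipy import stats

    def f_Y(y):
        # two solutions x = +sqrt(y) and x = -sqrt(y), each contributing
        # |d/dy g_k^{-1}(y)| = 1 / (2 sqrt(y))
        r = np.sqrt(y)
        return (stats.norm.pdf(r) + stats.norm.pdf(-r)) / (2 * r)

    y = 1.7
    print(f_Y(y))                   # matches...
    print(stats.chi2.pdf(y, df=1))  # ...the chi-squared density with 1 dof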
Vector to vector
Suppose x is an n-dimensional random variable with joint density f. If y = H(x), where H is a bijective, differentiable function, then y has density g:

    g(\mathbf{y}) = f\big(H^{-1}(\mathbf{y})\big) \left| \det \frac{d H^{-1}(\mathbf{y})}{d \mathbf{y}} \right|,

with the differential regarded as the Jacobian of the inverse of H(⋅), evaluated at y.[7]

For example, in the 2-dimensional case x = (x_1, x_2), suppose the transform H is given as y_1 = H_1(x_1, x_2), y_2 = H_2(x_1, x_2) with inverses x_1 = H_1^{-1}(y_1, y_2), x_2 = H_2^{-1}(y_1, y_2). The joint distribution for y = (y_1, y_2) has density[8]

    f_{Y_1, Y_2}(y_1, y_2) = f_{X_1, X_2}\big(H_1^{-1}(y_1, y_2),\, H_2^{-1}(y_1, y_2)\big) \left| \frac{\partial H_1^{-1}}{\partial y_1} \frac{\partial H_2^{-1}}{\partial y_2} - \frac{\partial H_1^{-1}}{\partial y_2} \frac{\partial H_2^{-1}}{\partial y_1} \right|.
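A minimal numeric sketch of this formula for a linear map y = Ax of a standard bivariate normal (the matrix A is an arbitrary example); the result is compared against the known density of a normal with covariance AAᵀ:

    import numpy as np
    from scipy import stats

    A = np.array([[2.0, 0.5],
                  [0.0, 1.0]])
    A_inv = np.linalg.inv(A)

    f = stats.multivariate_normal(mean=[0, 0], cov=np.eye(2)).pdf  # density of x

    def g(y):
        # g(y) = f(H^{-1}(y)) |det dH^{-1}/dy|, with H^{-1}(y) = A^{-1} y here
        return f(A_inv @ y) * abs(np.linalg.det(A_inv))

    y = np.array([0.7, -0.3])
    print(g(y))
    print(stats.multivariate_normal(mean=[0, 0], cov=A @ A.T).pdf(y))  # same value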
Vector to scalar
Let V: ℝⁿ → ℝ be a differentiable function and X be a random vector taking values in ℝⁿ, f_X be the probability density function of X, and δ(⋅) be the Dirac delta function. It is possible to use the formulas above to determine f_Y, the probability density function of Y = V(X), which will be given by

    f_Y(y) = \int_{\mathbb{R}^n} f_X(\mathbf{x}) \, \delta\big(y - V(\mathbf{x})\big) \, d\mathbf{x}.
Proof:
Let Z be a collapsed random variable with probability density function p_Z(z) = δ(z) (i.e. a constant equal to zero). Let the random vector \tilde{X} and the transform H be defined as

    H(Z, X) = \big[Z + V(X),\, X\big] = \big[Y, \tilde{X}\big].

It is clear that H is a bijective mapping, and the Jacobian of H^{-1} is given by:

    \frac{d H^{-1}(y, \tilde{\mathbf{x}})}{dy \, d\tilde{\mathbf{x}}} = \begin{bmatrix} 1 & -\frac{d V(\tilde{\mathbf{x}})}{d \tilde{\mathbf{x}}} \\ \mathbf{0}_{n \times 1} & \mathbf{I}_{n \times n} \end{bmatrix},

which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that

    f_{Y, X}(y, x) = f_X(\mathbf{x}) \, \delta\big(y - V(\mathbf{x})\big),

which if marginalized over x leads to the desired probability density function.
Sums of independent random variables

The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions:

    f_{U+V}(x) = \int_{-\infty}^{\infty} f_U(y) f_V(x - y) \, dy = \big(f_U * f_V\big)(x).

It is possible to generalize the previous relation to a sum of N independent random variables U_1, …, U_N:

    f_{U_1 + \cdots + U_N}(x) = \big(f_{U_1} * \cdots * f_{U_N}\big)(x).

This can be derived from a two-way change of variables involving Y = U + V and Z = V, similarly to the example below for the quotient of independent random variables.
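A numeric sketch of the convolution formula, using two independent Uniform(0, 1) variables, whose sum is known to have the triangular density f(x) = x on [0, 1] and 2 − x on [1, 2]:

    import numpy as np

    dx = 0.001
    x = np.arange(0, 1, dx)
    f_U = np.ones_like(x)               # Uniform(0, 1) density sampled on a grid
    f_V = np.ones_like(x)

    f_sum = np.convolve(f_U, f_V) * dx  # discretized convolution (f_U * f_V)(x)
    grid = np.arange(len(f_sum)) * dx

    print(f_sum[np.searchsorted(grid, 0.5)])  # ~0.5, triangle density at x = 0.5
    print(f_sum[np.searchsorted(grid, 1.5)])  # ~0.5, triangle density at x = 1.5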
Products and quotients of independent random variables

Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and quotient Y = U/V can be computed by a change of variables.

Example: Quotient distribution

To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation:

    Y = U/V, \quad Z = V.

Then, the joint density p(y, z) can be computed by a change of variables from U, V to Y, Z, and Y can be derived by marginalizing out Z from the joint density. The inverse transformation is

    U = YZ, \quad V = Z.

The absolute value of the Jacobian matrix determinant of this transformation is:

    \left| \det \begin{bmatrix} \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \end{bmatrix} \right| = \left| \det \begin{bmatrix} z & y \\ 0 & 1 \end{bmatrix} \right| = |z|.

Thus:

    p(y, z) = p_U(yz) \, p_V(z) \, |z|,

and the distribution of Y can be computed by marginalizing out Z:

    p(y) = \int_{-\infty}^{\infty} p_U(yz) \, p_V(z) \, |z| \, dz.
This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because Z can be mapped
directly back to V, and for a given V the quotient U/V is monotonic. This is similarly the case for the sum U + V, difference U − V and product UV.
Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.
Example: Quotient of two standard normals

Given two standard normal variables U and V, the quotient can be computed as follows. First, the variables have the following density functions:

    p(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}, \quad p(v) = \frac{1}{\sqrt{2\pi}} e^{-v^2/2}.

We transform as described above:

    Y = U/V, \quad Z = V.

This leads to:

    p(y) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-(yz)^2/2} \, \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \, |z| \, dz = \frac{1}{\pi (y^2 + 1)}.

This is the density of a standard Cauchy distribution.
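The result can be checked numerically; the sketch below evaluates the marginalization integral with SciPy (the helper name p_Y is ours) and compares it to the standard Cauchy density:

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    def p_Y(y):
        # p(y) = integral of p_U(y z) p_V(z) |z| dz; by symmetry in z,
        # this is twice the integral over the half-line z > 0
        integrand = lambda z: stats.norm.pdf(y * z) * stats.norm.pdf(z) * z
        val, _ = quad(integrand, 0, np.inf)
        return 2 * val

    y = 0.8
    print(p_Y(y))                   # ~0.194
    print(stats.cauchy.pdf(y))      # 1 / (pi (1 + 0.64)), also ~0.194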
See also
Density estimation
Kernel density estimation
Likelihood function
List of probability distributions
Probability mass function
Secondary measure
Uses as position probability density:
Atomic orbital
Home range
References
1. "AP Statistics Review - Density Curves and the Normal Distributions" (https://web.archive.org/web/20150402183703/http://apstatsrevi
ew.tumblr.com/post/50058615236/density-curves-and-the-normal-distributions?action=purge). Archived from the original (https://apstat
sreview.tumblr.com/post/50058615236/density-curves-and-the-normal-distributions?action=purge) on 2 April 2015. Retrieved
16 March 2015.
2. Grinstead, Charles M.; Snell, J. Laurie (2009). "Conditional Probability - Discrete Conditional" (https://www.dartmouth.edu/~chance/te
aching_aids/books_articles/probability_book/Chapter4.pdf) (PDF). Grinstead & Snell's Introduction to Probability. Orange Grove
Texts. ISBN 161610046X. Retrieved 2019-07-25.
3. "probability - Is a uniformly random number over the real line a valid distribution?" (https://stats.stackexchange.com/questions/541479/
is-a-uniformly-random-number-over-the-real-line-a-valid-distribution). Cross Validated. Retrieved 2021-10-06.
4. Probability distribution function (https://planetmath.org/?method=png&from=objects&id=2884&op=getobj) PlanetMath Archived (http
s://web.archive.org/web/20110807023948/http://planetmath.org/?method=png&from=objects&id=2884&op=getobj) 2011-08-07 at the
Wayback Machine
5. Probability Function (http://mathworld.wolfram.com/ProbabilityFunction.html) at MathWorld
6. Ord, J.K. (1972) Families of Frequency Distributions, Griffin. ISBN 0-85264-137-0 (for example, Table 5.1 and Example 5.4)
7. Devore, Jay L.; Berk, Kenneth N. (2007). Modern Mathematical Statistics with Applications (https://books.google.com/books?id=3X7Q
ca6CcfkC&pg=PA263). Cengage. p. 263. ISBN 0-534-40473-1.
8. David, Stirzaker (2007-01-01). Elementary Probability. Cambridge University Press. ISBN 0521534283. OCLC 851313783 (https://ww
w.worldcat.org/oclc/851313783).
Further reading
Billingsley, Patrick (1979). Probability and Measure. New York, Toronto, London: John Wiley and Sons. ISBN 0-471-00710-2.
Casella, George; Berger, Roger L. (2002). Statistical Inference (Second ed.). Thomson Learning. pp. 34–37. ISBN 0-534-24312-6.
Stirzaker, David (2003). Elementary Probability (https://archive.org/details/elementaryprobab0000stir). ISBN 0-521-42028-8. Chapters
7 to 9 are about continuous variables.
External links
Ushakov, N.G. (2001) [1994], "Density of a probability distribution" (https://www.encyclopediaofmath.org/index.php?title=Density_of_a_probability_distribution), Encyclopedia of Mathematics, EMS Press.
Weisstein, Eric W. "Probability density function" (https://mathworld.wolfram.com/ProbabilityDensityFunction.html). MathWorld.