The Probability Density Function describes how probability is distributed over the values of a continuous random variable. Although it receives less attention than discrete probability in most students' academic journey, it is very useful in many areas of real life, such as predicting rainfall, financial modelling of the stock market, and measuring income disparity in the social sciences.
This article explores the topic of the Probability Density Function in detail including its definition, condition for existence of this function, as well as various examples.
What is a Probability Density Function (PDF)?
The Probability Density Function is used for calculating probabilities of continuous random variables. When the cumulative distribution function (CDF) is differentiated, we get the probability density function (PDF). Both functions are used to represent the probability distribution of a continuous random variable.
The probability density function is defined over a specific range. By differentiating the CDF we get the PDF, and by integrating the probability density function we get the cumulative distribution function.
Probability Density Function Definition
Probability density function is the function that represents the density of probability for a continuous random variable over the specified ranges.
Probability Density Function is abbreviated as PDF and for a continuous random variable X, Probability Density Function is denoted by f(x).
The PDF of a random variable X is obtained by differentiating the CDF (Cumulative Distribution Function) of X. The probability density function must be non-negative for all possible values of the variable, and the total area between the density curve and the x-axis must equal 1.
Necessary Conditions for PDF
Let X be a continuous random variable with probability density function f(x). For f(x) to be a valid probability density function, it must satisfy the following conditions:
- f(x) ≥ 0, ∀ x ∈ R
- f(x) should be piecewise continuous.
- \int\limits^{\infty}_{-\infty}f(x)dx = 1
So, the PDF should be a non-negative, piecewise continuous function whose integral over the whole real line evaluates to 1.
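As a quick sanity check, these conditions can be verified numerically. The sketch below (plain Python, with an assumed density f(x) = 2x on [0, 1]) approximates the total area with a midpoint Riemann sum:

```python
# Numerical check of the two PDF conditions for a hypothetical density
# f(x) = 2x on [0, 1] (0 elsewhere). A midpoint Riemann sum approximates
# the integral; this is a sketch, not a rigorous verification.

def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(func, a, b, n=100_000):
    """Midpoint Riemann sum of func over [a, b]."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

non_negative = all(f(x / 1000) >= 0 for x in range(0, 1001))
total = integrate(f, 0, 1)

print(non_negative)           # True
print(abs(total - 1) < 1e-6)  # True: the density integrates to 1
```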
Example of a Probability Density Function
Let X be a continuous random variable with probability density function f(x) = (x - 1)/8 for 1 < x ≤ 5 (and 0 otherwise). We have to find P(1 < X ≤ 2).
To find the probability P(1 < X ≤ 2), we integrate the PDF between the limits 1 and 2: P(1 < X ≤ 2) = \int\limits^{2}_{1}\frac{x-1}{8}dx = \frac{1}{16}
Probability Density Function Formula
Let Y be a continuous random variable and F(y) be the cumulative distribution function (CDF) of Y. Then, the probability density function (PDF) f(y) of Y is obtained by differentiating the CDF of Y.
f(y) = \bold{\frac{d}{dy}[F(y)]}
= F'(y)
If we want to calculate the probability for X lying between the interval a and b, then we can use the following formula:
P (a ≤ X ≤ b) = F(b) - F(a) = \bold{\int\limits^{b}_{a}f(x)dx}
- If we differentiate CDF, we get the PDF of the random variable.
f(y) = \bold{\frac{d}{dy}[F(y)]}
- If we integrate PDF, we get the CDF of the random variable.
F(y) = \bold{\int\limits^{y}_{-\infty}f(t) dt}
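These two relationships can be illustrated numerically. The sketch below assumes the density f(x) = 2x on [0, 1], whose CDF is F(y) = y² there; a finite difference of F recovers f, and a Riemann sum of f recovers F:

```python
# Sketch of the PDF-CDF relationship for the assumed density f(x) = 2x
# on [0, 1]: F(y) = y^2 there, so differentiating F numerically should
# recover f, and integrating f should recover F.

def F(y):                      # CDF: F(y) = y^2 on [0, 1]
    return min(max(y, 0.0), 1.0) ** 2

def pdf_from_cdf(F, y, h=1e-6):
    """Central finite difference: f(y) ~ (F(y+h) - F(y-h)) / 2h."""
    return (F(y + h) - F(y - h)) / (2 * h)

def cdf_from_pdf(f, y, n=10_000):
    """Midpoint Riemann sum of f from 0 (lower end of support) to y."""
    h = y / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

f = lambda x: 2 * x
print(round(pdf_from_cdf(F, 0.5), 4))  # 1.0  (f(0.5) = 2 * 0.5)
print(round(cdf_from_pdf(f, 0.5), 4))  # 0.25 (F(0.5) = 0.5^2)
```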
What Does a Probability Density Function (PDF) Tell Us?
A Probability Density Function (PDF) is a function that describes the likelihood of a continuous random variable taking on a particular value. Unlike discrete random variables, where probabilities are assigned to specific outcomes, continuous random variables can take on any value within a range. Probability Density Function (PDF) tells us
- Relative Likelihood
- Distribution Shape
- Expected Value and Variance, etc.
How to Find Probability from Probability Density Function
To find a probability from the probability density function, follow these steps:
Step 1: Check whether the given PDF is valid using the necessary conditions.
Step 2: If the PDF is valid, write the required probability as an integral of the PDF with the appropriate limits.
Step 3: Split the integral according to the pieces of the given PDF.
Step 4: Evaluate each integral.
Step 5: The resulting value is the required probability.
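The steps above can be sketched in plain Python for a hypothetical density f(x) = x/2 on [0, 2]:

```python
# The steps above, sketched for an assumed valid density f(x) = x/2 on
# [0, 2]: check validity, then integrate over the required interval.

def f(x):
    return x / 2 if 0 <= x <= 2 else 0.0

def integrate(func, a, b, n=100_000):
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

# Step 1: validity (non-negative everywhere, total area 1)
assert integrate(f, 0, 2) > 0.999999
# Steps 2-5: P(1 <= X <= 2) = integral of f from 1 to 2
p = integrate(f, 1, 2)
print(round(p, 4))  # 0.75
```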
Graph for Probability Density Function
Let X be a continuous random variable with probability density function f(x). The probability that X falls in an interval is given by the area under the PDF curve over that interval. For a normal distribution the PDF is a bell curve, though in general the shape depends on the distribution. The following graph gives the probability for X lying between the interval a and b.

Probability Density Function Properties
Let f(x) be the probability density function for continuous random variable x. Following are some probability density function properties:
- Probability density function is always non-negative for all values of x.
f(x) ≥ 0, ∀ x ∈ R
- Total area under probability density curve is equal to 1.
\bold{\int\limits^{\infty}_{-\infty}f(x)dx =1}
- For a continuous random variable X, the endpoint values of an interval can be included or excluded without changing the probability, i.e., for X lying between a and b:
P (a ≤ X ≤ b) = P (a ≤ X < b) = P (a < X ≤ b) = P (a < X < b)
- The probability that a continuous random variable takes any single exact value is zero.
P(X = a) = P (a ≤ X ≤ a) = \bold{\int\limits^{a}_{a}f(x)dx}
= 0
- A probability density function is defined over the entire domain of the variable; it takes the value zero outside the range of values the variable can take.
Mean of Probability Density Function
Mean of the probability density function refers to the average value of the random variable. The mean is also called the expected value or expectation, denoted by μ or E[X], where X is the random variable.
Mean of the probability density function f(x) for the continuous random variable X is given by:
\bold{E[X] = \mu = \int\limits^{\infty}_{-\infty}xf(x)dx}
Median of Probability Density Function
Median is the value that divides the probability density function graph into two equal halves. If x = M is the median, then the area under the curve from -∞ to M and the area under the curve from M to ∞ are equal, each being 1/2.
Median of the probability density function f(x) is given by:
\bold{\int\limits^{M}_{-\infty}f(x)dx = \int\limits^{\infty}_{M}f(x)dx=\frac{1}{2}}
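Both formulas can be checked numerically. The sketch below uses an assumed density f(x) = 3x² on [0, 1], for which E[X] = 3/4 and the median solves M³ = 1/2:

```python
# Mean and median for the assumed density f(x) = 3x^2 on [0, 1]:
# E[X] = integral of x * 3x^2 = 3/4, and the median M solves M^3 = 1/2.

def f(x):
    return 3 * x * x if 0 <= x <= 1 else 0.0

def integrate(func, a, b, n=100_000):
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

mean = integrate(lambda x: x * f(x), 0, 1)

# Median by bisection on F(M) - 1/2, where F(M) = integral of f from 0 to M
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if integrate(f, 0, mid, n=1000) < 0.5:
        lo = mid
    else:
        hi = mid

print(round(mean, 3))           # 0.75
print(round((lo + hi) / 2, 3))  # 0.794, i.e. (1/2)^(1/3)
```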
Variance of Probability Density Function
Variance of a probability density function refers to the expected squared deviation of the random variable from its mean. It is denoted by Var(X), where X is the random variable.
Variance of the probability density function f(x) for continuous random variable X is given by:
Var(X) = E[(X - μ)²] = \bold{\int\limits^{\infty}_{-\infty}(x-\mu)^2f(x)dx}
Standard Deviation of Probability Density Function
Standard Deviation is the square root of the variance. It is denoted by σ and is given by:
σ = √Var(X)
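A small numerical sketch of variance and standard deviation, using an assumed density f(x) = 2x on [0, 1] (for which Var(X) = 1/18):

```python
# Variance and standard deviation for the assumed density f(x) = 2x on
# [0, 1]: Var(X) = E[X^2] - mu^2 = 1/2 - 4/9 = 1/18.
import math

def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(func, a, b, n=100_000):
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

mu = integrate(lambda x: x * f(x), 0, 1)               # 2/3
var = integrate(lambda x: (x - mu) ** 2 * f(x), 0, 1)  # 1/18
sigma = math.sqrt(var)

print(round(var, 4))    # 0.0556
print(round(sigma, 4))  # 0.2357
```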
Probability Density Function Vs Cumulative Distribution Function
The key differences between Probability Density Function (PDF) and Cumulative Distribution Function (CDF) are listed in the following table:
| Aspect | Probability Density Function (PDF) | Cumulative Distribution Function (CDF) |
|---|---|---|
| Definition | Describes the density of probability, i.e., the relative likelihood that a continuous random variable takes a value near a given point. | Gives the probability that a random variable is less than or equal to a specific value. |
| Range of Values | Defined for continuous random variables. | Defined for both continuous and discrete random variables. |
| Mathematical Expression | f(x), where f(x) ≥ 0 and \int_{-\infty}^{\infty} f(x)\,dx = 1 | F(x), where 0 ≤ F(x) ≤ 1 for all x, F(-∞) = 0 and F(∞) = 1 |
| Interpretation | Its value at a point is a density, not a probability; probabilities come from areas under the curve. | Represents the probability that the random variable is less than or equal to a specific value. |
| Area Under the Curve | The area under the PDF curve over an interval gives the probability that the random variable falls within that interval. | The value of the CDF at a point gives the probability that the random variable is less than or equal to that point. |
| Relationship with CDF | The PDF can be obtained by differentiating the CDF with respect to the random variable. | The CDF can be obtained by integrating the PDF with respect to the random variable. |
| Probability Calculation | The probability of falling within an interval (a, b) is \int_a^b f(x)\,dx. | The probability of being less than or equal to a value x is F(x). |
| Properties | Always non-negative: f(x) ≥ 0 for all x; total area under the curve equals 1. | Monotonically increasing: F(x₁) ≤ F(x₂) if x₁ ≤ x₂; 0 ≤ F(x) ≤ 1 for all x. |
| Examples | Normal distribution PDF: \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}; Exponential distribution PDF: λe^{-λx} | Normal distribution CDF: \frac{1}{2}\left( 1+ \mathrm{erf}\left( \frac{x-\mu}{\sigma \sqrt{2}} \right) \right); Exponential distribution CDF: 1 - e^{-λx} |
Types of Probability Density Function
The probability functions of several commonly used distributions are given below:
- Uniform Distribution
- Binomial Distribution (a discrete distribution, described by a probability mass function)
- Normal Distribution
- Chi-Square Distribution
Probability Density Function for Uniform Distribution
The uniform distribution is the distribution in which all equally likely values lie within a specified range. It is also called the rectangular distribution. It is written as U(a, b), where a is the minimum value and b is the maximum value.
If x is a value lying between a and b, then the PDF of the uniform distribution is given by:
f(x) = 1 / (b - a), for a ≤ x ≤ b (and 0 otherwise)
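A minimal sketch of the uniform density, with assumed endpoints a = 2 and b = 6:

```python
# Uniform density sketch: f(x) = 1/(b - a) on [a, b]; the endpoints
# a = 2 and b = 6 here are assumed values for illustration.

def uniform_pdf(x, a=2.0, b=6.0):
    return 1.0 / (b - a) if a <= x <= b else 0.0

print(uniform_pdf(3))  # 0.25 (constant density inside [2, 6])
print(uniform_pdf(7))  # 0.0  (outside the support)
# P(3 <= X <= 5) is the rectangle area: width * height
print((5 - 3) * uniform_pdf(4))  # 0.5
```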
Probability Density Function for Binomial Distribution
The binomial distribution is a discrete distribution with two parameters, n and p, where n is the total number of trials and p is the probability of success. Being discrete, it is strictly described by a probability mass function rather than a density.
Let x be the number of successes out of n trials, p the probability of success, and q = 1 - p the probability of failure. Then the probability function of the binomial distribution is given by:
P(x) = {}^{n}C_{x}\, p^{x} q^{n-x}
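Since the binomial distribution is discrete, its probability function can be evaluated directly with `math.comb`; a sketch for assumed values n = 4 and p = 0.5:

```python
# Binomial probabilities (a discrete distribution, so strictly a PMF):
# P(X = x) = C(n, x) * p^x * q^(n-x). n = 4, p = 0.5 are assumed values.
from math import comb

def binomial_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(binomial_pmf(2, 4, 0.5))                         # 0.375
print(sum(binomial_pmf(x, 4, 0.5) for x in range(5)))  # 1.0
```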
Probability Density Function for Normal Distribution
The normal distribution is a distribution that is symmetric about its mean. It is also called the Gaussian distribution and is denoted N(μ, σ²), where μ is the mean and σ² is the variance. The graph of the normal distribution is bell-shaped.
If x is the variable, μ the mean, σ² the variance, and σ the standard deviation, then the formula for the PDF of the Gaussian or normal distribution is given by:
f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2}
In the standard normal distribution, mean = 0 and standard deviation = 1. Substituting μ = 0 and σ = 1, the probability density function of the standard normal distribution is given by:
f(x) = \frac{1}{\sqrt{2\pi}}e^{\frac{-x^2}{2}}
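A minimal sketch of the standard normal density using only the math module:

```python
# Standard normal density (mu = 0, sigma = 1); the peak value at x = 0
# is 1/sqrt(2*pi), approximately 0.3989.
import math

def std_normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

print(round(std_normal_pdf(0), 4))  # 0.3989
print(round(std_normal_pdf(1), 4))  # 0.242
```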
Probability Density Function for Chi-Squared Distribution
Chi-squared distribution is the distribution of the sum of squares of k independent standard normal random variables. It is denoted as χ²(k).
The probability density function for Chi-squared distribution formula is given by:
f(x) = \frac{x^{\frac{k}{2}-1}e^{\frac{-x}{2}}}{2^{\frac{k}{2}}\Gamma(\frac{k}{2})} , x > 0
f(x) = 0, otherwise
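The chi-squared density can be evaluated with `math.gamma`. For k = 2 it reduces to (1/2)e^(-x/2), which gives an easy sanity check; the point x = 2 below is an assumed value for illustration:

```python
# Chi-squared density sketch using math.gamma; for k = 2 the formula
# reduces to (1/2) * exp(-x/2), an easy sanity check.
import math

def chi2_pdf(x, k):
    if x <= 0:
        return 0.0
    return x**(k / 2 - 1) * math.exp(-x / 2) / (2**(k / 2) * math.gamma(k / 2))

print(round(chi2_pdf(2.0, 2), 4))      # 0.1839
print(round(0.5 * math.exp(-1.0), 4))  # 0.1839 (matches the k = 2 form)
```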
Joint Probability Density Function
The joint probability density function is the density function defined for the joint probability distribution of two or more random variables. For two continuous variables it is denoted f(x, y); it is not itself a probability, but the probability of a region is obtained by integrating f(x, y) over that region. The joint PDF is obtained by differentiating the joint CDF (once with respect to each variable), must be non-negative, and must integrate to 1 over the whole domain.
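A numerical sketch with an assumed joint density f(x, y) = 4xy on the unit square, checking that it integrates to 1 with a nested midpoint sum:

```python
# Joint density sketch: the assumed f(x, y) = 4xy on the unit square
# integrates to 1; a nested midpoint Riemann sum checks it numerically.

def joint_pdf(x, y):
    return 4 * x * y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def double_integral(f, n=400):
    """Midpoint sum over the unit square with an n x n grid."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f((i + 0.5) * h, (j + 0.5) * h)
    return total * h * h

print(round(double_integral(joint_pdf), 6))  # 1.0
```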
Difference Between PDF and Joint PDF
The PDF is a function of a single variable, whereas the joint PDF is a function of two or more variables. The key differences between the two concepts are listed in the following table:
| PDF (Probability Density Function) | Joint PDF |
|---|---|
| Probability function defined for a single random variable. | Probability function defined for two or more random variables. |
| Denoted f(x). | Denoted f(x, y, ...). |
| Obtained by differentiating the CDF. | Obtained by differentiating the joint CDF. |
| Probabilities are calculated with a single integral. | Probabilities are calculated with multiple integrals, one per variable. |
Applications of Probability Density Function
Some of the applications of Probability Density function are:
- Probability density functions are used in statistics for calculating probabilities for random variables.
- It is used in modelling various scientific data.
Examples on Probability Density Function
Example 1: If the probability density function is given as: \bold{f(x)=
\begin{cases}
x / 2 & 0\leq x < 2\\
0 & \text{otherwise}
\end{cases}}
, find P (1 ≤ X ≤ 2).
Solution:
Apply the formula and integrate the PDF.
P (1 ≤ X ≤ 2) = \int\limits^{2}_{1}f(x)dx
f(x) = x / 2 for 0 ≤ x < 2
⇒ P (1 ≤ X ≤ 2) = \int\limits^{2}_{1}(x/2)dx
⇒ P (1 ≤ X ≤ 2) = \frac{1}{2}\times\big [\frac{x^2}{2} \big ]^2_1
⇒ P (1 ≤ X ≤ 2) = 3 / 4
Example 2: If the probability density function is given as: \bold{f(x)=
\begin{cases}
c(x - 1) & 1 < x < 5\\
0 & \text{otherwise}
\end{cases}}
, find c.
Solution:
For a valid PDF:
\int\limits^{\infty}_{-\infty}f(x)dx = 1\\
\Rightarrow\int\limits^{1}_{-\infty}0\,dx\hspace{0.1cm}+\hspace{0.1cm} \int\limits^{5}_{1}c(x-1)dx \hspace{0.1cm}+\hspace{0.1cm} \int\limits^{\infty}_{5}0\,dx = 1\\
\Rightarrow 0 \hspace{0.1cm}+\hspace{0.1cm} c \big [\frac{x^2}{2}- x\big]^5_1 + 0 = 1\\
\Rightarrow 8c = 1\\
\Rightarrow c = \frac{1}{8}
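The normalization constant can be cross-checked numerically; the sketch below integrates c(x - 1) with c = 1/8 over (1, 5), where the density is positive (taking it to be 0 elsewhere):

```python
# Numerical check that c = 1/8 normalizes the density c(x - 1) on (1, 5).

def f(x, c=1 / 8):
    return c * (x - 1) if 1 < x < 5 else 0.0

def integrate(func, a, b, n=100_000):
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

print(round(integrate(f, 1, 5), 6))  # 1.0
```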
Example 3: If the probability density function is given as: \bold{f(x)=
\begin{cases}
\frac{3}{8}x^2 & 0\leq x < 2\\
0 & \text{otherwise}
\end{cases}}
, find the mean.
Solution:
Formula for mean:
μ = \int\limits^{\infty}_{-\infty}xf(x)dx
⇒ μ = \int\limits^{2}_{0} x\left(\frac{3}{8}x^2\right) dx
⇒ μ = \frac{3}{8}\big[\frac{x^4}{4}\big]^2_0
⇒ μ = (3/8) × 4
⇒ μ = 3/2 = 1.5
Example 4: If the probability density function is given as: \bold{f(x)=
\begin{cases}
2x & 0\leq x \leq 1\\
0 & \text{otherwise}
\end{cases}}
, verify that this is a valid probability density function.
Solution:
To verify that f(x) is a valid PDF, it must satisfy two conditions:
f(x) ≥ 0 for all x.
The integral of f(x) over its entire range must equal 1.
Checking f(x) ≥ 0: f(x) = 2x is clearly non-negative for 0 ≤ x ≤ 1.
Integrating f(x) over its range:
\int_{-\infty}^{\infty} f(x) \, dx = \int_{0}^{1} 2x \, dx = \left[ x^2 \right]_{0}^{1} = 1^2 - 0^2 = 1
Since both conditions are satisfied, f(x) is a valid PDF.
Example 5: Given the probability density function f(x)= \begin{cases} 3x^2 & \text{if } 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}, find the mean (expected value) of the distribution.
Solution:
The mean of a continuous random variable X with PDF f(x) is given by:
E(X)= \int_{-\infty}^{\infty} x f(x) \, dx
For the given PDF:
E(X) = \int_{0}^{1} x \cdot 3x^2 \, dx
= \int_{0}^{1} 3x^3 \, dx
= 3 \left[ \frac{x^4}{4} \right]_{0}^{1}
= 3 \cdot \frac{1}{4}
= \frac{3}{4}
Example 6: Using the same PDF f(x) = \begin{cases} 3x^2 & \text{if } 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}, find the variance of the distribution.
Solution:
The variance of a continuous random variable X is given by:
\text{Var}(X) = E(X^2) - [E(X)]^2
We already have E(X) = \frac{3}{4}. Now, we need to find E(X^2):
E(X^2) = \int_{-\infty}^{\infty} x^2 f(x) \, dx
= \int_{0}^{1} x^2 \cdot 3x^2 \, dx
= \int_{0}^{1} 3x^4 \, dx
= 3 \left[ \frac{x^5}{5} \right]_{0}^{1}
= 3 \cdot \frac{1}{5}
= \frac{3}{5}
Therefore, \text{Var}(X) = \frac{3}{5} - \left(\frac{3}{4}\right)^2 = \frac{3}{5} - \frac{9}{16} = \frac{48-45}{80} = \frac{3}{80}
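As a cross-check, the moments in Examples 5 and 6 can be approximated numerically with a midpoint Riemann sum; from E(X) = 3/4 and E(X²) = 3/5, the variance is 3/5 - 9/16 = 3/80:

```python
# Numerical check of Examples 5 and 6 for f(x) = 3x^2 on [0, 1]:
# E(X) = 3/4, E(X^2) = 3/5, Var(X) = 3/5 - 9/16 = 3/80.

def f(x):
    return 3 * x * x if 0 <= x <= 1 else 0.0

def integrate(func, n=100_000):
    h = 1.0 / n
    return sum(func((i + 0.5) * h) for i in range(n)) * h

ex = integrate(lambda x: x * f(x))          # E(X)
ex2 = integrate(lambda x: x * x * f(x))     # E(X^2)
var = ex2 - ex**2                           # Var(X)
print(round(ex, 4), round(ex2, 4), round(var, 4))  # 0.75 0.6 0.0375
```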
Practice Questions on Probability Density Function
Q 1: Let f(x) be a probability density function given by:
- f(x) = x/2 for 0 ≤ x ≤ 2
- f(x) = 0 otherwise
Verify that f(x) is a valid probability density function.
Q 2: Let f(x) be a probability density function given by:
- f(x) = (1/2)e^{-x/2} for x ≥ 0
- f(x) = 0 for x < 0
Calculate the probability that X ≤ 1.
Q 3: Let f(x) be a probability density function given by:
- f(x) = 3x^2 for 0 ≤ x ≤ 1
- f(x) = 0 otherwise
Find the cumulative distribution function (CDF) F(x) for x ≥ 0.
Q 4: Given the probability density function f(x) of a continuous random variable X:
- f(x) = (3/4)(1 - x^2) for -1 ≤ x ≤ 1
- f(x) = 0 otherwise
Find P(0 ≤ X ≤ 1/2)
Q 5: Find the cumulative distribution function (CDF) for the PDF f(x) = \begin{cases} 2x & \text{if } 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}.
Q 6: Given the function f(x) = \begin{cases} k(1-x^2) & \text{if } -1 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}, find the value of k that makes f(x) a valid PDF.
Q 7: For the PDF f(x) = \begin{cases} \frac{1}{3} e^{-x/3} & \text{if } x \geq 0 \\ 0 & \text{otherwise} \end{cases}, find the variance Var(X).
Q 8: Given the PDF f(x) = \begin{cases} 3(1-x)^2 & \text{if } 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}, find the cumulative distribution function (CDF) F(x).
Q 9: For the PDF f(x) = \begin{cases} 6(x - x^2) & \text{if } 0 \leq x \leq 1 \\ 0 & \text{otherwise} \end{cases}, calculate the probability that X is between 0.2 and 0.8, i.e., P(0.2 \leq X \leq 0.8).
Q 10: For the PDF f(x) = \begin{cases} \frac{1}{3} e^{-x/3} & \text{if } x \geq 0 \\ 0 & \text{otherwise} \end{cases}, calculate the expected value E(X).