The Federal University of Technology, Akure, Ondo State, Nigeria
AN ASSIGNMENT
ON
PROBABILITY DISTRIBUTION FUNCTIONS
BY
GROUP 5
SUBMITTED TO
PROF A.O MELODI
SEPTEMBER 2023
GROUP MEMBERS
For a random variable 𝑋 with 𝑛 possible outcomes, the probability of each outcome is:
𝑃(𝑋 = 𝑥ᵢ) = 1/𝑛 for 𝑖 = 1, 2, ..., 𝑛
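As a quick check, the discrete uniform pmf can be computed directly in Python (the function name here is illustrative, not from the report):

```python
def discrete_uniform_pmf(n: int) -> float:
    # Each of the n equally likely outcomes has probability 1/n.
    return 1.0 / n

# A fair six-sided die: every face has probability 1/6.
p_face = discrete_uniform_pmf(6)
```

Summing the pmf over all 𝑛 outcomes gives 1, as it must for any probability distribution.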
Let 𝑝 be the probability of success. For a random variable 𝑋 where 𝑋 = 1 indicates success and
𝑋 = 0 indicates failure:
𝑃(𝑋 = 1) = 𝑝
𝑃(𝑋 = 0) = 1 − 𝑝
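The Bernoulli pmf is a simple two-case function; a minimal Python sketch (helper name my own):

```python
def bernoulli_pmf(x: int, p: float) -> float:
    # P(X = 1) = p (success); P(X = 0) = 1 - p (failure).
    return p if x == 1 else 1.0 - p
```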
Given 𝑛 trials and 𝑝 as the probability of success on each trial, the probability of observing 𝑘
successes is:
𝑃(𝑋 = 𝑘) = C(𝑛, 𝑘) 𝑝^𝑘 (1 − 𝑝)^(𝑛−𝑘),
where C(𝑛, 𝑘) = 𝑛! / (𝑘!(𝑛 − 𝑘)!) is the binomial coefficient.
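The binomial pmf translates directly into Python using the standard-library binomial coefficient (function name illustrative):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    # C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 1 head in 2 fair coin tosses.
p_one_head = binomial_pmf(1, 2, 0.5)
```

Summing over 𝑘 = 0, ..., 𝑛 recovers 1, a useful sanity check on the formula.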
Given a rate λ (average number of occurrences in the interval), the probability of observing 𝑘
events is:
𝑃(𝑋 = 𝑘) = (λ^𝑘 𝑒^(−λ)) / 𝑘!
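The Poisson pmf needs only the exponential and factorial functions from the standard library (helper name my own):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    # lam^k * e^(-lam) / k!
    return lam**k * exp(-lam) / factorial(k)
```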
Given 𝑝 as the probability of success on each trial, the probability that the first success occurs on
the 𝑘𝑡ℎ trial is:
𝑃(𝑋 = 𝑘) = (1 − 𝑝)^(𝑘−1) 𝑝
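In code, the geometric pmf is a one-liner: 𝑘 − 1 failures followed by one success (function name illustrative):

```python
def geometric_pmf(k: int, p: float) -> float:
    # (1 - p)^(k - 1) * p : first success occurs on trial k (k >= 1).
    return (1 - p)**(k - 1) * p
```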
Given 𝑟 successes and 𝑝 as the probability of success on each trial, the probability of 𝑘 failures
before the 𝑟𝑡ℎ success is:
𝑃(𝑋 = 𝑘) = C(𝑘 + 𝑟 − 1, 𝑟 − 1) 𝑝^𝑟 (1 − 𝑝)^𝑘
Fig 2.6 Negative Binomial Distribution Graph
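The negative binomial pmf above can be sketched the same way (helper name my own); note that with 𝑟 = 1 it reduces to counting failures before the first success:

```python
from math import comb

def neg_binomial_pmf(k: int, r: int, p: float) -> float:
    # C(k + r - 1, r - 1) * p^r * (1 - p)^k :
    # probability of k failures before the r-th success.
    return comb(k + r - 1, r - 1) * p**r * (1 - p)**k
```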
For a random variable 𝑋 which takes values in the interval [𝑎, 𝑏]:
𝑓(𝑥) = 1/(𝑏 − 𝑎) for 𝑎 ≤ 𝑥 ≤ 𝑏, and 𝑓(𝑥) = 0 otherwise.
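A minimal Python sketch of the continuous uniform density, including the zero branch outside [𝑎, 𝑏] (function name illustrative):

```python
def uniform_pdf(x: float, a: float, b: float) -> float:
    # Constant density 1/(b - a) on [a, b]; zero outside the interval.
    return 1.0 / (b - a) if a <= x <= b else 0.0
```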
Given mean μ and standard deviation σ, the probability density function is:
𝑓(𝑥; μ, σ) = (1 / (σ√(2π))) 𝑒^(−(𝑥 − μ)² / (2σ²)) for all 𝑥
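The normal density can be evaluated directly from this formula (helper name my own):

```python
from math import exp, pi, sqrt

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    # (1 / (sigma * sqrt(2*pi))) * exp(-(x - mu)^2 / (2 * sigma^2))
    return exp(-((x - mu)**2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
```

At the mean of a standard normal, the density equals 1/√(2π) ≈ 0.3989.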
Given rate parameter λ, the probability density function is 𝑓(𝑥; λ) = λ𝑒^(−λ𝑥) for 𝑥 ≥ 0. Typical examples are the time until a radioactive particle decays or the time between customer arrivals in a store.
Fig 3.3 Exponential Distribution Curve for different rate parameter
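The exponential density with rate λ is a short function (name illustrative):

```python
from math import exp

def exponential_pdf(x: float, lam: float) -> float:
    # lam * e^(-lam * x) for x >= 0; zero for negative x.
    return lam * exp(-lam * x) if x >= 0 else 0.0
```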
Given shape parameter 𝑘 and scale parameter θ, the probability density function (pdf) is:
𝑓(𝑥; 𝑘, θ) = (𝑥^(𝑘−1) 𝑒^(−𝑥/θ)) / (θ^𝑘 Γ(𝑘))
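Since Γ(𝑘) is available in the standard library, the gamma pdf is easy to sketch (helper name my own); with 𝑘 = 1 it reduces to the exponential distribution with rate 1/θ:

```python
from math import exp, gamma

def gamma_pdf(x: float, k: float, theta: float) -> float:
    # x^(k - 1) * e^(-x / theta) / (theta^k * Gamma(k)), for x > 0.
    return x**(k - 1) * exp(-x / theta) / (theta**k * gamma(k))
```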
Given scale parameter λ and shape parameter 𝑘, the probability density function is:
𝑓(𝑥; λ, 𝑘) = (𝑘/λ) (𝑥/λ)^(𝑘−1) 𝑒^(−(𝑥/λ)^𝑘) for 𝑥 ≥ 0
Fig 3.7 Weibull Distribution Curve for different 𝝀 and k values
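The Weibull density follows the same pattern (function name my own); with shape 𝑘 = 1 it collapses to the exponential distribution with rate 1/λ:

```python
from math import exp

def weibull_pdf(x: float, lam: float, k: float) -> float:
    # (k / lam) * (x / lam)^(k - 1) * e^(-(x / lam)^k) for x >= 0.
    if x < 0:
        return 0.0
    return (k / lam) * (x / lam)**(k - 1) * exp(-((x / lam)**k))
```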
Given location parameter 𝑥₀ and scale parameter γ, the probability density function is:
𝑓(𝑥; 𝑥₀, γ) = 1 / (πγ[1 + ((𝑥 − 𝑥₀)/γ)²])
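A direct Python sketch of the Cauchy density (names illustrative; the scale argument is renamed to avoid shadowing `math.gamma`):

```python
from math import pi

def cauchy_pdf(x: float, x0: float, scale: float) -> float:
    # 1 / (pi * scale * (1 + ((x - x0) / scale)^2))
    return 1.0 / (pi * scale * (1 + ((x - x0) / scale)**2))
```

At the location parameter 𝑥₀ the density peaks at 1/(πγ).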
4.1.1 Definition
The joint probability distribution of two random variables, 𝑋 and 𝑌, gives the probability that 𝑋
takes on a specific value 𝑥 and 𝑌 takes on a specific value 𝑦 simultaneously.
4.1.2 Notation
Given a mean vector μ and a covariance matrix Σ, the probability density function (pdf) for a
random vector 𝑋 is:
𝑓(𝑥) = (2π)^(−𝑘/2) |Σ|^(−1/2) 𝑒^(−½ (𝑥 − μ)^𝑇 Σ⁻¹ (𝑥 − μ))
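For the bivariate case (𝑘 = 2) the determinant and inverse of Σ can be written out by hand, which makes the density computable without a linear-algebra library. This is a sketch for 2 dimensions only (names my own); for general 𝑘 one would use a library such as NumPy:

```python
from math import exp, pi, sqrt

def mvn_pdf_2d(x, mu, cov):
    # Bivariate case of (2*pi)^(-k/2) * |Sigma|^(-1/2)
    #                   * exp(-0.5 * (x - mu)^T Sigma^-1 (x - mu)).
    (a, b), (c, d) = cov            # cov = [[a, b], [c, d]]
    det = a * d - b * c             # |Sigma|
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    # Quadratic form via the 2x2 inverse [[d, -b], [-c, a]] / det.
    q = (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det
    return exp(-0.5 * q) / (2 * pi * sqrt(det))
```

With the identity covariance the density at the mean is 1/(2π), matching the product of two independent standard normals.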
Given 𝑛 trials and a probability vector 𝑝 = (𝑝₁, 𝑝₂, 𝑝₃, ..., 𝑝ₖ) where 𝑝ᵢ is the probability of
result 𝑖, the probability of the outcome vector 𝑥 = (𝑥₁, 𝑥₂, 𝑥₃, ..., 𝑥ₖ) is:
𝑃(𝑋 = 𝑥) = (𝑛! / (𝑥₁! 𝑥₂! ... 𝑥ₖ!)) 𝑝₁^𝑥₁ 𝑝₂^𝑥₂ ... 𝑝ₖ^𝑥ₖ
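The multinomial pmf can be sketched with standard-library factorials (function name my own); with 𝑘 = 2 categories it reduces to the binomial pmf:

```python
from math import factorial

def multinomial_pmf(x, p):
    # n! / (x1! * ... * xk!) * p1^x1 * ... * pk^xk, where n = sum(x).
    n = sum(x)
    coeff = factorial(n)
    for xi in x:
        coeff //= factorial(xi)     # exact integer division
    prob = 1.0
    for xi, pi in zip(x, p):
        prob *= pi**xi
    return coeff * prob
```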
5.0 CONCLUSION
In the world of statistical analysis, probability distributions act like guiding lights, helping us
navigate the complexities of data and uncertainty. From the precise nature of discrete
distributions to the endless possibilities of continuous ones, and the intricate dance of variables
in multivariate scenarios, we've covered a lot of ground.
Understanding these distributions isn't just theory; it's a practical key to gaining deeper insights
in research, making smart decisions in various industries, and using data creatively. As we wrap
up this journey, it's clear that when used wisely, these tools can turn raw data into valuable
knowledge, driving progress in many fields and aiding decision-making in uncertain situations.
While this report gives you a solid foundation, the world of probability and statistics keeps
evolving. It encourages curious minds to dig deeper, explore further, and find new ways to use
and expand upon these core ideas in our ever-changing world.