
PROBABILITY THEORY

SUBMITTED BY-
AKASH MODI

UNDER THE SUPERVISION OF


DR. AKASH ASTHANA

DEPARTMENT OF STATISTICS
UNIVERSITY OF LUCKNOW
LUCKNOW, UTTAR PRADESH - 226007

CERTIFICATE
PROJECT ACKNOWLEDGEMENT

I would like to express my gratitude to Dr. Masood Husain Siddiqui, Head of the Department of Statistics, for his contributions to the completion of my project titled “PROBABILITY THEORY”.

I would like to express my special thanks to our mentor, Dr. Akash Asthana, Department of Statistics, for the time and effort he provided throughout the project. His useful advice and suggestions were really helpful to me during the project’s completion. In this respect, I am eternally grateful to him.

I would like to acknowledge that this project was completed entirely by me.

Akash Modi
CONTENTS
 ABSTRACT
 INTRODUCTION
 OBJECTIVES
 KEY THEORETICAL CONCEPTS
 SAMPLE SPACE
 RANDOM EXPERIMENT
 EVENT
 CONDITIONAL PROBABILITY
 BAYES’ THEOREM
 CLASSICAL APPROACH
 PROBABILITY MASS FUNCTION
 PROBABILITY DENSITY FUNCTION
 MOMENT GENERATING FUNCTION
 CENTRAL LIMIT THEOREM
 CONCLUSION
 REFERENCE

ABSTRACT
Probability theory forms the foundation of understanding randomness and uncertainty in
various fields such as mathematics, statistics, science, and economics.
This paper explores the fundamental concepts of probability theory, including axioms,
random variables, probability distributions, and key theorems such as the Law of Large
Numbers and the Central Limit Theorem.
Additionally, it examines real-world applications, including risk analysis, machine learning,
and decision-making processes.
Through theoretical explanations and practical examples, the paper aims to highlight the
critical role of probability theory in interpreting data and making informed predictions in
uncertain environments.

INTRODUCTION
The theory of probability had its origin in gambling and games of chance. It owes much to
the curiosity of gamblers who pestered their friends in the mathematical world with all sorts
of questions. Unfortunately, this association with gambling contributed to very slow and
sporadic growth of probability theory as a mathematical discipline. The mathematicians of
the day took little or no interest in the development of any theory but looked only at the
combinatorial reasoning involved in each problem.
An extension of the classical definition of Laplace was used to evaluate the probabilities of
sets of events with infinite outcomes. The notion of equal likelihood of certain events played
a key role in this development. According to this extension, if Ω is some region with a well-
defined measure (length, area, volume, etc.), the probability that a point chosen at random
lies in a subregion A of Ω is the ratio measure(A)/measure(Ω). Many problems of geometric
probability were solved using this extension.
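To make this geometric definition concrete, here is a minimal Monte Carlo sketch in Python (an illustrative addition, not part of the original material; the choice of Ω as the unit square and A as the inscribed quarter disk is an assumed example):

```python
import random

# A Monte Carlo sketch of the geometric definition P(A) = measure(A) / measure(Ω).
# Here Ω is the unit square and A is the quarter disk x² + y² ≤ 1,
# so the true value is area(A) / area(Ω) = π/4 ≈ 0.7854.
random.seed(0)
n = 1_000_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1)
print(hits / n)  # ≈ 0.785
```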
The mathematical theory of probability as we know it today is of comparatively recent origin.
It was A. N. Kolmogorov who axiomatized probability in his fundamental work, Foundations
of the Theory of Probability (Berlin, 1933).

OBJECTIVES OF PROBABILITY THEORY


1. Quantifying Uncertainty
 Probability theory provides a framework to measure and express uncertainty in
various events or outcomes.
 It assigns numerical values (between 0 and 1) to the likelihood of an event occurring.
2. Predicting Outcomes
 Probability helps predict future events based on past data or experiments.
 For example, in weather forecasting, probability is used to predict the likelihood of
rain.
3. Decision Making
 Probability theory aids in making informed decisions under uncertainty.
 It is used in fields like finance, insurance, and risk management to evaluate and
minimize risks.
4. Analyzing Random Phenomena
 Many processes in nature and society are random. Probability helps analyze and
understand these processes.
 Examples include tossing a coin, rolling dice, or stock market fluctuations.

5. Modeling Real-World Scenarios
 Probability theory helps model real-world scenarios involving randomness, such as
traffic flow, disease spread, or machine failures.
6. Evaluating Risks
 In engineering, medicine, and other fields, probability is used to assess and mitigate
risks associated with uncertain outcomes.
7. Enhancing Mathematical Reasoning
 It develops critical thinking and logical reasoning skills, enabling better problem-
solving abilities in uncertain situations.
Probability theory is a cornerstone of modern science, engineering, and the social
sciences, providing the mathematical tools to deal with randomness and uncertainty
effectively.

KEY THEORETICAL CONCEPTS


1.SAMPLE SPACE
In most branches of knowledge, experiments are a way of life. In probability and statistics,
too, we concern ourselves with special types of experiments.
Example 1. A coin is tossed. Assuming that the coin does not land on its edge, there are two
possible outcomes of the experiment: heads and tails. On any performance of this experiment,
one does not know what the outcome will be. The coin can be tossed as many times as
desired.
Definition 1. A random (or statistical) experiment is an experiment in which:
(a) All outcomes of the experiment are known in advance.
(b) Any performance of the experiment results in an outcome that is not known in advance.
(c) The experiment can be repeated under identical conditions.
Definition 2. The sample space of a statistical experiment is a pair (Ω, S), where
(a) Ω is the set of all possible outcomes of the experiment.
(b) S is a σ-field of subsets of Ω.
Example 2. Let us toss a coin. The set Ω is the set of symbols H and T, where H denotes head
and T represents tail. Also, S is the class of all subsets of Ω, namely {{H}, {T}, {H, T}, φ}.
If the coin is tossed two times, then
Ω = {(H, H), (H, T), (T, H), (T, T)},
and
S = {φ, {(H, H)}, {(H, T)}, {(T, H)}, {(T, T)}, {(H, H), (H, T)}, {(H, H), (T, H)}, {(H, H), (T,
T)}, {(H, T), (T, H)}, {(T, T), (T, H)}, {(T, T), (H, T)}, {(H, H), (H, T), (T, H)}, {(H, H), (H,
T), (T, T)}, {(H, H), (T, H), (T, T)}, {(H, T), (T, H), (T, T)}, Ω},
where the first element of a pair denotes the outcome of the first toss, and the second element
the outcome of the second toss. The event “at least one head” consists of the sample points (H, H),
(H, T), (T, H). The event “at most one head” is the collection of sample points (H, T), (T, H), (T,
T).
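As an illustrative aside (assumed code, not from the original project), the sample space and the two events above can be enumerated in a few lines of Python:

```python
from itertools import product

# Sample space for two tosses of a coin: all ordered pairs of H and T.
omega = list(product("HT", repeat=2))  # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

# Events are subsets of the sample space.
at_least_one_head = [w for w in omega if w.count("H") >= 1]
at_most_one_head = [w for w in omega if w.count("H") <= 1]

print(at_least_one_head)  # [('H', 'H'), ('H', 'T'), ('T', 'H')]
print(at_most_one_head)   # [('H', 'T'), ('T', 'H'), ('T', 'T')]
```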
2.RANDOM EXPERIMENT
A random experiment is a process or action that leads to well-defined results, called
outcomes, but the result cannot be predicted with certainty in advance.
Examples:
Tossing a coin (outcomes: Head or Tail).
Rolling a die (outcomes: 1, 2, 3, 4, 5, or 6).
Drawing a card from a deck (outcomes: 52 possibilities).

3.EVENT
An event is a subset of the possible outcomes of a random experiment. It is a specific result
or a combination of results that we are interested in.
Examples:
In tossing a coin, getting Head is an event.
In rolling a die, getting a number greater than 4 (i.e., outcomes 5 and 6) is an event.
4.CONDITIONAL PROBABILITY
Conditional probability is the probability of an event occurring given that another event has
already occurred. If A and B are two events, the probability of A given B is denoted P(A|B)
and is calculated using the formula:
P(A|B) = P(A∩B) / P(B), where P(B) ≠ 0
Example:
Consider a deck of cards. Let:
 A: Event of drawing a king.
 B: Event of drawing a red card.
If you draw a card and it is known to be red (B), the conditional probability P(A|B) is:
P(A|B) = P(A∩B) / P(B) = (2/52) / (26/52) = 2/26 = 1/13
Here:
 P(A∩B) is the probability of drawing a red king (2 of the 52 cards).
 P(B) is the probability of drawing any red card (26 of the 52 cards).
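A minimal sketch, assuming equally likely cards, that reproduces this calculation by counting (the names deck, red, and red_kings are illustrative):

```python
from itertools import product
from fractions import Fraction

# Build a 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))

red = [c for c in deck if c[1] in ("hearts", "diamonds")]    # event B
red_kings = [c for c in red if c[0] == "K"]                  # event A ∩ B

# P(A|B) = P(A ∩ B) / P(B) = |A ∩ B| / |B| for equally likely outcomes.
print(Fraction(len(red_kings), len(red)))  # 1/13
```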
5.BAYES’ THEOREM
Bayes' Theorem is a fundamental concept in probability theory and statistics that describes
the relationship between conditional probabilities. It allows you to update the probability of
an event based on new evidence or information.
The formula for Bayes' Theorem is:
P(A∣B) = P(B∣A)⋅P(A)/P(B)
Where:
 P(A|B) is the posterior probability, the probability of event A happening given
that event B has occurred.
 P(B|A) is the likelihood, the probability of event B happening given that
event A has occurred.
 P(A) is the prior probability, the initial probability of event A before considering
the new evidence B.
 P(B) is the marginal probability, the total probability of event B occurring,
regardless of whether A happens or not.
EXAMPLE:
Suppose a doctor wants to determine the probability of a patient having a disease based on
the results of a diagnostic test. Let's say:

 P(D) = 0.01 (prior probability of having the disease)


 P(T|D) = 0.99 (likelihood of testing positive given the disease)
 P(T) = 0.02 (marginal probability of testing positive)
Using Bayes' theorem, the doctor can calculate the posterior probability of the patient having
the disease given a positive test result:
P(D|T) = P(T|D) × P(D) / P(T) = 0.99 × 0.01 / 0.02 = 0.495
This means that given a positive test result, the probability of the patient having the disease is
approximately 49.5%.
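The same calculation, written as a short Python sketch (variable names are illustrative):

```python
# Bayes' theorem for the diagnostic-test example: P(D|T) = P(T|D) · P(D) / P(T).
p_d = 0.01          # P(D): prior probability of having the disease
p_t_given_d = 0.99  # P(T|D): likelihood of testing positive given the disease
p_t = 0.02          # P(T): marginal probability of testing positive

p_d_given_t = p_t_given_d * p_d / p_t
print(p_d_given_t)  # 0.495 (up to floating-point rounding)
```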
6.CLASSICAL APPROACH
The classical approach to probability theory, also known as the theoretical approach, is based
on the assumption that all outcomes of an experiment are equally likely. This approach is
particularly useful when we know the total number of possible outcomes and can calculate
the probability of an event by counting the favorable outcomes.
EXAMPLE:
Tossing a Coin:
When tossing a fair coin, the sample space is S = {H, T}. If you want the probability of
getting "Heads," the event E is {Heads}. Since there are two possible outcomes and each is
equally likely, the probability of getting heads is:
P(E)=1/2
Basic Probability Rules in the Classical Approach (see the sketch after this list):
1. Probability of the Sample Space: The probability of the entire sample space is
always 1, i.e., P(S) = 1.
2. Complementary Events: The probability of the complement of an
event E (denoted Eᶜ, the event “not E”) is given by:
P(Eᶜ) = 1 − P(E)
3. Union of Events: If two events E1 and E2 are mutually exclusive (i.e., they
cannot occur at the same time), the probability of their union is the sum of their
individual probabilities:
P(E1∪E2) = P(E1) + P(E2)
If the events are not mutually exclusive, the formula adjusts to account for the overlap:
P(E1∪E2) = P(E1) + P(E2) − P(E1∩E2)
4. Intersection of Events: If events E1 and E2 are independent (i.e., the
occurrence of one does not affect the other), then the probability of both events
occurring together (their intersection) is the product of their individual probabilities:
P(E1∩E2) = P(E1)⋅P(E2)
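Here is a minimal sketch checking these rules by counting outcomes for a fair die (an assumed example; the function P and the events E1, E2 are illustrative):

```python
from fractions import Fraction

# Classical probability over a fair six-sided die: P(E) = |E| / |S|.
S = {1, 2, 3, 4, 5, 6}

def P(E):
    return Fraction(len(E & S), len(S))

E1 = {2, 4, 6}  # "even number"
E2 = {5, 6}     # "greater than 4"

print(P(S))                                       # 1: probability of the sample space
print(1 - P(E1) == P(S - E1))                     # True: complement rule
print(P(E1 | E2) == P(E1) + P(E2) - P(E1 & E2))   # True: general union rule
```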

PROBABILITY AXIOMS
Let (Ω, S) be the sample space associated with a statistical experiment. In this section we
define a probability set function and study some of its properties.
Definition 1. Let (Ω, S) be a sample space. A set function P defined on S is called a
probability measure (or simply, probability) if it satisfies the following conditions:
(i) P(A) ≥ 0 for all A ∈ S.
(ii) P(Ω) = 1.
(iii) If A∩B = φ, then P(A∪B) = P(A) + P(B).
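As a brief illustration (assumed code, not from the source), the three conditions can be checked for the equally likely measure on a finite sample space:

```python
from fractions import Fraction

# Equally likely measure on a six-point sample space.
omega = frozenset({1, 2, 3, 4, 5, 6})

def P(A):
    return Fraction(len(A), len(omega))

A, B = frozenset({1, 2}), frozenset({5, 6})
assert P(A) >= 0                                  # (i) non-negativity
assert P(omega) == 1                              # (ii) total probability is 1
assert not (A & B) and P(A | B) == P(A) + P(B)    # (iii) additivity for disjoint events
print("all axioms hold for this measure")
```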
1. Expectation (or Expected Value)
The expectation (or expected value) of a random variable is a measure of its central
location—in other words, it provides a weighted average of all possible values that the
random variable can take, weighted by their probabilities. It represents the "long-run average"
of the outcomes if the experiment is repeated many times.
For a Discrete Random Variable:
If X is a discrete random variable with possible values x1, x2, …, xn and
associated probabilities P(X = xi), the expectation is:
E[X] = Σ xi ⋅ P(X = xi), where the sum runs over i = 1, …, n.
For a Continuous Random Variable:
If X is a continuous random variable with probability density function (PDF) fX(x), the
expectation is:
E[X] = ∫ x ⋅ fX(x) dx, where the integral runs from −∞ to ∞.
The expectation is a real number and represents the "center" of the distribution of X.
Properties of Expectation:
1. Linearity: The expectation is linear, which means:
E[aX + b] = a⋅E[X] + b,
where a and b are constants. This property also extends to sums of random variables:
E[X + Y] = E[X] + E[Y], if X and Y are random variables.
2. Constant: If c is a constant, then:
E[c] = c.

2. Variance
The variance of a random variable measures the spread or dispersion of its values around
the expectation. It indicates how much the values of the random variable deviate from the
mean. A high variance means that the values are spread out widely around the mean, while a
low variance indicates that the values are concentrated near the mean.
Formula for Variance:
 The variance of a random variable X, denoted by Var(X), is defined
as the expected squared deviation from the mean:
Var(X) = E[(X − E[X])²].
Expanding this expression:
Var(X) = E[X²] − (E[X])².
For a Discrete Random Variable:
If X is a discrete random variable with values x1, x2, …, xn and
probabilities P(X = xi), the variance is:
Var(X) = Σ (xi − E[X])² ⋅ P(X = xi), where the sum runs over i = 1, …, n.
For a Continuous Random Variable:
If X is a continuous random variable with PDF fX(x), the variance is:
Var(X) = ∫ (x − E[X])² ⋅ fX(x) dx, where the integral runs from −∞ to ∞.
Alternatively, the variance can be computed as:
Var(X) = E[X²] − (E[X])².
Properties of Variance:
1. Non-negative: Variance is always non-negative:
Var(X) ≥ 0.
If X is a constant random variable, Var(X) = 0.
2. Scaling and addition: Variance behaves differently under scaling and addition
(see the sketch after this list):
 If Y = aX + b, where a and b are constants:
Var(Y) = a² ⋅ Var(X).
The constant b does not affect the variance because it does not change the spread of the
distribution, only its location.
 For the sum of two independent random variables X and Y:
Var(X + Y) = Var(X) + Var(Y).
If X and Y are dependent, you need to add the covariance term:
Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y).
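The scaling property can be illustrated by simulation; this is a sketch under assumed parameters (a normal X with variance 4, a = 3, b = 7), not a derivation:

```python
import numpy as np

# Simulate Y = aX + b and compare sample variances with a² · Var(X).
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1_000_000)  # Var(X) ≈ 4
a, b = 3.0, 7.0
y = a * x + b

print(np.var(x))  # ≈ 4.0
print(np.var(y))  # ≈ 36.0 = a² · Var(X); the shift b does not change the spread
```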

Relationship Between Expectation and Variance


 The mean E[X] provides information about the central location of the
random variable.
 The variance Var(X) provides information about the spread of the
random variable around its mean.
The standard deviation σX is the square root of the variance:
σX = √Var(X).
Standard deviation is often used because it has the same units as the random variable itself,
unlike variance, which has squared units.

Example:
Consider a discrete random variable X with the following probability distribution:

X       1     2     3     4
P(X)    0.1   0.2   0.3   0.4

Step 1: Calculate the Expectation E[X]
E[X] = 1(0.1) + 2(0.2) + 3(0.3) + 4(0.4) = 0.1 + 0.4 + 0.9 + 1.6 = 3.0
Step 2: Calculate the Variance Var(X)
First, calculate E[X²]:
E[X²] = 1²(0.1) + 2²(0.2) + 3²(0.3) + 4²(0.4) = 0.1 + 0.8 + 2.7 + 6.4 = 10.0
Now, use the formula for variance:
Var(X) = E[X²] − (E[X])² = 10.0 − (3.0)² = 10.0 − 9.0 = 1.0
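A quick computational check of this worked example (assumed code, illustrative names):

```python
# Discrete distribution from the example above.
values = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]

e_x = sum(x * p for x, p in zip(values, probs))       # E[X]
e_x2 = sum(x**2 * p for x, p in zip(values, probs))   # E[X²]
var_x = e_x2 - e_x**2                                 # Var(X) = E[X²] − (E[X])²

print(e_x)    # 3.0
print(var_x)  # 1.0 (up to floating-point rounding)
```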

Conclusion
 Expectation is the mean or average of a random variable, providing a measure
of the "center" of the distribution.
 Variance measures the spread or variability of the random variable around the
mean, with higher variance indicating greater spread.
