Probability Definitions
In statistics, a population is the entire pool from which a statistical sample is drawn. A
population may refer to an entire group of people, objects, events, hospital visits, or
measurements. A population can thus be said to be an aggregate observation of subjects grouped
together by a common feature.
Unlike with a sample, statistical analysis carried out on a full population has no standard errors to report: standard errors tell analysts working with a sample how far their estimate may deviate from the true population value, but when you work with the entire population you already know that true value.
Population Samples
A sample is a random selection of members of a population. It is a smaller group drawn from the
population that has the characteristics of the entire population. The observations and conclusions
made against the sample data are attributed to the population.
The information obtained from the statistical sample allows statisticians to develop hypotheses
about the larger population. In statistical equations, population is usually denoted with an
uppercase 𝑁 while the sample is usually denoted with a lowercase 𝑛.
Population Parameters
A parameter is data based on an entire population. Statistics such as averages and standard
deviations, when taken from populations, are referred to as population parameters. The
population mean and population standard deviation are represented by the Greek letters µ and 𝜎,
respectively.
The sample standard deviation estimates the variation in the population from the variation observed in the sample. When the sample standard deviation is divided by the square root of the number of observations in the sample, the result is referred to as the standard error of the mean.
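As a small sketch of the relationship above, the snippet below computes a sample standard deviation and the standard error of the mean for a made-up set of measurements (the values are purely illustrative):

```python
import math
import statistics

# Hypothetical sample of measurements (illustrative values only)
sample = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]

n = len(sample)
sample_sd = statistics.stdev(sample)   # sample standard deviation (n - 1 denominator)
sem = sample_sd / math.sqrt(n)         # standard error of the mean

print(f"n = {n}, s = {sample_sd:.4f}, SEM = {sem:.4f}")
```

Note that the standard error shrinks as the sample size grows, reflecting that larger samples give more precise estimates of the population mean.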
Probability sampling uses statistical theory to randomly select a small group of people (the sample) from an existing large population, and then generalizes from the sample's responses to the overall population.
A sample space is the set of possible outcomes of a random experiment, and is represented using the symbol 𝑆. A subset of the possible outcomes of an experiment is called an event. The number of outcomes a sample space contains depends on the experiment; if it contains a finite number of outcomes, it is known as a discrete or finite sample space.
A sample space for a random experiment is written within curly braces { }. There is a difference between the sample space and an event. For rolling a die, the sample space 𝑆 is {1, 2, 3, 4, 5, 6}, whereas an event can be written as {1, 3, 5}, which represents the set of odd numbers, or {2, 4, 6}, which represents the set of even numbers. The outcomes of an experiment are random, and the sample space acts as the universal set for a particular experiment.
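The die-rolling example can be expressed directly with Python sets, which makes the relationship between a sample space and its events explicit:

```python
# Sample space for rolling a single die, with the odd/even events from the text
S = {1, 2, 3, 4, 5, 6}
odd = {1, 3, 5}
even = {2, 4, 6}

# Every event is a subset of the sample space (the universal set here)
print(odd.issubset(S), even.issubset(S))  # True True

# Together these two events cover the whole sample space
print(odd | even == S)  # True
```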
Probability measures the extent to which something is likely to happen. This basic probability theory is also used in probability distributions, where you study the possible outcomes of a random experiment. To find the probability of a single event occurring, we first need to know the total number of possible outcomes.
The entire set of possible outcomes of a random experiment is the sample space of that experiment. The likelihood of occurrence of an event is known as its probability, and the probability of occurrence of any event lies between 0 and 1.
If the probability of occurrence of an event is 0, such an event is called an impossible event and
if the probability of occurrence of an event is 1, it is called a sure event. In other words, the
empty set 𝜙 is an impossible event and the sample space 𝑆 is a sure event.
Simple Events
Any event consisting of a single point of the sample space is known as a simple event in
probability. For example, if 𝑆 = {56, 78, 96, 54, 89} and 𝐸 = {78} then 𝐸 is a simple event.
Compound Events
Contrary to the simple event, if any event consists of more than one single point of the sample
space then such an event is called a compound event. Considering the same example again, if
𝑆 = {56, 78, 96, 54, 89}, 𝐸1 = {56, 54}, 𝐸2 = {78, 56, 89} then, 𝐸1 and 𝐸2 represent two
compound events.
If the occurrence of any event is completely unaffected by the occurrence of any other event, such events are known as independent events in probability, and events that are affected by other events are known as dependent events.
If the occurrence of one event excludes the occurrence of another event, such events are
mutually exclusive events i.e. two events don’t have any common point. For example, if
𝑆 = {1 , 2 , 3 , 4 , 5 , 6} and 𝐸1 , 𝐸2 are two events such that 𝐸1 consists of numbers less than 3 and
𝐸2 consists of numbers greater than 4. So, 𝐸1 = {1,2} and 𝐸2 = {5,6}. Then, 𝐸1 and 𝐸2 are
mutually exclusive.
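The mutual-exclusivity example above maps naturally onto set operations; the check below confirms that the two events share no outcomes:

```python
# Sample space for a die roll, and the two events from the text
S = {1, 2, 3, 4, 5, 6}
E1 = {x for x in S if x < 3}   # numbers less than 3 -> {1, 2}
E2 = {x for x in S if x > 4}   # numbers greater than 4 -> {5, 6}

# Mutually exclusive events have an empty intersection
print(E1 & E2)             # set()
print(E1.isdisjoint(E2))   # True
```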
Exhaustive Events
A set of events is called exhaustive if all the events together consume the entire sample space.
Complementary Events
For any event 𝐸1 there exists another event 𝐸1′ , which represents the remaining elements of the
sample space 𝑆. So, the complementary event of 𝐸1 is 𝐸1′ = 𝑆 − 𝐸1 .
If a die is rolled, the sample space 𝑆 is given as 𝑆 = {1, 2, 3, 4, 5, 6}. If event 𝐸1 represents all the outcomes which are greater than 4, then 𝐸1 = {5, 6} and 𝐸1′ = {1, 2, 3, 4}. Thus 𝐸1′ is the complement of the event 𝐸1.
If two events 𝐸1 and 𝐸2 are combined with OR, it means that either 𝐸1 occurs, or 𝐸2 occurs, or both occur. The union symbol (∪) is used to represent OR in probability. Thus, the event 𝐸1 ∪ 𝐸2 denotes 𝐸1 OR 𝐸2.
If the events 𝐸1, 𝐸2, 𝐸3, …, 𝐸𝑛 associated with sample space 𝑆 are exhaustive, then 𝑆 = 𝐸1 ∪ 𝐸2 ∪ 𝐸3 ∪ … ∪ 𝐸𝑛.
If two events 𝐸1 and 𝐸2 are combined with AND, it means the intersection: the set of elements common to both events. The intersection symbol (∩) is used to represent AND in probability. Thus, the event 𝐸1 ∩ 𝐸2 denotes 𝐸1 AND 𝐸2.
The event "𝐸1 but not 𝐸2" represents the difference of the two events: all the outcomes which are present in 𝐸1 but not in 𝐸2. Thus, the event 𝐸1 but not 𝐸2 is represented as 𝐸1 − 𝐸2.
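The complement, OR, AND, and difference operations described above correspond directly to set difference, union, and intersection. A quick sketch, reusing the die-rolling sample space:

```python
S = {1, 2, 3, 4, 5, 6}
E1 = {5, 6}      # outcomes greater than 4
E2 = {2, 4, 6}   # even outcomes

complement_E1 = S - E1   # E1' = S - E1
union = E1 | E2          # E1 OR E2
intersection = E1 & E2   # E1 AND E2
difference = E1 - E2     # E1 but not E2

print(complement_E1)  # {1, 2, 3, 4}
print(union)          # {2, 4, 5, 6}
print(intersection)   # {6}
print(difference)     # {5}
```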
Probability is a measure of the likelihood of an event occurring. Many events cannot be predicted with total certainty; using probability, we can predict only the chance of an event occurring, i.e., how likely it is to happen. Probability ranges from 0 to 1, where 0 means an impossible event and 1 indicates a certain event. For example, when we toss a coin, only two outcomes are possible: Head or Tail (𝐻, 𝑇). But if we toss two coins, there are four equally likely outcomes: (𝐻, 𝐻), (𝐻, 𝑇), (𝑇, 𝐻), (𝑇, 𝑇). If we ignore the order of the coins, three distinct results are possible: both heads, both tails, or one head and one tail.
The probability formula defines the probability of an event as the ratio of the number of favorable outcomes to the total number of outcomes:

Probability of event 𝐸 is 𝑃(𝐸) = (Number of favorable outcomes) / (Total number of outcomes).
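Applying the formula to a die roll, using exact fractions rather than floating point:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # total outcomes of a die roll
E = {2, 4, 6}            # favorable outcomes: rolling an even number

# P(E) = favorable outcomes / total outcomes
p = Fraction(len(E), len(S))
print(p)  # 1/2
```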
Students sometimes confuse a "favorable outcome" with a "desirable outcome". The above is the basic formula, but there are further formulas for different situations or events.
Theoretical Probability
Theoretical probability is based on the possible chances of something happening, and rests mainly on the reasoning behind probability. For example, if a coin is tossed, the theoretical probability of getting a head is 1/2.
Experimental Probability
Experimental probability is based on the observations of an experiment: the number of times an event occurs divided by the total number of trials. For example, if a coin is tossed 10 times and heads is recorded 6 times, then the experimental probability for heads is 6/10.
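A simple simulation illustrates how experimental probability approaches the theoretical value as the number of trials grows:

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible
trials = 10_000
heads = sum(random.choice("HT") == "H" for _ in range(trials))

experimental_p = heads / trials
print(f"P(heads) over {trials} trials: {experimental_p}")  # close to the theoretical 0.5
```

With only 10 trials the result can easily be 0.6, as in the example above; with many thousands of trials it settles near 0.5.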
Sample Space: the set of all the possible outcomes of a trial. Example: tossing a coin gives the sample space 𝑆 = {𝐻, 𝑇}.
Experiment or Trial: a series of actions where the outcomes are always uncertain. Examples: tossing a coin, selecting a card from a deck of cards, throwing a die.
Assume an event 𝐸 can occur in 𝑟 ways out of 𝑛 possible, equally likely ways. Then the probability of the event happening, or its success, is expressed as 𝑃(𝐸) = 𝑟/𝑛. The probability that the event 𝐸 will not occur (𝐸′, the complement of 𝐸), known as its failure, is expressed by 𝑃(𝐸′) = (𝑛 − 𝑟)/𝑛 = 1 − 𝑟/𝑛.
Therefore, 𝑃(𝐸) + 𝑃(𝐸′) = 1. This means that the total of all the probabilities in any random test or experiment is equal to 1.
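The complement rule can be verified exactly with fractions. Here the event is rolling a number greater than 4 on a die, so 𝑟 = 2 ways out of 𝑛 = 6:

```python
from fractions import Fraction

n, r = 6, 2                   # event E: roll greater than 4 -> r = 2 ways out of n = 6
p_e = Fraction(r, n)          # P(E)  = r/n
p_not_e = Fraction(n - r, n)  # P(E') = (n - r)/n

print(p_e, p_not_e)           # 1/3 2/3
print(p_e + p_not_e == 1)     # True: the probabilities sum to 1
```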
When events have the same theoretical probability of happening, they are called equally likely events. The results of a sample space are called equally likely if all of them have the same probability of occurring. For example, if you throw a die, the probability of getting a 1 is 1/6. Similarly, the probability of getting each of the numbers 2, 3, 4, 5 and 6, one at a time, is 1/6.
Complementary Events
Complementary events consider only two possibilities: an event occurs or it does not. A person coming or not coming to your house, or getting a job or not getting it, are examples of complementary events. Basically, the complement of an event occurring is the event not occurring: if the event has probability 𝑝, its complement has probability 1 − 𝑝.
Conditional Probability
The conditional probability of an event 𝐵 is the probability that the event will occur given the
knowledge that an event 𝐴 has already occurred. This probability is written 𝑃(𝐵|𝐴), notation for
the probability of 𝐵 given 𝐴. In the case where the event 𝐴 has no effect on the probability of
event 𝐵, the conditional probability of event 𝐵 given event 𝐴 is simply the probability of
event 𝐵, that is 𝑃(𝐵).
If either of the events 𝐴 and 𝐵 affects the occurrence of the other, then the probability of the intersection of 𝐴 and 𝐵 (the probability that both events occur) is given by 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵|𝐴). So the conditional probability 𝑃(𝐵|𝐴) is easily obtained by 𝑃(𝐵|𝐴) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐴), where 𝑃(𝐴) > 0.
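A concrete conditional-probability calculation on a die roll, where 𝐴 is "the roll is greater than 3" and 𝐵 is "the roll is even":

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {4, 5, 6}   # the roll is greater than 3
B = {2, 4, 6}   # the roll is even

p_A = Fraction(len(A), len(S))            # P(A)       = 3/6
p_A_and_B = Fraction(len(A & B), len(S))  # P(A and B) = 2/6
p_B_given_A = p_A_and_B / p_A             # P(B|A) = P(A and B) / P(A)

print(p_B_given_A)  # 2/3
```

Intuitively: given the roll is in {4, 5, 6}, two of those three equally likely outcomes are even.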
Independent events are events whose occurrence does not depend on any other event. For example, if we flip a coin and get heads, and then flip the coin again and get tails, the outcome of the second flip does not depend on the first: the two flips are independent of each other. This is one of the types of events in probability.
In Probability, the set of outcomes of an experiment is called events. There are different types of
events such as independent events, dependent events, mutually exclusive events, and so on. If the
probability of occurrence of an event 𝐴 is not affected by the occurrence of another event 𝐵, then
𝐴 and 𝐵 are said to be independent events.
If 𝐴 and 𝐵 are independent events, then 𝑃(𝐵|𝐴) = 𝑃(𝐵). Using the multiplication rule of probability, this can be written as 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴) · 𝑃(𝐵|𝐴) = 𝑃(𝐴) · 𝑃(𝐵).
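The multiplication rule for independent events can be checked by enumerating the sample space of two coin tosses:

```python
from fractions import Fraction
from itertools import product

S = set(product("HT", repeat=2))   # sample space for two coin tosses: 4 outcomes

A = {s for s in S if s[0] == "H"}  # first toss is heads
B = {s for s in S if s[1] == "H"}  # second toss is heads

def p(E):
    """Probability of event E under equally likely outcomes."""
    return Fraction(len(E), len(S))

# Independence: P(A and B) = P(A) * P(B)
print(p(A & B))                 # 1/4
print(p(A & B) == p(A) * p(B))  # True
```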
Two events 𝐴 and 𝐵 are said to be mutually exclusive events if they cannot occur at the same
time. Mutually exclusive events never have an outcome in common.
In probability theory, two events are said to be mutually exclusive if they cannot occur at the same time or simultaneously; in other words, mutually exclusive events are called disjoint events. If two events are disjoint, then the probability of both occurring at the same time is zero: if 𝐴 and 𝐵 are the two events, then 𝑃(𝐴 ∩ 𝐵) = 0.
In probability, the specific addition rule is valid when two events are mutually exclusive. It states
that the probability of either event occurring is the sum of probabilities of each event occurring.
If 𝐴 and 𝐵 are said to be mutually exclusive events then the probability of an event 𝐴 occurring
or the probability of event 𝐵 occurring is given as 𝑃(𝐴) + 𝑃(𝐵), thus it can be expressed
as 𝑃(𝐴 ∪ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵).
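The addition rule for mutually exclusive events, verified on the die-roll events 𝐴 = {1, 2} and 𝐵 = {5, 6} used earlier:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {1, 2}   # numbers less than 3
B = {5, 6}   # numbers greater than 4

def p(E):
    """Probability of event E under equally likely outcomes."""
    return Fraction(len(E), len(S))

print(A & B == set())          # True: A and B are mutually exclusive
print(p(A | B))                # 2/3
print(p(A | B) == p(A) + p(B)) # True: P(A or B) = P(A) + P(B)
```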
The difference between the independent events and mutually exclusive events are given below:
If 𝐴 and 𝐵 are two independent events, then 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴) · 𝑃(𝐵). If 𝐴 and 𝐵 are two mutually exclusive events, then 𝑃(𝐴 ∩ 𝐵) = 0.
Dependent and Independent Events
Two events are said to be dependent if the occurrence of one event changes the probability of the other. Two events are said to be independent if the probability of one event does not affect the probability of the other. If two events with nonzero probabilities are mutually exclusive, they are not independent; likewise, such independent events cannot be mutually exclusive.
Conditional probability is the probability of an event 𝐵, given that another event 𝐴 has occurred. It is denoted by 𝑃(𝐵|𝐴) and defined using the equation 𝑃(𝐵|𝐴) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐴).
For mutually exclusive events, 𝑃(𝐴 ∩ 𝐵) = 0, so the equation gives 𝑃(𝐵|𝐴) = 0/𝑃(𝐴) = 0. Hence the conditional probability formula for mutually exclusive events is 𝑃(𝐵|𝐴) = 𝑃(𝐴|𝐵) = 0.
The Probability Density Function (PDF) describes the density of a continuous random variable over a range of values. The PDF of the normal distribution, for example, is determined by its mean and standard deviation.
The standard normal distribution is often used in science to model real-valued variables whose distributions are not known.
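The normal PDF mentioned above has a closed form determined by the mean µ and standard deviation 𝜎. A minimal sketch, evaluating the density at the mean of a standard normal distribution:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

print(normal_pdf(0.0))  # ~0.3989, the peak of the standard normal curve
```

Note that a density value is not a probability; probabilities for continuous variables come from integrating the PDF over an interval.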