Probability and Probability Distributions

The document provides an overview of probability and probability distributions, covering key concepts such as discrete and continuous distributions, including binomial, Poisson, and normal distributions. It explains different interpretations of probability, including classical, relative frequency, and subjective approaches, as well as basic event relations and laws. The document also discusses the characteristics and applications of various probability distributions, emphasizing their significance in statistical analysis.


PROBABILITY

&
PROBABILITY
DISTRIBUTIONS

DR. HAFSA PARACHA, PT


DPT, MSBE

Research Coordinator

DIPMR, DUHS

OBJECTIVES:

At the end of the lecture, students will be able to understand:

▪ Introduction to probability
▪ Discrete and continuous probability distributions: the binomial, Poisson, and normal distributions

INTRODUCTION

▪ PROBABILITY comes from the word PROBABLE, which refers to the occurrence of a phenomenon that is not certain.
▪ The concept of probability helps us quantify the chance of occurrence of any event which is uncertain.
▪ An experiment is an activity that is performed for some specified purpose.
▪ When we perform an experiment, it produces different outcomes; the set of all possible outcomes is called the sample space.

CLASSICAL INTERPRETATION OF
PROBABILITY
In the classical interpretation of probability, each possible distinct result is called an outcome; an event is identified as a collection of outcomes.

▪ The applicability of this interpretation depends on the assumption that all outcomes are equally likely. If this assumption does not hold, the probabilities indicated by the classical interpretation of probability will be in error.
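Under the equally-likely assumption, a classical probability is simply (favorable outcomes) / (total outcomes). A minimal Python sketch for one roll of a fair six-sided die (an illustrative example, not from the slides):

```python
from fractions import Fraction

# Sample space: all equally likely outcomes for one roll of a fair die
sample_space = [1, 2, 3, 4, 5, 6]

# Event E: an even number shows
event = [outcome for outcome in sample_space if outcome % 2 == 0]

# Classical probability: favorable outcomes / total outcomes
p_event = Fraction(len(event), len(sample_space))
print(p_event)  # 1/2
```

If the die were loaded, the outcomes would no longer be equally likely and this count-based calculation would be in error, exactly as the slide warns.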

RELATIVE FREQUENCY CONCEPT
OF PROBABILITY

▪ This is an empirical approach to probability. If an experiment is repeated a large number of times and event E occurs 30% of the time, then .30 should be a very good approximation to the probability of event E.
▪ Symbolically, if an experiment is conducted n different times and if event E occurs on n_E of these trials, then the probability of event E is approximately

P(event E) ≈ n_E / n

▪ We say "approximately" because we think of the actual probability P(event E) as the relative frequency of the occurrence of event E over a very large number of observations or repetitions of the phenomenon. The fact that we can check probabilities that have a relative frequency interpretation (by simulating many repetitions of the experiment) makes this interpretation very appealing and practical.
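The simulation check the slide mentions can be sketched in a few lines of Python (the fair die and the seed are illustrative choices):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

n = 100_000  # number of repetitions of the experiment

# Event E: a six shows on a roll of a fair die; n_E counts its occurrences
n_E = sum(1 for _ in range(n) if random.randint(1, 6) == 6)

# Relative frequency n_E / n approximates P(E) = 1/6 ≈ 0.1667
estimate = n_E / n
print(round(estimate, 3))
```

With 100,000 repetitions the relative frequency lands very close to 1/6, illustrating why this interpretation is practical: it can be checked.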

ONE-SHOT INTERPRETATION

▪ The third interpretation of probability can be used for problems in which it is difficult
to imagine a repetition of an experiment. These are “one-shot” situations.
▪ For example, the director of a state welfare agency who estimates the probability
that a proposed revision in eligibility rules will be passed by the state legislature
would not be thinking in terms of a long series of trials.
▪ Rather, the director would use a personal or subjective probability to make a one-
shot statement of belief regarding the likelihood of passage of the proposed
legislative revision. The problem with subjective probabilities is that they can vary
from person to person and they cannot be checked.
Of the three interpretations presented, the relative frequency concept seems to be the most reasonable one because it provides a practical interpretation of the probability for most events of interest.

Basic Event Relations and Probability
Laws
The probability of an event, say event A, will always satisfy the property

0 ≤ P(A) ≤ 1

that is, the probability of an event lies anywhere in the interval from 0 (the occurrence of the event is impossible) to 1 (the occurrence of the event is a "sure thing").

The basic event relations are: either A or B occurs (union), both A and B occur (intersection), mutually exclusive events, and the complement of an event.

EITHER A OR B OCCURS

▪ Suppose A and B represent two experimental events, and you are interested in a new event, the event that either A or B occurs. For example, suppose that we toss a pair of dice and define the following events:

A: A total of 7 shows
B: A total of 11 shows

Then the event "either A or B occurs" is the event that you toss a total of either 7 or 11 with the pair of dice.
MUTUALLY EXCLUSIVE EVENTS

Two events A and B are said to be mutually exclusive if (when the experiment is performed a single
time) the occurrence of one of the events excludes the possibility of the occurrence of the other
event.

Note that, for this example, the events A and B are mutually exclusive; that is, if you observe event A
(a total of 7), you could not at the same time observe event B (a total of 11). Thus, if A occurs, B
cannot occur (and vice versa).

If two events, A and B, are mutually exclusive, the probability that either event occurs is
P(either A or B) = P(A) + P(B).
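The dice example above can be verified by enumerating all 36 equally likely outcomes (an illustrative sketch):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes when tossing a pair of dice
space = list(product(range(1, 7), repeat=2))

p_A = Fraction(sum(1 for s in space if sum(s) == 7), len(space))   # total of 7
p_B = Fraction(sum(1 for s in space if sum(s) == 11), len(space))  # total of 11
p_either = Fraction(sum(1 for s in space if sum(s) in (7, 11)), len(space))

# A and B are mutually exclusive, so P(either A or B) = P(A) + P(B)
assert p_either == p_A + p_B
print(p_A, p_B, p_either)  # 1/6 1/18 2/9
```

Because no single toss can total both 7 and 11, the two event counts never overlap, and the addition law holds exactly.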
COMPLEMENTARY EVENT

▪ The complement of an event A is the event that A does not occur. The complement of A is denoted by the symbol Ā.
▪ Thus, if we define the complement of an event A as a new event, namely "A does not occur," it follows that

P(A) + P(Ā) = 1

▪ For an example, refer again to the two-coin-toss experiment. If, in many repetitions of the experiment, the proportion of times you observe event A, "two heads show," is 1/4, then it follows that the proportion of times you observe the event Ā, "two heads do not show," is 3/4.
▪ Thus, P(A) and P(Ā) will always sum to 1.
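The two-coin-toss example can be checked directly by enumeration (a sketch; the H/T labels are illustrative):

```python
from itertools import product

# Sample space for tossing two fair coins: HH, HT, TH, TT
space = list(product("HT", repeat=2))

p_A = sum(1 for s in space if s == ("H", "H")) / len(space)  # "two heads show"
p_complement = 1 - p_A                                       # "two heads do not show"

print(p_A, p_complement)  # 0.25 0.75
assert p_A + p_complement == 1
```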


ADDITIONAL EVENT RELATIONS

UNION
▪ The union of two events A and B is the set of all outcomes that are included in either A or B (or both). The union is denoted as A ∪ B.

INTERSECTION
▪ The intersection of two events A and B is the set of all outcomes that are included in both A and B. The intersection is denoted as A ∩ B.

Variables: Discrete and Continuous

Discrete Random Variable
▪ When observations on a quantitative random variable can assume only a countable number of values, the variable is called a discrete random variable.

Continuous Random Variable
▪ When observations on a quantitative random variable can assume any one of the uncountable number of values in a line interval, the variable is called a continuous random variable.

Probability Distribution

▪ Probability distributions differ for discrete and continuous random variables.
▪ For discrete random variables, we will compute the probability of specific individual values occurring.
▪ For continuous random variables, the probability of an interval of values is the event of interest.

TYPES OF PROBABILITY DISTRIBUTIONS

▪ Discrete: Binomial, Poisson
▪ Continuous: Normal

BINOMIAL DISTRIBUTION
❑ The binomial distribution gives the probability that a specified outcome occurs in a given number of specified trials.

❑ It is used where each trial has only two possible outcomes.

❑ For example: flipping a coin (heads or tails), or a test result (positive or negative).

Binomial Experiment

1. The experiment consists of n identical trials.
2. Each trial results in one of two outcomes. We will label one outcome a success and the other a failure.
3. The probability of success on a single trial is equal to p and remains the same from trial to trial.
4. The trials are independent; that is, the outcome of one trial does not influence the outcome of any other trial.
5. The random variable y is the number of successes observed during the n trials.


BINOMIAL DISTRIBUTION:

1) The experiment consists of a fixed number of trials, n.
2) Each trial is independent of the others.
3) For each trial, there are only two possible outcomes. For counting purposes, one outcome is labelled a success, the other a failure.
4) For every trial, the probability of success is called p. The probability of getting a failure is then 1 − p.
5) The binomial random variable, X, is the number of successes in n trials.

FORMULA:

P(X = x) = C(n, x) · p^x · (1 − p)^(n − x),  for x = 0, 1, 2, …, n

where C(n, x) = n! / (x!(n − x)!) is the number of ways to choose x successes from n trials.
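The binomial formula can be sketched in Python using `math.comb` (the example values are illustrative):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x): probability of exactly x successes in n independent trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example: probability of exactly 3 heads in 5 tosses of a fair coin
print(binomial_pmf(3, 5, 0.5))  # 0.3125

# The probabilities over all possible x must sum to 1
total = sum(binomial_pmf(x, 5, 0.5) for x in range(6))
assert abs(total - 1) < 1e-12
```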


POISSON DISTRIBUTION

❑ It is named after the French mathematician S. D. Poisson, who introduced it in 1837.

❑ It describes a discrete random variable.
❑ This probability distribution is used for rare events.
❑ It is used when the probability of an event is very small and the number of trials is very large.
The probability distribution of y is Poisson, provided certain conditions are satisfied:

▪ Events occur one at a time; two or more events do not occur precisely at the same
time or in the same space.
▪ The occurrence of an event in a given period of time or region of space is
independent of the occurrence of the event in a nonoverlapping time period or region
of space; that is, the occurrence (or nonoccurrence) of an event during one period or
in one region does not affect the probability of an event occurring at some other time
or in some other region.
▪ The expected number of events during one period or in one region, m, is the same
as the expected number of events in any other period or region.
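Under these conditions, the Poisson probability of observing y events when m events are expected is P(y) = e^(−m) · m^y / y!. A sketch (the value m = 2 is illustrative):

```python
from math import exp, factorial

def poisson_pmf(y, m):
    """P(y): probability of exactly y events when m events are expected."""
    return exp(-m) * m**y / factorial(y)

m = 2  # expected number of events per period (illustrative)
print(round(poisson_pmf(0, m), 4))  # P(no events) = e^(-2) ≈ 0.1353

# Probabilities over y = 0, 1, 2, ... sum to 1
total = sum(poisson_pmf(y, m) for y in range(100))
assert abs(total - 1) < 1e-12
```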


Probability Distributions for Continuous
Random Variables

▪ Discrete random variables (such as the binomial) have possible values that are distinct and
separate, such as 0 or 1 or 2 or 3.
▪ Other random variables are most usefully considered to be continuous: Their possible values
form a whole interval (or range, or continuum).

▪ For instance, the 1-year return per dollar invested in a common stock could range from 0 to
some quite large value. In practice, virtually all random variables assume a discrete set of
values; the return per dollar of a million-dollar common-stock investment could be
$1.06219423 or $1.06219424 or $1.06219425 or. . . . However, when there are many
possible values for a random variable, it is sometimes mathematically useful to treat the
random variable as continuous.

▪ Recall that the histogram relative frequencies are proportional to areas over the class
intervals and that these areas possess a probabilistic interpretation.
▪ Thus, if a measurement is randomly selected from the set, the probability that it will
fall in an interval is proportional to the histogram area above the interval. Since a
population is the whole (100%, or 1), we want the total area under the probability
curve to equal 1.
▪ If we let the total area under the curve equal 1, then areas over intervals are exactly
equal to the corresponding probabilities
▪ This probability is written P(a < y < b).
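The "area over an interval equals probability" idea can be illustrated numerically: approximate P(a < y < b) by summing small rectangles under a density curve. A sketch using the standard normal density (the grid size is an arbitrary choice):

```python
import math

def density(y):
    """Standard normal probability density f(y)."""
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

# Approximate P(-1 < y < 1) as the area under f between -1 and 1
a, b, n = -1.0, 1.0, 10_000
h = (b - a) / n
area = sum(density(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

print(round(area, 4))  # ≈ 0.6827
```

The total area under the whole curve is 1, so areas over intervals are exactly the corresponding probabilities.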


A Continuous Probability
Distribution: The Normal Distribution

▪ Mound-shaped frequency distributions can often be approximated by using a normal curve.
▪ The relative frequency histogram for the normal random variable, called the normal curve or normal probability distribution, is a smooth, bell-shaped curve. Figure 4.9(a) shows a normal curve. If we let y represent the normal random variable, then the height of the probability distribution for a specific value of y is represented by f(y). The probabilities associated with a normal curve form the basis for the Empirical Rule.
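The Empirical Rule states that roughly 68%, 95%, and 99.7% of observations fall within 1, 2, and 3 standard deviations of the mean. These figures can be checked from the normal curve using the error function (a sketch):

```python
from math import erf, sqrt

def prob_within(k):
    """P(|y - mean| <= k standard deviations) for a normal random variable."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(k, round(prob_within(k), 4))
# 1 -> 0.6827, 2 -> 0.9545, 3 -> 0.9973
```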


Z-SCORE
(STANDARDIZED VARIABLES)

▪ To determine the probability that a measurement will be less than some value y, we first calculate the number of standard deviations that y lies away from the mean by using the formula

z = (y − μ) / σ

▪ The value of z computed using this formula is sometimes referred to as the z-score associated with the y-value. Using the computed value of z, we determine the appropriate probability from a standard normal table.
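The z-score computation, followed by a cumulative probability (obtained here via the error function rather than a printed table), can be sketched as follows (the mean of 100 and standard deviation of 15 are illustrative values):

```python
from math import erf, sqrt

def z_score(y, mean, sd):
    """Number of standard deviations that y lies away from the mean."""
    return (y - mean) / sd

def normal_cdf(z):
    """P(Z < z) for the standard normal distribution."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Example: scores with mean 100 and standard deviation 15
z = z_score(115, 100, 15)
print(z)                        # 1.0
print(round(normal_cdf(z), 4))  # 0.8413
```

The value 0.8413 matches the entry a standard normal table gives for z = 1.00.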

▪ Z-score tables give cumulative probabilities for positive and negative z values (table images omitted).


REFERENCES:

▪ R. L. Ott, Michael T. Longnecker. An Introduction to Statistical Methods and Data Analysis, 7th ed. Brooks/Cole, Cengage Learning, 2015.
