
Discrete Probability Distributions

In statistics and probability theory, a discrete probability distribution is a distribution characterized by a probability mass function, which assigns a probability to each possible outcome. Such distributions are commonly used in computer programs, for example to make an equal-probability random selection among a number of choices. The most common discrete probability distributions are the binomial, Poisson, geometric, and Bernoulli distributions.
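
As a minimal sketch of the "equal-probability random selection" mentioned above, the following Python snippet (the choice list and variable names are illustrative, not from the original text) draws from a uniform discrete distribution over a small set of options:

import random

# Equal-probability (uniform) discrete distribution over a few choices.
choices = ["red", "green", "blue", "yellow"]

# Each call picks one option; every option has probability 1/4.
print(random.choice(choices))

# Drawing many samples shows the empirical frequencies approach 1/4 each.
counts = {c: 0 for c in choices}
for _ in range(10_000):
    counts[random.choice(choices)] += 1
print({c: counts[c] / 10_000 for c in choices})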

A random variable that follows a discrete distribution is called a discrete random variable. A random variable can take one of two types of values: fixed, countable numbers (discrete values) or any value within a range (continuous values). For example, the number of apples in a basket is discrete, while the time needed to drive from school to home is continuous.

So the probability distribution of a random variable X that takes only discrete values is called a discrete probability distribution.

When the probability distribution of an experiment is discrete, the probabilities of all possible values of the random variable must sum to 1. That is, if X is a discrete random variable, then

\sum_{e \in E} P(X = e) = 1

Here, E is the set of all values that the variable X can take.

For example, consider the experiment of tossing two coins, with sample space S = {HH, TH, HT, TT}. Let Y be the number of tails obtained. Clearly Y can take only the discrete values 0, 1, and 2.

For Y = 0, that is HH, P(Y = 0) = 1/4.
For Y = 1, that is TH or HT, P(Y = 1) = 2/4.
For Y = 2, that is TT, P(Y = 2) = 1/4.

Adding all three we get 1/4 + 2/4 + 1/4 = 1.
Thus the formula is verified with a very common example.
The discrete probability distribution can always be represented in the form of a table, as below:

Y        P(Y)
0        1/4 = 0.25
1        2/4 = 0.50
2        1/4 = 0.25
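
A rough Python sketch of the same idea, representing the PMF of the two-coin example as a dictionary and checking that the probabilities sum to 1 (the variable names are illustrative):

from fractions import Fraction

# PMF of Y = number of tails when tossing two fair coins.
pmf = {0: Fraction(1, 4), 1: Fraction(2, 4), 2: Fraction(1, 4)}

# The probabilities of all possible values must sum to 1.
total = sum(pmf.values())
print(total)        # 1
assert total == 1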

For any discrete probability distribution we can always find the mean, or expected value, by

E(X) = \sum_{e \in E} e \, P(X = e)

In the example above, the expected value = 0(1/4) + 1(2/4) + 2(1/4) = 0 + 2/4 + 2/4 = 1. It is not necessary for the expected value to equal 1; it can be any value.

Examples
Example 1: Find the expected value of the following discrete distribution.
Y        0       1       2       3       4
P(Y)     0.30    0.20    0.25    0.15    0.10

Solution:
Y        P(Y)     Y·P(Y)
0        0.30     0
1        0.20     0.20
2        0.25     0.50
3        0.15     0.45
4        0.10     0.40
So expected value = 0 + 0.20 + 0.50 + 0.45 + 0.40 = 1.55
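
A small Python sketch of the same computation (the names are illustrative, not part of the original solution):

# PMF from Example 1: value -> probability.
pmf = {0: 0.30, 1: 0.20, 2: 0.25, 3: 0.15, 4: 0.10}

# Expected value: sum of value * probability over all values.
expected_value = sum(y * p for y, p in pmf.items())
print(expected_value)  # 1.55 (up to floating-point rounding)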
Example 2: We flip a coin 10 times. Find the probability that 6 heads are obtained.

Solution:
We solve this using the binomial distribution.
A binomial distribution is denoted B(n, p), where n is the number of trials and p is the probability of success in each trial; k denotes the number of successes out of the n trials, so (1 - p) is the probability of failure in each trial. The binomial probability is then calculated as

P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}

The term \binom{n}{k} is known as the binomial coefficient and is calculated as

\binom{n}{k} = \frac{n!}{k! \, (n - k)!}

Here, n = 10, k = 6, and p = 1/2 = 0.5, so 1 - p = 0.5.


Using this we get P(X = 6) = \binom{10}{6} (0.5)^6 (0.5)^4 = 210/1024 ≈ 0.2051.
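
A quick Python check of this result (a sketch using only the standard library; math.comb requires Python 3.8 or later):

from math import comb

n, k, p = 10, 6, 0.5

# Binomial probability: C(n, k) * p^k * (1 - p)^(n - k)
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(prob, 4))  # 0.2051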

Continuous Probability Distributions


A continuous probability distribution differs from a discrete probability distribution in several ways.

The probability that a continuous random variable will assume a particular value is zero.

As a result, a continuous probability distribution cannot be expressed in tabular form.

Instead, an equation or formula is used to describe a continuous probability distribution.

Most often, the equation used to describe a continuous probability distribution is called
a probability density function. Sometimes, it is referred to as a density function,
a PDF, or a pdf. For a continuous probability distribution, the density function has the
following properties:

Since the continuous random variable is defined over a continuous range of values (called the domain of the variable), the graph of the density function will also be continuous over that range.

The area bounded by the curve of the density function and the x-axis is equal to 1,
when computed over the domain of the variable.

The probability that the random variable assumes a value between a and b is equal to the area under the density function bounded by a and b; both of these area properties are written as formulas below.
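
In symbols (standard notation for these properties, introduced here for illustration; f denotes the density function and D the domain of the variable):

\int_{D} f(x) \, dx = 1

P(a \le X \le b) = \int_{a}^{b} f(x) \, dx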

For example, consider the probability density function shown in the graph below. Suppose we wanted to know the probability that the random variable X was less than or equal to a. The probability that X is less than or equal to a is equal to the area under the curve bounded by a and minus infinity, as indicated by the shaded area.

Note: The shaded area in the graph represents the probability that the random variable X is less than or equal to a. This is a cumulative probability. However, the probability that X is exactly equal to a would be zero. A continuous random variable can take on an infinite number of values, and the probability that it equals any specific value (such as a) is always zero.
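
As a hedged illustration of this note (using the standard normal distribution, which is an assumption; the text does not name a specific distribution), the cumulative probability P(X <= a) can be computed from the CDF, while the probability of any single exact value is zero:

from math import erf, sqrt

def standard_normal_cdf(x):
    # CDF of the standard normal distribution: P(X <= x).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

a = 1.0

# Cumulative probability: area under the density curve from minus infinity to a.
print(standard_normal_cdf(a))                            # about 0.8413

# Probability of an exact value: area over an interval of width zero.
print(standard_normal_cdf(a) - standard_normal_cdf(a))   # 0.0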
