
2 END 321

Random Variables
2 Random Variables
Heuristically, a random variable is a numerical quantity whose observed value is
determined, at least in part, by some chance mechanism. The dynamics of this
mechanism, which we may also call an experiment, are characterized by an underlying
probability law, and we are then interested in a function of the outcome of this
experiment.
The most frequent use of probability theory is the description of random
variables.
More formally, random variables are real-valued functions on the sample
space (X: S → ℝ).
Since the outcomes of a random variable are determined by some underlying
experiment, we could also describe a probability law for random variables.
2 Random Variables
Consider the following example:
2 Random Variables
These probabilities should satisfy:
If you add all the probabilities on the previous slide, you should observe that they sum to 1.
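In symbols, if the possible values of the random variable are x1, x2, …, the requirement is:
\[
P(X = x_i) \ge 0 \quad \text{for every } i, \qquad \sum_{i} P(X = x_i) = 1.
\]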
A more complicated example is as follows:

What is the range of the random variable N?


The range of N is the set of positive integers. It can require any
integer number of trials before we observe the first head.
2 Random Variables
The probabilities are:
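Writing p for the probability of observing a head on any single toss (p = ½ if the coin is fair):
\[
P(N = n) = (1-p)^{\,n-1}\,p, \qquad n = 1, 2, \ldots
\]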

As in the previous question, the above probabilities should sum to 1. That is, we
need to show:
2 Random Variables
To see this:
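Using the geometric series $\sum_{k \ge 0} x^{k} = 1/(1-x)$ for $|x| < 1$, with the notation introduced above:
\[
\sum_{n=1}^{\infty} P(N = n) = \sum_{n=1}^{\infty} (1-p)^{\,n-1}\,p = p \cdot \frac{1}{1-(1-p)} = 1.
\]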

This discrete random variable N is in fact well known in the theory of random
variables. Does anyone remember what it is?

N is an example of a geometric random variable.


2 Distribution Functions
In our examples so far, we have talked about random variables that take on either
a finite or a countable number of possible values. Such random variables are called
discrete random variables.
Random variables can also take values on a continuum. These random variables are
known as continuous random variables.
For all random variables, the probability law that governs their behavior is
indicated by a special relation called the cumulative distribution function (cdf), or
briefly the distribution function. A distribution function F(.) of a random variable X
is defined for any real number b, -∞ < b < ∞, by:
\[
F(b) = P(X \le b).
\]
In words, F(b) is the probability that X takes on a value less than or equal to b.
2 Distribution Functions
The distribution function F(.) has to satisfy the following properties:
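For any random variable, the cdf satisfies:
1) F(b) is a nondecreasing function of b;
2) lim b→∞ F(b) = 1;
3) lim b→−∞ F(b) = 0;
4) F is right-continuous.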

Such a function defines the probability law of a random variable and hence can be
used to answer all probability-related questions. For example:
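One such relation, valid for any a < b:
\[
P(a < X \le b) = F(b) - F(a).
\]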
2 Discrete Random Variables
For discrete random variables, the probability mass is concentrated on a finite or
countably infinite collection of outcomes. A probability mass function (pmf)
p(a) of the discrete random variable X is defined by:
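\[
p(a) = P(X = a).
\]
Note that p(a) is positive for at most a countable number of values of a.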

When X takes on values x1, x2, … the pmf satisfies:
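\[
p(x_i) > 0 \quad \text{for } i = 1, 2, \ldots, \qquad p(x) = 0 \quad \text{for all other values of } x, \qquad \sum_{i} p(x_i) = 1.
\]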

Suppose X has a probability mass function given by:


2 Discrete Random Variables
Then the cdf is:
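For a hypothetical pmf with p(1) = 1/2, p(2) = 1/3 and p(3) = 1/6, for example, the cdf would be the step function
\[
F(a) = \begin{cases} 0, & a < 1, \\ \tfrac{1}{2}, & 1 \le a < 2, \\ \tfrac{5}{6}, & 2 \le a < 3, \\ 1, & a \ge 3. \end{cases}
\]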
2 The Bernoulli Random Variable
Bernoulli random variables are the simplest of all random variables. If an
experiment has two outcomes, one of which can be classified as a success and the
other as a failure, a Bernoulli random variable takes on the value 1 if the experiment
results in a success, and 0 otherwise. The probability mass function is:
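\[
p(0) = P(X = 0) = 1 - p, \qquad p(1) = P(X = 1) = p,
\]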

where p, 0 ≤ p ≤ 1, is the probability that the trial is a success. Since Bernoulli
random variables are extremely simple, they are not terribly interesting in
themselves. They do, however, provide convenient building blocks for many other
discrete random variables.
One can see clearly that the probabilities of all potential outcomes of the Bernoulli
random variable add up to 1 because:
P(X = 0) + P(X = 1) = 1 – p + p = 1
2 The Binomial Random Variable
The binomial random variable is based on the idea of n independent Bernoulli trials
with success probability p. In such a repeated experiment, one might naturally be
interested in knowing the number of successes. The number of successes X is then a
binomial random variable with parameters (n, p).
Does anyone remember the probability mass function of X?
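Recall the standard form:
\[
P(X = i) = \binom{n}{i}\, p^{i} (1-p)^{\,n-i}, \qquad i = 0, 1, \ldots, n,
\]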

where the binomial coefficient $\binom{n}{i} = \frac{n!}{i!\,(n-i)!}$ counts the number of ways of choosing which i of the n trials are successes.
2 The Binomial Random Variable
The probabilities sum to 1 here due to the binomial theorem:
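\[
\sum_{i=0}^{n} \binom{n}{i} p^{i}(1-p)^{\,n-i} = \bigl(p + (1-p)\bigr)^{n} = 1.
\]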

Consider the following example:

One’s definition of success is really immaterial here. Hence, without loss of
generality, we define success as the appearance of heads, and failure as the
appearance of tails. X is a binomial random variable with n = 4 and p = ½. Then,
what probability are we interested in here?
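If, for instance, the question is the probability of obtaining exactly two heads in the four tosses, then
\[
P(X = 2) = \binom{4}{2}\left(\tfrac{1}{2}\right)^{2}\left(\tfrac{1}{2}\right)^{2} = \tfrac{6}{16} = \tfrac{3}{8}.
\]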
2 The Binomial Random Variable
Another example is as follows:

Any ideas? X is the number of defectives, and is a binomial random variable with parameters n = 3 and p = 0.1.

Consider the following reliability example:

Define X to be the number of engines that do not fail.


2 The Binomial Random Variable
Here we have a decision to make between a two-engine and a four-engine
airplane. In making this decision, we wish to maximize the probability of a successful flight.
The number of engines that remain functional during a flight is a binomial random
variable. What is the probability that a two-engine airplane makes a
successful flight?

Writing p for the probability that an individual engine remains functional during the flight (independently of the other engines), P(X = 1) + P(X = 2) = 2p(1 − p) + p².

How about a four-engine plane?

P(X = 2) + P(X = 3) + P(X = 4) = 6p²(1 − p)² + 4p³(1 − p) + p⁴.
2 The Binomial Random Variable
Hence a four-engine plane is safer if:
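Comparing the two expressions above (with p again denoting the probability that an individual engine remains functional, 0 < p < 1):
\[
6p^{2}(1-p)^{2} + 4p^{3}(1-p) + p^{4} \;\ge\; 2p(1-p) + p^{2},
\]
which, after expanding and factoring, reduces to p(p − 1)²(3p − 2) ≥ 0, i.e., p ≥ 2/3. Hence the four-engine plane is preferable when each engine remains functional with probability at least 2/3; otherwise the two-engine plane is safer.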
2 The Geometric Random Variable
Suppose now that independent trials are conducted until we observe the first
success. If X is the random variable that represents the trial on which the first
success occurs, and if at each trial success occurs with probability p:
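\[
P(X = n) = (1-p)^{\,n-1}\,p, \qquad n = 1, 2, \ldots
\]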

Like the Bernoulli random variable, the only parameter of the geometric random
variable is the probability of success p. The name geometric follows from the fact
that the probabilities of outcomes evolve in a geometric progression. One can
also check that the probabilities sum to 1:
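This is the same geometric series as before:
\[
\sum_{n=1}^{\infty} (1-p)^{\,n-1}\,p = \frac{p}{1-(1-p)} = 1.
\]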
2 The Negative Binomial Random
Variable
One might also be interested in the number of independent Bernoulli trials that
one should observe until the rth success occurs. Such a random variable X has two
parameters: r and the probability of success p. The probability law is:
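In standard form, for the trial on which the rth success occurs:
\[
P(X = n) = \binom{n-1}{r-1}\, p^{r} (1-p)^{\,n-r}, \qquad n = r, r+1, \ldots
\]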

Consider the following example: Suppose that two teams are playing a series of
games, each of which is independently won by team A with probability p and by
team B with probability 1-p. The winner of the series is the first team to win i
games. If i = 4, find the probability that a total of 7 games are played. Also, show
that this probability is maximized when p = ½.
How would you solve the first question?
2 The Negative Binomial Random
Variable
The number of games a given team needs to record its 4th win is a negative binomial
random variable with r = 4. The series lasts exactly 7 games if either team records its
4th win in game 7, so the probability is:
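Adding the two negative binomial possibilities, team A recording its 4th win in game 7 or team B doing so:
\[
P(\text{7 games}) = \binom{6}{3} p^{4}(1-p)^{3} + \binom{6}{3}(1-p)^{4} p^{3} = 20\,p^{3}(1-p)^{3}.
\]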

The second question asks for the value of p that would prolong such a series as
much as possible. To answer it, we differentiate the above expression with respect to
p to obtain:
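\[
\frac{d}{dp}\,\Bigl[20\,p^{3}(1-p)^{3}\Bigr] = 60\,p^{2}(1-p)^{2}\,(1-2p).
\]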

This derivative is zero when p = ½. Hence, when two competing teams have even
chances of winning, the series has the highest chance of lasting 7 games.
2 The Poisson Random Variable
A random variable X, taking on one of the values 0, 1, 2, …, is said to be a Poisson
random variable with parameter λ, if for some λ > 0,
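\[
P(X = i) = e^{-\lambda}\,\frac{\lambda^{i}}{i!}, \qquad i = 0, 1, 2, \ldots
\]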

This function defines a probability mass function because
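\[
\sum_{i=0}^{\infty} e^{-\lambda}\,\frac{\lambda^{i}}{i!} = e^{-\lambda}\sum_{i=0}^{\infty}\frac{\lambda^{i}}{i!} = e^{-\lambda}\,e^{\lambda} = 1.
\]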

The Poisson random variable finds a wide area of application and is very often used for
counting the number of events in a certain time interval. An intuitive explanation
of how we obtain this random variable is very much based on this counting
interpretation. Does anyone know how we obtain this distribution?
2 The Poisson Random Variable
Let’s assume we will observe a phenomenon for a fixed time period of length t,
where t is any positive number. The number of events to occur in this fixed
interval (0, t] is a random variable X. Since the value for X will be the number of
events that occur, the range of X is discrete, so X is a discrete random variable.
The probability law, of course, depends on the manner in which the events occur.
Let’s make the following assumptions about the way in which the events occur:

1) In a sufficiently short length of time, say of length Δt, only 0 or 1 event
can occur (two or more simultaneous occurrences are impossible).
2) The probability of exactly 1 event occurring in this short interval of
length Δt is equal to λΔt, proportional to the length of the interval.
3) Any nonoverlapping intervals of length Δt are independent Bernoulli
trials.
2 The Poisson Random Variable
With these assumptions, we can subdivide our interval of length t into n = t/Δt
nonoverlapping, equal length pieces. These small intervals of time then are
independent Bernoulli trials, each with probability of success (an event occurs)
equal to p = λΔt. The probability of no event occurring on each trial is q = 1 − λΔt.
Then X, the number of events in the interval of length t, is binomial with
parameters n and p = λΔt = λt/n, and

\[
P(X = k) = \binom{n}{k}\,(\lambda \Delta t)^{k}\,(1-\lambda \Delta t)^{\,n-k}
         = \binom{n}{k}\left(\frac{\lambda t}{n}\right)^{k}\left(1-\frac{\lambda t}{n}\right)^{\,n-k}
\]
If we now take the limit of this probability function as Δt → 0, we obtain the
Poisson probability law.
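Carrying out this limit (Δt → 0, i.e., n → ∞ with λt held fixed) gives
\[
P(X = k) \;\to\; e^{-\lambda t}\,\frac{(\lambda t)^{k}}{k!}, \qquad k = 0, 1, 2, \ldots,
\]
which is the Poisson pmf with parameter λt.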
2 The Poisson Random Variable
The construction for the Poisson probability law also should convince you that it
may be used to approximate a Binomial random variable when the Binomial
parameter n is large and p is small. To see this, suppose that X is a Binomial
random variable with parameters (n, p) and let λ = np. Then
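Substituting p = λ/n and rearranging:
\[
P(X = i) = \binom{n}{i} p^{i}(1-p)^{\,n-i}
         = \frac{n(n-1)\cdots(n-i+1)}{n^{i}}\;\frac{\lambda^{i}}{i!}\;\frac{(1-\lambda/n)^{n}}{(1-\lambda/n)^{i}}.
\]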

For n large and p small,
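the three factors above behave as follows:
\[
\left(1-\frac{\lambda}{n}\right)^{n} \approx e^{-\lambda}, \qquad
\frac{n(n-1)\cdots(n-i+1)}{n^{i}} \approx 1, \qquad
\left(1-\frac{\lambda}{n}\right)^{i} \approx 1,
\]
so that $P(X = i) \approx e^{-\lambda}\,\lambda^{i}/i!$.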

And hence we obtain the desired functional form.


2 The Poisson Random Variable
Consider the following examples:

Any suggestions?

Any suggestions?

Any suggestions?
2 Continuous Random Variables
Continuous random variables take on values in an uncountable set. In fact, X is a
continuous random variable if there exists a nonnegative function f(x) defined for
all real x∈(-∞,∞) having the property that for any set B of real numbers
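\[
P(X \in B) = \int_{B} f(x)\,dx.
\]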

The function f(x) is called the probability density function (pdf) of the random
variable X. f(x) must satisfy:
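\[
f(x) \ge 0 \quad \text{for all } x, \qquad \int_{-\infty}^{\infty} f(x)\,dx = P\bigl(X \in (-\infty,\infty)\bigr) = 1.
\]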

Similarly:
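\[
P(a \le X \le b) = \int_{a}^{b} f(x)\,dx.
\]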
2 Continuous Random Variables
The probability that a continuous random variable assumes any particular value is zero
because
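\[
P(X = a) = \int_{a}^{a} f(x)\,dx = 0.
\]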

There is also a relationship between the distribution function and the density
function:
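\[
F(a) = P(X \le a) = \int_{-\infty}^{a} f(x)\,dx,
\]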

and:
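\[
\frac{d}{da}\,F(a) = f(a).
\]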
2 Continuous Random Variables
Since the probability that a continuous random variable assumes any particular
value is zero, one might wonder what f(a) represents. What do you think?

An intuitive interpretation is as follows:
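For a small ε > 0,
\[
P\!\left(a - \tfrac{\varepsilon}{2} \le X \le a + \tfrac{\varepsilon}{2}\right) = \int_{a-\varepsilon/2}^{a+\varepsilon/2} f(x)\,dx \;\approx\; \varepsilon\, f(a).
\]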

Hence, the probability that X takes on values in an interval of length ε with
a midpoint at a is approximately ε·f(a). In other words, f(a) is a measure of how likely it
is that X takes on values near a.
2 The Uniform Random Variable
This is the simplest of all continuous random variables. A well-known uniform
random variable is the one that takes on values between 0 and 1 and has a
probability density function given by:
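\[
f(x) = \begin{cases} 1, & 0 < x < 1, \\ 0, & \text{otherwise.} \end{cases}
\]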

Note that this is a density function because f(x) ≥ 0 and
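\[
\int_{-\infty}^{\infty} f(x)\,dx = \int_{0}^{1} 1\,dx = 1.
\]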

Such a random variable X is said to be uniformly distributed on (0,1). Shorthand notation to describe this is X ~ U(0,1).
2 The Uniform Random Variable
More generally, a uniform random variable X might be defined in an arbitrary
interval between α and β (i.e., X ~ U(α, β)), with a density function:
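\[
f(x) = \begin{cases} \dfrac{1}{\beta-\alpha}, & \alpha < x < \beta, \\[4pt] 0, & \text{otherwise.} \end{cases}
\]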

Can you calculate the distribution function of a uniform random variable?
Use the formula
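\[
F(a) = \int_{-\infty}^{a} f(x)\,dx.
\]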

You should get
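\[
F(a) = \begin{cases} 0, & a \le \alpha, \\[2pt] \dfrac{a-\alpha}{\beta-\alpha}, & \alpha < a < \beta, \\[2pt] 1, & a \ge \beta. \end{cases}
\]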


2 The Uniform Random Variable
Compute the following probabilities:
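As a hypothetical illustration of such a computation, if X ~ U(0, 10), then
\[
P(3 < X < 8) = F(8) - F(3) = \frac{8-3}{10} = \frac{1}{2}.
\]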
2 The Exponential Random Variable
A continuous random variable X is said to be exponentially distributed with
parameter λ (i.e., X ~ exp(λ)) if it has the density function:
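\[
f(x) = \begin{cases} \lambda e^{-\lambda x}, & x \ge 0, \\ 0, & x < 0. \end{cases}
\]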

What is the distribution function?
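Integrating the density,
\[
F(a) = P(X \le a) = \int_{0}^{a} \lambda e^{-\lambda x}\,dx = 1 - e^{-\lambda a}, \qquad a \ge 0,
\]
and F(a) = 0 for a < 0.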

As we will later see, exponential random variables are closely associated with
Poisson random variables.
2 The Gamma Random Variable
A gamma random variable X with parameters λ and α (X ~ gamma(α, λ)) has the
density function
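A commonly used form of this density is
\[
f(x) = \begin{cases} \dfrac{\lambda e^{-\lambda x}\,(\lambda x)^{\alpha-1}}{\Gamma(\alpha)}, & x \ge 0, \\[4pt] 0, & x < 0. \end{cases}
\]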

The quantity Γ(α) is known as the gamma function and
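\[
\Gamma(\alpha) = \int_{0}^{\infty} e^{-x}\, x^{\alpha-1}\,dx,
\]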

where, for a positive integer n, Γ(n) = (n − 1)!.
2 The Normal Random Variable
Mostly because it has a wide area of use in statistics, the normal distribution is
probably the most widely used distribution. It is famous for its symmetric,
bell-shaped curve.
A random variable X is said to be normally distributed with
parameters μ and σ2 (i.e., X ~ N(μ, σ2)) if it has the density:
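\[
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^{2}/(2\sigma^{2})}, \qquad -\infty < x < \infty.
\]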

This density function is symmetric around μ.


2 The Normal Random Variable
An important fact about normal random variables is that if X is normally
distributed with parameters μ and σ2, then Y = αX + β is distributed normally
with parameters αμ + β and α2σ2. A powerful implication of this result is that you
can transform a normal random variable X with parameters μ and σ2 into a
normal random variable with parameters 0 and 1 by defining Y = (X – μ)/σ.
Useful Properties of Normal Distributions

• Property 1: If X is N(µ,σ2), then cX is N(cµ, c2σ2).
• Property 2: If X is N(µ,σ2), then X + c (for any
constant c) is N(µ+c, σ2).
• Property 3: If X1 is N(µ1,σ12), X2 is N(µ2,σ22),
and X1 and X2 are independent, then X1+X2 is
N(µ1+µ2,σ12+ σ22).
Finding Normal Probabilities via Standardization

• If Z is a random variable that is N(0,1), then Z is said to be a standardized normal random variable.
• If X is N(µ,σ2), then Z = (X − µ)/σ is N(0,1). F(z), the cdf of Z, is tabulated.
• Suppose X is N(µ,σ2) and we want to find P(a ≤ X ≤ b). We use the
following relations (this procedure is called standardization):

\[
P(a \le X \le b) = P\!\left(\frac{a-\mu}{\sigma} \le \frac{X-\mu}{\sigma} \le \frac{b-\mu}{\sigma}\right)
                 = P\!\left(\frac{a-\mu}{\sigma} \le Z \le \frac{b-\mu}{\sigma}\right)
                 = F\!\left(\frac{b-\mu}{\sigma}\right) - F\!\left(\frac{a-\mu}{\sigma}\right)
\]
Standard Normal Table

Some probability values for Z
• P(Z<=3.0) = 0.9987
• P(Z<=2.33) = 0.9901
• P(Z<=1.64) = 0.9495
• P(Z<=1.28) = 0.8997
Normal distribution
• Example 12

Letting X denote the annual demand for Prozac, we seek a value x such that
P(X ≥ x) = 0.01, or equivalently P(X ≤ x) = 0.99. Thus, we seek the 99th percentile of Prozac
demand.
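Writing μ and σ for the (not otherwise specified) mean and standard deviation of annual demand, standardization together with the table value P(Z ≤ 2.33) = 0.9901 gives approximately
\[
P(X \le x) = P\!\left(Z \le \frac{x-\mu}{\sigma}\right) = 0.99 \;\Longrightarrow\; \frac{x-\mu}{\sigma} \approx 2.33 \;\Longrightarrow\; x \approx \mu + 2.33\,\sigma.
\]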
Extra: Random Variables, Mean, Variance, and
Covariance
