Lecture 8 Slides: Probability Lecture Notes

Continuous Probability Distributions

A continuous random variable has a probability of 0 of assuming exactly any of its values.
P(a < X ≤ b) = P(a < X < b) + P(X = b) = P(a < X < b).
Consequently, its probability distribution cannot be given in tabular form but it can be stated as a
formula. In dealing with continuous variables, f(x) is usually called the probability density
function, or simply the density function, of X.
A probability density function is constructed so that the area under its curve bounded by the x axis
is equal to 1 when computed over the range of X for which f(x) is defined.
P(a < X < b) = ∫_a^b f(x) dx.

Definition: The function f(x) is a probability density function (pdf) for the continuous random
variable X, defined over the set of real numbers, if

1. f(x) ≥ 0, for all x ∈ R.

2. ∫_−∞^∞ f(x) dx = 1.

3. P(a < X < b) = ∫_a^b f(x) dx.
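These three conditions can be checked numerically for any candidate density. The sketch below is an illustration in pure Python (no external libraries); the density f(x) = 2x on (0, 1) is an assumed example, not one from these notes, and the midpoint rule is one of many ways to approximate the integrals.

```python
# Midpoint-rule check of the pdf conditions for an illustrative
# density f(x) = 2x on (0, 1) (an assumed example, not from the notes).

def integrate(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 2 * x                  # condition 1: f(x) >= 0 on (0, 1)

total = integrate(f, 0, 1)           # condition 2: should equal 1
prob = integrate(f, 0, 0.5)          # condition 3: P(0 < X < 0.5) = 0.25
print(total, prob)
```

The same `integrate` helper works for any of the densities in the examples that follow, as long as the limits cover the region where f is nonzero.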

Mathematical Expectation (Mean of a Random Variable)


Definition: Let X be a random variable with probability distribution f(x). The mean, or expected
value, of X is

μ = E(X) = ∫_−∞^∞ x f(x) dx, if X is continuous.

Theorem: Let X be a random variable with probability distribution f(x). The expected value of the
random variable g(X) is

μ_g(X) = E[g(X)] = ∫_−∞^∞ g(x) f(x) dx, if X is continuous.

Also, if a and b are constants, then

E(aX ± b) = aE(X) ± b.
PHM111s - Probability and Statistics
Variance of Random Variables
Definition: Let X be a random variable with probability distribution f(x) and mean μ. The variance
of X is

σ² = E[(X − μ)²] = ∫_−∞^∞ (x − μ)² f(x) dx, if X is continuous.

Theorem: The variance of a random variable X is σ² = E(X²) − μ².


Theorem: Let X be a random variable with probability distribution f(x). The variance of the
random variable g(X)

is σ²_g(X) = E{[g(X) − μ_g(X)]²} = ∫_−∞^∞ [g(x) − μ_g(X)]² f(x) dx, if X is continuous.

Example 10.1: Suppose that the error in the reaction temperature, in °C, for a controlled
laboratory experiment is a continuous random variable X having the probability
density function

f(x) = x²/3, for −1 < x < 2, and 0 elsewhere.
(a) Verify that f(x) is a density function.
(b) Find P(0 < X ≤ 1).
(c) The expected value of g(X) = 4X + 3
(d) The variance of the random variable g(X).
Solution:
(a) Obviously, f(x) ≥ 0. To verify condition 2 in the previous definition, we have
∫_−∞^∞ f(x) dx = ∫_−1^2 (x²/3) dx = x³/9 |_−1^2 = 8/9 + 1/9 = 1.

(b) P(0 < X ≤ 1) = ∫_0^1 (x²/3) dx = x³/9 |_0^1 = 1/9.
(c) E(4X + 3) = ∫_−1^2 (4x + 3)(x²/3) dx = (1/3) ∫_−1^2 (4x³ + 3x²) dx = 8.

Or simply,

E(4X + 3) = 4E(X) + 3.

Now

E(X) = ∫_−1^2 x(x²/3) dx = ∫_−1^2 (x³/3) dx = x⁴/12 |_−1^2 = 5/4.

Therefore,

E(4X + 3) = (4)(5/4) + 3 = 8, as before.
(d) σ²_{4X+3} = Var(4X + 3) = E{[(4X + 3) − 8]²} = E[(4X − 5)²]

= ∫_−1^2 (4x − 5)² (x²/3) dx = (1/3) ∫_−1^2 (16x⁴ − 40x³ + 25x²) dx = 51/5 ⇒ σ = √(51/5).

Or simply,

Var(4X + 3) = 16 Var(X).

E(X) = 5/4,

E(X²) = ∫_−1^2 x²(x²/3) dx = ∫_−1^2 (x⁴/3) dx = x⁵/15 |_−1^2 = 11/5,

σ² = 11/5 − (5/4)² = 51/80.

⇒ Var(4X + 3) = 16 Var(X) = 16(51/80) = 51/5 ⇒ σ = √(51/5).
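All four answers in Example 10.1 can be cross-checked numerically. The sketch below is a pure-Python midpoint-rule check, not part of the original solution:

```python
# Numerical cross-check of Example 10.1: f(x) = x²/3 on (-1, 2).

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x**2 / 3

total = integrate(f, -1, 2)                                # (a) should be 1
p = integrate(f, 0, 1)                                     # (b) P(0 < X <= 1) = 1/9
e_g = integrate(lambda x: (4 * x + 3) * f(x), -1, 2)       # (c) E(4X + 3) = 8
var_g = integrate(lambda x: ((4 * x + 3) - 8)**2 * f(x), -1, 2)  # (d) Var = 51/5
print(total, p, e_g, var_g)
```

Note that the variance integrand uses the mean 8 computed in part (c), mirroring the definition Var(g(X)) = E{[g(X) − μ_g(X)]²}.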

Example 10.2: Suppose that X is a continuous random variable whose probability density
function is given by
f(x) = C(4x − 2x²), for 0 < x < 2, and 0 otherwise.
(a) What is the value of C?
(b) Find P(X > 1).
Solution:

(a) Since f is a probability density function, we must have ∫_−∞^∞ f(x) dx = 1, implying that

C ∫_0^2 (4x − 2x²) dx = 1

or

C [2x² − 2x³/3] |_{x=0}^{x=2} = 1

or

C = 3/8.

Hence,

(b) P(X > 1) = ∫_1^∞ f(x) dx = (3/8) ∫_1^2 (4x − 2x²) dx = 1/2.
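The normalizing constant C and the probability in part (b) can also be recovered numerically. A sketch using the same midpoint-rule idea as before:

```python
# Example 10.2 cross-check: find C so that C * ∫_0^2 (4x - 2x²) dx = 1,
# then compute P(X > 1) = C * ∫_1^2 (4x - 2x²) dx.

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

base = lambda x: 4 * x - 2 * x**2

C = 1 / integrate(base, 0, 2)      # ∫_0^2 base = 8/3, so C = 3/8
p = C * integrate(base, 1, 2)      # P(X > 1) = 1/2
print(C, p)
```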



Example 10.3: The amount of time in hours that a computer functions before breaking down is a
continuous random variable with probability density function given by
f(x) = λ e^(−x/100), for x ≥ 0, and 0 for x < 0.
(a) Find λ.
(b) What is the probability that a computer will function between 50 and 150 hours before
breaking down?
(c) What is the probability that it will function for fewer than 100 hours?
Solution:
(a) Since 1 = ∫_−∞^∞ f(x) dx = λ ∫_0^∞ e^(−x/100) dx = −100λ e^(−x/100) |_0^∞ = 100λ,

we obtain λ = 1/100.

(b) P(50 < X < 150) = ∫_50^150 (1/100) e^(−x/100) dx = −e^(−x/100) |_50^150 = e^(−1/2) − e^(−3/2) ≈ 0.383.

(c) Similarly, P(X < 100) = ∫_0^100 (1/100) e^(−x/100) dx = −e^(−x/100) |_0^100 = 1 − e^(−1) ≈ 0.632.

In other words, approximately 63.2% of the time, a computer will fail before registering 100 hours of use.
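Because the antiderivative here is in closed form, the two probabilities reduce to differences of exponentials, P(a < X < b) = e^(−λa) − e^(−λb). A quick check with the standard `math` module:

```python
import math

lam = 1 / 100                                      # λ from part (a)

# P(a < X < b) = e^(-λa) - e^(-λb) for this density
p_b = math.exp(-50 * lam) - math.exp(-150 * lam)   # P(50 < X < 150)
p_c = 1 - math.exp(-100 * lam)                     # P(X < 100)
print(p_b, p_c)    # ≈ 0.3834 and ≈ 0.6321
```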

Example 10.4: The lifetime in hours of a certain kind of radio tube is a random variable having a
probability density function given by
f(x) = 0 for x < 100, and 100/x² for x ≥ 100.
What is the probability that exactly 2 of 5 such tubes in a radio set will have to be
replaced within the first 150 hours of operation? Assume that the events E i , i = 1, 2,
3, 4, 5, that the ith such tube will have to be replaced within this time are
independent.
Solution:
From the statement of the problem, we have
P(Eᵢ) = ∫_0^150 f(x) dx = ∫_100^150 100 x^(−2) dx = 100 [−1/x] |_100^150 = 1/3.

Hence, from the independence of the events Eᵢ, it follows that the desired probability is

(5 choose 2) (1/3)² (2/3)³ = 80/243.
Example 10.7: Let X be the random variable that denotes the life in hours of a certain electronic
device. The probability density function is
f(x) = 20,000/x³, for x > 100, and 0 elsewhere.
Find the expected life of this type of device.

Solution:
μ = E(X) = ∫_100^∞ x (20,000/x³) dx = ∫_100^∞ (20,000/x²) dx = 200.
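This is an improper integral; truncating at a finite upper limit b leaves an exact tail of 20,000/b, so numerical values should approach 200 as b grows. A sketch of that convergence:

```python
# E(X) = ∫_100^∞ 20000/x² dx for Example 10.7, approximated by
# truncating the upper limit at increasing values of b.

def integrate(g, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

integrand = lambda x: x * (20_000 / x**3)      # x · f(x) = 20000/x²

for b in (1_000, 10_000, 100_000):
    print(b, integrate(integrand, 100, b))     # ≈ 180, 198, 199.8
```

The truncated values equal 20,000(1/100 − 1/b) exactly, which makes the 20,000/b tail visible in the printed numbers.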
Definition: The cumulative distribution function F(x) of a continuous random variable X with
density function f(x) is
F(x) = P(X ≤ x) = ∫_−∞^x f(t) dt, for −∞ < x < ∞.

1. 0 ≤ F(x) ≤ 1.

2. F(−∞) = 0 and F(∞) = 1.

3. P(X < a) = F(a) and 4. P(X > a) = 1 − F(a).

As an immediate consequence of this definition, one can write the two results

5. P(a < X < b) = F(b) − F(a) and 6. f(x) = dF(x)/dx, if the derivative exists.
Example 10.5: For the density function f(x) = x²/3 for −1 < x < 2 (and 0 elsewhere), find F(x), and use it to evaluate P(0 < X ≤ 1).
Solution:
For x < −1, F(x) = ∫_−∞^x f(t) dt = 0.

For −1 ≤ x < 2, F(x) = ∫_−1^x (t²/3) dt = t³/9 |_−1^x = (x³ + 1)/9.

For x ≥ 2, F(x) = ∫_−∞^x f(t) dt = 0 + ∫_−1^2 (t²/3) dt + 0 = 0 + 1 + 0 = 1.

Therefore,



F(x) = 0 for x < −1,
       (x³ + 1)/9 for −1 ≤ x < 2,
       1 for x ≥ 2.

The cumulative distribution function F(x) increases continuously from 0 at x = −1 to 1 at x = 2. Now

P(0 < X ≤ 1) = F(1) − F(0) = 2/9 − 1/9 = 1/9,

and note that f(x) = dF(x)/dx = x²/3 for −1 < x < 2, and 0 elsewhere.
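Once the piecewise F(x) is in hand, probabilities follow by subtraction with no further integration. A sketch of the CDF from Example 10.5:

```python
# Piecewise CDF of Example 10.5, used to evaluate probabilities
# without re-integrating the density each time.

def F(x):
    """F(x) = 0 for x < -1, (x³ + 1)/9 for -1 <= x < 2, 1 for x >= 2."""
    if x < -1:
        return 0.0
    if x < 2:
        return (x**3 + 1) / 9
    return 1.0

print(F(1) - F(0))     # P(0 < X ≤ 1) = 2/9 − 1/9 = 1/9
```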

