
Continuous Random Variables and Their Properties
Probability Density Function "PDF" for a CRV
• Most engineering data are continuous in nature: weights, dimensions, masses, capacities, strengths, limits, etc.
• A probability density function "PDF" f(x) for a continuous random variable X describes the probability of X falling within a range of values.
• The PDF has the following properties:
  f(x) ≥ 0 for all x
  ∫ f(x) dx = 1 (over the whole range of X)
  P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx
• One can easily see that the probability of having X = a exactly is zero for any continuous variable X, since the area under a single point vanishes.
• The cumulative distribution function is F(x) = P(X ≤ x) = ∫ from −∞ to x of f(t) dt
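The properties above can be checked numerically. As an illustrative sketch (not from the slides), the block below uses the simple valid density f(x) = 2x on [0, 1]:

```python
# Numerical check of the PDF properties for the toy density
# f(x) = 2x on [0, 1] (an assumed example, chosen for illustration).
from scipy.integrate import quad

def f(x):
    # PDF: 2x inside [0, 1], zero elsewhere
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

# Property: the total area under the PDF equals 1.
total, _ = quad(f, 0.0, 1.0)

# P(a <= X <= b) is the area under f between a and b.
p_range, _ = quad(f, 0.25, 0.75)   # P(0.25 <= X <= 0.75) = 0.75^2 - 0.25^2 = 0.5

# P(X = a) at a single point is zero for any continuous variable.
p_point, _ = quad(f, 0.5, 0.5)     # zero-width interval -> zero area
```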
Expected Value and Variance of a CRV
• The expected value is defined as:
  E(X) = μ = ∫ x f(x) dx
• The variance is defined as:
  Var(X) = σ² = ∫ (x − μ)² f(x) dx = E(X²) − μ²
• The standard deviation is:
  σ = √Var(X)
• Generally, for any CRV with PDF f(x) and any function g, we have:
  E[g(X)] = ∫ g(x) f(x) dx
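These integrals can be evaluated directly. A minimal sketch, again using the assumed toy density f(x) = 2x on [0, 1]:

```python
# E(X), Var(X), and sigma computed by direct integration of the
# toy PDF f(x) = 2x on [0, 1] (an illustrative example).
from math import sqrt
from scipy.integrate import quad

f = lambda x: 2.0 * x                        # PDF on [0, 1]
mean, _ = quad(lambda x: x * f(x), 0, 1)     # E(X) = 2/3
ex2, _ = quad(lambda x: x**2 * f(x), 0, 1)   # E(X^2) = 1/2
var = ex2 - mean**2                          # Var(X) = 1/2 - 4/9 = 1/18
sigma = sqrt(var)
```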
Uniform Continuous Variable
• A uniform CRV on [a, b] has the constant PDF f(x) = 1/(b − a) for a ≤ x ≤ b and zero elsewhere.
• Its mean is (a + b)/2 and its variance is (b − a)²/12.
The Normal Distribution
• This is the most common type of PDF.
• It applies to almost any "averaged" value from a large sample (the central limit theorem).
• Some parameters tend to naturally follow the normal distribution.
The Standard Normal Distribution
• The standardized RV for any normally distributed variable X (also called the z-score of x) is defined as:
  z = (x − μ)/σ
• The standard normal variable Z has mean 0 and standard deviation 1.
Example:
• The compressive strength of concrete cylinders follows a normal distribution with a mean of 30 MPa and a standard deviation of 2 MPa.
1. What is the probability that a specimen will have a strength less than 30 MPa?
2. What is the probability of having a specimen of strength between 26 and 34 MPa?
3. What is the strength that exceeds 95% of all specimen strengths? (the 95th percentile)
4. What is the strength that is exceeded by 95% of all specimen strengths?

Here: μ = 30 MPa and σ = 2 MPa.
1. P(X < 30) = P(Z < 0) = the area under the normal curve below z = 0, which equals 50%.
2. P(26 < X < 34) = P(−2 < Z < 2) = the area within 2 standard deviations around the zero mean of the standard normal distribution = 95.44%.
3. Wanted: x such that P(X < x) = 0.95. From the tabulated z-scores, a cumulative area of 0.95 occurs at z = 1.64 = (x − 30)/2, so x = 33.28 MPa.
4. Wanted: x such that P(X > x) = 0.95. A cumulative area of 0.05 occurs at z = −1.64, which gives x = −1.64 × 2 + 30 = 26.72 MPa.
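The four answers can be checked with scipy.stats.norm. Note scipy uses the exact z-value 1.645 rather than the rounded 1.64, so the percentiles come out at 33.29 / 26.71 MPa instead of 33.28 / 26.72:

```python
# Checking the concrete-strength example with mu = 30 MPa, sigma = 2 MPa.
from scipy.stats import norm

strength = norm(loc=30, scale=2)

p1 = strength.cdf(30)                      # P(X < 30) = 0.5
p2 = strength.cdf(34) - strength.cdf(26)   # P(26 < X < 34) ~ 0.9545
x95 = strength.ppf(0.95)                   # 95th percentile ~ 33.29 MPa
x05 = strength.ppf(0.05)                   # 5th percentile  ~ 26.71 MPa
```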
Normal Distribution in Excel
• In Microsoft Excel there are commands that compute probabilities using the normal distribution (or its inverse).
• The command =NORM.DIST(x, mean, standard_dev, TRUE) gives P(X < x). The "TRUE" requests the cumulative distribution value, i.e. the area below x (from −∞ to x).
• The command =NORM.INV(p, mean, standard_dev) gives the value x such that the probability of having values below x equals p, i.e. P(X < x) = p.
• In our previous example, finding the 95th percentile can be solved using =NORM.INV(0.95, 30, 2), which gives 33.29 (33.28 when the rounded z = 1.64 is used).
• Note that any other areas, not of the form P(X < x), require manipulation using the symmetry property of the normal distribution.
Normal Approximation to Other Distributions
• The binomial and Poisson distributions can be approximated by normal distributions: Binomial(n, p) ≈ N(np, np(1 − p)) when np and n(1 − p) are both larger than 5, and Poisson(λ) ≈ N(λ, λ) when λ is large.
Log-Normal Distribution
• Most of the time, data take only positive values and cannot acquire negative values.
• In such cases, the variables are either shifted around their mean value or modelled as log-normal variables.
• Definition: if a parameter X follows a log-normal distribution, then Y = Ln(X) follows a normal distribution.
• The mean and variance of X are related to the mean μY and variance σY² of the normal variable Y as:
  μX = exp(μY + σY²/2)  and  σX² = μX² [exp(σY²) − 1]
• Inverting: σY² = ln[1 + (σX/μX)²] and μY = ln(μX) − σY²/2.
Example
• The probability of a certain live load value on a floor is described by a log-normal distribution. If the mean value for the live load is 1.5 kN/m² and the standard deviation is 0.3 kN/m², determine the 95th percentile for the live load.

Since LL is log-normally distributed, Y = Ln(LL) is normally distributed. The 95th percentile for a normal distribution occurs at 1.64 standard deviations above the mean, thus:
y95 = μY + 1.64 σY
But σY² = ln[1 + (0.3/1.5)²] and μY = ln(1.5) − σY²/2.
Solving: σY = 0.198 and μY = 0.385.
y95 = 0.385 + 1.64 × 0.198 = 0.710
LL95 = e^0.710 = 2.03 kN/m²
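The live-load example can be checked with scipy.stats.lognorm. In scipy's parameterization, s is σY and scale is exp(μY), where Y = Ln(X) is the underlying normal variable; scipy's exact z = 1.645 gives a slightly larger value than the rounded hand calculation:

```python
# Checking the log-normal live-load example (mean 1.5, std 0.3 kN/m^2).
from math import exp, log, sqrt
from scipy.stats import lognorm

mu_x, sigma_x = 1.5, 0.3                       # mean and std of the live load
sigma_y = sqrt(log(1 + (sigma_x / mu_x)**2))   # ~ 0.198
mu_y = log(mu_x) - sigma_y**2 / 2              # ~ 0.385

ll = lognorm(s=sigma_y, scale=exp(mu_y))
ll95 = ll.ppf(0.95)    # 95th percentile of the load itself, ~ 2.04 kN/m^2
```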
Joint Probability Function
• Most of the time, experiments or data concern many random variables at the same time, and these may or may not be independent of each other, e.g. the dimensions of a specimen, or the damage energy and peak frequency of an earthquake.
• The probability density function that describes the simultaneous probability of the variables is called a joint probability density {or mass} function fXY(x, y).
• fXY(x, y) ≥ 0 {CRV}
• ∫∫ fXY(x, y) dx dy = 1 over the whole plane {CRV}
• If X and Y are independent, then fXY(x, y) = fX(x) fY(y). The converse is also true.

Marginal and Conditional Probability
• If we have a joint PDF for two variables X and Y, then the marginal probability density function for X at a specific value x is the sum (or integral) of the joint probabilities over all possible values of Y.
• For discrete variables: fX(x) = Σy fXY(x, y)
• For continuous variables: fX(x) = ∫ fXY(x, y) dy
• Recall that the conditional probability P(A|B) = P(A ∩ B)/P(B); we use this as the basis for defining the conditional probability density function of X given Y:
  fX|Y(x|y) = fXY(x, y)/fY(y)
• As usual, the probabilities are simply the areas (for CRV) or the sums (for DRV) obtained from the density functions.
Function of One Random Variable
• Most of the time we use functions of parameters that are random, e.g.: the pressure "p" behind a retaining wall is a function of soil density "γ", height of soil "h", and lateral pressure coefficient Ka, i.e. p = γ h Ka.
• Let us say we have an invertible function W = G(X), with G(x) monotonic (either ever increasing or ever decreasing).
• Then, logically, the probability of X lying in a range equals the probability of W lying in the corresponding range, i.e. P(W ≤ w) = P(X ≤ G⁻¹(w)) for increasing G.
• The probabilities are computed using the cumulative function:
For DRV: fW(w) = fX(G⁻¹(w))
For CRV: FW(w) = FX(G⁻¹(w)), hence:
fW(w) = fX(G⁻¹(w)) |dG⁻¹(w)/dw|
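The change-of-variable rule can be sanity-checked by simulation. As an assumed toy case (not from the slides), take W = X² with X uniform on (0, 1); then G⁻¹(w) = √w and FW(w) = FX(√w):

```python
# Monte Carlo check of F_W(w) = F_X(G^{-1}(w)) for W = X**2,
# X ~ Uniform(0, 1). Here F_X(sqrt(0.25)) = sqrt(0.25) = 0.5.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1_000_000)
w = x**2

# Empirical P(W <= 0.25) should match F_X(sqrt(0.25)) = 0.5
p_emp = np.mean(w <= 0.25)
```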
Example
• The basic wind pressure is related to the wind velocity as p = cV², where c is a constant. The PDF of the velocity V is fV(v) = (a/v) exp(−b/v), where a and b are constant parameters. Find the PDF of the wind pressure p.

Here G⁻¹(p) = v = √(p/c), and thus dG⁻¹/dp = 1/(2√(cp)). Hence, taking the absolute value of the derivative (because the PDF must be non-negative):
fp(p) = fV(√(p/c)) × 1/(2√(cp)) = [a/(2p)] exp(−b√(c/p))

The expected pressure then follows as E(p) = ∫ from 0 to ∞ of p fp(p) dp.
Function of Two Random Variables
• Let W be a function of two random variables X and Y.
• Assume we know the joint PDF for X and Y, i.e. fXY(x, y) is given.
• We can identify the PDF of W for 4 cases:
1. W = X + Y: fW(w) = ∫ fXY(w − y, y) dy, note: w − y is in fact x.
2. W = X − Y: fW(w) = ∫ fXY(w + y, y) dy
3. W = X / Y: fW(w) = ∫ |y| fXY(wy, y) dy
4. W = XY: fW(w) = ∫ (1/|y|) fXY(w/y, y) dy
The Expected Value and Variance, DRV
• For discrete variables: E[g(X, Y)] = Σx Σy g(x, y) fXY(x, y); in particular E(X) = Σx Σy x fXY(x, y).
The Expected Value and Variance, CRV
• For continuous variables: E[g(X, Y)] = ∫∫ g(x, y) fXY(x, y) dx dy.
Example:
• The joint probability function of two discrete random variables X and Y is given by f(x, y) = c(2x + y), where x and y can assume all integers such that 0 ≤ x ≤ 2, 0 ≤ y ≤ 3, and f(x, y) = 0 otherwise.
1. Find the value of the constant c. 2. Find P(X=2, Y=1). 3. Find P(1 ≤ X, Y ≤ 2). 4. Find the marginal probability function for X. 5. Find the mean value of X.

1. The constant c must satisfy the basic property of a probability function, Σx Σy c(2x + y) = 1. Doing the sum over 0 ≤ x ≤ 2, 0 ≤ y ≤ 3 gives 42c = 1, so c = 1/42.
2. P(X=2, Y=1) = (1/42)(2×2 + 1) = 5/42
3. P(1 ≤ X, Y ≤ 2) = (1/42) Σ over x = 1, 2 and y = 0, 1, 2 of (2x + y) = 24/42 = 4/7
4. P(X = x) = Σ over y = 0..3 of (2x + y)/42 = (8x + 6)/42 = (4x + 3)/21 for x = 0, 1, 2
5. E(X) = Σx x P(X = x) = 0(6/42) + 1(14/42) + 2(22/42) = 58/42 = 29/21 ≈ 1.38
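The discrete sums above are easy to verify by direct enumeration:

```python
# Enumeration check of the discrete joint example f(x, y) = c(2x + y),
# 0 <= x <= 2, 0 <= y <= 3.
from fractions import Fraction

total = sum(2*x + y for x in range(3) for y in range(4))   # 42, so c = 1/42
c = Fraction(1, total)

p_21 = c * (2*2 + 1)                                       # P(X=2, Y=1) = 5/42
p_region = c * sum(2*x + y for x in range(1, 3)
                   for y in range(0, 3))                   # P(1<=X, Y<=2) = 24/42 = 4/7
marginal = {x: c * sum(2*x + y for y in range(4)) for x in range(3)}
mean_x = sum(x * p for x, p in marginal.items())           # E(X) = 29/21
```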
Example:
•• Two one-way roads converge into a single road. The probability of traffic size on each road is described using
 Poisson Distribution, with road 1 having an average of 50 car per hour, and the second having 30 cars per hour.
Denote the number of cars in road 1 as X and the second as Y. Assume independence
• 1. Find the joint probability density function for X and Y.
• 2. The marginal probability density function for X.
• 3. Find the probability that each road has between 30 and 50 cars.
• 4. Find the probability that road 1 has between 30 and 50 cars given that the 20 cars are from road 2.
1. Assuming independence, then

3. Since and are bigger than 5, the distribution of each variable can be approximated using a Normal distribution,
therefore:
P(30<X<50,30<Y<50) = P( [30-50]/50<ZX<[50-50]/50, [30-30]/30<ZY<[50-30]/30)
= P( -0.4<ZX<0 , 0 <ZY<0.67 )= P( -0.4<ZX<0)*P(0 <ZY<0.67 )
= 0.155*0.247 = 0.038 = 3.8%
4. P(30<X<50 |Y=20) = P(30<X<50 , Y=20) / P(Y=20) = P(30<X<50)*P(Y=20)/P(Y=20)=0.155 … This is trivial because
Y and X are independent.
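Part 3 can also be computed exactly from the Poisson pmf and compared against the normal approximation (the approximation overestimates somewhat here because it ignores the continuity correction):

```python
# Exact vs. approximate P(30 < X < 50) * P(30 < Y < 50),
# X ~ Poisson(50), Y ~ Poisson(30), independent.
from scipy.stats import norm, poisson

# Exact: sum the pmf over the strict integer range 31..49.
px = sum(poisson.pmf(k, 50) for k in range(31, 50))
py = sum(poisson.pmf(k, 30) for k in range(31, 50))
exact = px * py

# Normal approximation with mean = lambda, sigma = sqrt(lambda).
px_n = norm.cdf(50, 50, 50**0.5) - norm.cdf(30, 50, 50**0.5)
py_n = norm.cdf(50, 30, 30**0.5) - norm.cdf(30, 30, 30**0.5)
approx = px_n * py_n   # ~ 0.249
```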
Covariance
• The covariance of X and Y is defined as σXY = Cov(X, Y) = E[(X − μX)(Y − μY)] = E(XY) − E(X)E(Y).
• Covariance is a measure of how linear the relationship is between the random variables.
• If the covariance is negative, the relationship is probably inversely linear between X and Y; if positive, then it is likely directly linear.
• If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship.
• Notice that if X and Y are independent, then E(XY) = E(X)E(Y), and thus σXY = 0.
Examples on Covariance
The Correlation Coefficient
• The correlation coefficient is a "normalized" covariance: ρXY = σXY/(σX σY), with −1 ≤ ρXY ≤ 1.
• Correlation also measures how "linear" the relationship is between X and Y.
• If X and Y are independent, their correlation coefficient is zero; a zero correlation by itself, however, does not guarantee independence.
Mean and Variance of a
Combination of Random Variables
• Sometimes a random variable is constructed as a linear combination of other random variables.
• For instance, take Y = a0 + a1X1 + a2X2 + … where the ai are constants and the Xi are random variables.
• For such cases, due to linearity, the expected value and variance of Y are simply:
E(Y) = a0 + a1E(X1) + a2E(X2) + …
Var(Y) = a1²Var(X1) + a2²Var(X2) + … + 2 Σ over i < j of ai aj Cov(Xi, Xj)
• If X1, X2, … are independent, then Var(Y) = a1²Var(X1) + a2²Var(X2) + …
Mean and Variance of a Nonlinear Function
of Random Variables
• If the relation between the random variables is nonlinear, we can generally only find approximations to the moments of the result.
• If X1 and X2 are independent and W = X1 X2, then:
E(W) = E(X1) E(X2)
Var(W) = E(X1)²Var(X2) + E(X2)²Var(X1) + Var(X1)Var(X2)
• If W = G(X1, X2, …, Xn), then (as a first-order approximation):
E(W) ≈ G(E(X1), E(X2), …, E(Xn))
• If the Xi are not correlated, then Var(W) ≈ Σ (∂G/∂Xi)² Var(Xi), with the derivatives evaluated at the mean values.
Example:
• Let
  the random variables X1 and X2 denote the height and width, respectively, of a
beam cross-section. Assume E(X1)=40 centimeters with standard deviation 2
centimeter and E(X2)=60 centimeters with standard deviation 4 centimeter. The
covariance between X1and X2 is (– 0.3) cm. If both variables are represented by a
normal distribution, Determine the 95th percentile of the perimeter of the beam.

The perimeter is Y = 2X1+2X2 and it is a random variable, which is also normally


distributed but with its own mean and variance. We have
E(Y) = 2*40+2*60 = 200 cm
Var(Y) = 4*22+4*42+2*2*2*(-0.3) = 77.6  cm
P95 = 200 + 1.64*8.8 = 214 cm
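The beam-perimeter numbers can be reproduced directly from the linear-combination formulas:

```python
# Check of the beam-perimeter example: Y = 2*X1 + 2*X2 with the
# given means, standard deviations, and covariance.
from math import sqrt
from scipy.stats import norm

mean_y = 2*40 + 2*60                             # 200 cm
var_y = 4*2**2 + 4*4**2 + 2*2*2*(-0.3)           # 77.6 cm^2
sigma_y = sqrt(var_y)                            # ~ 8.81 cm
p95 = norm.ppf(0.95, loc=mean_y, scale=sigma_y)  # ~ 214.5 cm (exact z = 1.645)
```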
