

Covariances

Definition
Cov(X, Y) = E{[X − E(X)][Y − E(Y)]}   (1)
          = E[XY − Y E(X) − X E(Y) + E(X)E(Y)]
          = E(XY) − E(X)E(Y) − E(Y)E(X) + E(X)E(Y)
          = E(XY) − E(X)E(Y)   (2)

Remark It is easy to see from definition (1) that Cov(X, X) = E{[X − E(X)]^2} = Var(X), and from the convenient computational formula (2) that Cov(X, X) = E(X^2) − [E(X)]^2.

ActSc 613 | Lecture 8 © 2015-17 by M. Zhu, PhD 1 of 8



Example 1

Let (X, Y, Z) ∼ trinomial(n; p1, p2, p3), where p1 + p2 + p3 = 1, X + Y + Z = n, and the joint distribution of (X, Y) is

f(x, y) = P(X = x, Y = y) = [n! / (x! y! (n − x − y)!)] p1^x p2^y (1 − p1 − p2)^(n − x − y).

What's Cov(X, Y)?
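The standard closed form is Cov(X, Y) = −n p1 p2. As a quick numerical check (not part of the slides; the parameter values below are arbitrary), one can enumerate the joint pmf directly:

```python
from math import factorial

def trinomial_cov(n, p1, p2):
    """Compute Cov(X, Y) = E(XY) - E(X)E(Y) by enumerating the joint pmf."""
    exy = ex = ey = 0.0
    for x in range(n + 1):
        for y in range(n - x + 1):
            # f(x, y) = [n!/(x! y! (n-x-y)!)] p1^x p2^y (1-p1-p2)^(n-x-y)
            coef = factorial(n) // (factorial(x) * factorial(y) * factorial(n - x - y))
            p = coef * p1**x * p2**y * (1 - p1 - p2)**(n - x - y)
            exy += x * y * p
            ex += x * p
            ey += y * p
    return exy - ex * ey

# Agrees with the closed form -n * p1 * p2:
print(trinomial_cov(4, 0.2, 0.3))   # close to -0.24
```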


Linear Combinations of RVs

Example For a,b non-random,


Var(aX + bY) = E[(aX + bY)^2] − [E(aX + bY)]^2
             = {a^2 E(X^2) + b^2 E(Y^2) + 2ab E(XY)}
               − {a^2 [E(X)]^2 + b^2 [E(Y)]^2 + 2ab E(X)E(Y)}
             = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y).

Rules E(a1 X1 + ... + an Xn) = a1 E(X1) + ... + an E(Xn)

Var(Σ_{i=1}^n ai Xi) = Σ_{i=1}^n ai^2 Var(Xi) + Σ_{i≠j} ai aj Cov(Xi, Xj)

Cov(Σ_{i=1}^n ai Xi, Σ_{j=1}^m bj Yj) = Σ_{i=1}^n Σ_{j=1}^m ai bj Cov(Xi, Yj)
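These rules can be verified exactly on any finite joint distribution. A minimal sketch (the pmf and the coefficients a, b below are arbitrary choices, not from the slides), checking Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y):

```python
# Arbitrary small joint pmf P(X = x, Y = y)
pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

def ev(g):
    """E[g(X, Y)] under the joint pmf."""
    return sum(g(x, y) * p for (x, y), p in pmf.items())

a, b = 2.0, -3.0
var_direct = ev(lambda x, y: (a * x + b * y) ** 2) - ev(lambda x, y: a * x + b * y) ** 2
var_x = ev(lambda x, y: x * x) - ev(lambda x, y: x) ** 2
var_y = ev(lambda x, y: y * y) - ev(lambda x, y: y) ** 2
cov_xy = ev(lambda x, y: x * y) - ev(lambda x, y: x) * ev(lambda x, y: y)
var_formula = a * a * var_x + b * b * var_y + 2 * a * b * cov_xy

print(var_direct, var_formula)   # the two agree
```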


Independence ⇒ Zero Covariance

Definition X and Y are independent if f (x, y) = fX (x)fY (y).

Implication Then,

E(XY) = ∫∫ xy f(x, y) dx dy = ∫∫ xy fX(x) fY(y) dx dy
      = ∫ y fY(y) [∫ x fX(x) dx] dy = E(X) ∫ y fY(y) dy = E(X)E(Y),

so Cov(X, Y ) = E(XY ) − E(X)E(Y ) = 0.
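The same cancellation holds in the discrete case: when the joint pmf factors as fX(x) fY(y), the double sum for E(XY) factors into E(X)E(Y). A small exact check with rationals (the marginal pmfs below are arbitrary):

```python
from fractions import Fraction as F

# Arbitrary marginals; the joint pmf is their product (independence)
fx = {0: F(1, 4), 1: F(3, 4)}
fy = {-1: F(1, 2), 2: F(1, 2)}
joint = {(x, y): fx[x] * fy[y] for x in fx for y in fy}

exy = sum(x * y * p for (x, y), p in joint.items())
ex = sum(x * p for x, p in fx.items())
ey = sum(y * p for y, p in fy.items())

print(exy - ex * ey)   # exactly 0
```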


Example 2

Suppose X1, X2, ... are i.i.d. random variables with E(Xi) = µ and Var(Xi) = σ^2. Let

Sn = X1 + X2 + ... + Xn.

(a) What are E(Sn) and Var(Sn)?

(b) What are E(SN) and Var(SN), if N is a random variable as well (independent of the Xi's)?

(c) Suppose X1, X2, ... are insurance claims, with µ = $500, σ = $100, and N ∼ Poisson(100). Then SN represents the total liability. Compare Var(S100) and Var(SN).
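For part (c), the standard compound-sum formula Var(SN) = E(N)·Var(X1) + Var(N)·[E(X1)]^2 (valid when N is independent of the Xi's) reduces the comparison to arithmetic; for N ∼ Poisson(λ), E(N) = Var(N) = λ:

```python
mu, sigma, lam = 500.0, 100.0, 100.0   # E(Xi), SD(Xi), Poisson mean

# Fixed number of claims: Var(S_100) = 100 * sigma^2
var_s_fixed = 100 * sigma**2

# Random number of claims: Var(S_N) = E(N)*Var(X1) + Var(N)*E(X1)^2
var_s_random = lam * sigma**2 + lam * mu**2

print(var_s_fixed, var_s_random)   # 1,000,000 vs 26,000,000
```

The randomness in N dominates here: Var(SN) is 26 times Var(S100), almost all of it from the Var(N)·µ^2 term.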


Zero Covariance ⇏ Independence

Counterexample Consider the joint pmf f(x, y):

x\y     −1      0     +1
−1       0    0.1      0
 0     0.1    0.6    0.1
+1       0    0.1      0

Exercise Show that Cov(X, Y) = 0 but that X and Y are not independent.
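A direct computation on the table (a sketch of the exercise): every cell with positive probability has x·y = 0, so E(XY) = 0, and E(X) = E(Y) = 0 by symmetry; yet f(1, 1) = 0 while fX(1)·fY(1) = 0.01.

```python
# The joint pmf from the table (only the nonzero cells)
f = {(-1, 0): 0.1, (0, -1): 0.1, (0, 0): 0.6, (0, 1): 0.1, (1, 0): 0.1}

exy = sum(x * y * p for (x, y), p in f.items())
ex = sum(x * p for (x, y), p in f.items())
ey = sum(y * p for (x, y), p in f.items())
cov = exy - ex * ey
print(cov)   # 0.0

# Independence fails: compare f(1, 1) with fX(1) * fY(1)
fx1 = sum(p for (x, y), p in f.items() if x == 1)
fy1 = sum(p for (x, y), p in f.items() if y == 1)
print(f.get((1, 1), 0.0), fx1 * fy1)   # 0.0 vs roughly 0.01
```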

Puzzle What kind of dependence is captured by the covariance?



Correlation

Definition
Corr(X, Y) = Cov(X, Y) / √[Var(X) Var(Y)]

Theorem −1 ≤ Corr(X, Y) ≤ +1, with equality if and only if X and Y are almost surely linear functions of each other.

(⇐): If Y = tX + c, then Cov(X, Y) = t Cov(X, X) + Cov(X, c) = t Var(X), so

Corr(X, Y) = t Var(X) / √[Var(X) · t^2 Var(X)] = t / |t| = ±1.

Exercise Complete the proof. [Hint: Notice that Var(Y + tX + c), a quadratic function of t, is ≥ 0 ∀ t ∈ R, and consider when it is = 0.]
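A numerical illustration of the equality case (the pmf for X and the values of t, c below are arbitrary): for Y = tX + c, the correlation comes out to t/|t|.

```python
# Arbitrary pmf for X; Y = t*X + c is a linear function of X
pmf_x = {0: 0.25, 1: 0.5, 3: 0.25}

def corr_with_linear(t, c):
    """Corr(X, Y) for Y = t*X + c, computed from first principles."""
    ex = sum(x * p for x, p in pmf_x.items())
    ey = sum((t * x + c) * p for x, p in pmf_x.items())
    var_x = sum((x - ex) ** 2 * p for x, p in pmf_x.items())
    var_y = sum((t * x + c - ey) ** 2 * p for x, p in pmf_x.items())
    cov = sum((x - ex) * (t * x + c - ey) * p for x, p in pmf_x.items())
    return cov / (var_x * var_y) ** 0.5

print(corr_with_linear(2.0, 5.0), corr_with_linear(-0.5, 1.0))   # +1 and -1, up to rounding
```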

Example 3

What's the solution to the following optimization problem?

min_g E[(Y − g(X))^2],   (1)

s.t. g(X) = α + βX.   (2)

Remark A Recall that, if the constraint (2) is removed, then the solution to (1) is simply g(X) = E(Y | X).

Remark B The constrained optimization problem (1)-(2) above is often referred to as the simple linear regression problem.
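For reference, the solution to (1)-(2) is the familiar least-squares pair β = Cov(X, Y)/Var(X) and α = E(Y) − βE(X). A quick sketch (the joint pmf below is an arbitrary illustration, not from the slides) checking that no nearby line achieves a smaller mean squared error:

```python
# Arbitrary joint pmf P(X = x, Y = y)
pmf = {(0, 1): 0.2, (1, 1): 0.3, (1, 3): 0.2, (2, 2): 0.3}

def ev(g):
    """E[g(X, Y)] under the joint pmf."""
    return sum(g(x, y) * p for (x, y), p in pmf.items())

ex, ey = ev(lambda x, y: x), ev(lambda x, y: y)
beta = (ev(lambda x, y: x * y) - ex * ey) / (ev(lambda x, y: x * x) - ex ** 2)
alpha = ey - beta * ex

def mse(a, b):
    """E[(Y - a - b*X)^2] for the candidate line g(X) = a + b*X."""
    return ev(lambda x, y: (y - a - b * x) ** 2)

best = mse(alpha, beta)
# The objective is a convex quadratic in (a, b): perturbations never help
assert all(mse(alpha + da, beta + db) >= best
           for da in (-0.1, 0.0, 0.1) for db in (-0.1, 0.0, 0.1))
print(alpha, beta)
```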
