Analysis of Variance and Design of Experiments - I
MODULE - I
LECTURE - 2
SOME RESULTS ON LINEAR
ALGEBRA, MATRIX THEORY
AND DISTRIBUTIONS
Dr. Shalabh
Department of Mathematics and Statistics
Indian Institute of Technology Kanpur
Quadratic forms
If A is a given matrix of order m × n and X and Y are two given vectors of orders m × 1 and n × 1 respectively, then the bilinear form is given by

$$X'AY = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}\, x_i y_j .$$

In particular, when A is a square matrix of order m and Y = X, we obtain the quadratic form

$$X'AX = a_{11}x_1^2 + \dots + a_{mm}x_m^2 + (a_{12} + a_{21})x_1 x_2 + \dots + (a_{m-1,m} + a_{m,m-1})x_{m-1}x_m ,$$

and when A is symmetric,

$$X'AX = a_{11}x_1^2 + \dots + a_{mm}x_m^2 + 2a_{12}x_1 x_2 + \dots + 2a_{m-1,m}x_{m-1}x_m = \sum_{i=1}^{m}\sum_{j=1}^{m} a_{ij}\, x_i x_j .$$
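As a quick numerical check, not part of the original lecture, the quadratic form X'AX can be evaluated either as a matrix product or through the double sum; the NumPy sketch below uses an arbitrary 3 × 3 matrix and vector.

```python
import numpy as np

# Illustrative 3x3 matrix and vector (arbitrary values, not from the lecture)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = np.array([1.0, -2.0, 0.5])

# Quadratic form evaluated as a matrix product: x'Ax
direct = x @ A @ x

# Same quantity evaluated as the double sum  sum_i sum_j a_ij x_i x_j
double_sum = sum(A[i, j] * x[i] * x[j]
                 for i in range(A.shape[0])
                 for j in range(A.shape[1]))

print(direct, double_sum)          # both give the same value
assert np.isclose(direct, double_sum)
```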
• If A is a positive semi-definite matrix then aii ≥ 0 and, if aii = 0, then aij = 0 and aji = 0 for all j.
• If P is any nonsingular matrix and A is any positive definite (or positive semi-definite) matrix, then P'AP is positive definite (or positive semi-definite).
• A matrix A is positive definite if and only if there exists a non-singular matrix P such that A = P'P.
• If A is an m × n matrix and rank(A) = m < n, then AA' is positive definite and A'A is positive semi-definite.
• If A is an m × n matrix and rank(A) = k < m < n, then both A'A and AA' are positive semi-definite (see the numerical sketch below).
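A minimal NumPy sketch of the last two bullets, with an arbitrary full-row-rank matrix and eigenvalue signs used as the definiteness check (illustrative only, not from the lecture):

```python
import numpy as np

# Arbitrary 2x4 matrix with rank(A) = 2 < 4 (full row rank)
A = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])
print(np.linalg.matrix_rank(A))            # 2

# AA' is 2x2 and positive definite: all eigenvalues strictly positive
print(np.linalg.eigvalsh(A @ A.T))

# A'A is 4x4 and only positive semi-definite: nonnegative eigenvalues,
# with n - m = 2 of them equal to zero (up to rounding)
print(np.linalg.eigvalsh(A.T @ A))
```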
System of linear equations
The set of m linear equations in n unknowns x1, x2, ..., xn, with known scalars aij and bi, i = 1, 2, ..., m, j = 1, 2, ..., n, of the form

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n &= b_m
\end{aligned}$$
can be formulated as
AX = b
where A is a real m × n matrix of known scalars, called the coefficient matrix,

$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \text{ is an } n \times 1 \text{ vector of variables and } b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix} \text{ is an } m \times 1 \text{ real vector of known scalars.}$$
• The linear homogeneous system AX = 0 has a solution other than X = 0 if and only if rank(A) < n.
• If AX = b is consistent, then it has a unique solution if and only if rank(A) = n, as illustrated in the sketch below.
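A short NumPy sketch of these rank conditions, using arbitrary coefficient matrices (illustrative only, not from the lecture):

```python
import numpy as np

# Square system with rank(A) = n = 3, so AX = b has a unique solution
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.linalg.matrix_rank(A))    # 3 = n
x = np.linalg.solve(A, b)          # unique solution of AX = b
print(np.allclose(A @ x, b))       # True

# Homogeneous system BX = 0 with rank(B) = 2 < n = 3 has nontrivial solutions
B = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(B))    # 2 < 3
x0 = np.array([-1.0, -1.0, 1.0])   # one nontrivial solution of BX = 0
print(np.allclose(B @ x0, 0))      # True
```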
Orthogonal matrix
A square matrix A is called an orthogonal matrix if A'A = AA' = I, or equivalently if A⁻¹ = A'.
• An orthogonal matrix is non-singular.
• If A is orthogonal, then A' is also orthogonal.
• If aii is the ith diagonal element of an orthogonal matrix, then −1 ≤ aii ≤ 1.
• Let the n × n matrix A be partitioned as A = [a1, a2, ..., an], where ai is the n × 1 vector of the elements of the ith column of A. A necessary and sufficient condition for A to be an orthogonal matrix is that ai'ai = 1 for each i and ai'aj = 0 for all i ≠ j, i.e., the columns of A form an orthonormal set.
• If A is an n × n matrix and P is an n × n orthogonal matrix, then the determinants of A and P'AP are the same.
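These properties are easy to verify numerically; the sketch below, not part of the lecture, builds an orthogonal matrix from the QR decomposition of an arbitrary random matrix and checks P'P = I, the bound on the diagonal elements, and the invariance of the determinant under A → P'AP.

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthogonal matrix P obtained from the QR decomposition of a random matrix
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(P.T @ P, np.eye(4)))     # P'P = I, so P is orthogonal
print(np.all(np.abs(np.diag(P)) <= 1))     # each diagonal element lies in [-1, 1]

# The determinant is unchanged under the transformation A -> P'AP
A = rng.standard_normal((4, 4))
print(np.isclose(np.linalg.det(A), np.linalg.det(P.T @ A @ P)))
```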
Random vectors
Let Y1 , Y2 ,..., Yn be n random variables then Y = (Y1 , Y2 ,..., Yn ) ' is called a random vector.
• If Y1 , Y2 ,..., Yn are pair-wise uncorrelated, then the covariance matrix is a diagonal matrix.
If Y = (Y1, Y2, ..., Yn)' and K = (k1, k2, ..., kn)', then $K'Y = \sum_{i=1}^{n} k_i Y_i$,
• the mean of K'Y is $E(K'Y) = K'E(Y) = \sum_{i=1}^{n} k_i E(Y_i)$, and
• the variance of K'Y is Var(K'Y) = K'ΣK, where Σ = Cov(Y) is the covariance matrix of Y.
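A simulation sketch of these identities (the mean vector, covariance matrix, weights and sample size below are arbitrary illustrative choices, not from the lecture): the sample mean and variance of K'Y should approach K'E(Y) and K'ΣK.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary mean vector, covariance matrix and weight vector (illustrative only)
mu    = np.array([1.0, -1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
K     = np.array([0.5, 2.0, -1.0])

# Draw many realisations of Y and form the linear combination K'Y
Y = rng.multivariate_normal(mu, Sigma, size=200_000)
KY = Y @ K

print(KY.mean(), K @ mu)            # sample mean vs. K'E(Y)
print(KY.var(),  K @ Sigma @ K)     # sample variance vs. K'Sigma K
```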
A random vector Y = (Y1, Y2, ..., Yn)' has a multivariate normal distribution with mean vector μ = (μ1, μ2, ..., μn)' and dispersion matrix Σ if its probability density function is

$$f(Y \mid \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\left[-\frac{1}{2}(Y-\mu)'\,\Sigma^{-1}(Y-\mu)\right].$$
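As a sanity check, not part of the lecture, the density can be evaluated directly from this formula and compared with scipy.stats.multivariate_normal for an arbitrary μ and Σ:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary mean vector and positive definite dispersion matrix
mu    = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
y     = np.array([0.3, 0.7])
n     = len(mu)

# Density evaluated directly from the formula above
quad = (y - mu) @ np.linalg.inv(Sigma) @ (y - mu)
f_formula = np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5)

# Density from scipy for comparison
f_scipy = multivariate_normal.pdf(y, mean=mu, cov=Sigma)

print(f_formula, f_scipy)    # the two values agree
```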
χ²-distribution
The probability density function of the χ²-distribution with k degrees of freedom is

$$f_{\chi^2}(x) = \frac{1}{\Gamma(k/2)\,2^{k/2}}\; x^{\frac{k}{2}-1} \exp\left(-\frac{x}{2}\right); \quad 0 < x < \infty.$$
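A quick check of this density against scipy.stats.chi2, with arbitrary k and x (illustrative only, not from the lecture):

```python
import numpy as np
from math import gamma
from scipy.stats import chi2

k, x = 5, 3.2    # arbitrary degrees of freedom and evaluation point

# Density evaluated from the formula above
f_formula = x ** (k / 2 - 1) * np.exp(-x / 2) / (gamma(k / 2) * 2 ** (k / 2))

# Density from scipy for comparison
print(f_formula, chi2.pdf(x, df=k))    # the two values agree
```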
• If Y1, Y2, ..., Yk are independently distributed following the normal distribution with common mean 0 and common variance σ², then $\frac{1}{\sigma^2}\sum_{i=1}^{k} Y_i^2$ has a χ²-distribution with k degrees of freedom.
• If the random variables Y1, Y2, ..., Yk are normally distributed with non-null means μ1, μ2, ..., μk but common variance 1, then $\sum_{i=1}^{k} Y_i^2$ has a noncentral χ²-distribution with k degrees of freedom and noncentrality parameter $\lambda = \sum_{i=1}^{k} \mu_i^2$.
• If Y1, Y2, ..., Yk are independently distributed following the normal distribution with means μ1, μ2, ..., μk but common variance σ², then $\frac{1}{\sigma^2}\sum_{i=1}^{k} Y_i^2$ has a noncentral χ²-distribution with k degrees of freedom and noncentrality parameter $\lambda = \frac{1}{\sigma^2}\sum_{i=1}^{k} \mu_i^2$.
• If U has a noncentral Chi-square distribution with k degrees of freedom and noncentrality parameter λ, then E(U) = k + λ and Var(U) = 2k + 4λ.
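The sketch below, not part of the lecture, ties the last two results together: it simulates U = (1/σ²)ΣYi² for independent normal Yi with arbitrary nonzero means, checks the mean and variance formulas, and compares with scipy.stats.ncx2.

```python
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(2)

# Arbitrary means and common variance for Y_1, ..., Y_k (illustrative only)
mu    = np.array([1.0, -0.5, 2.0, 0.0])
sigma = 1.5
k     = len(mu)
lam   = (mu ** 2).sum() / sigma ** 2          # noncentrality parameter

# U = (1/sigma^2) * sum_i Y_i^2 for many replications
Y = rng.normal(loc=mu, scale=sigma, size=(200_000, k))
U = (Y ** 2).sum(axis=1) / sigma ** 2

print(U.mean(), k + lam)                      # E(U) = k + lambda
print(U.var(),  2 * k + 4 * lam)              # Var(U) = 2k + 4*lambda

# Same moments from the noncentral chi-square distribution in scipy
print(ncx2.stats(df=k, nc=lam, moments='mv'))
```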
• If U1, U2, ..., Uk are independently distributed random variables with each Ui having a noncentral Chi-square distribution with ni degrees of freedom and noncentrality parameter λi, i = 1, 2, ..., k, then $\sum_{i=1}^{k} U_i$ has a noncentral Chi-square distribution with $\sum_{i=1}^{k} n_i$ degrees of freedom and noncentrality parameter $\sum_{i=1}^{k} \lambda_i$.
• Let X = (X1, X2, ..., Xn)' have a multivariate normal distribution with mean vector μ and positive definite covariance matrix Σ. Then X'AX is distributed as noncentral χ² with k degrees of freedom if and only if ΣA is an idempotent matrix of rank k.
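A simulation sketch of this result in the special case Σ = I, so that ΣA = A is idempotent, with the usual noncentrality parameter μ'Aμ; the projection matrix A and the mean vector below are arbitrary illustrative choices, not from the lecture.

```python
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(5)

# Sigma = I and A an idempotent projection matrix of rank k (illustrative choice)
n = 4
Z = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A = Z @ np.linalg.inv(Z.T @ Z) @ Z.T          # projection onto col(Z), idempotent
k = np.linalg.matrix_rank(A)                   # rank 2

mu  = np.array([1.0, 0.0, -1.0, 0.5])
lam = mu @ A @ mu                              # noncentrality parameter mu'A mu

# Draw X ~ N(mu, I) repeatedly and form the quadratic form X'AX
X = rng.multivariate_normal(mu, np.eye(n), size=200_000)
Q = np.einsum('ij,jk,ik->i', X, A, X)

# Compare a probability under the simulated quadratic form with ncx2
print(np.mean(Q <= 3), ncx2.cdf(3, df=k, nc=lam))
```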
• Let X = (X1, X2, ..., Xn)' have a multivariate normal distribution with mean vector μ and positive definite covariance matrix Σ. Then
  - X'A1X is distributed as noncentral χ² with n1 degrees of freedom and noncentrality parameter μ'A1μ, and …
t- distribution
If
⎛ n +1 ⎞
Γ⎜ ⎛ n +1 ⎞
⎟ ⎛ t 2 ⎞ −⎜⎝ 2 ⎟⎠
fT (t ) = ⎝
2 ⎠
⎜1 + ⎟ ; - ∞ < t < ∞.
⎛n⎞ ⎝ n⎠
Γ ⎜ ⎟ nπ
⎝2⎠
X
• If the mean of X is non zero then the distribution of is called the noncentral t - distribution with n degrees
Y /n
of freedom and noncentrality parameter μ .
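A small simulation sketch, not from the lecture, of this construction: forming X/√(Y/n) from independent draws of X ~ N(0, 1) and Y ~ χ²(n) and comparing a probability with scipy.stats.t (n and the sample size are arbitrary).

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(3)
n, reps = 6, 200_000          # arbitrary degrees of freedom and sample size

# X ~ N(0,1) and Y ~ chi-square(n), drawn independently
X = rng.standard_normal(reps)
Y = rng.chisquare(n, reps)
T = X / np.sqrt(Y / n)        # should follow the t-distribution with n d.f.

# Compare simulated and theoretical probabilities of |T| <= 2
print(np.mean(np.abs(T) <= 2), t.cdf(2, df=n) - t.cdf(-2, df=n))
```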
F-distribution
• If X and Y are independent random variables with χ²-distributions with m and n degrees of freedom respectively, then the distribution of the statistic $F = \frac{X/m}{Y/n}$ is called the F-distribution with m and n degrees of freedom. The probability density function of F is

$$f_F(f) = \frac{\Gamma\left(\frac{m+n}{2}\right)\left(\frac{m}{n}\right)^{m/2}}{\Gamma\left(\frac{m}{2}\right)\Gamma\left(\frac{n}{2}\right)}\; f^{\frac{m-2}{2}} \left(1 + \frac{m}{n} f\right)^{-\frac{m+n}{2}}; \quad 0 < f < \infty.$$
• If X has a noncentral Chi-square distribution with m degrees of freedom and noncentrality parameter λ, Y has a χ²-distribution with n degrees of freedom, and X and Y are independent random variables, then the distribution of $F = \frac{X/m}{Y/n}$ is the noncentral F-distribution with m and n degrees of freedom and noncentrality parameter λ.
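The sketch below, with arbitrary m, n, λ and evaluation points (not part of the lecture), checks the central F density formula against scipy.stats.f and simulates the noncentral F construction for comparison with scipy.stats.ncf.

```python
import numpy as np
from math import gamma
from scipy.stats import f, ncf

m, n, x = 4, 10, 1.7    # arbitrary degrees of freedom and evaluation point

# Central F density from the formula above vs. scipy.stats.f
f_formula = (gamma((m + n) / 2) * (m / n) ** (m / 2)
             / (gamma(m / 2) * gamma(n / 2))
             * x ** ((m - 2) / 2) * (1 + m * x / n) ** (-(m + n) / 2))
print(f_formula, f.pdf(x, dfn=m, dfd=n))      # the two values agree

# Noncentral F: (X/m)/(Y/n) with X ~ noncentral chi-square(m, lam), Y ~ chi-square(n)
rng = np.random.default_rng(4)
lam, reps = 3.0, 200_000
X = rng.noncentral_chisquare(m, lam, reps)
Y = rng.chisquare(n, reps)
F = (X / m) / (Y / n)
print(np.mean(F <= 2), ncf.cdf(2, dfn=m, dfd=n, nc=lam))   # close agreement
```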