
Mathematical Foundations

(Intelligent Signal Processing and Control)

Aurobinda Routray

9/14/2005


Topics
• Linear Algebra
• Multivariable Analysis
• Lyapunov’s Method
• Unconstrained Optimization
• Constrained Optimization
• Random Variables and Stochastic Processes
• Fuzzy Set Theory



Inner Products of Two Vectors
If $x, y \in \mathbb{R}^{n \times 1}$:
$$\langle x, y \rangle = x^T y = y^T x = \langle y, x \rangle = \sum_{i=1}^{n} y_i x_i$$
If $x, y \in \mathbb{C}^{n \times 1}$:
$$\langle x, y \rangle = x^H y = \sum_{i=1}^{n} x_i^* y_i$$
If $x(t)$ and $y(t)$ are time-varying vectors, then
$$\langle x, y \rangle = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} x^T(t)\, y(t)\, dt = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left( \sum_{i=1}^{n} x_i(t)\, y_i(t) \right) dt$$

Outer Products of Two Vectors
$$A = x y^T = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \begin{bmatrix} y_1 & y_2 & \cdots & y_n \end{bmatrix} = \begin{bmatrix} x_1 y_1 & x_1 y_2 & \cdots & x_1 y_n \\ x_2 y_1 & x_2 y_2 & \cdots & x_2 y_n \\ \vdots & & \ddots & \vdots \\ x_n y_1 & x_n y_2 & \cdots & x_n y_n \end{bmatrix} \quad \text{(rank is 1)}$$
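As a quick numerical illustration (my own sketch, not from the slides; all names are mine), the following NumPy snippet checks the real inner-product identity and the rank-1 property of the outer product:

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(4), rng.standard_normal(4)

    # Inner product: <x, y> = x^T y = sum_i y_i x_i (real case)
    assert np.isclose(x @ y, np.sum(y * x))

    # Outer product: A = x y^T has entries A_ij = x_i y_j and rank 1
    A = np.outer(x, y)
    print(np.linalg.matrix_rank(A))  # -> 1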
Span of Vectors
$$\operatorname{span}\{a_1, a_2, \ldots, a_m\} = \left\{ a = \alpha_1 a_1 + \alpha_2 a_2 + \cdots + \alpha_m a_m : \alpha_i \in \mathbb{R},\ 1 \le i \le m \right\}$$
Linear dependency of the columns or rows of a matrix $A$:
If $A^T A$ is full rank (non-singular), then the columns of $A$ are linearly independent.
If $A A^T$ is full rank (non-singular), then the rows of $A$ are linearly independent.

Sylvester's Inequality
Given $\rho(A)$ as the rank of $A \in \mathbb{R}^{n \times m}$ and $\rho(B)$ as the rank of $B \in \mathbb{R}^{m \times p}$:
$$\rho(A) + \rho(B) - m \le \rho(AB) \le \min\{\rho(A), \rho(B)\}$$

Matrix Definiteness
A symmetric matrix $A$ is positive definite if
$$x^T A x > 0 \quad \text{for all } x \in \mathbb{R}^{n \times 1},\ x \neq 0 \quad \Rightarrow \quad A \succ 0$$
This represents an infinite number of inequalities.
A symmetric matrix $A$ is positive semidefinite if
$$x^T A x \ge 0 \quad \text{for all } x \in \mathbb{R}^{n \times 1} \quad \Rightarrow \quad A \succeq 0$$
Eigenvalues
A real symmetric matrix has real eigenvalues and real eigenvectors.
A Hermitian matrix also has real eigenvalues (its eigenvectors are complex in general).
$$A \succ 0 \Rightarrow \lambda > 0 \qquad A \succeq 0 \Rightarrow \lambda \ge 0 \qquad A \prec 0 \Rightarrow \lambda < 0 \qquad A \preceq 0 \Rightarrow \lambda \le 0$$
Matrix Inversion Lemma
$$\left( A + u v^T \right)^{-1} = A^{-1} - \frac{A^{-1} u \left( v^T A^{-1} \right)}{1 + v^T A^{-1} u}$$
$$\left( C + D B E \right)^{-1} = C^{-1} - C^{-1} D \left( E C^{-1} D + B^{-1} \right)^{-1} E C^{-1}$$
$$\left( C - D B^{-1} E \right)^{-1} = C^{-1} + C^{-1} D \left( B - E C^{-1} D \right)^{-1} E C^{-1}$$

(MATLAB example on the complexity of matrix inversion.)
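In place of the MATLAB example mentioned above, here is a rough Python/NumPy stand-in (my own sketch) contrasting the rank-1 update formula with direct re-inversion of the updated matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    u = rng.standard_normal((n, 1))
    v = rng.standard_normal((n, 1))

    Ainv = np.linalg.inv(A)

    # Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - A^{-1}u v^T A^{-1} / (1 + v^T A^{-1} u)
    # Given A^{-1}, this costs O(n^2) instead of the O(n^3) of a fresh inversion.
    updated = Ainv - (Ainv @ u) @ (v.T @ Ainv) / (1.0 + v.T @ Ainv @ u)

    direct = np.linalg.inv(A + u @ v.T)
    print(np.max(np.abs(updated - direct)))  # agreement to roundoff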
Schur Complements
If $\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \succeq 0$ and $Q, R$ are symmetric, then
$$R \succeq 0, \qquad Q - S R^{\dagger} S^T \succeq 0, \qquad S \left( I - R R^{\dagger} \right) = 0$$
where $R^{\dagger}$ is the Moore-Penrose inverse of $R$.
Proof:
Let $U$ be the orthogonal matrix ($U^T U = I$) that diagonalizes $R$:
$$U^T R U = \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix} \quad \text{where } \Sigma \succ 0 \text{ and diagonal}$$
$$\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \succeq 0 \;\Rightarrow\; \begin{bmatrix} I & 0 \\ 0 & U^T \end{bmatrix} \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & U \end{bmatrix} = \begin{bmatrix} Q & S_1 & S_2 \\ S_1^T & \Sigma & 0 \\ S_2^T & 0 & 0 \end{bmatrix} \succeq 0$$
where $[\, S_1 \;\; S_2 \,] = SU$ with appropriate partitioning.
We must have $S_2 = 0$, which is true if and only if
$$S \left( I - R R^{\dagger} \right) = 0 \quad \text{and} \quad \begin{bmatrix} Q & S_1 \\ S_1^T & \Sigma \end{bmatrix} \succeq 0,$$
which holds if and only if $Q - S R^{\dagger} S^T \succeq 0$.
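A numerical sanity check of the three conditions (my own sketch; the positive semidefinite block matrix is built arbitrarily):

    import numpy as np

    rng = np.random.default_rng(2)
    # Random PSD block matrix M = [[Q, S], [S^T, R]] via M = F^T F
    F = rng.standard_normal((5, 8))
    M = F.T @ F
    Q, S, R = M[:4, :4], M[:4, 4:], M[4:, 4:]

    Rp = np.linalg.pinv(R)   # Moore-Penrose inverse R^+
    print(np.min(np.linalg.eigvalsh(R)) >= -1e-10)                   # R >= 0
    print(np.min(np.linalg.eigvalsh(Q - S @ Rp @ S.T)) >= -1e-10)    # Q - S R^+ S^T >= 0
    print(np.allclose(S @ (np.eye(4) - R @ Rp), 0))                  # S (I - R R^+) = 0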
Pseudo-Inverse of a Matrix
For a square matrix that is singular, or for a rectangular matrix, a generalized inverse can be computed. $A^{\dagger}$ is the Moore-Penrose inverse of $A$:
1. $A^{\dagger} = \left( A^T A \right)^{-1} A^T$ (when $A$ has full column rank)
2. $A^{\dagger} A A^{\dagger} = A^{\dagger}$
3. $A A^{\dagger} A = A$
4. $\left( A A^{\dagger} \right)^T = A A^{\dagger}$
5. $\left( A^{\dagger} A \right)^T = A^{\dagger} A$
If $A$ is a square non-singular matrix, $A^{\dagger} = A^{-1}$.

Least-Squares Problem (Overdetermined Case)
$$Ax = b, \qquad A \in \mathbb{R}^{m \times n},\ x \in \mathbb{R}^{n \times 1},\ b \in \mathbb{R}^{m \times 1},\ m > n$$
$$E(x) = \frac{1}{2} \| e \|_2^2 = \frac{1}{2} \| Ax - b \|_2^2 = \frac{1}{2} \left( Ax - b \right)^T \left( Ax - b \right)$$
Minimizing the error with respect to $x$ leads to
$$A^T A x - A^T b = 0 \quad \Rightarrow \quad x = \left( A^T A \right)^{-1} A^T b$$
For the underdetermined case ($m < n$): $x = A^T \left( A A^T \right)^{-1} b$
(MATLAB example on the least-squares solution and error.)

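In place of the MATLAB example mentioned above, a rough Python/NumPy equivalent (my own sketch) for the overdetermined case:

    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 50, 3
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    # Normal equations: x = (A^T A)^{-1} A^T b
    x_ne = np.linalg.solve(A.T @ A, A.T @ b)
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # library solver for comparison
    print(np.allclose(x_ne, x_ls))

    # Residual error E(x) = 0.5 * ||Ax - b||^2
    print(0.5 * np.linalg.norm(A @ x_ne - b) ** 2)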


Orthogonal and Unitary Matrices
Vectors: $q_i^T q_j = \delta_{ij}$ (Kronecker delta function)
Matrices: $Q^T Q = I$ or $Q^H Q = I$
Orthogonal matrices (the complex counterpart is unitary) do not affect inner products or norms:
$$\langle Qx, Qy \rangle = \left( Qx \right)^T \left( Qy \right) = \langle x, y \rangle$$

Conjugate Vectors
Given a symmetric matrix $Q \in \mathbb{R}^{n \times n}$ and a set of non-zero vectors $\{d_0, d_1, \ldots, d_{n-1}\}$, $d_i \in \mathbb{R}^{n \times 1}$, with
$$d_i^T Q d_j = 0 \quad \text{for } i \neq j,$$
$d_i$ and $d_j$ are $Q$-conjugate, and the set of vectors $\{d_0, d_1, \ldots, d_{n-1}\}$ is $Q$-orthogonal.

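A short sketch (mine; the orthogonal Q is taken from a QR factorization purely for illustration) confirming that an orthogonal matrix leaves inner products and 2-norms unchanged:

    import numpy as np

    rng = np.random.default_rng(4)
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # Q^T Q = I
    x, y = rng.standard_normal(5), rng.standard_normal(5)

    print(np.isclose((Q @ x) @ (Q @ y), x @ y))                  # <Qx, Qy> = <x, y>
    print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # ||Qx|| = ||x||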


Eigenvalues and Eigenvectors
The word "eigenvalue" comes from the German Eigenwert, which means "proper or characteristic value."
$$Ax = \lambda x$$
$\left( \lambda I - A \right) x = 0$ has non-trivial solutions when $\left| \lambda I - A \right| = 0$; the roots are called eigenvalues and the solutions are eigenvectors.
Properties:
1. $\operatorname{trace}(A) = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i$
2. If $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $x$ is an eigenvector of $A^{-1}$ with eigenvalue $\frac{1}{\lambda}$.
3. If $x$ is an eigenvector with eigenvalue $\lambda$, then $kx$ is also an eigenvector with the same eigenvalue.
4. The eigenvalues of lower and upper triangular matrices are the diagonal elements.
5. If $x$ is an eigenvector of $A$ corresponding to eigenvalue $\lambda$, then it is also an eigenvector of $A - \alpha I$ with eigenvalue $\lambda - \alpha$.
6. $\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$

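A quick numerical check of properties 1, 5, and 6 (my own sketch):

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((4, 4))
    lam = np.linalg.eigvals(A)

    print(np.isclose(np.trace(A), lam.sum().real))        # trace = sum of eigenvalues
    print(np.isclose(np.linalg.det(A), lam.prod().real))  # det = product of eigenvalues

    # Shift property: eigenvalues of A - alpha*I are lambda - alpha
    alpha = 0.7
    lam_shifted = np.linalg.eigvals(A - alpha * np.eye(4))
    print(np.allclose(np.sort_complex(lam_shifted), np.sort_complex(lam - alpha)))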


Vector Norms
Properties:
1. $\| x \| \ge 0$, and $\| x \| = 0 \Leftrightarrow x = 0$
2. $\| \alpha x \| = |\alpha| \, \| x \|$
3. $\| x_1 + x_2 \| \le \| x_1 \| + \| x_2 \|$ (triangle inequality)
$L_1$ norm:
$$\| x \|_1 \triangleq \sum_{i=1}^{n} | x_i |$$
$L_p$ norm:
$$\| x \|_p \triangleq \left[ \sum_{i=1}^{n} | x_i |^p \right]^{1/p}, \qquad p \ge 1$$
Euclidean norm (2-norm):
$$\| x \|_2 = \langle x, x \rangle^{1/2} = \left[ \sum_{i=1}^{n} x_i^2 \right]^{1/2} = \left( x^T x \right)^{1/2}$$
$L_{\infty}$ norm:
$$\| x \|_{\infty} = \max \left( |x_1|, |x_2|, \ldots, |x_n| \right)$$
$L_{-\infty}$ norm:
$$\| x \|_{-\infty} = \min \left( |x_1|, |x_2|, \ldots, |x_n| \right)$$
Inner-product-generated norm:
$$\| x \|_W = \langle x, x \rangle_W^{1/2} = \left[ x^T W^T W x \right]^{1/2} = \left[ \left( Wx \right)^T \left( Wx \right) \right]^{1/2} = \| Wx \|_2$$
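These norms map directly onto np.linalg.norm; a minimal sketch (mine):

    import numpy as np

    x = np.array([3.0, -4.0, 1.0])

    print(np.linalg.norm(x, 1))        # L1: sum of |x_i|        -> 8.0
    print(np.linalg.norm(x, 2))        # Euclidean: sqrt(x^T x)
    print(np.linalg.norm(x, np.inf))   # L-infinity: max |x_i|   -> 4.0
    print(np.linalg.norm(x, -np.inf))  # L-(-infinity): min |x_i| -> 1.0

    # Weighted (inner-product-generated) norm ||x||_W = ||Wx||_2
    W = np.diag([1.0, 2.0, 3.0])
    print(np.sqrt(x @ W.T @ W @ x), np.linalg.norm(W @ x))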
Matrix Norms
Properties:
1. $\| A \| \ge 0$, and $\| A \| = 0 \Leftrightarrow A = 0$
2. $\| \alpha A \| = |\alpha| \, \| A \|$
3. $\| A + B \| \le \| A \| + \| B \|$ (triangle inequality)
4. $\| AB \| \le \| A \| \, \| B \|$ (consistency condition)
Frobenius norm:
$$\| A \|_F \triangleq \left( \sum_{i=1}^{n} \sum_{j=1}^{n} | a_{ij} |^2 \right)^{1/2} = \left( \sum_{i=1}^{n} \sigma_i^2 \right)^{1/2}$$
where $\sigma_i$ are the singular values of the matrix $A$.
Induced norms (norms induced by vector norms):
$$\| A \|_p = \sup_{x \neq 0} \frac{\| Ax \|_p}{\| x \|_p} = \sup_{\| x \|_p = 1} \| Ax \|_p$$
sup: the supremum is the least upper bound of $\| Ax \|_p$.



Matrix Norms (contd.)
$L_1$ norm (induced by the $L_1$ vector norm):
$$\| A \|_1 \triangleq \max_{j = 1, 2, \ldots, n} \left( \sum_{i=1}^{n} | a_{ij} | \right) \;\rightarrow\; \text{maximum column sum}$$
$L_{\infty}$ norm (induced by the $L_{\infty}$ vector norm):
$$\| A \|_{\infty} \triangleq \max_{i = 1, 2, \ldots, n} \left( \sum_{j=1}^{n} | a_{ij} | \right) \;\rightarrow\; \text{maximum row sum}$$
$L_2$ (spectral norm, induced by the Euclidean vector norm):
$$\| A \|_2 = \left[ \lambda_{\max} \left( A^* A \right) \right]^{1/2}$$
This is the same as the largest singular value (to be covered next).
Spectral radius of $A$:
$$\sigma_r(A) = \max_{1 \le i \le n} | \lambda_i | \le \| A \|$$

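A sketch (mine) verifying the induced-norm formulas and the spectral-radius bound numerically:

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((4, 3))

    print(np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max()))       # max column sum
    print(np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max()))  # max row sum

    # Spectral norm = largest singular value = sqrt(lambda_max(A^T A))
    s = np.linalg.svd(A, compute_uv=False)
    print(np.isclose(np.linalg.norm(A, 2), s[0]))

    # Spectral radius of a square matrix is bounded by the induced 2-norm
    B = rng.standard_normal((4, 4))
    print(np.max(np.abs(np.linalg.eigvals(B))) <= np.linalg.norm(B, 2) + 1e-12)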


Singular Value Decomposition
If $A \in \mathbb{R}^{m \times n}$, there exist real orthogonal matrices
$$U = [\, u_1 \;\; u_2 \;\; \cdots \;\; u_m \,] \in \mathbb{R}^{m \times m} \quad \text{and} \quad V = [\, v_1 \;\; v_2 \;\; \cdots \;\; v_n \,] \in \mathbb{R}^{n \times n}$$
such that
$$U^T A V = \operatorname{pseudodiag} \left[ \sigma_1 \;\; \sigma_2 \;\; \cdots \;\; \sigma_p \right] = S$$
where $S \in \mathbb{R}^{m \times n}$, $p = \min\{m, n\}$, and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p$. Equivalently,
$$A = U S V^T = \sum_{i=1}^{r} \sigma_i u_i v_i^T$$
where $r$ is the index of the smallest non-zero singular value.
If $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0$, the matrix has rank $r$.



Singular Value Decomposition (contd.)
If $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0$, the matrix has rank $r$:
$$A = [\, U_{m \times r} \;\; U_{m \times m-r} \,] \begin{bmatrix} S_{r \times r} & 0_{r \times n-r} \\ 0_{m-r \times r} & 0_{m-r \times n-r} \end{bmatrix} [\, V_{n \times r} \;\; V_{n \times n-r} \,]^T = U_{m \times r} S_{r \times r} V_{n \times r}^T$$
When $A$ is symmetric, the singular values are the absolute values of the eigenvalues; for symmetric positive semidefinite $A$ they coincide and $U = V$.

Finding the inverse of a Hessian that consists of a diagonal matrix plus a low-rank matrix:
$$H = D_{\text{diagonal}} + N_{\text{low rank}}, \qquad N = U \Sigma U^T$$
where $U \in \mathbb{R}^{M \times R}$, $\Sigma \in \mathbb{R}^{R \times R}$, $U = [\, u_1 \,|\, u_2 \,|\, \cdots \,|\, u_R \,]$, $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_R)$:
$$H^{-1} = D^{-1} - D^{-1} U \left[ \Sigma^{-1} + U^T D^{-1} U \right]^{-1} U^T D^{-1}$$
(The $+$ sign inside the bracket follows from the matrix inversion lemma.)
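A sketch (my own, assuming a positive-definite diagonal $D$ and a low-rank $N = U \Sigma U^T$) verifying the update formula:

    import numpy as np

    rng = np.random.default_rng(7)
    M, R = 6, 2
    D = np.diag(rng.uniform(1.0, 2.0, M))       # diagonal part, positive definite
    U = rng.standard_normal((M, R))
    Sigma = np.diag(rng.uniform(0.5, 1.0, R))   # low-rank part N = U Sigma U^T
    H = D + U @ Sigma @ U.T

    Dinv = np.linalg.inv(D)
    # H^{-1} = D^{-1} - D^{-1} U (Sigma^{-1} + U^T D^{-1} U)^{-1} U^T D^{-1}
    core = np.linalg.inv(np.linalg.inv(Sigma) + U.T @ Dinv @ U)
    Hinv = Dinv - Dinv @ U @ core @ U.T @ Dinv

    print(np.allclose(Hinv, np.linalg.inv(H)))  # True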


Matrix Condition Number
$$\operatorname{cond}_p(A) \triangleq \| A \|_p \, \| A^{\dagger} \|_p$$
Matrices with small condition numbers are well-conditioned; those with large condition numbers are ill-conditioned.
$$\operatorname{cond}_2(A) \triangleq \| A \|_2 \, \| A^{\dagger} \|_2 = \frac{\sigma_1}{\sigma_p} = \frac{\text{maximum singular value}}{\text{minimum singular value}}$$
Example:
Let $Ax = b$, and let there be an error $\Delta b$ in $b$, so that $A(x + \Delta x) = b + \Delta b$. Then
$$\Delta x = A^{-1} \Delta b \;\Rightarrow\; \| \Delta x \| \le \| A^{-1} \| \, \| \Delta b \| \;\Rightarrow\; \frac{\| \Delta x \|}{\| x \|} \le \frac{\| A^{-1} \| \, \| \Delta b \|}{\| x \|} = \| A^{-1} \| \, \frac{\| b \|}{\| x \|} \, \frac{\| \Delta b \|}{\| b \|}$$
Since $\| b \| = \| A x \| \le \| A \| \, \| x \|$,
$$\frac{\| \Delta x \|}{\| x \|} \le \| A \| \, \| A^{-1} \| \, \frac{\| \Delta b \|}{\| b \|} = \operatorname{cond}(A) \, \frac{\| \Delta b \|}{\| b \|}$$
(MATLAB example on the condition number.)
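In place of the MATLAB example mentioned above, a rough Python/NumPy stand-in (my own sketch) using an ill-conditioned Hilbert-type matrix:

    import numpy as np

    rng = np.random.default_rng(8)
    n = 8
    A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
    x = rng.standard_normal(n)
    b = A @ x

    kappa = np.linalg.cond(A, 2)            # sigma_1 / sigma_p, huge here (~1e10)
    db = 1e-10 * rng.standard_normal(n)     # small perturbation of b
    dx = np.linalg.solve(A, b + db) - np.linalg.solve(A, b)

    lhs = np.linalg.norm(dx) / np.linalg.norm(x)
    rhs = kappa * np.linalg.norm(db) / np.linalg.norm(b)
    print(kappa, lhs <= rhs + 1e-12)        # the bound holds; kappa amplifies db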
Kronecker Product

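The body of this slide did not survive extraction. As a placeholder, here is a minimal sketch (mine) of the standard definition — $A \otimes B$ has blocks $a_{ij} B$ — and the mixed-product property:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])

    K = np.kron(A, B)   # 4x4; block (i, j) equals a_ij * B
    print(K)

    # Mixed-product identity: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
    C, D = np.eye(2), 2 * np.eye(2)
    print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))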


Multivariable Analysis



Convex Set
A set $\Sigma$ is said to be convex if, for any elements $x, y \in \Sigma$,
$$\beta x + (1 - \beta) y \in \Sigma \quad \text{for all } 0 \le \beta \le 1$$
Convex Functions
A function $f$ defined over a convex set $\Sigma$ is a convex function if, for every $x, y \in \Sigma$ and $0 \le \beta \le 1$,
$$f \left( \beta x + (1 - \beta) y \right) \le \beta f(x) + (1 - \beta) f(y)$$
Lipschitz Continuity
Let $F(x)$ be an $m$-dimensional function of the $n$-dimensional variable $x$. The function is Lipschitz continuous in the open set $\Sigma$ (open sphere) if, for some constant $\beta$,
$$\| F(x) - F(y) \| \le \beta \| x - y \| \quad \text{for all } x, y \in \Sigma$$
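A small numerical illustration of the convexity inequality for $f(x) = x^2$ (my own sketch):

    import numpy as np

    # Check f(beta*x + (1-beta)*y) <= beta*f(x) + (1-beta)*f(y) on random samples
    f = lambda t: t ** 2
    rng = np.random.default_rng(12)
    for _ in range(1000):
        x, y, beta = rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(0, 1)
        assert f(beta * x + (1 - beta) * y) <= beta * f(x) + (1 - beta) * f(y) + 1e-12
    print("convexity inequality holds on all samples")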

Quadratic Forms
$$q = \sum_{i=1}^{n} \sum_{j=1}^{n} p_{ij} x_i x_j$$
or
$$q = x^T P x \;\rightarrow\; \text{for real } x, \qquad q = x^H P x \;\rightarrow\; \text{for complex } x$$
where $P$ is symmetric and positive definite.
$$q = \operatorname{trace} \left( P x x^T \right) = \operatorname{trace} \left( x x^T P \right)$$

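A one-line check of the trace identities (my own sketch):

    import numpy as np

    rng = np.random.default_rng(9)
    F = rng.standard_normal((3, 3))
    P = F @ F.T + np.eye(3)   # symmetric positive definite
    x = rng.standard_normal(3)

    q = x @ P @ x             # q = x^T P x
    print(np.isclose(q, np.trace(P @ np.outer(x, x))))  # q = trace(P x x^T)
    print(np.isclose(q, np.trace(np.outer(x, x) @ P)))  # q = trace(x x^T P)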


[Figure: examples of convex sets and non-convex sets.]


[Figure: a convex function versus a non-convex function, showing the chord value $\beta f(x) + (1 - \beta) f(y)$ lying above the function value $f(\beta x + (1 - \beta) y)$.]


Vector Differentiation
$$x, y \in \mathbb{R}^{n \times 1}, \quad P \in \mathbb{R}^{n \times n}, \quad A \in \mathbb{R}^{m \times n}$$
$$\frac{\partial}{\partial x} \left( x^T y \right) = y, \qquad \frac{\partial}{\partial x} \left( y^T x \right) = y, \qquad \frac{\partial}{\partial x} \left( x^T x \right) = 2x$$
$$\frac{\partial}{\partial x} \left( P x \right) = P^T, \qquad \frac{\partial}{\partial x} \left( x^T P y \right) = P y, \qquad \frac{\partial}{\partial x} \left( y^T P x \right) = P^T y$$
$$\frac{\partial}{\partial x} \left( x^T P x \right) = P x + P^T x, \qquad \frac{\partial}{\partial x} \left( \| x \|_2 \right) = \frac{x}{\| x \|_2}, \qquad \frac{\partial}{\partial x} \left( A x \right) = A^T$$

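A finite-difference check of the $x^T P x$ gradient (my own sketch; central differences are exact here because the function is quadratic):

    import numpy as np

    rng = np.random.default_rng(10)
    n = 4
    P = rng.standard_normal((n, n))
    x = rng.standard_normal(n)

    grad = (P + P.T) @ x   # analytic gradient: P x + P^T x

    eps, num = 1e-6, np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        num[i] = ((x + e) @ P @ (x + e) - (x - e) @ P @ (x - e)) / (2 * eps)

    print(np.allclose(grad, num, atol=1e-5))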


Matrix Differentiation
If $A, B, C$ are of appropriate dimensions:
$$\frac{\partial}{\partial A} \operatorname{trace}(A) = I, \qquad \frac{\partial}{\partial A} \operatorname{trace}(BAC) = B^T C^T, \qquad \frac{\partial}{\partial A} \operatorname{trace} \left( B A^T C \right) = CB$$
$$\frac{\partial}{\partial A} \operatorname{trace} \left( A B A^T \right) = A B^T + A B, \qquad \frac{\partial}{\partial A} \operatorname{trace}(ABA) = A^T B^T + B^T A^T$$
$$\frac{\partial}{\partial A} \operatorname{trace}(BACA) = C^T A^T B^T + B^T A^T C^T, \qquad \frac{\partial}{\partial A} \operatorname{trace} \left( B A C A^T \right) = B^T A C^T + B A C$$
$$\frac{\partial}{\partial A} \operatorname{trace} \left( A^T A \right) = 2A, \qquad \frac{\partial}{\partial A} \operatorname{trace} \left( e^A \right) = \left( e^A \right)^T, \qquad \frac{\partial}{\partial A} \operatorname{trace} \left( A^k \right) = k \left( A^{k-1} \right)^T$$
$$\frac{\partial}{\partial A} \det(BAC) = \det(BAC) \left( A^{-1} \right)^T$$
$$\frac{\partial}{\partial A} \operatorname{trace} \left( B \left( A^T A \right)^2 B^T \right) = 2 A A^T A B^T B + 2 A B^T B A^T A$$

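A finite-difference check of the first trace identity (my own sketch; the map is linear in $A$, so central differences are exact):

    import numpy as np

    rng = np.random.default_rng(11)
    n = 3
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))

    grad = B.T @ C.T   # analytic: d/dA trace(BAC) = B^T C^T

    eps, num = 1e-6, np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = eps
            num[i, j] = (np.trace(B @ (A + E) @ C) - np.trace(B @ (A - E) @ C)) / (2 * eps)

    print(np.allclose(grad, num, atol=1e-5))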
