Chap 03: Transformations on Vector Spaces
Think of linear transformations, or operators, on vector spaces as similar to functions: a rule that associates a vector in one space with a vector in another (or possibly the same) space.
DEFINITION: A transformation $A: X \to Y$ from vector space $X$ into vector space $Y$ is linear if
$$A(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 A x_1 + \alpha_2 A x_2$$
for all vectors $x_1, x_2 \in X$ and all scalars $\alpha_1, \alpha_2$.
Matrix representation: let $x \in X^n$ and $y \in X^m$, with basis $\{v_j\}$ for $X^n$ and basis $\{u_i\}$ for $X^m$. Write
$$A(v_j) = \sum_{i=1}^{m} a_{ij} u_i \qquad\text{and}\qquad y = \sum_{i=1}^{m} \beta_i u_i$$
Substituting these into $y = \sum_{j=1}^{n} \alpha_j A(v_j)$ from the previous page,
$$y = \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{m} a_{ij} u_i$$
Matching coefficients of each $u_i$ gives $\beta_i = \sum_{j=1}^{n} a_{ij}\alpha_j$; that is, the representations satisfy $\beta = A\alpha$ with $A = [a_{ij}]$, written simply as $y = Ax$.
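As a minimal numerical sketch of this construction (in NumPy, with an arbitrary illustrative map that is not from the notes), the matrix is built column by column from the images of the basis vectors:

```python
import numpy as np

# Build the matrix representation column by column from the images of
# the basis vectors. The map below is an arbitrary illustrative choice.
def apply_map(v):
    # linear map (x1, x2, x3) -> (2*x2, x1 - x3)
    return np.array([2.0 * v[1], v[0] - v[2]])

n, m = 3, 2
A = np.zeros((m, n))
for j in range(n):
    v_j = np.zeros(n)
    v_j[j] = 1.0              # j-th standard basis vector of X^n
    A[:, j] = apply_map(v_j)  # its image gives column j, the a_ij

x = np.array([1.0, 2.0, 3.0])
print(A @ x)                  # matches apply_map(x): beta = A alpha
```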
EXAMPLE: Let $A$ be the operator that rotates a vector $x$ through an angle $\theta$ to produce $Ax$. Find the matrix representation for $A$:
$$A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
Try this on the vector $x = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, the point $(1,2)$; let $\theta$ be $30°$:
$$Ax = \begin{bmatrix} \cos(30°) & -\sin(30°) \\ \sin(30°) & \cos(30°) \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -0.134 \\ 2.23 \end{bmatrix}$$
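A quick numerical check of this example (a sketch using NumPy):

```python
import numpy as np

# Verify the 30-degree rotation of the point (1, 2).
theta = np.radians(30)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 2.0])
print(A @ x)  # approximately [-0.134, 2.232]
```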
EXAMPLE: Let A be the linear operator that forms an
orthogonal projection from a 3-D space into a 2-D space.
Suppose the 2-D space is the "x-y plane" of the 3-D space:
(Figure: a vector $x \in \mathbb{R}^3$ and its image $Ax$ in the $e_1$-$e_2$ plane; $A: \mathbb{R}^3 \to \mathbb{R}^2$.)
"Orthogonal" projection is along the z-axis. We could also project "along" any other line, but this wouldn't be orthogonal.
We can form the matrix representation for $A$ from the effect it has on the basis vectors. Let the basis for $\mathbb{R}^2$ be $\{e_1, e_2\}$ and the basis for $\mathbb{R}^3$ be $\{e_1, e_2, e_3\}$.
$$Ae_1 = e_1 = [e_1\ \ e_2]\begin{bmatrix}1\\0\end{bmatrix}$$
$$Ae_2 = e_2 = [e_1\ \ e_2]\begin{bmatrix}0\\1\end{bmatrix}$$
$$Ae_3 = 0 = [e_1\ \ e_2]\begin{bmatrix}0\\0\end{bmatrix} \quad\text{(the $z$ component is zeroed out)}$$
So
$$A = \begin{bmatrix}1&0&0\\0&1&0\end{bmatrix}, \quad\text{a } (2 \times 3) \text{ matrix}$$
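A one-line check of the projection matrix (a NumPy sketch; the test vector is an arbitrary choice):

```python
import numpy as np

# The (2 x 3) orthogonal-projection matrix from the example.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
x = np.array([3.0, -2.0, 7.0])  # arbitrary vector in R^3
print(A @ x)                    # [3., -2.]: the z-component is dropped
```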
We often see transformations from a space $X$ into itself (which results in a square matrix representation of $A$). It is possible for such a transformation to map representations in one basis into representations in a different basis, but we'll seldom find use for these. Suppose we consider a linear transformation that maps vectors from space $X$ into itself: what happens when the basis is changed?
For notation, let $\hat A$ be the transformation in the basis $\{\hat v_i\}$, and $A$ be the transformation in the basis $\{v_i\}$. Comparing to $y_v = A x_v$ from the previous page, we see that
$$A = B^{-1} \hat A B$$
And if the bases are both orthonormal, the inverse is equal to the transpose, so
$$A = B^T \hat A B$$
This is how we change the expression of a linear transformation $A$ from one basis into another basis, and it is called a similarity transformation.
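A small NumPy sketch of a similarity transformation with an orthonormal change of basis (the particular $B$ and $\hat A$ below are illustrative choices, not from the notes); note that the eigenvalues are unchanged, as expected for similar matrices:

```python
import numpy as np

# Orthonormal change of basis: B^{-1} = B^T, so A = B^T A_hat B.
theta = np.radians(45)
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # B.T @ B = I
A_hat = np.array([[2.0, 1.0],
                  [0.0, 3.0]])                   # operator in the {v_hat} basis
A = B.T @ A_hat @ B                              # similarity transformation
print(np.linalg.eigvals(A_hat))  # [2., 3.]
print(np.linalg.eigvals(A))      # [2., 3.] up to ordering/roundoff
```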
EXAMPLE: Consider the linear vector space of all polynomials in $s$ of degree less than 4, with constant coefficients (over the field of reals). One can show that the operator $A$ that takes a vector $v(s)$ and transforms it into
$$v''(s) + 2v'(s) + 3v(s)$$
is linear. Use the basis $\{e_i\} = \{s^3,\ s^2,\ s,\ 1\}$.
Now define a new basis:
$$\bar e_1 = e_1 - e_2 = [e_1\ \ e_2\ \ e_3\ \ e_4][1\ \ {-1}\ \ 0\ \ 0]^T$$
$$\bar e_2 = e_2 - e_3 = [e_1\ \ e_2\ \ e_3\ \ e_4][0\ \ 1\ \ {-1}\ \ 0]^T$$
$$\bar e_3 = e_3 - e_4 = [e_1\ \ e_2\ \ e_3\ \ e_4][0\ \ 0\ \ 1\ \ {-1}]^T$$
$$\bar e_4 = e_4 = [e_1\ \ e_2\ \ e_3\ \ e_4][0\ \ 0\ \ 0\ \ 1]^T$$
Stacking these coordinate columns forms $B^{-1}$, which of course gives us $B^{-1}$ instead of $B$:
$$B^{-1} = \begin{bmatrix}1&0&0&0\\-1&1&0&0\\0&-1&1&0\\0&0&-1&1\end{bmatrix} \qquad\text{from which we compute}\qquad B = \begin{bmatrix}1&0&0&0\\1&1&0&0\\1&1&1&0\\1&1&1&1\end{bmatrix}$$
The similarity transformation then gives the matrix representation of the operator in the new basis:
$$\bar A = \begin{bmatrix}3&0&0&0\\6&3&0&0\\8&4&3&0\\6&4&2&3\end{bmatrix}$$
How do we check this? First find the representation of our vector $v$ in the new basis. Take $v(s) = s^2 + 1$, which in the old basis is $v = [0\ \ 1\ \ 0\ \ 1]^T$. By definition,
$$\bar v = Bv = \begin{bmatrix}1&0&0&0\\1&1&0&0\\1&1&1&0\\1&1&1&1\end{bmatrix}\begin{bmatrix}0\\1\\0\\1\end{bmatrix} = \begin{bmatrix}0\\1\\1\\2\end{bmatrix}$$
$$(= 1\bar e_2 + 1\bar e_3 + 2\bar e_4 = (s^2 - s) + (s - 1) + 2 = s^2 + 1)$$
Now compute the differential operation within this basis:
$$\bar A \bar v = \begin{bmatrix}3&0&0&0\\6&3&0&0\\8&4&3&0\\6&4&2&3\end{bmatrix}\begin{bmatrix}0\\1\\1\\2\end{bmatrix} = \begin{bmatrix}0\\3\\7\\12\end{bmatrix}$$
$$(= 3\bar e_2 + 7\bar e_3 + 12\bar e_4 = 3(s^2 - s) + 7(s - 1) + 12(1) = 3s^2 + 4s + 5)$$
which is exactly $v'' + 2v' + 3v$ for $v(s) = s^2 + 1$.
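The whole example can be replayed numerically (a NumPy sketch using the matrices above):

```python
import numpy as np

# Coordinates: old basis {s^3, s^2, s, 1}; new basis {s^3-s^2, s^2-s, s-1, 1}.
B = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1]], dtype=float)        # v_bar = B v
A_bar = np.array([[3, 0, 0, 0],
                  [6, 3, 0, 0],
                  [8, 4, 3, 0],
                  [6, 4, 2, 3]], dtype=float)    # operator in the new basis
v = np.array([0, 1, 0, 1], dtype=float)          # v(s) = s^2 + 1, old basis
v_bar = B @ v
print(v_bar)          # [0. 1. 1. 2.]
print(A_bar @ v_bar)  # [0. 3. 7. 12.]  ->  3s^2 + 4s + 5
```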
Operations on Operators:
Operator Norms:
Sometimes we want to know how much $A$ can magnify $x$. We then give the operator $A$ a norm:
$$\|A\| = \sup_{x \neq 0} \frac{\|Ax\|}{\|x\|}, \qquad\text{or equivalently}\qquad \|A\| = \sup_{\|x\| = 1} \|Ax\|$$
Recall that there are many ways to define $\|x\|$. Consequently, there are many different matrix norms. They all follow the rules:
$$\|Ax\| \le \|A\| \cdot \|x\| \quad\text{for all } x$$
$$\|A_1 + A_2\| \le \|A_1\| + \|A_2\|$$
$$\|A_1 A_2\| \le \|A_1\| \cdot \|A_2\|$$
$$\|\alpha A\| = |\alpha| \cdot \|A\|$$
For example, with the Euclidean vector norm, the induced matrix norm is
$$\|A\|_2 = \max_{\|x\| = 1} \left\{ x^T A^T A x \right\}^{1/2}$$
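As a sketch (NumPy, with an arbitrary illustrative $A$), the supremum definition can be approximated by sampling unit vectors and compared against the exact induced 2-norm:

```python
import numpy as np

# Estimate ||A|| = sup over ||x|| = 1 of ||Ax|| by random sampling.
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
xs = rng.normal(size=(2, 100_000))
xs /= np.linalg.norm(xs, axis=0)             # normalize each sample to ||x|| = 1
print(np.linalg.norm(A @ xs, axis=0).max())  # approaches ||A||_2 from below
print(np.linalg.norm(A, 2))                  # exact spectral norm for comparison
```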
Linear equations: consider $y = Ax$, where $A$ is $(m \times n)$, $x$ is $(n \times 1)$, and $y$ is $(m \times 1)$; that is, $A: X^n \to X^m$ with $x \in X^n$ and $y \in X^m$. We wish to investigate whether solutions to this matrix equation exist and, if so, how many.
Conversely, when $r(A) \neq r(W)$ (with $W = [A\ \ y]$ the augmented matrix), NO solutions exist. Why? Because $y$ then cannot be written as a linear combination of the columns of $A$.
Ex. 5:
$$W = \begin{bmatrix}1&2&2\\3&4&3\\5&6&-4\end{bmatrix} \qquad W' = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$$
$$r(A) = 2, \quad r(W) = 3 \quad\Rightarrow\quad \text{NO solutions exist!}$$
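The rank test is easy to replicate (a NumPy sketch, assuming $W = [A\ \ y]$ as above):

```python
import numpy as np

# Ex. 5: compare r(A) with r(W), where W = [A  y] is the augmented matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
y = np.array([[2.0], [3.0], [-4.0]])
W = np.hstack([A, y])
print(np.linalg.matrix_rank(A))  # 2
print(np.linalg.matrix_rank(W))  # 3 -> ranks differ, so no solution exists
```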
If $A$ is square and nonsingular, there is a unique solution $x = A^{-1} y$. If not, there are two common cases: "overdetermined" ($m > n$) and "underdetermined" ($m < n$).
Underdetermined case ($m < n$): there is no possibility of a unique solution because
$$r(A) \le \min(n, m) = m < n$$
To pick the minimum-norm solution, form the "Hamiltonian"
$$H = \tfrac{1}{2} x^T x + \lambda^T (y - Ax)$$
and set its partial derivatives to zero:
$$\frac{\partial H}{\partial x} = x - A^T \lambda = 0 \qquad (1)$$
$$\frac{\partial H}{\partial \lambda} = y - Ax = 0 \qquad (2)$$
Note that (2) is the same as our "constraint" equation! From (1), $x = A^T \lambda$; substituting into (2) gives $y = A A^T \lambda$, so $\lambda = (A A^T)^{-1} y$ and
$$x = A^T (A A^T)^{-1} y$$
The matrix $A^T (A A^T)^{-1}$ is sometimes called a "pseudoinverse" of $A$.
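A minimal sketch of the minimum-norm solution (NumPy; the $A$ and $y$ below are arbitrary illustrative choices with full row rank):

```python
import numpy as np

# Underdetermined system (m < n): x = A^T (A A^T)^{-1} y.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])               # 2 equations, 3 unknowns
y = np.array([1.0, 2.0])
x = A.T @ np.linalg.solve(A @ A.T, y)         # solve instead of forming the inverse
print(A @ x)                                  # reproduces y exactly
print(np.allclose(x, np.linalg.pinv(A) @ y))  # agrees with the pseudoinverse
```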
Overdetermined case ($m > n$): in general no exact solution exists, so define the error $e = y - Ax$ and minimize
$$\tfrac{1}{2} e^T e = \tfrac{1}{2} (y - Ax)^T (y - Ax) = \tfrac{1}{2} (y^T - x^T A^T)(y - Ax)$$
Expanding,
$$\tfrac{1}{2} e^T e = \tfrac{1}{2} \left[ y^T y - x^T A^T y - y^T A x + x^T A^T A x \right]$$
The two middle terms are equal because they are scalars and transposes of each other! So
$$\tfrac{1}{2} e^T e = \tfrac{1}{2} \left[ y^T y - 2 x^T A^T y + x^T A^T A x \right]$$
Setting the derivative with respect to $x$ to zero:
$$\frac{\partial \left[ \tfrac{1}{2} e^T e \right]}{\partial x} = \tfrac{1}{2} \left[ -2 A^T y + 2 A^T A x \right] = 0$$
Solving:
$$x = (A^T A)^{-1} A^T y$$
Example: Suppose
$$A = \begin{bmatrix}2&2\\1&2\\1&0\end{bmatrix} \qquad\text{and}\qquad y = \begin{bmatrix}4\\3\\1\end{bmatrix}$$
Then
$$(A^T A)^{-1} A^T y = \begin{bmatrix} \tfrac{1}{3} & -\tfrac{1}{3} & \tfrac{2}{3} \\[2pt] 0 & \tfrac{1}{2} & -\tfrac{1}{2} \end{bmatrix} \begin{bmatrix}4\\3\\1\end{bmatrix} = \begin{bmatrix}1\\1\end{bmatrix} = x$$
To see how good an approximation this is, compute the error vector:
$$e = y - Ax = \begin{bmatrix}4\\3\\1\end{bmatrix} - \begin{bmatrix}2&2\\1&2\\1&0\end{bmatrix} \begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$$
so in this case the "approximation" is exact.
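Reproducing the example numerically (a NumPy sketch of the normal equations):

```python
import numpy as np

# Least squares: solve A^T A x = A^T y for the example above.
A = np.array([[2.0, 2.0],
              [1.0, 2.0],
              [1.0, 0.0]])
y = np.array([4.0, 3.0, 1.0])
x = np.linalg.solve(A.T @ A, A.T @ y)
print(x)          # [1. 1.]
print(y - A @ x)  # error vector e = [0. 0. 0.]
```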
Application: consider the discrete-time state equations
$$x(k+1) = A x(k) + B u(k)$$
$$y(k) = C x(k) + D u(k)$$
Stepping forward from $k = 0$:
$$x(1) = A x(0) + B u(0) \qquad\qquad y(0) = C x(0) + D u(0)$$
and, iterating,
$$x(3) = A^3 x(0) + A^2 B u(0) + A B u(1) + B u(2) \qquad y(2) = C A^2 x(0) + C A B u(0) + C B u(1) + D u(2)$$
Suppose we want to drive the state to zero at step $k$:
$$0 = x(k) = A^k x(0) + A^{k-1} B u(0) + A^{k-2} B u(1) + \cdots + B u(k-1)$$
Re-arrange:
$$\underbrace{-A^k x(0)}_{n \times 1} = \underbrace{\begin{bmatrix} B & \cdots & A^{k-2} B & A^{k-1} B \end{bmatrix}}_{\triangleq\, P} \begin{bmatrix} u(k-1) \\ \vdots \\ u(1) \\ u(0) \end{bmatrix}$$
We are allowing $x(0)$ to be any $n$-dimensional vector, so by our knowledge of linear equations we want to have $r(P) \ge n$. (We will find out later that the $P$ matrix cannot have rank greater than $n$.) Systems with this property are called controllable.
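A sketch of the rank check on $P$ (NumPy; the $(A, B)$ pair is an arbitrary illustrative choice, using $k = n$ block columns):

```python
import numpy as np

# Build P = [B, AB, ..., A^{k-1} B] and check whether r(P) >= n.
A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]
k = n  # number of block columns used here
P = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(k)])
print(np.linalg.matrix_rank(P) >= n)  # True -> this system is controllable
```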