Lecture 45
Calculus Presentation 45

Matrix Operations
Inverse of a Matrix
Characteristics of Invertible Matrices
Partitioned Matrices
Matrix Factorization
Iterative Solutions of Linear Systems
Vector Spaces and Subspaces
Null Spaces, Column Spaces, and Linear Transformations
Linearly Independent Sets; Bases
Coordinate Systems
Dimension of a Vector Space
Rank
Change of Basis
Applications to Difference Equations
Let V be an arbitrary nonempty set of objects on which two operations are defined: addition and multiplication by scalars. If the following axioms are satisfied by all objects u, v, w in V and all scalars l, m, n, then we call V a vector space.
Axioms of a Vector Space
For any vectors u, v, w in V and scalars l, m, n:
1. u + v is in V
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There exists a zero vector 0 such that 0 + u = u + 0 = u
5. There exists a vector -u in V such that -u + u = 0 = u + (-u)
6. l u is in V
7. l (u + v) = l u + l v
8. m (n u) = (m n) u = n (m u)
9. (l + m) u = l u + m u
10. 1 u = u, where 1 is the multiplicative identity
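For instance, R^3 with the usual componentwise operations satisfies these axioms. A minimal Python sketch that spot-checks a few of them; the specific vectors and scalars are arbitrary illustrations:

```python
import numpy as np

# Spot-check of a few vector space axioms in R^3; vectors and scalars are arbitrary.
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))
l, m = 2.0, -1.5

print(np.allclose(u + v, v + u))                 # axiom 2: commutativity
print(np.allclose(u + (v + w), (u + v) + w))     # axiom 3: associativity
print(np.allclose(l * (u + v), l * u + l * v))   # axiom 7: distributivity over vector addition
print(np.allclose((l + m) * u, l * u + m * u))   # axiom 9: distributivity over scalar addition
```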
A subset W of a vector space V is called a subspace of V if W itself is a vector space under the addition and scalar multiplication defined on V.
If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold:
(a) If u and v are vectors in W, then u + v is in W.
(b) If k is any scalar and u is any vector in W, then k u is in W.
The null space of an m x n matrix A (written Nul A) is the set of all solutions of the homogeneous equation Ax = 0:
Nul A = {x : x is in R^n and Ax = 0}
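A minimal Python sketch of computing a basis for Nul A numerically, using SciPy's null_space on an illustrative matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative 2x3 matrix with a 2-dimensional null space (second row = 2 * first row).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

N = null_space(A)                 # columns form an orthonormal basis for Nul A
print(N.shape[1])                 # 2: the dimension of Nul A
print(np.allclose(A @ N, 0.0))    # True: every basis vector solves Ax = 0
```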
The column space of an m x n matrix A (written Col A) is the set of all linear combinations of the columns of A.
The column space of a matrix A is a subspace of R^m.
A system of linear
equations Ax = b is
consistent if and only
if b is in the column
space of A.
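A minimal Python sketch of this consistency test, comparing the rank of A with the rank of the augmented matrix [A b] for an illustrative A and b:

```python
import numpy as np

# Illustrative system: b is chosen as 2*(column 1) + 3*(column 2) of A.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([[2.0], [3.0], [5.0]])

# Ax = b is consistent exactly when appending b to A does not raise the rank,
# i.e. when b is a linear combination of the columns of A.
consistent = np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)
print(consistent)   # True for this choice of b
```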
A linear transformation
T from V into W is a rule
that assigns to each
vector x in V a unique
vector T (x) in W, such
that
(i) T (u + v) = T (u) + T (v)
for all u, v in V, and
(ii) T (cu) = c T (u) for all u
in V and all scalars c
The kernel (or null
space) of such a T is
the set of all u in V
such that T (u) = 0.
An indexed set of vectors {v1, ..., vp} in V is said to be linearly independent if the vector equation
c1 v1 + c2 v2 + ... + cp vp = 0   (1)
has only the trivial solution c1 = 0, c2 = 0, ..., cp = 0.
The set {v1, ..., vp} is said to be linearly dependent if (1) has a nontrivial solution, that is, if there are some weights c1, ..., cp, not all zero, such that (1) holds. In such a case, (1) is called a linear dependence relation among v1, ..., vp.
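A minimal Python sketch of testing linear independence numerically: the columns of a matrix are independent exactly when the matrix rank equals the number of columns. The vectors here are an arbitrary illustration:

```python
import numpy as np

# Illustrative vectors v1, v2, v3 placed as the columns of V.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

# {v1, ..., vp} is linearly independent iff equation (1) has only the trivial
# solution, i.e. iff rank V equals the number of vectors p.
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)   # False: here v3 = v1 + v2 is a linear dependence relation
```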
Spanning Set Theorem
Let S = {v1, ..., vp} be a set in V and let H = Span {v1, ..., vp}.
(a) If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.
(b) If H ≠ {0}, some subset of S is a basis for H.
Suppose the set B = {b1, ..., bn} is a basis for V and x is in V. The coordinates of x relative to the basis B (or the B-coordinates of x) are the weights c1, ..., cn such that
x = c1 b1 + ... + cn bn.
If c1, c2, ..., cn are the B-coordinates of x, then the vector in R^n
[x]_B = (c1, ..., cn)   (written as a column vector)
is the coordinate vector of x (relative to B), or the B-coordinate vector of x. The mapping x -> [x]_B is the coordinate mapping (determined by B).
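A minimal Python sketch of the coordinate mapping for an illustrative basis of R^2: the B-coordinates of x are found by solving the linear system whose coefficient matrix has the basis vectors as columns:

```python
import numpy as np

# Illustrative basis B = {b1, b2} for R^2.
b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 2.0])
P_B = np.column_stack([b1, b2])   # basis vectors as columns

x = np.array([3.0, 4.0])

# The B-coordinates of x are the weights c with c1*b1 + c2*b2 = x.
x_B = np.linalg.solve(P_B, x)
print(x_B)                        # [1. 2.], so x = 1*b1 + 2*b2
```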
If V is spanned by a finite
set, then V is said to be
finite-dimensional, and the
dimension of V, written as
dim V, is the number of
vectors in a basis for V.

The dimension of the zero
vector space {0} is defined
to be zero.
If V is not spanned by a
finite set, then V is said to
be infinite-dimensional.
The pivot columns
of a matrix A form
a basis for Col A.
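A minimal Python sketch using SymPy's rref to locate the pivot columns of an illustrative matrix; those columns of A itself (not of the reduced form) give a basis for Col A:

```python
import sympy as sp

# Illustrative matrix; its second column is twice its first.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 7],
               [1, 2, 4]])

_, pivot_cols = A.rref()                # rref returns (reduced matrix, pivot column indices)
basis = [A.col(j) for j in pivot_cols]  # pivot columns of A form a basis for Col A
print(pivot_cols)                       # (0, 2): columns 1 and 3 of A
```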
The Basis Theorem
Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent set of exactly p elements in V is automatically a basis for V. Any set of exactly p elements that spans V is automatically a basis for V.
The dimension of Nul A is the number of free variables in the equation Ax = 0.
The dimension of Col A is the number of pivot columns in A.
The rank of A is the dimension of the column space of A. Since Row A is the same as Col A^T, the dimension of the row space of A is the rank of A^T. The dimension of the null space is sometimes called the nullity of A.
The Rank Theorem
The dimensions of the column
space and the row space of an
m x n matrix A are equal. This
common dimension, the rank of
A, also equals the number of
pivot positions in A and
satisfies the equation
rank A + dim Nul A = n
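A minimal Python sketch verifying the Rank Theorem numerically on an illustrative 2 x 4 matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative 2x4 matrix (m = 2, n = 4).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)
dim_nul = null_space(A).shape[1]
print(rank, dim_nul)                 # 2 2
print(rank + dim_nul == A.shape[1])  # True: rank A + dim Nul A = n
```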
If A is an m x n matrix, then
(a) rank(A) = the number of leading variables in the solution of Ax = 0;
(b) nullity(A) = the number of parameters in the general solution of Ax = 0.
If A is any matrix, then rank(A) = rank(A^T).
The Four Fundamental Matrix Spaces
Row space of A
Column space of A
Null space of A
Null space of A^T
Let A be an n x n matrix. Then the following statements are each equivalent to the statement that A is an invertible matrix:
The columns of A form a basis of R^n.
Col A = R^n
dim Col A = n
rank A = n
Nul A = {0}
dim Nul A = 0
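A minimal Python sketch checking two of these equivalent conditions for an illustrative invertible matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative invertible 2x2 matrix (determinant 1).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.matrix_rank(A) == A.shape[0])   # True: rank A = n
print(null_space(A).shape[1] == 0)              # True: dim Nul A = 0, so Nul A = {0}
```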
Let B = {b1, ..., bn} and C = {c1, ..., cn} be bases of a vector space V. Then there is an n x n matrix P_{C<-B} such that
[x]_C = P_{C<-B} [x]_B.
The columns of P_{C<-B} are the C-coordinate vectors of the vectors in the basis B. That is,
P_{C<-B} = [ [b1]_C  [b2]_C  ...  [bn]_C ].
Observe that
(P_{C<-B})^{-1} [x]_C = [x]_B   and   (P_{C<-B})^{-1} = P_{B<-C}.
Given scalars a_0, ..., a_n, with a_0 and a_n nonzero, and given a signal {z_k}, the equation
a_0 y_{k+n} + a_1 y_{k+n-1} + ... + a_{n-1} y_{k+1} + a_n y_k = z_k   for all k
is called a linear difference equation (or linear recurrence relation) of order n.
For simplicity, a_0 is often taken equal to 1. If {z_k} is the zero sequence, the equation is homogeneous; otherwise, the equation is nonhomogeneous.
If an 0 and if {zk} is given,
the equation
yk+n+a1yk+n-1+…+an-1yk+1+anyk=zk,
for all k has a unique
solution whenever y0,…, yn-1
are specified.
The set H of all solutions of the nth-order homogeneous linear difference equation
y_{k+n} + a_1 y_{k+n-1} + ... + a_{n-1} y_{k+1} + a_n y_k = 0   for all k
is an n-dimensional vector space.
Reduction to Systems of First-Order Equations
A modern way to study a homogeneous nth-order linear difference equation is to replace it by an equivalent system of first-order difference equations, written in the form
x_{k+1} = A x_k   for k = 0, 1, 2, ...,
where the vectors x_k are in R^n and A is an n x n matrix.
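A minimal Python sketch of this reduction for the illustrative second-order equation y_{k+2} - 3 y_{k+1} + 2 y_k = 0, iterating the companion-matrix system x_{k+1} = A x_k:

```python
import numpy as np

# Illustrative 2nd-order homogeneous equation y_{k+2} - 3 y_{k+1} + 2 y_k = 0,
# rewritten as x_{k+1} = A x_k with x_k = (y_k, y_{k+1}).
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])        # companion matrix: y_{k+2} = 3 y_{k+1} - 2 y_k

x = np.array([1.0, 2.0])           # initial values y_0 = 1, y_1 = 2
ys = [x[0]]
for _ in range(6):
    x = A @ x                      # one step of the first-order system
    ys.append(x[0])
print(ys)                          # [1, 2, 4, 8, 16, 32, 64]: here y_k = 2^k
```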
