
Email [email protected] if you find any typos or errors. Thanks!

Things you should be able to prove:


These are just some of the things you are responsible for proving.
You are also responsible for all the computational problems and the assigned homework.

1. Properties of Matrix Arithmetic


You should know all the properties and be able to prove them. (page 38)

2. Systems of equations
• If A is an m × n matrix and x is an n × 1 column vector, then the product Ax can be expressed
as a linear combination of the column vectors of A. (See the numerical sketch after this list.)
• Two matrices have the same reduced row echelon form if and only if they are row equivalent.

• (Theorem 1.6.1) A system of linear equations has zero, one, or infinitely many solutions. There
are no other possibilities.
• (Problem 1.6.22) Let Ax = 0 be a homogeneous system of n linear equations in n unknowns,
and let Q be an invertible n × n matrix. Show that Ax = 0 has just the trivial solution if and only
if (QA)x = 0 has just the trivial solution.

• (Problem 1.6.23) Let Ax = b be any consistent system of linear equations, and let x1 be a fixed
solution. Show that every solution to the system can be written in the form x = x1 + x0, where x0
is a solution to Ax = 0 (the homogeneous system). Show also that every vector of this form is a
solution.
• For a constant k and a matrix A, if kA = 0 then either k = 0 or A = 0. (0 is a scalar or a zero
matrix of appropriate dimensions.)
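
A minimal numerical sketch of the column-combination view of Ax mentioned in the first bullet above (assuming NumPy; the matrix and vector are made-up examples):

```python
# Ax equals the linear combination x_1*c_1 + ... + x_n*c_n of the columns of A.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])          # 2 x 3 matrix
x = np.array([2.0, -1.0, 0.5])            # 3 x 1 column vector

as_product = A @ x                         # direct matrix-vector product
as_combination = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(np.allclose(as_product, as_combination))  # True
```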

3. Transpose operation
• If A and B are matrices of appropriate dimensions (see the numerical check after this list),
– (AT )T = A.
– (A + B)T = AT + BT .
– (A − B)T = AT − BT .
– (AB)T = BT AT .
– (kA)T = kAT .
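
A quick numerical check of the transpose rules above on random matrices (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
k = 2.5

print(np.allclose((A @ B).T, B.T @ A.T))   # (AB)^T = B^T A^T, note the reversed order
print(np.allclose((k * A).T, k * A.T))     # (kA)^T = k A^T
print(np.allclose(A.T.T, A))               # (A^T)^T = A
```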

4. Inverses (Assume all matrices are square and of appropriate dimensions)


• If A is an invertible matrix,
– (A−1 )−1 = A.
– Am is also invertible and (Am )−1 = (A−1 )m = A−m .
– AB is also invertible and (AB)−1 = B−1 A−1 .
– kA is also invertible and (kA)−1 = k−1 A−1 (note that for us k−1 = 1/k).
• If B and C are both inverses of a matrix A then B = C. (Uniqueness of the inverse)
• If A is an idempotent matrix then so is I − A.

• If A is an idempotent matrix then (2A − I) is invertible and is its own inverse. (See the derivation after this list.)


• If A and B are square matrices of appropriate dimensions and BA = I then B = A−1 .
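
The idempotent bullet above reduces to a one-line expansion; a sketch of the computation:

$$(2A - I)^2 = 4A^2 - 4A + I = 4A - 4A + I = I \quad (\text{using } A^2 = A),$$

so (2A − I) is invertible and is its own inverse.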

5. Symmetry
• If A and B are symmetric matrices then:
– AT is symmetric.
– A + B is symmetric.
– A − B is symmetric.
– kA is symmetric.
– A−1 is symmetric.
– AB is symmetric if and only if A and B commute.

• If AT A = A then A is symmetric and A = A2 .


• If A is invertible and skew-symmetric then A−1 is skew-symmetric.
• If A and B are skew-symmetric then so are AT , A + B, A − B, and kA.
• Every square matrix can be written as the sum of a symmetric and a skew-symmetric matrix. (See the identity below.)
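
For the last bullet, the decomposition is explicit; a sketch of the identity used in the proof:

$$A = \underbrace{\tfrac{1}{2}\left(A + A^T\right)}_{\text{symmetric}} + \underbrace{\tfrac{1}{2}\left(A - A^T\right)}_{\text{skew-symmetric}},$$

and one checks $\left(\tfrac{1}{2}(A + A^T)\right)^T = \tfrac{1}{2}(A^T + A)$ while $\left(\tfrac{1}{2}(A - A^T)\right)^T = -\tfrac{1}{2}(A - A^T)$.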

6. Determinant

• If A is a square matrix and A′ is obtained from A by switching two rows of A, then we have
det(A′) = − det(A).
– Note: I know there are probably easier proofs for this theorem, but I take this opportunity to
use a proof by induction.
Proof: We prove by induction on the dimension of A. It is clear that the property holds for
every 2 × 2 matrix (check). Suppose that for some k ≥ 2, every k × k matrix B satisfies
det(B′) = − det(B), where B′ is obtained from B by switching two rows. Let A be a
(k + 1) × (k + 1) matrix, and let A′ be obtained by switching rows i and j of A. The determinant
of A′ can be obtained by cofactor expansion of A′ along the p-th row, where p ≠ i, j. We have

$$\det(A') = \sum_{q=1}^{k+1} (-1)^{p+q}\, a_{pq}\, M'_{pq}.$$

Expanding A along the same row we get

$$\det(A) = \sum_{q=1}^{k+1} (-1)^{p+q}\, a_{pq}\, M_{pq}.$$

The minors M′_{pq} and M_{pq} correspond to the same k × k submatrices, where one is obtained
by switching two of the rows of the other. Hence, by the induction hypothesis, M′_{pq} = −M_{pq}, and

$$\det(A') = \sum_{q=1}^{k+1} (-1)^{p+q}\, a_{pq}\, M'_{pq} = \sum_{q=1}^{k+1} (-1)^{p+q}\, a_{pq}\, \left(-M_{pq}\right) = -\det(A). ///$$
   
• For all i (where the a_j denote the rows of the matrix),

$$\det\begin{pmatrix} a_1 \\ \vdots \\ k a_i \\ \vdots \\ a_n \end{pmatrix} = k \det\begin{pmatrix} a_1 \\ \vdots \\ a_i \\ \vdots \\ a_n \end{pmatrix}.$$
• If A ∈ M_{n×n} then det(kA) = k^n det(A).
• If one row of a square matrix A is a nonzero multiple of another row of A then det(A) = 0.
     
• If a_i and b_i are n-dimensional rows, then

$$\det\begin{pmatrix} a_1 \\ \vdots \\ a_i + b_i \\ \vdots \\ a_n \end{pmatrix} = \det\begin{pmatrix} a_1 \\ \vdots \\ a_i \\ \vdots \\ a_n \end{pmatrix} + \det\begin{pmatrix} a_1 \\ \vdots \\ b_i \\ \vdots \\ a_n \end{pmatrix}.$$
   
• The third type of elementary row operation does not change the determinant:

$$\det\begin{pmatrix} a_1 \\ \vdots \\ a_j + k a_i \\ \vdots \\ a_n \end{pmatrix} = \det\begin{pmatrix} a_1 \\ \vdots \\ a_j \\ \vdots \\ a_n \end{pmatrix}.$$

• For every square matrix A and elementary matrix E det(EA) = det(E) det(A).
• For all square matrices A and B of the same dimensions, det(AB) = det(A) det(B).
• If A is invertible, det(A−1) = 1/det(A).
• The product of a row of a square matrix with the column vector of the cofactors of a different row is zero.
• For an invertible matrix A, A−1 = adj(A)/det(A).
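
A minimal numerical spot-check of several of the determinant facts above (a sketch assuming NumPy; the random matrices are made-up examples):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
k = 3.0

A_swapped = A[[1, 0, 2, 3], :]   # switch rows 0 and 1
print(np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A)))          # sign flips
print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))        # det(kA) = k^n det(A)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))
```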

7. Euclidean Vector Spaces


• (Parallelogram Equation) If u and v are vectors in Rn, then

$$\|u + v\|^2 + \|u - v\|^2 = 2\left(\|u\|^2 + \|v\|^2\right).$$
• If A is an n × n matrix and u and v are n × 1 matrices, then

$$Au \cdot v = u \cdot A^T v, \qquad u \cdot Av = A^T u \cdot v.$$
• Orthogonal decomposition of a vector (find the orthogonal and projection components of a vector
with respect to a given vector)
• If A is an m × n matrix, then the solution set of the homogeneous linear system Ax = 0 consists
of all vectors in Rn that are orthogonal to every row vector of A.
• The general solution of a consistent linear system Ax = b can be obtained by adding any specific
solution of Ax = b to the general solution of Ax = 0.
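
A minimal NumPy sketch of the parallelogram equation and the A / A^T dot-product identities from this section, on made-up random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(5)
v = rng.standard_normal(5)
A = rng.standard_normal((5, 5))

lhs = np.linalg.norm(u + v)**2 + np.linalg.norm(u - v)**2
rhs = 2 * (np.linalg.norm(u)**2 + np.linalg.norm(v)**2)
print(np.isclose(lhs, rhs))                       # parallelogram equation

print(np.isclose((A @ u) @ v, u @ (A.T @ v)))     # Au . v = u . A^T v
print(np.isclose(u @ (A @ v), (A.T @ u) @ v))     # u . Av = A^T u . v
```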
8. Vector Spaces
• Real-valued functions defined on R form a vector space with the usual addition of functions and
multiplication by real scalars.
• The finite intersection of subspaces of a vector space is a subspace of that vector space. The union
of subspaces is not necessarily a subspace.
• The span of a set of vectors in a vector space is the smallest subspace of that space containing those vectors.
• The solution set of a homogeneous linear system Ax = 0 in n unknowns is a subspace of Rn .
• If S = {v1, v2, . . . , vr} and S′ = {w1, w2, . . . , wk} are nonempty sets of vectors in a vector space V,
then

span{v1, v2, . . . , vr} = span{w1, w2, . . . , wk}

if and only if each vector in S is a linear combination of those in S′, and each vector in S′ is a linear
combination of those in S.
• A set S with two or more vectors is linearly dependent if and only if at least one of the vectors in
S is expressible as a linear combination of the other vectors in S.
• If the Wronskian of n functions is not identically zero, then those functions form a linearly independent
set of vectors in C^(n−1)(−∞, ∞). (No conclusion can be drawn if the Wronskian IS identically zero; see the sketch below.)
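
A quick symbolic check of the Wronskian criterion (a sketch assuming SymPy is available; the functions 1, x, x² are a made-up example):

```python
# W(1, x, x^2) = 2, which is not identically zero,
# so {1, x, x^2} is linearly independent in C^2(-oo, oo).
from sympy import S, symbols, simplify, wronskian

x = symbols('x')
W = wronskian([S(1), x, x**2], x)   # det of the matrix of successive derivatives
print(simplify(W))                   # 2
```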

• Any set of vectors that has a linearly dependent subset is linearly dependent.
• Any subset of a linearly independent set of vectors is linearly independent.
• (Uniqueness of Basis Representation) If S = {v1 , v2 , . . . , vn } is a basis for a vector space V , then
every vector v in V can be represented in the form v = c1 v1 + c2 v2 + · · · + cn vn in exactly one way.
• (no proof) If a vector space has a basis with n vectors, then any set with more than n vectors is
linearly dependent and any set with fewer than n vectors does not span that vector space.
• (no proof) All bases for a finite-dimensional vector space have the same number of vectors.
• (Plus/Minus Theorem, no proof) If S is a linearly dependent set of vectors in a vector space V ,
then we can remove a vector in S that can be expressed as a linear combination of other vectors
in S and the span of the resulting set is the same as the span of S. On the other hand, if S is a
linearly independent set of vectors in V then the union of S and a vector from V that is not in the
span of S is also linearly independent.
• If V is an n-dimensional vector space and S is a subset of V with exactly n vectors, then S is a
basis for V if and only if either S is linearly independent or S spans V .
• (no proof) A linearly dependent finite set of vectors in a vector space V that spans V can be made
into a basis for V by removing appropriate vectors. A linearly independent finite set of vectors in
a vector space V can be made into a basis for V by inserting appropriate vectors from V.
• (no proof) If W is a subspace of a finite dimensional vector space V , dim(W ) ≤ dim(V ) with
equality if and only if W = V .
• (no proof) If P is the transition matrix from a basis B′ to a basis B for a finite dimensional vector
space V, then P is invertible and P−1 is the transition matrix from B to B′.
• (no proof) A system of linear equations Ax = b is consistent if and only if b is in the column space
of A.
• (no proof) Elementary row operations do not change the null space and row space of a matrix but
they do change the column space.
• (no proof) If A and B are row equivalent matrices, then a given set of column vectors of A is
linearly independent if and only if the corresponding column vectors of B are linearly independent.
Also, a given set of column vectors of A form a basis for the column space of A if and only if the
corresponding column vectors of B form a basis for the column space of B.
• (no proof) If a matrix R is in row echelon form, the row vectors with the leading 1's and the column
vectors with the leading 1's form bases for the row and column spaces of R, respectively.
• (Dimension Theorem for Matrices) If A is a matrix with n columns, then rank(A) + nullity(A) = n.
(See the numerical check below.)
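
A minimal NumPy sketch of the dimension theorem on a made-up matrix with a deliberately repeated row:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],    # multiple of row 1, so the rank drops
              [0.0, 1.0, 0.0, 1.0]])
n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank                      # dimension of the solution space of Ax = 0
print(rank, nullity, rank + nullity == n)   # 2 2 True
```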
• Let A be an m × n matrix
(a) (Overdetermined Case) If m > n then the linear system Ax = b is inconsistent for at least
one vector b in Rm . (Correction: your book is wrong; it says Rn which is not correct)
(b) (Underdetermined Case) If m < n, then for each vector b in Rm the linear system Ax = b is
either inconsistent or has infinitely many solutions.
• If A is any matrix, then rank(A) = rank(AT ).

• If A is an m × n matrix, then
(a) The null space of A and the row space of A are orthogonal complements in Rn .
(b) The null space of AT and the column space of A are orthogonal complements in Rm .
9. Transformations
• If TA : Rn → Rm and TB : Rn → Rm are matrix transformations, and if TA (x) = TB (x) for every
vector x in Rn , then A = B.
• Be able to describe and use the reflection, projection, rotation, contraction, dilation, and shear from
R2 to R2 only.
• Every linear transformation from Rn to Rm is a matrix transformation, and conversely, every matrix
transformation from Rn to Rm is a linear transformation.
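
A minimal NumPy sketch of the last bullet: the standard matrix of a linear map has columns T(e_1), ..., T(e_n). Here T is rotation of the plane by an angle t (a made-up example):

```python
import numpy as np

t = np.pi / 6

def T(x):
    # rotate x in R^2 counterclockwise by t
    return np.array([np.cos(t) * x[0] - np.sin(t) * x[1],
                     np.sin(t) * x[0] + np.cos(t) * x[1]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])     # standard matrix [T(e1) T(e2)]

x = np.array([2.0, -1.0])
print(np.allclose(A @ x, T(x)))         # True: T(x) = Ax
```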
10. Eigenvalues and Eigenvectors

• (No proof) If A is an n × n matrix, TFAE:


(a) λ is an eigenvalue of A.
(b) The system of equations (λI − A)x = 0 has nontrivial solutions.
(c) There is a nonzero vector x such that Ax = λx.
(d) λ is a solution of the characteristic equation det(λI − A) = 0.
• (No proof) If A is an n × n triangular matrix (upper, lower, or diagonal) then the eigenvalues of A
are the entries on the main diagonal of A.
• If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector,
then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.
• If A is an n × n matrix, the following statements are equivalent.
(a) A is diagonalizable.
(b) A has n linearly independent eigenvectors.
• (No proof) If v1 , v2 , . . . , vk are eigenvectors of a matrix A corresponding to distinct eigenvalues,
then {v1 , v2 , . . . , vk } is a linearly independent set.
• (Use the above theorem to prove) If an n×n matrix A has n distinct eigenvalues, then A is diagonalizable.
• (No proof) If A is a square matrix, then:
(a) For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic
multiplicity.
(b) A is diagonalizable if and only if the geometric multiplicity of every eigenvalue is equal to the
algebraic multiplicity.
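
A minimal NumPy sketch of diagonalization and of the λ^k bullet above, on a made-up matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # distinct eigenvalues 5 and 2
eigvals, P = np.linalg.eig(A)           # eigenvectors are the columns of P
D = np.diag(eigvals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # A = P D P^{-1}

# lambda^k is an eigenvalue of A^k with the same eigenvector:
k = 3
x = P[:, 0]
print(np.allclose(np.linalg.matrix_power(A, k) @ x, eigvals[0]**k * x))
```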

11. Inner product

• If u and v are vectors in a real inner product space V , then

$$|\langle u, v \rangle| \le \|u\|\, \|v\|.$$



• If W is a subspace of an inner product space V, then:


(a) W ⊥ is a subspace of V .
(b) W ∩ W ⊥ = {0}.
• (No proof) If W is a subspace of a finite-dimensional inner product space V, then the orthogonal
complement of W⊥ is W; that is, (W⊥)⊥ = W.

• For every inner product space V , if w is orthogonal to each of the vectors u1 , u2 , . . . ur then it is
orthogonal to every vector in span{u1 , u2 , . . . ur }.
• If {v1, v2, . . . , vr} is a basis for an inner product space V, then the zero vector is the only vector in V
that is orthogonal to all of the basis vectors.

• If S = {v1, v2, . . . , vn} is an orthogonal set of nonzero vectors in an inner product space, then S is
linearly independent.
• If S = {v1 , v2 , . . . , vn } is an orthogonal basis for an inner product space V , and if u is any vector
in V , then
$$u = \frac{\langle u, v_1 \rangle}{\|v_1\|^2}\, v_1 + \frac{\langle u, v_2 \rangle}{\|v_2\|^2}\, v_2 + \cdots + \frac{\langle u, v_n \rangle}{\|v_n\|^2}\, v_n.$$

• If S = {v1 , v2 , . . . , vn } is an orthonormal basis for an inner product space V , and if u is any vector
in V , then

$$u = \langle u, v_1 \rangle v_1 + \langle u, v_2 \rangle v_2 + \cdots + \langle u, v_n \rangle v_n.$$

• Every nonzero finite-dimensional inner product space has an orthonormal basis.


• If W is a finite-dimensional inner product space, then
(a) Every orthogonal set of nonzero vectors in W can be enlarged to an orthogonal basis for W .
(b) Every orthonormal set in W can be enlarged to an orthonormal basis for W .
• (QR-Decomposition) If A is an m × n matrix with linearly independent column vectors, then A
can be factored as

A = QR

where Q is an m × n matrix with orthonormal column vectors, and R is an n × n invertible upper
triangular matrix.
• (Best Approximation Theorem) If W is a finite-dimensional subspace of an inner product space V ,
and if b is a vector in V , then projW b is the best approximation to b from W in the sense that

||b − projW b|| < ||b − w||

for every vector w in W that is different from projW b.



• (No proof) For every linear system Ax = b, the associated normal system

AT Ax = AT b

is consistent, and all solutions of it are least squares solutions of Ax = b. Moreover, if W is the
column space of A, and x is any least squares solution of Ax = b, then the orthogonal projection
of b on W is projW b = Ax.
• If A is an m × n matrix, then A has linearly independent column vectors if and only if AT A is
invertible.

• If A is an m × n matrix with linearly independent column vectors, and if A = QR is a
QR-decomposition of A, then for each b in Rm the system Ax = b has a unique least squares
solution given by x = R−1 QT b. (See the sketch below.)
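
A minimal NumPy sketch of the least squares solution via QR, checked against the normal equations (the tall matrix is a made-up random example, so its columns are independent with probability 1):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 6x3, R is 3x3
x_qr = np.linalg.solve(R, Q.T @ b)       # solve Rx = Q^T b (R is upper triangular)

x_ne = np.linalg.solve(A.T @ A, A.T @ b) # normal equations A^T A x = A^T b
print(np.allclose(x_qr, x_ne))           # True

proj_b = A @ x_qr                        # orthogonal projection of b on col(A)
print(np.allclose(A.T @ (b - proj_b), np.zeros(3)))  # residual is orthogonal to col(A)
```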

12. Equivalence Statements


If A is an n × n matrix, then TFAE:
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The rref of A is In .
(d) A is expressible as a product of elementary matrices.
(e) Ax = b is consistent for every n × 1 matrix b.
(f) Ax = b has exactly one solution for every n × 1 matrix b.
(g) det(A) ≠ 0.
(h) The column vectors of A are linearly independent.
(i) The row vectors of A are linearly independent.
(j) The column vectors of A span Rn.
(k) The row vectors of A span Rn.
(l) The column vectors of A form a basis for Rn.
(m) The row vectors of A form a basis for Rn.
(n) A has rank n.
(o) A has nullity 0.
(p) The orthogonal complement of the null space of A is Rn .
(q) The orthogonal complement of the row space of A is {0}.
(r) The range of TA is Rn .
(s) TA is one-to-one.
(t) λ = 0 is not an eigenvalue of A.
(u) AT A is invertible.
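
A minimal NumPy sketch spot-checking a few of these equivalences on one made-up invertible matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]

print(abs(np.linalg.det(A)) > 1e-12)                    # (g) det(A) != 0
print(np.linalg.matrix_rank(A) == n)                    # (n) rank n
print(n - np.linalg.matrix_rank(A) == 0)                # (o) nullity 0
print(not np.any(np.isclose(np.linalg.eigvals(A), 0)))  # (u... see (t)) 0 is not an eigenvalue
print(abs(np.linalg.det(A.T @ A)) > 1e-12)              # (u) A^T A invertible
```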

Things you should know or be able to calculate:
• Gaussian and Gauss-Jordan elimination.
• Row echelon and reduced row echelon form.
• Solve a system using elimination.
• Find the general solution.
• Understand and be able to use the following theorems:
– If a homogeneous linear system has n unknowns and if rref of its augmented matrix has r
nonzero rows then the system has n − r free variables.
– A homogeneous linear system of more unknowns than equations has infinitely many solutions.
• Different representations of matrix multiplication, A_{m×p} B_{p×n} (illustrated in the sketch after this list):
– $(AB)_{ij} = \sum_{r=1}^{p} (A)_{ir} (B)_{rj}$.
– If B = [b1 b2 . . . bn], then AB = [Ab1 Ab2 . . . Abn].
– If A has rows a1, a2, . . . , am, then the rows of AB are a1B, a2B, . . . , amB.
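
A minimal NumPy sketch checking all three representations on small made-up random matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

# entrywise: (AB)_ij = sum_r A_ir B_rj
C = np.array([[sum(A[i, r] * B[r, j] for r in range(3)) for j in range(4)]
              for i in range(2)])
print(np.allclose(C, A @ B))

# column by column: AB = [Ab1 Ab2 ... Abn]
print(np.allclose(np.column_stack([A @ B[:, j] for j in range(4)]), A @ B))

# row by row: the i-th row of AB is (row i of A) times B
print(np.allclose(np.vstack([A[i, :] @ B for i in range(2)]), A @ B))
```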
• Know the concept of linear combination.
• Find the transpose and the trace.
• Invertible and singular matrices.
• Find the inverse by Gauss-Jordan elimination.
• Find the inverse using the formula.
• Solve a system using inverse of the coefficient matrix.
• Calculate polynomials of matrices.
• Elementary matrices and their inverses.
• Write a matrix as product of elementary matrices.
• Row equivalent matrices.
• Find column vectors b for which the system Ax = b is consistent.
• Definition and properties of a diagonal matrix.
• Multiplication of a matrix by a diagonal matrix from left (multiplies the rows) and right (multiplies
the columns).
• Power and inverse of a diagonal matrix.

• Definition and properties of triangular matrices.


• Definition and properties of symmetric matrices.
• Formal definition of determinant of a matrix.
• Determinant of a 2 × 2 and 3 × 3 matrix using the arrow method.
• Determinant using the cofactor expansion.
• Properties of determinants.
• Adjoint of a square matrix A (the transpose of the matrix of cofactors).
• Cramer’s rule.
• Adding and subtracting vectors (parallelogram and triangle rule), and scalar multiplication.
• Components and equivalence of vectors
• Algebraic operations and linear combination of vectors
• Definition of a norm, Euclidean norm, distance, unit vector, and normalizing a vector
• Dot product (Euclidean inner product)
• Angle between two vectors
• Orthogonal vectors and the dot product
• Properties of the dot product (thm 3.2.2, 3.2.3)
• Cauchy-Schwarz Inequality
• Parallelogram Equation
• Equation of a line and a plane using the normal vector
• The distance of a point from a line or a plane, and the distance between two parallel lines or planes.
• Vector and parametric equations of a line in R2 and R3 and a plane in R3.
• Interpretation of the solution to a linear system of equations with infinitely many solutions as a
particular solution point plus the solution space of the homogeneous system.
• Cross product and its properties (thm 3.5.1, 3.5.2)
• Find the area of a parallelogram and the volume of a parallelepiped using the cross product and
the scalar triple product, respectively. (See the sketch below.)
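
A minimal NumPy sketch of both computations (the vectors are made-up examples):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.array([2.0, 0.0, 1.0])

area = np.linalg.norm(np.cross(u, v))        # area of the parallelogram spanned by u, v
volume = abs(np.dot(u, np.cross(v, w)))      # |u . (v x w)|, the scalar triple product

# the scalar triple product also equals det([u; v; w]):
print(np.isclose(volume, abs(np.linalg.det(np.vstack([u, v, w])))))
print(area, volume)
```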
• The definition of a vector space, its 10 properties, and verifying those properties for a particular
set of objects and given operations.
• Examples of sets with operations that do not form a vector space.
• Definition of a subspace.
• Verify a subset of a given vector space is a subspace (check closures).
• The two subspaces every space has and subspaces of Rn .

• A subset of M2×2 that is NOT a subspace.


• An example to show that finite union of subspaces of a vector space is NOT a subspace.

• Check if a set of vectors spans a particular vector space.


• Definition of linear independence (that’s the one on page 191 of your book).
• Verify the linear independence of a set of vectors in a vector space.

• The definition and calculation of the Wronskian of n functions.


• Definition of a basis for a vector space.
• Verify that a set of vectors form a basis for a vector space.
• Having a basis for a vector space, find the coordinates of any vector in that vector space relative
to that basis.
• The definition of the dimension of a vector space
• Finding a basis and dimension of a solution space
• Given two different bases of a vector space, find the transition matrix from one basis to the other

• Find the components of a vector relative to one basis given its components relative to a different
basis
• Row space, column space, null space
• Use row operations to find a basis for the row space, column space, null space

• Pick row vectors of a matrix to form a basis for the row space of that matrix. (Note that this is
different from the last one.)
• Find a vector form of the general solution of Ax = 0 and Ax = b.
• Find a basis for the null space of a matrix.

• Find a basis for a subspace spanned by a given set of vectors.


• Given a set of vectors S = {v1 , v2 , . . . , vk } in Rn , find a subset of these vectors that form a basis
for span(S), and express those vectors that are not in that basis as a linear combination of the basis
vectors.

• What the rank and the nullity of a matrix are.


• Use the dimension theorem to find the number of parameters in the general solution of Ax = 0.
• Fundamental spaces.
• Orthogonal complement of a subspace.

• Linear transformations and matrix transformations and their properties.


• Domain, codomain, and range of a transformation.
• Find the standard matrix for a matrix transformation.

• Composition of two or more transformations.


• Commutativity of the composition of transformations.

• Invertibility and one-to-oneness of transformations.


• Definition of eigenvalue, eigenvector, characteristic equation, characteristic polynomial.
• Find eigenvalues and a basis for the eigenspace associated with each eigenvalue.

• Similar matrices, similarity transformation, and similarity invariant properties of matrices (Table
1, pg 306).
• Geometric and algebraic multiplicity of an eigenvalue.
• Properties of an inner product and a norm defined on a vector space.

• Verify a function is an inner product.


• Different examples of inner products (Euclidean, weighted, matrix, and defined on C([a, b]) using
definite integrals).
• Define a norm using an inner product.

• Find the norm of a vector given an inner product.


• The angle between two vectors in an inner product space.
• Orthogonality of two vectors.
• Find a basis for the orthogonal complement of a subspace.

• Given a basis for a vector space, use the Gram-Schmidt process to find an orthonormal basis for that
vector space. (See the sketch below.)
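
A minimal sketch of the Gram-Schmidt process (classical form, assuming the inputs are linearly independent and using the Euclidean inner product; the vectors are a made-up example):

```python
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        # subtract the projection of v onto each orthonormal vector found so far
        for q in basis:
            v = v - np.dot(v, q) * q
        basis.append(v / np.linalg.norm(v))  # normalize the orthogonal component
    return np.array(basis)

V = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
Q = gram_schmidt(V)
print(np.allclose(Q @ Q.T, np.eye(3)))   # the rows are orthonormal
```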
• Find the projection of a vector on a subspace given an orthogonal basis for that subspace.
• What is meant by best approximation

• Find the least squares solution


• Use the concept of finding the least squares solution to find the projection of a vector onto a space.
• Find the matrix of the projection map using [P] = A(AT A)−1 AT (Example 3 in Section 6.4; see the sketch below).
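
A minimal NumPy sketch of the projection-matrix formula above, projecting onto the column space of a made-up matrix A:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                       # independent columns
P = A @ np.linalg.inv(A.T @ A) @ A.T             # [P] = A (A^T A)^{-1} A^T

b = np.array([1.0, 2.0, 3.0])
p = P @ b                                        # projection of b on col(A)
print(np.allclose(P @ P, P), np.allclose(P.T, P))  # P is idempotent and symmetric
print(np.allclose(A.T @ (b - p), 0))             # residual is orthogonal to col(A)
```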

• Given a set of points, be able to find the best-fitting linear, quadratic, and cubic function. (See the sketch below.)
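
A minimal NumPy sketch of polynomial least squares fitting: build the Vandermonde design matrix and solve the normal equations (the data points are made up, roughly following y = x² + 1):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 1.9, 4.2, 8.8, 17.1])       # roughly quadratic data

degree = 2
M = np.vander(xs, degree + 1)                    # columns x^2, x, 1
coeffs = np.linalg.solve(M.T @ M, M.T @ ys)      # normal equations M^T M c = M^T y

print(coeffs)                                    # close to [1, 0, 1] for y ≈ x^2 + 1
print(np.allclose(coeffs, np.polyfit(xs, ys, degree)))  # agrees with np.polyfit
```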
