
Comprehensive Undergraduate Linear Algebra

Prepared by Subhendu

Contents
1 Vector Spaces and Linear Maps
  1.1 Vector Spaces
  1.2 Subspaces
  1.3 Bases and Dimension
  1.4 Matrices
  1.5 Linear Maps
  1.6 Transformation Matrices of Linear Maps
  1.7 Change of Basis
  1.8 Systems of Linear Equations
  1.9 How to Find the Rank of a Matrix

2 Determinant and Eigenvalues
  2.1 Determinants
  2.2 Eigenvalues and Eigenvectors
  2.3 Diagonalization

3 Inner Product Spaces
  3.1 Inner Product and Orthogonality
  3.2 Orthogonal Maps
  3.3 Self-adjoint Maps

4 Group and Field: Definitions, Properties, and Examples

5 Group Theory
  5.1 Basic Definitions
  5.2 Subgroups
  5.3 Group Homomorphisms
  5.4 Normal Subgroups and Quotient Groups

6 Field Theory
  6.1 Fields
  6.2 Field Extensions
  6.3 Algebraic and Transcendental Elements
  6.4 Minimal Polynomial
  6.5 Algebraic Extensions

1 Vector Spaces and Linear Maps
1.1 Vector Spaces
Definition 1.1 (Vector Space). Let F be a field. A vector space over F is a set V
together with two operations:

• Vector addition + : V × V → V

• Scalar multiplication · : F × V → V

satisfying the following axioms for all u, v, w ∈ V and a, b ∈ F :

(i) u + v = v + u (Commutativity)

(ii) (u + v) + w = u + (v + w) (Associativity)

(iii) There exists 0 ∈ V such that v + 0 = v (Additive identity)

(iv) For each v ∈ V , there exists −v ∈ V such that v + (−v) = 0 (Additive inverse)

(v) a · (b · v) = (ab) · v (Compatibility)

(vi) 1 · v = v where 1 is the multiplicative identity in F

(vii) a · (u + v) = a · u + a · v (Distributivity over vector addition)

(viii) (a + b) · v = a · v + b · v (Distributivity over field addition)

Example 1.2. The set Rn with usual addition and scalar multiplication is a vector space
over R.

Example 1.3. The set P (R) of all real-coefficient polynomials is a vector space over R.

Example 1.4. Let
V = { (x, y) ∈ R^2 : x + y = 0 }.
We will check whether V is a vector space over R under the usual operations of vector
addition and scalar multiplication.

Step 1: Identify the form of elements in V

From the condition x + y = 0, we get y = −x. So every element of V is of the form
(x, −x), x ∈ R.

Step 2: Zero vector

The zero vector in R^2 is (0, 0), and since 0 + 0 = 0, we have (0, 0) ∈ V.
Step 3: Closed under addition

Take two elements of V: ⃗u = (x, −x), ⃗v = (y, −y). Then
⃗u + ⃗v = (x + y, −x + (−y)) = (x + y, −(x + y)) ∈ V.

Step 4: Closed under scalar multiplication

Let c ∈ R and ⃗u = (x, −x) ∈ V. Then
c⃗u = (cx, −cx) ∈ V.

Step 5: Remaining axioms


Since V ⊆ R2 and uses the standard operations, all other vector space axioms (associa-
tivity, commutativity, distributivity, additive identity, and additive inverse) are automat-
ically satisfied.

Conclusion
Therefore, the set V = { (x, y) ∈ R^2 : x + y = 0 } is a vector space over R.

Exercise Questions

1. Let V1 = { (x, y) ∈ R^2 : x − 2y = 0 }.
Is V1 a vector space over R?
Answer: Yes

2. Let V2 = { (x, y) ∈ R^2 : xy = 0 }.
Is V2 a vector space over R?
Answer: No

3. Let V3 = {f ∈ C[0, 1] : f (0) = 1}, where C[0, 1] is the set of continuous real-valued
functions on [0, 1].
Is V3 a vector space over R?
Answer: No
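A quick numerical check makes exercise 2 concrete: V2 fails closure under addition. The following Python sketch (the helper name `in_V2` is ours, purely for illustration) tests the defining condition xy = 0:

```python
def in_V2(v):
    # Membership test for V2 = {(x, y) in R^2 : x*y = 0}
    x, y = v
    return x * y == 0

u, w = (1, 0), (0, 1)           # both lie in V2 (one coordinate is zero)
s = (u[0] + w[0], u[1] + w[1])  # their sum is (1, 1)

print(in_V2(u), in_V2(w))  # True True
print(in_V2(s))            # False: V2 is not closed under addition
```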

1.2 Subspaces
Definition 1.5 (Subspace). A subset W ⊆ V of a vector space V over F is a subspace
if W is itself a vector space with the induced operations. (In particular, check that W
contains the zero vector.)

Theorem 1.6 (Subspace Criterion). A nonempty subset W ⊆ V is a subspace if and


only if for all u, v ∈ W and a ∈ F ,

u + v ∈ W, a · u ∈ W.

Proof. (Brief) Since W is nonempty, pick u ∈ W; then 0 · u = 0 ∈ W. Closure under
addition and scalar multiplication ensures the operations restrict to W, and the remaining
axioms are inherited from V.

Example 1.7. Let
W = { (x, y, z) ∈ R^3 : x + y + z = 0 }.
Determine whether W is a subspace of R^3.

Step 1: Zero vector check

The zero vector in R^3 is (0, 0, 0), and since 0 + 0 + 0 = 0, we have (0, 0, 0) ∈ W.

Step 2: Closed under addition

Let ⃗u = (x1, y1, z1), ⃗v = (x2, y2, z2) ∈ W, so that
x1 + y1 + z1 = 0, x2 + y2 + z2 = 0.
Then ⃗u + ⃗v = (x1 + x2, y1 + y2, z1 + z2), and

(x1 + x2) + (y1 + y2) + (z1 + z2) = (x1 + y1 + z1) + (x2 + y2 + z2) = 0 + 0 = 0.

Hence, ⃗u + ⃗v ∈ W.

Step 3: Closed under scalar multiplication

Let c ∈ R and ⃗u = (x, y, z) ∈ W, so x + y + z = 0. Then
c⃗u = (cx, cy, cz), and cx + cy + cz = c(x + y + z) = c · 0 = 0.
So, c⃗u ∈ W.

Conclusion
The set W = { (x, y, z) ∈ R^3 : x + y + z = 0 } is a subspace of R^3.

Exercise Questions

1. Let W1 = { (x, y, z) ∈ R^3 : x − 2y + 3z = 0 }.
Is W1 a subspace of R^3?
Answer: Yes

2. Let W2 = { (x, y) ∈ R^2 : x ≥ 0 }.
Is W2 a subspace of R^2?
Answer: No
3. Let W3 = {f ∈ C[0, 1] : f (0) = f (1)}, where C[0, 1] is the set of continuous real-
valued functions on [0, 1].
Is W3 a subspace of C[0, 1]?
Answer: Yes
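Exercise 2 can likewise be settled with a concrete counterexample: W2 is not closed under multiplication by negative scalars. A minimal Python sketch (the helper name `in_W2` is ours):

```python
def in_W2(v):
    # Membership test for W2 = {(x, y) in R^2 : x >= 0}
    x, y = v
    return x >= 0

u = (1, 2)
c = -3
cu = (c * u[0], c * u[1])  # (-3, -6)

print(in_W2(u))   # True
print(in_W2(cu))  # False: W2 is not closed under scalar multiplication
```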

1.3 Bases and Dimension


Definition 1.8 (Linear Independence). A set {v1 , v2 , . . . , vn } in V is linearly indepen-
dent if
a1 v1 + a2 v2 + · · · + an vn = 0 =⇒ a1 = a2 = · · · = an = 0.
Otherwise, it is linearly dependent.
Example 1.9. Determine whether the vectors
⃗v1 = (1, 2, 3), ⃗v2 = (2, 4, 6), ⃗v3 = (3, 6, 9)
in R^3 are linearly dependent.

Solution
We check whether there exist scalars c1 , c2 , c3 , not all zero, such that:

c1⃗v1 + c2⃗v2 + c3⃗v3 = ⃗0.

Observe:
⃗v2 = 2⃗v1 , ⃗v3 = 3⃗v1 .
So, all three vectors are scalar multiples of ⃗v1 , hence linearly dependent.
Alternatively, compute the determinant of the matrix formed by putting the vectors as
columns:
A =
[ 1 2 3 ]
[ 2 4 6 ]
[ 3 6 9 ],  det(A) = 0.

Conclusion
Since ⃗v2 and ⃗v3 are scalar multiples of ⃗v1, the vectors are linearly dependent.
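The determinant test above can be carried out mechanically; a small Python sketch (the helper `det3` is ours, hard-coding cofactor expansion for the 3 × 3 case):

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3],
     [2, 4, 6],
     [3, 6, 9]]
print(det3(A))  # 0 -> the columns are linearly dependent
```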

Exercises: Are the Following Sets Linearly Dependent?

1. ⃗v1 = (1, 0), ⃗v2 = (0, 1) in R^2.
Answer: No

2. ⃗v1 = (1, 2), ⃗v2 = (2, 4), ⃗v3 = (−1, −2) in R^2.
Answer: Yes

3. ⃗v1 = (1, 2, 0), ⃗v2 = (0, 1, 3), ⃗v3 = (1, 3, 3) in R^3.
Answer: Yes (since ⃗v3 = ⃗v1 + ⃗v2)

Definition 1.10 (Basis). A set B = {v1 , . . . , vn } is a basis of V if

• B is linearly independent.

• B spans V , i.e., every v ∈ V is a linear combination of elements of B.

Example 1.11. Determine whether the vectors
⃗v1 = (1, 0, 1), ⃗v2 = (0, 1, 1), ⃗v3 = (1, 1, 2)
form a basis for R^3.

Solution
To form a basis of R3 , the vectors must be:

• linearly independent

• span R3

Let us form the matrix A with columns ⃗v1, ⃗v2, ⃗v3:
A =
[ 1 0 1 ]
[ 0 1 1 ]
[ 1 1 2 ]

We compute the determinant:

det(A) = 1(1 · 2 − 1 · 1) − 0 + 1(0 · 1 − 1 · 1) = (2 − 1) − 0 + (0 − 1) = 1 − 1 = 0.

So, det(A) = 0, and the vectors are linearly dependent.

Conclusion
Since the vectors are linearly dependent, they do not form a basis of R3 .

Exercises: Do the Following Sets Form a Basis?

1. ⃗v1 = (1, 0), ⃗v2 = (0, 1) in R^2.
Answer: Yes

2. ⃗v1 = (1, 2), ⃗v2 = (2, 4) in R^2.
Answer: No

3. ⃗v1 = (1, 0, 0), ⃗v2 = (0, 1, 0), ⃗v3 = (0, 0, 1) in R^3.
Answer: Yes

Theorem 1.12 (Dimension Theorem). All bases of a finite-dimensional vector space have
the same number of elements. This number is called the dimension of V , denoted dim V .

Example 1.13. The standard basis of Rn is {e1 , . . . , en } where ei has 1 in the ith coor-
dinate and 0 elsewhere. Hence, dim Rn = n.

Example 1.14. Find the dimension of the subspace
W = { (x, y, z) ∈ R^3 : x + y + z = 0 }.

Solution
The condition x + y + z = 0 is a single linear equation, so it reduces the degrees of freedom
by 1.
We can write the general solution as:
x = −y − z ⇒ (x, y, z) = (−y − z, y, z) = y(−1, 1, 0) + z(−1, 0, 1).

So, a basis is:
{ (−1, 1, 0), (−1, 0, 1) }

Conclusion
dim(W ) = 2

Exercises: Find the Dimension of the Following Subspaces
1. The space of all 2 × 2 symmetric matrices over R.
Answer: 3

2. The solution space of the homogeneous system:

x + y + z = 0, 2x + 2y + 2z = 0

in R^3.
Answer: 2

3. The subspace span{ (1, 0, 1, 0), (0, 1, 0, 1), (1, 1, 1, 1) } ⊆ R^4.
Answer: 2

1.4 Matrices
Definition 1.15 (Matrix). A matrix over a field F is a rectangular array of elements of
F , denoted A = (aij ).
Definition 1.16 (Matrix Addition and Scalar Multiplication). Defined entrywise for two
matrices of the same size.
Definition 1.17 (Matrix Multiplication). For A of size m × n and B of size n × p, the
product AB is an m × p matrix defined by
(AB)_ij = Σ_{k=1}^{n} a_ik b_kj.
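As an illustration, the entrywise formula translates directly into code; a minimal Python sketch (the helper name `matmul` is ours):

```python
def matmul(A, B):
    # (AB)_ij = sum over k of A_ik * B_kj, for A of size m x n and B of size n x p
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]         # 2 x 2
B = [[5, 6, 7], [8, 9, 10]]  # 2 x 3
print(matmul(A, B))  # [[21, 24, 27], [47, 54, 61]]
```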

1.5 Linear Maps
Definition 1.18 (Linear Map). Let V and W be vector spaces over F . A function
T : V → W is linear if for all u, v ∈ V and a ∈ F ,

T (u + v) = T (u) + T (v), T (av) = aT (v).

Example 1.19. The differentiation operator D : P (R) → P (R) defined by D(p) = p′ is


linear.
Example 1.20. Check whether the function T : R^2 → R^2 defined by
T(x, y) = (2x + 3y, 4x − y)
is a linear map.

Solution
A function T is linear if for all vectors ⃗u, ⃗v ∈ R^2 and scalar c ∈ R:

T(⃗u + ⃗v) = T(⃗u) + T(⃗v), and T(c⃗u) = cT(⃗u).

Let ⃗u = (x1, y1), ⃗v = (x2, y2).

Check additivity:
T(⃗u + ⃗v) = T(x1 + x2, y1 + y2) = (2(x1 + x2) + 3(y1 + y2), 4(x1 + x2) − (y1 + y2))
= (2x1 + 3y1, 4x1 − y1) + (2x2 + 3y2, 4x2 − y2) = T(⃗u) + T(⃗v).

Check homogeneity:
T(c⃗u) = T(cx1, cy1) = (2(cx1) + 3(cy1), 4(cx1) − (cy1)) = c(2x1 + 3y1, 4x1 − y1) = cT(⃗u).

Conclusion
Since both additivity and homogeneity hold, T is a linear map.
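The two checks above can be spot-tested numerically. The sketch below (helper names are ours; passing on finitely many samples is necessary but not sufficient for linearity) exercises both conditions on the map from Example 1.20 and on a map that fails:

```python
def is_linear_on_samples(T, samples, scalars):
    # Spot-check additivity and homogeneity on sample vectors.
    for (x1, y1) in samples:
        for (x2, y2) in samples:
            lhs = T(x1 + x2, y1 + y2)
            a, b = T(x1, y1), T(x2, y2)
            if lhs != (a[0] + b[0], a[1] + b[1]):
                return False
        for c in scalars:
            img = T(x1, y1)
            if T(c * x1, c * y1) != (c * img[0], c * img[1]):
                return False
    return True

samples = [(1, 0), (0, 1), (2, -3)]
scalars = [0, 1, -2, 5]
print(is_linear_on_samples(lambda x, y: (2 * x + 3 * y, 4 * x - y), samples, scalars))  # True
print(is_linear_on_samples(lambda x, y: (x + 1, y), samples, scalars))                  # False
```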

Exercises: Check if the Following Maps are Linear

1. T : R^2 → R^2 defined by T(x, y) = (x + 1, y).
Answer: No

2. T : R^3 → R^2 defined by T(x, y, z) = (2x − y, 3z).
Answer: Yes

3. T : R^2 → R^2 defined by T(x, y) = (xy, x − y).
Answer: No

1.6 Transformation Matrices of Linear Maps
Definition 1.21 (Matrix Representation). Let T : V → W be a linear map, and let
B = {v1 , . . . , vn } and C = {w1 , . . . , wm } be ordered bases for V and W . Then the matrix
of T relative to B and C is the m × n matrix [T ]C←B whose j-th column is [T (vj )]C .
Example 1.22. Let T : R2 → R2 be a linear map defined by
T (x, y) = (3x + y, 2x − y)
Find the transformation matrix A such that T (⃗x) = A⃗x.

Solution
Let the standard basis vectors of R^2 be ⃗e1 = (1, 0), ⃗e2 = (0, 1).
Apply T to each basis vector:
T(⃗e1) = T(1, 0) = (3, 2)
T(⃗e2) = T(0, 1) = (1, −1)
So the matrix A is:
A =
[ 3  1 ]
[ 2 −1 ]
Thus, T(⃗x) = A⃗x.
Example 1.23. Let T : R2 → R2 be given by T (x, y) = (2x + y, x − y). In the standard
basis:
T (1, 0) = (2, 1),
T (0, 1) = (1, −1).
So the matrix is:
[T] =
[ 2  1 ]
[ 1 −1 ].
Example 1.24. Let T : P2 → R2 be defined by T (p(x)) = (p(0), p(1)). With the standard
basis {1, x, x2 } for P2 :
T (1) = (1, 1), T (x) = (0, 1), T (x2 ) = (0, 1)
Thus,
[T] =
[ 1 0 0 ]
[ 1 1 1 ].
Example 1.25. Let T : R3 → R2 be defined by T (x, y, z) = (x − z, 2y + z). Then:
T (1, 0, 0) = (1, 0), T (0, 1, 0) = (0, 2), T (0, 0, 1) = (−1, 1)
The matrix is:
[T] =
[ 1 0 −1 ]
[ 0 2  1 ].
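The recipe in these examples — the j-th column of [T] is T(e_j) — can be sketched in Python (the helper name `matrix_of` is ours), here applied to the map of Example 1.25:

```python
def matrix_of(T, n):
    # The j-th column of [T] is T(e_j); collect columns, then transpose to rows.
    cols = []
    for j in range(n):
        e = [0] * n
        e[j] = 1
        cols.append(T(e))
    return [list(row) for row in zip(*cols)]

# T(x, y, z) = (x - z, 2y + z) from Example 1.25
T = lambda v: (v[0] - v[2], 2 * v[1] + v[2])
print(matrix_of(T, 3))  # [[1, 0, -1], [0, 2, 1]]
```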

Exercises
Find the matrix representation A of each linear transformation T defined below:

1. T : R^2 → R^2, defined by T(x, y) = (x − y, x + y)
Answer:
A =
[ 1 −1 ]
[ 1  1 ]

2. T : R^2 → R^2, defined by T(x, y) = (4x, 5y)
Answer:
A =
[ 4 0 ]
[ 0 5 ]

3. T : R^3 → R^2, defined by T(x, y, z) = (x + 2z, y − z)
Answer:
A =
[ 1 0  2 ]
[ 0 1 −1 ]

1.7 Change of Basis


Definition 1.26 (Change of Basis Matrix). Given two bases B = {v1 , . . . , vn } and B ′ =
{v1′ , . . . , vn′ } of a vector space V , the change of basis matrix from B to B ′ is the matrix
P whose columns are the coordinates of vi′ in basis B.

Theorem 1.27 (Change of Matrix Representation). If T : V → V is a linear map and


P is the change of basis matrix from B to B ′ , then:

[T]B′ = P⁻¹ [T]B P.

Example 1.28. Let ⃗v = (5, 1) and let the basis B = { ⃗b1 = (2, 1), ⃗b2 = (1, 1) }. Find the
coordinate vector [⃗v]B.

Solution
We want scalars c1, c2 such that:
⃗v = c1⃗b1 + c2⃗b2 = c1(2, 1) + c2(1, 1)
⇒ (5, 1) = (2c1 + c2, c1 + c2)
Equating components:
2c1 + c2 = 5, c1 + c2 = 1
Subtract the second from the first:

(2c1 + c2) − (c1 + c2) = 5 − 1 ⇒ c1 = 4 ⇒ c2 = 1 − c1 = 1 − 4 = −3

Answer:
[⃗v]B = (4, −3)
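The computation in Example 1.28 amounts to solving a 2 × 2 linear system; a small Python sketch using Cramer's rule (the helper name `coords_2d` is ours):

```python
from fractions import Fraction

def coords_2d(b1, b2, v):
    # Solve c1*b1 + c2*b2 = v by Cramer's rule (basis vectors as columns).
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    c2 = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return c1, c2

# Example 1.28: v = (5, 1), B = {(2, 1), (1, 1)}
print(coords_2d((2, 1), (1, 1), (5, 1)))  # (Fraction(4, 1), Fraction(-3, 1))
```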
Example 1.29. Let T(x, y) = (x + y, x − y) on R^2 with B = {e1, e2} and B′ =
{(1, 1), (1, −1)}. Then:
[T]B =
[ 1  1 ]
[ 1 −1 ],
P =
[ 1  1 ]
[ 1 −1 ],
P⁻¹ = (1/2) ·
[ 1  1 ]
[ 1 −1 ].

Example 1.30. Let B′ = {(2, 1), (−1, 1)} and B = {e1, e2}. Then the change of basis
matrix is
P =
[ 2 −1 ]
[ 1  1 ],
P⁻¹ = (1/3) ·
[  1 1 ]
[ −1 2 ].
This P converts matrix representations between the two bases via [T]B′ = P⁻¹[T]B P.

Exercises
Find the coordinate vector [⃗v]B of ⃗v with respect to the given basis B:

1. ⃗v = (7, 5), B = { (1, 0), (1, 1) }
Answer: [⃗v]B = (2, 5)

2. ⃗v = (3, 1), B = { (1, 1), (1, −1) }
Answer: [⃗v]B = (2, 1)

3. ⃗v = (6, 4), B = { (3, 1), (1, 1) }
Answer: [⃗v]B = (1, 3) (since 1 · (3, 1) + 3 · (1, 1) = (6, 4))

1.8 Systems of Linear Equations


A system can be expressed as
Ax = b,
where A is an m × n matrix, x an n × 1 vector of unknowns, and b an m × 1 vector.
Definition 1.31 (Consistent System). The system is consistent if there exists at least
one solution.
Theorem 1.32 (Rouché-Capelli). A system Ax = b is consistent if and only if

rank(A) = rank([A|b]),

where [A|b] is the augmented matrix.

Checking the Nature of Solutions of a System of Linear Equations
Consider a system of linear equations represented in augmented matrix form:

[A | b]

where A is the coefficient matrix and b is the constants vector.

Steps to Determine the Nature of Solutions


1. Reduce the augmented matrix [A | b] to Row Echelon Form (REF) or Reduced Row
Echelon Form (RREF) using elementary row operations (so that all entries below each
pivot are zero).

2. Find the rank of the coefficient matrix A, denoted as rank(A), and the rank of the
augmented matrix [A | b], denoted as rank([A | b]).

3. Compare the ranks:

• If rank(A) = rank([A | b]) = n, where n is the number of variables, then the
system has a unique solution.
• If rank(A) = rank([A | b]) < n, then the system has infinitely many solutions
(dependent system).
• If rank(A) ̸= rank([A | b]), then the system is inconsistent and has no solution.

1.9 How to Find the Rank of a Matrix


The rank of a matrix is the maximum number of linearly independent rows or columns
in the matrix. It tells us the dimension of the column space (or row space).

Steps to Find the Rank of a Matrix


1. Write the matrix.

2. Use elementary row operations (swap rows, multiply a row by a non-zero scalar,
add multiples of one row to another) to reduce the matrix to its row echelon form
(REF) or reduced row echelon form (RREF).

3. Count the number of non-zero rows. This number is the rank of the matrix.

Example
Find the rank of the matrix:
A =
[ 1 2 3 ]
[ 2 4 6 ]
[ 1 1 1 ]
Apply row operations:
R2 ← R2 − 2R1 gives R2 = [ 0 0 0 ]
R3 ← R3 − R1 gives R3 = [ 0 −1 −2 ]
After swapping R2 and R3, the row echelon form becomes:
[ 1  2  3 ]
[ 0 −1 −2 ]
[ 0  0  0 ]
There are 2 non-zero rows, so:

rank(A) = 2
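The procedure above can be automated; a minimal Python sketch of rank via Gaussian elimination over the rationals (the helper name `rank` is ours):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination with exact arithmetic; rank = number of pivots found.
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]        # bring the pivot row up
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

print(rank([[1, 2, 3], [2, 4, 6], [1, 1, 1]]))  # 2
```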

Summary

rank(A) = rank([A | b]) = n ⇒ Unique solution

rank(A) = rank([A | b]) < n ⇒ Infinite solutions

rank(A) ̸= rank([A | b]) ⇒ No solution

Example 1.33. Solve the system of linear equations:

x + 2y + z = 6
2x + 5y + z = 12
−x − y + 2z = 1

Solution
Write the augmented matrix:
[  1  2 1 |  6 ]
[  2  5 1 | 12 ]
[ −1 −1 2 |  1 ]
Step 1: Make zeros below the pivot in the first column.
R2 → R2 − 2R1:
[  1  2  1 | 6 ]
[  0  1 −1 | 0 ]
[ −1 −1  2 | 1 ]
R3 → R3 + R1:
[ 1 2  1 | 6 ]
[ 0 1 −1 | 0 ]
[ 0 1  3 | 7 ]
Step 2: Make a zero below the pivot in the second column.
R3 → R3 − R2:
[ 1 2  1 | 6 ]
[ 0 1 −1 | 0 ]
[ 0 0  4 | 7 ]
Step 3: Back-substitution:
From the third row:
4z = 7 ⟹ z = 7/4
From the second row:
y − z = 0 ⟹ y = z = 7/4
From the first row:
x + 2y + z = 6 ⟹ x + 2 · (7/4) + 7/4 = 6
x + 14/4 + 7/4 = 6 ⟹ x + 21/4 = 6
x = 6 − 21/4 = 24/4 − 21/4 = 3/4

Final solution:
x = 3/4, y = 7/4, z = 7/4.

Exercises: Determine the Nature of the Solution for Each System

1.
x + y + z = 3
2x + 2y + 2z = 6
x − y + z = 1
Answer: Infinite solutions

2.
x + y + z = 1
x + y + z = 2
2x + 2y + 2z = 3
Answer: No solution

3.
x + 2y = 5
3x + 4y = 11
Answer: Unique solution

2 Determinant and Eigenvalues
2.1 Determinants
Definition 2.1 (Determinant). For a square matrix A = (aij), the determinant det(A)
is a scalar defined recursively by
det(A) = Σ_{j=1}^{n} (−1)^{1+j} a_1j det(M_1j),
where M_1j is the (n − 1) × (n − 1) minor matrix obtained by removing row 1 and column j.

Theorem 2.2 (Properties of Determinants).
• det(AB) = det(A) det(B)

• det(AT ) = det(A)

• det(I) = 1

• A matrix A is invertible if and only if det(A) ̸= 0.

Example 2.3. Calculate the determinant of
A =
[ 1 2 ]
[ 3 4 ]:
det(A) = 1 · 4 − 2 · 3 = −2.

Example 2.4. Find the determinant of the matrix
A =
[ 2 3  1 ]
[ 4 1 −3 ]
[ 1 2  1 ]

Solution:
We use cofactor expansion along the first row:

det(A) = 2 · det[ 1 −3 ; 2 1 ] − 3 · det[ 4 −3 ; 1 1 ] + 1 · det[ 4 1 ; 1 2 ]

Compute the 2 × 2 minors:

det[ 1 −3 ; 2 1 ] = (1)(1) − (−3)(2) = 1 + 6 = 7
det[ 4 −3 ; 1 1 ] = (4)(1) − (−3)(1) = 4 + 3 = 7
det[ 4 1 ; 1 2 ] = (4)(2) − (1)(1) = 8 − 1 = 7
Now plug in:

det(A) = 2(7) − 3(7) + 1(7) = 14 − 21 + 7 = 0

Answer:
det(A) = 0
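Definition 2.1's recursion translates directly into code; a short Python sketch (the helper name `det` is ours) reproduces both results above:

```python
def det(M):
    # Determinant by cofactor expansion along the first row (Definition 2.1).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # delete row 1, column j
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2 (Example 2.3)
print(det([[2, 3, 1], [4, 1, -3], [1, 2, 1]]))  # 0  (Example 2.4)
```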

Exercises
Find the determinant of each matrix:

1.
[ 1 2 ]
[ 3 4 ]
Answer: −2

2.
[ 2 1 3 ]
[ 0 1 2 ]
[ 1 0 1 ]
Answer: 1

3.
[ 1 2 1 ]
[ 0 1 0 ]
[ 2 3 4 ]
Answer: 2

2.2 Eigenvalues and Eigenvectors


Definition 2.5 (Eigenvalue and Eigenvector). For T : V → V linear, λ ∈ F is an
eigenvalue if there exists v ̸= 0 with
T (v) = λv.
Any such v is an eigenvector associated with λ.
Theorem 2.6. λ is an eigenvalue of T iff det(T − λI) = 0.
Example 2.7. Find the eigenvalues and eigenvectors of the matrix
A =
[ 2 1 ]
[ 1 2 ]

Solution
Step 1: Compute the characteristic polynomial:
det(A − λI) = det[ 2−λ 1 ; 1 2−λ ] = (2 − λ)² − 1 = λ² − 4λ + 3
Step 2: Solve the equation:
λ² − 4λ + 3 = 0 ⇒ λ = 1, 3
Step 3: Find eigenvectors.
For λ = 1:
A − I =
[ 1 1 ]
[ 1 1 ],
so (A − I)⃗v = 0 ⇒ x + y = 0 ⇒ ⃗v1 = (1, −1)
For λ = 3:
A − 3I =
[ −1  1 ]
[  1 −1 ],
so x − y = 0 ⇒ ⃗v2 = (1, 1)

Answer:
Eigenvalues: λ1 = 1, λ2 = 3
Eigenvectors: ⃗v1 = (1, −1), ⃗v2 = (1, 1)

Exercises
Find the eigenvalues and one eigenvector for each matrix:

1.
A =
[ 4 2 ]
[ 1 3 ]
Answer: Eigenvalues λ = 5, 2
Eigenvector for λ = 5: ⃗v = (2, 1)
Eigenvector for λ = 2: ⃗v = (−1, 1)

2.
A =
[ 2  0 ]
[ 0 −1 ]
Answer: Eigenvalues λ = 2, −1
Eigenvector for λ = 2: ⃗v = (1, 0)
Eigenvector for λ = −1: ⃗v = (0, 1)

3.
A =
[  0  1 ]
[ −2 −3 ]
Answer: Eigenvalues λ = −1, −2
Eigenvector for λ = −1: ⃗v = (1, −1)
Eigenvector for λ = −2: ⃗v = (1, −2)

2.3 Diagonalization
Definition 2.8 (Diagonalizable). T is diagonalizable if there exists a basis of V consisting
of eigenvectors of T . Equivalently, there exists an invertible P such that

P −1 T P = D,

where D is diagonal.

Theorem 2.9 (Diagonalization Criterion). T is diagonalizable if and only if the sum of
dimensions of eigenvalue eigenspaces equals dim V .
Theorem 2.10. A matrix A is diagonalizable if and only if it has n linearly independent
eigenvectors.
 
Example 2.11. Let A =
[ 1 1 ]
[ 0 1 ].
It has eigenvalue λ = 1 with multiplicity 2 but only one independent eigenvector, so it is
not diagonalizable.

Example 2.12. Let A =
[  0  1 ]
[ −2 −3 ].
Its eigenvalues are real and distinct, so it is diagonalizable.

Example 2.13. Diagonalize the matrix
A =
[ 4 1 ]
[ 0 2 ]

Solution
Step 1: Find eigenvalues.

det(A − λI) = det[ 4−λ 1 ; 0 2−λ ] = (4 − λ)(2 − λ) ⇒ λ = 4, 2

Step 2: Find eigenvectors.
For λ = 4:
A − 4I =
[ 0  1 ]
[ 0 −2 ],
so (A − 4I)⃗v = 0 ⇒ y = 0 ⇒ ⃗v1 = (1, 0)
For λ = 2:
A − 2I =
[ 2 1 ]
[ 0 0 ],
so 2x + y = 0 ⇒ ⃗v2 = (−1, 2)
Step 3: Form matrices P and D.
P =
[ 1 −1 ]
[ 0  2 ],
D =
[ 4 0 ]
[ 0 2 ]

Answer:
A = P D P⁻¹, where P =
[ 1 −1 ]
[ 0  2 ],
D =
[ 4 0 ]
[ 0 2 ]
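The factorization can be verified by multiplying the three matrices back together; a small Python sketch with exact arithmetic (helper names are ours):

```python
from fractions import Fraction

def matmul2(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

P = [[1, -1], [0, 2]]
D = [[4, 0], [0, 2]]
A = matmul2(matmul2(P, D), inv2(P))
print(A)  # [[Fraction(4, 1), Fraction(1, 1)], [Fraction(0, 1), Fraction(2, 1)]]
```

So P D P⁻¹ indeed recovers A from Example 2.13.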

Exercises
Diagonalize each matrix by finding matrices P and D such that A = P D P⁻¹.

1.
A =
[ 3 0 ]
[ 0 5 ]
Answer:
D =
[ 3 0 ]
[ 0 5 ],
P = I =
[ 1 0 ]
[ 0 1 ]

2.
A =
[ 2 1 ]
[ 0 2 ]
Answer: Not diagonalizable (only one independent eigenvector for the repeated
eigenvalue).

3.
A =
[ 1 1 ]
[ 0 2 ]
Answer:
λ = 1, 2;
D =
[ 1 0 ]
[ 0 2 ],
P =
[ 1 1 ]
[ 0 1 ]

3 Inner Product Spaces
3.1 Inner Product and Orthogonality
Definition 3.1 (Inner Product). An inner product on a vector space V over R is a map

⟨·, ·⟩ : V × V → R

satisfying, for all u, v, w ∈ V and a ∈ R:

(i) ⟨u, v⟩ = ⟨v, u⟩ (Symmetry)

(ii) ⟨au + w, v⟩ = a⟨u, v⟩ + ⟨w, v⟩ (Linearity in first argument)

(iii) ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 iff v = 0 (Positive-definite)

Example 3.2. The standard dot product on R^n:
⟨x, y⟩ = Σ_{i=1}^{n} x_i y_i.

Definition 3.3 (Orthogonality). Vectors u and v are orthogonal if ⟨u, v⟩ = 0.

Definition 3.4 (Inner Product over F). More generally, an inner product on a vector
space V over F (F = R or C) is a function ⟨·, ·⟩ : V × V → F satisfying:

• Linearity in the first argument

• Conjugate symmetry: ⟨x, y⟩ equals the complex conjugate of ⟨y, x⟩ (plain symmetry
when F = R)

• Positive-definiteness

Example 3.5. In R^n: ⟨x, y⟩ = x^T y. In C^n: ⟨x, y⟩ = Σ_i x_i ȳ_i.

Theorem 3.6 (Cauchy-Schwarz Inequality). For all x, y ∈ V ,

|⟨x, y⟩| ≤ ∥x∥ · ∥y∥

Definition 3.7 (Orthonormal Basis). A basis {v1, . . . , vn} is orthonormal if ⟨vi, vj⟩ = δij.

Theorem 3.8 (Gram-Schmidt). The Gram-Schmidt process converts any linearly
independent set into an orthonormal set with the same span.

Example 3.9. Applying Gram-Schmidt to {(1, 1), (1, −1)} in R^2 yields the orthonormal
basis {(1/√2, 1/√2), (1/√2, −1/√2)}.
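The Gram-Schmidt process of Example 3.9 can be sketched in Python (the helper name `gram_schmidt` is ours; it normalizes each vector as it goes):

```python
import math

def gram_schmidt(vectors):
    # Orthonormalize a linearly independent list: subtract projections, then normalize.
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            proj = sum(wi * ui for wi, ui in zip(w, u))  # <w, u>; u is already unit
            w = [wi - proj * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

u1, u2 = gram_schmidt([(1, 1), (1, -1)])
dot = sum(a * b for a, b in zip(u1, u2))
print(u1, u2)            # two unit vectors
print(abs(dot) < 1e-12)  # True: they are orthogonal
```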

3.2 Orthogonal Maps


Definition 3.10 (Orthogonal Map). A linear map T : V → V is orthogonal if

⟨T (u), T (v)⟩ = ⟨u, v⟩ ∀u, v ∈ V.

3.3 Self-adjoint Maps
Definition 3.11 (Self-adjoint Operator). T is self-adjoint if

⟨T (u), v⟩ = ⟨u, T (v)⟩ ∀u, v ∈ V.

Theorem 3.12 (Spectral Theorem). Any self-adjoint operator on a finite-dimensional


real inner product space is diagonalizable with an orthonormal basis of eigenvectors.

Theorem 3.13. All eigenvalues of a self-adjoint operator are real.

Theorem 3.14. Eigenvectors corresponding to distinct eigenvalues are orthogonal.


 
Example 3.15. Let A =
[  2 i ]
[ −i 3 ].
Since A* = A, it is Hermitian, hence self-adjoint. Its eigenvalues are real.
 
Example 3.16. Let A =
[ 3  0 ]
[ 0 −2 ].
Since A is symmetric, it is self-adjoint. Its eigenvectors e1, e2 are orthogonal.

4 Group and Field: Definitions, Properties, and Examples

GROUP

Definition:
A group is a set G equipped with a binary operation · (often written as multiplication or
addition) that satisfies the following four axioms:
Let G be a non-empty set and · : G × G → G be a binary operation. Then (G, ·) is
called a group if:
Group Axioms (Properties):

1. Closure: ∀a, b ∈ G, a·b∈G

2. Associativity: ∀a, b, c ∈ G, (a · b) · c = a · (b · c)

3. Identity Element: ∃e ∈ G such that ∀a ∈ G, a·e=e·a=a

4. Inverse Element: ∀a ∈ G, ∃a−1 ∈ G such that

a · a−1 = a−1 · a = e

Abelian Group:
If in addition, the group satisfies:

5. Commutativity: ∀a, b ∈ G, a·b=b·a

then G is called an Abelian group (named after Niels Henrik Abel).

Examples of Groups:

1. (Z, +):
The set of integers under addition.
Identity: 0
Inverse of a: −a

2. (Q∗, ·):
The set of non-zero rational numbers under multiplication.
Identity: 1
Inverse of a: 1/a

3. (Zn , +):
The set of integers modulo n under addition (modulo n).
Example: Z5 = {0, 1, 2, 3, 4}

4. Symmetric Group Sn :
Group of all permutations of n elements under composition of functions.

FIELD

Definition:
A field is a set F with two binary operations: addition + and multiplication ·, such that:

• (F, +) is an abelian group with identity 0

• (F \ {0}, ·) is an abelian group with identity 1 ̸= 0

• Multiplication is distributive over addition:

a · (b + c) = a · b + a · c ∀a, b, c ∈ F

Field Axioms (Properties):


Let F be a field. Then for all a, b, c ∈ F :
Addition:

1. Closure: a+b∈F

2. Associativity: a + (b + c) = (a + b) + c

3. Commutativity: a+b=b+a

4. Identity: ∃0 ∈ F, a+0=a

5. Inverse: ∃ − a ∈ F, a + (−a) = 0

Multiplication (for a, b, c ∈ F, a ̸= 0, b ̸= 0):

6. Closure: a·b∈F

7. Associativity: a · (b · c) = (a · b) · c

8. Commutativity: a·b=b·a

9. Identity: ∃1 ∈ F, 1 ̸= 0, a·1=a

10. Inverse: ∃a−1 ∈ F, a · a−1 = 1

Distributivity:
a · (b + c) = a · b + a · c

Examples of Fields:

1. Q (Rational numbers)
All field axioms are satisfied with usual addition and multiplication.

2. R (Real numbers)
Common field used in analysis.

3. C (Complex numbers)
Closed under addition and multiplication, with all properties.

4. Zp (integers modulo p, where p is prime)
A finite field.
Example: Z5 = {0, 1, 2, 3, 4} is a field.
Z4 is not a field because 2 has no multiplicative inverse modulo 4.
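The contrast between Z5 and Z4 can be checked by brute force; a minimal Python sketch (the helper name `has_inverse` is ours):

```python
def has_inverse(a, n):
    # a has a multiplicative inverse mod n iff some b in 1..n-1 gives a*b = 1 (mod n)
    return any((a * b) % n == 1 for b in range(1, n))

print([a for a in range(1, 5) if not has_inverse(a, 5)])  # []  -> every nonzero element of Z_5 is invertible
print([a for a in range(1, 4) if not has_inverse(a, 4)])  # [2] -> Z_4 is not a field
```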

Summary Table:

Property              | Group                      | Field
----------------------|----------------------------|------------------------------
Set                   | G                          | F
Binary operation(s)   | 1                          | 2 (addition, multiplication)
Additive group        | May exist                  | Yes
Multiplicative group  | May exist (non-zero part)  | Yes (excluding 0)
Commutativity         | Optional                   | Required
Distributive law      | Not needed                 | Required

5 Group Theory
5.1 Basic Definitions
Definition 5.1 (Group). A set G with binary operation · is a group if:

(i) Closure: For all a, b ∈ G, a · b ∈ G.

(ii) Associativity: (a · b) · c = a · (b · c).

(iii) Identity: There exists e ∈ G with e · a = a · e = a for all a.

(iv) Inverses: For each a, there exists a−1 with a · a−1 = a−1 · a = e.

Example 5.2. (Z, +) is a group.

5.2 Subgroups
Definition 5.3 (Subgroup). A subset H ⊆ G is a subgroup if H itself is a group under
the operation of G.

5.3 Group Homomorphisms


Definition 5.4 (Homomorphism). A map φ : G → H between groups is a homomorphism
if
φ(a · b) = φ(a) · φ(b).

5.4 Normal Subgroups and Quotient Groups


Definition 5.5 (Normal Subgroup). A subgroup N ⊴ G is normal if gNg⁻¹ = N for all g ∈ G.

Definition 5.6 (Quotient Group). If N is normal, the set of cosets G/N forms a group
under multiplication:
(gN )(hN ) = (gh)N.

6 Field Theory
6.1 Fields
Definition 6.1 (Field). A set F with two operations + and · is a field if:

(i) (F, +) is an abelian group with identity 0.

(ii) (F \ {0}, ·) is an abelian group with identity 1.

(iii) Distributivity: a · (b + c) = a · b + a · c.

Example 6.2. Q, R, C are fields.

6.2 Field Extensions


Definition 6.3 (Field Extension). E is an extension field of F if F ⊆ E and operations
of F extend to E.

6.3 Algebraic and Transcendental Elements


Definition 6.4 (Algebraic Element). An element α ∈ E is algebraic over F if there
exists a nonzero polynomial p(x) ∈ F [x] such that p(α) = 0.

Definition 6.5 (Transcendental Element). An element α ∈ E is transcendental over F if
it is not algebraic over F.

6.4 Minimal Polynomial


Definition 6.6 (Minimal Polynomial). The minimal polynomial of an algebraic element
α over F is the monic polynomial in F[x] of least degree having α as a root.

6.5 Algebraic Extensions


Definition 6.7 (Algebraic Extension). Extension E/F is algebraic if every element of
E is algebraic over F .
