Alg 3

Chapter 1

Matrices and Determinants

1.1 Matrices
Definition 1 A matrix is a rectangular array of numbers (real or complex). The
numbers in the array are called the entries of the matrix.
The size of a matrix is described in terms of the number of rows (horizontal lines)
and columns (vertical lines) it contains.
The entry that occurs in row i and column j of a matrix A will be denoted by aij.

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}

The set of matrices with n rows and m columns is denoted Mn,m(K).
Example 1
A = \begin{pmatrix} 1 & 0 & -2 \end{pmatrix}
A is called a row matrix (since it has only one row).

B = \begin{pmatrix} \sqrt{2} \\ 3 \\ \pi \\ i \end{pmatrix}

B is called a column matrix or column vector (since it has only one column).

C = \begin{pmatrix} 0 & 1 \\ 5 & 2 \end{pmatrix}

C is a matrix with 2 rows and 2 columns (a square matrix).

Property 1 Let A = (aij)1≤i,j≤n be a square matrix.
• if aij = 0 for i < j, then A is a lower triangular matrix.
• if aij = 0 for i > j, then A is an upper triangular matrix.
• if aij = 0 for i ≠ j, then A is a diagonal matrix.
• if aii = 1 and aij = 0 for i ≠ j, then A is the identity matrix, denoted In.
• if aij = 0 for all i, j, then A is a null matrix.

Example 2

Proposition 1 Let A = (ai,j)1≤i≤n,1≤j≤m; B = (bi,j)1≤i≤n,1≤j≤m; C = (ci,j)1≤i≤m,1≤j≤l,
and λ ∈ IR.

1. The sum of two matrices is given by A + B = (ai,j + bi,j)1≤i≤n,1≤j≤m.

2. The addition of matrices is associative and commutative.

3. The product of a matrix A by a scalar λ (scalar multiplication) is defined as
follows: λ ∗ A = (λ ∗ ai,j)1≤i≤n,1≤j≤m.

4. The product of two matrices is given by A × C = (Si,j)1≤i≤n,1≤j≤l such that

S_{i,j} = \sum_{k=1}^{m} a_{i,k} c_{k,j} \quad for all i, j.
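These operations are straightforward to check numerically. Below is a minimal NumPy sketch (the matrices A, B, C are made-up examples, not taken from the text) comparing the built-in product with the explicit sum from item 4:

import numpy as np

A = np.array([[1, 0], [2, 3]])        # 2 x 2
B = np.array([[4, 1], [0, 5]])        # 2 x 2, same size as A
C = np.array([[1, 2, 0], [3, 1, 1]])  # 2 x 3, compatible for A x C

S = A + B   # entrywise sum (ai,j + bi,j)
T = 2 * A   # scalar multiplication (lambda * ai,j)

# Matrix product: P[i, j] = sum over k of A[i, k] * C[k, j]
P = A @ C
P_explicit = np.array([[sum(A[i, k] * C[k, j] for k in range(A.shape[1]))
                        for j in range(C.shape[1])]
                       for i in range(A.shape[0])])
assert np.array_equal(P, P_explicit)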

Definition 2 The transpose of a matrix A is defined by AT = (aj,i)1≤j≤m,1≤i≤n:
the (i, j) entry of A becomes the (j, i) entry of AT.

1. If a matrix A is unaltered by transposition, it must be square and symmetric
about its main diagonal, so aij = aji. We therefore characterize a symmetric
matrix by AT = A.

2. If AT = −A, so that aij = −aji, the matrix A is called skew-symmetric or
antisymmetric. A skew-symmetric matrix must necessarily have zero entries
on the principal diagonal, since aii = −aii.

Lemma 1 Let A = (ai,j)1≤i≤n,1≤j≤m; B = (bi,j)1≤i≤n,1≤j≤m; C = (ci,j)1≤i≤m,1≤j≤l,
and λ, α scalars. Then
• (AT )T = A
• (A × C)T = C T × AT
• (α ∗ A + λ ∗ B)T = α ∗ AT + λ ∗ B T

1.2 Determinants
1.2.1 Laplace Expansion
The development of determinants took place when mathematicians were trying to
solve a system of simultaneous linear equations. We are now able to define the
determinant of square matrix of order n × n
Definition 3 A determinant is a scalar value that can be calculated from the
elements of a square matrix, denoted det(A) or |A|.
For every square matrix A of order n × n, there exists a number associated with
it called the determinant of the matrix.

For a 2 × 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}:  det(A) = ad − bc.

For a 3 × 3 matrix A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} the value of the determinant is

det(A) = (−1)^{1+1} a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
       + (−1)^{1+2} a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
       + (−1)^{1+3} a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
In general, let A be an n × n matrix. Then det(A) is calculated by picking a
row (or column) and taking the product of each entry in that row (column) with
its cofactor and adding these products together. This process, when applied to the
i-th row (column), is known as expanding along the i-th row (column) and is given by:

det(A) = (−1)^{i+1} a_{i1} A_{i1} + (−1)^{i+2} a_{i2} A_{i2} + ... + (−1)^{i+n} a_{in} A_{in}

where A_{ij}, the ij-th minor of A, denoted minor(A)ij, is the determinant of the
(n − 1) × (n − 1) matrix which results from deleting the i-th row and the j-th column
of A. When calculating the determinant, you can choose to expand along any row
or any column. Regardless of your choice, you will always get the same number,
which is the determinant of the matrix A. This method of evaluating a determinant
by expanding along a row or a column is called Laplace Expansion or Cofactor
Expansion.
Example 3 Let

A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 3 & 2 \\ 3 & 2 & 1 \end{pmatrix}
Find det(A) using the method of Laplace Expansion.

Solution

First, we will calculate det(A) by expanding along the first column. Using the
definition, we take the 1 in the first column and multiply it by its cofactor. Similarly,
we take the 4 in the first column and multiply it by its cofactor, as well as the
3 in the first column. Finally, we add these numbers together, as given in the
following equation (we choose to expand along the first column):

det(A) = (−1)^{1+1} · 1 · \begin{vmatrix} 3 & 2 \\ 2 & 1 \end{vmatrix} + (−1)^{2+1} · 4 · \begin{vmatrix} 2 & 3 \\ 2 & 1 \end{vmatrix} + (−1)^{3+1} · 3 · \begin{vmatrix} 2 & 3 \\ 3 & 2 \end{vmatrix}.

Calculating each of these, we obtain

det(A) = 1(1)(−1) + 4(−1)(−4) + 3(1)(−5) = −1 + 16 − 15 = 0.

Hence det(A) = 0.
As mentioned in the definition, we can choose to expand along any row or column.
Let's try now by expanding along the second row. Here, we take the 4 in the
second row and multiply it by its cofactor, then add this to the 3 in the second row
multiplied by its cofactor, and the 2 in the second row multiplied by its cofactor.
The calculation is as follows.

det(A) = (−1)^{2+1} · 4 · \begin{vmatrix} 2 & 3 \\ 2 & 1 \end{vmatrix} + (−1)^{2+2} · 3 · \begin{vmatrix} 1 & 3 \\ 3 & 1 \end{vmatrix} + (−1)^{2+3} · 2 · \begin{vmatrix} 1 & 2 \\ 3 & 2 \end{vmatrix}.

Calculating each of these products, we obtain

det(A) = 4(−1)(−4) + 3(1)(−8) + 2(−1)(−4) = 16 − 24 + 8 = 0.

You can see that for both methods, we obtained det(A) = 0.
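For larger matrices the same expansion can be coded directly. The following recursive Python sketch (an illustration, not part of the course text) expands along the first row, and reproduces det(A) = 0 for the matrix of Example 3:

def det(A):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, 3, 2], [3, 2, 1]]))  # prints 0, as in Example 3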

Example 4

Lemma 2 Determinant of a Triangular Matrix
Let A be an upper or lower triangular matrix. Then det(A) is obtained by
taking the product of the entries on the main diagonal (the same holds if A is a
diagonal matrix):

det(A) = \prod_{i=1}^{n} a_{ii}

Proposition 2 Let A be a square matrix. Then:

1. Reflection Property
The determinant remains unaltered if its rows are changed into columns and
the columns into rows. This is known as the property of reflection.

2. All-zero Property
If all the elements of a row (or column) are zero, then the determinant is
zero.

3. Proportionality (Repetition) Property
If all elements of a row (or column) are proportional (identical) to the
elements of some other row (or column), then the determinant is zero.

4. Switching Property
The interchange of any two rows (or columns) of the determinant changes
its sign.

5. Scalar Multiple Property
If all the elements of a row (or column) of a determinant are multiplied by a
non-zero constant, then the determinant gets multiplied by the same constant.

6. Property of Invariance
If we add to a row (column) a linear combination of the other rows (columns),
the value of the determinant does not change.

7. Let α be a real number; then det(αA) = α^n det(A).
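These properties are easy to verify numerically. A small NumPy sketch (with an arbitrary example matrix) illustrating properties 4, 6 and 7:

import numpy as np

A = np.array([[1., 2., 3.], [4., 3., 2.], [0., 1., 5.]])  # made-up example
d = np.linalg.det(A)

# Property 4: swapping two rows changes the sign.
B = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(B), -d)

# Property 6: adding a multiple of row 0 to row 2 leaves det unchanged.
C = A.copy()
C[2] += 5 * C[0]
assert np.isclose(np.linalg.det(C), d)

# Property 7: det(alpha * A) = alpha^n * det(A), with n = 3 here.
alpha = 2.0
assert np.isclose(np.linalg.det(alpha * A), alpha**3 * d)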

1.2.2 Sarrus's method

Sarrus's rule calculates the determinant of square matrices of order 3, those with
three rows and three columns. Write out the first two columns again on the right
(giving five columns):

a11 a12 a13 | a11 a12
a21 a22 a23 | a21 a22
a31 a32 a33 | a31 a32

Then add the products along the diagonals that go from top left to bottom right,
and subtract the products along the diagonals that go from bottom left to top right.
This yields the formula

det(A) = (a11 a22 a33 + a12 a23 a31 + a13 a21 a32) − (a13 a22 a31 + a11 a23 a32 + a12 a21 a33)
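As a quick check, here is a small Python sketch of Sarrus's rule (an illustration only), applied to the matrix of Example 3; it again gives 0:

def sarrus(A):
    """Determinant of a 3 x 3 matrix by Sarrus's rule (3 x 3 only)."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    return (a11*a22*a33 + a12*a23*a31 + a13*a21*a32) \
         - (a13*a22*a31 + a11*a23*a32 + a12*a21*a33)

print(sarrus([[1, 2, 3], [4, 3, 2], [3, 2, 1]]))  # 0, matching Example 3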

Remark 1 det(A + B) ≠ det(A) + det(B), i.e. det is not additive.

Definition 4 Let A = (ai,j)1≤i,j≤n. A matrix A is said to be invertible if and only
if there exists a matrix B in Mn(IR) such that

A × B = B × A = In

Then B is called the inverse of A, denoted A−1. If no such matrix B can be found,
then A is said to be singular.

Theorem 1 Let A be an n × n matrix. Then A is invertible if and only if det(A) ≠ 0.
If this is true, it follows that

det(A−1) = det(A)−1.

7
Properties of Inverses
It is reasonable to ask whether an invertible matrix can have more than one inverse.
The next theorem shows that the answer is no. An invertible matrix has exactly
one inverse.

Theorem 2 If B and C are both inverses of the matrix A, then B = C.

Proof 1 Since B is an inverse of A, we have

AB = In.
Multiplying both sides on the left by C gives
C(AB) = C In = C.
But, since C is also an inverse of A, CA = In, so
C(AB) = (CA)B = In B = B.
Hence
B = C.

Theorem 3 If A and B are invertible matrices of the same size, then AB is invertible
and
(AB)−1 = B−1 A−1.

Proof 2

Proposition 3 If A is an invertible matrix, then:

* A−1 is invertible and (A−1)−1 = A.
** An is invertible and (An)−1 = (A−1)n for n = 0, 1, 2, 3, ... .
*** For any nonzero scalar α, the matrix αA is invertible and (αA)−1 = (1/α) A−1.
**** AT is invertible and (AT)−1 = (A−1)T.

Definition 5 Let A be an invertible matrix. Then A−1 is given by

A−1 = \frac{1}{det(A)} (com(A))^T.

where com(A) is the cofactor matrix, given by com(A) = (cij) such that

cij = (−1)^{i+j} det(Ăij), ∀i, j

and Ăij is the ((n − 1) × (n − 1)) matrix obtained by deleting the i-th row and the
j-th column of A.
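A direct, if inefficient, implementation of this formula (a sketch reusing the Laplace-expansion det function above; the helper name is mine, not from the text):

def inverse_by_cofactors(A):
    """Inverse via A^{-1} = (1/det(A)) * com(A)^T; assumes det(A) != 0."""
    n = len(A)
    d = det(A)  # Laplace-expansion det from the earlier sketch
    # Cofactor matrix com(A) = ((-1)^{i+j} det(A with row i, column j deleted)).
    com = [[(-1) ** (i + j) * det([row[:j] + row[j+1:]
                                   for k, row in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    # Transpose of com(A), divided by det(A).
    return [[com[j][i] / d for j in range(n)] for i in range(n)]

print(inverse_by_cofactors([[0, 1], [5, 2]]))  # approximately [[-0.4, 0.2], [1.0, 0.0]]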

1.3 Systems of Linear Equations
Definition 6 A finite set of linear equations in the variables x1 , x2 , ..., xn is called
a system of linear equations or a linear system.
A sequence of numbers ξ1 , ξ2 , ..., ξn is called a solution of the system if x1 = ξ1 , x2 =
ξ2 , ..., xn = ξn , is a solution of every equation in the system.
A system of equations that has no solutions is said to be inconsistent; if there is
at least one solution of the system, it is called consistent.
An arbitrary system of m linear equations in n unknowns can be written as:

S: a11 x1 + a12 x2 + ... + a1n xn = b1;
   a21 x1 + a22 x2 + ... + a2n xn = b2;
   ...
   am1 x1 + am2 x2 + ... + amn xn = bm.

where the aij's and bi's denote constants. This system can be abbreviated by
writing only the rectangular array of numbers:

\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}

This is called the augmented matrix for the system.

Proposition 4 Every system of linear equations has no solutions, has exactly one
solution, or has infinitely many solutions.

Remark 2 When constructing an augmented matrix, we must write the unknowns
in the same order in each equation, and the constants must be on the right.

Theorem 4 If A is an n × n matrix, then the following statements are equivalent,
that is, all true or all false.

(a) A is invertible.

(b) The homogeneous system Ax = 0 has only the trivial solution.

(c) The nonhomogeneous system Ax = b has exactly one (unique) solution.

1.3.1 Inverse method
If we have a linear system with the number of equations equal to the number of
unknowns n,

S: a11 x1 + a12 x2 + ... + a1n xn = b1;
   a21 x1 + a22 x2 + ... + a2n xn = b2;
   ...
   an1 x1 + an2 x2 + ... + ann xn = bn.

which means that the associated matrix is square, n × n. If it is invertible, then we
have a unique solution X = (ξ1, ξ2, ..., ξn) given by:

X = A−1 b
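A NumPy sketch of the inverse method on a made-up 2 × 2 system (x + y = 3, 2x − y = 0):

import numpy as np

A = np.array([[1., 1.], [2., -1.]])  # coefficient matrix of the example system
b = np.array([3., 0.])               # right-hand side

X = np.linalg.inv(A) @ b             # X = A^{-1} b
print(X)                             # [1. 2.], i.e. x = 1, y = 2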

Example 5

1.3.2 Cramer's method

Theorem 5 Given a linear system with n equations and n unknowns, which means
that the associated matrix is square, n × n: if it is invertible, then we have a unique
solution (ξ1, ξ2, ..., ξn) given by:

xi = \frac{det(Ai)}{det(A)}

where Ai is obtained by replacing the entries of the i-th column of A by the entries
of the right-hand side (b1, b2, ..., bn).
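A short Python sketch of Cramer's method (again reusing the det function sketched earlier, on the same made-up system as above):

def cramer(A, b):
    """Solve A x = b by Cramer's rule; assumes det(A) != 0."""
    n = len(A)
    d = det(A)
    xs = []
    for i in range(n):
        # A_i: replace column i of A by the right-hand side b.
        Ai = [row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(A)]
        xs.append(det(Ai) / d)
    return xs

print(cramer([[1, 1], [2, -1]], [3, 0]))  # [1.0, 2.0]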

Example 6

1.3.3 Elementary Row Operations method

Definition 7 The basic method for solving a system of linear equations is to replace
the given system by a new system that has the same solution set but is easier to
solve. This new system is generally obtained in a series of steps by applying three
types of operations (interchange two rows; multiply a row by a nonzero constant;
add a multiple of one row to another) to eliminate unknowns systematically. We
shall illustrate the idea by reducing a matrix to reduced row-echelon form.

Algorithm of the method

1. Starting from the left, find the first nonzero column. This is the first pivot
column, and the position at the top of this column is the first pivot position.

2. Interchange the top row with another row, if necessary, to bring a nonzero
entry to the top of the column.

3. If the entry that is now at the top of the column found in Step 1 is a11 (nonzero),
multiply the first row by 1/a11 in order to introduce a leading 1.

4. Add suitable multiples of the top row to the rows below so that all entries
below the leading 1 become zeros.

5. Now cover the top row in the matrix and begin again with Step 1 applied
to the submatrix that remains. Continue in this way until the entire matrix
is in row-echelon form (repeat the process until there are no more rows to
modify).

6. Beginning with the last nonzero row and working upward, add suitable multiples
of each row to the rows above to introduce zeros above the leading 1's
(we obtain a matrix whose pivot columns contain a single 1 and 0's elsewhere).

If we use only the first five steps, the above procedure produces a row-echelon form
and is called Gaussian elimination. Carrying the procedure through to the last
step and producing a matrix in reduced row-echelon form is called Gauss–Jordan
elimination.
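The algorithm translates almost line by line into code. A compact Gauss–Jordan sketch on an augmented matrix (illustrative only, without the numerical-pivoting refinements a production routine would use):

def gauss_jordan(M):
    """Reduce an augmented matrix (list of rows) to reduced row-echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):            # last column is the right-hand side
        # Steps 1-2: find a row with a nonzero entry in column c, swap it up.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Step 3: scale to make the leading entry 1.
        M[r] = [x / M[r][c] for x in M[r]]
        # Steps 4 and 6: clear the column below *and* above the leading 1.
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

print(gauss_jordan([[1., 1., 3.], [2., -1., 0.]]))  # [[1, 0, 1], [0, 1, 2]]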

Remark 3 It can be shown that every matrix has a unique reduced row-echelon
form; that is, one will arrive at the same reduced row-echelon form for a given
matrix no matter how the row operations are varied.

Chapter 2

Reduction of Endomorphisms

2.1 Endomorphism
Definition 8 Let V be a vector space and B a basis of V. A linear map f :
V −→ V is called a linear endomorphism (i.e. a linear map from a vector space
to itself). The matrix M(f)B = M is called the matrix of the endomorphism
relative to the basis B.

Example 7 The identity id : IRn −→ IRn is a linear endomorphism and its matrix
relative to any basis B is the identity matrix:

In = \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}

Let f be a linear endomorphism from IR3 to IR3 given by

f: IR3 −→ IR3
   (x, y, z) −→ (x + y, 2y − z, y)

and its matrix relative to the standard basis B is the matrix:

A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & -1 \\ 0 & 1 & 0 \end{pmatrix}
Definition 9 Matrix Similarity. Two matrices A, B ∈ Mn(IR) are called similar
(semblable) if there exists an invertible matrix P ∈ Mn(IR) such that

A = P−1 BP

denoted A ∼ B.

Property 2 Let f : V −→ V be a linear endomorphism of any finite dimensional
vector space V. For any two bases A, B of V, the matrices M(f)A and M(f)B are
similar.

Example 8 Let f((x, y)) = (x + y, 2x + 3y) be a linear endomorphism. Take A to
be the standard basis and B = {(−2, 1), (1, −1)}. Then

M(f)A = \begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix},   P = \begin{pmatrix} -2 & 1 \\ 1 & -1 \end{pmatrix}

Use M(f)B = P−1 M(f)A P and compute P−1 = \begin{pmatrix} -1 & -1 \\ -1 & -2 \end{pmatrix}. Then

M(f)B = \begin{pmatrix} -1 & -1 \\ -1 & -2 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} -2 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 3 & 2 \end{pmatrix}

Lemma 3 Similarity is an equivalence relation, i.e. for n × n matrices A, B, and C:

1. A ∼ A (reflexive)

2. If A ∼ B, then B ∼ A (symmetric)

3. If A ∼ B and B ∼ C, then A ∼ C (transitive)

2.2 Eigenvalue and Eigenvector

Definition 10 Let f be a linear endomorphism of a finite dimensional vector space
V, with associated matrix A. A constant λ ∈ C is called an eigenvalue of f (with
respect to A) if there exists a nonzero vector v such that

f(v) = λv ⇐⇒ Av = λv

Such a v is called an eigenvector of f (with respect to A) associated with λ, or a
λ-eigenvector of A.

The set of all eigenvalues of a matrix A is denoted by σ(A) and is referred to as
the spectrum of A.

Property 3 A vector v ∈ V is an eigenvector of f if and only if f(span(v)) ⊂
span(v), i.e. v is a nonzero vector and the subspace span(v) is mapped into itself.
For each eigenvalue λ, the set Vλ = {v ∈ V : f(v) = λv} (the eigenvectors associated
with λ, together with 0) is a subspace of V (i.e. a non-empty set, stable under
addition and scalar multiplication).

Definition 11 Suppose v satisfies the definition. Then

Av − λv = 0

or
(A − λI)v = 0
for some v ≠ 0. Equivalently you could write (λI − A)v = 0. It means that the
matrix A − λI is not invertible, so its determinant is equal to 0.
The expression det(λI − A) is a polynomial (in the variable λ) called the characteristic
polynomial of A, denoted PA(λ) (of degree n = dim V in the finite dimensional
case), and det(λI − A) = 0 is called the characteristic equation.

Theorem 6 The Existence of an Eigenvector. Let A be an n × n matrix and suppose
det(λI − A) = 0 for some λ ∈ C. Then λ is an eigenvalue of A and thus there
exists a nonzero vector v ∈ Cn such that Av = λv.
The matrix A has at most n distinct eigenvalues.

Theorem 7 If v1, v2, ..., vk are eigenvectors of A corresponding to distinct
eigenvalues λ1, λ2, ..., λk (k ≤ n), then {v1, v2, ..., vk} is a linearly independent
set.

2.2.1 Finding Eigenvectors and Eigenvalues

We will now look at how to find the eigenvalues and eigenvectors of a matrix A
in detail. The steps used are summarized in the following procedure.

Let A be an n × n matrix.
First, find the eigenvalues λ of A by solving the equation det(λI − A) = 0.
For each λ, find the eigenvectors v ≠ 0 by finding the solutions of (λI − A)v = 0.

To verify your work, make sure that Av = λv for each λ and associated eigenvector v.
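This procedure is essentially what numerical libraries implement. A NumPy sketch on a made-up 2 × 2 matrix, including the suggested verification Av = λv:

import numpy as np

A = np.array([[2., 1.], [1., 2.]])  # made-up symmetric example
lams, V = np.linalg.eig(A)          # eigenvalues and eigenvectors
print(lams)                         # [3. 1.] (the order may vary)

# Verify A v = lambda * v for each pair, as the procedure suggests.
for lam, v in zip(lams, V.T):       # columns of V are the eigenvectors
    assert np.allclose(A @ v, lam * v)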

Definition 12 Multiplicity of an Eigenvalue

Let A be an n × n matrix with characteristic polynomial PA(λ) = det(λI − A).
Then the multiplicity of an eigenvalue λ of A is the number of times λ occurs as a
root of the characteristic polynomial PA(λ).

Example 9

Theorem 8 If k is a positive integer, λ is an eigenvalue of a matrix A, and v is a
corresponding eigenvector, then λk is an eigenvalue of Ak for the same eigenvector v.

Theorem 9 Let f be an endomorphism and λ ∈ C. The following assertions are
equivalent:

1. λ is an eigenvalue of f

2. the endomorphism λ id − f is not one to one (injective), i.e.

ker(λ id − f) = {x ∈ V / (λ id − f)(x) = 0} ≠ {0}.

3. det(λI − A) = 0

4. λ is a root of the characteristic polynomial.

Remark 4 Let λ be an eigenvalue of A. Then dim Vλ ≤ multiplicity of λ (as a root
of PA).

2.2.2 Cayley-Hamilton Theorem


Theorem 10 Cayley-Hamilton Theorem. Any square matrix A satisfies its
characteristic polynomial PA, i.e.

PA(A) = 0Mn(IR)

Definition 13 Nilpotent Matrix. Let A be a square matrix. The matrix A is
nilpotent if there exists k ≥ 1 such that

Ak = 0

Lemma 4 If a matrix A is nilpotent and λ is an eigenvalue of A, then λ = 0, i.e.
all eigenvalues are equal to 0.

Corollary 1 A matrix A ∈ Mn(IR) is nilpotent if and only if all of its eigenvalues
over the complex numbers are equal to 0 (i.e. the characteristic polynomial is
PA(λ) = λn).

2.2.3 Diagonalization
Definition 14 Diagonalization Let A be an n × n matrix. Then A is said to be
diagonalizable if there exists an invertible matrix P such that
P −1 AP = D
where D is a diagonal matrix.
Theorem 11 An n × n matrix A is diagonalizable if and only if there is an in-
vertible matrix P given by P = [v1 , v2 , ..., vn ] where the vk are eigenvectors of A.
Moreover if A is diagonalizable, the corresponding eigenvalues of A are the diagonal
entries of the diagonal matrix D.
Corollary 2 Let A be an n × n matrix and suppose it has exactly n distinct eigenvalues.
Then it follows that A is diagonalizable.
Corollary 3 Let A be an n × n matrix. Then A is diagonalizable if and only if, for
each eigenvalue λ of A, dim(Vλ(A)) is equal to the multiplicity of λ.
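A numerical sketch of Theorem 11 (on a made-up matrix with two distinct eigenvalues): build P from the eigenvectors and check that P−1AP is diagonal with the eigenvalues on the diagonal:

import numpy as np

A = np.array([[2., 1.], [1., 2.]])     # example with 2 distinct eigenvalues
lams, P = np.linalg.eig(A)             # columns of P are eigenvectors of A

D = np.linalg.inv(P) @ A @ P           # P^{-1} A P
assert np.allclose(D, np.diag(lams))   # diagonal, with the eigenvalues on it
print(np.round(D, 10))
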
Theorem 12 An endomorphism is diagonalizable if and only if its characteristic
polynomial splits (scindé, i.e. factors into a product of linear factors) and the
dimension of each eigenspace equals the multiplicity of the corresponding root.
Definition 15 Trigonalization
Let A be an n × n matrix. Then A is said to be trigonalizable if there exists an
invertible matrix P (a change of basis) such that

P−1 AP = T

where T is a lower or upper triangular matrix.

Theorem 13 An n × n matrix A is trigonalizable if and only if there is an invertible
matrix P given by P = [v1, v2, ..., vk, vk+1, ..., vn] where the vi, 1 ≤ i ≤ k, are
eigenvectors of A and the vi, k < i ≤ n, are any vectors that complete the family of
eigenvectors (usually taken from the standard basis), and

P−1 AP = T = \begin{pmatrix} λ_1 & & & * & \cdots & * \\ & \ddots & & & & \vdots \\ & & λ_k & * & \cdots & * \\ & & & α_1 & & * \\ & & & & \ddots & \\ 0 & & & & & α_j \end{pmatrix}

an upper triangular matrix whose first k diagonal entries are the eigenvalues
λ1, ..., λk and whose remaining diagonal entries α1, ..., αj are produced while
completing the triangularization.

Example 10 A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ -3 & 5 & 2 \end{pmatrix}. Show that A is trigonalizable.

2.3 Linear differential system
General Theory. Let aij(t), bi(t), 1 ≤ i, j ≤ n, be continuous functions on some
interval I. The system of n first-order differential equations

x′1 = a11(t)x1 + a12(t)x2 + ... + a1n(t)xn + b1(t);
x′2 = a21(t)x1 + a22(t)x2 + ... + a2n(t)xn + b2(t);
...
x′n = an1(t)x1 + an2(t)x2 + ... + ann(t)xn + bn(t);

is called a first order linear differential system. The system (S) is homogeneous if

b1(t) ≡ b2(t) ≡ ... ≡ bn(t) ≡ 0 on I.

Let

A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix}

Then (S) can be written in the vector-matrix form

X′ = A(t)X + b.   (S)

The matrix A(t) is called the matrix of coefficients or the coefficient matrix.

Theorem 14 Existence and Uniqueness Theorem. Let a be any point on the interval
I, and let α1, α2, ..., αn be any n real numbers. Then the initial-value problem (IVP)

X′ = A(t)X + b,   X(a) = \begin{pmatrix} α_1 \\ α_2 \\ \vdots \\ α_n \end{pmatrix}

has a unique solution.

Theorem 15 If X1, X2, ..., Xn are solutions of the homogeneous system (H), then
any linear combination of solutions of (H) is also a solution of (H).
If X1, X2, ..., Xn are linearly dependent, then

W(t) = det(X1, X2, ..., Xn) = 0 on I.

W is called the Wronskian of the family X1, X2, ..., Xn.

Definition 16 A set X1, X2, ..., Xn of n linearly independent solutions of (H) is
called a fundamental set of solutions. A fundamental set of solutions is also called
a solutions basis for (H). If X1, X2, ..., Xn is a fundamental set of solutions of
(H), then the n × n matrix

X(t) = (X1, X2, ..., Xn)

(the vectors X1, X2, ..., Xn are the columns of X) is called a fundamental matrix
for (H).

2.3.1 Homogeneous Systems with Constant Coefficients

A homogeneous system with constant coefficients is a linear differential system
having the form:

x′1 = a11 x1 + a12 x2 + ... + a1n xn;
x′2 = a21 x1 + a22 x2 + ... + a2n xn;
...
x′n = an1 x1 + an2 x2 + ... + ann xn;

where the aij are constants. The system in vector-matrix form is

\begin{pmatrix} x′_1 \\ x′_2 \\ \vdots \\ x′_n \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}

or
X′ = AX

Solution of the homogeneous DS
Given the linear differential system X′ = AX, the solution is given as follows:

1. If A has n linearly independent eigenvectors (A diagonalizable), the general
solution is X = \sum_{i=1}^{n} c_i v_i exp(λ_i t), where the v_i are the eigenvectors.

2. Else (for example, the multiplicity of λ equals 2 but it has only one (independent)
eigenvector v), a linearly independent pair of solution vectors
corresponding to λ is:
x1(t) = exp(λt)v
and
x2(t) = exp(λt)w + t exp(λt)v
where w is a vector that satisfies (A − λI)w = v. The vector w is called a
generalized eigenvector corresponding to the eigenvalue λ.

3. If λ = x + yi is a complex eigenvalue with corresponding (complex) eigenvector
u + iv, then λ̄ = x − yi is also an eigenvalue of A and u − iv is a
corresponding eigenvector. The corresponding linearly independent complex
solutions of X′ = AX are:

X1 = exp((x + iy)t)(u + iv)

and
X2 = exp((x − iy)t)(u − iv)

and the corresponding linearly independent real solutions are

Y1 = (X1 + X2)/2,   Y2 = (X1 − X2)/(2i)
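A NumPy sketch of case 1 on a made-up diagonalizable system X′ = AX with a given initial condition X(0): solve for the constants ci and evaluate the solution:

import numpy as np

A = np.array([[0., 1.], [1., 0.]])  # example: x1' = x2, x2' = x1
X0 = np.array([2., 0.])             # initial condition X(0)

lams, V = np.linalg.eig(A)          # eigenvalues +-1, eigenvectors in V
c = np.linalg.solve(V, X0)          # X(0) = sum_i c_i v_i

def X(t):
    # X(t) = sum_i c_i v_i exp(lambda_i t)
    return V @ (c * np.exp(lams * t))

print(X(0.0))                       # [2. 0.], matches X0
# The exact solution here is x1 = e^t + e^{-t}, x2 = e^t - e^{-t}.
print(X(1.0))                       # approximately [3.086, 2.350]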

Chapter 3

Bilinear Form

Definition 17 Let E, F be vector spaces. A bilinear form B : E × F → IR (or C)
satisfies the following properties for all vectors u, v ∈ E, u′, v′ ∈ F and scalars
α, β ∈ IR (or C).
Linearity in the first argument:

B(αu + βv, u′) = αB(u, u′) + βB(v, u′)

And linearity in the second argument:

B(u, αu′ + βv′) = αB(u, u′) + βB(u, v′)
Definition 18 Sesquilinear forms
A sesquilinear form S : E × F → C satisfies the following properties for all
vectors u, v ∈ E, u′, v′ ∈ F and scalars α, β ∈ C.
Linearity in the first argument:

S(αu + βv, u′) = αS(u, u′) + βS(v, u′)

And semi-linearity (conjugate-linearity) in the second argument:

S(u, αu′ + βv′) = ᾱS(u, u′) + β̄S(u, v′)
Definition 19 Matrix Representation of a Bilinear Form
A bilinear form can be represented by a matrix A such that for the vectors u and
v the bilinear form is given by:

B(u, v) = u^t A v = \sum_{i=1}^{n} \sum_{j=1}^{m} u_i a_{i,j} v_j

Here, A is an n × m matrix where n is the dimension of the vector space E and
m is the dimension of the vector space F. Let B = {e1, e2, ..., en} be a basis of E
and B′ = {f1, f2, ..., fm} be a basis of F. The entries of the matrix A correspond
to the coefficients of the bilinear form:

aij = B(ei, fj)   ∀ 1 ≤ i ≤ n, 1 ≤ j ≤ m
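A small NumPy sketch of this representation (with a made-up 3 × 2 matrix A, not the one asked for in Example 11 below): evaluate B(u, v) = u^t A v and recover the entries aij = B(ei, fj) from the form itself:

import numpy as np

# Hypothetical bilinear form on IR^3 x IR^2 via a made-up matrix A.
A = np.array([[1., 0.], [0., 1.], [2., -1.]])  # 3 x 2: n = dim E, m = dim F

def B(u, v):
    return u @ A @ v                           # u^t A v

e = np.eye(3)                                  # basis of E
f = np.eye(2)                                  # basis of F
A_rec = np.array([[B(e[i], f[j]) for j in range(2)] for i in range(3)])
assert np.array_equal(A_rec, A)                # a_ij = B(e_i, f_j)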

Example 11 Let B : IR3 × IR2 → IR be defined, for u = (x, y, z)^t and v = (x′, y′)^t, by
B(u, v) = xx′ + yy′.
1. Show that B is a bilinear form.
2. Find the associated matrix.

Proposition 5 Let E = F.
Symmetric Bilinear Form: if B(u, v) = B(v, u) for all u, v, the form is symmetric
(aij = aji).
Skew-Symmetric Bilinear Form: if B(u, v) = −B(v, u), the form is skew-symmetric
(or alternating), aij = −aji.

Definition 20 Quadratic Forms

Given a symmetric bilinear form B on E, the associated quadratic form is the
function

q(u) = B(u, u) = \sum_{i=1}^{n} \sum_{j=1}^{n} u_i a_{i,j} u_j

Notice that q has the property that q(λu) = λ² q(u).

Theorem 16 Polarization Theorem

For any quadratic form q the underlying symmetric bilinear form is unique, given by

B(u, v) = \frac{1}{2} (q(u + v) − q(u) − q(v))

B(u, v) = \frac{1}{4} (q(u + v) − q(u − v))

3.1 Orthogonality
Definition 21 Let E be a vector space over K with a symmetric bilinear form B.
We call two vectors v, w ∈ E orthogonal if B(v, w) = 0. This is written v⊥w. If
S ⊂ E is a non-empty subset, then the orthogonal complement of S is defined to be

S⊥ = {v ∈ E : ∀w ∈ S, w⊥v}.

S⊥ is a subspace of E.

Definition 22 A basis B is called an orthogonal basis if any two distinct basis
vectors are orthogonal. Thus B is an orthogonal basis if and only if the associated
matrix is diagonal.

Theorem 17 If (v1, . . . , vm) is a list of linearly independent vectors in E, then
there exists an orthonormal list (e1, . . . , em) such that

span(v1, . . . , vk) = span(e1, . . . , ek), for all k = 1, . . . , m.
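The orthonormal list in Theorem 17 is produced by the Gram–Schmidt process. A Python sketch for the standard dot product on IRn (an assumption for this illustration; the theorem itself concerns any positive-definite symmetric form):

import numpy as np

def gram_schmidt(vs):
    """Orthonormalize a list of independent vectors (standard dot product)."""
    es = []
    for v in vs:
        w = v - sum(np.dot(v, e) * e for e in es)  # remove components along es
        es.append(w / np.linalg.norm(w))           # normalize
    return es

es = gram_schmidt([np.array([1., 1., 0.]), np.array([1., 0., 1.])])
print(np.round(es[0] @ es[1], 10), np.linalg.norm(es[0]))  # 0.0 and 1.0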
