Alg 3
Chapter 1
1.1 Matrices
Definition 1 A matrix is a rectangular array of numbers (real or complex). The
numbers in the array are called the entries in the matrix.
The size of a matrix is described in terms of the number of rows (horizontal lines)
and columns (vertical lines) it contains.
The entry that occurs in row $i$ and column $j$ of a matrix A will be denoted by $a_{ij}$.
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}$$
The set of matrices with n rows and m columns is denoted $M_{n,m}(K)$.
Example 1
$$A = \begin{pmatrix} 1 & 0 & -2 \end{pmatrix}$$
A is called a row matrix (since it has only one row).
$$B = \begin{pmatrix} \sqrt{2} \\ 3 \\ \pi \\ i \end{pmatrix}$$
B is called a column matrix or column vector (since it has only one column).
$$C = \begin{pmatrix} 0 & 1 \\ 5 & 2 \end{pmatrix}$$
C is called a square matrix with 2 rows and 2 columns.
Property 1 Let $A = (a_{ij})_{1 \le i,j \le n}$ be a square matrix.
• If $a_{ij} = 0$ for $i < j$, then A is a lower triangular matrix.
• If $a_{ij} = 0$ for $i > j$, then A is an upper triangular matrix.
• If $a_{ij} = 0$ for $i \neq j$, then A is a diagonal matrix.
• If $a_{ii} = 1$ and $a_{ij} = 0$ for $i \neq j$, then A is the identity matrix, denoted $I_n$.
• If $a_{ij} = 0$ for all $i, j$, then A is the null (zero) matrix.
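For instance, with n = 3 these shapes look as follows (an illustration added for concreteness; the entries are arbitrary):
$$\begin{pmatrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 4 & 5 & 1 \end{pmatrix} \ \text{(lower triangular)}, \qquad \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \ \text{(diagonal)}, \qquad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$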
Example 2
1.2 Determinants
1.2.1 Laplace Expansion
The development of determinants took place when mathematicians were trying to
solve a system of simultaneous linear equations. We are now able to define the
determinant of a square matrix of order n × n.
Definition 3 A determinant is a scalar value that can be calculated from the elements of a square matrix, denoted det(A) or |A|.
For every square matrix A of order n × n, there is a number associated with it, called the determinant of A.
For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$: $\det(A) = ad - bc$.
For a $3 \times 3$ matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$ the value of the determinant is
$$\det(A) = (-1)^{1+1} a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} + (-1)^{1+2} a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + (-1)^{1+3} a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.$$
In general, let A be an n × n matrix. Then det(A) is calculated by picking a row (or column) and taking the product of each entry in that row (column) with its cofactor, and adding these products together.
This process, when applied to the i-th row (column), is known as expanding along the i-th row (column) and is given by:
$$\det(A) = (-1)^{i+1} a_{i1} A_{i1} + (-1)^{i+2} a_{i2} A_{i2} + \cdots + (-1)^{i+n} a_{in} A_{in},$$
where $A_{ij}$ denotes the determinant of the $(n-1) \times (n-1)$ matrix obtained by deleting the i-th row and the j-th column of A.
Example 3 Find $\det(A)$ for $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 3 & 2 \\ 3 & 2 & 1 \end{pmatrix}$.
Solution
First, we will calculate det(A) by expanding along the first column. Using the definition, we take the 1 in the first column and multiply it by its cofactor. Similarly, we take the 4 in the first column and multiply it by its cofactor, as well as the 3 in the first column. Finally, we add these numbers together, as given in the following equation (we choose to expand along the first column).
$$\det(A) = (-1)^{1+1}\,1\begin{vmatrix} 3 & 2 \\ 2 & 1 \end{vmatrix} + (-1)^{2+1}\,4\begin{vmatrix} 2 & 3 \\ 2 & 1 \end{vmatrix} + (-1)^{3+1}\,3\begin{vmatrix} 2 & 3 \\ 3 & 2 \end{vmatrix} = -1 + 16 - 15.$$
Hence $\det(A) = 0$.
As mentioned in the definition, we can choose to expand along any row or column. Let us try now by expanding along the second row. Here, we take the 4 in the second row and multiply it by its cofactor, then add this to the 3 in the second row multiplied by its cofactor, and the 2 in the second row multiplied by its cofactor. The calculation is as follows.
$$\det(A) = (-1)^{2+1}\,4\begin{vmatrix} 2 & 3 \\ 2 & 1 \end{vmatrix} + (-1)^{2+2}\,3\begin{vmatrix} 1 & 3 \\ 3 & 1 \end{vmatrix} + (-1)^{2+3}\,2\begin{vmatrix} 1 & 2 \\ 3 & 2 \end{vmatrix} = 16 - 24 + 8 = 0.$$
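The same cofactor expansion can be carried out mechanically. Below is a minimal Python sketch (added for illustration; the function and variable names are my own) of the Laplace expansion along the first row, applied to the matrix A of this example.

def det(M):
    """Determinant of a square matrix M (list of rows) by cofactor expansion."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 1 (index 0) and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3], [4, 3, 2], [3, 2, 1]]
print(det(A))   # prints 0, as computed above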
Example 4
Lemma 2 Determinant of a Triangular Matrix
Let A be an upper or lower triangular matrix. Then det(A) is obtained by
taking the product of the entries on the main diagonal (the same holds if A is a diagonal matrix):
$$\det(A) = \prod_{i=1}^{n} a_{ii}.$$
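For example (an added illustration), for the upper triangular matrix
$$U = \begin{pmatrix} 2 & 5 & 1 \\ 0 & 3 & 4 \\ 0 & 0 & -1 \end{pmatrix}, \qquad \det(U) = 2 \cdot 3 \cdot (-1) = -6.$$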
1. Reflection Property
The determinant remains unaltered if its rows are changed into columns and
the columns into rows. This is known as the property of reflection.
2. All-zero Property
If all the elements of a row (or column) are zero, then the determinant is
zero.
4. Switching Property
The interchange of any two rows (or columns) of the determinant changes
its sign.
5. Scalar Multiple Property
If all the elements of a row (or column) of a determinant are multiplied by a
non-zero constant, then the determinant gets multiplied by the same constant.
(Sarrus' rule for a $3 \times 3$ matrix:)
$$\det(A) = (a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}) - (a_{13}a_{22}a_{31} + a_{11}a_{23}a_{32} + a_{12}a_{21}a_{33}).$$
If A is a square matrix of order n and there exists a matrix B of the same size such that
$$A \times B = B \times A = I_n,$$
then B is called the inverse of A, denoted $A^{-1}$. If no such matrix B can be found, then A is said to be singular. For an invertible matrix,
$$\det(A^{-1}) = \det(A)^{-1}.$$
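As a quick illustration (the matrix is chosen here, not taken from the notes): for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, $\det(A) = -2 \neq 0$ and
$$A^{-1} = \frac{1}{-2}\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix},$$
and one checks that $A A^{-1} = A^{-1} A = I_2$ and $\det(A^{-1}) = -1/2 = \det(A)^{-1}$.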
Properties of Inverses
It is reasonable to ask whether an invertible matrix can have more than one inverse.
The next theorem shows that the answer is no. An invertible matrix has exactly
one inverse.
Proof 2
$\breve{A}_{ij}$ is the $(n-1) \times (n-1)$ matrix obtained by deleting the i-th row and the j-th column of A.
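For reference, the standard cofactor formula for the inverse, written with this notation (added here as a reminder):
$$A^{-1} = \frac{1}{\det(A)}\, C^{t}, \qquad \text{where } C = \big( (-1)^{i+j} \det(\breve{A}_{ij}) \big)_{1 \le i,j \le n}$$
is the matrix of cofactors of A.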
1.3 Systems of Linear Equations
Definition 6 A finite set of linear equations in the variables x1 , x2 , ..., xn is called
a system of linear equations or a linear system.
A sequence of numbers $\xi_1, \xi_2, \dots, \xi_n$ is called a solution of the system if $x_1 = \xi_1$, $x_2 = \xi_2$, ..., $x_n = \xi_n$ is a solution of every equation in the system.
A system of equations that has no solutions is said to be inconsistent; if there is
at least one solution of the system, it is called consistent.
An arbitrary system of m linear equations in n unknowns can be written as:
a11 x1 + a12 x2 + ... + a1n xn = b1 ;
a21 x1 + a22 x2 + ... + a2n xn = b2 ;
S= . ..
.. .;
a x + a x + ... + a x , = b .
m1 1 m2 2 mn n m
where the $a_{ij}$'s and $b_i$'s denote constants. This system can be abbreviated by writing only the rectangular array of numbers (its augmented matrix):
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{pmatrix}$$
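For instance (an added illustration), the system $x_1 + 2x_2 = 5$, $3x_1 + 4x_2 = 6$ is abbreviated by the array
$$\begin{pmatrix} 1 & 2 & 5 \\ 3 & 4 & 6 \end{pmatrix}.$$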
(a) A is invertible.
1.3.1 Inverse method
If we have a linear system in which the number of equations equals the number of unknowns n,
$$S: \begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n \end{cases}$$
then the system can be written in matrix form $AX = b$, with $A = (a_{ij})$, $X = (x_1, \dots, x_n)^t$ and $b = (b_1, \dots, b_n)^t$. If A is invertible, the unique solution is
$$X = A^{-1} b.$$
Example 5
Cramer's rule: if $\det(A) \neq 0$, the unique solution of the system is given by
$$x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \dots, n,$$
where $A_i$ is obtained by replacing the entries of the i-th column of A by the entries of the right-hand side $(b_1, b_2, \dots, b_n)$.
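A short illustration (the system is chosen here, not taken from the notes): for $x_1 + 2x_2 = 5$, $3x_1 + 4x_2 = 6$ we have $\det(A) = 1\cdot 4 - 2\cdot 3 = -2$, and
$$x_1 = \frac{\begin{vmatrix} 5 & 2 \\ 6 & 4 \end{vmatrix}}{-2} = \frac{8}{-2} = -4, \qquad x_2 = \frac{\begin{vmatrix} 1 & 5 \\ 3 & 6 \end{vmatrix}}{-2} = \frac{-9}{-2} = \frac{9}{2}.$$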
Example 6
1. Starting from the left, find the first nonzero column. This is the first pivot
column, and the position at the top of this column is the first pivot position
2. Interchange the top row with another row, if necessary, to bring a nonzero
entry to the top of the column.
3. If the entry that is now at the top of the column found in Step 1 is $a_{11} \neq 0$, multiply the first row by $1/a_{11}$ in order to introduce a leading 1.
4. Add suitable multiples of the top row to the rows below so that all entries
below the leading 1 become zeros.
5. Now cover the top row in the matrix and begin again with Step 1 applied
to the submatrix that remains. Continue in this way until the entire matrix
is in row-echelon form (repeat the process until there are no more rows to modify).
6. Beginning with the last nonzero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1's (the resulting matrix then contains only 0's and leading 1's in its pivot columns).
If we use only the first five steps, the above procedure produces a row-echelon form and is called Gaussian elimination. Carrying the procedure through to the last step and producing a matrix in reduced row-echelon form is called Gauss–Jordan elimination.
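A minimal Python sketch of the Gauss–Jordan procedure above (added for illustration; the function name and the example system are my own), working directly on an augmented matrix stored as a list of rows:

def gauss_jordan(aug):
    """Reduce an augmented matrix (list of rows) to reduced row-echelon form."""
    rows, cols = len(aug), len(aug[0])
    m = [row[:] for row in aug]          # work on a copy
    pivot_row = 0
    for col in range(cols):
        # Steps 1-2: find a row with a nonzero entry in this column and swap it up.
        pr = next((r for r in range(pivot_row, rows) if abs(m[r][col]) > 1e-12), None)
        if pr is None:
            continue                      # no pivot in this column
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # Step 3: scale the pivot row so the leading entry becomes 1.
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]
        # Steps 4 and 6: eliminate the entries below and above the leading 1.
        for r in range(rows):
            if r != pivot_row and abs(m[r][col]) > 1e-12:
                factor = m[r][col]
                m[r] = [x - factor * p for x, p in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

# Example: the augmented matrix of  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 6.
print(gauss_jordan([[1.0, 2.0, 5.0], [3.0, 4.0, 6.0]]))
# -> [[1.0, 0.0, -4.0], [0.0, 1.0, 4.5]]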
Remark 3 It can be shown that every matrix has a unique reduced row-echelon
form; that is, one will arrive at the same reduced row-echelon form for a given
matrix no matter how the row operations are varied.
Chapter 2
Reduction of endomorphisms
2.1 Endomorphism
Definition 8 Let V be a vector space and B a basis of V. A linear map f : V −→ V is called an endomorphism of V (i.e. a linear map from a vector space to itself). The matrix M(f )B = M is called the matrix of the endomorphism relative to the basis B.
Example 7 The identity map id : $\mathbb{R}^n \longrightarrow \mathbb{R}^n$ is a linear endomorphism and its matrix relative to any basis B is the identity matrix:
$$I_n = \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}$$
Property 2 Let f : V −→ V be an endomorphism of a finite dimensional vector space V. For any two bases A, B of V, the matrices M(f )A and M(f )B are similar.
Similarity of matrices is an equivalence relation:
1. A ∼ A (reflexive);
2. if A ∼ B, then B ∼ A (symmetric);
3. if A ∼ B and B ∼ C, then A ∼ C (transitive).
A scalar λ is an eigenvalue of f, with associated eigenvector $v \neq 0$, when
$$f(v) = \lambda v \iff Av = \lambda v,$$
where A is the matrix of f in a chosen basis.
Definition 11 Suppose v satisfies the definition above. Then
Av − λv = 0
or
(A − λI)v = 0
for some $v \neq 0$. Equivalently, one could write $(\lambda I - A)v = 0$. This means that the matrix $A - \lambda I$ is not invertible, so its determinant is equal to 0.
The expression $\det(\lambda I - A)$ is a polynomial in the variable λ, called the characteristic polynomial of A and denoted $P_A(\lambda)$ (of degree $n = \dim V$ when V is finite dimensional), and $\det(\lambda I - A) = 0$ is called the characteristic equation.
Let A be an n × n matrix.
First, find the eigenvalues λ of A by solving the equation det(λI − A) = 0.
For each λ, find the eigenvectors $v \neq 0$ by finding the solutions of $(\lambda I - A)v = 0$.
To verify your work, make sure that Av = λv for each λ and associated eigen-
vector v.
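A worked illustration of these steps (the matrix is chosen here, not taken from the notes): take $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then
$$\det(\lambda I - A) = \begin{vmatrix} \lambda - 2 & -1 \\ -1 & \lambda - 2 \end{vmatrix} = (\lambda - 2)^2 - 1 = (\lambda - 1)(\lambda - 3),$$
so the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$. Solving $(\lambda I - A)v = 0$ gives the eigenvectors $v_1 = (1, -1)^t$ for $\lambda_1 = 1$ and $v_2 = (1, 1)^t$ for $\lambda_2 = 3$, and one checks that $Av_1 = v_1$ and $Av_2 = 3v_2$.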
Example 9
1. λ is an eigenvalue of f
3. $\det(\lambda I - A) = 0$
$A^k = 0$
2.2.3 Diagonalization
Definition 14 Diagonalization Let A be an n × n matrix. Then A is said to be
diagonalizable if there exists an invertible matrix P such that
P −1 AP = D
where D is a diagonal matrix.
Theorem 11 An n × n matrix A is diagonalizable if and only if there is an in-
vertible matrix P given by P = [v1 , v2 , ..., vn ] where the vk are eigenvectors of A.
Moreover if A is diagonalizable, the corresponding eigenvalues of A are the diagonal
entries of the diagonal matrix D.
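Continuing the eigenvalue illustration above (added here, not from the notes): for $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ with eigenvectors $v_1 = (1,-1)^t$ and $v_2 = (1,1)^t$, take
$$P = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad P^{-1} = \frac{1}{2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \qquad P^{-1}AP = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix} = D,$$
with the eigenvalues 1 and 3 on the diagonal of D, in the same order as the columns of P.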
Corollary 2 Let A be an n × n matrix and suppose it has exactly n distinct eigenvalues. Then it follows that A is diagonalizable.
Corollary 3 Let A be an n × n matrix. Then A is diagonalizable if and only if, for each eigenvalue λ of A, $\dim(V_\lambda(A))$ is equal to the multiplicity of λ.
Theorem 12 An endomorphism whose characteristic polynomial splits over K with only simple roots (i.e. the multiplicity of every root equals 1) is diagonalizable.
Definition 15 Trigonalization
Let A be an n × n matrix. Then A is said to be trigonalizable (triangularizable) if there exists an invertible matrix P (a change-of-basis matrix) such that
P −1 AP = T
where T is a lower or upper triangular matrix.
Theorem 13 An n × n matrix A is trigonalizable if and only if there is an invertible matrix P given by $P = [v_1, v_2, \dots, v_k, v_{k+1}, \dots, v_n]$, where the $v_i$, $1 \le i \le k$, are eigenvectors of A and the $v_i$, $k < i \le n$, are any vectors that complete the family of eigenvectors into a basis (usually taken from the canonical basis), and
$$P^{-1}AP = T = \begin{pmatrix} \lambda_1 & & 0 & * & \cdots & * \\ & \ddots & & \vdots & & \vdots \\ 0 & & \lambda_k & * & \cdots & * \\ 0 & \cdots & 0 & \alpha_1 & & * \\ \vdots & & \vdots & & \ddots & \\ 0 & \cdots & 0 & 0 & & \alpha_{n-k} \end{pmatrix}$$
where the diagonal of the upper-left block carries the eigenvalues $\lambda_1, \dots, \lambda_k$ and the remaining diagonal entries are denoted $\alpha_1, \dots, \alpha_{n-k}$.
Example 10 Let $A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ -3 & 5 & 2 \end{pmatrix}$. Show that A is trigonalizable.
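One possible argument, sketched here: since A is lower triangular, its characteristic polynomial is $P_A(\lambda) = (\lambda - 1)(\lambda - 2)^2$, which splits over $\mathbb{R}$, so A is trigonalizable. It is not diagonalizable, however: for $\lambda = 2$,
$$A - 2I = \begin{pmatrix} -1 & 0 & 0 \\ 1 & 0 & 0 \\ -3 & 5 & 0 \end{pmatrix},$$
whose kernel is spanned by $(0, 0, 1)^t$ only, so $\dim(V_2(A)) = 1 < 2$, the multiplicity of $\lambda = 2$.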
2.3 Linear differential systems
General theory. Let $a_{ij}(t)$, $b_i(t)$, $1 \le i, j \le n$, be continuous functions on some interval I. The system of n first-order differential equations
$$\begin{cases} x_1' = a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + b_1(t) \\ x_2' = a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + b_2(t) \\ \quad\vdots \\ x_n' = a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + b_n(t) \end{cases}$$
is called a first order linear differential system. The system (S) is homogeneous
if
$$b_1(t) \equiv b_2(t) \equiv \cdots \equiv b_n(t) \equiv 0 \quad \text{on } I.$$
Let
$$A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix}, \qquad b(t) = \begin{pmatrix} b_1(t) \\ b_2(t) \\ \vdots \\ b_n(t) \end{pmatrix}.$$
Then (S) can be written in the vector-matrix form
$$X' = A(t)X + b(t). \qquad (S)$$
The matrix A(t) is called the matrix of coefficients or the coefficient matrix.
Theorem 14 (Existence and Uniqueness Theorem) Let a be any point of the interval I, and let $\alpha_1, \alpha_2, \dots, \alpha_n$ be any n real numbers. Then the initial-value problem (IVP)
$$X' = A(t)X + b(t), \qquad X(a) = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}$$
has a unique solution.
Definition 16 A set $X_1, X_2, \dots, X_n$ of n linearly independent solutions of (H) is called a fundamental set of solutions. A fundamental set of solutions is also called a solution basis for (H). If $X_1, X_2, \dots, X_n$ is a fundamental set of solutions of (H), then the n × n matrix whose columns are $X_1, X_2, \dots, X_n$ is called a fundamental matrix of (H).
Solution of a homogeneous DS
Given the linear differential system $X' = AX$ (A a constant matrix), the solution is given as follows:
1. If A has n linearly independent eigenvectors $v_1, \dots, v_n$ (A diagonalizable), the general solution is
$$X(t) = \sum_{i=1}^{n} c_i v_i e^{\lambda_i t},$$
where $\lambda_i$ is the eigenvalue associated with $v_i$.
2. Otherwise (for example, when the multiplicity of λ equals 2 but λ has only one independent eigenvector v), a linearly independent pair of solution vectors corresponding to λ is:
$$x_1(t) = e^{\lambda t} v$$
and
$$x_2(t) = e^{\lambda t} w + t\, e^{\lambda t} v,$$
where w is a vector that satisfies (A − λI)w = v. The vector w is called a
generalized eigenvector corresponding to the eigenvalue λ.
3. If $\lambda = x + iy$ ($y \neq 0$) is a complex eigenvalue of A with eigenvector $u + iv$, then $\bar{\lambda} = x - iy$ is also an eigenvalue, with eigenvector $u - iv$, and the two complex solutions are
$$X_1 = e^{(x+iy)t}(u + iv)$$
and
$$X_2 = e^{(x-iy)t}(u - iv),$$
and the corresponding linearly independent real solutions are
$$Y_1 = \frac{X_1 + X_2}{2}, \qquad Y_2 = \frac{X_1 - X_2}{2i}.$$
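For instance (continuing the earlier illustration with $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, eigenpairs $(1, (1,-1)^t)$ and $(3, (1,1)^t)$), case 1 gives the general solution of $X' = AX$ as
$$X(t) = c_1 e^{t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad c_1, c_2 \in \mathbb{R}.$$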
Chapter 3
Bilinear Form
Example 11 Let $B : \mathbb{R}^3 \times \mathbb{R}^2 \to \mathbb{R}$ be defined, for $u = (x, y, z)^t$ and $v = (x', y')^t$, by
$$B(u, v) = xx' + yy'.$$
1. Show that B is a bilinear form.
2. Find the associated matrix.
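A sketch of question 2 (added here; relative to the canonical bases of $\mathbb{R}^3$ and $\mathbb{R}^2$): writing $B(u,v) = u^t M v$ gives the associated matrix
$$M = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix},$$
since $u^t M v = xx' + yy'$.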
3.1 Orthogonality
Definition 21 Let E be a vector space over K with a symmetric bilinear form B. We call two vectors v, w ∈ E orthogonal if B(v, w) = 0. This is written v⊥w. If S ⊂ E is a non-empty subset, then the orthogonal complement of S is defined to be
S ⊥ = {v ∈ E : ∀w ∈ S, w⊥v}.
S ⊥ is a subspace of E.
Theorem 17 If $(v_1, \dots, v_m)$ is a list of linearly independent vectors in E, then there exists an orthonormal list $(e_1, \dots, e_m)$ such that $\operatorname{span}(e_1, \dots, e_k) = \operatorname{span}(v_1, \dots, v_k)$ for each $k = 1, \dots, m$.