Algebra Course
By
DJAMEL BELLAOUAR
University 8 Mai 1945 Guelma
Theme
Dedications
My Students
Readers
Words fly away, but writings remain... hence this work.
Table of contents
Introduction
4.2 Hermitian quadratic forms over Cn
4.3 Gauss decomposition for hermitian forms
Conclusion
Bibliography
Notations
The following notations allow the reader to clearly understand the content of this manuscript.
• α, β, λ, µ are scalars and x, y, u, v, w, u0 , v 0 , w0 are vectors.
• ⊕ This is an internal composition law.
• ⊗ This is an external composition law.
• E a vector space over K.
• Kn The vector space of n-tuples of real or complex numbers.
• (x1 , x2 , ..., xn ) An element of Kn (vector).
• Kn [x] The vector space of all polynomials of degree not exceeding n with real or complex coefficients.
• C ([a, b] , R) The vector space of all continuous functions on [a, b] .
• C ∞ ([a, b] , R) The v. space of all infinitely differentiable functions on [a, b] .
• Mn (K) The vector space of all n by n real (or complex) matrices.
• Sn (K) The vector space of all n by n real (or complex) symmetric matrices.
• An (K) The vector space of all n by n real (or complex) skew-symmetric matrices.
• GLn (K) The group of all n by n invertible matrices.
• Mf (B) The matrix of the mapping f with respect to the basis B.
• P The passage matrix.
• {e1 , e2 , ..., en } In general denotes the canonical basis.
• V ect {u1 , u2 , ..., un } The vector space of all linear combinations of the vectors ui (1 ≤ i ≤ n).
• ker f The kernel of the linear mapping f or the kernel of the bilinear symmetric form
f.
• Im f The vector subspace {f (v) : v ∈ E} .
• F ⊕ G Direct sum between F and G.
• L2 (E) The v. space of all bilinear forms on E.
• L (E, F ) The v. space of all linear mappings from E to F .
• L (E, K) The v. space of all linear mappings from E to K.
• q or Q Quadratic forms.
• C The isotropic cone ; C = {v ∈ E : f (v, v) = 0}.
Note: "v. space" means "vector space".
• E ∗ The dual v. space of a vector space E.
• Φ∗ The dual mapping of Φ.
• S2 (E) The v. subspace of all symmetric bilinear forms on E.
• A2 (E) The v. subspace of all skew-symmetric bilinear forms on E.
• Q2 (E) The set of all quadratic forms on E.
• diag {a1 , a2 , ..., an } Diagonal matrix whose diagonal entries are a1 , a2 , ..., an .
• tr (A) The trace of an n by n matrix A.
• Sp (A) The spectral set of A = The set of eigenvalues of A.
• z The conjugate of the vector z ∈ Cn .
• i The imaginary unit (i2 = −1).
• I The identity matrix.
• Re (z) The real part of a complex number z.
• At The transpose of a matrix A.
• det (A) Determinant of a square matrix A.
• A∗ The transpose conjugate of a complex n by n matrix A.
• kvk the norm of the vector v.
• hu, vi The scalar product (or inner product) between the vectors u and v.
General Introduction
This work is the fruit of teaching this subject at the University 8 Mai 1945 Guelma. It is intended for second-year mathematics students. This volume is devoted to part of the program of Algebra 4 (bilinear forms, quadratic forms, sesquilinear forms and Hermitian forms); see [1], [2], [3], [4], [5], [6].
Each chapter begins with a clear presentation of definitions, lemmas and theorems, illustrated with numerous examples. This is followed by a graduated set of solved exercises.
The course summary is sufficiently developed so that everyone will find the results they need to solve the proposed problems. Although the large number of additional problems makes their solution difficult, special importance should nevertheless be given to those presented in the first two chapters. After working through them, the student will feel more confident.
I taught this material in French from 2012 to 2016, and then a second time in English, from 2020 to the present. As this was among the first subjects presented to them in English at the beginning of their education, the students gladly accepted it. Indeed, this course, which is based on bilinear forms (linearity from the right and linearity from the left), is a continuation of Algebra II taught in the first year M.I. It is composed of five chapters. In Chapter 1, we recall some definitions and give without proof some classical results on vector spaces and linear mappings; that is, we list in this chapter the basic notions on a vector space and its dual space. In Chapter 2 we deal with bilinear forms over a real vector space. It is not possible to understand such properties without examining the related concept of linear forms. More precisely, this chapter describes the main properties of bilinear forms on a vector space and gives examples of the three most common types of such forms, namely symmetric, skew-symmetric and alternating bilinear forms. Chapter 3 deals with the spectral decomposition of self-adjoint
linear mappings. The important condition of nondegeneracy for a bilinear form, Gauss
decomposition theorem and the orthogonal basis for a symmetric bilinear form are the
subject of Chapter 4. An introduction to Hermitian space is given in Chapter 5. At the end
of this lecture-note, the reader will find a conclusion and a bibliography.
CHAPTER 1
We say that (E, ⊕, ⊗) is a v. space over the field K if the following conditions hold :
1. (E, ⊕) is a commutative (Abelian) group.
2. ∀ λ ∈ K, ∀ u, v ∈ E : λ ⊗ (u ⊕ v) = (λ ⊗ u) ⊕ (λ ⊗ v) ,
3. ∀ λ, µ ∈ K, ∀ v ∈ E : (λ + µ) ⊗ v = (λ ⊗ v) ⊕ (µ ⊗ v) ,
4. ∀ λ, µ ∈ K, ∀ v ∈ E : λ ⊗ (µ ⊗ v) = (λ.µ) ⊗ v,
5. ∀ v ∈ E : 1K ⊗ v = v. (if K = R or C ⇒ 1K = 1).
To simplify notation, in a v. space (E, ⊕, ⊗) over K we denote the internal law ⊕ by + and the external law ⊗ by · (or by simple juxtaposition). The definition of a v. space then becomes :
We say that (E, +, .) is a vector space (or just v.s.) over the field K if :
i. + is an internal composition law on E ; i.e., ∀ u, v ∈ E : u + v ∈ E.
ii. · is an external composition law on E ; i.e., ∀ λ ∈ K, ∀ v ∈ E : λv ∈ E.
1. (E, +) is a commutative (abelian) group.
2. ∀ λ ∈ K, ∀ u, v ∈ E : λ (u + v) = λu + λv,
1.1. VECTOR SPACE (A SUMMARY OF LESSONS)
3. ∀ λ, µ ∈ K, ∀ v ∈ E : (λ + µ) v = λv + µv,
4. ∀ λ, µ ∈ K, ∀ v ∈ E : λ (µv) = (λµ) v,
5. ∀ v ∈ E : 1K v = v.
We must know the following facts :
• The elements of the vector space E are called vectors and the elements of the field K
are called scalars (⇒ the sum of two vectors is a vector and the multiplication of a
vector by a scalar is a vector).
• The neutral element with respect to + in the vector space E is denoted 0E and called the zero vector.
• In the v. space E over K ; we have ∀ v ∈ E : −v = (−1) v ; where −v is the symmetric
element of v with respect to +, and (−1) v is the multiplication of the vector v by the
scalar −1.
• For two vectors u and v of the vector space E, we write by convention u − v instead of u + (−v) = u + (−1) v.
Let (E, +, .) be a vector space over the field K and let F be a subset of E.
Definition 1.1. We say that F is a vector subspace (or subspace) of E if (F, +, .) is itself a vector space over K (in particular, 0F = 0E ).
Remark 1.1. From the above definition we deduce that every vector space is a vector
subspace of itself.
© 2024, University 8 Mai 45 Guelma. Department of Mathematics. Djamel Bellaouar
Or equivalently,
F is a v. subspace of E ⇔ (i) F ≠ ∅ (i.e., 0E ∈ F ), and (ii) ∀ λ, µ ∈ K, ∀ u, v ∈ F : λ · u + µ · v ∈ F.
Consider the set
Kn = {(x1 , x2 , ..., xn ) : xi ∈ K} ,
equipped with the componentwise operations :
1. ∀ (x1 , ..., xn ) , (y1 , ..., yn ) ∈ Kn : (x1 , ..., xn ) + (y1 , ..., yn ) = (x1 + y1 , ..., xn + yn ) ,
2. ∀ λ ∈ K, ∀ (x1 , x2 , ..., xn ) ∈ Kn : λ · (x1 , x2 , ..., xn ) = (λx1 , λx2 , ..., λxn ) .
Then :
• Rn is a v. space over R,
• Rn is not a v. space over C,
• Cn is a v. space over C,
• Cn is a v. space over R.
Let E be a v. space over K, and let v, v1 , v2 , ..., vn ∈ E.
– We say that v is a linear combination of v1 , v2 , ..., vn ⇔ (by definition) ∃ λ1 , λ2 , ..., λn ∈ K :
v = λ1 v1 + λ2 v2 + ... + λn vn .
– We always have 0E = 0.v1 + 0.v2 + ... + 0.vn (where 0E is the zero vector of space E).
– The sum of two linear combinations is a linear combination.
– Multiplying a linear combination by a scalar is a linear combination.
Let E be a v. space over K and let v1 , v2 , ..., vn ∈ E. The set of all linear combinations of the vectors v1 , v2 , ..., vn is denoted V ect (v1 , v2 , ..., vn ) or hv1 , v2 , ..., vn i, and is called the subspace generated by the vectors v1 , v2 , ..., vn . We then have
V ect (v1 , v2 , ..., vn ) = {λ1 v1 + λ2 v2 + ... + λn vn : λ1 , λ2 , ..., λn ∈ K} .
Moreover, V ect (v1 , v2 , ..., vn ) is a vector subspace of E.
Consider a linear relation λ1 v1 + λ2 v2 + ... + λn vn = 0E between the vectors v1 , v2 , ..., vn .
1. If λ1 , λ2 , ..., λn are all zero, we say that this linear relation is trivial.
2. If λ1 , λ2 , ..., λn are not all zero, we say that this linear relation is non-trivial.
– We say that the vectors v1 , v2 , ..., vn are linearly independent (or free) if there is no non-trivial linear relation between the vectors v1 , v2 , ..., vn ; in other words, any linear relation between the vectors v1 , v2 , ..., vn is trivial; i.e.,
v1 , v2 , ..., vn are free ⇔ (λ1 v1 + λ2 v2 + ... + λn vn = 0E ⇒ λ1 = λ2 = ... = λn = 0) .
– We say that the vectors v1 , v2 , ..., vn are linearly dependent (or linked) if they are not free; in other words, if there is at least one non-trivial linear relation between the vectors v1 , v2 , ..., vn ; i.e.,
v1 , v2 , ..., vn are linked ⇔ ∃ λ1 , λ2 , ..., λn ∈ K, not all zero, such that λ1 v1 + λ2 v2 + ... + λn vn = 0E .
– The family of vectors {v1 , v2 , ..., vn } is said to be free if the vectors v1 , v2 , ..., vn are free.
– The family of vectors {v1 , v2 , ..., vn } is said to be linked if the vectors v1 , v2 , ..., vn are linked.
– Note that if a family contains a linked subfamily, then the whole family is linked.
– If v ∈ E, then v 6= 0E ⇔ v is free ; since we have
v 6= 0E ⇔ (∀λ ∈ K : λv = 0E ⇒ λ = 0) .
– The null vector or the zero vector 0E is linked ; since we have 1.0E = 0E , which is a
non-trivial linear relationship.
– If a family of vectors contains the zero vector, then that family is linked; i.e., the family {v1 , v2 , ..., vn , 0E } is linked, since {0E } is linked; or because 0.v1 + 0.v2 + ... + 0.vn + 1.0E = 0E is a non-trivial linear relation.
Now, if v2 , v3 , ..., vn are free, then by definition {v2 , v3 , ..., vn } is a basis of E. Hence, dim E = n − 1. But if v2 , v3 , ..., vn are linked, then one of these vectors is a linear combination of the others; for example, vn = α2 v2 + α3 v3 + ... + αn−1 vn−1 . Hence,
– Note that the vector subspace {0E } has no basis; but by convention we put dim {0E } = 0 (indeed {0E } = V ect (0E ), and {0E } is linked).
– Note that if E = V ect (v), where v ≠ 0E (i.e., v is free), then {v} is a basis of E. In this case, dim E = 1.
– For the vector space Kn over K, we have dim Kn = n, since the family {e1 , e2 , ..., en } forms a basis of Kn , called the canonical basis of Kn , where ei = (0, ..., 0, 1, 0, ..., 0) with 1 in the i-th position.
– For the vector space Cn over the field R, we have dim Cn = 2n, since the family of vectors
{e1 , ie1 , e2 , ie2 , ..., en , ien } , where i2 = −1,
forms a basis of Cn over R.
– Let B = {v1 , v2 , ..., vn } be a basis of E. Then for every v ∈ E,
∃ λ1 , λ2 , ..., λn ∈ K : v = λ1 v1 + λ2 v2 + ... + λn vn ,
since E = V ect (v1 , v2 , ..., vn ). Moreover, since the vectors v1 , v2 , ..., vn are free, the scalars λ1 , λ2 , ..., λn are unique. In this case the scalars (λ1 , λ2 , ..., λn ) are called the coordinates of v in the basis B.
– In the vector space Kn over K, we have ∀ (x1 , x2 , ..., xn ) ∈ Kn :
(x1 , x2 , ..., xn ) = x1 e1 + x2 e2 + ... + xn en ,
where {e1 , e2 , ..., en } is the canonical basis of Kn . Therefore, (x1 , x2 , ..., xn ) are the coordinates of the vector (x1 , x2 , ..., xn ) in the canonical basis {e1 , e2 , ..., en }.
– In the vector space Cn over R, we have ∀ (z1 , z2 , ..., zn ) ∈ Cn :
(z1 , z2 , ..., zn ) = x1 e1 + y1 (ie1 ) + ... + xn en + yn (ien ) ,
1.2. LINEAR MAPPINGS AND LINEAR FORMS
where zk = xk + iyk (1 ≤ k ≤ n) and {e1 , ie1 , e2 , ie2 , ..., en , ien } is the canonical basis of Cn over R. Hence, (x1 , y1 , x2 , y2 , ..., xn , yn ) are the coordinates of the vector (z1 , z2 , ..., zn ) in the canonical basis.
– In the v. space Rn [x], we have
∀ P ∈ Rn [x] : P = a0 + a1 · x + a2 · x2 + ... + an · xn ,
where {1, x, x2 , ..., xn } is the canonical basis of Rn [x]. Hence, (a0 , a1 , a2 , ..., an ) are the coordinates of P in the canonical basis.
– In the vector space M2 (R), we have ∀ A = (aij ) ∈ M2 (R) :
A = a11 · e1 + a12 · e2 + a21 · e3 + a22 · e4 ,
where
e1 = [1 0 ; 0 0], e2 = [0 1 ; 0 0], e3 = [0 0 ; 1 0], e4 = [0 0 ; 0 1]
(rows separated by semicolons) is the canonical basis of M2 (R). Hence, dim M2 (R) = 4. More generally, dim Mn (R) = n2 .
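This decomposition is easy to verify numerically. The following sketch (Python with numpy; the list name E and the sample matrix are ours) reconstructs a 2 by 2 matrix from its coordinates in the canonical basis:

```python
import numpy as np

# Canonical basis of M2(R): the four matrices with a single entry equal to 1
# (stored in a Python list named E; e1..e4 in the text above).
E = [np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[0.0, 1.0], [0.0, 0.0]]),
     np.array([[0.0, 0.0], [1.0, 0.0]]),
     np.array([[0.0, 0.0], [0.0, 1.0]])]

A = np.array([[3.0, -1.0], [2.0, 5.0]])
# The coordinates of A in this basis are just its entries a11, a12, a21, a22.
coords = A.flatten()
reconstructed = sum(c * B for c, B in zip(coords, E))
assert np.array_equal(reconstructed, A)
```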
• f is a linear mapping ⇔ ∀ α, β ∈ K, ∀ u, v ∈ E : f (αu + βv) = αf (u) + βf (v) .
Definition 1.2. Let E be a vector space over K. A linear form over K is a linear mapping
from E to K. The vector space of all linear forms on E, denoted by E ∗ , is called the dual
vector space of E.
Example 1.2. Using (1.2) or (1.3), one can easily check that various standard mappings define linear forms on E.
The dual space of E, denoted E ∗ , is the v. space of all linear forms on E. In other words, E ∗ = L (E, K) .
We have the following facts :
• If E has finite dimension, then dim E = dim E ∗ .
• If {u1 , u2 , ..., un } is a basis of E, then the dual basis of u1 , u2 , ..., un is the list Φ1 , Φ2 , ..., Φn of elements of E ∗ , where each Φi : E → K is the linear mapping such that
Φi (uj ) = 1 if i = j, and Φi (uj ) = 0 otherwise. (1.5)
In the case when E = Kn , we can easily find the corresponding dual basis of the canonical basis :
Φi : Kn → K
(x1 , x2 , ..., xn ) 7→ xi
We see that Φi (ej ) satisfies (1.5). Hence, {Φ1 , Φ2 , ..., Φn } is the corresponding dual
basis of the canonical basis (e) of Kn .
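As a quick numerical illustration (a Python/numpy sketch with names of our choosing), the coordinate functionals Φi on Kn indeed satisfy condition (1.5) on the canonical basis:

```python
import numpy as np

n = 3
# Canonical basis e_1, ..., e_n of K^n (rows of the identity matrix)
basis = [np.eye(n)[j] for j in range(n)]
# Dual basis: Phi_i picks out the i-th coordinate of a vector
Phi = [lambda x, i=i: x[i] for i in range(n)]

# Phi_i(e_j) = 1 if i = j and 0 otherwise, which is exactly condition (1.5)
delta = np.array([[Phi[i](basis[j]) for j in range(n)] for i in range(n)])
assert np.array_equal(delta, np.eye(n))
```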
– Every basis of E ∗ is the dual basis of a unique basis of E, it is called the predual basis.
– Let f be a nonzero linear form over E. Then there exists a nonzero vector v such that f (v) = 1. In fact, since f ≠ 0, there exists a nonzero vector x0 such that f (x0 ) ≠ 0. The result holds for v = x0 /f (x0 ).
– Let E be a finite-dimensional v. space, say dim E = n. If v ∈ E is a nonzero vector, then there exists a linear form f ∈ E ∗ such that f (v) = 1. Indeed, let v = α1 u1 + ... + αn un be a nonzero vector. Then there exists i0 ∈ {1, ..., n} such that αi0 ≠ 0. Define u∗i0 as in (1.4), so that u∗i0 (v) = αi0 ≠ 0. Hence, the result holds if we put f = u∗i0 /u∗i0 (v).
Proposition 1.2 (Changing dual basis). Let B1 and B2 be two bases of E and let P be the passage matrix from B1 to B2 . Then (P −1 )t is the passage matrix from B1∗ to B2∗ .
Definition 1.3 (dual mapping). If Φ ∈ L (E, F ), then the dual mapping of Φ is the linear mapping Φ∗ ∈ L (F ∗ , E ∗ ) defined by Φ∗ (f ) = f ◦ Φ, for f ∈ F ∗ .
Φ : Rn [x] → Rn [x]
p 7→ Φ (p) = p0 ,
where p0 denotes the derivative of p. Let us take, for example f : Rn [x] → R such that
f (p) = p (n) (here n is a positive integer). Then Φ∗ (f ) is the linear mapping on Rn [x]
given by
(Φ∗ (f )) (p) = (f ◦ Φ) (p) = f [Φ (p)] = f (p0 ) = p0 (n) .
• Any linear mapping is a homomorphism (we can talk about the kernel, the image and
so on).
• If f : E → F is a linear mapping, then f (0E ) = 0F (the converse is false).
• If f (0E ) 6= 0F , then f : E → F is not a linear mapping.
• Be careful : if f : E → F is linear and v ∈ E, then f (v) = 0F does not imply v = 0E in general. But if f is injective, then f (v) = 0F ⇒ v = 0E (since f (v) = 0F ⇔ f (v) = f (0E ), and injectivity gives v = 0E ).
• A linear mapping f : E → E is called an endomorphism of E.
• A bijective linear mapping f : E → F is called an isomorphism.
• A bijective endomorphism of E is called an automorphism of E.
• Every linear mapping f : Rn −→ Rm is uniquely defined as follows :
f (x1 , x2 , ..., xn ) = (a11 x1 + a12 x2 + ... + a1n xn , ..., am1 x1 + am2 x2 + ... + amn xn ) ,
for some scalars aij ∈ R.
Im f = {f (v) : v ∈ E} .
1.3. PROPOSED PROBLEMS ON LINEAR FORMS
• If f : E → F is linear, then ker f is a v. subspace of E and Im f is a v. subspace of F .
• If f : E → F is linear and E is finite-dimensional, then dim E = dim ker f + dim Im f .
then, determine ker f . The same question for g, where g (1, 0, 1) = −1, g (0, 1, 1) = 0 and
g (−1, 1, 1) = 2.
f1 (x, y) = x + y, f2 (x, y) = x − y.
Exercise 4. Consider the vector space of real polynomials of degree not exceeding 2, i.e., E = R2 [x]. Define the mappings ϕ0 , ϕ1 , ϕ2 from E to R by
∀ p ∈ E : ϕ0 (p) = p (0) , ϕ1 (p) = p (1) and ϕ2 (p) = ∫01 p (t) dt.
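A quick numeric sanity check for this exercise (a Python sketch, not a substitute for the hand computation): represent each form by its values on the canonical basis {1, x, x2 } and test whether the resulting matrix is invertible.

```python
import numpy as np

# Values of phi0(p) = p(0), phi1(p) = p(1), phi2(p) = integral of p over [0, 1]
# on the canonical basis {1, x, x^2} of R2[x]; one row per linear form.
M = np.array([
    [1.0, 0.0, 0.0],    # p(0)  for p = 1, x, x^2
    [1.0, 1.0, 1.0],    # p(1)
    [1.0, 1/2, 1/3],    # integral of p on [0, 1]
])
# The three forms are a basis of E* iff this matrix is invertible.
d = np.linalg.det(M)
assert np.isclose(d, -1/6)   # nonzero, so {phi0, phi1, phi2} is free, hence a basis
```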
CHAPTER 2
In this chapter we present a basic introduction to bilinear forms over a vector space, including rank, kernel, Gram-Schmidt orthogonalization, orthogonal matrices and diagonalization of real symmetric matrices.
As in (1.2) and (1.3), note that a bilinear form is a mapping f from E 2 to R such that f is
linear from the left and linear from the right. For details, we present the following remark.
Remark 2.1. Let f : E × E → R be a bilinear form on E. This means that for all x, x0 , y, y 0 ∈
E and λ ∈ R we have
• f (x + x0 , y) = f (x, y) + f (x0 , y),
• f (x, y + y 0 ) = f (x, y) + f (x, y 0 ),
• f (λx, y) = λf (x, y),
• f (x, λy) = λf (x, y) .
2.1. BILINEAR FORMS (DEFINITIONS)
Example 2.1. We can easily check that the following mappings are symmetric bilinear
forms.
1. f : R × R → R, f (x, y) = xy.
Notation 2.1. Let L2 (E) denote the vector space of all bilinear forms over E, S2 (E) denote
the vector space of all symmetric bilinear forms over E and A2 (E) denote the vector space
of all skew-symmetric bilinear forms over E.
We can prove the following fact : L2 (E) = S2 (E) ⊕ A2 (E) . Indeed, we have f0 = 0 ∈
S2 (E) ∩ A2 (E) . Also, if f ∈ S2 (E) ∩ A2 (E), then by Definition 2.2 f (x, y) = f (y, x) =
−f (y, x) for every x, y ∈ E. Hence, f (x, y) = 0 for every x, y ∈ E. So, f = f0 . Thus, we
have proved that S2 (E) ∩ A2 (E) = {f0 } . Now, let f ∈ L2 (E). For any x, y ∈ E, we see
that
f (x, y) = [f (x, y) − f (y, x)] /2 + [f (x, y) + f (y, x)] /2 = h1 (x, y) + h2 (x, y) ,
where h1 is skew-symmetric and h2 is symmetric.
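On Rn , where a bilinear form is given by a matrix A via f (x, y) = xt Ay, this decomposition is simply A = (A − At )/2 + (A + At )/2. A short numerical check (Python with numpy; the random matrix is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # matrix of an arbitrary bilinear form on R^4

H2 = (A + A.T) / 2                # symmetric part (h2 in the text)
H1 = (A - A.T) / 2                # skew-symmetric part (h1 in the text)

assert np.allclose(H2, H2.T)      # H2 represents an element of S2(E)
assert np.allclose(H1, -H1.T)     # H1 represents an element of A2(E)
assert np.allclose(H1 + H2, A)    # f = h1 + h2, so L2(E) = S2(E) ⊕ A2(E)
```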
f (λx + x0 , y) = (λx + x0 )t · A · y
= λxt · A · y + (x0 )t · A · y
= λf (x, y) + f (x0 , y) .
Thus, f is linear from the left. We use the same manner to show that f is linear from the
right. For every x, y, y 0 ∈ Rn and λ ∈ R we have
f (x, λy + y 0 ) = xt A (λy + y 0 )
= λxt Ay + xt Ay 0
= λf (x, y) + f (x, y 0 ) .
Next, assume that A is symmetric. We show that f is also symmetric. In fact, we have
f (x, y) = xt Ay
= (xt Ay)t (since xt Ay ∈ R)
= y t At (xt )t ((BC)t = C t B t )
= y t Ax (since A is symmetric and (xt )t = x)
= f (y, x) .
Corollary 2.1. Every matrix A ∈ Mn (R) produces a bilinear form over Rn and every
symmetric matrix A ∈ Mn (R) produces a symmetric bilinear form over Rn .
Theorem 2.2. Let B and B 0 be two bases of E. Let P be the passage matrix from B to B 0 and let
f : E × E → R be a bilinear form over E. If A = Mf (B) and A0 = Mf (B 0 ), then A0 = P t · A · P.
Proof. Assume that B = {e1 , e2 , ..., en } and B 0 = {e01 , e02 , ..., e0n }. Every x, y ∈ E can be written
x = Σi xi · ei and y = Σi yi · ei in the basis B, and x = Σi x0i · e0i and y = Σi yi0 · e0i in the basis B 0 .
Denoting the coordinate columns by x, y, x0 , y 0 , the definition of the passage matrix gives x = P · x0 and y = P · y 0 , and hence
f (x, y) = xt · A · y = (P · x0 )t · A · (P · y 0 ) = (x0 )t · (P t AP ) · y 0 = (x0 )t · A0 · y 0 .
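Theorem 2.2 can be checked numerically. The sketch below (Python with numpy; the random passage matrix is assumed invertible, which holds almost surely) verifies that A0 = P t AP represents the same bilinear form:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))   # matrix of f in the basis B
P = rng.standard_normal((3, 3))   # passage matrix from B to B' (invertible almost surely)
A2 = P.T @ A @ P                  # A' = P^t A P, as in Theorem 2.2

# f(x, y) = x^t A y must equal (x')^t A' y' whenever x = P x' and y = P y'
xp, yp = rng.standard_normal(3), rng.standard_normal(3)
x, y = P @ xp, P @ yp
assert np.isclose(x @ A @ y, xp @ A2 @ yp)
```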
2.2. ORTHOGONAL MATRICES
Proof. Let {u1 , u2 , ..., un } be a basis of E. Define the bilinear forms fi,j by
fi,j (ur , us ) = 1 for (i, j) = (r, s), and fi,j (ur , us ) = 0 for (i, j) ≠ (r, s).
Let x = Σi xi ui and y = Σj yj uj be two vectors of E, so that fi,j (x, y) = xi yj . It is clear that
f (x, y) = f (Σi xi ui , Σj yj uj ) = Σi Σj xi yj f (ui , uj ) = Σi Σj aij xi yj = Σi Σj aij fi,j (x, y) ,
where aij = f (ui , uj ). Then these n2 bilinear forms fi,j generate the vector space L2 (E). Since (fi,j )1≤i,j≤n form a free family, we conclude that (fi,j )1≤i,j≤n is a basis of L2 (E). The proof is finished.
For example, the matrices
[1/√2 1/√2 ; −1/√2 1/√2] and [1/√2 1/√6 1/√3 ; −1/√2 1/√6 1/√3 ; 0 −2/√6 1/√3]
(rows separated by semicolons) are orthogonal.
Proposition 2.1. Let A ∈ Mn (R) be an orthogonal matrix. Then det (A) = ±1.
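Both facts can be illustrated numerically; here is a sketch (Python with numpy) for the 3 by 3 matrix of the example above, as we read it:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
Q = np.array([
    [ 1/s2,  1/s6, 1/s3],
    [-1/s2,  1/s6, 1/s3],
    [ 0.0,  -2/s6, 1/s3],
])

assert np.allclose(Q.T @ Q, np.eye(3))           # Q^t Q = I, so Q is orthogonal
assert np.isclose(abs(np.linalg.det(Q)), 1.0)    # hence det(Q) = ±1 (Proposition 2.1)
```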
We need to define matrix norms and the scalar product over a vector space E. A norm on E is a mapping
k.k : E → R+ , x 7→ kxk (we say : the norm of x),
satisfying the usual norm axioms. In this case, the couple (E, k.k) is called a normed vector space (or normed space). So, a normed space E is a v. space endowed with a norm.
Example 2.4. Here, we only use the two vector spaces Kn and Mn (K), with K = R or C.
• For x = (x1 , ..., xn ) ∈ Kn :
kxk1 = Σi |xi | , kxk2 = (Σi |xi |2 )1/2 , kxk∞ = max1≤i≤n |xi | .
• For A = (aij ) ∈ Mn (K) :
kAk2 = (Σi,j |aij |2 )1/2 .
As an application, for x = (−1, 1, −2)t , we have
kxk1 = 4, kxk2 = √6 and kxk∞ = 2.
and for A = [−1 −2 ; 7 3] ∈ M2 (R), we also have
kAk1 = max (8, 5) = 8, kAk2 = √63 = 3√7 and kAk∞ = max (3, 10) = 10.
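These values are easy to confirm with a short computation (Python with numpy; each norm is computed directly from its definition):

```python
import numpy as np

x = np.array([-1.0, 1.0, -2.0])
assert np.isclose(np.sum(np.abs(x)), 4.0)              # ||x||_1 = 4
assert np.isclose(np.sqrt(np.sum(x**2)), np.sqrt(6))   # ||x||_2 = sqrt(6)
assert np.isclose(np.max(np.abs(x)), 2.0)              # ||x||_inf = 2

A = np.array([[-1.0, -2.0], [7.0, 3.0]])
assert np.isclose(np.abs(A).sum(axis=0).max(), 8.0)    # ||A||_1: largest column sum
assert np.isclose(np.sqrt((A**2).sum()), 3*np.sqrt(7)) # ||A||_2 (Frobenius) = 3*sqrt(7)
assert np.isclose(np.abs(A).sum(axis=1).max(), 10.0)   # ||A||_inf: largest row sum
```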
Lemma 2.1. For each matrix A ∈ Mn (K) and for each x ∈ Kn , we have the following inequality : kAxk2 ≤ kAk2 · kxk2 .
The above lemma is not used in this lecture-note ; but it remains interesting for future
study.
Definition 2.5. Let E be a real vector space. An inner product on E is a function
h., .i : E × E → R, (x, y) 7→ hx, yi ,
which is bilinear, symmetric and positive definite.
Example 2.6. Define over the vector space C([a, b]) the inner product :
∀ f, g ∈ C([a, b]) : hf, gi = ∫ab f (x) · g (x) dx.
kA (x + y)k2 = kx + yk2 .
Expanding both sides and simplifying, we get hx, At Ay i = hx, yi for all x, y ∈ E, hence hx, At Ay − yi = 0 for all x; taking x = At Ay − y gives
kAt Ay − yk2 = 0.
Hence, At Ay = y. Consequently, At A = In .
2.3. GRAM-SCHMIDT ORTHONORMALIZATION THEOREM
The following formulas permit us to find such an orthonormal basis recursively as follows :
e1 = u1 /ku1 k ,
vk = uk − Σk−1i=1 hei , uk i · ei ,
ek = vk /kvk k . (2.4)
Example 2.8. Let us take E = R2 and B = {(1, −1) , (1, 1)} = {u1 , u2 }. Clearly, B is a basis of R2 . Now, we construct the corresponding orthonormal basis using the Gram-Schmidt method. First, we have
e1 = (1, −1) /k(1, −1)k = (1/√2, −1/√2)
and
v2 = u2 − he1 , u2 i · e1 = (1, 1) − h(1/√2, −1/√2) , (1, 1)i · (1/√2, −1/√2) = (1, 1) .
Hence,
e2 = v2 /kv2 k = (1/√2, 1/√2) .
We deduce that {e1 , e2 } is an orthonormal basis of R2 . Similarly, let u1 = (1, 1, 1), u2 = (0, 1, 1) and u3 = (0, 0, 1), which form a basis of R3 . We have e1 = u1 /ku1 k = (1/√3, 1/√3, 1/√3). Next, by (2.4) we have
v2 = u2 − he1 , u2 i · e1
= (0, 1, 1) − h(1/√3, 1/√3, 1/√3) , (0, 1, 1)i · (1/√3, 1/√3, 1/√3)
= (0, 1, 1) − (2/√3) · (1/√3, 1/√3, 1/√3) = (−2/3, 1/3, 1/3) .
2.4. DIAGONALIZATION OF REAL SYMMETRIC MATRICES
Hence, e2 = v2 /kv2 k = (−2/√6, 1/√6, 1/√6). Also, by (2.4),
v3 = u3 − he1 , u3 i e1 − he2 , u3 i e2
= (0, 0, 1) − h(1/√3, 1/√3, 1/√3) , (0, 0, 1)i · (1/√3, 1/√3, 1/√3)
− h(−2/√6, 1/√6, 1/√6) , (0, 0, 1)i · (−2/√6, 1/√6, 1/√6)
= (0, −1/2, 1/2) .
Thus, e3 = (0, −1/√2, 1/√2). We deduce that {e1 , e2 , e3 } is an orthonormal basis of R3 .
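The recursion (2.4) translates directly into code. The following sketch (Python with numpy; the function name is ours) reproduces the second example above:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors via formulas (2.4)."""
    es = []
    for u in vectors:
        v = u - sum(np.dot(e, u) * e for e in es)  # v_k = u_k - sum <e_i, u_k> e_i
        es.append(v / np.linalg.norm(v))           # e_k = v_k / ||v_k||
    return es

# The basis {(1,1,1), (0,1,1), (0,0,1)} of R^3 treated in the example above
e1, e2, e3 = gram_schmidt([np.array([1.0, 1.0, 1.0]),
                           np.array([0.0, 1.0, 1.0]),
                           np.array([0.0, 0.0, 1.0])])
assert np.allclose(e2, np.array([-2.0, 1.0, 1.0]) / np.sqrt(6))
assert np.allclose(e3, np.array([0.0, -1.0, 1.0]) / np.sqrt(2))
```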
We can easily show that if A is symmetric, then eA is also symmetric. Indeed, if A is symmetric then we have
(eA )t = (Σ+∞i=0 Ai /i!)t = Σ+∞i=0 (Ai )t /i! = Σ+∞i=0 (At )i /i! = Σ+∞i=0 Ai /i! = eA .
Lemma 2.2. Every symmetric matrix A ∈ Mn (R) is diagonalizable. Moreover, every symmetric
matrix A ∈ Mn (R) can be represented in the form :
A = P · D · P t, (2.5)
where P is orthogonal and D is diagonal whose diagonal entries are the eigenvalues of A.
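Lemma 2.2 can be checked numerically with any symmetric matrix; here is a sketch (Python with numpy, using numpy.linalg.eigh, which returns an orthogonal matrix of eigenvectors; the sample matrix is ours):

```python
import numpy as np

A = np.array([[2.0, -2.0], [-2.0, 5.0]])   # a sample symmetric matrix
eigvals, P = np.linalg.eigh(A)              # columns of P are orthonormal eigenvectors
D = np.diag(eigvals)

assert np.allclose(P.T @ P, np.eye(2))      # P is orthogonal
assert np.allclose(P @ D @ P.T, A)          # A = P D P^t, i.e. formula (2.5)
assert np.allclose(eigvals, [1.0, 6.0])     # eigh returns eigenvalues in ascending order
```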
Theorem 2.5. Let A ∈ Mn (R). Then A is symmetric positive definite if and only if there exists an invertible matrix M such that
A = M t M. (2.6)
Proof (of the direct implication). Assume A is symmetric positive definite. By Lemma 2.2, A = P · D · P t , where P is orthogonal and D = diag {λ1 , λ2 , ..., λn } is diagonal whose diagonal elements are the eigenvalues of A. Moreover, since A is positive definite, the matrix D is also positive definite, that is, its diagonal entries are strictly positive. Thus, we can define the diagonal matrix
√D = diag {√λ1 , √λ2 , ..., √λn } ,
and then
A = P DP t = P √D √D P t = P √D (√D)t P t = (P √D)(P √D)t = M t M,
where M = (P √D)t ∈ GLn (R), since P, √D ∈ GLn (R). The proof of Theorem 2.5 is finished.
Corollary 2.2. Let A be a symmetric positive definite matrix. Then det (A) > 0.
2.5. PROPOSED PROBLEMS ON BILINEAR FORMS
Proof. First method. Since A is symmetric positive definite, by Theorem 2.5 we have A = M t M , where M is invertible. Therefore,
det (A) = det (M t M ) = det (M t ) det (M ) = (det M )2 > 0.
Second method. Since A is symmetric positive definite, Sp (A) ⊂ R∗+ . On the other hand, it is well known that det (A) = Πλi , the product running over the eigenvalues λi of A; hence det (A) > 0.
Proposition 2.2. Let A be a symmetric matrix and let (λ1 , x), (λ2 , y) be two eigenpairs of A with λ1 ≠ λ2 . Then hx, yi = 0.
where {e1 , e2 } is the canonical basis of R2 . Specify the value f (x, y) for every x, y in R2 .
Exercise 4. Let f be the bilinear form on R2 defined, for x = (x1 , x2 ) and y = (y1 , y2 ) in R2 , by
1. Find the matrix A0 of f related to the basis {u1 = (1, 0) , u2 = (1, 1)} .
2. Find the matrix B of f related to the basis {v1 = (2, 1) , v2 = (1, −1)} .
3. Find the passage matrix P from the basis {u1 , u2 } to the basis {v1 , v2 } and verify that
B = P t A0 P.
4. What is the rank of f ?
Exercise 5. Let E be a vector space over a commutative field K (R or C). We denote
by S the set of symmetric bilinear forms on E and by A the set of antisymmetric bilinear
forms on E.
1. Show that S and A are two vector subspaces of the vector space of bilinear forms on E.
2. Show that the vector space of bilinear forms B(E) over E is the direct sum of S and
A.
3. We assume that E is of finite dimension n. What are the dimensions of S and A?
Exercise 6. Are the following functions E × E → R bilinear forms over the vector
space E? If yes, write their matrix in the canonical basis. Are they symmetric ? When
E = R3 , give their matrix in the basis B = {v1 , v2 , v3 } , where v1 = (1, 0, 1), v2 = (1, 1, 0)
and v3 = (−1, 0, 1) .
• f (x, y) = x1 y1 + x2 y2 + x3 y3 , E = R3
• f (x, y) = y1 y2 + x1 y1 + x3 y3 , E = R2
• f (x, y) = x22 y1 + 3x2 y2 , E = R3
• f (x, y) = x1 y2 − 2x3 (y2 + 2y1 ) + 4x3 y2 − y1 x2 , E = R3
Exercise 7. Let f1 , f2 be bilinear forms on R3 whose matrices in the canonical basis are
A1 = [1 −1 0 ; −1 −3 2 ; 0 −2 −1] and A2 = [0 1 1/2 ; 1 −2 −1/2 ; 1/2 −1/2 0]
(rows separated by semicolons). Write the matrices B1 and B2 of f1 and f2 with respect to the basis {v1 , v2 , v3 }, where v1 = (1, 0, 0) , v2 = (1/2, 1/2, 0) , v3 = (−1/2, −1/2, 1) . Deduce the rank of each of the bilinear forms f1 and f2 .
Exercise 8. Prove that the vectors e1 = (1, 0, 2) , e2 = (0, 1, 1) and e3 = (−1, 0, 1) form a
basis of R3 .
Determine the matrix with respect to this basis of the bilinear form R3 × R3 → R
g ((x1 , x2 ) , (y1 , y2 )) = x1 y2 − x2 y1 ,
CHAPTER 3
In this chapter we focus on symmetric bilinear forms, which define quadratic forms; recall that every bilinear form is uniquely represented as the sum of a symmetric bilinear form and a skew-symmetric bilinear form. Let us start with the following definition :
Example 3.1. Using the above definition, we can easily show that the following mappings
are quadratic forms over E.
1. E = R and q : E → R, x 7→ x2 .
2. E = R2 and q : E → R, (x, y) 7→ x2 + y 2 .
3. E = P [x] and q : E → R, p 7→ ∫ab p2 (t) dt.
4. E = R3 and q : E → R, (x1 , x2 , x3 ) 7→ x21 + x22 − x23 .
The corresponding polar forms are given in Example 2.1.
Notation 3.1. Let q : E → R be a quadratic form over E. We denote by Q2 (E) the set of
all quadratic forms over E.
f (u, v) = (1/4) [q (u + v) − q (u − v)] = (1/2) [q (u + v) − q (u) − q (v)] . (3.1)
3.2. QUADRATIC FORMS OVER Rn
The polar form of q is the mapping
f : E × E → R, (u, v) 7→ (1/2) [q (u + v) − q (u) − q (v)] ,
and it satisfies :
1. f is bilinear,
2. f is symmetric,
3. f (x, x) = q (x) for any x ∈ E.
Example 3.2. Define the mapping Q : P2 [x] → R, p 7→ p (0) p (1). Show that Q is a quadratic form over P2 [x]. Indeed, by (3.1) we obtain the polar form ϕ : P2 [x] × P2 [x] → R, where
(p, q) 7→ ϕ (p, q) = (1/2) p (0) q (1) + (1/2) q (0) p (1) .
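Formula (3.1) can be tested numerically: for q (x) = xt Sx with S symmetric, the polar form recovered from q must equal ut Sv. A sketch (Python with numpy; the random symmetric matrix S is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))
S = (S + S.T) / 2                       # a random symmetric matrix
q = lambda x: x @ S @ x                 # quadratic form q(x) = x^t S x

# Polar form recovered from q by formula (3.1)
f = lambda u, v: 0.5 * (q(u + v) - q(u) - q(v))

u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(f(u, v), u @ S @ v)                      # f is the polar form of q
assert np.isclose(f(u, v), 0.25 * (q(u + v) - q(u - v)))   # the other identity in (3.1)
```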
A quadratic form over Rn is a mapping of the form
q (x1 , x2 , ..., xn ) = Σi Σj aij · xi xj ,
where the (aij ) are real numbers. There are two cases to consider :
Case 1. aij = aji for 1 ≤ i, j ≤ n. The quadratic form q is given by the following matrix form :
q (x1 , x2 , ..., xn ) = Σi aii x2i + Σi≠j aij · xi xj = Σi aii x2i + 2 Σi<j aij · xi xj = xt · A · x,
where x = (x1 , x2 , ..., xn )t and
A = [a11 a12 ... a1n ; a12 a22 ... a2n ; ... ; a1n a2n ... ann ]
(rows separated by semicolons) is symmetric.
Corollary 3.1. Every symmetric matrix A ∈ Mn (R) produces a quadratic form over Rn .
3.3. QUADRATIC FORMS OVER AN ARBITRARY VECTOR SPACE
Then
f (u, v) = f (Σi xi ei , Σi yi ei ) = Σi xi yi f (ei , ei ) + Σi<j (xi yj + xj yi ) f (ei , ej ) .
q1 : A 7→ q1 (A) = tr (At A) and q2 : A 7→ q2 (A) = tr (A2 ) .
3.4. ORTHOGONALIZATION METHOD
Theorem 3.1. Every quadratic form over Rn is diagonalizable. That is, if q = xt Ax with A
symmetric, then q = v t Dv with D diagonal.
where λ1 , ..., λn are the eigenvalues of A (the columns of P being corresponding orthonormal eigenvectors), and v = P t x = (v1 , v2 , ..., vn )t . That is, by (3.3) we get
q = λ1 v12 + λ2 v22 + ... + λn vn2 .
2) We put A = [2 −2 ; −2 5]. After a few computations, the eigenpairs of A are :
λ1 = 1 → u1 = (2, 1) ,
λ2 = 6 → u2 = (1, −2) .
We see that ku1 k2 = ku2 k2 = √5. Setting
P = [2/√5 1/√5 ; 1/√5 −2/√5] and D = [1 0 ; 0 6] ,
it follows that
q = xt Ax = xt P DP t x = (P t x)t · D · (P t x) .
Therefore, with v = P t x,
q = v t · D · v = λ1 v12 + λ2 v22 = ((2/√5) x1 + (1/√5) x2 )2 + 6 ((1/√5) x1 − (2/√5) x2 )2 = (2x1 + x2 )2 /5 + 6 · (x1 − 2x2 )2 /5 .
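We can confirm this reduction numerically by comparing q with the diagonalized expression at a few sample points (a Python sketch):

```python
import numpy as np

# q(x1, x2) = 2 x1^2 - 4 x1 x2 + 5 x2^2, the form with matrix A = [2 -2 ; -2 5]
q = lambda x1, x2: 2*x1**2 - 4*x1*x2 + 5*x2**2

# Reduced expression obtained above: q = (2x1 + x2)^2 / 5 + 6 (x1 - 2x2)^2 / 5
q_reduced = lambda x1, x2: (2*x1 + x2)**2 / 5 + 6*(x1 - 2*x2)**2 / 5

for x1, x2 in [(1.0, 0.0), (0.0, 1.0), (3.0, -2.0), (0.7, 1.9)]:
    assert np.isclose(q(x1, x2), q_reduced(x1, x2))
```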
3.5. DEFINITIONS AND RESULTS
Thus, f is called nondegenerate if ker f = {0E }. In the case when ker f 6= {0E }, f is said
to be degenerate.
C = {v ∈ E : f (v, v) = 0} . (3.4)
The set C is called the isotropic cone of E. Note that if v ∈ C, then α · v ∈ C. In fact, for every v ∈ C and α ∈ R we have f (αv, αv) = α2 f (v, v) = 0.
Theorem 3.3. Let f be a nonzero bilinear symmetric form defined on a vector space E. Then
C = ker f ⇔ C is a vector subspace of E.
Lemma 3.1. Let q be a quadratic form defined over E and let f be its polar form, where E is a vector space over R. Assume that C is a vector subspace of E and that there exists an element x0 ∈ C \ ker f . Then
∀ y ∈ E : if f (x0 , y) ≠ 0, then y ∈ C.
If we let λ0 = −q (y) / (2f (x0 , y)) ∈ R, then
q (λ0 x0 + y) = λ20 q (x0 ) + 2λ0 f (x0 , y) + q (y) = 2λ0 f (x0 , y) + q (y) = 0
(using q (x0 ) = 0), from which we deduce that λ0 x0 + y ∈ C. But C is given as a subspace containing x0 . Thus, since y = (λ0 x0 + y) − λ0 x0 , we deduce that y belongs to C, as claimed.
Proof of Theorem 3.3. (⇒) Suppose C = ker f , and let q be the quadratic form of f . For every (x, y) ∈ C 2 = (ker f )2 and λ ∈ R, we get :
• q (x + y) = q (x) + q (y) + 2f (x, y) = 0 (each term vanishes since x, y ∈ ker f ), which implies x + y ∈ C.
• q (λx) = λ2 q (x) = 0; i.e., λx ∈ C.
Thus, C is a subspace of E.
(⇐) Suppose that C is a subspace of E. Note that the inclusion ker f ⊂ C is always
true ; since
We would like to prove that if C is a subspace of E, then C ⊂ ker f. Assume by the way of
contradiction that C * ker f . There exists a nonzero vector x0 with x0 ∈ C/ ker f. Define
H = {y ∈ E ; f (x0 , y) = 0} .
z = y + (z − y) .
Case 2. If z ∉ H, by Lemma 3.1 once again, we have z ∈ C. Thus, E = C, and so f = 0 since q = 0. But this contradicts our hypothesis that f is a nonzero bilinear form. Our proof of Theorem 3.3 is finished.
We deduce from the above definition that C consists of all vectors x such that x ⊥ x. Also, ker f consists of the vectors which are orthogonal to all the vectors of E.
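The following small Python sketch illustrates this distinction on an assumed example (the form q (x, y) = x² − y², which is not part of the text at this point) : its isotropic cone is the union of the lines y = x and y = −x, which is not a subspace, while ker f = {0}, consistent with Theorem 3.3.

```python
def q(v):
    # assumed illustrative form q(x, y) = x^2 - y^2
    x, y = v
    return x * x - y * y

u, w = (1.0, 1.0), (1.0, -1.0)   # both isotropic (on the lines y = x, y = -x)
s = (u[0] + w[0], u[1] + w[1])   # their sum (2, 0)
# u and w lie in the cone C, but their sum does not: C is not a subspace
in_cone = [q(u) == 0, q(w) == 0, q(s) == 0]
```

Here `in_cone` evaluates to `[True, True, False]`, so C ≠ ker f for this form.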
Remark 3.1. In the case when a nondegenerate bilinear form on E is not symmetric, there are two different orthogonals of a subset A.
Definition 3.6 (Kernel of a bilinear symmetric form). Let f ∈ S2 (E). The kernel of f is
defined by
ker f = {x ∈ E, f (x, y) = 0 for every y ∈ E} . (3.7)
From Definition 3.5, we deduce that ker f = E⊥. Note that x0 ∈ ker f iff f (x0, y) = 0 for every y ∈ E. Similarly, x0 ∉ ker f if and only if f (x0, y) ≠ 0 for some y ∈ E.
1. (A⊥)⊥ ⊃ A.
2. (A ∩ B)⊥ ⊃ A⊥ + B⊥.
3. (A ∪ B)⊥ ⊃ A⊥ ∩ B⊥.
Proof. 1. Let v ∈ A. For any u ∈ A⊥, we have f (u, x) = 0 for any x ∈ A. In particular, for x = v we have f (u, v) = 0. This means that (A⊥)⊥ contains v, as required.
2. Let v = a + b ∈ A⊥ + B ⊥ , where A⊥ contains a and B ⊥ contains b. We will prove
that (A ∩ B)⊥ contains v. For every x ∈ A ∩ B we have f (a, x) = f (b, x) = 0 and so
f (a + b, x) = 0 since f ∈ S2 (E). Thus, f (v, x) = 0.
3. Let v ∈ A⊥ ∩ B ⊥ . For every x ∈ A ∪ B we have
• If x ∈ A, then f (v, x) = 0 since v ∈ A⊥ .
• If x ∈ B, then f (v, x) = 0 since v ∈ B ⊥ .
In both cases we have f (v, x) = 0 for any x ∈ A ∪ B. Thus, v ∈ (A ∪ B)⊥, as required.
Proposition 3.3. Let f be a bilinear form over E. Two subsets A and B of E are called orthogonal with respect to f if f (x, y) = 0 for any x in A and y in B. The following conditions are equivalent :
(a) A and B are orthogonal ;
(b) A ⊂ B⊥ ;
(c) B ⊂ A⊥.
Proof. We prove (a) ⇒ (b). Let a0 ∈ A. For each vector v ∈ B, f (a0 , v) = 0 since A and
B are orthogonal. Hence, a0 ∈ B ⊥ . Next, (b) ⇒ (c). Let b0 ∈ B. For each vector v ∈ A, we
have v ∈ B ⊥ , and so f (b0 , v) = 0 since b0 ∈ B. Hence, b0 ∈ A⊥ . Finally, (c) ⇒ (a). Let u ∈ A
and v ∈ B. Since v ∈ A⊥ , then f (u, v) = 0.
Definition 3.7. Let E be a vector space over R and let {e1, e2, ..., en} be a family of n vectors of E.
Proof. Let x ∈ ker f. Then by (3.7), f (x, y) = 0 for every y ∈ E. In the case when y = x, we get f (x, x) = 0. But since f is positive definite, f (x, x) = 0 implies x = 0.
Theorem 3.7. Let q be a quadratic form on a real vector space E. The following three statements are equivalent :
1. q is surjective.
2. q is neither positive nor negative.
3. There exists an isotropic vector which is not in the kernel.
Proof. (1) ⇒ (2). Since q is surjective, there exist x0 ∈ E and x1 ∈ E such that
q (x0) > 0 and q (x1) < 0.
3.6. GAUSS DECOMPOSITION (SYLVESTER'S THEOREM)
Consider p (λ) = q (λx0 + x1) = λ² q (x0) + 2λ f (x0, x1) + q (x1). Since q (x0) q (x1) < 0, the equation p (λ) = 0 has two real roots. Let λ0 be one of them. Then the vector y0 = λ0 x0 + x1 is, by construction, isotropic. We prove by way of contradiction that y0 is not in the kernel of f : assume that y0 ∈ ker f. Hence, f (y0, x) = 0 for each x ∈ E. In particular, for x = x0 and for x = x1 we have
0 = f (y0, x0) = λ0 q (x0) + f (x0, x1) ,
0 = f (y0, x1) = λ0 f (x0, x1) + q (x1) .
That is,
λ0² q (x0) + λ0 f (x0, x1) = 0,
λ0 f (x0, x1) + q (x1) = 0.
We deduce that λ20 q (x0 ) = q (x1 ) < 0. A contradiction.
(3) ⇒ (1). Let y0 be an isotropic vector which is not in the kernel of f. There exists y1 ∈ E such that f (y0, y1) ≠ 0. Then for each γ ∈ R, we put
λ = (γ − q (y1)) / (2f (y0, y1)) ∈ R,
where f1 , f2 , ..., fr+s are linearly independent forms over Rn . The couple (r, s) is called the
signature of q.
Recall that if fi (x1, x2, ..., xn) = a1^(i) x1 + a2^(i) x2 + ... + an^(i) xn with aj^(i) ∈ R for 1 ≤ i ≤ r + s
In the rest of this section we show how to write any quadratic form over Rn as in (3.8). To do this, we use a well-known identity. Let q (x1, x2, ..., xn) = Σi,j aij · xi xj be a quadratic form over Rn, where aij = aji for all i, j.
It follows that
q (x1, x2, ..., xn) = a11 y1² + q′ (y2, y3, ..., yn) ,
where q′ is also a quadratic form, but over Rn−1. Then we repeat the same argument with q′.
Case 2. When a11 = 0 but a12 ≠ 0, we put
x1 = y1 + y2
x2 = y1 − y2
x3 = y3 (3.11)
...
xn = yn .
It follows that
q (x1, x2, ..., xn) = Σi,j bij · yi yj ,
where b11 ≠ 0. This is the first case (we have transformed q so that we can apply the first case).
Example 3.9. Using Gauss’ Method, diagonalize the following two quadratic forms and
deduce their signatures :
• q1 = x1² + x2² + 2x3² − 4x1x2 + 6x2x3
• q2 = 2x1x2 + 2x2x3 + 2x1x3 .
Solution : For q1, since a11 = 1, it follows from (3.10) that
x1 = y1 + 2y2
x2 = y2
x3 = y3 .
This implies
q1 = y1² − 3y2² + 2y3² + 6y2y3 .
Finally, we obtain
q1 = (x1 − 2x2)² − 3 (x2 − x3)² + 5x3² ,
where the linear forms involved are independent since
| 1 −2 0 ; 0 0 5 ; 0 1 −1 | = −5 ≠ 0.
Thus, the signature of q1 is (2, 1). For the quadratic form q2, by (3.11) we put
x1 = y1 + y2
x2 = y1 − y2
x3 = y3 .
We obtain
q2 = 2y1² − 2y2² + 4y1y3 .
It follows that
q2 = 2 (y1 + y3)² − 2y2² − 2y3² .
Hence,
q2 = (1/2) (x1 + x2 + 2x3)² − (1/2) (x1 − x2)² − 2x3² ,
and the corresponding linear forms are independent since
| 1/2 1/2 1 ; 1 −1 0 ; 0 0 −2 | = 2 ≠ 0.
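The two Gauss decompositions can be verified numerically. The sketch below is an illustration only; it assumes the closed forms produced by the substitutions (3.10) and (3.11), namely q1 = (x1 − 2x2)² − 3(x2 − x3)² + 5x3² (signature (2, 1)) and q2 = ½(x1 + x2 + 2x3)² − ½(x1 − x2)² − 2x3² (signature (1, 2)):

```python
def q1(x1, x2, x3):
    # original form
    return x1**2 + x2**2 + 2*x3**2 - 4*x1*x2 + 6*x2*x3

def q1_gauss(x1, x2, x3):
    # sum of independent squares after substitution (3.10)
    return (x1 - 2*x2)**2 - 3*(x2 - x3)**2 + 5*x3**2

def q2(x1, x2, x3):
    return 2*x1*x2 + 2*x2*x3 + 2*x1*x3

def q2_gauss(x1, x2, x3):
    # squares obtained after the change of variables (3.11)
    return 0.5*(x1 + x2 + 2*x3)**2 - 0.5*(x1 - x2)**2 - 2*x3**2

for v in [(1, 0, 0), (1, 1, 1), (2, -1, 3), (0.5, 1.5, -2.5)]:
    assert abs(q1(*v) - q1_gauss(*v)) < 1e-9
    assert abs(q2(*v) - q2_gauss(*v)) < 1e-9
```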
We obtain
q = (y1 + y2 ) y3 + (y1 − y2 ) y3 = 2y1 y3
Remark 3.2. The inner product is a bilinear form, symmetric and positive definite. For each (x, y) ∈ Rn × Rn, we have
⟨x, y⟩ = (x1 x2 ... xn) · (y1, y2, ..., yn)ᵗ = xᵗ y.
Corollary 3.2. Let A ∈ Mn (R). Then there exists a symmetric matrix B ∈ Sn (R) such that
xt Ax = xt Bx for every x ∈ Rn .
It follows that
xᵗAx = (1/2) xᵗAx + (1/2) xᵗAᵗx = xᵗ ((A + Aᵗ)/2) x.
Note that the matrix B = (A + Aᵗ)/2 is always symmetric.
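This symmetrization can be checked numerically; the sketch below (illustrative only, with an assumed non-symmetric example A = ( 1 4 ; 0 0 )) confirms that A and B = (A + Aᵗ)/2 define the same quadratic form:

```python
def quad(A, x):
    # x^t A x for a 2x2 matrix A stored as nested lists
    return sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

A = [[1.0, 4.0], [0.0, 0.0]]                   # non-symmetric example
B = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]
assert B == [[1.0, 2.0], [2.0, 0.0]]           # B = (A + A^t)/2 is symmetric
for x in [(1, 0), (0, 1), (3, -2)]:
    assert quad(A, x) == quad(B, x)            # same quadratic form
```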
Proposition 3.4. Let A ∈ Mn (R) be symmetric and let (α, x), (β, y) be two eigenpairs of A with α ≠ β. Then x and y are orthogonal, i.e., x ⊥ y, or, equivalently, ⟨x, y⟩ = 0.
Definition 3.10. Let E be a real vector space equipped with an inner product ⟨., .⟩. The couple (E, ⟨., .⟩) is said to be a real pre-Hilbert space. A real pre-Hilbert space of finite dimension is said to be a Euclidean space.
Let (E, ⟨., .⟩) be a pre-Hilbert space. The related norm is defined by
∀ x ∈ E : ‖x‖ = √⟨x, x⟩. (3.13)
3.7. PROPOSED PROBLEMS (QUADRATIC FORMS)
f (x, y) = (1/4) (q (x + y) − q (x − y)) . (3.14)
Consider a mapping q : E → K such that for all x ∈ E and λ ∈ K we have q (λx) = λ² q (x), and such that the map f : E × E → K given by (3.14) is bilinear. Show that q is the quadratic form associated with f.
Exercise 3. In the vector space R2 define the quadratic form :
In the vector space R2, relative to its canonical basis, consider the symmetric bilinear form defined by
f (x, y) = x1 y1 ,
where (x1, x2)ᵗ and (y1, y2)ᵗ are the coordinates of x and y. Calculate (Vect {e1})⊥, (Vect {e1 + e2})⊥, (Vect {e1})⊥ + (Vect {e1 + e2})⊥ and (Vect {e1} ∩ Vect {e1 + e2})⊥. What can we deduce from this ?
Exercise 6. In the vector space R3, relative to its canonical basis, consider the quadratic form defined by
q (x) = x1² + x2² + x3² − 4 (x1x2 + x1x3 + x2x3) ,
where (x1, x2, x3)ᵗ are the coordinates of x. Without using the Gauss method, find a basis of R3 which is orthogonal with respect to f, where f is the polar form of q.
Exercise 7. In the vector space E = R3, relative to its canonical basis, define the quadratic form
2. Show that the eigenvalues of a real skew-symmetric matrix are purely imaginary (use two methods).
Exercise 15. Let E be a real vector space and let q be a nondegenerate quadratic form
over E with polar form f. Let a ∈ E be a nonisotropic vector. Define the mapping :
Sa : E → E
x ↦ x − 2 (f (x, a) / q (a)) a.
2. Let x1 and x2 be two vectors of E such that q (x1) = q (x2) ≠ 0. Prove that at least one of the vectors x1 + x2 and x1 − x2 is nonisotropic (argue by contradiction).
3. Deduce that there is a nonisotropic vector a0 ∈ E such that
CHAPTER 4
Throughout this chapter, the field used is the field of complex numbers and E is a vector space over C. For example, E = Cn with n ≥ 2, Cn [x], Mn (C), and so on. The basic goal of this chapter is to define quadratic forms over a complex pre-Hilbert space of finite dimension, namely, a hermitian space.
f : C → C
z ↦ f (z) = z̄
f (αz) = \overline{αz} = ᾱ · z̄ = ᾱ f (z) .
4.1. SESQUILINEAR FORMS AND HERMITIAN QUADRATIC FORMS
Definition 4.2. A sesquilinear form is a mapping f from E² to C such that f is linear from the left and semi-linear from the right. That is, for every (x, x′, y, y′) ∈ E⁴ and λ ∈ C, one has
1. f (λx + x′, y) = λ f (x, y) + f (x′, y) ,
2. f (x, λy + y′) = λ̄ f (x, y) + f (x, y′) .
Theorem 4.1. Let B and B′ be two bases of E. Let P be the passage matrix from B to B′ and let f : E × E → C be a sesquilinear form over E. If A = Mf (B) and A′ = Mf (B′), then A′ = Pᵗ · A · P̄.
f : C × C → C
(z1, z2) ↦ z1 z̄2
\overline{f (z2, z1)} = \overline{z2 z̄1} = z̄2 z1 = z1 z̄2 = f (z1, z2) .
Theorem 4.2. Let A be a hermitian matrix, and let f : Cn × Cn → C, f (x, y) = xᵗAȳ. Then f is a hermitian sesquilinear form over Cn.
Theorem 4.3. Let f be a sesquilinear form over E. Then f is hermitian if and only if f (x, x) ∈ R
for every x ∈ E.
Proof. Assume that f is hermitian. Then by definition, f (x, y) = \overline{f (y, x)} for every x, y ∈ E. In particular, when x = y we have f (x, x) = \overline{f (x, x)} for every x ∈ E. Thus, f (x, x) ∈ R for every x ∈ E.
Conversely, assume that f (x, x) ∈ R for every x ∈ E. Then for every x, y ∈ E we also have
f (x + y, x + y) ∈ R and f (ix + y, ix + y) ∈ R.
It follows that
f (x, x) + f (y, y) + f (x, y) + f (y, x) ∈ R,
f (x, x) + f (y, y) + i [f (x, y) − f (y, x)] ∈ R.
Since f (x, x) + f (y, y) ∈ R, we put
α = f (x, y) + f (y, x) ∈ R,
β = i [f (x, y) − f (y, x)] ∈ R.
It is clear that
(α + iβ)/2 = f (y, x) and (α − iβ)/2 = f (x, y),
and so f (y, x) = \overline{f (x, y)}. This completes the proof.
Definition 4.4. Let A = (aij )1≤i,j≤n ∈ Mn (C). The matrix (aij )1≤i,j≤n is called conjugate
of A, denoted by A. The transpose conjugate matrix of A is called the adjoint of A, and
denoted by A∗ .
Note that for any matrix A ∈ Mn (C), we have A∗ = Āᵗ = \overline{Aᵗ}. That is, the conjugate transpose is the same as the transpose conjugate.
For all A, B ∈ Mn (C) and α ∈ C, the adjoint satisfies the following properties :
1. I ∗ = I,
2. (A∗ )∗ = A,
3. (A + B)∗ = A∗ + B ∗ ,
4. (αA)∗ = α · A∗ ,
5. (AB)∗ = B ∗ A∗ .
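These properties are easy to verify numerically. The sketch below is illustrative only (the helper names `adjoint` and `matmul` are not from the text); it checks (A∗)∗ = A and (AB)∗ = B∗A∗ on small complex matrices:

```python
def adjoint(A):
    # conjugate transpose A* of a square complex matrix (nested lists)
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1 + 1j, 2j], [3, 4 - 1j]]
B = [[0, 1 - 1j], [2 + 2j, 5]]
assert adjoint(adjoint(A)) == A                                  # (A*)* = A
assert adjoint(matmul(A, B)) == matmul(adjoint(B), adjoint(A))   # (AB)* = B*A*
```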
Remark 4.1. Let A ∈ Mn (C). We can easily prove that the matrices A + A∗ , AA∗ and A∗ A
are hermitian.
Proposition 4.1. The diagonal entries of a hermitian matrix A are real numbers.
Proof. Let A = (aij)1≤i,j≤n ∈ Mn (C) be a hermitian matrix. Since aij = āji for each 1 ≤ i, j ≤ n, then
aii = āii , ∀ i = 1, 2, ..., n,
and hence aii ∈ R.
Proposition 4.2. Let A and B be two hermitian matrices. Then AB is hermitian if and only if
AB = BA.
Proposition 4.3. Let A ∈ Mn (C) be skew-hermitian. The diagonal entries of A are zero or purely imaginary.
Proof. Let A = (aij)1≤i,j≤n ∈ Mn (C) be a skew-hermitian matrix. Since aij = −āji for each 1 ≤ i, j ≤ n, then
aii = −āii , ∀ i = 1, 2, ..., n,
that is, Re (aii) = 0.
4.2. HERMITIAN QUADRATIC FORMS OVER Cn
Proof. We have
(iA)∗ = iA ⇔ −iA∗ = iA ⇔ A∗ = −A.
Example 4.5 (Homework). 1. Find the complex number b for which the matrix
A = ( 0 b 0 ; b 0 1−b ; 0 b−1 0 ) , b ∈ C
is hermitian.
2. Let
A = ( 0 x y ; −x 0 z ; −y −z 0 ) , x, y, z ∈ C.
Find the complex numbers x, y, z such that (i) A∗ = A, (ii) A∗ = −A, (iii) A is unitary.
Remark 4.2. Every hermitian matrix A ∈ Mn (C) produces a hermitian quadratic form
over Cn .
4.3. GAUSS DECOMPOSITION FOR HERMITIAN FORMS
Thus, every hermitian quadratic form over Cn is given by the following matrix form 1.
q (x1, x2, ..., xn) = (x1 x2 ... xn) ( a11 a12 ... a1n ; ā12 a22 ... a2n ; ... ; ā1n ā2n ... ann ) (x̄1, x̄2, ..., x̄n)ᵗ
= xᵗ · A · x̄ ,
Theorem 4.4. Let A ∈ Mn (C). Then A is hermitian positive definite iff there exists an invertible matrix M such that
A = M̄ᵗ · M. (4.1)
Definition 4.8. Let E be a vector space over C. An inner product over E is a sesquilinear form which is hermitian and positive definite.
Thus, a vector space E over C equipped with a sesquilinear form which is hermitian and positive definite is called a pre-Hilbert space. If a pre-Hilbert space E has finite dimension, it is called a hermitian space.
z1 · z̄2 + z̄1 · z2 = (1/2) |z1 + z2|² − (1/2) |z1 − z2|² .
1. In some references x̄ᵗ · A · x is the matrix representation of a hermitian quadratic form over Cn, where x̄ᵗ · A · x = \overline{xᵗ · A · x̄}.
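The identity above can be confirmed numerically (a sketch using Python's built-in complex type; not part of the course text):

```python
def lhs(z1, z2):
    # z1 * conj(z2) + conj(z1) * z2, always a real number
    return z1 * z2.conjugate() + z1.conjugate() * z2

def rhs(z1, z2):
    # (1/2)|z1 + z2|^2 - (1/2)|z1 - z2|^2
    return 0.5 * abs(z1 + z2) ** 2 - 0.5 * abs(z1 - z2) ** 2

for z1, z2 in [(1 + 2j, 3 - 1j), (2j, 1j), (1.5 - 0.5j, -2 + 1j)]:
    assert abs(lhs(z1, z2) - rhs(z1, z2)) < 1e-9
```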
1. q1 = i x1 x̄2 − i x2 x̄1 , E = C².
2. q2 = x1 x̄1 + i x1 x̄2 − i x2 x̄1 + x2 x̄2 , E = C².
3. q3 = x1 x̄1 + a12 x1 x̄2 + a21 x2 x̄1 + a22 x2 x̄2 .
4. Deduce the signature of the quadratic form given by :
q1 = i x1 x̄2 − i x2 x̄1
   = x1 \overline{(−i x2)} + x̄1 (−i x2)   (which is of the form z1 z̄2 + z̄1 z2)
   = (1/2) |x1 − ix2|² − (1/2) |x1 + ix2|²   (since z1 z̄2 + z̄1 z2 = (1/2)|z1 + z2|² − (1/2)|z1 − z2|²)
   = |f1|² − |f2|² ,
where the linear forms f1 and f2 are independent since
| 1 −i ; 1 i | ≠ 0.
q2 = x1 x̄1 + i x1 x̄2 − i x2 x̄1 + x2 x̄2
   = (x1 − ix2) (x̄1 + i x̄2)
   = (x1 − ix2) \overline{(x1 − ix2)}
   = |x1 − ix2|²
   = |f1|² .
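A numerical check of the two hermitian decompositions above (a sketch; the ½ coefficients follow the identity z1 z̄2 + z̄1 z2 = ½|z1 + z2|² − ½|z1 − z2|²):

```python
def q1(x1, x2):
    # q1 = i x1 conj(x2) - i x2 conj(x1)
    return 1j * x1 * x2.conjugate() - 1j * x2 * x1.conjugate()

def q1_squares(x1, x2):
    return 0.5 * abs(x1 - 1j * x2) ** 2 - 0.5 * abs(x1 + 1j * x2) ** 2

def q2(x1, x2):
    # q2 = |x1|^2 + i x1 conj(x2) - i x2 conj(x1) + |x2|^2
    return (x1 * x1.conjugate() + 1j * x1 * x2.conjugate()
            - 1j * x2 * x1.conjugate() + x2 * x2.conjugate())

def q2_squares(x1, x2):
    return abs(x1 - 1j * x2) ** 2

for x1, x2 in [(1 + 0j, 1 + 0j), (1 + 2j, 3 - 1j), (2j, -1 + 1j)]:
    assert abs(q1(x1, x2) - q1_squares(x1, x2)) < 1e-9
    assert abs(q2(x1, x2) - q2_squares(x1, x2)) < 1e-9
```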
We deduce that :
• if α = 1, the signature is (1, 0) ;
• if α > 1, the signature is (2, 0) ;
• if α < 1, the signature is (1, 1) .
Finally, we have
Example 4.7. Let E = C, and let q be the Hermitian quadratic form over E given by
Example 4.8. Diagonalize the Hermitian quadratic form given by its matrix :
Mq = ( 0 1−i 0 ; 1+i 0 i ; 0 −i 0 ) .
Here, Mq is the matrix of the hermitian quadratic form q with respect to the standard basis
of C3 .
Solution. We have
q = (1 − i) x1 x̄2 + (1 + i) x2 x̄1 + i x2 x̄3 − i x3 x̄2
  = x̄2 [(1 − i) x1 − ix3] + x2 \overline{[(1 − i) x1 − ix3]}   (which is of the form z1 z̄2 + z̄1 z2)
  = (1/2) |x2 + (1 − i) x1 − ix3|² − (1/2) |x2 − (1 − i) x1 + ix3|²
  = |f1|² − |f2|² .
CHAPTER 5
SPECTRAL DECOMPOSITION OF SELF-ADJOINT LINEAR MAPPINGS
In this chapter we present a necessary and sufficient condition for a linear mapping to be normal in a complex pre-Hilbert space of finite dimension. But first, we define the inner product on a complex vector space, and then we state, without proof, the spectral decomposition theorem for self-adjoint linear mappings.
Definition 5.1. Let E be a complex vector space. An inner product on E is a function ⟨., .⟩ defined by
⟨., .⟩ : E × E → C
(x, y) ↦ ⟨x, y⟩ .
We call ⟨x, y⟩ the scalar product, or the inner product, of x and y.
Definition 5.2. Let E be a complex vector space equipped with an inner product h., .i. The
couple (E, h., .i) is said to be a complex pre-Hilbert space. A complex pre-Hilbert space
of finite dimension is said to be hermitian space.
5.2. SPECTRAL DECOMPOSITION OF SELF-ADJOINT LINEAR MAPPINGS
We can write (5.1) as : ⟨x, y⟩ = xᵗ · ȳ. In particular, for x = (x1, x2)ᵗ and y = (y1, y2)ᵗ, we have
⟨x, y⟩ = ⟨(x1, x2) , (y1, y2)⟩ = x1 ȳ1 + x2 ȳ2.
Lemma 5.2. Every real symmetric matrix A ∈ Mn (R) can be represented in the form :
A = Pᵗ · D · P, (5.3)
where P is orthogonal and D is diagonal whose diagonal entries (∈ R) are the eigenvalues of A.
1. Sometimes we use the notation ᵗx · ȳ instead of xᵗ · ȳ.
From the above lemma, we deduce that every symmetric positive definite matrix A can be written as A = Mᵗ · M, where M = √D P is invertible.
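A sketch verifying A = MᵗM on the earlier example A = ( 2 −2 ; −2 5 ), whose eigenvalues are 1 and 6 (both positive). In this illustration P stores the orthonormal eigenvectors as columns, so A = P D Pᵗ and M = √D Pᵗ — an assumed convention of the sketch:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

s = 1 / math.sqrt(5)
P = [[2 * s, s], [s, -2 * s]]             # columns: orthonormal eigenvectors
D = [[1.0, 0.0], [0.0, 6.0]]              # eigenvalues on the diagonal
sqrtD = [[1.0, 0.0], [0.0, math.sqrt(6.0)]]
M = matmul(sqrtD, transpose(P))           # M = sqrt(D) P^t, invertible
MtM = matmul(transpose(M), M)             # should reproduce A
A = [[2.0, -2.0], [-2.0, 5.0]]
assert all(abs(MtM[i][j] - A[i][j]) < 1e-9 for i in range(2) for j in range(2))
```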
Definition 5.4. Let f ∈ L (E). The adjoint (or the hermitian conjugate) of f is the mapping f∗ ∈ L (E) satisfying
⟨f (u) , v⟩ = ⟨u, f∗ (v)⟩ ,
Theorem 5.1. Let A ∈ Mn (C) be a hermitian matrix (resp. a self-adjoint mapping). Then x̄ᵗAx ∈ R for each x ∈ Cn.
Proof. The scalar a = x̄ᵗAx satisfies
ā = \overline{x̄ᵗAx} = xᵗ Ā x̄ = (xᵗ Ā x̄)ᵗ = x̄ᵗ Āᵗ x = x̄ᵗ A∗ x = x̄ᵗ A x = a   (since A∗ = A),
so x̄ᵗAx ∈ R.
Alternatively, write A = (aij)1≤i,j≤n, where aii ∈ R for 1 ≤ i ≤ n and aij = āji for i ≠ j, because the matrix A is hermitian.
Therefore,
x̄ᵗAx = Σi,j aij x̄i xj
     = Σi aii x̄i xi + Σi≠j aij x̄i xj
     = Σi aii |xi|² + Σi<j (aij x̄i xj + aji x̄j xi)
     = Σi aii |xi|² + Σi<j (aij x̄i xj + \overline{aij x̄i xj})   (since aji = āij)
     = Σi aii |xi|² + 2 Σi<j Re (aij x̄i xj) ∈ R.
Theorem 5.2. The eigenvalues of a hermitian matrix (resp. self-adjoint mapping) are real numbers.
Proof. Let (λ, x) be an eigenpair of a hermitian matrix A (note that x ≠ 0) 2. We can write
λ x̄ᵗx = x̄ᵗAx ∈ R   (by Theorem 5.1),
with x̄ᵗx = ‖x‖² > 0. Thus, λ = λ̄ and so λ ∈ R.
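The reality of the eigenvalues can be illustrated on a concrete 2×2 hermitian matrix (an assumed example, not from the text): A = ( 2 1−i ; 1+i 3 ) has characteristic polynomial t² − 5t + (6 − |1 − i|²) = t² − 5t + 4, whose roots are real:

```python
import cmath

# characteristic polynomial t^2 - 5t + (6 - |1 - i|^2) of the hermitian
# matrix [[2, 1 - i], [1 + i, 3]]
a, b, c = 1, -5, 6 - abs(1 - 1j) ** 2
disc = b * b - 4 * a * c              # positive discriminant -> real roots
r1 = (-b + cmath.sqrt(disc)) / 2
r2 = (-b - cmath.sqrt(disc)) / 2
assert disc > 0
assert abs(r1.imag) < 1e-9 and abs(r2.imag) < 1e-9   # both eigenvalues real
```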
2. The eigenvectors are always nonzero.
5.3. PROPOSED PROBLEMS
We finish this subsection with a simple comparison between linear algebra and sesquilinear algebra.

Linear                                   | Sesquilinear
f is bilinear                            | f is sesquilinear
f is bilinear symmetric                  | f is sesquilinear hermitian
q is a quadratic form                    | q is a hermitian quadratic form
Euclidean space                          | Hermitian space
Symmetric matrix                         | Hermitian matrix
Anti-symmetric (skew-symmetric) matrix   | Anti-hermitian (skew-hermitian) matrix
Orthogonal matrix                        | Unitary matrix
...                                      | ...
• q (x + y) + q (x − y) = 2q (x) + 2q (y) ,
• q (αx + βy) = |α|² q (x) + 2 Re (α β̄ f (x, y)) + |β|² q (y) .
f (x, y) = 3x1 ȳ1 + 2i x1 ȳ2 − 5i x1 ȳ3 + (2 + i) x2 ȳ1 − 7x2 ȳ2 + x2 ȳ3 + i x3 ȳ1 − x3 ȳ2 + (1 − i) x3 ȳ3 .
5 3 + 2i 4
Exercise 6. Show that the product of two hermitian matrices A and B is a Hermitian
matrix if and only if AB = BA.
Exercise 7. Let A be the hermitian matrix :
A = ( 1 1+i 2i ; 1−i 4 2−3i ; −2i 2+3i 7 ) .
Find an invertible matrix P such that Pᵗ · A · P̄ is diagonal. Deduce the rank and signature of A.
Exercise 8. Let A be an invertible complex matrix. Show that the matrix Āᵗ A is hermitian positive definite.
Exercise 9. Let q be a hermitian quadratic form on E with polar form f and let x be an isotropic vector for q.
1. Show that if q is definite then f is nondegenerate.
2. Show that for all y ∈ E and λ ∈ C, we have
4. Using the previous questions, show that if q is positive and f is nondegenerate, then
q is definite.
Exercise 10. Let A be an invertible complex matrix. Show that if A is hermitian then A⁻¹ is also hermitian.
Exercise 11. A complex matrix A is said to be anti-hermitian if Āᵗ = −A. Show that the matrix A is anti-hermitian if and only if iA is hermitian.
Exercise 12. Give a Gaussian decomposition of the hermitian quadratic forms of C3 whose matrices in the canonical basis are
A = ( 1 1−i 0 ; 1+i 3 i ; 0 −i 1 ) and B = ( 0 −i i ; i 0 −i ; −i i 0 ) .
where a is a real number and (x1, x2, x3)ᵗ are the coordinates of x in the canonical basis of C3.
1. Give the matrix of q in the canonical basis as well as its polar form f .
2. Using the Gauss method, decompose q into the sum of squares of modules of inde-
pendent linear forms.
3. Deduce an orthogonal basis of C3 relative to f and give the matrix of q in the new
basis.
4. Discuss according to the values of a the rank, signature and kernel of q.
CHAPTER 6
The present chapter consists of detailed solutions to some exercises and problems related to symmetric bilinear forms and quadratic forms. These problems were the subject of previous tutorials (TDs) at the department of mathematics.
Exercise 01. Find the corresponding symmetric matrix of each of the following qua-
dratic forms :
1
1
1 0
2
0
1 1
2
2) A = 1 , 3) A =
2 1 2
1 1
2 0 −2
2
4) A = ( 2 1 1 ; 1 2 1 ; 1 1 2 ) , 5) A = ( 2 1 1 0 ; 1 2 1 0 ; 1 1 2 0 ; 0 0 0 0 ) .
q : R2 → R
(x, y) 7→ x2 − y 2 .
Solution.
1. We know that
1
f (u, v) = (q (u + v) − q (u − v)) ,
4
where u = (x, y) and v = (x0 , y 0 ) ∈ R2 ; i.e.,
f : R2 × R2 → R
(u, v) 7→ f (u, v) .
Then
C = {(x, y) ∈ R² ; q (x, y) = 0}
  = {(x, y) ∈ R² ; x² − y² = 0}
  = {(x, y) ∈ R² ; (x − y) (x + y) = 0}
  = {(x, y) ∈ R² ; y = x or y = −x} .
Thus, f (or q) is nondegenerate.
Exercise 03. Let f ∈ S2 (E), and let q be the associated quadratic form. Let x0 ∈ E with
q (x0 ) 6= 0. Setting
(
F : is the subspace generated (spanned) by x0 ,
G = {y ∈ E ; f (x0 , y) = 0} .
Prove that E = F ⊕ G.
Solution. At first, we check that F ∩ G = {0E}.
Let u ∈ F ∩ G. Since u ∈ F, u = kx0 for some scalar k ∈ K. Since u ∈ G, then f (x0, kx0) = k f (x0, x0) = 0. But f (x0, x0) ≠ 0, so k = 0. This gives u = 0. Thus, F ∩ G ⊂ {0E}.
Second, we prove that E = F + G. Let x ∈ E and write
x = (f (x0, x) / f (x0, x0)) · x0 + ( x − (f (x0, x) / f (x0, x0)) · x0 ) = u + v,
where u ∈ F (since u is of the form λx0 with λ = f (x0, x) / f (x0, x0) ∈ R). Likewise, since
f (x0, v) = f ( x0 , x − (f (x0, x) / f (x0, x0)) · x0 ) = f (x0, x) − f (x0, x) = 0,
then v ∈ G. Thus, we have shown that F ∩G = {0E } and F +G = E, and hence E = F ⊕G.
Exercise 04. Show that for every A ∈ Mn (R) and every x ∈ Rn,
xᵗAx = xᵗ ((A + Aᵗ)/2) x.
Solution. Since xᵗAx is a scalar, it equals its own transpose xᵗAᵗx, and hence
xᵗAx = (1/2) xᵗAx + (1/2) xᵗAx = (1/2) xᵗAx + (1/2) xᵗAᵗx = xᵗ ((A + Aᵗ)/2) x.
Exercise 05. Consider the bilinear form
f : R² × R² → R
(u, v) ↦ f (u, v) = uᵗAv,
where u = (x1, x2), v = (y1, y2) ∈ R² and A = ( 1 2 ; 2 3 ). Hence,
f (u, v) = (x1 x2) ( 1 2 ; 2 3 ) (y1, y2)ᵗ = x1y1 + 2x1y2 + 2x2y1 + 3x2y2 .
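A small check (a sketch, not part of the course text) that the matrix form and the expanded expression agree:

```python
def f(u, v):
    # u^t A v with A = [[1, 2], [2, 3]]
    A = [[1, 2], [2, 3]]
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

def f_expanded(u, v):
    # x1 y1 + 2 x1 y2 + 2 x2 y1 + 3 x2 y2
    x1, x2 = u
    y1, y2 = v
    return x1 * y1 + 2 * x1 * y2 + 2 * x2 * y1 + 3 * x2 * y2

for u in [(1, 0), (0, 1), (2, -3)]:
    for v in [(1, 1), (-1, 4)]:
        assert f(u, v) == f_expanded(u, v)
```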
Exercise 06. Let x = (x1, x2)ᵗ ∈ R². Show that there are infinitely many matrices A ∈ M2 (R) such that
xᵗAx = xᵗ ( 1 4 ; 0 0 ) x, (6.1)
where x ∈ R².
Solution. Let n ∈ N. Using the symmetrization identity xᵗAx = xᵗ ((A + Aᵗ)/2) x, we have
xᵗ ( 1 4 ; 0 0 ) x = xᵗ · (1/2) [ ( 1 4 ; 0 0 ) + ( 1 0 ; 4 0 ) ] · x
= xᵗ ( 1 2 ; 2 0 ) x   (in this case, the matrix is symmetric)
= xᵗ ( 1 n ; 4−n 0 ) x,
since the two off-diagonal entries enter only through their sum n + (4 − n) = 4.
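Numerically (a sketch), all the matrices ( 1 n ; 4−n 0 ) represent the same quadratic form x1² + 4x1x2:

```python
def quad(A, x):
    # x^t A x for a 2x2 matrix A stored as nested lists
    return sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

target = [[1, 4], [0, 0]]
for n in range(-5, 6):
    A_n = [[1, n], [4 - n, 0]]   # off-diagonal entries always sum to 4
    for x in [(1, 0), (0, 1), (2, 3), (-1, 5)]:
        assert quad(A_n, x) == quad(target, x)
```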
f : R3 × R3 → R
((x1 , x2 , x3 ) , (y1 , y2 , y3 )) 7→ x1 y1 + x2 y2 − x3 y3 .
1st Method. Note that x + y ∈ F for all x, y ∈ F, and that every element of F is isotropic, since q (x) = x1² + 0 − x1² = 0 on F. Hence, for all x, y ∈ F,
0 = f (x + y, x + y) = f (x, x) + f (y, y) + 2f (x, y) = 2f (x, y) .
Hence, F ⊂ F ⊥ .
2nd Method. By Definition 3.5, we can compute F ⊥ as follows :
F⊥ = { y ∈ R³ ; f (x, y) = 0, ∀ x ∈ F } .
Since
F = {(x1, x2, x3) ∈ R³ ; x1 = x3 and x2 = 0} = Vect {(1, 0, 1)} ,
and f ((1, 0, 1), (1, 0, 1)) = 1 + 0 − 1 = 0, then clearly F ⊂ F⊥.
Exercise 8.
1. Using Gauss’ Method, diagonalize the following two quadratic forms :
a. q1 = x1² + x2² + 2x3² − 4x1x2 + 6x2x3
b. q2 = 2x1x2 + 2x2x3 + 2x1x3 .
Then, determine their associated signatures.
2. Diagonalize the following quadratic form (use two methods).
Solution.
a. Using Gauss's method, we put
x1 = y1 − (1/a11) (a12 y2 + ... + a1n yn)
x2 = y2
...
xn = yn .
That is,
x1 = y1 + 2y2
x2 = y2
x3 = y3 .
This implies
q1 = y1² − 3y2² + 2y3² + 6y2y3 .
Finally, we obtain
q1 = (x1 − 2x2)² − 3 (x2 − x3)² + 5x3² ,
where the linear forms involved are independent since
| 1 −2 0 ; 0 0 5 ; 0 1 −1 | = −5 ≠ 0.
We obtain
q2 = 2y1² − 2y2² + 4y1y3 .
It follows that
q2 = 2 (y1 + y3)² − 2y2² − 2y3² .
Hence,
q2 = (1/2) (x1 + x2 + 2x3)² − (1/2) (x1 − x2)² − 2x3² ,
where the corresponding linear forms are independent since
| 1/2 1/2 1 ; 1 −1 0 ; 0 0 −2 | = 2 ≠ 0.
We obtain
| 1 −6/5 ; 1 0 | ≠ 0.
Then we can write A in the form P DP t , where P is orthogonal and D is diagonal whose
diagonal entries are the eigenvalues of A. The eigenpairs of A are
λ1 = −4, v1 = (3, 2),
λ2 = 9, v2 = (−2, 3).
Therefore,
P = ( v1/‖v1‖2  v2/‖v2‖2 ) = ( 3/√13 −2/√13 ; 2/√13 3/√13 ) ,
which gives
q (x1 , x2 ) = xt Ax
= xt P DP t x (since A = P DP t )
t
= P tx D P tx
= v t Dv, where v = P t x.
It follows that
v = Pᵗx = ( 3/√13 2/√13 ; −2/√13 3/√13 ) (x1, x2)ᵗ = ( (3x1 + 2x2)/√13 , (3x2 − 2x1)/√13 )ᵗ = (v1, v2)ᵗ .
That is,
q = (v1 v2) ( −4 0 ; 0 9 ) (v1, v2)ᵗ = 9v2² − 4v1²
  = 9 ( (3x2 − 2x1)/√13 )² − 4 ( (3x1 + 2x2)/√13 )²
  = (9/13) (3x2 − 2x1)² − (4/13) (3x1 + 2x2)² .
Solution.
1. We first prove that f is a bilinear form on the space E. Indeed, ∀ A, A0 , B ∈ M2 (R),
∀ λ ∈ R we have
f (λA + A′, B) = tr ((λA + A′)ᵗ M B)
             = tr (λAᵗM B + (A′)ᵗM B)
             = λ tr (AᵗM B) + tr ((A′)ᵗM B)
             = λ f (A, B) + f (A′, B) .
f (e1, e1) = tr (e1ᵗ M e1)
          = tr ( ( 1 0 ; 0 0 ) ( 1 2 ; 3 5 ) ( 1 0 ; 0 0 ) )
          = tr ( 1 0 ; 0 0 )
          = 1.
Exercise 10. Recall that a bilinear form on a vector space E is called alternating form if
and only if
∀ x ∈ E, f (x, x) = 0.
such that
f (u1 , u2 ) = 1.
Calculate f (u2 , u1 ).
3. Let U be the vector subspace spanned by u1 and u2. Verify that {u1, u2} is a basis of U. Write the associated matrix of f in this basis.
4. Setting
W = {w ∈ E; f (w, u) = 0, ∀ u ∈ U } = U ⊥ .
Prove that E = U ⊕ W and deduce that there exists a basis B of the vector space E
for which
          ( 0  1  ×  ···  ×  ×
           −1  0  ×  ···  ×  ×
            ×  ×  ⋱
Mf (B) =    ⋮        0  1
            ⋮       −1  0  ×
            ×  ×  ···  ⋱  ×
            ×  ×  ···  ×  ×  0 ) ∈ Mn (R).
Solution.
1. For each (x, y) ∈ E² we have
0 = f (x + y, x + y)   (since f is alternating)
  = f (x, x) + f (y, y) + f (x, y) + f (y, x) = f (x, y) + f (y, x) ,
so that f (y, x) = −f (x, y).
3. Let U be the vector subspace generated by u1 and u2. We prove that u1 and u2 are linearly independent. By way of contradiction, if we put u2 = ku1, then f (u1, u2) = k f (u1, u1) = 0, which contradicts f (u1, u2) = 1.
where α, β ∈ R. Hence,
α f (u1, u2) = 0 ⇒ α = 0,
β f (u2, u1) = 0 ⇒ β = 0.
Then x = 0. Therefore, U ∩ W = {0E }.
It remains to be shown that E = U + W. For each x ∈ E, setting
u = f (x, u2 ) u1 − f (x, u1 ) u2
f (x − u, u1 ) = f (x − f (x, u2 ) u1 + f (x, u1 ) u2 , u1 )
= f (x, u1 ) − f (x, u1 )
= 0.
f (x − u, u2 ) = f (x − f (x, u2 ) u1 + f (x, u1 ) u2 , u2 )
= f (x, u2 ) − f (x, u2 )
= 0.
Thus, u1 , u2 , ..., un is a basis of E for which the matrix representing f has the desired form.
Exercise 11. Let E be a vector space over R with dimension 2. Let f ∈ S2 (E), and let q
be the associated quadratic form. Prove that the following three statements are equivalent :
a. f is nondegenerate and there is a nonzero vector e1 such that q (e1 ) = 0.
b. There exists a basis of E for which the matrix of f is given by
A = ( 0 1 ; 1 0 ) .
c. There exists a basis of E for which the matrix of f is given by
D = ( 1 0 ; 0 −1 ) .
Solution. (a) ⇒ (b). Since f is nondegenerate and e1 ≠ 0, there exists a vector y ∈ E such that
f (e1, y) ≠ 0.
We put
z = (1 / f (e1, y)) y,
so we get
f (e1, z) = f (e1, (1 / f (e1, y)) y) = 1.
For the vector e2 = z − (1/2) q (z) e1, we find
f (e1, e2) = f (e1, z − (1/2) q (z) e1) = 1 = f (e2, e1),
f (e2, e2) = 0.
The family {e1, e2} is a basis of E ; otherwise e2 = ke1, which would give f (e1, e2) = k q (e1) = 0, a contradiction. Here, the matrix
of f is given by
A = ( 0 1 ; 1 0 ) .
(b) ⇒ (c). Keeping the previous notations, consider the vectors
e′1 = (1/2) e1 + e2 ,
e′2 = (1/2) e1 − e2 .
The family {e′1, e′2} is a basis of E ; otherwise we would get e′2 = αe′1 with α ∈ R, which is impossible.
(c) ⇒ (a). Keeping the previous notations, the quadratic form is nondegenerate since the matrix D is invertible. For the nonzero vector v′ = e′1 + e′2, we have
q (v′) = q (e′1) + 2f (e′1, e′2) + q (e′2) = 1 + 0 − 1 = 0.
Exercise 12. Let E be a real vector space and let a ∈ E. Let q be a quadratic form over E with the polar form f. Define the mapping q′ from E to R by setting :
q′ (x) = q (a) q (x) − f (a, x)² .
Solution.
1. We see that q′ is a quadratic form because the mapping
f′ : E × E → R
(x, y) ↦ q (a) f (x, y) − f (a, x) f (a, y)
is bilinear symmetric with f′ (x, x) = q′ (x). Moreover, for every y ∈ E,
f′ (a, y) = q (a) f (a, y) − f (a, a) f (a, y) = 0.
Hence, a ∈ ker f′.
We show that ker f ⊂ ker f′. In fact, if x ∈ ker f, then for each y ∈ E we have
f′ (x, y) = q (a) f (x, y) − f (a, x) f (a, y) = 0 − 0 = 0.
Thus, x ∈ ker f′.
Therefore, Ra ⊂ ker f′, since for every λ ∈ R and y ∈ E, f′ (λa, y) = λ f′ (a, y) = 0.
where f (a, x) a ∈ Ra. It suffices to prove that q (a) x − f (a, x) a ∈ ker f. In fact, for each
y ∈ E, we have
2. Let (E, h., .i) be a Hilbert space and let {e1 , e2 , ..., en } be an orthonormal basis of E.
Prove that
∀ x ∈ E : x = Σᵢ₌₁ⁿ ⟨x, ei⟩ ei .
3. Let A = {u1, u2, ..., un} be a finite orthonormal set. Show that A is linearly independent (free). Further, for each x ∈ E prove that the vector
for i = 1, 2, ..., n. Replacing αi by ⟨x, ei⟩ in equation (6.4), we obtain the desired result.
α1u1 + α2u2 + ... + αnun = 0
implies
0 = ⟨0, ui⟩ = ⟨α1u1 + α2u2 + ... + αnun, ui⟩ = αi , ∀ i = 1, 2, ..., n.
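The expansion x = Σ ⟨x, ei⟩ ei can be checked in R² with a rotated orthonormal basis (an assumed example, not from the text):

```python
def dot(x, y):
    # real inner product on R^2
    return sum(a * b for a, b in zip(x, y))

# an orthonormal basis of R^2: a rotation of the canonical basis
e1 = (0.6, 0.8)
e2 = (-0.8, 0.6)
x = (3.0, -1.0)
# reconstruct x from its coordinates <x, e1> and <x, e2>
recon = tuple(dot(x, e1) * a + dot(x, e2) * b for a, b in zip(e1, e2))
assert all(abs(recon[i] - x[i]) < 1e-9 for i in range(2))
```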
Exercise 14. Let q be a quadratic form over Rn which has the matrix A in the standard basis, and let λmax be the greatest eigenvalue of A. Prove the following inequality :
q (x) ≤ λmax (x1² + x2² + ... + xn²) , for every x ∈ Rn.
Solution. Assume first that ‖x‖2 = 1, and let {u1, u2, ..., un} be an orthonormal basis formed by the eigenvectors of A. We have
x = α1u1 + α2u2 + ... + αnun
with α1² + α2² + ... + αn² = 1, since ‖x‖2² = ⟨x, x⟩ = 1. In this case, we can write
q (x) = xt Ax
= (α1 u1 + α2 u2 + ... + αn un )t A (α1 u1 + α2 u2 + ... + αn un )
= α12 ut1 Au1 + α22 ut2 Au2 + ... + αn2 utn Aun
= λ1 α12 ut1 u1 + λ2 α22 ut2 u2 + ... + λn αn2 utn un (since Aui = λi ui , i = 1, 2, ..., n)
≤ λmax (α1² u1ᵗu1 + α2² u2ᵗu2 + ... + αn² unᵗun) = λmax .
For an arbitrary nonzero x, we apply this bound to
u = x / ‖x‖2 ; i.e., ‖u‖2 = 1.
Therefore,
q (x) ≤ λmax ‖x‖2² = λmax (x1² + x2² + ... + xn²) .
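The inequality can be sanity-checked on the earlier example A = ( 2 −2 ; −2 5 ), whose largest eigenvalue is 6 (a numerical sketch, not part of the solution):

```python
def q(x1, x2):
    # quadratic form with matrix A = [[2, -2], [-2, 5]]; eigenvalues 1 and 6
    return 2 * x1 * x1 - 4 * x1 * x2 + 5 * x2 * x2

lam_max = 6
for x in [(1, 0), (0, 1), (3, -2), (1.5, 2.5), (-4, 7)]:
    # q(x) <= lam_max * ||x||^2 for every test vector
    assert q(*x) <= lam_max * (x[0] ** 2 + x[1] ** 2) + 1e-9
```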
f : Cn × Cn → C
(x, y) ↦ xᵗAȳ .
Notice that if f : E × E → C is linear on the left and semilinear on the right, then f is called a "sesquilinear form" ; i.e., ∀ x, x′, y, y′ ∈ E, ∀ λ ∈ C :
f : Cn × Cn → C
(x, y) ↦ xᵗAȳ
f (λx + x′, y) = (λx + x′)ᵗ A ȳ
             = λ xᵗAȳ + (x′)ᵗAȳ
             = λ f (x, y) + f (x′, y) .
Thus, f is linear from the left. Similarly, for every x, y, y′ ∈ Cn and λ ∈ C, we have
f (x, λy + y′) = xᵗA \overline{(λy + y′)}
             = λ̄ xᵗAȳ + xᵗAȳ′
             = λ̄ f (x, y) + f (x, y′) .
\overline{f (x, y)} = \overline{xᵗAȳ}   (since xᵗAȳ ∈ C)
                  = x̄ᵗ Ā y
                  = (x̄ᵗ Ā y)ᵗ
                  = yᵗ Āᵗ x̄
                  = yᵗ A∗ x̄
                  = yᵗ A x̄   (since A is hermitian)
                  = f (y, x) .
In fact, we have
∀ x, y, y′ ∈ E, ∀ α, β ∈ C : f(x, αy + βy′) = (f(αy + βy′, x))‾
= (αf(y, x) + βf(y′, x))‾
= ᾱ(f(y, x))‾ + β̄(f(y′, x))‾
= ᾱf(x, y) + β̄f(x, y′).
f : C2 × C2 → C
(x, y) ↦ 4x1ȳ1 + (2 − i)x1ȳ2 + (2 + i)x2ȳ1 − 5x2ȳ2.
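Assuming NumPy as an illustration tool, the three properties just established (hermitian symmetry, linearity on the left, semilinearity on the right) can be checked numerically for the displayed form, whose matrix is A = (4, 2 − i; 2 + i, −5):

```python
import numpy as np

rng = np.random.default_rng(2)

# Matrix of the displayed form: f(x, y) = x^t A conj(y).
A = np.array([[4, 2 - 1j], [2 + 1j, -5]])
assert np.allclose(A, A.conj().T)  # A is hermitian

def f(x, y):
    return x @ A @ y.conj()

x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
lam = 1.5 - 0.5j

print(np.isclose(f(x, y), np.conj(f(y, x))))              # hermitian symmetry: True
print(np.isclose(f(lam * x, y), lam * f(x, y)))           # linear on the left: True
print(np.isclose(f(x, lam * y), np.conj(lam) * f(x, y)))  # semilinear on the right: True
```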
Exercise 17.
1. Diagonalize the following Hermitian quadratic forms:
i) q1 = ix1x̄2 − ix2x̄1, E = C2.
ii) q2 = x1x̄1 + ix1x̄2 − ix2x̄1 + x2x̄2, E = C2.
iii) q3 = x1x̄1 + a12x1x̄2 + a21x2x̄1 + a22x2x̄2.
2. Deduce the signature of the quadratic form given by q2′ = αx1x̄1 + ix1x̄2 − ix2x̄1 + x2x̄2, where α ∈ R.
Solution.
– We can write
q1 = ix1x̄2 − ix2x̄1
= x1(ix̄2) + x̄1(−ix2)
= x1(−ix2)‾ + x̄1(−ix2)   (which is of the form z1z̄2 + z̄1z2)
= ½|x1 − ix2|² − ½|x1 + ix2|²   (since z1z̄2 + z̄1z2 = ½|z1 + z2|² − ½|z1 − z2|²)
= |f1|² − |f2|²,
where f1 = (1/√2)(x1 − ix2) and f2 = (1/√2)(x1 + ix2). These two linear forms are independent, since
| 1  −i |
| 1   i | = 2i ≠ 0.
– Likewise, we have
q2 = x1x̄1 + ix1x̄2 − ix2x̄1 + x2x̄2
= (x1 − ix2)(x̄1 + ix̄2)
= (x1 − ix2)(x1 − ix2)‾
= |x1 − ix2|²
= |f1|².
For the quadratic form q2′ = αx1x̄1 + ix1x̄2 − ix2x̄1 + x2x̄2, α ∈ R, we see that
q2′ = (α − 1)x1x̄1 + q2
= (α − 1)|x1|² + |x1 − ix2|².
We deduce that:
– if α = 1, the signature is (1, 0);
– if α > 1, the signature is (2, 0);
– if α < 1, the signature is (1, 1).
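The signature cases for q2′ can be cross-checked by counting the signs of the eigenvalues of the associated hermitian matrix; the matrix convention below (entry (j, k) multiplies xj x̄k) is an assumption consistent with this section:

```python
import numpy as np

def signature(alpha):
    # Hermitian matrix of q2' = alpha*x1*conj(x1) + i*x1*conj(x2)
    #                           - i*x2*conj(x1) + x2*conj(x2)
    A = np.array([[alpha, 1j], [-1j, 1]])
    ev = np.linalg.eigvalsh(A)  # real eigenvalues of a hermitian matrix
    return (int((ev > 1e-12).sum()), int((ev < -1e-12).sum()))

print(signature(1.0))  # (1, 0)
print(signature(2.0))  # (2, 0)
print(signature(0.0))  # (1, 1)
```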
– We have
Exercise 18. Let E be a vector space of dimension 3, and let q be the hermitian quadratic form over E given by
Exercise 19. Diagonalize the Hermitian quadratic form given by its matrix:
     ( 0      1 − i   0 )
Mq = ( 1 + i  0       i )
     ( 0      −i      0 )
Here, Mq is the matrix of the Hermitian quadratic form q with respect to the standard basis of C3.
Solution. We have
q = (1 − i)x1x̄2 + (1 + i)x2x̄1 + ix2x̄3 − ix3x̄2
= x2[(1 + i)x̄1 + ix̄3] + x̄2[(1 − i)x1 − ix3]
= x2[(1 − i)x1 − ix3]‾ + x̄2[(1 − i)x1 − ix3]   (which is of the form z1z̄2 + z̄1z2)
= ½|x2 + (1 − i)x1 − ix3|² − ½|x2 − (1 − i)x1 + ix3|²
= |f1|² − |f2|².
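A quick numerical check of this diagonalization (a NumPy sketch; the convention q(x) = xᵗ Mq x̄ is assumed):

```python
import numpy as np

rng = np.random.default_rng(3)

Mq = np.array([[0, 1 - 1j, 0],
               [1 + 1j, 0, 1j],
               [0, -1j, 0]])

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x1, x2, x3 = x

# q(x) = x^t Mq conj(x); Mq is hermitian, so the value is real
q = (x @ Mq @ x.conj()).real
diag = 0.5 * abs(x2 + (1 - 1j) * x1 - 1j * x3) ** 2 \
     - 0.5 * abs(x2 - (1 - 1j) * x1 + 1j * x3) ** 2
print(np.isclose(q, diag))  # True
```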
Exercise 20. Let
B = ( 1    0        b      )
    ( 0    a + i    ā      )
    ( b̄    b + 1    b − ai )
For which values of the parameters a and b is the matrix B Hermitian? Then find its Hermitian quadratic form.
Solution. The matrix B is Hermitian if and only if B̄ᵗ = B, that is,
( 1    0        b      )   ( 1    0        b      )
( 0    a + i    ā      ) = ( 0    ā − i    b̄ + 1  )
( b̄    b + 1    b − ai )   ( b̄    a        b̄ + āi )
⇔  a + i ∈ R,  b + 1 = a,  b − ai ∈ R
⇔  a = α − i where α ∈ R,  b = α − 1 − i,  α − 1 − i − (α − i)i ∈ R
⇔  a = α − i where α ∈ R,  b = (α − 1) − i,  (α − 2) − (1 + α)i ∈ R
⇔  α = −1,  a = −1 − i,  b = −2 − i.
Therefore,
B = ( 1       0        −2 − i )
    ( 0       −1       −1 + i )
    ( −2 + i  −1 − i   −3     ).
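Since the placement of the conjugates in B is reconstructed here, it is worth verifying numerically that the solution's values do give a Hermitian matrix:

```python
import numpy as np

a = -1 - 1j
b = -2 - 1j

# B with the solution's values a = -1 - i, b = -2 - i
# (the bar placement in the entries is an assumption matching the text)
B = np.array([[1, 0, b],
              [0, a + 1j, np.conj(a)],
              [np.conj(b), b + 1, b - a * 1j]])
print(np.allclose(B, B.conj().T))  # True: B is Hermitian
```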
Exercise 21. Consider the quadratic form
q = xᵗAx
                      ( a11  a12  ···  a1n ) ( x1 )
  = (x1, x2, ..., xn) ( a12  a22  ···  a2n ) ( x2 )
                      (  ⋮    ⋮    ⋱    ⋮  ) (  ⋮ )
                      ( a1n  a2n  ···  ann ) ( xn ).
Since A is symmetric, A = PDPᵗ with P orthogonal and D = diag(λ1, λ2, ..., λn). It follows that
q = xᵗAx = xᵗPDPᵗx = (Pᵗx)ᵗ D (Pᵗx).
Setting
Pᵗx = v = (v1, v2, ..., vn)ᵗ,
¹ Use the following well-known formula: ∫₋∞^{+∞} e^{−t²} dt = √π.
this implies
q = vᵗDv
= (v1, v2, ..., vn) diag(λ1, λ2, ..., λn) (v1, v2, ..., vn)ᵗ
= λ1v1² + λ2v2² + ... + λnvn²,
where (λi)i=1,2,...,n are the eigenvalues of A. Further, suppose that q is positive definite, i.e., λi > 0 for every i = 1, 2, ..., n. Then
I = ∫...∫Rn e^{−q(x1,x2,...,xn)} dx1dx2...dxn
= αJ ∫...∫Rn e^{−(λ1v1² + λ2v2² + ... + λnvn²)} dv1dv2...dvn, where αJ ∈ R*
= (αJ / √(λ1λ2...λn)) (∫₋∞^{+∞} e^{−t²} dt)ⁿ
= (αJ / √(λ1λ2...λn)) (√π)ⁿ.
Note that
dv1dv2...dvn = (1/αJ) dx1dx2...dxn.
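For a concrete check of the resulting formula I = (√π)ⁿ / √(λ1···λn) = π^(n/2)/√det(A) (with αJ = 1 for an orthogonal change of variables), one can compare it with a crude Riemann sum in dimension n = 2; this is an illustrative sketch, not part of the exercise:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
assert np.all(np.linalg.eigvalsh(A) > 0)  # A is positive definite

# Closed form: integral over R^2 of exp(-x^t A x) = pi / sqrt(det A)
exact = np.pi / np.sqrt(np.linalg.det(A))

# Riemann-sum approximation on [-6, 6]^2 (tails are negligible there)
h = 0.01
t = np.arange(-6, 6, h)
X, Y = np.meshgrid(t, t)
q = A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2
approx = np.exp(-q).sum() * h * h

print(abs(approx - exact) < 1e-3)  # True
```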
Exercise 22. Let q = xᵗAx be a quadratic form over the vector space Rn, with polar form f. Prove that f is non-degenerate (i.e., ker f = {0}) if and only if A ∈ GLn(R).
Solution. By Definition 3.6, recall that ker f = {x ∈ E : xᵗAy = 0 for each y ∈ E}. Then
ker f = {0} ⇔ (∀ y ∈ Rn : xᵗAy = 0 ⇒ x = 0)
⇔ (∀ y ∈ Rn : yᵗAᵗx = 0 ⇒ x = 0)
⇔ (Aᵗx = 0 ⇒ x = 0)   (since ∀ y ∈ Rn : yᵗAᵗx = 0 ⇔ Aᵗx = 0)
⇔ Aᵗ ∈ GLn(R)
⇔ A ∈ GLn(R).
Exercise 23. Let E = R2[x] be the vector space of polynomials of degree ≤ 2, and let
Q : E → R
p ↦ p(0)p(1).
1. Prove that Q is a quadratic form, and then give its polar form f.
2. Determine MQ(B), where B is the canonical basis of E.
3. Prove that f is degenerate. Is it positive? positive definite? negative? negative definite?
Solution.
1. By a simple computation, the polar form of Q is given by
f : R2[x] × R2[x] → R
(p, q) ↦ f(p, q) = ¼ (Q(p + q) − Q(p − q)) = ½ p(0)q(1) + ½ p(1)q(0).
2. We calculate the matrix MQ(B), where B = {1, x, x²} is the canonical basis of R2[x]. We have
f(1, 1) = 1, f(1, x) = ½, f(1, x²) = ½, f(x, x) = 0, f(x, x²) = 0, f(x², x²) = 0.
Therefore,
        ( 1   ½   ½ )
MQ(B) = ( ½   0   0 )
        ( ½   0   0 ).
3. Since det(MQ(B)) = 0, f is degenerate.
Further, Q is neither positive nor negative, since
Q(2x − 1) = (−1) × 1 = −1 < 0,
Q(−x − 2) = (−2) × (−3) = 6 > 0.
Remark 6.1. The eigenvalues of MQ(B) are ½√3 + ½, ½ − ½√3, and 0. Hence MQ(B) is neither positive nor negative.
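The matrix MQ(B) and the eigenvalues quoted in Remark 6.1 can be reproduced numerically (a NumPy sketch; polynomials are represented by their coefficient lists):

```python
import numpy as np

# Polar form f(p, q) = (p(0)q(1) + p(1)q(0)) / 2 on R_2[x]
def f(p, q):
    # p, q are coefficient lists [c0, c1, c2]: p(t) = c0 + c1 t + c2 t^2
    p0, p1 = p[0], sum(p)  # p(0) and p(1)
    q0, q1 = q[0], sum(q)
    return 0.5 * (p0 * q1 + p1 * q0)

B = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # coefficients of 1, x, x^2
M = np.array([[f(u, v) for v in B] for u in B])
print(M)  # [[1, 0.5, 0.5], [0.5, 0, 0], [0.5, 0, 0]]

ev = np.sort(np.linalg.eigvalsh(M))
expected = np.sort([(1 - np.sqrt(3)) / 2, 0.0, (1 + np.sqrt(3)) / 2])
print(np.allclose(ev, expected))  # True
```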
Exercise 24. Let (E, ⟨·, ·⟩) be an inner product space (a pre-Hilbert space) and let F be a subspace of E. Prove that F ⊂ (F⊥)⊥, and that F = (F⊥)⊥ whenever E has finite dimension.
Solution. We have
F⊥ = {x ∈ E : ⟨x, y⟩ = 0 for each y ∈ F},
(F⊥)⊥ = {x ∈ E : ⟨x, y⟩ = 0 for each y ∈ F⊥}.
We prove that F ⊂ (F⊥)⊥. Let x0 ∈ F, and assume that x0 ∉ (F⊥)⊥. Then there exists y0 ∈ F⊥ such that ⟨x0, y0⟩ ≠ 0. But ⟨x, y0⟩ = 0 for every x ∈ F: a contradiction.
Next, assume that E is a finite dimensional space. Since E = F ⊕ F⊥ = F⊥ ⊕ (F⊥)⊥, by (1.1) we get
dim F + dim F⊥ = dim E and dim F⊥ + dim (F⊥)⊥ = dim E,
which gives dim F = dim (F⊥)⊥. Moreover, since F ⊂ (F⊥)⊥, we have F = (F⊥)⊥.
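In finite dimension, the equality F = (F⊥)⊥ can be illustrated numerically by computing the orthogonal complement twice via the SVD (an illustrative sketch in R⁵):

```python
import numpy as np

rng = np.random.default_rng(4)

def orth_complement(V):
    # Columns of V span a subspace of R^n; return an orthonormal basis
    # of its orthogonal complement (the null space of V^t).
    _, s, Vt = np.linalg.svd(V.T)
    rank = int((s > 1e-10).sum())
    return Vt[rank:].T  # columns are orthonormal

F = rng.standard_normal((5, 2))  # F = span of 2 random vectors in R^5
Fp = orth_complement(F)          # F-perp, dimension 3
Fpp = orth_complement(Fp)        # (F-perp)-perp, dimension 2

# Compare the two subspaces through their orthogonal projectors.
QF = np.linalg.qr(F)[0]
P1 = QF @ QF.T    # projector onto F
P2 = Fpp @ Fpp.T  # projector onto (F-perp)-perp
print(np.allclose(P1, P2))  # True
```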
Exercise 25. Let ϕ be the mapping defined on the vector space E = Rn[x] by
ϕ(P, Q) = ∫ₐᵇ P(t)Q(t) dt, where a < b.
Then ϕ is a bilinear form. Further, ϕ is symmetric, since for each (P, Q) ∈ E² one has
ϕ(P, Q) = ∫ₐᵇ P(t)Q(t) dt = ∫ₐᵇ Q(t)P(t) dt = ϕ(Q, P).
For all P ∈ E − {0}, we have ϕ(P, P) = ∫ₐᵇ P²(t) dt > 0. Then ϕ is positive definite. Hence ϕ is an inner product.
Now, we calculate Mϕ(B), where B = {1, t, t²}:
        ( ϕ(1, 1)   ϕ(1, t)    ϕ(1, t²)  )
Mϕ(B) = ( ϕ(1, t)   ϕ(t, t)    ϕ(t, t²)  )
        ( ϕ(1, t²)  ϕ(t, t²)   ϕ(t², t²) )
  ( ∫ₐᵇ dt     ∫ₐᵇ t dt   ∫ₐᵇ t² dt )
= ( ∫ₐᵇ t dt   ∫ₐᵇ t² dt  ∫ₐᵇ t³ dt )
  ( ∫ₐᵇ t² dt  ∫ₐᵇ t³ dt  ∫ₐᵇ t⁴ dt )
  ( b − a        (b² − a²)/2  (b³ − a³)/3 )
= ( (b² − a²)/2  (b³ − a³)/3  (b⁴ − a⁴)/4 )
  ( (b³ − a³)/3  (b⁴ − a⁴)/4  (b⁵ − a⁵)/5 ).
Finally, since ϕ is an inner product, the Cauchy–Schwarz inequality gives ϕ(P, Q)² ≤ ϕ(P, P) ϕ(Q, Q); that is,
(∫ₐᵇ P(t)Q(t) dt)² ≤ (∫ₐᵇ P²(t) dt)(∫ₐᵇ Q²(t) dt).
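The Gram matrix and the Cauchy–Schwarz inequality can be checked numerically for a = 0, b = 1, where Mϕ(B) is the 3 × 3 Hilbert matrix (a NumPy sketch using exact monomial integrals):

```python
import numpy as np

a, b = 0.0, 1.0

def phi(P, Q):
    # Inner product: integral over [a, b] of P(t)Q(t) dt,
    # computed exactly from the coefficients of the product polynomial.
    R = np.polynomial.polynomial.polymul(P, Q)
    return sum(c * (b**(k + 1) - a**(k + 1)) / (k + 1) for k, c in enumerate(R))

basis = [[1], [0, 1], [0, 0, 1]]  # coefficients of 1, t, t^2
M = np.array([[phi(u, v) for v in basis] for u in basis])
print(np.allclose(M, [[1, 1/2, 1/3], [1/2, 1/3, 1/4], [1/3, 1/4, 1/5]]))  # True

# Cauchy-Schwarz: phi(P, Q)^2 <= phi(P, P) * phi(Q, Q)
P, Q = [1.0, -2.0, 3.0], [0.5, 1.0, -1.0]
print(phi(P, Q)**2 <= phi(P, P) * phi(Q, Q))  # True
```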
Exercise 26. Let A be a symmetric matrix with real entries. Prove that the quadratic form q = xᵗAx is positive definite if and only if the eigenvalues of A are strictly positive.
Solution. Let q = xᵗAx be a positive definite quadratic form, where A ∈ Sn(R), and let
Exercise 27. Let f be a bilinear form on a vector space E. Show that the mapping
q : E → R
x ↦ f(x, x)
is a quadratic form.
Solution. Let f be a bilinear form over E. Clearly, the mapping
ϕ : E × E → R
(x, y) ↦ ϕ(x, y) = (f(x, y) + f(y, x)) / 2
is a symmetric bilinear form. Further, for each x ∈ E we have ϕ(x, x) = f(x, x) = q(x). Then q is a quadratic form over E.
Exercise 28. Let q be a quadratic form over E. Prove that two vectors x and y satisfying q(x)q(y) < 0 are independent.
Solution. Assume, by way of contradiction, that x and y are dependent. Since x and y are nonzero (otherwise, if x or y is zero, then q(x)q(y) = 0), there exists λ ∈ R* such that y = λx. By (3.5), q(x)q(y) = λ²(q(x))² > 0, which contradicts our assumption.
Exercise 29. Diagonalize the quadratic form
q = ax² + 2bxy + cy²
over R², where a, b, c ∈ R.
Solution. 1. Assume that a ≠ 0. Completing the square, we get
q = a(x + (b/a)y)² + (c − b²/a)y²,
so we let
x = x′ − (1/a)(by′) and y = y′.
It follows that the passage matrix is invertible, since
| 1  b/a |
| 0   1  | = 1 ≠ 0.
– If a > 0 and c − b²/a > 0, then the signature of q is (2, 0).
– If a < 0 and c − b²/a < 0, then the signature of q is (0, 2).
– If a > 0 and c − b²/a < 0, or a < 0 and c − b²/a > 0, then the signature of q is (1, 1).
2. Assume that a = 0 and b ≠ 0. There are two cases.
2.1. For c = 0, we let
x = x′ + y′ and y = x′ − y′,
which implies
q = 2bxy = 2b((x′)² − (y′)²),
so the signature is (1, 1).
2.2. For c ≠ 0, we can write
q = 2bxy + cy²
= c(y + (b/c)x)² − (b²/c)x²,
and the signature is again (1, 1), since c and −b²/c have opposite signs.
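Both completions of squares can be verified numerically on sample coefficients (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Case a != 0: q = ax^2 + 2bxy + cy^2 = a(x + (b/a)y)^2 + (c - b^2/a)y^2
a, b, c = 2.0, -3.0, 1.0
for _ in range(100):
    x, y = rng.standard_normal(2)
    q = a * x**2 + 2 * b * x * y + c * y**2
    diag = a * (x + (b / a) * y) ** 2 + (c - b**2 / a) * y**2
    assert np.isclose(q, diag)

# Case a = 0, c != 0: q = 2bxy + cy^2 = c(y + (b/c)x)^2 - (b^2/c)x^2
b2, c2 = 1.5, -2.0
for _ in range(100):
    x, y = rng.standard_normal(2)
    q = 2 * b2 * x * y + c2 * y**2
    diag = c2 * (y + (b2 / c2) * x) ** 2 - (b2**2 / c2) * x**2
    assert np.isclose(q, diag)
print("ok")
```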
Exercise 30. Let A ∈ Mn(R) be a symmetric positive definite matrix. Using two methods, prove that det(A) is strictly positive.
Solution. 1st method. We show that A is positive definite ⇔ ∀ λ ∈ Sp(A) : λ > 0. In fact, if A is positive definite, then for each eigenpair (λ, x) of A we have
0 < xᵗAx = λxᵗx = λ‖x‖₂²,
hence λ > 0. It follows that
det(A) = ∏λ∈Sp(A) λ > 0.
2nd method. In the case when A is symmetric positive definite, from Theorem 2.5 we have
A = MᵗM, where M ∈ GLn(R).
Hence, det(A) = det(MᵗM) = (det(M))² > 0 (note that det(M) = det(Mᵗ) and det(M) ≠ 0 since M is invertible).
Remark 6.2. Let A ∈ Mn(C) be a Hermitian positive definite matrix. Then det(A) ∈ R∗+.
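Both methods can be illustrated numerically: build A = MᵗM from a random invertible M and compare det(A) with the product of its (positive) eigenvalues and with det(M)²:

```python
import numpy as np

rng = np.random.default_rng(6)

# Symmetric positive definite A = M^t M with M invertible (2nd method).
M = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # diagonally shifted, invertible here
A = M.T @ M

ev = np.linalg.eigvalsh(A)
print(np.all(ev > 0))                                     # all eigenvalues positive
print(np.isclose(np.linalg.det(A), np.prod(ev)))          # det = product of eigenvalues
print(np.isclose(np.linalg.det(A), np.linalg.det(M)**2))  # det(A) = det(M)^2 > 0
```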
Conclusion
Bibliography
[1] I. Kaplansky, Linear Algebra and Geometry: A Second Course, Allyn & Bacon, Boston, 1969.
[2] W. Scharlau, Quadratic and Hermitian Forms, Springer-Verlag, Berlin, 1985.
[3] D. Shapiro, Compositions of Quadratic Forms, de Gruyter, Berlin, 2000.
[4] M. Knebusch, Specialization of Quadratic and Symmetric Bilinear Forms, Vol. 11, Springer Science & Business Media, 2011.
[5] S. Kurgalin et al., Bilinear and Quadratic Forms, in: Algebra and Geometry with Python, 2021, 335-356.
[6] J. W. Milnor and D. Husemoller, Symmetric Bilinear Forms, Vol. 5, Springer, Berlin, 1973.
[7] K. Szymiczek, Ten problems on quadratic forms, Acta Mathematica et Informatica Universitatis Ostraviensis, 10(1) (2002), 133-143.