1.1 Matrices
The term was coined by James Joseph Sylvester in the 19th century.
The set of matrices of size m × n, i.e. m rows and n columns, over R or C is a vector space, which we denote by M_{m×n}(R) (resp. M_{m×n}(C)). Choosing an ordering of the coefficients identifies it with R^{mn}. The standard basis of M_{m×n}(R) consists of the matrices e_{ji}, where (e_{ji})_{kℓ} = 1 if j = k and i = ℓ, and zero otherwise. We chose this ordering of the subindices because e_{ji} e_i = e_j. Note that e_{kj} e_{ji} = e_{ki}, and e_{kℓ} e_{ji} = 0 if ℓ ≠ j. Here, the matrices on the right are in M_{m×n}(R), and those on the left are in M_{p×m}(R).
The identity matrix I = I_n ∈ M_n(K): I_{ij} = 1 if i = j, and 0 if i ≠ j. For every m × n matrix A one has A I_n = I_m A = A.
Matrix multiplication is associative:
$$(AB)C = A(BC).$$
Proof: It suffices to show that the (i, j) coefficient of both sides is $\sum_{k_1} \sum_{k_2} a_{i k_1} b_{k_1 k_2} c_{k_2 j}$.
The transpose of an m × n matrix A = (a_{ij}), denoted by A^T, is the n × m matrix defined by (A^T)_{ij} = a_{ji}. One easily checks that (AB)^T = B^T A^T, etc.
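As a quick numerical sanity check of associativity and the transpose rule (a sketch, not part of the original notes, assuming Python with numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

# associativity: (AB)C = A(BC)
assert np.allclose((A @ B) @ C, A @ (B @ C))

# transpose rule: (AB)^T = B^T A^T
assert np.allclose((A @ B).T, B.T @ A.T)
```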
Proposition 1.1.2 The block matrix product above yields
$$\begin{pmatrix} A & B \end{pmatrix} \begin{pmatrix} E \\ F \end{pmatrix} = AE + BF.$$
This is the case for all block matrix products. For instance, assuming that all matrix
products involved are possible (which is the case if all blocks are square matrices of the
same order n), one has
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} E & F \\ G & H \end{pmatrix} = \begin{pmatrix} AE + BG & AF + BH \\ CE + DG & CF + DH \end{pmatrix}.$$
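The 2 × 2 block identity is easy to test numerically as well; the sketch below (again assuming numpy; the blocks are random square matrices of order 3) assembles both sides with np.block:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, C, D, E, F, G, H = (rng.standard_normal((n, n)) for _ in range(8))

lhs = np.block([[A, B], [C, D]]) @ np.block([[E, F], [G, H]])
rhs = np.block([[A @ E + B @ G, A @ F + B @ H],
                [C @ E + D @ G, C @ F + D @ H]])
assert np.allclose(lhs, rhs)
```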
Example 1.2.1 Let C^1(R) be the vector space of functions of class C^1 on R (differentiable at each point x ∈ R, and such that f′ is continuous). The function T(f) = f′(0) is a linear map T : C^1(R) → R.
Proposition 1.2.3 Given a linear map T : E → F, there are two vector subspaces associated with T: the kernel of T, ker T = {x ∈ E | T(x) = 0} ⊂ E, and the image of T, Im T = T(E) ⊂ F. They are vector subspaces of E and F respectively.
Kernel and image of a matrix: From Example 1.2.2 we may define kernel and image
of a matrix A. The kernel, ker A, is the subspace given by ker A = {x ∈ Rn : Ax = 0}.
This automatically yields the kernel as the solution set of a system of homogeneous linear
equations.
The image, Im A, is the subspace {Ax : x ∈ R^n} ⊂ R^m. Since Ax is the linear combination
$$Ax = x_1 A_1 + \ldots + x_n A_n,$$
the image is precisely the span of the columns of A: Im A = ⟨A_1, …, A_n⟩.
Definition The rank of a linear map (resp. a matrix) is the dimension of its image, rk T = dim Im T (resp. rk A = dim Im A). In other words, the rank of a matrix is dim⟨A_1, …, A_n⟩. The nullity of a linear map (resp. matrix) is the dimension of its kernel, dim ker T (resp. dim ker A).
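For concreteness, here is a small sketch (assuming Python with sympy; the matrix is an arbitrary illustration) that computes kernel, image, rank and nullity:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])        # rank 1: the second row is twice the first

kernel = A.nullspace()            # basis of ker A = {x : Ax = 0}
image = A.columnspace()           # basis of Im A = <A_1, ..., A_n>

rank = A.rank()                   # dim Im A
nullity = len(kernel)             # dim ker A
assert rank + nullity == A.cols   # rank-nullity: rk A + dim ker A = n
print(rank, nullity)              # prints: 1 2
```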
The following result elucidates the relationship between linear maps and matrices (this
will suffice for this semester).
Theorem 1.2.4 Let T : Rn → Rm be linear over R. Then T is of the form given in
Example 1.2.2.
Proof: First of all, note that for T_A(x) = Ax one has T_A(e_i) = A_i. This shows us precisely what we need to prove: it is the matrix A whose columns are the vectors T(e_i) ∈ R^m. Now, let T be an arbitrary linear map from R^n to R^m. For a vector x ∈ R^n, x = Σ_i x_i e_i, one has:
$$T(x) = T\Big(\sum_i x_i e_i\Big) = \sum_i x_i T(e_i) = \begin{pmatrix} T(e_1) & \cdots & T(e_n) \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = Ax,$$
as desired.
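The proof is constructive: the matrix of T is assembled from the images of the basis vectors. A minimal sketch (assuming numpy; the particular map T below is a made-up example):

```python
import numpy as np

def T(x):
    # an arbitrary linear map R^3 -> R^2, just for illustration
    return np.array([x[0] + 2 * x[1], 3 * x[2] - x[0]])

n = 3
basis = np.eye(n)
A = np.column_stack([T(basis[:, i]) for i in range(n)])  # columns are T(e_i)

x = np.array([1.0, -2.0, 0.5])
assert np.allclose(T(x), A @ x)   # T(x) = Ax for every x
```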
1.3 Inverses
A square matrix A has an inverse B, also n × n, if and only if AB = BA = I. In fact, for square matrices one of the two identities suffices: AB = I alone already implies BA = I, as we now prove. (The results in this section may be treated with determinants and adjugates, but we assume no previous knowledge of that subject here.)
Proof: Suppose that AB = I. It follows that the columns A_i of A span R^n: every y ∈ R^n satisfies y = A(By). This also means that the columns A_i are a basis of R^n, by invariance of dimension (if they were not linearly independent, discarding the redundant ones would eventually produce a basis of R^n with fewer than n elements!). Thus, ker A = {0}.
We shall show that BA − I = 0. Clearly, A(BA − I) = (AB)A − A = 0, but this means that every column of BA − I is in ker A = {0}, so indeed BA = I, as desired.
Suitable use of the transpose settles the remaining claim; however, should one wish to write out all details, suppose now that BA = I. Again, the columns of A form a basis, for they are clearly linearly independent (Ax = 0 ⇒ x = BAx = 0) and invariance of dimension applies. To prove that AB = I, again consider (AB − I)A = ABA − A = 0. This means that Im A ⊂ ker(AB − I); but since Im A = R^n (it is the span of the columns of A), AB − I = 0.
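Numerically one can watch the same phenomenon: solving AB = I for B (a sketch assuming numpy, with a random, almost surely invertible, A) produces a two-sided inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))       # random, hence almost surely invertible

B = np.linalg.solve(A, np.eye(n))     # solves AB = I column by column
assert np.allclose(A @ B, np.eye(n))
assert np.allclose(B @ A, np.eye(n))  # the other identity comes for free
```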
Proposition 1.4.1 The following statements hold for a system of linear equations Ax = b
as above.
Proof:
1. Obvious.
2. A solution is a set of coefficients x_i which expresses b = Σ_i x_i A_i, i.e. as an element of Im A. Therefore, the system is solvable (consistent) if and only if b ∈ Im A.
Proof: Assume that MA = I. Multiplying on the right by A^{-1} (obtained, for instance, through the adjugate of A) yields M = A^{-1}. The case AN = I is likewise settled (left multiplication).
1.5 Problems
1.5.1 Let B = (bij ) be an n × n matrix, with zero diagonal, i.e., bii = 0 for all i. Prove
that there are n × n matrices X, Y such that XY − Y X = B. (Hint: Assume that X is
a suitable diagonal matrix.)
1.5.2 Let A be an n × n square matrix. Show that if the matrix G defined by G_{ij} = A_i • A_j is invertible, then det A ≠ 0.
1.5.3 Let A, B be square matrices of order n, such that I − AB is invertible. Show that I − BA is invertible, and that (I − BA)^{-1} = I + B(I − AB)^{-1} A.
Chapter 2
Determinants
2.2 Alternating n-linear forms on K^n
We have working knowledge of 2 × 2 and 3 × 3 determinants. We shall guide ourselves by
this geometrical knowledge to construct an n-dimensional generalisation, which shall be
the determinant of a square matrix of order n. First we shall list the essential properties
of the determinant.
Let V : R^n × ⋯ × R^n → R (n factors) be an 'oriented volume function'. From our 2- and 3-dimensional experience, we distilled the following rules.
DET0) (normalization) V (e1 , e2 , . . . , en ) = 1;
DET1) (multilinearity) the function V is n-linear, i.e. linear in each variable if we fix the arguments in the others. To wit,
$$V(\alpha u_1 + \beta u_1', u_2, \ldots, u_n) = \alpha\, V(u_1, u_2, \ldots, u_n) + \beta\, V(u_1', u_2, \ldots, u_n),$$
and in general the same holds in each argument.
DET2) (alternating) the function V is alternating, i.e. if we swap two arguments, the value of V changes precisely by a sign; equivalently, V vanishes when two arguments are equal:
$$V(u_1, \ldots, v, \ldots, v, \ldots, u_n) = 0.$$
Let us retrieve the determinant in the 2 × 2 and 3 × 3 cases. Note that the manipulations effected with the canonical basis work for any basis (though this remark shall only be made precise in the 2nd semester of the course).
Example 2.2.1 If n = 2, then V(ae_1 + be_2, ce_1 + de_2) = a V(e_1, ce_1 + de_2) + b V(e_2, ce_1 + de_2). After developing the expression by DET1 and DET2, it equals (ad − bc) V(e_1, e_2) = ad − bc, the familiar 2 × 2 determinant. For n = 3 one expands likewise:
$$V(A, B, C) = V(a_1 e_1 + a_2 e_2 + a_3 e_3,\; b_1 e_1 + b_2 e_2 + b_3 e_3,\; c_1 e_1 + c_2 e_2 + c_3 e_3).$$
2.3 Binet’s formula
Theorem 2.3.1 (Binet-Cauchy) det AB = det A det B.
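A quick numerical spot check of the theorem (a sketch assuming numpy; random matrices, of course not a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```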
IMPORTANT: What Theorem 2.3.1 proves is that the determinant of a square matrix A, det A, is the scaling factor between the volume of an n-parallelepiped generated by the vectors u_1, …, u_n and that of its image under A, i.e. the one determined by Au_1, …, Au_n. The sign of det A shows whether the orientation of the Au_i is the same as, or opposite to, that of the u_i.
Theorem 2.3.2 Let A be an n × n matrix. Then: det A = 0 if and only if its columns
A1 , · · · , An are linearly dependent.
Proof: If the columns are linearly dependent, then some column is a linear combination of the others, and expanding along that column shows det A = 0 by linearity and the alternating property. Now, assume that the columns of A are linearly independent: since the A_i form a basis of K^n, each vector of the canonical basis may be written e_i = Σ_k b_{ki} A_k. This means that I = AB, and by Binet's formula det A det B = det I = 1, hence det A ≠ 0.
In order to have an explicit formula for the determinant, we use matrix notation: A_j = Σ_i a_{ij} e_i. By DET1, we have:
$$V(A_1, \ldots, A_n) = V\Big(\sum_{k_1=1}^{n} a_{k_1 1} e_{k_1}, \cdots, \sum_{k_n=1}^{n} a_{k_n n} e_{k_n}\Big) = \sum_{k_1, \ldots, k_n} a_{k_1 1} \cdots a_{k_n n}\; V(e_{k_1}, \ldots, e_{k_n}).$$
Again, by DET2, only the terms where k_1, …, k_n are distinct survive, i.e. {k_1, …, k_n} = {1, …, n}. Here we use the notation for permutations: a permutation of n elements is a bijection σ : {1, …, n} → {1, …, n}. The set (group) of permutations of {1, ⋯, n} is denoted by S_n, and is called the symmetric group on n elements.
Back to the expression, we have
$$V(A_1, \ldots, A_n) = \sum_{\sigma \in S_n} a_{\sigma(1)1} \cdots a_{\sigma(n)n}\; V(e_{\sigma(1)}, \ldots, e_{\sigma(n)}) = (\star)$$
Definition A permutation τ ∈ S_n is called a transposition (between i and j, with i ≠ j) if τ(i) = j, τ(j) = i and τ(k) = k for every k ≠ i, j. In other words, a transposition is a swap between two elements i, j. One usually writes τ = (i j). (Note that τ^{-1} = τ for every transposition τ.)
Example 2.3.4 Let us compute V(e_2, e_3, e_4, e_5, e_1) by moving each vector to its rightful place. Swapping e_1 and e_5 gives
$$V(e_2, e_3, e_4, e_5, e_1) = -V(e_2, e_3, e_4, e_1, e_5).$$
Now e_4 goes to its rightful place, and for that we swap e_1, e_4:
$$-V(e_2, e_3, e_4, e_1, e_5) = V(e_2, e_3, e_1, e_4, e_5).$$
We do the same with e_3 and e_2, which entails two more sign changes, and thus we get
$$V(e_2, e_3, e_4, e_5, e_1) = V(e_1, e_2, e_3, e_4, e_5).$$
Note that, if σ = τ_1 ⋯ τ_r, where the τ_i = τ_i^{-1} are transpositions, then using (ab)^{-1} = b^{-1} a^{-1} yields
$$\sigma^{-1} = (\tau_1 \cdots \tau_r)^{-1} = \tau_r^{-1} \cdots \tau_1^{-1} = \tau_r \cdots \tau_1.$$
(The process we showed in Example 2.3.4 in fact provides a decomposition of the inverse σ^{-1}, but we shall not dwell on this, and instead refer to the first chapter on groups.)
CLAIM: (proven in the Appendix at the end of this Chapter) The function sgn is well
defined, and multiplicative, i.e.: sgn(ση) = sgn(σ)sgn(η) for σ, η ∈ Sn .
$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{\sigma(1)1} \cdots a_{\sigma(n)n}, \tag{2.1}$$
where sgn is the sign made explicit in Corollary 2.3.6. Another important result follows:
Proposition 2.3.7 det A = det AT .
Proof: Clearly, a_{σ(k)k} = (A^T)_{k σ(k)}. Also, recall that, given the graph of a bijection, the graph of its inverse is obtained by transposing the horizontal and vertical axes (i.e. reflection through the diagonal) in the case of real functions of a real variable. The same holds for permutations of {1, 2, ⋯, n} (A PICTURE IS LACKING!).
Therefore, for every σ ∈ S_n, it follows from the former paragraph that
$$\prod_{k=1}^{n} a_{\sigma(k)k} = \prod_{\ell=1}^{n} a_{\ell\,\sigma^{-1}(\ell)},$$
just swapping k for ℓ = σ(k). On the other hand, sgn(σ) = sgn(σ^{-1}), and therefore det A = det A^T.
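Formula (2.1) is directly executable, with sgn computed by counting inversions as in the Appendix. The sketch below (assuming Python with itertools and numpy) checks both (2.1) and Proposition 2.3.7 on a random matrix; the sum has n! terms, so this is only useful for small sizes:

```python
import itertools
import numpy as np

def sgn(perm):
    # sign via the number of inversions N(sigma), cf. the Appendix
    n = len(perm)
    inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                     if perm[a] > perm[b])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    # formula (2.1): sum over S_n of sgn(sigma) a_{sigma(1)1} ... a_{sigma(n)n}
    n = A.shape[0]
    return sum(sgn(p) * np.prod([A[p[j], j] for j in range(n)])
               for p in itertools.permutations(range(n)))

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
assert np.isclose(det_by_permutations(A), det_by_permutations(A.T))  # Prop. 2.3.7
```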
Example 2.4.1 Let T be an upper (resp. lower) triangular matrix, i.e. t_{ij} = 0 for i > j (resp. for i < j). The determinant of T, det T, is the product of the terms of the main diagonal of T: det T = t_{11} t_{22} ⋯ t_{nn}.
Indeed, note that det T = $\sum_{\sigma} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} t_{\sigma(i)i}$. Since t_{ij} = 0 for i > j, those permutations σ of 1, 2, …, n that have a term with σ(i) > i produce a zero product, so the ones we need to pick are those σ such that σ(i) ≤ i for every 1 ≤ i ≤ n. First, σ(1) ≤ 1, hence σ(1) = 1. Likewise, σ(2) ≤ 2, and since the value 1 is already taken, the permutation σ must have σ(2) = 2. In general, if σ(1) = 1, …, σ(i − 1) = i − 1, then σ(i) ≤ i, and since σ(i) cannot be 1, 2, …, i − 1 (for these values are taken), we are only left with σ(i) = i. Thus, the only permutation which produces a product with a priori no obvious zero terms is precisely the identity, σ = Id. This means that only one product survives, i.e. det T = t_{11} ⋯ t_{nn}.
Consider now a 5 × 5 matrix M in block triangular shape,
$$M = \begin{pmatrix} A & \ast \\ 0 & B \end{pmatrix},$$
where A = (a_{ij}) is a 2 × 2 block, B = (b_{ij}) is a 3 × 3 block, and the 3 × 2 block below A is zero. This shape does not have as many vanishing coefficients as a triangular matrix. We still have an interesting phenomenon, though. We claim that the determinant is det M = det(a_{ij}) det(b_{ij}).
First solution, perhaps unpleasant: We shall deal with this example first through the prism of permutations. Note that m_{i1} = m_{i2} = 0 for i = 3, 4, 5. In this case, the significant terms in the sum over all permutations feature only entries within the nonzero frame; namely, if σ is a permutation of 1, 2, 3, 4, 5, the term corresponding to σ is $\operatorname{sgn}(\sigma) \prod_{i=1}^{5} m_{\sigma(i)i}$, and it survives only if every factor m_{σ(i)i} lies outside the zero zone. This means that, if i = 1, 2, then σ(i) must be 1 or 2. In other words, {σ(1), σ(2)} = {1, 2}. Since σ is a bijection, automatically {σ(3), σ(4), σ(5)} = {3, 4, 5}.
Thus, every significant permutation σ may be factored as a product σ = α ◦ β of a
permutation α of 1, 2 (that leaves 3, 4, 5 fixed) and a permutation β of 3, 4, 5 (that leaves
1, 2 fixed). Precisely speaking,
$$\det M = \sum_{\sigma} \operatorname{sgn}(\sigma) \prod_{i=1}^{5} m_{\sigma(i)i} = \sum_{\alpha, \beta} \operatorname{sgn}(\alpha\beta)\, m_{\alpha(1)1} m_{\alpha(2)2}\, m_{\beta(3)3} m_{\beta(4)4} m_{\beta(5)5}$$
$$= \Big(\sum_{\alpha} \operatorname{sgn}(\alpha)\, m_{\alpha(1)1} m_{\alpha(2)2}\Big) \Big(\sum_{\beta} \operatorname{sgn}(\beta)\, m_{\beta(3)3} m_{\beta(4)4} m_{\beta(5)5}\Big) = \det(A) \det(B).$$
Second solution: Applying the Binet-Cauchy formula, and working out the factors through Laplace expansion, yields det M = det A · det B once more.
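Both solutions are easy to sanity-check numerically (a sketch assuming numpy; A, B and the upper-right block are random):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((2, 3))       # the irrelevant upper-right block

M = np.block([[A, C], [np.zeros((3, 2)), B]])
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(B))
```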
2.5 Vandermonde determinant
We shall apply what we learnt so far to this classical example (and will use the Laplace
expansion, too). Given numbers x_1, …, x_n, one has the matrix M = (m_{ij}), where m_{ij} = x_j^{i−1}, i.e.
$$M = \begin{pmatrix} 1 & 1 & 1 & 1 & \ldots & 1 \\ x_1 & x_2 & x_3 & x_4 & \ldots & x_n \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & x_4^{n-1} & \ldots & x_n^{n-1} \end{pmatrix}.$$
Let V (x1 , . . . , xn ) = det M . We shall find an expression for V (x1 , . . . , xn ).
CLAIM: $V(x_1, \ldots, x_n) = \prod_{1 \le i < j \le n} (x_j - x_i)$.
To start with, consider the elementary property that
$$\det\Big(u_1 + \sum_{i=2}^{n} \alpha_i u_i,\; u_2, \ldots, u_n\Big) = \det(u_1, u_2, \ldots, u_n)$$
(which follows from linearity and the alternating property!). This we may do on each column: adding
a linear combination of the other columns to a given column leaves the determinant
unchanged. By the same token, this is also the case for rows (remember that det A =
det AT !).
Thus, we replace the columns u2 , . . . , un by u2 − u1 , . . . , un − u1 , that is,
$$V = \begin{vmatrix} 1 & 0 & 0 & \cdots & 0 \\ x_1 & x_2 - x_1 & x_3 - x_1 & \cdots & x_n - x_1 \\ \vdots & \vdots & \vdots & & \vdots \\ x_1^{i} & x_2^{i} - x_1^{i} & x_3^{i} - x_1^{i} & \cdots & x_n^{i} - x_1^{i} \\ \vdots & \vdots & \vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} - x_1^{n-1} & x_3^{n-1} - x_1^{n-1} & \cdots & x_n^{n-1} - x_1^{n-1} \end{vmatrix}.$$
Clearly, we may extract a factor xi − x1 out of the i-th column, for 2 ≤ i ≤ n, and in so
doing
$$V = \prod_{i=2}^{n} (x_i - x_1) \begin{vmatrix} 1 & 0 & 0 & \cdots & 0 \\ x_1 & 1 & 1 & \cdots & 1 \\ x_1^2 & x_2 + x_1 & x_3 + x_1 & \cdots & x_n + x_1 \\ \vdots & \vdots & \vdots & & \vdots \\ x_1^{n-1} & x_2^{n-2} + \cdots + x_1^{n-2} & x_3^{n-2} + \cdots + x_1^{n-2} & \cdots & x_n^{n-2} + \cdots + x_1^{n-2} \end{vmatrix} = (\star).$$
Clearly, Laplace expansion along the first row reduces to the expression
$$V = (\star) = \prod_{i=2}^{n} (x_i - x_1) \begin{vmatrix} 1 & 1 & \cdots & 1 \\ x_2 + x_1 & x_3 + x_1 & \cdots & x_n + x_1 \\ \vdots & \vdots & & \vdots \\ x_2^{n-2} + \cdots + x_1^{n-2} & x_3^{n-2} + \cdots + x_1^{n-2} & \cdots & x_n^{n-2} + \cdots + x_1^{n-2} \end{vmatrix}.$$
Note now that, if we isolate the above determinant on the right, leaving the accompanying
factor on the left for now, the (n − 1)-th row minus x1 times the (n − 2)-th row yields the
row
$$(x_2^{n-2} \;\; x_3^{n-2} \;\; \ldots \;\; x_n^{n-2}).$$
Apply this process to the (n − 2)-th row now (i.e. subtracting x1 times the (n − 3)-th
row) and proceed backwards accordingly. One gets
$$V = (\star) = \prod_{i=2}^{n} (x_i - x_1) \begin{vmatrix} 1 & 1 & \cdots & 1 \\ x_2 & x_3 & \cdots & x_n \\ \vdots & \vdots & & \vdots \\ x_2^{n-2} & x_3^{n-2} & \cdots & x_n^{n-2} \end{vmatrix},$$
which in turn equals $\prod_{i=2}^{n} (x_i - x_1)\, V(x_2, \ldots, x_n)$. The case n = 2 is clear, and the claim follows by induction.
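The product formula can be verified symbolically for small n (a sketch assuming sympy):

```python
import sympy as sp

n = 4
x = sp.symbols(f'x1:{n + 1}')                  # x1, x2, x3, x4
M = sp.Matrix(n, n, lambda i, j: x[j] ** i)    # m_ij = x_j^(i-1), 0-indexed here
product = sp.prod(x[j] - x[i]
                  for i in range(n) for j in range(i + 1, n))
assert sp.simplify(M.det() - product) == 0
```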
2.6 Laplace expansion
We first compute
$$\det(e_1, A_2, \cdots, A_n) = \begin{vmatrix} a_{22} & \cdots & a_{2n} \\ \vdots & & \vdots \\ a_{n2} & \cdots & a_{nn} \end{vmatrix}.$$
For instance, one may argue that σ(1) = 1 (hence aσ(1)1 = 1) for every permutation with
a nonzero product aσ(1)1 · · · aσ(n)n .
The case with general A1 may be tackled by linearity and reduction to the determinant
det(e1 , A2 , . . . , An ).
$$\det A = \det(A_1, A_2, \cdots, A_n) = \sum_{i=1}^{n} a_{i1} \det(e_i, A_2, \cdots, A_n).$$
Notation: Denote by $A^{i^c}_{j^c}$ the determinant of order n − 1 resulting from deleting the i-th row and the j-th column from det A. Here $i^c$ refers to the complement of i in the set {1, 2, …, n} as an ordered set.
In all, the above argument and its row analogue yield the following result.
Theorem 2.6.1 (Laplace expansion, poor man’s version) Let A be a square ma-
trix of order n.
1. (Developing by a column) Fix j (the j-th column of A). Then: $\det A = \sum_{k=1}^{n} (-1)^{k+j} a_{kj} A^{k^c}_{j^c}$;
2. (Developing by a row) Fix i (the i-th row of A). Then: $\det A = \sum_{k=1}^{n} (-1)^{i+k} a_{ik} A^{i^c}_{k^c}$,
where $A^{a^c}_{b^c}$ is the determinant of the submatrix resulting from erasing the a-th row and the b-th column.
For instance, expanding a 3 × 3 determinant along its second row gives
$$\det A = a_{21}(-1)^{2+1} \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{22}(-1)^{2+2} \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} + a_{23}(-1)^{2+3} \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix}.$$
Proof of Theorem 2.6.1: The argument is essentially given before the statement.
Notation: given row indices I = {i_1 < ⋯ < i_r} and column indices J = {j_1 < ⋯ < j_r} of a matrix B, write
$$B^{I}_{J} = \begin{vmatrix} b_{i_1 j_1} & \cdots & b_{i_1 j_r} \\ \vdots & & \vdots \\ b_{i_r j_1} & \cdots & b_{i_r j_r} \end{vmatrix}.$$
An example of this is $A^{i^c}_{j^c}$ above, where A is an n × n matrix: here $i^c$ is the ordered set 1 < 2 < ⋯ < i − 1 < i + 1 < ⋯ < n.
Laplace expansion has a full-fledged version, which we shall not state or prove here (ask
away if ye are curious).
2.6.1 Cofactors. The adjugate matrix
Definition Let A ∈ M_n(K). The cofactor matrix of A is defined by $\mathrm{Cof}(A)_{ij} = (-1)^{i+j} A^{i^c}_{j^c}$, where $A^{i^c}_{j^c}$ is the minor defined above, associated with deleting the i-th row and j-th column.
Definition The adjugate matrix of A is the transpose of the cofactor matrix, adj(A) = Cof(A)^T = Cof(A^T).
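A symbolic spot check of these definitions (a sketch assuming sympy, whose adjugate and cofactor_matrix follow the same sign conventions); the last line anticipates the identity adj(A) A = (det A) I that the expansions below yield:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0],
               [3, -1, 4],
               [0, 5, 2]])                        # arbitrary example

assert A.adjugate() == A.cofactor_matrix().T      # adj(A) = Cof(A)^T
assert A.adjugate() * A == A.det() * sp.eye(3)    # adj(A) A = (det A) I
```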
Given A ∈ Mn (K) and b ∈ K n , Laplace expansion on the first column of the determinant
det(b, A2 , . . . , An ) yields the following.
$$\begin{vmatrix} b_1 & a_{12} & \ldots & a_{1n} \\ b_2 & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ b_n & a_{n2} & \ldots & a_{nn} \end{vmatrix} = b_1 (-1)^{1+1} A^{1^c}_{1^c} + \ldots + b_n (-1)^{n+1} A^{n^c}_{1^c} = \sum_{k=1}^{n} b_k\, \mathrm{Cof}(A)_{k1}. \tag{2.2}$$
Note that Cof(A)_{kj} = adj(A)_{jk}, so actually det(b, A_2, …, A_n) = $\sum_{k=1}^{n} \mathrm{adj}(A)_{1k} b_k$ = adj(A)_1 b, the product of the first row of adj(A) with b. More generally, Laplace expansion along columns and rows yields the following four statements.
1. Fix j (the j-th column of A). Then: $\det A = \sum_{k=1}^{n} (-1)^{k+j} a_{kj} A^{k^c}_{j^c}$;
2. Fix j (the j-th column of A), and let i ≠ j. Then: $\sum_{k=1}^{n} (-1)^{k+j} a_{ki} A^{k^c}_{j^c} = 0$;
3. Fix i (the i-th row of A). Then: $\det A = \sum_{k=1}^{n} (-1)^{i+k} a_{ik} A^{i^c}_{k^c}$;
4. Fix i (the i-th row of A), and let j ≠ i. Then: $\sum_{k=1}^{n} (-1)^{i+k} a_{jk} A^{i^c}_{k^c} = 0$.
Proof: Parts 1 and 3 are in Theorem 2.6.1. Part 4 is Part 2 applied to A^T. Part 2 follows from considering the determinant of A, deleting the j-th column and writing the i-th column in its place: the resulting matrix has a repeated column, so its determinant vanishes, and expanding it along the j-th column gives precisely the sum in Part 2.

Theorem 2.6.6 (determinantal criterion for the rank) The rank of A is the highest order of a nonzero minor of A.

Proof (the essential step): if the columns A_1, ⋯, A_r are linearly independent, we may complete them with suitable canonical vectors e_{k_1}, ⋯, e_{k_{n−r}} to a basis of K^n. Thus, clearly
$$0 \ne \det(A_1, \cdots, A_r, e_{k_1}, \cdots, e_{k_{n-r}}) = \pm \begin{vmatrix} a_{i_1 1} & \cdots & a_{i_1 r} \\ \vdots & & \vdots \\ a_{i_r 1} & \cdots & a_{i_r r} \end{vmatrix} = \pm A^{I}_{[1,r]},$$
where I = {i_1 < ⋯ < i_r} is the complement of {k_1, ⋯, k_{n−r}}: Laplace expansion along the columns e_{k_1}, ⋯, e_{k_{n−r}} deletes the corresponding rows. Hence A has a nonzero minor of order r.
Corollary 2.6.7 The rank of A equals the rank of its transpose AT . In other words,
row rank and column rank are equal (row rank would be the maximum number of linearly
independent rows in A).
Proof: The fact that det M T = det M and the determinantal criterion show that the
rank is the same for A and for AT . The rows of A are the columns of AT .
Corollary 2.6.8 Let A_1, ⋯, A_r ∈ K^n be linearly independent vectors. If r < n, then their span F = ⟨A_1, ⋯, A_r⟩ is given by the following cartesian equations. Fix I = {i_1 < ⋯ < i_r} so that $A^{I}_{[1,r]} \ne 0$. F is defined by the following n − r equations: for every j ∈ I^c, the (r + 1) × (r + 1) determinant formed by the rows i_1, …, i_r, j of the augmented matrix (A_1 ⋯ A_r x) must vanish.

Example 2.6.9 Let F = ⟨(1, 2, 1, 4), (2, 1, 1, −1)⟩ ⊂ R^4; let us find a complete set of cartesian equations for F.
Consider a generic vector of unknowns x, y, z, t. First of all, we spot in the matrix
$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \\ 1 & 1 \\ 4 & -1 \end{pmatrix}$$
the 2 × 2 minor of the first two rows and columns, which is nonzero. The augmented matrix is now
$$A' = \begin{pmatrix} 1 & 2 & x \\ 2 & 1 & y \\ 1 & 1 & z \\ 4 & -1 & t \end{pmatrix},$$
and fixing the first two rows yields two equations:
$$\begin{vmatrix} 1 & 2 & x \\ 2 & 1 & y \\ 1 & 1 & z \end{vmatrix} = 0$$
and
$$\begin{vmatrix} 1 & 2 & x \\ 2 & 1 & y \\ 4 & -1 & t \end{vmatrix} = 0.$$
If the reader should choose another two rows, the results would be the same –we mean, up
to linear combinations of both equations, of course.
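One may let a computer confirm that the two equations indeed cut out F (a sketch assuming sympy):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
eq1 = sp.Matrix([[1, 2, x], [2, 1, y], [1, 1, z]]).det()
eq2 = sp.Matrix([[1, 2, x], [2, 1, y], [4, -1, t]]).det()

# both equations vanish on the generators of F ...
for v in [(1, 2, 1, 4), (2, 1, 1, -1)]:
    subs = dict(zip((x, y, z, t), v))
    assert eq1.subs(subs) == 0 and eq2.subs(subs) == 0

# ... and they are two independent linear equations in R^4,
# so they cut out a 2-dimensional subspace, which must be F
print(sp.expand(eq1), sp.expand(eq2))   # x + y - 3*z, -6*x + 9*y - 3*t
```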
Example 2.6.10 (An oldie follows from Theorem 2.6.6) Recall the case of u, v ∈ R^n, with coordinates u_i, v_i respectively. The rank of the matrix (u v) is then the highest
order of a nonzero minor: if one of the vectors is nonzero, then it is at least 1 (a nonzero
coordinate is a nonzero 1 × 1 minor!). The rank is precisely 2 if and only if there is a
nonzero minor ui vj − uj vi . We knew this, but now Theorem 2.6.6 vastly generalises this
old result.
Expanding on the i-th column yields $(\star) = \det(A_1, \cdots, A_{i-1}, x_i A_i, A_{i+1}, \cdots, A_n) + 0$, since the terms with A_k in the i-th position for k ≠ i yield zero by DET2. It follows that
$$\det(A_1, \cdots, A_{i-1}, b, A_{i+1}, \cdots, A_n) = x_i \det A,$$
which, when det A ≠ 0, is Cramer's rule: x_i = det(A_1, ⋯, b, ⋯, A_n)/det A.
Now suppose that functions f_1, …, f_n of class C^{n−1} on an interval I satisfy an identity α_1 f_1(x) + ⋯ + α_n f_n(x) = 0 on I, with the coefficients α_i not all zero. Much more information is contained here than a mere equation. In fact, if we differentiate the above identity up to n − 1 times, we get a homogeneous system of n equations and n unknowns with a nontrivial solution for every x ∈ I:
$$\begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix} \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
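For instance (a sketch assuming sympy; the functions below are arbitrary illustrations), a linearly dependent family has identically vanishing determinant (the Wronskian), while for the independent family 1, x, x^2 it equals 2:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(funcs):
    n = len(funcs)
    return sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], x, i)).det()

# 1, x, 2x + 3 are linearly dependent: the Wronskian vanishes identically
assert sp.simplify(wronskian([sp.Integer(1), x, 2 * x + 3])) == 0
# 1, x, x^2 are independent, and indeed their Wronskian equals 2
assert sp.simplify(wronskian([sp.Integer(1), x, x ** 2])) == 2
```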
2.8 Problems
2.8.1 Consider, for 1 ≤ i, j ≤ 3, variables x_i, y_j, and the matrix A = (a_{ij}) defined by a_{ij} = sin(x_i + y_j). Compute the determinant of A.
If n > 3, consider the n × n matrix A defined by a_{ij} = sin(x_i + y_j). What is the determinant of A?
2.8.2 [1, Ch. VI, Problem 2] Prove that (x − 1)^3 divides the polynomial
$$\begin{vmatrix} 1 & x & x^2 & x^3 \\ 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 4 & 9 & 16 \end{vmatrix}.$$
$$P(x) = \begin{vmatrix} a_1^2 + x & a_1 a_2 & a_1 a_3 & \ldots & a_1 a_n \\ a_2 a_1 & a_2^2 + x & a_2 a_3 & \ldots & a_2 a_n \\ \vdots & \vdots & \vdots & & \vdots \\ a_n a_1 & a_n a_2 & a_n a_3 & \ldots & a_n^2 + x \end{vmatrix}.$$
The result below is the very foundation of L’Hôpital’s rule, for its proof stems from
this little gem. A determinantal guise is most recommended, both for the proof and for
memory purposes.
You might be surprised, but the following is very doable and you already made its ac-
quaintance!
2.8.6 Given A ∈ Mn (Z) a square matrix with integer coefficients, give necessary and
sufficient conditions for A to have an inverse with integer coefficients.
2.8.7 Let t be an indeterminate. Consider the monomials $t^{r_1}, \cdots, t^{r_n}$, where the r_i ∈ N are pairwise distinct. Let M be the matrix whose first row is $(t^{r_1}, \cdots, t^{r_n})$, with the (i+1)-th row being the derivative of the i-th row. Prove that det M = C t^N, where C is a real constant and N is natural, and find C, N explicitly.
This was an exam question back in 2019 (VE2, methinks).
2.8.8 Let x0 , · · · , xn ∈ R be pairwise distinct numbers, and let y0 , · · · , yn ∈ R. Prove
that there is precisely one polynomial p(x) ∈ R[x] of degree deg p ≤ n such that p(xi ) = yi ,
and that it is characterised by the condition
$$\begin{vmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n & y_0 \\ 1 & x_1 & x_1^2 & \cdots & x_1^n & y_1 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n & y_n \\ 1 & x & x^2 & \cdots & x^n & p(x) \end{vmatrix} = 0.$$
2.8.9 Let A be a real skew-symmetric matrix of order n, i.e. A ∈ M_n(R) such that A^T = −A (also called antisymmetric). If n is odd, show that det A = 0.
The following may be used as a stepping stone in proving Sylvester’s criterion for a
quadratic form to be positive definite.
2.8.10 (VF 2017) Let a_1, …, a_n, b ∈ R be such that $b - \sum_{k=1}^{n} a_k^2 > 0$. Show that the following determinant is nonzero:
$$\begin{vmatrix} 1 & 0 & 0 & \ldots & 0 & a_1 \\ 0 & 1 & 0 & \ldots & 0 & a_2 \\ 0 & 0 & 1 & \ldots & 0 & a_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 1 & a_n \\ a_1 & a_2 & a_3 & \ldots & a_n & b \end{vmatrix}.$$
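A numerical sanity check of the claim (a sketch assuming numpy; it illustrates, but of course does not prove, the statement):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
a = rng.standard_normal(n)
b = float(a @ a) + 1.0                 # guarantees b - sum(a_k^2) > 0

M = np.eye(n + 1)
M[:n, n] = a                           # last column
M[n, :n] = a                           # last row
M[n, n] = b
det = np.linalg.det(M)
assert det > 0
assert np.isclose(det, b - a @ a)      # the determinant is b - sum a_k^2
```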
2.8.11 Let A be a square matrix of order n. Show that adj A ≠ 0 if and only if rk A ≥ n − 1.
Appendix: the sign of a permutation. We now prove the CLAIM that sgn is well defined. An inversion of σ is a pair i < j with σ(i) > σ(j); N(σ) denotes the number of inversions of σ. The basic count is that of a transposition σ = (a b), with a < b:
If the indices i < j are both distinct from a, b, then σ(i) = i < σ(j) = j, and no inversion arises.
If a < i < b (and j = b), the pairs (a, i) become (b, i), and there are b − a − 1 such inversions. On the other hand, the pairs (i, b) become (i, a), which counts for b − a − 1 further inversions.
Together with the pair (a, b) itself, which becomes (b, a), this gives N((a b)) = 2(b − a − 1) + 1, an odd number, so $(-1)^{N((a\, b))} = -1$.
Theorem 2.8.13 Let σ_1, σ_2 be permutations. Then: $(-1)^{N(\sigma_1 \sigma_2)} = (-1)^{N(\sigma_1)} (-1)^{N(\sigma_2)}$.
Therefore, sgn(σ) = (−1)^{N(σ)} is well defined, i.e. independent of the factorization of σ into transpositions.
Bibliography
[2] K. Hoffman, R. Kunze, Linear Algebra, 2nd Ed., Prentice Hall, 1971.
[6] J. J. Ramón-Marí, Systems of linear equations, from a 1st version typed by the remarkable Filipe Abelha.