F Matrix Calculus
TABLE OF CONTENTS

F.1. Introduction
F.2. The Derivatives of Vector Functions
     F.2.1. Derivative of Vector with Respect to Vector
     F.2.2. Derivative of a Scalar with Respect to Vector
     F.2.3. Derivative of Vector with Respect to Scalar
     F.2.4. Jacobian of a Variable Transformation
F.3. The Chain Rule for Vector Functions
F.4. The Derivative of Scalar Functions of a Matrix
     F.4.1. Functions of a Matrix Determinant
F.5. The Matrix Differential
F.1. Introduction
In this Appendix we collect some useful formulas of matrix calculus that often appear in finite
element derivations.
F.2. The Derivatives of Vector Functions
Let $\mathbf{x}$ and $\mathbf{y}$ be vectors of orders $n$ and $m$ respectively:
$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad
\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix},
\tag{F.1}
$$
where each component $y_i$ may be a function of all the $x_j$, a fact represented by saying that $\mathbf{y}$ is a function of $\mathbf{x}$, or
$$
\mathbf{y} = \mathbf{y}(\mathbf{x}).
\tag{F.2}
$$
If n = 1, x reduces to a scalar, which we call x. If m = 1, y reduces to a scalar, which we call y.
Various applications are studied in the following subsections.
F.2.1. Derivative of Vector with Respect to Vector
The derivative of the vector $\mathbf{y}$ with respect to vector $\mathbf{x}$ is the $n \times m$ matrix
$$
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
\dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_2}{\partial x_1} & \cdots & \dfrac{\partial y_m}{\partial x_1} \\
\dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_2}{\partial x_2} & \cdots & \dfrac{\partial y_m}{\partial x_2} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial y_1}{\partial x_n} & \dfrac{\partial y_2}{\partial x_n} & \cdots & \dfrac{\partial y_m}{\partial x_n}
\end{bmatrix}
\tag{F.3}
$$
F.2.2. Derivative of a Scalar with Respect to Vector

If $y$ is a scalar ($m = 1$), the derivative reduces to the column vector
$$
\frac{\partial y}{\partial \mathbf{x}} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
\dfrac{\partial y}{\partial x_1} \\ \dfrac{\partial y}{\partial x_2} \\ \vdots \\ \dfrac{\partial y}{\partial x_n}
\end{bmatrix}
\tag{F.4}
$$
F.2.3. Derivative of Vector with Respect to Scalar

If $x$ is a scalar ($n = 1$), the derivative reduces to the row vector
$$
\frac{\partial \mathbf{y}}{\partial x} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
\dfrac{\partial y_1}{\partial x} & \dfrac{\partial y_2}{\partial x} & \cdots & \dfrac{\partial y_m}{\partial x}
\end{bmatrix}
\tag{F.5}
$$
Remark F.1. Many authors, notably in statistics and economics, define the derivatives as the transposes of those given above.¹ This has the advantage of better agreement of matrix products with composition schemes such as the chain rule. Evidently the notation is not yet stable.
Example F.1. Given
$$
\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix},
\tag{F.6}
$$
and
$$
y_1 = x_1^2 - x_2, \qquad y_2 = x_3^2 + 3x_2,
\tag{F.7}
$$
the partial derivative matrix $\partial \mathbf{y}/\partial \mathbf{x}$ is computed as follows:
$$
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} =
\begin{bmatrix}
\dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_2}{\partial x_1} \\
\dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_2}{\partial x_2} \\
\dfrac{\partial y_1}{\partial x_3} & \dfrac{\partial y_2}{\partial x_3}
\end{bmatrix}
=
\begin{bmatrix}
2x_1 & 0 \\
-1 & 3 \\
0 & 2x_3
\end{bmatrix}
\tag{F.8}
$$
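As a concrete check of convention (F.3), here is a minimal sketch that recomputes (F.8) symbolically; it assumes SymPy, which is not part of this Appendix. Note the transpose: SymPy's `jacobian` places the variables along columns, whereas (F.3) places them along rows.

```python
# Minimal SymPy sketch verifying Example F.1 under convention (F.3).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
y = sp.Matrix([x1**2 - x2, x3**2 + 3*x2])   # y1, y2 of (F.7)

# Matrix.jacobian gives entry (i, j) = d y_i / d x_j; definition (F.3)
# is its transpose (rows follow x, columns follow y).
dy_dx = y.jacobian(x).T
print(dy_dx)   # Matrix([[2*x1, 0], [-1, 3], [0, 2*x3]])
```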
F.2.4. Jacobian of a Variable Transformation

If $\mathbf{x}$ and $\mathbf{y}$ are vectors of the same order, the determinant of the square matrix $\partial \mathbf{x}/\partial \mathbf{y}$ is called the Jacobian of the transformation $\mathbf{x} = \mathbf{x}(\mathbf{y})$.

Example F.2. The transformation from spherical to Cartesian coordinates is defined by
$$
x = r \sin\theta \cos\psi, \qquad y = r \sin\theta \sin\psi, \qquad z = r \cos\theta,
\tag{F.11}
$$
where $r > 0$, $0 < \theta < \pi$ and $0 \le \psi < 2\pi$. To obtain the Jacobian of the transformation, let
$$
x \equiv x_1, \quad y \equiv x_2, \quad z \equiv x_3, \qquad
r \equiv y_1, \quad \theta \equiv y_2, \quad \psi \equiv y_3.
$$
Then
$$
\mathbf{J} = \frac{\partial \mathbf{x}}{\partial \mathbf{y}} =
\begin{bmatrix}
\sin y_2 \cos y_3 & \sin y_2 \sin y_3 & \cos y_2 \\
y_1 \cos y_2 \cos y_3 & y_1 \cos y_2 \sin y_3 & -y_1 \sin y_2 \\
-y_1 \sin y_2 \sin y_3 & y_1 \sin y_2 \cos y_3 & 0
\end{bmatrix}
\tag{F.12}
$$
whose determinant is
$$
J = \left|\frac{\partial \mathbf{x}}{\partial \mathbf{y}}\right| = y_1^2 \sin y_2 = r^2 \sin\theta.
\tag{F.13}
$$
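A similar sketch (again assuming SymPy) confirms (F.13):

```python
# SymPy sketch checking (F.12)-(F.13): the spherical Jacobian determinant.
import sympy as sp

r, theta, psi = sp.symbols('r theta psi', positive=True)
xyz = sp.Matrix([r*sp.sin(theta)*sp.cos(psi),
                 r*sp.sin(theta)*sp.sin(psi),
                 r*sp.cos(theta)])

J = xyz.jacobian(sp.Matrix([r, theta, psi])).T   # convention (F.3)
print(sp.simplify(J.det()))                      # r**2*sin(theta)
```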
The foregoing definitions can be used to obtain derivatives of many frequently used expressions, including quadratic and bilinear forms.
¹ One author puts it this way: "When one does matrix calculus, one quickly finds that there are two kinds of people in this world: those who think the gradient is a row vector, and those who think it is a column vector."
Example F.3. Consider the quadratic form
$$
y = \mathbf{x}^T \mathbf{A} \mathbf{x},
\tag{F.14}
$$
where $\mathbf{A}$ is a square matrix of order $n$. Using the definition (F.3) one obtains
$$
\frac{\partial y}{\partial \mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{A}^T \mathbf{x},
\tag{F.15}
$$
and if $\mathbf{A}$ is symmetric,
$$
\frac{\partial y}{\partial \mathbf{x}} = 2\mathbf{A}\mathbf{x}.
\tag{F.16}
$$
We can of course continue the differentiation process:
$$
\frac{\partial^2 y}{\partial \mathbf{x}^2} =
\frac{\partial}{\partial \mathbf{x}}\left(\frac{\partial y}{\partial \mathbf{x}}\right)
= \mathbf{A} + \mathbf{A}^T,
\tag{F.17}
$$
and if $\mathbf{A}$ is symmetric,
$$
\frac{\partial^2 y}{\partial \mathbf{x}^2} = 2\mathbf{A}.
\tag{F.18}
$$
Several useful vector derivative formulas are collected in the following table:
$$
\begin{array}{c|c}
y & \dfrac{\partial y}{\partial \mathbf{x}} \\
\hline
\mathbf{A}\mathbf{x} & \mathbf{A}^T \\
\mathbf{x}^T\mathbf{A} & \mathbf{A} \\
\mathbf{x}^T\mathbf{x} & 2\mathbf{x} \\
\mathbf{x}^T\mathbf{A}\mathbf{x} & \mathbf{A}\mathbf{x} + \mathbf{A}^T\mathbf{x}
\end{array}
$$
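These formulas are easy to spot-check numerically. The sketch below, which assumes NumPy and uses an arbitrary unsymmetric $\mathbf{A}$ (the random seed is illustrative), compares (F.15) against central finite differences:

```python
# NumPy finite-difference spot check of (F.15): d(x^T A x)/dx = A x + A^T x.
import numpy as np

rng = np.random.default_rng(0)       # seed chosen arbitrarily
n = 4
A = rng.standard_normal((n, n))      # deliberately unsymmetric
x = rng.standard_normal(n)

quad = lambda v: v @ A @ v           # y = x^T A x
exact = A @ x + A.T @ x

h = 1e-6
fd = np.array([(quad(x + h*e) - quad(x - h*e)) / (2*h) for e in np.eye(n)])
print(np.allclose(exact, fd, atol=1e-5))   # True
```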
F.3. The Chain Rule for Vector Functions

Let
$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad
\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_r \end{bmatrix}
\quad\text{and}\quad
\mathbf{z} = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{bmatrix},
\tag{F.19}
$$
where z is a function of y, which is in turn a function of x. Using the definition (F.3), we can write
$$
\left(\frac{\partial \mathbf{z}}{\partial \mathbf{x}}\right)^T =
\begin{bmatrix}
\dfrac{\partial z_1}{\partial x_1} & \dfrac{\partial z_1}{\partial x_2} & \cdots & \dfrac{\partial z_1}{\partial x_n} \\
\dfrac{\partial z_2}{\partial x_1} & \dfrac{\partial z_2}{\partial x_2} & \cdots & \dfrac{\partial z_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial z_m}{\partial x_1} & \dfrac{\partial z_m}{\partial x_2} & \cdots & \dfrac{\partial z_m}{\partial x_n}
\end{bmatrix}
\tag{F.20}
$$
Each entry of this matrix may be expanded as
$$
\frac{\partial z_i}{\partial x_j} = \sum_{q=1}^{r} \frac{\partial z_i}{\partial y_q}\,\frac{\partial y_q}{\partial x_j},
\qquad i = 1, 2, \ldots, m; \quad j = 1, 2, \ldots, n.
\tag{F.21}
$$
Then
$$
\left(\frac{\partial \mathbf{z}}{\partial \mathbf{x}}\right)^T =
\begin{bmatrix}
\sum_q \dfrac{\partial z_1}{\partial y_q}\dfrac{\partial y_q}{\partial x_1} &
\sum_q \dfrac{\partial z_1}{\partial y_q}\dfrac{\partial y_q}{\partial x_2} & \cdots &
\sum_q \dfrac{\partial z_1}{\partial y_q}\dfrac{\partial y_q}{\partial x_n} \\
\sum_q \dfrac{\partial z_2}{\partial y_q}\dfrac{\partial y_q}{\partial x_1} &
\sum_q \dfrac{\partial z_2}{\partial y_q}\dfrac{\partial y_q}{\partial x_2} & \cdots &
\sum_q \dfrac{\partial z_2}{\partial y_q}\dfrac{\partial y_q}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_q \dfrac{\partial z_m}{\partial y_q}\dfrac{\partial y_q}{\partial x_1} &
\sum_q \dfrac{\partial z_m}{\partial y_q}\dfrac{\partial y_q}{\partial x_2} & \cdots &
\sum_q \dfrac{\partial z_m}{\partial y_q}\dfrac{\partial y_q}{\partial x_n}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{\partial z_1}{\partial y_1} & \dfrac{\partial z_1}{\partial y_2} & \cdots & \dfrac{\partial z_1}{\partial y_r} \\
\dfrac{\partial z_2}{\partial y_1} & \dfrac{\partial z_2}{\partial y_2} & \cdots & \dfrac{\partial z_2}{\partial y_r} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial z_m}{\partial y_1} & \dfrac{\partial z_m}{\partial y_2} & \cdots & \dfrac{\partial z_m}{\partial y_r}
\end{bmatrix}
\begin{bmatrix}
\dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\
\dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \cdots & \dfrac{\partial y_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial y_r}{\partial x_1} & \dfrac{\partial y_r}{\partial x_2} & \cdots & \dfrac{\partial y_r}{\partial x_n}
\end{bmatrix}
\tag{F.22}
$$
Hence
$$
\left(\frac{\partial \mathbf{z}}{\partial \mathbf{x}}\right)^T =
\left(\frac{\partial \mathbf{z}}{\partial \mathbf{y}}\right)^T
\left(\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right)^T,
\qquad\text{that is,}\qquad
\frac{\partial \mathbf{z}}{\partial \mathbf{x}} =
\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\,
\frac{\partial \mathbf{z}}{\partial \mathbf{y}},
\tag{F.23}
$$
which is the chain rule for vectors. If all vectors reduce to scalars,
$$
\frac{\partial z}{\partial x} =
\frac{\partial y}{\partial x}\,\frac{\partial z}{\partial y} =
\frac{\partial z}{\partial y}\,\frac{\partial y}{\partial x},
\tag{F.24}
$$
which is the conventional chain rule of calculus. Note, however, that when we are dealing with
vectors, the chain of matrices builds toward the left. For example, if w is a function of z, which
is a function of y, which is a function of x,
$$
\frac{\partial \mathbf{w}}{\partial \mathbf{x}} =
\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\,
\frac{\partial \mathbf{z}}{\partial \mathbf{y}}\,
\frac{\partial \mathbf{w}}{\partial \mathbf{z}}.
\tag{F.25}
$$
On the other hand, in the ordinary chain rule one can build the product in either order because scalar multiplication is commutative.
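The left-building order in (F.23) and (F.25) can be illustrated with a small SymPy sketch; the composition $\mathbf{z}(\mathbf{y}(\mathbf{x}))$ below is an invented example, not from the text:

```python
# Check the vector chain rule (F.23): dz/dx = (dy/dx)(dz/dy) under (F.3).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
y = sp.Matrix([x1 + x2, x1*x2, x1**2])        # y(x), r = 3
z = sp.Matrix([y[0]*y[1], y[2] - y[0]])       # z(y(x)), m = 2

dz_dx_direct = z.jacobian(x).T                # substitute, then differentiate

y1, y2, y3 = sp.symbols('y1 y2 y3')
z_of_y = sp.Matrix([y1*y2, y3 - y1])
dz_dy = z_of_y.jacobian(sp.Matrix([y1, y2, y3])).T
dy_dx = y.jacobian(x).T
chain = (dy_dx * dz_dy).subs({y1: y[0], y2: y[1], y3: y[2]})

print(sp.simplify(dz_dx_direct - chain))      # zero matrix
```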
F.4. The Derivative of Scalar Functions of a Matrix
Let $\mathbf{X} = (x_{ij})$ be a matrix of order $(m \times n)$ and let
$$
y = f(\mathbf{X})
\tag{F.26}
$$
be a scalar function of $\mathbf{X}$. The derivative of $y$ with respect to $\mathbf{X}$, denoted by $\partial y/\partial \mathbf{X}$, is the $(m \times n)$ matrix of partial derivatives
$$
\mathbf{G} = \frac{\partial y}{\partial \mathbf{X}} = \left[\frac{\partial y}{\partial x_{ij}}\right],
\tag{F.27}
$$
which may be written as
$$
\frac{\partial y}{\partial \mathbf{X}} = \sum_{i,j} \mathbf{E}_{ij}\,\frac{\partial y}{\partial x_{ij}},
\tag{F.28}
$$
where $\mathbf{E}_{ij}$ denotes the elementary matrix* of order $(m \times n)$. This matrix $\mathbf{G}$ is also known as a gradient matrix.

* The elementary matrix $\mathbf{E}_{ij}$ of order $m \times n$ has all zero entries except for the $(i, j)$ entry, which is one.
Example F.4. Find the gradient matrix if $y$ is the trace of a square matrix $\mathbf{X}$ of order $n$, that is,
$$
y = \operatorname{tr}(\mathbf{X}) = \sum_{i=1}^{n} x_{ii}.
\tag{F.29}
$$
Obviously all off-diagonal partials vanish whereas the diagonal partials equal one, thus
$$
\mathbf{G} = \frac{\partial y}{\partial \mathbf{X}} = \mathbf{I},
\tag{F.30}
$$
where $\mathbf{I}$ is the identity matrix of order $n$.
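As a sketch of how gradient matrices may be verified numerically (NumPy assumed), the following recovers (F.30) entry by entry, using the elementary matrices $\mathbf{E}_{ij}$ of (F.28) as perturbation directions:

```python
# NumPy sketch: entrywise finite differences recover (F.30), d tr(X)/dX = I.
import numpy as np

n = 3
X = np.random.default_rng(1).standard_normal((n, n))

h = 1e-6
G = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0                # elementary matrix E_ij of (F.28)
        G[i, j] = (np.trace(X + h*E) - np.trace(X - h*E)) / (2*h)
print(np.allclose(G, np.eye(n)))     # True
```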
F.4.1. Functions of a Matrix Determinant

An important family of derivatives with respect to a matrix involves functions of the determinant of a matrix. Suppose that $\mathbf{Y} = [y_{ij}]$ is a square matrix whose entries are functions of a matrix $\mathbf{X} = [x_{rs}]$, and consider the derivative of $|\mathbf{Y}|$ with respect to $x_{rs}$. By the chain rule,
$$
\frac{\partial |\mathbf{Y}|}{\partial x_{rs}} =
\sum_{i}\sum_{j} \frac{\partial |\mathbf{Y}|}{\partial y_{ij}}\,\frac{\partial y_{ij}}{\partial x_{rs}}.
\tag{F.32}
$$
Expanding the determinant by cofactors along the $i$-th row gives
$$
|\mathbf{Y}| = \sum_{j} y_{ij} Y_{ij},
\tag{F.33}
$$
where $Y_{ij}$ is the cofactor of the element $y_{ij}$ in $|\mathbf{Y}|$. Since the cofactors $Y_{i1}, Y_{i2}, \ldots$ are independent of the element $y_{ij}$, we have
$$
\frac{\partial |\mathbf{Y}|}{\partial y_{ij}} = Y_{ij}.
\tag{F.34}
$$
It follows that
$$
\frac{\partial |\mathbf{Y}|}{\partial x_{rs}} =
\sum_{i}\sum_{j} Y_{ij}\,\frac{\partial y_{ij}}{\partial x_{rs}}.
\tag{F.35}
$$
Writing
$$
\mathbf{A} = [a_{ij}], \quad a_{ij} = Y_{ij}, \qquad
\mathbf{B} = [b_{ij}], \quad b_{ij} = \frac{\partial y_{ij}}{\partial x_{rs}},
\tag{F.36}
$$
the double sum (F.35) can be expressed compactly as
$$
\frac{\partial |\mathbf{Y}|}{\partial x_{rs}} = \operatorname{tr}(\mathbf{A}\mathbf{B}^T) = \operatorname{tr}(\mathbf{B}^T\mathbf{A}).
\tag{F.37}
$$
Example F.5. If $\mathbf{X}$ is a nonsingular square matrix and $\mathbf{Z} = |\mathbf{X}|\mathbf{X}^{-1}$ denotes its adjugate (the transpose of its cofactor matrix), then
$$
\mathbf{G} = \frac{\partial |\mathbf{X}|}{\partial \mathbf{X}} = \mathbf{Z}^T.
\tag{F.38}
$$
If $\mathbf{X}$ is also symmetric,
$$
\mathbf{G} = \frac{\partial |\mathbf{X}|}{\partial \mathbf{X}} = 2\mathbf{Z}^T - \operatorname{diag}(\mathbf{Z}^T).
\tag{F.39}
$$
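Equation (F.38) admits the same kind of check; the sketch below (NumPy assumed, seed illustrative) compares the finite-difference gradient of $|\mathbf{X}|$ with $\mathbf{Z}^T$:

```python
# NumPy check of (F.38) for a general (unsymmetric) nonsingular X.
import numpy as np

n = 3
X = np.random.default_rng(2).standard_normal((n, n))
Z = np.linalg.det(X) * np.linalg.inv(X)   # Z = |X| X^{-1}

h = 1e-6
G = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        G[i, j] = (np.linalg.det(X + h*E) - np.linalg.det(X - h*E)) / (2*h)
print(np.allclose(G, Z.T, atol=1e-5))     # True: d|X|/dX = Z^T
```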
F.5. The Matrix Differential

The differential of an $m \times n$ matrix $\mathbf{X} = [x_{ij}]$ is defined as the matrix of the differentials of its entries:
$$
d\mathbf{X} \;\overset{\text{def}}{=}\;
\begin{bmatrix}
dx_{11} & dx_{12} & \cdots & dx_{1n} \\
dx_{21} & dx_{22} & \cdots & dx_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
dx_{m1} & dx_{m2} & \cdots & dx_{mn}
\end{bmatrix}.
\tag{F.41}
$$
The matrix differential obeys the usual addition rule: if $\mathbf{X}$ and $\mathbf{Y}$ are matrices of the same order,
$$
d(\mathbf{X} + \mathbf{Y}) = d\mathbf{X} + d\mathbf{Y}.
\tag{F.42}
$$
If $\mathbf{X}$ and $\mathbf{Y}$ are product-conforming matrices, it can be verified that the differential of their product is
$$
d(\mathbf{X}\mathbf{Y}) = (d\mathbf{X})\,\mathbf{Y} + \mathbf{X}\,(d\mathbf{Y}),
\tag{F.43}
$$
which is an extension of the well-known rule $d(xy) = y\,dx + x\,dy$ for scalar functions.
Example F.6. If $\mathbf{X} = [x_{ij}]$ is a square nonsingular matrix of order $n$, and $\mathbf{Z} = |\mathbf{X}|\mathbf{X}^{-1}$, find the differential of its determinant. Using (F.34) with $\mathbf{Y} = \mathbf{X}$,
$$
d|\mathbf{X}| = \sum_{i,j} \frac{\partial |\mathbf{X}|}{\partial x_{ij}}\, dx_{ij}
= \sum_{i,j} X_{ij}\, dx_{ij}
= \operatorname{tr}(\mathbf{Z}\, d\mathbf{X})
= |\mathbf{X}|\operatorname{tr}(\mathbf{X}^{-1}\, d\mathbf{X}),
\tag{F.44}
$$
where $X_{ij}$ denotes the cofactor of $x_{ij}$ in $|\mathbf{X}|$.
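A first-order numerical check of (F.44), assuming NumPy: for a small perturbation $d\mathbf{X}$, the change in $|\mathbf{X}|$ should match $|\mathbf{X}|\operatorname{tr}(\mathbf{X}^{-1} d\mathbf{X})$ up to second-order terms:

```python
# NumPy first-order check of (F.44): d|X| = |X| tr(X^{-1} dX).
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))
dX = 1e-6 * rng.standard_normal((3, 3))   # small perturbation

lhs = np.linalg.det(X + dX) - np.linalg.det(X)
rhs = np.linalg.det(X) * np.trace(np.linalg.inv(X) @ dX)
print(abs(lhs - rhs) / abs(rhs))          # tiny: agreement to first order
```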
Example F.7. With the same assumptions as above, find $d(\mathbf{X}^{-1})$. The quickest derivation follows by differentiating both sides of the identity $\mathbf{X}^{-1}\mathbf{X} = \mathbf{I}$, which gives
$$
d(\mathbf{X}^{-1})\,\mathbf{X} + \mathbf{X}^{-1}\, d\mathbf{X} = \mathbf{0},
\tag{F.45}
$$
from which
$$
d(\mathbf{X}^{-1}) = -\mathbf{X}^{-1}\, d\mathbf{X}\; \mathbf{X}^{-1}.
\tag{F.46}
$$
If $\mathbf{X}$ reduces to a scalar $x$, this becomes
$$
d\!\left(\frac{1}{x}\right) = -\frac{dx}{x^2}.
\tag{F.47}
$$
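Finally, (F.46) admits an analogous first-order check (NumPy assumed):

```python
# NumPy first-order check of (F.46): d(X^{-1}) = -X^{-1} (dX) X^{-1}.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3))
dX = 1e-6 * rng.standard_normal((3, 3))

lhs = np.linalg.inv(X + dX) - np.linalg.inv(X)
rhs = -np.linalg.inv(X) @ dX @ np.linalg.inv(X)
print(np.abs(lhs - rhs).max())   # ~1e-12: residual is second order in dX
```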