CEE 471, Fall 2019: HW1 Solutions
1. (a) A repeated letter index indicates that we need to perform the summation over that index by assigning it the values 1, 2 and 3 (in this case). Hence,
$$\delta_{ii} = \sum_{i=1}^{3} \delta_{ii} = \delta_{11} + \delta_{22} + \delta_{33} = 1 + 1 + 1 = 3. \quad \clubsuit \; (1\ \text{point})$$
(b)
$$\delta_{rs}\,\delta_{sr} = \delta_{rr} = \delta_{ss} = 3 \quad \clubsuit$$
or, expanding the two summations,
$$\delta_{rs}\,\delta_{sr} = \sum_{r=1}^{3}\sum_{s=1}^{3}\delta_{rs}\,\delta_{sr} = \delta_{r1}\delta_{1r} + \delta_{r2}\delta_{2r} + \delta_{r3}\delta_{3r}$$
$$= \delta_{11}\delta_{11} + \delta_{21}\delta_{12} + \delta_{31}\delta_{13} + \delta_{12}\delta_{21} + \delta_{22}\delta_{22} + \delta_{32}\delta_{23} + \delta_{13}\delta_{31} + \delta_{23}\delta_{32} + \delta_{33}\delta_{33}$$
$$= 1 + 0 + 0 + 0 + 1 + 0 + 0 + 0 + 1 = 3. \quad \clubsuit \; (1\ \text{point})$$
(c)
(Recall: $u_i u_i = |u|^2$.)
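Part 1(c) can be sanity-checked numerically. The sketch below (assuming numpy; the vector values are arbitrary) evaluates $u_i u_i$ with the summation convention made explicit and compares it with $|u|^2$:

```python
import numpy as np

# Check of 1(c): the repeated index in u_i u_i sums the squared components.
# The vector chosen here is arbitrary, purely for illustration.
u = np.array([1.0, -2.0, 3.0])

lhs = np.einsum('i,i->', u, u)   # u_i u_i, summation over the repeated index
rhs = np.linalg.norm(u) ** 2     # |u|^2

print(np.isclose(lhs, rhs))      # expect True
```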
2. Let $u = u_i e_i$ and $v = v_j e_j$.
3. Recall that
$$\det A = \varepsilon_{ijk}\, A_{i1} A_{j2} A_{k3},$$
where $\varepsilon_{ijk}$ is defined as
$$\varepsilon_{ijk} = \begin{cases} \;\;\,1 & \text{if } (ijk) = (123), (231), (312), \\ -1 & \text{if } (ijk) = (132), (321), (213), \\ \;\;\,0 & \text{otherwise.} \end{cases}$$
Let us define
$$B = \frac{1}{6}\,\varepsilon_{ijk}\,\varepsilon_{pqr}\, A_{ip} A_{jq} A_{kr}.$$
Using now the property that $-\varepsilon_{ijk} = \varepsilon_{ikj} = \varepsilon_{jik} = \varepsilon_{kji}$, we have
$$B = \frac{1}{6}\big(\varepsilon_{ijk} A_{i1} A_{j2} A_{k3} + \varepsilon_{ijk} A_{i2} A_{j3} A_{k1} + \varepsilon_{ijk} A_{i3} A_{j1} A_{k2} + \varepsilon_{ikj} A_{i1} A_{k2} A_{j3} + \varepsilon_{jik} A_{j1} A_{i2} A_{k3} + \varepsilon_{kji} A_{k1} A_{j2} A_{i3}\big).$$
In this expression, note that $i$, $j$, $k$ are just dummy indices, which can be denoted by any letter of our choice. Denoting the index paired with 1 in $A$ by $m$, with 2 by $n$ and with 3 by $p$, each of the six terms becomes $\varepsilon_{mnp} A_{m1} A_{n2} A_{p3}$, and we obtain
$$B = \frac{1}{6}\left(6\,\varepsilon_{mnp}\, A_{m1} A_{n2} A_{p3}\right) = \varepsilon_{mnp}\, A_{m1} A_{n2} A_{p3} = \det A. \quad \clubsuit \; (10\ \text{points})$$
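The identity just proved lends itself to a quick numerical check (a sketch, not part of the graded solution; it assumes numpy and a randomly chosen $A$):

```python
import numpy as np

# Build the permutation symbol eps_ijk explicitly (0-based indices).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[i, k, j] = -1.0   # odd permutations

# Check det A = (1/6) eps_ijk eps_pqr A_ip A_jq A_kr for a random A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = np.einsum('ijk,pqr,ip,jq,kr->', eps, eps, A, A, A) / 6.0

print(np.isclose(B, np.linalg.det(A)))   # expect True
```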
• Long version: By the definition of $\varepsilon_{ijk}$ and the scalar triple product of three vectors,
$$\varepsilon_{ijk} = (e_i \wedge e_j) \cdot e_k = \begin{vmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{vmatrix}.$$
In this we have made use of the fact that the components of the vector $e_i$ are $[\delta_{1i}, \delta_{2i}, \delta_{3i}]$. Then
$$\varepsilon_{ijk}\,\varepsilon_{pqr} = \begin{vmatrix} \delta_{1i} & \delta_{1j} & \delta_{1k} \\ \delta_{2i} & \delta_{2j} & \delta_{2k} \\ \delta_{3i} & \delta_{3j} & \delta_{3k} \end{vmatrix}\, \begin{vmatrix} \delta_{1p} & \delta_{1q} & \delta_{1r} \\ \delta_{2p} & \delta_{2q} & \delta_{2r} \\ \delta_{3p} & \delta_{3q} & \delta_{3r} \end{vmatrix}. \quad (4)$$
Since the product of two determinants is the determinant of the product of the (transposed) matrices, and each entry of that matrix product contracts two Kronecker deltas into one (e.g. $\delta_{ai}\delta_{ap} = \delta_{ip}$, summed over $a$), we obtain
$$\varepsilon_{ijk}\,\varepsilon_{pqr} = \begin{vmatrix} \delta_{ip} & \delta_{jp} & \delta_{kp} \\ \delta_{iq} & \delta_{jq} & \delta_{kq} \\ \delta_{ir} & \delta_{jr} & \delta_{kr} \end{vmatrix}.$$
(Footnote 1: For any second-order tensor $A$, expand $\det A$ along any row to reduce it to this form.)
Then setting $r = k$ leads to
$$\varepsilon_{ijk}\,\varepsilon_{pqk} = \begin{vmatrix} \delta_{ip} & \delta_{jp} & \delta_{kp} \\ \delta_{iq} & \delta_{jq} & \delta_{kq} \\ \delta_{ik} & \delta_{jk} & \delta_{kk} \end{vmatrix} = \begin{vmatrix} \delta_{ip} & \delta_{jp} & \delta_{kp} \\ \delta_{iq} & \delta_{jq} & \delta_{kq} \\ \delta_{ik} & \delta_{jk} & 3 \end{vmatrix}$$
$$= \delta_{ik}\left(\delta_{jp}\delta_{kq} - \delta_{kp}\delta_{jq}\right) - \delta_{jk}\left(\delta_{ip}\delta_{kq} - \delta_{kp}\delta_{iq}\right) + 3\left(\delta_{ip}\delta_{jq} - \delta_{jp}\delta_{iq}\right)$$
$$= \delta_{jp}\delta_{iq} - \delta_{ip}\delta_{jq} - \delta_{ip}\delta_{jq} + \delta_{jp}\delta_{iq} + 3\,\delta_{ip}\delta_{jq} - 3\,\delta_{jp}\delta_{iq}$$
$$= \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}. \quad \clubsuit \; (10\ \text{points}) \quad (5)$$
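The contracted epsilon–delta identity in Equation (5) can also be verified numerically (a sketch assuming numpy):

```python
import numpy as np

# Permutation symbol, 0-based.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

d = np.eye(3)  # Kronecker delta

# eps_ijk eps_pqk  vs  delta_ip delta_jq - delta_iq delta_jp
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', d, d) - np.einsum('iq,jp->ijpq', d, d)

print(np.allclose(lhs, rhs))   # expect True
```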
5. (a) To show that $\varepsilon_{ijk} T_{jk}$ are the components of a first-order tensor, we need to go back to the definition of a Cartesian tensor. Let $\varepsilon_{ijk} T_{jk} = S_i$ and $\varepsilon'_{pmr} T'_{mr} = S'_p$. Then, considering a change of basis from $\{e_i\}$ to $\{e'_i\}$ defined by $Q \in \mathrm{Orth}^{+}$ such that $e'_i = Q_{ij} e_j$, we need to prove that
$$S'_i = Q_{ip}\, S_p.$$
By definition, $\varepsilon_{ijk}$ and $T_{jk}$ are the components of third- and second-order tensors. That is, they transform under the change of basis as
$$\varepsilon'_{ijk} = Q_{ip} Q_{jq} Q_{kr}\, \varepsilon_{pqr} \quad \text{and} \quad T'_{jk} = Q_{jm} Q_{kn}\, T_{mn}.$$
Note that all dummy indices in the two previous expressions are taken to be different; this prevents dummy indices from being repeated more than twice in the multiplication, which is the next step:
$$\varepsilon'_{ijk} T'_{jk} = Q_{ip} Q_{jq} Q_{kr} Q_{jm} Q_{kn}\, \varepsilon_{pqr}\, T_{mn}.$$
Since $Q$ is orthogonal, $Q_{jq} Q_{jm} = \delta_{qm}$ and $Q_{kr} Q_{kn} = \delta_{rn}$, so
$$S'_i = \varepsilon'_{ijk} T'_{jk} = Q_{ip}\, \varepsilon_{pqr}\, T_{qr} = Q_{ip}\, S_p. \quad \clubsuit$$
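As a numerical illustration of 5(a) (a sketch assuming numpy; the rotation angle and the components of T are arbitrary choices), one can check that $S_i = \varepsilon_{ijk} T_{jk}$ transforms as a first-order tensor under a proper-orthogonal change of basis:

```python
import numpy as np

# Permutation symbol, 0-based.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Proper-orthogonal Q: rotation about e3 by an arbitrary angle.
th = 0.7
Q = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

T = np.arange(9.0).reshape(3, 3)            # arbitrary second-order tensor
T_p = np.einsum('jm,kn,mn->jk', Q, Q, T)    # T'_jk = Q_jm Q_kn T_mn
S = np.einsum('ijk,jk->i', eps, T)          # S_i = eps_ijk T_jk
S_p = np.einsum('ijk,jk->i', eps, T_p)      # S'_i in the rotated basis

print(np.allclose(S_p, Q @ S))              # expect True: S'_i = Q_ip S_p
```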
($\Leftarrow$) Assume $\varepsilon_{ijk} T_{jk} = 0$.
First note that $\varepsilon_{ijk} T_{jk} = \varepsilon_{jki} T_{jk} = 0$. Then
$$\varepsilon_{pqi}\,\varepsilon_{jki}\, T_{jk} = \varepsilon_{pqi} \cdot 0 = 0$$
$$\Rightarrow \left(\delta_{pj}\delta_{qk} - \delta_{pk}\delta_{qj}\right) T_{jk} = 0 \quad (5)$$
$$\Rightarrow \delta_{pj}\delta_{qk}\, T_{jk} - \delta_{pk}\delta_{qj}\, T_{jk} = 0$$
$$\Rightarrow \delta_{qk}\, T_{pk} - \delta_{qj}\, T_{jp} = 0$$
$$\Rightarrow T_{pq} - T_{qp} = 0$$
$$\Rightarrow T_{pq} = T_{qp} \iff T_{ij} = T_{ji}. \quad (2.5\ \text{points})$$
Thus, $(T_{ij} = T_{ji}) \iff (\varepsilon_{ijk} T_{jk} = 0)$. $\clubsuit$
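A quick numerical sketch of 5(b) (assuming numpy): for any symmetric T, the contraction $\varepsilon_{ijk} T_{jk}$ vanishes.

```python
import numpy as np

# Permutation symbol, 0-based.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
T_sym = A + A.T                          # a symmetric tensor by construction

S = np.einsum('ijk,jk->i', eps, T_sym)   # eps_ijk T_jk
print(np.allclose(S, 0.0))               # expect True for symmetric T
```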
Taking the trace of Equation (8) and using the linearity of the trace operator — namely, tr (A + B) =
trA + trB and tr (αA) = α trA, where α ∈ R — we have
$$\mathrm{tr}\,T^3 - I_1(T)\,\mathrm{tr}\,T^2 + I_2(T)\,\mathrm{tr}\,T - I_3(T)\,\mathrm{tr}\,I = \mathrm{tr}\,0 = 0.$$
Recalling the definitions of the first three invariants gives
$$\mathrm{tr}\,T^3 - (\mathrm{tr}\,T)\,\mathrm{tr}\,T^2 + \frac{1}{2}\left[(\mathrm{tr}\,T)^2 - \mathrm{tr}\,T^2\right]\mathrm{tr}\,T - (\det T)\,(\mathrm{tr}\,I) = 0,$$
where $\mathrm{tr}\,I = \delta_{ii} = 3$.
Solving the equation for $\det T$ and simplifying the result gives
$$\det T = \frac{1}{6}\left[(\mathrm{tr}\,T)^3 - 3\,(\mathrm{tr}\,T)\,\mathrm{tr}\,T^2 + 2\,\mathrm{tr}\,T^3\right]. \quad \clubsuit \; (10\ \text{points})$$
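The trace expression for $\det T$ can be checked numerically (a sketch assuming numpy and a random T):

```python
import numpy as np

# Check det T = [ (tr T)^3 - 3 (tr T)(tr T^2) + 2 tr T^3 ] / 6.
rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))

t1 = np.trace(T)           # tr T
t2 = np.trace(T @ T)       # tr T^2
t3 = np.trace(T @ T @ T)   # tr T^3

print(np.isclose((t1**3 - 3.0 * t1 * t2 + 2.0 * t3) / 6.0,
                 np.linalg.det(T)))   # expect True
```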
8. (a) By the Cayley–Hamilton theorem, we have the following (note that $I_j(T)$ is written more compactly as $I_j$, where $j = 1, 2, 3$):
$$T^3 - I_1 T^2 + I_2 T - I_3 I = 0$$
$$\Rightarrow T\left(T^2 - I_1 T + I_2 I\right) = I_3 I$$
$$\Rightarrow T^{-1} T\left(T^2 - I_1 T + I_2 I\right) = I_3 T^{-1} \quad (\text{assuming } T^{-1} \text{ exists})$$
$$\Rightarrow T^2 - I_1 T + I_2 I = I_3 T^{-1}.$$
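The expression for $T^{-1}$ in 8(a) can be verified numerically (a sketch assuming numpy; the random T is almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

# Principal invariants from traces and the determinant.
I1 = np.trace(T)
I2 = 0.5 * (I1**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# Check I3 T^{-1} = T^2 - I1 T + I2 I.
lhs = T @ T - I1 * T + I2 * np.eye(3)
print(np.allclose(lhs, I3 * np.linalg.inv(T)))   # expect True
```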
(b) We make use of mathematical induction to prove that $T^n$ is expressible as a linear combination of $I$, $T$ and $T^2$ with coefficients that are invariants of $T$, for any positive or negative integer $n$; that is, there exist functions $f_n$, $g_n$ and $h_n$ of the invariants such that
$$T^n = f_n T^2 + g_n T + h_n I. \quad (10)$$
The technique of mathematical induction requires a base case. Hence we first restrict the proof to positive integers (base case $n = 1$, with $f_1 = 0$, $g_1 = 1$, $h_1 = 0$) and will later prove it for negative integers with a different base case. Assuming Equation (10) holds for $n$, multiplying by $T$ gives
$$T^{n+1} = f_n T^3 + g_n T^2 + h_n T = (f_n I_1 + g_n)\, T^2 + (-f_n I_2 + h_n)\, T + f_n I_3\, I \quad (\text{by the Cayley–Hamilton theorem}).$$
Therefore,
$$T^{n+1} = f_{n+1} T^2 + g_{n+1} T + h_{n+1} I,$$
with
$$f_{n+1} = I_1 f_n + g_n, \qquad g_{n+1} = -I_2 f_n + h_n, \qquad h_{n+1} = I_3 f_n. \quad (5\ \text{points})$$
Note that fn+1 , gn+1 , hn+1 are all functions of invariants of T. Hence Equation (10) holds for
n + 1. Thus the property is proved for any positive integer n.
The same induction proof — using Equation (9) as base case, assuming the property true
for T−n (n > 0) and proving that it holds for T−n−1 — can be employed to prove that the
statement is true for negative integers as well. ♣ . . . ( 5 points)
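The positive-integer recursion can be exercised numerically (a sketch assuming numpy; it starts from the base case $T^1$, i.e. $(f_1, g_1, h_1) = (0, 1, 0)$, and uses the update $f_{n+1} = I_1 f_n + g_n$, $g_{n+1} = -I_2 f_n + h_n$, $h_{n+1} = I_3 f_n$ read off from the $T^2$, $T$ and $I$ coefficients of the expansion):

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3))

I1 = np.trace(T)
I2 = 0.5 * (I1**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

f, g, h = 0.0, 1.0, 0.0          # base case: T^1 = 0*T^2 + 1*T + 0*I
for _ in range(5):               # advance the coefficients from T^1 to T^6
    f, g, h = I1 * f + g, -I2 * f + h, I3 * f

lhs = f * T @ T + g * T + h * np.eye(3)
print(np.allclose(lhs, np.linalg.matrix_power(T, 6)))   # expect True
```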
9. To find the eigenvalues of $T$, we need to find the roots of the characteristic equation $\lambda^3 - I_1(T)\lambda^2 + I_2(T)\lambda - I_3(T) = 0$. Straightforward calculations yield the following values for the invariants:
• $I_1(T) = \mathrm{tr}\,T = 6$
• $I_2(T) = \frac{1}{2}\left[(\mathrm{tr}\,T)^2 - \mathrm{tr}\,T^2\right] = \frac{1}{2}\left(6^2 - 48\right) = -6$
• $I_3(T) = \det T = -12$
Therefore, the characteristic equation becomes $\lambda^3 - 6\lambda^2 - 6\lambda + 12 = 0$, which has the roots $\lambda^{(1)} = -1.6977$, $\lambda^{(2)} = 1.0658$ and $\lambda^{(3)} = 6.6319$.
Now, in order to obtain the eigenvectors, we solve the following linear system:
$$\left(T - \lambda^{(i)} I\right) v^{(i)} = 0.$$
For $i = 1$,
$$\left(T + 1.6977\, I\right) v^{(1)} = 0$$
gives
$$\begin{pmatrix} 2.6977 & -1 & 0 \\ -1 & 4.6977 & 4 \\ 0 & 4 & 3.6977 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
As this is a homogeneous system of equations, we can choose $v_3 = 1$ and solve for the other two components, which gives
$$v^{(1)} = \begin{pmatrix} -0.3426 \\ -0.9224 \\ 1 \end{pmatrix}.$$
The same calculation is done for $i = 2$ and $i = 3$. Normalizing the eigenvectors gives
$$\tilde v^{(1)} = \begin{pmatrix} -0.2440 \\ -0.6583 \\ 0.7121 \end{pmatrix}, \quad \tilde v^{(2)} = \begin{pmatrix} 0.9606 \\ -0.0632 \\ 0.2707 \end{pmatrix}, \quad \tilde v^{(3)} = \begin{pmatrix} -0.1332 \\ 0.7501 \\ 0.6478 \end{pmatrix}.$$
With respect to the basis of its eigenvectors $\{\tilde v^{(1)}, \tilde v^{(2)}, \tilde v^{(3)}\}$, $T$ is given by
$$T' = \begin{pmatrix} -1.6977 & 0 & 0 \\ 0 & 1.0658 & 0 \\ 0 & 0 & 6.6319 \end{pmatrix}.$$
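For reference, the result of problem 9 can be reproduced with numpy's symmetric eigensolver (a sketch; the matrix below is T reconstructed from the shifted system $(T + 1.6977\,I)\,v = 0$ shown above):

```python
import numpy as np

# T recovered from the shifted matrix printed in the solution.
T = np.array([[ 1.0, -1.0, 0.0],
              [-1.0,  3.0, 4.0],
              [ 0.0,  4.0, 2.0]])

w, V = np.linalg.eigh(T)   # eigenvalues in ascending order, orthonormal V
print(np.round(w, 4))      # approximately [-1.6977  1.0658  6.6319]
```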
10. Let us consider two symmetric second-order tensors S and T. We want to prove that coaxiality of S and T (coaxiality meaning that their eigenvectors coincide) is equivalent to ST = TS. We define $\{u^{(i)}\}$ and $\{v^{(i)}\}$ as the sets of eigenvectors of S and T, respectively.
(=⇒) Assume that S and T are coaxial. Two equivalent solutions of different length are included here.
• Short version
Since the eigenvectors of S and T coincide, in the basis formed by these common eigenvectors both S and T are diagonal, by their spectral representations. Diagonal matrices commute, therefore TS = ST. ♣ (10 points)
• Long version
Since the sets $\{u^{(i)}\}$ and $\{v^{(i)}\}$ coincide in some order, by definition of the coaxiality of S and T, we can rearrange one of these sets so that $u^{(i)}$ coincides with $v^{(i)}$ for $i = 1, 2, 3$. Furthermore, we can normalize the eigenvectors so that $u^{(i)} = v^{(i)}$ are unit vectors for $i = 1, 2, 3$. We therefore have, by definition,
$$S u^{(i)} = \lambda^{(i)} u^{(i)} \quad \text{and} \quad T u^{(i)} = \mu^{(i)} u^{(i)} \quad (\text{no sum on } i).$$
Note that the eigenvalues of S and T (respectively $\lambda^{(i)}$ and $\mu^{(i)}$, associated with the eigenvector $u^{(i)}$) are in general different, but the eigenvectors are identical. Thus we have
$$TS\, u^{(i)} = \lambda^{(i)} T u^{(i)} = \lambda^{(i)} \mu^{(i)} u^{(i)} \quad \text{and} \quad ST\, u^{(i)} = \mu^{(i)} S u^{(i)} = \mu^{(i)} \lambda^{(i)} u^{(i)} \quad \text{for } i = 1, 2, 3,$$
so $(TS - ST)\, u^{(i)} = 0$ on a basis of eigenvectors, and hence TS = ST. ♣
(⇐=) Assume that ST = TS. Let $\lambda^{(n)}$ and $u^{(n)}$ be the eigenvalues and unit eigenvectors of S. Since S is a symmetric real second-order tensor, its unit eigenvectors $u^{(n)}$ are orthogonal to one another and therefore form an orthonormal basis. In this basis, S is diagonal and written as
$$S = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}.$$
Note that there is no a priori relation between the eigenvectors of S and T. Hence, in the basis of eigenvectors of S, the symmetric tensor T has components
$$T = \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{12} & T_{22} & T_{23} \\ T_{13} & T_{23} & T_{33} \end{pmatrix}.$$
Writing out ST = TS componentwise in this basis, the off-diagonal entries give the conditions
$$(\lambda_1 - \lambda_2)\, T_{12} = 0, \quad (12)$$
$$(\lambda_1 - \lambda_3)\, T_{13} = 0, \quad (13)$$
$$(\lambda_2 - \lambda_3)\, T_{23} = 0. \quad (14)$$
We now consider three different cases for the values of $\lambda_1$, $\lambda_2$, $\lambda_3$.
• $\lambda_1 \neq \lambda_2 \neq \lambda_3 \neq \lambda_1$
The conditions (12), (13), (14) yield $T_{12} = T_{13} = T_{23} = 0$, in which case T must have the same eigenvectors as S because it is diagonal in the basis of eigenvectors of S. Thus T and S are coaxial. ♣
• $\lambda_1 = \lambda_2 \neq \lambda_3$
The conditions (13), (14) give $T_{13} = T_{23} = 0$ and T is of the form
$$T = \begin{pmatrix} T_{11} & T_{12} & 0 \\ T_{12} & T_{22} & 0 \\ 0 & 0 & T_{33} \end{pmatrix}.$$
We can see right away that $u^{(3)}$ is an eigenvector of T with eigenvalue $T_{33}$. Since the remaining two unit eigenvectors of T — say $v^{(1)}$ and $v^{(2)}$ — have to be orthogonal to $u^{(3)}$, they lie in the same plane as $u^{(1)}$ and $u^{(2)}$ and hence can be expressed as linear combinations of $u^{(1)}$ and $u^{(2)}$. Moreover, for arbitrary $\alpha$ and $\beta$, we have
$$S\left(\alpha u^{(1)} + \beta u^{(2)}\right) = \alpha \lambda_1 u^{(1)} + \beta \lambda_2 u^{(2)} = \lambda_1 \left(\alpha u^{(1)} + \beta u^{(2)}\right), \quad \text{since } \lambda_1 = \lambda_2.$$
Therefore, any linear combination of $u^{(1)}$ and $u^{(2)}$ is an eigenvector of S, in particular $v^{(1)}$ and $v^{(2)}$. Therefore, $\{v^{(1)}, v^{(2)}, u^{(3)}\}$ are both eigenvectors of S and of T: S and T are coaxial. ♣
• λ1 = λ2 = λ3 = λ
Using the same derivation as in the second case, we can show that any linear combination
of u(1) , u(2) and u(3) is an eigenvector of S, in particular v(1) , v(2) and v(3) . Thus again,
{v(1) , v(2) , v(3) } are eigenvectors of both S and T: S and T are coaxial. ♣
Note that in this final case $S = \lambda I$, which is an isotropic tensor. An isotropic tensor has the same components in any basis, so S is diagonal in any orthonormal basis, in particular the basis of eigenvectors of T, where T itself is diagonal; hence T is coaxial with S.
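The forward direction of problem 10 is easy to illustrate numerically (a sketch assuming numpy: S and T are built coaxial by construction, sharing a random orthonormal eigenvector basis Q but with different eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # shared orthonormal basis

# Coaxial symmetric tensors: same eigenvectors, different eigenvalues.
S = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
T = Q @ np.diag([-1.0, 5.0, 0.5]) @ Q.T

print(np.allclose(S @ T, T @ S))   # expect True: coaxiality implies ST = TS
```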