CEE 471, Fall 2019: HW1 Solutions

Bhavesh Shrimali, Aditya Kumar


Department of Civil and Environmental Engineering, University of Illinois, Urbana–Champaign, IL 61801, USA

1. (a) A repeated letter index indicates that we need to perform the summation over that index by
assigning it the values 1, 2 and 3 (in this case). Hence,
δii = Σ_{i=1}^{3} δii = δ11 + δ22 + δ33 = 1 + 1 + 1 = 3 ♣ ... (1 point)

(b)
δrs δsr = δrr = δss = 3 ♣
or, expanding the 2 summations
δrs δsr = Σ_{r=1}^{3} Σ_{s=1}^{3} δrs δsr = δr1 δ1r + δr2 δ2r + δr3 δ3r
        = δ11 δ11 + δ21 δ12 + δ31 δ13 + δ12 δ21 + δ22 δ22 + δ32 δ23 + δ13 δ31 + δ23 δ32 + δ33 δ33
        = 1 + 0 + 0 + 0 + 1 + 0 + 0 + 0 + 1 = 3 ♣ ... (1 point)

(c)

ui δis δsj uj = us δsj uj = us us = u1 u1 + u2 u2 + u3 u3


= 2 · 2 + (−1) · (−1) + 2 · 2 = 9 ♣ ... (1 point)

(Recall: ui ui = |u|²)
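These index manipulations can be spot-checked numerically (a quick sanity check, not part of the graded solution) by representing the Kronecker delta as the 3×3 identity and evaluating the contractions with NumPy's einsum:

```python
import numpy as np

# Kronecker delta delta_ij represented as the 3x3 identity matrix
delta = np.eye(3)

# (a) delta_ii = 3 (the trace of the identity)
trace_delta = np.trace(delta)

# (b) delta_rs delta_sr = 3
double_contraction = np.einsum('rs,sr->', delta, delta)

# (c) u_i delta_is delta_sj u_j = u_i u_i = |u|^2, with u = (2, -1, 2)
#     as used in the solution above
u = np.array([2.0, -1.0, 2.0])
quadratic_form = np.einsum('i,is,sj,j->', u, delta, delta, u)

print(trace_delta, double_contraction, quadratic_form)  # 3.0 3.0 9.0
```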

2. Let u = ui ei and v = vj ej .

u · v = (ui ei ) · (vj ej ) = ui vj (ei · ej ) = ui vj δij = ui vi ♣ ... ( 2 points)

3. Recall that
detA = εijk Ai1 Aj2 Ak3
where εijk is defined as

εijk =   1   if (ijk) = (123), (231), (312)
        −1   if (ijk) = (132), (321), (213)
         0   otherwise

Let us define

B = (1/6) εijk εpqr Aip Ajq Akr

Email addresses: [email protected] (Bhavesh Shrimali), [email protected] (Aditya Kumar)


and our task is to show that B = εijk Ai1 Aj2 Ak3 ¹. Expanding the summations over p, q and r with the help of the definition of εpqr and keeping only the terms with non-zero values of εpqr (namely those with p ≠ q ≠ r ≠ p), we have
B = (1/6) (εijk Ai1 Aj2 Ak3 + εijk Ai2 Aj3 Ak1 + εijk Ai3 Aj1 Ak2
        − εijk Ai1 Aj3 Ak2 − εijk Ai2 Aj1 Ak3 − εijk Ai3 Aj2 Ak1) .

Using now the property that −εijk = εikj = εjik = εkji we have
B = (1/6) (εijk Ai1 Aj2 Ak3 + εijk Ai2 Aj3 Ak1 + εijk Ai3 Aj1 Ak2
        + εikj Ai1 Ak2 Aj3 + εjik Aj1 Ai2 Ak3 + εkji Ak1 Aj2 Ai3) .

In this expression, note that i, j, k are just dummy indices that can be renamed at will. Renaming, in each term, the index paired with column 1 as m, with column 2 as n, and with column 3 as p, all six terms become identical and we obtain

B = (1/6) (6 εmnp Am1 An2 Ap3) = εmnp Am1 An2 Ap3 = detA. ♣ ... (10 points)
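This identity is easy to verify numerically. The sketch below (an illustrative check, not part of the original solution; the matrix A is an arbitrary example) builds the ε tensor from its definition and compares both index expressions against numpy.linalg.det:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k], from its definition
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations of (123)
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0  # odd permutations of (123)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# detA = eps_ijk A_i1 A_j2 A_k3 (contracting against the columns of A)
d1 = np.einsum('ijk,i,j,k->', eps, A[:, 0], A[:, 1], A[:, 2])

# detA = (1/6) eps_ijk eps_pqr A_ip A_jq A_kr
d2 = np.einsum('ijk,pqr,ip,jq,kr->', eps, eps, A, A, A) / 6.0

assert np.isclose(d1, np.linalg.det(A))
assert np.isclose(d2, np.linalg.det(A))
```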

4. • Short version: By definition of εijk

εijk = (ei ∧ ej) · ek    (1)
εpqk = (ep ∧ eq) · ek    (2)

∴ εijk εpqk = ((ei ∧ ej) · ek) (ek · (ep ∧ eq))
            = (ei ∧ ej) · (ek ⊗ ek) (ep ∧ eq)        (summing over k, ek ⊗ ek = I)
            = (ei ∧ ej) · (ep ∧ eq)

Using the identity ((a ∧ b) · (c ∧ d)) = (a · c) (b · d) − (a · d) (b · c) in the above expression

εijk εpqk = (ei · ep) (ej · eq) − (ei · eq) (ej · ep)
          = δip δjq − δiq δjp ♣    (3)

• Long version: By definition of εijk and the triple scalar product of three vectors,

                        | δ1i  δ1j  δ1k |
εijk = (ei ∧ ej) · ek = | δ2i  δ2j  δ2k | .
                        | δ3i  δ3j  δ3k |

In this we have made use of the fact that the components of the vector ei are [δ1i, δ2i, δ3i]. Then

            | δ1i  δ1j  δ1k | | δ1p  δ1q  δ1r |
εijk εpqr = | δ2i  δ2j  δ2k | | δ2p  δ2q  δ2r | .    (4)
            | δ3i  δ3j  δ3k | | δ3p  δ3q  δ3r |

Writing the product of determinants as the determinant of a product, detM detN = det(NᵀM), gives

            | δip  δjp  δkp |
εijk εpqr = | δiq  δjq  δkq | .
            | δir  δjr  δkr |

¹ For any second-order tensor A, expand detA along any row to reduce it to this form.
Then setting r = k leads to

            | δip  δjp  δkp |   | δip  δjp  δkp |
εijk εpqk = | δiq  δjq  δkq | = | δiq  δjq  δkq |
            | δik  δjk  δkk |   | δik  δjk   3  |

= δik (δjp δkq − δkp δjq) − δjk (δip δkq − δkp δiq) + 3 (δip δjq − δjp δiq)
= δjp δiq − δip δjq − δip δjq + δjp δiq + 3δip δjq − 3δjp δiq
= δip δjq − δiq δjp . ♣ ... (10 points)    (5)
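Both versions establish the ε-δ identity, which can also be confirmed by brute force over all index values (a numerical check, not part of the graded solution):

```python
import numpy as np

# Levi-Civita symbol and Kronecker delta
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
delta = np.eye(3)

# left-hand side: eps_ijk eps_pqk, contracted over k
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)

# right-hand side: delta_ip delta_jq - delta_iq delta_jp
rhs = (np.einsum('ip,jq->ijpq', delta, delta)
       - np.einsum('iq,jp->ijpq', delta, delta))

assert np.allclose(lhs, rhs)  # holds for all 81 index combinations
```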

5. (a) To show that εijk Tjk are the components of a first-order tensor, we need to go to the definition of a Cartesian tensor. Let εijk Tjk = Si and ε′pmr T′mr = S′p. Then, considering a change of basis from {ei} to {e′i} defined by Q ∈ Orth⁺ such that e′i = Qij ej, we need to prove that S′i = Qip Sp.
By definition, εijk and Tjk are third- and second-order tensors. That is, they transform under the change of basis as

ε′ijk = Qip Qjq Qkr εpqr   and   T′jk = Qjm Qkn Tmn .

Note that all dummy indices in the two previous expressions are taken as different. This prevents dummy indices from being repeated more than twice in the multiplication, which is the next step:

ε′ijk T′jk = Qip Qjq Qkr Qjm Qkn εpqr Tmn .

Recall that QQT = I = QT Q, or in indicial notation Qab Qcb = δac . Then


ε′ijk T′jk = Qip Qjq Qkr Qjm Qkn εpqr Tmn
           = Qip (Qjq Qjm) (Qkr Qkn) εpqr Tmn
           = Qip (δmq) (δnr) εpqr Tmn
           = Qip εpmr Tmr . ... (5 points)

Thus, εijk Tjk are the components of a first-order tensor. ♣
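The transformation rule just derived can be checked with a randomly generated proper orthogonal Q (a sketch for illustration; the QR-based construction of Q is my own choice, not from the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)

# a proper orthogonal tensor Q (det Q = +1) from a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[0] *= -1.0  # negating one row keeps Q orthogonal and makes det Q = +1

# eps has the same components in every right-handed orthonormal basis
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

T = rng.standard_normal((3, 3))
S = np.einsum('ijk,jk->i', eps, T)  # S_i = eps_ijk T_jk

# transform T as a second-order tensor: T'_jk = Q_jm Q_kn T_mn
Tp = np.einsum('jm,kn,mn->jk', Q, Q, T)
Sp = np.einsum('ijk,jk->i', eps, Tp)

# first-order transformation rule: S'_i = Q_ip S_p
assert np.allclose(Sp, Q @ S)
```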

(b) To prove: (Tij = Tji) ⇐⇒ (εijk Tjk = 0).

(=⇒) Assume Tij = Tji. Then

εijk Tjk = εijk Tkj        (Tjk = Tkj)
         = −εikj Tkj
         = −εijk Tjk       (relabeling the dummy indices j and k) ... (2.5 points)

Hence 2 εijk Tjk = 0, i.e. εijk Tjk = 0. ♣

(⇐=) Assume εijk Tjk = 0.
First note that εijk Tjk = εjki Tjk = 0. Then

εpqi εjki Tjk = εpqi · 0 = 0
⇒ (δpj δqk − δpk δqj) Tjk = 0
⇒ δpj δqk Tjk − δpk δqj Tjk = 0
⇒ δqk Tpk − δqj Tjp = 0
⇒ Tpq − Tqp = 0
⇒ Tpq = Tqp , i.e. Tij = Tji . ... (2.5 points)

Thus, (Tij = Tji) ⇐⇒ (εijk Tjk = 0) . ♣
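Both directions of the equivalence are easy to probe numerically (an illustrative check, not part of the graded solution; the matrices are arbitrary examples):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# a symmetric tensor gives eps_ijk T_jk = 0 ...
T_sym = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 5.0],
                  [3.0, 5.0, 6.0]])
s = np.einsum('ijk,jk->i', eps, T_sym)
assert np.allclose(s, 0.0)

# ... while breaking the symmetry gives a non-zero vector
T_ns = T_sym.copy()
T_ns[0, 1] = -2.0
assert not np.allclose(np.einsum('ijk,jk->i', eps, T_ns), 0.0)
```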

6. Starting from the definition of eigenvalues and eigenvectors of T,


Tv = λv
with λ and v as any of its eigenvalues and the corresponding eigenvector, respectively, it is easy to obtain that for Tᵏ — where k is an integer — λᵏ is an eigenvalue with v as the corresponding eigenvector. Then the eigenvalues of T⁻¹, assuming det(T) ≠ 0, are given by λ⁻¹. Thus the characteristic equation of T⁻¹ is

λ⁻³ − I1(T⁻¹) λ⁻² + I2(T⁻¹) λ⁻¹ − I3(T⁻¹) = 0    (6)
Now the characteristic equation for T is

λ³ − I1(T) λ² + I2(T) λ − I3(T) = 0 .

Dividing by −λ³ I3(T) gives

λ⁻³ − (I2(T)/I3(T)) λ⁻² + (I1(T)/I3(T)) λ⁻¹ − 1/I3(T) = 0 .    (7)
Finally, comparing Equations (6) and (7) yields

I1(T⁻¹) = I2(T)/I3(T) ,   I2(T⁻¹) = I1(T)/I3(T) ,   I3(T⁻¹) = 1/I3(T) . ♣ ... (10 points)
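These three relations can be verified directly for a concrete invertible matrix (a quick numerical check, not part of the graded solution; the matrix and helper function are illustrative assumptions):

```python
import numpy as np

def invariants(T):
    """Principal invariants I1, I2, I3 of a 3x3 matrix T."""
    I1 = np.trace(T)
    I2 = 0.5 * (I1**2 - np.trace(T @ T))
    I3 = np.linalg.det(T)
    return I1, I2, I3

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])  # detT = 18, so T is invertible

I1, I2, I3 = invariants(T)
J1, J2, J3 = invariants(np.linalg.inv(T))

# I1(T^-1) = I2/I3,  I2(T^-1) = I1/I3,  I3(T^-1) = 1/I3
assert np.isclose(J1, I2 / I3)
assert np.isclose(J2, I1 / I3)
assert np.isclose(J3, 1.0 / I3)
```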

7. By the Cayley-Hamilton Theorem, we have the following relation:


T³ − I1(T) T² + I2(T) T − I3(T) I = 0 .    (8)

Taking the trace of Equation (8) and using the linearity of the trace operator — namely, tr (A + B) =
trA + trB and tr (αA) = α trA, where α ∈ R — we have
trT³ − I1(T) trT² + I2(T) trT − I3(T) trI = tr 0 = 0 .
Recalling the definitions of the first three invariants gives

trT³ − (trT) trT² + ½ [(trT)² − trT²] trT − (detT)(trI) = 0 ,

where trI = δii = 3.
Solving the equation for detT and simplifying the result gives

detT = (1/6) [(trT)³ − 3 (trT) trT² + 2 trT³] ♣ ... (10 points)
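This trace formula for the determinant can be sanity-checked on a random matrix (not part of the graded solution):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))

t1 = np.trace(T)
t2 = np.trace(T @ T)
t3 = np.trace(T @ T @ T)

# detT = (1/6) [ (trT)^3 - 3 (trT)(trT^2) + 2 trT^3 ]
det_from_traces = (t1**3 - 3.0 * t1 * t2 + 2.0 * t3) / 6.0
assert np.isclose(det_from_traces, np.linalg.det(T))
```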

8. (a) By the Cayley-Hamilton Theorem, we have the following (Note that Ij (T) is written more
compactly as Ij , where j = 1, 2, 3)

T³ − I1 T² + I2 T − I3 I = 0
T (T² − I1 T + I2 I) = I3 I
T⁻¹ T (T² − I1 T + I2 I) = I3 T⁻¹ I        (assuming T⁻¹ exists)
⇒ T² − I1 T + I2 I = I3 T⁻¹ .

Simplifying for T⁻¹ gives

T⁻¹ = (1/I3) (T² − I1 T + I2 I) ,

which more explicitly is

T⁻¹ = (1/I3(T)) (T² − I1(T) T + I2(T) I) . ♣ ... (5 points)    (9)
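Equation (9) gives a closed-form inverse that can be compared against numpy.linalg.inv (an illustrative check, not part of the graded solution; the matrix is an arbitrary invertible example):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

I1 = np.trace(T)
I2 = 0.5 * (I1**2 - np.trace(T @ T))
I3 = np.linalg.det(T)  # nonzero, so T^-1 exists

# T^-1 = (1/I3) (T^2 - I1 T + I2 I)
Tinv = (T @ T - I1 * T + I2 * np.eye(3)) / I3

assert np.allclose(Tinv, np.linalg.inv(T))
assert np.allclose(Tinv @ T, np.eye(3))
```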

(b) We make use of mathematical induction to prove that Tⁿ is expressible as a linear combination of I, T and T² with coefficients that are functions of the invariants of T, for any positive or negative integer n; that is, there exist functions fn, gn and hn of the invariants such that

Tⁿ = fn T² + gn T + hn I .    (10)

The technique of mathematical induction requires a base case. Hence we first restrict the proof
to positive integers and will later prove it for negative integers with a different base case.

The Cayley-Hamilton Theorem provides the base case (for n = 3) since T³ = I1 T² − I2 T + I3 I.


Note that the cases n = 0, 1, 2 are trivial. For the inductive step, we assume that we indeed
have functions fn , gn and hn such that Equation (10) holds. We then multiply (10) by T to
obtain

Tⁿ⁺¹ = fn T³ + gn T² + hn T
     = (fn I1 + gn) T² + (−fn I2 + hn) T + fn I3 I .        (with the Cayley-Hamilton theorem)

Therefore,
Tⁿ⁺¹ = fn+1 T² + gn+1 T + hn+1 I,

with

fn+1 = I1 fn + gn ,
gn+1 = −I2 fn + hn ,
hn+1 = I3 fn . ... (5 points)

Note that fn+1 , gn+1 , hn+1 are all functions of invariants of T. Hence Equation (10) holds for
n + 1. Thus the property is proved for any positive integer n.
The same induction proof — using Equation (9) as base case, assuming the property true
for T−n (n > 0) and proving that it holds for T−n−1 — can be employed to prove that the
statement is true for negative integers as well. ♣ . . . ( 5 points)
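The recursion in the inductive step can be iterated from the Cayley-Hamilton base case (f3, g3, h3) = (I1, −I2, I3) and checked against direct matrix powers (a numerical illustration, not part of the graded solution; the matrix is an arbitrary example):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
I1 = np.trace(T)
I2 = 0.5 * (I1**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# base case n = 3 from Cayley-Hamilton: T^3 = I1 T^2 - I2 T + I3 I
f, g, h = I1, -I2, I3
for n in range(3, 10):
    # T^n = f T^2 + g T + h I
    assert np.allclose(np.linalg.matrix_power(T, n),
                       f * (T @ T) + g * T + h * np.eye(3))
    # inductive step: (f, g, h) -> (I1 f + g, -I2 f + h, I3 f)
    f, g, h = I1 * f + g, -I2 * f + h, I3 * f
```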

9. To find the eigenvalues of T, we need to find the roots of characteristic equation: λ3 − I1 (T)λ2 +
I2 (T)λ − I3 (T) = 0. Straightforward calculations yield the following values for the invariants:

• I1(T) = trT = 6
• I2(T) = ½ [(trT)² − trT²] = ½ (6² − 48) = −6
• I3(T) = detT = −12.

Therefore, the characteristic equation becomes λ3 − 6λ2 − 6λ + 12 = 0, which has the following roots:

λ1 = −1.6977, λ2 = 1.0658, λ3 = 6.6319.

Now in order to obtain the eigenvectors, we solve the following linear system:

(T − λ(i) I) v(i) = 0 .
For i = 1,

(T + 1.6977 I) v(1) = 0

gives

[ 2.6977   −1        0      ] [ v1 ]   [ 0 ]
[ −1        4.6977   4      ] [ v2 ] = [ 0 ] .
[ 0         4        3.6977 ] [ v3 ]   [ 0 ]
As this is a homogeneous system of equations, we can choose v3 = 1 and solve for the other two components, which gives

v(1) = [−0.3426, −0.9224, 1]ᵀ .
The same calculation is done for i = 2 and i = 3. Normalizing the eigenvectors gives

ṽ(1) = [−0.2440, −0.6583, 0.7121]ᵀ ,   ṽ(2) = [0.9606, −0.0632, 0.2707]ᵀ ,   and   ṽ(3) = [−0.1332, 0.7501, 0.6478]ᵀ .

With respect to the basis of its eigenvectors {ṽ(1), ṽ(2), ṽ(3)}, T is given by

     [ −1.6977   0        0      ]
T′ = [  0        1.0658   0      ] .
     [  0        0        6.6319 ]

Again it is straightforward to calculate:

• I1(T′) = trT′ = λ1 + λ2 + λ3 = 6 = I1(T)
• I2(T′) = ½ [(trT′)² − tr(T′)²] = λ1 λ2 + λ2 λ3 + λ3 λ1 = −6 = I2(T)
• I3(T′) = detT′ = λ1 λ2 λ3 = −12 = I3(T) ... (10 points).
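The numbers above can be reproduced with NumPy. Note that T itself is not printed in this excerpt; the matrix below is reconstructed from the system (T + 1.6977 I) v = 0 shown in the solution and is therefore an inferred assumption:

```python
import numpy as np

# T inferred by subtracting 1.6977 from the diagonal of the printed system
T = np.array([[1.0, -1.0, 0.0],
              [-1.0, 3.0, 4.0],
              [0.0, 4.0, 2.0]])

# invariants
assert np.isclose(np.trace(T), 6.0)
assert np.isclose(0.5 * (np.trace(T)**2 - np.trace(T @ T)), -6.0)
assert np.isclose(np.linalg.det(T), -12.0)

# eigenvalues of the symmetric matrix T, in ascending order
lam = np.linalg.eigvalsh(T)
assert np.allclose(lam, [-1.6977, 1.0658, 6.6319], atol=1e-3)
```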

10. Let us consider two symmetric second-order tensors S and T. We want to prove that coaxiality of S and T — coaxiality means that the eigenvectors coincide — is equivalent to ST = TS. We define {u(i)} and {v(i)} as, respectively, the sets of eigenvectors of S and T.

(=⇒) Assume that S and T are coaxial. Two equivalent solutions of different length are included here.
• Short version
Since the eigenvectors of S and T coincide, then in the basis formed by these common eigen-
vectors, we know that S and T are diagonal from their spectral representation. Therefore
TS = ST. ♣ . . . ( 10 points)

• Long version
Since the sets {u(i)} and {v(i)} coincide in some order by definition of the coaxiality of S and T, we can rearrange one of these sets so that u(i) coincides with v(i) for i = 1, 2, 3. Furthermore, we can normalize the eigenvectors so that u(i) = v(i) are unit vectors for i = 1, 2, 3. We therefore have, by definition,
therefore by definition

Su(i) = λ(i) u(i) and Tu(i) = µ(i) u(i) for i = 1, 2, 3.

Note that the eigenvalues of S and T (respectively λ(i) and µ(i) associated with the eigenvector
u(i) ) are different but that the eigenvectors are identical. Thus we have

TSu(i) = λ(i) Tu(i) = λ(i) µ(i) u(i) and STu(i) = µ(i) Su(i) = µ(i) λ(i) u(i) for i = 1, 2, 3,

⇒ (TS − ST)u(i) = 0 for i = 1, 2, 3. (11)


Since {u(i)} is an orthonormal basis, we can write any non-zero vector w as w = w1 u(1) + w2 u(2) + w3 u(3), with (w1, w2, w3) ≠ (0, 0, 0). Spelling out (TS − ST)w gives

(TS − ST)w = (TS − ST)(w1 u(1) + w2 u(2) + w3 u(3) )


= w1 (TS − ST)u(1) + w2 (TS − ST)u(2) + w3 (TS − ST)u(3)
= 0 with Eq. (11)

Since w can be any vector, we have TS − ST = 0 ⇒ TS = ST. ♣

(⇐=) Assume that ST = TS. Now, λ(n) and u(n) are the eigenvalues and unit eigenvectors of S. Since S is a real symmetric second-order tensor, its unit eigenvectors u(n) are orthogonal to one another and therefore form an orthonormal basis. In this basis, S is diagonal and written as
    [ λ1   0    0  ]
S = [ 0    λ2   0  ]
    [ 0    0    λ3 ]

Note that there is no a priori relation between the eigenvectors of S and T. Hence in the basis of
eigenvectors of S, T can be defined as
    [ T11  T12  T13 ]
T = [ T12  T22  T23 ]
    [ T13  T23  T33 ]

Then applying ST = TS, we obtain the following three non-trivial conditions

λ1 T12 = λ2 T12 (12)


λ1 T13 = λ3 T13 (13)
λ2 T23 = λ3 T23 (14)

7
We now consider three different cases for the values of λ1, λ2, λ3.

• λ1 ≠ λ2 ≠ λ3 ≠ λ1

The conditions (12), (13), (14) yield T12 = T13 = T23 = 0, in which case T must have the same
eigenvectors as S because it is diagonal in the basis of eigenvectors of S. Thus T and S are
coaxial. ♣
• λ1 = λ2 ≠ λ3

The conditions (13), (14) give T13 = T23 = 0 and T is of the form
    [ T11  T12  0   ]
T = [ T12  T22  0   ] .
    [ 0    0    T33 ]

We can see right away that u(3) is an eigenvector of T with eigenvalue T33 . Since the remaining
two unit eigenvectors of T — say v(1) and v(2) — have to be orthogonal to u(3) , they both
lie in the same plane as u(1) and u(2) and hence can be expressed as linear combinations of
u(1) and u(2). Now, for arbitrary α and β, we have

S(αu(1) + βu(2)) = α Su(1) + β Su(2)
                 = α λ1 u(1) + β λ1 u(2)        (λ1 is a double eigenvalue)
                 = λ1 (αu(1) + βu(2)) . ... (10 points)

Therefore, any linear combination of u(1) and u(2) is an eigenvector of S, in particular v(1) and
v(2) . Therefore, {v(1) , v(2) , u(3) } are both eigenvectors of S and of T: S and T are coaxial. ♣
• λ1 = λ2 = λ3 = λ
Using the same derivation as in the second case, we can show that any linear combination
of u(1) , u(2) and u(3) is an eigenvector of S, in particular v(1) , v(2) and v(3) . Thus again,
{v(1) , v(2) , v(3) } are eigenvectors of both S and T: S and T are coaxial. ♣
Note that in this final case, S = λI and is therefore an isotropic tensor. An isotropic tensor has
the same components in any basis and hence S is diagonal in any orthonormal basis, especially
the one of the eigenvectors of T, where T itself is diagonal, in which case T is coaxial with S.
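The equivalence can be illustrated numerically by building two coaxial symmetric tensors from a shared eigenbasis (a sketch with arbitrary eigenvalues, not part of the graded solution):

```python
import numpy as np

rng = np.random.default_rng(2)

# a shared orthonormal eigenbasis (columns of Q), from a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# two symmetric tensors with the same eigenvectors but different eigenvalues
S = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
T = Q @ np.diag([-1.0, 5.0, 0.5]) @ Q.T

# coaxial => they commute
assert np.allclose(S @ T, T @ S)

# by contrast, two generic symmetric tensors do not commute
A = rng.standard_normal((3, 3))
A = A + A.T
B = rng.standard_normal((3, 3))
B = B + B.T
assert not np.allclose(A @ B, B @ A)
```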
