PHS 307 (Theoretical)
Adeloye, A. B.
PHS 307: Theoretical Physics II
This is the second in the series of three mathematical courses designed specially for physics
students. You have studied PHS 209: Theoretical Physics I, and beyond this course, there is PHS
308: Theoretical Physics III. As you already know, a physics student must be well grounded in
Mathematics, as it is the language of Physics.
PHS 307 will give you the opportunity of understanding important topics like numerical
analysis, specifically, finding the equation that best fits a given set of data, perhaps the readings
obtained in a laboratory experiment. Vector spaces are a generalisation of the concept of
Euclidean vectors that you know so well. The knowledge of this important topic will enable you
to expand any given vector in terms of a given set of vectors, known as basis vectors. In turn, this
expansion, along with the knowledge of orthogonal functions, gives you an idea of what
proportion of a system is in any of a possible set of states, one of the most important concepts in
Quantum Mechanics. Fourier series enables you to expand a given function as a sum of simple
sinusoidal functions. Conversely, you can compose a complex function from a set of simple
functions. The Fourier transform is a powerful tool that enables you to transform a function of time
into a function of frequency, or a function of position into a function of the wave number. You
will also get to know the connection between Fourier series and the Fourier transform.
Complex analysis will deepen the knowledge you acquired in PHS 209, and provide you with the
knowledge of integrating a function of a complex variable. You will also get to integrate, with
the help of complex integration, functions of a real variable that would have proven too difficult
to do ordinarily. Fourier transform, as well as Laplace transform will prove useful in solving
problems from diverse areas of Physics.
Eigenvalue problems crop up virtually in all areas of Physics. You will get to know how to find
the eigenvalues and the eigenvectors of a given operator, and the physical meaning of each of
these terms. You will also learn how to diagonalise a matrix, and with the help of this, find the
eigenvalues of a given matrix.
From the foregoing, you can see that this is one of the most important courses in your
programme of study. As such, whatever time you can put into understanding the topics will be
time well spent. To enhance your understanding of the course, each study session has in-text
questions as well as self-assessment questions. Try your hand at them, and solve some more
problems from the recommended textbooks at the end of each study session.
Study Session 1 A Review of Some Basic Mathematics
Introduction
It has been observed that some students take basic mathematics for granted. It is always a good
idea to review some mathematical topics that will be relevant to our study. This study unit takes
you through such topics as basic trigonometry, exponential functions, differentiation and
integration.
1.1 Trigonometry
1.1.1 The Right-Angled Triangle
The nomenclature for a right-angled triangle is shown in Fig. 1.1, where O, A and H are the
side opposite the angle of interest θ, the adjacent side, and the hypotenuse of the triangle.
[Fig. 1.1: A right-angled triangle with hypotenuse H, opposite side O and adjacent side A, the angle θ lying between A and H]
By Pythagoras' Theorem,
H² = O² + A²    1.1
In addition,
sin θ = O/H    1.2
cos θ = A/H    1.3
tan θ = O/A    1.4
Though we have proved equations 1.5 and 1.6 for an angle in a right-angled triangle, they are
indeed true for any angle θ.
We recall that
sin ( A + B ) = sin A cos B + sin B cos A 1.8
sin( A − B ) = sin A cos B − sin B cos A 1.9
cos( A + B ) = cos A cos B − sin A sin B 1.10
cos( A − B ) = cos A cos B + sin A sin B 1.11
Next, we draw the graphs of the trigonometric functions (Fig. 1.2):
[Fig. 1.2: Graphs of sin θ, cos θ and tan θ for θ from 0° to 360°]
If the y-axis of the sine function is shifted forward by 90°, we shall obtain the graph for the
cosine function. We can therefore say that
sin (90° + θ) = cos θ    1.12
cos (90° + θ) = − sin θ    1.13
From equation 1.12, we note that cos θ is ahead of sin θ; in other words, sin θ lags cos θ
by 90°.
Note also that the sine and cosine functions repeat after every 360°, while the tangent function
repeats itself after every 180°. We therefore say that the cosine and sine functions have a period of
360°, while the tangent function has a period of 180°.
tan (A + B) = sin (A + B)/cos (A + B) = (sin A cos B + cos A sin B)/(cos A cos B − sin A sin B)    1.17
Dividing the numerator and the denominator by cos A cos B, it readily follows that
tan (A + B) = (tan A + tan B)/(1 − tan A tan B)    1.18
As an example,
tan (90° + θ) = (tan 90° + tan θ)/(1 − tan 90° tan θ)
Since tan 90° is unbounded, dividing the numerator and the denominator by tan 90° leaves
1/(−tan θ) in the limit. Therefore,
tan (90° + θ) = − cot θ    1.19
Show that (e^x + e^{−x})/2 = cosh x, given that the Maclaurin series for cosh x is
1 + x²/2! + x⁴/4! + ⋯
Adding equations 1.23 and 1.24 and dividing the result by 2 gives
(e^x + e^{−x})/2 = 1 + x²/2! + x⁴/4! + ⋯    1.25
But the right hand side of equation 1.25 is the Maclaurin series expansion of cosh x.
Hence,
(e^x + e^{−x})/2 = cosh x    1.26
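The identity can be checked numerically. The sketch below (the function names are our own, not from the text) compares (e^x + e^{−x})/2 with Python's built-in cosh and with the truncated Maclaurin series:

```python
import math

def cosh_from_exponentials(x):
    # (e^x + e^-x)/2, the left hand side of equation 1.26
    return (math.exp(x) + math.exp(-x)) / 2

def cosh_maclaurin(x, terms=10):
    # 1 + x^2/2! + x^4/4! + ..., the quoted Maclaurin series for cosh x
    return sum(x**(2 * k) / math.factorial(2 * k) for k in range(terms))

x = 1.3
print(abs(cosh_from_exponentials(x) - math.cosh(x)))  # essentially zero
print(abs(cosh_maclaurin(x) - math.cosh(x)))          # essentially zero
```

Ten terms of the series already agree with cosh x to machine precision for moderate x.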
i 4 = i 2 ⋅ i 2 = −1 ⋅ −1 = 1 .
Higher powers of i just repeat this set of four numbers. Hence, we can write a sequence
i, − 1, − i, 1, i, ….
We then notice that this sequence looks like that of sin θ, or cos θ, taken in steps of 90° from 0° to
360°. Respectively, these are:
0, 1, 0, -1, 0, ………
and
1, 0, -1, 0, 1, ……….
Thus, we expect that there might be a relationship between the number i , and sin θ and cosθ .
This is indeed so.
(e^{ix} + e^{−ix})/2 = 1 − x²/2! + x⁴/4! − ⋯    1.29
and
(e^{ix} − e^{−ix})/(2i) = x − x³/3! + x⁵/5! − ⋯    1.30
The right hand sides of equations 1.29 and 1.30 are, respectively, the series expansions for cos x
and sin x.
Therefore,
(e^{ix} + e^{−ix})/2 = cos x    1.31
(e^{ix} − e^{−ix})/(2i) = sin x    1.32
Similarly,
(e^{iax} + e^{−iax})/2 = cos ax    1.33
(e^{iax} − e^{−iax})/(2i) = sin ax    1.34
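These relations are easy to check numerically with complex arithmetic; the variable names below are illustrative:

```python
import cmath

x = 0.7
cos_from_exp = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2     # equation 1.31
sin_from_exp = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j)  # equation 1.32
print(abs(cos_from_exp - cmath.cos(x)))  # essentially zero
print(abs(sin_from_exp - cmath.sin(x)))  # essentially zero
```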
1.3 Differentiation
1.3.1 Polynomials
Given the function f(x) = ax^n, where a is a constant, the derivative of f(x) with respect to x
is nax^{n−1}. We write
d(ax^n)/dx = nax^{n−1}    1.35
For example,
d(3x²)/dx = 2 × 3 × x^{2−1} = 6x
1.3.2 Exponential and Trigonometric functions
We can differentiate the expansion for an exponential function to get the differential of the
function. For instance,
Let
f(x) = e^{−x} = 1 − x + x²/2! − x³/3! + ⋯    1.36
Then, differentiating term by term, we get
f′(x) = 0 − 1 + x − 3x²/3! + ⋯    1.37
= −(1 − x + x²/2! − ⋯) = −e^{−x}    1.38
Indeed, the Maclaurin series for f ( x) = e g ( x ) is
e^{g(x)} = 1 + g(x) + [g(x)]²/2! + [g(x)]³/3! + ⋯    1.39
f′(x) = g′(x) + g(x)g′(x) + ([g(x)]²/2!)g′(x) + ⋯ (where g′(x) = dg(x)/dx)    1.40
= g′(x)(1 + g(x) + [g(x)]²/2! + ⋯)    1.41
= g′(x)e^{g(x)}    1.42
d/dt (sin ωt) = d/dt [(e^{iωt} − e^{−iωt})/(2i)] = (1/(2i)) [iω e^{iωt} − (−iω e^{−iωt})]
= (iω/(2i)) (e^{iωt} + e^{−iωt}) = ω cos ωt
We conclude that
d/dt (sin ωt) = ω cos ωt    1.44
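The result can be checked with a finite-difference derivative; the helper `derivative` below is our own illustrative function, not part of the text:

```python
import math

omega = 2.0

def f(t):
    return math.sin(omega * t)

def derivative(g, t, h=1e-6):
    # central-difference approximation to dg/dt (illustrative helper)
    return (g(t + h) - g(t - h)) / (2 * h)

t = 0.4
print(derivative(f, t))             # numerical derivative of sin(ωt)
print(omega * math.cos(omega * t))  # the analytic result, ω cos ωt
```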
1.4 Integration
1.4.1 Polynomials
We have seen that d(ax^n)/dx = nax^{n−1}, where a and n are constants. Multiplying both sides by dx
gives
d(ax^n) = nax^{n−1} dx
Integrating both sides,
∫ d(ax^n) = ∫ nax^{n−1} dx + c,
where c is an arbitrary constant, called the constant of integration.
Thus,
ax^n = na ∫ x^{n−1} dx + c
That is,
∫ x^{n−1} dx = x^n/n + c, n ≠ 0    1.45
This can also be written as ∫ x^n dx = x^{n+1}/(n + 1) + c, n ≠ −1. Notice that we still put c, and not −c.
This is because c is arbitrary.
For example,
∫ 4x⁵ dx = 4x^{5+1}/(5 + 1) + c = (2/3)x⁶ + c
1.4.2 Exponential and Trigonometric functions
Recall that
(d/dx) e^{f(x)} = f′(x) e^{f(x)}    1.46
Then,
∫ d e^{f(x)} = ∫ f′(x) e^{f(x)} dx + c
Therefore,
∫ f′(x) e^{f(x)} dx = e^{f(x)} + c    1.47
For example,
∫ 2e^{2x} dx = e^{2x} + c
since, setting y = e^{2x}, dy = 2e^{2x} dx, and integrating both sides gives
∫ dy = y = e^{2x} = ∫ 2e^{2x} dx + c
Note that we could have got the same answer just by inspection:
∫ e^{2x} dx = (1/2) ∫ 2e^{2x} dx = (1/2) ∫ f′(x) e^{f(x)} dx = (1/2) e^{f(x)} + c = (1/2) e^{2x} + c
Trigonometric functions, being sums of exponential functions, are easy to integrate: all we need
to do is integrate the terms in the sum one by one.
Integrate sin ω t with respect to t.
∫ sin ωt dt = ∫ [(e^{iωt} − e^{−iωt})/(2i)] dt
= (1/(2i)) [(1/(iω)) e^{iωt} − (1/(−iω)) e^{−iωt}] + c
= −(1/ω) · (e^{iωt} + e^{−iωt})/2 + c = −(1/ω) cos ωt + c
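A quick numerical check of this integral, using a midpoint-rule sum (the helper `integral_sin` is illustrative, not from the text):

```python
import math

def integral_sin(omega, T, n=100000):
    # midpoint-rule approximation to the integral of sin(ωt) from 0 to T
    h = T / n
    return sum(math.sin(omega * (i + 0.5) * h) for i in range(n)) * h

omega, T = 2.0, 1.5
numeric = integral_sin(omega, T)
exact = -(math.cos(omega * T) - math.cos(0.0)) / omega  # [−cos(ωt)/ω] from 0 to T
print(abs(numeric - exact))  # essentially zero
```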
1 rev/s = 2π rad/s
1 rad/s = 1/(2π) rev/s
5 rad/s = 5 × 1/(2π) rev/s = 0.7958 rev/s
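These conversions can be wrapped in small helper functions (the names are our own):

```python
import math

def rad_per_s_to_rev_per_s(w):
    # 1 rad/s = 1/(2π) rev/s
    return w / (2 * math.pi)

def rev_per_min_to_rad_per_s(n):
    # 1 rev/min = 2π rad / 60 s
    return n * 2 * math.pi / 60

print(rad_per_s_to_rev_per_s(5.0))     # ≈ 0.7958 rev/s
print(rev_per_min_to_rad_per_s(15.0))  # ≈ 1.5708 rad/s (cf. SAQ 1.5)
```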
Summary of Study Session 1
In Study Session 1, you have reviewed:
1. Basic ideas of trigonometry
2. Exponential functions
3. Differentiation
4. Integration
5. The radian measure.
References
Ayres, F., Schmidt, P. A. (1958). Theory and Problems of College Mathematics, Schaum’s
Outline Series.
Self-Assessment Questions (SAQs) for Study Session 1
Having completed this study session, you may now assess how well you have achieved the
Learning Outcomes by answering the following questions. Write your answers in your Study
Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your
answers with the solutions to the Self-Assessment Questions at the end of this study session.
Solutions to SAQs
SAQ 1.1
You can easily show that
tan (A − B) = (tan A − tan B)/(1 + tan A tan B)
from which it follows immediately that
tan (90° − θ) = cot θ
From this and equation 1.19, we conclude that
tan (90° − θ) = − tan (90° + θ)
SAQ 1.2
Equation 1.23 − equation 1.24 yields
(e^x − e^{−x})/2 = x + x³/3! + x⁵/5! + ⋯
or
(e^x − e^{−x})/2 = sinh x
SAQ 1.3
d/dt (t⁴ − 3t^{−1}) = 4 × t^{4−1} − 3 × (−1) × t^{−1−1} = 4t³ + 3t^{−2}
SAQ 1.4
∫ sin ωt dt = ∫ [(e^{iωt} − e^{−iωt})/(2i)] dt
= (1/(2i)) [(1/(iω)) e^{iωt} − (1/(−iω)) e^{−iωt}] + c
= −(1/ω) · (e^{iωt} + e^{−iωt})/2 + c = −(1/ω) cos ωt + c
SAQ 1.5
1 rev/min = 2π rad/60 s = 0.1047 rad/s
15 rev/min = 15 × 0.1047 rad/s = 1.5705 rad/s
Should you require more explanation on this study session, please do not hesitate to contact your
tutor. Are you in need of General Help as regards your studies? Do not hesitate to contact
the DLI IAG Center by e-mail or phone on:
[email protected]
08033366677
Study Session 2 Numerical Analysis
2.0 Introduction
In most experiments as a physicist, you would be expected to plot some graphs. This study session
explains in detail how you can interpret the equation governing a particular phenomenon, plot
the appropriate graph with the data obtained to illustrate the inherent physical features, and
deduce the values of some physical quantities. The process of fitting a curve to a set of data is
called curve-fitting. We shall now take a look at the possible cases that could arise in curve-fitting.
2.2 Linearisation
A nonlinear relationship can be linearised and the resulting graph analysed to bring out the
relationship between variables. We shall consider a few examples:
(i) We could take the logarithm of both sides of equation 2.1 to base e,
ln y = ln(ae^x) = ln a + x
since ln e^x = x. Thus, a plot of ln y against x gives a linear graph with slope unity and a
y-intercept of ln a.
(ii) We could also have plotted y against e x . The result is a linear graph through the
origin, with slope equal to a.
Linearise the expression T = 2π (l/g)^{1/2}, where g is a constant. Hence, deduce what
functions should be plotted to get a linear graph. How can you recover the acceleration
due to gravity from the graph?
(i) Rearranging, we obtain
ln T = (1/2) ln l + ln(2π) − (1/2) ln g
Writing this in the form y = mx + c, we see that a plot of ln T against ln l gives a slope of
0.5 and a ln T intercept of ln(2π) − (1/2) ln g = ln(2π/g^{1/2}). Once the intercept is read off the
graph, you can then calculate the value of g. Note that we could also have deduced this
right away from the original equation:
T = (2π/g^{1/2}) l^{1/2}
ln T = (1/2) ln l + ln(2π/g^{1/2})
(ii) T = (2π/g^{1/2}) l^{1/2}
A plot of T versus l^{1/2} gives a linear graph through the origin (as the intercept is zero).
The slope of the graph is 2π/g^{1/2}, from which the value of g can be recovered.
(iii) Squaring both sides,
T² = (4π²/g) l
A plot of T² versus l gives a linear graph through the origin. The slope of the graph is
4π²/g, and the value of g can be obtained appropriately.
Given the expression N = N 0 e − λ t , what functions would you plot to achieve a linear
graph?
The student can show that a plot of ln N versus t will give a linear graph with slope −λ,
and ln N intercept ln N₀.
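This linearisation can be sketched in code. Below, the decay constants N₀ = 500 and λ = 0.3 are made-up illustrative values; the slope and intercept are found with the mean formulas of the least-squares method described later in this session:

```python
import math

# Synthetic decay data from N = N0 exp(-lam*t); N0 and lam are made-up values
N0, lam = 500.0, 0.3
t = [0, 1, 2, 3, 4, 5]
N = [N0 * math.exp(-lam * ti) for ti in t]

# Fit ln N = -lam*t + ln N0 by least squares
y = [math.log(Ni) for Ni in N]
n = len(t)
mt = sum(t) / n
my = sum(y) / n
mty = sum(ti * yi for ti, yi in zip(t, y)) / n
mt2 = sum(ti * ti for ti in t) / n
m = (mty - mt * my) / (mt2 - mt**2)  # slope, should be -lam
c = my - m * mt                      # intercept, should be ln N0
print(-m, math.exp(c))               # recovers 0.3 and 500.0
```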
If 1/f = 1/u + 1/v, what functions would you plot to get a linear graph, and how would you
deduce the focal length of the mirror from your graph?
u   10   20   30   40   50
v   −7   −10  −14  −15  −17
Linearise the relationship 1/v = 1/f − 1/u. Plot the graph of v^{−1} versus u^{−1} and draw the line
of best fit by eye judgment. Hence, find the focal length of the mirror. All distances are in
cm.
Table 2.1
u v 1/u 1/v
10 -7 0.100 -0.143
20 -10 0.050 -0.100
30 -14 0.033 -0.071
40 -15 0.025 -0.067
50 -17 0.020 -0.059
The graph is plotted in Fig. 2.1.
[Fig. 2.1: Linear graph of 1/v (/cm) against 1/u (/cm) for the function 1/v = 1/f − 1/u]
The slope is −1.05 and the intercept −0.04. From 1/v = 1/f − 1/u, we see that the intercept is
1/f = −0.04, or f = −1/0.04 = −25 cm.
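Instead of a line of best fit by eye, the same data can be fitted by least squares; this sketch reproduces the slope, intercept and focal length quoted above:

```python
# Mirror data from the worked example (all distances in cm)
u = [10, 20, 30, 40, 50]
v = [-7, -10, -14, -15, -17]
x = [1.0 / ui for ui in u]  # 1/u
y = [1.0 / vi for vi in v]  # 1/v

# Least-squares line through the points (1/u, 1/v)
n = len(x)
mx, my = sum(x) / n, sum(y) / n
mxy = sum(a * b for a, b in zip(x, y)) / n
mx2 = sum(a * a for a in x) / n
m = (mxy - mx * my) / (mx2 - mx * mx)  # slope, close to -1
c = my - m * mx                        # intercept, equal to 1/f
f = 1.0 / c
print(round(m, 2), round(c, 3), round(f, 1))  # -1.05 -0.04 -25.0
```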
2.3.1 Method of Least Squares
Suppose xᵢ, i = 1, ⋯, n are the points of the independent variable at which the dependent variable
is measured, with respective values yᵢ, i = 1, ⋯, n. Consider Fig. 2.2, where we have assumed a
linear graph of equation y = mx + c. Then at each point xᵢ, the fitted value is mxᵢ + c.
The least square method entails minimizing the sum of the squares of the difference between
the measured value and the one predicted by the assumed equation.
[Fig. 2.2: Illustration of the error y₁ − (mx₁ + c) at the point x₁ in representing a set of data with the line of best fit]
S = Σ_{i=1}^{n} [yᵢ − (mxᵢ + c)]²    2.1
We have taken the square of the difference because taking the sum alone might give the
impression that there is no error when the sum of positive differences is balanced by the sum of
negative differences, much as in the definition of the variance of a set of data.
Now, S is a function of m and c, that is, S = S(m, c). This is because we seek a line of best fit,
which will be determined by an appropriate slope and a suitable intercept. In any case, xᵢ and yᵢ
are not variables in this case, having been obtained in the laboratory, for instance.
You have been taught at one point or another that for a function of a single variable f(x), the
extrema are the points where df/dx = 0. However, for a function of more than one variable, partial
derivatives are the relevant quantities. Thus, since S = S(m, c), the condition for extrema is
∂S/∂m = 0 and ∂S/∂c = 0    2.2
∂S/∂m = 2 Σ_{i=1}^{n} [yᵢ − (mxᵢ + c)](−xᵢ) = 0    2.3
∂S/∂c = 2 Σ_{i=1}^{n} [yᵢ − (mxᵢ + c)](−1) = 0    2.4
These give
Σ_{i=1}^{n} xᵢyᵢ − m Σ_{i=1}^{n} xᵢ² − c Σ_{i=1}^{n} xᵢ = 0    2.5
Σ_{i=1}^{n} yᵢ − m Σ_{i=1}^{n} xᵢ − nc = 0    2.6
It follows from the fact that the mean ⟨x⟩ = (Σ_{i=1}^{n} xᵢ)/n, and similar expressions for ⟨y⟩, ⟨xy⟩
and ⟨x²⟩, that dividing equations 2.5 and 2.6 through by n gives, respectively,
⟨xy⟩ − m⟨x²⟩ − c⟨x⟩ = 0    2.7
⟨y⟩ − m⟨x⟩ − c = 0    2.8
Multiplying equation 2.8 by ⟨x⟩ gives
⟨x⟩⟨y⟩ − m⟨x⟩² − c⟨x⟩ = 0    2.9
Finally, from equations 2.7 and 2.9,
m = (⟨xy⟩ − ⟨x⟩⟨y⟩)/(⟨x²⟩ − ⟨x⟩²)    2.10
and from equation 2.8,
c = ⟨y⟩ − m⟨x⟩    2.11
A student obtained the following data in the laboratory. By making use of the method of
least squares, find the relationship between x and t.
t 5 12 19 26 33
x 23 28 32 38 41
The table can be extended to give
Table 2.3
t:   5    12   19   26   33     Σ = 95    ⟨t⟩ = 19
x:   23   28   32   38   41     Σ = 162   ⟨x⟩ = 32.4
tx:  115  336  608  988  1353   Σ = 3400  ⟨tx⟩ = 680
t²:  25   144  361  676  1089   Σ = 2295  ⟨t²⟩ = 459
m = (⟨tx⟩ − ⟨t⟩⟨x⟩)/(⟨t²⟩ − ⟨t⟩²) = (680 − 19 × 32.4)/(459 − 19²) = 0.6571    2.12
c = ⟨x⟩ − m⟨t⟩ = 32.4 − 0.6571 × 19 = 19.9151    2.13
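The worked example can be reproduced directly from the mean formulas:

```python
# Data of Table 2.2
t = [5, 12, 19, 26, 33]
x = [23, 28, 32, 38, 41]

n = len(t)
mt = sum(t) / n                             # <t>  = 19
mx = sum(x) / n                             # <x>  = 32.4
mtx = sum(a * b for a, b in zip(t, x)) / n  # <tx> = 680
mt2 = sum(a * a for a in t) / n             # <t^2> = 459
m = (mtx - mt * mx) / (mt2 - mt**2)  # equation 2.12
c = mx - m * mt                      # equation 2.13
print(round(m, 4), round(c, 4))  # 0.6571 19.9143 (the text's 19.9151 uses the rounded slope)
```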
The error between the measured value of the variable and the value predicted by the equation is (as we
have seen in Fig. 2.2):
εᵢ = yᵢ − (mxᵢ + c)    2.15
The fitted line requires two unknown quantities: m and c. Thus, two equations are needed. We
would achieve these two equations by dividing the data into two, one of size l and the other of
size n-l, where n is the total number of observations.
The assumption that the sum of errors for each group is zero requires that
Σ_{i=1}^{l} [yᵢ − (mxᵢ + c)] = 0    2.16
and
Σ_{i=l+1}^{n} [yᵢ − (mxᵢ + c)] = 0    2.17
These give
Σ_{i=1}^{l} yᵢ = m Σ_{i=1}^{l} xᵢ + lc    2.18
Σ_{i=l+1}^{n} yᵢ = m Σ_{i=l+1}^{n} xᵢ + (n − l)c    2.19
the latter equation being true since n − l is the number of observations that fall into that group.
Dividing through by l and n − l, respectively, equations 2.18 and 2.19 give
(1/l) Σ_{i=1}^{l} yᵢ = m (1/l) Σ_{i=1}^{l} xᵢ + c    2.20
(1/(n − l)) Σ_{i=l+1}^{n} yᵢ = m (1/(n − l)) Σ_{i=l+1}^{n} xᵢ + c    2.21
Thus, writing ⟨x⟩₁, ⟨y⟩₁ and ⟨x⟩₂, ⟨y⟩₂ for the means over the first and the second group,
⟨y⟩₁ = m⟨x⟩₁ + c    2.22
⟨y⟩₂ = m⟨x⟩₂ + c    2.23
Solve the problem in Table 2.2 using the method of group averages.
t 5 12 19 26 33
x 23 28 32 38 41
Table 2.4
t 5 12 19
x 23 28 32
And
Table 2.5
t 26 33
x 38 41
t:  5   12  19    Σ = 36   ⟨t⟩₁ = 12
x:  23  28  32    Σ = 83   ⟨x⟩₁ = 27.666667
t:  26  33        Σ = 59   ⟨t⟩₂ = 29.5
x:  38  41        Σ = 79   ⟨x⟩₂ = 39.5
m = (⟨x⟩₁ − ⟨x⟩₂)/(⟨t⟩₁ − ⟨t⟩₂) = (27.666667 − 39.5)/(12 − 29.5) = 0.67619    2.39
and
c = ⟨x⟩₁ − m⟨t⟩₁ = 27.666667 − (0.67619 × 12) = 19.552387    2.40
Thus, the equation of best fit is, putting equations 2.39 and 2.40 into the equation
y = mx + c ,
x = 0.67619t + 19.552387 2.41
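The method of group averages is equally short in code; this sketch reproduces equations 2.39 to 2.41:

```python
# Data of Table 2.2, split into the groups of Tables 2.4 and 2.5
t = [5, 12, 19, 26, 33]
x = [23, 28, 32, 38, 41]
l = 3  # size of the first group

mt1, mx1 = sum(t[:l]) / l, sum(x[:l]) / l
mt2, mx2 = sum(t[l:]) / (len(t) - l), sum(x[l:]) / (len(x) - l)
m = (mx1 - mx2) / (mt1 - mt2)  # equation 2.39
c = mx1 - m * mt1              # equation 2.40
print(round(m, 5), round(c, 5))  # 0.67619 19.55238
```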
References
1. Conte, S. D. and de Boor, C. (1965). Elementary Numerical Analysis: An Algorithmic
Approach. McGraw-Hill International Student Edition, McGraw-Hill.
2. Grewal, B. S. (1997). Numerical Methods in Engineering and Science. Khanna
Publishers.
SAQ 2.4 (tests Learning Outcomes 2.2, 2.3 and 2.4)
A student performing the simple pendulum experiment obtained the following table, where t is
the time for 50 oscillations.
l (cm) 50 45 40 35 30 25 20 15
t (s) 71 69 65 61 56 52 48 43
Find the acceleration due to gravity at the location of the experiment, using
(i) the method of least squares, and
(ii) the method of group averages.
Solutions to SAQs
SAQ 2.1
(i) Curve-fitting is the process of fitting a curve or a mathematical function to a set of data.
(ii) This is a method of curve-fitting based on minimising the sum of the squares of the
differences between the tabulated (observed) values of the dependent variable and the
values predicted by the fitted equation.
(iii) The method of group averages divides the data to be fitted into two groups, each of
which is assumed to have a zero sum of residuals: one of size l and the other of size n − l,
where n is the total number of observations. Since two quantities are needed, the slope
and the intercept of the line of fit, two equations are needed, one from each group of
data.
SAQ 2.2
The current flowing in a particular R-C circuit is tabulated against the change in the time t − t₀,
such that at time t = t₀, the current is 1.2 A. Using the least-squares method, find the slope and
the intercept of the linear function relating the current i to the time t. Hence, determine the
time-constant of the circuit.
Taking logs of i = i₀e^{−t/RC}: log i = log i₀ + log(e^{−t/RC}) = log i₀ − (t/RC) log e. A plot of
log i against t gives slope −(log e)/RC and intercept log i₀.
t     i     t²     log i      t log i     mt + c
2.0   0.20  4.00   −0.69897   −1.39794   −0.70220
2.2   0.16  4.84   −0.79588   −1.75094   −0.78902
2.4   0.13  5.76   −0.88606   −2.12654   −0.87584
2.6   0.11  6.76   −0.95861   −2.49238   −0.96266
2.8   0.09  7.84   −1.04576   −2.92812   −1.04948
3.0   0.07  9.00   −1.15490   −3.46471   −1.13630
Sum        15     38.2       −5.54017   −14.1606
Average    2.5    6.36667    −0.92336   −2.3601
Slope = −0.4431
Intercept = 0.1844
c = ⟨log i⟩ − m⟨t⟩ = 0.1844
m = −(log e)/RC, or RC = (log e)/(−m) = 0.4343/0.4431 = 0.980 = time constant of the circuit.
SAQ 2.3
Solve the problem in SAQ 2.2 with the method of group averages by dividing into two groups of
three data sets each.
t 2 2.2 2.4
i 0.20 0.16 0.13
and
t 2.6 2.8 3
i 0.11 0.09 0.07
Group 1
t i log i
2.0 0.20 -0.69897
2.2 0.16 -0.79588
2.4 0.13 -0.88606
Sum 6.6 -2.38091
Average 2.2 -0.79364
Group 2
t i log i
2.6 0.11 -0.95861
2.8 0.09 -1.04576
3.0 0.07 -1.15490
Sum 8.4 -3.15927
Average 2.8 -1.05309
m = (⟨y⟩₁ − ⟨y⟩₂)/(⟨x⟩₁ − ⟨x⟩₂) = (−0.79364 − (−1.05309))/(2.2 − 2.8) = −0.4324
c = ⟨y⟩₁ − m⟨x⟩₁ = −0.79364 − (−0.4324 × 2.2) = 0.1576
SAQ 2.4
A student performing the simple pendulum experiment obtained the following table, where t is
the time for 50 oscillations.
l 50 45 40 35 30 25 20 15
(cm)
t (s) 71 69 65 61 56 52 48 43
Find the acceleration due to gravity at the location of the experiment, using
(i) the method of least squares, and
(ii) the method of group averages.
Method of least squares (taking logs):
Slope = 0.429391
Intercept = 0.282157
2π = 6.283185
log 2π = 0.798236
log 2π − intercept = 0.516079
2 × (log 2π − intercept) = 1.032159 = log g
g = 10.77
Method of least squares (taking squares)
T² = (4π²/g) l. A plot of T² against l gives a line through the origin with slope m = 4π²/g, from
which g = 4π²/m:
l (cm)  t (s)  l (m)  T = t/50  T²       l² (m²)  T²l
50      71     0.50   1.42      2.01640  0.2500   1.00820
45      69     0.45   1.38      1.90440  0.2025   0.85698
40      65     0.40   1.30      1.69000  0.1600   0.67600
35      61     0.35   1.22      1.48840  0.1225   0.52094
30      56     0.30   1.12      1.25440  0.0900   0.37632
25      52     0.25   1.04      1.08160  0.0625   0.27040
20      48     0.20   0.96      0.92160  0.0400   0.18432
15      43     0.15   0.86      0.73960  0.0225   0.11094
Sum            2.6              11.0964  0.9500   4.00410
Average        0.325            1.38705  0.11875  0.50051
Slope = 3.78829
Intercept = 0.15586    g = 10.42
Method of group averages (taking logs)
Group 1
l (m)  t  T  log l  log T
0.50 71 1.42 -0.3010 0.1523
0.45 69 1.38 -0.3468 0.1399
0.40 65 1.30 -0.3979 0.1139
0.35 61 1.22 -0.4559 0.0864
Sum -1.5017 0.4925
Average -0.3754 0.1231
Group 2
l t T log l log T
0.30 56 1.12 -0.5229 0.0492
0.25 52 1.04 -0.6021 0.0170
0.20 48 0.96 -0.6990 -0.0177
0.15 43 0.86 -0.8239 -0.0655
Sum -2.6478 -0.0170
Average -0.6620 -0.0043
Slope = 0.4445
Intercept = 0.29
g= 10.38
Method of group averages (taking squares)
Group 1
l t T l T2
0.50 71 1.42 0.50 2.0164
0.45 69 1.38 0.45 1.9044
0.40 65 1.30 0.40 1.6900
0.35 61 1.22 0.35 1.4884
Sum 1.70 7.0992
Average 0.43 1.7748
Group 2
l t T l T2
0.30 56 1.12 0.30 1.2544
0.25 52 1.04 0.25 1.0816
0.20 48 0.96 0.20 0.9216
0.15 43 0.86 0.15 0.7396
Sum 0.9 3.9972
Average 0.225 0.9993
Slope = 3.8775
Intercept = 0.127
g = 10.18
Study Session 3 Vector Spaces I
Introduction
Vectors are of utmost importance in Physics, appearing in practically all areas of Physics. As is
usual with Mathematics, the idea of Euclidean vectors will be extended to cover mathematical
structures you never could picture as having anything to do with vectors. You shall come to
know that some sets also behave like vectors, and hence the idea of vectors has been expanded
to include these structures; generally, we call such sets vector spaces.
then, S is called a vector space or linear space. The vector space is a real vector space if
K ≡ R , and a complex vector space if K ≡ C .
The vector spaces we shall come across in our study are such that we shall be bothered with only
the first two properties of a vector space, that is, additivity and homogeneity. You might as well
take this slogan to heart: ‘additivity plus homogeneity equals linearity.’
Show that the set of Cartesian vectors in 3-dimensions, V₃, is a real vector space.
Let a, b ∈ V₃, λ ∈ R.
The sum of two vectors in V₃ is also in V₃:
(i) a + b ∈ V₃    3.11
The scalar multiple of a vector in V₃ is also in V₃:
(ii) λa ∈ V₃    3.12
Prove that the set of m × n matrices under matrix addition and scalar multiplication, M mn
, is a vector space over the real numbers or the complex plane.
A, B ∈ M mn , λ ∈ R or C
The matrix sum of two m × n matrices is also an m × n matrix:
(i) A + B ∈ M mn 3.13
The product of a matrix with a scalar λ from the field of real numbers or the complex plane is,
respectively, still in M_mn:
(ii) λA ∈ M_mn    3.14
Other conditions are also satisfied by vectors in M mn . Indeed, the zero vector is Omn , the
m × n matrix that has all its entries zero which is also in M mn . The vector − M is the
additive inverse of the vector M; it is also in M mn .
A set F = {f(x), g(x), ⋯} of square-integrable functions of x over the interval [a, b],
f(x), g(x) ∈ F, λ ∈ R or C. That is, real- or complex-valued functions such that
∫_a^b |f(x)|² dx < ∞.
The sum of two such functions is also a square integrable function of x over the interval:
(i) f ( x) + g ( x ) ∈ F 3.15
The scalar multiple of such a function is also a square integrable function over the
interval:
(ii) λf ( x) ∈ F 3.16
All other conditions are satisfied by vectors in F. The zero vector is the vector r ( x ) ≡ 0 ,
the zero-valued function which is also in F . The vector − s ( x) is the additive inverse of
the vector s ( x) , and is also in F .
If
a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0    3.17
implies a₁ = a₂ = ⋯ = aₙ = 0, then we say {vᵢ}_{i=1}^{n} is a linearly independent set.
If even just one of the coefficients can be non-zero, then the set is linearly dependent.
We see that
c1 = −2c 2 3.23
c3 = 0 3.24
c1 and c2 do not necessarily have to be zero.
The vectors are linearly dependent.
If we wish to write any vector in one dimension (say, the x direction), we need only one unit vector.
Any two vectors in the x direction must be linearly dependent, for we can write one as a₁i and the
other a₂i, where a₁ and a₂ are scalars.
Show that any two vectors in the vector space of one-dimensional vectors are linearly
dependent.
Let the two vectors be a1i and the other a 2 i .
We form the linear combination
c1 (a1i) + c2 (a2i) = 0 3.27
where a1 and a 2 are scalar constants.
Obviously, c₁ and c₂ need not be identically zero for the expression to hold, for
c₁ = −c₂ (a₂/a₁)    3.28
would also satisfy the expression.
We conclude therefore that the vectors must be linearly dependent.
3.3 Span
If we can express any vector in a vector space as a linear combination of a set of vectors,
{e1 , e 2 , ⋅ ⋅⋅, e n } , then we say the vector space V is spanned by the set of vectors {e1 , e 2 , ⋅ ⋅⋅, e n } .
Thus, the set of all linear combinations of vectors in the set {e₁, e₂, ⋯, eₙ} is said to be the span of
the vector space V, and {e₁, e₂, ⋯, eₙ} is said to be a basis for V. Thus, a set of n linearly independent
vectors that span V is a basis for V. However, we can also span the same vector space with
another set of (n + k) vectors, with k an integer greater than zero. However, these vectors will not
be linearly independent, as we have seen in ITQ 3. Indeed, the coefficients cᵢ of the sum Σ_{i=1}^{n} cᵢeᵢ
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. McQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
4. Hefferon, J. (2012). Linear Algebra, http://joshua.smcvt.edu/linearalgebra/book.pdf
5. Hefferon, J. (2012). Answers to Exercises,
http://joshua.smcvt.edu/linearalgebra/jhanswer.pdf
Self-Assessment Questions (SAQs) for Study Session 3
You have now completed this study session. You may now assess how well you have achieved
the Learning Outcomes by answering the following questions. Write your answers in your Study
Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your
answers with the solutions to the Self-Assessment Questions at the end of this study session.
[[1, 0], [0, 1]], [[0, 1], [1, 0]], [[0, −i], [i, 0]], [[1, 0], [0, −1]].
Solutions to SAQs
SAQ 3.1
In the text.
SAQ 3.2
(i) The set of real numbers over the field of real numbers.
Let the set be R, the set of real numbers, then,
a +b ∈ R ∀ a , b∈ R
and λ a∈R ∀ a∈R, λ ∈R
(iii) The set of complex numbers over the field of real numbers.
Let the set C be the set of complex numbers; then,
c₁ + c₂ ∈ C ∀ c₁, c₂ ∈ C
and α c ∈ C ∀ c ∈ C, α ∈ R
SAQ 3.3
ic1 + 2kc 2 + jc3 = 0
c1 = 0, c 2 = 0, c3 = 0
The set is linearly independent.
SAQ 3.4
c₁ (1, 0, 1)^T + c₂ (1, 1, 0)^T + c₃ (1, 2, 1)^T = 0
from which we obtain
c₁ + c₂ + c₃ = 0    (i)
c₂ + 2c₃ = 0    (ii)
c₁ + c₃ = 0    (iii)
From (iii), c₁ = −c₃    (iv)
and from (ii), c₂ = −2c₃    (v)
Putting (iv) and (v) in (i) gives
−c₃ − 2c₃ + c₃ = 0
−2c₃ = 0, or c₃ = 0
c₁ = −c₃ = 0, c₂ = −2c₃ = 0
c₁ = c₂ = c₃ = 0
Hence, we conclude the set is linearly independent.
Let us take the determinant of the matrix formed from the three equations (i) to (iii):
| 1  1  1 |
| 0  1  2 | = 1(1 − 0) − 1(0 − 2) + 1(0 − 1) = 1 + 2 − 1 = 2 ≠ 0
| 1  0  1 |
Thus, if the determinant of the system formed by the equations is non-zero, the vectors are linearly
independent. If the determinant is zero, then the vectors are linearly dependent. Of course, you
know that if one row in a matrix is a multiple of another, then the determinant of the matrix is
zero.
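This determinant test is easy to automate; the helper `det3` below is our own illustrative function, applied to the three vectors of SAQ 3.4 (as the columns of the matrix):

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns are the three vectors of SAQ 3.4
M = [[1, 1, 1],
     [0, 1, 2],
     [1, 0, 1]]
print(det3(M))  # 2, which is non-zero, so the vectors are linearly independent
```

A zero determinant, e.g. when one row is a multiple of another, would instead signal linear dependence.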
SAQ 3.5
Let n + 1 vectors be linearly independent in an n dimensional space. Then, we can write
a1 v 1 + a 2 v 2 + ... + a n v n + a n +1 v n +1 = 0 (i)
But we can also write any vector in terms of the n basis vectors:
v n+1 = α 1 v 1 + α 2 v 2 + ... + α n v n (ii)
Then, putting (ii) in (i), and setting β k = −a n +1α k
(a1 − β 1 ) v 1 + (a 2 − β 2 ) v 2 + ... + (a n − β n ) v n = 0
Then, unless βᵢ = 0 (equivalent to αᵢ = 0) for all i, in which case the vector v_{n+1} = 0, the
coefficients aᵢ − βᵢ are not all zero, so the set {vᵢ}_{i=1}^{n} is not a linearly independent set, a
contradiction. Thus, we conclude that any n + 1 vectors in an n-dimensional vector space must be
linearly dependent.
SAQ 3.6
[[i, 0], [2, −i]], [[1, 0], [i, −i]], [[2i, 0], [−i, −i]], [[i, 1], [2, −1]]
α[[i, 0], [2, −i]] + β[[1, 0], [i, −i]] + γ[[2i, 0], [−i, −i]] + δ[[i, 1], [2, −1]] = [[0, 0], [0, 0]]
It follows that
iα + β + 2iγ + iδ = 0    (i)
2α + iβ − iγ + 2δ = 0    (ii)
δ = 0    (iii)
−iα − iβ − iγ − δ = 0    (iv)
In view of (iii),
iα + β + 2iγ = 0    (v)
2α + iβ − iγ = 0    (vi)
−iα − iβ − iγ = 0    (vii)
SAQ 3.7
(i) 2i + 3 j − k , − i + j + 3k and − 3i + 2 j + k
a (2, 3, −1)^T + b (−1, 1, 3)^T + c (−3, 2, 1)^T = (0, 0, 0)^T
2a − b − 3c = 0
3a + b + 2c = 0
− a + 3b + c = 0
The solution set is (0, 0, 0), i.e., a = b = c = 0.
The vectors are linearly independent.
Alternatively,
| 2  −1  −3 |
| 3   1   2 | = 2(1 − 6) + 1(3 + 2) − 3(9 + 1) = 2(−5) + 5 − 30 = −35 ≠ 0
| −1  3   1 |
(ii) [[i, 1], [−2, 2i]], [[2, 1], [−i, 2i]], [[−1, 2], [3, −i]] and [[−i, 2i], [i, −2]]
a[[i, 1], [−2, 2i]] + b[[2, 1], [−i, 2i]] + c[[−1, 2], [3, −i]] + d[[−i, 2i], [i, −2]] = [[0, 0], [0, 0]]
Expanding,
ai + 2b − c − id = 0    (i)
a + b + 2c + 2id = 0    (ii)
−2a − ib + 3c + id = 0    (iii)
2ia + 2ib − ic − 2d = 0    (iv)
Multiplying (i) by 2 and adding to (ii),
a (1 + 2i ) + 5b = 0 (v)
Multiplying (iii) by i and adding to (iv) gives
(1 + 2i )b + 2ic − 3d = 0 (vi)
Multiplying (ii) by 2 and adding to (iii),
( 2 − i )b + 7c + 5id = 0 (vii)
Multiplying (vi) by 5i and (vii) by 3,
5i(1 + 2i)b − 10c − 15id = 0, i.e., (5i − 10)b − 10c − 15id = 0
3(2 − i)b + 21c + 15id = 0, i.e., (6 − 3i)b + 21c + 15id = 0
Adding,
(−4 + 2i)b + 11c = 0    (viii)
From (v) and (viii), b = −a(1 + 2i)/5 = −11c/(2i − 4)
Hence,
c = a(1 + 2i)(2i − 4)/55 = −((8 + 6i)/55) a
Substituting for b and c in equation (vi),
3d = (1 + 2i)(−(1 + 2i)/5)a + 2i(−(8 + 6i)/55)a = ((3 − 4i)/5)a + ((12 − 16i)/55)a
3d = ((33 − 44i + 12 − 16i)/55)a = ((45 − 60i)/55)a
d = ((15 − 20i)/55)a    (ix)
Putting b, c, d in (i),
ai − 2((1 + 2i)/5)a + ((8 + 6i)/55)a − i((15 − 20i)/55)a = 0
((55i − 22 − 44i + 8 + 6i − 20 − 15i)/55)a = 0
((−34 + 2i)/55)a = 0
Hence, a = 0, meaning that b, c and d are also zero.
55 55
Alternatively, from
ai + 2b − c − id = 0    (i)
a + b + 2c + 2id = 0    (ii)
−2a − ib + 3c + id = 0    (iii)
2ia + 2ib − ic − 2d = 0    (iv)
check if
| i   2   −1  −i |
| 1   1    2  2i |
| −2  −i   3   i | ≠ 0
| 2i  2i  −i  −2 |
SAQ 3.8
For the set to be a basis, the vectors must be linearly independent.
a (1, 1)^T + b (−1, −1)^T = (0, 0)^T
a − b = 0, a − b = 0
a = b
a and b do not have to be zero. Hence, the vectors are not linearly independent. Sketch the
vectors and satisfy yourself that they are indeed linearly dependent: they are parallel.
Alternatively,
1 −1
=0
1 −1
SAQ 3.9
The set is
$$\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},\ \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\ \begin{pmatrix}0 & -i\\ i & 0\end{pmatrix},\ \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.$$
Writing
$$\begin{pmatrix}1 & 2\\ -2 & i\end{pmatrix} = a\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} + b\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} + c\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix} + d\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},$$
$$1 = a + d \qquad \text{(i)}$$
$$2 = b - ic \qquad \text{(ii)}$$
$$-2 = b + ic \qquad \text{(iii)}$$
$$i = a - d \qquad \text{(iv)}$$
Adding (i) and (iv):
$$a = \frac{1+i}{2}$$
(i) – (iv):
$$d = \frac{1-i}{2}$$
(ii) + (iii):
$$b = 0$$
(iii) – (ii): $2ic = -4$, so
$$c = -\frac{2}{i} = 2i$$
Hence,
$$\begin{pmatrix}1 & 2\\ -2 & i\end{pmatrix} = \frac{1+i}{2}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} + 2i\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix} + \frac{1-i}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$
Should you require more explanation on this study session, please do not hesitate to contact your tutor.
Are you in need of General Help as regards your studies? Do not hesitate to contact the DLI IAG Center by e-mail or phone on:
[email protected]
08033366677
Study Session 4 Vector Spaces II
4.1 Introduction
In Study Session 3, you learnt about the properties of a vector space, as well as linear independence of a set of vectors. In this study session, you will learn about the inner (dot, or scalar) product of two vectors in a vector space, as well as the norm of a vector.
Consider $p(t) = \|\mathbf{v} + t\mathbf{w}\|^2 \ge 0$, i.e.,
$$0 \le \|\mathbf{v}\|^2 + t^2\|\mathbf{w}\|^2 + 2t(\mathbf{v}, \mathbf{w})$$
Differentiating the right side with respect to t and equating to zero gives the value of t at which $p(t)$, a quadratic in t opening upward, attains its minimum:
$$0 = 2t\|\mathbf{w}\|^2 + 2(\mathbf{v}, \mathbf{w})$$
$$t = -\frac{(\mathbf{v}, \mathbf{w})}{\|\mathbf{w}\|^2}$$
Hence,
$$0 \le \|\mathbf{v}\|^2 + \frac{(\mathbf{v}, \mathbf{w})^2}{\|\mathbf{w}\|^2} - \frac{2(\mathbf{v}, \mathbf{w})^2}{\|\mathbf{w}\|^2} = \|\mathbf{v}\|^2 - \frac{(\mathbf{v}, \mathbf{w})^2}{\|\mathbf{w}\|^2}$$
Therefore,
$$(\mathbf{v}, \mathbf{w})^2 \le \|\mathbf{v}\|^2\,\|\mathbf{w}\|^2$$
Taking the positive square root of both sides,
$$|(\mathbf{v}, \mathbf{w})| \le \|\mathbf{v}\|\,\|\mathbf{w}\|$$
Note that we did not have to make recourse to any particular formula for the inner product. Every inner product satisfies the Cauchy-Schwarz inequality because, as you can see, we made use only of the general properties of the inner product $(\mathbf{v}, \mathbf{w})$.
Given the vectors a and b in 3 dimensions, i.e., $V_3$, we define the inner product as
$$(\mathbf{a}, \mathbf{b}) = \mathbf{a}^T \mathbf{b} \qquad 4.7$$
where $\mathbf{a}^T$ is the transpose of the column matrix representing a. Find the inner product of a and b, where $\mathbf{a} = \begin{pmatrix}1\\0\\1\end{pmatrix}$ and $\mathbf{b} = \begin{pmatrix}2\\1\\1\end{pmatrix}$.
$$(\mathbf{a}, \mathbf{b}) = \begin{pmatrix}1 & 0 & 1\end{pmatrix}\begin{pmatrix}2\\1\\1\end{pmatrix} = 3$$
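In code, this inner product is just a sum of componentwise products. A sketch (the function name `inner` is ours):

```python
# A sketch: the inner product (a, b) = a^T b in V3, as a plain
# sum of componentwise products.
def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

a = [1, 0, 1]
b = [2, 1, 1]
print(inner(a, b))  # 3, as in the worked example
```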
Show that $(\mathbf{v}, \mathbf{w}) = \sum_{i=1}^{n} v_i w_i$ is an inner product on $\mathbb{R}^n$.
(i) $(\mathbf{v}, \mathbf{v}) = \sum_{i=1}^{n} v_i v_i = \sum_{i=1}^{n} v_i^2 \ge 0$.
As an illustration, with $A = \begin{pmatrix}i & 0\\ 1 & 1\end{pmatrix}$ and $B = \begin{pmatrix}0 & -i\\ 1 & 0\end{pmatrix}$:
$$A^T = \begin{pmatrix}i & 1\\ 0 & 1\end{pmatrix}, \qquad \overline{A^T} = \begin{pmatrix}-i & 1\\ 0 & 1\end{pmatrix}$$
$$\overline{A^T}B = \begin{pmatrix}-i & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix}0 & -i\\ 1 & 0\end{pmatrix} = \begin{pmatrix}1 & -1\\ 1 & 0\end{pmatrix}$$
$$(A, B) = \mathrm{Tr}(\overline{A^T}B) = 1 + 0 = 1$$
Find the inner product of $A, B \in M_{mn}$ if $A = \begin{pmatrix}2 & 1 & 1\\ -1 & 3 & 1\end{pmatrix}$ and $B = \begin{pmatrix}-1 & -1 & 2\\ 1 & 3 & 1\end{pmatrix}$.
$$A^T B = \begin{pmatrix}2 & -1\\ 1 & 3\\ 1 & 1\end{pmatrix}\begin{pmatrix}-1 & -1 & 2\\ 1 & 3 & 1\end{pmatrix} = \begin{pmatrix}-3 & -5 & 3\\ 2 & 8 & 5\\ 0 & 2 & 3\end{pmatrix}$$
Hence,
$$(A, B) = \mathrm{Tr}\begin{pmatrix}-3 & -5 & 3\\ 2 & 8 & 5\\ 0 & 2 & 3\end{pmatrix} = -3 + 8 + 3 = 8$$
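The trace inner product on matrices can be sketched with a few list-based helpers (all names below are ours):

```python
# A sketch: the trace inner product (A, B) = Tr(A^T B) on real m x n
# matrices, applied to the worked example.
def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def trace_inner(a, b):
    p = matmul(transpose(a), b)
    return sum(p[i][i] for i in range(len(p)))

A = [[2, 1, 1], [-1, 3, 1]]
B = [[-1, -1, 2], [1, 3, 1]]
print(trace_inner(A, B))  # 8
```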
In the case of the space of square integrable complex valued functions, $F_s$, over the interval $(a, b)$ — i.e., $f(x) \in F_s$ implies that $\int_a^b |f(x)|^2\,dx < \infty$ — we define the inner product on this space as
$$(f, g) = \int_a^b f^*(x)\,g(x)\,dx \qquad 4.9$$
Given that $f(x) = 2x + i$ and $g(x) = 5x - 2ix$ both belong to the vector space of complex valued functions of $x$, $0 \le x \le 2$,
(i) Normalise $f(x)$.
(ii) Find the inner product of the functions.
(i)
$$\int_0^2 Af^*(x)\,Af(x)\,dx = A^2\int_0^2 (2x-i)(2x+i)\,dx = A^2\int_0^2 (4x^2+1)\,dx$$
$$= A^2\left[\frac{4}{3}x^3 + x\right]_0^2 = A^2\left(\frac{32}{3} + 2\right) = \frac{38}{3}A^2 = 1$$
Hence, the normalisation constant is $A = \sqrt{\dfrac{3}{38}}$.
Therefore, the normalised function is $\sqrt{\dfrac{3}{38}}\,(2x+i)$.
(ii)
$$\int_0^2 f^*(x)\,g(x)\,dx = \int_0^2 (2x-i)(5x-2ix)\,dx = \int_0^2 (10x^2 - 4ix^2 - 5ix - 2x)\,dx$$
$$= \left[\frac{10}{3}x^3 - \frac{4i}{3}x^3 - \frac{5i}{2}x^2 - x^2\right]_0^2 = \frac{80}{3} - \frac{32}{3}i - 10i - 4 = \frac{68}{3} - \frac{62}{3}i$$
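The function-space inner product can be checked by numerical integration. A sketch (the name `inner_product` and the midpoint rule are our choices, not from the text):

```python
# A numeric sketch: check (f, g) = integral of f*(x) g(x) over [0, 2]
# for f = 2x + i and g = 5x - 2ix, by a midpoint Riemann sum.
def inner_product(f, g, a, b, n=100_000):
    h = (b - a) / n
    total = 0j
    for k in range(n):
        x = a + (k + 0.5) * h
        total += f(x).conjugate() * g(x) * h
    return total

f = lambda x: 2 * x + 1j
g = lambda x: 5 * x - 2j * x
val = inner_product(f, g, 0.0, 2.0)
print(val)  # close to 68/3 - (62/3)i
```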
4.3 Norm
Let X be a vector space over K , real or complex number field. A real valued function ⋅ on X
is a norm on X (i.e., ⋅ : X → R ) if and only if the following conditions are satisfied:
(i) x ≥0 4.10
(ii) x = 0 if and only if x = 0 4.11
(iii) x+y ≤ x + y ∀ x, y ∈ X (Triangle inequality) 4.12
(iv) αx = α x ∀ x ∈ X and α ∈ C (Absolute homogeneity) 4.13
There are many ways of defining a norm on a vector space. In the case where $X = \mathbb{R}$, the real number line, the norm is the absolute value, $|x|$. This is the distance from the origin (x = 0). The Euclidean norm on the n-dimensional Euclidean space is
$$\|\mathbf{x}\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$$
More generally, the $L^p$ norm is defined on a class of spaces called $L^p$ spaces (see SAQ 4.1).
To see that the norm induced by an inner product satisfies the triangle inequality, note that
$$\|\mathbf{x}+\mathbf{y}\|^2 = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2 + 2(\mathbf{x}, \mathbf{y}) \le \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2 + 2\|\mathbf{x}\|\|\mathbf{y}\| = (\|\mathbf{x}\| + \|\mathbf{y}\|)^2$$
Taking the positive square root of both sides, as a norm must be non-negative,
$$\|\mathbf{x}+\mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|$$
If the norm of v in the vector space V is unity, such a vector is said to be normalised. In any
case, even if a vector is not normalised, we can normalise it by dividing by its norm.
In Physics, the most commonly used norm is the one induced by the inner product and this is the
one we shall be concerning ourselves with. The Euclidean norm is an example of such a norm. In
that case, A = ( A, A ) , the square root of the inner product of A with itself. We shall illustrate
with three cases:
If $\mathbf{a} = \begin{pmatrix}1\\0\\1\end{pmatrix}$, find its norm, and hence normalise it.
$$(\mathbf{a}, \mathbf{a}) = \begin{pmatrix}1 & 0 & 1\end{pmatrix}\begin{pmatrix}1\\0\\1\end{pmatrix} = 2$$
$$\|\mathbf{a}\| = \sqrt{(\mathbf{a}, \mathbf{a})} = \sqrt{2}$$
We see that a is not normalised.
However, $\mathbf{c} = \dfrac{\mathbf{a}}{\|\mathbf{a}\|} = \dfrac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\1\end{pmatrix}$ is normalised. (Check)
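Normalising by dividing by the norm can be sketched directly (the helper name `norm` is ours):

```python
# A sketch: the Euclidean norm induced by the inner product, and
# normalisation of a vector by dividing by its norm.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

a = [1, 0, 1]
c = [x / norm(a) for x in a]
print(norm(a), norm(c))  # sqrt(2) and 1.0
```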
Normalise $A = \begin{pmatrix}i & 0\\ 1 & 1\end{pmatrix}$.
$$A^T = \begin{pmatrix}i & 1\\ 0 & 1\end{pmatrix}, \qquad \overline{A^T} = \begin{pmatrix}-i & 1\\ 0 & 1\end{pmatrix}$$
$$\overline{A^T}A = \begin{pmatrix}-i & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix}i & 0\\ 1 & 1\end{pmatrix} = \begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}$$
$$\mathrm{Tr}(\overline{A^T}A) = 2 + 1 = 3$$
Therefore, $\|A\| = \sqrt{3}$.
A is not normalised, but $C = \dfrac{A}{\|A\|} = \dfrac{1}{\sqrt{3}}\begin{pmatrix}i & 0\\ 1 & 1\end{pmatrix}$ is normalised.
Case 3: The space of square integrable complex valued functions, Fs , over the interval
( a, b) . Let f ( x ) ∈ Fs , then we define
f = (f, f) 4.16
where $(f, f) = \displaystyle\int_a^b |f(x)|^2\,dx$
It is now obvious that we have to deal with a square integrable set of functions:
f
f might not be normalised, but h = is normalised. f is said to be normalisable. If for
(f, f)
instance the function is not square integrable, that would mean that we could be dividing f by an
infinite number. We say such a function is not normalisable.
The norm squared of $g(x)$ is
$$\|g(x)\|^2 = \int_0^2 g^*(x)\,g(x)\,dx = \int_0^2 (5x+2ix)(5x-2ix)\,dx = \int_0^2 29x^2\,dx = \left[\frac{29}{3}x^3\right]_0^2 = \frac{232}{3}$$
The normalised function is
$$\frac{g(x)}{\|g(x)\|} = \frac{5x-2ix}{\sqrt{232/3}} = \sqrt{\frac{3}{232}}\,(5x-2ix)$$
Summary of Study Session 4
In Study Session 4, you learnt the following:
1. How to calculate the inner product of two vectors in a given vector space.
2. How to find the norm of a vector in a given vector space.
3. How to normalise a given vector in a given vector space.
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
4. Hefferson, J. (2012). Linear Algebra, http://joshua.smcvt.edu/linearalgebra/book.pdf
5. Hefferson, J. (2012). Answers to Exercises,
http://joshua.smcvt.edu/linearalgebra/jhanswer.pdf
(iii) $A, B \in M_{mn}$ if $A = \begin{pmatrix}2 & 1 & 1\\ -1 & 3 & 1\end{pmatrix}$ and $B = \begin{pmatrix}-1 & -1 & 2\\ 1 & 3 & 1\end{pmatrix}$.
SAQ 4.6 (tests Learning Outcome 4.6)
Normalize the wave function $\psi = A\sin\dfrac{2\pi n x}{L}$, where $0 \le x \le L$ and n can take positive integral values.
Solutions to SAQs
SAQ 4.1 (tests Learning Outcome 4.1)
(i) The $L^p$ norm is defined on a class of spaces called $L^p$ spaces, and is defined as
$$\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{1/p}.$$
SAQ 4.2
The inner product of v , w ∈ V ,written as (v,w), has the following properties:
(i) (v,v) ≥ 0 4.1
(ii) (v,v) = 0 if and only if v = 0 4.2
(iii) (v,w) = (w,v)* (Conjugate symmetry) 4.3
(iv) (cv, w ) = c * ( v, w ) ; ( v, cw ) = c ( v, w ) 4.4
(v) ( v, w + z ) = ( v, w ) + ( v, z ) 4.5
(vi) ( v, w ) ≤ v w Cauchy-Schwarz inequality 4.6
SAQ 4.3
(i) $\begin{pmatrix}i\\-2\\2\end{pmatrix}$ and $\begin{pmatrix}2\\-1\\3\end{pmatrix}$
$$\begin{pmatrix}-i & -2 & 2\end{pmatrix}\begin{pmatrix}2\\-1\\3\end{pmatrix} = -2i + 2 + 6 = 8 - 2i$$
(ii) $ix^2 + 2$ and $2x - 3i$, $0 \le x \le 2$.
$$\int_0^2 (ix^2+2)^*(2x-3i)\,dx = \int_0^2 (-ix^2+2)(2x-3i)\,dx$$
$$= \int_0^2 (-2ix^3 - 3x^2 + 4x - 6i)\,dx$$
$$= \left[-\frac{i}{2}x^4 - x^3 + 2x^2 - 6ix\right]_0^2$$
$$= -8i - 8 + 8 - 12i = -20i$$
(iii) $A, B \in M_{mn}$ with $A = \begin{pmatrix}2 & 1 & 1\\ -1 & 3 & 1\end{pmatrix}$ and $B = \begin{pmatrix}-1 & -1 & 2\\ 1 & 3 & 1\end{pmatrix}$.
$$(A, B) = \mathrm{Tr}(A^T B) = \mathrm{Tr}\left[\begin{pmatrix}2 & -1\\ 1 & 3\\ 1 & 1\end{pmatrix}\begin{pmatrix}-1 & -1 & 2\\ 1 & 3 & 1\end{pmatrix}\right] = \mathrm{Tr}\begin{pmatrix}-3 & -5 & 3\\ 2 & 8 & 5\\ 0 & 2 & 3\end{pmatrix} = 8$$
SAQ 4.4
A real valued function ⋅ on X is a norm on X (i.e., ⋅ : X → R ) if and only if the following
conditions are satisfied:
(i) x ≥0 4.10
(ii) x = 0 if and only if x = 0 4.11
(iii) x+y ≤ x + y ∀ x, y ∈ X (Triangle inequality) 4.12
(iv) αx = α x ∀ x ∈ X and α ∈ C (Absolute homogeneity) 4.13
SAQ 4.5
(i) With $u = \begin{pmatrix}2+3i & -2\\ i & -3+i\end{pmatrix}$,
$$(u, u) = \mathrm{Tr}(u^\dagger u) = \mathrm{Tr}\left[\begin{pmatrix}2-3i & -i\\ -2 & -3-i\end{pmatrix}\begin{pmatrix}2+3i & -2\\ i & -3+i\end{pmatrix}\right]$$
$$= \mathrm{Tr}\begin{pmatrix}(2-3i)(2+3i) + (-i)(i) & (2-3i)(-2) + (-i)(-3+i)\\ (-2)(2+3i) + (-3-i)(i) & (-2)(-2) + (-3-i)(-3+i)\end{pmatrix}$$
$$= \mathrm{Tr}\begin{pmatrix}4+9+1 & -4+6i+3i+1\\ -4-6i-3i+1 & 4+9+1\end{pmatrix} = \mathrm{Tr}\begin{pmatrix}14 & -3+9i\\ -3-9i & 14\end{pmatrix}$$
$$= 14 + 14 = 28$$
The norm of u is $\sqrt{(u, u)} = \sqrt{28}$.
(ii) With $v = \begin{pmatrix}3 & -4-i\\ -i & 3\end{pmatrix}$,
$$(u, v) = \mathrm{Tr}(u^\dagger v) = \mathrm{Tr}\left[\begin{pmatrix}2-3i & -i\\ -2 & -3-i\end{pmatrix}\begin{pmatrix}3 & -4-i\\ -i & 3\end{pmatrix}\right]$$
$$= \mathrm{Tr}\begin{pmatrix}(2-3i)(3) + (-i)(-i) & (2-3i)(-4-i) + (-i)(3)\\ (-2)(3) + (-3-i)(-i) & (-2)(-4-i) + (-3-i)(3)\end{pmatrix}$$
$$= \mathrm{Tr}\begin{pmatrix}6-9i-1 & -11+10i-3i\\ -6+3i-1 & 8+2i-9-3i\end{pmatrix} = \mathrm{Tr}\begin{pmatrix}5-9i & -11+7i\\ -7+3i & -1-i\end{pmatrix}$$
$$= (5-9i) + (-1-i) = 4 - 10i$$
SAQ 4.6
$$\psi = A\sin\frac{2\pi n x}{L}, \quad 0 \le x \le L$$
$$(\psi, \psi) = A^2\int_0^L \sin^2\frac{2\pi n x}{L}\,dx = A^2\int_0^L \frac{1}{2}\left(1 - \cos\frac{4\pi n x}{L}\right)dx$$
The cosine term integrates to zero over the interval, so
$$(\psi, \psi) = \frac{A^2}{2}\left[x\right]_0^L = \frac{A^2 L}{2} = 1$$
Therefore, $A = \sqrt{\dfrac{2}{L}}$.
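The normalisation can be confirmed numerically for any L and n. A sketch (the name `norm_squared` and the midpoint rule are ours):

```python
# A numeric sketch: verify that A = sqrt(2/L) normalises
# psi = A sin(2*pi*n*x/L) on [0, L], using a midpoint Riemann sum.
import math

def norm_squared(L, n, steps=100_000):
    A = math.sqrt(2.0 / L)
    h = L / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += (A * math.sin(2 * math.pi * n * x / L)) ** 2 * h
    return total

print(norm_squared(L=3.0, n=2))  # close to 1.0
```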
SAQ 4.7
(i) $\begin{pmatrix}-2i & -1 & 3\end{pmatrix}\begin{pmatrix}2i\\-1\\3\end{pmatrix} = 4 + 1 + 9 = 14$, so the norm is $\sqrt{14}$.
(ii)
$$\int_0^1 (ix^2+2)^*(ix^2+2)\,dx = \int_0^1 (-ix^2+2)(ix^2+2)\,dx = \int_0^1 (4 + x^4)\,dx = \left[4x + \frac{x^5}{5}\right]_0^1 = 4 + \frac{1}{5} = \frac{21}{5}$$
$$\text{Norm} = \sqrt{\frac{21}{5}}$$
SAQ 4.8
Norm of $\begin{pmatrix}1\\2\\3\end{pmatrix}$ is $\sqrt{1+4+9} = \sqrt{14}$.
The normalised vector is $\dfrac{1}{\sqrt{14}}\begin{pmatrix}1\\2\\3\end{pmatrix}$.
Similarly, $\dfrac{1}{\sqrt{20}}\begin{pmatrix}-2\\0\\4\end{pmatrix}$ and $\dfrac{1}{\sqrt{6}}\begin{pmatrix}1\\2\\1\end{pmatrix}$ are normalised.
SAQ 4.9
(i) $(\mathbf{x}, \mathbf{x}) = 2x_1x_1 - x_1x_2 - x_2x_1 + x_2x_2 = 2x_1^2 + x_2^2 - 2x_1x_2$
$$= x_1^2 + (x_1^2 + x_2^2 - 2x_1x_2) = x_1^2 + (x_1 - x_2)^2 \ge 0$$
since the sum of two squares (of real numbers) cannot be negative.
(ii) $(\mathbf{x}, \mathbf{x}) = x_1^2 + (x_1 - x_2)^2 = 0$ if and only if $x_1 = 0$ and $x_1 = x_2$, i.e., $x_1 = x_2 = 0$, that is, x = 0.
(iii) $(\mathbf{x}, \mathbf{y}) = 2x_1y_1 - x_1y_2 - x_2y_1 + x_2y_2 = (\mathbf{y}, \mathbf{x})$
(iv) $(c\mathbf{x}, \mathbf{y}) = 2c^*x_1y_1 - c^*x_1y_2 - c^*x_2y_1 + c^*x_2y_2 = c^*(2x_1y_1 - x_1y_2 - x_2y_1 + x_2y_2) = c^*(\mathbf{x}, \mathbf{y})$
$(\mathbf{x}, c\mathbf{y}) = 2cx_1y_1 - cx_1y_2 - cx_2y_1 + cx_2y_2 = c(\mathbf{x}, \mathbf{y})$
(v) $(\mathbf{x}, \mathbf{y}+\mathbf{z}) = 2x_1(y_1+z_1) - x_1(y_2+z_2) - x_2(y_1+z_1) + x_2(y_2+z_2)$
$$= (2x_1y_1 - x_1y_2 - x_2y_1 + x_2y_2) + (2x_1z_1 - x_1z_2 - x_2z_1 + x_2z_2) = (\mathbf{x}, \mathbf{y}) + (\mathbf{x}, \mathbf{z})$$
SAQ 4.10
(i) $\|\mathbf{x}\| = \sqrt{x_1^2 + (x_1 - x_2)^2} \ge 0$
(ii) $\|\mathbf{x}\| = 0$ if and only if $x_1 = 0$ and $x_1 = x_2$, i.e., $x_1 = x_2 = 0$, that is, x = 0.
Study Session 5 Orthogonal Matrices; Change of Basis
Introduction
In this study session, you shall learn about orthogonal matrices because, as you shall soon see, among some other desirable properties, the columns of an orthogonal matrix form an orthonormal set. So also do the rows. The orthogonal matrix is so important in Physics because it gives the rotation matrix of a coordinate system about one of the axes. These special matrices also have some interesting properties that will really pique your interest. Rotation of a coordinate system ultimately results in a change of basis, although this is not the only way you can achieve a change of basis. Generally, therefore, we shall be looking at the matrix of transformation from one basis to another. This is so important because the basis for a particular vector space is not unique.
5.1 Orthogonal Matrices
A matrix Q such that $(Q\mathbf{a}, Q\mathbf{b}) = (\mathbf{a}, \mathbf{b})$ $\forall\,\mathbf{a}, \mathbf{b} \in E$, the set of n-dimensional Euclidean vectors, is called an orthogonal matrix. In other words, an orthogonal matrix preserves the inner product of two Euclidean vectors. You should find this interesting: it means the inner product remains the same in two different coordinate systems, if one of them is obtained by rotating the other about one axis. We will soon illustrate what we said earlier, that the rotation matrix is an orthogonal matrix.
Since $(Q\mathbf{a}, Q\mathbf{b}) = (\mathbf{a}, Q^T Q\mathbf{b})$, a necessary and sufficient condition for Q to be orthogonal is
$$QQ^T = I \qquad 5.1$$
or equivalently, by multiplying on the left by $Q^{-1}$,
$$Q^T = Q^{-1} \qquad 5.2$$
Note that
det(QQ T ) = det(Q ) det(Q T ) since det( AB ) = (det A)(det B ) 5.3
= det( Q ) det( Q ) since det AT = det A 5.4
= (det(Q )) 2 = det( I ) = 1
Hence,
det(Q ) = ±1 5.5
If $\det(Q) = 1$, then
$$\det(Q - I) = \det(Q - I)\det(Q^T) \quad (\text{since } \det(Q^T) = \det(Q) = 1) \qquad 5.6$$
$$= \det((Q - I)Q^T) = \det(QQ^T - Q^T) \qquad 5.7$$
$$= \det(I - Q^T) \quad (\text{since } QQ^T = I) \qquad 5.8$$
$$= \det((I - Q)^T) = \det(I - Q) \quad (\text{since } \det(A^T) = \det A) \qquad 5.9$$
$$= -\det(Q - I) \quad (\text{since for a } 3\times 3 \text{ matrix, } \det(-A) = -\det A) \qquad 5.10$$
$$= 0 \quad (\text{since } x = -x \text{ implies } x = 0)$$
Therefore, 1 is an eigenvalue, so that there exists $\mathbf{e}_3$ such that $Q\mathbf{e}_3 = 1\mathbf{e}_3 = \mathbf{e}_3$.
Choosing a basis in which $\mathbf{e}_3$ is the third basis vector, Q takes the form
$$Q = \begin{pmatrix}a & b & 0\\ c & d & 0\\ 0 & 0 & 1\end{pmatrix} \qquad 5.13$$
$$QQ^T = \begin{pmatrix}a^2+b^2 & ac+bd & 0\\ ac+bd & c^2+d^2 & 0\\ 0 & 0 & 1\end{pmatrix} = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix} \qquad 5.14$$
so that
$$a^2 + b^2 = 1 = c^2 + d^2 \qquad 5.15$$
$$ac + bd = 0 \qquad 5.16$$
Also,
$$\det(Q) = 1 = ad - bc \qquad 5.17$$
From equation (5.16),
$$b = -\frac{ac}{d} \qquad 5.18$$
Putting this in (5.17) gives
$$ad + \frac{ac^2}{d} = 1 \qquad 5.19$$
$\Rightarrow a(c^2 + d^2) = d \Rightarrow a = d$. Use $a = d$ in (5.18) to get $c = -b$.
Therefore,
$$Q = \begin{pmatrix}a & b & 0\\ -b & a & 0\\ 0 & 0 & 1\end{pmatrix} \qquad 5.20$$
with
$$a^2 + b^2 = 1 \qquad 5.21$$
Interpretation:
This corresponds to a rotation about the axis $\mathbf{e}_3$, i.e., k; the rotation takes place in the plane perpendicular to k.
Improper orthogonal matrices have determinant −1, and are equivalent to a reflection combined with a proper rotation.
Note that only proper orthogonal matrices represent rotations. In addition, the direction and axis of rotation are not as simple in all cases as in the example. We state without proof, the Rotation Matrix Theorem.
anticlockwise when v × Av points in your direction, where v is any vector in the plane
perpendicular to the axis of rotation.
Show that rule (iv) trivially gives us the same angle we deduced in the matrix in equation 5.22.
$$A = \begin{pmatrix}\cos\theta & \sin\theta & 0\\ -\sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix}$$
$$\frac{1}{2}[\mathrm{tr}(A) - 1] = \frac{1}{2}[(2\cos\theta + 1) - 1] = \cos\theta$$
meaning that the angle of rotation is the same as θ in the matrix.
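Both properties — orthogonality and the trace rule — can be checked numerically. A sketch (all helper names are ours):

```python
# A sketch: the rotation matrix A(theta) about the z-axis satisfies
# A A^T = I, and rule (iv) recovers cos(theta) from its trace.
import math

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(p, r):
    n = len(p)
    return [[sum(p[i][k] * r[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

theta = 0.7
A = rot_z(theta)
AT = [list(row) for row in zip(*A)]          # transpose
P = matmul(A, AT)                            # should be the identity
trace = sum(A[i][i] for i in range(3))
print(P, (trace - 1) / 2, math.cos(theta))   # identity; the last two agree
```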
Confirm that the matrix $A = \begin{pmatrix}\cos\theta & \sin\theta & 0\\ \sin\theta & -\cos\theta & 0\\ 0 & 0 & -1\end{pmatrix}$ is a rotation matrix. Also show that, irrespective of the angle θ, the rotation angle is π.
The determinant of the matrix is $-1\times(-\cos^2\theta - \sin^2\theta) = 1$ (expanding along the last row because of the ease it offers us, having only one non-zero element). Thus, the matrix is orthogonal and has determinant 1. It is a proper orthogonal matrix, a rotation matrix.
u1 (1 − cosθ ) − u 2 sin θ = 0 or − u1 sin θ + u 2 (1 + cosθ ) = 0
The axis is (sin θ ,1 − cos θ ,0) or (1 + cos θ , sin θ ,0) . Notice that this is not the z-axis. This
is partly because A33 ≠ 1 .
The matrix $\dfrac{1}{25}\begin{pmatrix}9 & 12 & -20\\ -20 & 15 & 0\\ 12 & 16 & 15\end{pmatrix}$ is a proper orthogonal matrix. Find the axis and angle of rotation.
$$A = \frac{1}{25}\begin{pmatrix}9 & 12 & -20\\ -20 & 15 & 0\\ 12 & 16 & 15\end{pmatrix}$$
The axis is given by
$$(A - I)\mathbf{u} = \frac{1}{25}\begin{pmatrix}-16 & 12 & -20\\ -20 & -10 & 0\\ 12 & 16 & -10\end{pmatrix}\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}$$
$$-16u_1 + 12u_2 - 20u_3 = 0$$
$$-20u_1 - 10u_2 = 0$$
$$12u_1 + 16u_2 - 10u_3 = 0$$
or
$$-4u_1 + 3u_2 - 5u_3 = 0$$
$$2u_1 + u_2 = 0$$
$$6u_1 + 8u_2 - 5u_3 = 0$$
From the second equation, $u_2 = -2u_1$. Substituting in the first,
$$-4u_1 + 3(-2u_1) - 5u_3 = 0$$
$$-5u_3 = 10u_1$$
$$u_3 = -2u_1$$
Hence, the axis of rotation is $(1, -2, -2)$. The angle of rotation follows from rule (iv):
$$\cos\theta = \frac{1}{2}[\mathrm{tr}(A) - 1] = \frac{1}{2}\left(\frac{9+15+15}{25} - 1\right) = \frac{1}{2}\cdot\frac{14}{25} = \frac{7}{25}$$
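The axis and angle can be verified directly: the axis must be fixed by A, and the trace gives the angle. A numeric sketch (variable names ours):

```python
# A numeric sketch: check that (1, -2, -2) is fixed by the rotation matrix
# of the example, and that cos(theta) = (tr A - 1)/2 = 7/25.
A = [[9 / 25, 12 / 25, -20 / 25],
     [-20 / 25, 15 / 25, 0.0],
     [12 / 25, 16 / 25, 15 / 25]]
u = [1.0, -2.0, -2.0]
Au = [sum(A[i][j] * u[j] for j in range(3)) for i in range(3)]
cos_theta = (sum(A[i][i] for i in range(3)) - 1) / 2
print(Au, cos_theta)  # close to [1, -2, -2] and 0.28 (= 7/25)
```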
Motivation
If Q is an orthogonal matrix, then, its rows are an orthonormal set. In addition, its columns are
also an orthonormal set.
For example, let
$$Q = \begin{pmatrix}\sqrt{2}/2 & \sqrt{6}/6 & -\sqrt{3}/3\\ 0 & \sqrt{6}/3 & \sqrt{3}/3\\ \sqrt{2}/2 & -\sqrt{6}/6 & \sqrt{3}/3\end{pmatrix}, \qquad Q^T = \begin{pmatrix}\sqrt{2}/2 & 0 & \sqrt{2}/2\\ \sqrt{6}/6 & \sqrt{6}/3 & -\sqrt{6}/6\\ -\sqrt{3}/3 & \sqrt{3}/3 & \sqrt{3}/3\end{pmatrix}$$
$$QQ^T = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$$
Hence, Q is an orthogonal matrix.
You can verify that the columns are orthonormal. You can also show that the rows are orthonormal.
5.2 Change of Basis
We are quite familiar with the usual basis in the Euclidean plane: i and j. We can write these as
$$\begin{pmatrix}1\\0\end{pmatrix} \ \text{and}\ \begin{pmatrix}0\\1\end{pmatrix}$$
We can also show that
$$\begin{pmatrix}1\\1\end{pmatrix} \ \text{and}\ \begin{pmatrix}1\\-1\end{pmatrix}$$
is also a basis for the Euclidean plane. These vectors are, respectively, i + j and i − j. We see that the basis for a vector space is not unique. We can easily construct a linear map (matrix) that takes a basis vector in one basis to another.
It follows that
a1 = c1u11 + c 2 u 21 + ... + c n u n1
a 2 = c1u12 + c 2 u 22 + ... + c n u n 2
. 5.24
.
.
a n = c1u1n + c 2 u 2 n + ... + c n u nn
$$\begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix} = \begin{pmatrix}u_{11} & u_{21} & \cdots & u_{n1}\\ u_{12} & u_{22} & \cdots & u_{n2}\\ \vdots & \vdots & & \vdots\\ u_{1n} & u_{2n} & \cdots & u_{nn}\end{pmatrix}\begin{pmatrix}c_1\\c_2\\ \vdots\\ c_n\end{pmatrix} \qquad 5.25$$
or
$$\begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix} = B\begin{pmatrix}c_1\\c_2\\ \vdots\\ c_n\end{pmatrix} \qquad 5.26$$
where B is a matrix formed by arranging the vectors (in columns) $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$ in order.
$$\begin{pmatrix}c_1\\c_2\\ \vdots\\ c_n\end{pmatrix} = B^{-1}\begin{pmatrix}a_1\\a_2\\ \vdots\\ a_n\end{pmatrix} = B^{-1}D\begin{pmatrix}d_1\\d_2\\ \vdots\\ d_n\end{pmatrix} \qquad 5.30$$
Conversely,
$$\begin{pmatrix}d_1\\d_2\\ \vdots\\ d_n\end{pmatrix} = D^{-1}B\begin{pmatrix}c_1\\c_2\\ \vdots\\ c_n\end{pmatrix} \qquad 5.31$$
Given the basis {(2, 3), (1, 4)}, write the matrix that will transform this basis to the basis {(0, 2), (−1, 5)}. What are the coordinates, in the new basis, of the vector whose coordinates in the first basis are $\begin{pmatrix}2\\6\end{pmatrix}$?
$$B = \begin{pmatrix}2 & 1\\ 3 & 4\end{pmatrix},\quad D = \begin{pmatrix}0 & -1\\ 2 & 5\end{pmatrix},\quad B^{-1} = \frac{1}{5}\begin{pmatrix}4 & -1\\ -3 & 2\end{pmatrix},\quad D^{-1} = \frac{1}{2}\begin{pmatrix}5 & 1\\ -2 & 0\end{pmatrix}$$
The matrix of transformation is
$$D^{-1}B = \frac{1}{2}\begin{pmatrix}5 & 1\\ -2 & 0\end{pmatrix}\begin{pmatrix}2 & 1\\ 3 & 4\end{pmatrix} = \frac{1}{2}\begin{pmatrix}13 & 9\\ -4 & -2\end{pmatrix}$$
so the new coordinates are
$$\begin{pmatrix}d_1\\d_2\end{pmatrix} = D^{-1}B\begin{pmatrix}2\\6\end{pmatrix} = \frac{1}{2}\begin{pmatrix}13 & 9\\ -4 & -2\end{pmatrix}\begin{pmatrix}2\\6\end{pmatrix} = \frac{1}{2}\begin{pmatrix}80\\-20\end{pmatrix} = \begin{pmatrix}40\\-10\end{pmatrix}$$
Check! Let us go back the way we came. We should recover our original coordinates in that basis. Now, we make use of the matrix $B^{-1}D$:
$$B^{-1}D = \frac{1}{5}\begin{pmatrix}4 & -1\\ -3 & 2\end{pmatrix}\begin{pmatrix}0 & -1\\ 2 & 5\end{pmatrix} = \frac{1}{5}\begin{pmatrix}-2 & -9\\ 4 & 13\end{pmatrix}$$
$$\begin{pmatrix}c_1\\c_2\end{pmatrix} = B^{-1}D\begin{pmatrix}d_1\\d_2\end{pmatrix} = \frac{1}{5}\begin{pmatrix}-2 & -9\\ 4 & 13\end{pmatrix}\begin{pmatrix}40\\-10\end{pmatrix} = \frac{1}{5}\begin{pmatrix}10\\30\end{pmatrix} = \begin{pmatrix}2\\6\end{pmatrix}$$
We recovered the original coordinates in the first basis.
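The round trip above can be sketched in code with 2×2 helpers (all names are ours):

```python
# A sketch: change of basis. Coordinates c in the old basis map to
# d = D^-1 B c, and back via c = B^-1 D d.
def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

def inv2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det], [-m[1][0] / det, m[0][0] / det]]

def matmul2(p, q):
    return [[p[0][0]*q[0][0] + p[0][1]*q[1][0], p[0][0]*q[0][1] + p[0][1]*q[1][1]],
            [p[1][0]*q[0][0] + p[1][1]*q[1][0], p[1][0]*q[0][1] + p[1][1]*q[1][1]]]

B = [[2, 1], [3, 4]]
D = [[0, -1], [2, 5]]
d = matvec(matmul2(inv2(D), B), [2, 6])   # coordinates in the new basis
c = matvec(matmul2(inv2(B), D), d)        # round trip back
print(d, c)  # close to [40.0, -10.0] and [2.0, 6.0]
```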
Summary of Study Session 5
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
4. Hefferson, J. (2012). Linear Algebra, http://joshua.smcvt.edu/linearalgebra/book.pdf
5. Hefferson, J. (2012). Answers to Exercises,
http://joshua.smcvt.edu/linearalgebra/jhanswer.pdf
Find the matrix of transformation between the bases $\left\{\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix}\right\}$ and $\left\{\begin{pmatrix}1\\1\end{pmatrix}, \begin{pmatrix}1\\-1\end{pmatrix}\right\}$. Hence, express the vector $\begin{pmatrix}3\\4\end{pmatrix}$ in the two different bases.
Solutions to SAQs
SAQ 5.1
A matrix Q such that (Qa, Qb) = (a, b) ∀ a, b ∈ E , the set of n-dimensional Euclidean vectors is
called an orthogonal matrix. In other words, an orthogonal matrix preserves the inner product
of two Euclidean vectors. As such, the determinant of an orthogonal matrix is either 1 (proper
orthogonal matrix) or − 1 (improper orthogonal matrix).
They are so important in the theory of vector spaces because their columns and rows are
orthonormal vectors.
SAQ 5.2
(i) $\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 0 & 0\end{pmatrix} = \begin{pmatrix}1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 1\end{pmatrix} \neq I$. The matrix is not an orthogonal matrix.
(ii) $\begin{pmatrix}0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0\end{pmatrix} = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$. The matrix is an orthogonal matrix.
(iii) $\dfrac{1}{3}\begin{pmatrix}-1 & 2 & 2\\ 2 & -1 & 2\\ 2 & 2 & -1\end{pmatrix}\cdot\dfrac{1}{3}\begin{pmatrix}-1 & 2 & 2\\ 2 & -1 & 2\\ 2 & 2 & -1\end{pmatrix} = \dfrac{1}{9}\begin{pmatrix}9 & 0 & 0\\ 0 & 9 & 0\\ 0 & 0 & 9\end{pmatrix} = I$ (the matrix is symmetric, so this is $QQ^T$). The matrix is an orthogonal matrix.
(iv) $\begin{pmatrix}-1 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix}-1 & 0 & 0\\ 0 & 0 & 1\\ 0 & -1 & 0\end{pmatrix} = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$. The matrix is an orthogonal matrix.
SAQ 5.3
The matrices (ii) and (iii) are proper orthogonal matrices, since each has a determinant of unity. The matrix (i) is not even an orthogonal matrix (SAQ 5.2). Matrix (iv) is an orthogonal matrix, but its determinant is −1. Hence, it is not a proper orthogonal matrix, but an improper orthogonal matrix.
SAQ 5.4
The axis is given by $(A - I)\mathbf{u} = 0$, or
$$\left[\frac{1}{3}\begin{pmatrix}-1 & 2 & 2\\ 2 & -1 & 2\\ 2 & 2 & -1\end{pmatrix} - I\right]\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix} = \frac{1}{3}\begin{pmatrix}-4 & 2 & 2\\ 2 & -4 & 2\\ 2 & 2 & -4\end{pmatrix}\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}$$
$$-2u_1 + u_2 + u_3 = 0 \qquad \text{(i)}$$
$$u_1 - 2u_2 + u_3 = 0 \qquad \text{(ii)}$$
$$u_1 + u_2 - 2u_3 = 0 \qquad \text{(iii)}$$
Subtracting (ii) from (i) gives $u_1 = u_2$; subtracting (iii) from (ii) gives $u_2 = u_3$. Hence $u_1 = u_2 = u_3$, and the axis of rotation is (1, 1, 1).
$$\cos\theta = \frac{1}{2}(\mathrm{tr}(A) - 1) = \frac{1}{2}\left(-\frac{3}{3} - 1\right) = \frac{1}{2}(-1 - 1) = -1$$
Hence, $\theta = 180°$.
SAQ 5.5
The matrix formed from the basis $S_U$ is $B = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$, and the matrix from the basis $S_1$ is $D = \begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$,
$$B^{-1} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}, \qquad D^{-1} = \frac{1}{-2}\begin{pmatrix}-1 & -1\\ -1 & 1\end{pmatrix} = \frac{1}{2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$$
The matrix of transformation from $S_U$ to $S_1$ is $D^{-1}B = D^{-1} = \dfrac{1}{2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$.
The matrix of transformation from $S_1$ to $S_U$ is $B^{-1}D = D = \begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$.
So, $\begin{pmatrix}3\\4\end{pmatrix}$ in $S_U$ transforms to $D^{-1}B\begin{pmatrix}3\\4\end{pmatrix} = \dfrac{1}{2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix}3\\4\end{pmatrix} = \dfrac{1}{2}\begin{pmatrix}7\\-1\end{pmatrix} = \begin{pmatrix}7/2\\-1/2\end{pmatrix}$ in $S_1$.
Crosscheck! Does this transform back the other way?
$\begin{pmatrix}7/2\\-1/2\end{pmatrix}$ in $S_1$ transforms to $B^{-1}D\cdot\dfrac{1}{2}\begin{pmatrix}7\\-1\end{pmatrix} = \dfrac{1}{2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}\begin{pmatrix}7\\-1\end{pmatrix} = \dfrac{1}{2}\begin{pmatrix}6\\8\end{pmatrix} = \begin{pmatrix}3\\4\end{pmatrix}$ in $S_U$.
SAQ 5.6
The matrix related to $S_a$ is $B = \begin{pmatrix}1 & 2 & 0\\ 0 & 1 & 3\\ 2 & 0 & 5\end{pmatrix}$, while the one related to $S_b$ is $D = \begin{pmatrix}1 & 2 & 1\\ 2 & -1 & 1\\ 1 & 1 & -1\end{pmatrix}$.
We need to get the inverse of D, since we need $D^{-1}B$. The inverse of a matrix is the transpose of the matrix of cofactors divided by the determinant. First, we evaluate the determinant of D:
$$\det D = 1(1 - 1) - 2(-2 - 1) + 1(2 + 1) = 9$$
The transpose of the matrix of cofactors, divided by the determinant, gives
$$D^{-1} = \frac{1}{9}\begin{pmatrix}0 & 3 & 3\\ 3 & -2 & 1\\ 3 & 1 & -5\end{pmatrix}$$
As a check,
$$DD^{-1} = \frac{1}{9}\begin{pmatrix}1 & 2 & 1\\ 2 & -1 & 1\\ 1 & 1 & -1\end{pmatrix}\begin{pmatrix}0 & 3 & 3\\ 3 & -2 & 1\\ 3 & 1 & -5\end{pmatrix} = \frac{1}{9}\begin{pmatrix}9 & 0 & 0\\ 0 & 9 & 0\\ 0 & 0 & 9\end{pmatrix} = I$$
The matrix of transformation is therefore
$$D^{-1}B = \frac{1}{9}\begin{pmatrix}0 & 3 & 3\\ 3 & -2 & 1\\ 3 & 1 & -5\end{pmatrix}\begin{pmatrix}1 & 2 & 0\\ 0 & 1 & 3\\ 2 & 0 & 5\end{pmatrix} = \frac{1}{9}\begin{pmatrix}6 & 3 & 24\\ 5 & 4 & -1\\ -7 & 7 & -22\end{pmatrix}$$
SAQ 5.7
$$B = \begin{pmatrix}1 & 1 & 1\\ 0 & 1 & 2\\ 1 & 0 & 1\end{pmatrix}, \qquad D = \begin{pmatrix}1 & -2 & 1\\ 2 & 0 & 2\\ 3 & 4 & 1\end{pmatrix}$$
$$B^{-1} = \frac{1}{2}\begin{pmatrix}1 & -1 & 1\\ 2 & 0 & -2\\ -1 & 1 & 1\end{pmatrix}, \qquad D^{-1} = \frac{1}{4}\begin{pmatrix}4 & -3 & 2\\ -2 & 1 & 0\\ -4 & 5 & -2\end{pmatrix}$$
The transformation matrix is therefore
$$D^{-1}B = \frac{1}{4}\begin{pmatrix}4 & -3 & 2\\ -2 & 1 & 0\\ -4 & 5 & -2\end{pmatrix}\begin{pmatrix}1 & 1 & 1\\ 0 & 1 & 2\\ 1 & 0 & 1\end{pmatrix} = \frac{1}{4}\begin{pmatrix}6 & 1 & 0\\ -2 & -1 & 0\\ -6 & 1 & 4\end{pmatrix}$$
Now, what would the vector $\begin{pmatrix}1\\2\\1\end{pmatrix}$ be in the new basis?
$$\begin{pmatrix}d_1\\d_2\\d_3\end{pmatrix} = D^{-1}B\begin{pmatrix}1\\2\\1\end{pmatrix} = \frac{1}{4}\begin{pmatrix}6 & 1 & 0\\ -2 & -1 & 0\\ -6 & 1 & 4\end{pmatrix}\begin{pmatrix}1\\2\\1\end{pmatrix} = \frac{1}{4}\begin{pmatrix}8\\-4\\0\end{pmatrix} = \begin{pmatrix}2\\-1\\0\end{pmatrix}$$
We could also transform back to the original basis. Obviously, we should get the vector we started with:
$$\begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix} = B^{-1}D\begin{pmatrix}2\\-1\\0\end{pmatrix} = \begin{pmatrix}1 & 1 & 0\\ -2 & -6 & 0\\ 2 & 3 & 1\end{pmatrix}\begin{pmatrix}2\\-1\\0\end{pmatrix} = \begin{pmatrix}1\\2\\1\end{pmatrix}$$
Study Session 6 Orthogonality and Orthonormality
Introduction
You are quite familiar with the usual basis vectors in the Euclidean plane, i and j. No doubt, you are aware that they are at right angles to each other. This is an example of orthogonality. Moreover, you know that each of them has a unit length. But then, that reminds you of normalised vectors. You can safely say that the two vectors are orthonormal: they are orthogonal to each other, and each is normalised. We shall, as usual, be extending this idea to some other vector spaces. Why is it so important to have orthogonality and at times orthonormality? It is easy to see that it is so for the usual vectors you are familiar with. You shall also learn how important it is that we have an orthonormal basis in quantum mechanics, even though the vectors in this case are in the space of square integrable functions. Once you have an orthonormal basis, you can find the coefficients of expansion of any wavefunction as a linear combination of the vectors in the basis (eigenstates). Moreover, if the wavefunction itself is normalised, you can then find the probability that the particle described by the wavefunction is in a particular eigenstate. In addition, you shall learn to construct an orthonormal set for a vector space from a given set of vectors in the space.
Suppose there exists a linearly independent set $\{\phi_i\}_{i=1}^{n}$, i.e., $\{\phi_1, \phi_2, \cdots, \phi_n\}$, such that
$$(\phi_i, \phi_j) = 0, \quad i \neq j \qquad 6.2$$
then $\{\phi_i\}_{i=1}^{n}$ is an orthogonal set.
Show that the usual basis in the 2-dimensional Euclidean plane is an orthonormal set.
i ⋅ i = j ⋅ j = 1 , and i ⋅ j = 0
Hence, the set is an orthonormal set.
Show that the set {i + j, i − j} is an orthogonal set, but not an orthonormal set.
$$\mathbf{i}+\mathbf{j} = \begin{pmatrix}1\\1\end{pmatrix} \ \text{and}\ \mathbf{i}-\mathbf{j} = \begin{pmatrix}1\\-1\end{pmatrix}$$
$$(\mathbf{i}+\mathbf{j}, \mathbf{i}-\mathbf{j}) = \begin{pmatrix}1 & 1\end{pmatrix}\begin{pmatrix}1\\-1\end{pmatrix} = 0$$
But the vectors are not normalised. They are an orthogonal set, but not an orthonormal set.
If any vector in the n-dimensional vector space V can be written as a linear combination of n vectors,
$$\mathbf{v} = c_1\phi_1 + c_2\phi_2 + \cdots + c_n\phi_n = \sum_{i=1}^{n} c_i\phi_i \qquad 6.5$$
such that the $c_i$'s are constants in the underlying field of the vector space and $(\phi_i, \phi_j) = \delta_{ij}$, i.e., 1 if $i = j$ and zero otherwise, then we say that $\{\phi_i\}_{i=1}^{n}$ is a complete orthonormal basis for V.
$$(\phi_j, \mathbf{v}) = \left(\phi_j, \sum_{i=1}^{n} c_i\phi_i\right) = \sum_{i=1}^{n} c_i(\phi_j, \phi_i) = \sum_{i=1}^{n} c_i\delta_{ij} = c_j \qquad 6.6$$
Moreover,
$$(\mathbf{v}, \mathbf{v}) = \left(\sum_{k=1}^{n} c_k\phi_k, \sum_{i=1}^{n} c_i\phi_i\right) = \sum_{k=1}^{n} c_k^*\sum_{i=1}^{n} c_i(\phi_k, \phi_i) = \sum_{i=1}^{n} |c_i|^2 \qquad 6.7$$
If the vector v is itself normalised, then
$$(\mathbf{v}, \mathbf{v}) = \sum_{i=1}^{n} |c_i|^2 = 1 \qquad 6.8$$
Thus, we can interpret $|c_i|^2$ as the probability that the system, which has n possible states, is in state i.
The usual basis in the 2-dimensional Euclidean plane is i and j. The basis $\{\mathbf{i}, \mathbf{j}\} \equiv \left\{\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\1\end{pmatrix}\right\}$ is an orthonormal basis.
(i) Expand the vector $\begin{pmatrix}2\\-3\end{pmatrix}$ in terms of this basis, and recover the coefficient of each basis vector in the expansion. Show that the squares of the coefficients are not useful in finding the probability that the system described by the vector $\begin{pmatrix}2\\-3\end{pmatrix}$ is in either state $\begin{pmatrix}1\\0\end{pmatrix}$ or state $\begin{pmatrix}0\\1\end{pmatrix}$.
(ii) Normalise $\begin{pmatrix}2\\-3\end{pmatrix}$, and then find the coefficients of the expansion of the normalised vector in terms of the basis vectors. Show that in this case, the squares of the magnitudes of the coefficients can be used as the probabilities that the system is in the corresponding states.
(iii) Derive the result in (ii) from that in (i).
(i)
$$\begin{pmatrix}2\\-3\end{pmatrix} = a_1\begin{pmatrix}1\\0\end{pmatrix} + a_2\begin{pmatrix}0\\1\end{pmatrix}$$
Clearly, $a_1 = 2$ and $a_2 = -3$. Notice that $|a_1|^2 + |a_2|^2 = 2^2 + (-3)^2 = 13 \neq 1$. This is because the vector $\begin{pmatrix}2\\-3\end{pmatrix}$ is not normalised.
(ii) Normalising it, we get the vector $\dfrac{1}{\sqrt{13}}\begin{pmatrix}2\\-3\end{pmatrix}$. Then,
$$\frac{1}{\sqrt{13}}\begin{pmatrix}2\\-3\end{pmatrix} = c_1\begin{pmatrix}1\\0\end{pmatrix} + c_2\begin{pmatrix}0\\1\end{pmatrix}$$
$$c_1 = \frac{2}{\sqrt{13}}, \qquad c_2 = -\frac{3}{\sqrt{13}}$$
Then,
$$|c_1|^2 + |c_2|^2 = \frac{4}{13} + \frac{9}{13} = 1$$
We can now interpret $|c_1|^2$ and $|c_2|^2$ respectively as the probability of finding the system described by the vector $\dfrac{1}{\sqrt{13}}\begin{pmatrix}2\\-3\end{pmatrix}$ in the state $\begin{pmatrix}1\\0\end{pmatrix}$ and $\begin{pmatrix}0\\1\end{pmatrix}$. Note that the wavefunction you are expanding must be normalised, and you must have an orthonormal set.
(iii) Notice that you might also have achieved the same answer by dividing the squared coefficients of the expansion of the unnormalised vector by the squared norm of the vector:
$$|c_1|^2 = \frac{|a_1|^2}{|a_1|^2 + |a_2|^2} = \frac{4}{13}; \qquad |c_2|^2 = \frac{|a_2|^2}{|a_1|^2 + |a_2|^2} = \frac{9}{13}$$
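The probability interpretation can be sketched directly (variable names are ours):

```python
# A sketch: expansion coefficients of a normalised vector in an orthonormal
# basis, and their squared magnitudes as probabilities.
import math

v = [2, -3]
norm = math.sqrt(sum(x * x for x in v))
c = [x / norm for x in v]        # coefficients in the basis {i, j}
probs = [x * x for x in c]
print(probs, sum(probs))         # close to [4/13, 9/13] and 1.0
```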
Now, we wish to construct a set of functions $\{\phi_1(x), \phi_2(x), \ldots\}$ from the given set $\{f_1(x), f_2(x), \ldots\}$, such that each pair of the functions will be orthogonal. So, we pick the first function $f_1(x)$ as the first element of the orthogonal set, i.e., $\phi_1(x) = f_1(x)$. Then, we form a linear combination of the next function and a scalar multiple of the first element of the orthogonal set that we just obtained: $\phi_2(x) = f_2(x) + \alpha\phi_1(x)$. This forms the second element of the orthogonal set. We determine the constant $\alpha$ by applying the fact that the two functions must be orthogonal, i.e.,
$$\int_a^b \phi_1^*(x)\,\phi_2(x)\,dx = 0.$$
Then, we get the third element as the third function plus a linear combination of the first two elements of the orthogonal set, $\phi_3(x) = f_3(x) + \alpha\phi_1(x) + \beta\phi_2(x)$. Do note that the constant $\alpha$ in this case is not the same as the one associated with the previous stage of finding $\phi_2(x)$. Thus, we need to determine two constants. We equate the two inner products, $\int_a^b \phi_1^*(x)\,\phi_3(x)\,dx$ and $\int_a^b \phi_2^*(x)\,\phi_3(x)\,dx$, to zero, as each of $\phi_1$ and $\phi_2$ must be orthogonal to $\phi_3$. This process is continued until all the required vectors are obtained. Then, normalising each element of the orthogonal set, we get the elements of the orthonormal set. An example of this process follows.
Construct an orthonormal set from the set $\{1, x, x^2, \ldots\}$ over the interval $-1 \le x \le 1$.
Thus, given the set $\{f_1, f_2, f_3, \ldots\}$, we want to construct an orthogonal set $\{\phi_1, \phi_2, \phi_3, \ldots\}$, i.e.,
$$\int_{-1}^{1} \phi_i(x)\,\phi_j(x)\,dx = 0, \ \text{if } i \neq j \qquad 6.9$$
Take $\phi_1 = 1$, and let $\phi_2 = x + \alpha\phi_1$. Then
$$\int_{-1}^{1} \phi_1\phi_2\,dx = \int_{-1}^{1} (x + \alpha)\,dx = 2\alpha = 0$$
so $\alpha = 0$ and $\phi_2 = x$. Next, let $\phi_3 = x^2 + \alpha x + \beta$. Orthogonality to $\phi_1$ requires
$$\int_{-1}^{1} (x^2 + \alpha x + \beta)\,dx = \left[\frac{x^3}{3} + \frac{\alpha x^2}{2} + \beta x\right]_{-1}^{1} = \frac{2}{3} + 2\beta = 0$$
or
$$\beta = -\frac{1}{3} \qquad 6.13$$
Orthogonality to $\phi_2$ requires
$$\int_{-1}^{1} x(x^2 + \alpha x + \beta)\,dx = \left[\frac{x^4}{4} + \frac{\alpha x^3}{3} + \frac{\beta x^2}{2}\right]_{-1}^{1} = \frac{2\alpha}{3} = 0$$
or
$$\alpha = 0 \qquad 6.14$$
Hence $\phi_3 = x^2 - \dfrac{1}{3}$.
To normalise $\phi_2 = x$:
$$\int_{-1}^{1} A^2 x^2\,dx = A^2\left[\frac{x^3}{3}\right]_{-1}^{1} = A^2\left(\frac{1}{3} + \frac{1}{3}\right) = \frac{2A^2}{3} = 1$$
Thus,
$$A = \sqrt{\frac{3}{2}}$$
In like manner,
$$\int_{-1}^{1} A^2\left(x^2 - \frac{1}{3}\right)^2 dx = 2A^2\int_0^1 \left(x^4 - \frac{2}{3}x^2 + \frac{1}{9}\right)dx = 1$$
from which
$$2A^2\left[\frac{x^5}{5} - \frac{2x^3}{9} + \frac{x}{9}\right]_0^1 = 2A^2\left(\frac{1}{5} - \frac{2}{9} + \frac{1}{9}\right) = \frac{8}{45}A^2 = 1$$
Therefore,
$$A = \sqrt{\frac{45}{8}}$$
The normalised function is
$$\psi_2 = \sqrt{\frac{45}{8}}\left(x^2 - \frac{1}{3}\right) \qquad 6.19$$
The projection of v onto u is
$$\mathrm{Pr}_{\mathbf{u}}\mathbf{v} = (\|\mathbf{v}\|\cos\theta)\,\hat{\mathbf{u}}$$
where θ is the angle between the two vectors and $\hat{\mathbf{u}}$ is the unit vector in the direction of u. Therefore, $\hat{\mathbf{u}} = \dfrac{\mathbf{u}}{\|\mathbf{u}\|}$. But $\mathbf{u}\cdot\mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta$, so $\cos\theta = \dfrac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|}$.
Hence,
$$\mathrm{Pr}_{\mathbf{u}}\mathbf{v} = \|\mathbf{v}\|\,\frac{\mathbf{u}\cdot\mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|}\,\frac{\mathbf{u}}{\|\mathbf{u}\|} = \frac{(\mathbf{u}, \mathbf{v})}{\|\mathbf{u}\|^2}\,\mathbf{u} = \frac{(\mathbf{u}, \mathbf{v})}{(\mathbf{u}, \mathbf{u})}\,\mathbf{u} \qquad 6.20$$
since $(\mathbf{u}, \mathbf{u}) = \|\mathbf{u}\|^2$.
We proceed as we did in the case of the example from function space.
$$\mathbf{u}_1 = \mathbf{v}_1 \qquad 6.21$$
The projection of $\mathbf{v}_2$ onto $\mathbf{u}_1$ is $\mathrm{Pr}_{\mathbf{u}_1}\mathbf{v}_2$. This is the component of $\mathbf{v}_2$ in the direction of $\mathbf{u}_1$. The component of $\mathbf{v}_2$ perpendicular (i.e., orthogonal) to $\mathbf{u}_1$ is (vector subtraction):
$$\mathbf{u}_2 = \mathbf{v}_2 - \mathrm{Pr}_{\mathbf{u}_1}\mathbf{v}_2 \qquad 6.22$$
Similarly, the component of $\mathbf{v}_3$ perpendicular to both $\mathbf{u}_1$ and $\mathbf{u}_2$ is
$$\mathbf{u}_3 = \mathbf{v}_3 - \mathrm{Pr}_{\mathbf{u}_1}\mathbf{v}_3 - \mathrm{Pr}_{\mathbf{u}_2}\mathbf{v}_3 \qquad 6.23$$
and in general,
$$\mathbf{u}_n = \mathbf{v}_n - \sum_{i=1}^{n-1} \mathrm{Pr}_{\mathbf{u}_i}\mathbf{v}_n \qquad 6.24$$
You are given the basis $\begin{pmatrix}1\\0\\1\end{pmatrix}$, $\begin{pmatrix}1\\1\\0\end{pmatrix}$, $\begin{pmatrix}1\\2\\1\end{pmatrix}$. With the standard Euclidean inner product, construct an orthonormal basis for the 3-dimensional Euclidean space.
$$\mathbf{v}_1 = \begin{pmatrix}1\\0\\1\end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix}1\\1\\0\end{pmatrix}, \quad \mathbf{v}_3 = \begin{pmatrix}1\\2\\1\end{pmatrix}$$
$$\mathbf{u}_1 = \mathbf{v}_1 = \begin{pmatrix}1\\0\\1\end{pmatrix}$$
$$\mathbf{u}_2 = \mathbf{v}_2 - \frac{(\mathbf{v}_2, \mathbf{u}_1)}{\|\mathbf{u}_1\|^2}\mathbf{u}_1$$
In this case,
$$(\mathbf{v}_2, \mathbf{u}_1) = \begin{pmatrix}1 & 1 & 0\end{pmatrix}\begin{pmatrix}1\\0\\1\end{pmatrix} = 1, \qquad \|\mathbf{u}_1\|^2 = \begin{pmatrix}1 & 0 & 1\end{pmatrix}\begin{pmatrix}1\\0\\1\end{pmatrix} = 2$$
Hence,
$$\mathbf{u}_2 = \begin{pmatrix}1\\1\\0\end{pmatrix} - \frac{1}{2}\begin{pmatrix}1\\0\\1\end{pmatrix} = \begin{pmatrix}1/2\\1\\-1/2\end{pmatrix}$$
Next,
$$(\mathbf{v}_3, \mathbf{u}_1) = \begin{pmatrix}1 & 2 & 1\end{pmatrix}\begin{pmatrix}1\\0\\1\end{pmatrix} = 2, \qquad \|\mathbf{u}_1\|^2 = 2$$
$$(\mathbf{v}_3, \mathbf{u}_2) = \begin{pmatrix}1 & 2 & 1\end{pmatrix}\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix} = \frac{1}{2} + 2 - \frac{1}{2} = 2$$
$$\|\mathbf{u}_2\|^2 = \begin{pmatrix}1/2 & 1 & -1/2\end{pmatrix}\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix} = \frac{1}{4} + 1 + \frac{1}{4} = \frac{3}{2}$$
Hence,
$$\mathbf{u}_3 = \mathbf{v}_3 - \frac{(\mathbf{v}_3, \mathbf{u}_1)}{\|\mathbf{u}_1\|^2}\mathbf{u}_1 - \frac{(\mathbf{v}_3, \mathbf{u}_2)}{\|\mathbf{u}_2\|^2}\mathbf{u}_2 = \begin{pmatrix}1\\2\\1\end{pmatrix} - \frac{2}{2}\begin{pmatrix}1\\0\\1\end{pmatrix} - \frac{2}{3/2}\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix}$$
$$= \begin{pmatrix}0\\2\\0\end{pmatrix} - \frac{4}{3}\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix} = \begin{pmatrix}-2/3\\2/3\\2/3\end{pmatrix} = \frac{2}{3}\begin{pmatrix}-1\\1\\1\end{pmatrix}$$
Check!
$$(\mathbf{u}_1, \mathbf{u}_2) = \begin{pmatrix}1 & 0 & 1\end{pmatrix}\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix} = 0$$
$$(\mathbf{u}_1, \mathbf{u}_3) = \frac{2}{3}\begin{pmatrix}1 & 0 & 1\end{pmatrix}\begin{pmatrix}-1\\1\\1\end{pmatrix} = 0$$
$$(\mathbf{u}_2, \mathbf{u}_3) = \frac{2}{3}\begin{pmatrix}1/2 & 1 & -1/2\end{pmatrix}\begin{pmatrix}-1\\1\\1\end{pmatrix} = \frac{2}{3}\left(-\frac{1}{2} + 1 - \frac{1}{2}\right) = 0$$
You get the orthonormal set by dividing each vector by its norm. We already have $\|\mathbf{u}_1\| = \sqrt{2}$ and $\|\mathbf{u}_2\| = \sqrt{3/2}$. You can easily show that $\|\mathbf{u}_3\| = \sqrt{4/3} = 2/\sqrt{3}$.
$$\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3\} = \left\{\frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\1\end{pmatrix},\ \sqrt{\frac{2}{3}}\begin{pmatrix}1/2\\1\\-1/2\end{pmatrix},\ \frac{\sqrt{3}}{2}\cdot\frac{2}{3}\begin{pmatrix}-1\\1\\1\end{pmatrix}\right\} = \left\{\frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\1\end{pmatrix},\ \frac{1}{\sqrt{6}}\begin{pmatrix}1\\2\\-1\end{pmatrix},\ \frac{1}{\sqrt{3}}\begin{pmatrix}-1\\1\\1\end{pmatrix}\right\}$$
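The whole procedure of equations 6.21–6.24, followed by normalisation, can be sketched in a few lines (the function names are ours):

```python
# A sketch: Gram-Schmidt orthogonalisation with the projection formula
# Pr_u v = (u, v)/(u, u) u, applied to the basis of the example, followed
# by normalisation of each resulting vector.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    ortho = []
    for v in vectors:
        u = list(v)
        for w in ortho:
            coeff = dot(w, v) / dot(w, w)      # projection coefficient
            u = [ui - coeff * wi for ui, wi in zip(u, w)]
        ortho.append(u)
    # normalise each vector
    return [[x / math.sqrt(dot(u, u)) for x in u] for u in ortho]

basis = [[1, 0, 1], [1, 1, 0], [1, 2, 1]]
w = gram_schmidt(basis)
print(w)  # mutually orthogonal unit vectors
```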
Summary of Study Session 6
In Study Session 6, you learnt the following:
1. Two vectors in a vector space are orthogonal if their inner product is zero.
2. Two vectors in a vector space are orthonormal if their inner product is zero, and each
vector is normalised, that is if its norm is unity.
3. Once a basis is given for a vector space, we can expand any vector in the vector space as
a linear combination of the basis vectors.
4. The coefficients of expansion can be obtained by taking the inner product of the
respective basis vector and the vector expanded in terms of the basis vectors.
5. If the basis is an orthonormal set, and the vector expanded in terms of the basis is
normalised, then the squares of the magnitude of the coefficients are the probabilities of
finding the system described by the function in the respective states.
6. How to construct an orthogonal, and hence orthonormal, set from a given set of vectors using the Gram-Schmidt orthogonalisation procedure.
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
4. Hefferson, J. (2012). Linear Algebra, http://joshua.smcvt.edu/linearalgebra/book.pdf
5. Hefferson, J. (2012). Answers to Exercises,
http://joshua.smcvt.edu/linearalgebra/jhanswer.pdf
(− 1 2 1) / 6 in terms of this basis. Calculate the probability that the system represented by
the vector is in each of these states.
Solutions to SAQs
SAQ 6.1
(i) A set $\{v_i\}$ is said to be an orthogonal set if for any pair $v_k$, $v_m$, the inner product $(v_k, v_m)$ is zero for $k \ne m$.
(ii) A set $\{v_i\}$ is an orthonormal set if for any pair $v_k$, $v_m$, the inner product $(v_k, v_m) = \delta_{km}$, that is, zero for $k \ne m$ and 1 for $k = m$.
(iii) The Gram-Schmidt orthogonalisation procedure is one by which vectors in a vector space on which an inner product is defined can be made mutually orthogonal.
SAQ 6.2
The set must be linearly independent:
$$\begin{vmatrix} 2 & 1 & 3 \\ -1 & 0 & 7 \\ 0 & -1 & -1 \end{vmatrix} = 2\begin{vmatrix} 0 & 7 \\ -1 & -1 \end{vmatrix} - 1\begin{vmatrix} -1 & 7 \\ 0 & -1 \end{vmatrix} + 3\begin{vmatrix} -1 & 0 \\ 0 & -1 \end{vmatrix} = 2(7) - 1(1) + 3(1) = 16 \ne 0$$
Since we have 3 linearly independent vectors in the 3-dimensional Euclidean space, the vectors
form a basis for the space.
To construct an orthogonal set from $\left\{\begin{pmatrix}2\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}, \begin{pmatrix}3\\7\\-1\end{pmatrix}\right\}$:
$$u_1 = v_1 = \begin{pmatrix}2\\-1\\0\end{pmatrix}, \qquad u_2 = v_2 - \frac{(v_2, u_1)}{\|u_1\|^2}\,u_1$$
In this case,
$$(v_2, u_1) = (1\ \ 0\ \ {-1})\begin{pmatrix}2\\-1\\0\end{pmatrix} = 2, \qquad \|u_1\|^2 = (2\ \ {-1}\ \ 0)\begin{pmatrix}2\\-1\\0\end{pmatrix} = 5$$
Hence,
$$u_2 = v_2 - \frac{(v_2, u_1)}{\|u_1\|^2}\,u_1 = \begin{pmatrix}1\\0\\-1\end{pmatrix} - \frac{2}{5}\begin{pmatrix}2\\-1\\0\end{pmatrix} = \begin{pmatrix}1\\0\\-1\end{pmatrix} - \begin{pmatrix}4/5\\-2/5\\0\end{pmatrix} = \begin{pmatrix}1/5\\2/5\\-1\end{pmatrix}$$
So far,
$$u_1 = \begin{pmatrix}2\\-1\\0\end{pmatrix}, \qquad u_2 = \begin{pmatrix}1/5\\2/5\\-1\end{pmatrix}$$
$$(v_3, u_1) = (3\ \ 7\ \ {-1})\begin{pmatrix}2\\-1\\0\end{pmatrix} = -1, \qquad \|u_1\|^2 = 5$$
$$(v_3, u_2) = (3\ \ 7\ \ {-1})\begin{pmatrix}1/5\\2/5\\-1\end{pmatrix} = \frac{3}{5} + \frac{14}{5} + 1 = \frac{22}{5}$$
$$\|u_2\|^2 = (1/5\ \ 2/5\ \ {-1})\begin{pmatrix}1/5\\2/5\\-1\end{pmatrix} = \frac{1}{25} + \frac{4}{25} + 1 = \frac{30}{25} = \frac{6}{5}$$
Hence,
$$u_3 = v_3 - \frac{(v_3, u_1)}{\|u_1\|^2}\,u_1 - \frac{(v_3, u_2)}{\|u_2\|^2}\,u_2 = \begin{pmatrix}3\\7\\-1\end{pmatrix} - \left(-\frac{1}{5}\right)\begin{pmatrix}2\\-1\\0\end{pmatrix} - \frac{11}{3}\begin{pmatrix}1/5\\2/5\\-1\end{pmatrix}$$
$$= \begin{pmatrix}3\\7\\-1\end{pmatrix} + \begin{pmatrix}2/5\\-1/5\\0\end{pmatrix} - \begin{pmatrix}11/15\\22/15\\-11/3\end{pmatrix} = \begin{pmatrix}40/15\\80/15\\8/3\end{pmatrix} = \begin{pmatrix}8/3\\16/3\\8/3\end{pmatrix}$$
SAQ 6.3
The norms of the vectors in the set are, respectively, $\sqrt{5}$, $\sqrt{6/5}$ and $\sqrt{128/3}$. Hence, the normalised set is,
$$\left\{\frac{1}{\sqrt{5}}\begin{pmatrix}2\\-1\\0\end{pmatrix},\ \sqrt{\frac{5}{6}}\begin{pmatrix}1/5\\2/5\\-1\end{pmatrix},\ \sqrt{\frac{3}{128}}\begin{pmatrix}8/3\\16/3\\8/3\end{pmatrix}\right\} = \left\{\frac{1}{\sqrt{5}}\begin{pmatrix}2\\-1\\0\end{pmatrix},\ \frac{1}{\sqrt{30}}\begin{pmatrix}1\\2\\-5\end{pmatrix},\ \frac{1}{\sqrt{6}}\begin{pmatrix}1\\2\\1\end{pmatrix}\right\}$$
Writing $\psi = \frac{1}{\sqrt{6}}\begin{pmatrix}-1\\2\\1\end{pmatrix} = c_1\phi_1 + c_2\phi_2 + c_3\phi_3$, the coefficients are
$$c_1 = (\phi_1, \psi) = \frac{1}{\sqrt{5}\sqrt{6}}\,(2\ \ {-1}\ \ 0)\begin{pmatrix}-1\\2\\1\end{pmatrix} = -\frac{4}{\sqrt{30}}$$
$$c_2 = (\phi_2, \psi) = \frac{1}{\sqrt{30}\sqrt{6}}\,(1\ \ 2\ \ {-5})\begin{pmatrix}-1\\2\\1\end{pmatrix} = \frac{-1 + 4 - 5}{\sqrt{180}} = -\frac{1}{\sqrt{45}}$$
$$c_3 = (\phi_3, \psi) = \frac{1}{\sqrt{6}\sqrt{6}}\,(1\ \ 2\ \ 1)\begin{pmatrix}-1\\2\\1\end{pmatrix} = \frac{-1 + 4 + 1}{6} = \frac{2}{3}$$
$$|c_1|^2 = \frac{16}{30} = \frac{8}{15}, \qquad |c_2|^2 = \frac{1}{45}, \qquad |c_3|^2 = \frac{4}{9}$$
You know we could have expanded the vector in terms of the basis vectors, but we chose to
make use of the fact that,
$$c_i = (\phi_i, \psi)$$
provided $\{\phi_k\}_{k=1}^n$ is an orthonormal set, and
$$\sum_{i=1}^{n} |c_i|^2 = 1$$
provided, in addition, the vector we expanded in terms of the orthonormal basis is itself
normalised, as it is the case in this SAQ.
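The two facts just used can be checked numerically. In this sketch the orthonormal basis is generated at random (via a QR factorisation, our choice, not part of the SAQ), while the state vector is the one from this SAQ:

```python
import numpy as np

# Rows of phi form an orthonormal basis (built from a QR factorisation).
rng = np.random.default_rng(0)
phi = np.linalg.qr(rng.normal(size=(3, 3)))[0].T

# The normalised state vector expanded in SAQ 6.3
psi = np.array([-1.0, 2.0, 1.0]) / np.sqrt(6.0)

c = phi @ psi           # expansion coefficients c_i = (phi_i, psi)
probs = np.abs(c)**2    # probabilities of the respective states

assert abs(probs.sum() - 1.0) < 1e-12   # |c_i|^2 sum to one for a normalised psi
assert np.allclose(c @ phi, psi)        # psi = sum_i c_i phi_i
```

Whatever orthonormal basis is chosen, the squared coefficients always sum to one, which is the content of point 5 of the summary.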
SAQ 6.4
The allowable wavefunctions are of the form $\sqrt{\frac{2}{a}}\sin\frac{n\pi x}{a}$, $n = 1, 2, 3$. Hence the expansion in terms of these eigenfunctions is,
$$\frac{i}{2}\sqrt{\frac{2}{a}}\sin\frac{\pi x}{a} + \frac{1}{\sqrt{2}}\sqrt{\frac{2}{a}}\sin\frac{2\pi x}{a} - \frac{1}{2}\sqrt{\frac{2}{a}}\sin\frac{3\pi x}{a}$$
Since $c_1 = i/2$, $c_2 = 1/\sqrt{2}$, $c_3 = -1/2$, the probability, respectively, that the particle will be found in these states ($n = 1, 2, 3$) is:
$$|c_1|^2 = \frac{i}{2}\times\frac{-i}{2} = \frac{1}{4}, \qquad |c_2|^2 = \frac{1}{\sqrt{2}}\times\frac{1}{\sqrt{2}} = \frac{1}{2}, \qquad |c_3|^2 = \left(-\frac{1}{2}\right)^2 = \frac{1}{4}$$
SAQ 6.5
We first put $\psi$ in the form of the allowable eigenfunctions $\phi_n = \sqrt{2/a}\,\sin(n\pi x/a)$:
$$\psi(x) = \frac{1}{\sqrt{5a}}\sin\frac{\pi x}{a} + \frac{A}{\sqrt{a}}\sin\frac{2\pi x}{a} + \frac{\sqrt{3}}{2\sqrt{a}}\sin\frac{3\pi x}{a}$$
$$= \frac{1}{\sqrt{10}}\sqrt{\frac{2}{a}}\sin\frac{\pi x}{a} + \frac{A}{\sqrt{2}}\sqrt{\frac{2}{a}}\sin\frac{2\pi x}{a} + \frac{3}{2\sqrt{6}}\sqrt{\frac{2}{a}}\sin\frac{3\pi x}{a}$$
$$= \frac{1}{\sqrt{10}}\,\phi_1 + \frac{A}{\sqrt{2}}\,\phi_2 + \frac{3}{2\sqrt{6}}\,\phi_3$$
For $\psi$ to be normalised, $(\psi(x), \psi(x)) = 1$, or
$$\left(\frac{1}{\sqrt{10}}\phi_1 + \frac{A}{\sqrt{2}}\phi_2 + \frac{3}{2\sqrt{6}}\phi_3,\ \frac{1}{\sqrt{10}}\phi_1 + \frac{A}{\sqrt{2}}\phi_2 + \frac{3}{2\sqrt{6}}\phi_3\right) = 1$$
$$\Rightarrow\quad \frac{1}{10} + \frac{A^2}{2} + \frac{9}{24} = 1$$
or
$$A^2 = 2\left(1 - \frac{9}{24} - \frac{1}{10}\right) = 2\cdot\frac{120 - 45 - 12}{120} = \frac{63}{60}$$
Hence, $A = \sqrt{\dfrac{63}{60}}$.
The normalised $\psi$ is therefore,
$$\psi(x) = \frac{1}{\sqrt{10}}\,\phi_1 + \sqrt{\frac{63}{120}}\,\phi_2 + \frac{3}{2\sqrt{6}}\,\phi_3$$
The possible values of the energy are:
$$E_1 = \frac{\hbar^2\pi^2}{2ma^2}, \qquad E_2 = \frac{2\hbar^2\pi^2}{ma^2}, \qquad E_3 = \frac{9\hbar^2\pi^2}{2ma^2}$$
and the probabilities, respectively, are $|c_1|^2 = \dfrac{1}{10}$, $|c_2|^2 = \dfrac{63}{120}$, $|c_3|^2 = \dfrac{9}{24}$.
SAQ 6.6
(a) Given a set $\{v_i\}_{i=1}^n$, if writing $a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0$ implies $a_1 = a_2 = \cdots = a_n = 0$, then we say $\{v_i\}_{i=1}^n$ is a linearly independent set.
$$a = \begin{pmatrix}1\\1\end{pmatrix}, \qquad b = \begin{pmatrix}1\\-1\end{pmatrix}$$
To check if they are linearly independent:
$$c_1\begin{pmatrix}1\\1\end{pmatrix} + c_2\begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}$$
Hence, $c_1 + c_2 = 0$ and $c_1 - c_2 = 0$. From the last equation, $c_1 = c_2$. Putting this in the first equation, $c_1 + c_1 = 0$, or $c_1 = 0$. Consequently, $c_2 = 0$. The set is linearly independent. Since there are two linearly independent vectors in two-dimensional space (the plane), they form a basis.
(b) To check orthogonality: $(a, b) = a^T b = (1\ \ 1)\begin{pmatrix}1\\-1\end{pmatrix} = 1 - 1 = 0$. They are orthogonal.
(c) Are they normalised? $(a, a) = (1\ \ 1)\begin{pmatrix}1\\1\end{pmatrix} = 2$, so $\|a\| = \sqrt{2}$. Similarly, $(b, b) = 2$, so $\|b\| = \sqrt{2}$. They are not normalised, but
$$\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix}, \qquad \frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix}$$
are normalised. The set $\left\{\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix}, \frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix}\right\}$ forms an orthonormal basis for $\mathbb{R}^2$.
In the usual basis $S_U$,
$$\begin{pmatrix}3\\4\end{pmatrix} = 3\mathbf{i} + 4\mathbf{j} = 3\begin{pmatrix}1\\0\end{pmatrix} + 4\begin{pmatrix}0\\1\end{pmatrix}$$
In the basis $S_1$,
$$\begin{pmatrix}3\\4\end{pmatrix} = \frac{\alpha}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix} + \frac{\beta}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix}$$
Hence, $\alpha + \beta = 3\sqrt{2}$ and $\alpha - \beta = 4\sqrt{2}$, so $2\alpha = 7\sqrt{2}$ and $2\beta = -\sqrt{2}$. Therefore,
$$\begin{pmatrix}3\\4\end{pmatrix} = \frac{7\sqrt{2}}{2}\cdot\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix} - \frac{\sqrt{2}}{2}\cdot\frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix} = \frac{7}{2}\begin{pmatrix}1\\1\end{pmatrix} - \frac{1}{2}\begin{pmatrix}1\\-1\end{pmatrix}$$
Should you require more explanation on this study session, please do not hesitate to contact your tutor.
Are you in need of General Help as regards your studies? Do not hesitate to contact
the DLI IAG Center by e-mail or phone on:
[email protected]
08033366677
Study Session 7 Fourier Series
Introduction
Would it not be nice to be able to decompose a signal into its constituent frequencies? On the
other hand, you would be glad if you could compose a complex signal or waveform from simple
sinusoids. This you achieve by understanding Fourier Series. Moreover, from your knowledge of
odd and even functions in Study Session 6, you should also expect that the Fourier series of an
even function should consist only of a constant and some cosine functions. Also, the Fourier
series of an odd function would definitely be a composition of only sine functions. These you
will learn eventually as Fourier cosine series and Fourier sine series respectively.
Figure 7.1 (a) and (b) show examples, respectively, of odd and even functions.
Fig. 7.1: (a) Odd function (b) Even function
An even function is symmetrical about the y axis. In other words, a plane mirror placed on the
axis will produce an image that is exactly the function across the axis. An odd function will need
to be mirrored twice, once along the y axis, and once along the x axis to achieve the same
effect.
A function f (x ) of x is said to be an odd function if f ( − x ) = − f ( x ) , e.g., sin x, x 2 n+1
A function f (x ) of x is said to be an even function if f ( − x ) = f ( x ) , e.g., cos x, x 2 n
where n = 0,1, 2, ....
Some real-valued functions are odd, some are even; the rest are neither odd nor even. However,
we can write any real-valued function as a sum of an odd and an even function. Let the function
be h(x) , then we can write
h( x ) = f ( x ) + g ( x ) 7.1
where f (x ) is odd and g (x ) is even. Then, f ( − x ) = − f ( x ) and g ( − x ) = g ( x )
h(− x) = f (− x) + g (− x) = − f ( x) + g ( x) 7.2
Adding equations (7.1) and (7.2) gives
h ( x ) + h( − x ) = 2 g ( x )
Subtracting equation (7.2) from equation (7.1) gives
h ( x ) − h( − x ) = 2 f ( x )
It follows, therefore, that
$$f(x) = \frac{h(x) - h(-x)}{2} \qquad (7.3)$$
and
$$g(x) = \frac{h(x) + h(-x)}{2} \qquad (7.4)$$
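The decomposition into odd and even parts can be demonstrated on a grid of points; this is a small illustrative sketch (the function names are ours), using $h(x) = e^x$, whose odd and even parts are the familiar $\sinh x$ and $\cosh x$:

```python
import numpy as np

def odd_even_parts(h, x):
    """Split h into its odd part f and even part g, per Eqs. (7.3)-(7.4)."""
    f = (h(x) - h(-x)) / 2   # odd part
    g = (h(x) + h(-x)) / 2   # even part
    return f, g

x = np.linspace(-2.0, 2.0, 101)
h = lambda t: np.exp(t)          # neither odd nor even
f, g = odd_even_parts(h, x)

assert np.allclose(f, np.sinh(x))   # odd part of e^x is sinh x
assert np.allclose(g, np.cosh(x))   # even part of e^x is cosh x
assert np.allclose(f + g, h(x))     # the parts recombine to h
```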
= sinh 2 x sin x
It is obvious that the odd function is a product of an odd function and an even function.
Likewise, the even function is a product of two odd functions.
Recall that the inner product of two complex-valued functions $f(x)$ and $g(x)$ in the space of square-integrable functions over the interval $a \le x \le b$ is defined as
$$(f, g) = \int_a^b f^*(x)\,g(x)\,dx.$$
Two functions f (x ) and g (x ) are said to be orthogonal over an interval a ≤ x ≤ b if their inner
product is zero.
$$= \frac{1}{2}\left[\frac{\sin(m-n)x}{m-n} + \frac{\sin(m+n)x}{m+n}\right]_{-\pi}^{\pi} = 0$$
the right hand side is called the Fourier series for the function f (x ) . Of course, we expect the
Fourier coefficients to be unique, i.e., f ( x ) ≠ g ( x ) implies they do not have identical Fourier
coefficients: a 0 , a1 , ...., b1 , b2 ,...
$$\int_{-\pi}^{\pi} f(x)\,dx = \int_{-\pi}^{\pi}\frac{1}{2}a_0\,dx + \int_{-\pi}^{\pi}\sum_{n=1}^{\infty}a_n\cos nx\,dx + \int_{-\pi}^{\pi}\sum_{n=1}^{\infty}b_n\sin nx\,dx = \frac{1}{2}a_0 x\Big|_{-\pi}^{\pi}$$
since each of the last two terms on the right is zero. Therefore,
$$\int_{-\pi}^{\pi} f(x)\,dx = a_0\pi$$
Only the second term on the right survives, and this only when $m = n$. Then,
$$\int_{-\pi}^{\pi} f(x)\cos mx\,dx = \int_{-\pi}^{\pi} a_m\cos^2 mx\,dx = \frac{a_m}{2}\int_{-\pi}^{\pi}(1 + \cos 2mx)\,dx = \frac{a_m}{2}\big(\pi - (-\pi)\big) = a_m\pi$$
implying that
$$a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx \qquad (7.7)$$
Fig. 7.2
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}a_n\cos nx + \sum_{n=1}^{\infty}b_n\sin nx$$
$$a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi} 1\cdot\cos mx\,dx = \frac{2}{\pi}\int_0^{\pi}\cos mx\,dx = \frac{2}{\pi m}\sin mx\Big|_0^{\pi} = 0, \quad m \ne 0$$
$$b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin mx\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi} 1\cdot\sin mx\,dx = -\frac{1}{m\pi}\cos mx\Big|_{-\pi}^{\pi} = 0$$
Therefore,
$$f(x) = 1 = \frac{1}{2}a_0 + a_1\cos x + a_2\cos 2x + \cdots + b_1\sin x + b_2\sin 2x + \cdots = \frac{1}{2}a_0$$
with $a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}1\,dx = 2$, so that $f(x) = \frac{1}{2}\times 2 = 1$.
Find the Fourier series for f ( x ) = x if x lies between − π and π . Hence, derive the
formula,
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \cdots$$
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} x\,dx = 0, \qquad a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} x\cos mx\,dx = 0$$
(the integrands being odd), and
$$b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} x\sin mx\,dx$$
We integrate the right hand side by parts, with
$$u = x,\ du = dx; \qquad dv = \sin mx\,dx,\ v = \int\sin mx\,dx = -\frac{1}{m}\cos mx$$
from which $\int u\,dv = uv - \int v\,du$ gives
$$b_m = \frac{1}{\pi}\left[-\frac{x}{m}\cos mx\right]_{-\pi}^{\pi} + \frac{1}{m\pi}\int_{-\pi}^{\pi}\cos mx\,dx = \frac{1}{\pi}\left(-\frac{\pi}{m}\cos m\pi - \frac{\pi}{m}\cos(-m\pi)\right)$$
$$= -\frac{2}{m}\cos m\pi = \frac{2}{m}\ \text{for } m \text{ odd and } -\frac{2}{m}\ \text{for } m \text{ even.}$$
Hence, the Fourier series is,
$$f(x) = \frac{2}{1}\sin x - \frac{2}{2}\sin 2x + \frac{2}{3}\sin 3x - \frac{2}{4}\sin 4x + \cdots = 2\left(\sin x - \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x - \frac{1}{4}\sin 4x + \cdots\right)$$
To derive the formula, we set $x = \pi/2$. Then,
$$f(\pi/2) = 2\left(1 - \frac{1}{2}\sin\pi + \frac{1}{3}\sin\frac{3\pi}{2} - \frac{1}{4}\sin 2\pi + \frac{1}{5}\sin\frac{5\pi}{2} - \cdots\right) = 2\left(1 - \frac{1}{3} + \frac{1}{5} - \cdots\right) = \frac{\pi}{2}$$
since $f(x) = x$. Hence,
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \cdots$$
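The convergence of this series to $\pi/4$ is easy to check numerically; the function name below is ours. Note how slowly the alternating series converges:

```python
import math

# Partial sums of 1 - 1/3 + 1/5 - ... converge (slowly) to pi/4.
def leibniz_partial_sum(n_terms):
    return sum((-1)**k / (2*k + 1) for k in range(n_terms))

approx_pi = 4 * leibniz_partial_sum(1_000_000)
assert abs(approx_pi - math.pi) < 1e-5   # error of order 1/n_terms
```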
Fig. 7.3: The square wave taking the value $-1$ on $(-\pi, 0)$ and $+1$ on $(0, \pi)$
$$a_0 = \frac{1}{\pi}\left[\int_{-\pi}^{0}(-1)\,dx + \int_{0}^{\pi}1\,dx\right] = 0$$
$$a_m = \frac{1}{\pi}\int_{-\pi}^{0}(-1)\cos mx\,dx + \frac{1}{\pi}\int_{0}^{\pi}1\cdot\cos mx\,dx = -\frac{1}{m\pi}\sin mx\Big|_{-\pi}^{0} + \frac{1}{m\pi}\sin mx\Big|_{0}^{\pi} = 0$$
$$b_m = \frac{1}{\pi}\int_{-\pi}^{0}(-1)\sin mx\,dx + \frac{1}{\pi}\int_{0}^{\pi}1\cdot\sin mx\,dx$$
$$= \frac{1}{m\pi}\cos mx\Big|_{-\pi}^{0} - \frac{1}{m\pi}\cos mx\Big|_{0}^{\pi} = \frac{1}{m\pi}[\cos 0 - \cos m\pi] - \frac{1}{m\pi}[\cos m\pi - \cos 0]$$
$$= \frac{2}{m\pi}\left[1 - (-1)^m\right] = \begin{cases}\dfrac{4}{m\pi}, & m \text{ odd} \\ 0, & m \text{ even}\end{cases}$$
Hence, the Fourier series is,
$$f(x) = \sum_{n=1}^{\infty} b_n\sin nx = \frac{4}{\pi}\sum_{n\ \mathrm{odd}}\frac{\sin nx}{n} = \frac{4}{\pi}\left(\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \cdots\right)$$
Notice that for even functions, only the a' s are non-zero. For the odd functions, however, only
the b' s survive.
Now, what value does the Fourier series attain at a discontinuity, say at x = 0 in the Fourier
series for
$$f(x) = \begin{cases} -1; & -\pi \le x \le 0 \\ \ \ 1; & \ \ 0 \le x \le \pi \end{cases}$$
This is the average of the two values at that point, in this case, (1 − 1) / 2 = 0 . Let us check: at x =
0, the Fourier series gives f ( x) = 0 .
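Both the series itself and its behaviour at the discontinuity can be verified with partial sums; the function name and the evaluation points below are our choices:

```python
import numpy as np

def square_wave_series(x, n_terms):
    """Partial Fourier series (4/pi) * sum over odd m of sin(m x)/m
    for the square wave f = -1 on (-pi, 0), +1 on (0, pi)."""
    s = np.zeros_like(np.asarray(x, dtype=float))
    for m in range(1, 2 * n_terms, 2):   # odd m only
        s += np.sin(m * x) / m
    return 4.0 / np.pi * s

# At the discontinuity x = 0 every term vanishes: the series gives the
# midpoint value (1 - 1)/2 = 0 exactly.
assert abs(square_wave_series(0.0, 500)) < 1e-12

# Away from the jump the partial sums approach the function value +1.
assert abs(square_wave_series(np.pi / 2, 5000) - 1.0) < 1e-3
```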
d. $\displaystyle\int_{-\pi}^{\pi}\cos mx\cos nx\,dx = \begin{cases}0, & m \ne n \\ \pi, & m = n\end{cases}$  (7.12)
e. $\displaystyle\int_{-\pi}^{\pi}\sin mx\sin nx\,dx = \begin{cases}0, & m \ne n \\ \pi, & m = n\end{cases}$  (7.13)
7.2.1 Conditions that must be fulfilled in order that a function f (x ) may be expanded as
a Fourier series
(i) The function must be periodic.
(ii) It must be single-valued and continuous, except possibly at a finite number of finite
discontinuities.
(iii) It must have only a finite number of maxima and minima within one period.
(iv) The integral over one period of f (x) must converge.
If these conditions are satisfied, then the Fourier series converges to f (x ) at all points where
f (x ) is continuous.
Note: At a point of finite discontinuity, the Fourier series converges to the value half-way
between the upper and the lower values.
7.3 Fourier Series of Odd and Even Functions
7.3.1 Fourier Sine Series
You would recall the series for the odd function sin x has only the odd powers of x. Moreover, in
Study Session 6, we found that an odd function has no even part. This should give you an idea
that perhaps the Fourier series of an odd function should have only the sine terms. For such a
function, it suffices to find the coefficients bn , and we can write the Fourier sine series as,
$$f(x) = \sum_{n=1}^{\infty} b_n\sin nx \qquad (7.15)$$
with the coefficients given by,
$$b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin mx\,dx \qquad (7.16)$$
But since $f(x)$ is odd, the integrand in equation (7.16) is even, and we can write,
$$b_m = \frac{2}{\pi}\int_{0}^{\pi} f(x)\sin mx\,dx \qquad (7.17)$$
and
$$a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx \qquad (7.20)$$
Since $f(x)$ is even, the integrand in equation (7.20) is even, and we can write,
$$a_m = \frac{2}{\pi}\int_{0}^{\pi} f(x)\cos mx\,dx \qquad (7.21)$$
7.5 The General form of Fourier Series
The general form of the Fourier series for a function $f(x)$ with period $L$ is
$$f(x) = \frac{a_0}{2} + \sum_{r=1}^{\infty}\left(a_r\cos\frac{2\pi rx}{L} + b_r\sin\frac{2\pi rx}{L}\right) \qquad (7.22)$$
where
$$a_r = \frac{2}{L}\int_{x_0}^{x_0+L} f(x)\cos\frac{2\pi rx}{L}\,dx \qquad (7.23)$$
and
$$b_r = \frac{2}{L}\int_{x_0}^{x_0+L} f(x)\sin\frac{2\pi rx}{L}\,dx \qquad (7.24)$$
For a Fourier sine series, we then make use of equations (7.15) and (7.17) and for a Fourier
cosine series, we make use of equations (7.18), (7.19) and (7.21).
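Equations (7.23) and (7.24) can be evaluated numerically for any periodic function. The sketch below (function and helper names are ours) checks the result against the series found earlier for $f(x) = x$ on $(-\pi, \pi)$, where $b_m = 2/m$ for $m$ odd and $-2/m$ for $m$ even:

```python
import numpy as np

def _trapezoid(y, x):
    """Composite trapezoidal rule on a given grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def fourier_coeffs(f, L, n_max, x0=0.0, n_grid=20001):
    """Numerically evaluate Eqs. (7.23)-(7.24): a_r and b_r for a
    function of period L, integrating over [x0, x0 + L]."""
    x = np.linspace(x0, x0 + L, n_grid)
    fx = f(x)
    a = [2.0 / L * _trapezoid(fx * np.cos(2*np.pi*r*x/L), x) for r in range(n_max + 1)]
    b = [2.0 / L * _trapezoid(fx * np.sin(2*np.pi*r*x/L), x) for r in range(1, n_max + 1)]
    return np.array(a), np.array(b)

# Check against f(x) = x on (-pi, pi): all a_r = 0, b = [2, -1, 2/3, ...]
a, b = fourier_coeffs(lambda x: x, L=2*np.pi, n_max=3, x0=-np.pi)
assert np.allclose(a, 0.0, atol=1e-5)
assert np.allclose(b, [2.0, -1.0, 2.0/3.0], atol=1e-5)
```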
Calculate the half-range Fourier sine series for the function f ( x ) = cos x ; 0 ≤ x ≤ π .
Sketch the sum of the first three terms and comment on your result.
Fig. 7.4: Graph of $f(x) = \cos x$; $0 \le x \le \pi$
For the half-range Fourier sine series, we need to expand the function into an odd function (Fig.
7.5).
Fig. 7.5: Expansion of the function in Fig. 7.4 into an odd function
$$b_m = \frac{2}{\pi}\int_0^{\pi} f(x)\sin mx\,dx = \frac{2}{\pi}\int_0^{\pi}\cos x\sin mx\,dx = \frac{2}{\pi}\cdot\frac{1}{2}\int_0^{\pi}\left[\sin(m+1)x + \sin(m-1)x\right]dx$$
$$= \frac{1}{\pi}\left[-\frac{1}{m+1}\cos(m+1)x - \frac{1}{m-1}\cos(m-1)x\right]_0^{\pi}$$
$$= -\frac{1}{\pi}\left[\frac{1}{m+1}\big(\cos(m+1)\pi - 1\big) + \frac{1}{m-1}\big(\cos(m-1)\pi - 1\big)\right]$$
$$= -\frac{1}{\pi}\left[\frac{1}{m+1}\big[(-1)^{m+1} - 1\big] + \frac{1}{m-1}\big[(-1)^{m-1} - 1\big]\right]$$
$$= \frac{1}{\pi}\left[\frac{1}{m+1}\big[(-1)^{m} + 1\big] + \frac{1}{m-1}\big[(-1)^{m} + 1\big]\right]$$
$$= \frac{1}{\pi}\cdot\frac{(m-1)\big[(-1)^m + 1\big] + (m+1)\big[(-1)^m + 1\big]}{m^2 - 1} = \frac{2m\big[(-1)^m + 1\big]}{\pi(m^2 - 1)}; \quad m \ne 1$$
$$= \begin{cases}0, & m \text{ odd} \\ \dfrac{4m}{\pi(m^2 - 1)}, & m \text{ even}\end{cases}$$
For $m = 1$, $b_1 = \dfrac{2}{\pi}\displaystyle\int_0^{\pi}\cos x\sin x\,dx = 0$.
Hence, the half-range Fourier sine series for $\cos x$ is,
$$\cos x = \frac{8}{\pi}\left(\frac{1}{3}\sin 2x + \frac{2}{15}\sin 4x + \frac{3}{35}\sin 6x + \cdots\right)$$
The sketch of the sum of the first three terms is shown in Fig. 7.8.
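The coefficients just derived can be verified by direct numerical integration of $b_m = \frac{2}{\pi}\int_0^\pi \cos x\,\sin mx\,dx$; the grid size below is our choice:

```python
import numpy as np

# Check b_m = 4m / (pi (m^2 - 1)) for m even, and 0 for m odd (including m = 1).
x = np.linspace(0.0, np.pi, 200001)
cosx = np.cos(x)
for m in range(1, 7):
    integrand = cosx * np.sin(m * x)
    b_m = 2.0/np.pi * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2.0)
    expected = 4.0*m / (np.pi * (m*m - 1)) if m % 2 == 0 else 0.0
    assert abs(b_m - expected) < 1e-8
```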
where we have made use of the fact that e irx = cos rx + i sin rx .
References
1. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
2. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
b. Find the Fourier coefficients of the function x 2 ( 2π 2 − x 2 ) over the interval − π ≤ x ≤ π .
(a) Write the Fourier series for a periodic function of x between − π and π .
(b) Write the general and the complex forms of Fourier series.
Solutions to SAQs
SAQ 7.1
(a) An even function is one such that f ( x) = f ( − x ) , while an odd function is such that
f ( x) = − f (− x) .
(b) (i) is odd, being the product of two even functions and an odd function.
(ii) is an even function, a product of two even functions.
(iii) is an even function: $\sec(-x) = \dfrac{1}{\cos(-x)} = \dfrac{1}{\cos x} = \sec x$
SAQ 7.2
(i) $h(x) = e^{-x}\cosh x$, so $h(-x) = e^{x}\cosh(-x) = e^{x}\cosh x$.
$$f(x) = \frac{1}{2}[h(x) - h(-x)] = \frac{e^{-x}\cosh x - e^{x}\cosh x}{2} = -\frac{e^{x} - e^{-x}}{2}\cosh x = -\sinh x\cosh x$$
$$g(x) = \frac{1}{2}[h(x) + h(-x)] = \frac{e^{-x}\cosh x + e^{x}\cosh x}{2} = \frac{e^{x} + e^{-x}}{2}\cosh x = \cosh^2 x$$
SAQ 7.3
(i) $\displaystyle\int_{-a}^{a} x^{2n+1}\,dx = 0$, the integrand being an odd function.
(ii) $\displaystyle\int_{-a}^{a} x^{2n}\,dx = 2\int_{0}^{a} x^{2n}\,dx = 2\left[\frac{x^{2n+1}}{2n+1}\right]_0^a = \frac{2a^{2n+1}}{2n+1}$
SAQ 7.4
(i) $\displaystyle\int_{-\pi}^{\pi}\sin mx\cos nx\,dx = 0$, the integrand being an odd function.
(ii)
$$= \frac{1}{2}\left[\frac{\sin(m-n)x}{m-n} - \frac{\sin(m+n)x}{m+n}\right]_{-\pi}^{\pi} = 0$$
SAQ 7.5
$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left(a_k\cos\frac{k\pi x}{L} + b_k\sin\frac{k\pi x}{L}\right)$$
$$a_k = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{k\pi x}{L}\,dx, \quad k = 0, 1, 2, \ldots$$
$$b_k = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{k\pi x}{L}\,dx, \quad k = 1, 2, \ldots$$
The n = 0 case is not needed since the integrand in the formula for b0 is sin 0 .
When $x = \pi/2$, then,
$$f(\pi/2) = 1 + \pi/2 = 1 + 2\left(1 - \frac{1}{2}\sin\pi + \frac{1}{3}\sin\frac{3\pi}{2} - \frac{1}{4}\sin 2\pi + \frac{1}{5}\sin\frac{5\pi}{2} - \cdots\right)$$
$$= 1 + 2\left(1 - \frac{1}{3} + \frac{1}{5} - \cdots\right)$$
Hence,
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \cdots$$
Thus, the sum of the infinite series on the right is π / 4 , or we say the series converges to π / 4 .
On the other hand, we can say that the value of the irrational number π is given by the infinite
series
$$\pi = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \cdots\right)$$
SAQ 7.6
A general formula for the Fourier series of a function on an interval [c, c + T ] is:
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{2n\pi x}{T} + b_n\sin\frac{2n\pi x}{T}\right)$$
$$a_n = \frac{2}{T}\int_{c}^{c+T} f(x)\cos\frac{2n\pi x}{T}\,dx, \qquad b_n = \frac{2}{T}\int_{c}^{c+T} f(x)\sin\frac{2n\pi x}{T}\,dx$$
Here the interval is $[0, 1]$, so $T = 1$ and
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos 2n\pi x + b_n\sin 2n\pi x\right)$$
The function f (x ) is odd, so the cosine coefficients will all equal zero. Nevertheless, a 0 should
still be calculated separately.
$$a_0 = 2\int_0^1 x\,dx = 1$$
$$b_n = 2\int_0^1 x\sin 2n\pi x\,dx = 2\left[-\frac{x\cos 2n\pi x}{2n\pi} + \frac{\sin 2n\pi x}{(2n\pi)^2}\right]_0^1 = -\frac{1}{n\pi}$$
$$f(x) = \frac{1}{2} - \sum_{n=1}^{\infty}\frac{\sin 2n\pi x}{n\pi}$$
SAQ 7.7
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}|x|\,dx = \frac{2}{\pi}\int_0^{\pi}x\,dx = \pi$$
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}|x|\cos nx\,dx = \frac{2}{\pi}\int_0^{\pi}x\cos nx\,dx = \frac{2}{\pi}\left[\frac{x\sin nx}{n}\Big|_0^{\pi} - \int_0^{\pi}\frac{\sin nx}{n}\,dx\right] = \frac{2}{\pi}\,\frac{\cos nx}{n^2}\Big|_0^{\pi} = \frac{2\big((-1)^n - 1\big)}{\pi n^2}$$
$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}|x|\sin nx\,dx = \frac{1}{\pi}\int_{-\pi}^{0}(-x)\sin nx\,dx + \frac{1}{\pi}\int_{0}^{\pi}x\sin nx\,dx = \frac{\pi(-1)^n}{\pi n} + \frac{-\pi(-1)^n}{\pi n} = 0$$
So,
$$|x| = \frac{\pi}{2} + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n - 1}{n^2}\cos nx = \frac{\pi}{2} - \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{\cos\big((2n+1)x\big)}{(2n+1)^2} = \frac{\pi}{2} - \frac{4}{\pi}\left(\cos x + \frac{1}{9}\cos 3x + \frac{1}{25}\cos 5x + \cdots\right)$$
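Because the coefficients fall off like $1/n^2$, this series converges quickly and is easy to check pointwise; the function name and sample points below are ours:

```python
import math

# Check |x| = pi/2 - (4/pi) * sum cos((2n+1)x)/(2n+1)^2 at a few points.
def abs_series(x, n_terms=20000):
    s = sum(math.cos((2*n + 1) * x) / (2*n + 1)**2 for n in range(n_terms))
    return math.pi/2 - 4.0/math.pi * s

for x in (0.0, 1.0, -2.0, math.pi/2):
    assert abs(abs_series(x) - abs(x)) < 1e-3
```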
SAQ 7.8
a. $F(x) = \displaystyle\int_0^x [2f(s) - a_0]\,ds$, so
$$F(\pi) - F(-\pi) = \int_0^{\pi}[2f(s) - a_0]\,ds - \int_0^{-\pi}[2f(s) - a_0]\,ds = \int_0^{\pi}[2f(s) - a_0]\,ds + \int_{-\pi}^{0}[2f(s) - a_0]\,ds$$
$$= \int_{-\pi}^{\pi}[2f(s) - a_0]\,ds = 2\sum_{n=1}^{\infty}\int_{-\pi}^{\pi}\left[a_n\cos ns + b_n\sin ns\right]ds = 0$$
b. Since the function is even, $b_n = 0$, and
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x^2(2\pi^2 - x^2)\cos nx\,dx = \frac{2}{\pi}\int_0^{\pi} x^2(2\pi^2 - x^2)\cos nx\,dx = 4\pi\int_0^{\pi} x^2\cos nx\,dx - \frac{2}{\pi}\int_0^{\pi} x^4\cos nx\,dx$$
Integrating by parts,
$$\int_0^{\pi} x^2\cos nx\,dx = \frac{2\pi(-1)^n}{n^2}, \qquad \int_0^{\pi} x^4\cos nx\,dx = (-1)^n\left(\frac{4\pi^3}{n^2} - \frac{24\pi}{n^4}\right)$$
Hence,
$$a_n = \frac{8\pi^2(-1)^n}{n^2} - \frac{8\pi^2(-1)^n}{n^2} + \frac{48(-1)^n}{n^4} = \frac{48(-1)^n}{n^4}$$
SAQ 7.9
(a)
$$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}a_n\cos nx + \sum_{n=1}^{\infty}b_n\sin nx$$
$$a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos mx\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi}1\cdot\cos mx\,dx = \frac{2}{\pi}\int_0^{\pi}\cos mx\,dx = \frac{2}{\pi m}\sin mx\Big|_0^{\pi} = 0$$
$$b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin mx\,dx = \frac{1}{\pi}\int_{-\pi}^{\pi}1\cdot\sin mx\,dx = -\frac{1}{m\pi}\cos mx\Big|_{-\pi}^{\pi} = 0$$
(b) The general form of the Fourier series for a function $f(x)$ with period $L$ is
$$f(x) = \frac{a_0}{2} + \sum_{r=1}^{\infty}\left(a_r\cos\frac{2\pi rx}{L} + b_r\sin\frac{2\pi rx}{L}\right)$$
where
$$a_r = \frac{2}{L}\int_{x_0}^{x_0+L} f(x)\cos\frac{2\pi rx}{L}\,dx, \qquad b_r = \frac{2}{L}\int_{x_0}^{x_0+L} f(x)\sin\frac{2\pi rx}{L}\,dx$$
The complex form is
$$f(x) = \sum_{r=-\infty}^{\infty} c_r\,e^{2\pi i rx/L}$$
where we have made use of the fact that $e^{irx} = \cos rx + i\sin rx$.
SAQ 7.10
Conditions that must be fulfilled in order that a function f (x ) may be expanded as a Fourier
series:
(i) The function must be periodic.
(ii) It must be single-valued and continuous, except possibly at a finite number of finite
discontinuities.
(iii) It must have only a finite number of maxima and minima within one period.
(iv) The integral over one period of f ( x) must converge.
If these conditions are satisfied, then the Fourier series converges to f (x ) at all points where
f (x ) is continuous. At a point of finite discontinuity, the Fourier series converges to the value
half-way between the upper and the lower values.
SAQ 7.11
Fig. 7.6: Graph of f ( x ) = sin x; 0 ≤ x ≤ π
Since what we need is the half-range Fourier cosine series, we expand the function into an even
function as shown in Fig. 7.7.
Fig. 7.7: Expansion of the function in Fig. 7.6 into an even function
For $m = 0$,
$$a_0 = \frac{2}{\pi}\int_0^{\pi} f(x)\,dx = \frac{2}{\pi}\int_0^{\pi}\sin x\,dx = \frac{2}{\pi}\left[-\cos x\right]_0^{\pi} = -\frac{2}{\pi}\left[\cos\pi - \cos 0\right] = \frac{2(1+1)}{\pi} = \frac{4}{\pi}$$
$$a_m = \frac{2}{\pi}\int_0^{\pi} f(x)\cos mx\,dx = \frac{2}{\pi}\int_0^{\pi}\sin x\cos mx\,dx = \frac{2}{\pi}\cdot\frac{1}{2}\int_0^{\pi}\left[\sin(m+1)x + \sin(1-m)x\right]dx$$
Even though we could have carried out the integration in the usual way, we have decided to
make use of the procedure in ITQ, by writing the second term in the integrand as
sin(1 − m ) x = − sin( m − 1) x .
Hence,
$$a_m = \frac{2}{\pi}\cdot\frac{1}{2}\int_0^{\pi}\left[\sin(m+1)x - \sin(m-1)x\right]dx = \frac{1}{\pi}\left[-\frac{1}{m+1}\cos(m+1)x + \frac{1}{m-1}\cos(m-1)x\right]_0^{\pi}$$
$$= -\frac{1}{\pi}\left[\frac{1}{m+1}\big(\cos(m+1)\pi - 1\big) - \frac{1}{m-1}\big(\cos(m-1)\pi - 1\big)\right]$$
$$= -\frac{1}{\pi}\left[\frac{1}{m+1}\big[(-1)^{m+1} - 1\big] - \frac{1}{m-1}\big[(-1)^{m-1} - 1\big]\right]$$
$$= \frac{1}{\pi}\left[\frac{1}{m+1}\big[(-1)^m + 1\big] - \frac{1}{m-1}\big[(-1)^m + 1\big]\right]$$
$$= \frac{1}{\pi}\cdot\frac{(m-1)\big[(-1)^m + 1\big] - (m+1)\big[(-1)^m + 1\big]}{m^2 - 1} = -\frac{2\big[(-1)^m + 1\big]}{\pi(m^2 - 1)}; \quad m \ne 1$$
$$= \begin{cases}0, & m \text{ odd} \\ -\dfrac{4}{\pi(m^2 - 1)}, & m \text{ even}\end{cases}$$
Hence, the half-range Fourier cosine series for $\sin x$ is,
$$\sin x = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos nx = \frac{2}{\pi} - \frac{4}{\pi}\left(\frac{1}{3}\cos 2x + \frac{1}{15}\cos 4x + \frac{1}{35}\cos 6x + \cdots\right)$$
Observation
The graphs for the sum of the first three (thin curve) and the first five (thick curve) terms are
displayed in Fig. 7.8.
Fig. 7.8: The graph of the sum of the first 3 (thin) and the sum of the first 5 terms (thick)
Observation: As the number of terms becomes large, the function approximates a sine function
as shown in Fig. 7.9 for the sum of the first 120 terms.
Fig. 7.9: The graph of the sum of the first 120 terms
Study Session 8 Fourier Transform
Introduction
In Study Session 7, we explored Fourier series, and mentioned that Fourier series works only
with periodic functions. What happens if the function of interest is not periodic? In that case, we
make recourse to the Fourier transform. Thus, the Fourier transform is the generalisation of Fourier
series to include non-periodic functions. In the forced harmonic oscillator, Fourier series comes
to the rescue when non-sinusoidal driving forces are involved. The driving forces are
decomposed by Fourier methods into sine waves of separate frequency and phase. The system
response is then determined for each frequency. Finally the inverse Fourier transform is used to
determine the overall response. On the other hand, Engineers often use the Laplace Transform
for the same purpose.
If we allow $T$ to tend to infinity, and recall that $\Delta\omega = \frac{2\pi}{T}$, $\Delta\omega$ tends to zero, and we can replace the summation in Equation (8.1) with an integral. Thus, the infinite sum of terms in the Fourier series becomes an integral, and the coefficients $c_r$ become functions of the continuous variable $\omega$, as follows:
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt \qquad (8.3)$$
and the inverse Fourier transform as
$$f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega t}\,d\omega \qquad (8.4)$$
Just as we have the pair $(t, \omega)$, we could also get involved in the pair $(x, k)$:
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx \qquad (8.5)$$
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-a}^{a} c\,e^{-i\omega t}\,dt = \frac{c}{\sqrt{2\pi}}\int_{-a}^{a} e^{-i\omega t}\,dt = \frac{c}{\sqrt{2\pi}}\cdot\frac{1}{-i\omega}\left[e^{-i\omega t}\right]_{-a}^{a} = c\sqrt{\frac{2}{\pi}}\,\frac{\sin\omega a}{\omega}$$
Let us find the Fourier transform of the box-car function,
$$f(x) = \begin{cases}1, & |x| \le a \\ 0, & \text{otherwise}\end{cases}$$
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-a}^{a} e^{-ikx}\,dx = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{-ik}\left(e^{-ika} - e^{ika}\right) = \sqrt{\frac{2}{\pi}}\,\frac{\sin ka}{k}$$
We can rewrite this as,
$$F(k) = a\sqrt{\frac{2}{\pi}}\,\frac{\sin ka}{ka}$$
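This result can be confirmed by evaluating the transform integral numerically (the grid and test wavenumbers below are our choices):

```python
import numpy as np

# Numerical check of F(k) = sqrt(2/pi) * sin(ka)/k for the box-car of
# half-width a, with the convention F(k) = (1/sqrt(2 pi)) * integral.
a = 1.0
x = np.linspace(-a, a, 200001)
for k in (0.5, 1.0, 3.0):
    integrand = np.exp(-1j * k * x)   # f(x) = 1 on [-a, a]
    F = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(x)) / np.sqrt(2*np.pi)
    expected = np.sqrt(2/np.pi) * np.sin(k*a) / k
    assert abs(F - expected) < 1e-8
```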
The graph of $f(x)$ and its Fourier transform $F(k)$ are shown in Fig. 8.1.
The inverse Fourier transform of $\sqrt{\frac{2}{\pi}}\frac{\sin ka}{k}$ is
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\sqrt{\frac{2}{\pi}}\,\frac{\sin ka}{k}\,e^{ikx}\,dk = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{e^{ikx}\sin ka}{k}\,dk$$
But this should be equal to the original function whose Fourier transform we took. It follows, therefore, that
$$\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{e^{ikx}\sin ka}{k}\,dk = \begin{cases}1, & |x| < a \\ \tfrac{1}{2}, & x = \pm a \\ 0, & |x| > a\end{cases}$$
The value assumed at the discontinuity, very much as is the case in Fourier series, is halfway up the discontinuity, $\frac{1}{2}(1 - 0) = \frac{1}{2}$.
Expanding $e^{ikx} = \cos kx + i\sin kx$,
$$\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{(\cos kx + i\sin kx)\sin ka}{k}\,dk = \begin{cases}1, & |x| < a \\ \tfrac{1}{2}, & x = \pm a \\ 0, & |x| > a\end{cases}$$
Thus, equating the real parts gives,
$$\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\cos kx\sin ka}{k}\,dk = \begin{cases}1, & |x| < a \\ \tfrac{1}{2}, & x = \pm a \\ 0, & |x| > a\end{cases}$$
and the imaginary parts give,
$$\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sin kx\sin ka}{k}\,dk = 0$$
This has provided us with a way of integrating the functions $\frac{\cos kx\sin ka}{k}$ and $\frac{\sin kx\sin ka}{k}$ with respect to $k$, functions which ordinarily could not have been integrated otherwise.
The Dirac delta function $\delta(x - x_0)$ vanishes everywhere except at $x = x_0$. For the case $x_0 = 0$,
$$\delta(x) = \begin{cases}0, & x \ne 0 \\ \infty, & x = 0\end{cases} \qquad (8.9)$$
The Dirac delta function satisfies the conditions:
$$\int_{-\infty}^{\infty}\delta(x - x_0)\,dx = 1 \qquad (8.10)$$
This means that even though the delta function is a 'spike' of zero width and infinite height, the integral is taken to be unity.
$$\int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,dx = f(x_0) \quad \text{(the sifting property)} \qquad (8.11)$$
Notice that the Dirac delta function sifts out $f(x)$ at the point where the 'spike' is located, that is at $x = x_0$.
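The sifting property can be illustrated by modelling the 'spike' as a narrow normalised Gaussian; the width `eps` and the test function $\cos x$ are illustrative choices of ours:

```python
import numpy as np

# Sifting property: integral of f(x) * delta(x - x0) dx = f(x0).
def delta_approx(x, x0, eps):
    """Narrow normalised Gaussian standing in for delta(x - x0)."""
    return np.exp(-((x - x0) / eps)**2 / 2) / (eps * np.sqrt(2*np.pi))

x = np.linspace(-10.0, 10.0, 400001)
x0 = 1.5
fd = np.cos(x) * delta_approx(x, x0, eps=1e-2)
integral = float(np.sum((fd[1:] + fd[:-1]) / 2) * (x[1] - x[0]))

assert abs(integral - np.cos(x0)) < 1e-4   # sifts out f(x0) = cos(1.5)
```

As `eps` shrinks, the approximation to $f(x_0)$ improves, which is the sense in which Equation (8.11) holds.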
If $g(x)$ has simple zeros at $x = x_i$, then
$$\delta\big(g(x)\big) = \sum_i \frac{\delta(x - x_i)}{|g'(x_i)|}$$
Evaluate $\displaystyle\int_{-\infty}^{\infty}\delta(x^2 - a^2)\,dx$, where $a$ is a constant.
Here $g(x) = x^2 - a^2$ vanishes at $x = \pm a$, and $|g'(\pm a)| = 2|a|$. Therefore,
$$\int_{-\infty}^{\infty}\delta(x^2 - a^2)\,dx = \frac{1}{2|a|} + \frac{1}{2|a|} = \frac{1}{|a|}$$
8.3.2 The Relationship between the Dirac Delta function and Fourier transforms
Putting Equation (8.3) in Equation (8.4),
$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega\,e^{i\omega t}\int_{-\infty}^{\infty} du\,f(u)\,e^{-i\omega u} \qquad (8.14)$$
Comparing with the equation defining the sifting property (Equation 8.11),
$$\int_{-\infty}^{\infty} dx\,f(x)\,\delta(x - x_0) = f(x_0) \quad \text{(the sifting property)}$$
we can identify
$$\delta(t - u) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega(t - u)}\,d\omega$$
For the box-car function on $[-1, 1]$,
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-1}^{1} 1\cdot e^{-i\omega t}\,dt = -\frac{1}{\sqrt{2\pi}}\,\frac{e^{-i\omega t}}{i\omega}\Big|_{-1}^{1} = \frac{2}{\sqrt{2\pi}}\cdot\frac{e^{i\omega} - e^{-i\omega}}{2i\,\omega} = \frac{2}{\sqrt{2\pi}}\,\frac{\sin\omega}{\omega}$$
This is the sinc function. The box-car function as well as its Fourier transform (the sinc function) are shown in Fig. 8.2.
Fig. 8.2: The box-car function and its Fourier transform.
What is the inverse Fourier transform of the box-car function
$$F(\omega) = \begin{cases}1, & -d \le \omega \le d \\ 0, & \text{otherwise}\end{cases}$$
$$\frac{1}{\sqrt{2\pi}}\int_{-d}^{d} 1\cdot e^{i\omega t}\,d\omega = \frac{1}{\sqrt{2\pi}}\,\frac{e^{i\omega t}}{it}\Big|_{-d}^{d} = \frac{2d}{\sqrt{2\pi}}\cdot\frac{e^{itd} - e^{-itd}}{2idt} = \frac{2d}{\sqrt{2\pi}}\,\frac{\sin dt}{dt}$$
The graphs of the function and its inverse Fourier transform are shown in Fig. 8.3. It is clear that as $d$ tends to infinity, the peak at $t = 0$ tends to infinity and the function becomes narrower about $t = 0$. Conversely, the Fourier transform of the sinc function on the right is the box-car function on the left in Fig. 8.3.
Fig. 8.3: The Fourier transform of the sinc function is the box-car function
(ii) Integration
$$\mathcal{F}\left[\int^t f(s)\,ds\right] = \frac{1}{i\omega}F(\omega) + 2\pi c\,\delta(\omega) \qquad (8.20)$$
where the term $2\pi c\,\delta(\omega)$ is the Fourier transform of the constant of integration associated with the indefinite integral.
(iii) Scaling
$$\mathcal{F}[f(at)] = \frac{1}{|a|}\,F\!\left(\frac{\omega}{a}\right) \qquad (8.21)$$
(v) Duality
Let us build a new time-domain function $g(t) = F(t)$ by exchanging the roles of time and frequency. Then, with the symmetric convention used here,
$$G(\omega) = f(-\omega)$$
For $f(t) = e^{-\alpha t}$, $t > 0$ (and zero otherwise),
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} e^{-\alpha t}e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} e^{-(i\omega + \alpha)t}\,dt = -\frac{1}{\sqrt{2\pi}}\,\frac{e^{-(i\omega + \alpha)t}}{\alpha + i\omega}\Big|_0^{\infty} = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{\alpha + i\omega}$$
We can write this expression as an amplitude multiplied by a phase, as we would any complex number. Since the modulus of $\alpha + i\omega$ is $\sqrt{\alpha^2 + \omega^2}$ and its argument is $\phi(\omega) = \tan^{-1}(\omega/\alpha)$, we can write,
$$F(\omega) = \frac{1}{\sqrt{2\pi}\sqrt{\alpha^2 + \omega^2}}\,e^{-i\phi(\omega)} = A(\omega)\,e^{-i\phi(\omega)}$$
$F(\omega)$ is a complex number with amplitude $A(\omega)$ and phase angle $-\phi(\omega) = -\tan^{-1}(\omega/\alpha)$.
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-1}^{1} 1\cdot e^{-i\omega t}\,dt = -\frac{1}{\sqrt{2\pi}}\,\frac{e^{-i\omega t}}{i\omega}\Big|_{-1}^{1} = \frac{2}{\sqrt{2\pi}}\cdot\frac{e^{i\omega} - e^{-i\omega}}{2i\,\omega} = \sqrt{\frac{2}{\pi}}\,\frac{\sin\omega}{\omega}$$
This is the sinc function. The box-car function as well as its Fourier transform are shown in Fig. 8.2, but now with $d$ equal to 1.
Find an expression for the Fourier transform of the displacement of the forced damped harmonic oscillator,
$$\frac{d^2x(t)}{dt^2} + 2\beta\frac{dx(t)}{dt} + \omega_0^2\,x(t) = f(t)$$
where all symbols have their usual meanings, with $f(t)$ an input signal. If the forcing term is a delta function located at $t = t_0$, write an integral expression for $x(t)$.
Taking the Fourier transform (using the differentiation property) of both sides,
$$-\omega^2 X(\omega) + 2\beta i\omega X(\omega) + \omega_0^2 X(\omega) = F(\omega)$$
Solving for $X(\omega)$,
$$X(\omega) = \frac{F(\omega)}{\omega_0^2 - \omega^2 + 2i\beta\omega}$$
For $f(t) = \delta(t - t_0)$, $F(\omega) = \frac{1}{\sqrt{2\pi}}e^{-i\omega t_0}$, so
$$X(\omega) = \frac{1}{\sqrt{2\pi}}\cdot\frac{e^{-i\omega t_0}}{\omega_0^2 - \omega^2 + 2i\beta\omega}, \qquad x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\omega(t - t_0)}}{\omega_0^2 - \omega^2 + 2i\beta\omega}\,d\omega$$
References
1. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
2. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
Self-Assessment Questions for Study Session 8
You have now completed this study session. You may now assess how well you have achieved
the Learning Outcomes by answering the following questions. Write your answers in your Study
Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your
answers with the Solutions to the Self-Assessment Questions at the end of this study session.
Find the Fourier transform of $f(t) = \dfrac{\alpha}{\sqrt{\pi}}\,e^{-\alpha^2 t^2}$.
Solutions to SAQs
SAQ 8.1
$$\int_{-\infty}^{\infty}\delta(3x - 2)\,x^2\,dx = \frac{1}{3}\left(\frac{2}{3}\right)^2 = \frac{4}{27}$$
since $\delta(3x - 2) = \frac{1}{3}\,\delta\!\left(x - \frac{2}{3}\right)$.
SAQ 8.2
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{\alpha}{\sqrt{\pi}}\,e^{-\alpha^2 t^2}e^{-i\omega t}\,dt = \frac{\alpha}{\sqrt{2}\,\pi}\int_{-\infty}^{\infty} e^{-(\alpha^2 t^2 + i\omega t)}\,dt$$
We complete the square to evaluate the integral:
$$\alpha^2 t^2 + i\omega t = (\alpha t + \beta)^2 - \gamma, \qquad 2\alpha\beta t = i\omega t \ \Rightarrow\ \beta = \frac{i\omega}{2\alpha}, \qquad \gamma = \beta^2 = -\frac{\omega^2}{4\alpha^2}$$
Thus,
$$F(\omega) = \frac{\alpha}{\sqrt{2}\,\pi}\,e^{-\frac{\omega^2}{4\alpha^2}}\int_{-\infty}^{\infty} e^{-\left(\alpha t + \frac{i\omega}{2\alpha}\right)^2}\,dt$$
Let $x = \alpha t + \dfrac{i\omega}{2\alpha} \Rightarrow dx = \alpha\,dt$:
$$F(\omega) = \frac{\alpha}{\sqrt{2}\,\pi}\,e^{-\frac{\omega^2}{4\alpha^2}}\cdot\frac{1}{\alpha}\int_{-\infty}^{\infty} e^{-x^2}\,dx = \frac{1}{\sqrt{2}\,\pi}\,e^{-\frac{\omega^2}{4\alpha^2}}\times\sqrt{\pi} = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{\omega^2}{4\alpha^2}}$$
SAQ 8.3
$$\mathcal{F}[t f(t)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} t f(t)\,e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,i\frac{d}{d\omega}\left(e^{-i\omega t}\right)dt = i\frac{d}{d\omega}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt\right] = i\frac{d}{d\omega}\mathcal{F}[f(t)]$$
SAQ 8.4
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-|t|}\,e^{-i\omega t}\,dt$$
But $|t|$ is v-shaped and centred at $t = 0$, so the exponent $-|t|$ equals $t$ for $t < 0$ and $-t$ for $t > 0$. Hence,
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{0} e^{t}e^{-i\omega t}\,dt + \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty} e^{-t}e^{-i\omega t}\,dt = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{0} e^{(1-i\omega)t}\,dt + \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty} e^{(-1-i\omega)t}\,dt$$
$$= \frac{e^{(1-i\omega)t}}{(1-i\omega)\sqrt{2\pi}}\Big|_{-\infty}^{0} + \frac{e^{(-1-i\omega)t}}{(-1-i\omega)\sqrt{2\pi}}\Big|_{0}^{\infty} = \frac{1}{(1-i\omega)\sqrt{2\pi}} + \frac{1}{(1+i\omega)\sqrt{2\pi}} = \frac{2}{(1+\omega^2)\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}}\,\frac{1}{1+\omega^2}$$
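This Lorentzian result can be checked by direct numerical integration over a wide, finely sampled grid (the truncation at $|t| = 40$ and the test frequencies are our choices; $e^{-40}$ is negligible):

```python
import numpy as np

# Numerical check: the Fourier transform of e^{-|t|} is sqrt(2/pi)/(1 + w^2).
t = np.linspace(-40.0, 40.0, 800001)
f = np.exp(-np.abs(t))
for w in (0.0, 1.0, 2.5):
    g = f * np.exp(-1j * w * t)
    F = np.sum((g[1:] + g[:-1]) / 2 * np.diff(t)) / np.sqrt(2*np.pi)
    assert abs(F - np.sqrt(2/np.pi) / (1 + w**2)) < 1e-6
```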
SAQ 8.5
Fourier transform of the delta function:
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-i\omega t}\,\delta(t)\,dt = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-i\omega t}\,\delta(t - 0)\,dt = \frac{1}{\sqrt{2\pi}}\,e^{-i\omega(0)} = \frac{1}{\sqrt{2\pi}}$$
Now, $\mathcal{F}\!\left[\dfrac{df}{dt}\right] = i\omega\,\mathcal{F}(f) = i\omega F(\omega)$. Therefore, for the square pulse,
$$\frac{df}{dt} = \delta(t + T/2) - \delta(t - T/2) \quad\Rightarrow\quad \mathcal{F}\!\left[\frac{df}{dt}\right] = \mathcal{F}\big(\delta(t + T/2) - \delta(t - T/2)\big)$$
But $\mathcal{F}\big(f(t - t_0)\big) = \mathcal{F}\big(f(t)\big)\,e^{-i\omega t_0}$. Thus,
$$\mathcal{F}\!\left[\frac{df}{dt}\right] = \left(e^{-i\omega(-T/2)} - e^{-i\omega T/2}\right)\mathcal{F}\big(\delta(t)\big) = i\sqrt{\frac{2}{\pi}}\sin\frac{\omega T}{2} = i\omega F(\omega)$$
Hence, $F(\omega) = \sqrt{\dfrac{2}{\pi}}\,\dfrac{\sin(\omega T/2)}{\omega}$.
SAQ 8.6
$g(x) = 2x^3 + a^2$ has a single real zero, at $x_0 = -(a^2/2)^{1/3}$, and $g'(x) = 6x^2$, so that
$$|g'(x_0)| = 6\left(\frac{a^2}{2}\right)^{2/3} = \frac{6\,a^{4/3}}{4^{1/3}}$$
Therefore,
$$\int_{-\infty}^{\infty}\delta(2x^3 + a^2)\,dx = \frac{1}{|g'(x_0)|} = \frac{4^{1/3}}{6\,a^{4/3}}$$
SAQ 8.7
$$\int_{-\infty}^{\infty}\delta(x - x_0)\,dx = 1$$
$$\int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,dx = f(x_0) \quad \text{(the sifting property)}$$
SAQ 8.8
$$F(k) = \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty} e^{-ikx}\,e^{-ax}\,dx = \frac{1}{\sqrt{2\pi}}\int_{0}^{\infty} e^{-(a+ik)x}\,dx = -\frac{1}{\sqrt{2\pi}}\,\frac{e^{-(a+ik)x}}{a+ik}\Big|_0^{\infty} = -\frac{1}{\sqrt{2\pi}}\,\frac{0 - 1}{a+ik} = \frac{1}{\sqrt{2\pi}}\,\frac{1}{a+ik}$$
Hence,
$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{ikx}}{a+ik}\,dk = \begin{cases}e^{-ax}, & x > 0 \\ \tfrac{1}{2}, & x = 0 \\ 0, & x < 0\end{cases}$$
Yet again, we are able to evaluate an otherwise intractable definite integral.
SAQ 8.9
(ii) Integration
$$\mathcal{F}\left[\int^t f(s)\,ds\right] = \frac{1}{i\omega}F(\omega) + 2\pi c\,\delta(\omega)$$
where the term $2\pi c\,\delta(\omega)$ is the Fourier transform of the constant of integration associated with the indefinite integral.
(iii) Scaling
$$\mathcal{F}[f(at)] = \frac{1}{|a|}\,F\!\left(\frac{\omega}{a}\right)$$
(v) Duality
Let us build a new time-domain function $g(t) = F(t)$ by exchanging the roles of time and frequency. Then, with the symmetric convention used here,
$$G(\omega) = f(-\omega)$$
Study Session 9 Laplace Transform and Application I
Introduction
The Laplace transform, the subject of this study session, is a useful concept in Physics in that it converts differential, integral and
integro-differential equations into algebraic equations. In particular, we can transform any of
these equations (which ordinarily at times might look intractable) into an algebraic equation, or a
set of algebraic equations, solve the equations, and then transform the solution to obtain the
solution of the appropriate original equation. You can then truly appreciate the importance of this
study session.
For the Fourier transform, $s \equiv \omega$, $t \equiv t$, and the kernel is
$$K(\omega, t) = \frac{e^{-i\omega t}}{\sqrt{2\pi}} \qquad (9.2)$$
so that
$$F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt \qquad (9.3)$$
The Laplace transform is defined by
$$L\{f(t)\} = \int_0^{\infty} e^{-pt}\,f(t)\,dt \qquad (9.5)$$
For integrals,
$$L\left\{\int_0^t f(t)\,dt\right\} = \frac{F(p)}{p} \qquad (9.11)$$
More generally,
$$L\left\{\int f(t)\,dt\right\} = \frac{F(p)}{p} + \frac{1}{p}\left[\int f(t)\,dt\right]_{t=0} \qquad (9.12)$$
Find the Laplace transform of $f(t) = 1$.
$$L\{f(t)\} = L\{1\} = \int_0^{\infty} e^{-pt}\cdot 1\,dt = -\frac{e^{-pt}}{p}\Big|_0^{\infty} = -\left(\frac{e^{-\infty p}}{p} - \frac{e^{0}}{p}\right) = \frac{1}{p} \qquad (9.13)$$
Therefore, the inverse Laplace transform of $\dfrac{1}{p}$ is $L^{-1}\left\{\dfrac{1}{p}\right\} = 1$.
Similarly, for $f(t) = e^{-at}$:
$$L\{e^{-at}\} = \int_0^{\infty} e^{-pt}\,e^{-at}\,dt = \int_0^{\infty} e^{-(p+a)t}\,dt = -\frac{e^{-(p+a)t}}{p+a}\Big|_0^{\infty} = \frac{1}{p+a} \qquad (9.14)$$
Since $\cos\omega t = \dfrac{e^{i\omega t} + e^{-i\omega t}}{2}$,
$$L\{\cos\omega t\} = L\left\{\frac{e^{i\omega t} + e^{-i\omega t}}{2}\right\} = \frac{1}{2}\left[L\{e^{i\omega t}\} + L\{e^{-i\omega t}\}\right] = \frac{1}{2}\left[\frac{1}{p - i\omega} + \frac{1}{p + i\omega}\right] = \frac{p}{p^2 + \omega^2} \qquad (9.15)$$
Show that $L\{\sin\omega t\} = \dfrac{\omega}{p^2 + \omega^2}$ (9.16). Since $\sin\omega t = \dfrac{e^{i\omega t} - e^{-i\omega t}}{2i}$,
$$L\{\sin\omega t\} = \frac{1}{2i}\left[L\{e^{i\omega t}\} - L\{e^{-i\omega t}\}\right] = \frac{1}{2i}\left[\frac{1}{p - i\omega} - \frac{1}{p + i\omega}\right] = \frac{\omega}{p^2 + \omega^2}$$
We have used the linearity property here.
L{t^n} = ∫_0^∞ t^n e^{−pt} dt
where n is a positive integer.
Let I_n = ∫_0^∞ t^n e^{−pt} dt = [−(1/p) t^n e^{−pt}]_0^∞ + (n/p) ∫_0^∞ t^{n−1} e^{−pt} dt = (n/p) I_{n−1}
where we have made use of u = t^n, du = n t^{n−1} dt, dv = e^{−pt} dt and v = −(1/p) e^{−pt}.
I_n = (n/p) · ((n − 1)/p) · I_{n−2}
= (n/p) · ((n − 1)/p) · ((n − 2)/p) ··· I_0
I_0 = ∫_0^∞ e^{−pt} dt = 1/p
Hence, ∫_0^∞ t^n e^{−pt} dt = n!/p^{n+1}    9.17
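Equation 9.17 can be spot-checked symbolically for a few small values of n; a sketch assuming sympy is available:

```python
import sympy as sp

t, p = sp.symbols('t p', positive=True)

# Equation 9.17: ∫_0^∞ t^n e^{-pt} dt = n!/p^{n+1}, checked for n = 0..4
for n in range(5):
    I = sp.integrate(t**n * sp.exp(-p*t), (t, 0, sp.oo))
    assert sp.simplify(I - sp.factorial(n)/p**(n + 1)) == 0
```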
L{f(t − t₀)} = ∫_0^∞ e^{−pt} f(t − t₀) dt = I
Let t − t₀ = τ
Then, t = t₀ + τ
dt = dτ
Therefore, I = ∫_{−t₀}^∞ e^{−p(t₀+τ)} f(τ) dτ
Check that the limits are indeed as indicated in the last equation, given the transformation from t to τ.
Therefore, I = e^{−pt₀} ∫_{−t₀}^∞ e^{−pτ} f(τ) dτ = e^{−pt₀} F(p)
provided f(τ) = 0 for τ < 0, so that the lower limit may be replaced by 0.
Therefore,
L{f(t − t₀)} = e^{−pt₀} F(p)    9.18
This is the t-shift rule.
Find the Laplace transform of (d²/dt²) t³.
(Ax + B)/(x² + bx + c) for real A and B.
As an example of a single ordinary differential equation with constant coefficients, we solve the ordinary differential equation
d²x/dt² + 3 dx/dt + 2x = e^{−3t};  x(0) = 2;  x′(0) = 3
L{e^{−3t}} = 1/(p + 3)
Let L{x} = X
(p²X − 2p − 3) + 3(pX − 2) + 2X = 1/(p + 3)
(p² + 3p + 2)X = 1/(p + 3) + 2p + 3 + 6
= [1 + (p + 3)(2p + 9)]/(p + 3)
= (1 + 2p² + 15p + 27)/(p + 3)
= (2p² + 15p + 28)/(p + 3)
X = (2p² + 15p + 28)/[(p + 3)(p + 2)(p + 1)] = L{x}
Resolving into partial fractions,
X = (1/2)/(p + 3) − 6/(p + 2) + (15/2)/(p + 1)
The inverse Laplace transform gives
x = L^{−1}{X} = (1/2) L^{−1}{1/(p + 3)} − 6 L^{−1}{1/(p + 2)} + (15/2) L^{−1}{1/(p + 1)}
= (1/2) e^{−3t} − 6 e^{−2t} + (15/2) e^{−t}
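The solution can be verified by substituting it back into the differential equation and the initial conditions; a sketch, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
x = (sp.Rational(1, 2)*sp.exp(-3*t)
     - 6*sp.exp(-2*t)
     + sp.Rational(15, 2)*sp.exp(-t))

# Residual of x'' + 3x' + 2x - e^{-3t} must vanish identically
residual = sp.diff(x, t, 2) + 3*sp.diff(x, t) + 2*x - sp.exp(-3*t)
assert sp.simplify(residual) == 0
assert x.subs(t, 0) == 2                  # x(0) = 2
assert sp.diff(x, t).subs(t, 0) == 3      # x'(0) = 3
```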
Given that i(0) = 0, solve for i(t) in the circuit in Fig. 9.1, given that V(t) = sin 5t, R = 2 Ω and L = 2 H.
Fig. 9.1: A series circuit with source v(t), resistor R and inductor L; the switch is closed at t = 0.
Ri + L di/dt = v
2i + 2 di/dt = sin 5t
Taking the Laplace transform, with L{sin 5t} = 5/(p² + 25),
2I + 2(pI − i(0)) = 5/(p² + 25)
i₀ = i(0) = 0
2(1 + p)I = 5/(p² + 25)
I = 5/[2(p + 1)(p² + 25)] = A/(p + 1) + (Bp + C)/(p² + 25)
So
5/2 = A(p² + 25) + (Bp + C)(p + 1)
Setting p = −1: 5/2 = 26A, giving A = 5/52.
Equating the coefficients of p²:
0 = A + B gives B = −A = −5/52
Equating coefficients of p:
0 = B + C gives C = −B = 5/52
So,
I = (5/52)[1/(p + 1) − p/(p² + 25)] + (1/52)·[5/(p² + 25)]
or
i = (5/52) e^{−t} − (5/52) cos 5t + (1/52) sin 5t = (1/52)(5e^{−t} − 5 cos 5t + sin 5t)
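Rather than trusting the partial-fraction arithmetic, the loop equation can be handed to an ODE solver directly; a cross-check assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
i = sp.Function('i')

# Loop equation: 2 di/dt + 2i = sin 5t, with i(0) = 0
ode = sp.Eq(2*i(t).diff(t) + 2*i(t), sp.sin(5*t))
sol = sp.dsolve(ode, i(t), ics={i(0): 0})

# Expected closed form for comparison
expected = (sp.Rational(5, 52)*sp.exp(-t)
            - sp.Rational(5, 52)*sp.cos(5*t)
            + sp.Rational(1, 52)*sp.sin(5*t))
assert sp.simplify(sol.rhs - expected) == 0
```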
References
1. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
2. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
Self-Assessment Questions for Study Session 9
You have now completed this study session. You may now assess how well you have achieved the Learning Outcomes by answering the following questions. Write your answers in your Study Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your answers with the Solutions to the Self-Assessment Questions at the end of this Module.
SAQ 9.1 (tests Learning Outcome 9.3)
List the properties of the Laplace transform.
SAQ 9.2 (tests Learning Outcomes 9.2, 9.3 and 9.4)
Find the Laplace transform of e^{2t} sin(t + 2).
SAQ 9.3 (tests Learning Outcomes 9.1, 9.2, 9.3 and 9.4)
Obtain the Laplace transform of the following:
(i) sin(t − 4)
(ii) f(at), given that L{f(t)} = F(p)
SAQ 9.7 (tests Learning Outcomes 9.2, 9.5, 9.7 and 9.8)
Solve the following ordinary differential equation with the aid of the Laplace transform.
y″ − y′ − 2y = e^{2t};  y(0) = 0,  y′(0) = 1
SAQ 9.8 (tests Learning Outcomes 9.2, 9.3, 9.4, 9.5 and 9.9)
Consider the circuit in Fig. 9.2 when the switch is closed at t = 0 with VC(0) = 2.0 V. By applying the Laplace transform, solve for the current i(t) in the circuit.
Fig. 9.2: A series RC circuit with switch closed at t = 0; v(t) = 4 V, R = 1 kΩ, C = 1 µF.
Solutions of SAQs
SAQ 9.1
Properties of the Laplace Transform:
(i) Linearity of the Laplace transform and inverse Laplace transform:
L(af(t) + bg(t)) = aL(f(t)) + bL(g(t))
where a and b are constants.
(ii) The time-shift or t-shift rule:
L{f(t − t₀)} = e^{−pt₀} F(p)
(iii) Frequency shift:
L(e^{at} f(t)) = F(p − a)
(iv) Time scaling:
L{f(at)} = (1/|a|) F(p/a)
(v) Differentiation rule:
L{f^{(n)}(t)} = pⁿ L{f(t)} − p^{n−1} f(0) − p^{n−2} f′(0) − ··· − f^{(n−1)}(0)
SAQ 9.2
The Laplace transform of sin t is 1/(p² + 1).
The Laplace transform of sin(t + 2) is e^{2p}/(p² + 1) (t-shift rule, with t₀ = −2).
The Laplace transform of e^{2t} sin(t + 2) is e^{2(p−2)}/[(p − 2)² + 1] (frequency shift).
SAQ 9.3
(i) L{sin t} = ∫_0^∞ (sin t) e^{−pt} dt = 1/(p² + 1), so by the t-shift rule,
L{sin(t − 4)} = e^{−4p}/(p² + 1)
(ii) Let τ = at, so that dτ = a dt and t = τ/a. Then
L{f(at)} = ∫_0^∞ f(at) e^{−pt} dt = (1/a) ∫_0^∞ f(τ) e^{−(p/a)τ} dτ = (1/a) F(p/a)
for a > 0.
SAQ 9.4
(i) Find the Laplace transform of (t − 4)².
L{t²} = 2!/p³ = F(p)
t₀ = 4
L{(t − 4)²} = e^{−4p} F(p) = (2!/p³) e^{−4p}
(ii) L{cos ωt} = p/(p² + ω²)
L{cos ω(t − 5)} = e^{−5p} p/(p² + ω²)
(iii) L{te^{at}} = ∫_0^∞ t e^{at} e^{−pt} dt = ∫_0^∞ t e^{−(p−a)t} dt
Let u = t, so that du = dt; and dv = e^{−(p−a)t} dt, so that v = −e^{−(p−a)t}/(p − a).
I = [−t e^{−(p−a)t}/(p − a)]_0^∞ + (1/(p − a)) ∫_0^∞ e^{−(p−a)t} dt = [−e^{−(p−a)t}/(p − a)²]_0^∞ = 1/(p − a)²
SAQ 9.5
SAQ 9.6
Let f(t) be periodic with period a, i.e., f(t) = f(t + na) for any positive integer n.
L(f(t)) = ∫_0^∞ f(t) e^{−pt} dt = ∫_0^a f(t) e^{−pt} dt + ∫_a^{2a} f(t) e^{−pt} dt + ∫_{2a}^{3a} f(t) e^{−pt} dt + ...
= ∫_0^a f(t) e^{−pt} dt + ∫_0^a f(t + a) e^{−p(t+a)} dt + ∫_0^a f(t + 2a) e^{−p(t+2a)} dt + ...
= ∫_0^a f(t) e^{−pt} dt + e^{−pa} ∫_0^a f(t) e^{−pt} dt + e^{−2pa} ∫_0^a f(t) e^{−pt} dt + ...
= (1 + e^{−pa} + e^{−2pa} + ...) ∫_0^a f(t) e^{−pt} dt = (1/(1 − e^{−pa})) ∫_0^a f(t) e^{−pt} dt
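The periodic-function rule just derived can be checked numerically for a concrete periodic function; a sketch assuming scipy and numpy are available, using a sawtooth f(t) = t mod a (a choice made here purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

# Sawtooth of period a: f(t) = t mod a; p is the transform variable
a, p = 2.0, 1.5
f = lambda t: t % a

# Brute force: integrate e^{-pt} f(t) period by period over 50 periods
brute = sum(quad(lambda t: f(t)*np.exp(-p*t), k*a, (k+1)*a)[0]
            for k in range(50))

# Closed form from the periodic-function rule above
one_period = quad(lambda t: f(t)*np.exp(-p*t), 0, a)[0]
closed = one_period / (1.0 - np.exp(-p*a))

assert abs(brute - closed) < 1e-6
```

The neglected tail beyond 50 periods carries a factor e^{−50pa}, which is far below the tolerance used.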
SAQ 9.7
L{y″} = p²Y − py(0) − y′(0) = p²Y − 1
L{y′} = pY − y(0) = pY
L{2y} = 2Y
L{e^{2t}} = 1/(p − 2)
Hence,
p²Y − 1 − pY − 2Y = 1/(p − 2)
Y = [1/(p − 2) + 1]/(p² − p − 2) = (p − 2 + 1)/[(p − 2)(p² − p − 2)] = (p − 1)/[(p − 2)(p² − p − 2)]
= (p − 1)/[(p − 2)(p − 2)(p + 1)] = (p − 1)/[(p + 1)(p − 2)²]
Resolving into partial fractions,
(p − 1)/[(p + 1)(p − 2)²] = A/(p + 1) + B/(p − 2) + C/(p − 2)²
Multiplying through by (p + 1)(p − 2)²,
p − 1 = A(p − 2)² + B(p + 1)(p − 2) + C(p + 1)
Let p = 2; then,
2 − 1 = A(0) + B(0) + 3C
Hence,
C = 1/3
Let p = −1; then,
−2 = 9A
or
A = −2/9
Let p = 0,
−1 = 4A − 2B + C = 4(−2/9) − 2B + 1/3
Hence,
B = (−1 + 8/9 − 1/3)/(−2) = [(−9 + 8 − 3)/9]/(−2) = (−4/9)/(−2) = 2/9
Therefore,
Y = −(2/9)·1/(p + 1) + (2/9)·1/(p − 2) + (1/3)·1/(p − 2)²
Hence,
y = L^{−1}{Y} = −(2/9) e^{−t} + (2/9) e^{2t} + (1/3) t e^{2t}
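Again the result can be verified by substitution into the equation and the initial conditions; a sketch assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
y = (-sp.Rational(2, 9)*sp.exp(-t)
     + sp.Rational(2, 9)*sp.exp(2*t)
     + sp.Rational(1, 3)*t*sp.exp(2*t))

# Residual of y'' - y' - 2y - e^{2t} must vanish identically
residual = sp.diff(y, t, 2) - sp.diff(y, t) - 2*y - sp.exp(2*t)
assert sp.simplify(residual) == 0
assert y.subs(t, 0) == 0                  # y(0) = 0
assert sp.diff(y, t).subs(t, 0) == 1      # y'(0) = 1
```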
SAQ 9.8
(1/C) ∫ i dt + Ri = v(t)
(1/10⁻⁶) ∫ i dt + 10³ i = 4
Multiplying throughout by 10⁻⁶:
∫ i dt + 10⁻³ i = 4 × 10⁻⁶
Taking the Laplace transform (equation 9.12):
I/p + (1/p)[∫ i dt]_{t=0} + 10⁻³ I = (4 × 10⁻⁶)/p
VC(0) = 2.0 V.
So
VC(0) = (1/C)[∫ i dt]_{t=0} = 2
That is,
(1/10⁻⁶)[∫ i dt]_{t=0} = 2
or [∫ i dt]_{t=0} = 2 × 10⁻⁶ = q₀
So,
I/p + (2 × 10⁻⁶)/p + 10⁻³ I = (4 × 10⁻⁶)/p
Rearranging,
(1/p + 10⁻³) I = (4 × 10⁻⁶)/p − (2 × 10⁻⁶)/p = (2 × 10⁻⁶)/p
(1 + 10⁻³ p) I = 2 × 10⁻⁶
I = (2 × 10⁻⁶)/(1 + 10⁻³ p) = (2 × 10⁻³)/(1000 + p)
Taking the inverse Laplace transform,
i(t) = 2 × 10⁻³ e^{−1000t} A = 2e^{−1000t} mA
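Taking VC(0) = 2 V as in the working above, the current can be substituted back into the loop (KVL) equation; a sketch assuming sympy is available:

```python
import sympy as sp

t, tau = sp.symbols('t tau', nonnegative=True)
R = 1000                      # 1 kΩ
C = sp.Rational(1, 10**6)     # 1 µF
q0 = sp.Rational(2, 10**6)    # initial charge C·VC(0), with VC(0) = 2 V
i = sp.Rational(2, 1000)*sp.exp(-1000*t)   # candidate solution, in amperes

# KVL: R i + (q0 + ∫_0^t i dτ)/C must equal the 4 V source for all t
vC = (q0 + sp.integrate(i.subs(t, tau), (tau, 0, t))) / C
assert sp.simplify(R*i + vC - 4) == 0
```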
Study Session 10 Laplace Transform and Application II
In Study Session 9 we examined Laplace transforms, and with the method, solved single
ordinary differential equations with constant coefficients as well as an integral equation. We
explore the powerful tool of Laplace transform further in this study session, by applying it to a
system of equations.
Learning Outcomes for Study Session 10
At the end of this study session, you should be able to do the following:
10.1 Apply the Laplace transform in converting a system of differential, integral and integro-differential equations into a system of algebraic equations (SAQ 10.1-10.4).
10.2 Solve the system of algebraic equations (SAQ 10.1-10.4).
10.3 Recover the solution to the original system via the inverse Laplace transform (SAQ 10.1-10.4).
From (10.3), (p + 3)X + 4pY = 3/p² + 3/p
which implies
X + 4pY/(p + 3) = (3/p² + 3/p)/(p + 3) = (3 + 3p)/(p²(p + 3))
X = (3 + 3p)/(p²(p + 3)) − 4pY/(p + 3)    10.5
Similarly,
X + (3 − p)Y/(2p) = (6/p² − 3/p)/(2p) = (6 − 3p)/(2p³)
X = (6 − 3p)/(2p³) − (3 − p)Y/(2p)    10.6
Equating equations (10.5) and (10.6),
(3 + 3p)/(p²(p + 3)) − 4pY/(p + 3) = (6 − 3p)/(2p³) − (3 − p)Y/(2p)
(3 − p)Y/(2p) − 4pY/(p + 3) = (6 − 3p)/(2p³) − (3 + 3p)/(p²(p + 3))
[(3 − p)(p + 3)Y − 8p²Y]/(2p(p + 3)) = [(p + 3)(6 − 3p) − 2p(3 + 3p)]/(2p³(p + 3))
[(3 − p)(p + 3) − 8p²]Y = [(p + 3)(6 − 3p) − 2p(3 + 3p)]/p²
[9 − p² − 8p²]Y = (−3p² + 6p − 9p + 18 − 6p − 6p²)/p² = (−9p² − 9p + 18)/p²
[9 − 9p²]Y = (−9p² − 9p + 18)/p²
[1 − p²]Y = (−p² − p + 2)/p²
Y = (−p² − p + 2)/(p²(1 − p²)) = −(p − 1)(p + 2)/(p²(1 − p)(1 + p)) = (p + 2)/(p²(p + 1))    10.7
Resolving into partial fractions,
Y = (p + 2)/(p²(p + 1)) = A/p + B/p² + C/(p + 1)
Hence,
Ap(p + 1) + B(p + 1) + Cp² = p + 2
Setting p = 0: B = 2
Setting p = −1: C = 1
Setting p = 1: 2A + 2B + C = 3 ⇒ 2A + 2(2) + 1 = 3
Hence, A = (3 − 5)/2 = −1
Therefore,
Y = 2/p² − 1/p + 1/(p + 1)    10.8
Substituting into equation (10.6),
X = (6 − 3p)/(2p³) − [(3 − p)/(2p)]·(p + 2)/(p²(p + 1)) = (6 − 3p)/(2p³) − (3 − p)(p + 2)/(2p³(p + 1))
= [(p + 1)(6 − 3p) − (3 − p)(p + 2)]/(2p³(p + 1))
= (6p − 3p² − 3p + 6 + p² − p − 6)/(2p³(p + 1))
= (−2p² + 2p)/(2p³(p + 1)) = p(1 − p)/(p³(p + 1)) = (1 − p)/(p²(p + 1))
= 1/p² − 2/p + 2/(p + 1)    10.9
Taking the inverse Laplace transforms of equations (10.8) and (10.9),
x = t − 2 + 2e^{−t} and y = 2t − 1 + e^{−t}
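The inverse transforms of equations (10.8) and (10.9) can be cross-checked symbolically; a sketch assuming sympy is available:

```python
import sympy as sp

p = sp.symbols('p')
t = sp.symbols('t', positive=True)

X = 1/p**2 - 2/p + 2/(p + 1)     # equation 10.9
Y = 2/p**2 - 1/p + 1/(p + 1)     # equation 10.8

x = sp.inverse_laplace_transform(X, p, t)
y = sp.inverse_laplace_transform(Y, p, t)

assert sp.simplify(x - (t - 2 + 2*sp.exp(-t))) == 0
assert sp.simplify(y - (2*t - 1 + sp.exp(-t))) == 0
```

Declaring t positive lets sympy evaluate the Heaviside step factors to 1.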
Fig. 10.1: A two-loop circuit with loop currents i₁ and i₂: a 3 V source, resistors of 5 Ω, 10 Ω and 20 Ω, and a 0.1 H inductor; the switch is closed at t = 0.
If the circuit is quiescent, then both i₁ and i₂ and their derivatives are zero at t = 0:
i₁ = i₂ = i₁′ = i₂′ = 0
For loop 1:
5i₁ + 10(i₁ − i₂) = 3
15i₁ − 10i₂ = 3    10.10
For loop 2:
10i₂ + 0.1 di₂/dt + 10(i₂ − i₁) = 0
−10i₁ + 20i₂ + 0.1 di₂/dt = 0
−100i₁ + 200i₂ + di₂/dt = 0    10.11
Multiplying through by p(3p + 400),
60 = A(3p + 400) + Bp
Setting p = 0: 60 = 400A, so A = 3/20.
Setting 3p = −400, i.e., p = −400/3:
60 = −(400/3)B
B = −180/400 = −9/20
Hence,
I₂ = (3/20)(1/p) − (9/20)·1/(3p + 400) = 3/(20p) − 3/(20(p + 400/3))
Taking the inverse Laplace transform of both sides,
i₂ = (3/20)(1 − e^{−400t/3})
References
1. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
2. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
Self-Assessment Questions
You have now completed this study session. You may now assess how well you have achieved the Learning Outcomes by answering the following questions. Write your answers in your Study Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your answers with the Solutions to the Self-Assessment Questions at the end of this Module.
SAQ 10.1 (tests Learning Outcomes 10.1, 10.2 and 10.3)
Using the method of Laplace transform, solve the simultaneous ordinary differential equations
ẋ + ẏ = 6e^{2t} − 2e^{−2t},  2ẋ − y = 12e^{2t} + 2e^{−2t};  x(0) = 2;  y(0) = 2
Solutions to SAQs
SAQ 10.1
ẋ + ẏ = 6e^{2t} − 2e^{−2t}    10.12
2ẋ − y = 12e^{2t} + 2e^{−2t}    10.13
x(0) = 2;  y(0) = 2
Putting Y = 2/(p + 2) into equation (10.15),
2pX − 4 − 2/(p + 2) = 12/(p − 2) + 2/(p + 2)
2pX − 4 = 12/(p − 2) + 4/(p + 2)
2pX = 12/(p − 2) + 4/(p + 2) + 4 = (4p² − 16 + 12p + 24 + 4p − 8)/[(p − 2)(p + 2)] = (4p² + 16p)/[(p − 2)(p + 2)]
= 4p(p + 4)/[(p − 2)(p + 2)]
Hence, X = 2(p + 4)/[(p − 2)(p + 2)] = A/(p − 2) + B/(p + 2)
or 2(p + 4) = A(p + 2) + B(p − 2)
When p = 2: 12 = 4A, or A = 3
When p = −2: 4 = −4B, or B = −1
Therefore,
X = 3/(p − 2) − 1/(p + 2)
x = L^{−1}(X) = 3e^{2t} − e^{−2t}    10.18
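With y = L^{−1}{2/(p + 2)} = 2e^{−2t}, the pair (x, y) can be checked against the original simultaneous equations; a sketch assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
x = 3*sp.exp(2*t) - sp.exp(-2*t)     # from X above
y = 2*sp.exp(-2*t)                   # from Y = 2/(p + 2)

# First equation: x' + y' = 6e^{2t} - 2e^{-2t}
assert sp.simplify(sp.diff(x, t) + sp.diff(y, t)
                   - (6*sp.exp(2*t) - 2*sp.exp(-2*t))) == 0
# Second equation: 2x' - y = 12e^{2t} + 2e^{-2t}
assert sp.simplify(2*sp.diff(x, t) - y
                   - (12*sp.exp(2*t) + 2*sp.exp(-2*t))) == 0
# Initial conditions
assert x.subs(t, 0) == 2 and y.subs(t, 0) == 2
```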
SAQ 10.2
Transforming,
(p + 1)X(p) + 2Y(p) = 1/(p − 2)    (i)
(2p − 1)X(p) + pY(p) = 0    (ii)
Eliminating Y by taking p × (i) − 2 × (ii),
(p² − 3p + 2)X = p/(p − 2)
Hence,
X(p) = p/[(p − 2)(p² − 3p + 2)] = p/[(p − 1)(p − 2)²]
and, from (ii),
Y(p) = −(2p − 1)X(p)/p = (1 − 2p)/[(p − 1)(p − 2)²]
Therefore,
x(t) = L^{−1}(X) = (2t − 1)e^{2t} + e^t
y(t) = L^{−1}(Y) = (1 − 3t)e^{2t} − e^t
SAQ 10.3
−pX + 1 − Y = −3/(p − 3) − 1/(p − 5)
X − 1 + pY = 5/(p − 5) + 1/(p − 3)
Multiplying the second equation by p,
pX − p + p²Y = 5p/(p − 5) + p/(p − 3)
Adding this to the first equation,
(p² − 1)Y + 1 − p = (5p − 1)/(p − 5) + (p − 3)/(p − 3) = (5p − 1)/(p − 5) + 1
∴ (p² − 1)Y = (5p − 1)/(p − 5) + p
Y = (5p − 1 + p² − 5p)/[(p − 5)(p² − 1)] = (p² − 1)/[(p − 5)(p² − 1)] = 1/(p − 5)
X = 5/(p − 5) + 1/(p − 3) + 1 − pY = (5 − p)/(p − 5) + 1/(p − 3) + 1 = −1 + 1/(p − 3) + 1
= 1/(p − 3)
Therefore, x = L^{−1}(X) = e^{3t}
y = L^{−1}(Y) = e^{5t}
SAQ 10.4
(p + 1)X + 2Y = 1/(p − 2)
(2p − 1)X + pY = 0
As in SAQ 10.2, X = p/[(p − 1)(p − 2)²]. Resolving into partial fractions,
p = A(p − 2)² + B(p − 1)(p − 2) + C(p − 1)
p = 1: A = 1
p = 2: C = 2
p = 0: 4A + 2B − C = 0
4 + 2B − 2 = 0, or B = −1
p/[(p − 1)(p − 2)²] = 1/(p − 1) − 1/(p − 2) + 2/(p − 2)²
Therefore,
x = e^t − e^{2t} + 2te^{2t}
Similarly, for Y = (1 − 2p)/[(p − 1)(p − 2)²]:
(1 − 2p)/[(p − 1)(p − 2)²] = A/(p − 1) + B/(p − 2) + C/(p − 2)²
1 − 2p = A(p − 2)² + B(p − 1)(p − 2) + C(p − 1)
p = 1: A = −1
p = 2: C = −3
p = 0: 4A + 2B − C = 1
−4 + 2B + 3 = 1, or B = 1
(1 − 2p)/[(p − 1)(p − 2)²] = −1/(p − 1) + 1/(p − 2) − 3/(p − 2)²
Therefore,
x = e^t − e^{2t} + 2te^{2t},  y = −e^t + e^{2t} − 3te^{2t}
Test: 2 dx/dt + dy/dt − x = 0:
2(e^t − 2e^{2t} + 2e^{2t} + 4te^{2t}) + (−e^t + 2e^{2t} − 3e^{2t} − 6te^{2t}) − (e^t − e^{2t} + 2te^{2t})
= (2e^t + 8te^{2t} − e^t − e^{2t} − 6te^{2t}) − (e^t − e^{2t} + 2te^{2t})
= (e^t − e^{2t} + 2te^{2t}) − (e^t − e^{2t} + 2te^{2t}) = 0
Study Session 11 Complex Analysis
Introduction
The square root of a negative number gives the motivation for complex numbers. The fact that simple harmonic motion involves sine and cosine solutions, which are exponentials of complex variables, underscores the importance of complex numbers. Wavefunctions in Quantum Mechanics are generally complex functions, and that is why the wavefunction on its own might not make any physical sense. But by taking the square of its modulus, we get the probability distribution function |ψ(r)|². Moreover, some integrals would prove difficult ordinarily, but such a problem could be solved with the help of complex integration.
11.1 Carry out simple operations on complex numbers, such as the addition, subtraction and multiplication of complex numbers, complex conjugation, etc. (SAQ 11.1).
11.2 Write a function of a complex variable and be able to identify the real and imaginary parts of such a function (SAQ 11.3, 11.4).
11.3 Calculate the roots of a complex number (SAQ 11.2, 11.5).
Fig. 11.1: A complex number z = x + iy in the complex plane, with modulus r and argument θ.
Both x and y are real numbers. Also from the figure, we notice that in polar form we may write z = re^{iθ}, where r is a real number. Obviously,
r = √(x² + y²)    11.2
and
θ = tan^{−1}(y/x)    11.3
i = √(−1).
x = r cos θ,  y = r sin θ    11.4
z = re^{iθ} = r(cos θ + i sin θ) = r cos θ + ir sin θ = x + iy
To get the complex conjugate of any complex number, we replace i by −i. For example, let z = x + iy; then the complex conjugate is z̄ = x − iy.
Note that x = Re z = (z + z̄)/2 and y = Im z = (z − z̄)/(2i).
zz̄ = (x + iy)(x − iy) = x² + y²
In another form,
zz̄ = re^{iθ} re^{−iθ} = r² = (√(x² + y²))² = x² + y²
|z| = r,  |z|² = r²
zz̄ = |z|²
Let z = 3 + 2i. Then |z| = √(3² + 2²) = √13.
Show that the conjugate of z₁/z₂ equals z̄₁/z̄₂:
conj(z₁/z₂) = conj((x₁ + iy₁)/(x₂ + iy₂)) = (x₁ − iy₁)/(x₂ − iy₂) = z̄₁/z̄₂
Note that
z₁z₂ = r₁e^{iθ₁} r₂e^{iθ₂} = r₁r₂ e^{i(θ₁+θ₂)}
z₁/z₂ = r₁e^{iθ₁}/(r₂e^{iθ₂}) = (r₁/r₂) e^{i(θ₁−θ₂)}
So,
z₁z₂...zₙ = r₁r₂...rₙ e^{i(θ₁+θ₂+...+θₙ)}
√(−1) = (−1)^{1/2} = (re^{iθ})^{1/2}
−1 = x + iy = −1 + i × 0
r = √((−1)² + 0²) = 1
θ = tan^{−1}(0/(−1)) = π
The roots are
r^{1/m}[cos((π + 2πk)/m) + i sin((π + 2πk)/m)],  k = 0, 1 (here m = 2)
k = 0: z^{1/2} = i,  i² = −1
k = 1: z^{1/2} = −i,  (−i)² = −1
Consider z = 2 − 2√3 i. Here x = 2 and y = −2√3, meaning that the complex number is in the 4th quadrant.
r = 4,  θ = tan^{−1}(−2√3/2) = tan^{−1}(−√3) = −π/3
Equivalently, θ = 2π − π/3 = 5π/3. But since z is in the fourth quadrant, you could also leave the angle as −π/3, bearing in mind that the angle has been measured in the clockwise sense from the positive x direction.
The cube roots are
4^{1/3}[cos((−π/3 + 2πk)/3) + i sin((−π/3 + 2πk)/3)]
where k = 0, 1, 2:
k = 0: 4^{1/3}[cos(−π/9) + i sin(−π/9)] ≈ 1.4917 − 0.5429i
k = 1: 4^{1/3}[cos(5π/9) + i sin(5π/9)] ≈ −0.2756 + 1.5633i
k = 2: 4^{1/3}[cos(11π/9) + i sin(11π/9)] ≈ −1.2160 − 1.0204i
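The three roots can be checked numerically by cubing them back; a sketch using Python's standard cmath module:

```python
import cmath
import math

z = 2 - 2*math.sqrt(3)*1j            # the number whose cube roots we seek
r, theta = abs(z), cmath.phase(z)    # r = 4, theta = -pi/3

roots = [r**(1/3) * cmath.exp(1j*(theta + 2*math.pi*k)/3) for k in range(3)]

# Each root, cubed, must recover z (roots[0] is approximately 1.4917 - 0.5429j)
assert all(abs(w**3 - z) < 1e-9 for w in roots)
```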
Fig. 11.1: Functions of a complex variable: f maps a point z in the z-plane to the point w = f(z) in the w-plane.
(i) f(z) = z = x + iy
but f(z) = u + iv
Thus, u = x and v = y
(ii) f(z) = z²
z² = (x + iy)² = x² − y² + 2ixy
Thus, u = x² − y²,  v = 2xy
(iii) f(z) = |z|²
|z|² = x² + y²
u = x² + y²,  v = 0
(iv) f(z) = z² − 3z
= (x + iy)² − 3x − 3iy
= x² − y² + 2ixy − 3x − 3iy
= x² − y² − 3x + i(2xy − 3y)
Hence,
u = x² − y² − 3x,  v = y(2x − 3)
2. Write a function of a complex variable and identify the real and imaginary parts of such a
function.
3. Calculate the roots of a complex number.
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
4. Hefferson, J. (2012). Linear Algebra, http://joshua.smcvt.edu/linearalgebra/book.pdf
5. Hefferson, J. (2012). Answers to Exercises,
http://joshua.smcvt.edu/linearalgebra/jhanswer.pdf
Solutions to SAQs
SAQ 11.1
conj(z₁ + z₂) = conj((x₁ + iy₁) + (x₂ + iy₂)) = x₁ − iy₁ + x₂ − iy₂
= (x₁ − iy₁) + (x₂ − iy₂)
= z̄₁ + z̄₂
conj(z₁z₂) = conj((x₁ + iy₁)(x₂ + iy₂)) = conj(x₁x₂ + ix₁y₂ + iy₁x₂ − y₁y₂)
= x₁x₂ − ix₁y₂ − iy₁x₂ − y₁y₂
= (x₁ − iy₁)(x₂ − iy₂)
= z̄₁z̄₂
SAQ 11.1
For z = −2 − 2√3 i:
r = 4,  θ = tan^{−1}((−2√3)/(−2)) = tan^{−1}(√3) = 4π/3
(Third quadrant, since both x and y components are negative.)
The cube roots are
4^{1/3}[cos((4π/3 + 2πk)/3) + i sin((4π/3 + 2πk)/3)]
where k = 0, 1, 2:
k = 0: 4^{1/3}[cos(4π/9) + i sin(4π/9)]
k = 1: 4^{1/3}[cos(10π/9) + i sin(10π/9)]
k = 2: 4^{1/3}[cos(16π/9) + i sin(16π/9)]
SAQ 11.2
Let z = x + iy with x = r cos θ, y = r sin θ, r = √(x² + y²).
r = |z| = |2 + 2i| = 2√2
θ = tan^{−1}(2/2) = tan^{−1}(1) = π/4
So z = 2√2 e^{iπ/4}.
z^{1/m} = |z|^{1/m} exp(i(θ + 2kπ)/m),  k = 0, 1, 2, …, m − 1
So the cube roots of 2√2 e^{iπ/4} are
(2√2)^{1/3} exp(i(π/4 + 2kπ)/3) = √2 exp(i(π/4 + 2kπ)/3),  k = 0, 1, 2
or √2 e^{iπ/12}, √2 e^{i9π/12}, √2 e^{i17π/12}
SAQ 11.3
z³ = (x + iy)³ = x³ + 3x²(iy) + 3x(iy)² + (iy)³
= x³ + 3ix²y − 3xy² − iy³
2z = 2x + 2iy
f(z) = z³ + 2z + 3 = x³ + 3ix²y − 3xy² − iy³ + 2x + 2iy + 3
= x³ − 3xy² + 2x + 3 + i(3x²y − y³ + 2y)
Hence,
u(x, y) = x³ − 3xy² + 2x + 3
v(x, y) = 3x²y − y³ + 2y
SAQ 11.4
f(z) = z + 1/z = re^{iθ} + (1/r)e^{−iθ}
= r cos θ + ir sin θ + (1/r)(cos θ − i sin θ)
= (r + 1/r) cos θ + i(r − 1/r) sin θ
Hence,
u = (r + 1/r) cos θ;  v = (r − 1/r) sin θ
SAQ 11.5
z³ = −8i = 8e^{−iπ/2} (since e^{−iπ/2} = cos(π/2) − i sin(π/2) = −i)
Hence,
z = 8^{1/3} exp(i(−π/2 + 2πk)/3) = 2 exp(i(−π/6 + 2πk/3)) = 2 exp(i(4πk − π)/6)
where k = 0, 1, 2:
k = 0:
z = 2 exp(−iπ/6)
k = 1:
z = 2 exp(iπ/2) = 2i
k = 2:
z = 2 exp(i7π/6)
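These roots can also be cubed back numerically as a check; a sketch using Python's cmath module:

```python
import cmath
import math

z = -8j
# Roots 2·exp(i(-pi/2 + 2*pi*k)/3), k = 0, 1, 2, from the working above
roots = [2*cmath.exp(1j*(-math.pi/2 + 2*math.pi*k)/3) for k in range(3)]

assert all(abs(w**3 - z) < 1e-9 for w in roots)
assert abs(roots[1] - 2j) < 1e-9   # k = 1 gives the purely imaginary root 2i
```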
Study Session 12 Differentiation of a Complex Variable
Introduction
Now that you have taken a look at the algebra of complex numbers, the question that comes readily to mind is whether, just as is the case with real functions, it is possible to differentiate complex functions. The answer is yes. We can differentiate complex functions, but our answers are not as easily obtained as is the case with real functions. You shall come across analytic functions and the attendant Cauchy-Riemann equations, which have much application in Physics. You shall see that each of the pair of functions involved in the Cauchy-Riemann equations is a harmonic function.
Fig. 12.1: The limit of a function of a complex variable at z₀ is irrespective of the direction of approach.
(i) f(z) = zⁿ
f′(z) = lim_{h→0} [f(z + h) − f(z)]/h
= lim_{h→0} [(z + h)ⁿ − zⁿ]/h
= nz^{n−1}
(ii) f(z) = Re z
Re z = x
f′(z) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀)
= lim_{z→z₀} (x − x₀)/[(x − x₀) + i(y − y₀)]
Approaching z₀ along the x-axis (y = y₀) gives the limit 1, while approaching along the y-axis (x = x₀) gives 0. Therefore, Re z has no derivative: the two limits are not the same.
(iii) f(z) = z̄
f′(z) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀)
= lim_{z→z₀} (z̄ − z̄₀)/(z − z₀)
= lim_{z→z₀} [(x − iy) − (x₀ − iy₀)]/[(x − x₀) + i(y − y₀)]
= lim_{z→z₀} [(x − x₀) − i(y − y₀)]/[(x − x₀) + i(y − y₀)]
Along the x-axis this limit is 1; along the y-axis it is −1. Therefore, f(z) = z̄ has no derivative: the two limits are not the same.
12.2 Cauchy-Riemann Equations
Suppose f′(z) exists at a point. Then the limit
f′(z) = lim_{g→0} [f(z + g) − f(z)]/g exists.
Let g = h + ik
f(z) = u(x, y) + iv(x, y)
f(z + g) = u(x + h, y + k) + iv(x + h, y + k)
f′(z) = lim_{g→0} {[u(x + h, y + k) − u(x, y)] + i[v(x + h, y + k) − v(x, y)]}/(h + ik)
Let g → 0 along the x-axis, i.e., k = 0.
L₁ = lim_{h→0} {[u(x + h, y) − u(x, y)] + i[v(x + h, y) − v(x, y)]}/h
= lim_{h→0} {[u(x + h, y) − u(x, y)]/h + i[v(x + h, y) − v(x, y)]/h}
= ∂u/∂x + i ∂v/∂x    12.2
Now let g → 0 along the y-axis, i.e., h = 0.
L₂ = lim_{k→0} {[u(x, y + k) − u(x, y)] + i[v(x, y + k) − v(x, y)]}/(ik)
= lim_{k→0} {[u(x, y + k) − u(x, y)]/(ik) + i[v(x, y + k) − v(x, y)]/(ik)}
= −i ∂u/∂y + ∂v/∂y    12.3
For the derivative to exist, the two limits must agree:
f′(z) = uₓ + ivₓ = v_y − iu_y
This implies that (equating real and imaginary parts)
uₓ = v_y    12.4a
and
vₓ = −u_y    12.4b
where we have written ∂u/∂x = uₓ, etc.
12.3 Analyticity
A function is said to be analytic at a point if the Cauchy-Riemann equations are satisfied.
Example: for f(z) = z², u = x² − y² and v = 2xy, so
uₓ = 2x;  u_y = −2y
vₓ = 2y;  v_y = 2x
Hence uₓ = v_y and vₓ = −u_y. Therefore, the function f(z) = z² is analytic on the x-y plane.
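The Cauchy-Riemann check for f(z) = z² can be automated with symbolic differentiation; a sketch assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x + sp.I*y)**2

u = sp.re(sp.expand(f))   # x^2 - y^2
v = sp.im(sp.expand(f))   # 2xy

# Cauchy-Riemann: u_x = v_y and v_x = -u_y must hold identically
assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0
assert sp.simplify(sp.diff(v, x) + sp.diff(u, y)) == 0
```

Swapping in any expression that depends on z̄ (e.g. x² + y² with v = 0) makes the first assertion fail, in line with the discussion of non-analytic functions.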
Fig. 12.2: Equipotentials and the electric field lines of a point charge.
Note
1. A function is analytic at a point if it is differentiable at that point and at every point in a neighbourhood of the point.
Analyticity is not a “point” idea, but that of a “neighbourhood”. Thus, for a function to be
analytic at z 0 , f ' ( z 0 ) must exist, and in addition, f ' ( z ) must exist in a certain neighbourhood
of the point.
Just as in the real case, the rules for differentiating polynomial and rational functions hold.
d/dz (a₀ + a₁z + a₂z² + ... + aₙzⁿ) = a₁ + 2a₂z + ... + naₙz^{n−1}    12.10
Rule 12.9 applies to a rational function.
d/dz z^{−3} = −3z^{−3−1} = −3z^{−4}
12.5.1 Differentiation of Exponential, Trigonometric and Logarithmic Functions
Given f(z) = e^{az}, where a is a constant, then
d/dz e^{az} = a e^{az}
If f(z) = cos az, where a is a constant, then
d/dz cos az = −a sin az
For f(z) = ln z,
d/dz ln z = 1/z
References
1. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
2. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
Self-Assessment Questions for Study Session 12
You have now completed this study session. You may now assess how well you have achieved the Learning Outcomes by answering the following questions. Write your answers in your Study Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your answers with the Solutions to the Self-Assessment Questions at the end of this Module.
SAQ 12.1 (tests Learning Outcome 12.1)
What are analytic functions?
When do we say a function is harmonic?
Find d/dz [(1 + z)/(1 − z)].
Solutions to SAQs
SAQ 12.1
A function is said to be analytic at a point if the Cauchy-Riemann equations are satisfied.
u x = v y and v x = −u y
SAQ 12.2
f(z) = |z|² = x² + y²
u = x² + y²,  v = 0
uₓ = 2x,  u_y = 2y
vₓ = 0,  v_y = 0
The Cauchy-Riemann equations uₓ = v_y and vₓ = −u_y hold only at the single point x = y = 0, not in any neighbourhood.
We conclude that f(z) = |z|² is not analytic.
SAQ 12.3
d/dz [(1 + z)/(1 − z)] = lim_{Δz→0} {(1 + z + Δz)/(1 − z − Δz) − (1 + z)/(1 − z)}/Δz
= lim_{Δz→0} [(1 − z)(1 + z + Δz) − (1 + z)(1 − z − Δz)]/[Δz(1 − z − Δz)(1 − z)]
= lim_{Δz→0} 2Δz/[Δz(1 − z − Δz)(1 − z)]
= lim_{Δz→0} 2/[(1 − z − Δz)(1 − z)] = 2/(1 − z)²
SAQ 12.4
Suppose f′(z) exists at a point. Then the limit
f′(z) = lim_{g→0} [f(z + g) − f(z)]/g exists.
Let g = h + ik
f(z) = u(x, y) + iv(x, y)
f(z + g) = u(x + h, y + k) + iv(x + h, y + k)
f′(z) = lim_{g→0} {[u(x + h, y + k) − u(x, y)] + i[v(x + h, y + k) − v(x, y)]}/(h + ik)
Let g → 0 along the x-axis, i.e., k = 0.
L₁ = lim_{h→0} {[u(x + h, y) − u(x, y)] + i[v(x + h, y) − v(x, y)]}/h
= ∂u/∂x + i ∂v/∂x
Now let g → 0 along the y-axis, i.e., h = 0.
L₂ = lim_{k→0} {[u(x, y + k) − u(x, y)] + i[v(x, y + k) − v(x, y)]}/(ik)
= −i ∂u/∂y + ∂v/∂y
SAQ 12.5
f(z) = x² − iy²
u(x, y) = x²;  v(x, y) = −y²
uₓ = 2x,  v_y = −2y
u_y = 0,  vₓ = 0
uₓ = v_y implies x = −y
Thus, the Cauchy-Riemann equations hold only on the line y = −x; the function is differentiable there, but since no point of a line has a whole neighbourhood on the line, it is analytic nowhere.
SAQ 12.6
f(z) = z̄ = x − iy
u(x, y) = x,  v(x, y) = −y
uₓ = 1,  v_y = −1
Since uₓ ≠ v_y, f(z) = z̄ is not analytic.
It follows, therefore, that if any complex function should depend on z̄, it cannot be analytic.
f(z) = cos y − i sin y = cos((z − z̄)/(2i)) − i sin((z − z̄)/(2i)) is not analytic because it depends on z̄.
SAQ 12.7
We guess φ(x, y) = Ax + B; as the function is to be harmonic on a vertical strip, it can be taken independent of y.
Then
φ(1, 0) = A + B = 20
φ(3, 0) = 3A + B = 35
Subtracting, 2A = 15, so A = 7.5 and B = 12.5. Hence φ(x, y) = 7.5x + 12.5.
SAQ 12.8
Guess φ(x, y) = Ax + By + C.
Find the values of A, B and C.
φ(3, 0) = 3A + C = 10
φ(−3, 0) = −3A + C = −40
Adding, 2C = −30, so
C = −15
A = (10 − C)/3 = (10 + 15)/3 = 25/3
φ(0, 3) = 3B + C = 10
B = (10 − C)/3 = (10 + 15)/3 = 25/3
The solution is
φ(x, y) = (25/3)(x + y) − 15
SAQ 12.9
A function of a complex variable is said to have a derivative at a point z₀ if the limit
f′(z₀) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀)
exists and is independent of how z approaches z₀.
For the differentiation of a function of a real variable, it is sufficient that the limit exists.
Study Session 13 Complex Integration
Introduction
Just as you may integrate a function of a real variable, you can also integrate a function of a complex variable. We shall find that it might even be easier to convert a real integral into a complex integral in a way that makes the integration easier. We shall be learning some powerful tools that will assist us in doing this.
13.1 Laurent's Theorem
This states: If f(z) is analytic on C₁ and C₂ and throughout the annulus between C₁ and C₂, then at every point of the annulus f(z) can be expanded as
f(z) = Σ_{m=0}^∞ aₘ(z − z₀)ᵐ + Σ_{m=1}^∞ bₘ(z − z₀)^{−m}    13.1
(The annulus lies between the circles C₁, of radius r₁, and C₂, of radius r₂, both centred on z₀.)
where aₘ = (1/(2πi)) ∮ f(ξ) dξ/(ξ − z₀)^{m+1}    13.2
bₘ = (1/(2πi)) ∮ f(ξ) dξ/(ξ − z₀)^{−m+1}    13.3
Definition
A point z 0 is a singular point of the function f (z ) if and only if it fails to be analytic at z 0 and
every neighbourhood of z 0 contains at least one point at which f (z ) is analytic.
Definition
Let z 0 be a singular point of an analytic function. If the neighbourhood of z 0 contains no other
singular points, then, z = z 0 is an isolated singularity.
Definition
In the Laurent series, if all but a finite number of the bₘ's are zero, then z₀ is a pole of f(z). If k is the highest integer such that bₖ ≠ 0, then z₀ is said to be a pole of order k.
Definition
The coefficient b₁ in the Laurent expansion is called the residue of f(z) at z₀, denoted by Res(f, z₀):
Res(f, z₀) = b₁
f(z) = Σ_{n=0}^∞ aₙ(z − z₀)ⁿ + b₁/(z − z₀) + b₂(z − z₀)^{−2} + ... + bₘ(z − z₀)^{−m} + ...
For a pole of order m, the residue b₁ is given by
b₁ = (1/(m − 1)!) lim_{z→z₀} d^{m−1}/dz^{m−1} [(z − z₀)ᵐ f(z)]    13.4
Proof:
(z − z₀)ᵐ f(z) = (z − z₀)ᵐ Σ_{n=0}^∞ aₙ(z − z₀)ⁿ + b₁(z − z₀)^{m−1} + b₂(z − z₀)^{m−2} + ... + bₘ + ...
Differentiating m − 1 times annihilates every term of degree less than m − 1, while the terms of degree greater than m − 1 retain a factor of (z − z₀) and so vanish as z → z₀. For the b₁ term,
d/dz b₁(z − z₀)^{m−1} = (m − 1) b₁ (z − z₀)^{m−2}
d²/dz² b₁(z − z₀)^{m−1} = (m − 1)(m − 2) b₁ (z − z₀)^{m−3}
Therefore, differentiating m − 1 times,
d^{m−1}/dz^{m−1} b₁(z − z₀)^{m−1} = (m − 1)(m − 2) ··· 3 × 2 × 1 × b₁ = (m − 1)! b₁
Dividing through by (m − 1)! and taking the limit z → z₀ gives equation 13.4.
∮_C f(z) dz = 2πi Res(f, z₀)    13.5
With the aid of the residue theorem, evaluate the integral ∮_{|z|=2} dz/(z² − 1).
∮_{|z|=2} dz/(z² − 1) = ∮_{|z|=2} dz/[(z + 1)(z − 1)]
f(z) = 1/[(z + 1)(z − 1)]
There are two poles, z = 1 and z = −1, both inside |z| = 2.
Res(f(z), 1) = [(z − 1)·1/((z + 1)(z − 1))]_{z=1} = 1/2
Res(f(z), −1) = [(z + 1)·1/((z + 1)(z − 1))]_{z=−1} = −1/2
Therefore,
∮_{|z|=2} dz/(z² − 1) = 2πi(1/2 − 1/2) = 0
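The two residues and the vanishing of the integral can be confirmed with a computer algebra system; a sketch assuming sympy, which provides a residue function:

```python
import sympy as sp

z = sp.symbols('z')
f = 1/(z**2 - 1)

r1 = sp.residue(f, z, 1)     # residue at the pole z = 1
r2 = sp.residue(f, z, -1)    # residue at the pole z = -1

assert r1 == sp.Rational(1, 2)
assert r2 == sp.Rational(-1, 2)
# Both poles lie inside |z| = 2, so the contour integral 2*pi*i*(r1 + r2) is 0
assert r1 + r2 == 0
```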
13.3 Evaluation of Real Integrals by the Residue Theorem
Let us consider the integral ∫_0^{2π} F(sin θ, cos θ) dθ, F being the quotient of two polynomials in sin θ and cos θ. We can reduce the evaluation of such an integral to the calculation of the integral of a rational function of z along the circle |z| = 1; since rational functions have no singularities other than poles, we can make use of the residue theorem.
Let z = e^{iθ}    13.6
Then,
dz = ie^{iθ} dθ    13.7
or
dθ = dz/(iz)    13.8
Therefore,
cos θ = (z + z^{−1})/2    13.9
and
sin θ = (z − z^{−1})/(2i)    13.10
Putting equations (13.9) and (13.10) in ∫_0^{2π} F(sin θ, cos θ) dθ, we get ∮_C R(z) dz, where R(z) is a rational function of z and the integral is over the circular path |z| = 1. The residue theorem thus yields ∫_0^{2π} F(sin θ, cos θ) dθ = 2πi Σ Res(R(z)) at the poles within the unit circle.
= (2/α) ∮_C dz/[(z − z₁)(z − z₂)],
where z₁ = −(i/α)(1 + √(1 − α²)) and z₂ = −(i/α)(1 − √(1 − α²)).
References
1. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
2. MacQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
Self-Assessment Questions (SAQs) for Study Session 13
You have now completed this study session. You may now assess how well you have achieved the Learning Outcomes by answering the following questions. Write your answers in your Study Diary and discuss them with your Tutor at the next Study Support Meeting. You can check your answers with the Solutions to the Self-Assessment Questions at the end of this study session.
around the point z = 0, where m and n are positive integers. Hence, show that
∮_C dz/[(1 − z)ⁿ zᵐ] = 2πi (n + m − 2)!/[(n − 1)!(m − 1)!]
where C is a circle with radius less than unity, centred on the origin.
SAQ 13.3 (tests Learning Outcome 13.3)
Evaluate the integral: ∮_{|z|=2} dz/(z² + 1)
Solutions to SAQs
SAQ 13.1
(i) A point z 0 is a singular point of the function f (z ) if and only if it fails to be analytic at
z 0 and every neighbourhood of z 0 contains at least one point at which f (z ) is analytic.
(ii) Let z 0 be a singular point of an analytic function. If the neighbourhood of z 0 contains no
other singular points, then, z = z 0 is an isolated singularity.
(iii) In the Laurent series if all but a finite number of bm ' s are zero, then z 0 is a pole of f (z ) .
If k is the highest integer such that bk ≠ 0 , then z 0 is said to be a pole of order k .
(iv) The coefficient b₁ in the Laurent expansion is called the residue of f(z) at z₀, denoted by Res(f, z₀):
Res(f, z₀) = b₁
SAQ 13.2
f(z) = 1/[(1 − z)ⁿ zᵐ]
Expanding (1 − z)^{−n} binomially and picking out the coefficient of z^{m−1} gives
a_{−1} = (n + m − 2)!/[(n − 1)!(m − 1)!]
= Residue at z = 0.
SAQ 13.3
∮_{|z|=2} dz/(z² + 1) = ∮_{|z|=2} dz/[(z + i)(z − i)]
f(z) = 1/[(z + i)(z − i)]
There are two poles, z = i and z = −i, both inside |z| = 2.
Res(f(z), i) = [(z − i)·1/((z + i)(z − i))]_{z=i} = 1/(2i)
Res(f(z), −i) = [(z + i)·1/((z + i)(z − i))]_{z=−i} = −1/(2i)
Therefore,
∮_{|z|=2} dz/(z² + 1) = 2πi(1/(2i) − 1/(2i)) = 0
SAQ 13.4

Let z = e^{iθ}. Then,

    dθ = dz/(iz),    sin θ = (z − z⁻¹)/(2i)

Therefore,

    I = ∫_C [1/(5/4 + (z − z⁻¹)/(2i))] dz/(iz) = 2 ∫_C dz / [z² + (5i/2)z − 1]

      = 2 ∫_C dz / [(z + i/2)(z + 2i)]

The poles of the integrand are at z = −2i and z = −i/2. Only the latter lies within the unit circle.

    Res(−i/2) = lim_{z→−i/2} 1/(z + 2i) = 1/(3i/2) = −2i/3

Therefore,

    I = 2 × 2πi × (−2i/3) = 8π/3
SAQ 13.5

The function f(z) = (z² + z − 1)/[z²(z − 1)] has poles at z = 0, of multiplicity k = 2, and at
z = 1, of multiplicity k = 1. For a pole of order k,

    Res_{z=z0} f(z) = [1/(k − 1)!] lim_{z→z0} d^{k−1}/dz^{k−1} [(z − z0)^k f(z)]

Hence,

    Res_{z=1} = lim_{z→1} (z − 1) f(z) = lim_{z→1} (z² + z − 1)/z² = 1

    Res_{z=0} = lim_{z→0} d/dz [(z² + z − 1)/(z − 1)] = lim_{z→0} [(2z + 1)(z − 1) − (z² + z − 1)]/(z − 1)² = 0

If f(z) is analytic except at isolated singular points, then the sum of all the residues of f(z),
including the residue at infinity, is 0. Hence,

    Res_{z=0} + Res_{z=1} + Res_{z=∞} = 0 + 1 + Res_{z=∞} = 0

Therefore, Res_{z=∞} = −1.
SAQ 13.6

The integrand 1/[(z² + 1)²(z² + 4)] has poles of order 2 at z = ±i and simple poles at
z = ±2i. The contour C, the semicircle of radius 3 closed in the upper half-plane, encloses
only z = i and z = 2i.

    Res(z = i) = lim_{z→i} d/dz { 1/[(z + i)²(z² + 4)] }

               = lim_{z→i} [ −2/((z + i)³(z² + 4)) − 2z/((z + i)²(z² + 4)²) ]

               = −2/(3(2i)³) − 2i/(9(2i)²)

               = −i/36

Similarly,

    Res(z = 2i) = lim_{z→2i} 1/[(z² + 1)²(z + 2i)] = 1/(9 × 4i) = −i/36
SAQ 13.6
The only pole is at z = 0, and the attendant residue is e^{kz}|_{z=0} = 1. Hence,

    ∮_{|z|=1} (e^{kz}/z) dz = 2πi × 1 = 2πi

But, putting z = e^{iθ},

    ∮_{|z|=1} (e^{kz}/z) dz = ∫₀^{2π} [e^{k(cos θ + i sin θ)}/e^{iθ}] i e^{iθ} dθ = ∫₀^{2π} e^{k(cos θ + i sin θ)} i dθ

        = ∫₀^{2π} e^{k cos θ} e^{ik sin θ} i dθ

        = ∫₀^{2π} e^{k cos θ} [cos(k sin θ) + i sin(k sin θ)] i dθ

        = ∫₀^{2π} e^{k cos θ} [i cos(k sin θ) − sin(k sin θ)] dθ

Equating the real and imaginary parts of the two results,

    ∫₀^{2π} e^{k cos θ} sin(k sin θ) dθ = 0,    ∫₀^{2π} e^{k cos θ} cos(k sin θ) dθ = 2π
SAQ 13.7

    ∮_{|z|=4} cos z dz/(z² − 6z + 5) = ∮_{|z|=4} cos z dz/[(z − 1)(z − 5)]

    f(z) = cos z / [(z − 1)(z − 5)]

There are two poles, z = 1 and z = 5. Only the first pole lies within the contour.

    Res(f, 1) = lim_{z→1} (z − 1) cos z/[(z − 1)(z − 5)] = cos(1)/(1 − 5) = −(1/4) cos(1)

Therefore,

    ∮_{|z|=4} cos z dz/[(z − 1)(z − 5)] = 2πi × [−(1/4) cos(1)] = −πi cos(1)/2
SAQ 13.8

    ∮_{|z|=5} cos z dz/[z²(z − π)³] = 2πi (Res_{z=0} + Res_{z=π})

        = 2πi { lim_{z→0} d/dz [cos z/(z − π)³] + (1/2!) lim_{z→π} d²/dz² [cos z/z²] }

        = 2πi ( −3/π⁴ + (π² − 6)/(2π⁴) ) = [(π² − 12)/π³] i
Should you require more explanation on this study session, please do not hesitate to contact your tutor.

Are you in need of General Help as regards your studies? Do not hesitate to contact
the DLI IAG Center by e-mail or phone on:
[email protected]
08033366677
Study Session 14 Eigenvalue Problems
Introduction
Eigenvalue problems crop up in every area of Physics, and are perhaps the most fundamental
idea behind Quantum mechanics, an area of Physics where a physical system can only take a
certain set of values. As an example, the energy of a quantum-mechanical oscillator can only
take a discrete set of values; likewise the electron in a hydrogen atom. Indeed, every physical
observable in Quantum mechanics has associated with it a Hermitian operator, the eigenvalues
of which are the only possible values the physical observable can attain.
In Physics, we have cause to deal with eigenvalue problems in several ways: the operator could
be a matrix, or a function. Indeed, the eigenvalue problem involves finding the values of λ for
which equation 14.2 holds. We shall see later that in Quantum mechanics, the eigenvalues of an
operator are the possible values the physical quantity represented by the operator takes in some
allowable states. Once we have obtained the eigenvalues, we put each of them back into equation
14.2 and then find the eigenstate corresponding to it. The eigenvalues are also called the
spectrum of A.
14.1.1 Matrix Space
Let

    Aψ = λψ

where A is a square matrix, and I is the unit or identity matrix of the same dimension as A.
Then,

    (A − λI)ψ = 0    14.3

The identity matrix I is necessary because A is a matrix, and there is no way you could subtract a
number from a matrix. What this effectively does is to take λ away from each diagonal element
of A. For this equation to have a nontrivial solution, the determinant must vanish:

    |A − λI| = 0    14.4

This gives a polynomial equation in λ. The order of the equation is the dimension of A. This
polynomial equation is called the indicial or characteristic equation. We solve the equation for
the allowable values of λ. We then put these values back into equation 14.2 in order to get
the corresponding eigenstates. We shall assume initially that all the eigenvalues are different.
When two or more eigenvalues are equal, we say the states are degenerate. This case will be
handled separately.
Find the eigenvalues of the matrix

    ( 2  −1 )
    ( 1  −2 )

The characteristic equation is

    | 2 − λ    −1    |
    |   1    −2 − λ  | = 0

or (2 − λ)(−2 − λ) + 1 = λ² − 3 = 0. Hence,

    λ = ±√3

The eigenvalues are √3 and −√3.
Case (i): λ = √3

    ( 2  −1 ) (u1)      (u1)
    ( 1  −2 ) (u2) = √3 (u2)

The first row gives 2u1 − u2 = √3 u1. We can choose u1 as 1; then,

    u2 = 2u1 − √3 u1 = 2 − √3

Hence, the eigenstate corresponding to √3 is

    (   1    )
    ( 2 − √3 )

Case (ii): λ = −√3

    ( 2  −1 ) (u1)       (u1)
    ( 1  −2 ) (u2) = −√3 (u2)

    2u1 − u2 = −√3 u1

We can choose u1 as 1; then,

    u2 = 2u1 + √3 u1 = 2 + √3

Hence, the eigenstate corresponding to −√3 is

    (   1    )
    ( 2 + √3 )

What this means is that the quantity represented by the operator ( 2 −1 ; 1 −2 ) can exist in
only two eigenstates, u = (1, 2 − √3)ᵀ and u = (1, 2 + √3)ᵀ. The values of the property
measured in the two possible states are the respective eigenvalues √3 and −√3.
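The worked example can be cross-checked in a few lines, a sketch assuming NumPy is available:

```python
import numpy as np

# Cross-check of the worked example: eigenvalues of [[2, -1], [1, -2]] are ±√3.
A = np.array([[2.0, -1.0],
              [1.0, -2.0]])
evals, evecs = np.linalg.eig(A)
print(np.sort(evals))  # ≈ [-1.7320508, 1.7320508]

# Verify the hand-computed eigenstate (1, 2 − √3) for λ = √3: A u = √3 u.
u = np.array([1.0, 2.0 - np.sqrt(3.0)])
print(np.allclose(A @ u, np.sqrt(3.0) * u))  # True
```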
Let

    Ω = ( 0  0  1 )
        ( 0  0  0 )
        ( 1  0  0 )

Find the eigenvalues of Ω.

The characteristic equation is formed by

    | −λ   0   1 |
    |  0  −λ   0 | = 0
    |  1   0  −λ |

    −λ³ + λ = 0

The eigenvalues are 0, 1 and −1.

For λ = 0, the eigenvector is given by

    ( 0  0  1 ) (a1)     (a1)   (0)
    ( 0  0  0 ) (a2) = 0 (a2) = (0)
    ( 1  0  0 ) (a3)     (a3)   (0)

so a1 = 0, a3 = 0. We can choose a2 = 1 without loss of generality, or

    (a1)   (0)
    (a2) = (1)
    (a3)   (0)

λ = 1: (Ω − I)a = 0, that is,

    ( −1   0   1 ) (a1)
    (  0  −1   0 ) (a2) = 0
    (  1   0  −1 ) (a3)

or

    (a1)   (1)
    (a2) = (0)
    (a3)   (1)

λ = −1: (Ω + I)a = 0, that is,

    ( 1  0  1 ) (a1)
    ( 0  1  0 ) (a2) = 0
    ( 1  0  1 ) (a3)

or

    (a1)   (−1)
    (a2) = ( 0)
    (a3)   ( 1)
14.1.2 Function Space
In function space, let the possible eigenstates be denoted by φ or | φ >, the latter of which we
refer to as the ket. In standard notation, eigenstates in quantum mechanics are denoted by kets.
The inner product (v1, v2) can be written as < v1, v2 > or < v1 | v2 > in bra-ket notation. Just as
we learnt in Study Session 2, the bra < v | is a row matrix, while the ket | v > is a column matrix.

Given that the Hamiltonian for the Harmonic Oscillator acting on the ground state gives
Ĥψ0 = (1/2)ħω0 ψ0, show that the expectation value of the energy of the harmonic oscillator
in the ground state is (1/2)ħω0. Assume that the eigenvectors are normalised.

The energy expectation value for the ground state of the simple harmonic oscillator:

    < E > = ∫_{−∞}^{∞} ψ0* Ĥ ψ0 dx = ∫_{−∞}^{∞} ψ0* (1/2)ħω0 ψ0 dx = (1/2)ħω0 ∫_{−∞}^{∞} ψ0* ψ0 dx = (1/2)ħω0

since ψ0 is normalised.
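The whole calculation reduces to the normalisation integral. A numerical sketch of that step, using the standard Gaussian ground state as an assumed explicit form (it is not given in the text):

```python
import numpy as np

# Sketch of <E> = (1/2)ħω0 ∫|ψ0|²dx. The explicit ground state
# ψ0(x) = π^{-1/4} e^{-x²/2} (units m = ω0 = ħ = 1) is an assumption
# brought in for illustration; it is normalised, so <E> = 1/2.
x = np.linspace(-10.0, 10.0, 100001)
psi0 = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)
h = x[1] - x[0]
norm = np.sum(0.5 * (psi0[:-1] ** 2 + psi0[1:] ** 2)) * h  # trapezoidal rule
E = 0.5 * norm  # (1/2)ħω0 with ħω0 = 1
print(norm, E)  # ≈ 1.0, 0.5
```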
It is known that the eigenvalues of the square of the orbital angular momentum and of the
z-component of the angular momentum of the electron in the hydrogen atom are,
respectively, l(l + 1)ħ² and mħ in the states represented by the Spherical Harmonics
Y_lm(θ, φ). Write the eigenvalue equation for each of the physical observables.
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. McQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
SAQ 14.2 (tests Learning Outcome 14.2)

Let O be a Hermitian operator. If, in addition, O³ = 2I, prove that O = 2^{1/3} I.
Solutions to SAQs
SAQ 14.1
(i) The eigenvalues, or spectrum, of an operator are the possible values a system represented by
the operator can assume.
(ii) The eigenvectors of an operator are the possible states (eigenstates) a system represented
by the operator can assume.
(iv) The characteristic or indicial equation is that which results from the determinant,

    |A − λI| = 0

where all symbols have their usual meanings and I is the appropriate unit matrix.
SAQ 14.2

Let Oψ = λψ. Then, since O³ = 2I,

    O³ψ = O²(Oψ) = O²(λψ) = λO(Oψ) = λ²Oψ = λ³ψ = 2ψ

so that λ³ = 2. The roots are

    λ = 2^{1/3} [cos(2πk/3) + i sin(2πk/3)]

where k = 0, 1, 2:

    k = 0:  2^{1/3} [cos 0 + i sin 0] = 2^{1/3}

    k = 1:  2^{1/3} [cos(2π/3) + i sin(2π/3)] = 2^{1/3} (−1/2 + i√3/2)

    k = 2:  2^{1/3} [cos(4π/3) + i sin(4π/3)] = 2^{1/3} (−1/2 − i√3/2)

A Hermitian operator can only have real eigenvalues. Thus, the only permissible eigenvalue of O
is 2^{1/3}. Thus, for every eigenvector ψ,

    Oψ = 2^{1/3} ψ

Hence, O = 2^{1/3} I.
SAQ 14.3

The matrix in question is

    ( 0  1  0 )
    ( 1  0  1 )
    ( 0  1  0 )

(any ħ-dependent prefactor of L_x aside), and the eigenvalue equation L_x ψ = λψ leads to the
characteristic equation

    | 0 − λ    1      0   |
    |   1    0 − λ    1   | = 0
    |   0      1    0 − λ |

or

    −λ(λ² − 1) − 1(−λ) = −λ³ + λ + λ = 0

    λ³ − 2λ = 0

    λ(λ² − 2) = 0

    λ = −√2, 0 and √2
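A quick check of the characteristic-equation result:

```python
import numpy as np

# Check of SAQ 14.3: eigenvalues of the tridiagonal matrix in the
# characteristic determinant (the ħ-dependent prefactor is set aside).
Lx = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])
evals = np.sort(np.linalg.eigvalsh(Lx))  # eigvalsh: symmetric/Hermitian solver
print(evals)  # ≈ [-1.41421356, 0.0, 1.41421356], i.e. -√2, 0, √2
```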
SAQ 14.4

The indicial equation is given by,

    | 0 − λ   −iħ/2 |
    |  iħ/2   0 − λ | = 0

    λ² − (ħ/2)² = 0

Hence,

    λ = ±ħ/2

Therefore, the eigenvalues are +ħ/2 and −ħ/2.

For λ = ħ/2,

    (  0    −iħ/2 ) (u1)         (u1)
    ( iħ/2    0   ) (u2) = (ħ/2) (u2)

    −iħu2/2 = ħu1/2

or

    −iu2 = u1,  that is, u2 = iu1

Hence,

    e_y = (1, i)ᵀ = (1, 0)ᵀ + i(0, 1)ᵀ

For λ = −ħ/2,

    (  0    −iħ/2 ) (u1)          (u1)
    ( iħ/2    0   ) (u2) = −(ħ/2) (u2)

    −iħu2/2 = −ħu1/2

or

    iu2 = u1,  that is, u2 = −iu1

Hence,

    e_y = (1, −i)ᵀ = (1, 0)ᵀ − i(0, 1)ᵀ

The eigenvectors of S_y are therefore:

    (1, i)ᵀ = (1, 0)ᵀ + i(0, 1)ᵀ   and   (1, −i)ᵀ = (1, 0)ᵀ − i(0, 1)ᵀ
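The same result follows numerically (with ħ = 1 for the check):

```python
import numpy as np

# Check of SAQ 14.4: S_y = (ħ/2) [[0, -i], [i, 0]] with ħ = 1.
Sy = 0.5 * np.array([[0.0, -1j],
                     [1j, 0.0]])
evals = np.sort(np.linalg.eigvalsh(Sy))
print(evals)  # ≈ [-0.5, 0.5], i.e. ±ħ/2

# Verify the hand-computed eigenvector (1, i) for λ = +ħ/2.
u = np.array([1.0, 1j])
print(np.allclose(Sy @ u, 0.5 * u))  # True
```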
SAQ 14.5

a. The characteristic equation is formed by

    | −λ   0   1 |
    |  0  −λ   0 | = 0
    |  1   0  −λ |

    −λ³ + λ = 0

The eigenvalues are 0, 1 and −1.
For λ = 0, the eigenvector is given by

    ( 0  0  1 ) (a1)   (0)
    ( 0  0  0 ) (a2) = (0)
    ( 1  0  0 ) (a3)   (0)

or (a1, a2, a3)ᵀ = (0, 1, 0)ᵀ. The normalised eigenfunction is ψ1 = (0, 1, 0)ᵀ.

λ = 1:

    ( −1   0   1 ) (a1)   (0)
    (  0  −1   0 ) (a2) = (0)
    (  1   0  −1 ) (a3)   (0)

giving (a1, a2, a3)ᵀ = (1, 0, 1)ᵀ. The normalised wavefunction is ψ2 = (1/√2)(1, 0, 1)ᵀ.

λ = −1:

    ( 1  0  1 ) (a1)   (0)
    ( 0  1  0 ) (a2) = (0)
    ( 1  0  1 ) (a3)   (0)

giving (a1, a2, a3)ᵀ = (−1, 0, 1)ᵀ. The normalised wavefunction is ψ3 = (1/√2)(−1, 0, 1)ᵀ.
b. The expectation value of A in each of the states:

    < ψ1 | A | ψ1 > = [0 1 0] A (0, 1, 0)ᵀ = [0 1 0] (0, 0, 0)ᵀ = 0

    < ψ2 | A | ψ2 > = (1/2) [1 0 1] A (1, 0, 1)ᵀ = (1/2) [1 0 1] (1, 0, 1)ᵀ = (1/2) × 2 = 1

    < ψ3 | A | ψ3 > = (1/2) [−1 0 1] A (−1, 0, 1)ᵀ = (1/2) [−1 0 1] (1, 0, −1)ᵀ = (1/2) × (−2) = −1

Comment: The expectation values are the eigenvalues we got earlier; the expectation value of an
operator in one of its normalised eigenstates is the corresponding eigenvalue. This is another
way of getting the eigenvalues of an operator.
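The three expectation values can be checked directly, a sketch assuming NumPy:

```python
import numpy as np

# Check of SAQ 14.5: <ψ|A|ψ> in each normalised eigenstate equals the
# corresponding eigenvalue (0, 1 and -1).
A = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
psi1 = np.array([0.0, 1.0, 0.0])
psi2 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
psi3 = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)
for psi in (psi1, psi2, psi3):
    print(psi @ A @ psi)  # 0.0, then ≈1.0, then ≈-1.0
```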
Study Session 15 Diagonalisation of a Matrix
Introduction
We have learnt how to get the eigenvalues and the eigenvectors of a given square matrix. In this
study session, we shall apply the knowledge gained to diagonalise a given square matrix. We
shall see the condition that would need to be satisfied if a square matrix is to be diagonalisable.
We shall also dwell more on the properties of some matrices and how they can furnish us with a
rich family of orthonormal eigenvectors.
Learning Outcomes of Study Session 15
By the end of this study session you should be able to do the following:
15.1 Understand and correctly use the keywords in bold print (SAQ 15.1).
15.2 Determine if a given matrix can be diagonalised or not (SAQ 15.4).
15.3 Diagonalise a given matrix (SAQ 15.2, 15.3, 15.5).
15.4 Calculate the modal matrix for a given diagonalisable matrix (SAQ 15.2, 15.3).
15.5 Determine the kind of matrix that can diagonalise a given type of matrix (SAQ 15.5).
Or,

    ( a11 − λ    a12      . . .    a1n     ) ( x1 )   ( 0 )
    (  a21     a22 − λ    . . .    a2n     ) ( x2 )   ( 0 )
    (   .          .      . . .     .      ) (  . ) = ( . )
    (  an1       an2      . . .  ann − λ   ) ( xn )   ( 0 )

This equation has a nontrivial solution if and only if |A − λI| = 0, giving the characteristic
equation from which the n eigenvalues {λi}, i = 1, . . ., n, could be found. It follows that all
the λi could be distinct, or some could be repeated.
Suppose we have obtained the eigenvalues of a given matrix and the corresponding eigenvectors
x1, x2, . . ., xn. Then we can write,

    M = ( x11  x21  . . .  xn1 )
        ( x12  x22  . . .  xn2 )
        (  .    .   . . .   .  )
        ( x1n  x2n  . . .  xnn )

where we have arranged the eigenvectors vertically, the j-th column being the eigenvector xj.
M is the modal matrix. Then,

    AM = A(x1  x2  . . .  xn) = (λ1x1  λ2x2  . . .  λnxn)

       = ( x11  x21  . . .  xn1 ) ( λ1   0   . . .   0 )
         ( x12  x22  . . .  xn2 ) (  0   λ2  . . .   0 )
         (  .    .   . . .   .  ) (  .    .  . . .   . )
         ( x1n  x2n  . . .  xnn ) (  0    0  . . .  λn )
Hence, multiplying AM = MD on the left by M⁻¹,

    D = M⁻¹AM = ( λ1   0   . . .   0 )
                (  0   λ2  . . .   0 )
                (  .    .  . . .   . )
                (  0    0  . . .  λn )

This is a similarity transformation, and the matrices A and D are said to be similar matrices.
M thus diagonalises A. M is called the modal matrix, and the diagonal matrix D is the spectral
matrix, which has a single eigenvalue in each column (or row). The diagonal matrix is said to be
the canonical or diagonal form. Notice that the spectral matrix has the same eigenvalues as the
matrix A, and the eigenvalues appear in the order in which the corresponding eigenvectors were
arranged in M. We can diagonalise a matrix A if we can get a modal matrix, which of course must
be invertible. In other words, M must not be a singular matrix. How do we ensure that it is not
singular? If it is n-dimensional, then its columns, and hence its rows, must be linearly
independent. One way we can be sure of this is if all the eigenvalues are distinct. We could still
have a situation where the eigenvalues are repeated and we can still get n linearly independent
eigenvectors. Such a case we shall visit later in the study session.
The modal matrix is not the only avenue for finding the spectral matrix. Indeed, if we normalise
each eigenvector, we would still get the spectral matrix. But now, the diagonalising matrix could
have some other special properties.
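The whole procedure, build M from the eigenvectors, invert it, and form M⁻¹AM, can be sketched in a few lines (the 2×2 matrix from Study Session 14 is reused as the example):

```python
import numpy as np

# Sketch of diagonalisation by the modal matrix: the columns of M are the
# eigenvectors of A; then inv(M) @ A @ M is the spectral (diagonal) matrix.
A = np.array([[2.0, -1.0],
              [1.0, -2.0]])
evals, M = np.linalg.eig(A)   # eig already arranges eigenvectors as columns
D = np.linalg.inv(M) @ A @ M
print(np.round(D, 10))        # diagonal, with the eigenvalues on the diagonal
print(np.allclose(D, np.diag(evals)))  # True
```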
The eigenvectors of the matrix

    (  0  −1  −1 )
    ( −1   0   1 )
    (  1   1   0 )

are:

    (  0 )    (  1 )        (  1 )
    ( −1 ) ,  ( −1 )  and   ( −1 )
    (  1 )    (  0 )        (  1 )

respectively for −1, 1, 0. Normalising each eigenvector and arranging them as the columns of M,

    M⁻¹AM = ( −√2  −√2   0  ) (  0  −1  −1 ) (   0     1/√2   1/√3 )   ( −1  0  0 )
            (  0   −√2  −√2 ) ( −1   0   1 ) ( −1/√2  −1/√2  −1/√3 ) = (  0  1  0 )
            (  √3   √3   √3 ) (  1   1   0 ) (  1/√2    0     1/√3 )   (  0  0  0 )
In quantum mechanics, every (real) observable has associated with it a Hermitian operator
because the eigenvalues must be real, being the real values the observable can assume under
measurement. The following theorem is therefore in order.
Theorem
To every Hermitian matrix A, there exists a unitary matrix U, built out of the eigenvectors of
A, such that U⁺AU is diagonal, with the eigenvalues on the diagonal.
Proof
Let

    A | u1 > = b1 | u1 >
    A | u2 > = b2 | u2 >
    .
    .
    A | un > = bn | un >

where u1 = (u11, u12, . . ., u1n)ᵀ, etc. Let

    U = ( u11  u21  . . .  un1 )
        ( u12  u22  . . .  un2 )
        (  .    .   . . .   .  )
        ( u1n  u2n  . . .  unn )

as in the earlier case, where we have represented each orthonormalised eigenvector by a column.
Diagonalise the symmetric matrix

    Ω = ( 0  0  1 )
        ( 0  0  0 )
        ( 1  0  0 )

using the appropriate orthogonal matrix.

The characteristic equation is formed by

    | −λ   0   1 |
    |  0  −λ   0 | = 0
    |  1   0  −λ |

    −λ³ + λ = 0

The eigenvalues are 0, 1 and −1.

For λ = 0, the eigenvector is given by

    ( 0  0  1 ) (a1)     (a1)   (0)
    ( 0  0  0 ) (a2) = 0 (a2) = (0)
    ( 1  0  0 ) (a3)     (a3)   (0)

or a = (0, 1, 0)ᵀ.

For λ = 1:

    ( −1   0   1 ) (a1)
    (  0  −1   0 ) (a2) = 0
    (  1   0  −1 ) (a3)

or a = (1/√2)(1, 0, 1)ᵀ (normalised).

For λ = −1:

    ( 1  0  1 ) (a1)
    ( 0  1  0 ) (a2) = 0
    ( 1  0  1 ) (a3)

or a = (1/√2)(−1, 0, 1)ᵀ (normalised).

Arranging the normalised eigenvectors as the columns of the orthogonal matrix Q,

    D = QᵀΩQ = (   0     1    0   ) ( 0  0  1 ) ( 0  1/√2  −1/√2 )
               (  1/√2   0   1/√2 ) ( 0  0  0 ) ( 1   0      0   )
               ( −1/√2   0   1/√2 ) ( 1  0  0 ) ( 0  1/√2   1/√2 )

             = (   0     1    0   ) ( 0  1/√2   1/√2 )
               (  1/√2   0   1/√2 ) ( 0   0      0   )
               ( −1/√2   0   1/√2 ) ( 0  1/√2  −1/√2 )

             = ( 0  0   0 )
               ( 0  1   0 )
               ( 0  0  −1 )
Note that a matrix can be diagonalised only if its eigenvectors form a linearly independent set,
so that the modal matrix built from them is invertible, that is, non-singular. We learnt earlier
that the columns (as well as the rows) of an orthogonal matrix are linearly independent. In
addition, a unitary matrix is the complex analogue of an orthogonal matrix. Both are normal
matrices, and every orthogonal matrix and every unitary matrix is diagonalisable. Conversely, we
expect that orthogonal and unitary matrices would diagonalise some matrices, being composed of
linearly independent (normalised) vectors arranged in order, and hence non-singular.
Real symmetric matrices can be diagonalised by orthogonal matrices. As such, real symmetric
matrices with n distinct eigenvalues are orthogonally diagonalisable.
Show that the matrix

    B = ( 0  0  −i )
        ( 0  0   0 )
        ( i  0   0 )

is Hermitian. Hence, find the equivalent diagonal matrix.

    B⁺ = ( 0  0  −i )
         ( 0  0   0 ) = B
         ( i  0   0 )

Hence, B is Hermitian.

The eigenvalues are 0, −1 and 1, and the corresponding eigenvectors are

    ( 0 )    (  i )        ( −i )
    ( 1 ) ,  (  0 )  and   (  0 )
    ( 0 )    (  1 )        (  1 )

Arranging the eigenvectors in order in a matrix,

    P = ( 0  i  −i )          P⁻¹ = (1/2) (  0  2  0 )
        ( 1  0   0 ) ,                    ( −i  0  1 )
        ( 0  1   1 )                      (  i  0  1 )

    P⁻¹BP = (1/2) (  0  2  0 ) ( 0  0  −i ) ( 0  i  −i )   ( 0   0  0 )
                  ( −i  0  1 ) ( 0  0   0 ) ( 1  0   0 ) = ( 0  −1  0 )
                  (  i  0  1 ) ( i  0   0 ) ( 0  1   1 )   ( 0   0  1 )

We have successfully diagonalised B. But notice that the matrix P is not a unitary matrix:

    PP⁺ = ( 0  i  −i ) (  0  1  0 )   ( 2  0  0 )
          ( 1  0   0 ) ( −i  0  1 ) = ( 0  1  0 ) ≠ I₃
          ( 0  1   1 ) (  i  0  1 )   ( 0  0  2 )

Normalising each column, then,

    U = ( 0   i/√2  −i/√2 )
        ( 1    0      0   )
        ( 0  1/√2   1/√2  )
    U⁺ = (   0     1    0   )
         ( −i/√2   0   1/√2 )
         (  i/√2   0   1/√2 )

    UU⁺ = ( 0   i/√2  −i/√2 ) (   0     1    0   )   ( 1  0  0 )
          ( 1    0      0   ) ( −i/√2   0   1/√2 ) = ( 0  1  0 ) = I₃
          ( 0  1/√2   1/√2  ) (  i/√2   0   1/√2 )   ( 0  0  1 )

showing that U is a unitary matrix. The columns are orthonormal, and indeed, U⁺ = U⁻¹.

It is much easier calculating the Hermitian adjoint of a matrix than finding the inverse of the
matrix. The former entails taking the transpose and the complex conjugate of every element;
much simpler than the rigorous work required to find the inverse of a matrix, especially if the
dimension is more than 2.

We could also diagonalise B with the matrix U instead of P. We shall now make use of the
matrix U to show that a unitary matrix diagonalises a Hermitian matrix:

    U⁺BU = (   0     1    0   ) ( 0  0  −i ) ( 0   i/√2  −i/√2 )   ( 0   0  0 )
           ( −i/√2   0   1/√2 ) ( 0  0   0 ) ( 1    0      0   ) = ( 0  −1  0 )
           (  i/√2   0   1/√2 ) ( i  0   0 ) ( 0  1/√2   1/√2  )   ( 0   0  1 )
More generally, a matrix can be diagonalised by a unitary matrix if and only if the
matrix is normal.
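The unitary diagonalisation of B can be verified numerically, a sketch assuming NumPy:

```python
import numpy as np

# Check of the example: the unitary U built from the normalised eigenvectors
# of the Hermitian matrix B diagonalises it.
B = np.array([[0.0, 0.0, -1j],
              [0.0, 0.0, 0.0],
              [1j, 0.0, 0.0]])
r = 1.0 / np.sqrt(2.0)
U = np.array([[0.0, 1j * r, -1j * r],
              [1.0, 0.0, 0.0],
              [0.0, r, r]])
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
D = U.conj().T @ B @ U
print(np.round(D.real, 10))  # diag(0, -1, 1)
```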
References
1. Hill, K. (1997). Introductory Linear Algebra with Applications, Prentice Hall.
2. Butkov, E. (1968). Mathematical Physics, Addison-Wesley.
3. McQuarrie, D. A. (2003). Mathematical Methods for Scientists & Engineers, University
Science Books.
SAQ 15.2 (tests Learning Outcome 15.3)

Diagonalise the matrix

    A = (  4   0   0 )
        ( −1  −6  −2 )
        (  5   0   1 )
Solutions to SAQs
SAQ 15.1
(i) A modal matrix is a matrix composed of the eigenvectors of a square matrix arranged in
order, each eigenvector written as a column vector.
(ii) The spectral matrix is the diagonal equivalent of a given diagonalisable matrix. The
eigenvalues are the only non-zero elements of the matrix and are arranged along the main
diagonal.
(iii) A unitary matrix is a matrix that satisfies the condition U⁺ = U⁻¹ or, equivalently,
U⁺U = UU⁺ = I; that is, the Hermitian conjugate of the matrix is the inverse of the
matrix.
(iv) A normal matrix is a matrix that commutes with its Hermitian conjugate, that is,
NN⁺ = N⁺N.
(v) A Hermitian matrix is a complex matrix that satisfies the condition H⁺ = H, that is,
it is its own Hermitian conjugate. We say such a matrix is self-adjoint.
SAQ 15.2

We find the eigenvalues of the matrix A. The characteristic equation is given by,

    | 4 − λ     0      0    |
    |  −1    −6 − λ   −2    | = 0
    |   5       0    1 − λ  |

Hence,

    (4 − λ)(−6 − λ)(1 − λ) = 0

    λ = −6, 1 and 4.

For λ = 4, the rows −u1 − 10u2 − 2u3 = 0 and 5u1 − 3u3 = 0 give the eigenvector
(−30/13, 1, −50/13)ᵀ. For λ = 1, u1 = 0 and −7u2 − 2u3 = 0 give (0, 1, −7/2)ᵀ. For λ = −6,
u1 = 0 and u3 = 0 give (0, 1, 0)ᵀ. Hence the modal matrix and its inverse are

    M = ( −30/13    0    0 )        M⁻¹ = ( −13/30   0     0   )
        (    1      1    1 ) ,            (  10/21   0   −2/7  )
        ( −50/13  −7/2   0 )              ( −3/70    1    2/7  )

    M⁻¹AM = ( −13/30   0     0  ) (  4   0   0 ) ( −30/13    0    0 )   ( 4  0   0 )
            (  10/21   0   −2/7 ) ( −1  −6  −2 ) (    1      1    1 ) = ( 0  1   0 )
            ( −3/70    1    2/7 ) (  5   0   1 ) ( −50/13  −7/2   0 )   ( 0  0  −6 )
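The result can be verified numerically, a sketch assuming NumPy:

```python
import numpy as np

# Check of SAQ 15.2: M built from the eigenvectors of A, then inv(M) @ A @ M.
A = np.array([[4.0, 0.0, 0.0],
              [-1.0, -6.0, -2.0],
              [5.0, 0.0, 1.0]])
M = np.array([[-30.0 / 13.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [-50.0 / 13.0, -3.5, 0.0]])
D = np.linalg.inv(M) @ A @ M
print(np.round(D, 10))  # diag(4, 1, -6)
```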
SAQ 15.3

The eigenvalues of the matrix

    A = ( 0  1  1 )
        ( 1  0  1 )
        ( 1  1  0 )

are given by

    | 0 − λ    1      1   |
    |   1    0 − λ    1   | = 0
    |   1      1    0 − λ |

The indicial or characteristic equation is therefore,

    λ³ − 3λ − 2 = (λ − 2)(λ + 1)² = 0

The eigenvalues are 2, −1, −1.

For λ = 2, the eigenvector is (1, 1, 1)ᵀ. For the repeated eigenvalue λ = −1, the single
equation u1 + u2 + u3 = 0 is satisfied by the set {u1, u2, u3} = {−1, 1, 0} and the set
{u1, u2, u3} = {−1, 0, 1}, which are linearly independent.
All three vectors are linearly independent.
Arranging the eigenvectors such that the corresponding eigenvalues are in decreasing order (the
order of the corresponding eigenvectors for repeated eigenvalues is immaterial), the modal
matrix M is

    M = ( 1  −1  −1 )
        ( 1   1   0 )
        ( 1   0   1 )

The inverse is

    M⁻¹ = (1/3) (  1   1   1 )
                ( −1   2  −1 )
                ( −1  −1   2 )

    M⁻¹AM = (1/3) (  1   1   1 ) ( 0  1  1 ) ( 1  −1  −1 )   ( 2   0   0 )
                  ( −1   2  −1 ) ( 1  0  1 ) ( 1   1   0 ) = ( 0  −1   0 )
                  ( −1  −1   2 ) ( 1  1  0 ) ( 1   0   1 )   ( 0   0  −1 )

M is the modal matrix and diag(2, −1, −1) is the spectral matrix.
SAQ 15.4

    H = (  1   i  −i )
        ( −i   0   0 ) = H⁺
        (  i   0   0 )

Hence, H is Hermitian. Note that the last two columns are not linearly independent:
α(i, 0, 0) + β(−i, 0, 0) = 0 whenever α = β. This only means that H is singular, so that one
of its eigenvalues is zero; it does not prevent diagonalisation. Being Hermitian, H possesses
a complete set of orthogonal eigenvectors and is therefore diagonalisable (by a unitary
matrix).
SAQ 15.5

    H = ( 0  0  −i )
        ( 0  1   0 )
        ( i  0   0 )

The eigenvalues are −1 and 1, the latter twice. The normalised eigenvectors, for −1, 1 and 1
respectively, are (1/√2)(1, 0, −i)ᵀ, (0, 1, 0)ᵀ and (1/√2)(−i, 0, 1)ᵀ; arranging them as
columns gives the unitary matrix

    U = (  1/√2   0  −i/√2 )
        (   0     1    0   )
        ( −i/√2   0   1/√2 )

    U⁺ = ( 1/√2   0   i/√2 )
         (  0     1    0   )
         ( i/√2   0   1/√2 )

    U⁺HU = ( 1/√2   0   i/√2 ) ( 0  0  −i ) (  1/√2   0  −i/√2 )   ( −1  0  0 )
           (  0     1    0   ) ( 0  1   0 ) (   0     1    0   ) = (  0  1  0 )
           ( i/√2   0   1/√2 ) ( i  0   0 ) ( −i/√2   0   1/√2 )   (  0  0  1 )
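As before, the unitary diagonalisation can be verified numerically, a sketch assuming NumPy:

```python
import numpy as np

# Check of SAQ 15.5: the unitary U assembled from the eigenvectors of the
# Hermitian matrix H gives U⁺HU = diag(-1, 1, 1).
H = np.array([[0.0, 0.0, -1j],
              [0.0, 1.0, 0.0],
              [1j, 0.0, 0.0]])
r = 1.0 / np.sqrt(2.0)
U = np.array([[r, 0.0, -1j * r],
              [0.0, 1.0, 0.0],
              [-1j * r, 0.0, r]])
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
D = U.conj().T @ H @ U
print(np.round(D.real, 10))  # diag(-1, 1, 1)
```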